Dataset columns (each record below lists these fields in this order): QuestionId (int64), UserId (int64), QuestionTitle (string), QuestionBody (string, HTML), Tags (string), CreationDate (string date, 2022-12-10 to 2025-11-01), AnswerCount (int64), UserExpertiseLevel (int64), UserDisplayName (string, nullable)
77,617,946
1,273,751
Solve conda-libmamba-solver (libarchive.so.19) error after updating conda to 23.11.0 without reinstalling conda?
<p>After conda update, I am getting the error below after running <code>$ conda</code>, even after setting the solver to classic (using <code>conda config --set solver classic</code>):</p> <pre><code>Error while loading conda entry point: conda-libmamba-solver (libarchive.so.19: cannot open shared object file: No such file or directory) </code></pre> <p>I have conda 23.11.0.</p> <p>On Github there is an issue <a href="https://github.com/conda/conda-libmamba-solver/issues/283" rel="noreferrer">https://github.com/conda/conda-libmamba-solver/issues/283</a> in which they mention that if <code>libarchive</code> and <code>libmamba</code> come from the same channel, it should be solved.</p> <p><strong>But when I reinstall <code>libarchive</code> using channel <code>main</code>, it doesn't update.</strong></p> <p>Does anyone know how to solve this? <strong>I do not want to reinstall Conda from scratch</strong>.</p> <p>Similar links online that do not solve the problem:</p> <ul> <li><a href="https://stackoverflow.com/questions/75703897/after-installing-libmamba-solver-i-get-warning-libmamba-could-not-parse-state-f">After installing libmamba solver i get `warning libmamba Could not parse state file`</a></li> <li><a href="https://stackoverflow.com/questions/57254007/how-to-fix-entry-point-not-found-while-installing-libraries-in-conda-environment">How to Fix Entry Point Not Found while installing libraries in conda environment</a></li> <li><a href="https://www.reddit.com/r/learnpython/comments/160kjz9/how_do_i_get_anaconda_to_work_the_way_i_want_it_to/" rel="noreferrer">https://www.reddit.com/r/learnpython/comments/160kjz9/how_do_i_get_anaconda_to_work_the_way_i_want_it_to/</a></li> </ul>
<python><conda><libarchive>
2023-12-07 06:01:10
18
2,645
Homero Esmeraldo
77,617,920
2,802,576
ModuleNotFoundError: No module named - while importing a module from another folder
<p>I am getting the famous - <code>ModuleNotFoundError</code> when I try to import a module from another folder. Here is the folder structure -</p> <pre><code>packages - packageA - src - packagename - app.py &lt;-- imports the readers.reader_module - __init__.py - readers - __init__.py - reader_module.py &lt;-- contains a read() </code></pre> <p>The source under <code>packagename</code> will be packaged and hence I have added <code>__init__.py</code> under it. Similarly, under <code>readers</code> folder <code>__init__.py</code> is added.</p> <p>When I import <code>reader_module</code> in <code>app.py</code> I get - <code>ModuleNotFoundError: No module named 'packagename'</code>.</p> <p>I tried below import statements inside <code>app.py</code> but no luck -</p> <ul> <li><code>from packagename.readers.reader_module import read</code></li> <li><code>from .readers.reader_module import read</code></li> <li><code>from readers.reader_module import read</code></li> </ul>
<python><module><directory><path><package>
2023-12-07 05:54:10
1
801
arpymastro
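Editor's note — a minimal sketch of the usual cause: `packagename` is only importable when its parent directory (`src`) is on `sys.path`; running `app.py` directly puts `app.py`'s own folder there instead, so `from packagename.readers.reader_module import read` fails. The sketch below rebuilds the asker's layout in a temp directory and shows the absolute import working once `src` is the import root (equivalently, run `python -m packagename.app` from inside `src`):

```python
# Rebuild the layout from the question and make "src" the import root.
import importlib
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, "src", "packagename")
os.makedirs(os.path.join(pkg, "readers"))
open(os.path.join(pkg, "__init__.py"), "w").close()
open(os.path.join(pkg, "readers", "__init__.py"), "w").close()
with open(os.path.join(pkg, "readers", "reader_module.py"), "w") as f:
    f.write("def read():\n    return 'ok'\n")

sys.path.insert(0, os.path.join(root, "src"))  # what "run from src" does for you
reader_module = importlib.import_module("packagename.readers.reader_module")
print(reader_module.read())
```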
77,617,728
2,016,850
How to select p tags without class attributes in Selectolax
<p>I am trying to select some p tags without class attributes using Selectolax without success. This is my code:</p> <pre><code>from selectolax.parser import HTMLParser tree = HTMLParser( ''' &lt;p class=&quot;card_street&quot;&gt; &lt;span class=&quot;card_street&quot;&gt;123 My Rd. &lt;/span&gt; &lt;span class=&quot;card_street&quot;&gt;Suite 100&lt;/span&gt; &lt;span&gt; Anywhere&lt;/span&gt; &lt;span&gt;, TX&lt;/span&gt; &lt;span&gt; 12345&lt;/span&gt; &lt;/p&gt; ''' ) tree.css('p[class=&quot;card_street&quot;] span:not([class]):nth-child(1)') </code></pre> <p>which gives:</p> <pre><code>[] </code></pre> <p>However, <code>tree.css('p[class=&quot;card_street&quot;] span:not([class])')</code> is selecting the three span tags without a class attribute:</p> <pre><code>for node in tree.css('p[class=&quot;card_street&quot;] &gt; span:not([class])'): print(node.text()) </code></pre> <p>gives:</p> <pre><code> Anywhere , TX 12345 </code></pre> <p>Can someone tell me the correct way to select the 1st, 2nd, and 3rd span tags without class attributes?</p>
<python><css-selectors>
2023-12-07 04:51:06
1
1,051
glenn15
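Editor's note — why the original selector returns `[]`: `:nth-child(n)` counts *all* element siblings, classed or not, so the class-less spans are children 3–5 of the `<p>` and none of them can ever match `:nth-child(1)`. A common workaround is to select every class-less span and index in Python. The sketch below demonstrates that filtering with the stdlib XML parser (the snippet happens to be well-formed); the same list indexing works on the nodes `tree.css('p.card_street span:not([class])')` returns in selectolax:

```python
# Filter direct-child spans that carry no class attribute, then index.
import xml.etree.ElementTree as ET

html = """<p class="card_street">
  <span class="card_street">123 My Rd. </span>
  <span class="card_street">Suite 100</span>
  <span> Anywhere</span>
  <span>, TX</span>
  <span> 12345</span>
</p>"""

p = ET.fromstring(html)
unclassed = [s for s in p.findall("span") if "class" not in s.attrib]
first = unclassed[0].text  # the "1st" class-less span
print([s.text for s in unclassed])
```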
77,617,640
9,837,010
Request to AWS AppSync GraphQl endpoint returns 401 Client Error: Unauthorized for url
<p>I'm trying to use PyCognito (AWS Cognito) to authenticate a user and an attached identity pool authenticated role to give permissions to the user to query an AppSync GraphQL endpoint but I'm getting a 401 Client Error: Unauthorized for url .</p> <p>Minimal Reproducible Example:</p> <pre><code>from pycognito import Cognito import boto3 from requests_aws4auth import AWS4Auth import requests user = Cognito(USER_POOL_ID, CLIENT_ID, username=USERNAME) user.authenticate(password=PASSWORD) user.check_token() user.verify_tokens() boto_session = boto3.Session() client = boto_session.client('cognito-identity', region_name=REGION) identity_id = client.get_id( IdentityPoolId=f'{REGION}:{IDENTITY_POOL_ID}', Logins={ f'cognito-idp.{REGION}.amazonaws.com/{USER_POOL_ID}': user.id_token } )['IdentityId'] credentials = client.get_credentials_for_identity( IdentityId=identity_id, Logins={ f'cognito-idp.{REGION}.amazonaws.com/{USER_POOL_ID}': user.id_token } )['Credentials'] session = requests.Session() session.auth = AWS4Auth( credentials[&quot;AccessKeyId&quot;], credentials[&quot;SecretKey&quot;], REGION, 'appsync', session_token=credentials[&quot;SessionToken&quot;] ) response = session.post(GRAPHQL_ENDPOINT, json={'query': QUERY, 'variables': {'id': user.id_claims[&quot;sub&quot;]}}) response.raise_for_status() </code></pre> <p>raises the following exception:</p> <pre><code>401 Client Error: Unauthorized for url: ENDPOINT </code></pre> <p>When I perform an <code>sts.get_caller_identity()</code> with the logged in credentials, I get the expected authenticated role from IAM.</p> <pre><code>{ &quot;UserId&quot;: &quot;ACCESS_KEY_ID:CognitoIdentityCredentials&quot;, &quot;Account&quot;: &quot;ACCOUNT_ID&quot;, &quot;Arn&quot;: &quot;arn:aws:sts::ACCOUNT_ID:assumed-role/authRole/CognitoIdentityCredentials&quot; } </code></pre> <p>I have also added the following inline policy on the above <code>authRole</code> to try an blanket solve the permissions issue to no avail.</p> 
<pre><code>{ &quot;Version&quot;: &quot;2012-10-17&quot;, &quot;Statement&quot;: [ { &quot;Sid&quot;: &quot;VisualEditor0&quot;, &quot;Effect&quot;: &quot;Allow&quot;, &quot;Action&quot;: [ &quot;dynamodb:*&quot;, &quot;appsync:*&quot; ], &quot;Resource&quot;: &quot;*&quot; } ] } </code></pre> <p>I originally tried using this as a guide but didn't see anyone else here run into this 401 issue. <a href="https://stackoverflow.com/questions/60293311/how-to-send-a-graphql-query-to-appsync-from-python">How to send a GraphQL query to AppSync from python?</a></p>
<python><amazon-web-services><python-requests><graphql><boto3>
2023-12-07 04:22:03
0
479
Austin Ulfers
77,617,568
1,911,722
How to use `case_sensitive` option in pathlib glob?
<p>In the documentation of pathlib, it says</p> <blockquote> <p>Path.glob(pattern, *, case_sensitive=None)</p> </blockquote> <p>There are two things I do not understand.</p> <ol> <li><p>What is the second parameter <code>*</code>?</p> </li> <li><p>I want to use the <code>case_sensitive</code> option. But when I run</p> <p><code>somepath.glob('*.txt',case_sensitive=False)</code></p> </li> </ol> <p>I get <code>TypeError: glob() got an unexpected keyword argument 'case_sensitive'</code></p> <p>How do I use the <code>case_sensitive</code> option in pathlib glob?</p>
<python><pathlib>
2023-12-07 03:51:24
1
2,657
user15964
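Editor's note — the `*` in the signature is not a parameter you pass: it marks everything after it as keyword-only. The `case_sensitive` keyword itself was only added to `Path.glob` in Python 3.12, so older interpreters raise exactly that `TypeError`. A sketch of a version-aware fallback using stdlib `fnmatch` (filenames created here are just for demonstration):

```python
# Case-insensitive globbing: native keyword on 3.12+, fnmatch fallback below.
import fnmatch
import sys
import tempfile
from pathlib import Path

somepath = Path(tempfile.mkdtemp())
(somepath / "a.TXT").touch()
(somepath / "b.txt").touch()
(somepath / "c.csv").touch()

if sys.version_info >= (3, 12):
    hits = list(somepath.glob("*.txt", case_sensitive=False))
else:
    hits = [p for p in somepath.iterdir()
            if fnmatch.fnmatch(p.name.lower(), "*.txt")]
print(sorted(p.name for p in hits))
```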
77,617,540
8,968,910
Python: split column when it contains a certain word and append a string at the end
<p>I have a dataframe with column called <code>address</code>:</p> <pre><code>address xxx City yyy road 17 number 8 floor west bank ttt City iii road 1 number ggg City kkk road 25 number 1 floor apple store </code></pre> <p>If the address contains <code>floor</code>, I need to split it and get the address before it. Since I also remove the <code>floor</code> itself, I have to append it back.</p> <p>My code:</p> <pre><code>df['address'] = df.address.str.split('floor').str[0]+'floor' </code></pre> <p>Output:</p> <pre><code>xxx City yyy road 17 number 8 floor ttt City iii road 1 number floor ggg City kkk road 25 number 1 floor </code></pre> <p>My expected result:</p> <pre><code>xxx City yyy road 17 number 8 floor ttt City iii road 1 number #if the original address does not contain 'floor', don't do anything ggg City kkk road 25 number 1 floor </code></pre>
<python><pandas>
2023-12-07 03:43:06
2
699
Lara19
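Editor's note — the split-and-append approach bolts `floor` onto every row, including rows that never contained it. A regex replace sidesteps that: replace `floor` plus everything after it with just `floor`, which leaves non-matching rows untouched. A minimal sketch on the question's data:

```python
# Rows without "floor" have nothing to match, so the regex leaves them alone.
import pandas as pd

df = pd.DataFrame({"address": [
    "xxx City yyy road 17 number 8 floor west bank",
    "ttt City iii road 1 number",
    "ggg City kkk road 25 number 1 floor apple store",
]})
df["address"] = df["address"].str.replace(r"floor.*", "floor", regex=True)
print(df["address"].tolist())
```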
77,617,514
16,405,935
Pandas concat multiple excel file in folder by pre defined sheets
<p>I'm trying to concat multiple files in folder with pre defined sheets name (Because each file has some sheets that I don't want to concat) but I got a problem as below. I'm trying to do with below code:</p> <pre><code>from pathlib import Path import pandas as pd import glob import io folder = Path(r'D:\Test\New folder') for file in folder.glob('*xlsx'): sheet_name = ['์ด','ํ•˜๋…ธ์ด', 'ํ˜ธ์น˜๋ฏผ', '๋ฐ•๋‹Œ', '๋นˆ์ฆ', '๋™๋‚˜์ด', '๋น„์—”ํ™”', '๋‹ค๋‚ญ', 'ํƒ€์ด์‘์›ฌ', 'ํ•˜์ดํ', 'ํ˜ธ์•ˆ๋ผ์— ', '์Šคํƒ€๋ ˆ์ดํฌ', 'ํ•˜๋‚จ', '๋นˆํ‘น', 'ํ‘ธ๋ฏธํฅ', '๊ป€ํ„ฐ', '์‚ฌ์ด๊ณต'] dfs = [] for x in sheet_name: df = pd.read_excel(file, sheet_name=x) base_date = df.iloc[0, 10] branch = df.iloc[1, 12] df.columns = df.iloc[2] df = df[3:] # Xoฬa cรดฬฃt NaN df = df.loc[:, df.columns.notnull()] df_2 = df.iloc[:, 0:4] df_3 = df.iloc[:, 4:] df = pd.concat([df_2, df_3], axis=0, ignore_index=True) df['Base Date'] = base_date df['Base Date'] = pd.to_datetime(df['Base Date'], format='%d-%m-%Y').dt.strftime('%Y%m%d').astype(int) df['Branch'] = branch dfs.append(df) output = pd.concat(dfs, axis=0, ignore_index=True) print(output) </code></pre> <p>It raised error that said <code>ValueError: Worksheet named '' not found</code> (Because in my file may have not sheets that I pre defined). 
So I tried with below code:</p> <pre><code>from pathlib import Path import pandas as pd import glob import io folder = Path(r'D:\Test\New folder') for file in folder.glob('*xlsx'): sheet_name = ['์ด','ํ•˜๋…ธ์ด', 'ํ˜ธ์น˜๋ฏผ', '๋ฐ•๋‹Œ', '๋นˆ์ฆ', '๋™๋‚˜์ด', '๋น„์—”ํ™”', '๋‹ค๋‚ญ', 'ํƒ€์ด์‘์›ฌ', 'ํ•˜์ดํ', 'ํ˜ธ์•ˆ๋ผ์— ', '์Šคํƒ€๋ ˆ์ดํฌ', 'ํ•˜๋‚จ', '๋นˆํ‘น', 'ํ‘ธ๋ฏธํฅ', '๊ป€ํ„ฐ', '์‚ฌ์ด๊ณต'] dfs = [] for x in sheet_name: try: df = pd.read_excel(file, sheet_name=x) base_date = df.iloc[0, 10] branch = df.iloc[1, 12] df.columns = df.iloc[2] df = df[3:] # Xoฬa cรดฬฃt NaN df = df.loc[:, df.columns.notnull()] df_2 = df.iloc[:, 0:4] df_3 = df.iloc[:, 4:] df = pd.concat([df_2, df_3], axis=0, ignore_index=True) df['Base Date'] = base_date df['Base Date'] = pd.to_datetime(df['Base Date'], format='%d-%m-%Y').dt.strftime('%Y%m%d').astype(int) df['Branch'] = branch dfs.append(df) except ValueError as e: pass output = pd.concat(dfs, axis=0, ignore_index=True) print(output) </code></pre> <p>But with this code, it just work for one last file. How can I achieve my goal? Here is the link for the sample data: <a href="https://github.com/hoatranobita/concat" rel="nofollow noreferrer">https://github.com/hoatranobita/concat</a></p>
<python><pandas><concatenation>
2023-12-07 03:32:51
1
1,793
hoa tran
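Editor's note — the reason only the last file survives is that `dfs = []` sits *inside* the file loop, so the accumulator is wiped on every iteration. Hoist it above the loop; and rather than catching `ValueError`, intersect the wanted sheet names with what each workbook actually contains (`pd.ExcelFile(file).sheet_names` with real files). A sketch with plain dicts standing in for workbooks (file and sheet names here are hypothetical):

```python
# Dicts stand in for .xlsx workbooks so the accumulator fix is runnable as-is.
import pandas as pd

workbooks = {
    "file1.xlsx": {"SheetA": pd.DataFrame({"v": [1]}),
                   "Extra": pd.DataFrame({"v": [9]})},   # "Extra" is unwanted
    "file2.xlsx": {"SheetB": pd.DataFrame({"v": [2]})},
}
wanted = ["SheetA", "SheetB"]

dfs = []  # <-- created ONCE, before the file loop
for file, sheets in workbooks.items():
    present = [s for s in wanted if s in sheets]  # skip sheets a file lacks
    for s in present:
        dfs.append(sheets[s])

output = pd.concat(dfs, axis=0, ignore_index=True)
print(output)
```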
77,617,326
4,225,430
No floating points needed, but still flag type error float() in producing time series decomposition
<p>I self-learn time series decomposition with python. I get a float() type error.</p> <p>My data source is from government open data. Basically, I want to visualize the trend of high speed rail passenger flow after the end of covid travel restriction. The time period is 2023 Jan to Nov.</p> <p>The first part is data cleaning and sub-setting. The second part is time series and decomposition by month.</p> <pre><code>import numpy as np import pandas as pd import matplotlib.pyplot as plt get_ipython().run_line_magic('matplotlib', 'inline') import seaborn as sns from pylab import rcParams df = pd.read_csv(&quot;https://www.immd.gov.hk/opendata/eng/transport/immigration_clearance/statistics_on_daily_passenger_traffic.csv&quot;) # data cleaning df = df.iloc[: , :-1] df = df[df[&quot;Date&quot;].str.contains(&quot;2023&quot;) == True] from datetime import date from datetime import datetime df[&quot;Date&quot;] = df[&quot;Date&quot;].apply(lambda x: datetime.strptime(str(x), &quot;%d-%m-%Y&quot;)) control_point = df['Control Point'].tolist() options = ['Airport', 'Express Rail Link West Kowloon', 'Lo Wu', 'Lok Ma Chau Spur Line', 'Heung Yuen Wai', 'Hong Kong-Zhuhai-Macao Bridge', 'Shenzhen Bay'] df_clean = df.loc[df['Control Point'].isin(options)] df_XRL = df[df[&quot;Control Point&quot;].str.contains(&quot;Express Rail Link West Kowloon&quot;) &amp; df[&quot;Arrival / Departure&quot;].str.contains(&quot;Departure&quot;)] df_XRL = df_XRL[[&quot;Date&quot;,&quot;Hong Kong Residents&quot;]] df_XRL = df_XRL[~(df_XRL['Date'] &gt; '2023-11-30')] df_XRL['Month'] = pd.DatetimeIndex(df_XRL['Date']).strftime(&quot;%b&quot;) df_XRL['Week day'] = pd.DatetimeIndex(df_XRL['Date']).strftime(&quot;%a&quot;) # Pivot table from numpy import nan monthOrder = ['Jan', 'Feb', 'Mar', 'Apr','May','Jun','Jul','Aug','Sep','Oct','Nov'] dayOrder = ['Mon','Tue','Wed','Thu','Fri','Sat','Sun'] pivot_XRL = pd.pivot_table(df_XRL, index=['Month'], values=['Hong Kong Residents'], columns=['Week day'], 
aggfunc=('sum')).loc[monthOrder, (slice(None), dayOrder)] # Time Series Decomposition - where errors occur from statsmodels.tsa.seasonal import seasonal_decompose decomposition = seasonal_decompose(df_XRL, model = &quot;additive&quot;) decomposition.plot() plt.rcParams['axes.labelsize'] = 16 plt.rcParams['axes.titlesize'] = 16 </code></pre> <p>Error message:</p> <pre class="lang-none prettyprint-override"><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) ~\AppData\Local\Temp\ipykernel_6628\3049465439.py in &lt;module&gt; 1 from statsmodels.tsa.seasonal import seasonal_decompose ----&gt; 2 decomposition = seasonal_decompose(df_XRL, model = &quot;additive&quot;) 3 decomposition.plot() 4 plt.rcParams['axes.labelsize'] = 16 5 plt.rcParams['axes.titlesize'] = 16 ~\anaconda3\lib\site-packages\statsmodels\tsa\seasonal.py in seasonal_decompose(x, model, filt, period, two_sided, extrapolate_trend) 140 pfreq = getattr(getattr(x, &quot;index&quot;, None), &quot;inferred_freq&quot;, None) 141 --&gt; 142 x = array_like(x, &quot;x&quot;, maxdim=2) 143 nobs = len(x) 144 ~\anaconda3\lib\site-packages\statsmodels\tools\validation\validation.py in array_like(obj, name, dtype, ndim, maxdim, shape, order, contiguous, optional) 133 if optional and obj is None: 134 return None --&gt; 135 arr = np.asarray(obj, dtype=dtype, order=order) 136 if maxdim is not None: 137 if arr.ndim &gt; maxdim: ~\anaconda3\lib\site-packages\pandas\core\generic.py in __array__(self, dtype) 2082 def __array__(self, dtype: npt.DTypeLike | None = None) -&gt; np.ndarray: 2083 values = self._values -&gt; 2084 arr = np.asarray(values, dtype=dtype) 2085 if ( 2086 astype_is_view(values.dtype, arr.dtype) TypeError: float() argument must be a string or a number, not 'Timestamp' </code></pre>
<python><pandas><time-series><timestamp>
2023-12-07 02:24:11
1
393
ronzenith
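Editor's note — `seasonal_decompose` is choking because `df_XRL` still carries the `Date` column as *data* (Timestamps can't be cast to float). The usual preparation: move `Date` into the index, keep only the numeric column, and give the index an explicit frequency so the period can be inferred. A sketch with synthetic daily data standing in for the passenger counts:

```python
# Prepare a numeric, DatetimeIndex-ed, fixed-frequency series for decomposition.
import numpy as np
import pandas as pd

df_XRL = pd.DataFrame({
    "Date": pd.date_range("2023-01-01", periods=60, freq="D"),
    "Hong Kong Residents": np.arange(60.0),
})
series = (df_XRL.set_index("Date")["Hong Kong Residents"]
                .asfreq("D"))  # numeric values only, index carries the dates
print(series.dtype, type(series.index).__name__)

# With statsmodels installed, this series now decomposes cleanly:
# from statsmodels.tsa.seasonal import seasonal_decompose
# seasonal_decompose(series, model="additive").plot()
```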
77,617,256
312,662
Operator overloading on classes themselves
<p>I feel like this code should work, but the second expression fails. Why is that?</p> <pre><code>class Foo: @classmethod def __matmul__(cls, other): return &quot;abc&quot; + other print(Foo.__matmul__(&quot;def&quot;)) # OK print(Foo @ &quot;def&quot;) # TypeError: unsupported operand type(s) for @: 'type' and 'str' </code></pre> <p>same for <code>__getattr__</code>:</p> <pre><code>class Foo: @classmethod def __getattr__(cls, item): return &quot;abc&quot; + item print(Foo.__getattr__(&quot;xyz&quot;)) # OK print(Foo.xyz) # AttributeError: type object 'Foo' has no attribute 'xyz' </code></pre> <p>A solution is to use metaclasses</p> <pre><code>class MetaFoo(type): def __matmul__(cls, other): return &quot;abc&quot; + other class Foo(metaclass=MetaFoo): pass print(Foo.__matmul__(&quot;def&quot;)) # OK print(Foo @ &quot;def&quot;) # OK </code></pre> <p>Is this a bug in (C)Python?</p> <p>To be clear, the question is why does <code>print(Foo @ &quot;def&quot;) </code> not work in the first example, without the metaclass?</p>
<python><operator-overloading><class-method>
2023-12-07 01:59:59
2
1,509
nick maxwell
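Editor's note — this is documented behavior, not a bug: for `a @ b`, Python looks up `__matmul__` on `type(a)`, bypassing the object itself. When `a` is a class, `type(a)` is its metaclass, so the operator must live there — which is exactly why the metaclass version works. The same rule is why instances *do* pick up dunders defined on their class:

```python
# Implicit special-method lookup goes to type(operand), one level up.
class Meta(type):
    def __matmul__(cls, other):
        return "abc" + other          # found via type(Foo) == Meta

class Foo(metaclass=Meta):
    def __matmul__(self, other):
        return "xyz" + other          # found via type(Foo()) == Foo

print(Foo @ "def")    # class as operand  -> Meta.__matmul__
print(Foo() @ "def")  # instance operand  -> Foo.__matmul__
```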
77,617,211
558,639
Is it possible to rescale a tkinter PhotoImage (not from a file)?
<p>Assume I have a 96 x 96 PhotoImage generated algorithmically (not from a file), e.g.:</p> <pre><code>import tkinter IMG_W = 96 IMG_H = 96 class App: def __init__(self, t): self.i = tkinter.PhotoImage(width=IMG_W,height=IMG_H) for row in range(0, IMG_H): for col in range(0, IMG_W): pixel = '#%02x%02x%02x' % (0x80, row, col) self.i.put(pixel, (row, col)) c = tkinter.Canvas(t, width=IMG_W, height=IMG_H); c.pack() c.create_image(0, 0, image = self.i, anchor=tkinter.NW) t = tkinter.Tk() a = App(t) t.mainloop() </code></pre> <p>(Note: the code above is only to show a filled PhotoImage. My real use case is reading pixels from a data stream that produces 96 x 96 pixel bitmaps.)</p> <p>But 96 x 96 is mighty small. Is there a way to scale the PhotoImage so it fills a larger Canvas?</p> <p>(Scaling examples I've seen all use Pillow and assume that you're reading from a file. AFAICT, that doesn't apply here.)</p>
<python><tkinter>
2023-12-07 01:39:52
2
35,607
fearless_fool
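Editor's note — `tkinter.PhotoImage` can be rescaled without Pillow: `zoom(x, y)` enlarges by integer factors and `subsample(x, y)` shrinks. A sketch for blowing the 96x96 bitmap up to a 384x384 canvas (`TARGET` is an assumed canvas size; the GUI part is wrapped in a function so nothing opens a window on import):

```python
# zoom() only accepts integer factors, so pick TARGET as a multiple of 96.
import tkinter

IMG_W, IMG_H = 96, 96
TARGET = 384
factor = TARGET // IMG_W  # 4x enlargement

def draw_scaled(root):
    small = tkinter.PhotoImage(width=IMG_W, height=IMG_H)
    # ... fill pixels with small.put(...) as in the question ...
    big = small.zoom(factor, factor)  # enlarged copy, 384x384
    c = tkinter.Canvas(root, width=TARGET, height=TARGET)
    c.pack()
    c.create_image(0, 0, image=big, anchor=tkinter.NW)
    return big  # keep a reference so Tk doesn't garbage-collect the image

# draw_scaled(tkinter.Tk()) followed by mainloop() displays the enlarged image.
```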
77,617,074
7,385,884
Handling a ValidationError from inputs not conforming to a provided args_schema with a structured tool in langchain
<p>I am trying to come up with the correct way to capture and handle a pydantic <code>ValidationError</code> caused by structured tool arguments not conforming to a provided <code>args_schema</code>.</p> <p>I am creating structured tools with the <code>StructuredTool.from_function(...)</code> and providing an <code>args_schema</code>. On occasion, I'll have errors due to the arguments not conforming to the schema (missing certain arguments, for example).</p> <p>I'd like to find a way to pass in an error handling callback to handle the resulting <code>ValidationError</code> and recover from it (with some custom logging as well) without having the agent stop executing.</p> <p>The functionality of the <a href="https://python.langchain.com/docs/modules/agents/tools/custom_tools#handling-tool-errors" rel="nofollow noreferrer">handle_tool_error</a> argument is very similar to what I need. However it does not fit here because I need to be able to enter my tool code to raise a <code>ToolException</code> that the passed in handler callable can respond to. The <code>ValidationError</code> is raised outside of my tool code, so I can't raise a <code>ToolException</code> in response.</p> <p>My current approach is making all of the members of my <code>args_schema</code> be optional, and then check for <code>None</code> in my tool code, however I don't like this approach as the arguments are not optional, and I'd like my schema to be correct. Additionally, this workaround essentially means I have to predict the failures I'll get (missing fields, for example), and if I fail to predict correctly, my agent will stop executing.</p> <p>I looked through langchain docs, but was not able to find a recommended approach.</p>
<python><error-handling><pydantic><langchain><py-langchain>
2023-12-07 00:45:43
0
1,553
istrupin
77,617,031
17,028,242
Converting HuggingFace Tokenizer to TensorFlow Keras Layer
<p>I am struggling to understand how to perform inference with a pre-trained HuggingFace model loaded as a TensorFlow Keras model.</p> <p><strong>Context</strong></p> <p>In my case, I am trying to fine-tune a pre-trained DistilBert classifier. I have something as follows to preprocess my data and load/train my model:</p> <pre><code>from transformers import TFAutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained(&quot;distilbert-base-uncased&quot;) model = TFAutoModelForSequenceClassification.from_pretrained( &quot;distilbert-base-uncased&quot;, num_labels=2, id2label=id2label, label2id=label2id ) # add another layer tf_train = model.prepare_tf_dataset(question_train_test_split['train'], batch_size=16, shuffle=True, tokenizer=tokenizer) model.compile(optimizer=tf.keras.optimizers.Adam(2e-5)) # freeze the first transformer layer of model model.layers[0].trainable=False print('Model Architecture:') print(model.summary()) model.fit(tf_train, epochs=3) </code></pre> <p>Where <code>question_train_test_split</code> is a HuggingFace <code>Dataset</code> object instance.</p> <p>This snippet of code works perfectly as expected, it loads the HuggingFace model as a <code>tf.keras</code> layer. This even trains properly with the <code>.fit</code> method.</p> <p>However, I am having issues when I want to perform predictions. I understand that I need to tokenize my string input, however, I would like to load the tokenizer as a <code>tf.keras</code> layer. I've looked everywhere for a way to do this and I was not able to find any way.</p> <p>Ideally, I would like something like this:</p> <pre><code>user_input = 'When were the Beatles formed?' model_input = tokenizer(user_input) # THIS HF TOKENIZER SHOULD BE A tf.keras LAYER model = model(model_input) </code></pre> <p>This is so that I can save the entire model (with the tokenizer and the transformer layers + the classifier layers) into a TensorFlow SavedModel. 
If there are any pointers on converting a HuggingFace tokenizer into a TensorFlow Keras layer, I would appreciate them.</p>
<python><tensorflow><keras><huggingface-transformers>
2023-12-07 00:28:35
3
458
AndrewJaeyoung
77,616,969
5,203,117
Pandas future warning for idxmax() & idxmin() question?
<p>python 3.9.13, pandas 2.1.3</p> <pre><code>blahblah.py:29: FutureWarning: The behavior of DataFrame.idxmax with all-NA values, or any-NA and skipna=False, is deprecated. In a future version this will raise ValueError df['max days'] = df[columns].idxmax(axis=1) </code></pre> <p>Same warning with: <code>skipna=False</code> or, <code>skipna=True</code></p> <p>Program works as expected despite warning.</p> <p>Does anyone know what is going on with this? Google returns nothing useful.</p>
<python><pandas><dataframe>
2023-12-06 23:58:45
1
597
John
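Editor's note — the warning fires whenever some row is all-NaN across the chosen columns (`idxmax` has no answer there); a future pandas version will raise `ValueError` instead of warning. The usual fix is to compute `idxmax` only on rows that have at least one value. A minimal sketch:

```python
# Restrict idxmax to rows with data; all-NaN rows get NaN in "max days".
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, np.nan, 3.0], "b": [2.0, np.nan, 1.0]})
columns = ["a", "b"]

has_value = df[columns].notna().any(axis=1)
df.loc[has_value, "max days"] = df.loc[has_value, columns].idxmax(axis=1)
print(df["max days"].tolist())
```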
77,616,803
787,675
Dump kubernetes resource to yaml manifest with kr8s python library
<p>Is there any way to dump existing resources in kubernetes cluster to yaml manifests with <code>kr8s</code> python library? Usually, you apply yaml manifests with <code>kubectl apply -f .</code>, and this is for reverse process to extract them from kubernetes cluster.</p> <p>Thanks.</p>
<python><kubernetes><kr8s>
2023-12-06 23:01:11
1
717
Stanislav Hordiyenko
77,616,738
4,867,951
Matplotlib axes formatter is not working correctly in Seaborn Stripplot
<p>For some reason, when I try to use a <a href="http://matplotlib.org/api/ticker_api.html#matplotlib.ticker.FuncFormatter" rel="nofollow noreferrer">matplotlib.ticker.FuncFormatter</a> to reformat the x axis of a seaborn stripplot, it reformats the positions instead of the labels, MWE below. Using the same formatter function on a normal matplotlib scatterplot works fine, and gives the expected result.</p> <p>There's also something else I don't understand about how the labels work, because printing one from each axis shows the same thing for the default and reformatted stripplots, which does not match the figure, and then gives an empty string for the scatterplot label...</p> <pre><code> original stripplot label: Text(3, 0, '0.3') reformatted stripplot label: Text(3, 0, '0.3') reformatted scatterplot label: Text(0, 0, '') </code></pre> <pre><code>import seaborn as sns import pandas as pd import numpy as np import matplotlib.pyplot as plt df = pd.DataFrame({'x': np.random.randint(0,10,size=100) / 10, 'y': np.random.normal(size=100)}) fig, axarr = plt.subplots(3,1) sns.stripplot(data=df, x='x', y='y', ax=axarr[0]) axarr[0].set_title('stripplot, default formatting') print(f' original stripplot label: {axarr[0].get_xticklabels()[3]}') sns.stripplot(data=df, x='x', y='y', ax=axarr[1]) axarr[1].xaxis.set_major_formatter(plt.FuncFormatter(lambda x, pos: f'{x:.2f}')) axarr[1].set_title('stripplot, incorrect formatting') print(f' reformatted stripplot label: {axarr[1].get_xticklabels()[3]}') axarr[2].scatter(df.x, df.y, s=10, alpha=0.5) axarr[2].xaxis.set_major_formatter(plt.FuncFormatter(lambda x, pos: f'{x:.2f}')) axarr[2].set_title('scatterplot, correct formatting') print(f' reformatted scatterplot label: {axarr[2].get_xticklabels()[3]}') fig.tight_layout() </code></pre> <p><a href="https://i.sstatic.net/SP9D3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SP9D3.png" alt="example output image" /></a></p>
<python><pandas><matplotlib><seaborn>
2023-12-06 22:42:09
1
634
Jazz Weisman
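Editor's note — `stripplot` treats x as *categorical*: the ten distinct values become ticks at integer positions 0..9, and a `FuncFormatter` is handed those positions, not the 0.0–0.9 data values (recent seaborn versions offer `native_scale=True` to keep a numeric axis instead). A matplotlib-only sketch of the same effect, recording exactly what the formatter sees:

```python
# Simulate a categorical axis: ticks at 0..9, formatter receives positions.
import matplotlib
matplotlib.use("Agg")  # headless backend, no window needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(range(10), range(10), "o")
ax.set_xticks(range(10))  # what seaborn's categorical axis does internally

seen = []
ax.xaxis.set_major_formatter(plt.FuncFormatter(
    lambda x, pos: seen.append(x) or f"{x:.2f}"))
fig.canvas.draw()
print(sorted(set(seen)))  # 0..9 -- positions, never the 0.0-0.9 data values
```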
77,616,659
9,137,547
Error type hinting tuple (Vs code Microsoft Python extension)
<p>I like annotating my code and keeping the Python extension's type checks in semi-strict mode. I often use generators to create tuples, and with variable-length tuples I had no problem, but I just noticed that when I write something like:</p> <pre><code>my_tup : tuple[int, int, int] = tuple(i for i in range(3)) </code></pre> <p>The Python extension by Microsoft for VS Code complains because I'm assigning a tuple of variable length to a tuple of length 3. It's clear to me that the expression <code>tuple(generator...)</code> in this case will <strong>always</strong> return a tuple of length 3. Am I correct, and the extension is just not smart enough, or did I misunderstand generators?</p>
<python><python-typing>
2023-12-06 22:26:27
0
659
Umberto Fontanazza
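Editor's note — the checker is right by its own rules: `tuple(<generator>)` is typed as `tuple[int, ...]` (unknown length), because inference never evaluates `range(3)`. The usual ways out: write the literal when the length is static, or `cast` (with a runtime check if you want belt and braces) when it isn't:

```python
# Two ways to satisfy a fixed-length tuple annotation.
from typing import cast

my_tup: tuple[int, int, int] = (0, 1, 2)  # literal: length is known statically
my_tup2 = cast(tuple[int, int, int], tuple(i for i in range(3)))  # a promise
assert len(my_tup2) == 3  # runtime check backing up the cast
print(my_tup, my_tup2)
```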
77,616,586
15,239,717
How can I process Image in Django
<p>In my Django Application, I am using imagekit to process images in my model but any time I try uploading image error says: module 'PIL.Image' has no attribute 'ANTIALIAS'. Initially, I never had issues with it but recently any time I try to save my form, I get that error. I have upgraded Pillow and used ImageFilter.MedianFilter() instead of ANTIALIAS but error persist. Someone should help with the best way of doing this. I am using Django Django==4.0.</p> <p>See my Model code:</p> <pre><code>from django.db import models from PIL import Image from phonenumber_field.modelfields import PhoneNumberField from imagekit.processors import ResizeToFill, Transpose from imagekit.models import ProcessedImageField from django.core.exceptions import ValidationError from django.utils.deconstruct import deconstructible @deconstructible class FileExtensionValidator: def __init__(self, extensions): self.extensions = extensions def __call__(self, value): extension = value.name.split('.')[-1].lower() if extension not in self.extensions: valid_extensions = ', '.join(self.extensions) raise ValidationError(f&quot;Invalid file extension. 
Only {valid_extensions} files are allowed.&quot;) </code></pre> <h1>Allowed Image extensions</h1> <pre><code>image_extensions = ['jpeg', 'jpg', 'gif', 'png'] class ResizeToFillWithoutAntialias(ResizeToFill): def process(self, img): img = super().process(img) return img.resize(self.size, Image.LANCZOS) class Profile(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE) first_name = models.CharField(max_length=30, blank=True, null=True) middle_name = models.CharField(max_length=30, blank=True, null=True) last_name = models.CharField(max_length=30, blank=True, null=True) date_birth = models.DateField(blank=False, null=True) phone_number = models.CharField(max_length=11, blank=True, null=True) gender = models.CharField(max_length=10, choices=GENDER_CHOICES) marital_status = models.CharField(max_length=16, choices=STATUS_CHOICES) home_address = models.CharField(max_length=200, blank=True, null=True) state_origin = models.CharField(max_length=30, choices=STATE_CHOICES) local_govt = models.CharField(max_length=30, blank=True, null=True) role = models.CharField(max_length=30, blank=True, null=True) image = ProcessedImageField( upload_to='profile_images', processors=[Transpose(), ResizeToFillWithoutAntialias(150, 200)], format='JPEG', options={'quality': 97}, validators=[FileExtensionValidator(image_extensions)], null=False, blank=False, ) last_updated = models.DateTimeField(auto_now_add=True) def __str__(self): return self.user.email </code></pre> <p>See Form code:</p> <pre><code>class ProfileForm(forms.ModelForm): first_name = forms.CharField(label='First Name:', max_length=30, widget=forms.TextInput(attrs={'placeholder': 'Enter First Name'})) middle_name = forms.CharField(label='Middle Name:', max_length=30, required=False, widget=forms.TextInput(attrs={'placeholder': 'Enter Middle Name'})) last_name = forms.CharField(label='Surname:', max_length=30, widget=forms.TextInput(attrs={'placeholder': 'Enter Your Last Name'})) phone_number = 
forms.CharField(label='Phone Number:', max_length=11) class Meta: widgets = {'date_birth': ProfileDateField()} model = Profile fields = ['first_name', 'middle_name', 'last_name', 'phone_number', 'date_birth', 'gender', 'marital_status', 'image'] </code></pre> <p>See View code:</p> <pre><code>@login_required(login_url='accounts_login') def applicant_update_profile(request): logged_user = request.user # Get the profile of the logged user profile = logged_user.profile if not logged_user.profile.role == 'Applicant': return redirect('accounts_login') if request.method == 'POST': form = ProfileForm(request.POST, request.FILES, instance=profile) if form.is_valid(): form.save() messages.success(request, 'Profile Saved Successfully.') return redirect('accounts_profile') else: form = ProfileForm(instance=profile) context = { 'form': form, 'page_title': 'Update Profile', } return render(request, 'accounts/applicant_update_profile.html', context) </code></pre>
<python><django><python-imaging-library>
2023-12-06 22:08:48
1
323
apollos
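Editor's note — `Image.ANTIALIAS` was removed in Pillow 10, and older django-imagekit releases still reference it internally, which is why the error persists even though the model code above already uses `LANCZOS`. The clean fixes are upgrading django-imagekit or pinning `Pillow<10`; a common stopgap is restoring the removed name as an alias once at startup (e.g. near the top of `settings.py`):

```python
# Compatibility shim: re-expose the constant Pillow 10 removed.
from PIL import Image

if not hasattr(Image, "ANTIALIAS"):
    # Pillow >= 10: ANTIALIAS was an alias of LANCZOS anyway
    Image.ANTIALIAS = Image.Resampling.LANCZOS
print(Image.ANTIALIAS)
```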
77,616,578
6,651,650
Named Pipes in Linux with Python read/write miss some lines of text
<p>I am learning how to use named pipes to pass data between processes. To test it out, I wrote two Python scripts, which I called <code>pipewriter.py</code> and <code>pipereader.py</code>, and a named pipe called <code>my_pipe</code> (using <code>mkfifo my_pipe</code>), all in the same directory. The Python scripts are included below.</p> <h2><code>pipewriter.py</code></h2> <pre><code>#!/usr/bin/env python3 for i in range(1, 5): with open('my_pipe', 'a') as outfile: outfile.write(f'Hello {i}\n') if i==4: outfile.write('\n') print('Done') </code></pre> <h2><code>pipereader.py</code></h2> <pre><code>#!/usr/bin/env python3 while True: with open('my_pipe', 'r') as infile: input = infile.read() print(input,end=&quot;&quot;) </code></pre> <p>The problem is that <code>pipereader.py</code> sometimes misses some of the lines output by <code>pipewriter.py</code>. When I run <code>./pipereader.py</code> in one terminal and <code>./pipewriter.py</code> in another, then the lines written are</p> <pre><code>Hello 1 Hello 2 Hello 3 Hello 4 </code></pre> <p>But if I run <code>./pipewriter.py; ./pipewriter.py; ./pipewriter.py</code> (with <code>pipereader.py</code> running in a different terminal), then the output of <code>pipereader.py</code> sometimes looks like this:</p> <pre><code>Hello 1 Hello 2 Hello 3 Hello 1 &lt;- Missing &quot;Hello 4&quot; Hello 2 Hello 3 Hello 4 Hello 1 Hello 2 Hello 3 Hello 4 </code></pre> <p>How can I figure out why the lines are disappearing and ensure it doesn't happen?</p> <p>By rewriting <code>./pipewriter.py</code> to open <code>my_pipe</code> only once for each execution, I can seemingly prevent missing lines. 
Still, I'm not sure if this is just a matter of reducing the frequency of missing lines, so I've just &quot;gotten lucky&quot; the several times I've tested it.</p> <h2><code>pipewriter.py</code> (Modified)</h2> <pre><code>#!/usr/bin/env python3 with open('my_pipe', 'a') as outfile: for i in range(1, 5): outfile.write(f'Hello {i}\n') outfile.write('\n') print('Done') </code></pre> <h2>Edit: Output When using an Input Argument per <a href="https://stackoverflow.com/questions/77616578/named-pipes-in-linux-with-python-read-write-miss-some-lines-of-text/77616642?noredirect=1#comment136834095_77616642">this comment</a></h2> <h3>Modified pipewriter.</h3> <pre><code>#!/usr/bin/env python3 import sys for i in range(1, 5): with open('my_pipe', 'a') as outfile: outfile.write(f'Hello {sys.argv[1]} {i}\n') if i==4: outfile.write('\n') print('Done') </code></pre> <h3>Output 1 of <code>./pipewriter.py A; ./pipewriter.py B; ./pipewriter.py C</code></h3> <pre><code>Hello A 1 Hello A 2 Hello A 3 Hello A 4 Hello B 1 Hello B 2 Hello B 3 Hello B 4 Hello C 1 Hello C 2 Hello C 3 </code></pre> <h3>Output 2 of <code>./pipewriter.py A; ./pipewriter.py B; ./pipewriter.py C</code></h3> <pre><code>Hello A 1 Hello A 2 Hello A 3 Hello A 4 Hello B 1 Hello B 2 Hello B 3 Hello B 4 Hello C 1 Hello C 2 </code></pre>
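A likely root cause, hedged: each `with open(...)` in the reader consumes data until EOF, and EOF arrives whenever the last writer closes; any bytes a writer pushes between the reader seeing EOF and closing its descriptor are discarded when the last open descriptor goes away. The races disappear if the reader keeps the FIFO open the whole time and an explicit end-of-stream marker replaces EOF. A sketch (Linux allows opening a FIFO with `O_RDWR`, which keeps a write end alive on the reader's side so reads never hit EOF; the `DONE` sentinel is an invented convention):

```python
import os
import tempfile
import threading

fifo = os.path.join(tempfile.mkdtemp(), 'my_pipe')
os.mkfifo(fifo)

received = []

def reader():
    # O_RDWR keeps our own write end open, so iteration never hits EOF
    # while individual writers come and go (Linux-specific FIFO behavior).
    fd = os.open(fifo, os.O_RDWR)
    with os.fdopen(fd, 'r') as infile:
        for line in infile:
            if line == 'DONE\n':      # explicit end-of-stream marker
                break
            received.append(line.rstrip('\n'))

t = threading.Thread(target=reader)
t.start()

# three writer "runs", each opening and closing the pipe, as in the question
for tag in 'ABC':
    with open(fifo, 'w') as outfile:
        for i in range(1, 5):
            outfile.write(f'Hello {tag} {i}\n')

with open(fifo, 'w') as outfile:
    outfile.write('DONE\n')

t.join()
print(received)
```

Each writer run flushes well under PIPE_BUF bytes, so the writes land atomically and nothing is interleaved or lost.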
<python><linux><pipe><named-pipes>
2023-12-06 22:06:55
0
2,813
Paul Wintz
77,616,528
19,675,781
Pandas.loc() return empty values for unavailable indexes
<p>I have a dataframe like this:</p> <pre><code>ind,l1,l2 = [0,1,0],[1,1,0],[0,0,1] df = pd.DataFrame([ind,l1,l2],columns=[['C1','C2','C3']]) df.index=[['A','C','E']] </code></pre> <p><a href="https://i.sstatic.net/e5ba1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e5ba1.png" alt="enter image description here" /></a></p> <p>I want to use <code>df.loc[['A','B','C','D','E']]</code> so that the unavailable indexes B &amp; D return NaN values.</p> <p><a href="https://i.sstatic.net/OIJZC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OIJZC.png" alt="enter image description here" /></a></p> <p>Can anyone help me with this?</p>
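A hedged sketch of one way to get this: since pandas 1.0, `.loc` with missing labels raises `KeyError`, and `reindex` is the documented way to get `NaN` rows for them. (Note that `df.index = [['A','C','E']]` in the question actually creates a one-level `MultiIndex`; the sketch below assumes a plain index.)

```python
import pandas as pd

df = pd.DataFrame([[0, 1, 0], [1, 1, 0], [0, 0, 1]],
                  columns=['C1', 'C2', 'C3'],
                  index=['A', 'C', 'E'])

# .loc would raise KeyError for 'B' and 'D'; reindex fills them with NaN
out = df.reindex(['A', 'B', 'C', 'D', 'E'])
print(out)
```

The existing rows keep their values (upcast to float so the new rows can hold NaN).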
<python><pandas><dataframe><indexing>
2023-12-06 21:53:52
1
357
Yash
77,616,513
890,269
nginx unit + django + docker - wsgi.py cannot import module
<p>Currently containerizing a legacy Django project, and running into an odd error.</p> <p>When pushing the config file to the running server, it fails to apply, and on inspecting the log, I get the following:</p> <pre><code>2023/12/06 21:36:02 [info] 39#39 discovery started 2023/12/06 21:36:02 [notice] 39#39 module: python 3.11.2 &quot;/usr/lib/unit/modules/python3.11.unit.so&quot; 2023/12/06 21:36:02 [info] 1#1 controller started 2023/12/06 21:36:02 [notice] 1#1 process 39 exited with code 0 2023/12/06 21:36:02 [info] 41#41 router started 2023/12/06 21:36:02 [info] 41#41 OpenSSL 3.0.11 19 Sep 2023, 300000b0 2023/12/06 21:40:22 [info] 49#49 &quot;myapp&quot; prototype started 2023/12/06 21:40:22 [info] 50#50 &quot;myapp&quot; application started 2023/12/06 21:40:22 [alert] 50#50 Python failed to import module &quot;myapp.wsgi&quot; Traceback (most recent call last): File &quot;/app/myapp/wsgi.py&quot;, line 5, in &lt;module&gt; from django.core.wsgi import get_wsgi_application # noqa: E402 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ModuleNotFoundError: No module named 'django' 2023/12/06 21:40:22 [notice] 49#49 app process 50 exited with code 1 2023/12/06 21:40:22 [warn] 41#41 failed to start application &quot;myapp&quot; 2023/12/06 21:40:22 [alert] 41#41 failed to apply new conf 2023/12/06 21:40:22 [notice] 1#1 process 49 exited with code 0 </code></pre> <p>My config file is as follows:</p> <pre><code>{ &quot;applications&quot;: { &quot;myapp&quot;: { &quot;type&quot;: &quot;python 3.11&quot;, &quot;module&quot;: &quot;myapp.wsgi&quot;, &quot;processes&quot;: 150, &quot;environment&quot;: { &quot;DJANGO_SETTINGS_MODULE&quot;: &quot;myapp.settings.production&quot;, } } }, &quot;listeners&quot;: { &quot;0.0.0.0:8000&quot;: { &quot;pass&quot;: &quot;applications/myapp&quot; } } } </code></pre> <p>Swapping out unit for Gunicorn results in the expected behavior, as well as running the Django development server.</p> <p>I did notice the version of Python 3 the 
nginx-unit module is using differs slightly from the one I have installed in the container (Python 3.11.2 vs. Python 3.11.7), but I don't see why this would cause this specific issue.</p> <p>Is nginx-unit having problems locating my Python paths?</p> <p>Thanks</p>
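A plausible explanation, offered as an assumption: Unit launches the application with its own embedded interpreter and does not activate a virtualenv on its own, so packages pip-installed into a venv inside the image are not on `sys.path` (Gunicorn works because it is started from the venv). Unit's Python application object supports `"home"` (path to a virtualenv) and `"path"` (extra `sys.path` entries); a sketch with assumed paths `/app` and `/app/venv`:

```json
{
  "applications": {
    "myapp": {
      "type": "python 3.11",
      "module": "myapp.wsgi",
      "home": "/app/venv",
      "path": "/app",
      "environment": {
        "DJANGO_SETTINGS_MODULE": "myapp.settings.production"
      }
    }
  }
}
```

The paths are examples; point them at wherever the project and its virtualenv actually live in the container.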
<python><django><docker><nginx>
2023-12-06 21:50:50
2
1,056
Lucifer N.
77,616,388
5,839,462
How to catch or process RequestValidationError exceptions differently in a FastAPI middleware?
<p>How to correctly combine <code>RequestValidationError</code> exception handlers, such as:</p> <pre><code>@app.exception_handler(RequestValidationError) async def validation_exception_handler(request, exc): response = prepare_response({}, g_ERROR__INCORRECT_PARAMS) return JSONResponse(content=response) </code></pre> <p>with a middleware function, such as:</p> <pre><code>@app.middleware(&quot;http&quot;) async def response_middleware(request: Request, call_next): try: result = await call_next(request) res_body = b'' async for chunk in result.body_iterator: res_body += chunk response = prepare_response(res_body.decode(), g_ERROR__ALL_OK) except Exception as e: response = prepare_response({}, g_ERROR__UNKNOWN_ERROR) return JSONResponse(content=response) </code></pre> <p>It is necessary that <strong>only</strong> the <code>RequestValidationError</code> exception triggers the following function:</p> <pre><code>@app.exception_handler(RequestValidationError) async def validation_exception_handler(request, exc): </code></pre> <p>while, in all other cases, the following function should be triggered:</p> <pre><code>@app.middleware(&quot;http&quot;) async def response_middleware(request: Request, call_next): </code></pre> <p>Now, the <code>response_middleware</code> function fires all the time and processes the result of <code>validation_exception_handler</code>, which violates the basic intent of the function.</p> <p>When using <code>@app.middleware(&quot;http&quot;)</code> any exceptions disappear and even without <code>@app.exception_handler(RequestValidationError)</code> the code consistently generates <code>200 OK</code> response, and the following:</p> <pre><code> try: result = await call_next(request) res_body = b'' async for chunk in result.body_iterator: res_body += chunk response = prepare_response(res_body.decode(), g_ERROR__ALL_OK) except RequestValidationError as e: response = prepare_response({}, g_ERROR__INCORRECT_PARAMS) except Exception as e: response = 
prepare_response({}, g_ERROR__UNKNOWN_ERROR) </code></pre> <p>This approach doesn't work either: after the following line of code</p> <pre><code>result = await call_next(request) </code></pre> <p>no exceptions are thrown at all.</p> <p>How can this problem be solved?</p> <p>On the one hand, we need the middleware for various functions; on the other hand, some exceptions (incorrect parameters, etc.) should be caught before the middleware code runs, or inside it, but not deeper.</p>
<python><exception><fastapi><middleware><fastapi-middleware>
2023-12-06 21:23:15
2
1,448
Zhihar
77,616,353
3,376,169
Apply Rules to pandas dataframe based on criteria defined in rules table
<p>I have <code>tbl_df</code> with columns col1, col2, col3, col4, col5, col6 and about 7 million rows. I also have a corresponding <code>rules_df</code> (about 20 rules) in which some cells contain * as a wildcard:</p> <pre><code> col1 col2 col3 col4 value Rule1 E * P * 1 Rule2 B * P * 2 Rule3 S * P * 3 Rule4 * O C * 4 Rule5 F * C S 5 Rule6 E * * * 6 </code></pre> <p>Some rows in tbl_df look like this:</p> <pre><code>col1 col2 col3 col4 col5 col6 E X P A S M E X P A S M B Y P E T N S X P E T M M O C A T N F X C S T N E Z M X S N </code></pre> <p>I want to use rules_df to get a value for each row in tbl_df, so the output should look like:</p> <pre><code> col1 col2 col3 col4 col5 col6 value Row1 E X P A S M 1 Row2 E X P A S M 1 Row3 B Y P E T N 2 Row4 S X P E T M 3 Row5 M O C A T N 4 Row6 F O C S T N 5 Row7 E Z M X S N 6 </code></pre> <p>Using rules_df, two rules (Rule1 and Rule6) match Row1 in tbl_df; likewise Row6 matches both Rule5 and Rule4. In each case the most restrictive rule must be applied first, then the criteria are loosened and the remaining rules applied, waterfall style. I was thinking of iterating over the contents of rules_df and applying each rule to tbl_df. Is there a numpy routine or a faster vectorized operation I can use to apply the rules, or maybe a C routine?</p>
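A hedged numpy sketch of the first-match lookup, assuming rules_df is already ordered from most to least restrictive. With 7 million rows the boolean cube below should be built in row chunks to bound memory; the toy arrays mirror the question's matching columns:

```python
import numpy as np

# toy versions of the matching columns (col1..col4) from the question
tbl = np.array([list('EXPA'), list('BYPE'), list('SXPE'),
                list('MOCA'), list('FXCS'), list('EZMX')])
rules = np.array([list('E*P*'), list('B*P*'), list('S*P*'),
                  list('*OC*'), list('F*CS'), list('E***')])
values = np.array([1, 2, 3, 4, 5, 6])

# a cell matches if it equals the rule cell or the rule holds a wildcard;
# shape is (n_rows, n_rules, n_cols)
cell_ok = (tbl[:, None, :] == rules[None, :, :]) | (rules[None, :, :] == '*')
rule_ok = cell_ok.all(axis=2)        # (n_rows, n_rules): which rules match each row
matched = rule_ok.any(axis=1)        # rows matching no rule need a default value
first = rule_ok.argmax(axis=1)       # first True = most restrictive matching rule
result = values[first]
print(result)  # expect [1 2 3 4 5 6] for the sample rows
```

`argmax` returns 0 for an all-False row, so `matched` must be consulted before trusting `result` on rows that no rule covers.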
<python><pandas>
2023-12-06 21:18:12
1
439
user3376169
77,616,133
519,422
Python Pandas: how to get data from a website into a dataframe by searching for values following key words?
<p>I asked <a href="https://stackoverflow.com/questions/77566914/how-to-write-specific-values-from-a-pandas-python-dataframe-to-a-specific-plac/77567039#77567039">this</a> question recently and received very helpful advice. I have been building on the advice but have again encountered a problem I can't solve. I would be very grateful for any advice.</p> <p>In my previous situation, I read data from a well-organized external file into a dataframe. The data were already in columns and all I had to do to get the data was:</p> <pre><code>import pandas as pd df = pd.read_table(&quot;organizedValues.txt&quot;, delimiter=&quot;\t&quot;) </code></pre> <p>In the current problem, the data I need are not in a well-organized file. They're posted at a website. <strong>I want to put the data into a Pandas dataframe.</strong>. Here are details.</p> <p>There are many different molecules. There's a separate webpage for each molecule and it always has the same structure. Here's an example of how one molecule's webpage appears.</p> <pre><code>CH4 Methane 13.00000 Things in unit 4.00000 L, units in unit defintion 262.22300 mMass (lb/mol) 300.00000 K_0 (C) 100.45200 V_0 (m^3/mol) 719.08310 Eta_0 (-) 0.00000 Some parameter 0.00000 Another parameter 0.00000 Strain (-) 200.00000 K_1 (C) </code></pre> <p>I followed advice <a href="https://stackoverflow.com/questions/56252977/read-and-process-data-from-url-in-python">here</a> to read data from a URL and save it in a file. Now I have a file called &quot;ch4.txt.&quot; The problem is that it's a mess. I can't use &quot;read_table&quot; as I did in my previous post. 
Here's a very small snippet as an example:</p> <blockquote> <p>{&quot;nameโ€:โ€test2โ€,โ€pathโ€:โ€test2โ€,โ€contentType&quot;:&quot;file&quot;},{&quot;nameโ€:โ€test5โ€,โ€pathโ€:โ€testโ€5,โ€contentType&quot;:&quot;file&quot;}],&quot;totalCountโ€:500}},โ€fileTreeProcessingTimeโ€:25,โ€foldersToFetch&quot;:[],&quot;reducedMotionEnabled&quot;:null,&quot;repo&quot;:{&quot;idโ€:yyy,โ€defaultBranch&quot;:&quot;main&quot;,&quot;nameโ€:โ€codeโ€name,โ€ownerLoginโ€:โ€usernameโ€,โ€currentUserCanPush&quot;:false,&quot;isFork&quot;:false,&quot;isEmpty&quot;:false,&quot;createdAt&quot;:&quot;2022-04-31T01:30:52.000Z&quot;,v=4&quot;,&quot;public&quot;:true,&quot;private&quot;:false,&quot;isOrgOwned&quot;:false},&quot;symbolsExpanded&quot;:false,&quot;treeExpanded&quot;:true,&quot;refInfo&quot;:{&quot;name&quot;:&quot;main&quot;,&quot;listCacheKeyโ€:โ€XXXโ€,โ€canEdit&quot;:false,&quot;refType&quot;:&quot;branch&quot;,&quot;currentOidโ€:โ€XXXโ€},โ€pathโ€:โ€orโ€,โ€currentUser&quot;:null,&quot;blob&quot;:{&quot;rawLines&quot;:[&quot; CH4 Methane &quot;,&quot; 13.00000 Strain (-) &quot;,&quot; 0.00000 K_0 (C)<br /> &quot;,&quot; 300.00000 Things in unit &quot;,&quot; 13.00000<br /> Eta_0 (-) &quot;,&quot; 756</p> </blockquote> <p>I need to search the file for keywords like &quot;<code>Strain (-)</code>&quot; and then extract the values after the keywords (in this case, <code>0.00000</code>). For this particular example, I would want to end up with a Pandas dataframe like:</p> <pre><code> Name Strain (-) K_0 (C) Things in unit Eta_0 (-) 1 CH4 0.00000 300.00000 13.00000 756 </code></pre> <p>A cleaner way to do it (so I won't be storing many files) would be to get the data directly from the website and read it into a dataframe in an organized way. However, I have spent a few hours on trying to find information about how to do this and have not been successful. If anyone has encountered this kind of thing before, I would love to know what worked. 
Thank you.</p>
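A hedged sketch of the keyword-based extraction: the raw string below is invented to mimic the saved page, and the pattern assumes the numeric value always directly precedes its keyword, as in both the clean page and the `rawLines` blob:

```python
import re

# made-up blob mimicking the messy saved page
raw = ('"rawLines":["  CH4  Methane  ","  13.00000  Strain (-)  ",'
       '"   0.00000  K_0 (C)  "," 300.00000  Things in unit  ",'
       '"  13.00000  Eta_0 (-)  "]')

keywords = ['Strain (-)', 'K_0 (C)', 'Things in unit', 'Eta_0 (-)']
record = {'Name': 'CH4'}
for kw in keywords:
    # the value is the number immediately before the keyword
    m = re.search(r'([-+]?\d+\.?\d*)\s+' + re.escape(kw), raw)
    record[kw] = float(m.group(1)) if m else None

print(record)
```

Building one such dict per molecule and passing the list to `pd.DataFrame([...])` then yields the desired table without storing intermediate files.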
<python><python-3.x><pandas><dataframe><file-io>
2023-12-06 20:31:25
1
897
Ant
77,616,121
11,739,577
Scraping a dropdown menu to return name of items using BS4
<p>I have the following XML. It is from a menu called Knives with sub-types like Bayonet, Classic Knife, etc.</p> <pre><code>&lt;div class=&quot;group inline-block relative w-full lg:w-auto&quot;&gt; &lt;button class=&quot;navbar-subitem-trigger text-left py-2 focus:outline-none hover:text-white w-full lg:w-auto block lg:inline-block lg:mr-2 xl:mr-4 text-blue-100&quot; data-target=&quot;navbar-subitems-Knives&quot; type=&quot;button&quot;&gt; Knives &lt;/button&gt; &lt;ul id=&quot;navbar-subitems-Knives&quot; class=&quot;custom-scrollbar hidden bg-gray-700 rounded shadow-md text-blue-100 my-2 lg:my-0 overflow-hidden lg:overflow-y-auto lg:absolute lg:group-hover:block lg:max-h-[80vh]&quot;&gt; &lt;li&gt; &lt;a class=&quot;flex items-center outline outline-offset-0 outline-1 outline-gray-700 hover:bg-gray-600 hover:text-white py-2 px-4 whitespace-nowrap bg-gray-700&quot; href=&quot;https://csgoskins.gg/weapons/bayonet&quot;&gt; &lt;div class=&quot;w-10 h-7 mr-1&quot;&gt; &lt;img loading=&quot;lazy&quot; class=&quot;lazy-instant max-w-full max-h-full&quot; src=&quot;https://cdn.csgoskins.gg/public/uih/weapons/aHR0cHM6Ly9jZG4uY3Nnb3NraW5zLmdnL3B1YmxpYy9pbWFnZXMvYnVja2V0cy9lY29uL3dlYXBvbnMvYmFzZV93ZWFwb25zL3dlYXBvbl9iYXlvbmV0LjE3YmIyNWM3NTg2N2QwMzlmYTc1MjRlOGM1ZmE2MzEzNGI2MjQ1MzQucG5n/50/auto/85/notrim/8965bc48871767721ae4a2bd3762f460.webp&quot; alt=&quot;Bayonet&quot;&gt; &lt;/div&gt; Bayonet &lt;/a&gt; &lt;/li&gt; &lt;li&gt; &lt;a class=&quot;flex items-center outline outline-offset-0 outline-1 outline-gray-700 hover:bg-gray-600 hover:text-white py-2 px-4 whitespace-nowrap bg-gray-700&quot; href=&quot;https://csgoskins.gg/weapons/classic-knife&quot;&gt; &lt;div class=&quot;w-10 h-7 mr-1&quot;&gt; &lt;img loading=&quot;lazy&quot; class=&quot;lazy-instant max-w-full max-h-full&quot; 
src=&quot;https://cdn.csgoskins.gg/public/uih/weapons/aHR0cHM6Ly9jZG4uY3Nnb3NraW5zLmdnL3B1YmxpYy9pbWFnZXMvYnVja2V0cy9lY29uL3dlYXBvbnMvYmFzZV93ZWFwb25zL3dlYXBvbl9rbmlmZV9jc3MuMmZhNjZkMDEwMTMxYzA3NTMwYjA4ZTkwOTZlNGVmNGM4Y2NiODA4Ny5wbmc-/50/auto/85/notrim/8c546f8cb1f52b844f38fb4681c01dcb.webp&quot; alt=&quot;Classic Knife&quot;&gt; &lt;/div&gt; Classic Knife &lt;/a&gt; &lt;/li&gt; &lt;li&gt; &lt;a class=&quot;flex items-center outline outline-offset-0 outline-1 outline-gray-700 hover:bg-gray-600 hover:text-white py-2 px-4 whitespace-nowrap bg-gray-700&quot; href=&quot;https://csgoskins.gg/weapons/falchion-knife&quot;&gt; &lt;div class=&quot;w-10 h-7 mr-1&quot;&gt; &lt;img loading=&quot;lazy&quot; class=&quot;lazy-instant max-w-full max-h-full&quot; src=&quot;https://cdn.csgoskins.gg/public/uih/weapons/aHR0cHM6Ly9jZG4uY3Nnb3NraW5zLmdnL3B1YmxpYy9pbWFnZXMvYnVja2V0cy9lY29uL3dlYXBvbnMvYmFzZV93ZWFwb25zL3dlYXBvbl9rbmlmZV9mYWxjaGlvbi43MzM0OTBkY2Q0YjZiMTJmMTk1MTJiM2I5YTFhMDlkOTM1ZTZhYWVhLnBuZw--/50/auto/85/notrim/92b97f3404ee5c97178b6fa7bf45ab42.webp&quot; alt=&quot;Falchion Knife&quot;&gt; &lt;/div&gt; Falchion Knife &lt;/a&gt; &lt;/li&gt; &lt;li&gt; &lt;a class=&quot;flex items-center outline outline-offset-0 outline-1 outline-gray-700 hover:bg-gray-600 hover:text-white py-2 px-4 whitespace-nowrap bg-gray-700&quot; href=&quot;https://csgoskins.gg/weapons/flip-knife&quot;&gt; &lt;div class=&quot;w-10 h-7 mr-1&quot;&gt; &lt;img loading=&quot;lazy&quot; class=&quot;lazy-instant max-w-full max-h-full&quot; src=&quot;https://cdn.csgoskins.gg/public/uih/weapons/aHR0cHM6Ly9jZG4uY3Nnb3NraW5zLmdnL3B1YmxpYy9pbWFnZXMvYnVja2V0cy9lY29uL3dlYXBvbnMvYmFzZV93ZWFwb25zL3dlYXBvbl9rbmlmZV9mbGlwLjBlODhmNTZhODhlMTE1MjNhOGNhZGI2ZDcwMDNlZjMwOGM1MmZhYjkucG5n/50/auto/85/notrim/f831a3e8245eee182a4b3315c36f9df6.webp&quot; alt=&quot;Flip Knife&quot;&gt; &lt;/div&gt; Flip Knife &lt;/a&gt; &lt;/li&gt; &lt;li&gt; &lt;a class=&quot;flex items-center outline outline-offset-0 outline-1 outline-gray-700 
hover:bg-gray-600 hover:text-white py-2 px-4 whitespace-nowrap bg-gray-700&quot; href=&quot;https://csgoskins.gg/weapons/gut-knife&quot;&gt; &lt;div class=&quot;w-10 h-7 mr-1&quot;&gt; &lt;img loading=&quot;lazy&quot; class=&quot;lazy-instant max-w-full max-h-full&quot; src=&quot;https://cdn.csgoskins.gg/public/uih/weapons/aHR0cHM6Ly9jZG4uY3Nnb3NraW5zLmdnL3B1YmxpYy9pbWFnZXMvYnVja2V0cy9lY29uL3dlYXBvbnMvYmFzZV93ZWFwb25zL3dlYXBvbl9rbmlmZV9ndXQuMWIwYTYyZjEwZDliYjcwZmRiMDY5NWU3MDI3NDI0ODZlNGNjZWJkZC5wbmc-/50/auto/85/notrim/78ce07820ddd2a888d6f20a2f39b46d0.webp&quot; alt=&quot;Gut Knife&quot;&gt; &lt;/div&gt; Gut Knife &lt;/a&gt; &lt;/li&gt; &lt;li&gt; &lt;a class=&quot;flex items-center outline outline-offset-0 outline-1 outline-gray-700 hover:bg-gray-600 hover:text-white py-2 px-4 whitespace-nowrap bg-gray-700&quot; href=&quot;https://csgoskins.gg/weapons/huntsman-knife&quot;&gt; &lt;div class=&quot;w-10 h-7 mr-1&quot;&gt; &lt;img loading=&quot;lazy&quot; class=&quot;lazy-instant max-w-full max-h-full&quot; src=&quot;https://cdn.csgoskins.gg/public/uih/weapons/aHR0cHM6Ly9jZG4uY3Nnb3NraW5zLmdnL3B1YmxpYy9pbWFnZXMvYnVja2V0cy9lY29uL3dlYXBvbnMvYmFzZV93ZWFwb25zL3dlYXBvbl9rbmlmZV90YWN0aWNhbC42YzI0NTM3ZjVlMzA2NGNmNDA4MTNlOTNmOWZjYmFkYzk5MjA1Y2ExLnBuZw--/50/auto/85/notrim/3520c4123a62eb9550ccc7f3745eb53f.webp&quot; alt=&quot;Huntsman Knife&quot;&gt; &lt;/div&gt; Huntsman Knife &lt;/a&gt; &lt;/li&gt; &lt;/ul&gt; &lt;/div&gt; </code></pre> <p>With the following code I try to get the names of the different sub-types:</p> <pre><code>import requests from bs4 import BeautifulSoup import pandas as pd # URL of the webpage to scrape url = 'https://csgoskins.gg/' # Replace with the URL of the page you want to scrape headers = { &quot;User-Agent&quot;: &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36&quot; } # Send a GET request to the URL r = requests.get(url, headers = headers) soup = BeautifulSoup(r.content, 'lxml') 
print(r) knives_section = soup.find(&quot;ul&quot;,{&quot;id&quot;:&quot;navbar-subitems-Knives&quot;}).findAll(&quot;w-10 h-7 mr-1&quot;) print(knives_section) </code></pre> <p>but it returns nothing. I tried to use elements from the following answer: <a href="https://stackoverflow.com/questions/53459163/scraping-from-dropdown-option-value-python-beautifulsoup">Scraping from dropdown option value Python BeautifulSoup</a></p> <p>What am I doing wrong?</p>
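One bug, noted as a sketch rather than a full diagnosis: `findAll("w-10 h-7 mr-1")` searches for a *tag named* that string; CSS classes must go through `class_` or a selector. The sub-type names are also available as the `alt` attributes of the images. Parsing the snippet from the question (if the live page builds the menu with JavaScript, `requests` will never see it and a browser-driven tool would be needed):

```python
from bs4 import BeautifulSoup

html = '''
<ul id="navbar-subitems-Knives">
  <li><a href="https://csgoskins.gg/weapons/bayonet">
    <div class="w-10 h-7 mr-1"><img alt="Bayonet"></div> Bayonet </a></li>
  <li><a href="https://csgoskins.gg/weapons/classic-knife">
    <div class="w-10 h-7 mr-1"><img alt="Classic Knife"></div> Classic Knife </a></li>
</ul>'''

soup = BeautifulSoup(html, 'html.parser')
menu = soup.find('ul', id='navbar-subitems-Knives')
names = [img['alt'] for img in menu.find_all('img')]
print(names)  # ['Bayonet', 'Classic Knife']
```

`[a.get_text(strip=True) for a in menu.find_all('a')]` would extract the same names from the link text instead of the image attributes.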
<python><beautifulsoup>
2023-12-06 20:29:24
1
793
doomdaam
77,616,090
455,796
Python humanize.naturalsize() remove trailing 0's in the fraction
<p>Is it possible to get the output of</p> <pre><code>1M 1.01M </code></pre> <p>by changing the <code>format</code>'s value in the following code?</p> <pre><code>import humanize format = &quot;%.2f&quot; raw1 = 1_048_576; raw2 = 1_058_576 print(humanize.naturalsize(raw1, format=format, gnu=True)); print(humanize.naturalsize(raw2, format=format, gnu=True)); </code></pre>
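A hedged sketch: humanize applies the `format` string to the scaled value with C-style `%` formatting (an assumption about humanize's internals), and the `%g` conversion drops insignificant trailing zeros, so `format="%.3g"` with `gnu=True` should yield `1M` and `1.01M`. The conversion itself, demonstrated without humanize:

```python
raw1, raw2 = 1_048_576, 1_058_576

# %g keeps at most 3 significant digits and strips trailing zeros
print("%.3gM" % (raw1 / 2**20))  # 1M
print("%.3gM" % (raw2 / 2**20))  # 1.01M
```

If humanize's internals differ from this assumption, post-processing the returned string (stripping a trailing `.00` before the suffix letter) achieves the same result.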
<python><humanize>
2023-12-06 20:22:51
2
12,654
Damn Vegetables
77,616,054
6,728,433
Extract "values" from VectorUDT sparse vector in pyspark
<p>I have a pyspark dataframe with 2 vector columns. When I show the dataframe in my notebook, it prints each vector like this: {&quot;vectorType&quot;: &quot;sparse&quot;, &quot;length&quot;: 262144, &quot;indices&quot;: [21641], &quot;values&quot;: [1]}</p> <p>When I print the schema, it shows up as VectorUDT.</p> <p>I just need the &quot;values&quot; field value as a list or array. How can I save that as a new field? Doing &quot;vector_field&quot;.values doesn't seem to work because pyspark thinks it's a String...</p>
<python><pyspark><vector><azure-databricks>
2023-12-06 20:15:38
2
798
S.S.
77,615,797
5,094,261
sqlalchemy 2.0 orm bulk upsert one-to-many relationships
<p>I am trying to bulk insert into multiple tables using the SQLAlchemy 2.0 ORM with the async engine. The table hierarchy is as follows.</p> <pre><code>table A -&gt; table B -&gt; table C </code></pre> <p>During insertion, I get a batch of JSON objects resembling table A which contain the table B and table C data as well. Hence I have to construct an ORM instance from each JSON object for table A and call <code>session.add()</code> individually. This slows things down.</p> <p>If I use the bulk <code>insert(...).values(...)</code> API it gets faster, but I can only insert into one table, as I couldn't figure out a way to make relationships work using the <code>insert()</code> API.</p> <p>Is there any way I can make such a deeply nested insert more performant?</p>
<python><postgresql><sqlalchemy><one-to-many>
2023-12-06 19:25:07
0
1,273
Shiladitya Bose
77,615,693
4,896,087
How to assign values from dict with more arguments than function?
<p>I want to use the <code>**kwargs</code> syntax to assign values to function inputs. However, my dictionary of keyword arguments has more entries than the function has parameters. How can I use the dict <code>p</code> to provide input values for the function <code>func</code>?</p> <pre><code>p = {'a': 1, 'b': 2, 'c': 3} def func(a): return a func(**p) </code></pre> <p>This gives an error: <code>TypeError: func() got an unexpected keyword argument 'b'</code></p>
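A stdlib sketch of one way to do this, using `inspect.signature` to drop the keys the function does not accept (caveat: if the function itself declares `**kwargs`, every key is accepted and no filtering is needed):

```python
import inspect

def call_with_known_kwargs(func, kwargs):
    # keep only the keys that appear in func's signature
    accepted = set(inspect.signature(func).parameters)
    return func(**{k: v for k, v in kwargs.items() if k in accepted})

p = {'a': 1, 'b': 2, 'c': 3}

def func(a):
    return a

print(call_with_known_kwargs(func, p))  # 1
```

The helper name is invented for illustration; the same filtering dict comprehension can of course be inlined at the call site.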
<python>
2023-12-06 19:03:28
1
3,613
George Liu
77,615,464
7,563,454
Python, tkinter: Use multiprocessing.Pool() in a class instance, get new data from pool in a function that runs every second
<p>I'm still struggling to get a thread pool working inside a class instance. In a file we'll call <code>test.py</code> I have the following:</p> <pre><code>import multiprocessing as mp class Test: def __init__(self): pass def exec(self): pool = mp.Pool() result = pool.map(self.thread, range(0, 4)) for r in result: print(r) def thread(self, i): return i * 2 </code></pre> <p>In <code>init.py</code> I then have:</p> <pre><code>import test t = test.Test() if __name__ == &quot;__main__&quot;: t.exec() </code></pre> <p>This works successfully: When I run <code>init.py</code> from a console the threads execute and return the doubled numbers. The problem is that I want to make my thread pool a class property, not a local variable. The correct content of <code>test.py</code> is thus supposed to be:</p> <pre><code>import multiprocessing as mp class Test: def __init__(self): self.pool = mp.Pool() def exec(self): result = self.pool.map(self.thread, range(0, 4)) for r in result: print(r) def thread(self, i): return i * 2 </code></pre> <p>But this returns the following error:</p> <pre><code>NotImplementedError: pool objects cannot be passed between processes or pickled </code></pre> <p>I'm not trying to pass the pool between processes: The class instance of <code>Test</code> calling the pool is supposed to remain on the main thread only, hence why it's executed under the <code>if __name__ == &quot;__main__&quot;</code> check. I do however want the pool to be a property of <code>self</code> so it can be reused, otherwise I have to create a new pool each time <code>exec</code> is called.</p> <p><code>exec</code> is meant to be called repeatedly every 1 second and use the thread pool to get new data each time. 
In the full code I use <code>tkinter</code> so <code>self.root.after(1000, self.exec)</code> is used after <code>init.py</code> initially activates this function which is then meant to reschedule itself and run repeatedly: <code>root.after</code> is the only way I'm aware of without the main loop of TK blocking my other loop from running. Here's a full view of what <code>test.py</code> is supposed to look like if we also take tkinter into account, please let me know how that would need to be rewritten to work.</p> <pre><code>import multiprocessing as mp import tkinter as tk class Test: def __init__(self): self.pool = mp.Pool() self.root = tk.Tk() self.root.mainloop() def exec(self): result = self.pool.map(self.thread, range(0, 4)) for r in result: print(r) self.root.after(1000, self.exec) def thread(self, i): return i * 2 </code></pre>
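A hedged explanation of the error: `pool.map` has to pickle the callable `self.thread`, and pickling that bound method pickles `self`, which drags the unpicklable `Pool` (and, in the full program, the Tk root) along with it. Moving the worker to module level, or defining `__getstate__` to exclude the pool, lets the pool live on `self`. A minimal sketch without tkinter, assuming the Linux `'fork'` start method:

```python
import multiprocessing as mp

def _work(i):
    # module-level worker: pickled by name, carries no reference to self
    return i * 2

class Test:
    def __init__(self):
        # 'fork' keeps this sketch self-contained on Linux; adjust elsewhere
        self.pool = mp.get_context('fork').Pool()

    def exec(self):
        return self.pool.map(_work, range(4))

if __name__ == '__main__':
    t = Test()
    print(t.exec())  # [0, 2, 4, 6]
    print(t.exec())  # the pool is reused, not rebuilt on every call
```

With this shape, `self.root.after(1000, self.exec)` can reschedule `exec` freely, since the pool is created once and only picklable arguments cross the process boundary.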
<python><python-3.x><multithreading><tkinter><multiprocessing>
2023-12-06 18:19:08
1
1,161
MirceaKitsune
77,615,427
3,817,518
random vector generation which meets a certain condition
<p>Suppose G is a matrix of shape (100, 20) and h is a vector of length 100.</p> <p>How do I randomly generate a vector x of size 20 that meets the condition G x &lt;= h?</p> <p>Clearly, I can do something like the following, but that is inefficient if I want to generate thousands of these points. Thanks.</p> <pre><code>import numpy as np # Define G and h G = np.random.rand(100, 20) h = np.random.rand(100) def is_feasible(x): return np.all(np.dot(G, x) &lt;= h) while True: # Generate random vector x x = np.random.rand(20) # Check if x is feasible if is_feasible(x): break </code></pre>
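A hedged sketch of batched rejection sampling, which vectorizes the feasibility test over many candidates at once. Note that with G and h both drawn from U(0,1), as in the question's code, the feasible region is essentially empty (G @ x has mean around 10 while every h entry is below 1, so the loop would almost never terminate); the sketch loosens h for illustration:

```python
import numpy as np

def sample_feasible(G, h, n, batch=8192, rng=None):
    """Draw n vectors x in [0,1)^k with G @ x <= h, by batched rejection."""
    if rng is None:
        rng = np.random.default_rng()
    k = G.shape[1]
    out, need = [], n
    while need > 0:
        X = rng.random((batch, k))           # one candidate per row
        ok = (X @ G.T <= h).all(axis=1)      # vectorized feasibility test
        keep = X[ok][:need]
        out.append(keep)
        need -= len(keep)
    return np.vstack(out)

rng = np.random.default_rng(0)
G = rng.random((100, 20))
h = np.full(100, 12.0)   # loosened so the unit cube is mostly feasible
xs = sample_feasible(G, h, 1000, rng=rng)
```

If the feasible polytope is a tiny fraction of the sampling box, rejection is hopeless no matter how it is vectorized; techniques such as hit-and-run sampling over the polytope are the usual alternative in that regime.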
<python><numpy><random><probability-density>
2023-12-06 18:13:28
3
1,986
nyan314sn
77,615,407
10,985,257
dependency hell with python-apt
<p>I am building a package which depends on two packages that are difficult to handle.</p> <p>The two problematic packages are:</p> <ul> <li>python-apt (<a href="https://pypi.org/project/python-apt/" rel="nofollow noreferrer">PyPI</a>, <a href="https://apt-team.pages.debian.net/python-apt/library/index.html" rel="nofollow noreferrer">Docs</a>)</li> <li>python-pam (<a href="https://pypi.org/project/python-pam/" rel="nofollow noreferrer">PyPI</a>)</li> </ul> <h3>What is the issue with those packages?</h3> <p>python-apt is installed by apt, which I first thought would be great, since I planned to handle dependencies with apt. Then I realized that python3-pam is not the python-pam package listed on PyPI.</p> <p>My first approach was to package python-pam myself, but I realized that the project has a few packaging issues. I do plan to fix them eventually, but decided to push that to a later date.</p> <p>In the meantime, to test my package, I went with a simple venv approach, until I realized that python-apt cannot be installed with just</p> <pre class="lang-bash prettyprint-override"><code>pip install python-apt </code></pre> <p>Is it somehow possible to install <code>python-apt</code> via pip, or to bring the system installation inside the venv, until I am able to provide an updated <code>python-pam</code> deb package?</p>
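A hedged sketch of the "bring the system installation inside the venv" route: a venv created with `--system-site-packages` can see the apt-installed `python3-apt`, while PyPI packages such as the real python-pam still pip-install into the venv (the location is an example):

```shell
venv="$(mktemp -d)/venv"                       # example location
python3 -m venv --system-site-packages "$venv"
# the venv now also resolves apt-installed packages like python3-apt:
grep include-system-site-packages "$venv/pyvenv.cfg"
# PyPI packages still install into the venv as usual:
# "$venv/bin/pip" install python-pam
```

Packages pip-installed inside the venv shadow same-named system packages, so this does not interfere with pinning python-pam from PyPI.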
<python><pip><apt><python-venv>
2023-12-06 18:10:44
0
1,066
MaKaNu
77,615,373
19,009,577
How to split a generator into smaller generators while discarding excess
<p>I want to split a generator of size a*n + b into a generators of n items each, discarding the b leftover items. There is a similar question <a href="https://stackoverflow.com/questions/24527006/split-a-generator-into-chunks-without-pre-walking-it">here</a>, and I have similar requirements:</p> <ul> <li><p>do not walk the generator beforehand: computing the elements is expensive, and it must only be done by the consuming function, not by the chunker</p> </li> <li><p>which means, of course: do not accumulate in memory (no lists)</p> </li> </ul> <p>However, the difference is that I do not want the remaining few elements, if any exist. I've tried adding this block:</p> <pre><code>size, it = tee(it) size = len(list(size)) split = (next(it) for _ in range(size-size%length)) </code></pre> <p>However, this still requires traversing the whole generator up front, which eats up lots of memory and is not ideal.</p>
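One observation, then a sketch: a *fully* lazy version that also discards the short tail is impossible in principle, because you cannot know whether a chunk is complete without pulling all n of its items first. Buffering exactly n items per chunk (never the whole generator, unlike the `tee`/`len` trick) is therefore the minimum:

```python
from itertools import islice

def exact_chunks(it, n):
    """Yield n-tuples from `it`, silently discarding the final short chunk.

    Buffers only n items at a time, never the whole generator.
    """
    it = iter(it)
    while True:
        chunk = tuple(islice(it, n))
        if len(chunk) < n:
            return            # the trailing b < n items are dropped here
        yield chunk

print(list(exact_chunks(iter(range(10)), 3)))  # [(0, 1, 2), (3, 4, 5), (6, 7, 8)]
```

If sub-generators rather than tuples are required downstream, `yield iter(chunk)` gives the same interface at the same n-item memory cost.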
<python><generator>
2023-12-06 18:04:41
1
397
TheRavenSpectre
77,615,257
850,781
Avoid RuntimeWarning using where
<p>I want to apply a function to a <code>numpy</code> array, which goes through infinity to arrive at the correct values:</p> <pre class="lang-py prettyprint-override"><code>def relu(x): odds = x / (1-x) lnex = np.log(np.exp(odds) + 1) return lnex / (lnex + 1) x = np.linspace(0,1,10) np.where(x==1,1,relu(x)) </code></pre> <p>correctly computes</p> <pre><code>array([0.40938389, 0.43104202, 0.45833921, 0.49343414, 0.53940413, 0.60030842, 0.68019731, 0.77923729, 0.88889303, 1. ]) </code></pre> <p>but also issues warnings:</p> <pre><code>3478817693.py:2: RuntimeWarning: divide by zero encountered in divide odds = x / (1-x) 3478817693.py:4: RuntimeWarning: invalid value encountered in divide return lnex / (lnex + 1) </code></pre> <p><strong>How do I avoid the warnings?</strong></p> <p>Please note that performance is of critical importance here, so I would rather avoid creating intermediate arrays.</p>
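One option that adds no intermediate arrays is `np.errstate`, which silences the floating-point warnings only inside the function while letting `inf` and `nan` flow through to `np.where` exactly as before (a sketch; add `over='ignore'` as well if x can get close enough to 1 for `exp` to overflow on finite values):

```python
import numpy as np

def relu(x):
    # suppress the two warnings just for this computation; no extra arrays
    with np.errstate(divide='ignore', invalid='ignore'):
        odds = x / (1 - x)                 # inf at x == 1
        lnex = np.log(np.exp(odds) + 1)    # still inf at x == 1
        return lnex / (lnex + 1)           # inf/inf -> nan at x == 1

x = np.linspace(0, 1, 10)
y = np.where(x == 1, 1, relu(x))
```

The alternative of computing only on a `x != 1` mask avoids the warnings too, but it allocates masked copies, which conflicts with the no-intermediate-arrays requirement.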
<python><numpy><floating-point>
2023-12-06 17:46:30
1
60,468
sds
77,615,205
13,283,988
Heroku API for data processing and Postgres post
<p>I'm attempting to write my first API. I wrote (with the help of GPT) a simple requests script, and I uploaded the Dash API script to Heroku. I'm getting input from multiple remote locations, and I wish to add the remote data to a common Postgres database. I'm uncertain whether a Heroku Dash-style app can be used as an API because of the various security layers in place. Is this even possible? I know Heroku has a Platform API, but its intention is for other uses. So far, my error output indicates the following, so I'm uncertain whether my approach or my code is wrong. Thank you:</p> <pre><code>response content: b'&lt;!doctype html&gt;\n&lt;html lang=en&gt;\n&lt;title&gt;405 Method Not Allowed&lt;/title&gt;\n&lt;h1&gt;Method Not Allowed&lt;/h1&gt;\n&lt;p&gt;The method is not allowed for the requested URL.&lt;/p&gt;\n' </code></pre> <pre><code>import requests data = { &quot;sensor&quot;: &quot;temperature&quot;, &quot;value&quot;: 25.5 } api_endpoint = 'https://my_app.herokuapp.com/ingest' token = 'too_many_secrets' headers = {'Authorization': f'Bearer {token}'} response = requests.post(api_endpoint, json=data, headers=headers, verify=True) if response.status_code == 200: print(&quot;Data sent successfully&quot;) else: print(f&quot;Failed to send data. 
Status code: {response.status_code}&quot;) print(f'response content: {response.content}') </code></pre> <p>My Heroku API app</p> <pre><code>from flask import Flask, request, jsonify from flask_cors import CORS import dash from dash import dcc, html, Input, Output import json app = Flask(__name__) CORS(app) dash_app = dash.Dash(__name__,) server = dash_app.server valid_tokens = [&quot;too_many_secrets&quot;] dash_app.layout = html.Div(children=[ html.H1(children='Simple Dash App'), dcc.Link('Ingest Data', href='/ingest'), html.Div(id='output-message') ]) # Dash callback for the Dash route @dash_app.callback( Output('output-message', 'children'), Input('url', 'pathname') ) def display_page(pathname): if pathname == '/ingest': # This part is for Dash and the URL handling you've defined return html.Div() # Flask route for the traditional Flask endpoint @app.route('/ingest', methods=['OPTIONS', 'POST']) def handle_ingest(): if request.method == 'OPTIONS': response = make_response() response.headers.add('Access-Control-Allow-Origin', '*') # Adjust this to your needs response.headers.add('Access-Control-Allow-Headers', 'Authorization, Content-Type') response.headers.add('Access-Control-Allow-Methods', 'GET, POST, OPTIONS') return response # Continue with your existing logic for POST requests token = request.headers.get('Authorization') if token in valid_tokens: data = request.json # Assuming the data is sent as JSON # Perform data validation and write to the database here # Example: Write data to your Heroku Postgres database print(&quot;Success: Data ingested successfully&quot;) return jsonify({&quot;message&quot;: &quot;Data ingested successfully&quot;}) else: print(&quot;Unauthorized user: Your token was Invalid&quot;) return jsonify({&quot;message&quot;: &quot;Unauthorized&quot;}), 401 if __name__ == '__main__': app.run(debug=True) </code></pre>
<python><plotly-dash><heroku-postgres>
2023-12-06 17:39:10
1
303
Robert Marciniak
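A note on the question above: the 405 is plausibly because Heroku serves `server = dash_app.server` (the Dash app's own Flask instance) while `/ingest` is registered on the separate `app` object, so the POST never reaches the route; registering the route on `dash_app.server` is worth trying. There is also a second, independent bug: the handler compares the full `Authorization` header value (`Bearer too_many_secrets`) against the bare token, so even a reachable route would return 401. A minimal, framework-free sketch of the header parsing (names are illustrative, the token is the placeholder from the question):

```python
def extract_bearer_token(auth_header):
    """Return the token from an 'Authorization: Bearer <token>' value, or None."""
    if not auth_header:
        return None
    scheme, _, token = auth_header.partition(" ")
    if scheme.lower() != "bearer" or not token:
        return None
    return token

VALID_TOKENS = {"too_many_secrets"}   # placeholder secret from the question

def is_authorized(auth_header):
    # Compare the extracted token, not the whole header string.
    return extract_bearer_token(auth_header) in VALID_TOKENS
```

In the Flask handler this would replace the bare `request.headers.get('Authorization')` membership test.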
77,615,189
8,849,755
Python ctypes deepcopy structure
<p>I am not very used to python ctypes, so sorry if this is a naive question. I am trying to copy a struct, this is my code:</p> <pre class="lang-py prettyprint-override"><code>from ctypes import * class Group(Structure): _fields_ = [ (&quot;ChSize&quot;, c_uint32*9), (&quot;DataChannel&quot;, POINTER(c_float)*9), (&quot;TriggerTimeLag&quot;, c_uint32), (&quot;StartIndexCell&quot;, c_uint16)] def deepcopy(self): copy = Group() for n_channel in range(9): data_buffer = c_float*self.ChSize[n_channel] memmove( addressof(data_buffer), self.DataChannel[n_channel], self.ChSize[n_channel]*sizeof(c_float), ) copy.ChSize[n_channel] = self.ChSize[n_channel] copy.DataChannel[n_channel] = addressof(data_buffer) copy.TriggerTimeLag = self.TriggerTimeLag copy.StartIndexCell = self.StartIndexCell return copy </code></pre> <p>However, when I try to use it, I get</p> <pre><code>TypeError: invalid type </code></pre> <p>What would be the correct way of creating a complete new copy of a <code>Group</code> object?</p>
<python><ctypes><deep-copy>
2023-12-06 17:36:36
1
3,245
user171780
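On the ctypes question above: the `TypeError` comes from `data_buffer = c_float*self.ChSize[n_channel]`, which creates an array *type* rather than an instance, so `addressof()` rejects it; and even with an instance, the new buffers must stay referenced or they are garbage-collected while the copy still points at them. A sketch of one working variant (field names follow the question; keeping the buffers in a list on the copy is the essential part):

```python
from ctypes import (POINTER, Structure, c_float, c_uint16, c_uint32,
                    cast, memmove, sizeof)

N_CHANNELS = 9

class Group(Structure):
    _fields_ = [("ChSize", c_uint32 * N_CHANNELS),
                ("DataChannel", POINTER(c_float) * N_CHANNELS),
                ("TriggerTimeLag", c_uint32),
                ("StartIndexCell", c_uint16)]

    def deepcopy(self):
        copy = Group()
        copy._buffers = []                    # keep the new arrays alive
        for ch in range(N_CHANNELS):
            n = self.ChSize[ch]
            buf = (c_float * n)()             # an array *instance*, not the type
            memmove(buf, self.DataChannel[ch], n * sizeof(c_float))
            copy._buffers.append(buf)
            copy.ChSize[ch] = n
            copy.DataChannel[ch] = cast(buf, POINTER(c_float))
        copy.TriggerTimeLag = self.TriggerTimeLag
        copy.StartIndexCell = self.StartIndexCell
        return copy

# demo: build a source Group, copy it, then mutate the original
orig = Group()
orig._buffers = []
for ch in range(N_CHANNELS):
    data = (c_float * 3)(1.0, 2.0, 3.0)
    orig._buffers.append(data)
    orig.ChSize[ch] = 3
    orig.DataChannel[ch] = cast(data, POINTER(c_float))
orig.TriggerTimeLag = 7
copy = orig.deepcopy()
orig.DataChannel[0][0] = 99.0                 # the copy is unaffected
```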
77,614,910
7,563,454
Python: Can multiprocessing.Pool() be used in modules other than '__main__'?
<p>I have the following simple case of using a process pool:</p> <pre><code>import multiprocessing as mp def double(i): return i * 2 def main(): pool = mp.Pool() for result in pool.map(double, [1, 2, 3]): print(result) main() </code></pre> <p>For over a day I've been trying to figure out why running that code would cause freezing and errors. Then in my <a href="https://stackoverflow.com/questions/77609981/python-thread-pools-not-working-pool-map-freezes-pool-map-async-returns-a-mapr">previous question</a> about the crash, someone pointed out the code needs to be guarded behind a <code>__name__ == '__main__'</code> check.</p> <pre><code>if __name__ == '__main__': main() </code></pre> <p>In fact if I put the original code in <code>init.py</code> which I execute from the console, it works even without this if statement. But there's a problem: I want to use <code>multiprocessing.Pool()</code> inside a class in another script, which <code>init.py</code> imports with <code>import myotherscript</code> and then creates an instance of a class from. In this case <code>__name__</code> is no longer <code>'__main__'</code> but the name of this other script; even if I call a function in it from <code>init.py</code>, anything defined in this other script will have a different name.</p> <p>Is there a way to use the multiprocessing pool inside a module? Or does Python only allow pools to run in the init file being executed? If it's a hard limitation I'll need to define my library directly under <code>init.py</code>, but that would be wrong and ugly code structuring, so hopefully there exists a workaround.</p>
<python><python-3.x><multithreading>
2023-12-06 16:49:47
1
1,161
MirceaKitsune
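On the question above: a pool can live in any imported module; the `__main__` guard is only needed in the script you actually *execute*, because the spawn/forkserver start methods re-import that script in each worker. A runnable sketch (the `"fork"` context keeps the demo self-contained on POSIX; on Windows you would use `"spawn"` and call `run_pool` under a guard; `myotherscript` below is the hypothetical module name from the question):

```python
import multiprocessing as mp

def double(i):
    return i * 2

def run_pool(values):
    # This helper could live in any imported module (e.g. the question's
    # hypothetical myotherscript.py); the `if __name__ == '__main__':`
    # guard is only needed in the script you execute, because the
    # "spawn"/"forkserver" start methods re-import that script.
    ctx = mp.get_context("fork")  # POSIX-only; on Windows use "spawn" + a guard
    with ctx.Pool(processes=2) as pool:
        return pool.map(double, values)

doubled = run_pool([1, 2, 3])
```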
77,614,839
2,043,974
twisted integrating blocking confluent-kafka python library issues
<p>Let's consider this piece of code:</p> <pre><code>from twisted.web import server, resource from twisted.internet import reactor, threads from twisted.internet.task import LoopingCall from confluent_kafka import Consumer, KafkaError import json # Function to handle Kafka consumer def kafka_consumer(): def fetch_data(): def poll_kafka(): msg = consumer.poll(0.1) if msg is None: return if msg.error(): if msg.error().code() == KafkaError._PARTITION_EOF: return else: return else: print(&quot;message&quot;, msg, msg.value()) consumer.commit() # Manually commit the offset # Execute Kafka polling in a separate thread d1 = threads.deferToThread(poll_kafka) def start_loop(): lc = LoopingCall(fetch_data) lc.start(0.5) conf = { 'bootstrap.servers': 'kafka_internal-1:29093', 'group.id': 'your_consumer_group-2', 'auto.offset.reset': 'earliest', 'enable.auto.commit': False # Disable autocommit } consumer = Consumer(conf) consumer.subscribe(['jithin_test']) # &lt;-- is it a blocking call?? start_loop() # Web service handler class WebService(resource.Resource): isLeaf = True def render_GET(self, request): # You can customize the response according to your needs response = { 'message': 'Welcome to the Kafka consumer service!' } return json.dumps(response).encode('utf-8') if __name__ == '__main__': reactor.callWhenRunning(kafka_consumer) # Run the Twisted web service root = WebService() site = server.Site(root) reactor.listenTCP(8180, site) reactor.run() </code></pre> <p>Here I'm instantiating the <code>Consumer</code> object from <code>confluent-kafka</code> in the reactor thread, then leaving the subsequent <code>poll()</code> to <code>deferToThread()</code>. I have a few questions regarding this:</p> <ol> <li>Is <code>Consumer.subscribe()</code> a blocking call? Should I invoke this method via <code>deferToThread</code>?</li> <li>Will it corrupt the consumer when a consumer re-balancing happens if I'm firing another call to <code>poll_kafka</code> using <code>deferToThread</code> (as per my understanding, every thread we run via <code>deferToThread</code> is taken from the thread pool, and there is no guarantee that we will be using the same thread)?</li> <li>If so, is there a way to manage this? Maybe running the whole thing in a separate Python thread and passing the consumed value back to the Twisted application?</li> <li>Or is there a way I can re-use the consumer object without corrupting the consumer?</li> </ol> <p>NB: the code is written in <code>python2</code>; it's an integration with a legacy system, so porting the whole thing is not possible at the moment, and most of the other libraries only support Python 3+.</p>
<python><apache-kafka><twisted><confluent-kafka-python>
2023-12-06 16:37:34
1
1,712
Jithin
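On point 3 of the question above: one way to guarantee the (non-thread-safe) consumer object is only ever touched by a single thread is a dedicated worker thread that pushes messages onto a queue the reactor side drains (with Twisted you would hand results back via `reactor.callFromThread`). A stdlib-only sketch of the pattern, with `poll` and the `None` end-of-stream marker standing in for the real consumer:

```python
import queue
import threading

def consumer_loop(poll, out_q, stop):
    # Only this thread ever touches the consumer, so poll()/commit()
    # can never run concurrently.
    while not stop.is_set():
        msg = poll()
        if msg is None:        # stand-in for "no more data" in this sketch
            break
        out_q.put(msg)

def drain(messages):
    it = iter(messages)
    out_q, stop = queue.Queue(), threading.Event()
    worker = threading.Thread(target=consumer_loop,
                              args=(lambda: next(it, None), out_q, stop))
    worker.start()
    worker.join()
    return [out_q.get() for _ in range(out_q.qsize())]

received = drain(["m1", "m2", "m3"])
```

With a real consumer, `poll` would be `lambda: consumer.poll(0.1)` and the worker would run for the lifetime of the process.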
77,614,611
5,519,012
How to ignore dependency if already installed by Poetry install
<p>I want to install <code>package-1</code> using <code>poetry install</code>; <code>package-1</code> depends on <code>package-2</code>, so when executing <code>poetry install</code>, poetry tries to update <code>package-2</code>. But <code>package-2</code> has no published compiled release, so it has to be compiled, which can't be done as I am running in a very thin container where I don't even have GCC.</p> <p>I have the precompiled <code>package-2</code> and I install it before the <code>poetry lock</code>, so the dependency is met, but <code>poetry install</code> still tries to compile <code>package-2</code>.</p> <p>Is there a way to force poetry to ignore packages that are already installed globally in the system? Or at least not fail with an exception when trying to compile a package?</p>
<python><containers><python-poetry>
2023-12-06 16:00:19
0
365
Meir Tolpin
77,614,523
19,009,577
Why does try except not catch StopIteration when trying to split generator into generators
<p>This is an attempt to split a generator of a*n + b items into a generator yielding a chunks of n items each.</p> <p>I believe there already exist questions about this problem with minimal modifications <a href="https://stackoverflow.com/questions/24527006/split-a-generator-into-chunks-without-pre-walking-it">here</a>. However, it's perplexing that the code below raises an error despite the error supposedly having been caught with try/except.</p> <pre><code>def test(vid, size): while True: try: part = (next(vid) for _ in range(size)) yield part except StopIteration: break res = test((i for i in range(100)), 30) for i in res: for j in i: print(i, end=&quot; &quot;) print() </code></pre> <pre><code>--------------------------------------------------------------------------- StopIteration Traceback (most recent call last) Cell In[54], line 4, in (.0) 3 try: ----&gt; 4 part = (next(vid) for _ in range(size)) 5 yield part StopIteration: The above exception was the direct cause of the following exception: RuntimeError Traceback (most recent call last) Cell In[54], line 11 9 res = test((i for i in range(100)), 30) 10 for i in res: ---&gt; 11 for j in i: 12 print(i, end=&quot; &quot;) 13 print() RuntimeError: generator raised StopIteration </code></pre> <p>Also, judging from the print statements, the remainder is printed out instead of being discarded.</p> <p>Note: Using <code>RuntimeError</code> for the except also does not catch the error.</p>
<python><generator>
2023-12-06 15:49:13
1
397
TheRavenSpectre
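On the question above: the `try` only wraps the *creation* of the inner generator expression, which is lazy and never raises; the `StopIteration` fires later, when the caller's `for j in i` loop consumes it, and under PEP 479 a `StopIteration` escaping a generator is converted to `RuntimeError` there, far from the `except`. A chunking approach that avoids `next()` entirely, with an optional flag for discarding the partial last chunk:

```python
from itertools import islice

def chunks(iterable, size, drop_partial=False):
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))  # never raises StopIteration
        if not chunk or (drop_partial and len(chunk) < size):
            return
        yield chunk

full = list(chunks(range(7), 3))
exact = list(chunks(range(100), 30, drop_partial=True))
```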
77,614,456
4,486,184
Detect when global variables are used (Linter notification or other static analysis tools)
<p>In a rush, I often forget the <code>self.</code> before properties:</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass class Game: agent_speed = 1 @dataclass class Agent: x: int def move(self): # Here, I forgot self. x += Game.agent_speed a = Agent(1) a.move() print(a.x) </code></pre> <p>More generally, I would love to be warned that the method <code>move()</code> is accessing two global variables, <code>Game</code> and <code>x</code>, to detect such mistakes. I tried pylint, but it doesn't care about such things, and flake8 probably behaves the same. Is there any way I can get this piece of information about my code?</p> <p>EDIT:</p> <p>I'm only interested in the forgotten <code>self.</code> parts, but I know that it can't be done in Python, which is why I am instead asking this question about global variable usage. As @Codist said, using global variables is perfectly legal, so I should not call what I want linter warnings (I renamed the question to linter notifications). I'd like not all global variable usages to be listed, but only the ones inside class methods. If that's not possible, then I expect to be able to filter the results to keep only the ones relevant to methods. Finally, if an IDE provides this piece of information (the missing <code>self.</code>), it's also good enough for me.</p>
<python><static-analysis>
2023-12-06 15:39:52
0
1,604
Camusensei
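On the question above: not a linter, but the stdlib `symtable` module can report which names a function resolves in the global scope, which catches the forgotten `self.` on `Game` (note that `x` itself is *not* global here: the `+=` makes it a local, which is why the snippet fails with `UnboundLocalError` at runtime). A sketch, with the decorator dropped so the source string stays self-contained:

```python
import symtable

def global_reads_per_function(source):
    """Map each function/method name to the names it resolves globally."""
    result = {}
    stack = [symtable.symtable(source, "<string>", "exec")]
    while stack:
        table = stack.pop()
        if table.get_type() == "function":
            names = sorted(symbol.get_name() for symbol in table.get_symbols()
                           if symbol.is_global())
            if names:
                result[table.get_name()] = names
        stack.extend(table.get_children())
    return result

src = (
    "class Game:\n"
    "    agent_speed = 1\n"
    "\n"
    "class Agent:\n"
    "    x: int\n"
    "    def move(self):\n"
    "        x += Game.agent_speed\n"
)
report = global_reads_per_function(src)
```

Filtering to class methods only would mean recursing one level deeper and keeping just the function tables that are children of class tables.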
77,614,375
2,897,115
How to convert a JSON string to dataframe?
<p>I am trying to convert a JSON string to a dataframe, but I am not able to.</p> <pre><code>import findspark findspark.init() from pyspark.sql import SparkSession spark = SparkSession.builder \ .appName(&quot;MyApp&quot;) \ .getOrCreate() sc=spark.sparkContext some_json_string = &quot;&quot;&quot; [ {&quot;id&quot;:1, &quot;name&quot;:&quot;test1&quot;}, {&quot;id&quot;:2, &quot;name&quot;:&quot;test2&quot;} {&quot;id&quot;:3, &quot;name&quot;:&quot;test3&quot;} ] &quot;&quot;&quot; df = spark.read.option(&quot;multiLine&quot;,&quot;true&quot;).json(sc.parallelize(some_json_string)) df.printSchema() df.show() </code></pre> <p>I am getting this result:</p> <pre><code>root |-- _corrupt_record: string (nullable = true) +---------------+ |_corrupt_record| +---------------+ | [| | {| | &quot;| | i| | d| | &quot;| | :| | 1| | ,| | &quot;| | n| | a| | m| | e| | &quot;| | :| | &quot;| | t| | e| | s| +---------------+ only showing top 20 rows </code></pre> <p>How can I convert this into a PySpark dataframe?</p> <p>I also tried:</p> <p><a href="https://i.sstatic.net/Jvxxr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jvxxr.png" alt="enter image description here" /></a></p>
<python><pyspark>
2023-12-06 15:29:09
1
12,066
Santhosh
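On the question above: the one-character rows happen because `sc.parallelize(some_json_string)` treats the string as a sequence, one record per *character*; wrapping it in a one-element list (`sc.parallelize([some_json_string])`) is the usual fix. Note the sample JSON is also missing commas between the second and third objects. A stdlib-only illustration of the character splitting (no Spark needed to see the effect):

```python
import json

some_json_string = '[{"id": 1, "name": "test1"}, {"id": 2, "name": "test2"}]'

as_records_wrong = list(some_json_string)   # what parallelize(str) iterates: chars
as_records_right = [some_json_string]       # one record holding the full document

parsed = json.loads(as_records_right[0])
```

With Spark, the corresponding call would be `spark.read.json(sc.parallelize([some_json_string]))` once the JSON itself is valid.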
77,614,372
16,383,578
How to decrypt substitution cipher with keys with multiple letters?
<p>I have found the following encoded message <a href="https://scp-wiki.wikidot.com/pitch-haven-hub" rel="nofollow noreferrer">here</a>; it is obviously in hexadecimal and each two-digit sequence corresponds to a byte. Decoding the byte sequence gives the following message:</p> <pre><code>&quot;UAYUYAF FOY YIร†ZYUA'W KYWLAJL. \r\nYUAร†WY YIร†L FI' SUAYY YIร†LZLALK. \r\nOYUAร†IRK FOY LYIO ร†JY. \r\nร†LLLAOLAIRร†FY LYYK, VGUAWGY IOร†LF. \r\nUAYPHYIR LAL FOY KYWFUAGAFLAI'L I'S OLAW LAYIร†JY. \r\nPGFAOYUA FOY YIร†ZYUA. KYPHI'GUA OLAW Aร†UAAร†WW. \r\nUAYAIRร†LAYI IOOร†F IOร†W Fร†ZYL.&quot; </code></pre> <p>In multiple lines:</p> <pre><code>UAYUYAF FOY YIร†ZYUA'W KYWLAJL. YUAร†WY YIร†L FI' SUAYY YIร†LZLALK. OYUAร†IRK FOY LYIO ร†JY. ร†LLLAOLAIRร†FY LYYK, VGUAWGY IOร†LF. UAYPHYIR LAL FOY KYWFUAGAFLAI'L I'S OLAW LAYIร†JY. PGFAOYUA FOY YIร†ZYUA. KYPHI'GUA OLAW Aร†UAAร†WW. UAYAIRร†LAYI IOOร†F IOร†W Fร†ZYL. </code></pre> <p>I was able to find the substitution cipher used by the message:</p> <pre><code>cipher = { &quot;A&quot;: &quot;C&quot;, &quot;DE'&quot;: &quot;X&quot;, &quot;F&quot;: &quot;T&quot;, &quot;G&quot;: &quot;U&quot;, &quot;I'&quot;: &quot;O&quot;, &quot;IL&quot;: &quot;Y&quot;, &quot;IO&quot;: &quot;W&quot;, &quot;IR&quot;: &quot;L&quot;, &quot;J&quot;: &quot;G&quot;, &quot;K&quot;: &quot;D&quot;, &quot;KH&quot;: &quot;Q&quot;, &quot;L&quot;: &quot;N&quot;, &quot;LA&quot;: &quot;I&quot;, &quot;O&quot;: &quot;H&quot;, &quot;P&quot;: &quot;B&quot;, &quot;PH&quot;: &quot;V&quot;, &quot;S&quot;: &quot;F&quot;, &quot;U&quot;: &quot;J&quot;, &quot;UA&quot;: &quot;R&quot;, &quot;V&quot;: &quot;P&quot;, &quot;W&quot;: &quot;S&quot;, &quot;XU&quot;: &quot;Z&quot;, &quot;Y&quot;: &quot;E&quot;, &quot;YI&quot;: &quot;M&quot;, &quot;Z&quot;: &quot;K&quot;, &quot;ร†&quot;: &quot;A&quot;, } </code></pre> <p>As you can see, the cipher contains keys with two letters, some of which have parts that are also keys, and I have confirmed that if a digram is in the cipher, it may or may not have been used; in other words, its parts may have been used individually, each replacing a letter, or the digram as a whole replaces one letter. It is deliberately ambiguous:</p> <pre><code>import re regex = &quot;(&quot; + &quot;|&quot;.join(sorted(cipher, key=lambda x: -len(x))) + &quot;)&quot; print(&quot;&quot;.join(s if (s := cipher.get(i)) else i for i in re.split(regex, message))) </code></pre> <pre><code>REJECT THE MAKER'S DESIGN. ERASE MAN TO FREE MANKIND. HERALD THE NMH AGE. ANNIHILATE NEED, PURSUE WANT. REVMR IN THE DESTRUCTION OF HIS IMAGE. BUTCHER THE MAKER. DEVOUR HIS CARCASS. RECLAIM WHAT WAS TAKEN. </code></pre> <p>Overall the translation seems legit, but there are two errors: <code>NMH</code> should be <code>NEW</code> and <code>REVMR</code> should be <code>REVEL</code>.</p> <p>I wonder how to solve the general case: given a string encoded using a substitution cipher that contains ngrams, where some parts of the ngrams are also used to substitute plain text, find all possible ways to break the string into sequences of the keys, so that each character in the original encoded message is left as-is if it isn't contained in any cipher key, and continuous chunks of coding characters are split into parts in all possible ways such that every part is a key in the cipher and no coding character is left behind.</p> <p>I can't describe it well in English; basically, I want all possible interpretations of the encoded message. The input is the string and the cipher dictionary; the output is a list of lists. Each sublist contains parts from the secret message; the parts are strings and satisfy the following criterion: every part either has no character in common with any key in the cipher dictionary, or the part is itself a key in the cipher dictionary.</p> <p>It is related to my <a href="https://stackoverflow.com/a/76737259/16383578">previous answer</a>:</p> <pre><code>from collections import Counter def substring_tree(s: str) -&gt; dict: d = {} def worker(s: str, dic: dict): for i in range(1, len(s) + 1): dic[s[:i]] = {} worker(s[i:], dic[s[:i]]) worker(s, d) return d def traverse_tree(tree, wordbank, keys, substrings=list()): result = [] for k, v in tree.items(): if k in wordbank and keys[k] &lt; wordbank[k]: keys = keys.copy() keys[k] += 1 if v: result.extend(traverse_tree(v, wordbank, keys, substrings + [k])) else: result.append(substrings + [k]) return result def all_construct2(target, wordbank): wordbank = Counter(wordbank) return traverse_tree(substring_tree(target), wordbank, Counter()) </code></pre> <p>Theoretically my code can work if given an arbitrarily large amount of time and RAM, but when I tested it the interpreter quickly ate multiple gibibytes of RAM and I was forced to terminate it.</p> <p>How to solve the problem?</p>
<python><cryptography>
2023-12-06 15:28:48
0
3,930
ฮžฮญฮฝฮท ฮ“ฮฎฮนฮฝฮฟฯ‚
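On the question above: the substring-tree approach materializes every prefix of the message, which is what eats the RAM. Memoizing the recursion on the *position* in the message shares all sub-results; the number of complete segmentations can still be exponential, but the memo itself stays linear in message length. A sketch of that formulation, with pass-through for characters the cipher never uses:

```python
from functools import lru_cache

def segmentations(message, cipher_keys):
    keys = tuple(cipher_keys)
    coding = set("".join(keys))      # every character some key uses

    @lru_cache(maxsize=None)
    def solve(i):
        if i == len(message):
            return [[]]
        parts = []
        if message[i] not in coding:
            # characters the cipher never uses pass through unchanged
            parts += [[message[i]] + rest for rest in solve(i + 1)]
        for key in keys:
            if message.startswith(key, i):
                parts += [[key] + rest for rest in solve(i + len(key))]
        return parts

    return solve(0)

ways = segmentations("LA", ["A", "LA", "L"])    # ambiguous digram: 2 readings
mixed = segmentations("A B", ["A", "B"])        # the space passes through
```

For the full cipher, `segmentations(line, cipher.keys())` per line keeps the ambiguity local to each word instead of the whole message.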
77,614,341
298,209
Access partial stdout and stderr when create_subprocess_shell() times out
<p>I'm trying to use asyncio's <a href="https://docs.python.org/3/library/asyncio-subprocess.html" rel="nofollow noreferrer">subprocess</a> (e.g. <code>create_subprocess_shell</code>) instead of the ordinary subprocess module, and I can't figure out how to access stdout and stderr when the task times out.</p> <p>In the non-asyncio subprocess module, the <code>run()</code> method and <code>Popen.communicate()</code> have a timeout argument, and when the exception is raised you can still access the <code>stdout</code> or <code>stderr</code> attributes of the process and see the partial outputs. This is not the case for asyncio's <code>communicate()</code> method, and the <a href="https://docs.python.org/3/library/asyncio-subprocess.html#interacting-with-subprocesses" rel="nofollow noreferrer">docs suggest</a> using <code>wait_for</code> or <code>wait</code> for a timeout. I can't find a way to access stdout/err with either <code>wait_for</code>, which cancels the task, or <code>wait</code>, which lets the task run but raises an exception. I'm aware of a <a href="https://stackoverflow.com/questions/42639984/python3-asyncio-wait-for-communicate-with-timeout-how-to-get-partial-resul">similar question</a> from 6 years ago, but the answers didn't work for me and I was wondering if an easier solution is now available.</p> <p>The setup is something like this:</p> <pre class="lang-py prettyprint-override"><code>proc = await asyncio.create_subprocess_shell( cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE ) task = asyncio.create_task(proc.communicate()) try: await asyncio.wait_for(task, timeout=timeout_secs) except asyncio.TimeoutError: # What do I do here? </code></pre> <p>or:</p> <pre class="lang-py prettyprint-override"><code>task = asyncio.create_task(proc.communicate()) done, expired = await asyncio.wait([task], timeout=timeout_secs) if done: stdout, stderr = done.pop().result() return stdout, stderr else: # What do I do here? </code></pre>
<python><python-asyncio>
2023-12-06 15:21:52
0
5,580
Milad
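On the question above: one workaround is to not let the cancelled task own the data. Read the stream yourself into a buffer that lives *outside* the coroutine that `wait_for` cancels; whatever was read before the timeout then survives the cancellation. A POSIX-only sketch (uses `printf`/`sleep` in the demo commands; stderr handling is omitted for brevity but works the same way):

```python
import asyncio

async def _drain(stream, buf):
    # Append to a buffer owned by the caller so the data survives
    # cancellation of this coroutine.
    while True:
        chunk = await stream.read(4096)
        if not chunk:          # EOF: the process closed its stdout
            return
        buf.extend(chunk)

async def run_with_timeout(cmd, timeout_secs):
    proc = await asyncio.create_subprocess_shell(
        cmd,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    out = bytearray()
    try:
        await asyncio.wait_for(_drain(proc.stdout, out), timeout_secs)
        await proc.wait()
    except asyncio.TimeoutError:
        proc.kill()
        await proc.wait()
    return bytes(out)

# demo: the second printf never runs before the timeout
partial = asyncio.run(run_with_timeout("printf started; sleep 3; printf done", 0.8))
complete = asyncio.run(run_with_timeout("printf quick", 5))
```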
77,614,255
6,645,564
How do I generate a dynamic list of discrete colors (in rgb format) from Plotly?
<p>I am trying to make some plots using plotly and matplotlib, and the data that I am using usually is sorted into around 4 to 8 different groups, each of which I set to one discrete color within my plot. I found a good resource here (<a href="https://plotly.com/python/discrete-color/" rel="nofollow noreferrer">https://plotly.com/python/discrete-color/</a>) that allows me to get one color per group. However, recently I have had to plot up to thirty different groups, which is a problem, because if you look at the resource I just linked to, the maximum number of discrete colors per series (e.g. plotly.colors.qualitative.Light24, plotly.colors.qualitative.Antique, etc.) is only 24.</p> <p>In order to plot my data, I really would like to have the ability to get the correct number of colors for the number of groups in my data (that are also visually distinct from each other). But I also have the additional issue that the matplotlib plotting tool that I am using only takes colors in rgb format (e.g. rgb(158,185,243)) and not hexadecimal format (e.g. #00B5F7).</p> <p>I have solved this problem so far by just getting the discrete color series that are in rgb format (e.g. plotly.colors.qualitative.Dark1, plotly.colors.qualitative.Set1) as multiple lists and then adding the lists together until I have enough discrete colors, but sometimes this results in some colors being too similar to each other. It also means that it is not dynamic to input with different numbers of groups.</p> <p>Does anybody have a better strategy for getting the correct number of discrete colors from plotly?</p>
<python><matplotlib><plotly>
2023-12-06 15:09:47
1
924
Bob McBobson
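On the question above: if staying inside plotly, `plotly.colors.sample_colorscale` (available in recent plotly versions, if I'm not mistaken) can sample any number of `rgb(...)` strings from a continuous scale, though neighbors on one scale may not be visually distinct. A dependency-free alternative is to space hues evenly around the color wheel with the stdlib, which scales to any group count and already emits the `rgb(r,g,b)` format the question needs:

```python
import colorsys

def distinct_rgb_colors(n, saturation=0.75, value=0.9):
    """Return n evenly spaced hues as 'rgb(r,g,b)' strings."""
    colors = []
    for i in range(n):
        r, g, b = colorsys.hsv_to_rgb(i / n, saturation, value)
        colors.append(f"rgb({round(r * 255)},{round(g * 255)},{round(b * 255)})")
    return colors

palette = distinct_rgb_colors(30)   # one color per group, e.g. 30 groups
```

Varying `saturation`/`value` on alternate indices would spread the colors further apart perceptually if adjacent hues still look too similar.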
77,614,223
8,176,763
Sqlalchemy dynamic where statements
<p>From the sqlalchemy documentation we have the following example:</p> <pre><code>s = ( select((users.c.fullname + &quot;, &quot; + addresses.c.email_address).label(&quot;title&quot;)) .where(users.c.id == addresses.c.user_id) .where(users.c.name.between(&quot;m&quot;, &quot;z&quot;)) .where( or_( addresses.c.email_address.like(&quot;%@aol.com&quot;), addresses.c.email_address.like(&quot;%@msn.com&quot;), ) )) </code></pre> <p>The thing here is that the above statement has a predefined set of <code>where</code> clauses. I would like to have a more dynamic query that depends on the columns and values the client supplies. For example, in plain SQL:</p> <pre><code>Select * from users where column1 = value1 Select * from users where column1 = value1 and column2 = value2 and column3 = value3 </code></pre> <p>So the two queries above can be formed depending on the input I get from the client. The input will be a dict like:</p> <pre><code>d_1 = {column1 : value1} d_2 = {column1: value1, column2 : value2, column3: value3} </code></pre> <p>So depending on the dictionary I want to increase or decrease the number of WHERE clauses.</p>
<python><sqlalchemy>
2023-12-06 15:06:45
1
2,459
moth
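On the question above: since each `.where()` returns a new select, the SQLAlchemy form is simply a loop, `for col, val in d.items(): stmt = stmt.where(users.c[col] == val)`. The same idea with plain SQL and stdlib sqlite3, conjoining one parameterized condition per dict entry (the table and column names must come from a trusted whitelist; only the values are bound):

```python
import sqlite3

def build_select(table, filters):
    # `table` and the keys of `filters` must be trusted/whitelisted;
    # only the values travel as bound parameters.
    clauses = " AND ".join(f"{column} = ?" for column in filters)
    sql = f"SELECT * FROM {table}"
    if clauses:
        sql += " WHERE " + clauses
    return sql, list(filters.values())

# demo against an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER, city TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [("ann", 30, "oslo"), ("bob", 30, "rome"), ("cat", 40, "oslo")])
sql, params = build_select("users", {"age": 30, "city": "oslo"})
rows = conn.execute(sql, params).fetchall()
```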
77,614,137
5,447,434
How to perform time series analysis on cpu utilization data using python
<p>I am new to time series analysis. I have the following dummy data:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Timestamp</th> <th>Cpu_utilization_Percentage</th> <th>Server_name</th> </tr> </thead> <tbody> <tr> <td>2022-01-11 00:05:00</td> <td>80</td> <td>abc</td> </tr> <tr> <td>2022-01-11 00:10:00</td> <td>30</td> <td>xyz</td> </tr> <tr> <td>2022-01-11 00:15:00</td> <td>5</td> <td>def</td> </tr> <tr> <td>2022-01-11 00:20:00</td> <td>3</td> <td>yue</td> </tr> </tbody> </table> </div> <p>The dataset is about CPU utilization percentage at each time interval of the day for a given server name.</p> <p>Table column details:</p> <p>Timestamp: date with time</p> <p>Cpu-utilization: values in percentage</p> <p>Server_name: name of the server</p> <p>I am able to predict the CPU utilization for a given time, but how do I predict the time interval for a given CPU utilization?</p> <p>The objective is:</p> <p>How do I predict the time interval (with server name) for a given CPU utilization, so that I can know at what time intervals the CPU utilization is low, meaning the server is in an idle state?</p> <p>For example: I want to know at what time intervals my CPU utilization is less than 5 percent.</p>
<python><time-series><xgboost><arima><multivariate-time-series>
2023-12-06 14:52:42
0
323
Niranjanp
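On the question above: before reaching for ARIMA/XGBoost, the "when is utilization low" question can be answered descriptively by profiling each server's history by time of day and flagging the buckets whose average falls below a threshold. A stdlib sketch using the question's row layout (the sample rows here are invented for illustration):

```python
from collections import defaultdict

# Row layout from the question: (timestamp, cpu_utilization_pct, server_name)
rows = [
    ("2022-01-11 00:05:00", 80, "abc"),
    ("2022-01-12 00:05:00", 90, "abc"),
    ("2022-01-11 00:15:00", 4, "def"),
    ("2022-01-12 00:15:00", 2, "def"),
]

def low_usage_slots(data, threshold=5.0):
    """Average utilization per (server, time-of-day) bucket and return
    the buckets whose mean falls below `threshold`."""
    buckets = defaultdict(list)
    for timestamp, pct, server in data:
        buckets[(server, timestamp[11:16])].append(pct)  # 'HH:MM' slice
    return sorted(slot for slot, values in buckets.items()
                  if sum(values) / len(values) < threshold)

idle = low_usage_slots(rows)
```

A forecasting model per (server, slot) could then refine this profile, but the bucketed average already answers "which intervals are usually under 5 percent".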
77,614,092
1,234,434
How to extract all rows from a pandas dataframe given a series object
<p>I have the following data as a dataframe:</p> <pre><code>Date Category Sales Paid 8/12/2020 Table 1 table Yes 8/12/2020 Chair 3chairs Yes 13/1/2020 Cushion 8 cushions Yes 24/5/2020 Table 3Tables Yes 31/10/2020 Chair 12 Chairs No 11/7/2020 Mats 12Mats Yes 11/7/2020 Mats 4Mats Yes </code></pre> <p>When I run my query I have a Series object returned that shows the Date which had the highest count of sales for the entire data.</p> <pre><code>ddate=df['Sales'].str.extract('^(\d+)', expand=False).astype(int).groupby(df['Date']).agg(sums='count').idxmax() </code></pre> <p>I may receive a return object that looks like this:</p> <pre><code>['8/12/2020'] </code></pre> <p>In order to retrieve all rows in the original dataframe <code>df</code> I have been informed to run:</p> <pre><code>df[df['Date'].eq(ddate)] </code></pre> <p>However, this returns an empty dataframe. But if I choose a date as a string I do get back all the results</p> <pre><code>df[df['Date'].eq('8/12/2020')] </code></pre> <p>Why doesn't the series input work? How can I craft the query to make it work?</p>
<python><python-3.x><pandas><dataframe>
2023-12-06 14:47:23
1
1,033
Dan
77,614,084
6,060,982
Are type vars and type unions incompatible in python?
<p>Consider the following snippet:</p> <pre class="lang-py prettyprint-override"><code>from typing import TypeVar import numpy as np T = TypeVar(&quot;T&quot;, float, np.ndarray) def f(x: T) -&gt; T: &quot;&quot;&quot; expects a float or an array and returns an output of the same type &quot;&quot;&quot; return x * 2 f(1) # ok f(np.array([1, 2, 3])) # ok def g(x: float | np.ndarray) -&gt; float | np.ndarray: &quot;&quot;&quot; expects either a float or an array &quot;&quot;&quot; return f(x) / 2 # should be fine, but pyright complains about type </code></pre> <p>I have created a TypeVar to hint that <code>f</code> expects as input a float or an array and will return an output of the same type.</p> <p>The type hint in <code>g</code> is looser. It expects either a float or an array and will return a float or an array, without constraining the type of the output to the type of the input.</p> <p>Intuitively, the setup makes sense. Inside the definition of the <code>g</code> function we know that we expect <code>x</code> to be either a float or an array, i.e. what <code>f</code> expects as input. However, when I pass <code>x</code> to <code>f</code> at the last line, Pyright complains:</p> <blockquote> <p>Argument of type &quot;float | ndarray[Unknown, Unknown]&quot; cannot be assigned to parameter &quot;x&quot; of type &quot;T@f&quot; in function &quot;f&quot;</p> <p>Type &quot;float | ndarray[Unknown, Unknown]&quot; is incompatible with constrained type variable &quot;T&quot;</p> </blockquote> <p>This is surprising and frustrating, because it means that one cannot use my function <code>f</code> without being very cautious about the way they write their type hints.</p> <p>Any thoughts on how to solve this?</p> <p><strong>Edit</strong>: After the comment of Brian61354270, I have recreated essentially the same example, only with no dependency on numpy. Here, instead of a numpy array, we use <code>Fraction</code>:</p> <pre class="lang-py prettyprint-override"><code>from fractions import Fraction from typing import TypeVar T = TypeVar(&quot;T&quot;, float, Fraction) def f(x: T) -&gt; T: &quot;&quot;&quot; expects a float or a Fraction and returns an output of the same type &quot;&quot;&quot; return x * 2 f(1.0) # ok f(Fraction(1, 2)) # ok def g(x: float | Fraction) -&gt; float | Fraction: &quot;&quot;&quot; expects either a float or a Fraction &quot;&quot;&quot; return f(x) / 2 # should be fine, but pyright complains about type </code></pre> <p>Again, Pyright reports essentially the same issue:</p> <blockquote> <p>Argument of type &quot;float | Fraction&quot; cannot be assigned to parameter &quot;x&quot; of type &quot;T@f&quot; in function &quot;f&quot;</p> <p>Type &quot;float | Fraction&quot; is incompatible with constrained type variable &quot;T&quot;</p> </blockquote> <p>Interestingly, if instead of <code>Fraction</code> we use <code>int</code>, the type check passes:</p> <pre class="lang-py prettyprint-override"><code>from typing import TypeVar T = TypeVar(&quot;T&quot;, float, int) def f(x: T) -&gt; T: &quot;&quot;&quot; expects a float or an integer and returns an output of the same type &quot;&quot;&quot; return x * 2 f(1.0) # ok f(1) # ok def g(x: float | int) -&gt; float | int: &quot;&quot;&quot; expects either a float or an integer &quot;&quot;&quot; return f(x) / 2 # now its ok </code></pre>
<python><python-typing><pyright><type-variables>
2023-12-06 14:46:46
2
700
zap
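On the question above: a *constrained* TypeVar must be solved to exactly one of its constraints at each call site, and inside `g` the argument has the union type, which is neither constraint; that is, as I understand pyright's rules, why the call is rejected. One fix that works the same for the numpy and Fraction variants is to make `g` generic over the same TypeVar, so `T` is solved per call of `g` instead of per call of `f`. Sketched here with the float/int pair so it runs without numpy (the runtime behavior is unchanged; only the annotations differ):

```python
from typing import TypeVar

T = TypeVar("T", float, int)  # stands in for (float, np.ndarray) in the original

def f(x: T) -> T:
    return x * 2

def g(x: T) -> float:
    # T is now solved per call site of g, so x has a single concrete
    # constraint when it is forwarded to f.
    return f(x) / 2

halved_float = g(3.0)   # T = float
halved_int = g(3)       # T = int
```

The other common route is a pair of `@overload` declarations for `g`, one per concrete type, if `g` genuinely should not be generic.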
77,613,977
8,392,919
Python Byte-encoding functions do not work as expected
<p>I'm trying to convert a hexadecimal number, like the stack address <code>0x7ffd6fa90940</code>,<br /> into its corresponding Byte representation <code>b'\x40\x09\xa9\x6f\xfd\x7f\x00\x00'</code>.<br /> Just like how it is represented in gdb:</p> <pre><code>pwndbg&gt; hexdump $rsp \32 +#### 0x7fffffffdc## 0 1 2 3 4 5 6 7 8 9 A B C D E F โ”‚ โ”‚ +0000 0x7fffffffdc30 e0 af 4b 00 15 00 00 00 [40 dc ff ff ff 7f 00 00] โ”‚..K.....โ”‚........โ”‚ +0010 0x7fffffffdc40 25 39 24 73 00 00 00 00 [50 dc ff ff ff 7f 00 00] โ”‚%9$s....โ”‚P.......โ”‚ </code></pre> <p>I found three functions, but they do not convert the hex number as expected:</p> <pre class="lang-py prettyprint-override"><code>import pwnlib.util.packing import binascii addr = '0000' + '0x7ffd6fa90940'[2:] addr = binascii.unhexlify(addr) print(&quot;[DEBUG] addr: {}&quot;.format(addr)) # Prints big endian: b'\x00\x00\x7f\xfdo\xa9\t@' # != b'\x7f\xfd\x6f\xa9\x09\x40' addr = 0x7ffd6fa90940 addr = pwnlib.util.packing.p64(addr, endian='little') print(&quot;[DEBUG] addr: {}&quot;.format(addr)) # Prints lit endian: b'@\t\xa9o\xfd\x7f\x00\x00' # != b'\x7f\xfd\x6f\xa9\x09\x40' addr = 0x7ffd6fa90940 addr = pwnlib.util.packing.pack(addr, word_size=64, endianness='little') print(&quot;[DEBUG] addr: {}&quot;.format(addr)) # Prints lit endian: b'@\t\xa9o\xfd\x7f\x00\x00' # != b'\x7f\xfd\x6f\xa9\x09\x40' # Custom implementation: addr = '0000' + '0x7ffd6fa90940'[2:] addr = ''.join(reversed(['\\x'+addr[i:i+2] for i in range(0, len(addr), 2)])) print(&quot;[DEBUG] addr: {}&quot;.format(addr)) # Prints lit endian notation as a string: \x40\x09\xa9\x6f\xfd\x7f\x00\x00 # But how to convert to actual Bytes?: b'\x40\x09\xa9\x6f\xfd\x7f\x00\x00' # addr = addr.encode('utf-8').replace(b'\\\\',b'\\') print(&quot;[DEBUG] addr: {}&quot;.format(addr)) # Results in: b'\\x40\\x09\\xa9\\x6f\\xfd\\x7f\\x00\\x00' </code></pre> <p>Why is that and how can it be converted as expected?<br /> Thanks in advance for any hints, links, and answers!</p>
<python><byte><endianness><pwntools><binascii>
2023-12-06 14:35:06
1
547
PatrickSteiner
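On the question above: the packing functions were already producing the right bytes; `repr()` just renders printable bytes as ASCII, so `b'@\t\xa9o\xfd\x7f\x00\x00'` *is* `b'\x40\x09\xa9\x6f\xfd\x7f\x00\x00'` ('@' is 0x40 and '\t' is 0x09). The big-endian result was likewise correct. A concise stdlib alternative to `pwnlib.util.packing` / `binascii` that makes the equivalence explicit:

```python
addr = 0x7FFD6FA90940

little = addr.to_bytes(8, byteorder="little")
big = addr.to_bytes(8, byteorder="big")

# repr() shows printable bytes as ASCII characters, so the two literals
# below are the same object-for-object byte sequence.
same = b"@\t\xa9o\xfd\x7f\x00\x00" == b"\x40\x09\xa9\x6f\xfd\x7f\x00\x00"
```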
77,613,936
11,159,734
How to create a vector search index in Azure AI search using v11.4.0
<p>I want to create an Azure AI Search index with a vector field using the currently <a href="https://pypi.org/project/azure-search-documents/" rel="nofollow noreferrer">latest version</a> of azure-search-documents v11.4.0.</p> <p>Here is my code:</p> <pre><code>from azure.core.credentials import AzureKeyCredential from azure.search.documents import SearchClient from azure.search.documents.indexes import SearchIndexClient from langchain.embeddings import AzureOpenAIEmbeddings from langchain.text_splitter import TokenTextSplitter from azure.search.documents.indexes.models import ( SearchIndex, SearchField, SearchFieldDataType, SimpleField, SearchableField, SearchIndex, SemanticConfiguration, SemanticField, SearchField, SemanticSearch, VectorSearch, VectorSearchAlgorithmConfiguration, HnswAlgorithmConfiguration ) index_name = AZURE_COGNITIVE_SEARCH_INDEX_NAME key = AZURE_COGNITIVE_SEARCH_KEY credential = AzureKeyCredential(key) def create_index(): # Define the index fields client = SearchIndexClient(service_endpoint, credential) fields = [ SimpleField(name=&quot;chunk_id&quot;, type=SearchFieldDataType.String, key=True, sortable=True, filterable=True, facetable=True), SimpleField(name=&quot;file_name&quot;, type=SearchFieldDataType.String), SimpleField(name=&quot;url_name&quot;, type=SearchFieldDataType.String), SimpleField(name=&quot;origin&quot;, type=SearchFieldDataType.String, sortable=True, filterable=True, facetable=True), SearchableField(name=&quot;content&quot;, type=SearchFieldDataType.String), SearchField(name=&quot;content_vector&quot;, type=SearchFieldDataType.Collection(SearchFieldDataType.Single), searchable=True, vector_search_dimensions=1536, vector_search_configuration=&quot;my-vector-config&quot;), ] vector_search=VectorSearch( algorithms=[ HnswAlgorithmConfiguration( name=&quot;my-vector-config&quot;, kind=&quot;hnsw&quot;, parameters={ &quot;m&quot;: 4, &quot;efConstruction&quot;:400, &quot;efSearch&quot;:500, &quot;metric&quot;:&quot;cosine&quot; 
} ) ] ) # Create the search index with the semantic settings index = SearchIndex(name=index_name, fields=fields, vector_search=vector_search) return client, index search_client, search_index = create_index() result = search_client.create_or_update_index(search_index) print(f&quot;{result.name} created&quot;) </code></pre> <p>This gives me the following error:</p> <pre><code>Message: The request is invalid. Details: definition : The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set. Exception Details: (InvalidField) The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set. Parameters: definition Code: InvalidField Message: The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set. Parameters: definition </code></pre> <p>I tried to copy exact solution provided here: <a href="https://learn.microsoft.com/en-us/answers/questions/1395031/how-to-configure-vectorsearchconfiguration-for-a-s" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/answers/questions/1395031/how-to-configure-vectorsearchconfiguration-for-a-s</a> which gives me same error as above.</p> <p>I also tried this sample which is part of the <strong>official documentation</strong> (linked on the pypi page): <a href="https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/search/azure-search-documents/samples/sample_vector_search.py" rel="nofollow noreferrer">https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/search/azure-search-documents/samples/sample_vector_search.py</a> But here I get this error:</p> <pre><code>Code: InvalidRequestParameter Message: The request is invalid. Details: definition : The field 'contentVector' uses a vector search algorithm configuration 'my-algorithms-config' which is not defined. Exception Details: (UnknownVectorAlgorithmConfiguration) The field 'contentVector' uses a vector search algorithm configuration 'my-algorithms-config' which is not defined. 
Parameters: definition Code: UnknownVectorAlgorithmConfiguration Message: The field 'contentVector' uses a vector search algorithm configuration 'my-algorithms-config' which is not defined. Parameters: definition </code></pre> <p>I also found this other example notebook from Microsoft about AI Search: <a href="https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/azure-search-custom-vectorization-sample.ipynb" rel="nofollow noreferrer">https://github.com/Azure/azure-search-vector-samples/blob/main/demo-python/code/azure-search-custom-vectorization-sample.ipynb</a> This code also gave me the exact same error as my initial code.</p> <p>I have been trying to get this working for two days now and I'm about to give up. There are several different documentations/examples in various different places and every code sample looks different. Apparently Microsoft changes the function names with almost every package update, so most of the examples are probably outdated by now. I have no idea where to find the &quot;latest&quot; documentation that actually provides working code, as all the examples I tested did not work for me. This has to be the worst Python documentation I have ever seen in my life. Even the Langchain documentation is great compared to this...</p> <p><strong>EDIT:</strong> I just checked the source code of the &quot;SearchField&quot;.
It takes the following arguments:</p> <pre><code>def __init__(self, **kwargs): super(SearchField, self).__init__(**kwargs) self.name = kwargs[&quot;name&quot;] self.type = kwargs[&quot;type&quot;] self.key = kwargs.get(&quot;key&quot;, None) self.hidden = kwargs.get(&quot;hidden&quot;, None) self.searchable = kwargs.get(&quot;searchable&quot;, None) self.filterable = kwargs.get(&quot;filterable&quot;, None) self.sortable = kwargs.get(&quot;sortable&quot;, None) self.facetable = kwargs.get(&quot;facetable&quot;, None) self.analyzer_name = kwargs.get(&quot;analyzer_name&quot;, None) self.search_analyzer_name = kwargs.get(&quot;search_analyzer_name&quot;, None) self.index_analyzer_name = kwargs.get(&quot;index_analyzer_name&quot;, None) self.synonym_map_names = kwargs.get(&quot;synonym_map_names&quot;, None) self.fields = kwargs.get(&quot;fields&quot;, None) self.vector_search_dimensions = kwargs.get(&quot;vector_search_dimensions&quot;, None) self.vector_search_profile_name = kwargs.get(&quot;vector_search_profile_name&quot;, None) </code></pre> <p>You can see that there is no &quot;<em>vector_search_configuration</em>&quot; nor &quot;<em>vectorSearchConfiguration</em>&quot; argument. I think they renamed it to &quot;<em>vector_search_profile_name</em>&quot; for some reason. Therefore I assume that the sample in the official documentation is the correct one and the other 2 are indeed outdated. But even so I'm still getting an error due to the &quot;my-algorithms-config&quot; not being defined.</p>
<python><azure><azure-cognitive-services>
2023-12-06 14:28:25
2
1,025
Daniel
77,613,924
521,347
How to access PCollection from DB I/O connector in next step in Pipeline
<p>I have written a small pipeline using Apache Beam. It uses beam-postgres as an input connector to create a PCollection from a DB table. The code looks like the following:</p> <pre><code>import apache_beam as beam from apache_beam.options.pipeline_options import PipelineOptions from psycopg.rows import dict_row from beam_postgres.io import ReadAllFromPostgres def __trigger_bill_fetch_job(self): print(&quot;Triggering bill-fetch job&quot;) pipeline = beam.Pipeline() read_from_db = ReadAllFromPostgres( &quot;host={host} dbname={dbName} user={user} password={password}&quot;, &quot;SELECT * FROM comparison_bill_data_requests WHERE status='PENDING' AND bill_event_received=true and bill_detail_event_received=true&quot;, dict_row, ) result = pipeline | &quot;ReadPendingRecordsFromDB&quot; &gt;&gt; read_from_db | &quot;Print result&quot; &gt;&gt; beam.Map(print) pipeline.run().wait_until_finish() print(&quot;read_from_db done&quot;, read_from_db) </code></pre> <p>The output of <code>beam.Map(print)</code> is</p> <pre><code>{'id': '1', 'bill_id': 'bill-1', 'account_id': None, 'bill_event_received': True, 'bill_detail_event_received': True, 'status': 'PENDING', 'commodity_type': None, 'bill_start_date': datetime.datetime(2023, 12, 6, 11, 52, 28, 78945), 'bill_end_date': datetime.datetime(2023, 12, 6, 11, 52, 28, 78945), 'tenant_id': 'tenant-1', 'created_at': datetime.datetime(2023, 12, 6, 11, 52, 28, 78945), 'updated_at': datetime.datetime(2023, 12, 6, 11, 53, 21, 300224)} </code></pre> <p>How can I parse this into an object? I want to pass this to the next step in the pipeline and access its properties. How can I do it?</p>
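Since `beam_postgres` with `dict_row` emits plain dicts, one common approach (a sketch, not specific to beam-postgres; the `BillRequest` class and its field selection are illustrative) is to map each dict onto a small dataclass in the next pipeline step. The conversion itself needs nothing from Beam:

```python
from dataclasses import dataclass, fields
from datetime import datetime


@dataclass
class BillRequest:
    id: str
    bill_id: str
    status: str
    tenant_id: str
    created_at: datetime

    @classmethod
    def from_row(cls, row: dict) -> "BillRequest":
        # keep only the keys the dataclass declares, drop the rest
        names = {f.name for f in fields(cls)}
        return cls(**{k: v for k, v in row.items() if k in names})


row = {'id': '1', 'bill_id': 'bill-1', 'status': 'PENDING',
       'tenant_id': 'tenant-1',
       'created_at': datetime(2023, 12, 6, 11, 52, 28, 78945),
       'account_id': None}  # extra keys are ignored by from_row
record = BillRequest.from_row(row)
print(record.bill_id, record.status)  # bill-1 PENDING
```

In the pipeline this would slot in as one more transform, e.g. `... | "ToObject" >> beam.Map(BillRequest.from_row) | ...`, and downstream steps can then use `record.bill_id` etc.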
<python><apache-spark><google-cloud-dataflow><apache-beam>
2023-12-06 14:27:14
2
1,780
Sumit Desai
77,613,786
447,426
List is doubled only if run from console and initialized on field declaration
<p>I have a test that asserts at the end the length of 2 fields (list) of object under test. The test is green if run from IntelliJ (either test class or test method).</p> <pre><code>class TestExtractLegsAndPhase: @staticmethod def extract_tsv() -&gt; str: path: str = (os.path.dirname(os.path.realpath(__file__)) + &quot;/resources/FPFaultHistory.zip&quot;) print(&quot;extracting from &quot; + path) return extract_tsv_from_zip(path) tsv: str = extract_tsv() def test_extract_leg_and_phase(self): # test object to: FhdbTsvDecoder = FhdbTsvDecoder(self.tsv) legs_and_phase: list[tuple[datetime, int, int]] = to.legs_and_phase assert len(legs_and_phase) == 4926 # TODO for some reason this fails if run from console, but not from IDE session_ends: list[datetime] = to.session_ends assert len(session_ends) == 57 session_starts: list[datetime] = to.session_starts assert len(session_starts) == 57 </code></pre> <p>If I run in console the last 2 assertions fail, the length is 114 instead of 57. If I look at the lists: they are just doubled - all values are added at the end again. This is the test result:</p> <pre><code>&gt; assert len(session_ends) == 57 E AssertionError: assert 114 == 57 E + where 114 = len([Timestamp('2023-01-26 07:42:07'), Timestamp('2023-01-26 09:48:13'), Timestamp('2023-01-30 06:28:52'), Timestamp('2023-01-30 06:42:21'), Timestamp('2023-01-30 06:45:27'), Timestamp('2023-01-30 09:48:19'), ...]) tests\unit\gems\extract_fhdb\test_fhdb_tsv_decode.py:32: AssertionError --------------------------------------------------------------------------------------------------- Captured stdout call --------------------------------------------------------------------------------------------------- found 57 sessions found 4926 rows </code></pre> <p>As you see I added a print where I count the &quot;sessions&quot; and I see this <em>only once</em> with the correct number 57. Thus the method is only called once? 
The assert for <code>legs_and_phase</code> is green everywhere.</p> <p>I also know why this one is green - at least I know what to change. Here is the actual code that creates the lists:</p> <pre><code>def __extract_leg_and_phase(self) -&gt; None: # extract leg and phase from given tsv with pandas df: DataFrame = pandas.read_csv(StringIO(self.tsv), sep='\t', header=None, converters={4: lambda x: datetime.strptime(x, FHD_TIME_FORMAT)}, skiprows=0) # loop over all rows and extract leg and phase self.legs_and_phase = [] iterator = df.iterrows() count = 0 count_rows = 0 for index, row in iterator: count_rows += 1 # extract 4 (date),5 (flight leg), 6 (flight phase) from the object list.append(self.legs_and_phase , (row[4], row[5], row[6])) if row[1] == row[2] == row[3] == row[5] == row[6] == 0: # session end detected, next line is session start count += 1 self.session_ends.append(row[4]) self.session_starts.append(next(iterator)[1][4]) print(&quot;found &quot; + str(count) + &quot; sessions&quot;) print(&quot;found &quot; + str(count_rows) + &quot; rows&quot;) #self.legs_and_phase = result </code></pre> <p>As you see, I initialize <code>legs_and_phase</code> within this method, while I initialize <code>session_ends</code>, <code>session_starts</code> on field declaration. So fixing this is not a problem:</p> <pre><code> self.session_starts = [] self.session_ends = [] </code></pre> <p>fixes the problem.</p> <p>Why does this make a difference, and only if run in the console? How can I prevent this in the future? Is it bad practice to initialize fields on declaration?</p> <p><strong>Update</strong></p> <p>I was not able to create a minimum example that reproduces the failure in the console.
But here is the complete class under test:</p> <pre><code>from datetime import datetime from io import StringIO import pandas from pandas import DataFrame FHD_TIME_FORMAT = '%m/%d/%Y %H:%M:%S' class FhdbTsvDecoder: tsv: str legs_and_phase: list[tuple[datetime, int, int]] session_starts: list[datetime] = [] # makes test fail for session starts session_ends: list[datetime] def __init__(self, tsv: str): self.tsv = tsv self.__extract_leg_and_phase() def __extract_leg_and_phase(self) -&gt; None: # extract leg and phase from given tsv with pandas df: DataFrame = pandas.read_csv(StringIO(self.tsv), sep='\t', header=None, converters={4: lambda x: datetime.strptime(x, FHD_TIME_FORMAT)}, skiprows=0) # if we initialize the list on field declaration, tests will fail if run from console - lists are doubled self.legs_and_phase = [] # self.session_starts = [] initializing here will fix self.session_ends = [] # loop over all rows and extract leg and phase iterator = df.iterrows() for index, row in iterator: # extract 4 (date),5 (flight leg), 6 (flight phase) from the object list.append(self.legs_and_phase, (row[4], row[5], row[6])) if row[1] == row[2] == row[3] == row[5] == row[6] == 0: # session end detected, next line is session start self.session_ends.append(row[4]) self.session_starts.append(next(iterator)[1][4]) </code></pre> <p><strong>2nd Update</strong></p> <p>One other hint: the problem is that one integration test runs before this one and calls code that also creates an instance of FhdbTsvDecoder. In fact, both are initialized with the same data.</p> <p>If I disable this integration test the problem is also solved. So for some reason both instances seem to share their fields - making the fields somehow static. Why? Is there a way to get really new fields by calling the constructor?</p>
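The behaviour described in the updates can be reproduced with nothing but the standard library: a mutable value assigned in the class body is a single class attribute shared by every instance, whereas `self.x = []` inside a method creates a fresh list per instance. A minimal sketch (names are illustrative):

```python
class Decoder:
    shared: list = []             # class attribute: one list for the whole class

    def __init__(self):
        self.per_instance = []    # instance attribute: a new list per object
        self.shared.append(1)     # mutates the single class-level list
        self.per_instance.append(1)


a = Decoder()
b = Decoder()
print(len(a.shared), len(b.shared))              # 2 2  (doubled, like the test)
print(len(a.per_instance), len(b.per_instance))  # 1 1
```

This matches the symptoms: the integration test constructs the first `FhdbTsvDecoder` and fills the class-level `session_starts` list; the second instance then appends to the same list, doubling it. A bare annotation like `session_ends: list[datetime]` (no `= []`) creates no shared object at all, so it only works if the list is assigned in `__init__` or a method it calls, which is why initializing inside the method fixes the problem.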
<python><pytest>
2023-12-06 14:09:01
1
13,125
dermoritz
77,613,285
15,456,681
How to implement np.lib.stride_tricks.sliding_window_view efficiently in JAX?
<p>I have implemented an algorithm in which I calculate the <a href="https://en.wikipedia.org/wiki/Pearson_correlation_coefficient" rel="nofollow noreferrer">Pearson correlation coefficient</a> between a vector in one image and every vector in another image within a given window around the equivalent pixel. In pure numpy, I am implementing this with <a href="https://numpy.org/doc/stable/reference/generated/numpy.lib.stride_tricks.sliding_window_view.html#numpy.lib.stride_tricks.sliding_window_view" rel="nofollow noreferrer">np.lib.stride_tricks.sliding_window_view</a>:</p> <pre><code>import numpy as np def pcc(img1, img2, m, n): _n = 2 * n + 1 _m = 2 * m + 1 _img2 = img2[..., m:-m, n:-n] img1_mean = img1.mean(axis=-3, keepdims=True) _img2_mean = _img2.mean(axis=-3, keepdims=True) img1_img1_mean = img1 - img1_mean _img2__img2_mean = _img2 - _img2_mean img1_img1_mean_swv = np.lib.stride_tricks.sliding_window_view( img1_img1_mean, (_m, _n), axis=(-2, -1) ) numerator = (img1_img1_mean_swv * _img2__img2_mean[..., None, None]).sum(axis=-5) denominator = np.sqrt( (img1_img1_mean_swv * img1_img1_mean_swv).sum(axis=-5) * (_img2__img2_mean * _img2__img2_mean).sum(axis=-3)[..., None, None] ) return numerator / denominator </code></pre> <p>One of the main problems with this implementation is that some of the intermediate arrays are very large ~40Gb with image shapes of <code>10, 2048, 2048</code> and <code>m=n=5</code>. I would like to implement a JAX equivalent, but I am struggling to reimplement <code>sliding_window_view</code> efficiently. 
This is my JAX implementation of <code>sliding_window_view</code> for a 2d window shape over the last two axes of an array inspired by the discussions <a href="https://github.com/google/jax/issues/3171" rel="nofollow noreferrer">here</a>, <a href="https://github.com/google/jax/issues/11354" rel="nofollow noreferrer">here</a> and <a href="https://github.com/google/jax/discussions/14188" rel="nofollow noreferrer">here</a>:</p> <pre><code>from functools import partial from jax import config config.update(&quot;jax_enable_x64&quot;, True) import jax import jax.numpy as jnp @partial(jax.jit, static_argnums=(1,)) def _moving_window2d(matrix, window_shape): matrix_width = matrix.shape[-1] matrix_height = matrix.shape[-2] window_width = window_shape[0] window_height = window_shape[1] startsx = jnp.arange(matrix_width - window_width + 1) startsy = jnp.arange(matrix_height - window_height + 1) starts_xy = jnp.dstack(jnp.meshgrid(startsx, startsy)).reshape( -1, 2 ) # cartesian product =&gt; [[x,y], [x,y], ...] 
def _slice_window(start): return jax.lax.dynamic_slice( matrix, (start[1], start[0]), (window_height, window_width) ) return jax.vmap(_slice_window)(starts_xy).reshape( matrix_height - window_height + 1, matrix_width - window_width + 1, window_height, window_width, ) @partial(jax.jit, static_argnums=(1,)) def moving_window2d(matrix, window_shape): func = _moving_window2d for i in range(matrix.ndim - 2): func = jax.vmap(func, in_axes=(i, (None, None)), out_axes=i) return func(matrix, window_shape) </code></pre> <p>The jax version of <code>pcc</code> is then:</p> <pre><code>@partial(jax.jit, static_argnums=(2, 3)) def pcc_jax(img1, img2, m, n): _n = 2 * n + 1 _m = 2 * m + 1 _img2 = img2[..., m:-m, n:-n] img1_mean = img1.mean(axis=-3, keepdims=True) _img2_mean = _img2.mean(axis=-3, keepdims=True) img1_img1_mean = img1 - img1_mean _img2__img2_mean = _img2 - _img2_mean img1_img1_mean_swv = moving_window2d(img1_img1_mean, (_n, _m)) numerator = (img1_img1_mean_swv * _img2__img2_mean[..., None, None]).sum(axis=-5) denominator = jnp.sqrt( (img1_img1_mean_swv * img1_img1_mean_swv).sum(axis=-5) * (_img2__img2_mean * _img2__img2_mean).sum(axis=-3)[..., None, None] ) return numerator / denominator </code></pre> <p>I've also tried, passing a sliding window view as an argument to the function instead (which ideally I don't want to do, just for comparison):</p> <pre><code>@partial(jax.jit, static_argnums=(2, 3)) def _pcc_jax2(img1_swv, img2, m, n): _img2 = img2[..., m:-m, n:-n] img1_swv_mean = img1_swv.mean(axis=-5, keepdims=True) _img2_mean = _img2.mean(axis=-3, keepdims=True) img1_swv_img1_swv_mean = img1_swv - img1_swv_mean _img2__img2_mean = _img2 - _img2_mean numerator = (img1_swv_img1_swv_mean * _img2__img2_mean[..., None, None]).sum(axis=-5) denominator = jnp.sqrt( (img1_swv_img1_swv_mean * img1_swv_img1_swv_mean).sum(axis=-5) * (_img2__img2_mean * _img2__img2_mean).sum(axis=-3)[..., None, None] ) return numerator / denominator def pcc_jax2(img1, img2, m, n): _m = 2 * m + 1 
_n = 2 * n + 1 img1_swv = np.lib.stride_tricks.sliding_window_view( img1, (_m, _n), axis=(-2, -1) ) return _pcc_jax2(img1_swv, img2, m, n) </code></pre> <p>Whilst these implementations do result in a faster computation than the pure numpy code, they don't compare to my numba implementation (for the 3D case - I only really care about 3D &amp; 4D):</p> <pre><code>import numba as nb def _pcc_numba(img1, img2, m, n): assert img1.shape == img2.shape assert img1.ndim == 3 K, M, N = img1.shape _n = 2 * n + 1 _m = 2 * m + 1 _img2 = img2[..., m:-m, n:-n] img1_img1_mean = img1.copy() _img2__img2_mean = _img2.copy() _K = float(K) for i in nb.prange(M): for j in range(N): _mean1 = 0.0 for k in range(K): _mean1 += img1[k, i, j] _mean1 /= _K for k in range(K): img1_img1_mean[k, i, j] -= _mean1 img1_img1_mean_swv = np.lib.stride_tricks.sliding_window_view( img1_img1_mean, (_m, _n), axis=(-2, -1) ) out = np.empty((M - _m + 1, N - _n + 1, _m, _n), dtype=np.float64) for i in nb.prange(M - _m + 1): for j in range(N - _n + 1): _img2__img2_meanij = _img2__img2_mean[:, i, j] _mean2 = 0.0 for k in range(K): _mean2 += _img2[k, i, j] _mean2 /= _K denominator2 = 0.0 for k in range(K): _img2__img2_meanij[k] -= _mean2 denominator2 += _img2__img2_meanij[k] * _img2__img2_meanij[k] for a in range(_m): for b in range(_n): numerator = 0.0 denominator = 0.0 for k in range(K): numerator += ( img1_img1_mean_swv[k, i, j, a, b] * _img2__img2_meanij[k] ) denominator += ( img1_img1_mean_swv[k, i, j, a, b] * img1_img1_mean_swv[k, i, j, a, b] ) out[i, j, a, b] = numerator / np.sqrt(denominator * denominator2) return out pcc_numba = nb.njit(fastmath=True)(_pcc_numba) pcc_numba_parallel = nb.njit(fastmath=True, parallel=True)(_pcc_numba) </code></pre> <p>Test and time:</p> <pre><code>rng = np.random.default_rng() shape = (10, 400, 400) img1 = rng.random(shape) img2 = rng.random(shape) out = pcc(img1, img2, 5, 5) out_numba = pcc_numba(img1, img2, 5, 5) out_numba_parallel = pcc_numba_parallel(img1, img2, 5, 5) 
out_jax = pcc_jax(img1, img2, 5, 5) out_jax2 = pcc_jax2(img1, img2, 5, 5) assert np.allclose(out, out_numba) assert np.allclose(out, out_numba_parallel) assert np.allclose(out, out_jax) assert np.allclose(out, out_jax2) %timeit pcc(img1, img2, 5, 5) %timeit pcc_numba(img1, img2, 5, 5) %timeit pcc_numba_parallel(img1, img2, 5, 5) %timeit pcc_jax(img1, img2, 5, 5).block_until_ready() %timeit pcc_jax2(img1, img2, 5, 5).block_until_ready() </code></pre> <p>Results:</p> <pre><code>804 ms ยฑ 7.99 ms per loop (mean ยฑ std. dev. of 7 runs, 1 loop each) 81.7 ms ยฑ 539 ยตs per loop (mean ยฑ std. dev. of 7 runs, 10 loops each) 19.2 ms ยฑ 77.6 ยตs per loop (mean ยฑ std. dev. of 7 runs, 100 loops each) 417 ms ยฑ 1.58 ms per loop (mean ยฑ std. dev. of 7 runs, 1 loop each) 639 ms ยฑ 11.4 ms per loop (mean ยฑ std. dev. of 7 runs, 1 loop each) </code></pre> <p>Therefore, my question is how do I perform a sliding window view like operation efficiently in JAX?</p> <p>Whilst the question is primarily aimed at JAX, I am open to any improvements to my numpy or numba code.</p>
<python><numpy><optimization><numba><jax>
2023-12-06 13:03:46
0
3,592
Nin17
77,612,658
14,351,788
Formula to project a line onto a plane
<p>I have a vector and project it onto a plane. I tried to compute the projection, but the formula seems incorrect.</p> <p><a href="https://i.sstatic.net/WrkVA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WrkVA.png" alt="enter image description here" /></a></p> <p>Here is my code:</p> <pre><code>import numpy as np # n is the normalized normal of the plane and v is the vector n = np.array([0.6, 0.3, -0.7]) v = np.array([-415, 212, 180]) p = v - np.dot(v, n) / (np.linalg.norm(n)**2) * n # array([-216.23404255, 311.38297872, -51.89361702]) cross_product = np.cross(p, n) # array([202.4, 182.5, 251.7]) </code></pre> <p>If the formula works, the projection &quot;p&quot; on the plane should be perpendicular to the normal &quot;n&quot;, and the np.cross() result should be (0,0,0). However, as shown above, the cross result is ([202.4, 182.5, 251.7]).</p> <p>I don't know why this formula doesn't work here. Can anyone help me figure it out?</p>
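A note on the check itself: perpendicularity is tested with the dot product (zero when orthogonal), while the cross product is zero only for parallel vectors, so a non-zero `np.cross(p, n)` does not by itself mean the projection failed. A sketch of both checks with the same numbers:

```python
import numpy as np

n = np.array([0.6, 0.3, -0.7])
v = np.array([-415.0, 212.0, 180.0])

# projection of v onto the plane with normal n (same formula as the question)
p = v - np.dot(v, n) / np.dot(n, n) * n

print(np.dot(p, n))       # ~0: p is perpendicular to n
print(np.cross(p, n))     # generally NOT zero for perpendicular vectors
print(np.allclose(np.dot(p, n), 0.0))  # True
```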
<python><math><3d><projection>
2023-12-06 11:27:08
1
437
Carlos
77,612,615
22,113,674
Databricks sql - Insert multiple rows using python list values
<p>I have a list of values - ['a', 'b', 'c']</p> <p>I have a delta table. Consider the one below as an example -</p> <p><a href="https://i.sstatic.net/0VXEs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0VXEs.png" alt="enter image description here" /></a></p> <p>I am iterating over a dataframe in a for loop and have constructed the above list from the dataframe result. The first column value is hardcoded. So, in the for loop, I pass the values from the list and execute them as multiple insert statements.</p> <pre><code>spark.sql(&quot;insert into default.test values(1, 'a')&quot;) spark.sql(&quot;insert into default.test values(1, 'b')&quot;) spark.sql(&quot;insert into default.test values(1, 'c')&quot;) </code></pre> <p>How can I execute a single insert statement so that I get the result below?</p> <p><a href="https://i.sstatic.net/yZoDD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yZoDD.png" alt="enter image description here" /></a></p>
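Spark SQL accepts multiple row tuples in one `VALUES` clause, so one option is to build the statement from the list first and run `spark.sql` once. A sketch (the string-building part is plain Python; the final `spark.sql(sql)` call is commented out because no Spark session exists here):

```python
values = ['a', 'b', 'c']
key = 1

# Build one multi-row VALUES clause: (1, 'a'), (1, 'b'), (1, 'c')
rows = ", ".join(f"({key}, '{v}')" for v in values)
sql = f"insert into default.test values {rows}"
print(sql)
# insert into default.test values (1, 'a'), (1, 'b'), (1, 'c')

# On Databricks, then run it as a single statement:
# spark.sql(sql)
```

For untrusted input, building a DataFrame and appending it (e.g. `spark.createDataFrame([(key, v) for v in values], "id int, val string").write.format("delta").mode("append").saveAsTable("default.test")`) avoids string interpolation into SQL.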
<python><databricks-sql>
2023-12-06 11:18:08
2
339
mpr
77,612,605
1,693,057
How Does `case AsyncGenerator():` Work in Python's `match` Statement Without Raising an Exception?
<p>I'm seeking clarity on the use of abstract classes within Python's <code>match</code> statement, specifically the <code>AsyncGenerator</code> from the <code>collections.abc</code> module. In a <code>match</code> statement, using <code>AsyncGenerator()</code> in a <code>case</code> pattern confuses me as it resembles an instantiation of the abstract class.</p> <p>For example:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; from collections.abc import AsyncGenerator &gt;&gt;&gt; AsyncGenerator() Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; TypeError: Can't instantiate abstract class AsyncGenerator with abstract methods asend, athrow &gt;&gt;&gt; async def check_generator(gen): ... match gen: ... case AsyncGenerator(): ... print(&quot;It's an AsyncGenerator&quot;) ... case _: ... print(&quot;It's something else&quot;) ... &gt;&gt;&gt; async def gen(): ... yield &quot;something&quot; ... &gt;&gt;&gt; await check_generator(gen()) It's an AsyncGenerator &gt;&gt;&gt; </code></pre> <p>This code doesn't raise any exceptions when I run my project, yet when I try to instantiate <code>AsyncGenerator</code> directly in the Python REPL, it does:</p> <pre class="lang-py prettyprint-override"><code>Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; TypeError: Can't instantiate abstract class AsyncGenerator with abstract methods asend, athrow </code></pre> <p>I'm trying to understand:</p> <ol> <li>How does Python interpret <code>case AsyncGenerator():</code> in a <code>match</code> statement? 
Is it not an attempt to instantiate <code>AsyncGenerator</code>?</li> <li>Why does this pattern not raise an exception in the <code>match</code> statement context, whereas directly instantiating <code>AsyncGenerator</code> in the REPL does?</li> <li>Is there a specific design reason or advantage behind this syntax choice for pattern matching with abstract classes?</li> </ol> <p>Any explanations or insights on this behavior would be greatly appreciated, as it seems to be a peculiar aspect of Python's pattern matching I haven't fully grasped yet.</p>
<python><python-3.x><pattern-matching><abstract-class>
2023-12-06 11:15:46
0
2,837
Lajos
77,612,458
13,146,029
Resource punkt not found. In VENV
<p>I have an application that uses ChatterBot - right now it's a simple conversation application. Anyway I started to use a VENV to start the app and now I have an issue with nltk</p> <p>I'm not sure where in the app it's being used but when I try to run I get that error</p> <pre><code> Resource punkt not found. Please use the NLTK Downloader to obtain the resource: &gt;&gt;&gt; import nltk &gt;&gt;&gt; nltk.download('punkt') </code></pre> <p>here is my trace:</p> <pre class="lang-py prettyprint-override"><code>Traceback (most recent call last): File &quot;/Users/grahammorby/Documents/GitHub/chatterbot/jarvis.py&quot;, line 28, in &lt;module&gt; lessons.train([ File &quot;/Users/grahammorby/Documents/GitHub/chatterbot/venv/lib/python3.12/site-packages/chatterbot/trainers.py&quot;, line 103, in train statement_search_text = self.chatbot.storage.tagger.get_bigram_pair_string(text) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/grahammorby/Documents/GitHub/chatterbot/venv/lib/python3.12/site-packages/chatterbot/tagging.py&quot;, line 135, in get_bigram_pair_string for sentence in self.tokenize_sentence(text.strip()): ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/grahammorby/Documents/GitHub/chatterbot/venv/lib/python3.12/site-packages/chatterbot/tagging.py&quot;, line 68, in tokenize_sentence self.sentence_tokenizer = load_data('tokenizers/punkt/{language}.pickle'.format( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/grahammorby/Documents/GitHub/chatterbot/venv/lib/python3.12/site-packages/nltk/data.py&quot;, line 750, in load opened_resource = _open(resource_url) ^^^^^^^^^^^^^^^^^^^ File &quot;/Users/grahammorby/Documents/GitHub/chatterbot/venv/lib/python3.12/site-packages/nltk/data.py&quot;, line 876, in _open return find(path_, path + [&quot;&quot;]).open() ^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/grahammorby/Documents/GitHub/chatterbot/venv/lib/python3.12/site-packages/nltk/data.py&quot;, line 583, 
in find raise LookupError(resource_not_found) LookupError: ********************************************************************** Resource punkt not found. Please use the NLTK Downloader to obtain the resource: &gt;&gt;&gt; import nltk &gt;&gt;&gt; nltk.download('punkt') </code></pre> <p>These are my sole Imports</p> <pre class="lang-py prettyprint-override"><code># Importing ChatterBot from chatterbot import ChatBot from chatterbot.trainers import ListTrainer from chatterbot.trainers import ChatterBotCorpusTrainer # Importย Google to text speech import gtts from playsound import playsound </code></pre> <p>I have also tried to install via PIP whilst in the venv but still pull the same error</p>
<python>
2023-12-06 10:55:09
0
317
Graham Morby
77,612,387
12,081,269
How can I create conditional column for two sets in a window of a key?
<p>I have two tables with information on retailers users used in pre- and post-periods. My goal is to find what retailers were new for users in postperiod. I wonder how to do it in a <code>user_id</code> window, because <code>is_in()</code> and <code>loc[]</code> solutions do not fit well for my task. Also I was thinking about filtering joins (anti-join specifically) but it did not work well. Here's sample data:</p> <pre><code>sample1 = pd.DataFrame( { 'user_id': [45, 556, 556, 556, 556, 556, 556, 1344, 1588, 2063, 2063, 2063, 2673, 2982, 2982], 'retailer': ['retailer_1', 'retailer_1', 'retailer_2', 'retailer_3', 'retailer_4', 'retailer_5', 'retailer_6', 'retailer_3', 'retailer_2', 'retailer_2', 'retailer_3', 'retailer_7', 'retailer_1', 'retailer_1', 'retailer_2'] } ) sample2 = pd.DataFrame( { 'user_id': [45, 45, 556, 556, 556, 556, 556, 556, 1344, 1588, 2063, 2063, 2063, 2673, 2673, 2982, 2982], 'retailer': ['retailer_1', 'retailer_6', 'retailer_1', 'retailer_2', 'retailer_3', 'retailer_4', 'retailer_5', 'retailer_6', 'retailer_3', 'retailer_2', 'retailer_2', 'retailer_3', 'retailer_7', 'retailer_1', 'retailer_2', 'retailer_1', 'retailer_2'] } ) </code></pre> <p>My desired result is like this:</p> <pre><code>{'user_id': {0: 45, 1: 45, 2: 556, 3: 556, 4: 556, 5: 556, 6: 556, 7: 556, 8: 1344, 9: 1588, 10: 2063, 11: 2063, 12: 2063, 13: 2673, 14: 2673, 15: 2982, 16: 2982}, 'retailer': {0: 'retailer_1', 1: 'retailer_6', 2: 'retailer_1', 3: 'retailer_2', 4: 'retailer_3', 5: 'retailer_4', 6: 'retailer_5', 7: 'retailer_6', 8: 'retailer_3', 9: 'retailer_2', 10: 'retailer_2', 11: 'retailer_3', 12: 'retailer_7', 13: 'retailer_1', 14: 'retailer_2', 15: 'retailer_1', 16: 'retailer_2'}, 'is_new_retailer': {0: 0, 1: 1, 2: 0, 3: 0, 4: 0, 5: 0, 6: 0, 7: 0, 8: 0, 9: 0, 10: 0, 11: 0, 12: 0, 13: 0, 14: 1, 15: 0, 16: 0}} </code></pre>
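One way that stays vectorised over the whole table: a left merge of the post-period against the de-duplicated pre-period with `indicator=True`, so rows found only in the post-period are the new retailers. A sketch using the sample data above:

```python
import pandas as pd

sample1 = pd.DataFrame({
    'user_id': [45, 556, 556, 556, 556, 556, 556, 1344, 1588,
                2063, 2063, 2063, 2673, 2982, 2982],
    'retailer': ['retailer_1', 'retailer_1', 'retailer_2', 'retailer_3',
                 'retailer_4', 'retailer_5', 'retailer_6', 'retailer_3',
                 'retailer_2', 'retailer_2', 'retailer_3', 'retailer_7',
                 'retailer_1', 'retailer_1', 'retailer_2'],
})
sample2 = pd.DataFrame({
    'user_id': [45, 45, 556, 556, 556, 556, 556, 556, 1344, 1588,
                2063, 2063, 2063, 2673, 2673, 2982, 2982],
    'retailer': ['retailer_1', 'retailer_6', 'retailer_1', 'retailer_2',
                 'retailer_3', 'retailer_4', 'retailer_5', 'retailer_6',
                 'retailer_3', 'retailer_2', 'retailer_2', 'retailer_3',
                 'retailer_7', 'retailer_1', 'retailer_2', 'retailer_1',
                 'retailer_2'],
})

# Rows of sample2 with no (user_id, retailer) match in sample1 are new.
result = sample2.merge(sample1.drop_duplicates(),
                       on=['user_id', 'retailer'],
                       how='left', indicator=True)
result['is_new_retailer'] = (result['_merge'] == 'left_only').astype(int)
result = result.drop(columns='_merge')
print(result[result['is_new_retailer'] == 1])
```

On this data the flagged rows are `(45, retailer_6)` and `(2673, retailer_2)`, matching the desired output.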
<python><pandas>
2023-12-06 10:45:23
1
897
rg4s
77,612,193
972,647
Uniquify list and get indices of matching elements
<p>I have a list, say <code>[a, b, c, a, a, b]</code>. I want as output a list of indices of the matching elements. For this example it would then be <code>[[0, 3, 4], [1, 5]]</code>.</p> <p>The lists will be small, so performance or memory use is not of importance. Is there already a built-in function for lists, or in numpy/pandas, that can do this?</p>
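There is no single built-in that returns exactly this shape, but one pass with `collections.defaultdict` gets there (a sketch; drop the `len(idx) > 1` filter to also keep singleton groups like `'c'`):

```python
from collections import defaultdict


def match_indices(items):
    groups = defaultdict(list)
    for i, x in enumerate(items):
        groups[x].append(i)
    # keep only elements that occur more than once, in first-seen order
    return [idx for idx in groups.values() if len(idx) > 1]


print(match_indices(['a', 'b', 'c', 'a', 'a', 'b']))  # [[0, 3, 4], [1, 5]]
```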
<python>
2023-12-06 10:16:02
3
7,652
beginner_
77,612,182
14,830,534
Merge pandas dataframes with multiindex columns
<p>How can I merge two dataframes in <code>pandas</code> that both have multi-index columns. The merge should be an outer merge on a column that is present in both dataframes. See minimal example and resulting error below:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd # Sample DataFrames with multi-index columns data1 = { ('A', 'X'): [1, 2, 3], ('A', 'Y'): [4, 5, 6], ('B', 'X'): [7, 8, 9], ('B', 'Y'): [10, 11, 12], } data2 = { ('A', 'X'): [13, 14, 15], ('A', 'Y'): [16, 17, 18], ('B', 'X'): [19, 20, 21], ('B', 'Y'): [22, 23, 24], } df1 = pd.DataFrame(data1, index=['row1', 'row2', 'row3']) df2 = pd.DataFrame(data2, index=['row1', 'row2', 'row3']) # Merge on a specific column (e.g., ('A', 'X')) column_to_merge_on = ('A', 'X') merged_df = pd.merge(df1, df2, left_on=column_to_merge_on, right_on=column_to_merge_on) print(merged_df) </code></pre> <p>Resulting error:</p> <pre class="lang-py prettyprint-override"><code>ValueError: The column label 'A' is not unique. For a multi-index, the label must be a tuple with elements corresponding to each level. </code></pre>
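A hedged note on this error: pandas unpacks a bare tuple key into one label per element, so `('A', 'X')` is read as two separate keys `'A'` and `'X'`, and the non-unique top-level label `'A'` triggers the `ValueError`. Wrapping the tuple in a list makes it a single multi-level label (a sketch with the frames above; how the suffixes render on the remaining duplicated columns may vary by pandas version):

```python
import pandas as pd

data1 = {('A', 'X'): [1, 2, 3], ('A', 'Y'): [4, 5, 6],
         ('B', 'X'): [7, 8, 9], ('B', 'Y'): [10, 11, 12]}
data2 = {('A', 'X'): [13, 14, 15], ('A', 'Y'): [16, 17, 18],
         ('B', 'X'): [19, 20, 21], ('B', 'Y'): [22, 23, 24]}
df1 = pd.DataFrame(data1, index=['row1', 'row2', 'row3'])
df2 = pd.DataFrame(data2, index=['row1', 'row2', 'row3'])

# A list containing the tuple = one single multi-level merge key.
merged = pd.merge(df1, df2, on=[('A', 'X')], how='outer',
                  suffixes=('_1', '_2'))
print(len(merged))  # 6: no ('A', 'X') values overlap, so outer keeps all rows
```

Note that with this sample data an inner merge would be empty, since the two frames share no `('A', 'X')` values; `how='outer'` is what yields all six rows.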
<python><pandas><dataframe>
2023-12-06 10:14:20
1
1,106
Jan Willem
77,611,699
2,641,825
In quarto, is it possible to cross reference a figure in another document?
<p>I am writing an article in one quarto <code>.qmd</code> document and would like to refer to a figure that is in an annex in another <code>.qmd</code> document.</p> <p>Reproducible example:</p> <ul> <li>article.qmd</li> </ul> <pre><code>See Figure @fig-a in the annex for details. </code></pre> <ul> <li>annex.qmd</li> </ul> <pre><code>![Caption for the figure](path/to/figure.png){#fig-a} </code></pre> <p>Related links:</p> <ul> <li><a href="https://quarto.org/docs/authoring/cross-references.html" rel="nofollow noreferrer">https://quarto.org/docs/authoring/cross-references.html</a></li> <li><a href="https://quarto.org/docs/books/book-crossrefs.html" rel="nofollow noreferrer">https://quarto.org/docs/books/book-crossrefs.html</a></li> <li><a href="https://forums.fast.ai/t/cross-references-between-multiple-notebooks-using-nbdev-and-quarto/98473" rel="nofollow noreferrer">https://forums.fast.ai/t/cross-references-between-multiple-notebooks-using-nbdev-and-quarto/98473</a></li> </ul>
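A hedged note, since behaviour here depends on the project type: the book-crossrefs page linked above describes `@fig-a` resolving across `.qmd` files only within a book project, where both files are chapters. A minimal `_quarto.yml` sketch of that assumption:

```yaml
project:
  type: book

book:
  chapters:
    - article.qmd
    - annex.qmd
```

Outside a book project, a plain markdown link to the anchor in the rendered annex, e.g. `[Figure A](annex.qmd#fig-a)`, is a common workaround, though it produces an ordinary hyperlink rather than a numbered cross-reference.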
<python><r><markdown><quarto>
2023-12-06 08:57:37
1
11,539
Paul Rougieux
77,611,637
3,291,993
Jupyter Notebook cell run doesn't finalize
<p>When I run Python code in the jupyter notebook, I got the following error, and cell doesn't finalize.</p> <p>However, if I run the same Python code in the command line, it finishes its run without any problem.</p> <pre><code>[IPKernelApp] ERROR | Exception in message handler: Traceback (most recent call last): File &quot;/Users/burcakotlu/developer/python/test_venv/lib/python3.9/site-packages/ipykernel/kernelbase.py&quot;, line 424, in dispatch_shell await result File &quot;/Users/burcakotlu/developer/python/test_venv/lib/python3.9/site-packages/ipykernel/kernelbase.py&quot;, line 770, in execute_request sys.stderr.flush() ValueError: I/O operation on closed file. [IPKernelApp] ERROR | Error in message handler Traceback (most recent call last): File &quot;/Users/burcakotlu/developer/python/test_venv/lib/python3.9/site-packages/ipykernel/kernelbase.py&quot;, line 529, in dispatch_queue await self.process_one() File &quot;/Users/burcakotlu/developer/python/test_venv/lib/python3.9/site-packages/ipykernel/kernelbase.py&quot;, line 518, in process_one await dispatch(*args) File &quot;/Users/burcakotlu/developer/python/test_venv/lib/python3.9/site-packages/ipykernel/kernelbase.py&quot;, line 437, in dispatch_shell sys.stderr.flush() ValueError: I/O operation on closed file. </code></pre> <p>I haven't solved this issue for days. I would appreciate any help.</p> <p>Env specific versions:</p> <pre><code>$ pip list | grep -E &quot;(^jup|^ipy)&quot; ipykernel 6.27.1 ipython 8.18.1 jupyter_client 8.6.0 jupyter_core 5.5.0 </code></pre>
<python><jupyter-notebook>
2023-12-06 08:47:35
1
1,147
burcak
77,611,116
1,509,695
Does Python's object class come with an id-based equality method built in?
<p>Does Python's <code>object</code> class indeed come with an id-based equality method built in?</p> <pre><code>class A(object): def __init__(self, i): self.i = i def test_keyset(): i1 = A(1) i2 = A(1) # fails: assert i1 == i2 </code></pre>
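For context: the default `object.__eq__` falls back to identity comparison (`is`), which is why the assertion above fails; defining `__eq__` (and, for hashability, `__hash__`) switches a class to value-based equality. A minimal sketch:

```python
class A:
    def __init__(self, i):
        self.i = i


class B:
    def __init__(self, i):
        self.i = i

    def __eq__(self, other):
        # Value-based equality instead of the inherited identity check
        return isinstance(other, B) and self.i == other.i

    def __hash__(self):
        # Keep instances usable as dict keys / set members
        return hash(self.i)


a1, a2 = A(1), A(1)
print(a1 == a2)      # False: object.__eq__ compares identity
print(a1 == a1)      # True: same object
print(B(1) == B(1))  # True: value-based once __eq__ is defined
```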
<python>
2023-12-06 07:02:49
0
13,863
matanox
77,610,957
1,113,579
Installing Poppler for Windows without using a package manager
<p>I have gone through <a href="https://stackoverflow.com/questions/18381713/how-to-install-poppler-on-windows/62615998">How to install Poppler on Windows?</a> Currently I do not use a package manager for Windows, like Conda, Scoop, or Chocolatey, and I wish to install poppler-utils without using a package manager. Is there a setup.exe which I can download and run? What are my other options? The link <a href="http://blog.alivate.com.au/poppler-windows/" rel="nofollow noreferrer">http://blog.alivate.com.au/poppler-windows/</a>, which is referred to in the above SO thread, is not working any more.</p> <p>Some context: In our Python 3.10 project, we are using textract 1.5.0 to read a PDF, and it seems textract requires poppler-utils to be installed. poppler-utils is included in the Dockerfile using:</p> <pre><code>RUN apt-get update &amp;&amp; \ apt-get install -y --no-install-recommends pkg-config poppler-utils </code></pre> <p>The code to read the PDF works when deployed in Azure through a Docker image configured for Debian Linux. But the code does not work on my development system, which is on Windows. This is legacy code developed by the earlier team, and now I need to figure out how to make it work on my local system in order to replicate issues.</p>
<python><python-3.x><poppler><poppler-utils>
2023-12-06 06:31:51
1
1,276
AllSolutions
77,610,686
159,072
Why does the one-hot-encoding give worse accuracy in this case?
<p>I have two directories, <code>train_data_npy</code> and <code>valid_data_npy</code> where there are 3013 and 1506 <code>*.npy</code> files, respectively.</p> <p>Each <code>*.npy</code> file has 11 columns of float types, of which the first eight columns are features and the last three columns are one-hot-encoded labels (characters) of three classes.</p> <pre><code>---------------------------------------------------------------------- f1 f2 f3 f4 f5 f6 f7 f8 ---classes--- ---------------------------------------------------------------------- 0.0 0.0 0.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 1.0 6.559 9.22 0.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 1.0 5.512 6.891 10.589 0.0 0.0 0.0 0.0 1.0 0.0 0.0 1.0 7.082 8.71 7.227 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 6.352 9.883 12.492 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 6.711 10.422 13.44 0.0 0.0 0.0 0.0 1.0 0.0 0.0 1.0 7.12 9.283 12.723 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 6.408 9.277 12.542 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 6.608 9.686 12.793 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 6.723 8.602 12.168 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 ... ... ... ... ... 
</code></pre> <p>Given the format of the data, I have written two scripts.</p> <p><code>cnn_autokeras_by_chunk_with_ohe.py</code> uses OHE labels as they are, and <code>cnn_autokeras_by_chunk_without_ohe.py</code> converts OHE data into integers.</p> <p>The first one achieves an accuracy of <code>0.40</code>, and the second one achieves an accuracy of <code>0.97</code>.</p> <p><strong>Why does the one-hot-encoding give worse accuracy in this case?</strong></p> <hr /> <p>The Python script's task is to load those <code>*.npy</code> files in chunks so that the memory is not overflowed while searching for the best model.</p> <hr /> <pre><code># File: cnn_autokeras_by_chunk_with_ohe.py import numpy as np import tensorflow as tf import autokeras as ak import os # Update these values to match your actual data N_FEATURES = 8 N_CLASSES = 3 # Number of classes BATCH_SIZE = 100 def get_data_generator(folder_path, batch_size, n_features, n_classes): &quot;&quot;&quot;Get a generator returning batches of data from .npy files in the specified folder. The shape of the features is (batch_size, n_features). The shape of the labels is (batch_size, n_classes). 
&quot;&quot;&quot; def data_generator(): files = os.listdir(folder_path) npy_files = [f for f in files if f.endswith('.npy')] for npy_file in npy_files: data = np.load(os.path.join(folder_path, npy_file)) x = data[:, :n_features] y = data[:, n_features:] for i in range(0, len(x), batch_size): yield x[i:i+batch_size], y[i:i+batch_size] return data_generator train_data_folder = '/home/my_user_name/original_data/train_data_npy' validation_data_folder = '/home/my_user_name/original_data/valid_data_npy' train_dataset = tf.data.Dataset.from_generator( get_data_generator(train_data_folder, BATCH_SIZE, N_FEATURES, N_CLASSES), output_signature=( tf.TensorSpec(shape=(None, N_FEATURES), dtype=tf.float32), tf.TensorSpec(shape=(None, N_CLASSES), dtype=tf.float32) # Labels are now 2D with one-hot encoding ) ) validation_dataset = tf.data.Dataset.from_generator( get_data_generator(validation_data_folder, BATCH_SIZE, N_FEATURES, N_CLASSES), output_signature=( tf.TensorSpec(shape=(None, N_FEATURES), dtype=tf.float32), tf.TensorSpec(shape=(None, N_CLASSES), dtype=tf.float32) # Labels are now 2D with one-hot encoding ) ) # Initialize the structured data classifier. clf = ak.StructuredDataClassifier(max_trials=10) # Set max_trials to any value you desire. # Feed the tensorflow Dataset to the classifier. clf.fit(train_dataset, epochs=100) # Get the best hyperparameters best_hps = clf.tuner.get_best_hyperparameters()[0] # Print the best hyperparameters print(best_hps) # Export the best model model = clf.export_model() # Save the model in tf format model.save(&quot;heca_v2_model_with_ohe&quot;, save_format='tf') # Note the lack of .h5 extension # Evaluate the best model with testing data. 
print(clf.evaluate(validation_dataset)) </code></pre> <pre><code># File: cnn_autokeras_by_chunk_without_ohe.py import numpy as np import tensorflow as tf import os import autokeras as ak N_FEATURES = 8 N_CLASSES = 3 # Number of classes BATCH_SIZE = 100 def get_data_generator(folder_path, batch_size, n_features): &quot;&quot;&quot;Get a generator returning batches of data from .npy files in the specified folder. The shape of the features is (batch_size, n_features). &quot;&quot;&quot; def data_generator(): files = os.listdir(folder_path) npy_files = [f for f in files if f.endswith('.npy')] for npy_file in npy_files: data = np.load(os.path.join(folder_path, npy_file)) x = data[:, :n_features] y = data[:, n_features:] y = np.argmax(y, axis=1) # Convert one-hot-encoded labels back to integers for i in range(0, len(x), batch_size): yield x[i:i+batch_size], y[i:i+batch_size] return data_generator train_data_folder = '/home/my_user_name/original_data/train_data_npy' validation_data_folder = '/home/my_user_name/original_data/valid_data_npy' train_dataset = tf.data.Dataset.from_generator( get_data_generator(train_data_folder, BATCH_SIZE, N_FEATURES), output_signature=( tf.TensorSpec(shape=(None, N_FEATURES), dtype=tf.float32), tf.TensorSpec(shape=(None,), dtype=tf.int32) # Labels are now 1D integers ) ) validation_dataset = tf.data.Dataset.from_generator( get_data_generator(validation_data_folder, BATCH_SIZE, N_FEATURES), output_signature=( tf.TensorSpec(shape=(None, N_FEATURES), dtype=tf.float32), tf.TensorSpec(shape=(None,), dtype=tf.int32) # Labels are now 1D integers ) ) # Initialize the structured data classifier. clf = ak.StructuredDataClassifier(max_trials=10) # Set max_trials to any value you desire. # Feed the tensorflow Dataset to the classifier. 
clf.fit(train_dataset, epochs=100) # Get the best hyperparameters best_hps = clf.tuner.get_best_hyperparameters()[0] # Print the best hyperparameters print(best_hps) # Export the best model model = clf.export_model() # Save the model in tf format model.save(&quot;heca_v2_model_without_ohe&quot;, save_format='tf') # Note the lack of .h5 extension # Evaluate the best model with testing data. print(clf.evaluate(validation_dataset)) </code></pre> <hr /> <p><strong>EDIT:</strong></p> <pre><code> 0 MSE C 0.000 0.000 0.000 1 1 1 1 1 0 1 ASN C 7.042 9.118 0.000 1 1 1 1 1 0 2 LEU H 5.781 5.488 7.470 0 0 0 0 1 0 3 THR H 5.399 5.166 6.452 0 0 0 0 0 0 4 GLU H 5.373 4.852 6.069 0 0 0 0 1 0 5 LEU H 5.423 5.164 6.197 0 0 0 0 2 0 </code></pre> <p>(1) - residue number for debug purpose only (NOT A FEATURE)<br /> (2) - residue type for debug purpose only (NOT A FEATURE)<br /> (3) - secondary structure (TRUE LABEL)<br /> (4) - r13<br /> (5) - r14<br /> (6) - r15<br /> (7) - neighbor count with 4A<br /> (8) - neighbor count with 4.5A<br /> (9) - neighbor count with 5A<br /> (10) - neighbor count with 6A<br /> (11) - neighbor count with 8A<br /> (12) - hydrogen bonds count</p>
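One hedged observation (not verified against the AutoKeras internals of every version): the label conversion itself is lossless, so the accuracy gap more likely comes from how the two label shapes are interpreted downstream — integer labels imply a sparse categorical setup, while a 2-D float target can be treated as a multi-output/multi-label problem with a different loss and accuracy metric. The lossless round trip is easy to confirm:

```python
import numpy as np

# One-hot rows and their integer equivalents carry exactly the same information
y_ohe = np.array([[1., 0., 0.],
                  [0., 0., 1.],
                  [0., 1., 0.]])
y_int = np.argmax(y_ohe, axis=1)
print(y_int)  # [0 2 1]

# Round-trip back to one-hot form recovers the original labels
y_back = np.eye(3)[y_int]
assert np.array_equal(y_back, y_ohe)
```

So rather than suspecting the labels, it is worth printing the loss function and output layer of the two exported models and comparing them.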
<python><tensorflow><machine-learning><keras><auto-keras>
2023-12-06 05:12:08
2
17,446
user366312
77,610,633
6,245,473
Adding yfinance historical price data onto dataframe without hard-coding dates (using dates already in the dataframe)
<p>The following code pulls balance sheet data using yfinance. I would like to get the close price as of the corresponding dates in the dataframe on the last row without hard coding dates.</p> <pre><code>import yfinance as yf ticker_object = yf.Ticker('AAPL') balancesheet = ticker_object.balancesheet # Select specific rows and columns selected_data = balancesheet.loc[['Share Issued', 'Tangible Book Value']] selected_data </code></pre> <p>Current output:</p> <p><a href="https://i.sstatic.net/zJWdt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zJWdt.png" alt="enter image description here" /></a></p> <p>Desired output:</p> <p><a href="https://i.sstatic.net/KbUpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KbUpB.png" alt="enter image description here" /></a></p> <p>Here is sample code to get price data:</p> <pre><code>import yfinance as yf data = yf.download(&quot;AAPL&quot;, start=&quot;2017-01-01&quot;, end=&quot;2017-04-30&quot;) data.head(3) </code></pre> <p>Output:</p> <p><a href="https://i.sstatic.net/MqhmB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MqhmB.png" alt="enter image description here" /></a></p>
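A network-free sketch of the alignment step: once you have a daily close series from `yf.download`, `Series.asof` picks the last available close on or before each balance-sheet date (handy because report dates can fall on non-trading days). The series below is synthetic stand-in data, not real AAPL prices:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for yf.download(...)["Close"]
prices = pd.Series(
    np.linspace(100.0, 130.0, 30),
    index=pd.date_range("2023-09-01", periods=30, freq="D"),
    name="Close",
)

# Stand-in for selected_data.columns (the balance-sheet dates)
report_dates = pd.DatetimeIndex(["2023-09-10", "2023-09-20"])

# Last available close on or before each report date
close_on_dates = prices.asof(report_dates)
print(close_on_dates)

# Appending as a new row of the balance-sheet frame would then be e.g.:
# selected_data.loc["Close"] = close_on_dates.values
```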
<python><pandas><dataframe><stock><yfinance>
2023-12-06 04:52:06
1
311
HTMLHelpMe
77,610,309
3,835,843
gcloud-aio-storage can't upload more than 2GB data
<p><a href="https://talkiq.github.io/gcloud-aio/autoapi/storage/index.html#usage" rel="nofollow noreferrer">gcloud-aio-storage</a> works in an in-memory fashion: it requires the whole file content to upload. When we try to read the content of a larger file (more than 2 GB), it says:</p> <blockquote> <p>OverflowError: string longer than 2147483647 bytes</p> </blockquote> <p>Does anyone know how to make it accept a larger file?</p> <p>My implementation is:</p> <pre><code>import aiofiles import asyncio from gcloud.aio.storage import Storage async def upload(bucket, file, contents): async with Storage() as client: await client.upload(bucket, file, contents) async def uploads(bucket, obj_names): for file in obj_names: async with aiofiles.open(file, mode=&quot;r&quot;) as f: contents = await f.read() # This line causes the issue await upload(bucket, file, contents) </code></pre> <p>Environment:</p> <p>OS: Windows 11 (64 bit)</p> <p>Python: Python 3.9.13 (tags/v3.9.13:6de2ca5, May 17 2022, 16:36:42) [MSC v.1929 64 bit (AMD64)] on win32</p>
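Two things are worth checking here, independent of the library. First, `mode="r"` opens the file in text mode, so the 2 GB file is decoded into one giant `str` — which is exactly the object that overflows; binary mode (`"rb"`) should be used for uploads. Second, materialising the whole file at once can be avoided by reading in chunks. A library-agnostic, stdlib-only sketch of chunked reading (the upload call itself is elided, because the exact chunked/resumable-upload API depends on the library and version — recent gcloud-aio-storage versions also expose a file-path-based upload such as `upload_from_filename`, which may sidestep the problem entirely):

```python
import asyncio

CHUNK_SIZE = 8 * 1024 * 1024  # 8 MiB per read keeps every buffer far below 2 GiB


async def stream_file(path, chunk_size=CHUNK_SIZE):
    """Read a file chunk by chunk without blocking the event loop."""
    loop = asyncio.get_running_loop()
    total = 0
    with open(path, "rb") as f:  # binary mode: bytes, not one huge str
        while True:
            chunk = await loop.run_in_executor(None, f.read, chunk_size)
            if not chunk:
                break
            total += len(chunk)  # real code would feed each chunk to the upload
    return total
```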
<python><upload><gcloud>
2023-12-06 02:55:08
0
6,588
Arif
77,610,206
3,121,975
Get run config from Dagster RunFailureSensorContext
<p>I've set up a sensor in Dagster:</p> <pre><code>@sensor(job=excel_to_csv) def convert_unit_list(ctx: SensorEvaluationContext): env = os.getenv(&quot;ENVIRONMENT&quot;) region = os.getenv(&quot;AWS_REGION&quot;) appcfg = AppConfig(&quot;unit-ingestion&quot;, &quot;developer-units&quot;, env, region) bucket = Bucket.LANDING bucket_name = bucket.format(env) prefix = &quot;unit-ingestion/developer-units&quot; ctx.log.info(f&quot;Searching {bucket_name}/{prefix} for new files...&quot;) since_key = ctx.cursor keys = get_s3_keys(bucket_name, prefix, since_key) if not keys: return SkipReason(f&quot;No keys found in {bucket_name}/{prefix}&quot;) run_config = { &quot;s3&quot;: S3Resource(region_name=region), &quot;conv_ctx&quot;: ExcelConversionContext(appcfg, bucket, &quot;unit-ingestion/units&quot;, 1, &quot;B:BV&quot;), } ctx.log.info(f&quot;Found {len(keys)} new keys in {bucket_name}/{prefix}&quot;) for key in keys: yield RunRequest(key, run_config=run_config) ctx.log.info(f&quot;Updating cursor to {key}&quot;) ctx.update_cursor(key) </code></pre> <p>Now, I'd like to write an additional sensor that runs whenever <code>excel_to_csv</code> fails. I've got it stubbed out already:</p> <pre><code>@run_failure_sensor(request_job=excel_to_csv) def excel_to_csv_failure(ctx: RunFailureSensorContext): ctx.log.error(f&quot;{ctx.sensor_name} failed to process {ctx.failure_event.asset_key}&quot;) </code></pre> <p>However, I'd like to get <code>run_config</code> so I can use it to move the failed Excel file to a rejected S3 bucket. I've looked through <code>ctx</code> and it looks like <code>ctx.dagster_event.step_input_data</code> could be what I want here, but I'm not sure, as the documentation doesn't include all the fields for this type. Does anyone know whether this is possible and, if it is, where I can find this data?</p>
<python><dagster>
2023-12-06 02:12:42
1
8,192
Woody1193
77,609,981
7,563,454
Python thread pools not working: pool.map freezes, pool.map_async returns a MapResult that's not iterable
<p>I'm trying to use the <code>multiprocessing.Pool()</code> function as it seems most fit for my project. I've followed several tutorial articles on how it works and attempted their examples. Some that I'm referring to:</p> <p><a href="https://superfastpython.com/threadpool-python" rel="nofollow noreferrer">https://superfastpython.com/threadpool-python</a></p> <p><a href="https://superfastpython.com/multiprocessing-pool-map" rel="nofollow noreferrer">https://superfastpython.com/multiprocessing-pool-map</a></p> <p><a href="https://www.delftstack.com/howto/python/python-pool-map-multiple-arguments" rel="nofollow noreferrer">https://www.delftstack.com/howto/python/python-pool-map-multiple-arguments</a></p> <p><a href="https://www.codesdope.com/blog/article/multiprocessing-using-pool-in-python" rel="nofollow noreferrer">https://www.codesdope.com/blog/article/multiprocessing-using-pool-in-python</a></p> <p>Somehow all examples provided there by seemingly experienced programmers are broken: Whenever I try to run them just as described, my application freezes or errors out. Based on what those resources suggested I compiled the following test:</p> <pre><code>import multiprocessing as mp def double(i): return i * 2 def main(): pool = mp.Pool() for result in pool.map(double, [1, 2, 3]): print(result) main() </code></pre> <p>This causes the application to spawn a lot of processes and then freeze. None of those processes are even using any CPU as if they're stuck processing something: They just sit there idly forever. I tried using <code>pool.map_async</code> instead of <code>pool.map</code> out of curiosity, now I instead get the following error:</p> <pre><code>TypeError: 'MapResult' object is not iterable </code></pre> <p>Strange since none of those examples mentioned there should be a <code>MapResult</code> object that cannot be iterated. 
But one suggested using <code>result.get()</code> instead: I did that and the result is that even with map_async everything just freezes again.</p> <pre><code>pool = mp.Pool() result = pool.map(double, [1, 2, 3]) print(result) </code></pre> <p>Removing the for loop in favor of printing the result directly has the exact same behavior: If I use <code>pool.map</code> everything freezes forever, if I use <code>pool.map_async</code> I'm told it's a <code>MapResult</code> unless I try to print <code>result.get()</code> in which case everything freezes again. The exact same two issues exist when getting the data with either <code>pool.apply(double)</code> or <code>pool.apply_async(double)</code> instead. Any idea what both I and seemingly every tutorial on Python thread pools are doing wrong?</p>
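A likely explanation, offered with the caveat that it depends on the platform: on systems that use the `spawn` start method (Windows, and macOS on recent Pythons), every worker re-imports the main module. Without an `if __name__ == "__main__":` guard, that re-import re-creates the pool recursively — which matches the "lots of idle processes, then a freeze" symptom. (Separately, `map_async` returns an `AsyncResult`/`MapResult` handle; you iterate over `result.get()`, not the handle itself.) A guarded sketch:

```python
import multiprocessing as mp


def double(i):
    # Must be defined at module top level so workers can import it by name
    return i * 2


def run_pool():
    with mp.Pool(2) as pool:
        return pool.map(double, [1, 2, 3])


if __name__ == "__main__":
    # Pool creation happens only in the parent process, never on re-import
    print(run_pool())  # prints [2, 4, 6]
```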
<python><python-3.x><multithreading>
2023-12-06 00:39:53
1
1,161
MirceaKitsune
77,609,841
1,887,919
Fastest way to construct sparse block matrix in python
<p>I want to construct a matrix of shape <code>(N,2N)</code> in Python.</p> <p>I can construct the matrix as follows</p> <pre class="lang-py prettyprint-override"><code>import numpy as np N = 10 # 10,100,1000, whatever some_vector = np.random.uniform(size=N) some_matrix = np.zeros((N, 2*N)) for i in range(N): some_matrix[i, 2*i] = 1 some_matrix[i, 2*i + 1] = some_vector[i] </code></pre> <p>So the result is a matrix of mostly zeros, but where on row <code>i</code>, the <code>2*i</code> and <code>2*i + 1</code> columns are populated.</p> <p>Is there a faster way to construct the matrix without the loop? Feels like there should be some broadcasting operation...</p> <p>Edit: There have been some very fast, great answers! I am going to extend the question a touch to my actual use case.</p> <p>Now suppose <code>some_vector</code> has a shape <code>(N,T)</code>. I want to construct a matrix of shape <code>(N,2*N,T)</code> analogously to the previous case. The naive approach is:</p> <pre class="lang-py prettyprint-override"><code>N = 10 # 10,100,1000, whatever T = 500 # or whatever some_vector = np.random.uniform(size=(N,T)) some_matrix = np.zeros((N, 2*N,T)) for i in range(N): for t in range(T): some_matrix[i, 2*i,t] = 1 some_matrix[i, 2*i + 1,t] = some_vector[i,t] </code></pre> <p>Can we extend the previous answers to this new case?</p>
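The loop can indeed be replaced by fancy indexing with an index vector, and the same trick extends to the `(N, 2N, T)` case because a trailing `:` broadcasts the assignment across the `T` axis. A sketch (for genuinely large `N`, a `scipy.sparse` format may be worth considering instead of a dense array):

```python
import numpy as np

N, T = 10, 5
rows = np.arange(N)

# 2-D case: set columns 2*i and 2*i + 1 on row i in two vectorized assignments
vec = np.random.uniform(size=N)
M = np.zeros((N, 2 * N))
M[rows, 2 * rows] = 1.0
M[rows, 2 * rows + 1] = vec

# 3-D case: same indices, broadcast over the trailing T axis
vec2 = np.random.uniform(size=(N, T))
M3 = np.zeros((N, 2 * N, T))
M3[rows, 2 * rows, :] = 1.0
M3[rows, 2 * rows + 1, :] = vec2
```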
<python><arrays><numpy><array-broadcasting>
2023-12-05 23:46:48
3
923
user1887919
77,609,745
2,446,332
Trouble getting dict.setdefault to work with tuple argument
<pre><code>#!/bin/python import logging import sys logging.basicConfig( format=&quot;%(asctime)s [%(levelname)s] %(name)s - %(message)s&quot;, level=logging.INFO, datefmt=&quot;%Y-%m-%d %H:%M:%S&quot;, stream=sys.stdout, ) logger = logging.getLogger(&quot;mylogger&quot;) cache = {} # Relevant stuff begins here. def cacheDecorator(func): &quot;&quot;&quot; Goal is to decorate a function so that successive calls to it with the same arguments return the cached value, rather than invoking the function again unnecessarily. :param func: :return: cached value when possible &quot;&quot;&quot; def wrapper(*args, **kwargs): # This works global cache if args in cache: return cache.get(args) else: retVal = func(*args, **kwargs) cache[args] = retVal return retVal def wrapper2(*args, **kwargs): # This doesn't work - it always invokes func even on successive # calls with the same input arguments passed to f1. global cache return cache.setdefault(args,func(*args, **kwargs) ) #return wrapper # Causes f1 to be executed only once, as desired return wrapper2 # Causes f1 to be executed again even with same arguments. Why?? @cacheDecorator def f1 (a, b, c, /): logger.info('Inside f1') result = a * b * c return result logger.info (f'Result from executing f1(1,2,3) = {f1(1,2,3)}') logger.info (f'Result from executing f1(1,2,3) again = {f1(1,2,3)}') sys.exit(0) </code></pre> <p>I'm trying to write a decorator in Python that will cache the results of a function. When my <code>cacheDecorator</code> function returns <code>wrapper</code>, it does not make a second call to <code>f1</code>, but when <code>cacheDecorator</code> returns <code>wrapper2</code> instead, which attempts to make use of the dictionary's <code>setdefault</code> method, <code>wrapper2</code> always invokes the function <code>f1</code>, and I don't understand why.</p>
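The short explanation: Python evaluates all arguments before a call is made, so `func(*args, **kwargs)` runs before `setdefault` ever checks whether `args` is already a key. `setdefault` only controls whether the already-computed value is *stored*, not whether it is computed. A self-contained demonstration (names invented):

```python
calls = []


def expensive(x):
    calls.append(x)  # record every real invocation
    return x * 2


cache = {}
# Both setdefault calls evaluate expensive(3) *before* the key lookup,
# so the function runs twice even though the second value is discarded.
cache.setdefault((3,), expensive(3))
cache.setdefault((3,), expensive(3))
print(cache)   # {(3,): 6}
print(calls)   # [3, 3] -- expensive ran both times
```

The explicit membership test in `wrapper` (or, for this exact use case, `functools.lru_cache`) avoids the eager evaluation.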
<python><dictionary><setdefault>
2023-12-05 23:16:48
1
1,363
Lyle Z
77,609,499
4,537,160
Python - import issue, how is path entry determined?
<p>I have this folder structure:</p> <pre><code>main_folder/ -- tests/ ---- test01.py -- some_package/ </code></pre> <p>with the file test01.py containing the import statement</p> <pre><code>import some_package </code></pre> <p>I'm running test01.py from <code>main_folder</code>:</p> <pre><code>python tests/test01.py </code></pre> <p>so I would expect that the working directory is <code>main_folder</code>, and that the script should see <code>some_package</code>, which is in that same folder, but I'm getting the error:</p> <pre><code>ModuleNotFoundError: No module named 'some_package' </code></pre> <p>I tried adding these commands at the beginning of test01.py:</p> <pre><code>os.getcwd() -&gt; returns main_folder sys.path -&gt; returns main_folder/tests </code></pre> <p>So, the issue is that the path entry is not what I expected (<code>main_folder/tests</code> and not <code>main_folder</code>), but why is this happening? Should this not be determined by the location from where I launch the script?</p>
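The key fact is that `sys.path[0]` is set from the *script's own directory*, not from the current working directory — which is why `main_folder/tests` appears. Running the file as a module from `main_folder` (`python -m tests.test01`) puts `main_folder` on `sys.path` instead. The mechanism can be shown with a throwaway package (hypothetical paths, stdlib only):

```python
import importlib
import os
import sys
import tempfile

# Build a throwaway package somewhere not yet importable
tmp = tempfile.mkdtemp()
pkg_dir = os.path.join(tmp, "some_package")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("VALUE = 42\n")

# Imports resolve against sys.path entries; adding the *parent* of the
# package is effectively what `python -m` does for the invocation directory.
sys.path.insert(0, tmp)
mod = importlib.import_module("some_package")
print(mod.VALUE)  # 42
```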
<python><python-import>
2023-12-05 22:08:19
2
1,630
Carlo
77,609,489
8,167,752
python-pptx: How do I enter the title text on a "Section Header" slide
<p>I'm using the pptx module to create a PowerPoint &quot;Section Header&quot; slide, where I then enter the section title.</p> <pre><code>import pptx from pptx import Presentation prs = Presentation() mySectionHeaderSlide = prs.slides.add_slide(prs.slide_layouts[2]) mySectionHeaderSlide.shapes.title.text = &quot;This is my section header title.&quot; prs.save('SectionHeaderDemo.pptx') </code></pre> <p>When I do this, my section header title is in the region I'd consider the subtitle box of the Section Header slide, not the title box. That is, the bottom box of the two boxes, not the top box.</p> <p>How do I get my section header title to be in the top box of the Section Header slide?</p> <p>My apologies for any typos in my code examples. For corporate reasons, I can't access StackOverflow on the same system I do my Python programming, so I can't cut-and-paste.</p>
<python><powerpoint><python-pptx>
2023-12-05 22:07:42
0
477
BobInBaltimore
77,609,454
5,839,462
fastapi: use middleware to generate a result in which the original result is included as an element
<p>There is this code:</p> <pre><code>from fastapi import FastAPI, Request from fastapi.responses import JSONResponse app = FastAPI() @app.middleware(&quot;http&quot;) async def response_middleware(request: Request, call_next): # execute the request results = await call_next(request) return JSONResponse(content={ 'response': results }) @app.post(&quot;/test/&quot;) async def test_func(): results = { 'test': 10 } return results </code></pre> <p>but this error is generated:</p> <pre><code>raise TypeError(f'Object of type {o.__class__.__name__} ' TypeError: Object of type _StreamingResponse is not JSON serializable </code></pre> <p>How do I correctly pass <code>results</code> as an element of the output dictionary?</p> <p>Is <code>JSONResponse</code> necessary at all?</p> <p>I thought it could be done simply:</p> <pre><code>async def response_middleware(request: Request, call_next): # execute request results = await call_next(request) return { 'response': results } </code></pre> <p>but that does not work either.</p>
<python><python-3.x><fastapi><middleware><fastapi-middleware>
2023-12-05 21:58:30
0
1,448
Zhihar
77,609,401
6,272,006
YTM different to zero rates on zero-coupon bonds and issues with settlement days on discounting in QuantLib Python
<p>I have bootstrapped the the yield curve using both zero-coupon bonds and fixed coupon bonds using the code below;</p> <pre><code># Importing Libraries: # The code imports necessary libraries: # pandas for data manipulation, matplotlib.pyplot for plotting, and QuantLib (ql) for quantitative finance calculations. import pandas as pd import matplotlib.pyplot as plt # Use the QuantLib or ORE Libraries import QuantLib as ql # Setting Evaluation Date: # Sets the evaluation date to May 31, 2023, for all subsequent calculations. today = ql.Date(21, ql.November, 2023) ql.Settings.instance().evaluationDate = today # Calendar and Day Count: # Creates a calendar object for South Africa and specifies the day-count convention (Actual/365 Fixed) calendar = ql.NullCalendar() day_count = ql.Actual365Fixed() # Settlement Days: zero_coupon_settlement_days = 4 coupon_bond_settlement_days = 3 # Face Value faceAmount = 100 data = [ ('11-09-2023', '11-12-2023', 0, 99.524, zero_coupon_settlement_days), ('11-09-2023', '11-03-2024', 0, 96.539, zero_coupon_settlement_days), ('11-09-2023', '10-06-2024', 0, 93.552, zero_coupon_settlement_days), ('11-09-2023', '09-09-2024', 0, 89.510, zero_coupon_settlement_days), ('22-08-2022', '22-08-2024', 9.0, 96.406933, coupon_bond_settlement_days), ('27-06-2022', '27-06-2025', 10.0, 88.567570, coupon_bond_settlement_days), ('27-06-2022', '27-06-2027', 11.0, 71.363073, coupon_bond_settlement_days), ('22-08-2022', '22-08-2029', 12.0, 62.911623, coupon_bond_settlement_days), ('27-06-2022', '27-06-2032', 13.0, 55.976845, coupon_bond_settlement_days), ('22-08-2022', '22-08-2037', 14.0, 52.656596, coupon_bond_settlement_days)] helpers = [] for issue_date, maturity, coupon, price, settlement_days in data: price = ql.QuoteHandle(ql.SimpleQuote(price)) today = ql.Date(21, ql.November, 2023) issue_date = ql.Date(issue_date, '%d-%m-%Y') maturity = ql.Date(maturity, '%d-%m-%Y') schedule = ql.Schedule(today, maturity, ql.Period(ql.Semiannual), calendar, 
ql.DateGeneration.Backward, ql.Following, ql.DateGeneration.Backward, False) helper = ql.FixedRateBondHelper(price, settlement_days, faceAmount, schedule, [coupon / 100], day_count, False) helpers.append(helper) curve = ql.PiecewiseCubicZero(today, helpers, day_count) # Enable Extrapolation: # This line enables extrapolation for the yield curve. # Extrapolation allows the curve to provide interest rates or rates beyond the observed data points, # which can be useful for pricing or risk management purposes. curve.enableExtrapolation() # Zero Rate and Discount Rate Calculation: # Calculates and prints the zero rate and discount rate at a specific # future date (May 28, 2048) using the constructed yield curve. date = ql.Date(11, ql.December, 2023) zero_rate = curve.zeroRate(date, day_count, ql.Annual).rate() forward_rate = curve.forwardRate(date, date + ql.Period(1, ql.Years), day_count, ql.Annual).rate() discount_rate = curve.discount(date) print(&quot;Zero rate as at 28.05.2048: &quot; + str(round(zero_rate*100, 4)) + str(&quot;%&quot;)) print(&quot;Forward rate as at 28.05.2048: &quot; + str(round(forward_rate*100, 4)) + str(&quot;%&quot;)) print(&quot;Discount factor as at 28.05.2048: &quot; + str(round(discount_rate, 4))) # Print the Zero Rates, Forward Rates and Discount Factors at node dates # print(pd.DataFrame(curve.nodes())) node_data = {'Date': [], 'Zero Rates': [], 'Forward Rates': [], 'Discount Factors': []} for dt in curve.dates(): node_data['Date'].append(dt) node_data['Zero Rates'].append(curve.zeroRate(dt, day_count, ql.Annual).rate()) node_data['Forward Rates'].append(curve.forwardRate(dt, dt + ql.Period(1, ql.Years), day_count, ql.Annual).rate()) node_data['Discount Factors'].append(curve.discount(dt)) node_dataframe = pd.DataFrame(node_data) print(node_dataframe) node_dataframe.to_excel('NodeRates.xlsx') # Printing Daily Zero Rates: # Prints the daily zero rates from the current date (May 31, 2023) to a maturity date that is 30 # years later. 
# years later. It calculates and prints the zero rates for each year using the constructed yield curve. maturity_date = calendar.advance(today, ql.Period(1, ql.Years)) current_date = today while current_date &lt;= maturity_date: zero_rate = curve.zeroRate(current_date, day_count, ql.Annual).rate() print(f&quot;Date: {current_date}, Zero Rate: {zero_rate}&quot;) current_date = calendar.advance(current_date, ql.Period(1, ql.Years)) # Creating Curve Data for Plotting: # Creates lists of curve dates, zero rates, and forward rates for plotting. # It calculates both zero rates and forward rates for each year up to 25 years from the current date. bond_results = {'Issue Date': [], 'Maturity Date': [], 'Coupon Rate': [], 'Price': [], 'Settlement Days': [], 'Yield': [], 'Zero Rate': [], 'Discount Factor': [], 'Clean Price': [], 'Dirty Price': []}
bond_results['Discount Factor'].append(discount_factor) bond_results['Clean Price'].append(bondCleanPrice) bond_results['Dirty Price'].append(bondDirtyPrice) # Create a DataFrame from the bond results bond_results_df = pd.DataFrame(bond_results) # Print the results print(bond_results_df) bond_results_df.to_excel('BondResults.xlsx') </code></pre> <p>I have the following questions:</p> <ol> <li><p>The yield to maturity that I am getting for the first 4 zero-coupon bonds is slightly different from the zero (or spot) rates. My expectation is that the yield to maturity (YTM) and the zero rates for the first 4 zero-coupon bonds should be the same. What is causing the slight differences, and how can I resolve this?</p></li> <li><p>In trying to manually calculate the prices of the first 4 zero-coupon bonds by simply discounting the face amount of 100 using the YTM (which should be similar to the zero rates) and the accrual period, I have noted that I need to adjust for the settlement days (T+4 in this case) for the zero-coupon bonds. My understanding of settlement days is that, in this case, the bond is settled 4 days after maturity; hence this should increase the accrual (or discounting) period by 4 days. Surprisingly, to get to the same price for the zero-coupon bonds, I have noticed that I have to reduce the accrual period by 4 days instead of increasing it. Am I missing something, or is this not correct?</p></li> </ol> <p>The final results will be in the BondResults.xlsx workbook.</p>
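On question (1), a QuantLib-free sanity check may help: the YTM of a zero-coupon bond equals the curve's zero rate only when both are computed with the same compounding, the same day count, and — crucially — the same start date (settlement date rather than trade date). The sketch below uses the 99.524 quote from the data with an *illustrative* 91-day tenor (not taken from the actual schedule); shifting the start by the 4 settlement days shortens the remaining accrual period and moves the implied rate:

```python
# Price of a zero-coupon bond and its implied annually compounded
# zero rate under Actual/365 Fixed, measured from a chosen start date.
def zero_price(rate, days, face=100.0):
    return face / (1.0 + rate) ** (days / 365.0)


def implied_zero_rate(price, days, face=100.0):
    t = days / 365.0
    return (face / price) ** (1.0 / t) - 1.0


# Illustrative: a 91-day zero quoted at 99.524
r = implied_zero_rate(99.524, 91)

# Measuring from settlement (4 days later) leaves only 87 days of accrual,
# so the same price implies a higher rate — one source of YTM/zero-rate gaps.
r_settled = implied_zero_rate(99.524, 91 - 4)
```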
<python><quantitative-finance><quantlib>
2023-12-05 21:43:07
1
303
ccc
77,609,289
18,366,396
Update firebase messaging using python and django
<p>I am using the Firebase Admin SDK in Python.</p> <pre><code># import sdk import firebase_admin from firebase_admin import credentials, messaging # saved json credentials from google in a private storage json_file = settings.PRIVATE_STORAGE_JSON_FIREBASE + &quot;/&quot; + &quot;firebase_json/&quot; + &quot;firebase_json.json&quot; cred = credentials.Certificate(json_file) # initialize app = firebase_admin.initialize_app(cred, name=&quot;app&quot;) # send push message def send_message(title, msg, registration_token): message = messaging.MulticastMessage( notification=messaging.Notification( title=title, body=msg), tokens=registration_token ) </code></pre> <p>OK, so far so good — this is working fine. However, I received an email from Google: &quot;Your recent usage of impacted APIs/features: <code>Admin SDK(batch send API)</code>&quot;. I should resolve it: &quot;Send messages via the HTTP v1 API, which has been optimized for fanout performance&quot;.</p> <p>Questions:</p> <ol> <li><p>To my understanding I am using the <code>HTTP v1</code> API — or am I not?</p> </li> <li><p>Now, I am aware that in my message sending I use multicast (this is <code>batch sending</code>, to multiple users). I don't need batch sending here; I only have to send to one user/token. If I change the sending function, would this be enough to resolve Google's requirements?</p> </li> </ol> <pre><code>message = messaging.Message( # here the change (only send to one token, instead of a batch) notification=messaging.Notification( title=title, body=msg), token=registration_token ) </code></pre>
<python><firebase-cloud-messaging><firebase-admin>
2023-12-05 21:18:06
1
841
saro
77,609,191
14,012,215
How do I filter using the external Odoo API?
<p>My code is as such</p> <pre><code> search_domain = [ '&amp;', ('product_id', '=', 'Product_x'), '&amp;', ('lot_id', '=', 'AG:04/10/2023:UID520:6666'), ('on_hand', '=', True), ] # Use the search domain to find the specific location lookuplocationbasedonname = models.execute_kw(db, uid, password, 'stock.quant', 'search_read', [search_domain]) pprint(lookuplocationbasedonname) </code></pre> <p>The search returns a stock.quant record that has the following.</p> <pre><code> 'on_hand': False, </code></pre> <p>I have tried multiple ways of filtering, the Odoo docs don't make the filtering clear.</p>
<python><odoo>
2023-12-05 20:55:48
2
433
apexprogramming
77,609,189
678,572
How to use Nextflow to call scripts from different environments?
<p>I have the following conda environments:</p> <ul> <li><code>wf-preprocess_env</code></li> <li><code>wf-assembly_env</code></li> </ul> <p>Each environment has unique dependencies installed. I have 3 scripts:</p> <ul> <li><code>preprocess.py</code> which I use with the <code>wf-preprocess_env</code> environment</li> <li><code>assembly.py</code> and <code>assembly-long.py</code> which I use with <code>wf-assembly_env</code></li> </ul> <p><strong>How can I use Nextflow to achieve functionality similar to this?</strong></p> <p><code>wf-wrapper preprocess --flags</code> where <code>wf-wrapper</code> is a wrapper around Nextflow that allows me to have different modules that call different scripts.</p> <p>In the cases listed above,</p> <ul> <li><p><code>wf-wrapper preprocess [--flags]</code> would call the <code>preprocess.py</code> script (and all the dependencies) that are in the bin of <code>wf-preprocess_env</code>. I would also be able to provide it with different --flags such as -h for help or the arguments that are required to run (e.g., <code>-o/--output_directory</code>)</p> </li> <li><p>Similarly, <code>wf-wrapper assembly [--flags]</code> would call the <code>assembly.py</code> script and <code>wf-wrapper assembly-long [--flags]</code> would call the <code>assembly-long.py</code> script, both within the bin of <code>wf-assembly_env</code>.</p> </li> </ul> <p>My questions:</p> <ul> <li>How can I structure my main.nf Nextflow file to link a module with a specific script and specific environment to load the dependencies?</li> <li>Is it possible to wrap the main.nf file (e.g., wf-wrapper.nf) or is the only possibility to use the following notation: <code>nextflow run wf-wrapper.nf --module preprocess [--flags]</code>?</li> </ul> <p>Note: At this point I'm not trying to write an entire pipeline in Nextflow, just to wrap existing scripts in Nextflow so I can easily access the conda environments in the backend.</p> <p>My current code is the following:</p> <pre><code>#!/usr/bin/env 
nextflow

// Define available modules
modules = ['preprocess', 'assembly', 'assembly-long']

// Parse command line options
opts = parseOpts()

// Check if a valid module is provided
if (!opts.module || !(opts.module in modules)) {
    echo &quot;Invalid module. Available modules: ${modules.join(', ')}&quot;
    exit 1
}

// Define the process to execute the specified module
process wrapperScript {
    // Set the Conda environment based on the provided module
    conda &quot;wf-${opts.module}_env&quot;

    // Define the command to run the script with flags
    script:
    &quot;&quot;&quot;
    # Assuming your scripts are in the bin directory of the Conda environment
    ${opts.module}.py ${opts.flags}
    &quot;&quot;&quot;
}

// Execute the wrapperScript process
workflow {
    call wrapperScript {
        // Pass module and flags as input parameters
        input:
        module opts.module
        flags opts.flags
    }
}
</code></pre> <p>But when I call Nextflow run it just gives me the Nextflow help:</p> <pre><code>nextflow run wf-wrapper.nf --module preprocess -h

Execute a pipeline project
Usage: run [options] Project name or repository url
  Options:
    -E
       Exports all current system environment
       Default: false
....
</code></pre>
<python><anaconda><conda><pipeline><nextflow>
2023-12-05 20:55:20
0
30,977
O.rka
77,608,787
8,849,755
Python ctypes copy structure located in temporary buffer
<p>I have a code that looks like this:</p> <pre class="lang-py prettyprint-override"><code>from ctypes import * some_library = CDLL('/usr/lib/black_box.so') # This `Group` comes from `some_library` class Group(Structure): _fields_ = [ (&quot;ChSize&quot;, c_uint32*9), (&quot;DataChannel&quot;, POINTER(c_float)*9), (&quot;TriggerTimeLag&quot;, c_uint32), (&quot;StartIndexCell&quot;, c_uint16)] class Thing(Structure): _fields_ = [ (&quot;A&quot;, c_uint8*4), (&quot;B&quot;, Group*4)] pointer_to_thing = POINTER(Thing)() def get_things(): some_library.allocate_memory() some_library.read_things() things = [] for n in range(some_library.how_many_things_did_your_read()): some_library.decode_next_thing() pointer_to_thing = some_library.give_me_the_thing() thing = pointer_to_thing.contents things.append(thing) some_library.release_memory() return things things = get_things() </code></pre> <p>which fails because <code>some_library.give_me_the_thing()</code> is giving me a reference to the thing and <code>some_library.release_memory()</code> is releasing the memory where the thing is. If I comment <code>some_library.release_memory()</code> it works, but it is not how it is supposed to be done. I must release the memory before returning from my function, so I would like to make a copy of the thing and return this copy, like this:</p> <pre class="lang-py prettyprint-override"><code>def get_things(): some_library.allocate_memory() some_library.read_things() things = [] for n in range(some_library.how_many_things_did_your_read()): some_library.decode_next_thing() pointer_to_thing = some_library.give_me_the_thing() thing = pointer_to_thing.contents copy_of_thing = Thing() memmove( addressof(copy_of_thing), addressof(thing), sizeof(Thing), ) things.append(copy_of_thing) some_library.release_memory() return things </code></pre> <p>However, it always returns a <code>Thing</code> with not the correct data. 
If <code>some_library.release_memory()</code> is commented, then it works as expected. It's like it is not copying the contents at all...</p> <p>I have also tried</p> <pre class="lang-py prettyprint-override"><code>def get_things(): some_library.allocate_memory() some_library.read_things() things = [] for n in range(some_library.how_many_things_did_your_read()): some_library.decode_next_thing() pointer_to_thing = some_library.give_me_the_thing() thing = pointer_to_thing.contents copy_of_thing = Thing.copy_from_buffer(thing) things.append(copy_of_thing) some_library.release_memory() return things </code></pre> <p>and the behavior is exactly the same, i.e. if I release the memory it fails, if I don't it works.</p> <p>Unfortunately <code>some_library</code> is a complicated black box and I cannot easily create a MWE to post here.</p> <p>One detail, that may be relevant, is that if I insert the following <code>print</code>s:</p> <pre class="lang-py prettyprint-override"><code>def get_things(): some_library.allocate_memory() some_library.read_things() things = [] for n in range(some_library.how_many_things_did_your_read()): some_library.decode_next_thing() pointer_to_thing = some_library.give_me_the_thing() thing = pointer_to_thing.contents copy_of_thing = Thing.copy_from_buffer(thing) print(thing, 'original') print(copy_of_thing, 'copy') print('-----------') things.append(copy_of_thing) some_library.release_memory() return things </code></pre> <p>I get this:</p> <pre><code>&lt;Thing object at 0x7f8d6f027840&gt; original &lt;Thing object at 0x7f8d6f0278c0&gt; copy -------- &lt;Thing object at 0x7f8d6f027940&gt; original &lt;Thing object at 0x7f8d6f027840&gt; copy -------- &lt;Thing object at 0x7f8d6f0279c0&gt; original &lt;Thing object at 0x7f8d6f027940&gt; copy -------- </code></pre> <p>and I see it is placing the copies in memory addresses from the original things, which (I guess) should be in the memory that is handled by <code>some_library</code>, thus when it is 
released, the copies are also destroyed and thus I get empty objects?</p>
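The copy-then-release pattern itself can be verified without <code>some_library</code>. The sketch below uses a hypothetical <code>Point</code> struct (no external library): it copies a structure with <code>memmove</code> and then clobbers the original, showing the copy is independent. Note that the ctypes classmethod for this is spelled <code>from_buffer_copy</code>, not <code>copy_from_buffer</code>.

```python
import ctypes

# A stand-in for the library-owned structure (hypothetical, for illustration).
class Point(ctypes.Structure):
    _fields_ = [("x", ctypes.c_int), ("y", ctypes.c_int)]

src = Point(3, 4)          # pretend this lives in library-managed memory
dst = Point()              # our own, Python-managed copy

# Byte-for-byte copy into memory that Python owns.
ctypes.memmove(ctypes.addressof(dst), ctypes.addressof(src), ctypes.sizeof(Point))

# Simulate the library releasing/overwriting its buffer.
src.x, src.y = 0, 0

print(dst.x, dst.y)        # the copy still holds 3 4

# The classmethod equivalent (copies from any object supporting the buffer API):
dst2 = Point.from_buffer_copy(bytes(src))
print(dst2.x, dst2.y)      # 0 0, since it was copied after src was cleared
```

One caveat that matters for the <code>Group</code> struct in the question: for fields like <code>POINTER(c_float)*9</code> this is a shallow copy. The pointer values are duplicated, but the floats they point to still live in the library's memory and must themselves be copied out (e.g. into arrays or numpy buffers) before <code>release_memory()</code> is called.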
<python><copy><ctypes>
2023-12-05 19:23:52
1
3,245
user171780
77,608,756
8,030,746
Why my scraping with Selenium is not working on Digital Ocean droplet?
<p>I'm working with Digital Ocean and droplets for the first time, and I can't get my Selenium script to work. At first I was getting the <code>DevToolsActivePort file doesn't exist</code> error, however, now my script is just not returning anything. It's not actually finishing at all. I tried adding ports and specifying locations of the chromium-browser. And nothing seems to be working.</p> <p>This is my code:</p> <pre><code>options = Options() options.add_argument(&quot;start-maximized&quot;) options.add_argument('--headless') options.binary_location = &quot;/usr/bin/chromium-browser&quot; options.add_argument('--user-data-dir=/home/username/myproject') options.add_argument(&quot;--remote-debugging-port=9222&quot;) driver = webdriver.Chrome(options=options) base_url = 'https://www.wikipedia.org/' driver.get(base_url) table_rows = driver.find_element(By.CSS_SELECTOR, &quot;.footer-sidebar-text&quot;) text = table_rows.text print(text) driver.quit() </code></pre> <p>For context, if it helps, the code works locally with just this:</p> <pre><code>options = Options() driver = webdriver.Chrome(options=options) driver.maximize_window() </code></pre> <p>What do I need to do to fix this? Thank you!</p> <p><strong>EDIT:</strong> Just to add a note for anyone having the same issue. Follow Barry's code below. But before that, make sure your droplet has enough memory, so Chrome can be installed properly. I had to resize mine to 1GB memory and that solved the issues and errors.</p>
<python><selenium-webdriver><digital-ocean>
2023-12-05 19:17:18
2
851
hemoglobin
77,608,755
3,861,775
Computing gradients for Module class in JAX
<p>I created a tiny test case that implements a PyTorch inspired module class in JAX. The forward method works as expected. However, when I try to compute the gradients for the weights in the linear layer, I end up with a single gradient that probably belongs to the loss. How do I compute the gradients for the weights in my classes in JAX?</p> <p>Here is a minimal working example:</p> <pre><code>import jax import jax.numpy as jnp class Module: def __init__(self) -&gt; None: pass def __call__(self, inputs: jax.Array): return self.forward(inputs) class Linear(Module): def __init__(self, key: jax.Array, in_features: int, out_features: int) -&gt; None: super().__init__() self.in_features = in_features self.out_features = out_features key_w, key_b = jax.random.split(key=key, num=2) self.weights = jax.random.normal(key=key_w, shape=(out_features, in_features)) self.biases = jax.random.normal(key=key_b, shape=(out_features,)) def forward(self, inputs: jax.Array) -&gt; jax.Array: out = jnp.dot(self.weights, inputs) + self.biases return out class Activation(Module): def __init__(self) -&gt; None: super().__init__() pass def forward(self, inputs: jax.Array) -&gt; jax.Array: return jax.nn.sigmoid(inputs) class Model(Module): def __init__(self, key: jax.Array, in_features: int, out_features: int) -&gt; None: super().__init__() self.linear = Linear(key=key, in_features=in_features, out_features=out_features) self.activation = Activation() def forward(self, inputs: jax.Array) -&gt; jax.Array: out = self.linear(inputs) out = self.activation(out) return out def criterion(output: jax.Array, target: jax.Array): return ((target - output) ** 2).sum() if __name__ == &quot;__main__&quot;: in_features: int = 4 out_features: int = 1 key = jax.random.PRNGKey(67) model = Model(key=key, in_features=in_features, out_features=out_features) key = jax.random.PRNGKey(68) data = jax.random.normal(key=key, shape=(in_features,)) out = model(data) print(f&quot;{out = }&quot;) loss = 
criterion(output=out, target=2) print(f&quot;{loss = }&quot;) target = jnp.array([2.0]) grads = jax.grad(criterion)(out, target) print(f&quot;{grads = }&quot;) </code></pre>
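In JAX the gradient is taken with respect to whatever argument you differentiate, and here <code>jax.grad(criterion)(out, target)</code> differentiates with respect to <code>out</code>, not the weights. The idiomatic fix is to hold parameters in a pytree and make the loss a pure function of them. Below is a minimal sketch of that pattern; it is a functional rewrite of the model above, not a drop-in patch for the class hierarchy.

```python
import jax
import jax.numpy as jnp

# Parameters live in a pytree (here a dict); the loss is a pure function of them.
def forward(params, x):
    return jax.nn.sigmoid(jnp.dot(params["w"], x) + params["b"])

def loss_fn(params, x, target):
    out = forward(params, x)
    return ((target - out) ** 2).sum()

key = jax.random.PRNGKey(67)
kw, kb, kx = jax.random.split(key, 3)
params = {
    "w": jax.random.normal(kw, (1, 4)),   # out_features x in_features
    "b": jax.random.normal(kb, (1,)),
}
x = jax.random.normal(kx, (4,))
target = jnp.array([2.0])

# jax.grad differentiates w.r.t. the first argument, so we get one gradient
# array per parameter, each with the same shape as the parameter itself.
grads = jax.grad(loss_fn)(params, x, target)
print(grads["w"].shape, grads["b"].shape)  # (1, 4) (1,)
```

Libraries such as Flax and Equinox wrap this same params-as-pytree pattern behind PyTorch-style module classes.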
<python><jax>
2023-12-05 19:17:04
1
3,656
Gilfoyle
77,608,700
3,742,823
pyspark limit number of parallel connections to an API endpoint
<p>In PySpark, how can I limit the number of parallel connections to an API endpoint?</p> <p>Let's say I start my PySpark session with 32 executors; however, I want to hit the API with only 8 concurrent connections. How can I do so?</p> <p>Some sample code:</p> <pre><code>import requests from pyspark.sql.functions import udf from pyspark.sql.types import StringType data = [(&quot;A&quot;, &quot;url1&quot;), (&quot;B&quot;, &quot;url2&quot;), (&quot;C&quot;, &quot;url3&quot;)] columns = [&quot;col1&quot;, &quot;col2&quot;] df = spark.createDataFrame(data, columns) def api_udf(url): response = requests.get(url) if response.status_code == 200: return response.json() else: return None api_udf_spark = udf(api_udf, StringType()) # Q. How to limit concurrent connections when applying UDF? result_df = df.withColumn(&quot;api_response&quot;, api_udf_spark(df[&quot;col2&quot;])) result_df.show(truncate=False) </code></pre>
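Spark does not expose a direct "max concurrent UDF calls" knob. Two common workarounds (general patterns, not Spark-specific APIs): repartition the DataFrame to 8 partitions before applying the UDF, so at most 8 tasks (and hence 8 connections) run that stage at once; or rate-limit inside each executor with a semaphore. The semaphore idea, sketched without Spark:

```python
import threading
import time

MAX_CONCURRENT = 2           # stand-in for the 8-connection budget
sem = threading.BoundedSemaphore(MAX_CONCURRENT)

peak = 0                     # highest number of simultaneous "calls" observed
active = 0
lock = threading.Lock()

def fake_api_call(url):
    global peak, active
    with sem:                # at most MAX_CONCURRENT threads pass at once
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)     # simulate network latency
        with lock:
            active -= 1

threads = [threading.Thread(target=fake_api_call, args=(f"url{i}",))
           for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)                  # never exceeds MAX_CONCURRENT
```

In Spark the partition-count variant usually looks like `df.repartition(8)` followed by the UDF or a `mapPartitions` call, so each of the 8 tasks holds one connection at a time; the semaphore variant only bounds concurrency within a single executor process, not across the cluster.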
<python><apache-spark><pyspark>
2023-12-05 19:06:29
0
3,281
The Wanderer
77,608,459
1,968
See live changes when editing code across multiple dependent projects in a VS Code Workspace
<p>I am developing a Python application (<code>app</code>), which relies on an internal Python library (<code>lib</code>) that is developed as a separate project. The main application depends on the library via a Git reference (<code>pip install -e git+https://โ€ฆ</code> or equivalent for Poetry). Both <code>app</code> and <code>lib</code> are developed in parallel by the same team.</p> <p>I am using a <a href="https://code.visualstudio.com/docs/editor/multi-root-workspaces" rel="nofollow noreferrer">multi-root workspace</a> in VS Code for development, and I have created a parent folder in which both projects are checked out in subfolders, and with a <code>product.code-workspace</code> definition file as follows:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;folders&quot;: [ { &quot;path&quot;: &quot;app&quot; }, { &quot;path&quot;: &quot;lib&quot; } ] } </code></pre> <p><code>lib</code> contains a single file <code>lib/__init__.py</code>:</p> <pre class="lang-py prettyprint-override"><code>def hello() -&gt; str: return &quot;hello world&quot; </code></pre> <p><code>app</code> contains the following <code>app/__init__.py</code>:</p> <pre class="lang-py prettyprint-override"><code>import lib def main(): print(lib.hello()) if __name__ == '__main__': main() </code></pre> <p>Both projects are set up in their own virtual environment.</p> <p><strong>When debugging <code>app</code>, how can I tell VS Code to load the <em>actual</em>, live code from the <code>lib</code> subfolder rather than the dependency installed in the virtual environment of <code>app</code>?</strong></p> <p>In other words, if I change the <code>lib/__init__.py</code> file above to:</p> <pre class="lang-py prettyprint-override"><code>def hello() -&gt; str: return &quot;goodbye&quot; </code></pre> <p>And run <code>app</code> in VS Code, how do I get it to print the output <code>goodbye</code> instead of <code>hello world</code> <em>without</em> having to commit &amp; push the code 
changes in <code>lib</code> and reinstalling the dependency in <code>app</code>?</p> <p>A workaround is to set the <code>PYTHONPATH</code> environment variable in <code>app</code>โ€™s <code>launch.json</code> configuration (<code>&quot;${workspaceFolder}/../lib&quot;</code>). However, I cannot use this because I have <em>multiple</em> project dependencies (<code>lib1</code>, <code>lib2</code>, โ€ฆ) so I would have to assign multiple directories to <code>PYTHONPATH</code>. But unfortunately the syntax for this differs across platform (Windows uses <code>;</code> as path separator, whereas Linux and macOS use <code>:</code>), and I would prefer having a cross-platform launch configuration that can be fully committed to version control.</p> <p>I know that you can have joint launch configurations in the workspace (and in fact I am using them!) but I couldnโ€™t find an option here to specify that VS Code should use โ€œliveโ€, local dependencies rather than installed dependencies when launching the application.</p> <hr /> <p>For completeness, I have uploaded an MCVE for the two projects to GitHub. To set it up, run:</p> <pre class="lang-bash prettyprint-override"><code>mkdir test-app &amp;&amp; cd $_ git clone git@github.com:klmr/py-test-app app git clone git@github.com:klmr/py-test-lib lib cd app python3 -m venv .venv .venv/bin/pip install -r requirements.txt </code></pre> <p>Next, add the <code>product.code-workspace</code> configuration from above to the main folder, and launch it in VS Code. Now navigate to <code>lib/lib/__init__.py</code>, edit the function <code>hello</code> to return a different value, and save.</p> <p>Launch <code>app</code> via <kbd>F5</kbd>. It prints <code>hello world</code>, not the changed value.</p>
<python><visual-studio-code><vscode-workspace>
2023-12-05 18:19:39
2
549,077
Konrad Rudolph
77,608,450
11,739,577
Web scraping shows 403 error but website is available
<p>I am trying to scrape CS2 skin prices from <a href="https://pricempire.com/search?q=ursus" rel="nofollow noreferrer">https://pricempire.com/search?q=ursus</a>. From Inspect in Chrome I find <a href="https://api.pricempire.com/v2/search/items?page=1&amp;priceMin=0&amp;priceMax=1000000&amp;orderBy=price_desc&amp;priceProvider=gamerpay&amp;q=ursus&amp;attr1=undefined&amp;attr2=undefined&amp;appId=730" rel="nofollow noreferrer">https://api.pricempire.com/v2/search/items?page=1&amp;priceMin=0&amp;priceMax=1000000&amp;orderBy=price_desc&amp;priceProvider=gamerpay&amp;q=ursus&amp;attr1=undefined&amp;attr2=undefined&amp;appId=730</a> where in Console I see the information needed but when I try to scrape the site, it says 403.</p> <p>I use the following code based on other answers (<a href="https://stackoverflow.com/questions/48756326/web-scraping-results-in-403-forbidden-error">Web scraping results in 403 Forbidden Error</a>) where people see a 403 error only when web scraping:</p> <pre><code>import requests from bs4 import BeautifulSoup import pandas as pd # URL of the webpage to scrape url = 'https://api.pricempire.com/v2/search/items?page=1&amp;priceMin=0&amp;priceMax=1000000&amp;orderBy=price_desc&amp;priceProvider=gamerpay&amp;q=ursus&amp;attr1=undefined&amp;attr2=undefined&amp;appId=730' # Replace with the URL of the page you want to scrape headers = { &quot;authority&quot;: &quot;api.pricempire.com&quot;, &quot;method&quot;:&quot;GET&quot;, &quot;Sec-Ch-Ua&quot;:'&quot;Google Chrome&quot;;v=&quot;119&quot;, &quot;Chromium&quot;;v=&quot;119&quot;, &quot;Not?A_Brand&quot;;v=&quot;24&quot;', &quot;User-Agent&quot;: &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36&quot;, &quot;Cookie&quot;:&quot;_ym_uid=1690206513662695582; _ym_d=1690206513; google-analytics_v4_lTnF__ga4=60b6fd6e-a42c-4a5b-86a3-b2ee4112ff43; google-analytics_v4_lTnF___z_ga_audiences=60b6fd6e-a42c-4a5b-86a3-b2ee4112ff43; 
_ga_8Q07ZHZ839=GS1.1.1698572129.62.1.1698572431.60.0.0; _ga=GA1.1.682725627.1690206513; _ym_isad=1; cf_clearance=586R0lbtXGfr_0eAxZ6FQGhjOP8KbabV9ausTZQV0J4-1701790159-0-1-5b5ce1f2.48e600da.934cebb0-0.2.1701790159; google-analytics_v4_lTnF__ga4sid=1266105687; google-analytics_v4_lTnF__session_counter=23; _ym_visorc=w; google-analytics_v4_lTnF__engagementPaused=1701799553646; google-analytics_v4_lTnF__engagementStart=1701799554306; google-analytics_v4_lTnF__counter=851; google-analytics_v4_lTnF__let=1701799554306; _ga_5YMSVJKHTZ=GS1.1.1701798361.82.1.1701799556.58.0.0&quot; } # Send a GET request to the URL r = requests.get(url, headers = headers) soup = BeautifulSoup(r.text, 'lxml') print(r) </code></pre> <p>My current output is:</p> <pre><code>&lt;Response [403]&gt; </code></pre> <p>How can I bypass this?</p>
<python><web-scraping>
2023-12-05 18:18:31
0
793
doomdaam
77,608,326
848,746
returning min from dict values where there are more than one zeros
<p>I have a weird situation with a dict that looks like this:</p> <pre><code>a={ &quot;key1&quot;:[{&quot;a&quot;:10,&quot;b&quot;:10,&quot;d&quot;:0},{&quot;a&quot;:100,&quot;b&quot;:100,&quot;d&quot;:1},{&quot;a&quot;:1000,&quot;b&quot;:1000,&quot;d&quot;:0}], &quot;key2&quot;:[{&quot;a&quot;:18,&quot;b&quot;:135,&quot;d&quot;:0},{&quot;a&quot;:135,&quot;b&quot;:154,&quot;d&quot;:10},{&quot;a&quot;:123,&quot;b&quot;:145,&quot;d&quot;:0}], &quot;key3&quot;:[{&quot;a&quot;:145,&quot;b&quot;:1455,&quot;d&quot;:0},{&quot;a&quot;:15,&quot;b&quot;:12,&quot;d&quot;:1},{&quot;a&quot;:14,&quot;b&quot;:51,&quot;d&quot;:10}] } </code></pre> <p>I can get the min by key doing this:</p> <pre><code>[min(group, key=itemgetter(&quot;d&quot;)) for group in a.values()] [{'a': 10, 'b': 10, 'd': 0}, {'a': 18, 'b': 135, 'd': 0}, {'a': 145, 'b': 1455, 'd': 0} ] </code></pre> <p>However, this misses entries where the minimum value occurs more than once (here, more than one zero). So I would like it to return:</p> <pre><code>[{'a': 10, 'b': 10, 'd': 0}, {&quot;a&quot;:1000,&quot;b&quot;:1000,&quot;d&quot;:0}, {'a': 18, 'b': 135, 'd': 0}, {&quot;a&quot;:123,&quot;b&quot;:145,&quot;d&quot;:0}, {'a': 145, 'b': 1455, 'd': 0} ] </code></pre> <p>How can I enforce this condition?</p>
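One straightforward approach (a sketch, not the only way): compute the minimum of each group first, then keep every entry that ties with it:

```python
a = {
    "key1": [{"a": 10, "b": 10, "d": 0}, {"a": 100, "b": 100, "d": 1}, {"a": 1000, "b": 1000, "d": 0}],
    "key2": [{"a": 18, "b": 135, "d": 0}, {"a": 135, "b": 154, "d": 10}, {"a": 123, "b": 145, "d": 0}],
    "key3": [{"a": 145, "b": 1455, "d": 0}, {"a": 15, "b": 12, "d": 1}, {"a": 14, "b": 51, "d": 10}],
}

result = []
for group in a.values():
    lowest = min(item["d"] for item in group)                      # find the minimum first...
    result.extend(item for item in group if item["d"] == lowest)   # ...then keep every tie

print(result)
```

This makes two passes per group but avoids sorting; an `itertools.groupby` over a sorted group would also work, at the cost of an O(n log n) sort per group.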
<python><python-3.x><python-itertools><itertools-groupby>
2023-12-05 17:56:06
3
5,913
AJW
77,608,298
10,431,629
Using Pandas to stack a data frame in a specifc format
<p>I have a pandas dataframe as follows:</p> <pre><code> df Prod ProdDesc tot avg qtr val_qtr A Cyl 110 8.7 202301 12 A Cyl 110 8.7 202302 56.9 A Cyl 110 8.7 202303 9 A Cyl 110 8.7 202304 0 </code></pre> <p>What I want is to stack/transpose the dataframe. I used pandas melt:</p> <pre><code> df_tra = df.melt(id_vars=['Prod', 'ProdDesc'], var_name='Attrib', value_name='Value') df_tra.drop_duplicates() </code></pre> <p>So my output comes out as:</p> <pre><code>df_tra Prod ProdDesc Attrib Value A Cyl tot 110 A Cyl avg 8.7 A Cyl qtr 202301 A Cyl qtr 202302 A Cyl qtr 202303 A Cyl qtr 202304 A Cyl val_qtr 12 A Cyl val_qtr 56.9 A Cyl val_qtr 9 A Cyl val_qtr 0 </code></pre> <p><strong>But the output I want is different.</strong> What I want is the following:</p> <pre><code> df_actual_wanted Prod ProdDesc Attrib Value A Cyl tot 110 A Cyl avg 8.7 A Cyl 202301 12 A Cyl 202302 56.9 A Cyl 202303 9 A Cyl 202304 0 </code></pre> <p>How can I achieve that?</p>
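One way to get that shape (a sketch assuming the example frame above): melt only the per-product constants <code>tot</code>/<code>avg</code> and de-duplicate them, while the <code>qtr</code>/<code>val_qtr</code> pair is already long and only needs renaming:

```python
import pandas as pd

df = pd.DataFrame({
    "Prod": ["A"] * 4,
    "ProdDesc": ["Cyl"] * 4,
    "tot": [110] * 4,
    "avg": [8.7] * 4,
    "qtr": [202301, 202302, 202303, 202304],
    "val_qtr": [12, 56.9, 9, 0],
})

# Constants per product: melt, then drop the repeated rows.
attrs = (df.melt(id_vars=["Prod", "ProdDesc"], value_vars=["tot", "avg"],
                 var_name="Attrib", value_name="Value")
           .drop_duplicates())

# The quarter columns are already one row per quarter: just rename them.
quarters = df.rename(columns={"qtr": "Attrib", "val_qtr": "Value"})[
    ["Prod", "ProdDesc", "Attrib", "Value"]]

out = pd.concat([attrs, quarters], ignore_index=True)
print(out)
```

The key design point is that <code>qtr</code> is not another attribute to melt; it is already the "Attrib" of its paired <code>val_qtr</code>, so melting it only duplicates rows.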
<python><pandas><pivot-table><transpose><melt>
2023-12-05 17:51:53
1
884
Stan
77,608,173
9,381,985
sympy solve() result cannot be used in np.isclose(): ufunc 'isfinite' not supported
<p>My code is as follows. The goal is to, given 1) a circle with radius <code>r</code>, 2) a point <code>(x0, y0)</code> on it, and 3) a line with <code>slope</code> through that point, find the other point where the line intersects the circle. For simplicity, assume the line is NOT tangent to the circle.</p> <pre class="lang-py prettyprint-override"><code>from sympy import * import numpy as np x0, y0 = 1.0, 0.0 slope = 0.0 r = 1.0 x = Symbol('x') y = Symbol('y') res = solve([x*x + y*y - r*r, y-y0 - slope*(x-x0)], [x, y]) #if len(res)==1: # the line NOT tangent with the circle # x1, y1 = res[0] if np.isclose(res[0], [x0, y0]): x1, y1 = res[1] else: x1, y1 = res[0] </code></pre> <p>I use <code>np.isclose()</code> to exclude the given point <code>(x0, y0)</code> from the <code>solve()</code> results. However, the code raises an error at the <code>np.isclose()</code> line which says:</p> <pre><code>TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' </code></pre> <p>I debugged the code in VS Code's DEBUG CONSOLE as follows.</p> <pre><code>&gt; res [(-1, 0), (1, 0)] &gt; res[0] (-1, 0) &gt; np.isclose((-1, 0), [1, 0]) array([False, True]) &gt; np.isclose(res[0], [1, 0]) Traceback (most recent call last): File &quot;/Users/ufo/miniconda3/envs/ai/lib/python3.10/site-packages/numpy/core/numeric.py&quot;, line 2358, in isclose xfin = isfinite(x) TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe'' </code></pre> <p>Obviously, if <code>np.isclose()</code> is given two tuples/lists, it works. But if it is given a result from sympy.solve(), it raises an error.</p> <p>What should I do with <code>np.isclose()</code>?</p>
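The failure is a type issue: <code>solve()</code> returns sympy number objects (<code>sympy.Float</code>/<code>Integer</code>), and numpy packs a tuple of those into an object array, on which <code>isfinite</code> is undefined. Casting each coordinate to a plain <code>float</code> before calling <code>np.isclose</code> fixes it; a minimal sketch with the same circle and line:

```python
import numpy as np
from sympy import Symbol, solve

x0, y0, slope, r = 1.0, 0.0, 0.0, 1.0
x, y = Symbol('x'), Symbol('y')
res = solve([x*x + y*y - r*r, y - y0 - slope*(x - x0)], [x, y])

# Cast sympy numbers to plain floats so numpy builds a numeric, not object, array.
candidates = [tuple(float(v) for v in point) for point in res]

# Pick the intersection that is NOT the given point.
x1, y1 = next(p for p in candidates if not np.isclose(p, (x0, y0)).all())
print(x1, y1)
```

`sympy.Float` also exposes `.evalf()`, but the plain `float()` call is the most direct bridge to numpy.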
<python><numpy><sympy>
2023-12-05 17:29:56
1
575
Cuteufo
77,607,952
125,673
plotly radar chart - I need to set some attributes
<p>I have this code:</p> <pre><code>self.fig.add_trace(go.Scatterpolar( r=_r, theta=_theta, fill=&quot;toself&quot;, line=_line, name=self.name)) self.fig.update_layout( polar=dict( radialaxis=dict( visible=True, range=[0, 100], linecolor=black, gridcolor=black, dtick=25 )), showlegend=False ) </code></pre> <p>which displays the following chart:</p> <p><a href="https://i.sstatic.net/GjvIC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GjvIC.png" alt="enter image description here" /></a></p> <p>There are a number of things I would like to do that I cannot find in the docs. I would like to:</p> <ol> <li>Close the line</li> <li>Replace the white lines between the centre and the edge of the circle with black lines</li> <li>Have smaller labels and/or angle them</li> </ol> <p>How do I do this? I can't find it in the docs.</p>
<python><plotly><radar-chart>
2023-12-05 16:56:39
1
10,241
arame3333
77,607,928
731,041
How do I produce a German umlaut with KMK
<p>I am making a custom keyboard with KMK / MicroPython on a Raspberry Pi Pico to be used with OSX.</p> <p>I would like it to have convenient access to German umlauts (รคร„รถร–รผรœ), so I defined a dedicated <code>KC.UMLAUT</code> key, which, when pressed together with <kbd>A</kbd>, <kbd>O</kbd>, or <kbd>U</kbd> should produce the corresponding umlaut. When <kbd>shift</kbd> is pressed too, it should send the capitalized version.</p> <p>Here is what I have:</p> <pre class="lang-py prettyprint-override"><code>import board from kmk.kmk_keyboard import KMKKeyboard from kmk.keys import KC, make_key from kmk.handlers.sequences import simple_key_sequence from kmk.modules.combos import Combos, Sequence keyboard = KMKKeyboard() AUML = simple_key_sequence( ( KC.LALT(KC.U), KC.A,)) AUML_CAP = simple_key_sequence( ( KC.LALT(KC.U), KC.RSFT(KC.A),)) make_key( KC.NO.code, names=( 'UMLAUT', ) ) combos = Combos() combos.combos = [ Sequence((KC.UMLAUT, KC.A), AUML), Sequence((KC.UMLAUT, KC.RSFT, KC.A), AUML_CAP), # ... I also have Sequences for O and U ] keyboard.keymap = [ [ KC.A, KC.O, KC.U, KC.UMLAUT, KC.RSFT, ] ] if __name__ == &quot;__main__&quot;: keyboard.go() </code></pre> <p>However, this approach is flawed in many ways:</p> <p>Only the first press of <kbd>umlaut</kbd>-<kbd>A</kbd> produces an &quot;รค&quot; character. Subsequent presses (with umlaut held down) give a regular &quot;a&quot; again. Also, <kbd>umlaut</kbd> and <kbd>shift</kbd> have to be pressed in that order -- <kbd>shift</kbd>-<kbd>umlaut</kbd>-<kbd>A</kbd> won't work.</p> <p>Last but not least, I don't like how I need separate definitions for the shifted behavior. It feels like I am on the wrong track here. Surely there is a better (and less verbose) way to do such a basic thing?</p>
<python><raspberry-pi-pico><kmk>
2023-12-05 16:51:51
2
2,759
Kolja
77,607,894
1,185,790
Palantir Foundry REST API call for adding a row identifier to a dataset
<p>I am currently using the <code>Read dataset as table</code> Palantir Foundry API as described <a href="https://www.palantir.com/docs/foundry/api/datasets-resources/datasets/read-table/" rel="nofollow noreferrer">here</a>. For datasets of a certain size, I end up running into memory constraints when pulling the data using this API call (I'm using <code>Arrow</code>, not <code>CSV</code>). As a workaround, I've broken up my dataset into subsets using the <code>columns</code> parameter, which allows me to specify a list of columns. My intention is to merge the <code>Arrow</code> output into a single table, as follows:</p> <pre><code>merged_table = pa.concat_tables([table1, table2]) </code></pre> <p>However, the <code>Read dataset as table</code> API's row order is non-deterministic, so I can't simply apply the above code to the table subsets. As a preliminary step, it seems as though I would need to add a row identifier to the Foundry dataset. I can of course do this using a code repository within the Foundry web interface, but I would much rather have this done entirely through API calls. Is there a way that I can programmatically add a row identifier through a Foundry API call? I've looked through the documentation and can't seem to find anything about doing any kind of data transformation through API calls.</p> <p>The example below generates an Apache Arrow table from a Palantir Foundry dataset.</p> <pre><code>def foundry_arrow_exporter( dataset_rid, token, base_url, column_list=None): headers = { &quot;authorization&quot;: &quot;Bearer {}&quot;.format(token) } # Warning: This endpoint is in preview and may be modified or removed at any time. # To use this endpoint, add preview=true to the request query parameters. # Furthermore, this endpoint currently does not support views (Virtual datasets composed of other datasets). 
if column_list: columns = ','.join(column_list) response = requests.get(f'{base_url}/api/v1/datasets/{dataset_rid}/readTable?format=ARROW&amp;columns={columns}&amp;preview=True', headers=headers) else: response = requests.get(f'{base_url}/api/v1/datasets/{dataset_rid}/readTable?format=ARROW&amp;preview=True', headers=headers) response.raise_for_status() buffer = pa.py_buffer(response.content) table = pa.ipc.open_stream(buffer).read_all() return table arrow_table = foundry_arrow_exporter( dataset_rid=dataset_rid, token=foundry_token, base_url=foundry_base_url, column_list = column_chunk) </code></pre>
<python><rest><palantir-foundry><apache-arrow>
2023-12-05 16:46:37
0
723
baobobs
77,607,533
4,696,802
Why does creating a Python virtual environment change the folder structure?
<p>I'm on Windows and my Python interpreter is Python.exe, and then there is a subfolder named Lib and then another subfolder named &quot;site-packages&quot;. Python is set up to search this path relative to itself to find imports. When I create a virtual environment on Windows it puts the Python interpreter .exe inside the /Scripts folder, meaning to find the site-packages it needs to search ../Lib/site-packages. Why does it do this? And how does the interpreter know to search another path? Well, I guess the answer is the path is baked into the python.exe binary to search another path, but why is this the case for virtual environments? It seems it would make more sense to keep the folder structure as normal.</p>
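The path is not baked into the binary. A venv's <code>python.exe</code> looks, at startup, for a <code>pyvenv.cfg</code> file next to the executable or one directory above it (which is why living in <code>Scripts\</code> works); on finding it, Python sets <code>sys.prefix</code> to the venv root and resolves <code>Lib\site-packages</code> relative to that, while <code>sys.base_prefix</code> keeps pointing at the base installation. You can observe this from any interpreter:

```python
import sys
from pathlib import Path

# Inside a virtual environment, sys.prefix (the venv root) differs from
# sys.base_prefix (the base installation); outside one, the two are equal.
in_venv = sys.prefix != sys.base_prefix
print("executable: ", sys.executable)
print("prefix:     ", sys.prefix)
print("base prefix:", sys.base_prefix)
print("in venv:    ", in_venv)

# The marker file a venv interpreter looks for at startup (absent for a
# non-venv interpreter):
cfg = Path(sys.prefix) / "pyvenv.cfg"
print("pyvenv.cfg present:", cfg.exists())
```

As for why the executable goes into <code>Scripts\</code>: that folder mirrors where console-script entry points (pip, black, etc.) are installed, so activating the venv only needs to prepend one directory to <code>PATH</code>.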
<python>
2023-12-05 15:50:36
1
16,228
Zebrafish
77,607,499
9,318,323
Remove warning when using concat() with empty dataframes
<p>My old code concats some dataframes, some may be empty. I now receive two future warnings regarding this.</p> <p>My goal is to have the old logic but without any warnings. Mainly I need to <strong>retain all the column names without empty rows</strong>.</p> <p>I wrote the code below (fixed 1 of the 2 warning) but it still gives me <code>FutureWarning: The behavior of DataFrame concatenation with empty or all-NA entries is deprecated.</code></p> <pre><code>import io import pandas as pd df_list = ['RevisionTime,Data,2019/Q2,2019/Q3,2019/Q4\r\n', 'RevisionTime,Data,2019/Q3\r\n2019-08-17,10.5,10.5\r\n', 'RevisionTime,Data,2019/Q3\r\n2019-09-18 08:10:00,51.0,51.0\r\n', 'RevisionTime,Data,2019/Q3\r\n2019-10-18 08:10:00,111.5,111.5\r\n', 'RevisionTime,Data,2019/Q3,2019/Q4\r\n2019-11-15 22:31:00,182.0,111.5,70.5\r\n'] # list with dataframes df_list = [pd.read_csv(io.StringIO(df)) for df in df_list] # to avoid 'The behaviour of array concatenation with empty entries is deprecated.' # and to retain all column names # https://stackoverflow.com/questions/63970182/concat-list-of-dataframes-containing-empty-dataframes for i, df in enumerate(df_list): col_length = len(df.columns) template = pd.DataFrame(data=[[pd.NA] * col_length], columns=df.columns) df_list[i] = df if not df.empty else template # res_df = pd.concat(df_list) # warning here res_df = res_df.dropna(how='all') # remove empty rows print(res_df) </code></pre> <p>Output dataframe should look like:</p> <pre><code> RevisionTime Data 2019/Q2 2019/Q3 2019/Q4 0 2019-08-17 10.5 NaN 10.5 NaN 0 2019-09-18 08:10:00 51.0 NaN 51.0 NaN 0 2019-10-18 08:10:00 111.5 NaN 111.5 NaN 0 2019-11-15 22:31:00 182.0 NaN 111.5 70.5 </code></pre> <p>Basically, help me fix the code to remove the warning.</p>
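One warning-free approach (a sketch, not the only option): record the union of all column names first, concatenate only the frames that actually contain rows, and then <code>reindex</code> so the columns contributed by empty frames (here <code>2019/Q2</code>) still appear:

```python
import io
import pandas as pd

csvs = ['RevisionTime,Data,2019/Q2,2019/Q3,2019/Q4\r\n',
        'RevisionTime,Data,2019/Q3\r\n2019-08-17,10.5,10.5\r\n',
        'RevisionTime,Data,2019/Q3\r\n2019-09-18 08:10:00,51.0,51.0\r\n',
        'RevisionTime,Data,2019/Q3\r\n2019-10-18 08:10:00,111.5,111.5\r\n',
        'RevisionTime,Data,2019/Q3,2019/Q4\r\n2019-11-15 22:31:00,182.0,111.5,70.5\r\n']
df_list = [pd.read_csv(io.StringIO(s)) for s in csvs]

# Union of all column names, in first-seen order (keeps 2019/Q2 from the
# header-only frame).
all_cols = list(dict.fromkeys(c for df in df_list for c in df.columns))

# Concatenate only frames with rows: no empty/all-NA frames, no FutureWarning.
res_df = pd.concat([df for df in df_list if not df.empty]).reindex(columns=all_cols)
print(res_df)
```

This sidesteps both the placeholder-template trick and the later <code>dropna</code>, since no all-NA rows are ever created.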
<python><pandas><dataframe>
2023-12-05 15:46:27
1
354
Vitamin C
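One sketch for the question above: instead of padding empty frames with an all-NA row (which is exactly the deprecated path), record the union of column names first and then concatenate only the non-empty frames. Column order follows first appearance; this assumes the same CSV inputs as the question:

```python
import io
import pandas as pd

csv_list = ['RevisionTime,Data,2019/Q2,2019/Q3,2019/Q4\r\n',
            'RevisionTime,Data,2019/Q3\r\n2019-08-17,10.5,10.5\r\n',
            'RevisionTime,Data,2019/Q3\r\n2019-09-18 08:10:00,51.0,51.0\r\n',
            'RevisionTime,Data,2019/Q3\r\n2019-10-18 08:10:00,111.5,111.5\r\n',
            'RevisionTime,Data,2019/Q3,2019/Q4\r\n2019-11-15 22:31:00,182.0,111.5,70.5\r\n']
df_list = [pd.read_csv(io.StringIO(s)) for s in csv_list]

# Remember every column name (in first-seen order) before dropping
# anything, so headers contributed only by empty frames survive.
all_cols = list(dict.fromkeys(c for df in df_list for c in df.columns))

# Concatenating only non-empty frames never touches the deprecated
# "empty or all-NA entries" path -- no warning, and no dropna() needed.
res_df = pd.concat([df for df in df_list if not df.empty])
res_df = res_df.reindex(columns=all_cols)
print(res_df)
```

`reindex` reinstates columns such as `2019/Q2` that only ever appeared in an empty frame, filling them with NaN, which matches the desired output.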
77,607,480
1,382,763
How to write results from snowpark to a table?
<p>I am following this code from <a href="https://github.com/interworks/Snowflake-Python-Functionality/blob/main/Created%20via%20Snowpark/User%20Defined%20Table%20Functions/Generate%20Auto%20ARIMA%20Predictions.py" rel="nofollow noreferrer">github</a>: it runs multi-series time series forecasting using ARIMA. If you want to test it, use line 197 to the end.</p> <p>How can I write the df_output from line 136 to a Snowflake table with a partition, i.e. append each series' result to a table?</p> <p>I tried using write_pandas from Snowpark, but it needs a session.</p> <p>I also tried loading a SQLAlchemy connection setup using session.add_import(xx.py), but I got this warning:</p> <p><em>TypeError: cannot pickle 'sqlalchemy.cprocessors.UnicodeResultProcessor' object: you might have to save the unpicklable object in the local environment first, add it to the UDF with session.add_import(), and read it from the UDF.</em></p>
<python><snowflake-cloud-data-platform>
2023-12-05 15:43:49
1
1,007
janicebaratheon
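A hedged, untested sketch for the question above: a UDTF handler cannot open its own session or connection (hence the pickling errors), but its output rows can be persisted from *outside* the function with ordinary SQL, which also covers the per-series partitioning. Every name below (FORECAST_RESULTS, SOURCE_SALES, GENERATE_PREDICTIONS, the columns) is a hypothetical placeholder:

```sql
-- Call the registered UDTF partitioned per series and append its
-- output rows to a results table, one partition per series.
INSERT INTO FORECAST_RESULTS
SELECT src.SERIES_ID, f.*
FROM SOURCE_SALES AS src,
     TABLE(GENERATE_PREDICTIONS(src.ORDER_DATE, src.SALES)
           OVER (PARTITION BY src.SERIES_ID)) AS f;
```

Alternatively, back in the driver script (where a Snowpark session *is* available), `session.write_pandas(df_output, 'FORECAST_RESULTS')` can append a pandas frame, but that only works outside the UDTF body.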
77,607,317
12,458,212
Issues building nested dictionary with python lists
<p>I'm using the following method to build a nested dictionary using 2 python lists:</p> <pre><code>from collections import defaultdict

lvl1 = ['a', 'b']
lvl2 = ['apples','bananas']

results = defaultdict(dict)
for one in lvl1:
    for two in lvl2:
        results[one][two] = 0
</code></pre> <p>I'm having issues using the same method with 3 python lists (an additional layer). Any ideas?</p> <pre><code>lvl1 = ['a', 'b']
lvl2 = ['apples','bananas']
lvl3 = ['size','depth']

results = defaultdict(dict)

# ERRORS OUT
for one in lvl1:
    for two in lvl2:
        for three in lvl3:
            results[one][two][three] = 0

Desired Output:
{'a':{'apples':{'size':0, 'depth':0}, 'bananas':{'size':0, 'depth':0}},
 'b':{'apples':{'size':0, 'depth':0}, 'bananas':{'size':0, 'depth':0}}}
</code></pre>
<python><dictionary>
2023-12-05 15:19:54
2
695
chicagobeast12
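For the question above: `defaultdict(dict)` only auto-creates the *first* level, so `results[one][two]` raises a `KeyError` before the third key is ever reached. Two sketches that produce the desired output:

```python
from collections import defaultdict

lvl1 = ['a', 'b']
lvl2 = ['apples', 'bananas']
lvl3 = ['size', 'depth']

# Option 1: a dict comprehension builds all three levels in one shot.
results = {one: {two: {three: 0 for three in lvl3}
                 for two in lvl2}
           for one in lvl1}

# Option 2: a recursive defaultdict ("autovivification") creates any
# missing intermediate dict on first access, at arbitrary depth.
def tree():
    return defaultdict(tree)

auto = tree()
for one in lvl1:
    for two in lvl2:
        for three in lvl3:
            auto[one][two][three] = 0
```

Option 1 is the simplest when the key sets are known up front; option 2 keeps the original triple-loop shape and scales to any number of levels without changing the factory.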
77,607,315
7,563,454
Python: How to handle a repeating function on the main thread requesting updates from multiple sub-threads each execution
<p>Situation: This is the structure of a code I'm working on; it uses Tkinter, but I left the details out to keep it simple and related to the threading question. The main thread contains a function we'll call <code>calculate</code> which repeatedly calls itself every second. On each call it creates a number of threads running <code>thread_func</code>; they all do the heavy lifting on a common task, which is divided based on the number of the thread so each thread only works on the parts assigned to it. When a thread finishes it uses the <code>Pipe()</code> function to send the result back (<code>Queue()</code> is ridiculously slow for large data), and the main thread waits until it has received this result from all threads before it continues and reschedules itself.</p> <pre><code>import multiprocessing as mp
import tkinter as tk

class Viewport:
    def __init__(self):
        self.root = tk.Tk()
        self.thread_count = mp.cpu_count()
        self.p1, self.p2 = mp.Pipe(duplex = False)
        self.calculate()

    def thread_func(self, i: int):
        # something.heavy_stuff does the difficult repetitive task, it only
        # processes parts of data matching the i number of the core var
        result = something.heavy_stuff(i)
        self.p2.send(result)

    def calculate(self):
        # 1: Start all threads
        threads = {}
        for i in range(0, self.thread_count):
            threads[i] = mp.Process(target = self.thread_func, args = (i,))
            threads[i].start()
        # 2: Wait until all threads have delivered their data
        for i in range(0, self.thread_count):
            self.p1.recv()
        # 3: Stop threads
        for i in threads:
            threads[i].join()
        self.root.after(1000, self.calculate)
</code></pre> <p>Issue: Although this appears to work fine, I'm uncomfortable with joining and creating so many threads every second... especially since the end result is designed to run at any speed, so the interval could even be 1 ms; I don't know if some operating systems may be upset by an application adding and deleting over a dozen processes many times a second. It would be ideal if I could create all threads just once, have them run indefinitely in the background, and only close everything when the program exits. This was originally the case: everything inside <code>thread_func</code> was in a <code>while True:</code> loop, #1 (where I start the threads) was located in <code>__init__</code> instead of <code>calculate</code>, and point #3 didn't exist so I didn't stop the threads... I also used a <code>Lock()</code> so the <code>calculate</code> function paused the threads in sync and the received signal count at point #2 always matched expectations. This doesn't work for a few reasons:</p> <ol> <li>The running thread is unable to see changes to variables in the main thread; once started, a thread exists in its own parallel reality based on the past of the main thread at the moment the sub-thread was created... hence I'm using the pipe, as it's the only way to send data over. Every time the <code>calculate</code> func reruns on the main thread, I want all threads it requests an update from to see changes in other variables (such as those in <code>self</code>) that happened on the main thread.</li> <li>Using a <code>while True:</code> loop in <code>thread_func</code> causes my program to be unable to shut down unless forced to. As there's no easy way to change something in an already running thread from the main thread and vice versa, there's no way to let sub-threads know I want to quit. I tried using the pipe backwards with <code>duplex = True</code>, but no matter the order of the pipes that caused weird errors and crashes.</li> </ol> <p>My question is: how do you think I should structure this code to work correctly? Should I go with the current model as I exemplified it, where a function on the main thread calling itself frequently destroys old threads and creates new ones each time... or is there a way to design this so the main thread keeps all sub-threads active for the duration of the application, but those threads detect changes in <code>self.*</code> variables and know when I'm exiting the program so they don't block it?</p>
<python><python-3.x><multithreading>
2023-12-05 15:19:06
0
1,161
MirceaKitsune
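One way to restructure the code in the question above, sketched under the assumption that the per-cycle state workers need can be shipped along with each task: start the worker processes once, send them fresh work (and fresh state) every cycle through a queue, and shut them down with a `None` sentinel so the program can still exit cleanly. Queues carry the small payloads here; the same shape works with pipes for large results:

```python
import multiprocessing as mp

def worker(task_q, result_q):
    # Lives for the whole program; a None "sentinel" tells it to exit,
    # so the while-True loop never blocks shutdown.
    while True:
        task = task_q.get()
        if task is None:
            break
        i, state = task                  # fresh state travels with the task
        result_q.put((i, state * 2))     # stand-in for the heavy work

if __name__ == '__main__':
    task_q, result_q = mp.Queue(), mp.Queue()
    workers = [mp.Process(target=worker, args=(task_q, result_q)) for _ in range(2)]
    for w in workers:
        w.start()

    # One "calculate" cycle: distribute work, then gather every reply.
    for i in range(2):
        task_q.put((i, 10))
    results = dict(result_q.get() for _ in range(2))
    print(results)

    # Clean shutdown: one sentinel per worker, then join.
    for _ in workers:
        task_q.put(None)
    for w in workers:
        w.join()
```

Because each process only ever sees what arrives on the queue, "seeing changes in `self.*`" becomes "include the changed values in the task message", which sidesteps the parallel-reality problem entirely.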
77,606,953
10,941,410
How long does the `httpx.AsyncClient` can "live" without closing the event loop?
<p>I have an endpoint in <code>Django</code> that orchestrates a bunch of API calls to other underlying services. I intend to use a long-lived instance of <code>httpx.AsyncClient</code>, which will ultimately be responsible for the API calls.</p> <pre class="lang-py prettyprint-override"><code>from functools import partial

class Api:
    # for historical reasons, this class has both sync and async methods
    ...

AsyncApi = partial(Api, async_client=httpx.AsyncClient(timeout=30))

class MyViewSet(viewsets.GenericViewSet):
    @action(...)
    def merge(self, request: Request) -&gt; Response:
        ...
        result = async_to_sync(merge_task)(..., api=AsyncApi(token=request.auth))
        ...
</code></pre> <p>I'm doing some tests against a dummy server and I can see that sometimes <code>merge_task</code> captures <code>RuntimeError('Event loop is closed')</code> when calling <code>api</code> (or, ultimately, the long-lived instance of <code>httpx.AsyncClient</code>), so I'd bet that the loop for <code>httpx.AsyncClient</code> is closed after some time and there's no issue with <code>asgiref</code>'s <code>async_to_sync</code>.</p> <p>Am I correct? If so, is the only solution to instantiate <code>httpx.AsyncClient</code> on every request?</p> <p>I tried to call <code>merge_task</code> using the endpoint, but it captured <code>RuntimeError('Event loop is closed')</code>. The expectation is that no error related to async programming is raised.</p>
<python><django><asynchronous><python-asyncio><httpx>
2023-12-05 14:23:28
0
305
Murilo Sitonio
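The symptom in the question above can be reproduced without Django or httpx: `async_to_sync` (like `asyncio.run` below) creates and closes a fresh event loop per call, while a long-lived async client stays bound to the first loop that used it. The class below is a hypothetical stand-in that only mimics that binding behaviour; the usual fixes are to create the client per request, or cache one client per event loop:

```python
import asyncio

class LoopBoundClient:
    # Hypothetical stand-in for httpx.AsyncClient: its internal
    # resources get tied to the first event loop that uses it.
    def __init__(self):
        self._loop = None

    async def get(self, url):
        loop = asyncio.get_running_loop()
        if self._loop is None:
            self._loop = loop
        if self._loop is not loop:
            # The loop this client bound to has been closed by now.
            raise RuntimeError('Event loop is closed')
        return f'response for {url}'

def view(client):
    # Like async_to_sync: every sync entry point spins up a new loop.
    return asyncio.run(client.get('/merge'))

shared = LoopBoundClient()
print(view(shared))              # works: client binds to this first loop
try:
    view(shared)                 # new loop, same client -> the familiar error
except RuntimeError as exc:
    print('second call:', exc)
print(view(LoopBoundClient()))   # fresh client per call: no error
```

So yes, the module-level `httpx.AsyncClient(timeout=30)` is the problem: either instantiate it inside the request, or defer creation until the first call inside the running loop.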
77,606,824
5,594,008
Pytest, get IntegrityError when inserting ForeignKey by id
<p>I have the following table structure:</p> <pre><code>class ParentModel(models.Model):
    symbol = models.CharField(max_length=255, primary_key=True)
    name = models.CharField(max_length=200)


class ChildModel(models.Model):
    parent_instrument = models.ForeignKey(
        to=ParentModel,
        on_delete=models.SET_NULL,
        null=True,
        blank=True,
    )
    instrument = models.ForeignKey(
        to=ParentModel,
        on_delete=models.SET_NULL,
        null=True,
        blank=True,
    )
</code></pre> <p>Then I run pytest against an empty database, where I test the following logic:</p> <pre><code>try:
    ChildModel.objects.update_or_create(
        parent_instrument_id=symbol1,
        instrument_id=symbol2,
    )
except IntegrityError:
    self._create_instrument(symbol1, symbol2)
</code></pre> <p>I don't get an IntegrityError; the ChildModel row is just saved without an existing ParentModel symbol. Is it possible to avoid this behaviour?</p>
<python><django><pytest>
2023-12-05 14:06:21
1
2,352
Headmaster
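The behaviour in the question above usually comes down to whether the test database enforces foreign keys at all: assigning `parent_instrument_id` to a nonexistent symbol only raises if the constraint is actually checked. The mechanism can be shown with plain `sqlite3`, no Django involved:

```python
import sqlite3

# isolation_level=None -> autocommit, so the PRAGMA below actually
# takes effect (it is a no-op inside an open transaction).
con = sqlite3.connect(':memory:', isolation_level=None)
con.execute('CREATE TABLE parent (symbol TEXT PRIMARY KEY)')
con.execute('CREATE TABLE child (parent_symbol TEXT REFERENCES parent(symbol))')

# Enforcement OFF (SQLite's historical default): the dangling
# reference is saved silently -- no IntegrityError, as in the test.
con.execute("INSERT INTO child VALUES ('missing')")

# Enforcement ON: the same dangling insert now fails.
con.execute('PRAGMA foreign_keys = ON')
try:
    con.execute("INSERT INTO child VALUES ('also-missing')")
except sqlite3.IntegrityError as exc:
    print('raised:', exc)
```

In Django the knob is the database backend and test setup rather than this PRAGMA, but the effect is the same: if the test database does not enforce the constraint (or checks are deferred to transaction end), the `except IntegrityError` branch is never reached at `update_or_create` time.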
77,606,668
1,309,245
Default jinja macro template parameter in airflow dag
<p>How do I make an Airflow DAG receive a parameter whose default value is its logical date?</p> <pre><code># Define the DAG
dag = DAG(
    dag_id=&quot;test&quot;,
    start_date=days_ago(1),
    schedule_interval=&quot;@daily&quot;,
    params={&quot;date_param&quot;: &quot;{{ ds }}&quot;}
)

# Define the BashOperator task
print_param_task = BashOperator(
    task_id=&quot;print_param&quot;,
    bash_command='echo &quot;{{ params.date_param }}&quot;',
    dag=dag
)
</code></pre> <p>This prints the literal string &quot;{{ ds }}&quot;. I want the default to be the value of logical_date if nothing is passed in the config.</p>
<python><airflow>
2023-12-05 13:42:32
1
1,407
elvainch
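For the question above: values inside `params` are not themselves template-rendered, which is why the literal string comes out. A common workaround is to default the param to `None` and resolve the fallback inside the template instead, e.g. `bash_command='echo "{{ params.date_param or ds }}"'` with `params={"date_param": None}`. The fallback expression can be checked with plain Jinja, which is the engine Airflow uses for templating (this sketch assumes the `jinja2` package is importable):

```python
from jinja2 import Environment  # Jinja2 is Airflow's templating engine

tmpl = Environment().from_string('{{ params.date_param or ds }}')

# Nothing passed in the run config: fall back to the logical date.
print(tmpl.render(params={'date_param': None}, ds='2023-12-05'))
# An explicit value wins over the fallback.
print(tmpl.render(params={'date_param': '2024-01-01'}, ds='2023-12-05'))
```

At runtime Airflow supplies `params` and `ds` (the logical date) to the template context, so the same `or` expression picks the triggered value when one is given and the logical date otherwise.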
77,606,651
13,180,235
How to Integrate GTM with Partytown in Django Template
<p>I am trying to integrate my GTM tag using a Django template. The following is my current HTML snippet, which works perfectly fine without the Partytown integration:</p> <pre><code>&lt;head&gt;
  &lt;script&gt;
    (function (w, d, s, l, i) {
      w[l] = w[l] || [];
      w[l].push({ 'gtm.start': new Date().getTime(), event: 'gtm.js' });
      var f = d.getElementsByTagName(s)[0],
          j = d.createElement(s),
          dl = l != 'dataLayer' ? '&amp;l=' + l : '';
      j.async = true;
      j.src = 'https://www.googletagmanager.com/gtm.js?id=' + i + dl;
      f.parentNode.insertBefore(j, f);
    })(window, document, 'script', 'dataLayer', '&lt;gtm-id&gt;');
  &lt;/script&gt;
&lt;/head&gt;
&lt;body&gt;
  &lt;noscript&gt;
    &lt;iframe src=&quot;https://www.googletagmanager.com/ns.html?id=&lt;gtm-id&gt;&quot; height=&quot;0&quot; width=&quot;0&quot; style=&quot;display:none;visibility:hidden&quot;&gt;&lt;/iframe&gt;
  &lt;/noscript&gt;
&lt;/body&gt;
</code></pre> <p>Now I want to integrate it through the Partytown approach. I tried the following, but it does not work:</p> <pre><code>&lt;script type=&quot;text/partytown&quot; src=&quot;https://www.googletagmanager.com/gtag/js?id=YOUR-ID-HERE&quot;&gt;&lt;/script&gt;
&lt;script type=&quot;text/partytown&quot;&gt;
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());
  gtag('config', 'YOUR-ID-HERE');
&lt;/script&gt;
</code></pre> <p>I just replaced my GTM ID in the above template; this is the piece of code given in their documentation.</p> <p>If anyone can integrate the conventional GTM code into the above Partytown snippet, please help; I'm new to this library.</p> <p>Also note that I have already integrated the Partytown library in my project, so that is not a problem. There are no errors; I just don't know how to integrate it. Thanks in advance.</p>
<javascript><python><django><google-tag-manager><partytown>
2023-12-05 13:40:37
0
335
Fahad Hussain
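A hedged sketch for the question above, following Partytown's documented GTM recipe rather than the gtag.js one it was mixed up with: keep the conventional GTM loader, change only its script `type`, and forward `dataLayer.push` into the worker. `GTM-XXXXXXX` is a placeholder id; untested here:

```html
<head>
  <!-- Must run before the Partytown snippet you already integrated:
       main-thread calls to dataLayer.push get forwarded to the worker. -->
  <script>
    partytown = { forward: ['dataLayer.push'] };
  </script>

  <!-- The same conventional GTM loader; only type="text/partytown" changes. -->
  <script type="text/partytown">
    (function (w, d, s, l, i) {
      w[l] = w[l] || [];
      w[l].push({ 'gtm.start': new Date().getTime(), event: 'gtm.js' });
      var f = d.getElementsByTagName(s)[0],
          j = d.createElement(s),
          dl = l != 'dataLayer' ? '&l=' + l : '';
      j.async = true;
      j.src = 'https://www.googletagmanager.com/gtm.js?id=' + i + dl;
      f.parentNode.insertBefore(j, f);
    })(window, document, 'script', 'dataLayer', 'GTM-XXXXXXX');
  </script>
</head>
```

The `<noscript>` iframe stays as-is in `<body>`; Partytown never touches it.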
77,606,400
2,791,346
Long running Django Celery task close DB connection
<p>I have a Django Celery task that performs long-running operations and sometimes loses its connection to the database (Postgres).</p> <p>The task looks something like this:</p> <pre><code>@app.task(name='my_name_of_the_task')
def my_long_running_task(params):
    with transaction.atomic():
        object_list = self.get_object_list_from_params(params)
        for batch in batches(object_list):
            self.some_calculations()
            MyObject.objects.bulk_create(objs=batch)  # &lt;- here connection could be lost
</code></pre> <p>I want to ensure the connection stays usable (but also be able to unit test this code).</p> <p>For example:</p> <pre><code>@app.task(name='my_name_of_the_task')
def my_long_running_task(params):
    with transaction.atomic():
        object_list = self.get_object_list_from_params(params)
        for batch in batches(object_list):
            connection.connect()
            self.some_calculations()
            MyObject.objects.bulk_create(objs=batch)  # &lt;- here connection could be lost
</code></pre> <p>This would work (because it always opens new connections), but the unit test fails with an error that it can't roll back on teardown.</p> <p>I am thinking about:</p> <pre><code>@app.task(name='my_name_of_the_task')
def my_long_running_task(params):
    with transaction.atomic():
        object_list = self.get_object_list_from_params(params)
        for batch in batches(object_list):
            try:
                self.some_calculations()
                MyObject.objects.bulk_create(objs=batch)  # &lt;- here connection could be lost
            except Exception as e:
                connection.connect()
                MyObject.objects.bulk_create(objs=batch)  # retry this insert
</code></pre> <p>but is there a better way to handle this?</p>
<python><django><postgresql><django-celery><connection-timeout>
2023-12-05 13:02:37
0
8,760
Marko Zadravec
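A common shape for the question above is detect-reconnect-retry: catch the dead-connection error, open a fresh connection, and retry only the failed batch. Django also ships `django.db.close_old_connections()` for discarding stale connections before a long operation begins. The generic mechanism can be sketched with plain `sqlite3` as a hypothetical stand-in (no Django or Postgres required):

```python
import os
import sqlite3
import tempfile

DB_PATH = os.path.join(tempfile.mkdtemp(), 'demo.db')  # file-backed: rows survive reconnects

def fresh_conn():
    conn = sqlite3.connect(DB_PATH)
    conn.execute('CREATE TABLE IF NOT EXISTS t (x INTEGER)')
    return conn

def run_batch(conn, rows):
    # Stand-in for bulk_create: insert one batch, commit, report total.
    conn.executemany('INSERT INTO t (x) VALUES (?)', [(r,) for r in rows])
    conn.commit()
    return conn.execute('SELECT COUNT(*) FROM t').fetchone()[0]

conn = fresh_conn()
conn.close()  # simulate the server dropping a long-lived connection mid-task

try:
    count = run_batch(conn, [1, 2, 3])
except sqlite3.ProgrammingError:  # sqlite3's "connection is closed" error
    conn = fresh_conn()           # reconnect, then retry only the failed batch
    count = run_batch(conn, [1, 2, 3])
```

One caveat for the real task: a dropped connection also kills the surrounding `transaction.atomic()` block, so retrying inside it cannot preserve atomicity; moving the transaction scope inside the loop (one transaction per batch, with idempotent retries) is usually the cleaner structure.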