Dataset columns (with observed min/max per column):
- QuestionId: int64 (74.8M to 79.8M)
- UserId: int64 (56 to 29.4M)
- QuestionTitle: string (15 to 150 chars)
- QuestionBody: string (40 to 40.3k chars)
- Tags: string (8 to 101 chars)
- CreationDate: date (2022-12-10 09:42:47 to 2025-11-01 19:08:18)
- AnswerCount: int64 (0 to 44)
- UserExpertiseLevel: int64 (301 to 888k)
- UserDisplayName: string (3 to 30 chars, nullable)
75,464,017
244,297
Finding the longest interval with a decrease in value faster than quadratic time
<p>I have a list of values for some metric, e.g.:</p> <pre><code># 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 [50, 52, 58, 54, 57, 51, 55, 60, 62, 65, 68, 72, 62, 61, 59, 63, 72] </code></pre> <p>I need to find the longest interval over which the value has decreased. For the above list that interval runs from index 7 to 14 (and its length is 8). An O(n²) solution to this is simple:</p> <pre><code>def get_longest_len(values: list[int]) -&gt; int: longest = 0 for i in range(len(values)-1): for j in range(len(values)-1, i, -1): if values[i] &gt; values[j] and j - i &gt; longest: longest = j - i break return longest + 1 </code></pre> <p>Is there any way to improve its time complexity?</p>
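This can be done in linear time with a stack, a variant of the well-known "maximum-width ramp" technique (the function name mirrors the question's; the approach itself is not from the question):

```python
def get_longest_len(values: list[int]) -> int:
    """Length of the longest interval i..j with values[i] > values[j], in O(n)."""
    # Candidate left endpoints: indices holding a strictly increasing prefix
    # maximum. Any other index i' has an earlier k with values[k] >= values[i'],
    # and that k is always at least as good a left endpoint.
    stack = []
    for i, v in enumerate(values):
        if not stack or v > values[stack[-1]]:
            stack.append(i)
    longest = 0
    # Scan right endpoints from the right; each stack entry is popped at most
    # once, at the rightmost j it can pair with, which maximizes j - i.
    for j in range(len(values) - 1, -1, -1):
        while stack and values[stack[-1]] > values[j]:
            longest = max(longest, j - stack.pop())
    return longest + 1
```

On the example list this returns 8 (indices 7 to 14), matching the quadratic version.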
<python><algorithm><performance><intervals>
2023-02-15 18:27:19
1
151,764
Eugene Yarmash
75,463,993
5,962,981
What does async actually do in FastAPI?
<p>I have two scripts:</p> <pre><code>from fastapi import FastAPI import asyncio app = FastAPI() @app.get(&quot;/&quot;) async def root(): a = await asyncio.sleep(10) return {'Hello': 'World',} </code></pre> <p>And a second one:</p> <pre><code>from fastapi import FastAPI import time app = FastAPI() @app.get(&quot;/&quot;) def root(): a = time.sleep(10) return {'Hello': 'World',} </code></pre> <p>Please note the second script doesn't use <code>async</code>. Both scripts do the same thing. At first I thought the benefit of an <code>async</code> script was that it allows multiple connections at once, but when testing the second script I was able to run multiple connections as well. The results are the same, the performance is the same, and I don't understand why we would use the <code>async</code> method. I would appreciate an explanation.</p>
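The second script also handles concurrent requests because FastAPI runs plain `def` endpoints in a threadpool; `async def` endpoints run directly on the event loop, where an `await` yields control but a blocking call like `time.sleep` would freeze every request. The cooperative part can be shown without FastAPI at all (a minimal sketch, not the framework's internals):

```python
import asyncio
import time

async def handler(i):
    # awaiting hands control back to the event loop, so other "requests"
    # make progress during this handler's sleep
    await asyncio.sleep(0.2)
    return {"Hello": "World"}

async def main():
    start = time.perf_counter()
    await asyncio.gather(*(handler(i) for i in range(10)))
    return time.perf_counter() - start

elapsed = asyncio.run(main())
# ten cooperative 0.2 s sleeps overlap on one thread: total is ~0.2 s, not 2 s
```

Replace `await asyncio.sleep(0.2)` with `time.sleep(0.2)` inside the coroutine and the total jumps to roughly 2 s, which is exactly the failure mode of blocking inside an `async def` endpoint.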
<python><asynchronous><async-await><python-asyncio><fastapi>
2023-02-15 18:25:05
2
923
filtertips
75,463,912
2,635,863
normalize Euclidean distance - python
<pre><code>def euclidean_distance(n): L = np.linalg.cholesky( [[1.0, 0.60], [0.60, 1.0]]) uncorrelated = np.random.standard_normal((2, n)) correlated = np.dot(L, uncorrelated) A = correlated[0] B = correlated[1] v = np.linalg.norm(A-B) return v v50 = euclidean_distance(50) v1000 = euclidean_distance(1000) </code></pre> <p>The Euclidean distance is larger the more data points I use in the computation. How can I normalize the distances so that I can compare similarity between <code>v50</code> and <code>v1000</code>?</p>
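The norm of a difference of n independent components grows like √n, so dividing by √n (i.e., using the root-mean-square difference) gives a size-independent measure. A sketch of that normalization (seeded RNG added for reproducibility, not part of the original):

```python
import numpy as np

def normalized_distance(n, rho=0.60, seed=0):
    """Euclidean distance between two correlated N(0,1) samples, scaled to an RMS."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky([[1.0, rho], [rho, 1.0]])
    A, B = np.dot(L, rng.standard_normal((2, n)))
    return np.linalg.norm(A - B) / np.sqrt(n)  # RMS of the per-point differences

# For this setup E[(A_i - B_i)^2] = 2 * (1 - rho) = 0.8, so the normalized
# value hovers around sqrt(0.8) ~ 0.89 regardless of n.
```

With the division by √n, `v50` and `v1000` become directly comparable; larger n just reduces the sampling noise around the same expected value.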
<python><numpy><euclidean-distance>
2023-02-15 18:15:53
1
10,765
HappyPy
75,463,740
1,208,071
Python class in Google Colab
<p>I am trying to add a new Python class within an existing Colab project that is already working with a similar structure: classes in files within a subdirectory (filedir). I have placed the new class, NewClassMethod, in a new_class.py file and put it into a directory. However, I am getting an error:</p> <pre><code>cannot import name 'NewClassMethod' from 'filedir1.filedir2' </code></pre> <p>There is a <code>__init__.py</code> file within the filedir2 subdirectory, but I haven't added anything to it. I am not sure if that is related to the error.</p> <p>Thanks for any help. The full error is:</p> <pre><code>ImportError Traceback (most recent call last) &lt;ipython-input-11-f73c71e55f75&gt; in &lt;module&gt; ----&gt; 1 from filedir1.filedir2 import NewClassMethod 2 nn = NewClassMethod() ImportError: cannot import name 'NewClassMethod' from 'cs231n.classifiers' (/content/drive/MyDrive/filedir1/filedir2/__init__.py) </code></pre>
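The empty `__init__.py` is exactly the issue: `from filedir1.filedir2 import NewClassMethod` only resolves if the subpackage's `__init__.py` re-exports the name (or if you import from the module itself, `filedir1.filedir2.new_class`). A runnable sketch that recreates the layout in a throwaway directory (all names mirror the question and are illustrative):

```python
import os
import sys
import tempfile

# Recreate the question's layout: filedir1/filedir2/new_class.py
root = tempfile.mkdtemp()
pkg = os.path.join(root, "filedir1", "filedir2")
os.makedirs(pkg)
open(os.path.join(root, "filedir1", "__init__.py"), "w").close()
with open(os.path.join(pkg, "new_class.py"), "w") as f:
    f.write("class NewClassMethod:\n    pass\n")

# The fix: re-export the class from the subpackage's __init__.py so that
# `from filedir1.filedir2 import NewClassMethod` can find the name.
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from .new_class import NewClassMethod\n")

sys.path.insert(0, root)
from filedir1.filedir2 import NewClassMethod
```

The alternative that needs no `__init__.py` edit is `from filedir1.filedir2.new_class import NewClassMethod`. Also note that in Colab, Python caches imported modules, so after editing files on Drive you need to restart the runtime (or `importlib.reload`) before the change is visible.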
<python><google-colaboratory>
2023-02-15 17:58:57
0
395
geo derek
75,463,734
3,517,025
sqlalchemy engine/connection vs psycopg2 connection
<p>We currently use a <code>get_connection</code> functionality for our postgres via</p> <pre class="lang-py prettyprint-override"><code>import psycopg2 def get_db_connection(conn_str: str = '') -&gt; psycopg2._psycopg.connection: conn_str = conn_str if conn_str else os.environ.get(PG_CONNECTION_STRING) return psycopg2.connect(conn_str) </code></pre> <p>Now that we're working with pandas dataframes, and we want to write them to the DB as new tables, I see in <a href="https://stackoverflow.com/a/23104436/3517025">this answer</a> and the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_sql.html" rel="nofollow noreferrer">pandas docs</a> that you can use an sqlalchemy engine or connection.</p> <pre class="lang-py prettyprint-override"><code>df.to_sql(con=sqlalchemy_cnx, 'table_name') </code></pre> <p>Is there a relationship between a psycopg2 connection and an sqlalchemy connection such that I could create the latter from the former?</p> <p>Something like</p> <pre class="lang-py prettyprint-override"><code>get_pandas_compliant_db_cnx(cnx: psycopg2._psycopg.connection) -&gt; sqlalchemy.engine.Connection: # this is what i'm trying to implement to avoid managing both systems </code></pre>
<python><pandas><sqlalchemy>
2023-02-15 17:58:19
0
5,409
Joey Baruch
75,463,704
3,482,266
os.getenv or os.environ don't reach variables in .env file
<p>In my directory, I have a .env file with:</p> <pre><code>HOST=&quot;url_to_host&quot; </code></pre> <p>I also have my <code>script.py</code>, where:</p> <pre><code>import os os.environ[&quot;HOST&quot;] </code></pre> <p>If I run the script in Vscode debug mode, everything works. However, in normal mode, I get a key error, or if I use <code>os.getenv(&quot;HOST&quot;)</code> I get <code>NoneType</code>.</p>
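Python never loads `.env` files by itself; it works in VS Code's debug mode because the debugger injects the file configured as `python.envFile`/`envFile` in `launch.json`. The usual fix is python-dotenv (`from dotenv import load_dotenv; load_dotenv()`). A dependency-free sketch of what that does:

```python
import os

def load_env_file(path=".env"):
    """Tiny .env loader: KEY=VALUE lines, optional surrounding quotes, '#' comments."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip().strip("'\"")
```

After calling `load_env_file()` at the top of `script.py`, both `os.environ["HOST"]` and `os.getenv("HOST")` behave the same in normal and debug runs.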
<python><python-os>
2023-02-15 17:55:40
2
1,608
An old man in the sea.
75,463,690
7,128,910
Numba JIT on static method works on one computer but throws an error on another
<p>I have a static method that I want to speed up with Numba:</p> <pre><code>@nb.njit def numba_loop_choice(population, weights, k): wc = np.cumsum(weights) m = wc[-1] sample = np.empty(k, population.dtype) sample_idx = np.full(k, -1, np.int32) i = 0 while i &lt; k: r = m * np.random.rand() idx = np.searchsorted(wc, r, side=&quot;right&quot;) if idx in sample_idx[:i]: continue sample[i] = population[idx] sample_idx[i] = idx i += 1 return sample </code></pre> <p>population and weights are numpy arrays, k is an integer. If I run this on my home computer it works without any error. On my server it crashes with an error. Both have Python 3.8.10. Does anyone understand why it works on one machine but crashes on another?</p> <p>The error:</p> <pre><code>Traceback (most recent call last): File &quot;run.py&quot;, line 115, in &lt;module&gt; paa.simulate_streams() File &quot;run.py&quot;, line 40, in simulate_streams StreamSimulation( File &quot;xxx/simulate_stream.py&quot;, line 63, in __init__ self.simulate() File &quot;xxx/simulate_stream.py&quot;, line 132, in simulate selected_element = numba_loop_choice(population=elements_dummy[0:latest_element + 1], weights=self.weights[0:latest_element + 1], k=1)[0] File &quot;/usr/lib/python3/dist-packages/numba/dispatcher.py&quot;, line 401, in _compile_for_args error_rewrite(e, 'typing') File &quot;/usr/lib/python3/dist-packages/numba/dispatcher.py&quot;, line 344, in error_rewrite reraise(type(e), e, None) File &quot;/usr/lib/python3/dist-packages/numba/six.py&quot;, line 668, in reraise raise value.with_traceback(tb) numba.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend) Invalid use of Function(&lt;built-in function contains&gt;) with argument(s) of type(s): (array(int32, 1d, C), int64) * parameterized In definition 0: All templates rejected with literals. In definition 1: All templates rejected without literals. In definition 2: All templates rejected with literals. 
In definition 3: All templates rejected without literals. In definition 4: All templates rejected with literals. All templates rejected with literals. In definition 5: All templates rejected without literals. In definition 6: All templates rejected with literals. In definition 7: All templates rejected without literals. In definition 8: All templates rejected with literals. In definition 9: All templates rejected without literals. In definition 10: All templates rejected with literals. In definition 11: All templates rejected without literals. In definition 12: All templates rejected with literals. In definition 13: All templates rejected without literals. This error is usually caused by passing an argument of a type that is unsupported by the named function. [1] During: typing of intrinsic-call at xxx/simulate_stream.py (24) File &quot;simulate_stream.py&quot;, line 24: def numba_loop_choice(population, weights, k): &lt;source elided&gt; print(idx, sample_idx) if idx in sample_idx[:i]: ^ </code></pre>
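The traceback points at the culprit: `idx in sample_idx[:i]` compiles to `contains(array(int32, 1d, C), int64)`, which the server's (older, `/usr/lib/python3/dist-packages`) Numba cannot type, while the home machine's newer Numba can. Replacing the `in` test with an elementwise comparison sidesteps it on both. A plain-NumPy sketch of that rewrite (tested without `@nb.njit`; the same `any()`-based test should also compile under njit, though `np.random.default_rng` may not — inside njit keep `np.random.rand()` as in the original):

```python
import numpy as np

def loop_choice(population, weights, k, seed=0):
    """Weighted sampling without replacement, avoiding `in` on an array slice."""
    rng = np.random.default_rng(seed)  # inside @nb.njit, use np.random.rand()
    wc = np.cumsum(weights)
    m = wc[-1]
    sample = np.empty(k, population.dtype)
    sample_idx = np.full(k, -1, np.int32)
    i = 0
    while i < k:
        r = m * rng.random()
        idx = np.searchsorted(wc, r, side="right")
        # Elementwise compare + any() instead of `idx in sample_idx[:i]`,
        # which old Numba cannot type:
        if (sample_idx[:i] == idx).any():
            continue
        sample[i] = population[idx]
        sample_idx[i] = idx
        i += 1
    return sample
```

An explicit `for` loop over `sample_idx[:i]` with an early `break` works equally well under njit and avoids the temporary boolean array.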
<python><numba>
2023-02-15 17:54:09
0
1,261
Nik
75,463,366
4,207,793
Validate relative path by the wildcard in Python
<p>I have a condition where I need to compare the relative name/path of the file (<code>string</code>) with another path that has wildcards:</p> <pre class="lang-none prettyprint-override"><code>files list: aaa/file1.txt bbb/ccc/file2.txt accepted files by the wildcard: aaa/* bbb/**/* </code></pre> <p>I need to pick only those files that match to any of the wildcard masks.</p> <pre class="lang-none prettyprint-override"><code>aaa/file1.txt equals aaa/* =&gt; True ddd/file3.txt equals aaa/* or bbb/**/* =&gt; False </code></pre> <p>Is that possible to do with <code>glob</code> or any other module?</p> <p><strong>edit</strong>: I removed all unnecessary details from the question <em>after</em> the discussion.</p>
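`fnmatch` is the obvious candidate, but its `*` also matches `/`, so it cannot distinguish `aaa/*` from `aaa/**/*` (Python 3.13 adds `glob.translate`, which can). A small translator that keeps the usual convention — `*` stops at a separator, `**` crosses them — is easy to write (a sketch; the exact `**` semantics you need may differ):

```python
import re

def wildcard_to_regex(pattern):
    """Compile a path wildcard: '*' matches within one segment, '**' spans segments."""
    parts = []
    i = 0
    while i < len(pattern):
        if pattern[i:i + 2] == "**":
            parts.append(".*")       # '**' may cross '/' boundaries
            i += 2
        elif pattern[i] == "*":
            parts.append("[^/]*")    # '*' stays within one path segment
            i += 1
        else:
            parts.append(re.escape(pattern[i]))
            i += 1
    return re.compile("^" + "".join(parts) + "$")

accepted = ["aaa/*", "bbb/**/*"]
def is_accepted(path):
    return any(w.match(path) for w in (wildcard_to_regex(p) for p in accepted))
```

`is_accepted("aaa/file1.txt")` is true, `is_accepted("ddd/file3.txt")` is false, matching the examples in the question.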
<python><regex><path>
2023-02-15 17:22:48
1
1,164
IgorZ
75,463,364
6,556,938
ModuleNotFoundError: No module named 'pyathena' when running AWS Glue Job
<p>Despite setting the parameter for my Python AWS Glue Job like this:</p> <pre><code>--additional-python-modules pyathena </code></pre> <p>I still get the following error when I try and run the job:</p> <pre><code>ModuleNotFoundError: No module named 'pyathena' </code></pre> <p>I have also tried the following parameters:</p> <pre><code>--additional-python-modules pyathena --pip-install pyathena --pip-install pyathena==2.23.0 </code></pre>
<python><amazon-web-services><etl><aws-glue><pyathena>
2023-02-15 17:22:39
1
1,215
ChrisDanger
75,463,339
7,644,323
Is there a way to filter multiple related fields in a single filter for a table in python Dash?
<p>In Microsoft PowerBI I can filter multiple related fields in a single slicer. You do so by building what's called a <a href="https://learn.microsoft.com/en-us/power-bi/create-reports/power-bi-slicer-hierarchy-multiple-fields" rel="nofollow noreferrer">hierarchy slicer</a>. This slicer can then be used to filter a table. Can something like this be done with Python Dash?</p> <p><a href="https://i.sstatic.net/nE7bq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nE7bq.png" alt="enter image description here" /></a></p> <p>We have an org hierarchy that we use to filter a table in a Dash application. It is a 4 tiered hierarchy. Up until now, we have used 4 successive, multi-select dropdown lists with each tier filtering the dropdown below it, but we have received feedback from users that the Power BI hierarchical slicers are much more intuitive, and they would like for us to find a similar solution, if possible.</p>
<python><plotly-dash>
2023-02-15 17:20:09
1
1,037
dshefman
75,463,288
9,748,823
Match value where string does not contain a certain word
<p>I am trying to match a word in a string only if a certain word occurs within the first 10 characters of the string.<br /> This is my approach:</p> <pre><code>import re string_array = [&quot;foo bar\nbaz qux&quot;, &quot;baz qux foo bar&quot;, &quot;baz qux&quot;] for string in string_array: a = re.search(&quot;(?s)^(?!.{0,10}bar).*(qux).*$&quot;, string) print(a) </code></pre> <p>I tried this in <a href="https://regex101.com/r/d9sntE/1" rel="nofollow noreferrer">regex101</a> but this would still match the entire string even when it contains <code>bar</code>. I also tried with a negative lookbehind but then the first part would be required to be a fixed length which in my case cannot. What am I doing wrong?<br /> I'd expect to get a match in all except for the first string</p>
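In Python this pattern already does what the question asks: without `re.MULTILINE`, `^` pins `re.search` to position 0, so the negative lookahead genuinely inspects the first characters of the whole string. The surprise on regex101 comes from its default `m` (multiline) flag, which re-anchors `^` at every line start, letting the `baz qux` line of the first string match; turn that flag off there, or write `\A` to be explicit. A runnable check:

```python
import re

# (?s) lets '.' cross the newline; '^' with no re.M pins the search to the
# start of the whole string, so .{0,10} really is the first 10 characters.
pattern = r"(?s)^(?!.{0,10}bar).*(qux).*$"
strings = ["foo bar\nbaz qux", "baz qux foo bar", "baz qux"]
results = [re.search(pattern, s) for s in strings]
# Only the first string has "bar" starting within its first 10 characters,
# so it is the only one rejected.
```

This yields no match for the first string and a match (with group 1 `qux`) for the other two, exactly the expected behaviour.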
<python><regex>
2023-02-15 17:15:50
2
381
Blob
75,463,161
12,297,666
Pivoting a pandas dataframe
<p>I have the following dataframe:</p> <pre><code> ID Datetime Y 0 1000 00:29:59 0.117 1 1000 00:59:59 0.050 2 1000 01:29:59 0.025 3 1000 01:59:59 0.025 4 1000 02:29:59 0.049 ... ... ... 48973133 2999 21:59:59 0.618 48973134 2999 22:29:59 0.495 48973135 2999 22:59:59 0.745 48973136 2999 23:29:59 0.514 48973137 2999 23:59:59 0.419 </code></pre> <p>The <code>Datetime</code> column is not actually in that format; here it is:</p> <pre><code>0 00:29:59 1 00:59:59 2 01:29:59 3 01:59:59 4 02:29:59 ... 48973133 21:59:59 48973134 22:29:59 48973135 22:59:59 48973136 23:29:59 48973137 23:59:59 Name: Datetime, Length: 48973138, dtype: object </code></pre> <p>I am trying to run the following pivot code:</p> <pre><code>print(df.assign(group=df.index//48).pivot(index='group', values='Y', columns=df['Datetime'][0:48])) </code></pre> <p>But I am getting the following error:</p> <pre><code>KeyError: '00:29:59' </code></pre> <p>How can I fix it? I expect to get 48 columns (1 day of half-hourly measured data) in the pivoted dataframe, so my columns should be:</p> <pre><code>00:29:59 00:59:59 01:29:59 ... 23:29:59 23:59:59 </code></pre> <p>The first row should have the first 48 values of <code>Y</code>, the second row should have the next <code>48</code>, and so on.</p> <p>EDIT: Picture of the <code>cumcount()</code> issue:</p> <p><a href="https://i.sstatic.net/O4nAO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O4nAO.png" alt="enter image description here" /></a></p>
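The `KeyError` happens because `columns=` expects a column label (or list of labels), not a slice of values: pandas treats `df['Datetime'][0:48]` as names of columns to look up, and there is no column called `'00:29:59'`. The fix is to make the within-day slot an actual column and pivot on it. A scaled-down sketch with 3 slots per group instead of 48 (data values taken from the question, layout illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [1000] * 3 + [2999] * 3,
    "Datetime": ["00:29:59", "00:59:59", "01:29:59"] * 2,
    "Y": [0.117, 0.050, 0.025, 0.618, 0.495, 0.745],
})
n = 3  # 48 for half-hourly data covering a full day

# Make the slot a real column, then pivot on it: each group row gets one
# column per time-of-day string.
out = (
    df.assign(group=df.index // n, slot=df["Datetime"])
      .pivot(index="group", columns="slot", values="Y")
)
```

If the index is not a clean `RangeIndex`, derive the slot positionally instead, e.g. `slot=df.groupby(df.index // n).cumcount()`, and map slot numbers back to time strings afterwards.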
<python><pandas>
2023-02-15 17:04:09
2
679
Murilo
75,463,027
6,664,990
Python Regex to find certain SQL statement from a SQL script (multiple SQL statements) and transform respecting the comments in input SQL
<p>I'm trying to find a certain type of SQL statement (for example CTAS) in a script containing a list of SQL statements, and then do some transformation.</p> <p>Input :</p> <pre><code> sqltexts = &quot;&quot;&quot; --drop table test_schema.test_table; /* create table table2 as select * from table3 ; */ create table table4 as (select * from table2); create table table5 as select * from table 3 where id =1 -- and date ='2013-01-01' ; &quot;&quot;&quot; </code></pre> <p>Expected output: only the 3rd and 4th statements have to be transformed, since the 1st and 2nd statements are commented out, and the 4th statement should leave its comment section as it is:</p> <pre><code>--drop table test_schema.test_table; /* create table table2 as select * from table3 ; */ &lt;transformed_sql&gt; ; &lt;transformed_sql&gt; where id =1 -- and date ='2013-01-01' ; </code></pre> <p>However, I'm not able to respect the comments here. What I do is find the single-line and multiline comments, remove them from the input, and then do the transformation. I would like to keep the comments as they are.</p> <p>Output I get:</p> <pre><code>&lt;transformed_sql&gt; ; &lt;transformed_sql&gt; where id =1 ; </code></pre> <p>This removes all the comments from the input, which I do not want, even though the output SQL serves the objective.</p> <p>Python code I'm using:</p> <pre><code>sqlStatements = sqltexts.strip().split(';') # to split input SQL script into statements rgx = re.compile(r&quot;(create\s+(?:table|[\s\S]*\s+table)\s+[\w\d.]+\s+\bas\b\s+[\s\S]*)&quot;,re.MULTILINE|re.IGNORECASE) # to find if statement is CTAS rgxcomment = re.compile(r&quot;(--[^\r\n]*)&quot;,re.MULTILINE|re.IGNORECASE) # to ignore single line comment rgxmulticomment = re.compile(r&quot;(\/\*[\w\W]*?(?=\*)\*\/)&quot;,re.MULTILINE|re.IGNORECASE) # to ignore multiline comment for index,item in enumerate(sqlStatements): x = rgxcomment.findall(item) # Removing single line comments if len(x)&gt;0: for i in x: index = item.find(i) item = item[:index] + '' + item[index+len(i):] y = rgxmulticomment.findall(item) # Removing multiline comments if len(y)&gt;0: for i in y: index = item.find(i) item = item[:index] + '' + item[index+len(i):] m = rgx.search(item) if m: &lt;I do my transformation&gt; </code></pre>
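Two separate concerns get tangled here: `split(';')` splits on semicolons inside comments, and comment stripping is destructive. It is cleaner to tokenize once (comments, string literals, semicolons), split only on semicolons that fall outside comments and strings, and strip comments only in a throwaway copy used for classification. A sketch (it handles plain `create table ... as`; the question's regex also allows extra keywords before `table`, which you would fold into `CTAS` the same way):

```python
import re

# One token per comment, string literal, or statement-ending semicolon.
TOKEN = re.compile(r"--[^\n]*|/\*.*?\*/|'(?:[^']|'')*'|;", re.S)

def split_statements(sql):
    """Split on ';' only when it is outside comments and string literals."""
    stmts, last = [], 0
    for m in TOKEN.finditer(sql):
        if m.group() == ";":
            stmts.append(sql[last:m.end()])
            last = m.end()
    if sql[last:].strip():
        stmts.append(sql[last:])
    return stmts

def strip_comments(stmt):
    """Throwaway copy for classification; the original keeps its comments."""
    return re.sub(r"--[^\n]*|/\*.*?\*/", " ", stmt, flags=re.S)

CTAS = re.compile(r"\bcreate\s+table\s+[\w.\"]+\s+as\b", re.I)

def is_ctas(stmt):
    return bool(CTAS.search(strip_comments(stmt)))

sqltexts = """
--drop table test_schema.test_table;
/* create table table2 as
select * from table3 ;
*/
create table table4 as (select * from table2);
create table table5 as
select * from table 3
where id =1
-- and date ='2013-01-01'
;
"""
stmts = split_statements(sqltexts)
```

Each element of `stmts` still carries its leading and trailing comments verbatim, so the transformation can rewrite the CTAS text while leaving the comment sections untouched.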
<python><sql><regex>
2023-02-15 16:52:14
0
908
Leo
75,462,898
12,458,212
Using an NLP vectorized output for subsequent model?
<p>I use SpaCy to output a vectorized array of my text field. I'm having issues plugging this output into my random forest and could use some guidance. I label encoded other fields so my pandas dataframe looks something like:</p> <pre><code>import pandas as pd import numpy as np from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split d = {'le1': [0,1,2,1], 'le2': [3,0,2,1], 'spacy_output':[[0.12,0.14,3.5],[1.21,0.84,1.92],[0.34,0.85,2.43],[0.09,0.18,2.21]], 'response':[0,1,1,0]} df = pd.DataFrame(d) </code></pre> <p>Then I try to plug this into my model:</p> <pre><code>X = np.array(df.drop('response', axis=1)) y = df['response'].values.ravel() X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 23) clf = RandomForestClassifier(min_samples_split=4, n_estimators=100, criterion='entropy') clf.fit(X_train,y_train) </code></pre> <p>Confused on how to pass this dataframe to my model. I get the following errors:</p> <pre><code>TypeError: only size-1 arrays can be converted to Python scalars ValueError: setting an array element with a sequence. </code></pre>
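Both errors come from the `spacy_output` column: `np.array(df.drop(...))` produces an object-dtype array whose cells are Python lists, which scikit-learn cannot coerce to a numeric matrix. The fix is to expand the vector column into its own 2-D float block and stack it with the scalar features (a sketch using the question's toy data):

```python
import numpy as np
import pandas as pd

d = {"le1": [0, 1, 2, 1],
     "le2": [3, 0, 2, 1],
     "spacy_output": [[0.12, 0.14, 3.5], [1.21, 0.84, 1.92],
                      [0.34, 0.85, 2.43], [0.09, 0.18, 2.21]],
     "response": [0, 1, 1, 0]}
df = pd.DataFrame(d)

# Expand the list column into a (n_rows, vector_dim) float block, then
# stack it next to the scalar label-encoded features.
X = np.hstack([
    df[["le1", "le2"]].to_numpy(dtype=float),
    np.vstack(df["spacy_output"].to_numpy()),
])
y = df["response"].to_numpy()
```

`X` is now a plain `(4, 5)` float matrix, so `train_test_split` and `RandomForestClassifier.fit(X_train, y_train)` work unchanged.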
<python><machine-learning><nlp>
2023-02-15 16:42:55
1
695
chicagobeast12
75,462,859
16,935,119
Find NA in all columns in pandas through loop and eliminate rows with NA
<p>This is the pandas dataframe I have:</p> <pre><code>dict = {'First Score':[100, 90, np.nan, 95], 'Second Score': [30, 45, 56, np.nan], 'Third Score':[np.nan, 40, 80, 98]} # creating a dataframe from dictionary df = pd.DataFrame(dict) </code></pre> <p>I am trying to eliminate rows with NA values all at once. Is there a way to do this through loops? For example, I am trying this on one column like below:</p> <pre><code>df_first_score = pd.notnull(df['First Score']) ### find not null values df[df_first_score] First Score Second Score Third Score 0 100.0 30.0 NaN 1 90.0 45.0 40.0 3 95.0 NaN 98.0 </code></pre> <p>This way I am doing it for all columns manually. Is there a way to achieve this through loops, so that I get the output below?</p> <pre><code>final_df First Score Second Score Third Score 1 90.0 45.0 40.0 </code></pre> <p>I know this can be done in other ways, but I wanted to know if it can be achieved through loops.</p>
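The per-column masks just need to be AND-ed together in the loop; rows survive only if every column's not-null test passes:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"First Score": [100, 90, np.nan, 95],
                   "Second Score": [30, 45, 56, np.nan],
                   "Third Score": [np.nan, 40, 80, 98]})

# Loop version: start all-True, AND in one not-null mask per column.
mask = pd.Series(True, index=df.index)
for col in df.columns:
    mask &= df[col].notnull()
final_df = df[mask]
```

The loop is equivalent to the one-liners `df[df.notnull().all(axis=1)]` and `df.dropna()`, both of which keep only row 1 here.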
<python><pandas>
2023-02-15 16:39:43
1
1,005
manu p
75,462,745
12,396,154
Do I need to close pyodbc sql server connection when reading the data into the Pandas Dataframe?
<p>I am confused about how to use a context manager with a <code>pyodbc</code> connection. As far as I know, it is usually necessary to close the database connection, and using a context manager is good practice for that (for pyodbc, I saw some examples that close only the cursor). Long story short, I am creating a Python app which pulls data from SQL Server, and I want to read the data into a <code>Pandas Dataframe</code>.</p> <p>I did some searching on using <code>contextlib</code> and wrote a script <code>sql_server_connection</code>:</p> <pre><code>import pyodbc import contextlib @contextlib.contextmanager def open_db_connection(server, database): &quot;&quot;&quot; Context manager to automatically close DB connection. &quot;&quot;&quot; conn = pyodbc.connect('DRIVER={SQL Server};SERVER='+server+';DATABASE='+database+';Trusted_Connection=yes;') try: yield except pyodbc.Error as e: print(e) finally: conn.close() </code></pre> <p>I then called this in another script:</p> <pre><code>from sql_server_connection import open_db_connection with open_db_connection(server, database) as conn: df = pd.read_sql_query(query_string, conn) </code></pre> <p>which raises this error:</p> <pre><code> File &quot;C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\sql.py&quot;, line 436, in read_sql_query return pandas_sql.read_query( File &quot;C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\sql.py&quot;, line 2116, in read_query cursor = self.execute(*args) File &quot;C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\sql.py&quot;, line 2054, in execute cur = self.con.cursor() AttributeError: 'NoneType' object has no attribute 'cursor' </code></pre> <p>I didn't define a cursor here because I expect <code>Pandas</code> to handle it, as it did before I thought about closing the connection. If the approach above is wrong, how should I close the connection? Or does <code>pyodbc</code> handle it? Thanks!</p>
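The bug is the bare `yield`: the generator yields `None`, so `as conn` binds `None` inside the `with` block and pandas has no `.cursor()` to call. Yield the connection object. A sketch using sqlite3 so it runs anywhere; swap `sqlite3.connect(...)` for the original `pyodbc.connect(conn_str)`:

```python
import contextlib
import sqlite3

@contextlib.contextmanager
def open_db_connection(database=":memory:"):
    """Hand the connection to the with-block, then close it on the way out."""
    conn = sqlite3.connect(database)  # pyodbc.connect(conn_str) in the original
    try:
        yield conn                    # the bare `yield` was the bug: `as conn` got None
    finally:
        conn.close()

with open_db_connection() as conn:
    row = conn.execute("SELECT 1 AS answer").fetchone()
```

It is also worth dropping the `except pyodbc.Error: print(e)` clause: swallowing the exception makes the `with` block continue with a failed connection, whereas letting it propagate (the `finally` still closes the connection) surfaces the real problem.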
<python><sql-server><pandas><pyodbc>
2023-02-15 16:30:47
1
353
Nili
75,462,685
5,775,358
Polars columns subtract order does not matter (apparently)
<p>I would like to use <code>polars</code>, but I ran into a problem when subtracting a 1x3 numpy array from three columns of a DataFrame: apparently it does not matter in which order the subtraction is applied:</p> <pre><code>import numpy as np import polars as pl # create polars dataframe: data = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]]) df = pl.DataFrame(data, schema=['x', 'y', 'z']).with_columns( pl.all().cast(pl.Float64) ) # subtraction array: arr = np.array([2, 5, 8], dtype=np.float64) # subtract array from DataFrame df.with_columns( pl.col('x') - arr[0], pl.col('y') - arr[1], pl.col('z') - arr[2], ) &quot;&quot;&quot; This one is correct, top row should be negative and bottom row positive | | x | y | z | |---:|----:|----:|----:| | 0 | -1 | -1 | -1 | | 1 | 0 | 0 | 0 | | 2 | 1 | 1 | 1 | &quot;&quot;&quot; df.with_columns( arr[0] - pl.col('x'), arr[1] - pl.col('y'), arr[2] - pl.col('z'), ) &quot;&quot;&quot; This one is incorrect. The top row should be positive and the bottom row should be negative. | | x | y | z | |---:|----:|----:|----:| | 0 | -1 | -1 | -1 | | 1 | 0 | 0 | 0 | | 2 | 1 | 1 | 1 | &quot;&quot;&quot; </code></pre>
<python><numpy><operation><python-polars>
2023-02-15 16:25:37
1
2,406
3dSpatialUser
75,462,630
20,898,396
Why is the difference between id(2) and id(1) equal to 32?
<pre><code>&gt;&gt;&gt; a = 1 &gt;&gt;&gt; b = 2 &gt;&gt;&gt; id(a), id(b), id(b) - id(a) (1814458401008, 1814458401040, 32) </code></pre> <p>Is the memory address returned by <code>id</code> in bits or in bytes? Per <a href="https://docs.python.org/2/c-api/int.html#c.PyInt_FromLong" rel="nofollow noreferrer">the docs</a>:</p> <blockquote> <p>The current implementation keeps an array of integer objects for all integers between -5 and 256, when you create an int in that range you actually just get back a reference to the existing object.</p> </blockquote> <p>If integers were 32 bits and the numbers -5 to 256 were stored close together in memory, then the numbers 1 and 2 would be 32 bits apart.</p> <p>However, the size of a number object is 28 bytes.</p> <pre><code>&gt;&gt;&gt; import sys &gt;&gt;&gt; a = 1 &gt;&gt;&gt; a.__sizeof__(), sys.getsizeof(a), sys.getsizeof(3) (28, 28, 28) </code></pre> <p>If id returns an address in bytes, what is the 32-28=4 bytes between 1 and 2 for?</p> <p>For bigger numbers, the addresses are random which makes sense:</p> <pre><code>&gt;&gt;&gt; a = 257 &gt;&gt;&gt; b = 258 &gt;&gt;&gt; id(a), id(b), id(b) - id(a) (1814557834096, 1814557827568, -6528) </code></pre>
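`id` returns a plain byte address in CPython, and `sys.getsizeof` reports only the bytes the int object actually uses; the 4-byte difference is alignment padding. The cached small ints (-5 through 256) live in one contiguous C array of int structs, and C array elements are padded out to the struct's aligned size, 32 bytes on a typical 64-bit build, even though each object only uses 28. A hedged demo (layout details are CPython-specific):

```python
import sys

# 28 = 8 (refcount) + 8 (type pointer) + 8 (size/sign field) + 4 (one 30-bit
# digit) on a typical 64-bit CPython; getsizeof does not count array padding.
size = sys.getsizeof(1)

# The small-int cache is one contiguous C array, so neighbouring entries sit a
# fixed, padded stride apart (32 bytes here), not getsizeof() bytes apart.
spacing = id(2) - id(1)

# Outside the cached range, every int is allocated separately, so equal values
# no longer share an identity, and addresses look arbitrary:
x, y = int("257"), int("257")
```

So the "missing" 4 bytes between 1 and 2 hold nothing: they are the padding the compiler inserts so each array element starts on an aligned boundary.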
<python><python-internals>
2023-02-15 16:21:00
1
927
BPDev
75,462,567
6,696,730
Airflow PyTest DagBag Import Error from local module
<p>I have a directory structure as follows for my DAGs:</p> <pre><code>. β”œβ”€β”€ __init__.py β”œβ”€β”€ _base.py β”œβ”€β”€ dag1.py β”œβ”€β”€ dag2.py β”œβ”€β”€ dag3.py β”œβ”€β”€ ... </code></pre> <p>When I run PyTest to fill my DagBag, I get the following error for each DAG file</p> <pre><code>ERROR - Failed to import: dags/dag2.py Traceback (most recent call last): File &quot;~/code/myproject/venv/lib/python3.7/site-packages/airflow/models/dagbag.py&quot;, line 339, in parse loader.exec_module(new_module) File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 728, in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 219, in _call_with_frames_removed File &quot;dags/dag2.py&quot;, line 12, in &lt;module&gt; from _base import start_here, task_factory ModuleNotFoundError: No module named '_base' </code></pre> <p>My _base.py is a helper file that holds common task declarations like:</p> <pre><code>def task_factory( operator_func, base_env_vars=[], op_kwargs={}, ): def build_airflow_task(task_id=None, extra_env_vars=[], **kwargs): task = operator_func( task_id=task_id, name=task_id, env_vars=[*base_env_vars, *extra_env_vars], **op_kwargs, **kwargs, ) return task return build_airflow_task </code></pre> <p>In each DAG file, I do the following:</p> <pre><code>import os from airflow import DAG from airflow.kubernetes.secret import Secret from airflow.providers.google.cloud.operators.kubernetes_engine import ( GKEStartPodOperator, ) from airflow.utils.helpers import chain from kubernetes.client import models as k8s from _base import start_here, finish, dummy_task, task_factory </code></pre> <p>I am not sure how to instrument Airflow/PyTest to find _base.py for tests, but in my actual deployment (Google Cloud Composer), my DAG files are able to find and import from <code>_base.py</code> without issue.</p> <p>My test file looks like this:</p> <pre><code>import sys sys.path.insert(0, &quot;..&quot;) from airflow.models import DagBag def 
test_dagbag_compiles(): dags = DagBag(&quot;dags&quot;, include_examples=False) assert len(dags.import_errors) == 0 </code></pre>
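`sys.path.insert(0, "..")` adds the parent of wherever pytest is run from, not the `dags/` folder, so `from _base import ...` cannot resolve; it works on Cloud Composer because Composer puts the dags folder itself on `sys.path`. The usual fix is one line in `conftest.py` mirroring that, e.g. `sys.path.insert(0, os.path.join(os.path.dirname(__file__), "dags"))`. A self-contained demo of the mechanism (throwaway files, names from the question):

```python
import os
import sys
import tempfile

# Simulate the repo layout: dags/_base.py plus a DAG module that does
# `from _base import task_factory`, just like the question's dag files.
root = tempfile.mkdtemp()
dags = os.path.join(root, "dags")
os.mkdir(dags)
with open(os.path.join(dags, "_base.py"), "w") as f:
    f.write("def task_factory():\n    return 'task'\n")
with open(os.path.join(dags, "dag2.py"), "w") as f:
    f.write("from _base import task_factory\nTASK = task_factory()\n")

# Without dags/ on sys.path, importing dag2 fails exactly the way DagBag
# reports. Adding the folder itself (what Composer does) fixes it:
sys.path.insert(0, dags)
import dag2
```

With that path in place, `DagBag("dags", include_examples=False)` can parse the DAG files without the `ModuleNotFoundError`.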
<python><airflow><pytest>
2023-02-15 16:15:49
1
713
K Pekosh
75,462,560
3,595,907
Unexpected behavior using if .. or .. Python
<p>I'm reading in a list of samples from a text file, and in that list every now and then there is a &quot;channel n&quot; checkpoint. The file is terminated with the text <code>eof</code>. The code below works until it hits the <code>eof</code> marker, which obviously can't be cast to a <code>float</code>:</p> <pre><code>log = open(&quot;mq_test.txt&quot;, 'r') data = [] for count, sample in enumerate(log): if &quot;channel&quot; not in sample: data.append(float(sample)) print(count) log.close() </code></pre> <p>So to get rid of the <code>ValueError: could not convert string to float: 'eof\n'</code> I added an <code>or</code> to my <code>if</code>, like so:</p> <pre><code>log = open(&quot;mq_test.txt&quot;, 'r') data = [] for count, sample in enumerate(log): if &quot;channel&quot; not in sample or &quot;eof&quot; not in sample: data.append(float(sample)) print(count) log.close() </code></pre> <p>And now I get <code>ValueError: could not convert string to float: 'channel 00\n'</code></p> <p>So my solution has been to nest the <code>if</code>s, and that works.</p> <p>Could somebody explain to me why the <code>or</code> condition failed, though?</p>
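This is De Morgan's laws at work: `"channel" not in s or "eof" not in s` is true whenever at least one of the words is absent, and `"channel 00\n"` does lack `"eof"`, so the buggy condition lets it through to `float()`. What the loop needs is both words absent, which is `and` (equivalently, `not ("channel" in s or "eof" in s)` — the nested-`if` fix is exactly this). A corrected sketch:

```python
def parse_log(lines):
    """Keep only numeric samples; skip 'channel n' checkpoints and the eof marker."""
    data = []
    for sample in lines:
        # Both words must be absent: `and`, not `or` (De Morgan's laws).
        if "channel" not in sample and "eof" not in sample:
            data.append(float(sample))
    return data

lines = ["1.5\n", "channel 00\n", "2.5\n", "eof\n"]
```

In the real script, `with open("mq_test.txt") as log: data = parse_log(log)` also removes the need for the manual `log.close()`.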
<python><if-statement>
2023-02-15 16:15:04
2
3,687
DrBwts
75,462,376
5,112,032
Convert custom string date to date
<p>Is there a way to convert a string date that is stored in some non-traditional custom manner into a date using <code>datetime</code> (or something equivalent)? The dates I am dealing with are S3 partitions that look like this:</p> <p><code>year=2023/month=2/dayofmonth=3</code></p> <p>I can accomplish this with several <code>replace</code>s, but I'm hoping to find a clean single operation to do this.</p>
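`strptime` format strings can embed arbitrary literal text, so the partition prefixes simply become part of the format, and `%m`/`%d` accept non-zero-padded values — no `replace` needed:

```python
from datetime import datetime

# The literal "year=", "/month=", "/dayofmonth=" pieces are matched verbatim;
# %m and %d happily parse the unpadded "2" and "3".
s = "year=2023/month=2/dayofmonth=3"
d = datetime.strptime(s, "year=%Y/month=%m/dayofmonth=%d").date()
```

One call, one format string, and `d` is a proper `date` object.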
<python><date>
2023-02-15 15:59:56
2
2,522
eljusticiero67
75,462,354
6,290,211
Converting string column to date time when strings slightly differ in format due to GMT specification
<p>I have a <code>df</code> with a column that contains several timestamps in two different string formats.</p> <p>These are the two formats:</p> <pre><code>fmt = &quot;%a %b %d %H:%M:%S %Z%z %Y&quot; fmt_2 = &quot;%a %b %d %H:%M:%S %Z %Y&quot; </code></pre> <p><a href="https://codereview.stackexchange.com/questions/283198/a-function-that-uses-a-regex-to-parse-date-and-time-and-returns-a-datetime-objec/283219#283219">Thanks to this community</a>, if I had just one type of string format I would be able to convert to <code>datetime</code> using</p> <p><code>.apply(lambda x: datetime.strptime(x, fmt)</code></p> <pre><code>stamp = &quot;Fri Oct 11 15:09:30 GMT+01:00 2019&quot; stamp_2 = 'Sun Oct 27 08:05:47 GMT 2019' stamp_3 = 'Sat Oct 26 08:05:47 GMT 2019' stamp_4 = &quot;Sat Oct 12 15:09:30 GMT+01:00 2019&quot; #How I would convert the two format #from string to datetime # fmt = &quot;%a %b %d %H:%M:%S %Z%z %Y&quot; print(datetime.strptime(stamp, fmt)) fmt_2 = &quot;%a %b %d %H:%M:%S %Z %Y&quot; print(datetime.strptime(stamp_2, fmt_2)) #building an example dataframe question_dict = {&quot;time&quot;:[stamp, stamp_2, stamp_3,stamp_4], &quot;record&quot;:[1,2,3,4]} question_df = pd.DataFrame(question_dict) print(question_df.info()) </code></pre> <p>How can I convert the elements in my column from string to datetime?</p> <p>I tried to search, but please let me know if my question is a duplicate.</p>
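Since `%Z %Y` raises `ValueError` on the `GMT+01:00` stamps and vice versa, one straightforward approach is a parser that tries each known format in turn and applies it to the column (a sketch; note that the `%Z%z` format yields timezone-aware datetimes while `%Z` alone yields naive ones, so you may want to normalize afterwards, e.g. with `pd.to_datetime(..., utc=True)`):

```python
from datetime import datetime

import pandas as pd

FORMATS = ("%a %b %d %H:%M:%S %Z%z %Y",   # "... GMT+01:00 2019"
           "%a %b %d %H:%M:%S %Z %Y")     # "... GMT 2019"

def parse_stamp(s):
    """Try each known format in turn; re-raise if none fits."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(s, fmt)
        except ValueError:
            pass
    raise ValueError(f"no known format matches {s!r}")

question_df = pd.DataFrame({
    "time": ["Fri Oct 11 15:09:30 GMT+01:00 2019", "Sun Oct 27 08:05:47 GMT 2019"],
    "record": [1, 2],
})
question_df["time"] = question_df["time"].apply(parse_stamp)
```

Adding more formats later is just a matter of extending the `FORMATS` tuple.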
<python><pandas><datetime>
2023-02-15 15:58:32
0
389
Andrea Ciufo
75,462,344
5,962,321
Make ``pip install -e .`` build cython extensions with pyproject.toml
<p>With the move to the new <code>pyproject.toml</code> system, I was wondering whether there was a way to install packages in editable mode while compiling extensions (which <code>pip install -e .</code> does not do).</p> <p>So I want pip to:</p> <ul> <li>run the <code>build_ext</code> I configured for Cython and generate my <code>.so</code> files</li> <li>put them in the local folder</li> <li>do the rest of the normal editable install</li> </ul> <p>I found some mentions of <code>build_wheel_for_editable</code> on the <a href="https://pip.pypa.io/en/stable/reference/build-system/pyproject-toml/?highlight=config-settings#editable-installation" rel="nofollow noreferrer">pip documentation</a> but I could not find any actual example of where this hook should be implemented and what it should look like. (to be honest, I'm not even completely sure this is what I'm looking for)</p> <p>So would anyone know how to do that? I'd also happy about any additional explanation as to why <code>pip install .</code> runs <code>build_ext</code> but the editable command does not.</p> <hr /> <p>Details:</p> <p>I don't have a <code>setup.py</code> file anymore; the <code>pyproject.toml</code> uses setuptools and contains</p> <pre class="lang-ini prettyprint-override"><code>[build-system] requires = [&quot;setuptools&gt;=61.0&quot;, &quot;numpy&gt;=1.17&quot;, &quot;cython&gt;=0.18&quot;] build-backend = &quot;setuptools.build_meta&quot; [tool.setuptools] package-dir = {&quot;&quot; = &quot;.&quot;} [tool.setuptools.packages] find = {} [tool.setuptools.cmdclass] build_ext = &quot;_custom_build.build_ext&quot; </code></pre> <p>The custom <code>build_ext</code> looks like</p> <pre class="lang-py prettyprint-override"><code>from setuptools import Extension from setuptools.command.build_ext import build_ext as _build_ext from Cython.Build import cythonize class build_ext(_build_ext): def initialize_options(self): super().initialize_options() if self.distribution.ext_modules is None: 
self.distribution.ext_modules = [] extensions = Extension(...) self.distribution.ext_modules.extend(cythonize(extensions)) def build_extensions(self): ... super().build_extensions() </code></pre> <p>It builds a .pyx into .cpp, then adds it with another cpp into a .so.</p>
<python><pip><cython><pyproject.toml>
2023-02-15 15:57:51
1
2,011
Silmathoron
75,462,324
14,179,793
Python Peewee EXISTS Subquery not working as expected
<p>I am using the peewee ORM for a python application and I am trying to write code to fetch batches of records from a SQLite database. I have a subquery that seems to work by itself but when added to an update query the <code>fn.EXISTS(sub_query)</code> seems to have no effect as every record in the database is updated.</p> <p>Note: I am using the APSW extension for peewee.</p> <pre><code>def batch_logic(self, id_1, path_1, batch_size=1000, **kwargs): sub_query = (self.select(ModelClass.granule_id).distinct().where( (ModelClass.status == 'old_status') &amp; (ModelClass.collection_id == collection_id) &amp; (ModelClass.name.contains(provider_path)) ).order_by(ModelClass.discovered_date.asc()).limit(batch_size)).limit(batch_size)) print(f'len(sub_query): {len(sub_query)}') fb_st_2 = time.time() updated_records= list( (self.update(status='new_status').where(fn.EXISTS(sub_query)).returning(ModelClass)) ) print(f'update {len(updated_records)}: {time.time() - fb_st_2}') db.close() return updated_records </code></pre> <p>Below is output from testing locally:</p> <pre><code>id_1: id_1_1676475997_PQXYEQGJWR len(sub_query): 2 update 20000: 1.0583274364471436 fetch_batch 20000: 1.1167597770690918 count_things 0: 0.02147078514099121 processed_things: 20000 </code></pre> <p>The subquery is correctly returning <code>2</code> but the update query <code>where(fn.EXISTS(sub_query))</code> seems to be ignored. Have I made a mistake in my understanding of how this works?</p> <p>Edit 1: I believe <code>GROUP BY</code> is needed as rows can have the same <code>granule_id</code> and I need to fetch rows up to <code>batch_size</code> <code>granule_id</code>s</p>
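The subquery here is uncorrelated, and `WHERE EXISTS (uncorrelated subquery)` is a constant condition: it is true for every candidate row whenever the subquery returns anything at all, so the `UPDATE` touches the whole table. The fix is to tie each outer row to the subquery's result set, in peewee typically `ModelClass.granule_id.in_(sub_query)` instead of `fn.EXISTS(sub_query)` (or a correlated `EXISTS`). The SQL-level behaviour, demonstrated with a boiled-down sqlite3 schema (table and column names illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE granule (granule_id TEXT, status TEXT)")
rows = [("g1", "old"), ("g2", "old"), ("g3", "other")]
conn.executemany("INSERT INTO granule VALUES (?, ?)", rows)

# Uncorrelated EXISTS: evaluated independently of each outer row, so it is
# simply "true" and every row gets updated -- the reported behaviour.
conn.execute("""UPDATE granule SET status = 'new'
                WHERE EXISTS (SELECT granule_id FROM granule
                              WHERE status = 'old' LIMIT 1)""")
n_exists = conn.execute(
    "SELECT COUNT(*) FROM granule WHERE status = 'new'").fetchone()[0]

conn.execute("DELETE FROM granule")
conn.executemany("INSERT INTO granule VALUES (?, ?)", rows)

# IN ties each outer row to the subquery's result set, so only the batch
# selected by the subquery changes.
conn.execute("""UPDATE granule SET status = 'new'
                WHERE granule_id IN (SELECT granule_id FROM granule
                                     WHERE status = 'old' LIMIT 2)""")
n_in = conn.execute(
    "SELECT COUNT(*) FROM granule WHERE status = 'new'").fetchone()[0]
```

`n_exists` counts all three rows; `n_in` counts only the two the subquery selected, which is the batching behaviour the question is after.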
<python><sqlite><peewee>
2023-02-15 15:55:57
1
898
Cogito Ergo Sum
75,462,164
8,182,504
Vedo 3D-Gyroid structures STL export
<p>I need to generate a double 3D gyroid structure. For this, I'm using <a href="https://vedo.embl.es/" rel="nofollow noreferrer"><code>vedo</code></a></p> <pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt from scipy.constants import speed_of_light from vedo import * import numpy as np # Paramters a = 5 length = 100 width = 100 height = 10 pi = np.pi x, y, z = np.mgrid[:length, :width, :height] def gen_strut(start, stop): '''Generate the strut parameter t for the gyroid surface. Create a linear gradient''' strut_param = np.ones((length, 1)) strut_param = strut_param * np.linspace(start, stop, width) t = np.repeat(strut_param[:, :, np.newaxis], height, axis=2) return t plt = Plotter(shape=(1, 1), interactive=False, axes=3) scale=0.5 cox = cos(scale * pi * x / a) siy = sin(scale * pi * y / a) coy = cos(scale * pi * y / a) siz = sin(scale * pi * z / a) coz = cos(scale * pi * z / a) six = sin(scale * pi * x / a) U1 = ((six ** 2) * (coy ** 2) + (siy ** 2) * (coz ** 2) + (siz ** 2) * (cox ** 2) + (2 * six * coy * siy * coz) + (2 * six * coy * siz * cox) + (2 * cox * siy * siz * coz)) - (gen_strut(0, 1.3) ** 2) threshold = 0 iso1 = Volume(U1).isosurface(threshold).c('silver').alpha(1) cube = TessellatedBox(n=(int(length-1), int(width-1), int(height-1)), spacing=(1, 1, 1)) iso_cut = cube.cutWithMesh(iso1).c('silver').alpha(1) # Combine the two meshes into a single mesh plt.at(0).show([cube, iso1], &quot;Double Gyroid 1&quot;, resetcam=False) plt.interactive().close() </code></pre> <p><a href="https://i.sstatic.net/zgB0e.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zgB0e.png" alt="Gyroid Structure" /></a></p> <p>The result looks quite good, but now I'm struggling with exporting the volume. Although <em>vedo</em> has over 300 examples, I did not find anything in the documentation to export this as a watertight volume for 3D-Printing. How can I achieve this?</p>
<python><vedo>
2023-02-15 15:41:50
1
1,324
agentsmith
75,462,103
20,612,566
Extract text with multiple regex patterns in Python
<p>I have a list with address information</p> <p>The placement of words in the list can be random.</p> <pre><code>address = [' South region', ' district KTS', ' 4', ' app. 106', ' ent. 1', ' st. 15'] </code></pre> <p>I want to extract each item of the list into a new string.</p> <pre><code>r = re.compile(&quot;.region&quot;) region = list(filter(r.match, address)) </code></pre> <p>It works, but there is more than one spelling of &quot;region&quot;. For example, there can be &quot;South reg.&quot; or &quot;South r-n&quot;. How can I combine multiple patterns? Also, the digit 4 in the list is the building number. It can be only digits, or something like 4k1. How can I extract the building number?</p>
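A sketch of one way to do both, assuming the alternative spellings are exactly "reg." and "r-n" (extend the alternation as needed): `|` folds several patterns into one, and the building number can be matched as the item that is just a digit-led token.

```python
import re

address = [' South region', ' district KTS', ' 4', ' app. 106', ' ent. 1', ' st. 15']

# Alternation (|) combines the spelling variants into a single pattern.
region_re = re.compile(r'\b(?:region|reg\.|r-n)')
region = [s for s in address if region_re.search(s)]
print(region)  # [' South region']

# The building number is an item that starts with digits, optionally
# followed by more word characters such as '4k1'.
building_re = re.compile(r'\s*\d+\w*\s*')
building = [s for s in address if building_re.fullmatch(s)]
print(building)  # [' 4']
```

Items like `' app. 106'` are not picked up as building numbers because their first token is not a digit; tighten or loosen the pattern depending on the real data.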
<python><regex>
2023-02-15 15:37:26
1
391
Iren E
75,461,848
8,899,386
Removing highly correlated columns of a huge Pyspark dataframe with missing values
<p>I have a huge dataframe with 6 million rows and 2k columns. I want to remove the highly correlated columns, and many of the columns are super sparse (90%+ missing values). Unfortunately, the Pyspark Correlation does not handle missing values, AFAIK. That's why I had to loop over columns and compute the correlation.</p> <p>This is the small code to reproduce it:</p> <pre class="lang-py prettyprint-override"><code>from pyspark.sql import SparkSession spark = SparkSession.builder.appName(&quot;test&quot;).getOrCreate() l = [ (7, -5, -8, None, 1, 456, 8), (2, 9, 7, 4, None, 9, -1), (-3, 3, None, 6, 0, 11, 9), (4, -1, 6, 7, 82, 99, 54), ] names = [&quot;colA&quot;, &quot;colB&quot;, &quot;colC&quot;, &quot;colD&quot;, &quot;colE&quot;, &quot;colF&quot;, &quot;colG&quot;] db = spark.createDataFrame(l, names) db.show() #+----+----+----+----+----+----+----+ #|colA|colB|colC|colD|colE|colF|colG| #+----+----+----+----+----+----+----+ #| 7| -5| -8|null| 1| 456| 8| #| 2| 9| 7| 4|null| 9| -1| #| -3| 3|null| 6| 0| 11| 9| #| 4| -1| 6| 7| 82| 99| 54| #+----+----+----+----+----+----+----+ </code></pre> <pre><code>from pyspark.ml.feature import VectorAssembler newdb = ( VectorAssembler(inputCols=db.columns, outputCol=&quot;features&quot;) .setHandleInvalid(&quot;keep&quot;) .transform(db) ) newdb.show() #+----+----+----+----+----+----+----+--------------------+ #|colA|colB|colC|colD|colE|colF|colG| features| #+----+----+----+----+----+----+----+--------------------+ #| 7| -5| -8|null| 1| 456| 8|[7.0,-5.0,-8.0,Na...| #| 2| 9| 7| 4|null| 9| -1|[2.0,9.0,7.0,4.0,...| #| -3| 3|null| 6| 0| 11| 9|[-3.0,3.0,NaN,6.0...| #| 4| -1| 6| 7| 82| 99| 54|[4.0,-1.0,6.0,7.0...| #+----+----+----+----+----+----+----+--------------------+ </code></pre> <p>The correlation function cannot handle missing values.</p> <pre class="lang-py prettyprint-override"><code>from pyspark.ml.stat import Correlation Correlation.corr( dataset=newdb.select(&quot;features&quot;), column=&quot;features&quot;, 
method=&quot;pearson&quot; ).collect()[0][&quot;pearson(features)&quot;].values # array([ 1. , -0.59756161, nan, nan, nan, # 0.79751788, 0.21792969, -0.59756161, 1. , nan, # nan, nan, -0.82202347, -0.40825556, nan, # nan, 1. , nan, nan, nan, # nan, nan, nan, nan, 1. , # nan, nan, nan, nan, nan, # nan, nan, 1. , nan, nan, # 0.79751788, -0.82202347, nan, nan, nan, # 1. , -0.06207047, 0.21792969, -0.40825556, nan, # nan, nan, -0.06207047, 1. ]) </code></pre> <p>I worked around by a <code>for</code> loop, but this one does not apply to my big data:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from pyspark.mllib.linalg import Vectors from pyspark.mllib.stat import Statistics df_vector = newdb num_cols = 7 res = np.ones((num_cols, num_cols), dtype=np.float32) for i in range(1, num_cols): for j in range(i): feature_pair_df = df_vector.select(&quot;features&quot;).rdd.map( lambda x: Vectors.dense([x[0][i], x[0][j]]) ) feature_pair_df = feature_pair_df.filter( lambda x: not np.isnan(x[0]) and not np.isnan(x[1]) ) corr_matrix = Statistics.corr(feature_pair_df, method=&quot;spearman&quot;) corr = corr_matrix[0, 1] res[i, j], res[j, i] = corr, corr res #array([[ 1. , -0.8, -1. , 0.5, 0.5, 0.8, 0. ], # [-0.8, 1. , 1. , -1. , -0.5, -1. , -0.4], # [-1. , 1. , 1. , -1. , 1. , -1. , -0.5], # [ 0.5, -1. , -1. , 1. , 1. , 1. , 1. ], # [ 0.5, -0.5, 1. , 1. , 1. , 0.5, 0.5], # [ 0.8, -1. , -1. , 1. , 0.5, 1. , 0.4], # [ 0. , -0.4, -0.5, 1. , 0.5, 0.4, 1. ]], dtype=float32) </code></pre> <p>How can I write it such that I can find a correlation matrix for a large dataset? Mapping instead of looping or any similar ideas.</p>
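Not a Spark-native answer, but relevant when designing the workaround: pandas already computes correlations pairwise over the rows where both columns are present, so routing the computation through pandas (for example on a sampled subset of the 6 million rows, or per chunk of columns) avoids the NaN poisoning without any explicit per-pair filtering. A small sketch of that behaviour:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "colA": [7.0, 2.0, -3.0, 4.0],
    "colC": [-8.0, 7.0, np.nan, 6.0],
    "colD": [np.nan, 4.0, 6.0, 7.0],
})

# Each pairwise entry uses only the rows where BOTH columns are non-null;
# min_periods guards against pairs with too little overlap.
corr = df.corr(method="spearman", min_periods=2)
print(corr)
```

Whether sampling rows down to a pandas-sized frame is acceptable depends on the use case, but for dropping highly correlated columns an estimate from a few hundred thousand sampled rows is often sufficient.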
<python><dataframe><pyspark><bigdata><correlation>
2023-02-15 15:14:17
1
4,890
Hadij
75,461,623
10,057,842
Unable to use chromedriver on linux server [Exec format error]
<p>I have a raspberry-pi running linux-server as platform. Therefore there is no GUI and I execute all my tasks through terminal by SSH-ing into the Pi. Platform details:</p> <pre><code>uname -a &gt;&gt; Linux ubuntu 5.4.0-1080-raspi #91-Ubuntu SMP PREEMPT Thu Jan 19 09:35:03 UTC 2023 aarch64 aarch64 aarch64 GNU/Linux </code></pre> <h2><strong>Chromium [No issues here]</strong></h2> <p>I have installed Chromium through snap.</p> <pre><code>chromium --version &gt;&gt; Chromium 109.0.5414.119 snap </code></pre> <p>I am able to run chromium, navigate to a website, and take a snapshot</p> <pre><code>chromium --headless --disable-gpu --screenshot https://www.wikipedia.com &gt;&gt; 0215/140750.965255:WARNING:bluez_dbus_manager.cc(247)] Floss manager not present, cannot set Floss enable/disable. [0215/140752.998408:WARNING:sandbox_linux.cc(385)] InitializeSandbox() called with multiple threads in process gpu-process. [0215/140802.665622:INFO:headless_shell.cc(223)] 84646 bytes written to file screenshot.png </code></pre> <h2><strong>Chromedriver [Issues]</strong></h2> <p>I downloaded chromedriver this way</p> <pre><code>wget https://chromedriver.storage.googleapis.com/2.37/chromedriver_linux64.zip </code></pre> <p>And moved Chromedriver to the applications folder after unzipping</p> <p>I get this error when trying to get chromedriver version, let alone run it</p> <pre><code>chromedriver --version &gt;&gt; bash: /usr/local/bin/chromedriver: cannot execute binary file: Exec format error </code></pre> <h2>My Python Script [Issues]</h2> <p>Here is the script I want to be able to run finally</p> <pre><code>import selenium from selenium import webdriver options = webdriver.ChromeOptions() options.add_argument(&quot;--headless&quot;) options.add_argument(&quot;--disable-gpu&quot;) driver = webdriver.Chrome(options=options) driver.get(&quot;https://www.wikipedia.com&quot;) driver.save_screenshot(&quot;proof.png&quot;) </code></pre> <p>This is the error I get when I try to run 
it</p> <pre><code>python3 test.py &gt;&gt; OSError: [Errno 8] Exec format error: 'chromedriver' </code></pre> <h1>What I've Tried already</h1> <h3>Using chromedriver directly through ChromeDriverManager</h3> <pre><code>import selenium from selenium import webdriver from webdriver_manager.chrome import ChromeDriverManager options = webdriver.ChromeOptions() options.add_argument(&quot;--headless&quot;) options.add_argument(&quot;--disable-gpu&quot;) driver = webdriver.Chrome(service=Service(ChromeDriverManager(path=&quot;.&quot;, chrome_type=ChromeType.CHROMIUM).install()), options=options) driver.get(&quot;https://www.wikipedia.com&quot;) driver.save_screenshot(&quot;proof.png&quot;) </code></pre> <p>The error</p> <pre><code>OSError: [Errno 8] Exec format error: './.wdm/drivers/chromedriver/linux64/109.0.5414/chromedriver' </code></pre> <h3>Checking file permissions</h3> <p>Made sure file has execute permissions</p> <pre><code>ls -l /usr/local/bin/chromedriver &gt;&gt; -rwxr-xr-x 1 ubuntu ubuntu 20427216 Sep 8 2021 /usr/local/bin/chromedriver </code></pre>
<python><linux><selenium-webdriver><raspberry-pi><selenium-chromedriver>
2023-02-15 14:56:22
1
398
Aakash Dusane
75,461,413
19,238,204
Create Graphs for the outcome of a Fair Coin Tossed 3 times with JULIA or Python
<p>I am creating this code with JULIA and GraphRecipes. But it seems really ugly and far from what I wanted here:</p> <p><a href="https://i.sstatic.net/czxTY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/czxTY.png" alt="1" /></a></p> <p>This is the code:</p> <pre><code>using GraphRecipes, Plots gr() default(size=(800, 400)) g = [0 1 1 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 1 1 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 1 1 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 1 1 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 1 1 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 1 1; 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0 0 0 0 0 0 0;] graphplot(g, fontsize=12, names=&quot;H&quot;.*string.(1:14), nodeshape=:circle) </code></pre> <p>If Julia is still too young for this, perhaps there is Python code that can create it, ideally even for more than 3 tosses, along with code to calculate the probability of each outcome. I am open to either a Julia or a Python solution.</p> <p>Thanks a lot!</p>
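For the probability half of the question, a short Python sketch that enumerates the 2^n outcomes of n fair tosses (every length-n H/T sequence of a fair coin has probability (1/2)^n):

```python
import itertools

def outcome_probabilities(n: int) -> dict:
    # Every length-n sequence of H/T is equally likely for a fair coin.
    p = 0.5 ** n
    return {"".join(seq): p for seq in itertools.product("HT", repeat=n)}

probs = outcome_probabilities(3)
print(len(probs))    # 8 outcomes for 3 tosses
print(probs["HTH"])  # 0.125
```

The tree itself could then be drawn by feeding these sequences as nodes to a graph library (e.g. `networkx` with a hierarchical layout); that drawing part is left out here.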
<python><julia>
2023-02-15 14:41:33
2
435
Freya the Goddess
75,461,410
6,296,020
pySpark / Databricks - How to write a dataframe from pyspark to databricks?
<p>I'm trying to write a dataframe from a pyspark job that runs on AWS Glue to a Databricks cluster and I'm facing an issue I can't solve.</p> <p>Here's the code that does the writing :</p> <pre><code>spark_df.write.format(&quot;jdbc&quot;) \ .option(&quot;url&quot;, jdbc_url) \ .option(&quot;dbtable&quot;, table_name) \ .option(&quot;password&quot;, pwd) \ .mode(&quot;overwrite&quot;) \ .save() </code></pre> <p>Here's the error I'm getting :</p> <pre><code>[Databricks][DatabricksJDBCDriver](500051) ERROR processing query/statement. Error Code: 0, SQL state: org.apache.hive.service.cli.HiveSQLException: Error running query: org.apache.spark.sql.catalyst.parser.ParseException: \nno viable alternative at input '(\&quot;TEST_COLUMN\&quot;'(line 1, pos 82)\n\n== SQL ==\nCREATE TABLE MYDATABASE.MYTABLE (\&quot;TEST_COLUMN\&quot; TEXT </code></pre> <p>It seems that the issue comes from the fact the SQL statement is using the column name with double quotes instead of single quotes and it's failing because of that. I thought simple things like that would be managed automatically by spark but it seems it's not the case.</p> <p>Do you know how can I solve that issue please ?</p> <p>Thanks in advance !</p>
<python><pyspark><apache-spark-sql><databricks><aws-glue>
2023-02-15 14:41:26
0
360
Meyer Cohen
75,461,388
6,367,971
Validate column format in dataframe
<p>I have a CSV where some rows are parents and some are children, and I want to validate whether the format of the child rows are correct by adding a <code>Yes/No</code> value to a new column via a dataframe.</p> <pre><code>Type,ID,Ext_ID,Name Parent,1111,abc.xyz.num1,yyy Child,1,break://abc.xyz.num1/break1,break1 Child,2,break://abc.xyz.num1,break2 Parent,2222,abc.xyz.num2,zzz Child,1,break://abc.xyz.num2/break1,break1 Child,2,break://abc.xyz.num2/break2,break2 Child,3,abc.xyz.num2/,break3 Child,4,break://abc.xyz.num2/break4,break4 Parent,3333,abc.xyz.num3,sss Child,1,break://abc.xyz.num3/break1,break1 </code></pre> <p>The <em>correct</em> format of the child <code>Ext_ID</code> is <code>break://abc.xyz.{Ext_ID of parent}/break{Name}</code>, so what would be the best way to achieve the desired output?</p> <pre><code> Type ID Ext_ID Name all_breaks_correct_format Parent 1111 abc.xyz.num1 yyy No Child 1 break://abc.xyz.num1/break1 break1 Child 2 break://abc.xyz.num1 break2 Parent 2222 abc.xyz.num2 zzz No Child 1 break://abc.xyz.num2/break1 break1 Child 2 break://abc.xyz.num2/break2 break2 Child 3 abc.xyz.num2/ break3 Child 4 break://abc.xyz.num2/break4 break4 Parent 3333 abc.xyz.num3 sss Yes Child 1 break://abc.xyz.num3/break1 break1 </code></pre>
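One possible pandas approach (a sketch, assuming every block starts with a Parent row as in the sample): forward-fill each parent's `Ext_ID` onto its children, build the expected string, and aggregate the comparison per block.

```python
import io
import pandas as pd

csv = """Type,ID,Ext_ID,Name
Parent,1111,abc.xyz.num1,yyy
Child,1,break://abc.xyz.num1/break1,break1
Child,2,break://abc.xyz.num1,break2
Parent,2222,abc.xyz.num2,zzz
Child,1,break://abc.xyz.num2/break1,break1
Child,2,break://abc.xyz.num2/break2,break2
Child,3,abc.xyz.num2/,break3
Child,4,break://abc.xyz.num2/break4,break4
Parent,3333,abc.xyz.num3,sss
Child,1,break://abc.xyz.num3/break1,break1
"""
df = pd.read_csv(io.StringIO(csv))

is_parent = df["Type"] == "Parent"
block = is_parent.cumsum()  # running block id: a new block per Parent row
parent_ext = df["Ext_ID"].where(is_parent).groupby(block).ffill()
expected = "break://" + parent_ext + "/" + df["Name"]

child_ok = is_parent | (df["Ext_ID"] == expected)  # parents vacuously ok
block_ok = child_ok.groupby(block).transform("all")

df["all_breaks_correct_format"] = ""
df.loc[is_parent, "all_breaks_correct_format"] = block_ok[is_parent].map(
    {True: "Yes", False: "No"})
print(df)
```

On the sample data this marks parent 1111 "No" (child 2's `Ext_ID` lacks `/break2`), 2222 "No" (child 3 is malformed), and 3333 "Yes", matching the desired output.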
<python><pandas>
2023-02-15 14:39:53
1
978
user53526356
75,461,346
17,550,177
Different Result from MinMaxScaler() with Manual Calculations
<p>So i was tinkering with MinMaxScaler and want to see how it works with manual calculations here's the array i'm trying to scale</p> <pre><code>array = [[0, -1.73, -1.73, -2.0, -2.0, -2.0, -1.73], [-1.73, 0, -1.41, -1.73, -1.73, -1.73, -1.41], [-1.73, -1.41, 0, -1.73, -1.73, -1.73, -0.0], [-2.0, -1.73, -1.73, 0, -1.41, -1.41, -1.73], [-2.0, -1.73, -1.73, -1.41, 0, -0.0, -1.73], [-2.0, -1.73, -1.73, -1.41, -0.0, 0, -1.73], [-1.73, -1.41, -0.0, -1.73, -1.73, -1.73, 0]] </code></pre> <p>the lowest value is <code>-2.0</code> and highest value is <code>0</code>. When i do my manual calculations it is based on <code>MinMaxScaler()</code> formula stated in <a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html" rel="nofollow noreferrer">sklearn minmax</a> but when i program it, it shows different result as in this code</p> <pre><code>from sklearn.preprocessing import MinMaxScaler import numpy as np X = np.array([ [0, -1.73, -1.73, -2.0, -2.0, -2.0, -1.73], [-1.73, 0, -1.41, -1.73, -1.73, -1.73, -1.41], [-1.73, -1.41, 0, -1.73, -1.73, -1.73, -0.0], [-2.0, -1.73, -1.73, 0, -1.41, -1.41, -1.73], [-2.0, -1.73, -1.73, -1.41, 0, -0.0, -1.73], [-2.0, -1.73, -1.73, -1.41, -0.0, 0, -1.73], [-1.73, -1.41, -0.0, -1.73, -1.73, -1.73, 0] ]) # create an instance of MinMaxScaler scaler = MinMaxScaler() # fit the scaler to the data and transform the data X_scaled = scaler.fit_transform(X) # print the scaled data print(X_scaled) </code></pre> <p>The Result Array is</p> <pre><code>[[1. 0. 0. 0. 0. 0. 0. ] [0.135 1. 0.1849711 0.135 0.135 0.135 0.1849711] [0.135 0.1849711 1. 0.135 0.135 0.135 1. ] [0. 0. 0. 1. 0.295 0.295 0. ] [0. 0. 0. 0.295 1. 1. 0. ] [0. 0. 0. 0.295 1. 1. 0. ] [0.135 0.1849711 1. 0.135 0.135 0.135 1. ]] </code></pre> <p>My Calculations</p> <pre><code>x' = (x-min)/ (max⁑ - min) x' = (-1.73-(-2)) / (0⁑ -(-2)) x' = 0.135 </code></pre> <p>My question is where did i do my calculations differently than sklearn ? 
why does -1.73 become 0?</p>
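The difference is not in the formula but in its scope: `MinMaxScaler` applies the formula per column (feature), while the manual calculation used the global minimum -2.0 and maximum 0 of the whole matrix. In any column whose own minimum is -1.73, that value therefore maps to 0. A small NumPy sketch of the per-column behaviour on toy data:

```python
import numpy as np

X = np.array([[0.0, -1.73],
              [-1.73, 0.0],
              [-2.0, -1.41]])

# MinMaxScaler scales each column with that column's own min and max,
# not the global min/max of the whole matrix:
col_min = X.min(axis=0)  # [-2.  , -1.73]
col_max = X.max(axis=0)  # [ 0.  ,  0.  ]
X_scaled = (X - col_min) / (col_max - col_min)
print(X_scaled)
```

With the global min/max instead, -1.73 would indeed give (-1.73 + 2) / 2 = 0.135, which is exactly the manual result; per column it gives 0 where -1.73 is the column minimum and 0.135 where the column minimum is -2.0.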
<python><math><scikit-learn><scaling>
2023-02-15 14:36:02
1
756
Michael Halim
75,461,236
7,776,781
Dynamically updating type hints for all attributes of subclasses in pydantic
<p>I am writing some library code where the purpose is to have a base data model that can be subclassed and used to implement data objects that correspond to the objects in a database. For this base model I am inheriting from <code>pydantic.BaseModel</code>.</p> <p>There is a bunch of stuff going on but for this example essentially what I have is a base model class that looks something like this:</p> <pre><code>class Model(pydantic.BaseModel, metaclass=custom_complicated_metaclass): some_base_attribute: int some_other_base_attribute: str </code></pre> <p>I'll get back to what this metaclass does in a moment. This would then be subclassed by some user of this library like this:</p> <pre><code>class User(Model): age: int name: str birth_date: datetime.datetime </code></pre> <p>Now, the metaclass that I am using hooks in to <strong>getattr</strong> and allows the following syntax:</p> <pre><code>User.age &gt; 18 </code></pre> <p>Which then returns a custom Filter object that can be used to filter in a database, so essentially its using attribute access on the class directly (but not on its instances) as a way to create some syntactic sugar for user in filters and sorting.</p> <p>Now, the issue comes when I would like to allow a number of attributes to sort the results of a database query by. I can do something like the following:</p> <pre><code>db = get_some_database(...) db.query(..., order_by=[User.age, User.birth_date] </code></pre> <p>And this works fine, however I would like to be able to specify for each attribute in the order_by list if its ascending or descending order. The simplest syntax I could think of for this is to allow the use of <code>-</code> to invert the sort order like this:</p> <pre><code>db = get_some_database(...) 
db.query(..., order_by=[User.age, -User.birth_date] </code></pre> <p>This <em>works</em>, I just implement <code>__neg__</code> on my custom filter class and its all good.</p> <p>Now finally, the issue I have is that because I defined <code>User.birth_date</code> to be a <code>datetime</code> object, it does not support the <code>-</code> operator which pycharm and mypy will complain about (and they will complain about it for any type that does not support <code>-</code>). They are kind of wrong since when accessing the attribute on the class like this instead of on an instance it actually will return an object that does support the <code>-</code> operator, but obviously they don't know this. If this would only be a problem inside my library code I wouldn't mind it so much, I could just ignore it or add a disable comment etc but since this false positive complaint will show up in end-user code I would really like to solve it.</p> <p>So my actual question essentially is, can I in any way (that the type checkers would also understand) force all the attributes that are implemented on subclasses of my baseclass to have whatever type they are assigned but also union with my custom type, so that these complaints dont show up? Or is there another way I can solve this?</p>
<python><python-3.x><pycharm><mypy><pydantic>
2023-02-15 14:25:35
1
619
Fredrik Nilsson
75,461,232
6,357,916
Argument is not getting mapped in REST endpoint method in django
<p>I have the following code in my <code>MyModuleViewSet</code>:</p> <pre><code>class MyModuleViewSet(viewsets.ModelViewSet): # ... @action(detail=False, permission_classes=(IsOwnerOrReadOnly,)) @debugger_queries def my_rest_api(self, request, pk=None): # ... return HttpResponse(resp, content_type=&quot;application/json&quot;, status=status.HTTP_200_OK) </code></pre> <p>It is registered in my <code>urls.py</code> as follows:</p> <pre><code>router.register(r'mymodule', MyModuleViewSet) </code></pre> <p>I tried to hit this endpoint with the following link:</p> <pre><code>http://0.0.0.0:3000/myproject/api/mymodule/1019/my_rest_api/?format=json&amp;param1=7945 </code></pre> <p>But it gave an error saying:</p> <pre><code>Page not found (404) Request Method: GET Request URL: http://0.0.0.0:3000/myproject/api/mymodule/1019/my_rest_api/?format=json&amp;param1=7945 Using the URLconf defined in myproject.urls, Django tried these URL patterns, in this order: 1. ... 2. ... ... 37. courseware/ api/ ^mymodule/my_rest_api/$ [name='mymodule-my-rest-api'] 38. courseware/ api/ ^mymodule/my_rest_api\.(?P&lt;format&gt;[a-z0-9]+)/?$ [name='mymodule-my-rest-api'] ... </code></pre> <p>As you can see <code>mymodule/my_rest_api</code> is indeed specified in the response on line 37 and 38. So, I tried a different URL by removing <code>/1019</code>:</p> <pre><code>http://0.0.0.0:3000/myproject/api/mymodule/my_rest_api/?format=json&amp;param1=7945 </code></pre> <p>and it hit the breakpoint inside the REST endpoint method. I was thinking that <code>1019</code> would get assigned to the <code>pk</code> argument inside <code>my_rest_api(self, request, pk=None)</code>. But that did not happen. Why is this the case? Did I miss some basic understanding of how DRF works?</p>
<python><django><django-rest-framework><django-views><django-viewsets>
2023-02-15 14:25:14
1
3,029
MsA
75,461,143
12,474,157
Python: How to use requests library to get network data from a website
<p>Whenever you open a website and inspect it, you can see information such as the &quot;Elements&quot; tab, which contains the HTML of the page, but there is also data in the &quot;Network&quot; tab. I would like to get the Fetch/XHR names of all the requests in the Network tab.</p> <pre><code>url = &quot;https://www.jobteaser.com/en&quot; </code></pre> <p><a href="https://i.sstatic.net/omD2g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/omD2g.png" alt="enter image description here" /></a></p> <h3>My sample code that does not work properly</h3> <pre><code>import requests url = 'https://www.jobteaser.com/en' response = requests.get(url) xhr_names = [] for req in response.history + [response]: for xhr in req.request._cookies.values(): if 'Fetch' in xhr and 'Response' in xhr: xhr_names.append(xhr.split('?')[0]) print(xhr_names) </code></pre> <h3>Expected Response</h3> <pre><code>[&quot;https://client.axept.io/5fd938934e433a71df9437d3.json?r=0&quot;, &quot;metadata&quot;, &quot;https://api.axept.io/v1/app/consent/5fd938934e433a71df9437d3?token=v0fd7vbc9l79i8i2ijrwn&amp;service=cookies&quot;] </code></pre>
<python><request><fetch>
2023-02-15 14:18:03
0
1,720
The Dan
75,461,087
2,897,115
python how to access class variables inside init
<p>I want to access class variables inside <code>__init__</code>:</p> <pre><code>class Hare: APP = 10; def __init__(self): print(APP) h = Hare() </code></pre> <p>It's giving this error:</p> <pre><code>!!! NameError: name 'APP' is not defined </code></pre>
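Class variables are not part of the local scope of `__init__`, so a bare `APP` raises `NameError`; they are reached through the instance or the class. A minimal fix:

```python
class Hare:
    APP = 10

    def __init__(self):
        # A bare APP only searches local/enclosing/global/builtin scopes;
        # class attributes are looked up via the instance or the class.
        print(self.APP)
        print(Hare.APP)

h = Hare()  # prints 10 twice
```

`self.APP` also picks up overrides in subclasses, while `Hare.APP` always reads this class's value, so the two spellings differ slightly under inheritance.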
<python>
2023-02-15 14:13:20
1
12,066
Santhosh
75,461,034
13,158,157
Seaborn distplot for data with high SD
<p>I am trying to plot a float pd.Series with a high SD:</p> <pre><code>&gt;&gt;&gt;pd.Series(foo).describe() count 3.351198e+07 mean -2.337614e+02 std 4.788547e+04 min -9.999999e+06 25% 0.000000e+00 50% 0.000000e+00 75% 0.000000e+00 max 7.499913e+07 &gt;&gt;&gt;pd.Series(foo).std() 47885.47066964969 </code></pre> <p>It is highly centered around 0 but has big tails and outliers on both sides. So far I have tried sns.distplot() with a log scale, but it only works on absolute values and I am interested in seeing the difference on both sides. How can I visualize both tails in more detail?</p> <p>My code so far:</p> <pre><code>bins = [0, 10, 100, 1000, 10000, 100000, 100000, 100000000] bins = [-i for i in bins[:1:-1] ] + bins g = sns.distplot(foo.values,bins=bins,kde=False) g.set_xscale('log') </code></pre> <p><a href="https://i.sstatic.net/xkO5v.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xkO5v.png" alt="enter image description here" /></a></p>
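One option worth trying (a sketch on synthetic data, since the real series isn't available): matplotlib's `symlog` scale, which is logarithmic on both sides of zero and linear inside `linthresh`, so the negative and positive tails both stay visible without taking absolute values.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for foo: mostly zeros with heavy two-sided tails.
foo = np.concatenate([np.zeros(30_000), rng.normal(0, 5e4, 2_000)])

bins = [0, 10, 100, 1_000, 10_000, 100_000, 1_000_000]
bins = [-b for b in bins[:0:-1]] + bins

fig, ax = plt.subplots()
ax.hist(foo, bins=bins)
ax.set_xscale("symlog", linthresh=10)  # log tails, linear around 0
ax.set_yscale("log")                   # tame the spike at zero too
fig.savefig("tails.png")
```

The same `set_xscale("symlog", ...)` call works on the Axes returned by seaborn's histogram functions, since they draw onto ordinary matplotlib axes.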
<python><pandas><matplotlib><seaborn>
2023-02-15 14:08:40
0
525
euh
75,460,996
1,056,563
After brew install of python@3.9 "python" is no longer found
<p><code>python3</code> is available after:</p> <pre><code>brew install python@3.9 </code></pre> <p>But there is no <code>python</code> to be found. Not in <code>/usr/local/bin</code> or in <code>/usr/local/Cellar/python@3.9/3.9.16/bin</code>. There are many programs that look for plain old <code>python</code> so what is supposed to be done here?</p>
<python>
2023-02-15 14:05:54
1
63,891
WestCoastProjects
75,460,953
19,130,803
Dash generalize progress bar
<p>I am developing a data app using Dash. I am doing a file-copy task, for which I am showing a progress bar.</p> <p>I have written this code using the Dash website as a reference. This is the pseudo-code structure:</p> <pre><code>import dash_bootstrap_components as dbc from dash import Input, Output, dcc, html, ctx progress = html.Div( [ dcc.Interval(id=&quot;progress_interval&quot;, max_intervals=500, n_intervals=0, interval=2000), dbc.Progress(id=&quot;progress&quot;), ] ) copy_button = dbc.Button(id=&quot;btn_copy&quot;, children=&quot;Copy&quot;, className=&quot;btn btn-success&quot;), @app.callback( [Output(&quot;progress&quot;, &quot;value&quot;), Output(&quot;progress&quot;, &quot;label&quot;)], [ Input(&quot;progress_interval&quot;, &quot;n_intervals&quot;), Input(&quot;btn_copy&quot;, &quot;n_clicks&quot;) ], ) def update_copy_progress(n, n_clicks): triggered_id: Any = ctx.triggered_id # use n_intervals constrained to be in 0-100 if triggered_id == &quot;progress_interval&quot;: progress = min(n % 110, 100) # only add text after 5% progress to ensure text isn't squashed too much return progress, f&quot;{progress} %&quot; if progress &gt;= 5 else &quot;&quot; if triggered_id == &quot;btn_copy&quot;: # start some background process long_running_file_copy_task() </code></pre> <p>The progress bar starts running when the button is clicked and displays 5% 6% 7%....</p> <p>Say I am copying a 100 MB file. Issues I noticed:</p> <ol> <li>10 MB of the file has been copied according to the terminal, but the UI progress bar displays 40%</li> <li>The file copy is still going on, but once the progress bar reaches 100% it resets and starts again from 1%</li> </ol> <p>How is the progress bar handled in real web applications, e.g. for a file-copy task with different file sizes, or for any long-running task? How can I generalize the progress bar? Please suggest some direction.</p>
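Both symptoms come from `min(n % 110, 100)`: it derives a fake percentage from the interval counter, which is unrelated to the bytes actually copied and keeps growing, so it wraps past 100%. The usual pattern is for the background task to record its real progress in shared state and for the interval callback merely to read it. A framework-free sketch of that idea (`copy_with_progress` and `percent` are hypothetical helpers, not Dash APIs):

```python
import os

progress = {"done": 0, "total": 0}

def copy_with_progress(src: str, dst: str, chunk: int = 1024 * 1024) -> None:
    # The long-running task updates the shared state as it actually advances.
    progress["total"] = os.path.getsize(src)
    progress["done"] = 0
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            buf = fin.read(chunk)
            if not buf:
                break
            fout.write(buf)
            progress["done"] += len(buf)

def percent() -> int:
    # What the dcc.Interval callback would return as the bar's value:
    # real progress, independent of how many intervals have elapsed.
    total = progress["total"]
    return int(100 * progress["done"] / total) if total else 0
```

In the Dash app the copy would run in a background thread (or a background callback), and `update_copy_progress` would return `percent()` instead of `min(n % 110, 100)`; this generalizes to any file size and any long-running task that can report how far along it is.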
<python><plotly-dash>
2023-02-15 14:01:58
0
962
winter
75,460,835
2,996,797
Mocking environment variables with starlette config
<pre><code>from starlette.config import Config config = Config(&quot;.env&quot;) SOME_ENV_VAR: str = config(&quot;SOME_ENV_VAR&quot;, cast=str, default=&quot;abc&quot;) </code></pre> <p>I'm looking for a way to mock the value of <code>SOME_ENV_VAR</code> for unit tests. Is there such an option?</p> <pre><code># mock config somehow so that config.SOME_ENV_VAR = &quot;xyz&quot; def some_test(): assert config.SOME_ENV_VAR == &quot;xyz&quot; </code></pre>
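Since `starlette.config.Config` checks `os.environ` before the `.env` file, one common approach is to patch the environment and make sure the `config("SOME_ENV_VAR", ...)` lookup happens after the patch. A stdlib-only sketch of the pattern, with the hypothetical `load_setting` standing in for the Starlette lookup:

```python
import os
from unittest import mock

def load_setting() -> str:
    # Stand-in for: config("SOME_ENV_VAR", cast=str, default="abc").
    # Starlette's Config consults os.environ before .env file values.
    return os.environ.get("SOME_ENV_VAR", "abc")

def test_some_env_var():
    # patch.dict restores the original environment when the block exits.
    with mock.patch.dict(os.environ, {"SOME_ENV_VAR": "xyz"}):
        assert load_setting() == "xyz"

test_some_env_var()
```

Because a module-level `SOME_ENV_VAR: str = config(...)` is evaluated at import time, the patch must be active before the module is first imported; alternatively, the test can patch the already-imported module attribute directly with `mock.patch.object(mymodule, "SOME_ENV_VAR", "xyz")`.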
<python><unit-testing><mocking><fastapi><starlette>
2023-02-15 13:52:52
1
530
Cauthon
75,460,641
127,508
How to build a Python project for a specific version of Python?
<p>I have an app that I would like to deploy to AWS Lambda, and for this reason it has to have Python 3.9.</p> <p>I have the following in the pyproject.toml:</p> <pre class="lang-ini prettyprint-override"><code>name = &quot;app&quot; readme = &quot;README.md&quot; requires-python = &quot;&lt;=3.9&quot; version = &quot;0.5.4&quot; </code></pre> <p>If I try to pip install all the dependencies I get the following error:</p> <pre><code>ERROR: Package 'app' requires a different Python: 3.11.1 not in '&lt;=3.9' </code></pre> <p>Is there a way to specify the Python version for this module?</p> <p>I see there is a lot of confusion about this. I simply want to specify 3.9 &quot;globally&quot; for my build, so that when I build the layer for the Lambda with the following command it works:</p> <pre><code>pip install . -t python/ </code></pre> <p>Right now it has only Python 3.11 packaged. For example:</p> <pre><code>❯ ls -larth python/ | grep sip siphash24.cpython-311-darwin.so </code></pre> <p>When I try to use the layer created this way it fails to load the required library.</p>
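Part of the surprise here is PEP 440 ordering: `3.9.16 > 3.9`, so `requires-python = "<=3.9"` rejects every 3.9.x patch release as well as 3.11. A range that pins the minor version, such as `>=3.9,<3.10`, is probably what was meant. This can be checked with the `packaging` library (assumed to be installed; it is a dependency of pip and setuptools):

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# "<=3.9" means "at most 3.9.0", so real interpreters like 3.9.16 fail it:
assert Version("3.9.16") not in SpecifierSet("<=3.9")
assert Version("3.11.1") not in SpecifierSet("<=3.9")

# Pinning the minor version accepts every 3.9.x and nothing else:
spec = SpecifierSet(">=3.9,<3.10")
assert Version("3.9.16") in spec
assert Version("3.11.1") not in spec
print("ok")
```

Note that `requires-python` only *rejects* unsupported interpreters; it never selects one. To actually build the layer against 3.9, run pip under that interpreter, e.g. `python3.9 -m pip install . -t python/`, which also fixes the `cpython-311` `.so` files ending up in the layer.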
<python>
2023-02-15 13:38:40
1
8,822
Istvan
75,460,481
6,829,370
Combining multi one hot encoded columns in pandas dataframe along with removing duplicates
<p>I have a dataframe that looks like this :</p> <pre><code>ID A B C 1 1 0 0 1 0 1 0 2 1 0 0 </code></pre> <p>I want the output to be like this :</p> <pre><code>ID A B C 1 1 1 0 2 1 0 0 </code></pre> <p>Kindly guide how to achieve this.</p>
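A sketch of one way, assuming the desired output is the element-wise OR of each ID's rows: with 0/1 columns, `groupby(...).max()` does exactly that while also dropping the duplicate rows.

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [1, 1, 2],
    "A":  [1, 0, 1],
    "B":  [0, 1, 0],
    "C":  [0, 0, 0],
})

# max() over each ID's rows keeps a 1 wherever any duplicate had a 1,
# merging the one-hot flags into a single row per ID.
out = df.groupby("ID", as_index=False).max()
print(out)
# ID=1 -> A=1, B=1, C=0; ID=2 -> A=1, B=0, C=0
```

If the frame ever contains counts greater than 1 and a strict 0/1 result is wanted, clip afterwards with `out[cols] = out[cols].clip(upper=1)`.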
<python><python-3.x><pandas><scikit-learn>
2023-02-15 13:27:59
1
388
Shubham_geo
75,460,392
2,876,079
How to show pylint message codes/ids in PyCharm Problems view and get quick fix?
<p>I get some pylint warnings in the Problems view of PyCharm Community Edition:</p> <p><a href="https://i.sstatic.net/qzVlj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qzVlj.png" alt="enter image description here" /></a></p> <p><strong>a)</strong> How can I tell PyCharm to include the corresponding message code/id in the <strong>Problems</strong> View, so that I am able to manually suppress the warning without manually searching the corresponding message code/id?</p> <p><strong>b)</strong> How can I teach PyCharm to provide a quick fix for suppressing the warning with a line comment?</p> <p>For example</p> <p>&quot;PyLint: Too many local variables&quot;</p> <p>=&gt;</p> <p><code># pylint: disable=too-many-locals</code></p> <p>Related:</p> <p><a href="https://stackoverflow.com/questions/13955361/list-of-pylint-human-readable-message-ids">List of pylint human readable message ids?</a></p> <p><a href="https://stackoverflow.com/questions/64585271/how-to-show-pylint-message-codes-in-vscode">How to show pylint message codes in VScode?</a></p> <p><a href="https://stackoverflow.com/questions/54586757/how-do-i-automatically-fix-lint-issues-reported-by-pylint">How do I automatically fix lint issues reported by pylint?</a></p> <p><a href="https://stackoverflow.com/questions/55582277/visual-studio-code-quick-fix-python">Visual Studio Code quick-fix &amp; python</a></p> <p><em>Edit</em></p> <p>Also see related issue ticket of PyCharm Pylint Plugin:</p> <p><a href="https://github.com/leinardi/pylint-pycharm/issues/92" rel="nofollow noreferrer">https://github.com/leinardi/pylint-pycharm/issues/92</a></p>
<python><pycharm><pylint>
2023-02-15 13:20:18
3
12,756
Stefan
75,460,241
3,371,472
Speed up debugging the conda-build tutorial
<p>I'm working outward from <a href="https://docs.conda.io/projects/conda-build/en/latest/user-guide/tutorials/build-pkgs.html" rel="nofollow noreferrer">this conda-build example</a> to eventually build a conda package of my own. (If you try it out, note that the <code>meta.yaml</code> in the example is out of date and you need to use a different <code>meta.yaml</code>; details in <a href="https://github.com/conda/conda-build/issues/4164" rel="nofollow noreferrer">this issue</a>.)</p> <p>The source code in this conda-build example is an existing project called click, which seems to have a very specific structure with elements like <code>tox.ini</code> and <code>setup.py</code> and <code>setup.cfg</code>. It's hard for me to find definitive guidance on Conda's requirements or expectations about the structure of the source code anywhere in the <a href="https://docs.conda.io/projects/conda-build/en/latest/index.html" rel="nofollow noreferrer">conda-build docs</a>, so I've just been changing one thing at a time starting from the working example and checking if it still works.</p> <p>Each <code>conda build</code> command takes several minutes. It makes debugging slow and I've gotten impatient. How can I speed up <code>conda build</code> so that I can easily experiment with different inputs? There are tips to speed up conda environment solving <a href="https://www.anaconda.com/blog/understanding-and-improving-condas-performance" rel="nofollow noreferrer">here</a>, but I'm not solving an environment; I'm building a package.</p> <p>My package is pure Python, so I don't need to bother with any compiler details.</p>
<python><conda><conda-build>
2023-02-15 13:06:24
1
538
eric_kernfeld
75,460,125
16,009,435
cpu performing better than gpu python opencv
<p>I am trying to run an OpenCV program with my GPU. I am using pre-trained models to detect faces on a playing video. The code works fine; my question is, how is my CPU performing better than my GPU? Am I missing some attribute that should be defined to use the full potential of the GPU, or is it just because the performance of my CPU is better than that of my GPU? Thanks in advance.</p> <p><strong>model download links:</strong></p> <p><a href="https://github.com/haroonshakeel/opencv_face_detection" rel="nofollow noreferrer">https://github.com/haroonshakeel/opencv_face_detection</a></p> <p><strong>PC specs:</strong></p> <p>OS: Windows 11</p> <p>GPU: nvidia rtx 3060 mobile</p> <p>CPU: core i7-12700H</p> <pre><code>import numpy as np
import cv2
from imutils.video import FPS

class Detector:
    def __init__(self, use_cuda = False):
        self.faceModel = cv2.dnn.readNetFromCaffe(&quot;models/res10_300x300_ssd_iter_140000.prototxt&quot;,
                                                  caffeModel = &quot;models/res10_300x300_ssd_iter_140000.caffemodel&quot;)
        if use_cuda:
            self.faceModel.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
            self.faceModel.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
            print(&quot;running on GPU&quot;)
        else:
            print(&quot;running on CPU&quot;)

    def processVideo(self, videoName):
        cap = cv2.VideoCapture(videoName)
        if (cap.isOpened() == False):
            print(&quot;error opening video feed&quot;)
            return
        (success, self.img) = cap.read()
        (self.height, self.width) = self.img.shape[:2]
        fps = FPS().start()
        while success:
            self.processFrame()
            cv2.namedWindow(&quot;Output&quot;, cv2.WINDOW_NORMAL)
            cv2.imshow(&quot;Output&quot;, self.img)
            cv2.resizeWindow('Output', 900, 900)
            key = cv2.waitKey(1) &amp; 0xFF
            if key == ord(&quot;q&quot;):
                break
            fps.update()
            (success, self.img) = cap.read()
        fps.stop()
        print(&quot;Elapsed time: {:.2f}&quot;.format(fps.elapsed()))
        print(&quot;FPS: {:.2f}&quot;.format(fps.fps()))
        cap.release()
        cv2.destroyAllWindows()

    def processFrame(self):
        blob = cv2.dnn.blobFromImage(self.img, 1.0, (300, 300), (104.0, 177.0, 123.0), swapRB = False, crop = False)
        self.faceModel.setInput(blob)
        predictions = self.faceModel.forward()
        for i in range(0, predictions.shape[2]):
            if predictions[0, 0, i, 2] &gt; 0.5:
                bbox = predictions[0, 0, i, 3:7] * np.array([self.width, self.height, self.width, self.height])
                (xmin, ymin, xmax, ymax) = bbox.astype(&quot;int&quot;)
                cv2.rectangle(self.img, (xmin, ymin), (xmax, ymax), (0, 0, 255), 2)

detector = Detector(use_cuda = True)
detector.processVideo(&quot;test.mp4&quot;)
</code></pre>
<python><opencv>
2023-02-15 12:55:24
0
1,387
seriously
75,459,915
1,879,109
How do I return rust iterator from a python module function using pyo3
<p>It's not that I am unable to return any rust iterators from a python module function using pyo3. The problem is when a lifetime doesn't live long enough!</p> <p>Allow me to explain.</p> <p><strong>First attempt:</strong></p> <pre class="lang-rust prettyprint-override"><code>#[pyclass]
struct ItemIterator {
    iter: Box&lt;dyn Iterator&lt;Item = u64&gt; + Send&gt;,
}

#[pymethods]
impl ItemIterator {
    fn __iter__(slf: PyRef&lt;'_, Self&gt;) -&gt; PyRef&lt;'_, Self&gt; {
        slf
    }
    fn __next__(mut slf: PyRefMut&lt;'_, Self&gt;) -&gt; Option&lt;u64&gt; {
        slf.iter.next()
    }
}

#[pyfunction]
fn get_numbers() -&gt; ItemIterator {
    let i = vec![1u64, 2, 3, 4, 5].into_iter();
    ItemIterator { iter: Box::new(i) }
}
</code></pre> <p>In the contrived example above I have written a python iterator wrapper for our rust iterator as per the <a href="https://pyo3.rs/v0.18.1/class/protocols.html?highlight=iterator#iterable-objects" rel="nofollow noreferrer">pyo3 guide</a> and it works seamlessly.</p> <p><strong>Second attempt:</strong> The problem is when lifetimes are involved.</p> <p>Say now I have a <code>Warehouse</code> struct that I want to make available as a python class alongside its associated functions.</p> <pre class="lang-rust prettyprint-override"><code>struct Warehouse {
    items: Vec&lt;u64&gt;,
}

impl Warehouse {
    fn new() -&gt; Warehouse {
        Warehouse {
            items: vec![1u64, 2, 3, 4, 5],
        }
    }

    fn get_items(&amp;self) -&gt; Box&lt;dyn Iterator&lt;Item = u64&gt; + '_&gt; {
        Box::new(self.items.iter().map(|f| *f))
    }
}
</code></pre> <p>Implementing them as a python class and methods:</p> <pre class="lang-rust prettyprint-override"><code>#[pyclass]
struct ItemIterator {
    iter: Box&lt;dyn Iterator&lt;Item = u64&gt; + Send&gt;,
}

#[pymethods]
impl ItemIterator {
    fn __iter__(slf: PyRef&lt;'_, Self&gt;) -&gt; PyRef&lt;'_, Self&gt; {
        slf
    }
    fn __next__(mut slf: PyRefMut&lt;'_, Self&gt;) -&gt; Option&lt;u64&gt; {
        slf.iter.next()
    }
}

#[pyclass]
struct Warehouse {
    items: Vec&lt;u64&gt;,
}

#[pymethods]
impl Warehouse {
    #[new]
    fn new() -&gt; Warehouse {
        Warehouse {
            items: vec![1u64, 2, 3, 4, 5],
        }
    }

    fn get_items(&amp;self) -&gt; ItemIterator {
        ItemIterator {
            iter: Box::new(self.items.iter().map(|f| *f)),
        }
    }
}
</code></pre> <p>This throws a compiler error in the <code>get_items</code> function saying:</p> <pre><code>error: lifetime may not live long enough
  --&gt; src/lib.rs:54:19
   |
52 |     fn get_items(&amp;self) -&gt; ItemIterator {
   |                  - let's call the lifetime of this reference `'1`
53 |         ItemIterator {
54 |             iter: Box::new(self.items.iter().map(|f| *f)),
   |                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ cast requires that `'1` must outlive `'static`

error: could not compile `pyo3-example` due to previous error
</code></pre> <p>I am not really sure how to fix this. Can someone explain what's really going on here, how this compares to my first attempt at implementing iterators, and how to fix it?</p>
<python><rust><pyo3>
2023-02-15 12:38:32
2
915
Kity Cartman
75,459,861
20,051,041
Why do I scrape corrupted PDFs of same size with BeautifulSoup?
<p>I went through similar topics here but did not find anything helpful for my case.</p> <p>I managed to get all PDFs (for personal learning purposes) into a local folder but cannot open them. They also all have the same size (310 kB). Perhaps you can find some mistake in my code. Thanks.</p> <pre><code>import os
import requests
from bs4 import BeautifulSoup

# define the URL to scrape
url = 'https://www.someweb.de/medikamente/arzneimittellisten/medikamente_i.html'

# define the folder to save the PDFs to
save_path = r'C:\PDFs'

# create the folder if it doesn't exist
if not os.path.exists(save_path):
    os.makedirs(save_path)

# make a request to the URL
response = requests.get(url)

# parse the HTML content of the page
soup = BeautifulSoup(response.content, 'html.parser')

# find all links on the page that contain 'href=&quot;/medikamente/beipackzettel/&quot;'
links = soup.find_all('a', href=lambda href: href and '/medikamente/beipackzettel/' in href)

# loop through each link and download the PDF
for link in links:
    href = link['href']
    file_name = href.split('?')[0].split('/')[-1] + '.pdf'
    pdf_url = 'https://www.someweb.de' + href + '&amp;file=pdf'
    response = requests.get(pdf_url)
    with open(os.path.join(save_path, file_name), 'wb') as f:
        f.write(response.content)
        f.close()
    print(f'Downloaded {file_name} to {save_path}')
</code></pre>
<python><pdf><web-scraping><beautifulsoup><python-requests>
2023-02-15 12:33:24
1
580
Mr.Slow
75,459,846
1,436,800
How to set Time zone to TIME_ZONE = "Asia/Karachi" in Django Project?
<p>I want to change the timezone of my Django project to Asia/Karachi. I have added this in my settings.py file:</p> <pre><code>TIME_ZONE = &quot;Asia/Karachi&quot;
</code></pre> <p>The time zone of my Postgres is also set to Asia/Karachi. But still, when I create objects, the time zone of the DateTimeField is set to UTC.</p> <pre><code>class MyClass(models.Model):
    name = models.CharField(max_length=64)
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    def __str__(self) -&gt; str:
        return self.name
</code></pre> <p>Now when I create an object of MyClass, created_at and updated_at store the time in the UTC timezone. Why is this so and how can I fix it?</p> <p>Edit: In my DRF interface, I can see the time zone as Asia/Karachi. But when I check the time zone in the shell, it gives the time zone as UTC. <a href="https://i.sstatic.net/MnGHx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MnGHx.png" alt="enter image description here" /></a></p> <p>In the Python shell: <a href="https://i.sstatic.net/jPwb5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jPwb5.png" alt="enter image description here" /></a></p>
<python><django><django-rest-framework><utc><django-timezone>
2023-02-15 12:32:29
1
315
Waleed Farrukh
75,459,841
11,814,875
Scrapy signals not connecting to class methods
<p>I've defined a <code>Crawler</code> class for crawling multiple spiders from a script.<br> For the spiders, instead of using pipelines, I defined a class, <code>CrawlerPipeline</code>, and used signals for connecting methods.<br> In <code>CrawlerPipeline</code>, some methods require class variables such as <code>__ERRORS</code>.<br> I'm unable to find the correct way to implement this. Any suggestions or ideas will be very helpful.<br> For reference, I'm attaching the code snippets:</p> <pre class="lang-py prettyprint-override"><code>from scrapy import signals
from scrapy.crawler import CrawlerProcess

from .pipeline import CrawlerPipeline


class Crawler:
    def __init__(self) -&gt; None:
        self.process = CrawlerProcess(settings={
            'ROBOTSTXT_OBEY': False,
            'REDIRECT_ENABLED': True,
            'SPIDER_MODULES': ['engine.crawler.spiders'],
            'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
            'USER_AGENT': 'Mozilla/5.0 (Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0',
        })

    def spawn(self, spider: str, **kwargs) -&gt; None:
        self.process.crawl(spider, **kwargs)
        self.__connect_signals(spider)

    def run(self) -&gt; None:
        self.process.start()

    def __connect_signals(self, spider: str) -&gt; None:
        pipe = CrawlerPipeline()
        for crawler in self.process.crawlers:
            _set_signal = crawler.signals.connect
            if spider == 'a':
                _set_signal(pipe.add_meta_urls, signal=signals.spider_opened)
            if spider == 'b':
                ...
            if spider == 'c':
                _set_signal(pipe.add_meta_urls, signal=signals.spider_opened)
            if spider == 'd':
                ...
            # These lines are not working, above two also not working
            _set_signal(pipe.process_item, signal=signals.item_scraped)
            _set_signal(pipe.spider_closed, signal=signals.spider_closed)
            _set_signal(pipe.spider_error, signal=signals.spider_error)
</code></pre> <pre class="lang-py prettyprint-override"><code>import json
from pathlib import Path
from collections import defaultdict

from api.database import Mongo


class CrawlerPipeline:
    __ITEMS = defaultdict(list)
    __ERRORS = list

    def process_item(self, item, spider):
        self.__ITEMS[spider.name].append(item)
        return item

    def add_meta_urls(self, spider):
        spider.start_urls = ['https://www.example.com']

    def spider_error(self, failure, response, spider):
        self.__ERRORS.append({
            'spider': spider.name,
            'url': response.url,
            'status': response.status,
            'error': failure.getErrorMessage(),
            'traceback': failure.getTraceback(),
        })

    def spider_closed(self, spider, reason):
        print(self.__ERRORS)
        Path(&quot;logs&quot;).mkdir(parents=True, exist_ok=True)
        ...
</code></pre>
<python><web-scraping><scrapy><scrapy-pipeline>
2023-02-15 12:32:12
0
491
rish_hyun
75,459,700
1,436,800
How can I authenticate web socket connection in postman?
<p>This is my ASGI file:</p> <pre><code>from channels.routing import ProtocolTypeRouter, URLRouter
from channels.auth import AuthMiddlewareStack
import os
import app.routing
from django.core.asgi import get_asgi_application

os.environ.setdefault(&quot;DJANGO_SETTINGS_MODULE&quot;, &quot;project.settings&quot;)

application = get_asgi_application()

application = ProtocolTypeRouter({
    'http': get_asgi_application(),
    &quot;websocket&quot;: AuthMiddlewareStack(
        URLRouter(
            app.routing.websocket_urlpatterns
        )
    ),
})
</code></pre> <p>This is my routing.py file:</p> <pre><code>from django.urls import path
from . import consumers

websocket_urlpatterns = [
    path('ws/sc/', consumers.MySyncConsumer.as_asgi()),
    path('ws/ac/', consumers.MyASyncConsumer.as_asgi()),
    path('ws/test/', consumers.TestConsumer.as_asgi()),
    path('ws/scg/', consumers.MySyncGroupConsumer.as_asgi()),
]
</code></pre> <p>This is my consumer file:</p> <pre><code>class MySyncConsumer(SyncConsumer):

    def websocket_connect(self, event):
        print(&quot;Web socket connected&quot;, event)
        self.send({
            'type': 'websocket.accept'
        })
        print(&quot;Channel layer &quot;, self.channel_layer)
        print(&quot;Channel name&quot;, self.channel_name)
        async_to_sync(self.channel_layer.group_add)(&quot;test_consumer_group_1&quot;, self.channel_name)

    def websocket_receive(self, event):
        print(&quot;Web socket recieved&quot;, event)
        print(event[&quot;text&quot;])
        async_to_sync(self.channel_layer.group_send)(&quot;test_consumer_group_1&quot;, {
            'type': 'chat.message',
            'message': event['text']
        })

    # event handler, which is sending data to the client
    def chat_message(self, event):
        print('Event...', event)
        print('Event...', event[&quot;message&quot;])
        self.send({
            &quot;type&quot;: &quot;websocket.send&quot;,
            &quot;text&quot;: event['message']
        })

    def send_notification(self, event):
        print(&quot;send_notification called&quot;)
        print('Event...', event['value'])
        self.send({
            &quot;type&quot;: &quot;websocket.send&quot;,
            &quot;text&quot;: event['value']
        })

    def websocket_disconnect(self, event):
        print(&quot;Web socket disconnect&quot;, event)
        async_to_sync(self.channel_layer.group_discard)(&quot;test_consumer_group_1&quot;, self.channel_name)
        raise StopConsumer()
</code></pre> <p>How can I authenticate a web socket connection in Postman? I want to authenticate the web socket connection so that <code>self.scope[&quot;user&quot;]</code> returns the currently logged-in user in consumers.py. Otherwise it returns an anonymous user.</p> <p><a href="https://i.sstatic.net/Iu5LO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Iu5LO.png" alt="enter image description here" /></a></p>
<python><django><websocket><postman><django-channels>
2023-02-15 12:17:45
0
315
Waleed Farrukh
75,459,630
14,239,638
warnings when running octave from python
<p>I am using the oct2py Python library to run an Octave script, but I get the following warnings that I don't understand:</p> <pre><code>warning: function C:\Program Files\GNU Octave\Octave-7.3.0\mingw64\share\octave\packages\statistics-1.5.0\shadow\mean.m shadows a core library function
warning: called from
    C:\Program Files\GNU Octave\Octave-7.3.0\mingw64\share\octave\packages\statistics-1.5.0\PKG_ADD at line 11 column 3
    load_packages_and_dependencies at line 56 column 5
    load_packages at line 53 column 3
</code></pre> <p>How can I solve this please?</p> <p>Thanks in advance.</p>
<python><warnings><octave><oct2py>
2023-02-15 12:11:14
1
680
Wallflower
75,459,613
9,390,633
how to format larger dataframe better using pdfwriter using matplotlib
<p>When a large dataframe is transformed to a PDF, too much is fitted onto one page, so the information looks small in the PDF. How can I fix this so it formats correctly for both small and large dataframes?</p> <pre><code>fig, ax = plt.subplots(figsize=(12, 4))
ax.axis('tight')
ax.axis('off')
ax.set_title(title)
ax.table(cellText=panda_df.values, colLabels=panda_df.columns, loc='center')
pdf.savefig(fig, bbox_inches='tight')
plt.close()
</code></pre>
<python><matplotlib>
2023-02-15 12:10:16
0
363
lunbox
75,459,440
4,707,189
pathlib.Path.relative_to doesn't resolve path
<p>I'm getting asymmetric behavior when using <code>Path.relative_to</code> versus <code>os.path.relpath</code>; see the examples below. From <a href="https://docs.python.org/3/library/pathlib.html#correspondence-to-tools-in-the-os-module" rel="nofollow noreferrer">Correspondence to tools in the os module</a>, I was led to believe they behave the same.</p> <p>I'm working with two paths here:</p> <ul> <li><code>C:\Sync\Rmaster_head_\bin</code></li> <li><code>C:\Sync\installed</code></li> </ul> <p>I'm using Python 3.9.15.</p> <h3>os.path.relpath</h3> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; import os.path
&gt;&gt;&gt; import pathlib
&gt;&gt;&gt; start = pathlib.Path(r&quot;../../installed&quot;)
&gt;&gt;&gt; rel_path = os.path.relpath(pathlib.Path(r&quot;C:/Sync/Rmaster_head_/bin&quot;), start=start)
&gt;&gt;&gt; rel_path
'..\\..\\..\\..\\Sync\\Rmaster_head_\\bin'
&gt;&gt;&gt; start / pathlib.Path(rel_path)
WindowsPath('../../installed/../../../../Sync/Rmaster_head_/bin')
&gt;&gt;&gt; (start / pathlib.Path(rel_path)).resolve()
WindowsPath('C:/Sync/Rmaster_head_/bin')
</code></pre> <h3>pathlib.Path.relative_to in both directions</h3> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; pathlib.Path(r&quot;C:/Sync/Rmaster_head_/bin&quot;).relative_to(pathlib.Path(r&quot;../../installed&quot;))
Traceback (most recent call last):
  File &quot;C:\Sync\installed\R2023.1.175_install\sys\python3\x86_64-unknown-winnt_i19v19\lib\code.py&quot;, line 90, in runcode
    exec(code, self.locals)
  File &quot;&lt;input&gt;&quot;, line 1, in &lt;module&gt;
  File &quot;C:\Sync\installed\R2023.1.175_install\sys\python3\x86_64-unknown-winnt_i19v19\lib\pathlib.py&quot;, line 939, in relative_to
    raise ValueError(&quot;{!r} is not in the subpath of {!r}&quot;
ValueError: 'C:\\Sync\\Rmaster_head_\\bin' is not in the subpath of '..\\..\\installed' OR one path is relative and the other is absolute.
</code></pre> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; pathlib.Path(r&quot;../../installed&quot;).relative_to(pathlib.Path(r&quot;C:/Sync/Rmaster_head_/bin&quot;))
Traceback (most recent call last):
  File &quot;C:\Sync\installed\R2023.1.175_install\sys\python3\x86_64-unknown-winnt_i19v19\lib\code.py&quot;, line 90, in runcode
    exec(code, self.locals)
  File &quot;&lt;input&gt;&quot;, line 1, in &lt;module&gt;
  File &quot;C:\Sync\installed\R2023.1.175_install\sys\python3\x86_64-unknown-winnt_i19v19\lib\pathlib.py&quot;, line 939, in relative_to
    raise ValueError(&quot;{!r} is not in the subpath of {!r}&quot;
ValueError: '..\\..\\installed' is not in the subpath of 'C:\\Sync\\Rmaster_head_\\bin' OR one path is relative and the other is absolute.
</code></pre> <p>I've noticed that the exception says that one of the paths is relative, but it also doesn't work when using full paths.</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; path1 = pathlib.Path(r&quot;C:\Sync\Rmaster_head_\bin&quot;)
&gt;&gt;&gt; path2 = pathlib.Path(r&quot;C:\Sync\installed\R2023.1.175_install\documentation&quot;)
&gt;&gt;&gt; os.path.relpath(path1, start=path2)
'..\\..\\..\\Rmaster_head_\\bin'
&gt;&gt;&gt; os.path.relpath(path2, start=path1)
'..\\..\\installed\\R2023.1.175_install\\documentation'
</code></pre> <p>and <code>path1.relative_to(path2)</code> and <code>path2.relative_to(path1)</code> both fail.</p> <p>What am I missing?</p>
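<p><em>Edit:</em> to show the asymmetry without Windows drive letters, here is a minimal POSIX reproduction (the directory names below are placeholders I made up); as far as I can tell it is the same asymmetry I'm hitting on Windows:</p>

```python
import os.path
from pathlib import Path

# Two absolute paths, neither of which is inside the other.
bin_dir = Path("/data/project/bin")
docs_dir = Path("/data/installed/docs")

# os.path.relpath happily walks *up* the tree with ".." components:
rel = os.path.relpath(bin_dir, start=docs_dir)
print(rel)  # ../../project/bin

# Path.relative_to refuses, because bin_dir is not *inside* docs_dir:
try:
    bin_dir.relative_to(docs_dir)
except ValueError as exc:
    relative_to_error = exc
print("relative_to raised:", relative_to_error)
```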
<python><python-3.x><pathlib>
2023-02-15 11:54:55
1
1,149
Daniel
75,459,412
3,922,727
Azure http trigger is not allowing to create a folder in python using makedirs
<p>I am trying to create a new folder inside of the Azure HTTP trigger folder, in order to receive some files from an API call and save them based on the name.</p> <pre><code>config_country_folder = context.function_directory + &quot;/config/&quot; + country.upper()
os.makedirs(config_country_folder, exist_ok=True)
logging.info('Config folder was added')
</code></pre> <p>This part is inside the <code>__init__.py</code> of the trigger, and it's working locally.</p> <p>However, once deployed and tested, the following error with code 500 occurred:</p> <blockquote> <p>Result: Failure Exception: OSError: [Errno 38] Function not implemented: '/home/site/wwwroot/HttpTrigger1/config' Stack: File &quot;/azure-functions-host/workers/python/3.9/LINUX/X64/azure_functions_worker/dispatcher.py&quot;, line 452, in _handle__invocation_request call_result = await self._loop.run_in_executor( File &quot;/usr/local/lib/python3.9/concurrent/futures/thread.py&quot;, line 58, in run result = self.fn(*self.args, **self.kwargs) File &quot;/azure-functions-host/workers/python/3.9/LINUX/X64/azure_functions_worker/dispatcher.py&quot;, line 718, in _run_sync_func return ExtensionManager.get_sync_invocation_wrapper(context, File &quot;/azure-functions-host/workers/python/3.9/LINUX/X64/azure_functions_worker/extension.py&quot;, line 215, in _raw_invocation_wrapper result = function(**args) File &quot;/home/site/wwwroot/HttpTrigger1/__init__.py&quot;, line 33, in main os.makedirs(config_country_folder, exist_ok=True) File &quot;/usr/local/lib/python3.9/os.py&quot;, line 215, in makedirs makedirs(head, exist_ok=exist_ok) File &quot;/usr/local/lib/python3.9/os.py&quot;, line 225, in makedirs mkdir(name, mode)</p> </blockquote> <p>The issue is with <code>makedirs</code>. I went through the web, where it was advised to use the context of the function to create folders; however, the error still occurred.</p> <p>Also, I was checking this link on using the <a href="https://stackoverflow.com/questions/57955013/error-in-azure-functions-for-python-whenever-trying-to-create-new-directory">tempfile</a> library, but this is not what I need here.</p>
<python><azure><azure-http-trigger>
2023-02-15 11:52:46
1
5,012
alim1990
75,459,385
8,406,122
Extracting data using python
<p>I have a text file like this:</p> <pre><code>app	galaxy store	Galaxy Store
app	text editor	Text Editor
app	smartthings	SmartThings
app	samsung pay	Samsung Pay
app	pdssgentool	PdssGenTool
app	pdss-sample	pdss-sample
app	encodertool	EncoderTool
app	play store	Play Store
app	play music	Play Music
app	keep notes	Keep Notes
</code></pre> <p>where the format is:</p> <pre><code>app(tab)name(tab)name_starting_with_caps
</code></pre> <p>From here I only want to extract the <code>name</code> part and store it in a list. Can anyone help me do that in Python? For example <code>list=[galaxy store, text editor, ...]</code></p>
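<p>A minimal sketch of the kind of thing I'm after, assuming the three columns really are separated by single tabs (the inline sample stands in for reading the real file):</p>

```python
# Stand-in for the lines of the real file.
lines = [
    "app\tgalaxy store\tGalaxy Store",
    "app\ttext editor\tText Editor",
    "app\tsmartthings\tSmartThings",
]

names = []
for line in lines:
    parts = line.rstrip("\n").split("\t")
    if len(parts) == 3:            # app / name / name_starting_with_caps
        names.append(parts[1])     # keep only the middle "name" column

print(names)  # ['galaxy store', 'text editor', 'smartthings']
```

<p>With the real file I assume the same loop would just iterate over <code>open(&quot;apps.txt&quot;)</code> instead of the list.</p>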
<python>
2023-02-15 11:50:16
1
377
Turing101
75,459,348
13,566,716
do i need to use stripe listen - stripe cli in production?
<p>Is it required to run in command line:</p> <pre><code>stripe listen --forward-to localhost:5000/webhook </code></pre> <p>to receive stripe events to the webhook endpoint or does the endpoint automatically receive events if it is added to the stripe webhooks dashboard?</p> <p>I'm trying to ask if <code>stripe listen</code> is used only for testing or is it needed in production?</p>
<python><stripe-payments>
2023-02-15 11:47:13
1
369
3awny
75,459,341
10,260,806
How to parse txt file and make changes to a excel file based on the txt file in python
<p>I have a txt file with the following format, and every day I would like to run a Python script so that the Name and end-date columns are used to parse the status column and make changes to a spreadsheet accordingly.</p> <p>Txt file format:</p> <pre><code>+------------+---------------------+---------------------+---------------------+----------------+
| Name       | c2                  | start               | end                 | status
+------------+---------------------+---------------------+---------------------+----------------+
| N1         | d1                  | 2023-02-08 02:01:45 | 2023-02-08 08:15:01 | completed
| N2         | d2                  | 2023-02-09 06:04:25 | 2023-02-09 10:35:50 | completed
| N1         | d2                  | 2023-02-09 06:04:25 | 2023-02-09 10:35:50 | completed
| N1         | d2                  | 2023-02-10 13:46:01 | 2023-02-10 16:35:50 | completed
| N4         | d2                  | 2023-02-10 16:35:25 | 2023-02-10 19:35:50 | started
| N1         | d2                  | 2023-02-11 16:35:25 | 2023-02-11 19:35:50 | completed
| N3         | d2                  | 2023-02-11 16:35:25 | 2023-02-11 19:35:50 | completed
| N2         | d2                  | 2023-02-11 16:35:25 | 2023-02-11 19:35:50 | started
| N4         | d2                  | 2023-02-12 18:54:03 | 2023-02-12 23:53:09 | started
</code></pre> <p>Spreadsheet: <a href="https://i.sstatic.net/CixBd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CixBd.png" alt="enter image description here" /></a></p> <p>All the columns in the spreadsheet have been filled out until the 11th. If I run the Python script for 12/02/2023, then with respect to the txt file above, the spreadsheet would have N4 on 12/02/2023 changed from waiting to Started.</p> <p>Is this possible based on the weirdly formatted txt file?</p> <p>I was thinking of using openpyxl and of edge cases I could add in the future, like if it starts on the 11th and ends on the 12th, then the status on the spreadsheet should change for the end date. But for now I'd just like something simple I can wrap my head around. Sorry if this doesn't make sense.</p>
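<p>For the parsing half of the problem, I imagine something along these lines (rough sketch; the field names are my own, and it assumes the column layout shown above with a leading pipe on each data row):</p>

```python
# Small inline sample in the same shape as the real file.
sample = """\
+------+----+---------------------+---------------------+-----------+
| Name | c2 | start               | end                 | status
+------+----+---------------------+---------------------+-----------+
| N1   | d1 | 2023-02-08 02:01:45 | 2023-02-08 08:15:01 | completed
| N4   | d2 | 2023-02-12 18:54:03 | 2023-02-12 23:53:09 | started
"""

rows = []
for line in sample.splitlines():
    if not line.startswith("|"):
        continue                  # skip the +---+ border lines
    parts = [p.strip() for p in line.lstrip("|").split("|")]
    if parts[0] == "Name":
        continue                  # skip the header row
    rows.append(dict(zip(["name", "c2", "start", "end", "status"], parts)))

print(rows[0]["name"], rows[0]["status"])  # N1 completed
print(rows[1]["end"])                      # 2023-02-12 23:53:09
```

<p>The updating half would then match each row's name and end date against the spreadsheet, which I assume openpyxl can do.</p>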
<python><excel>
2023-02-15 11:46:31
1
982
RedRum
75,459,172
13,066,590
Loading a HuggingFace model on multiple GPUs using model parallelism for inference
<p>I have access to six 24GB GPUs. When I try to load some HuggingFace models, for example the following:</p> <pre><code>from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained(&quot;google/ul2&quot;)
model = AutoModelForSeq2SeqLM.from_pretrained(&quot;google/ul2&quot;)
</code></pre> <p>I get an out of memory error, as the model only seems to be able to load on a single GPU. However, while the whole model cannot fit into a single 24GB GPU card, I have 6 of these and would like to know if there is a way to distribute the model loading across multiple cards, to perform inference.</p> <p>HuggingFace seems to have <a href="https://huggingface.co/docs/transformers/perf_infer_gpu_many" rel="noreferrer">a webpage</a> where they explain how to do this but it has no useful content as of today.</p>
<python><deep-learning><huggingface-transformers><torch><multi-gpu>
2023-02-15 11:33:12
1
930
andrea
75,458,965
15,001,463
T-test for determining which algorithm is experimentally faster
<p>I have written two algorithms to perform the Jacobi iterative method to simulate heat dissipation over a surface. I'd like to see if there is a significant difference between the run times of the two algorithms. I think that I can use a two-tailed <a href="https://www.statsmodels.org/dev/generated/statsmodels.stats.weightstats.ttest_ind.html" rel="nofollow noreferrer">statsmodels t-test</a> to determine whether the means are not equal to one another, but I'd like to know if one is statistically significantly faster. <strong>How can I test to see which algorithm is statistically significantly faster?</strong></p> <p>Here is an example with the two-sample test.</p> <pre class="lang-py prettyprint-override"><code>from statsmodels.stats.weightstats import ttest_ind

# Example run times in seconds
algorithm_1_rtimes = [5, 5.5, 4.9]
algorithm_2_rtimes = [1.2, 1.1, 0.9]

_, pvalue, _ = ttest_ind(algorithm_1_rtimes, algorithm_2_rtimes)
if pvalue &lt; 0.05:
    print(&quot;Reject H0&quot;)
else:
    print(&quot;Fail to reject H0&quot;)
</code></pre>
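<p>As an alternative I've also considered a one-sided permutation test, which needs only the standard library and seems to answer "is algorithm 1 slower than algorithm 2" directly. This is just a sketch (the function name, seed, and iteration count are my own choices) — would this be a valid approach?</p>

```python
import random
import statistics

def perm_test_faster(slow_times, fast_times, n_iter=2000, seed=0):
    """One-sided permutation test; a small p supports mean(slow) > mean(fast)."""
    rng = random.Random(seed)
    observed = statistics.mean(slow_times) - statistics.mean(fast_times)
    pooled = list(slow_times) + list(fast_times)
    k = len(slow_times)
    hits = 0
    for _ in range(n_iter):
        # Shuffle the pooled run times and re-split into two groups of the
        # original sizes; count how often the shuffled difference is at
        # least as extreme as the observed one.
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:k]) - statistics.mean(pooled[k:])
        if diff >= observed:
            hits += 1
    return hits / n_iter

p = perm_test_faster([5, 5.5, 4.9], [1.2, 1.1, 0.9])
print(p)  # a small p-value would suggest algorithm 2 really is faster
```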
<python><statistics>
2023-02-15 11:15:25
0
714
Jared
75,458,724
1,799,871
Tmux + python input() -- control characters taken literally
<p>I'm trying to get a simple console input in Python (<code>query = input('&gt;&gt;&gt; ')</code>). However, when I run this code inside Tmux, any control characters are taken literally, so I can't hit Enter to finish the input (prints <code>^M</code> instead) or break the process (prints <code>^C</code>).</p> <p>When the code is run outside of Tmux, it works fine. Any ideas?</p>
<python><input><console><tmux>
2023-02-15 10:53:03
0
438
Tuetschek
75,458,603
10,270,590
How to Output Downloadable file after processing?
<h2>Specification</h2> <ul> <li><code>gr.__version__ --&gt; '3.16.2'</code></li> <li>I want to create a gradio tab in mygradio app</li> <li>Disregard TAB 1, I am only working on tab2</li> <li>where I upload an excel file</li> <li>save name of the excel fie to a variable</li> <li>process that excel file take data out of it 2 numbers (1 and 2)</li> <li>Load data from the excel file to a pandas dataframe and add 1 to both of the numbers</li> <li>Turn dataframe to excel again and output it to the user to be able to download the output excel file</li> <li>The output file is named as the original uploaded file</li> </ul> <h2>MY CURRENT Code</h2> <pre><code>import gradio as gr import pandas as pd # def func1(): # #.... # pass def func2(name, file): file_name = name file_x = file # use this function to retrieve the file_x without modification for gradio.io output # excel to dataframe df = pd.read_excel(file_x) # add 1 to both numbers df['1'] = df['1'] + 1 df['2'] = df['2'] + 1 # dataframe to excel # returnt the exported excel fiel with the same name as the original file return df.to_excel(file_x, index=False) # GRADIO APP with gr.Blocks() as demo: gr.Markdown(&quot;BI App&quot;) ''' #1.TAB ''' # with gr.Tab(&quot;Tab1&quot;): # #.... 
unimportant code # with gr.Column(): # file_obj = gr.File(label=&quot;Input File&quot;, # file_count=&quot;single&quot;, # file_types=[&quot;&quot;, &quot;.&quot;, &quot;.csv&quot;,&quot;.xls&quot;,&quot;.xlsx&quot;]), # # extract the filename from gradio.io file object # # keyfile_name = gr.Interface(file_name_reader, inputs=&quot;file&quot;, outputs=None) # keyfile_name = 'nothing' # tab1_inputs = [keyfile_name, file_obj] # with gr.Column(): # # output excel file with gradio.io # tab1_outputs = [gr.File(label=&quot;Output File&quot;, # file_count=&quot;single&quot;, # file_types=[&quot;&quot;, &quot;.&quot;, &quot;.csv&quot;,&quot;.xls&quot;,&quot;.xlsx&quot;])] # tab1_submit_button = gr.Button(&quot;Submit&quot;) ''' #2.TAB - I EDIT THIS TAB''' with gr.Tab(&quot;Tab2&quot;): admitad_invoice_approvals_button = gr.Button(&quot;Submit&quot;) def file_name_reader(file): file_name = file.name # extract the file name from the uploaded file return file_name # iface = gr.Interface(file_name_reader, inputs=&quot;file&quot;, outputs=None) with gr.Column(): file_obj = gr.File(label=&quot;Input File&quot;, file_count=&quot;single&quot;, file_types=[&quot;&quot;, &quot;.&quot;, &quot;.csv&quot;,&quot;.xls&quot;,&quot;.xlsx&quot;]), # extract the filename from gradio.io file object keyfile_name = gr.Interface(file_name_reader, inputs=&quot;file&quot;, outputs=None) tab2_inputs = [keyfile_name, file_obj] with gr.Column(): # output excel file with gradio.io tab2_outputs = [gr.File(label=&quot;Output File&quot;, file_count=&quot;single&quot;, file_types=[&quot;&quot;, &quot;.&quot;, &quot;.csv&quot;,&quot;.xls&quot;,&quot;.xlsx&quot;])] tab2_submit_button = gr.Button(&quot;Submit&quot;) '''1 button for each of the tabs to execute the GUI TASK''' # tab1_submit_button.click(func1, # inputs=tab1_inputs, # outputs=tab1_outputs) tab2_submit_button.click(func2, inputs=tab2_inputs, outputs=tab2_outputs) ''' EXECUTING THE APP''' demo.launch(debug=True, share=True) ## PRODUCTION TESTING 
</code></pre> <h2>ERROR:</h2> <pre><code>Output exceeds the size limit. Open the full output data in a text editor --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[7], line 95 90 '''1 button for each of the tabs to execute the GUI TASK''' 91 # tab1_submit_button.click(func1, 92 # inputs=tab1_inputs, 93 # outputs=tab1_outputs) ---&gt; 95 tab2_submit_button.click(func2, 96 inputs=tab2_inputs, 97 outputs=tab2_outputs) 100 ''' EXECUTING THE APP''' 101 demo.launch(debug=True, share=True) ## PRODUCTION TESTING File ~/.local/lib/python3.8/site-packages/gradio/events.py:145, in Clickable.click(self, fn, inputs, outputs, api_name, status_tracker, scroll_to_output, show_progress, queue, batch, max_batch_size, preprocess, postprocess, cancels, every, _js) 140 if status_tracker: 141 warnings.warn( 142 &quot;The 'status_tracker' parameter has been deprecated and has no effect.&quot; 143 ) --&gt; 145 dep = self.set_event_trigger( 146 &quot;click&quot;, 147 fn, 148 inputs, 149 outputs, 150 preprocess=preprocess, 151 postprocess=postprocess, 152 scroll_to_output=scroll_to_output, 153 show_progress=show_progress, 154 api_name=api_name, 155 js=_js, 156 queue=queue, 157 batch=batch, 158 max_batch_size=max_batch_size, 159 every=every, 160 ) 161 set_cancel_events(self, &quot;click&quot;, cancels) 162 return dep File ~/.local/lib/python3.8/site-packages/gradio/blocks.py:225, in Block.set_event_trigger(self, event_name, fn, inputs, outputs, preprocess, postprocess, scroll_to_output, show_progress, api_name, js, no_target, queue, batch, max_batch_size, cancels, every) 217 warnings.warn( 218 &quot;api_name {} already exists, using {}&quot;.format(api_name, api_name_) 219 ) 220 api_name = api_name_ 222 dependency = { 223 &quot;targets&quot;: [self._id] if not no_target else [], 224 &quot;trigger&quot;: event_name, ... 
237 } 238 Context.root_block.dependencies.append(dependency) 239 return dependency AttributeError: 'tuple' object has no attribute '_id' </code></pre> <h2>Tried</h2> <ul> <li>I have looked in to <a href="https://gradio.app/docs/#file" rel="nofollow noreferrer">https://gradio.app/docs/#file</a> but the output file generation is not clean especially regarding applying it to my case</li> </ul>
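A likely cause here is plain Python, not gradio: `file_obj = gr.File(...),` ends with a trailing comma, so `file_obj` is a one-element tuple, and gradio later fails looking up `_id` on it (the `gr.Interface(...)` result passed inside `tab2_inputs` is a second suspect, since it is not an input component). A minimal stdlib illustration of the tuple trap (`Component` is a hypothetical stand-in, not a gradio API):

```python
# A trailing comma after any call wraps the result in a one-element tuple --
# exactly what the traceback complains about ("'tuple' object has no
# attribute '_id'").
class Component:          # illustrative stand-in for gr.File
    pass

file_obj = Component(),   # trailing comma: file_obj is a tuple!
print(type(file_obj))     # <class 'tuple'>

file_obj = Component()    # no trailing comma: the component itself
print(isinstance(file_obj, Component))  # True
```

Removing the comma after the `gr.File(...)` call (and passing real components, not an `Interface`, in `tab2_inputs`) would be the first thing to try.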
<python><excel><pandas><huggingface><gradio>
2023-02-15 10:44:23
1
3,146
sogu
75,458,549
9,860,033
How to show the line number from the file invoking the logger, not the logger file itself?
<p>I have a custom logging class, which has the following format:</p> <pre><code>log_format = &quot;%(asctime)s.%(msecs)d %(levelname)-8s [%(processName)s] [%(threadName)s] %(filename)s:%(lineno)d --- %(message)s&quot; </code></pre> <p>My project tree looks something like this:</p> <pre><code>. β”œβ”€β”€ exceptions.py β”œβ”€β”€ logger.py β”œβ”€β”€ settings.py └── main.py </code></pre> <p>In main.py I import my custom <code>Logger</code> from <code>logger.py</code>. On several places I perform logging using the following syntax:</p> <p><code>Logger.info(&quot;Transcribed audio successfully.&quot;)</code></p> <p>However when looking at the logs, the <code>filename</code> and <code>lineno</code> params are always referring to my <code>Logger</code> class, not the actual function from <code>main.py</code> which invoked the logging:</p> <pre><code>2023-02-15 10:48:06,241.241 INFO [MainProcess] [MainThread] logger.py:38 --- Transcribed audio successfully. </code></pre> <p>Is there a way to change this? I would like that the log entry states something like:</p> <pre><code>2023-02-15 10:48:06,241.241 INFO [MainProcess] [MainThread] main.py:98 --- Transcribed audio successfully. 
</code></pre> <hr /> <p>This is my <code>logger.py</code> file:</p> <pre><code>import logging from logging.handlers import RotatingFileHandler class Logger: log_format = &quot;%(asctime)s.%(msecs)d %(levelname)-8s [%(processName)s] [%(threadName)s] %(filename)s:%(lineno)d --- %(message)s&quot; @staticmethod def setup_single_logger(name, logfile, level): handler = RotatingFileHandler(logfile, mode='a', maxBytes=1024 * 1024, backupCount=10) handler.setFormatter(logging.Formatter(Logger.log_format)) logger = logging.getLogger(name) logger.setLevel(level) logger.addHandler(handler) return logger @staticmethod def setup_logging(): Logger.info_logger = Logger.setup_single_logger('INFO', '/path/to/your/logfile.log', logging.INFO) @staticmethod def info(msg, *args, **kwargs): Logger.info_logger.info(msg, *args, **kwargs) Logger.setup_logging() </code></pre> <p>And an example <code>main.py</code> is:</p> <pre class="lang-py prettyprint-override"><code>from logger import Logger Logger.info(&quot;Transcribed audio successfully.&quot;) </code></pre>
<python><logging><logfile><python-logging>
2023-02-15 10:40:50
1
1,134
waykiki
75,458,438
10,596,249
messagebird messages are not being delivered
<pre><code> import messagebird ACCESS_KEY = &quot;&quot; client = messagebird.Client(ACCESS_KEY) message = client.message_create( 'TestMessage', '+91XXXXXXXXX', 'working', { 'otp' : 1234 } ) print(client) </code></pre> <p>I am using the above code to send a message, but I am not receiving any message on my phone.</p> <p>It gives this in response:</p> <pre><code> &lt;messagebird.client.Client object at 0x100f6f280&gt; </code></pre> <p>Check the screenshot; this is where I got the API key I am testing with.</p> <p><a href="https://i.sstatic.net/BTHHu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BTHHu.png" alt="enter image description here" /></a></p>
<python><messagebird>
2023-02-15 10:29:18
1
806
soubhagya
75,458,399
287,297
Batching the bulk updates of millions of rows with peewee
<ul> <li>I have an SQLite3 database with a table that has twenty million rows.</li> <li>I would like to update the values of some of the columns in the table (for all rows).</li> <li>I am running into performance issues (about only 1'000 rows processed per second).</li> <li>I would like to continue using the <code>peewee</code> module in python to interact with the database.</li> </ul> <p>So I'm not sure if I am taking the right approach with my code. After trying some ideas that all failed, I attempted to perform the update in batches. My first solution here was to iterate in over the cursor with <code>islice</code> as so:</p> <pre><code>import math, itertools from tqdm import tqdm from cool_project.database import db, MyTable def update_row(row): row.column_a = computation(row.column_d) row.column_b = computation(row.column_d) row.column_c = computation(row.column_d) fields = (MyTable.column_a MyTable.column_b MyTable.column_c) rows = MyTable.select() total_rows = rows.count() page_size = 1000 total_pages = math.ceil(total_rows / page_size) # Start # with db.atomic(): for page_num in tqdm(range(total_pages)): page = list(itertools.islice(rows, page_size)) for row in page: update_row(row) MyTable.bulk_update(page, fields=fields) </code></pre> <p>This failed, because it would attempt to put the result of the whole query into memory. 
So I adapted the code to use the <code>paginate</code> function.</p> <pre><code>import math from tqdm import tqdm from cool_project.database import db, MyTable def update_row(row): row.column_a = computation(row.column_d) row.column_b = computation(row.column_d) row.column_c = computation(row.column_d) fields = (MyTable.column_a MyTable.column_b MyTable.column_c) rows = MyTable.select() total_rows = rows.count() page_size = 1000 total_pages = math.ceil(total_rows / page_size) # Start # with db.atomic(): for page_num in tqdm(range(1, total_pages+1)): # Get a batch # page = MyTable.select().paginate(page_num, page_size) # Update # for row in page: update_row(row) # Commit # MyTable.bulk_update(page, fields=fields) </code></pre> <p>But it's still quite slow, and would take &gt;24 hours to complete.</p> <p>What is strange is that the speed (in number of rows per second) notably decreases as time goes by. The scripts starts with ~1000 rows per second. But after half an hour it's down to 250 rows per second.</p> <p>Am I missing something? Thanks!</p>
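Two plausible contributors: `paginate(page_num, page_size)` compiles to `LIMIT/OFFSET`, so every later page rescans and discards all earlier rows, which matches the progressive slowdown; and each loop re-selects rows you have already updated. Keyset pagination (`WHERE id > last_seen ORDER BY id LIMIT n`) keeps every batch cheap. A sketch of the pattern with the stdlib `sqlite3` module (the peewee equivalent would be `MyTable.select().where(MyTable.id > last_id).order_by(MyTable.id).limit(page_size)` plus one `bulk_update` per batch inside `db.atomic()`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE my_table (id INTEGER PRIMARY KEY, d INTEGER, a INTEGER)")
conn.executemany("INSERT INTO my_table (d) VALUES (?)",
                 [(i,) for i in range(10_000)])
conn.commit()

def computation(d):
    return d * 2  # placeholder for the real per-row computation

page_size = 1000
last_id = 0
while True:
    # keyset pagination: O(page) per batch, no OFFSET rescans
    rows = conn.execute(
        "SELECT id, d FROM my_table WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, page_size),
    ).fetchall()
    if not rows:
        break
    conn.executemany(
        "UPDATE my_table SET a = ? WHERE id = ?",
        [(computation(d), rid) for rid, d in rows],
    )
    conn.commit()  # one transaction per batch, not per row
    last_id = rows[-1][0]

print(conn.execute("SELECT COUNT(*) FROM my_table WHERE a = d * 2").fetchone()[0])
```

If the new values are pure SQL-expressible functions of existing columns, a single `UPDATE my_table SET a = ...` statement would beat any Python-side loop by orders of magnitude.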
<python><sqlite><sql-update><peewee>
2023-02-15 10:26:05
1
6,514
xApple
75,458,300
16,014,407
Efficient speaker diarization
<p>I am running a VM instance on google cloud. My goal is to apply speaker diarization to several .wav files stored on cloud buckets.</p> <p>I have tried the following alternatives with the subsequent problems:</p> <ol> <li>Speaker diarization on Google's API. This seems to go fast but the results make no sense at all. I've already seen <a href="https://issuetracker.google.com/issues/199112218" rel="noreferrer">similar issues</a> and I opened a thread myself but I get no answer... The output of this only returns maximum of two speakers with random labels. Here is the code I tried in python:</li> </ol> <pre><code>from google.cloud import speech_v1p1beta1 as speech from google.cloud import storage import os import json import sys storage_client = storage.Client() client = speech.SpeechClient() if &quot;--channel&quot; in sys.argv: index = sys.argv.index(&quot;--channel&quot;) + 1 if index &lt; len(sys.argv): channel = sys.argv[index] print(&quot;Channel:&quot;, channel) else: print(&quot;--channel option requires a value&quot;) audio_folder=f'audio_{channel}' # channel='tve' transcript_folder=f'transcript_output' bucket = storage_client.bucket(audio_folder) bucket2 = storage_client.bucket(transcript_folder) wav_files=[i.name for i in bucket.list_blobs()] json_files=[i.name.split(f'{channel}/')[-1] for i in bucket2.list_blobs(prefix=channel)] for file in wav_files: if not file.endswith('.wav'): continue transcript_name=file.replace('.wav','.json') if transcript_name in json_files: continue gcs_uri = f&quot;gs://{audio_folder}/{file}&quot; # gcs_uri = f&quot;gs://{audio_folder}/out2.wav&quot; audio = speech.RecognitionAudio(uri=gcs_uri) diarization_config = speech.SpeakerDiarizationConfig( enable_speaker_diarization=True, min_speaker_count=2, #max_speaker_count=10, ) config = speech.RecognitionConfig( encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16, #sample_rate_hertz=8000, language_code=&quot;es-ES&quot;, diarization_config=diarization_config, 
#audio_channel_count = 2, ) print(&quot;Waiting for operation to complete...&quot;) operation = client.long_running_recognize(config=config, audio=audio) response=operation.result() result = response.results[-1] # print(result) # print(type(result)) with open(transcript_name,'w') as f: json.dump(str(result),f) # transcript_name=file.replace('.wav','.txt') # result = response.results[-1] # with open(transcript_name,'w') as f: # f.write(result) os.system(f'gsutil cp {transcript_name} gs://transcript_output/{channel}') os.remove(transcript_name) print(f'File {file} processed. ') </code></pre> <p>No matter how the max_speaker or min are changed, results are the same.</p> <ol start="2"> <li>pyannote:</li> </ol> <p>As the above did not work, I decided to try with pyannote. The performance of it is very nice but there is one problem, it is extremely slow. For a wav file of 30 mins it takes more than 3 hours to finish the diarization.</p> <p>Here is my code:</p> <pre><code> #import packages import os from datetime import datetime import pandas as pd from pyannote.audio import Pipeline from pyannote.audio import Model from pyannote.core.json import dump from pyannote.core.json import load from pyannote.core.json import loads from pyannote.core.json import load_from import subprocess from pyannote.database.util import load_rttm from google.cloud import speech_v1p1beta1 as speech from google.cloud import storage import sys # channel='a3' storage_client = storage.Client() if &quot;--channel&quot; in sys.argv: index = sys.argv.index(&quot;--channel&quot;) + 1 if index &lt; len(sys.argv): channel = sys.argv[index] print(&quot;Channel:&quot;, channel) else: print(&quot;--channel option requires a value&quot;) audio_folder=f'audio_{channel}' transcript_folder=f'transcript_{channel}' bucket = storage_client.bucket(audio_folder) bucket2 = storage_client.bucket(transcript_folder) wav_files=[i.name for i in bucket.list_blobs()] rttm_files=[i.name for i in bucket2.list_blobs()] 
token=&quot;XXX&quot; pipeline = Pipeline.from_pretrained(&quot;pyannote/speaker-diarization@2.1&quot;, use_auth_token=token) # this load the model model = Model.from_pretrained(&quot;pyannote/segmentation&quot;, use_auth_token=token) for file in wav_files: if not file.endswith('.wav'): continue rttm_name=file.replace('.wav','.rttm') if rttm_name in rttm_files: continue if '2023' not in file: continue print(f'Doing file {file}') gcs_uri = f&quot;gs://{audio_folder}/{file}&quot; os.system(f'gsutil cp {gcs_uri} {file}') diarization = pipeline(file) with open(rttm_name, &quot;w&quot;) as rttm: diarization.write_rttm(rttm) os.system(f'gsutil cp {rttm_name} gs://transcript_{channel}/{rttm_name}') os.remove(file) os.remove(rttm_name) </code></pre> <p>I am running this with python3.9 on a VM instance with GPU NVIDIA-T4.</p> <p>Is this normal? I've seen that pyannote.audio is kinda slow on the factor of 1x or so, this time is much more than that given that, in theory, it should be running on a dedicated GPU for it...</p> <p>Are there any faster alternatives? Any way to improve the code or design a VM that might increase speed?</p>
<python><google-cloud-platform><speech-to-text><diarization>
2023-02-15 10:17:53
1
328
Luis
75,458,062
1,473,517
Why is a pure Python negative binomial pmf so much faster than the scipy version?
<p>scipy.stats has a function <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.nbinom.html" rel="nofollow noreferrer">nbinom.pmf()</a> which computes the <a href="https://en.wikipedia.org/wiki/Probability_mass_function" rel="nofollow noreferrer">probability mass function</a> of the <a href="https://en.wikipedia.org/wiki/Negative_binomial_distribution" rel="nofollow noreferrer">negative binomial distribution</a>.</p> <p>The mathematical function is very easily described in pure Python code.</p> <pre><code>from math import comb def nbinom_pmf(k, n, p): return comb(k+n-1, n-1)* p**n * (1-p)**k </code></pre> <p>It turns out that scipy.stats.nbinom.pmf() is quite a lot slower than this pure python code and this is a known issue due to the overhead of checking the parameters. The <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_continuous.html" rel="nofollow noreferrer">docs</a> suggest using ._pmf instead. This is indeed faster but is <strong>still</strong> slower than the pure Python code in many cases. E.g.</p> <pre><code>In [24]: %timeit nbinom_pmf(1, 26, 0.5) 282 ns Β± 1.61 ns per loop (mean Β± std. dev. of 7 runs, 1,000,000 loops each) In [25]: %timeit nbinom._pmf(1, 26, 0.5) 2.03 Β΅s Β± 6.55 ns per loop (mean Β± std. dev. of 7 runs, 100,000 loops each) In [32]: %timeit nbinom._pmf(36, 26, 0.5) 2.03 Β΅s Β± 1.49 ns per loop (mean Β± std. dev. of 7 runs, 100,000 loops each) In [33]: %timeit nbinom_pmf(36, 26, 0.5) 1.64 Β΅s Β± 30.9 ns per loop (mean Β± std. dev. of 7 runs, 1,000,000 loops each) </code></pre> <p>Why is the scipy code so much slower than the naive pure Python implementation?</p>
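The usual explanation: scipy's distribution machinery pays a fixed per-call cost (argument broadcasting into numpy arrays, dtype and shape handling, ufunc dispatch) that is independent of input size, so it is designed to amortize over arrays, e.g. `nbinom.pmf(np.arange(10_000), 26, 0.5)` does that setup once for ten thousand values. For a single scalar, the pure-Python formula is just a handful of arithmetic operations:

```python
from math import comb

def nbinom_pmf(k, n, p):
    # P(X = k): probability of k failures before the n-th success
    return comb(k + n - 1, n - 1) * p**n * (1 - p)**k

# Scalar calls avoid scipy's fixed setup cost entirely; that cost only
# pays off when the call is vectorized over an array of k values.
print(nbinom_pmf(1, 26, 0.5))   # == 26 * 0.5**27
```

So the comparison flips as soon as you batch the evaluations: one scipy call on an array is far faster than a Python loop over this function.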
<python><scipy>
2023-02-15 09:56:35
1
21,513
Simd
75,458,048
10,659,910
Populating each calendar month with a specific incrementing pattern
<p>Given a pandas <code>DataFrame</code> indexed by a timeseries, e.g.</p> <pre><code>import pandas as pd import numpy as np index = pd.date_range('2023-01-01', '2023-12-31', freq='1D') pd.DataFrame({'a' : np.random.randint(0, 10, len(index))}, index=index) </code></pre> <pre class="lang-none prettyprint-override"><code> a 2023-01-01 3 2023-01-02 2 2023-01-03 1 2023-01-04 3 2023-01-05 8 ... .. 2023-12-27 2 2023-12-28 2 2023-12-29 0 2023-12-30 1 2023-12-31 7 </code></pre> <p>How can I add a new column populated with an incrementing pattern within each calendar month? E.g. <code>b = day_of_month / days_in_month</code></p> <pre class="lang-none prettyprint-override"><code> a b 2023-01-01 0 0.032258 2023-01-02 5 0.064516 2023-01-03 2 0.096774 2023-01-04 7 0.129032 2023-01-05 4 0.161290 ... .. ... 2023-12-27 6 0.870968 2023-12-28 5 0.903226 2023-12-29 8 0.935484 2023-12-30 2 0.967742 2023-12-31 9 1.000000 </code></pre> <p>Such that the following pattern is created:</p> <p><a href="https://i.sstatic.net/4lyLQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4lyLQ.png" alt="enter image description here" /></a></p>
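A `DatetimeIndex` exposes both quantities directly, so the `b` column is one vectorized division:

```python
import numpy as np
import pandas as pd

index = pd.date_range('2023-01-01', '2023-12-31', freq='1D')
df = pd.DataFrame({'a': np.random.randint(0, 10, len(index))}, index=index)

# day-of-month divided by the number of days in that calendar month
df['b'] = df.index.day / df.index.days_in_month

print(df['b'].head())   # 1/31, 2/31, ... for January
```

The same idea generalizes to other within-month patterns by combining `.day`, `.days_in_month`, `.month`, etc.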
<python><pandas>
2023-02-15 09:55:40
1
10,368
William Miller
75,458,034
14,912,911
No module named 'PIL' after Installing pillow latest version
<p>I installed Pillow, following the documentation <a href="https://pillow.readthedocs.io/en/stable/installation.html#" rel="nofollow noreferrer">here</a>:</p> <pre><code>python3 -m pip install --upgrade pip python3 -m pip install --upgrade Pillow </code></pre> <p>and import Image like this:</p> <pre><code>from PIL import Image </code></pre> <p>Even though I upgraded Pillow to 9.4.0, I am getting the following error in VS Code:</p> <blockquote> <p>No module named 'PIL'</p> </blockquote> <p>I am using Python 3.9.7. I am not sure what I am doing wrong here: is it my Python version, or is it VS Code? Can someone please enlighten me about this issue?</p> <p>I can see the packages installed in my venv folder, but cannot import them in the file I am working on (the import is highlighted in yellow):</p> <p><a href="https://i.sstatic.net/ql7uU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ql7uU.png" alt="enter image description here" /></a></p>
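This symptom usually means VS Code is running a different interpreter than the venv you installed into. A quick diagnostic to run from inside VS Code (if it prints the system Python rather than a path inside your venv, switch interpreters with the "Python: Select Interpreter" command):

```python
import sys

# Which interpreter is actually executing this file? If Pillow went into
# the venv but this prints the system Python, VS Code picked the wrong one.
print(sys.executable)
print(sys.prefix)

try:
    from PIL import Image  # Pillow installs under the package name PIL
    print("Pillow is available")
except ImportError:
    print("Pillow is not installed for", sys.executable)
```

Installing explicitly into the chosen interpreter with `/path/to/venv/bin/python -m pip install Pillow` removes any ambiguity about which environment pip targeted.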
<python><pip><python-imaging-library>
2023-02-15 09:54:41
2
448
Dude
75,458,001
8,511,544
I understand Python (3.x) is supposed to round to even. Why is round(0.975,2) = 0.97 but round(0.975*100)/100 = 0.98?
<p>So the question is in the title. Here are some more details:</p> <p>code:</p> <pre><code>a=0.975 print(round(a,2)) print(round(a*100)/100) a=-0.975 print(round(a,2)) print(round(a*100)/100) a=1.975 print(round(a,2)) print(round(a*100)/100) a=-1.975 print(round(a,2)) print(round(a*100)/100) </code></pre> <p>The printed output is:</p> <pre><code>0.97 0.98 -0.97 -0.98 1.98 1.98 -1.98 -1.98 </code></pre> <p>I guess there is something going on with floating point error and how <code>round()</code> handles float numbers? It seems to be the case between -1 and 1. Probably <code>round()</code> is shifting the floating point and creating a number with more digits after the 5?</p> <p>Can someone explain, whether there is a way to avoid this?</p>
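The short answer: `0.975` cannot be stored exactly in binary. The nearest double is slightly *below* 0.975, so `round(a, 2)`, which sees only the stored value, legitimately rounds down; meanwhile `0.975 * 100` happens to round to exactly `97.5`, a representable double, and `round(97.5)` banker's-rounds that true tie to the even 98. The `decimal` module both reveals the stored values and offers decimal-exact rounding:

```python
from decimal import Decimal, ROUND_HALF_EVEN

# The literal 0.975 is stored as the nearest binary double, slightly below:
print(Decimal(0.975))        # 0.9749999999999999777955...
# so round(0.975, 2) sees a value below the tie and rounds down to 0.97.

# 0.975 * 100 rounds to *exactly* 97.5, a representable double:
print(Decimal(0.975 * 100))  # 97.5
print(round(0.975 * 100))    # 98 -- a true tie, rounded to the even 98

# For decimal-exact half-even rounding, avoid binary floats entirely:
print(Decimal("0.975").quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN))  # 0.98
```

Which side of the tie each literal lands on depends on the nearest double, which is why `1.975` behaves differently from `0.975`; constructing `Decimal` from a *string* sidesteps the issue.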
<python><rounding><precision>
2023-02-15 09:51:46
1
637
dalleaux
75,457,897
20,612,566
Dict in List comprehension Python
<p>I have a list with API results:</p> <pre><code>result = [(True, {'result': {'skus': [{'sku': '123'}, {'sku': '124'}] } } ), (True, {'result': {'skus': [{'sku': '125'}, {'sku': '126'}] } } ) ] </code></pre> <p>I need to get each <code>'sku'</code>.</p> <p>I can do it with two loops:</p> <pre><code>for elem in result: for sku in elem[1][&quot;result&quot;][&quot;skus&quot;]: print(sku) </code></pre> <p>How can I do it in a single line?</p>
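The two loops translate directly into one comprehension; the `for` clauses appear in the same left-to-right order as the nested loops:

```python
result = [(True, {'result': {'skus': [{'sku': '123'}, {'sku': '124'}]}}),
          (True, {'result': {'skus': [{'sku': '125'}, {'sku': '126'}]}})]

# outer loop first, inner loop second -- same order as the nested for-loops
skus = [sku for elem in result for sku in elem[1]["result"]["skus"]]
print(skus)        # [{'sku': '123'}, {'sku': '124'}, {'sku': '125'}, {'sku': '126'}]

# or just the values themselves:
sku_values = [sku["sku"] for elem in result for sku in elem[1]["result"]["skus"]]
print(sku_values)  # ['123', '124', '125', '126']
```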
<python><python-3.x><list><loops><list-comprehension>
2023-02-15 09:41:28
3
391
Iren E
75,457,849
6,210,177
regex: Find all groups of consecutive groups, where the groups are separated by pattern
<p>I have a badly parsed text where multiple text blocks are separated by lines with only three digits. What I want is to get a regex that would help me capture all the text in a block (starting and including the three digits row until the last white space before the next three characters.</p> <p>This is the one I've tried, but as it uses a lookahead the last group is not captured. <code>\n*((\d{3})\n*([\S\s]+?)(?=\s\d{3}\s))</code></p> <p>Sample:</p> <pre><code>foo 000 foo bar foo 461 long multiline text 999 last example until rest of document </code></pre> <p>Expected groups:</p> <pre><code>[000 foo bar foo ] Group 1 [461 long multiline text ] Group 2 [999 last example until rest of document] Group 3 </code></pre>
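A zero-width split avoids the lookahead problem entirely: split the document at every position that starts a line of exactly three digits, then keep the chunks that begin with a marker. Since the split pattern consumes nothing, every block keeps its leading digits, and the final block needs no special casing (zero-width `re.split` requires Python 3.7+). A sketch:

```python
import re

text = """foo
000
foo bar foo

461
long multiline
text

999
last example
until rest of document"""

# Split just before each line consisting of exactly three digits.
parts = re.split(r"(?m)^(?=\d{3}$)", text)
# Keep only chunks that actually start with a marker; trim trailing blanks.
blocks = [p.rstrip() for p in parts if re.match(r"\d{3}$", p, re.M)]
for b in blocks:
    print(repr(b))
```

This yields three blocks, each starting with its `000`/`461`/`999` marker and ending at the last non-whitespace character before the next marker.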
<python><regex>
2023-02-15 09:36:56
2
455
Jano
75,457,741
1,870,803
Dynamically generating marshmallow schemas for SQLAlchemy fails on column attribute lookup for Relationship
<p>I am automatically deriving marshmallow schemas for SQLAlchemy objects using the approach described in <a href="https://stackoverflow.com/questions/42891152/how-to-dynamically-generate-marshmallow-schemas-for-sqlalchemy-models">How to dynamically generate marshmallow schemas for SQLAlchemy models</a>.</p> <p>I am then decorating my model classes:</p> <pre class="lang-py prettyprint-override"><code>@derive_schema class Foo(db.Model): id = db.Column(UUID(as_uuid=True), primary_key=True, server_default=sqlalchemy.text(&quot;uuid_generate_v4()&quot;)) name = db.Column(String, nullable=False) def __repr__(self): return self.name @derive_schema class FooSettings(db.Model): foo_id = Column(UUID(as_uuid=True), ForeignKey('foo.id'), primary_key=True, nullable=False) my_settings = db.Column(JSONB, nullable=True) foo = db.relationship('Foo', backref=db.backref('foo_settings')) </code></pre> <p>Where my <code>derive_schema</code> decorator is defined as follows:</p> <pre class="lang-py prettyprint-override"><code>import marshmallow from marshmallow_sqlalchemy import SQLAlchemyAutoSchema def derive_schema(cls): class Schema(SQLAlchemyAutoSchema): class Meta: include_fk = True include_relationships = True load_instance = True model = cls marshmallow.class_registry.register(f'{cls.__name__}.Schema', Schema) cls.Schema = Schema return cls </code></pre> <p>This used to work fine with SQLAlchemy 1.4. 
While attempting to upgrade to 2.0.3, I am running into the following exception when changing my schema to inherit <code>SQLAlchemyAutoSchema</code> instead of <code>ModelSchema</code>:</p> <pre><code>Traceback (most recent call last): from foo.model import Foo, FooSettings File &quot;foo/model.py&quot;, line 21, in &lt;module&gt; class SourceSettings(db.Model): File &quot;schema_generation/__init__.py&quot;, line 18, in derive_schema class Schema(SQLAlchemyAutoSchema): File &quot;python3.10/site-packages/marshmallow/schema.py&quot;, line 121, in __new__ klass._declared_fields = mcs.get_declared_fields( File &quot;python3.10/site-packages/marshmallow_sqlalchemy/schema/sqlalchemy_schema.py&quot;, line 91, in get_declared_fields fields.update(mcs.get_declared_sqla_fields(fields, converter, opts, dict_cls)) File &quot;python3.10/site-packages/marshmallow_sqlalchemy/schema/sqlalchemy_schema.py&quot;, line 130, in get_declared_sqla_fields converter.fields_for_model( File &quot;python3.10/site-packages/marshmallow_sqlalchemy/convert.py&quot;, line 141, in fields_for_model field = base_fields.get(key) or self.property2field(prop) File &quot;python3.10/site-packages/marshmallow_sqlalchemy/convert.py&quot;, line 180, in property2field field_class = field_class or self._get_field_class_for_property(prop) File &quot;python3.10/site-packages/marshmallow_sqlalchemy/convert.py&quot;, line 262, in _get_field_class_for_property column = prop.columns[0] File &quot;python3.10/site-packages/sqlalchemy/util/langhelpers.py&quot;, line 1329, in __getattr__ return self._fallback_getattr(key) File &quot;python3.10/site-packages/sqlalchemy/util/langhelpers.py&quot;, line 1298, in _fallback_getattr raise AttributeError(key) AttributeError: columns </code></pre> <p>Looking internally in the stacktrace, <a href="https://github.com/marshmallow-code/marshmallow-sqlalchemy/blob/5e5488b3468f584c20a0983409f5be2e715c1301/src/marshmallow_sqlalchemy/convert.py#L271" rel="nofollow noreferrer">it seems 
like the problem is here</a>:</p> <pre><code>def _get_field_class_for_property(self, prop): if hasattr(prop, &quot;direction&quot;): field_cls = Related else: column = prop.columns[0] field_cls = self._get_field_class_for_column(column) return field_cls </code></pre> <p>There is a check for an attribute named <code>direction</code> which is specified on the SQLAlchemy <code>Relationship</code> object. However, this attribute seems to be dynamically loaded, which causes the conditional check to fail and fall back to <code>prop.columns[0]</code>. But since this object is a <code>Relationship</code> and not a <code>ColumnProperty</code> it has no <code>columns</code> attribute which causes the program to crash.</p> <p>However, I have found a way to force load the <code>direction</code> property by adding the following code to the <code>derive_schema</code> method, before creating the generating <code>Schema</code> class:</p> <pre><code>import marshmallow from marshmallow_sqlalchemy import SQLAlchemyAutoSchema from sqlalchemy import inspect def derive_schema(cls): mapper = inspect(cls) _ = [_ for _ in mapper.relationships] class Schema(SQLAlchemyAutoSchema): class Meta: include_fk = True include_relationships = True load_instance = True model = cls marshmallow.class_registry.register(f'{cls.__name__}.Schema', Schema) cls.Schema = Schema return cls </code></pre> <p>Enumerating the relationships and force loading them fixes the materialization of the <code>direction</code> property and thus the program loads fine.</p> <p>Am I missing something in the model definition to make this work without force loading the relationships?</p>
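Rather than force-loading each relationship, you can ask SQLAlchemy to finalize everything explicitly: `sqlalchemy.orm.configure_mappers()` completes all pending mapper configuration, which is the step that materializes `RelationshipProperty.direction`. Your enumeration trick triggers the same configuration as a side effect; calling it directly in `derive_schema` (before building the auto-schema) states the intent. A self-contained sketch of the effect, using plain SQLAlchemy models (the table shapes here are simplified stand-ins for the ones in the question):

```python
from sqlalchemy import Column, ForeignKey, Integer, inspect
from sqlalchemy.orm import configure_mappers, declarative_base, relationship

Base = declarative_base()

class Foo(Base):
    __tablename__ = "foo"
    id = Column(Integer, primary_key=True)

class FooSettings(Base):
    __tablename__ = "foo_settings"
    foo_id = Column(Integer, ForeignKey("foo.id"), primary_key=True)
    foo = relationship("Foo", backref="foo_settings")

# Finalize all pending mapper configuration in one explicit call.
# This sets RelationshipProperty.direction -- the attribute that
# marshmallow-sqlalchemy's hasattr(prop, "direction") check relies on --
# so call it inside derive_schema() before defining the Schema class.
configure_mappers()

prop = inspect(FooSettings).relationships["foo"]
print(prop.direction)   # a many-to-one direction symbol
```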
<python><sqlalchemy><marshmallow><marshmallow-sqlalchemy>
2023-02-15 09:26:58
2
149,964
Yuval Itzchakov
75,457,670
859,227
Drop rows and reset_index in a dataframe
<p>I was wondering why <code>reset_index()</code> has no effect in the following piece of code.</p> <pre><code>data = [0,10,20,30,40,50] df = pd.DataFrame(data, columns=['Numbers']) df.drop(df.index[2:4], inplace=True) df.reset_index() df Numbers 0 0 1 10 4 40 5 50 </code></pre> <p>UPDATE:</p> <p>If I use <code>df.reset_index(inplace=True)</code>, I see a new column which is not desired.</p> <pre><code> index Numbers 0 0 0 1 1 10 2 4 40 3 5 50 </code></pre>
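Two separate things are happening: `reset_index()` returns a *new* DataFrame by default, and the result on that line is discarded; and when you do keep the result, the old index is inserted as an `index` column unless you also pass `drop=True`. A sketch:

```python
import pandas as pd

data = [0, 10, 20, 30, 40, 50]
df = pd.DataFrame(data, columns=['Numbers'])
df.drop(df.index[2:4], inplace=True)

# reset_index returns a new frame unless inplace=True;
# drop=True discards the old index instead of inserting it as a column.
df = df.reset_index(drop=True)   # or: df.reset_index(drop=True, inplace=True)
print(df)
```

After this, the index runs 0..3 and no extra `index` column appears.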
<python><pandas><dataframe>
2023-02-15 09:20:30
2
25,175
mahmood
75,457,594
8,913,983
Extracting text from a binary PE file in python
<p>I am trying to extract strings from a PE files (.exe &amp; .dll) using <code>pefile</code> library but for a while I am stuck as this type of data format is new to new, I've read many questions similar to mine but with no success I am able to adapt the code to fit my needs.</p> <p>I have a following code:</p> <pre><code># path to random pe file p = 'dfghdsfhrtkl54165hs.exe' pe = pefile.PE(p) # Extract the file's metadata print('Machine: ', pe.FILE_HEADER.Machine) print('Number of sections: ', pe.FILE_HEADER.NumberOfSections) print('Timestamp: ', pe.FILE_HEADER.TimeDateStamp) print('Entry point: ', pe.OPTIONAL_HEADER.AddressOfEntryPoint) </code></pre> <pre><code># Machine: 332 # Number of sections: 3 # Timestamp: 1441263997 # Entry point: 5432 </code></pre> <p>As I understand there are <code>sections</code> that contain <code>.text</code> which can be used to classify if the file is bening or malignant so I've tried the following:</p> <pre><code>for section in pe.sections: if section.Name.decode().strip('\x00') == '.text': text_section = section break text_section </code></pre> <p>Which returns</p> <pre><code>&lt;Structure: [IMAGE_SECTION_HEADER] 0x1B0 0x0 Name: .text 0x1B8 0x8 Misc: 0xF53C 0x1B8 0x8 Misc_PhysicalAddress: 0xF53C 0x1B8 0x8 Misc_VirtualSize: 0xF53C 0x1BC 0xC VirtualAddress: 0x1000 0x1C0 0x10 SizeOfRawData: 0x10000 0x1C4 0x14 PointerToRawData: 0x1000 0x1C8 0x18 PointerToRelocations: 0x0 0x1CC 0x1C PointerToLinenumbers: 0x0 0x1D0 0x20 NumberOfRelocations: 0x0 0x1D2 0x22 NumberOfLinenumbers: 0x0 0x1D4 0x24 Characteristics: 0x60000020&gt; </code></pre> <p>But I am unsure how to proceed extracting printable strings from this or if this is even the right way.</p> <p>I've read the following answers: <a href="https://stackoverflow.com/questions/20027990/how-can-i-get-text-section-from-pe-file-using-pefile">1</a> <a href="https://stackoverflow.com/questions/6804582/extract-strings-from-a-binary-file-in-python">2</a> <a 
href="https://stackoverflow.com/questions/26139889/extracting-text-section-only-from-a-pe-file">3</a></p> <p>My end goal is to extract text from PE files that I can use in my ML model as features.</p>
<python><dll><binary><exe><portable-executable>
2023-02-15 09:14:35
0
4,870
Jonas Palačionis
75,457,486
3,270,696
Using LRU Cache with python's Gunicorn
<p>I am trying to put data into an LRU cache and then get it back. I am using Postman to fetch the data via the API: only every fourth or fifth time I retry the request is the data returned from the cache, whereas the first tries take time and never hit the cache. I suspect the data is not being cached in the first place. What might be the issue here with Python's LRU cache and Gunicorn?</p> <p>I am expecting the data to be returned from the cache the second time I hit my API, but in practice it is only served from the cache after several tries.</p>
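A likely explanation (assuming an in-process cache such as `functools.lru_cache`): Gunicorn runs several worker *processes*, and each process has its own private memory, hence its own private cache. Consecutive requests land on different workers and miss until one happens to reach a worker that has already cached the value, which matches the "fourth or fifth retry" pattern. Within a single process the cache behaves exactly as expected:

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def fetch(key):
    # stand-in for the expensive lookup behind the API endpoint
    return key.upper()

fetch("a")                 # miss: computed
fetch("a")                 # hit: served from *this process's* cache
print(fetch.cache_info())  # hits=1, misses=1 -- within one process
```

For a cache shared across all workers, use an external store such as Redis or memcached; Gunicorn's `--preload` only shares objects created *before* the fork, so per-process caches still diverge afterwards.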
<python><gunicorn><lru>
2023-02-15 09:05:32
0
317
Jawwad Ahmed
75,457,389
17,696,880
Validate if a list has only None elements, strings that only have whitespaces or is empty
<pre class="lang-py prettyprint-override"><code> #examples: lista = [&quot; &quot;, None, None, &quot;&quot;, &quot; &quot;] #print 'yes!' lista = [] #print 'yes!' lista = [&quot; &quot;, None, None, &quot;&quot;, &quot; s&quot;] #print 'no!' lista = [&quot;sun is bright&quot;, &quot;aa&quot;, &quot;Hello!&quot;] #print 'no!' if not any(lista) or all(x in ('', None) for x in lista): print(&quot;yes&quot;) else: print(&quot;no&quot;) </code></pre> <p>Considering that the lists that have only strings with empty spaces or have None values, are lists that, although they have information, this information is not really useful, so I needed to create a validation to identify if a list belongs to this type of list with no useful information</p> <p>I was having trouble putting together a program that prints &quot;yes&quot; if the list[] is empty, has only strings of empty spaces, or has only None elements, and prints &quot;no&quot; otherwise.</p> <p>What is wrong with my code?</p>
<python><python-3.x><string><list><validation>
2023-02-15 08:57:16
2
875
Matt095
75,457,205
5,684,405
Error while installing jupyterlab with poetry
<p>I can't install JupyterLab with poetry. I keep getting this error. I've tried deleting the cache by:</p> <pre><code> rm -rf ~/Library/Caches/pypoetry/artifacts/* </code></pre> <p>and</p> <pre><code>poetry cache clear pypi --all </code></pre> <p>but I keep getting the following error:</p> <pre><code>$ poetry add --group dev jupyterlab Using version ^3.6.1 for jupyterlab Updating dependencies Resolving dependencies... (2.6s) Package operations: 12 installs, 0 updates, 0 removals β€’ Installing y-py (0.5.5): Failed CalledProcessError Command '['/Users/mc/Library/Caches/pypoetry/virtualenvs/showcase-project-lis5iaDt-py3.9/bin/python', '-m', 'pip', 'install', '--use-pep517', '--disable-pip-version-check', '--isolated', '--no-input', '--prefix', '/Users/mc/Library/Caches/pypoetry/virtualenvs/showcase-project-lis5iaDt-py3.9', '--no-deps', '/Users/mc/Library/Caches/pypoetry/artifacts/be/5d/9c/38ed00c38e66f11b3f1295c0b4fa2565c954b8e0c8d63deac26e996efa/y_py-0.5.5.tar.gz']' returned non-zero exit status 1. at /opt/homebrew/Cellar/python@3.9/3.9.16/Frameworks/Python.framework/Versions/3.9/lib/python3.9/subprocess.py:528 in run 524β”‚ # We don't call process.wait() as .__exit__ does that for us.
525β”‚ raise 526β”‚ retcode = process.poll() 527β”‚ if check and retcode: β†’ 528β”‚ raise CalledProcessError(retcode, process.args, 529β”‚ output=stdout, stderr=stderr) 530β”‚ return CompletedProcess(process.args, retcode, stdout, stderr) 531β”‚ 532β”‚ The following error occurred when trying to handle this error: EnvCommandError Command ['/Users/mc/Library/Caches/pypoetry/virtualenvs/showcase-project-lis5iaDt-py3.9/bin/python', '-m', 'pip', 'install', '--use-pep517', '--disable-pip-version-check', '--isolated', '--no-input', '--prefix', '/Users/mc/Library/Caches/pypoetry/virtualenvs/showcase-project-lis5iaDt-py3.9', '--no-deps', '/Users/mc/Library/Caches/pypoetry/artifacts/be/5d/9c/38ed00c38e66f11b3f1295c0b4fa2565c954b8e0c8d63deac26e996efa/y_py-0.5.5.tar.gz'] errored with the following return code 1, and output: Processing /Users/mc/Library/Caches/pypoetry/artifacts/be/5d/9c/38ed00c38e66f11b3f1295c0b4fa2565c954b8e0c8d63deac26e996efa/y_py-0.5.5.tar.gz Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'error' error: subprocess-exited-with-error Γ— Preparing metadata (pyproject.toml) did not run successfully. β”‚ exit code: 1 ╰─&gt; [6 lines of output] Cargo, the Rust package manager, is not installed or is not on PATH. This package requires Rust and Cargo to compile extensions. Install it through the system's package manager or via https://rustup.rs/ Checking for Rust toolchain.... [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed Γ— Encountered error while generating package metadata. ╰─&gt; See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. 
at ~/Library/Application Support/pypoetry/venv/lib/python3.9/site-packages/poetry/utils/env.py:1540 in _run 1536β”‚ output = subprocess.check_output( 1537β”‚ command, stderr=subprocess.STDOUT, env=env, **kwargs 1538β”‚ ) 1539β”‚ except CalledProcessError as e: β†’ 1540β”‚ raise EnvCommandError(e, input=input_) 1541β”‚ 1542β”‚ return decode(output) 1543β”‚ 1544β”‚ def execute(self, bin: str, *args: str, **kwargs: Any) -&gt; int: The following error occurred when trying to handle this error: PoetryException Failed to install /Users/mc/Library/Caches/pypoetry/artifacts/be/5d/9c/38ed00c38e66f11b3f1295c0b4fa2565c954b8e0c8d63deac26e996efa/y_py-0.5.5.tar.gz at ~/Library/Application Support/pypoetry/venv/lib/python3.9/site-packages/poetry/utils/pip.py:58 in pip_install 54β”‚ 55β”‚ try: 56β”‚ return environment.run_pip(*args) 57β”‚ except EnvCommandError as e: β†’ 58β”‚ raise PoetryException(f&quot;Failed to install {path.as_posix()}&quot;) from e 59β”‚ </code></pre> <p>Env info</p> <pre><code>$ poetry env info Virtualenv Python: 3.9.16 Implementation: CPython Path: /Users/mc/Library/Caches/pypoetry/virtualenvs/showcase-project-lis5iaDt-py3.9 Executable: /Users/mc/Library/Caches/pypoetry/virtualenvs/showcase-project-lis5iaDt-py3.9/bin/python Valid: True System Platform: darwin OS: posix Python: 3.9.16 Path: /opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9 Executable: /opt/homebrew/opt/python@3.9/Frameworks/Python.framework/Versions/3.9/bin/python3.9 </code></pre> <p>How can I solve this issue with <code>y_py</code> package? Do I really need to install rust to use jupyterlab with poetry as stated in the message?</p>
<python><jupyter-lab><python-poetry>
2023-02-15 08:38:14
2
2,969
mCs
75,456,976
5,987,698
How can I compute the gradient of multiple outputs with respect to a batch of inputs in a single backward pass in PyTorch?
<p>I am performing multi-label image classification in PyTorch, and would like to compute the gradients of all outputs at ground truth labels for each input with respect to the input. I would preferably like to do this in a single backward pass for a batch of inputs.</p> <p>For example:</p> <pre><code>inputs = torch.randn((4,3,224,224)) # Batch of 4 inputs targets = torch.tensor([[1,0,1],[1,0,0],[0,0,1],[1,1,0]]) # Labels for each input outputs = model(inputs) # 4 x 3 vector </code></pre> <p>Here, I want to find the gradient of:</p> <ul> <li><code>output[0,0]</code> and <code>output[0,2]</code> with respect to <code>input[0]</code></li> <li><code>output[1,0]</code> with respect to <code>input[1]</code></li> <li><code>output[2,2]</code> with respect to <code>input[2]</code></li> <li><code>output[3,0]</code> and <code>output[3,1]</code> with respect to <code>input[3]</code></li> </ul> <p><strong>Is there any way to do this in a single backward pass?</strong></p> <hr /> <p>If my outputs were one-hot, i.e., there was only one label per class, I could use:</p> <pre><code>gt_classes = torch.where(targets==1)[1] gather_outputs = torch.gather(outputs, 1, gt_classes.unsqueeze(-1)) grads = torch.autograd.grad(torch.unbind(gather_outputs), inputs)[0] # 4 x 3 x 224 x 224 </code></pre> <p>This gives gradient of <code>output[i,gt_classes[i]]</code> with respect to <code>input[i]</code>.</p> <p>For my case, it looks like the <code>is_grads_batched</code> argument from <code>torch.autograd.grad</code> might be relevant, but it's not very clear how it is to be used.</p>
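Assuming (as in the setup above) that <code>output[i]</code> depends only on <code>input[i]</code>, one single-backward-pass option is to backpropagate the sum of the target-masked outputs: the cross-sample terms are structurally zero, so each <code>grads[i]</code> is exactly the sum of the requested per-label gradients for sample i. A sketch with a toy linear model standing in for <code>model</code> (the small 8x8 spatial size is an illustrative assumption):

```python
import torch

torch.manual_seed(0)
inputs = torch.randn(4, 3, 8, 8, requires_grad=True)   # small stand-in for 4x3x224x224
targets = torch.tensor([[1, 0, 1], [1, 0, 0], [0, 0, 1], [1, 1, 0]], dtype=torch.float32)

# Toy per-sample model: flatten each input and apply one linear map -> 4 x 3 outputs
weight = torch.randn(3, 3 * 8 * 8)
outputs = inputs.flatten(1) @ weight.T

# Single backward pass: gradient of sum_i sum_j targets[i,j] * outputs[i,j] w.r.t. inputs.
# Because output[i] only depends on input[i], grads[i] equals the sum over the
# ground-truth labels j of d outputs[i,j] / d input[i].
grads = torch.autograd.grad((outputs * targets).sum(), inputs)[0]
print(grads.shape)  # torch.Size([4, 3, 8, 8])
```

Note this yields the sum of the per-label gradients for each sample; if each label's gradient is needed separately rather than their sum, that is where `is_grads_batched` (vmapping the backward pass over several grad-output vectors) would come in.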
<python><pytorch><backpropagation><multilabel-classification><autograd>
2023-02-15 08:07:18
0
8,627
GoodDeeds
75,456,586
1,942,868
Django URL that accepts requests both with and without a parameter
<p>I want to accept URLs both with and without parameters, such as <code>/top/</code> and <code>/top/1</code>.</p> <p>I tried a few patterns:</p> <pre><code>path('top/&lt;int:pk&gt;/', views.top, name='top_edit'), path('top/(&lt;int:pk&gt;)/', views.top, name='top_edit'), path('top/(&lt;int:pk&gt;)/$', views.top, name='top_edit'), def top(request: HttpRequest,pk = None) -&gt; HttpResponse: return render(request, 'index.html', context) </code></pre> <p>It accepts <code>/top/1</code>; however, <code>/top/</code> is not accepted.</p> <p>How can I make it work?</p>
<python><django><django-rest-framework><django-views><django-urls>
2023-02-15 07:21:14
3
12,599
whitebear
75,456,244
17,835,656
How can I create a CSR with a specific configuration using Python?
<p>i need to create a CSR with specific configuration using python</p> <p>this is my configuration :</p> <pre><code>oid_section = OIDs [ OIDs ] certificateTemplateName= 1.3.6.1.4.1.311.20.2 [ req ] default_bits = 2048 emailAddress = test@gmail.com req_extensions = v3_req x509_extensions = v3_ca prompt = no default_md = sha256 req_extensions = req_ext distinguished_name = dn [ dn ] C=SA OU=3111111117 O=shesh CN = tat-1 [ v3_req ] basicConstraints = CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment [req_ext] certificateTemplateName = ASN1:PRINTABLESTRING:PREZATCA-Code-Signing subjectAltName = dirName:alt_names [alt_names] SN=1-Device|2-234|3-mohamm UID=30000000000000003 title=1000 registeredAddress=Zatca 12 businessCategory=Technology </code></pre> <p>i can create a CSR with this configuration using OpenSSL</p> <p>but i need to Create a CSR with this configuration using Python.</p> <p>i tried to do it using this code:</p> <pre class="lang-py prettyprint-override"><code> from OpenSSL.SSL import FILETYPE_PEM from OpenSSL.crypto import dump_certificate_request, dump_privatekey,dump_publickey, PKey, TYPE_DSA, X509Req # create public/private key key = PKey() key.generate_key(TYPE_DSA,1028) print(key.to_cryptography_key()) # Generate CSR req = X509Req() req.get_subject().CN = 'localhost' req.get_subject().O = 'XYZ Widgets Inc' req.get_subject().OU = 'IT Department' req.get_subject().L = 'Seattle' req.get_subject().ST = 'Washington' req.get_subject().C = 'US' req.get_subject().emailAddress = 'e@example.com' req.set_pubkey(key) req.sign(key, 'sha256') with open(&quot;csr_testo.pem&quot;, 'wb+') as f: f.write(dump_certificate_request(FILETYPE_PEM, req)) with open(&quot;Private_key_testo.pem&quot;, 'wb+') as f: f.write(dump_privatekey(FILETYPE_PEM, key)) with open(&quot;public_key_testo.pem&quot;, 'wb+') as f: f.write(dump_publickey(FILETYPE_PEM, key)) </code></pre> <p>but it does not take all of my configuration.</p> <pre><code>[alt_names] 
SN=1-Device|2-234|3-mohamm UID=30000000000000003 title=1000 registeredAddress=Zatca 12 businessCategory=Technology </code></pre> <p>these configurations are very important to include them in the CSR</p>
<python><openssl><cryptography><private-key><csr>
2023-02-15 06:35:03
1
721
Mohammed almalki
75,456,242
3,386,779
Hiding the test case execution status as well as the report path
<p>I have created a Robot Framework tool with one test case. Now it's showing the test case execution status &quot;Pass&quot; as well as the report path. I need to turn off the test case execution status as well as the report path.</p> <p><a href="https://i.sstatic.net/3nJOm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3nJOm.png" alt="enter image description here" /></a></p> <p>I expect to turn off the below details showing in the terminal.</p>
<python><selenium-webdriver><robotframework>
2023-02-15 06:34:42
1
7,263
user3386779
75,456,096
1,942,868
How to get the authentication information in ModelViewSet?
<p>I am using rest framework with <code>ModelViewSet</code>:</p> <pre><code>class DrawingViewSet(viewsets.ModelViewSet): queryset = m.Drawing.objects.all() serializer_class = s.DrawingSerializer filterset_fields = ['user'] def list(self, request): queryset = m.Drawing.objects.all() serializer = s.DrawingSerializer(queryset, many=True) return Response(serializer.data) </code></pre> <p>With this script, I can use a filter such as <code>/?user=1</code>.</p> <p>However, this <code>user=1</code> is not necessary when authentication information is used in the script. (Because each user needs to fetch only the data belonging to them.)</p> <p>How can I get the authentication data in <code>list</code>?</p>
<python><django><django-rest-framework><django-viewsets>
2023-02-15 06:15:26
2
12,599
whitebear
75,456,070
11,402,025
Can I have the same FastAPI GET endpoints accepting two different types of input?
<p>I want to use a GET endpoint <code>/Users</code> with body parameters being either a <code>person</code> or <code>country</code>. I do not want to change the request call signature, but the parameters I am sending will be changed. Is this possible ?</p> <pre><code>class PersonInfo(AppUserInfo): id: int person: str class CountryInfo(AppUserInfo): id: int country: str @app.get(&quot;/Users&quot;) def get_alias_api(personinfo: PersonInfo): return {&quot;data&quot;: personinfo} @app.get(&quot;/Users&quot;) def get_alias_api(countryinfo: CountryInfo): return {&quot;data&quot;: countryinfo} </code></pre>
<python><fastapi>
2023-02-15 06:11:30
1
1,712
Tanu
75,455,995
1,019,129
Is an indexed slice a view?
<p>In torch, slicing creates a view, i.e. the data is not copied into a new tensor; it acts as an alias.</p> <pre><code> b = a[3:10, 2:5 ] </code></pre> <p>My understanding is that this is not the case for an indexed slice, e.g.</p> <pre><code> b = a[[1,2,3] : [5,11]] </code></pre> <p>Is this correct?</p> <p>And second, is there a module that mimics a view, i.e. internally holds the indexes but accesses the original tensor, acting as a sort of proxy?</p> <p>Something like this, but more general:</p> <pre><code>class IXView: def __init__(self, ixs, ten): self.ixs = ixs self.ten = ten def __getitem__(self, rows) : return self.ten[self.ixs[rows],:] </code></pre>
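For reference, NumPy follows the same rule as PyTorch here: basic slicing returns a view that shares memory, while advanced (integer-array) indexing returns a copy. A minimal sketch of the distinction:

```python
import numpy as np

a = np.arange(12).reshape(3, 4)

# Basic slicing: returns a view that shares memory with `a`
view = a[1:3, 0:2]
view[0, 0] = 99
print(a[1, 0])   # the original array is modified

# Advanced (integer-array) indexing: returns a copy
copy = a[[0, 2], :]
copy[0, 0] = -1
print(a[0, 0])   # the original is untouched
```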
<python><indexing><view><pytorch>
2023-02-15 05:59:30
1
7,536
sten
75,455,721
15,358,800
Explode a string of arbitrary length equally across the next empty rows in pandas
<p>Let's say I have a df like this:</p> <pre><code> string some_col 0 But were so TESSA tell me a little bit more t ... 10 1 15 2 14 3 Some other text xxxxxxxxxx 20 </code></pre> <p>How can I split the <code>string</code> column so that the long string is exploded into roughly equal chunks across the empty cells? It should look like this after fitting:</p> <pre><code> string some_col 0 But were so TESSA tell me . 10 1 little bit more t seems like 15 2 you pretty upset 14 </code></pre> <p>Reproducible:</p> <pre><code>import pandas as pd data = [['But were so TESSA tell me a you pretty upset.', 10], ['', 15], ['', 14]] df = pd.DataFrame(data, columns=['string', 'some_col']) print(df) </code></pre> <p>I have no idea how to even get started. I'm looking for execution steps so that I can implement it on my own; any reference would be great!</p>
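One way to approach the splitting step (a sketch of just that step, not the full pandas assignment): compute a target width from the number of available cells and let <code>textwrap.wrap</code> break the string at word boundaries. The row count here is taken from the question's example; the ceiling-division width is an illustrative choice.

```python
import textwrap

s = "But were so TESSA tell me a you pretty upset."
n_rows = 3  # number of cells available (the filled row plus the empty ones)

# Aim for roughly equal chunks; wrap() breaks at word boundaries,
# so it may produce slightly more chunks than n_rows.
width = -(-len(s) // n_rows)  # ceiling division
chunks = textwrap.wrap(s, width)
print(chunks)
```

The resulting chunks could then be padded or trimmed to the cell count and assigned back to <code>df['string']</code>.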
<python><pandas>
2023-02-15 05:15:48
3
4,891
Bhargav
75,455,659
2,091,585
Converting a Pandas series of strings of '%d:%H:%M:%S' format into datetime format
<p>I have a Pandas series which consists of strings of '169:21:5:24', '54:9:19:29', and so on which stand for 169 days 21 hours 5 minutes 24 seconds and 54 days 9 hours 19 minutes 29 seconds, respectively.</p> <p>I want to convert them to datetime object (preferable) or just integers of seconds.</p> <p>The first try was</p> <pre><code>pd.to_datetime(series1, format = '%d:%H:%M:%S') </code></pre> <p>which failed with an error message</p> <pre><code>time data '169:21:5:24' does not match format '%d:%H:%M:%S' (match) </code></pre> <p>The second try</p> <pre><code>pd.to_datetime(series1) </code></pre> <p>also failed with</p> <pre><code>expected hh:mm:ss format </code></pre> <p>The first try seems to work if all the 'days' are less than 30 or 31 days, but my data includes 150 days, 250 days etc and with no month value.</p> <p>Finally,</p> <pre><code>temp_list1 = [[int(subitem) for subitem in item.split(&quot;:&quot;)] for item in series1] temp_list2 = [item[0] * 24 * 3600 + item[1] * 3600 + item[2] * 60 + item[3] for item in temp_list1] </code></pre> <p>successfully converted the Series into a list of seconds, but this is lengthy.</p> <p>I wonder if there is a Pandas.Series.dt or datetime methods that can deal with such type of data.</p>
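As a point of comparison, the manual conversion above can be collapsed into one small function and applied with <code>Series.map</code>; a timedelta (rather than a datetime, which needs a calendar date) is the natural target for day counts like 169. A stdlib-only sketch (the function name is illustrative):

```python
def dhms_to_seconds(s):
    """Convert a 'D:H:M:S' string (days may exceed 31) to total seconds."""
    d, h, m, sec = map(int, s.split(":"))
    return ((d * 24 + h) * 60 + m) * 60 + sec

print(dhms_to_seconds("169:21:5:24"))  # 14677524
print(dhms_to_seconds("54:9:19:29"))   # 4699169
```

With pandas, `series1.map(dhms_to_seconds)` gives the seconds as integers, and `pd.to_timedelta(series1.map(dhms_to_seconds), unit="s")` turns them into proper timedeltas.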
<python><pandas><datetime><time><series>
2023-02-15 05:02:28
2
2,238
user67275
75,455,571
6,032,140
Exact sub-string match inside a string in Python
<ol> <li><p>I have a string as given below</p> <pre><code>dit ='{p_d: {a:3, what:3.6864e-05, s:lion, sst:{c:-20, b:6, p:panther}}}' </code></pre> </li> <li><p>And I have a list of elements which I wanted to search in the above string and replace them with double quotes.</p> <pre><code>['', 'p_d', '', '', 'a', '3', '', 'what', '3.6864e-05', '', 's', 'lion', '', 'sst', '', 'c', '-20', '', 'b', '6', '', 'p', 'panther', '', '', ''] </code></pre> </li> <li><p>If I do search and replace using simple .replace it doesn't work as expected and can understand</p> <pre><code>import yaml import ast import json import re rep = {&quot;:&quot;: &quot; &quot;, &quot;'&quot;:&quot; &quot;, &quot;{&quot;:&quot; &quot;, &quot;}&quot;:&quot; &quot;, &quot;,&quot;: &quot; &quot;} quot = &quot;\&quot;&quot; dit = '{p_d: {a:3, what:3.6864e-05, s:lion, sst:{c:-20, b:6, p:panther}}}' def replace_all(text, dic): for i, j in dic.items(): text = text.replace(i, j) print(&quot;replace_all: text {}&quot;.format(text)) return text element_list_temp = replace_all(dit, rep) element_list = element_list_temp.split(&quot; &quot;) for z in element_list: if z != &quot;&quot; and z in dit: dit = dit.replace(z, quot+z+quot) print(dit) Output: {&quot;&quot;p&quot;_d&quot;: {&quot;a&quot;:&quot;3&quot;, wh&quot;a&quot;t:&quot;3&quot;.&quot;6&quot;8&quot;6&quot;4e-05, &quot;s&quot;:&quot;lion&quot;, &quot;s&quot;&quot;s&quot;t:{&quot;c&quot;:&quot;-20&quot;, &quot;b&quot;:&quot;6&quot;, &quot;p&quot;:&quot;p&quot;&quot;a&quot;nther}}} Desired Output: '{&quot;p_d&quot;: {&quot;a&quot;:&quot;3&quot;, &quot;what&quot;:&quot;3.6864e-05&quot;, &quot;s&quot;:&quot;lion&quot;, &quot;sst&quot;:{&quot;c&quot;:&quot;-20&quot;, &quot;b&quot;:&quot;6&quot;, &quot;p&quot;:&quot;panther&quot;}}}' </code></pre> </li> <li><p>How to exactly match the string in the list one by one and replace them with double quotes.</p> </li> </ol> <p>Updates:</p> <ol> <li><p>Different input</p> <pre><code>import yaml import ast import json 
import re rep = {&quot;:&quot;: &quot; &quot;, &quot;'&quot;:&quot; &quot;, &quot;{&quot;:&quot; &quot;, &quot;}&quot;:&quot; &quot;, &quot;,&quot;: &quot; &quot;} quot = &quot;\&quot;&quot; # dit = '{p_d: {a:3, what:3.6864e-05, s:lion, sst:{c:-20, b:6, p:panther}}}' dit = &quot;'{p_d: '{a:3, what:3.6864e-05, s:lion, vec_mode:'{2.5, -2.9, 3.4, 5.6, -8.9, -5.67, 2, 2, 2, 2, 5.4, 2, 2, 6.545, 2, 2}, sst:'{c:-20, b:6, p:panther}}}&quot; seps = &quot;:'{}, &quot; val_strings = re.findall(f&quot;[^{seps}]+&quot;, dit) print(&quot;val_strings: {}&quot;.format(val_strings)) sep_strings = re.findall(f&quot;[{seps}]+&quot;, dit) print(&quot;sep_strings: {}&quot;.format(sep_strings)) seq = [f'{b}&quot;{v}&quot;' for b, v in zip(sep_strings, val_strings)] + sep_strings[-1:] print(&quot;sep: {}&quot;.format(seq)) dit = &quot;&quot;.join(seq) print(dit) Dict = json.loads(dit) print(Dict) result = yaml.dump(Dict) print(result) print(result.replace(&quot;'&quot;,&quot;&quot;)) </code></pre> </li> <li><p>Output from above code</p> </li> </ol> <p>Think its failing because of the key:value pair of the dictionary. 
Checking at my end as well if there is a way to print them as arrays.</p> <pre><code> val_strings: ['p_d', 'a', '3', 'what', '3.6864e-05', 's', 'lion', 'vec_mode', '2.5', '-2.9', '3.4', '5.6', '-8.9', '-5.67', '2', '2', '2', '2', '5.4', '2', '2', '6.545', '2', '2', 'sst', 'c', '-20', 'b', '6', 'p', 'panther'] sep_strings: [&quot;'{&quot;, &quot;: '{&quot;, ':', ', ', ':', ', ', ':', ', ', &quot;:'{&quot;, ', ', ', ', ', ', ', ', ', ', ', ', ', ', ', ', ', ', ', ', ', ', ', ', ', ', ', ', ', ', '}, ', &quot;:'{&quot;, ':', ', ', ':', ', ', ':', '}}}'] sep: ['\'{&quot;p_d&quot;', ': \'{&quot;a&quot;', ':&quot;3&quot;', ', &quot;what&quot;', ':&quot;3.6864e-05&quot;', ', &quot;s&quot;', ':&quot;lion&quot;', ', &quot;vec_mode&quot;', ':\'{&quot;2.5&quot;', ', &quot;-2.9&quot;', ', &quot;3.4&quot;', ', &quot;5.6&quot;', ', &quot;-8.9&quot;', ', &quot;-5.67&quot;', ', &quot;2&quot;', ', &quot;2&quot;', ', &quot;2&quot;', ', &quot;2&quot;', ', &quot;5.4&quot;', ', &quot;2&quot;', ', &quot;2&quot;', ', &quot;6.545&quot;', ', &quot;2&quot;', ', &quot;2&quot;', '}, &quot;sst&quot;', ':\'{&quot;c&quot;', ':&quot;-20&quot;', ', &quot;b&quot;', ':&quot;6&quot;', ', &quot;p&quot;', ':&quot;panther&quot;', '}}}'] '{&quot;p_d&quot;: '{&quot;a&quot;:&quot;3&quot;, &quot;what&quot;:&quot;3.6864e-05&quot;, &quot;s&quot;:&quot;lion&quot;, &quot;vec_mode&quot;:'{&quot;2.5&quot;, &quot;-2.9&quot;, &quot;3.4&quot;, &quot;5.6&quot;, &quot;-8.9&quot;, &quot;-5.67&quot;, &quot;2&quot;, &quot;2&quot;, &quot;2&quot;, &quot;2&quot;, &quot;5.4&quot;, &quot;2&quot;, &quot;2&quot;, &quot;6.545&quot;, &quot;2&quot;, &quot;2&quot;}, &quot;sst&quot;:'{&quot;c&quot;:&quot;-20&quot;, &quot;b&quot;:&quot;6&quot;, &quot;p&quot;:&quot;panther&quot;}}} Traceback (most recent call last): File &quot;./ditoyaml_new.py&quot;, line 36, in &lt;module&gt; Dict = json.loads(dit) File &quot;/usr/lib64/python3.6/json/__init__.py&quot;, line 354, in loads return _default_decoder.decode(s) File 
&quot;/usr/lib64/python3.6/json/decoder.py&quot;, line 339, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File &quot;/usr/lib64/python3.6/json/decoder.py&quot;, line 357, in raw_decode raise JSONDecodeError(&quot;Expecting value&quot;, s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) Expected Output with the json.load and dump as dictionary and if the key: value dictionary pair isnt available and put something like list or array. Checking at my end as well. p_d: a: 3 s: lion sst: b: 6 c: -20 p: panther vec_mode: [-8.9, -5.67, -2.9, 2, 2.5, 3.4, 5.4, 5.6, 6.545] what: 3.6864e-05 </code></pre>
<python><string>
2023-02-15 04:41:57
1
1,163
Vimo
75,455,405
10,534,633
Iterating over a dictionary or its items
<p>What's a better practice and what's faster:</p> <ul> <li>iterating over dictionary (practically by its keys):<br /> <code>for key in dictionary: ...</code></li> <li>or iterating over its items:<br /> <code>for key, value in dictionary.items(): ...</code>?</li> </ul> <p>Using <code>dict.items()</code> seems to be a more clean way to iterate over a dictionary, but isn't it slower because of creating another object (<code>dict.items</code>) (I suppose it's negligible, but I had to ask)? Or maybe it's already done with initializing the dictionary?<br /> Also, in the first way, accessing a value by its key shouldn't affect the efficiency because the operation is <em>O(1)</em>.</p>
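Both loops visit the same pairs, and since Python 3 <code>dict.items()</code> returns a lightweight view object, created in O(1) without copying the dictionary, so the overhead concern is largely moot. A quick sketch:

```python
d = {"a": 1, "b": 2, "c": 3}

# Style 1: iterate keys, look each value up (each lookup is O(1))
via_keys = {k: d[k] for k in d}

# Style 2: iterate the items view directly
via_items = {k: v for k, v in d.items()}

print(via_keys == via_items)  # True

# The items view is not a copied list: it reflects later mutations
items = d.items()
d["z"] = 26
print(("z", 26) in items)  # True
```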
<python><performance><dictionary>
2023-02-15 04:11:05
1
1,230
maciejwww
75,455,187
7,394,039
Plotly Scatter: Moving the colorbar to the left not working
<p>Here's the piece of code I am using (This plots a scatter plot with a colorbar to the right of the plot)</p> <pre><code>import plotly.express as px data_y = [758, 742.19, 731.57, 728.36, 718.76, 707.56] data_x = [10, 9, 8, 7, 6, 5] data_x_labels = ['A', 'B', 'C', 'D', 'E', 'F'] fig = px.scatter(x=data_x, y=data_y, trendline=&quot;ols&quot;, trendline_color_override=&quot;black&quot;,) fig.update_traces(marker=dict( size=10, color=data_y, colorscale='YlGn', showscale=True, reversescale=False, opacity=1), ) fig.update_coloraxes(colorscale=&quot;YlGn&quot;) fig.update_xaxes(tickvals=data_x, ticktext=data_x_labels) fig.update_layout(width=650, height=600, margin=dict(l=50, r=50, b=50, t=50)) fig.show() </code></pre> <p>What I have done is added this:</p> <pre><code>fig.update_coloraxes(colorbar_xanchor='left') </code></pre> <p>to move the colorbar to the left. But it does not work and the colorbar stays at the right. What would be the correct way to move the colorbar to the left?</p>
<python><python-3.x><plotly><plotly-express>
2023-02-15 03:15:06
1
1,270
Nikhil Mishra
75,455,115
19,425,874
"Object has no attribute" error in Python web scraping
<p>I'm looking to scrape a set of URLs - I want to visit each link on the given URL, and return the player's pos1 pos2 and profile details.</p> <p>I have two sets of URLs I'm looking at, G League players (which is working perfectly) and International Players (which I'm completely stuck on).</p> <p>The sites seem to be almost identical, but not sure what's going on.</p> <p><strong>WORKING G LEAGUE SCRIPT:</strong></p> <pre><code>import requests from bs4 import BeautifulSoup import gspread gc = gspread.service_account(filename='creds.json') sh = gc.open_by_key('SSID') worksheet = sh.get_worksheet(0) # AddValue = [&quot;Test&quot;, 25, &quot;Test2&quot;] # worksheet.insert_row(AddValue, 3) def get_links(url): data = [] req_url = requests.get(url) soup = BeautifulSoup(req_url.content, &quot;html.parser&quot;) for td in soup.find_all('td', {'data-th': 'Player'}): a_tag = td.a name = a_tag.text player_url = a_tag['href'] pos = td.find_next_sibling('td').text print(f&quot;Getting {name}&quot;) req_player_url = requests.get( f&quot;https://basketball.realgm.com{player_url}&quot;) soup_player = BeautifulSoup(req_player_url.content, &quot;html.parser&quot;) div_profile_box = soup_player.find(&quot;div&quot;, class_=&quot;profile-box&quot;) row = {&quot;Name&quot;: name, &quot;URL&quot;: player_url, &quot;pos_option1&quot;: pos} row['pos_option2'] = div_profile_box.h2.span.text for p in div_profile_box.find_all(&quot;p&quot;): try: key, value = p.get_text(strip=True).split(':', 1) row[key.strip()] = value.strip() except: # not all entries have values pass data.append(row) return data urls = [ 'https://basketball.realgm.com/dleague/players/2022', 'https://basketball.realgm.com/dleague/players/2021', 'https://basketball.realgm.com/dleague/players/2020', 'https://basketball.realgm.com/dleague/players/2019', 'https://basketball.realgm.com/dleague/players/2018', ] res = [] for url in urls: print(f&quot;Getting: {url}&quot;) data = get_links(url) res = [*res, *data] if res != []: 
header = list(res[0].keys()) values = [ header, *[[e[k] if e.get(k) else &quot;&quot; for k in header] for e in res]] worksheet.append_rows(values, value_input_option=&quot;USER_ENTERED&quot;) </code></pre> <p>Like I stated, this prints the positions along with the rest of the profile details. I'm trying to recreate for a different set of URLs, but hitting the error:</p> <p><a href="https://i.sstatic.net/aIuGb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aIuGb.png" alt="error" /></a></p> <p>This is the script I'm stuck on, any thoughts?</p> <pre><code>import requests from bs4 import BeautifulSoup import gspread gc = gspread.service_account(filename='creds.json') sh = gc.open_by_key('1DpasSS8yC1UX6WqAbkQ515BwEEjdDL-x74T0eTW8hLM') worksheet = sh.get_worksheet(0) # AddValue = [&quot;Test&quot;, 25, &quot;Test2&quot;] # worksheet.insert_row(AddValue, 3) def get_links2(url): data = [] req_url = requests.get(url) soup = BeautifulSoup(req_url.content, &quot;html.parser&quot;) for td in soup.select('td.nowrap'): a_tag = td.a if a_tag: name = a_tag.text player_url = a_tag['href'] pos = td.find_next_sibling('td').text print(f&quot;Getting {name}&quot;) req_player_url = requests.get( f&quot;https://basketball.realgm.com{player_url}&quot;) soup_player = BeautifulSoup(req_player_url.content, &quot;html.parser&quot;) div_profile_box = soup_player.find(&quot;div&quot;, class_=&quot;profile-box&quot;) row = {&quot;Name&quot;: name, &quot;URL&quot;: player_url, &quot;pos_option1&quot;: pos} row['pos_option2'] = div_profile_box.h2.span.text for p in div_profile_box.find_all(&quot;p&quot;): try: key, value = p.get_text(strip=True).split(':', 1) row[key.strip()] = value.strip() except: # not all entries have values pass data.append(row) return data urls2 = 
[&quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/player/All/desc&quot;,&quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/2&quot;, &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/3&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/4&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/5&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/6&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/7&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/8&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/9&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/10&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/11&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/12&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/13&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/14&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/15&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/16&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/17&quot;, # 
&quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/18&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/19&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/20&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/21&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/22&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/23&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/24&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/25&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/26&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/27&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/28&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/29&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/30&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/31&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/32&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/33&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/34&quot;, # 
&quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/35&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/36&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/37&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/38&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/39&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/40&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/41&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/42&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/43&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/44&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/45&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/46&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/47&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/48&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/49&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/50&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/51&quot;, # 
&quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/52&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/53&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/54&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/55&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/56&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/57&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/58&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/59&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/60&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/61&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/62&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/63&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/64&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/65&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/66&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/67&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/68&quot;, # 
&quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/69&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/70&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/71&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/72&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/73&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/74&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/75&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/76&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/77&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/78&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/79&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/80&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/81&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/82&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/83&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/84&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/85&quot;, # 
&quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/86&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/87&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/88&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/89&quot;, # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/90&quot;, # # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/91&quot;, # # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/92&quot;, # # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/93&quot;, # # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/94&quot;, # # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/95&quot;, # # &quot;https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/96&quot;] ] res2 = [] for url in urls2: data = get_links2(url) res2 = [*res2, *data] # print(res2) if res2 != []: header = list(res2[0].keys()) values = [ header, *[[e[k] if e.get(k) else &quot;&quot; for k in header] for e in res2]] worksheet.append_rows(values, value_input_option=&quot;USER_ENTERED&quot;) </code></pre>
<python><beautifulsoup><python-requests><python-requests-html>
2023-02-15 02:55:19
1
393
Anthony Madle
75,455,067
1,897,151
getting the roi section dynamically ignoring resize
<p>i have a set of resolution for opencv and pytesseract to detect which is the standard 1920x1080</p> <p>but after getting the image and resize it to 1920x1080, i will get the ROI around center square of the image</p> <pre><code>img = cv2.imread(&quot;test.png&quot;) height, width, channels = img.shape print(f'original size / w = {width}, h = {height}') img = image_resize(img, width=1920, height=1080) height, width, channels = img.shape print(f'after resize / w = {width}, h = {height}') x, y, w, h = 466, 203, 978, 760 roi = img[y:y+h, x:x+w] </code></pre> <p>something like this to crop out the image, but i found out if my image is not native 1920x1080, which either is from bigger or smaller resolution and resize to 1920x1080, this fixed roi x,y,w,h is not working well. i would like to know a better way to dynamically scale the ROI values from different resolution.</p> <p>using this resize method i found in stackoverflow as well.</p> <pre><code>def image_resize(image, width = None, height = None, inter = cv2.INTER_AREA): # initialize the dimensions of the image to be resized and # grab the image size dim = None (h, w) = image.shape[:2] # if both the width and height are None, then return the # original image if width is None and height is None: return image # check to see if the width is None if width is None: # calculate the ratio of the height and construct the # dimensions r = height / float(h) dim = (int(w * r), height) # otherwise, the height is None else: # calculate the ratio of the width and construct the # dimensions r = width / float(w) dim = (width, int(h * r)) # resize the image resized = cv2.resize(image, dim, interpolation = inter) # return the resized image return resized </code></pre>
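Since the question's `image_resize` preserves aspect ratio, the output is not always exactly 1920x1080; one way to keep the crop consistent is to treat the hard-coded ROI as being defined at a 1920x1080 reference and rescale it by the actual post-resize dimensions. A minimal sketch (`scale_roi` is a hypothetical helper, not part of OpenCV):

```python
def scale_roi(x, y, w, h, ref_size=(1920, 1080), actual_size=(1920, 1080)):
    """Rescale an ROI defined at ref_size to actual_size (hypothetical helper)."""
    sx = actual_size[0] / ref_size[0]
    sy = actual_size[1] / ref_size[1]
    return round(x * sx), round(y * sy), round(w * sx), round(h * sy)

# The ROI from the question, applied to an image that came out 1280x720:
x, y, w, h = scale_roi(466, 203, 978, 760, actual_size=(1280, 720))
# roi = img[y:y+h, x:x+w]  # same crop expression as before
```

In the asker's code, `actual_size` would be `(width, height)` as read from `img.shape` after the resize.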
<python><opencv><tesseract><python-tesseract>
2023-02-15 02:46:25
0
503
user1897151
75,454,835
1,334,858
VSCode Python IntelliSense Not Working on Remote Machine
<p>I use VSCode &quot;Remote-SSH&quot; to access a VM to do all my coding.</p> <p>On the VM I have installed the &quot;Python&quot; and &quot;Pylance&quot; extensions hoping to get IntelliSense autocomplete.</p> <p>My python interpreter is not in the &quot;typical&quot; <code>/usr/bin/python3</code> location. So I have the following setting in my VM's setting.json:</p> <pre><code>&quot;python.defaultInterpreterPath&quot;: &quot;/some/path/to/3.6.8/python3&quot;, </code></pre> <p>I have successfully enabled python lint. Although my IntelliSense autocomplete isn't working at all. When I type I don't see any pulldown menu or error messages.</p> <p>What could I possibly be missing?</p>
<python><python-3.x><vscode-extensions><vscode-remote>
2023-02-15 01:51:58
0
1,955
user1334858
75,454,637
9,273,406
How to fix python uvicorn server returning "426 Upgrade Required"?
<p><strong>I have a python uvicorn app which runs fine locally for my colleagues but not for me</strong>. After running <code>python src/main.py</code>, the server connects to database and loads perfectly:</p> <pre><code>INFO | uvicorn.server:serve:75 - Started server process [49720] INFO | uvicorn.lifespan.on:startup:47 - Waiting for application startup. INFO | databases.core:connect:83 - Connected to database postgresql+asyncpg://localhost:5432/faethm_core INFO | uvicorn.lifespan.on:startup:61 - Application startup complete. INFO | uvicorn.server:_log_started_message:209 - Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit) </code></pre> <p>But the server doesn't take any requests. No matter where I send it from, either <code>curl</code> command, browser request, or an API tool such as Insomnia. I always get the same response 'Upgrade Required'</p> <p>For example a <code>curl</code> command:</p> <pre><code>curl --request GET \ --url http://0.0.0.0:8000/health \ --header 'Content-Type: application/json' </code></pre> <p>would return</p> <pre><code>Upgrade Required </code></pre> <p><strong>Things I've tried but failed</strong></p> <ul> <li>Restarting my server and also my computer</li> <li>Trying to send requests from different browsers and tools</li> <li>Adding headers to upgrade the protocol to HTTP/2.0. The docs online aren't clear on how to do this</li> <li>Changing the http connection to https</li> </ul> <p>Does anyone know where this issue is coming from and how to fix it?</p>
<python><http><uvicorn>
2023-02-15 01:04:48
1
4,370
azizbro
75,454,557
1,306,666
Safe way to create an "out of band" alternative value for a function argument in Python
<p>I have a function like the following:</p> <pre><code>def do_something(thing): pass def foo(everything, which_things): &quot;&quot;&quot;Do stuff to things. Args: everything: Dict of things indexed by thing identifiers. which_things: Iterable of thing identifiers. Something is only done to these things. &quot;&quot;&quot; for thing_identifier in which_things: do_something(everything[thing_identifier]) </code></pre> <p>But I want to extend it so that a caller can <code>do_something</code> with everything passed in <code>everything</code> without having to provide a list of identifiers. (As a motivation, if <code>everything</code> was an opaque container whose keys weren't accessible to library users but only library internals. In this case, <code>foo</code> can access the keys but the caller can't. Another motivation is error prevention: having a constant with obvious semantics avoids a caller mistakenly passing the wrong set of identifiers in.) So one thought is to have a constant <code>USE_EVERYTHING</code> that can be passed in, like so:</p> <pre><code>def foo(everything, which_things): &quot;&quot;&quot;Do stuff to things. Args: everything: Dict of things indexed by thing identifiers. which_things: Iterable of thing identifiers. Something is only done to these things. Alternatively pass USE_EVERYTHING to do something to everything. &quot;&quot;&quot; if which_things == USE_EVERYTHING: which_things = everything.keys() for thing_identifier in which_things: do_something(everything[thing_identifier]) </code></pre> <p>What are some advantages and limitations of this approach? 
How can I define a <code>USE_EVERYTHING</code> constant so that it is unique and specific to this function?</p> <p>My first thought is to give it its own type, like so:</p> <pre><code>class UseEverythingType: pass USE_EVERYTHING = UseEverythingType() </code></pre> <p>This would be in a package only exporting <code>USE_EVERYTHING</code>, discouraging creating any other <code>UseEverythingType</code> objects. But I worry that I'm not considering all aspects of Python's object model -- could two instances of <code>USE_EVERYTHING</code> somehow compare unequal?</p>
<python>
2023-02-15 00:45:25
0
7,145
nebuch
75,454,425
33,796
Access Blocked: <project> has not completed the Google verification process
<p>I am building a simple script which polls some data and then updates a spreadsheet that I am giving to my client. (It is a small project and I don't need anything fancy.)</p> <p>So I created a Google Cloud project, enabled the Sheets API, and got a credential for a Desktop app. When I try to run the <a href="https://developers.google.com/sheets/api/quickstart/python" rel="noreferrer">quickstart sample</a>, I get an error:</p> <p><strong>Access blocked: &lt;my project name&gt; has not completed the Google verification process</strong></p> <p>I have tried googling and all the solutions seem to be oriented toward what a user should do if they see this, but I am the developer. I only need to grant my own self access to this spreadsheet, since my script is the only thing that will be changing it (I will also share it with the client).</p> <p>What do I do?</p>
<python><google-cloud-platform><google-sheets-api>
2023-02-15 00:14:24
4
60,733
luqui
75,454,093
13,215,988
How can I improve my python environment setup on windows 10?
<p>This is my current python environment setup on Windows 10:</p> <p>For the remainder of this post I will assume you have chocolatey installed. Also I'm using the bash as admin terminal from cmder.</p> <p>Install python3 with <code>choco install python</code> and/or <code>choco install python3</code>. Verify that you have the latest python3 version installed with <code>python --version</code>. Then type choco install pyenv-win. This is the easiest way to do this because it also configures the Environment Variables.</p> <p>Now install the python version you want with pyenv. In this post I'll go with 3.9.0</p> <pre><code>pyenv update pyenv install --list pyenv install 3.9.0 </code></pre> <p>It seems like no matter what you do, your path will always put /c/Python39/python aka not the pyenv version of python first. Even if you set the pyenv section of your path first by going to My Computer &gt; Properties &gt; Advanced System Settings &gt; Environment Variables and move PYENV to the top of the path, your terminal will always see /c/Python39/python as the default version.</p> <p>Then I install virtualenvwrapper with the pyenv version of python:</p> <pre><code>export &quot;PATH=/c/Users/&lt;myusername&gt;/.pyenv/pyenv-win/shims/:$PATH&quot; /c/Users/&lt;myusername&gt;/.pyenv/pyenv-win/shims/pip install virtualenv /c/Users/&lt;myusername&gt;/.pyenv/pyenv-win/shims/pip install virtualenvwrapper source /c/Users/&lt;myusername&gt;/.pyenv/pyenv-win/versions/3.9.0/Scripts/virtualenvwrapper.sh export &quot;PATH=/c/Users/&lt;myusername&gt;/.pyenv/pyenv-win/shims/:/c/Users/&lt;myusername&gt;/.pyenv/pyenv-win/versions/3.9.0/Scripts/:$PATH&quot; </code></pre> <p>I make a virtual environement like this mkvirtualenv test1. Now the terminal has a (test1) before the prompt so I know that the virtual env is active and working. 
Before you close out of this terminal, run the command deactivate to stop the virtualenv.</p> <p>Now when I want to start a python project, I run these commands (Order is really important. If you do the first two commands out of order it won't work until you close and reopen the terminal).</p> <pre><code>export &quot;PATH=/c/Users/&lt;myusername&gt;/.pyenv/pyenv-win/shims/:/c/Users/&lt;myusername&gt;/.pyenv/pyenv-win/versions/3.9.0/Scripts/:$PATH&quot; source /c/Users/&lt;myusername&gt;/.pyenv/pyenv-win/versions/3.9.0/Scripts/virtualenvwrapper.sh workon test1 </code></pre> <p>And finished! That's how I setup my virtual environments in Python.</p> <p>See how inconvenient that was lol. On my mac it's easy I just run pyenv virtualenvwrapper then workon test1. So how can I do this better? Is there a simpler way to work with python on Windows 10?</p> <p>P.S. I would prefer not to use PyCharm. I know a lot of people like it but I'm looking for a more text editor/IDE agnostic solution.</p>
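For comparison, the standard-library `venv` module gives a much shorter per-project workflow with no wrapper scripts or PATH exports; it does not replace pyenv's version switching, and the paths shown are the Linux/Git Bash layout (a Windows venv puts the scripts in `Scripts/` instead of `bin/`). A minimal sketch:

```shell
# Minimal per-project workflow using only the stdlib venv module.
python3 -m venv .venv                       # create the environment
. .venv/bin/activate                        # Git Bash on Windows: . .venv/Scripts/activate
python -c "import sys; print(sys.prefix)"   # prefix now points inside .venv
deactivate
```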
<python><virtualenv><python-venv><pyenv><virtualenvwrapper>
2023-02-14 23:10:59
0
1,212
ChristianOConnor
75,453,995
6,387,095
Pandas plot, vars() argument must have __dict__ attribute?
<p>It was working perfectly earlier but for some reason now I am getting strange errors.</p> <p>pandas version: <code>1.2.3</code></p> <p>matplotlib version: <code>3.7.0</code></p> <p>sample dataframe:</p> <pre><code>df cap Date 0 1 2022-01-04 1 2 2022-01-06 2 3 2022-01-07 3 4 2022-01-08 </code></pre> <pre><code>df.plot(x='cap', y='Date') plt.show() </code></pre> <pre><code>df.dtypes cap int64 Date datetime64[ns] dtype: object </code></pre> <p>I get a traceback:</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File &quot;/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/code.py&quot;, line 90, in runcode exec(code, self.locals) File &quot;&lt;input&gt;&quot;, line 1, in &lt;module&gt; File &quot;/Volumes/coding/venv/lib/python3.8/site-packages/pandas/plotting/_core.py&quot;, line 955, in __call__ return plot_backend.plot(data, kind=kind, **kwargs) File &quot;/Volumes/coding/venv/lib/python3.8/site-packages/pandas/plotting/_matplotlib/__init__.py&quot;, line 61, in plot plot_obj.generate() File &quot;/Volumes/coding/venv/lib/python3.8/site-packages/pandas/plotting/_matplotlib/core.py&quot;, line 279, in generate self._setup_subplots() File &quot;/Volumes/coding/venv/lib/python3.8/site-packages/pandas/plotting/_matplotlib/core.py&quot;, line 337, in _setup_subplots fig = self.plt.figure(figsize=self.figsize) File &quot;/Volumes/coding/venv/lib/python3.8/site-packages/matplotlib/_api/deprecation.py&quot;, line 454, in wrapper return func(*args, **kwargs) File &quot;/Volumes/coding/venv/lib/python3.8/site-packages/matplotlib/pyplot.py&quot;, line 813, in figure manager = new_figure_manager( File &quot;/Volumes/coding/venv/lib/python3.8/site-packages/matplotlib/pyplot.py&quot;, line 382, in new_figure_manager _warn_if_gui_out_of_main_thread() File &quot;/Volumes/coding/venv/lib/python3.8/site-packages/matplotlib/pyplot.py&quot;, line 360, in _warn_if_gui_out_of_main_thread if 
_get_required_interactive_framework(_get_backend_mod()): File &quot;/Volumes/coding/venv/lib/python3.8/site-packages/matplotlib/pyplot.py&quot;, line 208, in _get_backend_mod switch_backend(rcParams._get(&quot;backend&quot;)) File &quot;/Volumes/coding/venv/lib/python3.8/site-packages/matplotlib/pyplot.py&quot;, line 331, in switch_backend manager_pyplot_show = vars(manager_class).get(&quot;pyplot_show&quot;) TypeError: vars() argument must have __dict__ attribute </code></pre>
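The traceback dies inside `switch_backend`, which suggests matplotlib 3.7 is picking up a backend it cannot introspect (for example a stale inline/GUI backend left in the environment). A common workaround, offered here as an assumption rather than a confirmed diagnosis, is to force a known backend before anything imports pyplot:

```python
import matplotlib
matplotlib.use("Agg")  # or "TkAgg"/"QtAgg"; must run before pyplot is imported

import pandas as pd

df = pd.DataFrame(
    {"cap": [1, 2, 3, 4],
     "Date": pd.to_datetime(["2022-01-04", "2022-01-06",
                             "2022-01-07", "2022-01-08"])}
)
ax = df.plot(x="cap", y="Date")
print(matplotlib.get_backend())
```

With "Agg" there is no interactive window, so `plt.show()` is a no-op; save with `ax.figure.savefig(...)` instead, or pick a GUI backend that matches an installed toolkit.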
<python><pandas><matplotlib>
2023-02-14 22:55:53
2
4,075
Sid
75,453,928
7,640,725
Is there a way to define step functions inside a class with pytest-bdd
<p>I'm wondering if there's a way to define step test functions inside Python class with <code>pytest-bdd</code> framework.</p> <p>Let me elaborate a bit further..., suppose that we have a feature file like this:</p> <p>&quot;<strong>test.feature</strong>&quot; file:</p> <pre><code>Feature: Transaction Calculate the leftover money at the end of a transaction Scenario Outline: Withdraw Money Given I deposit &lt;initial_amount&gt; When I withdraw &lt;withdrawn_amount&gt; Then I should only have &lt;leftover_money&gt; left Examples: | initial_amount | withdrawn_amount | leftover_money | | $100 | $30 | $70 | | $1000 | $50 | $950 | | $3000 | $200 | $2800 | </code></pre> <p>and then we have the step functions written like this:</p> <p>&quot;<strong>test_transaction.py</strong>&quot; file:</p> <pre class="lang-py prettyprint-override"><code>from pytest_bdd import scenario, given, when, then, parsers import pytest def pytest_configure(): pytest.AMT = 0 class TestWithdrawal: @scenario('test.feature', 'Withdraw Money') def test_withdraw(): pass @given( parsers.cfparse( &quot;I deposit ${amount:Number}&quot;, extra_types={&quot;Number&quot;: int} ), target_fixture=&quot;initial_amount&quot; ) def initial_amount(amount): return amount @when( parsers.cfparse( &quot;I withdraw ${withdrawn_amount:Number}&quot;, extra_types={&quot;Number&quot;: int} ) ) def withdraw_money(initial_amount, withdrawn_amount): pytest.AMT = initial_amount - withdrawn_amount @then( parsers.cfparse( &quot;I should only have ${leftover_amount:Number} left&quot;, extra_types={&quot;Number&quot;: int} ) ) def leftover_money(leftover_amount): assert pytest.AMT == leftover_amount </code></pre> <p>When I tried to run the test with this command from my terminal:</p> <pre class="lang-bash prettyprint-override"><code>pytest test_transaction.py </code></pre> <p>I get this error below:</p> <pre class="lang-bash 
prettyprint-override"><code>=================================================================================================== ERRORS ==================================================================================================== _______________________________________________________________________________ ERROR collecting test_transaction_with_class.py _______________________________________________________________________________ invalid method signature During handling of the above exception, another exception occurred: Could not determine arguments of &lt;bound method step.&lt;locals&gt;.decorator.&lt;locals&gt;.step_function_marker of &lt;test_transaction_with_class.TestWithdrawal object at 0x1071aa550&gt;&gt;: invalid method signature =========================================================================================== short test summary info =========================================================================================== ERROR test_transaction_with_class.py::TestWithdrawal !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! ============================================================================================== 1 error in 0.06s =============================================================================================== </code></pre> <p>which I think means that <code>pytest-bdd</code> framework doesn't support grouping a set of tests inside a class.</p> <p>if this is true..., this is almost a deal breaker for me as we can easily group a set of tests inside a class with <code>pytest</code>.</p>
<python><unit-testing><pytest><bdd><pytest-bdd>
2023-02-14 22:44:09
0
941
blue2609
75,453,886
15,569,921
Map the indices of a 2d array into a 1d array in Python
<p>Say I have a square 2d array (<code>mat</code>) and a 1d array (<code>arr</code>) which is the same <strong>length</strong> as the flattened 2d array (the values inside are different). Given the row and column index of the 2d array, how can I map it into the 1d array <strong>index</strong> to give the value of the same position as the 2d array? Here's a small (4 element) example of what I mean:</p> <pre><code>mat = np.array([[6, 7], [8, 9]]) arr = np.array([1, 2, 3, 4]) </code></pre> <p>The mapping I'm looking for is:</p> <pre><code>mat[0,0] -&gt; arr[0] mat[0,1] -&gt; arr[1] mat[1,0] -&gt; arr[2] mat[1,1] -&gt; arr[3] </code></pre> <p>Note that I can't match with values as they aren't the same so the mapping would be on the index itself.</p>
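For a row-major (C-order) flattening, the mapping in the question is just `index = row * n_cols + col`; NumPy exposes the same computation as `np.ravel_multi_index((row, col), mat.shape)`. A minimal pure-Python sketch:

```python
def flat_index(row, col, n_cols):
    """Row-major index of (row, col) in the flattened array."""
    return row * n_cols + col

mat = [[6, 7], [8, 9]]
arr = [1, 2, 3, 4]
n_cols = len(mat[0])

assert arr[flat_index(0, 0, n_cols)] == 1
assert arr[flat_index(1, 0, n_cols)] == 3  # mat[1][0] maps to arr[2]
```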
<python><arrays><indexing><mapping>
2023-02-14 22:36:30
1
390
statwoman
75,453,872
11,809,811
Inconsistent result when getting location from IP address
<p>I am trying to get a location from an IP address; the code for that is:</p> <pre><code>import urllib.request import json with urllib.request.urlopen(&quot;https://geolocation-db.com/json&quot;) as url: data = json.loads(url.read().decode()) print(data) </code></pre> <p>This does produce a result, but it is the wrong result and gives me the wrong address (although the right country). However, when I use a website like <a href="https://iplocation.com/" rel="nofollow noreferrer">https://iplocation.com/</a> I do get my proper address.</p> <p>I am quite confused why there are different results; could someone help?</p>
<python><json>
2023-02-14 22:33:31
1
830
Another_coder