Column              Type             Min / Earliest        Max / Latest
QuestionId          int64            74.8M                 79.8M
UserId              int64            56                    29.4M
QuestionTitle       string (length)  15                    150
QuestionBody        string (length)  40                    40.3k
Tags                string (length)  8                     101
CreationDate        string (date)    2022-12-10 09:42:47   2025-11-01 19:08:18
AnswerCount         int64            0                     44
UserExpertiseLevel  int64            301                   888k
UserDisplayName     string (length)  3                     30
77,962,144
7,647,857
Numba Function Error: Handling 1D and 2D Arrays Differently
<p>I'm encountering an error while using Numba-optimized functions to check if the extent of an n-dimensional box (<code>n &gt;= 1</code>) is larger than a minimum value along corresponding dimensions. The functions <code>get_extent</code> and <code>is_larger_than_min</code> are decorated with <code>@njit</code>.</p> <p>Here are the functions:</p> <pre><code>@njit
def get_extent(box):
    return box[1] - box[0]
</code></pre> <p>and</p> <pre><code>@njit
def is_larger_than_min(box, extent_min):
    extent = get_extent(box)
    return np.all(extent &gt;= extent_min)
</code></pre> <p>When I pass a 2D array <code>box1</code> and its corresponding <code>extent_min1</code>, everything works fine. However, when I pass a 1D array <code>box2</code> and its <code>extent_min2</code>, I encounter an error.</p> <pre><code>box1 = np.array([[0, 0, 0], [5, 5, 5]])  # shape (2, n), i.e. (n&gt;1)-dimensional box
extent_min1 = np.array([4, 4, 4])        # shape (n,), i.e. extents along each dimension
box2 = np.array([0, 5])                  # shape (2,), i.e. (n=1)-dimensional box, or just an interval
extent_min2 = 4                          # scalar, i.e. extent (length) along this single dimension

is_larger_than_min(box1, extent_min1)  # works fine
is_larger_than_min(box2, extent_min2)  # raises error
</code></pre> <p>The error message I receive is not very informative, but since the function <code>is_larger_than_min</code> <em>without</em> <code>@njit</code> works just fine with any types of inputs, it is obviously related to handling scalars and arrays differently within the Numba-optimized functions. How can I modify these functions to handle both scalars and arrays without encountering errors? Any insights or solutions would be greatly appreciated. Thanks!</p>
<python><arrays><numpy><numba>
2024-02-08 13:40:48
1
399
user7647857
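Numba compiles one signature per distinct set of argument types, and on the 1D box the subtraction yields a scalar while <code>extent_min2</code> is a plain int, so the jitted call presumably fails to type. One way to sidestep that is to normalize every call to fixed shapes before the compiled code sees it. A minimal sketch (shown without the <code>@njit</code> decorators so it runs anywhere; under Numba the same normalization would live in a plain-Python wrapper around the jitted functions):

```python
import numpy as np

def get_extent(box):
    # box is always 2D here: shape (2, n)
    return box[1] - box[0]

def is_larger_than_min(box, extent_min):
    # Normalize: a 1D interval becomes a (2, 1) box and a scalar
    # minimum becomes a length-1 array, so every call sees the same
    # types -- which is what @njit needs for a single signature.
    box = np.asarray(box, dtype=np.float64)
    if box.ndim == 1:
        box = box.reshape(2, 1)
    extent_min = np.atleast_1d(np.asarray(extent_min, dtype=np.float64))
    return bool(np.all(get_extent(box) >= extent_min))

box1 = np.array([[0, 0, 0], [5, 5, 5]])
print(is_larger_than_min(box1, np.array([4, 4, 4])))  # True
print(is_larger_than_min(np.array([0, 5]), 4))        # True
print(is_larger_than_min(np.array([0, 3]), 4))        # False
```

The normalization itself uses only NumPy calls Numba supports poorly in places (`np.atleast_1d`), which is another reason to keep it outside the jitted functions.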
77,962,045
3,994,399
How to open an excel file that is already open without getting permission denied
<p>I need to do something with data in an Excel file. I therefore open the Excel file with the <code>.read_excel</code> method of the pandas library. I only read the data, I don't need to write to the file. However, I often have to adjust something in the Excel file and run the code again. In order to be able to run the code again, I always need to close the Excel file (in the Windows UI), which is very annoying. If I don't do this I get &quot;Permission denied&quot;.</p> <p>I therefore thought, okay, let's just make a copy, read the copy, and delete the copy after the script. However, it does not even allow me to make a copy; I tried <code>shutil.copyfile</code>.</p> <p>I am running Windows and autosave is on for this Excel file.</p> <p>Any suggestions?</p>
<python><excel>
2024-02-08 13:24:56
1
692
ThaNoob
77,962,011
461,499
Transformer pipeline with 'accelerate' not using gpu?
<p>My transformers pipeline does not use cuda.</p> <p>code:</p> <pre><code> from transformers import pipeline, Conversation # load_in_8bit: lower precision but saves a lot of GPU memory # device_map=auto: loads the model across multiple GPUs chatbot = pipeline(&quot;conversational&quot;, model=&quot;BramVanroy/GEITje-7B-ultra&quot;, model_kwargs={&quot;load_in_8bit&quot;: True}, device_map=&quot;auto&quot;) </code></pre> <p>Testing for cuda works just fine:</p> <pre><code>import torch print(torch.cuda.is_available()) </code></pre> <p>Which prints <code>True</code></p> <p>I have a project with these libs:</p> <pre><code>[tool.poetry.dependencies] python = &quot;^3.11&quot; transformers = &quot;^4.37.2&quot; torch = {version = &quot;^2.2.0+cu121&quot;, source = &quot;pytorch&quot;} torchvision = {version = &quot;^0.17.0+cu121&quot;, source = &quot;pytorch&quot;} accelerate = &quot;^0.26.1&quot; bitsandbytes = &quot;^0.42.0&quot; [[tool.poetry.source]] name = &quot;pytorch&quot; url = &quot;https://download.pytorch.org/whl/cu121&quot; priority = &quot;supplemental&quot; </code></pre> <p>What am I missing?</p>
<python><pytorch><huggingface-transformers><accelerate>
2024-02-08 13:20:01
2
20,319
Rob Audenaerde
77,961,754
4,462,690
WinPython Path for Jupyter notebooks does not seems to respond to os.chdir()
<p>I am using a portable Python distribution (WinPython 2023-04), and downloaded a set of notebooks from a public GitHub repository (<a href="https://github.com/kenperry-public/ML_Spring_2024" rel="nofollow noreferrer">https://github.com/kenperry-public/ML_Spring_2024</a>) using GitHub Desktop on Windows 11 to C:\Users....\Github\ML_Spring_2024.</p> <p>I started Jupyter Lab from the WinPython directory, opened a .ipynb notebook located in C:\Users....\Github\ML_Spring_2024 and started work by inserting and executing a new cell right at the top:</p> <pre><code>import os
os.chdir(r&quot;C:\Users....\Github\ML_Spring_2024&quot;)
</code></pre> <p>Unfortunately, the links in the notebook (which are relative links) did not work. I then noticed that WinPython had created a copy of the .ipynb file that I opened in C:\WPy64-31160\notebooks. If I copy all the files and subdirectories in C:\Users....\Github\ML_Spring_2024 to C:\WPy64-31160\notebooks, all the links in my open Jupyter notebook work. It seems as if my working directory has not changed on account of my os.chdir() command. This is very inconvenient.</p> <p>How do I get WinPython to change its working directory for Jupyter notebooks to the location to which I point it with os.chdir()?</p> <p>Sincerely and with many thanks in advance</p> <p>Thomas Philips</p>
<python><jupyter-notebook>
2024-02-08 12:42:20
0
1,131
Thomas Philips
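Two separate things are likely at play in the question above (hedged guesses from the symptoms): a Windows path like "C:\Users\..." in a normal Python string literal is itself broken, because \U starts an escape sequence, so it needs a raw string; and os.chdir() changes only the kernel's working directory, while Jupyter resolves a notebook's relative links against the notebook file's location under the server's root, so chdir alone cannot repair those links. Starting Jupyter with its root in the repository does. A runnable sketch of the kernel-side part (a temp directory stands in for the real path):

```python
import os
import pathlib
import tempfile

# Stand-in for the real repo path; the real one needs a raw string,
# e.g. r"C:\Users\me\Github\ML_Spring_2024" (path is hypothetical),
# because "\U..." is an escape-sequence error in a normal literal.
repo = pathlib.Path(tempfile.mkdtemp())

os.chdir(repo)  # changes where the *kernel* reads and writes files...
print(pathlib.Path.cwd().samefile(repo))  # True

# ...but a notebook's relative links are resolved by the Jupyter
# server against the notebook file's own directory, so chdir cannot
# fix them. Launching Jupyter rooted at the repo can, for example:
#   jupyter lab --notebook-dir="C:/Users/me/Github/ML_Spring_2024"
```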
77,961,502
16,627,522
Python equivalent for std::basic_istream. (Pybind11, how to read from file or input stream)
<p>I am using Pybind11/Nanobind to write Python bindings for my C++ libraries.</p> <p>One of my C++ functions takes in the argument of type <code>std::istream &amp;</code> e.g.:</p> <pre class="lang-cpp prettyprint-override"><code>std::string cPXGStreamReader::testReadStream(std::istream &amp;stream) { std::ostringstream contentStream; std::string line; while (std::getline(stream, line)) { contentStream &lt;&lt; line &lt;&lt; '\n'; // Append line to the content string } return contentStream.str(); // Convert contentStream to string and return } </code></pre> <p>What kind of argument do I need to pass in Python which corresponds to this?</p> <p>I have tried passing <code>s</code> where <code>s</code> is created:</p> <pre class="lang-python prettyprint-override"><code>s = open(r&quot;test_file.pxgf&quot;, &quot;rb&quot;) # and s = io.BytesIO(b&quot;some initial binary data: \x00\x01&quot;) </code></pre> <p>to no avail. I get the error</p> <pre class="lang-none prettyprint-override"><code>TypeError: test_read_file(): incompatible function arguments. The following argument types are supported: 1. (self: pxgf.PXGStreamReader, arg0: std::basic_istream&lt;char,std::char_traits&lt;char&gt; &gt;) -&gt; str Invoked with: &lt;pxgf.PXGStreamReader object at 0x000002986CF9C6B0&gt;, &lt;_io.BytesIO object at 0x000002986CF92250&gt; Did you forget to `#include &lt;pybind11/stl.h&gt;`? Or &lt;pybind11/complex.h&gt;, &lt;pybind11/functional.h&gt;, &lt;pybind11/chrono.h&gt;, etc. Some automatic </code></pre>
<python><c++><pybind11><nanobind>
2024-02-08 12:02:46
1
634
Tommy Wolfheart
77,961,299
4,014,825
AWS CDK Lambda Layer Python Package
<p>Currently when I create a Python Lambda Layer using the following CDK code:</p> <pre><code>_lambda.LayerVersion(
    self,
    &quot;CommonPythonLambdaLayer&quot;,
    code=_lambda.Code.from_asset(
        &quot;src/common&quot;,
        bundling=cdk.BundlingOptions(
            image=lambda_bundling_image,  # type: ignore
            command=[
                &quot;bash&quot;,
                &quot;-c&quot;,
                &quot;pip install --no-cache -r requirements.txt --platform manylinux2014_aarch64 --only-binary=:all: --upgrade -t /asset-output/python &amp;&amp; cp -au . /asset-output/python&quot;,
            ],
            platform=&quot;linux/arm64&quot;,
        ),
    ),
)
</code></pre> <p>The directory structure of src/common/ is:</p> <pre><code>src/
  common/
    models.py
</code></pre> <p>The CDK code for the current Lambda layer creates:</p> <pre><code>python/
  models.py
</code></pre> <p>However, what I need from the Lambda Layer is:</p> <pre><code>python/
  common/
    models.py
</code></pre> <p>Two options I have tried and do not want are:</p> <p>Set build_dir in the CDK code to &quot;src/&quot;, as it has other packages that need to be excluded.</p> <p>Modify the asset paths to /common, as then the requirements.txt packages will also get moved there:</p> <pre><code>pip install --no-cache -r requirements.txt --platform manylinux2014_aarch64 --only-binary=:all: --upgrade -t /asset-output/python/common &amp;&amp; cp -au . /asset-output/python/common
</code></pre>
<python><amazon-web-services><aws-lambda><aws-cdk>
2024-02-08 11:29:42
1
1,029
Inthu
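One variation worth trying for the question above (a config-fragment sketch, not verified against a deployment): keep the pip target at the layer root `python/` so the requirements stay importable at top level, but copy only the source tree into `python/common/`. Only the `command` entry of the bundling options changes:

```
# Hypothetical replacement for the bundling `command`: install
# requirements into python/ (layer root), then copy the source into
# python/common/ so imports resolve as `common.models`.
command=[
    "bash",
    "-c",
    "pip install --no-cache -r requirements.txt "
    "--platform manylinux2014_aarch64 --only-binary=:all: --upgrade "
    "-t /asset-output/python "
    "&& mkdir -p /asset-output/python/common "
    "&& cp -au . /asset-output/python/common",
],
```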
77,960,915
4,561,745
Nested maps in C++ with vectors of different types as values
<p>I am trying to convert the following Python code into C++:</p> <pre><code>outer_dict = {} outer_dict[0] = {} outer_dict[0][&quot;ints&quot;] = [0] outer_dict[0][&quot;floats&quot;] = [0.0] outer_dict[0][&quot;ints&quot;].append(1) outer_dict[0][&quot;floats&quot;].append(1.2) outer_dict[1] = {} outer_dict[1][&quot;ints&quot;] = [0] outer_dict[1][&quot;floats&quot;] = [0.5] </code></pre> <p>Essentially, the data structure is a nested dictionary in python where the values of the inner dictionary are lists of different data types. The overall data structure looks as follows:</p> <pre><code>outer_dict { 0: { &quot;ints&quot;: [0, 1] // list of ints &quot;floats&quot;: [0.0, 1.2] // list of floats } 1: { &quot;ints&quot;: [0] // list of ints &quot;floats&quot;: [0.5] // list of floats } } </code></pre> <p>How can such a code be converted to C++?</p>
<python><c++><dictionary><nested><hashmap>
2024-02-08 10:31:19
1
775
Dr. Prasanna Date
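The most direct C++ shape for the structure above is a struct holding two typed vectors inside a `std::map`, rather than a heterogeneous dictionary. Since this collection's examples are in Python, here is the same restructuring sketched in Python, with the corresponding C++ declaration in the comments (names are illustrative):

```python
from dataclasses import dataclass, field

# C++ equivalent of this shape (sketch):
#   struct Inner { std::vector<int> ints; std::vector<double> floats; };
#   std::map<int, Inner> outer_dict;
@dataclass
class Inner:
    ints: list = field(default_factory=list)
    floats: list = field(default_factory=list)

outer_dict = {}
outer_dict[0] = Inner(ints=[0], floats=[0.0])
outer_dict[0].ints.append(1)
outer_dict[0].floats.append(1.2)
outer_dict[1] = Inner(ints=[0], floats=[0.5])

print(outer_dict[0].ints, outer_dict[0].floats)  # [0, 1] [0.0, 1.2]
```

Replacing the string keys "ints"/"floats" with named struct fields is what lets each list keep its own element type in C++.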
77,960,851
7,454,177
Is it possible to create a one to many association between a table and an Enum in SQLAlchemy?
<p>In my FastAPI project I want to associate a table with multiple regions. The regions are stored in an Enum. Many-to-Many relationships in SQLAlchemy are generally possible, but I get an error message when adjusting the <a href="https://docs.sqlalchemy.org/en/20/orm/basic_relationships.html#setting-bi-directional-many-to-many" rel="nofollow noreferrer">documented behaviour</a> to an enum.</p> <p>This is my code:</p> <pre><code>class RegionEnum(str, Enum):
    US = &quot;us&quot;
    EU = &quot;eu&quot;
    INDIA = &quot;india&quot;
    APAC = &quot;apac&quot;


association_table = Table(
    &quot;association_table_check_region&quot;,
    Base.metadata,
    Column(&quot;left_id&quot;, ForeignKey(&quot;server.id&quot;), primary_key=True),
    Column(&quot;right_id&quot;, Enum(RegionEnum), primary_key=True, nullable=False),
)


class Server(Base):
    regions: Mapped[list[RegionEnum]] = relationship(&quot;RegionEnum&quot;, secondary=association_table)
</code></pre> <p>When running my tests I get <code>sqlalchemy.exc.InvalidRequestError: When initializing mapper Mapper[Check(check)], expression 'RegionEnum' failed to locate a name ('RegionEnum'). If this is a class name, consider adding this relationship() to the &lt;class 'app.models.check.Check'&gt; class after both dependent classes have been defined.</code></p> <p>I suppose SQLAlchemy is not aware of the enum, but how can I make it aware? Or in general: how do you set up a multiple select (1:M) of an enum in SQLAlchemy?</p> <blockquote> <p>I tried with a connection table before (basically a real table instead of the association table) and it worked, but I am looking for a proper solution.</p> </blockquote>
<python><sqlalchemy><enums><fastapi>
2024-02-08 10:20:15
0
2,126
creyD
77,960,840
8,588,743
Translate R-code into Python code - Unique timestamp values differ by a lot
<p>My data frame has the columns</p> <pre><code>DOMAINNAME object CUSTOMERNUMBER int64 CREDITCHECKSOURCE object RESULTTEXT object RESULTCODE object FUNCTION object LASTMODIFIED datetime64[ns] APPROVEDAMOUNT float64 ISANONYMIZED object SALESBRAND object COUNTRY object DATE datetime64[ns] DAY int64 MONTH int64 WEEK UInt32 YEAR int64 dtype: object </code></pre> <p>where 'DATE' is a datetime like this <code>'2024-01-08 15:32:07'</code>. Originally, the unique count for the <code>'DATE'</code>-column of the original dataframe is 296723. After the same data frame is passed to the R-code and Python-code below the counts are suddenly 293673 and 280531. I can't figure out why this discrepancy occurs.</p> <p>I have the dplyr code:</p> <pre><code>df2 &lt;- df1 %&gt;% select(-RESULTCODE) %&gt;% filter(RESULTTEXT == &quot;APPROVED&quot; | RESULTTEXT == &quot;DENIED&quot;, !is.na(FUNCTION), SALESBRAND != &quot;Stayhard&quot;) %&gt;% distinct(across(-DATE), .keep_all = TRUE) %&gt;% select( -CUSTOMERNUMBER ) </code></pre> <p>and the corresponding Python code:</p> <pre><code>def filter_transform_alternative(df): df_filtered = df[(df['RESULTTEXT'].isin([&quot;APPROVED&quot;, &quot;DENIED&quot;])) &amp; df['FUNCTION'].notna() &amp; (df['SALESBRAND'] != &quot;Stayhard&quot;)] df_filtered = df_filtered.drop(columns=['CUSTOMERNUMBER']) cols_for_dupes = [col for col in df_filtered.columns if col not in ['DATE', 'CUSTOMERNUMBER']] df_filtered['unique_id'] = df_filtered[cols_for_dupes].astype(str).apply(lambda x: '_'.join(x), axis=1) df_filtered['is_dupe'] = df_filtered.duplicated(subset='unique_id', keep='first') df_no_duplicates = df_filtered[df_filtered['is_dupe'] == False].drop(columns=['unique_id', 'is_dupe']) return df_no_duplicates </code></pre> <p>Below is a sample data set for reproducibility</p> <pre><code> DOMAINNAME CUSTOMERNUMBER CREDITCHECKSOURCE RESULTTEXT \ 0 Ellos-EllosNO 1246087421 MULTIUPPLYS APPROVED 1 Ellos-EllosSE 1246087439 MULTIUPPLYS APPROVED 2 Homeroom-HomeroomSE 
1244949952 MULTIUPPLYS APPROVED 3 Ellos-EllosSE 534334891 MULTIUPPLYS APPROVED 4 Jotex-JotexSE 1246087165 MULTIUPPLYS APPROVED 5 Homeroom-HomeroomNO 1246087298 MULTIUPPLYS APPROVED 6 Jotex-JotexDK 1246087207 MULTIUPPLYS APPROVED 7 Ellos-EllosNO 1246086639 MULTIUPPLYS APPROVED 8 Ellos-EllosSE 936355635 MULTIUPPLYS APPROVED 9 Jotex-JotexSE 646132969 MULTIUPPLYS APPROVED 10 Jotex-JotexNO 943056952 MULTIUPPLYS APPROVED 11 Ellos-EllosSE 3169943333 MULTIUPPLYS DENIED 12 Jotex-JotexNO 1246086944 MULTIUPPLYS APPROVED 13 Ellos-EllosSE 1245979081 MULTIUPPLYS APPROVED 14 Ellos-EllosFI 1246086878 MULTIUPPLYS APPROVED 15 Ellos-EllosSE 936355635 MULTIUPPLYS APPROVED 16 Ellos-EllosSE 1246074783 MULTIUPPLYS APPROVED 17 Homeroom-HomeroomSE 1145457782 MULTIUPPLYS DENIED 18 Ellos-EllosSE 1246086803 MULTIUPPLYS APPROVED 19 Ellos-EllosNO 1245818248 MULTIUPPLYS APPROVED RESULTCODE FUNCTION LASTMODIFIED APPROVEDAMOUNT ISANONYMIZED \ 0 nan CREDIT 2024-01-08 15:32:07 2999.0 nan 1 nan CREDIT 2024-01-08 15:31:34 4045.0 nan 2 nan CREDIT 2024-01-08 15:26:49 198.0 nan 3 nan LIMIT 2024-01-08 15:26:47 21407.0 nan 4 nan CREDIT 2024-01-08 15:26:45 9099.0 nan 5 nan CREDIT 2024-01-08 15:24:45 328.0 nan 6 nan CREDIT 2024-01-08 15:23:34 641.0 nan 7 nan CREDIT 2024-01-08 15:22:17 1438.0 nan 8 nan LIMIT 2024-01-08 15:20:57 17600.0 nan 9 nan CREDIT 2024-01-08 15:20:41 348.0 nan 10 nan LIMIT 2024-01-08 15:19:03 7448.0 nan 11 nan LIMIT 2024-01-08 15:15:49 0.0 nan 12 nan CREDIT 2024-01-08 15:15:35 5489.0 nan 13 nan CREDIT 2024-01-08 15:13:46 603.0 nan 14 nan CREDIT 2024-01-08 15:12:54 399.0 nan 15 nan LIMIT 2024-01-08 15:11:02 13711.0 nan 16 nan CREDIT 2024-01-08 15:09:54 520.0 nan 17 nan CREDIT 2024-01-08 15:09:08 0.0 nan 18 nan CREDIT 2024-01-08 15:09:05 614.0 nan 19 nan CREDIT 2024-01-08 15:04:38 885.0 nan SALESBRAND COUNTRY DATE DAY MONTH WEEK YEAR 0 Ellos NO 2024-01-08 15:32:07 8 1 2 2024 1 Ellos SE 2024-01-08 15:31:34 8 1 2 2024 2 Homeroom SE 2024-01-08 15:26:49 8 1 2 2024 3 Ellos SE 2024-01-08 
15:26:47 8 1 2 2024 4 Jotex SE 2024-01-08 15:26:45 8 1 2 2024 5 Homeroom NO 2024-01-08 15:24:45 8 1 2 2024 6 Jotex DK 2024-01-08 15:23:34 8 1 2 2024 7 Ellos NO 2024-01-08 15:22:17 8 1 2 2024 8 Ellos SE 2024-01-08 15:20:57 8 1 2 2024 9 Jotex SE 2024-01-08 15:20:41 8 1 2 2024 10 Jotex NO 2024-01-08 15:19:03 8 1 2 2024 11 Ellos SE 2024-01-08 15:15:49 8 1 2 2024 12 Jotex NO 2024-01-08 15:15:35 8 1 2 2024 13 Ellos SE 2024-01-08 15:13:46 8 1 2 2024 14 Ellos FI 2024-01-08 15:12:54 8 1 2 2024 15 Ellos SE 2024-01-08 15:11:02 8 1 2 2024 16 Ellos SE 2024-01-08 15:09:54 8 1 2 2024 17 Homeroom SE 2024-01-08 15:09:08 8 1 2 2024 18 Ellos SE 2024-01-08 15:09:05 8 1 2 2024 19 Ellos NO 2024-01-08 15:04:38 8 1 2 2024 </code></pre>
<python><r><duplicates>
2024-02-08 10:18:34
0
903
Parseval
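Two concrete differences between the pipelines above could explain the gap (hedged, since the full data isn't shown). First, the R distinct() runs before CUSTOMERNUMBER is dropped, so that column is part of R's dedup key but not Python's, which alone makes Python drop more rows. Second, the '_'-joined string key can collide for rows that are actually distinct, e.g. ('x_y', 'z') and ('x', 'y_z') both join to 'x_y_z'. pandas can mirror distinct(across(-DATE), .keep_all = TRUE) directly with drop_duplicates; a sketch with toy columns:

```python
import pandas as pd

df = pd.DataFrame({
    "RESULTTEXT":     ["APPROVED", "APPROVED", "DENIED"],
    "FUNCTION":       ["CREDIT",   "CREDIT",   "LIMIT"],
    "SALESBRAND":     ["Ellos",    "Ellos",    "Jotex"],
    "A":              ["x_y",      "x",        "q"],
    "B":              ["z",        "y_z",      "r"],
    "DATE":           ["t1",       "t2",       "t3"],
    "CUSTOMERNUMBER": [1, 2, 3],
})

mask = (df["RESULTTEXT"].isin(["APPROVED", "DENIED"])
        & df["FUNCTION"].notna()
        & (df["SALESBRAND"] != "Stayhard"))

# Mirror the R order: dedup on every column except DATE (so
# CUSTOMERNUMBER participates), *then* drop CUSTOMERNUMBER.
cols = [c for c in df.columns if c != "DATE"]
out = (df[mask]
       .drop_duplicates(subset=cols, keep="first")
       .drop(columns=["CUSTOMERNUMBER"]))

# The join-based key wrongly merges the first two (distinct) rows:
key_cols = [c for c in df.columns if c not in ("DATE", "CUSTOMERNUMBER")]
joined = df[key_cols].astype(str).agg("_".join, axis=1)
print(len(out), joined[0] == joined[1])  # 3 True
```

Using a separator that cannot appear in the data, or a tuple key, would also remove the collision, but drop_duplicates on the real columns is the closest translation of distinct().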
77,960,696
13,314,132
spaCy training stopping automatically in Google Colab
<p>I am training spaCy's NER on a customed dataset.</p> <p>I have changed the dataset template as per spaCy requirements: <code>data[0]['text']</code></p> <pre><code> RECEIVED REGISTER OF DEEDS KENT COUNTY, MI 2022 MAY 02 4:26 PM GU 51 202205030036938 Total Pages: 2 05/03/2022 11:00 AM Fees: $30.00 Lisa Posthumus Lyons, County Clerk/Register Kent County, MI SEAL QUIT CLAIM DEED 41-13-23-104-009 rc Debra Kathleen Hoek, as trustee of the Jeanette (Ma Janet) Hoek Living Trust u/a/d April 17, 2019, of 1058 Patton Avenue NW, Grand Rapids, Michigan 49504, QUIT CLAIMS to Janet Hoek,' individually, of 1058 Patton Avenue NW, Grand Rapids, Michigan 49504, the premises located in Kent County, Michigan, described as on the attached Exhibit A, subject to all easements and restrictions of record, for One Dollar ($1.00). This transfer is exempt from real estate transfer tax under MCLA 207.526(a), MSA 7.456(26) and MCLA 207.505(a), MSA 7.456(5). This conveyance does not create a division of any parcel of real property and no divisions have been made since March 31, 1997. This property may be located within the vicinity of 'farmland or a farm operation. Generally a.. 
</code></pre> <p><code>data[0]['entities']</code></p> <pre><code>[[70, 85, 'Recording Number'], [101, 111, 'Recording Date'], [199, 214, 'Doc Type'], [235, 311, 'Seller'], [405, 416, 'Buyer']] </code></pre> <p><strong>How to reproduce the behaviour</strong> Created <code>train.spacy</code></p> <pre><code>from spacy.util import filter_spans for training_example in tqdm(data): text = training_example['text'] labels = training_example['entities'] doc = nlp.make_doc(text) ents = [] for start, end, label in labels: span = doc.char_span(start, end, label=label, alignment_mode=&quot;contract&quot;) if span is None: print(&quot;Skipping entity&quot;) else: ents.append(span) filtered_ents = filter_spans(ents) doc.ents = filtered_ents doc_bin.add(doc) doc_bin.to_disk(&quot;train.spacy&quot;) </code></pre> <p>Created <code>config.cfg</code></p> <p><code>!python -m spacy init fill-config base_config.cfg config.cfg</code></p> <p><strong>Output:</strong></p> <pre><code>✔ Auto-filled config with all values ✔ Saved config config.cfg You can now add your data and train your pipeline: python -m spacy train config.cfg --paths.train ./train.spacy --paths.dev ./dev.spacy </code></pre> <p>Now when I am trying to train the model, getting the following error in the output: <code>!python -m spacy train config.cfg --output ./ --paths.train ./train.spacy --paths.dev ./train.spacy</code></p> <p><strong>Output:</strong></p> <pre><code>ℹ Saving to output directory: . 
ℹ Using CPU ℹ To switch to GPU 0, use the option: --gpu-id 0 =========================== Initializing pipeline =========================== ✔ Initialized pipeline ============================= Training pipeline ============================= ℹ Pipeline: ['tok2vec', 'ner'] ℹ Initial learn rate: 0.001 E # LOSS TOK2VEC LOSS NER ENTS_F ENTS_P ENTS_R SCORE --- ------ ------------ -------- ------ ------ ------ ------ ^C </code></pre> <p>Automatic ^C is coming by itself and stopping the training.</p> <h2>Your Environment</h2> <p><strong>Info about spaCy</strong></p> <ul> <li>spaCy version: 3.7.3</li> <li>Platform: Linux-6.1.58+-x86_64-with-glibc2.35</li> <li>Python version: 3.10.12</li> <li>Pipelines: en_core_web_lg (3.7.1), en_core_web_sm (3.7.1)</li> </ul>
<python><spacy><named-entity-recognition>
2024-02-08 09:54:14
1
655
Daremitsu
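The trailing ^C above is usually not spaCy aborting on its own: Colab kills the process when the runtime exhausts RAM, and the log then shows an injected ^C. Long documents like these deeds can trigger that during featurization. One hedged mitigation is to cap the batcher in config.cfg; an illustrative excerpt (the values are assumptions to tune, not defaults):

```
[training.batcher]
@batchers = "spacy.batch_by_words.v1"
size = 500
tolerance = 0.2
discard_oversize = false
```

Watching the Colab RAM gauge while training starts is a quick way to confirm whether memory is the culprit.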
77,960,679
2,763,895
Stellargraph GraphSAGE sample google Colab notebook model.predict error
<p>I started working on a sample of the stellargraph Python module that runs the GraphSAGE algorithm, from this link:</p> <p><a href="https://stellargraph.readthedocs.io/en/stable/demos/node-classification/graphsage-node-classification.html" rel="nofollow noreferrer">https://stellargraph.readthedocs.io/en/stable/demos/node-classification/graphsage-node-classification.html</a></p> <p>I can run the algorithm up to this line [20]:</p> <pre><code>all_nodes = node_subjects.index
all_mapper = generator.flow(all_nodes)
all_predictions = model.predict(all_mapper)
</code></pre> <p>but when it calls the <code>predict</code> method I receive this error:</p> <pre><code>ValueError: in user code:

    ValueError: Layer &quot;model_1&quot; expects 3 input(s), but it received 1 input tensors. Inputs received: [&lt;tf.Tensor 'IteratorGetNext:0' shape=(None, None, None) dtype=float32&gt;]
</code></pre> <p>Even though the code comes straight from the validated example site, it fails with a neural-network layer incompatibility error.
I do not know what to do.</p> <p>I also added the <code>model.summary()</code> for better results:</p> <pre><code>Model: &quot;model&quot; __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_2 (InputLayer) [(None, 10, 1433)] 0 [] input_3 (InputLayer) [(None, 50, 1433)] 0 [] input_1 (InputLayer) [(None, 1, 1433)] 0 [] reshape (Reshape) (None, 1, 10, 1433) 0 ['input_2[0][0]'] reshape_1 (Reshape) (None, 10, 5, 1433) 0 ['input_3[0][0]'] dropout_1 (Dropout) (None, 1, 1433) 0 ['input_1[0][0]'] dropout (Dropout) (None, 1, 10, 1433) 0 ['reshape[0][0]'] dropout_3 (Dropout) (None, 10, 1433) 0 ['input_2[0][0]'] dropout_2 (Dropout) (None, 10, 5, 1433) 0 ['reshape_1[0][0]'] mean_aggregator (MeanAggre multiple 45888 ['dropout_1[0][0]', gator) 'dropout[0][0]', 'dropout_3[0][0]', 'dropout_2[0][0]'] reshape_2 (Reshape) (None, 1, 10, 32) 0 ['mean_aggregator[1][0]'] dropout_5 (Dropout) (None, 1, 32) 0 ['mean_aggregator[0][0]'] dropout_4 (Dropout) (None, 1, 10, 32) 0 ['reshape_2[0][0]'] mean_aggregator_1 (MeanAgg (None, 1, 32) 1056 ['dropout_5[0][0]', regator) 'dropout_4[0][0]'] reshape_3 (Reshape) (None, 32) 0 ['mean_aggregator_1[0][0]'] lambda (Lambda) (None, 32) 0 ['reshape_3[0][0]'] dense (Dense) (None, 7) 231 ['lambda[0][0]'] ================================================================================================== Total params: 47175 (184.28 KB) Trainable params: 47175 (184.28 KB) Non-trainable params: 0 (0.00 Byte) __________________________________________________________________________________________________ </code></pre>
<python><tensorflow><graph-neural-network><stellargraph><graphsage>
2024-02-08 09:51:52
1
1,509
Reza Akraminejad
77,960,618
13,860,719
Unique identifier for ratios with tolerance
<p>I have some data that contains ratios of 5 elements <code>'a'</code>, <code>'b'</code>, <code>'c'</code>, <code>'d'</code>, <code>'e'</code>, which looks something like this:</p> <pre><code>data = [
    {'a': 0.197, 'b': 0.201, 'c': 0.199, 'd': 0.202, 'e': 0.201},
    {'a': 0.624, 'b': 0.628, 'c': 0.623, 'd': 0.625, 'e': 0.750},
    {'a': 0.192, 'b': 0.203, 'c': 0.200, 'd': 0.202, 'e': 0.203},
    {'a': 0.630, 'b': 0.620, 'c': 0.625, 'd': 0.623, 'e': 0.752},
]
</code></pre> <p>I would like to hash each ratio data (represented as a dict) into a string that can be used as a unique identifier for ratios with a tolerance. For example, with a tolerance of 0.1 for the ratio of each element, the expectation is that the first and third dicts should have the same identifier, and the second and fourth dicts should have the same identifier. This is easy to do if one just wants to compare whether two ratio dicts are within the tolerance, but I am not sure how to create unique identifiers.</p> <p>Edit: I am looking for some rounding method, instead of completely arbitrary hashing.</p>
<python><string><algorithm><hash><rounding>
2024-02-08 09:42:11
2
2,963
Shaun Han
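A rounding-based sketch for the question above: snap each ratio onto a grid of width `tol` and join the bucket indices into a string key. The caveat, inherent to any identifier scheme, is that two values straddling a grid boundary (e.g. 0.249 vs 0.251) land in different buckets even though they are within tolerance of each other:

```python
def ratio_id(ratios, tol=0.1):
    """Bucket each ratio onto a grid of width `tol` and join the
    bucket indices into a deterministic string key."""
    return "_".join(f"{k}:{round(v / tol)}" for k, v in sorted(ratios.items()))

data = [
    {'a': 0.197, 'b': 0.201, 'c': 0.199, 'd': 0.202, 'e': 0.201},
    {'a': 0.624, 'b': 0.628, 'c': 0.623, 'd': 0.625, 'e': 0.750},
    {'a': 0.192, 'b': 0.203, 'c': 0.200, 'd': 0.202, 'e': 0.203},
    {'a': 0.630, 'b': 0.620, 'c': 0.625, 'd': 0.623, 'e': 0.752},
]
ids = [ratio_id(d) for d in data]
print(ids[0] == ids[2], ids[1] == ids[3], ids[0] != ids[1])  # True True True
```

If boundary effects matter, a common refinement is to also emit the neighbouring bucket for values near an edge and match on overlap, at the cost of no longer being a single identifier.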
77,960,528
13,104,490
Integrating Python Matplotlib Heatmap into a JavaScript Application
<p>I am currently working with a medical company that is transitioning from MATLAB to JavaScript for our software solutions. As part of this process, we are looking to integrate heatmap visualizations similar to what we had in MATLAB, like the heatmap shown in the attached image.</p> <p>I'm trying to figure out if there's a way to incorporate a Matplotlib heatmap into our new JavaScript application. Is there an approach or a tool that allows embedding or converting these Matplotlib heatmaps for use in a JavaScript environment?</p> <p>Attached is an example of the heatmap we're working with, generated in Python using Matplotlib.</p> <p><img src="https://i.sstatic.net/XQhrd.png" alt="Example of a Python Matplotlib Heatmap" /></p>
<javascript><python><matplotlib><charts><webassembly>
2024-02-08 09:26:27
1
972
Dolev Dublon
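A pragmatic bridge during such a migration (one option among several): keep generating the figure in Python and hand the JavaScript app a base64 PNG for a `data:` URI in an `<img>` tag. For a fully client-side solution, the raw matrix rather than the figure would go to a JS charting library, or Pyodide can run Matplotlib itself in WebAssembly. A sketch of the server-side encoding:

```python
import base64
import io

import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

def heatmap_png_b64(grid):
    """Render a heatmap and return it as a base64-encoded PNG string."""
    fig, ax = plt.subplots()
    im = ax.imshow(grid, cmap="viridis")
    fig.colorbar(im)
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)
    return base64.b64encode(buf.getvalue()).decode()

b64 = heatmap_png_b64(np.arange(64).reshape(8, 8))
# In the JS app: <img src="data:image/png;base64,{b64}">
print(b64[:5])  # iVBOR (the PNG magic bytes, base64-encoded)
```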
77,960,499
6,652,751
ChromaDB get list of unique metadata field
<p>I'm working with a ChromaDB collection and need to efficiently extract a list of all unique values for a specific metadata field.</p> <pre><code>collection = client.get_collection(collection_name)
unique_keys = collection.get(where={&quot;$distinct&quot;: &quot;metadata_key&quot;})  # not working as expected
# in db `metadata_key` is like `sentiment` that can have values like good, bad, etc.
</code></pre> <p>Currently I am using this <strong>inefficient way</strong>:</p> <pre><code>all_metadatas = collection.get(include=[&quot;metadatas&quot;]).get('metadatas')
distinct_keys = set([x.get('metadata_key') for x in all_metadatas])
</code></pre>
<python><chromadb>
2024-02-08 09:21:29
1
1,327
Akhilesh_IN
77,960,213
12,714,241
Git Bash does not recognize Python virtual environment created by Poetry
<p>I am experiencing an issue where Git Bash is not recognizing the Python virtual environment set by Poetry, and it defaults to the system's global Python installation instead. However, PowerShell correctly identifies and uses the virtual environment.</p> <p>Here are the commands and outputs in both shells:</p> <pre><code>PS H:\Coding\tradido\code&gt; python -c &quot;import sys;print(sys.executable)&quot; C:\Users\hamid\AppData\Local\pypoetry\Cache\virtualenvs\tradido-cdZ63RI2-py3.11\Scripts\python.exe </code></pre> <pre><code>hamid@DESKTOP-GJ4J9QV MINGW64 /h/Coding/tradido/code (feature/levels) $ python -c &quot;import sys;print(sys.executable)&quot; C:\Program Files\Python311\python.exe </code></pre> <p>I want Git Bash to use the Python interpreter from the virtual environment as PowerShell does.</p> <p>The default interpreter is correct but for some reason bash can't find that and uses the global python.</p>
<python><visual-studio-code><interpreter>
2024-02-08 08:29:19
2
420
scaryhamid
77,960,212
4,614,867
UDP Packets send via Scapy are not received by a plain Python script
<p>I need help with sending UDP packets via Scapy. For some reason, I cannot receive the packets that I'm sending - I see these packets in Wireshark, but not in the UDP client.</p> <p>I'm running a simplest UDP client for testing:</p> <pre class="lang-py prettyprint-override"><code>import socket UDP_IP = &quot;127.0.0.1&quot; UDP_PORT = 25516 print(&quot;Waiting for packet...&quot;) with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s: s.bind((UDP_IP, UDP_PORT)) data, addr = s.recvfrom(1024) print(f&quot;Received message from {addr}: {data}&quot;) </code></pre> <p>And I'm sending the packet like that:</p> <pre class="lang-py prettyprint-override"><code>from scapy.layers.inet import * from scapy.sendrecv import send # The interface is the correct one, but I tried without it as well send(IP(dst=&quot;127.0.0.1&quot;) / UDP(sport=25515, dport=25516) / &quot;Test123&quot;, iface=&quot;enp5s0&quot;) </code></pre> <p>I can see the packet I sent in Wireshark, but the client does not print its content - it looks like client did not receive the packet. For comparison, sending the packet via plain Python works:</p> <pre class="lang-py prettyprint-override"><code>import socket UDP_IP = &quot;127.0.0.1&quot; UDP_PORT = 25516 with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s: s.sendto(&quot;Test123&quot;.encode(), (UDP_IP, UDP_PORT)) print(&quot;Message sent!&quot;) </code></pre> <p>I need to use Scapy for my task, because in the future I will be sending packets parsed from a <code>.pcap</code> file, so I cannot just use plain Python. Please, help me find a way to receive the UDP packet sent from Scapy!</p>
<python><sockets><network-programming><udp><scapy>
2024-02-08 08:29:16
1
486
StragaSevera
77,960,181
4,575,197
Is there any way to read data from Microsoft SQL directly into cuDF (GPU's RAM)?
<p>I searched for this through the internet, but couldn't find any code. What comes to mind is to first load the data into pandas (RAM) and then load it into cuDF (GPU RAM).</p> <pre><code>import cudf
import pandas as pd
from sqlalchemy import create_engine

db_url = &quot;postgresql://username:password@localhost:5432/database_name&quot;
engine = create_engine(db_url)

query = &quot;SELECT * FROM your_table_name&quot;
pandas_df = pd.read_sql(query, engine)

cudf_df = cudf.DataFrame.from_pandas(pandas_df)
print(cudf_df)
</code></pre> <p>However, with this approach in a WSL2 environment, it takes longer to load the data, and after the operation we still have the loaded data in RAM (the pandas DataFrame), which we need to drop.</p> <p>Is there a more efficient way to achieve this?</p>
<python><pandas><dataframe><bigdata><cudf>
2024-02-08 08:22:16
1
10,490
Mostafa Bouzari
77,960,180
1,538,049
Stopping gradient flow for a multiple headed Pytorch module
<p>I have a multiple headed model in Pytorch, similar to this one:</p> <pre><code>class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.backbone = Backbone() self.proxyModule = ProxyModule() def forward(self, x): backbone_output = self.backbone(x) proxy_target = transform_to_targets(backbone_output) proxy_output = self.proxyModule(backbone_output) return backbone_output, proxy_target, proxy_output net = Net() x,y = get_some_data() backbone_output, proxy_target, proxy_output = net(x) backbone_loss = Loss(backbone_output, y) proxy_loss = Loss(proxy_output, proxy_target) total_loss = backbone_loss + proxy_loss optimizer.zero_grad() loss.backward() optimizer.step() </code></pre> <p>Basically, I want to update the backbone model, and the proxy model via the same loss. However, I do not want that Backbone module gets updated via the operation <code>proxy_target = transform_to_targets(backbone_output)</code>. The purpose in there is only to generate output variables for the proxyModule. 
That is similar to a Q-learning scenario, actually.</p> <p>I have the following modification in mind but I am not sure if that would work as I expect:</p> <pre><code>class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.backbone = Backbone() self.proxyModule = ProxyModule() def forward(self, x): backbone_output = self.backbone(x) with torch.set_grad_enabled(False): proxy_target = transform_to_targets(backbone_output) proxy_output = self.proxyModule(backbone_output) return backbone_output, proxy_target, proxy_output net = Net() x,y = get_some_data() optimizer.zero_grad() with torch.set_grad_enabled(True): backbone_output, proxy_target, proxy_output = net(x) backbone_loss = Loss(backbone_output, y) proxy_loss = Loss(proxy_output, proxy_target) total_loss = backbone_loss + proxy_loss optimizer.zero_grad() loss.backward() optimizer.step() </code></pre> <p>The difference is now I have put the line <code>proxy_target = transform_to_targets(backbone_output)</code> into the context manager where I set the gradient calculation to false. Lately the Autograd mechanism in Pytorch became more complex, so I cannot be sure if this would provide the intended effect.</p>
<python><deep-learning><pytorch>
2024-02-08 08:22:09
1
3,679
Ufuk Can Bicici
77,960,129
15,190,069
How to use a production WSGI server for Firebase Cloud Functions in Python
<p>I implemented production-ready functions in Python that I want to deploy on Cloud Functions. When I do so using the following command:</p> <pre><code>firebase deploy --only functions </code></pre> <p>I get the following warning message.</p> <pre><code>... * Serving Flask app 'serving' * Debug mode: off WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. * Running on http://127.0.0.1:8081 ... </code></pre> <p>Do I need to act on it? Is there a way to use a production WSGI server? Why isn't it set up by default?</p>
<python><firebase><google-cloud-functions><wsgi>
2024-02-08 08:14:46
0
387
RominHood
77,960,057
16,933,406
Clustering of protein sequences (with/without MSA)
<p>I have NGS data (unique clones only) and I want to group the sequences by similarity (clustering is preferable) using Python. Please have a look at the sample sequences below. I also want to isolate the most distant clone, as encircled in the image.</p> <p><strong>Note:</strong> I have already asked a similar question before, but that was for DNA of the same length, so MSA wasn't required. In the sample below I want to explore whether we can perform clustering without MSA and find the more distant clones.</p> <p><strong>NGS Sample Data is as below</strong></p> <pre><code>&gt;3971 AVTLDESGGGLQTPGGALSLVCKASGFTFSDRGMGWVRQAPGKGLEFVACIENDGSWTAYGSAVKGRATISRDNGQSTVRLQLNNLRAEDTATYYCAKSAGGSLLLTVVILTVGSIDAWGHGT &gt;3186 AVTLGESGGGLQTPGGALSLVCKASGFTFSSHGMAWVRQAPGKGLEFVAGIGNTGSNPNYGAAVKGRATISRDNRQSTVRLQLNNLRAEDTGTYFCAKRAYAASWSGSDRIDAWGHGT &gt;3066 AVTLGESGGGLQTPGGGLSLVCKASGFTFSSFNMFWVRQAPGKGLEYVAGIDNTGSYTAYGAAVKGRATISRDNGQSTLRLQLNNLRAEDTATYYCAKSFDGRYYHSIDGVDAIDAWGHGT &gt;3719 AVTLGESGGGLQTPGGTVSLVCKGSGFDFSSYNMQWVRQAPGKGLEFIAQINGAGSGTNYGPAVKGRATISRDNGQSTVRLQLNNLRAEDTAIYYCAKSYDGRYYHSIDGVDAIDAWGHGT &gt;127 AVTLGESGGGLQTPGGALSLVCKGSGFTLSSFNMGWVRQAPGKGLEWVGVISSSGRYTEYGAAVKGRAAISRDDGQSTVRLQLNNLRAEDTAIYFCAKGIGTAYCGSCVGEIDTWGHGT &gt;144 AVTLDESGGGLQTPGGGLSLVCKASGFTFSSHGMGWVRQAPGKGLEFVADISGSGSSTNYGPAVKGRATISRDNGQSTVRLQLNDLRAEDTATYYCAKYVGSIGCGSTAGIDAWGHGT &gt;291 AVTLDESGGGLQTPGGALSLVCKASGFTFSDRGMHWVRQAPGKGLEWVAGIGNSGSGTTYGSAVKGRATISRDNGQSTLRLQLNNLRPEDTATYFCARATCIGSGYCGWGTYRIDAWGHGT &gt;381 AVTLDESGGGLQTPGGALSLVCKASGFTFSRFNMFWVRQAPGKGLEWVAAISSGSSTWYGSAVKGRATISRDNGQSTVRLQLNNLRAEDTGTYYCTKAAGNGYRGWTTYIAGWIDAWGHGT </code></pre> <p><strong>Desired output is as below</strong></p> <p><a href="https://i.sstatic.net/QD7b5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QD7b5.png" alt="enter image description here" /></a></p>
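Without an MSA, one alignment-free option is to compare sequences by their k-mer content; sequences of different lengths are fine because only the sets of k-mers are compared. A minimal sketch (the short sequences here are illustrative stand-ins, not the real NGS data) that scores each clone's average distance to the rest, so the most distant clone stands out:

```python
def kmers(seq, k=3):
    """Set of overlapping k-mers of a protein sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard_dist(a, b, k=3):
    """1 - Jaccard similarity of the two k-mer sets (0.0 = identical content)."""
    ka, kb = kmers(a, k), kmers(b, k)
    return 1.0 - len(ka & kb) / len(ka | kb)

# Illustrative toy sequences; the real input would be the FASTA records above.
seqs = {
    "3971": "AVTLDESGGGLQTPGGALSLVCKASGFTFSDRGMG",
    "3186": "AVTLGESGGGLQTPGGALSLVCKASGFTFSSHGMA",
    "odd":  "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAP",  # deliberately unrelated outlier
}

# Average pairwise distance of each clone to all others; the outlier scores highest.
avg_dist = {
    name: sum(jaccard_dist(s, seqs[other]) for other in seqs if other != name)
          / (len(seqs) - 1)
    for name, s in seqs.items()
}
most_distant = max(avg_dist, key=avg_dist.get)
```

The same distance matrix can then be fed into `scipy.cluster.hierarchy.linkage` for a dendrogram like the one pictured, without ever aligning the sequences.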
<python><python-3.x><biopython><hierarchical-clustering>
2024-02-08 08:00:41
1
617
shivam
77,959,864
10,354,066
Set x-axis values for dataframe plotting in Python when data is time series
<p>I have drawn my graph in Python using this code:</p> <pre><code>print(data_filtered['ranking_datetime']) plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%Y/%m/%d')) plt.gca().xaxis.set_major_locator(mdates.AutoDateLocator()) plt.plot(data_filtered['ranking_datetime'], data_filtered['ranking']) plt.gcf().autofmt_xdate() plt.show() </code></pre> <p>Which prints this:</p> <p><a href="https://i.sstatic.net/BT0vj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BT0vj.png" alt="datetime data" /></a></p> <p>And gives this graph:</p> <p><a href="https://i.sstatic.net/VCHcW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VCHcW.png" alt="graph" /></a></p> <p>Why are the graph dates not correct? I can clearly see there are valid dates in that column when I print them before creating the plot. It might be because my data is not very consistent: some days have several entries, some have none at all.</p>
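One common cause (a guess that cannot be verified from the screenshots alone) is that <code>ranking_datetime</code> holds strings rather than real datetimes; matplotlib then treats the values as categories and the date locator/formatter produce nonsense ticks. A minimal sketch of the conversion to check first, with hypothetical stand-in data:

```python
import pandas as pd

# Hypothetical data standing in for data_filtered
df = pd.DataFrame({
    "ranking_datetime": ["2023-01-05", "2023-01-05", "2023-02-10"],
    "ranking": [3, 5, 4],
})

# If the dtype is object/str, convert before plotting so matplotlib
# receives actual datetimes instead of category labels.
df["ranking_datetime"] = pd.to_datetime(df["ranking_datetime"])
```

Uneven spacing between entries is fine once the dtype is datetime; matplotlib places each point at its true date.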
<python><dataframe><plot><time-series>
2024-02-08 07:17:35
1
1,548
vytaute
77,959,719
20,088,885
How can I create another invoice template in Odoo Community (SaaS)?
<p>I'm trying to create a new <code>invoice template</code> in my Odoo 17 Community (SaaS) with a customize header on it.</p> <p>So what I did first is create a <code>report</code>, It now shows in my <code>Invoice</code>.</p> <p><a href="https://i.sstatic.net/9hWwH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9hWwH.png" alt="enter image description here" /></a></p> <p>This is my <code>report</code> configuration</p> <p><a href="https://i.sstatic.net/wURJE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wURJE.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/3MNTZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3MNTZ.png" alt="enter image description here" /></a></p> <p>This is my <code>QWEB</code> for <code>report_invoice_with_payments_copySample</code></p> <pre><code>&lt;t t-name=&quot;account.report_invoice_with_payments_copySample&quot;&gt; &lt;t t-call=&quot;account.report_invoice_copySample&quot;&gt; &lt;t t-set=&quot;print_with_payments&quot; t-value=&quot;True&quot;/&gt; &lt;/t&gt; &lt;/t&gt; </code></pre> <p><a href="https://i.sstatic.net/lqyJV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lqyJV.png" alt="enter image description here" /></a></p> <pre><code>&lt;t t-name=&quot;account.report_invoice_copySample&quot;&gt; &lt;t t-call=&quot;web.html_container&quot;&gt; &lt;t t-foreach=&quot;docs&quot; t-as=&quot;o&quot;&gt; &lt;t t-set=&quot;lang&quot; t-value=&quot;o.partner_id.lang&quot;/&gt; &lt;t t-if=&quot;o._get_name_invoice_report() == 'account.report_invoice_document_copySample'&quot; t-call=&quot;account.report_invoice_document_copySample&quot; t-lang=&quot;lang&quot;/&gt; &lt;/t&gt; &lt;/t&gt; &lt;/t&gt; </code></pre> <p>This is my <code>report_invoice_document_copySample</code></p> <p><a href="https://i.sstatic.net/YFQO0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YFQO0.png" alt="enter image description here" /></a></p> <p>I just 
change the first line.</p> <p><code>&lt;t t-name=&quot;account.report_invoice_document_copySample&quot;&gt;</code></p>
<python><odoo><qweb>
2024-02-08 06:45:50
1
785
Stykgwar
77,959,558
4,398,166
Pandas groupby raises ValueError: len(index) != len(labels) when trying to aggregate columns
<p>I have some data whose columns are float numbers and I want to aggregate them on the integer number they are rounded to. In the MWE below, the expected output should be</p> <pre><code> 912 0 2.5 1 1.5 </code></pre> <p>because all column elements are rounded to <code>912</code>.</p> <p>MWE:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd temp = pd.DataFrame({911.7: {0: 0, 1: 1}, 911.9: {0: 2.0, 1: 0.0}, 912.0: {0: 0.5, 1: 0.5}}) round_to = 1 price_digits=1 rounded = [round(round(x / round_to) * round_to, price_digits) for x in temp.columns] temp.groupby(by=rounded, axis=1).sum() </code></pre> <p>When actually run, the traceback will be:</p> <pre><code>Traceback (most recent call last): File &quot;D:\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py&quot;, line 3331, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File &quot;&lt;ipython-input-17-983fbc3f7113&gt;&quot;, line 1, in &lt;module&gt; temp.groupby(by=rounded, axis=1).sum() File &quot;D:\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py&quot;, line 1378, in f return self._cython_agg_general(alias, alt=npfunc, **kwargs) File &quot;D:\Anaconda3\lib\site-packages\pandas\core\groupby\generic.py&quot;, line 1004, in _cython_agg_general how, alt=alt, numeric_only=numeric_only, min_count=min_count File &quot;D:\Anaconda3\lib\site-packages\pandas\core\groupby\generic.py&quot;, line 1033, in _cython_agg_blocks block.values, how, axis=1, min_count=min_count File &quot;D:\Anaconda3\lib\site-packages\pandas\core\groupby\ops.py&quot;, line 587, in aggregate &quot;aggregate&quot;, values, how, axis, min_count=min_count File &quot;D:\Anaconda3\lib\site-packages\pandas\core\groupby\ops.py&quot;, line 530, in _cython_operation result, counts, values, codes, func, is_datetimelike, min_count File &quot;D:\Anaconda3\lib\site-packages\pandas\core\groupby\ops.py&quot;, line 608, in _aggregate agg_func(result, counts, values, comp_ids, min_count) File 
&quot;pandas\_libs\groupby.pyx&quot;, line 464, in pandas._libs.groupby._group_add ValueError: len(index) != len(labels) </code></pre> <p>which is perplexing because <code>len(rounded)==len(temp.columns)==3</code>. There doesn't seem to be a length mismatch.</p> <p>What would be the appropriate way to achieve my purpose? Thanks in advance!</p> <p>Pandas version: <code>'1.0.1'</code>. Python version: <code>Python 3.7.6 (default, Jan 8 2020, 16:21:45) [MSC v.1916 32 bit (Intel)]</code>.</p> <hr> <p>The MWE does work in most cases. For example when we change the third column element to <code>912.3</code> from <code>912.0</code>:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd round_to = 1 price_digits=1 temp = pd.DataFrame({911.7: {0: 0, 1: 1}, 911.9: {0: 2.0, 1: 0.0}, 912.3: {0: 0.5, 1: 0.5}}) rounded = [round(round(x / round_to) * round_to, price_digits) for x in temp.columns] temp.groupby(by=rounded, axis=1).sum() </code></pre> <p>The output will be</p> <pre><code>Out[14]: 912 0 2.5 1 1.5 </code></pre>
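One way to sidestep the failing <code>axis=1</code> code path (a workaround, not an explanation of the internal <code>len(index) != len(labels)</code> check) is to transpose, group on the rows, and transpose back. This also keeps working on recent pandas versions, where <code>groupby(axis=1)</code> has been deprecated:

```python
import pandas as pd

temp = pd.DataFrame({911.7: {0: 0, 1: 1},
                     911.9: {0: 2.0, 1: 0.0},
                     912.0: {0: 0.5, 1: 0.5}})
round_to = 1
price_digits = 1
rounded = [round(round(x / round_to) * round_to, price_digits)
           for x in temp.columns]

# Group along the rows of the transpose instead of the columns of the original.
result = temp.T.groupby(rounded).sum().T
```

This produces the expected single `912` column with values `2.5` and `1.5`, including the edge case where one column label is already an exact integer.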
<python><python-3.x><pandas><dataframe>
2024-02-08 06:05:00
2
1,578
Vim
77,959,461
12,427,876
Differentiate between MSYS2 & Powershell/Command Prompt
<p>I'm writing a Python project which is not going to work on PowerShell or Command Prompt, but it does work on MSYS2.</p> <p>Is there any way to differentiate between MSYS2 and PowerShell/Command Prompt?</p> <p>I tried <code>os.name</code> and <code>sys.version</code>, but both return the same result in all three.</p>
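One practical signal (an assumption worth verifying on your setup) is the <code>MSYSTEM</code> environment variable, which MSYS2 shells export (e.g. <code>MSYS</code>, <code>MINGW64</code>, <code>UCRT64</code>) and which a default PowerShell or Command Prompt session does not set:

```python
import os

def running_under_msys2() -> bool:
    # MSYS2 shells export MSYSTEM; plain PowerShell/cmd sessions do not.
    return "MSYSTEM" in os.environ

if running_under_msys2():
    print("MSYS2 shell:", os.environ["MSYSTEM"])
else:
    print("not an MSYS2 shell")
```

Caveat: environment variables are inherited, so a program launched *from* an MSYS2 shell will still see <code>MSYSTEM</code> even if it is not itself an MSYS2 binary.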
<python><msys2>
2024-02-08 05:35:23
1
411
TaihouKai
77,959,410
21,192,065
Pruning nn.Linear weights inplace causes unexpected error, requires slightly weird workarounds. Need explanation
<h2>This fails</h2> <pre class="lang-py prettyprint-override"><code>import torch def test1(): layer = nn.Linear(100, 10) x = 5 - torch.sum(layer(torch.ones(100))) x.backward() layer.weight.data = layer.weight.data[:, :90] layer.weight.grad.data = layer.weight.grad.data[:, :90] x = 5 - torch.sum(layer(torch.ones(90))) x.backward() test1() </code></pre> <p>with error</p> <pre><code>--------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) &lt;ipython-input-3-bb36a010bd86&gt; in &lt;cell line: 10&gt;() 8 x = 5 - torch.sum(layer(torch.ones(90))) 9 x.backward() ---&gt; 10 test1() 11 # and this works as well 12 2 frames /usr/local/lib/python3.10/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs) 249 # some Python versions print out the first line of a multi-line function 250 # calls in the traceback and some print out the last line --&gt; 251 Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass 252 tensors, 253 grad_tensors_, RuntimeError: Function TBackward0 returned an invalid gradient at index 0 - got [10, 90] but expected shape compatible with [10, 100] </code></pre> <h2>This works</h2> <pre class="lang-py prettyprint-override"><code>import torch def test2(): layer = torch.nn.Linear(100, 10) x = 5 - torch.sum(layer(torch.ones(100))) x.backward() del x #main change layer.weight.data = layer.weight.data[:, :90] layer.weight.grad.data = layer.weight.grad.data[:, :90] x = 5 - torch.sum(layer(torch.ones(90))) x.backward() test2() </code></pre> <h2>and this works as well</h2> <pre class="lang-py prettyprint-override"><code>import torch def test3(): layer = torch.nn.Linear(100, 10) x = 5 - torch.sum(layer(torch.ones(100))) x.backward() layer.weight.data = layer.weight.data[:, :90] layer.weight.grad.data = layer.weight.grad.data[:, :90] layer.weight = torch.nn.Parameter(layer.weight) #main 
change x = 5 - torch.sum(layer(torch.ones(90))) x.backward() test3() </code></pre> <p>I encountered this when trying to implement a paper on model pruning (Temporal Neuron Variance Pruning). I believe this has something to do with the autograd graph, but I am not sure what exactly is going on. I've already seen the link on pruning and got my code working using the 3rd snippet. I am now trying to figure out why snippet 1 fails while 2 and 3 work. Is there some explanation for why these almost identical code snippets work or fail?</p> <h2>Major points I'd like to figure out</h2> <ol> <li>what is <code>TBackward0</code></li> <li>where is it defined</li> <li>where is the runtime error raised</li> <li>why is compatibility with the old shape expected, especially when the grad has been modified correctly (I am assuming I have edited the tensors correctly, because cases 2 and 3 work)</li> <li>can I change something else (other than the 2 working cases) to make this work?</li> </ol>
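As an aside (not an answer to points 1-4): if the goal is pruning rather than physically shrinking tensors, <code>torch.nn.utils.prune</code> avoids the <code>.data</code> surgery entirely by masking weights while keeping all shapes intact, so autograd's saved references never go stale. A minimal sketch:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(100, 10)
# Mask 10% of the weight entries; shapes never change, so repeated
# backward() calls see consistent saved tensors.
prune.random_unstructured(layer, name="weight", amount=0.1)

out = layer(torch.ones(100)).sum()
out.backward()
out2 = layer(torch.ones(100)).sum()
out2.backward()  # no shape-mismatch error
```

Pruning registers <code>weight_orig</code> and a <code>weight_mask</code> buffer and recomputes the effective <code>weight</code> on each forward, which is why the graph stays consistent.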
<python><machine-learning><pytorch><artificial-intelligence><autograd>
2024-02-08 05:17:32
1
978
arrmansa
77,959,315
287,545
mime multipart/mixed w/ pure binary + python
<p>It's unclear to me whether raw (non base64) binary is standard supported in MIME <code>multipart/mixed</code>, in particular when decoding with python:</p> <pre class="lang-py prettyprint-override"><code>msg = email.message_from_binary_file(fp, policy=email.policy.HTTP) for part in msg.walk(): if part.get_content_maintype() == 'multipart': continue filename = part.get_filename() payload = part.get_content() </code></pre> <p><code>payload</code> gets text processed (or something like it. <code>0x13</code>s turn into <code>0x10</code>s). Obviously this corrupts the data. Is there a way to put this into a pure binary mode? Should I be base64 encoding it? The MIME itself looks like this:</p> <pre><code>Content-Type: multipart/mixed;boundary=123456789000000000000987654321 Transfer-Encoding: chunked --123456789000000000000987654321 Content-Type: image/jpeg Content-transfer-encoding: binary Content-Disposition: attachment; filename=&quot;2024-02-08T000418.jpg&quot; Content-Length: 23302 &lt;binary data&gt; --123456789000000000000987654321 </code></pre> <p>UPDATE 08FEB24</p> <ol> <li>RFC 2045 tells us &quot;Content-Transfer-Encoding [...] &quot;binary&quot; all mean that the identity (i.e. 
NO) encoding transformation has been performed&quot;</li> <li>RFC 2045 tells us &quot;there are no circumstances in which the &quot;binary&quot; Content-Transfer-Encoding is actually valid in Internet mail.&quot;</li> <li>RFC 2045 tells us:</li> </ol> <pre><code>mechanism := &quot;7bit&quot; / &quot;8bit&quot; / &quot;binary&quot; / &quot;quoted-printable&quot; / &quot;base64&quot; / </code></pre> <ol start="4"> <li>The Python call <code>email.message_from_bytes</code> leaves CRs alone</li> <li>Yes, fp is opened in binary mode</li> </ol> <p>From this I conclude:</p> <ul> <li><code>Content-transfer-encoding: base64</code> is standards-supported</li> <li>Python very much took item #2 to heart, since this is an <em>email</em> processing mechanism</li> <li>Presuming that that 28-year-old RFC still holds true, one might not accuse the Python <code>email</code> module of having a bug. However, I say it does: <code>email.message_from_binary_file</code> <em>specifically fails on a binary file.</em></li> </ul> <p>Is my logic sound?</p>
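A stdlib round-trip (a sketch, independent of whatever device produced the original MIME) showing that switching the part to base64 lets the <code>email</code> package recover the payload byte-for-byte, which a <code>binary</code> part cannot guarantee:

```python
import email
from email.message import EmailMessage
from email.policy import default

raw = bytes(range(256))  # includes the CR/LF bytes that get mangled

msg = EmailMessage()
# add_attachment base64-encodes binary payloads by default
msg.add_attachment(raw, maintype="image", subtype="jpeg",
                   filename="frame.jpg")

parsed = email.message_from_bytes(bytes(msg), policy=default)
part = next(p for p in parsed.walk()
            if p.get_content_maintype() == "image")
assert part.get_content() == raw  # payload survives intact
```

If the producing side cannot be changed to base64, the safer route is to split the multipart body on the boundary yourself and slice out the raw bytes, bypassing the email parser's text handling.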
<python><multipart>
2024-02-08 04:48:53
0
2,501
Malachi
77,959,301
395,857
How can I move a button before a box that the button uses or changes in Gradio?
<p>Example: I have the following Gradio UI:</p> <pre><code>import gradio as gr def dummy(a): return 'hello', {'hell': 'o'} with gr.Blocks() as demo: txt = gr.Textbox(value=&quot;test&quot;, label=&quot;Query&quot;, lines=1) answer = gr.Textbox(value=&quot;&quot;, label=&quot;Answer&quot;) answerjson = gr.JSON() btn = gr.Button(value=&quot;Submit&quot;) btn.click(dummy, inputs=[txt], outputs=[answer, answerjson]) gr.ClearButton([answer, answerjson]) demo.launch() </code></pre> <p><a href="https://i.sstatic.net/ad3UL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ad3UL.png" alt="enter image description here" /></a></p> <p>How can I change the code so that the &quot;Submit&quot; and &quot;Clear&quot; buttons are shown between the answer and JSON boxes, i.e.:</p> <p><a href="https://i.sstatic.net/Ql7sg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ql7sg.png" alt="enter image description here" /></a></p> <p>I can't just move the line <code>gr.ClearButton([answer, answerjson])</code> before <code>answerjson = gr.JSON()</code>, since <code>answerjson</code> needs to be defined in <code>gr.ClearButton([answer, answerjson])</code>.</p>
<python><gradio>
2024-02-08 04:45:11
1
84,585
Franck Dernoncourt
77,959,207
10,639,382
PyTorch model results differ on each run when using strict=False to load weights
<p>I am attempting to use an opensource pretrained model (Resnet50) from the following repository <a href="https://github.com/ViTAE-Transformer/RSP/tree/main/Scene%20Recognition" rel="nofollow noreferrer">https://github.com/ViTAE-Transformer/RSP/tree/main/Scene%20Recognition</a> for scene classification. However when I load the weights I am forced to set <code>strict = False</code> to avoid with the following error.</p> <pre><code>RuntimeError: Error(s) in loading state_dict for ResNet: Missing key(s) in state_dict: &quot;conv1.weight&quot;, &quot;bn1.weight&quot;, &quot;bn1.bias&quot;, &quot;bn1.running_mean&quot;, &quot;bn1.running_var&quot;, &quot;layer1.0.conv1.weight&quot;, &quot;layer1.0.bn1.weight&quot;, &quot;layer1.0.bn1.bias&quot;, &quot;layer1.0.bn1.running_mean&quot;, &quot;layer1.0.bn1.running_var&quot;, &quot;layer1.0.conv2.weight&quot;, &quot;layer1.0.bn2.weight&quot;, &quot;layer1.0.bn2.bias&quot;, &quot;layer1.0.bn2.running_mean&quot;, &quot;layer1.0.bn2.running_var&quot;, &quot;layer1.0.conv3.weight&quot;, &quot;layer1.0.bn3.weight&quot;, &quot;layer1.0.bn3.bias&quot;, &quot;layer1.0.bn3.running_mean&quot;, &quot;layer1.0.bn3.running_var&quot;, &quot;layer1.0.downsample.0.weight&quot;, &quot;layer1.0.downsample.1.weight&quot;, &quot;layer1.0.downsample.1.bias&quot;, &quot;layer1.0.downsample.1.running_mean&quot;, &quot;layer1.0.downsample.1.running_var&quot;, &quot;layer1.1.conv1.weight&quot;, &quot;layer1.1.bn1.weight&quot;, &quot;layer1.1.bn1.bias&quot;, &quot;layer1.1.bn1.running_mean&quot;, &quot;layer1.1.bn1.running_var&quot;, &quot;layer1.1.conv2.weight&quot;, &quot;layer1.1.bn2.weight&quot;, &quot;layer1.1.bn2.bias&quot;, &quot;layer1.1.bn2.running_mean&quot;, &quot;layer1.1.bn2.running_var&quot;, &quot;layer1.1.conv3.weight&quot;, &quot;layer1.1.bn3.weight&quot;, &quot;layer1.1.bn3.bias&quot;, &quot;layer1.1.bn3.running_mean&quot;, &quot;layer1.1.bn3.running_var&quot;, &quot;layer1.2.conv1.weight&quot;, &quot;layer1.2.bn1.weight&quot;, 
&quot;layer1.2.bn1.bias&quot;, &quot;layer1.2.bn1.running_mean&quot;, &quot;layer1.2.bn1.running_var&quot;, &quot;layer1.2.conv2.weight&quot;, &quot;layer1.2.bn2.weight&quot;, &quot;layer1.2.bn2.bias&quot;, &quot;layer1.2.bn2.running_mean&quot;, &quot;layer1.2.bn2.running_var&quot;, &quot;layer1.2.conv3.weight&quot;, &quot;layer1.2.bn3.weight&quot;, &quot;layer1.2.bn3.bias&quot;, &quot;layer1.2.bn3.running_mean&quot;, &quot;layer1.2.bn3.running_var&quot;, &quot;layer2.0.conv1.weight&quot;, &quot;layer2.0.bn1.weight&quot;, &quot;layer2.0.bn1.bias&quot;, &quot;layer2.0.bn1.running_mean&quot;, &quot;layer2.0.bn1.running_var&quot;, &quot;layer2.0.conv2.weight&quot;, &quot;layer2.0.bn2.weight&quot;, &quot;layer2.0.bn2.bias&quot;, &quot;layer2.0.bn2.running_mean&quot;, &quot;layer2.0.bn2.running_var&quot;, &quot;layer2.0.conv3.weight&quot;, &quot;layer2.0.bn3.weight&quot;, &quot;layer2.0.bn3.bias&quot;, &quot;layer2.0.bn3.running_mean&quot;, &quot;layer2.0.bn3.running_var&quot;, &quot;layer2.0.downsample.0.weight&quot;, &quot;layer2.0.downsample.1.weight&quot;, &quot;layer2.0.downsample.1.bias&quot;, &quot;layer2.0.downsample.1.running_mean&quot;, &quot;layer2.0.downsample.1.running_var&quot;, &quot;layer2.1.conv1.weight&quot;, &quot;layer2.1.bn1.weight&quot;, &quot;layer2.1.bn1.bias&quot;, &quot;layer2.1.bn1.running_mean&quot;, &quot;layer2.1.bn1.running_var&quot;, &quot;layer2.1.conv2.weight&quot;, &quot;layer2.1.bn2.weight&quot;, &quot;layer2.1.bn2.bias&quot;, &quot;layer2.1.bn2.running_mean&quot;, &quot;layer2.1.bn2.running_var&quot;, &quot;layer2.1.conv3.weight&quot;, &quot;layer2.1.bn3.weight&quot;, &quot;layer2.1.bn3.bias&quot;, &quot;layer2.1.bn3.running_mean&quot;, &quot;layer2.1.bn3.running_var&quot;, &quot;layer2.2.conv1.weight&quot;, &quot;layer2.2.bn1.weight&quot;, &quot;layer2.2.bn1.bias&quot;, &quot;layer2.2.bn1.running_mean&quot;, &quot;layer2.2.bn1.running_var&quot;, &quot;layer2.2.conv2.weight&quot;, &quot;layer2.2.bn2.weight&quot;, 
&quot;layer2.2.bn2.bias&quot;, &quot;layer2.2.bn2.running_mean&quot;, &quot;layer2.2.bn2.running_var&quot;, &quot;layer2.2.conv3.weight&quot;, &quot;layer2.2.bn3.weight&quot;, &quot;layer2.2.bn3.bias&quot;, &quot;layer2.2.bn3.running_mean&quot;, &quot;layer2.2.bn3.running_var&quot;, &quot;layer2.3.conv1.weight&quot;, &quot;layer2.3.bn1.weight&quot;, &quot;layer2.3.bn1.bias&quot;, &quot;layer2.3.bn1.running_mean&quot;, &quot;layer2.3.bn1.running_var&quot;, &quot;layer2.3.conv2.weight&quot;, &quot;layer2.3.bn2.weight&quot;, &quot;layer2.3.bn2.bias&quot;, &quot;layer2.3.bn2.running_mean&quot;, &quot;layer2.3.bn2.running_var&quot;, &quot;layer2.3.conv3.weight&quot;, &quot;layer2.3.bn3.weight&quot;, &quot;layer2.3.bn3.bias&quot;, &quot;layer2.3.bn3.running_mean&quot;, &quot;layer2.3.bn3.running_var&quot;, &quot;layer3.0.conv1.weight&quot;, &quot;layer3.0.bn1.weight&quot;, &quot;layer3.0.bn1.bias&quot;, &quot;layer3.0.bn1.running_mean&quot;, &quot;layer3.0.bn1.running_var&quot;, &quot;layer3.0.conv2.weight&quot;, &quot;layer3.0.bn2.weight&quot;, &quot;layer3.0.bn2.bias&quot;, &quot;layer3.0.bn2.running_mean&quot;, &quot;layer3.0.bn2.running_var&quot;, &quot;layer3.0.conv3.weight&quot;, &quot;layer3.0.bn3.weight&quot;, &quot;layer3.0.bn3.bias&quot;, &quot;layer3.0.bn3.running_mean&quot;, &quot;layer3.0.bn3.running_var&quot;, &quot;layer3.0.downsample.0.weight&quot;, &quot;layer3.0.downsample.1.weight&quot;, &quot;layer3.0.downsample.1.bias&quot;, &quot;layer3.0.downsample.1.running_mean&quot;, &quot;layer3.0.downsample.1.running_var&quot;, &quot;layer3.1.conv1.weight&quot;, &quot;layer3.1.bn1.weight&quot;, &quot;layer3.1.bn1.bias&quot;, &quot;layer3.1.bn1.running_mean&quot;, &quot;layer3.1.bn1.running_var&quot;, &quot;layer3.1.conv2.weight&quot;, &quot;layer3.1.bn2.weight&quot;, &quot;layer3.1.bn2.bias&quot;, &quot;layer3.1.bn2.running_mean&quot;, &quot;layer3.1.bn2.running_var&quot;, &quot;layer3.1.conv3.weight&quot;, &quot;layer3.1.bn3.weight&quot;, 
&quot;layer3.1.bn3.bias&quot;, &quot;layer3.1.bn3.running_mean&quot;, &quot;layer3.1.bn3.running_var&quot;, &quot;layer3.2.conv1.weight&quot;, &quot;layer3.2.bn1.weight&quot;, &quot;layer3.2.bn1.bias&quot;, &quot;layer3.2.bn1.running_mean&quot;, &quot;layer3.2.bn1.running_var&quot;, &quot;layer3.2.conv2.weight&quot;, &quot;layer3.2.bn2.weight&quot;, &quot;layer3.2.bn2.bias&quot;, &quot;layer3.2.bn2.running_mean&quot;, &quot;layer3.2.bn2.running_var&quot;, &quot;layer3.2.conv3.weight&quot;, &quot;layer3.2.bn3.weight&quot;, &quot;layer3.2.bn3.bias&quot;, &quot;layer3.2.bn3.running_mean&quot;, &quot;layer3.2.bn3.running_var&quot;, &quot;layer3.3.conv1.weight&quot;, &quot;layer3.3.bn1.weight&quot;, &quot;layer3.3.bn1.bias&quot;, &quot;layer3.3.bn1.running_mean&quot;, &quot;layer3.3.bn1.running_var&quot;, &quot;layer3.3.conv2.weight&quot;, &quot;layer3.3.bn2.weight&quot;, &quot;layer3.3.bn2.bias&quot;, &quot;layer3.3.bn2.running_mean&quot;, &quot;layer3.3.bn2.running_var&quot;, &quot;layer3.3.conv3.weight&quot;, &quot;layer3.3.bn3.weight&quot;, &quot;layer3.3.bn3.bias&quot;, &quot;layer3.3.bn3.running_mean&quot;, &quot;layer3.3.bn3.running_var&quot;, &quot;layer3.4.conv1.weight&quot;, &quot;layer3.4.bn1.weight&quot;, &quot;layer3.4.bn1.bias&quot;, &quot;layer3.4.bn1.running_mean&quot;, &quot;layer3.4.bn1.running_var&quot;, &quot;layer3.4.conv2.weight&quot;, &quot;layer3.4.bn2.weight&quot;, &quot;layer3.4.bn2.bias&quot;, &quot;layer3.4.bn2.running_mean&quot;, &quot;layer3.4.bn2.running_var&quot;, &quot;layer3.4.conv3.weight&quot;, &quot;layer3.4.bn3.weight&quot;, &quot;layer3.4.bn3.bias&quot;, &quot;layer3.4.bn3.running_mean&quot;, &quot;layer3.4.bn3.running_var&quot;, &quot;layer3.5.conv1.weight&quot;, &quot;layer3.5.bn1.weight&quot;, &quot;layer3.5.bn1.bias&quot;, &quot;layer3.5.bn1.running_mean&quot;, &quot;layer3.5.bn1.running_var&quot;, &quot;layer3.5.conv2.weight&quot;, &quot;layer3.5.bn2.weight&quot;, &quot;layer3.5.bn2.bias&quot;, 
&quot;layer3.5.bn2.running_mean&quot;, &quot;layer3.5.bn2.running_var&quot;, &quot;layer3.5.conv3.weight&quot;, &quot;layer3.5.bn3.weight&quot;, &quot;layer3.5.bn3.bias&quot;, &quot;layer3.5.bn3.running_mean&quot;, &quot;layer3.5.bn3.running_var&quot;, &quot;layer4.0.conv1.weight&quot;, &quot;layer4.0.bn1.weight&quot;, &quot;layer4.0.bn1.bias&quot;, &quot;layer4.0.bn1.running_mean&quot;, &quot;layer4.0.bn1.running_var&quot;, &quot;layer4.0.conv2.weight&quot;, &quot;layer4.0.bn2.weight&quot;, &quot;layer4.0.bn2.bias&quot;, &quot;layer4.0.bn2.running_mean&quot;, &quot;layer4.0.bn2.running_var&quot;, &quot;layer4.0.conv3.weight&quot;, &quot;layer4.0.bn3.weight&quot;, &quot;layer4.0.bn3.bias&quot;, &quot;layer4.0.bn3.running_mean&quot;, &quot;layer4.0.bn3.running_var&quot;, &quot;layer4.0.downsample.0.weight&quot;, &quot;layer4.0.downsample.1.weight&quot;, &quot;layer4.0.downsample.1.bias&quot;, &quot;layer4.0.downsample.1.running_mean&quot;, &quot;layer4.0.downsample.1.running_var&quot;, &quot;layer4.1.conv1.weight&quot;, &quot;layer4.1.bn1.weight&quot;, &quot;layer4.1.bn1.bias&quot;, &quot;layer4.1.bn1.running_mean&quot;, &quot;layer4.1.bn1.running_var&quot;, &quot;layer4.1.conv2.weight&quot;, &quot;layer4.1.bn2.weight&quot;, &quot;layer4.1.bn2.bias&quot;, &quot;layer4.1.bn2.running_mean&quot;, &quot;layer4.1.bn2.running_var&quot;, &quot;layer4.1.conv3.weight&quot;, &quot;layer4.1.bn3.weight&quot;, &quot;layer4.1.bn3.bias&quot;, &quot;layer4.1.bn3.running_mean&quot;, &quot;layer4.1.bn3.running_var&quot;, &quot;layer4.2.conv1.weight&quot;, &quot;layer4.2.bn1.weight&quot;, &quot;layer4.2.bn1.bias&quot;, &quot;layer4.2.bn1.running_mean&quot;, &quot;layer4.2.bn1.running_var&quot;, &quot;layer4.2.conv2.weight&quot;, &quot;layer4.2.bn2.weight&quot;, &quot;layer4.2.bn2.bias&quot;, &quot;layer4.2.bn2.running_mean&quot;, &quot;layer4.2.bn2.running_var&quot;, &quot;layer4.2.conv3.weight&quot;, &quot;layer4.2.bn3.weight&quot;, &quot;layer4.2.bn3.bias&quot;, 
&quot;layer4.2.bn3.running_mean&quot;, &quot;layer4.2.bn3.running_var&quot;, &quot;fc.weight&quot;, &quot;fc.bias&quot;. Unexpected key(s) in state_dict: &quot;model&quot;, &quot;optimizer&quot;, &quot;lr_scheduler&quot;, &quot;max_accuracy&quot;, &quot;epoch&quot;, &quot;config&quot;. </code></pre> <p>When using <code>strict = False</code>, the model runs; however, the output is different on each run for the same image.</p> <pre><code>res50_state = torch.load(&quot;rsp-resnet-50-ckpt.pth&quot;) res50.load_state_dict(res50_state, strict = False) </code></pre> <p>I found the following Stack Overflow post (<a href="https://stackoverflow.com/questions/71308399/loaded-pytorch-model-has-a-different-result-compared-to-saved-model">Loaded PyTorch model has a different result compared to saved model</a>) which addresses the issue by amending the code to <code>torch.load(&quot;rsp-resnet-50-ckpt.pth&quot;)[&quot;state_dict&quot;]</code>, but this doesn't work for me, as I get <code>KeyError: 'state_dict'</code>.</p> <p>When examining the keys, I only have access to the following: <code>dict_keys(['model', 'optimizer', 'lr_scheduler', 'max_accuracy', 'epoch', 'config'])</code></p> <p>Any idea how I can fix this issue?</p>
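Since the checkpoint's keys are <code>['model', 'optimizer', ...]</code>, the weights most likely live under <code>"model"</code> rather than <code>"state_dict"</code>. A minimal sketch with a toy module showing the pattern (the real call would be <code>res50.load_state_dict(torch.load("rsp-resnet-50-ckpt.pth")["model"])</code>, assuming the repo saved its weights under that key):

```python
import io
import torch
import torch.nn as nn

# Mimic a checkpoint that bundles training state alongside the weights.
src = nn.Linear(4, 2)
ckpt = {"model": src.state_dict(), "epoch": 3, "max_accuracy": 0.91}
buf = io.BytesIO()
torch.save(ckpt, buf)
buf.seek(0)

loaded = torch.load(buf)
dst = nn.Linear(4, 2)
# Index into the "model" entry so strict=True loading succeeds;
# strict=True then catches any genuine key mismatch instead of hiding it.
missing, unexpected = dst.load_state_dict(loaded["model"], strict=True)
```

With <code>strict=False</code> and the wrong dict, *nothing* loads, so every run keeps its random initialization, which explains the differing outputs.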
<python><pytorch>
2024-02-08 04:08:46
2
3,878
imantha
77,958,980
11,280,068
docker/fastapi/peewee - can connect to mysql on host, but can't in docker container
<p>Losing my mind a little with this networking stuff</p> <p>Context:</p> <ul> <li>I'm working on a remote ubuntu server</li> <li>I have a fastapi app that uses peewee to interact with the database</li> <li>I am trying to learn docker, by containerizing this app</li> <li>The MySQL service is not containerized, but running on the host server.</li> <li>I am not using docker compose</li> </ul> <p>I have a problem where my fastapi runs PERFECTLY fine when run through the terminal (using uvicorn), but when I dockerize the app, it returns an error saying it cannot connect to the MySQL server</p> <pre><code>Traceback (most recent call last): File &quot;/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py&quot;, line 404, in run_asgi result = await app( # type: ignore[func-returns-value] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py&quot;, line 84, in __call__ return await self.app(scope, receive, send) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/fastapi/applications.py&quot;, line 1054, in __call__ await super().__call__(scope, receive, send) File &quot;/usr/local/lib/python3.11/site-packages/starlette/applications.py&quot;, line 123, in __call__ await self.middleware_stack(scope, receive, send) File &quot;/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py&quot;, line 186, in __call__ raise exc File &quot;/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py&quot;, line 164, in __call__ await self.app(scope, receive, _send) File &quot;/usr/local/lib/python3.11/site-packages/starlette/middleware/base.py&quot;, line 189, in __call__ with collapse_excgroups(): File &quot;/usr/local/lib/python3.11/contextlib.py&quot;, line 158, in __exit__ self.gen.throw(typ, value, traceback) File &quot;/usr/local/lib/python3.11/site-packages/starlette/_utils.py&quot;, line 93, in collapse_excgroups 
raise exc File &quot;/usr/local/lib/python3.11/site-packages/starlette/middleware/base.py&quot;, line 191, in __call__ response = await self.dispatch_func(request, call_next) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/app/main.py&quot;, line 28, in add_process_time_header db.connect() File &quot;/usr/local/lib/python3.11/site-packages/peewee.py&quot;, line 3231, in connect with __exception_wrapper__: File &quot;/usr/local/lib/python3.11/site-packages/peewee.py&quot;, line 3059, in __exit__ reraise(new_type, new_type(exc_value, *exc_args), traceback) File &quot;/usr/local/lib/python3.11/site-packages/peewee.py&quot;, line 192, in reraise raise value.with_traceback(tb) File &quot;/usr/local/lib/python3.11/site-packages/peewee.py&quot;, line 3232, in connect self._state.set_connection(self._connect()) ^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/peewee.py&quot;, line 4201, in _connect conn = mysql.connect(db=self.database, autocommit=True, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/pymysql/connections.py&quot;, line 358, in __init__ self.connect() File &quot;/usr/local/lib/python3.11/site-packages/pymysql/connections.py&quot;, line 711, in connect raise exc peewee.OperationalError: (2003, &quot;Can't connect to MySQL server on '5da987ec678f' ([Errno 111] Connection refused)&quot;) </code></pre> <p>Here is my Dockerfile</p> <pre><code>FROM python:3.11-slim # I've tried commenting this out ENV PYTHONUNBUFFERED 1 # I've tried commenting this out too ENV PYTHONDONTWRITEBYTECODE 1 WORKDIR /app COPY requirements.txt . RUN pip install --upgrade pip &amp;&amp; pip install -r requirements.txt COPY . . 
CMD [&quot;uvicorn&quot;, &quot;main:app&quot;, &quot;--host&quot;, &quot;0.0.0.0&quot;, &quot;--port&quot;, &quot;8000&quot;] </code></pre> <p>here is the db.py and mre.py file<br /> <a href="https://pastebin.com/7r71AYPx" rel="nofollow noreferrer">https://pastebin.com/7r71AYPx</a><br /> (minimum reproducible example)</p> <p>What I've Tried:</p> <ul> <li>Editing the my.cnf file to set <code>bind-address=&quot;0.0.0.0&quot;</code></li> <li>Making sure my database string is correct in ALL REGARDS</li> <li>Setting the IP of the connect string to: <ul> <li>172.17.0.1</li> <li>The IP of the server itself</li> <li>localhost</li> </ul> </li> <li>Making sure the user is able to access from all hosts in mysql <code>%</code></li> <li>MySQL is indeed listening on 0.0.0.0 and port 3306</li> <li>I can 100% confirm that the API works when I run <code>uvicorn main:app --reload --port 8000</code></li> <li>I am fairly confident I don't have any firewalls (I have never set anything up)</li> <li>I <code>docker exec</code> into the container, and I'm able to log into mysql (???)</li> </ul> <p>I cannot figure it out, SOS</p>
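One frequently missed detail (a sketch of a likely fix, not verified against this exact setup): inside the container, <code>localhost</code> is the container itself, so the connection must target the host. On plain Linux Docker the <code>host.docker.internal</code> name has to be mapped explicitly:

```shell
# Map the special hostname to the host's gateway (Docker Desktop does this
# automatically; plain Linux Docker does not):
docker run --add-host=host.docker.internal:host-gateway -p 8000:8000 my-fastapi-app

# Then point the peewee connection at the host instead of localhost, e.g.
#   MySQLDatabase("mydb", host="host.docker.internal", port=3306, ...)
```

The image name <code>my-fastapi-app</code> and the connection snippet are placeholders; the key part is the <code>--add-host</code> mapping plus a connect string that names the host rather than <code>localhost</code>.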
<python><mysql><linux><docker><fastapi>
2024-02-08 02:39:32
1
1,194
NFeruch - FreePalestine
77,958,694
3,626,104
pylint - how do you enable all optional checkers / extensions?
<p>From this page <a href="https://pylint.pycqa.org/en/latest/user_guide/checkers/extensions.html" rel="nofollow noreferrer">https://pylint.pycqa.org/en/latest/user_guide/checkers/extensions.html</a> pylint has a number of optional checkers. I pretty much just want all of them, how do I say to pylint &quot;give me everything&quot; without manually writing every extension out, one by one? Ideally the solution can be passed via command-line like <code>--load-plugins=*</code>, without the need for a separate <code>.pylintrc</code> file. I've tried a number of syntaxes, but pylint seems only able to recognize extensions if the argument exactly matches a module name.</p> <p>Each of the below does not work</p> <pre><code>pylint --load-plugins=* foo.py pylint --load-plugins=pylint.extensions.* foo.py pylint --load-plugins=pylint.extensions foo.py </code></pre> <p>As mentioned, a hard-coded list that needs to be updated as new checkers are added over time is not a great way to keep up with the latest checks. I'd like a way to express &quot;give me everything&quot; that doesn't require manually changing configuration over time. Any help would be appreciated.</p>
<python><pylint>
2024-02-08 00:43:29
1
1,026
ColinKennedy
77,958,666
14,492,001
How to achieve Polars' previous `pivot()` functionality pre 0.20.7?
<p>Previous to Polars version <code>0.20.7</code>, the <code>pivot()</code> method, if given multiple values for the <code>columns</code> argument, would apply the aggregation logic against each column in <code>columns</code> <strong>individually</strong> based on the <code>index</code> column, rather than against a collective set of columns.</p> <p>Before:</p> <pre><code>df = pl.DataFrame( { &quot;foo&quot;: [&quot;one&quot;, &quot;one&quot;, &quot;two&quot;, &quot;two&quot;, &quot;one&quot;, &quot;two&quot;], &quot;bar&quot;: [&quot;y&quot;, &quot;y&quot;, &quot;y&quot;, &quot;x&quot;, &quot;x&quot;, &quot;x&quot;], &quot;biz&quot;: ['m', 'f', 'm', 'f', 'm', 'f'], &quot;baz&quot;: [1, 2, 3, 4, 5, 6], } ) df.pivot(index='foo', values='baz', columns=('bar', 'biz'), aggregate_function='sum') </code></pre> <p>returns:</p> <pre><code>shape: (2, 5) ┌─────┬─────┬─────┬─────┬─────┐ │ foo ┆ y ┆ x ┆ m ┆ f │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╪═════╪═════╡ │ one ┆ 3 ┆ 5 ┆ 6 ┆ 2 │ │ two ┆ 3 ┆ 10 ┆ 3 ┆ 10 │ └─────┴─────┴─────┴─────┴─────┘ </code></pre> <p>After (in <a href="https://github.com/pola-rs/polars/pull/14048" rel="nofollow noreferrer"><code>0.20.7</code></a>):</p> <pre><code>shape: (2, 5) ┌─────┬───────────┬───────────┬───────────┬───────────┐ │ foo ┆ {&quot;y&quot;,&quot;m&quot;} ┆ {&quot;y&quot;,&quot;f&quot;} ┆ {&quot;x&quot;,&quot;f&quot;} ┆ {&quot;x&quot;,&quot;m&quot;} │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 ┆ i64 ┆ i64 │ ╞═════╪═══════════╪═══════════╪═══════════╪═══════════╡ │ one ┆ 1 ┆ 2 ┆ null ┆ 5 │ │ two ┆ 3 ┆ null ┆ 10 ┆ null │ └─────┴───────────┴───────────┴───────────┴───────────┘ </code></pre> <p>I like the previous functionality much better; it's very awkward to deal with the new pivoted table, especially given its column names. Polars devs put this change under &quot;Bug fixes&quot; but it actually broke my code.</p>
<python><python-3.x><python-polars>
2024-02-08 00:32:18
3
1,444
Omar AlSuwaidi
77,958,621
1,340,782
How can I use pipx version of a package by default?
<p>I'm on ubuntu 20.04 and trying to get my PC to use the <code>pipx</code> version of <code>ansible</code> by default, because the version installed by my package manager is too old.</p> <p>I tried to use the instructions <a href="https://lewoudar.medium.com/install-python-applications-globally-on-your-machine-63b8a9f00a0b" rel="nofollow noreferrer">here</a> (amongst a lot of others) to get the new version of <code>ansible</code> to run when I type simply, &quot;ansible&quot;, but it won't use that version.</p> <pre><code>user@pcname:~$ /home/user/.local/pipx/venvs/ansible/bin/ansible --version ansible [core 2.13.13] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/user/.ansible/plugins/modules', '/usr /share/ansible/plugins/modules'] ansible python module location = /home/user/.local/pipx/venvs/ansible/lib/pyt hon3.8/site-packages/ansible ansible collection location = /home/user/.ansible/collections:/usr/share/ansi ble/collections executable location = /home/user/.local/pipx/venvs/ansible/bin/ansible python version = 3.8.10 (default, Nov 22 2023, 10:22:35) [GCC 9.4.0] jinja version = 3.1.3 libyaml = True user@pcname:~$ ansible --version bash: /home/user/.local/bin/ansible: No such file or directory 127 user@pcname:~$ ll /home/user/.local/pipx/venvs/ansible/bin/ total 276 drwxrwxr-x 2 user user 4096 Feb 8 10:11 ./ drwxrwxr-x 6 user user 4096 Feb 2 18:22 ../ -rw-r--r-- 1 user user 2202 Feb 8 10:11 activate -rw-r--r-- 1 user user 1254 Feb 8 10:11 activate.csh -rw-r--r-- 1 user user 2406 Feb 8 10:11 activate.fish -rw-r--r-- 1 user user 8834 Feb 8 10:11 Activate.ps1 -rwxrwxr-x 1 user user 248 Feb 2 18:22 ansible* -rwxrwxr-x 1 user user 268 Feb 2 18:23 ansible-community* -rwxrwxr-x 1 user user 249 Feb 2 18:22 ansible-config* -rwxrwxr-x 1 user user 278 Feb 2 18:22 ansible-connection* -rwxrwxr-x 1 user user 250 Feb 2 18:22 ansible-console* -rwxrwxr-x 1 user user 246 Feb 2 18:22 ansible-doc* -rwxrwxr-x 1 user user 249 Feb 2 18:22 
ansible-galaxy* -rwxrwxr-x 1 user user 252 Feb 2 18:22 ansible-inventory* -rwxrwxr-x 1 user user 251 Feb 2 18:22 ansible-playbook* -rwxrwxr-x 1 user user 247 Feb 2 18:22 ansible-pull* -rwxrwxr-x 1 user user 1732 Feb 2 18:22 ansible-test* -rwxrwxr-x 1 user user 248 Feb 2 18:22 ansible-vault* -rwxrwxr-x 1 user user 263 Feb 2 18:22 easy_install* -rwxrwxr-x 1 user user 263 Feb 2 18:22 easy_install-3.8* -rwxrwxr-x 1 user user 253 Feb 8 10:11 pip* -rwxrwxr-x 1 user user 253 Feb 8 10:11 pip3* -rwxrwxr-x 1 user user 253 Feb 8 10:11 pip3.8* lrwxrwxrwx 1 user user 7 Feb 2 18:10 python -&gt; python3* lrwxrwxrwx 1 user user 16 Feb 2 18:10 python3 -&gt; /usr/bin/python3* user@pcname:~$ apt-cache policy ansible ansible: Installed: (none) Candidate: 5.10.0-1ppa~focal Version table: 5.10.0-1ppa~focal 500 500 http://ppa.launchpad.net/ansible/ansible/ubuntu focal/main amd64 Pac kages 500 http://ppa.launchpad.net/ansible/ansible/ubuntu focal/main i386 Pack ages 2.9.6+dfsg-1 500 500 http://au.archive.ubuntu.com/ubuntu focal/universe amd64 Packages 500 http://au.archive.ubuntu.com/ubuntu focal/universe i386 Packages user@pcname:~$ ansible bash: /home/user/.local/bin/ansible: No such file or directory 127 user@pcname:~$ ansible --version bash: /home/user/.local/bin/ansible: No such file or directory 127 user@pcname:~$ which ansible /usr/bin/ansible user@pcname:~$ whereis ansible ansible: /usr/bin/ansible /etc/ansible /usr/share/ansible /usr/share/man/man1/an sible.1.gz user@pcname:~$ pip install --user pipx Requirement already satisfied: pipx in /usr/lib/python3/dist-packages (0.12.3.1) DEPRECATION: mythtv 31.0.-1 has a non-standard version number. pip 24.0 will enf orce this behaviour change. A possible replacement is to upgrade to a newer vers ion of mythtv or contact the author to suggest that they release a version with a conforming version number. 
Discussion can be found at https://github.com/pypa/ pip/issues/12063 user@pcname:~$ pipx ensurepath Your PATH looks like it already is set up for pipx. Pass `--force` to modify the PATH. user@pcname:~$ pipx list venvs are in /home/user/.local/pipx/venvs binaries are exposed on your $PATH at /home/user/.local/bin package ansible 6.7.0, Python 3.8.10 - ansible-community package ansible-core 2.13.13, Python 3.8.10 - __init__.py - ansible_connection_cli_stub.py - ansible (symlink not installed) - ansible-config (symlink not installed) - ansible-connection (symlink not installed) - ansible-console (symlink not installed) - ansible-doc (symlink not installed) - ansible-galaxy (symlink not installed) - ansible-inventory (symlink not installed) - ansible-playbook (symlink not installed) - ansible-pull (symlink not installed) - ansible-test (symlink not installed) - ansible-vault (symlink not installed) user@pcname:~$ ll /home/user/.local/bin/ansible* lrwxrwxrwx 1 user user 59 Feb 2 18:23 /home/user/.local/bin/ansible-communit y -&gt; /home/user/.local/pipx/venvs/ansible/bin/ansible-community* lrwxrwxrwx 1 user user 77 Feb 2 18:12 /home/user/.local/bin/ansible_connecti on_cli_stub.py -&gt; /home/user/.local/pipx/venvs/ansible-core/bin/ansible_connect ion_cli_stub.py </code></pre> <p>I suppose it's not working because of those &quot;symlink not installed&quot; messages but I don't know how to fix it.</p> <p>Edit:</p> <pre><code>user@pcname:~$ ll /usr/bin/ansible -rwxr-xr-x 1 root root 5915 Oct 26 2022 /usr/bin/ansible* user@pcname:~$ /usr/bin/ansible --version ansible [core 2.12.10] config file = /etc/ansible/ansible.cfg configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules'] ansible python module location = /usr/lib/python3/dist-packages/ansible ansible collection location = /home/user/.ansible/collections:/usr/share/ansible/collections executable location = /usr/bin/ansible python version = 3.8.10 (default, Nov 22 2023, 
10:22:35) [GCC 9.4.0] jinja version = 3.1.3 libyaml = True </code></pre>
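The `(symlink not installed)` lines under `ansible-core` are the key clue: the `ansible` binary is an app of `ansible-core`, which pipx treats as a mere dependency of the `ansible` package and therefore does not expose on `$PATH`, leaving `~/.local/bin/ansible` dangling ("No such file or directory"). A hedged fix is to reinstall with `--include-deps`:

```shell
pipx uninstall ansible
pipx install --include-deps ansible   # also expose apps of dependencies

# then verify the shim resolves into the pipx venv:
which ansible && ansible --version
```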
<python><pip><ansible><pipx>
2024-02-08 00:16:53
0
1,327
localhost
77,958,603
10,714,273
ftputil throwing "530 Please login with USER and PASS" error on login with blank credentials while ftplib does not
<p>Getting the following error when trying to connect to ftp host with <code>ftputil</code>:</p> <blockquote> <p>ftputil.error.PermanentError: 530 Please login with USER and PASS.</p> </blockquote> <p>However I do not get this error when connecting with <code>ftplib</code>. I would like to use <code>ftputil</code> for some of its additional functionality.</p> <pre class="lang-py prettyprint-override"><code>import ftputil import ftplib host = 'ftp.swpc.noaa.gov' user = '' pw = '' # works ftp_host = ftplib.FTP(host, user, pw) ftp_host.login() # does not work ftp_host = ftputil.FTPHost(host, user, pw) # with ftputil.FTPHost(host, user, pw) as ftphost: # ftphost.listdir(ftphost.curdir) </code></pre>
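A plausible explanation (hedged): `ftplib.FTP(host, user, pw)` skips the login step entirely when `user` is empty, and it is the separate `ftp_host.login()` call that performs the anonymous login — `ftplib.login()` substitutes `anonymous` for an empty user. `ftputil.FTPHost` never issues that second call, so commands go out on an unauthenticated session and the server answers 530. Passing explicit anonymous credentials avoids this; a small normalizer makes the two libraries behave alike:

```python
def anonymous_fallback(user: str, password: str) -> tuple[str, str]:
    """Mimic ftplib.login()'s empty-credential handling so the same
    values can be handed to ftputil.FTPHost directly."""
    user = user or "anonymous"
    if user in ("anonymous", "ftp") and not password:
        password = "anonymous@"  # conventional anonymous password
    return user, password


# Usage sketch:
#   with ftputil.FTPHost(host, *anonymous_fallback(user, pw)) as ftp_host:
#       ftp_host.listdir(ftp_host.curdir)
```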
<python><ftp><ftplib><ftputil>
2024-02-08 00:12:13
1
359
cap
77,958,373
8,217,821
f2py does not output inout as expected
<p>Why does f2py not generate a correct wrapper when dealing with inout params?</p> <p>Here is an example of my function call:</p> <pre><code>io = 7.5 io, out1, out2, out3 = fortran_file.func(5, 2.5, False, io) </code></pre> <p>Here's how that might look in Fortran (example.f):</p> <pre><code>subroutine func(inputA, inputB, inputC, ioD, outE, outF, outG) integer inputA real inputB logical inputC real ioD, outE, outF, outG real localH if(.not.inputC) then localH= ioD else localH= inputB endif ioD= ioD + localH outE= inputA + 10.5 outF= inputA + 5.5 outG= inputA + 1.5 return end </code></pre> <p>By default f2py marks everything here as an input, even though this is a basic example. So I create the file signature using:</p> <pre><code>python.exe -m numpy.f2py -m fortran_file -h sig_example.pyf example.f </code></pre> <p>I edit the pyf file to add the correct intent info. Then I compile using:</p> <pre><code>python.exe -m numpy.f2py -c sig_example.pyf example.f </code></pre> <p>But it keeps complaining that the f2py func only returns 3 items.</p> <p>Even with io explicitly using <code>intent(inout)</code>, f2py doesn't return it.</p> <p>I used the <code>--build-dir</code> option in f2py to look at the generated C code for the wrapper. The file lists io under Parameters, but it says in/output rank-0 array(float,'f) This tells me it knows it's supposed to be inout, but I don't get why its labeling it an array? It also doesn't list io under &quot;Returns&quot; which just makes me even more confused. The associated <code>Py_BuildValue</code> call also doesn't include io in the list of args.</p> <p>Why does it seem like inout isn't supposed to be returned? That's the whole point of inout, and for certain files it's required.</p> <p>I realize this whole question is niche, but I could really use some help here. Is there a way to specify param intents inside of Python? Because I have too many files to manually edit signatures.</p>
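The wrapper is behaving as documented, if confusingly: `intent(inout)` means "mutate the caller's argument in place", so f2py requires a NumPy array for it (hence the "rank-0 array" in the generated C) and deliberately leaves it out of the return tuple. To get the `io, out1, out2, out3 = ...` call shown at the top, mark the parameter `intent(in,out)` instead, which accepts a plain scalar and returns the updated value. The intents can also live in the Fortran source itself as `Cf2py` directive comments, which removes the need to hand-edit a `.pyf` per file. A sketch (fixed-form source):

```fortran
      subroutine func(inputA, inputB, inputC, ioD, outE, outF, outG)
Cf2py intent(in,out) ioD
Cf2py intent(out) outE, outF, outG
      integer inputA
      real inputB
      logical inputC
      real ioD, outE, outF, outG
```

The equivalent line in the `.pyf` would be `real intent(in,out) :: ioD`.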
<python><fortran><f2py>
2024-02-07 22:57:05
1
321
Kyle Ponikiewski
77,958,311
5,837,992
Pandas - Using QCut Rankings From One Dataframe to Categorize a Second Dataframe
<p>I am looking to get quartiles from one dataframe (grouped by PriceDate) and then use the ranges to categorize the values in a second data frame with the same dates.</p> <p>So in the data set below</p> <p>For 10-1, we have the following bins (-0.001, 3.2] and (3.2, 9.3]. For 10-2, we have the following bins (0.699, 6.5] and (6.5, 10.0].</p> <p>What I am looking to do is to have the rank column of df1 be:</p> <p><strong>0</strong> if the date in the second column is 10-1 and the price is between -0.001 and 3.2</p> <p><strong>1</strong> if the date in the second column is 10-1 and the price is between 3.2 and 9.3</p> <p>etc..</p> <p>But I can't figure out the logic to make this work.</p> <p>Would somebody please set me on the right path?</p> <p>Thanks</p> <pre><code>import pandas as pd import numpy as np df1 = pd.DataFrame({'Price':[4.4, 3.6, 9.2, 3.4], 'PriceDate':['2023-10-01', '2023-10-01','2023-10-01', '2023-10-02']}) df2 = pd.DataFrame({'Price':[0.0, 3.6, 9.3, 4.5, 2.9, 3.2, 1.0, 6.7, 8.7, 9.8, 3.4, .7, 2.2, 6.5, 3.4, 1.7, 9.4, 10.0], 'PriceDate':['2023-10-01', '2023-10-01', '2023-10-01', '2023-10-01', '2023-10-01', '2023-10-01', '2023-10-01', '2023-10-02', '2023-10-02', '2023-10-02', '2023-10-02', '2023-10-02', '2023-10-02', '2023-10-02', '2023-10-02', '2023-10-02', '2023-10-02', '2023-10-02']}) df2['Rank'] = df2.groupby(['PriceDate'])['Price'].transform( lambda x: pd.qcut(x, 2,labels=False,duplicates=&quot;drop&quot;)) df2['labels'] = df2.groupby(['PriceDate'])['Price'].transform( lambda x: pd.qcut(x, 2,duplicates=&quot;drop&quot;)) </code></pre> <p>Sample data can be found <a href="https://www.dropbox.com/scl/fi/txqtq4ezsjdn0g9hcrcg2/subsetfills.csv?rlkey=ndtt9xskrien8a4r9p4pr87od&amp;dl=0" rel="nofollow noreferrer">https://www.dropbox.com/scl/fi/txqtq4ezsjdn0g9hcrcg2/subsetfills.csv?rlkey=ndtt9xskrien8a4r9p4pr87od&amp;dl=0</a></p> <p><a href="https://www.dropbox.com/scl/fi/fwpgnwp2wh3e83ohyxkdb/allfills.csv?rlkey=n9kmx0bge4x5ayw12g4g2qfll&amp;dl=0"
rel="nofollow noreferrer">https://www.dropbox.com/scl/fi/fwpgnwp2wh3e83ohyxkdb/allfills.csv?rlkey=n9kmx0bge4x5ayw12g4g2qfll&amp;dl=0</a></p> <p>Sample code for this data that doesn't work</p> <pre><code>import pandas as pd import numpy as np df1=pd.read_csv(&quot;c:/temp/subsetfills.csv&quot;,index_col=False) df2=pd.read_csv(&quot;c:/temp/allfills.csv&quot;,index_col=False) ref = df2.groupby('Trade_Date')['ExecPrice'].apply(lambda g: pd.qcut(g, q=2, retbins=True)[1]) ref = pd.DataFrame(ref).reset_index().rename(columns={'ExecPrice': 'Bins'}) ref df3 = pd.merge(df1, ref, on='Trade_Date', how='left') df3 def bin_price(g): # Get the bins range bins = g['Bins'].iloc[0] # In some case the value Price in df1 might outside the bin range # You can modify here: bins = [-np.inf] + bins[1:-1].tolist() + [np.inf] bin_value = pd.cut(g['Price'], bins, labels=False) return bin_value df3['GroupPrice'] = df3.groupby('ExecPrice', group_keys=False).apply(bin_price) df3 </code></pre>
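One way to express "bin df1 with df2's per-date quantile edges" (a sketch against the small frames above; the CSV version only needs the column names swapped): compute the edges per date with `retbins=True`, then label the other frame with `pd.cut`, widening the outer edges so out-of-range prices still get a bin:

```python
import numpy as np
import pandas as pd


def rank_against_reference(df_ref, df_new, date_col, price_col, q=2):
    """qcut bin edges per date from df_ref, applied to df_new via pd.cut."""
    edges = {
        date: pd.qcut(grp[price_col], q, retbins=True, duplicates="drop")[1]
        for date, grp in df_ref.groupby(date_col)
    }

    def label(row):
        bins = edges[row[date_col]].copy()
        bins[0], bins[-1] = -np.inf, np.inf  # tolerate out-of-range values
        return pd.cut([row[price_col]], bins=bins, labels=False)[0]

    return df_new.apply(label, axis=1)
```

With the sample frames, `rank_against_reference(df2, df1, "PriceDate", "Price")` yields ranks 1, 1, 1 for the 10-01 rows and 0 for the 10-02 row.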
<python><pandas><qcut>
2024-02-07 22:38:54
1
1,980
Stumbling Through Data Science
77,958,261
13,215,988
Pip commands in Kaggle create a lot of dependency resolver issues
<p>I have been using Kaggle for training models. I have a notebook in which I ran this command <code>!pip install -Uqq fastai</code> and it output this error:</p> <pre><code>ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. tensorflow-io 0.21.0 requires tensorflow-io-gcs-filesystem==0.21.0, which is not installed. tensorflow 2.6.3 requires absl-py~=0.10, but you have absl-py 1.0.0 which is incompatible. tensorflow 2.6.3 requires numpy~=1.19.2, but you have numpy 1.21.6 which is incompatible. </code></pre> <p>And many more similar errors.</p> <p>How can I fix this? I am using a Jupyter Notebook in Kaggle so maybe there is a special way to install packages in a Kaggle notebook?</p>
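Worth separating two things (hedged): pip's resolver messages here are warnings, not a failed install — fastai lands despite them — and they appear because the upgrade pulled `numpy`/`absl-py` past the pins of Kaggle's preinstalled TensorFlow 2.6.3. If the notebook never imports TensorFlow, they can usually be ignored; otherwise one sketch is to satisfy the pins named in the message:

```shell
# fastai is installed despite the warnings -- confirm:
pip install -Uqq fastai
python -c "import fastai; print(fastai.__version__)"

# Only if TensorFlow 2.6.3 must keep working in the same notebook,
# re-pin the dependencies it names (these may in turn be too old for
# fastai, so pin only what the notebook actually imports):
pip install -q "absl-py~=0.10" "numpy~=1.19.2" "tensorflow-io-gcs-filesystem==0.21.0"
```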
<python><tensorflow><pip>
2024-02-07 22:25:50
1
1,212
ChristianOConnor
77,958,175
1,735,215
Is there a non-memory-intensive way of importing data from MariaDB into Python?
<p><strong>Question</strong></p> <p>Is there a non-memory-intensive way for extracting large datasets from a MariaDB database and importing them directly into Python (via the python-mariadb connector)?</p> <p><strong>Edit: Solution</strong></p> <p>Following @Georg Richter's <a href="https://stackoverflow.com/a/77960704">solution below</a>, we can do:</p> <pre><code>connection = mariadb.connect(user='&lt;username&gt;', host='localhost', database='my_database') cursor = connection.cursor(buffered=False) query = 'SELECT column_1,column_2 FROM my_table' cursor.execute(query) df = pd.DataFrame({col:[] for col in range(number_of_columns_you_extract)}) size = 2**23 while rows := cursor.fetchmany(size): df = pd.concat([df, pd.DataFrame(rows)], copy=False) </code></pre> <p>For some reason, this casts my int values as float. But memory usage never exceeds 10GB.</p> <p><strong>Context</strong></p> <p>I have a large database in MariaDB on Linux. I often have to import parts of it into Python for further computations. MariaDB is quite economical in terms of its memory usage, but Python is not. Although there are less memory-intensive ways to hold the data in Python (e.g., as a Pandas DataFrame with appropriate dtypes for all columns), the usual workflow involves copying data several times and in some stages the memory requirements are enormous.</p> <p><strong>Simple Example</strong></p> <pre><code>import pandas as pd import mariadb import gc connection = mariadb.connect(user='&lt;username&gt;', host='localhost', database='my_database') cursor = connection.cursor(dictionary=False) query = 'SELECT column_1,column_2 FROM my_table' cursor.execute(query) rows = cursor.fetchall() del cursor gc.collect() df = pd.DataFrame(rows) del rows gc.collect() </code></pre> <p>The example extracts 500 million rows and two columns, all values are integers.</p> <p>After <code>cursor.execute(query)</code>, the memory usage is about 35GB. 
After <code>rows = cursor.fetchall()</code>, it rises to about 90GB. After converting the data to a pandas DataFrame and removing the other objects, the memory requirements fall to about 8GB. I monitored the memory usage using top.</p> <p>I am unsure how the data is saved in the cursor object. In <code>rows</code>, it is a list of tuples. (And if you do <code>cursor = connection.cursor(dictionary=True)</code> to preserve the column names, it is instead a list of dicts, each with the column names as strings. Needless to say, in that case, the memory requirements rise far beyond the 90GB reported above.)</p> <p><strong>Ideas for solutions</strong></p> <ul> <li><p>Cutting out the two intermediate stages of saving the data (after <code>cursor.execute()</code> and after <code>cursor.fetchall()</code>) and saving directly into a pandas DataFrame. Python-mariadb does not seem to offer this. It does not strike me as something that would be easy to implement.</p> </li> <li><p>Importing the data in chunks. In this case, I am worried about (1) possible dtype inconsistencies, (2) memory requirements when concatenating the dataframes into one in the end, (3) possible inconsistencies this might introduce in case of more complex MariaDB queries (especially with ad hoc implementations of splitting the query result into chunks), and (4) the computation time, if in the worst possible case the query may have to be run as many times as there are chunks.</p> </li> <li><p>Saving the query results to the hard drive and reading it back into pandas. This would basically mean working around everything the python-mariadb connector has implemented and replacing it with manual hacks. In addition, this may lead to problems with write permissions to the hard drive. (I have <code>ProtectHome=true</code> in the MariaDB config, which will prevent MariaDB from writing to anywhere in <code>/home</code> etc. This is the standard for most Linuxes for security reasons.
It is possible to work around this with bind-mounting or allowing users mess with directories outside <code>/home</code>, but neither of those strikes me as particularly great ideas.)</p> </li> <li><p>Following @yotheguitou's comment, you can improve memory usage a bit by plugging the mariadb connection directly into <code>pd.read_sql</code> and using the <code>chunksize</code> parameter. Memory usage is still huge (30GB fof the generator object, 10GB more during the creation of the dataframe), but only about half what my original approach was. An additional caveat is that <code>pd.read_sql</code> throws a <code>UserWarning: pandas only supports SQLAlchemy connectable...</code> warning. This may or may not mean that there are any real problems in some cases - for the case I tried, it seemed fine. The code would be:</p> <p><code>chunks = pd.read_sql(statement, connection, chunksize=50000000) dfx = pd.concat(list(chunks))</code></p> </li> </ul> <p><strong>Additional Information</strong></p> <p>MariaDB version is 11.2.2, Python version is 3.11.6, python-mariadb connector version is 1.1.8, Linux is Arch Linux with kernel 6.7.4.</p>
<python><sql><linux><memory><mariadb>
2024-02-07 22:07:12
2
2,176
0range
77,958,149
32,836
How can I more efficiently deserialize set data strings?
<p>I've written a Rust program that exposes its JSON API in the form of JSON Schema and then uses portions of that to create Python pydantic classes.</p> <p>Where I'm stuck is that I have several JSON schema types that I could receive in the Python code and I'm rather inefficiently deserializing them. A facsimile of what's below is working, but I feel there has to be a better, more 'pythonic' way.</p> <pre><code>class DataType(BaseModel): pass class ATypeData(DataType): ... class BTypeData(DataType): ... class CTypeData(DataType): ... def deserialize_wired_json_str(json_str) -&gt; DataType: json_data = json.loads(json_str) if 'a_type' in json_str: return ATypeData.parse_raw(json.dumps(json_data['a_type'])) elif 'b_type' in json_str: return BTypeData.parse_raw(json.dumps(json_data['b_type'])) elif 'c_type' in json_str: return CTypeData.parse_raw(json.dumps(json_data['c_type'])) ... </code></pre> <p>... and by &quot;set&quot; I mean that the types of string-encapsulated JSON data are known to map to the defined <code>DataType</code> class variants.</p>
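A more pythonic shape (sketch; the stand-in classes and their fields are hypothetical): put the wire keys in a dict, test membership on the parsed dict rather than on the raw string — `'a_type' in json_str` is a substring check and can match inside unrelated values — and validate the sub-dict directly instead of re-dumping it to JSON:

```python
import json

from pydantic import BaseModel


class DataType(BaseModel):
    pass


# Stand-ins for the generated classes; the fields are hypothetical.
class ATypeData(DataType):
    a: int = 0


class BTypeData(DataType):
    b: int = 0


class CTypeData(DataType):
    c: int = 0


# One registry instead of an if/elif chain.
_WIRE_TYPES = {
    "a_type": ATypeData,
    "b_type": BTypeData,
    "c_type": CTypeData,
}


def deserialize_wired_json_str(json_str: str) -> DataType:
    data = json.loads(json_str)
    for key, cls in _WIRE_TYPES.items():
        if key in data:  # membership on the parsed dict, not the raw string
            # model_validate on pydantic v2, parse_obj on v1; either way,
            # validate the sub-dict directly instead of re-dumping to JSON.
            validate = getattr(cls, "model_validate", None) or cls.parse_obj
            return validate(data[key])
    raise ValueError(f"unrecognized payload keys: {sorted(data)}")
```

If the Rust side can emit a tag field inside each variant, pydantic's discriminated unions (`Field(discriminator=...)`) push this dispatch into the library entirely.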
<python><pydantic>
2024-02-07 22:00:09
1
7,493
Jamie
77,958,113
9,766,795
Calculate binance ATR in python with binance connector
<p>I'm using the binance connector API in python (<a href="https://binance-connector.readthedocs.io/en/latest/getting_started.html" rel="nofollow noreferrer">https://binance-connector.readthedocs.io/en/latest/getting_started.html</a>) and I want to build a function that computes the ATR of the previous candle (not the current one) based on a given length. I'm using the candlestick data format provided by the API: <a href="https://binance-docs.github.io/apidocs/spot/en/#kline-candlestick-data" rel="nofollow noreferrer">https://binance-docs.github.io/apidocs/spot/en/#kline-candlestick-data</a>. This is the code:</p> <pre class="lang-py prettyprint-override"><code>def tr(prices: list[list], length: int) -&gt; list[float]: prices = prices[len(prices)-length-1:] tr_list: list[float] = [] for i in range(1, len(prices)): high = float(prices[i][2]) low = float(prices[i][3]) close = float(prices[i-1][4]) current_true_range = max( (high - low), abs(high - close), abs(low - close), ) tr_list.append(current_true_range) return tr_list def atr(prices: list[list], length: int) -&gt; float: tr_values = tr(prices, length) return sum(tr_values) / len(tr_values) </code></pre> <p>this is how I get the klines:</p> <pre class="lang-py prettyprint-override"><code>self.binance_api_client.klines(symbol=self.trading_symbol, interval=self.klines_interval, limit=self.klines_limit_length) </code></pre> <p>I don't use the last kline because it's never complete yet, I only use the previous closed klines (that's why I use <code>[:-1]</code>):</p> <pre><code>atr = technical_indicators.atr(klines[:-1], 14) </code></pre> <p>However, I still don't get the right value and the error is significant, what am I doing wrong ? 
(self.klines_limit_length is, in this context, 200).</p> <p>I've made sure that the interval is the same and I'm sure that I get the right data.</p> <hr /> <p><em><strong>UPDATE:</strong></em></p> <p>When I call the supertrend function I always pass in the klines without the last candle (since it's not closed yet) so I don't have to think about not taking into consideration the values of the last candlestick when calculating the tr/atr. <em><strong>I now know that I am getting the right prices in the <code>tr</code> function</strong></em>, the problem that I have now is that, for some reason, <em><strong>the supertrend is still wrong</strong></em>.</p> <p>This is all the code:</p> <pre class="lang-py prettyprint-override"><code>def tr(prices: list[list], length: int) -&gt; list[float]: prices = prices[-length:] tr_list: list[float] = [] for i in range(1, len(prices)): high = float(prices[i][2]) low = float(prices[i][3]) close = float(prices[i-1][4]) current_true_range = max( (high - low), abs(high - close), abs(low - close), ) tr_list.append(current_true_range) return tr_list def atr(prices: list[list], length: int) -&gt; float: tr_values = tr(prices, length) return sum(tr_values) / len(tr_values) def supertrend(prices: list[list], length: int, multiplier: float) -&gt; float: high = float(prices[-1][2]) low = float(prices[-1][3]) atr_value = atr(prices, length) supertrend = (low + high) / 2 + multiplier * atr_value return supertrend </code></pre> <p>I already exclude the last price when calling the supertrend:</p> <pre class="lang-py prettyprint-override"><code>supertrend = technical_indicators.supertrend(klines[:-1], 20, 2) </code></pre> <p>The klines have always at least 200 prices, so the length of 20 is not a problem.</p>
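Two likely sources of the discrepancy (hedged): the slice `prices[-length:]` combined with the `range(1, ...)` loop yields only `length - 1` true ranges, and — the bigger one — exchange/TradingView-style ATR is not a plain average of TR but Wilder's smoothed moving average (RMA), seeded with an SMA and then updated recursively. A sketch using the same kline layout (index 2 = high, 3 = low, 4 = close):

```python
def true_ranges(prices: list[list]) -> list[float]:
    """One TR per candle after the first (each needs the previous close)."""
    trs = []
    for prev, cur in zip(prices, prices[1:]):
        high, low, prev_close = float(cur[2]), float(cur[3]), float(prev[4])
        trs.append(max(high - low, abs(high - prev_close), abs(low - prev_close)))
    return trs


def wilder_atr(prices: list[list], length: int) -> float:
    """ATR with Wilder's smoothing: SMA of the first `length` TRs as the
    seed, then the recursive update over all remaining candles."""
    trs = true_ranges(prices)
    atr = sum(trs[:length]) / length
    for tr in trs[length:]:
        atr = (atr * (length - 1) + tr) / length
    return atr
```

Because the recursion carries history, feed it everything already fetched — e.g. `wilder_atr(klines[:-1], 14)` over the 200 klines — rather than just the last 14, so it can converge toward the displayed value.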
<python><math><algorithmic-trading><binance>
2024-02-07 21:50:32
1
632
David
77,958,071
9,877,065
OpenCV find contours of an RGBA image not working
<p>With input image as <code>Contours_X.png</code> [Contours_X.png: PNG image data, 819 x 1154, 8-bit grayscale, non-interlaced] :</p> <p><a href="https://i.sstatic.net/iW54X.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iW54X.png" alt="enter image description here" /></a></p> <p>Stealing code from <a href="https://stackoverflow.com/questions/77946423/python-find-contours-white-region-only-opencv/77947469#77947469">Python Find Contours white region only OpenCV</a> code :</p> <pre><code>import cv2 as cv import numpy as np def generate_X_Y(image_path): image = cv.imread(image_path) # image = cv.imread(image_path, cv.IMREAD_UNCHANGED) cv.imwrite(&quot;image_ori.png&quot; , image) print('image[0] : ', image[0]) gray = cv.cvtColor(image, cv.COLOR_RGBA2GRAY) print('gray[0] : ', gray[0]) ## CHANGED TO: ret, thresh = cv.threshold(gray, 128, 255, cv.THRESH_BINARY_INV) cv.imwrite(&quot;image2.png&quot;, thresh) contours, hierarchies = cv.findContours(thresh, cv.RETR_LIST, cv.CHAIN_APPROX_SIMPLE) blank = np.zeros(thresh.shape[:2], dtype='uint8') cv.drawContours(blank, contours, -1, (255, 0, 0), 1) cv.imwrite(&quot;Contours.png&quot;, blank) print('len(contours) : ' , len(contours)) for i in contours: cv.drawContours(image, [i], -1, (0, 255, 0), 2) cv.imwrite(&quot;image.png&quot;, image) if __name__ == '__main__': image_path = 'Contours_X.png' # Provide the correct path in Colab # image_path = 'input_alpha.png' generate_X_Y(image_path) </code></pre> <p>I get output <code>image.png</code> [image.png: PNG image data, 819 x 1154, 8-bit/color RGB, non-interlaced] :</p> <p><a href="https://i.sstatic.net/wSDvp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wSDvp.png" alt="enter image description here" /></a></p> <p>While using <code>input_alpha_2.png</code> [input_alpha_2.png: PNG image data, 1000 x 1200, 8-bit/color RGBA, non-interlaced] :</p> <p><a href="https://i.sstatic.net/xq5zd.png" rel="nofollow noreferrer"><img 
src="https://i.sstatic.net/xq5zd.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/Qk87y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qk87y.png" alt="enter image description here" /></a></p> <p>and code:</p> <pre><code>import cv2 as cv import numpy as np def generate_X_Y(image_path): # image = cv.imread(image_path) image = cv.imread(image_path, cv.IMREAD_UNCHANGED) cv.imwrite(&quot;image_ori.png&quot; , image) print('image[0] : ', image[0]) gray = cv.cvtColor(image, cv.COLOR_RGBA2GRAY) print('gray[0] : ', gray[0]) ## CHANGED TO: ret, thresh = cv.threshold(gray, 128, 255, cv.THRESH_BINARY_INV) cv.imwrite(&quot;image2.png&quot;, thresh) contours, hierarchies = cv.findContours(thresh, cv.RETR_LIST, cv.CHAIN_APPROX_SIMPLE) blank = np.zeros(thresh.shape[:2], dtype='uint8') cv.drawContours(blank, contours, -1, (255, 0, 0), 1) cv.imwrite(&quot;Contours.png&quot;, blank) print('len(contours) : ' , len(contours)) for i in contours: cv.drawContours(image, [i], -1, (0, 255, 0), 20) cv.imwrite(&quot;image.png&quot;, image) if __name__ == '__main__': # image_path = 'Contours_X.png' # Provide the correct path in Colab image_path = 'input_alpha_2.png' generate_X_Y(image_path) </code></pre> <p>I get <code>image.png</code> [image.png: PNG image data, 1000 x 1200, 8-bit/color RGBA, non-interlaced] :</p> <p><a href="https://i.sstatic.net/wBmTd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wBmTd.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/pBgOo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBgOo.png" alt="enter image description here" /></a></p> <p>Why don't I get a nice green border around the subject like in the first example?</p> <p>As suggested in comments:</p> <blockquote> <p>Your base BGR image (under the alpha channel) has your green lines. The alpha channel is covering it).
Remove the alpha channel to see it.</p> </blockquote> <p>and doing that with:</p> <pre><code>cv.imwrite(&quot;image.png&quot;, image[:,:,:3]) </code></pre> <p>I get <em>image.png: PNG image data, 1000 x 1200, 8-bit/color RGBA, non-interlaced</em>:</p> <p><a href="https://i.sstatic.net/FvviL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FvviL.png" alt="enter image description here" /></a></p> <p>but still I don't get how a transparent alpha channel could hide a contour, or why I get the gray background, which I believe could be the area of the biggest contour in my image, the square external black border.</p> <p>More on this using:</p> <pre><code>cntsSorted = sorted(contours, key = lambda x: cv.contourArea(x) , reverse=True) for index , i in enumerate(cntsSorted) : print(cv.contourArea(i)) if index &gt; 0 : cv.drawContours(image, [i], -1, (0, 255, 0), 20) cv.imwrite(&quot;image.png&quot;, image) cv.imwrite(&quot;image_rem.png&quot;, image[:,:,:3]) </code></pre> <p>The second image doesn't have the more external green border but still keeps the dark grey background.</p>
<python><opencv><image-processing>
2024-02-07 21:40:41
2
3,346
pippo1980
77,957,959
6,800,914
lxml xpath can't find a tag below the first one in XML
<p>I have an xml doc that looks something like this</p> <pre><code>&lt;MyXmlRoot&gt; &lt;App xmlns='urn:SomethingSomething1'&gt; ... &lt;/App&gt; &lt;User xmlns='urn:SomethingSomething2'&gt; ... &lt;/User&gt; &lt;Doc xmlns='urn:SomethingSomething3'&gt; &lt;level2&gt; &lt;level3&gt; &lt;level4&gt; &lt;level5&gt; &lt;level6&gt; &lt;level7&gt; &lt;level8&gt; &lt;level9&gt; &lt;level10&gt;Content at the deepest level&lt;/level10&gt; &lt;/level9&gt; &lt;/level8&gt; &lt;/level7&gt; &lt;/level6&gt; &lt;/level5&gt; &lt;/level4&gt; &lt;/level3&gt; &lt;/level2&gt; &lt;/Doc&gt; </code></pre> <p>I use lxml to read it and parse it like this</p> <pre><code>tree = etree.parse(&quot;textxml.xml&quot;) root = tree.getroot() </code></pre> <p>If I do a pretty print from root, it shows the entire XML, which is good, but when I try to read specific tag values like so</p> <pre><code>content = root.xpath('//level10/text()') </code></pre> <p>xpath can't find any tag below the root and returns an empty list. I suspect it's because of the namespaces but can't find a solution to make xpath read the values. Any advice?</p>
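It is indeed the default namespaces: in XPath 1.0, an unprefixed name like `level10` means "no namespace", while every element under `<Doc xmlns='urn:SomethingSomething3'>` lives in that namespace. Two hedged fixes — bind a prefix for the query, or match on `local-name()` — sketched here with the nesting collapsed for brevity:

```python
from lxml import etree

xml = b"""<MyXmlRoot>
  <Doc xmlns='urn:SomethingSomething3'>
    <level2><level10>Content at the deepest level</level10></level2>
  </Doc>
</MyXmlRoot>"""
root = etree.fromstring(xml)

# Option 1: map an arbitrary prefix to the default namespace URI.
ns = {"d": "urn:SomethingSomething3"}
content = root.xpath("//d:level10/text()", namespaces=ns)

# Option 2: ignore namespaces entirely.
content_any = root.xpath("//*[local-name()='level10']/text()")
```

Option 1 is the safer of the two when different sections reuse tag names under different namespace URIs.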
<python><xml><lxml>
2024-02-07 21:18:41
1
303
Vladimir Zaguzin
77,957,732
16,837,686
Why is a migration file created (over and over) when I run makemigrations?
<p>Am facing a somehow funny issue...every time i migrate,i get this description</p> <pre><code> Your models in app(s): 'organization_app' have changes that are not yet reflected in a migration, and so won't be applied. Run 'manage.py makemigrations' to make new migrations, and then re-run 'manage.py migrate' to apply them. </code></pre> <p>so i follow what django says and i makemigrations,this migration file is created</p> <pre class="lang-py prettyprint-override"><code># Generated by Django 4.2.3 on 2024-02-07 20:14 from django.db import migrations, models class Migration(migrations.Migration): dependencies = [ (&quot;organization_app&quot;, &quot;0002_alter_penaltymodel_tenant_penalty_choice&quot;), ] operations = [ migrations.AlterField( model_name=&quot;penaltymodel&quot;, name=&quot;tenant_penalty_choice&quot;, field=models.CharField( choices=[(&quot;PERCENT&quot;, &quot;Percent&quot;), (&quot;AMOUNT&quot;, &quot;Amount&quot;)], default=&quot;AMOUNT&quot;, max_length=20, ), ), ] </code></pre> <p>when i migrate,i get the same description i stated earlier <code>'manage.py makemigrations'</code> and when i makemigration, the same migration file is created over and over even though i have already migrated..is this a #bug here is the penalty model</p> <pre class="lang-py prettyprint-override"><code>class PenaltyModel(models.Model): tenant_penalty_desc = models.TextField(null=False) tenant_penalty_name = models.CharField(max_length=100, null=False) tenant_penalty_percentage = models.IntegerField(default=0) tenant_penalty_amount = models.FloatField(default=0) tenant_penalty_choice = models.CharField( choices=TenantPenaltyChoices.choices, max_length=20, default=TenantPenaltyChoices.AMOUNT, ) rental_house = models.OneToOneField(HouseModel, on_delete=models.CASCADE) </code></pre> <p>I had deleted all migrations and recreated them again because i was facing some errors.</p> <p>Result of running sqlmigrate on the migration specified above</p> <pre class="lang-sql 
prettyprint-override"><code>BEGIN; -- -- Alter field tenant_penalty_choice on penaltymodel -- -- (no-op) COMMIT; </code></pre>
<python><django><django-rest-framework>
2024-02-07 20:30:01
1
354
slinger
77,957,707
381,416
Union of protocols with invariant generic type causes problems
<p>I have a protocol <code>MyExporter</code> using a generic type <code>T</code>:</p> <pre class="lang-py prettyprint-override"><code>T = TypeVar(&quot;T&quot;, bound=BaseSample) class MyExporter(Protocol[T]): def get_sample(self) -&gt; T: ... def process_sample(self, sample: T) -&gt; str: ... </code></pre> <p>I also have a function that, given a string, will return a module implementing <code>MyExporter</code> with <code>T</code> set to one of two possible <code>BaseSample</code> subclasses, <code>SampleA</code> or <code>SampleB</code>:</p> <pre class="lang-py prettyprint-override"><code>def get_exporter(name: str) -&gt; Union[MyExporter[SampleA], MyExporter[SampleB]]: if name == &quot;a&quot;: return my_exporter_a return my_exporter_b </code></pre> <p>My problem is when I use <code>MyExporter</code> like this:</p> <pre class="lang-py prettyprint-override"><code>exporter = get_exporter(&quot;a&quot;) sample = exporter.get_sample() output = exporter.process_sample(sample) # type checker complains here </code></pre> <p>I get the following errors from mypy:</p> <pre><code>Argument 1 to &quot;process_sample&quot; of &quot;MyExporter&quot; has incompatible type &quot;Union[SampleA, SampleB]&quot;; expected &quot;SampleA&quot; Argument 1 to &quot;process_sample&quot; of &quot;MyExporter&quot; has incompatible type &quot;Union[SampleA, SampleB]&quot;; expected &quot;SampleB&quot; </code></pre> <p>It makes sense that <code>sample</code> is of type <code>SampleA | SampleB</code>, but I also feel like the type checker should be able to infer that the output of <code>get_sample</code> is a valid argument to <code>process_sample</code> when using the same exporter.</p> <p>Is there any way around this that doesn't require an architectural overhaul?</p>
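Editor's sketch of one workaround: a generic helper function re-binds `T` for the duration of a single call, so the checker knows `get_sample()` and `process_sample()` agree; for a union argument, mypy checks the call once per member. Class names mirror the question; `ExporterA` and the always-`"a"` `get_exporter` are made-up minimal pieces so the example runs:

```python
from typing import Protocol, TypeVar, Union

class BaseSample: ...
class SampleA(BaseSample): ...
class SampleB(BaseSample): ...

T = TypeVar("T", bound=BaseSample)

class MyExporter(Protocol[T]):
    def get_sample(self) -> T: ...
    def process_sample(self, sample: T) -> str: ...

def run_exporter(exporter: MyExporter[T]) -> str:
    # T is bound once per call, so both methods are known to use the same type
    sample = exporter.get_sample()
    return exporter.process_sample(sample)

class ExporterA:  # made-up minimal implementation
    def get_sample(self) -> SampleA:
        return SampleA()
    def process_sample(self, sample: SampleA) -> str:
        return "a"

def get_exporter(name: str) -> Union[MyExporter[SampleA], MyExporter[SampleB]]:
    return ExporterA()  # sketch: always the "a" exporter here

assert run_exporter(get_exporter("a")) == "a"
```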
<python><generics><typeerror><python-typing>
2024-02-07 20:23:05
1
940
wstr
77,957,701
746,100
Why does my Python 3 .pyw file get "No module named 'PyQt6'" when run from Visual Studio Code on macOS, but run fine from the macOS command line?
<p>.pyw has no problem if I run from terminal command line or by double-clicking with Finder.</p> <p>BUT in VSC if I select the .pyw and do &quot;Run &gt; Start debug&quot; or &quot;Run &gt; Run Without Debug&quot; it gets the error:</p> <pre><code>/Users/me/.zshrc:export:2: not valid in this context: /Users/me/.zshrc me@Dougs-MacBook-2023 softwarehubgui % /usr/bin/env /Users/me/PRIMARY/WORK/MOJO/Software/softwarehubgui/.venv/bin/python /Users/me /.vscode/extensions/ms-python.python-2024.0.1/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher 61817 -- /Users/me/PRIMARY/WO RK/MOJO/Software/softwarehubgui/SoftwareHubGUI.pyw Traceback (most recent call last): File &quot;/Users/me/PRIMARY/WORK/MOJO/Software/softwarehubgui/SoftwareHubGUI.pyw&quot;, line 5, in &lt;module&gt; from PyQt6.QtGui import QIcon ModuleNotFoundError: No module named 'PyQt6' me@Dougs-MacBook-2023 softwarehubgui </code></pre> <p>=== THE .pyw FILE ==========================</p> <pre><code>import os import signal from PyQt6.QtGui import QIcon from PyQt6.QtWidgets import QApplication import sys from include.BaseDialog import BaseDialog if __name__ == '__main__': qapp = QApplication(sys.argv) qapp.setWindowIcon(QIcon(f'{sys.path[0]}/include/Icon.png')) styleSheet = &quot;&quot;&quot; &quot;&quot;&quot; qapp.setStyleSheet(styleSheet) app = BaseDialog() qapp.aboutToQuit.connect(app.close_program) signal.signal(signal.SIGINT, app.close_program) signal.signal(signal.SIGTERM, app.close_program) sys.exit(qapp.exec()) </code></pre> <p>Here is path... 
(I don't see PyQt in it.)</p> <blockquote> <p>dbell@Dougs-Mac softwarehubgui % echo $PATH /Library/Frameworks/Python.framework/Versions/3.11/bin:/opt/homebrew/bin:/opt/homebrew/sbin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin:/Library/Apple/usr/bin:/usr/local/share/dotnet:~/.dotnet/tools:/Library/Frameworks/Python.framework/Versions/3.11/bin:/opt/homebrew/bin:/opt/homebrew/sbin</p> </blockquote> <p>Both the macos Terminal window and the Visual Studio Code's terminal window have the same path.</p> <p>UPDATE 2/12/2024 In response to Tim's suggestion, I found two Python 3.11 versions and for each I selected it and then right-clicked my python .pyw script but each failed and I got the following errors...</p> <p>1st python selection got this error.......</p> <blockquote> <p>me@Dougs-MacBook-2023 softwarehubgui % /Users/me/PRIMARY/WORK/MOJO/Software/softwarehubgui/.venv/bin/p ython /Users/me/PRIMARY/WORK/MOJO/Software/softwarehubgui/SoftwareHubGUI.pyw Traceback (most recent call last): File &quot;/Users/me/PRIMARY/WORK/MOJO/Software/softwarehubgui/SoftwareHubGUI.pyw&quot;, line 5, in from PyQt6.QtGui import QIcon ModuleNotFoundError: No module named 'PyQt6' me@Dougs-MacBook-2023 softwarehubgui %</p> </blockquote> <p>2nd python selection got this error.......</p> <pre><code>me@Dougs-MacBook-2023 softwarehubgui % /usr/local/bin/python3 /Users/me/PRIMARY/WORK/MOJO/Software/sof twarehubgui/SoftwareHubGUI.pyw Traceback (most recent call last): File &quot;/Users/me/PRIMARY/WORK/MOJO/Software/softwarehubgui/SoftwareHubGUI.pyw&quot;, line 5, in &lt;module&gt; from PyQt6.QtGui import QIcon ImportError: dlopen(/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/PyQt6/QtGui.abi3.so, 0x0002): Symbol not 
found: __ZN13QRasterWindow11resizeEventEP12QResizeEvent Referenced from: &lt;C8D7E625-2A13-3C34-9DFF-B6656A6F86E7&gt; /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/PyQt6/QtGui.abi3.so Expected in: &lt;FC67C721-05AD-33BB-A2A8-F70FC3403D7A&gt; /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/PyQt6/Qt6/lib/QtGui.framework/Versions/A/QtGui me@Dougs-MacBook-2023 softwarehubgui % </code></pre>
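Editor's note: the traceback shows VS Code launching `.venv/bin/python`, while the terminal used a different interpreter. A small stdlib diagnostic, run from both environments, shows which interpreter is active and whether it can see PyQt6 (`json` is only used as a sanity check of the mechanism):

```python
import importlib.util
import sys

print("interpreter:", sys.executable)       # which Python is actually running
spec = importlib.util.find_spec("PyQt6")
print("PyQt6:", spec.origin if spec else "NOT importable from this interpreter")

# sanity-check find_spec itself against a stdlib module
assert importlib.util.find_spec("json") is not None
```

If the venv's interpreter reports PyQt6 as not importable, `pip install PyQt6` into that venv, or pick the working interpreter via VS Code's "Python: Select Interpreter".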
<python><python-3.x><visual-studio-code>
2024-02-07 20:21:49
0
8,387
Doug Null
77,957,560
1,473,517
How to convert nested brackets to a list of pairs
<p>I have code that returns a tuple with nested brackets which represents alternating intervals to exclude and include. The nested tuples look like:</p> <pre><code>((None, 6), 16) </code></pre> <p>and</p> <pre><code>(((None, 1), 6), 16) </code></pre> <p>What I really want is to convert these into a list of pairs for only the parts that are included but it is slightly complicated. I will give some examples to hopefully explain.</p> <pre><code>((None, 6), 16) should go to [(6, 16)] (((None, 1), 6), 16) should go to [(0,1), (6, 16)] ((None, 8), 17) should go to [(8, 17)] (((None, 2), 8), 17) should go to [(0, 2), (8, 17)] (((((None, 2), 3), 4), 8), 17) should go to [(0, 2), (3, 4), (8, 17)] </code></pre> <p>If the number of open brackets is even the first interval will be an exclude one and so should be omitted. For example:</p> <pre><code>((((None, 2), 4), 5), 6) should go to [(2, 4), (5, 6)] </code></pre> <p>How can this be done?</p>
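Editor's sketch: flatten the nesting into its boundary values, pair consecutive boundaries, and keep every other interval counting backwards from the last (which is always an include); the innermost `None` becomes 0, and the even-open-brackets case falls out automatically:

```python
def included_intervals(nested):
    # Flatten (((None, a), b), c) into its boundary values.
    bounds = []
    while isinstance(nested, tuple):
        nested, value = nested
        bounds.append(value)
    bounds.append(0 if nested is None else nested)  # innermost None -> 0
    bounds.reverse()
    intervals = list(zip(bounds, bounds[1:]))
    # The last interval is always an "include"; alternate backwards from it.
    return [iv for i, iv in enumerate(intervals)
            if (len(intervals) - 1 - i) % 2 == 0]

assert included_intervals(((None, 6), 16)) == [(6, 16)]
assert included_intervals((((None, 1), 6), 16)) == [(0, 1), (6, 16)]
assert included_intervals((((((None, 2), 3), 4), 8), 17)) == [(0, 2), (3, 4), (8, 17)]
assert included_intervals(((((None, 2), 4), 5), 6)) == [(2, 4), (5, 6)]
```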
<python>
2024-02-07 19:52:44
3
21,513
Simd
77,957,518
4,408,275
How do I add a `ttk.Notebook` widget to a `tk.Frame`?
<p>I can create this GUI</p> <pre><code>┏━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━┓ ┃ @ ┃ hello world ┃ X ┃ ┣━━━┻━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┻━━━┫ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ &lt;- Frame 0 ┃ ┃ ┃ ┃ ┃ ┃ ┣━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┫ ┃ OTHER ┃ &lt;- Frame 1 ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ </code></pre> <p>with the code shown below<sup>1</sup>.</p> <p>However, I want to add a <code>ttk.Notebook</code> to the <code>Frame 0</code>.</p> <pre><code>┏━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━┓ ┃ @ ┃ hello world ┃ X ┃ ┣━━━┻━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┻━━━┫ ┃ ┏━━━━━━━┳━━━━━━━┳━━━━━━━━━┳━━━━━━━┓ ┃ ┃ ┃ Tab 0 ┃ Tab 1 ┃ Tab ... ┃ Tab n ┃ ┃ ┃ ┣━━━━━━━┻━━━━━━━┻━━━━━━━━━┻━━━━━━━┫ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ ┃ &lt;- Frame 0 with n tabs (Notebooks) ┃ ┃ ┃ ┃ ┃ ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ ┃ ┣━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┫ ┃ OTHER ┃ &lt;- Frame 1 ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ </code></pre> <p>Just adding the <code>ttk.Notebook</code> to the <code>tk.Frame</code> instance does not seem to work (i.e., <code>tabControl = ttk.Notebook(self.frame0)</code>). So, how do I add a <code>ttk.Notebook</code> widget to a <code>tk.Frame</code>?</p> <hr /> <p><sup>1</sup> GUI code:</p> <pre class="lang-py prettyprint-override"><code>import tkinter as tk class MyApp(tk.Tk): def __init__(self): super().__init__() self.title(&quot;hello world&quot;) self.frame0 = tk.Frame(self, background=&quot;green&quot;) self.frame1 = tk.Frame(self, background=&quot;blue&quot;) self.frame0.pack(side=&quot;top&quot;, fill=&quot;both&quot;, expand=True) self.frame1.pack(side=&quot;top&quot;, fill=tk.X, expand=True) self.frame0_label0 = tk.Label(self.frame0, text=&quot; &quot;) self.frame0_label0.grid(row=0, column=0) self.frame1_label0 = tk.Label(self.frame1, text=&quot;OTHER&quot;) self.frame1_label0.grid(row=0, column=0) def main(): MyApp().mainloop() if __name__ == &quot;__main__&quot;: main() </code></pre>
<python><tkinter>
2024-02-07 19:45:56
1
1,419
user69453
77,957,504
5,594,008
Django, Subquery, more than one row returned by a subquery used as an expression
<p>Here is my model structure</p> <pre><code>class ContentFlag(models.Model): user = models.ForeignKey( User, verbose_name=&quot;user&quot;, related_name=&quot;object_flags&quot;, on_delete=models.CASCADE, ) flag = models.CharField(max_length=30, db_index=True) content_type = models.ForeignKey( ContentType, on_delete=models.CASCADE ) object_id = models.PositiveIntegerField() content_object = GenericForeignKey(&quot;content_type&quot;, &quot;object_id&quot;) class Article(models.Model): owner = models.ForeignKey( settings.AUTH_USER_MODEL, verbose_name=_(&quot;owner&quot;), null=True, blank=True, editable=True, on_delete=models.SET_NULL, related_name=&quot;owned_pages&quot;, ) flags = GenericRelation(ContentFlag, related_query_name=&quot;%(class)s_flags&quot;) title = models.CharField(max_length=255) class User(index.Indexed, AbstractUser): email = EmailField(verbose_name=&quot;email&quot;, unique=True, null=True, blank=True) name = models.CharField( blank=True, max_length=100, ) flags = GenericRelation(to=&quot;core.ContentFlag&quot;) </code></pre> <p>I need to calculate User rating based on Flags number. Here is what I do for that</p> <pre><code> subquery_plus = ContentFlag.objects.filter( Q(article_flags__owner_id=OuterRef(&quot;id&quot;)), flag=LIKEDIT_FLAG, ).values(&quot;id&quot;) top_users = ( User.objects.annotate( rating_plus=Count(Subquery(subquery_plus)), ).order_by(&quot;-rating_plus&quot;) ) </code></pre> <p>But I got an error</p> <pre><code>django.db.utils.ProgrammingError: more than one row returned by a subquery used as an expression </code></pre> <p>How can I fix that?</p>
<python><django>
2024-02-07 19:42:47
1
2,352
Headmaster
77,957,365
543,913
Embedding Bokeh server into Flask app while retaining plot size
<p>I found an example in the Bokeh github of a Flask app that embeds a Bokeh server <a href="https://github.com/bokeh/bokeh/blob/branch-3.4/examples/server/api/flask_embed.py" rel="nofollow noreferrer">here</a>. When I run it and open the Flask URL in Google Chrome, I see the embedded plot. When I directly open the Bokeh URL in my Google Chrome, I also see the plot. However, the two plots are of different size. Below are two screenshots of the same size that illustrate this.</p> <p>The first one is the direct Bokeh plot, and represents the correct size. The second one is the embedded plot and is too small. <strong>How do I fix the example so that the embedded plot size matches the standalone plot size?</strong></p> <p>I should note that in Firefox, the plots appear to be the same size.</p> <p><a href="https://i.sstatic.net/5PuxQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5PuxQ.png" alt="Standalone Bokeh" /></a> <a href="https://i.sstatic.net/g6llQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g6llQ.png" alt="Embedded Bokeh" /></a></p>
<python><google-chrome><flask><bokeh>
2024-02-07 19:17:11
1
2,468
dshin
77,957,324
14,170,672
Why is zoneinfo shifting DST by 2 hours instead of 1?
<p>I need to make sure that when a naive datetime is made aware the DST rules are applied correctly. What I find here is just odd to me.</p> <p>Somehow zoneinfo decides to increment the DST transition for America/New_York by 2 hours instead of one. It is even shifting the time right before DST to EDT by 2 hours.</p> <p><a href="https://www.timeanddate.com/time/change/usa/new-york?year=2022" rel="nofollow noreferrer">for DST New York shifts to 3am at 2am</a></p> <pre class="lang-py prettyprint-override"><code>from zoneinfo import ZoneInfo from datetime import datetime, timedelta def zmake_timezone_obj(tz): return ZoneInfo(tz) def zconvert_to_timezone(naive_dt, timezone_str): tz_obj = zmake_timezone_obj(timezone_str) return naive_dt.astimezone(tz_obj) dst_start = datetime(2022, 3, 13, 2, 1) zconverted = zconvert_to_timezone(dst_start, &quot;America/New_York&quot;) print(zconverted, zconverted.tzname()) zconverted_add = zconverted + timedelta(hours=1) print(zconverted_add, zconverted_add.tzname()) print() dst_start = datetime(2022, 3, 13, 1, 59) zconverted = zconvert_to_timezone(dst_start, &quot;America/New_York&quot;) print(zconverted, zconverted.tzname()) zconverted_add = zconverted + timedelta(hours=1) print(zconverted_add, zconverted_add.tzname()) Output: 2022-03-13 04:01:00-04:00 EDT # should be: 2022-03-13 03:01:00-04:00 EDT 2022-03-13 05:01:00-04:00 EDT # should be: 2022-03-13 04:01:00-04:00 EDT 2022-03-13 03:59:00-04:00 EDT # should be: 2022-03-13 01:59:00-05:00 EST 2022-03-13 04:59:00-04:00 EDT # should be: 2022-03-13 03:59:00-04:00 EDT </code></pre> <p>This is using tzdata version 2023.4 and python version 3.11.4.</p> <p>What is going on here and how can I correct it?</p>
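Editor's note: `astimezone()` first interprets a naive datetime in the machine's local zone and then converts it to New York, which is where the extra shift comes from (the machine here is apparently not in Eastern time, an inference from the output). To make a naive datetime aware without converting it, attach the zone with `replace()`:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

NY = ZoneInfo("America/New_York")
naive = datetime(2022, 3, 13, 1, 59)

# astimezone() means "interpret as system-local time, THEN convert to NY":
# two shifts. replace() attaches the zone without any conversion:
aware = naive.replace(tzinfo=NY)
assert str(aware) == "2022-03-13 01:59:00-05:00"
assert aware.tzname() == "EST"

after_gap = datetime(2022, 3, 13, 3, 1, tzinfo=NY)  # just after DST starts
assert after_gap.tzname() == "EDT"
```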
<python><datetime><timezone><zoneinfo><tzdata>
2024-02-07 19:08:17
2
870
Stryder
77,957,253
2,687,317
plot a slice of 3D data with pcolormesh
<p>I have data in 2 numpy arrays: one a list of 3D positions, the other the values of a scalar AT each of those positions. The ordering of the position data is fairly 'weird' (see below).</p> <p>The 3D position data is in an array:</p> <pre><code>pos = np.array([[1,1,1],[1,1,2],[1,1,3],[1,2,2],[1,2,1], ...]) pos.shape is (100000,3) </code></pre> <p>where the ordering is not intuitive (it is following a space filling curve).</p> <p>and I also have the scalar values that I want to plot at each of those locations:</p> <pre><code>vel = np.array([1,2,1,3,4,...]) vel.shape = (1000000,1) </code></pre> <p>My question is, how do I plot an xy slice with pcolormesh from this data??? I can extract an xy plane with numpy:</p> <pre><code>xs = pos[:,:1][:,0] ys = pos[:,1:2][:,0] </code></pre> <p>Now I have a bunch of essentially random x coords and y coords that no long map 1-1 to the <code>vel</code> data... :/. I don't know how to first map those initial positions to my <code>vel</code> data, so that I can generate a <code>pcolormesh</code>:</p> <pre><code>plt.pcolormesh(X, Y, V) </code></pre> <p>Can someone help me slice this data up, so everything stays mapped to the right position in xy (and z) space?</p>
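Editor's sketch: the rows of `pos` and `vel` stay paired by index, so you can mask one z-plane and scatter the values into a dense grid; this toy example assumes integer grid coordinates starting at 0 (with arbitrary coordinates you would map them to indices via `np.unique(..., return_inverse=True)` first):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# every (x, y, z) combination, then scrambled to mimic the space-filling order
pos = np.array([[x, y, z] for x in range(n) for y in range(n) for z in range(n)])
rng.shuffle(pos)
vel = pos[:, 0] + 10 * pos[:, 1]   # scalar encodes (x, y) so we can verify below

z_slice = 2
mask = pos[:, 2] == z_slice        # rows of pos/vel stay paired by index
xs, ys, vs = pos[mask, 0], pos[mask, 1], vel[mask]

V = np.full((n, n), np.nan)        # dense grid indexed as (y, x)
V[ys, xs] = vs
assert not np.isnan(V).any()
assert V[3, 1] == 1 + 10 * 3       # the value for x=1, y=3 landed in its cell
# plt.pcolormesh(V) would then draw the slice (matplotlib assumed available)
```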
<python><numpy><matplotlib><slice>
2024-02-07 18:51:48
1
533
earnric
77,957,242
13,611,327
how to arbitrarily sort the radial plot values in altair?
<p>I'm building a radial chart in python but can't order the values of the plot based on the 'categoria' values. I already tried to sort the df and force through domain and sort in the altair code but can't get the desired result. What I'm doing wrong?</p> <p><a href="https://i.sstatic.net/MBJQI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBJQI.png" alt="radial plot" /></a></p> <p>Here's a sample of my current dataframe:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>index</th> <th>da_tipo_servicio_salud</th> <th>categoria</th> <th>porcentaje</th> <th>rubro</th> </tr> </thead> <tbody> <tr> <td>57</td> <td>En ambos, público y privado</td> <td>Menos de $200 pesos</td> <td>44.44444444444444</td> <td>Transporte</td> </tr> <tr> <td>36</td> <td>En ambos, público y privado</td> <td>Menos de $200 pesos</td> <td>7.142857142857142</td> <td>Medicamentos</td> </tr> <tr> <td>15</td> <td>En ambos, público y privado</td> <td>Menos de $200 pesos</td> <td>3.571428571428571</td> <td>Citas médicas</td> </tr> <tr> <td>48</td> <td>En ambos, público y privado</td> <td>Entre $200 y $500 pesos</td> <td>29.629629629629626</td> <td>Transporte</td> </tr> </tbody> </table></div> <p>and the altair code I'm using:</p> <pre class="lang-py prettyprint-override"><code> orden_monto = ['Menos de $200 pesos', 'Entre $200 y $500 pesos', 'Entre $500 y $800 pesos', 'Entre $800 y $1,000 pesos', 'Entre $1,000 y $1,500 pesos', 'Entre $1,500 y $2,000 pesos', 'Más de $2,000 pesos'] base = alt.Chart(df_combinado_orden_ambos).transform_filter( alt.datum.rubro == 'Medicamentos' ).encode( theta=alt.Theta(field='porcentaje', type='quantitative', stack=True), radius=alt.Radius(field='porcentaje', type='quantitative', scale=alt.Scale(type='sqrt', zero=True, rangeMin=100)), color=alt.Color('categoria:N', scale=alt.Scale(scheme='blues', domain=orden_monto), sort=orden_monto) ).properties( title={ 'text': ['Distribución porcentual por categoría'], 'subtitle': 
['Medicamentos'], 'anchor': 'start', 'offset': 20 } ) c1 = base.mark_arc(innerRadius=20, cornerRadius=5, stroke=&quot;#fff&quot;) c2 = base.mark_text(radiusOffset=25).encode(text=alt.Text('porcentaje:Q', format='.0f')) final_chart = c1 + c2 final_chart.display() </code></pre>
<python><altair>
2024-02-07 18:49:29
1
303
Jay Ballesteros C.
77,957,153
12,704,700
How to mock S3 last_modified for a file in unit tests?
<p>I would like to test the <code>last_modified</code> timestamp of an S3 file in a unit test. When I try to get the timestamp for an S3 file, it always gives me the time the unit test ran, but I have multiple S3 files and I am planning to use their timestamps in my assertions. The latest timestamp will not help in this case, as all of them are the same; even using <code>freeze_time</code> gives me a single timestamp for all the files, whereas I need a specific one per file. The docs do not say much about testing this, but I found <a href="https://github.com/getmoto/moto/issues/933" rel="nofollow noreferrer">https://github.com/getmoto/moto/issues/933</a> for reference, which also does not have enough information. Can someone help me set the value of <code>last_modified</code> and test the timestamp for an S3 file?</p>
<python><unit-testing><amazon-s3><boto3><moto>
2024-02-07 18:32:38
1
2,505
Sundeep
77,957,092
11,515,528
Getting peaks/troughs and charting
<p>I am analysing swings in data which resembles a sine wave.</p> <p>I have got a solution but am looking for something shorter.</p> <p>Currently I find the peaks and troughs, building a df for each and merging with the main data. I then make a series with the diffs and draw a box plot.</p> <p>Do I need to make those two dfs? It's a lot of code to do something simple...</p> <p>The reason for doing this is I want to be able to compare the swings in temperature data over time. With this method I will be able to go on and compare say Dec 22 with Dec 21.</p> <p>For this example I am using scipy data.</p> <pre><code>import pandas as pd from scipy.datasets import electrocardiogram from scipy.signal import find_peaks from scipy import ndimage temp = electrocardiogram()[200:300] temp = ndimage.gaussian_filter1d(temp, 2) peaks, _= find_peaks(temp) troughs, _= find_peaks(-temp) troughs_df = pd.DataFrame() troughs_df.index = troughs troughs_df['troughs'] = 'trough' peaks_df = pd.DataFrame() peaks_df.index = peaks peaks_df['peaks'] = 'peak' temp = pd.Series(temp, name='temp') temp = pd.merge(temp, troughs_df, left_index=True, right_index=True, how='outer') temp = pd.merge(temp, peaks_df, left_index=True, right_index=True, how='outer') temp['peaks and troughs'] = temp[['peaks', 'troughs']].fillna('').sum(axis=1) temp.query(&quot;`peaks and troughs` != ''&quot;)['temp'].diff().plot.box() </code></pre> <p>input data from scipy</p> <pre><code>array([ 0.125, 0.16 , 0.165, 0.17 , 0.185, 0.22 , 0.225, 0.235, 0.22 , 0.23 , 0.25 , 0.28 , 0.27 , 0.25 , 0.255, 0.245, 0.235, 0.235, 0.25 , 0.23 , 0.245, 0.23 , 0.22 , 0.215, 0.18 , 0.12 , 0.075, 0.065, 0.06 , 0.055, 0.025, -0.005, -0.04 , -0.085, -0.135, -0.145, -0.14 , -0.12 , -0.12 , -0.16 , -0.195, -0.2 , -0.215, -0.2 , -0.2 , -0.215, -0.255, -0.27 , -0.225, -0.225, -0.24 , -0.25 , -0.23 , -0.19 , -0.18 , -0.2 , -0.21 , -0.215, -0.21 , -0.21 , -0.205, -0.225, -0.23 , -0.24 , -0.245, -0.23 , -0.23 , -0.21 , -0.2 , -0.185, -0.175, -0.175, -0.19
, -0.19 , -0.19 , -0.2 , -0.175, -0.13 , -0.09 , -0.095, -0.13 , -0.17 , -0.16 , -0.135, -0.1 , -0.09 , -0.115, -0.115, -0.08 , -0.04 , -0.065, -0.105, -0.12 , -0.12 , -0.11 , -0.105, -0.11 , -0.15 , -0.155, -0.16 ]) </code></pre> <p>output</p> <pre><code>temp troughs peaks peaks and troughs 13 0.3 NaN peak peak 48 -0.2 trough NaN trough 56 -0.2 NaN peak peak 63 -0.2 trough NaN trough 89 -0.1 NaN peak peak </code></pre> <p><a href="https://i.sstatic.net/S5IJh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/S5IJh.png" alt="enter image description here" /></a></p>
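Editor's sketch: the two merge DataFrames are not needed; concatenate and sort the peak and trough indices, then diff the values at those indices, and you have the swings directly (the hand-picked indices below stand in for the `find_peaks` output):

```python
import numpy as np

temp = np.array([0.1, 0.4, 0.2, -0.3, 0.0, 0.5, 0.1])

# stand-ins for find_peaks(temp)[0] and find_peaks(-temp)[0]:
peaks = np.array([1, 5])
troughs = np.array([3])

idx = np.sort(np.concatenate([peaks, troughs]))  # turning points, in order
swings = np.diff(temp[idx])                      # peak-to-trough differences
assert idx.tolist() == [1, 3, 5]
assert np.allclose(swings, [-0.7, 0.8])
```

With the real data, `pd.Series(np.diff(temp[idx])).plot.box()` replaces all of the merge code.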
<python><pandas>
2024-02-07 18:19:48
0
1,865
Cam
77,957,072
16,646,078
How to Determine Final State of Sequential Entries in a Pandas DataFrame Using Vectorized Operations?
<p>I have a pandas DataFrame with columns representing various attributes of data entries, including a timestamp &quot;dh_processamento_rubrica&quot;, a unique identifier &quot;inivalid_iderubrica&quot;, and an operation type &quot;operacao&quot;. The operations include &quot;inclusao&quot; (insertion), &quot;alteracao&quot; (modification), and &quot;exclusao&quot; (deletion).</p> <p>For each unique identifier, there can be multiple operations performed, where &quot;alteracao&quot; modifies existing entries created by &quot;inclusao&quot;, and &quot;exclusao&quot; removes entries. The modifications can include changing certain attributes (like &quot;codinccp_dadosrubrica&quot;) or updating the identifier itself using &quot;inivalid_nova_validade&quot; (in the example below updating &quot;inivalid_iderubrica&quot; from 2019-11-01 to 2019-01-01) in the second row. &quot;dh_processamento_rubrica&quot; is when the operation was performed).</p> <p>There can only be one entry for each distinct &quot;inivalid_iderubrica&quot;, but if it is changed, a new one can later be created with the old value.</p> <p>Here's a simplified version of my DataFrame:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>codinccp_dadosrubrica</th> <th>inivalid_iderubrica</th> <th>inivalid_nova_validade</th> <th>operacao</th> <th>dh_processamento_rubrica</th> </tr> </thead> <tbody> <tr> <td>11</td> <td>2019-11-01</td> <td>NaT</td> <td>inclusao</td> <td>2020-03-18 23:58:14</td> </tr> <tr> <td>11</td> <td>2019-11-01</td> <td>2019-01-01</td> <td>alteracao</td> <td>2020-05-14 17:27:06</td> </tr> <tr> <td>00</td> <td>2019-11-01</td> <td>NaT</td> <td>inclusao</td> <td>2020-06-07 23:46:07</td> </tr> <tr> <td>00</td> <td>2019-01-01</td> <td>NaT</td> <td>alteracao</td> <td>2021-07-15 19:57:42</td> </tr> <tr> <td>NaN</td> <td>2019-11-01</td> <td>NaT</td> <td>exclusao</td> <td>2021-08-13 15:31:56</td> </tr> </tbody> </table></div> <p>Code to generate DataFrame:</p> <pre 
class="lang-py prettyprint-override"><code>import pandas as pd data = {'codinccp_dadosrubrica': ['11', '11', '00', '00', None], 'inivalid_iderubrica': [pd.Timestamp('2019-11-01 00:00:00'), pd.Timestamp('2019-11-01 00:00:00'), pd.Timestamp('2019-11-01 00:00:00'), pd.Timestamp('2019-01-01 00:00:00'), pd.Timestamp('2019-11-01 00:00:00')], 'inivalid_nova_validade': [None, pd.Timestamp('2019-01-01 00:00:00'), None, None, None], 'operacao': ['inclusao', 'alteracao', 'inclusao', 'alteracao', 'exclusao'], 'dh_processamento_rubrica': [pd.Timestamp('2020-03-18 23:58:14'), pd.Timestamp('2020-05-14 17:27:06'), pd.Timestamp('2020-06-07 23:46:07'), pd.Timestamp('2021-07-15 19:57:42'), pd.Timestamp('2021-08-13 15:31:56')]} df = pd.DataFrame(data) </code></pre> <p>As per the data, &quot;inclusao&quot; adds a new entry, &quot;alteracao&quot; modifies existing entries, and &quot;exclusao&quot; removes entries. Each &quot;alteracao&quot; references the original entry it modifies by the inivalid_iderubrica attribute. 
The modifications can either update existing attributes or change the inivalid_iderubrica.</p> <p>In the example:</p> <ol> <li>A new entry (i) is added with inivalid_iderubrica == 2019-11-01</li> <li>Entry (i) is updated to now have inivalid_iderubrica == 2019-01-01</li> <li>A new entry (ii) is added with inivalid_iderubrica == 2019-11-01</li> <li>Entry (i) is updated to change codinccp_dadosrubrica from 11 to 00</li> <li>Entry (ii) is deleted</li> </ol> <p>I need to determine the final state of each entry after all the operations have been applied, considering that the modifications are sequential (according to the dates on &quot;dh_processamento_rubrica&quot;) and may affect subsequent operations.</p> <p>The final state should look something like:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>codinccp_dadosrubrica</th> <th>inivalid_iderubrica</th> <th>inivalid_nova_validade</th> <th>operacao</th> <th>dh_processamento_rubrica</th> </tr> </thead> <tbody> <tr> <td>00</td> <td>2019-01-01</td> <td>NaT</td> <td>alteracao</td> <td>2021-07-15 19:57:42</td> </tr> </tbody> </table></div> <p><strong>Edit:</strong></p> <p>I have the following solution using iterrows, which works but is unfortunately too slow, as I need to process DataFrames which sometimes have millions of rows:</p> <pre class="lang-py prettyprint-override"><code>df = df.sort_values(by=['dh_processamento_rubrica']).reset_index(drop=True) df_alt_exc = df[df['operacao'] != 'inclusao'] df = df[df['operacao'] == 'inclusao'] for _, row in df_alt_exc.iterrows(): to_update = df[(df['inivalid_iderubrica'] == row['inivalid_iderubrica'])] if row['operacao'] == 'alteracao': if to_update.size &gt; 0: if not pd.isna(row['inivalid_nova_validade']): row['inivalid_iderubrica'] = row['inivalid_nova_validade'] index_to_update = to_update.index[0] df.loc[index_to_update] = row if row['operacao'] == 'exclusao': if to_update.size &gt; 0: index_to_update = to_update.index[0] df = 
df.drop(index_to_update) </code></pre> <p>I need to know if there's a way of solving this using <strong>vectorized operations</strong> (or any other approach that would be similarly as efficient).</p>
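Editor's note: because an `alteracao` can change the key that later rows refer to, the replay is inherently sequential, but it does not need per-row DataFrame filtering. A single pass over a dict keyed by `inivalid_iderubrica` is O(n); sketched here with plain dicts and shortened column names (with pandas, feed it `df.itertuples()` and rebuild a DataFrame from `state.values()` at the end):

```python
rows = [  # shortened columns; already sorted by dh_processamento_rubrica
    {"cod": "11", "ini": "2019-11-01", "nova": None,         "op": "inclusao"},
    {"cod": "11", "ini": "2019-11-01", "nova": "2019-01-01", "op": "alteracao"},
    {"cod": "00", "ini": "2019-11-01", "nova": None,         "op": "inclusao"},
    {"cod": "00", "ini": "2019-01-01", "nova": None,         "op": "alteracao"},
    {"cod": None, "ini": "2019-11-01", "nova": None,         "op": "exclusao"},
]

state = {}  # live entries, keyed by inivalid_iderubrica
for r in rows:
    key = r["ini"]
    if r["op"] == "inclusao":
        state[key] = dict(r)
    elif r["op"] == "alteracao" and key in state:
        new = dict(r)
        if r["nova"] is not None:     # the identifier itself is being changed
            del state[key]
            key = new["ini"] = r["nova"]
        state[key] = new
    elif r["op"] == "exclusao":
        state.pop(key, None)

final = list(state.values())
assert final == [{"cod": "00", "ini": "2019-01-01", "nova": None, "op": "alteracao"}]
```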
<python><pandas><dataframe>
2024-02-07 18:13:40
2
7,595
e-motta
77,956,980
14,456,476
Getting ReplicaSetNoPrimary error for M0 cluster when using Django with MongoEngine
<p>I am using django with mongoengine. I am writing the following in the settings.py file:</p> <pre><code>from mongoengine import connect URI = 'mongodb+srv://myusername:mypassword@cluster0.5apjp.mongodb.net/django?retryWrites=true&amp;w=majority&amp;ssl=false' connect(host=URI) </code></pre> <p>After that, I have a model as follows:</p> <pre><code>from mongoengine import Document, StringField class User(Document): first_name = StringField(max_length=50) last_name = StringField(max_length=50) meta = { 'collection': 'users' } </code></pre> <p>I have a view as follows:</p> <pre><code>def adduser(request): userDict = json.loads(request.body) newUser = User(first_name=userDict['firstName'],last_name=userDict['lastName']) newUser.save() return HttpResponse('user added') </code></pre> <p>When this view function is called, I get an error as follows:</p> <pre><code>ServerSelectionTimeoutError( pymongo.errors.ServerSelectionTimeoutError: cluster0-shard-00-02.5apjp.mongodb.net:27017: connection closed (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms),cluster0-shard-00-01.5apjp.mongodb.net:27017: connection closed (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms),cluster0-shard-00-00.5apjp.mongodb.net:27017: connection closed (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms), Timeout: 30s, Topology Description: &lt;TopologyDescription id: 65c3bdc13c9136a1191890e1, topology_type: ReplicaSetNoPrimary, servers: [&lt;ServerDescription ('cluster0-shard-00-00.5apjp.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('cluster0-shard-00-00.5apjp.mongodb.net:27017: connection closed (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms)')&gt;, &lt;ServerDescription ('cluster0-shard-00-01.5apjp.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('cluster0-shard-00-01.5apjp.mongodb.net:27017: connection closed (configured timeouts: 
socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms)')&gt;, &lt;ServerDescription ('cluster0-shard-00-02.5apjp.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('cluster0-shard-00-02.5apjp.mongodb.net:27017: connection closed (configured timeouts: socketTimeoutMS: 20000.0ms, connectTimeoutMS: 20000.0ms)')&gt;]&gt; [07/Feb/2024 23:02:38] &quot;POST /user/adduser HTTP/1.1&quot; 500 115166 </code></pre> <p>I am using a mongodb free M0 cluster with database named as 'django' and collection named as 'users'.</p> <p>If I use a non-SRV URI string like localhost:27017, it works fine. But when I use a SRV URI, its giving me this error.</p> <p>Moreover, I have added 0.0.0.0/0 in the IP access list of Network Access tab of MongoDB Atlas UI.</p> <p>Please help me to get rid of this error so that I can proceed with basic CRUD operations on MongoDB using Django with MongoEngine.</p>
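Editor's note, a likely (unverified) cause: the `mongodb+srv` scheme implies TLS, but the URI also says `ssl=false`; Atlas only accepts TLS connections and closes everything else, which surfaces exactly as these repeated "connection closed" / ReplicaSetNoPrimary errors. A sketch of removing the option (credentials are the placeholders from the post):

```python
# placeholders from the post; a real password should never be hard-coded
bad = ("mongodb+srv://myusername:mypassword@cluster0.5apjp.mongodb.net/"
       "django?retryWrites=true&w=majority&ssl=false")
uri = bad.replace("&ssl=false", "")
assert "ssl=false" not in uri
assert uri.startswith("mongodb+srv://")
# connect(host=uri)  # mongoengine connect, leaving TLS at its (required) default
```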
<python><django><mongodb><crud><mongoengine>
2024-02-07 17:57:33
2
378
Dhritiman Tamuli Saikia
77,956,832
7,654,773
Why is Kivy UrlRequest callback blocking GUI thread?
<p>I have a Kivy app with a screen that has a download button which downloads and processes various tables of remote data. It all works just fine, but the GUI is getting blocked in the process, as evidenced by an MDSpinner that freezes about halfway through until the download process is completely finished.</p> <p>The entire download process is being run in another thread that does not touch the GUI, so that should not happen.</p> <p>My troubleshooting has found that the UrlRequest callback is doing the blocking. Even a simple sleep in the callback hangs the GUI!</p> <p>Any idea why? Do I need to run the callback(s) in yet another thread?</p> <p>Since the real code is pretty complex, here is some simple pseudocode to demonstrate the problem.</p> <pre><code>from kivy.network.urlrequest import UrlRequest from functools import partial from kivy.clock import Clock, mainthread from threading import Thread from kivymd.uix.screen import MDScreen import time class myScreen(MDScreen): def on_button(self): # activate spinner when download button pressed self.ids.spinner.active = True # run sync stuff in new thread t = Thread(target=self.sync_thread) t.start() def sync_thread(self): # lots of code here including several async downloads like this: self.download(some_url) # when all downloads are done, kill spinner self.kill_spinner() @mainthread def kill_spinner(self): # uses decorator to set gui widget from another thread self.ids.spinner.active = False # URLRequest code ----------------------------- def download(self, url): cb = partial(self.mycallback, arg) UrlRequest(url=url, req_headers=myconfig.headers, timeout=timeout, on_success=cb, on_error=cb, on_failure=cb) def mycallback(self, arg): # used to process data from each download # PROBLEM LIES HERE! Anything here - even a simple sleep will hang spinner(gui) until finished time.sleep(5) </code></pre>
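Editor's note — a likely explanation and a framework-free sketch of the usual workaround. As far as the Kivy docs describe it, `UrlRequest` performs the download in its own background thread but, by default, dispatches the `on_success`/`on_failure`/`on_error` callbacks back onto the main (GUI) thread via the Clock — so anything slow inside the callback freezes widgets such as the spinner, no matter which thread started the request. One fix is to keep the callback trivial and hand the payload to a worker thread. The names below (`heavy_processing`, `CallbackOffloader`) are illustrative, not part of Kivy:

```python
import queue
import threading

def heavy_processing(payload):
    # Stand-in for the expensive parsing currently done inside `mycallback`.
    return payload.upper()

class CallbackOffloader:
    """Keep the main-thread callback trivial: just hand the data to a worker."""

    def __init__(self):
        self.results = queue.Queue()  # worker -> consumer hand-off

    def on_success(self, payload):
        # This is all the main-thread callback does; it returns immediately,
        # so the GUI event loop is never blocked by the processing.
        threading.Thread(target=self._work, args=(payload,), daemon=True).start()

    def _work(self, payload):
        self.results.put(heavy_processing(payload))
```

In the app, `mycallback` would only spawn the worker as above, and the worker would call the `@mainthread`-decorated `kill_spinner()` once the queue of pending downloads drains.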
<python><kivy><kivy-language>
2024-02-07 17:34:02
2
696
Bill
77,956,750
4,211,950
Python Project Euler #23 Incorrect Answer
<p>I have tried solving <a href="https://projecteuler.net/problem=23" rel="nofollow noreferrer">Project Euler Problem 23</a> in python but I am getting an answer that is below the correct one by 995. The wrong answer that I get is due to the fact that my final list does not include 12, 18, 20 and 945. However, these numbers are abundant and should not be in the final answer anyway. Therefore, the answer in Project Euler is incorrect or my logic is grossly wrong. The Project Euler Problem is,</p> <blockquote> <p>A number n is called deficient if the sum of its proper divisors is less than n and it is called abundant if this sum exceeds n.</p> <p>As 12 is the smallest abundant number, 1+2+3+4+6 = 16, the smallest number that can be written as the sum of two abundant numbers is 24. By mathematical analysis, it can be shown that all integers greater than 28123 can be written as the sum of two abundant numbers. However, this upper limit cannot be reduced any further by analysis even though it is known that the greatest number that cannot be expressed as the sum of two abundant numbers is less than this limit.</p> <p>Find the sum of all the positive integers which cannot be written as the sum of two abundant numbers.</p> </blockquote> <p>I am fairly certain that my divisors code works well as many have the same implementation,</p> <pre class="lang-py prettyprint-override"><code>def sum_of_divisors(n): sum = 1 for i in range(2,int(n**0.5)+1): if n%i == 0: if i == n//i: sum += i else: sum += i + n//i return sum </code></pre> <p>My problem therefore lies in my main code, to determine if a number is abundant or not,</p> <pre class="lang-py prettyprint-override"><code>def find_non_abd_sum(input): sum = 1 #sum_set = set([1]) abd_lst = set([]) for n in range(2, input): if sum_of_divisors(n) &gt; n: abd_lst.add(n) else: found = any((n-i) in abd_lst for i in abd_lst) if not found: sum += n #sum_set.add(n) return sum </code></pre> <p>Unlike many popular choices, I am only 
iterating through the numbers 1 to 28123, and <strong>not</strong> creating a list of all known sums. My thinking was that since we are asking if a number is a sum of two others, those two must be lower than it, and thus we must have already iterated them.</p> <p>Specifically, if <code>x</code> is not abundant but is the sum of two abundant numbers, then there exists an <code>y, z</code> such that <code>y+z=x</code>. Therefore, both <code>y</code> and <code>z</code> <strong>must</strong> be less than <code>x</code>, and thus must have already been iterated before <code>x</code>.</p> <p>To find if a number is not abundant, I only subtract an abundant number from <code>abd_lst</code>. If the number is abundant, then the difference of this sum should be in <code>abd_lst</code> as well. This gives the logic behind the line <code>any((n-i) in abd_lst for i in abd_lst)</code>.</p> <p>My problem is that my answer is exactly 995 below, specifically, I get 4178876. The amount of abundant numbers I got is correct, 6975, and the sets match. The problem should lie in <code>find_non_abd_sum</code>, and more specifically the second part of the if statement. I am completely confused as to why this is wrong, given the fairly sound logic I set out.</p> <p>Given that I am so close to the answer, I decided to implement other answers to see which numbers I am missing in <code>sum_set</code>. Here is where it gets weird. I implemented many different solutions on the forum, and those on overflow, I even got a similar one when I used Chat-GPT. These solutions all use the idea to generate a list of all possible sums using known abundant numbers. The <strong>only</strong> numbers that are in those sets and not mine are 12, 18, 20 and 945. However, all of these numbers are abundant and should not be in the final answer. They are respectively the first 3 abundant numbers and the first odd one. 
Interestingly, the sum of those numbers is 995, the exact amount by which I am below the supposed correct answer. Anyone got any insights as to why this is the case, and where I've gone wrong? After all, these numbers shouldn't be in the final answer anyway as they are abundant, but every other correct answer seems to have them.</p> <p>I have therefore misunderstood the question, or there has been an oversight as to the final answer in Project Euler, which is a stretch given it's been solved almost 200K times.</p> <p>P.S: Before voting to close, this <em>exact</em> problem has not been asked. One, every 'correct solution' on Stack Overflow, no matter the language, that I've seen contains 12, 18, 20 and 945. Two, I am not asking how to solve this problem faster. On the input 20161 this code takes me about 0.31 seconds to run, the fastest I've seen on Stack Overflow and the Project Euler forums.</p>
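Editor's note on the likely misreading, with a checkable sketch: the problem sums every positive integer that cannot be written as a sum of two abundant numbers — being abundant does not disqualify a number from that sum. 12 is abundant, yet the smallest sum of two abundant numbers is 12 + 12 = 24, so 12 cannot be written as such a sum and must be counted; likewise 18, 20 and 945 (945 is the smallest odd abundant number, and 12 + 18 + 20 + 945 = 995, exactly the shortfall). The code above sends every abundant `n` into the first branch and never adds it to the total, which is where the 995 goes missing:

```python
def sum_proper_divisors(n):
    """Sum of the proper divisors of n (valid for n >= 2)."""
    total, i = 1, 2
    while i * i <= n:
        if n % i == 0:
            total += i
            if i != n // i:
                total += n // i
        i += 1
    return total

def abundant_up_to(limit):
    """Set of abundant numbers <= limit (12 is the smallest)."""
    return {n for n in range(12, limit + 1) if sum_proper_divisors(n) > n}

def is_sum_of_two_abundant(n, abundant):
    """True if n = a + b for some abundant a, b (a == b allowed)."""
    return any(n - a in abundant for a in abundant if a < n)
```

So the fix is to run the `is_sum_of_two_abundant` check for every `n` — abundant or not — and only skip adding `n` to the running total when the check succeeds.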
<python>
2024-02-07 17:20:43
1
347
Michael Mooney
77,956,528
960,115
OpenPyXL coordinate_to_tuple Method Fails on Absolute Addresses
<p>Fairly simple <em>OpenPyXL</em> question... how can I change the following line to return the address tuple for an absolute address cell? I could manually remove the dollar signs, but it seems like that is a dirty work-around for something that seems like a bug in <em>OpenPyXL</em>.</p> <pre><code># The following line results in the error: # Traceback (most recent call last): # ... # coordinates = coordinate_to_tuple('$A$2') # File &quot;/usr/local/lib/python3.8/dist-packages/openpyxl/utils/cell.py&quot;, line 202, in coordinate_to_tuple # return int(row), _COL_STRING_CACHE[col] # KeyError: '$A$' coordinates = coordinate_to_tuple('$A$2') # The following are all hacky work-arounds to return a tuple (row, col) # Working hack 1 coords = coordinate_to_tuple('$A$2'.replace('$', '')) # Working hack 2 coords = range_boundaries('$A$2')[1:3] # Working hack 3 coords = coordinate_from_string('$A$2') coords = (coords[1], column_index_from_string(coords[0])) </code></pre> <p>Perhaps there is a method that undoes a call to <code>absolute_coordinate</code> that I'm missing in the <a href="https://openpyxl.readthedocs.io/en/latest/api/openpyxl.utils.cell.html" rel="nofollow noreferrer">OpenPyXL utils documentation</a>? Or is the literal dollar sign defined in some constant that is part of the public API? I'll push forward with hack #1 unless someone suggests a better way.</p>
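Editor's note — a hedged alternative to hack #1: since the dollar signs carry no information for the tuple conversion, a small standalone helper can accept both relative and absolute references. This reimplements the conversion rather than calling openpyxl (the regex and the base-26 loop below are my own, not openpyxl API):

```python
import re

def coordinate_to_tuple_abs(coord):
    """Like openpyxl's coordinate_to_tuple, but tolerant of '$A$2'-style refs.

    Returns (row, col), both 1-based, matching openpyxl's convention.
    """
    m = re.fullmatch(r"\$?([A-Za-z]{1,3})\$?(\d+)", coord)
    if m is None:
        raise ValueError(f"invalid coordinate: {coord!r}")
    col_letters, row = m.group(1).upper(), int(m.group(2))
    col = 0
    for ch in col_letters:  # base-26 column letters: A=1, ..., Z=26, AA=27
        col = col * 26 + (ord(ch) - ord("A") + 1)
    return row, col
```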
<python><openpyxl>
2024-02-07 16:46:42
1
4,735
Jeff G
77,956,505
23,361,424
How to have multiple email backends in Django?
<p>I would like to have both SMTP backend and console backend, so that I can send an email via SMTP while also printing the generated email to console.</p> <p>In my settings.py I would like to have something like this:</p> <pre><code>EMAIL_BACKEND = [&quot;django.core.mail.backends.smtp.EmailBackend&quot;, &quot;django.core.mail.backends.console.EmailBackend&quot;] </code></pre>
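Editor's note — `EMAIL_BACKEND` only accepts a single dotted path, so the usual pattern is a small custom backend that fans out to several real backends. A framework-free sketch of the idea; in a real project this would subclass `django.core.mail.backends.base.BaseEmailBackend`, instantiate the SMTP and console backends itself, and be referenced as `EMAIL_BACKEND = "myapp.backends.FanoutBackend"` (module and class names hypothetical):

```python
class FanoutBackend:
    """Delegate send_messages() to several underlying email backends."""

    def __init__(self, backends):
        self.backends = list(backends)

    def send_messages(self, messages):
        # Each wrapped backend sees every message; report the best count so
        # callers still learn how many messages actually went out.
        sent = 0
        for backend in self.backends:
            sent = max(sent, backend.send_messages(messages) or 0)
        return sent
```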
<python><django><smtp>
2024-02-07 16:43:23
1
518
c p
77,956,061
2,460,464
Docker Networking with Jaeger all-in-one container using Python
<p>I have three containers (C, S, and J). C is the client, S is a SQL Server container, and J is J(aeger). All three containers are on a dedicated network.</p> <p>Traffic between C and S happens without problems. From the Docker Host (in this case, it's a VirtualBox VM), I can push metrics to J without issue. From C, I get timeouts when attempting to send data from C to J. Bizarrely enough, I CAN telnet from C to tcp/4317 on J. Alas, I cannot send our OTLP data from C to J.</p> <p>Although it's not a best practice (not at all), I'm attempting to use IP addresses (172.16.x.x/16). C, S, and J are on the same subnet. This is being done solely to reduce the complexity (n.b., I do not have the ability to modify /etc/hosts in the Ubuntu-based containers).</p> <p>The last &quot;bizarre&quot; thing is that I've seen that &quot;something&quot; resolves the hostnames to routable IP addresses (specifically, to a block owned by DigitalOcean). This happens only within the containers; it does not happen from the Docker Host (VM) nor from the Windows laptop running the VirtualBox VM.</p> <p>BTW: this is the error which is returned (several times until it ultimately times out):</p> <blockquote> <p>Transient error StatusCode.DEADLINE_EXCEEDED encountered while exporting traces to 172.19.0.3:4317, retrying in 1s.</p> </blockquote>
<python><docker><network-programming><open-telemetry><jaeger>
2024-02-07 15:36:00
1
485
user2460464
77,955,992
9,536,103
Ensuring transactional writes across two delta tables in databricks
<p>I want to create/update two delta tables in databricks. However, I want them both to be created at the same time such that the following can never occur:</p> <ol> <li>A user queries table 1 and table 2 and gets the original data from table 1 and the updated data from table 2</li> <li>A user queries table 1 and table 2 and gets the updated data from table 1 and the original data from table 2.</li> </ol> <p>The following is some dummy code to create/update both tables but this wouldn't guarantee the above two conditions are satisfied.</p> <pre><code>from pyspark.sql import SparkSession # Create SparkSession spark = SparkSession.builder.getOrCreate() # Create dummy data table1_data = [(1, &quot;A&quot;, 10.5, True), (2, &quot;B&quot;, 20.5, False), (3, &quot;C&quot;, 30.5, True), (4, &quot;D&quot;, 40.5, False)] table2_data = [(1, &quot;E&quot;, 50.5, True), (2, &quot;F&quot;, 60.5, False), (3, &quot;G&quot;, 70.5, True), (4, &quot;H&quot;, 80.5, False)] # Create dataframe table1_df = spark.createDataFrame(table1_data, [&quot;col1&quot;, &quot;col2&quot;, &quot;col3&quot;, &quot;col4&quot;]) table2_df = spark.createDataFrame(table2_data, [&quot;col1&quot;, &quot;col2&quot;, &quot;col3&quot;, &quot;col4&quot;]) # Write dataframe to delta table table1_df.write.format(&quot;delta&quot;).mode(&quot;overwrite&quot;).option(&quot;overwriteSchema&quot;, &quot;true&quot;).saveAsTable(&quot;table1&quot;) table2_df.write.format(&quot;delta&quot;).mode(&quot;overwrite&quot;).option(&quot;overwriteSchema&quot;, &quot;true&quot;).saveAsTable(&quot;table2&quot;) </code></pre> <p>How can I modify this so that the above two conditions are guaranteed</p>
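Editor's note — a Delta commit is atomic per table, not across tables, so out of the box the two `saveAsTable` calls above can be observed half-applied. One hedged workaround (aside from the multi-statement transaction support in newer Databricks runtimes, where available) is to record the version produced by each write and atomically publish a snapshot pointer that readers consult, then query both tables with `VERSION AS OF` the recorded versions. The pointer mechanics can be sketched without Spark — `os.replace` gives the required atomicity:

```python
import json
import os
import tempfile

def publish_snapshot(manifest_path, versions):
    """Atomically publish which version of each table readers should use.

    `versions` maps table name -> version number (e.g. the Delta table
    version captured after each write). os.replace() is atomic on both
    POSIX and Windows, so a reader always sees either the old snapshot or
    the new one, never a mix of the two.
    """
    directory = os.path.dirname(manifest_path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)
    with os.fdopen(fd, "w") as fh:
        json.dump(versions, fh)
    os.replace(tmp, manifest_path)

def read_snapshot(manifest_path):
    with open(manifest_path) as fh:
        return json.load(fh)
```

Readers then time-travel both tables to the versions in one snapshot, satisfying the two conditions by construction; the manifest path and version capture are assumptions about your deployment, not Delta API.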
<python><databricks>
2024-02-07 15:25:38
1
1,151
Daniel Wyatt
77,955,809
3,124,206
Creating a sensor in airflow using a virtual python environment
<p>I have been successfully using Airflow python tasks with a virtual environment thanks to the <code>@task.virtualenv(requirements=[...])</code> decorator.</p> <p>However, is it possible to also write sensors in python that make use of external environments? There is a <code>@task.sensor</code> decorator, but I can't figure out how to use it together with a virtual environment.</p> <p>To reproduce, here is an example of two DAGs, assuming a package (pymsteams in this case, but it could be anything not part of the airflow environment):</p> <pre class="lang-py prettyprint-override"><code>import datetime from airflow.decorators import dag, task @task.virtualenv(requirements=[&quot;pymsteams&quot;]) def virtual_env_task(): import pymsteams @task.sensor def virtual_env_sensor(): import pymsteams return True @dag(start_date=datetime.datetime(2024, 1, 1), catchup=False) def virtual_env_task_dag(): virtual_env_task() @dag(start_date=datetime.datetime(2024, 1, 1), catchup=False) def virtual_env_sensor_and_task_dag(): virtual_env_sensor() &gt;&gt; virtual_env_task() virtual_env_task_dag() virtual_env_sensor_and_task_dag() </code></pre> <p><code>virtual_env_task_dag</code> succeeds, as <code>pymsteams</code> gets installed in a virtual environment; <code>virtual_env_sensor_and_task_dag</code> however fails, as I can't figure out a way to incorporate a virtual environment into a sensor.</p>
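Editor's note — as far as I can tell there is no built-in `@task.sensor` + virtualenv combination, so one workaround is a plain sensor whose poke callable shells out to the virtualenv's interpreter and maps the exit status to the sensor's True/False contract. A minimal, Airflow-free sketch of that helper (the venv path and snippet are whatever your deployment needs):

```python
import subprocess
import sys

def poke_in_virtualenv(python_bin, snippet):
    """Evaluate a condition inside another interpreter (e.g. a virtualenv).

    Returns True when `snippet` exits with status 0, which is exactly the
    boolean a sensor's poke function is expected to return.
    """
    result = subprocess.run([python_bin, "-c", snippet])
    return result.returncode == 0
```

Inside the DAG it might then look like `@task.sensor def virtual_env_sensor(): return poke_in_virtualenv("/path/to/.venv/bin/python", "import pymsteams")` — hedged, since the sensor body itself still runs in the Airflow environment and only the snippet runs in the venv.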
<python><airflow>
2024-02-07 15:01:10
0
485
user3124206
77,955,698
18,793,790
How do I run a python script with virtual .venv from my sveltekit app?
<p>I am able to run the python script from my sveltekit app using <code>exec</code> with the code below. <code>const command = `python &quot;${pythonScriptPath}&quot; ${plateID} &quot;${formattedProdDate}&quot;`;</code> is the command where I run the python script. I need to run this python script in a virtual environment.</p> <p>In my python script I created a virtual environment by adding an empty .venv folder and running pipenv install. When I am in the directory I run <code>pipenv run python get-pdf.py string date</code>, which will run the python script in the virtual environment.</p> <p>So I tried to modify where I call the script in my sveltekit app to <code>const command = `pipenv run python &quot;${pythonScriptPath}&quot; ${plateID} &quot;${formattedProdDate}&quot;`</code>, but when I do this it creates a virtual environment for my sveltekit app instead of running the virtual environment I already have installed. How can I run the python script using the virtual environment that I already have installed inside the directory where the python script is?</p> <pre><code>import fs from 'fs/promises'; import { exec } from 'node:child_process'; import { prisma } from &quot;$lib/server/prisma.js&quot;; import { format } from 'date-fns'; export async function GET({ url }: { url: { searchParams: URLSearchParams } }) { const plateID = url.searchParams.get('plateID') || ''; console.log('Plate id:', plateID); // Retrieve production end date from the database const prodEndDateRaw = await prisma.$queryRaw&lt;{ PROD_END: Date }[]&gt;` SELECT PROD_END FROM LTDB.LTDB.PIECE_TIMES pt WHERE ID_PIECE = ${plateID} `; // Type casting and null check for prodEndDate const prodEndDate = prodEndDateRaw[0]?.PROD_END as Date | undefined; if (prodEndDate) { // Format production end date const formattedProdDate = format(prodEndDate, 'yyyy-MM-dd'); console.log(formattedProdDate); const pythonScriptPath = 'C:\\scripts\\HTLReportScript\\get-pdf.py'; // Execute the command to run the Python script inside the
virtual environment const command = `python &quot;${pythonScriptPath}&quot; ${plateID} &quot;${formattedProdDate}&quot;`; try { console.log('Executing Python script...'); await new Promise&lt;void&gt;((resolve, reject) =&gt; { exec(command, (error, stdout, stderr) =&gt; { if (error) { console.error(`Error executing Python script: ${error.message}`); console.error('Python script stderr:', stderr); reject(error); } else { console.log('Python script completed:', stdout); resolve(); } }); }); } catch (error: any) { console.error('Error:', error.message); return createServerErrorResponse('Internal Server Error'); } // Read the file from the local destination const localDestination = `C:\\scripts\\HTLReportScript\\Piece_${plateID}.pdf`; const fileData = await readFile(localDestination); // Set the appropriate headers for the response const headers = { 'Content-Disposition': `attachment; filename=Piece_${plateID}.pdf`, 'Content-Type': 'application/pdf', }; // Delete the file after sending it to the client // await fs.unlink(localDestination); // Send the file back to the client return new Response(fileData, { headers }); } else { console.error('Error: Production end date not found'); return createServerErrorResponse('Internal Server Error'); } } async function readFile(filePath: string): Promise&lt;Buffer&gt; { try { const data = await fs.readFile(filePath); return data; } catch (error: any) { console.error(`Error reading file: ${error.message}`); throw new Error('Internal Server Error'); } } function createServerErrorResponse(message: string) { return new Response(JSON.stringify({ error: message }), { status: 500, headers: { 'Content-Type': 'application/json' }, }); } </code></pre> <p>-----SOLUTION thanks to Mikko Ohtamaa------</p> <p>Thanks to Mikko I was able to get this fixed. I had to run the virtual environment by going into the directory where the python script was. 
I decided to keep my script in C:\scripts on both my dev machine and the server so I did not have to make any configurations when I moved the build to the server. I changed <code>const command = `python &quot;${pythonScriptPath}&quot; ${plateID} &quot;${formattedProdDate}&quot;`;</code> to <code>const command = `cd C:\\scripts\\HTLReportScript &amp;&amp; C:\\scripts\\HTLReportScript\\.venv\\Scripts\\activate &amp;&amp; python get-pdf.py ${plateID} ${formattedProdDate}`;</code> This opens the virtual env and then runs the python code. Here is the updated code:</p> <pre><code>import fs from 'fs/promises'; import { exec } from 'node:child_process'; import { prisma } from &quot;$lib/server/prisma.js&quot;; import { format } from 'date-fns'; export async function GET({ url }: { url: { searchParams: URLSearchParams } }) { const plateID = url.searchParams.get('plateID') || ''; console.log('Plate id:', plateID); // Retrieve production end date from the database const prodEndDateRaw = await prisma.$queryRaw&lt;{ PROD_END: Date }[]&gt;` SELECT PROD_END FROM LTDB.LTDB.PIECE_TIMES pt WHERE ID_PIECE = ${plateID} `; // Type casting and null check for prodEndDate const prodEndDate = prodEndDateRaw[0]?.PROD_END as Date | undefined; if (prodEndDate) { // Format production end date const formattedProdDate = format(prodEndDate, 'yyyy-MM-dd'); console.log(formattedProdDate); const pythonScriptPath = 'C:\\scripts\\HTLReportScript\\get-pdf.py'; // Execute the command to run the Python script inside the virtual environment const command = `cd C:\\scripts\\HTLReportScript &amp;&amp; C:\\scripts\\HTLReportScript\\.venv\\Scripts\\activate &amp;&amp; python get-pdf.py ${plateID} ${formattedProdDate}`; try { console.log('Executing Python script...'); await new Promise&lt;void&gt;((resolve, reject) =&gt; { exec(command, (error, stdout, stderr) =&gt; { if (error) { console.error(`Error executing Python script: ${error.message}`); console.error('Python script stderr:', stderr);
reject(error); } else { console.log('Python script completed:', stdout); resolve(); } }); }); } catch (error: any) { console.error('Error:', error.message); return createServerErrorResponse('Internal Server Error'); } // Read the file from the local destination const localDestination = `C:\\scripts\\HTLReportScript\\Piece_${plateID}.pdf`; const fileData = await readFile(localDestination); // Set the appropriate headers for the response const headers = { 'Content-Disposition': `attachment; filename=Piece_${plateID}.pdf`, 'Content-Type': 'application/pdf', }; // Delete the file after sending it to the client await fs.unlink(localDestination); // Send the file back to the client return new Response(fileData, { headers }); } else { console.error('Error: Production end date not found'); return createServerErrorResponse('Internal Server Error'); } } async function readFile(filePath: string): Promise&lt;Buffer&gt; { try { const data = await fs.readFile(filePath); return data; } catch (error: any) { console.error(`Error reading file: ${error.message}`); throw new Error('Internal Server Error'); } } function createServerErrorResponse(message: string) { return new Response(JSON.stringify({ error: message }), { status: 500, headers: { 'Content-Type': 'application/json' }, }); } </code></pre>
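Editor's note on the accepted fix — activating is not strictly necessary: running the virtualenv's own interpreter binary has the same effect, which avoids the `cd && activate &&` chain and works identically from `exec`. The directory layout differs per OS; a sketch of the lookup (directory names are the standard venv layout):

```python
import os

def venv_python(venv_dir):
    """Path to the interpreter inside a virtualenv, without activating it.

    Running this interpreter directly is equivalent to `activate && python`:
    activation mostly just prepends this directory to PATH.
    """
    if os.name == "nt":  # Windows venvs use Scripts\python.exe
        return os.path.join(venv_dir, "Scripts", "python.exe")
    return os.path.join(venv_dir, "bin", "python")
```

From the SvelteKit side the command then becomes something like `` `"${venvPython}" get-pdf.py ${plateID} ${formattedProdDate}` `` with the working directory passed via `exec`'s `cwd` option.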
<python><sveltekit><python-venv>
2024-02-07 14:43:11
1
346
Avi4nFLu
77,955,612
8,599,834
Is it possible to make a module unimportable/hidden at runtime?
<p>I have different sets of dependencies that are installed in different environments (i.e. core deps installed everywhere, plus optional deps just for linting, optionals only in production etc).</p> <p>Locally, I typically have all of them installed, so my code will never accidentally happen to fail importing something. But in production, I just run the code and don't have, for example, Mypy installed. If I happen to need to import something from such an optional dependency (to work around a bug, for example), I might forget to make the import conditional.</p> <p>Long story short, I want to write tests that forcefully make certain modules unavailable/unimportable/hidden, so that, while these tests run, importing those modules will result in an <code>ImportError</code>.</p> <p><strong>Is there any way to achieve this?</strong> Maybe there's a way to register some kind of hook that allows me to raise my own error whenever a certain module is imported from anywhere?</p> <p>I've already tried mangling <code>sys.modules</code>, but even deleting a module from it will not prevent me from importing it, and if I <code>del</code> the module itself it only makes it unavailable until I import it again.</p>
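Editor's note — the import system exposes exactly such a hook: a finder placed at the front of `sys.meta_path` is consulted before any other, and raising `ImportError` from its `find_spec` makes the module (and its submodules) unimportable for as long as the finder is installed. Deleting from `sys.modules` alone doesn't work because the next import statement simply re-imports; combining both does:

```python
import sys

class BlockedModuleFinder:
    """Meta-path finder that makes selected top-level modules unimportable."""

    def __init__(self, blocked_names):
        self.blocked = set(blocked_names)

    def find_spec(self, fullname, path=None, target=None):
        if fullname.split(".")[0] in self.blocked:
            raise ImportError(f"import of {fullname!r} is blocked for this test")
        return None  # not ours: let the normal finders continue

def block_modules(*names):
    """Install the finder and evict any already-imported copies."""
    finder = BlockedModuleFinder(names)
    sys.meta_path.insert(0, finder)
    for loaded in list(sys.modules):
        if loaded.split(".")[0] in finder.blocked:
            del sys.modules[loaded]
    return finder
```

In a pytest suite this would typically live in a fixture that inserts the finder in setup and calls `sys.meta_path.remove(finder)` in teardown.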
<python><mocking><python-import>
2024-02-07 14:32:39
0
2,742
theberzi
77,955,472
236,247
How do you mark an entire file as deprecated in Python?
<p>I am reviewing a .py file with no classes, just a list of functions. There is a newer version of this same file which is now used instead of the old file.</p> <p>I know that to mark a class or function as deprecated, one can do <code>from warnings import deprecated</code> and then decorate the function or class with <code>deprecated(&quot;Do not use this&quot;)</code>.</p> <p>There is not a 1-1 correspondence, so marking all the functions as deprecated individually seems weird. I could wrap the code in a class, deprecate this newly created class, and be done with it. However, it would be more natural to have a single annotation at the top of the file saying the entire contents of the file are to be ignored.</p> <p>I could also create a new package, say deprecated, and move it there, but I would rather keep the code in its original location.</p> <p>So is there a pythonic way to mark an entire file as deprecated?</p>
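Editor's note — there is no official whole-file marker (`warnings.deprecated`, added in Python 3.13 by PEP 702, targets functions, classes and overloads), so the conventional approach is a module-level `warnings.warn` executed at import time — one warning per import of the legacy file. A sketch; the module names are placeholders:

```python
import warnings

def warn_module_deprecated(old_name, new_name):
    """Emit the deprecation warning a legacy module should raise on import.

    Intended to be called as the first statement of the legacy file;
    stacklevel=3 aims the warning at the file doing the importing.
    """
    warnings.warn(
        f"module {old_name!r} is deprecated; use {new_name!r} instead",
        DeprecationWarning,
        stacklevel=3,
    )
```

Calling `warn_module_deprecated("old_utils", "new_utils")` at the top of the legacy module makes every `import old_utils` emit one `DeprecationWarning`; since Python caches modules, it fires at most once per process unless warning filters say otherwise.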
<python><deprecated>
2024-02-07 14:13:57
1
9,756
demongolem
77,955,428
5,761,843
Bilinear Fit on data using python
<p>I'm trying to fit a number depending on two inputs via a bilinear fit using python.</p> <p>I have random sets of data of x and y inputs and the corresponding z output.</p> <p>I need the output coefficients a, b, c, and d for</p> <pre><code>z(x,y) = a*x + b*y + c*x*y + d </code></pre> <p>The input data I have does not form a regular &quot;grid&quot;. It might have more elements in the x direction than in the y direction. Furthermore, there might be values missing. So basically, I have a random set of x and y pairs for which I need to find bilinear coefficients that match the given data as well as possible.</p> <p>I already tried:</p> <ul> <li><strong>from sklearn import linear_model</strong>: I realized too late that this does not allow a bilinear fit.</li> <li><strong>from scipy.interpolate import LinearNDInterpolator</strong>: This does work, but somehow does not allow using the full input range of parameters and I found no way of actually getting any coefficients out of it.</li> </ul> <p>I could calculate the bilinear coefficients based on 4 points manually, but this wouldn't give me the best result.</p> <p>Any ideas how to approach this problem? I'm completely stuck. Searching for &quot;bilinear fit python&quot; did not turn up much useful information.</p> <p>As requested. A little sample data:</p> <pre><code>x y z 1056 8 50.89124679 1056 16 61.62827273 1056 32 78.83079982 1056 48 92.90073197 1056 64 105.103744 1056 80 116.0303753 1056 96 126.0130906 1056 112 135.2610439 1056 128 143.9159512 1056 144 152.0790946 2048 8 63.71675604 2048 16 77.15971099 2048 32 98.69757849 2048 48 116.313387 2048 64 131.5917779 2048 80 145.2721136 2048 96 157.7706532 2048 112 169.3492575 2048 128 180.1853546 2048 144 190.4057615 4096 8 86.7357654 4096 16 105.0352703 4096 32 134.3541477 4096 48 158.334033 4096 64 179.1320602 4096 80 197.7547066 4096 96 214.7686034 4096 112 230.5302193 4096 128 245.2810877 4096 144 259.193829 </code></pre>
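Editor's note — since z is linear in the coefficients (a, b, c, d), this is an ordinary least-squares problem: no grid, equal counts, or completeness is needed, and `numpy.linalg.lstsq` solves it directly (scikit-learn's `LinearRegression` would too, if `x*y` is added as a third feature column):

```python
import numpy as np

def bilinear_fit(x, y, z):
    """Least-squares coefficients (a, b, c, d) for z = a*x + b*y + c*x*y + d.

    Works on scattered (x, y) pairs; missing points simply mean fewer rows
    in the design matrix.
    """
    x, y, z = (np.asarray(v, dtype=float) for v in (x, y, z))
    design = np.column_stack([x, y, x * y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(design, z, rcond=None)
    return coeffs
```

For exactly bilinear data this recovers the coefficients to machine precision; on noisy data (like the sample above) it returns the best fit in the least-squares sense.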
<python><bilinear-interpolation>
2024-02-07 14:08:39
1
672
GNA
77,955,277
5,022,847
pandas adding 0 when adding =HYPERLINK to a column
<p>I want to add a hyperlink to a column, but if the column is empty it is adding <code>0</code> to it.</p> <pre><code>def hyperlink(row): return f'=HYPERLINK(&quot;{row.link}&quot;, &quot;{row.a}&quot;)' if row.a else '' df[&quot;a&quot;] = df.apply(hyperlink, axis=1) df.drop(columns=[&quot;link&quot;], inplace=True) with pd.ExcelWriter(engine=&quot;xlsxwriter&quot;) as writer: df.to_excel(writer) wb = writer.book ws = wb.sheets[&quot;sheet1&quot;] text_format = wb.add_format({ &quot;font_name&quot;: &quot;Calibri&quot;, &quot;font_size&quot;: &quot;11&quot;, &quot;bold&quot;: False, &quot;valign&quot;: &quot;vcenter&quot;, &quot;font_color&quot;: &quot;blue&quot;, }) text_format.set_num_format(&quot;@&quot;) ws.set_column(&quot;A:A&quot;, 24, text_format) </code></pre> <p>Note: &quot;link&quot; is a column in df. I am also adding a text format to the &quot;a&quot; column.</p> <p>So if column &quot;a&quot; is empty it should return a blank string (''), but somehow when doing to_excel it is adding 0 to the value, and hence breaking the excel sheet.</p> <p>Please guide me on what I am doing wrong.</p>
<python><pandas><dataframe>
2024-02-07 13:48:24
2
1,430
TechSavy
77,955,273
1,744,491
How to get AWS Managed Policy using Python Pulumi
<p>I'm trying to create an AWS Glue role so the service can run properly. I want to use the AWS managed policy <strong>AWSGlueServiceRole</strong> using the following code:</p> <pre><code>import json from pulumi_aws import iam from pulumi_aws.iam import Role def get_access_bucket_role(role_name: str, bucket_name: str, tags) -&gt; Role: assume_role_policy = json.dumps( { &quot;Version&quot;: &quot;2012-10-17&quot;, &quot;Statement&quot;: [ { &quot;Effect&quot;: &quot;Allow&quot;, &quot;Principal&quot;: {&quot;Service&quot;: [&quot;glue.amazonaws.com&quot;]}, &quot;Action&quot;: &quot;sts:AssumeRole&quot;, } ], } ) iam.get_policy return iam.Role( role_name, assume_role_policy=assume_role_policy, inline_policies=iam.get_policy(arn=&quot;arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole&quot;), path=&quot;/my-path/&quot;, permissions_boundary=&quot;arn:aws:iam::XXX:policy/my-boundary&quot;, tags=tags, ) </code></pre> <p>However I'm getting the following error when the code calls the <code>iam.get_policy()</code> function:</p> <pre><code>AttributeError: 'NoneType' object has no attribute 'Invoke' </code></pre> <p>Am I missing something? How do I attach this policy properly?</p>
<python><amazon-web-services><pulumi><pulumi-python>
2024-02-07 13:48:06
1
670
Guilherme Noronha
77,955,134
7,699,037
PyInstaller won't find package metadata
<p>I have created a package that uses <code>importlib.metadata</code> to obtain the package version:</p> <pre><code>def get_version() -&gt; str: &quot;&quot;&quot;Get the installed version of this pip package. Returns: str: Version of this pip package. &quot;&quot;&quot; return metadata.version(&quot;myproj&quot;) </code></pre> <p>When I execute the script, the version is obtained and the script works fine.</p> <p>However, when I try to create an executable with PyInstaller (with <code>pyinstaller -F --name MyProj --copy-metadata myproj src/myproj/__main__.py</code>), PyInstaller returns the following error message:</p> <pre><code>Traceback (most recent call last): File &quot;/home/user/.local/bin/pyinstaller&quot;, line 8, in &lt;module&gt; sys.exit(_console_script_run()) File &quot;/home/user/.local/lib/python3.10/site-packages/PyInstaller/__main__.py&quot;, line 214, in _console_script_run run() File &quot;/home/user/.local/lib/python3.10/site-packages/PyInstaller/__main__.py&quot;, line 198, in run run_build(pyi_config, spec_file, **vars(args)) File &quot;/home/user/.local/lib/python3.10/site-packages/PyInstaller/__main__.py&quot;, line 69, in run_build PyInstaller.building.build_main.main(pyi_config, spec_file, **kwargs) File &quot;/home/user/.local/lib/python3.10/site-packages/PyInstaller/building/build_main.py&quot;, line 1071, in main build(specfile, distpath, workpath, clean_build) File &quot;/home/user/.local/lib/python3.10/site-packages/PyInstaller/building/build_main.py&quot;, line 1011, in build exec(code, spec_namespace) File &quot;/workspaces/myproj/MyProj.spec&quot;, line 5, in &lt;module&gt; datas += copy_metadata('myproj') File &quot;/home/user/.local/lib/python3.10/site-packages/PyInstaller/utils/hooks/__init__.py&quot;, line 957, in copy_metadata dist = importlib_metadata.distribution(package_name) File &quot;/usr/lib/python3.10/importlib/metadata/__init__.py&quot;, line 969, in distribution return Distribution.from_name(distribution_name) File 
&quot;/usr/lib/python3.10/importlib/metadata/__init__.py&quot;, line 548, in from_name raise PackageNotFoundError(name) importlib.metadata.PackageNotFoundError: No package metadata was found for myproj </code></pre> <p>I have looked for solutions on StackOverflow and all seemed to suggest adding the <code>--copy-metadata</code> flag to avoid this issue. But it does not work for me. When I leave out the flag, PyInstaller runs fine, but then, the executable returns the same <code>importlib.metadata.PackageNotFoundError: No package metadata was found for myproj</code></p> <p>In case it's relevant: I'm using <code>pyproject.toml</code> and <code>pdm</code> to obtain the version of my tool:</p> <pre><code>[tool.pdm.version] source = &quot;scm&quot; </code></pre>
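Editor's note — two hedged observations. First, the build-time traceback shows PyInstaller itself cannot find the `myproj` metadata, which usually means the project is not installed (e.g. via `pip install -e .`) in the environment where `pyinstaller` runs — `--copy-metadata` can only copy metadata that already exists there. Second, the lookup can be made defensive so a missing dist-info degrades gracefully instead of crashing the frozen app; the fallback string below is arbitrary:

```python
from importlib import metadata

def get_version(dist_name="myproj"):
    """Installed package version, with a fallback for frozen executables.

    If the dist-info was not bundled (or the distribution name differs),
    metadata.version raises PackageNotFoundError; falling back keeps the
    frozen app usable instead of crashing at startup.
    """
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return "0.0.0+unknown"
```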
<python><pyinstaller><pyproject.toml>
2024-02-07 13:26:52
0
2,908
Mike van Dyke
77,955,071
8,519,830
pandas dataframe resample sum daily entries only for complete months
<p>I have dataframes with daily entries that do not necessarily start and end on month boundaries. I want to sum by month, but without the first month's entries if they don't start on the first day of the month and without the last month's entries if they don't end on the last day of the month.</p> <pre><code> gas 2022-10-18 1.81 2022-10-19 2.69 2022-10-20 3.06 2022-10-21 3.10 2022-10-22 2.28 ... ... 2024-02-03 5.68 2024-02-04 5.83 2024-02-05 5.74 2024-02-06 6.55 2024-02-07 3.17 </code></pre> <p>So far I do the sum using <code>df.resample('MS').sum()</code>. But with this I also get partial months' sums:</p> <pre><code>2022-10-01 34.040 2022-11-01 164.825 2022-12-01 236.335 2023-01-01 226.180 2023-02-01 197.230 ... ... 2023-10-01 78.060 2023-11-01 191.590 2023-12-01 219.650 2024-01-01 252.370 2024-02-01 38.625 </code></pre> <p>Note: All the other daily entries in the middle of the dataframes exist because interpolation was used when aggregating the daily dataframes. So counting the number of daily entries for a month and comparing that to the number of days in the month would probably be a way, but how would that be used as a filter?</p>
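Editor's note — the counting idea at the end of the question works directly as a filter: resample once, take both the sum and the per-month row count, and keep only the months whose count equals the calendar length. A sketch for a single column such as `df['gas']`:

```python
import pandas as pd

def monthly_sums_complete(s):
    """Sum daily values by month, dropping months not fully covered."""
    grouped = s.resample("MS")
    sums = grouped.sum()
    days_present = grouped.size()
    # A month is complete when every calendar day has a row.
    complete = days_present == sums.index.days_in_month
    return sums[complete]
```

Interior gaps (if interpolation ever misses a day) are caught by the same comparison.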
<python><pandas><dataframe>
2024-02-07 13:17:30
1
585
monok
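One way to express "complete months only" for the question above is exactly the counting idea it ends with: aggregate both the sum and the row count per month, then keep the months whose count equals the calendar length. A minimal sketch on a synthetic frame (daily value 1.0 so the monthly sums are easy to check):

```python
import numpy as np
import pandas as pd

# Daily data that starts mid-month, like the frame in the question.
idx = pd.date_range("2022-10-18", "2022-12-31", freq="D")
df = pd.DataFrame({"gas": np.ones(len(idx))}, index=idx)

monthly = df["gas"].resample("MS").agg(["sum", "size"])
# A month is complete when its row count equals its calendar length.
complete = monthly["size"] == monthly.index.days_in_month
result = monthly.loc[complete, "sum"]
```

Here October (14 rows out of 31 days) drops out while November and December survive. If the interpolation step could leave NaNs, `"count"` (non-NaN values only) can replace `"size"`.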
77,954,934
7,566,673
auto-covariance matrix for 1D array
<p>I want to compute auto-covariance matrix for real valued 1D array . I have seen in a blog that auto-correlation matrix is computed as below .</p> <pre><code>import numpy as np x = np.asarray([1+1j,2+1j,3-1j]) acf = np.convolve(x,np.conj(x)[::-1]) Rxx=acf[2:]; # R_xx(0) is the center element Rx = toeplitz(Rxx,np.hstack((Rxx[0], np.conj(Rxx[1:])))) </code></pre> <p>I am able to compute Auto-correlation matrix for real valued also with code snippet above. I want to understand how auto-covariance matrix can be computed similarly?</p>
<python><scipy><signal-processing>
2024-02-07 12:57:37
0
1,219
Bharat Sharma
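The auto-covariance matrix follows from the same construction as in the question's snippet: remove the mean first (covariance is correlation of the centred signal) and normalise. A sketch for real-valued 1-D input using the biased 1/N normalisation; `scipy.linalg.toeplitz` would work equally well for the final step:

```python
import numpy as np

def autocovariance_matrix(x):
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()                                  # centring turns correlation into covariance
    acf = np.correlate(xc, xc, mode="full") / len(x)   # biased autocovariance estimate
    r = acf[len(x) - 1:]                               # r[0] is the zero-lag (variance) term
    n = len(r)
    # Symmetric Toeplitz matrix: C[i, j] = r[|i - j|]
    return r[np.abs(np.subtract.outer(np.arange(n), np.arange(n)))]

C = autocovariance_matrix([1.0, 2.0, 3.0])
```

For complex input, conjugate one argument as the question's snippet does; for the unbiased estimate, divide lag k by (N - k) instead of N.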
77,954,859
12,886,610
dbt: Get return of callback function
<p>I am trying out <a href="https://docs.getdbt.com/reference/programmatic-invocations" rel="nofollow noreferrer">using dbt programmatically</a>, in particular <a href="https://docs.getdbt.com/reference/programmatic-invocations#registering-callbacks" rel="nofollow noreferrer">callbacks</a>. I would like to get the data from the StatsLine event:</p> <pre class="lang-py prettyprint-override"><code>from dbt.cli.main import dbtRunner, dbtRunnerResult from dbt.events.base_types import EventMsg cli_args = ['run', '-m', 'my_model'] def callback_get_stats(event: EventMsg): if event.info.name == 'StatsLine': print(event.data) dbt = dbtRunner(callbacks=[callback_get_stats]) res = dbt.invoke(cli_args) </code></pre> <p>So far, so good: This prints the data part of the StatsLine event.</p> <p><strong>Question: How can I get this as an object?</strong></p> <p>One option would be to use side-effects, e.g.:</p> <pre class="lang-py prettyprint-override"><code>callback_dict = dict() def callback_get_stats(event: EventMsg): if event.info.name == 'StatsLine': print(event.data) callback_dict['StatsLine'] = event.data </code></pre> <p>This is <em>fine</em>, but it doesn't seem very elegant. I can of course add a return to the function...</p> <pre class="lang-py prettyprint-override"><code>def callback_get_stats(event: EventMsg): if event.info.name == 'StatsLine': print(event.data) return event.data </code></pre> <p>... but I don't know where it goes! I have tried <code>dir()</code> on <code>res</code> and its subobjects but I can't find the return value of <code>callback_get_stats</code>.</p> <p><a href="https://stackoverflow.com/questions/77144861/dbt-pass-return-value-of-dbt-macro-to-python">This question</a> has a similar flavour. I am using version 1.7.7 of <a href="https://pypi.org/project/dbt-core/" rel="nofollow noreferrer">dbt-core</a>.</p>
<python><callback><dbt>
2024-02-07 12:47:55
1
1,263
dwolfeu
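For the dbt question above: `dbtRunner` invokes the callbacks for their side effects and discards their return values, so there is nowhere for `return event.data` to "go". A slightly tidier variant of the side-effect approach is a callable object that keeps its own state — still a side effect, but namespaced on an instance rather than a module-level dict. A sketch (the dbt lines are commented out; the event layout mirrors the question):

```python
class StatsCollector:
    """Callback object: dbtRunner calls it like a function, and the
    interesting payload is kept on the instance for later use."""

    def __init__(self):
        self.stats = None

    def __call__(self, event):
        if event.info.name == "StatsLine":
            self.stats = event.data

collector = StatsCollector()
# dbt = dbtRunner(callbacks=[collector])
# res = dbt.invoke(cli_args)
# print(collector.stats)   # the StatsLine data, as an object
```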
77,954,808
6,151,828
ImportError: cannot import name 'ConstantInputWarning' from 'scipy.stats'
<p>Error due to <code>scikit-kbio</code>:</p> <pre><code>ImportError: cannot import name 'ConstantInputWarning' from 'scipy.stats' (/home/smith/miniconda3/lib/python3.8/site-packages/scipy/stats/__init__.py) </code></pre> <p>This is likely related to recent installations (like <code>numba</code>) which might have modified versions of <code>numpy</code> and <code>scipy</code>, as previously the problem didn't occur.</p> <p>Versions:<br /> <code>python</code> : 3.8<br /> <code>scipy</code> : 1.8.1<br /> <code>numpy</code> : 1.22.4<br /> <code>skbio</code> : 0.5.9</p> <p>Related:<br /> <a href="https://stackoverflow.com/q/72744811/6151828">Error: cannot import name 'SpearmanRConstantInputWarning' from 'scipy.stats'</a><br /> <a href="https://github.com/raw-lab/mercat2/issues/4" rel="nofollow noreferrer">Cannot import name 'ConstantInputWarning' from 'scipy.stats' #4</a></p>
<python><python-3.x><numpy><scipy><skbio>
2024-02-07 12:40:55
1
803
Roger V.
77,954,753
14,444,816
get_signing_key_from_jwt throws error Unable to find a signing key that matches kid
<p>I'm seeing this error when getting the signing key:</p> <blockquote> <p>PyJWKClientError: Unable to find a signing key that matches: &quot;{kid}&quot;</p> </blockquote> <p>The signing_key is where I'm seeing the error:</p> <pre><code>import jwt jwks_uri=&quot;https://my_auth_server/oauth2/v1/keys&quot; jwks_client = jwt.PyJWKClient(jwks_uri) signing_key = jwks_client.get_signing_key_from_jwt(token['access_token']) </code></pre> <p>I'm getting the token like this:</p> <pre><code>tdata = { 'grant_type': 'authorization_code', 'redirect_uri': config['redirect_uri'], 'client_id': config['client_id'], 'client_secret': config['client_secret'], 'scope': 'openid', 'state': state, 'code': code, } ret = requests.post(config[&quot;token_endpoint&quot;], headers=theaders, data=urlencode(tdata).encode(&quot;utf-8&quot;)) </code></pre>
<python><jwt><jwk>
2024-02-07 12:33:26
0
2,021
KristiLuna
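The error in the question above means the `kid` in the token's header is not among the keys served by `jwks_uri` — often because the access token was issued by a different authorization server, or (with some providers) is an opaque token rather than a JWT at all, in which case only the `id_token` is verifiable this way. A first debugging step is to look at the header's `kid` yourself; a stdlib-only sketch equivalent to PyJWT's `jwt.get_unverified_header()`:

```python
import base64
import json

def jwt_header(token: str) -> dict:
    """Decode the unverified JOSE header of a JWT (the first dot-separated
    segment, base64url-encoded without padding)."""
    header_b64 = token.split(".")[0]
    padded = header_b64 + "=" * (-len(header_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# print(jwt_header(token["access_token"]).get("kid"))
# ...then compare against the `kid` values at https://my_auth_server/oauth2/v1/keys
```

If the header has no `kid`, or it differs from every key at the endpoint, the fix lies on the issuer side (request the token from the same server whose keys are fetched), not in PyJWT.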
77,954,656
19,500,571
Multithreading JSON parsing
<p>I need to process a large amount of JSON-files and I want to multithread this processing. However, I find that multithreading doesn't improve the processing time at all and in fact takes the same amount of time as if done in serial.</p> <p>I have narrowed the issue down to this piece of code, where the JSON files are parsed:</p> <pre><code>from concurrent.futures import ThreadPoolExecutor as th import json def func(file_names): for file in file_names: with open(file, 'r', encoding='utf8') as f: x = json.loads(f.read()) N_THREADS = 2 file_names = [.....] # list of file names with th(N_THREADS) as ex: ex.map(func, file_names) </code></pre> <p>Based on this, it seems that parsing JSON-files is done in serial. Why can't this be multithreaded using <code>ThreadPoolExecutor</code>? Is there another way of multithreading this?</p>
<python><json><multithreading><io>
2024-02-07 12:17:48
1
469
TylerD
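Two things are going on in the JSON question above. First, `ex.map(func, file_names)` calls `func` once per element, so each call receives a single filename string and the inner `for file in file_names` loop iterates over its characters — the worker should handle one file. Second, even with that fixed, `json.loads` is CPU-bound work that holds the GIL, so threads serialise; separate processes can actually parse in parallel. A sketch (the module-level worker keeps it picklable for the process pool):

```python
import json
from concurrent.futures import ProcessPoolExecutor

def parse_file(path):
    """Worker: executor.map() hands this ONE filename per call."""
    with open(path, "r", encoding="utf8") as f:
        return json.load(f)

def parse_all(paths, workers=2):
    # json parsing holds the GIL, so a ThreadPoolExecutor serialises it;
    # separate processes run the parsers genuinely in parallel.
    with ProcessPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(parse_file, paths))
```

Process start-up and result pickling have a cost, so this only pays off when the files are large enough; for many small files, plain serial parsing may still win.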
77,954,638
1,942,868
Make link to foreign key object in admin screen
<p>I have class like this which has the <code>ForeignKey</code></p> <pre><code>class MyLog(SafeDeleteModel): user = models.ForeignKey(CustomUser,on_delete=models.CASCADE) </code></pre> <p>then I set <code>user</code> in the list_display of MyLog in admin page.</p> <pre><code>class MyLogAdmin(admin.ModelAdmin): list_display = [&quot;id&quot;,&quot;user&quot;] class UserAdmin(admin.ModelAdmin): list_display = [&quot;id&quot;,&quot;username&quot;] </code></pre> <p>Now I want to make link in user of MyLogAdmin page to UserAdmin,</p> <p>Is it possible?</p> <p>I think some framework(such as php symfony) administrator system does this automatically.</p> <p>However is it possible to do this by django admin?</p>
<python><django><admin>
2024-02-07 12:15:16
1
12,599
whitebear
77,954,618
1,879,109
Passing custom batch to the validator in Great Expectations
<pre class="lang-py prettyprint-override"><code># Import necessary libraries import great_expectations as ge import datetime # Load the Great Expectations context context = ge.data_context.DataContext(&quot;../.&quot;) # Load the JSON data into a Pandas DataFrame data_file_path = &quot;../../data/nested.json&quot; df = ge.read_json(data_file_path) # Create a batch of data batch = ge.dataset.PandasDataset(df) # Create new columns for nested values batch[&quot;details_age&quot;] = batch[&quot;details&quot;].apply(lambda x: x.get(&quot;age&quot;)) batch[&quot;details_address_city&quot;] = batch[&quot;details&quot;].apply(lambda x: x.get(&quot;address&quot;).get(&quot;city&quot;)) batch[&quot;details_address_state&quot;] = batch[&quot;details&quot;].apply(lambda x: x.get(&quot;address&quot;).get(&quot;state&quot;)) # Load the expectation suite expectation_suite_name = 'nestedjson_expectations_suite' suite = context.get_expectation_suite(expectation_suite_name) # Validate the batch against the expectation suite results = context.run_validation_operator( &quot;action_list_operator&quot;, assets_to_validate=[batch], run_name = &quot;abcd1&quot;, run_time = datetime.datetime.now(datetime.timezone.utc), ) # Print the validation results print(results) context.build_data_docs() context.open_data_docs(resource_identifier=results.list_validation_result_identifiers()[0]) </code></pre> <p>I am trying to work with a nested json for which I need to flatten the json to be able to work with. 
In the code above as you can see that I have modified the batch but I'm unsure how to pass this batch to the validator alongside my expectation suite.</p> <p>Looking at the code doc <a href="https://github.com/great-expectations/great_expectations/blob/209cdc6742ea7e9a61d8f080406b721490125e29/great_expectations/data_context/data_context/abstract_data_context.py#L2950" rel="nofollow noreferrer">here</a> <code>run_validation_operator</code> expects <code>assets_to_validate</code> that can either be a list of batches (which I am already trying)</p> <pre><code> assets_to_validate: a list that specifies the data assets that the operator will validate. The members of the list can be either batches, or a tuple that will allow the operator to fetch the batch: (batch_kwargs, expectation_suite_name) </code></pre> <p>Where <code>batch_kwargs</code> is keyword arguments used to request a batch directly from a Datasource.</p> <p>How am I suppose to pass custom batch alongside expectation suite to validate against?</p> <p><strong>Alternatively</strong> As an alternative I tested my batch without expectation suite like so:</p> <pre class="lang-py prettyprint-override"><code># Import necessary libraries import great_expectations as ge import datetime # Load the Great Expectations context context = ge.data_context.DataContext(&quot;../.&quot;) # Load the JSON data into a Pandas DataFrame data_file_path = &quot;../../data/nested.json&quot; df = ge.read_json(data_file_path) # Create a batch of data batch = ge.dataset.PandasDataset(df) # Create new columns for nested values batch[&quot;details_age&quot;] = batch[&quot;details&quot;].apply(lambda x: x.get(&quot;age&quot;)) batch[&quot;details_address_city&quot;] = batch[&quot;details&quot;].apply(lambda x: x.get(&quot;address&quot;).get(&quot;city&quot;)) batch[&quot;details_address_state&quot;] = batch[&quot;details&quot;].apply(lambda x: x.get(&quot;address&quot;).get(&quot;state&quot;)) # Define expectations for the 'id' column 
batch.expect_column_values_to_be_between('id', min_value=1, max_value=100) batch.expect_column_values_to_be_unique('id') # Define expectations for the 'name' column batch.expect_column_values_to_match_regex('name', r'^[A-Za-z\s]+$') batch.expect_column_values_to_not_be_null('name') # Define expectations for nested fields batch.expect_column_values_to_be_between('details_age', min_value=0, max_value=120) batch.expect_column_values_to_match_regex('details_address_city', r'^[A-Za-z\s]+$') batch.expect_column_values_to_match_regex('details_address_state', r'^[A-Za-z\s]+$') # # Load the expectation suite # expectation_suite_name = 'nestedjson_expectations_suite' # suite = context.get_expectation_suite(expectation_suite_name) # Validate the batch against the expectation suite results = context.run_validation_operator( &quot;action_list_operator&quot;, assets_to_validate=[batch], run_name = &quot;abcd1&quot;, run_time = datetime.datetime.now(datetime.timezone.utc), ) # Print the validation results print(results) context.build_data_docs() context.open_data_docs(resource_identifier=results.list_validation_result_identifiers()[0]) </code></pre>
<python><great-expectations>
2024-02-07 12:10:22
0
915
Kity Cartman
77,954,586
2,148,416
Odd behavior of Numpy any() when dtype=object
<p>Numpy <a href="https://numpy.org/doc/stable/reference/generated/numpy.any.html" rel="nofollow noreferrer">docs</a> on <code>any()</code> says &quot;<em>A new boolean or ndarray is returned...</em>&quot; and &quot;<em>Returns single boolean if axis is None.</em>&quot;</p> <p>Consider this:</p> <pre><code>&gt;&gt;&gt; np.array([0, 111, 222]).any() True &gt;&gt;&gt; np.array([0, 111, 222], dtype=object).any() 111 </code></pre> <p>Axis is None, but a boolean is not returned in the second case. It can be admitted, though, that the result is truthy.</p> <p>The problem comes when a subsequent logical operation is performed. For example:</p> <pre><code>&gt;&gt;&gt; np.invert(np.array([0, 111, 222]).any()) False &gt;&gt;&gt; np.invert(np.array([0, 111, 222], dtype=object).any()) -112 </code></pre> <p>This can be easily circumvented, once you realize how <code>any()</code> behaves when using <code>dtype=object</code>.</p> <p>But the question remains: Is this the expected behavior of <code>any()</code>? Shouldn't it return a boolean as the docs say?</p>
<python><numpy>
2024-02-07 12:05:08
1
3,437
aerobiomat
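For the `dtype=object` question above: this is a long-standing quirk rather than something the docs spell out — object-dtype reductions fall back to Python's `or` semantics, so the first truthy element comes back instead of an `np.bool_`. The practical workaround is an explicit coercion before any logical operation, and Python's `not` (or `np.logical_not`) instead of `np.invert`, which is bitwise NOT (`~111 == -112`):

```python
import numpy as np

a = np.array([0, 111, 222], dtype=object)

result = a.any()        # 111 here, not True
flag = bool(a.any())    # explicit coercion restores a real boolean
inverted = not a.any()  # logical negation, instead of bitwise np.invert
```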
77,954,574
3,566,606
Generic Type Alias for NotRequired and Optional
<p>In <code>typing.TypedDict</code>, there is a difference between fields being <code>NotRequired</code> and fields whose values are declared <code>Optional</code>. The first means that the key may be absent, the latter means the value on an existing key may be <code>None</code>.</p> <p>In my context (working with mongodb), I do not want to differentiate this, and simply allow a field to be both <code>NotRequired</code> and <code>Optional</code>. This could be achieved by</p> <pre class="lang-py prettyprint-override"><code>class Struct(TypedDict): possibly_string = NotRequired[str | None] </code></pre> <p>In my context I never want to differentiate this, so I would like to have a generic type, say <code>Possibly[T]</code> which I can use as a shorthand for writing <code>NotRequired[T | None]</code>.</p> <p>When I try to write a generic alias, mypy gives me the two errors indicated in the code comments:</p> <pre class="lang-py prettyprint-override"><code> from typing import NotRequired, Optional, TypeVar, TypedDict T = TypeVar(&quot;T&quot;) Possibly = NotRequired[Optional[T]] class Struct(TypedDict): possibly_string: Possibly[str] # Variable &quot;Possibly&quot; is not valid as a type [valid-type] non_compliant: Struct = {&quot;possibly_string&quot;: int} compliant_absent: Struct = {} # Missing key &quot;possibly_string&quot; for TypedDict &quot;Struct&quot; [typeddict-item] compliant_none: Struct = {&quot;possibly_string&quot;: None} compliant_present: Struct = {&quot;possibly_string&quot;: &quot;a string, indeed&quot;} </code></pre> <p>Is it possible to achieve what I want? If so, how?</p> <p>I look for a solution for Python3.11 or upwards.</p>
<python><python-typing>
2024-02-07 12:04:03
0
6,374
Jonathan Herrera
77,954,411
7,024,352
CrispyError : as_crispy_field got passed an invalid or inexistent field for email field only
<p>I can't expose Email field from User model in django, If I remove email field from register.html there is no error. Also if I use {{ form }} It does not show the email field. It shows username, password1 and password2 with no error.</p> <p><strong>models.py:</strong></p> <pre><code>from django.contrib.auth.models import User class Profile(models.Model): profile_pic = models.ImageField(null=True, blank=True, default='default.jpg') # FK user = models.ForeignKey(User, max_length=10, on_delete=models.CASCADE, null=True) </code></pre> <p><strong>forms.py:</strong></p> <pre><code>from django.contrib.auth.models import User class CreateUserForm(UserCreationForm): class Meta: model = User fields= ['username','email', 'password1', 'password2'] </code></pre> <p><strong>views.py:</strong> from .forms import UserCreationForm</p> <pre><code>def register(request): form = UserCreationForm() if request.method == &quot;POST&quot;: form = UserCreationForm(request.POST) if form.is_valid(): form.save() return redirect(&quot;my-login&quot;) context = {'form': form} return render(request, &quot;register.html&quot;, context=context) </code></pre> <p><strong>register.html</strong></p> <pre><code>{% load static %} {% load crispy_forms_tags %} . . . &lt;form action=&quot;&quot; method=&quot;POST&quot; autocomplete=&quot;off&quot;&gt; {% csrf_token %} {{ form.username|as_crispy_field }} # error is here {{ form.email|as_crispy_field }} {{ form.password1|as_crispy_field }} {{ form.password2|as_crispy_field }} &lt;input type=&quot;submit&quot; value=&quot;submit&quot; /&gt; &lt;/form&gt; </code></pre> <p>Django 4.2.10</p> <p>django-bootstrap4 24.1</p> <p>django-crispy-forms 1.14.0</p> <p>pillow 10.2.0</p> <p>wheel 0.42.0</p>
<python><django><django-crispy-forms>
2024-02-07 11:36:30
1
319
Hesam Marshal
77,954,161
10,353,865
Apply expanding function over whole df - numba and alternatives?
<p>I want to apply an expanding window operation on the whole df - not columnwise. The method=table requires numba engine. But I always get an error when calling it (see below)</p> <p>I would also appreciate it, if anyone could point out another solution to my problem (i.e. I don't necessarily want to use numba).</p> <pre><code>S = pd.Series([i / 100.0 for i in range(1, 11)]) d_from = pd.concat([S, S], axis=1) # In a df expanding operates column or rowwise d_from.expanding(axis=0).apply(sum, raw=True) d_from.expanding(axis=1).apply(sum, raw=True) # can use table arg and numba to operate on whole df? d_from.expanding(axis=1, method=&quot;table&quot;).apply(sum, raw=True, engine=&quot;numba&quot;) # throws exception </code></pre> <p>The last line produces : &quot;numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)&quot;</p>
<python><pandas><numpy>
2024-02-07 10:59:37
1
702
P.Jo
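The `TypingError` in the question above comes from passing the Python builtin `sum`: the numba engine has to compile the applied function, and with `method="table"` that function must also accept a 2-D window — `np.sum` usually compiles in nopython mode, `sum` does not. Two numba-free sketches: a plain cumulative sum for the column-wise case, and an explicit growing-window loop doing what `method="table"` does for a whole-table reduction (fine unless the frame is large):

```python
import numpy as np
import pandas as pd

S = pd.Series([i / 100.0 for i in range(1, 11)])
df = pd.concat([S, S], axis=1)

# Column-wise expanding sum is just a cumulative sum.
col_wise = df.cumsum()
assert np.allclose(col_wise, df.expanding().apply(np.sum, raw=True))

def expanding_table(values, func):
    """Apply func to the growing 2-D window values[:i+1, :] -- the
    whole-table analogue of method="table", without numba."""
    return np.array([func(values[: i + 1]) for i in range(len(values))])

totals = expanding_table(df.to_numpy(), np.sum)  # one scalar per step, over the whole table
```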
77,954,150
7,745,011
How to use type annotation to support generic subtypes for third party classes in pydantic basemodels?
<p>Going from the <a href="https://docs.pydantic.dev/latest/concepts/types/#handling-third-party-types" rel="nofollow noreferrer">pydantic documentation</a> I have written the following annotation wrapper for numpy arrays in order to parse numpy arrays from jsons directly:</p> <pre class="lang-py prettyprint-override"><code>from typing import Annotated, Any import numpy as np import numpy.typing as npt from pydantic import ( GetCoreSchemaHandler, GetJsonSchemaHandler, ) from pydantic.json_schema import JsonSchemaValue from pydantic_core import core_schema class _NumpyPydanticAnnotation: @classmethod def __get_pydantic_core_schema__( cls, _source_type: Any, _handler: GetCoreSchemaHandler, ) -&gt; core_schema.CoreSchema: def valdidate_from_lists(value: list) -&gt; np.ndarray: result = np.array(value) return result from_list_schema = core_schema.chain_schema( [ core_schema.list_schema(), core_schema.no_info_plain_validator_function(valdidate_from_lists), ] ) return core_schema.json_or_python_schema( json_schema=from_list_schema, python_schema=core_schema.union_schema( [ # check if it's an instance first before doing any further work core_schema.is_instance_schema(np.ndarray), from_list_schema, ] ), serialization=core_schema.plain_serializer_function_ser_schema( lambda instance: instance.tolist() ), ) @classmethod def __get_pydantic_json_schema__( cls, _core_schema: core_schema.CoreSchema, handler: GetJsonSchemaHandler ) -&gt; JsonSchemaValue: # Use the same schema that would be used for `list` return handler(core_schema.list_schema()) NumpyWrapper = Annotated[npt.NDArray, _NumpyPydanticAnnotation] </code></pre> <p>Although the validation could be extended to be more thorough, this can be used in Basemodels as follows:</p> <pre class="lang-py prettyprint-override"><code>class SomeDataModel(BaseModel): my_awesome_array: NumpyWrapper </code></pre> <p>Which works fine, however I want to support validation for the correct dtype as well:</p> <pre class="lang-py 
prettyprint-override"><code>class SomeDataModel(BaseModel): my_awesome_array: NumpyWrapper[np.float64] </code></pre> <p>I initially thought this would work with generics, something like</p> <pre class="lang-py prettyprint-override"><code>dtypelike = TypeVar(&quot;dtypelike&quot;, bound=npt.DTypeLike) class _NumpyPydanticAnnotation(Generic[dtypelike]): ... # class content </code></pre> <p>But this results in the error message</p> <blockquote> <p>Type &quot;ndarray[Any, dtype[Unknown]]&quot; is already specialized</p> </blockquote> <p>when used as described above. I am a bit stuck on how to use it. Additionally, I would like to check the correct dtype in the <code>validate_from_lists</code> function.</p>
<python><numpy><python-typing><pydantic>
2024-02-07 10:57:33
1
2,980
Roland Deschain
77,954,041
8,881,495
Inference of Mixtral 8x7b on multiple GPUs with pipeline
<p>I run Mixtral 8x7b on two GPUs (RTX3090 &amp; A5000) with pipeline. I can load the model in GPU memories, it works fine, but inference is very slow. When I run <code>nvidia-smi</code>, there is not a lot of load on GPUs. But the motherboard RAM is full (&gt;128Gb) and a CPU reach 100% of load. I feel that the model is loaded in GPU, but inference is done in the CPU.</p> <p>Here is my code:</p> <pre><code>from transformers import pipeline from transformers import AutoModelForCausalLM, AutoTokenizer, AutoConfig import time import torch from accelerate import init_empty_weights, load_checkpoint_and_dispatch t1= time.perf_counter() model_id = &quot;mistralai/Mixtral-8x7B-Instruct-v0.1&quot; tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id, device_map=&quot;auto&quot;) t2= time.perf_counter() print(f&quot;Loading tokenizer and model: took {t2-t1} seconds to execute.&quot;) # Create a pipeline code_generator = pipeline('text-generation', model=model, tokenizer=tokenizer) t3= time.perf_counter() print(f&quot;Creating piepline: took {t3-t2} seconds to execute.&quot;) # Generate code for an input string while True: print(&quot;\n=========Please type in your question=========================\n&quot;) user_content = input(&quot;\nQuestion: &quot;) # User question user_content.strip() t1= time.perf_counter() generated_code = code_generator(user_content, pad_token_id=tokenizer.eos_token_id, max_new_tokens=20)[0]['generated_text'] t2= time.perf_counter() print(f&quot;Inferencing using the model: took {t2-t1} seconds to execute.&quot;) print(generated_code) </code></pre> <p>Any idea why inference is so long (&gt;300s)</p>
<python><pytorch><pipeline><large-language-model>
2024-02-07 10:43:27
1
3,655
Fifi
77,954,019
5,589,640
Load word2vec model that is in .tar format
<p>I want to load a previously trained word2vec model into gensim. The trouble is the file format. It is not a .bin file format but a .tar file. It is the model / file <em>deu-ch_web-public_2019_1M.tar.gz</em> from the <a href="https://wortschatz.uni-leipzig.de/en/download/German" rel="nofollow noreferrer">University of Leipzig</a>. The model is also listed <a href="https://huggingface.co/inovex/Word2Vec-Finetuning-Service-Base-Models" rel="nofollow noreferrer">on HuggingFace</a> where different word2vec models for English and German are listed.</p> <p><strong>First I tried:</strong></p> <pre><code>from gensim.models import KeyedVectors model = KeyedVectors.load_word2vec_format('deu-ch_web-public_2019_1M.tar.gz') </code></pre> <p>--&gt; ValueError: invalid literal for int() with base 10: 'deu-ch_web-public_2019_1M</p> <p><strong>Then I unzipped the file with 7-Zip and tried the following:</strong></p> <pre><code>from gensim.models import KeyedVectors model = KeyedVectors.load_word2vec_format('deu-ch_web-public_2019_1M.tar') </code></pre> <p>--&gt; ValueError: invalid literal for int() with base 10: 'deu-ch_web-public_2019_1M</p> <pre><code>from gensim.models import word2vec model = word2vec.Word2Vec.load('deu-ch_web-public_2019_1M.tar') </code></pre> <p>--&gt; UnpicklingError: could not find MARK</p> <p>Then I got a bit desperate...</p> <pre><code>import gensim.downloader model = gensim.downloader.load('deu-ch_web-public_2019_1M.tar') </code></pre> <p>--&gt; ValueError: Incorrect model/corpus name</p> <p>Googling around I found useful information how to load a .bin model with gensim ( <a href="https://stackoverflow.com/questions/39549248/how-to-load-a-pre-trained-word2vec-model-file-and-reuse-it/39662736#39662736">see here</a> and <a href="https://stackoverflow.com/questions/65394022/how-can-a-word2vec-pretrained-model-be-loaded-in-gensim-faster">here</a> ). 
Following this <a href="https://github.com/deeplearning4j/deeplearning4j/issues/4150" rel="nofollow noreferrer">thread</a> it seems tricky to load a .tar file with gensim. Especially if one has not one .txt file but five .txt files as in this case. I found <a href="https://stackoverflow.com/questions/71733191/how-to-read-and-load-tarfile-to-extract-feature-vector">one answer how to read a .tar file but with tensorflow</a>. Since I am not familiar with tensorflow, I prefer to use gensim. Any thoughts how to solve the issue is appreciated.</p>
<python><gensim><word2vec><tar.gz>
2024-02-07 10:41:01
3
625
Simone
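For the word2vec question above, it is worth checking what the tarball actually contains before picking a loader — `load_word2vec_format()` expects a single vectors file (text or `.bin`), never an archive, and the Leipzig downloads may well hold corpus files (sentences/words/co-occurrence tables) rather than trained vectors. A stdlib sketch for the inspection step:

```python
import tarfile

def list_archive(path):
    """List regular-file members so the right loader can be chosen."""
    with tarfile.open(path, "r:*") as tar:
        return [(m.name, m.size) for m in tar.getmembers() if m.isfile()]

# for name, size in list_archive("deu-ch_web-public_2019_1M.tar.gz"):
#     print(name, size)
```

If one member turns out to be a vectors file, extract it and pass that single file (not the archive) to `KeyedVectors.load_word2vec_format`; if the members are plain corpus text, the vectors still have to be trained, e.g. with `gensim.models.Word2Vec` over the sentence file.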
77,953,711
2,062,318
How to find the best possible team lineup (in swimming)
<p>I currently have a fairly simple algorithm that tries to build the best possible team-lineup given some constrains:</p> <ol> <li>There is a finite list of events which needs to be filled with swimmers</li> </ol> <ul> <li>Event 1: 50 Free</li> <li>Event 2: 100 Free</li> <li>...</li> </ul> <ol start="2"> <li>In each event, a team can send up to <strong>N</strong> (<code>MaxSwimmersPerTeam</code>) swimmers.</li> <li>A single swimmer can participate in multiple events, but is limited by <strong>M</strong> (<code>MaxEntriesPerSwimmer</code>).</li> <li>I have a list of swimmer best times for every event they have participated. <em>(Note: Not every swimmer swims every possible event, it's a subset with size close to <strong>M</strong>).</em></li> </ol> <pre><code>team_1_best_times = [ {&quot;time_id&quot;: 1, &quot;swimmer_id&quot;: 1, &quot;event_id&quot;: 1, &quot;time&quot;: 22.00, &quot;rating&quot;: 900.00}, {&quot;time_id&quot;: 2, &quot;swimmer_id&quot;: 1, &quot;event_id&quot;: 2, &quot;time&quot;: 44.00, &quot;rating&quot;: 800.00}, {&quot;time_id&quot;: 3, &quot;swimmer_id&quot;: 2, &quot;event_id&quot;: 1, &quot;time&quot;: 22.10, &quot;rating&quot;: 890.00}, {&quot;time_id&quot;: 4, &quot;swimmer_id&quot;: 2, &quot;event_id&quot;: 2, &quot;time&quot;: 46.00, &quot;rating&quot;: 750.00}, ] </code></pre> <p>The <code>rating</code> key <em>(Higher is Better)</em> gives me the ability to compare times across different events.</p> <p>The best-possible line-up would be the lineup with max average rating across all chosen times satisfying <code>N</code> and <code>M</code></p> <p>My current approach is to iterate over all the times sorted by <code>rating</code> and fill the line-up until <code>N</code> and <code>M</code> are satisfied. 
For example given <code>N=1</code> and <code>M=1</code> my algorithm would:</p> <ol> <li>Put <code>Time1</code> <em>(of Swimmer1)</em> with 900 rating into <code>Event1</code></li> <li>Skip <code>Time3</code> <em>(Event1-Swimmer2)</em> with <code>890</code> rating - since we already have <code>1</code> = <strong>N</strong> other swimmer <em>(Swimmer1)</em> in <code>Event1</code>, thus <code>MaxSwimmersPerTeam</code> has been reached.</li> <li>Skip <code>Time2</code> <em>(Event2-Swimmer1)</em> with <code>800</code> rating - since <code>Swimmer1</code> has already been put in <code>1</code> = <strong>M</strong> other Event <em>(Event1)</em>, thus (<code>MaxEntriesPerSwimmer</code>) has been reached.</li> <li>Put <code>Time4</code> <em>(Swimmer2)</em> with <code>750</code> rating into <code>Event2</code>.</li> </ol> <p>Now the average rating of this team-lineup <em>Event1-Swimmer1</em> (<code>900</code>) and <em>Event2-Swimmer2</em> (<code>750</code>) would be: <code>(900+750)/2 = 825</code>.</p> <p>And this is the simplest possible example showing where my approach falls short. What a smart coach could do, would be to put <code>Swimmer1</code> into <code>Event2</code> (<code>800</code>) and <code>Swimmer2</code> into <code>Event1</code> (<code>890</code>) reaching a higher avg rating of: <code>(890+800)/2 = 845</code>.</p> <p>I've tried to research the problem a little bit, found libraries like <code>python-constraint </code>, <code>gekko</code>, <code>pyomo</code> but I still cannot figure out how to explain the problem using their tools.</p>
<python><linear-programming><pyomo><gekko><constraint-programming>
2024-02-07 09:50:36
1
16,170
Todor
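The failure mode described in the lineup question above is exactly why a greedy pass falls short: the constraints couple entries across events, so this is a small integer program — a binary variable x[swimmer, event], at most N chosen per event, at most M per swimmer, maximise the summed rating (for a full lineup that is the same as maximising the average). Any of the libraries the asker lists can express this; for the toy instance, a brute force over entry subsets makes the model concrete and confirms the coach's 845 lineup:

```python
from collections import Counter
from itertools import combinations

times = [
    {"swimmer": 1, "event": 1, "rating": 900.0},
    {"swimmer": 1, "event": 2, "rating": 800.0},
    {"swimmer": 2, "event": 1, "rating": 890.0},
    {"swimmer": 2, "event": 2, "rating": 750.0},
]

def best_lineup(times, n_per_event, m_per_swimmer):
    """Exhaustive search -- only viable for tiny instances, but it states
    the objective and constraints an ILP formulation needs verbatim."""
    best, best_total = [], float("-inf")
    for r in range(1, len(times) + 1):
        for subset in combinations(times, r):
            per_event = Counter(t["event"] for t in subset)
            per_swimmer = Counter(t["swimmer"] for t in subset)
            if max(per_event.values()) > n_per_event:
                continue
            if max(per_swimmer.values()) > m_per_swimmer:
                continue
            total = sum(t["rating"] for t in subset)
            if total > best_total:
                best, best_total = list(subset), total
    return best, best_total

lineup, total = best_lineup(times, n_per_event=1, m_per_swimmer=1)
```

For realistic sizes, the same model translates directly into e.g. PuLP or pyomo: one binary variable per (swimmer, event) pair, the two capacity sums as constraints, and the rating sum as the objective.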
77,953,700
9,172,401
custom column type for django create DB cache table command
<p>I’m currently working with a PostgreSQL database and I want to use a timestamp of type ‘timestamp(0)’. To achieve this, I’ve created a custom column type using the following code:</p> <pre><code>class DateTimeWithoutTZField(DateTimeField): def db_type(self, connection): return 'timestamp(0)' </code></pre> <p>My question is: How can I use this custom ‘timestamp(0)’ type with the DB cache table that is created by running the Django command</p> <pre><code>python manage.py createcachetable </code></pre>
<python><django><django-models><django-rest-framework>
2024-02-07 09:48:32
1
598
Marcos DaSilva
77,953,646
2,372,467
Investing.com blocking Python Selenium Script
<p>I am trying to download portfolio data from my own login from <a href="https://www.investing.com/portfolio" rel="nofollow noreferrer">https://www.investing.com/portfolio</a> by automating using Selenium webdriver for Chrome using Python.</p> <p>It was working till few days back. Suddenly it has stopped working., it seems the website is detecting the automation and preventing it.</p> <p>Below is a very basic code which takes to the Login Page and waits for the user to input credentials.</p> <p>Once the user inputs its credentials and clicks on sign in - nothing happens. upon investigating i found that the javascript which signs in - is being blocked - 403 error.</p> <p>From Development Tools under networks, <a href="https://www.investing.com/members-admin/auth/signInByEmail/" rel="nofollow noreferrer">https://www.investing.com/members-admin/auth/signInByEmail/</a> this javascripts gives an error The preview of above file shows : Enable JavaScript and cookies to continue while in selenium but when using normal chrome shows: {response: &quot;Invalid_Email_Pwd&quot;, msg: &quot;Wrong email or password. Try again.&quot;} msg : &quot;Wrong email or password. 
Try again.&quot; response : &quot;Invalid_Email_Pwd&quot;</p> <p>Is it that the website is blocking the selenium automation?</p> <pre><code>from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.by import By from selenium.webdriver.chrome.service import Service from webdriver_manager.chrome import ChromeDriverManager import time opt = webdriver.ChromeOptions() opt.add_argument(&quot;--start-maximized&quot;) opt.add_argument(&quot;--enable-javascript&quot;) userAgent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36' opt.add_argument(f'user-agent={userAgent}') driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=opt) # 16 driver.get(r'https://www.investing.com/portfolio'); assert 'Portfolio &amp; Watchlist - Investing.com' in driver.title print('Opened site successfully') username = WebDriverWait(driver, 20).until(EC.presence_of_element_located((By.ID, 'loginFormUser_email'))) username.send_keys(&quot;&quot;, 'username@gmail.com') print('Entered UserID') password = WebDriverWait(driver, 20).until(EC.presence_of_element_located((By.ID, 'loginForm_password'))) password.send_keys(&quot;&quot;, 'password') print('Entered Password') time.sleep(4000) </code></pre>
<javascript><python><selenium-webdriver>
2024-02-07 09:40:17
1
301
Kiran Jain
77,953,490
773,595
Can't install chs
<p>When I try to execute <code>pip3 install chs</code> I get the following error:</p> <pre><code>pip3 install chs Defaulting to user installation because normal site-packages is not writeable Collecting chs Using cached chs-3.0.0.tar.gz (13.0 MB) Preparing metadata (setup.py) ... done Requirement already satisfied: python-chess in ./.local/lib/python3.12/site-packages (from chs) (1.999) Collecting editdistance (from chs) Using cached editdistance-0.6.2.tar.gz (31 kB) Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─&gt; [25 lines of output] Traceback (most recent call last): File &quot;/home/fedora/.local/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 353, in &lt;module&gt; main() File &quot;/home/fedora/.local/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/fedora/.local/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 118, in get_requires_for_build_wheel return hook(config_settings) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/pip-build-env-xq8q7ikf/overlay/lib/python3.12/site-packages/setuptools/build_meta.py&quot;, line 325, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=['wheel']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/pip-build-env-xq8q7ikf/overlay/lib/python3.12/site-packages/setuptools/build_meta.py&quot;, line 295, in _get_build_requires self.run_setup() File &quot;/tmp/pip-build-env-xq8q7ikf/overlay/lib/python3.12/site-packages/setuptools/build_meta.py&quot;, line 311, in run_setup exec(code, locals()) File &quot;&lt;string&gt;&quot;, line 25, in &lt;module&gt; File &quot;/tmp/pip-build-env-xq8q7ikf/overlay/lib64/python3.12/site-packages/Cython/Build/Dependencies.py&quot;, line 1010, in cythonize module_list, module_metadata = create_extension_list( ^^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/pip-build-env-xq8q7ikf/overlay/lib64/python3.12/site-packages/Cython/Build/Dependencies.py&quot;, line 845, in create_extension_list for file in nonempty(sorted(extended_iglob(filepattern)), &quot;'%s' doesn't match any files&quot; % filepattern): File &quot;/tmp/pip-build-env-xq8q7ikf/overlay/lib64/python3.12/site-packages/Cython/Build/Dependencies.py&quot;, line 117, in nonempty raise ValueError(error_msg) ValueError: 'editdistance/bycython.pyx' doesn't match any files [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × Getting requirements to build wheel did not run successfully. │ exit code: 1 ╰─&gt; See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. </code></pre> <p>I'm new to Python so I'm having a hard time debugging. It seems the issue is coming from the <code>editdistance</code> package.</p> <p>I tried using a constraint file to force a different version of <code>editdistance</code> but no versions seem to pass the installation.</p>
<python>
2024-02-07 09:17:35
1
13,497
vdegenne
77,953,302
9,030,603
How to retrieve the generated SQL query from create_sql_agent of Langchain?
<p>I have used <strong>Langchain</strong> - <strong>create_sql_agent</strong> to generate SQL queries with a database and get the output result of the generated SQL query. Here is what my code looks like; it is working pretty well.</p> <pre><code>agent = create_sql_agent( llm=llm, db=db, verbose=True, agent_type= &quot;openai-tools&quot;, ) response = agent.invoke({&quot;input&quot;: &quot;How many resources are there in XYZ location?&quot;}) </code></pre> <p>Here is what the response looks like:</p> <pre><code>{'input': 'How many resources are there in XYZ location?', 'output': 'There are total 15 agents in XYZ'} </code></pre> <p>I want to extract the generated SQL query, in case the user needs to review it. I can see the query in the intermediate steps, but I'm not clear on how to fetch it.</p> <p>Any help would be much appreciated!! Thanks</p>
<python><sql><python-3.x><langchain><large-language-model>
2024-02-07 08:46:03
4
444
AlfiyaFaisy
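A hedged sketch of one way to approach the question above: `create_sql_agent` accepts `agent_executor_kwargs={"return_intermediate_steps": True}`, after which `response["intermediate_steps"]` holds `(action, observation)` pairs, and the SQL usually appears in the action's `tool_input` under a `"query"` key. The exact layout can vary by LangChain version, so the helper below is defensive; the demo uses a stand-in object rather than a real `AgentAction`.

```python
# Sketch: pull generated SQL out of a LangChain SQL-agent response.
# Assumes the agent was built with
#   create_sql_agent(..., agent_executor_kwargs={"return_intermediate_steps": True})
# so response["intermediate_steps"] is a list of (action, observation) pairs.
# The {"query": ...} tool-input shape is an assumption that may vary by version.

def extract_sql_queries(response):
    """Collect every SQL string found in the agent's intermediate steps."""
    queries = []
    for action, _observation in response.get("intermediate_steps", []):
        tool_input = getattr(action, "tool_input", None)
        if isinstance(tool_input, dict) and "query" in tool_input:
            queries.append(tool_input["query"])
    return queries


# Demo with a stand-in step object (real steps carry AgentAction instances):
class _FakeAction:
    def __init__(self, tool_input):
        self.tool_input = tool_input


demo = {"intermediate_steps": [
    (_FakeAction({}), "resources"),
    (_FakeAction({"query": "SELECT COUNT(*) FROM resources"}), "[(15,)]"),
]}
print(extract_sql_queries(demo))  # ['SELECT COUNT(*) FROM resources']
```

With a real response from `agent.invoke(...)`, the same call would yield the SQL text the model produced, which can then be shown to the user for review.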
77,953,145
2,473,382
Annotate a function return type with an instance, not a type
<p>I would like to type a function's return as an actual object, not the type of this object, and have the type checker understand it.</p> <p>Imagine this example:</p> <pre class="lang-py prettyprint-override"><code>my_marker = &quot;This is very singletony, I promise!&quot; def get_object_or_a_singleton_marker() -&gt; int|my_marker: # invalid syntax if (some_condition_is_met): return some_int_value() else: return my_marker # singleton a_var = get_object_or_a_singleton_marker() if a_var is not my_marker: # I want the type to be narrowed here </code></pre> <p>For the record, here the singleton is a string, but in my real code, it would be a class instance.</p> <p>The example as written is invalid, because <code>my_marker</code> is not a type expression. Using <code>Literal[my_marker]</code> has the same issue.</p> <p>If I use <code>type[my_marker]</code> as the annotation, then the narrowing will not work (in my example, the variable will not be <code>my_marker</code> but could be any other <code>str</code>).</p> <p>I could probably use a <code>TypeGuard</code> function, but the idea is to keep the syntax as is, i.e. with the use of <code>is</code>, without any other boilerplate.</p> <p>Can I annotate a function with an instance, not a type?</p>
<python><python-typing>
2024-02-07 08:13:21
1
3,081
Guillaume
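A hedged sketch of the standard pattern for the question above: PEP 586 allows `Literal[...]` to take an enum member, and type checkers narrow on `is` comparisons against single-member enums. Making the sentinel the sole member of an `Enum` therefore keeps the exact `is` syntax asked for (names below are illustrative):

```python
from enum import Enum
from typing import Literal, Union


class _Sentinel(Enum):
    MARKER = object()  # the sole member, a true singleton


MY_MARKER = _Sentinel.MARKER


def get_object_or_marker(condition: bool) -> Union[int, Literal[_Sentinel.MARKER]]:
    if condition:
        return 42  # stand-in for some_int_value()
    return MY_MARKER


a_var = get_object_or_marker(True)
if a_var is not MY_MARKER:
    # Type checkers narrow a_var to int in this branch.
    print(a_var + 1)  # 43
```

Because `_Sentinel` has exactly one member, checkers such as mypy and pyright treat `a_var is not MY_MARKER` as eliminating the `Literal[_Sentinel.MARKER]` arm of the union, with no `TypeGuard` boilerplate.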
77,952,977
736,662
Locust and Websockets
<p>I want to make a simple Locust script for a WebSocket call, preferably using aiohttp. Request URL:</p> <pre><code>wss://someurl/api/task/ws </code></pre> <p>Request Method:</p> <pre><code>GET </code></pre> <p>Status code:</p> <pre><code>101 Switching Protocols </code></pre> <p>Using &quot;Class&quot; and &quot;task&quot; as Locust specifics, what would that script look like using aiohttp?</p>
<python><websocket><aiohttp><locust>
2024-02-07 07:41:34
1
1,003
Magnus Jensen
77,952,743
9,608,860
Get guild details from the user DM using Discord.py
<p>I have created a Discord bot which can be invited to guilds and has slash commands. Everything works fine if a user uses the command in a server, but I want to get the guild if a user uses that command in the bot's DM. How can I do that?</p> <p>I have tried using <code>message.guild</code> but that is always returning None.</p>
<python><discord><discord.py>
2024-02-07 06:50:59
1
405
Aarti Joshi
77,952,733
7,408,848
How to filter metadata in a Mongodb database with Python
<p>I want to do a basic filter on the metadata of all of the documents within my Mongodb database.</p> <p>I generated the metadata myself using the following code:</p> <pre><code>import mongoengine as moe db = &quot;games&quot; collection = &quot;characters&quot; conn = moe.connect(&quot;db&quot;, host=&quot;localhost&quot;) metadata = {&quot;_gameid&quot;:generated_game_id, &quot;symbol&quot;:symbol, &quot;extraction_date&quot;:datetime.today() } my_document = {&quot;$meta&quot;: metadata, &quot;timestamp&quot;: timepoint, &quot;data&quot;: mydata } conn[db][collection].insert_one(my_document) </code></pre> <p>From here I can perform a basic query and pull data out by the id or timestamp. However, when I try to perform the query on the metadata, I keep receiving <code>none</code> or <code>use $getfield</code>.</p> <p>How do I query the metadata and, potentially, the data fields?</p> <p>The querying code I use:</p> <pre><code>ab = conn[db][collection] print(ab.find_one({'_id': ObjectId('65c31dc25494ef2c27c70068')})) </code></pre> <p>I have tried the following but all do not work correctly:</p> <pre><code>print(ab.find_one({'meta.symbol': &quot;BoFV&quot;})) print(ab.find_one({'$meta': {&quot;symbol&quot;:&quot;BoFV&quot;}})) print(ab.find_one({'meta': {&quot;symbol&quot;:&quot;BoFV&quot;}})) print(ab.aggregate([{&quot;$match&quot;:{&quot;expr&quot;:{&quot;$eq&quot;: [{&quot;$getField&quot;:&quot;meta.symbol&quot;},&quot;BoFV&quot;]}}}])) print(ab.aggregate([{&quot;$match&quot;:{&quot;expr&quot;:{&quot;$eq&quot;: [{&quot;$getField&quot;:&quot;$meta.symbol&quot;},&quot;BoFV&quot;]}}}])) </code></pre> <p>More attempts:</p> <pre><code>bb = { &quot;field&quot;: { &quot;$literal&quot;: &quot;symbol&quot; }, &quot;input&quot;: { &quot;$getField&quot;: { &quot;$literal&quot;: &quot;$meta&quot; } } } print(ab.find_one({ '$expr': { '$eq': [ &quot;BoFV&quot; , { '$getField': bb } ] } } ) ) </code></pre>
<python><mongodb>
2024-02-07 06:47:35
1
1,111
Hojo.Timberwolf
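A hedged note on the question above: `"$meta"` is not special to `insert_one`, so the documents were stored with a literal dollar-prefixed top-level key, which dot-notation filters (`'meta.symbol'`) cannot reach. On MongoDB 5.0+, `$getField` with `$literal` can address it; the helper below only builds the filter document (it is not run against a server here, and the operator layout should be verified against the MongoDB docs). Renaming the key to plain `meta` at insert time would avoid the problem entirely.

```python
# Hedged sketch: build a filter for a field stored under the literal
# top-level key "$meta" (requires MongoDB 5.0+ for $getField).
# The inner $getField fetches the "$meta" subdocument of the current
# document; the outer one pulls "symbol" out of that subdocument.
def meta_symbol_filter(symbol):
    return {
        "$expr": {
            "$eq": [
                symbol,
                {
                    "$getField": {
                        "field": "symbol",
                        "input": {"$getField": {"$literal": "$meta"}},
                    }
                },
            ]
        }
    }


# Intended usage (not executed here): ab.find_one(meta_symbol_filter("BoFV"))
print(meta_symbol_filter("BoFV")["$expr"]["$eq"][0])  # BoFV
```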
77,952,676
20,024,690
How to use debugpy in VScode to debug remote python server?
<p>I've used <code>&quot;type&quot;: &quot;python&quot;</code> to debug remote python servers. Now I'm seeing this warning in <code>launch.json</code>:</p> <blockquote> <p>This configuration will be deprecated soon. Please replace <code>python</code> with <code>debugpy</code> to use the new Python Debugger extension.</p> </blockquote> <p><a href="https://i.sstatic.net/KyLw2.png" rel="noreferrer"><img src="https://i.sstatic.net/KyLw2.png" alt="enter image description here" /></a></p> <p>My original setting:</p> <pre><code>{ &quot;type&quot;: &quot;python&quot;, &quot;request&quot;: &quot;attach&quot;, &quot;name&quot;: &quot;attach remote&quot;, &quot;host&quot;: &quot;192.168.1.101&quot;, &quot;port&quot;: 8765, &quot;pathMappings&quot;: [ { &quot;localRoot&quot;: &quot;${workspaceFolder}/...&quot;, &quot;remoteRoot&quot;: &quot;/usr/app/...&quot; } ], &quot;justMyCode&quot;: false }, </code></pre> <p>When I switch to <code>&quot;type&quot;: &quot;debugpy&quot;</code> I get the following error on the &quot;port&quot; and &quot;host&quot; fields:</p> <blockquote> <p>Property port is not allowed.</p> </blockquote> <p><a href="https://i.sstatic.net/WgpIZ.png" rel="noreferrer"><img src="https://i.sstatic.net/WgpIZ.png" alt="enter image description here" /></a></p> <p>So the question is: <strong>how to add the port and host information to complete the migration of the debug configuration?</strong></p>
<python><visual-studio-code><remote-debugging>
2024-02-07 06:36:45
1
361
PeterKogan
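On the migration question above: per the Python Debugger (debugpy) extension's documentation, the top-level `host`/`port` pair of an attach configuration moves into a nested `connect` object. A sketch of the migrated configuration, carrying the original values over unchanged (the elided pathMappings entries are left as in the question):

```json
{
    "name": "attach remote",
    "type": "debugpy",
    "request": "attach",
    "connect": {
        "host": "192.168.1.101",
        "port": 8765
    },
    "pathMappings": [
        {
            "localRoot": "${workspaceFolder}/...",
            "remoteRoot": "/usr/app/..."
        }
    ],
    "justMyCode": false
}
```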
77,952,458
2,446,702
How to search JSON for data and get values from that array in Python
<p>I have the below JSON. I want to search each array, and only scrape the data from the array which has keys and values in the &quot;source&quot; block, ignoring arrays which have an empty &quot;source&quot; block. Here is the JSON:</p> <pre><code>{ &quot;L&quot;: [ { &quot;operation_no&quot;: 123456, &quot;key1&quot;: &quot;value1&quot;, &quot;keys&quot;: { &quot;no_seq&quot;: &quot;1234&quot;, &quot;external_key&quot;: null }, &quot;key2&quot;: 10234, &quot;territory&quot;: { &quot;territory_no&quot;: 1 }, &quot;key3&quot;: &quot;value&quot;, &quot;source&quot;: [] }, { &quot;operation_no&quot;: 123458, &quot;key1&quot;: &quot;value3&quot;, &quot;keys&quot;: { &quot;no_seq&quot;: &quot;1237&quot;, &quot;external_key&quot;: null }, &quot;key2&quot;: 10237, &quot;territory&quot;: { &quot;territory_no&quot;: 1 }, &quot;key3&quot;: &quot;value&quot;, &quot;source&quot;: [ { &quot;source1&quot;: &quot;fhry4645fsgaa1&quot;, &quot;source2&quot;: &quot;123egst36535a1&quot; }, { &quot;source1&quot;: &quot;fhry4645fsgaa2&quot;, &quot;source2&quot;: &quot;123egst36535a2&quot; } ] } ] } </code></pre> <p>So, for example, the lower array which has &quot;source&quot; keys and values, I want to get the &quot;operation_no&quot; value (123458), but ignore the &quot;operation_no&quot; value that has an empty &quot;source&quot; block (the first array). How would I go about this using Python?</p>
<python><arrays><json>
2024-02-07 05:47:53
2
3,255
speedyrazor
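A hedged sketch for the question above: since `source` is a JSON array (a Python list after parsing), an empty one is falsy, so a plain truthiness check inside a list comprehension skips it. The snippet below embeds a trimmed version of the question's JSON for demonstration; in practice the document would come from `json.load`/`json.loads`.

```python
import json

# Trimmed stand-in for the JSON in the question.
raw = """
{"L": [
  {"operation_no": 123456, "source": []},
  {"operation_no": 123458,
   "source": [{"source1": "fhry4645fsgaa1", "source2": "123egst36535a1"}]}
]}
"""
doc = json.loads(raw)

# An empty "source" list is falsy, so that entry is skipped.
operation_nos = [entry["operation_no"] for entry in doc["L"] if entry.get("source")]
print(operation_nos)  # [123458]
```

Using `entry.get("source")` rather than `entry["source"]` also tolerates entries where the key is missing altogether.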
77,952,379
8,497,844
Output data in real time
<p>I have two Python scripts for Linux.</p> <p>test.py</p> <pre><code>#!/usr/bin/env python3 import subprocess print('first line') subprocess.run('./test_1.py', shell=True, encoding='utf-8') </code></pre> <p>test_1.py</p> <pre><code>#!/usr/bin/env python3 print('second line') </code></pre> <p>If I run <code>test.py</code> as:</p> <pre><code>./test.py </code></pre> <p>It prints:</p> <pre><code>first line second line </code></pre> <p>That is the expected result.</p> <p>But if I use:</p> <pre><code>./test.py &gt; test </code></pre> <p>the file <code>test</code> contains the strings:</p> <pre><code>second line first line </code></pre> <p><code>second line</code> is first. Why?</p> <p>I tried playing with <code>subprocess.stdout</code>, <code>subprocess.stderr</code>, <code>sys.stdout</code> and <code>sys.stderr</code>, but none of it worked correctly in my case.</p> <p>One case works for me:</p> <pre><code>#!/usr/bin/env python3 import subprocess print('first line') text = subprocess.run('./test_1.py', stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True, encoding='utf-8') print(text.stdout) </code></pre> <p>In that case <code>test</code> looks OK.</p> <p>BUT there is one catch: <code>print(text.stdout)</code> is executed only after <code>test_1.py</code> completes. All output is captured in a variable first, so I don't see the actual, real-time progress of running <code>test_1.py</code>; if <code>test_1.py</code> takes a long time, I only see all the data afterwards.</p> <p>Questions:</p> <ol> <li><code>second line</code> is first. Why do I get different results in the two cases?</li> <li>Is it possible to get the output of the sub-script in the correct order and in real time? I'm interested in both cases of running:</li> </ol> <pre><code>./test.py </code></pre> <p>and</p> <pre><code>./test.py &gt; test </code></pre>
<python><linux>
2024-02-07 05:18:10
0
727
Pro
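On question 1 above: when stdout is a terminal, CPython line-buffers it, so `first line` is written immediately; when stdout is redirected to a file, it is block-buffered, so `first line` sits in the parent's buffer while the child (which inherits the same file descriptor and flushes on its own exit) writes first. On question 2, flushing the parent's buffer before spawning the child restores the order in both cases and keeps the child's output live; a hedged sketch (running the parent with `python -u` or `PYTHONUNBUFFERED=1` would achieve the same):

```python
import subprocess
import sys

# Flush the parent's buffered stdout before the child writes to the
# same inherited file descriptor, so the order survives redirection.
print('first line', flush=True)
# Stand-in for ./test_1.py, so the sketch is self-contained:
subprocess.run([sys.executable, '-c', "print('second line')"])
```

No capturing is involved, so the child's output still streams to the terminal or file in real time as it is produced.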
77,952,302
7,662,164
JAX `custom_vjp` for functions with multiple outputs
<p>In the <a href="https://jax.readthedocs.io/en/latest/notebooks/Custom_derivative_rules_for_Python_code.html" rel="nofollow noreferrer">JAX documentation</a>, custom derivatives for functions with a single output are covered. I'm wondering how to implement custom derivatives for functions with multiple outputs such as this one?</p> <pre><code># want to define custom derivative of out_2 with respect to *args def test_func(*args, **kwargs): ... return out_1, out_2 </code></pre>
<python><jax><automatic-differentiation>
2024-02-07 04:52:27
1
335
Jingyang Wang
77,952,221
3,713,236
Flask cannot open at 127.0.0.1?
<p>I'm trying to run the most basic possible app on Flask:</p> <pre><code>from flask import Flask, render_template import reddit_text_to_speech # This is a script converted from my Jupyter notebook and intended to be used in def index() once this is resolved. app = Flask(__name__) @app.route('/') def index(): return &quot;Hello World!&quot; app.run(host='0.0.0.0', port=5000) </code></pre> <p>However, I'm getting this in Chrome:</p> <blockquote> <p>This site can’t be reached</p> </blockquote> <blockquote> <p>127.0.0.1 refused to connect.</p> </blockquote> <p><a href="https://i.sstatic.net/YnI96.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YnI96.png" alt="enter image description here" /></a></p> <p>According to CMD, it looks like it's running:</p> <p><a href="https://i.sstatic.net/ZACD6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZACD6.png" alt="enter image description here" /></a></p> <p>I also have Jupyter Lab running on http://localhost:8888/, but since it's on port 8888 it should not conflict?</p>
<python><flask><localhost>
2024-02-07 04:25:24
1
9,075
Katsu
77,951,887
4,979,733
How to stream write a large list of objects in chunks to JSON format in Python?
<p>I want to know how to stream a large list of objects into JSON format in chunks. I have something like this:</p> <pre><code>from pydantic import BaseModel class MyObj(BaseModel): .... data = [] with open('file_to_process', 'rt') as fin: for x in fin: ... data.append(MyObj(....)) with open('foo.json', 'w') as fout: json.dump([x.model_dump() for x in data], fout) </code></pre> <p>This works OK, but when data gets very big, memory consumption gets very high. I tried using the <a href="https://github.com/daggaz/json-stream" rel="nofollow noreferrer">json-stream</a> library to perform the write. It works OK and reduced the memory consumption, but it seems to iterate and write out one object at a time.</p> <pre><code>from json_stream import streamable_list @streamable_list def process(f): with open(f) as fin: for x in fin: ... y = MyObj() yield y.model_dump() data = process('file_to_process') with open('foo.json', 'w') as fout: json.dump(data, fout, indent=2) </code></pre> <p>Is there a way to do the JSON stream write in chunks? Say, if there are 1000 objects generated, write them out, then proceed to the next 1000, etc.</p>
<python>
2024-02-07 02:16:37
0
3,491
user4979733
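A hedged sketch for the question above, using only the standard library: write the array brackets yourself, buffer `chunk_size` serialized records, and emit each chunk with a single `write()` call. Memory stays bounded by one chunk, and nothing waits for the whole list; the pydantic `model_dump()` dicts from the question could be fed in directly as the records.

```python
import io
import json

def dump_json_array_in_chunks(records, fp, chunk_size=1000):
    """Stream an iterable of JSON-serializable records to fp as one
    JSON array, issuing one write() per chunk_size records."""
    fp.write('[')
    first = True
    buffer = []

    def flush():
        nonlocal first
        if not buffer:
            return
        text = ','.join(json.dumps(r) for r in buffer)
        fp.write(text if first else ',' + text)
        first = False
        buffer.clear()

    for record in records:
        buffer.append(record)
        if len(buffer) >= chunk_size:
            flush()
    flush()  # trailing partial chunk
    fp.write(']')


# Round-trip check on an in-memory file:
out = io.StringIO()
dump_json_array_in_chunks(({"i": i} for i in range(5)), out, chunk_size=2)
print(json.loads(out.getvalue()) == [{"i": i} for i in range(5)])  # True
```

With a generator like the question's `process('file_to_process')` in place of the range-based demo, at most `chunk_size` records plus their serialized text are held in memory at any time.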
77,951,588
13,142,245
How to compute percentiles with numpy?
<p>SciPy.stats has a function called percentileofscore. To keep my package dependencies down, I want to source the most similar function possible from numpy instead.</p> <pre><code>import numpy as np a = np.array([3, 2, 1]) np.percentile(a, a) &gt;&gt;&gt; array([1.06, 1.04, 1.02]) percentileofscore(a,a) &gt;&gt;&gt; array([100. , 66.66666667, 33.33333333]) </code></pre> <p>I'm not sure what it is that NumPy is doing... but it's not returning intuitive percentiles to me. How can I achieve the same functionality using built-in numpy methods?</p> <p>Of note, by default, percentileofscore will average percentiles for ties. I do want to preserve this functionality. E.g. <code>[100, 100]</code> should not return <code>[0, 100]</code> but <code>[50, 50]</code> instead.</p>
<python><numpy><scipy><scipy.stats>
2024-02-07 00:25:16
2
1,238
jbuddy_13
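A hedged sketch for the question above: `np.percentile(a, q)` answers the inverse question (the data value at the q-th percentile), which is why its output looks unintuitive here. A percentile-of-score can instead be built from comparison counts. The version below averages the strict (`<`) and weak (`<=`) counts, which gives ties the requested mean treatment (`[100, 100]` yields `[50, 50]`); note that scipy's default `kind='rank'` uses a slightly different formula, so values for scores present in the data will not match scipy's default output exactly.

```python
import numpy as np

def percentile_of_score(a, scores):
    """Percentile rank of each score within a, with ties averaged
    (mean of the strict '<' and weak '<=' comparison counts)."""
    a = np.asarray(a, dtype=float)
    scores = np.atleast_1d(np.asarray(scores, dtype=float))
    strict = (a[None, :] < scores[:, None]).sum(axis=1)   # values strictly below
    weak = (a[None, :] <= scores[:, None]).sum(axis=1)    # values at or below
    return (strict + weak) * 50.0 / a.size                # mean of the two, in %

print(percentile_of_score([100, 100], [100, 100]))  # [50. 50.]
```

The broadcasted comparison builds a `(len(scores), len(a))` boolean matrix, so for very large inputs a sorted-array approach with `np.searchsorted` would use less memory while giving the same counts.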