QuestionId          int64          74.8M – 79.8M
UserId              int64          56 – 29.4M
QuestionTitle       stringlengths  15 – 150
QuestionBody        stringlengths  40 – 40.3k
Tags                stringlengths  8 – 101
CreationDate        stringdate     2022-12-10 09:42:47 – 2025-11-01 19:08:18
AnswerCount         int64          0 – 44
UserExpertiseLevel  int64          301 – 888k
UserDisplayName     stringlengths  3 – 30
77,836,608
1,801,968
Antlr4 compiler PyCharm Plugin produces incorrect Python3 code. Is this a bug or an operator error?
<p>I am compiling the <a href="https://github.com/antlr/grammars-v4/blob/master/antlr/antlr4/examples/CPP14.g4" rel="nofollow noreferrer">https://github.com/antlr/grammars-v4/blob/master/antlr/antlr4/examples/CPP14.g4</a> grammar (I renamed the grammar file to C.g4 and changed the first line in the grammar file to <code>grammar C;</code>)</p> <p><strong>Summary:</strong></p> <ul> <li><p>C.g4 grammar compiles without error in the Tool Output within the PyCharm IDE</p> </li> <li><p>CParser.py file (output from the antlr4 -o command) has an incorrect Python line of code on line 15273</p> <p><code>if((None if localctx.val is None else localctx.val.text).compareTo(&quot;0&quot;)!=0) throw new InputMismatchException(this);</code></p> </li> <li><p>Is this a known bug?</p> </li> </ul> <p><strong>Details:</strong></p> <p>From within the PyCharm IDE.</p> <p><a href="https://i.sstatic.net/SIdUk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SIdUk.png" alt="enter image description here" /></a></p> <pre><code>Ctrl-Shift+G </code></pre> <p>This produces no errors in the Tool Output window:</p> <p><code>2024-01-17 20:59:43: antlr4 -o C:\Users\xxxx\Documents\Git\GitHub\c2p-antlr4\gen -listener -visitor -Dlanguage=Python3 -lib C:\Users\xxxx\Documents\Git\GitHub\c2p-antlr4 C:/Users/xxxx/Documents/Git/GitHub/c2p-antlr4\C.g4</code></p> <hr /> <p>The generated code has an error as shown in the CParser.py code below:</p> <p><a href="https://i.sstatic.net/trrUd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/trrUd.png" alt="enter image description here" /></a></p> <p>Notice line 15273. I discovered the error when trying to run the parser on a code file. The line is hit while using the parser.</p> <p><strong>This isn't valid Python code!!
Has anyone seen this issue and/or is there a fix for this?</strong></p> <p>The plugin I'm using is version 1.22: <a href="https://i.sstatic.net/Zzkpe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Zzkpe.png" alt="enter image description here" /></a></p>
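Note that the generated line is Java, not Python: <code>compareTo</code>, <code>throw new</code>, and the trailing semicolon are Java syntax, which suggests the grammar's embedded actions were written for the Java target and copied into the Python3 output verbatim. A hand-translation of that action might look like the sketch below — the exception class and function name here are hypothetical stand-ins, not the antlr4 runtime's real API:

```python
# Hypothetical hand-translation of the Java action ANTLR emitted verbatim.
# InputMismatchException is a local stand-in for the antlr4 runtime class.
class InputMismatchException(Exception):
    pass

class Token:
    def __init__(self, text):
        self.text = text

def check_val(val, recognizer):
    # Java: if((val == null ? null : val.getText()).compareTo("0") != 0)
    #           throw new InputMismatchException(this);
    text = None if val is None else val.text
    if text != "0":
        raise InputMismatchException(recognizer)
    return True
```

The real fix would be porting every embedded action in the grammar to the target language before generating Python code.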
<python><parsing><pycharm><antlr4>
2024-01-18 02:42:26
1
1,870
P Moran
77,836,541
1,914,781
print dataframe to be used in markdownhere
<p>I would like to print a dataframe as markdown and use it in markdownhere.</p> <p>I looked at the <code>to_markdown()</code> API, but its format does not work in markdownhere.</p> <p>So I tried the code below, which works fine:</p> <pre><code>import pandas as pd def dumpmd(df): out = [] out.append(&quot;|&quot;.join(df.columns)) r = [] for name in df.columns: r.append(&quot;:-&quot;) out.append(&quot;|&quot;.join(r)) for i,row in df.iterrows(): r = [] for e in row: r.append(str(e)) out.append(&quot;|&quot;.join(r)) out = &quot;\n&quot;.join(out) return out data = [['1','B','C'], ['2','E','F'], ['3','H','I']] df = pd.DataFrame(data,columns=[&quot;C1&quot; ,&quot;C2&quot;,&quot;C3&quot;]) print(df.to_markdown(tablefmt='pipe',index=False)) print(dumpmd(df)) </code></pre> <p>Output:</p> <pre><code>| C1 | C2 | C3 | |----+----+----| | 1 | B | C | | 2 | E | F | | 3 | H | I | C1|C2|C3 :-|:-|:- 1|B|C 2|E|F 3|H|I </code></pre> <p>Could anyone help simplify this code, or suggest a better way to output the expected markdown table from a dataframe?</p>
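The hand-rolled loop can collapse to a few joins; a sketch, using `itertuples` to avoid the per-row list building:

```python
import pandas as pd

def dumpmd(df: pd.DataFrame) -> str:
    # header row, ":-" separator row, then one pipe-joined line per data row
    lines = ["|".join(df.columns),
             "|".join(":-" for _ in df.columns)]
    lines += ["|".join(map(str, row)) for row in df.itertuples(index=False)]
    return "\n".join(lines)

df = pd.DataFrame([['1', 'B', 'C'], ['2', 'E', 'F'], ['3', 'H', 'I']],
                  columns=["C1", "C2", "C3"])
print(dumpmd(df))
```

It may also be worth trying `df.to_markdown(tablefmt='github', index=False)`, whose `|---|`-style separators are closer to standard markdown than the `pipe` format and may be accepted by markdownhere.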
<python><pandas><markdown>
2024-01-18 02:22:07
2
9,011
lucky1928
77,836,522
7,563,454
Non-blocking thread pool that posts results as soon as they're ready for the main thread to access
<pre><code>import multiprocessing as mp pool = mp.Pool() def calc(i): return i * 2 def done(results): for result in results: print(result) def loop(): pool.map_async(calc, [0, 1, 2, 3], callback = done) while True: loop() </code></pre> <p>I'm using a thread pool setup as described in this simplified example. It works as expected: the main loop asks the pool to compute a set of results via <code>map_async</code> each time it runs, the pool uses the <code>calc</code> function to process the given items, and when ready the <code>done</code> function is called with a list containing all results, which in this case would always be <code>[0, 2, 4, 6]</code>.</p> <p>What I want to achieve: I'd like the thread pool to run the callback function whenever any result has finished processing, or add it to a list the main thread can see and modify at any time. The pool stops calling the callback after the last result is calculated, until the main loop triggers it again and the process repeats.</p> <p>The goal is for worker threads to post processed results as fast as they can, so the main thread finds as many results as possible whenever it executes, even if they aren't all ready. Using <code>result.get()</code> isn't an option since fetching the result must not block the main thread; that's intended as an optional setting if you want to run in synced mode. I'm fine with either using a callback defined in the pool, or having each finished result added to an array, as long as the main loop can see and process finished items from it, which are then discarded.</p> <p>Worth noting that it's not mandatory for the main thread to give the array to the thread pool; <code>[0, 1, 2, 3]</code> is a constant variable in the class, but other variables the <code>calc</code> function works with need to be updated from the main thread each call. An item is still assigned to each thread... for instance, my array presumes there are 4 processes, each one tasked with computing one of the numbers. I don't expect it to work exactly as I imagine, but please let me know what comes closest.</p>
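A sketch of the per-item pattern described above: submitting each item with its own `apply_async` fires the callback per finished result rather than once per batch, and the callback just appends to a list the main thread can read at any time. It uses `multiprocessing.dummy` (a thread-backed Pool with the same API, matching the "thread pool" in the title); the same pattern works with a process `Pool`, provided `calc` lives at module top level:

```python
from multiprocessing.dummy import Pool  # thread-backed Pool, same API as mp.Pool

def calc(i):
    return i * 2

def run_batch(items):
    done = []   # the main thread can inspect this list at any moment
    pool = Pool()
    # one apply_async per item: the callback fires for each result as soon as
    # its worker finishes, instead of once for the whole batch like map_async
    for i in items:
        pool.apply_async(calc, (i,), callback=done.append)
    pool.close()
    pool.join()  # joined here only so the example terminates deterministically
    return done
```

In the real main loop you would skip the `join()` and simply poll (and drain) `done` on each iteration.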
<python><python-3.x><multithreading><threadpool><python-multithreading>
2024-01-18 02:15:04
1
1,161
MirceaKitsune
77,836,455
1,107,474
Python global variable within map task isn't incrementing in global scope
<p>Below I have a self-contained Python example demonstrating a thread pool. Each of the three threads calls the function <code>task()</code>.</p> <p>I have a global variable. <code>task()</code> should increment <code>created</code> three times. However, <code>created</code> is always 1, instead of 1, 2, 3.</p> <p>Could someone please help?</p> <pre><code>import multiprocessing as mp import operator import time import sys import os import subprocess import threading from functools import partial global created lock = threading.Lock() def task(lock, ignore_arg): print(&quot;task&quot;) global created lock.acquire() # PROBLEM: this isn't incrementing globally created = created + 1 # Always 1, instead of 1, 2 then 3 print(&quot;created (&quot; + str(created) + &quot;)&quot;) lock.release() def task_init(output_queue): task.output_queue = output_queue if __name__ == '__main__': created = 0 output_queue = mp.Queue() p = mp.Pool(3, task_init, [output_queue]) manager = mp.Manager() lock = manager.Lock() func = partial(task, lock) list_of_inputs = ['2', '3', '4'] p.map(func, list_of_inputs) p.close() </code></pre>
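The cause here is that `mp.Pool` workers are separate processes, not threads, so each worker gets its own copy of `created` and the increments never reach the parent. One way to share a counter is a `multiprocessing.Value` handed to the workers through the pool initializer; this sketch pins the `'fork'` start method (a Unix-only assumption — under `'spawn'` the pool setup must sit behind an `if __name__ == '__main__':` guard):

```python
import multiprocessing as mp

def _init(counter_):
    # runs once in each worker; stashes the shared counter as a worker global
    global counter
    counter = counter_

def task(_arg):
    with counter.get_lock():   # a synchronized Value carries its own lock
        counter.value += 1     # shared memory: visible to every process

def run(inputs):
    ctx = mp.get_context('fork')  # assumption: Unix
    counter = ctx.Value('i', 0)
    with ctx.Pool(3, initializer=_init, initargs=(counter,)) as pool:
        pool.map(task, inputs)
    return counter.value
```

The `Value` must travel via `initargs` (process inheritance); passing it inside the `map` arguments would raise an error.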
<python>
2024-01-18 01:49:41
0
17,534
intrigued_66
77,836,326
9,996,859
TypeError: unsupported type annotation <class 'discord.interactions.Interaction'>
<p>I have a discord command attempting to integrate Dall-E:</p> <pre><code>import opena9 from Classes import Dalle from PIL import Image, ImageDraw, ImageFont import discord from discord import Message as DiscordMessage, Color import logging from openai import Image from pathlib import Path from typing import Union @tree.command(name=&quot;hallucinate&quot;, description=&quot;Generate dall-e images using your query.&quot;) @discord.app_commands.checks.has_permissions(send_messages=True) @discord.app_commands.checks.has_permissions(view_channel=True) @discord.app_commands.checks.bot_has_permissions(send_messages=True) @discord.app_commands.checks.bot_has_permissions(view_channel=True) @discord.app_commands.checks.bot_has_permissions(manage_threads=True) async def dalle_command(ctx, interaction: discord.Interaction, *, query: str): if not isinstance(interaction.channel, discord.TextChannel): return if not isinstance(interaction.channel, discord.TextChannel): return user = interaction.user logger.info(f&quot;Dall-E command by {user} asked to draw {query[:20]}&quot;) try: # moderate the message flagged_str, blocked_str = moderate_message(message=query, user=user) await send_moderation_blocked_message( guild=interaction.guild, user=user, blocked_str=blocked_str, message=query, ) if len(blocked_str) &gt; 0: # message was blocked await interaction.response.send_message( f&quot;Your prompt has been blocked by moderation.\n{query}&quot;, ephemeral=True, ) return embed = discord.Embed( description=f&quot;&lt;@{user.id}&gt; asked to draw {query[:20]}&quot;, color=discord.Color.green(), ) embed.add_field(name=user.name, value=query) if len(flagged_str) &gt; 0: # message was flagged embed.color = discord.Color.yellow() embed.title = &quot;⚠️ This prompt was flagged by moderation.&quot; await interaction.response.defer() await asyncio.sleep(3) await interaction.followup.send(embed=embed) response = await interaction.original_response() await send_moderation_flagged_message( 
guild=interaction.guild, user=user, flagged_str=flagged_str, message=query, url=response.jump_url, ) if not query: await interaction.response.send_message( &quot;DALL·E: Invalid query\nPlease enter a query (e.g !dalle dogs on space).&quot;) return # Check if query is too long if len(query) &gt; 100: await interaction.response.send_message(&quot;DALL·E: Invalid query\nQuery is too long! (Max length: 100 chars)&quot;) return except Exception as e: logger.exception(e) await interaction.response.send_message( f&quot;Failed to start chat {str(e)}&quot;, ephemeral=True ) return message = await interaction.response.send_message(&quot;Generating your query (this may take 1 or 2 minutes):&quot; &quot; ```&quot; + query + &quot;```&quot;) try: dall_e = await Dalle.DallE(prompt=f&quot;{query}&quot;, author=f&quot;{user.id}&quot;) generated = await dall_e.generate() if len(generated) &gt; 0: first_image = Image.open(generated[0].path) generated_collage = await _create_collage(ctx, query, first_image, generated) # Prepare the attachment file = discord.File(generated_collage, filename=&quot;art.png&quot;) await interaction.response.send_message(file=file) # Delete the message await message.delete() </code></pre> <p>Which returns this error when it builds in Heroku:</p> <pre><code>File &quot;/app/src/main.py&quot;, line 159, in &lt;module&gt; async def dalle_command(ctx, interaction: discord.Interaction, *, query: str): TypeError: unsupported type annotation &lt;class 'discord.interactions.Interaction'&gt; </code></pre> <p>Line 159 being the line on which the function is defined.</p> <p>Based on some other posts on this error, it seems that maybe there is a problem with the order I am declaring these parameters, but I'm not sure what it is. How should I be declaring this function?</p>
<python><discord.py>
2024-01-18 01:01:37
0
889
andrewedgar
77,836,174
395,857
How can I add a progress bar/status when creating a vector store with langchain?
<p>Creating a vector store with the Python library <a href="https://github.com/langchain-ai/langchain" rel="noreferrer">langchain</a> may take a while. How can I add a progress bar?</p> <hr /> <p>Example of code where a vector store is created with langchain:</p> <pre><code>import pprint from langchain_community.vectorstores import FAISS from langchain_community.embeddings import HuggingFaceEmbeddings from langchain.docstore.document import Document model = &quot;sentence-transformers/multi-qa-MiniLM-L6-cos-v1&quot; embeddings = HuggingFaceEmbeddings(model_name = model) def main(): doc1 = Document(page_content=&quot;The sky is blue.&quot;, metadata={&quot;document_id&quot;: &quot;10&quot;}) doc2 = Document(page_content=&quot;The forest is green&quot;, metadata={&quot;document_id&quot;: &quot;62&quot;}) docs = [] docs.append(doc1) docs.append(doc2) for doc in docs: doc.metadata['summary'] = 'hello' pprint.pprint(docs) db = FAISS.from_documents(docs, embeddings) db.save_local(&quot;faiss_index&quot;) new_db = FAISS.load_local(&quot;faiss_index&quot;, embeddings) query = &quot;Which color is the sky?&quot; docs = new_db.similarity_search_with_score(query) print('Retrieved docs:', docs) print('Metadata of the most relevant document:', docs[0][0].metadata) if __name__ == '__main__': main() </code></pre> <p>Tested with Python 3.11 with:</p> <pre><code>pip install langchain==0.1.1 langchain_openai==0.0.2.post1 sentence-transformers==2.2.2 langchain_community==0.0.13 faiss-cpu==1.7.4 </code></pre> <p>The vector store is created with <code>db = FAISS.from_documents(docs, embeddings)</code>.</p>
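`FAISS.from_documents` is one opaque call, so there is nothing to hook a progress bar onto directly; a common workaround is to build the index incrementally and report progress per chunk — e.g. create the store from the first batch and call `db.add_documents(batch)` for the rest (`add_documents` is part of langchain's vector-store interface). A generic sketch of the chunking loop, with `build_batch` standing in for whichever indexing call you use (wrap the range in `tqdm` for a real bar):

```python
from typing import Callable, Sequence

def build_with_progress(docs: Sequence, build_batch: Callable[[list], None],
                        batch_size: int = 100) -> None:
    # feed documents to the index-building callback chunk by chunk,
    # reporting progress after every chunk instead of one long silent call
    total = len(docs)
    for start in range(0, total, batch_size):
        build_batch(list(docs[start:start + batch_size]))
        print(f"indexed {min(start + batch_size, total)}/{total} documents")
```

Note that most of the wall-clock time is usually the embedding step, so the granularity of the bar is the batch size you choose here.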
<python><progress-bar><langchain><faiss>
2024-01-17 23:59:14
2
84,585
Franck Dernoncourt
77,836,108
23,190,147
Playwright not opening up link
<p>I'm trying to open a certain link using <code>playwright</code>, and I want to scrape the contents of the link. Snippet of code:</p> <pre><code>from playwright.sync_api import sync_playwright with sync_playwright() as p: browser = p.chromium.launch(channel=&quot;msedge&quot;) page = browser.new_page() page.goto('https://example.com/whatever') print(page.title()) browser.close() </code></pre> <p>but instead it directs the browser to a page called: 'https://example.com/noAppName.html'</p> <p>It directs the page to example.com fine (I'm not really using example.com, just for example), but it can't seem to get to the path of the url. I wondered if this might be because my url is <em>really long</em>, and maybe the link has changed for the site, but when I tried it manually, by opening up the page using the <strong>same exact</strong> link that I used for <code>playwright</code>, it worked fine. What does noAppName.html mean, and why is it giving me this error? I wondered if the page just didn't have enough time to load, so I made <code>playwright</code> wait a few seconds before printing the page title, but it still doesn't work.</p> <p>UPDATE: Resolved. The problem was not with the code, and had to do with the website and certain things with logging in (when I opened it manually I had already logged into the website). In general, if you have this problem in the future, I recommend checking out the website, especially if the path of your url seems really long.</p>
<python><url><playwright>
2024-01-17 23:29:13
0
450
5rod
77,836,107
3,555,115
Remove Day, Date, year from Timestamp column in dataframe Python
<p>I have a dataframe: df1 =</p> <pre><code>Load sort_begin comp_begin a Mon Jan 15 23:17:58 PST 2024 - b Mon Jan 15 23:17:58 PST 2024 Mon Jan 15 23:23:58 PST 2024 c Mon Jan 15 23:17:58 PST 2024 Mon Jan 15 23:17:58 PST 2024 </code></pre> <p>How can I remove the day, month, and year from the timestamps in the columns above, keeping only the time in H:M:S format? Timestamps can also be blank or simply a dash (-).</p> <pre><code>df2 = Load sort_begin comp_begin a 23:17:58 PST - b 23:17:58 PST 23:23:58 PST c 23:17:58 PST 23:17:58 PST </code></pre> <p>Is there any effective way to compute the time difference between <code>sort_begin</code> and <code>comp_begin</code> in seconds and add it as a new column?</p> <pre><code>df3 = Load sort_begin comp_begin diff a 23:17:58 PST - - b 23:17:58 PST 23:23:58 PST xx sec c 23:17:58 PST 23:17:58 PST xx sec </code></pre>
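Since `strptime`-style parsing of the `PST` token is unreliable, one sketch simply extracts the `HH:MM:SS` token with a regex (`-` and blanks become NaN), then diffs via timedeltas; keeping the literal `PST` suffix in the output is cosmetic and omitted here:

```python
import pandas as pd

df1 = pd.DataFrame({
    'Load': ['a', 'b', 'c'],
    'sort_begin': ['Mon Jan 15 23:17:58 PST 2024'] * 3,
    'comp_begin': ['-', 'Mon Jan 15 23:23:58 PST 2024',
                   'Mon Jan 15 23:17:58 PST 2024'],
})

# pull out just the HH:MM:SS token; '-' and blank cells yield NaN
times = df1[['sort_begin', 'comp_begin']].apply(
    lambda col: col.str.extract(r'(\d{2}:\d{2}:\d{2})', expand=False))

df2 = df1[['Load']].join(times)

# HH:MM:SS strings convert cleanly to timedeltas; NaN propagates to the diff
df3 = df2.assign(diff=(pd.to_timedelta(df2['comp_begin'])
                       - pd.to_timedelta(df2['sort_begin'])).dt.total_seconds())
```

Rows with a missing timestamp end up with NaN in `diff`, which can then be displayed as `-` if desired.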
<python><pandas><dataframe>
2024-01-17 23:29:05
1
750
user3555115
77,836,103
11,357,623
How to tell unit tests where the package library is in GitHub Actions
<p>I have built a Python library and I have unit tests running inside GitHub Actions, but the unit tests are failing for this reason:</p> <pre><code>Hint: make sure your test modules/packages have valid Python names. Traceback: /opt/hostedtoolcache/Python/3.11.7/x64/lib/python3.11/importlib/__init__.py:126: in import_module return _bootstrap._gcd_import(name[level:], package, level) test/foo_test.py:11: in &lt;module&gt; from mylib import Foo E ModuleNotFoundError: No module named 'mylib' </code></pre> <p>The reason for the error is my directory structure, since <code>mylib</code> is inside the <code>src</code> directory.</p> <pre><code>my_project/ |-- src/ | |-- mylib/ | |-- __init__.py | |-- foo.py | |-- tests/ | |-- test_mylib.py (imports mylib.foo.Foo) no src prefix | |-- setup.py </code></pre> <p><code>pytest test</code> in PyCharm works because I have told my IDE where the sources (src) and the package (mylib) are.</p> <p>How can I execute tests without requiring the <code>src</code> prefix for my test files?</p> <h1>setup.py</h1> <pre><code>packages=find_packages(where='src'), package_dir={'': 'src'}, </code></pre> <h1>Attempt</h1> <p>I created a file <code>pytest.ini</code> with the following content, but it didn't work (unit tests couldn't find my package under the src directory):</p> <pre><code>[pytest] srcpaths = src </code></pre>
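The `srcpaths` key comes from the third-party pytest-srcpaths plugin; since pytest 7 the equivalent built-in ini option is `pythonpath`. A minimal pytest.ini, assuming pytest ≥ 7:

```ini
# pytest.ini -- requires pytest >= 7, where `pythonpath` is built in
[pytest]
pythonpath = src
```

An alternative that needs no pytest option at all is installing the package itself in the CI job (e.g. `pip install -e .` or a poetry install that includes the root package) before running pytest, so `mylib` resolves from site-packages rather than the source tree.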
<python><testing><github-actions>
2024-01-17 23:27:49
1
2,180
AppDeveloper
77,836,047
11,147,107
How to create new pandas row at every "\n" instance?
<p>I have a pandas DataFrame as follows:</p> <pre><code>df = pd.DataFrame({'A': ['foo\nbar', 'tre\ndex', 'hello\nworld'], 'B': ['abc', 'def', 'ghi']}) </code></pre> <p>There are 3 rows made of strings (including <code>\n</code>), and I'd like to split the string in two wherever <code>\n</code> appears, matching the values from the <code>B</code> column with the newly created rows, giving me the following DataFrame:</p> <pre><code>df_final = pd.DataFrame({'A': ['foo', 'bar', 'tre', 'dex', 'hello', 'world'], 'B': ['abc', 'abc', 'def', 'def', 'ghi', 'ghi']}) </code></pre>
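One way, assuming pandas ≥ 0.25 for `explode`: split `A` into lists, then explode so each piece gets its own row while `B` is repeated automatically:

```python
import pandas as pd

df = pd.DataFrame({'A': ['foo\nbar', 'tre\ndex', 'hello\nworld'],
                   'B': ['abc', 'def', 'ghi']})

# split each string on '\n' into a list, then emit one row per list element;
# the other columns are duplicated for every piece of their original row
df_final = (df.assign(A=df['A'].str.split('\n'))
              .explode('A')
              .reset_index(drop=True))
```

`reset_index(drop=True)` is only there to renumber the rows, since `explode` repeats the original index.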
<python><pandas><string><dataframe>
2024-01-17 23:11:57
1
335
Luiz Scheuer
77,836,045
11,618,586
indexing periods in a dataset that spans an increasing and decreasing range
<p>I have a dataframe like so:</p> <pre><code>data = {'column1': [1, 2, 3, 4, 5,4,3,2,1,1, 2, 3, 4, 5,4,3,2,1]} df=pd.DataFrame(data) column1 0 1 1 2 2 3 3 4 4 5 5 4 6 3 7 2 8 1 9 1 10 2 11 3 12 4 13 5 14 4 15 3 16 2 17 1 </code></pre> <p>I want to create an index column that identifies the range when <code>column1</code> starts from <code>1</code> and ends with a specific value that can be changed, such that, if I specify the ending value to be <code>1</code> it results in:</p> <pre><code> column1 ID 0 1 1 1 2 1 2 3 1 3 4 1 4 5 1 5 4 1 6 3 1 7 2 1 8 1 1 9 1 2 10 2 2 11 3 2 12 4 2 13 5 2 14 4 2 15 3 2 16 2 2 17 1 2 </code></pre> <p>if I specify the ending value to be <code>3</code> it results in:</p> <pre><code> column1 ID 0 1 1 1 2 1 2 3 1 3 4 1 4 5 1 5 4 1 6 3 2 7 2 2 8 1 2 9 1 2 10 2 2 11 3 3 12 4 3 13 5 3 14 4 3 15 3 4 16 2 4 17 1 4 </code></pre> <p>I tried using a boolean mask, identifying the <code>1s</code> as a start, and used <code>cumsum()</code> to increment it, but then whenever it encounters a 1 it increments the <code>ID</code>. It should increment only when it increases from <code>1</code> and then decreases back to <code>1</code>.</p>
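For the ending value 1, the expected table can be read as "a new cycle starts where a 1 immediately follows the 1 that closed the previous cycle". A sketch under that reading (an assumption — the general ending-value case depends on whether the boundary row joins the old or the new group, which the two expected tables treat differently, so that rule would need to be pinned down first):

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5, 4, 3, 2, 1, 1, 2, 3, 4, 5, 4, 3, 2, 1])

# a new cycle begins exactly where a 1 follows another 1 (rise-and-fall done)
new_cycle = s.eq(1) & s.shift().eq(1)
ids = new_cycle.cumsum() + 1
```

For other ending values the same `cumsum`-over-a-boundary-mask shape applies; only the boundary condition changes.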
<python><python-3.x><pandas><indexing>
2024-01-17 23:11:02
1
1,264
thentangler
77,835,927
19,675,781
How to replace dataframe values based on index statistics
<p>I have a dataframe like this:</p> <pre><code>l1 = [1,2,3,4,5,6,7,8,9,10] l2 = [11,12,13,14,15,16,17,18,19,20] index = ['FORD','GM'] df = pd.DataFrame(l1,l2).reset_index().T df.index = index </code></pre> <p>I want to replace these integer values based on this:</p> <p>For every index, if the value is less than mean-2, then it is 'MINI' else 'MEGA'.</p> <p>Here the mean varies for every row.</p> <p>The desired output looks like this:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th></th> <th>0</th> <th>1</th> <th>2</th> <th>3</th> <th>4</th> <th>5</th> <th>6</th> <th>7</th> <th>8</th> <th>9</th> </tr> </thead> <tbody> <tr> <td>FORD</td> <td>MINI</td> <td>MINI</td> <td>MINI</td> <td>MEGA</td> <td>MEGA</td> <td>MEGA</td> <td>MEGA</td> <td>MEGA</td> <td>MEGA</td> <td>MEGA</td> </tr> <tr> <td>GM</td> <td>MINI</td> <td>MINI</td> <td>MINI</td> <td>MEGA</td> <td>MEGA</td> <td>MEGA</td> <td>MEGA</td> <td>MEGA</td> <td>MEGA</td> <td>MEGA</td> </tr> </tbody> </table></div> <p>Can anyone help me with this?</p>
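Assuming "mean-2" means each row's mean minus 2, a vectorized sketch: compare every column against the per-row threshold with `lt(..., axis=0)`, then map the boolean mask to labels:

```python
import numpy as np
import pandas as pd

# same shape as the question's df: FORD holds 11..20, GM holds 1..10
df = pd.DataFrame([range(11, 21), range(1, 11)], index=['FORD', 'GM'])

# row-wise threshold: mean of each row minus 2, broadcast across the columns
below = df.lt(df.mean(axis=1) - 2, axis=0)
out = pd.DataFrame(np.where(below, 'MINI', 'MEGA'),
                   index=df.index, columns=df.columns)
```

`axis=0` aligns the threshold Series on the row index, so each row is compared against its own mean.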
<python><pandas><dataframe><statistics>
2024-01-17 22:35:35
2
357
Yash
77,835,886
2,727,655
Adding Linear layers to Thinc Model Example - Understanding Data Dimensions Through Model Architecture
<p>Trying to learn the inner workings of models trained with Spacy, and Thinc models are it. Looking at <a href="https://colab.research.google.com/github/explosion/thinc/blob/master/examples/02_transformers_tagger_bert.ipynb#scrollTo=EYYyAeLqRc6S" rel="nofollow noreferrer">this tutorial</a> and I'm modifying the model to see what breaks and what works. Instead of tagging, I'm modifying it to fit a NER dataset I have with 16 classes. I want to add several layers after the TransformerTokenizer + Transformer layers already outlined in this tutorial, but I'm getting tons of dimension ValueErrors. Also, it's important to me that the TransformersTagger layer outputs the last hidden layer of the given transformer model, which I'm not confident this code is doing. Here's the error I'm getting:</p> <pre><code>ValueError: Attempt to change dimension 'nI' for model 'linear' from 512 to 16 </code></pre> <p>And here is my full code adaptation to date. To be fair, I don't like that there's a softmax(num_ner_classes) prior to the Linear() layer, but I can't get anything else to work with with_array() after the Transformer layer:</p> <pre><code>@dataclass class TokensPlus: batch_size: int tok2wp: List[Ints1d] input_ids: torch.Tensor token_type_ids: torch.Tensor attention_mask: torch.Tensor def __init__(self, inputs: List[List[str]], wordpieces: BatchEncoding): self.input_ids = wordpieces[&quot;input_ids&quot;] self.attention_mask = wordpieces[&quot;attention_mask&quot;] self.token_type_ids = wordpieces[&quot;token_type_ids&quot;] self.batch_size = self.input_ids.shape[0] self.tok2wp = [] for i in range(self.batch_size): print(i, inputs[i]) spans = [wordpieces.word_to_tokens(i, j) for j in range(len(inputs[i]))] print(spans) self.tok2wp.append(self.get_wp_starts(spans)) def get_wp_starts(self, spans: List[Optional[TokenSpan]]) -&gt; Ints1d: &quot;&quot;&quot;Calculate an alignment mapping each token index to its first wordpiece.&quot;&quot;&quot; alignment = 
numpy.zeros((len(spans)), dtype=&quot;i&quot;) for i, span in enumerate(spans): if span is None: raise ValueError( &quot;Token did not align to any wordpieces. Was the tokenizer &quot; &quot;run with is_split_into_words=True?&quot; ) else: alignment[i] = span.start return alignment @thinc.registry.layers(&quot;transformers_tokenizer.v1&quot;) def TransformersTokenizer(name: str) -&gt; Model[List[List[str]], TokensPlus]: def forward(model, inputs: List[List[str]], is_train: bool): tokenizer = model.attrs[&quot;tokenizer&quot;] wordpieces = tokenizer( inputs, is_split_into_words=True, add_special_tokens=True, return_token_type_ids=True, return_attention_mask=True, return_length=True, return_tensors=&quot;pt&quot;, padding=&quot;longest&quot; ) return TokensPlus(inputs, wordpieces), lambda d_tokens: [] return Model(&quot;tokenizer&quot;, forward, attrs={&quot;tokenizer&quot;: AutoTokenizer.from_pretrained(name)}) def convert_transformer_inputs(model, tokens: TokensPlus, is_train): kwargs = { &quot;input_ids&quot;: tokens.input_ids, &quot;attention_mask&quot;: tokens.attention_mask, &quot;token_type_ids&quot;: tokens.token_type_ids, } return ArgsKwargs(args=(), kwargs=kwargs), lambda dX: [] def convert_transformer_outputs(model: Model, inputs_outputs: Tuple[TokensPlus, Tuple[torch.Tensor]], is_train: bool) -&gt; Tuple[List[Floats2d], Callable]: tplus, trf_outputs = inputs_outputs wp_vectors = torch2xp(trf_outputs[0]) tokvecs = [wp_vectors[i, idx] for i, idx in enumerate(tplus.tok2wp)] def backprop(d_tokvecs: List[Floats2d]) -&gt; ArgsKwargs: # Restore entries for BOS and EOS markers d_wp_vectors = model.ops.alloc3f(*trf_outputs[0].shape, dtype=&quot;f&quot;) for i, idx in enumerate(tplus.tok2wp): d_wp_vectors[i, idx] += d_tokvecs[i] return ArgsKwargs( args=(trf_outputs[0],), kwargs={&quot;grad_tensors&quot;: xp2torch(d_wp_vectors)}, ) return tokvecs, backprop @thinc.registry.layers(&quot;transformers_encoder.v1&quot;) def Transformer(name: str = 
&quot;bert-large-cased&quot;) -&gt; Model[TokensPlus, List[Floats2d]]: return PyTorchWrapper( AutoModel.from_pretrained(name), convert_inputs=convert_transformer_inputs, convert_outputs=convert_transformer_outputs, ) @thinc.registry.layers(&quot;TransformersNer.v1&quot;) def TransformersNer(name: str, num_ner_classes: int = 16) -&gt; Model[List[List[str]], List[Floats2d]]: return chain( TransformersTokenizer(name), Transformer(name), with_array(Softmax(num_ner_classes)), Linear(512, 1024) ) </code></pre> <p>How do I best determine how to pipe the output of the PyTorchWrapped TransformersTagger layer into a Linear() + more layers down the chain? I've been using this model visualization but even when I run model.initialize() on the first examples of my data, there are still a lot of (?, ?).</p> <pre><code>import pydot def visualize_model(model): def get_label(layer): layer_name = layer.name nO = layer.get_dim(&quot;nO&quot;) if layer.has_dim(&quot;nO&quot;) else &quot;?&quot; nI = layer.get_dim(&quot;nI&quot;) if layer.has_dim(&quot;nI&quot;) else &quot;?&quot; return f&quot;{layer.name}|({nO}, {nI})&quot;.replace(&quot;&gt;&quot;, &quot;&amp;gt;&quot;) dot = pydot.Dot() dot.set(&quot;rankdir&quot;, &quot;LR&quot;) dot.set_node_defaults(shape=&quot;record&quot;, fontname=&quot;arial&quot;, fontsize=&quot;10&quot;) dot.set_edge_defaults(arrowsize=&quot;0.7&quot;) nodes = {} for i, layer in enumerate(model.layers): label = get_label(layer) node = pydot.Node(layer.id, label=label) dot.add_node(node) nodes[layer.id] = node if i == 0: continue from_node = nodes[model.layers[i - 1].id] to_node = nodes[layer.id] if not dot.get_edge(from_node, to_node): dot.add_edge(pydot.Edge(from_node, to_node)) print(dot) </code></pre> <p>Produces:</p> <pre><code>digraph G { rankdir=LR; node [fontname=arial, fontsize=10, shape=record]; edge [arrowsize=&quot;0.7&quot;]; 176 [label=&quot;tokenizer|(?, ?)&quot;]; 177 [label=&quot;pytorch|(?, ?)&quot;]; 176 -&gt; 177; 179 
[label=&quot;with_array(softmax)|(16, 1024)&quot;]; 177 -&gt; 179; 180 [label=&quot;linear|(512, 1024)&quot;]; 179 -&gt; 180; } </code></pre>
<python><nlp><neural-network><spacy><spacy-transformers>
2024-01-17 22:26:29
1
554
lrthistlethwaite
77,835,777
23,260,297
Convert custom string to date in python
<p>I am reading data in from a csv file and I need to convert 3 columns from strings to dates.</p> <p>The date strings are given in this format: the day first, then the abbreviated month, then the abbreviated year:</p> <pre><code>01Aug23 02Dec22 03Jan24 </code></pre> <p>I have tried the following, but it leads to an &quot;unconverted data remains&quot; error:</p> <pre><code>df['StartDate'] = datetime.strptime(df['StartDate'].iloc[i], '%d%b%y') </code></pre> <p>I am unsure what exactly I am doing wrong. Any suggestions would help.</p> <p>I am using Python pandas.</p>
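`'%d%b%y'` is the right format for `01Aug23`; "unconverted data remains" usually means extra characters (often trailing whitespace from the csv) after the matched part. Looping with `.iloc` is also unnecessary — pandas converts the whole column at once. A sketch that strips stray whitespace first:

```python
import pandas as pd

# hypothetical sample; the trailing space mimics typical csv residue
df = pd.DataFrame({'StartDate': ['01Aug23', '02Dec22 ', '03Jan24']})

# strip stray whitespace, then parse day / abbreviated month / 2-digit year
df['StartDate'] = pd.to_datetime(df['StartDate'].str.strip(), format='%d%b%y')
```

The same one-liner can be repeated for each of the three date columns.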
<python><pandas>
2024-01-17 22:01:43
0
2,185
iBeMeltin
77,835,699
963,319
How to use kfold to train model in scikit-learn
<p>I expected the <code>kfold</code> object to have a <code>fit</code> function attached to it, just like <code>GridSearchCV</code>:</p> <pre><code>pipe = Pipeline([ ('scale', StandardScaler()), ('model', somemodel()) ]) grid = GridSearchCV( estimator=pipe, cv=4 ) grid.fit(X, Y) </code></pre> <p>But <code>kfold</code> doesn't have the <code>fit</code> method, and in the <a href="https://scikit-learn.org/stable/modules/cross_validation.html#k-fold" rel="nofollow noreferrer">example</a> they don't really show how to use it; instead, I think they imply that it should be used with <code>cross_val_score</code>?</p> <p>The problem is that <code>cross_val_score</code> fits the data on each fold separately; what I want is to use all folds to make one fit to train the model.</p> <p>Is that possible, or am I misunderstanding the purpose of the folds?</p>
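`KFold` is only a splitter: it yields `(train_idx, test_idx)` pairs and has nothing to fit. `cross_val_score` fits a fresh clone of the estimator per fold purely to estimate generalization; the model you actually keep comes from one final fit on all of the data afterwards. A sketch with synthetic data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# synthetic stand-in data: 40 samples, 3 features, binary target
X = np.random.RandomState(0).normal(size=(40, 3))
Y = (X[:, 0] > 0).astype(int)

pipe = Pipeline([('scale', StandardScaler()),
                 ('model', LogisticRegression())])

kf = KFold(n_splits=4, shuffle=True, random_state=0)
scores = cross_val_score(pipe, X, Y, cv=kf)   # 4 fits, one score per fold

pipe.fit(X, Y)   # the single final fit on every sample, i.e. "all folds"
```

So the folds exist to score the model, not to train the one you deploy; the final `fit(X, Y)` already uses all of the data.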
<python><scikit-learn>
2024-01-17 21:48:15
1
2,751
Jenia Be Nice Please
77,835,555
3,555,115
Convert column fields to multiple columns and rearrange data in Python Dataframe
<p>I have a python dataframe.</p> <pre><code>df1 Load Instance_name counter_name counter_value 0 A bytes_read 0 0 A bytes_written 90 0 A last_time 100 0 A locks 90 1 A bytes_read 10 1 A bytes_written 940 1 A last_time 1100 1 A locks 910 2 A bytes_read 11 2 A bytes_written 910 2 A last_time 1100 2 A locks 9120 </code></pre> <p><em><strong>Loads can range up to 1000, 10000, etc., and instance_name and counter_name are the same across different loads, but can have different counter values. To simplify the view, I need something like the example below: transform the counter_name column fields into columns and rearrange the data.</strong></em></p> <pre><code>df2 = Load bytes_read bytes_written last_time locks 0 0 90 100 90 1 10 940 1100 910 2 11 910 1100 9120 </code></pre> <p>I am new to python dataframe libraries and I'm not sure of the right way to achieve this.</p>
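This is a long-to-wide reshape, which `pivot_table` (or `pivot`, if each `(Load, counter_name)` pair is unique) does directly. A sketch on a two-load slice of the data — with several instances per load you would add `Instance_name` to the index:

```python
import pandas as pd

df1 = pd.DataFrame({
    'Load': [0, 0, 0, 0, 1, 1, 1, 1],
    'Instance_name': ['A'] * 8,
    'counter_name': ['bytes_read', 'bytes_written', 'last_time', 'locks'] * 2,
    'counter_value': [0, 90, 100, 90, 10, 940, 1100, 910],
})

# one row per Load, one column per counter_name
df2 = (df1.pivot_table(index='Load', columns='counter_name',
                       values='counter_value', aggfunc='first')
          .reset_index())
df2.columns.name = None   # drop the leftover 'counter_name' axis label
```

`aggfunc='first'` makes the pivot tolerant of duplicate `(Load, counter_name)` rows; swap in `'mean'` or similar if duplicates should be aggregated instead.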
<python><pandas><dataframe>
2024-01-17 21:13:54
1
750
user3555115
77,835,519
13,350,341
just: command alias could not be run because just could not find the shell
<p>I'd like to reference some commands I've set up in my <code>justfile</code> to be used in some steps of the CI pipeline (namely in my <code>actions.yaml</code> file) and I'm encountering issues. For personal preference, I'd like <code>just</code> commands to be run on <code>powershell</code> both locally and in the CI pipeline.</p> <p>My <code>justfile</code> is as follows (it is meant to be run on a Windows machine):</p> <pre><code># Use PowerShell instead of sh, https://just.systems/man/en/chapter_3.html set shell := [&quot;powershell.exe&quot;, &quot;-c&quot;] help: @just --list install: @echo &quot;🚀 Installing dependencies&quot; @poetry install --with dev install-pre-commit: @echo &quot;🚀 Setting up the hooks&quot; @poetry run pre-commit install check-project: @echo &quot;🚀 Checking consistency between poetry.lock and pyproject.toml&quot; @poetry check --lock @echo &quot;🚀 Running the hooks against all files&quot; @poetry run pre-commit run --all-files ruff: @echo &quot;🚀 Linting the project with Ruff&quot; @poetry run ruff check src tests ruff-show-violations: @echo &quot;🚀 Linting the project with Ruff and show violations&quot; @poetry run ruff check --show-source --show-fixes src tests ruff-fix: @echo &quot;🚀 Linting the project with Ruff and autofix violations (where possible)&quot; @poetry run ruff check --fix src tests black: @echo &quot;🚀 Formatting the code with Black&quot; @poetry run black src tests black-check: @echo &quot;🚀 Checking formatting advices from Black&quot; @poetry run black --check --diff src tests lint-and-format: ruff black test: @echo &quot;🚀 Testing code with pytest&quot; @poetry run pytest --verbose tests test-and-report-cov: @echo &quot;🚀 Testing code with pytest and generating coverage report&quot; @poetry run pytest --cov=./ --cov-report=xml </code></pre> <p>And here is part of my <code>actions.yaml</code> file:</p> <pre class="lang-yaml prettyprint-override"><code>jobs: lint-and-test: runs-on: ubuntu-latest defaults: run: 
shell: pwsh steps: - name: Check out repository uses: actions/checkout@v4 - name: Setup python uses: actions/setup-python@v4 with: python-version: ${{ env.PYTHON_VERSION }} - name: Install Poetry uses: snok/install-poetry@v1 with: virtualenvs-create: true virtualenvs-in-project: true installer-parallel: true - name: Load cached venv if cache exists id: cached-poetry-dependencies uses: actions/cache@v3 with: path: .venv key: venv-${{ runner.os }}-${{ steps.setup-python.outputs.python-version }}-${{ hashFiles('**/poetry.lock') }} - name: Install dependencies if cache does not exist if: steps.cached-poetry-dependencies.outputs.cache-hit != 'true' run: poetry install --no-interaction --no-root - name: Install project run: poetry install --no-interaction - name: Setup just uses: taiki-e/install-action@just - name: Enforce code style (Ruff) run: just ruff-show-violations - name: Verify code formatting (Black) run: just black-check - name: Run tests run: just test - name: Generate test coverage report run: just test-and-report-cov - name: Upload coverage reports to Codecov uses: codecov/codecov-action@v3 with: token: ${{ secrets.CODECOV_TOKEN }} verbose: true </code></pre> <p>the most relevant steps in this regard being the definition of the default <code>shell</code>, <em>just</em>'s setup (<code>taiki-e/install-action@just</code>, as per <a href="https://just.systems/man/en/chapter_6.html" rel="nofollow noreferrer">the manual</a>) and the successive steps where <code>just</code> is invoked to run commands.</p> <p>Along with the described setup, I get</p> <pre><code>&gt; just ruff-show-violations &gt; shell: /usr/bin/pwsh -command &quot;. 
'{0}'&quot; &gt; env: &gt; PYTHON_VERSION: 3.10 &gt; POETRY_VERSION: 1.6.1 pythonLocation: /opt/hostedtoolcache/Python/3.10.13/x64 PKG_CONFIG_PATH: /opt/hostedtoolcache/Python/3.10.13/x64/lib/pkgconfig Python_ROOT_DIR: /opt/hostedtoolcache/Python/3.10.13/x64 Python2_ROOT_DIR: /opt/hostedtoolcache/Python/3.10.13/x64 Python3_ROOT_DIR: /opt/hostedtoolcache/Python/3.10.13/x64 LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.10.13/x64/lib VENV: .venv/bin/activate **error: Recipe `ruff-show-violations` could not be run because just could not find the shell: No such file or directory (os error 2) Error: Process completed with exit code 1.** </code></pre> <p>Based on the error message and in the idea of ensuring compatibility between <code>set shell := [&quot;powershell.exe&quot;, &quot;-c&quot;]</code> in <code>justfile</code> and the setup of the default shell in <code>actions.yaml</code> (namely, trying to ensure alignment between the local execution environment and the Linux environment the Github Actions workflow runs on), I've tried to play around with these two with no luck. Neither setting <code>pwsh</code> nor <code>powershell</code> as default shell seems to work; specifying the shell at the level of single steps (with <code>shell:</code> option) doesn't work as well.</p> <p>How to invoke <code>just</code> in GitHub Actions jobs' steps and run them on <code>powershell</code>?</p> <h2>Update</h2> <p>Setting things up to run on <code>bash</code>, you won't incur in any issue. Namely, removing <code>set shell := [&quot;powershell.exe&quot;, &quot;-c&quot;]</code> from <code>justfile</code> (thus reverting to <code>bash</code> by default) and specifying <code>bash</code> to be the default shell on which to run jobs in <code>actions.yaml</code> I could get a working configuration. 
However, I have yet to find a configuration that properly runs everything on <code>powershell</code>.</p> <p>Not completely sure about that, but I also noticed that - no matter what the <em>default shell</em> configuration was - <code>taiki-e/install-action@just</code> kept on running on <code>bash</code>, which might be the root cause of the problem. Plus, it doesn't accept a <code>shell</code> input (that I tried to pass via the <code>with: shell: ...</code> option) with which one could force the shell used for that specific step. In other words, with</p> <pre class="lang-yaml prettyprint-override"><code>jobs: lint-and-test: runs-on: ubuntu-latest defaults: run: shell: pwsh steps: ... ... - name: Setup just uses: taiki-e/install-action@just with: shell: pwsh </code></pre> <p>I got</p> <pre><code>Warning: Unexpected input(s) 'shell', valid inputs are ['tool', 'checksum'] Run taiki-e/install-action@just with: shell: pwsh tool: just checksum: true env: PYTHON_VERSION: 3.10 POETRY_VERSION: 1.6.1 pythonLocation: /opt/hostedtoolcache/Python/3.10.13/x64 PKG_CONFIG_PATH: /opt/hostedtoolcache/Python/3.10.13/x64/lib/pkgconfig Python_ROOT_DIR: /opt/hostedtoolcache/Python/3.10.13/x64 Python2_ROOT_DIR: /opt/hostedtoolcache/Python/3.10.13/x64 Python3_ROOT_DIR: /opt/hostedtoolcache/Python/3.10.13/x64 LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.10.13/x64/lib VENV: .venv/bin/activate Run bash --noprofile --norc &quot;${GITHUB_ACTION_PATH:?}/main.sh&quot; info: host platform: x86_64_linux info: installing just@latest info: downloading https://github.com/casey/just/releases/download/1.23.0/just-1.23.0-x86_64-unknown-linux-musl.tar.gz info: verifying sha256 checksum for just-1.23.0-x86_64-unknown-linux-musl.tar.gz info: just installed at /home/runner/.cargo/bin/just + just --version just 1.23.0 </code></pre>
<python><powershell><github-actions><powershell-core><just>
2024-01-17 21:07:30
0
3,157
amiola
77,835,414
2,175,534
Flask Toggle Button
<p>I want to add a toggle button to my page and saw this previous question: <a href="https://stackoverflow.com/questions/51057966/flask-toggle-button-with-dynamic-label">Flask - Toggle button with dynamic label</a>. I did some testing with it and it worked as expected and I was so excited to implement! Then I moved the code over to my actual .html file and for some reason it isn't working. I have been working in Flask for about a week now and it's very rewarding but I am still quite new so I'm not sure what is going on.</p> <p>HTML:</p> <pre><code>{% extends &quot;header.html&quot; %} {% block body %} &lt;h2&gt;{{ mType }}&lt;/h2&gt; {% if rType == &quot;Test&quot;%} &lt;h3&gt;Test&lt;/h3&gt; {% endif %} {% block content %} &lt;div class=&quot;row&quot;&gt; &lt;div class=&quot;col-md-4&quot;&gt;{% include &quot;safetychanges.html&quot; %}&lt;/div&gt; &lt;div class=&quot;col-md-8&quot;&gt; &lt;div id=&quot;load3&quot; class=&quot;load3&quot;&gt; &lt;form method=&quot;post&quot; action=&quot;{{ url_for('tasks') }}&quot;&gt; &lt;input type=&quot;submit&quot; value=&quot;Start/Stop Recording&quot; name=&quot;rec&quot; /&gt; &lt;/form&gt; &lt;div class=&quot;container&quot;&gt; &lt;div class=&quot;row&quot;&gt; &lt;div class=&quot;col-12 d-flex justify-content-end&quot;&gt; &lt;form method=&quot;post&quot; action=&quot;{{ url_for('tasks') }}&quot;&gt; &lt;/form&gt; &lt;img src=&quot;{{ url_for('video_feed') }}&quot; height=&quot;100%&quot;, width=&quot;100%&quot;&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; {% endblock %} &lt;html&gt; &lt;body&gt; &lt;head&gt; &lt;link rel=&quot;stylesheet&quot; href=&quot;https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta.2/css/bootstrap.min.css&quot;&gt; &lt;script src=&quot;https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js&quot;&gt;&lt;/script&gt; &lt;script src=&quot;https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta.2/js/bootstrap.min.js&quot;&gt;&lt;/script&gt; &lt;link 
href=&quot;https://gitcdn.github.io/bootstrap-toggle/2.2.2/css/bootstrap-toggle.min.css&quot; rel=&quot;stylesheet&quot;&gt; &lt;script src=&quot;https://gitcdn.github.io/bootstrap-toggle/2.2.2/js/bootstrap-toggle.min.js&quot;&gt;&lt;/script&gt; &lt;/head&gt; &lt;input type=&quot;checkbox&quot; class='toggle' checked data-toggle=&quot;toggle&quot;&gt; &lt;div class='status'&gt;Toggled&lt;/div&gt; &lt;/body&gt; &lt;script&gt; $(document).ready(function() { $('.toggle').click(function() { var current_status = $('.status').text(); console.log(&quot;here&quot;); $.ajax({ url: &quot;/Planning/get_toggled_status&quot;, type: &quot;get&quot;, data: {status: current_status}, success: function(response) { $(&quot;.status&quot;).html(response); }, error: function(xhr) { //Do Something to handle error } }); }); }); &lt;/script&gt; &lt;/html&gt; {% endblock %} </code></pre> <p>app.py:</p> <pre><code>@app.route('/Planning/get_toggled_status') def toggled_status(): current_status = request.args.get('status') print(&quot;hereeeeeeeeeeeeeeeeeeee&quot;) print(current_status) return 'Toggled' if current_status == 'Untoggled' else 'Untoggled' </code></pre> <p>From my best estimates while testing, it's like the script isn't being triggered while in my html code opposed to when I was testing it in an empty html code file. Any ideas?</p>
<javascript><python><html><ajax><flask>
2024-01-17 20:43:16
0
1,406
Bob
77,835,393
278,205
Poetry2nix flake build errors because the `poetry2nix.overrides` attribute seems to be missing
<h3>Describe the issue</h3> <p>I'm trying to build the following flake, which apart from the overrides section is verbatim from <a href="https://github.com/nix-community/poetry2nix/blob/master/templates/app/flake.nix" rel="nofollow noreferrer">a templates from poetry2nix's repo</a></p> <pre class="lang-hs prettyprint-override"><code>{ description = &quot;bear&quot;; inputs = { flake-utils.url = &quot;github:numtide/flake-utils&quot;; nixpkgs.url = &quot;github:nixos/nixpkgs/nixos-23.11&quot;; poetry2nix = { url = &quot;github:nix-community/poetry2nix&quot;; inputs.nixpkgs.follows = &quot;nixpkgs&quot;; }; }; outputs = { self, nixpkgs, flake-utils, poetry2nix }: flake-utils.lib.eachDefaultSystem (system: let pkgs = nixpkgs.legacyPackages.${system}; inherit (poetry2nix.lib.mkPoetry2Nix { inherit pkgs; }) mkPoetryApplication; in { packages = { bear = mkPoetryApplication { packageName = &quot;bearctl&quot;; projectDir = ./.; overrides = poetry2nix.overrides.withDefaults (self: super: { pycairo = super.pycairo.overridePythonAttrs (old: { nativeBuildInputs = [ self.meson pkgs.buildPackages.pkg-config ]; }); pygobject = super.pygobject.overridePythonAttrs (old: { buildInputs = (old.buildInputs or [ ]) ++ [ super.setuptools ]; }); urllib3 = super.urllib3.overridePythonAttrs (old: { buildInputs = (old.buildInputs or [ ]) ++ [ self.hatch-vcs ]; }); pipewire-python = super.pipewire-python.overridePythonAttrs (old: { buildInputs = (old.buildInputs or [ ]) ++ [ self.flit-core ]; }); }); buildInputs = (with pkgs; [ pkgs.pipewire pkgs.lorri pkgs.xorg.xset pkgs.i3 ]); }; default = self.packages.${system}.bear; }; devShells.default = pkgs.mkShell { inputsFrom = [ self.packages.${system}.myapp ]; packages = [ pkgs.poetry ]; }; }); } </code></pre> <p>which makes <code>nix build</code> balk with</p> <pre><code>✦ ❯ nix build . 
warning: Git tree '/home/robin/devel/bearctl' is dirty error: … while evaluating the attribute 'packages.x86_64-linux.bear' at /nix/store/2xz05z3ar2i1fr06mzr434f6n59513g6-source/flake.nix:88:11: 87| packages = { 88| bear = mkPoetryApplication { | ^ 89| packageName = &quot;bearctl&quot;; … while evaluating the attribute 'pkgs.buildPythonPackage' at /nix/store/yy19v2dwb8ldphvia9smajvwv3ycx2c1-source/pkgs/development/interpreters/python/passthrufun.nix:87:5: 86| withPackages = import ./with-packages.nix { inherit buildEnv pythonPackages;}; 87| pkgs = pythonPackages; | ^ 88| interpreter = &quot;${self}/bin/${executable}&quot;; (stack trace truncated; use '--show-trace' to show the full trace) error: attribute 'overrides' missing at /nix/store/2xz05z3ar2i1fr06mzr434f6n59513g6-source/flake.nix:92:25: 91| 92| overrides = poetry2nix.overrides.withDefaults (self: super: { | ^ 93| </code></pre> <p>I must be doing something very obvious incredibly wrong, but i can't seem to get this to work. What is happening?</p> <p>thanks</p>
<python><python-poetry><nix><nix-flake>
2024-01-17 20:39:10
1
5,297
thepandaatemyface
77,835,127
998,248
Pyright exhaustive check on literal only works for parameter, not properties
<p>This works:</p> <pre class="lang-py prettyprint-override"><code>def test(it: Literal['a', 'b']) -&gt; str: if it == 'a': return 'a' elif it == 'b': return 'b' # No error, exhaustive </code></pre> <p>But this doesn't:</p> <pre class="lang-py prettyprint-override"><code>@dataclass class Foo: foo: Literal['a', 'b'] def test(it: Foo) -&gt; str: foo = it.foo if foo == 'a': return 'a' elif foo == 'b': # Correctly infers that foo is Literal['b'] without the check return 'b' # Error, must return str </code></pre> <p>Why?</p>
<python><python-typing><pyright>
2024-01-17 19:46:52
1
2,791
Anthony Naddeo
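An editorial sketch, not part of the original thread: one workaround commonly suggested for this behaviour is to pin the local variable's declared type so the checker does not widen the assignment target to `str`, and to make the unreachable branch explicit. The exact narrowing rules are pyright's; the code below is only a runnable illustration of the pattern (the `raise` stands in for `typing.assert_never`, available on Python 3.11+).

```python
from dataclasses import dataclass
from typing import Literal


@dataclass
class Foo:
    foo: Literal['a', 'b']


def label(it: Foo) -> str:
    # Annotating the local keeps the Literal type; without the annotation
    # some checker versions widen the assignment target to plain `str`.
    foo: Literal['a', 'b'] = it.foo
    if foo == 'a':
        return 'a'
    elif foo == 'b':
        return 'b'
    else:
        # With typing.assert_never(foo) a checker can prove exhaustiveness;
        # at runtime this branch is unreachable for valid inputs.
        raise AssertionError(f"unhandled literal: {foo!r}")
```

With `assert_never` in the `else` branch, the checker reports an error at that call site if a new literal is added to `Foo.foo` but not handled.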
77,835,107
673,600
Converting .ipynb to .py in colab
<p>Is there a simple way to do this? I have a <code>.ipynb</code> file I want to convert to a python module and import it, hence want to make it into a <code>.py</code> file. Is there a way to do it?</p>
<python><google-colaboratory>
2024-01-17 19:43:21
1
6,026
disruptive
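A note appended by the editor: the usual one-liner is `jupyter nbconvert --to script notebook.ipynb` (runnable from a Colab cell as `!jupyter nbconvert --to script notebook.ipynb`). Since a `.ipynb` file is just JSON, a dependency-free sketch of the same conversion looks like this; the file names are hypothetical.

```python
import json
from pathlib import Path


def notebook_to_script(ipynb_path: str, py_path: str) -> None:
    """Concatenate the code cells of a .ipynb file into a plain .py module."""
    nb = json.loads(Path(ipynb_path).read_text(encoding="utf-8"))
    chunks = []
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            # Each cell's source is a list of lines (or a single string).
            src = cell["source"]
            chunks.append("".join(src) if isinstance(src, list) else src)
    Path(py_path).write_text("\n\n".join(chunks) + "\n", encoding="utf-8")
```

The resulting `.py` file can then be imported like any module once its directory is on `sys.path`.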
77,835,037
2,658,228
Programatically create multiple instances of CustomTkinter Combobox Python
<p>New to python and trying to create an app with <code>customtkinter</code>. The app lets users pick values from different drop-downs and generates codes based on the chosen value.</p> <p>Updated per comment from Mike-SMT:</p> <pre><code>import customtkinter as ctk ctk.set_appearance_mode('light') # define fonts general_labels = button_text = ('Segoe UI', 18) frame_heading = ('Segoe UI Semibold', 20) small_label = ('Segoe UI', 12) class FrameSpcl(ctk.CTkFrame): def __init__(self, parent, rownum, colnum, frame_text, attributes_dict): super().__init__(parent) ctk.CTkLabel(master=self, text=frame_text, font = frame_heading).pack(padx=5, pady=5) self.configure(width=200, height=500, corner_radius=10, bg_color='transparent', fg_color='#d8ebca') # Create an empty list to store all ComboBox instances combobox_list = [] for attribute in list(attributes_dict.keys()): ctk.CTkLabel(master=self, text='Select '+attribute.replace('_', ' ').lower(), font=small_label, justify='left').pack(padx=10,pady=10) combobox_list.append(ctk.CTkComboBox(master=self, values=list(attributes_dict[attribute].keys())).pack(padx=20, pady=(0,20))) ctk.CTkButton(master=self, text='Generate Code', font=general_labels).pack(padx=20, pady=20) #, command=classFunc) code_lab = ctk.CTkLabel(master=self, text='Click to generate Code', font = frame_heading).pack(padx=5, pady=5) code_val = '' def classFunc(): attributes = list(attributes_dict.keys()) for i in range(0,len(combobox_list)): code_val += ' '+str(attributes_dict[attributes[i]].get(combobox_list[i].get())) print(code_val) return(code_val) # code_lab.configure(text='PN: '+classFunc()) self.grid(row=rownum, column=colnum, padx=(20,20), pady=(20,20), sticky = 'n') app = ctk.CTk() app.geometry('950x700') app.title('Car Code Generator') car = dict( model = dict(zip( ['S', '3', 'X', 'Y'], list(range(0,4)) )), trim = dict(zip( ['Standard', 'Long Range', 'All Wheel Drive', 'Sports'], ['RWD', 'RWDLR', 'AWD', 'SPRTS'] )) ) sale_terms = dict( lease = 
dict(zip( ['5 year', '6 year', '7 year', 'None'], ['5YL', '6YL', '7YL', '000'] )), insurance = dict(zip( ['base', '3 year enhanced', '5 year enhanced', 'None'], list(range(0,4)) )), rewards = dict(zip( ['None', 'Bronze', 'Silver', 'Gold', 'Platinum', 'Diamond'], list(range(0,6)) )) ) FrameSpcl(parent=app, rownum=0, colnum=0, frame_text='Car Options', attributes_dict=car) FrameSpcl(parent=app, rownum=0, colnum=1, frame_text='Sale options', attributes_dict=sale_terms) app.mainloop() </code></pre> <p>I'm not sure how to update the value for the label when the button is pressed. I tried using <code>configure(text=pn)</code> but it didn't work (commented out in the above code).</p> <p>The rest of the app appears exactly how I want it to:</p> <p><a href="https://i.sstatic.net/v2FnA.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v2FnA.jpg" alt="enter image description here" /></a></p>
<python><python-3.x><tkinter><python-class><customtkinter>
2024-01-17 19:29:53
1
2,763
Gautam
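An editorial observation on the code above: in tkinter and customtkinter, `pack()` (like `grid()`) returns `None`, so `code_lab = ctk.CTkLabel(...).pack(...)` binds `code_lab` to `None` — which is why `configure(text=...)` could not work — and `combobox_list.append(ctk.CTkComboBox(...).pack(...))` fills the list with `None` as well. The fix is to create the widget first, keep the reference, then pack it. A GUI-free sketch of the pattern, where the hypothetical `DummyWidget` stands in for `CTkComboBox`/`CTkLabel`:

```python
class DummyWidget:
    """Stands in for a CTkComboBox: pack() returns None, as in tkinter."""
    def __init__(self, values):
        self._values = values

    def pack(self, **kwargs):
        return None  # geometry managers return None, not the widget

    def get(self):
        return self._values[0]


# Buggy pattern: the list stores the return value of pack(), i.e. None.
buggy = []
buggy.append(DummyWidget(["a", "b"]).pack(padx=20))
assert buggy == [None]

# Correct pattern: keep the widget reference, then pack it separately.
fixed = []
w = DummyWidget(["a", "b"])
w.pack(padx=20)
fixed.append(w)
assert fixed[0].get() == "a"
```

Applying the same split to `code_lab` gives a real label object on which `code_lab.configure(text='PN: ' + code)` can be called from the button's `command`.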
77,834,994
5,594,008
Wagtail, register settings singleton
<pre><code>from wagtail.contrib.settings.models import BaseGenericSetting, register_setting @register_setting class Glossary(BaseGenericSetting): ... </code></pre> <p>Is it possible to restrict the maximum number of instances that can be created? Maybe there is some easy way, or is overriding the <code>save</code> method the only option?</p>
<python><wagtail>
2024-01-17 19:18:49
1
2,352
Headmaster
77,834,959
673,600
importing python module in colab fails
<p>I'm importing a test function in colab but it's not working as I would expect. I created a simple <code>.py</code> file to test, but I failed to import it despite the fact that the file can be seen in the path.</p> <pre><code>from google.colab import drive import sys import os drive.mount('/content/drive', force_remount=True) path=&quot;/content/drive/My Drive/Colab Notebooks/&quot; sys.path.insert(0,path) os.chdir(path) os.listdir(path) from PPLLM.py import Demo </code></pre> <p>results in the error message:</p> <pre><code>ModuleNotFoundError Traceback (most recent call last) &lt;ipython-input-55-eb52ff305968&gt; in &lt;cell line: 1&gt;() ----&gt; 1 from PPLLM.py import Demo ModuleNotFoundError: No module named 'PPLLM' </code></pre> <p>The pathing seems to work:</p> <pre><code>[..., 'PPLLM.py'] </code></pre> <p>The PPLLM file is created in VS Code and uploaded to ensure no formatting issues.</p> <p>The contents are:</p> <pre><code>def Demo(): pass </code></pre>
<python><google-colaboratory>
2024-01-17 19:10:56
1
6,026
disruptive
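An editorial note: `from PPLLM.py import Demo` asks Python for a submodule named `py` inside a package `PPLLM`; module names never include the file extension, so `from PPLLM import Demo` should work once the directory is on `sys.path`. A self-contained sketch of that mechanism, with a temporary directory standing in for the Drive folder:

```python
import importlib
import sys
import tempfile
from pathlib import Path

# A stand-in for "/content/drive/My Drive/Colab Notebooks/".
moddir = Path(tempfile.mkdtemp())
(moddir / "PPLLM.py").write_text("def Demo():\n    return 'ok'\n")

sys.path.insert(0, str(moddir))

PPLLM = importlib.import_module("PPLLM")  # note: "PPLLM", not "PPLLM.py"
assert PPLLM.Demo() == "ok"

from PPLLM import Demo  # the equivalent `from` form, also without ".py"
assert Demo() == "ok"
```

If the module has already been imported once in the notebook session and the file changes, `importlib.reload(PPLLM)` picks up the new version.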
77,834,850
10,203,572
Parsing stringified array fields when reading csv files with Pandas
<p>I have a csv file dumped from a DataFrame that had array fields, which looks like below:</p> <pre><code>col1,col2,col3 &quot;[454.40145382, 254.9620727, 0.9147141790444601]&quot;,2.1299914162447497,-29.85771596074138 </code></pre> <p>When I read this back with <code>pd.read_csv</code>, col1 is parsed as a string by default. Is there a standard way I can specify for such fields to be parsed into numpy arrays whenever possible?</p> <p>Alternatively, is there a numpy built-in that handles string arrays? I did not have much luck with <code>np.fromstring</code>, which complains about</p> <blockquote> <p>ValueError: string size must be a multiple of element size</p> </blockquote> <p>My workaround is simply doing it myself with something like below (can be modified to be recursive for 1D+)</p> <pre><code>data[&quot;col1_parsed&quot;] = data[&quot;col1&quot;].apply(lambda x: [float(e.strip().replace(&quot;[&quot;,&quot;&quot;).replace(&quot;]&quot;,&quot;&quot;)) for e in x.split(&quot;,&quot;)]) </code></pre> <p>But it feels too simple and tedious for it to not be handled by some builtin already</p>
<python><pandas><numpy>
2024-01-17 18:53:04
2
1,066
Layman
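A sketch appended by the editor: with pandas, the idiomatic route is `pd.read_csv(path, converters={"col1": ast.literal_eval})`, which hands each cell of `col1` to `ast.literal_eval` during parsing; wrap the result in `np.array` per cell if ndarray values are required. The stdlib-only core of that idea, using the question's own sample row:

```python
import ast
import csv
import io

raw = '''col1,col2,col3
"[454.40145382, 254.9620727, 0.9147141790444601]",2.1299914162447497,-29.85771596074138
'''

rows = list(csv.DictReader(io.StringIO(raw)))

# ast.literal_eval safely parses the bracketed string into a real list;
# unlike eval(), it only accepts Python literals.
parsed = ast.literal_eval(rows[0]["col1"])
assert isinstance(parsed, list) and len(parsed) == 3
assert abs(parsed[0] - 454.40145382) < 1e-9
```

This also handles nested lists, so the same converter covers 2-D array cells without the manual `replace`/`split` workaround.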
77,834,781
801,894
How does Python parse `7 in x == True`?
<p>I saw a <a href="https://es.stackoverflow.com/q/611715/349223">question on es.stackoverflow.com</a> in which the author tried to explicitly compare the result of a boolean expression to <code>True</code>.</p> <pre class="lang-python prettyprint-override"><code>if nota in lista1 == True: ... </code></pre> <p>I wasn't sure of the operator precedence myself, so I tried this in the python3 repl:</p> <pre><code>solomon@Solomons-Macintosh ~ % python3 Python 3.9.6 (default, May 7 2023, 23:32:44) [Clang 14.0.3 (clang-1403.0.22.14.1)] on darwin Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; &gt;&gt;&gt; x = [3, 7] &gt;&gt;&gt; (7 in x) == True True &gt;&gt;&gt; 7 in x == True False &gt;&gt;&gt; 7 in (x == True) Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; TypeError: argument of type 'bool' is not iterable </code></pre> <p>How does Python parse <code>7 in x == True</code>? It's not the same as <code>(7 in x) == True</code> because it returns a different result. It's not the same as <code>7 in (x == True)</code> because it doesn't raise an error. So what <em>is</em> it? What does it mean?</p>
<python><operator-precedence>
2024-01-17 18:37:35
0
27,562
Solomon Slow
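For the record: `in` is a comparison operator with the same precedence as `==`, and Python chains comparisons, so `7 in x == True` evaluates as `(7 in x) and (x == True)`, with `x` evaluated only once. That is why it matches neither of the two parenthesised forms:

```python
x = [3, 7]

chained = 7 in x == True            # chained comparison, like `a < b < c`
equivalent = (7 in x) and (x == True)

assert chained is False             # True and False -> False
assert equivalent is False
assert ((7 in x) == True) is True   # the explicitly grouped form differs
```

This is also why linters flag `== True`: the idiomatic form of the original condition is simply `if nota in lista1:`.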
77,834,659
6,815,248
How to calculate the similarity(0%-99%) of two polylines in Python
<p>Both polylines contain a set of geo positions (like 38.663610000000006,-121.29237); the total number of positions differs between the two polylines.</p> <p>How can I calculate the similarity (0%-99%) of two polylines in Python?</p>
<python><geometry><compare><polyline>
2024-01-17 18:13:35
1
423
John
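No answer is recorded here, but a common measure for comparing polylines with different point counts is the discrete Fréchet distance. Two caveats on the sketch below: lat/lon pairs should really be projected (or measured with haversine) rather than treated as planar x/y, and mapping a distance to a 0-99% score needs an application-specific scale (e.g. `100 * exp(-dist / scale)`) — both are assumptions left to the reader.

```python
from functools import lru_cache
from math import hypot


def discrete_frechet(p, q):
    """Discrete Fréchet distance between two polylines of (x, y) points;
    the polylines may have different numbers of points."""
    def d(i, j):
        return hypot(p[i][0] - q[j][0], p[i][1] - q[j][1])

    @lru_cache(maxsize=None)
    def c(i, j):
        if i == 0 and j == 0:
            return d(0, 0)
        if i == 0:
            return max(c(0, j - 1), d(0, j))
        if j == 0:
            return max(c(i - 1, 0), d(i, 0))
        # Either polyline may advance, or both; the "leash" is the max
        # pointwise distance along the best coupling.
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d(i, j))

    return c(len(p) - 1, len(q) - 1)


a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
b = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0), (2.0, 1.0)]  # different lengths are fine
assert discrete_frechet(a, a) == 0.0
assert abs(discrete_frechet(a, b) - 1.0) < 1e-9
```

For long polylines, an iterative dynamic-programming table avoids the recursion limit; libraries such as `shapely` also expose Hausdorff/Fréchet distances if a dependency is acceptable.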
77,834,629
14,101,494
Abstract private attributes in Python
<p>In Python &gt;= 3.12, how do I declare private attributes in abstract classes?</p> <p>The following snippet should declare an abstract attribute. Though, from my understanding, this actually forces a getter on non-abstract subclasses (which produces the same behavior):</p> <pre class="lang-py prettyprint-override"><code>class MyAbstractClass(ABC): @property @abstractmethod def my_prop(self) -&gt; MyType: pass class Concrete(MyAbstractClass): def __init__(self, param: MyType): self._my_prop = param @property def my_prop(self) -&gt; MyType: return self._my_prop </code></pre> <p>But how do I declare only abstract private attributes, <em>without</em> a getter?</p> <pre class="lang-py prettyprint-override"><code>class MyAbstractClass(ABC): # @? __my_prop: MyType # &lt;- should be an abstract private attribute </code></pre> <p>I'm afraid this is a misunderstanding on my part and Python organizes this differently than in other languages. I searched the <code>abc</code> docs but could not find a satisfying answer.</p> <p>Edit: Thanks for your help! I know that &quot;private properties&quot; are not actually private in Python. Though this is not what I am asking for. I want each inherited subclass to have this certain property. In this case it's supposed to be private. As others indicated, I might have used misleading Python terminology. But what's the pythonic way?</p> <p>Edit 2: I am looking for something like interfaces with the possibility to define private or readonly members.</p>
<python><python-typing>
2024-01-17 18:08:39
1
414
lsc
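An editorial sketch: Python cannot express an enforced "abstract private attribute". Double underscores trigger name mangling — the attribute is stored under a per-defining-class name — which is exactly why they fit inheritance badly; the conventional answer is a single-underscore attribute owned by the base class's `__init__`, with the class-level annotation serving as documentation for type checkers.

```python
from abc import ABC, abstractmethod


class A:
    def __init__(self):
        self.__x = 1  # name-mangled: actually stored as _A__x


a = A()
assert "_A__x" in vars(a)      # mangling makes "__x" class-local,
assert not hasattr(a, "__x")   # so subclasses cannot share it cleanly


class MyAbstractClass(ABC):
    _my_prop: int              # documented, checker-visible attribute

    def __init__(self, my_prop: int) -> None:
        self._my_prop = my_prop  # the base class owns the attribute

    @abstractmethod
    def do_something(self) -> int: ...


class Concrete(MyAbstractClass):
    def do_something(self) -> int:
        return self._my_prop   # no getter required


c = Concrete(5)
assert c.do_something() == 5
```

For the "readonly member" wish in Edit 2, `typing.Protocol` with a read-only `@property`, or `Final` annotations, is about as close as the type system gets — privacy itself remains a convention.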
77,834,603
1,711,271
Read multiple csv files in a pandas dataframe, in parallel
<p>This question develops on the preceding one:</p> <p><a href="https://stackoverflow.com/q/77832160/1711271">Read multiple csv files in a pandas dataframe</a></p> <p>Basically, I have a set of files like:</p> <p>file 1:</p> <pre><code>&lt;empty line&gt; #----------------------------------------- # foo bar baz #----------------------------------------- 0.0120932 1.10166 1.08745 0.0127890 1.10105 1.08773 0.0142051 1.09941 1.08760 0.0162801 1.09662 1.08548 0.0197376 1.09170 1.08015 </code></pre> <p>file 2:</p> <pre><code>&lt;empty line&gt; #----------------------------------------- # foo bar baz #----------------------------------------- 0.888085 0.768590 0.747961 0.893782 0.781607 0.760417 0.899830 0.797021 0.771219 0.899266 0.799260 0.765859 0.891489 0.781255 0.728892 </code></pre> <p>etc. Each file is identified by an ID, and there's a ID to file mapping:</p> <pre><code>files = {'A': 'A.csv', 'B': 'B.csv'} </code></pre> <p>Thanks to the other answer, I can read the files serially:</p> <pre><code>columns = ['foo', 'bar', 'baz'] skip = 4 df = (pd.concat({k: pd.read_csv(v, skiprows=skip, sep=r'\s+', names=names) for k,v in files.items()}, names=['ID']) .reset_index('ID') .reset_index(drop=True) ) </code></pre> <p>However, I would like to read them in parallel, to take advantage of my multicore machine. 
A naive attempt doesn't work:</p> <pre><code>from joblib import Parallel, delayed from multiprocessing import cpu_count n_jobs = cpu_count() def read_file(res_dict: dict, skiprows: int, columns: list[str], id: str, file: Path ) -&gt; None: res_dict[id] = pd.read_csv(file, skiprows=skiprows, sep=r'\s+', names=columns) temp = {} temp = Parallel(n_jobs)(delayed(read_file)(temp, skip_rows, columns, id, file) for id, file in master2file.items()) df = (pd.concat(temp, names=['ID']) .reset_index('ID') .reset_index(drop=True) ) </code></pre> <p>I get the error</p> <pre><code>Traceback (most recent call last): File &quot;/home/...py&quot;, line 54, in &lt;module&gt; df = (pd.concat(temp, File &quot;/home/../.venv/lib/python3.10/site-packages/pandas/core/reshape/concat.py&quot;, line 372, in concat op = _Concatenator( File &quot;/home/../.venv/lib/python3.10/site-packages/pandas/core/reshape/concat.py&quot;, line 452, in __init__ raise ValueError(&quot;All objects passed were None&quot;) ValueError: All objects passed were None Process finished with exit code 1 </code></pre> <p>What am I doing wrong? Can you help me?</p>
<python><pandas><csv><parallel-processing><joblib>
2024-01-17 18:03:34
2
5,726
DeltaIV
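What goes wrong above: `Parallel(...)` collects the workers' *return values*, and `read_file` returns `None` — mutating the `res_dict` argument cannot propagate back from worker processes either — so `pd.concat` receives a list of `None`. The fix is to have the worker return `(id, frame)` and build the dict from the results, e.g. `temp = dict(Parallel(n_jobs)(delayed(read_file)(id, f) for id, f in files.items()))` with `read_file` returning `id, pd.read_csv(...)`. A stdlib-only sketch of that shape (threads and the `csv` module stand in for joblib and pandas; the file contents are hypothetical):

```python
import csv
import tempfile
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path


def read_file(file_id, path):
    """Worker must RETURN its result; the caller builds the mapping."""
    with open(path, newline="") as fh:
        return file_id, list(csv.reader(fh, delimiter=" "))


# Two tiny stand-in files keyed by ID, as in the question's `files` dict.
d = Path(tempfile.mkdtemp())
files = {}
for fid, body in [("A", "0.1 1.0 2.0\n"), ("B", "0.2 3.0 4.0\n")]:
    p = d / f"{fid}.csv"
    p.write_text(body)
    files[fid] = p

with ThreadPoolExecutor() as pool:
    results = dict(pool.map(lambda kv: read_file(*kv), files.items()))

assert set(results) == {"A", "B"}
assert results["A"][0][0] == "0.1"
```

With pandas, `results` would map IDs to DataFrames, and the original `pd.concat(results, names=['ID'])` pipeline works unchanged; threads are usually sufficient here since CSV reading is I/O-bound.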
77,834,580
9,283,107
PySide / QT Align text in vertical center for QPushButton
<p>How can I vertically align the text in a large font QPushButton? For example, this Python code creates a button where the text is not vertically aligned. I have tried everything I can think of to solve it but I can't get it working.</p> <pre><code>from PySide2 import QtCore, QtGui, QtWidgets button = QtWidgets.QPushButton(&quot;+&quot;) # &quot;a&quot; or &quot;A&quot; button.setStyleSheet(&quot;font-size: 100px&quot;) layout = QtWidgets.QVBoxLayout() layout.addWidget(button) window = QtWidgets.QWidget() window.setLayout(layout) window.show() </code></pre> <p>Here is what the code above creates:</p> <p><a href="https://i.sstatic.net/ygU2k.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ygU2k.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/hcQQ0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hcQQ0.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/vdDkh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vdDkh.png" alt="enter image description here" /></a></p> <p>Note that I am running this code in Maya but it should be the same problem in any Qt environment, I think.</p>
<python><css><qt><fonts><pyside2>
2024-01-17 17:58:21
2
2,501
Frank
77,834,575
2,464,386
Calling `mock.patch` before a `with` statement
<p>Could someone explain why <code>testAAA_with_patch_from_function_outside_with</code> fails, but the other two tests succeed?</p> <pre><code>import unittest from unittest import mock class BBB: def __init__(self, param1, param2): self.param1 = param1 self.param2 = param2 def func1(self): print(&quot;func1 called&quot;) def func2(self): print(&quot;func2 called&quot;) class AAA: def main_func(self): bbb1 = BBB(&quot;first case&quot;, 1) bbb1.func1() bbb1.func2() bbb2 = BBB(&quot;second case&quot;, 2) bbb2.func1() bbb2.func2() class TestAAA(unittest.TestCase): def setUp(self) -&gt; None: self.mock_bbb = unittest.mock.create_autospec(BBB) def testAAA(self): with mock.patch(&quot;test_example2.BBB&quot;) as patch_bbb: patch_bbb.return_value = self.mock_bbb aaa = AAA() aaa.main_func() self.assertEqual(2, self.mock_bbb.func1.call_count) self.assertEqual(2, self.mock_bbb.func2.call_count) def testAAA_with_patch_from_function(self): with self.setup_patch() as patch_bbb: patch_bbb.return_value = self.mock_bbb aaa = AAA() aaa.main_func() self.assertEqual(2, self.mock_bbb.func1.call_count) self.assertEqual(2, self.mock_bbb.func2.call_count) def testAAA_with_patch_from_function_outside_with(self): patch_bbb = self.setup_patch() with patch_bbb: patch_bbb.return_value = self.mock_bbb aaa = AAA() aaa.main_func() self.assertEqual(2, self.mock_bbb.func1.call_count) self.assertEqual(2, self.mock_bbb.func2.call_count) def setup_patch(self): return mock.patch(&quot;test_example2.BBB&quot;) </code></pre> <p>I don't see anything in the definition of the &quot;with&quot; statement that would make the last two tests semantically different, but is there something special about <code>mock.patch</code> that would make them behave differently?</p> <p>Here's my motivation: I'm writing a number of unit tests that will all need the same patches, at least 6 of them. I was hoping to put the patches in a helper function to avoid duplicating a big glob of patches on every unit test. 
However, when I tried writing a function that returns a tuple of patches:</p> <pre><code>with setup_patches() as (patch_1, patch_2, patch_3...): </code></pre> <p>it failed because there's no <code>__enter__</code> on a tuple type. So then I tried assigning variables outside the &quot;with&quot;:</p> <pre><code>(patch_1, patch_2, patch_3...) = setup_patches() with patch_1, patch_2, patch_3: </code></pre> <p>but this failed because apparently <code>mock.patch</code> works differently when it's used outside the <code>with</code> expression. Is there any other way to simplify the patches?</p>
<python><unit-testing><mocking><python-unittest>
2024-01-17 17:57:07
0
31,779
ajb
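The difference between the tests: inside `with mock.patch(...) as patch_bbb`, the name is bound to the *installed MagicMock* (the return value of `__enter__`), whereas `patch_bbb = self.setup_patch()` binds the *patcher* object — and setting `.return_value` on the patcher merely creates an inert attribute on it. Configure the object the `with` statement yields instead. For the motivating case of many patches from one helper, `contextlib.ExitStack` enters any number of patchers cleanly. A sketch of both points (the `SimpleNamespace` is a stand-in for a real module):

```python
import types
from contextlib import ExitStack
from unittest import mock

module = types.SimpleNamespace(BBB=type("BBB", (), {}))

patcher = mock.patch.object(module, "BBB")
patcher.return_value = "set on the patcher"   # inert: not the mock's return_value

with patcher as mocked_cls:
    assert mocked_cls is module.BBB            # __enter__ yields the installed mock
    assert mocked_cls.return_value != "set on the patcher"
    mocked_cls.return_value = "sentinel"
    assert module.BBB() == "sentinel"

# A helper may return any number of patchers; enter them all via ExitStack:
def setup_patches():
    return [mock.patch.object(module, "BBB")]

with ExitStack() as stack:
    mocks = [stack.enter_context(p) for p in setup_patches()]
    mocks[0].return_value = "ok"
    assert module.BBB() == "ok"
```

Alternatively, `mock.patch("target", new=self.mock_bbb_factory)` installs a prepared replacement directly, so nothing needs configuring after entry.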
77,834,447
274,460
Changing a class's base class for unit testing
<p>I've got a class I'm trying to test. It inherits from a base class which has some functionality which is both inconvenient and irrelevant to the testing so I'm trying to replace it for the purpose of the unit test.</p> <p>I've got this far:</p> <pre><code>class Foo: pass class Bar(Foo): def __init__(self): super().__init__() import types import pytest @pytest.fixture def uut(): global Bar old_bar = Bar Bar = types.new_class(Bar.__name__, (object,), {}, lambda ns: ns.update(Bar.__dict__)) try: yield Bar finally: Bar = old_bar def test_Bar(uut): b = uut() </code></pre> <p>This fails with:</p> <pre><code>FAILED test.py::test_Bar - TypeError: super(type, obj): obj must be an instance or subtype of type </code></pre> <p>AFAICT this is now <code>types.new_class()</code> is intended to be used. What am I doing wrong?</p> <p>I've also tried using <code>mock.Mock</code> as a baseclass and adding <code>&quot;metaclass&quot;: type</code> to the (currently-empty) keywords parameter but neither of these seems to change anything.</p>
<python><python-3.x><class><pytest><metaprogramming>
2024-01-17 17:34:15
1
8,161
Tom
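What breaks here, for the record: zero-argument `super()` compiles to a hidden `__class__` closure cell bound to the *original* `Bar`, so methods copied into a class rebuilt by `types.new_class` still validate `super(old_Bar, obj)` against the old class — hence `obj must be an instance or subtype of type`. One alternative is to swap the base class in place for the duration of the test; note the CPython caveat that `__bases__` assignment fails for some layouts (e.g. when the old base is `object` itself):

```python
class Foo:
    def __init__(self):
        self.expensive = True  # stands in for the inconvenient base behaviour


class Bar(Foo):
    def __init__(self):
        super().__init__()  # zero-arg super() uses the hidden __class__ cell


# The copied method still closes over the ORIGINAL Bar:
assert Bar.__init__.__closure__ is not None
assert Bar.__init__.__closure__[0].cell_contents is Bar


class StubFoo:
    def __init__(self):
        self.expensive = False


original_bases = Bar.__bases__
Bar.__bases__ = (StubFoo,)      # swap the base in place for the test
try:
    b = Bar()
    assert b.expensive is False  # base behaviour replaced
finally:
    Bar.__bases__ = original_bases

assert Bar().expensive is True   # original behaviour restored
```

Wrapped in a pytest fixture with the `try/finally`, this gives the same yield/restore shape as the question's version without fighting the `__class__` cell.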
77,834,401
4,243
Setting up a truly non-required subparser with argparse
<p>I'd like my program to allow any string to be passed as an argument, but also to allow subcommands. For example, these would all be valid:</p> <pre><code>$ python argtest.py 'hello world' $ python argtest.py version $ python argtest.py help </code></pre> <p>I do not want users to have to use <code>--</code>, e.g., <code>python argparse.py -- 'hello world'</code>.</p> <p>If I set up a subcommand parser to handle 'version' and 'help', it will then complain that 'hello world' isn't a valid subcommand.</p> <p>My code:</p> <pre><code>parser = argparse_flags.ArgumentParser() subparsers = parser.add_subparsers(required=False) subparsers.add_parser('version') subparsers.add_parser('help') ns, remainder = parser.parse_known_args(argv[1:]) </code></pre> <p>Yields:</p> <pre><code>$ python argtest.py 'hello world' usage: argtest.py [-h] {version,help} ... argtest.py: error: invalid choice: 'hello world' (choose from 'version', 'help') </code></pre> <p>The only fix I've come up with is to create a &quot;fallback&quot; subcommand and then manually add it with <code>parser.parse_known_args(['fallback'] + argv[1:])</code> by either:</p> <ul> <li>Looking for this first argument not prefixed with '-' and then see if it's in <code>subparsers.choices</code>.</li> <li>Catch the SystemExit raised by argparse when no subparsers match (and temporarily redirecting stderr so it doesn't spam the CLI).</li> </ul> <p>All of this is kind of gross, is there a better way to get the behavior I want? I just want argparse to be a little more chill about not finding a subcommand.</p>
<python><argparse>
2024-01-17 17:26:00
1
23,762
kristina
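One workaround sketch, added editorially — argparse has no built-in "optional subcommand that tolerates arbitrary text", so this simply peeks at the first token and only engages the subparsers when it names a known command; otherwise a free-form positional absorbs the input:

```python
import argparse

COMMANDS = {"version", "help"}


def parse(argv):
    """Dispatch to subcommands only when argv[0] is a known command;
    otherwise treat the whole first argument as free-form text."""
    if argv and argv[0] in COMMANDS:
        parser = argparse.ArgumentParser(prog="argtest.py")
        sub = parser.add_subparsers(dest="command", required=True)
        for name in sorted(COMMANDS):
            sub.add_parser(name)
        return parser.parse_args(argv)

    parser = argparse.ArgumentParser(prog="argtest.py")
    parser.add_argument("text", nargs="?", default="")
    return parser.parse_args(argv)


assert parse(["version"]).command == "version"
assert parse(["hello world"]).text == "hello world"
assert parse([]).text == ""
```

It is still explicit dispatch, but it avoids both the fake "fallback" subcommand and catching argparse's `SystemExit`; arguments beginning with `-` would still need `--` or further handling, as in stock argparse.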
77,834,124
4,564,080
Add a new instance variable to a subclass on construction without overriding the parent class's __init__()?
<p>Let's say I have a base class <code>Parent</code> defined as:</p> <pre class="lang-py prettyprint-override"><code>class Parent: def __init__(self, a: int, b: int, c: int, d: int): self.a = a self.b = b self.c = c self.d = d </code></pre> <p>And a class <code>Child</code> which inherits from it. <code>Child</code> has an additional instance variable <code>e</code> which has a value by default and so its value is not passed as a parameter to <code>__init__</code>. I don't want to override <code>Parent.__init__</code> in <code>Child</code> with the exact same 4 parameters and then call <code>super().__init__(a=a, b=b, c=c, d=d)</code> as well as <code>self.e = 5</code>, but I do want to instantiate <code>self.e = 5</code> when a <code>Child</code> object is instantiated.</p> <p>Is this possible?</p>
<python>
2024-01-17 16:48:00
1
4,635
KOB
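Both common answers, noted for the record: a class attribute gives every `Child` instance a default `e` without touching `__init__` (fine for immutable defaults, since assignment shadows it per instance), and when `e` must be a true per-instance attribute (e.g. mutable), a `*args/**kwargs` pass-through keeps the override free of the parent's parameter list:

```python
class Parent:
    def __init__(self, a: int, b: int, c: int, d: int):
        self.a, self.b, self.c, self.d = a, b, c, d


# Option 1: a class attribute acts as a per-instance default for
# immutable values -- no __init__ override needed.
class Child(Parent):
    e = 5


c = Child(1, 2, 3, 4)
assert c.e == 5
c.e = 9                 # assignment creates an instance attribute
assert Child.e == 5     # the class default is untouched


# Option 2: for a true instance attribute (e.g. a mutable default),
# forward everything without repeating the parent's signature.
class Child2(Parent):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.e = []


c2 = Child2(1, 2, 3, 4)
assert c2.e == [] and c2.a == 1
```

Option 1 is usually preferred when `e` is an int, string, or other immutable constant; Option 2 is the safe pattern for lists, dicts, or anything computed per instance.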
77,834,118
7,060,972
How to properly deploy a Django app while following 12FA?
<p>I am deploying a Django app and I got stuck. I am trying to follow the <a href="https://12factor.net/" rel="nofollow noreferrer">12 factor app</a> principles. Specifically, they state that you should have all your configuration in environment variables. Also, I don't want to have multiple <code>settings.py</code> files, because that goes against the 12FA principles. So, I have simply a single <code>settings.py</code> file, which checks environment variables using <code>os.environ['xxx']</code> for stuff like database login etc.</p> <p>On the local dev machine, these environment variables are pulled from a file called <code>.env</code>. On the server, I am planning to store all the environment variables inside a systemd unit using the <code>Environment=</code> syntax.</p> <p>Now, my problem occurs while deploying. I copy all the source Python files onto the server, and I need to call <code>collectstatic</code> and <code>migrate</code>. However, these actions both require the presence of a <code>settings.py</code> file which needs to have access to the database. However, <code>collectstatic</code> is called from my CI/CD pipeline and doesn't have all those environment variables; the <strong>only</strong> place where they're stored is the systemd unit file which has very limited permissions (only root can read and write to the file).</p> <p>My question is, what approach could I take in order to follow 12FA while maintaining a single <code>settings.py</code> file?</p>
<python><django><deployment><cicd><12factor>
2024-01-17 16:46:25
0
706
Dj Sushi
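For reference, the single-`settings.py` idea in the Django question above is often sketched with harmless build-time defaults, so that commands like `collectstatic` can run in a CI/CD pipeline that lacks the production secrets while the systemd unit still injects the real values in production. The variable names below are illustrative assumptions, not from any real project:

```python
import os

# Illustrative settings fragment: every value comes from the environment,
# per 12FA, but with safe defaults so build-time management commands can
# run without the real secrets.
SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY", "insecure-build-only-key")
DEBUG = os.environ.get("DJANGO_DEBUG", "0") == "1"
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite:///build.sqlite3")
```

Whether a dummy default is acceptable depends on the command: `collectstatic` typically does not touch the database, while `migrate` genuinely needs the real connection settings.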
77,834,102
6,421,394
Converting PDF to Markdown in Python with structure preservation
<p>I need to convert a PDF text document to Markdown while maintaining its structure (i.e. indexed numbered headers and subheaders should get the corresponding number of hashtags # in Markdown to keep the same structure tree). So far I have only explored <code>PDFMinersix</code>, but I am basically just extracting text and I don't see any functionality capable of mapping the structure tree to Markdown format, or am I wrong?</p> <p>For me it's important to convert the document to text while being able to retain the structure tree hierarchy. Doing it in one or two steps is the same for me.</p> <p>Any recommendations for Python libraries or best practices that have proven effective in similar scenarios? I am looking for a solution that could scale to hundreds of documents, so possibly nothing hardcoded, even though the documents will actually share most of the structure and indexing.</p>
<python><pdf><markdown><document-conversion><pdfminersix>
2024-01-17 16:45:12
1
493
Guido
77,834,002
1,892,308
Django ORM: how to cast geometry to geography in filter?
<p>In my Django project, I have defined a <code>geometry</code> field on one of my models. Its SRID is 4326 (WGS 84).</p> <p>I would like to write a query with the Django ORM that will translate to:</p> <pre class="lang-sql prettyprint-override"><code>select * from building where (ST_DWITHIN(shape::geography, ST_MakePoint(1.9217,47.8814)::geography,20)) </code></pre> <p>In other words, I would like to filter the queryset using the <code>ST_DWITHIN</code> function, applied to a geometry field (shape) cast to a geography.</p> <p>The only way I could find is to use the <code>extra()</code> function, like so:</p> <pre class="lang-py prettyprint-override"><code>Building.objects.extra( where=[ f&quot;ST_DWITHIN(shape::geography, ST_MakePoint({lng}, {lat})::geography, {radius})&quot; ]) </code></pre> <p>But the <code>extra</code> function is supposed to be deprecated in a future version of Django, according to the <a href="https://docs.djangoproject.com/en/5.0/ref/models/querysets/#django.db.models.query.QuerySet.extra" rel="nofollow noreferrer">doc</a>.</p> <p>Can I write this query without escaping from the ORM?</p> <p>Maybe using the <a href="https://docs.djangoproject.com/en/5.0/ref/models/database-functions/#cast" rel="nofollow noreferrer">Cast</a> function?</p>
<python><django><postgis><geodjango>
2024-01-17 16:30:03
0
1,773
Istopopoki
77,833,856
11,864,933
downloading camera snapshot via http url in python
<p>I have an NVR from Dahua, and I have access to the admin user on this device. Using this account I'm trying to get snapshots from the IP cameras connected to this NVR. According to the Dahua documentation, to do this I can use this URL: http://Username:Password@IP-Address/cgi-bin/snapshot.cgi?channel=1</p> <p>When I replace &quot;username&quot; etc. with the correct data, it works: I can paste this link into my web browser and it displays a snapshot from the chosen camera.</p> <p>Now I want to do the same in Python code, because those snapshots will be analysed by a neural network, so I need them as a Python variable, not as a picture in my browser.</p> <p>I wrote some simple code, which looks like this:</p> <pre><code>import requests from PIL import Image from io import BytesIO url = &quot;http://username:password@ip_address/cgi-bin/snapshot.cgi?channel=10&quot; response = requests.get(url) if response.status_code == 200: image = Image.open(BytesIO(response.content)) image.show() else: print(f&quot; failed to download the image, response from website: {response.status_code}&quot;) </code></pre> <p>Obviously I used the actual username, not &quot;username&quot;. And I got the error HTTP 401, which means an unauthorized connection. Which is weird, because if I take this exact URL and copy it from my code to a web browser, there is no problem with authorization. Everything works.</p> <p>I suppose that this might be some sort of firewall blockade, or other security stuff happening in the NVR configuration, that prevents me from connecting via Python code.</p> <p>I also tried another authentication method, but got the same HTTP 401 response:</p> <pre><code>session = requests.Session() session.auth = ('username', 'password') url = &quot;http://ip_address/cgi-bin/snapshot.cgi?channel=10&quot; response = session.get(url) </code></pre> <p>I honestly don't know. I am a data scientist, not a web developer, nor a cybersecurity expert, so my knowledge in these areas is very limited.</p>
<python><http>
2024-01-17 16:06:15
1
318
Mateusz Boruch
77,833,798
3,994,193
Create Azure static web hosting from storage account using Python SDK
<p>I am trying to create static website hosting for an Azure storage account using the Python SDK, but I am not able to find any examples.</p> <p>As per the SDK, we have to use BlobServiceClient to create static hosting, but I am not able to find any code for this: <a href="https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/storage/azure-storage-blob" rel="nofollow noreferrer">https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/storage/azure-storage-blob</a></p>
<python><azure><azure-storage>
2024-01-17 15:57:41
1
390
Sandhu
77,833,712
14,852,106
Odoo 16 / custom website : Pagination problem
<p>I paginated the search result. At first the query set gives me the perfect result, but the pagination shows all pages, and I get all the data from the database by clicking on any page number.</p> <p>Here is my code:</p> <p>This is the controller which I added:</p> <pre><code> class MyCustomWeb(http.Controller): @http.route(['/customer', '/customer/page/&lt;int:page&gt;'], type=&quot;http&quot;, auth=&quot;user&quot;, website=True) def customer_kanban(self, page=1, search=None, **post): domain = [] if search: domain.append(('name', 'ilike', search)) post[&quot;search&quot;] = search customer_obj = request.env['res.partner'].sudo().search(domain) total = customer_obj.sudo().search_count([]) pager = request.website.pager( url='/customer', total=total, page=page, step=3, ) offset = pager['offset'] customer_obj = customer_obj[offset: offset + 5] return request.render('my_module.customer_form', { 'search': search, 'customer_details': customer_obj, 'pager': pager, }) </code></pre> <p>This is the XML code, the template of the customer website:</p> <pre><code> &lt;template id=&quot;customer_form&quot; name=&quot;Customers&quot;&gt; &lt;t t-call=&quot;website.layout&quot;&gt; &lt;div&gt; &lt;div class=&quot;col-md-6&quot;&gt; &lt;br/&gt; &lt;div&gt; &lt;form action=&quot;/customer&quot; method=&quot;post&quot;&gt; &lt;t t-call=&quot;website.website_search_box&quot;&gt; &lt;/t&gt; &lt;input type=&quot;hidden&quot; name=&quot;csrf_token&quot; t-att-value=&quot;request.csrf_token()&quot;/&gt; &lt;div&gt; &lt;section&gt; &lt;div class=&quot;customer_details&quot;&gt; &lt;center&gt; &lt;h3&gt;Customers&lt;/h3&gt; &lt;/center&gt; &lt;/div&gt; &lt;br/&gt; &lt;div class=&quot;oe_product_cart_new row&quot; style=&quot;overflow: hidden;&quot;&gt; &lt;t t-foreach=&quot;customer_details&quot; t-as=&quot;customers&quot;&gt; &lt;div class=&quot;col-md-3 col-sm-3 col-xs-12&quot; style=&quot;padding:1px 1px 1px 1px;&quot;&gt; &lt;div style=&quot;border: 1px solid #f0eaea;width: 150px;height: auto;padding: 7% 0% 10% 0%; border-radius: 3px;overflow: hidden; margin-bottom: 44px !important;width: 100%;height: 100%;&quot;&gt; &lt;div class=&quot;oe_product_image&quot;&gt; &lt;center&gt; &lt;div style=&quot;width:100%;overflow: hidden;&quot;&gt; &lt;img t-if=&quot;customers.image_1920&quot; t-attf-src=&quot;/web/image/res.partner/#{customers.id}/image_1920&quot; class=&quot;img oe_product_image&quot; style=&quot;padding: 0px; margin: 0px; width:auto; height:100%;&quot;/&gt; &lt;/div&gt; &lt;div style=&quot;text-align: left;margin: 10px 15px 3px 15px;&quot;&gt; &lt;t t-if=&quot;customers.name&quot;&gt; &lt;span t-esc=&quot;customers.name&quot; style=&quot;font-weight: bolder;color: #3e3b3b;&quot;/&gt; &lt;br/&gt; &lt;/t&gt; &lt;/div&gt; &lt;/center&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/t&gt; &lt;/div&gt; &lt;div class=&quot;products_pager form-inline justify-content-center mt-3&quot;&gt; &lt;t t-call=&quot;website.pager&quot;&gt; &lt;t t-set=&quot;_classes&quot;&gt;mt-2 ml-md-2&lt;/t&gt; &lt;/t&gt; &lt;/div&gt; &lt;/section&gt; &lt;br/&gt; &lt;hr class=&quot;border-600 s_hr_1px w-100 mx-auto s_hr_dotted&quot;/&gt; &lt;/div&gt; &lt;/form&gt; &lt;/div&gt; &lt;/div&gt; &lt;/div&gt; &lt;/t&gt; &lt;/template&gt; </code></pre> <p>Any help please? Thanks.</p>
<python><xml><controller><odoo>
2024-01-17 15:43:55
1
633
K.ju
77,833,627
4,627,471
Why does my PySide6 wizard page have the first button in blue?
<p>Note to reviewers: I concede this may be a dupe, however IMHO it was marked as a dupe of the wrong question. Better would be <a href="https://stackoverflow.com/q/56478207">Why is the first button blue in color when I run the PyQt5 program?</a> as mentioned by @musicamente.</p> <pre><code>import sys from PySide6.QtWidgets import QApplication, QWidget, QPushButton, QHBoxLayout, QWizard, QWizardPage class ThreeButtonsPage(QWizardPage): def __init__(self): super().__init__() self.initUI() def initUI(self): button1 = QPushButton('Button 1', self) button2 = QPushButton('Button 2', self) button3 = QPushButton('Button 3', self) hbox = QHBoxLayout() hbox.addWidget(button1) hbox.addWidget(button2) hbox.addWidget(button3) self.setLayout(hbox) class MyWizard(QWizard): def __init__(self): super().__init__() first_page = ThreeButtonsPage() self.addPage(first_page) self.setWindowTitle('Wizard Example') if __name__ == '__main__': app = QApplication(sys.argv) wizard = MyWizard() wizard.show() sys.exit(app.exec()) </code></pre> <p>Why is the first button blue? I hope the code is pretty self-explanatory. The first and only the first button appears in blue. This is not a styling issue as I have dumped the styles of all the buttons just to check. I have also created the same layout in a single widget app and there is no problem there. So this seems specific to the wizard.</p> <p><a href="https://i.sstatic.net/1j7ld.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1j7ld.png" alt="dialog screen" /></a></p>
<python><pyside6>
2024-01-17 15:28:19
0
1,015
Keeely
77,833,511
10,232,932
Visual Studio Code behind a proxy with pip install
<p>I am trying to install a package behind a company proxy in Visual Studio Code on Windows with Python.</p> <p>The error I get is:</p> <blockquote> <p>WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProxyError('Cannot connect to proxy.', ConnectionResetError(10054, 'An existing connection was forcibly close d by the remote host', None, 10054, None))</p> </blockquote> <p>I did the proxy settings under Windows, and for example the proxy server is <code>http://gia-proxy.company.net</code> with port <code>8080</code>. I am able to log on in the browser and I am able to use the internet, so the problem must be in pip. Somehow when I run <code>pip config debug</code> I get the output:</p> <blockquote> <p>env_var: env: global: C:\ProgramData\pip\pip.ini, exists: False site: c:\users\user\appdata\local\programs\python\python39\pip.ini, exists: False user: C:\Users\user\pip\pip.ini, exists: False<br /> C:\Users\user\AppData\Roaming\pip\pip.ini, exists: False</p> </blockquote> <p>And I am also not able to install it with:</p> <pre><code>pip install --proxy http://user:password@ttp://gia-proxy.company.net:8080 TwitterApi </code></pre> <p>with my username and password, and it leads to the same error.</p>
<python><pip><proxy>
2024-01-17 15:10:55
0
6,338
PV8
77,833,379
17,307,978
How apply changes on ods file using the library ezodf in python?
<p>I have 2 .ods files:</p> <ul> <li><p>The first is called AppAndroid.ods. This has the heading: Application No. Version Category Encryption Interest Comments Counter. Each column is separated by a space here, but in my file this corresponds to different columns. This file lists all previously encountered applications with the name, version number, etc., and the Counter column, which counts the number of times the application has been encountered. This ods file has a sheet called appFromAppStore and an Others sheet. The heading is the same on both sheets.</p> </li> <li><p>The second ods file is called UnsupportedApp.ods, which is very similar to the first. Here is its header: Application No. Version Category Encryption Interest Comments. In this case, there is no Counter column because this file changes every time my Python program is executed. This ods file has a sheet called appFromAppStore_Result and an Others_Result sheet. The heading is the same on both sheets.</p> </li> <li><p>What does my Python program do? Currently it copies the information from my UnsupportedApp.ods file into AppAndroid.ods. The latter lists all previously encountered applications. That said, UnsupportedApp.ods is ephemeral and case-specific. I made a function that adds +1 to my counter when an application present in UnsupportedApp.ods is also present in AppAndroid.ods. I want to update the latter, however I cannot do it: I display the information in my console step by step and my counters increment correctly, but when the save is done and I open my .ods document afterwards, nothing has changed.</p> </li> </ul> <p>Do you have any ideas?</p> <p>Here is my code:</p> <pre><code>def increment_counter(app_name, data): app_exists = any(row['Application'] == app_name for row in data) if app_exists: for row in data: if row['Application'] == app_name: row['Counter'] += 1 else: new_row = {'Application': app_name, 'Version': '', 'Category': '', 'Encryption': '', 'Interest': '', 'Comments': '', 'Counter': 1} data.append(new_row) def update_counter(source_file, destination_file, source_sheet, destination_sheet): source_data = pe.get_records(file_name=source_file, sheet_name=source_sheet) destination_data = pe.get_records(file_name=destination_file, sheet_name=destination_sheet) for row in source_data: app_name = row['Application'] increment_counter(app_name, destination_data) destination_book = ezodf.opendoc(destination_file) destination_sheet_index = -1 for i, sheet in enumerate(destination_book.sheets): if sheet.name == destination_sheet: destination_sheet_index = i break destination_book.sheets[destination_sheet_index].data = destination_data destination_book.saveas(destination_file) bak_file = f&quot;{destination_file}.bak&quot; if os.path.exists(bak_file): os.remove(bak_file) source_file = 'UnsupportedApp.ods' source_sheet = 'appFromAppStore_Result' destination_sheet = 'appFromAppStore' update_counter(source_file, filePathAppAndroid, source_sheet, destination_sheet) </code></pre>
<python><ods>
2024-01-17 14:54:52
0
439
Louis Chabert
77,833,227
15,897,793
KivyMD Android App crashing because of error in material_resources.py file
<p>I developed a KivyMD app specifically for Android which runs great on Windows as a .py file, but when converted to an APK using Ubuntu on Windows, it gives the error:</p> <pre><code>/mnt/c/Users/Ayan/Documents/rackcalculator/.buildozer/android/build-arm64-v8a_armeabi-v7a/build/python-installs/rackcalculator/arm64-v8a/kivymd/material_resources.py, line 20, in &lt;module&gt; AttributeError: 'NoneType' object has no attribute 'width' </code></pre> <p>This comes from the KivyMD installed by Buildozer.</p> <p>I tried changing the code manually in material_resources.py and compiling it as well, but after running the Buildozer build it still gives the same problem; a clean build downloads the same material_resources.py with the error.</p> <p>The code that contains the error does <code>from kivy.core.window import Window</code> and then uses <code>Window.width</code> &amp; <code>Window.height</code>.</p>
<python><android><kivy><kivymd><buildozer>
2024-01-17 14:33:50
1
490
AaYan Yasin
77,833,094
6,884,119
Scrapy carousel categories not extracting
<p>I am trying to scrape a website to get the list of categories from a carousel, but it's not working.</p> <p>Below is my code:</p> <pre><code>import scrapy class CourtsmuSpider(scrapy.Spider): name = &quot;courtsmu&quot; allowed_domains = [&quot;www.courtsmammouth.mu&quot;] start_urls = [&quot;https://www.courtsmammouth.mu&quot;] def parse(self, response): carousel = response.xpath('//div[contains(@id, &quot;categories_4&quot;)]').extract() print(f&quot;Output: {carousel}&quot;) </code></pre> <p>I am getting the following output in the console:</p> <pre><code>Output: [] </code></pre> <p>Can someone help me out?</p>
<python><python-3.x><web-scraping><xpath><scrapy>
2024-01-17 14:12:39
0
2,243
Mervin Hemaraju
77,832,989
777,377
Sink is not written into delta table in Spark structured streaming
<p>I want to create a streaming job that reads messages from TXT files within a folder, does the parsing and some processing, and appends the result to one of 3 possible delta tables depending on the parse result. There is a parse_failed table, an unknown_msgs table, and a parsed_msgs table.</p> <p>Reading is done with</p> <pre class="lang-py prettyprint-override"><code>sdf = spark.readStream.text(path=path_input, lineSep=&quot;\n\n&quot;, pathGlobFilter=&quot;*.txt&quot;, recursiveFileLookup=True) </code></pre> <p>and writing with</p> <pre class="lang-py prettyprint-override"><code>x = sdf.writeStream.foreachBatch(process_microbatch).start() </code></pre> <p>where process_microbatch is</p> <pre class="lang-py prettyprint-override"><code>def process_microbatch(self, batch_df: DataFrame, batch_id: int) -&gt; None: &quot;&quot;&quot;Processing of newly arrived messages. For each message replicate it if needed, and execute the parse_msg_proxy on each.&quot;&quot;&quot; batch_df.rdd.flatMap(lambda msg: replicate_msg(msg)).map(lambda msg: parse_msg_proxy(msg)) </code></pre> <p>and where parse_msg_proxy is</p> <pre class="lang-py prettyprint-override"><code>def parse_msg_proxy(self, msg: str) -&gt; None: try: parsed_msg = parse_message(msg, element_mapping) # do some processing # create df_msg dataframe from parsed_msg df_msg.write.format(&quot;delta&quot;).mode(&quot;append&quot;).save(path_parsed_msgs) except ParseException as e: spark.createDataFrame([{'msg': parsed_msg, 'error': str(e)}]).write.format(&quot;delta&quot;).mode(&quot;append&quot;).save(path_parse_errors) raise Exception(&quot;Parse error occured.&quot;) except UnknownMsgTypeException: spark.createDataFrame([{'msg': parsed_msg}]).write.format(&quot;delta&quot;).mode(&quot;append&quot;).save(path_unknown_msgs) </code></pre> <p>The streaming job starts without an error message, but the delta tables are not created. What's wrong?</p> <p>Thanks!</p> <p>Update:</p> <p>If I change the function to be executed by adding a collect to it:</p> <pre class="lang-py prettyprint-override"><code>def process_microbatch(self, batch_df: DataFrame, batch_id: int) -&gt; None: &quot;&quot;&quot;Processing of newly arrived messages. For each message replicate it if needed, and execute the parse_msg_proxy on each.&quot;&quot;&quot; batch_df.rdd.flatMap(lambda msg: replicate_msg(msg)).map(lambda msg: parse_msg_proxy(msg)).collect() </code></pre> <p>I get the error message <em>RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.</em></p>
<python><pyspark><databricks><azure-databricks><spark-structured-streaming>
2024-01-17 13:54:25
0
653
bayerb
77,832,920
5,779,083
How to Efficiently Copy Detectron2 Build Files in a Multi-Stage Docker Build?
<p>I'm working on an application that uses the <a href="https://huggingface.co/docs/transformers/model_doc/layoutlmv2" rel="nofollow noreferrer">LayoutLMv2 model</a>, which uses Facebook AI’s <a href="https://github.com/facebookresearch/detectron2/" rel="nofollow noreferrer">Detectron2</a> package for its visual backbone. Both of these dependencies, as well as the application itself, require torch.</p> <p>My goal is to have a container image in which the application can run a training using an Nvidia graphics card and CUDA.</p> <p>Detectron2 does not have pre-built wheels for the newer versions of CUDA and PyTorch, so I need to build Detectron2 from source. See <a href="https://detectron2.readthedocs.io/en/latest/tutorials/install.html" rel="nofollow noreferrer">this link</a> for installation instructions.</p> <p>To allow Detectron2 to make use of CUDA, it has to be built in an environment where the CUDA development tools are present. For this the <code>nvidia/cuda:11.7.1-cudnn8-devel-ubuntu20.04</code> image provided by Nvidia can be used. However, my application does not require these development tools, as the necessary CUDA libraries that my application requires are already bundled with PyTorch. This results in a large application image size, as the CUDA development tools are not removed after the Detectron2 build.</p> <p>The documentation for these images states the following about the <code>devel</code>-tagged images: <em>&quot;These images are particularly useful for multi-stage builds.&quot;</em></p> <p>Okay, so I can use a multi-stage build to build Detectron2 in the <code>devel</code> image and then copy the necessary files to a smaller image. However, I'm not sure what files I need to copy and where to.</p> <p>When I build Detectron2 from source using the <code>devel</code> image, the following files are created inside the cloned repository:</p> <pre><code>==================================================================================================== 189 MB $ pip install --user -e detectron2_repo # buildkit ==================================================================================================== 48 MB home/appuser/detectron2_repo/build/temp.linux-x86_64-3.8/home/appuser/detectron2_repo 25 MB home/appuser/detectron2_repo/build/lib.linux-x86_64-3.8/detectron2/_C.cpython-38-x86_64-linux-gnu.so 25 MB home/appuser/detectron2_repo/detectron2/_C.cpython-38-x86_64-linux-gnu.so 521 kB home/appuser/detectron2_repo/build/temp.linux-x86_64-3.8/.ninja_deps </code></pre> <p>(Created using <a href="https://github.com/orisano/dlayer" rel="nofollow noreferrer">dlayer</a>)</p> <p>Would it make sense to copy the <code>detectron2_repo</code> directory to the smaller image? Or should I build a wheel and copy that to the smaller image? How would I go about doing that?</p> <p>I would appreciate any guidance on the best approach to take for the multi-stage build and what specific files should be copied to the smaller image.</p>
<python><docker><python-wheel><nvidia-docker><detectron>
2024-01-17 13:44:54
1
4,782
Tom
77,832,899
7,454,177
Why is this monkeypatch in pytest not working?
<p>As I am switching from Django to FastAPI, I also need to change tests from unittests to pytests. I build a custom TestAPI class and have test cases as methods, which works fine. However I want to override some functions (not dependencies) which are used in the code in one testcase. I tried this, but it doesn't work:</p> <pre><code>def test_smoke_me_api(self, monkeypatch): monkeypatch.setattr(&quot;app.auth.utils.get_user&quot;, mock_get_user) re = self.c.get(&quot;/me/&quot;) </code></pre> <p>It doesn't call the <code>mock_get_user</code> function, but instead the <code>get_user</code> one. According to some docs, I added the monkeypatch to the <code>setup_class</code> function of my test class, but this didn't work as this is apparently initialized with one argument only (<code>self</code>).</p> <p><code>self.c</code> is a client, which is a <code>TestClient</code> initialized in the <code>setup_class</code>.</p> <p>Minimal example:</p> <p>app/auth/utils.py</p> <pre><code>def get_user(sub) -&gt; dict: re = requests.get(f&quot;https://{API_DOMAIN}/api/v2/users/{sub}&quot;) return re.json() </code></pre> <p>app/auth/views.py</p> <pre><code>from app.auth.utils import get_user @router.get(&quot;/&quot;) async def me_get(sub: str = Security(auth.verify)) -&gt; dict: return get_user(sub) </code></pre> <p>app/test_main.py</p> <pre><code>def mock_get_user(sub = &quot;testing&quot;) -&gt; dict: return { &quot;created_at&quot;: &quot;2023-08-15T13:25:31.507Z&quot;, &quot;email&quot;: &quot;test@test.org&quot; } class TestAPI: def setup_class(self): from app.main import app self.c = TestClient(app) def test_smoke_me_api(self, monkeypatch): monkeypatch.setattr(&quot;app.auth.utils.get_user&quot;, mock_get_user) re = self.c.get(&quot;/me/&quot;) </code></pre>
<python><pytest><fastapi><pytest-mock>
2024-01-17 13:42:13
1
2,126
creyD
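One thing worth checking in the monkeypatch question above (hedged, since the full app isn't shown): views.py does `from app.auth.utils import get_user`, which binds the function object into the views module's namespace at import time, so patching `app.auth.utils.get_user` leaves the name the view actually calls untouched. Patching the name where it is used (`app.auth.views.get_user`) does take effect. A self-contained sketch of the mechanics with stand-in modules:

```python
import sys
import types

# Stand-in for app.auth.utils: defines the real get_user.
utils = types.ModuleType("demo_utils")
def _real_get_user(sub):
    return {"email": "real@example.org"}
utils.get_user = _real_get_user
sys.modules["demo_utils"] = utils

# Stand-in for app.auth.views: "from demo_utils import get_user" copies
# the function object into the views namespace at import time.
views = types.ModuleType("demo_views")
views.get_user = utils.get_user
sys.modules["demo_views"] = views

def mock_get_user(sub="testing"):
    return {"email": "test@test.org"}

# Patching the defining module leaves the copy in views untouched...
utils.get_user = mock_get_user      # like setattr("app.auth.utils.get_user", ...)
unaffected = views.get_user("x")    # still calls the real function

# ...while patching where the name is used takes effect.
views.get_user = mock_get_user      # like setattr("app.auth.views.get_user", ...)
patched = views.get_user("x")
```

With the pytest fixture that would be `monkeypatch.setattr("app.auth.views.get_user", mock_get_user)` (module path assumed from the question's layout).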
77,832,878
3,104,974
Django: Subclassing django.forms.widgets.Media
<p>I want to use a custom JavaScript widget in my Django 5.0 app. The JavaScript is not static, but dependent on template variables that I have already successfully attached to the widget as attributes (it's a MultiWidget, in case that matters).</p> <p>Since the <a href="https://docs.djangoproject.com/en/4.2/topics/forms/media/#media-as-a-dynamic-property" rel="nofollow noreferrer">media property</a> can only render static JavaScript, I was thinking of using the HTML template renderer in my own subclass <code>MediaDirect(widgets.Media)</code>. For that, I have also subclassed <a href="https://github.com/django/django/blob/main/django/forms/widgets.py#L102" rel="nofollow noreferrer"><code>Media.render_js</code></a> and <a href="https://github.com/django/django/blob/main/django/forms/widgets.py#L128" rel="nofollow noreferrer"><code>Media.get_absolute_path</code></a> so that it uses <code>django.template.loader.render_to_string</code> and pastes the content between a <code>&lt;script&gt;&lt;/script&gt;</code> tag:</p> <pre><code>from django.template.loader import render_to_string from django.utils.html import format_html class MediaDirect(widgets.Media): &quot;&quot;&quot;Media class that directly renders a template, using the media's parent as context. Important: Javascript or CSS *templates* are accessed per definition of settings.TEMPLATES, not settings.STATIC_URL, since they are not static. &quot;&quot;&quot; def __init__(self, context, *args, **kwargs): self.context = context super().__init__(*args, **kwargs) def render_js(self): print(&quot;This is never called&quot;) result = [] for path in self._js: js = render_to_string(self.absolute_path(path), self.context) ########## &lt;--- result.append(format_html(&quot;&lt;script&gt;{}&lt;/script&gt;&quot;, js)) ########## &lt;--- return result def absolute_path(self, path): if path.startswith((&quot;http://&quot;, &quot;https://&quot;, &quot;/&quot;)): return path return path class NoUISliderMinMaxWidget(widgets.MultiWidget): template_name = &quot;widgets/nouislider_minmax.html&quot; def _get_media(self): &quot;&quot;&quot; Media for a multiwidget is the combination of all media of the subwidgets. Override method to use MediaDirect instead of Media &quot;&quot;&quot; context = self.get_context(self.label, None, None) md = MediaDirect(context, js=[&quot;widgets/nouislider_minmax.js&quot;]) ########## &lt;--- return md media = property(_get_media) def __init__(self, **kwargs): self.value_min = kwargs.get(&quot;value_min&quot;, 0) self.value_max = kwargs.get(&quot;value_max&quot;, 100) self.value_step = kwargs.get(&quot;value_step&quot;, 1) self.value0 = kwargs.get(&quot;initial&quot;, [self.value_min, self.value_max])[0] self.value1 = kwargs.get(&quot;initial&quot;, [self.value_min, self.value_max])[1] super().__init__(kwargs.get(&quot;attrs&quot;, {})) def get_context(self, name, value, attrs): context = super().get_context(name, value, attrs) for n in (&quot;value_min&quot;, &quot;value_max&quot;, &quot;value_step&quot;, &quot;value0&quot;, &quot;value1&quot;): context[&quot;widget&quot;][n] = getattr(self, n) return context def decompress(self, value): if isinstance(value, IntegerRange): return [value.low, value.high] if isinstance(value, (list, tuple)) and len(value) == 2: return value if value is None: return [None, None] raise ValueError(&quot;Invalid value: &quot;, value) </code></pre> <p>This subclassed render_js method is never called, however. Honestly, I can't follow what Django does internally in order to render the Media instance. Somehow it passes <a href="https://github.com/django/django/blob/main/django/forms/widgets.py#L185" rel="nofollow noreferrer"><code>widgets.media_property</code></a>, even though I've overridden the media property. Here I'm lost as to where the call stack comes from and where it goes.</p> <p>Any ideas on how to achieve dynamically rendered JavaScript are welcome.</p>
<python><django><dynamic><django-templates><widget>
2024-01-17 13:40:08
0
6,315
ascripter
77,832,803
10,232,932
Avoid nested if statements and skip lines of code
<p>I have a combination of functions; the code in Python has the following format. In the code I am looping over an SQL table where I always check a key whose status gets set to Done after I run the function. So for example I have the &quot;QUEUE_TEST&quot; table with keys:</p> <pre><code>Key Status A Pending B Pending C Pending </code></pre> <p>Then with this key I load specific data in the function and I get a new dataframe.</p> <pre><code>def main(): try: cursor.execute(&quot;SELECT TOP 1 KEY FROM MODEL_FACTORY#PYTHON.QUEUE_TEST WHERE STATUS = 'Pending'&quot;) result = pd.DataFrame(cursor.fetchone()) while not result.empty: input_data = .... try: if conditionA == True: if conditionB == True: Test if conditionC == True: if conditionD == False: except Exception as e: return except Exception as e: return </code></pre> <p>How can I skip the lines of code after Test if conditionB is True, and start at the top of the while loop again? So for example, if conditionB is True I want to have the following code:</p> <pre><code>def main(): try: cursor.execute(&quot;SELECT TOP 1 KEY FROM MODEL_FACTORY#PYTHON.QUEUE_TEST WHERE STATUS = 'Pending'&quot;) result = pd.DataFrame(cursor.fetchone()) while not result.empty: input_data = .... try: if conditionA == True: if conditionB == True: Test except Exception as e: return except Exception as e: return </code></pre> <p>And if conditionB is not True, I want to run the following code:</p> <pre><code>def main(): try: cursor.execute(&quot;SELECT TOP 1 KEY FROM MODEL_FACTORY#PYTHON.QUEUE_TEST WHERE STATUS = 'Pending'&quot;) result = pd.DataFrame(cursor.fetchone()) while not result.empty: input_data = .... try: if conditionA == True: if conditionC == True: if conditionD == False: except Exception as e: return except Exception as e: return </code></pre> <p>I know I can do it with if/else statements, but as I have a lot of other code after conditionB I would love not to use that kind of format...</p>
<python>
2024-01-17 13:28:22
3
6,338
PV8
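The shape asked for in the question above can often be had without if/else nesting by branching once on conditionB and using `continue` to jump back to the top of the loop. A runnable sketch with placeholder conditions (the names mirror the question; the queue and Test are simulated, since the real SQL and conditions aren't shown):

```python
def process(queue):
    """Placeholder for the question's while-loop over pending queue rows.

    Each row is a tuple of booleans standing in for
    (conditionA, conditionB, conditionC, conditionD).
    """
    handled = []
    for condition_a, condition_b, condition_c, condition_d in queue:
        if not condition_a:
            continue                     # nothing to do for this row
        if condition_b:
            handled.append("Test")       # run Test...
            continue                     # ...then skip straight to the next row
        # only reached when condition_b is False:
        if condition_c and not condition_d:
            handled.append("C-and-not-D branch")
    return handled
```

The `continue` after Test replaces the else-nesting: everything below it is skipped for that row, and the loop moves on to the next queue entry.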
77,832,724
647,151
(wrong-type-argument markerp nil) during shell-mode PDB tracking
<p>With the built-in <code>python-mode</code> that comes with the standard distribution of Emacs, you can set up a hook so that a PDB session gets <em>tracked</em> when initiated through <code>shell-mode</code>:</p> <pre class="lang-lisp prettyprint-override"><code>(defun my-shell-mode-hook () (add-hook 'comint-output-filter-functions 'python-pdbtrack-comint-output-filter-function t)) (add-hook 'shell-mode-hook 'my-shell-mode-hook) </code></pre> <p>Normally, this means Emacs will open the source code that corresponds to the current interpreter location. Pretty smart!</p> <p>For some reason, from experimentation possibly having to do with using a custom theme, Emacs can get into a bad state such that we get an error when moving to a new Python source file.</p> <pre class="lang-lisp prettyprint-override"><code>Debugger entered--Lisp error: (wrong-type-argument markerp nil) python-pdbtrack-unset-tracked-buffer() python-pdbtrack-comint-output-filter-function(#(&quot;\15\33[K\15(Pdb) &quot; 0 11 (field output))) run-hook-with-args(python-pdbtrack-comint-output-filter-function #(&quot;\15\33[K\15(Pdb) &quot; 0 11 (field output))) comint-output-filter(#&lt;process shell&gt; #(&quot;\15\33[K\15(Pdb) &quot; 0 11 (field output))) </code></pre> <p>The error happens somewhere inside this function:</p> <pre class="lang-lisp prettyprint-override"><code>(defun python-pdbtrack-unset-tracked-buffer () &quot;Untrack currently tracked buffer.&quot; (when (buffer-live-p python-pdbtrack-tracked-buffer) (with-current-buffer python-pdbtrack-tracked-buffer (set-marker overlay-arrow-position nil))) (setq python-pdbtrack-tracked-buffer nil)) </code></pre> <p>Emacs becomes more or less unusable in this state, corrupting most buffers with the error message.</p> <p>How can I dig deeper to figure out what this issue is about?</p> <p>My current steps to debug the issue have been to open up Emacs using <code>-Q</code>, and I have been able to trigger the issue only when enabling a custom theme (on the other hand, reverting back to the default theme doesn't fix things).</p>
<python><emacs>
2024-01-17 13:16:16
0
1,457
malthe
77,832,596
12,890,458
Programmatically click the OK button in MS Word
<p>I am trying to automatically convert pdf files to docx files in python, using (copied from <a href="https://stackoverflow.com/a/63255120/12890458">this</a>):</p> <pre><code>import win32com.client word = win32com.client.Dispatch(&quot;Word.Application&quot;) word.visible = 1 pdfdoc = 'NewDoc.pdf' todocx = 'NewDoc.docx' wb1 = word.Documents.Open(pdfdoc) wb1.SaveAs(todocx, FileFormat=16) # file format for docx wb1.Close() word.Quit() </code></pre> <p>This opens Word and the following prompt appears:</p> <p><a href="https://i.sstatic.net/Oh3nV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Oh3nV.png" alt="enter image description here" /></a></p> <p>I want to run my script without user intervention, so is it possible to programmatically click the OK button?</p>
<python><ms-word><com><win32com><office-automation>
2024-01-17 12:54:38
1
460
Frank Tap
77,832,460
4,822,772
Converting Matplotlib's filled contour plot (contourf_plot) to GeoJSON
<p>I am working on a project where I have successfully generated filled contour plots using plt.contourf in Matplotlib in a Google Colab environment. Now, I am attempting to convert these filled contour polygons into GeoJSON for seamless integration with Folium.</p> <pre><code>import numpy as np import folium from folium import plugins import matplotlib.pyplot as plt from matplotlib.colors import to_hex, Normalize # Specify the URL of the NetCDF file url = &quot;https://www.star.nesdis.noaa.gov/socd/mecb/sar/AKDEMO_products/APL_winds/tropical/2024/SH052024_BELAL/STAR_SAR_20240116013937_SH052024_05S_MERGED_FIX_3km.nc&quot; # Download the NetCDF file content response = requests.get(url) nc_content = BytesIO(response.content) # Open the NetCDF file using xarray dataset = xr.open_dataset(nc_content) # Access the 'sar_wind' variable sar_wind = dataset['sar_wind'].values # Access the 'latitude' and 'longitude' variables latitude = dataset['latitude'].values longitude = dataset['longitude'].values # Set iso values for filled contours iso_values_filled = np.linspace(np.min(sar_wind), np.max(sar_wind), 11) # One extra value for filling the background # Create a filled contour plot contourf_plot = plt.contourf(longitude, latitude, sar_wind, levels=iso_values_filled, cmap='viridis') # Convert filled contour polygons to GeoJSON geojson_data_polygon = {&quot;type&quot;: &quot;FeatureCollection&quot;, &quot;features&quot;: []} # Normalize iso values for colormap mapping norm = Normalize(vmin=np.min(iso_values_filled), vmax=np.max(iso_values_filled)) for level, collection in zip(iso_values_filled, contourf_plot.collections): for path in collection.get_paths(): coordinates = path.vertices.tolist() # Close the polygon by repeating the first vertex coordinates.append(coordinates[0]) color_hex = to_hex(plt.cm.viridis(norm(level))) # Map normalized iso value to colormap geojson_data_polygon[&quot;features&quot;].append({ &quot;type&quot;: &quot;Feature&quot;, &quot;geometry&quot;: { 
&quot;type&quot;: &quot;Polygon&quot;, &quot;coordinates&quot;: [coordinates] }, &quot;properties&quot;: { &quot;level&quot;: level, &quot;color&quot;: color_hex } }) # Create a Folium map centered on the average latitude and longitude center_lat, center_lon = np.mean(latitude), np.mean(longitude) mymap_polygon = folium.Map(location=[center_lat, center_lon], zoom_start=8) # Add filled contour polygons as GeoJSON overlay with colored areas folium.GeoJson( geojson_data_polygon, style_function=lambda feature: { 'fillColor': feature['properties']['color'], 'color': feature['properties']['color'], 'weight': 2, 'fillOpacity': 0.7 } ).add_to(mymap_polygon) # Display the map with filled contour polygons mymap_polygon </code></pre> <p>Issue:</p> <p>contourf_plot gives this map, which is the desired result. <a href="https://i.sstatic.net/5SOsz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5SOsz.png" alt="enter image description here" /></a></p> <p>but folium gives:</p> <p><a href="https://i.sstatic.net/Wqohv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wqohv.png" alt="enter image description here" /></a></p> <p>We can see that the polygons are not built.</p> <p>Objective: My goal is to convert the filled contour polygons from contourf_plot into GeoJSON format so that I can display them in Folium.</p>
<python><polygon><geojson><contour><contourf>
2024-01-17 12:30:54
1
1,718
John Smith
77,832,448
10,944,175
Solve matrix and vector multiplication with parameters instead of values (preferably in python)
<p>I want to look at some vector operations and see which matrix elements go into which vector, e.g. if I define a matrix with elements</p> <pre><code>mat = [[&quot;a11&quot;, &quot;a12&quot;], [&quot;a21&quot;, &quot;a22&quot;]] </code></pre> <p>and a vector</p> <pre><code>vec = [&quot;v1&quot;, &quot;v2&quot;] </code></pre> <p>then I'm looking for some module / library that gives me the result when I calculate the product:</p> <pre><code>res = mat*vec = [&quot;a11&quot;*&quot;v1&quot; + &quot;a12&quot;*&quot;v2&quot;, &quot;a21&quot;*&quot;v1&quot; + &quot;a22&quot;*&quot;v2&quot;] </code></pre> <p>I know this is easy to do if all the parameters are actual numbers with numpy, and of course I could work this out by hand, but if the operations become more complex it would be nice to have a way to automatically generate the resulting vector as a parameter equation.</p> <p>Bonus points if the equation gets simplified, e.g. if the result has +&quot;a11&quot; - &quot;a11&quot; somewhere, reducing this to 0.</p> <p>Is this at all possible to do in Python? Wolfram Alpha gets me what I'm looking for, but I also need some operations on input data, so a way to do this with a script would be great.</p>
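This is exactly the kind of symbolic computation `sympy` handles; a minimal sketch, with symbol names chosen to mirror the example above:

```python
import sympy as sp

# Symbolic stand-ins for the matrix and vector entries from the question.
a11, a12, a21, a22, v1, v2 = sp.symbols("a11 a12 a21 a22 v1 v2")

mat = sp.Matrix([[a11, a12], [a21, a22]])
vec = sp.Matrix([v1, v2])

# Symbolic product: Matrix([[a11*v1 + a12*v2], [a21*v1 + a22*v2]])
res = mat * vec

# Simplification comes for free: a11*v1 - a11*v1 collapses away.
simplified = sp.simplify(a11 * v1 - a11 * v1 + v2)
```

`sp.simplify` (or `sp.expand`/`sp.cancel`) covers the "bonus points" case of cancelling `+a11 - a11` terms automatically.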
<python><numpy><math><sympy>
2024-01-17 12:28:19
1
549
Freya W
77,832,441
4,626,254
How to Execute Oracle SQL Query with Multiple Database Links in Django?
<p>I'm working on a Django project where I need to execute a complex Oracle SQL query that involves multiple database links. I have already configured the database credentials for both databases in my Django settings, but I'm struggling with how to correctly execute a query that fetches data from different databases through these links.</p> <p>Here's a sample of the type of query I'm trying to execute:</p> <pre><code>SELECT CASE WHEN ACDMH_LOC_CODE LIKE '02%' THEN 'KHOBAR' WHEN ACDMH_LOC_CODE LIKE '03%' THEN 'RIYADH' ELSE 'JEDDAH' END AS Region, ACDMH_INTF_YN, ACDMH_TRANSACTION_TYPE AS TRANSACTION_TYPE, ACDMH_SOURCE_DOC_NO AS SOURCE_DOC_NO, TO_CHAR(ACDMH_TRANSACTION_DATE, 'dd-MM-yyyy') AS TRANSACTION_DATE, ACDMH_CUSTOMER_ID AS CUSTOMER_No, CUSTOMER_NAME, TO_CHAR(ACDMH_CRE_DATE, 'dd-mm-yyyy') AS Pushed_date_WinIT, TO_CHAR(ACDMH_CRE_DATE, 'hh:mi:ss AM') AS Pushed_time_WinIT, ACDMH_INTF_ORA_REF AS ERP_REF, ACDMH_LOC_CODE AS LOC_CODE, ACDMD_ORIGINAL_INVOICE_NO AS ORIGINAL_INVOICE_NO, ACDMD_OLD_VALUE AS fake_PRICE, ACDMD_NEW_VALUE AS selling_price, ACDMD_TOTAL AS tran_value, ACDMD_TAX_AMOUNT AS TAX_AMOUNT FROM AX_CREDIT_DEBIT_MEMO_HEADERS@erpbridge INNER JOIN AX_CREDIT_DEBIT_MEMO_LINES@erpbridge ON ACDMH_HEADER_ID = ACDMD_HEADER_ID LEFT JOIN AXKSA_ORACLE_CUSTOMER ON CUSTOMER_NUMBER = ACDMH_CUSTOMER_ID WHERE [some conditions]; </code></pre> <p>This query involves database links (@erpbridge) to other sources. I'm unsure how to execute such a query in Django, especially considering the database links.</p> <p>I have the following questions:</p> <p>How can I execute this Oracle SQL query in Django, given the use of multiple database links? Are there any specific configurations or considerations in Django for handling Oracle database links? Is using Django's raw SQL query execution the right approach for this, or is there a more efficient method? Any guidance or examples would be greatly appreciated!</p>
<python><django><oracle-database><django-database>
2024-01-17 12:27:10
1
5,266
Underoos
77,832,417
3,747,486
Use Python to call SharePoint API get 401 response while I have the token
<p>I have registered my App in Azure with API permission as below:</p> <p><a href="https://i.sstatic.net/gd6XO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gd6XO.png" alt="enter image description here" /></a></p> <p>Here is my python code.</p> <pre><code>import requests from msal import ConfidentialClientApplication client_id = &quot;xxxxxxxxxxxxxxxxxxxxx&quot; client_secret = &quot;yyyyyyyyyyyyyyyyyyyyy&quot; tenant_id = &quot;tttttttttttttttttttttttttttttttttttttttt&quot; site_url = &quot;https://{mytenent}.sharepoint.com/sites/mysite&quot; resource = &quot;https://{mytenent}&quot; # Authenticate using client ID and secret authority = f&quot;https://login.microsoftonline.com/{tenant_id}&quot; app = ConfidentialClientApplication( client_id=client_id, authority=authority, client_credential=client_secret, ) scope=[resource + &quot;/.default&quot;] token_response = app.acquire_token_for_client(scopes=scope) access_token = token_response.get(&quot;access_token&quot;) #token_response[&quot;access_token&quot;] print(access_token) endpoint_url = f&quot;{site_url}/_api/web&quot; headers = { &quot;Authorization&quot;: &quot;Bearer &quot; + access_token, &quot;Accept&quot;: &quot;application/json;odata=verbose&quot;, &quot;Content-Type&quot;: &quot;application/json;odata=verbose&quot;, } print(endpoint_url) response = requests.get(endpoint_url, headers=headers) # Check for errors in the SharePoint API response if response.status_code == 200: data = response.json() print(&quot;SharePoint Site Title:&quot;, data[&quot;d&quot;][&quot;Title&quot;]) else: print(&quot;SharePoint API Error:&quot;) print(&quot;Status Code:&quot;, response.status_code) print(&quot;Response:&quot;, response.text) </code></pre> <p>By the code print(access_token) I can get the token string so I think I do it right on get a token. 
But I got a 401 response when calling the SharePoint API.</p> <blockquote> <p>Status Code: 401 Response: {&quot;error_description&quot;:&quot;ID3035: The request was not valid or is malformed.&quot;}</p> </blockquote> <p>What could be the problem? Thanks for your advice.</p>
<python><azure><azure-active-directory><azure-web-app-service><sharepoint-rest-api>
2024-01-17 12:22:37
1
326
Mark
77,832,241
17,471,060
Polars dataframe sorting based on absolute value of a column
<p>I would like to sort a polars dataframe based on absolute value of a column in either ascending or descending order. It is easy to do in <code>Pandas</code>, or using <code>sorted</code> function in python. Let's say I want to sort based on <code>val</code> column in the below dataframe.</p> <pre><code>import numpy as np np.random.seed(42) import polars as pl df = pl.DataFrame({ &quot;name&quot;: [&quot;one&quot;, &quot;one&quot;, &quot;one&quot;, &quot;two&quot;, &quot;two&quot;, &quot;two&quot;], &quot;id&quot;: [&quot;C&quot;, &quot;A&quot;, &quot;B&quot;, &quot;B&quot;, &quot;C&quot;, &quot;C&quot;], &quot;val&quot;: np.random.randint(-10, 10, 6) }) </code></pre> <p>Returns:</p> <pre><code>┌──────┬─────┬─────┐ │ name ┆ id ┆ val │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i32 │ ╞══════╪═════╪═════╡ │ one ┆ C ┆ -4 │ │ one ┆ A ┆ 9 │ │ one ┆ B ┆ 4 │ │ two ┆ B ┆ 0 │ │ two ┆ C ┆ -3 │ │ two ┆ C ┆ -4 │ └──────┴─────┴─────┘ </code></pre> <p>Thanks!</p>
<python><dataframe><python-polars>
2024-01-17 11:56:01
2
344
beta green
77,832,227
10,122,822
Pyautogui/win32api scrolling does nothing when called per hotkey
<p>The scroll function on pyautogui doesn't work on my system. I have Python <code>3.8.10</code> installed and the latest version of <code>pyautogui</code>. The code I want to run is pretty simple:</p> <pre><code>import pyautogui import keyboard def scroll(): scrollAmount = 100 pyautogui.scroll(scrollAmount) print(&quot;Scrolling...&quot;) keyboard.add_hotkey('shift+q', scroll) keyboard.wait('esc') </code></pre> <p>I tried setting the <code>scrollAmount</code> to a much higher number like <code>1000</code> or <code>-1000</code> without success. I tried scrolling different apps like Chrome or just a cmd terminal without success.</p> <p>Is there something I am missing? Or should I use a different package to scroll?</p> <p>Edit:</p> <p>I also tried scrolling with <code>win32api</code>, but without success:</p> <pre><code>import pyautogui import keyboard import win32api from win32con import * def scroll(): win32api.mouse_event(MOUSEEVENTF_WHEEL, 0, 0, -1000, 0) print(&quot;Scrolling...&quot;) keyboard.add_hotkey('shift+q', scroll) keyboard.wait('esc') </code></pre>
<python><winapi><pyautogui>
2024-01-17 11:54:34
0
748
sirzento
77,832,216
8,076,158
Typing Numba functions using classes and dictionaries
<p>This is a simplified version of my program:</p> <pre><code>import numpy as np from numba import types, typed, njit @njit(signature_or_function=types.unicode_type(types.unicode_type)) def my_func(symbol): return &quot;*&quot; + symbol + &quot;*&quot; @njit def run_numba(symbols): for symbol in symbols: result = my_func(symbol=symbol) # set symbol to be a literal = &quot;a&quot; print(result) symbols = np.array([&quot;a&quot;, &quot;b&quot;], dtype=&quot;&lt;U2&quot;) run_numba(symbols) </code></pre> <p>With the following error:</p> <pre><code>Traceback (most recent call last): File &quot;/home/me/.pycharm_helpers/pydev/pydevconsole.py&quot;, line 364, in runcode coro = func() ^^^^^^ File &quot;&lt;input&gt;&quot;, line 17, in &lt;module&gt; File &quot;/home/me/.pyenv/versions/3.11.1/envs/my_env/lib/python3.11/site-packages/numba/core/dispatcher.py&quot;, line 468, in _compile_for_args error_rewrite(e, 'typing') File &quot;/home/me/.pyenv/versions/3.11.1/envs/my_env/lib/python3.11/site-packages/numba/core/dispatcher.py&quot;, line 409, in error_rewrite raise e.with_traceback(None) numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend) Invalid use of type(CPUDispatcher(&lt;function my_func at 0x7f172e1499e0&gt;)) with parameters (symbol=[unichr x 2]) Known signatures: * (unicode_type,) -&gt; unicode_type During: resolving callee type: type(CPUDispatcher(&lt;function my_func at 0x7f172e1499e0&gt;)) During: typing of call at &lt;input&gt; (13) File &quot;&lt;input&gt;&quot;, line 13: &lt;source missing, REPL/exec in use?&gt; </code></pre> <p>In the commented line, if I use a string literal in place of the <code>symbol</code> variable, it works. How can I fix this code?</p>
<python><numba>
2024-01-17 11:52:41
2
1,063
GlaceCelery
77,832,160
1,711,271
Read multiple csv files in a pandas dataframe
<p>I have a set of text files like these:</p> <p>file corresponding to ID <code>A</code></p> <pre><code>&lt;empty line&gt; #----------------------------------------- # foo bar baz #----------------------------------------- 0.0120932 1.10166 1.08745 0.0127890 1.10105 1.08773 0.0142051 1.09941 1.08760 0.0162801 1.09662 1.08548 0.0197376 1.09170 1.08015 </code></pre> <p>file corresponding to ID <code>B</code></p> <pre><code>&lt;empty line&gt; #----------------------------------------- # foo bar baz #----------------------------------------- 0.888085 0.768590 0.747961 0.893782 0.781607 0.760417 0.899830 0.797021 0.771219 0.899266 0.799260 0.765859 0.891489 0.781255 0.728892 </code></pre> <p>etc.</p> <p>I want to read all of them into a <code>pandas</code> dataframe, whose columns are <code>['ID', 'foo', 'bar', 'baz']</code>. The ID doesn't correspond to the file name, but this is a detail: just imagine there's a mapping available (e.g., a dictionary) from file names to IDs.</p> <p>Usually, when I want to create a dataframe row by row, I read each row as a dictionary and append to a list, then convert the list of dictionaries to a dataframe. However, this trick doesn't work here, because each dictionary contains more than one row. How can I solve this? Concatenating multiple dataframes is quite slow, so I'd rather have a more performing solution.</p> <p>EDIT: this is the pandas dataframe I want as an output (example based on the two files above);</p> <pre><code> ID foo bar baz 0 A 0.012093 1.10166 1.08745 1 A 0.012789 1.10105 1.08773 2 A 0.014205 1.09941 1.08760 3 A 0.016280 1.09662 1.08548 4 A 0.019738 1.09170 1.08015 5 B 0.888085 0.768590 0.747961 6 B 0.893782 0.781607 0.760417 7 B 0.899830 0.797021 0.771219 8 B 0.899266 0.799260 0.765859 4 B 0.891489 0.781255 0.728892 </code></pre>
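A common pattern that avoids repeated concatenation: read each file into its own frame, tag it with the ID, and call `pd.concat` once at the end. A sketch, using `io.StringIO` objects as stand-ins for the real files (in practice `id_map` would map paths to IDs):

```python
import io
import pandas as pd

# Hypothetical mapping; real code would map file paths to IDs.
id_map = {
    "A": io.StringIO(
        "\n#----\n# foo bar baz\n#----\n"
        "0.0120932 1.10166 1.08745\n0.0127890 1.10105 1.08773\n"
    ),
    "B": io.StringIO(
        "\n#----\n# foo bar baz\n#----\n"
        "0.888085 0.768590 0.747961\n"
    ),
}

frames = []
for file_id, fh in id_map.items():
    df = pd.read_csv(
        fh,
        sep=r"\s+",            # whitespace-separated columns
        comment="#",           # drops the '#' header/separator lines
        header=None,
        names=["foo", "bar", "baz"],
    )
    df.insert(0, "ID", file_id)  # tag every row of this file with its ID
    frames.append(df)

# One concat at the end is far cheaper than concatenating inside the loop.
result = pd.concat(frames, ignore_index=True)
```

`comment="#"` plus the default `skip_blank_lines=True` disposes of the empty first line and the `#` banner without any manual preprocessing.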
<python><pandas><csv><row>
2024-01-17 11:42:24
1
5,726
DeltaIV
77,831,986
5,868,293
How to get rows with consecutive dates with groupby in pandas
<p>I have the following pandas dataframe</p> <pre><code>import pandas as pd pd.DataFrame({ 'id': [1,1,1,1,1, 2,2,2,2,2, 3,3,3,3,3], 'week': ['2022-W9','2022-W10', '2022-W11', '2022-W15', '2022-W17', '2022-W10','2022-W11', '2022-W15', '2022-W19', '2022-W24', '2022-W1','2022-W3', '2022-W19', '2022-W20', '2022-W42'] }) id week 0 1 2022-W9 1 1 2022-W10 2 1 2022-W11 3 1 2022-W15 4 1 2022-W17 5 2 2022-W10 6 2 2022-W11 7 2 2022-W15 8 2 2022-W19 9 2 2022-W24 10 3 2022-W1 11 3 2022-W3 12 3 2022-W19 13 3 2022-W20 14 3 2022-W42 </code></pre> <p>I would like to get only the rows that have consecutive weeks, by <code>id</code>.</p> <p>The output should be this</p> <pre><code>pd.DataFrame({ 'id': [1,1,1, 2,2, 3,3], 'week': ['2022-W9','2022-W10', '2022-W11', '2022-W10','2022-W11', '2022-W19', '2022-W20'] }) id week 0 1 2022-W9 1 1 2022-W10 2 1 2022-W11 3 2 2022-W10 4 2 2022-W11 5 3 2022-W19 6 3 2022-W20 </code></pre> <p>How could I do that ?</p>
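One vectorised sketch: extract the week number, then keep rows whose gap to the previous or next row within the same `id` is exactly one week. This deliberately ignores year boundaries, which the sample data never crosses:

```python
import pandas as pd

df = pd.DataFrame({
    'id': [1,1,1,1,1, 2,2,2,2,2, 3,3,3,3,3],
    'week': ['2022-W9','2022-W10','2022-W11','2022-W15','2022-W17',
             '2022-W10','2022-W11','2022-W15','2022-W19','2022-W24',
             '2022-W1','2022-W3','2022-W19','2022-W20','2022-W42'],
})

# Numeric week: '2022-W9' -> 9
week_num = df['week'].str.split('-W').str[1].astype(int)

# Gap to the previous / next row, computed per id so groups don't leak.
prev_gap = week_num.groupby(df['id']).diff()      # current - previous
next_gap = week_num.groupby(df['id']).diff(-1)    # current - next

# A row is "consecutive" if either neighbour is exactly one week away.
out = df[prev_gap.eq(1) | next_gap.eq(-1)].reset_index(drop=True)
```

Using `GroupBy.diff` on the week numbers keeps everything in pandas; no Python-level loop over the groups is needed.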
<python><pandas>
2024-01-17 11:13:00
1
4,512
quant
77,831,751
3,727,079
How can I merge two dataframes, keeping overlapping values and NaN for other values, using only timestamps from the first dataframe?
<p>I've got two dataframes, one of which is a timestamp, and the other also has timestamps, but has gaps in it. The two dataframes have some overlap in timestamps, and some not. MWE:</p> <p>This code creates the first dataframe:</p> <pre><code>daterange = pd.date_range(start='1/1/2023 09:30:00', end='1/3/2023 09:35:00', freq = 'min') daterange_keep = (pd.DatetimeIndex(pd.to_datetime(daterange)) .indexer_between_time('09:30', '09:35') ) firstdf= pd.DataFrame(daterange[daterange_keep]) firstdf.columns = ['timestamp'] firstdf </code></pre> <p>This creates the following dataframe, with times from 9:30 to 9:33, 1 Jan to 3 Jan:</p> <p><a href="https://i.sstatic.net/i6k5z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/i6k5z.png" alt="enter image description here" /></a></p> <p>The second dataframe looks like this:</p> <pre><code>seconddf = pd.DataFrame({'timestamp': ['2023-01-01 09:30:00', '2023-01-01 09:32:00', '2023-01-01 09:34:00', '2023-02-01 09:30:00'], 'value': [3,5,7,9]}) seconddf </code></pre> <p><a href="https://i.sstatic.net/yTVL1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yTVL1.png" alt="enter image description here" /></a></p> <p>I want to merge the two dataframes, keeping all the timestamps in the first dataframe and inserting NaNs for the missing data from the second dataframe, and dropping all the data in the second frame that isn't in the first frame. The desired output is:</p> <p><a href="https://i.sstatic.net/7SANV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7SANV.png" alt="enter image description here" /></a></p> <p>What is the best way to do this? (Ideally I'll also be able to rename the 'value' column but I assume I can do that independently of the merge.)</p> <p>The obvious way appears to be <code>firstdf.merge(seconddf, how = 'inner')</code>, but this yields an error that says I should use <code>concat</code> instead, and I can't figure out how <code>concat</code> can achieve this merge.</p>
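The merge error here is most likely a dtype mismatch: `firstdf['timestamp']` is datetime64 while `seconddf['timestamp']` holds strings. Converting first and then doing a *left* merge (which keeps exactly the first frame's rows, filling NaN elsewhere and dropping second-frame-only rows) appears to produce the desired output; a sketch on a shortened version of the data:

```python
import pandas as pd

firstdf = pd.DataFrame({'timestamp': pd.to_datetime([
    '2023-01-01 09:30:00', '2023-01-01 09:31:00',
    '2023-01-01 09:32:00', '2023-01-01 09:33:00',
])})

seconddf = pd.DataFrame({
    'timestamp': ['2023-01-01 09:30:00', '2023-01-01 09:32:00',
                  '2023-01-01 09:34:00', '2023-02-01 09:30:00'],
    'value': [3, 5, 7, 9],
})

# Make the key dtypes agree before merging.
seconddf['timestamp'] = pd.to_datetime(seconddf['timestamp'])

# how='left': keep every firstdf row, NaN where seconddf has no match,
# and silently drop seconddf rows absent from firstdf. Renaming chains on.
merged = (firstdf
          .merge(seconddf, on='timestamp', how='left')
          .rename(columns={'value': 'renamed'}))
```

`how='inner'` would instead drop the firstdf timestamps with no match, which is why it could not produce the NaN rows shown in the desired output.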
<python><pandas><dataframe>
2024-01-17 10:38:00
1
399
Allure
77,831,701
13,775,842
save data in a class object with multiprocess python
<p>I am learning multiprocessing, and I want to create an array of 20 class objects and save data to each object with multiprocessing.</p> <p>When I try to print the data, the data is empty.</p> <p>I'm trying to figure out why this happens, but I'm struggling to see what the problem is, because when I run with the debugger, the code enters the right places and appends the data, yet when the process finishes the data is empty.</p> <p>Here is my code:</p> <pre class="lang-py prettyprint-override"><code>import time from multiprocessing import Process class AppendInProcess: def __init__(self): self.list = [] def appendToObj(self, value): self.list.append(value) def process_function(obj, value): obj.appendToObj(value) if __name__ == '__main__': processes = [] objects = [] time_start = time.time() for i in range(20): obj = AppendInProcess() objects.append(obj) p = Process(target=process_function, args=(obj, i)) processes.append(p) p.start() for process in processes: process.join() time_end = time.time() print(f&quot;Time taken: {time_end - time_start} seconds&quot;) for obj in objects: print(obj.list) </code></pre> <p>I run over an array of objects and call a process that adds data to each one.</p>
<python><python-multiprocessing>
2024-01-17 10:28:27
1
483
Meyer Buaharon
77,831,687
6,243,534
How to find valleys and intersections in a distance field?
<p>I have a 2d array representing height. It contains peaks, and then a linear gradient in between the peaks.</p> <p>I am generating the array myself, based on the positions of the peaks. The gradient is simply the distance to the nearest peak. So the resolution of the actual array is higher than that of the posted images.</p> <p>(Originally I only posted white/blue images, I've left the rest of them as is)</p> <p>Original image (visualisation of the array):<br /> <a href="https://i.sstatic.net/a4syQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/a4syQ.png" alt="enter image description here" /></a></p> <p>I would like to find the valleys and their intersections, as a set of line segments, visualised as below.</p> <p><a href="https://i.sstatic.net/EBh0Y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EBh0Y.png" alt="enter image description here" /></a></p> <p>I'm aware of various approaches to finding the minima, but no way to explicitly figuring out the exact intersections and the line segments in between them.</p> <p>I've been working on this for a large number of hours but haven't got anywhere. I've tried things like below (arr is the array):</p> <pre><code>minima = (arr != scipy.ndimage.minimum_filter(arr, size=10, mode='constant', cval=0.0)) modified_arr = np.where(minima, arr, 1500) </code></pre> <p>and I've also tried iterating explicitly over rows/columns separately, with similar code to above. Both of these approaches are 'spotty', they identify a lot of the valleys when plotted, but they're not fully connected line segments.</p> <p>Crucially, what I haven't worked out how to do is actually define where the intersections are and the line segments in between them formed by the valleys. 
Can anyone help point me in the right direction?</p> <p>I'd like to also know the actual depth value of the intersection points as well if possible.</p> <p>Mathematically, I know that the intersections are defined by points which are an equal distance from three peaks, and the valleys are defined by points which are an equal distance from two peaks.</p> <p><strong>TL;DR</strong> - I want to find the intersections/valleys as a set of line segments, can anyone help?</p> <p><strong>Update</strong><br /> I've got part of the way. It's a bit rough, but the below does find all the intersections and valleys, except there are extra points found. This is because the only way I found which would consistently discover all the intersections/valleys was to allow a margin. Also, there's an extra intersection and valley in the top left, I forgot to indicate those in the original picture, there is actually a second peak directly beside another one.` I had to play around with the 'magic numbers' at the top to find a happy point where all the intersections/valleys are found but not too many points are included.</p> <p>My current idea for moving forwards is to apply K-means to the intersections to find the centroids of those points. For the valleys I'm not so sure, possibly look at defining the angles that nearby valley points are at in relation to intersections, and then finding which intersection lies on the same angle. I also thought of limiting the discovered valley points to those within a certain distance of an intersection, which would possibly make K-means appropriate for those as well.</p> <p>If anyone has anything to add I'd appreciate it. 
Otherwise I'll post further results when I get there.</p> <pre><code>max_int_diff = 0.85 # max intersection difference max_valley_diff = 0.45 # max valley difference max_edge_dist = 2 # max distance from edge of graph intersections = [] valleys = [] for i in range(arr.shape[1]): for j in range(arr.shape[0]): dists = [(hypot((i - node.x), (j - node.y)), node) for node in peaks.values()] closest = sorted(dists, key=lambda _x: _x[0]) # intersections diff_1_2 = abs(closest[0][0] - closest[1][0]) diff_1_3 = abs(closest[0][0] - closest[2][0]) diff_2_3 = abs(closest[1][0] - closest[2][0]) if diff_1_3 &lt;= max_int_diff and diff_2_3 &lt;= max_int_diff and diff_1_2 &lt;= max_int_diff: intersections.append((i, j)) # edge of graph valleys bound the graph, so they are counted as intersections elif diff_1_2 &lt;= max_valley_diff \ and (i &lt;= max_edge_dist or i &gt;= array_size - max_edge_dist or j &lt;= max_edge_dist or j &gt;= array_size - max_edge_dist): intersections.append((i, j)) # valleys elif diff_1_2 &lt;= max_valley_diff: valleys.append((i, j)) </code></pre> <p><a href="https://i.sstatic.net/SUdKp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SUdKp.png" alt="enter image description here" /></a></p> <p><strong>Update</strong><br /> I've now managed to use <code>ndimage.minimum_filter</code> to accurately find the majority of the valley points :) I combined individual axis processing to get the below result. Still missing some points though. Any thoughts on how I can improve this would be appreciated. 
I've tried changing the size up/down but this is the best result I can get.<br /> My current idea to find the missing points is to add two more filters that look at the two 45 degree axes, as it seems to be the angle of the valleys which is affecting it.</p> <pre><code># using scipy.ndimage.minimum_filter minima_0 = (arr != minimum_filter(arr, axes=0, size=10, mode='constant', cval=20.0)) minima_1 = (arr != minimum_filter(arr, axes=1, size=10, mode='constant', cval=20.0)) # return a masked arr, where minima have been replaced with a value arr_0 = np.where(minima_0, arr, 50) arr_1 = np.where(minima_1, arr, 50) mg = arr_0 + arr_1 </code></pre> <p><a href="https://i.sstatic.net/CmkHw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CmkHw.png" alt="enter image description here" /></a></p> <p><strong>Update</strong><br /> I've posted an answer. The real issue was my own lack of knowledge in mathematics, due to having never studied it formally beyond high school. Initially I was overcomplicating it by presenting it as an image processing problem, even though my actual data was just an array of points.</p> <p>Hopefully this all might help someone else!</p>
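Since the field is "distance to the nearest peak", the valleys are exactly the edges of the Voronoi diagram of the peaks, and the intersections are its vertices (points equidistant from three or more peaks), which `scipy.spatial.Voronoi` computes directly from the peak coordinates; a sketch with made-up peak positions:

```python
import numpy as np
from scipy.spatial import Voronoi

# Hypothetical peak coordinates; real code would use the actual peaks array.
peaks = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 4.0], [6.0, 3.0], [3.0, -2.0]])

vor = Voronoi(peaks)

# vor.vertices: the valley intersections (equidistant from >= 3 peaks).
# vor.ridge_vertices: index pairs into vor.vertices; -1 marks a ray to infinity.
segments = [
    (vor.vertices[i], vor.vertices[j])
    for i, j in vor.ridge_vertices
    if i != -1 and j != -1
]

# The "depth" at each intersection is its distance to the nearest peak.
depths = np.min(
    np.linalg.norm(peaks[None, :, :] - vor.vertices[:, None, :], axis=2),
    axis=1,
)
```

This sidesteps the raster entirely: no minimum filters or tolerance margins are needed, the segments come out exact, and the infinite ridges (`-1` entries) can be clipped against the array bounds to recover the edge-of-graph "intersections".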
<python><geometry><distance><voronoi><delaunay>
2024-01-17 10:26:30
2
306
LangerNZ
77,831,677
6,312,979
Django Weasyprint PDF output into Postgres Database
<p>I am trying to save a WeasyPrint HTML PDF into the model, using Django.</p> <p>I can generate the PDF file fine.</p> <pre><code>pdf_file = HTML(string=html,base_url=base_url).write_pdf(stylesheets, [css_path],font_config=font_config) </code></pre> <p>Now what is the best way to convert the pdf_file output to a format I can use for the database?</p> <pre><code>class Reports(models.Model): pdf = models.FileField( upload_to='media/pdfs/', max_length=254, ) created = models.DateField(auto_now=False, auto_now_add=True) </code></pre> <p>If I do something like this it creates the file automatically before the Model.</p> <pre><code> fs = FileSystemStorage() report = fs.save(f'pdfs/{uuid}.pdf', File(BytesIO(pdf_file)) ) </code></pre> <p>And gives the table a 'pdf/...' in the database.</p> <p>What is the most efficient way to convert the WeasyPrint HTML into a format the Model can use and let the model save the file?</p>
<python><django><pdf><weasyprint>
2024-01-17 10:25:30
1
2,181
diogenes
77,831,497
4,928,212
Pandas merge dataframes with the same columns and one varying
<p>Probably already asked before, but I cannot find it even after 30 mins of searching.</p> <p>I have two pandas dataframes with the same columns. The values match except for one column and I want to perform a full outer join, where I get both values if both are there and only one value if one of them is present. There are many matching columns, so I would prefer a solution where I do not have to apply something for each matching column.</p> <p>Example: All columns are the same if the value is in both df, only the frequency varies:</p> <pre><code> Gene GeneID Frequency 0 AA 1 10 1 BB 2 15 2 CC 3 12 Gene GeneID Frequency 0 AA 1 20 1 DD 4 29 </code></pre> <p>Code:</p> <pre><code>import pandas as pd t1 = [{&quot;Gene&quot;: &quot;AA&quot;, &quot;GeneID&quot;: &quot;1&quot; , &quot;Frequency&quot;: 10}, {&quot;Gene&quot;: &quot;BB&quot;, &quot;GeneID&quot;: &quot;2&quot; , &quot;Frequency&quot;: 15}, {&quot;Gene&quot;: &quot;CC&quot;, &quot;GeneID&quot;: &quot;3&quot; , &quot;Frequency&quot;: 12}] t2 = [{&quot;Gene&quot;: &quot;AA&quot;, &quot;GeneID&quot;: &quot;1&quot; , &quot;Frequency&quot;: 20}, {&quot;Gene&quot;: &quot;DD&quot;, &quot;GeneID&quot;: &quot;4&quot; , &quot;Frequency&quot;: 29}] f1 = pd.DataFrame(t1) f2 = pd.DataFrame(t2) m = pd.merge(f1,f2,on=['Gene','Gene'],how='outer') </code></pre> <p>Results in:</p> <pre><code> Gene GeneID_x Frequency_x GeneID_y Frequency_y 0 AA 1 10.0 1 20.0 1 BB 2 15.0 NaN NaN 2 CC 3 12.0 NaN NaN 3 DD NaN NaN 4 29.0 </code></pre> <p>Now the ID is either in GeneID_x or GeneID_y. I would like the following:</p> <pre><code> Gene GeneID Frequency_x Frequency_y 0 AA 1 10.0 20.0 1 BB 2 15.0 NaN 2 CC 3 12.0 NaN 3 DD 4 NaN 29.0 </code></pre> <p>Of course I can iterate and fill the GeneID where needed, but there are many more columns that match. There has to be a better solution. I also tried concat with group by and aggregate. 
This works; however, I cannot see whether the frequency comes from the first or the second df when there is only one value.</p> <p>Thanks.</p>
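It appears the desired output falls out of merging on *all* the shared key columns at once (everything except `Frequency`), so only the varying column gets suffixed and no per-column fix-up is needed afterwards; a sketch:

```python
import pandas as pd

f1 = pd.DataFrame([{"Gene": "AA", "GeneID": "1", "Frequency": 10},
                   {"Gene": "BB", "GeneID": "2", "Frequency": 15},
                   {"Gene": "CC", "GeneID": "3", "Frequency": 12}])
f2 = pd.DataFrame([{"Gene": "AA", "GeneID": "1", "Frequency": 20},
                   {"Gene": "DD", "GeneID": "4", "Frequency": 29}])

# Use every matching column as the join key; only Frequency is left to suffix.
keys = [c for c in f1.columns if c != "Frequency"]
m = pd.merge(f1, f2, on=keys, how="outer")
```

Building `keys` from the columns keeps this working however many matching columns there are; `pd.merge(..., indicator=True)` additionally records whether each row came from the left, right, or both frames, which addresses the "which df did this frequency come from" concern with the concat/groupby approach.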
<python><pandas><merge><outer-join>
2024-01-17 09:59:50
1
340
mnzl
77,831,408
2,386,113
How to change the default device in cuPy?
<p>I am using cuPy in a Python program to perform computations on GPU. My program consists of several functions/classes spread across multiple files.</p> <p>I am using a GPU cluster (NVIDIA V100), consisting of four GPUs.</p> <p>How can I select a particular GPU as the default GPU for the whole program? I found some information in the <a href="https://docs.cupy.dev/en/stable/reference/generated/cupy.cuda.Device.html#cupy.cuda.Device.use" rel="nofollow noreferrer"><strong>cuPy documentation</strong></a> about the <code>use</code> method, but since there is no example, I am not sure how to utilize it.</p> <p><strong>MWE:</strong></p> <blockquote> <p>maths_ops.py:</p> </blockquote> <pre><code>import cupy as cp class MathsOps: def vector_addition(self, vector1, vector2): return cp.add(vector1, vector2) </code></pre> <blockquote> <p>dummy_class.py:</p> </blockquote> <pre><code>from maths_ops import MathsOps class DummyClass: def __init__(self, maths_ops_instance): self.maths_ops_instance = maths_ops_instance def perform_operation(self, vector1, vector2): result = self.maths_ops_instance.vector_addition(vector1, vector2) return result </code></pre> <blockquote> <p>main.py:</p> </blockquote> <pre><code>import cupy as cp from maths_ops import MathsOps from dummy_class import DummyClass # Create an instance of MathsOps maths_ops_instance = MathsOps() # Create an instance of DummyClass with the MathsOps instance dummy_instance = DummyClass(maths_ops_instance) # Example vectors vector1 = cp.array([1, 2, 3]) vector2 = cp.array([4, 5, 6]) # Perform vector addition using DummyClass result = dummy_instance.perform_operation(vector1, vector2) # Print the result print(&quot;Vector Addition Result:&quot;, result) </code></pre> <p><strong>Question:</strong> How do I specify a GPU device (for example, device-1) in the <code>main</code> function as the default GPU device that should be used by the program?</p>
<python><gpu><cupy>
2024-01-17 09:48:33
2
5,777
skm
77,831,293
1,517,918
How to “template” a registry class that uses __new__ as a factory?
<p>I wrote a class <code>BaseRegistry</code> that uses a classmethod as a decorator to register other classes with a string name in a class attribute dictionary <code>registered</code>.</p> <p>This dictionary is used to return the class associated with the string name given to its <code>__new__</code> method.</p> <p>This works quite well, but now I would like to create several Registries of this kind without duplicating code. Of course, if I inherit from <code>BaseRegistry</code>, the dictionary is shared between all subclasses, which is not what I want.</p> <p>I cannot figure out how to achieve this “templating”.</p> <p>Below is a code example:</p> <pre class="lang-py prettyprint-override"><code>class BaseRegistry: registered = {} @classmethod def add(cls, name): def decorator(function): cls.registered[name] = function return function return decorator def __new__(cls, name): return cls.registered[name] class RegistryOne(BaseRegistry): pass class RegistryTwo(BaseRegistry): pass @RegistryOne.add(&quot;the_one&quot;) class Example1: def __init__(self, scale_factor=1): self.scale_factor = scale_factor @RegistryOne.add(&quot;the_two&quot;) class Example2: def __init__(self, scale_factor=2): self.scale_factor = scale_factor if __name__ == &quot;__main__&quot;: the_one = RegistryOne(&quot;the_one&quot;)() print(f&quot;{the_one.scale_factor=}&quot;) assert RegistryOne.registered != RegistryTwo.registered </code></pre> <p>Of course, I would like to find a solution that makes this code work, but I am also open to any alternative implementation of <code>BaseRegistry</code> that would achieve the same goal.</p> <p>EDIT:</p> <p>With this implementation, Pylint complains about the line:</p> <pre class="lang-py prettyprint-override"><code>the_one = RegistryOne(&quot;the_one&quot;)() </code></pre> <p>With this message:</p> <blockquote> <p>E1102: RegistryOne('the_one') is not callable (not-callable)</p> </blockquote>
<python><metaclass>
2024-01-17 09:31:48
3
1,240
PhML
77,831,093
8,356,936
How to do TRY_CAST in Snowpark?
<p>I am trying <code>try_cast</code> on this column <code>df.select(split(col(&quot;spc_value&quot;),lit(&quot;~&quot;))[0])</code>, which has the value <code>&quot;1.419000&quot;</code> <a href="https://i.sstatic.net/E8zSl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E8zSl.png" alt="enter image description here" /></a>, with</p> <pre><code>from snowflake.snowpark.types import StructType, StructField, StringType, IntegerType, FloatType, DecimalType, DoubleType, VariantType from snowflake.snowpark.functions import col,lit,split,try_cast </code></pre> <p><a href="https://docs.snowflake.com/ko/developer-guide/snowpark/reference/python/latest/api/snowflake.snowpark.functions.try_cast" rel="nofollow noreferrer">functions.try_cast</a></p> <pre><code>df.select(try_cast(split(col(&quot;spc_value&quot;),lit(&quot;~&quot;))[0],DoubleType())) </code></pre> <p>and the column method <a href="https://docs.snowflake.com/ko/developer-guide/snowpark/reference/python/latest/api/snowflake.snowpark.Column.try_cast" rel="nofollow noreferrer">Column.try_cast</a></p> <pre><code>df.select(split(col(&quot;spc_value&quot;),lit(&quot;~&quot;))[0].try_cast(DoubleType())) </code></pre> <p>Both attempts return <a href="https://i.sstatic.net/GgN2M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GgN2M.png" alt="enter image description here" /></a></p> <blockquote> <p>SQL compilation error: Function TRY_CAST cannot be used with arguments of types VARIANT and FLOAT</p> </blockquote> <p>What am I missing to make <code>TRY_CAST</code> work here?</p>
<python><sql><casting><snowflake-cloud-data-platform>
2024-01-17 08:56:26
1
418
axiom
77,830,911
10,353,865
Function which changes a dataframe inplace applied to slices of the dataframe
<p>Suppose I have a function which takes as input a dataframe d and which alters the dataframe <strong>inplace</strong>. In the example below the function writes the number 9 to the first two rows.</p> <p>Suppose I wanted to use the same function to achieve the following: (a) write 9 to the first two rows - but only to the &quot;x&quot; column (b) write 9 to the second and third rows</p> <p>In each case, I would find it very convenient if I could just pass an appropriate slice to the function, e.g. in case (a) call the function with d.loc[:,[&quot;x&quot;]] instead of using the whole dataframe.</p> <p>However, I noticed that the function will not change the original dataframe. (see below)</p> <p>My question: How can I modify the code within the function to achieve this goal? Note that I don't want to create copies of the dataframe (every change should be inplace) and note that I do not want to expand the function signature.</p> <pre><code>import pandas as pd #toy dataframe with two columns df = pd.DataFrame({&quot;x&quot;: [1,3,4,5], &quot;y&quot;: [4,3,2,3]}) # This function changes the first two rows of the input dataframe inplace def change(d): n = len(d) d.loc[[True, True] + [False]*(n-2)] = 9 return d change(df) df # df has changed #Recreate the original df df = pd.DataFrame({&quot;x&quot;: [1,3,4,5], &quot;y&quot;: [4,3,2,3]}) # Create a reference to a portion of df df_ref = df.loc[:,[&quot;x&quot;]] # Call change with the reference change(df_ref) df # has not changed </code></pre> <p><strong>EDIT</strong></p> <p>For clarification, I am asking only the following: If a function takes as input a dataframe and changes this dataframe inplace, how can I ensure that it also works on parts of the dataframe? (that is, I do not provide the dataframe as an argument, but only a subset of rows/columns as the argument). And by &quot;works&quot; I mean that although I provided only a subset of rows/columns it changed the original dataframe in the respective rows/columns.</p>
<python><pandas><slice>
2024-01-17 08:25:48
0
702
P.Jo
77,830,728
11,942,410
Python: append the value of created_date based on conditions
<p>I am trying to write a function which will create a new column called <code>last_created_date</code> and append the value in that column according to the conditions below.</p> <p>This is the data:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>rounded_geo_lat</th> <th>rounded_geo_lng</th> <th>created_date</th> <th>distance_1_lat_lng</th> <th>distance_2_lat_lng</th> <th>distance_3_lat_lng</th> <th>distance_4_lat_lng</th> <th>distance_5_lat_lng</th> <th>last_created_date</th> </tr> </thead> <tbody> <tr> <td>26.11</td> <td>74.26</td> <td>16-01-2024 11:29</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>null</td> </tr> <tr> <td>25.77</td> <td>73.66</td> <td>16-01-2024 12:29</td> <td>70.91359357</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>16-01-2024 13:29</td> </tr> <tr> <td>25.23</td> <td>73.23</td> <td>16-01-2024 13:29</td> <td>142.2333872</td> <td>0</td> <td>NaN</td> <td>NaN</td> <td>NaN</td> <td>16-01-2024 15:20</td> </tr> <tr> <td>24.67</td> <td>72.94</td> <td>16-01-2024 14:29</td> <td>207.8935555</td> <td>142.1494871</td> <td>0</td> <td>NaN</td> <td>NaN</td> <td>16-01-2024 15:43</td> </tr> <tr> <td>24.41</td> <td>72.65</td> <td>16-01-2024 15:20</td> <td>248.8830913</td> <td>182.2445736</td> <td>0</td> <td>0</td> <td>NaN</td> <td>16-01-2024 15:43</td> </tr> <tr> <td>24.41</td> <td>72.65</td> <td>16-01-2024 15:43</td> <td>248.8830913</td> <td>182.2445736</td> <td>108.3518041</td> <td>4.1</td> <td>0</td> <td></td> </tr> <tr> <td>24.21</td> <td>72.27</td> <td>16-01-2024 17:28</td> <td>291.1047773</td> <td>222.9644721</td> <td>149.2166506</td> <td>84.94979054</td> <td>44.46789779</td> <td></td> </tr> </tbody> </table> </div> <p>Over here I want to derive the <code>last_created_date</code> column per rake.
Every pair of <code>rounded_geo_lat, rounded_geo_lng</code> corresponds to a <code>distance</code> column: the first pair will have <code>distance_1_lat_lng</code>, the 2nd pair will have <code>distance_2_lat_lng</code>, the 3rd pair will have <code>distance_3_lat_lng</code>, and so on.</p> <p>Note: when iterating the distance column corresponding to a lat,lng pair we always exclude the self value in the distance column and go down to the last value of that rake device.</p> <p>Now, case 1: while iterating down through the distance column, if there is no value less than <code>5 (&lt;5)</code> then last_created_date will be <code>null</code> (look at distance_1_lat_lng); break the loop, else continue.</p> <p>Case 2: while iterating down through the distance column from the current pair of lat,lng, when there is only ONE value less than 0.1 (&lt;0.1) (look at distance_2_lat_lng), pick up the created_date for that corresponding value and add it to the last_created_date column; break the loop, else continue.</p> <p>Case 3: while iterating down through the distance column from the current pair of lat,lng, when there are multiple values less than 0.1 (&lt;0.1) (look at distance_3_lat_lng), pick up the created date of the last occurring value and put it in last_created_date for that pair of lat,lng; break the loop, else continue.</p> <p>Case 4: while iterating down through the distance column from the current pair of lat,lng, when there is a value in the column which is &gt; 0.1 and &lt; 5, put the created date of the first occurrence of that value as last_created_date.</p> <p>I have written this function, but it is not giving the expected result for the above conditions.</p> <pre><code>def find_and_append_created_dates(df_0): df_0[&quot;last_created_dates&quot;] = None # Add the new column with initial values of None # Iterate over unique rake devices for rake_device in df_0['rake_device'].unique(): rake_device_df = df_0[df_0['rake_device'] == rake_device] # Iterate over distance columns for the current rake
device for i in range(len(rake_device_df)): distance_column = f&quot;distance_{i+1}_lat_lng&quot; # Iterate over rows below the current row for j in range(i + 1, len(rake_device_df)): distance = rake_device_df[distance_column].iloc[j] # Iterate over distance columns for distance_column in rake_device_df.filter(like='distance_').columns: # Iterate downward from the row after the current row has_value_less_than_5 = False for j in range(i + 1, len(rake_device_df)): distance = rake_device_df[distance_column].iloc[j] if pd.isna(distance): continue # Condition 1: No value less than 5 in the entire column if distance &lt; 5: has_value_less_than_5 = True # Mark that a value less than 5 exists # Condition 2: First occurrence of value less than 0.1 if distance &lt; 0.1 and not found_date: found_date = rake_device_df.loc[rake_device_df.index[j], &quot;created_date&quot;] break # Stop iterating if you find a value less than 0.1 # If no value less than 5 was found, assign NULL and break the loop if not has_value_less_than_5: found_date = None break # Assign the found_date (or None if not found) to the last_created_dates column df.at[rake_device_df.index[i], &quot;last_created_dates&quot;] = found_date return df find_and_append_created_dates(df) print(df) </code></pre> <p>The column last_created_date in the table I gave you is the expected output; we want to derive only that column, based on the table above.</p>
<python><pandas><dataframe><numpy><numpy-slicing>
2024-01-17 07:54:20
0
326
vish
77,830,589
7,454,513
How to use Python jsonpath_ng to filter nested data
<p>I need to find a dict in the list according to a specific value, but cannot. The return value I hope for is <code>[{ &quot;uid&quot;: {&quot;type&quot;: &quot;t2&quot;, &quot;id&quot;: &quot;world&quot;}, &quot;attrs&quot;: { &quot;value&quot;: 456 } }]</code></p> <pre><code>import json from jsonpath_ng.ext import parse json_str = '''[ { &quot;uid&quot;: {&quot;type&quot;: &quot;t1&quot;, &quot;id&quot;: &quot;hello&quot;}, &quot;attrs&quot;: { &quot;value&quot;: 123 } }, { &quot;uid&quot;: {&quot;type&quot;: &quot;t2&quot;, &quot;id&quot;: &quot;world&quot;}, &quot;attrs&quot;: { &quot;value&quot;: 456 } } ] ''' json_obj = json.loads(json_str) [match.value for match in parse('$.[*].attrs[?value &gt; 200]').find(json_obj)] </code></pre>
<python><json>
2024-01-17 07:26:48
1
683
Relax ZeroC
77,830,587
8,176,763
FastAPI - Swagger UI docs not rendering list array as query parameter
<p>According to the docs it is possible to pass a list of values <a href="https://fastapi.tiangolo.com/tutorial/query-params-str-validations/#query-parameter-list-multiple-values" rel="noreferrer">https://fastapi.tiangolo.com/tutorial/query-params-str-validations/#query-parameter-list-multiple-values</a> .</p> <p>I have tried on my end and the endpoint http://localhost:8000/items/?q=foo&amp;q=bar correctly process the values as a list:</p> <pre><code>{&quot;q&quot;:[&quot;foo&quot;,&quot;bar&quot;]} </code></pre> <p>But the docs do not, and interpret it as a string and do not render multiple options:</p> <p><a href="https://i.sstatic.net/FzWG9.png" rel="noreferrer"><img src="https://i.sstatic.net/FzWG9.png" alt="enter image description here" /></a></p> <p>Here is my <code>main.py</code>:</p> <pre><code>from db import engine,get_db from models import Example,ExampleModel,Base,Color,Item from fastapi import FastAPI,Depends,Query,Path from sqlalchemy.ext.asyncio import AsyncSession from sqlalchemy import select from contextlib import asynccontextmanager from typing import Annotated,Union,List from fastapi.responses import HTMLResponse @asynccontextmanager async def lifespan(app: FastAPI): async with engine.begin() as conn: await conn.run_sync(Base.metadata.create_all) yield app = FastAPI(lifespan=lifespan) # type: ignore @app.get(&quot;/items/&quot;) async def read_items(q: Annotated[List[str] | None, Query()] = None): query_items = {&quot;q&quot;: q} return query_items </code></pre>
<python><swagger><fastapi><openapi><swagger-ui>
2024-01-17 07:26:18
1
2,459
moth
77,830,492
15,893,581
Reconstruct covariance matrix from a dataset generated with that covariance matrix (using Cholesky factorization)
<p>Recalling that <code>C = L@L.T</code> using <a href="https://scicoding.com/how-to-calculate-cholesky-decomposition-in-python/" rel="nofollow noreferrer">Cholesky</a> factorization, I'm trying to repeat the transformations described in <a href="https://blogs.sas.com/content/iml/2012/02/08/use-the-cholesky-transformation-to-correlate-and-uncorrelate-variables.html" rel="nofollow noreferrer">this article</a> in <code>Python</code>, but I have some misunderstanding and am unable to recover the initially given <em>covariance matrix</em> from the dataset generated with it. The question is: how do I get the covariance matrix back?</p> <pre><code># Importing libraries import numpy as np import pandas as pd from scipy import linalg as la import matplotlib.pyplot as plt sigma = [[9,1], [1,1]] # given Sigma (or covariance matrix) U= la.cholesky(sigma, lower=False) rec= U.T@U print(pd.DataFrame({'U': U[:,1], 'rec1':rec[0], 'rec2':rec[1]} )) # generate x,y ~ N(0,1), corr(x,y)=0 xy = U.dot(np.random.normal(0,1,(2,1000))) ##print(xy) L= U.T zw = L @ xy # Z and W variables - correlated cov = np.cov(zw, ddof=0) # ?????????? /* did we succeed? Compute covariance of transformed data */ print(cov) # [[86.66333557 10.85423541] # [10.85423541 2.1026003 ]] plt.scatter(zw[0], zw[1]) plt.show() </code></pre> <p>P.S. I used <code>ddof=0</code> as described <a href="https://stackoverflow.com/questions/68432422/calculating-covariance-matrix-in-numpy">here</a></p>
<python><matrix-factorization><covariance-matrix>
2024-01-17 07:04:51
1
645
JeeyCi
77,830,306
16,798,185
newSession from a Parent Session
<p>When we create a new session using <code>newSession()</code>, does the new session inherit all config properties set on the parent session?</p> <p>As per the <a href="https://spark.apache.org/docs/3.1.3/api/python/reference/api/pyspark.sql.SparkSession.newSession.html" rel="nofollow noreferrer">Spark doc</a>, <code>newSession()</code> returns a new <code>SparkSession</code> as a new session, that has separate <code>SQLConf</code>, but shared <code>SparkContext</code> and table cache. I could not follow the difference between <code>SQLConf</code> and <code>SparkContext</code> in this context.</p> <p>I performed the tests below on Databricks; it seems newSession is not inheriting any existing configs from the parent session. In that case, what is the meaning of the line mentioned in the doc - <em>&quot;new session, that has separate <code>SQLConf</code>, but shared <code>SparkContext</code> and table cache&quot;</em>?</p> <pre class="lang-py prettyprint-override"><code># Session-1 from pyspark.sql import SparkSession spark_session_1 = SparkSession.builder.appName('app_name').config(&quot;spark.sql.sources.default&quot;, &quot;parquet&quot;).getOrCreate() spark_session_1.conf.set(&quot;spark.sql.shuffle.partitions&quot;, &quot;5000&quot;) print(spark_session_1.conf.get(&quot;spark.sql.sources.default&quot;)) # parquet (note: default val is 'delta') print(spark_session_1.conf.get(&quot;spark.sql.shuffle.partitions&quot;)) # 5000 </code></pre> <pre class="lang-py prettyprint-override"><code># Session-2 spark_session_2 = spark_session_1.newSession() print(spark_session_2.conf.get(&quot;spark.sql.sources.default&quot;)) # delta print(spark_session_2.conf.get(&quot;spark.sql.shuffle.partitions&quot;)) # 200 </code></pre> <pre class="lang-py prettyprint-override"><code># Session-3 spark_session_3 = spark_session_1.newSession() spark_session_3.conf.set(&quot;spark.sql.shuffle.partitions&quot;, &quot;333&quot;) print(spark_session_3.conf.get(&quot;spark.sql.sources.default&quot;)) # delta
print(spark_session_3.conf.get(&quot;spark.sql.shuffle.partitions&quot;)) # 333 </code></pre>
<python><apache-spark><pyspark>
2024-01-17 06:14:25
1
377
user16798185
77,830,176
3,555,115
Merge two different data frames by the same column in a Python dataframe
<p>df1 =</p> <pre><code>A ANTS AGE 0 ABC 18 1 ABC 25 3 ABC 24 2 DEF 20 </code></pre> <p>df2 =</p> <pre><code>ANTS LOC ABC WIND DEF FIND </code></pre> <p>My output should be</p> <p>df3 =</p> <pre><code> A ANTS AGE LOC 0 ABC 18 WIND 1 ABC 25 WIND 3 ABC 24 WIND 2 DEF 20 FIND </code></pre> <p>How can we merge df1 and df2 to get df3 effectively?</p>
<python><pandas><dataframe>
2024-01-17 05:37:39
0
750
user3555115
77,830,159
1,266,109
What is thresh in this statement?
<p>I am trying to do the equivalent in C++ of some code written in Python.</p> <p>In the following link:</p> <p><a href="https://stackoverflow.com/questions/13584586/how-to-automatically-detect-and-crop-individual-sprite-bounds-in-sprite-sheet">How to automatically detect and crop individual sprite bounds in sprite sheet?</a></p> <pre><code>thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1] </code></pre> <p>The output of threshold is a Mat. What is thresh? The convention in C++ is different, because there is no subscript operator for Mat in OpenCV C++.</p>
<python><c++><opencv>
2024-01-17 05:32:34
2
21,085
Rahul Iyer
77,830,117
9,501,508
Web Scraping and POST Request Issue: Unable to Retrieve Expected Data
<p>I am currently working on a Python script using the <code>requests</code> library and <code>BeautifulSoup</code> for web scraping. The goal is to retrieve specific information from the website &quot;<a href="https://letmepost.com/check-da-pa" rel="nofollow noreferrer">https://letmepost.com/check-da-pa</a>,&quot; particularly the DA (Domain Authority), PA (Page Authority), Spam score, Age, Expiring At, and IP address for a given domain. However, despite successfully obtaining a response, the data retrieved is not as expected.</p> <p>I have shared my code, which includes handling the X-CSRF-TOKEN and sending a <strong>POST</strong> request with the necessary payload. However, the output I receive is just the domain &quot;cnn.com&quot; instead of the expected structured data, which includes the DA, PA, Spam score, Age, Expiring At, and IP.</p> <pre class="lang-py prettyprint-override"><code> import requests from bs4 import BeautifulSoup import re headers = { &quot;User-Agent&quot;: &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36&quot; } session = requests.Session() r = session.get('https://letmepost.com/check-da-pa', headers=headers) soup = BeautifulSoup(r.content, 'html.parser') script_tag = soup.find(&quot;script&quot;, string=lambda text: text and &quot;X-CSRF-TOKEN&quot; in text) if script_tag: token_pattern = r&quot;'X-CSRF-TOKEN': '(.*?)'&quot; csrf_token_match = re.search(token_pattern, script_tag.string) if csrf_token_match: csrf_token = csrf_token_match.group(1) print(f&quot;CSRF Token: {csrf_token}&quot;) else: print(&quot;CSRF Token pattern not found in script content.&quot;) else: print(&quot;Script tag with 'X-CSRF-TOKEN' not found.&quot;) data = &quot;prothomalo.com&quot; data_payload = { &quot;domains &quot;: data, &quot;X-CSRF-TOKEN&quot;: csrf_token } # Existing data_payload data_payload = { &quot;X-CSRF-TOKEN&quot;: csrf_token } # Adding new data to data_payload 
data_payload[&quot;listen&quot;] = &quot;4OdeyKA6Sku1ff86NqkAibxXGf4kfOQUcbepB84U&quot; data_payload[&quot;secret&quot;] = &quot;PNxcg1LI9331xX7&quot; data_payload[&quot;urls&quot;] = &quot;prothomalo.com&quot; data_payload[&quot;fromBrowser&quot;] = &quot;1705462709623&quot; data_payload[&quot;timestamp&quot;] = &quot;1705462686&quot; data_payload[&quot;direct&quot;] = True # Assuming you want to add a boolean True, not the string 'true' # The data_payload now contains the additional data headers = { &quot;User-Agent&quot;: &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36&quot;, &quot;X-CSRF-TOKEN&quot;: csrf_token } url = &quot;https://letmepost.com/check-da-pa&quot; response = session.post(url, data=data_payload, headers=headers) print(response.status_code) import pandas as pd result = BeautifulSoup(response.content, 'html.parser') print(result) </code></pre> <p><a href="https://i.sstatic.net/dk2dp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dk2dp.png" alt="enter image description here" /></a> I would appreciate any insights, suggestions, or corrections to my code that might help me retrieve the desired information accurately. Thank you in advance for your assistance!</p>
<python><web-scraping><beautifulsoup><python-requests><request>
2024-01-17 05:20:13
1
1,558
Humayun Ahmad Rajib
77,830,094
1,447,953
Python get all sets of N pairs from two lists
<p>This is some kind of elementary task in itertools probably, but my brain is not working today and I can't seem to find it answered here already.</p> <p>Suppose I have two lists with different numbers of elements. Their indices are</p> <pre><code>i1 = list(range(N)) i2 = list(range(M)) </code></pre> <p>for say <code>N=3</code>, <code>M=4</code> we have</p> <pre><code>[0, 1, 2] [0, 1, 2, 3] </code></pre> <p>I now want to construct all possible sets of pairs of length K from these two lists. So for example for K=1 we have</p> <pre><code>(0, 0) (0, 1) (0, 2) (0, 3) (1, 0) (1, 1) (1, 2) (1, 3) (2, 0) (2, 1) (2, 2) (2, 3) </code></pre> <p>and for K=2 we have</p> <pre><code>(0,0), (1,1) (0,0), (1,2) (0,0), (1,3) (0,0), (2,1) (0,0), (2,2) (0,0), (2,3) (0,1), (1,0) (0,1), (1,2) (0,1), (1,3) (0,2), (1,0) (0,2), (1,1) (0,2), (1,3) ... &lt;and so it goes on...&gt; </code></pre> <p>i.e. we are drawing one value from each list to make a pair, without replacement, and doing this K times to make K pairs. And we want to obtain all possible sets of such pairs.</p> <p>For K=1 I guess this is the cartesian product of the lists, but it's more complicated for K=2 and above.</p> <p>Note that the elements in each pair in each set are drawn from the lists without replacement. So each item from each list can appear in at most one pair in each set of pairs.</p> <p>Edit: The answer here: <a href="https://stackoverflow.com/a/12935562/1447953">https://stackoverflow.com/a/12935562/1447953</a> solves the problem when K=min(N,M), i.e. when finding the longest possible sets of pairs, but it doesn't work for smaller K .</p> <p>For additional context: I am working on a spectral peak matching problem. I have two line spectra, which have varying numbers of peaks in them, and I am working on an algorithm to determine whether any of the peaks line up with each other. 
This involves comparing peaks from one spectrum against those in the other spectrum, so I need to iterate over all the ways the peaks might possibly be paired up for comparison (for differing numbers of potential simultaneous matches).</p>
<python><python-itertools>
2024-01-17 05:12:45
2
2,974
Ben Farmer
77,829,919
3,747,241
PyTorch3D file io throws error - AttributeError: _evt
<p>I finally got PyTorch3D to install on my conda environment with the following configuration -- <code>torch=1.13.0</code> and <code>torchvision=0.14.0</code> and <code>pytorch3d=0.7.5</code>.</p> <p>I am trying to load a mesh from an .obj file using pytorch3d.io from <a href="https://pytorch3d.readthedocs.io/en/v0.6.0/modules/io.html" rel="nofollow noreferrer">https://pytorch3d.readthedocs.io/en/v0.6.0/modules/io.html</a>.</p> <p>In the line <code>pytorch3d.io.load_obj(filename)</code>, I get the below error</p> <pre><code>An exception occurred in telemetry logging.Disabling telemetry to prevent further exceptions. Traceback (most recent call last): File &quot;/home/aditya/miniconda3/envs/py3d/lib/python3.10/site-packages/iopath/common/file_io.py&quot;, line 946, in __log_tmetry_keys handler.log_event() File &quot;/home/aditya/miniconda3/envs/py3d/lib/python3.10/site-packages/iopath/common/event_logger.py&quot;, line 97, in log_event del self._evt AttributeError: _evt </code></pre> <p>I am on Nvidia RTX 4090 graphics card. Not sure what the issue is here.</p>
<python><pytorch><pytorch3d>
2024-01-17 04:08:29
1
1,135
Aditya
77,829,899
2,178,942
legend=False in sns.stripplot raises an error
<p>I am using seaborn's stripplot and pointplot to plot my data (<a href="https://seaborn.pydata.org/generated/seaborn.pointplot.html" rel="nofollow noreferrer">Reference</a>)</p> <p>My code is:</p> <pre><code>sns.set_palette(&quot;Purples&quot;) sns.stripplot( data=df_d_pt, x=&quot;feat&quot;, y=&quot;acc&quot;, hue=&quot;alpha&quot;, dodge=True, alpha=.2, legend=False, ) sns.pointplot( data=df_d_pt, x=&quot;feat&quot;, y=&quot;acc&quot;, hue=&quot;alpha&quot;, dodge=.4, linestyle=&quot;none&quot;, errorbar=None, marker=&quot;_&quot;, markersize=10, markeredgewidth=3, ) </code></pre> <p>However it seems legend=False is not working as I get the error:</p> <pre class="lang-text prettyprint-override"><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) &lt;ipython-input-156-18efd4be2baa&gt; in &lt;module&gt; 3 sns.stripplot( 4 data=df_d_pt, x=&quot;feat&quot;, y=&quot;acc&quot;, hue=&quot;alpha&quot;, ----&gt; 5 dodge=True, alpha=.2, legend=False, 6 ) 7 ~/.local/lib/python3.6/site-packages/seaborn/_decorators.py in inner_f(*args, **kwargs) 44 ) 45 kwargs.update({k: arg for k, arg in zip(sig.parameters, args)}) ---&gt; 46 return f(**kwargs) 47 return inner_f 48 ~/.local/lib/python3.6/site-packages/seaborn/categorical.py in stripplot(x, y, hue, data, order, hue_order, jitter, dodge, orient, color, palette, size, edgecolor, linewidth, ax, **kwargs) 2820 linewidth=linewidth)) 2821 -&gt; 2822 plotter.plot(ax, kwargs) 2823 return ax 2824 ~/.local/lib/python3.6/site-packages/seaborn/categorical.py in plot(self, ax, kws) 1158 def plot(self, ax, kws): 1159 &quot;&quot;&quot;Make the plot.&quot;&quot;&quot; -&gt; 1160 self.draw_stripplot(ax, kws) 1161 self.add_legend_data(ax) 1162 self.annotate_axes(ax) ~/.local/lib/python3.6/site-packages/seaborn/categorical.py in draw_stripplot(self, ax, kws) 1152 kws.update(c=palette[point_colors]) 1153 if self.orient == &quot;v&quot;: -&gt; 1154 
ax.scatter(cat_pos, strip_data, **kws) 1155 else: 1156 ax.scatter(strip_data, cat_pos, **kws) /opt/conda/lib/python3.6/site-packages/matplotlib/__init__.py in inner(ax, data, *args, **kwargs) 1445 def inner(ax, *args, data=None, **kwargs): 1446 if data is None: -&gt; 1447 return func(ax, *map(sanitize_sequence, args), **kwargs) 1448 1449 bound = new_sig.bind(ax, *args, **kwargs) /opt/conda/lib/python3.6/site-packages/matplotlib/cbook/deprecation.py in wrapper(*inner_args, **inner_kwargs) 409 else deprecation_addendum, 410 **kwargs) --&gt; 411 return func(*inner_args, **inner_kwargs) 412 413 return wrapper /opt/conda/lib/python3.6/site-packages/matplotlib/axes/_axes.py in scatter(self, x, y, s, c, marker, cmap, norm, vmin, vmax, alpha, linewidths, verts, edgecolors, plotnonfinite, **kwargs) 4496 ) 4497 collection.set_transform(mtransforms.IdentityTransform()) -&gt; 4498 collection.update(kwargs) 4499 4500 if colors is None: /opt/conda/lib/python3.6/site-packages/matplotlib/artist.py in update(self, props) 994 func = getattr(self, f&quot;set_{k}&quot;, None) 995 if not callable(func): --&gt; 996 raise AttributeError(f&quot;{type(self).__name__!r} object &quot; 997 f&quot;has no property {k!r}&quot;) 998 ret.append(func(v)) AttributeError: 'PathCollection' object has no property 'legend' </code></pre> <p>If I remove the &quot;<code>legend=False</code>&quot; from my code, results will look like this:</p> <p><a href="https://i.sstatic.net/mvcxJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mvcxJ.png" alt="enter image description here" /></a></p> <p>How can I solve this problem? Either remove the legend or put it on the right side of my figure</p>
<python><python-3.x><matplotlib><seaborn><figure>
2024-01-17 04:00:08
1
1,581
Kadaj13
77,829,697
534,674
How to get sphinx to gracefully ignore 3rd party reference targets in intersphinx mappings
<p>I've run into issues a few times that are solved by <code>intersphinx_mapping</code>, but that can be <a href="https://stackoverflow.com/a/42513684/534674">difficult</a>. I'm also wary of coupling my doc build success to another library's success, especially if I'm in charge of the other library. Is there a way to tell sphinx to not build reference targets to modules outside my package, and just assume they are valid? Alternatively, is there a way to list the ignored reference targets in conf.py? That way, I could at least review the warnings, verify I do actually want to ignore them, and manually add them.</p> <p>E.g. a subclass of numpy array that wants to prohibit assigning to <code>shape</code>, with the code:</p> <pre class="lang-py prettyprint-override"><code>@property def shape(self): return super().shape </code></pre> <p>In sphinx, this gives:</p> <pre><code>docstring of mypackage.MyArrayClass.shape:12: WARNING: 'any' reference target not found: ndarray.reshape </code></pre> <p>It is referring to the first link to <code>ndarray.reshape</code> in the <a href="https://numpy.org/doc/stable/reference/generated/numpy.shape.html" rel="nofollow noreferrer">numpy docs</a>.</p>
<python><python-sphinx>
2024-01-17 02:34:29
0
1,806
Jake Stevens-Haas
77,829,653
2,998,077
Python, Docx, to replace soft returns (^l or manual line breaks) with hard returns (^p or paragraph marks)
<p>I have some Word documents (.docx) that contain soft returns (^l or manual line breaks).</p> <p>When counting the number of paragraphs in the document, the script below reports only 1 paragraph.</p> <p>What is the way to recognize the soft returns (^l or manual line breaks) and replace them with hard returns (^p or paragraph marks), so that the script can count the actual number of paragraphs?</p> <p>A sample document is shown below; it contains 2 paragraphs. Using the code below to count the number of paragraphs in this docx, it says it has only 1 paragraph.</p> <p><a href="https://i.sstatic.net/HoyZo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HoyZo.png" alt="enter image description here" /></a></p> <pre><code>from docx import Document def count_paragraphs(docx_path): doc = Document(docx_path) return len(doc.paragraphs) # Example usage: docx_path = 'the_file.docx' paragraph_count = count_paragraphs(docx_path) print(f'The number of paragraphs in {docx_path}: {paragraph_count}') </code></pre> <p>Based on the above, I've tried the following (to replace the soft returns with hard returns) but it doesn't work:</p> <pre><code>paragraph.text.replace('\r', '\n') </code></pre>
<python><docx>
2024-01-17 02:15:26
1
9,496
Mark K
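In the docx question above, python-docx (at least in recent versions, to my knowledge) exposes a soft return (`<w:br/>`) as a `\n` inside `paragraph.text`, so there is no need to rewrite the document just to count paragraphs — count the embedded newlines instead. A sketch on plain strings (feed it `[p.text for p in doc.paragraphs]`):

```python
def count_logical_paragraphs(paragraph_texts):
    """Count paragraphs, treating each embedded '\\n' (soft return) as a separator."""
    total = 0
    for text in paragraph_texts:
        # n soft breaks inside one Word paragraph => n + 1 logical paragraphs
        total += text.count("\n") + 1
    return total

# Hypothetical document: one Word paragraph holding two logical paragraphs.
print(count_logical_paragraphs(["First paragraph\nSecond paragraph"]))  # 2
```

This also explains why `paragraph.text.replace('\r', '\n')` appeared to do nothing: Python strings are immutable, so `replace()` returns a new string that was never assigned anywhere, and it never touches the underlying document either.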
77,829,374
3,734,914
Python Decorator to Cast Class Attributes for MyPy
<p>I have a class that has multiple methods that can only be called if an attribute that may be <code>None</code> is not <code>None</code>. E.g.</p> <pre class="lang-py prettyprint-override"><code>class Connection: def read(self) -&gt; str: return &quot;data&quot; def write(self, data: str) -&gt; None: print(data) class Foo: def __init__(self): self._connection: None | Connection = None def connect(self): self._connection = Connection() def get_data(self) -&gt; str: if self._connection is None: raise RuntimeError() else: return self._connection.read() def send_data(self, data: str): if self._connection is None: raise RuntimeError(&quot;Connection not created&quot;) else: self._connection.write(data) </code></pre> <p>If I run <code>mypy</code> on the code above it has no issues.</p> <p>However, instead of manually checking inside each method that the connection has been created, I'd rather just have a decorator that checks if the connection has been created before entering the method.</p> <pre class="lang-py prettyprint-override"><code>from functools import wraps from typing import cast class Connection: def read(self) -&gt; str: return &quot;data&quot; def write(self, data: str) -&gt; None: print(data) def requires_connection(func): @wraps(func) def decorator(self, *args, **kwargs): if self._connection is None: raise RuntimeError(&quot;Connection not created&quot;) else: cast(Connection, self._connection) return func(self, *args, **kwargs) return decorator class Foo: def __init__(self): self._connection: None | Connection = None def connect(self): self._connection = Connection() @requires_connection def get_data(self) -&gt; str: return self._connection.read() @requires_connection def send_data(self, data: str): self._connection.write(data) </code></pre> <p>This code does what I want, in the sense that it raises an error if I try to call <code>get_data</code> or <code>send_data</code> before the connection has been created. 
But if I try to check my code with <code>mypy</code>, it doesn't recognise that the <code>_connection</code> attribute is not <code>None</code>.</p> <pre><code>error: Item &quot;None&quot; of &quot;Connection | None&quot; has no attribute &quot;read&quot; [union-attr] error: Item &quot;None&quot; of &quot;Connection | None&quot; has no attribute &quot;write&quot; [union-attr] </code></pre> <p>Is there a way to centralize the logic of making sure that <code>_connection</code> is not <code>None</code> in a way that <code>mypy</code> will recognise?</p>
<python><mypy><python-decorators><python-typing>
2024-01-17 00:20:29
0
9,017
Batman
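For the mypy question above: a decorator can enforce the check at runtime, but it cannot narrow the *attribute's* type for the checker (the `cast` inside the wrapper is discarded). A common alternative that mypy does understand is to centralize the check in a property whose return type is non-optional. A sketch built on the question's classes:

```python
class Connection:
    def read(self) -> str:
        return "data"


class Foo:
    def __init__(self) -> None:
        self._connection: Connection | None = None

    def connect(self) -> None:
        self._connection = Connection()

    @property
    def connection(self) -> Connection:
        # The single place that narrows Connection | None to Connection.
        # mypy trusts the declared return type at every call site.
        if self._connection is None:
            raise RuntimeError("Connection not created")
        return self._connection

    def get_data(self) -> str:
        return self.connection.read()


f = Foo()
f.connect()
print(f.get_data())  # data
```

Calling `get_data()` before `connect()` still raises `RuntimeError`, so the runtime behavior matches the decorator version while keeping mypy happy.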
77,829,347
7,563,454
Include script in local directory from a dynamic path
<p>I'm working on a game engine intended to support modding, which includes loading a custom mod directory with an init script. I can't find a clear answer as to the best way of dynamically loading a script from a string variable path. The name of the active mod will likely be specified as a script parameter: While I know the following example is probably not correct unless this is in fact allowed, it's intuitively what I'm thinking of doing in my <code>init.py</code> unless a more correct way exists.</p> <pre><code>import sys import &quot;mods/&quot; + sys.argv[1] + &quot;/init.py&quot; </code></pre>
<python><python-3.x>
2024-01-17 00:12:08
1
1,161
MirceaKitsune
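For the mod-loading question above, `import` statements cannot take dynamic paths, but `importlib` can load a source file from any path built at runtime. A sketch (the `mods/<name>/init.py` layout follows the question; error handling is minimal):

```python
import importlib.util
import os
import sys


def load_mod_init(mod_name, base_dir="mods"):
    """Load mods/<mod_name>/init.py and return the resulting module object."""
    path = os.path.join(base_dir, mod_name, "init.py")
    spec = importlib.util.spec_from_file_location(f"mods.{mod_name}", path)
    if spec is None or spec.loader is None:
        raise ImportError(f"cannot load mod from {path}")
    module = importlib.util.module_from_spec(spec)
    # Optional: register it so later `import` statements can find it by name.
    sys.modules[spec.name] = module
    spec.loader.exec_module(module)
    return module

# Usage sketch: mod = load_mod_init(sys.argv[1])
```

Note that the chosen module name (`mods.<name>` here) is what shows up in tracebacks and `sys.modules`, so it is worth keeping it collision-free across mods.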
77,829,278
12,671,057
Keep list elements with three or more on either side
<p>How can we keep only elements that have three or more elements on either side? For example:</p> <pre><code>[1, 2, 3, 4] =&gt; [1, 4] </code></pre> <ul> <li><p>The <code>1</code> has three elements on its right, so we keep it.</p> </li> <li><p>The <code>2</code> and <code>3</code> each have only one or two elements on either side, so we remove them.</p> </li> <li><p>The <code>4</code> has three elements on its left, so we keep it.</p> </li> </ul> <p>More examples:</p> <pre><code>[] =&gt; [] [1] =&gt; [] [1, 2] =&gt; [] [1, 2, 3] =&gt; [] [1, 2, 3, 4] =&gt; [1, 4] [1, 2, 3, 4, 5] =&gt; [1, 2, 4, 5] [1, 2, 3, 4, 5, 6] =&gt; [1, 2, 3, 4, 5, 6] [1, 2, 3, 4, 5, 6, 7] =&gt; [1, 2, 3, 4, 5, 6, 7] </code></pre> <p>Edit: Here's a bad way in case someone indeed minds that I didn't provide any &quot;attempt&quot;:</p> <pre><code>L = [1, 2, 3, 4] N = len(L) for i in reversed(range(N)): left = i right = N - 1 - i if not (left &gt;= 3 or right &gt;= 3): L.pop(i) print(L) </code></pre>
<python><list>
2024-01-16 23:48:17
3
27,959
Kelly Bundy
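For the flanking question above, the condition "three or more elements on either side" translates directly to indices: keep element `i` of an `n`-element list when `i >= 3` (three on the left) or `n - 1 - i >= 3` (three on the right). One pass, no mutation while iterating:

```python
def keep_flanked(L, k=3):
    """Keep items with at least k items to their left or at least k to their right."""
    n = len(L)
    return [x for i, x in enumerate(L) if i >= k or n - 1 - i >= k]

print(keep_flanked([1, 2, 3, 4, 5]))  # [1, 2, 4, 5]
```

Equivalently, the result is the union of the slices `L[:n-3]` and `L[3:]` in order, which is handy if slices are preferred over a comprehension.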
77,829,130
4,822,772
Folium GeoJson not displaying fill color correctly
<p>I'm working on a project where I need to display GeoJSON data using Folium in Python. I'm facing an issue with the fill color not displaying correctly.</p> <p>I'm using the following code:</p> <pre><code>import folium import requests import json # Load GeoJSON data from the provided URL url = &quot;https://earthquake.usgs.gov/product/shakemap/us7000kufc/us/1699242609676/download/cont_mmi.json&quot; response = requests.get(url) geojson_data = json.loads(response.text) # Create a folium map centered around the first coordinate in the GeoJSON mymap = folium.Map(location=[geojson_data['features'][0]['geometry']['coordinates'][0][0][1], geojson_data['features'][0]['geometry']['coordinates'][0][0][0]], zoom_start=10) # Define colors based on the &quot;value&quot; property colors = ['#a0e5ff', '#90f2ff', '#80ffff', '#80dfff', '#80bfff', '#809fff', '#8080ff', '#bf80ff', '#df80ff', '#ff80ff'] # Add GeoJSON layer to the map with different colors folium.GeoJson( geojson_data, style_function=lambda feature: { 'fillColor': colors[int(feature['properties']['value'])], 'color': colors[int(feature['properties']['value'])], 'weight': 3, 'fillOpacity': 0.5, # Adjust opacity value 'closed': True # Close the polygons } ).add_to(mymap) # Display the map mymap </code></pre> <p>The issue is that while the contour color is displaying correctly, the fill color is not visible. I have experimented with different values for fillOpacity, but it doesn't seem to work as expected.</p> <p>Has anyone encountered a similar problem, and how can I make the fill color visible?</p> <p><a href="https://i.sstatic.net/CFwtX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CFwtX.png" alt="enter image description here" /></a></p>
<python><geojson><folium>
2024-01-16 22:56:04
0
1,718
John Smith
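Regarding the Folium fill question above: the `cont_mmi.json` file contains contour *lines*, i.e. (Multi)LineString geometries, and Leaflet only fills Polygon geometries — the stroke renders, but `fillColor` has nothing to fill. One workaround is to close each line into a polygon ring before handing the data to `folium.GeoJson`. A sketch on plain GeoJSON dicts (it assumes each contour line is a closable ring, which may not hold for contours clipped at the data boundary, and it treats each line of a MultiLineString as its own polygon rather than as a hole):

```python
import copy


def contours_to_polygons(geojson):
    """Convert (Multi)LineString features to (Multi)Polygon features so Leaflet can fill them."""
    out = copy.deepcopy(geojson)
    for feature in out["features"]:
        geom = feature["geometry"]
        if geom["type"] == "LineString":
            ring = list(geom["coordinates"])
            if ring and ring[0] != ring[-1]:
                ring.append(ring[0])  # close the ring
            geom.update(type="Polygon", coordinates=[ring])
        elif geom["type"] == "MultiLineString":
            rings = []
            for line in geom["coordinates"]:
                ring = list(line)
                if ring and ring[0] != ring[-1]:
                    ring.append(ring[0])
                rings.append([ring])  # each line becomes its own polygon
            geom.update(type="MultiPolygon", coordinates=rings)
    return out
```

Usage sketch: `folium.GeoJson(contours_to_polygons(geojson_data), style_function=...)` with the existing style function; the `'closed': True` key in the question's style dict is not a Leaflet style option and can be dropped.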
77,829,118
4,822,772
Issues with Folium Grid Visualization
<p>I'm working on a grid visualization using Folium and Xarray in Python. The code is designed to display rectangles on a map, representing grid cells with associated wind speeds. The visualization works well when zoomed in, but becomes problematic when zooming out – at certain zoom levels, the rectangles are not visible.</p> <p>Here's the code I'm using:</p> <pre><code>import xarray as xr import requests from io import BytesIO import folium # Specify the URL of the NetCDF file url = &quot;https://www.star.nesdis.noaa.gov/socd/mecb/sar/AKDEMO_products/APL_winds/tropical/2024/SH052024_BELAL/STAR_SAR_20240116013937_SH052024_05S_MERGED_FIX_3km.nc&quot; # Download the NetCDF file content response = requests.get(url) nc_content = BytesIO(response.content) # Open the NetCDF file using xarray dataset = xr.open_dataset(nc_content) # Access the 'sar_wind' variable sar_wind = dataset['sar_wind'].values # Get latitude and longitude information from the NetCDF file latitude = dataset['latitude'].values longitude = dataset['longitude'].values # Calculate the resolution of each grid cell lat_resolution = abs(latitude[0, 1] - latitude[0, 0]) lon_resolution = abs(longitude[1, 0] - longitude[0, 0]) # Create a Folium map centered around the mean latitude and longitude mean_lat = latitude.mean() mean_lon = longitude.mean() mymap = folium.Map(location=[mean_lat, mean_lon], zoom_start=10) # Define color levels based on wind speed speed_levels = [0, 5, 10, 15, 20,30] speed_colors = ['blue', 'green', 'yellow', 'orange', 'red',&quot;purple&quot;] r = 2 # Iterate through each point and add a larger rectangle to the map for lat, lon, wind_speed in zip(latitude.flatten(), longitude.flatten(), sar_wind.flatten()): # Calculate rectangle bounds based on lat, lon, and resolution bounds = [(lat - r * lat_resolution, lon - r * lon_resolution), (lat + r * lat_resolution, lon + r * lon_resolution)] # Determine color based on wind speed level color_index = next((i for i, level in 
enumerate(speed_levels) if wind_speed &lt;= level), len(speed_levels) - 1) color = speed_colors[color_index] # Add rectangle to the map with no borders folium.Rectangle( bounds=bounds, color=color, fill=True, fill_color=color, fill_opacity=0.6, weight=0, # Set weight to 0 for no borders popup=f'Wind Speed: {wind_speed:.2f} m/s' ).add_to(mymap) # Save the map as an HTML file or display it in Jupyter Notebook mymap </code></pre> <p>Explanation:</p> <p>I'm using Xarray to handle NetCDF data and Folium to create a map with rectangles representing grid cells. The rectangles are colored based on wind speed levels. The resulting visualization, achieved through this method, is satisfactory. However, there are two noteworthy issues as outlined below.</p> <p><a href="https://i.sstatic.net/QHEXP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QHEXP.png" alt="enter image description here" /></a></p> <p>Issue:</p> <p>Upon zooming in on the map, it becomes evident that the rectangles lack perfect alignment. Although my preference is for them to be adjacent, there are instances of both overlaps and gaps.</p> <p><a href="https://i.sstatic.net/0NYFc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0NYFc.png" alt="enter image description here" /></a></p> <p>When zooming out on the map, the rectangles shrink to the extent that they become too small to be visible. Beyond a certain zoom level, they eventually disappear altogether.</p> <p><a href="https://i.sstatic.net/hSUZZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hSUZZ.png" alt="enter image description here" /></a></p> <p>Questions:</p> <ul> <li>Is there a way to dynamically adjust the size of the rectangles based on the zoom level to ensure visibility?</li> <li>Are there any alternative approaches or parameters that could enhance the visibility of the rectangles when zooming out?</li> </ul> <p>Here is an example in which the rectangles are all just next to other. 
You can zoom and click and you will see the rectangles. Here is the <a href="https://earthquake.usgs.gov/earthquakes/eventpage/us7000kufc/map?shakemap-code=us7000kufc&amp;shakemap-source=us&amp;shakemap-intensity=true&amp;shakemap-mmi-contours=false&amp;shakemap-stations=true" rel="nofollow noreferrer">link</a></p> <p><a href="https://i.sstatic.net/fkQVb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fkQVb.png" alt="enter image description here" /></a></p>
<python><folium>
2024-01-16 22:51:14
1
1,718
John Smith
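Two notes on the grid question above. First, the overlaps and gaps: with `r = 2` each rectangle spans 4 grid resolutions around its centre, so neighbours overlap; for seamless tiling each cell should extend exactly half a resolution from its centre, so adjacent cells share an edge. Second, `folium.Rectangle` extents are in data (lat/lon) coordinates, so they legitimately shrink as you zoom out; for an array this dense, rasterizing the wind field to an image and displaying it with `folium.raster_layers.ImageOverlay` is the usual fix (a suggestion I have not run against this dataset). The bounds computation:

```python
def cell_bounds(lat, lon, lat_res, lon_res):
    """Bounds of one grid cell centred at (lat, lon).

    Adjacent cells share edges exactly: no overlaps, no gaps.
    """
    return [(lat - lat_res / 2, lon - lon_res / 2),
            (lat + lat_res / 2, lon + lon_res / 2)]


a = cell_bounds(0.0, 0.0, 1.0, 1.0)
b = cell_bounds(1.0, 0.0, 1.0, 1.0)  # the cell directly north
print(a[1][0] == b[0][0])  # top edge of a coincides with bottom edge of b
```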
77,829,091
963,319
How to use cross validation on linear regression model in scikit-learn
<p>I want to use <a href="https://scikit-learn.org/stable/_images/grid_search_cross_validation.png" rel="nofollow noreferrer">grid search cross validation</a> in scikit-learn to train the Linear Regression model on, let's say, 10 folds just like in the image I shared.</p> <p>But when I do that I get:</p> <pre><code>pipe = Pipeline([ ('scale', StandardScaler()), ('model', LinearRegression()) ]) grid = GridSearchCV( estimator=pipe, cv=4 ) grid.fit(X, Y) TypeError: GridSearchCV.__init__() missing 1 required argument: 'param_grid' </code></pre> <p>So my understanding is it wants to iterate over possible parameters to the <code>LinearRegression</code> model and I should put those into <code>param_grid</code>.</p> <p>But I don't want to tune a parameter for each fold. I want instead to simply do exactly what the photo shows: to do 10 folds and train and validate on them 10 times in order for the model to fine-tune 1 linear regression polynomial (I suppose that's what's happening inside the model).</p> <p>I tried using <code>cross_val_score</code> but it seems to train on the 10 folds 10 separate times because it returns 10 scores rather than 1 score (so I guess 10 linear regression polynomials, 1 for each fold).</p> <p>So all in all, how do I use the folding cross-validation method with Linear Regression?</p> <p>Here is the setup if anyone wants it:</p> <pre><code>from sklearn.linear_model import LinearRegression from sklearn.datasets import fetch_california_housing import pandas as pd from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler from sklearn.model_selection import GridSearchCV california = fetch_california_housing() pd.set_option('display.precision', 4) pd.set_option('display.max_columns', 9) pd.set_option('display.width', None) california_df = pd.DataFrame(california.data, columns=california.feature_names) california_df['MedHouseValue'] = pd.Series(california.target) X = california.data Y = california.target </code></pre>
<python><machine-learning><scikit-learn>
2024-01-16 22:44:50
0
2,751
Jenia Be Nice Please
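On the cross-validation question above: the 10 scores from `cross_val_score` are the point, not a bug. k-fold CV fits k separate models purely to *estimate* generalization error (typically reported as `scores.mean()`); it does not blend them into one fine-tuned model. For the single final model you simply call `pipe.fit(X, Y)` on all the data afterwards. `GridSearchCV` is only needed when there are hyperparameters to tune (I believe `param_grid={}` is even accepted, fitting a single candidate). What `KFold` does under the hood, as a dependency-free sketch:

```python
def kfold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k-fold CV (no shuffling).

    Sketch of what sklearn.model_selection.KFold produces.
    """
    # Distribute n samples over k folds as evenly as possible.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size


for train, test in kfold_indices(10, 3):
    print(len(train), len(test))
```

Each of the k iterations trains a fresh model on `train` and scores it on `test`; averaging those k scores is the cross-validated estimate shown in the linked diagram.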
77,828,965
4,398,966
Updating global variables across modules
<p>I'm trying to figure out the best or most pythonic way of updating a global variable across modules. Suppose I have two scripts:</p> <pre class="lang-python prettyprint-override"><code># master.py import functions as ff moves = 0 def main(): update() def update(): global moves moves = moves + 1 ff.moves = moves # send over to ff ff.update() moves = ff.moves # get back from ff </code></pre> <pre class="lang-python prettyprint-override"><code># functions.py def update(): global moves moves = moves + 1 </code></pre> <p>This works, but it's a pain to have to send the updated value of <code>moves</code> to <code>ff</code> and then get it back. Is there a nicer way of doing this? I would prefer not to use <code>__builtins__</code>.</p>
<python>
2024-01-16 22:10:59
0
15,782
DCR
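For the globals question above, the usual pattern is to let exactly one module own the state and have every other module mutate it *through the module object* (`import state; state.moves += 1`) — never `from state import moves`, which binds a local name to the current value and stops tracking the original. A runnable sketch that fakes the owning module with `types.ModuleType`:

```python
import types

# Stand-in for a real `state.py` containing just:  moves = 0
state = types.ModuleType("state")
state.moves = 0


# What master.py would do after `import state`:
def master_update():
    state.moves += 1


# What functions.py would do after its own `import state`:
def functions_update():
    state.moves += 1


master_update()
functions_update()
print(state.moves)  # 2 -- both modules saw and updated the same attribute
```

With a real `state.py`, both `master.py` and `functions.py` just `import state` and read/write `state.moves`; no values need to be shuttled back and forth.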
77,828,858
3,555,115
Difference of timestamps from two columns in seconds to a new column
<p>I am very new to python and have a dataframe as strings in each column</p> <pre><code>df1 = A B C 0 23:14:25 23:16:34 90 1 23:14:32 23:19:44 91 </code></pre> <p><strong>I need to generate a new df with difference in timestamps from columns A and B in secs as new column D</strong></p> <pre><code>df2 = A B C D 0 23:14:25 23:16:34 90 (23:16:34-23:14:25) 1 23:14:32 23:19:44 91 (23:19:44-23:14:32) </code></pre> <p>Is there a way to do these for dataframes directly?</p>
<python><pandas><dataframe>
2024-01-16 21:43:11
2
750
user3555115
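For the timestamp question above, the per-pair arithmetic is plain `datetime` subtraction; in pandas the vectorized equivalent would be something like `df2['D'] = (pd.to_timedelta(df1['B']) - pd.to_timedelta(df1['A'])).dt.total_seconds()` (an untested sketch). The scalar version, stdlib only:

```python
from datetime import datetime


def diff_seconds(t1, t2, fmt="%H:%M:%S"):
    """Seconds from t1 to t2 (same day, t2 >= t1 assumed)."""
    delta = datetime.strptime(t2, fmt) - datetime.strptime(t1, fmt)
    return int(delta.total_seconds())


print(diff_seconds("23:14:25", "23:16:34"))  # 129
print(diff_seconds("23:14:32", "23:19:44"))  # 312
```

If a pair can straddle midnight (t2 earlier than t1 on the clock), add 86400 to negative results.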
77,828,849
2,255,757
passing generic python class by reference with type initialized (AttributeError: 'typing.TypeVar' object has no attribute)
<p>This is the pseudo code for what I wish to achieve.</p> <pre><code>class A: @classmethod def run(): print(&quot;lul&quot;) class Test[T](): def run(self): return T.run() t = Test[A]() t = t.run() </code></pre> <p>error:</p> <pre><code>File &quot;.../main.py&quot;, line 13, in test return T.run() ^^^^^ AttributeError: 'typing.TypeVar' object has no attribute 'run' </code></pre> <p>Is this possible? Or an explanation as to why it isn't or an alternative would be appreciated.</p> <p>As a side note, in my specific actual scenario it is not possible to pass <code>A</code> by reference to <code>T</code> in any way.</p>
<python><generics>
2024-01-16 21:39:51
0
766
user2255757
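On the generics question above: the type parameter `T` is purely a typing construct — it is never bound to `A` at runtime, hence the `AttributeError` on the bare `TypeVar`. On CPython, however, an instance created via `Test[A]()` records its parametrization in `__orig_class__` (an implementation detail, and it is only set *after* `__init__` returns, so it cannot be read inside `__init__`). A sketch using the pre-3.12 `Generic[T]` spelling for wider compatibility:

```python
import typing


class A:
    @classmethod
    def run(cls):
        return "lul"


T = typing.TypeVar("T")


class Test(typing.Generic[T]):
    def run(self):
        # __orig_class__ is Test[A]; get_args extracts (A,) from it.
        (cls,) = typing.get_args(self.__orig_class__)
        return cls.run()


t = Test[A]()
print(t.run())  # lul
```

If relying on `__orig_class__` feels too fragile, the robust alternative is to pass the class explicitly, e.g. `Test(A)` storing it on `self`, and keep the generic annotation purely for type checkers.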
77,828,836
67,476
Sqlalchemy query LIKE statement on an Array column
<p>I have the following model; the column <code>skills</code> is of type <code>VARCHAR[]</code> in the Postgres DB:</p> <pre><code>class Job(Base): __tablename__ = &quot;jobs&quot; id = Column(BigInteger, primary_key=True) job_title = Column(String) is_remote = Column(String) skills = Column(ARRAY(String)) </code></pre> <p>I want to return jobs filtered on <code>skills</code>: a row should match if at least one of its skills partially matches at least one of the query strings, e.g. <strong>LIKE [java, go]</strong>.</p> <pre><code>def db_query(db:Session, search: List[str]): return db.query(Job) \ .filter( or_(Job.skills.like(any_([search])))) </code></pre> <p>I get the following error while executing the method:</p> <pre><code>sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedFunction) operator does not exist: character varying[] ~~ text LINE 3: WHERE jobs.skills LIKE ANY (ARRAY['java','research']) ^ HINT: No operator matches the given name and argument types. You might need to add explicit type casts. </code></pre>
<python><postgresql><sqlalchemy>
2024-01-16 21:34:58
0
7,961
harshit
77,828,620
3,555,115
get the value after a search string in python
<p>I need to retrieve values from logs that contain lines in a format like the one below. My input line looks like this:</p> <pre><code>line = &quot;(last_bytes wrote 66560002, cur_bytes read 33280206, curr_bytes wrote 66560002, blks read 103335128)&quot; </code></pre> <p>I need to get the value based on my search string:</p> <pre><code>Ex: search(last_bytes wrote) = 66560002 search(cur_bytes read) = 33280206 </code></pre> <p>I tried to split the line and take the next string after wrote/read, but that seems to return unexpected values.</p> <p>Is there a way to get these values in Python?</p>
<python><string><split>
2024-01-16 20:43:39
4
750
user3555115
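For the log-parsing question above, a small regex is more robust than splitting, because it anchors the number to the exact key (so "cur_bytes read" cannot accidentally pick up the value after "curr_bytes wrote"). A sketch using the question's sample line:

```python
import re

line = ("(last_bytes wrote 66560002, cur_bytes read 33280206, "
        "curr_bytes wrote 66560002, blks read 103335128)")


def search(key, text):
    """Return the integer immediately following `key` in `text`, or None."""
    m = re.search(re.escape(key) + r"\s+(\d+)", text)
    return int(m.group(1)) if m else None


print(search("last_bytes wrote", line))  # 66560002
print(search("cur_bytes read", line))    # 33280206
```

`re.escape` keeps the key literal, so keys containing regex metacharacters are safe too.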
77,828,618
8,416,255
Should I use Multithreading or Async IO to speed up a network bound task?
<p>I have a python function that SSHs into a device, runs some commands and uses the data to make HTTP calls. I want to make <code>migrate_devices</code> run in parallel when processing the devices.</p> <p>Existing function, simplified:</p> <pre class="lang-py prettyprint-override"><code>def migrate_devices(devices: List[str]): for ip in devices: # each loop can be run in parallel without any race conditions data = collect_data_via_ssh(ip) # SSHing in takes some time new_data = process_data(data) # massages the data make_http_calls(data) # takes some time </code></pre> <p>The devices can be SSHed independently of each other. The HTTP calls for each device are independent, and don't depend on data from other devices.</p> <p>The biggest chunk of time is spent on waiting for the device to respond to SSH commands and waiting for the HTTP response. So essentially, I want to run each loop in parallel. Is multithreading the way to go or is async/await a better fit? I can use python 3.10+ if needed and use 3.12 if there's useful syntax additions</p> <pre><code>+----------------------------------------------------------+ | migrate_devices(devices: List[str]) | | +---------------------------+ | | | for device in devices: | | | +----+--------------------+-+ | | | | | | +------v-----------+ +----v--------------+ | | | ssh into device 1| | SSH into device 2 | | | | Make API calls | | Make API calls | | | +------------------+ +-------------------+ | +----------------------------------------------------------+ </code></pre> <p>I'm familiar with Javascript's async mechanism and I wanted to implement something similar:</p> <pre class="lang-py prettyprint-override"><code>async def process_single_device(ip): data = await collect_data_via_ssh(ip) new_data = process_data(data) # massages the data await make_http_calls(data) return success_or_failure_message async def migrate_devices(devices: List[str]): all_tasks = [] async with asyncio.TaskGroup() as tg: for device in devices: 
device_task = tg.create_task(process_single_device(ip=device)) all_tasks.append(device_task) # all tasks would complete at this point for device in all_tasks: print(f&quot;Device result: {device.result()}&quot;) </code></pre> <p>Should I continue with this approach or switch to threads?</p> <p>Is this what python is doing when I use async/await?</p> <pre><code> + | ---------+ | +--------+ |device 1| | |device 2| ---------+ | +--------+ | SSH into device 1+ | | while waiting +-for SSH,------&gt; ssh into device 2 process next device + | | | | | | | | make HTTP call+ for device 1&lt;--------------------+ while waiting for device 2 SSH, | | make HTTP calls for device 1 | | | +----------------------------&gt; make HTTP calls for device 2 | | + </code></pre>
<python><python-3.x><multithreading><async-await><python-asyncio>
2024-01-16 20:43:11
0
409
rsn
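On the async-vs-threads question above: both fit an I/O-bound workload, but the deciding factor is the libraries. `await collect_data_via_ssh(ip)` only interleaves work if the SSH and HTTP clients are genuinely async (e.g. asyncssh, aiohttp); awaiting a thin wrapper around blocking paramiko/requests calls still blocks the event loop and runs devices serially. If the existing code is blocking, a thread pool delivers the per-device parallelism in the question's diagram with almost no rewrite. A sketch with stand-in work (the function names follow the question; the sleep simulates network waits):

```python
import time
from concurrent.futures import ThreadPoolExecutor


def process_single_device(ip):
    """Stand-in for the blocking SSH + HTTP work per device."""
    time.sleep(0.05)  # simulate waiting on the network
    return f"{ip}: ok"


def migrate_devices(devices, max_workers=8):
    # Each device runs in its own worker thread; map preserves input order.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(process_single_device, devices))


results = migrate_devices(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print(results)
```

The asyncio `TaskGroup` version in the question is fine too — but only after swapping the SSH/HTTP layer for async libraries; otherwise the threads version is the smaller change.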
77,828,534
2,651,073
How to concat two dataframes with interleaved rows sorted by a column of the first dataframe
<p>Suppose I have two dataframes:</p> <pre><code>&gt;&gt; df1 X Y avg id f g 30 3 a b 20 1 d e 10 2 &gt;&gt; df2 X Y avg id D E 10 2 A B 0 1 F G 20 3 </code></pre> <p><code>df1</code> is sorted by the <code>avg</code> column. I want to concat them so that each row of the second dataframe (matched by <code>id</code>) is placed directly under the corresponding row of the first dataframe, without changing the first dataframe's order:</p> <pre><code> X Y avg id f g 30 3 F G 20 3 a b 20 1 A B 0 1 d e 10 2 D E 10 2 </code></pre> <p>I tried <code>pd.concat([df1, df2]).sort_index().reset_index()</code> but it doesn't produce my desired output.</p>
<python><pandas><dataframe>
2024-01-16 20:22:10
2
9,816
Ahmad
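For the interleaving question above, the core operation is: index the second table's rows by `id`, then walk the first table emitting each row followed by its partner. A dependency-free sketch on tuples (columns in `X, Y, avg, id` order, as in the question); in pandas the same idea would be roughly `pd.concat([df1.reset_index(drop=True), df2.set_index('id').loc[df1['id']].reset_index()]).sort_index(kind='stable')` (untested sketch, column order aside):

```python
rows1 = [("f", "g", 30, 3), ("a", "b", 20, 1), ("d", "e", 10, 2)]
rows2 = [("D", "E", 10, 2), ("A", "B", 0, 1), ("F", "G", 20, 3)]

# Index df2's rows by id so each df1 row can find its partner in O(1).
by_id = {row[-1]: row for row in rows2}

interleaved = []
for row in rows1:
    interleaved.append(row)          # df1 row, keeping df1's order
    interleaved.append(by_id[row[-1]])  # matching df2 row directly underneath
print(interleaved)
```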
77,828,507
2,142,728
How to bundle my monorepo poetry subproject into an executable?
<p>I have a monorepo with multiple poetry projects.</p> <p>They may depend on each other with the usual path dependencies of poetry (i.e <code>{path='../some-lib',develop=true}</code>).</p> <p>During development everything works ok (not like a charm, but ok).</p> <p>When bundling it all, I assume dependencies must be copied into the virtual env (not referenced), so that I can copy maybe some folder or something, into the docker image.</p> <p>I don't want to copy the whole monorepo into the docker image, at least not the final image (I can use staged build where first I copy it all, and then I copy only the executable). Yet... I don't know how!</p>
<python><docker><monorepo><python-poetry>
2024-01-16 20:17:44
0
3,774
caeus
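For the monorepo question above, one approach worth evaluating is the official `poetry-plugin-bundle`, which installs a subproject *and* its resolved dependencies (including path dependencies, copied rather than referenced, as I understand it) into a standalone virtualenv that can be carried across Docker stages. A multi-stage sketch — the subproject name, module name, and Python version are assumptions, and I have not verified the plugin's behavior with `develop=true` path deps:

```dockerfile
# --- build stage: the full monorepo is present, so path dependencies resolve ---
FROM python:3.11-slim AS build
RUN pip install poetry poetry-plugin-bundle
COPY . /monorepo
WORKDIR /monorepo/my-service          # the subproject to ship (name assumed)
RUN poetry bundle venv /venv          # project + deps copied into /venv

# --- final stage: only the bundled virtualenv is carried over ---
FROM python:3.11-slim
COPY --from=build /venv /venv
CMD ["/venv/bin/python", "-m", "my_service"]
```

The first stage still copies the whole monorepo, but only `/venv` reaches the final image, which matches the staged-build idea sketched in the question.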
77,828,490
12,318,454
Use latest version of SQLite for Python in ubuntu-latest
<p>I have created a GitHub Actions workflow. which is supposed to run a SQL query. It uses the <a href="https://www.sqlite.org/json1.html#jptr:%7E:text=4.9.-,The%20%2D%3E%20and%20%2D%3E%3E%20operators,-Beginning%20with%20SQLite" rel="nofollow noreferrer">arrow operators</a> for JSON. This requires SQLite &gt;= 3.38.0.</p> <p>This worked for system with <code>windows-latest</code> and <code>macos-latest</code>, for which I could say SQLite installed in Python 3.11 was &gt;= 3.38.</p> <p>But there's a problem I faced in <code>ubuntu-latest</code> which has 3.37. I wanted to make sure Python uses the latest version.</p> <p>I did try installing through <code>sudo apt -y install sqlite3 libsqlite3-dev</code> but it didn't replace the <code>sqlite3</code> version in Python and I got this message:</p> <pre><code>libsqlite3-dev is already the newest version (3.37.2-2ubuntu0.3). sqlite3 is already the newest version (3.37.2-2ubuntu0.3). </code></pre> <p>reference github workflow:</p> <pre><code>name: 'tests' on: push: branches: - 'testing' jobs: test: name: Test runs-on: ${{ matrix.os }} strategy: matrix: os: ['macos-latest', 'windows-latest', 'ubuntu-latest'] steps: - uses: actions/checkout@v3 - name: Install Latest sqlite3 version on non-windows platform if: ${{ matrix.os == 'ubuntu-latest' }} run: | bash ./.github/scripts/install-sqlite.sh - name: Set up Python 3.11 uses: actions/setup-python@v3 with: python-version: &quot;3.11&quot; - name: Install dependencies run: | python -m pip install --upgrade pip pip install poetry poetry install - name: Smoke Test run: pytest __test__/test_smoke </code></pre> <p>I have even installed it from the source code, the version changed when I tried <code>sqlite --version</code> but it didn't change in Python.</p> <p>How can I upgrade the <code>sqlite3</code> version in Python from 3.37 to 3.38 in <code>ubuntu-latest</code>?</p>
<python><sqlite><github-actions>
2024-01-16 20:11:26
1
553
Rahul A Ranger
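For the SQLite question above: on Linux, CPython's `sqlite3` module normally links against the system `libsqlite3.so` dynamically, so the version Python reports can be changed without rebuilding Python — build a newer SQLite and point `LD_LIBRARY_PATH` at it. A workflow-step sketch (the release URL pins 3.38.5; paths and version are assumptions to adjust):

```yaml
- name: Build newer SQLite and make Python load it
  if: ${{ matrix.os == 'ubuntu-latest' }}
  run: |
    wget https://www.sqlite.org/2022/sqlite-autoconf-3380500.tar.gz
    tar xzf sqlite-autoconf-3380500.tar.gz
    cd sqlite-autoconf-3380500
    ./configure --prefix=/usr/local
    make && sudo make install
    # Persist the override for all later steps in this job:
    echo "LD_LIBRARY_PATH=/usr/local/lib" >> "$GITHUB_ENV"

- name: Verify Python sees the new SQLite
  if: ${{ matrix.os == 'ubuntu-latest' }}
  run: python -c "import sqlite3; print(sqlite3.sqlite_version)"
```

This is likely why the from-source install in the question changed `sqlite3 --version` but not Python: the CLI binary was replaced, while Python kept resolving the old shared library until the loader path was updated.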
77,828,286
4,352,930
Matplotlib eventplot without space between events
<p>I want to create a matplotlib event plot of integers with two colours. A position in the eventplot can be white (no event) or have a colour (red or green, which means there is an event).</p> <p><a href="https://i.sstatic.net/H5Kjc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H5Kjc.png" alt="enter image description here" /></a></p> <p>For this I have the following code:</p> <pre><code>import matplotlib.pyplot as plt import numpy as np import matplotlib matplotlib.rcParams['font.size'] = 8.0 # create data data1 = np.array([[19, 2, 3, 4, 6, 34 , 23, 60, 49, 36], [ 52, 50, 33, 96, 56, 95, 0, 63, 90, 15]]) # alternative 1 #data1 = np.array([[19, 2, 3, 4, 6, 34 , 3, 200, 49, 36], [ 52, 50, 33, 96, 56, 95, 0, 63, 90, 15]]) # alternative 2 colors1 = [&quot;green&quot;, &quot;orange&quot;] lineoffsets1 = [1, 1] linelengths1 = [1, 1] linewidths = np.full(2, 3) #alternative 1 #linewidths = np.full(2, 4) #alternative 2 print(linewidths) fig, axs = plt.subplots() # create a horizontal plot axs.eventplot(data1, colors=colors1, lineoffsets=lineoffsets1, linelengths=linelengths1, linewidths=linewidths) plt.show() </code></pre> <p>At the same time, I want to avoid white spaces between events in case that there are events in adjacent integer positions (events at x values 2, 3, and 4 should not show any spacing between the numbers, but between events at 4 and 6 there should be a white spacing of the same width as the events.</p> <p>The integer range of the events may vary.</p> <p>I can control the width of the events using the parameter <em>linewidths</em>, but I am unaware where to get an overall width from.</p> <p><strong>EDIT:</strong> The effect seems to also depend on the actual scaling of the window, so if I increase the size of the window with the mouse, lines do not touch any longer. Maybe one needs to take a completely different approach</p>
<python><matplotlib>
2024-01-16 19:31:23
1
6,259
tfv
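On the eventplot question above: `linewidths` is specified in points (screen units), which is why the bars drift apart or merge as the window is resized — their width never tracks the data axis. If the goal is unit-wide bars in data coordinates, one workaround is to merge adjacent integer events into spans and draw each span with `axvspan` (or `broken_barh`), whose extents are data units and therefore rescale with zoom. The merging step, runnable on its own:

```python
def merge_events(xs):
    """Merge integer event positions into contiguous half-unit-padded spans.

    Events at 2, 3, 4 become one span (1.5, 4.5): adjacent events touch,
    while a one-integer gap leaves exactly one unit of white space.
    """
    runs = []
    for x in sorted(set(xs)):
        if runs and x == runs[-1][1] + 1:
            runs[-1][1] = x          # extend the current run
        else:
            runs.append([x, x])      # start a new run
    return [(a - 0.5, b + 0.5) for a, b in runs]


print(merge_events([19, 2, 3, 4, 6, 34, 23, 60, 49, 36]))
```

Drawing sketch (untested against the question's figure): `for a, b in merge_events(data1[0]): axs.axvspan(a, b, ymin=0.55, ymax=0.95, color="green")`, and similarly for the orange row with a lower `ymin`/`ymax` band.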
77,828,272
6,936,582
Column with lists of dicts to new columns
<p>I have a dataframe with a <code>column2</code> which for each row have a list of dicts.</p> <pre><code>import pandas as pd data = [{&quot;id&quot;:1, &quot;column1&quot;:123, &quot;column2&quot;:[{&quot;a&quot;:1}, {&quot;b&quot;:&quot;X&quot;}, {&quot;c&quot;:'2023-01-16'}]}] df = pd.DataFrame(data) # id column1 column2 # 1 123 [{'a': 1}, {'b': 'X'}, {'c': '2023-01-16'}] </code></pre> <p>I'm trying to create three new columns from the dicts to create:</p> <pre><code>#id column1 a b c # 1 123 1 X 2023-01-16 </code></pre> <p>I've tried <a href="https://stackoverflow.com/questions/38231591/split-explode-a-column-of-dictionaries-into-separate-columns-with-pandas">this</a>:</p> <pre><code>df = df.explode(column=&quot;column2&quot;) # column1 column2 # 0 123 {'a': 1} # 0 123 {'b': 'X'} # 0 123 {'c': '2023-01-16'} df[&quot;column2&quot;].apply(pd.Series) # 0 1 2 # 0 {'a': 1} {'b': 'X'} {'c': '2023-01-16'} </code></pre> <p>But I cant get it to work the way I want.</p> <p>How can I solve this?</p>
<python><pandas>
2024-01-16 19:27:31
1
2,220
Bera
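For the list-of-dicts question above, the reason `apply(pd.Series)` after `explode` misbehaves is that each exploded cell is still a whole dict, so the dicts land in separate rows instead of separate columns. Flattening each row's list into one dict first sidesteps the explode entirely. The pure-Python core, plus a hedged pandas one-liner: something like `df.join(df.pop('column2').apply(merge_dicts).apply(pd.Series))` should then expand the merged dicts into columns (untested sketch):

```python
def merge_dicts(dicts):
    """Flatten a list of single-key dicts into one dict (later keys win on collision)."""
    merged = {}
    for d in dicts:
        merged.update(d)
    return merged


row = [{"a": 1}, {"b": "X"}, {"c": "2023-01-16"}]
print(merge_dicts(row))  # {'a': 1, 'b': 'X', 'c': '2023-01-16'}
```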