QuestionId
int64
74.8M
79.8M
UserId
int64
56
29.4M
QuestionTitle
stringlengths
15
150
QuestionBody
stringlengths
40
40.3k
Tags
stringlengths
8
101
CreationDate
stringdate
2022-12-10 09:42:47
2025-11-01 19:08:18
AnswerCount
int64
0
44
UserExpertiseLevel
int64
301
888k
UserDisplayName
stringlengths
3
30
78,926,528
4,451,521
Why running fire shows a documentation?
<p>Let's reproduce this with a simple script</p> <pre><code>import pandas as pd import fire def nada(): data = { &quot;calories&quot;: [420, 380, 390], &quot;duration&quot;: [50, 40, 45] } #load data into a DataFrame object: df = pd.DataFrame(data) return df if __name__ == &quot;__main__&quot;: fire.Fire(nada) </code></pre> <p>If we run this with <code>python something.py</code> (and be aware that for this to work Jinja2 needs to be installed!) the script does not finish but instead shows a &quot;documentation file&quot; like</p> <pre><code>NAME something.py - Two-dimensional, size-mutable, potentially heterogeneous tabular data. SYNOPSIS something.py GROUP | COMMAND | VALUE DESCRIPTION Data structure also contains labeled axes (rows and columns). Arithmetic operations align on both row and column labels. Can be thought of as a dict-like container for Series objects. The primary pandas data structure. GROUPS GROUP is one of the following: T Two-dimensional, size-mutable, potentially heterogeneous tabular data. at Access a single value for a row/column label pair. attrs axes calories One-dimensional ndarray with axis labels (including time series). columns Immutable sequence used for indexing and alignment. dtypes One-dimensional ndarray with axis labels (including time series). duration One-dimensional ndarray with axis labels (including time series). flags </code></pre> <p>and we have to type q in order for this to finish</p> <p>The reason is the <code>return df</code> in the function.</p> <p>Why is this happening? And why documentation?</p>
<python><pandas><jinja2><python-fire>
2024-08-29 07:06:35
1
10,576
KansaiRobot
78,926,325
3,337,089
Replacing stable diffusion v2.1 text encoder with image encoder
<p>I'm trying to replace the text encoder of Stable Diffusion with a corresponding image encoder, so that I can feed images instead of text. The <a href="https://huggingface.co/stabilityai/stable-diffusion-2-1-base" rel="nofollow noreferrer">stable diffusion hugging face documentation</a> says that it uses pretrained text encoder from OpenCLIP <code>ViT/H</code> model. Since the text encoder and image encoder of CLIP share the same latent space, I can easily replace the text encoder with image encoder and the model should work fine without any further training.</p> <p>However, the text embeddings I am getting from Stable diffusion text encoder and the OpenCLIP ViT/H text encoder are different.</p> <p>I get the below from stable diffusion text encoder</p> <pre><code>prompt = 'dress, long sleeve' model_key = &quot;./models--stabilityai--stable-diffusion-2-1-base/&quot; pipe = StableDiffusionPipeline.from_pretrained(model_key, torch_dtype=self.precision_t) self.tokenizer = pipe.tokenizer self.text_encoder = pipe.text_encoder inputs = self.tokenizer(prompt, padding='max_length', max_length=self.tokenizer.model_max_length, return_tensors='pt') embeddings = self.text_encoder(inputs.input_ids.to(self.device))[0] </code></pre> <p>I get the below text embeddings using <a href="https://github.com/mlfoundations/open_clip" rel="nofollow noreferrer">OpenCLIP</a> text encoder</p> <pre><code>model, _, preprocess = open_clip.create_model_and_transforms('ViT-H-14', pretrained='laion2b_s32b_b79k') model.eval() tokenizer = open_clip.get_tokenizer('ViT-H-14') text = tokenizer([prompt]) text_features = model.encode_text(text) </code></pre> <p>A main difference is that the <code>embeddings</code> from stable diffusion text encoder is of size <code>(1, 77, 1024)</code>, whereas <code>text_features</code> from OpenCLIP text encoder is of size <code>(1, 1024)</code>.</p> <p>I have two questions?</p> <ol> <li>What text encoder from OpenCLIP should I use to get the same embeddings as Stable diffusion text encoder?</li> <li>What image encoder corresponds to the text encoder in stable diffusion? i.e. which image encoder shares the same latent space as the text encoder?</li> </ol>
<python><encoding><stable-diffusion><latent-diffusion>
2024-08-29 06:01:00
0
7,307
Nagabhushan S N
78,926,127
4,451,521
How can I use a project that has been installed with pip install -e .?
<p>I am using a project (originally from github) that in its readme says:</p> <blockquote> <p>To use:</p> <ol> <li>Clone the project</li> <li>Create a virtual environment</li> <li>Do <code>pip install --upgrade pip</code> and <code>pip install -e .</code></li> </ol> </blockquote> <p>I have tried the project, it is working well. I read briefly that this install the project in &quot;editable&quot; mode.</p> <p>Anyway, I am ready to create a project in a repo of my own, in which I would write python scripts. This scripts will use the original project above</p> <p>Something like</p> <pre><code>from pathlib import Path from originalP.eval.run_EXP import eval_multiple from originalP.mm_utils import get_model_name_from_path import pandas as pd </code></pre> <p>My question is how can I use <code>originalP</code> in my own repo?</p> <hr /> <p>What I did so far is</p> <ol> <li>Created my project root folder</li> <li>Created a virtual environment with <code>python -m venv MyPVenv</code></li> <li>Activated the virtual environment with <code>source MyPVenv/bin/activate</code></li> </ol> <p>Just to check, here I do <code>python -c &quot;import gradio&quot;</code> and of course it says it is not installed</p> <p>My doubt comes from here</p> <ol start="4"> <li>I did <code>pip install -e /path/to/originalP</code></li> </ol> <p>After I do this, I check <code>python -c &quot;import gradio&quot;</code> and yes, gradio is installed (because originalP uses gradio)</p> <p>Is this how it is done?</p> <p>I have high doubts because what if in my project I want to use one library - for example gradio- with a newer version?</p> <p>In the originalP for example the gradio version is 4.16.0 but I want to use something newer.</p> <p>Am I going to be tied to the versions the originalP uses?</p>
<python><pip><python-venv><python-packaging>
2024-08-29 04:42:44
2
10,576
KansaiRobot
78,926,086
2,981,639
Parsing numeric data with thousands seperator in Polars
<p>I have a tsv file that contains integers with thousand separators. I'm trying to read it using <code>polars==1.6.0</code>, the encoding is <code>utf-16</code></p> <pre class="lang-py prettyprint-override"><code>from io import BytesIO import polars as pl data = BytesIO( &quot;&quot;&quot; Id\tA\tB 1\t537\t2,288 2\t325\t1,047 3\t98\t194 &quot;&quot;&quot;.encode(&quot;utf-16&quot;) ) df = pl.read_csv(data, encoding=&quot;utf-16&quot;, separator=&quot;\t&quot;) print(df) </code></pre> <p>I cannot figure out how to get polars to treat column &quot;B&quot; as integer rather than string, and I also cannot find a clean way of casting it to an integer.</p> <pre><code>shape: (3, 3) ┌────────┬─────┬───────┐ │ Id ┆ A ┆ B │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ str │ ╞════════╪═════╪═══════╡ │ 1 ┆ 537 ┆ 2,288 │ │ 2 ┆ 325 ┆ 1,047 │ │ 3 ┆ 98 ┆ 194 │ └────────┴─────┴───────┘ </code></pre> <p>cast fails, as does passing the schema explicitly. I also tried using <code>str.strip_chars</code> and to remove the comma, my work-around is to use <code>str.replace_all</code> instead.</p> <pre class="lang-py prettyprint-override"><code>df = df.with_columns( pl.col(&quot;B&quot;).str.strip_chars(&quot;,&quot;).alias(&quot;B_strip_chars&quot;), pl.col(&quot;B&quot;).str.replace_all(&quot;[^0-9]&quot;, &quot;&quot;).alias(&quot;B_replace&quot;), ) print(df) </code></pre> <pre><code>shape: (3, 5) ┌────────┬─────┬───────┬───────────────┬───────────┐ │ Id ┆ A ┆ B ┆ B_strip_chars ┆ B_replace │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ str ┆ str ┆ str │ ╞════════╪═════╪═══════╪═══════════════╪═══════════╡ │ 1 ┆ 537 ┆ 2,288 ┆ 2,288 ┆ 2288 │ │ 2 ┆ 325 ┆ 1,047 ┆ 1,047 ┆ 1047 │ │ 3 ┆ 98 ┆ 194 ┆ 194 ┆ 194 │ └────────┴─────┴───────┴───────────────┴───────────┘ </code></pre> <p>Also for this to work in general I'd need to ensure that <code>read_csv</code> doesn't try and infer types for any columns so I can convert them all manually (any numeric column with a value &gt; 999 will contain a comma)</p>
<python><dataframe><csv><python-polars>
2024-08-29 04:18:51
2
2,963
David Waterworth
78,926,051
22,407,544
How to stop a celery task if user unloads?
<p>My website allows users to translate files. I want to add a failsafe in case a user decides to unload the webpage(whether by reloading, navigating away or closing the tab). My backend is django plus celery[redis]. Currently, after a user begins the translation task my frontend polls the backend every 5 seconds to see if the task is still running. Here is the corresponding JS for reference:</p> <pre><code>function pollTaskStatus(taskId) { currentTaskId = taskId; console.log(currentTaskId) pollInterval = setInterval(() =&gt; { const xhr = new XMLHttpRequest(); xhr.onload = function() { if (xhr.status == 200) { const response = JSON.parse(xhr.responseText); if (response.status === 'completed') { console.log('sent'); showTranslationComplete(response); clearInterval(pollInterval); // Stop polling once completed isTranslating = false; // Set to false when translation is complete } } else { showError('An error occurred.'); clearInterval(pollInterval); // Stop polling on error isTranslating = false; // Set to false on errors } }; xhr.onerror = function() { showError('Connection error. Please check your network connection and try again.'); clearInterval(pollInterval); // Stop polling on network error isTranslating = false; // Set to false on network error }; xhr.open('GET', `/translate/poll_task_status/${taskId}/`, true); xhr.send(); }, 5000); // Poll every 5 seconds } </code></pre> <p>I know it is unreliable to run functions during/after an unload event so I've avoided that. Any suggestions appreciated.</p>
<javascript><python><django><celery>
2024-08-29 03:57:21
1
359
tthheemmaannii
78,925,963
4,921,918
Unexpected value passed to LangChain Tool argument
<p>I'm trying to create a simple example tool that creates new user accounts in a hypothetical application when instructed to do so via a user prompt. The llm being used is llama3.1:8b via Ollama.</p> <p>So far what I've written works, but it's very unreliable. The reason why it's unreliable is because when LangChain calls on my tool, it provides unexpected/inconsistent values to the user creation tool's single username argument.</p> <p>Sometime the argument will be a proper username and other times it will be a username with the value <code>&quot;username=&quot;</code> prefixed to the username (eg: <code>&quot;username=jDoe&quot;</code> rather than simply <code>&quot;jdoe&quot;</code>).</p> <p>Also, if I ask for multiple users to be created, sometimes langchain will correctly invoke the tool multiple times while other times, it will invoke the tool once with a string in the format of an array (eg: <code>&quot;['jDoe','jSmith']&quot;</code>)</p> <p>My questions are:</p> <ol> <li>Is the issue I'm encountering due to the limitations of LangChain or the Llama3.1:8b model that I'm using? Or is the issue something else?</li> <li>Is there a way to get LangChain to more reliably call my user creation tool with a correctly formatted username?</li> <li>Are there are other useful tips/recommendations that you can provide for a beginner like me?</li> </ol> <p>Below is my code:</p> <pre><code>from dotenv import load_dotenv from langchain.agents import AgentExecutor, create_react_agent from langchain.tools import Tool from langchain_core.prompts import PromptTemplate from langchain_ollama.chat_models import ChatOllama load_dotenv() # Define the tool to create a user account mock_user_db = [&quot;jDoe&quot;, &quot;jRogers&quot;, &quot;jsmith&quot;] def create_user_tool(username: str): print(&quot;USERNAME PROVIDED FOR CREATION: &quot; + username) if username in mock_user_db: return f&quot;User {username} already exists.&quot; mock_user_db.append(username) return f&quot;User {username} created successfully.&quot; # Define the tool to delete a user account def delete_user_tool(username: str): print(&quot;USERNAME PROVIDED FOR DELETION: &quot; + username) if username not in mock_user_db: return f&quot;User {username} does not exist.&quot; mock_user_db.remove(username) return f&quot;User {username} deleted successfully.&quot; def list_users_tool(ignore) -&gt; list: return mock_user_db # Wrap these functions as LangChain Tools create_user = Tool( name=&quot;Create User&quot;, func=create_user_tool, description=&quot;Creates a new user account in the company HR system.&quot; ) delete_user = Tool( name=&quot;Delete User&quot;, func=delete_user_tool, description=&quot;Deletes an existing user account in company HR system.&quot; ) list_users = Tool( name=&quot;List Users&quot;, func=list_users_tool, description=&quot;Lists all user accounts in company HR system.&quot; ) # Initialize the language model llm = ChatOllama(model=&quot;llama3.1:latest&quot;, temperature=0) # Create the agent using the tools tools = [create_user, delete_user, list_users] # Get the prompt to use #prompt = hub.pull(&quot;hwchase17/react&quot;) # Does not work with ollama/llama3:8b prompt = hub.pull(&quot;hwchase17/react-chat&quot;) # Kinda works with ollama/llama3:8b agent = create_react_agent(llm, tools, prompt) # Create an agent executor by passing in the agent and tools agent_executor = AgentExecutor(agent=agent, tools=tools, handle_parsing_errors=True) print(agent_executor.invoke({&quot;input&quot;: &quot;Please introduce yourself.&quot;})['output']) while True: user_prompt = input(&quot;PROMPT: &quot;) agent_response = agent_executor.invoke({&quot;input&quot;: user_prompt}) print(agent_response['output']) </code></pre>
<python><langchain><large-language-model>
2024-08-29 02:57:57
1
376
wtfacoconut
78,925,731
4,867,193
Python Multiprocessing Queue, blocks with item remaining
<p>Running two threads with the Multiprocessing library in Python, putting items into a queue in the daughter thread and getting them from the parents, hangs on the last item.</p> <p>The question is why is that happening? And how do we get it to work?</p> <p>The complete code is here. It is for an old USB spectrometer</p> <p><a href="https://drive.google.com/file/d/18kRFCnqO1GfAdrbgPYOFXp6LUdjy014s/view?usp=drive_link" rel="nofollow noreferrer">https://drive.google.com/file/d/18kRFCnqO1GfAdrbgPYOFXp6LUdjy014s/view?usp=drive_link</a></p> <p><a href="https://drive.google.com/file/d/1Q0b0i_VLBBpKIapGReJ4wdVr1CiJHS1a/view?usp=drive_link" rel="nofollow noreferrer">https://drive.google.com/file/d/1Q0b0i_VLBBpKIapGReJ4wdVr1CiJHS1a/view?usp=drive_link</a></p> <p>Following is a rough excerpt to describe what is the problem.</p> <p>In the setup we have:</p> <pre><code> from multiprocessing import Process, Queue, Value class Gizmo: #blah blah blah, set up the hardware and create a multiprocessing Queue self.dataqueue = Queue() def startreader(self,nframes,nsets) # clear the dataqueue while not self.dataqueue.empty(): try: self.dataqueue.get(False) print( 'queue got entry' ) except Exception as e: print( 'queue.get', e ) break print( 'creating reader thread') self.readerthread = Process( target = self.Reader_, args=(nframes,nsets) ) if self.readerthread is None: print( 'creating reader thread failed') return False print( 'starting reader thread') self.readerthread.start() def Reader_(self,nrames,nsets): #blah blah blah, get a bunch of records and then for n,record in enumerate(records): self.dataqueue.put( record ) print( 'reader put record', n ) return def savedata(self) while not self.dataqueue.empty(): print( 'getting record') try: record = self.dataqueue.get_nowait() records.append(record ) print( 'got record') except Exception as e: print(e) # blah blah blah and write it all to a disk file(changed $ to # while editing) </code></pre> <p>When we run this, we see four records pushed on to the queue from the reader, in the second thread.</p> <pre><code>reader put record 0 reader put record 1 reader put record 2 reader put record 3 </code></pre> <p>And, then after seeing the reader exit, we call save(). We see 3 of the 4 records retrieved, and then it hangs on trying to get the fourth record.</p> <pre><code>getting records getting record got record getting record got record getting record got record getting record </code></pre> <p>Again, the questions are:</p> <p>Why does it hang?</p> <p>And how do we get it to work?</p>
<python><multiprocessing>
2024-08-29 00:33:14
1
2,587
DrM
78,925,696
7,583,953
When should I include the score benefit of a local decision when using minimax?
<p>In the <a href="https://leetcode.com/problems/stone-game/" rel="nofollow noreferrer">Stone Game</a> problem, Alice and Bob take turns picking a pile of stones from the start or the end. The goal is to maximize Alice's total</p> <pre><code>def play(turn, left, right): if left &gt; right: return 0 end = piles[right] + play(1 - turn, left, right - 1) start = piles[left] + play(1 - turn, left + 1, right) return max(start, end) if turn == 0 else min(start, end) alice = play(0, 0, n - 1) </code></pre> <p>This follows the classic minimax algorithm.</p> <p>Let's now take a look at <a href="https://leetcode.com/problems/stone-game-ii/" rel="nofollow noreferrer">Stone Game II</a>. In this problem, Alice and Bob can pick the next 1 &lt;= x &lt;= 2m piles of stones, where m is the maximum x somebody has used.</p> <p>To my surprise, classic minimax would return the same number of stones whether it is Alice or Bob's turn, giving us an incorrect final answer</p> <pre class="lang-py prettyprint-override"><code># DOESN'T WORK def play(left, m, turn): if left == n-1: return 0 total = 0 ans = inf if turn else -inf for pos in range(left+1, min(n, left+2*m+1)): total += piles[pos] value = total + play(pos, max(m, pos - left), 1 - turn) if turn == 0: ans = max(ans, value) else: ans = min(ans, value) return ans alice = play(-1, 1, 0) </code></pre> <p>However, if we only include total in Alice's calculation, it suddenly works:</p> <pre class="lang-py prettyprint-override"><code># WORKS def play(left, m, turn): if left == n-1: return 0 total = 0 ans = inf if turn else -inf for pos in range(left+1, min(n, left+2*m+1)): total += piles[pos] value = play(pos, max(m, pos - left), 1 - turn) if turn == 0: ans = max(ans, total + value) else: ans = min(ans, value) return ans alice = play(-1, 1, 0) </code></pre> <p>Could someone explain why we're not supposed to add the local total for the minimizer in the second example?</p> <p>Here's a discrepancy I noticed that may have something to do with the answer: value is always the same recursive call when we take min/max in the second problem, but in the first problem, end and start are different recursive calls.</p>
<python><algorithm><minimax>
2024-08-29 00:03:26
2
9,733
Alec
78,925,467
11,163,122
How to have decorated function in a Python doctest?
<p>How can I include a decorated function inside a Python doctest?</p> <pre class="lang-py prettyprint-override"><code>def decorator(func): def wrapper() -&gt; None: func() return wrapper def foo() -&gt; None: &quot;&quot;&quot; Stub. Examples: &gt;&gt;&gt; @decorator &gt;&gt;&gt; def stub(): ... &quot;&quot;&quot; if __name__ == &quot;__main__&quot;: import doctest doctest.testmod() </code></pre> <p>Running the above with Python 3.12 throws a <code>SyntaxError</code>:</p> <pre class="lang-none prettyprint-override"><code>UNEXPECTED EXCEPTION: SyntaxError('invalid syntax', ('&lt;doctest path.to.a[0]&gt;', 1, 0, '@decorator\n', 1, 0)) </code></pre>
<python><unit-testing><doctest>
2024-08-28 22:16:40
1
2,961
Intrastellar Explorer
78,925,465
10,161,315
Find rows where pandas dataframe column, which is a paragraph or list, contains any value in another list
<p>I have a Pandas DataFrame which contains information about various jobs. I am working on filtering based on values in some lists.</p> <p>I have no problem with single value conditional filtering. However, I am having difficulties doing conditional filtering on the <code>Job Description</code> field, which is essentially a paragraph and multiple lines, and the <code>Job Skills</code> field which is essentially a list after I split on the <code>\n\n</code>.</p> <p>EXAMPLE DATA:</p> <pre><code> dftest=pd.DataFrame({ 'Job Posting':['Data Scientist', 'Cloud Engineer', 'Systems Engineer', 'Data Engineer'], 'Time Type':['Full Time', 'Part Time', 'Full Time', 'Part Time'], 'Job Location': ['Colorado', 'Maryland', 'Florida', 'Virginia'], 'Job Description': [ 'asdfas fasdfsad sadfsdaf sdfsdaf', 'asdfasd fasdfasd fwertqqw rtwergd fverty', 'qwerq e5r45yb rtfgs dfaesgf reasdfs dafads', 'aweert scdfsdf asdfa sdfsds vwerewr'], 'Job Skills': [ 'Algorithms\n\nData Analysis\n\nData Mining\n\nData Modeling\n\nData Science\n\nExploratory Data Analysis (EDA)\n\nMachine Learning\n\nUnstructured Data', 'Application Development\n\nApplication Integrations\n\nArchitectural Modeling\n\nCloud Computing\n\nSoftware Product Design\n\nTechnical Troubleshooting', 'Configuration Management (CM)\n\nInformation Management\n\nIntegration Testing\n\nRequirements Analysis\n\nRisk Management\n\nVerification and Validation (V&amp;V)', 'Big Data Analytics\n\nBig Data Management\n\nDatabase Management\n\nData Mining\n\nData Movement\n\nETL Processing\n\nMetadata Repository'] }) </code></pre> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th></th> <th>Job Posting</th> <th>Time Type</th> <th>Job Location</th> <th>Job Description</th> <th>Job Skills</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>Data Scientist</td> <td>Full Time</td> <td>Maryland</td> <td>asdfas fasdfsad sadfsdaf sdfsdaf</td> <td>Algorithms\n\nData Analysis\n\nPython\n\n Data...</td> </tr> <tr> <td>1</td> <td>Cloud Engineer</td> <td>Part Time</td> <td>Maryland</td> <td>asdfasd fasdfasd fwertqqw rtwergd fverty</td> <td>Application Development\n\nApplication Integra...</td> </tr> <tr> <td>2</td> <td>Systems Engineer</td> <td>Full Time</td> <td>Virginia</td> <td>qwerq e5r45yb rtfgs dfaesgf reasdfs dafads</td> <td>Configuration Management (CM)\n\nInformation M...</td> </tr> <tr> <td>3</td> <td>Data Engineer</td> <td>Part Time</td> <td>Virginia</td> <td>aweert scdfsdf asdfa sdfsds vwerewr</td> <td>Big Data Analytics\n\nBig Data Management\n\nP...</td> </tr> </tbody> </table></div> <p>LISTS and SPLITTING OF 'Job Skills' data by '\n\n':</p> <pre><code> state = ['Virginia', 'Maryland', 'District of Columbia'] time = ['Full Time'] skills = ['AI', 'Artificial Intelligence', 'Deep Learning', 'Machine Learning', 'Feature Selection', 'Feature Selection', 'Python', 'Cloud Computing'] dftest['Job Skills'] = dftest['Job Skills'].str.split('\n\n') </code></pre> <p>Results:</p> <pre><code>[Algorithms, Data Analysis, Data Mining, Data Modeling, Data Science, Exploratory Data Analysis (EDA), Machine Learning, Unstructured Data] [Application Development, Application Integrations, Architectural Modeling, Cloud Computing, Software Product Design, Technical Troubleshooting] [Configuration Management (CM), Information Management, Integration Testing, Requirements Analysis, Risk Management, Verification and Validation (V&amp;V)] [Big Data Analytics, Big Data Management, Database Management, Data Mining, Data Movement, ETL Processing, Metadata Repository] </code></pre> <p>CONDITIONAL FILTERING:</p> <pre><code> dftest[dftest['Job Location'].isin(state) &amp; dftest['Time Type'].isin(time)] </code></pre> <p>Results:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th></th> <th>Job Posting</th> <th>Time Type</th> <th>Job Location</th> <th>Job Description</th> <th>Job Skills</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>Data Scientist</td> <td>Full Time</td> <td>Maryland</td> <td>asdfas fasdfsad sadfsdaf sdfsdaf</td> <td>[Algorithms, Data Analysis, Python, Data Mini...</td> </tr> <tr> <td>2</td> <td>Systems Engineer</td> <td>Full Time</td> <td>Virginia</td> <td>qwerq e5r45yb rtfgs dfaesgf reasdfs dafads</td> <td>Configuration Management (CM), Information Ma...</td> </tr> </tbody> </table></div> <p>ISSUE: Now I want to take the values in <code>dftest['Job Skills']</code> and find all the rows that match the <code>skills</code> list.</p> <p>I've tried, among others:</p> <ul> <li>Iterating through the values in the field and comparing to the skills list and doing it the other way around, but that doesn't work.</li> <li><code>dftest['Job Skills'].filter(like=skills, axis=0)</code>, but that gives another error.</li> </ul> <p>I think I am almost there with this, but I just want to have a single unique row if there is a match. For example, this shows rows 0 and 3 match, so I want those rows to print.</p> <pre><code> for i in skills: print('skill: ',i) print(dftest['Job Skills'].map(set([i]).issubset)) </code></pre>
<python><pandas><dataframe><filter>
2024-08-28 22:15:29
1
323
Jennifer Crosby
78,925,432
2,893,712
Pandas Unpack List of Dicts to Columns
<p>I have a dataframe that has a field called <code>fields</code> which is a list of dicts (all rows have the same format). Here is how the dataframe is structured:</p> <pre><code>formId fields 123 [{'number': 1, 'label': 'Last Name', 'value': 'Doe'}, {'number': 2, 'label': 'First Name', 'value': 'John'}] </code></pre> <p>I am trying to unpack the <code>fields</code> column so it looks like:</p> <pre><code>formId Last Name First Name 123 Doe John </code></pre> <p>The code I have currently is:</p> <pre><code>for i,r in df.iterrows(): for field in r['fields']: df.at[i, field['label']] = field['value'] </code></pre> <p>However this does not seem like the most efficient way. Is there a better way to accomplish this?</p>
<python><pandas><dataframe><series>
2024-08-28 22:04:24
3
8,806
Bijan
78,925,412
4,521,319
Finding the K-th largest element using heap
<p>I am trying to solve the leetcode problem: <a href="https://leetcode.com/problems/kth-largest-element-in-an-array/description/" rel="nofollow noreferrer">kth-largest-element-in-an-array</a></p> <p>I know a way to solve this is by using a heap. However, I wanted to implement my own <code>heapify</code> method for practice, and here is <strong>my code</strong>:</p> <pre><code>def findKthLargest(self, nums: List[int], k: int) -&gt; int: def heapify(nums: List[int], i: int): print(nums, i) largest = i left = (2 * i) + 1 right = (2 * i) + 2 if left &lt; len(nums) and nums[largest] &lt; nums[left]: largest = left if right &lt; len(nums) and nums[largest] &lt; nums[right]: largest = right if largest != i: nums[i], nums[largest] = nums[largest], nums[i] print(nums) heapify(nums, largest) print(nums) for i in range(len(nums)-1, -1, -1): heapify(nums, i) print(nums) return nums[k-1] </code></pre> <p>My code is basically following the implementation <strong>given in an editorial</strong>:</p> <pre><code> def max_heapify(heap_size, index): left, right = 2 * index + 1, 2 * index + 2 largest = index if left &lt; heap_size and lst[left] &gt; lst[largest]: largest = left if right &lt; heap_size and lst[right] &gt; lst[largest]: largest = right if largest != index: lst[index], lst[largest] = lst[largest], lst[index] max_heapify(heap_size, largest) # heapify original lst for i in range(len(lst) // 2 - 1, -1, -1): max_heapify(len(lst), i) </code></pre> <p>And this worked for 21/41 test cases and is failing Input:</p> <pre><code>nums = [3,2,3,1,2,4,5,5,6] k = 4 </code></pre> <p>My code is returning 3 instead of 4. Here is my output:</p> <pre><code>[3, 2, 3, 1, 2, 4, 5, 5, 6] [3, 2, 3, 1, 2, 4, 5, 5, 6] 8 [3, 2, 3, 1, 2, 4, 5, 5, 6] 7 [3, 2, 3, 1, 2, 4, 5, 5, 6] 6 [3, 2, 3, 1, 2, 4, 5, 5, 6] 5 [3, 2, 3, 1, 2, 4, 5, 5, 6] 4 [3, 2, 3, 1, 2, 4, 5, 5, 6] 3 [3, 2, 3, 6, 2, 4, 5, 5, 1] [3, 2, 3, 6, 2, 4, 5, 5, 1] 8 [3, 2, 3, 6, 2, 4, 5, 5, 1] 2 [3, 2, 5, 6, 2, 4, 3, 5, 1] [3, 2, 5, 6, 2, 4, 3, 5, 1] 6 [3, 2, 5, 6, 2, 4, 3, 5, 1] 1 [3, 6, 5, 2, 2, 4, 3, 5, 1] [3, 6, 5, 2, 2, 4, 3, 5, 1] 3 [3, 6, 5, 5, 2, 4, 3, 2, 1] [3, 6, 5, 5, 2, 4, 3, 2, 1] 7 [3, 6, 5, 5, 2, 4, 3, 2, 1] 0 [6, 3, 5, 5, 2, 4, 3, 2, 1] [6, 3, 5, 5, 2, 4, 3, 2, 1] 1 [6, 5, 5, 3, 2, 4, 3, 2, 1] [6, 5, 5, 3, 2, 4, 3, 2, 1] 3 [6, 5, 5, 3, 2, 4, 3, 2, 1] </code></pre> <p>I see that <code>4</code> in index <code>5</code> is never being sorted after the initial few iterations. Why is this happening? What am I missing? Any help would be appreciated.</p>
<python><data-structures><logic><heap>
2024-08-28 21:55:48
1
925
Hemanth Annavarapu
78,925,362
3,177,701
QueryObject.ComputeSignedDistanceToPoint with Expression type
<p>I'm trying to include a signed distance constraint/cost in my <code>MathematicalProgram</code> for obstacle avoidance of some predetermined obstacles. I understand from <a href="https://stackoverflow.com/questions/72020785/how-to-implement-collision-constraints-for-trajectory-optimization-in-pydrake">these</a> <a href="https://stackoverflow.com/questions/77761129/cost-functions-involving-signed-distances-in-drake">questions</a> that the correct way to do it is probably to wrap everything in a <code>MultibodyPlant</code>. Unfortunately, I lacked the foresight to do this, hence what I have right now is just a point in the form of an <code>Expression</code> that I wish to evaluate the signed distance to a set of <code>AnchoredGeometry</code> in the <code>SceneGraph</code>.</p> <p>My current attempt is as follows:</p> <pre><code>scene_graph = SceneGraph() source_id = scene_graph.RegisterSource() scene_graph.RegisterAnchoredGeometry(source_id, GeometryInstance(...)) query_object = scene_graph.get_query_output_port().Eval( scene_graph.CreateDefaultContext() ) ... point = ... # array([&lt;Expression &quot;x(0,0)&quot;&gt;, &lt;Expression &quot;x(0,1)&quot;&gt;, &lt;Expression &quot;x(0,2)&quot;&gt;], dtype=object) prog.AddConstraint( query_object.ComputeSignedDistanceToPoint(point) &gt;= desired_distance ) </code></pre> <p>Unfortunately, this gives me the error</p> <pre><code>TypeError: ComputeSignedDistanceToPoint(): incompatible function arguments. The following argument types are supported: 1. (self: pydrake.geometry.QueryObject, p_WQ: numpy.ndarray[numpy.float64[3, 1]], threshold: float = inf) -&gt; List[drake::geometry::SignedDistanceToPoint&lt;double&gt;] Invoked with: &lt;pydrake.geometry.QueryObject object at 0x71897253d030&gt;, array([&lt;Expression &quot;x(0,0)&quot;&gt;, &lt;Expression &quot;x(0,1)&quot;&gt;, &lt;Expression &quot;x(0,2)&quot;&gt;], dtype=object) </code></pre> <p>Is there any way to get this to work?</p>
<python><drake>
2024-08-28 21:34:29
2
5,604
Rufus
78,925,300
7,290,845
Define DLT pipeline that depends on event log
<p>I want to define a table &quot;DEMO&quot; using DLT pipeline. This table includes data from the event log. Here is a simplified and anonymized example of what I want to do. I really need this information, to track my pipelines.</p> <pre><code>def event_log_update_id(): return spark.sql(&quot;select origin.update_id from event_log(table(catalog.default.DEMO)) order by timestamp desc limit 1;&quot;).first().update_id @dlt.table(name=&quot;DEMO&quot;) def table(): return ( spark.readStream.format(&quot;cloudFiles&quot;) .option(&quot;cloudFiles.Format&quot;, &quot;csv&quot;) .load(&quot;abfss://...&quot;) .withColumn(&quot;update_id&quot;, lit(event_log_update_id())) ) </code></pre> <p>Sometimes it is getting an empty result when creating a table from scratch, doing a full load. I am thinking that since the table does not yet exist, I cannot access the event log by specifying the table name.</p> <p>How can I achieve my goal? I cannot help but think that I am doing something wrong.</p>
<python><databricks><azure-databricks><dlt>
2024-08-28 21:10:26
1
1,689
Zeruno
78,925,221
4,118,462
In docstring, how to give multiple parameters the same description (using EpyText tags)?
<p>For IDEs where hovering over a function call displays help (e.g. PyCharm), rather than create multiple lines with the same description for multiple parameters ... is there a way to simplify by tagging multiple parameters to use the same description?</p> <p>So instead of this (using EpyText tags) ...</p> <pre><code>def myfunc(a, b, c, d): &quot;&quot;&quot; @param a: integer along the &quot;a&quot; dimension. @param b: integer along the &quot;b&quot; dimension. @param c: integer along the &quot;c&quot; dimension. @param d: integer along the &quot;d&quot; dimension. &quot;&quot;&quot; pass </code></pre> <p>... would like something like this (which doesn't render) ...</p> <pre><code>def myfunc(a, b, c, d): &quot;&quot;&quot; @param a,b,c,d: integer along the corresponding dimension. &quot;&quot;&quot; pass </code></pre> <p>In the ideal solution not only would there be efficiency of code but also of display (in the rendered help balloon). That is, all parameters sharing a description would also be listed on the same line (of the balloon text) and have the shared description shown only once.</p> <p>Alternatives to EpyText can be considered as well.</p>
<python><pycharm><docstring><epytext>
2024-08-28 20:45:54
1
395
MCornejo
78,925,207
16,462,878
real and imaginary part of a complex number in polar form
<p>I am a bit confused about the proper way to deal with complex <em>number</em>s in polar form and the way to separate its real and imaginary part.</p> <p>Notice that I am expecting as real part the <strong>radius</strong>, as imaginary part the <strong>angle</strong>.</p> <p>The inbuilt <code>re</code> and <code>im</code> functions get always the real and imaginary part of the Cartesian representation of the complex number.</p> <p>Here an example</p> <pre><code>from sympy import I, pi, re, im, exp, sqrt # complex num in Cartesian form z = -4 + I*4 print(re(z), im(z)) # 4 -4 # complex num in polar form z = 4* sqrt(2) * exp(I * pi * 3/4) print(re(z), im(z)) # 4 -4 but expacting 4*sqrt(2), pi*3/4 </code></pre> <p>What is the most SymPytonic way to deal with such problem?</p>
<python><sympy><complex-numbers><polar-coordinates>
2024-08-28 20:39:56
2
5,264
cards
78,925,095
12,466,687
How to swap values between two columns based on conditions in Python
<p>I am trying to <strong>switch values</strong> between the <code>Range</code> and <code>Unit</code> columns in the dataframe below based on the <strong>condition that if Unit contains</strong> <code>-</code>, then replace <code>Unit</code> with <code>Range</code> and <code>Range</code> with <code>Unit</code>. To do that, I am creating a <code>unit_backup</code> column so that I don't lose the original <code>Unit</code> value.</p> <p><strong>1. dataframe</strong></p> <pre><code>sample_df = pd.DataFrame({'Range':['34-67',12,'gm','45-90'], 'Unit':['ml','35-50','10-100','mg']}) sample_df </code></pre> <pre><code> Range Unit 0 34-67 ml 1 12 35-50 2 gm 10-100 3 45-90 mg </code></pre> <p><strong>2. Function</strong> Below is the code I have tried but I am getting an error in this:</p> <pre><code>def range_unit_correction_fn(df): # creating backup of Unit column df['unit_backup'] = df['Unit'] # condition check if df['unit_backup'].str.contains(&quot;-&quot;): # if condition is True then replace `Unit` value with `Range` and `Range` with `unit_backup` df['Unit'] = df['Range'] df['Range'] = df['unit_backup'] else: # if condition False then keep the same value df['Range'] = df['Range'] # drop the backup column df = df.drop(['unit_backup'],axis=1) return df </code></pre> <ol start="3"> <li>Applying the above function on the dataframe</li> </ol> <pre><code>sample_df = sample_df.apply(range_unit_correction_fn, axis=1) sample_df </code></pre> <p><strong>Error:</strong></p> <pre><code> 1061 def apply_standard(self): 1062 if self.engine == &quot;python&quot;: -&gt; 1063 results, res_index = self.apply_series_generator() ... ----&gt; 4 if df['unit_backup'].str.contains(&quot;-&quot;): 5 df['Unit'] = df['Range'] 6 df['Range'] = df['unit_backup'] AttributeError: 'str' object has no attribute 'str' </code></pre> <p>It seems like some silly mistake, but I am not sure where I am going wrong.</p> <p>Appreciate any sort of help here.</p>
<python><pandas>
2024-08-28 20:08:18
2
2,357
ViSa
78,924,952
3,753,826
How to scale subplots including images with different aspect ratios?
<p>A minimal version of my problem is the following plot. How do I scale the right panel to have the same height as the left one?</p> <pre><code>import matplotlib.pyplot as plt import numpy as np rnd = np.random.default_rng(123) dx1, dy1 = 3, 2 dx2, dy2 = 3, 4 im1 = rnd.random((dy1, dx1)) im2 = rnd.random((dy2, dx2)) fig, (ax1, ax2) = plt.subplots(1, 2) ax1.imshow(im1) ax2.imshow(im2) </code></pre> <p><a href="https://i.sstatic.net/BHsZJUqzl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHsZJUqzl.png" alt="enter image description here" /></a></p> <p>I could position both plots by hand, but I need a robust procedure to generate multiple similar plots. I looked at <code>ImageGrid</code> and <code>GridSpec</code> but neither seems to support what I need. I would expect <code>subplots</code> to automatically do what I need, but this is not the case. Is this behaviour by design?</p>
<python><matplotlib><subplot>
2024-08-28 19:14:37
1
17,652
divenex
78,924,835
20,591,261
Use polars .when() instead joins
<p>I have 3 polars dataframes, one that contains 2 IDS, and the other ones contains an ID and a value. I would like to join the 3 dataframes if the ID of the main table exists on one of the other tables and bring a values from a desired column.</p> <p>My current aproach its just rename each Table ID and then do a <code>.join(how = 'left')</code>, however, i think renaming and duplicate tables its not the correct way to approach this problem. (Due the extra code, and the wasted ram)</p> <p>The first one contains 2 ID columns:</p> <pre><code>data = { &quot;ID1&quot; : [1,2,3], &quot;ID2&quot; : [1,4,5] } df = pl.DataFrame(data) </code></pre> <p>The second and third are dataframes than contains an ID and a value:</p> <pre><code>T1 = { &quot;ID&quot; : [9,2,5], &quot;Values&quot; : [&quot;A&quot;,&quot;B&quot;,&quot;c&quot;], &quot;Values II&quot; : [&quot;foo&quot;,&quot;boo&quot;,&quot;baz&quot;] } T1 = pl.DataFrame(T1) T2 = { &quot;ID&quot; : [1,4,10], &quot;Values&quot; : [&quot;X&quot;,&quot;J&quot;,&quot;c&quot;] } T2 = pl.DataFrame(T2) </code></pre> <p>I can check if the <code>ID</code> exists on the other tables like this</p> <pre><code>( df .with_columns( ID1_is_on_T1 = pl.col(&quot;ID1&quot;).is_in(T1.select(pl.col(&quot;ID&quot;))), ID2_is_on_T1 = pl.col(&quot;ID2&quot;).is_in(T1.select(pl.col(&quot;ID&quot;))), ID1_is_on_T2 = pl.col(&quot;ID1&quot;).is_in(T2.select(pl.col(&quot;ID&quot;))), ID2_is_on_T2 = pl.col(&quot;ID2&quot;).is_in(T2.select(pl.col(&quot;ID&quot;))), ) ) </code></pre> <p>And i'm looking to do somehting like this:</p> <pre><code>( df .with_columns( pl .when( pl.col(&quot;ID1&quot;).is_in(T1.select(pl.col(&quot;ID&quot;))) ) .then( T1.select(pl.col(&quot;Values&quot;)) ) .otherwise(0) ) ) </code></pre> <p><code>ValueError: can only call .item() if the dataframe is of shape (1, 1), or if explicit row/col values are provided; frame has shape (3, 1)</code></p> <p>Current <code>.join()</code> approach:</p> <pre><code>T1_1 = ( T1 .rename( {&quot;ID&quot;: &quot;ID1&quot;} ) ) T1_2 = ( T1 .rename( {&quot;ID&quot;: &quot;ID2&quot;} ) ) Join_1 = df.join(T1_1,on = &quot;ID1&quot;, how=&quot;left&quot;).rename({&quot;Values&quot; : &quot;ID1_Values&quot;, &quot;Values II&quot; : &quot;ID1_Values II&quot;}) Join_2 = Join_1.join(T1_2, on = &quot;ID2&quot;, how=&quot;left&quot;).rename({&quot;Values&quot; : &quot;ID2_Values&quot;, &quot;Values II&quot; : &quot;ID2_Values II&quot;}) </code></pre> <p>On this approach its only considering the first table, i would need to do the same for the T2 too.</p>
<python><dataframe><python-polars>
2024-08-28 18:40:30
2
1,195
Simon
78,924,831
233,928
How to extract text from a PDF file using python and PyMuPDF
<p>I am trying to write a converter for a pdf file, starting with just text. The goal is to extract:</p> <pre><code>the font the location of the start of each block of text each letter and the relative positions </code></pre> <p>additionally, if possible, I would like to get the current font, and to extract images. I don't yet need to do anything with the information, just to have the right framework to work from.</p> <p>I have written the following code which appears to traverse the text, but I am only jumping forward with a constant width, and I want to get the actual advance that the pdf specifies.</p> <pre><code>import sys import fitz # PyMuPDF def extract_text_with_deltas(pdf_path): doc = fitz.open(pdf_path) for page_num in range(len(doc)): page = doc[page_num] previous_x, previous_y = None, None print(f&quot;--- Start of Page {page_num + 1} ---&quot;) text_info = page.get_text(&quot;dict&quot;) # Get text as a dictionary with structure for block in text_info['blocks']: if 'lines' not in block: continue # Skip non-text blocks for line in block['lines']: for span in line['spans']: font_size = span['size'] # Font size used in the span span_text = span['text'] bbox = span['bbox'] x0, y0 = bbox[0], bbox[1] span_width = bbox[2] - bbox[0] num_chars = len(span_text) # Calculate approximate width of each character char_width = span_width / num_chars if num_chars &gt; 0 else 0 for i, char in enumerate(span_text): # Calculate position of each character char_x0 = x0 + i * char_width char_y0 = y0 # Calculate deltas delta_x = char_x0 - previous_x if previous_x is not None else 0 delta_y = char_y0 - previous_y if previous_y is not None else 0 print(f&quot;Character: '{char}', Delta X: {delta_x:.2f}, Delta Y: {delta_y:.2f}&quot;) # Update the previous coordinates previous_x, previous_y = char_x0, char_y0 print(f&quot;--- End of Page {page_num + 1} ---\n&quot;) if __name__ == &quot;__main__&quot;: if len(sys.argv) != 2: print(&quot;Usage: python extract_pdf_text.py &lt;path_to_pdf&gt;&quot;) else: pdf_path = sys.argv[1] extract_text_with_deltas(pdf_path) </code></pre>
<python><pdf>
2024-08-28 18:39:06
0
8,644
Dov
78,924,799
1,870,832
I can't get Selenium Chrome to work in Docker with Python
<p>I have a classic &quot;it works on my machine&quot; problem, a web scraper I ran successfully on my laptop, but with a persistent error whenever I tried and run it in a container.</p> <p>My minimal reproducible dockerized example consists of the following files:</p> <p>requirements.txt:</p> <pre><code>selenium==4.23.1 # 4.23.1 pandas==2.2.2 pandas-gbq==0.22.0 tqdm==4.66.2 </code></pre> <p>Dockerfile:</p> <pre><code>FROM selenium/standalone-chrome:latest # Set the working directory in the container WORKDIR /usr/src/app # Copy your application files COPY . . # Install Python and pip USER root RUN apt-get update &amp;&amp; apt-get install -y python3 python3-pip python3-venv # Create a virtual environment RUN python3 -m venv /usr/src/app/venv # Activate the virtual environment and install dependencies RUN . /usr/src/app/venv/bin/activate &amp;&amp; \ pip install --no-cache-dir -r requirements.txt # Switch back to the selenium user USER seluser # Set the entrypoint to activate the venv and run your script CMD [&quot;/bin/bash&quot;, &quot;-c&quot;, &quot;source /usr/src/app/venv/bin/activate &amp;&amp; python -m scrape_ev_files&quot;] </code></pre> <p>scrape_ev_files.py (slimmed down to just what's needed to repro error):</p> <pre><code>import os from selenium import webdriver from selenium.webdriver.chrome.service import Service def init_driver(local_download_path): os.makedirs(local_download_path, exist_ok=True) # Set Chrome Options chrome_options = Options() chrome_options.add_argument(&quot;--headless&quot;) chrome_options.add_argument(&quot;--no-sandbox&quot;) chrome_options.add_argument(&quot;--disable-dev-shm-usage&quot;) chrome_options.add_argument(&quot;--remote-debugging-port=9222&quot;) prefs = { &quot;download.default_directory&quot;: local_download_path, &quot;download.prompt_for_download&quot;: False, &quot;download.directory_upgrade&quot;: True, &quot;safebrowsing.enabled&quot;: True } chrome_options.add_experimental_option(&quot;prefs&quot;, prefs) # Set up the driver service = Service() chrome_options = Options() driver = webdriver.Chrome(service=service, options=chrome_options) # Set download behavior driver.execute_cdp_cmd(&quot;Page.setDownloadBehavior&quot;, { &quot;behavior&quot;: &quot;allow&quot;, &quot;downloadPath&quot;: local_download_path }) return driver if __name__ == &quot;__main__&quot;: # PARAMS ELECTION = '2024 MARCH 5TH DEMOCRATIC PRIMARY' ORIGIN_URL = &quot;https://earlyvoting.texas-election.com/Elections/getElectionDetails.do&quot; CSV_DL_DIR = &quot;downloaded_files&quot; # initialize the driver driver = init_driver(local_download_path=CSV_DL_DIR) </code></pre> <p>shell command to reproduce the error:</p> <pre><code>docker build -t my_scraper . # (no error) docker run --rm -t my_scraper # (error) </code></pre> <p>stacktrace from error is below. Any help would be much appreciated! I've tried many iterations of my requirements.txt and Dockerfile attempting to fix this, but this error at this spot has been frustratingly persistent:</p> <pre><code> File &quot;/workspace/scrape_ev_files.py&quot;, line 110, in &lt;module&gt; driver = init_driver(local_download_path=CSV_DL_DIR) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/workspace/scrape_ev_files.py&quot;, line 47, in init_driver driver = webdriver.Chrome(service=service, options=chrome_options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/workspace/.venv/lib/python3.12/site-packages/selenium/webdriver/chrome/webdriver.py&quot;, line 45, in __init__ super().__init__( File &quot;/workspace/.venv/lib/python3.12/site-packages/selenium/webdriver/chromium/webdriver.py&quot;, line 66, in __init__ super().__init__(command_executor=executor, options=options) File &quot;/workspace/.venv/lib/python3.12/site-packages/selenium/webdriver/remote/webdriver.py&quot;, line 212, in __init__ self.start_session(capabilities) File &quot;/workspace/.venv/lib/python3.12/site-packages/selenium/webdriver/remote/webdriver.py&quot;, line 299, in start_session response = self.execute(Command.NEW_SESSION, caps)[&quot;value&quot;] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/workspace/.venv/lib/python3.12/site-packages/selenium/webdriver/remote/webdriver.py&quot;, line 354, in execute self.error_handler.check_response(response) File &quot;/workspace/.venv/lib/python3.12/site-packages/selenium/webdriver/remote/errorhandler.py&quot;, line 229, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.SessionNotCreatedException: Message: session not created: Chrome failed to start: exited normally. (session not created: DevToolsActivePort file doesn't exist) (The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.) </code></pre>
<python><docker><google-chrome><selenium-webdriver><web-scraping>
2024-08-28 18:31:08
2
9,136
Max Power
78,924,693
6,843,153
Pydantic custom datatype doesn't take values when validating
<p>I have the following <strong>Pydantic v1.10</strong> custom datatype:</p> <pre><code>class SelectionList(ABC): class_selection_list = [] datatype = None @classmethod def __get_validators__(cls): yield cls.validate @classmethod def __modify_schema__(cls, field_schema): field_schema.update(selection_list=cls.class_selection_list, type=cls.datatype) @classmethod def validate(cls, v): if v not in cls.class_selection_list: raise Exception(f&quot;Value must be on of this list: {str(cls.class_selection_list)}&quot;) class StatusType(str, SelectionList): class_selection_list = [&quot;Active&quot;, &quot;Inactive&quot;] datatype = &quot;string&quot; </code></pre> <p>And the following model:</p> <pre><code>class MyModel(BaseModel): ID: int = Field(alias=&quot;Id&quot;, frozen=True) STATUS: StatusType = Field(alias=&quot;Status&quot;) </code></pre> <p>The <code>MyModel.schema()</code> works find but, when trying to validate a record using <code>modeled_record = MyModel(**record)</code>, the field value, which is a valid value, is not taken and the field <code>Status</code> of the model is <code>None</code>.</p> <p>What Am I missing?</p>
<python><pydantic>
2024-08-28 18:01:11
1
5,505
HuLu ViCa
78,924,539
9,500,769
shadowsocks AttributeError: module 'collections' has no attribute 'MutableMapping'
<p>I was trying to set up an VPN on Vultr via Shadowsocks, and trying to run Shadowsocks service manually. I tried below in my bash:</p> <pre><code>sudo ssserver -c /etc/shadowsocks.json </code></pre> <p>It has error: <strong>AttributeError: module 'collections' has no attribute 'MutableMapping'</strong></p> <p>I am using Python 3.10.12 version</p> <p>below is the ERROR:</p> <pre><code> root@vpn-test:~# sudo ssserver -c /etc/shadowsocks.json Traceback (most recent call last): File &quot;/usr/local/bin/ssserver&quot;, line 5, in &lt;module&gt; from shadowsocks.server import main File &quot;/usr/local/lib/python3.10/dist-packages/shadowsocks/server.py&quot;, line 27, in &lt;module&gt; from shadowsocks import shell, daemon, eventloop, tcprelay, udprelay, \ File &quot;/usr/local/lib/python3.10/dist-packages/shadowsocks/udprelay.py&quot;, line 71, in &lt;module&gt; from shadowsocks import encrypt, eventloop, lru_cache, common, shell File &quot;/usr/local/lib/python3.10/dist-packages/shadowsocks/lru_cache.py&quot;, line 34, in &lt;module&gt; class LRUCache(collections.MutableMapping): AttributeError: module 'collections' has no attribute 'MutableMapping' root@vpn-test:~# </code></pre>
<python><python-3.x><mutablemapping>
2024-08-28 17:18:53
1
2,492
Sky
78,924,379
880,874
Escaping double quotes in an INSERT statement for SQL Server
<p>Using Python 3 and the <code>re</code> module, how can I escape double quotes in an INSERT statement for SQL Server?</p> <p>I've tried a bunch of different ways but SQL Server always throws an error with the formatting.</p> <p>Here is an example:</p> <pre><code>INSERT INTO [events] VALUES ('&lt;span style=\&quot;font-size:14px;\&quot;&gt;&lt;span style=\&quot;font-family:\'Times New Roman\', Times, serif;\&quot;&gt; </code></pre> <p>My most recent try was this:</p> <pre><code>escaped_statement = re.sub(r'\&quot;', r'\&quot;\&quot;', escaped_statement) </code></pre> <p>But that returns this:</p> <pre><code>&lt;span style='\&quot;\&quot;font-size:14px; </code></pre>
<python><sql-server><regex>
2024-08-28 16:34:19
1
7,206
SkyeBoniwell
78,924,206
19,299,757
How to select a dropdown value in selenium python that has input html tags
<p>I am finding it bit difficult to select a value from a dropdown using selenium python. This dropdown has the below html.</p> <pre><code>&lt;input aria-invalid=&quot;true&quot; autocomplete=&quot;off&quot; id=&quot;asset-instrument-sub-type&quot; type=&quot;text&quot; class=&quot;MuiInputBase-input MuiInput-input MuiInputBase-inputAdornedEnd MuiAutocomplete-input MuiAutocomplete-inputFocused css-mnn31&quot; aria-autocomplete=&quot;list&quot; aria-expanded=&quot;false&quot; autocapitalize=&quot;none&quot; spellcheck=&quot;false&quot; role=&quot;combobox&quot; value=&quot;&quot; aria-describedby=&quot;asset-instrument-sub-type-helper-text&quot;&gt; </code></pre> <p>Once I click the dropdown there are 3 values in it (Equity, Fund, Others)</p> <pre><code>&lt;input aria-invalid=&quot;false&quot; autocomplete=&quot;off&quot; id=&quot;asset-instrument-sub-type&quot; type=&quot;text&quot; class=&quot;MuiInputBase-input MuiInput-input MuiInputBase-inputAdornedEnd MuiAutocomplete-input MuiAutocomplete-inputFocused css-mnn31&quot; aria-autocomplete=&quot;list&quot; aria-expanded=&quot;false&quot; autocapitalize=&quot;none&quot; spellcheck=&quot;false&quot; role=&quot;combobox&quot; value=&quot;Equity&quot;&gt; </code></pre> <p>In my python script, I have this Xpath defined. First to select the DD and then to select the value &quot;Equity&quot; from it.</p> <pre><code>instrument_type_dd_xpath = &quot;//input[@id='asset-instrument-sub-type' and @type='text']&quot; instrument_type_dd_for_equity = &quot;//*[@id='asset-instrument-sub-type'] and @value='Equity'&quot; element=WebDriverWait(self.driver,5).until(EC.presence_of_element_located(by_locator)) element.click() </code></pre> <p>But when the script runs, the dropdown is expanded, but its not selecting the &quot;Equity&quot; value from it. Since there is no &quot;select&quot; or &quot;li&quot; tags, I am not sure how to select values from these type of dropdowns.</p> <p>Any help is much appreciated.</p>
<python><selenium-webdriver>
2024-08-28 15:42:58
0
433
Ram
78,924,163
4,092,887
Get count of unique videos by channel in DataFrame
<p>I'm using Python for playing with some YouTube history data in Google Colab.</p> <p>My intention with this data is to get the <em><strong>amount of videos watched on every channel -counting only the unique videos</strong></em> - that is, a video might have been watched more than once.</p> <p>I've already debugged the YouTube history data and I've obtained <a href="https://raw.githubusercontent.com/MauroCSHPYP/SampleInputData/main/sample_data_28082024.json" rel="nofollow noreferrer">this JSON data</a> you can use for testing directly on a DataFrame.</p> <p>From this JSON data and with the following code I found, I was able to get the count of unique videos by each channel.</p> <pre><code># Get the count of unique URLs - Source: https://stackoverflow.com/a/69711933/4092887 df_count = df_unique_channels_and_videos.groupby(['Channel_Name', 'Channel_URL'])[&quot;URL&quot;].agg([&quot;unique&quot;, &quot;nunique&quot;]) # Show only certain columns - Source: https://stackoverflow.com/a/11287278/4092887 # This is due the previous code shows *all the URLS* in a single column called &quot;unique&quot;: df_count = df_count[['nunique']] display(df_count) </code></pre> <p>The current result with this code is as follows:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: left;">Channel_Name</th> <th style="text-align: left;">Channel_URL</th> <th>nunique</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">Channel_1</td> <td style="text-align: left;"><a href="https://www.youtube.com/channel/UC0HfE9z_EqkywKrplcS7xAA" rel="nofollow noreferrer">https://www.youtube.com/channel/UC0HfE9z_EqkywKrplcS7xAA</a></td> <td>13</td> </tr> <tr> <td style="text-align: left;">Channel_2</td> <td style="text-align: left;"><a href="https://www.youtube.com/channel/UC0H_FFs9EqkywKrplcS7xYg" rel="nofollow noreferrer">https://www.youtube.com/channel/UC0H_FFs9EqkywKrplcS7xYg</a></td> <td>5</td> </tr> <tr> <td style="text-align: left;">Channel_3</td> <td style="text-align: left;"><a href="https://www.youtube.com/channel/UCMNfE9z_EqkywKrplcS7xYg" rel="nofollow noreferrer">https://www.youtube.com/channel/UCMNfE9z_EqkywKrplcS7xYg</a></td> <td>8</td> </tr> </tbody> </table></div> <p>My desired result would be as follows:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: left;">Channel_Name</th> <th style="text-align: left;">Channel_URL</th> <th>No_of_unique_videos_viewed</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">Channel_1</td> <td style="text-align: left;"><a href="https://www.youtube.com/channel/UC0HfE9z_EqkywKrplcS7xAA" rel="nofollow noreferrer">https://www.youtube.com/channel/UC0HfE9z_EqkywKrplcS7xAA</a></td> <td>13</td> </tr> <tr> <td style="text-align: left;">Channel_2</td> <td style="text-align: left;"><a href="https://www.youtube.com/channel/UC0H_FFs9EqkywKrplcS7xYg" rel="nofollow noreferrer">https://www.youtube.com/channel/UC0H_FFs9EqkywKrplcS7xYg</a></td> <td>5</td> </tr> <tr> <td style="text-align: left;">Channel_3</td> <td style="text-align: left;"><a href="https://www.youtube.com/channel/UCMNfE9z_EqkywKrplcS7xYg" rel="nofollow noreferrer">https://www.youtube.com/channel/UCMNfE9z_EqkywKrplcS7xYg</a></td> <td>8</td> </tr> </tbody> </table></div> <p>and so on.</p> <p>However, I'm unable to change (<em>or set</em>) the column names and I'm not sure if thgis code is the right approach.</p>
<python><dataframe>
2024-08-28 15:32:35
1
2,675
Mauricio Arias Olave
78,923,753
5,269,892
Rapidfuzz critical error when using workers != 1
<p>When using the latest version 3.9.6 of <a href="https://rapidfuzz.github.io/RapidFuzz/index.html" rel="nofollow noreferrer">rapidfuzz</a>, I get a critical error when using <em>workers != 1</em>, i.e. I cannot use multi-processing to speed-up string comparisons:</p> <p><em>Process finished with exit code -1073741819 (0xC0000005)</em></p> <p>The error does not occur in rapidfuzz 3.5.2, however.</p> <p><strong>Code generating the error:</strong></p> <pre><code>import rapidfuzz from rapidfuzz import process print(rapidfuzz.__version__) process.cdist(['tree'], ['beer'], workers=1) process.cdist(['tree'], ['beer'], workers=2) </code></pre> <p><strong>What is causing the issue? Apart from downgrading, is there a fix?</strong></p> <p><strong>Update:</strong> Reported as <a href="https://github.com/rapidfuzz/RapidFuzz/issues/403" rel="nofollow noreferrer">issue #403</a> in the <em>rapidfuzz</em> GitHub.</p>
<python><fuzzy-comparison><rapidfuzz>
2024-08-28 14:04:38
0
1,314
silence_of_the_lambdas
78,923,588
12,775,531
Argo workflow retry a workflow using rest service
<p>I am trying to rerun an Argo Workflow through REST endpoint call. However it says it is Not implemented. Does anyone know how to get around this? And when would this be implemented? As per <a href="https://argo-workflows.readthedocs.io/en/latest/swagger/" rel="nofollow noreferrer">apidocs</a>.</p> <p>The below should be the endpoint.</p> <p>e.g.</p> <pre><code>curl https://localhost:2746/api/v1/workflows/argo/example-wf-template/retry --insecure {&quot;code&quot;:12,&quot;message&quot;:&quot;Not Implemented&quot;} </code></pre> <p>The Argo version I am using is: <code>v3.5.10</code></p> <p>Any insights would be helpful. I also do not want to use command line tools in the code.</p>
<python><argo-workflows><argo><data-engineering>
2024-08-28 13:25:01
1
2,872
s510
78,923,546
2,051,818
Accessing specific index out of N indices within multiple files with varying length of tensors
<p>Suppose that I have 20000 files, each has a tensor with a different length. the total length in all files is 3814139. I need to access the file that contains the specified index. So for example, if (<code>tensor length of file_0 is 320, tensor length of file_1 is 1036, tensor length of file_2 is 458, ......., tensor length of file_19000 is 241</code>), to access <code>index &gt;319 and &lt; 1356</code> it's located in <code>file_1</code>.</p> <p>What is the fastest way to access indices for huge length of indices within multiple files, in such fashion?</p>
<python><data-structures><pytorch><pytorch-dataloader>
2024-08-28 13:17:37
1
371
HATEM EL-AZAB
78,923,525
7,471,830
How do I Inject Dependencies in FastAPI's Lifespan Context / startup event?
<p>I'm developing a backend service using FastAPI, and I'm facing issues with dependency injection when trying to perform data processing during the application's startup.</p> <p>Here's a simplified version of my setup:</p> <p>I have a <code>Repository</code> class that depends on a MongoDB database instance, injected like this:</p> <pre class="lang-py prettyprint-override"><code>class Repository: def __init__(self, db: Database): self.collection = db['my_collection'] # other initialization def init_my_repository(db=Depends(get_mongo_database_instance)): return Repository(db) </code></pre> <p>In my application, I need to perform some initialization tasks using this repository (and other services) when the FastAPI app starts. My first thought was to do this in the <code>lifespan</code> context:</p> <pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, Depends from contextlib import asynccontextmanager @asynccontextmanager async def lifespan(app: FastAPI, repo=Depends(init_my_repository), xxx=Depends(init_my_XXX), yyy=Depends(init_my_YYY)): repo.do_something() xxx.do_something() yyy.do_something() yield </code></pre> <p>However, this approach doesn't work as expected because <code>Depends</code> can't be used directly within the <code>lifespan</code> context.</p> <p>This limitation forces me to manually create instances of these dependencies, which defeats the purpose of using FastAPI's automatic dependency injection....!!</p> <p>I found a discussion <a href="https://github.com/fastapi/fastapi/discussions/11742#discussioncomment-10453200" rel="noreferrer">here</a> that offers a workaround, but it doesn't work in my codebase.</p> <p>Given how common this scenario is in other IoC-based frameworks, I'm surprised there isn't a straightforward solution in FastAPI.</p> <p>Please help. Thanks!</p>
<python><dependency-injection><fastapi><inversion-of-control>
2024-08-28 13:11:41
1
831
OOD Waterball
78,923,520
865,220
Filter a pandas row if it is significantly larger than neighbours given that it starts with specific character
<p>I have a dataframe like this</p> <pre><code> Name Year Value 0 Mexico 1961 14357 1 Mexico 1961 15161 2 Mexico 1961 514658 3 Mexico 1962 15559 4 United States of America 1977 2191197 5 United States of America 1978 2470734 6 United States of America 1978 52470734 7 United States of America 1979 2737377 8 United States of America 1979 52731457 9 United States of America 1980 3029030 10 United States of America 1980 53024589 11 United States of America 1981 3272565 12 United States of America 2010 15199150 13 United States of America 2010 515543018 14 United States of America 2011 15861873 15 United States of America 2011 16250364 </code></pre> <p>I want to convert it to:</p> <pre><code> Name Year Value 0 Mexico 1961 14357 1 Mexico 1961 15161 2 Mexico 1961 14658 3 Mexico 1962 15559 4 United States of America 1977 2191197 5 United States of America 1978 2470734 6 United States of America 1978 2470734 7 United States of America 1979 2737377 8 United States of America 1979 2731457 9 United States of America 1980 3029030 10 United States of America 1980 3024589 11 United States of America 1981 3272565 12 United States of America 2010 15199150 13 United States of America 2010 15543018 14 United States of America 2011 15861873 15 United States of America 2011 16250364 </code></pre> <p>As you can see when the last column was significantly larger than its neighbours, it it replaced by a different number which is just removal of 5 from its front.</p> <p>for example for Mexico, in 3rd row <code>514658</code> is replaced by <code>14658</code>, firstly because 514658 is significantly(5x-10x) larger than its neighbours ie <code>15161</code> and <code>15559</code>. Similarly for USA,</p> <p><code>United States of America,1979,52731457</code></p> <p>is replaced by</p> <p><code>United States of America,1979,2731457</code></p> <p>Similarly</p> <ul> <li><p><code>United States of America,1978,52470734</code></p> </li> <li><p><code>United States of America,1980,53024589</code></p> </li> <li><p><code>United States of America,2010,515543018</code></p> </li> </ul> <p>are replaced by</p> <ul> <li><p><code>United States of America,1978,2470734</code></p> </li> <li><p><code>United States of America,1980,3024589</code> and</p> </li> <li><p><code>United States of America,2010,15543018</code> respectively.</p> </li> </ul> <p>But mind you,</p> <ul> <li>firstly the first column ie <code>Name</code> should exactly match,</li> <li>secondly the last column ie <code>Value</code> has to start with 5 and</li> <li>finally <code>Value</code> has to be significantly larger ie with mostly one digit more than neighbours to avoid risk of removing false positives.</li> </ul> <p>By now you might have understood this is a data cleaning exercise where some <code>$</code> symbols have been written as 5 and hence have to be fixed.</p>
<python><pandas><dataframe>
2024-08-28 13:11:11
1
18,382
ishandutta2007
78,923,480
307,138
How to use @doc in BlackSheep API
<p>Use <code>blacksheep create</code> with the following options to create an example API:</p> <pre><code>✨ Project name: soquestion 🚀 Project template: api 🤖 Use controllers? Yes 📜 Use OpenAPI Documentation? Yes 🔧 Library to read settings essentials-configuration 🔩 App settings format YAML </code></pre> <p>This will generate a simple API based on BlackSheep, with the endpoints defined in <code>app/controllers/examples.py</code>:</p> <pre><code>&quot;&quot;&quot; Example API implemented using a controller. &quot;&quot;&quot; from typing import List, Optional from blacksheep.server.controllers import Controller, get, post class ExamplesController(Controller): @classmethod def route(cls) -&gt; Optional[str]: return &quot;/api/examples&quot; @classmethod def class_name(cls) -&gt; str: return &quot;Examples&quot; @get() async def get_examples(self) -&gt; List[str]: &quot;&quot;&quot; Gets a list of examples. Lorem Ipsum Dolor Sit amet &quot;&quot;&quot; return list(f&quot;example {i}&quot; for i in range(3)) @post() async def add_example(self, example: str): &quot;&quot;&quot; Adds an example. &quot;&quot;&quot; </code></pre> <p>When you start the API (don't forget to create and activate a virtual environment before you do the <code>pip install ...</code>) with <code>python dev.py</code> and navigate to http://localhost:44777/docs you can see the OpenAPI documentation.</p> <p>According to the <a href="https://www.neoteroi.dev/blacksheep/openapi/#adding-description-and-summary" rel="nofollow noreferrer">documentation</a> you can use the docstring to specify the endpoint description.</p> <p>Is it somehow possible to also add documentation for the responses?</p> <p>According to the <a href="https://www.neoteroi.dev/blacksheep/openapi/#adding-description-and-summary" rel="nofollow noreferrer">documentation</a> you can use the <code>@docs</code> decorator, but that only works in a simple file where <code>@docs</code> is defined beforehand. In the generated API <code>@docs</code> is defined in <code>app/docs/__init.py__</code>, but I can't find a way to use this inside the <code>example.py</code>.</p> <p>The generated <code>app/docs/__init.py__</code> looks like this:</p> <pre><code>&quot;&quot;&quot; This module contains OpenAPI Documentation definition for the API. It exposes a docs object that can be used to decorate request handlers with additional information, used to generate OpenAPI documentation. &quot;&quot;&quot; from blacksheep import Application from blacksheep.server.openapi.v3 import OpenAPIHandler from openapidocs.v3 import Info from app.docs.binders import set_binders_docs from app.settings import Settings def configure_docs(app: Application, settings: Settings): docs = OpenAPIHandler( info=Info(title=settings.info.title, version=settings.info.version), anonymous_access=True, ) # include only endpoints whose path starts with &quot;/api/&quot; docs.include = lambda path, _: path.startswith(&quot;/api/&quot;) set_binders_docs(docs) docs.bind_app(app) </code></pre>
<python><openapi><blacksheep>
2024-08-28 13:01:07
1
20,452
Ocaso Protal
78,923,442
7,713,770
How can I resolve the issue where Django migrations are not applying changes to the database?
<p>I have a Django app and I'm using PostgreSQL for the database. I've added some new properties to my models, and after making these changes, I ran the following commands:</p> <pre><code> python manage.py migrate python manage.py makemigrations </code></pre> <p>A new migration file, 0001_initial.py, was generated with the new properties, such as &quot;hdd_list_remark&quot;. However, when I check the database, it seems the migrations are not applied.</p> <p>One of the models with the added properties is:</p> <pre><code> default=HabitatDirectiveChoices.EMPTY, verbose_name=&quot; Habitatrichtlijnen&quot;) description = models.TextField( max_length=5000, blank=True, null=True, verbose_name=&quot;Beschrijving&quot;) feeding = models.TextField( max_length=2000, blank=True, null=True, verbose_name=&quot;Voeding&quot;) housing = models.TextField( max_length=2000, blank=True, null=True, verbose_name=&quot;Huisvesting&quot;) care = models.TextField(max_length=1000, blank=True, null=True, verbose_name=&quot;Verzorging&quot;) literature = models.TextField( max_length=10000, blank=True, null=True, verbose_name=&quot;Literatuur&quot;) images = models.ImageField( upload_to=&quot;media/photos/animals&quot;, blank=False, null=False, verbose_name=&quot;Foto&quot;) category = models.ForeignKey( Category, related_name='animals', on_delete=models.CASCADE, verbose_name=&quot;Familie&quot;) date_create = models.DateTimeField( auto_now_add=True, verbose_name=&quot;Datum aangemaakt&quot;) date_update = models.DateTimeField( auto_now=True, verbose_name=&quot;Datum aangepast&quot;) animal_images = models.ManyToManyField( AnimalImage, related_name='related_animals', blank=True) files = models.ManyToManyField( AnimalFile, related_name='related_animals_files', blank=True) </code></pre> <p>The migration file: 0001_initial.py looks:</p> <pre><code># Generated by Django 5.0.6 on 2024-08-28 12:36 import django.db.models.deletion from django.db import migrations, models class Migration(migrations.Migration): initial = True dependencies = [] operations = [ migrations.CreateModel( name=&quot;Animal&quot;, fields=[ ( &quot;id&quot;, models.BigAutoField( auto_created=True, primary_key=True, serialize=False, verbose_name=&quot;ID&quot;, ), ), ( &quot;hdd_list_remark&quot;, models.TextField( blank=True, max_length=500, verbose_name=&quot;HDD lijst opmerking&quot; ), ), (&quot;name&quot;, models.CharField(max_length=100, verbose_name=&quot;Naam&quot;)), ( &quot;sort&quot;, models.CharField( default=&quot;&quot;, max_length=100, verbose_name=&quot;Soort(Latijn)&quot; ), ), (&quot;slug&quot;, models.SlugField(editable=False, max_length=100)), ( &quot;cites&quot;, models.CharField( choices=[ (&quot;&quot;, &quot;&quot;), (&quot;A&quot;, &quot;A&quot;), (&quot;B&quot;, &quot;B&quot;), (&quot;A/B&quot;, &quot;A/B&quot;), (&quot;C&quot;, &quot;C&quot;), (&quot;D&quot;, &quot;D&quot;), (&quot;nvt&quot;, &quot;nvt&quot;), ], default=&quot;&quot;, max_length=5, verbose_name=&quot;Cites&quot;, ), ), (&quot;uis&quot;, models.BooleanField(default=False, verbose_name=&quot;UIS&quot;)), ( &quot;pet_list&quot;, models.CharField( choices=[ (&quot;&quot;, &quot;&quot;), (&quot;ja&quot;, &quot;ja&quot;), (&quot;nee&quot;, &quot;nee&quot;), (&quot;nvt&quot;, &quot;nvt&quot;), ], default=&quot;&quot;, max_length=3, verbose_name=&quot; HDD lijst&quot;, ), ), ( &quot;bird_directive_list&quot;, models.CharField( blank=True, choices=[ (&quot;&quot;, &quot;&quot;), (&quot;TrekVogel&quot;, &quot;TrekVogel&quot;), (&quot;Bijlage|&quot;, &quot;Bijlage |&quot;), (&quot;Bijlage ||/1&quot;, &quot;Bijlage ||/2&quot;), (&quot;Bijlage ||/2&quot;, &quot;Bijlage ||/2&quot;), (&quot;Bijlage |||/1&quot;, &quot;Bijlage |||/1 &quot;), (&quot;Bijlage |||/3&quot;, &quot;Bijlage |||/3&quot;), (&quot;nvt&quot;, &quot;nvt&quot;), ], default=&quot;&quot;, max_length=13, verbose_name=&quot; Vogelrichtlijnen&quot;, ), ), ( &quot;Habitat_directive_list&quot;, models.CharField( blank=True, choices=[ (&quot;&quot;, &quot;&quot;), (&quot;Bijlage||&quot;, &quot;Bijlage ||&quot;), (&quot;Bijlage |V&quot;, &quot;Bijlage |V&quot;), (&quot;Bijlage V &quot;, &quot;Bijlage V&quot;), (&quot;nvt&quot;, &quot;nvt&quot;), ], default=&quot;&quot;, max_length=13, verbose_name=&quot; Habitatrichtlijnen&quot;, ), ), ( &quot;description&quot;, models.TextField( blank=True, max_length=5000, null=True, verbose_name=&quot;Beschrijving&quot;, ), ), ( &quot;feeding&quot;, models.TextField( blank=True, max_length=2000, null=True, verbose_name=&quot;Voeding&quot; ), ), ( &quot;housing&quot;, models.TextField( blank=True, max_length=2000, null=True, verbose_name=&quot;Huisvesting&quot;, ), ), ( &quot;care&quot;, models.TextField( blank=True, max_length=1000, null=True, verbose_name=&quot;Verzorging&quot;, ), ), ( &quot;literature&quot;, models.TextField( blank=True, max_length=10000, null=True, verbose_name=&quot;Literatuur&quot;, ), ), ( &quot;images&quot;, models.ImageField( upload_to=&quot;media/photos/animals&quot;, verbose_name=&quot;Foto&quot; ), ), ( &quot;date_create&quot;, models.DateTimeField( auto_now_add=True, verbose_name=&quot;Datum aangemaakt&quot; ), ), ( &quot;date_update&quot;, models.DateTimeField(auto_now=True, verbose_name=&quot;Datum aangepast&quot;), ), ], options={ &quot;verbose_name&quot;: &quot;Dier&quot;, &quot;verbose_name_plural&quot;: &quot;Dieren&quot;, &quot;managed&quot;: True, }, ), migrations.CreateModel( name=&quot;AnimalFile&quot;, fields=[ ( &quot;id&quot;, models.BigAutoField( auto_created=True, primary_key=True, serialize=False, verbose_name=&quot;ID&quot;, ), ), ( &quot;file&quot;, models.FileField( upload_to=&quot;media/files/animals&quot;, verbose_name=&quot;Bestand&quot; ), ), ( &quot;file_name&quot;, models.CharField( blank=True, max_length=255, verbose_name=&quot;Bestandsnaam&quot; ), ), ( &quot;animal&quot;, models.ForeignKey( on_delete=django.db.models.deletion.CASCADE, related_name=&quot;animal_files&quot;, to=&quot;DierenWelzijnAdmin.animal&quot;, ), ), ], options={ &quot;verbose_name&quot;: &quot;Dierbestand&quot;, &quot;verbose_name_plural&quot;: &quot;Dierbestanden&quot;, &quot;managed&quot;: True, }, ), migrations.AddField( model_name=&quot;animal&quot;, name=&quot;files&quot;, field=models.ManyToManyField( blank=True, related_name=&quot;related_animals_files&quot;, to=&quot;DierenWelzijnAdmin.animalfile&quot;, ), ), migrations.CreateModel( name=&quot;AnimalImage&quot;, fields=[ ( &quot;id&quot;, models.BigAutoField( auto_created=True, primary_key=True, serialize=False, verbose_name=&quot;ID&quot;, ), ), (&quot;image&quot;, models.ImageField(upload_to=&quot;media/photos/animals&quot;)), ( &quot;animal&quot;, models.ForeignKey( on_delete=django.db.models.deletion.CASCADE, related_name=&quot;imag&quot;, to=&quot;DierenWelzijnAdmin.animal&quot;, ), ), ], ), migrations.AddField( model_name=&quot;animal&quot;, name=&quot;animal_images&quot;, field=models.ManyToManyField( blank=True, related_name=&quot;related_animals&quot;, to=&quot;DierenWelzijnAdmin.animalimage&quot;, ), ), migrations.CreateModel( name=&quot;Category&quot;, fields=[ ( &quot;id&quot;, models.BigAutoField( auto_created=True, primary_key=True, serialize=False, verbose_name=&quot;ID&quot;, ), ), ( &quot;name&quot;, models.CharField(max_length=100, unique=True, verbose_name=&quot;Naam&quot;), ), ( &quot;description&quot;, models.TextField( blank=True, max_length=5000, null=True, verbose_name=&quot;Beschrijving&quot;, ), ), ( &quot;images&quot;, models.ImageField( upload_to=&quot;media/photos/categories&quot;, verbose_name=&quot;Foto&quot; ), ), ( &quot;date_create&quot;, models.DateTimeField( auto_now_add=True, verbose_name=&quot;Datum aangemaakt&quot; ), ), ( &quot;date_update&quot;, models.DateTimeField(auto_now=True, verbose_name=&quot;Datum geupdate&quot;), ), ( &quot;category&quot;, models.ForeignKey( blank=True, null=True, on_delete=django.db.models.deletion.CASCADE, related_name=&quot;subcategories&quot;, to=&quot;DierenWelzijnAdmin.category&quot;, verbose_name=&quot;Categorie&quot;, ), ), ], options={ &quot;verbose_name&quot;: &quot;Categorie&quot;, &quot;verbose_name_plural&quot;: &quot;Categoriën&quot;, &quot;managed&quot;: True, }, ), migrations.AddField( model_name=&quot;animal&quot;, name=&quot;category&quot;, field=models.ForeignKey( on_delete=django.db.models.deletion.CASCADE, related_name=&quot;animals&quot;, to=&quot;DierenWelzijnAdmin.category&quot;, verbose_name=&quot;Familie&quot;, ), ), ] </code></pre> <p>I've tried several troubleshooting steps, including:</p> <ul> <li>Creating a backup of the database.</li> <li>Deleting and recreating the database, then restoring it.</li> <li>Deleting all migration files except 0001_initial.py, and running the migrations again.</li> <li>Refreshing the database in PostgreSQL.</li> </ul> <p>Despite these efforts, the migrations still don't seem to apply.</p> <pre><code>this are all the messages C:\repos\DWL_backend_Rachid&gt; python manage.py makemigrations System check identified some issues: </code></pre> <p>Question: How can I ensure that the migrations are properly applied to the database?</p>
<python><django>
2024-08-28 12:52:12
1
3,991
mightycode Newton
78,923,338
312,140
How to fix the error when try to plot data with pandas?
<p>I have encountered an issue in python source code for <a href="https://github.com/facebookresearch/detr" rel="nofollow noreferrer">DETR model</a> from Facebook(Archived repository). The problem is with pandas when it tries to plot something. I did not see this error before, but I think the pre-installed Google Colab packages cause this issue. The error is with these lines of codes(on else part) located in util/plot_utils.py:</p> <pre class="lang-py prettyprint-override"><code>for df, color in zip(dfs, sns.color_palette(n_colors=len(logs))): for j, field in enumerate(fields): if field == 'mAP': coco_eval = pd.DataFrame( np.stack(df.test_coco_eval_bbox.dropna().values)[:, 1] ).ewm(com=ewm_col).mean() axs[j].plot(coco_eval, c=color) else: df.interpolate().ewm(com=ewm_col).mean().plot( y=[f'train_{field}', f'test_{field}'], ax=axs[j], color='blue', style=['-', '--'] ) </code></pre> <p>This code tries to plot some figures based on these raw file:</p> <pre><code>{&quot;train_lr&quot;: 0.00010000000000000003, &quot;train_class_error&quot;: 60.685014724731445, &quot;train_loss&quot;: 11.764228752681188, &quot;train_loss_ce&quot;: 0.719119736126491, &quot;train_loss_bbox&quot;: 0.4253895453044346, &quot;train_loss_giou&quot;: 0.7821782167468753, &quot;train_loss_ce_0&quot;: 0.7641034339155469, &quot;train_loss_bbox_0&quot;: 0.4464654347726277, &quot;train_loss_giou_0&quot;: 0.8194219555173602, &quot;train_loss_ce_1&quot;: 0.7384649557726723, &quot;train_loss_bbox_1&quot;: 0.43022043790136066, &quot;train_loss_giou_1&quot;: 0.8033964250768934, &quot;train_loss_ce_2&quot;: 0.740848937204906, &quot;train_loss_bbox_2&quot;: 0.42099533123629435, &quot;train_loss_giou_2&quot;: 0.7929297387599945, &quot;train_loss_ce_3&quot;: 0.7324359204087939, &quot;train_loss_bbox_3&quot;: 0.4274384592260633, &quot;train_loss_giou_3&quot;: 0.7886527244533811, &quot;train_loss_ce_4&quot;: 0.723468439919608, &quot;train_loss_bbox_4&quot;: 0.42732386929648264, &quot;train_loss_giou_4&quot;: 0.7813752229724612, &quot;train_loss_ce_unscaled&quot;: 0.719119736126491, &quot;train_class_error_unscaled&quot;: 60.685014724731445, &quot;train_loss_bbox_unscaled&quot;: 0.08507791001881872, &quot;train_loss_giou_unscaled&quot;: 0.39108910837343763, &quot;train_cardinality_error_unscaled&quot;: 22.946428571428573, &quot;train_loss_ce_0_unscaled&quot;: 0.7641034339155469, &quot;train_loss_bbox_0_unscaled&quot;: 0.08929308610303062, &quot;train_loss_giou_0_unscaled&quot;: 0.4097109777586801, &quot;train_cardinality_error_0_unscaled&quot;: 14.321428571428571, &quot;train_loss_ce_1_unscaled&quot;: 0.7384649557726723, &quot;train_loss_bbox_1_unscaled&quot;: 0.0860440872077431, &quot;train_loss_giou_1_unscaled&quot;: 0.4016982125384467, &quot;train_cardinality_error_1_unscaled&quot;: 16.142857142857142, &quot;train_loss_ce_2_unscaled&quot;: 0.740848937204906, &quot;train_loss_bbox_2_unscaled&quot;: 0.08419906494340726, &quot;train_loss_giou_2_unscaled&quot;: 0.39646486937999725, &quot;train_cardinality_error_2_unscaled&quot;: 20.535714285714285, &quot;train_loss_ce_3_unscaled&quot;: 0.7324359204087939, &quot;train_loss_bbox_3_unscaled&quot;: 0.0854876914194652, &quot;train_loss_giou_3_unscaled&quot;: 0.39432636222669054, &quot;train_cardinality_error_3_unscaled&quot;: 19.821428571428573, &quot;train_loss_ce_4_unscaled&quot;: 0.723468439919608, &quot;train_loss_bbox_4_unscaled&quot;: 0.08546477396573339, &quot;train_loss_giou_4_unscaled&quot;: 0.3906876114862306, &quot;train_cardinality_error_4_unscaled&quot;: 21.642857142857142, &quot;test_class_error&quot;: 25.03192901611328, &quot;test_loss&quot;: 7.842764377593994, &quot;test_loss_ce&quot;: 0.5921314656734467, &quot;test_loss_bbox&quot;: 0.17633574455976486, &quot;test_loss_giou&quot;: 0.4800571948289871, &quot;test_loss_ce_0&quot;: 0.7295672297477722, &quot;test_loss_bbox_0&quot;: 0.1843198761343956, &quot;test_loss_giou_0&quot;: 0.5027516782283783, &quot;test_loss_ce_1&quot;: 0.650816947221756, &quot;test_loss_bbox_1&quot;: 0.17886291444301605, &quot;test_loss_giou_1&quot;: 0.5067266523838043, &quot;test_loss_ce_2&quot;: 0.6202048659324646, &quot;test_loss_bbox_2&quot;: 0.179609976708889, &quot;test_loss_giou_2&quot;: 0.5068651139736176, &quot;test_loss_ce_3&quot;: 0.6036751568317413, &quot;test_loss_bbox_3&quot;: 0.1801770180463791, &quot;test_loss_giou_3&quot;: 0.49125969409942627, &quot;test_loss_ce_4&quot;: 0.595748633146286, &quot;test_loss_bbox_4&quot;: 0.177798293530941, &quot;test_loss_giou_4&quot;: 0.4858558773994446, &quot;test_loss_ce_unscaled&quot;: 0.5921314656734467, &quot;test_class_error_unscaled&quot;: 25.03192901611328, &quot;test_loss_bbox_unscaled&quot;: 0.03526714816689491, &quot;test_loss_giou_unscaled&quot;: 0.24002859741449356, &quot;test_cardinality_error_unscaled&quot;: 21.375, &quot;test_loss_ce_0_unscaled&quot;: 0.7295672297477722, &quot;test_loss_bbox_0_unscaled&quot;: 0.03686397522687912, &quot;test_loss_giou_0_unscaled&quot;: 0.25137583911418915, &quot;test_cardinality_error_0_unscaled&quot;: 8.125, &quot;test_loss_ce_1_unscaled&quot;: 0.650816947221756, &quot;test_loss_bbox_1_unscaled&quot;: 0.03577258251607418, &quot;test_loss_giou_1_unscaled&quot;: 0.25336332619190216, &quot;test_cardinality_error_1_unscaled&quot;: 20.25, &quot;test_loss_ce_2_unscaled&quot;: 0.6202048659324646, &quot;test_loss_bbox_2_unscaled&quot;: 0.03592199645936489, &quot;test_loss_giou_2_unscaled&quot;: 0.2534325569868088, &quot;test_cardinality_error_2_unscaled&quot;: 25.5, &quot;test_loss_ce_3_unscaled&quot;: 0.6036751568317413, &quot;test_loss_bbox_3_unscaled&quot;: 0.03603540360927582, &quot;test_loss_giou_3_unscaled&quot;: 0.24562984704971313, &quot;test_cardinality_error_3_unscaled&quot;: 22.125, &quot;test_loss_ce_4_unscaled&quot;: 0.595748633146286, &quot;test_loss_bbox_4_unscaled&quot;: 0.03555965796113014, &quot;test_loss_giou_4_unscaled&quot;: 0.2429279386997223, &quot;test_cardinality_error_4_unscaled&quot;: 22.125, &quot;test_coco_eval_bbox&quot;: [0.25752657647843996, 0.4695548915331907, 0.2376618407445379, -1.0, 0.23612653129171618, 0.3743419813637686, 0.04096385538578033, 0.2349397599697113, 0.5843373503535986, -1.0, 0.5877192996442318, 0.5769230775535107], &quot;epoch&quot;: 0, &quot;n_parameters&quot;: 60219142} </code></pre> <p>Error message:</p> <pre class="lang-py prettyprint-override"><code>/content/detr/util/plot_utils.py:66: FutureWarning: DataFrame.interpolate with object dtype is deprecated and will raise in a future version. Call obj.infer_objects(copy=False) before interpolating instead. df.interpolate().ewm(com=ewm_col).mean().plot( --------------------------------------------------------------------------- TypeError Traceback (most recent call last) TypeError: float() argument must be a string or a real number, not 'list' The above exception was the direct cause of the following exception: ValueError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/pandas/core/window/rolling.py in _prep_values(self, values) 369 else: --&gt; 370 values = ensure_float64(values) 371 except (ValueError, TypeError) as err: pandas/_libs/algos_common_helper.pxi in pandas._libs.algos.ensure_float64() ValueError: setting an array element with a sequence. The above exception was the direct cause of the following exception: TypeError Traceback (most recent call last) 6 frames /usr/local/lib/python3.10/dist-packages/pandas/core/window/rolling.py in _apply_blockwise(self, homogeneous_func, name, numeric_only) 486 try: --&gt; 487 arr = self._prep_values(arr) 488 except (TypeError, NotImplementedError) as err: /usr/local/lib/python3.10/dist-packages/pandas/core/window/rolling.py in _prep_values(self, values) 371 except (ValueError, TypeError) as err: --&gt; 372 raise TypeError(f&quot;cannot handle this type -&gt; {values.dtype}&quot;) from err 373 TypeError: cannot handle this type -&gt; object The above exception was the direct cause of the following exception: DataError Traceback (most recent call last) &lt;ipython-input-9-56bcedec1dc6&gt; in &lt;cell line: 6&gt;() 4 from pathlib import Path, PurePath 5 ----&gt; 6 plot_logs([ 7 Path(&quot;/content/drive/MyDrive/Colab Notebooks/Datasets/dataset/model&quot;) 8 ]) /content/detr/util/plot_utils.py in plot_logs(logs, fields, ewm_col, log_name) 64 else: 65 df = df.infer_objects() ---&gt; 66 df.interpolate().ewm(com=ewm_col).mean().plot( 67 y=[f'train_{field}', f'test_{field}'], 68 ax=axs[j], /usr/local/lib/python3.10/dist-packages/pandas/core/window/ewm.py in mean(self, numeric_only, engine, engine_kwargs) 553 normalize=True, 554 ) --&gt; 555 return self._apply(window_func, name=&quot;mean&quot;, numeric_only=numeric_only) 556 else: 557 raise ValueError(&quot;engine must be either 'numba' or 'cython'&quot;) /usr/local/lib/python3.10/dist-packages/pandas/core/window/rolling.py in _apply(self, func, name, numeric_only, numba_args, **kwargs) 615 616 if self.method == &quot;single&quot;: --&gt; 617 return self._apply_blockwise(homogeneous_func, name, numeric_only) 618 else: 619 return self._apply_tablewise(homogeneous_func, name, numeric_only) /usr/local/lib/python3.10/dist-packages/pandas/core/window/rolling.py in _apply_blockwise(self, homogeneous_func, name, numeric_only) 487 arr = self._prep_values(arr) 488 except (TypeError, NotImplementedError) as err: --&gt; 489 raise DataError( 490 f&quot;Cannot aggregate non-numeric type: {arr.dtype}&quot; 491 ) from err DataError: Cannot aggregate non-numeric type: object </code></pre> <p>How can I fix this problem?</p>
<python><pandas><matplotlib>
2024-08-28 12:29:18
1
3,031
Mahdi Amrollahi
78,923,301
1,035,897
How to get all alerts for an alert rule in azure using python client
<p>In my <strong>Python 3.11</strong> program I manage to fetch all <strong>alert_rules</strong> in <strong>Azure</strong> like so:</p> <pre><code>def alert_rules(self, tag:str = None): processed = list() try: rules = list() # List all Metric Alert Rules for rule in self.management_client.metric_alerts.list_by_subscription(): rules.append(rule) # List all Scheduled Query (Log) Alert Rules for rule in self.management_client.scheduled_query_rules.list_by_subscription(): rules.append(rule) if tag: alert_rules_with_tag = [] for rule in rules: if rule.tags and tag in rule.tags: alert_rules_with_tag.append(rule) rules = alert_rules_with_tag for rule in rules: processed.append(self.process_rule(rule)) except Exception as e: logger.exception(f&quot;Exception when listing alert rules: {e}&quot;) return None finally: return processed </code></pre> <p>And I process each <strong>alert_rule</strong> like so:</p> <pre><code>def process_rule(self, rule): processed = dict() try: alerts_for_rule = self.alerts_for_rule(rule_id = rule.id) logger.info(f&quot;alerts for rule {rule.id}: {devtools.pformat(alerts_for_rule)}&quot;) processed_alerts = [] for alert in alerts_for_rule: processed_alerts.append(self.process_alert(alert)) processed = { 'name': rule.name , 'id': rule.id , 'type': rule.type , 'alerts': processed_alerts , 'tags': rule.tags if hasattr(rule, 'tags') else {} } except Exception as e: logger.exception(f&quot;Exception in rule_to_dict: {e}&quot;) finally: return processed </code></pre> <p>This all works great, except when I try to fetch all the <strong>alerts</strong> for the rule, it fails:</p> <pre><code>def alerts_for_rule(self, rule_id): alerts = [] try: raw = self.management_client.alerts.get_all(alert_rule = rule_id) logger.warning(f&quot;*** Alerts for rule '{rule_id}' {devtools.pformat(raw)}&quot;) out = list() for alert in raw: alerts.append(self.process_alert(alert)) except SomeAzureException as ex: logger.error(f&quot;Azure exception occurred while fetching alerts for rule {rule_id}: {ex}&quot;) except Exception as e: logger.exception(f&quot;Exception occurred while fetching alerts for rule {rule_id}: {e}&quot;) finally: return alerts </code></pre> <p>Basically, I get an empty list of <strong>alerts</strong> for every <strong>rule</strong>, even rules I know to have alerts associated with them (looking in the portal).</p> <p>So my question is; <em>what is the value I should pass as <code>rule_id</code> to <code>management_client.alerts.get_all(alert_rule = rule_id)</code> to make this work?</em> I tried the intuitive choice <code>rule.id</code> which obviously does not work. The <a href="https://learn.microsoft.com/en-us/python/api/azure-mgmt-alertsmanagement/azure.mgmt.alertsmanagement.operations.alertsoperations?view=azure-python#azure-mgmt-alertsmanagement-operations-alertsoperations-get-all" rel="nofollow noreferrer">documentation</a> does not specify any details except that is it of type <code>str</code>.</p>
<python><azure><alert>
2024-08-28 12:20:26
1
9,788
Mr. Developerdude
78,923,273
4,614,404
Compare two boolean arrays considering a tolerance
<p>I have two boolean arrays, <code>first</code> and <code>second</code> that should be mostly equal (up to a <code>tolerance</code>). I would like to compare them in a way that is forgiving if a few elements are different.</p> <p>Something like <code>np.array_equal(first, second, equal_nan=True)</code> is too strict because all values must be the same and <code>np.allclose(first, second, atol=tolerance, equal_nan=True)</code> is not suitable for comparing booleans.</p> <p>The following case should succeed:</p> <pre><code>tolerance = 1e-5 seed = np.random.rand(100, 100, 100) first = seed &gt; 0.5 second = (seed &gt; 0.5) &amp; (seed &lt; 1. - 1e-6) # 99.9999% overlap in true elements </code></pre> <p>The following case should fail:</p> <pre><code>first = seed &gt; 0.5 second = (seed &gt; 0.5) &amp; (seed &lt; 1. - 1e-4) # 99.99% overlap in true elements </code></pre> <p>The following case should also fail:</p> <pre><code>first = seed &gt; 0.5 second = first[::-1] # first.sum() == second.sum(), but they are not similar </code></pre> <p>How can I handle this case in an elegant manner?</p>
<python><numpy><unit-testing>
2024-08-28 12:13:28
2
2,024
Victor Zuanazzi
78,923,158
1,866,605
Backtesting.py not 100% win rate when using data for AI model
<p>I have been working on a data model to train an AI, in order to ensure that the model is correct before training, etc, I run a backtest.py to make sure it will win all trades, as the data is already there to make it 100% win rate.</p> <p>So this is basically just a way to validate the data model before training. Strangely with Buys after some issues. it finally worked, with Sells I can't really find the issue.</p> <p>I have created a minimum example to see, if it could be a pandas issue or backtest.py or myself.</p> <pre><code># requirements.txt maybe not all of them are needed to run this minimum code but just in case pandas numpy ta pandas_ta plotly kaleido backtesting tqdm bokeh==2.4.3 </code></pre> <pre><code># demo1.csv with only 10 rows Date,Time,Open,High,Low,Close,Volume 20240819,18:00:00,2502.98,2504.34,2502.32,2503.05,613950 20240819,18:30:00,2503.01,2504.72,2502.97,2504.18,575800 20240819,19:00:00,2504.16,2505.14,2502.95,2504.61,798150 20240819,19:30:00,2504.59,2505.75,2503.61,2505.34,823930 20240819,20:00:00,2505.51,2505.54,2503.45,2504.16,419860 20240819,20:30:00,2504.09,2504.46,2503.28,2503.89,294830 20240819,22:00:00,2503.28,2504.14,2501.93,2502.11,234130 20240819,22:30:00,2502.11,2504.47,2502.11,2504.14,186780 20240819,23:00:00,2504.14,2504.16,2502.41,2503.09,209580 20240819,23:30:00,2503.09,2503.68,2502.39,2503.51,184700 </code></pre> <pre><code># dev.py python code import pandas as pd # import the data and minor modification to index by datetime def get_data(): df = pd.read_csv('demo1.csv') df['adj_close'] = df['Close'] df['Date'] = pd.to_datetime(df['Date'], format='%Y%m%d') df['Time'] = pd.to_timedelta(df['Time']) df['datetime'] = df['Date'] + df['Time'] df.index = df['datetime'] df = df.drop(columns=['Date', 'Time', 'datetime']) return df def prepare_data(df): # here I get the next row Low and High df[&quot;L1&quot;] = df[&quot;Low&quot;].shift(-1) df[&quot;H1&quot;] = df[&quot;High&quot;].shift(-1) df.dropna(inplace=True) # For simplicity I just add 2$ to the Close for the Stop loss and substract 2$ for the Take Profit. df['SellSL'] = df['Close'] + 2 df['SellTP'] = df['Close'] - 2 # This is where if the SL is higher than next bar High # and TP is smaller than next bar Low # it saves as 1, so a possible signal to sell df[&quot;SellPossibleFound&quot;] = ( (df['SellSL'] &gt; df[&quot;H1&quot;]) &amp; ( df['SellTP'] &lt; df[&quot;L1&quot;] ) ).astype(int) return df from backtesting import Backtest, Strategy class CheckData(Strategy): def init(self): pass def next(self): # if the flag is 1, then it will sell, using the SellSL and SellTP if self.data.SellPossibleFound &gt; 0: self.sell(size=1, sl=self.data.SellSL, tp=self.data.SellTP) df = get_data() df = prepare_data(df) bt = Backtest(df, CheckData) stats = bt.run() bt.plot() print(stats) </code></pre> <p>At this point I start to think that it maybe a bug with the library, because I can't find the issue.</p>
<python><pandas><back-testing>
2024-08-28 11:46:13
0
1,326
peterpeterson
78,922,304
1,444,073
Qt/PySide6: Add widget to QScrollArea during runtime, not showing up
<p>I have a window with a button and a <code>QScrollArea</code> widget. When the button is pressed, a <code>QLabel</code> is supposed to be added to the <code>QScrollArea</code> widget. However, it's not showing up.</p> <p>I know there are plenty of questions about similar issues. Most suggest using <code>setWidgetResizable(True)</code>, which indeed makes the widget show up, but also changes its size: (<a href="https://doc.qt.io/qt-6/qscrollarea.html#widgetResizable-prop" rel="nofollow noreferrer">docs</a>)</p> <blockquote> <p>If this property is set to true, the scroll area will automatically resize the widget in order to avoid scroll bars where they can be avoided, or to take advantage of extra space.</p> </blockquote> <p>Here is the MWE:</p> <pre class="lang-py prettyprint-override"><code>from PySide6.QtWidgets import * class MainWindow(QMainWindow): def __init__(self): QMainWindow.__init__(self) button = QPushButton('Add') button.clicked.connect(self.add) centralWidget = QWidget() centralWidget.setLayout(QVBoxLayout()) self.setCentralWidget(centralWidget) centralWidget.layout().addWidget(button) self.outputLayout = QVBoxLayout() outputWidget = QWidget() outputWidget.setLayout(self.outputLayout) self.scrollArea = QScrollArea() self.scrollArea.setWidget(outputWidget) centralWidget.layout().addWidget(self.scrollArea) def add(self): label = QLabel('test') self.outputLayout.addWidget(label) if __name__ == '__main__': app = QApplication() mainWindow = MainWindow() mainWindow.show() app.exec() </code></pre> <p>I tried adding <code>label.show()</code>, <code>self.layout.update()</code>, and <code>self.scrollArea.update()</code> to <code>MainWindow.add</code>, but to no avail. How can I make the <code>label</code> show up, without using <code>setWidgetResizable</code>?</p>
<python><qt><pyside>
2024-08-28 08:39:31
1
4,334
kostrykin
78,922,280
18,139,225
Python Particle simulator in Quan Nguyen's book: Python advanced programming. The code not working for me
<p>I have started reading &quot;Advanced Python Programming&quot;, 2nd ed of Quan Nguyen. The code to simulate particles' circular motion is not working for me. But according to the book, it should work: I wonder if I am missing something. I run it on Windows 11, Python 3.8.12 and VScode or JupyterLab. Nothing seems to happen (in vscode) and a warning in JupyterLab.</p> <p>Here is the code:</p> <pre><code># %load simul.py from matplotlib import pyplot as plt from matplotlib import animation from random import uniform import timeit import os os.system('cls' if os.name == 'nt' else 'clear') class Particle: __slots__ = (&quot;x&quot;, &quot;y&quot;, &quot;ang_speed&quot;) def __init__(self, x, y, ang_speed): self.x = x self.y = y self.ang_speed = ang_speed class ParticleSimulator: def __init__(self, particles): self.particles = particles def evolve(self, dt): timestep = 0.00001 nsteps = int(dt / timestep) for i in range(nsteps): for p in self.particles: norm = (p.x ** 2 + p.y ** 2) ** 0.5 v_x = (-p.y) / norm v_y = p.x / norm d_x = timestep * p.ang_speed * v_x d_y = timestep * p.ang_speed * v_y p.x += d_x p.y += d_y def visualize(simulator): X = [p.x for p in simulator.particles] Y = [p.y for p in simulator.particles] fig = plt.figure() ax = plt.subplot(111, aspect=&quot;equal&quot;) (line,) = ax.plot(X, Y, &quot;ro&quot;) # Axis limits plt.xlim(-1, 1) plt.ylim(-1, 1) # It will be run when the animation starts def init(): line.set_data([], []) return (line,) def animate(i): # We let the particle evolve for 0.1 time units simulator.evolve(0.01) X = [p.x for p in simulator.particles] Y = [p.y for p in simulator.particles] line.set_data(X, Y) return (line,) # Call the animate function each 10 ms anim = animation.FuncAnimation(fig, animate, init_func=init, blit=True, interval=10) plt.show() def test_visualize(): particles = [ Particle(0.3, 0.5, +1), Particle(0.0, -0.5, -1), Particle(-0.1, -0.4, +3), ] simulator = ParticleSimulator(particles) visualize(simulator) if __name__ == &quot;__main__&quot;: test_visualize() </code></pre> <p>The code can also be found <a href="https://github.com/PacktPublishing/Advanced-Python-Programming-Second-Edition/blob/main/Chapter01/simul.py" rel="nofollow noreferrer">here</a>.</p> <p>When run with JupyterLab, I get the following warning:</p> <pre><code>C:\Users\ephra\AppData\Local\Temp\ipykernel_18360\3786700959.py:71: UserWarning: frames=None which we can infer the length of, did not pass an explicit *save_count* and passed cache_frame_data=True. To avoid a possibly unbounded cache, frame data caching has been disabled. To suppress this warning either pass `cache_frame_data=False` or `save_count=MAX_FRAMES`. anim = animation.FuncAnimation(fig, animate, init_func=init, blit=True, interval=10) </code></pre> <p>Am I running it in a wrong way?</p>
<python><visual-studio-code><jupyter-lab>
2024-08-28 08:33:03
1
441
ezyman
78,922,047
17,729,094
Non-equi join in polars
<p>If you come from the future, hopefully <a href="https://github.com/pola-rs/polars/pull/18365" rel="noreferrer">this PR</a> has already been merged.</p> <p>If you don't come from the future, hopefully <a href="https://stackoverflow.com/a/78913145/17729094">this answer</a> solves your problem.</p> <p>I want to solve my problem only with polars (which I am no expert, but I can follow what is going on), before just copy-pasting the DuckDB integration suggested above and compare the results in my real data.</p> <p>I have a list of events (name and timestamp), and a list of time windows. I want to count how many of each event occur in each time window.</p> <p>I feel like I am close to getting something that works correctly, but I have been stuck for a couple of hours now:</p> <pre><code>import polars as pl events = { &quot;name&quot;: [&quot;a&quot;, &quot;b&quot;, &quot;a&quot;, &quot;b&quot;, &quot;a&quot;, &quot;c&quot;, &quot;b&quot;, &quot;a&quot;, &quot;b&quot;, &quot;a&quot;, &quot;b&quot;, &quot;a&quot;, &quot;b&quot;, &quot;a&quot;, &quot;b&quot;, &quot;a&quot;, &quot;b&quot;, &quot;a&quot;, &quot;b&quot;], &quot;time&quot;: [0.0, 1.0, 1.5, 2.0, 2.25, 2.26, 2.45, 2.5, 3.0, 3.4, 3.5, 3.6, 3.65, 3.7, 3.8, 4.0, 4.5, 5.0, 6.0], } windows = { &quot;start_time&quot;: [1.0, 2.0, 3.0, 4.0], &quot;stop_time&quot;: [3.5, 2.5, 3.7, 5.0], } events_df = pl.DataFrame(events).sort(&quot;time&quot;).with_row_index() windows_df = ( pl.DataFrame(windows) .sort(&quot;start_time&quot;) .join_asof(events_df, left_on=&quot;start_time&quot;, right_on=&quot;time&quot;, strategy=&quot;forward&quot;) .drop(&quot;name&quot;, &quot;time&quot;) .rename({&quot;index&quot;: &quot;first_index&quot;}) .sort(&quot;stop_time&quot;) .join_asof(events_df, left_on=&quot;stop_time&quot;, right_on=&quot;time&quot;, strategy=&quot;backward&quot;) .drop(&quot;name&quot;, &quot;time&quot;) .rename({&quot;index&quot;: &quot;last_index&quot;}) ) print(windows_df) &quot;&quot;&quot; shape: (4, 4) ┌────────────┬───────────┬─────────────┬────────────┐ │ start_time ┆ stop_time ┆ first_index ┆ last_index │ │ --- ┆ --- ┆ --- ┆ --- │ │ f64 ┆ f64 ┆ u32 ┆ u32 │ ╞════════════╪═══════════╪═════════════╪════════════╡ │ 2.0 ┆ 2.5 ┆ 3 ┆ 7 │ │ 1.0 ┆ 3.5 ┆ 1 ┆ 10 │ │ 3.0 ┆ 3.7 ┆ 8 ┆ 13 │ │ 4.0 ┆ 5.0 ┆ 15 ┆ 17 │ └────────────┴───────────┴─────────────┴────────────┘ &quot;&quot;&quot; </code></pre> <p>So far, for each time window, I can get the index of the first and last events that I care about. Now I &quot;just&quot; need to count how many of these are of each type. Can I get some help on how to do this?</p> <p>The output I am looking for should look like:</p> <pre><code>shape: (4, 5) ┌────────────┬───────────┬─────┬─────┬─────┐ │ start_time ┆ stop_time ┆ a ┆ b ┆ c │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ f64 ┆ f64 ┆ i64 ┆ i64 ┆ i64 │ ╞════════════╪═══════════╪═════╪═════╪═════╡ │ 1.0 ┆ 3.5 ┆ 4 ┆ 5 ┆ 1 │ │ 2.0 ┆ 2.5 ┆ 2 ┆ 2 ┆ 1 │ │ 3.0 ┆ 3.7 ┆ 3 ┆ 3 ┆ 0 │ │ 4.0 ┆ 5.0 ┆ 2 ┆ 1 ┆ 0 │ └────────────┴───────────┴─────┴─────┴─────┘ </code></pre> <p>I feel like using something like <a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.int_ranges.html#polars-int-ranges" rel="noreferrer"><code>int_ranges()</code></a>, <a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.gather.html#polars.Expr.gather" rel="noreferrer"><code>gather()</code></a>, and <a href="https://docs.pola.rs/api/python/stable/reference/dataframe/api/polars.DataFrame.explode.html#polars.DataFrame.explode" rel="noreferrer"><code>explode()</code></a> can get me a dataframe with each time window and all it's corresponding events. Finally, something like <a href="https://docs.pola.rs/api/python/stable/reference/dataframe/api/polars.DataFrame.group_by.html#polars.DataFrame.group_by" rel="noreferrer"><code>group_by()</code></a>, <a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.count.html#polars.count" rel="noreferrer"><code>count()</code></a>, and <a href="https://docs.pola.rs/api/python/stable/reference/dataframe/api/polars.DataFrame.pivot.html#polars.DataFrame.pivot" rel="noreferrer"><code>pivot()</code></a> can get me to the dataframe I want. But I have been struggling with this for a while.</p>
<python><dataframe><python-polars>
2024-08-28 07:29:08
4
954
DJDuque
78,921,222
2,825,403
Polars - unexpected behaviour when using drop_nans() on all columns
<p>I have a simple Polars dataframe with some nulls and some NaNs and I want to drop only the latter. I'm trying to use <code>drop_nans()</code> by applying it to all columns and for whatever reason it replaces NaNs with a literal 1.0.</p> <p>I am confusion. Maybe I'm using the method wrong, but the docs don't have much info and definitely don't describe this behaviour:</p> <pre><code>ex = pl.DataFrame( { 'a': [float('nan'), 1, float('nan')], 'b': [None, 'a', 'b'] } ) ex.with_columns(pl.all().drop_nans()) Out: a b 1.0 null 1.0 &quot;a&quot; 1.0 &quot;b&quot; </code></pre> <p>I'm using the latest Polars 1.5.</p> <p>What is the correct way of dropping NaNs across all the columns given that in Polars 1.5 dataframes don't seem to have <code>drop_nans()</code> method, only the Series do?</p> <p><strong>EDIT:</strong> I'm expecting the result should be:</p> <pre><code>a b 1.0 'a' </code></pre>
<python><nan><python-polars>
2024-08-28 01:19:18
1
4,474
NotAName
78,921,119
7,619,353
Python outputs Windows fatal exception: code 0x8001010
<p>I have an automation test framework that is using unittest python framework under the hood. It has been producing a segmentation fault. I isolated it to the usage of a DLL library.</p> <p>I ran my python program with faulthandler enabled to obtain more traceback information on the segmentation fault.</p> <p>My program was spitting out a bunch of error's but continues to run.</p> <p>Multiple errors popping up here</p> <pre><code> tb_e = traceback.TracebackException(excType, value, tb, limit=length, File &quot;C:\Python310\lib\traceback.py&quot;, line 502, in __init__ self.stack = StackSummary.extract( File &quot;C:\Python310\lib\traceback.py&quot;, line 379, in extract linecache.checkcache(filename) File &quot;C:\Python310\lib\linecache.py&quot;, line 72, in checkcache stat = os.stat(fullname) ValueError: stat: embedded null character in path </code></pre> <p>Finally, at the end of the run I got the output:</p> <pre><code>Windows fatal exception: code 0x80010108 Thread 0x0000b778 (most recent call first): File &quot;C:\Python310\lib\site-packages\comtypes\__init__.py&quot;, line 185 in _shutdown Windows fatal exception: code 0x80010108 Thread 0x0000b778 (most recent call first): File &quot;C:\Python310\lib\site-packages\comtypes\__init__.py&quot;, line 185 in _shutdown </code></pre> <p>Googling shows error code <code>0x80010108</code> interpreted as <strong>Windows Update Error</strong>? This error is quite confusing to me because I am not doing a window's update during this time. How would I continue to debug this issue?</p>
<python><python-3.x><comtypes><faulthandler>
2024-08-28 00:03:40
1
1,840
tyleax
78,920,958
10,108,726
Create smaller pdfs with pdfkit
<p>In my django project, i am creating pdfs in python using pdfkit library. However my pdfs with 15 pages have 16mb size.</p> <p>my lib</p> <pre><code>pdfkit==1.0.0 </code></pre> <p>How can I reduce its size, considering my code above:</p> <pre><code>import pdfkit from django.template.loader import render_to_string my_html = '' for content in blocktexts: my_html += render_to_string('mypath/myhtml.html') pdfkit.from_string(my_html, False) </code></pre> <p>I loop in a variable called blocktexts that i have html contents in it and each one of them should create a pdf page, the problem is that each single page has 1mb, with only text in it and a small image in the begining...</p> <p>How can I make this pdf smaller?</p>
<python><django><pdfkit><python-pdfkit>
2024-08-27 22:37:50
1
654
Germano
78,920,890
825,227
How to rank and reassign values to subset of Python dataframe
<p>Drawing a blank, please advise.</p> <p>Have a dataframe, <code>d</code>:</p> <pre><code> Position Operation Side Price Size 0 0 0 1 0.7288 -17 1 8 0 1 0.7297 -14 2 7 0 1 0.7296 -8 3 6 0 1 0.7295 -426 4 5 0 1 0.7294 -16 5 4 0 1 0.7293 -16 6 3 0 1 0.7292 -15 7 2 0 1 0.7291 -267 8 1 0 1 0.7290 -427 9 0 0 1 0.7289 -16 10 0 0 0 0.7299 6 11 1 0 0 0.7300 34 12 2 0 0 0.7301 7 13 3 0 0 0.7302 9 14 4 0 0 0.7303 16 15 5 0 0 0.7304 15 16 6 0 0 0.7305 429 17 7 0 0 0.7306 16 18 8 0 0 0.7307 265 19 9 0 0 0.7308 18 </code></pre> <p>That I'd like to filter based on column value, such as:</p> <pre><code>d.loc[(d.Side==1)] </code></pre> <pre><code> Position Operation Side Price Size 0 0 0 1 0.7288 -17 1 8 0 1 0.7297 -14 2 7 0 1 0.7296 -8 3 6 0 1 0.7295 -426 4 5 0 1 0.7294 -16 5 4 0 1 0.7293 -16 6 3 0 1 0.7292 -15 7 2 0 1 0.7291 -267 8 1 0 1 0.7290 -427 9 0 0 1 0.7289 -16 </code></pre> <p>I'd then like to <code>rank</code> based on <code>Price</code>, and assign the rank to a column in the subset dataframe, specifically <code>Position</code>.</p> <p>For instance, filtering based on <code>Side</code> = 1, and then ranking based on <code>Price</code>, I'd like my updated dataframe, <code>d</code>, to look like this.</p> <pre><code> Position Operation Side Price Size 0 0 0 1 0.7288 -17 1 9 0 1 0.7297 -14 2 8 0 1 0.7296 -8 3 7 0 1 0.7295 -426 4 6 0 1 0.7294 -16 5 5 0 1 0.7293 -16 6 4 0 1 0.7292 -15 7 3 0 1 0.7291 -267 8 2 0 1 0.7290 -427 9 1 0 1 0.7289 -16 10 0 0 0 0.7299 6 11 1 0 0 0.7300 34 12 2 0 0 0.7301 7 13 3 0 0 0.7302 9 14 4 0 0 0.7303 16 15 5 0 0 0.7304 15 16 6 0 0 0.7305 429 17 7 0 0 0.7306 16 18 8 0 0 0.7307 265 19 9 0 0 0.7308 18 </code></pre> <p>Where '0' is the lowest value and '9' is the largest--<code>df.rank</code> returns a one-indexed float rank, but it's easy enough to get from there to the int 0-9 values I need.</p>
<python><pandas><dataframe>
2024-08-27 22:04:08
0
1,702
Chris
78,920,840
22,407,544
Why does my XMLHttpRequest cancel my background task before reloading the page
<p>I'm trying to send a XMLHttpRequest to my backend if a user chooses to reload the webpage while a task is running on the backend. It is to function like this:</p> <ol> <li>The user starts the task(translation).</li> <li>If the user decides to reload the page or navigate away they should get an alert that the task will stop if they navigate away. If they still choose to navigate away the request is sent to a <code>stop_task</code> view on the backend.</li> </ol> <p>However, currently if the user starts to navigate away or reloads the task is terminated once the alert shows instead of after the user confirms that they still want to reload/navigate away.</p> <p>Here is my code JS:</p> <pre><code>wwindow.addEventListener('beforeunload', function (e) { if (isTranslating) { stopTranslation(); e.preventDefault(); e.returnValue = ''; return 'Translation in progress. Are you sure you want to leave?'; } }); function stopTranslaion() { if (isTranslating &amp;&amp; currentTaskId) { // Cancel the polling console.log(currentTaskId) clearInterval(pollInterval); // Send a request to the server to stop the task const xhr = new XMLHttpRequest(); xhr.onload = function() { if (xhr.status == 200) { const response = JSON.parse(xhr.responseText); if (response.status === 'stopped') { console.log('Translation stopped'); isTranslating = false; currentTaskId = null; // Update UI to reflect stopped state showTranslationStopped(); } } else { console.error('Failed to stop translation'); } }; xhr.onerror = function() { console.error('Connection error while trying to stop translation'); }; xhr.open('POST', `/translate/stop_task/${currentTaskId}/`, true); xhr.setRequestHeader('X-CSRFToken', getCsrfToken()); xhr.send(); } } function showTranslationStopped() { // Update UI to show that translation has been stopped translatingFileField.style.display = 'none'; const stoppedMessage = document.createElement('div'); stoppedMessage.textContent = 'Translation has been stopped.'; form.appendChild(stoppedMessage); } </code></pre> <p>views.py:</p> <pre><code>@csrf_protect def stop_task(request, task_id): if request.method == 'POST': try: # Revoke the Celery task app.control.revoke(task_id, terminate=True) return JsonResponse({'status': 'stopped'}) except Exception as e: return JsonResponse({'status': 'error', 'message': str(e)}, status=500) return JsonResponse({'status': 'error', 'message': 'Invalid request method'}, status=400) </code></pre> <p>I would prefer for it to function as detailed in steps 1 and 2. My tasks are handled using Celery with redis</p>
<javascript><python><django><redis><celery>
2024-08-27 21:41:37
0
359
tthheemmaannii
78,920,706
25,874,132
finding the number of possible k non-attacking rooks in an NxM chessboard with forbidden tiles?
<p>I have an NxM incomplete chessboard (meaning an NxM chessboard with some tiles missing) and a number k (which is the number of non-attacking rooks I need to place on the board)</p> <p>the inputs of this function are an edge list (which can be thought of as a matrix that starts at index 1 and the top left is the &quot;first&quot; tile) and the number k.</p> <p>I've created a function that plots the board to give a better visual understanding of the problem:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import numpy as np import math as m from itertools import permutations, combinations def plot_chessboard(edge_list): #finding the num of columns for edge in edge_list: if edge[1] != (edge[0] + 1): num_cols = edge[1] - edge[0] #this is the number of columns #finding the num of rows y_max = max(max(y for x, y in edge_list), max(x for x, _ in edge_list)) num_rows = int(m.ceil(y_max/num_cols)) #this is the number of rows # Create a grid of ones (white squares) grid = np.zeros((num_rows, num_cols)) # Create a set of all nodes in the edge list nodes = set() for edge in edge_list: nodes.add(edge[0]) nodes.add(edge[1]) #find the legal and forbidden positions universe = set(range(1, num_cols*num_rows + 1)) forbidden_nodes = universe - nodes print(f&quot;the nodes are {nodes}&quot;) print(f&quot;the missing nodes are {forbidden_nodes}&quot;) # Shade missing nodes black for i in range(1, num_rows * num_cols + 1): if i not in nodes: row = (i - 1) // num_cols col = (i - 1) % num_cols grid[row, col] = 1 # Set to 0 for black print(grid) # Create the plot fig, ax = plt.subplots(figsize=(10, 10)) ax.imshow(grid, cmap='binary') # Add grid lines ax.set_xticks(np.arange(-0.5, num_cols, 1), minor=True) ax.set_yticks(np.arange(-0.5, num_rows, 1), minor=True) ax.grid(which=&quot;minor&quot;, color=&quot;gray&quot;, linestyle='-', linewidth=2) # Remove axis ticks ax.set_xticks([]) ax.set_yticks([]) # Show the plot plt.show() # Example usage edge_list = [(1, 4), (3, 6), (4, 5), (5, 6)] B = [[1, 2], [1, 8], [2, 3], [3, 4], [3, 10], [4, 5], [4, 11], [5, 12], [10, 11], [10, 17], [11, 12], [11, 18], [12, 13], [12, 19], [13, 20], [16, 17], [17, 18], [17, 24], [18, 19], [18, 25], [19, 20], [19, 26], [20, 21], [20, 27], [22, 29], [24, 25], [24, 31], [25, 26], [25, 32], [26, 27], [26, 33], [27, 34], [29, 30], [29, 36], [30, 31], [30, 37], [31, 32], [31, 38], [32, 33], [32, 39], [33, 34], [33, 40], [34, 35], [34, 41], [35, 42], [36, 37], [37, 38], [38, 39], [39, 40], [40, 41], [41, 42]] k = 2 plot_chessboard(edge_list) </code></pre> <p>now for the main function that is supposed to take the edge list and k, and output the number of possible ways to arrange k rooks in that board; in this function so far I was able to extract the dimensions of the chessboard (rows and columns) and the positions of the forbidden positions which currently I store in a set of tuples where the tuples are formatted the following way (row, column) (I also made the index to start at 0 to align with a matrix that represents the board) but from after that point, where all is left for me to do is actually calculate the number of possible ways to arrange k rooks in that board and I don't know how to do so.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from itertools import permutations, combinations def k_rook_problem(edge_list, k): #finding the num of columns for edge in edge_list: if edge[1] != (edge[0] + 1): num_cols = edge[1] - edge[0] #this is the number of columns #finding the num of rows y_max = max(max(y for _, y in edge_list), max(x for x, _ in edge_list)) num_rows = (y_max + num_cols - 1) // num_cols # Calculate number of rows print(f'testing: num rows and num cols are: {num_rows}, {num_cols}') #set of all nodes in the edge list nodes = set() for edge in edge_list: nodes.add(edge[0]) nodes.add(edge[1]) #set of forbidden positions universe = set(range(1, num_cols * num_rows + 1)) forbidden_nodes = universe - nodes #set of forbidden positions in tuple matrix form {(row, column),...} forbidden_positions = {((node - 1) // num_cols, (node - 1) % num_cols) for node in forbidden_nodes} #testing print(f&quot;testing: the nodes are {nodes}&quot;) print(f&quot;testing: the forbidden nodes are {forbidden_nodes}&quot;) print(f&quot;testing: the forbidden position are {forbidden_positions}&quot;) ### from here i used the help of AI and haven't advanced much # Identify valid row and column segments valid_row_segments = {} valid_col_segments = {} for i in range(num_rows): row_positions = [j for j in range(num_cols) if (i, j) not in forbidden_positions] if row_positions: valid_row_segments[i] = row_positions for j in range(num_cols): col_positions = [i for i in range(num_rows) if (i, j) not in forbidden_positions] if col_positions: valid_col_segments[j] = col_positions print(f'testing: valid_rows are: {valid_row_segments}, and valid_cols are: {valid_col_segments}') print(f'testing: length of valid_rows is: {sum(len(value) for value in valid_row_segments.values())}, and valid_cols is: {sum(len(value) for value in valid_col_segments.values())}') #create a matrix representing the board where the ones represent valid tiles and zeros represent forbidden tiles matrix = np.ones((num_rows, num_cols)) #set the forbidden position as zeros and the rest are ones for i in range(1, num_rows * num_cols + 1): if i not in nodes: row = (i - 1) // num_cols col = (i - 1) % num_cols matrix[row, col] = 0 # Set to 0 for black #create a submatrix sub_matrix = matrix[np.ix_([0,1],[0,1])] print(sub_matrix) # Count the number of valid k-rook configurations and store them configurations = [] def place_rooks(remaining_k, rows_left, cols_left, current_config): if remaining_k == 0: configurations.append(current_config[:]) return # Start with an empty dictionary to track already checked positions for row in rows_left: for col in cols_left: if (row, col) in forbidden_positions: continue if all(row != r and col != c for r, c in current_config): # Create new sets excluding the current row and column new_rows_left = rows_left - {row} new_cols_left = cols_left - {col} place_rooks(remaining_k - 1, new_rows_left, new_cols_left, current_config + [(row, col)]) # Reset configurations each time the function runs configurations = [] place_rooks(k, set(range(num_rows)), set(range(num_cols)), []) return len(configurations) # Example usage edge_list = [(1, 4), (3, 6), (4, 5), (5, 6)] B = [[1, 2], [1, 8], [2, 3], [3, 4], [3, 10], [4, 5], [4, 11], [5, 12], [10, 11], [10, 17], [11, 12], [11, 18], [12, 13], [12, 19], [13, 20], [16, 17], [17, 18], [17, 24], [18, 19], [18, 25], [19, 20], [19, 26], [20, 21], [20, 27], [22, 29], [24, 25], [24, 31], [25, 26], [25, 32], [26, 27], [26, 33], [27, 34], [29, 30], [29, 36], [30, 31], [30, 37], [31, 32], [31, 38], [32, 33], [32, 39], [33, 34], [33, 40], [34, 35], [34, 41], [35, 42], [36, 37], [37, 38], [38, 39], [39, 40], [40, 41], [41, 42]] k = 2 print(f'The number of valid configurations is: {k_rook_problem(edge_list, k)}') </code></pre> <p>here I'm adding the pic of what these chessboard look like here's B: <a href="https://i.sstatic.net/82zuqIaT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82zuqIaT.png" alt="enter image description here" /></a></p> <p>and here's edge_list:</p> <p><a href="https://i.sstatic.net/F06JPbYV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F06JPbYV.png" alt="enter image description here" /></a></p> <p>so the TL;DR is that I don't know how to calculate (in Python and in general) the number of possible ways to arrange k rooks on an NxM board with forbidden tiles, and I'm asking for help</p>
<python><permutation><combinatorics><montecarlo>
2024-08-27 20:48:57
1
314
Nate3384
78,920,562
23,260,297
CMake error when building exe with pyinstaller in visual studio 2019
<p>I am building an exe with pysinstaller and everytime I run the command it builds, but visual studio is throwing some warnings/errors at me.</p> <p>my pyinstaller command:</p> <pre><code>pyinstaller --onedir --add-data &quot;config.json;.&quot; file.py </code></pre> <p>The warnings include:</p> <pre><code>CMake Warning (dev) in C:\Users\...\dist\...\CMakeLists.txt: 1&gt; [CMake] No project() command is present. The top-level CMakeLists.txt file must 1&gt; [CMake] contain a literal, direct call to the project() command. Add a line of 1&gt; [CMake] code such as 1&gt; [CMake] 1&gt; [CMake] project(ProjectName) 1&gt; [CMake] 1&gt; [CMake] near the top of the file, but after cmake_minimum_required(). 1&gt; [CMake] 1&gt; [CMake] CMake is pretending there is a &quot;project(Project)&quot; command on the first 1&gt; [CMake] line. </code></pre> <pre><code>CMake Warning in C:\Users\...\dist\...\CMakeLists.txt: 1&gt; [CMake] The object file directory 1&gt; [CMake] 1&gt; [CMake] C:/Users/.../dist/.../x64-Debug/CMakeFiles/CMakeTmp/CMakeFiles/cmTC_05bd2.dir/./ and C:/Users/.../dist/.../x64-Debug/CMakeFiles/CMakeTmp/CMakeFiles/cmTC_26db5.dir/./ 1&gt; [CMake] 1&gt; [CMake] has 231 characters. The maximum full path to an object file is 250 1&gt; [CMake] characters (see CMAKE_OBJECT_PATH_MAX). Object file 1&gt; [CMake] 1&gt; [CMake] CMakeCCompilerABI.c.obj 1&gt; [CMake] 1&gt; [CMake] cannot be safely placed under this directory. The build may not work 1&gt; [CMake] correctly. </code></pre> <pre><code>CMake Error at C:\Users\...\dist\...\CMakeLists.txt:18 (arrow_install_all_headers): 1&gt; [CMake] Unknown CMake command &quot;arrow_install_all_headers&quot;. 1&gt; [CMake] 1&gt; [CMake] 1&gt; [CMake] CMake Warning (dev) in C:\Users\...\dist\...\MakeLists.txt: 1&gt; [CMake] No cmake_minimum_required command is present. A line of code such as 1&gt; [CMake] 1&gt; [CMake] cmake_minimum_required(VERSION 3.20) 1&gt; [CMake] 1&gt; [CMake] should be added at the top of the file. The version specified may be lower 1&gt; [CMake] if you wish to support older CMake versions for this project. For more 1&gt; [CMake] information run &quot;cmake --help-policy CMP0000&quot;. </code></pre> <p>Now, when I run the exe it works perfectly fine. How can I get rid of these warnings?</p>
<python><cmake><visual-studio-2019><pyinstaller>
2024-08-27 20:02:49
0
2,185
iBeMeltin
78,920,522
1,074,942
Django-ninja Webhook Server - Signature Error/Bad Request
<p>I am working on a Django application where I have to develop a webhook server using Django-ninja. The webhook app receives a new order notification as described here: <a href="https://developer.wolt.com/docs/marketplace-integrations/restaurant-advanced#webhook-server" rel="nofollow noreferrer">https://developer.wolt.com/docs/marketplace-integrations/restaurant-advanced#webhook-server</a></p> <p>My code below:</p> <pre class="lang-py prettyprint-override"><code>@api.post(&quot;/v1/wolt-new-order&quot;) def wolt_new_order(request: HttpRequest): received_signature = request.headers.get('wolt-signature') if not received_signature: print(&quot;Missing signature&quot;) return HttpResponse('Missing signature', status=400) payload = request.body expected_signature = hmac.new( CLIENT_SECRET.encode(), payload, hashlib.sha256 ).hexdigest() print(f&quot;Received: {received_signature}&quot;) print(f&quot;Expected: {expected_signature}&quot;) if not hmac.compare_digest(received_signature, expected_signature): return HttpResponse('Invalid signature', status=400) print(payload) return HttpResponse('Webhook received', status=200) </code></pre> <p>For some reason this always returns 'error code 400, bad request syntax' and the two signatures are always different.</p> <p>I am importing the CLIENT_SECRET correctly and I have all the necessary libraries properly installed.</p> <p>Funny enough when I do the same on a test Flask app, I receive the webhook notification correctly without issues.</p> <p>Flask code below:</p> <pre class="lang-py prettyprint-override"><code>import hmac import hashlib from flask import Flask, request, abort app = Flask(__name__) @app.route('/api/v1/wolt-new-order', methods=['POST']) def webhook(): # Extract the wolt-signature header received_signature = request.headers.get('wolt-signature') print(received_signature) # Extract the request payload payload = request.get_data() print(payload) # Compute the expected signature expected_signature = hmac.new( CLIENT_SECRET.encode(), payload, hashlib.sha256 ).hexdigest() print(expected_signature) # Compare signatures if not hmac.compare_digest(received_signature, expected_signature): abort(400, 'Invalid signature') print(payload) return 'Webhook received', 200 if __name__ == '__main__': app.run(port=8000) </code></pre> <p>My webhook server is behind ngrok. Any ideas?</p> <p>What am I doing wrong here? Any suggestions?</p>
<python><django><django-ninja>
2024-08-27 19:50:25
0
618
vsapountzis
78,920,271
11,608,962
Is there a best practice for defining optional fields in Pydantic models?
<p>I'm working with Pydantic for data validation in a Python project and I'm encountering an issue with specifying optional fields in my <code>BaseModel</code>.</p> <pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel class MyModel(BaseModel): author_id: int | None # Case 1: throws error author_id: Optional[int] # Case 2: throws error author_id: int = None # Case 3: works </code></pre> <p>Now, while requesting an endpoint that accepts the above model as its JSON body, I am not providing the field <code>author_id</code> in the request.</p> <p>When I use <code>author_id: int | None</code>, I get an error saying that a required field is missing. However, if I change it to <code>author_id: Optional[int]</code>, I encounter the same error. But when I use <code>author_id: int = None</code> or <code>author_id: Optional[int] = None</code>, the model works as expected without errors. (Working if <code>=</code> is present)</p> <p>Do you have any recommendations on how to properly define optional fields in Pydantic models? Is there a specific version of Pydantic (or another library) that supports the int | None syntax correctly?</p> <ul> <li>python==3.11</li> <li>pydantic==2.8.1</li> <li>fastapi==0.111.1</li> </ul>
<python><fastapi><pydantic>
2024-08-27 18:17:26
2
1,427
Amit Pathak
78,920,262
1,337,007
Handling Custom Exceptions in step functions for glue
<p>I have a <code>glue</code> job in which I have created a <code>custom exception</code> named <code>RetryableException</code> and <code>NonRetryableException</code>. I want to retry only in case of <code>RetryableException</code> and <code>DLQ</code> for <code>NonRetryableException</code>. However according to the <a href="https://docs.aws.amazon.com/step-functions/latest/dg/concepts-error-handling.html" rel="nofollow noreferrer">documentation</a>, only pre defined errors like <code>States.* and Lambda.*</code> etc can be covered in <code>retry</code> block. Which means even the <code>NonRetryableException</code> will be retried if the <code>glue</code> job task fails. Is there any way to retry the glue job <strong>only</strong> when the <code>RetryableException</code> is thrown?</p>
<python><amazon-web-services><pyspark><aws-glue><aws-step-functions>
2024-08-27 18:15:51
0
2,258
ghostrider
78,920,220
13,689,939
Why is .* not matching full string in Python regex?
<p>Running this code:</p> <pre><code>my_string = 'START\r\nwords\t&quot; Quoted Words&quot;\r\nmore12345\t$$symbols\r\n END' pattern = '(.*)' matches = re.match(pattern, my_string) matches.groups(0) </code></pre> <p>I expect the output:</p> <pre><code>('START\r\nwords\t&quot; Quoted Words&quot;\r\nmore12345\t$$symbols\r\n END',) </code></pre> <p>but I get:</p> <pre><code>('START\r',) </code></pre> <p>How can I match the entire string?</p>
<python><regex>
2024-08-27 17:58:43
1
986
whoopscheckmate
78,920,216
10,203,572
Optimizing Point Cloud to Voxel Grid with Max Sampling in NumPy
<p>I have two arrays that represent the point coordinates and values respectively. To max sample from this point cloud, I am initializing a grid with the desired size, and looping over each point to assign the max values:</p> <pre><code>N = 1000000 coords = np.random.randint(0, 256, size=(N, 3)) vals = np.random.rand(N, 3) grid = np.zeros((3, 256, 256, 256), dtype=np.float16) for i, pt in enumerate(coords): x, y, z = pt grid[0, x, y, z] = max(grid[0, x, y, z], vals[i, 0]) grid[1, x, y, z] = max(grid[1, x, y, z], vals[i, 1]) grid[2, x, y, z] = max(grid[2, x, y, z], vals[i, 2]) </code></pre> <p>Is there a way I can do this through NumPy without the for loop (which is very slow)?</p>
<python><numpy><open3d>
2024-08-27 17:56:59
1
1,066
Layman
78,920,140
12,466,687
Unable to convert text into dataframe in python
<p>I am trying to convert a <code>text</code> into a <code>dataframe</code> using Python.</p> <p><strong>sample_text:</strong> <code>'This is \nsample text\n\nName|age\n--|--\n1.abc|45\n2.xyz|34'</code></p> <p><strong>Final Desired output:</strong></p> <p><a href="https://i.sstatic.net/zOfSy2Y5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zOfSy2Y5.png" alt="enter image description here" /></a></p> <p>Steps I am following to achieve the above output are listed below :</p> <ol> <li><strong>Break the text into multiple rows and assign it to a variable</strong>: I have tried using <code>print()</code> to process this text <code>formatted_text = print('This is \nsample text\n\nName|age\n--|--\n1.abc|45\n2.xyz|34')</code> but it cant be assigned as <code>print()</code> returns <code>NoneType</code>, so I get an error here.</li> </ol> <p>Desired output after this step:</p> <pre><code>This is sample text Name|age --|-- 1.abc|45 2.xyz|34 </code></pre> <ol start="2"> <li><strong>Use the above <code>line break text</code> stored in a <code>variable</code> to be read as a CSV with the separator <code>|</code> to create a dataframe</strong>: I have been thinking of processing this as <code>pd.read_csv(formatted_text,sep='|', skipinitialspace=True)</code></li> </ol> <p>Desired_output after this step:</p> <p><a href="https://i.sstatic.net/zOfSy2Y5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zOfSy2Y5.png" alt="enter image description here" /></a></p> <p>I tried earlier explaining <a href="https://stackoverflow.com/questions/78919991/how-to-get-newline-instead-of-n-in-text-without-using-print-in-python-so-that?noredirect=1#comment139145441_78919991">this</a> problem in SO post but I guess I wasn't able to explain it well and it got closed. I Hope I am able to explain my issue this time. It could be a silly task but I have been stuck at this for a long time now and would appreciate any help.</p>
<python><pandas><markdown><pdf-parsing>
2024-08-27 17:38:17
3
2,357
ViSa
78,919,946
11,942,492
How to allow _inplacevar_ operations in RestrictedPython?
<p>Expressions such as:</p> <pre><code>total_impact += impact </code></pre> <p>in my restricted env are not allowed by default and they cause the error:</p> <blockquote> <p>NameError: name '_inplacevar_' is not defined</p> </blockquote> <p>but I would like to allow such expressions. Is there anything that I can add to the restricted globals to allow it? Anything in RestrictedPython.Guards or RestrictedPython.Eval? I can't find it so far</p>
<python><in-place><restrictedpython>
2024-08-27 16:38:33
1
1,289
Menas
78,919,927
5,957,195
How to debug a dylib error or compiler bug in a Python-C-API function wrapper?
<p>I am writing a Python wrapper for a C function but I have some very strange behaviour. The C code is:</p> <pre class="lang-c prettyprint-override"><code>static PyObject* f12_wrapper(PyObject* self, PyObject* args, PyObject* kwargs) { PyObject* y_obj; void* y_data = NULL; int64_t y_shape[1]; int64_t y_strides[1]; int64_t Out_0001; PyObject* Out_0001_obj; y_obj = Py_None; static char *kwlist[] = { &quot;y&quot;, NULL }; if (!PyArg_ParseTupleAndKeywords(args, kwargs, &quot;|O&quot;, kwlist, &amp;y_obj)) { return NULL; } // printf(&quot;1. %p %p %d\n&quot;, y_obj, Py_None, (y_obj == Py_None)); if (y_obj == Py_None) { // printf(&quot;2. Py_None detected&quot;); y_data = NULL; } else if (pyarray_check(&quot;y&quot;, y_obj, NPY_LONG, INT64_C(1), NO_ORDER_CHECK)) { y_data = PyArray_DATA((PyArrayObject*)(y_obj)); get_strides_and_shape_from_numpy_array(y_obj, y_shape, y_strides); } else { // printf(&quot;3. Py_None not detected&quot;); return NULL; } Out_0001 = bind_c_f12(y_data, y_shape[INT64_C(0)], y_strides[INT64_C(0)]); Out_0001_obj = Int64_to_PyLong(&amp;Out_0001); return Out_0001_obj; } </code></pre> <p>When I call the function from Python as:</p> <pre class="lang-py prettyprint-override"><code>f12() </code></pre> <p>the code runs into the wrong if block. Instead of entering the first if block (the block containing print 2), it skips over this and tests the second if condition before exiting with an error (in the block containing print 3). Putting the second condition in an <code>else</code> block instead of an <code>else if</code> block does not change the behaviour.</p> <p>If I uncomment print 1 then I can see that the address stored in <code>y_obj</code> and <code>Py_None</code> are the same. If I uncomment print 3 then I can see that the code is going into the else 3.</p> <p>The strangest behaviour occurs if I uncomment print 2. In this case I have the error:</p> <pre><code>Trace/BPT trap: 5 </code></pre> <p>I have no idea why the code is not behaving as described or how to debug the dylib error. Is this a compiler bug? The same code works perfectly on my local linux machine, a GitHub linux runner and a GitHub windows runner</p> <p>This code is calling Fortran code. If I remove the call to <code>bind_c_f12</code> and stop linking the associated .o files the problem goes away. But an almost identical version of this code with an identical version of the call to <code>bind_c_f12</code> has worked in the past.</p>
<python><c><macos><python-c-api>
2024-08-27 16:33:57
1
482
bourneeoo
78,919,818
15,297,204
Embedding using the LangChain_AWS is giving None value
<p>I am trying to embed a text using the <code>langchain_aws</code> <code>BedrockEmbeddings</code>, but when I invoke the function, I get a list with the <code>None</code> values.</p> <p>Here's the code:</p> <pre><code>from langchain_community.llms.bedrock import Bedrock from langchain_aws import BedrockEmbeddings import boto3 # Initialize the Bedrock client bedrock_client = boto3.client(service_name='bedrock-runtime') # Initialize Bedrock Embeddings bedrock_embeddings = BedrockEmbeddings( model_id=&quot;amazon.titan-text-express-v1&quot;, credentials_profile_name=&quot;default&quot;, client=bedrock_client, region_name=&quot;ap-south-1&quot; ) embed_data=bedrock_embeddings.embed_documents([&quot;This is a content of the document&quot;, &quot;This is another document&quot;]) print(embed_data) </code></pre> <p>Output:</p> <pre><code>[None, None] </code></pre>
<python><langchain><large-language-model><embedding><amazon-bedrock>
2024-08-27 16:05:02
1
521
Md Tausif
78,919,713
1,195,803
Yocto recipe for custom python module
<p>I'm having a very hard time trying to find an example of building in a custom python module into Yocto (more specifically Xilinx's Petalinux) that will install into the python site-packages directory so I can write Python scripts on-target using my module.</p> <p>This is what I have so far:</p> <h2>Directory tree</h2> <p>(in project-spec/meta-user/recipes-apps)</p> <blockquote> <pre><code>├── foo-python ├── files │   └── foo-python.py └── foo-python.bb </code></pre> </blockquote> <h2>foo-python.bb</h2> <pre><code># # This file is the foo-python recipe. # SUMMARY = &quot;Simple foo-python application&quot; SECTION = &quot;PETALINUX/apps&quot; LICENSE = &quot;MIT&quot; LIC_FILES_CHKSUM = &quot;file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302&quot; SRC_URI = &quot;file://foo-python.py&quot; RDEPENDS:${PN} = &quot;python3-core&quot; S = &quot;${WORKDIR}&quot; do_install() { install -d ${D}${PYTHON_SITEPACKAGES_DIR}/${PN} install -m 0644 foo-python.py ${D}${PYTHON_SITEPACKAGES_DIR}/${PN}/ } FILES_${PN} += &quot;${PYTHON_SITEPACKAGES_DIR}/${PN}/*&quot; </code></pre> <h2>Error</h2> <p>When I bitbake my recipe (Petalinux-build), I get the following error:</p> <blockquote> <p>NOTE: Executing Tasks ERROR: foo-python-1.0-r0 do_package: QA Issue: foo-python: Files/directories were installed but not shipped in any package:</p> <p>/foo-python</p> <p>/foo-python/foo-python.py</p> <p>Please set FILES such that these items are packaged. Alternatively if they are unneeded, avoid installing them or delete them within do_install. foo-python: 2 installed and not shipped files. [installed-vs-shipped] ERROR: foo-python-1.0-r0 do_package: Fatal QA errors found, failing task.</p> </blockquote> <ol> <li>I apologize in advance, I'm only familiar with Petalinux, and don't know where Petalinux ends, and Yocto/Bitbake overlaps.</li> <li>I can find examples online of how to add python modules from open-embedded layers, but I can't find a single example of adding a custom python module as installable python into site-packages.</li> </ol>
<python><yocto><bitbake><petalinux>
2024-08-27 15:42:28
2
541
justynnuff
78,919,691
638,048
hvplot reset axis to range of selected data when using dropdown
<p>When using hvplot, the range of the y axis is calculated from all the data, which means that if you use the &quot;by&quot; keyword to get a dropdown option, the axis does not change. If one variable in your data has a numerically much smaller range than another, then you can't see the values of the variable with the smaller range, it just looks like a horizontal line.</p> <p>For example, I have time series data as an xarray dataset, where I'm displaying sensor values for multiple sensors on one graph. Each sensor reports a DAC value which is 0 to 3 billion, and a temperature which is 0 to 100. If I use the &quot;by&quot; mechanism to choose either DAC or temperature, the y axis always has a range of 0 to 3 billion, even when the temperature is being displayed, so the temperature look like zero.</p> <p>I would like to have the y axis range reset whenever the dropdown option is changed.</p>
<python><python-xarray><hvplot><holoviz>
2024-08-27 15:38:15
1
936
Richard Whitehead
78,919,675
1,176,573
How to plot bar graph with button for multiple categories?
<p>For a given dataframe, I am trying to create <code>plotly</code> plot a comparative bar graph with <code>q1,q2..</code> on the x-axis, and barline for each <code>stockname</code> and the figures on y-axis.</p> <p>Additionally, a button should have <code>[Sales, Net Profit]</code> selectors so that when an option is picked it should plot a comparative graph of those stocks (maximum <code>6</code> stocks).</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np import plotly.graph_objects as go my_dict = dict({ 'quarterly_result' : ['Sales','Net Profit', 'Sales', 'Net Profit'], 'stockname' : ['Stock1', 'Stock1','Stock2', 'Stock2'], 'q1' : [100,10,np.nan,np.nan], 'q2' : [110,20.6,570,120], 'q3' : [67,-2.0,620,125.7], 'q4' : [125,40.5,np.nan,np.nan], 'q5' : [np.nan,np.nan,660,105.9], 'q6' : [np.nan,np.nan,636,140] }) df = pd.DataFrame(my_dict) fig = go.Figure() x = df.columns[2:] y1 = df.loc[0][2:] y2 = df.loc[2][2:] fig.add_traces(go.Bar(x=x, y=y1)) fig.add_traces(go.Bar(x=x, y=y2)) buttons = [{'method': 'update', 'label': col, 'args': [{'y': [df[col]]}]} for col in df.iloc[:, 1:]] updatemenus = [{'buttons': buttons, 'direction': 'down', 'showactive': True,}] # update layout with buttons, and show the figure fig.update_layout(updatemenus=updatemenus) fig.show() </code></pre> <p>I am unsure how to plot this multi-categorical dataframe. Below snippet is not working. How do I fix this?</p> <p>Incorrect outcome:</p> <p><a href="https://i.sstatic.net/FylYjedV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FylYjedV.png" alt="enter image description here" /></a></p> <p>Expected:</p> <p><a href="https://i.sstatic.net/WhnGSPwX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WhnGSPwX.png" alt="enter image description here" /></a></p>
<python><plotly>
2024-08-27 15:34:31
1
1,536
RSW
78,919,574
6,843,153
How to declare a Pydantic field which content is constrained by a list that is known until execution time
<p>I have the following <strong>Pydantic v1.10</strong> model:</p> <pre><code>from typing import Literal from pydantic import BaseModel, Field from connectors.snowflake.snowflake_connector import SnowflakeConnector conversion = SnowflakeConnector().get_conversion()[&quot;conversion_name&quot;].tolist() class MyModel(BaseModel): code: int = Field(alias=&quot;Code&quot;, frozen=True) status: Literal[&quot;Active&quot;, &quot;Inactive&quot;] = Field(alias=&quot;Status&quot;) conversion: Literal[conversion] = Field(alias=&quot;Conversion&quot;) </code></pre> <p>Field <code>status</code> works perfectly, but field <code>conversion</code> raises this error:</p> <pre><code> File &quot;pydantic/main.py&quot;, line 198, in pydantic.main.ModelMetaclass.__new__ File &quot;pydantic/fields.py&quot;, line 506, in pydantic.fields.ModelField.infer File &quot;pydantic/fields.py&quot;, line 436, in pydantic.fields.ModelField.__init__ File &quot;pydantic/fields.py&quot;, line 557, in pydantic.fields.ModelField.prepare File &quot;pydantic/fields.py&quot;, line 831, in pydantic.fields.ModelField.populate_validators File &quot;pydantic/validators.py&quot;, line 709, in find_validators File &quot;pydantic/validators.py&quot;, line 473, in pydantic.validators.make_literal_validator TypeError: unhashable type: 'list' </code></pre> <p>How can I achieve with field <code>conversion</code> what I can do with field <code>status</code> if I don't know the list of values until execution time?</p>
<python><pydantic>
2024-08-27 15:12:11
2
5,505
HuLu ViCa
78,919,353
561,243
Ternary plot with python-ternary: unable to set axis label and set axis limits
<p>I am using <a href="https://github.com/marcharper/python-ternary" rel="nofollow noreferrer">python-ternary</a> to generate a ternary plot with some of my data.</p> <p>I have two problems. One rather simple, I guess, and one I can't really understand.</p> <p>Here below is my piece of code.</p> <pre class="lang-py prettyprint-override"><code>import ternary import pandas as pd data_df = pd.read_csv('data.csv') data_df['C'] = 1 - (data_df['A']+data_df['B'] ) n = 100 print(data_df[['B','A','C']].head(n).describe()) print(data_df[['B','A','C']].head(n)) data_dictionary2 = data_df[['B','A','C']].head(n).values #--------------- Setting axis limit in ternary axis --------------------------- figure, tax = ternary.figure(scale=1) tax.boundary(linewidth=2.0) tax.gridlines(multiple=0.1) # -- set axis limit tax.set_axis_limits(axis_limits={'b':[0.94,1],'r':[0,0.1],'l':[0.0,0.1]}) #-- convert the data according to new coordinate pp = tax.convert_coordinates(data_dictionary2) tax.scatter(pp) #-- Update the axis ticks tax.get_ticks_from_axis_limits(multiple=0.1) tax.set_custom_ticks(multiple=.1, linewidth=1, offset=0.025, tick_formats=&quot;%.3f&quot;) tax.get_axes().axis('off') tax.clear_matplotlib_ticks() tax.set_title(&quot;Ternary plot\n&quot;) fontsize = 12 offset = 0.14 tax.right_corner_label(&quot;X&quot;, fontsize=fontsize) tax.top_corner_label(&quot;Y&quot;, fontsize=fontsize) tax.left_corner_label(&quot;Z&quot;, fontsize=fontsize) tax.left_axis_label(&quot;Left label $\\alpha^2$&quot;, fontsize=fontsize, offset=offset) tax.right_axis_label(&quot;Right label $\\beta^2$&quot;, fontsize=fontsize, offset=offset) tax.bottom_axis_label(&quot;Bottom label $\\Gamma - \\Omega$&quot;, fontsize=fontsize, offset=offset) figure.savefig('test.png') </code></pre> <p>The data.csv file is <a href="https://w-si.link/Z3v9o7ZnCSdqPYmvq" rel="nofollow noreferrer">here</a></p> <p>When I run the code, I get the following output. <a href="https://i.sstatic.net/6HYJIHVB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6HYJIHVB.png" alt="test.png" /></a></p> <p>that is not far away from what I would like to have.</p> <p>The first problem is that there are no axis nor corner label. Why are them not printed?</p> <p>The second problem is with the scaling. If you look at the console output you will see that the max value for the variable C is 0.018, but if I look at the ternary plot (I presume that it is plotted on left axis), I see that there are some points at 0.03. If my assumption on the left axis is wrong, and actually the third variable is plotted on the right axis, then it goes up to roughly 0.05.</p> <p>What I am doing wrong?</p> <p>Thanks a lot for your help and support!</p>
<python><matplotlib>
2024-08-27 14:22:57
1
367
toto
78,919,274
3,873,799
Class method implemented in parent class returning the name of the child class
<p>I would like to create a class method, or an abstract method, in a parent class. This method should return the implementer class's name.</p> <p>For example, using a class method:</p> <pre class="lang-py prettyprint-override"><code>class TestParentClass(): somefield = &quot;&quot; # create a method to get the name of this class. @classmethod def get_name(cls): class_name = cls.__name__ return class_name class TestClass(TestParentClass): somefield = TestParentClass.get_name() print(TestClass().somefield) # I want this to print 'TestClass' </code></pre> <p>I would like the print statement to print <code>TestClass</code>, but this will print <code>TestParentClass</code>.</p> <p>I also tried using an abstract method, like so:</p> <pre class="lang-py prettyprint-override"><code>class TestParentClass(): somefield = &quot;&quot; # create a method to get the name of this class. @abstractmethod def get_name(): class_name = __class__.__name__ print(class_name) return class_name </code></pre> <p>However, also this will print <code>TestParentClass</code>.</p> <p>Using the constructor in the child class also doesn't do anything different, i.e.:</p> <pre class="lang-py prettyprint-override"><code>class TestClass(TestParentClass): def __init__(self) -&gt; None: somefield = TestParentClass.get_name() # still sets 'TestParentClass' </code></pre> <p>Is there a way to do this in Python?</p> <p>In other words, I'd like something equivalent to this C# solution:</p> <pre class="lang-cs prettyprint-override"><code>class TestParentClass { public string SomeField { get; set; } public string GetName() { return this.GetType().Name; } } class TestClass : TestParentClass { public TestClass() { SomeField = this.GetName(); } } void Main() { Console.WriteLine(new TestClass().SomeField); // This prints TestClass. } </code></pre>
<python>
2024-08-27 14:05:12
3
3,237
alelom
78,919,150
2,950,593
Django store variable betrween user requests
<p><strong>Short version:</strong></p> <p>How do one store a variable in memory in django and make it shareable between different users ?</p> <p><strong>Long story:</strong></p> <p>I've written some api using django, and django-ninja This api uses third party library that helps me to connect to telegram account (library is called Pyrogram) This library uses some variable and somehow connects to telegram: <strong>I use it like this:</strong></p> <pre><code>async with Client(session_name, config['api_id'], config['api_hash']) as app: DO something </code></pre> <p>So I want to store this <strong>app</strong> variable in memory and share between all users. (I need this because otherwise pyrogram creates another session on user requests and there are limited number of pyrogram sessions)</p>
<python><django>
2024-08-27 13:27:34
0
9,627
user2950593
78,918,962
8,264,792
How to ignore "unused expression" warning from Pylance when using bit-shift operator to compose relationships in airflow
<p>I am currently developing Airflow DAGs using Python. I am using:</p> <ul> <li>Mypy version: 1.8.0</li> <li>Python version: 3.11.9</li> <li>VSCode version: 1.92.2</li> <li>Pylance version: v2024.8.2</li> </ul> <p>When I use the bit-shift operator to compose the relationships between different Operators, I have this warning in VSCode: <code>Expression value is unused</code></p> <p><a href="https://i.sstatic.net/8wsRasTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8wsRasTK.png" alt="Expression value is unused" /></a></p> <p>I would like to keep type checking enabled in VSCode, but I don't want to see this warning for valid use cases like composing Airflow DAGs.</p> <p>Is there a way to configure Pylance to suppress this specific warning for the bit-shift operator?</p> <p>I tried to set a <code>mypy.ini</code> in my settings with the code below, it doesnt do anything.</p> <pre class="lang-json prettyprint-override"><code>[mypy] warn_unused_ignores = False </code></pre> <p>The VSCode mypy extension settings look like:</p> <p><a href="https://i.sstatic.net/zOmwTkK5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zOmwTkK5.png" alt="Mypy extension settings" /></a></p> <p>Can anyone help me figure out how to resolve this issue without disabling type checking entirely?</p>
<python><visual-studio-code><airflow>
2024-08-27 12:44:45
2
436
maje
78,918,674
7,337,491
why does the request work if I'm using Invoke-RestMethod but it doesn't if I'm using requests from python?
<p>The Invoke-RestMethod looks like:</p> <pre><code>Invoke-RestMethod -Uri 'url' -Method 'GET' -Headers $headers </code></pre> <p>The header contains a bearer token</p> <p>The python code looks like:</p> <pre><code>import requests url = &quot;some_url&quot; headers = { 'Authorization': 'Bearer &lt;token&gt;' } response = requests.request(&quot;GET&quot;, url, headers=headers) </code></pre> <p>It results in an error:</p> <pre><code>SSLError: HTTPSConnectionPool(host='&lt;some_host&gt;', port=443): Max retries exceeded with url: &quot;&lt;url&gt;&quot; (Caused by SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1133)'))) </code></pre> <p>The reuqest works if I'm using postman or Invoke-RestMethod, but it doesn't work if I'm using python's request module.</p> <p>The code for python and invoke method were generated using postman.</p> <p>What could be a possible explanation for this?</p>
<python><ssl><invoke-restmethod>
2024-08-27 11:36:51
0
1,340
cristian hantig
78,918,585
6,930,340
Count same consecutive numbers in list column in polars dataframe
<p>I have a <code>pl.DataFrame</code> with a column comprising lists with integers. I need to assert that each consecutive integer is showing up two times in a row at a maximum.</p> <p>For instance, a list containing <code>[1,1,0,-1,1]</code> would be OK, as the number 1 is showing up max two times in a row (the first two elements, followed by a zero).</p> <p>This list should lead to a failed assertion: <code>[1,1,1,0,-1]</code> The number <code>1</code> shows up three times in a row.</p> <p>Here's a toy example, where <code>row2</code> should lead to a failed assertion.</p> <pre><code>import polars as pl row1 = [0, 1, -1, -1, 1, 1, -1, 0] row2 = [1, -1, -1, -1, 0, 0, 1, -1] df = pl.DataFrame({&quot;list&quot;: [row1, row2]}) print(f&quot;row1: {row1}&quot;) print(f&quot;row2: {row2}&quot;) print(df) row1: [0, 1, -1, -1, 1, 1, -1, 0] row2: [1, -1, -1, -1, 0, 0, 1, -1] shape: (2, 1) ┌───────────────┐ │ list │ │ --- │ │ list[i64] │ ╞═══════════════╡ │ [0, 1, … 0] │ │ [1, -1, … -1] │ └───────────────┘ </code></pre>
<python><dataframe><python-polars>
2024-08-27 11:11:45
3
5,167
Andi
78,918,540
2,228,771
How to fix/configure Python syntax highlighting?
<p>Python syntax highlighting has recently stopped working correctly, even without any extensions enabled.</p> <p>Is there a way to configure this? I cannot find any good resources on this, other than extensions providing custom syntax highlighting. This is something that is shipped with VSCode out of the box, so I would assume that it can be configured somehow?</p> <p>A few things I checked:</p> <ol> <li>I'm on the latest VSCode version (8/14)</li> <li>No Python extensions are installed (no pylance etc.).</li> <li>Changing color theme does not do anything. I don't even have custom themes.</li> </ol> <p>Here is one example: It thinks that <code>UPDATE</code> (inside a comment) is the start of a new syntax node, and the entire highlighting in the file from there is entirely wrong:</p> <p><a href="https://i.sstatic.net/26Na1IeM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/26Na1IeM.png" alt="enter image description here" /></a></p>
<python><visual-studio-code>
2024-08-27 10:58:17
1
24,848
Domi
78,918,376
5,457,202
Measure temperature of raw image in DJI SDK
<p>I'm learning how to use the DJI SDK in Python scripts to analyse the content of thermographies (radiometric JPGs, R-JPG).</p> <p>According to the docs, calling the SKD with the &quot;measure&quot; option should produce a &quot;global temperature value image which pixel type is INT16 or FLOAT32.&quot;</p> <pre><code>.\dji_irp.exe -s .\DJI_blablabla.JPG -a measure -o measure.raw </code></pre> <p>I've done so, which has produced a binary file I can read with Python. This file contains a list of integer values which is 655360-elements long. Values seem to come in pairs: values at odd positions range from 1 to 252, while values at even positions can either be 1 o 0.</p> <pre><code>temps = subprocess.call(['./dji_irp.exe', '-s', path_file, '-a', 'measure', '-o', 'measure.raw']) with open('measure.raw', 'rb') as f: file_contents = f.read() byte_array = np.frombuffer(file_contents, dtype=np.uint8) print(len(byte_array)) #655360 print(byte_array[:10]) #[69 1 69 1 69 1 78 1 73 1] </code></pre> <p>When separated into two arrays and reshaped as 640x512 they look like this:</p> <pre><code>print(byte_array[0:40:2]) # [ 69 69 69 78 73 78 73 65 51 56 65 65 65 69 69 65 61 29 233 178] print(max(byte_array[0::2])) # 252 print(min(byte_array[0::2])) # 1 plt.imshow(byte_array[0::2].reshape((512, 640)), cmap='magma') </code></pre> <p><a href="https://i.sstatic.net/7A65vEFe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7A65vEFe.png" alt="enter image description here" /></a></p> <pre><code>print(byte_array[1:40:2]) # [1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0] print(max(byte_array[1::2])) # 1 print(min(byte_array[1::2])) #0 plt.imshow(byte_array[1::2].reshape((512, 640))) </code></pre> <p><a href="https://i.sstatic.net/lcco869F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lcco869F.png" alt="enter image description here" /></a></p> <p>The second image look like a mask of the first one, but I don't know how to interpret the values from the first one, because such values don't look like temperatures to me, not even in Fahrenheit degrees.</p>
<python><image><matplotlib><dji-sdk>
2024-08-27 10:21:20
1
436
J. Maria
78,918,133
1,652,631
Django Viewflow - Passing field values via urls upon process start
<p>Is it possible *<strong>*</strong>, to pass a value to a process via the startup url/path.</p> <p>I have a process model with a <code>note</code> field.</p> <p>I want to start a new process flow and pass the note to the url e.g.</p> <p><code>http://server.com/my_process/start/?note=mynote</code></p>
<python><django><django-viewflow>
2024-08-27 09:31:22
1
3,731
Tooblippe
78,918,066
7,850,808
Using Jax Jit on a method as decorator versus applying jit function directly
<p>I guess most people familiar with jax have seen this example <a href="https://jax.readthedocs.io/en/latest/faq.html#how-to-use-jit-with-methods" rel="nofollow noreferrer">in the documentation</a> and know that it does not work:</p> <pre class="lang-py prettyprint-override"><code>import jax.numpy as jnp from jax import jit class CustomClass: def __init__(self, x: jnp.ndarray, mul: bool): self.x = x self.mul = mul @jit # &lt;---- How to do this correctly? def calc(self, y): if self.mul: return self.x * y return y c = CustomClass(2, True) c.calc(3) </code></pre> <p>3 workarounds are mentioned, but it appears that applying jit as a function directly, rather than a decorator works fine as well. That is, JAX does not complain about not knowing how to deal with the <code>CustomClass</code> type of <code>self</code>:</p> <pre class="lang-py prettyprint-override"><code>import jax.numpy as jnp from jax import jit class CustomClass: def __init__(self, x: jnp.ndarray, mul: bool): self.x = x self.mul = mul # No decorator here ! def calc(self, y): if self.mul: return self.x * y return y c = CustomClass(2, True) jitted_calc = jit(c.calc) print(jitted_calc(3)) </code></pre> <pre class="lang-bash prettyprint-override"><code>6 # works fine! </code></pre> <p>Although not documented (which it maybe should be?), this appears to function identical to marking self as static via <code>@partial(jax.jit, static_argnums=0)</code>, in that changing <code>self</code> does nothing for subsequent calls, i.e.:</p> <pre class="lang-py prettyprint-override"><code>c = CustomClass(2, True) jitted_calc = jit(c.calc) print(jitted_calc(3)) c.mul = False print(jitted_calc(3)) </code></pre> <pre class="lang-bash prettyprint-override"><code>6 6 # no update </code></pre> <p>So I originally assumed that decorators in general might just deal with self as a static parameter when applying them directly. Because the method might be saved to another variable with a specific instance (copy) of self. As a sanity check, I checked if non-jit decorators indeed do this as well, but this appears not to be the case, as the below non-jit &quot;decorated&quot; function happily deals with changes to self:</p> <pre class="lang-py prettyprint-override"><code>def decorator(func): def wrapper(*args, **kwargs): x = func(*args, **kwargs) return x return wrapper custom = CustomClass(2, True) decorated_calc = decorator(custom.calc) print(decorated_calc(3)) custom.mul = False print(decorated_calc(3)) </code></pre> <pre class="lang-bash prettyprint-override"><code>6 3 </code></pre> <p>I saw some other questions about applying decorators directly as functions versus decorator style (e.g. <a href="https://stackoverflow.com/questions/71905698/is-there-any-difference-between-using-a-decorator-and-applying-the-function-dire">here</a> and <a href="https://stackoverflow.com/questions/8772694/whats-the-difference-between-using-a-decorator-and-explicitly-calling-it">here</a>), and there it is mentioned there is a slight difference in the two versions, but this should almost never matter. I am left wondering what it is about the jit decorator that makes these versions behave so differently, in that JAX.jit cán deal with the <code>self</code> type if not in decorated style. If anyone has an answer, that would be much appreciated.</p>
<python><python-decorators><jax>
2024-08-27 09:14:31
1
527
Stackerexp
78,917,944
10,115,847
Can I "extract" python virtualenv site-packages from one machine to another to avoid using pip?
<p>I wish to create python virtual environment on different Linux machines that do have <code>python-venv</code> module installed but no pip, and no internet access.</p> <p>My idea is to create python virtual-env using <code>python3 -m venv --without-pip my-venv-name</code> for python3 or <code>python -m virtualenv --no-pip my-venv-name</code> for python 2.7, which will generate an empty virtual environment, my question is if I'm using the same python major and minor, and copy pasting the site-packages from a pre-prepared same python major &amp; minor virtual env and expect things to work correctly?</p> <p>important to mention that the packages will be installed in the original python-virtualenv using the same platform_tags.</p>
<python><python-3.x><pip><virtualenv><python-venv>
2024-08-27 08:51:08
1
3,920
Or Yaacov
78,917,847
1,788,712
TypeError: 'DataLoader' object is not subscriptable in SuperGradients Trainer
<p>I've created DataLoader objects for my training and validation datasets, but when I try to pass them to the trainer.train() method, I get the following error:</p> <p>Log summary:</p> <pre><code>TypeError: 'DataLoader' object is not subscriptable </code></pre> <p>Full log trace:</p> <pre><code>[2024-08-27 07:35:44] WARNING - sg_trainer.py - Train dataset size % batch_size != 0 and drop_last=False, this might result in smaller last batch. The console stream is now moved to /content/drive/MyDrive/.../checkpoints/yolo_nas_version_m/console_Aug27_07_35_44.txt An error occurred during training: Caught TypeError in DataLoader worker process 0. Original Traceback (most recent call last): File &quot;/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/worker.py&quot;, line 309, in _worker_loop data = fetcher.fetch(index) # type: ignore[possibly-undefined] File &quot;/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py&quot;, line 52, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File &quot;/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py&quot;, line 52, in &lt;listcomp&gt; data = [self.dataset[idx] for idx in possibly_batched_index] TypeError: 'DataLoader' object is not subscriptable Traceback: Traceback (most recent call last): File &quot;&lt;ipython-input-17-a2a5c064ba0b&gt;&quot;, line 5, in &lt;cell line: 4&gt; trainer.train( File &quot;/usr/local/lib/python3.10/dist-packages/super_gradients/training/sg_trainer/sg_trainer.py&quot;, line 1323, in train first_batch = next(iter(self.train_loader)) File &quot;/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py&quot;, line 630, in __next__ data = self._next_data() File &quot;/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py&quot;, line 1344, in _next_data return self._process_data(data) File &quot;/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py&quot;, line 1370, in _process_data data.reraise() File &quot;/usr/local/lib/python3.10/dist-packages/torch/_utils.py&quot;, line 706, in reraise raise exception TypeError: Caught TypeError in DataLoader worker process 0. Original Traceback (most recent call last): File &quot;/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/worker.py&quot;, line 309, in _worker_loop data = fetcher.fetch(index) # type: ignore[possibly-undefined] File &quot;/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py&quot;, line 52, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File &quot;/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py&quot;, line 52, in &lt;listcomp&gt; data = [self.dataset[idx] for idx in possibly_batched_index] TypeError: 'DataLoader' object is not subscriptable </code></pre> <p>My code to create dataloader:</p> <pre><code>from super_gradients.training.dataloaders.dataloaders import coco_detection_yolo_format_train, coco_detection_yolo_format_val from torch.utils.data import ConcatDataset, DataLoader # List of dataset folders containing COCO datasets yolo_folders = [f'{LOCATION}/dataset1', f'{LOCATION}/dataset2', f'{LOCATION}/dataset3', f'{LOCATION}/dataset4'] # Load YOLO TRAIN datasets from each folder train_datasets = [] for folder in yolo_folders: dataset = coco_detection_yolo_format_train( dataset_params={ 'data_dir': folder, 'images_dir': f'{folder}/train/images', 'labels_dir': f'{folder}/train/labels', 'classes': dataset_params['classes'], 'input_dim': (640, 640) }, dataloader_params={ 'batch_size': BATCH_SIZE, 'num_workers': 2 } ) train_datasets.append(dataset) # Combine the training datasets combined_train_dataset = ConcatDataset(train_datasets) # Create a DataLoader for the combined training dataset train_dataloader = DataLoader(combined_train_dataset, batch_size=16, shuffle=True, num_workers=4) </code></pre> <p>My code to create model and call Trainer.train()</p> <pre><code>import torch from super_gradients.training import models from super_gradients.training.losses import PPYoloELoss from super_gradients.training.metrics import DetectionMetrics_050 from super_gradients.training.models.detection_models.pp_yolo_e import PPYoloEPostPredictionCallback from super_gradients.training import Trainer model = models.get(MODEL_ARCH, num_classes=len(dataset_params['classes']), pretrained_weights=&quot;coco&quot;).to(DEVICE) train_params = { # ENABLING SILENT MODE 'silent_mode': False, &quot;average_best_models&quot;:True, &quot;warmup_mode&quot;: &quot;linear_epoch_step&quot;, &quot;warmup_initial_lr&quot;: 1e-6, &quot;lr_warmup_epochs&quot;: 3, &quot;initial_lr&quot;: 5e-4, &quot;lr_mode&quot;: &quot;cosine&quot;, &quot;cosine_final_lr_ratio&quot;: 0.1, &quot;optimizer&quot;: &quot;Adam&quot;, &quot;optimizer_params&quot;: {&quot;weight_decay&quot;: 0.0001}, &quot;zero_weight_decay_on_bias_and_bn&quot;: True, &quot;ema&quot;: True, &quot;ema_params&quot;: {&quot;decay&quot;: 0.9, &quot;decay_type&quot;: &quot;threshold&quot;}, &quot;max_epochs&quot;: 20, &quot;mixed_precision&quot;: False, #Set to True if using GPU to speed up training &quot;loss&quot;: PPYoloELoss( use_static_assigner=False, num_classes=len(dataset_params['classes']), reg_max=16 ), &quot;valid_metrics_list&quot;: [ DetectionMetrics_050( score_thres=0.1, top_k_predictions=300, # NOTE: num_classes needs to be defined here num_cls=len(dataset_params['classes']), normalize_targets=True, post_prediction_callback=PPYoloEPostPredictionCallback( score_threshold=0.01, nms_top_k=1000, max_predictions=300, nms_threshold=0.7 ) ) ], &quot;metric_to_watch&quot;: 'mAP@0.50' } trainer = Trainer(experiment_name='yolo_nas_version_m', ckpt_root_dir=CHECKPOINT_DIR) trainer.train( model=model, training_params=train_params, train_loader=train_dataloader, valid_loader=val_dataloader ) </code></pre> <p>If I do as below it works, but just using a single dataset directly:</p> <pre><code>trainer.train( model=model, training_params=train_params, train_loader=train_datasets[0], valid_loader=val_datasets[0] ) </code></pre> <p>Please give me some advice.</p> <p>P.S: I know a easy fix would be instead of having multiple dataset folders I could merge it in 1 and use trainer.train() with a single dataset instead of a combined dataloader. But the solution is growing and I need to split those datasets in case I want to test with few of them or additional datasets.</p>
<python><machine-learning><deep-learning><pytorch><dataloader>
2024-08-27 08:28:38
2
721
Jonathan Molina
78,917,766
5,931,672
Tensorflow in pip list but fail to import
<p>I built, with much effort, tensorflow 2.10.1 from source.</p> <pre><code>$ pip list -v | grep tensorflow tensorflow 2.10.1 /home/abarrachina/.local/lib/python3.9/site-packages pip tensorflow-estimator 2.10.0 /home/abarrachina/.local/lib/python3.9/site-packages pip tensorflow-io-gcs-filesystem 0.36.0 /home/abarrachina/.local/lib/python3.9/site-packages pip </code></pre> <p>However, when trying to import</p> <pre><code>$ python Python 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import sysconfig; print(sysconfig.get_paths()[&quot;purelib&quot;]) /usr/lib/python3.9/site-packages &gt;&gt;&gt; import tensorflow Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; ModuleNotFoundError: No module named 'tensorflow' </code></pre> <p>I see as if I have two paths for python, <code>/usr/lib/python3.9</code> and <code>/home/abarrachina/.local/lib/python3.9</code>. But I failed to change it, I try <code>alias python=/home/abarrachina/.local/lib/python3.9</code> and still the previous commands do not change. Also, for all other packages seems to work, for example, numpy is also in <code>/home/abarrachina/.local/lib/python3.9/site-packages pip</code> and it works (same versiona and everything).</p> <p>What can be happening?</p>
<python><tensorflow><build>
2024-08-27 08:09:21
1
4,192
J Agustin Barrachina
78,917,424
20,732,098
Remove Gaps in Chart
<p>I have the following diagram: <a href="https://i.sstatic.net/VOV8ekth.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VOV8ekth.png" alt="enter image description here" /></a></p> <pre><code>data = { 'start': ['2024-08-02 08:00', '2024-08-02 09:00', '2024-08-02 18:50'], 'ende': ['2024-08-02 09:00', '2024-08-02 18:50', '2024-08-03 10:00'], 'color': ['green', 'green','green', 'green', 'green'] } </code></pre> <p>Here is the Python code that generates the diagram:</p> <pre><code>def createBarChart(df): # Convert the columns ‘start’ and ‘end’ into datetime objects df['start'] = pd.to_datetime(df['start']) df['ende'] = pd.to_datetime(df['ende']) # Create Diagramm fig = go.Figure() for i, row in df.iterrows(): fig.add_trace(go.Scatter( x=[row['start'], row['ende'], row['ende'], row['start']], y=[1, 1, 0, 0], fill='toself', fillcolor=row['color'], mode='none', showlegend=False, )) # Formatting the x-axis tickvals = pd.date_range(start=df['start'].min(), end=df['ende'].max(), freq='3H') ticktext = [] previous_day = None for d in tickvals: if previous_day is None or d.day != previous_day: ticktext.append(f&quot;&lt;b&gt;{d.strftime('%d.%m')}&lt;/b&gt;&quot;) else: ticktext.append(d.strftime('%H:%M')) previous_day = d.day fig.update_layout( yaxis=dict(showticklabels=False), xaxis=dict( tickformat='%H:%M', tickvals=tickvals, ticktext=ticktext ), margin=dict(l=20, r=20, t=10, b=20), height=200, ) return fig </code></pre> <p>Now we come to the problem. To display the diagram, I read from a CSV file. Each line in the CSV file is a block. The last two blocks are green. However, there is a small line between these two, which I would like to remove so that it is a continuous green bar.</p>
<python><plotly>
2024-08-27 06:37:20
2
336
ranqnova
78,917,359
6,649,616
How to delete a Django model file without restarting the server and trigger migrations?
<p>I'm working on a Django application where I need to programmatically delete a model file (e.g., <em>my_app/models/my_model.py</em>) and then run migrations, all without restarting the server. However, after deleting the file and running makemigrations, Django doesn't recognize any changes and doesn't generate a migration to remove the model.</p> <p>I've tried reloading the modules using <code>importlib.reload()</code>, but that hasn't resolved the issue. The system only detects the changes after I restart the server.</p> <p>Does Django or Python keep references to the model elsewhere, preventing the changes from being registered immediately? Is there a way to resolve this without having to restart the server?</p> <pre><code> file_path = 'my_app/models/my_model.py' if os.path.exists(file_path): os.remove(file_path) #Update __init__.py init_file_path = 'my_app/models/__init__.py' with open(init_file_path, 'r') as file: lines = file.readlines() with open(init_file_path, 'w') as file: for line in lines: if 'my_model' not in line: file.write(line) module_name = 'my_app.models' if module_name in sys.modules: del sys.modules[module_name] module = importlib.import_module(module_name) importlib.reload(module) call_command('makemigrations', 'my_app') call_command('migrate', 'my_app') </code></pre>
<python><django><django-models>
2024-08-27 06:17:25
1
321
johnny94
78,917,252
7,700,802
Creating columns from a column that contains a list of dictionaries
<p>I have a dataframe that has a column with a list of dictionaries that look like this object</p> <pre><code>[{'MetricName': 'test:mean_wQuantileLoss', 'Value': 1.0935583114624023, 'Timestamp': datetime.datetime(2022, 10, 20, 7, 45, 6, tzinfo=tzlocal())}, {'MetricName': 'train:loss:batch', 'Value': 3.0625627040863037, 'Timestamp': datetime.datetime(2022, 10, 20, 7, 44, 37, tzinfo=tzlocal())}, {'MetricName': 'train:progress', 'Value': 100.0, 'Timestamp': datetime.datetime(2022, 10, 20, 7, 44, 37, tzinfo=tzlocal())}, {'MetricName': 'train:loss', 'Value': 3.2942464351654053, 'Timestamp': datetime.datetime(2022, 10, 20, 7, 44, 37, tzinfo=tzlocal())}, {'MetricName': 'train:final_loss', 'Value': 3.2942464351654053, 'Timestamp': datetime.datetime(2022, 10, 20, 7, 44, 37, tzinfo=tzlocal())}, {'MetricName': 'train:throughput', 'Value': 385.56353759765625, 'Timestamp': datetime.datetime(2022, 10, 20, 7, 44, 37, tzinfo=tzlocal())}, {'MetricName': 'test:RMSE', 'Value': 22.101428985595703, 'Timestamp': datetime.datetime(2022, 10, 20, 7, 45, 6, tzinfo=tzlocal())}, {'MetricName': 'ObjectiveMetric', 'Value': 22.101428985595703, 'Timestamp': datetime.datetime(2022, 10, 20, 7, 45, 6, tzinfo=tzlocal())}] </code></pre> <p>I want to create columns for each MetricName and what the Value is. I have other columns in the dataframe that I want to keep in tact as well. How do I achieve this?</p> <p>Here is a sample dataframe</p> <pre><code>data = {'TrainingJobName': ['Training_JOB_NAME1'], 'TrainingJobArn': [&quot;Blahblah&quot;], 'FinalMetricDataList': [&quot;[{'MetricName': 'test:mean_wQuantileLoss', 'Value': 1.0935583114624023, 'Timestamp': datetime.datetime(2022, 10, 20, 7, 45, 6, tzinfo=tzlocal())}, {'MetricName': 'train:loss:batch', 'Value': 3.0625627040863037, 'Timestamp': datetime.datetime(2022, 10, 20, 7, 44, 37, tzinfo=tzlocal())}, {'MetricName': 'train:progress', 'Value': 100.0, 'Timestamp': datetime.datetime(2022, 10, 20, 7, 44, 37, tzinfo=tzlocal())}, {'MetricName': 'train:loss', 'Value': 3.2942464351654053, 'Timestamp': datetime.datetime(2022, 10, 20, 7, 44, 37, tzinfo=tzlocal())}, {'MetricName': 'train:final_loss', 'Value': 3.2942464351654053, 'Timestamp': datetime.datetime(2022, 10, 20, 7, 44, 37, tzinfo=tzlocal())}, {'MetricName': 'train:throughput', 'Value': 385.56353759765625, 'Timestamp': datetime.datetime(2022, 10, 20, 7, 44, 37, tzinfo=tzlocal())}, {'MetricName': 'test:RMSE', 'Value': 22.101428985595703, 'Timestamp': datetime.datetime(2022, 10, 20, 7, 45, 6, tzinfo=tzlocal())}, {'MetricName': 'ObjectiveMetric', 'Value': 22.101428985595703, 'Timestamp': datetime.datetime(2022, 10, 20, 7, 45, 6, tzinfo=tzlocal())}]&quot;]} df_sample = pd.DataFrame(data=data) df_sample.head() </code></pre>
<python><pandas><dictionary-comprehension>
2024-08-27 05:33:50
1
480
Wolfy
78,917,115
139,150
Count of pages found in google search
<p>I am looking for the count of pages where search term &quot;indieea&quot; is found. Visited this page:</p> <p><a href="https://www.google.com/search?q=%22indieea%22" rel="nofollow noreferrer">https://www.google.com/search?q=%22indieea%22</a></p> <p>Goto the last page in search results. You get this line...</p> <pre><code>we have omitted some entries very similar to the 64 already displayed. </code></pre> <p>The function should return 64 because there are 64 pages for the given search term &quot;indieea&quot;.</p> <p>The code that I tried:</p> <pre><code>import asyncio import urllib.parse from playwright.async_api import async_playwright # 1.44.0 async def main(): term = urllib.parse.quote_plus(&quot;टंकलेखन&quot;) url = f&quot;https://www.google.com/search?q={term}&quot; async with async_playwright() as pw: browser = await pw.chromium.launch() page = await browser.new_page() await page.goto(url, wait_until=&quot;domcontentloaded&quot;) # Find the &lt;a&gt; tag with aria-label=&quot;Page 9&quot; and class=&quot;fl&quot; link_element = page.locator('a[aria-label=&quot;Page 9&quot;].fl').first if await link_element.count() &gt; 0: # Check if the element exists # Get the outerHTML of the element to print the full source code of the link link_html = await link_element.evaluate('el =&gt; el.outerHTML') print(&quot;Source code of the link:&quot;, link_html) else: print(&quot;Element not found.&quot;) await browser.close() if __name__ == &quot;__main__&quot;: asyncio.run(main()) </code></pre>
<python><web-scraping><playwright><playwright-python>
2024-08-27 04:25:59
2
32,554
shantanuo
78,917,083
8,176,763
dockerfile pip error cannot find versions for packages in requirements.txt
<p>I have a dockerfile like this:</p> <pre><code>FROM nexus3/docker-hub/apache/airflow:slim-2.10.0-python3.11 ENV PIP_CONFIG_FILE=/home/airflow/pip.conf ENV AIRFLOW_VERSION=2.10.0 RUN --mount=type=secret,id=pip,target=/home/airflow/pip.conf,uid=50000,required=true \ --mount=type=bind,source=requirements.txt,target=requirements.txt \ pip install --no-cache-dir &quot;apache-airflow[celery,postgres,statsd]==${AIRFLOW_VERSION}&quot; -r requirements.txt </code></pre> <p>I then build as such:</p> <pre><code>docker build -t nexus3.systems.uk.hsbc:18094/docker-hub/11976246/apache/airflow:slim-2.10.0-python3.11 --secret id=pip,src=pip.conf . </code></pre> <p>My requirements.txt file looks like this:</p> <pre><code>requests httpx psycopg2-binary psycopg[binary,pool] oracledb pyxlsb pandas polars statsd </code></pre> <p>I then get error:</p> <pre><code>1.769 Requirement already satisfied: requests in /home/airflow/.local/lib/python3.11/site-packages (from -r requirements.txt (line 1)) (2.32.3) 1.769 Requirement already satisfied: httpx in /home/airflow/.local/lib/python3.11/site-packages (from -r requirements.txt (line 2)) (0.27.0) 2.625 ERROR: Could not find a version that satisfies the requirement psycopg2-binary (from versions: none) 3.520 ERROR: No matching distribution found for psycopg2-binary </code></pre> <p>I'm not sure what is really going wrong here.</p>
<python><docker><pip><airflow>
2024-08-27 04:07:29
0
2,459
moth
78,917,073
1,559,401
How to resolve relative module imports in Python when calling from an arbitrarily located script?
<p>I have a submodule that I would like to use. The structure of my project can be boiled down to</p> <pre><code>*/ ├── samples/ ├── scripts/ │ └── script.py # contains &quot;from ai_model import AiModel&quot; ├── model/ # submodule, branch=main ├── models/ # with __init__.py │ ├── base_model.py │ └── ai_model.py # contains &quot;from .base_model import BaseModel&quot; ├── util/ # with __init__.py └── data/ # with __init__.py ... </code></pre> <p>I execute my <code>script.py</code> from the within the <code>scripts</code> directory. I managed to add <code>util</code>, <code>models</code> and so on using</p> <pre><code>import sys from pathlib import Path models_module = Path('../model/models/').resolve() sys.path.insert(0, models_module.__str__()) util_module = Path('../model/').resolve() sys.path.insert(0, util_module.__str__()) .. </code></pre> <p>However, the code currently breaks at (high chance other parts of the submodule are also affected by this issue)</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\example\Project\scripts\script.py&quot;, line 9, in &lt;module&gt; from ai_model import AiModel File &quot;C:\Users\example\Project\model\models\ai_model.py&quot;, line 6, in &lt;module&gt; from .base_model import BaseModel ImportError: attempted relative import with no known parent package </code></pre> <p>I do not want to modify the source code contained in the submodule. I can add <code>__init__.py</code> files if necessary though.</p>
<python>
2024-08-27 04:03:14
1
9,862
rbaleksandar
78,916,962
4,852,094
Using an enum to allow for a class instance to be using as a Generic Value
<p>I noticed mypy will not raise an error when you provide a class instance for a generic type if you set the instance to an enum value first. At times it would be useful to provide a literal class instance to identify the type of an object.</p> <pre class="lang-py prettyprint-override"><code>class MyKlass(): pass my_val = MyKlass() class MyEnum(Enum): val = my_val Literal[MyEnum.val] # mypy doesnt complain Literal[x] # mypy complains &quot;Variable not allowed in type expression&quot; </code></pre> <p>So if I have a Generic that I want to be able to provide an arbitrary value, I can do something like:</p> <pre><code>class MyEnum(Enum): val = my_val class MyClass(Generic[T]): pass # here I use the enum value. MyClass[MyEnum.val] # no issues </code></pre> <p>Is there a better way to do this? I want to avoid providing the value as a function arg because it's much more relevant to the static type checker.</p>
<python><enums><python-typing>
2024-08-27 03:00:34
0
3,507
Rob
78,916,879
3,155,240
Cryptography Fernet prevents my windows service from starting
<p>As per the title - here is my reproducible example (be sure to change the path to the logging file):</p> <p><em>my_service.py</em></p> <pre><code>import logging logging.basicConfig(filename=r'C:\path\to\where\windows\service\is\service_log.txt', level=logging.DEBUG) import os import sys import time import multiprocessing from multiprocessing.connection import Listener import json import winreg import win32crypt # the below is needed to make boto3 work properly with threading from concurrent.futures import ThreadPoolExecutor # for the service import win32serviceutil import win32service import win32event import servicemanager # from cryptography.fernet import Fernet class OnePrintService(win32serviceutil.ServiceFramework): _svc_name_ = &quot;TestService&quot; _svc_display_name_ = &quot;Test Service&quot; def __init__(self,args): win32serviceutil.ServiceFramework.__init__(self, args) self.hWaitStop = win32event.CreateEvent(None, 0, 0, None) self.running = False def SvcStop(self): self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING) win32event.SetEvent(self.hWaitStop) self.running = False def SvcDoRun(self): servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE, servicemanager.PYS_SERVICE_STARTED, (self._svc_name_,'')) self.running = True self.main() def main(self): while self.running: logging.debug(&quot;looped&quot;) time.sleep(2) if __name__ == '__main__': try: win32serviceutil.HandleCommandLine(OnePrintService) except Exception as e: print(e) </code></pre> <ol> <li>open up a command prompt</li> <li>change directory (<code>cd C:\new\directory\</code>) to where the script is.</li> <li>run <code>python my_script.py install</code></li> <li>run <code>python my_script.py start</code></li> <li>wait a few seconds to make sure it had enough time to output a log or two</li> <li>run <code>python my_script.py stop</code></li> </ol> <p>With the <code>Fernet</code> module commented out, it will run fine. With the <code>Fernet</code> module in, when you run <code>python my_script.py stop</code> it will show an error in the command prompt.</p> <p><em><strong>How do I fix this?</strong></em></p> <p>Additionally, I <code>pip uninstall</code> and <code>pip install</code> Fernet, which seems promising, but as soon as I have to repackage my custom module (which requires Fernet), it won't execute the process any longer. My custom (packaged) module require Fernet in the .toml file - worth mentioning that this custom module is different than the service.</p>
<python><python-3.x><cryptography><windows-services>
2024-08-27 02:35:44
1
2,371
Shmack
78,916,760
2,084,503
Does PyPI no longer allow uploads with username and password?
<p>I've just republished one of my packages, but to do so, I had to give the username as <code>__token__</code> and use an API Token I generated from the website as my password. Is there another way to authenticate when I publish? The error reads</p> <pre><code>100% ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 19.1/19.1 kB • 00:00 • 9.0 MB/s WARNING Error during upload. Retry with the --verbose option for more details. ERROR HTTPError: 403 Forbidden from https://upload.pypi.org/legacy/ Invalid or non-existent authentication information. See https://pypi.org/help/#invalid-auth for more information. </code></pre> <p>And if I go to that help page, there is no mention of usernames or passwords, only authentication tokens.</p> <p><a href="https://i.sstatic.net/oXkoDWA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oXkoDWA4.png" alt="enter image description here" /></a></p> <p>When I logged in to my account I was made to set up 2FA and download recovery codes. That clued me in that PyPI may have decided to go all out on security. I mean, fair enough: If someone compromised a popular package and then millions of us downloaded malicious code...yeesh. All the same, I found it slightly annoying to figure out, because <code>twine</code> still takes a username and password.</p>
<python><pypi>
2024-08-27 01:31:01
1
1,266
Pavel Komarov
78,916,679
14,222,808
Best way to deal with Nulls in python
<p><strong>Background</strong>: I have a Python data frame with some null values. I'm trying to impute the value of the nulls in a repeatable/automated way.</p> <p><strong>Details</strong>: I have a data frame of inflation values for 2000 products in 150 countries. Unfortunately, I have a lot of null values. I can assume that inflation across countries is perfectly correlated. So I'm looking to impute my missing values using both the real inflation column but also my product and country column.</p> <p><strong>Ask:</strong> How do I fill in the null values in my data frame? I want the function/algorithm that considers the other countries and products when it imputes the values. For example, if country A had products in common with country B and the products in B were always twice the inflation rate, then any gap in B could be calculated by doubling the rate of the similar product in country A. I don't know the best way to do this in Python.</p>
<python><pandas><null>
2024-08-27 00:38:48
2
315
Jonathan Hay
78,916,677
3,782,816
Using python AST to traverse code and extract return statements
<p>I have a python script that I am trying to &quot;decode&quot; as it were so that I can cast it as an xml, but for this exercise I am just trying to get my head around using ast.walk() and how best to get info out of it. Below is a very simple function that I wrote to simple_if.py and what I am trying to do is extract the function name and the return statements as a list in order.</p> <pre><code>def if_else(abc): if abc &gt; 0: return &quot;Hello&quot; elif abc &lt; 0: return &quot;Goodbye&quot; return &quot;Neither Hello Nor Goodbye&quot; </code></pre> <p>When I put the above into the file and run the below ast functions on it like so</p> <pre><code>with open(&quot;simple_if.py&quot;, &quot;r&quot;, encoding = &quot;utf-8&quot;) as ast_tree_walker tree = ast.parse(ast_tree_walker.read()) expList = [] counter = 0 for node in ast.walk(tree): print(node) if isInstance(node, ast.FunctionDef) and counter == 0: expList.append(node.__dict__[&quot;name&quot;] counter = 1 # this adds the functiona name if_else to the list print(node.__dict__) </code></pre> <p>The above would give a terminal output of nodes, dicts, and args, but the only thing I have been able to accomplish is extracting the function name since it is the first non-Module node in the tree. I know I am dumping everything that I &quot;want&quot; but its not clear to me how I should keep track of the ordering for instance &quot;Neither Hello Nor Goodbye&quot; would appear before either of &quot;Hello&quot; or &quot;Goodbye&quot; and although I assume that is because in terms of a tree the final return is on the same level as the if statement it is not clear to me how I can maintain order (if at all). Basically does anyone know how I could return a list of [&quot;if_else&quot;, &quot;Hello&quot;, &quot;Goodbye&quot;, &quot;Neither Hello Nor Goodbye&quot;]? On some level I feel this is a fools errand, but in the pursuit of knowledge I am wondering how to go about this.</p> <p>Nuanced question the above prints out types of:</p> <pre><code>&lt;ast.Module object at (random token string that I won't write)&gt;, &lt;ast.FunctionDef object at &gt;, &lt;ast.If object at &gt;, &lt;ast.Return object at &gt;... </code></pre> <p>When I come across a &quot;&lt;ast.If object at &gt;&quot; should I step into the 'body' key directly or somehow just wait till that child node gets visited and then extract the info?</p> <pre><code>#children of ast.If {'test': &lt;ast.Compare object at&gt;, 'body&quot; &lt;ast.Return object at &gt;, 'orelse': [&lt;ast.If object at&gt;, 'lineno':2, 'col_offset':7} </code></pre>
<python><abstract-syntax-tree>
2024-08-27 00:37:34
2
301
user3782816
78,916,455
1,475,548
What is the correct syntax for an sshtunnel from Python?
<p>I am trying to open an ssh tunnel from Python, but I cannot seem to get the syntax correct.</p> <p>Essentially, I want to do the following, except from within Python:</p> <pre><code>ssh -i /path/to/my/private.ca.key -L 3306:127.0.0.1:3306 user@ourserver.com </code></pre> <p>When I enter that line directly in bash it works fine.</p> <p>My python code looks like this:</p> <pre><code>import logging from sshtunnel import SSHTunnelForwarder logging.basicConfig(level=logging.DEBUG) # Path to the private key file private_key_path = '/path/to/my/private.ca.key' # Establish SSH tunnel server = SSHTunnelForwarder( 'ourserver.com', ssh_username='user', ssh_pkey = private_key_path, remote_bind_address=('127.0.0.1', 3306), local_bind_address=('127.0.0.1', 3306) ) server.start() </code></pre> <p>This is my output:</p> <pre><code>DEBUG:paramiko.transport:starting thread (client mode): 0x79d3c9d0 DEBUG:paramiko.transport:Local version/idstring: SSH-2.0-paramiko_3.4.1 DEBUG:paramiko.transport:Remote version/idstring: SSH-2.0-OpenSSH_9.2p1 Debian-2+deb12u3 INFO:paramiko.transport:Connected (version 2.0, client OpenSSH_9.2p1) DEBUG:paramiko.transport:=== Key exchange possibilities === DEBUG:paramiko.transport:kex algos: sntrup761x25519-sha512@openssh.com, curve25519-sha256, curve25519-sha256@libssh.org, ecdh-sha2-nistp256, ecdh-sha2-nistp384, ecdh-sha2-nistp521, diffie-hellman-group-exchange-sha256, diffie-hellman-group16-sha512, diffie-hellman-group18-sha512, diffie-hellman-group14-sha256, kex-strict-s-v00@openssh.com DEBUG:paramiko.transport:server key: rsa-sha2-512, rsa-sha2-256, rsa-sha2-512-cert-v01@openssh.com, rsa-sha2-256-cert-v01@openssh.com DEBUG:paramiko.transport:client encrypt: chacha20-poly1305@openssh.com, aes128-ctr, aes192-ctr, aes256-ctr, aes128-gcm@openssh.com, aes256-gcm@openssh.com DEBUG:paramiko.transport:server encrypt: chacha20-poly1305@openssh.com, aes128-ctr, aes192-ctr, aes256-ctr, aes128-gcm@openssh.com, aes256-gcm@openssh.com DEBUG:paramiko.transport:client mac: umac-64-etm@openssh.com, umac-128-etm@openssh.com, hmac-sha2-256-etm@openssh.com, hmac-sha2-512-etm@openssh.com, hmac-sha1-etm@openssh.com, umac-64@openssh.com, umac-128@openssh.com, hmac-sha2-256, hmac-sha2-512, hmac-sha1 DEBUG:paramiko.transport:server mac: umac-64-etm@openssh.com, umac-128-etm@openssh.com, hmac-sha2-256-etm@openssh.com, hmac-sha2-512-etm@openssh.com, hmac-sha1-etm@openssh.com, umac-64@openssh.com, umac-128@openssh.com, hmac-sha2-256, hmac-sha2-512, hmac-sha1 DEBUG:paramiko.transport:client compress: none, zlib@openssh.com DEBUG:paramiko.transport:server compress: none, zlib@openssh.com DEBUG:paramiko.transport:client lang: &lt;none&gt; DEBUG:paramiko.transport:server lang: &lt;none&gt; DEBUG:paramiko.transport:kex follows: False DEBUG:paramiko.transport:=== Key exchange agreements === DEBUG:paramiko.transport:Strict kex mode: True DEBUG:paramiko.transport:Kex: curve25519-sha256@libssh.org DEBUG:paramiko.transport:HostKey: rsa-sha2-512 DEBUG:paramiko.transport:Cipher: aes128-ctr DEBUG:paramiko.transport:MAC: hmac-sha2-256 DEBUG:paramiko.transport:Compression: none DEBUG:paramiko.transport:=== End of kex handshake === DEBUG:paramiko.transport:Resetting outbound seqno after NEWKEYS due to strict mode DEBUG:paramiko.transport:kex engine KexCurve25519 specified hash_algo &lt;built-in function openssl_sha256&gt; DEBUG:paramiko.transport:Switch to new keys ... DEBUG:paramiko.transport:Resetting inbound seqno after NEWKEYS due to strict mode DEBUG:paramiko.transport:Got EXT_INFO: {'server-sig-algs': b'ssh-ed25519,sk-ssh-ed25519@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,sk-ecdsa-sha2-nistp256@openssh.com,webauthn-sk-ecdsa-sha2-nistp256@openssh.com,ssh-dss,ssh-rsa,rsa-sha2-256,rsa-sha2-512', 'publickey-hostbound@openssh.com': b'0'} DEBUG:paramiko.transport:Attempting public-key auth... DEBUG:paramiko.transport:userauth is OK INFO:paramiko.transport:Authentication (publickey) failed. 2024-08-26 16:12:00,129| ERROR | Could not open connection to gateway ERROR:sshtunnel.SSHTunnelForwarder:Could not open connection to gateway Traceback (most recent call last): File &quot;[filename].py&quot;, line 18, in &lt;module&gt; server.start() File &quot;.../site-packages/sshtunnel.py&quot;, line 1331, in start self._raise(BaseSSHTunnelForwarderError, File &quot;.../site-packages/sshtunnel.py&quot;, line 1174, in _raise raise exception(reason) sshtunnel.BaseSSHTunnelForwarderError: Could not establish session to SSH gateway </code></pre> <p>My private key uses the 'ssh-rsa' algorithm, and the server seems to confirm that it understands that algorithm during negotiation according to the following line:</p> <pre><code>DEBUG:paramiko.transport:Got EXT_INFO: {'server-sig-algs': b'ssh-ed25519,sk-ssh-ed25519@openssh.com,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,sk-ecdsa-sha2-nistp256@openssh.com,webauthn-sk-ecdsa-sha2-nistp256@openssh.com,ssh-dss,ssh-rsa,rsa-sha2-256,rsa-sha2-512', 'publickey-hostbound@openssh.com': b'0'} </code></pre>
<python><ssh-tunnel>
2024-08-26 22:17:47
1
8,335
Octopus
78,916,428
13,764,824
Parallel requests gets slow when the list grows
<p>I have created an application using FastAPI that basically exposes a POST route and execute few external requests for each incoming request.</p> <p>The code is really simple:</p> <pre><code>from fastapi import FastAPI from src.utils.logger import create_logger import aiohttp import asyncio logger = create_logger(__name__) server = FastAPI() @server.post('/enrich_data') async def process_gateway_data(payload: Payload, request: Request): url = 'https://myid.execute-api.es-east-1.amazonaws.com/mkt' mktlist = [{...}] # this list of objects data = await load_data(url, features, request_id) JSONResponse(content={'mkt': {'data': '000', data} }) def _create_mkt_payload(mkt: Dict[str, Any]) -&gt; Dict[str, Any]: return { &quot;mkt_group&quot;: mkt['mkt_group'], &quot;payload&quot;: { mkt['name']: mkt['value'] } } async def make_request(client: aiohttp.ClientSession, url: str, mktdata: Dict[str, any]): payload = _create_mkt_payload(mktdata) _id = uuid.uuid4() logger.debug(f&quot;{_id} request {mktdata['mkt_group']}&quot;) async with client.post(url=url, json=payload) as resp: mkt_response = await resp.json() logger.debug(f&quot;end uuid {_id} status {resp.status}&quot;) return {mkt['name']: mkt_response} async def load_data(url: str, mktlist: List[Dict[str, Any]]) -&gt; List[Dict[str, Any]]: tasks = [] async with aiohttp.ClientSession(connector=aiohttp.TCPConnector(limit=15)) as session: for mktdata in mktlist: tasks.append(make_request(session, url, mktdata)) data = await asyncio.gather(*tasks) return list(data) </code></pre> <p>All external requests are made for the same endpoint with a different payload and it responses pretty fast. I have extracted the percentiles from API gateway:</p> <pre><code>P50 P55 P60 P65 P70 P75 P80 P85 P90 P95 P99 17 17 18 18 19 19 20 21 22 24 30 </code></pre> <p>Based on these on numbers, I have figured out that my application had some bottleneck because my code only execute like 10 http requests, waiting for response, enrich the payload then return the response but the percentiles from my application are:</p> <pre><code>P50 P55 P60 P65 P70 P75 P80 P85 P90 P95 P99 66 66 67 68 69 70 71 72 75 80 113 </code></pre> <p>I have added that logger.debug between <code>await resp.json()</code> just to be sure that problem was there.</p> <p>After a few more tests, I have figured out that more items I had in <code>mktlist</code> more slower my response was - if I have only 1 or 2 items in <code>mktlist</code>, the response from wait json is very close from percentile of external request.</p> <p>It's very strange because it looks like that something is locking my event loop and right now I have no more ideas how to optimize it.</p> <p>I really can't have a difference like 20ms-30ms between external app and my application. Both applications are under same AWS account running into ECS.</p> <p>By the way, I am running using uvicorn with single worker:</p> <blockquote> <p>uvicorn src.app:server --host 0.0.0.0 --port 8080</p> </blockquote> <p>What is the explanation why the response time from my application gets so slow if <code>mktlist</code> has like 10 items that will execute 10 external requests? Right now I am testing with 5 rps and 30 tasks (each task is an instance of my application).</p> <p>Is there any other way to execute parallel requests or anything that I should change to make it faster?</p>
<python><python-asyncio><aiohttp>
2024-08-26 22:06:58
0
949
placplacboom
78,916,373
6,843,153
How to know which fields in pydantic model are flagged as 'exclude=True'
<p>I need to create a data object in execution time that maps the schema of a <strong>Pydantic v1.10</strong> model that might vary depending on user inputs. This is the example of one of those models:</p> <pre><code>class MyModel(BaseModel): field_1: str = Field(alias=&quot;Field 1&quot;) field_2: str = Field(alias=&quot;Field 2&quot;) field_3: str = Field(alias=&quot;Field 3&quot;, exclude=True) </code></pre> <p>I need to exclude fields labeled as <code>exclude=True</code> (this field should not exist in the data object), the problem is that I have no way to know the model that is going to be used until execution time, so the data objects is defined until execution time.</p> <p>The problem is that I have no way to know what fields of the selected mode are labeled as <code>exclude=True</code>. This info is not provided by <code>MyModel.__fields__</code> or <code>MyModel.schema()</code>.</p> <p>Is there any way I can get this info?</p>
<python><pydantic>
2024-08-26 21:43:22
1
5,505
HuLu ViCa
78,916,293
7,700,802
sagemaker list_training_jobs not returning all completed jobs
<p>I wrote this function</p> <pre><code>def list_completed_training_jobs(): &quot;&quot;&quot;Lists completed SageMaker training jobs.&quot;&quot;&quot; sagemaker = boto3.client('sagemaker', region_name=&quot;us-east-1&quot;) response = sagemaker.list_training_jobs( StatusEquals='Completed', CreationTimeAfter=datetime(2023, 1, 1), ) training_jobs = response['TrainingJobSummaries'] while 'NextToken' in response: response = sagemaker.list_training_jobs( StatusEquals='Completed', NextToken=response['NextToken'] ) training_jobs.extend(response['TrainingJobSummaries']) return training_jobs </code></pre> <p>I am trying to get all jobs after 2023, but I am getting this error</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[71], line 36 27 df.to_csv('../output/training_jobs.cvs',index=False) 30 return df ---&gt; 36 completed_jobs = list_completed_training_jobs() 37 # df = create_training_name_cols(completed_jobs) Cell In[71], line 8, in list_completed_training_jobs() 2 &quot;&quot;&quot;Lists completed SageMaker training jobs.&quot;&quot;&quot; 4 sagemaker = boto3.client('sagemaker', region_name=&quot;us-east-1&quot;) 6 response = sagemaker.list_training_jobs( 7 StatusEquals='Completed', ----&gt; 8 CreationTimeAfter=datetime(2023, 1, 1), 9 ) 11 training_jobs = response['TrainingJobSummaries'] 13 while 'NextToken' in response: TypeError: 'module' object is not callable </code></pre> <p>Not sure what the problem is any suggestions are appreciated. Even if I remove the CreationTimeAfter parameter the code works but only returns training jobs from 2021 to 2022 but nothing for 2023 or 2024.</p>
<python><boto3><amazon-sagemaker>
2024-08-26 21:05:19
1
480
Wolfy
78,916,247
13,944,524
`assert_never()` fails on match-casing a custom class with an enum field
<p>Here is the code to reproduce:</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass from enum import StrEnum, auto from typing import assert_never class RGBColor(StrEnum): RED = auto() BLUE = auto() GREEN = auto() @dataclass class FooData: number: int color: RGBColor def fn() -&gt; FooData: return FooData(10, RGBColor.RED) obj = fn() match obj: case FooData(color=RGBColor.RED): print(&quot;case 1&quot;) case FooData(color=RGBColor.BLUE): print(&quot;case 2&quot;) case FooData(color=RGBColor.GREEN): print(&quot;case 3&quot;) case _ as never: assert_never(never) </code></pre> <p>Mypy error:</p> <pre class="lang-none prettyprint-override"><code>error: Argument 1 to &quot;assert_never&quot; has incompatible type &quot;FooData&quot;; expected &quot;Never&quot; [arg-type] </code></pre> <p>Pyright error:</p> <pre class="lang-none prettyprint-override"><code>Argument of type &quot;FooData&quot; cannot be assigned to parameter &quot;__arg&quot; of type &quot;Never&quot; in function &quot;assert_never&quot; Type &quot;FooData&quot; cannot be assigned to type &quot;Never&quot;PylancereportArgumentType </code></pre> <p>Is it something that I'm missing here? I think I covered all cases in the pattern-matching. <code>obj</code> is of a <code>FooData</code> type because it is returned from the <code>fn()</code> function and its <code>.color</code> attribute can be any of the <code>RGBColor</code>'s member.</p> <hr /> <h5>additional information:</h5> <p>Type checking passes if I only used the bare enum (without being an attribute of an object):</p> <pre class="lang-py prettyprint-override"><code>from enum import StrEnum, auto from typing import assert_never class RGBColor(StrEnum): RED = auto() BLUE = auto() GREEN = auto() def fn() -&gt; RGBColor: return RGBColor.RED obj = fn() match obj: case RGBColor.RED: print(&quot;case 1&quot;) case RGBColor.BLUE: print(&quot;case 2&quot;) case RGBColor.GREEN: print(&quot;case 3&quot;) case _ as never: assert_never(never) </code></pre>
<python><pattern-matching><python-typing><mypy><pyright>
2024-08-26 20:48:10
0
17,004
S.B
78,916,208
1,329,652
Whye does Python ctypes GetClassInfoW return corrupt class name?
<p><code>GetClassInfoW</code> accessed via ctypes on Python 3.12.5 returns a corrupt class name:</p> <pre><code> wndclassa = WNDCLASSA() user32.GetClassInfoA(hInstance, b'BarClass', wndclassa) print(wndclassa.lpszMenuName) print(wndclassa.lpszClassName) wndclassw = WNDCLASSW() user32.GetClassInfoW(hInstance, 'BarClass', wndclassw) print(wndclassw.lpszMenuName) print(wndclassw.lpszClassName) </code></pre> <p>In the output below, the 4th line is expected to read <code>BarClass</code>. The junk value differs between runs:</p> <pre><code>b'Foo' b'BarClass' Foo ⣰᭺ɒ </code></pre> <p>Am I doing something wrong, or is this a genuine ctypes bug?</p> <p>Complete reproducer:</p> <pre><code>from ctypes import * from ctypes import wintypes as w def errcheck(result,func,args): if result is None or result == 0: raise WinError(get_last_error()) return result WNDPROC = WINFUNCTYPE(c_int64,w.HWND,w.UINT,w.WPARAM,w.LPARAM) def WNDCLASS(strtype): class WNDCLASS(Structure): _fields_ = [('style', w.UINT), ('lpfnWndProc', WNDPROC), ('cbClsExtra', c_int), ('cbWndExtra', c_int), ('hInstance', w.HINSTANCE), ('hIcon', w.HICON), ('hCursor', w.HANDLE), ('hbrBackground', w.HBRUSH), ('lpszMenuName', strtype), ('lpszClassName', strtype)] return WNDCLASS kernel32 = WinDLL('kernel32',use_last_error=True) user32 = WinDLL('user32',use_last_error=True) kernel32.GetModuleHandleA.argtypes = w.LPCSTR, kernel32.GetModuleHandleA.restype = w.HMODULE kernel32.GetModuleHandleA.errcheck = errcheck hInstance = kernel32.GetModuleHandleA(None) WNDCLASSA = WNDCLASS(w.LPCSTR) WNDCLASSW = WNDCLASS(w.LPCWSTR) user32.RegisterClassA.argtypes = POINTER(WNDCLASSA), user32.RegisterClassA.restype = w.ATOM user32.RegisterClassA.errcheck = errcheck user32.GetClassInfoA.argtypes = w.HINSTANCE, w.LPCSTR, POINTER(WNDCLASSA) user32.GetClassInfoA.restype = w.BOOL user32.GetClassInfoA.errcheck = errcheck user32.GetClassInfoW.argtypes = w.HINSTANCE, w.LPCWSTR, POINTER(WNDCLASSW) user32.GetClassInfoW.restype = w.BOOL user32.GetClassInfoW.errcheck = errcheck wndclass = WNDCLASSA() wndclass.lpfnWndProc = WNDPROC(user32.DefWindowProcA) wndclass.hInstance = hInstance wndclass.lpszMenuName = b'Foo' wndclass.lpszClassName = b'BarClass' user32.RegisterClassA(byref(wndclass)) wndclassa = WNDCLASSA() user32.GetClassInfoA(hInstance, b'BarClass', wndclassa) print(wndclassa.lpszMenuName) print(wndclassa.lpszClassName) wndclassw = WNDCLASSW() user32.GetClassInfoW(hInstance, 'BarClass', wndclassw) print(wndclassw.lpszMenuName) print(wndclassw.lpszClassName) </code></pre>
<python><windows><winapi><ctypes>
2024-08-26 20:35:44
0
99,011
Kuba hasn't forgotten Monica
78,915,972
1,914,781
drop row with 3 columns value equal
<p>I would like to drop the row with all three columns have equal value. e.g.</p> <pre><code>import pandas as pd data = [ ['A',2,2,2], ['B',2,2,3], ['C',3,3,3], ['D',4,2,2], ['E',5,5,2] ] df = pd.DataFrame(data,columns=['name','val1','val2','val3']) print(df) </code></pre> <p>In above example, <code>row 0</code> and <code>row 2</code> will be dropped since the value is equal.</p>
<python><pandas>
2024-08-26 19:11:42
2
9,011
lucky1928
78,915,951
6,930,340
Find the index of the first non-null value in a column in a polars dataframe
<p>I need to find the first non-null value in a column over a grouped <code>pl.DataFrame</code>.</p> <pre><code>import polars as pl df = pl.DataFrame( { &quot;symbol&quot;: [&quot;s1&quot;, &quot;s1&quot;, &quot;s2&quot;, &quot;s2&quot;], &quot;trade&quot;: [None, 1, -1, None], } ) shape: (4, 2) ┌────────┬───────┐ │ symbol ┆ trade │ │ --- ┆ --- │ │ str ┆ i64 │ ╞════════╪═══════╡ │ s1 ┆ null │ │ s1 ┆ 1 │ │ s2 ┆ -1 │ │ s2 ┆ null │ └────────┴───────┘ </code></pre> <p>How can I get the row numbers/index values of the first non-null value in the <code>trade</code> columns while group_by <code>symbol</code>?</p> <p>I am actually looking for the row/index numbers <code>1</code> and <code>0</code>. Maybe the result could be something like this:</p> <pre><code>shape: (2, 2) ┌────────┬────────────────┐ │ symbol ┆ first-non-null │ │ --- ┆ --- │ │ str ┆ i64 │ ╞════════╪════════════════╡ │ s1 ┆ 1 │ │ s2 ┆ 0 │ └────────┴────────────────┘ </code></pre> <p>I am actually looking for the equivalent to <code>pd.first_valid_index()</code></p>
<python><dataframe><python-polars>
2024-08-26 19:04:18
1
5,167
Andi
78,915,940
1,473,517
Correct code to do golden section search over integers
<p>I have an expensive function <code>f</code> which is unimodal and I want to find its minimum. However f is only defined at integer values. I read that <a href="https://en.wikipedia.org/wiki/Golden-section_search" rel="nofollow noreferrer">golden section search</a> is the right thing to do. My implementation which ignores the integer restriction is:</p> <pre><code>def golden_section_search(f, a, b, tolerance=1e-6): &quot;&quot;&quot; Perform the Golden Section Search for finding the minimum of a unimodal function f. Parameters: f (function): The unimodal function to minimize. a (float): Left boundary of the interval. b (float): Right boundary of the interval. tolerance (float, optional): Tolerance for stopping criterion. Default is 1e-6. Returns: float: Argument that minimizes the function f within the interval [a, b]. &quot;&quot;&quot; phi = (math.sqrt(5) - 1) / 2 # Golden ratio constant # Initial points x1 = a + (1 - phi) * (b - a) x2 = a + phi * (b - a) # Initial function evaluations f_x1 = f(x1) f_x2 = f(x2) while (b - a) &gt; tolerance: print(a, b) if f_x1 &lt; f_x2: b = x2 # Discard the interval [x2, b] x2 = x1 # Move x2 to the left f_x2 = f_x1 # Reuse the already computed value # Recompute x1 and its function value x1 = a + (1 - phi) * (b - a) f_x1 = f(x1) else: a = x1 # Discard the interval [a, x1] x1 = x2 # Move x1 to the right f_x1 = f_x2 # Reuse the already computed value # Recompute x2 and its function value x2 = a + phi * (b - a) f_x2 = f(x2) # The minimum is at the midpoint of the final interval return (a + b) / 2 </code></pre> <p>As a test function let's use this (this is of course defined everywhere but we pretend it isn't):</p> <pre><code>def f(x): return (x - 3)**2 + 5 </code></pre> <p>To make sure the search only evaluates at integers I tried just doing round(x1) and round(x2) every time x1 and x2 are changed but that made a version that either gives the wrong answer or iterates forever. It gives the wrong answer if I set tolerance = 1 and iterates forever if I set tolerance = 0.9. I am using a = 0 and b = 5 in the function call.</p> <p>The true minimum is at x = 3.</p> <p>This is the code which only evaluates <code>f</code> at integer values but does not work:</p> <pre><code>def golden_section_search_integer(f, a, b, tolerance=1e-6): &quot;&quot;&quot; Perform the Golden Section Search for finding the minimum of a unimodal function f, but only evaluate f at integer values. Parameters: f (function): The unimodal function to minimize. a (int): Left boundary of the interval (integer). b (int): Right boundary of the interval (integer). tolerance (float, optional): Tolerance for stopping criterion. Default is 1e-6. Returns: int: Integer value that minimizes the function f within the interval [a, b]. &quot;&quot;&quot; phi = (math.sqrt(5) - 1) / 2 # Golden ratio constant # Initial points, rounded to integers x1 = round(a + (1 - phi) * (b - a)) x2 = round(a + phi * (b - a)) # Initial function evaluations f_x1 = f(x1) f_x2 = f(x2) while (b - a) &gt; tolerance: print(a, b, x1, x2) if f_x1 &lt; f_x2: b = x2 # Discard the interval [x2, b] x2 = x1 # Move x2 to the left f_x2 = f_x1 # Reuse the already computed value # Recompute x1, ensuring it's an integer, and its function value x1 = round(a + (1 - phi) * (b - a)) if x1 != x2: # Only evaluate if x1 is different from x2 f_x1 = f(x1) else: a = x1 # Discard the interval [a, x1] x1 = x2 # Move x1 to the right f_x1 = f_x2 # Reuse the already computed value # Recompute x2, ensuring it's an integer, and its function value x2 = round(a + phi * (b - a)) if x2 != x1: # Only evaluate if x2 is different from x1 f_x2 = f(x2) # The minimum is at the midpoint of the final interval, rounded to the nearest integer return round((a + b) / 2) </code></pre>
<python><optimization><minimization>
2024-08-26 18:59:30
1
21,513
Simd
78,915,880
4,473,615
Transpose a column in pandas DataFrame
<p>I have a below dataframe, I'm trying to transpose the data based on the column Place. For each list of value in Place column, I need to generate a each row.</p> <pre><code>Language Capital Place Tamil Chennai ['Chennai', 'Vellore', 'Trichy', 'Madurai'] Kerala Kochi ['Kochi', 'Trivandrum'] </code></pre> <p>Expected result</p> <pre><code>Language Capital Place Tamil Chennai Chennai Tamil Chennai Vellore Tamil Chennai Trichy Tamil Chennai Madurai Kerala Kochi Kochi Kerala Kochi Trivandrum </code></pre> <p>I have tried many ways, using Pandas transpose, but I am unable to get the expected result. I have also retrieved and converted Place column to series of dataframe, still unable to get the result.</p>
<python><pandas><dataframe><transpose>
2024-08-26 18:34:38
2
5,241
Jim Macaulay
78,915,747
2,383,070
How to right split n times in python polars dataframe (mimic pandas rsplit)
<p>I have a column of strings where the end portion has some information I need to parse into its own columns. Pandas has the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.rsplit.html" rel="nofollow noreferrer">rsplit</a> function to split a string from the right, which does exactly what I need:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame( { &quot;name&quot;: [ &quot;some_name_set_1&quot;, &quot;some_name_set_1b&quot;, &quot;some_other_name_set_2&quot;, &quot;yet_another_name_set_2&quot;, ] } ) df.to_pandas()[&quot;name&quot;].str.rsplit(&quot;_&quot;, n=2, expand=True) </code></pre> <pre><code> 0 1 2 0 some_name set 1 1 some_name set 1b 2 some_other_name set 2 3 yet_another_name set 2 </code></pre> <p>How can I mimic this behavior in Polars, which doesn't have an <code>rsplit</code> expression right now?</p>
<python><python-polars>
2024-08-26 17:53:38
3
3,511
blaylockbk
78,915,695
1,361,752
Python temporary directory "access is denied" error for external subprocesses on Windows
<p>I want to create a temporary folder on windows using <code>tempfile.TemporaryDirectory</code>, and use <code>subprocess</code> to execute an external program that will use that temporary directory. This does not seem to work starting in python 3.12 (I have done most testing in 3.10 without issue, and a quick check of 3.11 showed the problem wasn't there either).</p> <p>Some additional context: I'm on a corporate pc without admin rights. I'm also using a conda environment manager and launching python from the conda prompt. I and one other person at the company are seeing this problem.</p> <p>Does anyone know how I can do this on python 3.12+ on windows? Some details of the behavior and my testing follow.</p> <h1>Testing</h1> <h2>Test Script</h2> <p>Test script executed from a <code>conda</code> prompt, to try and understand the problem</p> <pre class="lang-py prettyprint-override"><code>from pathlib import Path import subprocess import tempfile with tempfile.TemporaryDirectory() as tmpdir: print(&quot;writing file using subprocess&quot;) subprocess.run(&quot;echo $null &gt;&gt; &quot; + str(Path(tmpdir)/ 'subproc_test.txt'), shell=True) print(&quot;writing file using open function&quot;) open(Path(tmpdir)/'open_test.txt','w').write(&quot;Some text&quot;) programPause = input(f&quot;Paused so you can inspect the tmpdir {tmpdir}.\n&quot; &quot;Press the &lt;ENTER&gt; key to continue...&quot;) </code></pre> <h2>Running in python 3.10</h2> <p>(Also quickly tested in python 3.11, behavior seems the same)</p> <p>The behavior seems correct in this case. I see the following output</p> <pre><code>writing file using subprocess writing file using open function Paused so you can inspect the tmpdir C:\Users\cpl29573\AppData\Local\Temp\tmpe7sq8m6g. Press the &lt;ENTER&gt; key to continue... </code></pre> <p>Additionally, I can navigate to the temp folder (while the python program is paused, before file deletion), and read the two created text files <code>subproc_test.txt</code> and <code>open_test.txt</code>.</p> <h2>Running in python 3.12</h2> <p>The subprocess cannot access the temporary folder, but the <code>open</code> function within the python process can. I see the following output. Note the <code>Access is denied</code> error while writing the file using the subprocess.</p> <pre><code>writing file using subprocess Access is denied. writing file using open function Paused so you can inspect the tmpdir C:\Users\cpl29573\AppData\Local\Temp\tmpwft8ou2y. Press the &lt;ENTER&gt; key to continue... </code></pre> <p>If I try to navigate to the temporary directory in Windows file explorer, I get a pop up stating &quot;You don't currently have permission to access this folder&quot;. I had no problem opening the folder in the python 3.10 test. If I look in the folder (using temporary admin permissions), I see that the <code>open_test.txt</code> file is there, but not the <code>subproc_test.txt</code> file.</p> <h2>Additional Comments</h2> <ul> <li>Regardless on if I run from python 3.10 or python 3.12, the folder seems to be owned by &quot;Administrators&quot;.</li> <li>Details on the &quot;Advanced Security Settings&quot; pop up dialog (note, I am not much of an expert on Windows security terms and settings, I'm more of a Linux guy): <ul> <li>On python 3.10, the temp directory is listed as inheriting permissions from my home directory. On python 3.12, the &quot;Inherits from&quot; field is blank.</li> <li>On python 3.10, I'm listed as a &quot;Principal&quot; under permissions, but on python 3.12, the same entry is filled with &quot;OWNER RIGHTS&quot;.</li> </ul> </li> </ul>
<python><windows><subprocess><temporary-files><python-3.12>
2024-08-26 17:37:36
0
4,167
Caleb
78,915,581
14,130,365
Extracting text from a pdf file with differents strcuture failed how to properly do it Not all texts is extracted , just a portion is extracted
<p>I am trying to extract text from CV in pdf extension. I come up with this script but I have a problem. The script does not extract all the text and I have problem to identify different block of the document. Here is the script below following by the result and the pdf file as an example.</p> <pre><code> import fitz # PyMuPDF import json import os import re import shutil import glob def extract_blocks_from_pdf(file_path): document = fitz.open(file_path) blocks = [] for page_num in range(len(document)): page = document.load_page(page_num) blocks += page.get_text(&quot;blocks&quot;) # Récupérer les blocs de texte return blocks def classify_blocks(blocks): sections = { &quot;Résumé de carrière&quot;: [], &quot;Compétences&quot;: [], &quot;Rôle&quot;: [], &quot;Formations&quot;: [], &quot;Expériences Professionnelles&quot;: [], &quot;Autres Compétences&quot;: [] } current_section = None for block in blocks: print(type(block)) text = block[-3].strip() if re.search(r'Résumé de carrière', text, re.IGNORECASE): current_section = &quot;Résumé de carrière&quot; elif re.search(r'Rôle', text, re.IGNORECASE): current_section = &quot;Rôle&quot; elif re.search(r'Compétences', text, re.IGNORECASE): current_section = &quot;Compétences&quot; elif re.search(r'Formations', text, re.IGNORECASE): current_section = &quot;Formations&quot; elif re.search(r'Expériences Professionnelles', text, re.IGNORECASE): current_section = &quot;Expériences Professionnelles&quot; elif re.search(r'Autres Compétences', text, re.IGNORECASE): current_section = &quot;Autres Compétences&quot; elif current_section and text: sections[current_section].append(text) return sections def process_pdfs_to_json(pdf_files, output_dir): if not os.path.exists(output_dir): os.makedirs(output_dir) else: items = os.listdir(output_dir) if items: shutil.rmtree(output_dir) os.makedirs(output_dir) for pdf_file in pdf_files: blocks = extract_blocks_from_pdf(pdf_file) structured_blocks = classify_blocks(blocks) file_name = os.path.basename(pdf_file).replace('.pdf', '.json') json_path = os.path.join(output_dir, file_name) with open(json_path, 'w', encoding='utf-8') as json_file: json.dump(structured_blocks, json_file, ensure_ascii=False, indent=4) def read_json(json_path): with open(json_path, 'r', encoding='utf-8') as json_file: data = json.load(json_file) return data # Example of usage: pdf_dir = &quot;test&quot; pdf_files = glob.glob(os.path.join(pdf_dir, &quot;*.pdf&quot;)) output_dir = &quot;structured_cv_jsons&quot; process_pdfs_to_json(pdf_files, output_dir) </code></pre> <p>the variable pdf_files contains different pdf files.</p> <p>Here are the result of a single file</p> <pre><code> { &quot;Résumé de carrière&quot;: [ &quot;Technicien de support applicatif/fonctionnel N2+, Technicien de support \napplicatif/fonctionnel N2+, Coach CV&quot; ], &quot;Compétences&quot;: [], &quot;Rôle&quot;: [ &quot;• \nTraitement des tickets et des incidents&quot;, &quot;• \nAide à la diminution du volume de tickets entrants et des temps de traitement des incidents&quot;, &quot;• \nConception et construction de solutions agiles répondant à des cas d'usages fournis par la \n Power Platform Academy&quot;, &quot;• \nÉvaluation et amélioration continue des modules Microsoft Power Platform&quot;, &quot;• \nTraitement des tickets et des incidents&quot;, &quot;• \nIndustrialisation et automatisation des audits parties fixe et mobile&quot;, &quot;ISEP | ISEP \njanvier 2018 – juin 2018 (6 mois) \nProjet : Cybersécurité \nFormation Architecture Cybersécurité et intégration de composants de sécurité \nMission : \nPanorama : \nRGPD, Ecosystème du crime, Logique de l'attaquant, Sureté - Sécurité - Données, Ecosystème de la sécurité, Normes ISO \n27001, 27005, Production - Exploitation &amp; ROI, Bonnes pratiques &amp; historique, \nRetours d'expérience \nArchitecture : \nLes architectures, Architectures réseaux, Architectures applicative \nIntégration : \nInfrastructures Sécurisées, Serveurs, Terminaux, Techniques pour l'intégration \nProjet : \nArchitecture, Intégration, Management&quot;, &quot;3/3 | Curriculum Vitae&quot;, &quot;IEP | IEP \njanvier 2018 – juin 2018 (6 mois) \nProjet : Cybersécurité \nFormation Architecture Cybersécurité et intégration de composants de sécurité \nMission : \nPanorama : \nRGPD, Ecosystème du crime, Logique de l'attaquant, Sureté - Sécurité - Données, Ecosystème de la sécurité, Normes ISO \n27001, 27005, Production - Exploitation &amp; ROI, Bonnes pratiques &amp; historiqu&quot; ], &quot;Formations&quot;: [ &quot;• \nCertification Tricentis TOSCA : \nAutomatisation des tests&quot;, &quot;• \nISEP - 2 BADGES - Architecture \nCybersécurité et Intégration de \ncomposants de sécurité (labellisée \nSecNumedu et reconnue CNCP)&quot;, &quot;2/3 | Curriculum Vitae&quot; ], &quot;Expériences Professionnelles&quot;: [], &quot;Autres Compétences&quot;: [] } </code></pre> <p>As you can see the not all sections are included even all the texts. Could you help me figure it out how to properly done it.</p> <p><a href="https://filetransfer.io/data-package/HJs8TzKQ#link" rel="nofollow noreferrer"><strong>here is a link of the pdf folder containing one test example of the pdf used.</strong> </a></p> <p>I try to extract text block by block but not all the texte are extracted.</p>
<python><pdf><text-extraction><pymupdf>
2024-08-26 17:01:23
0
363
emma
78,915,499
2,223,505
Poetry Pytest - ModuleNotFoundError databricks extras
<p>I hit a confusing difficulty running pytest where a package has a 'package extras' dependency:</p> <pre class="lang-bash prettyprint-override"><code>poetry add &quot;databricks-sql-connector[sqlalchemy]&quot; </code></pre> <p>This module is imported in my package like so:</p> <pre class="lang-py prettyprint-override"><code>from databricks import sqlalchemy # lots of code to query databricks </code></pre> <p>The package works properly, however when I run the same functions in a pytest file I get error:</p> <pre class="lang-bash prettyprint-override"><code> File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1204, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1176, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1135, in _find_and_load_unlocked ModuleNotFoundError: No module named 'databricks.sqlalchemy'; 'databricks' is not a package </code></pre> <p>My package directory is laid out like:</p> <pre class="lang-ini prettyprint-override"><code>. ├── src ├── tests │ ├── __init__.py │ └── test_dbx.py └── pyproject.toml # and pyproject.toml (snippets) [tool.poetry] packages = [{include = &quot;src&quot;}] [tool.poetry.group.dev.dependencies] pytest = &quot;^7.1.3&quot; [tool.pytest.ini_options] pythonpath = [ &quot;src&quot;, &quot;tests&quot; ] </code></pre> <p>All functionality and pytest works fine, except for the newly added databricks functions.</p>
<python><pytest><databricks><python-poetry>
2024-08-26 16:37:32
1
2,017
Merlin
78,915,360
769,933
create polars array series sharing data with numpy array, with unpredictable offsets
<p>I'm working with &quot;records&quot; that are represented by ~1,000 sample arrays. I have millions of these records, so they take up a fairly significant amount of memory, often exceeding what is available on a pc if one is not careful. I would like to work with polars dataframe with a column with the array datatype containing these records. I would like to have this column share memory with an existing numpy array that contains these records at various unpredictable (not strided) offsets.</p> <p>In the following example I show a reproducible example of creating python list of the records that shares memory with <code>source_data</code>. Then I show a failed attempt to create a polars <code>Series</code> containing the same information, also sharing memory with <code>source_data</code>. How can I create a polars <code>Series</code> containing the same information, also sharing memory with <code>source_data</code>?</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import polars as pl record_len = 1000 # record length source_data = np.arange(1000000, dtype=np.int64) # all records exist in this array with random offsets inds0 = np.arange(0,len(source_data), 10000) # temoporary variable to calculate inds inds = inds0+np.random.randint(0,1000, len(inds0)) # mimic unpredictability of record offsets # make a python list of &quot;records&quot; sharing memory with source_data records = [source_data[ind:ind+record_len] for ind in inds] # confirm that every entry in records shares memory with source_data assert all(np.shares_memory(records[i], source_data) for i in range(len(inds))) # make a polars seriesof &quot;records&quot; from that list # the goal is to also share memory with &quot;source_data&quot; series = pl.Series(&quot;record&quot;, records, dtype=pl.Array(pl.Int64, record_len)) # attempt to confirm that memory is shared # this is likely not the correct test, since polars uses pyarrow internally # it is likely a test based on pyarrow is more appropriate # but the question asker does not know the correct test # test fails assert all(np.shares_memory(series[i].to_numpy(), source_data) for i in range(len(inds))) </code></pre>
<python><arrays><numpy><python-polars><pyarrow>
2024-08-26 16:02:18
2
2,396
gggg
78,915,342
6,654,730
Multi-architecture docker image with Python / Selenium / Chrome or Firefox
<p>I'm trying to install selenium / chromedriver on my Python image for an app. I've tried many different iterations, but nothing seems to work.</p> <p>requirements.txt</p> <pre><code>selenium==4.23.1 webdriver-manager==4.0.2 </code></pre> <p>I'm on Apple silicon. This is my Dockerfile. It's a bit of a mess now, because I've added all kinds of dependencies as nothing seems to have worked.</p> <pre><code>FROM --platform=linux/amd64 python:3.12-slim WORKDIR /app RUN apt-get update &amp;&amp; apt-get install -y \ wget \ curl \ gnupg \ unzip \ apt-transport-https \ ca-certificates \ libnss3 \ libgconf-2-4 \ libx11-xcb1 \ libxcomposite1 \ libxcursor1 \ libxdamage1 \ libxi6 \ libxtst6 \ libxrandr2 \ libasound2 \ libpango1.0-0 \ libpangocairo-1.0-0 \ libcups2 \ libxss1 \ libgtk-3-0 \ --no-install-recommends RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb RUN dpkg -i google-chrome-stable_current_amd64.deb || true # Fix broken dependencies (if any) RUN apt-get -f install -y COPY requirements.txt /app/ RUN pip install --no-cache-dir -r requirements.txt COPY . /app/ ENV PYTHONUNBUFFERED=1 \ PYTHONDONTWRITEBYTECODE=1 \ LANG=C.UTF-8 \ LC_ALL=C.UTF-8 </code></pre> <p>Build the image, start the container:</p> <pre><code>docker build . -t my-image docker run -it --shm-size=2g my-image </code></pre> <p>In the Python shell:</p> <pre><code>import time from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from webdriver_manager.chrome import ChromeDriverManager chrome_options = Options() chrome_options.add_argument(&quot;--headless&quot;) chrome_options.add_argument(&quot;--no-sandbox&quot;) chrome_options.add_argument(&quot;--disable-gpu&quot;) chrome_options.add_argument(&quot;--disable-dev-shm-usage&quot;) service = Service(ChromeDriverManager().install()) driver = webdriver.Chrome(service=service, options=chrome_options) </code></pre> <p>The error is:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/usr/local/lib/python3.12/site-packages/selenium/webdriver/chrome/webdriver.py&quot;, line 45, in __init__ super().__init__( File &quot;/usr/local/lib/python3.12/site-packages/selenium/webdriver/chromium/webdriver.py&quot;, line 66, in __init__ super().__init__(command_executor=executor, options=options) File &quot;/usr/local/lib/python3.12/site-packages/selenium/webdriver/remote/webdriver.py&quot;, line 212, in __init__ self.start_session(capabilities) File &quot;/usr/local/lib/python3.12/site-packages/selenium/webdriver/remote/webdriver.py&quot;, line 299, in start_session response = self.execute(Command.NEW_SESSION, caps)[&quot;value&quot;] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.12/site-packages/selenium/webdriver/remote/webdriver.py&quot;, line 354, in execute self.error_handler.check_response(response) File &quot;/usr/local/lib/python3.12/site-packages/selenium/webdriver/remote/errorhandler.py&quot;, line 229, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.WebDriverException: Message: disconnected: Unable to receive message from renderer (failed to check if window was closed: disconnected: not connected to DevTools) (Session info: chrome=128.0.6613.84) Stacktrace: #0 0x555555daa81a &lt;unknown&gt; #1 0x555555a78e50 &lt;unknown&gt; #2 0x555555a60e20 &lt;unknown&gt; #3 0x555555a5eb11 &lt;unknown&gt; #4 0x555555a5f31f &lt;unknown&gt; #5 0x555555a799f1 &lt;unknown&gt; #6 0x555555a4de89 &lt;unknown&gt; #7 0x555555a4d7d6 &lt;unknown&gt; #8 0x555555af99db &lt;unknown&gt; #9 0x555555af8e66 &lt;unknown&gt; #10 0x555555aed233 &lt;unknown&gt; #11 0x555555abb093 &lt;unknown&gt; #12 0x555555abc09e &lt;unknown&gt; #13 0x555555d71a7b &lt;unknown&gt; #14 0x555555d75a31 &lt;unknown&gt; #15 0x555555d5d645 &lt;unknown&gt; #16 0x555555d765a2 &lt;unknown&gt; #17 0x555555d4281f &lt;unknown&gt; #18 0x555555d99618 &lt;unknown&gt; #19 0x555555d997e2 &lt;unknown&gt; #20 0x555555da960c &lt;unknown&gt; #21 0x2aaaab7ab134 &lt;unknown&gt; </code></pre>
<python><selenium-webdriver>
2024-08-26 15:57:37
2
7,670
M3RS