QuestionId: int64 (74.8M to 79.8M)
UserId: int64 (56 to 29.4M)
QuestionTitle: string (length 15 to 150)
QuestionBody: string (length 40 to 40.3k)
Tags: string (length 8 to 101)
CreationDate: date (2022-12-10 09:42:47 to 2025-11-01 19:08:18)
AnswerCount: int64 (0 to 44)
UserExpertiseLevel: int64 (301 to 888k)
UserDisplayName: string (length 3 to 30)
79,210,075
13,238,846
How to get timestamps in Azure Text to Speech synthesized audio
<p>I am trying to get timestamps for the generated audio using Azure Text to Speech. I have configured the speech config, but I can't find any property related to timestamps in the response object. The following is my code.</p> <pre><code>speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region) speech_config.speech_synthesis_voice_name = &quot;en-US-AndrewMultilingualNeural&quot; speech_config.request_word_level_timestamps() text = &quot;Hi&quot; # use the default speaker as audio output. speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config) result = speech_synthesizer.speak_text_async(text).get() print(result.properties) </code></pre>
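Editor's sketch of one possible approach (not from the original post): `request_word_level_timestamps()` applies to speech *recognition*; for synthesis, the Speech SDK delivers timing through the synthesizer's word-boundary event, whose `audio_offset` is expressed in 100-nanosecond ticks. The voice and key/region values mirror the question; the helper names are made up.

```python
# Sketch: collect word-level timestamps from Azure TTS via the
# synthesis_word_boundary event. Offsets arrive in 100-ns ticks.

TICKS_PER_SECOND = 10_000_000  # WordBoundary audio_offset unit is 100 ns


def ticks_to_seconds(ticks: int) -> float:
    """Convert a WordBoundary audio offset (100-ns ticks) to seconds."""
    return ticks / TICKS_PER_SECOND


def synthesize_with_timestamps(speech_key: str, service_region: str, text: str):
    import azure.cognitiveservices.speech as speechsdk

    speech_config = speechsdk.SpeechConfig(subscription=speech_key,
                                           region=service_region)
    speech_config.speech_synthesis_voice_name = "en-US-AndrewMultilingualNeural"
    synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)

    timestamps = []

    def on_word_boundary(evt):
        # evt.audio_offset is in 100-ns ticks; evt.text is the spoken word
        timestamps.append((evt.text, ticks_to_seconds(evt.audio_offset)))

    synthesizer.synthesis_word_boundary.connect(on_word_boundary)
    result = synthesizer.speak_text_async(text).get()
    return result, timestamps
```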
<python><azure><text-to-speech><azure-speech>
2024-11-21 07:21:22
1
427
Axen_Rangs
79,209,946
2,998,077
Python and pandas: writing a large number of dataframes
<p>A script generates 50000 dataframes of the structure below and saves them one by one to local disk. To improve efficiency, I changed the output format from Excel to Parquet, but it does not seem to run any faster.</p> <p>The 50000 dataframes are then looped over to filter the rows where the Meet column is 4 or 5, and the result is saved in a final txt file.</p> <p>What is a better solution (to end up with a single final file containing only the rows with 4 or 5 in Meet)?</p> <p>I'm thinking that when generating the 50000 dataframes, each one could filter its rows (4 or 5 in Meet) instead of keeping all rows. And, instead of writing 50000 dataframes, the script could write directly to the final txt file, skipping the step of writing each small dataframe as a small file to local disk.</p> <p>The number of rows may reach millions. I'm not sure a normal laptop can handle that (Win11, 16GB RAM, no internet connection).</p> <pre><code> Dict DT Length Meet 0 {'key_0': 45, 'key_1': 67} 2023-10-15 14:32:10 15 5 1 {'key_0': 12, 'key_1': 34} 2023-10-12 09:15:45 19 3 2 {'key_0': 56, 'key_1': 89} 2023-10-20 11:45:30 13 7 3 {'key_0': 23, 'key_1': 45} 2023-10-05 08:20:00 17 4 4 {'key_0': 78, 'key_1': 12} 2023-10-10 16:05:55 10 6 </code></pre> <p>Due to the length of the code (1315 lines) and privacy, I'm sorry that I cannot paste it here. I have tried writing to one final dataframe directly, but power was accidentally lost once and everything had to be rerun.</p> <pre><code>big_df = [] ...... - - - lines to generate df_small - - - df_small = df_small[df_small['Meet'].isin([4,5])] big_df.append(df_small) writing_df = pd.concat(big_df, ignore_index=True) writing_df.to_excel('final.xlsx', index=False) </code></pre>
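A sketch of the streaming approach described in the question's last paragraphs (function and file names are illustrative): filter each small dataframe as it is produced and append it to a single delimited text file, so nothing large ever sits in memory and no 50000 intermediate files are written.

```python
import pandas as pd

OUT_PATH = "final.txt"  # hypothetical output path


def append_filtered(df_small: pd.DataFrame, out_path: str, first: bool) -> None:
    """Keep only rows whose Meet is 4 or 5 and append them to one file.

    Writing in append mode avoids holding millions of rows in RAM and
    avoids intermediate files; the header is written only once.
    """
    filtered = df_small[df_small["Meet"].isin([4, 5])]
    filtered.to_csv(out_path, mode="w" if first else "a",
                    header=first, index=False, sep="\t")


# usage sketch: inside the existing generation loop
# for i in range(50000):
#     df_small = make_df()            # your existing generation code
#     append_filtered(df_small, OUT_PATH, first=(i == 0))
```

Restartability after a power loss can then be handled by recording the last chunk index written, rather than re-running everything.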
<python><dataframe>
2024-11-21 06:34:26
4
9,496
Mark K
79,209,922
294,103
Fetching multiple urls with aiohttp from a loop/generator
<p>All the examples I saw for fetching multiple urls with <code>aiohttp</code> suggest doing the following:</p> <pre><code>async def fetch(session, url): async with session.get(url, ssl=ssl.SSLContext()) as response: return await response.json() async def fetch_all(urls, loop): async with aiohttp.ClientSession(loop=loop) as session: results = await asyncio.gather(*[fetch(session, url) for url in urls], return_exceptions=True) return results if __name__ == '__main__': loop = asyncio.get_event_loop() urls = url_list htmls = loop.run_until_complete(fetch_all(urls, loop)) print(htmls) </code></pre> <p>(<a href="https://stackoverflow.com/a/51728016/294103">https://stackoverflow.com/a/51728016/294103</a>)</p> <p>In practice, however, I typically have a generator (which can also be async) returning domain objects from the db, one attribute of which is a url, but I also need access to the other attributes later in the loop:</p> <pre><code>async for domain_obj in generator: url = domain_obj.url response = xxx # need to fetch single url here in async manner # do something with response </code></pre> <p>Of course I can batch-collect domain_objs in a list and fetch all of them as in the example, but this doesn't feel right.</p>
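A runnable sketch of the per-item pattern (the network call is stubbed out; with real aiohttp, `fetch_one` would do `async with session.get(url) as resp: return await resp.json()`): create a task per object while iterating the async generator, cap concurrency with a semaphore, and keep each `domain_obj` paired with its response so its other attributes remain available.

```python
import asyncio


async def fetch_one(session, url):
    # stand-in for: async with session.get(url) as resp: return await resp.json()
    await asyncio.sleep(0)          # pretend network I/O
    return {"url": url}


async def process_stream(generator, session, limit: int = 10):
    """Fetch while iterating an async generator of domain objects.

    A semaphore caps in-flight requests; each task carries its domain_obj,
    so the object's other attributes stay next to the response.
    """
    sem = asyncio.Semaphore(limit)

    async def worker(domain_obj):
        async with sem:
            response = await fetch_one(session, domain_obj.url)
        return domain_obj, response   # pair object with its response

    # tasks start as soon as the generator yields each object
    tasks = [asyncio.create_task(worker(obj)) async for obj in generator]
    return await asyncio.gather(*tasks)
```

`gather` preserves the task order, so results come back in generator order even though fetches overlap.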
<python><python-asyncio><aiohttp>
2024-11-21 06:26:04
1
5,784
dragoon
79,209,872
178,732
Function with Python type hint which accepts a class A and outputs a class that inherits from A
<p>I am trying to implement a decorator that looks like</p> <pre class="lang-py prettyprint-override"><code>def decorator(cls): class _Wrapper(cls): def func(self): super().func() print(&quot;patched!&quot;) return _Wrapper </code></pre> <p>and I am wondering how to hint <code>cls</code> and the return value type. I tried using <code>_Class=TypeVar(&quot;_Class&quot;)</code> and <code>cls: Type[_Class]</code>, but mypy complained that the variable cls is not valid as a type. I also do not know how to hint the return type.</p>
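One way that type-checks in practice (a sketch, not the only idiom): keep a plain `TypeVar`, annotate the decorator as `type[T] -> type[T]`, and acknowledge that mypy cannot subclass a type variable by silencing that one line and casting the result back.

```python
from typing import TypeVar, cast

T = TypeVar("T")


def decorator(cls: type[T]) -> type[T]:
    """Class decorator hinted as type[T] -> type[T].

    mypy cannot subclass a type variable directly, so the dynamic
    subclass is created under an ignore and cast back to type[T].
    """
    class _Wrapper(cls):  # type: ignore[misc, valid-type]
        def func(self):
            super().func()
            print("patched!")

    return cast("type[T]", _Wrapper)
```

Strictly the return value is a subtype of `T`, but `type[T] -> type[T]` is usually the most useful signature for callers, since the decorated name still behaves like the original class.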
<python><python-typing>
2024-11-21 06:03:39
2
24,932
xis
79,209,784
9,542,989
`ConversationSummaryBufferMemory` is not fully defined; you should define `BaseCache`, then call `ConversationSummaryBufferMemory.model_rebuild()`
<p>I am attempting to use LangChain's <code>ConversationSummaryBufferMemory</code> and running into this error:</p> <pre><code>pydantic.errors.PydanticUserError: `ConversationSummaryBufferMemory` is not fully defined; you should define `BaseCache`, then call `ConversationSummaryBufferMemory.model_rebuild()`. </code></pre> <p>This is what my code looks like:</p> <pre><code>memory = ConversationSummaryBufferMemory( llm=llm, input_key=&quot;input&quot;, output_key=&quot;output&quot;, max_token_limit=args.get(&quot;max_tokens&quot;, DEFAULT_MAX_TOKENS), memory_key=&quot;chat_history&quot;, ) </code></pre> <p>I am using <code>langchain==0.3.7</code>.</p> <p>Has anyone else encountered this?</p>
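A commonly reported workaround for this error with langchain 0.3.x (hedged: the exact import paths can vary between releases) is to make the forward-referenced types importable and rebuild the pydantic model before constructing the memory object:

```python
# Workaround reported for langchain 0.3.x: import the forward-referenced
# types so pydantic can resolve them, then rebuild the model.
from langchain.memory import ConversationSummaryBufferMemory
from langchain_core.caches import BaseCache          # noqa: F401
from langchain_core.callbacks import Callbacks       # noqa: F401

ConversationSummaryBufferMemory.model_rebuild()

# after this, constructing ConversationSummaryBufferMemory(...) as in the
# question should no longer raise PydanticUserError
```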
<python><langchain><py-langchain>
2024-11-21 05:29:44
1
2,115
Minura Punchihewa
79,209,754
98,494
Django: Objects that live throughout server lifetime
<p>I have some data in csv format that I need to load into the application once and then reuse it throughout the lifetime of the application, which is across multiple requests. How can I do that?</p> <p>An obvious method for me is to have a module that will load the data and then expose it. However, I don't like modules doing a lot of work, because then imports lead to unexpected side effects. I would like to do that one-time work in a predictable, deterministic fashion, not because someone imported a module.</p> <p>Does Django provide some hooks for globals? Some kind of Application/Service class where I could do that work and then access the data in the requests?</p>
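Django's documented hook for one-time startup work is `AppConfig.ready()`; the loading itself can live in a lazily cached function so the work happens exactly once per process and deterministically. A framework-free sketch (the file name and schema are placeholders):

```python
import csv
from functools import lru_cache
from pathlib import Path


@lru_cache(maxsize=1)
def get_reference_data(path: str = "reference.csv") -> list[dict]:
    """Load the CSV exactly once per process; later calls reuse the result.

    In Django, calling this from AppConfig.ready() forces the one-time
    load at startup instead of on the first request, keeping the side
    effect out of module import.
    """
    with Path(path).open(newline="") as fh:
        return list(csv.DictReader(fh))
```

Views then just call `get_reference_data()`; the cache makes repeated calls free, and tests can call `get_reference_data.cache_clear()` to reset it.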
<python><django>
2024-11-21 05:16:00
1
42,356
gruszczy
79,209,753
8,742,237
llama-index RAG: how to display retrieved context?
<p>I am using LlamaIndex to perform retrieval-augmented generation (RAG).</p> <p>Currently, I can retrieve and answer questions using the following minimal 5 line example, from <a href="https://docs.llamaindex.ai/en/stable/getting_started/starter_example/" rel="nofollow noreferrer">https://docs.llamaindex.ai/en/stable/getting_started/starter_example/</a>:</p> <pre class="lang-py prettyprint-override"><code>from llama_index import VectorStoreIndex, SimpleDirectoryReader documents = SimpleDirectoryReader(&quot;data&quot;).load_data() index = VectorStoreIndex.from_documents(documents) query_engine = index.as_query_engine() response = query_engine.query(&quot;What did the author do growing up?&quot;) print(response) </code></pre> <p>This returns an answer, but I would like to display the retrieved context (e.g., the document chunks or sources) before the answer.</p> <p>Desired output format would look something like:</p> <pre><code>Here's my retrieved context: [x] [y] [z] And here's my answer: [answer] </code></pre> <p>What is the simplest reproducible way to modify the 5 line example to achieve this?</p>
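With recent LlamaIndex versions the retrieved chunks ride along on the response as `source_nodes` (a list of `NodeWithScore`). A sketch extending the starter example (newer releases import from `llama_index.core` rather than `llama_index`; attribute names may shift between versions):

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")

# the retrieved chunks live on response.source_nodes (NodeWithScore objects)
print("Here's my retrieved context:")
for nws in response.source_nodes:
    print(f"[score={nws.score}] {nws.node.get_text()[:200]}")

print("And here's my answer:")
print(response)
```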
<python><large-language-model><llama-index><rag>
2024-11-21 05:15:51
1
1,802
Jeremy K.
79,209,572
8,055,123
Log Pearson3 distribution fitting using scipy and lmoments3 in Python
<p>I am trying to find a way to fit a Log-Pearson3 distribution to my streamflow data, but I can't work out how. Any tips will be much appreciated. Here is the problem:</p> <p>Both the scipy and lmoments3 packages have Pearson3, but they don't have a Log-Pearson3 distribution to fit. scipy uses the Maximum Likelihood Estimation (MLE) method to fit the distribution, and the lmoments3 package fits distributions using the L-moments method. But as said, they both only have the Pearson3 distribution in their lists of statistical distributions.</p> <p>I calculate the Annual Maximum Flow (AMF) data, which returns a time series of, say, 50 years of streamflow data. Then I use the scipy and lmoments3 packages to fit distributions. I thought that if I took the log of my AMF, fitted Pearson3, and then took the antilog at the end, it would be like fitting Log-Pearson3, but it seems it's not! There are differences in how parameters are estimated in Pearson3 and Log-Pearson3, and I can't find any proper guide online.</p> <p>Any thoughts on this?
Below is the code I use:</p> <pre><code>import os import numpy as np import pandas as pd import scipy.stats as st import lmoments3 as lm from lmoments3 import distr, stats # Define the folder path and get the list of CSV files folder_path = '.../daily_ts/' # folder_path = '.../all_sites/' csv_files = sorted([f for f in os.listdir(folder_path) if f.endswith('.csv')]) # Iterate over each file and process the data for i, file in enumerate(csv_files[5:6]): station_code = file.split('_')[0] # Read the data df = pd.read_csv(os.path.join(folder_path, file), skiprows=27, names=['Date', 'Flow (ML)', 'Bureau QCode']) station_code = file.split('_')[0] # Process the data df = df.dropna(subset=['Flow (ML)']) # Ensure 'Flow (ML)' has no NaN values df['Date'] = pd.to_datetime(df['Date']) # Ensure the 'Date' column is in datetime format df['Year'] = df['Date'].dt.year # Extract the year from the 'Date' column and add it as a new column df_max_flow = df.loc[df.groupby('Year')['Flow (ML)'].idxmax()] # Max daily per year df_max_flow_sorted = df_max_flow.sort_values(by='Flow (ML)', ascending=False) # Sort by 'Flow (ML)' in descending order df_max_flow_sorted['Rank'] = range(1, len(df_max_flow_sorted) + 1) df_max_flow_sorted['ARI_Empir'] = (df_max_flow_sorted['Rank'].iloc[-1] + 1) / df_max_flow_sorted['Rank'] df_max_flow_sorted['AEP_Empir'] = 1 / df_max_flow_sorted['ARI_Empir'] data = df_max_flow_sorted['Flow (ML)'].values # Fit Log-Pearson3 using Maximum Likelihood Estimation method (MLE) dist_name = 'pearson3' # Use getattr to dynamically get the fitting method command scipy_dist_fit = getattr(st, dist_name) # Calculate natural log of the data log_data = np.log(data) param = scipy_dist_fit.fit(log_data) # Applying the Kolmogorov-Smirnov test ks_stat, p_val = st.kstest(log_data, dist_name, args=param) print(f&quot;parameters: {param}&quot;) print(f&quot;ks_stat, p_val: {ks_stat, p_val}&quot;) ARI_dict = {} AEP_dict = {} data_dict = {} logdata_dict = {} # test the results of ARI and AEP according to the fitted distribution loc = param[1] scale = param[2] shape = param[0] # get the attribute to run SciPy stat in the loop scipy_dist_fit = getattr(st, dist_name) # run the SciPy stat fitted_dist = scipy_dist_fit(shape, loc=loc, scale=scale) # Calculate the return period and AEP based on the fitted_dist and add relevant columns to the table AEP = 1 - fitted_dist.cdf(log_data) ARI = 1 / AEP # Store AEP and ARI in a dictionary AEP_dict['AEP_LP3_MLE'] = np.sort(AEP) ARI_dict['ARI_LP3_MLE'] = np.sort(ARI)[::-1] data_dict['data'] = data logdata_dict['log_data'] = log_data AEP_df = pd.DataFrame(AEP_dict) ARI_df = pd.DataFrame(ARI_dict) data_df = pd.DataFrame(data_dict) logdata_df = pd.DataFrame(logdata_dict) # Reset indices of all DataFrames to ensure they align correctly AEP_df = AEP_df.reset_index(drop=True) ARI_df = ARI_df.reset_index(drop=True) df_max_flow_sorted = df_max_flow_sorted.reset_index(drop=True) data_df = data_df.reset_index(drop=True) logdata_df = logdata_df.reset_index(drop=True) df_fitDist = pd.concat([df_max_flow_sorted, AEP_df, ARI_df], axis=1) LP3_FFA = pd.concat([data_df, logdata_df, AEP_df, ARI_df], axis=1) # data = fitted_dist.ppf(1 - AEP) # Inverse of CDF flood_100year = np.exp(fitted_dist.ppf(1 - 0.01)) # flood_100year =fitted_dist.ppf(1 - 0.01) print(f&quot;100-Year Flood = {flood_100year}&quot;) flood_50year = np.exp(fitted_dist.ppf(1 - 0.02)) # flood_50year = fitted_dist.ppf(1 - 0.02) print(f&quot;50-Year Flood = {flood_50year}&quot;) </code></pre>
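For context, the conventional Log-Pearson III procedure (e.g. USGS Bulletin 17B) fits Pearson III to log10 of the flows by the method of moments, not by MLE on natural logs, which is one reason the parameters differ from what the code above produces. A sketch of that convention (the function name is illustrative):

```python
import numpy as np
import scipy.stats as st


def lp3_quantiles(amf: np.ndarray, aeps: np.ndarray) -> np.ndarray:
    """Log-Pearson III quantiles via the method-of-moments convention
    (Bulletin 17B style): fit Pearson III to log10(flows) using the
    sample mean, standard deviation and skew, then back-transform.
    """
    logq = np.log10(amf)
    mean, std = logq.mean(), logq.std(ddof=1)
    skew = st.skew(logq, bias=False)
    # scipy's pearson3 is parameterised by (skew, loc, scale)
    dist = st.pearson3(skew, loc=mean, scale=std)
    return 10 ** dist.ppf(1 - aeps)


# example: 100-year and 50-year floods from annual maxima `data`
# q100, q50 = lp3_quantiles(data, np.array([0.01, 0.02]))
```

MLE on the logs is a defensible alternative, but its parameter estimates will differ from published LP3 fits that follow the moments convention.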
<python><scipy>
2024-11-21 03:18:06
0
311
Navid Ghajarnia
79,209,481
2,137,570
macOS Sequoia: reading a plist in Python
<p>I am trying to read a plist in macOS Sequoia. I did find <a href="https://pypi.org/project/plists/#description" rel="nofollow noreferrer">https://pypi.org/project/plists/#description</a> but it doesn't install.</p> <p><strong>How can I use the Python plists package?</strong></p> <blockquote> <p>pip3 install plists</p> </blockquote> <p>gives the following error; pip retries every published version (0.0.4 down to 0.0.1) and each fails with the same <code>SyntaxError</code>, so only the first occurrence is shown:</p> <pre><code>Defaulting to user installation because normal site-packages is not writeable Collecting plists Using cached plists-0.0.4.tar.gz (7.4 kB) ERROR: Command errored out with exit status 1: command: /Library/Developer/CommandLineTools/usr/bin/python3 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '&quot;'&quot;'/private/var/folders/9j/s_r_lpgs2tj1v838wj4h716c0000gn/T/pip-install-eh0uq6op/plists_4670f12a38cc4539a382c3ec2a0ed312/setup.py'&quot;'&quot;'; __file__='&quot;'&quot;'/private/var/folders/9j/s_r_lpgs2tj1v838wj4h716c0000gn/T/pip-install-eh0uq6op/plists_4670f12a38cc4539a382c3ec2a0ed312/setup.py'&quot;'&quot;';f = getattr(tokenize, '&quot;'&quot;'open'&quot;'&quot;', open)(__file__) if os.path.exists(__file__) else io.StringIO('&quot;'&quot;'from setuptools import setup; setup()'&quot;'&quot;');code = f.read().replace('&quot;'&quot;'\r\n'&quot;'&quot;', '&quot;'&quot;'\n'&quot;'&quot;');f.close();exec(compile(code, __file__, '&quot;'&quot;'exec'&quot;'&quot;'))' egg_info --egg-base /private/var/folders/9j/s_r_lpgs2tj1v838wj4h716c0000gn/T/pip-pip-egg-info-iufqpu7j cwd: /private/var/folders/9j/s_r_lpgs2tj1v838wj4h716c0000gn/T/pip-install-eh0uq6op/plists_4670f12a38cc4539a382c3ec2a0ed312/ Complete output (6 lines): Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; File &quot;/private/var/folders/9j/s_r_lpgs2tj1v838wj4h716c0000gn/T/pip-install-eh0uq6op/plists_4670f12a38cc4539a382c3ec2a0ed312/setup.py&quot;, line 80 print &quot;PDIR: &quot;, pdir(), os.listdir(pdir()) ^ SyntaxError: invalid syntax ---------------------------------------- WARNING: Discarding https://files.pythonhosted.org/packages/71/1b/24d3f3885744b41e4d58774bce89b3a20966960cd41c9d3d787485e01e1d/plists-0.0.4.tar.gz#sha256=d48b2390c27d957cf54791001f679f1c96d2652b599a85a3d9d2cc4567c02ce0 (from https://pypi.org/simple/plists/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. [identical SyntaxError output repeated for plists 0.0.3, 0.0.2 and 0.0.1] ERROR: Could not find a version that satisfies the requirement plists (from versions: 0.0.3.macosx-10.10-x86_64, 0.0.4.macosx-10.10-x86_64, 0.0.1, 0.0.2, 0.0.3, 0.0.4) ERROR: No matching distribution found for plists WARNING: You are using pip version 21.2.4; however, version 24.3.1 is available. You should consider upgrading via the '/Library/Developer/CommandLineTools/usr/bin/python3 -m pip install --upgrade pip' command. </code></pre>
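Aside from the pip failure itself (the `print "PDIR: "` traceback shows the plists package is Python-2-only source, so it cannot install on Python 3), the standard library already reads plists via `plistlib`, so no third-party package is needed. A sketch (the Info.plist path in the comment is only an example):

```python
import plistlib
from pathlib import Path


def read_plist(path: str) -> dict:
    """Read a binary or XML plist with the standard library.

    plistlib.load autodetects the format, so the same call works for
    both binary and XML plists.
    """
    with Path(path).open("rb") as fh:
        return plistlib.load(fh)


# usage (path is an example):
# info = read_plist("/Applications/Safari.app/Contents/Info.plist")
# print(info.get("CFBundleShortVersionString"))
```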
<python><plist><macos-sequoia>
2024-11-21 02:27:45
2
5,998
Lacer
79,209,425
5,256,563
Build a wheel and Install package version depending on OS
<p>I have several Python packages that need to be installed on various OSes/environments. These packages have dependencies, and some of them, like Polars, need a different package depending on the OS: for example, polars-lts-cpu on macOS (Darwin) and polars on all other OSes.</p> <p>I use <code>setuptools</code> to create a <code>whl</code> file, but the dependencies installed depend on the OS where the wheel file was created. Here is my code:</p> <pre><code>import platform from setuptools import find_packages, setup setup( ... install_requires=[&quot;glob2&gt;=0.7&quot;, &quot;numpy&gt;=1.26.4&quot;, &quot;polars&gt;=1.12.0&quot; if platform.system() != &quot;Darwin&quot; else &quot;polars-lts-cpu&gt;=1.12.0&quot;] ...) </code></pre> <p>As mentioned above, this code installs the version of Polars according to the OS where the wheel file was created, not according to where the package will be installed.</p> <p>How can I fix this?</p>
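The usual fix (a sketch of the standard mechanism, not specific to this project): a wheel bakes `install_requires` in at build time, so any `platform.system()` branch freezes the builder's OS into the metadata. PEP 508 environment markers are instead recorded verbatim and evaluated on the installing machine:

```python
from setuptools import find_packages, setup

setup(
    # ... name, version, etc. unchanged ...
    packages=find_packages(),
    install_requires=[
        "glob2>=0.7",
        "numpy>=1.26.4",
        # PEP 508 environment markers: evaluated at install time on the
        # target OS, so one wheel serves every platform
        'polars>=1.12.0; platform_system != "Darwin"',
        'polars-lts-cpu>=1.12.0; platform_system == "Darwin"',
    ],
)
```

The same marker syntax works in a `pyproject.toml` `dependencies` list.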
<python><setuptools><python-polars><python-packaging><python-wheel>
2024-11-21 01:46:23
1
5,967
FiReTiTi
79,209,419
14,590,651
Linux: check Bluetooth RSSI without pairing and connecting
<p><strong>Background:</strong> I'm building a distance sensor for my auto door lock that uses the Bluetooth signal strength. The code runs on my Raspberry Pi and detects my phone's Bluetooth address. It only needs to tell whether my phone is very close or relatively far away, so the RSSI should be stable enough for it.</p> <p><strong>Main issue:</strong> I couldn't find a way to read the RSSI of a specific address efficiently without pairing and connecting, since I only need the RSSI.</p> <p><strong>Working solution (not preferred):</strong> <code>sudo btmgmt find | grep &lt;address&gt;</code> does the job, but the problem is that it takes too long for a single scan loop - about 10 seconds - because it scans all addresses first and then filters.</p> <p><strong>What I need:</strong> Any Linux (or Python) command (or package) that lets me pass an address in and scan the RSSI for that address only, in a short time.</p> <p>Any idea is welcome.</p>
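One Python option (a hedged sketch using the `bleak` package, assuming the phone advertises over BLE; note that many phones randomise their BLE address, so matching a fixed address may need adjustment in practice):

```python
import asyncio

from bleak import BleakScanner


async def rssi_for(address: str, window: float = 3.0):
    """Listen to BLE advertisements for `window` seconds and return the
    last RSSI seen from `address` (None if it never advertised).

    This reads advertisements passively, so no pairing or connection
    is needed, and it stops as soon as the window elapses.
    """
    seen = {"rssi": None}

    def on_adv(device, adv_data):
        if device.address.lower() == address.lower():
            seen["rssi"] = adv_data.rssi

    scanner = BleakScanner(detection_callback=on_adv)
    await scanner.start()
    await asyncio.sleep(window)
    await scanner.stop()
    return seen["rssi"]


# example (address is a placeholder):
# print(asyncio.run(rssi_for("AA:BB:CC:DD:EE:FF")))
```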
<python><linux><raspberry-pi><bluetooth>
2024-11-21 01:41:01
1
536
Josh Liu
79,209,129
663,028
Any way to create a subset (or partially clone) from a conda environment?
<p>For example, I have a working environment with 300 packages. Is there a clean way to duplicate or clone from this environment with only 100 packages, without the need to resolve/download dependencies?</p> <p>I could manually copy the folder and delete a bunch of stuff that is not needed, but a &quot;subcloning&quot; method would be ideal.</p>
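One workflow that avoids re-solving (a sketch; the environment and package names are placeholders): conda's explicit spec format lists exact package URLs, and installing from it skips dependency resolution and reuses the local package cache. The caveat is that the kept subset must itself be dependency-complete, since an explicit install does not pull in anything you filtered out.

```shell
# Export the source env as an explicit spec (exact package URLs, no solve)
conda list --name bigenv --explicit > bigenv.txt

# Keep only the packages you want, plus everything they depend on --
# an explicit install does NOT resolve dependencies for you
grep -v -e 'unwanted-pkg' -e 'other-pkg' bigenv.txt > subset.txt

# Recreate from the spec: packages come from the local cache when present,
# so nothing is re-downloaded or re-solved
conda create --name subenv --file subset.txt
```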
<python><anaconda><conda>
2024-11-20 22:26:07
1
7,151
prusswan
79,209,096
3,458,191
How to move a Python function to conftest.py as a fixture?
<p>I have the following Python code:</p> <p>main.py:</p> <pre><code>import pytest import re from playwright.sync_api import Page, expect from time import sleep TIMEOUT = 5000 # 5 seconds # Helper function to send a message and assert response def send_msg_and_assert(page: Page, message: str, regex: re.Pattern): page.type('[placeholder=&quot;Message...&quot;]', message) page.keyboard.press(&quot;Enter&quot;) sleep(DEFAULT_WAIT_TIME / 1000) # sleep time converted to seconds assert page.get_by_text(regex) # Go to the chat page def go_to_chats(page: Page): page.goto('https://my.chats.com/') page.wait_for_timeout(TIMEOUT) assert page.get_by_text(&quot;Chats&quot;).is_visible() @pytest.mark.usefixtures(&quot;login_logout&quot;) @pytest.mark.basic_interaction @pytest.mark.parametrize(&quot;message, expected_regex&quot;, [ (&quot;How can you help me?&quot;, re.compile(&quot;/language|help/&quot;, re.IGNORECASE)), (&quot;What kind of questions?&quot;, re.compile(&quot;/Language|Native|interview/&quot;, re.IGNORECASE)), (&quot;What does my name mean?&quot;, re.compile(&quot;/sorry|determine|please tell/&quot;, re.IGNORECASE)) ]) # This basic scenario is using playwright web application testing via the cohere web-page def test_basic_interaction(go_to_chats, message: str, expected_regex: re.Pattern): # Step 1: Login and go to chats page page = go_to_chats # Step 2: Send message and assert response send_msg_and_assert(page, message, expected_regex) # Step 3: Logout </code></pre> <p>Here you can find the conftest.py with the fixtures:</p> <pre><code>import pytest import os import re from playwright.sync_api import sync_playwright, Page TIMEOUT = 5000 # 5 seconds DEFAULT_WAIT_TIME = 30000 # 30 seconds for responses # Constants and credentials COHERE_API_KEY = &quot;your-api-key&quot; USER_CREDENTIALS = { 'email': 'go@goofy.com', 'password': &quot;blabla&quot;, 'user_name': 'goofy' } # Fixture for login, reusable across tests @pytest.fixture(scope=&quot;session&quot;) def login_logout(): with sync_playwright() as p: browser = p.chromium.launch(headless=False) # Set headless=True for no UI page = browser.new_page() # Performs login on the Cohere dashboard. page.goto('https://my.chats.com/welcome/login') page.locator('input[name=&quot;email&quot;]').fill(USER_CREDENTIALS['email']) page.locator('input[name=&quot;password&quot;]').fill(USER_CREDENTIALS['password']) page.get_by_role(&quot;button&quot;, name=&quot;Log in&quot;).click() page.wait_for_timeout(TIMEOUT) # Wait for login to complete assert page.get_by_text(f&quot;Welcome, {USER_CREDENTIALS['user_name']}!&quot;).is_visible() # Yield control to test yield page # Fixture for logout, reusable across tests # Performs logout after all tests are completed. page.goto('https://my.chats.com/api/auth/logout') page.wait_for_timeout(TIMEOUT) # Wait for logout to complete assert page.get_by_role(&quot;heading&quot;, name=&quot;Log in&quot;).is_visible() browser.close() </code></pre> <p>The <code>login_logout</code> fixture in <code>conftest.py</code> runs at the beginning and end of the session, so that all test cases share one session.</p> <p>Now I am trying to move <code>go_to_chats()</code> (or any other function) into the fixtures as well, but not as part of the <code>login_logout()</code> function. How can I achieve that?</p>
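A sketch of the requested refactor (assuming the same page object should be reused): make `go_to_chats` its own fixture in conftest.py and have it request `login_logout` by naming it as a parameter; pytest then chains the two, and the test no longer needs the `usefixtures` marker.

```python
# conftest.py (sketch): go_to_chats as a standalone fixture that builds
# on login_logout by declaring it as a parameter.
import pytest

TIMEOUT = 5000  # 5 seconds


@pytest.fixture
def go_to_chats(login_logout):
    page = login_logout            # the page yielded by the session fixture
    page.goto('https://my.chats.com/')
    page.wait_for_timeout(TIMEOUT)
    assert page.get_by_text("Chats").is_visible()
    yield page
    # any per-test teardown (e.g. navigating back) would go here


# in main.py the test then simply requests go_to_chats, which pulls in
# login_logout transitively, so @pytest.mark.usefixtures("login_logout")
# can be dropped:
#
# def test_basic_interaction(go_to_chats, message, expected_regex):
#     page = go_to_chats
#     send_msg_and_assert(page, message, expected_regex)
```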
<python><python-3.x><pytest><playwright-python><playwright-test>
2024-11-20 22:04:30
0
1,187
FotisK
79,208,995
2,731,076
Type hinting in Python 3.12 for inherited classes that use the parent class method without typing module
<p>I have a parent class along with a number of child classes that inherit from it. These classes are essentially just Python models that read certain documents in from JSON or MongoDB and turn them from a dictionary into a class. Because of that, I have a standard <code>from_file</code> class method that is only implemented in the parent class. I would like to get the type hinting worked out so that other modules that use these classes know which class will be returned based on the class that called the method.</p> <p><code>Parent</code> class:</p> <pre class="lang-py prettyprint-override"><code>class ParentDoc(ABC): def __init__(self, ver: int = 1) -&gt; None: self.ver = ver def to_dict(self) -&gt; dict: data = self.__dict__.copy() data[&quot;_class&quot;] = self.__class__.__name__ return data @classmethod def from_dict(cls, data: dict) -&gt; ParentDoc: return cls(ver=data.get(&quot;ver&quot;, 1)) def to_file(self, file_path: str | Path) -&gt; None: with open(file_path, &quot;w&quot;, encoding=&quot;utf8&quot;) as json_file: json.dump(self.to_dict(), json_file) @classmethod def from_file(cls, file_path: str | Path) -&gt; ?????: with open(file_path, &quot;r&quot;, encoding=&quot;utf8&quot;) as json_file: data = create_from_dict(json.load(json_file)) return data </code></pre> <p>The helper function used in the <code>from_file</code> method is included below.</p> <pre class="lang-py prettyprint-override"><code>def create_from_dict(data: dict): class_name = data.get(&quot;_class&quot;) if class_name == &quot;ParentDoc&quot;: return ParentDoc.from_dict(data) elif class_name == &quot;ChildDoc&quot;: return ChildDoc.from_dict(data) elif class_name == &quot;Child2Doc&quot;: return Child2Doc.from_dict(data) else: raise ValueError(f&quot;Unsupported class: {class_name}&quot;) </code></pre> <p>All of the child classes overload the <code>to_dict</code> and <code>from_dict</code> methods, but not the <code>to_file</code> and <code>from_file</code> methods.</p> <pre class="lang-py prettyprint-override"><code>class ChildDoc(ParentDoc): def __init__(self, name: str, ver: int = 1) -&gt; None: super().__init__(ver=ver) self.name = name def to_dict(self) -&gt; dict: data = super().to_dict() data.update({&quot;name&quot;: self.name}) return data @classmethod def from_dict(cls, data: dict) -&gt; ChildDoc: return cls(name=data[&quot;name&quot;]) </code></pre> <p>What should I put where all of the question marks are in the parent class, so that I can call <code>ChildDoc.from_file(json_path)</code> and the type hinting will understand that it returns a <code>ChildDoc</code> object and not a <code>ParentDoc</code> object? I currently don't have any type hinting for the output, so the linter thinks it could be any of the parent or child classes, even though I am calling it via one specific child class. I suppose I could have better type hinting on the <code>create_from_dict</code> function as well.</p> <p>I would like to use the standard type hinting in Python 3.12 (without having to import the <code>typing</code> module). I have tried <code>Self</code>, but that didn't work.</p>
<python><inheritance><python-typing>
2024-11-20 21:19:15
1
813
user2731076
79,208,862
8,576,801
In a Jupyter Notebook open in VS Code, how can I quickly navigate to the currently running cell?
<p>This feels like a useful feature but haven't been able to find a setting / extension that offers this capability.</p>
<python><visual-studio-code><jupyter-notebook><jupyter>
2024-11-20 20:27:36
1
428
piedpiper
79,208,817
13,259,162
Get a single series of classes instead of one series for each class with pandas in Python
<p>I have a DataFrame with 3 columns of zeros and ones corresponding to 3 different classes. I want to get a single series of zeros, ones, and twos depending on the class of each entry (0 for the first class, 1 for the second and 2 for the third):</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; results.head() HOME_WINS DRAW AWAY_WINS ID 0 0 0 1 1 0 1 0 2 0 0 1 3 1 0 0 4 0 1 0 </code></pre> <p>What I want:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; results.head() SCORE ID 0 2 1 1 2 2 3 0 4 1 </code></pre>
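A sketch of one way (editor's note): since each row is one-hot, the column holding the 1 identifies the class; `idxmax(axis=1)` yields that column's name, and an explicit mapping pins the class numbers regardless of column order.

```python
import pandas as pd


def to_score(results: pd.DataFrame) -> pd.Series:
    """Collapse one-hot columns into a single class series.

    The mapping is explicit, so HOME_WINS -> 0, DRAW -> 1, AWAY_WINS -> 2
    even if the columns are reordered.
    """
    mapping = {"HOME_WINS": 0, "DRAW": 1, "AWAY_WINS": 2}
    return results.idxmax(axis=1).map(mapping).rename("SCORE")
```

A numpy alternative with the same result for column order HOME_WINS, DRAW, AWAY_WINS is `results.to_numpy().argmax(axis=1)`.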
<python><pandas><numpy>
2024-11-20 20:11:07
2
309
Noé Mastrorillo
79,208,808
10,193,760
String cleaning: removing repeated characters and adding comma separators
<p>I have this string from an email I'm scraping:</p> <pre><code>TICKET\xa0\xa0 STATE\xa0\xa0\xa0\xa0 ACCOUNT IDENTIFIER\xa0\xa0\xa0 FILE DIRECTORY\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0\xa0 CODE </code></pre> <p>My objective are the following:</p> <ol> <li>Remove \xa0</li> <li>Create comma separation for each group string</li> </ol> <p>This is my ideal result:</p> <pre><code>TICKET,STATE,ACCOUNT IDENTIFIER,FILE DIRECTORY </code></pre> <p>On the other hand, here's what I ended up getting:</p> <pre><code>#code my_string.replace(' ', ',').replace('\xa0', '') #result TICKET,STATE,ACCOUNT,IDENTIFIER,FILE,DIRECTORY </code></pre> <p>I was thinking of using regex however, I have no idea how I can implement the logic.</p>
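A hedged regex sketch: the column names are separated by runs of `\xa0` (plus a stray space), while words inside one name are separated by a single plain space, so splitting on any run of two or more whitespace characters (`\s` matches `\xa0` for `str` patterns) keeps multi-word names intact. Note this keeps `CODE` as a final field; drop the last element if it is truly unwanted:

```python
import re

# Reconstruction of the scraped string (the exact \xa0 run length is assumed)
s = ("TICKET\xa0\xa0 STATE\xa0\xa0\xa0\xa0 ACCOUNT IDENTIFIER\xa0\xa0\xa0 "
     "FILE DIRECTORY" + "\xa0" * 43 + " CODE")

# Split on runs of >= 2 whitespace chars (\s already covers \xa0 for str),
# then rejoin the surviving group names with commas.
fields = [f for f in re.split(r"\s{2,}", s) if f]
cleaned = ",".join(fields)
```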
<python><string><text><replace><split>
2024-11-20 20:07:23
1
1,610
Maku
79,208,772
823,633
pandas apply multiple columns
<p>Starting from this dataframe</p> <pre><code>df = pd.DataFrame( np.arange(3*4).reshape((4, 3)), index=['a', 'b', 'c', 'd'], columns=['A', 'B', 'C'] ) print(df) A B C a 0 1 2 b 3 4 5 c 6 7 8 d 9 10 11 </code></pre> <p>I want to apply two functions to each column to generate two columns for each original column to obtain this shape, with a multiindex column nested below each original column:</p> <pre><code> A B C x y x y x y a 10 100 11 101 12 102 b 13 103 14 104 15 105 c 16 106 17 107 18 108 d 19 109 20 110 21 111 </code></pre> <p>however, something like this doesn't work</p> <pre><code>df.apply(lambda series: series.transform([lambda x: x+10, lambda x: x+100]) ) </code></pre> <p>and raises <code>ValueError: If using all scalar values, you must pass an index</code></p> <p>Note that I do not want to use agg like in <a href="https://stackoverflow.com/questions/14529838/apply-multiple-functions-to-multiple-groupby-columns">this answer</a>, since this is not an aggregation. I also want to avoid referring to column names directly.</p>
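A hedged sketch of one way to get that exact shape without naming any column: apply each function to the whole frame, concat with an outer `x`/`y` level, then swap the levels so `x`/`y` nest under each original column:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    np.arange(3 * 4).reshape((4, 3)),
    index=["a", "b", "c", "d"],
    columns=["A", "B", "C"],
)

# Whole-frame transforms, stitched together and re-nested under A/B/C
out = (
    pd.concat({"x": df + 10, "y": df + 100}, axis=1)
    .swaplevel(axis=1)
    .sort_index(axis=1)
)
```

For non-vectorizable functions, the same `pd.concat`/`swaplevel` trick works with `{k: df.apply(f) for k, f in funcs.items()}`.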
<python><pandas>
2024-11-20 19:56:32
2
1,410
goweon
79,208,694
15,835,974
How to join 2 DataFrames on a really specific condition?
<p>I have those 2 DataFrames:</p> <pre><code>df1: +---+----------+----------+ |id |id_special|date_1 | +---+----------+----------+ |1 |101 |2024-11-01| |2 |102 |2024-11-03| |3 |103 |2024-11-04| |4 |null |2024-11-05| +---+----------+----------+ df2: +----------+----------+------+ |id_special|date_2 |type | +----------+----------+------+ |101 |2024-10-30|Type_1| |101 |2024-10-31|Type_2| |101 |2024-11-01|Type_3| |102 |2024-11-03|Type_4| +----------+----------+------+ </code></pre> <p>My goal is to create a new column named <code>df2_type</code> in <code>df1</code>. To do so, I need a special join between <code>df1</code> and <code>df2</code>. Here are the rules to create the column <code>df2_type</code>.</p> <ol> <li>If df1.id_special is null, set df1.df2_type to &quot;Unknown&quot;.</li> <li>If df1.id_special is not in df2.id_client, set df1.df2_type to &quot;Unknown&quot;.</li> <li>If df1.id_special is in df2.id_client: <ol> <li>Get the record where df2.date_2 &lt; df1.date_1 and is the closest to df1.date_1</li> <li>From the record, use df2.type to set df2_type.</li> </ol> </li> </ol> <p>So, from the previous DataFrames, this is the result that I am expecting:</p> <pre><code>+---+----------+----------+--------+ |id |id_special|date_1 |df2_type| +---+----------+----------+--------+ |1 |101 |2024-11-01|Type_2 | |2 |102 |2024-11-03|Unknown | |3 |103 |2024-11-04|Unknown | |4 |null |2024-11-05|Unknown | +---+----------+----------+--------+ </code></pre> <p>I tried to do a join between my 2 DataFrames, but I have never been able to join it properly. 
Here is the code that I have:</p> <pre class="lang-py prettyprint-override"><code>from awsglue.context import GlueContext from datetime import date from pyspark.context import SparkContext from pyspark.sql.functions import lit from pyspark.sql.types import DateType, IntegerType, StringType, StructField, StructType glueContext = GlueContext(SparkContext.getOrCreate()) data1 = [ (1, 101, date.fromisoformat(&quot;2024-11-01&quot;)), (2, 102, date.fromisoformat(&quot;2024-11-03&quot;)), (3, 103, date.fromisoformat(&quot;2024-11-04&quot;)), (4, None, date.fromisoformat(&quot;2024-11-05&quot;)), ] data2 = [ (101, date.fromisoformat(&quot;2024-10-30&quot;), &quot;Type_1&quot;), (101, date.fromisoformat(&quot;2024-10-31&quot;), &quot;Type_2&quot;), (101, date.fromisoformat(&quot;2024-11-01&quot;), &quot;Type_3&quot;), (102, date.fromisoformat(&quot;2024-11-03&quot;), &quot;Type_4&quot;), ] schema1 = StructType([ StructField(&quot;id&quot;, IntegerType(), True), # Unique key StructField(&quot;id_special&quot;, IntegerType(), True), StructField(&quot;date_1&quot;, DateType(), True), ]) schema2 = StructType([ StructField(&quot;id_special&quot;, IntegerType(), True), StructField(&quot;date_2&quot;, DateType(), True), StructField(&quot;type&quot;, StringType(), True), ]) df1 = spark.createDataFrame(data1, schema1) df2 = spark.createDataFrame(data2, schema2) # Step 1 - Add df2_type columns df1 = df1.withColumn(&quot;df2_type&quot;, lit(None)) # The final DataFrame need to be like this # +---+----------+----------+--------+ # |id |id_special|date_1 |df2_type| # +---+----------+----------+--------+ # |1 |101 |2024-11-01|Type_2 | # |2 |102 |2024-11-03|Unknown | # |3 |103 |2024-11-04|Unknown | # |4 |null |2024-11-05|Unknown | # +---+----------+----------+--------+ </code></pre>
<python><pyspark>
2024-11-20 19:32:36
1
597
jeremie bergeron
79,208,656
194,305
migrating from distutils to setuptools
<p>I am migrating a large library of python cpp extensions from swig+distutils to swig+setuptools. I am using the swig cpp example as the illustration. The original setup.py file:</p> <pre><code>from distutils.core import setup, Extension from distutils.command.build_ext import build_ext example_module = Extension( name='_example', sources=['example.i', 'example.cpp'], swig_opts=['-python', '-py3', '-c++', '-cppext', 'cpp'] ) setup( name='example', version='1.0', ext_modules=[example_module], ) </code></pre> <p>And I build it as</p> <pre><code> /usr/bin/python3 setup.py build_ext --swig=${SWIG} /usr/bin/python3 setup.py install --prefix ${PREFIX} --install-lib ${LIBDIR} --no-compile --skip-build </code></pre> <p>I replaced distutils.core -&gt; setuptools and distutils.command -&gt; setuptools.command in my setup.py file. I added PYTHONPATH=${LIBDIR} to my install command.</p> <p>I got everything built, and I can run my example. However, instead of the .so file (_example.cpython-38-x86_64-linux-gnu.so) in the library, I got an 'egg' file, site.py and easy-install.pth:</p> <pre><code>easy-install.pth example-1.0-py3.8-linux-x86_64.egg site.py </code></pre> <p>What is next? Should I simply use .egg files instead of .so files? I tried to use .egg files for a more complicated example and it did not work.</p> <p>I unzipped the .egg file, and it worked for both the simple example and for a more complicated one. But I feel that simply unzipping these files as part of the build process is not the right thing to do.</p> <p>Update: I added the --old-and-unmanageable option and got the .so file installed instead of the .egg one.</p>
<python><python-3.x><setuptools><swig><distutils>
2024-11-20 19:20:07
0
891
uuu777
79,208,598
10,153,071
Unable to update a latent vector using custom loss function in pytorch
<p>I am trying to implement this function but have had no luck. There is a VAE model that I am using, and along with it, there are encoder and decoder. I'm freezing the weights of the VAE decoder, and trying to change a latent vector which is updated using the function <em><strong>optimize_latent_vector(model, inp__, num_epochs=50, learning_rate=0.01)</strong></em>. Now, there is some error regarding this piece of code: <em><strong>RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn</strong></em></p> <pre><code> class VAE_GD_Loss(nn.Module): def __init__(self): super(VAE_GD_Loss, self).__init__() def forward(self, bad_seg, recons_mask, vector): # l2 normed squared and the soft dice loss are calculated loss = torch.sum(vector**2)+Soft_Dice_Loss(recons_mask, bad_seg) return loss def optimize_latent_vector(model, inp__, num_epochs=50, learning_rate=0.01): inp__ = inp__.to(device).requires_grad_(True) # Encode and reparameterize to get initial latent vector with torch.no_grad(): mu, log_var = model.encoder(inp__) z_latent_vect = model.reparameterize(mu, log_var) optimizer_lat = torch.optim.Adam([z_latent_vect], lr=learning_rate) dec_only = model.decoder for epoch in range(num_epochs): optimizer_lat.zero_grad() dec_only.eval() # Decode from latent vector recons_mask = dec_only(z_latent_vect) # Calculate loss VGLoss = VAE_GD_Loss() loss = VGLoss(inp__, recons_mask, z_latent_vect) # loss = Variable(loss, requires_grad=True) # Backpropagation loss.backward() optimizer_lat.step() print(f&quot;Epoch {epoch}: Loss = {loss.item()}&quot;) return z_latent_vect </code></pre> <p>If we uncomment the line <em><strong>loss = Variable(loss, requires_grad=True)</strong></em>, then the code runs, but it doesn't minimize the loss whatsoever. I want to update the latent vector in such a way so that it follows the constraint set in the loss function. Any leads would help!</p>
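For what it's worth, the usual culprit here is that `z_latent_vect` is produced inside `torch.no_grad()`, so it is a leaf tensor with `requires_grad=False` and the graph never reaches it; re-wrapping the *loss* with `requires_grad=True` just severs the graph at the loss, which is why nothing minimizes. A hedged toy sketch with a stand-in frozen decoder, where the latent is detached and re-marked as trainable:

```python
import torch
from torch import nn

torch.manual_seed(0)

# Stand-in for the frozen VAE decoder
decoder = nn.Linear(4, 8)
for p in decoder.parameters():
    p.requires_grad_(False)

target = torch.randn(8)  # stand-in for the "bad" segmentation

# Stand-in for reparameterize(mu, log_var); the key fix is detach + requires_grad_
z = torch.randn(4).detach().requires_grad_(True)

opt = torch.optim.Adam([z], lr=0.05)
losses = []
for _ in range(200):
    opt.zero_grad()
    recon = decoder(z)
    # L2 penalty on z plus a reconstruction term (the Dice loss in the question)
    loss = 1e-3 * (z ** 2).sum() + ((recon - target) ** 2).mean()
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

In the question's code that would mean `z_latent_vect = model.reparameterize(mu, log_var).detach().requires_grad_(True)` before building the optimizer, and dropping the `Variable(...)` wrapper entirely.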
<python><deep-learning><pytorch><autograd>
2024-11-20 18:52:44
1
536
Jimut123
79,208,551
268,581
Two stacked area plots on the same chart
<h1>Plotting assets</h1> <p>I'm plotting $MSTR assets as follows</p> <pre><code>fig = px.area(df_all, x='end', y='val', color='fact', title=f'{symbol.upper()} : Balance sheet', width=1000, height=600) fig.add_trace(go.Scatter(x=df_assets['end'], y=df_assets['val'], mode='lines', name='Assets')) st.plotly_chart(fig) </code></pre> <p><a href="https://i.sstatic.net/eALr0smv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eALr0smv.png" alt="enter image description here" /></a></p> <h1>Plotting liabilities</h1> <p>Similarly, I'm plotting liabilities like this (these show up as negative):</p> <pre><code>fig = px.area(df_all_liabilities, x='end', y='val', color='fact', title=f'{symbol.upper()} : Balance sheet', width=1000, height=600) fig.add_trace(go.Scatter(x=df_liabilities['end'], y=df_liabilities['val'], mode='lines', name='Liabilities')) st.plotly_chart(fig) </code></pre> <p><a href="https://i.sstatic.net/JA7b9K2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JA7b9K2C.png" alt="enter image description here" /></a></p> <h1>Question</h1> <p>Is there a way to have both of these on the same chart?</p>
<python><plotly><streamlit>
2024-11-20 18:33:43
1
9,709
dharmatech
79,208,459
6,829,370
Implementing Georeferencer transforms using QGIS Python API
<p>In the QGIS Georeferencer plugin (under the Layer menu) I can see various transform algorithms in Transformation settings (under transformation type), such as 'Linear', 'Polynomial 1', 'Polynomial 2', 'Helmert', 'Projective', etc. How can we directly use these transformations via the QGIS Python API? Please guide.</p>
<python><transform><geospatial><qgis><pyqgis>
2024-11-20 18:00:51
1
388
Shubham_geo
79,207,981
7,456,317
Python logger: a different log record attribute for each thread
<p>My use case is as follows: I have a server that handles requests, and for each request I'd like the log record to contain a <code>user_id</code>. I'd like to make it as seamless as possible for other developers in my team, such that they can simply import <code>logging</code> and use it, without passing user_id around. Here's a MWE, but, as you can see, its not always working:</p> <pre class="lang-py prettyprint-override"><code>import logging import threading import time class ContextFilter(logging.Filter): def __init__(self, user_id: str): super().__init__() self.local = threading.local() self.local.user_id = user_id def filter(self, record): record.user_id = getattr(self.local, 'user_id', 'NoValue') return True # Returning True ensures the log message is processed # Set up logging logger = logging.getLogger(&quot;my_logger&quot;) logger.setLevel(logging.DEBUG) # Add a handler formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(user_id)s - %(message)s') handler = logging.StreamHandler() handler.setFormatter(formatter) logger.addHandler(handler) # logger.info(&quot;This is a test message.&quot;) def worker(user_id: str): logger.addFilter(ContextFilter(user_id=user_id)) logger.info(&quot;message 1&quot;) time.sleep(0.5) logger.info(&quot;message 2&quot;) t1 = threading.Thread(target=worker, args=(&quot;user1&quot;,)) t2 = threading.Thread(target=worker, args=(&quot;user2&quot;,)) t1.start() t2.start() t1.join() t2.join() </code></pre> <p>This is the output:</p> <pre class="lang-none prettyprint-override"><code>2024-11-20 17:46:39,780 - INFO - user1 - message 1 2024-11-20 17:46:39,780 - INFO - user2 - message 1 2024-11-20 17:46:40,284 - INFO - NoValue - message 2 2024-11-20 17:46:40,285 - INFO - user2 - message 2 </code></pre> <p>What am I missing here?</p>
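What's missing: the `threading.local()` state written in `__init__` belongs to the thread that *constructed* the filter, and every `addFilter` call piles another filter onto the shared logger, so the last-added filter overwrites `user_id` for records from all threads (and it has no value in the other thread, hence `NoValue`). A hedged sketch of the usual fix: one filter added once, reading a `contextvars.ContextVar`, which is isolated per thread (and per asyncio task):

```python
import contextvars
import logging
import threading
import time

user_id_var = contextvars.ContextVar("user_id", default="NoValue")

class ContextFilter(logging.Filter):
    def filter(self, record):
        record.user_id = user_id_var.get()
        return True

logger = logging.getLogger("my_logger")
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    "%(asctime)s - %(levelname)s - %(user_id)s - %(message)s"))
logger.addHandler(handler)
logger.addFilter(ContextFilter())  # added once, at setup time

def worker(user_id: str):
    user_id_var.set(user_id)  # visible only in this thread's context
    logger.info("message 1")
    time.sleep(0.1)
    logger.info("message 2")

threads = [threading.Thread(target=worker, args=(u,)) for u in ("user1", "user2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Teammates still just `import logging` and log normally; only the request entry point has to call `user_id_var.set(...)`.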
<python><multithreading><logging>
2024-11-20 15:50:05
1
913
Gino
79,207,951
773,102
Python wheel entry point not working as expected on Windows
<p>I'm trying to set up a python wheel for my testrunner helper script to make it easily accessible from everywhere on my Windows machine. Therefore I configure a console entry point in my setup.py. I can see the generated entry point in the entry_points.txt, but when I try to invoke my script I get the error message:</p> <pre><code>No module named testrunner.__main__; 'testrunner' is a package and cannot be directly executed </code></pre> <p>My installer folder tree looks like this:</p> <pre><code>setup.py README.md LICENSE.txt testrunner/ __init__.py testrunner.py templates/ testDesc.json </code></pre> <p>The <em>testrunner.py</em> looks like this:</p> <pre><code>def runTests(): print(&quot;Hello World&quot;) #More not relevant code here if __name__ == '__main__': runTests() </code></pre> <p>The <em>setup.py</em> content:</p> <pre><code>from setuptools import find_packages, setup from pathlib import Path # read the contents of your README file this_directory = Path(__file__).parent long_description = (this_directory / &quot;README.md&quot;).read_text() setup( name='testrunner', version='0.3.0', packages=find_packages(include=['testrunner']), description='C/C++ test runner', long_description=long_description, long_description_content_type='text/markdown', author='Me', license=&quot;Proprietary&quot;, license_files = ('LICENSE.txt',), entry_points={ 'console_scripts': ['testrunner = testrunner:main'] }, classifiers=[ 'Topic :: Software Development :: Build Tools', 'Topic :: Software Development :: Compilers', 'Private :: Do Not Upload', 'Operating System :: Microsoft :: Windows', 'Intended Audience :: Developers', 'Intended Audience :: Science/Research ', 'Intended Audience :: Education ', 'Natural Language :: English', 'License :: Other/Proprietary License', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.7', 'Programming Language :: Python :: 3.8', 'Programming Language :: Python :: 3.8', 'Programming Language :: Python :: 3.9', 
'Programming Language :: Python :: 3.10', ], install_requires=[], package_data={'':['templates\\*.json']}, ) </code></pre> <p>And finally the <em>__init__.py</em></p> <pre><code>from .testrunner import main, runTests </code></pre> <p>I build the wheel with the command: <code>py -m pip wheel --no-deps -w dist .</code> After installation with pip and checking the content in the site-packages directory executing <code>py -m testrunner -h</code> results in <code>No module named testrunner.__main__; 'testrunner' is a package and cannot be directly executed</code></p>
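Two separate things seem to be at play here. `py -m testrunner` only works if the package contains a `__main__.py`; the console-script entry point (`testrunner = testrunner:main`) is a different mechanism entirely, and it also targets `main`, which the shown `testrunner.py` never defines — only `runTests`. A hedged, self-contained demo that builds the question's layout in a temp directory, adds the missing `__main__.py`, and runs it via `-m`:

```python
import pathlib
import subprocess
import sys
import tempfile

tmp = pathlib.Path(tempfile.mkdtemp())
pkg = tmp / "testrunner"
pkg.mkdir()
(pkg / "testrunner.py").write_text(
    "def runTests():\n"
    "    print('Hello World')\n"
)
(pkg / "__init__.py").write_text("from .testrunner import runTests\n")
# The file that `python -m testrunner` actually looks for:
(pkg / "__main__.py").write_text(
    "from .testrunner import runTests\n"
    "runTests()\n"
)

# Run the package as a module; with -m, the current directory is on sys.path
out = subprocess.run(
    [sys.executable, "-m", "testrunner"],
    cwd=tmp, capture_output=True, text=True, check=True,
)
```

After installing the wheel, the console script `testrunner` from `entry_points` should also work once it points at a function that exists, e.g. `testrunner = testrunner:runTests`.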
<python><python-wheel>
2024-11-20 15:40:27
1
1,483
Jonny Schubert
79,207,883
9,518,890
Workday returns 406 error (Server.governedError)
<p>I am having trouble getting some data from Workday, specifically from financial management &amp; <code>Get_Journals</code> operation using this <a href="https://community.workday.com/sites/default/files/file-hosting/productionapi/Financial_Management/v43.1/Get_Journals.html#Journal_Entry_Response_GroupType" rel="nofollow noreferrer">API</a> (though I don't think problem is specific to this operation)</p> <p>When I query Workday using the above mentioned <code>Get_Journals</code> operation, I am getting 406 HTTP response with <code>Server.governedError</code> as a fault code and no fault string.</p> <pre><code>&lt;SOAP-ENV:Envelope xmlns:SOAP-ENV=&quot;http://schemas.xmlsoap.org/soap/envelope/&quot;&gt; &lt;SOAP-ENV:Body&gt; &lt;SOAP-ENV:Fault xmlns:wd=&quot;urn:com.workday/bsvc&quot;&gt; &lt;faultcode&gt;SOAP-ENV:Server.governedError&lt;/faultcode&gt; &lt;faultstring&gt;&lt;/faultstring&gt; &lt;/SOAP-ENV:Fault&gt; &lt;/SOAP-ENV:Body&gt; &lt;/SOAP-ENV:Envelope&gt; </code></pre> <p>This happens only for a specific combination of dates and company codes (<code>Organization_Reference_ID</code>). Moreover, this happens only for a specific response page. This has been observed across multiple clients.</p> <p>example: I request data for company <code>ABC</code>, date <code>2024-01-01</code> and I specify <code>count</code> to be 1 (one object per page), there will be 10 pages. I can send 10 requests asking for each page individually. 9 of these requests return data as expected but one of them fails with the above mentioned error (let's say page 1 and 3-10 return data and page 2 returns error).</p> <p>According to some online resources, this may be caused by the size of the response being too big. I can't confirm this claim since I don't have other access to the data but I have noticed that when I request data for some other company codes, distribution of the data across response pages can be highly uneven. 
ex: each individual page is up to 2MB except for a single page that is more than 1GB.</p> <p>General suggestion is to filter the data based on the accounting date (<code>Accounting_From_Date</code> &amp; <code>Accounting_to_Date</code>), and based on the <code>Organization_Reference_ID</code>, which I am already doing here - single day &amp; single company code.</p> <p>Here is the actual request.</p> <pre class="lang-py prettyprint-override"><code>request_text = ''' &lt;?xml version=\'1.0\' encoding=\'utf-8\'?&gt;\n&lt;soap-env:Envelope xmlns:soap-env=&quot;http://schemas.xmlsoap.org/soap/envelope/&quot;&gt; &lt;soap-env:Header&gt; &lt;wsse:Security xmlns:wsse=&quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&quot;&gt; &lt;wsse:UsernameToken&gt; &lt;wsse:Username&gt;abc&lt;/wsse:Username&gt; &lt;wsse:Password Type=&quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText&quot;&gt;abc&lt;/wsse:Password&gt; &lt;/wsse:UsernameToken&gt; &lt;/wsse:Security&gt; &lt;/soap-env:Header&gt; &lt;soap-env:Body&gt; &lt;ns0:Get_Journals_Request xmlns:ns0=&quot;urn:com.workday/bsvc&quot;&gt; &lt;ns0:Request_Criteria&gt; &lt;ns0:Organization_Reference&gt; &lt;ns0:ID ns0:type=&quot;Organization_Reference_ID&quot;&gt;ABC&lt;/ns0:ID&gt; &lt;/ns0:Organization_Reference&gt; &lt;ns0:Accounting_From_Date&gt;2024-01-01&lt;/ns0:Accounting_From_Date&gt; &lt;ns0:Accounting_To_Date&gt;2024-01-01&lt;/ns0:Accounting_To_Date&gt; &lt;/ns0:Request_Criteria&gt; &lt;ns0:Response_Filter&gt; &lt;ns0:As_Of_Entry_DateTime&gt;2024-11-12T15:40:37.690367&lt;/ns0:As_Of_Entry_DateTime&gt; &lt;ns0:Page&gt;22&lt;/ns0:Page&gt; &lt;ns0:Count&gt;1&lt;/ns0:Count&gt; &lt;/ns0:Response_Filter&gt; &lt;ns0:Response_Group&gt; &lt;ns0:Include_Attachment_Data&gt;false&lt;/ns0:Include_Attachment_Data&gt; &lt;/ns0:Response_Group&gt; &lt;/ns0:Get_Journals_Request&gt; &lt;/soap-env:Body&gt;&lt;/soap-env:Envelope&gt; ''' headers = 
{&quot;Accept&quot;: &quot;*/*&quot;} host = &quot;https://somehost.com/...&quot; request_bytes = request_text.encode(&quot;utf-8&quot;) response = requests.post( host, data=request_bytes, headers=headers ) </code></pre> <p>Is there anything I can try to get either the data or at least some reasonable error message indicating what the actual problem is?</p>
<python><soap><workday-api>
2024-11-20 15:27:07
0
14,592
Matus Dubrava
79,207,871
6,930,340
Replace last two row values in a grouped polars DataFrame
<p>I need to replace the last two values in the <code>value</code> column of a <code>pl.DataFrame</code> with zeros, whereby I need to <code>group_by</code> the <code>symbol</code> column.</p> <pre><code>import polars as pl df = pl.DataFrame( {&quot;symbol&quot;: [*[&quot;A&quot;] * 4, *[&quot;B&quot;] * 4], &quot;value&quot;: range(8)} ) shape: (8, 2) ┌────────┬───────┐ │ symbol ┆ value │ │ --- ┆ --- │ │ str ┆ i64 │ ╞════════╪═══════╡ │ A ┆ 0 │ │ A ┆ 1 │ │ A ┆ 2 │ │ A ┆ 3 │ │ B ┆ 4 │ │ B ┆ 5 │ │ B ┆ 6 │ │ B ┆ 7 │ └────────┴───────┘ </code></pre> <p>Here is my expected outcome:</p> <pre><code>shape: (8, 2) ┌────────┬───────┐ │ symbol ┆ value │ │ --- ┆ --- │ │ str ┆ i64 │ ╞════════╪═══════╡ │ A ┆ 0 │ │ A ┆ 1 │ │ A ┆ 0 │&lt;-- replaced │ A ┆ 0 │&lt;-- replaced │ B ┆ 4 │ │ B ┆ 5 │ │ B ┆ 0 │&lt;-- replaced │ B ┆ 0 │&lt;-- replaced └────────┴───────┘ </code></pre>
<python><python-polars>
2024-11-20 15:24:34
2
5,167
Andi
79,207,658
12,820,223
Python package still unavailable after installing with venv
<p>I know this has been answered many times but I've been through the answers and it still won't work. I want to install package <code>&lt;some_package&gt;</code> so that I can use it through <code>import</code> in a script I run often.</p> <p>I tried using <code>pipx</code>, but that obviously installs the package in a virtual environment which is then only helpful if you run the code in that virtual environment which I don't want to do.</p> <p>So I followed the advice <a href="https://stackoverflow.com/questions/76499565/python-does-not-find-module-installed-with-pipx">here</a> and did:</p> <pre><code>$ python3 -m venv $HOME/.venvs/MyEnv $ $HOME/.venvs/MyEnv/bin/python -m pip install &lt;some_package&gt; $ source $HOME/.venvs/MyEnv/bin/activate </code></pre> <p>Then I tried to run my script but I get the error: <code>ModuleNotFoundError: No module named '&lt;some_package&gt;'</code></p> <p>Where did I go wrong?</p>
<python><macos>
2024-11-20 14:33:48
2
411
Beth Long
79,207,577
945,034
Number out of representable range: type FIXED[SB4] with Snowflake Python Connector
<p>I have a typical ETL project where I am reading a csv file into a Pandas dataframe and need to write this DataFrame to a Snowflake table</p> <pre class="lang-py prettyprint-override"><code> with og_snowflake_resource.get_connection() as sf_conn: write_pandas(conn=sf_conn, overwrite=False, df=raw_df, table_name=the_target_table_name, auto_create_table=True, database=SF_DATABASE, schema=my_schema_name, quote_identifiers=False) </code></pre> <p>Notice that I have used <code>auto_create_table=True</code> in order to automatically create the table in Snowflake. This is desirable.</p> <p>Some of the columns in my input csv files are typed to python's Decimal. Such columns get created as <code>NUMBER</code> columns in Snowflake.</p> <p>The code works fine for some files and then breaks for some. If I look at the schema of the auto-created table, I can see the NUMBER typed columns are created with different precisions:</p> <p><a href="https://i.sstatic.net/Fy4IOpEV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fy4IOpEV.png" alt="Different NUMERIC precision" /></a></p> <p>Now, if a subsequent CSV file has a Decimal value that can't fit in the NUMBER column that was created in earlier runs, the operation fails.</p> <p><strong>Question</strong>: How can I instruct the write_pandas method to write the decimal values into a NUMBER column of a given fixed precision?</p>
<python><pandas><snowflake-cloud-data-platform>
2024-11-20 14:13:52
0
7,775
Kumar Sambhav
79,207,548
24,191,255
Determination of cutoff frequency for filtering
<p>I would like to determine the choice of cutoff frequency for e.g., a second-order low-pass Butterworth filter using frequency spectrum analysis. For this purpose, I would like to implement Fast Fourier Transformation on my data using <code>scipy.fftpack</code>.</p> <p>After transforming the signal, I smoothed the spectrum using a Savitzky-Golay filter and defined a threshold frequency as 5% of the maximal frequency of the spectrum. Subsequently, I defined the significant frequencies <code>significant_freqs</code> based on this threshold frequency. The cutoff frequency was defined as the maximum value in the list of significant frequencies.</p> <pre><code>from scipy.fftpack import fft, fftfreq from scipy.signal import savgol_filter import numpy as np #example data np.random.seed(42) duration = 10 fs = 500 # sampling rate n_samples = duration * fs #number of samples time = np.linspace(0, duration, n_samples) signal = ( 2 * np.sin(2 * np.pi * 5 * time) + 0.5 * np.sin(2 * np.pi * 20 * time) + np.random.normal(0, 0.2, n_samples) ) #method freqs = fftfreq(len(signal), d=1/fs)[:len(signal) // 2] spectrum = np.abs(fft(signal))[:len(signal) // 2] smoothed_spectrum = savgol_filter(spectrum, window_length=11, polyorder=2) threshold = 0.05 * max(smoothed_spectrum) significant_freqs = freqs[smoothed_spectrum &gt;= threshold] optimal_cutoff = max(significant_freqs) threshold_dB = 20 * np.log10(threshold / max(smoothed_spectrum)) print(f&quot;\nOptimal cutoff frequency: {optimal_cutoff:.10f} Hz&quot;) print(f&quot;Threshold amplitude: {threshold_dB:.2f} dB&quot;) </code></pre> <p>Is this a sufficient way to determine the cutoff frequency? How should I decide on the threshold frequency expressed relative to the maximal frequency?</p>
<python><filter><scipy><signal-processing><fft>
2024-11-20 14:04:48
0
606
Márton Horváth
79,207,543
19,500,571
Plotly: Increasing figure size to make room for long footnote
<p>Building on the example in this <a href="https://stackoverflow.com/questions/67055505/plotly-dash-how-to-add-footnotes-and-source-text-to-plots">thread</a>, I have the following code:</p> <pre><code>import plotly.express as px import plotly.graph_objects as go import pandas as pd df = pd.read_csv('https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv') fig = go.Figure(go.Scatter(x=df['Date'], y=df['AAPL.High'])) fig.update_xaxes(rangeslider_visible=True) note = 'NYSE Trading Days After Announcement&lt;br&gt;Source:&lt;a href=&quot;https://www.nytimes.com/&quot;&quot;&gt;The NY TIMES&lt;/a&gt; Data: &lt;a href=&quot;https://www.yahoofinance.com/&quot;&gt;Yahoo! Finance&lt;/a&gt;&lt;br&gt;NYSE Trading Days After Announcement&lt;br&gt;Source:&lt;a href=&quot;https://www.nytimes.com/&quot;&quot;&gt;The NY TIMES&lt;/a&gt; Data: &lt;a href=&quot;https://www.yahoofinance.com/&quot;&gt;Yahoo! Finance&lt;/a&gt;&lt;br&gt;NYSE Trading Days After Announcement&lt;br&gt;Source:&lt;a href=&quot;https://www.nytimes.com/&quot;&quot;&gt;The NY TIMES&lt;/a&gt; Data: &lt;a href=&quot;https://www.yahoofinance.com/&quot;&gt;Yahoo! Finance&lt;/a&gt;&lt;br&gt;NYSE Trading Days After Announcement&lt;br&gt;Source:&lt;a href=&quot;https://www.nytimes.com/&quot;&quot;&gt;The NY TIMES&lt;/a&gt; Data: &lt;a href=&quot;https://www.yahoofinance.com/&quot;&gt;Yahoo! Finance&lt;/a&gt;' fig.add_annotation( showarrow=False, text=note, font=dict(size=10), xref='paper', x=0, yref='paper', y=-1.5 ) fig.show() </code></pre> <p>The annotation is too long to fit in the figure, so it gets cut off. Is there a way to add space at the bottom of the figure such that the annotation fits?</p>
<python><plotly><visualization>
2024-11-20 14:02:15
1
469
TylerD
79,207,488
4,847,250
How do I represent sided boxplot in seaborn when boxplots are already grouped?
<p>I'm seeking a way to represent two-sided box plots in seaborn. I have 2 indexes (index1 and index2) that I want to represent according to two pieces of information: info1 (a number) and info2 (a letter). My issue is that the boxplots I have are already grouped together, and I don't understand how to manage the last dimension.</p> <p>For now I can only represent both indexes separately in two panels (top and middle).</p> <p>What I would like is the box plots of the two indexes represented side by side.</p> <p>Something like this for instance: <a href="https://i.sstatic.net/oTDBTiYA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTDBTiYA.png" alt="enter image description here" /></a></p> <p>I don't know if it is easily doable.</p> <p>Here is a short example:</p> <pre><code>import numpy as np import seaborn as sns import pandas as pd import matplotlib.pyplot as plt fig = plt.figure() ax1 = plt.subplot(3, 1, 1) ax2 = plt.subplot(3, 1, 2) ax3 = plt.subplot(3, 1, 3) index1 = np.random.random((4,100,4)) intex2 = np.random.random((4,100,4))/2. 
info1 = np.zeros(shape=index1.shape,dtype='object') info1[0,:,:] = 'One' info1[1,:,:] = 'Two' info1[2,:,:] = 'Three' info1[3,:,:] = 'Four' info2 = np.zeros(shape=index1.shape, dtype='object') info2[:, :, 0] = 'A' info2[:, :, 1] = 'B' info2[:, :, 2] = 'C' info2[:, :, 3] = 'D' df = pd.DataFrame( columns=['Info1', 'Info2', 'Index1', 'Index2'], data=np.array( (info1.flatten(), info2.flatten(), index1.flatten(), intex2.flatten())).T) sns.boxplot(x='Info1', y='Index1', hue=&quot;Info2&quot;, data=df, ax=ax1) ax1.set_title('Index1') ax1.set_ylim([0, 1]) sns.boxplot(x='Info1', y='Index2', hue=&quot;Info2&quot;, data=df, ax=ax2) ax2.set_ylim([0, 1]) ax2.set_title('Index2') # sns.boxplot(x='Info1', y='Index1', hue=&quot;Info2&quot;, data=df, ax=ax3) ax3.set_ylim([0, 1]) ax3.set_title('Index1 + Index2') plt.show() </code></pre> <p><a href="https://i.sstatic.net/vBBPcco7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vBBPcco7.png" alt="enter image description here" /></a></p>
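One hedged route to the bottom panel: melt the frame so "which index" becomes a column of its own, then fold it into the hue together with Info2 — a single `sns.boxplot` call then draws Index1 and Index2 boxes side by side within each Info1 group. A sketch of the reshaping (the plotting call is left as a comment, with names assumed from the example):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "Info1": rng.choice(["One", "Two", "Three", "Four"], 160),
    "Info2": rng.choice(["A", "B", "C", "D"], 160),
    "Index1": rng.random(160),
    "Index2": rng.random(160) / 2.0,
})

# Wide -> long: the former column names become a plottable variable
long_df = df.melt(
    id_vars=["Info1", "Info2"],
    value_vars=["Index1", "Index2"],
    var_name="Index", value_name="val",
)
# Combined hue: one box per (Info2, Index) pair inside each Info1 group
long_df["group"] = long_df["Info2"] + " / " + long_df["Index"]

# sns.boxplot(x="Info1", y="val", hue="group", data=long_df, ax=ax3)
```

If pairing matters visually, sorting `group` so each letter's Index1/Index2 boxes sit adjacent usually suffices; a custom palette can shade the pairs.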
<python><seaborn><boxplot><group>
2024-11-20 13:48:11
1
5,207
ymmx
79,207,357
16,895,246
Tkinter keyboard-based time input
<p>I'm looking for a way to input a time in tkinter, but all of the examples I can find are either unnecessarily fancy <a href="https://pypi.org/project/tkTimePicker/" rel="nofollow noreferrer">https://pypi.org/project/tkTimePicker/</a> or just slow to input <a href="https://pypi.org/project/tkTimePicker/" rel="nofollow noreferrer">https://pypi.org/project/tkTimePicker/</a>. I'm writing the GUI as part of a tool for working with logs internally, meaning the users will be fairly technical and care far more about the program functioning quickly than looking pretty.</p> <p>Something like an HTML time input that you can simply type numbers into while it prevents you from inputting invalid times would be perfect (especially if it comes with an option for dates and/or lets you tab between the boxes). Does something like this exist for tkinter?</p> <p>In summary I'd like:</p> <ul> <li>Time input</li> <li>Fast to enter</li> <li>Able to validate the time automatically (i.e. you can't put in minute=76)</li> </ul>
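Lacking a ready-made widget, a hedged sketch of the usual DIY approach: a plain `Entry` with `validate="key"` and a validator that accepts every *prefix* of a valid 24h `HH:MM` time, so typing is never blocked mid-keystroke (the Tk wiring is left as comments since it needs a display; widget names there are illustrative):

```python
import re

# Accepts every prefix of a valid HH:MM time: "", "2", "23", "23:", "23:5", "23:59"
_TIME_PREFIX = re.compile(r"([01]?\d|2[0-3])?(:([0-5]?\d)?)?")

def validate_time(proposed: str) -> bool:
    """Return True if `proposed` could still grow into a valid HH:MM time."""
    return _TIME_PREFIX.fullmatch(proposed) is not None

# Wiring sketch -- run only where a display is available:
# import tkinter as tk
# root = tk.Tk()
# vcmd = (root.register(validate_time), "%P")  # %P = the value if the edit is allowed
# entry = tk.Entry(root, validate="key", validatecommand=vcmd, width=5)
# entry.pack()
# root.mainloop()
```

The same prefix-regex idea extends to dates (`YYYY-MM-DD`), and separate hour/minute `Entry` boxes with `focus_set()` on completion give the tab-between-boxes feel of the HTML widget.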
<python><tkinter><time>
2024-11-20 13:12:57
0
1,441
Pioneer_11
79,207,351
4,841,654
Error 422: Unprocessable Entity errors with Openstack python-swiftclient
<p>I am writing a Python script using <code>python-swiftclient</code> and <code>zipfile</code> to zip and upload files to the Swift API endpoint of an OpenStack Object Store. I store the zipped data in memory as an <code>io.BytesIO</code> object.</p> <p>Code snippet:</p> <pre class="lang-py prettyprint-override"><code>arc_name = 'test.zip' zip_buffer = io.BytesIO() with zipfile.ZipFile(zip_buffer, &quot;a&quot;, zipfile.ZIP_DEFLATED, True) as zip_file: for file in files: with open(file, 'rb') as src_file: zip_file.writestr(arc_name, src_file.read()) </code></pre> <p>...</p> <pre class="lang-py prettyprint-override"><code>zip_data = zip_buffer.getvalue() checksum_base64 = base64.b64encode(hashlib.md5(zip_data).digest()).decode() swift_conn = swiftclient.Connection(&lt;creds&gt;) container_name = 'swift-test' swift_conn.put_object(container=container_name, contents=zip_data, content_type=None, obj=arc_name, etag=checksum_base64) </code></pre> <p>The error is:</p> <pre><code>swiftclient.exceptions.ClientException: Object PUT failed: https://..../swift/v1/swift-test/test.zip 422 Unprocessable Entity </code></pre> <p>From other HTTP 422 error questions, my thinking is that the issue is <code>content_type</code> (MIME type). I've tried <code>'application/zip'</code> and <code>'multipart/mixed'</code> but always see the same error.</p> <p>If another MIME type is more appropriate, or I'm missing something else, I'd be grateful for any help.</p>
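For what it's worth, Swift's object PUT compares the supplied ETag against the *hex* MD5 digest of the body and answers 422 Unprocessable Entity on a mismatch — a base64-encoded digest (as in the snippet) can never match, whatever the `content_type`. A hedged sketch of the checksum side; the `put_object` call is left as a comment since it needs live credentials:

```python
import hashlib
import io
import zipfile

# Build a small zip in memory, as in the question
zip_buffer = io.BytesIO()
with zipfile.ZipFile(zip_buffer, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("report.txt", b"hello")
zip_data = zip_buffer.getvalue()

# Hex digest, not base64 -- this is what Swift validates the upload against
etag = hashlib.md5(zip_data).hexdigest()

# swift_conn.put_object(container="swift-test", obj="test.zip",
#                       contents=zip_data, etag=etag,
#                       content_type="application/zip")
```

Omitting `etag` entirely also avoids the 422 (at the cost of losing the end-to-end integrity check).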
<python><mime-types><python-zipfile><object-storage><openstack-swift>
2024-11-20 13:11:44
1
501
Dave
79,207,328
5,350,089
Changing Serial Port Parity at Run time Python Serial
<p>Hi, I am working on a Python serial project. I want to change the serial port parity at runtime, but it is not working properly; in odd parity mode it does not send the correct data.</p> <pre><code>import serial import time ser = serial.Serial( port='COM3', baudrate=9600, parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE, bytesize=serial.EIGHTBITS, timeout=0 ) time.sleep(2) while True: ser.parity = serial.PARITY_ODD ser.write(serial.to_bytes([0x01])) ser.write(serial.to_bytes([0x02])) ser.write(serial.to_bytes([0x03])) ser.write(serial.to_bytes([0x01])) ser.write(serial.to_bytes([0x02])) ser.write(serial.to_bytes([0x03])) time.sleep(2) </code></pre> <p>In the above code I am sending three bytes of data with odd and even parity. While sending, I am able to receive the even-parity data with the correct values, but in odd parity mode it does not transmit the correct values.</p> <p>For even parity, at the receiver end I am getting 01 02 03.</p> <p>For odd parity, at the receiver end I am sometimes getting 01 12 03, sometimes 01 08 03, and sometimes 81 02 03.</p> <p>Please guide me on how to change the parity of the serial port at runtime.</p>
<python><serial-port><pyserial><parity>
2024-11-20 13:05:50
0
445
Sathish
79,207,322
2,447,427
SMAC3 conda dependency
<p>Are there any caveats to take into consideration before installing <a href="https://www.automl.org/hpo-overview/hpo-tools/smac/" rel="nofollow noreferrer">AutoML</a>'s <a href="https://github.com/automl/SMAC3" rel="nofollow noreferrer">SMAC3</a> using <code>pip</code> inside a regular <code>venv</code> rather than within a <code>conda</code> environment, as recommended by the <a href="https://automl.github.io/SMAC3/main/1_installation.html" rel="nofollow noreferrer">docs</a>?</p> <p>The docs suggest using <code>conda</code> for environment management, but in the initial 'Requirements' section they suggest installing <code>swig</code>, the only system dependency that can't be handled by <code>pip</code>, using the OS's package manager. Moreover, the first option literally stated in the docs is to &quot;install SMAC via PyPI&quot; using <code>pip</code>.</p> <p>I'm not very experienced with the <code>anaconda</code> distribution. Are there any special versions or builds of some packages that <code>smac</code> depends on? I understand that it comes with its own <code>python</code> interpreter implementation or build, too. I'm assuming it respects the CPython standards, but maybe there are some key differences that SMAC3 could depend on?</p> <p><strong>Edit:</strong> I forgot to mention that I ran the test battery of SMAC both in a conda env and in a venv, and the same few tests fail.</p> <p><strong>Edit 2:</strong> apparently, there's an option even for <code>swig</code> to be installed via <code>pip</code> as a <a href="https://pypi.org/project/swig/" rel="nofollow noreferrer">PyPI package</a></p>
<python><pip><anaconda><conda><pypi>
2024-11-20 13:03:48
1
343
bbudescu
79,207,126
7,179,546
How to launch 2 requests in python in parallel
<p>I want to execute two functions in Python that make requests under the hood. I don't mind waiting until both have finished, but they should be executed in parallel.</p> <p>I tried to use the <code>concurrent.futures</code> library, but that seemed to require using async all the way up to the top-level function, which is synchronous and would need a huge refactoring to make async.</p> <p>I'm trying this approach, but I'm not sure it actually parallelises everything correctly:</p> <pre><code>def worker(function, queue_prepared, *args): result = function(*args) queue_prepared.put(result) def launch_threads(param1, param2): queue_first = queue.Queue() queue_second = queue.Queue() thread_first = threading.Thread(target=worker, args=(request1, queue_first, param1, param2)) thread_second = threading.Thread(target=worker, args=(request2, queue_second, param1)) thread_first.start() thread_second.start() thread_first.join() thread_second.join() return queue_first, queue_second queue_first, queue_second = launch_threads(param1,param2) queue_first_finished = False queue_second_finished = False while not queue_first_finished or not queue_second_finished: if not queue_first_finished: if not queue_first.empty(): first = queue_first.get(timeout=1) else: queue_first_finished = True if not queue_second_finished: if not queue_second.empty(): first = queue_second.get(timeout=1) else: queue_second_finished = True </code></pre>
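For reference, and hedged as a sketch since `request1`/`request2` below are stand-ins for the real request functions: `concurrent.futures` does not require async at all. `ThreadPoolExecutor` takes plain synchronous callables, which removes both the manual queues and the polling loop.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def request1(param1, param2):
    time.sleep(0.1)  # stands in for a blocking HTTP call
    return ("r1", param1, param2)

def request2(param1):
    time.sleep(0.1)
    return ("r2", param1)

def launch_parallel(param1, param2):
    with ThreadPoolExecutor(max_workers=2) as pool:
        fut1 = pool.submit(request1, param1, param2)  # starts immediately
        fut2 = pool.submit(request2, param1)          # runs concurrently
        # .result() blocks until each call finishes and re-raises any exception
        return fut1.result(), fut2.result()

first, second = launch_parallel("a", "b")
```

Both calls run at the same time on worker threads; the caller stays fully synchronous.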
<python><multithreading><parallel-processing>
2024-11-20 12:04:18
1
737
Carabes
79,206,947
8,792,159
How to apply a function to each 2D slice of a 4D Numpy array in parallel with Dask without running out of RAM?
<p>I want to apply a function to each 2D slice of a 4D Numpy array using Dask. The output should be a 2D matrix (the function applied to each 2D slice returns a single value). I would like to do this in parallel. Problem: I'm not sure I understand correctly how Dask implements the parallel calculation. Currently, I am always running out of RAM. My naive understanding is that the entire input array and all currently processed chunks must fit in RAM (+ some extra space for all other applications to work). But it seems that each chunk gets a copy of the input array?</p> <p>Here's some example code (<code>data</code> will be roughly ~ 20 GiB big). I know there are more efficient ways to compute the sum over the last two dimensions. This is just an example function to illustrate the problem (aka. the question is not about computing the sum over the last two dimensions of a 4D array).</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import dask.array as da # set seed np.random.RandomState(42) # Create example data array array_shape = (1000,300,50,200) data = np.random.random(array_shape) # Create a large 4D NumPy array # how big is whole array and how big is each chunk? array_gib = data.nbytes / (1024 ** 3) chunk_gib = data[0,0,:,:].nbytes / (1024 ** 3) print(f&quot;Memory occupied by array: {array_gib} GiB, Memory occupied by chunk: {chunk_gib} GiB&quot;) # Define an example function that operates on a 2D slice and returns a single value def sum_of_2d_slice(chunk): return chunk.sum(axis=None)[None,None] # Define dask array with chunks. We want to iterate over the first two dimensions # so each chunk is a 2D matrix data = da.from_array(data, chunks=(1,1,data.shape[2],data.shape[3])) # Map function to each chunk result = data.map_blocks(sum_of_2d_slice,drop_axis=[2,3]) # Compute the final result final_result = result.compute(num_workers=5,processes=True,memory_limit='1GB') # Print the result print(final_result) </code></pre>
<python><arrays><numpy><parallel-processing><dask>
2024-11-20 11:14:24
0
1,317
Johannes Wiesner
79,206,753
4,042,267
How to type hint a factory fixture for a pydantic model for tests
<p>Let's assume I have a <code>pydantic</code> model, such as this <code>Widget</code>:</p> <p><code>models.py</code></p> <pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel class Widget(BaseModel): name: str value: float </code></pre> <p>When writing tests (using <code>pytest</code>), which use this <code>Widget</code>, I frequently want to be able to create widgets on the fly, with some default values for its fields (which I do not want to set as default values in general, ie on the model, because they are only meant as default values for tests), and potentially some fields being set to certain values.</p> <p>For this I currently have this clunky construct in my <code>conftest.py</code> file:</p> <p><code>conftest.py</code></p> <pre class="lang-py prettyprint-override"><code>from typing import NotRequired, Protocol, TypedDict, Unpack import pytest from models import Widget class WidgetFactoryKwargs(TypedDict): name: NotRequired[str] value: NotRequired[float] class WidgetFactory(Protocol): def __call__(self, **kwargs: Unpack[WidgetFactoryKwargs]) -&gt; Widget: ... 
@pytest.fixture def widget_factory() -&gt; WidgetFactory: def _widget_factory(**kwargs: Unpack[WidgetFactoryKwargs]) -&gt; Widget: defaults = WidgetFactoryKwargs(name=&quot;foo&quot;, value=42) kwargs = defaults | kwargs return Widget(**kwargs) return _widget_factory </code></pre> <p>This gives the type checker the ability to check if I am using the factory correctly in my tests and gives my IDE autocompletion powers:</p> <p><code>test_widgets.py</code></p> <pre class="lang-py prettyprint-override"><code>from typing import assert_type from conftest import WidgetFactory def test_widget_creation(widget_factory: WidgetFactory) -&gt; None: widget = widget_factory() assert_type(widget, Widget) # during type checking assert isinstance(widget, Widget) # during run time assert widget.name == &quot;foo&quot; assert widget.value == 42 widget = widget_factory(name=&quot;foobar&quot;) assert widget.name == &quot;foobar&quot; assert widget.value == 42 widget = widget_factory(value=1337) assert widget.name == &quot;foo&quot; assert widget.value == 1337 widget = widget_factory(name=&quot;foobar&quot;, value=1337) assert widget.name == &quot;foobar&quot; assert widget.value == 1337 widget = widget_factory(mode=&quot;maintenance&quot;) # type checker error </code></pre> <p>(the actual tests are of course more involved and use the widget in some other way)</p> <p><strong>Question:</strong></p> <p>Is there a better way to achieve this type safety? Ideally, I could build the <code>WidgetFactoryKwargs</code> <code>TypedDict</code> &quot;dynamically&quot; based on the Pydantic model. This would at least get rid of the <code>TypedDict</code> (and the associated maintenance cost of keeping it in line with any changes to the fields of the Pydantic model). 
But building a <code>TypedDict</code> dynamically is something <a href="https://mail.python.org/archives/list/typing-sig@python.org/thread/JDKKB5SXC6XQ5YBTANSWI4LGM67FTLFH/" rel="nofollow noreferrer">explicitly not supported for static type checking</a>.</p> <p>The <code>Widget</code> model, while technically dynamic, can be assumed to be static (no weird monkey-patching of my models after defining them).</p>
<python><pytest><python-typing><pydantic>
2024-11-20 10:17:20
1
7,246
Graipher
79,206,734
9,547,278
Unable to pass self.instance_variable as default into a class method
<p>I have a class with a method into which I am trying to pass a <code>self.instance_variable</code> as a default argument, but I am unable to. Let me illustrate:</p> <pre><code>from openai import OpenAI class Example_class: def __init__(self) -&gt; None: self.client = OpenAI(api_key='xyz') self.client2 = OpenAI(api_key='abc') def chat_completion(self, prompt, context, client=self.client, model='gpt-4o'): # Process the prompt messages = [{&quot;role&quot;: &quot;system&quot;, &quot;content&quot;: context}, {&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: prompt}] response = client.chat.completions.create( model=model, messages=messages, temperature=0.35, # this is the degree of randomness of the model's output ) return response.choices[0].message.content def do_something(self): self.chat_completion(prompt=&quot;blah blah blah&quot;, context=&quot;fgasa&quot;) </code></pre> <p>You see, there is an error when trying to pass <code>self.client</code> into the <code>chat_completion</code> method. Where am I going wrong?</p>
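The underlying issue, which is not specific to OpenAI: default parameter values are evaluated once, at class-definition time, when no instance (and hence no `self`) exists yet. The standard idiom is a `None` sentinel resolved inside the method. A self-contained sketch, where `DummyClient` is a hypothetical stand-in for the real client object:

```python
class DummyClient:
    """Stand-in for the real client object (hypothetical)."""
    def __init__(self, name):
        self.name = name

class ExampleClass:
    def __init__(self):
        self.client = DummyClient("primary")
        self.client2 = DummyClient("secondary")

    # No self.client in the signature: use None and resolve it per call
    def chat_completion(self, prompt, client=None, model="gpt-4o"):
        if client is None:  # at call time, self is bound, so this works
            client = self.client
        return (client.name, model, prompt)
```

Callers get the instance's default client unless they pass one explicitly.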
<python><class><methods><instance-variables>
2024-11-20 10:12:56
1
474
Legion
79,206,699
3,161,120
pylint --generated-members doesn't ignore class protobuf class
<p>Would any of you know how to configure <code>pylint</code> so that it does not raise <code>no-member</code> errors for the following module?</p> <p>The <code>generated-members</code> option doesn't work for <code>'vox_api.api_pb2.*'</code> and <code>'vox_api.*'</code>, but works for <code>'Response'</code>:</p> <p>Results:</p> <pre><code>$ pylint --generated-members='vox_api.api_pb2.*' src/pyjct/jarvis_response.py -v Using config file /storage/amoje/Sync/area22/jct/.pylintrc ************* Module pyjct.jarvis_response src/pyjct/jarvis_response.py:17:13: E1101: Module 'vox_api.api_pb2' has no 'Response' member (no-member) src/pyjct/jarvis_response.py:18:12: E1101: Module 'vox_api.api_pb2' has no 'Response' member (no-member) src/pyjct/jarvis_response.py:19:11: E1101: Module 'vox_api.api_pb2' has no 'Response' member (no-member) --------------------------------------------------------------------------------------------------- Your code has been rated at 4.83/10 (previous run: 4.83/10, +0.00) Checked 1 files, skipped 0 files </code></pre> <pre><code>pylint --generated-members='vox_api.*' src/pyjct/jarvis_response.py -v Using config file /storage/amoje/Sync/area22/jct/.pylintrc ************* Module pyjct.jarvis_response src/pyjct/jarvis_response.py:17:13: E1101: Module 'vox_api.api_pb2' has no 'Response' member (no-member) src/pyjct/jarvis_response.py:18:12: E1101: Module 'vox_api.api_pb2' has no 'Response' member (no-member) src/pyjct/jarvis_response.py:19:11: E1101: Module 'vox_api.api_pb2' has no 'Response' member (no-member) ---------------------------------------------------------------------------------------------------- Your code has been rated at 4.83/10 (previous run: 10.00/10, -5.17) Checked 1 files, skipped 0 files </code></pre> <p>Works for <code>--generated-members=&quot;Response&quot;</code>:</p> <pre><code>$ pylint --generated-members=&quot;Response&quot; src/pyjct/jarvis_response.py -v Using config file /storage/amoje/Sync/area22/jct/.pylintrc
----------------------------------------------------------------------------------------------------- Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00) Checked 1 files, skipped 0 files </code></pre> <pre><code>$ pylint --version pylint 3.3.1 astroid 3.3.5 Python 3.12.3 (main, Sep 11 2024, 14:17:37) [GCC 13.2.0] </code></pre> <p>Snippet of the code:</p> <pre><code>import vox_api.api_pb2 as api @dataclass class JarvisResponse: session: api.Response </code></pre> <p><strong>EDIT</strong> I have just figured out that it works for: <code>--generated-members='api.Response'</code>.</p>
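Building on the EDIT above: `generated-members` appears to match the attribute access as spelled at the use site (through the `api` import alias), not the fully qualified module path. A `.pylintrc` fragment reflecting that finding (the regex variant is an assumption based on pylint treating these entries as patterns, so verify it against the pylint docs):

```ini
[TYPECHECK]
# Matches the access as written in the source ("api.Response"), not the
# underlying module path "vox_api.api_pb2.Response":
generated-members=api.Response
# or, to cover every generated member reached through the alias:
# generated-members=api\..*
```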
<python><protocol-buffers><pylint><pylintrc>
2024-11-20 10:02:52
0
1,830
gbajson
79,206,689
5,305,512
Azure AI Studio - Script runs indefinitely with no errors or outputs
<p>I am following <a href="https://learn.microsoft.com/en-us/azure/ai-studio/tutorials/copilot-sdk-build-rag" rel="nofollow noreferrer">this</a> tutorial to get started with Azure AI Studio. The <code>create_search_index.py</code> script ran successfully and created an index. But the <code>get_product_documents.py</code> and <code>chat_with_products.py</code> scripts do not produce any error or output when running, they just keep running indefinitely.</p> <p>Any idea what might be going on? And what could I try out to fix the issue?</p> <hr /> <p>Here are the scripts from the link:</p> <pre><code>### config.py # ruff: noqa: ANN201, ANN001 import os import sys import pathlib import logging from azure.identity import DefaultAzureCredential from azure.ai.projects import AIProjectClient from azure.ai.inference.tracing import AIInferenceInstrumentor # load environment variables from the .env file from dotenv import load_dotenv load_dotenv() # Set &quot;./assets&quot; as the path where assets are stored, resolving the absolute path: ASSET_PATH = pathlib.Path(__file__).parent.resolve() / &quot;assets&quot; # Configure an root app logger that prints info level logs to stdout logger = logging.getLogger(&quot;app&quot;) logger.setLevel(logging.INFO) logger.addHandler(logging.StreamHandler(stream=sys.stdout)) # Returns a module-specific logger, inheriting from the root app logger def get_logger(module_name): return logging.getLogger(f&quot;app.{module_name}&quot;) # Enable instrumentation and logging of telemetry to the project def enable_telemetry(log_to_project: bool = False): AIInferenceInstrumentor().instrument() # enable logging message contents os.environ[&quot;AZURE_TRACING_GEN_AI_CONTENT_RECORDING_ENABLED&quot;] = &quot;true&quot; if log_to_project: from azure.monitor.opentelemetry import configure_azure_monitor project = AIProjectClient.from_connection_string( conn_str=os.environ[&quot;AIPROJECT_CONNECTION_STRING&quot;], credential=DefaultAzureCredential() ) 
tracing_link = f&quot;https://ai.azure.com/tracing?wsid=/subscriptions/{project.scope['subscription_id']}/resourceGroups/{project.scope['resource_group_name']}/providers/Microsoft.MachineLearningServices/workspaces/{project.scope['project_name']}&quot; application_insights_connection_string = project.telemetry.get_connection_string() if not application_insights_connection_string: logger.warning( &quot;No application insights configured, telemetry will not be logged to project. Add application insights at:&quot; ) logger.warning(tracing_link) return configure_azure_monitor(connection_string=application_insights_connection_string) logger.info(&quot;Enabled telemetry logging to project, view traces at:&quot;) logger.info(tracing_link) </code></pre> <hr /> <pre><code>### create_search_index.py import os from azure.ai.projects import AIProjectClient from azure.ai.projects.models import ConnectionType from azure.identity import DefaultAzureCredential from azure.core.credentials import AzureKeyCredential from azure.search.documents import SearchClient from azure.search.documents.indexes import SearchIndexClient from config import get_logger # initialize logging object logger = get_logger(__name__) # create a project client using environment variables loaded from the .env file # project = AIProjectClient.from_connection_string( # conn_str=os.environ[&quot;AIPROJECT_CONNECTION_STRING&quot;], credential=DefaultAzureCredential() # ) API_ENDPOINT = os.environ[&quot;API_ENDPOINT&quot;] API_KEY = os.environ[&quot;API_KEY&quot;] SUBSCRIPTION_ID = os.environ[&quot;SUBSCRIPTION_ID&quot;] RESOURCE_GROUP_NAME = os.environ[&quot;RESOURCE_GROUP_NAME&quot;] PROJECT_NAME = os.environ[&quot;PROJECT_NAME&quot;] # Initialize the AIProjectClient project = AIProjectClient( endpoint=API_ENDPOINT, credential=DefaultAzureCredential(), subscription_id=SUBSCRIPTION_ID, resource_group_name=RESOURCE_GROUP_NAME, project_name=PROJECT_NAME ) # create a vector embeddings client that will be used to 
generate vector embeddings embeddings = project.inference.get_embeddings_client() # use the project client to get the default search connection search_connection = project.connections.get_default( connection_type=ConnectionType.AZURE_AI_SEARCH, include_credentials=True ) # Create a search index client using the search connection # This client will be used to create and delete search indexes index_client = SearchIndexClient( endpoint=search_connection.endpoint_url, credential=AzureKeyCredential(key=search_connection.key) ) ### Define a search index import pandas as pd from azure.search.documents.indexes.models import ( SemanticSearch, SearchField, SimpleField, SearchableField, SearchFieldDataType, SemanticConfiguration, SemanticPrioritizedFields, SemanticField, VectorSearch, HnswAlgorithmConfiguration, VectorSearchAlgorithmKind, HnswParameters, VectorSearchAlgorithmMetric, ExhaustiveKnnAlgorithmConfiguration, ExhaustiveKnnParameters, VectorSearchProfile, SearchIndex, ) def create_index_definition(index_name: str, model: str) -&gt; SearchIndex: dimensions = 1536 # text-embedding-ada-002 if model == &quot;text-embedding-3-large&quot;: dimensions = 3072 # The fields we want to index. The &quot;embedding&quot; field is a vector field that will # be used for vector search. fields = [ SimpleField(name=&quot;id&quot;, type=SearchFieldDataType.String, key=True), SearchableField(name=&quot;content&quot;, type=SearchFieldDataType.String), SimpleField(name=&quot;filepath&quot;, type=SearchFieldDataType.String), SearchableField(name=&quot;title&quot;, type=SearchFieldDataType.String), SimpleField(name=&quot;url&quot;, type=SearchFieldDataType.String), SearchField( name=&quot;contentVector&quot;, type=SearchFieldDataType.Collection(SearchFieldDataType.Single), searchable=True, # Size of the vector created by the text-embedding-ada-002 model. 
vector_search_dimensions=dimensions, vector_search_profile_name=&quot;myHnswProfile&quot;, ), ] # The &quot;content&quot; field should be prioritized for semantic ranking. semantic_config = SemanticConfiguration( name=&quot;default&quot;, prioritized_fields=SemanticPrioritizedFields( title_field=SemanticField(field_name=&quot;title&quot;), keywords_fields=[], content_fields=[SemanticField(field_name=&quot;content&quot;)], ), ) # For vector search, we want to use the HNSW (Hierarchical Navigable Small World) # algorithm (a type of approximate nearest neighbor search algorithm) with cosine # distance. vector_search = VectorSearch( algorithms=[ HnswAlgorithmConfiguration( name=&quot;myHnsw&quot;, kind=VectorSearchAlgorithmKind.HNSW, parameters=HnswParameters( m=4, ef_construction=1000, ef_search=1000, metric=VectorSearchAlgorithmMetric.COSINE, ), ), ExhaustiveKnnAlgorithmConfiguration( name=&quot;myExhaustiveKnn&quot;, kind=VectorSearchAlgorithmKind.EXHAUSTIVE_KNN, parameters=ExhaustiveKnnParameters(metric=VectorSearchAlgorithmMetric.COSINE), ), ], profiles=[ VectorSearchProfile( name=&quot;myHnswProfile&quot;, algorithm_configuration_name=&quot;myHnsw&quot;, ), VectorSearchProfile( name=&quot;myExhaustiveKnnProfile&quot;, algorithm_configuration_name=&quot;myExhaustiveKnn&quot;, ), ], ) # Create the semantic settings with the configuration semantic_search = SemanticSearch(configurations=[semantic_config]) # Create the search index definition return SearchIndex( name=index_name, fields=fields, semantic_search=semantic_search, vector_search=vector_search, ) ### add a csv file to the index # define a function for indexing a csv file, that adds each row as a document # and generates vector embeddings for the specified content_column def create_docs_from_csv(path: str, content_column: str, model: str) -&gt; list[dict[str, any]]: products = pd.read_csv(path) items = [] for product in products.to_dict(&quot;records&quot;): content = product[content_column] id = 
str(product[&quot;id&quot;]) title = product[&quot;name&quot;] url = f&quot;/products/{title.lower().replace(' ', '-')}&quot; emb = embeddings.embed(input=content, model=model) rec = { &quot;id&quot;: id, &quot;content&quot;: content, &quot;filepath&quot;: f&quot;{title.lower().replace(' ', '-')}&quot;, &quot;title&quot;: title, &quot;url&quot;: url, &quot;contentVector&quot;: emb.data[0].embedding, } items.append(rec) return items def create_index_from_csv(index_name, csv_file): # If a search index already exists, delete it: try: index_definition = index_client.get_index(index_name) index_client.delete_index(index_name) logger.info(f&quot;🗑️ Found existing index named '{index_name}', and deleted it&quot;) except Exception: pass # create an empty search index index_definition = create_index_definition(index_name, model=os.environ[&quot;EMBEDDINGS_MODEL&quot;]) index_client.create_index(index_definition) # create documents from the products.csv file, generating vector embeddings for the &quot;description&quot; column docs = create_docs_from_csv(path=csv_file, content_column=&quot;description&quot;, model=os.environ[&quot;EMBEDDINGS_MODEL&quot;]) # Add the documents to the index using the Azure AI Search client search_client = SearchClient( endpoint=search_connection.endpoint_url, index_name=index_name, credential=AzureKeyCredential(key=search_connection.key), ) search_client.upload_documents(docs) logger.info(f&quot;➕ Uploaded {len(docs)} documents to '{index_name}' index&quot;) ### run the functions to build the index and register it to the cloud project if __name__ == &quot;__main__&quot;: import argparse parser = argparse.ArgumentParser() parser.add_argument( &quot;--index-name&quot;, type=str, help=&quot;index name to use when creating the AI Search index&quot;, default=os.environ[&quot;AISEARCH_INDEX_NAME&quot;], ) parser.add_argument( &quot;--csv-file&quot;, type=str, help=&quot;path to data for creating search index&quot;, 
default=&quot;assets/products.csv&quot; ) args = parser.parse_args() index_name = args.index_name csv_file = args.csv_file create_index_from_csv(index_name, csv_file) </code></pre> <hr /> <pre><code>### get_product_documents.py import os from pathlib import Path from opentelemetry import trace from azure.ai.projects import AIProjectClient from azure.ai.projects.models import ConnectionType from azure.identity import DefaultAzureCredential from azure.core.credentials import AzureKeyCredential from azure.search.documents import SearchClient from config import ASSET_PATH, get_logger from azure.ai.inference.prompts import PromptTemplate from azure.search.documents.models import VectorizedQuery # initialize logging and tracing objects logger = get_logger(__name__) tracer = trace.get_tracer(__name__) # create a project client using environment variables loaded from the .env file # project = AIProjectClient.from_connection_string( # conn_str=os.environ[&quot;AIPROJECT_CONNECTION_STRING&quot;], credential=DefaultAzureCredential() # ) API_ENDPOINT = os.environ[&quot;API_ENDPOINT&quot;] API_KEY = os.environ[&quot;API_KEY&quot;] SUBSCRIPTION_ID = os.environ[&quot;SUBSCRIPTION_ID&quot;] RESOURCE_GROUP_NAME = os.environ[&quot;RESOURCE_GROUP_NAME&quot;] PROJECT_NAME = os.environ[&quot;PROJECT_NAME&quot;] # Initialize the AIProjectClient project = AIProjectClient( endpoint=API_ENDPOINT, credential=DefaultAzureCredential(), subscription_id=SUBSCRIPTION_ID, resource_group_name=RESOURCE_GROUP_NAME, project_name=PROJECT_NAME ) # create a vector embeddings client that will be used to generate vector embeddings chat = project.inference.get_chat_completions_client() embeddings = project.inference.get_embeddings_client() # use the project client to get the default search connection search_connection = project.connections.get_default( connection_type=ConnectionType.AZURE_AI_SEARCH, include_credentials=True ) # Create a search index client using the search connection # This client will be 
used to create and delete search indexes search_client = SearchClient( index_name=os.environ[&quot;AISEARCH_INDEX_NAME&quot;], endpoint=search_connection.endpoint_url, credential=AzureKeyCredential(key=search_connection.key), ) @tracer.start_as_current_span(name=&quot;get_product_documents&quot;) def get_product_documents(messages: list, context: dict = None) -&gt; dict: if context is None: context = {} overrides = context.get(&quot;overrides&quot;, {}) top = overrides.get(&quot;top&quot;, 5) # generate a search query from the chat messages intent_prompty = PromptTemplate.from_prompty(Path(ASSET_PATH) / &quot;intent_mapping.prompty&quot;) intent_mapping_response = chat.complete( model=os.environ[&quot;INTENT_MAPPING_MODEL&quot;], messages=intent_prompty.create_messages(conversation=messages), **intent_prompty.parameters, ) search_query = intent_mapping_response.choices[0].message.content logger.debug(f&quot;🧠 Intent mapping: {search_query}&quot;) # generate a vector representation of the search query embedding = embeddings.embed(model=os.environ[&quot;EMBEDDINGS_MODEL&quot;], input=search_query) search_vector = embedding.data[0].embedding # search the index for products matching the search query vector_query = VectorizedQuery(vector=search_vector, k_nearest_neighbors=top, fields=&quot;contentVector&quot;) search_results = search_client.search( search_text=search_query, vector_queries=[vector_query], select=[&quot;id&quot;, &quot;content&quot;, &quot;filepath&quot;, &quot;title&quot;, &quot;url&quot;] ) documents = [ { &quot;id&quot;: result[&quot;id&quot;], &quot;content&quot;: result[&quot;content&quot;], &quot;filepath&quot;: result[&quot;filepath&quot;], &quot;title&quot;: result[&quot;title&quot;], &quot;url&quot;: result[&quot;url&quot;], } for result in search_results ] # add results to the provided context if &quot;thoughts&quot; not in context: context[&quot;thoughts&quot;] = [] # add thoughts and documents to the context object so it can be returned to the 
caller context[&quot;thoughts&quot;].append( { &quot;title&quot;: &quot;Generated search query&quot;, &quot;description&quot;: search_query, } ) if &quot;grounding_data&quot; not in context: context[&quot;grounding_data&quot;] = [] context[&quot;grounding_data&quot;].append(documents) logger.debug(f&quot;📄 {len(documents)} documents retrieved: {documents}&quot;) return documents if __name__ == &quot;__main__&quot;: import logging import argparse # set logging level to debug when running this module directly logger.setLevel(logging.DEBUG) # load command line arguments parser = argparse.ArgumentParser() parser.add_argument( &quot;--query&quot;, type=str, help=&quot;Query to use to search product&quot;, default=&quot;I need a new tent for 4 people, what would you recommend?&quot;, ) args = parser.parse_args() query = args.query result = get_product_documents(messages=[{&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: query}]) </code></pre> <hr /> <pre><code>### chat_with_products.py import os from pathlib import Path from opentelemetry import trace from azure.ai.projects import AIProjectClient from azure.identity import DefaultAzureCredential from config import ASSET_PATH, get_logger, enable_telemetry from get_product_documents import get_product_documents from azure.ai.inference.prompts import PromptTemplate # initialize logging and tracing objects logger = get_logger(__name__) tracer = trace.get_tracer(__name__) # create a project client using environment variables loaded from the .env file # project = AIProjectClient.from_connection_string( # conn_str=os.environ[&quot;AIPROJECT_CONNECTION_STRING&quot;], credential=DefaultAzureCredential() # ) API_ENDPOINT = os.environ[&quot;API_ENDPOINT&quot;] API_KEY = os.environ[&quot;API_KEY&quot;] SUBSCRIPTION_ID = os.environ[&quot;SUBSCRIPTION_ID&quot;] RESOURCE_GROUP_NAME = os.environ[&quot;RESOURCE_GROUP_NAME&quot;] PROJECT_NAME = os.environ[&quot;PROJECT_NAME&quot;] # Initialize the AIProjectClient project = 
AIProjectClient( endpoint=API_ENDPOINT, credential=DefaultAzureCredential(), subscription_id=SUBSCRIPTION_ID, resource_group_name=RESOURCE_GROUP_NAME, project_name=PROJECT_NAME ) # create a chat client we can use for testing chat = project.inference.get_chat_completions_client() @tracer.start_as_current_span(name=&quot;chat_with_products&quot;) def chat_with_products(messages: list, context: dict = None) -&gt; dict: if context is None: context = {} documents = get_product_documents(messages, context) # do a grounded chat call using the search results grounded_chat_prompt = PromptTemplate.from_prompty(Path(ASSET_PATH) / &quot;grounded_chat.prompty&quot;) system_message = grounded_chat_prompt.create_messages(documents=documents, context=context) response = chat.complete( model=os.environ[&quot;CHAT_MODEL&quot;], messages=system_message + messages, **grounded_chat_prompt.parameters, ) logger.info(f&quot;💬 Response: {response.choices[0].message}&quot;) # Return a chat protocol compliant response return {&quot;message&quot;: response.choices[0].message, &quot;context&quot;: context} if __name__ == &quot;__main__&quot;: import argparse # load command line arguments parser = argparse.ArgumentParser() parser.add_argument( &quot;--query&quot;, type=str, help=&quot;Query to use to search product&quot;, default=&quot;I need a new tent for 4 people, what would you recommend?&quot;, ) parser.add_argument( &quot;--enable-telemetry&quot;, action=&quot;store_true&quot;, help=&quot;Enable sending telemetry back to the project&quot;, ) args = parser.parse_args() if args.enable_telemetry: enable_telemetry(True) # run chat with products response = chat_with_products(messages=[{&quot;role&quot;: &quot;user&quot;, &quot;content&quot;: args.query}]) </code></pre>
<python><azure><azure-ai><rag>
2024-11-20 10:00:56
1
3,764
Kristada673
79,206,673
6,703,592
dataframe resample get the start time
<pre><code>freq = '1m' for date, df in df_trade.resample(freq): print(date) start = date - datetime.timedelta(freq) </code></pre> <p>We know that here <code>date</code> is the end time of the resample bin under the given <code>freq</code>, e.g. for the interval <code>('2021-03-01', '2021-03-31')</code>, <code>date</code> is <code>'2021-03-31'</code>.</p> <p>How could I simply get the start? <code>start = date - datetime.timedelta(freq)</code> is one way. However, it requires converting <code>'1m'</code> to <code>'days'</code>, which is not a direct way (say if I change the <code>freq</code>, we may not get the exact start date of the resample).</p>
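One way to get the bin start without any timedelta arithmetic is to round-trip the label through a `Period` (a sketch, assuming month-end labels as in the example; for other frequencies, substitute the matching period alias):

```python
import pandas as pd

# A month-end resample label; its bin start is recoverable via to_period,
# so no manual '1m' -> days conversion is needed.
date = pd.Timestamp('2021-03-31')
start = date.to_period('M').start_time
print(start)  # 2021-03-01 00:00:00
```

Alternatively, passing `label='left'` to `resample` makes pandas hand back the interval start directly.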
<python><dataframe><resample>
2024-11-20 09:57:15
0
1,136
user6703592
79,206,663
8,040,369
Adding values from 2 cells in the previous row in to current row in dataframe
<p>I have a dataframe like below</p> <pre><code> Name Value ==================== A 2400 B -400 C 400 D 600 </code></pre> <p>And I need the df to be in the below format</p> <pre><code> Name Lower_Value Upper_Value ====================================== A 0 2400 B 2400 -400 C 2000 400 D 2400 0 </code></pre> <p>So basically, the actual values should be the Upper_Values, and each Lower_Value should be the sum of the Lower_Value and Upper_Value from the previous row</p> <p>So far, I have tried something like,</p> <pre><code>df['Upper_Value']=df['Value'] df['Lower_Value'] = df['Upper_Value'].shift(1).fillna(0) df['Lower_Value'] = df['Lower_Value'] + df['Upper_Value'] </code></pre> <p>Any help or suggestion is much appreciated.</p> <p>Thanks,</p>
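Given the stated rule (each Lower_Value = previous row's Lower_Value + Upper_Value, which is a running total of all earlier values), a vectorised sketch; note the D row's Upper_Value of 0 in the sample output looks like a typo under that rule:

```python
import pandas as pd

df = pd.DataFrame({'Name': list('ABCD'), 'Value': [2400, -400, 400, 600]})
df['Upper_Value'] = df['Value']
# previous Lower + previous Upper == cumulative sum of all earlier values
df['Lower_Value'] = df['Value'].cumsum().shift(1, fill_value=0)
print(df[['Name', 'Lower_Value', 'Upper_Value']])
```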
<python><python-3.x><pandas><dataframe>
2024-11-20 09:55:42
2
787
SM079
79,206,398
3,252,535
How to get the result of the objective function while using a slack variable?
<p>I have a simple Python code using Gurobi. I have added a slack variable to force model feasibility. I need to know how to get the real minimum value in my objective function.</p> <pre><code>import gurobipy as gp from gurobipy import GRB # Create the model model = gp.Model(&quot;quadratic_optimization_with_slack&quot;) # Variables x = model.addVar(name=&quot;x&quot;, lb=0)  # x &gt;= 0 y = model.addVar(name=&quot;y&quot;, lb=0)  # y &gt;= 0 slack = model.addVar(name=&quot;slack&quot;, lb=0)  # Slack variable for relaxation. If slack = 0 --&gt; Model is infeasible # Objective: Minimize 2x^2 + y model.setObjective(2 * x**2 + y, GRB.MINIMIZE) # Constraints model.addConstr(x - 5 + slack == 0, name=&quot;constraint_1&quot;)  # (slack allows relaxation) &lt;-- Condition to be relaxed model.addConstr(x + y == 4, name=&quot;constraint_2&quot;) # Add slack penalty in the objective to ensure the slack value is minimum. # The problem is that the result is not anymore the model.ObjVal, but the penalty*slack penalty_weight = 0.00010  # Penalty for slack usage model.setObjective(model.getObjective() + (penalty_weight * slack)) # Optimize the model model.optimize() </code></pre> <p>According to the values:</p> <pre><code>x = 4 y = 0 slack = 1 model.ObjVal = 0.0001 # (penalty_weight * slack) </code></pre> <p>Obviously, 0.0001 is not the minimum value that my objective function <em>2x^2 + y</em> can get. With this slack, it should be <strong>2 * 4^2 + 0 = 32</strong>.</p> <p>How can I get the real minimum value of my objective function?</p>
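Since the penalised objective differs from the true one only by <code>penalty_weight * slack</code>, the real minimum can be recovered after <code>optimize()</code> either by subtracting that term or by re-evaluating <code>2x^2 + y</code> at the solution (in Gurobi terms, reading <code>x.X</code>, <code>y.X</code>, <code>slack.X</code>). A plain-arithmetic sketch using the numbers reported in the question:

```python
# Solution values as read back from the solved model (Gurobi exposes them
# via the .X attribute on each variable); figures below are from the question.
penalty_weight = 0.0001
x_val, y_val, slack_val = 4.0, 0.0, 1.0

real_obj = 2 * x_val**2 + y_val   # re-evaluate the true objective
print(real_obj)  # 32.0
```

Equivalently, keeping a handle such as `true_obj = 2 * x**2 + y` before adding the penalty and calling `true_obj.getValue()` after optimisation avoids the arithmetic entirely.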
<python><gurobi>
2024-11-20 08:40:45
2
899
ironzionlion
79,206,384
1,956,558
Beautiful soup duplicating link text
<p>I have a function that is meant to extract html to render it in pdf in another function</p> <pre><code>def setLinks(self, value): if isinstance(value, str) and ('&lt;' in value and '&gt;' in value): soup = BeautifulSoup(value, 'html.parser') paragraphs = [] links = [] # Process text and collect links for element in soup.descendants: if isinstance(element, str) and element.strip(): paragraphs.append(element.strip()) elif element.name == 'a' and element.get('href'): link_text = element.get_text().strip() href = element.get('href') links.append((link_text, href)) if link_text not in paragraphs: paragraphs.append(link_text) else: if isinstance(value, str): paragraphs = value.split('\n') else: paragraphs = [str(value)] links = [] return links, paragraphs </code></pre> <p>My input is :</p> <pre><code>Il faut noter une une similitude de mise en scène avec la&lt;a href=&quot;http://www.cappiello.fr/illustrations/&quot;&gt; couverture du &quot;Cri de Paris&quot;, n° 201&lt;/a&gt;, parue le 2 décembre 1900. Noter l'attitude des deux femmes, l'envolée du noeud de la robe de droite. Le visage de la femme en noir du Cri de Paris est repris pour la femme de gauche. On retrouve la présence d'un meuble sur la gauche pour l'équilibre de la composition. </code></pre> <p>When I debug, I get :</p> <pre><code>paragraphs = {list: 4} ['Il faut noter une une similitude de mise en scène avec la', 'couverture du &quot;Cri de Paris&quot;, n° 201', 'couverture du &quot;Cri de Paris&quot;, n° 201', &quot;, parue le 2 décembre 1900. Noter l'attitude des deux femmes, l'envolée du noeud de la robe de droite. Le visage 0 = {str} 'Il faut noter une une similitude de mise en scène avec la' 1 = {str} 'couverture du &quot;Cri de Paris&quot;, n° 201' 2 = {str} 'couverture du &quot;Cri de Paris&quot;, n° 201' 3 = {str} &quot;, parue le 2 décembre 1900. Noter l'attitude des deux femmes, l'envolée du noeud de la robe de droite. 
Le visage de la femme en noir du Cri de Paris est repris pour la femme de gauche. On retrouve la présence d'un meuble sur la gauche pour l'équilibre de </code></pre> <p>Why is <code>soup.descendants</code> looping first on the link:</p> <pre><code>&lt;a href=&quot;http://www.cappiello.fr/illustrations/&quot;&gt; couverture du &quot;Cri de Paris&quot;, n° 201&lt;/a&gt; </code></pre> <p>and then on the text <code>couverture du &quot;Cri de Paris&quot;, n° 201</code>?</p>
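That order is expected: `.descendants` is a pre-order traversal, so the `<a>` Tag is yielded before the NavigableString inside it. The posted loop therefore appends the link text twice: once in the `elif` branch (the Tag, before the guard can trigger) and once in the string branch. A minimal sketch of the traversal plus one possible fix (skip strings whose parent is an `<a>`), not necessarily the only one:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('la <a href="http://x">couverture</a>, parue', 'html.parser')
order = [type(el).__name__ for el in soup.descendants]
print(order)  # ['NavigableString', 'Tag', 'NavigableString', 'NavigableString']

# One fix: in the string branch, skip text that lives inside an <a>,
# since the Tag branch has already captured it.
paragraphs = []
for el in soup.descendants:
    if isinstance(el, str) and el.strip():
        if el.parent.name != 'a':
            paragraphs.append(el.strip())
    elif el.name == 'a':
        paragraphs.append(el.get_text().strip())
print(paragraphs)  # ['la', 'couverture', ', parue']
```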
<python><beautifulsoup>
2024-11-20 08:37:54
2
19,871
Juliatzin
79,206,293
6,930,340
Forcing numpy arrays generated by Python hypothesis to contain all allowed elements
<p>I generate test data using Python's <code>hypothesis</code> library.</p> <p>For example, I am generating a numpy array that may contain <code>0</code> and <code>1</code> like this:</p> <pre><code>arr = hypothesis.extra.numpy.arrays(dtype=np.int8, shape=10, elements=st.integers(0, 1)) </code></pre> <p>How can I make sure, that the generated <code>arr</code> always contains all allowed elements? That is, <code>arr</code> needs to contain at least one <code>0</code> and one <code>1</code>.</p> <p>I also need to ensure that I return an <code>np.array</code> of <code>dtype=np.int8</code> with a specific shape.</p>
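One common route, sketched below, is to `.filter` the array strategy so any draw missing either value is rejected; the dtype and shape are untouched because filtering only discards examples. For larger value sets, rejection gets expensive and a constructive strategy is preferable.

```python
import numpy as np
import hypothesis.strategies as st
from hypothesis.extra.numpy import arrays

# Reject any drawn array that lacks a 0 or lacks a 1
arr_strategy = arrays(
    dtype=np.int8, shape=10, elements=st.integers(0, 1)
).filter(lambda a: (a == 0).any() and (a == 1).any())

example = arr_strategy.example()  # illustration only; use @given in real tests
print(example.dtype, example.shape)
```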
<python><python-hypothesis>
2024-11-20 07:59:54
1
5,167
Andi
79,205,922
1,084,174
ModuleNotFoundError: No module named 'trl'
<p>I have installed trl using pip.</p> <pre><code>!pip show trl </code></pre> <p>and it's showing the package info properly,</p> <pre><code>!pip show trl Name: trl Version: 0.7.10 Summary: Train transformer language models with reinforcement learning. Home-page: https://github.com/huggingface/trl Author: Leandro von Werra Author-email: leandro.vonwerra@gmail.com License: Apache 2.0 Location: /usr/local/lib/python3.10/dist-packages Requires: accelerate, datasets, numpy, torch, transformers, tyro Required-by: </code></pre> <p>However, when I import it in the notebook it's showing an error:</p> <pre><code>import trl --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) Cell In[9], line 1 ----&gt; 1 import trl ModuleNotFoundError: No module named 'trl' </code></pre> <p><strong>Why this discrepancy? How do I resolve it?</strong></p>
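The usual cause is that the notebook kernel runs a different interpreter from the one `!pip` installed into. A quick check from inside the notebook, with the unambiguous install spelled out as a comment:

```python
import sys

# Which interpreter is this kernel actually running?
print(sys.executable)

# Installing with that exact interpreter removes the ambiguity; in a
# notebook cell this would be:
#   !{sys.executable} -m pip install trl
```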
<python><jupyter-notebook><pip><package>
2024-11-20 05:34:24
1
40,671
Sazzad Hissain Khan
79,205,913
8,040,369
Spliting a JSON list obj into multiple based on values in python
<p>I have JSON object something like below,</p> <pre><code>Data = [ { 'Name': 'A1', 'Value': 10, 'Type': 'AAA' }, { 'Name': 'A1', 'Value': 20, 'Type': 'AAA' }, { 'Name': 'B1', 'Value': 10, 'Type': 'AAA' }, { 'Name': 'C1', 'Value': 10, 'Type': 'BBB' }, { 'Name': 'D1', 'Value': 10, 'Type': 'BBB' } ] </code></pre> <p>And i would like to split the object into a list based on &quot;Type&quot; and then based on the &quot;Name&quot; also to something like below,</p> <pre><code>Data = { 'AAA': { 'A1': [ { 'Name': 'A1', 'Value': 10, 'Type': 'AAA' }, { 'Name': 'A1', 'Value': 20, 'Type': 'AAA' }, ], 'B1': [ { 'Name': 'B1', 'Value': 10, 'Type': 'AAA' }, ] }, 'BBB': { 'C1': [ { 'Name': 'C1', 'Value': 10, 'Type': 'BBB' } ], 'D1': [ { 'Name': 'D1', 'Value': 10, 'Type': 'BBB' }, ] } } </code></pre> <p>To achieve this, so far i have tried looping over the whole data and then splitting them into separate objects based on the &quot;Type&quot; and then create a unique list of &quot;Name&quot;, then iterating over the newly created objects to split them based on the Name.</p> <p>I was doing something like this,</p> <pre><code>tTmp_List_1 = [] tTmp_List_2 = [] tTmp_Name_List_1 = [] tTmp_Name_List_2 = [] for tValue in Data: if (tValue['Type'] == 'AAA'): tTmp_List_1.append(tValue) tTmp_Name_List_1.append(tValue['Name']) if (tValue['Type'] == 'BBB'): tTmp_List_2.append(tValue) tTmp_Name_List_2.append(tValue['Name']) tTmp_Name_List_1 = list(set(tTmp_Name_List_1)) tTmp_Name_List_2 = list(set(tTmp_Name_List_2)) </code></pre> <p>From the above &quot;tTmp_Name_List_1&quot; and &quot;tTmp_Name_List_2&quot;, i am about to iterate over the list and then match the matching names with the initial Data object to come up with separated list objects based on the names and then set it back with something like this</p> <pre><code>tTmp_Dict = {} for tTmp_Name in tTmp_Name_List_1 : tTmp = [] if (Data['Name'] == tTmp_Name): tTmp.append(Data) tTmp_Dict.append(tTmp) </code></pre> <p>Can someone kindly suggest 
a better way of doing this.</p> <p>Any help is much appreciated.</p> <p>Thanks,</p>
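The per-Type temporary lists and the set-deduplication step can be replaced by a single pass over the data with a nested `defaultdict`, which also generalises to any number of Types. A sketch:

```python
from collections import defaultdict

data = [
    {'Name': 'A1', 'Value': 10, 'Type': 'AAA'},
    {'Name': 'A1', 'Value': 20, 'Type': 'AAA'},
    {'Name': 'B1', 'Value': 10, 'Type': 'AAA'},
    {'Name': 'C1', 'Value': 10, 'Type': 'BBB'},
    {'Name': 'D1', 'Value': 10, 'Type': 'BBB'},
]

# Group by Type, then by Name, in one pass
grouped = defaultdict(lambda: defaultdict(list))
for item in data:
    grouped[item['Type']][item['Name']].append(item)

# Convert to plain dicts for the final shape shown above
result = {t: dict(by_name) for t, by_name in grouped.items()}
print(sorted(result))            # ['AAA', 'BBB']
print(len(result['AAA']['A1']))  # 2
```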
<python><json><python-3.x>
2024-11-20 05:28:57
3
787
SM079
79,205,712
9,632,470
Trouble Sending Text with smtplib
<p>I wrote the following code to send a text message from my python script. The first time I ran it I successfully received a text message to my phone, but it was blank (no message). I re-ran the code 3 times and now get no message; however, I noticed the messages are in the outbox of the email (though the recipient phone number is in BCC instead of recipient). I also attempted sending the message straight from email instead of from the python script and no message arrived.</p> <p>Can you please offer me ideas to try to debug why the code malfunctioned once, and then ceased to function thereafter?</p> <pre><code> def send_message(phone_number, carrier, message): CARRIERS = { &quot;att&quot;: &quot;@mms.att.net&quot;, &quot;tmobile&quot;: &quot;@tmomail.net&quot;, &quot;verizon&quot;: &quot;@vtext.com&quot;, &quot;sprint&quot;: &quot;@messaging.sprintpcs.com&quot; } EMAIL = &quot;email@gmail.com&quot; PASSWORD = &quot;paaa ssss word word&quot; recipient = phone_number + CARRIERS[carrier] auth = (EMAIL, PASSWORD) server = smtplib.SMTP(&quot;smtp.gmail.com&quot;, 587) server.starttls() server.login(auth[0], auth[1]) server.sendmail(auth[0], recipient, message) </code></pre>
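One plausible culprit (hedged, since gateway behaviour is opaque): `sendmail` is being given a bare string with no headers, and carrier email-to-SMS gateways often blank or drop messages lacking a proper MIME structure; the missing `server.quit()` also leaves the session unterminated. A sketch using the stdlib `EmailMessage`:

```python
from email.message import EmailMessage

def build_sms_email(sender, recipient, body):
    # A properly formed message with From/To/Subject headers; gateways
    # are far less likely to blank or discard this than a raw string.
    msg = EmailMessage()
    msg['From'] = sender
    msg['To'] = recipient
    msg['Subject'] = ''
    msg.set_content(body)
    return msg

msg = build_sms_email('email@gmail.com', '5551234567@vtext.com', 'hello')
# then: server.send_message(msg); server.quit()
print(msg['To'])  # 5551234567@vtext.com
```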
<python><smtplib>
2024-11-20 03:35:54
0
441
Prince M
79,205,476
15,587,184
Python request taking too long to get PDF from website
<p>I'm trying to create a single, lightweight Python script to open a website hosting a guaranteed PDF file, download it, and extract its text.</p> <p>I’ve reviewed many posts here and across the internet and settled on a combination of the requests and PyPDF2 libraries. While PyPDF2 efficiently extracts text once the PDF is in memory, the process of retrieving the PDF data using requests is quite slow. Below is my code and the time it took to fetch the PDF file (before text extraction).</p> <p>This is my original code:</p> <pre><code>import urllib.request from urllib.parse import urlparse import time url = &quot;https://www.ohchr.org/sites/default/files/UDHR/Documents/UDHR_Translations/eng.pdf&quot; headers = { &quot;User-Agent&quot;: &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36&quot;, &quot;Accept&quot;: &quot;application/pdf&quot;, # Indicating we want a PDF file } # Extract the base domain from the URL to set as the Referer header parsed_url = urlparse(url) referer = f&quot;{parsed_url.scheme}://{parsed_url.netloc}&quot; # Extract base domain (e.g., &quot;https://example.com&quot;) # Update the headers with dynamic Referer headers[&quot;Referer&quot;] = referer start_time=time.time() # Step 1: Fetch PDF content directly from the URL with headers req = urllib.request.Request(url, headers=headers) with urllib.request.urlopen(req) as response: pdf_data = response.read() print(time.time() - start_time) </code></pre> <blockquote> <p>print(time.time() - start_time) 65.53884482383728</p> </blockquote> <p>It took more than a minute to get the data from the page, when opening this URL in my browser is as fast as lightning.</p> <p>And another version using urllib3 adapters and retry logic:</p> <pre><code>import requests import time from requests.adapters import HTTPAdapter from urllib3.util.retry import Retry url = 
&quot;https://www.ohchr.org/sites/default/files/UDHR/Documents/UDHR_Translations/eng.pdf&quot; headers = { &quot;User-Agent&quot;: &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36&quot;, &quot;Accept&quot;: &quot;application/pdf&quot;, &quot;Cache-Control&quot;: &quot;no-cache&quot;, &quot;Pragma&quot;: &quot;no-cache&quot;, } start_time = time.time() # Configure retries for requests session = requests.Session() retries = Retry(total=3, backoff_factor=0.3, status_forcelist=[500, 502, 503, 504]) adapter = HTTPAdapter(max_retries=retries) session.mount(&quot;https://&quot;, adapter) response = session.get(url, headers=headers, timeout=5) if response.status_code == 200: pdf_data = response.content print(f&quot;Time taken: {time.time() - start_time:.2f} seconds&quot;) else: print(f&quot;Failed to fetch the PDF. Status code: {response.status_code}&quot;) </code></pre> <blockquote> <p>Time taken: 105.47 seconds</p> </blockquote> <p>Both methods work for downloading the PDF, but the process is still too slow for production. For example, using a URL from the United Nations, my browser loads the PDF in 1–2 seconds, while the script takes much longer. My internet connection is fast and stable.</p> <p>What alternative approaches, libraries, or programming strategies can I use to speed up this process (making it as fast as a browser)? I’ve read about tweaking user agents and headers, but these don’t seem to help on my end.</p> <h2>Update</h2> <p>I just run the code on Colab and it is so fast:</p> <p><a href="https://i.sstatic.net/51BRRXkH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/51BRRXkH.png" alt="enter image description here" /></a></p> <p>What could possibly be wrong or missing in my configuration? 
I'm on windows 10, with great internet connections.</p> <p><strong>Output Generated by Chaitanya Rahalkar's Code response:</strong></p> <pre><code>&gt;&gt;&gt; import requests &gt;&gt;&gt; from urllib3.util.retry import Retry &gt;&gt;&gt; from requests.adapters import HTTPAdapter &gt;&gt;&gt; import io &gt;&gt;&gt; import time &gt;&gt;&gt; def download_pdf(url): ... # Configure session with optimized settings ... session = requests.Session() ... retries = Retry(total=3, backoff_factor=0.1, status_forcelist=[500, 502, 503, 504]) ... adapter = HTTPAdapter(max_retries=retries, pool_connections=10, pool_maxsize=10) ... session.mount('https://', adapter) ... headers = { ... 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36', ... 'Accept': 'application/pdf', ... } ... # Use streaming to download in chunks ... response = session.get(url, headers=headers, stream=True, timeout=10) ... response.raise_for_status() ... # Stream content into memory ... pdf_buffer = io.BytesIO() ... for chunk in response.iter_content(chunk_size=8192): ... if chunk: ... pdf_buffer.write(chunk) ... return pdf_buffer.getvalue() ... &gt;&gt;&gt; url = &quot;https://www.ohchr.org/sites/default/files/UDHR/Documents/UDHR_Translations/eng.pdf&quot; &gt;&gt;&gt; start_time = time.time() &gt;&gt;&gt; pdf_data = download_pdf(url) &gt;&gt;&gt; print(f&quot;Download completed in {time.time() - start_time:.2f} seconds&quot;) Download completed in 68.41 seconds &gt;&gt;&gt; print(f&quot;PDF size: {len(pdf_data) / 1024:.1f} KB&quot;) PDF size: 190.6 KB &gt;&gt;&gt; </code></pre>
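Since the identical code is fast on Colab, the slowdown is environmental rather than in the request itself. On Windows, one frequently reported culprit is the per-request system proxy auto-detection that both `urllib` and `requests` perform; disabling it is a cheap experiment (a sketch, not a guaranteed fix; IPv6-vs-IPv4 resolution delay is the other usual suspect):

```python
import requests

session = requests.Session()
session.trust_env = False  # skip environment/registry proxy lookup per request
# resp = session.get(
#     "https://www.ohchr.org/sites/default/files/UDHR/Documents/UDHR_Translations/eng.pdf",
#     timeout=30,
# )
print(session.trust_env)  # False
```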
<python><pdf><python-requests><request>
2024-11-20 00:35:34
2
809
R_Student
79,205,378
1,613,983
How do I plot a line over a bar chart with pandas/date index?
<p>I've got a dataframe I'd like to represent as a stacked bar chart, and a series that I'd like to represent as a line (but with a different scale). I'd like to draw these as a line chart (for the series) drawn on top of a bar chart (for the dataframe) with a shared x-axis (being the time indices).</p> <p>Here's a toy example I tried:</p> <pre><code>import pandas as pd import numpy as np idx = pd.date_range(start='2000-01-01', periods=20) df = pd.DataFrame(np.random.rand(20,3), index=idx) ax = df.plot.bar(stacked=True) series = pd.Series(np.random.rand(20), index=idx) series.plot(ax=ax, secondary_y=True) </code></pre> <p>However, when I run this I get the following:</p> <p><a href="https://i.sstatic.net/3GN2dRql.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3GN2dRql.png" alt="enter image description here" /></a></p> <p>It's as if the line chart has overwritten the bar chart somehow. Interestingly, when I remove the index from the dataframe and series (making it just default integers), it seems to work:</p> <p><a href="https://i.sstatic.net/QHHGUonZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QHHGUonZ.png" alt="enter image description here" /></a></p> <p>What am I doing wrong? I'm using pandas 2.2.2</p>
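The two plots land on incompatible x scales: `plot.bar` treats the index as categories and puts bars at integer positions 0..19, while the datetime-indexed line is drawn at matplotlib date coordinates (large numbers, nowhere near 0..19), so the bars end up squeezed invisibly at the left edge. Plotting the line against the same positional index is one fix, sketched here:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for the sketch
import numpy as np
import pandas as pd

idx = pd.date_range(start='2000-01-01', periods=20)
df = pd.DataFrame(np.random.rand(20, 3), index=idx)
series = pd.Series(np.random.rand(20), index=idx)

ax = df.plot.bar(stacked=True)
# drop the datetime index so the line uses positions 0..19 like the bars do
series.reset_index(drop=True).plot(ax=ax, secondary_y=True)
print(len(ax.patches))  # 60 bars: 20 positions x 3 stacked columns
```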
<python><pandas>
2024-11-19 23:40:04
1
23,470
quant
79,205,151
4,858,640
How to install python packages generate by CMake using setup tools?
<p>I am working with a C++ project in which the CMake build process generates a directory structure of Python packages (i.e. Python source files, not shared object) in the chosen build directory under <code>python_packages/my_package</code>. I cannot change this approach. I have written a <code>setup.py</code> script that runs CMake and tries to add these to the generated package but I am stuck. Here's where I'm at:</p> <pre class="lang-py prettyprint-override"><code>import os import subprocess from pathlib import Path from setuptools import ( Extension, find_packages, setup, ) from setuptools.command.build_ext import build_ext class CMakeExtension(Extension): def __init__(self, name): super().__init__(name, sources=[]) class cmake_build_ext(build_ext): def build_extension(self, ext): cmake_source_dir = Path.cwd() cmake_build_dir = Path(self.build_temp) if not cmake_build_dir.exists(): cmake_build_dir.mkdir(parents=True) cmake_preset = &quot;debug&quot; if self.debug else &quot;release&quot; cmake_env = os.environ.copy() cmake_cmd = [ &quot;/usr/bin/cmake&quot;, &quot;-S&quot;, cmake_source_dir, &quot;-B&quot;, cmake_build_dir, &quot;--preset&quot;, cmake_preset ] subprocess.check_call(cmake_cmd, cwd=cmake_build_dir) cmake_build_cmd = [ &quot;/usr/bin/cmake&quot;, &quot;--build&quot;, cmake_build_dir, ] subprocess.check_call(cmake_build_cmd, cwd=cmake_build_dir) packages_dir = cmake_build_dir / &quot;python_packages&quot; packages = find_packages(packages_dir) print(&quot;Packages:&quot;, packages) self.distribution.packages = packages setup( name=&quot;my_project&quot;, version=&quot;1.0.0&quot;, author=&quot;Me Myself&quot;, author_email=&quot;me.myself@mymail.com&quot;, description = &quot;CMake experiment&quot;, python_requires=&quot;&gt;=3.12&quot;, ext_modules=[CMakeExtension(&quot;my_project&quot;)], cmdclass={&quot;build_ext&quot;: cmake_build_ext}, zip_safe=False, ) </code></pre> <p>When calling <code>pip install .</code> this runs up until the 
<code>running build_py</code> step where I get the following error:</p> <pre><code>error: package directory 'my_package' does not exist </code></pre> <p>At the same time, <code>Packages: [&quot;my_package&quot;, &quot;my_package.sub_package&quot;]</code> is printed during execution. There is likely something I should be doing here with <code>package_dir</code> but I am lost.</p>
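The `error: package directory 'my_package' does not exist` arises because `build_py` still resolves packages against the source tree: besides setting `self.distribution.packages`, the generated tree needs a `package_dir` mapping, and the already-finalized `build_py` command has to be refreshed. An untested sketch (names mirror the question's script):

```python
# inside cmake_build_ext.build_extension, after the CMake build step
packages_dir = cmake_build_dir / "python_packages"
self.distribution.packages = find_packages(str(packages_dir))
self.distribution.package_dir = {"": str(packages_dir)}

# build_py may have been finalized before this ran, so push the new
# package list and root into it before it copies files
build_py = self.get_finalized_command("build_py")
build_py.packages = self.distribution.packages
build_py.package_dir = {"": str(packages_dir)}
```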
<python><python-3.x><cmake><setuptools>
2024-11-19 21:47:27
0
3,242
Peter
79,205,112
141,650
Fixture `record_property` not found
<p>I'm trying to use the <code>record_property</code> fixture (<a href="https://docs.pytest.org/en/stable/reference/reference.html#record-property" rel="nofollow noreferrer">https://docs.pytest.org/en/stable/reference/reference.html#record-property</a>). My build file looks like this:</p> <pre><code>load(&quot;@pip//:requirements.bzl&quot;, &quot;requirement&quot;) py_test( name = &quot;test_foo&quot;, srcs = [&quot;test_foo.py&quot;], deps = [ requirement(&quot;pytest&quot;), ], ) </code></pre> <p>And <code>test_foo.py</code> looks like this:</p> <pre><code>import pytest import unittest from unittest import TestCase class TestFoo(TestCase): def test_escape(self, record_property): pass if __name__ == &quot;__main__&quot;: unittest.main() </code></pre> <p>When I run the test (<code>bazel test //path/to:test_foo</code>), I get an error about the <code>record_property</code> fixture being undefined:</p> <pre><code>TypeError: test_escape() missing 1 required positional argument: 'record_property' </code></pre> <p>I suspect this is due to some nuance in my build rule, bazel invocation, or Python set up. I've researched this a bit, and it does sound like other folks have encountered &quot;missing fixture&quot; errors, though not this flavor in particular. Any pointers are appreciated :)</p>
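Two things combine here: pytest fixtures cannot be injected as arguments into `unittest.TestCase` test methods (a documented limitation of pytest's unittest integration), and running the file through `unittest.main()` under a plain `py_test` never engages pytest's fixture machinery at all, hence the bare `TypeError`. Dropping the `TestCase` base class and letting pytest run the test makes the fixture resolvable. A self-contained sketch that writes such a test to a temp dir and executes it with `pytest.main`:

```python
import os
import tempfile
import textwrap

import pytest

SRC = textwrap.dedent("""
    class TestFoo:  # plain class, not unittest.TestCase: fixtures work here
        def test_escape(self, record_property):
            record_property("example_key", 1)
""")

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "test_demo.py")
    with open(path, "w") as fh:
        fh.write(SRC)
    exit_code = pytest.main([path, "-q"])
print(exit_code)  # 0 (all tests passed)
```

Under Bazel this also implies the test's entry point should invoke `pytest.main([__file__])` rather than `unittest.main()`, so that pytest is the collector.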
<python><pytest>
2024-11-19 21:28:16
1
5,734
Stephen Gross
79,204,975
4,858,640
Python version discrepancies when running setup.py in pyenv
<p>I'm trying to write a <code>setup.py</code> script for a Python project. I want to install this project into a pyenv virtualenv. So using pyenv 2.4.18 I have tried the following:</p> <pre><code>pyenv virtualenv 3.12.7 my_env pyenv local my_env pip install --upgrade pip pip install numpy python setup.py install </code></pre> <p><code>setup.py</code> internally calls CMake, which builds a Python extension that needs the NumPy development headers. I point CMake to Python using <code>f&quot;-DPython3_EXECUTABLE={sys.executable}&quot;</code>, which in this case evaluates to <code>~/.pyenv/versions/my_env/bin/python3</code>, and everything works smoothly.</p> <p>BUT if I try to install using <code>pip install .</code>, <code>sys.executable</code> instead evaluates to <code>~/.pyenv/versions/3.12.7/envs/my_env/bin/python</code> and CMake fails with:</p> <pre><code>Could NOT find Python3 (missing: Python3_NumPy_INCLUDE_DIRS NumPy) (found version &quot;3.12.7&quot;) </code></pre> <p>That is absolutely crazy to me since BOTH of those python versions are symbolic links to the same <code>/Users/timonicolai/.pyenv/versions/3.12.7/bin/python</code>. Why can CMake resolve the NumPy dependency in one case but not the other, and how do I fix installation via pip?</p>
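The second path points at the temporary virtualenv that pip's build isolation creates: by default `pip install .` builds in a throwaway environment whose interpreter is not your shell's, and which does not contain the numpy you installed by hand. Two common mitigations (hedged; the exact `pyproject.toml` fields depend on your build backend):

```shell
# Option 1: build in the active environment, reusing its numpy
pip install --no-build-isolation .

# Option 2: declare numpy as a build requirement so the isolated
# environment installs its own copy (in pyproject.toml):
#   [build-system]
#   requires = ["setuptools", "numpy"]
```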
<python><python-3.x><setuptools><pyenv>
2024-11-19 20:32:20
0
3,242
Peter
79,204,897
9,006,687
Install gymnasium with atari games using conda
<p>Sorry if this is a silly question, but I can't figure this one out.</p> <p>I am trying to install gymnasium with Atari games using conda. Here is my setup.py:</p> <pre><code>from setuptools import find_packages from setuptools import setup setup( name=&quot;benchmarks&quot;, version=&quot;0.0.0&quot;, ... packages=find_packages(), scripts=[&quot;scripts/run_training&quot;], include_package_data=True, install_requires=[ # &quot;gymnasium==1.0.0&quot;, # or &quot;gymnasium-atari==1.0.0&quot;, &quot;pytorch==2.5.1&quot;, &quot;pytorchrl==0.6.0&quot;, ], python_requires=&quot;~=3.11&quot;, ... ) </code></pre> <p>My code:</p> <pre><code>import gymnasium as gym import ale_py if __name__ == '__main__': gym.register_envs(ale_py) env = gym.make(&quot;ALE/Pong-v5&quot;) </code></pre> <p>When I import <code>&quot;gymnasium==1.0.0&quot;</code>, I get the error:</p> <pre><code>ModuleNotFoundError: No module named 'ale_py' </code></pre> <p>When I import <code>&quot;gymnasium-atari==1.0.0&quot;</code>, I get the error:</p> <pre><code>FileNotFoundError: [Errno 2] No such file or directory: '[...]/lib/python3.11/site-packages/ale_py/roms/pong.bin' </code></pre> <p>I have also tried to import <code>&quot;ale_py==0.10.1&quot;</code> but it seems to be a dependency of both previously mentioned packages, and did not change anything.</p> <p>What am I missing?</p>
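The second error means the packages imported fine but the Atari ROM files are missing from `ale_py/roms/`. One commonly suggested route is AutoROM, which downloads them into that directory (hedged: the extras and ROM bundling have shifted between ale-py releases, and recent ale-py versions ship the ROMs, so upgrading `ale-py` alone may also resolve it):

```shell
pip install "gymnasium[atari]" autorom
AutoROM --accept-license   # fetches pong.bin and friends into ale_py/roms/
```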
<python><conda><openai-gym>
2024-11-19 20:03:27
0
461
Theophile Champion
79,204,757
498,584
Cuncurrency, mutex, semaphores on django and a cron job
<p>I want to create a critical section in Django to provide controlled access. One of these sections is in a Django Channels consumer handling websocket connections, and the other is in a cron process implemented with crontab, not Celery.</p> <p>I would like both critical sections to share a lock. How would I implement this in Django? I know Python has Value, Semaphore and Mutex primitives, but I cannot pass a Value to the cron job. So can I use some sort of file lock, or an interprocess communication lock, if that exists?</p> <p>Any thoughts on how to achieve this in Django?</p>
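A file lock fits this case: it lives in the OS, so a Channels consumer and a crontab script can share it without any common Python state. A Unix-only sketch using `fcntl.flock` (the lock-file path is an assumption and must be identical in both processes; inside an async consumer, wrap the blocking acquire in `sync_to_async` or use `fcntl.LOCK_NB`):

```python
import fcntl
import os

LOCK_PATH = "/tmp/myapp-critical-section.lock"  # hypothetical shared path

class FileLock:
    """Inter-process exclusive lock backed by a file descriptor."""

    def __init__(self, path=LOCK_PATH):
        self.fd = os.open(path, os.O_CREAT | os.O_RDWR)

    def __enter__(self):
        fcntl.flock(self.fd, fcntl.LOCK_EX)  # blocks until the lock is free
        return self

    def __exit__(self, *exc):
        fcntl.flock(self.fd, fcntl.LOCK_UN)
        os.close(self.fd)

with FileLock():
    pass  # critical section; the cron job uses the same class and path
print("released")
```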
<python><django><mutex><semaphore><critical-section>
2024-11-19 19:09:26
0
1,723
Evren Bingøl
79,204,671
3,323,455
pymssql can connect to 'master' but can't connect to another database after
<p>I do</p> <p><code>pymssql.connect(server=self.host, user=self.username, password=self.password, port=self.port, database=database_name)</code></p> <p>then</p> <p><code>with conn.cursor() as cursor:</code></p> <p><code>cursor.execute(sql, args)</code></p> <pre><code>cursor.fetchall() </code></pre> <p>then</p> <pre><code>self.interface.execute(&quot;SELECT DB_NAME()&quot;) </code></pre> <p>And I see 'master'</p> <p>Then do <code>connection.close()</code></p> <p>And again the same loop but with 'products' as DB name. But when I do SELECT DB_NAME() again - I see 'master' again instead of 'products'</p> <p>Why?</p> <p>Also I've enabled tds debugs and I see next</p> <pre><code> 1 3470 pulse: size_bytes 2 3 3711 net.c:318:Connecting to 127.0.0.1 port 62117 4 5 4224 dblib.c:1377:dbcmd(0x107775f0, SELECT DB_NAME()) 6 7 4317 select CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)*8192 8 9 4446 dblib.c:1491:dbclose(0x10776fd0) 10 4447 dblib.c:236:dblib_del_connection(0x7bbd9d3cba20, 0x1076a2d0) 11 4448 query.c:3804:tds_disconnect() 12 13 14 new connect after 15 --- 16 4525 net.c:318:Connecting to 127.0.0.1 port 62117 17 18 4779 dbutil.c:76:msgno 5701: &quot;Changed database context to 'products'.&quot; 19 20 4921 dblib.c:1450:dbuse(0x1083c370, products) 21 4922 dblib.c:1377:dbcmd(0x1083c370, use [products]) 22 23 4956 dblib.c:300:db_env_chg(0x1042b860, 1, products, products) 24 4963 dbutil.c:76:msgno 5701: &quot;Changed database context to 'products'.&quot; 25 26 5039 dblib.c:1377:dbcmd(0x107775f0, SELECT DB_NAME()) 27 28 29 5058 0010 00 e7 00 01 09 04 d0 00-34 00 d1 0c 00 6d 00 61 |........ 4....m.a| 30 5059 0020 00 73 00 74 00 65 00 72-00 fd 10 00 c1 00 01 00 |.s.t.e.r ........| </code></pre> <p>So context changed but I still somehow get 'master' as context. Why?</p>
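The TDS trace appears to answer this: the final `SELECT DB_NAME()` is issued on handle `0x107775f0`, the original master connection (compare the first `dbcmd` line), while the products connection is `0x1083c370`. In other words, `self.interface` is most likely still wrapping the first connection object after the close/reconnect. An untested sketch of the pattern to verify (details depend on the wrapper class):

```python
# make sure the cursor really comes from the NEW connection object
conn = pymssql.connect(server=host, user=user, password=pw,
                       port=port, database='products')
with conn.cursor() as cur:
    cur.execute("SELECT DB_NAME()")
    print(cur.fetchone())   # expected: ('products',)
```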
<python><sql-server><pymssql>
2024-11-19 18:35:52
0
328
pulse
79,204,623
759,880
Correct way to parallelize request processing in Flask
<p>I have a Flask service that receives <code>GET</code> requests, and I want to scale the QPS on that endpoint (on a single machine/container). Should I use a python <code>ThreadPoolExecutor</code> or <code>ProcessPoolExecutor</code>, or something else? The <code>GET</code> request just retrieves small pieces of data from a cache backed by a DB. Is there anything specific to Flask that should be taken into account?</p>
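For an I/O-bound endpoint (cache/DB reads), the standard answer is not to parallelise inside Flask at all but to run the app under a production WSGI server and scale workers and threads there; threads are usually the better lever than processes for I/O-bound work, since handlers spend most of their time waiting. A hedged example with gunicorn (module and app names assumed):

```shell
# 4 worker processes x 8 threads each; tune to CPU cores and I/O wait
gunicorn --workers 4 --threads 8 --bind 0.0.0.0:8000 app:app
```

Flask's built-in development server is not intended to serve production QPS regardless of pool choice.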
<python><multithreading><flask>
2024-11-19 18:19:35
1
4,483
ToBeOrNotToBe
79,204,622
19,760,971
How to create DataFrame that is the minimal values based on 2 other DataFrame's in pandas (python)?
<p>Let's say I have <code>DataFrame</code>s <code>df1</code> and <code>df2</code>:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df1 = pd.DataFrame({'A': [0, 2, 4], 'B': [2, 17, 7], 'C': [4, 9, 11]}) &gt;&gt;&gt; df1 A B C 0 0 2 4 1 2 17 9 2 4 7 11 &gt;&gt;&gt; df2 = pd.DataFrame({'A': [9, 2, 32], 'B': [1, 3, 8], 'C': [6, 2, 41]}) &gt;&gt;&gt; df2 A B C 0 9 1 6 1 2 3 2 2 32 8 41 </code></pre> <p>What I want is the 3rd <code>DataFrame</code> that will have minimal rows (<code>min</code> is calculated based on column <code>B</code>), that is:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df3 A B C 0 9 1 6 1 2 3 2 2 4 7 11 </code></pre> <p>I really don't want to do this by iterating over <strong>all</strong> rows and comparing them one by one, is there a faster and compact way to do this?</p>
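Because both frames share index and columns, the row selection vectorises into a single boolean mask on column B; no per-row iteration is needed. A sketch:

```python
import pandas as pd

df1 = pd.DataFrame({'A': [0, 2, 4], 'B': [2, 17, 7], 'C': [4, 9, 11]})
df2 = pd.DataFrame({'A': [9, 2, 32], 'B': [1, 3, 8], 'C': [6, 2, 41]})

take_df2 = df2['B'] < df1['B']      # rows where df2 has the smaller B
df3 = df1.copy()
df3.loc[take_df2] = df2.loc[take_df2]
print(df3['B'].tolist())  # [1, 3, 7]
```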
<python><pandas><dataframe><min>
2024-11-19 18:19:19
1
867
k1r1t0
79,204,607
6,686,503
Coverage: How to gracefully stop a Django development server that runs on a Github Action Instance
<p>I am testing some API endpoints of my Django app by running Playwright separately on an Ubuntu machine on my Github Action instance. Since I also want to get the coverage of the tests, I am starting my server this way:</p> <pre class="lang-yaml prettyprint-override"><code> - name: run backend server run: poetry run coverage run ./manage.py runserver --no-reload </code></pre> <p>This results in a single process being started. Now, in order to collect the coverage data, I need to gracefully stop my django server, as force killing the process would also interrupt <code>coverage</code> and executing <code>coverage combine</code> would produce an error.</p> <p>I have tried the following commands, but they all fail to stop the server:</p> <pre class="lang-bash prettyprint-override"><code>kill -SIGINT $DJANGO_PID kill -SIGTERM $DJANGO_PI pkill -f &quot;./manage.py runserver&quot; python -c &quot;import os; import signal; print(os.kill($DJANGO_PID, signal.SIGTERM))&quot; </code></pre> <p>Is there a different approach I can use? Or maybe I am starting the server the wrong way?</p>
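Two hedged observations: `poetry run` wraps the server in a child process, so `$DJANGO_PID` may be poetry's PID rather than Django's; and coverage.py (6.4+) can be configured to write its data when the process receives SIGTERM, which removes the need for a perfectly graceful shutdown:

```
# .coveragerc
[run]
sigterm = true
```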
<python><django><github-actions><code-coverage>
2024-11-19 18:13:24
0
692
Gers
79,204,599
599,402
Python typehinting packages without creating circular imports
<p>I've been scouring the internet trying to find a solution to this, and I have yet to find an answer despite this seeming like something that should be a very common use case:</p> <p><strong>How can I safely type check methods or arguments in Python across different files or packages without creating a circular dependency?</strong></p> <p>Let me give an example of something that <em>should</em> work, and does work in most other strongly typed languages (e.g. C/C++, Java, and even Hack), but does not seem to have a sane solution in Python. Let's take the example of a very common data structure - a Graph.</p> <p>I want to organize the structure into multiple files to separate concerns. In duck-typed Python, it might look like this:</p> <pre class="lang-py prettyprint-override"><code># Node.py class Node: def __init__(self): self.edges = [] # Will be a list of Edge objects def add_edge(self, edge): self.edges.append(edge) ... </code></pre> <pre class="lang-py prettyprint-override"><code># Edge.py class Edge: def __init__(self, node, weight): self.node = node self.weight = weight </code></pre> <p>Now in the above example, neither object needs to import the other, so everything works. At runtime, another class such as <code>Graph</code> might be responsible for creating nodes and edges, and adding the edges to the nodes. No circular dependency so far.</p> <p>But suppose I want to implement typechecking to ensure no one accidentally attempts to pass an integer type to <code>add_edge</code>, or a string to &quot;Edge()&quot;. I would type it like the following:</p> <pre class="lang-py prettyprint-override"><code># Node.py from graph.Edge import Edge class Node: def __init__(self): self.edges: List[Edge] = [] # Will be a list of Edge objects def add_edge(self, edge: Edge): self.edges.append(edge) ... 
</code></pre> <pre class="lang-py prettyprint-override"><code># Edge.py from graph.Node import Node class Edge: def __init__(self, node: Node, weight: int): self.node: Node = node self.weight: int = weight </code></pre> <p>This of course presents a problem, because we now have a circular dependency. In our attempt to make the code more type-safe, we've actually introduced something that will crash at runtime.</p> <p>The most common solution I've seen posted for this is to use <code>if TYPE_CHECKING</code>. But this will cause the typechecker to miss a very common, and critical case - where someone tries to use the import for something other than type hinting. Let's take an example:</p> <pre class="lang-py prettyprint-override"><code># Node.py if TYPE_CHECKING: from graph.Edge import Edge class Node: def __init__(self): self.edges: List[Edge] = [] # Will be a list of Edge objects def add_edge(self, edge: Edge): self.edges.append(edge) </code></pre> <p>This prevents the circular import, and running <code>pyright</code> or <code>pyre</code> will show no errors, since the typing is correct. But now suppose someone adds a new method to the Node class like the following:</p> <pre class="lang-py prettyprint-override"><code># Node.py if TYPE_CHECKING: from graph.Edge import Edge class Node: def __init__(self): self.edges: List[Edge] = [] # Will be a list of Edge objects def add_edge(self, edge: Edge): self.edges.append(edge) def easy_add_edge(self, node: &quot;Node&quot;, weight: int): self.add_edge(Edge(node, weight)) </code></pre> <p>The developer then runs <code>pyright</code>, which confirms that there are no type errors in this method. They push it to production, and suddenly everything breaks. Since the import for Edge was inside <code>if TYPE_CHECKING</code> which is <code>False</code> at runtime, Python will fail to resolve <code>Edge</code> in <code>self.add_edge(Edge(node, weight))</code> and throw an exception.
But the typechecker didn't pick up on this because when it ran, <code>TYPE_CHECKING</code> was <code>True</code>!</p> <p>You might be thinking &quot;why didn't the developer write tests for that method&quot;, and you'd be right, but this is exactly the type of bug that typechecking is supposed to solve for! For most developers coming from other languages, the typechecker <em>would be the test</em> for that type of bug!</p> <p>Without knowing a ton about the inner workings of the python typehinting system, the solution would normally seem obvious - typehint using fully qualified names and never go near the <code>TYPE_CHECKING</code> variable (we could even write a lint rule to catch cases where people are doing this). But this is the part of the problem where I've hit a wall, as I can't figure out if this is something that is, or ever will be supported in Python.</p> <p>So the question: <strong>How can I safely typecheck code in python in a way that both does not introduce circular dependencies, and also ensures developers aren't trying to use undeclared types at runtime? Can fully-qualified names be used, and if so how?</strong></p> <p>Note: It is not reasonable to place everything that may need a type hinted dependency in the same file, or even the same package. Separation of concerns into multiple packages is something that is expected on any engineering team for almost all production sized applications. I know there is a way to do this if everything is in the same file, but there's no way I can convert my 200k line Django application into a flat-structured single file app.</p>
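One mechanism worth noting alongside this question: with PEP 563 postponed evaluation (`from __future__ import annotations`), annotations are stored as strings and can be resolved lazily with `typing.get_type_hints`, so forward references inside one module need no import tricks. This is only a sketch of that mechanism, not a claim that it removes the runtime-import problem across modules described above.

```python
from __future__ import annotations  # PEP 563: annotations stored as strings

import typing

class Node:
    def add_edge(self, edge: Edge) -> None:
        """Edge is not defined yet; under PEP 563 that's fine at class-creation time."""

class Edge:
    def __init__(self, node: Node, weight: int) -> None:
        self.node, self.weight = node, weight

# Annotations can still be resolved lazily, once both classes exist.
hints = typing.get_type_hints(Node.add_edge)
assert hints["edge"] is Edge
```

The runtime-usage hazard remains: `get_type_hints` only resolves names, it does not make a `TYPE_CHECKING`-guarded import available to executing code.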
<python><python-typing><pyright>
2024-11-19 18:11:25
3
8,441
Ephraim
79,204,500
4,852,094
mypy with pydantic field_validator
<p>With pydantic, is there a way for mypy to be hinted so it doesn't raise an error in this scenario, where there's a field_validator modifying the type?</p> <pre><code>class MyModel(BaseModel): x: int @field_validator(&quot;x&quot;, mode=&quot;before&quot;) @classmethod def to_int(cls, v: str) -&gt; int: return len(v) MyModel(x='test') </code></pre>
<python><pydantic>
2024-11-19 17:41:20
2
3,507
Rob
79,204,427
12,057,138
SDV generates incorrect values for hash-like fields when creating synthetic data from a CSV file
<p>I am using SDV to generate mock data by extracting real data and metadata from an existing CSV file and then saving the mock data to a new CSV file.</p> <p>Here is my code:</p> <pre><code>import pandas as pd from sdv.metadata import Metadata from sdv.single_table import GaussianCopulaSynthesizer data = pd.read_csv('file.csv', sep=';') metadata = Metadata.detect_from_dataframe( data=data, table_name='test' ) synthesizer = GaussianCopulaSynthesizer(metadata) synthesizer.fit(data) synthetic_data = synthesizer.sample(10) synthetic_data.to_csv('synthetic_file.csv', index=False, sep=';') </code></pre> <p>My issue is that SDV does not identify hash-like columns/fields correctly. For example, a field named ID in my data contains values like:</p> <pre><code>4959478426DF15EE67AZBED5B0B99EDB848597F2 AB28A95B91DE6637DE8D7728D6C945EFFC58F029 D304CE66B9204C637C8BA1B75B2952495C66321F </code></pre> <p>But in the synthetic output, SDV generates values like:</p> <pre><code>sdv-id-sVCqLP sdv-id-CjXnSq sdv-id-HuiFjs </code></pre> <p>I tried explicitly setting the field type using metadata.update_column:</p> <pre><code>metadata.update_column( table_name='test', column_name='ID', sdtype='id', ) </code></pre> <p>But the results remained the same. SDV still replaces the hash-like values with generic synthetic identifiers. I understand that I can use a custom generator to manually create hashes, but this would break the relationship logic provided by SDV.</p> <p>How can I make SDV generate synthetic data for hash-like fields while preserving the repetition logic from the original dataset?</p>
<python><data-generation><synthetic><sdv>
2024-11-19 17:12:26
0
688
PloniStacker
79,204,365
297,274
Polynomial roots - showing imaginary part
<p>Polynomial roots are determined using the following code. The roots are displayed with imaginary parts, but they are actually whole numbers. The 5's should be without any imaginary value.</p> <p>[5.+1.27882372e-07j 5.-1.27882372e-07j 2.+0.00000000e+00j]</p> <pre><code>import numpy as np poly=[1,-10,25] # (x-5)**2 p1 = np.poly1d(poly) p2 = np.poly1d([1,-2]) # (x-2) poly2 = p1*p2 # (x-5)**2 * (x-2) print(poly2.roots) print(poly2(5)) print(poly2(2)) </code></pre> <p>Solution: Use np.real_if_close(poly2.roots, tol=1e-5) to ignore the insignificant imaginary part.</p> <pre><code># remove insignificant imaginary part formatted_roots = np.real_if_close(poly2.roots, tol=1e-5) # change back to number format roots = formatted_roots.astype(float) </code></pre>
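As a sanity check of the `tol` semantics (NumPy documents `tol` in multiples of machine epsilon when it is greater than 1; values at or below 1 appear to act as an absolute bound on the imaginary part, which is how `tol=1e-5` succeeds here), the question's polynomial can be round-tripped:

```python
import numpy as np

# Reconstruct the polynomial from the question: (x-5)**2 * (x-2)
poly2 = np.poly1d([1, -10, 25]) * np.poly1d([1, -2])
roots = poly2.roots  # typically complex, with tiny spurious imaginary parts

# tol values <= 1 are used directly as a bound on the imaginary part
cleaned = np.real_if_close(roots, tol=1e-5)

assert cleaned.dtype.kind == "f"  # imaginary part dropped
assert np.allclose(np.sort(cleaned), [2.0, 5.0, 5.0])
```

The spurious imaginary parts of a double root scale roughly with the square root of machine epsilon, so an absolute tolerance of 1e-5 leaves plenty of margin.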
<python><numpy><polynomials>
2024-11-19 16:52:41
1
782
cobp
79,204,309
207,717
How to decompress raw PKZIP data without zip header in python?
<p>I want to decompress raw data from a file in an exotic format, but I know that the compression method is the same as the one used in a ZIP file (PKZIP).</p> <p>In the file the PK\03\04 signature is missing. After that the data more or less fits the PKZIP header specs:</p> <p><a href="https://docs.fileformat.com/compression/zip/" rel="nofollow noreferrer">https://docs.fileformat.com/compression/zip/</a></p> <ol> <li>2 bytes - version = 0x0014 (I don't know if it's meaningful)</li> <li>2 bytes - flags = 0</li> <li>2 bytes - compression method = 0x0008 (&quot;deflated&quot; according to ZIP docs)</li> <li>random 4 bytes (modification times)</li> <li>random 4 bytes (should be the CRC32)</li> <li>4 bytes of valid compressed size</li> <li>4 bytes of valid uncompressed size</li> <li>file name length = 0x14</li> <li>extra field length = 0</li> <li>file name - 20 random bytes</li> </ol> <p>Then the raw compressed data, and after that the End Record that looks damaged in a similar way. After adding the signature and valid file name characters and saving the buffer to a file, I was able to decompress it with 7zip. It showed an error dialog, but produced an uncompressed file. The resulting file contained the expected data.</p> <p>I know that there is always one compressed file and the compression method is fixed. The file name is not important, so I guess it should be possible to process only the compressed data bytes after the header, ignoring the End Record as well.</p> <p>Which Python package provides such functionality?</p> <p>I want to ignore the ZIP headers and pass only the compressed data buffer to some function in Python (possibly specifying the compression method and some flags), and get the uncompressed data buffer back. No CRC check, no file names.</p>
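For reference, Python's standard `zlib` module already handles raw DEFLATE ("no header") streams via a negative `wbits` value, and ZIP method 8 stores exactly that bit format. The sketch below round-trips a raw stream with no header and no checksum:

```python
import zlib

data = b"exotic container payload " * 40

# Produce a raw DEFLATE stream (no zlib header, no trailing checksum),
# which is the same bit format PKZIP compression method 8 stores.
compressor = zlib.compressobj(level=9, method=zlib.DEFLATED, wbits=-15)
raw = compressor.compress(data) + compressor.flush()

# Decompress it the same way: negative wbits means "raw deflate, no CRC check".
assert zlib.decompress(raw, wbits=-15) == data
```

So for the file described above, slicing out the compressed-size bytes after the header and passing them to `zlib.decompress(buf, wbits=-15)` should suffice, ignoring the CRC and the End Record entirely.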
<python><zip><compression><gzip><pkzip>
2024-11-19 16:37:09
1
6,717
PanJanek
79,204,288
1,100,652
Handle DNS timeout with call to blob_client.upload_blob
<p>While using the Azure storage SDK in Python, I have been unable to override what appears to be a default 90-second timeout to catch a DNS exception occurring within a call to <code>blob_client.upload_blob()</code>. I am looking for a way to override this with a shorter time interval (e.g. 5 seconds).</p> <p>The following code illustrates this issue using a fictitious account name which DNS cannot resolve. I am using a <code>timeout</code> argument in the call to <code>upload_blob</code>, and I understand from reviewing documentation this enforces a server-side threshold, not a client-side threshold. I have not been successful in getting a client-side threshold to be enforced.</p> <p>This issue appears similar to this unanswered question: <a href="https://stackoverflow.com/questions/75866066/how-to-handle-timeout-for-uploading-a-blob-in-azure-storage-using-python-sdk">How to handle timeout for Uploading a blob in Azure Storage using Python SDK?</a>. The one (not accepted) solution suggests using a timeout threshold within the call to <code>upload_blob</code>.
As noted above (and shown within the code below), this is not producing the desired effect.</p> <pre><code>from azure.core.exceptions import AzureError from azure.storage.blob import BlobServiceClient # Define Azure Storage Blob connection details connection_string = &quot;DefaultEndpointsProtocol=https;AccountName=test;AccountKey=removed==;EndpointSuffix=core.windows.net&quot; container_name = &quot;containername&quot; blob_name = &quot;blobname&quot; local_file_path = &quot;c:/temp/test.txt&quot; # Create the BlobServiceClient blob_service_client = BlobServiceClient.from_connection_string(connection_string) # Function to perform the blob upload def upload_blob_process(): try: with open(local_file_path, &quot;rb&quot;) as data: blob_client = blob_service_client.get_blob_client(container_name, blob_name) blob_client.upload_blob(data, timeout=5) print(&quot;Blob uploaded successfully!&quot;) except AzureError as e: print(f&quot;Azure error occurred: {e}&quot;) except Exception as e: print(f&quot;An error occurred: {e}&quot;) upload_blob_process() </code></pre>
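A generic client-side deadline pattern (independent of the Azure SDK, whose own client-side transport knobs are not guessed at here) is to run the blocking call in a worker thread and bound only the wait:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def call_with_deadline(fn, seconds):
    """Run fn in a worker thread; give up waiting after `seconds`.

    Note: Python threads can't be killed, so the worker keeps running to
    completion in the background -- this bounds the *wait*, not the work.
    """
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn)
    try:
        return future.result(timeout=seconds)
    except FutureTimeout:
        return None  # caller decides how to surface the timeout
    finally:
        pool.shutdown(wait=False)

def slow_upload():
    time.sleep(2)  # stand-in for a blocking call such as upload_blob()
    return "uploaded"

assert call_with_deadline(slow_upload, 0.2) is None
assert call_with_deadline(lambda: "ok", 1.0) == "ok"
```

This wraps any blocking SDK call without depending on the SDK honoring a client-side timeout of its own.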
<python><azure><http><azure-blob-storage>
2024-11-19 16:31:16
1
415
George
79,204,203
821,995
How to detect MS Visual C++ runtime version mismatches from third-party libraries when importing Python modules?
<p>I've run into a weird case of incompatibility between:</p> <ul> <li>A Python module (<code>PySide6</code>) that bundles its own version of the Visual C++ runtime</li> <li>Our own compiled modules that are compiled against the latest Visual C++ runtime</li> </ul> <p>At some point we upgraded Visual C++, and the second ended up being newer than the first, meaning that according to <a href="https://learn.microsoft.com/en-us/cpp/porting/binary-compat-2015-2017?view=msvc-170" rel="nofollow noreferrer">MS Visual C++ compatibility rules</a>:</p> <ul> <li>Importing <code>PySide6</code>, then our module makes the interpreter crash: by default our module tries to reuse the old runtime version already loaded from <code>PySide6</code>, and that configuration is not supported because it's compiled for a newer runtime.</li> <li>Importing our module, then <code>PySide6</code> works perfectly fine: our module uses the newer system runtime and <code>PySide6</code> keeps on using its custom runtime, both configurations being supported.</li> </ul> <p>Aside from doing the same thing as <code>PySide6</code> (bundling the runtime inside of our module) or forcing our module to load the system runtime manually, is there a way to detect that this is happening before the import and warn/crash with a proper error message?</p>
<python><c++><visual-c++><pyside6><visual-c++-runtime>
2024-11-19 16:06:43
1
7,455
F.X.
79,204,195
3,240,659
Tensorboard histogram display of layer weights was working in tensorflow 2.12 but is broken in 2.16. Is this due to the Keras version update (2 to 3)?
<p>The below code (a minimal 'toy' example using data generators for both training and validation data) works fine in 2.12 and produces the desired bias and kernel weight histograms in tensorboard for both hidden layers as well as the output layer. For 2.16 and later I just get the output layer bias and kernel displayed. I am guessing that this is due to the switch in tensorflow 2.16 to using Keras version 3, but I am unable to figure out how to fix my code so it generates all the histograms. Online suggestions about explicitly writing out the weights using tf.summary.histogram don't work, and neither does switching to using the validation data directly rather than using a generator.</p> <pre><code>import numpy as np import tensorflow as tf from keras.utils import Sequence from keras.models import Sequential from keras.layers import Input from keras.layers import Dense import datetime class DataGenerator(Sequence): def __init__(self, batch_size, data_size, **kwargs): super().__init__(**kwargs) self.data_size = data_size self.batch_size = batch_size self.data = np.random.random((data_size, features+1)) self.total_samples=self.data.shape[0] self.indexes = np.arange(self.total_samples) def __len__(self): return int(np.ceil(self.total_samples / self.batch_size)) def __getitem__(self, index): batch_indexes = self.indexes[index * self.batch_size:(index + 1) * self.batch_size] batch_data = self.data[batch_indexes] return self.__data_generation(batch_data) def __data_generation(self, batch_data): x = batch_data[:, :-1] y = batch_data[:, -1] return x, y if __name__ == '__main__': features = 3 batch_size = 2 print(f&quot;{tf.__version__=}&quot;) model = Sequential([ Dense(20, name=&quot;twenty_unit_hidden&quot;), Dense(10, name=&quot;ten_unit_hidden&quot;), Dense(1, name=&quot;output&quot;) ]) data_gen = DataGenerator(batch_size, 10) val_gen = DataGenerator(batch_size, 2) log_dir = f&quot;logs/fit/{tf.__version__}_&quot; + 
datetime.datetime.now().strftime(&quot;%Y%m%d-%H%M%S&quot;) tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1, write_graph=True, write_images=True, update_freq=&quot;epoch&quot;) loss = tf.keras.losses.BinaryCrossentropy() model.compile(loss=loss) model.fit(data_gen, epochs=4, batch_size=batch_size, callbacks=[tensorboard_callback], validation_data=val_gen) </code></pre>
<python><tensorflow><keras>
2024-11-19 16:04:52
0
358
nickcrabtree
79,204,030
3,741,284
Determine most accurate best-fit algorithm for scattered points
<p>I'm currently working with the <code>skspatial</code> package in Python 3. I currently have two functions:</p> <ol> <li><code>skspatial.objects.Cylinder.best_fit</code> to try to best fit the points into a cylinder</li> <li>A function that returns the bounding box of the scattered points. This function returns a custom class that should &quot;extend&quot; the <code>skspatial.objects</code> with a <code>Cuboid</code> object with similar methods, but in reality is only a bounding box.</li> </ol> <p>My question is the following: Is there a way to determine which of these two shapes better encompasses the scattered points? They both return valid <code>Cylinder</code> or <code>Cuboid</code> objects on their own for the same set of scattered points, but only one of them actually fits the points better.</p> <p>In case it matters for this question, the scattered points only represent the surface of whatever shape they actually are on, so there are no points &quot;inside&quot; the object.</p>
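One generic way to rank candidate shapes for surface points (not specific to skspatial, and with fabricated data) is to compare mean point-to-surface residuals; the shape the points actually lie on should have a much smaller mean residual:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricate 500 points on the surface of a cylinder: radius 1, axis = z.
theta = rng.uniform(0.0, 2.0 * np.pi, 500)
z = rng.uniform(-1.0, 1.0, 500)
pts = np.column_stack([np.cos(theta), np.sin(theta), z])

# Residual of a cylinder fit: |radial distance - fitted radius|.
cyl_resid = np.abs(np.hypot(pts[:, 0], pts[:, 1]) - 1.0).mean()

# Residual of the bounding box: distance from each point to its nearest face
# (valid for points inside the box, which bounding-box points always are).
lo, hi = pts.min(axis=0), pts.max(axis=0)
box_resid = np.minimum(pts - lo, hi - pts).min(axis=1).mean()

# The shape the points actually lie on has the (much) smaller mean residual.
assert cyl_resid < box_resid
```

The same comparison applies to a real `Cylinder.best_fit` result by swapping in that object's distance-to-surface computation.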
<python><python-3.x><shapes><data-fitting><best-fit>
2024-11-19 15:25:37
1
947
CRoemheld
79,203,777
3,104,974
How to identify and manage a background daemon process beyond parent process runtime in python
<p>I want to run a Python script <code>B</code> in the background every 8 hours on a Windows machine that usually runs 24/7. Also, I have another script <code>A</code> that is manually executed and uses data that is provided by the background script (which queries a large database and extracts relevant data so that it is available for the main script without long waiting times).</p> <p>The question is: how can I ensure that <code>B</code> is running when <code>A</code> is started?</p> <p>My idea is, when starting <code>A</code>, to somehow check whether <code>B</code> already exists, and if not launch it via <code>multiprocessing.Process</code>. But how can I identify this process?</p> <p>The only thing I came up with would be saving the process id to a file on disk and then checking every time whether a process with this id exists - but afaik this id need not refer to <em>my</em> process: if mine crashed, Windows may have given the same id to another process in the meantime.</p>
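One pattern that sidesteps PID reuse entirely (a sketch; the port number is an arbitrary assumption, and it works on Windows because it only uses sockets): have B hold a bound localhost port for its lifetime, so A only needs to test whether the port is taken.

```python
import socket

GUARD_PORT = 47653  # arbitrary fixed port reserved for script B (assumption)

def acquire_guard(port=GUARD_PORT):
    """Return a bound socket if no other instance holds the guard, else None."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind(("127.0.0.1", port))
    except OSError:      # port taken -> another instance is already running
        sock.close()
        return None
    return sock          # keep this object alive for the process lifetime

guard = acquire_guard()          # B starts and takes the port
assert guard is not None
assert acquire_guard() is None   # a later check sees B as already running
guard.close()                    # released automatically if B crashes
```

Unlike a PID file, the OS releases the port the moment B dies, so a stale guard cannot point at an unrelated process.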
<python><multiprocessing><python-multiprocessing><background-process>
2024-11-19 14:16:44
2
6,315
ascripter
79,203,234
6,189,352
Handling multiple IO blocking services using asyncio
<p>This doesn't necessarily have to be a question about Python, it could rather be about asynchronous programming in general. To keep it short, I have an undefined number of objects where each executes its operation by running a function that essentially acts as an infinite loop (often contains while True). These functions, across multiple objects, execute in parallel (for now they don't share data between them but instead use either callbacks or send data to a third place without passing data backwards).</p> <p>For example, let’s say I have objects of class Poller (poller1, poller2, poller3, and so on up to pollern) with a &quot;read&quot; function (contains infinite loop). Similarly, I have objects of class Listener (listener1, listener2, listener3, and so on up to listenern).</p> <p>These functions use asyncio and asynchronous calls, and everything works as I intended. However, I'm not sure if this is the proper approach for this task or if it could be done more efficiently.</p> <p>The function that runs my code looks like this:</p> <pre><code>async def loop(self): ... tasks = [ asyncio.create_task(poller.read()) for poller in self.pollers ] tasks.extend( [ asyncio.create_task(listener.start()) for listener in self.listeners ] ) for task in tasks: await task </code></pre> <p>I simplified the code to focus on the core of the problem. My question is mainly about whether this is an adequate way to solve the problem and whether there are any obvious drawbacks that might make enhancing this functionality more difficult.</p>
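For comparison, a common variant of the scheduling shown above is `asyncio.gather`, which awaits all tasks concurrently and propagates the first exception. A toy, finite version of the poller/listener loops:

```python
import asyncio

async def worker(name, sink):
    for i in range(3):           # a finite stand-in for `while True`
        await asyncio.sleep(0)   # yield control, as a real read() would
        sink.append((name, i))

async def main():
    sink = []
    await asyncio.gather(
        *(worker(f"poller{n}", sink) for n in range(2)),
        *(worker(f"listener{n}", sink) for n in range(2)),
    )
    return sink

events = asyncio.run(main())
assert len(events) == 12  # 4 coroutines x 3 iterations, interleaved
assert {name for name, _ in events} == {"poller0", "poller1",
                                        "listener0", "listener1"}
```

Sequentially awaiting a list of already-created tasks behaves similarly for the happy path; the practical difference is in error propagation and cancellation of the siblings.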
<python><concurrency><python-asyncio>
2024-11-19 11:25:06
2
579
Serjuice
79,203,163
4,999,991
Python Launcher (py.exe) points to non-existent Python installation in AppData - how to fix?
<p>When I run <code>py --version</code> on Windows, I get this error:</p> <pre class="lang-none prettyprint-override"><code>Unable to create process using '%LOCALAPPDATA%\Programs\Python\Python312\python.exe --version': The system cannot find the file specified. </code></pre> <p>However, I have Python properly installed in <code>C:\Program Files\Python312\</code> (confirmed by <code>where python.exe</code>).</p> <p>Running <code>ftype</code> shows the Python Launcher is installed:</p> <pre class="lang-none prettyprint-override"><code>Python.ArchiveFile=&quot;C:\Windows\py.exe&quot; &quot;%L&quot; %* Python.CompiledFile=&quot;C:\Windows\py.exe&quot; &quot;%L&quot; %* Python.File=&quot;C:\Windows\py.exe&quot; &quot;%L&quot; %* Python.NoConArchiveFile=&quot;C:\Windows\pyw.exe&quot; &quot;%L&quot; %* Python.NoConFile=&quot;C:\Windows\pyw.exe&quot; &quot;%L&quot; %* </code></pre> <p>And <code>py -0p --list-paths</code> shows:</p> <pre class="lang-none prettyprint-override"><code> -V:3.12 * %LOCALAPPDATA%\Programs\Python\Python312\python.exe </code></pre> <p>The launcher is looking for Python in my <code>AppData</code> folder (probably from an old installation), but Python is actually installed in Program Files. How can I fix this?</p>
<python><windows><registry>
2024-11-19 11:06:10
0
14,347
Foad S. Farimani
79,203,144
9,476,917
Python Seaborn - set_xlim - axis labels do not appear on axis
<p>I have the following function to produce seaborn bar charts.</p> <pre><code>def create_bar_chart(data, numeric_col, category_col, group_col=None, x_min=0, x_max=100, fig_width_cm=12, fig_height_cm=8): fig, ax = plt.subplots(figsize=(cm*fig_width_cm, cm*fig_height_cm)) palette= {&quot;Portfolio&quot;: &quot;#00915A&quot;, &quot;Benchmark&quot;: &quot;#B3B3B3&quot;} if group_col is None: group_col = category_col sns.barplot(data, x=numeric_col, y=category_col, hue=group_col, legend=False, palette=palette, width=0.4) sns.despine(offset=10, trim=True) ax.set_xlim(x_min, x_max) ax.yaxis.grid(False) # Hide the horizontal gridlines ax.xaxis.grid(True) # Show the vertical gridlines for container in ax.containers: ax.bar_label(container, fmt='{:.2f}', fontsize=8) return fig </code></pre> <p>I want the x-axis to have a value range between e.g. 0-100 and also have vertical gridlines for this range. Therefore, I am using <code>ax.set_xlim()</code>. However, with the current setting the axis labels and vertical gridlines are determined by the maximum x value of the data... only the chart grid size is impacted by <code>ax.set_xlim</code>... A screenshot to illustrate:</p> <p><a href="https://i.sstatic.net/FuagsiVo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FuagsiVo.png" alt="enter image description here" /></a></p> <p>Do you know what I am missing? Thanks</p>
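One ordering detail worth noting (an assumption about the cause, since the questioner's data can't be run here): seaborn's `despine(trim=True)` trims the spine to the ticks that exist at call time, so limits set afterwards don't regrow it. Plain matplotlib shows that `set_xlim` on its own does move the tick range:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for scripting
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.barh([0, 1], [40.0, 90.0])  # data max is 90, but we want a 0-100 axis
ax.set_xlim(0, 100)            # call this BEFORE any despine(..., trim=True)
ax.xaxis.grid(True)

assert ax.get_xlim() == (0.0, 100.0)
ticks = ax.get_xticks()
assert ticks.min() <= 0 and ticks.max() >= 100  # gridlines span the range
```

So a plausible fix is to reorder the function: set the limits first, then call `sns.despine(offset=10, trim=True)` so the trimmed spine matches the 0-100 ticks.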
<python><matplotlib><seaborn><bar-chart>
2024-11-19 11:01:25
0
755
Maeaex1
79,203,119
7,760,998
(NoSuchVersion) when calling the GetObject operation: The specified version does not exist
<p>I am trying to download all the versions of a file in Amazon S3 using <code>boto3</code> (version 1.34.113) but it keeps giving the error <code>botocore.exceptions.ClientError: An error occurred (NoSuchVersion) when calling the GetObject operation: The specified version does not exist.</code></p> <p>My Python code is</p> <pre class="lang-py prettyprint-override"><code>import boto3 S3_BUCKET = &quot;mybucket&quot; S3_KEY = &quot;mykey&quot; s3_client = boto3.client(&quot;s3&quot;) paginator = s3_client.get_paginator(&quot;list_object_versions&quot;) page_iterator = paginator.paginate(Bucket=S3_BUCKET, Prefix=S3_KEY) for page in page_iterator: for version_obj in page.get(&quot;Versions&quot;, []): version_details = s3_client.get_object( Bucket=S3_BUCKET, Key=S3_KEY, VersionId=version_obj[&quot;VersionId&quot;] ) </code></pre> <p>It is able to list down the versions but is not able to perform the <code>get_object</code> operation.</p> <p>Troubleshooting I have done:</p> <ol> <li>I have full S3 access. (<code>s3:*</code> on <code>*</code> resource)</li> <li>My credentials are setup properly local.</li> <li>I am able to list and download the versions from the AWS Console S3 interface.</li> <li>The issue is happening for only a few files.</li> <li>The S3 key is not a prefix to any other object ruling out the possibility of a key mismatch since list versions uses a Prefix and get object uses a Key.</li> <li>AWS S3 CLI (version aws-cli/2.15.43 Python/3.11.8 Linux/6.8.0-48-generic exe/x86_64.ubuntu.24 prompt/off) works fine <code>aws s3api list-object-versions --bucket mybucket --prefix mykey</code> and <code>aws s3api get-object --bucket mybucket --key mykey --version-id versionidfromresponseabove outfile</code>.</li> <li>The file is not deleted.</li> <li>The version is not deleted.</li> </ol>
<python><amazon-web-services><amazon-s3><boto3>
2024-11-19 10:55:28
1
1,748
Samkit Jain
79,202,529
2,155,362
How to locate an image in a web page?
<p>I want to cut an image from a web page using Python + Selenium, debugging remotely via mstsc. Below is my code fragment:</p> <pre><code> image_data = self.driver.get_screenshot_as_png() screenshot = Image.open(BytesIO(image_data)) screenshot.save('screenshot.png') element = self.driver.find_element(By.CSS_SELECTOR,'myselect') top = element.location['y'] bottom = element.location['y'] + element.size['height'] left = element.location['x'] right = element.location['x'] + element.size['width'] result = screenshot.crop((left,top,right,bottom)) </code></pre> <p>But I can't get the image that I want. When I open screenshot.png and find the real position with the mouse, the (left,top,right,bottom) values I get are different from the values calculated by the code above. So how can I get the real position of the image I want in the web page?</p>
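A frequent cause of this mismatch (an assumption here, since the page can't be inspected) is that `get_screenshot_as_png` returns physical pixels while `element.location`/`element.size` are in CSS pixels; on a scaled remote-desktop session they differ by `window.devicePixelRatio`. The crop box then needs scaling, sketched with hypothetical values:

```python
# Hypothetical values: in a real script the ratio would come from
# driver.execute_script("return window.devicePixelRatio")
ratio = 1.5
location = {"x": 100, "y": 50}
size = {"width": 200, "height": 80}

def crop_box(location, size, ratio):
    # Convert CSS-pixel coordinates to screenshot (physical) pixels.
    left = location["x"] * ratio
    top = location["y"] * ratio
    right = (location["x"] + size["width"]) * ratio
    bottom = (location["y"] + size["height"]) * ratio
    return tuple(int(round(v)) for v in (left, top, right, bottom))

assert crop_box(location, size, ratio) == (150, 75, 450, 195)
```

The resulting tuple is what would be passed to `screenshot.crop(...)` in place of the unscaled coordinates.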
<python><selenium-webdriver>
2024-11-19 08:10:31
0
1,713
user2155362
79,202,327
7,282,437
Importing from another directory located in parent directory in Python
<p>Suppose we have a project structure like:</p> <pre><code>project/ public_app/ __init__.py dir/ __init__.py config.py subdir/ __init__.py functions.py utils.py my_app/ main.py </code></pre> <p>In <code>my_app/main.py</code>, I would like to import some functions from <code>public_app/dir/subdir/functions.py</code>. A solution I found was to add the following:</p> <pre><code># main.py import sys import os path = os.path.abspath(os.path.join(os.path.dirname(__file__), '../')) sys.path.append(path) from public_app.dir.subdir.functions import * </code></pre> <p>This seems to work, except now I would also like to import from <code>public_app/dir/subdir/utils.py</code>. However inside this file, it contains other relative imports:</p> <pre><code># utils.py from dir.config import * </code></pre> <p>If I then try doing</p> <pre><code># main.py from public_app.dir.subdir.utils import * </code></pre> <p>this gives me a <code>ModuleNotFoundError: No module named 'dir'</code>.</p> <p>Any suggestion on how to do this? Note that I would ideally like to not mess with <code>public_app</code> at all. This is because it is a frequently updated directory pulled from a public repository, and would require constantly changing the imports. I would also like to also keep <code>my_app</code> in a separate directory for cleanliness/easier maintenance if possible.</p> <hr /> <p><strong>Edit:</strong> Figured it out actually by sheer chance. See below for answer.</p>
<python><directory><python-import>
2024-11-19 06:42:59
4
389
Adam
79,202,065
16,405,935
How to sum data by input date, month and previous month
<p>I'm trying to sum up data for the selected date, the month of the selected date, and the previous month of the selected date, but don't know how to do it. Below is my sample data and my expected output:</p> <p>Sample data:</p> <pre><code>import pandas as pd import numpy as np df = pd.read_excel('https://github.com/hoatranobita/hoatranobita/raw/refs/heads/main/Check%20data%20(1).xlsx', sheet_name='Data') df COA Code USDConversion Amount Base Date 2 0 19010000000 26924582.44 2024-10-01 1 19010000000 38835600.44 2024-10-02 2 19010000000 46794586.57 2024-10-03 3 19010000000 57117346.49 2024-10-06 4 19010000000 69256132.98 2024-10-07 ... ... ... ... 65 58000000000 38082130.88 2024-11-12 66 58000000000 38140016.13 2024-11-13 67 58000000000 38160089.27 2024-11-14 68 58000000000 38233974.54 2024-11-17 69 58000000000 38323598.99 2024-11-18 </code></pre> <p>So if I select a date in November (for example <code>2024-11-18</code>), I want to group by the selected date, the month of the selected date, and the previous month of the selected date.</p> <p>Output:</p> <pre><code>COA Code 2024-11-18 October November 0 19010000000 42625047.24 1354513618.61 584813860.97 1 58000000000 38323598.99 820927014.08 456265522.64 </code></pre>
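Not the questioner's data, but a minimal sketch of one way to build the three columns with pandas `Period` arithmetic (the toy frame and its values are invented):

```python
import pandas as pd

df = pd.DataFrame({
    "COA Code": ["A", "A", "A", "B", "B"],
    "Amount": [10.0, 20.0, 5.0, 7.0, 3.0],
    "Base Date": pd.to_datetime(
        ["2024-10-02", "2024-11-18", "2024-11-03", "2024-11-18", "2024-10-20"]),
})

selected = pd.Timestamp("2024-11-18")
cur = selected.to_period("M")   # November 2024
prev = cur - 1                  # October 2024
months = df["Base Date"].dt.to_period("M")

# One summed Series per bucket, aligned on "COA Code" by the constructor.
result = pd.DataFrame({
    str(selected.date()):
        df[df["Base Date"].eq(selected)].groupby("COA Code")["Amount"].sum(),
    str(prev): df[months.eq(prev)].groupby("COA Code")["Amount"].sum(),
    str(cur): df[months.eq(cur)].groupby("COA Code")["Amount"].sum(),
})

assert result.loc["A", "2024-11"] == 25.0   # 20 + 5
assert result.loc["B", "2024-10"] == 3.0
```

Filtering with `months.eq(...)` before grouping keeps each bucket independent, which matches the three-column output requested above.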
<python><pandas>
2024-11-19 04:13:19
1
1,793
hoa tran
79,201,971
2,307,441
swifter module not showing progress bar in python
<p>I have a pandas df with 1M records. I need to apply a function to get the new column. To do so, I use <code>swifter</code> to run the apply in parallel.</p> <p>The swifter module is working fine, but as mentioned <a href="https://github.com/jmcarpenter2/swifter/issues/176" rel="nofollow noreferrer">here</a>, I don't see the progress bar with actual progress. All I see is 0%, even after the step has completed execution.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import swifter def get_length(row): return len(str(row['col1'])) if __name__ == '__main__': df = pd.read_csv(&quot;file1.csv&quot;, dtype=str) df['newval'] = df.swifter.allow_dask_on_strings(enable=True).progress_bar(enable=True, desc=&quot;Test&quot;).apply(get_length, axis=1) </code></pre> <p>Below is a screenshot of the progress bar (it shows 0% even after execution completed)</p> <p><a href="https://i.sstatic.net/Lhvi0Hed.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Lhvi0Hed.png" alt="enter image description here" /></a></p>
<python><pandas><apply><swifter>
2024-11-19 03:01:15
0
1,075
Roshan
79,201,839
403,875
Hello World for jaxtyping?
<p>I can't find any instructions or tutorials for getting started with jaxtyping. I tried the simplest possible program and it fails to parse. I'm on Python 3.11. I don't see anything on GitHub jaxtyping project about an upper bound (lower bound is Python 3.9) and it looks like it's actively maintained (last commit was 8 hours ago). What step am I missing?</p> <pre><code>jaxtyping==0.2.36 numpy==2.1.3 torch==2.5.1 typeguard==4.4.1 </code></pre> <p>(It seems like numpy is required for some reason even though I'm not using it)</p> <pre><code>from typeguard import typechecked from jaxtyping import Float from torch import Tensor @typechecked def matmul(a: Float[Tensor, &quot;m n&quot;], b: Float[Tensor, &quot;n p&quot;]) -&gt; Float[Tensor, &quot;m p&quot;]: &quot;&quot;&quot; Matrix multiplication of two 2D arrays. &quot;&quot;&quot; raise NotImplementedError(&quot;This function is not implemented yet.&quot;) </code></pre> <pre><code>(venv) dspyz@dspyz-desktop:~/helloworld$ python matmul.py Traceback (most recent call last): File &quot;/home/dspyz/helloworld/matmul.py&quot;, line 6, in &lt;module&gt; @typechecked ^^^^^^^^^^^ File &quot;/home/dspyz/helloworld/venv/lib/python3.11/site-packages/typeguard/_decorators.py&quot;, line 221, in typechecked retval = instrument(target) ^^^^^^^^^^^^^^^^^^ File &quot;/home/dspyz/helloworld/venv/lib/python3.11/site-packages/typeguard/_decorators.py&quot;, line 72, in instrument instrumentor.visit(module_ast) File &quot;/usr/lib/python3.11/ast.py&quot;, line 418, in visit return visitor(node) ^^^^^^^^^^^^^ File &quot;/home/dspyz/helloworld/venv/lib/python3.11/site-packages/typeguard/_transformer.py&quot;, line 598, in visit_Module self.generic_visit(node) File &quot;/home/dspyz/helloworld/venv/lib/python3.11/site-packages/typeguard/_transformer.py&quot;, line 498, in generic_visit node = super().generic_visit(node) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/lib/python3.11/ast.py&quot;, line 494, in generic_visit value = 
self.visit(value) ^^^^^^^^^^^^^^^^^ File &quot;/usr/lib/python3.11/ast.py&quot;, line 418, in visit return visitor(node) ^^^^^^^^^^^^^ File &quot;/home/dspyz/helloworld/venv/lib/python3.11/site-packages/typeguard/_transformer.py&quot;, line 672, in visit_FunctionDef with self._use_memo(node): File &quot;/usr/lib/python3.11/contextlib.py&quot;, line 137, in __enter__ return next(self.gen) ^^^^^^^^^^^^^^ File &quot;/home/dspyz/helloworld/venv/lib/python3.11/site-packages/typeguard/_transformer.py&quot;, line 556, in _use_memo new_memo.return_annotation = self._convert_annotation( ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/dspyz/helloworld/venv/lib/python3.11/site-packages/typeguard/_transformer.py&quot;, line 582, in _convert_annotation new_annotation = cast(expr, AnnotationTransformer(self).visit(annotation)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/dspyz/helloworld/venv/lib/python3.11/site-packages/typeguard/_transformer.py&quot;, line 355, in visit new_node = super().visit(node) ^^^^^^^^^^^^^^^^^^^ File &quot;/usr/lib/python3.11/ast.py&quot;, line 418, in visit return visitor(node) ^^^^^^^^^^^^^ File &quot;/home/dspyz/helloworld/venv/lib/python3.11/site-packages/typeguard/_transformer.py&quot;, line 421, in visit_Subscript [self.visit(item) for item in node.slice.elts], ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/dspyz/helloworld/venv/lib/python3.11/site-packages/typeguard/_transformer.py&quot;, line 421, in &lt;listcomp&gt; [self.visit(item) for item in node.slice.elts], ^^^^^^^^^^^^^^^^ File &quot;/home/dspyz/helloworld/venv/lib/python3.11/site-packages/typeguard/_transformer.py&quot;, line 355, in visit new_node = super().visit(node) ^^^^^^^^^^^^^^^^^^^ File &quot;/usr/lib/python3.11/ast.py&quot;, line 418, in visit return visitor(node) ^^^^^^^^^^^^^ File &quot;/home/dspyz/helloworld/venv/lib/python3.11/site-packages/typeguard/_transformer.py&quot;, line 474, in visit_Constant expression = ast.parse(node.value, 
mode=&quot;eval&quot;) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/lib/python3.11/ast.py&quot;, line 50, in parse return compile(source, filename, mode, flags, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;&lt;unknown&gt;&quot;, line 1 m p ^ SyntaxError: invalid syntax </code></pre>
<python><pytorch><python-typing><jax>
2024-11-19 01:14:35
2
5,604
dspyz
79,201,815
1,082,883
How to search for a string across multiple columns and create a flag column if the string is found in any column, using Polars?
<p>To search over multiple columns, and create a new column of flag if string found, the following code works, but is there any compact way inside <code>with_columns()</code> to achieve the same?</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame({ &quot;col1&quot;: [&quot;hello&quot;, &quot;world&quot;, &quot;polars&quot;], &quot;col2&quot;: [&quot;data&quot;, &quot;science&quot;, &quot;hello&quot;], &quot;col3&quot;: [&quot;test&quot;, &quot;string&quot;, &quot;match&quot;], &quot;col4&quot;: [&quot;hello&quot;, &quot;example&quot;, &quot;test&quot;] }) search_string = &quot;hello&quot; condition = pl.lit(False) for col in df.columns: condition |= pl.col(col).str.contains(search_string) df = df.with_columns( condition.alias(&quot;string_found&quot;) + 0 ) print(df) </code></pre> <pre><code>shape: (3, 5) ┌────────┬─────────┬────────┬─────────┬──────────────┐ │ col1 ┆ col2 ┆ col3 ┆ col4 ┆ string_found │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ str ┆ str ┆ i32 │ ╞════════╪═════════╪════════╪═════════╪══════════════╡ │ hello ┆ data ┆ test ┆ hello ┆ 1 │ │ world ┆ science ┆ string ┆ example ┆ 0 │ │ polars ┆ hello ┆ match ┆ test ┆ 1 │ └────────┴─────────┴────────┴─────────┴──────────────┘ </code></pre>
<python><dataframe><python-polars>
2024-11-19 01:00:08
1
691
Fred
79,201,789
1,609,514
Why does Pandas rolling method return a series with a different dtype to the original?
<p>Just curious why the Pandas Series <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.rolling.html" rel="nofollow noreferrer">rolling window method</a> doesn't preserve the data-type of the original series:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd x = pd.Series(np.ones(6), dtype='float32') x.dtype, x.rolling(window=3).mean().dtype </code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code>(dtype('float32'), dtype('float64')) </code></pre>
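The short answer appears to be that pandas' rolling-window aggregations are computed in float64 internally, so the result is upcast regardless of the input dtype; casting back afterwards is the usual workaround (at the cost of one extra copy) — a quick check:

```python
import numpy as np
import pandas as pd

x = pd.Series(np.ones(6), dtype="float32")

# The window aggregations run in float64 under the hood, so the result
# comes back as float64 regardless of the input dtype.
rolled = x.rolling(window=3).mean()
assert rolled.dtype == np.float64

# Workaround: cast back afterwards (costs an extra copy).
rolled32 = rolled.astype("float32")
assert rolled32.dtype == np.float32
```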
<python><pandas><types><rolling-computation>
2024-11-19 00:40:14
1
11,755
Bill
79,201,663
13,642,249
Split columns containing lists from CSV into separate CSV files with pandas
<p>I have CSV files with multiple columns of data retrieved from APIs, where each cell may contain either a single value or a list/array. The size of these lists is consistent across each column (e.g., a column named <code>ALPHANUMS</code> with a row containing a list like <code>&quot;['A', 'B', '4']&quot;</code> has the same list size as a column named <code>COLOR</code> with a row containing a list <code>&quot;['red', 'blue', 'green']&quot;</code>), but the list sizes can vary per CSV file depending on the API response. I would like to use <code>pandas</code> to create separate CSV files for each element in a list column, while retaining the rest of the data in each file.</p> <p>Here's an example of what the data might look like from this mockup function:</p> <pre class="lang-py prettyprint-override"><code>import random import csv # Predefined lists for NAME, CARS, and PHONE OS NAMES = [&quot;John Doe&quot;, &quot;Jane Smith&quot;, &quot;Alice Johnson&quot;, &quot;Bob Brown&quot;, &quot;Charlie Davis&quot;, &quot;Eve White&quot;, &quot;David Wilson&quot;, &quot;Emma Taylor&quot;, &quot;Frank Harris&quot;, &quot;Grace Clark&quot;] CAR_BRANDS = [&quot;Toyota&quot;, &quot;Ford&quot;, &quot;BMW&quot;, &quot;Tesla&quot;, &quot;Honda&quot;, &quot;Chevrolet&quot;, &quot;Nissan&quot;, &quot;Audi&quot;] PHONE_OS = [&quot;Android&quot;, &quot;iOS&quot;] def create_csv(file_name, num_records): cur_random_list_size = random.randint(1, min(len(NAMES), len(CAR_BRANDS))) with open(file_name, mode='w', newline='') as file: writer = csv.writer(file) writer.writerow([&quot;ID&quot;, &quot;NAME&quot;, &quot;MONTH&quot;, &quot;CARS&quot;, &quot;PHONE OS&quot;]) for i in range(num_records): record = { &quot;id&quot; : i + 1, &quot;name&quot;: [NAMES[n] for n in range(cur_random_list_size)], &quot;month&quot;: random.randint(1,12), &quot;cars&quot;: [random.choice(CAR_BRANDS) for _ in range(cur_random_list_size)], &quot;phone&quot;: random.choice(PHONE_OS) }
writer.writerow(record.values()) print(f&quot;CSV file '{file_name}' created with {num_records} records.&quot;) create_csv(&quot;people_data.csv&quot;, 5) </code></pre> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>ID</th> <th>NAME</th> <th>MONTH</th> <th>CARS</th> <th>PHONE OS</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>&quot;['John Doe', 'Jane Smith', 'Alice Johnson', 'Bob Brown', 'Charlie Davis', 'Eve White']&quot;</td> <td>2</td> <td>&quot;['Toyota', 'Nissan', 'Nissan', 'Nissan', 'Audi', 'Honda']&quot;</td> <td>iOS</td> </tr> <tr> <td>2</td> <td>&quot;['John Doe', 'Jane Smith', 'Alice Johnson', 'Bob Brown', 'Charlie Davis', 'Eve White']&quot;</td> <td>4</td> <td>&quot;['Nissan', 'Ford', 'Honda', 'Toyota', 'Ford', 'Honda']&quot;</td> <td>iOS</td> </tr> <tr> <td>3</td> <td>&quot;['John Doe', 'Jane Smith', 'Alice Johnson', 'Bob Brown', 'Charlie Davis', 'Eve White']&quot;</td> <td>8</td> <td>&quot;['BMW', 'Honda', 'Tesla', 'Tesla', 'Tesla', 'Nissan']&quot;</td> <td>Android</td> </tr> <tr> <td>4</td> <td>&quot;['John Doe', 'Jane Smith', 'Alice Johnson', 'Bob Brown', 'Charlie Davis', 'Eve White']&quot;</td> <td>3</td> <td>&quot;['Tesla', 'Audi', 'Chevrolet', 'Audi', 'Chevrolet', 'BMW']&quot;</td> <td>iOS</td> </tr> <tr> <td>5</td> <td>&quot;['John Doe', 'Jane Smith', 'Alice Johnson', 'Bob Brown', 'Charlie Davis', 'Eve White']&quot;</td> <td>8</td> <td>&quot;['Ford', 'Tesla', 'BMW', 'Toyota', 'Nissan', 'Ford']&quot;</td> <td>Android</td> </tr> </tbody> </table></div> <p>And ideally, I'd like to separate this into five individual csv files, as an example for <code>john_doe_people_data.csv</code>:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>ID</th> <th>NAME</th> <th>MONTH</th> <th>CARS</th> <th>PHONE OS</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>John Doe</td> <td>2</td> <td>Toyota</td> <td>iOS</td> </tr> <tr> <td>2</td> <td>John Doe</td> <td>4</td> <td>Nissan</td> <td>iOS</td> </tr> <tr> <td>3</td> 
<td>John Doe</td> <td>8</td> <td>BMW</td> <td>Android</td> </tr> <tr> <td>4</td> <td>John Doe</td> <td>3</td> <td>Tesla</td> <td>iOS</td> </tr> <tr> <td>5</td> <td>John Doe</td> <td>8</td> <td>Ford</td> <td>Android</td> </tr> </tbody> </table></div> <p>All in all, how can I use pandas to create separate CSV files for each element in a list column, while keeping the rest of the data in each file?</p>
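One possible approach (a sketch, not from the original post): parse the stringified lists with `ast.literal_eval`, explode all list columns together — pandas 1.3+ supports multi-column `explode`, which pairs elements positionally and works here because the lists in each row have equal length — and then write one CSV per distinct NAME. The slugged file name is my own choice:

```python
import ast
import pandas as pd

# Two-row stand-in for the mockup data in the question.
df = pd.DataFrame({
    "ID": [1, 2],
    "NAME": ["['John Doe', 'Jane Smith']"] * 2,
    "MONTH": [2, 4],
    "CARS": ["['Toyota', 'Nissan']", "['Ford', 'Honda']"],
    "PHONE OS": ["iOS", "Android"],
})

# Turn the stringified lists back into real Python lists.
list_cols = ["NAME", "CARS"]
for col in list_cols:
    df[col] = df[col].apply(ast.literal_eval)

# Multi-column explode pairs elements positionally across the columns.
flat = df.explode(list_cols, ignore_index=True)

# One CSV per person; single-valued columns (ID, MONTH, PHONE OS)
# are carried along unchanged.
for name, group in flat.groupby("NAME"):
    fname = name.lower().replace(" ", "_") + "_people_data.csv"
    group.to_csv(fname, index=False)
```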
<python><pandas>
2024-11-18 23:07:59
5
1,422
kyrlon
79,201,516
393,194
How to mock a python-jenkins call in FastAPI
<p>I have a simple FastAPI application that uses <a href="https://python-jenkins.readthedocs.io/en/latest/" rel="nofollow noreferrer">python-jenkins</a> to make custom calls to our Jenkins instance.</p> <p>Here are a couple of examples:</p> <pre class="lang-py prettyprint-override"><code>@app.get(&quot;/jenkins_data&quot;) async def get_jenkins_nodes() -&gt; list: server = jenkins.Jenkins(url, user, key) try: nodes = server.get_nodes() return get_nodes_and_states(nodes) except jenkins.JenkinsException: nodes = [] return nodes </code></pre> <p>Or I might have something like this:</p> <pre class="lang-py prettyprint-override"><code>@app.get(&quot;/job/{node_name}&quot;) def get_current_jenkins_job(node_name: str) -&gt; str: server = jenkins.Jenkins(url, user, key) node = server.get_node_info(f&quot;{node_name}&quot;, 2) if node[&quot;executors&quot;][0][&quot;currentExecutable&quot;] is not None: display_name = node[&quot;executors&quot;][0][&quot;currentExecutable&quot;][&quot;displayName&quot;] else: display_name = &quot;No Jobs Running&quot; return display_name </code></pre> <p>How do I go about mocking the Jenkins object so I can properly test these methods with pytest? I want to write a test that causes Jenkins to raise <code>jenkins.JenkinsException</code> in the first example or mock it to set the &quot;currentExecutable&quot; for the second.</p> <p>Thanks!</p>
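The usual pattern is to patch `jenkins.Jenkins` as seen by the application module, using `unittest.mock` (or pytest's `monkeypatch`). The sketch below uses stand-ins — `FakeJenkinsException` and the `app_module` namespace replace the real python-jenkins import so the example is self-contained; in a real test you would patch `your_app_module.jenkins.Jenkins` and call the endpoints through FastAPI's `TestClient`:

```python
from unittest import mock

class FakeJenkinsException(Exception):
    # Stand-in for jenkins.JenkinsException.
    pass

class app_module:
    # Stand-in for the module that does 'import jenkins'.
    class jenkins:
        JenkinsException = FakeJenkinsException
        Jenkins = None  # replaced by the patches below

def get_jenkins_nodes():
    # Mirrors the first endpoint in the question.
    server = app_module.jenkins.Jenkins("url", "user", "key")
    try:
        return server.get_nodes()
    except app_module.jenkins.JenkinsException:
        return []

# Force the exception path: get_nodes() raises, so we expect [].
with mock.patch.object(app_module.jenkins, "Jenkins") as fake:
    fake.return_value.get_nodes.side_effect = FakeJenkinsException
    assert get_jenkins_nodes() == []

# Happy path: return canned node data.
with mock.patch.object(app_module.jenkins, "Jenkins") as fake:
    fake.return_value.get_nodes.return_value = ["node1"]
    assert get_jenkins_nodes() == ["node1"]
```

The same `fake.return_value.get_node_info.return_value = {...}` trick covers the second endpoint, supplying whatever `"currentExecutable"` structure the test needs.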
<python><pytest><fastapi>
2024-11-18 21:42:00
0
33,169
Mike Driscoll
79,201,278
10,853,071
Python Logging : name 'log' is not defined
<p>I am using python logging to write debug information to a file and print it to my screen:</p> <pre><code>logger=logging.getLogger() logger.setLevel(logging.INFO) file_handler = logging.FileHandler(&quot;std.log&quot;, mode='w') file_handler.setFormatter(logging.Formatter('%(asctime)s %(message)s')) logger.addHandler(file_handler) logger.addHandler(logging.StreamHandler(stream=sys.stdout)) class MeuManipulador(logging.Handler): def emit(self, record): if record.levelno == logging.ERROR: log_entry = self.format(record) if &quot;No such comm target registered&quot; in str(record.msg): return log(mensagem = log_entry) logger.addHandler(MeuManipulador()) </code></pre> <p>Everything runs fine, and near the end of my long script there is a block where logger.info and logger.warning run OK.</p> <pre><code>logger.info('Dropando a tabela anterior') with enginedb2 as conn: try: sql = ''' drop table DB2I023A.CNFC_CNV_HST_CTBL''' conn.execute(text(sql)) conn.commit() logger.info(f'comando SQL {sql} executado com sucesso') time.sleep(1) except ResourceClosedError as error: logger.warning(f'comando SQL {sql} não foi executado com sucesso, resultando no erro : {error}') pass </code></pre> <p>But if I use logger.error, I get this!</p> <pre><code>--------------------------------------------------------------------------- ResourceClosedError Traceback (most recent call last) File &lt;timed exec&gt;:5 File /projeto/libs/lib/python3.11/site-packages/sqlalchemy/engine/base.py:1418, in Connection.execute(self, statement, parameters, execution_options) 1417 else: -&gt; 1418 return meth( 1419 self, 1420 distilled_parameters, 1421 execution_options or NO_OPTIONS, 1422 ) File /projeto/libs/lib/python3.11/site-packages/sqlalchemy/sql/elements.py:515, in ClauseElement._execute_on_connection(self, connection, distilled_params, execution_options) 514 assert isinstance(self, Executable) --&gt; 515 return connection._execute_clauseelement( 516 self, distilled_params, execution_options 517 ) 518
else: File /projeto/libs/lib/python3.11/site-packages/sqlalchemy/engine/base.py:1640, in Connection._execute_clauseelement(self, elem, distilled_parameters, execution_options) 1632 compiled_sql, extracted_params, cache_hit = elem._compile_w_cache( 1633 dialect=dialect, 1634 compiled_cache=compiled_cache, (...) 1638 linting=self.dialect.compiler_linting | compiler.WARN_LINTING, 1639 ) -&gt; 1640 ret = self._execute_context( 1641 dialect, 1642 dialect.execution_ctx_cls._init_compiled, 1643 compiled_sql, 1644 distilled_parameters, 1645 execution_options, 1646 compiled_sql, 1647 distilled_parameters, 1648 elem, 1649 extracted_params, 1650 cache_hit=cache_hit, 1651 ) 1652 if has_events: File /projeto/libs/lib/python3.11/site-packages/sqlalchemy/engine/base.py:1813, in Connection._execute_context(self, dialect, constructor, statement, parameters, execution_options, *args, **kw) 1812 if conn is None: -&gt; 1813 conn = self._revalidate_connection() 1815 context = constructor( 1816 dialect, self, conn, execution_options, *args, **kw 1817 ) File /projeto/libs/lib/python3.11/site-packages/sqlalchemy/engine/base.py:680, in Connection._revalidate_connection(self) 679 return self._dbapi_connection --&gt; 680 raise exc.ResourceClosedError(&quot;This Connection is closed&quot;) ResourceClosedError: This Connection is closed During handling of the above exception, another exception occurred: NameError Traceback (most recent call last) File &lt;timed exec&gt;:10 File /usr/local/lib/python3.11/logging/__init__.py:1518, in Logger.error(self, msg, *args, **kwargs) 1509 &quot;&quot;&quot; 1510 Log 'msg % args' with severity 'ERROR'. 1511 (...) 
1515 logger.error(&quot;Houston, we have a %s&quot;, &quot;major problem&quot;, exc_info=1) 1516 &quot;&quot;&quot; 1517 if self.isEnabledFor(ERROR): -&gt; 1518 self._log(ERROR, msg, args, **kwargs) File /usr/local/lib/python3.11/logging/__init__.py:1634, in Logger._log(self, level, msg, args, exc_info, extra, stack_info, stacklevel) 1631 exc_info = sys.exc_info() 1632 record = self.makeRecord(self.name, level, fn, lno, msg, args, 1633 exc_info, func, extra, sinfo) -&gt; 1634 self.handle(record) File /usr/local/lib/python3.11/logging/__init__.py:1644, in Logger.handle(self, record) 1637 &quot;&quot;&quot; 1638 Call the handlers for the specified record. 1639 1640 This method is used for unpickled records received from a socket, as 1641 well as those created locally. Logger-level filtering is applied. 1642 &quot;&quot;&quot; 1643 if (not self.disabled) and self.filter(record): -&gt; 1644 self.callHandlers(record) File /usr/local/lib/python3.11/logging/__init__.py:1706, in Logger.callHandlers(self, record) 1704 found = found + 1 1705 if record.levelno &gt;= hdlr.level: -&gt; 1706 hdlr.handle(record) 1707 if not c.propagate: 1708 c = None #break out File /usr/local/lib/python3.11/logging/__init__.py:978, in Handler.handle(self, record) 976 self.acquire() 977 try: --&gt; 978 self.emit(record) 979 finally: 980 self.release() File &lt;timed exec&gt;:17, in emit(self, record) NameError: name 'log' is not defined </code></pre> <p>Any tips? Is there any special requirements for logging.error!?</p>
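The last frame of the traceback points at the custom handler: `emit()` calls `log(mensagem=log_entry)`, but no function named `log` is ever defined, so the `NameError` is raised the first time an ERROR-level record reaches the handler — which is exactly why `info` and `warning` appeared to work (the `levelno == logging.ERROR` guard skips them). A sketch of the fixed shape, with a stand-in `log` function that would have to exist (or be replaced by whatever was actually intended):

```python
import logging

captured = []

def log(mensagem):
    # Stand-in: in the original, 'log' was never defined anywhere,
    # which is exactly what raised the NameError inside emit().
    captured.append(mensagem)

class MeuManipulador(logging.Handler):
    def emit(self, record):
        if record.levelno == logging.ERROR:
            if "No such comm target registered" in str(record.msg):
                return
            # 'log' must exist at call time; emit() only runs for
            # ERROR records, which is why info/warning seemed fine.
            log(mensagem=self.format(record))

logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
logger.propagate = False
logger.addHandler(MeuManipulador())

logger.info("ignored by the custom handler")
logger.error("boom")
assert captured == ["boom"]
```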
<python><python-logging>
2024-11-18 20:00:56
2
457
FábioRB
79,201,114
46,058
flake8 warning for type hints using collections.namedtuple for pandas `itertuples()`
<p>I want to provide type hints for the IDE tab completion (and IDE type linter) for the loop variable over Pandas' DataFrame itertuples(). I could do this with <code>typing.NamedTuple</code>, but I thought that <code>collections.namedtuple</code> would be enough:</p> <pre class="lang-py prettyprint-override"><code>row: namedtuple('Pandas', ['Index', 'author_name']) for row in authors_df.itertuples(): print(f&quot;{row.Index=}, {row.author_name}&quot;) </code></pre> <p>The PyCharm linter does not see any problems, and I get tab completion... however, the GitHub CI action, which runs the <code>flake8</code> linter, shows the following error:</p> <pre><code>src/package/source.py:728:21: F821 undefined name 'Pandas' row: namedtuple('Pandas', ['Index', 'author_name']) </code></pre> <p>Is flake8 correct? How can I avoid this warning (if it is wrong), or how can I go about silencing it?</p>
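flake8 (really pyflakes underneath) resolves names that appear inside annotations, and the inline `namedtuple(...)` call trips it up. Defining the row type once, up front, gives the linter a real name to resolve, silences F821, and still powers IDE completion — a sketch with a stand-in list in place of `itertuples()` (the field names mirror what `itertuples()` produces for this frame):

```python
from collections import namedtuple

# Define the row type once at module level; annotate the loop
# variable with it instead of an inline namedtuple(...) call.
AuthorRow = namedtuple("AuthorRow", ["Index", "author_name"])

# Stand-in for authors_df.itertuples(); the annotation is only a
# hint, so the real itertuples() rows work the same way.
rows = [AuthorRow(0, "Ada"), AuthorRow(1, "Grace")]

row: AuthorRow
for row in rows:
    print(f"{row.Index=}, {row.author_name}")
```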
<python><pandas><python-typing><flake8>
2024-11-18 18:58:55
1
327,106
Jakub Narębski
79,201,012
7,271,231
IF element IN list in robot framework
<p>I am trying to see if the element is present in the list using Robot Framework. This is my test:</p> <pre><code>IF Calgary IN ${list} do something END </code></pre> <p>But I am getting this error <code>No keyword with name 'IN' found</code></p>
<python><robotframework><robotframework-browser>
2024-11-18 18:21:29
2
648
Urvish rana
79,200,982
3,840,530
How to make the ttk treeview not show extra empty columns?
<p>I am building an app with tkinter and running into some annoying issues.</p> <p>When I load my data into the ttk treeview, I see that there are always some additional columns on the right side of my treeview which are empty. I realized for a bigger data set there are many more columns which are empty (actual data is loaded correctly). I want to get rid of these extra columns or know the source of this. I have attached a code sample below to show the issue.</p> <p>I would also like to create a column selection functionality but unless the above issue is solved I don't feel like moving forward with it. Is there a simple solution to get rid of the empty columns? is column selection possible out of the box? The idea is to have a little bit excel kind of functionality but not too elaborate.</p> <blockquote> <pre><code>import pandas as pd import tkinter as tk from tkinter import ttk from tkinter.messagebox import showinfo root = tk.Tk() root.title('Treeview demo') root.geometry('620x200') data = {'Name':['Tom', 'nick', 'krish', 'jack', 'mack', 'tack', 'crack', 'lack'], 'Age':[20, 21, 19, 18, 20, 21, 19, 18]} df = pd.DataFrame(data) tree = ttk.Treeview(root, columns=df.columns, show='headings') scrollbar_y = ttk.Scrollbar(root, command=tree.yview) scrollbar_x = ttk.Scrollbar(root, command=tree.xview, orient=tk.HORIZONTAL) tree.config(xscrollcommand=scrollbar_x.set, yscrollcommand=scrollbar_y.set) scrollbar_y.pack(fill=tk.Y, side=tk.RIGHT) scrollbar_x.pack(fill=tk.X, side=tk.BOTTOM) tree.pack(fill=&quot;both&quot;, expand=True) df.dropna(axis=1, how='all', inplace=True) df.columns = [str(x)[:50] for x in df.columns] df = df.reset_index(drop=True) print(&quot;all cols : &quot;, df.columns) print(df.columns) # display the data in the treeview. 
df_list = list(df.columns) df_reset = df.to_numpy().tolist() print(&quot;No of columns &quot;, len(df_list)) tree[&quot;columns&quot;] = df.columns for i in range(len(df_list)): col = df_list[i] tree.column(i, anchor='c') tree.heading(i, text=col) j = 0 for dt in df_reset: v = [r for r in dt] print(&quot;v &quot;, v) tree.insert('', 'end', iid=j, values=v) j += 1 print(&quot;Total rows : &quot;, j) root.mainloop() </code></pre> </blockquote> <p><a href="https://i.sstatic.net/2fGqWaCM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fGqWaCM.png" alt="enter image description here" /></a></p> <p>Thanks for your suggestions in advance!</p> <p><strong>Edit:</strong> fixed the typo in instantiation of treeview to df.columns as suggested. In the real code I Instantiate an empty tree and later on populate it with content and set columns using</p> <pre><code>tree[&quot;columns&quot;] = df.columns. </code></pre>
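The phantom columns most likely come from handing Tk a pandas Index rather than a plain list: values that are not lists or tuples get stringified on the way into Tcl, and the Index's repr is then split on whitespace into bogus column ids. The effect can be seen without opening a window — a sketch (the mechanism description is my reading of tkinter's value conversion, so treat it as an assumption):

```python
import pandas as pd

df = pd.DataFrame({"Name": ["Tom"], "Age": [20]})

# What Tcl ends up splitting when it receives str(df.columns):
# repr fragments, not the two real column names.
as_tk_sees_it = str(df.columns).split()
assert len(as_tk_sees_it) != 2

# Passing a plain list sidesteps the problem:
cols = list(df.columns)
assert cols == ["Name", "Age"]
# ttk.Treeview(root, columns=cols, show="headings")
```

The same conversion applies to `tree["columns"] = df.columns` later in the script, so both spots should use `list(df.columns)`.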
<python><pandas><tkinter><treeview><ttk>
2024-11-18 18:13:09
2
302
user3840530
79,200,874
13,968,392
Column assignment with .alias() or =
<p>What is the preferred way to assign/add a new column to a polars dataframe in <code>.select()</code> or <code>.with_columns()</code>?<br /> Are there any differences between the below column assignments using <code>.alias()</code> or the <code>=</code> sign?</p> <pre><code>import polars as pl df = pl.DataFrame({&quot;A&quot;: [1, 2, 3], &quot;B&quot;: [1, 1, 7]}) df = df.with_columns(pl.col(&quot;A&quot;).sum().alias(&quot;a_sum&quot;), another_sum=pl.col(&quot;A&quot;).sum() ) </code></pre> <p>I am not sure which one to use.</p>
<python><dataframe><python-polars>
2024-11-18 17:34:50
2
2,117
mouwsy
79,200,849
10,165,118
SQLAlchemy - Tables are not created by Database class
<p>I am struggling to automatically create tables using SQLAlchemy for Postgresql database.</p> <p>In my module I defined a <code>base.py</code>:</p> <pre class="lang-py prettyprint-override"><code>from sqlalchemy import MetaData from sqlalchemy.orm import DeclarativeBase, Session class Base(DeclarativeBase): # Ensure all models use the public schema by default metadata = MetaData(schema='public') __abstract__ = True def __eq__(self, other): return True def to_dict(self) -&gt; dict: &quot;&quot;&quot;Converts the fields (columns) of a class to a dictionary without the '_sa_instance_state' key.&quot;&quot;&quot; return {field.name: getattr(self, field.name) for field in self.__table__.c} @classmethod def get_count(cls, session: Session) -&gt; int: &quot;&quot;&quot;Get the count of rows in the table.&quot;&quot;&quot; return session.query(cls).count() </code></pre> <p>Then I defined a table in my <code>models.py</code></p> <pre class="lang-py prettyprint-override"><code>from sqlalchemy import Column, DateTime, Integer, Float, ForeignKey from sqlalchemy.orm import relationship from .base import Base class SensorData(Base): __tablename__ = 'sensor_data' trip_id = Column(Integer, primary_key=True) timestamp = Column(DateTime, primary_key=True) acceleration_x = Column(Float) # Other fields... 
</code></pre> <p>and a Database object in my <code>database.py</code>:</p> <pre class="lang-py prettyprint-override"><code>from sqlalchemy import create_engine, MetaData from .models import SensorData # Models imported after Base from .base import Base class Database: def __init__(self): from sqlalchemy.engine.url import URL url_object = URL.create( drivername=&quot;postgresql+psycopg2&quot;, username=&quot;username&quot;, password=&quot;password&quot;, host=&quot;localhost&quot;, port=5432, database=&quot;database&quot; ) self.engine = create_engine(url_object, echo=True) self.metadata = MetaData(schema='public') # Explicitly bind metadata and engine to the Base class Base.metadata = self.metadata Base.metadata.bind = self.engine Base.metadata.create_all(self.engine) # Table creation </code></pre> <p>In other parts of my code I would like to create the database object, and create the tables if they don't exist yet (in the public schema)</p> <pre class="lang-py prettyprint-override"><code> db = Database() </code></pre> <p>All code executes without errors, however, no tables are being created. The database user should also have sufficient privileges. From what I've read the order of importing the classes is important, but I am not sure how to ensure everything is imported correctly with my current code setup?</p>
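The likely culprit is the `Base.metadata = self.metadata` reassignment: `SensorData` registered its table on the *original* `Base.metadata` at class-definition time, so `create_all()` then runs against a brand-new, empty `MetaData` and creates nothing. Keeping the metadata the models registered on makes `create_all()` work — a sketch, with an in-memory SQLite engine standing in for Postgres (note also that bound metadata, `metadata.bind`, was removed in SQLAlchemy 2.0):

```python
from sqlalchemy import Column, DateTime, Float, Integer, create_engine, inspect
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class SensorData(Base):
    __tablename__ = "sensor_data"
    trip_id = Column(Integer, primary_key=True)
    timestamp = Column(DateTime, primary_key=True)
    acceleration_x = Column(Float)

engine = create_engine("sqlite://")  # stand-in for the Postgres URL

# Models registered themselves on Base.metadata when their classes were
# defined; do NOT replace that metadata afterwards — just create on it.
Base.metadata.create_all(engine)

assert "sensor_data" in inspect(engine).get_table_names()
```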
<python><postgresql><sqlalchemy><orm>
2024-11-18 17:24:39
1
472
P. Leibner
79,200,847
337,149
Proper way to extract XML elements from a namespace
<p>In a Python script I make a call to a SOAP service which returns an XML reply where the elements have a namespace prefix, let's say</p> <pre><code>&lt;ns0:foo xmlns:ns0=&quot;SOME-URI&quot;&gt; &lt;ns0:bar&gt;abc&lt;/ns0:bar&gt; &lt;/ns0:foo&gt; </code></pre> <p>I can extract the content of <em>ns0:bar</em> with the method call</p> <pre><code>doc.getElementsByTagName('ns0:bar') </code></pre> <p>However, the name <em>ns0</em> is only a local variable so to speak (it's not mentioned in the schema) and might as well have been named <em>flubber</em> or <em>you_should_not_care</em>. What is the proper way to extract the content of a namespaced element without relying on it having a specific name? In my case the prefix was indeed changed in the SOAP service which resulted in a parse failure.</p>
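The namespace-aware DOM methods match on the (namespace URI, local name) pair, so whatever prefix the server happens to pick no longer matters — a minimal sketch:

```python
from xml.dom import minidom

xml = """<ns0:foo xmlns:ns0="SOME-URI">
  <ns0:bar>abc</ns0:bar>
</ns0:foo>"""

doc = minidom.parseString(xml)

# Match on (namespace URI, local name); the 'ns0' prefix is irrelevant
# and could be renamed by the service without breaking this lookup.
bars = doc.getElementsByTagNameNS("SOME-URI", "bar")
assert bars[0].firstChild.nodeValue == "abc"
```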
<python><xml><xml-parsing>
2024-11-18 17:23:55
2
11,423
August Karlstrom
79,200,661
11,457,006
Reducing cold start in a python AWS Lambda Function
<p>I have an AWS lambda function written in python that needs a way of manipulating data stored in tables. My solution is to use pandas to read the tables in as parquet files. While this works, the cold start of this lambda function goes from ~400ms to 2000ms as soon as I add a pandas layer (even without any computation). I am wondering if there's any options out there that will get my cold start time to less than 1000ms? The total computation time of this function is &lt;100ms, so it's a shame to have it so inflated.</p>
<python><pandas><amazon-web-services><aws-lambda>
2024-11-18 16:15:30
1
3,875
Jesse McMullen-Crummey
79,200,411
4,124,981
legacy elastic beanstalk application failing to run ebextension with CERTIFICATE_VERIFY_FAILED
<p>We have an old legacy app that runs on elastic beanstalk. This week the app started failing to deploy, with the following error:</p> <p><code>Failed to retrieve https://github.com/papertrail/remote_syslog2/releases/download/v0.18/remote_syslog_linux_amd64.tar.gz: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:727).</code></p> <p>This is the syslog app for papertrail that is installed on deploy, as per papertrail's instructions...</p> <p><a href="https://github.com/papertrail/remote_syslog2/blob/master/examples/remote_syslog.ebextensions.config" rel="nofollow noreferrer">https://github.com/papertrail/remote_syslog2/blob/master/examples/remote_syslog.ebextensions.config</a></p> <p>I am assuming it's some certificate issue regarding the root CA (as it's an old box that EB uses), but if I ssh into the box in question, I can download <code>remote_syslog_linux_amd64</code> fine with curl, and even via <code>python</code>, which is what <code>eb</code> runs in (I think).</p> <p>If I try to update the <code>ca-certificates</code> I get <code>package ca-certificates-2018.2.22-65.1.22.amzn1.noarch already installed and latest version</code></p> <p>So I'm a bit lost as to how to solve this...</p>
<python><amazon-web-services><amazon-elastic-beanstalk><papertrail-app>
2024-11-18 14:55:21
0
3,024
Matt Bryson
79,200,387
6,498,245
How to enable parallel execution of ABAP in PyRFC?
<p>I am migrating a project to PyRFC, using a server configuration.</p> <p>My program is registering a function that has to be called. The incoming data is splitted across 3 DTP, hence my function will be called 3 times.</p> <p>When we set the parallel execution in SAP to 1, everything is working fine and I am receiving one packet after another. But this is too slow! We need to have the executions running in parallel.</p> <p>When we set the execution to 3 in SAP, it says that all packets are sent but I'm only receiving the first one.</p> <p>Here is my implementation of PyRFC :</p> <pre class="lang-py prettyprint-override"><code> def launch_server(): &quot;&quot;&quot;Start server.&quot;&quot;&quot; # create server for ABAP system ABC client_params = { &quot;lang&quot;: config[&quot;lang&quot;], &quot;client&quot;: config[&quot;client&quot;], &quot;passwd&quot;: config[&quot;passwd&quot;], &quot;user&quot;: config[&quot;user&quot;], &quot;sysnr&quot;: config[&quot;sysnr&quot;], &quot;ashost&quot;: config[&quot;ashost&quot;], &quot;dest&quot;: config[&quot;dest&quot;], &quot;conncount&quot;: config[&quot;conncount&quot;], &quot;trace&quot;: config[&quot;trace_level&quot;] } # Define server parameters (SAP RFC server setup) server_params = { &quot;gwhost&quot;: config[&quot;gwhost&quot;], &quot;gwserv&quot;: config[&quot;gwserv&quot;], &quot;dest&quot;: config[&quot;dest&quot;], &quot;program_id&quot;: config[&quot;program_id&quot;], &quot;trace&quot;: config[&quot;trace_level&quot;] } server = Server( server_params=server_params, client_params=client_params, config={ &quot;check_date&quot;: False, &quot;check_time&quot;: False, &quot;port&quot;: 8081, &quot;debug&quot;: config[&quot;use_server_debug&quot;] != 0, &quot;server_log&quot;: config[&quot;use_server_debug&quot;] != 0 } ) # expose python function to be called by ABAP server.add_function(&quot;MY_FUNCTION&quot;, myFunctionImplementation) # start server server.serve() # Enable pyrfc logging 
logging.basicConfig(level=logging.DEBUG) server_thread = Thread(target=launch_server) server_thread.start() </code></pre> <p>In this example, when SAP is sending 3 packets at the same time I only receive one and <code>myFunctionImplementation</code> is called only once.</p> <p>In this example, <code>conncount</code> is set to 10 which seems the only things to change to allow parallelization.</p> <p>I hardly understand SAP internal processes, what am I missing? Is it possible to handle all three packets at the same time?</p> <p>I also tried to enable bgRFC without success.</p>
<python><abap><rfc><pyrfc>
2024-11-18 14:48:58
1
356
Double Sept
79,200,104
1,882,828
Node.js socket io server disconnects python socket.io client after 30 seconds
<p>A Node.js Socket.IO server disconnects my Python socket.io client after 30 seconds. On connect, the server sends this:</p> <blockquote> <p>0{&quot;sid&quot;:&quot;6afJCYzsh1Yz3bh3AAAA&quot;,&quot;upgrades&quot;:[],&quot;pingInterval&quot;:25000,&quot;pingTimeout&quot;:5000}</p> </blockquote> <p>I am not able to find any references on how a non-Node.js client should respond to keep the connection alive; the Node.js client works fine.</p> <p>I tried with a Node.js client and that kept working beyond 30 seconds; only the Python and C# clients I tried get disconnected after 30 seconds.</p> <pre><code>import socketio sio = socketio.Client()</code></pre>
<python><node.js><socket.io><timeout>
2024-11-18 13:26:04
0
331
Sathish Kumar
79,200,058
1,129,666
How to make urllib use my own network code for the actual http GET/PUT/... operations?
<p>I need to configure urllib in Anaconda Python 3.6 to use my own python code to do the actual GET, PUT, ... operations. The solution will be native python code and wrap the curl cli to do the actual operation.</p> <p>I'm working in a highly restrictive environment where I cannot install any software on my workstation. This limits me to Anaconda Python 3.6 with no ability to install pip modules. In this environment, I and my colleagues are using a set of python programs to collect information from internal APIs via an internal proxy. This proxy will soon be switched from basic authentication to NTLM authentication, which is not supported by Anaconda Python 3.6.</p> <p>In the search for a solution, we noticed that the 'curl' that comes with git-bash on our workstation does support NTLM authentication and can access our APIs. I made a small POC python module to wrap the curl cli tool to do http requests and it worked fine. Now I'm searching for a way to use the curl tool without the need to completely rewrite all our existing code. I've already experimented with deriving classes from urllib.request.BaseHandler and urllib.request.HTTPBaseHandler, but apparently they're not meant to replace the actual networking code.</p> <p>So, what would be your approach to make urllib use the curl cli command to do the actual requests? I'm aware that wrapping curl is horrible and I'm very open to alternative solutions, as long as they don't require additional software installation.</p>
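`BaseHandler` can in fact replace the networking code: the `OpenerDirector` calls each handler's `<scheme>_open` method in `handler_order`, and the first one to return a response wins, so a handler that sorts before the default `HTTPHandler` (order 500) never lets the socket code run. A sketch — status codes, response headers, error handling, and request bodies are all glossed over, and the curl flags are placeholders for the real NTLM options:

```python
import io
import subprocess
import urllib.request
from email.message import Message
from urllib.response import addinfourl

class CurlHandler(urllib.request.BaseHandler):
    # Sort before the default HTTPHandler (order 500) so our
    # http_open wins and built-in networking is never reached.
    handler_order = 100

    def _fetch(self, url):
        # This is where NTLM flags like '--ntlm -u user:pass' go.
        return subprocess.run(
            ["curl", "-sS", url], capture_output=True, check=True
        ).stdout

    def http_open(self, req):
        body = self._fetch(req.full_url)
        resp = addinfourl(io.BytesIO(body), Message(), req.full_url, 200)
        resp.msg = "OK"  # HTTPErrorProcessor reads response.msg
        return resp

    https_open = http_open

opener = urllib.request.build_opener(CurlHandler)
urllib.request.install_opener(opener)
# Existing code keeps calling urllib.request.urlopen(...) unchanged.
```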
<python><urllib>
2024-11-18 13:14:22
0
324
briconaut
79,199,863
25,413,271
Reorder Numpy array by given index list
<p>I have an array of indexes:</p> <pre class="lang-py prettyprint-override"><code>test_idxs = np.array([4, 2, 7, 5]) </code></pre> <p>I also have an array of values (which is longer):</p> <pre class="lang-py prettyprint-override"><code>test_vals = np.array([13, 19, 31, 6, 21, 45, 98, 131, 11]) </code></pre> <p>So I want to get an array with the length of the array of indexes, but with values from the array of values in the order of the array of indexes. In other words, I want to get something like this:</p> <pre class="lang-py prettyprint-override"><code>array([21, 31, 131, 45]) </code></pre> <p>I know how to do this in a loop, but I don't know how to achieve this using Numpy tools.</p>
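The loop collapses to a single fancy-indexing expression:

```python
import numpy as np

test_idxs = np.array([4, 2, 7, 5])
test_vals = np.array([13, 19, 31, 6, 21, 45, 98, 131, 11])

# Integer-array ("fancy") indexing gathers values in the order
# given by the index array -- no loop needed.
result = test_vals[test_idxs]
print(result)  # [ 21  31 131  45]
```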
<python><numpy>
2024-11-18 12:11:21
1
439
IzaeDA
79,199,776
6,542,623
LLDB Python scripting - how to add module or load symbol file at particular address?
<p>In LLDB you can do the following during a debugging session to add missing symbols at particular addresses:</p> <pre><code>target modules load --file &lt;symbol file&gt; .text 0x&lt;address&gt; </code></pre> <p>How can you do that in with the LLDB Python scripting module? I searched through the API but I couldn't find a corresponding method.</p>
<python><lldb>
2024-11-18 11:35:01
2
354
Kotosif
79,199,599
13,118,291
Tensorflow GPU crashing after first epoch
<p>I am training a neural network model and when running the code:</p> <pre><code>history = model.fit( X_train, y_train_encoded, epochs=20, batch_size=32, validation_data=(X_val, y_val_encoded), ) </code></pre> <p>The first epoch finishes successfully (in around 11 minutes), and then I get this error:</p> <pre><code>InternalError: Failed copying input tensor from /job:localhost/replica:0/task:0/device:CPU:0 to /job:localhost/replica:0/task:0/device:GPU:0 in order to run _EagerConst: Dst tensor is not initialized. </code></pre> <p>Unlike the majority of people, who get this error before the training starts, for me the training works for the first epoch, finishes it successfully (it takes around 11 minutes), and then crashes before getting to the next epoch. It seems that tensorflow is not managing the GPU memory well, because when inspected I see that 3.1GB/4GB of the GPU is being used (even after the code crashes), and it won't start training again when I run it from the Jupyter notebook; it always gives the same error instantly. I need to restart the GPU to get it back to empty, and only then will it run for the first epoch and then hit the same error.</p> <p>Did anyone encounter this problem? And is it a bug related to something that tensorflow does between epochs (like clearing memory)?</p> <p>I am using tensorflow 2.10.1 on windows with a GTX 1050ti GPU <em>(ps: I can't afford better hardware, running on CPU takes around 5 times more time)</em></p> <p>Thank you for any help.</p>
<python><tensorflow><deep-learning><jupyter-notebook>
2024-11-18 10:40:03
0
465
Elyes Lounissi
79,199,376
4,444,546
Hypothesis cannot build from type when type is int subclass
<p>So my problem is the following</p> <pre class="lang-py prettyprint-override"><code>from typing import ClassVar, TypeVar from hypothesis import strategies as st T = TypeVar(&quot;T&quot;, bound=&quot;FixedUint&quot;) class FixedUint(int): MAX_VALUE: ClassVar[&quot;FixedUint&quot;] def __init__(self: T, value: int) -&gt; None: if not isinstance(value, int): raise TypeError() if value &lt; 0 or value &gt; self.MAX_VALUE: raise OverflowError() class U256(FixedUint): pass U256.MAX_VALUE = int.__new__(U256, (2**256) - 1) class U64(FixedUint): pass U64.MAX_VALUE = int.__new__(U64, (2**64) - 1) st.from_type(U64).example() # TypeError: 'value' is an invalid keyword argument for int() </code></pre> <p>ie that hypothesis cannot build from type because the <code>__init__</code> of <code>FixedUint</code> has a <code>value</code> kwarg for validation, but <code>U64(value=1234)</code> is actually not valid.</p> <p>How to make this work?</p>
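One way out — assuming the goal is just to generate valid instances — is to register an explicit strategy, so `from_type` stops introspecting `__init__` (whose `value` keyword `int()` cannot accept). A sketch, reusing a trimmed-down version of the classes from the question:

```python
from typing import ClassVar

from hypothesis import strategies as st

class FixedUint(int):
    MAX_VALUE: ClassVar["FixedUint"]

    def __init__(self, value: int) -> None:
        if not isinstance(value, int):
            raise TypeError()
        if value < 0 or value > self.MAX_VALUE:
            raise OverflowError()

class U64(FixedUint):
    pass

# int.__new__ bypasses __init__, mirroring how MAX_VALUE is built
# in the question itself.
U64.MAX_VALUE = int.__new__(U64, (2**64) - 1)

# Teach Hypothesis how to build U64 values explicitly instead of
# letting from_type introspect __init__.
st.register_type_strategy(
    U64,
    st.integers(min_value=0, max_value=int(U64.MAX_VALUE)).map(
        lambda v: int.__new__(U64, v)
    ),
)

value = st.from_type(U64).example()
assert isinstance(value, U64)
assert 0 <= value <= U64.MAX_VALUE
```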
<python><python-hypothesis>
2024-11-18 09:34:50
1
5,394
ClementWalter