Dataset schema (column names, dtypes, and value ranges from the dataset viewer):
- QuestionId: int64, 74.8M to 79.8M
- UserId: int64, 56 to 29.4M
- QuestionTitle: string, 15 to 150 chars
- QuestionBody: string, 40 to 40.3k chars
- Tags: string, 8 to 101 chars
- CreationDate: string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
- AnswerCount: int64, 0 to 44
- UserExpertiseLevel: int64, 301 to 888k
- UserDisplayName: string, 3 to 30 chars, nullable
76,443,605
14,954,932
Labeling columns in Pandas but not renaming
<p>I have a pandas DataFrame whose column names are not meaningful, so I want to give the columns more intuitive, descriptive labels. However, I also have many calculations based on the original names. How can I make the DataFrame display the labels, and have the exported CSV use the labels as column names, while all the calculations in the code keep using the original column names?</p>
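One possible approach (a sketch with hypothetical column names and labels): keep the original names on the DataFrame for all calculations, and apply a rename mapping only at display or export time, since <code>rename</code> returns a new DataFrame and leaves the original untouched.

```python
import pandas as pd

# hypothetical mapping from original column names to readable labels
labels = {"col_a": "Revenue (EUR)", "col_b": "Units Sold"}

df = pd.DataFrame({"col_a": [100, 200], "col_b": [3, 4]})

# calculations keep using the original names
df["col_a"] = df["col_a"] * 2

# apply the labels only when exporting; df itself keeps the original names
csv_text = df.rename(columns=labels).to_csv(index=False)
```

Passing a path to <code>to_csv</code> instead of capturing the string writes the labeled file to disk, again without touching <code>df</code>.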
<python><pandas><dataframe><rename><labeling>
2023-06-09 21:02:20
1
376
Economist Learning Python
76,443,570
11,512,576
Why is a git-related error reported when running MLflow in Python?
<p>I'm new to MLflow and am learning it from the very beginning. I installed the <code>mlflow</code> package in Python and ran the following code in Spyder.</p> <pre><code>import os
from random import random, randint
from mlflow import log_metric, log_param, log_artifacts

if __name__ == &quot;__main__&quot;:
    print(&quot;Running mlflow_tracking.py&quot;)

    log_param(&quot;param1&quot;, randint(0, 100))

    log_metric(&quot;foo&quot;, random())
    log_metric(&quot;foo&quot;, random() + 1)
    log_metric(&quot;foo&quot;, random() + 2)

    if not os.path.exists(&quot;outputs&quot;):
        os.makedirs(&quot;outputs&quot;)
    with open(&quot;outputs/test.txt&quot;, &quot;w&quot;) as f:
        f.write(&quot;hello world!&quot;)

    log_artifacts(&quot;outputs&quot;)
</code></pre> <p>This code is standard and I ran it as a test. What confuses me is that a git-related error appears: no git connection is set up and no git file is used, so why is there such an error? By the way, I have <code>..\AppData\Local\GitHubDesktop\bin</code> on my PATH (is <code>github</code> the same as <code>git</code> here?).</p> <pre><code>WARNING mlflow.utils.git_utils: Failed to import Git (the Git executable is probably not on your PATH), so Git SHA is not available. Error: Failed to initialize: Bad git executable.
The git executable must be specified in one of the following ways:
    - be included in your $PATH
    - be set via $GIT_PYTHON_GIT_EXECUTABLE
    - explicitly set via git.refresh()

All git commands will error until this is rectified.

This initial warning can be silenced or aggravated in the future by setting the
$GIT_PYTHON_REFRESH environment variable. Use one of the following values:
    - quiet|q|silence|s|none|n|0: for no warning or exception
    - warn|w|warning|1: for a printed warning
    - error|e|raise|r|2: for a raised exception

Example:
    export GIT_PYTHON_REFRESH=quiet

Running mlflow_tracking.py
</code></pre> <p>I found the following link, which fixes the error by setting <code>os.environ[&quot;GIT_PYTHON_REFRESH&quot;] = &quot;quiet&quot;</code>:</p> <p><a href="https://stackoverflow.com/questions/48399498/git-executable-not-found-in-python">git executable not found in python</a></p> <p>Could anyone explain why this happens and help fix the issue? Thanks in advance.</p>
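The warning comes from GitPython, which MLflow imports to record the Git SHA of the working tree; when no <code>git</code> executable is on PATH (GitHub Desktop's <code>bin</code> directory does not provide one), GitPython complains at import time even though no repository is used. A minimal sketch of the silencing approach from the warning text itself:

```python
import os

# GitPython runs its git-executable check at import time, so this must be
# set BEFORE the first `import mlflow` (which imports git indirectly).
os.environ["GIT_PYTHON_REFRESH"] = "quiet"

# import mlflow  # safe to import now; the startup warning is silenced
```

Installing a real Git client and adding it to PATH also removes the warning, and additionally lets MLflow record the Git SHA in runs.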
<python><git><spyder><mlflow>
2023-06-09 20:54:29
0
491
Harry
76,443,470
18,758,062
Run Python function in Flutter app using Chaquopy gives error about function not defined
<p>I've got the Chaquopy Flutter example working in my Flutter app. This involves calling the Python code from the Flutter app via <a href="https://drive.google.com/file/d/1D4Hjt66f0MXkaeAQ8WLX3DEebX3BrFvM/view" rel="nofollow noreferrer">the provided <code>script.py</code></a>.</p> <pre class="lang-dart prettyprint-override"><code> final _result = await Chaquopy.executeCode(&quot;print('hello')&quot;); print(&quot;Python output: $_result&quot;); </code></pre> <p>However, when I try to run my own Python function <code>hello</code> that I have added to <code>script.py</code>, it is unable to find that function!</p> <pre class="lang-dart prettyprint-override"><code> final _result = await Chaquopy.executeCode(&quot;hello('world')&quot;); print(&quot;Python output: $_result&quot;); </code></pre> <p>will give the output:</p> <pre><code>Python output: {textOutputOrError: name 'hello' is not defined } </code></pre> <p>Here's my <code>script.py</code></p> <pre class="lang-py prettyprint-override"><code>import io,os,sys,time,threading,ctypes,inspect,traceback def _async_raise(tid, exctype): tid = ctypes.c_long(tid) if not inspect.isclass(exctype): exctype = type(exctype) res = ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, ctypes.py_object(exctype)) if res == 0: raise ValueError(&quot;invalid thread id&quot;) elif res != 1: ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, None) raise SystemError(&quot;Timeout Exception&quot;) def stop_thread(thread): _async_raise(thread.ident, SystemExit) def text_thread_run(code): try: env={} exec(code, env, env) except Exception as e: print(e) # MY NEWLY ADDED FUNCTION def hello(text): print(f&quot;hello {text}&quot;) # This is the code to run Text functions... def mainTextCode(code): global thread1 thread1 = threading.Thread(target=text_thread_run, args=(code,),daemon=True) thread1.start() timeout = 15 # change timeout settings in seconds here... 
thread1_start_time = time.time() while thread1.is_alive(): if time.time() - thread1_start_time &gt; timeout: stop_thread(thread1) raise TimeoutError time.sleep(1) </code></pre> <p>What is the correct way to call my own function that I define inside <code>script.py</code>? Even better is to have my functions defined in another file at the same directory level as <code>script.py</code>.</p>
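A likely cause, visible in <code>script.py</code> itself: <code>text_thread_run</code> calls <code>exec(code, env, env)</code> with a fresh empty dict, so the snippet cannot see module-level names like <code>hello</code>. The sketch below reproduces that behaviour in plain Python and shows one possible fix (seeding the namespace); whether you can edit Chaquopy's <code>script.py</code> this way in your setup is an assumption. The function returns instead of printing purely for illustration.

```python
def hello(text):
    return f"hello {text}"

code = "result = hello('world')"

# Reproduces the script.py behaviour: exec with an empty namespace
# cannot see module-level names, hence "name 'hello' is not defined".
empty_env = {}
try:
    exec(code, empty_env, empty_env)
    outcome = empty_env["result"]
except NameError as e:
    outcome = str(e)

# Fix: seed the exec namespace with the functions you want to expose
# (or pass globals() to expose everything defined in the module).
env = {"hello": hello}
exec(code, env, env)
fixed = env["result"]
```

The same seeding works for functions defined in a sibling module: import them in <code>script.py</code> and put them into the dict passed to <code>exec</code>.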
<python><flutter><dart><chaquopy>
2023-06-09 20:31:59
1
1,623
gameveloster
76,443,430
2,516,231
How to make test discovery work in VSCode in multi-root workspace?
<p>I have the following folder structure in Python:</p> <pre><code>β”œβ”€β”€ project1
β”‚   β”œβ”€β”€ venv
β”‚   β”œβ”€β”€ src
β”‚   β”‚   └── main.py
β”‚   └── test
β”‚       └── test.py
β”œβ”€β”€ project2
β”‚   β”œβ”€β”€ venv
β”‚   β”œβ”€β”€ src
β”‚   β”‚   └── main.py
β”‚   └── test
β”‚       └── test.py
└── shared
    β”œβ”€β”€ shared-library-1
    β”‚   └── spam.py
    └── shared-library-2
        └── very-common.py
</code></pre> <p>Project1 &amp; project2 are FastAPI services with their own virtual environments, dependencies and tests, and they also share some code in the shared folder. I've created a multi-root VS Code workspace to have all this under one umbrella, so I can conveniently run and debug both services at the same time. It works nicely for running and debugging the projects, but I'm having issues with setting up testing.</p> <p>I can create launch configurations to run all pytest tests from the terminal, but the built-in VS Code test discovery fails with an error (on test.py):</p> <p><strong>&quot;ModuleNotFoundError: No module named 'src'&quot;</strong></p> <p>I think I know why: it's the classic import error where Python can't import main.py from test.py unless the PYTHONPATH environment variable is set up correctly. This is the launch configuration to run tests for project1:</p> <pre><code>{
    &quot;name&quot;: &quot;Project1 Tests&quot;,
    &quot;type&quot;: &quot;python&quot;,
    &quot;request&quot;: &quot;launch&quot;,
    &quot;module&quot;: &quot;pytest&quot;,
    &quot;python&quot;: &quot;${workspaceRoot:project1}/venv/bin/python&quot;,
    &quot;cwd&quot;: &quot;${workspaceFolder:project1}&quot;,
    &quot;env&quot;: {&quot;PYTHONPATH&quot;: &quot;${workspaceFolder:project1}/..${pathSeparator}${env:PYTHONPATH}&quot;},
    &quot;jinja&quot;: true,
    &quot;justMyCode&quot;: true
}
</code></pre> <p>This sets up the PYTHONPATH nicely for each of the projects so imports work and I can run my tests. How can I do the same for the built-in VS Code discoverer? I'd like my tests to show up in the Testing tab so I can run and debug them one by one.</p>
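Test discovery does not read launch configurations, so the <code>env</code> block never reaches the discoverer. One possible approach (a sketch, not the only option): give each workspace folder its own <code>.vscode/settings.json</code> pointing at an env file that the Python extension loads during discovery, e.g. in <code>project1/.vscode/settings.json</code>:

```json
{
    "python.testing.unittestEnabled": false,
    "python.testing.pytestEnabled": true,
    "python.testing.pytestArgs": ["test"],
    "python.envFile": "${workspaceFolder}/.env"
}
```

with a <code>project1/.env</code> containing a <code>PYTHONPATH=...</code> entry equivalent to the one in the launch config (an absolute path is safest, since relative entries in env files may not resolve against the folder). An alternative worth trying: drop an empty <code>conftest.py</code> into each project root, which in pytest's default import mode puts that directory on <code>sys.path</code> so <code>src</code> becomes importable without any PYTHONPATH tweaking.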
<python><python-3.x><visual-studio-code><pytest><fastapi>
2023-06-09 20:22:39
1
479
toderik
76,443,283
12,955,644
NoSuchElementException with Selenium
<p>I have been working on developing a bot called Bot1 to scrape data from a specific website. For this purpose, I am utilizing the Selenium framework. However, I encountered an issue where the bot fails to locate a particular element on the website during its initial run, but successfully locates the same element in subsequent runs.</p> <p>I kindly request you to review my code and provide any insights or suggestions.</p> <pre><code>def scrape_products(self, product_name): try: # Navigate to the site self.browser.get(&quot;https://www.example.com/&quot;) # Find the search box by id, enter the product name and hit enter. search_box = self.browser.find_element(By.ID, &quot;exampleid&quot;) search_box.send_keys(product_name) search_box.send_keys(Keys.RETURN) # Scrape the products information as web elements. products = WebDriverWait(self.browser, 10).until(EC.presence_of_all_elements_located( (By.CSS_SELECTOR, &quot;[some-selector='some-value']&quot;)) ) # List of products after extracting the html representation from web elements. 
products_data = [] for product in products: product_html = product.get_attribute('outerHTML') products_data.append(product_html) return products_data except Exception as e: return &quot;Not Found&quot; def run(self, product_name): product_list = self.scrape_products(product_name) if product_list: self.database.create_table(product_name) for product in product_list: self.database.insert_product(product_name, product) b1 = Bot1() for prod in ['item1', 'item2']: b1.run(prod) </code></pre> <p>When it first runs with item1 it can't locate this element -&gt; <code>search_box = self.browser.find_element(By.ID, &quot;exampleid&quot;)</code> and returns this error <code>selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {&quot;method&quot;:&quot;css selector&quot;,&quot;selector&quot;:&quot;[id=&quot;exampleid&quot;]&quot;}</code> but the subsequent iterations work correctly, meaning from item2 onward (if adding more items into the list) works correctly. 
It can find the element, scrape, and do other operations.</p> <p>Could you please tell me what I did wrong?</p> <p>I have tried different methods, such as:</p> <pre><code>search_box = WebDriverWait(self.browser, 10).until(
    EC.presence_of_element_located((By.ID, &quot;exampleid&quot;))
)

search_box = WebDriverWait(self.browser, 10).until(
    EC.visibility_of_element_located((By.ID, &quot;exampleid&quot;))
)

search_box = WebDriverWait(self.browser, 10).until(
    EC.element_to_be_clickable((By.ID, &quot;exampleid&quot;))
)
</code></pre> <p>Find the search box using XPath:</p> <pre><code>search_box = driver.find_element_by_xpath(&quot;//input[@id='exampleid']&quot;)
</code></pre> <p>Find the search box using a CSS selector:</p> <pre><code>search_box = driver.find_element(By.CSS_SELECTOR, &quot;input#exampleid&quot;)
</code></pre> <p>Updated: the HTML of the search box from the site:</p> <pre><code>&lt;input type=&quot;text&quot; id=&quot;exampleid&quot; value=&quot;&quot; name=&quot;field-keywords&quot; autocomplete=&quot;off&quot; placeholder=&quot;Search Example&quot; class=&quot;nav-input nav-progressive-attribute&quot; dir=&quot;auto&quot; tabindex=&quot;0&quot; aria-label=&quot;Search Example&quot; spellcheck=&quot;false&quot;&gt;
</code></pre>
<python><python-3.x><selenium-webdriver><web-scraping><browser-automation>
2023-06-09 19:50:53
1
333
Imtiaz Ahmed
76,443,099
2,398,040
How do I do a train and test split in a polars dataframe
<p>I am trying to find a simple way of randomly splitting a polars DataFrame into train and test sets. This is how I am doing it right now:</p> <pre><code>train, test = (
    df.with_columns(pl.lit(np.random.rand(df.height) &gt; 0.8).alias('split'))
      .partition_by('split')
)
</code></pre> <p>However, this leaves an extra split column hanging in my dataframes that I need to drop afterwards.</p>
<python><dataframe><python-polars>
2023-06-09 19:15:15
3
1,057
ste_kwr
76,442,975
6,279,687
Python upload a file to AWS S3 with existing presign URL from another account
<p>I have a presigned URL from a client, and I want to store the file in my AWS S3 bucket.</p> <pre><code>from requests import get

resp = get(url=[Presign URL], timeout=9000)
s3_client.upload_fileobj(resp.content, &quot;my-bucket&quot;, &quot;xyz/abc/hmm.avro&quot;)
</code></pre> <p>With this code, I am getting the error below:</p> <pre><code>{ValueError}Fileobj must implement read
</code></pre> <p>Is there a way to directly move the file?</p>
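The error says it all: <code>upload_fileobj</code> needs a file-like object with a <code>.read()</code> method, while <code>resp.content</code> is raw <code>bytes</code>. Wrapping the bytes in <code>io.BytesIO</code> satisfies the interface. A minimal sketch (the helper name is hypothetical):

```python
import io


def upload_bytes(s3_client, content: bytes, bucket: str, key: str):
    # upload_fileobj requires an object implementing .read(), which is
    # exactly what the ValueError complains about; BytesIO provides it.
    s3_client.upload_fileobj(io.BytesIO(content), bucket, key)


# Usage with the presigned download (sketch):
# resp = get(url=presigned_url, timeout=9000)
# upload_bytes(s3_client, resp.content, "my-bucket", "xyz/abc/hmm.avro")
```

For large files, streaming the response (<code>get(..., stream=True)</code> and passing <code>resp.raw</code>) avoids holding the whole object in memory.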
<python><amazon-web-services><amazon-s3><file-transfer>
2023-06-09 18:55:20
1
1,475
Abhishek Patil
76,442,880
2,650,325
Join rows in pandas DataFrame
<p>I have a DataFrame defined as:</p> <pre><code>test_df = pd.DataFrame(
    {&quot;col_one&quot;: [1, 2, 3, 4, 5],
     &quot;col_two&quot;: [&quot;one&quot;, &quot;two&quot;, &quot;three&quot;, &quot;four&quot;, &quot;five&quot;]}
).astype(str)
</code></pre> <p>I am using this code to make all the rows become one single row with the values concatenated:</p> <pre><code>for c in test_df.columns:
    test_df[c] = &quot;,&quot;.join(test_df[c].values)

print(test_df)
</code></pre> <p>The result of this print statement is this:</p> <p><a href="https://i.sstatic.net/mfqgX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mfqgX.png" alt="enter image description here" /></a></p> <p>But the result I am looking for is this:</p> <p><a href="https://i.sstatic.net/PBPlu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PBPlu.png" alt="enter image description here" /></a></p> <p>How can I achieve it?</p>
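The loop broadcasts each joined string back into all five rows, which is why every row looks identical. One way to collapse to a single row instead: aggregate each column down to one joined string, then transpose the resulting Series back into a one-row DataFrame.

```python
import pandas as pd

test_df = pd.DataFrame(
    {"col_one": [1, 2, 3, 4, 5],
     "col_two": ["one", "two", "three", "four", "five"]}
).astype(str)

# agg applies ",".join per column, yielding one value per column;
# to_frame().T turns that Series into a single-row DataFrame.
joined = test_df.agg(",".join).to_frame().T
```

<code>joined</code> now has one row with <code>col_one</code> equal to <code>"1,2,3,4,5"</code> and <code>col_two</code> equal to <code>"one,two,three,four,five"</code>.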
<python><pandas><dataframe>
2023-06-09 18:35:00
1
2,417
Manuel Perez Heredia
76,442,843
22,009,322
How to change colors in plot lines depending on condition
<p>How can I change the line colors between the points depending on a condition (the pass count, which is the &quot;Count&quot; column in the &quot;upd_passes&quot; dataframe in the example)? The condition is: the bigger the count, the more red the color:</p> <p><a href="https://i.sstatic.net/ApSUq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ApSUq.png" alt="enter image description here" /></a></p> <p>It might be that if I used &quot;enumerate&quot; in this loop it would be much easier (however, I couldn't figure out how to use &quot;enumerate&quot; here):</p> <pre><code># Lines between points:
for x0, y0, x1, y1 in zip(x, y, xr, yr):
    plt.plot((x0, x1), (y0, y1), '-ro', color='tab:blue', linewidth=3, alpha=0.2, zorder=1)
</code></pre> <p>Also, I have a strong feeling that I overdo it here with the dataframe manipulations; please forgive me, I am new to Python :) Example of the full code:</p> <pre><code>import matplotlib.colors
import matplotlib.pyplot as plt
import pandas as pd
import matplotlib.patheffects as PathEffects
import matplotlib.colors as cm

pd.set_option('display.width', 400)
pd.set_option('display.max_columns', 10)

players = pd.DataFrame([[1, 'Player 1'], [2, 'Player 2'], [3, 'Player 3'], [4, 'Player 4'],
                        [5, 'Player 5'], [6, 'Player 6'], [7, 'Player 7']],
                       columns=['Player_id', 'Name'])
avg_positions = pd.DataFrame([[1, 15, 34], [2, 35, 48], [3, 58, 27], [4, 62, 55],
                              [5, 52, 40], [6, 69, 31], [7, 27, 9]],
                             columns=['Player_id', 'avg_pos_x', 'avg_pos_y'])
avg_positions2 = pd.DataFrame(avg_positions, columns=['Player_id', 'avg_pos_x', 'avg_pos_y'])
avg_positions2.rename(columns={'Player_id': 'Receiver_id'}, inplace=True)
passes = pd.DataFrame([[1, 2], [1, 2], [1, 3], [2, 1], [2, 5], [3, 6], [6, 1], [4, 2],
                       [4, 2], [5, 7], [6, 2], [7, 3], [7, 3], [7, 3], [7, 1]],
                      columns=['Player_id', 'Receiver_id'])
receivers = pd.DataFrame(players, columns=['Player_id', 'Name'])
receivers.rename(columns={'Player_id': 'Receiver_id'}, inplace=True)

plt.style.use('_mpl-gallery')

# Making a final dataframe with all the necessary data combined in one dataframe:
upd_passes = pd.merge(players, passes, on='Player_id')
upd_passes = upd_passes.groupby(['Player_id', 'Name', 'Receiver_id']).size().reset_index()
upd_passes.columns = [*upd_passes.columns[:-1], 'Count']
upd_passes = pd.merge(upd_passes, avg_positions, on='Player_id')
upd_passes.rename(columns={'Name': 'Player_name', 'avg_pos_x': 'Player_x', 'avg_pos_y': 'Player_y'}, inplace=True)
upd_passes = pd.merge(upd_passes, avg_positions2, on='Receiver_id')
upd_passes.rename(columns={'avg_pos_x': 'Receiver_x', 'avg_pos_y': 'Receiver_y'}, inplace=True)
upd_passes = pd.merge(upd_passes, receivers, on='Receiver_id')
upd_passes.rename(columns={'Name': 'Receiver_name'}, inplace=True)
upd_passes = upd_passes[['Player_id', 'Player_name', 'Player_x', 'Player_y', 'Receiver_id',
                         'Receiver_name', 'Receiver_x', 'Receiver_y', 'Count']]
upd_passes.sort_values(by=['Player_id'], inplace=True)
upd_passes.reset_index(drop=True, inplace=True)
print(upd_passes)

passes_count = passes.groupby('Player_id')['Player_id'].count()
# print(passes_count)

# Player (dots) coordinates:
xa = avg_positions.avg_pos_y
ya = avg_positions.avg_pos_x

# Player's coordinates:
x = upd_passes.Player_y
y = upd_passes.Player_x

# Receiver's coordinates:
xr = upd_passes.Receiver_y
yr = upd_passes.Receiver_x

# Point sizes and colors:
sizes = passes_count * 80
colors = passes_count

# Define player names for text annotations:
names = players.Name

# plot
fig, ax = plt.subplots()
ax.scatter(xa, ya, s=sizes, c=colors, vmin=0, vmax=5, cmap=plt.get_cmap('viridis'), zorder=2)

# Text above points:
for i, txt in enumerate(names):
    ax.annotate(txt, xy=(xa[i], ya[i]), xytext=(xa[i]-3, ya[i]+2), fontsize=9, color='black',
                path_effects=[PathEffects.withStroke(linewidth=3, foreground=&quot;w&quot;)])

# line_colors = upd_passes['Count']
# count = len(line_colors.index)
# cmap = plt.get_cmap('plasma')
# colors = cm.Colormap(line_colors, 256)

# Lines between points:
for x0, y0, x1, y1 in zip(x, y, xr, yr):
    plt.plot((x0, x1), (y0, y1), '-ro', color='tab:blue', linewidth=3, alpha=0.2, zorder=1)

fig.set_size_inches(5, 5)
# ax.grid(True)
plt.show()
</code></pre>
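A possible approach, shown here on toy data standing in for <code>upd_passes</code>: normalize the counts into the [0, 1] range with <code>Normalize</code> and feed each normalized count through a red-tinted colormap, one segment per loop iteration. Note that a format string like <code>'-ro'</code> and an explicit <code>color=</code> kwarg conflict, so the sketch uses <code>'-o'</code> and lets <code>color=</code> decide.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import matplotlib.pyplot as plt
from matplotlib import colormaps
from matplotlib.colors import Normalize

# Toy stand-ins: one line segment per pass pair, one count per segment
x0s, y0s = [0, 1, 2], [0, 1, 0]
x1s, y1s = [1, 2, 3], [1, 0, 1]
counts = [1, 2, 5]  # stands in for upd_passes['Count']

# Map counts to colors: the bigger the count, the deeper the red
norm = Normalize(vmin=min(counts), vmax=max(counts))
cmap = colormaps["Reds"]

fig, ax = plt.subplots()
for x0, y0, x1, y1, c in zip(x0s, y0s, x1s, y1s, counts):
    ax.plot((x0, x1), (y0, y1), "-o",
            color=cmap(norm(c)), linewidth=3, zorder=1)
```

In the full example, the loop would be <code>zip(x, y, xr, yr, upd_passes['Count'])</code>, which also answers the <code>enumerate</code> question: extending the <code>zip</code> is simpler than indexing.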
<python><pandas><matplotlib>
2023-06-09 18:27:49
1
333
muted_buddy
76,442,733
3,299,050
Passing output parameters to env declaration in ArgoWF
<p>I'm developing an Argo Workflow that requires passing params from step to step. In most cases, I am outputting the params into a file, which are then used in the input of the subsequent steps. These are ok and working as expected. The trouble I am having is with a new initial step that I'd like to implement. In this initial step, I am determining some parameters based on the original input, saving them to a file, and finally using (read: trying to use) the outputs in subsequent env declarations. Simplified version below:</p> <pre><code>templates: ### STEPS ORDER - name: cua-pipeline-workflow steps: ### STEP 1 - - name: setup-env template: setup-env ### STEP 2 - - name: parser template: parser-ingest ### STEP 1 - name: setup-env script: image: python:3.8-slim command: [python] source: | import json from pathlib import Path output_dir = Path('/tmp') Path(output_dir).mkdir(parents=True, exist_ok=True) with open(output_dir.joinpath('env.txt'), 'w') as f: f.write(json.dumps({'client': 'abc', 'pipeline': '123'})) outputs: parameters: - name: envs valueFrom: path: '/tmp/env.txt' #### STEP 2 - name: parser-ingest container: command: [&quot;echo&quot;, &quot;$CLIENT_NAME&quot;] image: python:3.8-slim env: - name: CLIENT_NAME value: &quot;{{ steps.setup-env.outputs.parameters.envs.client }}&quot; - name: PIPELINE value: &quot;{{ steps.setup-env.outputs.parameters.envs.pipeline }}&quot; </code></pre> <p>In each attempt, the print statement looks like: &quot;{{ steps.setup-env.outputs.parameters.envs.pipeline }}&quot;.</p> <p>I have tried passing the output as an input to the next step then referencing the input in the env declaration. Single quotes, double quotes, no quotes around the steps output reference.</p> <p>Is this functionality possible?</p>
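Two things appear to be going on: the plain <code>{{ }}</code> templating cannot index into a JSON output with a <code>.client</code> suffix, and <code>{{steps.*}}</code> references only resolve inside the template that declares the steps, not inside <code>parser-ingest</code> itself. A possible restructuring (sketch, trimmed to one parameter): write one file per value, and pass it to the second template through <code>arguments</code>/<code>inputs</code>:

```yaml
- name: cua-pipeline-workflow
  steps:
    - - name: setup-env
        template: setup-env
    - - name: parser
        template: parser-ingest
        arguments:
          parameters:
            - name: client
              value: "{{steps.setup-env.outputs.parameters.client}}"

- name: setup-env
  script:
    image: python:3.8-slim
    command: [python]
    source: |
      from pathlib import Path
      out = Path('/tmp')
      out.mkdir(parents=True, exist_ok=True)
      out.joinpath('client.txt').write_text('abc')
  outputs:
    parameters:
      - name: client
        valueFrom:
          path: /tmp/client.txt

- name: parser-ingest
  inputs:
    parameters:
      - name: client
  container:
    image: python:3.8-slim
    command: ["sh", "-c", "echo $CLIENT_NAME"]
    env:
      - name: CLIENT_NAME
        value: "{{inputs.parameters.client}}"
```

If you prefer to keep the single JSON output, Argo's expression syntax (<code>{{= }}</code> with <code>fromJSON</code>) can extract fields from it, but the per-file approach above works on any Argo version that supports output parameters.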
<python><argo-workflows><argo>
2023-06-09 18:07:38
1
385
diaferiaj
76,442,731
16,498,000
Why and how does list comprehension work?
<p>I'm still learning Python and I was trying to do this:</p> <pre><code>#values is a list of string digits values = int(i) for i in values </code></pre> <p>But it threw a Syntax error so after some googling I tried this:</p> <pre><code>values = [int(i) for i in values] </code></pre> <p>But I'm having a hard time understanding why one works and the other doesn't. Can you help me understand?</p>
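The short answer: a comprehension is only an expression when it is wrapped in brackets (a list comprehension) or parentheses (a generator expression); the bare <code>int(i) for i in values</code> is neither, hence the SyntaxError. A small sketch of both valid forms:

```python
values = ["1", "2", "3"]

# Brackets make it a list comprehension: an expression that eagerly
# builds and returns a new list.
nums = [int(i) for i in values]

# Parentheses make it a generator expression: also valid syntax, but
# it produces values lazily, one at a time, when iterated.
gen = (int(i) for i in values)
```

Both iterate <code>values</code>, apply <code>int(i)</code> to each item, and collect the results; only the bracketed form gives you a list immediately.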
<python><python-3.x>
2023-06-09 18:07:32
2
572
MiguelP
76,442,522
20,920,790
How to add Faker data type to SDV model (update metadata)
<p>I'm trying to add a Faker data type to an SDV model.</p> <p>Imports:</p> <pre><code>from sdv.metadata import SingleTableMetadata
from sdv.single_table import GaussianCopulaSynthesizer
import faker
</code></pre> <p>Code:</p> <pre><code>fake = faker.Faker()

metadata = SingleTableMetadata()
metadata.detect_from_dataframe(data=df)
metadata.update_column(
    column_name='DR_Prod',
    sdtype='fake.company'
)
</code></pre> <p>I also tried to add 'faker.providers.company', but every time I get an error (kernel crash).</p> <p>After metadata.detect I run this code:</p> <pre><code>synthesizer = GaussianCopulaSynthesizer(metadata)
synthesizer.fit(df)
synthetic_data = synthesizer.sample(num_rows=len(df))
</code></pre> <p>I can run the code without metadata.update, but then I don't get the result that I need.</p> <p>sdv.<strong>version</strong> '1.2.0'</p> <p>Thanks.</p>
<python><faker><sdv>
2023-06-09 17:32:22
2
402
John Doe
76,442,516
13,039,962
how to change the color of the labels based on the colors of my plotted values?
<p>I have this df:</p> <pre><code> DATE TMAX QUANTILE_TMAX_90 QUANTILE_TMAX_95 QUANTILE_TMAX_99 50415 2019-01-01 19.2 19.5 20.2 21.8 50416 2019-01-02 17.4 19.5 20.2 21.8 50417 2019-01-03 17.2 19.5 20.2 21.8 50418 2019-01-04 16.4 19.5 20.2 21.8 50419 2019-01-05 17.4 19.5 20.2 21.8 ... ... ... ... ... 50500 2019-03-27 16.2 19.0 19.8 21.4 50501 2019-03-28 13.6 19.0 19.8 21.4 50502 2019-03-29 15.2 19.0 19.8 21.4 50503 2019-03-30 16.2 19.0 19.8 21.4 50504 2019-03-31 18.8 19.0 19.8 21.4 [90 rows x 5 columns] </code></pre> <p>I'm plotting colored dots and lines in the time series graphic of the values that meet a specific condition. yellow color if TMAX &gt; QUANTILE_TMAX_90, orange color if TMAX &gt; QUANTILE_TMAX_95, red color if TMAX &gt; QUANTILE_TMAX_99.</p> <p>This is the code:</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt import matplotlib.dates as mdates from matplotlib.dates import DateFormatter colors = [] axis_colors = [] for tmax, p90, p95, p99 in zip(datos['TMAX'], datos['QUANTILE_TMAX_90'], datos['QUANTILE_TMAX_95'], datos['QUANTILE_TMAX_99']): if tmax &gt; p99: colors.append('red') elif tmax &gt; p95: colors.append('orange') elif tmax &gt; p90: colors.append('yellow') else: colors.append('black') fig = plt.figure('GrΓ‘fica', figsize=(20,15), dpi=150) ax = fig.add_axes([0.2, 0.25, 0.60, 0.60]) for date, tmax, color in zip(datos['DATE'], datos['TMAX'], colors): if color != 'black': ax.plot(date, tmax, color=color, marker='o') for i in range(len(datos)-1): ax.plot(datos['DATE'].iloc[i:i+2], datos['TMAX'].iloc[i:i+2], color=colors[i], linewidth=1) </code></pre> <p>And with this code:</p> <pre><code> [t.set_color(color) for t, color in zip(ax.xaxis.get_ticklabels(), axis_colors) if color != 'black'] plt.xticks(rotation=90) </code></pre> <p>I want to change the color of the ticks based on the colors of my plotted values but is not working.</p> <p>How can i do this?</p> <p>Thanks in advance.</p>
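Two likely culprits in the posted code: <code>axis_colors</code> is never populated, and tick labels only line up 1:1 with data points if the ticks are set explicitly (by default matplotlib chooses its own tick locations, so zipping point colors against them colors the wrong labels). A minimal sketch on toy data standing in for the TMAX series:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import matplotlib.pyplot as plt

dates = ["2019-01-01", "2019-01-02", "2019-01-03"]
tmax = [19.2, 17.4, 22.0]
colors = ["black", "black", "red"]  # per-point colors, as in the question

fig, ax = plt.subplots()
ax.plot(range(len(dates)), tmax)

# Force one tick per data point so ticks align 1:1 with the color list
ax.set_xticks(range(len(dates)))
ax.set_xticklabels(dates, rotation=90)

# Now each tick label can safely take the color of its data point
for tick, color in zip(ax.xaxis.get_ticklabels(), colors):
    tick.set_color(color)
```

In the full example, <code>colors</code> is exactly the list already built from the quantile comparisons, so reusing it for the ticks (instead of the empty <code>axis_colors</code>) should be enough once the ticks are pinned to the dates.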
<python><pandas><matplotlib>
2023-06-09 17:31:06
1
523
Javier
76,442,422
9,386,819
When is the pandas ~ operator useful, and why is it necessary when != already exists?
<p>I often see the <code>~</code> operator used in pandas statements by people offering help on stackoverflow, but I can't think of any time where I've needed to use it because <code>!=</code> just naturally comes to mind for me first. So I'm wondering why it exists.</p> <p>For example, this statement:</p> <p><code>(df['col_A'] &gt; 100) &amp; ~(df['col_A'] == 120)</code></p> <p>... could just as easily be written as:</p> <p><code>(df['col_A'] &gt; 100) &amp; (df['col_A'] != 120)</code></p> <p>I imagine there must be situations where one must use <code>~</code>, but what are they?</p>
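The key case: <code>!=</code> can only invert a single equality comparison, but many pandas methods return a boolean mask directly (<code>isin</code>, <code>str.contains</code>, <code>isna</code>, <code>duplicated</code>, ...), and those have no <code>!=</code> spelling; <code>~</code> is the general way to negate any boolean Series. A small sketch:

```python
import pandas as pd

df = pd.DataFrame({"col_A": ["foo", "bar", "baz"]})

# No `!=`-style inverse exists for these mask-producing methods;
# `~` flips the whole boolean Series element-wise.
mask = ~df["col_A"].isin(["bar"])
contains = ~df["col_A"].str.contains("ba")
```

So for plain comparisons the two forms in the question really are interchangeable, and <code>~</code> earns its keep whenever the mask comes from a method rather than an operator.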
<python><pandas><conditional-statements>
2023-06-09 17:12:40
1
414
NaiveBae
76,442,397
9,100,431
Group By pandas dataframe and keep earliest date from one column and the latest date from another column in Python
<p>I have the following data:</p> <pre><code>id start_date end_date 1 2023-01-01 2023-02-02 1 2023-02-05 2023-02-15 1 2023-02-16 2023-03-14 </code></pre> <p>How can I group by id and keep the earliest date from <code>start_date</code> and the latest from <code>end_date</code> . E.g.</p> <pre><code>id start_date end_date 1 2023-01-01 2023-03-14 </code></pre>
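One way to do this is named aggregation: group on <code>id</code> and apply a different reduction to each column in a single <code>agg</code> call (with ISO-formatted date strings, lexicographic min/max coincide with chronological order; parse with <code>to_datetime</code> first if the format differs).

```python
import pandas as pd

df = pd.DataFrame({
    "id": [1, 1, 1],
    "start_date": ["2023-01-01", "2023-02-05", "2023-02-16"],
    "end_date": ["2023-02-02", "2023-02-15", "2023-03-14"],
})

# Named aggregation: earliest start and latest end per id
out = df.groupby("id", as_index=False).agg(
    start_date=("start_date", "min"),
    end_date=("end_date", "max"),
)
```

<code>out</code> is the single row <code>1, 2023-01-01, 2023-03-14</code> from the question.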
<python><pandas><dataframe><group-by>
2023-06-09 17:09:12
2
660
Diego
76,442,345
1,020,139
How to cache Poetry virtual environments in GitLab CI/CD?
<p>I want to cache Poetry virtual environments between builds of my Python application. I have configured the pipeline so that the virtual environment is recreated every time <code>pyproject.toml</code> changes (<code>poetry.lock</code> is not comitted):</p> <pre><code>default: cache: - key: files: - lego/pyproject.toml paths: - lego/.venv - artifacts/build/layer/build/.venv - paths: - artifacts/pypoetry </code></pre> <p>The virtual environment is created as follows:</p> <pre><code>poetry config virtualenvs.in-project true poetry install </code></pre> <p>This works locally, but it fails in GitLab CI/CD on the second pipeline run:</p> <pre><code>$ LOG_LEVEL=ERROR poetry run pytest tests -n auto -v --cov=./src --junitxml=lego-report.xml ImportError while loading conftest '/builds/5565557/lddpro-bff/lego/tests/conftest.py'. tests/conftest.py:22: in &lt;module&gt; from lego.common.aws import S3, EventBridge, SSM, Secrets E ModuleNotFoundError: No module named 'lego' </code></pre> <p>The reason is that <code>pytest</code> cannot locate the root package (<code>lego</code>). However, I have verified that the root package is located in the virtual environment (<code>.venv</code>):</p> <pre><code>$ cat .venv/lib/python*/site-packages/lego.pth /builds/5565557/lddpro-bff/lego/src </code></pre> <p>Here <code>lego.pth</code> includes the current job ID in the path.</p> <p>How can I properly cache Poetry virtual environments between CI/CD pipeline runs?</p> <p>Job configuration:</p> <pre><code>lego: extends: .unittest variables: REPORT_NAME: lego-report.xml script: - cd lego - poetry config virtualenvs.in-project true &amp;&amp; poetry install - poetry env info - env LOG_LEVEL=ERROR poetry run pytest tests -n auto -v --cov=./src --junitxml=${REPORT_NAME} - mv -f ${REPORT_NAME} ${CI_PROJECT_DIR}/ || echo &quot;&quot; - cd ${CI_PROJECT_DIR} - mkdir -p artifacts - cd tools - . 
./build_deploy.sh --command=build --debug artifacts: paths: - artifacts/lego-function.zip - artifacts/lego-layer.zip reports: junit: ${REPORT_NAME} expire_in: 7 days rules: - if: $CI_COMMIT_BRANCH &amp;&amp; $CI_PIPELINE_SOURCE != &quot;merge_request_event&quot; - if: $CI_PIPELINE_SOURCE == &quot;merge_request_event&quot; </code></pre>
<python><gitlab><continuous-integration><python-poetry><continuous-delivery>
2023-06-09 17:00:54
0
14,560
Shuzheng
76,442,258
18,065,128
Brownie Github Actions workflow
<p>I am trying to deploy smart contracts on Ethereum Sepolia testnet using python brownie <code>eth-brownie</code>==1.19.3. My scripts work locally, but I want to deploy from Github Actions directly for a CICD pipeline for smart contracts.</p> <p>Does anyone have a <code>workflow.yaml</code> for brownie?</p>
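A hypothetical starting point, not a tested pipeline: install the pinned Brownie version on a runner, then invoke the same deploy script that works locally. The script path, network id, and secret names below are assumptions that must match your own project and Brownie network configuration.

```yaml
# .github/workflows/deploy.yaml (sketch)
name: deploy-contracts
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"
      - run: pip install eth-brownie==1.19.3
      # Assumes scripts/deploy.py loads the deployer account from
      # PRIVATE_KEY and that a `sepolia` network is configured.
      - run: brownie run scripts/deploy.py --network sepolia
        env:
          WEB3_INFURA_PROJECT_ID: ${{ secrets.WEB3_INFURA_PROJECT_ID }}
          PRIVATE_KEY: ${{ secrets.PRIVATE_KEY }}
```

Storing the RPC credentials and the deployer key as repository secrets keeps them out of the workflow file; everything else mirrors what <code>brownie run</code> does locally.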
<python><chainlink><brownie><evm>
2023-06-09 16:44:06
1
614
Matt - Block-Farms.io
76,442,204
10,985,257
Configure pyproject.toml for unittest and coverage
<p>I used <code>pytest</code> in the past but wanted to use <code>unittest</code> this time. I use the integrated test framework in VS Code, and testing itself runs just fine. Further, I wanted to use <code>coverage</code>. My structure is as usual:</p> <pre><code>┣ πŸ“¦mypackage
┃ ┣ πŸ“¦subpackage
┃ ┃ β”— πŸ“œmymodule.py
┃ ┣ πŸ“œ__init__.py
┃ β”— πŸ“œmain.py
┣ πŸ“¦tests
┃ β”— πŸ“œtest_mymodule.py
β”— πŸ“œpyproject.toml
</code></pre> <p>I've included the following in my <code>pyproject.toml</code>:</p> <pre class="lang-ini prettyprint-override"><code>[tool.coverage.report]
fail_under = 80

[tool.coverage.run]
branch = true
include = [&quot;mypackage/*&quot;]
command_line = &quot;-m unittest discover -s tests/&quot;
</code></pre> <p>If I now run <code>coverage run</code> it outputs:</p> <pre class="lang-bash prettyprint-override"><code>...
----------------------------------------------------------------------
Ran 3 tests in 0.000s

OK
</code></pre> <p>And if I run <code>coverage report</code>:</p> <pre class="lang-bash prettyprint-override"><code>Name                               Stmts   Miss Branch BrPart  Cover
--------------------------------------------------------------------
mypackage/__init__.py                  9      0      2      0   100%
mypackage/subpackage/mymodule.py      15      2      2      0    88%
--------------------------------------------------------------------
TOTAL                                 24      2      4      0    93%
</code></pre> <p>But why is my <code>main.py</code> not included? It also contains code which is not tested yet, and I expected coverage to notice that. What am I doing wrong?</p>
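The likely explanation: with <code>include</code>, coverage.py only reports files that were actually executed during the run, and nothing in the test suite imports <code>main.py</code>. The <code>source</code> option additionally scans the listed directories for Python files that were never imported and reports them at 0%, which is the behaviour wanted here. A sketch of the adjusted config:

```toml
[tool.coverage.report]
fail_under = 80

[tool.coverage.run]
branch = true
# Unlike `include`, `source` also finds files that were never imported
# and reports them with 0% coverage (e.g. main.py).
source = ["mypackage"]
command_line = "-m unittest discover -s tests/"
```

With this change <code>main.py</code> should appear in the report and pull the total down accordingly.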
<python><python-unittest><coverage.py>
2023-06-09 16:35:24
1
1,066
MaKaNu
76,442,097
1,473,517
How to assign a color to a specific value on a heatmap
<p>I am making a heatmap in seaborn. I am using <code>'viridis'</code>, but I modify it slightly so some of the values get particular colors. In my MWE, <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.colors.Colormap.html#matplotlib.colors.Colormap.set_over" rel="nofollow noreferrer"><code>.set_over</code></a> is used to set the values above 90 to <code>'black'</code>, and <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.colors.Colormap.html#matplotlib.colors.Colormap.set_under" rel="nofollow noreferrer"><code>.set_under</code></a> is used to set the values below 10 to <code>'white'</code>. I also mask out part of the heatmap. This all works fine.</p> <p>How can I also map a middle-range value, 20, to <code>'orange'</code>, without affecting the current colorbar appearance? As you can see, <code>.set_over</code> and <code>.set_under</code> do not change the colorbar appearance.</p> <pre><code>import matplotlib
import seaborn as sns
import numpy as np

np.random.seed(7)
A = np.random.randint(0, 100, size=(20, 20))

mask_array = np.zeros((20, 20), dtype=bool)
mask_array[:, :5] = True

cmap = matplotlib.colormaps[&quot;viridis&quot;]
# Set the under color to white
cmap.set_under(&quot;white&quot;)
# Set the over color to black
cmap.set_over(&quot;black&quot;)

# Set the background color
g = sns.heatmap(A, vmin=10, vmax=90, cmap=cmap, mask=mask_array)

# Set color of masked region
g.set_facecolor('lightgrey')
</code></pre> <p><a href="https://i.sstatic.net/abyl0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/abyl0.png" alt="enter image description here" /></a></p> <p>I have seen <a href="https://stackoverflow.com/questions/33918822/map-value-to-specific-color-in-seaborn-heatmap">Map value to specific color in seaborn heatmap</a>, but I am not sure how I can use it to solve my problem.</p>
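One possible trick (a sketch, not the only way): draw the heatmap twice on the same Axes. The second pass masks everything except the cells equal to 20 and paints only those with a one-color colormap, with <code>cbar=False</code> so the original colorbar is left untouched.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import numpy as np
import seaborn as sns
from matplotlib.colors import ListedColormap

np.random.seed(7)
A = np.random.randint(0, 100, size=(20, 20))

# First pass: the normal heatmap, as in the question
g = sns.heatmap(A, vmin=10, vmax=90, cmap="viridis")

# Second pass on the same Axes: mask everything EXCEPT cells equal
# to 20, paint those orange; cbar=False keeps the colorbar as-is.
sns.heatmap(A, mask=(A != 20), cmap=ListedColormap(["orange"]),
            cbar=False, ax=g)
```

The same overlay composes with the <code>set_over</code>/<code>set_under</code> and <code>mask_array</code> logic from the question, since each pass only draws its own unmasked cells.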
<python><matplotlib><seaborn><heatmap><colorbar>
2023-06-09 16:17:41
4
21,513
Simd
76,441,847
11,922,765
Python convert tuple of tuples with keys and values to dataframe
<p>I am getting this weird-looking data. I want to convert it to a DataFrame.</p> <pre><code>sales_data = Totalsales(product='y', sales=(
    specificsite(product='y', sales_percent=70.0, location_zip='12345'),
    specificsite(product='y', sales_percent=11.0, location_zip='23456'),
    specificsite(product='y', sales_percent=45.0, location_zip='34567'),
    specificsite(product='y', sales_percent=10.0, location_zip='45678')))

df = pd.DataFrame(sales_data)
</code></pre> <p>Present output:</p> <pre><code>Traceback (most recent call last):
  File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt;
TypeError: 'Totalsales' object is not iterable
</code></pre> <p>Expected output:</p> <pre><code>df =
   product  sales_percent location_zip
0        y             70        12345
1        y             11        23456
2        y             45        34567
3        y             10        45678
</code></pre>
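The outer <code>Totalsales</code> object is not a sequence of rows; the sequence you want is its <code>sales</code> attribute, a tuple of record objects. Assuming those are namedtuple-like (the definitions below are hypothetical stand-ins for the real classes), passing the inner tuple to <code>pd.DataFrame</code> yields exactly the expected table:

```python
from collections import namedtuple
import pandas as pd

# Hypothetical stand-ins for the classes that produce sales_data
Totalsales = namedtuple("Totalsales", ["product", "sales"])
specificsite = namedtuple("specificsite",
                          ["product", "sales_percent", "location_zip"])

sales_data = Totalsales(product="y", sales=(
    specificsite(product="y", sales_percent=70.0, location_zip="12345"),
    specificsite(product="y", sales_percent=11.0, location_zip="23456"),
))

# Iterate the inner tuple of records, not the outer container
df = pd.DataFrame(sales_data.sales)
```

pandas reads the field names off each namedtuple, so the columns come out as <code>product</code>, <code>sales_percent</code>, <code>location_zip</code> without any extra work.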
<python><pandas><dataframe><numpy>
2023-06-09 15:49:07
0
4,702
Mainland
76,441,777
3,702,685
huggingface evaluate function use multiple labels
<p>I have two sentences that I combine with the encode_plus function, and I want to do the NLI task by fine-tuning a BERT base model.</p> <p>I want a metric name for the huggingface evaluate function that can evaluate multiple labels.</p> <p>I used this code:</p> <pre class="lang-py prettyprint-override"><code>metric = evaluate.combine([&quot;accuracy&quot;, &quot;f1&quot;, &quot;precision&quot;, &quot;recall&quot;]) metrics = metric.compute(predictions=[0,1,1,2], references=[0,2,1,0]) </code></pre> <p>And got this result:</p> <pre class="lang-py prettyprint-override"><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[31], line 2 1 metric = evaluate.combine([&quot;accuracy&quot;, &quot;f1&quot;, &quot;precision&quot;, &quot;recall&quot;]) ----&gt; 2 metrics = metric.compute(predictions=[0,1,1,2], references=[0,2,1,0]) 4 metrics File ~/anaconda3/envs/NER/lib/python3.10/site-packages/evaluate/module.py:862, in CombinedEvaluations.compute(self, predictions, references, **kwargs) 860 batch = {&quot;predictions&quot;: predictions, &quot;references&quot;: references, **kwargs} 861 batch = {input_name: batch[input_name] for input_name in evaluation_module._feature_names()} --&gt; 862 results.append(evaluation_module.compute(**batch)) 864 return self._merge_results(results) File ~/anaconda3/envs/NER/lib/python3.10/site-packages/evaluate/module.py:444, in EvaluationModule.compute(self, predictions, references, **kwargs) 442 inputs = {input_name: self.data[input_name] for input_name in self._feature_names()} 443 with temp_seed(self.seed): --&gt; 444 output = self._compute(**inputs, **compute_kwargs) 446 if self.buf_writer is not None: 447 self.buf_writer = None File ~/.cache/huggingface/modules/evaluate_modules/metrics/evaluate-metric--f1/0ca73f6cf92ef5a268320c697f7b940d1030f8471714bffdb6856c641b818974/f1.py:127, in F1._compute(self, predictions, references, labels, pos_label, average, sample_weight) 
126 def _compute(self, predictions, references, labels=None, pos_label=1, average=&quot;binary&quot;, sample_weight=None): --&gt; 127 score = f1_score( 128 references, predictions, labels=labels, pos_label=pos_label, average=average, sample_weight=sample_weight 129 ) ... (...) 1401 UserWarning, 1402 ) ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted']. </code></pre>
<python><deep-learning><pytorch><huggingface>
2023-06-09 15:41:02
2
568
Ali ZareShahi
76,441,747
265,521
No module named 'pip' when "Getting requirements to build wheel" but pip does exist
<p>In a new Docker container (based on Rocky 8) I built Python from source:</p> <pre><code>RUN curl https://www.python.org/ftp/python/3.11.4/Python-3.11.4.tgz --output Python-3.11.4.tgz &amp;&amp; \ tar -xzf Python-3.11.4.tgz &amp;&amp; \ cd Python-3.11.4 &amp;&amp; \ ./configure --enable-optimizations &amp;&amp; \ make -j4 &amp;&amp; \ make install &amp;&amp; \ cd .. &amp;&amp; \ rm -rf Python-3.11.4 </code></pre> <p>After which I can run Python apparently fine:</p> <pre><code>[root@a0aa4fc36e39 opt]# python3 --version Python 3.11.4 [root@a0aa4fc36e39 opt]# pip3 --version pip 23.1.2 from /usr/local/lib/python3.11/site-packages/pip (python 3.11) [root@a0aa4fc36e39 opt]# python3 -m pip --version pip 23.1.2 from /usr/local/lib/python3.11/site-packages/pip (python 3.11) [root@a0aa4fc36e39 opt]# python3 Python 3.11.4 (main, Jun 9 2023, 14:56:33) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import pip &gt;&gt;&gt; </code></pre> <p>However when I try to <code>pip3 install path/to/riscof</code> it fails with the apparently near-universal error message <code>No module named 'pip'</code>. (<a href="https://github.com/riscv-software-src/riscof" rel="nofollow noreferrer">Here's riscof btw.</a>).</p> <pre><code>Using pip 23.1.2 from /usr/local/lib/python3.11/site-packages/pip (python 3.11) Processing ./tools/riscof Running command pip subprocess to install build dependencies Collecting setuptools&gt;=40.8.0 Using cached setuptools-67.8.0-py3-none-any.whl (1.1 MB) Collecting wheel Using cached wheel-0.40.0-py3-none-any.whl (64 kB) Installing collected packages: wheel, setuptools Successfully installed setuptools-67.8.0 wheel-0.40.0 WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. 
It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv Installing build dependencies ... done Running command Getting requirements to build wheel Traceback (most recent call last): File &quot;/usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 353, in &lt;module&gt; main() File &quot;/usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 118, in get_requires_for_build_wheel return hook(config_settings) ^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/pip-build-env-trnsio_e/overlay/lib/python3.11/site-packages/setuptools/build_meta.py&quot;, line 341, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=['wheel']) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/pip-build-env-trnsio_e/overlay/lib/python3.11/site-packages/setuptools/build_meta.py&quot;, line 323, in _get_build_requires self.run_setup() File &quot;/tmp/pip-build-env-trnsio_e/overlay/lib/python3.11/site-packages/setuptools/build_meta.py&quot;, line 488, in run_setup self).run_setup(setup_script=setup_script) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/pip-build-env-trnsio_e/overlay/lib/python3.11/site-packages/setuptools/build_meta.py&quot;, line 338, in run_setup exec(code, locals()) File &quot;&lt;string&gt;&quot;, line 3, in &lt;module&gt; ModuleNotFoundError: No module named 'pip' error: subprocess-exited-with-error Γ— Getting requirements to build wheel did not run successfully. β”‚ exit code: 1 ╰─&gt; See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. 
full command: /usr/local/bin/python3.11 /usr/local/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py get_requires_for_build_wheel /tmp/tmprn976u9s Getting requirements to build wheel ... error error: subprocess-exited-with-error Γ— Getting requirements to build wheel did not run successfully. β”‚ exit code: 1 ╰─&gt; See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. </code></pre> <p>I'm not in a venv. I <em>definitely</em> have <code>pip</code> available. There's only one Python installation on the whole system. How exactly has Python screwed this up?</p>
<python><pip>
2023-06-09 15:36:36
2
98,971
Timmmm
76,441,643
3,369,544
How to flag tukey outliers using python pandas groupby
<p>I would like to use pandas groupby to flag values in a df that are outliers. I think I've got it working, but as I'm new to python, wanted to ask if there is a more obvious / pythonic approach.</p> <p>Given input data with two groups, two variables X and Y:</p> <pre><code>n=10000 df= pd.DataFrame({'key': ['a']*n+['b']*n ,&quot;x&quot; : np.hstack(( np.random.normal(10, 1.0, size=n) ,np.random.normal(100, 1.0, size=n) )) ,&quot;y&quot; : np.hstack(( np.random.normal(20, 1.0, size=n) ,np.random.normal(200, 1.0, size=n) )) }) </code></pre> <p>To identify outliers I need to calculate the quartiles and inter-quartile range for each group to calculate the limits. Seemed reasonable to create a function:</p> <pre><code>def get_outlier(x,tukeymultiplier=2): Q1=x.quantile(.25) Q3=x.quantile(.75) IQR=Q3-Q1 lowerlimit = Q1 - tukeymultiplier*IQR upperlimit = Q3 + tukeymultiplier*IQR return (x&lt;lowerlimit) | (x&gt;upperlimit) </code></pre> <p>And then use groupby and call the function via transform, e.g.:</p> <pre><code>g=df.groupby('key')[['x','y']] df['x_outlierflag']=g.transform(get_outlier).x df['y_outlierflag']=g.transform(get_outlier).y df.loc[df.x_outlierflag==True] df.loc[df.y_outlierflag==True] </code></pre> <p>I'm not worried about performance at this point, because the data are small. But not sure if there is a more natural way to do this? For example, it's not clear to me how apply() differs from transform(). Is there an apply() approach that would be better?</p>
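On the `apply` vs `transform` question raised above: `transform` must return output aligned one-to-one with the input rows, which is exactly what a per-row outlier flag needs, while `apply` may return an arbitrarily shaped result per group (a scalar, a Series, a frame) and suits reductions. A self-contained sketch of the transform route, shown on a small hand-checkable frame rather than the question's random data:

```python
import pandas as pd

def get_outlier(x, tukeymultiplier=2):
    # Tukey fences: flag values outside [Q1 - m*IQR, Q3 + m*IQR]
    q1, q3 = x.quantile(0.25), x.quantile(0.75)
    iqr = q3 - q1
    return (x < q1 - tukeymultiplier * iqr) | (x > q3 + tukeymultiplier * iqr)

df = pd.DataFrame({
    "key": ["a"] * 5 + ["b"] * 5,
    "x": [10, 11, 9, 10, 50, 100, 101, 99, 100, 100],
})
# transform keeps the original shape, so the result aligns row-for-row
df["x_outlierflag"] = df.groupby("key")["x"].transform(get_outlier)
```

With several columns, one call like `df.groupby('key')[['x','y']].transform(get_outlier)` produces both flag columns at once, avoiding the duplicated work of the two separate `g.transform(get_outlier).x` / `.y` calls in the question.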
<python><pandas><group-by><quantile>
2023-06-09 15:25:02
1
6,388
Quentin
76,441,531
18,904,265
create temporary file to pass to API
<p>I am taking uploaded files in a streamlit interface:</p> <pre class="lang-py prettyprint-override"><code>import streamlit as st uploaded_files = st.file_uploader( label=&quot;documents&quot;, type=&quot;pdf&quot;, accept_multiple_files=True ) </code></pre> <p>In the next step, I need to pass those files to an API (pybis), which sadly only accepts the file name/path as an input, not the BytesIO object. file.name is a property of the custom bytesIO object which is used, which gives the file name as string. This currently looks like this (and works):</p> <pre class="lang-py prettyprint-override"><code>import os import shutil with open(file.name, &quot;wb&quot;) as out_file: shutil.copyfileobj(file, out_file) dataset_new = openbis_object.new_dataset( files=[file.name] ) dataset_new.save() if os.path.isfile(file.name): os.remove(file.name) </code></pre> <p>As you see, my workaround is to save the file on the disk, pass the path to the API and then remove the file again. I was wondering, is there a better way of doing this? I'd like to somehow create a temporary file of which I pass the path, instead of doing this.</p>
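The standard library's `tempfile` module can replace the manual save/delete dance: write the uploaded bytes into a `TemporaryDirectory`, pass the real path on, and let the context manager clean up. A sketch under the assumption that a `consumer` callback stands in for the `openbis_object.new_dataset(files=[...])` call; `TemporaryDirectory` is used rather than `NamedTemporaryFile` because on Windows a still-open named temporary file often cannot be reopened by another reader:

```python
import os
import tempfile

def with_temp_copy(file_obj, filename, consumer):
    # Materialize the in-memory upload as a real file, hand its path
    # to the consumer, then let the context manager delete everything.
    with tempfile.TemporaryDirectory() as tmpdir:
        path = os.path.join(tmpdir, filename)
        with open(path, "wb") as out:
            out.write(file_obj.read())
        return consumer(path)
```

In the question's setting this would be roughly `with_temp_copy(file, file.name, lambda p: openbis_object.new_dataset(files=[p]).save())` — names on the pybis side are the question's own and untested here.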
<python><streamlit>
2023-06-09 15:13:12
1
465
Jan
76,441,525
15,452,168
Python Pandas: Calculating transaction count and transactions with multiple sizes
<p>I'm working with a DataFrame in Python using the Pandas library and I need help with calculating some metrics. I have a DataFrame with the following columns: WEBSHOP_ORDER, CLASS, USIM, USIM_DESC, SIZE, and DEMAND_QTY.</p> <p>My goal is to calculate two metrics for specific USIMs (product IDs):</p> <p>Transaction_Count: The number of unique transactions (WEBSHOP_ORDER) for each USIM. transaction_with_Multiple_Sizes: The number of transactions for each USIM that have multiple sizes. I've tried using the following code:</p> <pre><code>import pandas as pd # Read the CSV file into a DataFrame # Define the list of specific USIMs usims_of_interest = [2199603, 2199608, 2199611, 2199641, 2199642, 2199682, 2199692, 2199697, 2200982] # Generate random sample data np.random.seed(0) size_choices = ['Small', 'Medium', 'Large'] df = pd.DataFrame({ 'WEBSHOP_ORDER': np.random.randint(1, 10001, 10000), 'CLASS': np.random.choice(['A', 'B', 'C'], 10000), 'USIM': np.random.choice(usims_of_interest, 10000), 'USIM_DESC': ['Product {}'.format(i) for i in np.random.randint(1, 10001, 10000)], 'SIZE': np.random.choice(size_choices, 10000), 'DEMAND_QTY': np.random.randint(1, 10, 10000) }) #df = pd.read_csv('path/to/my/file.csv') # Define the list of specific USIMs usims_of_interest = [2199603, 2199608, 2199611, 2199641, 2199642, 2199682, 2199692, 2199697, 2200982] # Filter the DataFrame for the specific USIMs filtered_df = df[df['USIM'].isin(usims_of_interest)] # Group by USIM and calculate the required metrics grouped = filtered_df.groupby('USIM').agg( Transaction_Count=('WEBSHOP_ORDER', 'nunique'), transaction_with_Multiple_Sizes=('USIM', lambda x: (x.duplicated() &amp; x.notna()).sum()) ).reset_index() # Print the resulting DataFrame print(grouped) </code></pre> <p>However, the calculation for transaction_with_Multiple_Sizes seems to be incorrect. 
It yields values greater than Transaction_Count, which is not logically possible.</p> <p>I would greatly appreciate any guidance or suggestions on how to correctly calculate the transaction_with_Multiple_Sizes metric based on the conditions mentioned. Is there a more appropriate approach or modification to my code that can address this issue?</p> <p>Thank you for your assistance!</p>
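A hedged sketch of one way to make the two metrics consistent: count distinct sizes per (USIM, WEBSHOP_ORDER) pair first, then aggregate per USIM. Because both metrics are derived from the same per-order index, `transaction_with_Multiple_Sizes` can never exceed `Transaction_Count`:

```python
import pandas as pd

def size_metrics(df):
    # Distinct sizes within each transaction of each USIM
    sizes_per_order = df.groupby(["USIM", "WEBSHOP_ORDER"])["SIZE"].nunique()
    # One entry per (USIM, order), so the level-0 group size is the
    # number of unique transactions for that USIM
    return sizes_per_order.groupby(level=0).agg(
        Transaction_Count="size",
        transaction_with_Multiple_Sizes=lambda s: int((s > 1).sum()),
    ).reset_index()
```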
<python><pandas><dataframe>
2023-06-09 15:12:47
1
570
sdave
76,441,474
8,517,655
c++20 cppcoro equivalent of python's asyncio.create_task
<p>I am trying to rewrite some Python code with coroutines in C++20 with the cppcoro library. But I can't find a C++ equivalent of asyncio.create_task, a function which starts a task immediately and runs it concurrently. Here is the Python code:</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3 import asyncio import time async def getter(): for i in range(0,10): print(&quot;ok&quot;) await asyncio.sleep(0.1) return 100 async def runner(): n = asyncio.create_task(getter()) # getter started immediately await asyncio.sleep(0.2) # let's do something else while getter is running print(&quot;running&quot;) # print some text while getter is still running nn = await n # wait for getter and get result print(nn) asyncio.run(runner()) </code></pre> <p>Is there some function in the cppcoro library which does the same thing as <code>asyncio.create_task</code> in Python?<br /> Or can someone recommend a better coroutine library?</p> <p>PS: Does cppcoro have a sleep function, like Python's <code>await asyncio.sleep</code>?</p> <p>References:</p> <p><a href="https://github.com/lewissbaker/cppcoro" rel="nofollow noreferrer">https://github.com/lewissbaker/cppcoro</a><br /> <a href="https://docs.python.org/3/library/asyncio-task.html" rel="nofollow noreferrer">https://docs.python.org/3/library/asyncio-task.html</a></p>
<python><c++><c++20><coroutine>
2023-06-09 15:05:38
0
422
T0maas
76,441,395
643,194
Python - remove space between the strings
<p>I have a string array which looks like the one below:</p> <pre><code>THIS IS TO TEST &amp;#32; USING APPLE &amp;#32; ENTER A LINK &amp;#32; </code></pre> <p>I need to remove the spaces from the string, so that the final string will look like</p> <pre><code>THIS IS TO TEST USING APPLE ENTER A LINK </code></pre> <p>I tried the code below</p> <pre><code>final_htmljob_str = &quot;&quot; for s in htmljob_arr: s = s.replace(&quot;&amp;#32;&quot;,&quot;&quot;) s = s.strip() + &quot; &quot; final_htmljob_str += s </code></pre> <p>Is there a better approach to remove the spaces between the strings?</p>
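A compact alternative uses a generator expression and a single `str.join`, which also avoids building the result through repeated concatenation (each `+=` copies the whole string accumulated so far). If the goal were decoding rather than removal, `html.unescape` would turn `&#32;` into a literal space instead:

```python
def clean_lines(lines):
    # Drop the HTML-entity markers, trim stray whitespace, join once
    return " ".join(line.replace("&#32;", "").strip() for line in lines)
```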
<python>
2023-06-09 14:55:50
2
319
vishal
76,441,327
673,600
Getting access to python function from another python .ipynb file in colab
<p>I want to import a function from a .ipynb file, but I'm not having much luck. To be clear, a new Colab file should be able to call the file that contains the function, but I'm failing to get this to work. Below is the code I'm using.</p> <p>One file is named <code>Summary.ipynb</code> (with the function to call named <code>create</code>); <code>Generate.ipynb</code> is calling that function</p> <pre><code>from google.colab import drive import sys drive.mount('/content/drive') sys.path.insert(0,'/content/drive/folders/XXXX') from Summary import create </code></pre> <p>Gives the following:</p> <pre><code>ModuleNotFoundError: No module named 'Summary' </code></pre>
<python><google-colaboratory>
2023-06-09 14:46:31
0
6,026
disruptive
76,441,063
2,511,942
pip3 install needs multiple retries
<p>is it common that pip package installation requires multiple retries before it works? I tried on windows and ubuntu and noticed the same issue, with and without conda.</p> <p><a href="https://i.sstatic.net/A9AR8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A9AR8.png" alt="example" /></a></p> <p>Second example now on windows:</p> <pre><code>(textgen) C:\w\RedHeart\text-generation-webui&gt;pip3 install accelerate==0.18.0 --verbose --no-cache (textgen) C:\w\RedHeart\text-generation-webui&gt;pip3 install accelerate==0.18.0 --verbose --no-cache Using pip 23.0.1 from C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip (python 3.10) Collecting accelerate==0.18.0 Downloading accelerate-0.18.0-py3-none-any.whl (215 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 215.3/215.3 kB 13.7 MB/s eta 0:00:00 (textgen) C:\w\RedHeart\text-generation-webui&gt;pip3 install accelerate==0.18.0 --verbose --no-cache Using pip 23.0.1 from C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip (python 3.10) Collecting accelerate==0.18.0 Downloading accelerate-0.18.0-py3-none-any.whl (215 kB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 215.3/215.3 kB ? 
eta 0:00:00 ERROR: Exception: Traceback (most recent call last): File &quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_internal\cli\base_command.py&quot;, line 160, in exc_logging_wrapper status = run_func(*args) File &quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_internal\cli\req_command.py&quot;, line 247, in wrapper return func(self, options, args) File &quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_internal\commands\install.py&quot;, line 419, in run requirement_set = resolver.resolve( File &quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_internal\resolution\resolvelib\resolver.py&quot;, line 92, in resolve result = self._result = resolver.resolve( File &quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_vendor\resolvelib\resolvers.py&quot;, line 481, in resolve state = resolution.resolve(requirements, max_rounds=max_rounds) File &quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_vendor\resolvelib\resolvers.py&quot;, line 373, in resolve failure_causes = self._attempt_to_pin_criterion(name) File &quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_vendor\resolvelib\resolvers.py&quot;, line 213, in _attempt_to_pin_criterion criteria = self._get_updated_criteria(candidate) File &quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_vendor\resolvelib\resolvers.py&quot;, line 204, in _get_updated_criteria self._add_to_criteria(criteria, requirement, parent=candidate) File &quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_vendor\resolvelib\resolvers.py&quot;, line 172, in _add_to_criteria if not criterion.candidates: File &quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_vendor\resolvelib\structs.py&quot;, line 151, in __bool__ return bool(self._sequence) File &quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py&quot;, line 155, in 
__bool__ return any(self) File &quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py&quot;, line 143, in &lt;genexpr&gt; return (c for c in iterator if id(c) not in self._incompatible_ids) File &quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_internal\resolution\resolvelib\found_candidates.py&quot;, line 44, in _iter_built for version, func in infos: File &quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_internal\resolution\resolvelib\factory.py&quot;, line 279, in iter_index_candidate_infos result = self._finder.find_best_candidate( File &quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_internal\index\package_finder.py&quot;, line 896, in find_best_candidate return candidate_evaluator.compute_best_candidate(candidates) File &quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_internal\index\package_finder.py&quot;, line 579, in compute_best_candidate applicable_candidates = self.get_applicable_candidates(candidates) File &quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_internal\index\package_finder.py&quot;, line 464, in get_applicable_candidates versions = { File &quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_internal\index\package_finder.py&quot;, line 464, in &lt;setcomp&gt; versions = { File &quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_vendor\packaging\specifiers.py&quot;, line 205, in filter if self.contains(parsed_version, **kw): File &quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_vendor\packaging\specifiers.py&quot;, line 183, in contains if normalized_item.is_prerelease and not prereleases: File &quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_vendor\packaging\version.py&quot;, line 370, in is_prerelease return self.dev is not None or self.pre is not None File 
&quot;C:\Users\levy\miniconda3\envs\textgen\lib\site-packages\pip\_vendor\packaging\version.py&quot;, line 342, in dev return self._version.dev[1] if self._version.dev else None AttributeError: 'Version' object has no attribute '_version' (textgen) C:\w\RedHeart\text-generation-webui&gt; </code></pre>
<python><pip>
2023-06-09 14:09:28
0
2,783
Levy Moreira
76,441,013
1,810,940
Specifying particular functions accepted through typing.Literal
<p>How can I use <code>typing</code> to specify that my function accepts only specific callables? For instance, I would like functionality akin to this:</p> <pre><code>from typing import Literal def accepted_function1(): pass def accepted_function2(): pass def function_accepting_functions(foo: Literal[accepted_function1, accepted_function2]): foo() </code></pre>
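For context, PEP 586 restricts `Literal[...]` parameters to ints, strings, bytes, bools, `None`, and enum members, so function objects cannot appear there. A hedged sketch of a common substitute: a `Callable` annotation for the static side plus an explicit runtime whitelist check:

```python
from typing import Callable, Tuple

def accepted_function1() -> str:
    return "one"

def accepted_function2() -> str:
    return "two"

_ACCEPTED: Tuple[Callable[[], str], ...] = (accepted_function1, accepted_function2)

def function_accepting_functions(foo: Callable[[], str]) -> str:
    # Literal cannot hold function objects, so enforce the whitelist at runtime
    if foo not in _ACCEPTED:
        raise ValueError("unsupported callback")
    return foo()
```

An `Enum` whose members wrap the accepted functions is another option, and one that type checkers can narrow statically.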
<python><python-typing>
2023-06-09 14:02:41
5
503
jay
76,440,883
18,972,785
How to make different graphs from connected components of a graph in python?
<p>I am working on a graph which is composed of 80 sub-graphs. I need to make a networkX graph for each of these sub-graphs and perform operations on it. When I use <code>nx.connected_components(graph)</code> on my graph, it only gives me the nodes of each connected component, but I need the edges to make a graph of each connected component. I used <code>nx.connected_component_subgraphs(graph)</code> for this purpose, but it seems it does not work (AttributeError: module networkx has no attribute connected_component_subgraphs). I would really appreciate an answer that solves my problem.</p>
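`connected_component_subgraphs` was removed from recent networkx releases; the documented replacement is `[graph.subgraph(c).copy() for c in nx.connected_components(graph)]`, which yields one graph (nodes and edges) per component. The same idea in plain standard-library Python, as a sketch of what that one-liner computes:

```python
from collections import deque

def connected_component_edges(nodes, edges):
    # Build an undirected adjacency map
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, components = set(), []
    for start in nodes:
        if start in seen:
            continue
        # BFS to collect one component's node set
        comp, queue = set(), deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp.add(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        # The component's edges are those with both endpoints inside it
        comp_edges = [(u, v) for u, v in edges if u in comp and v in comp]
        components.append((comp, comp_edges))
    return components
```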
<python><graph><networkx>
2023-06-09 13:45:47
1
505
Orca
76,440,836
1,764,089
How to make barmode=group type plots with a secondary axis in plotly python?
<p>I tried something like</p> <pre><code>fig = make_subplots(specs=[[{'secondary_y': True}]]) fig.add_trace(go.Bar(x=x1, y=y1), secondary_y=False) fig.add_trace(go.Bar(x=x2, y=y2), secondary_y=True) </code></pre> <p>but this seemed to overlay them on top of each other instead of next to each other.</p>
<python><plotly>
2023-06-09 13:40:18
1
3,753
evan54
76,440,827
1,789,718
PIP very slow when using it in a conda environment
<p>It has been some months since the problem appeared from nowhere, every time I install something using <code>pip</code> from a <code>conda</code> environment (fresh or old) it takes ages to start the process, i have been using <code>-vvv</code> for debugging the issue.</p> <p>When i'm using the main <code>conda</code> environment the output look like the following:</p> <pre class="lang-bash prettyprint-override"><code>Using pip 21.1.3 from /localdisk/&lt;username&gt;/anaconda3/lib/python3.9/site-packages/pip (python 3.9) Non-user install because site-packages writeable Created temporary directory: /tmp/pip-ephem-wheel-cache-79hpacew Created temporary directory: /tmp/pip-req-tracker-2x18ocvp ... </code></pre> <p>while if I am in an environment the output looks like this:</p> <pre class="lang-bash prettyprint-override"><code>Using pip 23.0.1 from /localdisk/lpuglia/anaconda3/envs/vpunn/lib/python3.8/site-packages/pip (python 3.8) Non-user install because site-packages writeable [*** HERE PIP HANGS FOR ABOUT 2 MINUTES ***] Created temporary directory: /tmp/pip-build-tracker-xb9h_yb9 Initialized build tracking at /tmp/pip-build-tracker-xb9h_yb9 ... </code></pre> <p>besides the obvious difference in version between the two <code>pip</code>s, I really can't say what is the problem. Using <code>-vvv</code> doesn't shed any light on the source of the problem. I was expecting some hanging due to network problems but from the output it looks like that the hang happens before creating a folder.</p> <p>Interestingly enough, if I call <code>pip</code> again just after it finishes (few seconds) it doesn't hang for 2 minutes. This time window is very small, waiting for 10 seconds will force another hang on the next <code>pip</code> installation.</p> <p>I'm out of ideas on how to debug this issue, if anyone could suggest a way to understand better the problem or a solution it would be ideal.</p>
<python><pip><conda>
2023-06-09 13:38:33
1
1,360
Luca
76,440,648
264,136
temp place to store a number in windows
<p>I have a python program which runs via task scheduler every 1 min in <code>windows 10 enterprise</code>. Each run of the program needs to store just an int value; the next run reads it, updates it for the following run, and so on.</p> <p>I don't want to use a DB for this small task, and storing the number in a text file looks very raw.</p> <p>Any suggestions?</p>
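A small JSON state file is the usual lightweight middle ground between a raw text file and a database (on Windows, a `winreg` registry value is another common option). A sketch; the state-file location is an illustrative assumption to adapt:

```python
import json
from pathlib import Path

# Hypothetical location -- adapt to wherever the scheduled task may write
STATE_FILE = Path.home() / "my_task_state.json"

def load_counter(path=STATE_FILE):
    # Missing or corrupt state falls back to a default value
    try:
        return json.loads(path.read_text())["counter"]
    except (FileNotFoundError, json.JSONDecodeError, KeyError):
        return 0

def save_counter(value, path=STATE_FILE):
    path.write_text(json.dumps({"counter": value}))
```

JSON buys tolerance for later growth (more than one value) at essentially zero cost over a bare text file.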
<python>
2023-06-09 13:16:54
2
5,538
Akshay J
76,440,501
7,029,382
Handling large inputs in Programming Language Models
<p>I am using CodeBERT and CodeT5 models to generate embeddings for my Python code dataset. Specifically, I am using the <code>codebert-base</code> and <code>codet5-small</code> models from HuggingFace. However, these models have a maximum input sequence length of 512 tokens. When tokenizing the Python codes, I find that their length is approximately 10 times larger than the maximum input sequence length. My intention is not to perform any downstream tasks; rather, I want to obtain the embeddings for the purpose of clustering. I am wondering how to handle these lengthy input codes aside from removing less important parts from the inputs.</p>
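The usual workaround is sliding-window chunking: split the token sequence into overlapping windows of at most 512 tokens, embed each window with the model, and pool (for instance average) the per-window vectors into one embedding per code sample for clustering. A model-free sketch of the windowing and pooling arithmetic; the 512-token window and 256-token overlap are illustrative assumptions:

```python
def chunk_tokens(tokens, max_len=512, overlap=256):
    # Overlapping windows so no token sits only at a hard boundary;
    # assumes max_len > overlap
    step = max_len - overlap
    chunks = []
    for start in range(0, max(len(tokens) - overlap, 1), step):
        chunks.append(tokens[start:start + max_len])
    return chunks

def mean_pool(vectors):
    # Element-wise average of equally sized embedding vectors
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]
```

In practice `max_len` must leave room for the model's special tokens (e.g. CLS/SEP), so a window slightly below 512 is typical.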
<python><machine-learning><nlp><huggingface-transformers><transformer-model>
2023-06-09 12:58:12
0
322
VihangaAW
76,440,490
14,679,834
`ERROR: Failed building wheel for detectron2` when installing detectron2 through a docker image in a DigitalOcean droplet
<p>I'm running into an error when installing <code>detectron2</code> in a docker image in a digital ocean droplet but it works with no problems when I build the docker image in my local machine.</p> <p>EDIT: Upon rebuilding with <code>--no-cache</code> the dockerfile below doesn't include detectron when building for some reason. Didn't have this before.</p> <p>Here's my dockerfile:</p> <pre><code>FROM python:3.10-slim-buster # Update package lists RUN apt-get update &amp;&amp; apt-get install ffmpeg libsm6 libxext6 gcc g++ git build-essential libpoppler-cpp-dev pkg-config poppler-utils tesseract-ocr libtesseract-dev -y # Make working directories RUN mkdir -p /app WORKDIR /app # Copy the requirements.txt file to the container COPY requirements.txt . # Install dependencies RUN pip install --upgrade pip RUN pip install torch torchvision torchaudio RUN pip install -r requirements.txt RUN pip install 'git+https://github.com/facebookresearch/detectron2.git@v0.4#egg=detectron2' # Copy the .env file to the container COPY .env . # Copy every file in the source folder to the created working directory COPY . . # Expose the port that the application will run on EXPOSE 8080 # Start the application CMD [&quot;python3.10&quot;, &quot;uvicorn&quot;, &quot;-m&quot;, &quot;main:app&quot;, &quot;--proxy-headers&quot;, &quot;--host&quot;, &quot;0.0.0.0&quot;, &quot;--port&quot;, &quot;8080&quot;] </code></pre> <p>I'm using a basic digital ocean droplet with 2 GB Memory / 2 Intel vCPUs / 60 GB Disk / and running Ubuntu 22.10 x64.</p> <p>Here's the error:</p> <pre><code>Building wheels for collected packages: detectron2, fvcore Building wheel for detectron2 (setup.py): started Building wheel for detectron2 (setup.py): still running... Building wheel for detectron2 (setup.py): still running... Building wheel for detectron2 (setup.py): finished with status 'error' error: subprocess-exited-with-error Γ— python setup.py bdist_wheel did not run successfully. 
β”‚ exit code: 1 ╰─&gt; [397 lines of output] running bdist_wheel /usr/local/lib/python3.10/site-packages/torch/utils/cpp_extension.py:476: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend. ... ... ... from /tmp/pip-req-build-1jp0qvnm/detectron2/layers/csrc/vision.cpp:3: /usr/local/lib/python3.10/site-packages/torch/include/pybind11/pybind11.h: In instantiation of β€˜class pybind11::class_&lt;detectron2::COCOeval::InstanceAnnotation&gt;’: /tmp/pip-req-build-1jp0qvnm/detectron2/layers/csrc/vision.cpp:105:73: required from here /usr/local/lib/python3.10/site-packages/torch/include/pybind11/pybind11.h:1479:7: warning: β€˜pybind11::class_&lt;detectron2::COCOeval::InstanceAnnotation&gt;’ declared with greater visibility than its base β€˜pybind11::detail::generic_type’ [-Wattributes] class class_ : public detail::generic_type { ^~~~~~ /usr/local/lib/python3.10/site-packages/torch/include/pybind11/pybind11.h: In instantiation of β€˜class pybind11::class_&lt;detectron2::COCOeval::ImageEvaluation&gt;’: /tmp/pip-req-build-1jp0qvnm/detectron2/layers/csrc/vision.cpp:107:67: required from here /usr/local/lib/python3.10/site-packages/torch/include/pybind11/pybind11.h:1479:7: warning: β€˜pybind11::class_&lt;detectron2::COCOeval::ImageEvaluation&gt;’ declared with greater visibility than its base β€˜pybind11::detail::generic_type’ [-Wattributes] gcc: fatal error: Killed signal terminated program cc1plus compilation terminated. error: command '/usr/bin/gcc' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. 
ERROR: Failed building wheel for detectron2 Running setup.py clean for detectron2 Building wheel for fvcore (setup.py): started Building wheel for fvcore (setup.py): finished with status 'done' Created wheel for fvcore: filename=fvcore-0.1.5.post20221221-py3-none-any.whl size=61405 sha256=f68cb822c5d8cc162abe548a6080cc96afae83ad7f5dfdcc836fe6cf5ae14b81 Stored in directory: /root/.cache/pip/wheels/01/c0/af/77c1cf53a1be9e42a52b48e5af2169d40ec2e89f7362489dd0 Successfully built fvcore Failed to build detectron2 ERROR: Could not build wheels for detectron2, which is required to install pyproject.toml-based projects </code></pre> <p>I've also tried with this dockerfile:</p> <pre><code>FROM python:3.10-slim-buster # Update package lists RUN apt-get update &amp;&amp; apt-get install ffmpeg libsm6 libxext6 gcc g++ git build-essential libpoppler-cpp-dev pkg-config poppler-utils tesseract-ocr libtesseract-dev -y # Make working directories RUN mkdir -p /app WORKDIR /app # Copy the requirements.txt file to the container COPY requirements.txt . # Install dependencies RUN pip install --upgrade pip RUN pip install torch torchvision torchaudio RUN pip install -r requirements.txt RUN pip install 'git+https://github.com/facebookresearch/detectron2.git' # Copy the .env file to the container COPY .env . # Copy every file in the source folder to the created working directory COPY . . # Expose the port that the application will run on EXPOSE 8080 # Start the application CMD [&quot;python3.10&quot;, &quot;-m&quot;, &quot;uvicorn&quot;, &quot;main:app&quot;, &quot;--proxy-headers&quot;, &quot;--host&quot;, &quot;0.0.0.0&quot;, &quot;--port&quot;, &quot;8080&quot;] </code></pre> <p>But it doesn't install detectron on my local machine.</p>
<python><docker><detectron>
2023-06-09 12:57:02
1
526
neil_ruaro
76,440,418
1,764,089
How to make secondary y-axis and subplots in plotly pandas?
<p>I'd like to make a secondary y-axis in the 2nd plot in plotly.</p> <p>The way I'm doing this is smth like:</p> <pre><code>from plotly.subplots import make_subplots fig = make_subplots(rows=2, cols=1) # this works fig = make_subplots(specs=[[{'secondary_y': True}]]) # this works fig = make_subplots(rows=2, cols=1, specs=[[{'secondary_y': True}]]) # this doesn't work </code></pre> <p>I would then generally add plots I made in pandas to the figure</p> <pre><code>fig1 = df.plot() for k in fig1.data: fig.add_trace(k) </code></pre> <p>I'd like to be able for the plot in the second row to have a secondary axis, how would I go about that?</p>
<python><pandas><plotly>
2023-06-09 12:48:43
1
3,753
evan54
76,440,238
12,500,949
Generate combinations with condition of minimum hit and percentages in Python?
<p>Example:</p> <p><a href="https://i.sstatic.net/uQGDC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uQGDC.png" alt="enter image description here" /></a></p> <p>Supposing that based on your criteria you choose 27 numbers from a total pool of 49 to 6/49 lottery, like in the picture above. Next, you input that when you hit at least 6 numbers from all those 27 you already selected then you want to have at least one single ticket with at least 3 numbers.</p> <p>I am trying to do this using Python. Here is my work:</p> <pre><code>from itertools import combinations def get_combos(n, k, t): nums = [x for x in range(1, n+1)] combos = [] for cmb in combinations(nums, k): combos.append(cmb) final_combos = [] for filter_cmb in combinations(nums, t): for each in combos: if len(sorted(list(set(each).intersection(filter_cmb))))==len(filter_cmb): final_combos.append(each) break return sorted(list(set(final_combos))) v = 4 k = 3 t = 2 for each in get_combos(v, k, t): print(each) </code></pre> <p><strong>n is the total pool of combinations; k is the combination size; t is the minimum to hit;</strong></p> <p>And it seems it is working very well for these input parameters of 4;3;2; as the output is:</p> <pre><code>(1, 2, 3) (1, 2, 4) (1, 3, 4) </code></pre> <p>But if I change them to 5;3;2; the output is:</p> <pre><code>(1, 2, 3) (1, 2, 4) (1, 2, 5) (1, 3, 4) (1, 3, 5) (1, 4, 5) </code></pre> <p>which is wrong because the desired output in this case should be:</p> <pre><code>1, 2, 3 1, 4, 5 2, 3, 4 2, 3, 5 </code></pre> <p>Obviously the Python algorithm above is wrong. But how this is accomplished programmatically? What the algorithm above is missing? Also, the examples above are for 100% percentages but how to do it when percentage is smaller?</p> <p>Thank you!</p>
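For reference (this goes beyond the code in the question): the guarantee described — every possible t-subset of the chosen pool appears in at least one k-number ticket — is a covering-design problem, and the posted code returns *all* tickets that contain some t-subset instead of a small set that covers every t-subset. A greedy sketch that builds such a cover (not guaranteed minimal):

```python
from itertools import combinations

def greedy_cover(n, k, t):
    """Greedily choose k-number tickets from 1..n so that every
    t-subset of the pool is contained in at least one ticket.
    Greedy is not guaranteed minimal, but always yields a valid cover."""
    nums = range(1, n + 1)
    uncovered = {frozenset(c) for c in combinations(nums, t)}
    all_tickets = [frozenset(c) for c in combinations(nums, k)]
    tickets = []
    while uncovered:
        # take the ticket that covers the most still-uncovered t-subsets
        best = max(all_tickets, key=lambda tk: sum(1 for s in uncovered if s <= tk))
        tickets.append(tuple(sorted(best)))
        uncovered = {s for s in uncovered if not s <= best}
    return tickets

# reproduces the 4-ticket cover the question expects for n=5, k=3, t=2
print(greedy_cover(5, 3, 2))
```

Handling percentages below 100% would mean covering only a chosen fraction of the t-subsets, which the same greedy loop can do by stopping once enough subsets are covered.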
<python><python-3.x><python-itertools>
2023-06-09 12:26:08
1
439
YoYoYo
76,440,203
5,623,899
Subclassing `ndarray` following tutorial yields unexpected results (i.e. partial memory, some attributes are remembers others are lost)
<p>I think I followed the subclassing tutorial correctly. I have a very simple example. It works when I run the code once. When I rerun a cell in a Jupyter notebook, then the class breaks and it &quot;forgets&quot; state (well it remembers the stuff I added, it forgets that transpose I did to numpy array). See code below.</p> <p>Below I implement three simple classes NamedAxis, NamedAxes, and NamedArray (yes I am aware of xarray, this is for my own learning purposes). Mostly it works fine. However, I notice something very frustrating when I rerun flip</p> <pre class="lang-py prettyprint-override"><code>from copy import deepcopy from dataclasses import dataclass, field from typing import List, Dict, Union, Optional, Any, Callable, TypeVar, Generic, Type, cast, Tuple import numpy as np, pandas as pd @dataclass class NamedAxis: # name of axis name: str # index of axis axis: Optional[int] = None def __str__(self): return f'{self.name}({self.axis})' __repr__ = __str__ def copy(self) -&gt; 'NamedAxis': copy = deepcopy(self) return copy @dataclass class NamedAxes: axes: Union[List[NamedAxis], Tuple[NamedAxis]] name: Optional[str] = 'NamedAxes' umap: Dict[str, NamedAxis] = field(default_factory=dict, init=False, repr=False) def __post_init__(self): # assign unique id to each axis for i, axis in enumerate(self.axes): axis.axis = i self.umap = {ax.axis: ax for ax in self.axes} @property def ndim(self): return len(self.axes) @property def anames(self): # names in current location return [str(ax.name) for ax in self.axes] @property def aidxs(self): # original location as ax.axis should never be changed return [int(ax.axis) for ax in self.axes] @property def alocs(self): # current location return list(range(len(self))) def __getitem__(self, key:Union[int, str, NamedAxis]) -&gt; NamedAxis: # NOTE: this gets current location of axis, not original location if isinstance(key, int): return self.axes[key] # NOTE: this gets location based off original location elif isinstance(key, 
NamedAxis): return self.umap[key.axis] # NOTE: this gets location based off original location elif isinstance(key, str): for ax in self.umap.values(): if key == ax.name: return ax elif key == str(ax.axis): return ax else: raise KeyError(f'Key {key} not found in {self.name}') def __str__(self): _str = f'{self.name}(' + ', '.join(self.anames) + ')' return _str __repr__ = __str__ def __iter__(self): return iter(self.axes) def __len__(self): return len(self.axes) def copy(self): copy = deepcopy(self) copy.umap = self.umap.copy() return copy def index(self, key:Union[int, str, NamedAxis]): ax = self[key] return self.axes.index(ax) def transpose(self, *order:Union[str, int, NamedAxis]): # check input and convert to axes update_axes = [self[key] for key in order] # gather the axes that are not in the provided order needed_axes = [ax for ax in self.axes if ax not in update_axes] # the new order of axes is the updated axes followed by the needed axes new_order = update_axes + needed_axes print('NamedAxes.transpose:\t', self.name, self.axes, new_order) # rearrange axes according to the new order self.axes = new_order return self a, b, c = NamedAxis('axis-a'), NamedAxis('axis-b'), NamedAxis('axis-c') abc = NamedAxes((a, b, c)) abc class NamedArray(np.ndarray): DIMS = NamedAxes([NamedAxis('axis-a'), NamedAxis('axis-b'), NamedAxis('axis-c')], name='Trajectories') def __new__(cls, arr, dims=None): obj = np.asarray(arr).view(cls) obj.dims = (dims or cls.DIMS).copy() return obj def __new__(cls, arr, dims:NamedAxes=None): # Input array is an already formed ndarray instance # We first cast to be our class type obj = np.asarray(arr).view(cls) # add the new attribute to the created instance obj.dims = (dims or cls.DIMS).copy() # Finally, we must return the newly created object: return obj def __array_finalize__(self, obj): print('finalize, dims=', getattr(obj, 'dims', None)) print('finalize, obj=', obj) if obj is None: return self.dims = getattr(obj, 'dims', self.DIMS.copy()) # Ensure 
the indices are in the correct range shape = self.shape if len(shape) != len(self.dims): raise ValueError('NamedArray must have {len(self.dims)} dimensions, but got {len(shape)}.') def __array_wrap__(self, out, dims=None): print('In __array_wrap__:') print(' self is %s' % repr(self)) print(' arr is %s' % repr(out)) # then just call the parent return super().__array_wrap__(self, out, dims) @property def dim_names(self): return tuple(self.dims.anames) @property def dim_str(self): _str = ', '.join([f'{s} {n}' for s, n in zip(self.shape, self.dim_names)]) return f'({_str})' def __repr__(self): base = super(NamedArray, self).__repr__() first_line = base.split('\n')[0] spaces = 0 for s in first_line: if s.isdigit(): break spaces += 1 spaces = ' ' * (spaces - 1) return f'{base}\n{spaces}{self.dim_str}' def flip(self, axes:Union[str, int, NamedAxis]=None): # I tried transpose as well print(self.dims.axes) # Get the order of axes indices new_idxs = [self.dims.index(self.dims[ax]) for ax in axes] print(axes, new_idxs) # Transpose the NamedAxes self.dims.transpose(*axes) print(new_idxs, self.__array_interface__['shape']) # Transpose the underlying numpy array self = np.transpose(self, axes=new_idxs) # self.transpose(*new_idxs) ''' # NOTE: StackOverflow post edit / clarification I've tried this a few different ways including `self.transpose()` as well as just `return np.transpose()`, and trying to change the function flip to `transpose` etc. 
This is just the version I am posting for brevity without the 10 different `flip` implementations ''' return self </code></pre> <p>So lets make some dummy data:</p> <pre class="lang-py prettyprint-override"><code>arr = np.random.randint(0, 5, (2, 3, 4)) nar = NamedArray(arr) nar # (2 axis-a, 3 axis-b, 4 axis-c) ''' NOTE: flip is basically transpose, with the difference that `arr.transpose(1, 0, 2).transpose(1, 0, 2)` will do two transposes but since we are using names and named indices, `nar.flip('b', 'a', 'c').flip('b', 'a', 'c')` should only do one. In other words `flip` is declarative, saying how we want the axes to be. Similar to einops / xarray ''' nar.flip(('axis-c', 'axis-b', 'axis-a')) # (4 axis-c, 3 axis-b, 2 axis-a) </code></pre> <p>So far so good. However, when I run the cell again</p> <pre class="lang-py prettyprint-override"><code># (2 axis-a, 3 axis-b, 4 axis-c) nar.flip(('axis-c', 'axis-b', 'axis-a')) # (2 axis-c, 3 axis-b, 4 axis-a) </code></pre> <p>I have spent way too long debugging this and I can't figure it out.</p>
<python><python-3.x><numpy><numpy-ndarray>
2023-06-09 12:22:29
2
5,218
SumNeuron
76,440,184
13,130,804
How to read an API response with moviepy directly as a video
<p>Hi, I am trying to add a fade-in effect to a video that is generated by an API. For this I need to use the Python library moviepy. My first approach is saving the API response as an mp4 file, then reading the file with moviepy again to add the fade-in effect and save it again.</p> <pre><code>import requests from moviepy.editor import * from moviepy.video.fx.all import fadein response = requests.post(url, json=payload, headers=headers) with open(&quot;response1.mp4&quot;, &quot;wb&quot;) as f: f.write(response.content) clip = VideoFileClip(&quot;response1.mp4&quot;) clip = clip.fadein(duration=1) clip.write_videofile(&quot;response2.mp4&quot;) </code></pre> <p>Is there any way to generate &quot;response2.mp4&quot; directly, skipping the step of generating &quot;response1.mp4&quot; as a file before?</p>
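MoviePy's `VideoFileClip` reads from a file path (it shells out to ffmpeg), so fully in-memory processing isn't straightforward; a common workaround is a temporary file that is deleted afterwards, so no named "response1.mp4" lingers. A stdlib-only sketch of the pattern, with a stand-in callable where the MoviePy call would go:

```python
import os
import tempfile

def process_bytes(data: bytes, process) -> None:
    """Write API response bytes to a temp file, run a file-based
    processor on it, then clean the temp file up."""
    fd, path = tempfile.mkstemp(suffix=".mp4")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        # with MoviePy this would be something like:
        #   lambda p: VideoFileClip(p).fadein(1).write_videofile("response2.mp4")
        process(path)
    finally:
        os.remove(path)

# usage with a dummy processor standing in for MoviePy
seen = []
process_bytes(b"\x00\x01fake-video-bytes", lambda p: seen.append(os.path.getsize(p)))
```

The temp file still exists briefly on disk, but its lifetime is scoped to the processing call rather than left behind as a named artifact.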
<python><mp4><moviepy>
2023-06-09 12:19:33
1
446
Miguel Gonzalez
76,440,140
64,904
Python websockets: EOFError("line without CRLF")
<p>I've implemented a Python Websockets server program example.</p> <p>When I connect to it via wss://myserver.com:9000, it doesn't work and I see the error <code>EOFError(&quot;line without CRLF&quot;)</code> in my log.</p>
<python><websocket>
2023-06-09 12:15:41
1
49,337
Larry K
76,440,081
2,328,219
Iterating through repetitive blocks of tags with Selenium
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;block data-id="1234"&gt; &lt;taga&gt;New York&lt;/taga&gt; &lt;tagb&gt;Yankees&lt;/tagb&gt; &lt;/block&gt; &lt;block data-id="5678"&gt; &lt;taga&gt;Montreal&lt;/taga&gt; &lt;tagb&gt;Expos&lt;/tagb&gt; &lt;/block&gt; &lt;block data-id="2468"&gt; &lt;taga&gt;Boston&lt;/taga&gt; &lt;tagb&gt;Red Sox&lt;/tagb&gt; &lt;/block&gt;</code></pre> </div> </div> </p> <p>Assuming I have the HTML above - Buried amongst some other HTML.</p> <p>My approach to getting the data with Python and Selenium is:</p> <pre><code>teams = driver.find_elements(&quot;xpath&quot;, &quot;//block&quot;) for team in teams: print (team.get_attribute(&quot;data-id&quot;) </code></pre> <p>What is the right way to reference <code>&lt;taga&gt;</code> and <code>&lt;tagb&gt;</code> from the &quot;team&quot; element? Instinctively I would try an xpath using the team like:</p> <pre><code>print(team.find_element(&quot;xpath&quot;, &quot;//taga&quot;).text) print(team.find_element(&quot;xpath&quot;, &quot;//tagb&quot;).text) </code></pre> <p>but of course these xpath are absolute in the document, and not relative to the <code>&lt;block&gt;</code> I have referenced in &quot;team&quot;</p>
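In Selenium the fix is to prefix the XPath with a dot — `team.find_element("xpath", ".//taga")` — so the search is evaluated relative to the element instead of the document root. The same absolute-vs-relative distinction can be demonstrated with the stdlib `ElementTree`, using the question's markup:

```python
import xml.etree.ElementTree as ET

html = """
<root>
  <block data-id="1234"><taga>New York</taga><tagb>Yankees</tagb></block>
  <block data-id="5678"><taga>Montreal</taga><tagb>Expos</tagb></block>
  <block data-id="2468"><taga>Boston</taga><tagb>Red Sox</tagb></block>
</root>"""

root = ET.fromstring(html)
teams = []
for block in root.findall(".//block"):
    # './/taga' is evaluated relative to this block, so each
    # iteration sees its own city/name pair
    city = block.find(".//taga").text
    name = block.find(".//tagb").text
    teams.append((block.get("data-id"), city, name))

print(teams)
```

Without the leading dot, `//taga` would search from the document root on every iteration and keep returning the first match.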
<python><selenium-webdriver>
2023-06-09 12:07:12
1
309
Damon Brodie
76,439,995
19,130,803
Type-hint for sklearn BaseEstimator
<p>I am working on an ML project using the <code>sklearn</code> library. I am writing a custom transformer to remove duplicate rows from a dataframe, as below:</p> <pre><code>class RemoveDuplicateRows(BaseEstimator, TransformerMixin): &quot;&quot;&quot;Custom transformer for remove duplicate rows.&quot;&quot;&quot; # def __init__(self) -&gt; None: # &quot;&quot;&quot;Initialize the defaults.&quot;&quot;&quot; # self._count: int = 0 def fit(self, X: pd.DataFrame, y: pd.Series = None) -&gt; Self: &quot;&quot;&quot;Learn the parameters.&quot;&quot;&quot; return self def transform(self, X: pd.DataFrame, y: pd.Series = None) -&gt; pd.DataFrame: &quot;&quot;&quot;Transform the input.&quot;&quot;&quot; return X.drop_duplicates() </code></pre> <p>It runs fine inside a <code>pipeline</code> (without type hints). After a successful run I added the type hints, and now I get the errors below:</p> <pre><code>error: Class cannot subclass &quot;BaseEstimator&quot; (has type &quot;Any&quot;) [misc] error: Class cannot subclass &quot;TransformerMixin&quot; (has type &quot;Any&quot;) [misc] </code></pre> <p>What am I missing?</p>
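Those errors come from mypy rather than from scikit-learn itself: scikit-learn ships without complete type stubs, so under `--strict` (which enables `disallow_subclassing_any`) the base classes are seen as `Any` and subclassing them is rejected. One common workaround is to relax that single check for the module defining the transformer; a sketch of the config approach (the module name in the per-module section is an assumption):

```ini
# mypy.ini -- keep strict checking globally, but allow subclassing
# sklearn's untyped base classes in the module defining the transformer
[mypy]
strict = True

[mypy-my_project.transformers]
disallow_subclassing_any = False
```

Alternatively, silence it per class with `class RemoveDuplicateRows(BaseEstimator, TransformerMixin):  # type: ignore[misc]`.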
<python><scikit-learn>
2023-06-09 11:57:03
0
962
winter
76,439,976
14,627,505
How to export a dashbio graph
<p>Is there a way to save <a href="https://dash.plotly.com/dash-bio/" rel="nofollow noreferrer">dashbio graphs</a>? (other than just taking a screenshot)</p> <ul> <li><p>For example, plotly has &quot;download plot&quot; button, and <a href="https://stackoverflow.com/q/59815797">also a way to do it with code</a>,</p> </li> <li><p><code>dash_cytoscape</code> has <a href="https://dash.plotly.com/cytoscape/images" rel="nofollow noreferrer">its own way</a> like this:</p> </li> </ul> <pre class="lang-py prettyprint-override"><code>import dash_cytoscape as cyto cyto.load_extra_layouts() # enable svg export @app.callback( Output(&quot;&lt;element-id&gt;&quot;, &quot;generateImage&quot;), # ... other steps follow </code></pre> <ul> <li>However, I can't find a way to save a <code>dashbio</code> graph, either in gui, or with code.</li> </ul>
<python><plotly-dash>
2023-06-09 11:55:34
0
3,903
Vladimir Fokow
76,439,820
8,510,149
Condition-based forwardfill for pyspark dataframe
<p>Trying to do a forward fill using pyspark. I wonder why filled and filled1 get the same values. What I want is to apply the forward fill only when the flag column equals 1.</p> <pre><code>from pyspark.sql import SparkSession from pyspark.sql.functions import col, expr, when, last from pyspark.sql.window import Window spark = SparkSession.builder.getOrCreate() data = [(&quot;person1&quot;, &quot;1&quot;, 100, 0), (&quot;person1&quot;, &quot;2&quot;, 1000, 1), (&quot;person1&quot;, &quot;3&quot;, 1000, 0), (&quot;person2&quot;, &quot;1&quot;, 5, 0), (&quot;person2&quot;, &quot;2&quot;, 5, 0), (&quot;person2&quot;, &quot;3&quot;, 3, 0), (&quot;person2&quot;, &quot;4&quot;, 10, 1), (&quot;person2&quot;, &quot;5&quot;, 10, 0)] df = spark.createDataFrame(data, [&quot;person&quot;, &quot;date&quot;, &quot;salary&quot;, &quot;flag&quot;]) window = Window.partitionBy(&quot;person&quot;).orderBy(&quot;date&quot;).rowsBetween(Window.unboundedPreceding, Window.currentRow) filled_df = df.withColumn(&quot;filled&quot;, when(col(&quot;flag&quot;) == 1, last(col(&quot;salary&quot;), True).over(window)).otherwise(col(&quot;salary&quot;))) filled_df = filled_df.withColumn(&quot;filled1&quot;, last(col(&quot;salary&quot;), True).over(window)) filled_df.show() </code></pre> <pre><code>+-------+----+------+----+------+-------+ | person|date|salary|flag|filled|filled1| +-------+----+------+----+------+-------+ |person1| 1| 100| 0| 100| 100| |person1| 2| 1000| 1| 1000| 1000| |person1| 3| 1000| 0| 1000| 1000| |person2| 1| 5| 0| 5| 5| |person2| 2| 5| 0| 5| 5| |person2| 3| 3| 0| 3| 3| |person2| 4| 10| 1| 10| 10| |person2| 5| 10| 0| 10| 10| +-------+----+------+----+------+-------+ </code></pre>
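The two columns match because `salary` is never null, so `last(..., ignorenulls=True)` has nothing to skip — the `when` wraps the window function but never produces any nulls for it to jump over. A condition-based forward fill first masks the non-flagged values to null, then carries the last non-null value forward. The same idea in pandas, as a small runnable stand-in for the pyspark logic:

```python
import pandas as pd

df = pd.DataFrame({
    "person": ["p1", "p1", "p1", "p2", "p2", "p2", "p2", "p2"],
    "date":   [1, 2, 3, 1, 2, 3, 4, 5],
    "salary": [100, 1000, 1000, 5, 5, 3, 10, 10],
    "flag":   [0, 1, 0, 0, 0, 0, 1, 0],
})

# keep salary only where flag == 1, then carry that value forward within
# each person -- the masking step is what the pyspark version is missing
df["filled"] = (
    df["salary"].where(df["flag"] == 1)
      .groupby(df["person"]).ffill()
)
print(df)
```

In pyspark the analogous change is to put the condition *inside* the fill, e.g. `last(when(col("flag") == 1, col("salary")), True).over(window)`, so the window function actually sees nulls for the rows where `flag != 1`.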
<python><apache-spark><pyspark>
2023-06-09 11:33:51
1
1,255
Henri
76,439,698
11,198,558
Loguru for multiple python file
<p>I'm new to loguru, and I have a problem logging from multiple files. With the standard Python logging module I only needed to define the logger once, and it would then catch log calls from multiple files, but loguru does not. Specifically, my project tree looks like this:</p> <pre><code>myProject/ |- pythonFile_1.py |- src/ |--- function_1.py |--- function_2.py |--- ... </code></pre> <p>Previously, with the standard Python logging, the logger was defined in <code>pythonFile_1.py</code>:</p> <pre><code>import logging import logging.config from src.function_1 import * def defineLogger(savingPath, fileName, fileTime): logger = logging.getLogger() ... (add some handlers) return defineLogger(...) logging.info(&quot;SOME CONTENT OF MY LOG&quot;) result = function_1_1(...) .... </code></pre> <p>and in the other files <code>function_n.py</code> in the <code>src/</code> folder, I only needed <code>import logging</code>:</p> <pre><code>import logging def function_1_1(): ... logging.info(&quot;SOME OTHER LOG CONTENT&quot;) ... return </code></pre> <p>Then it automatically catches the log output to stdout as well as to the log file.</p> <p>But when I switch to loguru, I define the same logging setup with the same folder structure, and it only prints the log calls made in <code>pythonFile_1.py</code>, not those in any <code>function_n.py</code>.</p> <p>Note that <code>defineLogger</code> is now a <code>defineLoguru</code> function following the docs tutorial, as in the code below:</p> <pre><code>from loguru import logger def defineLoguru(savingPath, fileName, fileTime): logger.add(&lt;add some needed handlers&gt;) return logger logging = defineLoguru() logging.info(&quot;LOG CONTENT&quot;) </code></pre>
<python><logging><loguru>
2023-06-09 11:14:57
1
981
ShanN
76,439,653
12,965,658
TSV to CSV using pandas
<p>I have a TSV file <a href="https://i.sstatic.net/dmyTv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dmyTv.png" alt="enter image description here" /></a></p> <p>I am trying to convert this TSV file to CSV like this:</p> <pre><code>df = pd.read_table(f'downloads/Survey.tsv',sep='\t+', header=None) print(df) </code></pre> <p>It is giving me the following error: <code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte</code></p>
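A `0xff` at position 0 usually means the file starts with a UTF-16 byte-order mark (`FF FE`) rather than UTF-8 — common for TSVs exported from Excel's "Unicode Text" option — so passing `encoding='utf-16'` typically fixes the read. A self-contained sketch that round-trips such a file (the column names are made up for illustration):

```python
import io
import pandas as pd

# simulate a UTF-16 TSV; Python's utf-16 codec writes a BOM first
raw = "name\tscore\nalice\t1\nbob\t2\n".encode("utf-16")
assert raw[:2] == b"\xff\xfe"   # the 0xff from the error message is the BOM

# read the TSV with the right encoding, then write it back out as CSV
df = pd.read_csv(io.BytesIO(raw), sep="\t", encoding="utf-16")

out = io.StringIO()                # use a real path in practice
df.to_csv(out, index=False)
print(out.getvalue())
```

If the encoding is unknown, inspecting the first few bytes of the file (`open(path, 'rb').read(4)`) is usually enough to identify a BOM.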
<python><python-3.x><pandas>
2023-06-09 11:07:07
0
909
Avenger
76,439,649
10,290,585
Langchain Ouput Parser on QA Chain
<p>How does one correctly parse data from load_qa_chain?</p> <p>It is easy to retrieve an answer using the QA chain, but we want the LLM to return two answers, which then parsed by a output parser, PydanticOutputParser.</p> <p>The chain returns:</p> <pre><code>{'output_text': '\n1. Contract item of interest: Termination. \n2. Termination: Yes.} </code></pre> <p>But then trying to parse the result with the parser gives JSONDecodeError: Expecting value: line 1 column 1 (char 0) :</p> <pre><code>parser.parse(result['output_text']) </code></pre> <p>How should I change the chain to be able to correctly parse the output?</p> <p>Here is the chain below:</p> <pre><code>from langchain.chains.question_answering import load_qa_chain from langchain import PromptTemplate from dotenv import load_dotenv from langchain.output_parsers import PydanticOutputParser from pydantic import BaseModel, Field from langchain.embeddings.openai import OpenAIEmbeddings from langchain.vectorstores import Chroma from langchain.llms import AzureOpenAI from langchain.text_splitter import RecursiveCharacterTextSplitter import os from langchain.document_loaders.csv_loader import CSVLoader load_dotenv() def load_embeddings(row_num): &quot;&quot;&quot; This function loads embeddings from source documents. We want to answer questions from these documents. 
&quot;&quot;&quot; embedding_function = OpenAIEmbeddings( openai_api_key=os.getenv(&quot;OPENAI_API_KEY&quot;), deployment=os.getenv('EMBEDDING_DEPLOYMENT_NAME'), model=os.getenv('EMBEDDING_MODEL'), chunk_size=1 ) loader = CSVLoader(file_path='data/contacts.csv', source_column=&quot;Context&quot;, csv_args = {&quot;delimiter&quot;: ',',}) doc = loader.load() contract = [doc[row_num].page_content] text_splitter = RecursiveCharacterTextSplitter( chunk_size = 500, chunk_overlap = 50, length_function = len, ) context_split = text_splitter.create_documents(contract) db = Chroma.from_documents(context_split, embedding_function) return db row_num = 0 db = load_embeddings(row_num) item = &quot;Termination Clause&quot; # The item we want to search for class PersonIntel(BaseModel): contract_item_of_interest: str = Field(description=&quot;Item that is being searched for in the contract&quot;) item: str = Field(description=&quot;Is there any evidence that of the item in the contract? Yes or No&quot;) def to_dict(self): return { &quot;contract_item_of_interest&quot;: self.contract_item_of_interest, &quot;item&quot;: self.item, } llm = AzureOpenAI(deployment_name=os.getenv('CHAT_DEPLOYMENT_NAME'), openai_api_version=&quot;2023-05-15&quot;, model_name=os.getenv('CHAT_MODEL'), temperature=0) query = f&quot;Is there any evidence that of {item} in the contract?&quot; docs = db.similarity_search(query) template = &quot;&quot;&quot; You are a legal assistant. Given the following contract below, Answer the question. Questions: 1. Can you tell me whether {question} is mentioned in this contract? {context} # Your answer must be returned in the following structure: 1. Contract item of interest: {question}. 2. {question}: Yes or No'. 
\n{format_instructions} &quot;&quot;&quot; parser = PydanticOutputParser(pydantic_object=PersonIntel) PROMPT = PromptTemplate( template=template, input_variables=[&quot;summaries&quot;, &quot;question&quot;], partial_variables={ &quot;format_instructions&quot;: parser.get_format_instructions() } ) chain = load_qa_chain(llm, chain_type=&quot;stuff&quot;, prompt=PROMPT, document_variable_name=&quot;summaries&quot;) result = chain({&quot;input_documents&quot;: docs, &quot;question&quot;: item}, return_only_outputs=True) result </code></pre>
<python><langchain>
2023-06-09 11:06:30
1
570
rich
76,439,476
7,848,740
Let Django access serial port in Lubuntu
<p>So, I need my Django app to access the serial port.</p> <p>I'm using it correctly via the pyserial package; the issue is that every time I boot the OS I get an Access denied error.</p> <p>The only way I've found to make it work is to grant access to the specific serial device for everyone via <code>sudo chmod 666 /dev/ttyUSB0</code>.</p> <p>Is there a way to let Django access the serial port anyway, instead of letting everyone access it with <code>666</code>?</p>
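A common fix (not from the question itself, so treat the group and vendor id as assumptions for your device): add the account that runs Django to the group owning the device — usually `dialout` on Debian/Ubuntu — or install a udev rule so the permission is reapplied on every boot:

```shell
# add the user that runs Django to the serial device's group (log out/in afterwards)
sudo usermod -a -G dialout $USER        # or www-data, if Django runs under it

# alternatively, a udev rule applied at every boot
# /etc/udev/rules.d/99-usbserial.rules (vendor id is an example):
#   SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", MODE="0660", GROUP="dialout"
sudo udevadm control --reload-rules && sudo udevadm trigger
```

Either way the `chmod 666` after each reboot becomes unnecessary, and only the intended account gets access.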
<python><django><ubuntu><permissions><serial-port>
2023-06-09 10:42:45
1
1,679
NicoCaldo
76,439,393
5,224,236
Running python scripts from shared folder
<p>Is it possible for users who don't have python installed to run a python selenium script if all the dependencies are present in a shared folder?</p> <p>E.g. if I put the whole python folder and its libraries in a shared folder, could users run it?</p>
<python><selenium-webdriver>
2023-06-09 10:31:01
1
6,028
gaut
76,439,333
7,921,635
python socket module in main() unresolved
<p>Why does everything work fine when I add the socket code at the top of the function, but when I add it inside <code>main</code>, <code>socket</code> suddenly becomes an unresolved reference? I have <code>import socket</code> at the file header.</p> <p>UnboundLocalError: cannot access local variable 'socket' where it is not associated with a value</p> <pre><code>def main(): &quot;&quot;&quot; Preparing socket interface &quot;&quot;&quot; log.info(&quot;Preparing socket: {} )&quot;.format(SOCKET_PATH)) s = socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) if os.path.exists(SOCKET_PATH): os.remove(SOCKET_PATH) s.bind(SOCKET_PATH) s.listen(5) s.settimeout(1) </code></pre>
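The error is name shadowing: the assignment `s = socket = socket.socket(...)` makes `socket` a local variable of `main`, so the module name is no longer visible anywhere in that function — even on the right-hand side of the very line that assigns it. A minimal stdlib reproduction:

```python
import socket

def broken():
    # the mere presence of an assignment to `socket` makes the name
    # local to the whole function, so this lookup fails at runtime
    socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    return socket

def fixed():
    # use a different name for the instance and the module stays visible
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.close()
    return True

try:
    broken()
    failed = False
except UnboundLocalError:
    failed = True

print(failed, fixed())
```

Dropping the `socket = ` part and keeping only `s = socket.socket(...)` resolves the question's error.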
<python><sockets>
2023-06-09 10:22:28
0
427
Nir Vana
76,439,276
5,353,712
OpenAI API error: Why do I still get the "No module named 'openai.embeddings_utils';" error after I upgraded the OpenAI package and Python?
<p>I've been trying to play around with OpenAI embeddings, but got quickly stuck on environment configuration. I've looked at other similar problems questions here and have followed the answers posted there, but I'm still being thrown an error that package does not exist.</p> <p>The code is primitive:</p> <pre><code>import pandas as pd import openai from openai.embeddings_utils import get_embedding </code></pre> <p>Where output is:</p> <pre><code>PS C:\Users\username&gt; python openai.py Traceback (most recent call last): File &quot;C:\Users\username\openai.py&quot;, line 2, in &lt;module&gt; import openai File &quot;C:\Users\username\openai.py&quot;, line 3, in &lt;module&gt; from openai.embeddings_utils import get_embedding ModuleNotFoundError: No module named 'openai.embeddings_utils'; 'openai' is not a package </code></pre> <p>All dependencies should be in order:</p> <pre><code>PS C:\Users\username&gt; pip show openai Name: openai Version: 0.27.8 Summary: Python client library for the OpenAI API Home-page: https://github.com/openai/openai-python Author: OpenAI Author-email: support@openai.com License: Location: c:\users\username\appdata\local\programs\python\python310\lib\site-packages Requires: aiohttp, requests, tqdm Required-by: PS C:\Users\username&gt; python --version Python 3.10.6 PS C:\Users\username&gt; pip --version pip 23.1.2 from C:\Users\username\AppData\Local\Programs\Python\Python310\lib\site-packages\pip (python 3.10) </code></pre> <p>The openai folder in <code>python310\lib\site-packages\</code> contains <code>embedding_utils.py</code>.</p> <p>I've tried uninstalling openai, and reinstalling it. I've also tried to install <code>openai[embeddings]</code>, but none of the options have worked. The other answers have hinted that this may be a problem with Python version, but to me it looks as if everything is OK with Python version as well.</p>
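The traceback gives the cause away: the line `File "C:\Users\username\openai.py", line 2, in <module> import openai` shows the script importing *itself* — the script is named `openai.py`, so `import openai` resolves to the script instead of the installed package (the script's own directory comes first on `sys.path`). Renaming the file and deleting any stale `__pycache__` fixes it. The mechanism can be reproduced with any module name using only the stdlib:

```python
import os
import subprocess
import sys
import tempfile

# create a script named json.py that tries to use the real stdlib json;
# because the script's directory is first on sys.path, `import json`
# finds the script itself and the real module's names are missing
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "json.py")
    with open(path, "w") as f:
        f.write("import json\nprint(json.dumps({'a': 1}))\n")
    proc = subprocess.run([sys.executable, path], capture_output=True, text=True)

print(proc.returncode)   # non-zero: the self-import fails before dumps() runs
```

A quick diagnostic in the real case is `import openai; print(openai.__file__)` — if it prints a path in your home directory instead of `site-packages`, the script is shadowing the package.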
<python><openai-api>
2023-06-09 10:13:24
1
824
Banana
76,439,146
13,935,315
Pyspark StreamingQueryListener QueryTerminatedEvent not fired when using delta tables
<p>I am trying to create a StreamingQueryListener using delta tables. When I try the <a href="https://www.databricks.com/blog/2022/05/27/how-to-monitor-streaming-queries-in-pyspark.html" rel="nofollow noreferrer">example</a> provided by databricks that is using csv's instead of delta tables everything works as expected. However when I try to a similar thing using delta tables, the QueryTerminatedEvent is not fired or will throw an error. Can anyone explain to me what I am doing wrong and how I can solve it?</p> <p>Error:</p> <pre><code>23/06/09 10:34:08 ERROR StreamingQueryListenerBus: Listener PythonStreamingQueryListenerWrapper threw an exception py4j.Py4JException: Error while sending a command. at py4j.CallbackClient.sendCommand(CallbackClient.java:397) at py4j.CallbackClient.sendCommand(CallbackClient.java:356) at py4j.reflection.PythonProxyHandler.invoke(PythonProxyHandler.java:106) at com.sun.proxy.$Proxy30.onQueryTerminated(Unknown Source) at org.apache.spark.sql.streaming.PythonStreamingQueryListenerWrapper.onQueryTerminated(StreamingQueryListener.scala:86) at org.apache.spark.sql.execution.streaming.StreamingQueryListenerBus.doPostEvent(StreamingQueryListenerBus.scala:139) at org.apache.spark.sql.execution.streaming.StreamingQueryListenerBus.doPostEvent(StreamingQueryListenerBus.scala:43) at org.apache.spark.util.ListenerBus.postToAll(ListenerBus.scala:117) at org.apache.spark.util.ListenerBus.postToAll$(ListenerBus.scala:101) at org.apache.spark.sql.execution.streaming.StreamingQueryListenerBus.postToAll(StreamingQueryListenerBus.scala:88) at org.apache.spark.sql.execution.streaming.StreamingQueryListenerBus.onOtherEvent(StreamingQueryListenerBus.scala:108) at org.apache.spark.scheduler.SparkListenerBus.doPostEvent(SparkListenerBus.scala:100) at org.apache.spark.scheduler.SparkListenerBus.doPostEvent$(SparkListenerBus.scala:28) at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37) at 
org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37) at org.apache.spark.util.ListenerBus.postToAll(ListenerBus.scala:117) at org.apache.spark.util.ListenerBus.postToAll$(ListenerBus.scala:101) at org.apache.spark.scheduler.AsyncEventQueue.super$postToAll(AsyncEventQueue.scala:105) at org.apache.spark.scheduler.AsyncEventQueue.$anonfun$dispatch$1(AsyncEventQueue.scala:105) at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62) at org.apache.spark.scheduler.AsyncEventQueue.org$apache$spark$scheduler$AsyncEventQueue$$dispatch(AsyncEventQueue.scala:100) at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.$anonfun$run$1(AsyncEventQueue.scala:96) at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1471) at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.run(AsyncEventQueue.scala:96) Caused by: py4j.Py4JNetworkException: Error while sending a command: c p0 onQueryTerminated ro180 e at py4j.ClientServerConnection.sendCommand(ClientServerConnection.java:253) at py4j.CallbackClient.sendCommand(CallbackClient.java:384) ... 24 more Caused by: py4j.Py4JException: Received empty command at py4j.ClientServerConnection.sendCommand(ClientServerConnection.java:236) ... 
25 more
23/06/09 10:34:08 ERROR StreamingQueryListenerBus: Listener PythonStreamingQueryListenerWrapper threw an exception
py4j.Py4JException: Error while obtaining a new communication channel
	at py4j.CallbackClient.getConnectionLock(CallbackClient.java:257)
	at py4j.CallbackClient.sendCommand(CallbackClient.java:377)
	at py4j.CallbackClient.sendCommand(CallbackClient.java:356)
	at py4j.reflection.PythonProxyHandler.invoke(PythonProxyHandler.java:106)
	at com.sun.proxy.$Proxy30.onQueryTerminated(Unknown Source)
	at org.apache.spark.sql.streaming.PythonStreamingQueryListenerWrapper.onQueryTerminated(StreamingQueryListener.scala:86)
	at org.apache.spark.sql.execution.streaming.StreamingQueryListenerBus.doPostEvent(StreamingQueryListenerBus.scala:139)
	at org.apache.spark.sql.execution.streaming.StreamingQueryListenerBus.doPostEvent(StreamingQueryListenerBus.scala:43)
	at org.apache.spark.util.ListenerBus.postToAll(ListenerBus.scala:117)
	at org.apache.spark.util.ListenerBus.postToAll$(ListenerBus.scala:101)
	at org.apache.spark.sql.execution.streaming.StreamingQueryListenerBus.postToAll(StreamingQueryListenerBus.scala:88)
	at org.apache.spark.sql.execution.streaming.StreamingQueryListenerBus.onOtherEvent(StreamingQueryListenerBus.scala:108)
	at org.apache.spark.scheduler.SparkListenerBus.doPostEvent(SparkListenerBus.scala:100)
	at org.apache.spark.scheduler.SparkListenerBus.doPostEvent$(SparkListenerBus.scala:28)
	at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37)
	at org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:37)
	at org.apache.spark.util.ListenerBus.postToAll(ListenerBus.scala:117)
	at org.apache.spark.util.ListenerBus.postToAll$(ListenerBus.scala:101)
	at org.apache.spark.scheduler.AsyncEventQueue.super$postToAll(AsyncEventQueue.scala:105)
	at org.apache.spark.scheduler.AsyncEventQueue.$anonfun$dispatch$1(AsyncEventQueue.scala:105)
	at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23)
	at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
	at org.apache.spark.scheduler.AsyncEventQueue.org$apache$spark$scheduler$AsyncEventQueue$$dispatch(AsyncEventQueue.scala:100)
	at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.$anonfun$run$1(AsyncEventQueue.scala:96)
	at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1471)
	at org.apache.spark.scheduler.AsyncEventQueue$$anon$2.run(AsyncEventQueue.scala:96)
Caused by: java.net.ConnectException: Connection refused (Connection refused)
	at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412)
	at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255)
	at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237)
	at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
	at java.base/java.net.Socket.connect(Socket.java:609)
	at java.base/java.net.Socket.connect(Socket.java:558)
	at java.base/java.net.Socket.&lt;init&gt;(Socket.java:454)
	at java.base/java.net.Socket.&lt;init&gt;(Socket.java:264)
	at java.base/javax.net.DefaultSocketFactory.createSocket(SocketFactory.java:277)
	at py4j.PythonClient.startClientSocket(PythonClient.java:192)
	at py4j.PythonClient.getConnection(PythonClient.java:213)
	at py4j.CallbackClient.getConnectionLock(CallbackClient.java:250)
	... 25 more
</code></pre> <p>Working csv example:</p> <pre class="lang-py prettyprint-override"><code>import os
import time
from pathlib import Path

from pyspark.sql import functions as f

my_csv_dir = &quot;tmp/&quot;

# Now, start a streaming query that monitors 'my_csv_dir' directory.
# Every time when there are new CSV files arriving here, we will process them.
my_csv = spark.readStream.schema(
    &quot;my_key INT, my_val DOUBLE, _corrupt_record STRING&quot;
).csv(Path(my_csv_dir).as_uri())

# `DataFrame.observe` computes the counts of processed and malformed records,
# and sends an event to the listener.
my_observed_csv = my_csv.observe(
    &quot;metric&quot;,
    f.count(f.lit(1)).alias(&quot;cnt&quot;),  # number of processed rows
    f.count(f.col(&quot;_corrupt_record&quot;)).alias(&quot;malformed&quot;))  # number of malformed rows

my_csv = some_logic(my_csv)
my_query = my_csv.writeStream.foreachBatch(some_logic).format(
    &quot;console&quot;).queryName(&quot;My observer&quot;).start()

# Now, we will write CSV data to be processed in a streaming manner on time.
# This CSV file is all well-formed.
with open(os.path.join(my_csv_dir, &quot;my_csv_1.csv&quot;), &quot;w&quot;) as f:
    _ = f.write(&quot;1,1.1\n&quot;)
    _ = f.write(&quot;123,123.123\n&quot;)

time.sleep(5)  # Assume that another CSV file arrived in 5 seconds.

# Ouch! it has two malformed records out of 3. My observer query should alert it!
with open(os.path.join(my_csv_dir, &quot;my_csv_error.csv&quot;), &quot;w&quot;) as f:
    _ = f.write(&quot;1,1.123\n&quot;)
    _ = f.write(&quot;Ouch! malformed record!\n&quot;)
    _ = f.write(&quot;Arrgggh!\n&quot;)

time.sleep(3)
# OK, all done. Let's stop the query in 5 seconds.
spark.streams.removeListener(my_listener)
</code></pre> <p>Not working delta example:</p> <pre class="lang-py prettyprint-override"><code>import time

import pyspark
from pyspark.sql.streaming import StreamingQueryListener
from pyspark.sql.streaming.listener import (
    QueryStartedEvent,
    QueryProgressEvent,
    QueryTerminatedEvent,
)
from delta import configure_spark_with_delta_pip

builder = (
    pyspark.sql.SparkSession.builder.appName(&quot;MyApp&quot;)
    .config(&quot;spark.sql.extensions&quot;, &quot;io.delta.sql.DeltaSparkSessionExtension&quot;)
    .config(
        &quot;spark.sql.catalog.spark_catalog&quot;,
        &quot;org.apache.spark.sql.delta.catalog.DeltaCatalog&quot;,
    )
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

input_table = &quot;spark/input&quot;
output_table = &quot;spark/output&quot;
mydata1 = &quot;tmp/data_dump/part-00000-17f59480-16e2-418a-960e-285fdd04ee45.c000.snappy.parquet&quot;
mydata2 = &quot;tmp/data_dump/part-00000-248f4fb4-7c85-4b13-baaf-0689b57e54c2.c000.snappy.parquet&quot;


class CustomListener(StreamingQueryListener):
    def onQueryStarted(self, event):
        &quot;&quot;&quot;
        Called when a query is started.

        Parameters
        ----------
        event: :class:`pyspark.sql.streaming.listener.QueryStartedEvent`
            The properties are available as the same as Scala API.

        Notes
        -----
        This is called synchronously with
        meth:`pyspark.sql.streaming.DataStreamWriter.start`,
        that is, ``onQueryStart`` will be called on all listeners before
        ``DataStreamWriter.start()`` returns the corresponding
        :class:`pyspark.sql.streaming.StreamingQuery`.
        Do not block in this method as it will block your query.
        &quot;&quot;&quot;
        print(&quot;STARTED&quot;)

    def onQueryProgress(self, event):
        &quot;&quot;&quot;
        Called when there is some status update (ingestion rate updated, etc.)

        Parameters
        ----------
        event: :class:`pyspark.sql.streaming.listener.QueryProgressEvent`
            The properties are available as the same as Scala API.

        Notes
        -----
        This method is asynchronous. The status in
        :class:`pyspark.sql.streaming.StreamingQuery` will always be latest
        no matter when this method is called. Therefore, the status of
        :class:`pyspark.sql.streaming.StreamingQuery` may be changed
        before/when you process the event. For example, you may find
        :class:`StreamingQuery` is terminated when you are processing
        `QueryProgressEvent`.
        &quot;&quot;&quot;
        print(&quot;PROGRESS&quot;)

    def onQueryTerminated(self, event):
        &quot;&quot;&quot;
        Called when a query is stopped, with or without error.

        Parameters
        ----------
        event: :class:`pyspark.sql.streaming.listener.QueryTerminatedEvent`
            The properties are available as the same as Scala API.
        &quot;&quot;&quot;
        print(&quot;ENDED&quot;)


spark = configure_spark_with_delta_pip(builder).getOrCreate()

mylistener = CustomListener()
spark.streams.addListener(mylistener)

batch_df = spark.read.format(&quot;parquet&quot;).load(
    mydata1
)

# create source delta table
batch_df.write.format(&quot;delta&quot;).save(input_table)
# create target delta table
batch_df.write.format(&quot;delta&quot;).save(output_table)

df = spark.readStream.format(&quot;delta&quot;).load(
    input_table
)

q = (df.writeStream
     .option(&quot;checkpointLocation&quot;, &quot;checkpoint&quot;)
     .outputMode(&quot;append&quot;)
     .format(&quot;delta&quot;)
     .trigger(**{&quot;processingTime&quot;: &quot;1 seconds&quot;})
     .start(output_table)
     )

time.sleep(5)
print(&quot;extra data&quot;)
batch_df2 = spark.read.format(&quot;parquet&quot;).load(
    mydata2
)
batch_df2.write.format(&quot;delta&quot;).mode(&quot;append&quot;).save(input_table)
time.sleep(5)
q.stop()
</code></pre>
<python><apache-spark><pyspark>
2023-06-09 09:54:48
1
331
Jens
76,439,114
202,335
How can I print two specified columns of a table?
<p>When I execute</p> <pre><code>import akshare as ak

# Fetch all stock prices
stock_prices = ak.stock_zh_a_spot()
print(stock_prices)
</code></pre> <p>I get a table which looks like</p> <pre><code>        代码     名称    ζœ€ζ–°δ»·   梨跌钝   ζΆ¨θ·ŒεΉ…    δΉ°ε…₯    卖出    ζ˜¨ζ”Ά    δ»ŠεΌ€    ζœ€ι«˜  \
0  bj430017  星昊医药   9.96  -0.11  -1.092   9.96   9.97  10.07  10.08  10.10
1  bj430047  诺思兰德  15.20   0.22   1.469  15.20  15.21  14.98  15.46  16.16
2  bj430090  εŒθΎ‰δΏ‘ζ―   2.65   0.01   0.379   2.64   2.65   2.64   2.68   2.68
3  bj430139  εŽε²­θ‚‘δ»½  10.82   0.31   2.950  10.76  10.82  10.51  10.48
</code></pre> <p>How can I print two specified columns, which are 名称 and ζœ€ζ–°δ»·? I use</p> <pre><code>for code, price in stock_prices.items():
    print(f&quot;θ‚‘η₯¨δ»£η οΌš{code}οΌŒζœ€ζ–°δ»·ζ ΌοΌš{price['ζœ€ζ–°δ»·']}&quot;)
</code></pre> <p>It doesn't work.</p>
<python>
2023-06-09 09:50:08
2
25,444
Steven
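The question above reduces to selecting columns by label. A minimal sketch, assuming a DataFrame with the column names from the question (the rows here are made-up stand-ins for `ak.stock_zh_a_spot()`, not real quotes):

```python
import pandas as pd

# Hypothetical stand-in for ak.stock_zh_a_spot(); only the column names
# come from the question, the values are invented.
stock_prices = pd.DataFrame({
    "代码": ["bj430017", "bj430047"],
    "名称": ["星昊医药", "诺思兰德"],
    "ζœ€ζ–°δ»·": [9.96, 15.20],
})

# Select the two columns by label. Note that .items() iterates over
# columns as (column_name, Series) pairs, which is why the loop in the
# question does not behave as expected.
subset = stock_prices[["名称", "ζœ€ζ–°δ»·"]]
print(subset)

# Row-wise printing, if each stock should go on its own line:
for _, row in stock_prices.iterrows():
    print(f"εη§°οΌš{row['名称']}οΌŒζœ€ζ–°δ»·οΌš{row['ζœ€ζ–°δ»·']}")
```

`DataFrame.iterrows()` gives `(index, row_Series)` pairs, so `row['ζœ€ζ–°δ»·']` looks up a column value within one row rather than a row within one column.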
76,439,083
665,335
VS Code Python Unit Test tool partially listing unit tests
<p>I have a problem with the VS Code Unit Test tool, which cannot list all the unit tests, as shown below:</p> <p><a href="https://i.sstatic.net/pU8kC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pU8kC.png" alt="enter image description here" /></a></p> <p>The error in Output -&gt; Python is below:</p> <blockquote> <p>~\AppData\Local\Programs\Python\Python39\python.exe ~.vscode\extensions\ms-python.python-2022.16.0\pythonFiles\testing_tools\unittest_discovery.py . test_*.py cwd: . [ERROR 2023-5-9 10:30:10.806]: Error discovering unittest tests: Failed to import test module: DownloadFromADCOrchestrator Traceback (most recent call last): File &quot;C:\Users\tom\AppData\Local\Programs\Python\Python39\lib\unittest\loader.py&quot;, line 470, in _find_test_path package = self._get_module_from_name(name) File &quot;C:\Users\tom\AppData\Local\Programs\Python\Python39\lib\unittest\loader.py&quot;, line 377, in _get_module_from_name __import__(name) File &quot;c:\Users\tom\repos\Rlam.DataPlatform.FunctionApp.PythonEtl\DownloadFromADCOrchestrator\__init__.py&quot;, line 12, in import azure.functions as func ModuleNotFoundError: No module named 'azure'</p> </blockquote> <p>Below is in the requirements.txt</p> <p>azure-functions</p> <p>azure.storage.blob</p> <p>azure.identity</p> <p>azure-functions-durable==1.1.6</p>
<python><unit-testing><visual-studio-code>
2023-06-09 09:46:09
1
8,097
Pingpong
76,438,795
11,198,558
Create pypi project but cannot import after install from pypi
<p>I have created a Python module, packaged it, and pushed it to PyPI. My steps strictly followed this tutorial: <a href="https://packaging.python.org/en/latest/tutorials/packaging-projects/" rel="nofollow noreferrer">https://packaging.python.org/en/latest/tutorials/packaging-projects/</a></p> <p>All the steps work fine, and I then installed this module in a new env (specifically, my module is <a href="https://test.pypi.org/project/sqlserverconnect/0.0.1/" rel="nofollow noreferrer">here</a>). However, after installing this package, I cannot import it, and it raises an error:</p> <pre><code>ModuleNotFoundError                       Traceback (most recent call last)
Cell In[2], line 1
----&gt; 1 import sqlserverconnect

ModuleNotFoundError: No module named 'sqlserverconnect'
</code></pre> <p>My pyproject.toml is</p> <pre><code>[build-system]
requires = [&quot;setuptools&gt;=61.0&quot;]
build-backend = &quot;setuptools.build_meta&quot;

[project]
name = &quot;sqlserverconnect&quot;
#..other fields for metadata
requires-python = &quot;&gt;=3.8&quot;
</code></pre> <p>and my project structure is</p> <pre><code>πŸ“ &lt;root&gt;/
β”œβ”€πŸ“„ pyproject.toml
β””β”€πŸ“ src/
  β”œβ”€πŸ“„ dbConnector.py
  β””β”€πŸ“„ __init__.py
</code></pre> <p>I do not know where the problem comes from or how to solve it. Please help.</p>
<python><pypi>
2023-06-09 09:06:10
0
981
ShanN
76,438,659
536,262
python logging set loglevel for all subloggers of one module
<p>How can I easily set all <code>urllib3.*</code> sub-loggers without having to explicitly set each one or filter?</p> <p>I believed setting <code>urllib3</code> would also set its sub-loggers, but:</p> <pre class="lang-py prettyprint-override"><code>loggers = [logging.getLogger(name) for name in logging.root.manager.loggerDict]
for name in loggers:
    if args.debug:
        name.setLevel(logging.DEBUG)
    else:
        name.setLevel(logging.INFO)

logging.getLogger('urllib3').setLevel(logging.WARN)  # only sets the root, not sub-loggers

print(loggers)
#[ &lt;Logger urllib3.util.retry (DEBUG)&gt;,
# &lt;Logger urllib3.util (DEBUG)&gt;,
# &lt;Logger urllib3 (WARNING)&gt;,
# &lt;Logger urllib3.connection (DEBUG)&gt;,
# &lt;Logger urllib3.response (DEBUG)&gt;,
# &lt;Logger urllib3.connectionpool (DEBUG)&gt;,
# &lt;Logger urllib3.poolmanager (DEBUG)&gt;,
# :
</code></pre> <p>My current workaround is to start all of my modules' logging names with <code>root.</code> and explicitly set everything else to <code>logging.WARN</code>; then I also catch the root logger, but it is not very flexible in naming:</p> <pre class="lang-py prettyprint-override"><code>loggers = [logging.getLogger(name) for name in logging.root.manager.loggerDict]
for name in loggers:
    if not re.search(r'^&lt;Logger\sroot', str(name)):
        name.setLevel(logging.WARN)
    else:
        if args.debug:
            name.setLevel(logging.DEBUG)
        else:
            name.setLevel(logging.INFO)

print(loggers)
</code></pre>
<python><python-logging>
2023-06-09 08:46:53
1
3,731
MortenB
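One detail worth noting for the sub-logger question above: loggers left at `NOTSET` inherit their effective level from the nearest configured ancestor, so setting only the `urllib3` logger is often enough; the explicit per-logger loop in the question is what pins the children to `DEBUG`/`INFO`. To clamp children that were already given an explicit level, a prefix scan over the logger dict works. A sketch (the helper name is mine, not from any library):

```python
import logging


def set_tree_level(prefix: str, level: int) -> None:
    """Set `level` on a logger and on every already-created descendant.

    Loggers whose level is NOTSET inherit from their parent anyway; this
    only matters for children that were given an explicit level earlier.
    """
    logging.getLogger(prefix).setLevel(level)
    for name in list(logging.root.manager.loggerDict):
        if name.startswith(prefix + "."):
            logging.getLogger(name).setLevel(level)


# Demo: give a sub-logger an explicit DEBUG level, then clamp the tree.
logging.getLogger("urllib3.connectionpool").setLevel(logging.DEBUG)
set_tree_level("urllib3", logging.WARNING)
print(logging.getLogger("urllib3.connectionpool").level)  # 30 (WARNING)
```

Iterating `loggerDict` only sees loggers that already exist, so running this early (before `urllib3` creates its sub-loggers) relies on the inheritance behaviour rather than the scan.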
76,438,582
9,704,496
usercustomize.py runs automatically but does not store variables and functions
<p>I have followed the instructions in this topic: <a href="https://stackoverflow.com/questions/51071691">Is there a way to always execute a script when python starts? (similar site.profile in R)</a></p> <p>Part of the output of <code>python3 -m site</code>:</p> <pre><code>USER_SITE: '/home/&lt;myuser&gt;/.local/lib/python3.10/site-packages' (exists)
ENABLE_USER_SITE: True
</code></pre> <p>I have this usercustomize.py:</p> <pre><code>variabile = 'Ciao Mondo.'
print(variabile)
</code></pre> <p>Gnome terminal:</p> <pre><code>&lt;myuser&gt;@&lt;myuser&gt;-&lt;mypc&gt;:~$ python3
Ciao Mondo.
Python 3.10.6 (main, May 29 2023, 11:10:38) [GCC 11.3.0] on linux
Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information.
&gt;&gt;&gt; print(variabile)
Traceback (most recent call last):
  File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt;
NameError: name 'variabile' is not defined
&gt;&gt;&gt;
</code></pre> <p>As you can see, <code>usercustomize.py</code> runs automatically and prints the value of <code>variabile</code> but doesn't store it, neither when I run a script that requests it, nor from the console like in the example above.</p> <p>The solution can't be in <code>PYTHONSTARTUP</code>, because I need to run this file not only when I work in interactive mode but also from scripts.</p> <p>How can I solve the problem?</p>
<python><bash><command-line><ubuntu-22.04><site-packages>
2023-06-09 08:36:03
2
1,127
Mario Palumbo
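For the question above, the key point is that `usercustomize` is imported as its own module, so assignments there stay in that module's namespace and never land in `__main__` or the REPL. One common (if blunt) workaround, sketched under the assumption that a globally visible name is really what is wanted, is to publish it through the `builtins` module, which every frame can see:

```python
# Contents of usercustomize.py (sketch). usercustomize runs in its own
# module namespace, so a plain assignment here is invisible elsewhere.
# Attaching the name to builtins makes it resolvable from any script or
# REPL session, the same way built-in names like `len` are found.
import builtins

variabile = "Ciao Mondo."          # local to this module only
builtins.variabile = variabile     # visible everywhere as a built-in name
print(variabile)
```

After this, `print(variabile)` works both in scripts and in the interactive console, because name lookup falls back to `builtins` last. The usual caveat applies: injecting names into `builtins` affects every program run under that user site and can mask bugs, so many would prefer an explicit `from mymodule import variabile` in each script.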
76,438,537
7,403,431
Show both, major (year) and minor (month) tick
<p>The following minimal example does not show all minor labels. Which minor labels are shown seems to depend on the major labels.</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

dates = pd.date_range(start=&quot;2020-01-01&quot;, end=&quot;2023-01-01&quot;, freq=&quot;MS&quot;)
data = pd.Series(np.random.randint(5, 30, size=len(dates)), index=dates)

plt.plot(data.index, data.values)
ax = plt.gca()

ax.xaxis.set_minor_locator(mdates.MonthLocator())
ax.xaxis.set_minor_formatter(mdates.DateFormatter(&quot;%b&quot;))
ax.set_xticks(
    ax.get_xticks(minor=True),
    ax.get_xticklabels(minor=True),
    rotation=90,
    fontsize=8,
    minor=True,
)

ax.xaxis.set_major_locator(mdates.YearLocator(month=6))
ax.xaxis.set_major_formatter(mdates.DateFormatter(&quot;\n%Y&quot;))

plt.show()
</code></pre> <p>Here is the figure:</p> <p><a href="https://i.sstatic.net/IzwsR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IzwsR.png" alt="enter image description here" /></a></p> <p>How do I show all minor labels (month)?</p>
<python><matplotlib>
2023-06-09 08:30:14
2
1,962
Stefan
76,438,336
583,464
fill nan values with mean or interpolated values in a multidimensional array
<p>I have a big multidimensional array <a href="https://easyupload.io/thhhz0" rel="nofollow noreferrer">here</a> which has many nan values.</p> <p>I want to either calculate the mean value along the first axis (5124) or interpolate the values.</p> <pre><code>import numpy as np

data = np.load('data.npy')
mean = np.nanmean(data, axis=1)
</code></pre> <p>Now, <code>mean</code> has shape <code>5124, 112</code> and <code>data</code> has shape <code>5124, 112, 112</code>, so I am trying:</p> <pre><code>data[np.any(np.isnan(data))][-1, :, :, -1] = mean
</code></pre> <p>but data is still full of nan values.</p> <p>I am not sure how to fill the mean values into the data array.</p> <p>I tried an interpolation method, but it is very slow and memory consuming, so I don't know if there is a better method to fill the nan values.</p>
<python><numpy><multidimensional-array>
2023-06-09 08:01:48
1
5,751
George
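For the NaN-filling question above, broadcasting avoids both the loop and the fancy-indexing attempt: re-insert the reduced axis on the `nanmean` result so it lines up with the 3-D array, then let `np.where` substitute it at the NaN positions. A small sketch with a toy array standing in for the real `(5124, 112, 112)` data:

```python
import numpy as np

# Tiny stand-in for the (5124, 112, 112) array from the question.
data = np.array([[[1.0, np.nan],
                  [3.0, 4.0]],
                 [[np.nan, 6.0],
                  [7.0, np.nan]]])          # shape (2, 2, 2)

# Mean along axis 1, ignoring NaNs -> shape (2, 2), like (5124, 112).
mean = np.nanmean(data, axis=1)

# Re-insert the reduced axis so `mean` broadcasts against `data`,
# then pick the mean wherever `data` is NaN and the data otherwise.
filled = np.where(np.isnan(data), mean[:, np.newaxis, :], data)

print(np.isnan(filled).any())  # False
```

This is fully vectorized (no Python-level loop), so it should scale to the real array; the only remaining NaNs would come from slices that are entirely NaN, where `nanmean` itself has nothing to average.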
76,438,257
11,146,276
Site redirects despite receiving 200 OK?
<p>I'm using the <a href="https://github.com/venomous/cloudscraper" rel="nofollow noreferrer">cloudscraper</a> library on the URL <a href="https://monroe.county-taxes.com/public/search/property_tax?search_query=1573108&amp;redirect=1573108" rel="nofollow noreferrer">https://monroe.county-taxes.com/public/search/property_tax?search_query=1573108&amp;redirect=1573108</a></p> <p>The site redirects to <a href="https://monroe.county-taxes.com/public/real_estate/parcels/1573108/bills?parcel=ef360da9-e509-11eb-9467-005056818710&amp;qid=27a7b62cf21f5b6bf80d9b106b6e286f" rel="nofollow noreferrer">https://monroe.county-taxes.com/public/real_estate/parcels/1573108/bills?parcel=ef360da9-e509-11eb-9467-005056818710&amp;qid=27a7b62cf21f5b6bf80d9b106b6e286f</a> in my browser, which is the target site that I want to extract the data from. However, the responses in the <code>network</code> tab are all <code>200 OK</code>, and I could not find a trace of anything that hints at this URL from the one above. What might be happening here?</p> <p>Minimal reproducible example:</p> <pre><code>import cloudscraper

scraper = cloudscraper.create_scraper()
r = scraper.get(&quot;https://monroe.county-taxes.com/public/search/property_tax?search_query=1573108&amp;redirect=1573108&quot;)
print(r.text)
</code></pre> <p>Edit: I have tried <a href="https://stackoverflow.com/questions/60576039/how-to-get-the-redirected-url-in-web-scraping">this</a> solution, but due to Selenium's subpar speed I prefer to leave it as the very last resort and am looking for a different way.</p>
<python><web-scraping>
2023-06-09 07:51:55
0
428
Firefly
76,438,240
14,282,714
Untokenize specific words in a list
<p>I have a list of strings and I would like to untokenize some specific strings. Imagine having the following list with strings, where I would like to join the words &quot;my&quot; and &quot;apple&quot; only if they appear in that respective order. I was thinking of using the <code>detokenize</code> function from this question: <a href="https://stackoverflow.com/questions/21948019/python-untokenize-a-sentence">Python Untokenize a sentence</a>. Here is some reproducible code:</p> <pre><code>target = &quot;my apple&quot;
words = ['this', 'is', 'my', 'apple', 'and', 'this', 'is', 'not', 'your', 'apple']
</code></pre> <p>Using the detokenizer:</p> <pre><code>from nltk.tokenize.treebank import TreebankWordDetokenizer
TreebankWordDetokenizer().detokenize(['my', 'apple'])
'my apple'
</code></pre> <p>But I am not sure how to use this in a list with multiple strings while specifying a target. Here is the desired output:</p> <pre><code>target_output = ['this', 'is', 'my apple', 'and', 'this', 'is', 'not', 'your', 'apple']
</code></pre> <p>So I was wondering if anyone knows how to detokenize some specific words only if they are next to each other in a list?</p>
<python><string><list><nltk><tokenize>
2023-06-09 07:48:47
1
42,724
Quinten
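A plain-Python sketch for the merging question above, with no NLTK needed: scan the token list left to right and join each adjacent pair that matches the target bigram. The function name is mine, not from any library:

```python
def join_bigram(words, target):
    """Merge each adjacent pair of tokens that, joined by a single
    space, equals `target`, scanning left to right."""
    first, second = target.split(" ", 1)
    out = []
    i = 0
    while i < len(words):
        if i + 1 < len(words) and words[i] == first and words[i + 1] == second:
            out.append(target)   # consume both tokens as one merged string
            i += 2
        else:
            out.append(words[i])
            i += 1
    return out


words = ['this', 'is', 'my', 'apple', 'and', 'this', 'is', 'not', 'your', 'apple']
print(join_bigram(words, "my apple"))
# ['this', 'is', 'my apple', 'and', 'this', 'is', 'not', 'your', 'apple']
```

Because the scan advances two positions on a match, overlapping matches are resolved greedily from the left, which is usually the expected behaviour for this kind of token merging.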
76,438,065
3,193,730
How to load ghosted objects with ZODB in python?
<p>I am writing a REST interface with Flask in Python, using ZODB for storage.</p> <p>This is my code for the GET method:</p> <pre><code>@app.get(&quot;/queues&quot;)
def get_queues():
    connection = db.open()
    root = connection.root
    if (hasattr(root, QUEUES_COLLECTION)):
        myQueues = root()[QUEUES_COLLECTION].values()
        #for obj in myQueues:
        #    print(obj.title)
        jsonQueues = orjson.dumps([obj.__dict__ for obj in myQueues])
        connection.close()
        return jsonQueues
    connection.close()
    return &quot;ERROR no queues&quot;
</code></pre> <p>When accessing <code>[ipaddress]:[port]/queues</code> I get <code>[{}{}]</code> as a response, which is due to ZODB loading objects as ghosts. I know I can load them by iterating through the collection and accessing attributes (as shown within the code comments), but is there a better way?</p>
<python><flask><zodb>
2023-06-09 07:24:22
1
2,045
Miljac
76,437,967
5,450,855
"Failed to establish a new connection" Azure App Service
<p>We have a custom Docker image that runs a Tomcat and exposes port 7000 to handle traffic. We store this image in a container registry accessible from our Azure subscription.</p> <p>We deploy the image using Azure App Service and use the <code>WEBSITES_PORT</code> configuration to bind port 7000. However, the App Service fails to start with the following error in the App Service logs:</p> <pre><code>Exception in thread Thread-1:
Traceback (most recent call last):
  File &quot;/usr/lib64/az/lib/python3.6/site-packages/urllib3/connection.py&quot;, line 175, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw
  File &quot;/usr/lib64/az/lib/python3.6/site-packages/urllib3/util/connection.py&quot;, line 96, in create_connection
    raise err
  File &quot;/usr/lib64/az/lib/python3.6/site-packages/urllib3/util/connection.py&quot;, line 86, in create_connection
    sock.connect(sa)
OSError: [Errno 113] No route to host

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File &quot;/usr/lib64/az/lib/python3.6/site-packages/urllib3/connectionpool.py&quot;, line 706, in urlopen
    chunked=chunked,
  File &quot;/usr/lib64/az/lib/python3.6/site-packages/urllib3/connectionpool.py&quot;, line 382, in _make_request
    self._validate_conn(conn)
  File &quot;/usr/lib64/az/lib/python3.6/site-packages/urllib3/connectionpool.py&quot;, line 1010, in _validate_conn
    conn.connect()
  File &quot;/usr/lib64/az/lib/python3.6/site-packages/urllib3/connection.py&quot;, line 358, in connect
    conn = self._new_conn()
  File &quot;/usr/lib64/az/lib/python3.6/site-packages/urllib3/connection.py&quot;, line 187, in _new_conn
    self, &quot;Failed to establish a new connection: %s&quot; % e
urllib3.exceptions.NewConnectionError: &lt;urllib3.connection.HTTPSConnection object at 0x7ff2e335b358&gt;: Failed to establish a new connection: [Errno 113] No route to host

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File &quot;/usr/lib64/python3.6/threading.py&quot;, line 916, in _bootstrap_inner
    self.run()
  File &quot;/usr/lib64/python3.6/threading.py&quot;, line 864, in run
    self._target(*self._args, **self._kwargs)
  File &quot;/usr/lib64/az/lib/python3.6/site-packages/azure/cli/command_modules/appservice/custom.py&quot;, line 2598, in _get_log
    preload_content=False
  File &quot;/usr/lib64/az/lib/python3.6/site-packages/urllib3/request.py&quot;, line 75, in request
    method, url, fields=fields, headers=headers, **urlopen_kw
  File &quot;/usr/lib64/az/lib/python3.6/site-packages/urllib3/request.py&quot;, line 96, in request_encode_url
    return self.urlopen(method, url, **extra_kw)
  File &quot;/usr/lib64/az/lib/python3.6/site-packages/urllib3/poolmanager.py&quot;, line 375, in urlopen
    response = conn.urlopen(method, u.request_uri, **kw)
  File &quot;/usr/lib64/az/lib/python3.6/site-packages/urllib3/connectionpool.py&quot;, line 796, in urlopen
    **response_kw
  File &quot;/usr/lib64/az/lib/python3.6/site-packages/urllib3/connectionpool.py&quot;, line 796, in urlopen
    **response_kw
  File &quot;/usr/lib64/az/lib/python3.6/site-packages/urllib3/connectionpool.py&quot;, line 796, in urlopen
    **response_kw
  File &quot;/usr/lib64/az/lib/python3.6/site-packages/urllib3/connectionpool.py&quot;, line 756, in urlopen
    method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
  File &quot;/usr/lib64/az/lib/python3.6/site-packages/urllib3/util/retry.py&quot;, line 574, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='[app-service-name].scm.[ase-name].appserviceenvironment.net', port=443): Max retries exceeded with url: /logstream (Caused by NewConnectionError('&lt;urllib3.connection.HTTPSConnection object at 0x7ff2e335b358&gt;: Failed to establish a new connection: [Errno 113] No route to host',))
</code></pre> <p>My guess is that when the App Service starts, it pulls some code from the SCM URL.</p> <p>The App Service is deployed behind an NSG with ports 80/443 open to both inbound and outbound traffic.</p>
<python><azure><azure-web-app-service><azure-cli>
2023-06-09 07:06:13
1
415
Aniruddha Bera
76,437,866
4,451,521
pandas concat does not respect the order of the columns
<p>I don't know if this is related to <a href="https://stackoverflow.com/questions/76437849/can-not-print-or-show-the-keys-of-a-dataframe-but-can-save-it-to-a-csv">my previous problem</a> (and I did not want to put two problems in a single question), but I have two dataframes.</p> <p>(One of these cannot be printed, as the previous question states.)</p> <p>When I do</p> <pre><code>df = pd.concat([df_1, df_2])
</code></pre> <p>the resulting dataframe has the keys in a totally wrong order (not like the keys of <code>df_1</code> and the keys of <code>df_2</code>). What could be happening?</p>
<python><pandas>
2023-06-09 06:50:48
2
10,576
KansaiRobot
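On the column-order question above: `pd.concat` aligns the frames on the union of their columns, and (depending on the pandas version and the `sort` flag) the order of that union need not match either input. A robust sketch is to force an explicit order afterwards with `reindex`:

```python
import pandas as pd

# Two toy frames whose column sets overlap but are not identical.
df_1 = pd.DataFrame({"b": [1], "a": [2]})
df_2 = pd.DataFrame({"b": [3], "a": [4], "c": [5]})

# Keep df_1's column order first, then any extra columns from df_2,
# regardless of what order concat happens to produce.
cols = list(df_1.columns) + [c for c in df_2.columns if c not in df_1.columns]
df = pd.concat([df_1, df_2]).reindex(columns=cols)

print(list(df.columns))  # ['b', 'a', 'c']
```

On older pandas versions, passing `sort=False` to `pd.concat` also prevents the alphabetical sorting of non-aligned columns; the explicit `reindex` works the same on any version.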
76,437,849
4,451,521
Can not print or show the keys of a dataframe but can save it to a csv
<p>I am having a weird problem while debugging a colleague's script.</p> <p>There is a pandas dataframe.</p> <p>When I try to print it with <code>print(df)</code>, or print its keys with <code>print(df.keys())</code>, I get</p> <pre><code>UnicodeEncodeError: 'ascii' codec can't encode characters in position 72-73: ordinal not in range(128)
</code></pre> <p>but when I save it to a file with <code>df.to_csv('a_file.csv')</code> I get a normal file.</p>
<python><pandas>
2023-06-09 06:47:25
0
10,576
KansaiRobot
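A plausible explanation for the question above: `to_csv` writes with its own encoding (UTF-8 by default), while `print` goes through `sys.stdout`, whose encoding comes from the locale; under an ASCII locale, any non-ASCII character in the frame makes `print` fail. A sketch of reconfiguring the stream in-process (available on Python 3.7+):

```python
import sys

# print() fails because stdout's encoding is ASCII (e.g. an unset locale),
# while to_csv() encodes with its own setting and is unaffected.
# Text streams can be switched to UTF-8 at runtime:
if hasattr(sys.stdout, "reconfigure"):
    sys.stdout.reconfigure(encoding="utf-8", errors="replace")

print("ζœ€ζ–°δ»·")  # no UnicodeEncodeError once stdout encodes UTF-8
```

Alternatively, setting `PYTHONIOENCODING=utf-8` (or a UTF-8 locale such as `LANG=C.UTF-8`) in the environment before launching Python achieves the same without code changes.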
76,437,644
12,884,304
sqlalchemy.orm.exc.ObjectDeletedError: Instance has been deleted, or its row is otherwise not present
<p>I have 3 tables for an M2M relation:</p> <pre><code>class MoneyManagementsResult(Base):
    __tablename__ = &quot;money_managements_results&quot;

    strategies = relationship(
        &quot;Strategy&quot;,
        secondary=&quot;strategy_mm_result&quot;,
        back_populates=&quot;mm_results&quot;
    )


class Strategy(Base):
    __tablename__ = &quot;strategies&quot;

    mm_results = relationship(
        &quot;MoneyManagementsResult&quot;,
        secondary=&quot;strategy_mm_result&quot;,
        back_populates=&quot;strategies&quot;
    )


class StrategyMMResult(Base):
    __tablename__ = &quot;strategy_mm_result&quot;

    strategy_id = Column(Integer, ForeignKey(&quot;strategies.id&quot;), primary_key=True)
    mm_result_id = Column(Integer, ForeignKey(&quot;money_managements_results.id&quot;), primary_key=True)
</code></pre> <p>I want to create a <code>money_management_result</code> and add a <code>strategy</code> to its <code>strategies</code>.</p> <pre><code>money_management_result = MoneyManagementsResult(...)
session.add(money_management_result)
session.commit()
money_management_result.strategies.append(strategy)
session.commit()
</code></pre> <p>I get the error <code>sqlalchemy.orm.exc.ObjectDeletedError: Instance '&lt;MoneyManagementsResult at 0x7fd05cb3bd10&gt;' has been deleted, or its row is otherwise not present.</code></p> <p><strong>Traceback</strong></p> <pre><code>/modeling/stats_output/layer_choice.py:292: SAWarning: Column 'money_managements_results.id' is marked as a member of the primary key for table 'money_managements_results', but has no Python-side or server-side default generator indicated, nor does it indicate 'autoincrement=True' or 'nullable=True', and no explicit value is passed. Primary key columns typically may not store NULL. Note that as of SQLAlchemy 1.1, 'autoincrement=True' must be indicated explicitly for composite (e.g. multicolumn) primary keys if AUTO_INCREMENT/SERIAL/IDENTITY behavior is expected for one of the columns in the primary key. CREATE TABLE statements are impacted by this change as well on most backends.
  session.commit()
---------------------------------------------------------------------------
ObjectDeletedError                        Traceback (most recent call last)
Cell In[2], line 1
----&gt; 1 find_limit_results(1, 1, 4, 2.5, 1)

File /modeling/stats_output/layer_choice.py:293, in find_limit_results(sample_id, strategy_id, factor, leverage, money_management_id)
    291 session.add(money_managements_result)
    292 session.commit()
--&gt; 293 print(money_managements_result.id)
    294 session.refresh(money_managements_result)
    295 # session.flush()

File /usr/local/lib/python3.11/site-packages/sqlalchemy/orm/attributes.py:487, in InstrumentedAttribute.__get__(self, instance, owner)
    482 except AttributeError as err:
    483     util.raise_(
    484         orm_exc.UnmappedInstanceError(instance),
    485         replace_context=err,
    486     )
--&gt; 487 return self.impl.get(state, dict_)

File /usr/local/lib/python3.11/site-packages/sqlalchemy/orm/attributes.py:959, in AttributeImpl.get(self, state, dict_, passive)
    956 if not passive &amp; CALLABLES_OK:
    957     return PASSIVE_NO_RESULT
--&gt; 959 value = self._fire_loader_callables(state, key, passive)
    961 if value is PASSIVE_NO_RESULT or value is NO_VALUE:
    962     return value

File /usr/local/lib/python3.11/site-packages/sqlalchemy/orm/attributes.py:990, in AttributeImpl._fire_loader_callables(self, state, key, passive)
    984 def _fire_loader_callables(self, state, key, passive):
    985     if (
    986         self.accepts_scalar_loader
    987         and self.load_on_unexpire
    988         and key in state.expired_attributes
    989     ):
--&gt; 990         return state._load_expired(state, passive)
    991     elif key in state.callables:
    992         callable_ = state.callables[key]

File /usr/local/lib/python3.11/site-packages/sqlalchemy/orm/state.py:712, in InstanceState._load_expired(self, state, passive)
    705 toload = self.expired_attributes.intersection(self.unmodified)
    706 toload = toload.difference(
    707     attr
    708     for attr in toload
    709     if not self.manager[attr].impl.load_on_unexpire
    710 )
--&gt; 712 self.manager.expired_attribute_loader(self, toload, passive)
    714 # if the loader failed, or this
    715 # instance state didn't have an identity,
    716 # the attributes still might be in the callables
    717 # dict. ensure they are removed.
    718 self.expired_attributes.clear()

File /usr/local/lib/python3.11/site-packages/sqlalchemy/orm/loading.py:1465, in load_scalar_attributes(mapper, state, attribute_names, passive)
   1462 # if instance is pending, a refresh operation
   1463 # may not complete (even if PK attributes are assigned)
   1464 if has_key and result is None:
-&gt; 1465     raise orm_exc.ObjectDeletedError(state)

ObjectDeletedError: Instance '&lt;MoneyManagementsResult at 0xffff8b545610&gt;' has been deleted, or its row is otherwise not present.
</code></pre> <p>After <code>session.flush()</code> I get</p> <p><code>money_managements_result.id == None</code></p> <p>After <code>session.refresh(money_managements_result)</code> I get</p> <pre><code>InvalidRequestError                       Traceback (most recent call last)
Cell In[2], line 1
----&gt; 1 find_limit_results(1, 1, 4, 2.5, 1)

File /modeling/stats_output/layer_choice.py:294, in find_limit_results(sample_id, strategy_id, factor, leverage, money_management_id)
    292 session.flush()
    293 print(money_managements_result.id)
--&gt; 294 session.refresh(money_managements_result)
    295 # session.flush()
    296 money_managements_result.strategies.append(strategy)  # error

File /usr/local/lib/python3.11/site-packages/sqlalchemy/orm/session.py:2355, in Session.refresh(self, instance, attribute_names, with_for_update)
   2343 stmt = sql.select(object_mapper(instance))
   2344 if (
   2345     loading.load_on_ident(
   2346         self,
   (...)
   2353     is None
   2354 ):
-&gt; 2355     raise sa_exc.InvalidRequestError(
   2356         &quot;Could not refresh instance '%s'&quot; % instance_str(instance)
   2357     )

InvalidRequestError: Could not refresh instance '&lt;MoneyManagementsResult at 0xffff68089050&gt;'
</code></pre> <p>I solved it with</p> <pre><code>session.add(money_managements_result)
session.commit()
mm_result_id = session.query(MoneyManagementsResult).order_by(MoneyManagementsResult.id.desc()).first().id
strategy_mm_result = StrategyMMResult(strategy_id=strategy.id, mm_result_id=mm_result_id)
session.add(strategy_mm_result)
session.commit()
</code></pre> <p>But it's a bad solution.</p> <p>I use <code>python=3.11.3</code> and <code>sqlalchemy=1.4.48</code>.</p>
<python><postgresql><sqlalchemy>
2023-06-09 06:10:37
1
331
unknown
76,437,563
2,137,570
Beautifulsoup script tag extract data
<p>Trying to extract values, from the script tag but get no output. Unsure what I'm doing wrong.</p> <p>Any help would be much appreciated.</p> <pre><code>detail_url address beds baths price </code></pre> <p>Code</p> <pre><code>from bs4 import BeautifulSoup import json def extract_property_info(html): soup = BeautifulSoup(html, 'html.parser') script_tag = soup.find('script', {'type': 'application/json', 'ss-data-keyy': 'mobileSearchPageStore'}) if script_tag: script_content = script_tag.contents[0] try: json_data = BeautifulSoup(script_content, 'html.parser').string.strip() data = json.loads(json_data) # Extract the desired values detail_url = data['detailUrl'] address = data['address'] beds = data['beds'] baths = data['baths'] price = data['price'] # Return the extracted values return detail_url, address, beds, baths, price except (AttributeError, json.JSONDecodeError): pass return None # HTML content html_content = ''' &lt;html&gt; &lt;body&gt; &lt;script type=&quot;application/json&quot; ss-data-keyy=&quot;mobileSearchPageStore&quot;&gt; &lt;!--{&quot;queryState&quot;:numberFormatter&quot;:&quot;0,0&quot;,&quot;inputFormatter&quot;:&quot;0.[0]a&quot;}},&quot;sortOrder&quot;:1,&quot;type&quot;:&quot;Range&quot;,&quot;defaultValue&quot;:{&quot;min&quot;:null,&quot;max&quot;:null},&quot;suggestedEnums&quot;:500&quot;,&quot;value&quot;:500},{&quot;url&quot;:&quot;https://photos.example.com/fp/e0ba89ff3ebcb9d689dcf6d0ffd87868-p_e.jpg&quot;}],&quot;detailUrl&quot;:&quot;https://www.example.com/homedetails/111-S-1316th-St-Arizona-AZ-3324332/33327367_ppid/&quot;,&quot;statusType&quot;:&quot;FOR_SALE&quot;,&quot;statusText&quot;:&quot;home for buy&quot;,&quot;countryCurrency&quot;:&quot;$&quot;,&quot;price&quot;:&quot;$269,000&quot;,&quot;unformattedPrice&quot;:269000,&quot;address&quot;:&quot;111 S 1316th St Arizona AZ 33243322&quot;,&quot;addressStreet&quot;:&quot;111 S 1316th 
St&quot;,&quot;addressCity&quot;:&quot;Arizona&quot;,&quot;addressState&quot;:&quot;OH&quot;,&quot;addressZipcode&quot;:&quot;33243322&quot;,&quot;isUndisclosedAddress&quot;:false,&quot;beds&quot;:4,&quot;baths&quot;:2.0,&quot;area&quot;:2467,&quot;latLong&quot;:{&quot;latitude&quot;:12.12345,&quot;longitude&quot;:12.12345},&quot;isexampleOwned&quot;:false,&quot;variableData&quot;:{&quot;type&quot;:&quot;DAYS_ON&quot;,&quot;text&quot;:&quot;2 days on example&quot;},&quot;badgeInfo&quot;:null,}],}--&gt; &lt;/script&gt; &lt;/body&gt; &lt;/html&gt; ''' # Call the function and print the extracted values result = extract_property_info(html_content) print(&quot;Result: &quot; , result) if result is not None: detail_url, address, beds, baths, price = result print(&quot;Detail URL:&quot;, detail_url) print(&quot;Address:&quot;, address) print(&quot;Beds:&quot;, beds) print(&quot;Baths:&quot;, baths) print(&quot;Price:&quot;, price) else: print(&quot;Unable to extract property information.&quot;) </code></pre>
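Worth noting: the payload here sits inside an HTML comment (`<!-- ... -->`) and is not strictly valid JSON (note the trailing commas), so `json.loads` will fail even after the comment markers are stripped. A minimal standard-library sketch of a regex fallback that pulls single fields out of such almost-JSON text; `script_text` below is a trimmed, hypothetical stand-in for the real page content:

```python
import re

# Trimmed, hypothetical stand-in for the script payload: JSON-like text
# wrapped in an HTML comment, with a trailing comma that breaks json.loads.
script_text = (
    '<!--{"detailUrl":"https://www.example.com/homedetails/33327367_ppid/",'
    '"price":"$269,000","address":"111 S 1316th St Arizona AZ 33243322",'
    '"beds":4,"baths":2.0,}-->'
)

def grab(field, text):
    """Pull one "field":value pair out of possibly-invalid JSON text."""
    # Matches either a quoted string value or a bare number value.
    m = re.search(r'"%s"\s*:\s*(?:"([^"]*)"|([\d.]+))' % re.escape(field), text)
    if not m:
        return None
    # Group 1 is the string form, group 2 the bare-number form.
    return m.group(1) if m.group(1) is not None else m.group(2)

detail_url = grab("detailUrl", script_text)
price = grab("price", script_text)
beds = grab("beds", script_text)
baths = grab("baths", script_text)
```

This sidesteps parsing the whole blob; for the full structure, the invalid JSON would first need repairing (e.g. stripping trailing commas) before `json.loads`.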
<python><beautifulsoup><tags>
2023-06-09 05:56:37
1
5,998
Lacer
76,437,247
9,401,203
Creating variables from a python dictionary with multiple values for each key using a for loop
<p>I have an audio (wav) file imported into Python using the <code>pydub</code> package. I have a dictionary consisting of each key matching to 2 values. The key is meant to be used as the name for the variable in a subsequent step for variable creation. The values are to be used to designate as the cut-off points to cut the audio file (start and end timestamp).</p> <p>I want to create a for loop that creates multiple variables with the name of the key and the corresponding value to be the sliced audio file, with the audio file variable being named &quot;audio&quot; (e.g. audio[0:300000])</p> <p>Here is a sample of my dictionary:</p> <pre><code>pairs = { &quot;part1&quot;: [0, 300000], &quot;part2&quot;: [300001, 600000], &quot;part3&quot;: [600001, 900000], &quot;part4&quot;: [900001, 1200000] } </code></pre> <p>I've written the following code, but I'm unsure how to dynamically create a variable with the actual sliced audio file in a for loop.</p> <pre><code>for key, start_end in pairs.items(): start, end = start_end sliced_audio = audio[start:end] </code></pre> <p>Other SO posts mentioned the following, but I do not want strings. I want the actual variables with the sliced audio file in them. Thanks!</p> <pre><code>print(f&quot;{key} = {sliced_audio}&quot;) </code></pre>
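Rather than creating dynamically named variables (which would require `exec`/`globals()` tricks and is generally discouraged), the idiomatic approach is to keep the slices in a dict keyed by those same names. A minimal sketch, using a plain list as a stand-in for the pydub AudioSegment, since both support `audio[start:end]` slicing:

```python
pairs = {
    "part1": [0, 3],
    "part2": [3, 6],
    "part3": [6, 9],
}

# Stand-in for the pydub AudioSegment from the question; a real AudioSegment
# supports the same audio[start:end] slicing (in milliseconds).
audio = list(range(10))

# One dict entry per key, instead of one dynamically created variable per key.
slices = {key: audio[start:end] for key, (start, end) in pairs.items()}

# Look up a slice by the same name the variable would have had.
part1 = slices["part1"]
```

With the real file, `slices["part1"]` would hold `audio[0:300000]` and can be exported or processed exactly like a named variable would be.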
<python><for-loop><pydub>
2023-06-09 04:38:07
1
1,210
DTYK
76,437,150
3,614,197
Creating a widget to allow a user to open a dialog box to navigate to an Excel file, then select the tab and pass it to a dataframe
<p>I want to create a widget that allows a user to navigate folders and files to select an Excel file, then choose which sheet is required, and then create a pandas dataframe from the selection made.</p> <p>I am able to get the widget to navigate the folder structure and make a selection; however, the drop-down list does not populate with sheet names.</p> <p>Code below:</p> <pre><code>import ipywidgets as widgets import pandas as pd from IPython.display import display # Function to read Excel sheet and create a DataFrame def read_excel_sheet(file_contents, sheet_name): df = pd.read_excel(file_contents, sheet_name=sheet_name) return df # Create FileUpload widget file_upload = widgets.FileUpload() # Create Dropdown widget for sheet selection sheet_dropdown = widgets.Dropdown(options=[], description='Select Sheet') # Function to update dropdown options when file is uploaded def update_sheet_dropdown(change): uploaded_file = next(iter(file_upload.value)) file_contents = file_upload.value[uploaded_file]['content'] excel_file = pd.ExcelFile(file_contents) sheet_names = excel_file.sheet_names sheet_dropdown.options = sheet_names # Event listener for file upload file_upload.observe(update_sheet_dropdown, names='value') # Display widgets display(file_upload) display(sheet_dropdown) # Function to handle sheet selection and DataFrame creation def handle_sheet_selection(change): uploaded_file = next(iter(file_upload.value)) file_contents = file_upload.value[uploaded_file]['content'] sheet_name = sheet_dropdown.value df = read_excel_sheet(file_contents, sheet_name) # Use the DataFrame 'df' as desired print(df) # Event listener for sheet selection sheet_dropdown.observe(handle_sheet_selection, names='value') </code></pre> <p>There are no error messages, so I'm not sure where things are going wrong.</p>
<python><pandas><widget><ipywidgets>
2023-06-09 04:10:11
1
636
Spooked
76,437,117
1,870,832
json2token not found when using the Donut VisionEncoderDecoderModel from Huggingface transformers
<p>I am trying to fine-tune a Donut (Document Understanding) Huggingface Transformer model, but am getting hung up trying to create a <code>DonutDataset</code> object. I have the following code (running in google colab):</p> <pre><code>!pip install transformers datasets sentencepiece donut-python from google.colab import drive from donut.util import DonutDataset from transformers import DonutProcessor, VisionEncoderDecoderModel, VisionEncoderDecoderConfig drive.mount('/content/drive/') projectdir = 'drive/MyDrive/donut' donut_version = 'naver-clova-ix/donut-base-finetuned-cord-v2' # 'naver-clova-ix/donut-base' config = VisionEncoderDecoderConfig.from_pretrained(donut_version) config.decoder.max_length = 768 processor = DonutProcessor.from_pretrained(donut_version) model = VisionEncoderDecoderModel.from_pretrained(donut_version, config=config) train_dataset = DonutDataset(f'{projectdir}/input_doc_images', model, #'naver-clova-ix/donut-base-finetuned-cord-v2', max_length=config.decoder.max_length, split=&quot;train&quot;, task_start_token=&quot;&quot;, prompt_end_token=&quot;&quot;, sort_json_key=True, ) </code></pre> <p>...however, the last line is throwing the following error:</p> <pre><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) &lt;ipython-input-8-9d831be996e6&gt; in &lt;cell line: 4&gt;() 2 3 max_length = 768 ----&gt; 4 train_dataset = DonutDataset(f'{projectdir}/input_doc_images', 5 model, 6 #'naver-clova-ix/donut-base-finetuned-cord-v2', 2 frames /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in __getattr__(self, name) 1612 if name in modules: 1613 return modules[name] -&gt; 1614 raise AttributeError(&quot;'{}' object has no attribute '{}'&quot;.format( 1615 type(self).__name__, name)) 1616 AttributeError: 'VisionEncoderDecoderModel' object has no attribute 'json2token' </code></pre> <p>I'm a little confused because my <code>model</code> object is a 
<code>'naver-clova-ix/donut-base-finetuned-cord-v2'</code> model, which, according to <a href="https://github.com/clovaai/donut/blob/master/donut/model.py#L498" rel="nofollow noreferrer">this line in model.py of the Donut GitHub repo</a>, does in fact appear to have a <code>json2token</code> method.</p> <p>What am I missing?</p> <p>By the way, you can view/copy my underlying data (images and a JSON-lines metadata file) from my Google Drive 'donut' folder here: <a href="https://drive.google.com/drive/folders/1Gsr7d7Exvtx5PqjZQv2nXP9-pPDUEIOx?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/drive/folders/1Gsr7d7Exvtx5PqjZQv2nXP9-pPDUEIOx?usp=sharing</a></p>
<python><deep-learning><pytorch><huggingface-transformers><huggingface-datasets>
2023-06-09 03:58:30
1
9,136
Max Power
76,436,971
4,260,141
Numpy memory growth while using conditionals
<p>I am seeing high memory growth while using broadcast operations:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np nrows = 9000000 ncols = 4 data = np.random.rand(nrows, ncols) x_min = np.random.rand(nrows) x_max = np.random.rand(nrows) # Create Boolean arrays for the bbox subselection (this is where the out-of-bounds memory usage happens) x_cond = (data[:, 0, np.newaxis] &gt;= x_min) &amp; (data[:, 0, np.newaxis] &lt;= x_max) # similarly some y_cond # and some z_cond conditions = x_cond &amp; y_cond &amp; z_cond subsel_block = np.where(conditions, data[:, 3, np.newaxis], np.nan) </code></pre> <p>x_min and x_max are always of shape data.shape[0] and are dynamically calculated from the values of the 1st column, so I am hiding that implementation for simplicity's sake.</p>
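One hedged sketch of a common workaround: process the broadcast in chunks along the row axis, so the full (nrows, nrows) boolean intermediate never materializes at once (at nrows = 9,000,000 that single array would already be on the order of 80 TB of bool bytes). The per-chunk `sum` below is only a placeholder for whatever per-row reduction is actually needed, and the sizes are scaled down from the question's 9,000,000 rows:

```python
import numpy as np

nrows, ncols = 10_000, 4  # scaled down from the question's 9_000_000
rng = np.random.default_rng(0)
data = rng.random((nrows, ncols))
x_min = rng.random(nrows)
x_max = rng.random(nrows)

# The full broadcast materializes an (nrows, nrows) boolean array in one go.
# Chunking the row axis caps the peak at (chunk, nrows) booleans per step.
chunk = 2_000
counts = np.empty(nrows, dtype=np.int64)
for lo in range(0, nrows, chunk):
    hi = min(lo + chunk, nrows)
    col = data[lo:hi, 0, np.newaxis]
    cond = (col >= x_min) & (col <= x_max)
    counts[lo:hi] = cond.sum(axis=1)  # reduce each chunk before moving on
```

The same chunking applies to the `y_cond`/`z_cond` combination and the `np.where` step, as long as the final per-row result is small.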
<python><arrays><numpy><array-broadcasting>
2023-06-09 03:11:36
0
537
datapanda
76,436,872
3,312,274
web2py Select min value from join
<p>How do I write the corresponding web2py statement for the following query:</p> <pre><code>select auth_user.id, min(auth_group.ranks) as highest_gr from auth_user left join auth_membership on auth_user.id = auth_membership.user_id, left join auth_group on auth_membership.group_id = auth_group.id </code></pre> <p>I haven't written pure SQL in a while, there must be a <code>group by</code> somewhere but the idea is there.</p> <p>Edit: I'm trying to retrieve all records from auth_user with their corresponding highest group ranks.</p>
<python><web2py>
2023-06-09 02:47:36
1
565
JeffP
76,436,620
3,821,009
How to use the interval values in a categorical column returned by `.hist()` in Polars?
<p>Let's say I have this:</p> <pre><code>&gt;&gt;&gt; df = pl.DataFrame(dict(j=numpy.random.randint(10, 99, 20))) &gt;&gt;&gt; df shape: (20, 1) β”Œβ”€β”€β”€β”€β”€β” β”‚ j β”‚ β”‚ --- β”‚ β”‚ i64 β”‚ β•žβ•β•β•β•β•β•‘ β”‚ 47 β”‚ β”‚ 22 β”‚ β”‚ 82 β”‚ β”‚ 19 β”‚ β”‚ … β”‚ β”‚ 28 β”‚ β”‚ 94 β”‚ β”‚ 21 β”‚ β”‚ 38 β”‚ β””β”€β”€β”€β”€β”€β”˜ &gt;&gt;&gt; df.get_column('j').hist([10, 20, 30, 50]) shape: (5, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ break_point ┆ category ┆ j_count β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ f64 ┆ cat ┆ u32 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ══════════════β•ͺ═════════║ β”‚ 10.0 ┆ (-inf, 10.0] ┆ 0 β”‚ β”‚ 20.0 ┆ (10.0, 20.0] ┆ 4 β”‚ β”‚ 30.0 ┆ (20.0, 30.0] ┆ 5 β”‚ β”‚ 50.0 ┆ (30.0, 50.0] ┆ 3 β”‚ β”‚ inf ┆ (50.0, inf] ┆ 8 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>How would I go with doing something with the <code>category</code> column? For example, how would I filter values where category has <code>-inf</code> or where upper bound is between <code>10.0</code> and <code>30.0</code> or something along those lines?</p>
<python><python-polars>
2023-06-09 01:22:41
1
4,641
levant pied
76,436,502
11,009,630
How to save a 1-channel array as an image with proper color limit
<p>I am trying to save a sequence of 1-channel arrays as grayscale images. These images are supposed to be masks for segmentation.</p> <p>The issue I am facing is inconsistent colors for the same pixel value in different images:</p> <p>Image #1<br /> <a href="https://i.sstatic.net/HfOeB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HfOeB.png" alt="image 1" /></a></p> <p>When I add additional circles to this image, the saved image has a different set of colors for the previously present circles.</p> <p>Image #2<br /> <a href="https://i.sstatic.net/P86EK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/P86EK.png" alt="image 2" /></a></p> <p>Ideally I want them to have consistent colors throughout my entire dataset, as each circle represents a unique class of label.</p> <p>So my question is: how can I save these grayscale images with a consistent color code?</p> <p>Code snippet to generate the above images:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import cv2 import matplotlib.image as mpimg image = np.zeros(shape=[256, 256], dtype=np.uint8) cv2.circle(image, center=(50, 50), radius=10, color=(2, 2), thickness= -1) cv2.circle(image, center=(100, 100), radius=10, color=(3, 3), thickness= -1) cv2.circle(image, center=(150, 150), radius=10, color=(4, 4), thickness= -1) cv2.circle(image, center=(75, 200), radius=10, color=(5, 5), thickness= -1) cv2.circle(image, center=(200, 89), radius=10, color=(6, 6), thickness= -1) # additional circles (they are part of Image #2) cv2.circle(image, center=(21, 230), radius=5, color=(7, 7), thickness= -1) cv2.circle(image, center=(149, 250), radius=5, color=(12, 12), thickness= -1) mpimg.imsave('image.jpg',image) </code></pre>
<python><matplotlib><colormap>
2023-06-09 00:36:34
1
646
Deep
76,436,375
6,334,082
Does unpacking enums at the global scope increase memory usage?
<p>I have a module that contains a little over 500 lines of <code>Enum</code> definitions; to be clear, it's not just a single 500-line enum, but several <code>Enum</code>s. Please correct me if I misunderstand: by unpacking the <code>Enum</code>, am I only creating references?</p> <pre class="lang-py prettyprint-override"><code># app/enums.py import enum class MyEnum(enum.Enum): A = 1 B = 2 C = 3 A, B, C = MyEnum.A, MyEnum.B, MyEnum.C </code></pre>
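For what it's worth, a quick way to check the premise: tuple unpacking of enum members binds new names to the existing singleton member objects, so the extra memory is essentially one global-namespace entry per name, not a copy of each member. A small sketch:

```python
import enum

class MyEnum(enum.Enum):
    A = 1
    B = 2
    C = 3

# Module-level "unpacking": each new global name is just another reference
# to the existing singleton member object, not a copy of it.
A, B, C = MyEnum.A, MyEnum.B, MyEnum.C

same_object = A is MyEnum.A  # identity check: same object, not a duplicate
```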
<python><enums>
2023-06-08 23:49:54
1
339
Jason Leaver
76,436,366
5,179,643
Pandas: using list comprehension to create a new column based on conditional values from two other columns
<p>Assume I have a Pandas dataframe df that looks like this:</p> <pre><code>df = pd.DataFrame ({ 'probability': [0.51, 0.48, 0.52, 0.71, 0.38, 0.22, 0.59, 0.70, 0.44, 0.62, 0.38], 'indicator': [1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0], }) df probability indicator 0 0.51 1 1 0.48 0 2 0.52 0 3 0.71 1 4 0.38 0 5 0.22 1 6 0.59 0 7 0.70 1 8 0.44 0 9 0.62 1 10 0.38 0 </code></pre> <p>I'd like to create a new column named <code>accuracy</code> that takes a value of 1 if the row-wise <code>probability</code> value is greater than 0.50 <em><strong>and</strong></em> the row-wise <code>indicator</code> is 1, otherwise a value of 0.</p> <p>The desired output would look like this:</p> <pre><code> probability indicator accuracy 0 0.51 1 1 1 0.48 0 0 2 0.52 0 0 3 0.71 1 1 4 0.38 0 0 5 0.22 1 0 6 0.59 0 0 7 0.70 1 1 8 0.44 0 0 9 0.62 1 1 10 0.38 0 0 </code></pre> <p>I can do this using the following:</p> <pre><code>conditions = [ (df['probability'] &gt; 0.5) &amp; (df['indicator'] == 1), (df['probability'] &lt;= 0.5) &amp; (df['indicator'] == 1), (df['probability'] &lt;= 0.5) &amp; (df['indicator'] == 0) ] choices = [1, 0, 0] df['accuracy'] = np.select(conditions, choices) </code></pre> <p>But, I feel like this is 'hacky'.</p> <p>As a means to streamline this, I also tried:</p> <pre><code>df['accuracy_1'] = np.select(condlist = [(df['probability'] &gt; 0.5 &amp; df['indicator'] == 1)], choicelist = [1], default=0) </code></pre> <p>But, this throws an error:</p> <pre><code>TypeError: Cannot perform 'rand_' with a dtyped [int64] array and scalar of type [bool] </code></pre> <p>Is it possible to do this using list comprehension or some other more elegant approach?</p> <p>Thanks!</p>
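A sketch of a more direct route: since the target is just a boolean AND of two conditions cast to int, neither `np.select` nor a list comprehension is required. (The TypeError in the streamlined attempt comes from operator precedence: `&` binds tighter than `>` and `==`, so each comparison needs its own parentheses.)

```python
import pandas as pd

df = pd.DataFrame({
    'probability': [0.51, 0.48, 0.52, 0.71, 0.38, 0.22, 0.59, 0.70, 0.44, 0.62, 0.38],
    'indicator':   [1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0],
})

# One vectorized boolean expression; the parentheses around each comparison
# matter because `&` has higher precedence than `>` and `==`.
df['accuracy'] = ((df['probability'] > 0.5) & (df['indicator'] == 1)).astype(int)
```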
<python><pandas>
2023-06-08 23:45:57
2
2,533
equanimity
76,436,292
193,258
How do I add a column of objects to a dataframe initialized by values from another column?
<p>I have a table of stock symbols I'm pulling from a csv file:</p> <pre><code> Symbol Last Open %Change ATR Earnings Date Sector 153 WHR 138.02 139.21 -2.26% 4.50 01 / 25 Consumer Discretionary 154 WIX 76.86 80.05 -5.33% 4.01 02 / 07 Information Technology 155 WM 158.03 158.60 -0.78% 3.10 02 / 01 Industrials 156 WMT 143.09 144.77 -1.44% 2.58 02 / 07 Consumer Staples 157 YUM 127.71 128.50 -0.83% 2.17 02 / 07 Consumer Discretionary </code></pre> <p>And I want to construct data objects of class TaData for each symbol and add it to a new column.</p> <p>TaData for a single symbol is normally constructed like this:</p> <pre><code>data = TaData('GOOG') </code></pre> <p>I want something like this (that didn't work since it passed in the whole column instead of individual symbols):</p> <pre><code>df = pd.read_csv('watchlist.csv') df['ta_data'] = TaData(df['Symbol']) </code></pre> <p>Is a for loop my only option?</p>
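A per-element loop (or, equivalently, a list comprehension) is effectively what's needed here, since `TaData(df['Symbol'])` hands the whole Series to a single constructor call instead of one symbol at a time. A minimal sketch, with a hypothetical stand-in for the real TaData class:

```python
import pandas as pd

# Hypothetical stand-in for the real TaData class from the question.
class TaData:
    def __init__(self, symbol):
        self.symbol = symbol

df = pd.DataFrame({'Symbol': ['WHR', 'WIX', 'WM']})

# One TaData object per row: the comprehension calls the constructor once
# per symbol, then the resulting list becomes the new column.
df['ta_data'] = [TaData(s) for s in df['Symbol']]
```

`df['Symbol'].map(TaData)` or `.apply(TaData)` would also work; all three are per-element loops under the hood.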
<python><pandas><dataframe>
2023-06-08 23:23:17
1
2,087
SDGator
76,436,289
4,404,805
Pandas: Convert group into list of jsons without using groupby or apply
<p>I have an item dataframe such as:</p> <pre><code>item_dict = { 'index': [18, 24, 25, 26, 30, 31, 37, 38, 61, 62, 63, 67, 68, 69], 'BarCode_x': ['12345678ABCD', '12345678IJKL', '12345678IJKL', '12345678IJKL', '12345678EFGH', '12345678EFGH', '67890123IJKL', '67890123IJKL', '67890123ABCD', '67890123ABCD', '67890123ABCD', '67890123EFGH', '67890123EFGH', '67890123EFGH'], 'Extracted_Code': ['12345678', '12345678', '12345678', '12345678', '12345678', '12345678', '67890123', '67890123', '67890123', '67890123', '67890123', '67890123', '67890123', '67890123'], 'Description_x': ['Apples', 'Mangoes', 'Mangoes', 'Mangoes', 'Oranges', 'Oranges', 'Oats', 'Oats', 'Yoghurt', 'Yoghurt', 'Yoghurt', 'Cookies', 'Cookies', 'Cookies'], 'Unique_Code_x': ['EFG', 'LMO', 'LMO', 'LMO', 'JKL', 'JKL', 'OPZ', 'OPZ', 'YQA', 'YQA', 'YQA', 'CDF', 'CDF', 'CDF'], 'Category_x': ['M', 'S', 'S', 'S', 'T', 'T', 'F', 'F', 'M', 'M', 'M', 'M', 'M', 'M'], 'Code_x': [1, 4, 4, 4, 2, 2, 2, 2, 3, 3, 3, 4, 4, 4], 'Quantity_x': [52, 90, 90, 90, 11, 11, 90, 90, 52, 52, 52, 11, 11, 11], 'Price_x': [15.6, 67.0, 67.0, 67.0, 12.9, 12.9, 67.0, 67.0, 15.6, 15.6, 15.6, 12.9, 12.9, 12.9], 'BarCode': ['12345678AAAA', '12345678AAAA', '12345678BBBB', '12345678CCCC', '12345678AAAA', '12345678BBBB', '67890123XXXX', '67890123YYYY', '67890123XXXX', '67890123YYYY', '67890123ZZZZ', '67890123XXXX', '67890123YYYY', '67890123ZZZZ'], 'Description': ['Fruits', 'Fruits', 'Fruits', 'Fruits', 'Fruits', 'Fruits', 'Snacks', 'Snacks', 'Snacks', 'Snacks', 'Snacks', 'Snacks', 'Snacks', 'Snacks'], 'Unique_Code': ['ABC', 'ABC', 'ABC', 'ABC', 'ABC', 'ABC', 'XYZ', 'XYZ', 'XYZ', 'XYZ', 'XYZ', 'XYZ', 'XYZ', 'XYZ'], 'Category': ['H', 'H', 'H', 'H', 'H', 'H', 'H', 'H', 'H', 'H', 'H', 'H', 'H', 'H'], 'Code': [0, 0, 2, 3, 0, 2, 0, 2, 0, 2, 3, 0, 2, 3], 'Quantity': [99, 99, 77, 10, 99, 77, 99, 77, 99, 77, 10, 99, 77, 10], 'Price': [12.0, 12.0, 10.5, 11.0, 12.0, 10.5, 12.0, 10.5, 12.0, 10.5, 11.0, 12.0, 10.5, 11.0] } item_df = pd.DataFrame(item_dict) 
</code></pre> <p>I am trying to group the dataframe based on <code>['BarCode_x', 'Extracted_Code', 'Unique_Code_x']</code>, convert each group into a list of jsons and store it in a new column <code>Grouped</code>. My desired result is:</p> <pre><code>BarCode_x Extracted_Code Unique_Code_x Grouped 12345678ABCD 12345678 EFG [{'BarCode': '12345678AAAA', 'Description': 'Fruits', 'Category': 'H', 'Code': 0, 'Quantity': 99, 'Price': 12.0}] 12345678EFGH 12345678 JKL [{'BarCode': '12345678AAAA', 'Description': 'Fruits', 'Category': 'H', 'Code': 0, 'Quantity': 99, 'Price': 12.0}, {'BarCode': '12345678BBBB', 'Description': 'Fruits', 'Category': 'H', 'Code': 2, 'Quantity': 77, 'Price': 10.5}] 12345678IJKL 12345678 LMO [{'BarCode': '12345678AAAA', 'Description': 'Fruits', 'Category': 'H', 'Code': 0, 'Quantity': 99, 'Price': 12.0}, {'BarCode': '12345678BBBB', 'Description': 'Fruits', 'Category': 'H', 'Code': 2, 'Quantity': 77, 'Price': 10.5}, {'BarCode': '12345678CCCC', 'Description': 'Fruits', 'Category': 'H', 'Code': 3, 'Quantity': 10, 'Price': 11.0}] 67890123ABCD 67890123 YQA [{'BarCode': '67890123XXXX', 'Description': 'Snacks', 'Category': 'H', 'Code': 0, 'Quantity': 99, 'Price': 12.0}, {'BarCode': '67890123YYYY', 'Description': 'Snacks', 'Category': 'H', 'Code': 2, 'Quantity': 77, 'Price': 10.5}, {'BarCode': '67890123ZZZZ', 'Description': 'Snacks', 'Category': 'H', 'Code': 3, 'Quantity': 10, 'Price': 11.0}] 67890123EFGH 67890123 CDF [{'BarCode': '67890123XXXX', 'Description': 'Snacks', 'Category': 'H', 'Code': 0, 'Quantity': 99, 'Price': 12.0}, {'BarCode': '67890123YYYY', 'Description': 'Snacks', 'Category': 'H', 'Code': 2, 'Quantity': 77, 'Price': 10.5}, {'BarCode': '67890123ZZZZ', 'Description': 'Snacks', 'Category': 'H', 'Code': 3, 'Quantity': 10, 'Price': 11.0}] 67890123IJKL 67890123 OPZ [{'BarCode': '67890123XXXX', 'Description': 'Snacks', 'Category': 'H', 'Code': 0, 'Quantity': 99, 'Price': 12.0}, {'BarCode': '67890123YYYY', 'Description': 'Snacks', 'Category': 'H', 
'Code': 2, 'Quantity': 77, 'Price': 10.5}] </code></pre> <p>This is what I have done:</p> <pre><code>item_df.groupby(['BarCode_x', 'Extracted_Code', 'Unique_Code_x'])[[&quot;BarCode&quot;, &quot;Description&quot;, &quot;Category&quot;, &quot;Code&quot;, &quot;Quantity&quot;, &quot;Price&quot;]].apply(lambda group: group.to_dict(&quot;records&quot;)).reset_index(name=&quot;Grouped&quot;) </code></pre> <p>The <code>item_df</code> shown above is a small representation of another dataframe that contains over 3 million records. When I apply the above logic using groupby+apply, the process takes 2 hours to complete, which is not feasible. Therefore, is there any way I can achieve the same result in a shorter amount of time using another optimized method instead of using groupby+apply?</p>
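One direction to explore: since the output is plain Python dicts anyway, a single linear pass over `to_dict("records")` output with a `defaultdict` avoids the per-group Python-function overhead of `groupby().apply()`. A standard-library sketch on a tiny hypothetical subset of the columns:

```python
from collections import defaultdict

# Flat records as they would come out of df.to_dict("records"); a tiny
# hypothetical stand-in for the 3-million-row frame.
rows = [
    {'BarCode_x': 'A', 'Extracted_Code': '1', 'Unique_Code_x': 'X', 'BarCode': 'B1', 'Price': 12.0},
    {'BarCode_x': 'A', 'Extracted_Code': '1', 'Unique_Code_x': 'X', 'BarCode': 'B2', 'Price': 10.5},
    {'BarCode_x': 'C', 'Extracted_Code': '2', 'Unique_Code_x': 'Y', 'BarCode': 'B3', 'Price': 11.0},
]

# One linear pass instead of groupby().apply(): bucket the payload dicts
# under the 3-column key, then flatten to one output row per group.
grouped = defaultdict(list)
for r in rows:
    key = (r['BarCode_x'], r['Extracted_Code'], r['Unique_Code_x'])
    grouped[key].append({'BarCode': r['BarCode'], 'Price': r['Price']})

result = [
    {'BarCode_x': k[0], 'Extracted_Code': k[1], 'Unique_Code_x': k[2], 'Grouped': v}
    for k, v in grouped.items()
]
```

The result can be fed back into `pd.DataFrame(result)` if a frame is needed; whether this beats the two-hour groupby+apply on the real data would need measuring.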
<python><json><pandas><list><optimization>
2023-06-08 23:22:35
2
1,207
Animeartist
76,436,178
617,188
Spark "Run SQL on files directly" fails
<p><a href="https://spark.apache.org/docs/2.2.0/sql-programming-guide.html#run-sql-on-files-directly" rel="nofollow noreferrer">Spark documentation</a> suggests I can run SQL directly on files in PySpark with syntax like this:</p> <pre><code>df = spark.sql(&quot;SELECT * FROM parquet.`examples/src/main/resources/users.parquet`&quot;) </code></pre> <p>I have some Delta format data sitting in an AWS S3 bucket (well, currently a fake S3 bucket in LocalStack). I'm trying to query it using this approach. The bucket looks like this:</p> <pre><code>$ awslocal s3 ls s3://data-lake/data/ PRE _delta_log/ 2023-06-07 16:55:33 903 part-00000-923a2eea-e2fe-468e-b2b9-a85206858ddb-c000.snappy.parquet 2023-06-07 16:55:33 914 part-00001-a6ceac4f-d7ea-44e9-bce9-d7fe0c103e35-c000.snappy.parquet </code></pre> <p>And while the following works just fine:</p> <pre><code>df = spark.read.format(&quot;delta&quot;).load(&quot;s3://data-lake/data&quot;) </code></pre> <p>The syntax I'm trying to query it with using SQL as follows fails:</p> <pre><code>df = spark.sql(&quot;select * from delta.`s3://data-lake/data`&quot;) </code></pre> <p>(I've also tried with the path directly to a file rather than just the containing directory with not difference).</p> <p>The data is written via the following simple Scala Spark code:</p> <pre><code>val ds = Seq( Foo(&quot;a&quot;, Bar(1, 2)), // etc ).toDS() ds .write .format(&quot;delta&quot;) .mode(SaveMode.Overwrite) .save(&quot;s3://data-lake/data&quot;) </code></pre> <p>The error I am getting when I try and query this is as follows, edited a bit for brevity:</p> <pre><code>Py4JError Traceback (most recent call last) /tmp/ipykernel_178/100492152.py in &lt;module&gt; ---&gt; 11 df = spark.sql(&quot;select * from delta.`s3://data-lake/data`&quot;) &lt;snip&gt; py4j.Py4JException: Method sql([class java.lang.String, class java.util.HashMap]) does not exist </code></pre> <p>Feels like a weird thing in the plumbing rather than a syntax error on my part (not 
least because if I try the same approach in Scala it works OK).</p> <ul> <li>Python 3.7.16</li> <li>PySpark 3.4.0</li> <li>Running from a Jupyter notebook against a LocalStack container based on the <code>emr-serverless/spark/emr-6.10.0:latest</code> image.</li> </ul>
<python><apache-spark><jupyter-notebook><localstack>
2023-06-08 22:51:31
2
628
Jon Archer
76,436,114
225,396
Dataflow python flex template fails with Java must be installed error
<p>I'm running flex template for PubsubLite to BigQuery Dataflow job.</p> <p>This is my code:</p> <pre><code>from __future__ import annotations import argparse import json import logging import apache_beam.io.gcp.pubsublite as psub_lite import apache_beam as beam from apache_beam.options.pipeline_options import PipelineOptions # Defines the BigQuery schema for the output table. schema = 'trip_id:INTEGER,vendor_id:INTEGER,trip_distance:FLOAT,fare_amount:STRING,store_and_fwd_flag:STRING' class ModifyDataForBQ(beam.DoFn): def process(self, pubsub_message, *args, **kwargs): # attributes = dict(pubsub_message.attributes) obj = json.loads(pubsub_message.message.data.decode(&quot;utf-8&quot;)) yield obj def run( subscription_id: str, dataset: str, table: str, beam_args: list[str] = None, ) -&gt; None: options = PipelineOptions(beam_args, save_main_session=True, streaming=True) table = '{}.{}'.format(dataset, table) p = beam.Pipeline(options=options) pubsub_pipeline = ( p | 'Read from pubsub lite topic' &gt;&gt; psub_lite.ReadFromPubSubLite(subscription_path=subscription_id) | 'Print Message' &gt;&gt; beam.ParDo(ModifyDataForBQ()) | 'Write Record to BigQuery' &gt;&gt; beam.io.WriteToBigQuery(table=table, schema=schema, write_disposition=beam.io.BigQueryDisposition .WRITE_APPEND, create_disposition=beam.io.BigQueryDisposition .CREATE_IF_NEEDED, ) ) result = p.run() result.wait_until_finish() if __name__ == &quot;__main__&quot;: logging.getLogger().setLevel(logging.INFO) parser = argparse.ArgumentParser() parser.add_argument( &quot;--subscription_id&quot;, type=str, help=&quot;Region of Pub/Sub Lite subscription.&quot;, default=None ) parser.add_argument( &quot;--dataset&quot;, type=str, help=&quot;BigQuery Dataset name.&quot;, default=None ) parser.add_argument( &quot;--table&quot;, type=str, help=&quot;BigQuery destination table name.&quot;, default=None ) args, beam_args = parser.parse_known_args() run( subscription_id=args.subscription_id, dataset=args.dataset, 
table=args.table, beam_args=beam_args, ) </code></pre> <p>This is my docker file:</p> <pre><code>FROM gcr.io/dataflow-templates-base/python3-template-launcher-base ENV FLEX_TEMPLATE_PYTHON_REQUIREMENTS_FILE=&quot;/template/requirements.txt&quot; ENV FLEX_TEMPLATE_PYTHON_PY_FILE=&quot;/template/streaming_beam.py&quot; COPY . /template RUN apt-get update \ &amp;&amp; apt-get install -y openjdk-11-jdk libffi-dev git \ &amp;&amp; rm -rf /var/lib/apt/lists/* \ # Upgrade pip and install the requirements. &amp;&amp; pip install --no-cache-dir --upgrade pip \ &amp;&amp; pip install --no-cache-dir -r $FLEX_TEMPLATE_PYTHON_REQUIREMENTS_FILE \ # Download the requirements to speed up launching the Dataflow job. &amp;&amp; pip download --no-cache-dir --dest /tmp/dataflow-requirements-cache -r $FLEX_TEMPLATE_PYTHON_REQUIREMENTS_FILE ENV JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64 ENV PIP_NO_DEPS=True ENTRYPOINT [&quot;/opt/google/dataflow/python_template_launcher&quot;] </code></pre> <p>This is How I'm building template:</p> <pre><code> gcloud dataflow flex-template build gs://my-bucket-xxxx/templates/streaming-beam-sql.json \ --image-gcr-path &quot;us-central1-docker.pkg.dev/xxxx-xxx-2/dataflow-pubsublite-bigquery/test:latest&quot; \ --sdk-language &quot;PYTHON&quot; \ --flex-template-base-image &quot;PYTHON3&quot; \ --metadata-file &quot;metadata.json&quot; \ --py-path &quot;.&quot; \ --env &quot;FLEX_TEMPLATE_PYTHON_PY_FILE=streaming_beam.py&quot; \ --env &quot;FLEX_TEMPLATE_PYTHON_REQUIREMENTS_FILE=requirements.txt&quot; \ --project &quot;xxxx-xxx-2&quot; </code></pre> <p>Now I'm invoking the template:</p> <pre><code> gcloud dataflow flex-template run &quot;streaming-beam-sql&quot; \ --template-file-gcs-location gs://my-bucket-xxxx/templates/streaming-beam-sql.json \ --project &quot;xxxx-xxx-2&quot; \ --parameters &quot;subscription_id=projects/xxxx-xxx-/locations/us-central1/subscriptions/data-streaming-xxxx-subscription,dataset=omer_poc,table=trip2&quot; </code></pre> 
<p>The pipeline launch fails; in the logs I see the following:</p> <pre><code>INFO 2023-06-08T22:27:23.260235Z INFO:root:Starting a JAR-based expansion service from JAR /root/.apache_beam/cache/jars/beam-sdks-java-io-google-cloud-platform-expansion-service-2.41.0.jar INFO 2023-06-08T22:27:23.261209Z ERROR:apache_beam.utils.subprocess_server:Error bringing up service INFO 2023-06-08T22:27:23.261252Z Traceback (most recent call last): INFO 2023-06-08T22:27:23.261270Z File &quot;/usr/local/lib/python3.7/site-packages/apache_beam/utils/subprocess_server.py&quot;, line 79, in start INFO 2023-06-08T22:27:23.261296Z endpoint = self.start_process() INFO 2023-06-08T22:27:23.261313Z File &quot;/usr/local/lib/python3.7/site-packages/apache_beam/utils/subprocess_server.py&quot;, line 181, in start_process INFO 2023-06-08T22:27:23.261329Z 'Java must be installed on this system to use this ' INFO 2023-06-08T22:27:23.261343Z RuntimeError: Java must be installed on this system to use this transform/runner. </code></pre> <p>I've followed Google tutorials and workshop materials, but can't find what the problem is. Please help.</p> <p><strong>Update</strong>: I already installed JDK 11 as part of my Dockerfile. I also verified that JAVA_HOME is set in the image and java is accessible.</p>
<python><google-cloud-platform><google-cloud-dataflow><apache-beam>
2023-06-08 22:34:16
1
18,649
danny.lesnik
76,436,031
3,826,733
how to upload image file as binary to azure storage account in python
<p>I have the code below, which as of now does not upload the image file that it receives from the client. It receives the image file as an UploadFile instance. I have read that using the read() method will convert the file to bytes so that I can upload it, but that didn't work for me. Not sure what I am doing wrong here.</p> <pre><code>async def resolve_fileUpload(_, info, file): print(f&quot;File - {type(file)}&quot;) container_client = blob_service_client.get_container_client( '4160000000') if not container_client.exists(): container_client.create_container() with open(file, &quot;rb&quot;) as file: result = container_client.upload_blob( name='avatar', data=file.read()) return { &quot;status&quot;: 200, &quot;error&quot;: &quot;&quot;, &quot;fileUrl&quot;: &quot;www.test.com&quot; } </code></pre> <p>Any help is greatly appreciated, as I have been stuck on this for days now.</p>
<python><azure-blob-storage>
2023-06-08 22:14:44
2
3,842
Sumchans
76,435,878
5,120,238
VSCode debug subprocess.call
<p>Current launch.json looks like:</p> <pre><code>{ // Use IntelliSense to learn about possible attributes. // Hover to view descriptions of existing attributes. // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387 &quot;version&quot;: &quot;0.2.0&quot;, &quot;configurations&quot;: [ { &quot;name&quot;: &quot;Python: Current File&quot;, &quot;type&quot;: &quot;python&quot;, &quot;justMyCode&quot;: false, &quot;subProcess&quot;: true, &quot;request&quot;: &quot;launch&quot;, &quot;program&quot;: &quot;${file}&quot;, &quot;console&quot;: &quot;integratedTerminal&quot;, } ] } </code></pre> <p>The code:</p> <pre><code>cmd = ( &quot;a-pip-package&quot; &quot; --verbose&quot; &quot; --do-something&quot; ) print(cmd) subprocess.call(shlex.split(cmd)) </code></pre> <p>I can add a print statement into the pip package and see that it in fact prints. However, when I add a breakpoint, the debugger never hits.</p> <p>e.g.</p> <p><code>/home/me/venv/lib/python3.9/site-packages/pip-package/a-pip-package.py</code></p> <pre><code>def main(): print(&quot;hey!&quot;) ... </code></pre> <p>I got the above path from doing something like:</p> <p><code>subprocess.call([&quot;which&quot;, &quot;a-pip-package&quot;])</code></p> <p>Results in: <code>/home/me/venv/bin/a-pip-package</code></p> <p>That file looks like:</p> <pre><code>#!/home/me/venv/bin/python # -*- coding: utf-8 -*- import re import sys from pip_package.a_pip_package import main if __name__ == '__main__': sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0]) sys.exit(main()) </code></pre> <p>In summation, I believe that the file that is actually executed is <code>/home/me/venv/lib/python3.9/site-packages/pip-package/a-pip-package.py</code>, but when I add a breakpoint in that file, VSCode never breaks there.</p>
<python><debugging><subprocess>
2023-06-08 21:43:28
0
7,473
Chasen Bettinger
76,435,823
13,916,049
AttributeError: can't set attribute when adding substring to items in list
<p>I want to add the <code>chr</code> substring to all items in the <code>clr_A0.chromnames</code> list.</p> <pre><code>import cooler import cooltools.lib.plotting import cooltools from pathlib import Path pathlist = Path(data_dir).glob('**/*.mcool') for path in pathlist: cool_file = str(path) filename = cool_file.split(&quot;/&quot;,1)[1] resolution = [i.rsplit(&quot;/&quot;, 1)[1] for i in cooler.fileops.list_coolers(cool_file)] ### load a cooler for each resolution for j in resolution: if filename.startswith(&quot;A0&quot;): clr_A0 = cooler.Cooler(f'{cool_file}::resolutions/{j}') clr_A0.chromnames = [&quot;chr&quot; + s for s in clr_A0.chromnames] </code></pre> <p>Traceback:</p> <pre><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Input In [37], in &lt;cell line: 2&gt;() 10 if filename.startswith(&quot;A0&quot;): 11 clr_A0 = cooler.Cooler(f'{cool_file}::resolutions/{j}') ---&gt; 12 clr_A0.chromnames = [&quot;chr&quot; + s for s in clr_A0.chromnames] AttributeError: can't set attribute </code></pre> <p>Input:</p> <p><code>clr_A0.chromnames</code></p> <pre><code>['M', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', 'X', 'Y'] </code></pre> <p>Expected output:</p> <pre><code>['chrM', 'chr1', 'chr2', 'chr3', 'chr4', 'chr5', 'chr6', 'chr7', 'chr8', 'chr9', 'chr10', 'chr11', 'chr12', 'chr13', 'chr14', 'chr15', 'chr16', 'chr17', 'chr18', 'chr19', 'chr20', 'chr21', 'chr22', 'chrX', 'chrY'] </code></pre>
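`Cooler.chromnames` appears to be a read-only property derived from the file, which is why plain assignment raises `AttributeError`. The prefixed names can simply be kept in an ordinary variable instead (and if the goal is to rename chromosomes inside the cooler file itself, the library ships a rename helper; check its docs). A stdlib-only sketch of the prefixing step:

```python
def add_chr_prefix(names):
    """Prefix each chromosome name with 'chr' (idempotent for already-prefixed names)."""
    return [n if n.startswith("chr") else "chr" + n for n in names]

chromnames = add_chr_prefix(["M", "1", "2", "X", "Y"])
```

`chromnames` can then be used wherever the prefixed names are needed, without touching the `Cooler` object.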
<python>
2023-06-08 21:32:24
1
1,545
Anon
76,435,804
5,036,928
Numpy: Performing in-place operation along moving axis
<p>Ok I did my best to describe my problem in the title.</p> <p>My problem is as follows:</p> <p>I have a numpy array which may not always have a consistent shape/dimension (ranges from 1 to 3). Taking the simplest case when the array is of shape [100], I can perform the following (and obtain the desired result):</p> <pre><code>for i, bounds in enumerate(values): low, high = bounds arr[i] *= high - low </code></pre> <p>when the array is of shape [100, 200], I can do the following:</p> <pre><code>for i, bounds in enumerate(values): low, high = bounds arr[i, :] *= high - low </code></pre> <p>or if the array is of shape [200, 100], I can instead do:</p> <pre><code>for i, bounds in enumerate(values): low, high = bounds arr[:, i] *= high - low </code></pre> <p>and in the 3d case if the array is of shape [300, 100, 200], I would do:</p> <pre><code>for i, bounds in enumerate(values): low, high = bounds arr[:, i, :] *= high - low </code></pre> <p>My problem is I don't know how to alter the position of <code>i</code> in the index or how to index all the elements when iterating over the axis that corresponds to <code>i</code> (when the shape of <code>arr</code> is changing). In my example, the &quot;location&quot; of <code>i</code> is based on where <code>100</code> falls in the shape of the array. Is this something numpy can do or am I stuck with a number of if statements?</p>
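One way to avoid per-dimension if statements is to build the index tuple programmatically: `slice(None)` stands in for `:`, and the moving axis gets pinned to `i`. A sketch, with the axis that holds the `values` entries passed explicitly:

```python
import numpy as np

def scale_along_axis(arr, values, axis):
    """In-place multiply each slice of arr along `axis` by (high - low)."""
    for i, (low, high) in enumerate(values):
        idx = [slice(None)] * arr.ndim   # one ':' per dimension
        idx[axis] = i                    # pin the moving axis at position i
        arr[tuple(idx)] *= high - low
    return arr

a = np.ones((2, 3))
scale_along_axis(a, [(0.0, 2.0), (0.0, 3.0), (0.0, 4.0)], axis=1)
```

An equivalent loop-free variant is to compute the `high - low` factors as an array and reshape it so it broadcasts along the chosen axis.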
<python><arrays><numpy><indexing>
2023-06-08 21:28:46
2
1,195
Sterling Butters
76,435,798
788,153
How to stop a tokenizer from splitting words further?
<p>In the code below, the tokenizer is splitting some of the words. Is it a property of the model, or can I somehow force it not to split the words? I am using these tokens for inference with the model. Even after passing <code>do_basic_tokenize: False</code>, it is still splitting the words.</p> <pre><code>from transformers import AutoTokenizer, AutoModelForTokenClassification text = &quot;Patient John Doe visited the hospital on 01/05/2023 with complaints of chest pain.&quot; tokenizer = AutoTokenizer.from_pretrained(&quot;obi/deid_bert_i2b2&quot;, tokenizer_args={&quot;do_basic_tokenize&quot;: False}) tokens = tokenizer.tokenize(text, truncation=True, padding=True, return_tensors=&quot;pt&quot;) model = AutoModelForTokenClassification.from_pretrained(&quot;obi/deid_bert_i2b2&quot;) tokens </code></pre> <p>Output:</p> <pre><code>['Pat', '##ient', 'John', 'Do', '##e', 'visited', 'the', 'hospital', 'on', '01', '/', '05', '/', '202', '##3', 'with', 'complaints', 'of', 'chest', 'pain', '.'] </code></pre> <p>Is there any efficient way/package to combine the tokens with hashes with their predecessor or successor in the text?</p>
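The splitting is WordPiece subword tokenization, determined by the model's vocabulary rather than by `do_basic_tokenize`, so it cannot be switched off without breaking the model's expected input. The `##` continuation pieces can, however, be re-joined after the fact for display or alignment. A sketch:

```python
def merge_wordpieces(tokens):
    """Join WordPiece continuation tokens (prefixed '##') onto the previous token."""
    words = []
    for tok in tokens:
        if tok.startswith("##") and words:
            words[-1] += tok[2:]   # strip the '##' marker and append
        else:
            words.append(tok)
    return words

tokens = ['Pat', '##ient', 'John', 'Do', '##e', 'visited', 'the', 'hospital']
words = merge_wordpieces(tokens)
```

For aligning per-token predictions back to words, fast tokenizers also expose a `word_ids()` mapping on their encodings, which avoids string surgery; the sketch above is the vocabulary-agnostic fallback.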
<python><nlp><huggingface-tokenizers><huggingface>
2023-06-08 21:27:26
1
2,762
learner
76,435,735
13,354,525
simple way to send value from javascript to python back-end in streamlit
<p>I've used a <a href="https://docs.streamlit.io/library/components/components-api#render-an-iframe-url" rel="nofollow noreferrer">st.components.v1.iframe</a> to integrate an authentication mechanism that sends back a token to the parent if the user is allowed to authenticate. The iframe content looks like this:</p> <pre><code> &lt;script src=&quot;remote/auth.js&quot;&gt;&lt;/script&gt; &lt;script&gt; token = window.getAuth(); parent.postMessage(token, '*'); &lt;/script&gt; </code></pre> <p>and in the streamlit app:</p> <pre><code>from streamlit.components.v1 import html, iframe html(&quot;&quot;&quot; &lt;script&gt; parent.window.addEventListener('message', e =&gt; { const key = e.message ? 'message' : 'data'; const token = e[key]; parent.window.token = token; },false); &lt;/script&gt; &quot;&quot;&quot;) iframe(&quot;RemoteIframeLocation&quot;) </code></pre> <p>Now what I want is to send back the <code>parent.window.token</code> to the python back-end in order to grant access to users based on the token.</p> <p>I'm aware that basically this should be possible with <a href="https://docs.streamlit.io/library/components/components-api#create-a-bi-directional-component" rel="nofollow noreferrer">bi-directional streamlit components</a>, but it seems way too complicated and overkill for my use case, where I only need to send one value!</p>
<javascript><python><streamlit>
2023-06-08 21:14:28
2
471
abdelgha4
76,435,700
6,932,839
PySpark: Convert Dictionary in JSON DataFrame to Separate Columns
<p>I have a JSON file that contains a dictionary that looks similar to this: <code>&quot;object1&quot;:{&quot;status1&quot;:388.646233,&quot;status2&quot;:118.580561,&quot;status3&quot;:263.673222,&quot;status4&quot;:456.432483}</code></p> <p>I am trying to parse out <code>status1</code>, <code>status2</code>, <code>status3</code>, and <code>status4</code> to separate columns using PySpark.</p> <p><strong>My Attempts:</strong></p> <p><code>F.explode(F.col('object1'))</code></p> <p><code>.withColumn('status1', F.split(F.col('object1'))</code></p> <p><strong>Expected Result:</strong></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>status1</th> <th>status2</th> <th>status3</th> <th>status4</th> </tr> </thead> <tbody> <tr> <td>388.646233</td> <td>118.580561</td> <td>263.673222</td> <td>456.432483</td> </tr> </tbody> </table> </div>
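If Spark inferred `object1` as a struct column when reading the JSON, selecting its fields should flatten it in one step, e.g. `df.select("object1.*")` (hedged: the exact behaviour depends on the inferred schema). Per row, that operation is plain dictionary flattening, which this pure-Python sketch illustrates:

```python
import json

raw = '{"object1":{"status1":388.646233,"status2":118.580561,"status3":263.673222,"status4":456.432483}}'
record = json.loads(raw)

# Promote the nested fields to top-level "columns" -- what selecting object1.* does per row
flat = dict(record["object1"])
```

`explode` is for array columns, and `split` is for strings, which is why neither attempt in the question applies to a struct.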
<python><sql><dataframe><pyspark><palantir-foundry>
2023-06-08 21:07:58
1
1,141
arnpry
76,435,629
8,189,123
Adding data object XML to PDF using PyMuPDF
<p>I am struggling to add a data object to a PDF using PyMuPDF. I can successfully add a PDF as an embedded file, but I cannot add an XML file. I am trying to use the following function: <strong>embfile_add</strong>.</p> <p>The embedded XML file will be used to get data into a PDF form dynamically.</p> <p>This is the code I am trying:</p> <pre><code>import fitz import os path = r&quot;c\temp&quot; namedoc = &quot;document.pdf&quot; pathnamedoc = os.path.join(path,namedoc) print(pathnamedoc) doc = fitz.open(pathnamedoc) # open main document count = doc.embfile_count() print(&quot;number of embedded file:&quot;, count) # shows number of embedded files namedata = &quot;data.xml&quot; pathnamedata = os.path.join(path,namedata) print(pathnamedata) embedded_doc = fitz.open(pathnamedata) # open document you want to embed embedded_data = embedded_doc.tobytes() # get the document byte data as a buffer doc.embfile_add(&quot;data.xml&quot;, embedded_data) doc.saveIncr() </code></pre> <p>but I keep getting the following error:</p> <pre><code>RuntimeError: is no PDF </code></pre>
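`fitz.open()` tries to interpret its argument as a document (PDF, XPS, image, ...), and an arbitrary XML file is none of those, hence "is no PDF". For embedding, the XML only needs to be read as raw bytes and handed to `embfile_add`. A stdlib-only sketch of just the byte-reading step, with a stand-in payload instead of the real `data.xml`:

```python
import os
import tempfile

# Stand-in for the question's data.xml
with tempfile.TemporaryDirectory() as tmpdir:
    xml_path = os.path.join(tmpdir, "data.xml")
    with open(xml_path, "wb") as f:
        f.write(b"<form><field name='total'>42</field></form>")

    # Read raw bytes -- no fitz.open() on a non-document file
    with open(xml_path, "rb") as f:
        embedded_data = f.read()

    # doc.embfile_add("data.xml", embedded_data)  # then embed into the already-open PDF
```

The change relative to the question's code is replacing the `fitz.open(pathnamedata)` / `tobytes()` pair with a plain binary read.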
<python><xml><pdf><pymupdf>
2023-06-08 20:54:56
1
437
Camilo
76,435,578
2,726,900
koalas for PySpark: why does it try to run Python on Driver machine?
<p>I'm trying to use the <code>koalas</code> library for PySpark 2.4.5.</p> <p>I have set up the <code>HADOOP_CONF_DIR</code> and <code>PYSPARK_PYTHON</code> environment variables and created a Spark session with client deployment mode:</p> <p><code>spark = SparkSession.builder.enableHiveSupport().master(&quot;yarn&quot;).getOrCreate()</code></p> <p>With this Spark session I have created a couple of PySpark dataframes and created <code>koalas</code> dataframes from them (via the <code>.to_koalas()</code> method).</p> <p>But when I tried to use the <code>.loc[]</code> and <code>.filter(...)</code> operations on my koalas dataframes, I got the following error: <code>java.io.IOException: Cannot run program &quot;/usr/local/bin/python3.7&quot;: error=2, No such file or directory </code></p> <p>The path <code>&quot;/usr/local/bin/python3.7&quot;</code> was exactly the path to the Python interpreter on my Spark executor hosts (and it was set in the <code>PYSPARK_PYTHON</code> environment variable).</p> <p>After two hours of crying I've tried to add a symlink to the Python interpreter on my driver machine at the same path: <code>/usr/local/bin/python3.7</code>, and it helped: my script stopped failing with this error.</p> <p>But in the production environment I won't have the option to set such symlinks so easily. And I still have a question: why does koalas search for the Python interpreter on the driver using <code>PYSPARK_PYTHON</code> instead of <code>PYSPARK_DRIVER_PYTHON</code>?</p>
<python><apache-spark><pyspark><spark-koalas>
2023-06-08 20:46:26
0
3,669
Felix
76,435,549
10,491,951
How can I test Python virtual code environments programmatically?
<p>We manage a significant number of Python virtual code environments in our estate. We are moving to a more aggressive OS-level patching schedule, which means there is a higher risk of breaking some Python virtual code environments due to OS-level dependencies changing or issues during installation. We are therefore looking at different options to try to mitigate this risk. Aside from recreating the virtual code environment after OS patching, what else can we do to confirm the code environment is working fine? Of course, running the code/app/model that the code environment is meant to support would be ideal, but this is not practical/possible for various reasons that are not relevant to the question.</p> <p>I was thinking that perhaps I could look at the requirements.txt of each virtual code environment and attempt to import all packages that are required to be installed in each virtual code environment. From previous experience, the majority of virtual code environment issues happen either during the creation of the code environment (i.e. when packages are installed/the code env is created) or when the required packages are imported in code. So this approach will cover most of the risk, leaving only potential run-time issues, which should be a very small proportion.</p> <p>What's your view on this approach? Do you see any issues with it? One thing I know will be a challenge is that package names don't always match the &quot;import&quot; name used in code, but I suspect we will be able to work around this problem by building a dictionary of package names and import names.</p>
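A minimal smoke test along those lines: read each environment's requirement names, map them to import names via a small dictionary of known mismatches, and attempt `importlib.import_module` for each under that environment's interpreter. A sketch (the mapping entries are examples, not a complete list):

```python
import importlib

# Known cases where the PyPI name differs from the import name (extend as needed)
IMPORT_NAME = {"scikit-learn": "sklearn", "pyyaml": "yaml", "pillow": "PIL"}

def check_imports(requirements):
    """Try to import every requirement; return {requirement: error} for failures."""
    failures = {}
    for req in requirements:
        name = req.split("==")[0].strip().lower()
        module = IMPORT_NAME.get(name, name.replace("-", "_"))
        try:
            importlib.import_module(module)
        except Exception as exc:  # import-time breakage is exactly what we probe for
            failures[req] = repr(exc)
    return failures

result = check_imports(["json", "definitely-not-a-real-package==1.0"])
```

Running this script with each environment's own `python` (e.g. via `venv/bin/python check.py`) exercises both install-time and import-time breakage without invoking the applications themselves.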
<python><virtualenv><cicd><python-venv>
2023-06-08 20:41:00
2
487
GreenLantern22
76,435,506
1,624,552
Import "pkg_resources" could not be resolved from source Pylance (reportMissingModuleSource)
<p>I have Python 3.8.10 (32-bit) and 3.9.7 (64-bit) installed on my system.</p> <p>I am trying to compare two version strings in Python, so I used the code from <a href="https://stackoverflow.com/questions/11887762/how-do-i-compare-version-numbers-in-python">here</a>, specifically this snippet:</p> <pre><code>from pkg_resources import packaging packaging.version.parse(&quot;0.1.1rc1&quot;) &lt; packaging.version.parse(&quot;0.1.1rc2&quot;) </code></pre> <p>I am using VS Code and for some reason it says pkg_resources cannot be resolved.</p> <p>I have installed the packaging package on my system using the commands below for both Python versions:</p> <pre><code>py -m pip install packaging python -m pip install packaging </code></pre> <p>and also setuptools:</p> <pre><code>py -m pip install setuptools python -m pip install setuptools </code></pre> <p>However, from the Python terminal, if I do &quot;from pkg_resources import packaging&quot; it works.</p> <p>Below is the list of packages installed for each Python version.</p> <p>python 3.9.7: py -m pip list</p> <p><a href="https://i.sstatic.net/PdD7L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PdD7L.png" alt="enter image description here" /></a></p> <p>python 3.8.10: python -m pip list</p> <p><a href="https://i.sstatic.net/GMBfO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GMBfO.png" alt="enter image description here" /></a></p>
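If Pylance only objects to the indirect `pkg_resources` import path, one workaround that sidesteps both `pkg_resources` and the standalone `packaging` dependency is a small stdlib-only comparison, sufficient for plain `x.y.z` versions plus `rc` suffixes (for anything fancier, `packaging.version.parse` remains the usual tool). A sketch:

```python
import re

def ver_key(v: str):
    """Sort key for versions like '0.1.1' or '0.1.1rc2' (final release > any rc)."""
    m = re.fullmatch(r"(\d+(?:\.\d+)*)(?:rc(\d+))?", v)
    if m is None:
        raise ValueError(f"unsupported version string: {v!r}")
    release = tuple(int(p) for p in m.group(1).split("."))
    rc = int(m.group(2)) if m.group(2) else float("inf")  # no rc suffix == final
    return (release, rc)

ok = ver_key("0.1.1rc1") < ver_key("0.1.1rc2")
```

The tuple keys compare lexicographically, so release numbers dominate and the `rc` number breaks ties.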
<python>
2023-06-08 20:34:01
0
10,752
Willy
76,435,474
3,402,703
Check for either key in python dictionary
<p>I have the following code to get some messages from a telegram bot with python:</p> <pre><code>useful_messages = [i[&quot;message&quot;] for i in response[&quot;result&quot;] if i[&quot;message&quot;][&quot;from&quot;][&quot;id&quot;] == int(chat_id) and i[&quot;message&quot;][&quot;date&quot;] &gt; last_time] Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;listcomp&gt; KeyError: 'message' </code></pre> <p><code>chat_id</code> is the id of the user I'm interested in, and <code>last_time</code> is used to avoid reading the whole chat with the user.</p> <p>It worked for some months until today, when I hit an &quot;edge case&quot;:</p> <pre><code>[i.keys() for i in response[&quot;result&quot;]] [dict_keys(['update_id', 'message']), dict_keys(['update_id', 'message']), dict_keys(['update_id', 'edited_message']), dict_keys(['update_id', 'message'])] </code></pre> <p>As you can see, one of the messages was edited, and its key is no longer <code>message</code> but <code>edited_message</code>, causing the <code>KeyError</code> above.</p> <p>I know I can use a for loop, check for either key (<code>message</code> or <code>edited_message</code>) and continue with the message validation (date and id) and extraction. <strong>But I wondered if it is possible to check for either key in the dictionary</strong>, thus keeping the list/dictionary comprehension (a one-liner solution would be ideal).</p> <p>I also thought of replacing the <code>edited_message</code> key, if present, by following any of the procedures shown in the answers to <a href="https://stackoverflow.com/questions/4406501/change-the-name-of-a-key-in-dictionary">this question</a>. Sadly they are hardly one-liners, so the for loop seems to be a less verbose and convoluted solution.</p> <p><strong>Of course, I'm open to any other solution (or programming logic)</strong> that will result in better code.</p> <p>I'm still new to python, so I'd appreciate your detailed explanations, if complex solutions are offered.</p>
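For the one-liner itself, `dict.get` with a fallback keeps the comprehension intact: `i.get("message") or i.get("edited_message")` yields whichever of the two keys is present (assuming every update carries at most one of them). A sketch with mock updates shaped like the question's response:

```python
response = {
    "result": [
        {"update_id": 1, "message": {"from": {"id": 42}, "date": 100}},
        {"update_id": 2, "edited_message": {"from": {"id": 42}, "date": 200}},
        {"update_id": 3, "message": {"from": {"id": 7}, "date": 300}},
    ]
}
chat_id, last_time = 42, 50

# Inner generator picks whichever key exists; outer filter validates id and date
useful_messages = [
    msg
    for msg in (i.get("message") or i.get("edited_message") for i in response["result"])
    if msg is not None and msg["from"]["id"] == chat_id and msg["date"] > last_time
]
```

The `msg is not None` guard also covers updates that carry neither key (e.g. other update types), which the original comprehension would crash on.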
<python><dictionary><dictionary-comprehension>
2023-06-08 20:26:56
2
6,507
PavoDive
76,435,472
1,862,049
Website scraping not working properly for Medics
<p>I developed this script to collect information about surgeons (from a public source) and put it in a spreadsheet.</p> <p>The script automates the extraction of plastic surgeon information from a website. It cycles through a list of states, selects each state on the website form, and performs a search. It collects the information of the surgeons found on each page and goes to the next page (if any), stores the data in an Excel file, and adjusts the column widths for better readability. It repeats the process for all states and then ends. The problem is that it raises a small error and I cannot find where.</p> <p>It starts by scanning and collecting data from the first state (which doesn't have a forward button). So far so good. Then it switches to the next state normally and continues collecting data. The problem appears here: when it finishes collecting the data from the first page of the next state, it advances to the next page but does not collect that page's data. Instead, it hangs there and returns to the IDE screen with this error:</p> <p><strong>info_content = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CLASS_NAME, 'surgeon-info'))).text</strong></p> <p>I need it to keep collecting data from all pages. I've been trying to find where the error is for 5 hours now.</p> <p>If anyone can help me I'd be very grateful. 
Here's the script:</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.common.keys import Keys from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC import re import time import pandas as pd from openpyxl import load_workbook from openpyxl.utils import get_column_letter driver = webdriver.Chrome() url = &quot;http://www2.cirurgiaplastica.org.br/encontre-um-cirurgiao/#busca-cirurgiao&quot; driver.get(url) time.sleep(2) estados = [&quot;AC&quot;, &quot;AL&quot;, &quot;AP&quot;, &quot;AM&quot;, &quot;BA&quot;, &quot;CE&quot;, &quot;DF&quot;, &quot;ES&quot;, &quot;GO&quot;, &quot;MA&quot;, &quot;MT&quot;, &quot;MS&quot;, &quot;MG&quot;, &quot;PA&quot;, &quot;PB&quot;, &quot;PR&quot;, &quot;PE&quot;, &quot;PI&quot;, &quot;RJ&quot;, &quot;RN&quot;, &quot;RS&quot;, &quot;RO&quot;, &quot;RR&quot;, &quot;SC&quot;, &quot;SP&quot;, &quot;SE&quot;, &quot;TO&quot;] try: data = pd.read_excel(&quot;cirurgioes.xlsx&quot;) except FileNotFoundError: data = pd.DataFrame(columns=[&quot;Nome&quot;, &quot;Email&quot;, &quot;Cidade/Estado&quot;, &quot;CRM&quot;, &quot;Telefone&quot;, &quot;EndereΓ§o&quot;]) for estado in estados: select_element = driver.find_element(By.NAME, 'cirurgiao_uf') select_element.click() select_state = driver.find_element(By.XPATH, f'//option[@value=&quot;{estado}&quot;]') select_state.click() search_button = driver.find_element(By.ID, 'cirurgiao_submit') search_button.click() time.sleep(2) count = 0 while True: num_links = len(driver.find_elements(By.XPATH, '//a[@href=&quot;#0&quot;]')) for i in range(num_links): links = driver.find_elements(By.XPATH, '//a[@href=&quot;#0&quot;]') links[i].click() time.sleep(2) name = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.TAG_NAME, 'h3'))).text info_content = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CLASS_NAME, 'cirurgiao-info'))).text info_lines = info_content.split('\n') email = info_lines[3] state = info_lines[1] crm = &quot;&quot; phone = &quot;&quot; address = &quot;&quot; for line in info_lines[4:]: if re.match(r'^\d+\.?\d*\/[A-Z]{2}$', line): crm = line elif line.startswith((&quot;Rua &quot;, &quot;Avenida &quot;, &quot;RUA &quot;, &quot;AVENIDA &quot;)): address = line elif line.startswith(&quot;Comercial: (55) &quot;): phone = line.replace(&quot;Comercial: (55) &quot;, &quot;&quot;) data = data.append({&quot;Nome&quot;: name, &quot;Email&quot;: email, &quot;Cidade/Estado&quot;: state, &quot;CRM&quot;: crm, &quot;Telefone&quot;: phone, &quot;EndereΓ§o&quot;: address}, ignore_index=True) count += 1 if count % 10 == 0: data.to_excel(&quot;cirurgioes.xlsx&quot;, index=False) body = driver.find_element(By.TAG_NAME, 'body') body.send_keys(Keys.ESCAPE) time.sleep(2) next_buttons = driver.find_elements(By.CSS_SELECTOR, 'a.cirurgiao-pagination-link') if len(next_buttons) == 0: break next_button = next_buttons[-1] next_button.click() time.sleep(4) print(f&quot;Estado {estado} salvo com sucesso.&quot;) data.to_excel(&quot;cirurgioes.xlsx&quot;, index=False) driver.quit() </code></pre>
<python><selenium-webdriver><web-scraping>
2023-06-08 20:26:08
1
1,416
Lucas Fernandes
76,435,349
13,916,049
Remove substring from OrderedDict keys
<p>I want to remove the <code>chr</code> prefix from all the keys in an OrderedDict and update the OrderedDict with the new keys.</p> <pre><code>from collections import OrderedDict for key, item in hg38_genome.items(): key = {x.removeprefix(&quot;chr&quot;) for x in key} hg38_genome = OrderedDict([(key, item)]) </code></pre> <p>Traceback:</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) Input In [29], in &lt;cell line: 1&gt;() 1 for key, item in hg38_genome.items(): 2 key = {x.removeprefix(&quot;chr&quot;) for x in key} ----&gt; 3 hg38_genome = OrderedDict([(key, item)]) TypeError: unhashable type: 'set' </code></pre> <p><code>hg38_genome</code></p> <pre><code>OrderedDict([('chr1', &lt;bioframe.io.fileops.PysamFastaRecord at 0x2baa3dca9100&gt;), ('chr10', &lt;bioframe.io.fileops.PysamFastaRecord at 0x2baa3dca9130&gt;), ('chr11', &lt;bioframe.io.fileops.PysamFastaRecord at 0x2baa3dca9610&gt;), ('chr11_KI270721v1_random')] </code></pre> <p>Expected output:</p> <pre><code>OrderedDict([('1', &lt;bioframe.io.fileops.PysamFastaRecord at 0x2baa3dca9100&gt;), ('10', &lt;bioframe.io.fileops.PysamFastaRecord at 0x2baa3dca9130&gt;), ('11', &lt;bioframe.io.fileops.PysamFastaRecord at 0x2baa3dca9610&gt;), ('11_KI270721v1_random')] </code></pre>
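The `TypeError` comes from the set comprehension `{...}`, which turns each key into an (unhashable) set of strings; the loop also rebuilds the dict from scratch on every iteration, keeping only one entry. Building the whole `OrderedDict` in a single comprehension avoids both problems (`str.removeprefix` needs Python 3.9+). A sketch with placeholder values standing in for the `PysamFastaRecord` objects:

```python
from collections import OrderedDict

genome = OrderedDict(
    [("chrM", "recM"), ("chr1", "rec1"), ("chr11_KI270721v1_random", "rec11r")]
)

# One pass: strip the prefix from each key, keep values and insertion order intact
genome = OrderedDict((k.removeprefix("chr"), v) for k, v in genome.items())
```

On Python < 3.9, `k[3:] if k.startswith("chr") else k` does the same job.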
<python><ordereddictionary><ordereddict>
2023-06-08 20:05:24
1
1,545
Anon
76,435,305
6,611,672
Convert Python list of dicts to mapping of key to rest of dict
<p>Let's say I have the following list:</p> <pre><code>l = [ {&quot;a&quot;: 10, &quot;b&quot;: 100, &quot;c&quot;: 100}, {&quot;a&quot;: 20, &quot;b&quot;: 100, &quot;c&quot;: 100}, {&quot;a&quot;: 30, &quot;b&quot;: 100, &quot;c&quot;: 100}, ] </code></pre> <p>I know <code>&quot;a&quot;</code> is unique in each item:</p> <pre><code>assert len({x[&quot;a&quot;] for x in l}) == len(l) </code></pre> <p>I want to generate a mapping of the <code>&quot;a&quot;</code> value to the rest of each item so my end result is the following dictionary:</p> <pre><code>{ 10: {&quot;b&quot;: 100, &quot;c&quot;: 100}, 20: {&quot;b&quot;: 100, &quot;c&quot;: 100}, 30: {&quot;b&quot;: 100, &quot;c&quot;: 100}, } </code></pre> <p>So far I've come up with the following:</p> <pre><code>{x[&quot;a&quot;]: {k: v for k, v in x.items() if k != &quot;a&quot;} for x in l} </code></pre> <p>Is this the best way to write this? Or is there a better way or a built in function that I'm missing?</p>
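The comprehension shown is already idiomatic, and there is no built-in that does this directly. One variant that avoids the inner comprehension pops the key out of a shallow copy of each item, which reads well when the "rest of the dict" has many keys. A sketch:

```python
l = [
    {"a": 10, "b": 100, "c": 100},
    {"a": 20, "b": 100, "c": 100},
    {"a": 30, "b": 100, "c": 100},
]

def key_by(items, key):
    """Map each item's `key` value to the rest of that item (originals untouched)."""
    out = {}
    for item in items:
        rest = dict(item)          # shallow copy so the source list is unmodified
        out[rest.pop(key)] = rest  # pop removes the key and returns its value
    return out

mapping = key_by(l, "a")
```

The one-liner form of the same idea is `{d.pop("a"): d for d in map(dict, l)}`; both rely on the stated uniqueness of `"a"`.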
<python>
2023-06-08 19:56:34
1
5,847
Johnny Metz
76,435,254
6,843,153
How to read a file from s3 using s3fs
<p>I have the following method in Python:</p> <pre><code>def read_file(self, bucket, table_name, file_name, format=&quot;csv&quot;): data = None read_from_path = f&quot;s3://{bucket}/{table_name}/{file_name}&quot; try: fs = s3fs.S3FileSystem( key=self.dwh_s3_client_key, secret=self.dwh_s3_client_secret ) with fs.open(read_from_path, &quot;r&quot;) as f: data = f.read() return data except Exception as e: raise Exception( f&quot;Failed to read file {read_from_path} from S3 due to this error:\n`{str(e)}`&quot; ) </code></pre> <p>The problem is that I get an <code>Access Denied</code> error. I'm able to list the files in the bucket, but it fails only when attempting to read the content of one of them.</p> <p>Is this the right way to read a file from S3 using <code>s3fs</code>?</p>
<python><amazon-s3><python-s3fs>
2023-06-08 19:48:27
1
5,505
HuLu ViCa
76,435,173
13,538,030
Categorize rows by their similarity in Python
<p>I am here to look for input for a data manipulation problem related to natural language processing.</p> <p>To make life easier, I am using a mock dataset posted several years ago from <a href="https://stackoverflow.com/questions/47159996/how-to-group-text-data-based-on-document-similarity">How to group text data based on document similarity?</a>.</p> <pre><code>import pandas as pd from difflib import SequenceMatcher df = pd.DataFrame({'Questions': ['What are you doing?','What are you doing tonight?','What are you doing now?','What is your name?','What is your nick name?','What is your full name?','Shall we meet?', 'How are you doing?' ]}) def similarity_score(s1, s2): return SequenceMatcher(None, s1, s2).ratio() def similarity(x,df): sim_score = [] for i in df['Questions']: sim_score.append(similarity_score(x,i)) return sim_score df['similarity'] = df['Questions'].apply(lambda x : similarity(x, df)).astype(str) print(df) </code></pre> <p>The output is as follows:</p> <pre><code>Questions \ 0 What are you doing? 1 What are you doing tonight? 2 What are you doing now? 3 What is your name? 4 What is your nick name? 5 What is your full name? 6 Shall we meet? 7 How are you doing? similarity 0 [1.0, 0.8260869565217391, 0.9047619047619048, ... 1 [0.8260869565217391, 1.0, 0.84, 0.533333333333... 2 [0.9047619047619048, 0.84, 1.0, 0.585365853658... 3 [0.6486486486486487, 0.5333333333333333, 0.585... 4 [0.5714285714285714, 0.52, 0.5217391304347826,... 5 [0.5714285714285714, 0.52, 0.5652173913043478,... 6 [0.36363636363636365, 0.34146341463414637, 0.3... 7 [0.8108108108108109, 0.6666666666666666, 0.731... </code></pre> <p>The logic is that I go through each row in the data frame to compare it to all other rows (including itself) in order to compute their similarity. I then store the similarity score as a list in another column called &quot;similarity&quot;.</p> <p>Next, I want to categorize the questions in the first column. If the similarity score &gt; 0.9, then those rows should be assigned to the same group. How can I achieve this?</p>
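One straightforward way, instead of storing score lists per row: greedily assign each question to the first existing group whose representative it matches above the threshold, otherwise start a new group. A sketch on a subset of the mock data:

```python
from difflib import SequenceMatcher

def group_by_similarity(texts, threshold=0.9):
    """Greedy grouping: join the first group whose first member is similar enough."""
    groups = []
    for text in texts:
        for group in groups:
            if SequenceMatcher(None, text, group[0]).ratio() > threshold:
                group.append(text)
                break
        else:  # no group matched -> this text starts its own group
            groups.append([text])
    return groups

questions = ["What are you doing?", "What are you doing tonight?",
             "What are you doing now?", "Shall we meet?"]
groups = group_by_similarity(questions)
```

The group index for each question can then be written back into a DataFrame column. Note the result is order-dependent (each group is anchored by its first member); for order-independent clustering, a union-find over all pairs above the threshold is the usual upgrade.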
<python><pandas><nlp><aggregate><grouping>
2023-06-08 19:35:45
1
384
Sophia
76,435,153
4,352,047
Polars - rolling - How to reset count each day?
<p>I've switched from Pandas-Python to Polars and I'm trying to figure out how to essentially do a rolling sum for each day. I don't want just the total for each day, I want the totals for each time period (5m period) each day, in a &quot;rolling_sum&quot; fashion. Here is the code I have so far:</p> <pre class="lang-py prettyprint-override"><code>from datetime import datetime, timedelta df = pl.DataFrame({ &quot;date&quot;: pl.datetime_range( datetime(1985, 1, 1), datetime(1985, 1, 6), timedelta(minutes=5), time_unit=&quot;ns&quot;, eager=True ) }) df = df.with_columns( A=pl.lit(2, dtype=pl.Int32) ) df = ( df.lazy() .rolling(&quot;date&quot;, period=&quot;1d&quot;) .agg( pl.col(&quot;A&quot;).sum().alias(&quot;A_daily&quot;) ) .collect() ) print(df[280:]) </code></pre> <p>This produces:</p> <pre><code>shape: (1_161, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ date ┆ A_daily β”‚ β”‚ --- ┆ --- β”‚ β”‚ datetime[ns] ┆ i32 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════║ β”‚ 1985-01-01 23:20:00 ┆ 562 β”‚ β”‚ 1985-01-01 23:25:00 ┆ 564 β”‚ β”‚ 1985-01-01 23:30:00 ┆ 566 β”‚ β”‚ 1985-01-01 23:35:00 ┆ 568 β”‚ β”‚ 1985-01-01 23:40:00 ┆ 570 β”‚ β”‚ 1985-01-01 23:45:00 ┆ 572 β”‚ β”‚ 1985-01-01 23:50:00 ┆ 574 β”‚ β”‚ 1985-01-01 23:55:00 ┆ 576 β”‚ β”‚ 1985-01-02 00:00:00 ┆ 576 β”‚ β”‚ 1985-01-02 00:05:00 ┆ 576 β”‚ β”‚ 1985-01-02 00:10:00 ┆ 576 β”‚ β”‚ 1985-01-02 00:15:00 ┆ 576 β”‚ β”‚ 1985-01-02 00:20:00 ┆ 576 β”‚ β”‚ 1985-01-02 00:25:00 ┆ 576 β”‚ β”‚ 1985-01-02 00:30:00 ┆ 576 β”‚ β”‚ 1985-01-02 00:35:00 ┆ 576 β”‚ β”‚ 1985-01-02 00:40:00 ┆ 576 β”‚ β”‚ 1985-01-02 00:45:00 ┆ 576 β”‚ β”‚ 1985-01-02 00:50:00 ┆ 576 β”‚ ... 
β”‚ 1985-01-05 23:50:00 ┆ 576 β”‚ β”‚ 1985-01-05 23:55:00 ┆ 576 β”‚ β”‚ 1985-01-06 00:00:00 ┆ 576 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>I guess what I'm actually after is for the <code>A_daily</code> value to reset each day, i.e. to look like:</p> <pre><code>shape: (1_161, 2) β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ date ┆ A_daily β”‚ β”‚ --- ┆ --- β”‚ β”‚ datetime[ns] ┆ i32 β”‚ β•žβ•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•β•ͺ═════════║ β”‚ 1985-01-01 23:20:00 ┆ 562 β”‚ β”‚ 1985-01-01 23:25:00 ┆ 564 β”‚ β”‚ 1985-01-01 23:30:00 ┆ 566 β”‚ β”‚ 1985-01-01 23:35:00 ┆ 568 β”‚ β”‚ 1985-01-01 23:40:00 ┆ 570 β”‚ β”‚ 1985-01-01 23:45:00 ┆ 572 β”‚ β”‚ 1985-01-01 23:50:00 ┆ 574 β”‚ β”‚ 1985-01-01 23:55:00 ┆ 576 β”‚ β”‚ 1985-01-02 00:00:00 ┆ 2 β”‚ # !!!! CHANGE HERE !!!! β”‚ 1985-01-02 00:05:00 ┆ 4 β”‚ β”‚ 1985-01-02 00:10:00 ┆ 6 β”‚ β”‚ 1985-01-02 00:15:00 ┆ 8 β”‚ β”‚ 1985-01-02 00:20:00 ┆ 10 β”‚ β”‚ 1985-01-02 00:25:00 ┆ 12 β”‚ β”‚ 1985-01-02 00:30:00 ┆ 14 β”‚ β”‚ 1985-01-02 00:35:00 ┆ 16 β”‚ β”‚ 1985-01-02 00:40:00 ┆ 18 β”‚ β”‚ 1985-01-02 00:45:00 ┆ 20 β”‚ β”‚ 1985-01-02 00:50:00 ┆ 22 β”‚ </code></pre> <p>Basically, I want to just reset the rolling sum each day. In Pandas, I'd have done it sort of like this:</p> <pre><code>tn = outdata.groupby(pd.Grouper(freq='D', key='_daily_reset'))['_vp_volume'].transform('cumsum') </code></pre>
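The reset-per-day behaviour is a cumulative sum within each calendar-day group rather than a rolling 1d window. In Polars that should be expressible as `pl.col("A").cum_sum().over(pl.col("date").dt.date())` (from memory of the API; please verify against current docs). The underlying logic, sketched without Polars:

```python
from datetime import datetime, timedelta
from itertools import groupby

# 5-minute stamps spanning a day boundary (itertools.groupby needs them sorted)
stamps = [datetime(1985, 1, 1, 23, 50) + timedelta(minutes=5) * i for i in range(6)]
values = [2] * len(stamps)

# Cumulative sum that restarts whenever the calendar date changes
daily = []
for _, day_group in groupby(zip(stamps, values), key=lambda pair: pair[0].date()):
    total = 0
    for _, v in day_group:
        total += v
        daily.append(total)
```

The `.rolling(..., period="1d")` approach instead keeps a trailing 24-hour window, which is why the original output plateaus at 576 rather than resetting.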
<python><python-polars>
2023-06-08 19:31:34
1
379
Deftness
76,435,070
1,491,089
How do I use Python Trimesh to get boundary vertex indices?
<p>How do I use the <a href="https://trimsh.org/index.html" rel="nofollow noreferrer">Trimesh</a> Python library to retrieve the indices of the vertices that make up the boundary of a mesh?</p> <p>For example, for a planar mesh, I expect only the vertices that are on the outer edges. If the planar mesh has a hole, I also expect the vertices that mark the edges of the hole. For an open cylindrical mesh, I expect only the vertices that line the two openings.</p> <p>All of my meshes are open, like pipes or boxes without tops and bottoms. They are not watertight.</p> <p>For a given mesh instance, neither the <code>edges</code> property (which returns more entries than I have vertices!) nor the <code>edges_unique</code> property returns what I expect. The <code>facets_boundary</code> property works for planar meshes, but fails spectacularly for the cylindrical mesh. In reading the project API documentation, I find it difficult to understand what I should expect from these properties.</p> <p>Do I have to find the edges myself, e.g. using <code>vertex_counts = np.bincount(faces.flatten())</code>?</p> <p>For my meshes, this produces results as follows:</p> <pre><code>Planar mesh (4x5) vertex_counts: [2 3 3 3 3 1 3 6 6 6 6 3 3 6 6 6 6 3 3 6 6 6 6 3 1 3 3 3 3 2] Planar mesh with hole (8x8 with a 3x3 hole in the middle) vertex_counts: [2 3 3 3 3 3 3 3 1 3 6 6 6 6 6 6 6 3 3 6 4 3 3 3 5 6 3 3 6 3 0 0 0 3 6 3 3 6 3 0 0 0 3 6 3 3 6 3 0 0 0 3 6 3 3 6 5 3 3 3 4 6 3 3 6 6 6 6 6 6 6 3 1 3 3 3 3 3 3 3 2] Cylindrical mesh (8 circular divisions, 6 longitudinal divisions) vertex_counts: [3 3 3 3 3 3 3 3 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 3 3 3 3 3 3 3 3] </code></pre>
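The bincount of face occurrences is ambiguous, but a direct definition works for all three cases: a boundary edge is a face edge used by exactly one triangle, and boundary vertices are the endpoints of those edges. This handles outer rims, holes, and cylinder openings uniformly. A stdlib sketch (for a trimesh object, the `faces` argument would be `mesh.faces`):

```python
from collections import Counter

def boundary_vertex_indices(faces):
    """Indices of vertices lying on edges that belong to exactly one triangle."""
    edge_count = Counter()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edge_count[(min(u, v), max(u, v))] += 1  # undirected edge key
    return sorted({v for edge, n in edge_count.items() if n == 1 for v in edge})

# Closed triangle fan around vertex 0 (a disk): only the outer ring is boundary
fan = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1)]
ring = boundary_vertex_indices(fan)
```

Interior edges are shared by two faces and drop out, so the fan's center vertex 0 is correctly excluded. A watertight mesh yields an empty result, matching the intuition that it has no boundary.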
<python><mesh><trimesh>
2023-06-08 19:19:04
2
577
KevinM