| QuestionId (int64: 74.8M-79.8M) | UserId (int64: 56-29.4M) | QuestionTitle (string: 15-150 chars) | QuestionBody (string: 40-40.3k chars) | Tags (string: 8-101 chars) | CreationDate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64: 0-44) | UserExpertiseLevel (int64: 301-888k) | UserDisplayName (string: 3-30 chars, nullable) |
|---|---|---|---|---|---|---|---|---|
76,167,810
| 13,039,962
|
Get the dates of the start and end of consecutive values with a condition
|
<p>I have this df:</p>
<pre><code> DATE CODE QUARTER PP
0 1964-01-01 100007 1964Q1 NaN
1 1964-01-02 100007 1964Q1 NaN
2 1964-01-03 100007 1964Q1 NaN
3 1964-01-04 100007 1964Q1 NaN
4 1964-01-05 100007 1964Q1 NaN
... ... ... ...
10656619 2023-03-27 118004 2023Q1 0.0
10656620 2023-03-28 118004 2023Q1 0.0
10656621 2023-03-29 118004 2023Q1 0.0
10656622 2023-03-30 118004 2023Q1 0.0
10656623 2023-03-31 118004 2023Q1 0.0
[2647935 rows x 4 columns]
</code></pre>
<p>I would like to group by CODE and QUARTER, then obtain the maximum number of consecutive days in which PP<1, the start and end dates of that longest run, and finally the total number of NaNs.</p>
<p>To calculate the maximum number of consecutive days in which PP<1, I wrote this code:</p>
<pre><code>df['CONDITION']=(df['PP']<1)
#CALCULATE THE maximum value of consecutive days in which PP<1
max_values = (df.groupby(['CODE','QUARTER'])
.apply(lambda g: (g['CONDITION'].ne(g['CONDITION'].shift()).cumsum() # Group continuous
[g['CONDITION']] # Keep True
.value_counts().max())) # Find max True
.to_frame('MAX_CONSEC_VALUES').reset_index())
</code></pre>
<p>But I also need the start and end dates of the longest run of consecutive days in which PP<1, and the total number of NaNs. How can I do this?</p>
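A sketch of one way to extend the approach above, on a toy frame with the question's column names (the helper name `longest_run` is invented here): label the runs exactly as the question already does, then pull the dates of the largest True run and count the NaNs per group.

```python
import numpy as np
import pandas as pd

# Toy frame with the question's columns.
df = pd.DataFrame({
    "DATE": pd.date_range("2020-01-01", periods=8),
    "CODE": [100007] * 8,
    "QUARTER": ["2020Q1"] * 8,
    "PP": [np.nan, 0.0, 0.0, 2.0, 0.5, 0.5, 0.5, 3.0],
})
df["CONDITION"] = df["PP"] < 1  # note: NaN < 1 is False

def longest_run(g):  # helper name invented here
    # Label maximal runs of equal CONDITION values (the question's
    # own trick), keep the True runs, find the biggest one.
    run_id = g["CONDITION"].ne(g["CONDITION"].shift()).cumsum()
    sizes = run_id[g["CONDITION"]].value_counts()
    n_na = int(g["PP"].isna().sum())
    if sizes.empty:
        return pd.Series({"MAX_CONSEC": 0, "START": pd.NaT,
                          "END": pd.NaT, "N_NA": n_na})
    best = sizes.idxmax()  # label of the longest PP<1 run
    dates = g.loc[run_id == best, "DATE"]
    return pd.Series({"MAX_CONSEC": int(sizes.max()),
                      "START": dates.iloc[0], "END": dates.iloc[-1],
                      "N_NA": n_na})

out = df.groupby(["CODE", "QUARTER"]).apply(longest_run).reset_index()
print(out)
```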
|
<python><pandas>
|
2023-05-03 20:40:21
| 1
| 523
|
Javier
|
76,167,779
| 12,369,569
|
SyntaxError in python script that creates a qiime2 manifest file
|
<p>My script creates a manifest document that contains the filepaths of my fastq data for upload into qiime2. The input is a text file of the sample names and a directory that contains the fastq files. I then run the script using <code>python ./manifest_single.py --input-dir seqs</code>. Input text file example:</p>
<blockquote>
<p>SRR123456 SRR123457 SRR123458...</p>
</blockquote>
<p>Below is my script:</p>
<pre><code>#!/local/cluster/bin/python3
# assign a variable to the file of interest
file_name = "seq.by.sys_accession.txt"
dir = "/filepath/seqs"
# open the file with a file handle
read_SRR = open(file_name, "r")
# obtain a list of lines in your file
allLines_list = read_SRR.readlines()
# create output file
outFile = "manifest.txt"
read_outFile = open(outFile, "w")
read_outFile.write(f"sample-id\tabsolute-filepath\n")
# loop over the body of data lines until the end of the file
for rawLine in allLines_list:
line = rawLine.strip()
print(f"|{line}|")
with open(outFile, "a") as f:
read_outFile.write(f"{line}\t{dir}/{line}.R1.fastq.gz\n")
outFile.close()
read_SRR.close()
</code></pre>
<p>But I keep receiving the error</p>
<pre><code>[name@congo metagenome-analysis]$ python ./manifest_single.py --input-dir seqs
File "./manifest_single.py", line 17
read_outFile.write(f"sample-id\tabsolute-filepath\n")
^
SyntaxError: invalid syntax
</code></pre>
<p>I've tried replacing the <code>\t</code> and <code>\n</code> with a literal tab. I've triple-checked my input file structure and contents.</p>
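For context, a SyntaxError pointing at an f-string usually means the interpreter actually executing the script predates Python 3.6 (for example, plain `python` resolving to Python 2 on the cluster), which would also explain why the `python3` shebang alone doesn't help when the file is launched via `python ./manifest_single.py`. A hedged sketch, with the path and sample name taken from the question, of checking the interpreter and rewriting the offending lines in a version-agnostic way:

```python
import sys

# Which interpreter is really running the file? A SyntaxError at an
# f-string means it predates Python 3.6 (e.g. `python` -> Python 2),
# regardless of the python3 shebang in the script.
print(sys.version)

# Version-agnostic rewrite of the offending writes, with the path and
# sample name taken from the question:
dir_ = "/filepath/seqs"
line = "SRR123456"
header = "sample-id\tabsolute-filepath\n"
row = "{}\t{}/{}.R1.fastq.gz\n".format(line, dir_, line)
```

Alternatively, running `python3 ./manifest_single.py` (or `./manifest_single.py` directly, honoring the shebang) should make the f-strings valid.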
|
<python><manifest><fastq><qiime>
|
2023-05-03 20:32:50
| 0
| 474
|
Geomicro
|
76,167,739
| 18,572,509
|
Make Flask pass "self" arguments to server functions?
|
<p>I have Flask server wrapped in a class that looks something like this:</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, Response, render_template, request
class Server:
def __init__(self, host, port):
# Initialize Flask:
self.app = Flask(__name__)
self.host = host
self.port = port
self.string = "foobar"
@staticmethod
@app.route("/")
def index():
return render_template("index.html")
@app.route("/pantilt")
def not_staticmethod(self):
return self.string
def start(self):
app.run(host=self.host, port=self.port)
</code></pre>
<p>The problem is that I have a function (<code>not_staticmethod</code>) that requires a <code>self</code> argument. Is there a way that I can make Flask always pass certain arguments to the functions?
i.e. put a line in <code>__init__</code> that is something like <code>self.app.alway_pass_args = (self, other_arg)</code></p>
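One common pattern, sketched here (no Flask-specific magic: Flask simply calls whatever callable you register), is to register bound methods inside `__init__` with `add_url_rule`, so `self` is already baked in; `render_template` is swapped for a literal string to keep the sketch self-contained:

```python
from flask import Flask

class Server:
    def __init__(self, host, port):
        self.app = Flask(__name__)
        self.host = host
        self.port = port
        self.string = "foobar"
        # Register BOUND methods: Flask stores them as the view
        # callables, so `self` is already filled in on every request.
        self.app.add_url_rule("/", view_func=self.index)
        self.app.add_url_rule("/pantilt", view_func=self.not_staticmethod)

    def index(self):
        return "index"  # literal string instead of render_template

    def not_staticmethod(self):
        return self.string

    def start(self):
        self.app.run(host=self.host, port=self.port)
```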
|
<python><python-3.x><flask>
|
2023-05-03 20:26:35
| 1
| 765
|
TheTridentGuy supports Ukraine
|
76,167,560
| 5,731,101
|
Colcon build fails following docs without clear error or traceback
|
<p>I'm taking my first baby-steps with ROS2 and following the beginner tutorials in the docs to familiarise myself with the workflows.</p>
<p>The section about parameters shows you how to use launch files.
<a href="https://docs.ros.org/en/humble/Tutorials/Beginner-Client-Libraries/Using-Parameters-In-A-Class-Python.html" rel="nofollow noreferrer">https://docs.ros.org/en/humble/Tutorials/Beginner-Client-Libraries/Using-Parameters-In-A-Class-Python.html</a></p>
<p>I followed the first section of this tutorial. All is well.</p>
<p>The second section tells you to create a launch file and add <code>data_files</code> to setup.py. This is where <code>colcon build</code> fails with a rather obscure error:</p>
<pre><code>Summary: 2 packages finished [4.96s]
1 package failed: python_parameters
1 package aborted: my_package
1 package had stderr output: python_parameters
2 packages not processed
Command '['/usr/bin/python3', '-c', 'import sys;from contextlib import suppress;exec("with suppress(ImportError): from setuptools.extern.packaging.specifiers import SpecifierSet");exec("with suppress(ImportError): from packaging.specifiers import SpecifierSet");from distutils.core import run_setup;dist = run_setup( \'setup.py\', script_args=(\'--dry-run\',), stop_after=\'config\');skip_keys = (\'cmdclass\', \'distclass\', \'ext_modules\', \'metadata\');data = { key: value for key, value in dist.__dict__.items() if ( not key.startswith(\'_\') and not callable(value) and key not in skip_keys and key not in dist.display_option_names )};data[\'metadata\'] = { k: v for k, v in dist.metadata.__dict__.items() if k not in (\'license_files\', \'provides_extras\')};sys.stdout.buffer.write(repr(data).encode(\'utf-8\'))']' returned non-zero exit status 1.
</code></pre>
<p>If I remove the <code>data_files</code> entry from setup.py, <code>colcon build</code> runs fine.</p>
<p>This is what I have inside of the setup args:</p>
<pre><code> data_files=[
(os.path.join('share', package_name), glob('launch/*launch.[pxy][yma]*')),
]
</code></pre>
<p>This makes me suspect the problem is with <code>data_files</code>. But I'm at a loss here. As soon as <code>data_files</code> is present, whether it contains a value or not, the build fails. Can someone help me debug this?</p>
<p>thank you!</p>
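One frequent cause (an assumption worth checking) is that the snippet replaces, rather than extends, the `data_files` list the package template generated, dropping the resource-index and `package.xml` entries, or that `os`/`glob` are never imported in setup.py. A sketch of the full list as the Humble tutorial intends, assuming the tutorial's package name `python_parameters`:

```python
import os
from glob import glob  # both imports must also be at the top of setup.py

package_name = "python_parameters"  # the tutorial's package name (assumed)

# Extend -- don't replace -- the data_files list the template generated:
# dropping the resource-index or package.xml entries breaks the build.
data_files = [
    ("share/ament_index/resource_index/packages",
     ["resource/" + package_name]),
    ("share/" + package_name, ["package.xml"]),
    (os.path.join("share", package_name),
     glob("launch/*launch.[pxy][yma]*")),
]
```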
|
<python><ros2><colcon>
|
2023-05-03 19:55:51
| 1
| 2,971
|
S.D.
|
76,167,484
| 5,452,008
|
How to update table in gradio by uploading a csv file?
|
<p>I would like to upload a CSV file and update a table with the content and finally create a plot with the content, but I am stuck connecting all the components. I also could not find any good documentation of this.</p>
<pre><code>import gradio as gr
default_csv = "Phase,Activity,Start date,End date\n\"Mapping the Field\",\"Literature review\",2024-01-01,2024-01-31"
def process_csv_text(text):
print('process_csv_text')
print(text)
df = pd.read_csv(StringIO(text), parse_dates=["Start date", "End date"])
return df
with gr.Blocks() as demo:
upload_button = gr.UploadButton(label="Upload Timetable", file_types = ['.csv'], live=True, file_count = "single")
table = gr.Dataframe(headers=["Phase", "Activity", "Start date", "End date"], col_count=4, default=process_csv_text(default_csv))
image = gr.Plot()
upload_button.click(fn=process_csv_text, inputs=upload_button, outputs=table, api_name="upload_csv")
demo.launch()
</code></pre>
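One plausible issue (an assumption about recent Gradio versions): the `UploadButton` callback receives a temp-file object (or a path string), not the CSV text, and the relevant event is `.upload` rather than `.click`, e.g. `upload_button.upload(fn=process_csv_file, inputs=upload_button, outputs=table)`. A sketch of a handler that reads from the uploaded path (`process_csv_file` is a name invented here):

```python
import pandas as pd

def process_csv_file(file):
    # Hypothetical handler. gr.UploadButton hands the callback a
    # temp-file object (with a .name path) or a path string -- not the
    # file's text -- so read it from disk.
    path = file if isinstance(file, str) else file.name
    return pd.read_csv(path, parse_dates=["Start date", "End date"])
```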
|
<python><csv><gradio>
|
2023-05-03 19:43:08
| 0
| 9,295
|
Soerendip
|
76,167,425
| 1,179,620
|
Why does using a large num for np.linspace mess up the calculation for intersection?
|
<p>I am attempting some calculations/graphing of two functions.</p>
<p>Using</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 35, 50)
g = lambda x: 155 * (1 + (5/100)) ** (x - 1)
f = lambda x: np.where(x<=25, 200, 0)
plt.plot(x, f(x), '-')
plt.plot(x, g(x), '-')
idx = np.argwhere(np.diff(np.sign(f(x) - g(x)))).flatten()
plt.plot(x[idx], f(idx), 'ro')
plt.show()
print(x[idx],f(idx))
</code></pre>
<p><code>>>> [5.71428571] [200]</code>
<a href="https://i.sstatic.net/8esOj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8esOj.png" alt="attempt 1" /></a></p>
<p>But that red dot isn't in the proper space (too far to the left).</p>
<p>So, assuming the issue is the number of x-values, I change <code>x = np.linspace(0, 35, 100)</code> and get</p>
<p><code>>>>[6.01010101] [200]</code>
<a href="https://i.sstatic.net/xBfCF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xBfCF.png" alt="attempt 2" /></a>
which is better but still a bit off, and, in reality, I would like the exact solution.</p>
<p>But, if I change <code>x = np.linspace(0, 35, 200)</code>, I suddenly get</p>
<p><code>>>>[6.15577889] [0]</code> and <a href="https://i.sstatic.net/O4Mdt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O4Mdt.png" alt="attempt 3" /></a>
...which makes no sense to me.</p>
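Two things seem to be going on (worth verifying against the code above). First, `plt.plot(x[idx], f(idx), 'ro')` evaluates `f` at the integer index, not at `x[idx]`; with `num=200` the crossing index itself exceeds 25, so `f(idx)` suddenly returns 0. It should be `f(x[idx])`. Second, the grid point nearest the sign change is never the true intersection; for these particular functions the exact crossing can be solved in closed form:

```python
import numpy as np

g = lambda x: 155 * (1 + 5 / 100) ** (x - 1)

# f(x) = 200 for x <= 25, so the crossing solves
#   155 * 1.05**(x - 1) = 200
#   x = 1 + log(200 / 155) / log(1.05)
x_exact = 1 + np.log(200 / 155) / np.log(1.05)
print(x_exact)  # ~6.22 -- no grid needed
```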
|
<python><numpy><matplotlib>
|
2023-05-03 19:33:47
| 2
| 668
|
jesse
|
76,167,373
| 5,509,839
|
Deprecating Django Migrations
|
<p>I have a long-maintained and large Django project with hundreds and hundreds of migration files at this point. The tests are taking a really long time to run now due to just how many migrations there are to run through before tests begin.</p>
<p>I'm wondering if there's a best-practice for deprecating existing migrations and starting fresh, perhaps with the old migration files archived on git but deleted from the working branch, and starting fresh with new 0001's based on the current schema.</p>
<p>We've luckily almost never had to roll back – and when we do it's at deploy time and not at some point down the line. Does anyone have any thoughts on what the process for starting migrations fresh on an existing project looks like?</p>
|
<python><django>
|
2023-05-03 19:26:05
| 0
| 5,126
|
aroooo
|
76,167,346
| 3,529,833
|
Python - How can I annotate that a function has the same parameters as another?
|
<p>Context here is overwriting the method of a Class from a library</p>
<pre class="lang-py prettyprint-override"><code>from mail_library import MailClient
class LocalMailClient(MailClient):
def send(**kwargs: KwargsFrom[MailClient.send]):
print(f'Mocking email to {kwargs["recipients"]}')
</code></pre>
<p>Is there something similar to the <code>KwargsFrom</code> I suggested above?</p>
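Not in the standard library, as far as I know; `typing.ParamSpec` covers the decorator case, and a common community recipe (sketched here under the name `copy_signature`, which is not a stdlib feature) uses a `cast` so type checkers treat the override as having the template's signature while the runtime behaviour is unchanged. `MailClient` below is a stand-in for the library class:

```python
from typing import Any, Callable, TypeVar, cast

F = TypeVar("F", bound=Callable[..., Any])

def copy_signature(template: F) -> Callable[[Callable[..., Any]], F]:
    # Community recipe (not a stdlib feature): tell the type checker
    # the decorated function has `template`'s signature; at runtime
    # the function is returned unchanged.
    def deco(func: Callable[..., Any]) -> F:
        return cast(F, func)
    return deco

class MailClient:  # stand-in for the library class
    def send(self, recipients: list, subject: str = "") -> None: ...

class LocalMailClient(MailClient):
    @copy_signature(MailClient.send)
    def send(self, *args: Any, **kwargs: Any) -> None:
        print(f"Mocking email to {kwargs.get('recipients')}")
```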
|
<python><python-3.x><python-typing>
|
2023-05-03 19:20:56
| 1
| 3,221
|
Mojimi
|
76,167,270
| 9,179,875
|
Tensorflow model.trainable_variables doesn't update after setting layer.trainable
|
<h3>Context</h3>
<p>I'm creating a script which randomly modifies some parameters in a Tensorflow model. It aims to "encrypt" the model by recording the modifications made so that the modifications can be undone by authorised users only.</p>
<p>To enable this script, <strong>I want to freeze all model layers except the one I'm manually modifying.</strong></p>
<h3>Problem</h3>
<p>To freeze all model layers, I set <code>model.trainable = False</code>. Then, I unfreeze a layer by setting <code>layer.trainable = True</code>. I want the layer's parameters to then be added to <code>model.trainable_variables</code> so that I can compute gradient updates like this:</p>
<pre class="lang-py prettyprint-override"><code>with tf.GradientTape() as tape:
pred = model(x)
loss = tf.keras.losses.sparse_categorical_crossentropy(y, pred, from_logits=True)
# returns None because model.trainable_variables is []
dloss_dparams = tape.gradient(loss, model.trainable_variables)
</code></pre>
<h3>Reproducible Example</h3>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import random
# Load pretrained model
model = tf.keras.applications.mobilenet_v2.MobileNetV2(
input_shape=None,
alpha=1.0,
weights='imagenet',
classifier_activation=None)
# No layers are trainable
model.trainable = False
print(model.trainable_variables) # prints empty list: []
# Choose 5 random layers to train
selected_layers = [layer.name for layer in model.layers]
while len(selected_layers) > 5:
rand_index = random.randint(0, len(selected_layers) - 1)
del selected_layers[rand_index]
# Attempt to make them trainable
for layer in model.layers:
if layer.name in selected_layers:
layer.trainable = True
print(model.trainable_variables) # STILL prints empty list: []
</code></pre>
<h3>Attempted Solutions</h3>
<p>I've checked out <a href="https://www.tensorflow.org/tutorials/images/transfer_learning" rel="nofollow noreferrer">Tensorflow's guide on Transfer learning</a>. Unlike me, they called <code>model.compile()</code> after they adjusted some layers' <code>trainable</code> attributes. But I'm not using an optimiser or evaluation metric; I just want to compute gradients and then manually update model parameters myself.</p>
<p>Some similar Tensorflow issues on StackOverflow are <a href="https://stackoverflow.com/questions/63656935/keras-layer-trainable-false-to-freeze-layer-doesnt-work">unanswered</a> or <a href="https://stackoverflow.com/questions/59462707/why-trainable-variables-do-not-change-after-training">based on bugs not applicable to my reproducible example</a>.</p>
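A plausible explanation (worth verifying against your Keras version): <code>model.trainable = False</code> gates <code>trainable_variables</code> at the top level, so re-enabling a child layer stays invisible. Keeping the model trainable and freezing the layers individually sidesteps this; sketched on a tiny model rather than MobileNetV2:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(4, name="d1"),
    tf.keras.layers.Dense(2, name="d2"),
])

# The question's pattern: the top-level flag gates everything, so
# flipping a child back to True stays invisible.
model.trainable = False
model.layers[1].trainable = True
print(len(model.trainable_variables))  # still 0

# Keep the model trainable and freeze layers individually instead:
model.trainable = True
selected = {"d2"}
for layer in model.layers:
    layer.trainable = layer.name in selected

print(len(model.trainable_variables))  # 2: d2's kernel and bias
```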
|
<python><tensorflow><machine-learning>
|
2023-05-03 19:09:33
| 1
| 385
|
Madhav Malhotra
|
76,167,213
| 1,023,928
|
How can I change the y-axis range based on a callback when the x-axis range changes in Bokeh?
|
<p>I would like to change the visible range of the y-axis each time the x-axis range changes. But something does not seem to work: panning and changing the x-axis range does not seem to invoke the callback, or something is wrong with the callback itself.</p>
<pre><code>x = list(range(100))
y = list(np.random.randint(-10, 10, 100))
y = np.cumsum(y)
p1 = figure(title="Random", width=600, height=600)
p1.line(x, y, color="red")
callback = CustomJS(args=dict(yrange=p1.y_range), code="""
yrange.start=-10;
yrange.end=10;""")
p1.x_range.js_on_change("end", callback)
show(p1)
</code></pre>
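One hedged suggestion: in recent Bokeh versions (>= 2.4) the `RangesUpdate` event from `bokeh.events` fires once per pan/zoom gesture after the range has settled, which is usually what such callbacks want, whereas `js_on_change("end", ...)` can be missed depending on how the range object updates. A sketch (`show(p1)` left commented out so it can run headless):

```python
import numpy as np
from bokeh.events import RangesUpdate
from bokeh.models import CustomJS
from bokeh.plotting import figure

x = list(range(100))
y = np.cumsum(np.random.randint(-10, 10, 100))

p1 = figure(title="Random", width=600, height=600)
p1.line(x, y, color="red")

callback = CustomJS(args=dict(yrange=p1.y_range), code="""
    yrange.start = -10;
    yrange.end = 10;
""")
# Fires once per pan/zoom gesture, after the x-range has settled:
p1.js_on_event(RangesUpdate, callback)
# show(p1)
```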
|
<python><charts><visualization><bokeh>
|
2023-05-03 18:58:12
| 1
| 7,316
|
Matt
|
76,167,094
| 11,462,274
|
Reverse the sequence while keeping pairs of columns in a dataframe
|
<p>Let's say my dataframe <code>df</code> has this sequence of columns:</p>
<pre><code>['e', 'f', 'c', 'd', 'a', 'b']
</code></pre>
<p>And I want to reverse the sequence while keeping pairs, resulting in this sequence:</p>
<pre><code>['a', 'b', 'c', 'd', 'e', 'f']
</code></pre>
<p>If the column names were always the same, I could use this same list above to generate the desired dataframe:</p>
<pre><code>df = df[['a', 'b', 'c', 'd', 'e', 'f']]
</code></pre>
<p>But if there can be multiple pairs of columns and without certainty of their names, the only certainty being that the last pair should come first and so on, how to proceed?</p>
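A sketch of one way to do it generically: walk the column list backwards two at a time, keeping each pair's internal order (this assumes the columns really do come in adjacent pairs):

```python
import pandas as pd

df = pd.DataFrame(columns=["e", "f", "c", "d", "a", "b"])

# Walk the columns from the back, two at a time, keeping each pair's
# internal order (assumes columns really do come in adjacent pairs).
cols = list(df.columns)
reordered = [c for i in range(len(cols) - 2, -1, -2) for c in cols[i:i + 2]]
df = df[reordered]
print(reordered)  # ['a', 'b', 'c', 'd', 'e', 'f']
```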
|
<python><pandas>
|
2023-05-03 18:41:39
| 2
| 2,222
|
Digital Farmer
|
76,166,972
| 12,590,879
|
When is the task added to the event loop?
|
<p>I'm fairly comfortable with the asynchronous functionalities of Python (adding tasks to the event loop, <code>await</code>ing them etc). However, I've been reading more about it recently and I saw an example which confuses me:</p>
<pre class="lang-py prettyprint-override"><code>import aiohttp
import asyncio
import time
start_time = time.time()
async def main():
async with aiohttp.ClientSession() as session:
for number in range(1, 151):
pokemon_url = f'https://pokeapi.co/api/v2/pokemon/{number}'
async with session.get(pokemon_url) as resp:
pokemon = await resp.json()
print(pokemon['name'])
asyncio.run(main())
print("--- %s seconds ---" % (time.time() - start_time))
</code></pre>
<p>The above code is supposed to be running concurrently (which it does) but I don't exactly understand how the event loop is organised in this case. My understanding is that the session is created asynchronously, then the <code>for</code> loop runs 'sequentially' up to a point.</p>
<p>I'm assuming that <code>async with session.get</code> is what is responsible for adding the tasks to the event loop somehow but how? My version would look something like this:</p>
<pre class="lang-py prettyprint-override"><code> async with asyncio.TaskGroup() as group:
for i in range(x):
group.create_task(session.get(url))
</code></pre>
<p>which should be equivalent somehow with the above. So then, how exactly is this done in the first version?</p>
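For what it's worth, the first version is not actually concurrent: each `await resp.json()` completes before the next `session.get` begins, so the loop is sequential and nothing in it adds extra tasks to the loop. The `TaskGroup` (or `asyncio.gather`) version is the concurrent one. A minimal timing sketch, with `asyncio.sleep` standing in for the HTTP call:

```python
import asyncio
import time

async def fetch(i):
    await asyncio.sleep(0.1)  # stand-in for session.get / resp.json
    return i

async def sequential():
    # Mirrors the question's loop: each request is awaited to
    # completion before the next one starts, so nothing overlaps.
    return [await fetch(i) for i in range(5)]

async def concurrent():
    # Overlapping version: all five coroutines run on the loop at once.
    return await asyncio.gather(*(fetch(i) for i in range(5)))

t0 = time.perf_counter()
seq = asyncio.run(sequential())
seq_time = time.perf_counter() - t0

t0 = time.perf_counter()
conc = asyncio.run(concurrent())
conc_time = time.perf_counter() - t0

print(seq_time, conc_time)  # roughly 0.5 s vs 0.1 s
```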
|
<python><python-asyncio><aiohttp>
|
2023-05-03 18:24:55
| 1
| 325
|
Pol
|
76,166,919
| 11,333,604
|
Torchvision resize not recognising second dimension of pytorch tensor input
|
<p>I have a pytorch tensor of shape [512,512] and want to resize it to [256,256]. I tried using</p>
<pre><code>resized = T.Resize(size=(256,256))(img)
</code></pre>
<p>But I got this error</p>
<blockquote>
<p>Input and output must have the same number of spatial dimensions, but got input with spatial dimensions of [512] and output size of [256, 256]. Please provide input tensor in (N, C, d1, d2, ...,dK) format and output size in (o1, o2, ...,oK) format.</p>
</blockquote>
<p>But the shape of img is not [512].</p>
<p>I also tried to play with the dimensions of the original image,</p>
<pre><code>img = img[:100,:200]
</code></pre>
<p>and the error transforms into</p>
<blockquote>
<p>Input and output must have the same number of spatial dimensions, but got input with spatial dimensions of <strong>[200]</strong> and output size of [256, 256]. Please provide input tensor in (N, C, d1, d2, ...,dK) format and output size in (o1, o2, ...,oK) format.</p>
</blockquote>
<p>so it only reads the second dimension, or like only the first line from the 2d tensor img.</p>
|
<python><pytorch><torchvision>
|
2023-05-03 18:17:52
| 1
| 303
|
Iliasp
|
76,166,910
| 3,181,104
|
"OSError: Starting path not found" when using Selenium webdriver manager for the first time in Python 3
|
<p>I am trying to set up Selenium 4 in Python for the first time on my Windows 10 PC. I want to be able to manipulate a browser session locally with login.</p>
<ul>
<li><p>I updated local Python 3 to the latest release 3.11.3</p>
</li>
<li><p>I installed the latest stable Selenium release 4.9.0 by downloading the files manually and then running <code>setup.py</code></p>
</li>
<li><p>I installed the latest Webdriver-manager release 4.9.0 by downloading the files manually and then running <code>setup.py</code></p>
</li>
<li><p>I updated the local ChromeDriver</p>
</li>
<li><p>I created an empty <code>.env</code> file in my working directory.</p>
</li>
<li><p>I created a <code>requirements.txt</code> file with a single line <code>selenium==4.9.0</code> as instructed by <a href="https://www.selenium.dev/documentation/webdriver/getting_started/install_library/" rel="nofollow noreferrer">documentation</a>. (Honestly, I don't really know how the requirements file works.)</p>
</li>
<li><p>I added the directory containing Chrome driver to environment variable %PATH%</p>
</li>
</ul>
<p>Next, I created a <code>test.py</code> file in the working directory as follows:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
driver.get("https://www.google.com")
</code></pre>
<p>But upon running the file with <code>python test.py</code> where <code>python</code> is <code>DOSKEY</code>'d to the Python executable, I received the "OSError: Starting path not found" error as follows.</p>
<p>Yet the very same call, <code>print(find_dotenv())</code>, run on its own with <code>from dotenv import find_dotenv</code>, outputs the full path <code>...\.env</code> seemingly successfully. (I am not sure how the <code>.egg</code> file interacts with <code>find_dotenv()</code>.)</p>
<p>What can I do to get Selenium running?</p>
<pre><code>Traceback (most recent call last):
File "test.py", line 3, in <module>
from webdriver_manager.chrome import ChromeDriverManager
File "C:\Python\Python311\Lib\site-packages\webdriver_manager-3.8.6-py3.11.egg\webdriver_manager\chrome.py", line 4, in <module>
File "C:\Python\Python311\Lib\site-packages\webdriver_manager-3.8.6-py3.11.egg\webdriver_manager\core\download_manager.py", line 3, in <module>
File "C:\Python\Python311\Lib\site-packages\webdriver_manager-3.8.6-py3.11.egg\webdriver_manager\core\http.py", line 4, in <module>
File "C:\Python\Python311\Lib\site-packages\webdriver_manager-3.8.6-py3.11.egg\webdriver_manager\core\config.py", line 9, in <module>
File "C:\Python\Python311\Lib\site-packages\python_dotenv-1.0.0-py3.11.egg\dotenv\main.py", line 336, in load_dotenv
dotenv_path = find_dotenv()
^^^^^^^^^^^^^
File "C:\Python\Python311\Lib\site-packages\python_dotenv-1.0.0-py3.11.egg\dotenv\main.py", line 300, in find_dotenv
for dirname in _walk_to_root(path):
File "C:\Python\Python311\Lib\site-packages\python_dotenv-1.0.0-py3.11.egg\dotenv\main.py", line 257, in _walk_to_root
raise IOError('Starting path not found')
</code></pre>
|
<python><python-3.x><windows><selenium-webdriver>
|
2023-05-03 18:15:45
| 0
| 10,090
|
Argyll
|
76,166,611
| 6,013,354
|
ChatGPT integration with Django for parallel connection
|
<p>I'm using the Django framework to serve multiple ChatGPT connections at the same time, but it makes the whole application halt until the ChatGPT response comes back.</p>
<p>To counter this I'm using async with Django Channels, but it still blocks the Django server from serving any other resource.</p>
<p>This is command to run Django server:</p>
<pre><code>daphne --ping-interval 10 --ping-timeout 600 -b 0.0.0.0 -p 8000 backend.gradingly.asgi:application
</code></pre>
<p>this is code which is calling ChatGPT:</p>
<pre><code>model = "gpt-4-0314"
thread = threading.Thread(target=self.call_gpt_api, args=(prompt,model,context,))
thread.start()
</code></pre>
<p>This is Python code which is sending response to channels</p>
<pre class="lang-py prettyprint-override"><code>async_to_sync(channel_layer.group_send)(
f'user_{context["current_user"]}',{
"type": "send_message", "text": json.dumps(json_data)
}
)
</code></pre>
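A hedged sketch of the usual remedy: never block the event-loop thread; hand the blocking OpenAI call to a worker thread and await it. In Channels consumers, asgiref's `sync_to_async(..., thread_sensitive=False)` plays the same role as `asyncio.to_thread` here; `call_gpt_api` below is a stand-in for the blocking call in the question:

```python
import asyncio
import time

def call_gpt_api(prompt):
    # Stand-in for the blocking OpenAI call from the question.
    time.sleep(0.2)
    return f"reply to {prompt!r}"

async def handle(prompt):
    # Hand the blocking call to a worker thread; the event loop (and
    # every other request/websocket) keeps being served meanwhile.
    return await asyncio.to_thread(call_gpt_api, prompt)

async def main():
    return await asyncio.gather(*(handle(f"q{i}") for i in range(3)))

t0 = time.perf_counter()
replies = asyncio.run(main())
elapsed = time.perf_counter() - t0
print(replies, elapsed)  # three replies in ~0.2 s, not 0.6 s
```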
|
<python><django><asynchronous><openai-api><chatgpt-api>
|
2023-05-03 17:34:17
| 1
| 680
|
Aleem
|
76,166,414
| 926,071
|
Unable to download PDF using chrome in python
|
<p>I am using selenium in python. When I click on the first PDF icon visible on the page, instead of the PDF file being downloaded, a page with an "Open" button appears. I have tried clicking on the Open button using the ID <code>open-button</code> but it does not seem to work.</p>
<pre><code>from selenium import webdriver
import time
from selenium.webdriver.common.by import By
options=webdriver.ChromeOptions()
prefs={"download.default_directory":"C:\\path",
"plugins.always_open_pdf_externally": True,
"download.directory_upgrade": True}
options.add_experimental_option("prefs",prefs)
driver=webdriver.Chrome(executable_path='chromedriver.exe',options=options)
driver.get('https://ieeexplore.ieee.org/xpl/conhome/10067248/proceeding?isnumber=10067251&sortType=vol-only-seq&rowsPerPage=75&pageNumber=1')
cookie_button = driver.find_element(By.CLASS_NAME,'cc-btn')
cookie_button.click()
time.sleep(20)
pdf_buttons = driver.find_elements(By.XPATH, "//a[@aria-label='PDF']")
for button in pdf_buttons:
button.click()
time.sleep(5)
button.click()
driver.back()
break
time.sleep(1000)
</code></pre>
|
<python><google-chrome><selenium-webdriver><pdf><download>
|
2023-05-03 17:03:58
| 2
| 401
|
Chetan
|
76,166,306
| 9,251,158
|
Wrong matrix size in dot product multiplication with sparse matrices
|
<p>I am coding a linear regression. I have issues with the <code>.dot()</code> product when used with SciPy sparse matrices. Here is a minimal reproducible example with 200 observations, 50 regressors, and 400 outputs.</p>
<pre><code>import numpy as np
import scipy
n_row = 200
n_col = 50
n_outcomes = 400
x = np.random.rand(n_row, n_col)
y = scipy.sparse.rand(n_row, n_outcomes, format="csc", density=0.05)
print(x.shape) # (200, 50)
print(y.shape) # (200, 400)
print((x.T @ y).shape) # (50, 400)
print(((x.T).dot(y)).shape) # (50, 200) <- WRONG, it should be (50, 400)
xx_inv = np.linalg.inv(x.T.dot(x))
xx_inv.dot((x.T).dot(y)) # this calculation takes a long time
</code></pre>
<p>Reading <a href="https://numpy.org/doc/stable/reference/generated/numpy.dot.html" rel="nofollow noreferrer">the documentation</a>, it says that for 2-D arrays <code>.dot()</code> is like matrix multiplication <code>@</code>, yet only <code>@</code> works here.</p>
<p>What does <code>.dot()</code> do with sparse matrices, and why does it do it instead of throwing an error?</p>
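What appears to happen (consistent with the shapes above, though worth verifying on your versions): `ndarray.dot` does not dispatch to SciPy; it wraps the sparse matrix as a 0-d object and broadcasts, so every cell of the (50, 200) result holds an entire sparse matrix, which also explains the slow follow-up computation. The `@` operator, and `.dot` called on the sparse operand, dispatch correctly. A sketch at smaller sizes:

```python
import numpy as np
import scipy.sparse

rng = np.random.default_rng(0)
x = rng.random((20, 5))
y = scipy.sparse.rand(20, 40, format="csc", density=0.1, random_state=0)

# ndarray.dot does not dispatch to SciPy: it wraps the sparse matrix
# as a 0-d object and broadcasts, so every cell of the result holds a
# whole sparse matrix -- wrong shape, object dtype, and very slow at
# the question's sizes.
wrong = x.T.dot(y)
print(wrong.shape, wrong.dtype)  # (5, 20) object

# The @ operator and .dot called on the sparse operand dispatch correctly:
ok = x.T @ y
ok2 = y.T.dot(x).T
print(ok.shape, ok2.shape)  # (5, 40) (5, 40)
```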
|
<python><numpy><scipy>
|
2023-05-03 16:49:49
| 2
| 4,642
|
ginjaemocoes
|
76,166,252
| 12,596,824
|
Place labels on top of each bar
|
<p>I have the following data frame:</p>
<pre><code>week person_counts
2023-01-23 777
2023-01-30 800
2023-02-06 890
2023-02-13 766
2023-02-20 789
</code></pre>
<p>How can I plot this with labels on top of each bar?</p>
<p>I have the following code. I see solutions online, but most of them apply when plotting with the pandas plotting functions or with seaborn. How can I do this with just a matplotlib <code>bar()</code> plot?</p>
<pre><code>fig, ax = plt.subplots(figsize = (15,6))
ax.set_ylabel('Person Count')
ax.set_title('Persons')
ax.xaxis.set_major_formatter(mdates.DateFormatter('%m/%d/%y'))
ax.bar(df.TimeStamp, df.person_counts, width = 5)
plt.xticks(df.TimeStamp)
plt.show()
</code></pre>
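Plain matplotlib has had `Axes.bar_label` since 3.4; it takes the `BarContainer` returned by `ax.bar`. A sketch with the question's data retyped (and the formatter call written as `set_major_formatter`):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so the sketch runs anywhere
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({
    "TimeStamp": pd.to_datetime(["2023-01-23", "2023-01-30", "2023-02-06",
                                 "2023-02-13", "2023-02-20"]),
    "person_counts": [777, 800, 890, 766, 789],
})

fig, ax = plt.subplots(figsize=(15, 6))
ax.set_ylabel("Person Count")
ax.set_title("Persons")
ax.xaxis.set_major_formatter(mdates.DateFormatter("%m/%d/%y"))
bars = ax.bar(df.TimeStamp, df.person_counts, width=5)
labels = ax.bar_label(bars)  # one text label on top of each bar
plt.xticks(df.TimeStamp)
```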
|
<python><matplotlib>
|
2023-05-03 16:38:36
| 0
| 1,937
|
Eisen
|
76,166,239
| 2,333,496
|
Simpy: increase store count only if sub stores have items
|
<p>I'm trying to model a manufacturing line where some machines depend on specific items produced by other machines and can't start their jobs until the previous ones put the items into their stores.</p>
<p>For example:</p>
<p>We have 4 Machines (A, B, C, D), <code>Machine A</code> has a <code>Store A*</code> (B,C,D Stores correspondingly).
<br><code>Machine A</code> can start work only if there's an item in <code>Store A*</code>.
Item in A* consists of three items from B*, C*, D* Stores.</p>
<p>Is there any way to increase count in <code>Store A*</code> only if Stores <code>B*</code>, <code>C*</code>, <code>D*</code> have at least one item?</p>
|
<python><simulation><simpy>
|
2023-05-03 16:37:29
| 1
| 1,125
|
tema
|
76,166,230
| 559,827
|
How can a Python program determine which core it is running on?
|
<p>I need to debug a Python 3 program that uses the <code>multiprocessing</code> module.</p>
<p>I want to keep track of which cores (of a multi-core machine) are getting used and how.</p>
<p><strong>Q:</strong> I am looking for a way for the Python code to determine which core is running it.</p>
<hr />
<p>The closest I have found is to use the following:</p>
<pre><code>multiprocessing.current_process()._identity[0] - 1
</code></pre>
<p>Putting aside the fact that such code appears to be "going behind the API,"<sup>1</sup> as far as I can tell, the code that initializes the <code>_identity</code> attribute makes no reference to the underlying hardware<sup>2</sup>, which I think is unsatisfactory.</p>
<hr />
<p><sup><sup>1</sup> For one thing, I can find no official documentation for the <code>_identity</code> attribute, as one would expect from the leading underscore in the name.</sup><br/><sup><sup>2</sup> More specifically, this code evaluates something like <code>next(_process_counter)</code>, where <code>_process_counter</code> is initially set to the value <code>itertools.count(1)</code>, and uses the the result as the basis for the <code>_identity</code> attribute's value.</sup></p>
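On Linux, a more direct answer is `os.sched_getcpu()`, which asks the kernel which CPU the calling thread is on at that instant; the scheduler may migrate the process at any moment, and the call doesn't exist on every platform, so treat the result as a snapshot. (psutil's `Process().cpu_num()` is a similar option.) A sketch:

```python
import os

# Linux-only: ask the kernel which CPU this thread is on *right now*;
# the scheduler may migrate the process at any moment, so treat the
# answer as a snapshot, not an identity.
if hasattr(os, "sched_getcpu"):
    core = os.sched_getcpu()
    print(f"running on core {core} of {os.cpu_count()}")
```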
|
<python><multiprocessing><python-multiprocessing>
|
2023-05-03 16:35:38
| 1
| 35,691
|
kjo
|
76,166,075
| 2,987,488
|
Infeasible solution to scheduling problem using ortools
|
<p>I'm trying to create a scheduler which assigns a set of shifts to a set of drivers for each day of the week, enforcing a minimum of 1 day off per week. For the following input data, I'm getting an infeasible solution. Any help?</p>
<p>Input data:</p>
<pre><code> dow driver hub
0 Sunday 1 S
1 Sunday 2 S
2 Sunday 3 S
3 Monday 1 S
4 Monday 2 S
5 Monday 3 S
6 Tuesday 2 S
7 Tuesday 3 S
8 Wednesday 1 S
9 Wednesday 3 S
10 Thursday 1 S
11 Thursday 2 S
12 Thursday 3 S
13 Friday 1 S
14 Friday 2 S
15 Saturday 1 S
16 Saturday 2 S
17 Saturday 3 S
</code></pre>
<p>Code</p>
<pre><code>from ortools.sat.python import cp_model
import pandas as pd
def create_shift_schedule(drivers,
shifts,
week_days,
hubs,
min_shift_drivers,
max_shift_drivers,
driver_hubs_relationships):
model = cp_model.CpModel()
# Create shift variables
schedule = {}
for e in drivers:
for d in week_days:
for s in shifts:
for r in hubs:
schedule[(e, d, s, r)] = model.NewBoolVar(f'schedule_{e}_{d}_{s}_{r}')
# Each driver works exactly one shift per day (including "Off" shift) at a specific hub
for e in drivers:
for d in week_days:
model.Add(sum(schedule[(e, d, s, r)] for s in shifts for r in hubs) == 1)
# Respect the minimum and maximum number of drivers per shift per hub per day
for d in week_days:
for s in shifts:
for r in hubs:
min_drivers = min_shift_drivers.get((d, s, r), 0)
max_drivers = max_shift_drivers.get((d, s, r), len(drivers))
model.Add(sum(schedule[(e, d, s, r)] for e in drivers) >= min_drivers)
model.Add(sum(schedule[(e, d, s, r)] for e in drivers) <= max_drivers)
# Ensure each driver has at least one "Off" shift per week and max 3 "off" shifts per week
for e in drivers:
model.Add(sum(schedule[(e, d, "Off", r)] for d in week_days for r in hubs) >= 1)
for e in drivers:
model.Add(sum(schedule[(e, d, "Off", r)] for d in week_days for r in hubs) <= 3)
# Assign each driver to their specific hub
for e in drivers:
for r in hubs:
if r not in driver_hubs_relationships[e]:
for d in week_days:
for s in shifts:
model.Add(schedule[(e, d, s, r)] == 0)
# Minimize the total number of working shifts assigned to drivers (excluding "Off" shift)
# model.Minimize(sum(
# schedule[(e, d, s, r)] for e in drivers for d in week_days for s in shifts if
# s != "Off" for r in hubs))
# Create auxiliary variables
same_shift_aux = {}
for e1 in drivers:
for e2 in drivers:
if e1 != e2:
for d in week_days:
for s in shifts:
if s != "Off":
for r in hubs:
same_shift_aux[(e1, e2, d, s, r)] = model.NewBoolVar(
f'same_shift_aux_{e1}_{e2}_{d}_{s}_{r}')
# Add constraints to link auxiliary variables with schedule variables
for e1 in drivers:
for e2 in drivers:
if e1 != e2:
for d in week_days:
for s in shifts:
if s != "Off":
for r in hubs:
model.AddImplication(schedule[(e1, d, s, r)],
same_shift_aux[(e1, e2, d, s, r)])
model.AddImplication(schedule[(e2, d, s, r)],
same_shift_aux[(e1, e2, d, s, r)])
# Modify the objective function
penalty_for_same_shift = 1 # Adjust this value to control the penalty for assigning the same
# shift to multiple drivers
model.Minimize(
sum(schedule[(e, d, s, r)] for e in drivers for d in week_days for s in shifts if
s != "Off" for r in hubs)
+ penalty_for_same_shift * sum(
same_shift_aux[(e1, e2, d, s, r)] for e1 in drivers for e2 in drivers if
e1 != e2 for d in week_days for s in shifts if s != "Off" for r in hubs))
# Solve the scheduling problem
solver = cp_model.CpSolver()
solver.parameters.max_time_in_seconds = 300.0
solver.parameters.log_search_progress = True
status = solver.Solve(model)
print(f"status code {status}")
if status == cp_model.OPTIMAL:
solution = {}
for e in drivers:
solution[e] = {}
for d in week_days:
for s in shifts:
for r in hubs:
if solver.Value(schedule[(e, d, s, r)]) == 1:
solution[e][d] = (s, r)
return solution
else:
return None
file = pd.DataFrame(input_data)  # I'm loading them from a file
file['driver'] = file['driver'].apply(lambda x: str(x))
drivers = file['driver'].unique().tolist()
num_shifts = 2 # Including "Off" shift
# Create shift assignment variables
shifts = ["Off"] + [f"Shift_{d}" for d in range(1, num_shifts)]
week_days = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday', 'Sunday']
hubs = file['hub'].unique().tolist()
# how many drivers are needed per dow and per hub
hub_counts = \
pd.DataFrame(file.groupby(['dow', 'hub'],
as_index=False)['driver'].count())
fixed_shift_drivers_per_day_hub = {
day: {
hub: driver_count
for hub, driver_count in zip(hub_counts[hub_counts['dow'] == day]['hub'],
hub_counts[hub_counts['dow'] == day]['driver'])
}
for day in week_days
}
min_shift_drivers = {
(day, shift, hub): 1
for day in week_days
for shift in shifts
for hub in hubs
}
max_shift_drivers = {
(day, shift, hub): fixed_shift_drivers_per_day_hub[day][hub] if shift != "Off" else 0
for day in week_days
for shift in shifts
for hub in hubs
}
driver_hubs_relationships = \
pd.DataFrame(file.groupby('driver')['hub'].apply(list)).reset_index()
driver_hubs_relationships = \
{key: value for key, value in zip(driver_hubs_relationships['driver'],
driver_hubs_relationships['hub'])}
solution = create_shift_schedule(drivers, shifts, week_days, hubs, min_shift_drivers,
max_shift_drivers, driver_hubs_relationships)
if solution:
for e in drivers:
print(f"{e}:")
for d in week_days:
print(f" {d}: {solution[e][d]}")
else:
print("No solution found.")
</code></pre>
|
<python><optimization><scheduling><or-tools><cp-sat>
|
2023-05-03 16:14:06
| 1
| 1,272
|
azal
|
76,165,970
| 14,775,478
|
How to keep only final hypertuning trial/params to recreate a model
|
<p>I am using hypertuning with keras. That produces many files (1 sub dir for each trial, oracle.json, tuner.json).</p>
<p>I would like to run this in a docker container without a mounted volume/without writing all trial data to an external file system. But, I would still like to reuse all these trials when continuing/re-running the hypertuning the next time.</p>
<p>Is there a way to run hypertuning "out of the box" and discard almost all files after completion, except for maybe 1-2 files, which can be re-used again later? Which files would that be? Is it enough to keep the <code>oracle.json</code>, and copy it into the hypertuning dir the next time it's running? Or do I also need to copy the last trial folder with its checkpoints? Or multiple ones?</p>
<p>Of course I could "write" the best params to a separate json and create model manually from them, but for convenience reasons it would be nice to use the "out of the box hypertuning" every time, just skipping all the obsolete trials (given we already know the outcome) the next time around.</p>
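For the file-management side, a directory-pruning sketch may help. This assumes the usual keras-tuner on-disk layout (`oracle.json` / `tuner*.json` at the top level, one `trial_*` subdirectory per trial containing `trial.json` plus checkpoint files); the function name and the exact layout are assumptions, not verified against a specific keras-tuner version:

```python
import shutil
from pathlib import Path

def prune_tuner_dir(project_dir: str) -> None:
    """Keep only the tuner bookkeeping files, dropping per-trial checkpoints.

    Assumption: the JSON files (oracle.json, tuner*.json, trial_*/trial.json)
    record the trial results, so a re-run can skip finished trials, while the
    checkpoints are only needed to reload trained weights.
    """
    root = Path(project_dir)
    for trial_dir in root.glob("trial_*"):
        for item in trial_dir.iterdir():
            if item.name != "trial.json":  # drop checkpoints, keep metadata
                if item.is_dir():
                    shutil.rmtree(item)
                else:
                    item.unlink()
```

Running this after tuning keeps the directory small enough to copy out of the container; whether the oracle alone suffices without the per-trial `trial.json` files would need testing against your keras-tuner version.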
|
<python><keras>
|
2023-05-03 16:01:39
| 0
| 1,690
|
KingOtto
|
76,165,885
| 10,517,575
|
Airflow: PostgresOperator loading sql file fails due to jinja2.exceptions.TemplateNotFound
|
<p>I know this is an issue that keeps coming up a lot but I have read many solutions and none of them seems to solve my issue. I'm trying to execute a SQL query using <strong>PostgresOperator</strong> and a .sql file named <strong>delete.sql</strong>. I have a git repository named <strong>external_datasources</strong> which contains several dags that are all accessible through Airflow. My Airflow instance is deployed using Docker and the dags folder is mounted to the container.</p>
<p>I'm currently trying to run the aforementioned dag of file dag_that_runs_delete_query.py that executes query from delete.sql file. But I keep getting the <strong>jinja2.exceptions.TemplateNotFound: delete.sql</strong> error.</p>
<p>The file structure of external_datasources repository is the following. While the full path to the repository is <strong>/home/etl/airflow-docker/dags/external_datasources</strong></p>
<pre><code>.
├── README.md
├── __init__.py
├── conf.py
├── dw
│ ├── __init__.py
│ ├── queries
│ │ ├── delete.sql
│ │ ├── query1.sql
│ │ └── query2.sql
│ └── dag_that_runs_delete_query.py
├── helper_for_dag1.py
├── helper_for_dag2.py
├── dag1.py
├── dag2.py
├── dag3.py
├── requirements.txt
└── utils.py
</code></pre>
<p>My dag looks like this, I'm using the template_searchpath as many have suggested</p>
<pre class="lang-py prettyprint-override"><code>dag = DAG(
'dag_that_runs_delete_query',
default_args=default_args,
description='A DAG that runs a sql query',
schedule='0 4 * * *',
catchup=False,
template_searchpath="/home/etl/airflow-docker/dags/external_datasources/dw/queries"
)
with dag:
delete_before_insert_task = PostgresOperator(
task_id='delete_before_insert',
postgres_conn_id='connection_for_postgres',
sql="delete.sql",
params={
"table_name": "test_table",
"destination_table_schema": "dw"
}
)
delete_before_insert_task
</code></pre>
<p>The SQL file looks like this</p>
<pre class="lang-sql prettyprint-override"><code>DELETE FROM {{ destination_table_schema }}.{{ table_name }}
WHERE created_on >= {{ ts }}
AND created_on < {{ ts }} + INTERVAL '{{ schedule_interval }}'
</code></pre>
<p>Any suggestions?</p>
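Separate from the search-path question, one thing worth checking (flagged as an assumption, not a confirmed diagnosis): in Airflow templates, values passed through <code>params</code> are rendered under the <code>params</code> namespace, so the file should reference <code>{{ params.table_name }}</code>, not bare <code>{{ table_name }}</code>. A minimal, Airflow-free reproduction of that rendering with plain Jinja2:

```python
from jinja2 import Environment, DictLoader

# Minimal sketch of how the template context looks: values passed through
# `params` live under the `params` key, so the template must reference
# `params.table_name`, not bare `table_name`.
env = Environment(loader=DictLoader({
    "delete.sql": "DELETE FROM {{ params.destination_table_schema }}"
                  ".{{ params.table_name }}",
}))
sql = env.get_template("delete.sql").render(
    params={"table_name": "test_table", "destination_table_schema": "dw"})
print(sql)  # DELETE FROM dw.test_table
```

The other common cause of `TemplateNotFound` is that `template_searchpath` must be the path as seen inside the container (where the dags folder is mounted), not the host path.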
|
<python><sql><airflow><jinja2>
|
2023-05-03 15:52:29
| 2
| 372
|
Zisis F
|
76,165,672
| 12,596,824
|
Grouping timestamps and graphing
|
<p>I have a dataframe like so</p>
<pre><code>PersonId TimeStamp
10 2023-03-11 02:22:25
1 2023-03-30 03:02:20
26 2023-01-11 08:02:28
28 2023-02-26 09:25:01
46 2023-01-27 11:49:40
2 2023-04-01 01:32:21
</code></pre>
<p>I want to plot this with the timestamp on the x-axis and counts for every person in a certain range.</p>
<p>I want to group by every week or every month in a function. How can I do this in python?</p>
<p>I have the following code:</p>
<pre><code>series = df.groupby(df.TimeStamp.dt.to_period('W'))[[PersonId]].count().reset_index()
</code></pre>
<p>I get an error when I plot here and don't know why</p>
<pre><code>fig, ax = plt.subplots(figsize = (8,6))
ax.plot(series.TimeStamp, series.PersonId)
# float() argument must be a string or a number, not 'Period'
</code></pre>
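The error comes from matplotlib not knowing how to place `Period` objects on an axis; converting them back to timestamps first is one way around it. A sketch using made-up data matching the frame above (the plotting call itself is left as a comment so the snippet stays self-contained):

```python
import pandas as pd

df = pd.DataFrame({
    "PersonId": [10, 1, 26, 28, 46, 2],
    "TimeStamp": pd.to_datetime([
        "2023-03-11 02:22:25", "2023-03-30 03:02:20", "2023-01-11 08:02:28",
        "2023-02-26 09:25:01", "2023-01-27 11:49:40", "2023-04-01 01:32:21",
    ]),
})

series = (df.groupby(df.TimeStamp.dt.to_period("W"))[["PersonId"]]
            .count().reset_index())
# Period objects cannot be plotted directly; turn them back into timestamps
series["TimeStamp"] = series["TimeStamp"].dt.to_timestamp()
# ax.plot(series.TimeStamp, series.PersonId) would now work
```

Passing `'M'` instead of `'W'` to `to_period` gives the monthly grouping, so the period string can simply be a function parameter.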
|
<python><pandas><matplotlib><time-series>
|
2023-05-03 15:29:52
| 1
| 1,937
|
Eisen
|
76,165,645
| 12,323,468
|
How to put X variable estimates from many statsmodels equations as columns in a dataframe
|
<p>The following code generates many linear regressions (all with a constant term) based on combinations of 6 explanatory variables. The regressions and the various diagnostic statistics are placed into a df:</p>
<pre><code>import pandas as pd
import statsmodels.api as sm
import itertools
from statsmodels.stats.outliers_influence import variance_inflation_factor
import numpy as np
from scipy import stats
pd.options.display.float_format = '{:.3f}'.format
# Separate y variable from the rest of the predictors
X = df_all.loc['2019-10':'2023-02',['rsi41h_11', 'exc171_12', 'lag36a_11' , 'ljh1la_11','imp474_6', 'emp325_11' ]]
y = df_all.loc['2019-10':'2023-02',['apple_sales']]
# Split the data into training (first set of months) and test (next 6 months) sets
y_train, y_test = y[:'2022-09'], y['2022-10':]
X_train, X_test = X[:'2022-09'], X['2022-10':]
# Create all possible combinations of predictors
predictor_combinations = list(itertools.chain.from_iterable(
itertools.combinations(X.columns, i) for i in range(1, len(X.columns)+1)))
# Create an empty list to store the regression results
regression_results = []
# Loop through all predictor combinations and fit a regression
for predictors in predictor_combinations:
X_subset_train = X_train[list(predictors)]
X_subset_train = sm.add_constant(X_subset_train)
model = sm.OLS(y_train, X_subset_train)
results = model.fit()
formula = results.params
variables = ' + '.join(predictors)
durbin_watson = sm.stats.stattools.durbin_watson(results.resid)
skewness = stats.skew(results.resid, axis=0)
kurtosis = stats.kurtosis(results.resid, axis=0, fisher=False)
n = results.resid.shape[0]
jb = n/6 * (skewness**2 + (1/4) * (kurtosis - 3)**2 )
jb_prob = stats.chi2.sf(jb, 2)
vif = 1 / ( 1 - results.rsquared)
aic = results.aic
bic = results.bic
condition_number = np.linalg.cond(X_subset_train)
adj_r_squared = results.rsquared_adj
f_stat_prob = results.f_pvalue
X_subset_test = X_test[list(predictors)]
X_subset_test = sm.add_constant(X_subset_test)
y_pred = results.predict(X_subset_test) # y_pred is a pandas series (not an array and not a df)
rmse = np.sqrt(((y_test.squeeze() - y_pred)**2).mean()) # convert y_test to pandas df to a series using squeeze, otherwise RMSE won't calculate
regression_results.append([formula, variables, durbin_watson, condition_number, adj_r_squared, f_stat_prob, rmse, vif, skewness, kurtosis, jb_prob, aic, bic])
# Create a dataframe from the results and print to a table
results_df = pd.DataFrame(regression_results, columns=['Linear Regression', 'Predictors', 'Durbin-Watson', 'Condition Number', 'Adjusted R-Squared', 'F-Statistic Probability', 'RMSE', 'Variance Inflation', 'Skewness', 'Kurtosis', 'JB-Prob', 'AIC', 'BIC'] )
results_df
</code></pre>
<p>Which gives the following table:</p>
<p><a href="https://i.sstatic.net/4LzDP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4LzDP.png" alt="enter image description here" /></a></p>
<p>The regression results are hard to read. I want to replace the 'Linear Regression' and 'Predictors' columns with seven new columns (one for the constant term and one for each X variable), populated with a parameter estimate only when that variable appears in the model, plus the usual diagnostic stats, so I have this instead:</p>
<p><a href="https://i.sstatic.net/9em3K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9em3K.png" alt="enter image description here" /></a></p>
<p>How would I modify my code above to get this preferred output?</p>
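One way to get that shape (a sketch, not the only option): <code>results.params</code> is a pandas Series indexed by term name (<code>const</code> plus each predictor that was fitted), so each model can be dumped into a row dict and <code>pd.DataFrame</code> will align the columns, leaving NaN where a predictor was not in that model. Simulated below with hand-made Series so the snippet runs without refitting anything; in the real loop you would use <code>row = results.params.to_dict()</code>:

```python
import pandas as pd

# Stand-ins for results.params from two different fitted models
fits = [
    pd.Series({"const": 1.2, "rsi41h_11": 0.5}),
    pd.Series({"const": 0.8, "exc171_12": -0.3, "lag36a_11": 2.1}),
]

rows = []
for params in fits:
    row = params.to_dict()          # one column per term actually in the model
    row["Adjusted R-Squared"] = 0.9  # append the other diagnostics the same way
    rows.append(row)

# pd.DataFrame aligns the union of keys; absent predictors become NaN
results_df = pd.DataFrame(rows)
```

Reordering the columns afterwards (constant first, then the predictors, then the diagnostics) is a single `results_df = results_df[ordered_cols]` call.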
|
<python><pandas><loops>
|
2023-05-03 15:26:04
| 1
| 329
|
jack homareau
|
76,165,503
| 6,326,147
|
AttributeError: 'CSSMediaRule' object has no attribute 'style' in premailer
|
<p>I was trying to transform the following nested media query CSS with premailer.</p>
<pre class="lang-css prettyprint-override"><code><style type="text/css">
@media (prefers-color-scheme: dark) {
.textPrimary {
color: #E2E2E2 !important;
}
@media (max-width: 630px) {
body,
.footerContainer {
background-color: #1E1E1E !important;
}
}
}
</style>
</code></pre>
<hr />
<p>I'm having trouble with the following error:</p>
<pre class="lang-bash prettyprint-override"><code>html = instance.transform(html, pretty_print=False)
File "/usr/local/lib/python3.10/dist-packages/premailer/premailer.py", line 414, in transform
style.text = self._css_rules_to_string(these_leftover)
File "/usr/local/lib/python3.10/dist-packages/premailer/premailer.py", line 683, in _css_rules_to_string
for key in rule.style.keys():
AttributeError: 'CSSMediaRule' object has no attribute 'style'. Did you mean: '_type'?
</code></pre>
<p>My premailer version is 3.10.0 and I've tried with both python 3.8 and 3.10. No luck :(</p>
|
<python><premailer>
|
2023-05-03 15:12:12
| 1
| 1,073
|
Rijoanul Hasan Shanto
|
76,165,382
| 3,761,305
|
Python case insensitive efficiently check if list of strings is contained in another string
|
<p>I want to check if each string in a list is contained in another string <strong>but ignoring the case</strong></p>
<p>For example:</p>
<pre><code>Input: 'Hello World', ['he', 'o w']
Output: [True, True]
Input: 'Hello World', ['he', 'wol']
Output: [True, False]
</code></pre>
<p>I can write something like:</p>
<pre><code>output =[]
for keyword in keywordlist:
if keyword.lower() in string.lower():
output.append(True)
else:
output.append(False)
</code></pre>
<p>But the issues with this are:</p>
<ol>
<li>the time complexity</li>
<li>using lower()</li>
</ol>
<p>I have found this question on stack overflow which is similar <a href="https://stackoverflow.com/questions/3389574/check-if-multiple-strings-exist-in-another-string">Check if multiple strings exist in another string</a></p>
<p>But it doesn’t handle ignoring case.</p>
<p>Is there an efficient way to do this?</p>
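A small sketch of one common approach: fold the haystack once (using `casefold()`, which covers more Unicode cases than `lower()`), then test each keyword against the folded string. This is still O(len(string) × number of keywords) in the worst case, but it avoids re-lowercasing the string once per keyword; for very large keyword lists an Aho-Corasick automaton over the folded strings would scale better.

```python
def contains_each(string, keywords):
    # Fold the haystack once instead of calling lower() per keyword
    folded = string.casefold()
    return [kw.casefold() in folded for kw in keywords]

print(contains_each("Hello World", ["he", "o w"]))  # [True, True]
print(contains_each("Hello World", ["he", "wol"]))  # [True, False]
```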
|
<python><string><list><substring><case-insensitive>
|
2023-05-03 15:01:31
| 1
| 651
|
Ahsan Tarique
|
76,165,277
| 13,606,345
|
NameError during class creation
|
<p>I have the following code, and I wonder how I can prevent this error.</p>
<pre><code>class A:
...: class Numbers(Enum):
...: ONE = 1
...: TWO = 2
...: ODDS = (Numbers.ONE,)
...: EVENS = [number for number in Numbers if not number in ODDS]
</code></pre>
<p>After running this snippet, I get <code>NameError: name 'ODDS' is not defined</code>.</p>
<p>Then I try to use ODDS as <code>A.ODDS</code></p>
<pre class="lang-py prettyprint-override"><code>In [71]: class A:
...: class Numbers(Enum):
...: ONE = 1
...: TWO = 2
...: ODDS = (Numbers.ONE,)
...: EVENS = [number for number in Numbers if not number in A.ODDS]
...:
In [72]: A.EVENS
Out[72]: [<Numbers.ONE: 1>, <Numbers.TWO: 2>]
</code></pre>
<p>However, I only want evens.. So if I try like this, the following happens.</p>
<pre class="lang-py prettyprint-override"><code>In [76]: class A:
...: class Numbers(Enum):
...: ONE = 1
...: TWO = 2
...: ODDS = (Numbers.ONE,)
...: EVENS = [number for number in A.Numbers if not number in A.ODDS]
...:
In [77]: A.EVENS
Out[77]: [<Numbers.TWO: 2>]
</code></pre>
<p>Does anyone know why the last way works and the previous ones do not?</p>
<p>Thanks</p>
<p>EDIT:</p>
<p>I restarted the shell and tried again</p>
<pre class="lang-py prettyprint-override"><code>In [4]: class A:
...: class Numbers(Enum):
...: ONE = 1
...: TWO = 2
...: ODDS = (Numbers.ONE,)
...: EVENS = [number for number in A.Numbers if not number in A.ODDS]
</code></pre>
<p>got <code>NameError: name 'A' is not defined</code></p>
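For context, the underlying issue is that a class body does not create an enclosing scope that comprehensions (which get their own function scope) can see, and the name <code>A</code> itself is only bound after its body finishes executing. One workaround (a sketch, other layouts are possible) is to attach the derived attributes after the class statement:

```python
from enum import Enum

class A:
    class Numbers(Enum):
        ONE = 1
        TWO = 2

# By this point the name A is bound, so both lookups work
A.ODDS = (A.Numbers.ONE,)
A.EVENS = [n for n in A.Numbers if n not in A.ODDS]

print(A.EVENS)  # [<Numbers.TWO: 2>]
```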
|
<python>
|
2023-05-03 14:52:56
| 1
| 323
|
Burakhan Aksoy
|
76,165,275
| 11,348,734
|
Received server error (500) from primary and could not load the entire response body
|
<p>I'm running detectron2 instance segmentation with an AWS endpoint. I used this tutorial [https://github.com/aws-samples/amazon-sagemaker-pytorch-detectron2][1] for Object Detection, adapted it to instance segmentation, and it worked well. However, to draw the masks and object identification I have to do it outside the endpoint; inside the endpoint I only have the model (pth file) and the settings (yml file) in a script. I would like to do everything from the endpoint and just get the final result, that is, the segmented image.</p>
<p>UPDATE</p>
<p>I have <code>my_script.py</code> on an Endpoint:</p>
<pre><code>from sqlalchemy import true
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from detectron2.data import MetadataCatalog
from detectron2.utils.visualizer import Visualizer, ColorMode
import matplotlib.pyplot as plt
import cv2
import numpy as np
import math
from PIL import Image
import scipy.cluster
import sklearn.cluster
import os
import sys
from typing import BinaryIO, Mapping
import json
import logging
from pathlib import Path
from json import JSONEncoder
import torch
##############
# Macros
##############
LOGGER = logging.Logger("InferenceScript", level=logging.INFO)
HANDLER = logging.StreamHandler(sys.stdout)
HANDLER.setFormatter(logging.Formatter("%(levelname)s | %(name)s | %(message)s"))
LOGGER.addHandler(HANDLER)
static_prefix = os.path.join(os.getcwd(), './')
log_file = os.path.join(static_prefix, 'latest')
##########
# Deploy
##########
def _load_from_bytearray(request_body: BinaryIO) -> np.ndarray:
npimg = np.frombuffer(request_body, np.uint8)
return cv2.imdecode(npimg, cv2.IMREAD_COLOR)
class NumpyArrayEncoder(JSONEncoder):
def default(self, obj):
if isinstance(obj, np.ndarray):
return obj.tolist()
return JSONEncoder.default(self, obj)
def model_fn(model_dir: str) -> DefaultPredictor:
path_cfg = "/opt/ml/model/config.yml"
path_model = "/opt/ml/model/model_final"
print('config= ', path_cfg)
print('modelo=',path_model)
os.system("ls")
LOGGER.info(f"Using configuration specified in {path_cfg}")
LOGGER.info(f"Using model saved at {path_model}")
if path_model is None:
err_msg = "Missing model PTH file"
LOGGER.error(err_msg)
raise RuntimeError(err_msg)
if path_cfg is None:
err_msg = "Missing configuration JSON file"
LOGGER.error(err_msg)
raise RuntimeError(err_msg)
cfg = get_cfg()
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.merge_from_file(path_cfg)
cfg.MODEL.WEIGHTS = str(path_model)
cfg.DATALOADER.NUM_WORKERS = 2
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 128
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3
cfg.MODEL.DEVICE = "cpu"
LOGGER.info(f"model_loader ={cfg}")
predictorReturn = DefaultPredictor(cfg)
modelStr = predictorReturn.model
LOGGER.info(f"model_arqtuiteture ={modelStr}")
LOGGER.info(f"model_fn - end")
return predictorReturn
def input_fn(request_body: BinaryIO, request_content_type: str) -> np.ndarray:
LOGGER.info(f"*input_fn - init*")
if request_content_type == "application/x-image":
np_image = _load_from_bytearray(request_body)
else:
err_msg = f"Type [{request_content_type}] not support this type yet"
LOGGER.error(err_msg)
raise ValueError(err_msg)
LOGGER.info(f"*input_fn - end*")
return np_image
def predict_fn(input_object: np.ndarray, predictor: DefaultPredictor) -> Mapping:
LOGGER.info(f"Prediction on image of shape {input_object.shape}")
normalizedImg = np.zeros((1000, 1000))
input_object = cv2.cvtColor(np.ndarray(input_object), cv2.COLOR_RGB2BGR)
input_object = cv2.normalize(input_object,normalizedImg, 0, 255, cv2.NORM_MINMAX)
cfg = get_cfg()
predictor = DefaultPredictor(cfg)
outputs = predictor(input_object)
MetadataCatalog.get(cfg.DATASETS.TRAIN[0]).thing_classes = ["t1","t2","t3"]
v = Visualizer(input_object[:, :, ::-1], metadata=MetadataCatalog.get(cfg.DATASETS.TRAIN[0]), instance_mode=ColorMode.IMAGE_BW)
out = v.draw_instance_predictions(outputs["instances"].to("cpu"))
fmt_out= cv2.cvtColor(out.get_image()[:, :, ::-1], cv2.COLOR_RGBA2RGB)
classes = outputs["instances"].pred_classes.tolist()
print('obj:',len(classes))
print('classes:' ,len(set(classes)))
return fmt_out
def output_fn(predictions, response_content_type):
LOGGER.info(f"*output_fn init - end*")
jsonPredictions =json.dumps(predictions)
LOGGER.info(f"after jsonPredictions dumps ")
return jsonPredictions
</code></pre>
<p>To run this, I tried this code in my jupyter notebook:</p>
<pre><code>import boto3
endpoint_name="my_endpoint"
client = boto3.client('sagemaker-runtime')
content_type = 'application/x-image'
accept_mime_type = 'application/x-image'
with open("my_image.jpg", "rb") as f:
payload = bytearray(f.read())
##############################
response = client.invoke_endpoint(
EndpointName=endpoint_name,
Accept=accept_mime_type,
ContentType=content_type,
Body=payload,
)
# Write segmented output image
with open("my_image.segmented.jpg", "wb") as f:
f.write(response["Body"].read())
</code></pre>
<p>I used the above code as a hunch, I got his code here ([https://extrapolations.dev/model/instance-segmentation-mask-r-cnn/api/#examples][2]). And now I got this error:</p>
<pre><code>ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received server error (500) from primary and could not load the entire response body.
</code></pre>
<p>I think it is a problem with my input format. Any suggestions?</p>
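One detail that stands out (an observation, not a confirmed diagnosis of the 500): `output_fn` calls `json.dumps(predictions)` on a NumPy array without the `NumpyArrayEncoder` defined earlier in the script, and a bare `json.dumps` raises `TypeError` on ndarrays. A minimal standalone check of that behaviour:

```python
import json
from json import JSONEncoder

import numpy as np

class NumpyArrayEncoder(JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        return JSONEncoder.default(self, obj)

image_like = np.zeros((2, 2, 3), dtype=np.uint8)

try:
    json.dumps(image_like)  # what output_fn currently does
except TypeError as exc:
    print("plain dumps fails:", exc)

encoded = json.dumps(image_like, cls=NumpyArrayEncoder)  # succeeds
```

Whether the endpoint should return JSON at all (versus JPEG bytes, given `Accept: application/x-image`) is a separate question about the serving contract.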
|
<python><amazon-web-services><amazon-sagemaker><endpoint><detectron>
|
2023-05-03 14:52:35
| 0
| 897
|
Curious G.
|
76,165,210
| 10,901,843
|
How would you solve the Minimum Non Constructible Change problem with a Dynamic Programming Table?
|
<p>this is the question I'm trying to solve using a dynamic programming table:</p>
<p><strong>We are given an array of positive integers, which represent the values of coins that we have in our possession. The array could have duplicates. We are asked to write a function that returns the minimum amount of change that we cannot create with our coins. For instance, if the input array is [1, 2, 5], the minimum amount of change that we cannot create is 4, since we can create 1, 2, 3 (1 + 2) and 5.</strong></p>
<p>The optimal solution is this:</p>
<pre><code>def nonConstructibleChange(coins):
coins.sort()
minimum_change = 0
for coin in coins:
if coin > minimum_change + 1:
break
minimum_change += coin
return minimum_change + 1
</code></pre>
<p>But I'd like to solve it using a brute force matrix type solution, because I feel like the optimal solution isn't something I would have thought of on my own. I want the rows to represent the coins, and the columns to represent the range of numbers from 1 to sum(coins).</p>
<p>Here's what I have so far, it's not even close to complete as I'm trying to learn dynamic programming:</p>
<pre><code>def nonConstructibleChange(coins):
coins = sorted(coins)
if 1 not in coins:
return 1
if len(coins)==1:
return 2
sum_coins = sum(coins)
array = [[0 for val in range(sum_coins)] for val in range(len(coins))]
array[0][0]=1
for val in range(len(coins)):
coin = coins[val]
for valtwo in range(sum_coins):
if coin==valtwo+1:
array[val][valtwo]=1
else:
if val!=0 and valtwo!=0 and valtwo<=val:
if array[val-1][valtwo-1]!=1 and array[val][valtwo-1]!=1 and array[val-1][valtwo]!=1:
return val-1
else:
array[val][valtwo]=1
</code></pre>
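For the table-based version, one workable framing (a sketch of the brute-force idea, not necessarily the intended textbook layout): a subset-sum reachability table where entry `t` becomes True once some subset of the processed coins sums to `t`; the answer is the first positive amount that stays False.

```python
def non_constructible_change_dp(coins):
    total = sum(coins)
    # reachable[t] is True when some subset of the coins seen so far sums to t
    reachable = [False] * (total + 1)
    reachable[0] = True
    for coin in coins:
        # iterate downwards so each coin is used at most once
        for t in range(total, coin - 1, -1):
            if reachable[t - coin]:
                reachable[t] = True
    for t in range(1, total + 1):
        if not reachable[t]:
            return t
    return total + 1  # every amount up to the total is constructible

print(non_constructible_change_dp([1, 2, 5]))  # 4
```

This runs in O(n · sum) time and memory, versus the O(n log n) sorted greedy above, but it makes the "which amounts are constructible" reasoning explicit; keeping one row per coin (as in the matrix sketch) instead of a single 1-D array recovers the full table at the cost of more memory.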
|
<python><algorithm><dynamic-programming>
|
2023-05-03 14:47:19
| 1
| 407
|
AI92
|
76,165,193
| 6,453,106
|
garbage collection in python threading
|
<p>When implementing a thread that is intended to periodically read from a stream, I cannot manage to make the thread stop correctly. This is only the case when the callback function that I use is implemented as a method of the agent (<code>Worker</code>). See this example (python v3.10.11):</p>
<pre class="lang-py prettyprint-override"><code>import threading
from time import sleep
import weakref
class Consumer(threading.Thread):
"""This class periodically reads from a stream."""
def __init__(self, stream_key, callback):
super().__init__()
self._stream_key: str = stream_key
self._handlers = {callback}
self._running = True
def run(self):
"""Poll the event stream and call each handler with each event item returned."""
counter = 0
while self._running:
for number, handler in enumerate(self._handlers):
handler(number, counter)
print("reading from stream: ", self._stream_key)
counter += 1
sleep(2)
def stop(self):
"""Stop polling the event stream."""
self._running = False
self.join()
def start(self) -> None:
self._running = True
return super().start()
def add_handler(self, callback):
self._handlers.add(callback)
def remove_handler(self, callback):
self._handlers.remove(callback)
class EventHandler:
def __init__(self):
self.consumers = weakref.WeakValueDictionary()
def subscribe(self, stream_key: str, callback):
if stream_key in self.consumers:
self.consumers[stream_key].add_handler(callback)
else:
consumer = Consumer(stream_key=stream_key, callback=callback)
self.consumers[stream_key] = consumer
self.consumers[stream_key].start()
def __del__(self):
for consumer in self.consumers.values():
consumer.stop()
class Worker:
def __init__(self) -> None:
self._eventhandler = EventHandler()
self.registered = False
self._subscriptions = {("test-stream-key", self.handlerfunc)}
def register(self):
self._start_listeners()
self.registered = True
def _start_listeners(self):
for subscription in self._subscriptions:
self._eventhandler.subscribe(*subscription)
def handlerfunc(self, number, counter):
print(f"handler {number} doing things, counting: {counter}")
worker = Worker()
worker.register()
del worker
</code></pre>
<p>it keeps producing output like</p>
<pre><code>reading from stream: test-stream-key
handler 0 doing things, counting: 1
reading from stream: test-stream-key
handler 0 doing things, counting: 2
...
</code></pre>
<p>After the <code>del</code> command I expect the garbage collection to do its magic and thereby stop the agent (incl. the <code>EventHandler</code> that has also a <code>__del__</code> method).</p>
<p>Interestingly, this works fine in case I do not define the <code>handlerfunc</code> as a method of <code>Worker</code> but in the global scope:</p>
<pre class="lang-py prettyprint-override"><code>import threading
from time import sleep
import weakref
class Consumer(threading.Thread):
"""This class periodically reads from a stream."""
def __init__(self, stream_key, callback):
super().__init__()
self._stream_key: str = stream_key
self._handlers = {callback}
self._running = True
def run(self):
"""Poll the event stream and call each handler with each event item returned."""
counter = 0
while self._running:
for number, handler in enumerate(self._handlers):
handler(number, counter)
print("reading from stream: ", self._stream_key)
counter += 1
sleep(2)
def stop(self):
"""Stop polling the event stream."""
self._running = False
self.join()
def start(self) -> None:
self._running = True
return super().start()
def add_handler(self, callback):
self._handlers.add(callback)
def remove_handler(self, callback):
self._handlers.remove(callback)
class EventHandler:
def __init__(self):
self.consumers = weakref.WeakValueDictionary()
def subscribe(self, stream_key: str, callback):
if stream_key in self.consumers:
self.consumers[stream_key].add_handler(callback)
else:
consumer = Consumer(stream_key=stream_key, callback=callback)
self.consumers[stream_key] = consumer
self.consumers[stream_key].start()
def __del__(self):
for consumer in self.consumers.values():
consumer.stop()
class Worker:
def __init__(self) -> None:
self._eventhandler = EventHandler()
self.registered = False
self._subscriptions = {("test-stream-key", handlerfunc)}
def register(self):
self._start_listeners()
self.registered = True
def _start_listeners(self):
for subscription in self._subscriptions:
self._eventhandler.subscribe(*subscription)
def handlerfunc(number, counter):
print(f"handler {number} doing things, counting: {counter}")
worker = Worker()
worker.register()
del worker
</code></pre>
<p>In that case it stops after one message, more or less immediately. This is what I would expect with the class-scoped method as well.</p>
<p>What is happening here? And is it correct to use <code>weakref.WeakValueDictionary()</code>? (obviously not) But is it at least the idea of using <code>weakref</code> correct?</p>
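One observation that may explain the difference (based on reading the code, not on profiling): the non-daemon `Consumer` thread holds a strong reference to the bound method `self.handlerfunc`, and a bound method keeps its instance alive, so `del worker` never drops the last reference and neither `__del__` runs. A module-level function carries no such reference. `weakref.WeakMethod` is one way to hand a callback to a long-lived consumer without pinning the instance:

```python
import weakref

class Worker:
    def handlerfunc(self, number, counter):
        print(f"handler {number} doing things, counting: {counter}")

w = Worker()
ref = weakref.WeakMethod(w.handlerfunc)  # does not keep w alive

alive_before = ref() is not None
del w                                    # last strong reference gone
alive_after = ref() is not None

print(alive_before, alive_after)  # True False (on CPython's refcounting)
```

In the consumer loop the call site becomes `cb = ref()` followed by `cb(...)` only when `cb` is not None; when it comes back None the handler can be discarded and, once no handlers remain, the thread can stop itself.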
|
<python><multithreading><python-multithreading><weak-references>
|
2023-05-03 14:45:41
| 0
| 1,286
|
Pascal
|
76,165,096
| 814,074
|
Pythonic way to connect hashicorp vault using self signed certificate
|
<p>I am trying to connect to HashiCorp Vault using Python (hvac) with a self-signed cert. I wrote code something like the below:</p>
<pre><code>client = hvac.Client(url='https://localhost:8203', cert=('hv.crt','hv.key'),verify=False)
client.is_authenticated()
client.secrets.kv.v2.read_secret(mount_point="secret", path='test')
</code></pre>
<p>However, it fails with error</p>
<pre><code>False
>>> client.secrets.kv.v2.read_secret(mount_point="secret", path='test')
/usr/lib/python3/dist-packages/urllib3/connectionpool.py:999: InsecureRequestWarning: Unverified HTTPS request is being made to host 'localhost'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
warnings.warn(
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/uptycs/.local/lib/python3.8/site-packages/hvac/api/secrets_engines/kv_v2.py", line 98, in read_secret
return self.read_secret_version(
File "/opt/uptycs/.local/lib/python3.8/site-packages/hvac/api/secrets_engines/kv_v2.py", line 153, in read_secret_version
return self._adapter.get(
File "/opt/uptycs/.local/lib/python3.8/site-packages/hvac/adapters.py", line 110, in get
return self.request("get", url, **kwargs)
File "/opt/uptycs/.local/lib/python3.8/site-packages/hvac/adapters.py", line 372, in request
response = super().request(*args, **kwargs)
File "/opt/uptycs/.local/lib/python3.8/site-packages/hvac/adapters.py", line 340, in request
self._raise_for_error(method, url, response)
File "/opt/uptycs/.local/lib/python3.8/site-packages/hvac/adapters.py", line 258, in _raise_for_error
utils.raise_for_error(
File "/opt/uptycs/.local/lib/python3.8/site-packages/hvac/utils.py", line 41, in raise_for_error
raise exceptions.VaultError.from_status(
hvac.exceptions.Forbidden: permission denied, on get https://localhost:8203/v1/secret/data/test
</code></pre>
<p>When I run same code with token it works</p>
<pre><code>client = hvac.Client(url='https://localhost:8203',token='hvs.XXXXXXXX')
client.is_authenticated()
client.secrets.kv.v2.read_secret(mount_point="secret", path='test')
</code></pre>
<p>o/p</p>
<pre><code>True
{'request_id': 'fd211543-225f-58d6-4d87-112bec5698b9', 'lease_id': '', 'renewable': False, 'lease_duration': 2764800, 'data': {'data': {'test': 'a'}}, 'wrap_info': None, 'warnings': None, 'auth': None}
</code></pre>
<p>Even the shell execution is returning the results</p>
<pre><code>curl -s -k --header "X-Vault-Token:$(curl -s -k --request POST --cacert cacert.pem --cert hv.crt --key hv.key https://localhost:8203/v1/auth/cert/login | jq -r .auth.client_token)" --request GET https://localhost:8203/v1/secret/data/test |jq -r .data.data[]
</code></pre>
<p>I went through <a href="https://hvac.readthedocs.io/en/stable/advanced_usage.html#making-use-of-private-ca" rel="nofollow noreferrer">this link</a>, but there is no definition of <code>load_vault_token</code> anywhere on the internet for self-signed certificates; there is one for EC2.</p>
<p>Any suggestion?</p>
|
<python><hashicorp-vault>
|
2023-05-03 14:37:25
| 1
| 3,594
|
Sachin
|
76,165,089
| 2,986,042
|
How to print float variable from Trace32 with python command?
|
<p>I have a simple C code which will update static variable with floating variable. Let's say I have</p>
<pre><code>static float32 totalcount = 60.73f;
</code></pre>
<p>I want to know how to get float values from <code>Lauterbach trace32</code>. I have tried to print the <code>float</code> values using the <code>t32api64.dll</code> and <code>ctypes</code> method below.</p>
<pre><code>error = ctypes.c_int32(0)
result = ctypes.c_float(0)
t32api.T32_Cmd (b"InterCom mycore Var totalcount")
error = t32api.T32_EvalGet(ctypes.byref(result));
if (error == 0):
print("OK");
print (result.value)
else:
print("Nok error")
</code></pre>
<p>But I am getting some different output.</p>
<p><strong>Output:</strong></p>
<pre><code>$ python test.py
OK
8.96831017167883e-44
</code></pre>
<p>After some research, I understood that the <code>t32api.T32_EvalGet()</code> function does not support <code>float</code> values. So I would like to know how to print float values from <code>trace32</code> using Python. Please suggest some method to print float values.</p>
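Worth noting: the printed value 8.96831017167883e-44 is exactly what you get when the small integer 64 is reinterpreted bit-for-bit as an IEEE-754 single, which suggests the API wrote an integer that was then read through a float object. Independent of the Trace32 API details (which I have not verified), reinterpreting a raw 32-bit pattern as a float in Python looks like this:

```python
import struct

def bits_to_float(bits: int) -> float:
    """Reinterpret a raw 32-bit pattern as an IEEE-754 single-precision float."""
    return struct.unpack("<f", struct.pack("<I", bits))[0]

# The value from the question is the bit pattern 64 read as a float:
print(bits_to_float(64))  # 8.96831017167883e-44

# Round trip: pack 60.73 as a float, then recover it from the raw bits
raw = struct.unpack("<I", struct.pack("<f", 60.73))[0]
print(bits_to_float(raw))  # ~60.73
```

So if the API can only hand back an integer, fetching the variable's raw 32-bit value and running it through `bits_to_float` is one hedged workaround.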
|
<python><python-3.x><trace32><lauterbach>
|
2023-05-03 14:36:39
| 1
| 1,300
|
user2986042
|
76,165,060
| 6,243,129
|
How to find a threshold number in Optical Flow in Python OpenCV
|
<p>Using optical flow, I am trying to detect the motion of a coil attached to a motor. When the motor starts, coil runs smoothly but sometimes it starts to vibrate. I need to detect this vibration. I am not sure if optical flow is the correct approach or not but when tested with a stable movement vs vibrations I can see some colors showing during vibrations. Attached are the images:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">Stable (running smoothly)</th>
<th style="text-align: center;">Vibration</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;"><a href="https://i.sstatic.net/uQTFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uQTFd.png" alt="enter image description here" /></a></td>
<td style="text-align: center;"><a href="https://i.sstatic.net/jnkg3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jnkg3.png" alt="enter image description here" /></a></td>
</tr>
</tbody>
</table>
</div>
<p>Doing optical flow on a complete video frame will not work so I have cropped the corners of the coil and you can see a lot of colors showing when it starts to vibrate because when it runs smoothly, there is not much motion visible but when it vibrates, motion is visible and thus it's picked up in the optical flow.</p>
<p>Now I am trying to find out some kind of threshold so that I can print when it starts vibrating, when it's low vibration, and when it's high vibrations.</p>
<p>Using below code:</p>
<pre><code>import cv2
import numpy as np

cap = cv2.VideoCapture("Coil.mp4")
ret, frame1 = cap.read()
frame1 = frame1[284:383, 498:516]
prvs = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
hsv = np.zeros_like(frame1)
hsv[..., 1] = 255

while cap.isOpened():
    ret, frame2 = cap.read()
    if not ret:
        break
    frame2 = frame2[284:383, 498:516]
    next = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prvs, next, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv[..., 0] = ang * 180 / np.pi / 2
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)
    rgb = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
    cv2.imshow('Optical Flow', rgb)
    k = cv2.waitKey(30) & 0xff
    if k == 27:
        break
    prvs = next

cap.release()
cv2.destroyAllWindows()
</code></pre>
<p>When there is a motion, optical flow shows it in form of color so I am guessing there has to be some way to find out threshold. What can I try next?</p>
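A common way to turn the colour map into a number is to threshold a scalar statistic of the magnitude field (the <code>mag</code> array from <code>cartToPolar</code>) per frame, e.g. its mean or a high percentile. The cut-offs below are placeholders: they have to be calibrated by logging the score over known smooth vs. vibrating footage. A sketch with synthetic arrays standing in for real frames:

```python
import numpy as np

def vibration_level(mag, low=0.5, high=2.0):
    """Classify a flow-magnitude field by its mean magnitude.

    `low`/`high` are placeholder thresholds: calibrate them by logging
    the score over footage of the coil running smoothly vs. vibrating."""
    score = float(np.mean(mag))
    if score < low:
        return 'stable', score
    if score < high:
        return 'low vibration', score
    return 'high vibration', score

# synthetic stand-ins for the `mag` array that cv2.cartToPolar returns
still = np.full((99, 18), 0.1)
shaking = np.full((99, 18), 3.0)
print(vibration_level(still)[0], vibration_level(shaking)[0])  # stable high vibration
```

Smoothing the score over a few frames (a running mean) would make the classification less jumpy than deciding per frame.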
|
<python><python-3.x><opencv><opticalflow>
|
2023-05-03 14:33:34
| 1
| 7,576
|
S Andrew
|
76,164,969
| 5,212,614
|
pip install error: The process cannot access the file because it is being used by another process
|
<p>I'm trying to do: <code>pip install pandas_datareader</code></p>
<p>I'm getting this error.</p>
<pre><code>PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\ryans\\AppData\\Local\\Temp\\tmpmyj3mha6'
</code></pre>
<p>I ran the pip install process from the Anaconda Prompt, which I ran both normally and in Administrator mode. I also uninstalled and reinstalled Anaconda this morning. Everything worked totally fine yesterday and I don't know what changed overnight. Maybe there is some kind of virus checker process running, or maybe there is an issue with a firewall. Not sure. How can I troubleshoot this and get things back in business?</p>
|
<python><python-3.x><anaconda>
|
2023-05-03 14:22:22
| 1
| 20,492
|
ASH
|
76,164,863
| 14,208,556
|
use scipy optimize minimize in a class
|
<p>I am cleaning up some code and want to refactor code that is currently unorganized to an individual class. The code optimizes an outflow rate to match the duration outcome with the duration target and looks as follows:</p>
<pre><code>import scipy.optimize as optimize

volume = 100
value = 95
discount_rates = <some dataframe>
duration_target = 1.5
initial_guess = 0

def calculate_duration(discount_rates, outflow, volume, value):
    ...

def fun(outflow):
    duration_outcome = calculate_duration(discount_rates, outflow, volume, value)
    target = abs(duration_outcome - duration_target)
    return target

def solv():
    res = optimize.minimize(fun, initial_guess, method='SLSQP', tol=1e-8)
    return res.x
</code></pre>
<p>This code works, but it uses global variables that are not passed as inputs to the relevant functions (initial_guess in solv(), duration_target in fun(), etc.)</p>
<p>I have written the following class which should accomplish the same. However, the optimization does not change the outflow parameter so the outcome does not converge.</p>
<pre><code>import scipy.optimize as optimize

class Calculate:
    def __init__(self, discount_rates, volume, value, duration, initial_guess):
        self.discount_rates = discount_rates
        self.volume = volume
        self.value = value
        self.duration_target = duration_target
        self.initial_guess = initial_guess

    def calculate_duration(self, outflow):
        ...

    def fun(self, outflow):
        duration_outcome = self.calculate_cashflows(outflow)
        target = abs(duration_outcome - self.duration_target)
        return target

    def solv(self):
        # solution
        res = optimize.minimize(self.fun, self.initial_guess, method='SLSQP', tol=1e-8)
        return res.x
</code></pre>
<p>It is unclear to me what the problem is with my class, and why the outflow <em>does</em> converge in the first option but <em>doesn't</em> in the second. I.e., res.x is not the same for the two approaches. What mistake did I make here?</p>
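As transcribed, the class version has two name bugs that would raise exceptions before the optimiser ever runs, so if it runs at all, the executed code must differ slightly from the posted code: <code>__init__</code> takes <code>duration</code> but reads <code>duration_target</code>, and <code>fun</code> calls a non-existent <code>calculate_cashflows</code>. A minimal corrected sketch, with a stand-in linear duration model in place of the real one:

```python
import numpy as np
import scipy.optimize as optimize

class Calculate:
    def __init__(self, duration_target, initial_guess):
        # the posted __init__ takes `duration` but reads `duration_target`,
        # which would raise NameError before any optimisation runs
        self.duration_target = duration_target
        self.initial_guess = initial_guess

    def calculate_duration(self, outflow):
        # stand-in for the real model built from discount_rates/volume/value
        out = np.ravel(outflow)[0]      # minimize passes a 1-element array
        return 1.0 + 0.5 * out

    def fun(self, outflow):
        # the posted fun calls self.calculate_cashflows, which does not exist;
        # it should call self.calculate_duration
        return (self.calculate_duration(outflow) - self.duration_target) ** 2

    def solv(self):
        res = optimize.minimize(self.fun, self.initial_guess, method='SLSQP', tol=1e-8)
        return res.x

calc = Calculate(duration_target=1.5, initial_guess=0.0)
print(calc.solv()[0])
```

With the stand-in model 1.0 + 0.5·x and a target of 1.5, the optimum is at x = 1, so the class wiring itself is fine once the two names are corrected.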
|
<python><class><scipy-optimize-minimize>
|
2023-05-03 14:11:40
| 1
| 333
|
t.pellegrom
|
76,164,842
| 34,935
|
Can I turn off profiling in a portion of python code when invoking cProfile as a script?
|
<p>I want to profile my code, but exclude startup.</p>
<p>Python docs <a href="https://docs.python.org/3/library/profile.html#module-cProfile" rel="nofollow noreferrer">describe</a> how to run cProfile as a script:</p>
<pre><code>python -m cProfile [-o output_file] [-s sort_order] (-m module | myscript.py)
</code></pre>
<p>They also describe how to turn profiling on and off using the python API:</p>
<pre><code>import cProfile
pr = cProfile.Profile()
pr.enable()
# ... do something ...
pr.disable()
</code></pre>
<p>Will it be effective to run <code>pr.enable()</code> and <code>pr.disable()</code> if I am running cProfile as a module?</p>
<p>Is there an implied "enable" when starting my code that I could disable, or is the <code>cProfile</code> object used by the script method not accessible to me?</p>
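In my experience the two mechanisms don't combine: under <code>python -m cProfile</code>, profiling is enabled for the whole run by a profiler object your script can't reach, and trying to enable a second <code>Profile</code> while another is active raises <code>ValueError</code>. The usual workaround is to skip the <code>-m</code> form and drive the profiler yourself; a minimal sketch:

```python
import cProfile
import io
import pstats

def startup():
    # pretend-expensive setup we want excluded from the profile
    return sum(range(100_000))

def work():
    return sorted(range(50_000), reverse=True)

pr = cProfile.Profile()
startup()            # runs before enable(): not recorded
pr.enable()
work()               # recorded
pr.disable()

buf = io.StringIO()
pstats.Stats(pr, stream=buf).sort_stats('cumulative').print_stats()
report = buf.getvalue()
print('work' in report and 'startup' not in report)
```

Run as a plain <code>python myscript.py</code>, only the code between <code>enable()</code> and <code>disable()</code> shows up in the stats.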
|
<python><profiling><cprofile>
|
2023-05-03 14:09:03
| 1
| 21,683
|
dfrankow
|
76,164,803
| 534,238
|
Is there a python library for generating fake data using data properties
|
<p>I use <a href="https://hypothesis.readthedocs.io/en/latest/quickstart.html" rel="nofollow noreferrer"><em>hypothesis</em></a> for testing.</p>
<p>Now I find myself in need of exactly the same capability, but for generating fake data instead of for testing across many parameters.</p>
<h1>Question</h1>
<ul>
<li>Is there any library that can do this?</li>
<li>Is there any way to use the <em>hypothesis</em> library to generate fake data?</li>
</ul>
<h1>Example</h1>
<p>I have an object like this (the real object is far more complex):</p>
<pre class="lang-py prettyprint-override"><code>my_dict = {
'name': 'Mike',
'job': [
{
'title': 'manager',
'location': 'remote'
}
],
'age': 50
}
</code></pre>
<p>and I want to be able to figure out that I have:</p>
<pre class="lang-py prettyprint-override"><code>my_dict = {
'name': str,
'job': [
{
'title': str,
'location': str
}
],
'age': int
}
</code></pre>
<p>so that I can generate arbitrary data that follows whether it is a collection or a primitive, such that:</p>
<ul>
<li>if it is a collection, then keep all of the keys the same (or the size of the list the same)</li>
<li>if it is a primitive, create fake data that matches the kind</li>
</ul>
<p>This would, for instance, create a new dict for me that is something like:</p>
<pre class="lang-py prettyprint-override"><code>fake_dict = {
'name': 'rsat98OULYR,
'job': [
{
'title': 'qfwpRST(*&',
'location': '.mk, rast0798'
}
],
'age': -46
}
</code></pre>
<p>So I don't need any <em>semantics</em> to be saved (eg, "name" doesn't carry any meaning other than being a string). But I do need to keep all the data types.</p>
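As a baseline without any extra library, the keep-structure / randomise-primitives rule can be written as a short recursion; hypothesis strategies (e.g. <code>st.builds</code> / <code>st.from_type</code>) or faker could supply fancier primitives on top of the same skeleton:

```python
import random
import string

def fake_like(value):
    """Rebuild `value` with the same structure but randomised primitives."""
    if isinstance(value, dict):
        return {k: fake_like(v) for k, v in value.items()}
    if isinstance(value, list):
        return [fake_like(v) for v in value]
    if isinstance(value, bool):            # check before int: bool subclasses int
        return random.choice([True, False])
    if isinstance(value, int):
        return random.randint(-100, 100)
    if isinstance(value, float):
        return random.uniform(-100.0, 100.0)
    if isinstance(value, str):
        return ''.join(random.choices(string.ascii_letters + string.digits, k=10))
    return value                            # None and unrecognised types pass through

my_dict = {'name': 'Mike',
           'job': [{'title': 'manager', 'location': 'remote'}],
           'age': 50}
fake = fake_like(my_dict)
```

Keys and list lengths are preserved; only leaf values change, matching the "collection stays, primitive is faked" rule above.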
|
<python><code-generation>
|
2023-05-03 14:05:13
| 1
| 3,558
|
Mike Williamson
|
76,164,760
| 13,039,962
|
Calculate the cumulative values grouping by 2 columns
|
<p>I have this df called df_normales:</p>
<pre><code> CODE MONTH NORMAL_PP
0 000261 January 111.4
1 000253 January 46.5
2 000375 January 86.5
3 000229 January 203.6
4 152204 January 52.6
... ... ...
6403 000858 December 0.5
6404 000861 December 60
6405 000179 December 5.7
6406 000458 December 240.1
6407 002412 December 236.7
[6408 rows x 3 columns]
</code></pre>
<p>I want to calculate the accumulated values of 8 months by code. In other words, for each code I would like to have the accumulated data from January to August, February to September, March to October, and so on for all (including September to April) of the NORMAL_PP column.</p>
<p>Expected Result:</p>
<pre><code> DATE_START NORMAL_PP DATE_END CODE
0 12 79.5 07 000009
1 01 76.2 08 000009
2 02 50.9 09 000009
3 03 25.7 10 000009
4 04 4.2 11 000009
.. ... ... ... ...
</code></pre>
<p>Values of DATE_START and DATE_END are the months (12 = December, 01 = January, etc).</p>
<p>So I tried this code:</p>
<pre><code>oct_norm = pd.DataFrame()
for code, datost in df_normales.groupby('CODE'):
    new_row = {'DATE': '2019-12-01', 'NORMAL_PP': datost.loc[datost['DATE'] == '2020-12-01', 'NORMAL_PP'].item()}
    datost = pd.concat([pd.DataFrame(new_row, index=[0]), datost]).reset_index(drop=True)
    out_normales = (datost.groupby(pd.to_datetime(datost['DATE']).dt.to_period('M'))['NORMAL_PP'].sum()
                    [::-1].rolling(8).sum()[::-1]
                    .reset_index()
                    .assign(DATE_END=lambda d: d['DATE'].add(7))
                    )
    out_normales['CODE'] = code
    oct_norm = pd.concat([oct_norm, out_normales])
</code></pre>
<p>The issue is that I'm not getting some of the 8-month consecutive values. This is what I'm getting (sample of a specific CODE value):</p>
<pre><code> DATE NORMAL_PP DATE_END CODE
0 2019-12 79.5 2020-07 000009
1 2020-01 76.2 2020-08 000009
2 2020-02 50.9 2020-09 000009
3 2020-03 25.7 2020-10 000009
4 2020-04 4.2 2020-11 000009
5 2020-05 4.7 2020-12 000009
6 2020-06 NaN 2021-01 000009
7 2020-07 NaN 2021-02 000009
8 2020-08 NaN 2021-03 000009
9 2020-09 NaN 2021-04 000009
10 2020-10 NaN 2021-05 000009
11 2020-11 NaN 2021-06 000009
12 2020-12 NaN 2021-07 000009
</code></pre>
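The trailing NaNs appear because windows that start late in the year need months from the following year, which the per-code series doesn't contain. One hedged fix is to duplicate each code's 12-month series before rolling so the window can wrap around December; a toy sketch with one code and a 3-month window standing in for the 8-month one:

```python
import pandas as pd

# one code's monthly normals, Jan..Dec
vals = pd.Series([10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120],
                 index=range(1, 13))

window = 3                      # 8 in the real data
doubled = pd.concat([vals, vals], ignore_index=True)
# trailing rolling sum, shifted so row i holds the sum of months i..i+window-1
acc = doubled.rolling(window).sum().shift(-(window - 1)).iloc[:12]
acc.index = vals.index

print(acc.loc[1])    # 60.0  (Jan + Feb + Mar)
print(acc.loc[12])   # 150.0 (Dec wraps into Jan + Feb)
```

Applied per group via <code>groupby('CODE')</code>, every start month gets a full window and the NaNs disappear; note this assumes reusing the same normals for the wrapped months is what you want.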
|
<python><pandas>
|
2023-05-03 14:01:06
| 0
| 523
|
Javier
|
76,164,757
| 4,699,294
|
Using python package from specific path when different versions are installed in the environment at different paths
|
<p>I have got two different versions of paramiko package in my PYTHONPATH environment coming from two different sets of dependencies installed. I can't change the order in which two different packages installation paths are setup due to various other dependencies and I can't uninstall any one of it either. When imported, by default this packages gets picked from the first installation path (has version 1.7.7.1) but I want to use the one from second package path (has version 2.4.0).</p>
<p>There is an option to force this by changing <code>sys.path</code>, but I was wondering if there is a better option. I stumbled on <a href="https://stackoverflow.com/questions/6445167/force-python-to-use-an-older-version-of-module-than-what-i-have-installed-now">this</a> Stack Overflow question but then ran into the pkg_resources version-conflict issue</p>
<p>pkg_resources.VersionConflict: (paramiko 1.7.7.1 (first_installtion_path), Requirement.parse('paramiko==2.4.0'))</p>
<p>mentioned in the comments section of the accepted answer. The link mentioned there for the fix is broken, so I can't see what the fix could be.</p>
<p>Is there a better option than changing <code>sys.path</code>, given the limitation that I can't change the installed packages?</p>
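Short of repackaging, prepending the desired installation directory to <code>sys.path</code> before the first import is the standard lever, since the import system takes the first match on the path. It won't silence pkg_resources' declared-requirement check, though. A minimal sketch with a throwaway module standing in for the second paramiko installation (<code>mylib</code> is a hypothetical name):

```python
import sys
import tempfile
from pathlib import Path

# throwaway directory standing in for "the second installation path"
site2 = Path(tempfile.mkdtemp())
(site2 / 'mylib.py').write_text("__version__ = '2.4.0'\n")

sys.path.insert(0, str(site2))   # prepended entries win: first match is imported
import mylib

print(mylib.__version__)  # 2.4.0
```

The insert has to happen before the module is imported anywhere in the process; once a module is in <code>sys.modules</code>, later path changes have no effect without an explicit reload.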
|
<python>
|
2023-05-03 14:00:45
| 0
| 862
|
Dinesh Maurya
|
76,164,749
| 769,449
|
Use python, AutoGPT and ChatGPT to extract data from downloaded HTML page
|
<p>Note: If you're downvoting at least share why. I put in a lot of effort to write this question, shared my code and did my own research first, so not sure what else I could add.</p>
<p>I already use Scrapy to crawl websites successfully. I extract specific data from a webpage using CSS selectors. However, it's time-consuming to set up and error-prone.
I want to be able to pass the raw HTML to ChatGPT and ask a question like</p>
<blockquote>
<p>"Give me in a JSON object format the price, array of photos, description, key features, street address, and zipcode of the object"</p>
</blockquote>
<p>Desired output below.
I truncated description, key features and photos for legibility.</p>
<pre><code>{
"price":"$945,000",
"photos":"https://media-cloud.corcoranlabs.com/filters:format(webp)/fit-in/1500x1500/ListingFullAPI/NewTaxi/7625191/mediarouting.vestahub.com/Media/134542874?w=3840&q=75;https://media-cloud.corcoranlabs.com/filters:format(webp)/fit-in/1500x1500/ListingFullAPI/NewTaxi/7625191/mediarouting.vestahub.com/Media/134542875?w=3840&q=75;https://media-cloud.corcoranlabs.com/filters:format(webp)/fit-in/1500x1500/ListingFullAPI/NewTaxi/7625191/mediarouting.vestahub.com/Media/134542876?w=3840&q=75",
"description":"<div>This spacious 2 bedroom 1 bath home easily converts to 3 bedrooms. Featuring a BRIGHT and quiet southern exposure, the expansive great room (with 9ft ceilings) is what sets (...)",
"key features":"Center island;Central air;Dining in living room;Dishwasher",
"street address":"170 West 89th Street, 2D",
"zipcode":"NY 10024",
}
</code></pre>
<p>Right now I run into the max chat length of 4096 characters. So I decided to send the page in chunks. However even with a simple question like "What is the price of this object?" I'd expect the answer to be "$945,000" but I'm just getting a whole bunch of text.
I'm wondering what I'm doing wrong. I heard that AutoGPT offers a new layer of flexibility so was also wondering if that could be a solution here.</p>
<p>My code:</p>
<pre><code>import requests
from bs4 import BeautifulSoup, Comment
import openai
import json

# Set up your OpenAI API key
openai.api_key = "MYKEY"

# Fetch the HTML from the page
url = "https://www.corcoran.com/listing/for-sale/170-west-89th-street-2d-manhattan-ny-10024/22053660/regionId/1"
response = requests.get(url)

# Parse and clean the HTML
soup = BeautifulSoup(response.text, "html.parser")

# Remove unnecessary tags, comments, and scripts
for script in soup(["script", "style"]):
    script.extract()

# for comment in soup.find_all(text=lambda text: isinstance(text, Comment)):
#     comment.extract()

text = soup.get_text(strip=True)

# Divide the cleaned text into chunks of 4096 characters
def chunk_text(text, chunk_size=4096):
    chunks = []
    for i in range(0, len(text), chunk_size):
        chunks.append(text[i:i + chunk_size])
    return chunks

print(text)
text_chunks = chunk_text(text)

# Send text chunks to ChatGPT API and ask for the price
def get_price_from_gpt(text_chunks, question):
    for chunk in text_chunks:
        prompt = f"{question}\n\n{chunk}"
        response = openai.Completion.create(
            engine="text-davinci-002",
            prompt=prompt,
            max_tokens=50,
            n=1,
            stop=None,
            temperature=0.5,
        )
        answer = response.choices[0].text.strip()
        if answer.lower() != "unknown" and len(answer) > 0:
            return answer
    return "Price not found"

question = "What is the price of this object?"
price = get_price_from_gpt(text_chunks, question)
print(price)
</code></pre>
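One incidental issue with fixed-size slicing is that the fact you're after can straddle a chunk boundary, so the model never sees it whole; overlapping consecutive chunks is a cheap guard (the 200-character overlap below is an arbitrary choice). Note too that model limits are counted in tokens, not characters, so 4096 characters is only a rough stand-in:

```python
def chunk_text(text, chunk_size=4096, overlap=200):
    """Split text into chunks that share `overlap` characters with the previous one."""
    step = chunk_size - overlap
    chunks = []
    for i in range(0, len(text), step):
        chunks.append(text[i:i + chunk_size])
        if i + chunk_size >= len(text):
            break
    return chunks
```

Each chunk repeats the tail of the previous one, so a price split across a boundary still appears intact in at least one chunk.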
|
<python><openai-api><chatgpt-api><autogpt>
|
2023-05-03 13:59:44
| 1
| 6,241
|
Adam
|
76,164,593
| 1,189,783
|
Pydantic can't validate the nested model
|
<p>I expect to get the response as a list, e.g.:</p>
<pre><code>{orders: [{'id': 111, 'info': {'dt': '2023-05-11'}}, ...]}
</code></pre>
<p>Schemas:</p>
<pre><code>class OrderInfo(BaseModel):
    dt: date

class Order(BaseModel):
    id: int
    info: OrderInfo

class Orders(BaseModel):
    orders: List[Order]
</code></pre>
<p>Here I iterate over the data to append the list:</p>
<pre><code>@app.get("/", response_model=Orders)
async def get_orders():
    orders = []
    for i in data:
        order = Order.parse_obj(i)
        orders.append(order)
    return Orders(orders=orders)
</code></pre>
<p>Here data is a list of dicts:</p>
<pre><code>[
{'id': 111, 'dt': '2022-01-13', 'quantity': 5},
{'id': 112, 'dt': '2022-01-14', 'quantity': 10}
]
</code></pre>
<p>Looks like pydantic can't resolve the nested model OrderInfo:</p>
<pre><code>pydantic.error_wrappers.ValidationError: 1 validation error for Order
info
field required (type=value_error.missing)
</code></pre>
<p>If I declare the OrderInfo as nullable, then I can get the results, but with nulls:</p>
<pre><code>{"orders": [{"id":111,"info":null}, ...}
</code></pre>
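The error is consistent with the data: each incoming row is flat (<code>{'id': ..., 'dt': ...}</code>) while <code>Order</code> expects an <code>info</code> key holding a nested object, so <code>info</code> is genuinely missing rather than unresolved. One option, sketched below without any pydantic dependency, is to reshape each row before parsing; alternatively a pydantic validator could build <code>OrderInfo</code> from the flat <code>dt</code>:

```python
# flat rows as they come in
data = [
    {'id': 111, 'dt': '2022-01-13', 'quantity': 5},
    {'id': 112, 'dt': '2022-01-14', 'quantity': 10},
]

# lift the flat keys into the shape the Order schema expects
reshaped = [{'id': row['id'], 'info': {'dt': row['dt']}} for row in data]

print(reshaped[0])  # {'id': 111, 'info': {'dt': '2022-01-13'}}
```

Feeding <code>reshaped</code> rows to <code>Order.parse_obj</code> gives the schema the nested structure it was declared with.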
|
<python><pydantic>
|
2023-05-03 13:43:29
| 0
| 533
|
Alex
|
76,164,528
| 1,497,139
|
addEdge with python gremlin GLV - how to get the syntax correct
|
<p>The error:</p>
<pre><code>TypeError: The child traversal of [['addV']] was not spawned anonymously - use the __ class rather than a TraversalSource to construct the child traversal
</code></pre>
<p>appears when trying to run the following code with python GLV</p>
<pre class="lang-py prettyprint-override"><code>a = g.addV()
b = g.addV()
print (type(a))
print (type(b))
g.addE('foo').from_(a).to_(b).next()
</code></pre>
<pre class="lang-py prettyprint-override"><code>#g.addE('knowing').to('b').iterate()
#g.addE("knowing").from_("a").to_("b").iterate()
a = g.addV().next()
b = g.addV().next()
print (a)
print (b)
#g.addE('foo').from(a).to(b).next()
g.V(a).addE('test').from_(V(b)).next()
</code></pre>
<p>works see <a href="https://stackoverflow.com/a/71068278/1497139">https://stackoverflow.com/a/71068278/1497139</a></p>
<p><strong>Why is .from_ still necessary instead of .from?</strong></p>
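To the bolded question: <code>from</code> is a reserved word of the Python grammar itself, so no library can define a method literally named <code>from</code>; <code>g.addE(...).from(...)</code> would be a <code>SyntaxError</code> before gremlinpython ever ran. The library follows the PEP 8 convention of appending a trailing underscore to names that clash with keywords. A quick stdlib check:

```python
import keyword

# `from` (like `in`, `and`, `not`, ...) is reserved by the Python grammar,
# so a method named `from` could never be called; gremlinpython therefore
# exposes `from_` per the PEP 8 trailing-underscore convention
print(keyword.iskeyword('from'))   # True
print(keyword.iskeyword('from_'))  # False
```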
|
<python><gremlin><gremlinpython>
|
2023-05-03 13:37:41
| 1
| 15,707
|
Wolfgang Fahl
|
76,164,441
| 10,083,382
|
Identify duplicates and assign similar index in Pandas DataFrame
|
<p>Suppose that I have a sample data set that can be generated using code below</p>
<pre><code># Sample DataFrame with duplicate rows
data = {'A': [1, 2, 1, 3, 1, 2, 3, 2],
        'B': [4, 5, 4, 6, 4, 5, 6, 5],
        'C': [1, 2, 3, 4, 5, 6, 7, 8]}
df = pd.DataFrame(data)
</code></pre>
<p>In the above dataframe I want to assign duplicate rows the same index. For example, index <code>0</code> would be assigned to rows <code>0</code>, <code>2</code> and <code>4</code>. Similarly, index <code>1</code> would be assigned to rows <code>1</code>, <code>5</code> and <code>7</code>. Duplicates should be identified using only columns <code>A</code> and <code>B</code>.</p>
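<code>groupby(...).ngroup()</code> hands out exactly this kind of shared id per unique <code>(A, B)</code> pair; with <code>sort=False</code> the ids follow order of first appearance:

```python
import pandas as pd

data = {'A': [1, 2, 1, 3, 1, 2, 3, 2],
        'B': [4, 5, 4, 6, 4, 5, 6, 5],
        'C': [1, 2, 3, 4, 5, 6, 7, 8]}
df = pd.DataFrame(data)

# same id for every row sharing the same (A, B) combination
df['dup_id'] = df.groupby(['A', 'B'], sort=False).ngroup()
print(df['dup_id'].tolist())  # [0, 1, 0, 2, 0, 1, 2, 1]
```

If the ids should become the actual index rather than a column, <code>df.set_index('dup_id')</code> finishes the job.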
|
<python><pandas><dataframe><indexing><duplicates>
|
2023-05-03 13:29:14
| 1
| 394
|
Lopez
|
76,164,346
| 6,423,456
|
How do I use a dict to translate ArrayAgg values for an annotation in Django?
|
<p>I'm using Django 4 with Postgres.</p>
<p>Say I have two related models like this:</p>
<pre class="lang-py prettyprint-override"><code>class Company(Model):
    ...

class Address(Model):
    city = CharField(...)
    companies = ManyToManyField(
        Company,
        through=ThroughCompanyAddress,
        related_name="addresses"
    )
    ...

class ThroughCompanyAddress(Model):
    company_id = ForeignKey(Company, ...)
    address_id = ForeignKey(Address, ...)
</code></pre>
<p>I want to annotate the cities that each company is in. Normally, I would do something like this:</p>
<pre class="lang-py prettyprint-override"><code>Company.objects.all().annotate(cities=ArrayAgg("addresses__city"))
</code></pre>
<p>Unfortunately, because of the way my DevOps team has the databases configured, these tables are stored on different databases, and I can't do this. Instead, I need to do 2 queries - first, to get a mapping of address ids to city names, and then to somehow use that mapping to annotate them onto the company models.</p>
<pre class="lang-py prettyprint-override"><code># After the initial query to the first database, I end up with this
# Note: The int dict keys are the address model Primary Keys
cities = {1: "Paris", 2: "Dubai", 3: "Hong Kong"}

# Now how do I use this dict here?
(
    Company.objects.all()
    .annotate(
        city_ids=Subquery(  # Not tested, but I think something like this would work
            ThroughCompanyAddress.objects.filter(company_id=OuterRef("id")).values(
                "address_id"
            )
        )
    )
    .annotate(cities=???)
)
</code></pre>
<p>How do I use the <code>cities</code> dict in my annotation here to translate city_ids to city names?</p>
<p>I probably could do a Case/Where clause, but the <code>cities</code> dict can have dynamic data in it - it's not always going to have 3 items like above, so the clause would need to dynamically use the data in the dict somehow.</p>
<p>Or maybe I can't use a Case/Where, as my value is an array?</p>
|
<python><django>
|
2023-05-03 13:19:49
| 0
| 2,774
|
John
|
76,164,227
| 3,507,584
|
Pandas keep square brackets in to_latex output
|
<p>I have a dataframe that I need to pass to a LaTeX document. The dataframe index are strings with square brackets that I would like to keep.</p>
<p>MWE:</p>
<pre><code>import pandas as pd

data = {
    "calories": [420, 380, 390],
    "duration": [50, 40, 45]
}
df = pd.DataFrame(data=data, index=['[1] Row 1', '[2] Row 2', '[3] Row 3'])
print(df.to_latex(index=True))
</code></pre>
<p>Output of print(to_latex):</p>
<pre><code> calories duration
[1] Row 1 420 50
[2] Row 2 380 40
[3] Row 3 390 45
>>> print(df.to_latex(index=True))
\begin{tabular}{lrr}
\toprule
{} & calories & duration \\
\midrule
[1] Row 1 & 420 & 50 \\
[2] Row 2 & 380 & 40 \\
[3] Row 3 & 390 & 45 \\
\bottomrule
\end{tabular}
</code></pre>
<p>The output of <code>to_latex</code> does not show the square brackets in between <code>{}</code> as <code>{[1]}</code>, so it would yield an error when compiled in LaTeX.
If I add the <code>{}</code> to the python index name string, this will be shown in the code too as <code>\{[1]\}</code> and show in the LaTeX document as <code>{[1]}</code>, but I want the LaTeX document to show <code>[1]</code>.</p>
<p>How could I get <code>[1]</code> in my LaTeX document?</p>
|
<python><pandas><latex>
|
2023-05-03 13:09:11
| 0
| 3,689
|
User981636
|
76,164,198
| 6,195,489
|
Use str.split() to set value of column in dataframe, but only for some rows
|
<p>I have a dataframe like e.g.:</p>
<pre><code>id some_string
1. blah,count=1,blah
2. blah,blah
3 blah,count=4,blah
4. blah,blah
5 blah,count=4,blah
6. blah,count=3,blah
</code></pre>
<p>I would like to use split to set a separate column with the value of count to get:</p>
<pre><code>id some_string count
1 blah,count=1,blah 1
2 blah,blah 0
3 blah,count=4,blah 4
4 blah,blah 0
5 blah,count=4,blah 4
6 blah,count=3,blah 3
</code></pre>
<p>I tried:</p>
<pre><code>df['count'].str.split('[count=|,]',expand=True)[3]
</code></pre>
<p>but it rightly complains that:</p>
<pre><code> Length of values (4) does not match length of index (6)
</code></pre>
<p>Is there an obvious way of doing this short of looping through the dataframe entries?</p>
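<code>str.split</code> chokes here because rows without <code>count=</code> produce a different number of pieces, which is what the length-mismatch error is saying. <code>str.extract</code> with <code>expand=False</code> sidesteps that by returning exactly one value (or NaN) per row:

```python
import pandas as pd

df = pd.DataFrame({'some_string': ['blah,count=1,blah', 'blah,blah', 'blah,count=4,blah',
                                   'blah,blah', 'blah,count=4,blah', 'blah,count=3,blah']})

df['count'] = (df['some_string']
               .str.extract(r'count=(\d+)', expand=False)  # NaN where no match
               .fillna('0')
               .astype(int))
print(df['count'].tolist())  # [1, 0, 4, 0, 4, 3]
```

No loop needed; the missing-count rows fall out as NaN and are filled with 0 in one pass.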
|
<python><pandas><dataframe>
|
2023-05-03 13:06:28
| 2
| 849
|
abinitio
|
76,164,132
| 3,575,623
|
Efficiently attribute group to each ID in a melted DataFrame
|
<p>I have a melted DataFrame that contains measurements for different sampleIDs and experimental conditions:</p>
<pre><code> expcond variable value
0 0 Sample1 0.001620
1 1 Sample1 -0.351960
2 2 Sample1 -0.002644
3 3 Sample1 0.000633
4 4 Sample1 0.011253
... ... ... ... ...
293933 54 Sample99 0.006976
293934 55 Sample99 -0.002270
293935 56 Sample99 -0.498353
293936 57 Sample99 -0.006603
293937 58 Sample99 0.003283
</code></pre>
<p>I also have access to this data in non-melted form if it would be easier to handle it that way, but I doubt it.</p>
<p>Each sample is member of group. I have the groups stored in a separate file, which for the moment I am reading and storing as a dictionary. I would like to add a column "group" to my DataFrame based on this information. For the moment, I am doing it line by line, but that is quite slow given the ~300 000 entries:</p>
<pre><code>final_ref_melt["group"] = ["XXX"] * len(final_ref_melt)
for i in range(len(final_ref_melt)):
    final_ref_melt.loc[i, "group"] = ID_group[final_ref_melt.loc[i, "variable"]]
</code></pre>
<p>The end goal is then to separate the data into one DataFrame per group, then perform statistics calculations on each of them. With my current setup, I would do it like so:</p>
<pre><code>final_ref_groups = {}
for mygroup in group_IDs.keys():
    final_ref_groups[mygroup] = final_ref_melt[final_ref_melt["group"] == mygroup]
</code></pre>
<p>(Yes, I have the group information stored as two different dictionaries. I know.)</p>
<p>How can I do this more efficiently?</p>
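<code>Series.map</code> replaces the whole row loop with one vectorised dict lookup, and iterating <code>groupby</code> then yields the per-group frames directly; a toy sketch with a hypothetical mapping:

```python
import pandas as pd

ID_group = {'Sample1': 'g1', 'Sample2': 'g2', 'Sample99': 'g1'}   # hypothetical mapping
final_ref_melt = pd.DataFrame({
    'expcond': [0, 1, 0, 1],
    'variable': ['Sample1', 'Sample2', 'Sample99', 'Sample1'],
    'value': [0.1, -0.3, 0.5, 0.0],
})

# one vectorised lookup instead of ~300,000 .loc assignments
final_ref_melt['group'] = final_ref_melt['variable'].map(ID_group)

# one pass over the groups instead of one boolean filter per group
final_ref_groups = {g: sub for g, sub in final_ref_melt.groupby('group')}
print(len(final_ref_groups['g1']))  # 3
```

For the statistics step, <code>final_ref_melt.groupby('group')['value'].agg(...)</code> may even remove the need for separate per-group DataFrames.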
|
<python><pandas><dataframe>
|
2023-05-03 12:59:49
| 0
| 507
|
Whitehot
|
76,164,083
| 5,858,752
|
Is `engine` a Python keyword?
|
<p>I tried googling "Python engine keyword" or "python engine" but nothing useful is showing up.</p>
<p>In the codebase I am working with, I see the following:</p>
<pre><code> with engine(database_url).connect() as conn:
</code></pre>
<p>I did not <code>engine</code> imported, and I also did not see a <code>from [package_name] import *</code>.</p>
<p>This is in a section of the codebase that is doing sql queries.</p>
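It isn't a keyword: an identifier like this has to come from a definition or an import somewhere, and a <code>from pkg import *</code> can hide the origin. The stdlib <code>keyword</code> module settles the first part; given the <code>.connect()</code> call, it is plausibly a helper wrapping something like SQLAlchemy's <code>create_engine</code>, though that's a guess:

```python
import keyword

print(keyword.iskeyword('engine'))  # False: a plain identifier, so it must be defined or imported
print(len(keyword.kwlist))          # the actual reserved words live in keyword.kwlist
```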
|
<python>
|
2023-05-03 12:56:03
| 2
| 699
|
h8n2
|
76,163,832
| 7,253,901
|
Pandas-on-spark throwing java.lang.StackOverFlowError
|
<p>I am using pandas-on-spark in combination with regex to remove some abbreviations from a column in a dataframe. In pandas this all works fine, but I have the task to migrate this code to a production workload on our spark cluster, and therefore decided to use pandas-on-spark. However, I am running into a weird error. I'm using the following function to clean up the abbreviations (Somewhat simplified here for readability purposes, in reality abbreviations_dict has 61 abbreviations and patterns is a list with three regex patterns).</p>
<pre><code>import pyspark.pandas as pspd

def resolve_abbreviations(job_list: pspd.Series) -> pspd.Series:
    """
    The job titles contain a lot of abbreviations for common terms.
    We write them out to create a more standardized job title list.
    :param job_list: df.SchoneFunctie during processing steps
    :return: SchoneFunctie where abbreviations are written out in words
    """
    abbreviations_dict = {
        "1e": "eerste",
        "1ste": "eerste",
        "2e": "tweede",
        "2de": "tweede",
        "3e": "derde",
        "3de": "derde",
        "ceo": "chief executive officer",
        "cfo": "chief financial officer",
        "coo": "chief operating officer",
        "cto": "chief technology officer",
        "sr": "senior",
        "tech": "technisch",
        "zw": "zelfstandig werkend"
    }

    # Create a list of abbreviations
    abbreviations_pob = list(abbreviations_dict.keys())

    # For each abbreviation in this list
    for abb in abbreviations_pob:
        # define patterns to look for
        patterns = [fr'((?<=( ))|(?<=(^))|(?<=(\\))|(?<=(\())){abb}((?=( ))|(?=(\\))|(?=($))|(?=(\))))',
                    fr'{abb}\.']
        # actual recoding of abbreviations to written out form
        value_to_replace = abbreviations_dict[abb]
        for patt in patterns:
            job_list = job_list.str.replace(pat=fr'{patt}', repl=f'{value_to_replace} ', regex=True)

    return job_list
</code></pre>
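Each chained <code>str.replace</code> adds another node to Spark's logical plan, and with 61 abbreviations times 3 patterns that is roughly 180 chained operations; deeply nested plans are a classic trigger for <code>StackOverflowError</code> during plan serialization. One mitigation is to collapse all abbreviations into a single alternation so only one pass (or very few) is needed. Sketched below with plain <code>re</code>, using <code>\b</code> as a simplified stand-in for the original lookaround boundaries; on the Spark side, checkpointing the frame mid-pipeline is another common way to keep the plan shallow:

```python
import re

# a slice of the real 61-entry dict
abbreviations_dict = {'sr': 'senior', 'tech': 'technisch', 'zw': 'zelfstandig werkend'}

# one alternation instead of one str.replace per (abbreviation, pattern) pair;
# \b stands in here for the original lookaround boundaries, and \.? also
# covers the `{abb}\.` variant
pattern = re.compile(r'\b(' + '|'.join(map(re.escape, abbreviations_dict)) + r')\b\.?')

def resolve(title: str) -> str:
    # the matched group picks the right expansion out of the dict
    return pattern.sub(lambda m: abbreviations_dict[m.group(1)], title)

print(resolve('sr tech adviseur'))  # senior technisch adviseur
```

One regex pass means one plan node per call site instead of ~180, which should keep the serializer well clear of the stack limit.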
<p>When I then call the function with a pspd Series, and perform an action so the query plan is executed:</p>
<pre><code>df['SchoneFunctie'] = resolve_abbreviations(df['SchoneFunctie'])
print(df.head(100))
</code></pre>
<p>it throws a java.lang.StackOverflowError. The stack trace is too long to paste here in full, so I pasted a subset, since it is repetitive.</p>
<pre><code>23/05/05 09:53:14 WARN TaskSetManager: Lost task 0.0 in stage 4.0 (TID 4) (PC ID executor driver): java.lang.StackOverflowError
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2408)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:508)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:466)
at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:527)
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1185)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2319)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:508)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:466)
at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:527)
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1185)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2319)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:508)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:466)
at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:527)
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1185)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2319)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:508)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:466)
at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:527)
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1185)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2319)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2428)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2352)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2210)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1690)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:508)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:466)
	... [the same cycle of frames (scala.collection.immutable.List$SerializationProxy.readObject → java.io.ObjectInputStream.readObject) repeats many more times] ...
</code></pre>
<p>It goes on like this for quite a while, until I get:</p>
<pre><code>23/05/03 14:19:11 ERROR TaskSetManager: Task 0 in stage 4.0 failed 1 times; aborting job
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2021.3\plugins\python\helpers\pydev\pydevconsole.py", line 364, in runcode
coro = func()
File "<input>", line 194, in <module>
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\pandas\frame.py", line 12255, in __repr__
pdf = cast("DataFrame", self._get_or_create_repr_pandas_cache(max_display_count))
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\pandas\frame.py", line 12246, in _get_or_create_repr_pandas_cache
self, "_repr_pandas_cache", {n: self.head(n + 1)._to_internal_pandas()}
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\pandas\frame.py", line 12241, in _to_internal_pandas
return self._internal.to_pandas_frame
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\pandas\utils.py", line 588, in wrapped_lazy_property
setattr(self, attr_name, fn(self))
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\pandas\internal.py", line 1056, in to_pandas_frame
pdf = sdf.toPandas()
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\sql\pandas\conversion.py", line 205, in toPandas
pdf = pd.DataFrame.from_records(self.collect(), columns=self.columns)
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\sql\dataframe.py", line 817, in collect
sock_info = self._jdf.collectToPython()
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\py4j\java_gateway.py", line 1321, in __call__
return_value = get_return_value(
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\sql\utils.py", line 190, in deco
return f(*a, **kw)
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\py4j\protocol.py", line 326, in get_return_value
raise Py4JJavaError(
py4j.protocol.Py4JJavaError
----------------------------------------
Exception occurred during processing of request from ('127.0.0.1', 54483)
Traceback (most recent call last):
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\socketserver.py", line 316, in _handle_request_noblock
self.process_request(request, client_address)
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\socketserver.py", line 347, in process_request
self.finish_request(request, client_address)
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\socketserver.py", line 360, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\socketserver.py", line 747, in __init__
self.handle()
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\accumulators.py", line 281, in handle
poll(accum_updates)
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\accumulators.py", line 253, in poll
if func():
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\accumulators.py", line 257, in accum_updates
num_updates = read_int(self.rfile)
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\serializers.py", line 593, in read_int
length = stream.read(4)
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\socket.py", line 704, in readinto
return self._sock.recv_into(b)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
----------------------------------------
ERROR:root:Exception while sending command.
Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm 2021.3\plugins\python\helpers\pydev\pydevconsole.py", line 364, in runcode
coro = func()
File "<input>", line 194, in <module>
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\pandas\frame.py", line 12255, in __repr__
pdf = cast("DataFrame", self._get_or_create_repr_pandas_cache(max_display_count))
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\pandas\frame.py", line 12246, in _get_or_create_repr_pandas_cache
self, "_repr_pandas_cache", {n: self.head(n + 1)._to_internal_pandas()}
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\pandas\frame.py", line 12241, in _to_internal_pandas
return self._internal.to_pandas_frame
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\pandas\utils.py", line 588, in wrapped_lazy_property
setattr(self, attr_name, fn(self))
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\pandas\internal.py", line 1056, in to_pandas_frame
pdf = sdf.toPandas()
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\sql\pandas\conversion.py", line 205, in toPandas
pdf = pd.DataFrame.from_records(self.collect(), columns=self.columns)
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\sql\dataframe.py", line 817, in collect
sock_info = self._jdf.collectToPython()
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\py4j\java_gateway.py", line 1321, in __call__
return_value = get_return_value(
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\pyspark\sql\utils.py", line 190, in deco
return f(*a, **kw)
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\py4j\protocol.py", line 326, in get_return_value
raise Py4JJavaError(
py4j.protocol.Py4JJavaError: <unprintable Py4JJavaError object>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\py4j\clientserver.py", line 511, in send_command
answer = smart_decode(self.stream.readline()[:-1])
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\socket.py", line 704, in readinto
return self._sock.recv_into(b)
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\py4j\java_gateway.py", line 1038, in send_command
response = connection.send_command(command)
File "C:\Users\MyUser\.conda\envs\Anaconda3.9\lib\site-packages\py4j\clientserver.py", line 539, in send_command
raise Py4JNetworkError(
py4j.protocol.Py4JNetworkError: Error while sending or receiving
: <exception str() failed>
</code></pre>
<p>Some things I've tried / facts I think could be relevant:</p>
<ul>
<li>For now I am running this locally, on a subset of only 5000 rows of
data, so data size shouldn't be the problem. Perhaps increasing some
kind of default config could still help.</li>
<li>I think this has to do with Spark's lazy evaluation: the DAG grows too large because of the for-loops in the function,
but I have no idea how to solve the problem. As per the
<a href="https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/best_practices.html#use-checkpoint" rel="nofollow noreferrer">pandas-on-Spark best-practices documentation</a> I have tried to
implement checkpointing, but this is not available for a <code>pspd.Series</code>,
and converting my Series into a <code>pspd.DataFrame</code> makes the
<code>.apply(lambda ...)</code> inside the resolve_abbreviations function fail.</li>
</ul>
<p>Any help would be greatly appreciated. Perhaps I am better off avoiding the pandas-on-Spark API and rewriting the code in regular PySpark, since the pandas-on-Spark API apparently isn't mature enough yet to run pandas scripts "as is"? Or perhaps our code design is flawed by nature and there is another, more efficient way to achieve similar results?</p>
|
<python><pandas><apache-spark><pyspark><pyspark-pandas>
|
2023-05-03 12:28:30
| 1
| 2,825
|
Psychotechnopath
|
76,163,761
| 8,510,149
|
Transform a pandas series, new mean and stddev
|
<p>If you have a series, like this:</p>
<pre><code>import pandas as pd

s = pd.Series([1.01, 2, 1.2, 3.1, 4.32, 1.23, 8.21, 4.2, 1.3, 2.3, 3.3,
               5.2, 4.8, 4.2, 5.98, 6.1, 2.9, 4.12, 4.78, 5.56, 5.21])
</code></pre>
<p>What if I want to transform this series to have a mean of 10 and a standard deviation of 2?</p>
<p>How would I do that?</p>
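<p>For context, the standard approach (an assumption on my part, not stated in the question) is to z-score the values and then rescale; with a pandas Series the whole transformation is <code>(s - s.mean()) / s.std() * 2 + 10</code>. The same arithmetic using only the standard library (<code>statistics.stdev</code> is the sample standard deviation, matching pandas' default <code>ddof=1</code>):</p>

```python
import statistics

def rescale(values, new_mean=10.0, new_std=2.0):
    """Z-score each value, then shift/scale to the target mean and std dev."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)  # sample std dev (ddof=1), like pandas' default
    return [(v - mu) / sigma * new_std + new_mean for v in values]

data = [1.01, 2, 1.2, 3.1, 4.32, 1.23, 8.21, 4.2, 1.3, 2.3, 3.3,
        5.2, 4.8, 4.2, 5.98, 6.1, 2.9, 4.12, 4.78, 5.56, 5.21]
scaled = rescale(data)
```

<p>The transform is linear, so the resulting mean and standard deviation are exactly the targets up to floating-point error.</p>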
|
<python>
|
2023-05-03 12:20:50
| 1
| 1,255
|
Henri
|
76,163,412
| 694,360
|
Split a 2D polyline defined through segments and arcs with Python
|
<p>I'm looking for a library (preferably in pure Python) or an algorithm able to split, with an arbitrary line, a 2D polyline made of segments and arcs.</p>
<p>The polyline is a sequence of</p>
<ul>
<li>segments <code>[(x1,y1),(x2,y2)]</code></li>
<li>and arcs <code>[(cx,cy),radius,start_angle,end_angle)]</code>, where <code>end_angle = start_angle + theta</code> and <code>theta</code> is positive if the arc goes counterclockwise and negative otherwise.</li>
</ul>
<p>Angles are in degrees. The splitting line is actually a segment long enough to split the polyline in two.</p>
<p>I'm looking for an <strong>exact split</strong>, that is arcs can NOT be approximated by segments.</p>
<p>Given a polyline, the splitting function should return two <em>lists of polylines</em> (defined as above by segments and arcs), and the polylines in the first list should be the ones lying on the origin's side. I'm talking about <em>lists</em> of polylines because a sufficiently complex initial polyline could be split, on each side, into many separate polylines.</p>
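<p>To illustrate one building block (my own sketch, not from any particular library): splitting against the cutting line reduces to exact line–segment and line–arc intersection tests. The segment case can be solved with the parametric form below; the arc case would substitute the line into the circle equation (a quadratic in the line parameter) and keep only roots whose angle lies within the arc's angular span.</p>

```python
def seg_intersection(p1, p2, p3, p4, eps=1e-12):
    """Intersection point of segments p1-p2 and p3-p4, or None.

    Solves p1 + t*(p2 - p1) == p3 + u*(p4 - p3) for t, u in [0, 1]
    via Cramer's rule on the 2x2 system.
    """
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    dx1, dy1 = x2 - x1, y2 - y1
    dx2, dy2 = x4 - x3, y4 - y3
    denom = dx1 * dy2 - dy1 * dx2          # cross product of the two directions
    if abs(denom) < eps:
        return None                        # parallel (or collinear) segments
    t = ((x3 - x1) * dy2 - (y3 - y1) * dx2) / denom
    u = ((x3 - x1) * dy1 - (y3 - y1) * dx1) / denom
    if -eps <= t <= 1 + eps and -eps <= u <= 1 + eps:
        return (x1 + t * dx1, y1 + t * dy1)
    return None
```

<p>For example, the diagonals of the unit-ish square <code>(0,0)-(2,2)</code> and <code>(0,2)-(2,0)</code> cross at <code>(1, 1)</code>.</p>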
|
<python><computational-geometry>
|
2023-05-03 11:42:53
| 0
| 5,750
|
mmj
|
76,163,192
| 10,311,672
|
elevenlabs-python package vs. google_speech: setting storage path, and limit for characters read-outs
|
<p>I'm using a Python package known as elevenlabs to play voices - here is the link:
<a href="https://github.com/elevenlabs/elevenlabs-python" rel="nofollow noreferrer">https://github.com/elevenlabs/elevenlabs-python</a></p>
<p>This package uses mpv to play its audio files, so a script like this will be played aloud:</p>
<pre><code>from elevenlabs import generate, play

audio = generate(
    text="Hi! My name is Bella, nice to meet you!",
    voice="Bella",
    model="eleven_monolingual_v1"
)
play(audio)
</code></pre>
<p>I have a specific question: how do I find or set the path of the output audio file?</p>
<p>On the other hand, working with google_speech seems to be very easy, despite the less trained voice for reading text, you can easily save the audio to the desired directory: <a href="https://pypi.org/project/google-speech/" rel="nofollow noreferrer">https://pypi.org/project/google-speech/</a></p>
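<p>If <code>generate()</code> returns raw audio bytes (which the snippet above suggests), there is no fixed output path to find: you choose the storage path yourself by writing the bytes to disk. A sketch, where <code>audio</code> is a hypothetical stand-in for the real return value:</p>

```python
import os
import tempfile

def save_audio(audio_bytes: bytes, path: str) -> str:
    """Write raw audio bytes to a caller-chosen path, creating dirs as needed."""
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    with open(path, "wb") as f:
        f.write(audio_bytes)
    return path

# hypothetical stand-in for the bytes that generate(...) would return
audio = b"ID3fake-mp3-bytes"
out = save_audio(audio, os.path.join(tempfile.gettempdir(), "bella.mp3"))
```

<p>After saving, the file can be played with mpv or any audio player from that path.</p>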
|
<python><mpv>
|
2023-05-03 11:17:48
| 1
| 336
|
Waly
|
76,163,173
| 14,535,309
|
Why is Django looking for font in the wrong place and by the wrong name? TTFError at*
|
<p>I'm trying to render my html page to pdf using django with these functions while also using a <strong>Cyrillic</strong> font:</p>
<pre><code>import os
from io import BytesIO

from django.conf import settings
from django.http import HttpResponse
from django.template.loader import get_template
from xhtml2pdf import pisa


def fetch_resources(uri, rel):
    if settings.STATIC_URL and uri.startswith(settings.STATIC_URL):
        path = os.path.join(settings.STATIC_ROOT, uri.replace(settings.STATIC_URL, ""))
    elif settings.MEDIA_URL and uri.startswith(settings.MEDIA_URL):
        path = os.path.join(settings.STATIC_ROOT, uri.replace(settings.MEDIA_URL, ""))
    else:
        path = os.path.join(settings.STATIC_ROOT, uri)
    return path.replace("\\", "/")


def render_pdf(url_template, context={}):
    template = get_template(url_template)
    html = template.render(context)
    result = BytesIO()
    pdf = pisa.CreatePDF(html, result, link_callback=fetch_resources)
    if not pdf.err:
        return HttpResponse(result.getvalue(), content_type="application/pdf")
    return None
</code></pre>
<p>This is the <strong>view</strong>:</p>
<pre><code>class DownloadPDF(View):
    def get(self, request, *args, **kwargs):
        pdf = render_pdf("tmp.html")
        return HttpResponse(pdf, content_type="application/pdf")
</code></pre>
<p>And this is the <strong>template</strong>:</p>
<pre><code>{% block extra_style %}
<style type="text/css">
@font-face { font-family: Calibri; src: url("/static/fonts/Calibri.ttf"); }
body { font-family: 'Calibri', sans-serif;}
</style>
{% endblock %}
{% block content %}
<body>
<p>йоу</p>
</body>
{% endblock %}
</code></pre>
<p>As you can see, I'm using Calibri font for cyrillic letters, however when I'm rendering the page I get the following error:</p>
<pre><code>TTFError at /download-pdf/
Can't open file "C:\Users\user\AppData\Local\Temp\tmp6o9yikqk.ttf"
</code></pre>
<p>It seems like the interpreter is looking for the file in the wrong directory, since my project is on the D drive and my font path is <code>/static/fonts/Calibri.ttf</code>.</p>
<p><strong>settings.py</strong></p>
<pre><code>STATIC_URL = "/static/"
STATICFILES_DIRS = [os.path.join(BASE_DIR, "staticfiles")]
STATIC_ROOT = os.path.join(BASE_DIR, "static")
</code></pre>
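<p>A commonly suggested workaround (a sketch, under the assumption that xhtml2pdf falls back to urllib when the callback does not yield an openable absolute path, as the traceback below suggests) is a <code>link_callback</code> that maps static URIs straight to absolute filesystem paths. <code>STATIC_ROOT</code> here is a placeholder; substitute your project's real <code>settings.STATIC_ROOT</code>:</p>

```python
import posixpath

# placeholder values -- substitute settings.STATIC_URL / settings.STATIC_ROOT;
# on Windows use os.path with the real drive path instead of posixpath
STATIC_URL = "/static/"
STATIC_ROOT = "/srv/project/static"

def link_callback(uri, rel):
    # Map a /static/... URI to an absolute filesystem path so the PDF
    # generator can open the font file directly instead of guessing.
    if uri.startswith(STATIC_URL):
        return posixpath.join(STATIC_ROOT, uri[len(STATIC_URL):])
    return uri

font_path = link_callback("/static/fonts/Calibri.ttf", None)
```

<p>Passing a callback like this as <code>link_callback</code> keeps all resource lookups inside the project's static root.</p>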
<p><strong>Full Traceback:</strong></p>
<pre><code>Environment:
Request Method: GET
Request URL: http://127.0.0.1:8000/download/slug
Django Version: 4.1.7
Python Version: 3.11.0
Installed Applications:
['django.contrib.admin',
'authentication',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sites',
*apps*]
Installed Middleware:
[*basic middleware]
Traceback (most recent call last):
File "D:\*\venv\Lib\site-packages\reportlab\lib\utils.py", line 523, in open_for_read
return open_for_read_by_name(name,mode)
File "D:\*\venv\Lib\site-packages\reportlab\lib\utils.py", line 463, in open_for_read_by_name
return open(name,mode)
During handling of the above exception ([Errno 13] Permission denied: 'C:\\Users\\slavk\\AppData\\Local\\Temp\\tmp6uw9470p.ttf'), another exception occurred:
File "D:\*\venv\Lib\site-packages\reportlab\lib\utils.py", line 530, in open_for_read
return BytesIO((datareader if name[:5].lower()=='data:' else rlUrlRead)(name))
File "D:\*\venv\Lib\site-packages\reportlab\lib\utils.py", line 476, in rlUrlRead
return urlopen(name).read()
File "C:\Users\slavk\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 216, in urlopen
return opener.open(url, data, timeout)
File "C:\Users\slavk\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 519, in open
response = self._open(req, data)
File "C:\Users\slavk\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 541, in _open
return self._call_chain(self.handle_open, 'unknown',
File "C:\Users\slavk\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 496, in _call_chain
result = func(*args)
File "C:\Users\slavk\AppData\Local\Programs\Python\Python311\Lib\urllib\request.py", line 1419, in unknown_open
raise URLError('unknown url type: %s' % type)
During handling of the above exception (<urlopen error unknown url type: c>), another exception occurred:
File "D:\*\venv\Lib\site-packages\reportlab\pdfbase\ttfonts.py", line 151, in TTFOpenFile
f = open_for_read(fn,'rb')
File "D:\*\venv\Lib\site-packages\reportlab\lib\utils.py", line 534, in open_for_read
return open_for_read(name,mode)
File "D:\*\venv\Lib\site-packages\reportlab\lib\utils.py", line 532, in open_for_read
raise IOError('Cannot open resource "%s"' % name)
During handling of the above exception (Cannot open resource "C:\Users\slavk\AppData\Local\Temp\tmp6uw9470p.ttf"), another exception occurred:
File "D:\*\venv\Lib\site-packages\django\core\handlers\exception.py", line 56, in inner
response = get_response(request)
File "D:\*\venv\Lib\site-packages\django\core\handlers\base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "D:\*\venv\Lib\site-packages\django\views\generic\base.py", line 103, in view
return self.dispatch(request, *args, **kwargs)
File "D:\*\venv\Lib\site-packages\django\views\generic\base.py", line 142, in dispatch
return handler(request, *args, **kwargs)
File "D:\*\views.py", line 78, in get
pdf = render_to_pdf("tmp.html", {'object': object})
File "D:\*\utils\render_to_pdf.py", line 25, in render_to_pdf
pdf = pisa.pisaDocument(BytesIO(html.encode("UTF-8")), result)
File "D:\*\venv\Lib\site-packages\xhtml2pdf\document.py", line 116, in pisaDocument
context = pisaStory(src, path, link_callback, debug, default_css, xhtml,
File "D:\*\venv\Lib\site-packages\xhtml2pdf\document.py", line 68, in pisaStory
pisaParser(src, context, default_css, xhtml, encoding, xml_output)
File "D:\*\venv\Lib\site-packages\xhtml2pdf\parser.py", line 793, in pisaParser
context.parseCSS()
File "D:\*\venv\Lib\site-packages\xhtml2pdf\context.py", line 539, in parseCSS
self.css = self.cssParser.parse(self.cssText)
File "D:\*\venv\Lib\site-packages\xhtml2pdf\w3c\cssParser.py", line 443, in parse
src, stylesheet = self._parseStylesheet(src)
File "D:\Work\gofriends\TrojanCRM\venv\Lib\site-packages\xhtml2pdf\w3c\cssParser.py", line 545, in _parseStylesheet
src, atResults = self._parseAtKeyword(src)
File "D:\*\venv\Lib\site-packages\xhtml2pdf\w3c\cssParser.py", line 667, in _parseAtKeyword
src, result = self._parseAtFontFace(src)
File "D:\*\venv\Lib\site-packages\xhtml2pdf\w3c\cssParser.py", line 845, in _parseAtFontFace
result = [self.cssBuilder.atFontFace(properties)]
File "D:\*\venv\Lib\site-packages\xhtml2pdf\context.py", line 176, in atFontFace
self.c.loadFont(names, src,
File "D:\*\venv\Lib\site-packages\xhtml2pdf\context.py", line 926, in loadFont
file = TTFont(fullFontName, filename)
File "D:\*\venv\Lib\site-packages\reportlab\pdfbase\ttfonts.py", line 1178, in __init__
self.face = TTFontFace(filename, validate=validate, subfontIndex=subfontIndex)
File "D:\*\venv\Lib\site-packages\reportlab\pdfbase\ttfonts.py", line 1072, in __init__
TTFontFile.__init__(self, filename, validate=validate, subfontIndex=subfontIndex)
File "D:\*\venv\Lib\site-packages\reportlab\pdfbase\ttfonts.py", line 439, in __init__
TTFontParser.__init__(self, file, validate=validate,subfontIndex=subfontIndex)
File "D:\*\venv\Lib\site-packages\reportlab\pdfbase\ttfonts.py", line 175, in __init__
self.readFile(file)
File "D:\*\venv\Lib\site-packages\reportlab\pdfbase\ttfonts.py", line 251, in readFile
self.filename, f = TTFOpenFile(f)
File "D:\*\venv\Lib\site-packages\reportlab\pdfbase\ttfonts.py", line 161, in TTFOpenFile
raise TTFError('Can\'t open file "%s"' % fn)
Exception Type: TTFError at /download/item
Exception Value: Can't open file "C:\Users\slavk\AppData\Local\Temp\tmp6uw9470p.ttf"
</code></pre>
|
<python><django><pdf><django-staticfiles>
|
2023-05-03 11:15:28
| 0
| 2,202
|
SLDem
|
76,163,104
| 1,856,922
|
How do I make two parallel, asynchronous calls using jQuery/FastAPI so that neither waits on the other?
|
<p>I have a simple chat application written using FastAPI in Python and jQuery. The user enters a question into a form, and a response is returned from the server. However, I also need to send the user message to a separate process that takes a long time (say, querying a database). I don't want the user to have to wait for that separate process to complete, but I have been unable to get that to work. I've tried all sorts of variations of Promise and await, but nothing works. Here is a toy example that demonstrates the problem:</p>
<pre class="lang-html prettyprint-override"><code><!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
<script type="text/javascript">
$(document).ready(function () {
$('#message-form').submit(async function (event) {
event.preventDefault();
const input_message = $('#message-form input[name=message]').val()
$('#message-list').append('<li><strong>' + input_message + '</strong></li>');
side_track(input_message);
const response = await fetch('/submit_message', {
method: 'POST',
body: JSON.stringify({ message: input_message }),
headers: { 'Content-Type': 'application/json' },
});
// Reset the message input field
$('#message-form')[0].reset();
const newMessage = document.createElement('li');
$('#message-list').append(newMessage);
const stream = response.body;
const reader = stream.getReader();
const decoder = new TextDecoder();
const { value, done } = await reader.read();
const message = JSON.parse(decoder.decode(value)).message;
newMessage.innerHTML += message
});
});
async function side_track(question) {
const response = fetch('/side_track', {
method: 'POST',
body: JSON.stringify({ message: question }),
headers: { 'Content-Type': 'application/json' },
});
alert('Getting questions')
}
</script>
</head>
<body>
<ul id="message-list">
<li>Message list.</li>
<!-- Existing messages will be inserted here -->
</ul>
<form id="message-form" method="POST">
<input type="text" name="message" placeholder="Enter your message">
<button type="submit">Submit</button>
</form>
</body>
</html>
</code></pre>
<p>And the corresponding Python:</p>
<pre class="lang-py prettyprint-override"><code># -*- coding: utf-8 -*-
from fastapi import FastAPI, Request
from fastapi.responses import HTMLResponse
from fastapi.templating import Jinja2Templates
from pydantic import BaseModel
from fastapi.staticfiles import StaticFiles
app = FastAPI()
templates = Jinja2Templates(directory="templates")
app.mount("/static", StaticFiles(directory="static"), name="static")
class MessageInput(BaseModel):
message: str
@app.post("/side_track")
async def side_track(message_data: MessageInput):
import time
time.sleep(10)
return {"status": "ok"}
@app.get("/", response_class=HTMLResponse)
async def index(request: Request):
return templates.TemplateResponse("index.html", {"request": request})
async def random_numbers():
import random
import asyncio
while True:
await asyncio.sleep(.1) # Wait for 1 second
yield random.randint(1, 10)
@app.post("/submit_message")
async def submit_message(message_data: MessageInput):
from fastapi.encoders import jsonable_encoder
async for number in random_numbers():
return jsonable_encoder({'message': number})
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="127.0.0.1", port=8000)
</code></pre>
<p>In the example above, the user has to wait the full 10 seconds for <code>side_track</code> to complete before displaying the message returned from <code>submit_message</code>. I want the message from <code>submit_message</code> (which doesn't take any time to process) to display immediately, and the response from <code>side_track</code> to be handled separately whenever it completes, without tying up the program.</p>
<p>EDIT: I modified the toy program to more accurately demonstrate the asynchronous generator that responds to <code>submit_message</code> and make it easier to replicate the problem.</p>
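<p>A side note that may matter when reproducing this: in FastAPI, an <code>async def</code> endpoint runs on the event loop, and a blocking call such as <code>time.sleep(10)</code> inside it stalls every other request on that loop — which is likely part of why the second fetch has to wait. A minimal sketch of the blocking-vs-awaiting difference (an illustration under that assumption, not necessarily the complete fix for the browser side):</p>

```python
import asyncio
import time

async def slow_side_track() -> str:
    # Non-blocking pause: yields control back to the event loop,
    # unlike time.sleep(), which would freeze every pending request.
    await asyncio.sleep(0.2)
    return "side_track done"

async def fast_submit_message() -> str:
    return "message done"

async def main():
    start = time.perf_counter()
    # Schedule the slow coroutine in the background, answer immediately.
    background = asyncio.create_task(slow_side_track())
    fast = await fast_submit_message()   # returns right away
    slow = await background              # completes ~0.2 s later
    return fast, slow, time.perf_counter() - start

fast, slow, elapsed = asyncio.run(main())
print(fast, slow)
```

<p>In the question's code, replacing <code>time.sleep(10)</code> with <code>await asyncio.sleep(10)</code> in <code>side_track</code> is one way to stop it from blocking <code>submit_message</code> — hedged, since the client-side promise handling also matters.</p>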
|
<javascript><python><ajax><fastapi>
|
2023-05-03 11:07:30
| 0
| 4,732
|
Craig
|
76,162,791
| 9,985,032
|
OpenCV getWindowImageRect returning size not matching resolution
|
<p>I'm trying to get the screen position of my window using opencv</p>
<pre><code> cv2.imshow('test', img)
print(cv2.getWindowImageRect('test'))
</code></pre>
<p>When I move my window to the bottom right corner of the screen, it outputs
<code>(1484, 895, 500, 500)</code>, so it looks like it thinks that my screen resolution is about 1500x900 while in reality it's 2240x1400. Why does this happen and how can I make sure that the code returns correct values for every screen resolution?</p>
<p>Edit:
As wohlstad noticed, this is happening because my Windows "display settings -> scale and layout" is set to 150%.</p>
|
<python><windows><opencv><dpi>
|
2023-05-03 10:26:17
| 0
| 596
|
SzymonO
|
76,162,732
| 2,726,900
|
How to convert a JIRA task to a subtask from Atlassian Python API?
|
<p>I'm trying to convert a JIRA issue to a subtask</p>
<p>That's what I'm doing:</p>
<pre><code>converted_to_subtask = jira.update_issue_field(
"FSTORE-326", {
"issuetype": {"name": "Sub-task"}, "parent": {"key": "FSTORE-324"}
})
</code></pre>
<p>Alas, I get an error 400.</p>
<p>What is the correct way to convert a JIRA task to subtask from Atlassian Python API?</p>
|
<python><jira><atlassian-python-api>
|
2023-05-03 10:18:14
| 1
| 3,669
|
Felix
|
76,162,691
| 6,528,055
|
How does the word2vec produce embeddings for the unseen words?
|
<p>I'm using an <strong>unlabeled</strong> news corpus to fine-tune the Word2Vec model. After that I'm using those embeddings to generate embeddings for words present in a new <strong>labeled</strong> dataset. These new embeddings were fed to an RNN as initial weights. I've shared the code for generating the embedding matrix:</p>
<pre><code>embed_dim = embedding_size
words_not_found = []
nb_words = min(MAX_NB_WORDS, len(word_index))
embedding_matrix = np.random.rand(nb_words+1, embed_dim)
for word, i in word_index.items():
if i >= nb_words:
continue
#print(word)
if embeddings_index.wv.__contains__(word):
embedding_vector = embeddings_index.wv[word]
embedding_matrix[i] = embedding_vector
else:
words_not_found.append(word)
</code></pre>
<p>I've seen that the matrix contains embeddings for all of the unique words in the new <strong>labeled</strong> dataset.</p>
<p><strong>Can anyone tell me how the word2vec model produced embeddings for the words that were not present in the initial news corpus?</strong></p>
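<p>Part of the answer may already be visible in the snippet above: the matrix is initialised with <code>np.random.rand</code>, so any word that falls into the <code>else</code> branch simply keeps its random row — Word2Vec itself produces nothing for out-of-vocabulary words. A toy sketch (with a hypothetical vocabulary, standing in for the real model) illustrating that unmatched rows stay random:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
word_index = {"cat": 1, "dog": 2, "unseen": 3}          # hypothetical tokenizer index
trained = {"cat": np.ones(4), "dog": np.full(4, 2.0)}   # stand-in for model.wv

embed_dim = 4
embedding_matrix = rng.random((len(word_index) + 1, embed_dim))
words_not_found = []
for word, i in word_index.items():
    if word in trained:                  # same role as embeddings_index.wv.__contains__
        embedding_matrix[i] = trained[word]
    else:
        words_not_found.append(word)     # row i keeps its random initial values

print(words_not_found)                   # ['unseen']
```

<p>So the "embedding" of an unseen word here is just its untouched random initialisation, not anything produced by the Word2Vec model.</p>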
|
<python><nlp><word2vec><word-embedding>
|
2023-05-03 10:13:42
| 1
| 969
|
Debbie
|
76,162,648
| 12,404,524
|
How to deal with AttributeError in dataclass subclass initialization?
|
<p>I have a dataclass <code>Walk</code> with the attribute <code>vertex_list</code> and a method <code>add</code>. I want to make a subclass <code>Path</code> that inherits from <code>Walk</code> and has a different implementation for the <code>add</code> method.</p>
<p>These are my two classes:</p>
<p><code>Walk.py</code></p>
<pre class="lang-py prettyprint-override"><code>from .Vertex import Vertex
from dataclasses import dataclass, field
@dataclass(frozen=True)
class Walk:
vertex_list: list[Vertex] = field(default_factory=list)
def __init_subclass__(cls, vertex: Vertex = None) -> None:
cls.vertex_list.append(vertex)
def add(self, vertex: Vertex) -> None:
self.vertex_list.append(vertex)
def __repr__(self) -> str:
return '--'.join(self.__vertex_list)
</code></pre>
<p><code>Path.py</code></p>
<pre class="lang-py prettyprint-override"><code>from .Vertex import Vertex
from .Walk import Walk
from dataclasses import dataclass
@dataclass
class Path(Walk):
def add(self, vertex: Vertex) -> None:
if vertex not in self.__vertex_list:
self.__vertex_list.append(vertex)
else:
pass
</code></pre>
<p>Trying to initialize a path with a <code>vertex</code> named <code>src</code> with the line <code>path = Path(src)</code> results in the following error:</p>
<pre><code> cls.vertex_list.append(vertex)
AttributeError: type object 'Path' has no attribute 'vertex_list'
</code></pre>
<p>What am I doing wrong?</p>
<p>I am quite the python noob, so links to references will be appreciated.</p>
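<p>For context on the traceback: <code>__init_subclass__</code> runs at class <em>definition</em> time (when the <code>class Path(Walk)</code> statement is executed), not when <code>Path(src)</code> is instantiated — and at that point a dataclass field declared with <code>default_factory</code> exists only as an annotation, not as a class attribute. A small demonstration of the timing, independent of the code above:</p>

```python
from dataclasses import dataclass, field

calls = []

@dataclass
class Base:
    items: list = field(default_factory=list)

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Fires here, while "class Child(Base)" below is being executed.
        calls.append(cls.__name__)
        # A default_factory field is not a class attribute at this point:
        calls.append(hasattr(cls, "items"))

class Child(Base):
    pass

print(calls)  # ['Child', False] -- and no Child() was ever created
```

<p>That is why <code>cls.vertex_list</code> raises <code>AttributeError</code>: the list only comes into existence per instance, inside the generated <code>__init__</code>.</p>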
|
<python><inheritance><python-dataclasses>
|
2023-05-03 10:08:10
| 0
| 1,006
|
amkhrjee
|
76,162,566
| 15,593,152
|
Extract number of days from an sql query using pandas, and write the result in a new column of the dataframe
|
<p>I have a dataframe where one column (called "sql") is an SQL query string (that has at least two dates inside it). I need to compute the number of days between the two dates, and store it in a new column of the df.</p>
<p>Example "sql" column <code>df['sql'].iloc[0]</code> of my dataframe:</p>
<pre><code>'WITH table1 AS (\n SELECT * FROM table WHERE date BETWEEN \'2023-01-01\' AND \'2023-01-04\' ...'
</code></pre>
<p>I can extract the dates from the first line of the sql query using:</p>
<pre><code>sqlquery = df.sql.iloc[0]
ldates = re.findall('\d{4}-\d{2}-\d{2}', sqlquery)
</code></pre>
<p>And I can compute the number of days using something like</p>
<pre><code>num_days = (pd.to_datetime(ldates[1])-pd.to_datetime(ldates[0])).days
</code></pre>
<p>However, this only concerns the first line of the df (hence the <code>.iloc[0]</code>). My problem is that I don't know how to process this information for all lines, and write the result into a new column of the df. I tried something like:</p>
<pre><code>df['num_days'] = (
pd.to_datetime(re.findall('\d{4}-\d{2}-\d{2}', df.sql)[1]
) -
pd.to_datetime(re.findall('\d{4}-\d{2}-\d{2}', df.sql)[0]
)
).days
</code></pre>
<p>But the re.findall from the "df.sql" part throws an error:</p>
<pre><code>TypeError: expected string or bytes-like object
</code></pre>
<p>Any ideas on how I can achieve that? Maybe using a for loop over every row of the df?</p>
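<p>For reference, the per-row idea can be sketched without an explicit loop, using <code>.str.findall</code> (which works element-wise on the whole column) plus <code>apply</code> — a sketch on a toy frame standing in for the real one:</p>

```python
import pandas as pd

# Toy frame standing in for the real one.
df = pd.DataFrame({
    "sql": [
        "SELECT * FROM t WHERE date BETWEEN '2023-01-01' AND '2023-01-04'",
        "SELECT * FROM t WHERE date BETWEEN '2023-02-01' AND '2023-02-11'",
    ]
})

# .str.findall applies the regex to every row, returning a list per row.
dates = df["sql"].str.findall(r"\d{4}-\d{2}-\d{2}")
df["num_days"] = dates.apply(
    lambda d: (pd.to_datetime(d[1]) - pd.to_datetime(d[0])).days
)
print(df["num_days"].tolist())  # [3, 10]
```

<p>The TypeError above comes from passing the whole Series to <code>re.findall</code>, which expects a single string; the <code>.str</code> accessor is the vectorised equivalent.</p>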
|
<python><pandas><dataframe>
|
2023-05-03 09:58:20
| 1
| 397
|
ElTitoFranki
|
76,162,459
| 6,583,606
|
Jupyter notebook ImportError: attempted relative import with no known parent package
|
<h1>Summary</h1>
<p>I've a repository with the following structure:</p>
<pre><code>phd-notebooks
├── notebooks
| ├── resources
| | └── utils.py
| ├── notebooks-group-1
| | ├── notebook1.ipynb
| | └── notebook2.ipynb
| └── notebooks-group-2
| ├── notebook1.ipynb
| └── notebook2.ipynb
├── .gitignore
├── LICENSE
└── README.md
</code></pre>
<p>In my notebooks, say in <code>notebooks-group-1\notebook1.ipynb</code> for example, I want to import the module <code>utils.py</code>.</p>
<p>If I try <code>from resources import utils</code> I get <code>ModuleNotFoundError: No module named 'resources'</code>, if I try <code>from ..resources import utils</code> I get <code>ImportError: attempted relative import with no known parent package</code>.</p>
<p>I've tried to debug <code>notebooks-group-1\notebook1.ipynb</code> by printing <code>os.getcwd()</code>, and I get <code>notebooks-group-1</code> as the current working directory.</p>
<p>I am using Visual Studio Code and a conda environment with python 3.9.13.</p>
<h1>What I've tried</h1>
<p>I've tried adding an <code>__init__.py</code> file in all folders (<code>notebooks</code>, <code>notebooks\resources</code>, <code>notebooks\notebooks-group-1</code>, <code>notebooks\notebooks-group-2</code>) but I get the same errors. I don't want to use the <code>sys.path.append</code> hack or make my entry point at the top level, as suggested <a href="https://stackoverflow.com/a/64691256/6583606">here</a>, because when I make the repository available I want running my notebooks to be as simple as possible for other people.</p>
<p>I'm aware of <a href="https://stackoverflow.com/questions/16981921/relative-imports-in-python-3">this frequently cited question</a>, but I could not find any solution that would suit my case. Running my script using <code>-m</code> is not possible because I'm using notebooks, and even if it were, it would go against the requirement of making running my notebooks as simple as possible. Setting <code>__package__</code> manually is not well-suited for use in real-world code, similarly to the <code>sys.path.append</code> hack. I'm doubtful about using <code>setuptools</code> because <code>resources</code> is supposed to be a folder with a collection of utility modules for the notebooks, not a real package (it feels like an overcomplication).</p>
<p>How can I import the module <code>utils.py</code> in my notebooks in a pythonic way? Or it's the structure of my repository that is bad?</p>
|
<python><import><jupyter-notebook><jupyter><importerror>
|
2023-05-03 09:46:49
| 2
| 319
|
fma
|
76,162,435
| 14,494,483
|
Why use .execute in Gmail API discovery in python
|
<p>I have a query regarding accessing the Gmail API using Python. Below, I have the discovery build <code>service</code>; then, using the <a href="https://developers.google.com/gmail/api/reference/rest" rel="nofollow noreferrer">gmail api documentation</a>, we can access different REST resources, for example accessing labels and listing all the labels as below. But in the script there is <code>.execute()</code> after <code>list</code>, and I'm wondering which documentation tells me that I need to include <code>.execute()</code> to get this working?</p>
<pre><code>service = build('gmail', 'v1', credentials=creds)
results = service.users().labels().list(userId='youremail').execute()
</code></pre>
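<p>As far as I can tell, the reason lives in the google-api-python-client library docs rather than the Gmail REST reference: the client is a request-builder, so <code>service.users().labels().list(...)</code> only <em>constructs</em> an HTTP request object, and <code>.execute()</code> is what actually sends it. A toy sketch of that deferred-execution pattern, with made-up class names (not the real library's internals):</p>

```python
# Hypothetical mini version of the builder pattern used by googleapiclient:
# list() returns a request object; nothing happens until execute().
class FakeRequest:
    def __init__(self, url: str):
        self.url = url

    def execute(self) -> dict:
        # The real library performs the HTTP call here.
        return {"labels": ["INBOX", "SENT"], "url": self.url}

class FakeLabels:
    def list(self, userId: str) -> FakeRequest:
        return FakeRequest(f"/gmail/v1/users/{userId}/labels")

request = FakeLabels().list(userId="me")
print(type(request).__name__)        # FakeRequest -- no network call yet
results = request.execute()          # only now is the request "sent"
print(results["labels"])             # ['INBOX', 'SENT']
```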
|
<python><gmail-api>
|
2023-05-03 09:42:46
| 1
| 474
|
Subaru Spirit
|
76,162,427
| 1,423,259
|
Argument of type "dict[str, str]" cannot be assigned to parameter "data" of type "Type[empty]"
|
<p>I have a very small serializer for the initial user registration:</p>
<pre class="lang-py prettyprint-override"><code>class UserRegistrationSerializer(serializers.ModelSerializer):
class Meta:
model = User
fields = [
"email",
"password",
]
def validate_password(self, value):
try:
validate_password(password=value)
except ValidationError as e:
raise serializers.ValidationError(list(e.messages))
return value
</code></pre>
<p>Now I am writing tests for this serializer. These are working fine, but my IDE (vscode) shows me an typing error:</p>
<pre class="lang-py prettyprint-override"><code>class UserRegistrationSerializerTestCase(TestCase):
def test_invalid_email(self):
request_data = {"email": "test@test", "password": "testpwsecure"}
serializer = UserRegistrationSerializer(data=request_data)
self.assertFalse(serializer.is_valid())
</code></pre>
<p>The error refers to my input to the <code>UserRegistrationSerializer</code>, saying that <code>Argument of type "dict[str, str]" cannot be assigned to parameter "data" of type "Type[empty]"</code></p>
<p>I am not sure what <code>Type[empty]</code> actually means and how I can satisfy the static code analysis here. The test works fine and as expected; I would just like to understand and learn what the IDE is complaining about.</p>
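<p>For intuition, <code>empty</code> here is DRF's sentinel class used as the default for <code>data</code> (so that "no data passed" can be distinguished from an explicit <code>data=None</code>), and the stub the IDE sees types the parameter as <code>Type[empty]</code> rather than the dict the serializer actually accepts. A toy sketch of the sentinel-default pattern (class and method names are hypothetical, not DRF's actual code):</p>

```python
class empty:
    """Sentinel class: distinguishes 'not passed' from an explicit None."""

class ToySerializer:
    def __init__(self, data=empty):
        self.initial_data = data

    def is_valid(self) -> bool:
        # 'data is empty' means the caller never supplied anything.
        return self.initial_data is not empty and self.initial_data is not None

print(ToySerializer().is_valid())                         # False
print(ToySerializer(data=None).is_valid())                # False
print(ToySerializer(data={"email": "a@b.c"}).is_valid())  # True
```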
|
<python><django><django-rest-framework>
|
2023-05-03 09:41:47
| 1
| 2,526
|
tommueller
|
76,162,375
| 2,107,667
|
`No module named...` while deploying an AWS Lambda using Serverless Framework, poetry, python3.10 and fastapi
|
<h2>EDIT</h2>
<p>It looks like the problem comes from the Serverless plugin, <code>serverless-python-requirements</code>.</p>
<p>When packaging with <code>$ sls package</code> with <code>python = "^3.9"</code>, the dependencies are in the zip; with <code>python = "^3.10"</code> they are not. Everything else is the same.</p>
<p><strong>end of edit</strong></p>
<hr />
<p>I want to deploy an AWS Lambda using <em>Serverless Framework</em>, <em>poetry</em>, <em>python3.10</em> and <em>fastapi</em>.</p>
<p>I did the same thing using <em>python3.9</em> and it worked. It must be something with my local config, can you help, please?</p>
<p>The answer I get when calling the endpoint <code>https://***********.execute-api.eu-west-3.amazonaws.com/development/api/health-check/</code> is <code>{"message": "Internal server error"}</code>.</p>
<p>The lambda's log says:</p>
<pre class="lang-bash prettyprint-override"><code>[ERROR] Runtime.ImportModuleError: Unable to import module 'main': No module named 'fastapi'
Traceback (most recent call last):
</code></pre>
<pre class="lang-py prettyprint-override"><code># main.py
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from mangum import Mangum
app = FastAPI()
app.add_middleware(CORSMiddleware, allow_origins="*", allow_credentials=True, allow_methods=["*"], allow_headers=["*"])
@app.get("/api/health-check/")
def health_check():
return {"message": "OK"}
handle = Mangum(app)
</code></pre>
<p>Here is what I do:</p>
<pre class="lang-bash prettyprint-override"><code>$ poetry init
.....
Compatible Python versions [^3.9]: 3.10
Would you like to define your main dependencies interactively? (yes/no) [yes] no
Would you like to define your development dependencies interactively? (yes/no) [yes] no
Generated file
.....
[tool.poetry.dependencies]
python = "3.10"
</code></pre>
<pre class="lang-bash prettyprint-override"><code>$ poetry add --dev pytest pytest-cov black isort flake8 bandit
The currently activated Python version 3.9.9 is not supported by the project (3.10).
Trying to find and use a compatible version.
Poetry was unable to find a compatible version. If you have one, you can explicitly use it via the "env use" command.
</code></pre>
<p>On my machine I have 3 versions of python:
<a href="https://i.sstatic.net/pdmbS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pdmbS.png" alt="enter image description here" /></a></p>
<pre class="lang-bash prettyprint-override"><code>$ python3 --version
Python 3.9.9
$ python3.10 --version
Python 3.10.11
</code></pre>
<p>Editing the python version in <code>pyproject.toml</code> seems to unlock the situation:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry.dependencies]
python = "~3.10"
</code></pre>
<pre class="lang-bash prettyprint-override"><code>$ poetry add --dev pytest pytest-cov black isort flake8 bandit
The currently activated Python version 3.9.9 is not supported by the project (~3.10).
Trying to find and use a compatible version.
Using python3.10 (3.10.11)
Creating virtualenv test-sls-deploy-md9kc90P-py3.10 in /Users/costin/Library/Caches/pypoetry/virtualenvs
.....
$ poetry add fastapi uvicorn httpx
The currently activated Python version 3.9.9 is not supported by the project (~3.10).
Trying to find and use a compatible version.
Using python3.10 (3.10.11)
Using version ^0.95.1 for fastapi
.....
</code></pre>
<p>Then I deploy with</p>
<pre class="lang-bash prettyprint-override"><code>$ sls deploy --stage development --verbose
</code></pre>
<p>I get a warning that <code>python3.10</code> is not in the list of expected runtime environments, but it deploys correctly.</p>
<p>Here is the <code>serverless.yml</code> file:</p>
<pre class="lang-yaml prettyprint-override"><code>service: test-sls-deploy-api
frameworkVersion: '3'
useDotenv: true
provider:
name: aws
runtime: python3.10
region: 'eu-west-3'
stage: 'development'
logRetentionInDays: 30
functions:
TEST-DEPLOY:
handler: main.handle
memorySize: 512
events:
- http:
path: /{proxy+}
method: any
cors:
origin: ${env:ALLOWED_ORIGINS}
maxAge: 60
custom:
pythonRequirements:
usePoetry: true
noDeploy:
- boto3 # already on Lambda
- botocore # already on Lambda
plugins:
- serverless-python-requirements
</code></pre>
<p>I was expecting the plugin <code>serverless-python-requirements</code> to handle the deployment of <code>fastapi</code> as it does with <code>python3.9</code>, but with <code>python3.10</code> things do not happen as expected.</p>
<p>Do you see what could go wrong?</p>
|
<python><amazon-web-services><aws-lambda><serverless-framework><python-poetry>
|
2023-05-03 09:34:17
| 1
| 3,039
|
Costin
|
76,162,262
| 5,383,733
|
Use Pandas UDF to calculate Cosine Similarity of two vectors in PySpark
|
<p>I want to calculate the cosine similarity of 2 vectors using Pandas UDF. I implemented it with Spark UDF, which works fine with the following script.</p>
<pre><code>import numpy as np
from pyspark.sql.functions import udf
from pyspark.sql.types import FloatType
# Create dataframe
df = spark.createDataFrame([("A", [1, 2, 3], [3, 4, 5]), ("B", [5, 6, 7], [7, 8, 9] )], ("name", "vec1", "vec2"))
# Cosime Similarity function
def cosine_similarity(vec1, vec2):
return float(np.dot(vec1, vec2) / (np.linalg.norm(vec1) * np.linalg.norm(vec2)))
# Spark UDF
cosine_similarity_udf = udf(cosine_similarity, FloatType())
</code></pre>
<p>When I wrap it with Pandas UDF, as follows, it gives me a TypeError saying <code>TypeError: only size-1 arrays can be converted to Python scalars</code></p>
<pre><code>import pandas as pd
from pyspark.sql.functions import pandas_udf, PandasUDFType
@pandas_udf(returnType=FloatType())
def cosine_similarity_udf(vec1: pd.Series, vec2: pd.Series) -> pd.Series:
return pd.Series(cosine_similarity(vec1, vec2))
</code></pre>
<p>What should be the correct way to get this desired output using Pandas UDF?
<a href="https://i.sstatic.net/eS5xN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eS5xN.png" alt="enter image description here" /></a></p>
|
<python><pandas><apache-spark><pyspark><pandas-udf>
|
2023-05-03 09:20:21
| 1
| 876
|
Haritha Thilakarathne
|
76,162,244
| 5,773,890
|
Linear interpolation along rows - single value per row - using pandas/scipy
|
<p>Suppose I have the following pandas DataFrame</p>
<pre><code>>>> df = pd.DataFrame([[-10, -5, 0, 10], [17, 10, 16, 20], [40, 30, 10, -6]], columns=[0, 10, 20, 30])
>>> df.values
array([[-10, -5, 0, 10],
[ 17, 10, 16, 20],
[ 40, 30, 10, -6]])
>>> df.columns
Int64Index([0, 10, 20, 30], dtype='int64')
</code></pre>
<p>where the column names are the array of x that I want to interpolate on, and the values in the table are the y values. I want to perform a linear interpolation on each row with a single new_x in each row, i.e. where</p>
<pre><code>new_x = [5, 15, 25]
</code></pre>
<p>I want to interpolate the value <code>5</code> using the column names as the x values and the first row as the y values, and so on down the rows with expected results <code>[-7.5, 13., 2.]</code></p>
<p>I've tried the below</p>
<pre><code>from scipy import interpolate
interpolate.interp1d(x=df.columns,y=df)(new_x)
</code></pre>
<p>but that gives</p>
<pre><code>array([[-7.5, -2.5, 5. ],
[13.5, 13. , 18. ],
[35. , 20. , 2. ]])
</code></pre>
<p>which is interpolating every element of new_x in each row. Instead I only want the diagonal from that result.</p>
<p>I can do it by applying an interpolate function over every row, but is there a more natural/faster single-line way that I'm missing?</p>
<p><a href="https://stackoverflow.com/questions/47594932/row-wise-interpolation-in-dataframe-using-interp1d">This question</a> asks something similar, but is only interpolating between 2 points.</p>
|
<python><pandas><scipy><interpolation>
|
2023-05-03 09:17:35
| 1
| 415
|
Mark.R
|
76,161,924
| 15,070,331
|
Folium Map not displaying in PDF output
|
<p>I am trying to generate a PDF file from a Folium map using the weasyprint package in Python. I have the following code:</p>
<pre><code>import folium
from weasyprint import HTML
def generate_map_html(lat: float, lon: float):
"""
Generate a map HTML
"""
# Generate map
m = folium.Map(location=[lat, lon], zoom_start=15)
# Add marker
folium.Marker([lat, lon], popup='Property Location').add_to(m)
# Save map to html
map_html = m._repr_html_()
return map_html
map_location = generate_map_html(33.589886, -7.603869)
html = HTML(string=map_location)
# save html to pdf
pdf = html.write_pdf()
with open('out/output.pdf', 'wb') as f:
f.write(pdf)
</code></pre>
<p>The PDF file is generated successfully, but instead of the map, I see the text "Make this Notebook Trusted to load map: File -> Trust Notebook".</p>
<p>How can I fix this issue and get the map to display in the PDF output?</p>
<p>The pdf output <a href="https://i.sstatic.net/eyFd8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eyFd8.png" alt="enter image description here" /></a></p>
|
<python><pdf><folium><weasyprint>
|
2023-05-03 08:36:44
| 0
| 395
|
Saad Mrabet
|
76,161,889
| 1,112,283
|
How do I specify a Python version constraint for a dependency specifically for a platform?
|
<p>My Python package requires Python >= 3.8 (defined in <code>setup.cfg</code> or <code>pyproject.toml</code>):</p>
<pre><code>python_requires = >=3.8
</code></pre>
<p>However, it also has the following <em>optional</em> dependency:</p>
<pre><code>tensorflow>=2.7.0
</code></pre>
<p>If optional dependencies are to be installed, I would like to require Python < 3.11 on macOS only. Previously, I tried:</p>
<pre><code>tensorflow>=2.7.0;python_version<'3.11'
</code></pre>
<p>But that constrains Python on all platforms. Is there a way to achieve this?</p>
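<p>One direction worth exploring (hedged: whether your tooling honours it end-to-end should be verified) is combining PEP 508 environment markers with <code>or</code>, so the Python constraint only bites when the platform is macOS:</p>

```
tensorflow>=2.7.0; sys_platform != "darwin" or python_version < "3.11"
```

<p>On macOS the first clause is false, so the Python check applies; elsewhere the first clause is true and tensorflow installs on any supported Python. Note that a false marker makes pip silently skip the dependency rather than raise an error, which may or may not be the behaviour you want.</p>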
|
<python><setuptools><python-packaging>
|
2023-05-03 08:32:43
| 1
| 1,821
|
cbrnr
|
76,161,882
| 7,386,830
|
Line plot with Seaborn
|
<p>I have the following code sample, where I need to draw a green line, as shown inside the plot below:</p>
<pre><code>fig = plt.figure(figsize = (12,6))
plt.scatter((y - y_pred), y_pred , color = 'red')
plt.axhline(y=0, color='r', linestyle=':')
plt.ylim(0,50)
plt.xlim(-4,6)
p = sns.lineplot([-4,6],[25,25],color='g')
plt.ylabel("Predictions")
plt.xlabel("Residual")
p = plt.title('Homoscedasticity Check')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/NpHvD.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NpHvD.jpg" alt="enter image description here" /></a></p>
<p>Although the code above worked some time back, it doesn't seem to execute now, and I observe the following error:</p>
<blockquote>
<p>TypeError: lineplot() takes from 0 to 1 positional arguments but 2
were given</p>
</blockquote>
<p>What could be wrong here? For example, what is the right format to indicate where the green line should appear?</p>
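<p>The error message suggests the installed seaborn is a newer version (0.12+), where <code>lineplot</code> only accepts data via keyword arguments, e.g. <code>sns.lineplot(x=[-4, 6], y=[25, 25], color='g')</code>. Since the green line is horizontal anyway, a matplotlib-only sketch achieves the same visual result without depending on the seaborn version (using a headless backend here just so the sketch is self-contained):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(12, 6))
ax.set_xlim(-4, 6)
ax.set_ylim(0, 50)
# Horizontal green line at y=25, spanning the axes -- same visual result.
line = ax.axhline(y=25, color="g")
print(list(line.get_ydata()))  # [25, 25]
```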
|
<python><plot><seaborn><visualization>
|
2023-05-03 08:31:32
| 1
| 754
|
Dinesh
|
76,161,826
| 10,232,932
|
Pandas read excel: XLRDError: Excel xlsx file; not supported
|
<p>There are already several existing questions on this topic with solutions:</p>
<p><a href="https://stackoverflow.com/questions/65250207/pandas-cannot-open-an-excel-xlsx-file">Pandas cannot open an Excel (.xlsx) file</a></p>
<p><a href="https://stackoverflow.com/questions/65254535/xlrd-biffh-xlrderror-excel-xlsx-file-not-supported">xlrd.biffh.XLRDError: Excel xlsx file; not supported</a></p>
<p>I am working with <code>python 3.9</code> in visual studio code with the packages:</p>
<pre><code>xlrd 2.0.1
pandas 1.1.5
openpyxl 3.1.2
</code></pre>
<p>when I am running the command:</p>
<pre><code>mapping = pd.read_excel('mapping/mapping.xlsx', dtype=str, engine='openpyxl')
</code></pre>
<p>or:</p>
<pre><code>mapping = pd.read_excel('mapping/mapping.xlsx', dtype=str)
</code></pre>
<p>it both gives the error:</p>
<blockquote>
<p>XLRDError: Excel xlsx file; not supported</p>
</blockquote>
<p>The solutions provided in the similar questions don't help me here.</p>
|
<python><pandas><excel>
|
2023-05-03 08:25:14
| 1
| 6,338
|
PV8
|
76,161,798
| 4,862,162
|
How to force-reload a Python module after pip upgrade?
|
<p>In practical applications, certain Python packages need to be upgraded dynamically during runtime. And to avoid service interruption, the main Python process shall not be terminated and relaunched due to individual module upgrade. To re-import the upgraded module, we typically use <code>importlib.reload()</code>. However, it does not work. To illustrate the problem:</p>
<p>Firstly, we run <code>pip install yt-dlp==2023.3.3</code> to install the old version of <code>yt-dlp</code>, then run the following code:</p>
<pre><code>#!/usr/bin/env python3
import pip, sys, importlib, time
import yt_dlp
print('before update:')
try:
yt_dlp.main(['--version'])
except:
pass
pip.main(['install', 'yt-dlp==2023.3.4'])
from importlib import reload
reload(yt_dlp)
print('after update:')
try:
yt_dlp.main(['--version'])
except:
pass
</code></pre>
<p>The output is shown below:</p>
<pre><code>before update:
2023.03.03
WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip.
Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue.
To avoid this problem you can invoke Python with '-m pip' instead of running pip directly.
Collecting yt-dlp==2023.3.4
Using cached yt_dlp-2023.3.4-py2.py3-none-any.whl (2.9 MB)
Requirement already satisfied: mutagen in /home/xuancong/anaconda3/lib/python3.8/site-packages (from yt-dlp==2023.3.4) (1.45.1)
Requirement already satisfied: pycryptodomex in /home/xuancong/anaconda3/lib/python3.8/site-packages (from yt-dlp==2023.3.4) (3.12.0)
Requirement already satisfied: websockets in /home/xuancong/anaconda3/lib/python3.8/site-packages (from yt-dlp==2023.3.4) (10.1)
Requirement already satisfied: certifi in /home/xuancong/anaconda3/lib/python3.8/site-packages (from yt-dlp==2023.3.4) (2022.12.7)
Requirement already satisfied: brotli in /home/xuancong/anaconda3/lib/python3.8/site-packages (from yt-dlp==2023.3.4) (1.0.9)
Installing collected packages: yt-dlp
Attempting uninstall: yt-dlp
Found existing installation: yt-dlp 2023.3.3
Uninstalling yt-dlp-2023.3.3:
Successfully uninstalled yt-dlp-2023.3.3
Successfully installed yt-dlp-2023.3.4
after update:
2023.03.03
</code></pre>
<p>Even after <code>pip</code> successfully upgraded <code>yt-dlp</code> to version 2023.3.4 and <code>yt_dlp</code> has been reloaded by <code>importlib.reload()</code>, it still shows the old version 2023.03.03. And if you check the <code>yt-dlp</code> version now (either by running <code>pip freeze | grep yt-dlp</code> or <code>yt-dlp --version</code>), it is indeed the upgraded version 2023.3.4 (this will change as new versions emerge).</p>
<p>What is ridiculous, and somewhat ironic, about the current Python 3 <code>importlib.reload()</code> implementation is that the following piece of code actually works without using importlib at all:</p>
<pre><code>#!/usr/bin/env python3
import pip, sys
def cleanse_modules(name):
for module_name in sorted(sys.modules.keys()):
if module_name.startswith(name):
del sys.modules[module_name]
del globals()[name]
import yt_dlp
print('before update:')
try:
yt_dlp.main(['--version'])
except:
pass
pip.main(['install', 'yt-dlp==2023.3.4'])
cleanse_modules('yt_dlp')
import yt_dlp
print('after update:')
try:
yt_dlp.main(['--version'])
except:
pass
</code></pre>
<p>The output is shown below:</p>
<pre><code>before update:
2023.03.03
WARNING: pip is being invoked by an old script wrapper. This will fail in a future version of pip.
Please see https://github.com/pypa/pip/issues/5599 for advice on fixing the underlying issue.
To avoid this problem you can invoke Python with '-m pip' instead of running pip directly.
Collecting yt-dlp==2023.3.4
Using cached yt_dlp-2023.3.4-py2.py3-none-any.whl (2.9 MB)
Requirement already satisfied: mutagen in /home/xuancong/anaconda3/lib/python3.8/site-packages (from yt-dlp==2023.3.4) (1.45.1)
Requirement already satisfied: pycryptodomex in /home/xuancong/anaconda3/lib/python3.8/site-packages (from yt-dlp==2023.3.4) (3.12.0)
Requirement already satisfied: websockets in /home/xuancong/anaconda3/lib/python3.8/site-packages (from yt-dlp==2023.3.4) (10.1)
Requirement already satisfied: certifi in /home/xuancong/anaconda3/lib/python3.8/site-packages (from yt-dlp==2023.3.4) (2022.12.7)
Requirement already satisfied: brotli in /home/xuancong/anaconda3/lib/python3.8/site-packages (from yt-dlp==2023.3.4) (1.0.9)
Installing collected packages: yt-dlp
Attempting uninstall: yt-dlp
Found existing installation: yt-dlp 2023.3.3
Uninstalling yt-dlp-2023.3.3:
Successfully uninstalled yt-dlp-2023.3.3
Successfully installed yt-dlp-2023.3.4
after update:
2023.03.04
</code></pre>
<p>The output is correct and as expected. Therefore, a natural question arises: how can the native Python 3 library function fail to work while, in principle, it is possible to get this working with manual Python code?</p>
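<p>The behaviour can be reproduced with a tiny on-disk package: <code>importlib.reload(pkg)</code> re-executes only the package's <code>__init__.py</code>, and any submodule that <code>__init__.py</code> imports still comes from <code>sys.modules</code>, so a stale <code>version</code> submodule keeps its old value — which appears to be exactly what happens with <code>yt_dlp</code>. A self-contained demonstration (temporary package, hypothetical names):</p>

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Build a throwaway package whose __init__ re-exports a submodule value.
tmp = Path(tempfile.mkdtemp())
pkg = tmp / "demo_pkg"
pkg.mkdir()
(pkg / "version.py").write_text("__version__ = 'old'\n")
(pkg / "__init__.py").write_text("from .version import __version__\n")

sys.path.insert(0, str(tmp))
import demo_pkg
first = demo_pkg.__version__           # 'old'

# Simulate a pip upgrade by rewriting the submodule on disk.
(pkg / "version.py").write_text("__version__ = 'newer'\n")

importlib.reload(demo_pkg)
after_reload = demo_pkg.__version__    # still 'old': version.py came from sys.modules

# Purging the cached submodules first makes a re-import pick up the new code.
for name in [m for m in sys.modules if m.startswith("demo_pkg")]:
    del sys.modules[name]
import demo_pkg
after_purge = demo_pkg.__version__     # 'newer'
print(first, after_reload, after_purge)  # old old newer
```

<p>This mirrors why the manual <code>cleanse_modules</code> approach above works while a single <code>reload()</code> on the top-level package does not.</p>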
|
<python><module><reload><python-importlib>
|
2023-05-03 08:20:47
| 0
| 1,615
|
xuancong84
|
76,161,594
| 10,027,592
|
Setting display width for OpenAI Gym (now Gymnasium)
|
<p>I'm trying to print out some values in Gymnasium (previously OpenAI Gym), such as:</p>
<pre><code>import gymnasium as gym
env = gym.make("LunarLander-v2", render_mode="human")
observation, info = env.reset()
print(f'env.observation_space: {env.observation_space}')
print(f'obs: {observation}')
</code></pre>
<p>The output looks like:</p>
<pre><code>env.observation_space: Box([-90. -90. -5. -5. -3.1415927 -5.
-0. -0. ], [90. 90. 5. 5. 3.1415927 5.
1. 1. ], (8,), float32)
obs: [-0.00316305 1.3999956 -0.3203935 -0.48554128 0.00367194 0.072574
0. 0. ]
</code></pre>
<p>Is there any way to set options such as <code>set_option('display.max_columns', 500)</code> or
<code>set_option('display.width', 1000)</code> like Pandas?</p>
<p><sub> - The tag should be <code>Gymnasium</code> but there's only <code>openai-gym</code> right now, so I'm using it.</sub><br />
<sub> - I'm not sure if StackOverflow is the right place to ask this question, but there are many questions like this and helpful answers. Please let me know if there's any advice.</sub></p>
<hr />
<p>EDIT) Summing up the comment and answers:</p>
<ol>
<li>use <code>np.set_printoptions(linewidth=1000)</code> since <code>Box2D</code> has a np.array representation. (<code>np.set_printoptions</code> has more options so please check them out)</li>
<li>Use pprint: <code>pp = pprint.PrettyPrinter(width=500, compact=True); pp.pprint(...)</code></li>
<li>Use <code>np.array2string</code> if it's a np.array. (The observation is a np.array so this works, but Gymnasium spaces such as <code>Box</code> are not numpy arrays themselves, so this doesn't work. Though their "representation" seems to be a numpy array, so solution 1 works.)</li>
</ol>
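<p>A quick runnable sketch of options 1 and 3 above (a plain array standing in for the observation):</p>

```python
import numpy as np

obs = np.array([-0.00316305, 1.3999956, -0.3203935, -0.48554128,
                0.00367194, 0.072574, 0.0, 0.0])

# Option 1: a process-wide setting, applies to every subsequent print.
np.set_printoptions(linewidth=1000, suppress=True)
print(obs)

# Option 3: per-call formatting without touching global state.
one_line = np.array2string(obs, max_line_width=1000)
print("\n" in one_line)  # False -- everything fits on a single line
```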
|
<python><reinforcement-learning><openai-gym>
|
2023-05-03 07:55:29
| 2
| 4,226
|
starriet 차주녕
|
76,161,562
| 4,469,930
|
How to properly attach a file to a PDF using PyPDF2?
|
<p>I'm trying to attach a file to a PDF file, but I'm running into some issues. I'm not sure if I'm doing something wrong or if there's a bug in PyPDF2. I'm using Python 3.10.2 for this and I downloaded the newest package for PyPDF2 through pip.</p>
<p>These are 3 versions of code that I tried using, but each has its own issues.</p>
<ol>
<li>This code copies the PDF properly, but the attachment fails silently. I can confirm the failure because the file size didn't grow.</li>
</ol>
<pre><code>pdfFile = open("input.pdf", "rb")
reader = PdfReader(pdfFile)
writer = PdfWriter()
writer.clone_document_from_reader(reader) # this line is different
pdfFile.close()
with open("image.png", "rb") as file:
writer.add_attachment("image", file.read())
with open("output.pdf", "wb") as file:
writer.write(file)
</code></pre>
<ol start="2">
<li>This code is slightly different than the one before, but also fails to attach the file.</li>
</ol>
<pre><code>pdfFile = open("input.pdf", "rb")
reader = PdfReader(pdfFile)
writer = PdfWriter()
writer.clone_reader_document_root(reader) # this line is different
writer.append_pages_from_reader(reader) # this line is different
pdfFile.close()
with open("image.png", "rb") as file:
writer.add_attachment("image", file.read())
with open("output.pdf", "wb") as file:
writer.write(file)
</code></pre>
<ol start="3">
<li>This code actually does attach the file, but upon opening the file in Adobe Acrobat, I get the error: "There was an error opening this document. The root object is missing or invalid." I don't see any API calls for creating a root object manually in PyPDF2.</li>
</ol>
<pre><code>pdfFile = open("input.pdf", "rb")
reader = PdfReader(pdfFile)
writer = PdfWriter()
writer.append_pages_from_reader(reader) # this line is different
pdfFile.close()
with open("image.png", "rb") as file:
writer.add_attachment("image", file.read())
with open("output.pdf", "wb") as file:
writer.write(file)
</code></pre>
<p>Funny enough, I don't get the error if I run the 3rd version of the code without attaching the file. Then it just works like the first 2 versions.</p>
|
<python><pdf><attachment><pypdf>
|
2023-05-03 07:51:37
| 1
| 728
|
bblizzard
|
76,161,461
| 14,224,948
|
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name to address: Unknown server error
|
<p>while doing this tutorial:
<a href="https://auth0.com/blog/using-python-flask-and-angular-to-build-modern-apps-part-1/" rel="nofollow noreferrer">https://auth0.com/blog/using-python-flask-and-angular-to-build-modern-apps-part-1/</a></p>
<p>I bumped on an error that wasn't described anywhere on Stack.</p>
<p>The problematic code:</p>
<pre><code>from datetime import datetime
from sqlalchemy import create_engine, Column, String, Integer, DateTime
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
db_url = 'localhost:5432'
db_name = 'FLASK_ANGULAR'
db_user = 'postgres'
db_password = 'Password@22'
engine = create_engine(f'postgresql://{db_user}:{db_password}@{db_url}/{db_name}')
Session = sessionmaker(bind=engine)
Base = declarative_base()
class Entity():
id = Column(Integer, primary_key=True)
created_at = Column(DateTime)
updated_at = Column(DateTime)
last_updated_by = Column(String)
def __init__(self, created_by):
self.created_at = datetime.now()
self.updated_at = datetime.now()
self.last_updated_by = created_by
</code></pre>
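<p>A likely cause (an inference, not stated above) is the <code>@</code> inside the password: it makes part of the credential string parse as the host name, which then fails DNS resolution. URL-encoding the credentials with <code>urllib.parse.quote_plus</code> avoids this:</p>

```python
from urllib.parse import quote_plus

db_url = 'localhost:5432'
db_name = 'FLASK_ANGULAR'
db_user = 'postgres'
db_password = 'Password@22'  # '@' must not appear raw in a URL

# encode user and password so reserved characters survive URL parsing
url = f'postgresql://{quote_plus(db_user)}:{quote_plus(db_password)}@{db_url}/{db_name}'
print(url)  # postgresql://postgres:Password%4022@localhost:5432/FLASK_ANGULAR
```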
|
<python><angular><flask><sqlalchemy><psycopg2>
|
2023-05-03 07:38:29
| 1
| 1,086
|
Swantewit
|
76,161,237
| 1,055,817
|
Tensorflow 2.11.0 on Mac crashes when setting the seed for generating random tensors
|
<p>I am trying to learn tensorflow. I installed tf on my Mac (M1 Chip) running 13.2.1.</p>
<p>I created a conda env with tensorflow.</p>
<p>I am trying to generate random tensors by setting the seed and my kernel crashes. Here is the output when I tried to do this.</p>
<p><a href="https://i.sstatic.net/doq3U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/doq3U.png" alt="enter image description here" /></a></p>
<p>What am I doing wrong? How do I fix this issue?</p>
|
<python><tensorflow>
|
2023-05-03 07:07:30
| 2
| 1,942
|
siva82kb
|
76,161,108
| 1,954,677
|
How to simplify a logical expression based on additional logical constraints using sympy
|
<p><strong>Question</strong></p>
<p>Is there a way to use <code>sympy</code> to simplify (in the usual vaguely defined sense) a logical <em>target expression</em> based on some logical <em>constraint expressions</em>? Both the target expression and the constraints can have an arbitrary form in a set of symbols <code>A,B,C...</code> and the operands <code>>>,&,|,~</code>.</p>
<p>E.g., I would expect</p>
<pre><code>how_to_do_this( (C & A) | (C & B), [A >> B, ] )
-> (B & C)
</code></pre>
<p>Note that there might be multiple constraints. Besides the general case, also the special case of having only simple implications as constraints would be of interest.</p>
<p><strong>First thoughts</strong></p>
<p>A hackish approach I could think of would be to simply add the constraints, as</p>
<pre><code>sp.simplify("A >> B & (C & A) | (C & B)")
-> B & C
</code></pre>
<p>However, for the same constraint <code>A >> B</code> with different target expressions, this would "contaminate" the target expression with the information of the constraint, e.g.</p>
<pre><code>sp.simplify("A >> B & (C & D)")
-> C & D & (B | ~A)
</code></pre>
<p>which is unwanted, as the constraint belongs to a different logical context. Here the approach fails, as I'd simply expect the result <code>(C & D)</code> because the constraint just can't be used to simplify anything.</p>
<p>More formally, in this approach the constraints become part of the target expression, and thus something that it has to ensure.</p>
<p><strong>Second thoughts</strong></p>
<p>A different perspective, which however is almost the equivalent problem, would be to reformulate this with algebraic expressions, e.g.</p>
<pre><code>how_to_do_this(x+2*y, [x+y==5])
-> y+5
</code></pre>
<p>however, note that the explicit approach "solve <code>x+y=5</code> for <code>x=5-y</code> and substitue it into <code>x+2*y</code>" is not a general enough solution here, as e.g. the constraint <code>A>>B</code> cannot be "solved for" <code>A</code> (or <code>B</code>) in this sense, nor could a more complicated logical condition. The algorithm should rather "group out" <code>x+y</code> and replace it by <code>5</code>, i.e. <code>x+2*y=y+(x+y)=y+5</code>. If such a mechanism is known for algebraic expressions, it should also be applicable to logic expressions: "group out <code>A >> B</code> and replace it by <code>True</code>".</p>
<p>These however are just ideas.</p>
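<p>One concrete building block (a sketch, not a full solver): a candidate simplification can at least be <em>verified</em> under the constraints with sympy's SAT interface — the target and candidate are equivalent under constraint <code>c</code> iff <code>c & ~(target <-> candidate)</code> is unsatisfiable:</p>

```python
from sympy import symbols
from sympy.logic.boolalg import And, Or, Not, Implies, Equivalent
from sympy.logic.inference import satisfiable

A, B, C = symbols("A B C")
target = Or(And(C, A), And(C, B))     # (C & A) | (C & B)
constraint = Implies(A, B)            # A >> B
candidate = And(B, C)                 # proposed simplification

# equivalent under the constraint iff no assignment satisfies the
# constraint while making target and candidate disagree
equiv = satisfiable(And(constraint, Not(Equivalent(target, candidate)))) is False
print(equiv)  # True
```

<p>A brute-force simplifier could then enumerate candidate expressions in increasing size and return the first one that passes this check.</p>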
|
<python><logic><sympy><logical-operators><symbolic-math>
|
2023-05-03 06:47:12
| 1
| 3,916
|
flonk
|
76,160,833
| 6,758,739
|
Search for the username and change the password in the space-separated file using python
|
<p>The code below reads the files and stores it to an array:</p>
<pre><code>import sys
import re
import logging

def file_to_array():
    file_info = "/application/files/Password.txt"
    with open(file_info, 'rt') as fh:
        pw_lines = [x.strip() for x in fh.readlines() if not x.startswith('#')]  # ignore comments
    passwd = find_in_pwfile(pw_lines=pw_lines, key1="", key2="DBFS_USR")

def main():
    file_to_array()
</code></pre>
<p>File:</p>
<pre><code> * DB_USR ABCDEF
* DB_MGR QWERTY
</code></pre>
<p>New File:</p>
<pre><code> * DB_USR PQRSTU
* DB_MGR QWERTY
</code></pre>
<p>I am looking to change the password of <code>DB_USR</code> from <code>ABCDEF</code> to new password <code>PQRSTU</code> in the file.</p>
<p>So I am finding the username using the code below, which deals with the different patterns of entries in the file:</p>
<pre><code>def find_in_pwfile(pw_lines, key1, key2):
    patterns = ['^{k1}\s+{k2}\s+(\S+)\s*$'.format(k1=key1, k2=key2),
                '^\*\s+{k2}\s+(\S+)\s*$'.format(k2=key2),
                '^{k1}\s+\*\s+(\S+)\s*$'.format(k1=key1)
                ]  # These are various patterns in the password file
    for pattern in patterns:
        for line in pw_lines:
            m = re.match(pattern, line)
            if m is not None:
                s = "PQRSTU"
                replaced_line = m.group(0).replace(m.group(1), s)
                return m.group(1)
    # if we reach this point no match has been found
    return 'NotFound'
</code></pre>
<p>Now, I would like to change the password in the array, write the entire array back to a file, and take a backup of the original file, but I am stuck here.
Can you please let me know what's the best approach and how I should go about changing the value in the file?</p>
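<p>A sketch of one approach (the function name and regex are my assumptions, matching the <code>* DB_USR ABCDEF</code> layout shown above): back up the file with <code>shutil</code>, then rewrite it line by line, substituting the password on the matching line:</p>

```python
import re
import shutil

def update_password(path, user, new_password):
    shutil.copy2(path, path + ".bak")  # keep a backup of the original file
    # '* <user> <password>' with arbitrary whitespace, as in the sample file
    pattern = re.compile(r'^(\s*\*\s+%s\s+)\S+\s*$' % re.escape(user))
    with open(path) as fh:
        lines = fh.readlines()
    with open(path, "w") as fh:
        for line in lines:
            m = pattern.match(line)
            if m:
                line = m.group(1) + new_password + "\n"
            fh.write(line)
```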
|
<python>
|
2023-05-03 05:59:23
| 0
| 992
|
LearningCpp
|
76,160,728
| 6,539,586
|
Pandas months between function, np.timedelta does not work
|
<p>There are a million results for how to find number of months between two columns in a pandas dataframe, and they nearly all say to use this:</p>
<pre><code>(date_end - date_start) / np.timedelta64(1, "M")
</code></pre>
<p>This doesn't work in some instances though, I'm not sure if this is a bug or what, but here's my example:</p>
<pre><code>df = pd.DataFrame({"a":["2018-01-27"], "b":["2020-02-29"]})
(pd.to_datetime(df.b) - pd.to_datetime(df.a)) / np.timedelta64(1,"M")
</code></pre>
<p>This gives:</p>
<pre><code>0 24.612903
dtype: float64
</code></pre>
<p>But that's not right, 2020-02-29 is more than 25 months after 2018-01-27. In fact if you switch to the same year, it recognizes this:</p>
<pre><code>df = pd.DataFrame({"a":["2020-01-27"], "b":["2020-02-29"]})
(pd.to_datetime(df.b) - pd.to_datetime(df.a)) / np.timedelta64(1,"M")
</code></pre>
<p>This gives:</p>
<pre><code>0 1.064516
dtype: float64
</code></pre>
<p>Is there an explanation for this and/or a better way to get months between two months?</p>
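<p>One common alternative (a sketch, not necessarily "the" answer): count calendar months directly from the year and month components, since <code>np.timedelta64(1, "M")</code> is a fixed-length unit and cannot track variable month lengths:</p>

```python
import pandas as pd

df = pd.DataFrame({"a": ["2018-01-27"], "b": ["2020-02-29"]})
a = pd.to_datetime(df.a)
b = pd.to_datetime(df.b)

# whole calendar months between the dates, ignoring the day component
months = (b.dt.year - a.dt.year) * 12 + (b.dt.month - a.dt.month)
print(months.iloc[0])  # 25
```

<p>Subtracting an extra month when <code>b.dt.day < a.dt.day</code> would give completed months instead.</p>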
|
<python><pandas><numpy><date><datetime>
|
2023-05-03 05:38:21
| 0
| 730
|
zachvac
|
76,160,542
| 2,604,247
|
What Actually Triggers the Asyncio Tasks?
|
<p>Trying to understand python <code>asyncio</code> coming from some background on multithreading with <code>concurrent.futures</code>. Here is the sample script</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
# encoding: utf-8
"""Sample script to test asyncio functionality."""
import asyncio
import logging
from time import sleep # noqa
logging.basicConfig(format='%(asctime)s | %(levelname)s: %(message)s',
level=logging.INFO)
async def wait(i: int) -> None:
"""The main function to run asynchronously"""
logging.info(msg=f'Entering wait {i}')
await asyncio.sleep(5)
logging.info(msg=f'Leaving wait {i}') # This does not show because all pending tasks are SIGKILLed?
async def main() -> None:
"""The main."""
[asyncio.create_task(
coro=wait(i)) for i in range(10)]
logging.info(msg='Created tasks, waiting before await.')
sleep(5) # This is meant to verify the tasks do not start by the create_task call.
# What changes after the sleep command, i.e. here?
# If the tasks did not start before the sleep, why would they start after the sleep?
if __name__ == '__main__':
asyncio.run(main=main())
</code></pre>
<p>The technology stack, if relevant, is python 3.10 on Ubuntu 22.04. Here is my terminal output.</p>
<pre><code>2023-05-03 12:30:45,297 | INFO: Created tasks, waiting before await.
2023-05-03 12:30:50,302 | INFO: Entering wait 0
2023-05-03 12:30:50,304 | INFO: Entering wait 1
2023-05-03 12:30:50,304 | INFO: Entering wait 2
2023-05-03 12:30:50,304 | INFO: Entering wait 3
2023-05-03 12:30:50,304 | INFO: Entering wait 4
2023-05-03 12:30:50,304 | INFO: Entering wait 5
2023-05-03 12:30:50,304 | INFO: Entering wait 6
2023-05-03 12:30:50,304 | INFO: Entering wait 7
2023-05-03 12:30:50,304 | INFO: Entering wait 8
2023-05-03 12:30:50,304 | INFO: Entering wait 9
</code></pre>
<p>So two related questions based on this snippet.</p>
<ol>
<li>What exactly is triggering the <code>wait</code> async task here (which just logs two lines at the console upon entry and exit)? Clearly, creating the tasks is not really making them run, as I am waiting for long enough after creating them in main. Even the timestamps show they are run <em>after</em> the blocking sleep in main.
Yet, just as the main function seems to finish its sleep, and exit, the tasks seem to be triggered. Should not the main thread just exit at this point?</li>
<li>The exit log is never printed (see the comment in the code). Does it mean the subprocesses are just started after the main thread exits, and then immediately killed?</li>
</ol>
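<p>For contrast, here is a variant (my sketch) where the tasks run to completion: <code>create_task</code> only schedules the coroutines; they first execute when the event loop regains control (here, at the <code>await</code>), and in the original script <code>asyncio.run</code> cancels the still-pending tasks once <code>main()</code> returns, which is why the "Leaving" lines never print:</p>

```python
import asyncio

results = []

async def wait(i: int) -> None:
    results.append(f"enter {i}")
    await asyncio.sleep(0.01)
    results.append(f"leave {i}")

async def main() -> None:
    # tasks are only scheduled here; they start running the first time
    # main() yields to the event loop, and gather keeps main() alive
    # until every task has finished
    tasks = [asyncio.create_task(wait(i)) for i in range(3)]
    await asyncio.gather(*tasks)

asyncio.run(main())
print(results)
```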
|
<python><multithreading><subprocess><python-asyncio>
|
2023-05-03 04:56:06
| 2
| 1,720
|
Della
|
76,160,534
| 651,174
|
Find if a sub-structure is in a structure
|
<p>A way to find if an object is a sub-object of another?</p>
<p>Is there an operator or built-in function in python where I can do something like the following?</p>
<pre><code>>>> needle={"age": 10, "name": "peter"}
>>> haystack={"age":10, "name": "peter", "height": 100}
>>> needle & haystack == needle
</code></pre>
<p>The equivalent, I suppose, of set-intersection, but now with key-value pairs.</p>
<p>Is there a name for this in intersection-on-maps in programming?</p>
<p>One roundabout way of doing this is the following:</p>
<pre><code>>>> set(needle.items()) & set(haystack.items()) == set(needle.items())
True
</code></pre>
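<p>For what it's worth, the intermediate <code>set(...)</code> calls can be dropped: dictionary <code>.items()</code> views are already set-like (for hashable values), so subset testing works directly:</p>

```python
needle = {"age": 10, "name": "peter"}
haystack = {"age": 10, "name": "peter", "height": 100}

# dict items views support set operators, including subset (<=)
print(needle.items() <= haystack.items())  # True
```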
|
<python><python-3.x><dictionary>
|
2023-05-03 04:53:47
| 2
| 112,064
|
David542
|
76,160,529
| 5,942,100
|
Tricky normalize a specific column within a dataframe using a calculated field logic (Pandas)
|
<p>Take the delta between ['rounded_sum'] and ['rounded_sum_2']. Once found, subtract or add this delta to the ['Q4 28'] column. Making sure the sum of columns [Q1 28 - Q4 28] is equivalent to the ['rounded_sum'] column.</p>
<p><strong>Data</strong></p>
<pre><code>Location range type Q1 28 Q2 28 Q3 28 Q4 28 rounded_sum rounded_sum_2
NY low_r AA 2 0 0 0 2 2
NY low_r AA 2 2 2 6 8 12
NY low_g BB 0 0 0 0 0 0
NY low_g BB 0 0 2 4 4 6
CA low_r AA 0 2 4 4 6 10
CA low_r AA 2 2 4 8 12 16
CA low_g BB 0 0 0 0 0 0
CA low_g BB 0 0 0 2 2 2
</code></pre>
<p><strong>Desired</strong></p>
<pre><code> Location range type Q1 28 Q2 28 Q3 28 Q4 28 rounded_sum rounded_sum_2
NY low_r AA 2 0 0 0 2 2
NY low_r AA 2 2 2 2 8 12
NY low_g BB 0 0 0 0 0 0
NY low_g BB 0 0 2 2 4 6
CA low_r AA 0 2 4 0 6 10
CA low_r AA 2 2 4 4 12 16
CA low_g BB 0 0 0 0 0 0
CA low_g BB 0 0 0 2 2 2
</code></pre>
<p><strong>Doing</strong></p>
<pre><code>delta = df['rounded_sum'].sub(df['rounded_sum_2'])
</code></pre>
<p>#add or subtract delta to [Q4 28']</p>
<pre><code>df['Q4 28'] = df['Q4 28'].add(delta)
</code></pre>
<p>I believe I can use a calculated field, but still researching, any suggestion is appreciated.</p>
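<p>As a check, the two lines above do produce the desired output on a small sample (a sketch with a subset of the rows shown):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Q1 28": [2, 2], "Q2 28": [0, 2], "Q3 28": [0, 2], "Q4 28": [0, 6],
    "rounded_sum": [2, 8], "rounded_sum_2": [2, 12],
})

# delta between the two sums; adding it to Q4 28 rebalances the quarters
delta = df["rounded_sum"].sub(df["rounded_sum_2"])
df["Q4 28"] = df["Q4 28"].add(delta)

quarter_cols = ["Q1 28", "Q2 28", "Q3 28", "Q4 28"]
print(df["Q4 28"].tolist())                   # [0, 2]
print(df[quarter_cols].sum(axis=1).tolist())  # [2, 8] == rounded_sum
```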
|
<python><pandas><numpy>
|
2023-05-03 04:52:40
| 2
| 4,428
|
Lynn
|
76,160,478
| 3,931,214
|
Numpy "put_along_axis", but add to existing values rather than just put (similar to scatter_add in PyTorch)
|
<p>Is there a way to use <code>np.put_along_axis</code> but have it add to the existing values rather than replace?</p>
<p>For example, in PyTorch this could be implemented as:</p>
<pre><code>import torch
frame = torch.zeros(3,2, dtype=torch.double)
updates = torch.tensor([[5,5], [10,10], [3,3]], dtype=torch.double)
indices = torch.tensor([[1,1], [1,1], [2,2]])
frame.scatter_add(0, indices, updates)
OUTPUT: [[0, 0], [15,15], [3,3]]
</code></pre>
<p>Numpy's <code>put_along_axis</code> would give:</p>
<pre><code>import numpy as np
frame = np.zeros((3, 2))
updates = np.array([[5,5], [10,10], [3,3]])
indices = np.array([[1,1], [1,1], [2,2]])
np.put_along_axis(frame, indices, updates, axis=0)
OUTPUT: [[0, 0],[10, 10], [3,3]]
</code></pre>
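<p>A close numpy equivalent of <code>scatter_add</code> (a sketch): <code>np.add.at</code> performs unbuffered in-place addition, so duplicate indices accumulate instead of overwriting:</p>

```python
import numpy as np

frame = np.zeros((3, 2))
updates = np.array([[5, 5], [10, 10], [3, 3]], dtype=float)
indices = np.array([[1, 1], [1, 1], [2, 2]])

# scatter-add along axis 0: row index from `indices`, column index from
# a broadcast arange; duplicate (row, col) pairs are summed
np.add.at(frame, (indices, np.arange(frame.shape[1])), updates)
print(frame)  # [[0, 0], [15, 15], [3, 3]]
```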
|
<python><numpy><pytorch>
|
2023-05-03 04:39:01
| 1
| 843
|
Craig
|
76,160,437
| 8,283,848
|
Aggregate JSON key value of JSONField Django PostgreSQL
|
<p>I have a simple model setup as below,</p>
<pre class="lang-py prettyprint-override"><code>import random
import string
from django.db import models
def random_default():
random_str = "".join(random.choice(string.ascii_uppercase + string.digits) for _ in range(10))
return {"random": random_str, "total_price": random.randint(1, 100)}
class Foo(models.Model):
cart = models.JSONField(default=random_default)
</code></pre>
<p>I want to get the sum of <em><strong><code>total_price</code></strong></em> from all <code>Foo</code> instances. In native Python, I can do something like below to get the sum, but I believe it is suboptimal.</p>
<pre class="lang-py prettyprint-override"><code>sum(foo.cart["total_price"] for foo in Foo.objects.all())
</code></pre>
<p>I tried the following aggregate queries with Django, but none seems correct/working.</p>
<h3>1.</h3>
<pre class="lang-py prettyprint-override"><code>Foo.objects.aggregate(total=models.Sum(Cast('cart__total_price', output_field=models.IntegerField())))
# Error
# django.db.utils.DataError: cannot cast jsonb object to type integer
</code></pre>
<h3>2.</h3>
<pre class="lang-py prettyprint-override"><code>Foo.objects.aggregate(total=models.Sum('cart__total_price', output_field=models.IntegerField()))
# Error
# django.db.utils.ProgrammingError: function sum(jsonb) does not exist
# LINE 1: SELECT SUM("core_foo"."cart") AS "total" FROM "core_foo"
^
# HINT: No function matches the given name and argument types. You might need to add explicit type casts.
</code></pre>
<h3>Question</h3>
<p>What is the proper/best way to get the sum of top-level JSON keys of a JSONField?</p>
<hr />
<h3>Versions</h3>
<ul>
<li>Python 3.8</li>
<li>Django 3.1.X</li>
</ul>
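<p>A common workaround (a sketch; it assumes PostgreSQL and the <code>Foo</code> model above) is to extract the key as <em>text</em> with <code>KeyTextTransform</code> (<code>cart ->> 'total_price'</code>), which PostgreSQL can cast to integer, unlike the raw <code>jsonb</code> value in attempt 1:</p>

```python
from django.db.models import IntegerField, Sum
from django.db.models.fields.json import KeyTextTransform
from django.db.models.functions import Cast

# cart ->> 'total_price' yields text, which CAST(... AS integer) accepts
total = Foo.objects.aggregate(
    total=Sum(Cast(KeyTextTransform("total_price", "cart"),
                   output_field=IntegerField()))
)["total"]
```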
|
<python><django><postgresql><django-orm><django-3.0>
|
2023-05-03 04:27:42
| 2
| 89,380
|
JPG
|
76,160,382
| 3,199,553
|
Why can't I modify DataFrame in place by selecting some columns when iterating through list of DataFrames?
|
<pre><code>dfl = [pd.DataFrame(
{
"A": 1.0,
"B": pd.Timestamp("20130102"),
"C": pd.Series(1, index=list(range(4)), dtype="float32"),
"D": "foo",
}
)]
for df in dfl:
df = df[["A", "B"]]
print(dfl)
</code></pre>
<p>I was expecting the output has only column "A" and "B" since I was modifying the DataFrame in place (<code>df = ...</code>). However I got:</p>
<pre><code>[ A B C D
0 1.0 2013-01-02 1.0 foo
1 1.0 2013-01-02 1.0 foo
2 1.0 2013-01-02 1.0 foo
3 1.0 2013-01-02 1.0 foo]
</code></pre>
<p>What is the reason and how can I <strong>select</strong> (not drop) some columns from the each DataFrame in that list in place?</p>
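<p>The short explanation: <code>df = df[["A", "B"]]</code> only rebinds the local loop variable to a new object; the list still holds the original DataFrames. One way to select columns for each element (a sketch) is to assign the selection back into the list by index:</p>

```python
import pandas as pd

dfl = [pd.DataFrame({
    "A": [1.0] * 4,
    "B": [pd.Timestamp("20130102")] * 4,
    "C": pd.Series(1, index=list(range(4)), dtype="float32"),
    "D": ["foo"] * 4,
})]

# rebinding `df` never touches the list, so store the selection back
for i, df in enumerate(dfl):
    dfl[i] = df[["A", "B"]]

print(dfl[0].columns.tolist())  # ['A', 'B']
```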
|
<python><pandas><dataframe>
|
2023-05-03 04:12:57
| 3
| 1,497
|
stanleyli
|
76,160,077
| 21,784,274
|
How to get rid of daylight saving time (DST) in Django? (Because in my country it's not being used anymore!)
|
<p>In my country, daylight saving time (DST) is not being applied anymore, but Django is still taking it into account. So when I choose my region in <code>TIME_ZONE</code> inside the <code>settings.py</code> file, the time is stored wrong (1-hour offset) inside the database.</p>
<p><strong>Is there any possible way to turn off the DST in Django?!</strong></p>
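<p>One workaround (a sketch; the UTC+3 offset is an assumption — substitute your country's fixed offset) is to use a fixed-offset zone from the <code>Etc/</code> area, which has no DST rules at all. Note the inverted sign convention: <code>Etc/GMT-3</code> means UTC+3:</p>

```python
# settings.py (fragment)
TIME_ZONE = "Etc/GMT-3"  # fixed UTC+3, never observes DST
USE_TZ = True
```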
|
<python><django><timezone><dst>
|
2023-05-03 02:34:01
| 1
| 947
|
Mohawo
|
76,160,048
| 10,655,190
|
XGBoost DataFrame.dtypes for data must be int, float or bool
|
<p><strong>Context</strong>: Trying to fit my XGBoost model but getting a ValueError Msg. I've looked at similar posts but the difference is that ALL my columns are either Int or Float. I have no object, categorical columns.</p>
<p>X_train.info() yields:</p>
<p><a href="https://i.sstatic.net/P6RnN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/P6RnN.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/HIvpv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HIvpv.png" alt="enter image description here" /></a></p>
|
<python><machine-learning><xgboost>
|
2023-05-03 02:25:07
| 1
| 1,624
|
Roger Steinberg
|
76,159,980
| 1,023,928
|
Can't get 'autorange' for the y-axis to work in Plotly for Python
|
<p>I created 4 subplots that are stacked vertically with their x-axes synchronized. My problem is that when I zoom in on one chart then the other chart's x-axis range is adjusted as desired, however, the y-axis is not adjusted. I want the y-axis range to rescale automatically so that the max value in the visible range is at the upper bound of the y-axis and accordingly for the min value on the lower bound. I tried to use the 'autorange' function but it does not seem to have any effect:</p>
<pre><code>fig = make_subplots(rows=4, cols=1)
fig.append_trace(go.Line(x=df["Datetime"], y=df["Pnl"], name="Pnl"), row=1, col=1)
fig.append_trace(go.Line(x=df["Datetime"], y=df["USDCAD"], name="USDCAD"), row=2, col=1)
fig.append_trace(go.Line(x=df["Datetime"], y=df["USD"], name="USD"), row=3, col=1)
fig.append_trace(go.Line(x=df["Datetime"], y=df["USD_SMA30"], name="USD SMA"), row=3, col=1)
fig.append_trace(go.Line(x=df["Datetime"], y=df["CAD"], name="CAD"), row=4, col=1)
fig.append_trace(go.Line(x=df["Datetime"], y=df["CAD_SMA30"], name="CAD SMA"), row=4, col=1)
fig.update_layout(width=1000, height=2000)
fig.update_xaxes(matches='x')
fig.update_yaxes(autorange=True)
fig.show()
</code></pre>
|
<python><plotly>
|
2023-05-03 02:04:22
| 0
| 7,316
|
Matt
|
76,159,932
| 412,655
|
With a Python decorator that adds a named argument, is it possible to provide a correct type annotation?
|
<p>Suppose I have this decorator <code>wrap_with_id</code> and use it on <code>func</code>:</p>
<pre class="lang-py prettyprint-override"><code>P = ParamSpec("P")
R = TypeVar("R")
def wrap_with_id(fn: Callable[P, R]):
def wrapper(id: str, *args: P.args, **kwargs: P.kwargs) -> R:
all_ids.append(id)
return fn(*args, **kwargs)
return wrapper
@wrap_with_id
def func(x: int, y: int) -> str:
...
func
</code></pre>
<p>When I use pylance or pyright, it knows that the resulting <code>func</code> has the following signature:</p>
<pre class="lang-py prettyprint-override"><code>(function) def func(id: str, x: int, y: int) -> str
</code></pre>
<p>However, notice that <code>wrap_with_id</code> did <em>not</em> have a return type annotation. Is it possible to add an annotation so that the type checker knows that the first argument has the name <code>id</code>?</p>
<p>The closest I have been able to get is with <code>Callable[Concatenate[str, P], R]</code>, but it's not quite right. If I add that annotation, like so:</p>
<pre class="lang-py prettyprint-override"><code>P = ParamSpec("P")
R = TypeVar("R")
def wrap_with_id(fn: Callable[P, R]) -> Callable[Concatenate[str, P], R]:
def wrapper(id: str, *args: P.args, **kwargs: P.kwargs) -> R:
all_ids.append(id)
return fn(*args, **kwargs)
return wrapper
@wrap_with_id
def func(x: int, y: int) -> str:
...
func
</code></pre>
<p>Then pylance/pyright knows that the first argument is of type <code>str</code>, but it does not know that it has the name <code>id</code>. This is what it thinks the signature is:</p>
<pre class="lang-py prettyprint-override"><code>(function) def func(str, x: int, y: int) -> str
</code></pre>
<p>This is a problem because then if I call the function like this, it thinks there's an error, when in fact it's fine:</p>
<pre class="lang-py prettyprint-override"><code>func(id="abc", x=4, y=5)
</code></pre>
|
<python><types>
|
2023-05-03 01:46:38
| 2
| 4,147
|
wch
|
76,159,813
| 1,035,279
|
With python, I am using a requirements.txt file, how can I find the module specified in that file that installed a particular dependency?
|
<p>The requirements.txt file is large and investigating it module by module is not practical.</p>
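<p>One approach (a sketch): <code>pipdeptree --reverse</code> answers this, or the standard library's <code>importlib.metadata</code> can scan every installed distribution's declared requirements for the dependency in question:</p>

```python
import re
from importlib import metadata

def dependents(target: str) -> list:
    """Return installed distributions that declare `target` as a requirement."""
    canon = target.lower().replace("_", "-")
    found = set()
    for dist in metadata.distributions():
        for req in dist.requires or []:
            # the requirement name is the leading run of name characters
            m = re.match(r"[A-Za-z0-9._-]+", req)
            if m and m.group(0).lower().replace("_", "-") == canon:
                found.add(dist.metadata["Name"])
    return sorted(found)

print(dependents("numpy"))  # e.g. ['pandas', 'scipy'] depending on the env
```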
|
<python><python-3.x><pip>
|
2023-05-03 01:03:38
| 1
| 16,671
|
Paul Whipp
|
76,159,708
| 4,419,845
|
How to disable Authentication in FastAPI based on environment?
|
<p>I have a FastAPI application for which I enable <code>Authentication</code> by injecting a dependency function.</p>
<p>controller.py</p>
<pre class="lang-py prettyprint-override"><code>router = APIRouter(
prefix="/v2/test",
tags=["helloWorld"],
dependencies=[Depends(api_key)],
responses={404: {"description": "Not found"}},
)
</code></pre>
<p>Authorization.py</p>
<pre class="lang-py prettyprint-override"><code>async def api_key(api_key_header: str = Security(api_key_header_auth)):
if api_key_header != API_KEY:
raise HTTPException(
status_code=401,
detail="Invalid API Key",
)
</code></pre>
<p>This works fine. However, I would like to <strong>disable</strong> the authentication based on environment. For instance, I would want to keep entering the authentication key in <code>localhost</code> environment.</p>
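<p>One way (a sketch; the <code>APP_ENV</code> variable name and header name are my assumptions) is to short-circuit the dependency itself based on an environment variable, so the router code stays unchanged. <code>auto_error=False</code> matters here, so a missing header isn't rejected before the environment check runs:</p>

```python
import os

from fastapi import HTTPException, Security
from fastapi.security.api_key import APIKeyHeader

API_KEY = os.getenv("API_KEY", "")
api_key_header_auth = APIKeyHeader(name="X-API-Key", auto_error=False)

async def api_key(api_key_header: str = Security(api_key_header_auth)):
    # skip the check entirely outside protected environments
    if os.getenv("APP_ENV", "localhost") == "localhost":
        return
    if api_key_header != API_KEY:
        raise HTTPException(status_code=401, detail="Invalid API Key")
```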
|
<python><authorization><fastapi><swagger-ui><openapi>
|
2023-05-03 00:25:33
| 1
| 508
|
Waqar ul islam
|
76,159,644
| 1,185,242
|
How do you test a class that reads from multiple files with pytest?
|
<p>I'm trying to write a test for the following class, which has a method that reads from two different files.</p>
<pre><code># example.py
class Example:
def __init__(self):
pass
def load(self):
p = open('p.txt').read()
q = open('q.txt').read()
return p + q
</code></pre>
<p>I would like to test the <code>load</code> method by supplying the contents for both files:</p>
<pre><code># test_example.py
from lib import Example
def test_example_class():
p_contents = 'abc'
q_contents = 'def'
# What do I put here to mock the open function for both files?
ex = Example()
assert(ex.load() == p_contents + q_contents)
</code></pre>
<p>How do I supply the contents of the two files as they don't (and cant exist) in the test environment?</p>
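<p>One way (a sketch, with <code>Example</code> inlined for self-containment) is to patch <code>builtins.open</code> with a <code>side_effect</code> that dispatches on the file name, handing each path its own <code>mock_open</code> handle:</p>

```python
from unittest.mock import mock_open, patch

class Example:
    def load(self):
        p = open('p.txt').read()
        q = open('q.txt').read()
        return p + q

def fake_open(path, *args, **kwargs):
    # return a per-file mock handle whose read() yields that file's contents
    files = {"p.txt": "abc", "q.txt": "def"}
    return mock_open(read_data=files[path]).return_value

with patch("builtins.open", side_effect=fake_open):
    result = Example().load()

print(result)  # abcdef
```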
|
<python><mocking><pytest>
|
2023-05-03 00:02:46
| 1
| 26,004
|
nickponline
|
76,159,625
| 4,447,540
|
how to avoid lock or release metadata lock in sqlalchemy using mysql
|
<p>I'm using sqlalchemy 2.0 with a MySQL database. If I create some table definitions in my <code>Metadata()</code> object (<code>meta</code>) using <code>meta.create_all()</code> and then immediately drop them using <code>meta.drop_all()</code> there is no problem -- tables are correctly created and dropped without a problem.</p>
<p>However, if I read from those tables after creating and before dropping them, the <code>drop_all()</code> call fails. I presume this is due to a metadata lock, but I'm not sure how to release it, e.g. via <code>commit()</code> or <code>close()</code> on Session/Connection objects.</p>
<p>Example:</p>
<pre><code>
from sqlalchemy import Table, Column, Integer, String, MetaData, create_engine, inspect, select
import pandas as pd
engine = create_engine(con_str)
meta = MetaData(schema='test')
table1 = Table('table1',meta, Column('id1', Integer, primary_key = True), Column('col1', String(50)))
table2 = Table('table2',meta, Column('id2', Integer, primary_key = True), Column('col2', String(50)))
meta.create_all(bind=engine)
print(f"Tables after creation: {inspect(engine.connect()).get_table_names()}")
# pd.read_sql_query(sql=select(table1), con=engine.connect())
meta.drop_all(bind=engine)
print(f"Tables after drop_all: {inspect(engine.connect()).get_table_names()}")
</code></pre>
<p>Output:</p>
<pre><code>Tables after creation: ['table1', 'table2']
Tables after drop_all: []
</code></pre>
<p>If I uncomment the <code>pd.read_sql_query()</code> line, it will correctly return a <code>pd.DataFrame</code> of shape <code>(0,2)</code>, but the next line (i.e. the <code>drop_all()</code> call) will freeze.</p>
<p><strong>Question: what is the correct way to unlock the metadata lock or table lock that appears to be placed when query the tables?</strong></p>
<h3>Update:</h3>
<p>Below is the line from <code>show processlist;</code> that indicates the freeze</p>
<pre><code>| 432 | <user> | <host> | test | Query | 3 | Waiting for table metadata lock | DROP TABLE test.table1 |
</code></pre>
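<p>The usual cause (an inference from the symptoms): <code>engine.connect()</code> inside the <code>read_sql_query</code> call opens a connection whose implicit transaction is never ended, and that open transaction holds the metadata lock. Closing the connection — e.g. with a context manager — releases it before <code>drop_all()</code>. A runnable sketch (SQLite here, but the pattern is the same for MySQL):</p>

```python
import pandas as pd
from sqlalchemy import (Column, Integer, MetaData, Table, create_engine,
                        inspect, select)

engine = create_engine("sqlite://")
meta = MetaData()
t1 = Table("table1", meta, Column("id1", Integer, primary_key=True))
meta.create_all(bind=engine)

# the context manager closes the connection (ending its transaction),
# so no lock survives into drop_all
with engine.connect() as conn:
    df = pd.read_sql_query(sql=select(t1), con=conn)

meta.drop_all(bind=engine)
print(inspect(engine).get_table_names())  # []
```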
|
<python><mysql><sqlalchemy>
|
2023-05-02 23:58:04
| 1
| 25,296
|
langtang
|
76,159,612
| 3,380,902
|
GeoPandas GeoDataFrame polygon geometry - calculate area
|
<p>I have a geopandas GeoDataFrame with Polygon geometry and I am calculating the area of the polygon, however, I am not sure what the unit is for the area.</p>
<pre><code>import geopandas as gpd
from shapely.geometry import Polygon
# create two dummy polygons
poly1 = Polygon([(0,0), (1,0), (1,1), (0,1)])
poly2 = Polygon([(1,1), (2,1), (2,2), (1,2)])
# create a geopandas DataFrame with two rows
data = {'name': ['Polygon 1', 'Polygon 2'], 'geometry': [poly1, poly2]}
df = gpd.GeoDataFrame(data, crs='EPSG:4326')
</code></pre>
<p>I'd like to reproject the geometry and calculate the area in square meters or another linear unit, however, I have been having issues with re-projection and transforming geometries:</p>
<pre><code>df['geometry'][0].area
</code></pre>
<p>when I attempt to convert the <code>crs</code> to a <code>projected coordinate reference</code> system, I get <code>Polygon (Inf Inf....)</code>.</p>
<pre><code>df.to_crs('EPSG:32610', inplace=True)
df.crs
</code></pre>
<p>Expected output is to calculate the <code>area()</code> in <code>square meters</code> units.</p>
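<p>A sketch of the likely fix (an inference): the sample coordinates are lon/lat around (0–2, 0–2), which lies in UTM zone 31N, while EPSG:32610 is zone 10N (US west coast) — reprojecting far outside a zone's area of use is what yields the <code>Inf</code> coordinates. Projecting to a zone that actually covers the data gives areas in square metres:</p>

```python
import geopandas as gpd
from shapely.geometry import Polygon

poly1 = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
poly2 = Polygon([(1, 1), (2, 1), (2, 2), (1, 2)])
df = gpd.GeoDataFrame({"name": ["Polygon 1", "Polygon 2"],
                       "geometry": [poly1, poly2]}, crs="EPSG:4326")

# EPSG:32631 (UTM zone 31N) covers lon 0..6 deg E, northern hemisphere
projected = df.to_crs("EPSG:32631")
print(projected.geometry.area)  # square metres, ~1.2e10 per 1-degree square
```

<p>In recent geopandas, <code>df.estimate_utm_crs()</code> can pick a suitable projection automatically.</p>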
|
<python><geospatial><geopandas>
|
2023-05-02 23:52:24
| 1
| 2,022
|
kms
|
76,159,601
| 3,438,507
|
Pandas DataFrame dtype switches when row is inserted using .loc
|
<p><strong>Aim</strong><br />
I am trying to append a row to a pandas data frame with replacement, where maintaining data type (dtype) is crucial.</p>
<p><strong>Question</strong><br />
Why, when inserting <code>row</code> into <code>data_frame</code>, does the dtype switch to <code>object</code>, while neither object is of this dtype, and this dtype could not be inferred from the data?</p>
<p><strong>Data</strong><br />
Both <code>row</code> and <code>data_frame</code> are <code>pd.DataFrame</code> objects with the same columns, of the same dtypes.<br />
<code>data_frame</code> contains many rows with unique indices.<br />
<code>row</code> only contains a single entry with a unique index.</p>
<p><strong>Issue</strong><br />
To prevent duplication of indices I would like to use <code>.loc</code> as follows:</p>
<pre><code>data_frame.loc[idx] = row.loc[idx]
</code></pre>
<p>However, this changes the dtype of the entire data frame to <code>object</code>:</p>
<pre><code>>>> data_frame['column'].dtype
Int64Dtype()
>>> row['column'].dtype
Int64Dtype()
>>> data_frame.loc[idx] = row.loc[idx]
>>> data_frame['column'].dtype
dtype('O')
</code></pre>
<p><strong>Alternative</strong></p>
<p>I've since chosen to work with the following alternative, which effectively does the same thing but requires checking for and dropping duplicates instead of immediately overwriting them:</p>
<pre><code>data_frame = pd.concat([data_frame, row])
data_frame = data_frame[~data_frame.index.duplicated(keep='last')]
</code></pre>
<p><strong>PS</strong><br />
I have read documentation on <code>.loc</code> but I cannot find details on how dtypes are handled. I had presumed that the dtype of the master data frame would be maintained.<br />
I used to work with row as a <code>pd.Series</code> object, but I was not able to properly set the dtype here.</p>
|
<python><pandas><dataframe><concatenation>
|
2023-05-02 23:50:04
| 1
| 1,155
|
M.G.Poirot
|
76,159,540
| 6,326,429
|
Matrix comprising columns of identity
|
<p>I want to implement the following in python. Given a vector v of length N with each entry taking values in the set of integers 0,1,2,...,d, I want to create a new vector w of length d+1 that stores in the ith location the number of occurrences of the digit i in v. I assume knowledge of d, N and v.</p>
<p>As an example, suppose d = 3 and N=5, and v = (0,0,1,3,2). Then I want w[0] = 2, w[1] = 1, w[2] = 1, w[3] = 1, i.e. w = (2,1,1,1).</p>
<p>I have included my attempt below. My question is can I do it more efficiently using inbuilt numpy functions, and in particular, avoid the for loop over N? I will need to use this function multiple times in a long iterative simulation so I would like to optimise it.</p>
<p>My attempt:</p>
<pre><code>import numpy as np
d = 3
N = 5
v = np.array([0,0,1,3,2])
ID = np.eye(d+1)
w = np.zeros(d+1)
for i in range(0, d+1):
    w[i] = np.sum(v == i)
|
<python><numpy>
|
2023-05-02 23:37:13
| 1
| 901
|
sixtyTonneAngel
|
76,159,509
| 16,527,596
|
How to call an attribute from one class into another in python
|
<p>I have some code like:</p>
<pre><code>class Digital_signal_information:
    def __init__(self, signal_power: float, noise_power: float, n_bit_mod: int):
        self.signal_power = signal_power  # The attribute I want to use
        self.noise_power = noise_power
        self.n_bit_mod = n_bit_mod

class Line(Digital_signal_information):
    def __init__(self, loss_coefficient: float, length: int):
        self.loss_coefficient = loss_coefficient
        self.length = length

    def Noise_Generation(self):  # Here I need to use it
        noise_generation = 1e-9 * self.signal_power * self.length
        return noise_generation

    def SNR_Digital(self):  # Also here
        snr_digital = self.signal_power - self.noise_power - self.loss_coefficient
        return snr_digital
</code></pre>
<p>How can I use <code>self.signal_power</code> in the indicated <code>Line</code> methods?</p>
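One conventional approach (a sketch with assumed constructor arguments and values, not the asker's code) is to have <code>Line.__init__</code> call the parent initializer via <code>super().__init__</code>, so the inherited attributes exist on the instance:

```python
class Digital_signal_information:
    def __init__(self, signal_power: float, noise_power: float, n_bit_mod: int):
        self.signal_power = signal_power
        self.noise_power = noise_power
        self.n_bit_mod = n_bit_mod

class Line(Digital_signal_information):
    def __init__(self, loss_coefficient: float, length: int,
                 signal_power: float, noise_power: float, n_bit_mod: int):
        # Initialize the parent class so self.signal_power etc. are set.
        super().__init__(signal_power, noise_power, n_bit_mod)
        self.loss_coefficient = loss_coefficient
        self.length = length

    def Noise_Generation(self):
        return 1e-9 * self.signal_power * self.length

# The extra constructor arguments and the numbers are assumptions for illustration.
line = Line(loss_coefficient=0.2, length=80,
            signal_power=1.0, noise_power=0.001, n_bit_mod=2)
print(line.Noise_Generation())  # 8e-08
```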
|
<python><oop>
|
2023-05-02 23:29:39
| 1
| 385
|
Severjan Lici
|
76,159,471
| 11,146,276
|
Correct regex not matching anything
|
<pre><code>import re
pattern = r"/destination=(-?\d+\.\d+),(-?\d+\.\d+)/"
url = "https://www.google.com/maps/dir/?api=1&destination=12.1234567890,-12.1234567890"
result = re.search(pattern, url)
latitude = result.group(1)
longitude = result.group(2)
</code></pre>
<p>I expect to receive the latitude and longitude output, but Python says <code>AttributeError: 'NoneType' object has no attribute 'group'</code>. I tested my regex on regex101 and it works, but I have no clue why it is not working in Python.</p>
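A likely explanation (my assumption, not stated in the question): the leading and trailing slashes act as regex delimiters on regex101 and in Perl/JavaScript, but Python's <code>re</code> module matches them literally, and the URL has no <code>/</code> directly before <code>destination=</code>, so <code>re.search</code> returns <code>None</code>. A sketch without the delimiters:

```python
import re

# Same pattern, minus the Perl/JS-style delimiter slashes.
pattern = r"destination=(-?\d+\.\d+),(-?\d+\.\d+)"
url = "https://www.google.com/maps/dir/?api=1&destination=12.1234567890,-12.1234567890"

result = re.search(pattern, url)
latitude = result.group(1)
longitude = result.group(2)
print(latitude, longitude)  # 12.1234567890 -12.1234567890
```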
|
<python><regex>
|
2023-05-02 23:19:51
| 1
| 428
|
Firefly
|
76,159,434
| 1,204,749
|
Python3 scripts in "venv" imports matplotlib from Python2.7 rather Python3 site packages
|
<p>I’m having a deployment problem (an ImportError) with Python3 scripts running inside a “venv” virtual environment: one of the scripts, which imports “matplotlib” & “numpy”, seems to be incorrectly referring to python2.7 site-packages rather than python3 site-packages.</p>
<p>Background: The project was to migrate some Python2 scripts running on RHEL7 , to Python3 scripts and run them on RHEL8.</p>
<p>However, other teams have fallen behind, and they wanted the Python3 scripts moved back to the legacy RHEL7 environment to work alongside the old Python2 legacy scripts.</p>
<p>Because the base OS RHEL7 has both Python2.7 and Python3.6.8 , I created a “venv” and installed all the required packages inside “venv” via PIP.</p>
<p><strong>Problem:</strong> One of the Python3 scripts inside “venv” keeps importing "matplotlib" & "numpy" from Python2.7 site packages incorrectly, rather than the "matplotlib" & "numpy" from Python3 site packages.</p>
<p><strong>1) Note the Python3 script is self-contained and does not call any Python2 scripts.</strong></p>
<p><strong>2) I explicitly created “venv” pointing to Python3:</strong></p>
<pre><code>virtualenv --python=/usr/bin/python3 new_py3_env
</code></pre>
<p><strong>3) Furthermore, I explicitly inserted the required library paths at the top of the script:</strong></p>
<pre class="lang-py prettyprint-override"><code>sys.path.insert(2,python3_base_path.get_base_path()+'/python3_branchs/mickey-python3/mickey_LOCAL/new_py3_env/lib/python3.6/site-packages')
sys.path.insert(3,python3_base_path.get_base_path()+'/python3_branchs/mickey-python3/mickey_LOCAL/new_py3_env/lib/python3.6/site-packages/matplotlib')
import matplotlib.pyplot as plt
</code></pre>
<p><strong>4) Checking the Python version inside “venv” shows the correct Python3 version:</strong>
Python 3.6.8</p>
<p><strong>5) matplotlib & numpy are installed via pip3 in venv.</strong></p>
<p><strong>pip3 list shows:</strong></p>
<pre><code>numpy      1.9.2
matplotlib 1.3.1
</code></pre>
<p><strong>However, after running the script I still get:</strong></p>
<pre><code>[new_py3_env] buster@mickeyrh7python:/home/buster/mickey/python3_branchs/mickey-python3/mickey_LOCAL:<tcsh-165> python3 /home/buster/mickey/python3_branchs/mickey-python3/mickey_LOCAL/python-from-OPS/mickey/venue/analysis/systems/tools/bin/backup_process.py ~/python-from-OPS/mickey/test/lut_sync/backup_process_python3_test_case_1_current_output.txt
Traceback (most recent call last):
File "/home/buster/mickey/python3_branchs/mickey-python3/mickey_LOCAL/python-from-OPS/mickey/venue/analysis/systems/tools/bin/backup_process.py", line 28, in <module>
import matplotlib.pyplot as plt
File "/opt/shameless/pos/cse/1.6/lib64/python2.7/site-packages/matplotlib/__init__.py", line 156, in <module>
from matplotlib.cbook import is_string_like
File "/opt/shameless/pos/cse/1.6/lib64/python2.7/site-packages/matplotlib/cbook.py", line 28, in <module>
import numpy as np
File "/opt/shameless/pos/cse/1.6/lib64/python2.7/site-packages/numpy/__init__.py", line 170, in <module>
from . import add_newdocs
File "/opt/shameless/pos/cse/1.6/lib64/python2.7/site-packages/numpy/add_newdocs.py", line 13, in <module>
from numpy.lib import add_newdoc
File "/opt/shameless/pos/cse/1.6/lib64/python2.7/site-packages/numpy/lib/__init__.py", line 8, in <module>
from .type_check import *
File "/opt/shameless/pos/cse/1.6/lib64/python2.7/site-packages/numpy/lib/type_check.py", line 11, in <module>
import numpy.core.numeric as _nx
File "/opt/shameless/pos/cse/1.6/lib64/python2.7/site-packages/numpy/core/__init__.py", line 6, in <module>
from . import multiarray
ImportError: dynamic module does not define module export function (PyInit_multiarray)
</code></pre>
<p>How can I configure "venv" or Python3 to import Python3-installed packages rather than referring to Python2.7 versions?</p>
<p>Appreciated</p>
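As a diagnostic sketch (not a fix): printing the active interpreter and module search path from inside the venv shows whether a python2.7 site-packages directory is shadowing the venv's own site-packages:

```python
import sys

# The interpreter actually running the script.
print(sys.executable)

# The module search path, in resolution order. An entry such as
# .../lib64/python2.7/site-packages appearing before the venv's
# .../python3.6/site-packages would explain the mixed imports.
for p in sys.path:
    print(p)
```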
|
<python><python-3.x><numpy><matplotlib><python-venv>
|
2023-05-02 23:05:07
| 0
| 3,034
|
cyber101
|
76,159,412
| 2,769,240
|
Issue with Relative Import
|
<p>I am getting errors with both relative and absolute imports. This is something I have done many times in the past, and it used to work.</p>
<p>here's my folder structure:</p>
<pre><code>project
└── src
    ├── notebook.ipynb
    ├── test.py
    └── __init__.py
</code></pre>
<p>So basically within the src folder (which is made a package using <code>__init__.py</code>) I have two files: notebook.ipynb and test.py.</p>
<p>Now within notebook.ipynb, in a cell I do:</p>
<pre><code>from . import test
</code></pre>
<p>Gives the following error:</p>
<pre><code>ImportError Traceback (most recent call last)
Cell In[13], line 1
----> 1 from . import test
ImportError: attempted relative import with no known parent package
</code></pre>
<p>Even if I try absolute import:</p>
<pre><code>from src import test
</code></pre>
<p>I get:</p>
<pre><code>---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[14], line 1
----> 1 from src import test
ModuleNotFoundError: No module named 'src'
</code></pre>
<p>I have used similar import styles in past projects and they worked fine: within the same package I could import either relatively or absolutely, and it did not require me to set PYTHONPATH in my environment variables.</p>
<p>But this time it isn't working.</p>
<p>I am using Python 3.9.15 on macOS, installed within a conda env.</p>
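One common workaround (a sketch, assuming the notebook's working directory is <code>project/src</code>, which is an assumption about this setup) is to put the project root on <code>sys.path</code> before attempting the absolute import:

```python
import os
import sys

# Add the parent of src/ (the project root) to the module search path.
project_root = os.path.abspath("..")
if project_root not in sys.path:
    sys.path.insert(0, project_root)

# After this, an absolute import such as `from src import test`
# should resolve, provided src/ contains __init__.py.
```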
|
<python>
|
2023-05-02 22:59:01
| 1
| 7,580
|
Baktaawar
|
76,159,376
| 3,889,954
|
RequestsDependencyWarning: urllib3 (2.0.1) or chardet (5.1.0)/charset_normalizer (3.1.0) doesn't match a supported version
|
<p>My script was running fine before, but now it is giving me this error:</p>
<pre><code>RequestsDependencyWarning: urllib3 (2.0.1) or chardet (5.1.0)/charset_normalizer (3.1.0) doesn't match a supported version
</code></pre>
<p>I have read existing postings and tried the following, but they did not work:</p>
<pre><code>>pip3 install requests
</code></pre>
<p>and</p>
<pre><code>>pip3 install --upgrade requests
</code></pre>
<p>The first command produced the following:</p>
<pre><code>Collecting requests
Using cached requests-2.29.0-py3-none-any.whl (62 kB)
Collecting charset-normalizer<4,>=2 (from requests)
Using cached charset_normalizer-3.1.0-cp311-cp311-win_amd64.whl (96 kB)
Collecting idna<4,>=2.5 (from requests)
Using cached idna-3.4-py3-none-any.whl (61 kB)
Collecting urllib3<1.27,>=1.21.1 (from requests)
Using cached urllib3-1.26.15-py2.py3-none-any.whl (140 kB)
Collecting certifi>=2017.4.17 (from requests)
Using cached certifi-2022.12.7-py3-none-any.whl (155 kB)
Installing collected packages: urllib3, idna, charset-normalizer, certifi, requests
Successfully installed certifi-2022.12.7 charset-normalizer-3.1.0 idna-3.4 requests-2.29.0 urllib3-1.26.15
</code></pre>
<p>Then when I tried the second command, it produced this:</p>
<pre><code>Requirement already satisfied: requests in c:\users\xxxxx\appdata\local\programs\python\python311\lib\site-packages (2.29.0)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\xxxxx\appdata\local\programs\python\python311\lib\site-packages (from requests) (3.1.0)
Requirement already satisfied: idna<4,>=2.5 in c:\users\xxxxx\appdata\local\programs\python\python311\lib\site-packages (from requests) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\users\xxxxx\appdata\local\programs\python\python311\lib\site-packages (from requests) (1.26.15)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\xxxxx\appdata\local\programs\python\python311\lib\site-packages (from requests) (2022.12.7)
</code></pre>
<p>But I am still getting the warning. Am I missing something?</p>
<p>Thanks.</p>
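For illustration only: the warning comes from a version-range check that requests performs at import time. The ranges below are assumptions inferred from the pip output above (<code>urllib3&lt;1.27,&gt;=1.21.1</code>), not read from the requests source. A sketch of that kind of check:

```python
def parse(version):
    # Naive numeric parse of a dotted version string, e.g. "1.26.15" -> (1, 26, 15).
    return tuple(int(part) for part in version.split(".")[:3])

def supported(version, low, high):
    # True when low <= version < high under tuple comparison.
    return parse(low) <= parse(version) < parse(high)

# urllib3 2.0.1 falls outside the assumed declared range, so the
# RequestsDependencyWarning fires; the pinned 1.26.15 is inside it.
print(supported("2.0.1", "1.21.1", "1.27"))    # False
print(supported("1.26.15", "1.21.1", "1.27"))  # True
```

If the warning persists after the downgrade shown above, the script may be running under a different interpreter than the one pip3 installed into.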
|
<python>
|
2023-05-02 22:48:42
| 0
| 629
|
Cinji18
|