| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,903,821
| 4,245,462
|
Round Lists While Maintaining Sum Across ALL Lists
|
<p><a href="https://stackoverflow.com/questions/44737874/rounding-floats-while-maintaining-total-sum-equal/44740221#44740221">I have seen where we can round elements in a single list while maintaining overall list sum value.</a></p>
<p>However, what if I have multiple lists and am trying to round individual elements in each list while maintaining the sum across ALL of the lists?</p>
<p>For example, we are given:</p>
<pre><code>a = [4.1, 5.9, 10.3]
b = [3.2, 3.2, 4.5]
c = [10.1, 8.8, 7.4]
</code></pre>
<p>All lists together sum to 57.5</p>
<p>How would I round each element in each list <em>while</em> ensuring all lists together still sum to 57 or 58?</p>
<p>For my purposes, there are some relaxed constraints that may make this easier:</p>
<ul>
<li>The sum of all rounded lists together can be up to 5% different from the unrounded sum (for example, the 57.5 can be as little as 55 or as much as 60), but, ideally, the sum is +/- 1 (57 or 58).</li>
<li>It is OK if individual elements change by more than +/- 1 (but, ideally, each element changes by less than 1).</li>
</ul>
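<p>A minimal sketch of one way to do this (the function name is hypothetical): flatten the lists, floor every element, then hand the leftover units to the elements with the largest fractional parts (the largest-remainder method), so each element moves by less than 1 and the grand total lands on the rounded overall sum.</p>

```python
import math

def round_preserving_total(lists):
    # Flatten and floor everything
    flat = [x for lst in lists for x in lst]
    floors = [math.floor(x) for x in flat]
    # Distribute the remaining units to the entries with the
    # largest fractional parts (largest-remainder method)
    target = round(sum(flat))
    leftover = target - sum(floors)
    by_fraction = sorted(range(len(flat)),
                         key=lambda i: flat[i] - floors[i], reverse=True)
    for i in by_fraction[:leftover]:
        floors[i] += 1
    # Re-split into the original list shapes
    out, k = [], 0
    for lst in lists:
        out.append(floors[k:k + len(lst)])
        k += len(lst)
    return out
```

With the example lists this keeps every element within 1 of its original value, and the grand total equals the overall sum rounded to an integer.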
|
<python>
|
2023-04-01 01:26:03
| 1
| 1,609
|
NickBraunagel
|
75,903,571
| 10,308,255
|
How to remove None from Column containing Lists ONLY IF the List contains another string?
|
<p>I have a dataframe with a column that contains lists. These lists can contain strings, <code>None</code>, or, my personal favorite, <em>both</em>, e.g. <code>['ABCD', None]</code>.</p>
<p>I would really like to remove the <code>None</code> from the lists that are not empty, but keep it in the rows where the value is <code>[None]</code>. Ultimately, I will flatten these lists into strings, so in cases where the <code>None</code> is replaced, the cell will be empty, which is not helpful for filtering.</p>
<p>How can this be done?</p>
<p>Sample dataframe and sample code below:</p>
<pre><code>data = [[1, "['ABCD', None]"], [2, "['ABCD', None]"], [3, "[None]"]]
df = pd.DataFrame(data, columns=["Item", "String1"])
df["String1"] = df["String1"].apply(lambda x: x.replace("None", ""))
</code></pre>
<pre><code>
Item String1
0 1 ['ABCD', None]
1 2 ['ABCD', None]
2 3 [None]
</code></pre>
<p>what I am getting:</p>
<pre><code> Item String1
0 1 ['ABCD',]
1 2 ['ABCD',]
2 3 []
</code></pre>
<p>what I would like to see:</p>
<pre><code> Item String1
0 1 ['ABCD',]
1 2 ['ABCD',]
2 3 [None]
</code></pre>
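<p>For reference, a small sketch of one way to get that output, assuming (as in the sample code) the lists are stored as strings: only strip the <code>None</code> when the string is not exactly <code>[None]</code>. The helper name is made up.</p>

```python
import pandas as pd

data = [[1, "['ABCD', None]"], [2, "['ABCD', None]"], [3, "[None]"]]
df = pd.DataFrame(data, columns=["Item", "String1"])

def strip_none(s):
    # Keep '[None]' untouched; otherwise drop the None entry and its comma
    if s == "[None]":
        return s
    return s.replace(", None", "").replace("None, ", "")

df["String1"] = df["String1"].apply(strip_none)
```

This yields <code>['ABCD']</code> rather than <code>['ABCD',]</code>, which flattens to the same string.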
|
<python><pandas><string><list>
|
2023-04-01 00:02:19
| 2
| 781
|
user
|
75,903,541
| 23,512,643
|
run .bat files for python libraries
|
<p>I am on Windows and I am trying to install the libpostal library <a href="https://github.com/openvenues/libpostal" rel="nofollow noreferrer">https://github.com/openvenues/libpostal</a></p>
<p>I want to take care of everything (installations) using a .bat script and then try to run an example.</p>
<p>Here is a .bat script I found that checks whether each installation step is already done and otherwise continues until finished (I think I might already have some of these parts done before):</p>
<pre class="lang-bash prettyprint-override"><code>@echo off
:: Check if MSYS2 and MinGW are installed
where msys2 2>nul >nul
if %errorlevel% equ 0 (
echo MSYS2 is already installed. Use --force to reinstall.
) else (
:: Install MSYS2 and MinGW
choco install msys2
refreshenv
)
:: Check if MSYS2 packages are updated
pacman -Qu 2>nul >nul
if %errorlevel% equ 0 (
echo MSYS2 packages are already updated. Use --force to reinstall.
) else (
:: Update MSYS2 packages
pacman -Syu
)
:: Check if build dependencies are installed
pacman -Q autoconf automake curl git make libtool gcc mingw-w64-x86_64-gcc 2>nul >nul
if %errorlevel% equ 0 (
echo Build dependencies are already installed. Use --force to reinstall.
) else (
:: Install build dependencies
pacman -S autoconf automake curl git make libtool gcc mingw-w64-x86_64-gcc
)
:: Check if libpostal is cloned
if exist libpostal (
echo libpostal repository is already cloned. Use --force to reinstall.
) else (
:: Clone libpostal repository
git clone https://github.com/openvenues/libpostal
)
cd libpostal
:: Check if libpostal is built and installed
if exist C:/Program Files/libpostal/bin/libpostal.dll (
echo libpostal is already built and installed. Use --force to reinstall.
) else (
:: Build and install libpostal
cp -rf windows/* ./
./bootstrap.sh
./configure --datadir=C:/libpostal
make -j4
make install
)
:: Check if libpostal is added to PATH environment variable
setx /m PATH "%PATH%;C:\Program Files\libpostal\bin" 2>nul >nul
if %errorlevel% equ 0 (
echo libpostal is already added to PATH environment variable. Use --force to reinstall.
) else (
:: Add libpostal to PATH environment variable
setx PATH "%PATH%;C:\Program Files\libpostal\bin"
)
:: Test libpostal installation
libpostal "100 S Broad St, Philadelphia, PA"
pause
</code></pre>
<p>I save this code in Notepad as <code>bat_script.bat</code>, then right-click the .bat icon, choose &quot;Run as administrator&quot;, and get the following screenshot before the command prompt closes:</p>
<p><a href="https://i.sstatic.net/XpU5E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XpU5E.png" alt="enter image description here" /></a></p>
<p>Can someone please help?</p>
|
<python><windows><batch-file>
|
2023-03-31 23:47:55
| 0
| 6,799
|
stats_noob
|
75,903,449
| 19,051,091
|
How to predict custom image with PyTorch?
|
<p>Good evening everyone,
I'm trying to build a multi-class image classifier with 4 classes using a custom Dataset.
Here is my model; can someone please help me identify what is wrong, because it always predicts the class &quot;left&quot; for all images?</p>
<pre><code>class TingVGG(nn.Module):
def __init__(self, input_shape: int, hidden_units: int, output_shape: int) -> None:
super().__init__()
self.conv_block1 = nn.Sequential(nn.Conv2d(in_channels=input_shape,out_channels=hidden_units,kernel_size=3,stride=1,padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=hidden_units, out_channels=hidden_units, kernel_size=3,stride=1,padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
)
self.conv_block2 = nn.Sequential(nn.Conv2d(in_channels=hidden_units,out_channels=hidden_units,kernel_size=3,stride=1,padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=hidden_units, out_channels=hidden_units, kernel_size=3,stride=1,padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
)
self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(in_features=hidden_units*32*32 ,out_features=output_shape))
def forward(self, x: torch.Tensor):
x = self.conv_block1(x)
#print(x.shape)
x = self.conv_block2(x)
#print(x.shape)
x = self.classifier(x)
#print(x.shape)
return x
</code></pre>
<p>And that's my train and test function:</p>
<pre><code>def train_step(model: torch.nn.Module,
dataloader: DataLoader,
loss_fn: torch.nn.Module,
optimizer: torch.optim.Optimizer,
device=device
):
# put the model into the train
model.train()
# Setup train loss and train accuracy values
train_loss, train_acc = 0, 0
# Loop through DataLoader and data batches
for batch, (X, y) in enumerate(dataloader):
# Send data to the target device
X, y = X.to(device), y.to(device)
# 1. forward pass
y_pred = model(X) #output model logits
# 2. Calculate the loss
loss = loss_fn(y_pred, y)
train_loss += loss.item()
# 3. Optimize zero grad
optimizer.zero_grad()
# 4.Loss backward
loss.backward()
# 5.Optimzer step
optimizer.step()
# Calculate the accuracy metric
y_pred_class = torch.argmax(torch.softmax(y_pred, dim=1), dim=1)
train_acc += (y_pred_class ==y).sum().item()/len(y_pred)
# Adjust metrics to get average loss and accuracy per batch
train_loss = train_loss / len(dataloader)
train_acc = train_acc / len(dataloader)
return train_loss, train_acc
def test_step(model: torch.nn.Module,
dataloader: DataLoader,
loss_fn: torch.nn.Module,
device=device
):
# Put the model in eval mode
model.eval()
# Setup train loss and train accuracy values
test_loss, test_acc = 0, 0
# Turn on inference mode
with torch.inference_mode():
# Loop through DataLoader Batches
for batch, (X, y) in enumerate(dataloader):
# Send data to the target device
X, y = X.to(device), y.to(device)
# 1. forward pass
test_pred_logits = model(X)
# 2. Calculate the loss
loss = loss_fn(test_pred_logits, y)
test_loss += loss.item()
# 3. Calculate the accuracy
test_pred_labels = test_pred_logits.argmax(dim=1)
test_acc += ((test_pred_labels == y).sum().item() / len(test_pred_labels))
# Adjust metrics to get average loss and accuracy per batch
test_loss = test_loss / len(dataloader)
test_acc = test_acc / len(dataloader)
return test_loss, test_acc
def train(model: torch.nn.Module,
train_dataloader: DataLoader,
test_dataloader: DataLoader,
optimizer: torch.optim.Optimizer,
loss_fn: torch.nn.Module = nn.CrossEntropyLoss(),
epochs: int = 10,
device = device):
# 2. Create empty results dictionary
results = {"train_loss": [],
"train_acc": [],
"test_loss": [],
"test_acc": []}
# 3. Loop through training and testing steps for a number of epochs
for epoch in tqdm(range(epochs)):
train_loss, train_acc = train_step(model= model, dataloader= train_dataloader,loss_fn=loss_fn, optimizer=optimizer, device=device)
test_loss, test_acc = test_step(model= model, dataloader=test_dataloader,loss_fn=loss_fn, device=device)
# 4. Print out what's happening
print(f"Epoch: {epoch +1} | "
f"train_loss: {train_loss:.4f} | "
f"train_acc: {train_acc:.4f} | "
f"test_loss: {test_loss:.4f} | "
f"test_acc: {test_acc:.4f}")
# 5. Update the results dictionary
results["train_loss"].append(train_loss)
results["train_acc"].append(train_acc)
results["test_loss"].append(test_loss)
results["test_acc"].append(test_acc)
# 6. return the results at the end of the epoches
return results
# Set random seed
torch.manual_seed(42)
torch.cuda.manual_seed(42)
# Set number of epoches
NUM_EPOCHS = 20
# Create and initialize of TinyVGG
model_0 = TingVGG(input_shape=1, # Number of channels in the input image (c, h, w) -> 3
hidden_units=20,
output_shape=len(train_data.classes)).to(device)
# Setup the loss function and optimizer
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(params= model_0.parameters(),
lr= 0.001)
# Start the timer
start_time = timer()
# Train model 0
model_0_results = train(model= model_0,
train_dataloader= train_dataloader_simple,
test_dataloader= test_dataloader_simple,
optimizer= optimizer,
loss_fn= loss_fn,
epochs= NUM_EPOCHS
)
# End the timer and print the results
end_time = timer()
print(f"Total training time: {end_time - start_time: .3f} seconds")
</code></pre>
<p>And I saved and loaded the model like this:</p>
<pre><code>PATH='Model_3.pth'
torch.save(model_0, PATH)
model = torch.load('Model_3.pth', map_location='cpu')
</code></pre>
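<p>As an aside (not necessarily the cause of the wrong predictions): the PyTorch docs recommend saving the <code>state_dict</code> rather than pickling the whole module, which avoids surprises when the class definition changes. A minimal sketch with a stand-in model:</p>

```python
import torch
from torch import nn

# Stand-in for the model in the question
model = nn.Linear(4, 2)

# Save only the weights, not the pickled module object
torch.save(model.state_dict(), "model.pth")

# Re-create the architecture, then load the weights into it
restored = nn.Linear(4, 2)
restored.load_state_dict(torch.load("model.pth", map_location="cpu"))
restored.eval()  # switch to eval mode before inference
```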
<p>And this is how I make the prediction:</p>
<pre><code>images_path = r'Rotation_Images\train\180'
class_names = ['180', 'left', 'right', 'zero']
for img in os.listdir(images_path):
start_time = timer()
# Reads a file using pillow
PIL_image = PIL.Image.open(images_path + "/" + img)
# Convert to numpy array
numpy_array = np.array(PIL_image)
# Convert to PyTorch tensor
tensor = torch.from_numpy(numpy_array)
tensor = tensor.unsqueeze(0).type(torch.float32)
tensor_image = tensor / 255.
    # Create transform pipeline to resize image
custom_image_transform = transforms.Compose([
transforms.Resize((128, 128)),
])
# Transform target image
custom_image_transformed = custom_image_transform(tensor_image)
model.eval()
with torch.inference_mode():
# Add an extra dimension to image
custom_image_transformed_with_batch_size = custom_image_transformed.unsqueeze(dim=0)
# Make a prediction on image with an extra dimension
custom_image_pred = model(custom_image_transformed_with_batch_size.to(device))
# Convert logits -> prediction probabilities (using torch.softmax() for multi-class classification)
custom_image_pred_probs = torch.softmax(custom_image_pred, dim=1)
# Convert prediction probabilities -> prediction labels
custom_image_pred_label = torch.argmax(custom_image_pred_probs, dim=1)
# Find the predicted label
custom_image_pred_class = class_names[custom_image_pred_label.cpu()]
</code></pre>
<p>Sorry if I seem stupid; it's my second week studying PyTorch.</p>
|
<python><pytorch>
|
2023-03-31 23:25:06
| 0
| 307
|
Emad Younan
|
75,903,218
| 3,380,902
|
geojson file doesn't plot points on mapbox in jupyter notebook
|
<p>I am running a Jupyter notebook on Databricks and attempting to render a map. I tried the example from the documentation as a test, and it doesn't plot the points.</p>
<pre><code>import mapboxgl
from mapboxgl.viz import *
from mapboxgl.utils import df_to_geojson
import matplotlib.pyplot as plt
import pandas as pd
from IPython.display import display, HTML
display(HTML("<script src='https://cdnjs.cloudflare.com/ajax/libs/leaflet/1.7.1/leaflet.js'></script>"))
# Load data from sample csv
data_url = 'https://raw.githubusercontent.com/mapbox/mapboxgl-jupyter/master/examples/data/points.csv'
df = pd.read_csv(data_url)
# Must be a public token, starting with `pk`
token = mapbox_token
# Create a geojson file export from a Pandas dataframe
df_to_geojson(df, filename='points.geojson',
properties=['Avg Medicare Payments', 'Avg Covered Charges', 'date'],
lat='lat', lon='lon', precision=3)
# Generate data breaks and color stops from colorBrewer
color_breaks = [0,10,100,1000,10000]
color_stops = create_color_stops(color_breaks, colors='YlGnBu')
# Create the viz from the dataframe
viz = CircleViz('points.geojson',
access_token=token,
height='400px',
color_property = "Avg Medicare Payments",
color_stops = color_stops,
center = (-95, 40),
zoom = 3,
below_layer = 'waterway-label'
)
viz.show()
</code></pre>
<p><a href="https://i.sstatic.net/yDuJw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yDuJw.png" alt="mapbox" /></a></p>
|
<python><jupyter-notebook><mapbox><mapbox-gl>
|
2023-03-31 22:33:33
| 0
| 2,022
|
kms
|
75,903,174
| 4,045,275
|
conda installed an older, unusable version of mamba and I can't update it
|
<h2>The issue</h2>
<p>I have Anaconda on two separate Windows PCs. On both, I installed mamba with</p>
<pre><code>conda install mamba -n base -c conda-forge
</code></pre>
<p>On one, it worked and installed mamba 1.3.1. On the other, it installed mamba 0.1.0, which doesn't work; e.g. if I try
<code>conda update pandas</code>
I get an error about &quot;no module named conda._vendor.auxlib&quot;, which, searching online, I understand was an error in older, unstable versions of mamba.</p>
<h2>What I have tried</h2>
<p><code>conda update mamba</code> tells me all requested packages are already installed</p>
<p><code>mamba update mamba</code> doesn't work (no module named etc etc)</p>
<p><code>conda install mamba=1.3.1 -n base -c conda-forge</code> to force the installation of version 1.3.1, doesn't work: it fails with initial frozen solve, retries with flexible solve, but after more than 3 hours it still did nothing</p>
<p>I then tried the other 3 subfolders of conda-forge mentioned here: <a href="https://anaconda.org/conda-forge/mamba" rel="nofollow noreferrer">https://anaconda.org/conda-forge/mamba</a> (without really understanding the difference), but none worked: the command was processed without errors, but mamba was not updated, as if no version &gt; 0.1.0 existed.</p>
<p>I also tried <code>conda update conda-build</code>, which was mentioned here as a solution <a href="https://github.com/mamba-org/mamba/issues/1583" rel="nofollow noreferrer">https://github.com/mamba-org/mamba/issues/1583</a>, but it didn't work.</p>
<h2>What I have searched</h2>
<p>This bug report mentions an issue similar to mine: <a href="https://github.com/mamba-org/mamba/issues/1583" rel="nofollow noreferrer">https://github.com/mamba-org/mamba/issues/1583</a>; most posters (but not all) managed to fix it by updating conda-build. That solution didn't work for me.</p>
|
<python><anaconda><conda><mamba>
|
2023-03-31 22:21:23
| 0
| 9,100
|
Pythonista anonymous
|
75,903,154
| 4,386,541
|
How to perform a cold reset on a smart card reader in python?
|
<p>I am brand new to Python, so I hope I am providing the right information.
I have a Python script which imports the following:</p>
<pre><code>from smartcard.CardType import AnyCardType
from smartcard.CardRequest import CardRequest
from smartcard.util import toHexString
from smartcard.scard import (SCARD_UNPOWER_CARD, SCARD_RESET_CARD, SCARD_SHARE_EXCLUSIVE)
from smartcard.Exceptions import CardConnectionException
from smartcard.System import readers
</code></pre>
<p>For brevity, I am going to list only the lines of code that I think matter.
Keep in mind the entire script runs perfectly fine, so all of my actual code is working in case I don't include a line below. But at some point my script gets an error reading the SIM card that is in the reader, and this is actually normal: it simply means the SIM card needs to &quot;reboot&quot;, so if I manually pull the SIM card out of the reader and reinsert it, everything continues to work as normal.
What I am trying to do is figure out the proper code to simulate manually removing and reinserting the card, which I believe is called a cold reset.</p>
<pre><code>card_readers = readers()
reader = card_readers[0]
self.connection = reader.createConnection()
#self.connection.connect(disposition=SCARD_UNPOWER_CARD)
self.connection.connect(mode=SCARD_SHARE_EXCLUSIVE) #, disposition=SCARD_RESET_CARD)
... do some stuff in between
self.connection.disconnect()
</code></pre>
<p>On the <code>self.connection.connect()</code> line I have tried many different ways to pass in either <code>SCARD_RESET_CARD</code> or <code>SCARD_UNPOWER_CARD</code>, but no matter what I do it does not seem to work. Any advice on exactly how to force the reader to do a cold reset?</p>
|
<python><smartcard>
|
2023-03-31 22:17:51
| 1
| 901
|
Wayne Fulcher
|
75,903,092
| 6,197,439
|
Matplotlib show|keep visible annotation line that disappears during pan/zoom?
|
<p>Consider this code:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib as mpl
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.subplots()
ax.plot([1, 2, 3, 4], [0, 0.5, 1, 0.2])
ax.annotate("", xy=(0,0), xytext=(1,1), arrowprops=dict(facecolor='black'))
ax.set_ylabel('some numbers')
ax.set_xlim([0,5])
ax.set_ylim([0,2])
plt.show()
</code></pre>
<p>It produces this:</p>
<p><a href="https://i.sstatic.net/mwVzy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mwVzy.png" alt="matplotlib annotation line visible" /></a></p>
<p>I drag the plot down a little bit with the mouse - and the line is completely gone, even though, judging by the previous image, part of it should still be visible:</p>
<p><a href="https://i.sstatic.net/HCSJm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HCSJm.png" alt="matplotlib annotation line not visible" /></a></p>
<p>Apparently, matplotlib thinks that if the annotation line is not fully visible in the window - that is, if it is clipped - then it should be removed completely.</p>
<p>I think that if an annotation line is clipped, then it should be shown to the extent it is visible.</p>
<p>How to get matplotlib to do what I want, and not what it wants?</p>
<hr />
<p>EDIT: I just found out about <code>annotation_clip</code>; if I set it to True, the behavior is the same as above; if I set it to False, then I can get:</p>
<p><a href="https://i.sstatic.net/QUzve.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QUzve.png" alt="matplotlib arrow clip false" /></a></p>
<p>But that is not what I want either - what I want is this (photoshopped):</p>
<p><a href="https://i.sstatic.net/m8Kp0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/m8Kp0.png" alt="matplotlib annot arrow that I want" /></a></p>
<p>How on earth do I get this - call it clipped, non-clipped, half-clipped, whatever it is?</p>
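<p>For concreteness, one approach sometimes suggested for exactly this: keep <code>annotation_clip=False</code> so the annotation is not dropped wholesale, but clip the arrow artist itself to the axes box, so only the part outside the axes disappears. This sketch assumes the arrow is a <code>FancyArrowPatch</code> (i.e. an <code>arrowstyle</code> is given in <code>arrowprops</code>):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.subplots()
ax.plot([1, 2, 3, 4], [0, 0.5, 1, 0.2])
ann = ax.annotate("", xy=(0, 0), xytext=(1, 1),
                  arrowprops=dict(arrowstyle="-|>", facecolor="black"),
                  annotation_clip=False)  # don't hide the whole annotation
# Clip the arrow patch to the axes area instead of removing it entirely
ann.arrow_patch.set_clip_box(ax.bbox)
ax.set_xlim([0, 5])
ax.set_ylim([0, 2])
fig.canvas.draw()
```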
|
<python><matplotlib><clipping>
|
2023-03-31 22:08:07
| 1
| 5,938
|
sdbbs
|
75,902,908
| 3,137,388
|
fabric -- can't run mv command
|
<p>We need to transfer some files to a remote machine using Fabric. The requirement is that while a file is being transferred, the first character of the filename on the remote machine should be '.' (basically a hidden file). Once the transfer is done, we need to rename the file (I am using the mv command for this). I am able to transfer the file, but I can't rename it on the remote machine.</p>
<p>For example, I need to transfer one file to the remote machine as <strong>.a.txt</strong>. Once the transfer is done, I need to rename <strong>.a.txt</strong> to <strong>a.txt</strong>. I used the code below, but it throws the error <strong>'Could not contact remote machine : x.x.x.x error : Encountered a bad command exit code!'</strong></p>
<p>Below is the code I used.</p>
<pre><code>for host, user_name in input.remote.remote_hosts.items():
try:
host_name = user_name + '@' + host
fabric_connection = Connection(host = host_name, connect_kwargs = {'key_filename' : tmp_id_rsa, 'timeout' : 10})
fabric_connection.put(exchange_file, remote_op, {'timeout' : 10})
network_command = 'mv ' + remote_op + ' ' + remote_op[1:]
fabric_connection.run(network_command)
finally:
fabric_connection.close()
</code></pre>
<p>Below is the error</p>
<pre><code>ERROR root:test.py:300 Could not contact remote machine : x.x.x.x error : Encountered a bad command exit code!
Exit code: 1
</code></pre>
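<p>One thing I noticed while debugging: <code>remote_op[1:]</code> strips the first character of the whole path, not of the filename, so if <code>remote_op</code> is an absolute path like <code>/data/.a.txt</code>, the mv target becomes <code>data/.a.txt</code>. A sketch of a helper (hypothetical name) that un-hides only the basename:</p>

```python
import posixpath

def visible_name(remote_path):
    # Strip the leading '.' from the basename only, keeping the directory
    directory, name = posixpath.split(remote_path)
    if name.startswith("."):
        name = name[1:]
    return posixpath.join(directory, name)

# e.g. '/data/.a.txt' -> '/data/a.txt', '.a.txt' -> 'a.txt'
```

The mv command can then be built as <code>'mv ' + remote_op + ' ' + visible_name(remote_op)</code>.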
<p>Can anyone please let me know how to fix this issue?</p>
|
<python><fabric>
|
2023-03-31 21:34:31
| 0
| 5,396
|
kadina
|
75,902,831
| 3,137,388
|
Is it possible to pass multiple names to logging.getLogger()?
|
<p>We wrote a Python script that involves AWS (getting a file from an S3 bucket, then transferring the processed file to a remote machine using fabric / paramiko). When I turn on logging at DEBUG level, many logs are printed to the console. I just want my own Python file to log at DEBUG level, while external modules like the AWS libraries and Paramiko stay at INFO level. Below is the code I used for that.</p>
<pre><code>logging.getLogger().setLevel(logging.DEBUG)
logging.getLogger('boto').setLevel(logging.INFO)
logging.getLogger('boto3').setLevel(logging.INFO)
logging.getLogger('botocore').setLevel(logging.INFO)
logging.getLogger('paramiko').setLevel(logging.INFO)
logging.getLogger('invoke').setLevel(logging.INFO)
logging.getLogger('urllib3').setLevel(logging.INFO)
logging.getLogger('fabric').setLevel(logging.INFO)
logging.getLogger('s3transfer').setLevel(logging.INFO)
</code></pre>
<p>It is working fine.</p>
<ol>
<li><p>But is there any way to pass multiple names to getLogger, something
like</p>
<pre><code>logging.getLogger('boto|boto3').setLevel(logging.INFO)
</code></pre>
</li>
<li><p>Also, for all Amazon services, can we set the log level to INFO for all of them in one line, instead
of setting log levels for boto, boto3, and botocore individually?</p>
</li>
</ol>
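<p>For what it's worth, the closest I have found is a plain loop (as far as I can tell there is no multi-name form of <code>getLogger</code>). Note that logger names are hierarchical, so setting <code>botocore</code> also covers its children such as <code>botocore.hooks</code>:</p>

```python
import logging

logging.getLogger().setLevel(logging.DEBUG)

# One loop instead of eight near-identical lines
NOISY = ["boto", "boto3", "botocore", "paramiko",
         "invoke", "urllib3", "fabric", "s3transfer"]
for name in NOISY:
    logging.getLogger(name).setLevel(logging.INFO)

# Child loggers inherit from their dotted parents, so e.g.
# 'botocore.hooks' is already covered by the 'botocore' entry.
```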
|
<python><amazon-web-services>
|
2023-03-31 21:22:17
| 1
| 5,396
|
kadina
|
75,902,704
| 280,002
|
localstack 0.12.18 fails to install
|
<p>I am trying to install localstack <code>0.12.18</code> on a Windows 10 Enterprise machine.</p>
<p>I'm running <code>Python 3.9.0</code> and <code>pip 23.0.1</code>.</p>
<p>I have downloaded the <code>.tar.gz</code> file directly from pypi.org and installing it via:</p>
<p><code>pip install C:\Users\me\Downloads\localstack-0.12.18.tar.gz --trusted-host pypi.org --trusted-host files.pythonhosted.org</code></p>
<p>This is the output:</p>
<pre><code>pip install C:\Users\me\Downloads\localstack-0.12.18.tar.gz --trusted-host pypi.org --trusted-host files.pythonhosted.org
Processing c:\users\me\downloads\localstack-0.12.18.tar.gz
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [19 lines of output]
Traceback (most recent call last):
File "C:\Anaconda\envs\python390\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Anaconda\envs\python390\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "C:\Anaconda\envs\python390\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "C:\Users\me\AppData\Local\Temp\pip-build-env-hvz55xsa\overlay\Lib\site-packages\setuptools\build_meta.py", line 338, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
File "C:\Users\me\AppData\Local\Temp\pip-build-env-hvz55xsa\overlay\Lib\site-packages\setuptools\build_meta.py", line 320, in _get_build_requires
self.run_setup()
File "C:\Users\me\AppData\Local\Temp\pip-build-env-hvz55xsa\overlay\Lib\site-packages\setuptools\build_meta.py", line 484, in run_setup
super(_BuildMetaLegacyBackend,
File "C:\Users\me\AppData\Local\Temp\pip-build-env-hvz55xsa\overlay\Lib\site-packages\setuptools\build_meta.py", line 335, in run_setup
exec(code, locals())
File "<string>", line 54, in <module>
File "C:\Anaconda\envs\python390\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x8f in position 2184: character maps to <undefined>
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
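<p>For context, the last frame of the traceback shows <code>setup.py</code> being read with <code>cp1252</code> (the Windows default locale encoding), where byte <code>0x8f</code> has no mapping; a commonly suggested workaround is enabling Python's UTF-8 mode before running pip. A minimal reproduction of the decode failure (the pip commands in the comment are the usual suggestion, not something I have verified for this package):</p>

```python
# Byte 0x8f (named in the traceback) is undefined in cp1252,
# which is why reading the sdist's setup.py fails on Windows.
raw = b"\x8f"
try:
    raw.decode("cp1252")
    decoded = True
except UnicodeDecodeError:
    decoded = False
print(decoded)  # False

# Workaround often suggested, run before pip in the same console:
#   set PYTHONUTF8=1
#   pip install C:\Users\me\Downloads\localstack-0.12.18.tar.gz
```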
<p>Any ideas on what the solution might be?</p>
<p>Thank you</p>
|
<python><pip><anaconda><setuptools><localstack>
|
2023-03-31 21:00:17
| 1
| 1,301
|
alessandro ferrucci
|
75,902,682
| 19,325,656
|
Use orjson/ujson in DRF
|
<p>Hi all, I'm looking into faster ways of returning JSON data. I know that FastAPI uses</p>
<pre><code>ujson
orjson
</code></pre>
<p>Is there a way to replace the standard DRF JSON serialization with orjson?</p>
<p><strong>edit</strong>:
Let's say I have this viewset:</p>
<pre><code>class ProfileViewSet(viewsets.ModelViewSet):
permission_classes = [ProfilePermission]
serializer_class = ProfileSerializer
pagination_class = ProfilePagination
filterset_class = ProfileFilter
def get_queryset(self):
current_id = self.request.user.id
blocked_users = Blocked.objects.filter(user__id=current_id).values('blocked')
users_taken_action = Action.objects.filter(user__id=current_id).values('action_user')
users_not_taken_action = Profile.objects.exclude(user__in=users_taken_action) \
.exclude(user__in=blocked_users).exclude(user__id=current_id)
return users_not_taken_action
</code></pre>
<p>and this serializer</p>
<pre><code>class ProfileSerializer(WritableNestedModelSerializer, serializers.ModelSerializer):
fav_data = serializers.SerializerMethodField(read_only=True)
images = serializers.SerializerMethodField(read_only=True)
user = UserSerializer(read_only=True)
place = PlaceSerializer(read_only=True)
def get_images(self, obj):
return PhotoSerializer(UserPhoto.objects.filter(user=obj.user.id), many=True).data
def get_fav_data(self, obj):
fav_data = Favorite.objects.get(user=obj.user.id).data
return {
"id": fav_data.id,
"name": fav_data.name,
"category": fav_data.category,
}
class Meta:
model = Profile
fields = '__all__'
depth = 1
</code></pre>
<p>How can I convert it to orjson/ujson?</p>
|
<python><django><django-rest-framework><django-serializer>
|
2023-03-31 20:54:56
| 2
| 471
|
rafaelHTML
|
75,902,554
| 4,045,275
|
conda wants to remove half my packages if I try to update just one (on a fresh Anaconda installation)
|
<h1>The issue</h1>
<p>Today (31-Mar-2023) I downloaded Anaconda, uninstalled an older version and installed today's version on a Windows PC (private PC, no corporate firewall).</p>
<p>As suggested in another question <a href="https://stackoverflow.com/questions/75901180/conda-very-slow-and-downloading-only-from-conda-forge">Conda very slow and downloading only from conda forge?</a> I removed the .condarc file, which must have been a leftover from a previous installation, as it contained conda-forge as the first repository.</p>
<p>I was testing if conda works properly, and a simple</p>
<pre><code>conda update pandas
</code></pre>
<p>returned the oddest result ever: conda wanted to remove many packages, including numba and matplotlib. I report the exact text at the bottom.</p>
<p>Why should updating pandas result in the removal of all those packages?</p>
<h1>What I have tried</h1>
<p>I installed mamba, and a simple</p>
<pre><code>mamba update pandas
</code></pre>
<p>tells me that <code>All requested packages already installed</code> - as one would expect having installed the whole of Anaconda just today</p>
<h1>What I have researched</h1>
<p>I have found a similar issue reported here <a href="https://github.com/conda/conda/issues/8842" rel="nofollow noreferrer">https://github.com/conda/conda/issues/8842</a> and <a href="https://stackoverflow.com/questions/57844377/updating-a-specific-module-with-conda-removes-numerous-packages">Updating a specific module with Conda removes numerous packages</a>. However, if I understand correctly, in those cases the bug was caused by the fact that, over time, different versions of different packages had been installed, deviating from the Anaconda installation. My case is different, because I downloaded Anaconda just today and I haven't changed or updated any of those packages (the only change was installing mamba). I have been using Anaconda for 9 years and never encountered this.</p>
<h1>The exact output from conda</h1>
<pre><code>The following packages will be REMOVED:
alabaster-0.7.12-pyhd3eb1b0_0
anyio-3.5.0-py310haa95532_0
appdirs-1.4.4-pyhd3eb1b0_0
argon2-cffi-21.3.0-pyhd3eb1b0_0
argon2-cffi-bindings-21.2.0-py310h2bbff1b_0
arrow-1.2.3-py310haa95532_1
astroid-2.14.2-py310haa95532_0
astropy-5.1-py310h9128911_0
asttokens-2.0.5-pyhd3eb1b0_0
atomicwrites-1.4.0-py_0
automat-20.2.0-py_0
autopep8-1.6.0-pyhd3eb1b0_1
babel-2.11.0-py310haa95532_0
backcall-0.2.0-pyhd3eb1b0_0
bcrypt-3.2.0-py310h2bbff1b_1
binaryornot-0.4.4-pyhd3eb1b0_1
black-22.6.0-py310haa95532_0
bleach-4.1.0-pyhd3eb1b0_0
blosc-1.21.3-h6c2663c_0
bokeh-2.4.3-py310haa95532_0
brotli-1.0.9-h2bbff1b_7
brotli-bin-1.0.9-h2bbff1b_7
cfitsio-3.470-h2bbff1b_7
charls-2.2.0-h6c2663c_0
cloudpickle-2.0.0-pyhd3eb1b0_0
colorcet-3.0.1-py310haa95532_0
comm-0.1.2-py310haa95532_0
constantly-15.1.0-py310haa95532_0
contourpy-1.0.5-py310h59b6b97_0
cookiecutter-1.7.3-pyhd3eb1b0_0
cssselect-1.1.0-pyhd3eb1b0_0
curl-7.87.0-h2bbff1b_0
cycler-0.11.0-pyhd3eb1b0_0
cytoolz-0.12.0-py310h2bbff1b_0
daal4py-2023.0.2-py310hf497b98_0
dal-2023.0.1-h59b6b97_26646
dask-2022.7.0-py310haa95532_0
dask-core-2022.7.0-py310haa95532_0
datashader-0.14.4-py310haa95532_0
datashape-0.5.4-py310haa95532_1
debugpy-1.5.1-py310hd77b12b_0
decorator-5.1.1-pyhd3eb1b0_0
diff-match-patch-20200713-pyhd3eb1b0_0
dill-0.3.6-py310haa95532_0
distributed-2022.7.0-py310haa95532_0
docstring-to-markdown-0.11-py310haa95532_0
docutils-0.18.1-py310haa95532_3
entrypoints-0.4-py310haa95532_0
et_xmlfile-1.1.0-py310haa95532_0
executing-0.8.3-pyhd3eb1b0_0
flake8-6.0.0-py310haa95532_0
flask-2.2.2-py310haa95532_0
flit-core-3.6.0-pyhd3eb1b0_0
fonttools-4.25.0-pyhd3eb1b0_0
fsspec-2022.11.0-py310haa95532_0
gensim-4.3.0-py310h4ed8f06_0
greenlet-2.0.1-py310hd77b12b_0
h5py-3.7.0-py310hfc34f40_0
hdf5-1.10.6-h1756f20_1
heapdict-1.0.1-pyhd3eb1b0_0
holoviews-1.15.4-py310haa95532_0
huggingface_hub-0.10.1-py310haa95532_0
hvplot-0.8.2-py310haa95532_0
hyperlink-21.0.0-pyhd3eb1b0_0
icc_rt-2022.1.0-h6049295_2
imagecodecs-2021.8.26-py310h4c966c4_2
imageio-2.26.0-py310haa95532_0
imagesize-1.4.1-py310haa95532_0
imbalanced-learn-0.10.1-py310haa95532_0
importlib-metadata-4.11.3-py310haa95532_0
importlib_metadata-4.11.3-hd3eb1b0_0
incremental-21.3.0-pyhd3eb1b0_0
inflection-0.5.1-py310haa95532_0
iniconfig-1.1.1-pyhd3eb1b0_0
intake-0.6.7-py310haa95532_0
intervaltree-3.1.0-pyhd3eb1b0_0
ipykernel-6.19.2-py310h9909e9c_0
ipython-8.10.0-py310haa95532_0
ipython_genutils-0.2.0-pyhd3eb1b0_1
ipywidgets-7.6.5-pyhd3eb1b0_1
isort-5.9.3-pyhd3eb1b0_0
itemadapter-0.3.0-pyhd3eb1b0_0
itemloaders-1.0.4-pyhd3eb1b0_1
itsdangerous-2.0.1-pyhd3eb1b0_0
jedi-0.18.1-py310haa95532_1
jellyfish-0.9.0-py310h2bbff1b_0
jinja2-time-0.2.0-pyhd3eb1b0_3
jmespath-0.10.0-pyhd3eb1b0_0
joblib-1.1.1-py310haa95532_0
jq-1.6-haa95532_1
json5-0.9.6-pyhd3eb1b0_0
jupyter-1.0.0-py310haa95532_8
jupyter_client-7.3.4-py310haa95532_0
jupyter_console-6.6.2-py310haa95532_0
jupyter_server-1.23.4-py310haa95532_0
jupyterlab-3.5.3-py310haa95532_0
jupyterlab_pygments-0.1.2-py_0
jupyterlab_server-2.19.0-py310haa95532_0
jupyterlab_widgets-1.0.0-pyhd3eb1b0_1
jxrlib-1.1-he774522_2
keyring-23.4.0-py310haa95532_0
kiwisolver-1.4.4-py310hd77b12b_0
lazy-object-proxy-1.6.0-py310h2bbff1b_0
lcms2-2.12-h83e58a3_0
libaec-1.0.4-h33f27b4_1
libbrotlicommon-1.0.9-h2bbff1b_7
libbrotlidec-1.0.9-h2bbff1b_7
libbrotlienc-1.0.9-h2bbff1b_7
libsodium-1.0.18-h62dcd97_0
libspatialindex-1.9.3-h6c2663c_0
libuv-1.44.2-h2bbff1b_0
libzopfli-1.0.3-ha925a31_0
llvmlite-0.39.1-py310h23ce68f_0
locket-1.0.0-py310haa95532_0
lxml-4.9.1-py310h1985fb9_0
lz4-3.1.3-py310h2bbff1b_0
lzo-2.10-he774522_2
m2w64-libwinpthread-git-5.0.0.4634.697f757-2
markdown-3.4.1-py310haa95532_0
matplotlib-3.7.0-py310haa95532_0
matplotlib-base-3.7.0-py310h4ed8f06_0
matplotlib-inline-0.1.6-py310haa95532_0
mccabe-0.7.0-pyhd3eb1b0_0
mistune-0.8.4-py310h2bbff1b_1000
mock-4.0.3-pyhd3eb1b0_0
mpmath-1.2.1-py310haa95532_0
msgpack-python-1.0.3-py310h59b6b97_0
multipledispatch-0.6.0-py310haa95532_0
munkres-1.1.4-py_0
mypy_extensions-0.4.3-py310haa95532_1
nbclassic-0.5.2-py310haa95532_0
nbclient-0.5.13-py310haa95532_0
nbconvert-6.5.4-py310haa95532_0
nest-asyncio-1.5.6-py310haa95532_0
networkx-2.8.4-py310haa95532_0
ninja-1.10.2-haa95532_5
ninja-base-1.10.2-h6d14046_5
nltk-3.7-pyhd3eb1b0_0
notebook-6.5.2-py310haa95532_0
notebook-shim-0.2.2-py310haa95532_0
numba-0.56.4-py310h4ed8f06_0
numpydoc-1.5.0-py310haa95532_0
openjpeg-2.4.0-h4fc8c34_0
openpyxl-3.0.10-py310h2bbff1b_0
pandocfilters-1.5.0-pyhd3eb1b0_0
panel-0.14.3-py310haa95532_0
param-1.12.3-py310haa95532_0
paramiko-2.8.1-pyhd3eb1b0_0
parsel-1.6.0-py310haa95532_0
parso-0.8.3-pyhd3eb1b0_0
partd-1.2.0-pyhd3eb1b0_1
pathspec-0.10.3-py310haa95532_0
patsy-0.5.3-py310haa95532_0
pep8-1.7.1-py310haa95532_1
pexpect-4.8.0-pyhd3eb1b0_3
pickleshare-0.7.5-pyhd3eb1b0_1003
plotly-5.9.0-py310haa95532_0
pooch-1.4.0-pyhd3eb1b0_0
poyo-0.5.0-pyhd3eb1b0_0
prometheus_client-0.14.1-py310haa95532_0
prompt-toolkit-3.0.36-py310haa95532_0
prompt_toolkit-3.0.36-hd3eb1b0_0
protego-0.1.16-py_0
ptyprocess-0.7.0-pyhd3eb1b0_2
pure_eval-0.2.2-pyhd3eb1b0_0
py-1.11.0-pyhd3eb1b0_0
pyasn1-0.4.8-pyhd3eb1b0_0
pyasn1-modules-0.2.8-py_0
pycodestyle-2.10.0-py310haa95532_0
pyct-0.5.0-py310haa95532_0
pycurl-7.45.1-py310hcd4344a_0
pydispatcher-2.0.5-py310haa95532_2
pydocstyle-6.3.0-py310haa95532_0
pyerfa-2.0.0-py310h2bbff1b_0
pyflakes-3.0.1-py310haa95532_0
pygments-2.11.2-pyhd3eb1b0_0
pyhamcrest-2.0.2-pyhd3eb1b0_2
pylint-2.16.2-py310haa95532_0
pylint-venv-2.3.0-py310haa95532_0
pyls-spyder-0.4.0-pyhd3eb1b0_0
pynacl-1.5.0-py310h8cc25b3_0
pyodbc-4.0.34-py310hd77b12b_0
pyparsing-3.0.9-py310haa95532_0
pyqtwebengine-5.15.7-py310hd77b12b_0
pytables-3.7.0-py310h388bc9b_1
pytest-7.1.2-py310haa95532_0
python-lsp-black-1.2.1-py310haa95532_0
python-lsp-jsonrpc-1.0.0-pyhd3eb1b0_0
python-lsp-server-1.7.1-py310haa95532_0
python-slugify-5.0.2-pyhd3eb1b0_0
python-snappy-0.6.1-py310hd77b12b_0
pytoolconfig-1.2.5-py310haa95532_1
pytorch-1.12.1-cpu_py310h5e1f01c_1
pyviz_comms-2.0.2-pyhd3eb1b0_0
pywavelets-1.4.1-py310h2bbff1b_0
pywin32-ctypes-0.2.0-py310haa95532_1000
pywinpty-2.0.10-py310h5da7b33_0
pyzmq-23.2.0-py310hd77b12b_0
qdarkstyle-3.0.2-pyhd3eb1b0_0
qstylizer-0.2.2-py310haa95532_0
qtawesome-1.2.2-py310haa95532_0
qtconsole-5.4.0-py310haa95532_0
queuelib-1.5.0-py310haa95532_0
regex-2022.7.9-py310h2bbff1b_0
requests-file-1.5.1-pyhd3eb1b0_0
rope-1.7.0-py310haa95532_0
rtree-1.0.1-py310h2eaa2aa_0
scikit-image-0.19.3-py310hd77b12b_1
scikit-learn-1.2.1-py310hd77b12b_0
scikit-learn-intelex-2023.0.2-py310haa95532_0
scipy-1.10.0-py310hb9afe5d_1
scrapy-2.8.0-py310haa95532_0
seaborn-0.12.2-py310haa95532_0
send2trash-1.8.0-pyhd3eb1b0_1
service_identity-18.1.0-pyhd3eb1b0_1
smart_open-5.2.1-py310haa95532_0
snappy-1.1.9-h6c2663c_0
sniffio-1.2.0-py310haa95532_1
snowballstemmer-2.2.0-pyhd3eb1b0_0
sortedcontainers-2.4.0-pyhd3eb1b0_0
sphinx-5.0.2-py310haa95532_0
sphinxcontrib-applehelp-1.0.2-pyhd3eb1b0_0
sphinxcontrib-devhelp-1.0.2-pyhd3eb1b0_0
sphinxcontrib-htmlhelp-2.0.0-pyhd3eb1b0_0
sphinxcontrib-jsmath-1.0.1-pyhd3eb1b0_0
sphinxcontrib-qthelp-1.0.3-pyhd3eb1b0_0
sphinxcontrib-serializinghtml-1.1.5-pyhd3eb1b0_0
spyder-5.4.1-py310haa95532_0
spyder-kernels-2.4.1-py310haa95532_0
sqlalchemy-1.4.39-py310h2bbff1b_0
stack_data-0.2.0-pyhd3eb1b0_0
statsmodels-0.13.5-py310h9128911_1
sympy-1.11.1-py310haa95532_0
tabulate-0.8.10-py310haa95532_0
tbb-2021.7.0-h59b6b97_0
tbb4py-2021.7.0-py310h59b6b97_0
tblib-1.7.0-pyhd3eb1b0_0
tenacity-8.0.1-py310haa95532_1
terminado-0.17.1-py310haa95532_0
text-unidecode-1.3-pyhd3eb1b0_0
textdistance-4.2.1-pyhd3eb1b0_0
threadpoolctl-2.2.0-pyh0d69192_0
three-merge-0.1.1-pyhd3eb1b0_0
tifffile-2021.7.2-pyhd3eb1b0_2
tinycss2-1.2.1-py310haa95532_0
tldextract-3.2.0-pyhd3eb1b0_0
tokenizers-0.11.4-py310he5181cf_1
tomli-2.0.1-py310haa95532_0
tomlkit-0.11.1-py310haa95532_0
transformers-4.24.0-py310haa95532_0
twisted-22.2.0-py310h2bbff1b_1
twisted-iocpsupport-1.0.2-py310h2bbff1b_0
typing-extensions-4.4.0-py310haa95532_0
typing_extensions-4.4.0-py310haa95532_0
unidecode-1.2.0-pyhd3eb1b0_0
w3lib-1.21.0-pyhd3eb1b0_0
watchdog-2.1.6-py310haa95532_0
wcwidth-0.2.5-pyhd3eb1b0_0
webencodings-0.5.1-py310haa95532_1
websocket-client-0.58.0-py310haa95532_4
werkzeug-2.2.2-py310haa95532_0
whatthepatch-1.0.2-py310haa95532_0
widgetsnbextension-3.5.2-py310haa95532_0
winpty-0.4.3-4
wrapt-1.14.1-py310h2bbff1b_0
xarray-2022.11.0-py310haa95532_0
xlwings-0.29.1-py310haa95532_0
yapf-0.31.0-pyhd3eb1b0_0
zeromq-4.3.4-hd77b12b_0
zfp-0.5.5-hd77b12b_6
zict-2.1.0-py310haa95532_0
zipp-3.11.0-py310haa95532_0
zope-1.0-py310haa95532_1
zope.interface-5.4.0-py310h2bbff1b_0
The following packages will be UPDATED:
ca-certificates conda-forge::ca-certificates-2022.12.~ --> pkgs/main::ca-certificates-2023.01.10-haa95532_0
conda conda-forge::conda-23.1.0-py310h5588d~ --> pkgs/main::conda-23.3.1-py310haa95532_0
conda-repo-cli 1.0.27-py310haa95532_0 --> 1.0.41-py310haa95532_0
jupyter_core 5.2.0-py310haa95532_0 --> 5.3.0-py310haa95532_0
libcurl 7.87.0-h86230a5_0 --> 7.88.1-h86230a5_0
packaging 22.0-py310haa95532_0 --> 23.0-py310haa95532_0
pcre2 conda-forge::pcre2-10.37-hdfff0fc_0 --> pkgs/main::pcre2-10.37-h0ff8eda_1
pkginfo 1.8.3-py310haa95532_0 --> 1.9.6-py310haa95532_0
reproc conda-forge::reproc-14.2.3-h8ffe710_0 --> pkgs/main::reproc-14.2.4-hd77b12b_1
reproc-cpp conda-forge::reproc-cpp-14.2.3-h0e605~ --> pkgs/main::reproc-cpp-14.2.4-hd77b12b_1
requests 2.28.1-py310haa95532_0 --> 2.28.1-py310haa95532_1
sqlite 3.40.1-h2bbff1b_0 --> 3.41.1-h2bbff1b_0
tornado 6.1-py310h2bbff1b_0 --> 6.2-py310h2bbff1b_0
tqdm 4.64.1-py310haa95532_0 --> 4.65.0-py310h9909e9c_0
urllib3 1.26.14-py310haa95532_0 --> 1.26.15-py310haa95532_0
zstd 1.5.2-h19a0ad4_0 --> 1.5.4-hd43e919_0
</code></pre>
<h1>EDIT - UPDATE:</h1>
<p>I gave up on Anaconda. I installed mambaforge on a PC without a proxy, and miniforge on a PC which accesses the internet via a proxy. mamba is faster than conda but doesn't work behind certain proxy/firewalls.
I now have smaller environments with only the packages I need.</p>
<p>I still have no idea what caused the problem.</p>
|
<python><anaconda><conda><anaconda3><mamba>
|
2023-03-31 20:33:56
| 1
| 9,100
|
Pythonista anonymous
|
75,902,530
| 9,900,084
|
Polars: Expand dataframe so that each id vars have the same num of rows
|
<p>I have a dataframe that has id and week. I want to expand the dataframe so that each id has the same number of rows, i.e. all four weeks.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
data = {
'id': ['a', 'a', 'b', 'c', 'c', 'c'],
'week': ['1', '2', '3', '4', '3', '1'],
'num1': [1, 3, 5, 4, 3, 6],
'num2': [4, 5, 3, 4, 6, 6]
}
df = pd.DataFrame(data)
</code></pre>
<pre><code> id week num1 num2
0 a 1 1 4
1 a 2 3 5
2 b 3 5 3
3 c 4 4 4
4 c 3 3 6
5 c 1 6 6
</code></pre>
<p>In pandas, I can just do:</p>
<pre class="lang-py prettyprint-override"><code>df = (
df.set_index(['id', 'week'])
.unstack().stack(dropna=False)
.reset_index()
)
</code></pre>
<pre><code> id week num1 num2
0 a 1 1.0 4.0
1 a 2 3.0 5.0
2 a 3 NaN NaN
3 a 4 NaN NaN
4 b 1 NaN NaN
5 b 2 NaN NaN
6 b 3 5.0 3.0
7 b 4 NaN NaN
8 c 1 6.0 6.0
9 c 2 NaN NaN
10 c 3 3.0 6.0
11 c 4 4.0 4.0
</code></pre>
<p>How do you do this with polars?</p>
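<p>For reference, the full grid I'm after is just the cross product of the distinct ids and weeks 1–4; a plain-Python sketch of that idea (using the data from my example above, with <code>None</code> standing in for the missing rows):</p>

```python
from itertools import product

# (id, week) -> (num1, num2), taken from the example dataframe
rows = {('a', '1'): (1, 4), ('a', '2'): (3, 5), ('b', '3'): (5, 3),
        ('c', '4'): (4, 4), ('c', '3'): (3, 6), ('c', '1'): (6, 6)}

ids = sorted({i for i, _ in rows})
weeks = ['1', '2', '3', '4']

# Every (id, week) pair, with None where the original data has no row
expanded = [(i, w, *rows.get((i, w), (None, None)))
            for i, w in product(ids, weeks)]
print(len(expanded))  # 3 ids x 4 weeks = 12 rows
```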
|
<python><python-polars>
|
2023-03-31 20:29:41
| 3
| 2,559
|
steven
|
75,902,475
| 1,142,728
|
Setting initial value for a new foreign key on django admin
|
<p>I have the following django models:</p>
<pre><code>class ModelA(models.Model):
something = models.ForeignKey('Something')
extra_data = models.ForeignKey('ModelB')
class ModelB(models.Model):
something = models.ForeignKey('Something')
</code></pre>
<p>I have registered both models with django admin.</p>
<p>Is it possible to pre-set ModelB's <code>something</code> to the parent's <code>something</code> value when I create a new ModelB via the <code>+</code> button?</p>
<p><a href="https://i.sstatic.net/jcNXI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jcNXI.png" alt="enter image description here" /></a></p>
<p>I have tried using <code>formfield_for_foreignkey</code> and setting <code>kwargs["initial"]</code> but that only is able to set the foreign key itself, it does not provide default values for new objects.</p>
<p>This should be possible by making the <code>+</code> link URL include <code>&something=</code> set to the current object's <code>something</code> value. But I am not sure how to tell django to do this.</p>
|
<python><django><django-admin>
|
2023-03-31 20:19:44
| 1
| 1,231
|
Alejandro Garcia
|
75,902,366
| 975,199
|
on_failure_callback is executed only at task level but not on DAG level
|
<p>This is sample code trying to test <code>on_failure_callback</code> at the DAG level. The callback fires when I set it explicitly at the task level, but not at the DAG level.</p>
<p>The following DAG-level code doesn't call <code>on_failure_callback</code>:</p>
<pre><code>import sys
from datetime import datetime
from airflow import DAG
from airflow.models import Variable
from airflow.operators.python_operator import PythonOperator
def on_failure(ctx):
print('hello world')
print(ctx)
def always_fails():
sys.exit(1)
dag = DAG(
dag_id='always_fails',
description='dag that always fails',
schedule_interval=None,
catchup=False,
start_date=datetime(2021,7,12),
on_failure_callback=on_failure
)
task = PythonOperator(task_id='test-error-notifier', python_callable=always_fails, dag=dag)
</code></pre>
<p>This is the task-level version, where <code>on_failure_callback</code> (i.e. <code>on_failure</code>) does get called:</p>
<pre><code>PythonOperator(task_id='test-error-notifier', python_callable=always_fails,
on_failure_callback=on_failure,
dag=dag)
</code></pre>
<p>Please let me know if I am missing anything here.</p>
<p>Thank you</p>
|
<python><airflow>
|
2023-03-31 20:03:09
| 1
| 8,456
|
logan
|
75,902,268
| 14,676,485
|
Filling NaN with object values throws an error: unhashable type: 'numpy.ndarray'
|
<p>I need to fill missing values with numerical or categorical values (depending on the column type). I wrote a function which calculates the median for each year and fills missing values with it (numerical columns), and for categorical columns fills missing values with the most frequent value in each year. I prepared a test dataframe to test the function. My function works fine for this dataframe: it fills missing values for both categorical and numerical columns.</p>
<p>The issue is that when I use this function on my real dataset it doesn't work on a categorical column (of type <code>object</code>), although it works fine on a numerical column. The following error shows up:</p>
<blockquote>
<p>TypeError: unhashable type: 'numpy.ndarray'</p>
</blockquote>
<p>I can't post the whole dataset here, and the reproducible example doesn't throw any error, but maybe you know where the problem might lie?</p>
<pre><code>dataset = {'song_year': ['1999', '2000', '2000', '2000','2000'],
'song_popularity': [20.0, 33.8, 19.1, np.nan, 55.9],
'country': ['USA', 'Japan', 'Japan', 'Italy', np.nan]}
df = pd.DataFrame(dataset)
def mapNullsWithValues(data, col_key, col_vals):
to_df = pd.DataFrame()
if (data[col_vals].dtype == object) | (data[col_vals].dtype == 'category'):
to_df = data.groupby(col_key)[col_vals].agg(pd.Series.mode).to_frame().reset_index()
elif (data[col_vals].dtype == 'float64') | (data[col_vals].dtype == 'int64'):
to_df = data.groupby(col_key)[col_vals].median().to_frame().reset_index()
col_keys = to_df.iloc[:,0].values
col_values = to_df.iloc[:,1].values
dictionary = dict(zip(col_keys, col_values))
data[col_vals].fillna(data[col_key].map(dictionary), inplace=True)
mapNullsWithValues(df, 'song_year', 'country')
mapNullsWithValues(df, 'song_year', 'song_popularity')
</code></pre>
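<p>While experimenting, I noticed that when a group has a tie, the mode aggregation produces an array instead of a scalar, which might be related (toy data below, not my real dataset):</p>

```python
import pandas as pd
import numpy as np

# Toy frame where one year has a tie for the most frequent country
toy = pd.DataFrame({'song_year': ['2000'] * 4,
                    'country': ['USA', 'USA', 'Japan', 'Japan']})
modes = toy.groupby('song_year')['country'].agg(pd.Series.mode)

# With a tie, the cell holds a numpy array rather than a single string,
# and an array cannot be used as a scalar fill value
print(type(modes.iloc[0]))
```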
|
<python><pandas>
|
2023-03-31 19:48:12
| 0
| 911
|
mustafa00
|
75,902,255
| 4,944,986
|
Spawn Python script on remote box using ssh from within python
|
<p>I am currently writing a script which is supposed to manage multiple servers. Basically I have a Python script on my own machine, which I will refer to as <strong>Host</strong>, as well as multiple servers. When running my Host Python script, I need to find a way to connect to a server via ssh and spawn a second Python script on the server. The issue here is that:</p>
<p>A) The script on the server can technically run indefinitely<br> B) My
local script is supposed to stop or issue other commands after
the ssh command has been issued.</p>
<p>This means that my python script needs to:</p>
<ol>
<li>log into the remote box</li>
<li>start a python script in the background</li>
<li>exit the box again</li>
</ol>
<p>The result should be that there is only the python script running in the background of my server.</p>
<p>I have tried multiple ways now and none seems to really work out. Every single solution I have tried so far results in my local script being blocked.</p>
<p>My current script looks like this:</p>
<pre class="lang-py prettyprint-override"><code>def call_command(target, command):
command = f'ssh -o ProxyJump={CONFIG["proxy"]}:{CONFIG["port"]} {target} "{command}"'
try:
subprocess.call(command, shell=True, stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as e:
raise Exception(f"Failed to execute command on {target}: {e.output}")
call_command("server_1", "cd ~/Script/ && nohup python3 Client.py > /dev/null 2>&1 & disown")
</code></pre>
<p>As stated before, this seems to block and my program is halted when calling the given command. I have looked into the box while my local script was running and found this:</p>
<pre class="lang-bash prettyprint-override"><code>> ps aux | grep python3
bash -c cd ~/Script/ &&nohup python3 Client.py > /dev/null 2>&1 & disown
</code></pre>
<p>So for some reason the wrapping <code>bash -c</code> call keeps running while the underlying script is running. Do you have any solution for this?</p>
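<p>What I want is for the local call to return immediately. Locally (without ssh) the non-blocking pattern would look roughly like this; a sketch with a dummy sleeping child, not my actual remote command:</p>

```python
import subprocess
import sys
import time

start = time.time()
# Popen returns immediately instead of waiting like subprocess.call does
proc = subprocess.Popen(
    [sys.executable, '-c', 'import time; time.sleep(5)'],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,   # detach the child from our process group (POSIX)
)
elapsed = time.time() - start
print(elapsed < 1.0)  # True: we did not wait for the child to finish
proc.terminate()
```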
<p>I am happy for any help!</p>
|
<python><bash><ssh>
|
2023-03-31 19:45:59
| 2
| 945
|
Finn Eggers
|
75,902,183
| 19,438,577
|
Since when is Python None equal to None?
|
<p>I always thought that Python nulls are not equal, as is common in many other languages and based on simple logic (if the value is unknown, how can it be equal to another unknown?).</p>
<p>However, recently I tried it, and discovered that:</p>
<pre><code>Python 3.10.2
>>> None == None
True
</code></pre>
<p>Has it always been this way? If not, which version changed it?</p>
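<p>For contrast, NaN does behave the way I expected an "unknown" value to behave:</p>

```python
nan = float('nan')

print(None == None)   # True: None is a singleton and compares equal to itself
print(None is None)   # True: identity comparison, the usual recommended check
print(nan == nan)     # False: IEEE 754 NaN is not equal to anything, even itself
```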
|
<python><equality><nonetype>
|
2023-03-31 19:36:05
| 3
| 542
|
Dommondke
|
75,901,930
| 6,395,388
|
Pyre-check cannot located Typeshed
|
<p>I installed <code>pyre-check</code> on my Mac via <code>pip install</code>:</p>
<pre><code>> pip3 install pyre-check β 127
Collecting pyre-check
Downloading pyre-check-0.9.18.tar.gz (18.0 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 18.0/18.0 MB 11.1 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Requirement already satisfied: click>=8.0 in /opt/homebrew/lib/python3.10/site-packages (from pyre-check) (8.1.3)
Collecting dataclasses-json
Using cached dataclasses_json-0.5.7-py3-none-any.whl (25 kB)
Collecting intervaltree
Using cached intervaltree-3.1.0.tar.gz (32 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: libcst in /opt/homebrew/lib/python3.10/site-packages (from pyre-check) (0.4.9)
Collecting psutil
Downloading psutil-5.9.4-cp38-abi3-macosx_11_0_arm64.whl (244 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 244.2/244.2 kB 5.9 MB/s eta 0:00:00
Collecting pyre-extensions>=0.0.29
Downloading pyre_extensions-0.0.30-py3-none-any.whl (12 kB)
Requirement already satisfied: tabulate in /opt/homebrew/lib/python3.10/site-packages (from pyre-check) (0.9.0)
Collecting testslide>=2.7.0
Downloading TestSlide-2.7.1.tar.gz (50 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 50.3/50.3 kB 1.6 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Requirement already satisfied: typing_extensions in /opt/homebrew/lib/python3.10/site-packages (from pyre-check) (4.5.0)
Requirement already satisfied: typing-inspect in /opt/homebrew/lib/python3.10/site-packages (from pyre-extensions>=0.0.29->pyre-check) (0.8.0)
Requirement already satisfied: Pygments>=2.2.0 in /opt/homebrew/lib/python3.10/site-packages (from testslide>=2.7.0->pyre-check) (2.14.0)
Collecting typeguard<3.0
Using cached typeguard-2.13.3-py3-none-any.whl (17 kB)
Collecting marshmallow<4.0.0,>=3.3.0
Downloading marshmallow-3.19.0-py3-none-any.whl (49 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 49.1/49.1 kB 1.7 MB/s eta 0:00:00
Collecting marshmallow-enum<2.0.0,>=1.5.1
Using cached marshmallow_enum-1.5.1-py2.py3-none-any.whl (4.2 kB)
Collecting sortedcontainers<3.0,>=2.0
Using cached sortedcontainers-2.4.0-py2.py3-none-any.whl (29 kB)
Requirement already satisfied: pyyaml>=5.2 in /opt/homebrew/lib/python3.10/site-packages (from libcst->pyre-check) (6.0)
Collecting packaging>=17.0
Downloading packaging-23.0-py3-none-any.whl (42 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 42.7/42.7 kB 1.3 MB/s eta 0:00:00
Requirement already satisfied: mypy-extensions>=0.3.0 in /opt/homebrew/lib/python3.10/site-packages (from typing-inspect->pyre-extensions>=0.0.29->pyre-check) (1.0.0)
Building wheels for collected packages: pyre-check, testslide, intervaltree
Building wheel for pyre-check (setup.py) ... done
Created wheel for pyre-check: filename=pyre_check-0.9.18-py3-none-any.whl size=19378418 sha256=6a22961a137fb73d1673dda3058f8671246fee6ccc8e1e9631b9c34af7f89810
Stored in directory: /Users/derekbrown/Library/Caches/pip/wheels/ab/96/d0/489ab89163cf9c83b2dd4f61192b6a12203fefddda3a3ff663
Building wheel for testslide (setup.py) ... done
Created wheel for testslide: filename=TestSlide-2.7.1-py3-none-any.whl size=54669 sha256=8781de43ad005e9aded178536ca1a382a4a5346f98b088c6346a806895000ba1
Stored in directory: /Users/derekbrown/Library/Caches/pip/wheels/09/36/92/3312ef5da8123f2fff7c9111f78b65ac9cecb990ec1c13fe68
Building wheel for intervaltree (setup.py) ... done
Created wheel for intervaltree: filename=intervaltree-3.1.0-py2.py3-none-any.whl size=26098 sha256=eba5e52219b4d57f5f478074230b585b332ac8abb212ef66a97fb203740a0e4a
Stored in directory: /Users/derekbrown/Library/Caches/pip/wheels/f1/52/97/0884d240db33fb0bbc0c2c9549ff13f6a81ec91bf0c1807615
Successfully built pyre-check testslide intervaltree
Installing collected packages: sortedcontainers, typeguard, psutil, packaging, intervaltree, testslide, pyre-extensions, marshmallow, marshmallow-enum, dataclasses-json, pyre-check
Successfully installed dataclasses-json-0.5.7 intervaltree-3.1.0 marshmallow-3.19.0 marshmallow-enum-1.5.1 packaging-23.0 psutil-5.9.4 pyre-check-0.9.18 pyre-extensions-0.0.30 sortedcontainers-2.4.0 testslide-2.7.1 typeguard-2.13.3
</code></pre>
<p>However, when running <code>pyre init</code>, <code>pyre-check</code> can't find the typeshed:</p>
<pre><code>> pyre init
ƛ Also initialize watchman in the current directory? [Y/n] n
ƛ Unable to locate typeshed, please enter its root:
</code></pre>
<p>Why might this be the case?</p>
|
<python><pyre-check>
|
2023-03-31 19:04:07
| 2
| 4,457
|
Derek Brown
|
75,901,851
| 9,392,771
|
Multiple Next Page Links on Same Page
|
<p>In Python, how would you handle a web scrape where the next-page link shows up twice on the same page and you only want to grab one of them after scraping the page?</p>
<p>Example <a href="https://www.imdb.com/search/title/?groups=top_100&sort=user_rating,desc&ref_=adv_prv" rel="nofollow noreferrer">https://www.imdb.com/search/title/?groups=top_100&sort=user_rating,desc&ref_=adv_prv</a></p>
<p>Next page shows at the top and bottom of the list.</p>
|
<python><web-scraping>
|
2023-03-31 18:52:06
| 2
| 301
|
Sven
|
75,901,770
| 11,286,032
|
How do I check the version of python a module supports?
|
<p>I was wondering if there is a generic way to find out if your version of <code>python</code> is supported by a specific module?</p>
<p>For example, let us say that I have <code>python 3.11</code> installed on my computer and I want to install the modules <code>biopython</code> and <code>lru-dict</code>. Going to their respective <code>pypi</code> entries <a href="https://pypi.org/project/biopython/" rel="nofollow noreferrer"><code>biopython</code></a> shows this in their <code>Project description</code>:</p>
<blockquote>
<p>Python Requirements</p>
<p>We currently recommend using Python 3.10 from <a href="http://www.python.org" rel="nofollow noreferrer">http://www.python.org</a></p>
<p>Biopython is currently supported and tested on the following Python implementations:</p>
<p>Python 3.7, 3.8, 3.9, 3.10 and 3.11 β see <a href="http://www.python.org" rel="nofollow noreferrer">http://www.python.org</a></p>
<p>PyPy3.7 v7.3.5 β or later, see <a href="http://www.pypy.org" rel="nofollow noreferrer">http://www.pypy.org</a></p>
</blockquote>
<p>However, if we check the <a href="https://pypi.org/project/lru-dict/" rel="nofollow noreferrer"><code>lru-dict</code></a> entry, their <code>Project description</code> does not mention Python requirements. I did eventually find out that <code>python 3.11</code> is not supported by finding an <a href="https://github.com/amitdev/lru-dict/issues/44" rel="nofollow noreferrer"><code>issue</code></a> on their <code>github</code> page.</p>
<p>Is there a simpler way of finding this information out?</p>
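<p>The closest general mechanism I've found is the <code>Requires-Python</code> metadata field, when maintainers set it; it can be checked with the <code>packaging</code> library (the specifier below is made up for illustration, not taken from either project):</p>

```python
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Hypothetical Requires-Python value as it might appear in package metadata
spec = SpecifierSet(">=3.7,<3.11")

print(Version("3.10") in spec)  # True
print(Version("3.11") in spec)  # False
```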
|
<python><pypi>
|
2023-03-31 18:41:36
| 3
| 2,942
|
Marcelo Paco
|
75,901,444
| 9,392,771
|
Why does the while loop from scraper never ends?
|
<p>I have this code listed below that when it runs it is supposed to scrape data from an imdb site and go to the next page and do it again.</p>
<p>I pieced this code together from bits and pieces on this site, and it works just fine if I only want the first page (meaning: remove the <code>while True</code> part and everything from the <code>nextpage</code> line down). As it stands, it seems to be stuck in an endless loop and I am not sure why. <a href="https://www.imdb.com/search/title/?groups=top_100&sort=user_rating,desc" rel="nofollow noreferrer">IMDB Top 100 Movies</a> is the site I am using, and there are only 2 pages, so I would think this should be really quick since one page takes about 10 seconds to run.</p>
<p>Is there something that I am doing wrong?</p>
<p>I am not really familiar with while loops too much and python is new to me. I am trying to do this for an assignment to scrape 100 rows worth of data with python. I am using <code>BeautifulSoup</code>.</p>
<pre><code>while True:
for container in movie_div:
# name
name = container.h3.a.text
movie_name.append(name)
# year
year = container.h3.find('span', class_='lister-item-year').text
movie_years.append(year)
# runtime
runtime = container.p.find('span', class_='runtime').text if container.p.find('span', class_='runtime').text else '-'
movie_runtime.append(runtime)
# IMDB rating
imdb = float(container.strong.text)
imdb_ratings.append(imdb)
# metascore
m_score = container.find('span', class_='metascore').text if container.find('span', class_='metascore') else '-'
metascores.append(m_score)
# There are two NV containers, grab both of them as they hold both the votes and the grosses
nv = container.find_all('span', attrs={'name': 'nv'})
# filter nv for votes
vote = nv[0].text
number_votes.append(vote)
# filter nv for gross
grosses = nv[1].text if len(nv) > 1 else '-'
us_gross.append(grosses)
nextpage = requests.get('https://www.imdb.com'+movie_soup.select_one('.next-page').get('href'))
# create soup for next url
nextsoup = BeautifulSoup(nextpage.text, 'html.parser')
</code></pre>
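<p>To make sure I understand the loop shape I need, here is a stripped-down sketch of pagination that does terminate, run against a fake two-page "site" instead of real requests:</p>

```python
# Fake site: page number -> (items on that page, next page number or None)
site = {1: (['m1', 'm2'], 2), 2: (['m3'], None)}

scraped = []
page = 1
while page is not None:      # stop when there is no next-page link
    items, next_page = site[page]
    scraped.extend(items)
    page = next_page         # advance the state the loop condition depends on

print(scraped)  # ['m1', 'm2', 'm3']
```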
|
<python><web-scraping><beautifulsoup><while-loop>
|
2023-03-31 17:58:57
| 2
| 301
|
Sven
|
75,901,378
| 11,666,502
|
Fastest way to loop through video and save frames
|
<p>I need to extract frames and landmarks from a video in one process, and then perform operations of the frames and landmarks in another process. What is the most efficient way to do this?</p>
<p>Right now, I am storing frames as arrays in a list, and then pickling that list. Code below:</p>
<pre><code>frames, landmarks = [], []
cap = cv2.VideoCapture(path)
while cap.isOpened():
success, frame = cap.read()
lm = get_landmarks(frame)
frames.append(frame)
landmarks.append(lm)
frame_pickle = open(f"frames.pkl", "wb")
landmarks_pickle = open(f"landmarks.pkl", "wb")
pickle.dump(data_out, frame_pickle)
pickle.dump(frame_number_out, landmarks_pickle)
</code></pre>
<p>This works, but it is very slow. Is there any way I can speed this up/make this more efficient?</p>
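<p>One alternative I am considering is stacking the frames into a single <code>uint8</code> array and saving with NumPy instead of pickling a list of arrays; a small self-contained sketch (random arrays stand in for real video frames):</p>

```python
import os
import tempfile
import numpy as np

# Stand-in for decoded video frames (n frames of shape h x w x 3)
frames = [np.random.randint(0, 256, (8, 8, 3), dtype=np.uint8) for _ in range(5)]

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'frames.npy')
    stacked = np.stack(frames)        # one contiguous array, shape (n, h, w, 3)
    np.save(path, stacked)            # single write instead of per-frame pickling
    loaded = np.load(path)

print(loaded.shape)  # (5, 8, 8, 3)
```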
|
<python><loops><opencv><computer-vision><pickle>
|
2023-03-31 17:50:54
| 2
| 1,689
|
connor449
|
75,901,180
| 4,045,275
|
Conda very slow and downloading only from Conda-Forge
|
<h1>The issue</h1>
<p>I have recently installed Anaconda3 (as downloaded on 31-Mar-2023) onto a Windows PC. I chose the installation for my username only, which doesn't require admin rights. It's my private PC, so no corporate firewalls.</p>
<p>Quite simply, Conda doesn't work. Even a banal command like <code>conda update pandas</code> will result in:</p>
<pre><code>Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): |
</code></pre>
<p>and, even if I leave it an hour, it remains stuck at collecting package metadata. To be clear, Pandas doesn't need updating, mine was just a test to see if Conda works properly, and it doesn't.</p>
<h1>What I have tried</h1>
<p>Beyond uninstalling and reinstalling multiple times, I have disabled my firewall (Eset on Windows) thinking that maybe it was blocking access to the remote repositories.</p>
<p>I now get the message that loads of packages will be downloaded from Conda-Forge - but these are all packages which I already have, and the version is the same, too. I did not go ahead with this.</p>
<pre><code> The following packages will be downloaded:
package | build
---------------------------|-----------------
[...]
numpy-1.24.2 | py310hd02465a_0 5.6 MB conda-forge
openjpeg-2.5.0 | ha2aaf27_2 232 KB conda-forge
openssl-1.1.1t | hcfcfb64_0 5.0 MB conda-forge
packaging-23.0 | pyhd8ed1ab_0 40 KB conda-forge
pandas-1.5.3 | py310h1c4a608_1 10.2 MB conda-forge
pathlib-1.0.1 | py310h5588dad_7 5 KB conda-forge
</code></pre>
<h1>My interpretation</h1>
<p>It seems I have two problems:</p>
<ol>
<li>the ESET firewall blocks Conda</li>
<li>if I disable the firewall, Conda searches the conda-forge repository first and wants to replace the pandas 1.5.3 I already have with pandas 1.5.3 from conda-forge, and the same for a number of other packages</li>
</ol>
<h1>What I have researched</h1>
<p>I have found many, many posts on this matter, but they mostly seem to focus on how to use additional repository sources (e.g. conda-forge) and how to configure them correctly. <a href="https://stackoverflow.com/questions/63734508/stuck-at-solving-environment-on-anaconda">Stuck at Solving Environment on Anaconda</a>
I think my case is different - we're not talking about struggling to install an obscure package from an obscure repository, we're saying Conda cannot even update Pandas!</p>
<p>I have found other discussions at <a href="https://github.com/conda/conda/issues/11919" rel="nofollow noreferrer">https://github.com/conda/conda/issues/11919</a> and <a href="https://github.com/conda/conda/issues/8051" rel="nofollow noreferrer">https://github.com/conda/conda/issues/8051</a> but they don't seem particularly relevant to my case.</p>
|
<python><anaconda><conda><anaconda3>
|
2023-03-31 17:23:31
| 1
| 9,100
|
Pythonista anonymous
|
75,900,997
| 9,105,621
|
how to compare two dataframes and return a new dataframe with only the records that have changed
|
<p>I want to build a python script that will compare two pandas dataframes and create a new <code>df</code> that I can use to update my sql table. I create <code>df1</code> by reading the existing table. I create <code>df2</code> by reading the new data through an API call. I want to isolate changed lines and update the SQL table with the new values.</p>
<p>I have attempted to compare the frames through an outer merge, but I need help returning a dataframe containing only the records that have a different value in any field.</p>
<p>Here is my example <code>df1</code>:</p>
<p><a href="https://i.sstatic.net/AZ75Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AZ75Z.png" alt="enter image description here" /></a></p>
<p>Here is my example <code>df2</code>:</p>
<p><a href="https://i.sstatic.net/OPOKV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OPOKV.png" alt="enter image description here" /></a></p>
<p>My desired output:</p>
<p><a href="https://i.sstatic.net/wpGa5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wpGa5.png" alt="enter image description here" /></a></p>
<p>This function returns the entire dataframe and isn't working as expected:</p>
<pre><code>def compare_dataframes(df1, df2, pk_col):
# Merge the two dataframes on the primary key column
df_merged = pd.merge(df1, df2, on=pk_col, how='outer', suffixes=('_old', '_new'))
# Identify the rows that are different between the two dataframes
df_diff = df_merged[df_merged.isna().any(axis=1)]
# Drop the columns from the old dataframe and rename the columns from the new dataframe
df_diff = df_diff.drop(columns=[col for col in df_diff.columns if col.endswith('_old')])
df_diff = df_diff.rename(columns={col: col.replace('_new', '') for col in df_diff.columns})
return df_diff
</code></pre>
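<p>To illustrate the behaviour I am after with reproducible data (column names made up for the example), a whole-row comparison via an indicator merge picks out exactly the changed rows:</p>

```python
import pandas as pd

old = pd.DataFrame({'id': [1, 2, 3], 'val': ['a', 'b', 'c']})
new = pd.DataFrame({'id': [1, 2, 3], 'val': ['a', 'B', 'c']})

# Left merge on ALL common columns: rows of `new` with no full match in `old`
# are flagged 'left_only' -- those are the changed (or added) records
merged = new.merge(old, how='left', indicator=True)
changed = new[merged['_merge'] == 'left_only']

print(changed['id'].tolist())  # [2]
```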
|
<python><pandas>
|
2023-03-31 17:00:05
| 1
| 556
|
Mike Mann
|
75,900,837
| 10,234,248
|
TDD modifying my test to make my code pass
|
<p>I'm learning Test Driven Development and I'm struggling a little bit: it seems like my brain wants to build the solution algorithm, not the tests needed to drive it. I can barely formalize a unit test, and I go back and forth changing my tests.
I've come to a point where I make some adjustment in my code and then modify my tests to make them pass! It should be the other way around.</p>
<p>Initially I had this piece of code as a test:</p>
<pre><code> def test_is_zero_in_range(self):
self.assertRaises(Exception, self._provServerPrepare.is_in_allow_range, 0)
</code></pre>
<p>So I wrote the code needed:</p>
<pre><code> def is_in_allow_range(self, value: str) -> int:
value = int(value)
if value in MULTIPLY_PROV_RANGE:
return value
raise Exception('Value out of range')
</code></pre>
<p>As I'm working with Flask, I changed my code to return a message and an error code rather than raise an exception:</p>
<pre><code> def is_in_allow_range(self, value: str) -> int:
value = int(value)
if value in MULTIPLY_PROV_RANGE:
return value
return {"message": f"Value out of range"}, 400
</code></pre>
<p>In this situation I have to change my tests to make them pass. I'm not sure why, but I feel like I'm messing something up; it doesn't seem right to do it that way.</p>
<p>Can anyone help me with this, or point me to resources to read/watch?</p>
<p>Thx</p>
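<p>(For context, here is a sketch of how the test could follow the new return-based contract instead of being bent after the fact; <code>MULTIPLY_PROV_RANGE</code> is an illustrative stand-in:)</p>

```python
import unittest

MULTIPLY_PROV_RANGE = range(1, 10)  # illustrative stand-in

def is_in_allow_range(value: str):
    value = int(value)
    if value in MULTIPLY_PROV_RANGE:
        return value
    return {"message": "Value out of range"}, 400

class TestAllowRange(unittest.TestCase):
    def test_zero_is_out_of_range(self):
        # The contract changed from "raises" to "returns (body, 400)",
        # so the assertion changes with it.
        self.assertEqual(is_in_allow_range("0"),
                         ({"message": "Value out of range"}, 400))
```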
|
<python><flask><tdd>
|
2023-03-31 16:34:56
| 3
| 447
|
jmnguye
|
75,900,701
| 11,021,252
|
How to extract individual points from the shapely Multipoint data type?
|
<p>I am using Shapely 2.0; somehow, I can't iterate through the individual points of the MULTIPOINT data type in this version.</p>
<p>I wanted to extract and plot individual points from the MULTIPOINT.
The MULTIPOINT is obtained from <code>line.intersection(circle_boundary)</code>, where I tried to get the intersection points between line and circle geometry.</p>
<p>Is there any way to access the Individual points in the MULTIPOINT or get the intersecting points as individual shapely Points rather than as MULTIPOINT?</p>
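<p>(A sketch of the Shapely 2.0 accessor, assuming the intersection really is a <code>MultiPoint</code>: the parts live under <code>.geoms</code>:)</p>

```python
from shapely.geometry import MultiPoint

# Stand-in for the result of line.intersection(circle_boundary).
mp = MultiPoint([(0.0, 0.0), (1.0, 1.0)])

# Shapely 2.0 removed direct iteration over multi-part geometries;
# the individual Point objects are exposed via .geoms instead.
points = list(mp.geoms)
coords = [(p.x, p.y) for p in points]
```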
|
<python><gis><geopandas><shapely>
|
2023-03-31 16:17:54
| 1
| 507
|
VGB
|
75,900,697
| 1,388,419
|
Python win32serviceutil ModuleNotFoundError
|
<p>I created a Python win32serviceutil service following the excellent guide at <a href="https://thepythoncorner.com/posts/2018-08-01-how-to-create-a-windows-service-in-python/" rel="nofollow noreferrer">https://thepythoncorner.com/posts/2018-08-01-how-to-create-a-windows-service-in-python/</a>.</p>
<p>It will install just fine</p>
<pre><code>C:\βΊpython PowerMasterUPS.py install
Installing service PowerMasterUPS
copying host exe 'C:\Users\x\AppData\Local\Programs\Python\Python311\Lib\site-packa
C:\Users\x\AppData\Local\Programs\Python\Python311\pythonservice.exe'
Service installed
</code></pre>
<p>But it fails to start</p>
<pre><code>C:\βΊpython PowerMasterUPS.py debug
Debugging service PowerMasterUPS - press Ctrl+C to stop.
Error 0xC0000004 - Python could not import the service's module
ModuleNotFoundError: No module named 'PowerMasterUPS'
(null): (null)
</code></pre>
<p>I tried to add <code>sys.path.append("C:\\")</code> but still the same error. This is the directory where the PowerMasterUPS.py is.</p>
<pre><code>import socket
import win32serviceutil
import servicemanager
import win32event
import win32service
import requests, time, json, os, sys
sys.path.append("C:\\")
class SMWinservice(win32serviceutil.ServiceFramework):
'''
SMWinservice
by Davide Mastromatteo
'''
_svc_name_ = 'pythonService'
_svc_display_name_ = 'Python Service'
_svc_description_ = 'Python Service Description'
@classmethod
def parse_command_line(cls):
'''
ClassMethod to parse the command line
'''
win32serviceutil.HandleCommandLine(cls)
def __init__(self, args):
'''
Constructor of the winservice
'''
win32serviceutil.ServiceFramework.__init__(self, args)
self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)
socket.setdefaulttimeout(60)
def SvcStop(self):
'''
Called when the service is asked to stop
'''
self.stop()
self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
win32event.SetEvent(self.hWaitStop)
def SvcDoRun(self):
'''
Called when the service is asked to start
'''
self.start()
servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE,
servicemanager.PYS_SERVICE_STARTED,
(self._svc_name_, ''))
self.main()
def start(self):
'''
Override to add logic before the start
eg. running condition
'''
pass
def stop(self):
'''
Override to add logic before the stop
eg. invalidating running condition
'''
pass
def main(self):
'''
Main class to be overridden to add logic
'''
pass
class PowerMasterUPS(SMWinservice):
_svc_name_ = "PowerMasterUPS"
_svc_display_name_ = "PowerMasterUPS"
_svc_description_ = "PowerMasterUPS will Shutdown Windows when UPS battery is depleting"
def start(self):
self.isrunning = True
def stop(self):
self.isrunning = False
def main(self):
while self.isrunning:
# my app code here
if __name__ == '__main__':
PowerMasterUPS.parse_command_line()
</code></pre>
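<p>(One common cause, sketched below as an assumption: <code>pythonservice.exe</code> does not start in the script's folder, so the directory actually containing <code>PowerMasterUPS.py</code>, not <code>C:\</code>, is what must be on <code>sys.path</code>:)</p>

```python
import os
import sys

# Resolve the directory that really contains this script and put it first
# on sys.path; appending "C:\\" only helps if the file lives at the root.
script_path = globals().get("__file__") or sys.argv[0]
script_dir = os.path.dirname(os.path.abspath(script_path))
if script_dir not in sys.path:
    sys.path.insert(0, script_dir)
```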
|
<python><python-3.x><windows-services><pywin32><win32serviceutil>
|
2023-03-31 16:17:20
| 0
| 377
|
Gotenks
|
75,900,656
| 7,559,397
|
Cannot check if item in listbox was clicked
|
<p>I am trying to check whether an item in a listbox was selected, and then enable another button if one is.</p>
<pre><code>from tkinter import *
import random
top = Toplevel()
top.geometry('255x135')
top.resizable(False, False)
guessbox = Listbox(master=top, selectmode=SINGLE)
guessbox.insert(0, '0')
guessbox.insert(1, '1')
guessbox.place(x=0, y=0)
answer = random.randint(0, 1)
dirlabel = Label(master=top, text='Click Next when done')
dirlabel.place(x=130, y=0)
nextbutton = Button(master=top, text='Next', command=top.quit, state='disabled')
nextbutton.place(x=170, y=50)
guess = guessbox.curselection()
print(guess)
guessbox.bind('<<ListboxSelect>>', nextbutton.config(state='normal'))
</code></pre>
|
<python><tkinter>
|
2023-03-31 16:11:48
| 1
| 1,335
|
Jinzu
|
75,900,501
| 6,528,055
|
How can I keep track of the number of epochs completed while training a Word2Vec model?
|
<p>I'm training my Word2Vec model for more than 12 hours on a corpus of more than 90k tweets (samples), with ~10k unique words in the dictionary, for 5 epochs on my 8 GB RAM laptop. Is that normal?</p>
<p>I want to track the progress of the training process which is why I want to keep track of the number of epochs completed while training. How can I do that? My code given below:</p>
<pre><code>model = Word2Vec(df['tweet_text'], window=10, vector_size=300, hs=0, negative=1)
model.train([df['tweet_text']], total_examples=len(df['tweet_text']), epochs=5)
</code></pre>
|
<python><tensorflow><nlp><gensim><word2vec>
|
2023-03-31 15:54:19
| 1
| 969
|
Debbie
|
75,900,386
| 305,883
|
librosa y-axis spectrogram does not align properly
|
<p>How to align axis of spectrogram visualisations in Librosa or Matplotlib ?</p>
<p>Consider this example, <a href="https://librosa.org/doc/latest/generated/librosa.feature.spectral_rolloff.html" rel="nofollow noreferrer">from Librosa's documentation</a>:
<a href="https://i.sstatic.net/nmvkA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nmvkA.png" alt="enter image description here" /></a></p>
<p>As you can see, the roll-off curves are aligned with the spectrogram.
I can't replicate the figure with my own audio.</p>
<p>The y-axis is never aligned.</p>
<p>Try:</p>
<pre><code>sr = 250000
n_fft = 2048
hop_length=256
win_length = 1024
fmin = 220
S, phase = librosa.magphase(librosa.stft(filtered_audio))
sftf_spec = librosa.stft(filtered_audio, n_fft=n_fft, hop_length=hop_length)
S = np.abs(sftf_spec)
rolloff = librosa.feature.spectral_rolloff(S=S,
sr=sr,
n_fft=n_fft,
hop_length=hop_length,
win_length = win_length
)
amplitude_spec = librosa.amplitude_to_db(S,
ref=np.max)
rolloff_min = librosa.feature.spectral_rolloff(S=S, sr=sr, roll_percent=0.15)
fig, ax = plt.subplots()
librosa.display.specshow(amplitude_spec,
y_axis='log', x_axis='time', ax=ax)
ax.plot(librosa.times_like(rolloff), rolloff[0], label='Roll-off frequency (0.85)')
ax.plot(librosa.times_like(rolloff), rolloff_min[0], color='w',
label='Roll-off frequency (0.15)')
ax.legend(loc='lower right')
ax.set(title='log Power spectrogram')
</code></pre>
<p>If you need to reproduce this, you can download the audio WAV:</p>
<pre><code>https://drive.google.com/file/d/1UCUWAaczzejTN9m_y-usjPbG8__1mWI1/view?usp=sharing
</code></pre>
<pre><code>filtered_audio = np.array([[ #copy ]])
</code></pre>
<p>I got this:</p>
<p><a href="https://i.sstatic.net/jngxU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jngxU.png" alt="enter image description here" /></a></p>
<p>and if I set the rate in specshow, I got this:</p>
<pre><code>librosa.display.specshow(amplitude_spec,
sr=sr,
y_axis='log', x_axis='time', ax=ax)
</code></pre>
<p><a href="https://i.sstatic.net/Vwf9r.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Vwf9r.png" alt="enter image description here" /></a></p>
<p>I want the roll-off curves to follow the same scale as the spectrogram they were built from...</p>
|
<python><matplotlib><librosa><spectrogram><acoustics>
|
2023-03-31 15:41:03
| 1
| 1,739
|
user305883
|
75,900,231
| 323,698
|
Using python to generate a JWT raises ValueError "Could not deserialize key data"
|
<p>I am trying to generate a JWT for a Github app following these instructions <a href="https://docs.github.com/en/apps/creating-github-apps/authenticating-with-a-github-app/generating-a-json-web-token-jwt-for-a-github-app" rel="nofollow noreferrer">https://docs.github.com/en/apps/creating-github-apps/authenticating-with-a-github-app/generating-a-json-web-token-jwt-for-a-github-app</a></p>
<p>This is what I did:</p>
<p>Install Python3</p>
<p>Pip3 install jwt</p>
<p>I created a pem file that has the RSA PRIVATE KEY</p>
<p>I copied the code from the link above and created a get-jwt.py file:</p>
<pre><code>#!/usr/bin/env python3
import jwt
import time
import sys
# Get PEM file path
if len(sys.argv) > 1:
pem = sys.argv[1]
else:
pem = input("Enter path of private PEM file: ")
# Get the App ID
if len(sys.argv) > 2:
app_id = sys.argv[2]
else:
app_id = input("Enter your APP ID: ")
# Open PEM
with open(pem, 'rb') as pem_file:
signing_key = jwt.jwk_from_pem(pem_file.read())
payload = {
# Issued at time
'iat': int(time.time()),
# JWT expiration time (10 minutes maximum)
'exp': int(time.time()) + 600,
# GitHub App's identifier
'iss': app_id
}
# Create JWT
jwt_instance = jwt.JWT()
encoded_jwt = jwt_instance.encode(payload, signing_key, alg='RS256')
print(f"JWT: ", encoded_jwt)
</code></pre>
<p>When I do <code>python3 get-jwt.py</code>, I get the following error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "/Users/al/Library/Python/3.9/lib/python/site-packages/jwt/jwk.py", line 345, in jwk_from_private_bytes
privkey = private_loader(content, password, backend) # type: ignore[operator] # noqa: E501
File "/Users/al/Library/Python/3.9/lib/python/site-packages/cryptography/hazmat/primitives/serialization/base.py", line 24, in load_pem_private_key
return ossl.load_pem_private_key(
File "/Users/al/Library/Python/3.9/lib/python/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 957, in load_pem_private_key
return self._load_key(
File "/Users/al/Library/Python/3.9/lib/python/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 1152, in _load_key
self._handle_key_loading_error()
File "/Users/al/Library/Python/3.9/lib/python/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 1207, in _handle_key_loading_error
raise ValueError(
ValueError: ('Could not deserialize key data. The data may be in an incorrect format, it may be encrypted with an unsupported algorithm, or it may be an unsupported key type (e.g. EC curves with explicit parameters).', [<OpenSSLError(code=503841036, lib=60, reason=524556, reason_text=unsupported)>])
</code></pre>
<p>What am I doing wrong?</p>
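<p>(As a sanity check, sketched here assuming the <code>cryptography</code> package is available: you can verify what a loadable RSA PEM looks like, independent of the <code>jwt</code> package. A key whose header/footer lines or line breaks were mangled during copy-paste fails with exactly this kind of error:)</p>

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a throwaway key in the same "BEGIN RSA PRIVATE KEY" format
# GitHub App keys use, then prove it round-trips through the loader.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pem = key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.TraditionalOpenSSL,
    serialization.NoEncryption(),
)
loaded = serialization.load_pem_private_key(pem, password=None)
```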
|
<python><python-3.x><jwt>
|
2023-03-31 15:26:36
| 0
| 11,317
|
Strong Like Bull
|
75,900,223
| 1,747,493
|
How to remove histogram bar labels for 0-values in matplotlib
|
<p>I'm creating histogram bar plots and I'm adding labels to the different categories with the value percentage. I wonder if it is possible to remove labels when the percentage value is 0, and if so, how?</p>
<p>For instance, the following script</p>
<pre><code>#!/usr/bin/env -S python
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
plt.figure (figsize = (8.0, 6.0))
labels = [ "exp1", "exp2", "exp3", "exp4" ]
data_REF = [5, 4, 1, 1]
data_PAR = [1, 0, 0, 1]
data_OTH = [3, 1, 1, 1]
# Convert to numpy objects and calculate percentuals
data_REF = np.asarray(data_REF)
data_PAR = np.asarray(data_PAR)
data_OTH = np.asarray(data_OTH)
data_TOTALS = data_REF + data_PAR + data_OTH
data_pREF = 100 * data_REF / data_TOTALS
data_pPAR = 100 * data_PAR / data_TOTALS
data_pOTH = 100 * data_OTH / data_TOTALS
# Generate horizontal 100% stacked bar chart, with data bar labels
bREF = plt.barh (labels, data_pREF, color="blue", edgecolor="blue", height=1)
bREF_ = plt.bar_label (bREF, label_type='center', color="white", fmt='%.2f')
bPAR = plt.barh (labels, data_pPAR, left=data_pREF, color="white", edgecolor="gray", height=1)
bPAR_ = plt.bar_label (bPAR, label_type='center', color="darkgray", fmt='%.2f')
bOTH = plt.barh (labels, data_pOTH, left=data_pREF+data_pPAR, color="red", edgecolor="red", height=1)
bOTH_ = plt.bar_label (bOTH, label_type='center', color="white", fmt='%.2f')
plt.legend ([bREF, bPAR, bOTH], ["REF", "On par", "OTH"], bbox_to_anchor=(0, -0.05), loc="upper center", mode="expand", fontsize='small', frameon=False)
plt.tight_layout()
plt.savefig ("test.png")
plt.close (fig=None)
</code></pre>
<p>generates the following figure. And I'd like to get rid of the 0.00 label in rows "exp1" and "exp4".</p>
<p><a href="https://i.sstatic.net/5YDiF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5YDiF.png" alt="Result for the attached script" /></a></p>
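<p>(One way to do this, sketched with illustrative data: <code>bar_label</code> accepts an explicit <code>labels=</code> list, so zero-valued segments can be given an empty string instead of using <code>fmt=</code>:)</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
import numpy as np

vals = np.array([12.5, 0.0, 87.5])
fig, ax = plt.subplots()
bars = ax.barh(["exp1", "exp2", "exp3"], vals)

# Blank out labels wherever the percentage is zero.
labels = [f"{v:.2f}" if v > 0 else "" for v in vals]
ax.bar_label(bars, labels=labels, label_type="center")
plt.close(fig)
```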
|
<python><numpy><matplotlib>
|
2023-03-31 15:25:22
| 0
| 3,206
|
Harald
|
75,900,125
| 12,894,011
|
How to store two separate variables of the same argument in Python?
|
<p>I am trying to write a function in Python which takes in a website name and simply returns two versions of it in two separate variables:</p>
<p>The first variable website should look like the original argument with no changes: <a href="http://example.com" rel="nofollow noreferrer">http://example.com</a></p>
<p>The second variable websitefilename should look like this: http-example-com</p>
<p>I have attempted to store these in two separate variables like so:</p>
<pre><code> def websitefile(website):
websitefilename = re.sub(r'[^\w\s-]', '-', website).strip().lower()
websitefilename = re.sub(r'[-\s]+', '-', website)
print(website)
print(websitefilename)
websitefile('http://example.com')
</code></pre>
<p>But both <code>website</code> and <code>websitefilename</code> return the same thing:</p>
<p><a href="http://example.com" rel="nofollow noreferrer">http://example.com</a></p>
<p>How do you make website return <a href="http://example.com" rel="nofollow noreferrer">http://example.com</a> and websitefilename return http-example-com?</p>
<p>I need them differently because Windows for some reason can't have slashes in filenames.</p>
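<p>(The likely culprit, sketched: the second <code>re.sub</code> call must run on the result of the first, not on the untouched <code>website</code> argument. Strings are immutable, so <code>re.sub</code> never changes its input in place:)</p>

```python
import re

def website_file(website):
    # Feed the output of the first substitution into the second one.
    name = re.sub(r'[^\w\s-]', '-', website).strip().lower()
    name = re.sub(r'[-\s]+', '-', name)
    return website, name

original, filename = website_file('http://example.com')
```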
|
<python>
|
2023-03-31 15:14:26
| 3
| 347
|
Julius Goddard
|
75,900,085
| 11,255,651
|
Loss function giving nan in pytorch
|
<p>In pytorch, I have a loss function of <code>1/x</code> plus a few other terms. The last layer of my neural net is a sigmoid, so the values will be between 0 and 1.</p>
<p>Some value fed to <code>1/x</code> must get really small at some point because my loss has become this:</p>
<pre class="lang-bash prettyprint-override"><code>loss: 11.047459 [729600/235474375]
loss: 9.348356 [731200/235474375]
loss: 7.184393 [732800/235474375]
loss: 8.699876 [734400/235474375]
loss: 7.178806 [736000/235474375]
loss: 8.090066 [737600/235474375]
loss: 12.415799 [739200/235474375]
loss: 10.422441 [740800/235474375]
loss: 8.335846 [742400/235474375]
loss: nan [744000/235474375]
loss: nan [745600/235474375]
loss: nan [747200/235474375]
loss: nan [748800/235474375]
loss: nan [750400/235474375]
</code></pre>
<p>I'm wondering if there's any way to "rewind" if <code>nan</code> is hit or define the loss function so that it's never hit? Thanks!</p>
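<p>(One standard guard, sketched with stand-in values: clamp the sigmoid output away from zero before taking <code>1/x</code>, so the reciprocal stays finite. The epsilon of <code>1e-6</code> is an assumption to tune:)</p>

```python
import torch

eps = 1e-6
x = torch.tensor([1e-12, 0.5, 1.0])  # stand-in for sigmoid outputs

# clamp keeps the denominator >= eps, bounding 1/x instead of letting it
# overflow to inf (whose gradient then turns the loss into nan).
safe_reciprocal = 1.0 / x.clamp(min=eps)
```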
|
<python><pytorch>
|
2023-03-31 15:09:45
| 2
| 826
|
Mike
|
75,900,056
| 12,860,924
|
How to use K-Fold cross validation with DenseNet121 model
|
<p>I am working on classification of breast cancer images using a <code>DenseNet121</code> pretrained model. I split the dataset into training, testing and validation sets. I want to apply <code>k-fold cross validation</code>. I used <code>cross_validation</code> from the <code>sklearn</code> library, but I get the below error when I run the code. I tried to solve it, but nothing fixed the error. Does anyone have an idea how to solve this?</p>
<pre><code>in_model = tf.keras.applications.DenseNet121(input_shape=(224,224,3),
include_top=False,
weights='imagenet',classes = 2)
in_model.trainable = False
inputs = tf.keras.Input(shape=(224,224,3))
x = in_model(inputs)
flat = Flatten()(x)
dense_1 = Dense(1024,activation = 'relu')(flat)
dense_2 = Dense(1024,activation = 'relu')(dense_1)
prediction = Dense(2,activation = 'softmax')(dense_2)
in_pred = Model(inputs = inputs,outputs = prediction)
validation_data=(valid_data,valid_labels)
#16
in_pred.summary()
in_pred.compile(optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.0002), loss=tf.keras.losses.CategoricalCrossentropy(from_logits = False), metrics=['accuracy'])
history=in_pred.fit(train_data,train_labels,epochs = 3,batch_size=32,validation_data=validation_data)
model_result=cross_validation(in_pred, train_data, train_labels, 5)
</code></pre>
<p>The error:</p>
<pre><code>TypeError: Cannot clone object '<keras.engine.functional.Functional object at 0x000001F82E17E3A0>'
(type <class 'keras.engine.functional.Functional'>):
it does not seem to be a scikit-learn estimator as it does not implement a 'get_params' method.
</code></pre>
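<p>(The usual pattern for a Keras model, sketched with toy arrays: scikit-learn's cross-validation helpers need an estimator exposing <code>get_params</code>, so instead iterate <code>KFold</code> indices yourself and call <code>fit</code> once per fold:)</p>

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)  # toy stand-ins for train_data/labels
y = np.arange(10)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
fold_sizes = []
for train_idx, val_idx in kf.split(X):
    # in_pred.fit(X[train_idx], y[train_idx],
    #             validation_data=(X[val_idx], y[val_idx]))
    fold_sizes.append((len(train_idx), len(val_idx)))
```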
|
<python><validation><keras><scikit-learn><k-fold>
|
2023-03-31 15:07:41
| 1
| 685
|
Eda
|
75,899,931
| 11,268,057
|
Create Stripe Payment Intent after purchasing
|
<p>I am using the Stripe Card Element for my website. The page that allows users to purchase a product is public-facing (you don't need to login) hence there is a lot of casual browsing.</p>
<p>I followed Stripe's <a href="https://stripe.com/docs/payments/card-element" rel="nofollow noreferrer">card element</a> docs for my website. One of the things that they recommend is to immediately load the paymentintent object on the page if you know how much the user will be paying. I'd like to know if I can create the paymentintent <em>after</em> purchasing instead of creating it on page load?</p>
<p>It's becoming hard for me to navigate through the Stripe dashboard because of all the incomplete PaymentIntents that are popping up.</p>
|
<javascript><python><stripe-payments><payment-processing>
|
2023-03-31 14:53:52
| 2
| 932
|
abhivemp
|
75,899,927
| 2,355,903
|
Getting formatted traceback when overwriting sys.excepthook
|
<p>I am rewording this question, as it seems I had some issues in my initial question and it was too vague.</p>
<p>I am working on trying to build a replacement function for sys.excepthook to save errors to file when running a batch job. I have referenced several sources and am specifically having trouble getting the traceback to print to file. Below is something that is working for me.</p>
<pre><code>import sys
import traceback
def saveError(exctype, value, tb):
filename = tb.tb_frame.f_code.co_filename
name = tb.tb_frame.f_code.co_name
line_no = tb.tb_lineno
with open('filepath.txt','w', newline = '') as f:
f.write(f"{filename}")
f.write(f"{name}")
f.write(f"{line_no}")
f.write(f"{exctype.__name__}")
f.write(f"{value}")
f.close()
sys.excepthook = saveError
print(1/0)
</code></pre>
<p>The problem I'm running into is trying to print the full traceback, you can see above I've pulled out individual pieces of it but haven't been able to get the whole thing. A few examples of things I've tried below.</p>
<pre><code>import traceback
trace = f"{traceback.format_exc()}"
</code></pre>
<p>This returns NoneType: None</p>
<pre><code>f.write(f"{tb}")
</code></pre>
<p>This returns <traceback object at 0x7....</p>
<pre><code>trace2 = traceback.format_tb(tb)
</code></pre>
<p>This seems to return nothing</p>
<p>Is there a way to get the traceback information into a string format and save it to file?</p>
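<p>(The piece that was missing, sketched: <code>traceback.format_exception</code> takes the same three arguments the hook receives and returns the familiar multi-line text. <code>format_exc</code> printed <code>NoneType: None</code> because inside the hook there is no exception "currently being handled":)</p>

```python
import sys
import traceback

def save_error(exctype, value, tb):
    # Same (type, value, traceback) triple the hook receives; join the
    # returned list to get the full formatted traceback as one string.
    return "".join(traceback.format_exception(exctype, value, tb))

# Demonstrate with a real traceback object instead of installing the hook:
try:
    1 / 0
except ZeroDivisionError:
    trace_text = save_error(*sys.exc_info())
```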
|
<python><exception><logging><error-handling><unhandled-exception>
|
2023-03-31 14:53:28
| 1
| 663
|
user2355903
|
75,899,868
| 2,507,567
|
Cannot install pyside6 from pip
|
<p>I'm looking at the <a href="https://doc.qt.io/qtforpython/quickstart.html" rel="nofollow noreferrer">Qt for Python</a> documentation on how to install PySide6 and it should be simple enough:</p>
<pre><code>pip install pyside6
</code></pre>
<p>It doesn't work, though:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement pyside6 (from versions: none)
ERROR: No matching distribution found for pyside6
</code></pre>
<p>I looked for it at <a href="https://pypi.org" rel="nofollow noreferrer">pypi.org</a> and found out the name of the package is <strong>P</strong>y<strong>S</strong>ide6, capitalized, not pyside6. Still, I tried it again, but had no luck:</p>
<pre><code>$ sudo pip install PySide6
ERROR: Could not find a version that satisfies the requirement PySide6 (from versions: none)
ERROR: No matching distribution found for PySide6
</code></pre>
<p>Even if I explicitly pass <code>--index-url</code> to pip, as described in the official documentation, pip can't find pyside6 to install:</p>
<pre><code>$ sudo pip install --index-url=https://download.qt.io/snapshots/ci/pyside/6.5/latest/ PySide6 --trusted-host download.qt.io
Looking in indexes: https://download.qt.io/snapshots/ci/pyside/6.0.0/latest
ERROR: Could not find a version that satisfies the requirement pyside6 (from versions: none)
ERROR: No matching distribution found for pyside6
</code></pre>
<p>(I tried several combinations of urls and package names)</p>
<p>Any idea what's going on? Other pyside versions are apparently available. Not pyside6, though.</p>
<h3>System information</h3>
<pre><code>$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 11 (bullseye)
Release: 11
Codename: bullseye
$ python3 --version
Python 3.9.2
$ pip --version
pip 23.0.1 from /usr/local/lib/python3.9/dist-packages/pip (python 3.9)
</code></pre>
|
<python><qt><pip>
|
2023-03-31 14:46:38
| 3
| 1,986
|
RomΓ‘rio
|
75,899,862
| 14,728,691
|
How to get files and filenames persisted in k8s volume that come from SFTP server?
|
<p>I have deployed an SFTP server in a pod where data is persisted in a persistent volume.</p>
<p>These files are sql dump files.</p>
<p>I would like to do the following in Kubernetes :</p>
<ol>
<li>Set up a Kafka producer and consumer in Python</li>
<li>Write a Python script to monitor the persistent volume for new dump files</li>
<li>Connect the Python script with the Kafka producer to send messages when a new file is detected</li>
<li>Set up a Kafka consumer to listen for these messages and trigger the impdp process</li>
</ol>
<p>I am stuck at point 2: "Write a Python script to monitor the persistent volume for new dump files".</p>
<p>I tried the following with the Kubernetes API for Python:</p>
<pre><code>from kubernetes import client, config
# Load Kubernetes configuration
config.load_kube_config()
# Create Kubernetes API client
api_client = client.CoreV1Api()
# Retrieve data from persistent volume claim
data = api_client.read_namespaced_persistent_volume_claim(name='01-claim1', namespace='dev-01')
</code></pre>
<p>But it doesn't list files that are in SFTP or in persistent volume.</p>
<p>What is the best way to know when a new file from SFTP arrives in the persistent volume, and to get this file into another pod that will execute a SQL command to import the dump into the database?</p>
<p>Maybe there is another way than using Python script (I think I cannot directly access the files in a Persistent Volume (PV) using the Kubernetes API, because the Kubernetes API is not designed for reading or writing files), or maybe it is not possible and I have to find something else.</p>
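<p>(Since the Kubernetes API does not expose the file contents of a PV, the usual approach is to mount the same volume into the watcher pod and scan the filesystem directly. A minimal stdlib polling sketch, with <code>/tmp/watch-demo</code> standing in for the volume mount path:)</p>

```python
from pathlib import Path

WATCH_DIR = Path("/tmp/watch-demo")  # stand-in for the PV mount path
WATCH_DIR.mkdir(parents=True, exist_ok=True)

def scan(seen):
    """Return the set of dump files that appeared since the last scan."""
    current = {p.name for p in WATCH_DIR.glob("*.dmp")}
    return current - seen, current

seen = set()
(WATCH_DIR / "first.dmp").touch()
new_files, seen = scan(seen)  # call in a loop (or use inotify/watchdog)
```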
|
<python><kubernetes><sftp>
|
2023-03-31 14:46:00
| 1
| 405
|
jos97
|
75,899,840
| 12,760,550
|
Create extra rows using date column pandas dataframe
|
<p>Imagine I have the following data:</p>
<pre><code>ID Leave Type Start Date End Date
1 Sick 2022-01-01 2022-01-01
1 Holiday 2023-03-28
2 Holiday 2023-01-01 2023-01-02
3 Work 2023-01-01 2023-01-01
</code></pre>
<p>I need to find a way to confirm Start Date and End Date have the same value. If not, it needs to count the number of days the End Date is ahead and, for each day, create a row adding 1 day, always matching Start Date and End Date. If End Date is blank, it should create rows until it reaches 2023-03-31 (as in the example below). This way, the resulting data is:</p>
<pre><code>ID Leave Type Start Date End Date
1 Sick 2022-01-01 2022-01-01
1 Holiday 2023-03-28 2023-03-28
1 Holiday 2023-03-29 2023-03-29
1 Holiday 2023-03-30 2023-03-30
1 Holiday 2023-03-31 2023-03-31
2 Holiday 2023-01-01 2023-01-01
2 Holiday 2023-01-02 2023-01-02
3 Work 2023-01-01 2023-01-01
</code></pre>
<p>Thank you!</p>
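<p>(One way to do this, sketched on a two-row slice, with the cutoff date taken from the example output: fill missing end dates, build a per-row <code>date_range</code>, then <code>explode</code>:)</p>

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [1, 2],
    "Leave Type": ["Holiday", "Holiday"],
    "Start Date": pd.to_datetime(["2023-03-28", "2023-01-01"]),
    "End Date": pd.to_datetime([None, "2023-01-02"]),
})

cutoff = pd.Timestamp("2023-03-31")
df["End Date"] = df["End Date"].fillna(cutoff)

# One timestamp per covered day, then one row per timestamp.
df["Start Date"] = df.apply(
    lambda r: pd.date_range(r["Start Date"], r["End Date"]), axis=1)
df = df.explode("Start Date").reset_index(drop=True)
df["End Date"] = df["Start Date"]
```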
|
<python><pandas><date><row>
|
2023-03-31 14:43:47
| 3
| 619
|
Paulo Cortez
|
75,899,402
| 20,793,070
|
Multi filter by 2 columns and display largest results with Polars
|
<p>I have a df for my work with 3 main columns (cid1, cid2, cid3) and more columns (cid4, cid5, etc.). cid1 and cid2 are int; the other columns are float.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.from_repr("""
ββββββββ¬βββββββ¬βββββββ¬βββββββ¬βββββββ¬βββββββ
β cid1 β cid2 β cid3 β cid4 β cid5 β cid6 β
β --- β --- β --- β --- β --- β --- β
β i64 β i64 β f64 β f64 β f64 β f64 β
ββββββββͺβββββββͺβββββββͺβββββββͺβββββββͺβββββββ‘
β 1 β 5 β 1.0 β 4.0 β 4.0 β 1.0 β
β 1 β 5 β 2.0 β 5.0 β 5.0 β 9.0 β
β 1 β 5 β 9.0 β 6.0 β 4.0 β 9.0 β
β 3 β 7 β 1.0 β 7.0 β 9.0 β 1.0 β
β 3 β 7 β 3.0 β 7.0 β 9.0 β 1.0 β
β 3 β 7 β 8.0 β 8.0 β 3.0 β 1.0 β
ββββββββ΄βββββββ΄βββββββ΄βββββββ΄βββββββ΄βββββββ
""")
</code></pre>
<p>Each combination of cid1 and cid2 is a workset for analysis and for each workset I have some values cid3.</p>
<p>I can take df with only maximal values of cid3:</p>
<pre class="lang-py prettyprint-override"><code>df.filter(pl.col("cid3") == pl.col("cid3").max().over("cid1", "cid2"))
</code></pre>
<pre><code>shape: (2, 6)
ββββββββ¬βββββββ¬βββββββ¬βββββββ¬βββββββ¬βββββββ
β cid1 β cid2 β cid3 β cid4 β cid5 β cid6 β
β --- β --- β --- β --- β --- β --- β
β i64 β i64 β f64 β f64 β f64 β f64 β
ββββββββͺβββββββͺβββββββͺβββββββͺβββββββͺβββββββ‘
β 1 β 5 β 9.0 β 6.0 β 4.0 β 9.0 β
β 3 β 7 β 8.0 β 8.0 β 3.0 β 1.0 β
ββββββββ΄βββββββ΄βββββββ΄βββββββ΄βββββββ΄βββββββ
</code></pre>
<p>But I would like to take two maximal values of cid3 for each workset for this result:</p>
<pre><code>shape: (4, 6)
ββββββββ¬βββββββ¬βββββββ¬βββββββ¬βββββββ¬βββββββ
β cid1 β cid2 β cid3 β cid4 β cid5 β cid6 β
β --- β --- β --- β --- β --- β --- β
β i64 β i64 β f64 β f64 β f64 β f64 β
ββββββββͺβββββββͺβββββββͺβββββββͺβββββββͺβββββββ‘
β 1 β 5 β 2.0 β 5.0 β 5.0 β 9.0 β
β 1 β 5 β 9.0 β 6.0 β 4.0 β 9.0 β
β 3 β 7 β 3.0 β 7.0 β 9.0 β 1.0 β
β 3 β 7 β 8.0 β 8.0 β 3.0 β 1.0 β
ββββββββ΄βββββββ΄βββββββ΄βββββββ΄βββββββ΄βββββββ
</code></pre>
<p>(Two maximal values of cid3 is an example, for my actual task I want 10 maximal values and 5 minimal values.)</p>
|
<python><dataframe><python-polars>
|
2023-03-31 14:00:01
| 2
| 433
|
Jahspear
|
75,899,314
| 2,386,605
|
How can I serve ML models quickly and with a low latency
|
<p>Assume a user connects via a Websocket connection to a server, which serves a personalized typescript function based on a personalized JSON file</p>
<p>So when a user connects,</p>
<ul>
<li>the personalized JSON file is loaded from an S3-like bucket (around 60-100 MB per user)</li>
<li>and when he types, Typescript/JavaScript/Python code is executed which returns a string reply, and the JSON-like data structure gets updated</li>
<li>when the user disconnects the JSON gets persisted back to the S3-like bucket.</li>
</ul>
<p>In total, think of about 10,000 users, so 600 GB overall.</p>
<p>It should</p>
<ul>
<li>spin up fast for a user,</li>
<li>should be very scalable given the number of users (such that we do not waste money) and</li>
<li>have a global latency of a few tens of ms.</li>
</ul>
<p>Is that possible? If so, what architecture seems to be the most fitting?</p>
|
<python><machine-learning><cdn><scalability><low-latency>
|
2023-03-31 13:50:38
| 1
| 879
|
tobias
|
75,899,307
| 21,351,146
|
How do I escape JSON strings in python mysql.connector?
|
<p>To clarify I am using Python 3.10.9 along with mysql.connector</p>
<p>I am trying to insert a JSON string into my DB but I get a syntax error</p>
<pre><code>func_locals -> {'protocol_id': '1',
'sock_object': '<create_socket.CreateSocket object at 0x7f8f3e1ae0>',
'sock': "<socket.socket fd=16, family=AddressFamily.AF_INET,
type=SocketKind.SOCK_STREAM, proto=0, laddr=('0.0.0.0', 0)>"}
Error from program_logging_table => 1064 (42000): You have an error in your SQL syntax;
check the manual that corresponds to your MariaDB server version for the right syntax
to use near '0.0.0.0', 0)>"}',
'<socket.socket fd=16, family=AddressFamily.AF_INET, type=S...' at line 1
</code></pre>
<p>Per my understanding, the problem is the single quotes being used inside the JSON: <code>'</code></p>
<p>Should I be using something along the lines of <code>db_connection._cmysql.escape_string()</code></p>
<p>I get an error in the editor <code>Access to a protected member _cmysql of a class</code></p>
<p>EDIT:
I am explicitly converting a dict into JSON</p>
<pre><code> locals_json = json.dumps(func_locals)
print(f'locals_json -> {locals_json}')
</code></pre>
<p>And the mysql table is expecting JSON</p>
<pre><code>CREATE TABLE program_logging_table(
...
program_logging_parameters JSON,
</code></pre>
<pre><code>func_locals -> {'protocol_id': '1', 'sock_object':
'<create_socket.CreateSocket object at 0x7fb51edc00>', 'sock':
"<socket.socket fd=16, family=AddressFamily.AF_INET,
type=SocketKind.SOCK_STREAM, proto=0, laddr=('0.0.0.0', 0)>"}
locals_json -> {"protocol_id": "1", "sock_object":
"<create_socket.CreateSocket object at 0x7fb51edc00>", "sock":
"<socket.socket fd=16, family=AddressFamily.AF_INET,
type=SocketKind.SOCK_STREAM, proto=0, laddr=('0.0.0.0', 0)>"}
</code></pre>
<p>EDIT2:
I'm adding the SQL statement for further clarity:</p>
<pre><code> f"INSERT INTO program_logging_table ("
f"program_logging_function_name, "
f"program_logging_function_description, "
f"program_logging_parameters, "
f"program_logging_return_value, "
f"program_logging_initial_timestamp, "
f"program_logging_final_timestamp)"
f"VALUES( "
f"'{self.function_name}', "
f"'{self.function_description}', "
f"'{self.variables_dictionary}', "
f"'{self.return_value}', "
f"'{self.initial_timestamp}', "
f"'{self.final_timestamp}');"
</code></pre>
<p>Where variables dictionary would the <code>locals_json</code> from my previous edit</p>
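<p>(The standard fix, sketched without a live database: pass the values as a separate parameter tuple with <code>%s</code> placeholders and let the connector do the escaping. The f-string interpolation is what breaks on the embedded quotes, and it is also an injection risk. Column selection here is abbreviated for illustration:)</p>

```python
import json

record = {"protocol_id": "1", "sock": "laddr=('0.0.0.0', 0)"}  # quotes inside
params_json = json.dumps(record)

sql = (
    "INSERT INTO program_logging_table "
    "(program_logging_function_name, program_logging_parameters) "
    "VALUES (%s, %s)"
)
params = ("my_function", params_json)
# cursor.execute(sql, params)  # the driver quotes/escapes each value safely
```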
|
<python><mysql><mariadb><mysql-python><mysql-connector>
|
2023-03-31 13:50:05
| 0
| 301
|
userh897
|
75,899,278
| 11,261,546
|
Pybind11 default values for Custom type casters
|
<p>I have a function:</p>
<pre><code>void my_functions(int a, some_type b);
</code></pre>
<p>And I want to bind it using only the default argument for <code>b</code>:</p>
<pre><code> m.def("my_functions", &my_functions, pb::arg("a"), pb::arg("b") = function_that_returns_my_type()); // just default for b
</code></pre>
<p>What's different from my <a href="https://stackoverflow.com/q/75890415/11261546">previous question</a> is that some_type has an automatic custom type caster declared in <code>PYBIND11_NAMESPACE</code></p>
<p>This compiles, but when I call in python</p>
<pre><code>my_functions(5)
</code></pre>
<p>I get:</p>
<pre><code>TypeError: my_functions(): incompatible function arguments. The following argument types are supported:
1. (a: int, b: some_type = python_object_custom_casted)
</code></pre>
|
<python><c++><pybind11><default-arguments>
|
2023-03-31 13:46:52
| 0
| 1,551
|
Ivan
|
75,899,186
| 34,935
|
How to configure dependabot to check multiple files?
|
<p>The <a href="https://github.com/jazzband/pip-tools#cross-environment-usage-of-requirementsinrequirementstxt-and-pip-compile" rel="nofollow noreferrer">official recommendation from pip-tools for cross-compilation</a> is:</p>
<blockquote>
<p>As the resulting requirements.txt can differ for each environment, users must execute pip-compile on each Python environment separately to generate a requirements.txt valid for each said environment.</p>
</blockquote>
<p>I have multiple requirements files. One is called <code>requirements.txt</code>, another is <code>requirements-silicon.txt</code></p>
<p>I have dependabot <a href="https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file" rel="nofollow noreferrer">configured</a> on github, but how do I get it to check multiple files?</p>
|
<python><github><dependabot>
|
2023-03-31 13:37:23
| 1
| 21,683
|
dfrankow
|
75,899,158
| 8,801,879
|
Shap summary plots for XGBoost with categorical data inputs
|
<p>XGBoost supports inputting features as categories directly, which is very useful when there are a lot of categorical variables. This doesn't seem to be compatible with Shap:</p>
<pre><code>import pandas as pd
import xgboost
import shap
# Test data
test_data = pd.DataFrame({'target':[23,42,58,29,28],
'feature_1' : [38, 83, 38, 28, 57],
'feature_2' : ['A', 'B', 'A', 'C','A']})
test_data['feature_2'] = test_data['feature_2'].astype('category')
# Fit xgboost
model = xgboost.XGBRegressor(enable_categorical=True,
tree_method='hist')
model.fit(test_data.drop('target', axis=1), test_data['target'] )
# Explain with Shap
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(test_data)
</code></pre>
<p>Throws an error: <strong>ValueError: DataFrame.dtypes for data must be int, float, bool or category.</strong></p>
<p>Is it possible to use Shap in this situation?</p>
|
<python><xgboost><shap>
|
2023-03-31 13:34:43
| 4
| 673
|
prmlmu
|
75,898,644
| 4,903,479
|
Bayesian filtration technique for physics based model tuning
|
<p>I am new to Bayesian filtering techniques. It would be helpful if you could explain the above concept in simple terms.
I am looking for guidance on how to use Bayesian filtering techniques for physics-based model tuning.
Thanks in anticipation.</p>
|
<python><bayesian><kalman-filter><processmodel>
|
2023-03-31 12:40:30
| 0
| 583
|
shan
|
75,898,563
| 10,522,495
|
How to scrape only the text of reviews and avoid content from other elements?
|
<p>I am trying to extract reviews only from the webpages downloaded locally.</p>
<p>WebPage Link: <a href="https://www.airbnb.co.in/rooms/605371928419351152?adults=1&category_tag=Tag%3A677&children=0&enable_m3_private_room=false&infants=0&pets=0&search_mode=flex_destinations_search&check_in=2023-04-09&check_out=2023-04-14&federated_search_id=da4d5c1e-7ad2-4539-8658-5f27dde826f8&source_impression_id=p3_1680264622_sNnLDFQJLlbBR4%2Fw" rel="nofollow noreferrer">https://www.airbnb.co.in/rooms/605371928419351152?adults=1&category_tag=Tag%3A677&children=0&enable_m3_private_room=false&infants=0&pets=0&search_mode=flex_destinations_search&check_in=2023-04-09&check_out=2023-04-14&federated_search_id=da4d5c1e-7ad2-4539-8658-5f27dde826f8&source_impression_id=p3_1680264622_sNnLDFQJLlbBR4%2Fw</a></p>
<p>I tried the following approach:</p>
<pre><code># Loop through all files in the folder
for filename in os.listdir(folder_path):
try:
with open(os.path.join(folder_path, filename), encoding="utf8") as f:
html_content = f.read()
# Parse the HTML content using BeautifulSoup
soup = BeautifulSoup(html_content, "html.parser")
# Find all the tags with class "container"
containers = soup.find_all(class_='ll4r2nl dir dir-ltr')
for container in containers:
print(container.text)
print('**********************************************')
except Exception as error:
continue
</code></pre>
<p>I am able to get the reviews but there is other unwanted text also. Please help me to get reviews only.</p>
<p>Current Output</p>
<pre><code>Jannat blends the most luxurious backdrop of your fantasy with natureβs incredible marvels to create a tranquil utopia that caters to your comfort where Breakfast is complimentary! This 3Bed pool villa in Nashik is a hidden paradise waiting to dazzle you with its tranquil charm.Surrounded by nature as far as the eyes can see,the open lawns within the premises give you all the space you need to take a refreshing walk,practice your morning asanas or indulge your kids in a fun game of catch&cook.The spaceβ’This villa offers 3 spacious bedrooms and ensuite bathrooms in 2 bedrooms, giving you ample space to have a luxurious getaway from the congested city life. With an elderly-friendly approach in mind, 1 bedroom of the villa is situated on the ground level, where your elderly parents can rest in comfort without having to make their way up the stairs. β’The 2 bedrooms on the first floor can be occupied by younger couples and children. These rooms offer spectacular views of Waldevi lake so you can wake up and head to the windows allowing the cool breeze to gently embrace you. Spend some time in the living room gazing at the garden outside or turn the vast room into your personal dancefloor or karaoke zone. As beautiful as its surroundings are, the villa itself charms you with its decor and makes you want to stay indoors. β’ Challenge your friends and family members with exciting and fun games, such as table tennis, table hockey, carrom and other board games and discover whoβs the best player in the group. Dip your feet in the swimming pool, or take a deep dive and wash away the heat during a sweltering afternoon. If it's a chilly winter's day, head out to the loungers after breakfast to soak in some sun. No matter the weather, this villa has everything indoors and outdoors to create the perfect vacation setting for you! Missing out on your workouts while on
**********************************************
This was a great find for us. Akash was our host for the stay. He was quite proactive in sharing with us instructions/information pertaining to the stay, followed-through diligently with the requisite actions, and was very responsive to the queries/additional-requests placed by us. Coming to the property itself - it's exactly as it looks in the pictures and perhaps prettier at this time of the year when there is a chill in the weather. The views from the living room and the 1st floor rooms are amazing. We couldnt try the pool owing to the chilly weather but it's kept quite clean and so is the garden area. It's around 40 mins drive from most of the to-visit places in/around nashik. The place is quite secure and the staff stays at the property.Coming to the people, Sonu and Shobha did their best to make us feel-at-home. Jeetendra was very keen to make this stay a foodie's delight with his expansive menu and the barbeque. And ofcourse we loved hanging out with Cocktail. Thank you all!!
**********************************************
We had an amazing stay. The staff was very competent and made our stay fabulous. We will highly recommend this place to our family and friends.
**********************************************
We really enjoyed two days at this property. The villa as well as the entire property are well maintained. The view of lake as well as mountains are breathtaking. Very helpful care takers and food food.
**********************************************
Enjoy boating on the backwaters of Waldevi Lake, a 20-minute drive complemented by amazing views. Adventure and culture combine at the Pandav Leni Caves, where you can set up your hiking trails, 20 minutes from the villa. Your lessons in history await at the Gadgada, Ranjangiri and Bahula forts, 5 to 15 minutes from villa.
**********************************************
Hi! Iβm Shreya and Iβm eager to host families and friends in our private villas, on behalf of Saffron Stays , a leading private villa rental & luxury hospitality start-up. Come rain or shine, I absolutely love travelling. From the exhilarating reverse bungee in Panchgani to the calming hill to Munnar, I love to explore places across the country that are brimming with rich culture and heritage! Mentally Iβm always planning a trip to a new beach destination, so share your reccos with me!
**********************************************
Hi! Iβm Shreya and Iβm eager to host families and friends in our private villas, on behalf of Saffron Stays , a leading private villa rental & luxury hospitality start-up. Comeβ¦
**********************************************
we have a care team present on the villa to take care.
**********************************************
Superhosts are experienced, highly rated hosts who are committed to providing great stays for guests.
**********************************************
English, ΰ€Ήΰ€Ώΰ€¨ΰ₯ΰ€¦ΰ₯
**********************************************
100%
**********************************************
within an hour
**********************************************
Cancel before 2 Apr for a partial refund.
**********************************************
Review the Hostβs full cancellation policy which applies even if you cancel for illness or disruptions caused by COVID-19.
**********************************************
</code></pre>
<p>Expected Output</p>
<pre><code>This was a great find for us. Akash was our host for the stay. He was quite proactive in sharing with us instructions/information pertaining to the stay, followed-through diligently with the requisite actions, and was very responsive to the queries/additional-requests placed by us. Coming to the property itself - it's exactly as it looks in the pictures and perhaps prettier at this time of the year when there is a chill in the weather. The views from the living room and the 1st floor rooms are amazing. We couldnt try the pool owing to the chilly weather but it's kept quite clean and so is the garden area. It's around 40 mins drive from most of the to-visit places in/around nashik. The place is quite secure and the staff stays at the property.Coming to the people, Sonu and Shobha did their best to make us feel-at-home. Jeetendra was very keen to make this stay a foodie's delight with his expansive menu and the barbeque. And ofcourse we loved hanging out with Cocktail. Thank you all!!
**********************************************
We had an amazing stay. The staff was very competent and made our stay fabulous. We will highly recommend this place to our family and friends.
**********************************************
We really enjoyed two days at this property. The villa as well as the entire property are well maintained. The view of lake as well as mountains are breathtaking. Very helpful care takers and food food.
</code></pre>
|
<python><web-scraping><beautifulsoup>
|
2023-03-31 12:32:13
| 1
| 401
|
Vinay Sharma
|
75,898,512
| 19,369,393
|
How to instruct autopep8 to remove line breaks?
|
<p>How to instruct autopep8 python formatter to remove line breaks if the resulting line after removing the line breaks does not exceed the maximum allowed line length?
For example, I have the following code that was previously formatted with <code>--max-line-length 80</code>:</p>
<pre><code>def function(a, b, c):
pass
function('VERYLOOOOOOOOOOOOONGSTRING',
'VERYLOOOOOOOOOOOOONGSTRING', 'VERYLOOOOOOOOOOOOONGSTRING')
</code></pre>
<p>Now maximum line length is 120. I want <code>autopep8 -i --max-line-length 120 test.py</code> to format it like this:</p>
<pre><code>def function(a, b, c):
pass
function('VERYLOOOOOOOOOOOOONGSTRING', 'VERYLOOOOOOOOOOOOONGSTRING', 'VERYLOOOOOOOOOOOOONGSTRING')
</code></pre>
<p>But it does not remove line breaks. Adding multiple <code>--aggressive</code> options does not help either.</p>
<p>If it's not possible to do with autopep8, what other formatters/software can I use to do it?
Thanks in advance!</p>
|
<python><autopep8>
|
2023-03-31 12:26:30
| 0
| 365
|
g00dds
|
75,898,501
| 8,081,597
|
Is there an easy "tqdm like" way to make a for loop to run multiprocess?
|
<p>I have a for loop in Python that I want to run in multiple processes. I know I can use the <code>multiprocessing</code> module to achieve this, but I was wondering if there is a library that allows me to do this with a simple syntax similar to how <code>tqdm</code> works. Here is what I want to achieve:</p>
<pre class="lang-py prettyprint-override"><code>for i in some_multiprocess_library(range(100), n_processes=4):
some_func(i)
</code></pre>
|
<python><multithreading><multiprocessing>
|
2023-03-31 12:25:33
| 1
| 306
|
Adar Cohen
|
75,898,467
| 4,336,593
|
Transforming annotated csv (influxdb) to normal csv file using python script
|
<p>I have a <code>CSV</code> file that was downloaded from <code>InfluxDB UI</code>. I want to extract useful data from the downloaded file. A snippet of the downloaded file is as follows:</p>
<pre><code>#group FALSE FALSE TRUE TRUE FALSE FALSE TRUE TRUE TRUE TRUE TRUE
#datatype string long dateTime:RFC3339 dateTime:RFC3339 dateTime:RFC3339 double string string string string string
#default mean
result table _start _stop _time _value _field _measurement smart_module serial type
0 2023-03-31T08:12:40.697076925Z 2023-03-31T09:12:40.697076925Z 2023-03-31T08:20:00Z 0 sm_alarm system_test 8 2.14301E+11 sm_extended
0 2023-03-31T08:12:40.697076925Z 2023-03-31T09:12:40.697076925Z 2023-03-31T08:40:00Z 0 sm_alarm system_test 8 2.14301E+11 sm_extended
0 2023-03-31T08:12:40.697076925Z 2023-03-31T09:12:40.697076925Z 2023-03-31T09:00:00Z 0 sm_alarm system_test 8 2.14301E+11 sm_extended
0 2023-03-31T08:12:40.697076925Z 2023-03-31T09:12:40.697076925Z 2023-03-31T09:12:40.697076925Z 0 sm_alarm system_test 8 2.14301E+11 sm_extended
</code></pre>
<p>I'd like to have the output CSV as follows:</p>
<pre><code>_time sm_alarm next_column next_column ....... ...........
2023-03-29T08:41:15Z 0
</code></pre>
<p>Please note that <code>sm_alarm</code> is only one field among 9 others (that are under <code>_filed</code>).</p>
<p>I tried to do with the following script, but could not solve my problem.</p>
<pre><code>import csv
# Specify the input and output file names
input_file = 'influx.csv'
output_file = 'output.csv'
try:
# Open the input file for reading
with open(input_file, 'r') as csv_file:
# Create a CSV reader object
csv_reader = csv.reader(csv_file)
# Skip the first row (header)
next(csv_reader)
# Open the output file for writing
with open(output_file, 'w', newline='') as output_csv:
# Create a CSV writer object
csv_writer = csv.writer(output_csv)
# Write the header row
csv_writer.writerow(['_time', '_field', '_value'])
# Iterate over the input file and write the rows to the output file
for row in csv_reader:
# Check if the row is not empty
if row:
# Split the fields
fields = row[0].split(',')
# Write the row to the output file
csv_writer.writerow(fields)
print(f'{input_file} converted to {output_file} successfully!')
except FileNotFoundError:
print(f'Error: File {input_file} not found.')
except Exception as e:
print(f'Error: {e}')
</code></pre>
<p>Thank you.</p>
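<p>Since every annotation line in an InfluxDB annotated CSV starts with <code>#</code>, one hedged alternative to hand-rolling the parsing (assuming the real export is comma-separated, as Flux annotated CSV is) is to let pandas skip those lines and then pivot <code>_field</code>/<code>_value</code> into one column per field:</p>

```python
import io
import pandas as pd

# Small stand-in for the downloaded file (columns trimmed for illustration)
raw = """#group,false,false,true,true
#datatype,string,long,dateTime:RFC3339,double,string
#default,mean,,,,
result,table,_time,_value,_field
,0,2023-03-31T08:20:00Z,0,sm_alarm
,0,2023-03-31T08:20:00Z,1,sm_other
,0,2023-03-31T08:40:00Z,0,sm_alarm
,0,2023-03-31T08:40:00Z,2,sm_other
"""

# comment="#" drops the annotation rows before parsing
df = pd.read_csv(io.StringIO(raw), comment="#")

# One row per timestamp, one column per field
wide = df.pivot(index="_time", columns="_field", values="_value").reset_index()
print(wide)
```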
|
<python><csv><influxdb><influxdb-python>
|
2023-03-31 12:20:44
| 1
| 858
|
santobedi
|
75,898,276
| 3,018,860
|
OpenAI API error 429: "You exceeded your current quota, please check your plan and billing details"
|
<p>I'm making a Python script to use OpenAI via its API. However, I'm getting this error:</p>
<blockquote>
<p>openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details</p>
</blockquote>
<p>My script is the following:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3.8
# -*- coding: utf-8 -*-
import openai
openai.api_key = "<My PAI Key>"
completion = openai.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[
{"role": "user", "content": "Tell the world about the ChatGPT API in the style of a pirate."}
]
)
print(completion.choices[0].message.content)
</code></pre>
<p>I'm declaring the shebang <code>python3.8</code>, because I'm using <a href="https://github.com/pyenv/pyenv" rel="noreferrer">pyenv</a>. I think it should work, since I did 0 API requests, so I'm assuming there's an error in my code.</p>
|
<python><prompt><openai-api><completion><chatgpt-api>
|
2023-03-31 11:58:04
| 5
| 2,834
|
Unix
|
75,898,130
| 21,787,377
|
What is a better way to use 'request' in a ModelChoiceField
|
<p>Is there any way to use <code>user=request.user</code> inside <code>ModelChoiceField</code>? When I use this method I get an error: <code>NameError: name 'request' is not defined</code>.</p>
<pre><code>class AlbumForm(forms.Form):
    album = ModelChoiceField(queryset=Album.objects.filter(user=request.user))
</code></pre>
<p>The model:</p>
<pre><code>class Album(models.Model):
name = models.CharField(max_length=20)
user = models.ForeignKey(User, on_delete=models.CASCADE)
</code></pre>
<pre><code>class CreateOurColumn(CreateView):
model = Column
success_url = reverse_lazy('List-Of-Column')
form_class = ColumnForm
template_name = 'create_column.html'
def get_context_data(self, *args, **kwargs):
context = super(CreateOurColumn, self).get_context_data(**kwargs)
context['formset'] = ColumnFormSet(queryset=Column.objects.none())
context['album_form'] = AlbumForm()
return context
def post(self, request, *args, **kwargs):
formset = ColumnFormSet(request.POST)
album_form = AlbumForm(data=request.POST)
if formset.is_valid() and album_form.is_valid():
return self.form_valid(formset, album_form)
def form_valid(self, formset, album_form):
album = album_form.cleaned_data['album']
instances = formset.save(commit=False)
for instance in instances:
instance.album = album
instance.save()
        return HttpResponseRedirect('List-Of-Column')
</code></pre>
|
<python><django>
|
2023-03-31 11:41:26
| 2
| 305
|
Adamu Abdulkarim Dee
|
75,898,107
| 17,596,179
|
Getting result from select query with dbt jinja
|
<p>So I'm working with a duckdb database connected with dbt. I can execute my query and it completes successfully; the problem I now face is getting the result from this query.
My sql file looks like the following.</p>
<pre><code>{%- call statement('all', fetch_result=True) -%}
select * from {{ source("energy_sellers", "energy") }}
{%- endcall -%}
{%- set all = load_result('all') -%}
</code></pre>
<p>But this returns the following error</p>
<pre><code>11:31:59 Running with dbt=1.4.5
11:32:00 Found 1 model, 16 tests, 0 snapshots, 0 analyses, 409 macros, 0 operations, 0 seed files, 1 source, 0 exposures, 0 metrics
11:32:01
11:32:01 1 of 1 START sql table model energy_sellers.energy_sellers ..................... [RUN]
11:32:01 1 of 1 ERROR creating sql table model energy_sellers.energy_sellers ............ [ERROR in 0.30s]
11:32:01
11:32:01 Finished running 1 table model in 0 hours 0 minutes and 1.34 seconds (1.34s).
11:32:01
11:32:01 Completed with 1 error and 0 warnings:
11:32:01
11:32:01 Runtime Error in model energy_sellers (models\example\energy_sellers.sql)
11:32:01 Parser Error: syntax error at or near ")"
11:32:01 LINE 10: );
11:32:01 ^
11:32:01
11:32:01 Done. PASS=0 WARN=0 ERROR=1 SKIP=0 TOTAL=1
</code></pre>
<p>The thing is my file has 7 lines, yet it says there is an error on line 10.
My file is located in <code>models/example/energy_sellers.sql</code>
Any help is greatly appreciated.</p>
<p>But when my file looks like this it doesn't produce an error.</p>
<pre><code>select * from {{ source("energy_sellers", "energy") }}
</code></pre>
<p>Then I get this as output</p>
<pre><code>12:49:03 Running with dbt=1.4.5
12:49:05 Found 1 model, 16 tests, 0 snapshots, 0 analyses, 409 macros, 0 operations, 0 seed files, 1 source, 0 exposures, 0 metrics
12:49:05
12:49:06 Concurrency: 1 threads (target='dev')
12:49:06
12:49:06 1 of 1 START sql table model energy_sellers.energy_sellers ..................... [RUN]
12:49:08 1 of 1 OK created sql table model energy_sellers.energy_sellers ................ [OK in 1.56s]
12:49:08
12:49:08 Finished running 1 table model in 0 hours 0 minutes and 3.78 seconds (3.78s).
12:49:08
12:49:08 Completed successfully
12:49:08
12:49:08 Done. PASS=1 WARN=0 ERROR=0 SKIP=0 TOTAL=1
</code></pre>
<p>It is a table btw.</p>
|
<python><sql><jinja2><dbt><duckdb>
|
2023-03-31 11:39:34
| 1
| 437
|
david backx
|
75,898,033
| 19,003,861
|
can only concatenate tuple (not "int") to tuple - sum up 2 variables after an if statement in django/python
|
<p>I am trying to sum 2 variables together and I am getting the error: 'can only concatenate tuple (not "int") to tuple' (error edited since the original post)</p>
<p>In summary, I have a sort of todo list. Every time an <code>action</code> is validated by creating a <code>Validation</code> object model, the <code>action</code> is given 1 point.</p>
<p>If all <code>actions</code> within the <code>task</code> have received 1 point, then the <code>task</code> is considered completed.</p>
<p>I am stuck at the sum of points.</p>
<p>I feel I need to write a sort of for loop after my if statement to add up each point in each action together. I tried different combos, but none seem to work.</p>
<p>Am I getting this wrong? (I am sure my code is also far from being optimal, so I won't be offended if you offer an alternative)</p>
<p><strong>models</strong></p>
<pre><code>class Action(models.Model):
name = models.CharField(verbose_name="Name",max_length=100, blank=True)
class ValidationModel(models.Model):
user = models.ForeignKey(UserProfile, blank=True, null=True, on_delete=models.CASCADE)
venue = models.ForeignKey(Action, blank=True, null=True, on_delete=models.CASCADE)
created_at = models.DateTimeField(auto_now_add=True, null=True, blank=True)
class Task(models.Model):
title = models.CharField(verbose_name="title",max_length=100, null=True, blank=True)
venue = models.ManyToManyField(Action, blank=True)
created_at = models.DateTimeField(auto_now_add=True, null=True, blank=True)
class TaskAccepted(models.Model):
name = models.ForeignKey(Task,null=True, blank=True, on_delete=models.SET_NULL, related_name='task_accepted')
user = models.ForeignKey(User, null=True, blank=True, on_delete=models.CASCADE)
accepted_on = models.DateTimeField(auto_now_add=True, null=True, blank=True)
</code></pre>
<p><strong>views</strong></p>
<pre><code>def function(request, taskaccepted_id):
instance_1 = Action.objects.filter(task__id=taskaccepted_id)
action_count = instance_1.count()
instance_2 = get_object_or_404(Task, pk=taskaccepted_id)
sum_completed =()
    for action in instance_1:
for in_action in action.validationmodel_set.all()[:1]:
latest_point = in_action.created_at
action_completed = in_action
if latest_point > instance_2.accepted_on:
action_completed = 1
else:
action_completed = 0
        sum_completed += action_completed #<-- can only concatenate tuple (not "int") to tuple
</code></pre>
|
<python><django><django-views>
|
2023-03-31 11:29:17
| 1
| 415
|
PhilM
|
75,898,010
| 6,734,243
|
How sphinx decides which files should go in _source folder at build time?
|
<p>I want to use <code>html_show_sourcelink</code> and <code>html_copy_source</code> to display the rst sources of my documentation files. From what I understood the files are copied in a subdirectory <code>_source</code> of the <code>_build/html</code> directory.</p>
<p>How does Sphinx decide which file should go there ?</p>
<p>I'm asking because in one of my project (<a href="https://github.com/openforis/sepal-doc" rel="nofollow noreferrer">https://github.com/openforis/sepal-doc</a>) a whole folder is ignored, breaking the underlying "source" button.</p>
|
<python><python-sphinx>
|
2023-03-31 11:27:15
| 0
| 2,670
|
Pierrick Rambaud
|
75,897,994
| 1,753,640
|
Python Regex to extract text between numbers
|
<p>I'd like to extract the text between digits. For example, if have text such as the following</p>
<pre><code>1964 ORDINARY shares
EXECUTORS OF JOANNA C RICHARDSON
100 ORDINARY shares
TG MARTIN
C MARTIN
7500 ORDINARY shares
ARCO LIMITED
</code></pre>
<p>I want to produce a list of 3 elements, where each element is the text between the numbers (including the starting number but not the next one); the final element simply runs to the end of the string, since no number follows it</p>
<pre><code>[
'1964 ORDINARY shares \nEXECUTORS OF JOANNA C RICHARDSON',
'100 ORDINARY shares \nTG MARTIN\nC MARTIN\n',
'7500 ORDINARY shares\nARCO LIMITED'
]
</code></pre>
<p>I tried doing this</p>
<pre><code>regex = r'\d(.+?)\d
re.findall(regex, a, re.DOTALL)
</code></pre>
<p>but it returned</p>
<pre><code>['9',
' ORDINARY shares\nEXECUTORS OF JOANNA C RICHARDSON\n',
'0 ORDINARY shares\nTG MARTIN\nC MARTIN\n',
'0']
</code></pre>
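<p>The lazy <code>\d(.+?)\d</code> stops at the very next digit anywhere in the text (the <code>9</code> inside <code>1964</code>, the <code>0</code> inside <code>100</code>), which explains the stray single-digit matches. One hedged sketch: anchor on numbers at the start of a line and carve the text into blocks with a lookahead:</p>

```python
import re

text = """1964 ORDINARY shares
EXECUTORS OF JOANNA C RICHARDSON
100 ORDINARY shares
TG MARTIN
C MARTIN
7500 ORDINARY shares
ARCO LIMITED"""

# (?m) makes ^ match at every line start, (?s) lets . cross newlines;
# each block runs from a leading number up to (but not including) the
# next line that starts with a digit, or the end of the string
blocks = re.findall(r"(?ms)^\d+ .*?(?=^\d|\Z)", text)
for b in blocks:
    print(repr(b))
```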
|
<python><regex>
|
2023-03-31 11:25:11
| 2
| 385
|
user1753640
|
75,897,897
| 2,392,151
|
pyarrow timestamp datatype error on parquet file
|
<p>I get this error when I read and count records in pandas using pyarrow. I do not want pyarrow to convert to timestamp[ns]; it can keep timestamp[us]. Is there an option to keep the timestamp as is? I am using pyarrow 11.0.0 and Python 3.10. Please advise.</p>
<p>code:</p>
<pre><code>import pyarrow as pa
import pyarrow.parquet as pq
import pyarrow.compute as pc
import pandas as pd
# Read the Parquet file into a PyArrow Table
table = pq.read_table('/Users/abc/Downloads/LOAD.parquet').to_pandas()
print(len(table))
</code></pre>
<p>error</p>
<pre><code>pyarrow.lib.ArrowInvalid: Casting from timestamp[us] to timestamp[ns] would result in out of bounds timestamp: 101999952000000000
</code></pre>
|
<python><pandas><parquet><pyarrow><fastparquet>
|
2023-03-31 11:14:50
| 1
| 363
|
Bill
|
75,897,896
| 17,487,457
|
IndexError: an index can only have a single ellipsis ('...')
|
<p>I have the following <code>numpy</code> 4D array:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
X = np.random.rand(5, 1, 10, 4)
# so for example, first 2 elements:
X[:2]
array([[[[0.27383924, 0.48908027, 0.64997038, 0.20394247],
[0.28361942, 0.33425344, 0.27687327, 0.2549442 ],
[0.91655337, 0.77325791, 0.31945728, 0.82919328],
[0.83989813, 0.65384396, 0.13853182, 0.46299719],
[0.14700217, 0.37591964, 0.8545056 , 0.02064633],
[0.06186759, 0.88515897, 0.84535195, 0.41697788],
[0.9180413 , 0.42174186, 0.55005076, 0.70799608],
[0.68446734, 0.41968608, 0.19013073, 0.16875907],
[0.44687274, 0.62684239, 0.27798323, 0.6355134 ],
[0.8489883 , 0.23450424, 0.53215137, 0.66814813]]],
[[[0.85473496, 0.70600538, 0.70862705, 0.89358703],
[0.80026841, 0.62795239, 0.06190375, 0.41356739],
[0.01792312, 0.82979946, 0.82117873, 0.14904196],
[0.10831188, 0.63943446, 0.20393167, 0.4058673 ],
[0.7966648 , 0.37533761, 0.73456441, 0.36841977],
[0.78459342, 0.34400906, 0.08502799, 0.2625697 ],
[0.57079306, 0.52439791, 0.6417777 , 0.02517128],
[0.84525549, 0.40980805, 0.20189425, 0.39604223],
[0.06425004, 0.75075354, 0.69504595, 0.76566498],
[0.01929747, 0.03261916, 0.32740129, 0.43836062]]]])
</code></pre>
<p>So to select the first two columns of each entry of <code>X</code>, I would do <code>X[..., :2]</code>. So in this example:</p>
<pre class="lang-py prettyprint-override"><code>X[..., :2][:2]
array([[[[0.27383924, 0.48908027],
[0.28361942, 0.33425344],
[0.91655337, 0.77325791],
[0.83989813, 0.65384396],
[0.14700217, 0.37591964],
[0.06186759, 0.88515897],
[0.9180413 , 0.42174186],
[0.68446734, 0.41968608],
[0.44687274, 0.62684239],
[0.8489883 , 0.23450424]]],
[[[0.85473496, 0.70600538],
[0.80026841, 0.62795239],
[0.01792312, 0.82979946],
[0.10831188, 0.63943446],
[0.7966648 , 0.37533761],
[0.78459342, 0.34400906],
[0.57079306, 0.52439791],
[0.84525549, 0.40980805],
[0.06425004, 0.75075354],
[0.01929747, 0.03261916]]]])
</code></pre>
<p>But then I am interested in the first 2 columns and the last column (kind of drop the third column).</p>
<pre class="lang-py prettyprint-override"><code>X[..., :2, ...,3]
IndexError: an index can only have a single ellipsis ('...')
</code></pre>
<p><strong>Required output</strong>:</p>
<pre class="lang-py prettyprint-override"><code># the case of first 2 elements of X
array([[[[0.27383924, 0.48908027, 0.20394247],
[0.28361942, 0.33425344, 0.2549442 ],
[0.91655337, 0.77325791, 0.82919328],
[0.83989813, 0.65384396, 0.46299719],
[0.14700217, 0.37591964, 0.02064633],
[0.06186759, 0.88515897, 0.41697788],
[0.9180413 , 0.42174186, 0.70799608],
[0.68446734, 0.41968608, 0.16875907],
[0.44687274, 0.62684239, 0.6355134 ],
[0.8489883 , 0.23450424, 0.66814813]]],
[[[0.85473496, 0.70600538, 0.89358703],
[0.80026841, 0.62795239, 0.41356739],
[0.01792312, 0.82979946, 0.14904196],
[0.10831188, 0.63943446, 0.4058673 ],
[0.7966648 , 0.37533761, 0.36841977],
[0.78459342, 0.34400906, 0.2625697 ],
[0.57079306, 0.52439791, 0.02517128],
[0.84525549, 0.40980805, 0.39604223],
[0.06425004, 0.75075354, 0.76566498],
[0.01929747, 0.03261916, 0.43836062]]]])
</code></pre>
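<p>An index may contain only one ellipsis, but the last axis can take a single integer-list index that selects columns 0, 1 and 3 in one step (or, equivalently, <code>np.delete</code> can drop the third column); a minimal sketch:</p>

```python
import numpy as np

X = np.random.rand(5, 1, 10, 4)

Y = X[..., [0, 1, 3]]          # keep columns 0, 1 and 3 of the last axis
Z = np.delete(X, 2, axis=-1)   # equivalently, drop the third column

print(Y.shape)  # (5, 1, 10, 3)
```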
|
<python><arrays><numpy><multidimensional-array><numpy-ndarray>
|
2023-03-31 11:14:49
| 1
| 305
|
Amina Umar
|
75,897,828
| 6,775,670
|
Should I remove old python installation when I recompile it from sources
|
<p>I once installed Python 3.7.1 from source. I need to update it to a version between 3.7.1 and 3.7.5, and back again.</p>
<p>I do not need both at the same time. I want to use the same directory for the installed Python. Should I remove the contents of the folder I previously specified with ./configure --prefix=... when compiling?</p>
<p>The question covers both the upgrading and downgrading cases; the only point is that the major version (3.7 here) remains the same, if this even matters.</p>
|
<python>
|
2023-03-31 11:07:20
| 0
| 1,312
|
Nikolay Prokopyev
|
75,897,551
| 2,859,206
|
Why is pandas.series.str.extract not working here but working elsewhere
|
<p>Why is <code>pandas.Series.str.extract(regex)</code> able to print the correct values, but won't assign the value to an existing column using indexing or np.where?</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(
[
['1', np.nan, np.nan, '1 Banana St, 69126 Heidelberg'],
['2', "Doloros St", 67898, '2 Choco Rd, 69412 Eberbach']],
columns=['id', "Street", 'Postcode', 'FullAddress']
)
m = df['Street'].isna()
print(df["FullAddress"].str.extract(r'(.+?),')) # prints street
print(df["FullAddress"].str.extract(r'\b(\d{5})\b')) # prints postcode
df.loc[m, 'Street'] = df.loc[m, 'FullAddress'].str.extract(r'(.+?),') # outputs NaN
df.loc[m, 'Postcode'] = df.loc[m, 'FullAddress'].str.extract(r'\b(\d{5})\b')
# trying where method throws error - NotImplementedError: cannot align with a higher dimensional NDFrame
df["Street"] = df["Street"].where(~(df["Street"].isna()), df["FullAddress"].str.extract(r'(.+?),'))
</code></pre>
<p>What I'm trying to do is fill the empty Street and Postcode with the values from FullAddress - without disturbing the existing Street and Postcode values.</p>
<p>There is no problem with the indexing, the regex, or even the extract... I've read the docs and searched for anything similar... Everyone else seems to get this, but I don't understand!</p>
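<p>A likely explanation: with one capture group, <code>str.extract</code> returns a one-column <em>DataFrame</em> by default, and assigning a DataFrame to a single column through <code>.loc</code> aligns on column labels, yielding NaN. A hedged sketch of the usual fix is <code>expand=False</code>, which returns a Series that aligns on the row index:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    [
        ["1", np.nan, np.nan, "1 Banana St, 69126 Heidelberg"],
        ["2", "Doloros St", 67898, "2 Choco Rd, 69412 Eberbach"],
    ],
    columns=["id", "Street", "Postcode", "FullAddress"],
)

m = df["Street"].isna()
# expand=False makes extract return a Series, so the row-aligned
# assignment fills only the masked rows
df.loc[m, "Street"] = df.loc[m, "FullAddress"].str.extract(r"(.+?),", expand=False)
df.loc[m, "Postcode"] = df.loc[m, "FullAddress"].str.extract(r"\b(\d{5})\b", expand=False)

print(df[["Street", "Postcode"]])
```

<p>The <code>Series.where</code> variant needs the same change: with <code>expand=False</code> the replacement is a Series rather than a higher-dimensional DataFrame, so the <code>NotImplementedError</code> goes away.</p>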
|
<python><pandas><extract>
|
2023-03-31 10:35:49
| 2
| 2,490
|
DrWhat
|
75,897,504
| 20,174,226
|
How to find all instances of an unterminated percent sign in a batch file?
|
<p>I have a python function that looks through a batch file for potential errors, and the one that I am looking to resolve is to find any lines that have an unterminated percent sign. However, this function seems to be returning every <code>%</code> sign on every line that has a <code>%</code> sign.</p>
<p>Here is the function I have at the moment, which is not doing what I need it to do:</p>
<pre class="lang-py prettyprint-override"><code>def check_unterminated_percent_signs(content):
# check for unterminated percent signs
errors = []
for i, line in enumerate(content.split('\n')):
match = re.search(r'%[^%\n]*$', line)
if match:
errors.append(f'Unterminated percent sign found on line {i+1}, position {match.start()}: {line.strip()}')
return errors
</code></pre>
<br>
<p>Here is a line from the batch file that should <strong>not</strong> trigger the error:</p>
<pre><code>set Log_File=%Backup_Folder%\Logs\daily_copy.log
</code></pre>
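<p>A simpler heuristic I'm considering: strip matched <code>%...%</code> pairs first, so only a genuinely unpaired percent sign is flagged. (This also consumes escaped <code>%%</code>, but it would still flag batch argument references like <code>%1</code>, so it's only a sketch.)</p>

```python
import re

def check_unterminated_percent_signs(content):
    errors = []
    for i, line in enumerate(content.split('\n')):
        # remove matched %...% pairs (this also consumes escaped %%);
        # anything left over is an unpaired percent sign
        stripped = re.sub(r'%[^%]*%', '', line)
        if '%' in stripped:
            errors.append(f'Unterminated percent sign on line {i + 1}: {line.strip()}')
    return errors

ok = r'set Log_File=%Backup_Folder%\Logs\daily_copy.log'
bad = 'echo 50% done'
print(check_unterminated_percent_signs(ok + '\n' + bad))
```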
|
<python><regex><batch-file>
|
2023-03-31 10:32:09
| 1
| 4,125
|
ScottC
|
75,897,494
| 12,715,723
|
How to expand value of 3d numpy array?
|
<p>Let say I have 3d array (<code>3x3x1</code>) like this:</p>
<pre class="lang-py prettyprint-override"><code>[[[149]
[121]
[189]]
[[ 32]
[225]
[ 44]]
[[ 33]
[133]
[ 11]]]
</code></pre>
<p>How can I expand all values so they can be the same in the deepest one (<code>3x3x3</code>) like this:</p>
<pre class="lang-py prettyprint-override"><code>[[[149 149 149]
[121 121 121]
[189 189 189]]
[[ 32 32 32]
[225 225 225]
[ 44 44 44]]
[[ 33 33 33]
[133 133 133]
[ 11 11 11]]]
</code></pre>
<p>I have tried this:</p>
<pre class="lang-py prettyprint-override"><code>for i in range(len(array)):
for j in range(len(array[i])):
array[i][j] = np.array(list(array[i][j]) * 3)
print(array)
</code></pre>
<p>But it gives me an error:</p>
<pre><code>could not broadcast input array from shape (3,) into shape (1,)
</code></pre>
<p>For generalization purposes, how do I achieve this with <code>m x n x p</code> shape format?</p>
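<p>For context, here is a sketch of the behaviour I'm after using <code>np.repeat</code> along the last axis (assuming the deepest dimension is 1):</p>

```python
import numpy as np

arr = np.array([[[149], [121], [189]],
                [[ 32], [225], [ 44]],
                [[ 33], [133], [ 11]]])  # shape (3, 3, 1)

# repeat the size-1 last axis p times: (m, n, 1) -> (m, n, p)
expanded = np.repeat(arr, 3, axis=2)
print(expanded.shape)  # (3, 3, 3)
```

<p>I assume <code>np.broadcast_to(arr, (3, 3, 3))</code> would give a read-only view with the same values, which may be enough if no in-place writes are needed.</p>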
|
<python><arrays><numpy>
|
2023-03-31 10:31:16
| 2
| 2,037
|
Jordy
|
75,897,342
| 6,151,828
|
Concatenate results of `apply` in pandas
|
<p>I would like to apply a function to each element/row of a pandas Series/DataFrame and concatenate/stack the results into a single DataFrame.
E.g., I may start with a Series <code>s = pd.Series(["a", "b,c", "d,e,f"])</code>, and I would like to obtain as a final result <code>res = pd.Series(["a", "b", "c", "d", "e", "f"])</code></p>
<p>A slow way of doing this would be:</p>
<pre><code>res = []
for _, x in s.items():
    res.append(pd.Series(x.split(",")))
res = pd.concat(res, ignore_index=True)
</code></pre>
<p>I would like to explore the internal functionality of pandas. It seems that there should be a way of doing this by starting with something like <code>s.apply(lambda x: x.split(","))</code> or <code>s.str.split()</code>, which gives a series of lists...</p>
<p><strong>Remarks:</strong></p>
<ul>
<li>The simple example above could be actually solved using something like <code>pd.Series(",".join(s.tolist()).split(","))</code>, but I am looking for a generalizable solution.</li>
<li>Note that this is not a duplicate of <a href="https://stackoverflow.com/q/41191358/6151828">Apply elementwise, concatenate resulting rows into a DataFrame</a> (where <em>apply</em> is used in a generic sense rather than as a name of pandas function.)</li>
</ul>
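<p>A sketch of the kind of built-in chaining I'm hoping exists, using <code>str.split</code> plus <code>explode</code> (assuming a reasonably recent pandas for <code>ignore_index</code>):</p>

```python
import pandas as pd

s = pd.Series(["a", "b,c", "d,e,f"])
# split gives a Series of lists; explode flattens it into one row per element
res = s.str.split(",").explode(ignore_index=True)
print(res.tolist())  # ['a', 'b', 'c', 'd', 'e', 'f']
```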
|
<python><pandas><dataframe><series>
|
2023-03-31 10:13:54
| 1
| 803
|
Roger V.
|
75,896,928
| 7,745,011
|
Is it possible print into one line per process in python?
|
<p>I haven't found a good solution for the following problem so far:</p>
<pre><code>def do_work() -> None:
print("Start work", end="")
# some long running operation
print("done")
def main():
with Pool(workers) as p:
for i in range(100):
p.apply_async(func=do_work)
p.close()
p.join()
</code></pre>
<p>The output I would like to have is</p>
<pre><code>Start work...done
Start work...done
Start work...done
...
Start work...done
</code></pre>
<p>Of course this is not the case, since every process outputs at different times. The question is: can this be achieved without additional dependencies?</p>
|
<python><multiprocessing>
|
2023-03-31 09:34:15
| 1
| 2,980
|
Roland Deschain
|
75,896,800
| 595,305
|
mariadb package installation difficulties
|
<p>This is in WSL (Ubuntu 20.04).</p>
<p>I've set up a Python VE with 3.10.10.</p>
<p>I've done <code>apt install</code> of python3.10-venv, python3.10-dev, python3.10-minimal and python3.10-distutils.</p>
<p>I've managed to activate the VE and do <code>pip install</code> with a few packages. But I'm having problems with mariadb.</p>
<p>First, when I went <code>pip install mariadb</code> it complained</p>
<blockquote>
<p>This error typically indicates that MariaDB Connector/C, a dependency
which
must be preinstalled, is not found.</p>
</blockquote>
<p>So then I went <code>sudo apt install libmariadb3 libmariadb-dev</code>... and then it complained:</p>
<blockquote>
<p>Connector/Python requires MariaDB Connector/C >= 3.3.1, found version
3.1.20</p>
</blockquote>
<p>So then I downloaded a tar.gz from <a href="https://mariadb.com/downloads/connectors/" rel="nofollow noreferrer">here</a>: mariadb-connector-c-3.3.4-ubuntu-jammy-amd64.tar.gz</p>
<p>Then I followed the instructions <a href="https://mariadb.com/docs/skysql/connect/programming-languages/python/install/" rel="nofollow noreferrer">here</a>, section "Install from source distribution".</p>
<p>After expanding I get this:</p>
<pre><code>(sysadmin_wsl) root@M17A:/mnt/d/apps/MariaDB/mariadb-connector-python# pip install ./mariadb-connector-c-3.3.4-ubuntu-jammy-amd64
ERROR: Directory './mariadb-connector-c-3.3.4-ubuntu-jammy-amd64' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
</code></pre>
<p>Indeed, neither of these files is present.</p>
<p>Any suggestions?</p>
|
<python><pip><mariadb><version>
|
2023-03-31 09:22:06
| 1
| 16,076
|
mike rodent
|
75,896,756
| 1,206,998
|
spark addJar from hdfs in a jupyter python notebook
|
<p>We are running a jupyter notebook connected to an hdfs & spark cluster. Some users need to use a jar lib for a use case that we don't want to deploy for all notebooks, so we don't want to add this dependency to the global deployment of our solution.</p>
<p>We are looking for a way for spark to load the jar lib from hdfs, that is accessible by all nodes and edge nodes of our cluster. And we tried to use addJar to load it in the notebook that requires it, but to no avail. We tried:</p>
<pre><code>spark = SparkSession.builder \
.config("spark.jars", "hdfs:///some/path/the-lib_2.11-0.13.7.jar") \
.appName('test jar imports - .config(spark.jars)') \
.getOrCreate()
</code></pre>
<p>and</p>
<pre><code>spark.sparkContext._jsc.addJar("hdfs:///some/path/the-lib_2.11-0.13.7.jar")
# note that print(spark.sparkContext._jsc.sc().listJars()) does contain the above path
</code></pre>
<p>My intuition is that addJar doesn't work with hdfs, but I don't really know.</p>
<p>=> My question: Is there a way to load a jar lib from hdfs into a python spark notebook programmatically (that is not a hack, see below)?</p>
<hr />
<p>Also we found a hack that works, by changing the spark-submit args. But we are not satisfied by it because it works thanks to a replace on the expected current args:</p>
<pre><code>os.environ['PYSPARK_SUBMIT_ARGS'] = os.environ['PYSPARK_SUBMIT_ARGS'].replace(', pyspark-shell',',hdfs:/some/path/the-lib_2.11-0.13.7.jar pyspark-shell')
spark = SparkSession.builder \
.appName('test jar imports os.environ --jars') \
.getOrCreate()
</code></pre>
|
<python><apache-spark><jupyter-notebook><dependencies><hdfs>
|
2023-03-31 09:16:02
| 1
| 15,829
|
Juh_
|
75,896,736
| 11,113,553
|
Bokeh: DataTable columns collasping when updating layout
|
<p>I noticed that when trying to update part of a Bokeh layout, the DataTable display changed (see attached picture) even though no modification is made to the table explicitly in the code. The goal would be to update only one of the charts/tables of the bokeh document without modifying the rest.</p>
<p><a href="https://i.sstatic.net/heHJ8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/heHJ8.png" alt="Before (left), and after clicking the button (right)" /></a></p>
<p>I do it by adding a column object to the root, and then modifying directly the children of the column object. Is there another more standard way to do it? I don't know whether it is a bug, or I'm breaking it by using a hack to update the charts.</p>
<p>The bokeh version used is 3.1.0, this issue did not appear with the previous version I was using (2.4.2)</p>
<p>Here is a code that illustrate this behaviour:</p>
<pre><code>import bokeh
import bokeh.plotting
import pandas as pd
class BokehPlot:
@staticmethod
def _get_plot(plot_data):
plot_source = bokeh.models.ColumnDataSource(plot_data)
plot = bokeh.plotting.figure(width=800, height=400, x_range=(0, 5), y_range=(0, 20))
plot.line('x', 'y', source=plot_source)
return plot
def draw_plot(self):
table_data = pd.DataFrame(data={'column_1': [11, 21], 'column_2': [12, 22]})
table_source = bokeh.models.ColumnDataSource(data=table_data)
# I tried to specify width and formatter for TableColumn and DataTable as well, but
# that did not change the behaviour
table_columns = [bokeh.models.TableColumn(field=column, title=column) for column in table_data.columns]
table = bokeh.models.DataTable(source=table_source, columns=table_columns)
button = bokeh.models.Button(label='Click')
def update_plot():
plot_data = pd.DataFrame(data={'x': [1, 2], 'y': [12, 8]})
plot = self._get_plot(plot_data)
# Only the second object is replaced, the Button and the DataTable are left as they are
self._layout.children[1] = plot
plot_data = pd.DataFrame(data={'x': [1, 2], 'y': [10, 15]})
plot = self._get_plot(plot_data)
button.on_click(update_plot)
self._layout = bokeh.layouts.column([button, plot, table])
bokeh.io.curdoc().add_root(self._layout)
bokeh_plot = BokehPlot()
bokeh_plot.draw_plot()
</code></pre>
<p><strong>Edit:</strong></p>
<p>Actually it seems to be fine if I put the plot in a bokeh object, and then modify this object. The two lines to change would be:</p>
<pre><code>self._layout = bokeh.layouts.column([button, bokeh.layouts.column(plot), table])
</code></pre>
<p>and</p>
<pre><code>self._layout.children[1].children[0] = bokeh.layouts.column(plot)
</code></pre>
<p>However, it seems more complicated to maintain than the previous code, and more of a hack.</p>
|
<python><bokeh>
|
2023-03-31 09:12:51
| 0
| 1,626
|
K. Do
|
75,896,565
| 221,270
|
Find combination of two list in Pandas dataframe
|
<p>I have two lists:</p>
<pre><code>List1:
123
456
789
List2:
321
654
987
</code></pre>
<p>I want to find the combination of the 2 lists in a data frame but without combinations inside the lists:</p>
<pre><code>123-321
123-654
123-987
456-321
456-654
456-987
789-321
789-654
789-987
321-123
321-456
321-789
654-123
654-456
654-789
987-123
987-456
987-789
</code></pre>
<p>With these combinations I want to check whether they are inside two columns of a data frame and if yes, extract the rows:</p>
<pre><code>A B Value
123 321 0.5
456 111 0.4
987 654 0.3
Return:
A B Value
123 321 0.5
</code></pre>
<p>How can I extract the rows?</p>
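<p>A sketch of what I've tried so far with <code>itertools.product</code> and a <code>MultiIndex</code> membership test (not sure it's the idiomatic pandas way):</p>

```python
from itertools import product

import pandas as pd

list1 = [123, 456, 789]
list2 = [321, 654, 987]

# ordered cross-list pairs in both directions, but none from within one list
pairs = set(product(list1, list2)) | set(product(list2, list1))

df = pd.DataFrame({'A': [123, 456, 987],
                   'B': [321, 111, 654],
                   'Value': [0.5, 0.4, 0.3]})
mask = pd.MultiIndex.from_frame(df[['A', 'B']]).isin(pairs)
result = df[mask]
print(result)
```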
|
<python><pandas>
|
2023-03-31 08:49:42
| 2
| 2,520
|
honeymoon
|
75,896,561
| 6,803,114
|
Not able to write spark dataframe. Error Found nested NullType in column 'colname' which is of ArrayType
|
<p>Hi, I have a pandas dataframe named df, where a few of the columns contain lists of strings.</p>
<pre><code>id colname colname1
a1 [] []
a2 [] []
a3 [] ['anc','asf']
</code></pre>
<p>I want to write it into a delta table. As per the schema of the table, the datatypes of colname and colname1 are array.</p>
<p>But as you can see, colname doesn't contain any data, so when I try to write it into the table, it gives me this error:</p>
<pre><code>AnalysisException: Found nested NullType in column 'colname' which is of ArrayType. Delta doesn't support writing NullType in complex types.
</code></pre>
<p>This is the code for writing it to table.</p>
<pre><code>spark_df = spark.createDataFrame(df)
spark_df.write.mode("append").option("overwriteSchema", "true").saveAsTable("dbname.tbl_name")
</code></pre>
<p>I tried to search everywhere but didn't find the solution.</p>
<p>What can I do so that even if the colname column is entirely empty (as in this case), the data is still successfully inserted in the table?</p>
|
<python><pandas><apache-spark><pyspark><apache-spark-sql>
|
2023-03-31 08:48:55
| 4
| 7,676
|
Shubham R
|
75,896,502
| 13,803,549
|
AttributeError: 'submitButton' object has no attribute '_row' - Discord.py
|
<p>I keep getting this error for a Discord.py button, and after searching the internet I still can't find out why it is happening.</p>
<p>Any help would be greatly appreciated.</p>
<p>Thanks!</p>
<p>AttributeError: 'submitButton' object has no attribute '_row'</p>
<pre class="lang-py prettyprint-override"><code> class submitButton(discord.ui.Button):
def __init__(self):
@discord.ui.button(label="Submit Entry", style=discord.ButtonStyle.blurple, row=3)
async def submitEntry(self, button: discord.ui.Button, interaction:
discord.Interaction):
await self.view.submit_response(interaction)
</code></pre>
<pre class="lang-py prettyprint-override"><code> async def submit_response(self, interaction:discord.Interaction):
# Created messageEmbed here #
await interaction.user.send(embed=messageEmbed)
await interaction.response.send_message(content="Your answers have been
submitted!", view=self, ephemeral=True,
delete_after=30)
</code></pre>
|
<python><discord.py><discord-buttons>
|
2023-03-31 08:40:36
| 1
| 526
|
Ryan Thomas
|
75,896,240
| 17,801,773
|
How to create an image with intensities 0 to 255 with normal distribution?
|
<p>To solve my last question <a href="https://stackoverflow.com/questions/75884220/how-to-do-histogram-matching-with-normal-distribution-as-reference?noredirect=1#comment133860591_75884220">How to do histogram matching with normal distribution as reference?</a> I want to create an image with a normal distribution. For that, for every pixel of the new image I want to choose a number from 0 to 255 randomly, with a normal distribution. I've done this:</p>
<pre><code>normal_image = np.random.normal(0, 1, size = (M,N))
</code></pre>
<p>But the dtype of this image is float64. So then I did this:</p>
<pre><code>normal_image = np.random.normal(0, 1, size = (M,N)).astype('uint8')
</code></pre>
<p>But I'm not sure if this is a correct approach. Should I choose random numbers from the integers 0 to 255 based on a normal distribution? (I don't know how to do this!)</p>
<p>Would you please guide me?</p>
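<p>A sketch of what I've been experimenting with — sampling around mid-gray, rounding, and clipping into the valid range (the <code>loc=128, scale=32</code> values are my own guesses):</p>

```python
import numpy as np

M, N = 64, 64
# sample around mid-gray, round, and clip into the valid uint8 range
vals = np.random.normal(loc=128, scale=32, size=(M, N))
normal_image = np.clip(np.rint(vals), 0, 255).astype(np.uint8)
print(normal_image.dtype, normal_image.shape)
```

<p>Casting with <code>astype('uint8')</code> alone would wrap negative values around instead of clipping them, which is why I clip first.</p>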
|
<python><image-processing><normal-distribution>
|
2023-03-31 08:09:53
| 1
| 307
|
Mina
|
75,896,207
| 4,828,720
|
Differences in timing between timeit.timeit() and Timer.autorange()
|
<p>I am trying to figure out how to use Python's <a href="https://docs.python.org/3/library/timeit.html" rel="nofollow noreferrer">timeit</a> module but I get vastly different timings between its <a href="https://docs.python.org/3/library/timeit.html#timeit.timeit" rel="nofollow noreferrer">timeit.timeit</a> method and <a href="https://docs.python.org/3/library/timeit.html#timeit.Timer.autorange" rel="nofollow noreferrer">timeit.Timer.autorange()</a>:</p>
<pre><code>import timeit
setup = """
def f():
x = "-".join(str(n) for n in range(100))
"""
def f():
x = "-".join(str(n) for n in range(100))
t = timeit.timeit("f()", setup=setup, number=100)
print(t)
num, timing = timeit.Timer(stmt='f()', globals=globals()).autorange()
per_run = timing/num
print(per_run *1000)
</code></pre>
<p>results in numbers like</p>
<pre><code>0.0025681090000944096 # timeit.timeit
0.014390230550020533 # timeit.Timer.autorange
</code></pre>
<p>so an order of magnitude of difference between the two approaches.</p>
<p>I am probably doing something wrong but have no idea what. The <code>autorange</code> documentation is so sparse.</p>
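<p>A minimal comparison I tried, normalising <code>timeit.timeit</code>'s result to a per-run figure before comparing — in case the discrepancy is partly total-vs-per-run units:</p>

```python
import timeit

def f():
    x = "-".join(str(n) for n in range(100))

number = 100
# timeit.timeit returns the TOTAL time for `number` runs, not a per-run time
total = timeit.timeit("f()", globals=globals(), number=number)
per_run_ms = total / number * 1000
print(per_run_ms)
```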
|
<python><performance><profiling><timeit>
|
2023-03-31 08:06:53
| 1
| 1,190
|
bugmenot123
|
75,896,214
| 15,435,361
|
Python: Chaining iterators without changing the original instance
|
<p>This question goes in the direction of this question:
<a href="https://stackoverflow.com/questions/3211041/how-to-join-two-generators-or-other-iterables-in-python">How to join two generators (or other iterables) in Python?</a></p>
<p>But I'm searching for a solution that keeps the original instance of an iterator.</p>
<p>Something that does the following:</p>
<pre><code>iterator1=range(10)
iterator2=range(10)
iterator_chained=keep_instance_chain_left(iterator1,iterator2)
assert iterator2 is iterator_chained #!!!!
</code></pre>
<p>The <code>iterator1</code> should be prepended (extendleft-style) onto the original <code>iterator2</code>.</p>
<p>Can anybody give a replacement for <code>keep_instance_chain_left()</code>?</p>
<p>The solution:</p>
<pre><code>iterator_chained=itertools.chain(iterator1,iterator2)
</code></pre>
<p>Delivers just a new instance of an iterator:</p>
<pre><code>assert iterator_chained is not iterator2
</code></pre>
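<p>A sketch of a wrapper class that would satisfy the identity check, assuming I'm allowed to control the iterator type myself (the names here are my own invention):</p>

```python
import itertools

class ChainableIterator:
    """An iterator whose underlying stream can be extended in place."""
    def __init__(self, iterable):
        self._it = iter(iterable)

    def __iter__(self):
        return self

    def __next__(self):
        return next(self._it)

    def extendleft(self, iterable):
        # prepend new items; `self` stays the very same object
        self._it = itertools.chain(iter(iterable), self._it)

it2 = ChainableIterator(range(3))
it2.extendleft(range(10, 12))
print(list(it2))  # [10, 11, 0, 1, 2]
```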
|
<python>
|
2023-03-31 08:06:50
| 4
| 344
|
B.R.
|
75,896,149
| 8,547,986
|
python using packages across virtual environment
|
<p>Apologies for the lack of better title..</p>
<p>I wanted to know if the following is possible?</p>
<p>So currently I am using vs-code for python work. In vs-code I install some packages related to vs-code extensions, for example I use <code>black</code> for formatting. The package <code>black</code> is installed in my base python installation. But when I switch to a new virtual environment, I have to reinstall the package. I don't mind reinstalling the package, but when I do <code>pip freeze > requirements.txt</code>, it includes these packages also as dependencies, which I don't want.</p>
<p>My first question is, how can I limit the output of <code>pip freeze > requirements.txt</code> to just include the packages which are actually used in the project (by actually I mean, which I am importing)? I am open to using other methods for creating <code>requirements.txt</code>.</p>
<p>My second question (sort of on a side) is, when I reinstall the package in a new virtual environment, is it creating a new copy of the files (given that the same package is installed in the base environment)? I am using conda for virtual environments, and my guess is it uses some sort of caching mechanism to avoid multiple installations.</p>
<p>Thanks for your help and time...</p>
|
<python><python-3.x><anaconda>
|
2023-03-31 08:00:22
| 0
| 1,923
|
monte
|
75,896,102
| 8,667,243
|
How can I generate documentation without writing any docstrings?
|
<p>How can I generate documentation without writing a single line of docstring? I simply want to have an API reference page.</p>
<p>As I see it, if there are type annotations on functions, then we don't need docstrings to describe them.</p>
<p>For example, one of my source code functions:</p>
<pre class="lang-py prettyprint-override"><code>def sort_files_by_time(self, files: List[str], reverse: bool = False):
files.sort(key=os.path.getmtime, reverse=reverse)
return files
</code></pre>
<p>I expect it to generate API docs in HTML format. Like what I see in vscode function hint.</p>
|
<python><python-sphinx>
|
2023-03-31 07:52:11
| 0
| 411
|
yingshao xo
|
75,896,095
| 1,142,881
|
How to create a partitioned table with Peewee?
|
<p>When defining my data model with peewee, how can I make a table partitioned by a given field? I’m interested in partitioning a table by a date column, for example, by month of the year.</p>
<p>I see in Peewee’s documentation that I can add custom constraints as part of the Meta part of the data model; how can I then add the partitioning command, which would go at the end of the table-creation DDL?</p>
<p>Is there a way to do it post schema creation? This would also be a way to go…</p>
|
<python><postgresql><peewee>
|
2023-03-31 07:51:07
| 1
| 14,469
|
SkyWalker
|
75,895,815
| 2,006,921
|
Python import from central location
|
<p>I know that Python import strategy has been a matter of discussion in countless posts, but I have not yet found a satisfactory answer yet.</p>
<p>I am building a large Python project, where I try to group modules in subdirectories in a sensible way, thus creating a directory structure that becomes a few levels deep in some branches. In many of my modules I need to import some basic module that I would like to keep in a central place at the top level. For example, I have a lot of different classes that implement 3D objects, and I have one general rotation matrix class that I would like to apply to all of these objects. I therefore would have to import that from a parent directory, and the fact that Python does not make that easy somehow indicates that this is not encouraged.</p>
<p>I know that I can hack something together by adding to <code>sys.path</code>, but somehow that does seem like a hack. Since it looks like I have to bend over backwards to achieve what I'm trying to do here, I am questioning my architectural approach, although it seems like a natural way to put general stuff that applies to many situations at top level. The alternative for easy imports would be to duplicate the class to be imported at all the levels where it is needed, but that cannot be the right way to do it. So I'm asking for guidance on the proper way to organize Python modules.</p>
|
<python><import>
|
2023-03-31 07:16:23
| 1
| 1,105
|
zeus300
|
75,895,583
| 2,833,774
|
How to limit BigQuery job by slots
|
<p>I’d like to limit a BigQuery job (query) submitted with the Python SDK to use a certain amount of slots. For example, I have a provision with 400 slots in BigQuery (flat-rate). And let's say I've got a heavy query which uses 300 slots during execution (according to the statistics). I would like to define a lower limit for the job, let's say 100 slots. I admit that the execution will be slower, but it's fine. I just need the job not to "eat" more than 100 slots during the execution.</p>
<p>I found that it's possible to set a limit by bytes with <code>maximumBytesBilled</code> (described <a href="https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#jobconfigurationquery" rel="nofollow noreferrer">here</a>). Is it possible to set slots in a similar manner?</p>
|
<python><google-bigquery>
|
2023-03-31 06:44:52
| 1
| 374
|
Alexander Goida
|
75,895,568
| 14,606,987
|
Flask sessions not persisting on Google App Engine for a simple chess game
|
<p>I'm currently experiencing issues with Flask and sessions while developing a simple chess game. In this game, each time a new game starts, it is associated with a session. Everything works as expected when running the app locally. However, after deploying the app to Google App Engine, the current session gets removed after a few moves.</p>
<p>Here's some code to demonstrate how I configure the app and create a new session:</p>
<pre class="lang-py prettyprint-override"><code>app = Flask(__name__, template_folder=".")
app.secret_key = "my_key"
CORS(app)
instances = {}
@app.route("/new-session")
def new_session():
session_id = str(uuid.uuid4())
instances[session_id] = SomeClass(session_id)
return {"session_id": session_id}
</code></pre>
<p>I'd prefer to avoid using Redis, as hosting it would incur additional costs. However, if that's the only viable solution, I'm open to exploring it. Any suggestions or alternative solutions to maintain session persistence on Google App Engine are greatly appreciated.</p>
|
<python><flask><google-app-engine><session>
|
2023-03-31 06:42:31
| 0
| 868
|
yemy
|
75,895,405
| 10,396,491
|
Copying and modifying weights with pytorch
|
<p>I am trying to copy weights from one net to another. When I do this, I can see the effect in the target net inside the sacAgent object:</p>
<pre><code>sacAgent.actor.latent_pi = copy.deepcopy(pretrained_actor.latent_pi)
</code></pre>
<p>When I try to copy the weights element-wise like so, I don't see any effect:</p>
<pre><code>with th.no_grad():
for key in sacAgent.actor.latent_pi.state_dict().keys():
sacAgent.actor.latent_pi.state_dict()[key] = \
pretrained_actor.latent_pi.state_dict()[key]
</code></pre>
<p>Edit: I have also tried following guidelines from other posts on adding noise to weights in order to first subtract the initial weights and add in the ones I want to use instead. When I compare the values inside tensors, they seem to match, but the behaviour of the net is different when using the deepcopy approach. I am using it inside the stable-baselines reinforcement learning framework, and the net initialised with deepcopy behaves as expected, but the one initialised with the <code>=</code> or <code>add_</code> operators acts exactly the same as a randomly initialised net would:</p>
<pre><code>with th.no_grad():
for key in sacAgent.actor.latent_pi.state_dict().keys():
sacAgent.actor.latent_pi.state_dict()[key].add_(
-sacAgent.actor.latent_pi.state_dict()[key]
+ pretrained_actor.latent_pi.state_dict()[key])
</code></pre>
<p>I would prefer to use something similar to the second approach because it's a lot more readable and would make it easier for me to ultimately add noise to the weights. Is there any way to achieve what I am attempting to do?</p>
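<p>A minimal sketch of the in-place copy behaviour I'm comparing against, using <code>load_state_dict</code> on tiny modules (not my actual stable-baselines setup):</p>

```python
import torch
from torch import nn

src = nn.Linear(4, 2)
dst = nn.Linear(4, 2)

# load_state_dict copies tensor data into dst's existing parameters in place
dst.load_state_dict(src.state_dict())
print(torch.equal(dst.weight, src.weight))  # True
```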
|
<python><pytorch>
|
2023-03-31 06:19:53
| 0
| 457
|
Artur
|
75,895,105
| 2,825,403
|
"Type variable has no meaning in this context"
|
<p>I have some machine learning model classes subclassed from XGBoost, Sklearn and TF, and I have a factory method that takes a model name and returns a specific implementation; to have a correct return type from the factory I defined a TypeVar. Here MinXGBModel, MinRandomForestModel, MinDenseModel and MinMLPModel are my task-specific subclasses:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar
PredictiveModel = TypeVar('PredictiveModel',
MinXGBModel,
MinRandomForestModel,
MinDenseModel,
MinMLPModel)
def predictive_model_factory(model_type: str) -> PredictiveModel:
if model_type == "XGB":
return MinXGBModel
elif model_type == "TF":
return MinDenseModel
elif model_type == "RF":
return MinRandomForestModel
elif model_type == "MLP":
return MinMLPModel
else:
raise NotImplementedError(f'Unknown model type {model_type}. '
'Allowed model types are ("XGB", "RF", "TF", "MLP")')
</code></pre>
<p>This works fine but when I actually go to get the model class from a factory:</p>
<pre class="lang-py prettyprint-override"><code>model_cls: PredictiveModel = predictive_model_factory(model_type=model_type)
</code></pre>
<p>the linter highlights reference to <code>PredictiveModel</code> and says that <em>"Type variable PredictiveModel has no meaning in this context"</em>.</p>
<p>I don't use TypeVars very often, so I'm probably doing something incorrectly, but I'm not sure what exactly. I had a look but couldn't find an explanation for this message.</p>
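<p>One variant I'm experimenting with — a plain <code>Union</code> alias instead of a <code>TypeVar</code>, with <code>Type[...]</code> for the class-returning factory (class bodies stubbed out here):</p>

```python
from typing import Type, Union

class MinXGBModel: ...
class MinRandomForestModel: ...

# a plain alias, not a TypeVar, since no generic relationship is needed
PredictiveModel = Union[MinXGBModel, MinRandomForestModel]

def predictive_model_factory(model_type: str) -> Type[PredictiveModel]:
    if model_type == "XGB":
        return MinXGBModel
    if model_type == "RF":
        return MinRandomForestModel
    raise NotImplementedError(f'Unknown model type {model_type}')

model_cls: Type[PredictiveModel] = predictive_model_factory("XGB")
print(model_cls.__name__)  # MinXGBModel
```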
|
<python><type-variables>
|
2023-03-31 05:22:06
| 1
| 4,474
|
NotAName
|
75,895,014
| 4,465,454
|
How to get Column from Table using Bracket Notation in declarative SQLAlchemy 2.0
|
<p>I use python SQLAlchemy 2.0 ORM in declarative style to define my tables like such:</p>
<pre><code>from sqlalchemy.orm import DeclarativeBase
from sqlalchemy import select
class Example(DeclarativeBase):
__tablename__ = 'example'
foo = mapped_column(Text, nullable=False)
</code></pre>
<p>I know that I can access table column in queries using "dot notation":</p>
<pre><code>select(Example.foo).select_from(Example)
</code></pre>
<p>Is there some way to also access tables using "bracket notation"? This might look like the following:</p>
<pre><code>select(Example['foo']).select_from(Example)
</code></pre>
|
<python><sqlalchemy><orm>
|
2023-03-31 05:05:07
| 1
| 1,642
|
Martin Reindl
|
75,894,984
| 10,437,110
|
Faster way to add a row in between a time series Python
|
<p>I have a dataframe that has one of the columns as 'date'.</p>
<p>It contains datetime value in the format <code>2020-11-04 09:15:00+05:30</code> for 45 days.</p>
<p>The data for a day starts at <code>9:15:00</code> and ends at <code>18:30:00</code>.</p>
<p>Apart from the date, there is an <code>x</code> column and a <code>y</code> column.</p>
<p>I want to insert a new row of <code>09:14:00</code> just before <code>9:15:00</code>.</p>
<p>Here <code>x</code> of the new row will be the <code>y</code> of previous row i.e. <code>18:30:00</code> of previous day.</p>
<p>And, <code>y</code> of the new row will be the <code>x</code> of next row i.e. <code>09:15:00</code> of same day.</p>
<p>I tried the below code, and the answer is both wrong and also very slow.</p>
<pre><code>def add_one_row(df):
new_df = pd.DataFrame(columns=df.columns)
for i, row in df.iterrows():
if i != 0 and row['date'].time() == pd.to_datetime('09:15:00').time():
new_row = row.copy()
new_row['date'] = new_row['date'].replace(minute=14, second=0)
new_row['x'] = df.loc[i-1, 'y']
new_row['y'] = row['x']
new_df = pd.concat([new_df, new_row])
new_df = pd.concat([new_df, row])
return new_df
</code></pre>
<p>I expected the <code>row</code> and <code>new_row</code> to be concatenated as rows to the <code>new_df</code>. However, it is creating a column with name 0.</p>
<pre><code>
date | x | y | 0
date 2020-11-04 09:15:00+05:30
x 50
y 60
</code></pre>
<p>It is really slow, so I need to do it faster, maybe with vectorization.</p>
<p>Can someone provide a faster and correct way to solve this?</p>
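<p>A vectorised sketch of what I'm after, building all the 09:14 rows at once on a tiny two-row example (the very first day's new row would get <code>NaN</code> for <code>x</code>, since there is no previous day):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'date': pd.to_datetime(['2020-11-03 18:30:00', '2020-11-04 09:15:00']),
    'x': [10, 50],
    'y': [20, 60],
})

# build every 09:14 row in one shot instead of concatenating inside a loop
m = df['date'].dt.time == pd.Timestamp('09:15:00').time()
new = df.loc[m].copy()
new['date'] -= pd.Timedelta(minutes=1)
new['y'] = new['x']                 # y of new row = x of the 09:15 row
new['x'] = df['y'].shift(1).loc[m]  # x of new row = previous row's y (18:30)
out = pd.concat([df, new]).sort_values('date', ignore_index=True)
print(out)
```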
|
<python><python-3.x><pandas>
|
2023-03-31 04:57:49
| 2
| 397
|
Ash
|
75,894,935
| 4,451,521
|
A solution to pytest not finding a module to test
|
<p>This question is a follow up to <a href="https://stackoverflow.com/q/75894443/674039">this question</a> that was answered by the user @wim.</p>
<p>I have the same structure</p>
<pre><code> |
|-tests
| |----test_sum.py
|
|--script_to_test.py
|- pytest.ini
</code></pre>
<p>and the files
script_to_test.py</p>
<pre><code>def sum(a,b):
return a+b
</code></pre>
<p>pytest.ini</p>
<pre><code>[pytest]
python_files = test_*
python_classes = *Tests
python_functions = test_*
</code></pre>
<p>test_sum.py</p>
<pre><code>import script_to_test
def test_sum():
d=script_to_test.sum(6,5)
assert d==11
</code></pre>
<p>In this situation, pytest does not work since it fails to find the module <code>script_to_test.py</code></p>
<p>I have read several solutions, some simple but incorrect, some complicated like installing modules, etc.</p>
<p>My solution is simple. Just add an empty file <code>__init__.py</code></p>
<pre><code>|
|-tests
| |----__init__.py
| |----test_sum.py
|
|--script_to_test.py
|- pytest.ini
</code></pre>
<p>and with this, pytest works.</p>
<p>Now my question is: is this solution correct and appropriate? And what is the influence of <code>__init__.py</code> here?</p>
|
<python><pytest>
|
2023-03-31 04:46:55
| 0
| 10,576
|
KansaiRobot
|
75,894,853
| 809,459
|
Python Pandas Dataframe Multiplying 2 columns results replicated values of column A
|
<p>OK, I am trying something I think should be very simple, but it doesn't seem to be working.</p>
<p>I have a Pandas DataFrame which includes many columns. There are 2 which I want to multiply, but instead of the new column displaying the result of multiplying the two values, it shows the value in column A repeated the number of times given in column B.</p>
<p>Example:</p>
<pre><code>dataframe['totalcost'] = dataframe['orderedQuantity.amount'].mul(dataframe['netCost.amount'])
</code></pre>
<p>Also tried this:</p>
<pre><code>dataframe['totalcost'] = dataframe['orderedQuantity.amount'] * dataframe['netCost.amount']
</code></pre>
<p>netCost.amount = 12</p>
<p>orderedQuantity.amount = 3</p>
<p>totalcost results in 121212 instead of 36</p>
<p>What am I missing?</p>
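<p>A minimal reproduction of my suspicion that one of the columns holds strings, with <code>pd.to_numeric</code> as a possible coercion fix:</p>

```python
import pandas as pd

df = pd.DataFrame({'orderedQuantity.amount': [3], 'netCost.amount': ['12']})
# '12' * 3 repeats the string ('121212'); coerce both sides to numbers first
df['totalcost'] = pd.to_numeric(df['orderedQuantity.amount']) * pd.to_numeric(df['netCost.amount'])
print(df['totalcost'].iloc[0])  # 36
```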
|
<python><pandas><dataframe>
|
2023-03-31 04:25:39
| 1
| 575
|
Reg
|
75,894,833
| 3,614,197
|
repeat a sequence of numbers N times and incrementally increase the values in the sequence Python
|
<p>I have a number sequence as below</p>
<pre><code>sequence = (0,0,1,1,1,1)
</code></pre>
<p>I want the number sequence to repeat a specified number of times but incrementally increase the values within the sequence</p>
<p>so if n= 3 then the sequence would go 1,1,2,2,2,2,3,3,4,4,4,4,5,5,6,6,6,6</p>
<p>I can make a sequence like below but the range function is not quite right in this instance</p>
<pre><code>n = 3
CompleteSequence = [j + k for j in range(0, n, 2) for k in sequence]
CompleteSequence
[0, 0, 1, 1, 1, 1, 2, 2, 3, 3, 3, 3]
</code></pre>
<p>I have tried itertools</p>
<pre><code>import itertools
sequence = (0,0,1,1,1,1)
n=3
list1 = list(itertools.repeat(sequence,n))
list1
[(0, 0, 1, 1, 1, 1), (0, 0, 1, 1, 1, 1), (0, 0, 1, 1, 1, 1)]
</code></pre>
<p>How can I achieve the pattern I need? A combination of range and itertools?</p>
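<p>For reference, the closest I've come is adjusting the <code>range</code> start and step so each repetition is shifted by 2, starting from 1 — though I'm not sure it's the cleanest way:</p>

```python
sequence = (0, 0, 1, 1, 1, 1)
n = 3
# offsets 1, 3, 5, ... shift each repetition of the base sequence by 2
complete = [k + offset for offset in range(1, 2 * n, 2) for k in sequence]
print(complete)
```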
|
<python><range><python-itertools>
|
2023-03-31 04:20:57
| 5
| 636
|
Spooked
|
75,894,793
| 2,056,201
|
Python 'NoneType' object has no attribute 'get' on a global variable
|
<p>I don't understand why I cannot set a global variable <code>driver</code> in Selenium</p>
<p>I get this error in the <code>load()</code> function on <code>driver</code></p>
<pre><code>Exception has occurred: AttributeError
'NoneType' object has no attribute 'get'
File "D:\Code\edge_script.py", line xx, in load
driver.get("https://www.google.com")
^^^^^^^^^^
File "D:\Code\edge_script.py", line xx, in main
load()
File "D:\Code\edge_script.py", line xx, in <module>
main()
AttributeError: 'NoneType' object has no attribute 'get'
</code></pre>
<p>code is below</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from pathlib import Path
import time
import subprocess
import re
from msedge.selenium_tools import Edge, EdgeOptions
#global variables
driver = None
def init():
subprocess.run("taskkill /f /im msedge.exe")
edge_options = EdgeOptions()
edge_options.use_chromium = True
#Here you set the path of the profile ending with User Data not the profile folder
path = "user-data-dir="+str(Path.home())+"\\AppData\\Local\\Microsoft\\Edge\\User Data"
print(path)
edge_options.add_argument(path);
#Here you specify the actual profile folder
edge_options.add_argument("--profile-directory=Default")
edge_options.add_argument("--no-sandbox")
edge_options.add_argument("--disable-setuid-sandbox")
edge_options.add_argument("--remote-debugging-port=9222")
edge_options.add_argument("--disable-dev-shm-using")
edge_options.add_argument("--disable-extensions")
edge_options.add_argument("--disable-gpu")
edge_options.binary_location = "C:\\Program Files (x86)\\Microsoft\\Edge\\Application\\msedge.exe"
driver = Edge(options = edge_options, executable_path = "D:\\msedgedriver.exe")
def load():
# navigate to the website
driver.get("https://www.google.com") #<<<ERROR HERE
# wait for the page to load
time.sleep(5)
def close():
# close the driver
driver.quit()
def main():
init()
load()
close()
if __name__ == "__main__":
main()
</code></pre>
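This looks like the classic local-name pitfall (an assumption, since I can't run your setup): assigning to `driver` inside `init()` creates a new local variable instead of rebinding the module-level one, so the global stays `None` when `load()` runs. A minimal sketch of the same behavior:

```python
driver = None

def init():
    driver = "configured"    # binds a LOCAL name; module-level driver untouched

def init_fixed():
    global driver            # rebind the module-level name instead
    driver = "configured"

init()
before_fix = driver          # still None
init_fixed()
print(before_fix, driver)    # None configured
```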
|
<python><selenium-webdriver><edgedriver>
|
2023-03-31 04:08:39
| 2
| 3,706
|
Mich
|
75,894,508
| 17,347,824
|
TIMESTAMP and NULL in Postgresql table
|
<p>I am trying to populate a database table in postgresql and just keep getting errors. The data is a customer data set with 15 columns all of which are of object type. The table is set to the following:</p>
<pre><code>customer = ("""CREATE TABLE IF NOT EXISTS customer (
CustID INT PRIMARY KEY NOT NULL,
CustFName VARCHAR(100) NOT NULL,
CustLName VARCHAR(100) NOT NULL,
CustPhone VARCHAR(14),
CustEmail VARCHAR(275),
CustState CHAR(2),
ContactPref VARCHAR(10),
PmtID VARCHAR(20),
AddedStamp TIMESTAMP NOT NULL,
UpdatedStamp TIMESTAMP,
HHI INT,
IsMarried CHAR(1),
HasKids CHAR(1),
TravelsWPet CHAR(1),
Pronoun VARCHAR(20));
""")
cursor.execute(customer)
conn.commit()
</code></pre>
<p>The data has instances in the updatedstamp column where it is blank and I need it to just put NULL in the table (no quotes, just the word), but when there is a date, I need it to put the date with quotes such as: '2015-01-01'.</p>
<p>Below is the latest attempt to get this to work:</p>
<pre><code>import pandas as pd
custdf = pd.read_csv('customer.csv')
custdf = custdf.replace('NULL', None)
custdf['custid'] = custdf['custid'].astype(object)
for x in custdf.index:
cursor.execute("""
INSERT INTO customer (custid, custfname, custlname, custphone, custemail, custstate, contactpref, pmtid,
addedstamp, updatedstamp, hhi, ismarried, haskids, travelswpet, pronoun)
VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s,
IF(%s = 'NULL', NULL, '%s'), %s, %s, %s, %s, %s)""",
(custdf.loc[x]['custid'],
custdf.loc[x]['custfname'],
custdf.loc[x]['custlname'],
custdf.loc[x]['custphone'],
custdf.loc[x]['custemail'],
custdf.loc[x]['custstate'],
custdf.loc[x]['contactpref'],
custdf.loc[x]['pmtid'],
custdf.loc[x]['addedstamp'],
custdf.loc[x]['updatedstamp'],
custdf.loc[x]['hhi'],
custdf.loc[x]['ismarried'],
custdf.loc[x]['haskids'],
custdf.loc[x]['travelswpet'],
custdf.loc[x]['pronoun']))
conn.commit()
query = ("""SELECT * FROM customer""")
customer_df = pd.read_sql(query, conn)
print(customer_df)
</code></pre>
<p>This code returns `IndexError: tuple index out of range' - but the number of columns and placeholders are the same. Previous code attempts were giving me errors such as <code>InvalidDatetimeFormat: invalid input syntax for type timestamp: "NULL" LINE 4: ...l.com', 'AZ', 'call', 744905307847.0, '1/1/22 0:00', 'NULL',</code></p>
<p>How can I fix this?</p>
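For what it's worth, with a parameterized query the usual approach (assuming a psycopg2-style driver) is to pass Python `None` for the missing cells and let the driver render it as SQL `NULL`, rather than building `IF(...)` logic into the SQL. A sketch of normalizing the cells first:

```python
import math

def to_param(value):
    """Normalize a CSV cell before handing it to cursor.execute()."""
    # the literal string 'NULL' may appear in the file, and pandas
    # represents blank CSV cells as float NaN
    if value is None or value == "NULL":
        return None                      # rendered as SQL NULL by the driver
    if isinstance(value, float) and math.isnan(value):
        return None
    return value                         # dates/strings are quoted by the driver

print(to_param(float("nan")), to_param("1/1/22 0:00"))  # None 1/1/22 0:00
```

The VALUES clause can then stay all plain `%s` placeholders, which also makes the placeholder count match the 15 arguments (the likely source of the `IndexError`).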
|
<python><postgresql>
|
2023-03-31 02:59:24
| 1
| 409
|
data_life
|
75,894,443
| 4,451,521
|
pytest finding and not finding modules to test
|
<p>I was going to post this question on Code Review because, rather than a solution, I wanted Python experts to check my code.
But while preparing the code I found a related issue, so I finally decided to ask it here, since it is no longer just a code check.</p>
<p>My question is: how does pytest find modules to test?
Let me explain.</p>
<p><strong>Situation 1</strong>
First I have this code structure</p>
<pre><code>|
|-tests
| |----test_sum.py
|
|--script_to_test.py
</code></pre>
<p>test_sum.py</p>
<pre><code>import script_to_test
def test_sum():
d=script_to_test.sum(6,5)
assert d==11
</code></pre>
<p>script_to_test.py</p>
<pre><code>def sum(a,b):
return a+b
</code></pre>
<p>If I create a virtual environment and install pytest there, or if I use a conda environment with pytest installed I can do</p>
<pre><code>pytest
</code></pre>
<p>and I will get</p>
<pre><code>pytest
====================================================================== test session starts ======================================================================
platform linux -- Python 3.9.16, pytest-7.2.2, pluggy-1.0.0
rootdir: /home/me/pytestExample
collected 1 item
tests/test_sum.py . [100%]
======================================================================= 1 passed in 0.01s =======================================================================
</code></pre>
<p><strong>Situation 2</strong>
If I add a file <code>pytest.ini</code> in the root</p>
<pre><code>[pytest]
python_files = test_*
python_classes = *Tests
python_functions = test_*
</code></pre>
<p>pytest will not work anymore and I will get</p>
<pre><code> pytest
========================================================= test session starts ==========================================================
platform linux -- Python 3.9.16, pytest-7.2.2, pluggy-1.0.0
rootdir: /home/me/pytestExample, configfile: pytest.ini
collected 0 items / 1 error
================================================================ ERRORS ================================================================
__________________________________________________ ERROR collecting tests/test_sum.py __________________________________________________
ImportError while importing test module '/home/me/pytestExample/tests/test_sum.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
../../../.pyenv/versions/3.9.16/lib/python3.9/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_sum.py:1: in <module>
import script_to_test
E ModuleNotFoundError: No module named 'script_to_test'
======================================================= short test summary info ========================================================
ERROR tests/test_sum.py
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
=========================================================== 1 error in 0.07s ===========================================================
</code></pre>
<p>Now, I can solve this already, but the purpose of this question is to know <em>why it fails</em> when a <code>pytest.ini</code> is added, and why it works in the first place when that file is not present.</p>
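Not an explanation of the discovery difference, but for completeness: on pytest ≥ 7 (the 7.2.2 shown above qualifies) the import path can be pinned explicitly in the same ini file, which makes the layout work regardless of how rootdir is determined. A sketch, assuming the project root stays as laid out:

```ini
[pytest]
python_files = test_*
python_classes = *Tests
python_functions = test_*
pythonpath = .
```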
|
<python><pytest>
|
2023-03-31 02:43:01
| 2
| 10,576
|
KansaiRobot
|
75,894,421
| 6,197,439
|
pip3 in venv refuses to install requested setuptools version
|
<p>I use Python 3.10.10 in MINGW64, and I want to use a Python3 package that has not been updated in years, which I've installed successfully back in Python 3.7 on this platform - but of course, that is useless on Python 3.10.</p>
<p>Now I have to build this package again. It being old, it uses deprecated features for setuptools, so now I have to install old setuptools. Since that is likely to mess up the rest of my system, I'm thinking I should try this in <code>python3 -m venv mingw64_py3.10_venv</code>. This works, I can install old setuptools:</p>
<pre class="lang-none prettyprint-override"><code>$ pip3 install setuptools==45
Collecting setuptools==45
Downloading setuptools-45.0.0-py2.py3-none-any.whl (583 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 583.8/583.8 kB 4.6 MB/s eta 0:00:00
Installing collected packages: setuptools
Attempting uninstall: setuptools
Found existing installation: setuptools 65.5.0
Uninstalling setuptools-65.5.0:
Successfully uninstalled setuptools-65.5.0
Successfully installed setuptools-45.0.0
</code></pre>
<p>... but upon building the package with <code>python3 setup.py install</code>, I get <code>ModuleNotFoundError: No module named 'numpy'</code>. Ok, so I do <code>pip3 install numpy</code> and it takes an hour to build from source, and then install process fails again for <code>scipy</code> - I can tell already this is going to be another hour.</p>
<p>So, I'm thinking - I <em>already have</em> numpy and scipy, why can't I reuse those? So I found:</p>
<p><a href="https://stackoverflow.com/questions/12079607/make-virtualenv-inherit-specific-packages-from-your-global-site-packages">Make virtualenv inherit specific packages from your global site-packages</a></p>
<blockquote>
<p>Create the environment with <code>virtualenv --system-site-packages</code> . Then, activate the virtualenv and when you want things installed in the virtualenv rather than the system python, use <code>pip install --ignore-installed</code> or <code>pip install -I</code> . That way pip will install what you've requested locally even though a system-wide version exists. Your python interpreter will look first in the virtualenv's package directory, so those packages should shadow the global ones.</p>
</blockquote>
<p>Ok, so I try:</p>
<pre class="lang-none prettyprint-override"><code>$ python3 -m venv mingw64_py3.10_venv --system-site-packages
$ source mingw64_py3.10_venv/bin/activate
$ mingw64_py3.10_venv/bin/python3.exe -m pip install --ignore-installed --upgrade pip
...
Successfully installed pip-23.0.1
$ pip3 --version
pip 23.0.1 from C:/path/to/mingw64_py3.10_venv/lib/python3.10/site-packages/pip (python 3.10)
</code></pre>
<p>Ok, looks good so far; now old setuptools?</p>
<pre><code>$ pip3 install --ignore-installed setuptools==45
Collecting setuptools==45
Using cached setuptools-45.0.0-py2.py3-none-any.whl (583 kB)
Installing collected packages: setuptools
Successfully installed setuptools-65.5.0
</code></pre>
<p>Why does it say <code>Collecting setuptools==45</code> - and then <code>Successfully installed setuptools-65.5.0</code>? How is that "successful"?</p>
<p>I asked for version 45.0.0 not 65.5.0??</p>
<pre class="lang-none prettyprint-override"><code>$ pip3 show setuptools
Name: setuptools
Version: 65.5.0
Summary: Easily download, build, install, upgrade, and uninstall Python packages
Home-page: https://github.com/pypa/setuptools
Author: Python Packaging Authority
Author-email: distutils-sig@python.org
License:
Location: c:/path/to/mingw64_py3.10_venv/lib/python3.10/site-packages
Requires:
Required-by: sip
</code></pre>
<p>Can I make a <code>venv</code> in Python 3.10, that inherits most of my system modules, and yet allows me to downgrade setuptools - and if so, how?</p>
|
<python><pip><virtualenv><python-venv>
|
2023-03-31 02:38:14
| 1
| 5,938
|
sdbbs
|
75,894,400
| 14,862,885
|
Pymunk draw lines on space which falls due to gravity(do physics)
|
<p>I want to scribble on a Pymunk screen (Pygame), and have the scribble become a line-like object that falls due to physics.</p>
<p>I have most of the code written; I am now able to get the vertices of the points on screen. I want a line joining these points to be drawn on screen and take part in the physics:</p>
<pre><code>import pymunk
import pymunk.pygame_util
import random
# Initialize Pygame and Pymunk
pygame.init()
screen = pygame.display.set_mode((600, 600))
clock = pygame.time.Clock()
space = pymunk.Space()
space.gravity = (0, 1000)
draw_options = pymunk.pygame_util.DrawOptions(screen)
# Define a function to convert screen coordinates to space coordinates
def to_space(x, y):
return x, y
# Define a list to store the vertices of the polygon
vertices = []
# Define a variable to indicate if the polygon is complete
complete = False
# Define the main game loop
running = True
draging=False
while running:
# Handle events
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False
elif event.type == pygame.MOUSEBUTTONDOWN and not complete:
draging=True
# Add the vertex to the list
vertices.append(pymunk.Vec2d(*to_space(*event.pos)))
elif event.type == pygame.MOUSEMOTION and not complete and draging==True:
# Add the vertex to the list
if not pymunk.Vec2d(*to_space(*event.pos)) in vertices:
vertices.append(pymunk.Vec2d(*to_space(*event.pos)))
elif event.type == pygame.MOUSEBUTTONUP and not complete:
draging=False
elif event.type == pygame.KEYDOWN and event.key == pygame.K_SPACE and len(vertices) >= 3:
# PROBLEM HERE,NEED TO ADD THE SHAPE/BODY TO SPACE,FROM "vertices"
# Clear the list of vertices and set the complete flag
vertices=[]
complete = False
# Draw the screen
screen.fill((255, 255, 255))
if not complete:
# Draw the vertices of the polygon
for vertex in vertices:
pygame.draw.circle(screen, (0, 0, 255), vertex.int_tuple, 5)
space.debug_draw(draw_options)
# Update the space and the screen
space.step(1 / 60)
pygame.display.flip()
clock.tick(60)
# Clean up
pygame.quit()
</code></pre>
<p>I can't find many helpful resources on this (only about 170 questions on pymunk on SO : | ). I have vertices/points which I need to connect and add as a line (not a straight line), and I only know the points; nothing else like position or center of gravity...</p>
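A sketch of the geometry half of the missing step (the pymunk-specific calls in the comments are assumptions about the intended usage): place the body at the centroid of the points, convert each point to body-local coordinates, and turn consecutive pairs into segments.

```python
def segments_from_vertices(vertices):
    """Return (centroid, [(local_a, local_b), ...]) for a scribbled polyline."""
    cx = sum(x for x, y in vertices) / len(vertices)
    cy = sum(y for x, y in vertices) / len(vertices)
    # express points relative to the body position (the centroid)
    local = [(x - cx, y - cy) for x, y in vertices]
    # consecutive pairs become segment endpoints, e.g. (assumed usage):
    #   body = pymunk.Body(mass, moment)
    #   body.position = (cx, cy)
    #   shapes = [pymunk.Segment(body, a, b, radius=2) for a, b in pairs]
    #   space.add(body, *shapes)
    pairs = list(zip(local, local[1:]))
    return (cx, cy), pairs

centroid, pairs = segments_from_vertices([(0, 0), (10, 0), (10, 10)])
print(centroid, len(pairs))
```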
|
<python><pymunk>
|
2023-03-31 02:32:43
| 0
| 3,266
|
redoc
|
75,894,373
| 1,857,373
|
Lightgbm prediction issue TypeError: Unknown type of parameter:boosting_type, got:dict
|
<p><strong>Problem</strong></p>
<p>Build a prediction accuracy model using lightgbm on the New York Taxi Duration dataset. (Kaggle model: https://www.kaggle.com/code/mobilematthew/newyorkcitytaxidurationprediction/edit/run/123885887)</p>
<p>Setting up Light Gradient Boost with one set of parameters and two models: 1) LGBMClassifier fit / predict, 2) lightgbm train / predict.</p>
<p>Tested 2) lightgbm train / predict and this code works. Added 1) LGBMClassifier fit / predict with the exact same parameters, which should work fine, but the fit raises an error.</p>
<p>Model 2) trains fine before this issue. The problem occurs when I attempt to fit model 1), the LGBMClassifier: a TypeError is raised from lightgbm's fit() function. y is one-dimensional; X_train has multiple features, all reduced via importance. The data below can be used to check this numpy.ndarray for 2 dimensions.</p>
<pre><code>lgb_train = lgb.Dataset(X_train)
lgb_y_train = lgb.Dataset(y_train)
lgb_eval = lgb.Dataset(X_test, y_test, reference=lgb_train)
param_grid = { 'boosting_type':'gbdt',
'n_estimators':50,
'objective': 'regression',
'num_leaves': 5,
'num_boost_round':20,
'class_weight':'balanced',
'colsample_bytree':1.0,
'importance_type':'gain',
'learning_rate':0.001,
'max_depth':-1,
'min_child_samples':20,
'min_child_weight':0.001,
'min_split_gain':0.0,
'n_jobs':-1,
'verbose':0,
'random_state':None,
'reg_alpha':0.0,
'reg_lambda':0.05,
'subsample':1.0,
'subsample_freq':0,
'min_data':1,
'force_row_wise' : True,
'eval_set':[X_test, y_test]
}
light_model = lgb.LGBMClassifier(param_grid,random_state=42)
light_model_fit = light_model.fit(X_train, y_train)
light_model_fix_y_pred = light_model_fit.predict(X_test)
light_model_trained = lgb.train(param_grid, lgb_train)
light_model_trained_pred = light_model_trained.predict(X_test)
</code></pre>
<p><strong>Code with raised error</strong></p>
<p>With above code, trained model, so far everything working fine</p>
<p><strong>Setup for prediction</strong></p>
<p>Setup light gradient boost fit for predict invocation, this is where the Value Error is raised.</p>
<pre><code>light_model_fit = light_model.fit(X_train, y_train)
</code></pre>
<p><strong>Error</strong></p>
<pre><code>TypeError Traceback (most recent call last)
/tmp/ipykernel_27/124012372.py in <module>
30
31 light_model = lgb.LGBMClassifier(param_grid,random_state=42)
---> 32 light_model_fit = light_model.fit(X_train, y_train)
33 light_model_fix_y_pred = light_model_fit.predict(X_test)
34
/opt/conda/lib/python3.7/site-packages/lightgbm/sklearn.py in fit(self, X, y, sample_weight, init_score, eval_set, eval_names, eval_sample_weight, eval_class_weight, eval_init_score, eval_metric, early_stopping_rounds, verbose, feature_name, categorical_feature, callbacks, init_model)
970 eval_metric=eval_metric, early_stopping_rounds=early_stopping_rounds,
971 verbose=verbose, feature_name=feature_name, categorical_feature=categorical_feature,
--> 972 callbacks=callbacks, init_model=init_model)
973 return self
974
/opt/conda/lib/python3.7/site-packages/lightgbm/sklearn.py in fit(self, X, y, sample_weight, init_score, group, eval_set, eval_names, eval_sample_weight, eval_class_weight, eval_init_score, eval_group, eval_metric, early_stopping_rounds, verbose, feature_name, categorical_feature, callbacks, init_model)
756 init_model=init_model,
757 feature_name=feature_name,
--> 758 callbacks=callbacks
759 )
760
/opt/conda/lib/python3.7/site-packages/lightgbm/engine.py in train(params, train_set, num_boost_round, valid_sets, valid_names, fobj, feval, init_model, feature_name, categorical_feature, early_stopping_rounds, evals_result, verbose_eval, learning_rates, keep_training_booster, callbacks)
269 # construct booster
270 try:
--> 271 booster = Booster(params=params, train_set=train_set)
272 if is_valid_contain_train:
273 booster.set_train_data_name(train_data_name)
/opt/conda/lib/python3.7/site-packages/lightgbm/basic.py in __init__(self, params, train_set, model_file, model_str, silent)
2603 )
2604 # construct booster object
-> 2605 train_set.construct()
2606 # copy the parameters from train_set
2607 params.update(train_set.get_params())
/opt/conda/lib/python3.7/site-packages/lightgbm/basic.py in construct(self)
1817 init_score=self.init_score, predictor=self._predictor,
1818 silent=self.silent, feature_name=self.feature_name,
-> 1819 categorical_feature=self.categorical_feature, params=self.params)
1820 if self.free_raw_data:
1821 self.data = None
/opt/conda/lib/python3.7/site-packages/lightgbm/basic.py in _lazy_init(self, data, label, reference, weight, group, init_score, predictor, silent, feature_name, categorical_feature, params)
1515 params['categorical_column'] = sorted(categorical_indices)
1516
-> 1517 params_str = param_dict_to_str(params)
1518 self.params = params
1519 # process for reference dataset
/opt/conda/lib/python3.7/site-packages/lightgbm/basic.py in param_dict_to_str(data)
292 pairs.append(f"{key}={val}")
293 elif val is not None:
--> 294 raise TypeError(f'Unknown type of parameter:{key}, got:{type(val).__name__}')
295 return ' '.join(pairs)
296
TypeError: Unknown type of parameter:boosting_type, got:dict
</code></pre>
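The traceback suggests the whole `param_grid` dict is being bound to the first positional parameter of `LGBMClassifier.__init__`, which is `boosting_type`; the scikit-learn wrapper expects keyword arguments, so the dict likely needs `**` unpacking. A sketch with a stand-in class (an assumption about the call shape, not lightgbm itself):

```python
class FakeClassifier:
    """Mimics the relevant part of LGBMClassifier's signature."""
    def __init__(self, boosting_type="gbdt", **kwargs):
        self.boosting_type = boosting_type
        self.params = kwargs

params = {"boosting_type": "gbdt", "num_leaves": 5}

wrong = FakeClassifier(params, random_state=42)    # dict lands in boosting_type
right = FakeClassifier(**params, random_state=42)  # keys become keyword args

print(type(wrong.boosting_type).__name__)  # dict -> "Unknown type of parameter"
print(right.boosting_type)                 # gbdt
```

Note also that sklearn-only arguments such as `eval_set` belong in `fit()`, not in the parameter dict, so they may need removing before unpacking.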
<p><strong>Data</strong></p>
<p>ValueError: Input numpy.ndarray or list must be 2 dimensional.</p>
<p>Data for this lightgbm model: 1) X_test, 2) X_train, 3) y_train, 4) lgb.Dataset(X_train), via the lgb get methods shown below</p>
<p>X_test</p>
<pre><code> pickup_longitude pickup_latitude dropoff_latitude trip_duration direction week minute_oftheday
139168 -73.990189 40.757259 40.762600 1095 69.265257 6 1289
1401881 -73.955223 40.768841 40.777191 390 39.910385 24 881
1207916 -73.955345 40.764126 40.781013 1171 -49.064405 1 1156
466038 -73.996696 40.733234 40.713543 1626 162.417522 14 683
855381 -74.004532 40.706974 40.717777 689 -5.260115 22 1237
... ... ... ... ... ... ... ...
425268 -73.978287 40.752300 40.763824 858 2.899225 23 900
940105 -73.984207 40.759949 40.751755 432 -166.399449 6 1084
502876 -73.970856 40.753487 40.764393 708 -11.675022 1 924
895439 -74.006882 40.710022 40.713509 486 -46.212070 5 682
699976 -74.002594 40.739544 40.778725 866 27.435443 25 1374
</code></pre>
<p><a href="https://i.sstatic.net/Kr3KM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Kr3KM.png" alt="enter image description here" /></a></p>
<p>X_train</p>
<pre><code> pickup_longitude pickup_latitude dropoff_latitude trip_duration direction week minute_oftheday
97650 -73.979507 40.765388 40.759701 600 -175.931831 19 1097
1101996 -73.970932 40.765720 40.762833 224 138.507110 8 1151
61397 -73.862785 40.768963 40.753124 884 -106.385968 8 1296
941058 -73.957634 40.782143 40.761646 1239 -136.690811 1 495
909725 -74.013771 40.701969 40.706612 240 51.439139 21 555
... ... ... ... ... ... ... ...
1054126 -73.990929 40.750561 40.728096 646 175.218677 17 549
624177 -73.941513 40.851059 40.849010 197 107.450422 25 559
1175512 -73.989532 40.769600 40.788155 413 40.718927 16 1316
823176 -73.982628 40.751122 40.754730 308 66.554445 19 1042
448716 -73.989456 40.720070 40.770935 969 61.479580 2 274
</code></pre>
<p><a href="https://i.sstatic.net/yROlo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yROlo.png" alt="enter image description here" /></a></p>
<p>y_train</p>
<pre><code>97650 600
1101996 224
61397 884
941058 1239
909725 240
...
1234001 831
1403381 1590
454139 2226
557019 312
699873 1337
Name: trip_duration, Length: 2995, dtype: int64
</code></pre>
<p><a href="https://i.sstatic.net/qqY2H.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qqY2H.png" alt="enter image description here" /></a></p>
<p>Dataset X_train</p>
<pre><code>, ,pickup_longitude ,pickup_latitude ,dropoff_latitude ,trip_duration ,Direction,βweek β , βminute_oftheday
,97650,-73.979507,40.765388,40.759701,600,175.931831,19,1097
,1101996,-73.970932,40.765720,40.762833,224,138.507110,8,1151
,61397,-73.862785,40.768963,40.753124,884,106.385968,8,1296
,941058,-73.957634,40.782143,40.761646,1239,136.690811,1,495
,909725,-74.013771,40.701969,40.706612,240,51.439139,21,555
</code></pre>
<p><em>Methods to get data from Lightgdm</em></p>
<pre><code>print(lgb_train.get_data())
print(lgb_train.get_feature_name())
print(lgb_train.get_label())
print(lgb_train.get_params())
print(lgb_train.num_data())
print(lgb_train.num_feature())
</code></pre>
<p><a href="https://i.sstatic.net/w9oKh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w9oKh.png" alt="enter image description here" /></a></p>
|
<python><machine-learning><lightgbm>
|
2023-03-31 02:25:19
| 1
| 449
|
Data Science Analytics Manager
|
75,894,303
| 10,474,998
|
Remove subfolders with the same name if empty
|
<p>Assuming I have a folder called trial that contains the following:
Folder1, Folder2, Folder3,...Folder5
And each of these folders has a subfolder called FF-15</p>
<p>Thus, the path is:
<code>'/Users/brianadams/school/trial/Folder1/FF-15'</code>, or <code>'/Users/brianadams/school/trial/Folder2/FF-15'</code>, and so on.</p>
<p>I want to delete folder FF-15 if it is empty from all the Folders.</p>
<p>I tried the following:</p>
<pre><code>import glob
import os
path = glob.glob('/Users/brianadams/school/trial/**/FF-15')
os.removedirs(path)
</code></pre>
<p>But it throws an error or does not end up deleting the folders.</p>
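A sketch of one likely fix (assuming the goal is exactly "remove FF-15 only when empty"): `glob.glob` returns a <em>list</em> of paths, so each entry has to be removed individually, and `os.rmdir` already refuses to delete non-empty directories.

```python
import glob
import os

def remove_empty(pattern):
    """Remove every directory matching pattern that is empty; return removed paths."""
    removed = []
    for path in glob.glob(pattern):
        if os.path.isdir(path) and not os.listdir(path):  # empty directory only
            os.rmdir(path)
            removed.append(path)
    return removed

# hypothetical usage (path from the question):
# remove_empty('/Users/brianadams/school/trial/*/FF-15')
```

(Also note `**` in a glob pattern only recurses when `recursive=True` is passed; a single `*` matches the one folder level here.)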
|
<python><python-3.x><jupyter-notebook>
|
2023-03-31 02:06:29
| 2
| 1,079
|
JodeCharger100
|
75,894,224
| 7,966,156
|
How to match a changing pattern in python?
|
<p>So I have a collection of lyrics from different artists, but in the middle of all the lyrics there is always an advertisement I want to remove. It looks like this:</p>
<p>'lyric lyric See John Mayer LiveGet tickets as low as $53 lyric lyric'</p>
<p>More generally, the pattern is always: 'See ARTIST LiveGet tickets as low as $NUMBER'</p>
<p>Is there a way I can match this changing pattern so I can get rid of these advertisements in the text?</p>
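A sketch, assuming the ad always follows the quoted shape, artist names never contain the literal string "LiveGet", and the price is a whole number of dollars. The non-greedy `.+?` stops at the first "LiveGet", and `\$\d+` matches the price:

```python
import re

# "See ARTIST LiveGet tickets as low as $NUMBER" plus any trailing whitespace
ad = re.compile(r"See .+? LiveGet tickets as low as \$\d+\s*")

text = "lyric lyric See John Mayer LiveGet tickets as low as $53 lyric lyric"
print(ad.sub("", text))  # lyric lyric lyric lyric
```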
|
<python><regex>
|
2023-03-31 01:49:11
| 1
| 628
|
Nova
|
75,894,205
| 9,386,819
|
How do I create a function that assembles data into a dictionary and then returns the values of a particular key, given as an argument?
|
<p>Suppose I have a function:</p>
<pre><code>def data_fetch():
# The function assembles this dictionary from various sources:
my_dict = {'a': [1, 2, 3],
'b': [4, 5, 6],
'c': [7, 8, 9]
}
</code></pre>
<p>And I want to be able to call the function with any combination of arguments <code>a</code>, <code>b</code>, and/or <code>c</code> and have the function return <code>my_dict['a']</code>, or <code>(my_dict['a'], my_dict['b'])</code>, etc., depending on the arguments entered when the function is called.</p>
<p>How do I do this? Or what is a more effective/canonical way of setting this up?</p>
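One way (an assumption about the desired call shape) is to accept the wanted keys as `*args` and index into the assembled dict, a sketch:

```python
def data_fetch(*keys):
    # The function assembles this dictionary from various sources:
    my_dict = {"a": [1, 2, 3],
               "b": [4, 5, 6],
               "c": [7, 8, 9]}
    if len(keys) == 1:                       # single key -> bare value
        return my_dict[keys[0]]
    return tuple(my_dict[k] for k in keys)   # several keys -> tuple of values

print(data_fetch("a"))       # [1, 2, 3]
print(data_fetch("a", "b"))  # ([1, 2, 3], [4, 5, 6])
```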
|
<python><function>
|
2023-03-31 01:45:50
| 2
| 414
|
NaiveBae
|
75,894,118
| 9,986,939
|
Unpacking Nested Data with Pandas
|
<p>Hello, I've got an icky list of dictionaries I want to put into a DataFrame.</p>
<pre><code>data = [
{
"name": "pod_name",
"type": "group",
"values": [
[
"7b977b5d68-mdwfc"
],
[
"d8b746cdf-hn5dx"
],
[
"d8b746cdf-wmxdq"
],
[
"d8b746cdf-8dv65"
],
[
"d8b746cdf-9dn2c"
],
[
"d8b746cdf-rh5c5"
],
[
"d8b746cdf-q5fz6"
],
[
"d8b746cdf-hvdmd"
],
[
"d8b746cdf-fgzcj"
],
[
"d8b746cdf-lhclk"
]
]
},
{
"name": "cpu_limit",
"type": "number",
"values": [
2.5,
1.5,
1.5,
1.5,
1.5,
1.5,
1.5,
1.5,
1.5,
1.5
]
},
{
"name": "mem_limit",
"type": "number",
"values": [
10737418240.0,
2147483648.0,
2147483648.0,
2147483648.0,
2147483648.0,
2147483648.0,
2147483648.0,
2147483648.0,
2147483648.0,
2147483648.0
]
}
]
</code></pre>
<p>Each dictionary is a column in the target dataframe, the "name" as the column and the list "values" as the data.</p>
<p>So far my solution is this:</p>
<pre><code>df = pd.DataFrame()
for col in data:
df[col['name']] = col['values']
print(df.head())
</code></pre>
<p>Which, to be honest, is fairly clean, but I'm curious whether any Pythonistas out there have a better solution. (FYI, I'm working on pandas 0.25...)</p>
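One alternative (functionally the same as the loop, and also fine on pandas 0.25) is a dict comprehension fed to a single constructor call, a sketch with a trimmed copy of the structure:

```python
import pandas as pd

# trimmed copy of the structure from the question
data = [
    {"name": "pod_name", "type": "group",
     "values": [["7b977b5d68-mdwfc"], ["d8b746cdf-hn5dx"]]},
    {"name": "cpu_limit", "type": "number",
     "values": [2.5, 1.5]},
]

# one constructor call instead of column-by-column assignment
df = pd.DataFrame({col["name"]: col["values"] for col in data})
print(df.columns.tolist())  # ['pod_name', 'cpu_limit']
```

Note the `pod_name` values are single-element lists; if bare strings are wanted, `[v[0] if isinstance(v, list) else v for v in col["values"]]` can replace `col["values"]` in the comprehension.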
|
<python><pandas>
|
2023-03-31 01:19:03
| 1
| 407
|
Robert Riley
|
75,893,863
| 10,891,491
|
How can I mock/patch a method inside a class, but only for a specific calling method?
|
<p><strong>Objective</strong>: I'm trying to do an end-to-end test in an application, so I would like to mock the external communications to test the whole flow.</p>
<p><strong>Business problem</strong>:</p>
<ul>
<li><p><code>Bar</code> class equivalent: I have a class that send requests.</p>
<ol start="2">
<li>bar_method -> send_request</li>
</ol>
</li>
<li><p><code>Foo</code> class equivalent: is my user class that build the data to send requests.</p>
<ul>
<li>method_that_use_bar_function -> create_user</li>
<li>method_that_use_bar_function_too -> get_user</li>
</ul>
</li>
<li><p><code>runner</code> equivalent: is a job that executes some user's operations</p>
</li>
</ul>
<p><strong>Reproducibility</strong>:</p>
<p>To simplify the test developing, I build this code with similar structure to reproduce the scenario:</p>
<pre><code># / (root folder)
from src.foo import Foo
def method_to_run_all_my_code():
my_foo = Foo()
res1 = my_foo.method_that_use_bar_function()
res2 = my_foo.method_that_use_bar_function_too()
return res1 + res2
if __name__ == '__main__':
method_to_run_all_my_code()
# src/foo.py
from .bar import Bar
class Foo:
def __init__(self) -> None:
self.bar = Bar()
def method_that_use_bar_function(self):
result = self.bar.bar_method()
print("[Foo] Use 1 - " + result)
return "Use 1 - " + result
def method_that_use_bar_function_too(self):
result = self.bar.bar_method()
print("[Foo] Use 2 - ", result)
return "Use 2 - " + result
# src/bar.py
class Bar:
def bar_method(self):
print("[Bar] this is a real method of bar")
return "real method"
</code></pre>
<p>I would like to mock the bar_method when it is called by <code>method_that_use_bar_function</code> (not <code>method_that_use_bar_function_too</code>), for example.</p>
<p>I tried these tests, but I can't do this yet.</p>
<pre><code># tests/runner_test.py
import unittest
import pytest
from unittest.mock import patch, Mock
from runner import method_to_run_all_my_code
@pytest.mark.skip() # this use the real values
def test_initial():
res = method_to_run_all_my_code()
assert "Use 1" in res
assert "Use 2" in res
@pytest.mark.skip() # this mock Foo method
@patch("src.foo.Foo.method_that_use_bar_function", return_value="Mocked foo function use 1")
def test_mocking_bar_for_use_1(mocked_use_1):
res = method_to_run_all_my_code()
print("Final Results: ", res)
assert "Mocked foo function use 1" in res
assert "Use 2" in res
@pytest.mark.skip()
@patch("src.foo.Bar.bar_method", return_value="Bar function was mocked") # mock bar method for both use 1 and 2. I don't want this.
def test_mocking_bar_method(mocked_bar):
res = method_to_run_all_my_code()
print("Final Results: ", res)
assert "1 - Bar function was mocked" in res
assert "2 - Bar function was mocked" in res
@pytest.mark.skip() # this raise error
@patch("src.foo.Foo.method_that_use_bar_function", return_value=Mock(bar_method=Mock(return_value="Bar function was mocked to use 1")))
def test_mocking_bar_for_use_1(mocked_bar_use_1):
res = method_to_run_all_my_code()
print("Final Results: ", res)
assert "Mocked foo function use 1" in res
assert "Use 2" in res
@pytest.mark.skip() # this raise error
@patch("src.foo.Foo.method_that_use_bar_function.Bar.bar_method", return_value="Bar function was mocked to use 1")
def test_mocking_bar_for_use_1(mocked_bar_use_1):
res = method_to_run_all_my_code()
print("Final Results: ", res)
assert "Mocked foo function use 1" in res
assert "Use 2" in res
if __name__ == '__main__':
unittest.main()
</code></pre>
<p>Can someone help me?</p>
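One pattern that fits the stated goal (mock `bar_method` only for the first caller) is scoping the patch around that specific call site rather than decorating the whole test; a self-contained sketch with stand-in classes, assuming a context-manager-scoped patch is acceptable for the end-to-end run:

```python
from unittest.mock import patch

class Bar:
    def bar_method(self):
        return "real method"

class Foo:
    def __init__(self):
        self.bar = Bar()
    def method_that_use_bar_function(self):
        return "Use 1 - " + self.bar.bar_method()
    def method_that_use_bar_function_too(self):
        return "Use 2 - " + self.bar.bar_method()

foo = Foo()

# Patch only this instance's bar_method around the first call; the
# second call runs against the real implementation again.
with patch.object(foo.bar, "bar_method", return_value="mocked"):
    res1 = foo.method_that_use_bar_function()
res2 = foo.method_that_use_bar_function_too()

print(res1)  # Use 1 - mocked
print(res2)  # Use 2 - real method
```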
|
<python><mocking><pytest><python-unittest><pytest-mock>
|
2023-03-31 00:15:43
| 1
| 436
|
natielle
|