| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,141,428
| 2,100,039
|
Python "checknull() takes no keyword arguments" error
|
<p>I have tried to research this error and cannot seem to find an answer. I've never seen it before, and this code worked fine last month; now I receive this error with the same code. Please advise, and thank you. I've included my code and the error message below. The program is designed to read data from a URL and ingest that data into a new dataframe. I do not understand the meaning of "checknull()".</p>
<pre><code>import os
os.environ["PROJ_LIB"] = 'C:\\Users\\Yury\\anaconda3\\Library\\share'
from sys import exit
import netCDF4 as nc4
from netCDF4 import Dataset
import numpy as np
import matplotlib as m
import matplotlib.pyplot as plt
#from mpl_toolkits.basemap import Basemap, cm
import datetime
from datetime import datetime
import pandas as pd
import xarray as xr
import cartopy.crs as ccrs
import math
#import cdstoolbox as ct
import bottleneck as bn
from mpl_toolkits.basemap import Basemap
import cdsapi
from matplotlib.pyplot import figure
import cartopy.feature as cfeature
import time
import calendar
# -----------------------------------------------------------------------------------------------------------
#
# -----------------------------------------------------------------------------------------------------------
#setx ECCODES_DEFINITION_PATH "C:\\Users\\U321103\\Anaconda3\\envs\\Maps2\\Library\\share\\eccodes\\definitions"
# copy setx... in command prompt in C:\\Users\\U321103
#!/usr/bin/env python3
c = cdsapi.Client()
url = c.retrieve(
'reanalysis-era5-single-levels-monthly-means',
{
'product_type': 'monthly_averaged_reanalysis',
'format': 'grib',
'variable': ['Mean sea level pressure'],
'year': ['1992','1993','1994','1995','1996','1997','1998','1999','2000','2001','2002','2003','2004','2005','2006',
'2007','2008','2009','2010','2011','2012','2013','2014','2015','2016','2017','2018','2019','2020','2021','2022'],
'month': ['01','02','03','04','05','06','07','08','09','10','11','12'],
# 'month': '12',
# 'day': '01',
'time': ['00:00'],
'grid': [0.25, 0.25],
# GRID = 0.5 TO MATCH VORTEX ?
'area': [65.00, -140.00, 15.00, -53.00],
},
"C:\\Users\\U321103\\.spyder-py3\\ERA5_MAPPING\\mean_sea_level_pressure")
path = "C:\\Users\\U321103\\.spyder-py3\\ERA5_MAPPING\\mean_sea_level_pressure"
ds = xr.load_dataset(path, engine='cfgrib')
exit()
</code></pre>
<p>And the error --- ></p>
<pre><code>runfile('//porfiler03.ar.local/gtdshare/GOALS_2022/WeatherTypes/ERA5_Get_MSLP_USA_WORKING_MONTHLY.py', wdir='//porfiler03.ar.local/gtdshare/GOALS_2022/WeatherTypes')
2023-01-16 18:09:51,830 INFO Welcome to the CDS
2023-01-16 18:09:51,831 INFO Sending request to https://cds.climate.copernicus.eu/api/v2/resources/reanalysis-era5-single-levels-monthly-means
2023-01-16 18:09:52,149 INFO Request is completed
2023-01-16 18:09:52,150 INFO Downloading https://download-0001-clone.copernicus-climate.eu/cache-compute-0001/cache/data0/adaptor.mars.internal-1673912965.1298344-26515-5-eb9a349c-be6c-4e0c-b3fb-79e0d436a411.grib to C:\Users\U321103\.spyder-py3\ERA5_MAPPING\mean_sea_level_pressure (49.9M)
2023-01-16 18:09:59,400 INFO Download rate 6.9M/s
2023-01-16 18:09:59,401 WARNING Ignoring index file 'C:\\Users\\U321103\\.spyder-py3\\ERA5_MAPPING\\mean_sea_level_pressure.923a8.idx' older than GRIB file
Traceback (most recent call last):
File "C:\Users\U321103\Anaconda3\envs\Maps2\lib\site-packages\xarray\conventions.py", line 523, in decode_cf_variables
new_vars[k] = decode_cf_variable(
File "C:\Users\U321103\Anaconda3\envs\Maps2\lib\site-packages\xarray\conventions.py", line 364, in decode_cf_variable
var = coder.decode(var, name=name)
File "C:\Users\U321103\Anaconda3\envs\Maps2\lib\site-packages\xarray\coding\variables.py", line 189, in decode
encoded_fill_values = {
File "C:\Users\U321103\Anaconda3\envs\Maps2\lib\site-packages\xarray\coding\variables.py", line 193, in <setcomp>
if not pd.isnull(fv)
File "C:\Users\U321103\Anaconda3\envs\Maps2\lib\site-packages\pandas\core\dtypes\missing.py", line 185, in isna
return _isna(obj)
File "C:\Users\U321103\Anaconda3\envs\Maps2\lib\site-packages\pandas\core\dtypes\missing.py", line 208, in _isna
return libmissing.checknull(obj, inf_as_na=inf_as_na)
TypeError: checknull() takes no keyword arguments
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\U321103\Anaconda3\envs\Maps2\lib\site-packages\spyder_kernels\py3compat.py", line 356, in compat_exec
exec(code, globals, locals)
File "\\porfiler03.ar.local\gtdshare\goals_2022\weathertypes\era5_get_mslp_usa_working_monthly.py", line 58, in <module>
ds = xr.load_dataset(path, engine='cfgrib')
File "C:\Users\U321103\Anaconda3\envs\Maps2\lib\site-packages\xarray\backends\api.py", line 258, in load_dataset
object.
File "C:\Users\U321103\Anaconda3\envs\Maps2\lib\site-packages\xarray\backends\api.py", line 545, in open_dataset
ds = _dataset_from_backend_dataset(
File "C:\Users\U321103\Anaconda3\envs\Maps2\lib\site-packages\xarray\backends\api.py", line 451, in maybe_decode_store
specified). If None (default), attempt to decode times to
File "C:\Users\U321103\Anaconda3\envs\Maps2\lib\site-packages\xarray\conventions.py", line 659, in decode_cf
vars, attrs, coord_names = decode_cf_variables(
File "C:\Users\U321103\Anaconda3\envs\Maps2\lib\site-packages\xarray\conventions.py", line 534, in decode_cf_variables
raise type(e)(f"Failed to decode variable {k!r}: {e}")
TypeError: Failed to decode variable 'number': checknull() takes no keyword arguments
</code></pre>
|
<python><error-handling><null><runtime-error>
|
2023-01-17 02:19:47
| 0
| 1,366
|
user2100039
|
75,141,364
| 17,823,260
|
How to convert two columns of dataframe into an orderedDict in Python?
|
<p>I have a table named <code>tableTest</code> like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>startDate</th>
<th>endDate</th>
</tr>
</thead>
<tbody>
<tr>
<td>2022-12-15</td>
<td>2022-12-18</td>
</tr>
<tr>
<td>2022-12-19</td>
<td>2022-12-21</td>
</tr>
<tr>
<td>2022-12-22</td>
<td>2022-12-24</td>
</tr>
<tr>
<td>2022-12-26</td>
<td>2022-12-27</td>
</tr>
<tr>
<td>2022-12-29</td>
<td>2022-12-30</td>
</tr>
<tr>
<td>2022-12-02</td>
<td>2022-12-04</td>
</tr>
<tr>
<td>2022-12-06</td>
<td>2022-12-07</td>
</tr>
<tr>
<td>2022-12-07</td>
<td>2022-12-08</td>
</tr>
<tr>
<td>2022-12-09</td>
<td>2022-12-09</td>
</tr>
<tr>
<td>2022-12-13</td>
<td>2022-12-14</td>
</tr>
</tbody>
</table>
</div>
<p><strong>I need to loop the key-value pairs consisting of startDate and endDate by original order.</strong></p>
<p>What I did:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
data = [
("2022-12-15", "2022-12-18"),
("2022-12-19", "2022-12-21"),
("2022-12-22", "2022-12-24"),
("2022-12-26", "2022-12-27"),
("2022-12-29", "2022-12-30"),
("2022-12-02", "2022-12-04"),
("2022-12-06", "2022-12-07"),
("2022-12-07", "2022-12-08"),
("2022-12-13", "2022-12-14"),
("2023-01-01", "2023-01-03"),
]
df = spark.createDataFrame(data).toDF(*('startDate', 'endDate')).toPandas()
dictTest = df.set_index('startDate')['endDate'].to_dict()
print(dictTest)
for k,v in dictTest.items():
print(f'startDate is {k} and corresponding endDate is {v}.')
</code></pre>
<p>The above code can indeed convert these two columns into a dict, but the dict is unordered, so I lose the original order of the columns.</p>
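If it helps, one possible sketch (not necessarily the only way): since Python 3.7, plain dicts preserve insertion order, and `pandas.DataFrame` rows are iterated in order, so zipping the two columns into a `collections.OrderedDict` makes the ordering intent explicit while keeping the row order:

```python
from collections import OrderedDict

import pandas as pd

# A small stand-in for the table in the question
df = pd.DataFrame({
    "startDate": ["2022-12-15", "2022-12-19", "2022-12-22"],
    "endDate": ["2022-12-18", "2022-12-21", "2022-12-24"],
})

# zip pairs the columns row by row, preserving the dataframe's order
dictTest = OrderedDict(zip(df["startDate"], df["endDate"]))

for k, v in dictTest.items():
    print(f"startDate is {k} and corresponding endDate is {v}.")
```

The same `zip` approach also works with a plain `dict` on modern Python, since insertion order is guaranteed there too.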
<p>Thank you in advance.</p>
|
<python><pandas><pyspark>
|
2023-01-17 02:04:48
| 2
| 339
|
Guoran Yun
|
75,141,277
| 10,200,497
|
get the first row of mask that meets conditions and create a new column
|
<p>This is my pandas dataframe:</p>
<pre><code>df = pd.DataFrame({'a': [20, 21, 333, 444], 'b': [20, 20, 20, 20]})
</code></pre>
<p>I want to create column <code>c</code> by using this <code>mask</code>:</p>
<pre><code>mask = (df.a >= df.b)
</code></pre>
<p>and I want to get the second row that meets this condition and create column <code>c</code>.
The output that I want looks like this:</p>
<pre><code> a b c
0 20 20 NaN
1 21 20 x
2 333 20 NaN
3 444 20 NaN
</code></pre>
<p>And this is my attempt, but it puts 'x' on all rows after the first row:</p>
<pre><code>df.loc[mask.cumsum().ne(1) & mask, 'c'] = 'x'
</code></pre>
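One possible sketch, assuming the goal is to flag only the row where the condition holds for the second time: compare the cumulative count of `True` values to 2, instead of negating `ne(1)`:

```python
import pandas as pd

df = pd.DataFrame({'a': [20, 21, 333, 444], 'b': [20, 20, 20, 20]})
mask = df.a >= df.b

# cumsum() counts how many times the condition has held so far;
# eq(2) keeps only the second occurrence, and "& mask" guards against
# rows where the running count is 2 but the condition itself is False
df.loc[mask.cumsum().eq(2) & mask, 'c'] = 'x'
print(df)
```

The same pattern generalizes: `eq(n)` selects the n-th row satisfying the mask.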
|
<python><pandas>
|
2023-01-17 01:44:53
| 1
| 2,679
|
AmirX
|
75,141,238
| 4,334,110
|
exhaustive search over a list of complex strings without modifying original input
|
<p>I am attempting to write a minimal algorithm that exhaustively searches a list of strings for duplicates and removes them by index, without changing the case of the remaining words or their meanings.</p>
<p>The caveat is that the list contains words such as Blood, blood, DNA, ACTN4, 34-methyl-O-carboxy, Brain, brain-facing-mouse, BLOOD, and so on.</p>
<p>I only want to remove the duplicate 'blood' entries, keep the first occurrence with its first letter capitalized, and not modify the case of any other words. Any suggestions on how I should proceed?</p>
<p>Here is my code</p>
<pre><code>def remove_duplicates(list_of_strings):
""" function that takes input of a list of strings,
uses index to iterate over each string lowers each string
and returns a list of strings with no duplicates, does not modify the original strings
an exhaustive search to remove duplicates using index of list and list of string"""
list_of_strings_copy = list_of_strings
try:
for i in range(len(list_of_strings)):
list_of_strings_copy[i] = list_of_strings_copy[i].lower()
word = list_of_strings_copy[i]
for j in range(len(list_of_strings_copy)):
if word == list_of_strings_copy[j]:
list_of_strings.pop(i)
j+=1
except Exception as e:
print(e)
return list_of_strings
</code></pre>
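For comparison, a common pattern for this kind of task (a sketch, under the assumption that "duplicate" means case-insensitively equal): track lowercased keys in a set and build a new list, keeping the first occurrence unmodified, rather than lowercasing and popping from the list while iterating over it:

```python
def remove_duplicates(list_of_strings):
    """Return a new list keeping the first occurrence of each
    case-insensitive duplicate, leaving the original casing intact."""
    seen = set()
    result = []
    for word in list_of_strings:
        key = word.lower()  # compare case-insensitively
        if key not in seen:
            seen.add(key)
            result.append(word)  # keep the word exactly as written
    return result

words = ['Blood', 'blood', 'DNA', 'ACTN4', '34-methyl-O-carboxy',
         'Brain', 'brain-facing-mouse', 'BLOOD']
print(remove_duplicates(words))
```

Because the input list is never mutated, the index-skipping bug that `pop(i)` introduces during iteration does not arise.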
|
<python><list><algorithm><search><duplicates>
|
2023-01-17 01:36:49
| 2
| 1,136
|
Paritosh Kulkarni
|
75,141,237
| 558,619
|
General function to turn string into **kwargs
|
<p>I'm trying to find a way to pass a string (coming from outside the python world!) that can be interpreted as <code>**kwargs</code> once it gets to the Python side.</p>
<p>I have been trying to use <a href="https://stackoverflow.com/questions/38799223/parse-string-to-identify-kwargs-and-args">this pyparsing example</a>, but the string being passed in that example is too specific, and I had never heard of pyparsing until now. I'm trying to make it more human-friendly and robust to small differences in spacing, etc. For example, I would like to pass the following.</p>
<pre><code>input_str = "a = [1,2], b= False, c =('abc', 'efg'),d=1"
desired_kwargs = {a : [1,2], b:False, c:('abc','efg'), d:1}
</code></pre>
<p>When I try this code though, no love.</p>
<pre><code>from pyparsing import *
# Names for symbols
_quote = Suppress('"')
_eq = Suppress('=')
# Parsing grammar definition
data = (
delimitedList( # Zero or more comma-separated items
Group( # Group the contained unsuppressed tokens in a list
Regex(u'[^=,)\s]+') + # Grab everything up to an equal, comma, endparen or whitespace as a token
Optional( # Optionally...
_eq + # match an =
_quote + # a quote
Regex(u'[^"]*') + # Grab everything up to another quote as a token
_quote) # a quote
) # EndGroup - will have one or two items.
)) # EndList
def process(s):
items = data.parseString(s).asList()
args = [i[0] for i in items if len(i) == 1]
kwargs = {i[0]:i[1] for i in items if len(i) == 2}
return args,kwargs
def hello_world(named_arg, named_arg_2 = 1, **kwargs):
print(process(kwargs))
hello_world(1, 2, "my_kwargs_are_gross = True, some_bool=False, a_list=[1,2,3]")
#output: "{my_kwargs_are_gross : True, some_bool:False, a_list:[1,2,3]}"
</code></pre>
<p><strong>Requirements</strong>:</p>
<ol>
<li>The '<code>{'</code> and <code>'}'</code> will be appended on the code side.</li>
<li>Only standard types / standard iterables (list, tuple, etc) will be used in the kwargs-string. No special characters that I can think of...</li>
<li>The kwargs-string will be like they are entered into a function on the python side, ie, <code>'x=1, y=2'</code>. Not as a string of a dictionary.</li>
<li>I think its a safe assumption that the first step in the string parse will be to remove all whitespace.</li>
</ol>
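Given requirement 2 (only standard literal types), one alternative to a hand-rolled grammar is a sketch that lets Python's own parser do the work: wrap the string in a synthetic call so `ast` parses the keyword syntax, then evaluate each value with `ast.literal_eval` (which, unlike `eval`, only accepts literals and is safe on untrusted input):

```python
import ast

def parse_kwargs(s):
    """Parse a string like 'a=[1,2], b=False' into a dict of literals."""
    # Wrap the string in a fake call "f(...)" so ast parses keyword args
    call = ast.parse(f"f({s})", mode="eval").body
    # literal_eval accepts AST nodes and rejects anything non-literal
    return {kw.arg: ast.literal_eval(kw.value) for kw in call.keywords}

input_str = "a = [1,2], b= False, c =('abc', 'efg'),d=1"
print(parse_kwargs(input_str))
```

Whitespace handling comes for free from the parser, so no pre-stripping step is needed.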
|
<python><parsing>
|
2023-01-17 01:36:29
| 4
| 3,541
|
keynesiancross
|
75,141,191
| 19,826,650
|
Change List format in Python
|
<p>I have a list like the one below:</p>
<pre><code>[['-6.167665', '106.904251'], ['-6.167665', '106.904251']]
</code></pre>
<p>How to make it like this?</p>
<pre><code>[[-6.167665, 106.904251], [-6.167665, 106.904251]]
</code></pre>
<p>Any suggestions how to do it?</p>
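One way, sketched: a nested list comprehension that applies `float` to each element:

```python
coords = [['-6.167665', '106.904251'], ['-6.167665', '106.904251']]

# Convert every string in every inner list to a float
converted = [[float(x) for x in pair] for pair in coords]
print(converted)
# [[-6.167665, 106.904251], [-6.167665, 106.904251]]
```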
|
<python>
|
2023-01-17 01:25:49
| 5
| 377
|
Jessen Jie
|
75,141,188
| 4,195,053
|
How to record code execution timings for multiple functions as a pandas dataframe in Python
|
<p>I have multiple functions, where each function is designed to perform a particular task. For the sake of an example, suppose I want to prepare lunch. Possible functions for this task can be, <code>collect_vege_images()</code> <code>remove_duplicates()</code> and <code>count_veges_in_multiple_plates()</code>.</p>
<p><strong>Problem:</strong></p>
<p>I want to generate a dataframe where each row records the function start time, end time and elapsed time.</p>
<pre><code>Iteration image count Function start time end time elapse time
1 200 collect_vege_images 11.00 11.10 0.10
200 remove_duplicates 11.10 11.15 0.5
100 count_veges_in_multiple_plates 11.16 11.20 0.4
2 300 collect_vege_images 11.21 11.31 0.10
150 remove_duplicates 11.31 11.35 0.5
50 count_veges_in_multiple_plates 11.35 11.39 0.4
</code></pre>
<p><strong>What I have tried so far</strong></p>
<p>I have written both functions; however, I'm not able to get the desired output. The code is given below, as well as the output it generates. Needless to say, I've already looked at similar questions <a href="https://stackoverflow.com/questions/2866380/how-can-i-time-a-code-segment-for-testing-performance-with-pythons-timeit?noredirect=1&lq=1">1</a>, <a href="https://stackoverflow.com/questions/2245161/how-to-measure-execution-time-of-functions-automatically-in-python">2</a>, <a href="https://stackoverflow.com/questions/66021748/is-there-a-way-to-time-a-block-of-code-that-is-executed-many-times">3</a>, <a href="https://stackoverflow.com/questions/47810404/how-to-measure-execution-time-of-python-code">4</a>, but most relate to timing individual functions only. I'm looking for a simple solution, as I'm a beginner in Python programming.</p>
<pre><code>import os
import pandas as pd
import time
VEGE_SOURCE = r'\\path to vegetable images'
VEGE_COUNT = 5
VEGE_DST = r'\\path to storing vegetable images'
LOG_FILE_NAME = 'vege_log.csv'
plates = 5
CYCLES=5
counter = ([] for i in range(7))
VEGEID = 'Potato'
def collect_vege_images(VEGE_SOURCE):
plate = os.listdir(VEGE_SOURCE)
if len(plate) == 0:
print("vege source is empty: ", folder)
else:
for i in range(VEGE_COUNT):
vege = plate[0]
curr_vege = VEGE_SOURCE + '\\' + vege
shutil.move(curr_vege, VEGE_DST)
plate.pop(0)
return
def count_veges_in_multiple_plates(plates):
N = 0
for root, dirname, files in os.walk(plates):
# print(files)
file_count = len(files)
vege_img_count += file_count
return vege_img_count
if __name__ == '__main__':
collect_vege_images(VEGE_SOURCE)
img_count = count_veges_in_multiple_plates(plates=5)
for i in range(CYCLES):
print("Round # ", i)
counter.append(i)
# print("counter: ", counter)
start_time = time.process_time()
collect_vege_images(VEGE_SOURCE)
count_veges_in_multiple_plates(plates=5)
end_time = time.process_time()
elapse_time = round((end_time - start_time), 2)
fun = collect_vege_images.__name__
df = pd.DataFrame(
{'vegeid': VEGEID, 'imgcnt': img_count, 'func': fun, 'start_time': start_time, 'end_time': end_time,
'elapse_time': elapse_time}, index=[0])
print(df)
</code></pre>
<p><strong>Current code output given below</strong></p>
<pre><code>Round # 1
Moving files...
122 files moved!
iteration imgcnt func start_time end_time elapse_time
0 122 collect_vege_images 22.10 22.15 0.5
Round # 2
Moving files...
198 files moved!
iteration imgcnt func start_time end_time elapse_time
1 122 collect_vege_images 22.15 22.19 0.04
</code></pre>
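One way to structure this (a sketch, with dummy functions standing in for the real `collect_vege_images` and friends): record every call's timings as a dict in a list via a decorator, then build the dataframe once at the end rather than recreating it inside the loop:

```python
import functools
import time

import pandas as pd

records = []  # one dict per timed function call

def timed(func):
    """Decorator that appends start/end/elapsed times of each call."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        end = time.perf_counter()
        records.append({'func': func.__name__,
                        'start_time': start,
                        'end_time': end,
                        'elapse_time': round(end - start, 4)})
        return result
    return wrapper

@timed
def collect_vege_images():
    time.sleep(0.01)  # placeholder for the real work

@timed
def remove_duplicates():
    time.sleep(0.01)  # placeholder for the real work

for _ in range(2):  # two iterations, as in the desired output
    collect_vege_images()
    remove_duplicates()

df = pd.DataFrame(records)  # one row per function call
print(df)
```

Extra columns such as the iteration number or image count could be appended to each record dict in the same way.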
|
<python><python-3.x><pandas><dataframe>
|
2023-01-17 01:25:14
| 1
| 2,022
|
mnm
|
75,141,149
| 558,639
|
numpy: limiting min value of a scalar, 1D or nD array
|
<p>Given a scalar, a 1D or an N-D numpy (numeric) array, I'd like to replace all values less than <code>threshold</code> with <code>threshold</code>. So, for example:</p>
<pre><code>def fn(a, threshold):
return ???
fn(2, 2.5) => 2.5 # scalar
fn([1, 2, 3, 4], 2.5) => [2.5, 2.5, 3, 4] # 1-D
fn([[1, 2, 3, 4], [0, 2, 4, 6]], 2.5) => [[2.5, 2.5, 3, 4], [2.5, 2.5, 4, 6]] # 2-D
</code></pre>
<p><em>(Note: For ease of reading, I've shown the arrays above with ordinary Python array syntax, but they're actually numpy.ndarrays.)</em></p>
<p>I could use <code>if</code> statements and dispatch on the type of <code>a</code> to handle each case. But I'm still wrapping my head around numpy's broadcast methods: is there a simple numpy idiom for handling this situation?</p>
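If I read the question right, `np.maximum` broadcasts a scalar threshold against scalars and arrays of any rank, so a sketch could be as simple as:

```python
import numpy as np

def fn(a, threshold):
    # np.maximum broadcasts threshold against scalars and N-D arrays alike
    return np.maximum(a, threshold)

print(fn(2, 2.5))  # 2.5
print(fn(np.array([1, 2, 3, 4]), 2.5))
print(fn(np.array([[1, 2, 3, 4], [0, 2, 4, 6]]), 2.5))
```

No type dispatch is needed; broadcasting handles all three cases with the same call (note it returns a NumPy scalar rather than a plain Python number for scalar input).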
|
<python><numpy><array-broadcasting>
|
2023-01-17 01:13:59
| 1
| 35,607
|
fearless_fool
|
75,141,134
| 4,420,797
|
cifar100: spikes in accuracy and loss
|
<p>I am training a <code>preactresnet18</code> network on the cifar100 dataset. I have adjusted all training settings, but I can still see spikes in my training loss and accuracy, and I have no idea why it is happening. I have changed the learning rate and batch size, but I still cannot solve it.</p>
<p><strong>Code</strong></p>
<pre><code>parser.add_argument('--epochs', default=200, type=int, metavar='N',
help='number of total epochs to run (default: 200)')
parser.add_argument('-b', '--batch-size', default=256, type=int, metavar='N',
help='mini-batch size (default: 256), this is the total '
'batch size of all GPUs on the current node when '
'using Data Parallel')
parser.add_argument('--lr', '--learning-rate', default=0.0001, type=float,
                    metavar='LR', help='initial learning rate (default: 0.1)',
dest='lr')
parser.add_argument('--momentum', default=0.9, type=float, metavar='M',
help='momentum (default: 0.9)')
parser.add_argument('--wd', '--weight-decay', default=5e-4, type=float,
metavar='W', help='weight decay (default: 5e-4)',
dest='weight_decay')
</code></pre>
<p><strong>Loader</strong></p>
<pre><code>def cifar100_loader(batch_size, num_workers, datapath, cuda):
normalize = transforms.Normalize(
mean=[0.4914, 0.4822, 0.4465],
std=[0.2023, 0.1994, 0.2010])
transform_train = transforms.Compose([
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
normalize,
])
transform_val = transforms.Compose([
transforms.ToTensor(),
normalize,
])
trainset = CIFAR100(
root=datapath, train=True, download=True,
transform=transform_train)
valset = CIFAR100(
root=datapath, train=False, download=True,
transform=transform_val)
if cuda:
train_loader = torch.utils.data.DataLoader(
trainset,
batch_size=batch_size, shuffle=True, drop_last=True,
num_workers=num_workers, pin_memory=True)
val_loader = torch.utils.data.DataLoader(
valset,
batch_size=batch_size, shuffle=False,
num_workers=num_workers, pin_memory=True)
else:
train_loader = torch.utils.data.DataLoader(
trainset,
batch_size=batch_size, shuffle=True,
num_workers=num_workers, pin_memory=False)
val_loader = torch.utils.data.DataLoader(
valset,
batch_size=batch_size, shuffle=False,
num_workers=num_workers, pin_memory=False)
return train_loader, val_loader
</code></pre>
<p><strong>Main</strong></p>
<pre><code> import time
import pathlib
from os.path import isfile
import torch
import torch.nn as nn
import torch.optim as optim
import torch.backends.cudnn as cudnn
import models
from colorBox import Rand_Square_Mask
from utils import *
from config import config
from data import DataLoader
import wandb
# for ignore imagenet PIL EXIF UserWarning
import warnings
warnings.filterwarnings("ignore", "(Possibly )?corrupt EXIF data", UserWarning)
best_acc1 = 0
wandb.init(project="ICCV2023", entity="text")
def main():
global opt, start_epoch, best_acc1
opt = config()
if opt.cuda and not torch.cuda.is_available():
raise Exception('No GPU found, please run without --cuda')
print('\n=> creating model \'{}\''.format(opt.arch))
if opt.arch == 'preactresnet18':
model = models.preactresnet18()
else:
model = models.__dict__[opt.arch](opt.dataset, opt.width_mult)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=opt.lr,
momentum=opt.momentum, weight_decay=opt.weight_decay,
nesterov=True)
start_epoch = 0
n_retrain = 0
if opt.cuda:
torch.cuda.set_device(opt.gpuids[0])
with torch.cuda.device(opt.gpuids[0]):
model = model.cuda()
criterion = criterion.cuda()
model = nn.DataParallel(model, device_ids=opt.gpuids,
output_device=opt.gpuids[0])
cudnn.benchmark = True
# checkpoint file
ckpt_dir = pathlib.Path('checkpoint')
ckpt_file = ckpt_dir / opt.arch / opt.dataset / opt.ckpt
# for resuming training
if opt.resume:
if isfile(ckpt_file):
print('==> Loading Checkpoint \'{}\''.format(opt.ckpt))
checkpoint = load_model(model, ckpt_file, opt)
start_epoch = checkpoint['epoch']
optimizer.load_state_dict(checkpoint['optimizer'])
print('==> Loaded Checkpoint \'{}\' (epoch {})'.format(
opt.ckpt, start_epoch))
else:
print('==> no checkpoint found \'{}\''.format(
opt.ckpt))
return
# Data loading
print('==> Load data..')
train_loader, val_loader = DataLoader(opt.batch_size, opt.workers,
opt.dataset, opt.datapath,
opt.cuda)
# for evaluation
if opt.evaluate:
if isfile(ckpt_file):
print('==> Loading Checkpoint \'{}\''.format(opt.ckpt))
checkpoint = load_model(model, ckpt_file, opt)
start_epoch = checkpoint['epoch']
optimizer.load_state_dict(checkpoint['optimizer'])
print('==> Loaded Checkpoint \'{}\' (epoch {})'.format(
opt.ckpt, start_epoch))
# evaluate on validation set
print('\n===> [ Evaluation ]')
start_time = time.time()
acc1, acc5 = validate(val_loader, model, criterion)
save_eval(['{}-{}-{}'.format(opt.arch, opt.dataset, opt.ckpt[:-4]),
str(acc1)[7:-18], str(acc5)[7:-18]], opt)
elapsed_time = time.time() - start_time
print('====> {:.2f} seconds to evaluate this model\n'.format(
elapsed_time))
return
else:
print('==> no checkpoint found \'{}\''.format(
opt.ckpt))
return
# train...
train_time = 0.0
validate_time = 0.0
for epoch in range(start_epoch, opt.epochs):
adjust_learning_rate(optimizer, epoch, opt.lr)
print('\n==> {}/{} training'.format(opt.arch, opt.dataset))
print('==> Epoch: {}, lr = {}'.format(
epoch, optimizer.param_groups[0]["lr"]))
# train for one epoch
print('===> [ Training ]')
start_time = time.time()
acc1_train, acc5_train = train(train_loader,
epoch=epoch, model=model,
criterion=criterion, optimizer=optimizer)
elapsed_time = time.time() - start_time
train_time += elapsed_time
print('====> {:.2f} seconds to train this epoch\n'.format(
elapsed_time))
# evaluate on validation set
print('===> [ Validation ]')
start_time = time.time()
acc1_valid, acc5_valid = validate(val_loader, model, criterion)
elapsed_time = time.time() - start_time
validate_time += elapsed_time
print('====> {:.2f} seconds to validate this epoch\n'.format(
elapsed_time))
# remember best Acc@1 and save checkpoint and summary csv file
is_best = acc1_valid > best_acc1
best_acc1 = max(acc1_valid, best_acc1)
state = {'epoch': epoch + 1,
'model': model.state_dict(),
'optimizer': optimizer.state_dict()}
summary = [epoch,
str(acc1_train)[7:-18], str(acc5_train)[7:-18],
str(acc1_valid)[7:-18], str(acc5_valid)[7:-18]]
save_model(state, epoch, is_best, opt)
save_summary(summary, opt)
avg_train_time = train_time / (opt.epochs - start_epoch)
avg_valid_time = validate_time / (opt.epochs - start_epoch)
total_train_time = train_time + validate_time
print('====> average training time per epoch: {:,}m {:.2f}s'.format(
int(avg_train_time // 60), avg_train_time % 60))
print('====> average validation time per epoch: {:,}m {:.2f}s'.format(
int(avg_valid_time // 60), avg_valid_time % 60))
print('====> training time: {}h {}m {:.2f}s'.format(
int(train_time // 3600), int((train_time % 3600) // 60), train_time % 60))
print('====> validation time: {}h {}m {:.2f}s'.format(
int(validate_time // 3600), int((validate_time % 3600) // 60), validate_time % 60))
print('====> total training time: {}h {}m {:.2f}s'.format(
int(total_train_time // 3600), int((total_train_time % 3600) // 60), total_train_time % 60))
def train(train_loader, **kwargs):
epoch = kwargs.get('epoch')
model = kwargs.get('model')
criterion = kwargs.get('criterion')
optimizer = kwargs.get('optimizer')
batch_time = AverageMeter('Time', ':6.3f')
data_time = AverageMeter('Data', ':6.3f')
losses = AverageMeter('Loss', ':.4e')
top1 = AverageMeter('Acc@1', ':6.2f')
top5 = AverageMeter('Acc@5', ':6.2f')
progress = ProgressMeter(len(train_loader), batch_time, data_time,
losses, top1, top5, prefix="Epoch: [{}]".format(epoch))
# switch to train mode
model.train()
end = time.time()
for i, (input, target) in enumerate(train_loader):
# measure data loading time
data_time.update(time.time() - end)
if opt.cuda:
target = target.cuda(non_blocking=True)
# compute output
mask1 = Rand_Square_Mask(percent_augment=0.2, img_size=(32, 32), divisions=2, rand_spawn=False)
images = mask1(input)
output = model(images)
loss = criterion(output, target)
# measure accuracy and record loss
acc1, acc5 = accuracy(output, target, topk=(1, 5))
losses.update(loss.item(), input.size(0))
top1.update(acc1[0], input.size(0))
top5.update(acc5[0], input.size(0))
# compute gradient and do SGD step
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({"loss": losses.avg, "epoch": opt.epochs, 'acc': top1.avg, 'batch': opt.batch_size})
# measure elapsed time
batch_time.update(time.time() - end)
if i % opt.print_freq == 0:
progress.print(i)
end = time.time()
print('====> Acc@1 {top1.avg:.3f} Acc@5 {top5.avg:.3f}'
.format(top1=top1, top5=top5))
return top1.avg, top5.avg
def validate(val_loader, model, criterion):
batch_time = AverageMeter('Time', ':6.3f')
losses = AverageMeter('Loss', ':.4e')
top1 = AverageMeter('Acc@1', ':6.2f')
top5 = AverageMeter('Acc@5', ':6.2f')
progress = ProgressMeter(len(val_loader), batch_time, losses, top1, top5,
prefix='Test: ')
# switch to evaluate mode
model.eval()
with torch.no_grad():
end = time.time()
for i, (input, target) in enumerate(val_loader):
if opt.cuda:
target = target.cuda(non_blocking=True)
# compute output
output = model(input)
loss = criterion(output, target)
# measure accuracy and record loss
acc1, acc5 = accuracy(output, target, topk=(1, 5))
losses.update(loss.item(), input.size(0))
top1.update(acc1[0], input.size(0))
top5.update(acc5[0], input.size(0))
# measure elapsed time
batch_time.update(time.time() - end)
if i % opt.print_freq == 0:
progress.print(i)
end = time.time()
print('====> Acc@1 {top1.avg:.3f} Acc@5 {top5.avg:.3f}'
.format(top1=top1, top5=top5))
return top1.avg, top5.avg
if __name__ == '__main__':
start_time = time.time()
main()
elapsed_time = time.time() - start_time
print('====> total time: {}h {}m {:.2f}s'.format(
int(elapsed_time // 3600), int((elapsed_time % 3600) // 60), elapsed_time % 60))
</code></pre>
<p><strong>Training Configuration</strong></p>
<ul>
<li>Single GPU (NVIDIA 3080 <code>12 GB</code>)</li>
<li>Batch size: <code>384</code></li>
<li>Epochs: <code>400</code></li>
<li>Learning rate: <code>0.001</code>, momentum <code>0.9</code>, weight decay <code>5e-4</code></li>
</ul>
<p>Results
<a href="https://i.sstatic.net/1GCqR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1GCqR.png" alt="enter image description here" /></a></p>
|
<python><pytorch><dataset><torch><resnet>
|
2023-01-17 01:10:38
| 0
| 2,984
|
Khawar Islam
|
75,141,095
| 4,755,954
|
Streamlit + Spacy causing "AttributeError: 'PathDistribution' object has no attribute '_normalized_name'"
|
<blockquote>
<p><strong>Note:</strong> This is not a duplicate question as I have gone through <a href="https://stackoverflow.com/questions/67212594/attributeerror-pathdistribution-object-has-no-attribute-name">this answer</a> and made the necessary package downgrade but it still results in the same error. Details below.</p>
</blockquote>
<h2># System Details</h2>
<ul>
<li>MacBook Air (M1, 2020)</li>
<li>MacOS Monterey 12.3</li>
<li>Python 3.10.8 (Miniconda environment)</li>
<li>Relevant library versions from <code>pip freeze</code></li>
</ul>
<pre><code>importlib-metadata==3.4.0
PyMuPDF==1.21.1
spacy==3.4.4
spacy-alignments==0.9.0
spacy-legacy==3.0.11
spacy-loggers==1.0.4
spacy-transformers==1.2.0
streamlit==1.17.0
flair==0.11.3
catalogue==2.0.8
</code></pre>
<h2># Setup</h2>
<ul>
<li>I am trying to use <code>Spacy</code> for some text processing over a pdf document uploaded to a <code>Streamlit</code> app.</li>
<li>The <code>Streamlit</code> app basically contains an upload button, submit button (which calls the preprocessing and spacy functions), and a <code>text_area</code> to display the processed text.</li>
</ul>
<p>Here is the working code for uploading a pdf document and extracting its text -</p>
<pre><code>import streamlit as st
import fitz
def load_file(file):
doc = fitz.open(stream=uploaded_file.read(), filetype="pdf")
text = []
with doc:
for page in doc:
text.append(page.get_text())
text = "\n".join(text)
return text
#####################################################################
st.title("Test app")
col1, col2 = st.columns([1,1], gap='small')
with col1:
with st.expander("Description -", expanded=True):
st.write("This is the description of the app.")
with col2:
with st.form(key="my_form"):
uploaded_file = st.file_uploader("Upload",type='pdf', accept_multiple_files=False, label_visibility="collapsed")
submit_button = st.form_submit_button(label="Process")
#####################################################################
col1, col2 = st.columns([1,3], gap='small')
with col1:
st.header("Metrics")
with col2:
st.header("Text")
if uploaded_file is not None:
text = load_file(uploaded_file)
st.text_area(text)
</code></pre>
<h2># Reproduce base code</h2>
<ul>
<li>install necessary libraries</li>
<li>save above code to a <code>test.py</code> file</li>
<li>from terminal navigate to folder and run <code>streamlit run test.py</code></li>
<li>navigate to <code>http://localhost:8501/</code> in browser</li>
<li>download <a href="https://www.africau.edu/images/default/sample.pdf" rel="nofollow noreferrer">this sample pdf</a> and upload it to the app as an example</li>
</ul>
<p>This results in a functioning app -</p>
<p><a href="https://i.sstatic.net/xQs9P.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xQs9P.jpg" alt="enter image description here" /></a></p>
<h2># Issue I am facing</h2>
<p>Now, the issue comes when I add <code>spacy</code> to the python file using <code>import spacy</code> and rerun the streamlit app, this error pops up -</p>
<pre><code>AttributeError: 'PathDistribution' object has no attribute '_normalized_name'
Traceback:
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "/Users/akshay_sehgal/Library/CloudStorage/________/Documents/Code/Demo UI/Streamlit/keyphrase_extraction_template/test.py", line 3, in <module>
import spacy
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/site-packages/spacy/__init__.py", line 6, in <module>
from .errors import setup_default_warnings
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/site-packages/spacy/errors.py", line 2, in <module>
from .compat import Literal
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/site-packages/spacy/compat.py", line 3, in <module>
from thinc.util import copy_array
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/site-packages/thinc/__init__.py", line 5, in <module>
from .config import registry
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/site-packages/thinc/config.py", line 1, in <module>
import catalogue
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/site-packages/catalogue/__init__.py", line 20, in <module>
AVAILABLE_ENTRY_POINTS = importlib_metadata.entry_points() # type: ignore
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/importlib/metadata/__init__.py", line 1009, in entry_points
return SelectableGroups.load(eps).select(**params)
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/importlib/metadata/__init__.py", line 459, in load
ordered = sorted(eps, key=by_group)
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/importlib/metadata/__init__.py", line 1006, in <genexpr>
eps = itertools.chain.from_iterable(
File "/Users/akshay_sehgal/miniconda3/envs/demo_ui/lib/python3.10/importlib/metadata/_itertools.py", line 16, in unique_everseen
k = key(element)
</code></pre>
<h2># What have I tried?</h2>
<ol>
<li>First thing I tried was to isolate the spacy code and run it in a notebook in the specific environment, which worked without any issue.</li>
<li>Next, after researching SO (<a href="https://stackoverflow.com/questions/67212594/attributeerror-pathdistribution-object-has-no-attribute-name">this answer</a>) and the github issues, I found that <code>importlib.metadata</code> could be the potential culprit and therefore I downgraded this using the following code, but it didn't fix anything.</li>
</ol>
<pre><code>pip uninstall importlib-metadata
pip install importlib-metadata==3.4.0
</code></pre>
<ol start="3">
<li><p>I removed the complete environment, and setup the whole thing again, from scratch, following the same steps I used the first time (just in case I had made some mistake during its setup). But still the same error.</p>
</li>
<li><p>The final option I would be left with is to containerize the spacy processing as an API and then call it from the streamlit app using <code>requests</code></p>
</li>
</ol>
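<p>For anyone debugging this class of failure, here is a minimal diagnostic sketch (not specific to spacy, and labelled as an assumption about the cause): scan the active environment for distributions whose metadata <code>importlib.metadata</code> cannot resolve a name for, since a stale or half-deleted <code>*.dist-info</code> folder left behind by an interrupted pip run is a common trigger for errors like the <code>_normalized_name</code> one above.</p>

```python
# Hypothetical diagnostic: list installed distributions whose metadata
# cannot be read. A leftover *.dist-info directory often shows up here.
import importlib.metadata as importlib_metadata

broken = []
for dist in importlib_metadata.distributions():
    try:
        name = dist.metadata["Name"]
    except Exception:
        name = None
    if not name:
        broken.append(dist)

print(f"{len(broken)} distribution(s) with unreadable metadata")
```

<p>If this prints anything other than zero, deleting the offending <code>*.dist-info</code> directories (or reinstalling those packages) may be worth trying before containerizing.</p>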
<p>I would be happy to share the <code>requirements.txt</code> if needed, but I will have to figure out how to upload it somewhere via my office pc. Do let me know if that is required and I will find a way.</p>
<p>Would appreciate any help in solving this issue!</p>
|
<python><python-3.x><spacy><apple-m1><streamlit>
|
2023-01-17 01:00:02
| 1
| 19,377
|
Akshay Sehgal
|
75,140,968
| 1,689,987
|
Python error "Parameters to generic types must be types. Got <module "
|
<p>When trying to use my own class as a type hint :</p>
<pre><code>from typing import List

from mycode.ltm import MyClass

def DoSomething(self, values: List[MyClass]) -> None:
</code></pre>
<p>I get:</p>
<blockquote>
<p>Parameters to generic types must be types. Got <module '...' from
'...'>.</p>
</blockquote>
<p>How to fix this?</p>
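<p>For context, this error reproduces whenever the subscript of a generic alias is a module object rather than a class — which typically means the <code>import</code> bound a module name, not the class inside it. A minimal sketch (using <code>os</code> as a stand-in for the accidentally imported module):</p>

```python
# Subscripting List with a module (not a type) raises the same TypeError.
import os
from typing import List

error_message = ""
try:
    List[os]  # os is a module, not a class
except TypeError as err:
    error_message = str(err)

print(error_message)  # Parameters to generic types must be types. Got <module 'os' ...>.
```

<p>So the thing to check is whether <code>MyClass</code> as imported is really the class, or a submodule of <code>mycode.ltm</code> that happens to share its name.</p>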
|
<python><python-typing>
|
2023-01-17 00:27:34
| 1
| 1,666
|
user1689987
|
75,140,730
| 3,666,612
|
Enable Visual Studio Intellisense for .pyd python modules generated with pybind11
|
<p><strong>Background</strong>: I've successfully used pybind11 in a Visual Studio 2022 MSBuild project to create a .pyd library of C/C++ functions which can be imported into python code directly using the instructions here: <a href="https://learn.microsoft.com/en-us/visualstudio/python/working-with-c-cpp-python-in-visual-studio?view=vs-2022" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/visualstudio/python/working-with-c-cpp-python-in-visual-studio?view=vs-2022</a></p>
<p><strong>Problem</strong>: Intellisense in VS2022 doesn't seem to recognize the imported module contents and display the expected autocomplete pop-ups or hints about exported functions/symbols/signatures/etc when coding in a .py file which imports this .pyd library. Intellisense <em>does</em> seem to know the library <em>exists</em> (i.e. no warning/error squiggles under the import directive).</p>
<p><strong>Question</strong>: Is it possible to enable Intellisense to provide details about the imported module like it would when importing a .py file? Either via some configuration flags or additional content added to the <code>PYBIND11_MODULE(){}</code> section of the C/C++ code?</p>
<p>(Note this was not with Visual Studio Code or a CMake project, but I would be interested if this common problem exists for those tools, in case the fix is similar. <a href="https://github.com/microsoft/vscode-python/issues/7736#issuecomment-537620794" rel="nofollow noreferrer">This discussion</a> is along similar lines but relevant to VSCode, and mentions something called pydoc which doesn't come up in the instructions linked above from Microsoft.)</p>
|
<python><c++><visual-studio><intellisense><pybind11>
|
2023-01-16 23:39:14
| 0
| 519
|
NKatUT
|
75,140,727
| 8,869,570
|
How to create a dataframe from a single row of another dataframe?
|
<p>I'd like to create a dataframe from a select row of another dataframe, e.g.,</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"col1": [1, 2], "col2": [0, 1]})
# I want to create df1 such that df1 is a dataframe from the first row of df
df1 = df.iloc[0] # produces a pandas Series which is not what i want
</code></pre>
<p>This one works, but it seems to involve a lot of (unnecessary?) operations:</p>
<pre><code>df1 = pd.DataFrame(df.iloc[0]).T
</code></pre>
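<p>For comparison, indexing <code>.iloc</code> with a list of positions keeps the selection as a one-row DataFrame, which avoids the Series-then-transpose round trip (this is standard pandas behaviour, not specific to this snippet):</p>

```python
import pandas as pd

df = pd.DataFrame({"col1": [1, 2], "col2": [0, 1]})

# A list index selects rows and preserves the DataFrame type,
# so no Series -> DataFrame conversion is needed.
df1 = df.iloc[[0]]
print(type(df1).__name__)  # DataFrame
print(df1.shape)           # (1, 2)
```
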
|
<python><pandas><dataframe>
|
2023-01-16 23:38:52
| 0
| 2,328
|
24n8
|
75,140,680
| 6,611,818
|
Python3 slack_bolt is there easy way to format attachment responses?
|
<p>Currently the responses to a given command come back to Slack in a format that is very hard to read. I am hoping there is a way to format the output so it's easier on the eyes.</p>
<pre><code>Console
APP 11:26
[{'imageCL': '1838900-unix64-clang-debug'}, {'imageCL': '1838872-unix64-clang-debug'}, {'imageCL': '1838851-unix64-clang-debug'}, {'imageCL': '1838754-unix64-clang-debug'}, {'imageCL': '1838697-unix64-clang-debug'}, {'imageCL': '1838694-unix64-clang-debug'}, {'imageCL': '1838600-unix64-clang-debug'}, {'imageCL': '1838588-unix64-clang-debug'}, {'imageCL': '1838534-unix64-clang-debug'}, {'imageCL': '1838512-unix64-clang-debug'}, {'imageCL': '1838487-unix64-clang-debug'}, {'imageCL': '1838285-unix64-clang-debug'}, {'imageCL': '1838256-unix64-clang-debug'}, {'configCL': '1838900'}, {'configCL': '1838894'}, {'configCL': '1838893'}, {'configCL': '1838872'}, {'configCL': '1838851'}, {'configCL': '1838849'}, {'configCL': '1838754'}, {'configCL': '1838697'}, {'configCL': '1838694'}, {'configCL': '1838600'}, {'configCL': '1838588'}, {'configCL': '1838534'}, {'configCL': '1838512'}]
</code></pre>
<p>I have other responses that come back all bunched up. The above is a single key/value pair, but I have others that are more complicated to read.</p>
<p>The code I use is as follows:</p>
<pre><code>from slack_bolt import App
app = App(token=slack_user_token)
def handle_command(ack, event):
ack()
....
for response in responses:
if not isinstance(response, list):
response = [response]
app.client.chat_postMessage(
channel = event["channel"],
attachments=response
)
if __name__ == "__main__":
SocketModeHandler(app, slack_app_token).start()
</code></pre>
<p>In the above I pass a dictionary into <strong>response</strong>. My question is: is there a way I can format the output rather than getting one big clump of text? Even if it's some for loop where I can break apart the whole dictionary output to make it easier to read? I found this documentation <a href="https://api.slack.com/methods/chat.postMessage#arg_attachments" rel="nofollow noreferrer">https://api.slack.com/methods/chat.postMessage#arg_attachments</a>, which only has one super basic example, but I have also found other documentation for Slack messages that allows a huge range of customizations. Perhaps I need to use something other than <strong>chat_postMessage</strong> and <strong>attachments</strong>?</p>
<h2>Update</h2>
<p>I am pasting my addendum here as I have made a little progress, and comments don't support formatting:</p>
<p>I am getting warmer, but it's still not working properly:</p>
<pre><code>test_dict2 = [{"color": "#FF0000", "fields": [{"title": "imageCL","value": "1838900-unix64-clang-debug"},{"title": "configCL","value": "1838534"}]}]
test_response2 = {
#"ok": True,
#"ts": "1503435956.000247",
"message": {
"text": "Here's a message for you",
"username": "ecto1",
"attachments": json.dumps(test_dict2),
"type": "message",
"subtype": "bot_message",
"ts": "1503435956.000247"
}
}
</code></pre>
<p>But all I get is:</p>
<pre><code>Console
APP 16:36
Here's a message for you
Aug 22nd, 2017
</code></pre>
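<p>One direction worth exploring (a sketch, not tested against the Slack API): convert each dict into a Block Kit <code>section</code> block with <code>mrkdwn</code> text, and pass the result via the <code>blocks</code> argument of <code>chat_postMessage</code> instead of <code>attachments</code>. The helper below is pure formatting, so it can be previewed without a Slack connection; the commented call site is the assumed usage.</p>

```python
# Hypothetical helper: turn a list of {key: value} dicts into Block Kit
# "section" blocks so Slack renders one bolded key/value pair per line.
def dicts_to_blocks(responses):
    lines = []
    for item in responses:
        for key, value in item.items():
            lines.append(f"*{key}:* {value}")
    # Slack limits a section's text length, so chunk the lines.
    blocks = []
    for i in range(0, len(lines), 10):
        blocks.append({
            "type": "section",
            "text": {"type": "mrkdwn", "text": "\n".join(lines[i:i + 10])},
        })
    return blocks

blocks = dicts_to_blocks([{"imageCL": "1838900-unix64-clang-debug"},
                          {"configCL": "1838900"}])
print(blocks[0]["text"]["text"])
# app.client.chat_postMessage(channel=event["channel"], blocks=blocks)
```
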
|
<python><slack><slack-bolt>
|
2023-01-16 23:32:05
| 0
| 425
|
New2Python
|
75,140,645
| 21,115
|
Python typing for class that having methods from decorator
|
<p>The <a href="https://lidatong.github.io/dataclasses-json/#usage" rel="nofollow noreferrer"><code>dataclasses-json</code></a> allows code such as:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from dataclasses_json import dataclass_json
@dataclass_json
@dataclass
class Person:
name: str
lidatong = Person('lidatong')
# Encoding to JSON
lidatong.to_json() # '{"name": "lidatong"}'
# Decoding from JSON
Person.from_json('{"name": "lidatong"}') # Person(name='lidatong')
</code></pre>
<p>How could I use <code>typing</code> to annotate</p>
<ol>
<li>a function which takes an instance whose class has <code>@dataclass_json</code> applied, such that the existence of the <code>to_json</code> instance method is known to the type checker, and:</li>
<li>a function which takes a class having <code>@dataclass_json</code> applied, such that the <code>from_json</code> class method is known to the type checker?</li>
</ol>
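<p>One common approach is a <code>typing.Protocol</code> describing just the methods the decorator adds, so any decorated class matches structurally without nominal inheritance. The sketch below uses a hand-written <code>Person</code> stand-in for the decorated dataclass, and all names (<code>JsonSerializable</code>, <code>FromJson</code>, <code>dump</code>, <code>load</code>) are illustrative; dataclasses-json also ships a <code>DataClassJsonMixin</code> base class that may serve the same purpose directly.</p>

```python
from typing import Protocol, Type, runtime_checkable

@runtime_checkable
class JsonSerializable(Protocol):
    def to_json(self) -> str: ...

class FromJson(Protocol):
    @classmethod
    def from_json(cls, s: str) -> "FromJson": ...

def dump(obj: JsonSerializable) -> str:        # case 1: takes an instance
    return obj.to_json()

def load(cls: Type[FromJson], s: str) -> FromJson:  # case 2: takes the class
    return cls.from_json(s)

class Person:  # stand-in for the @dataclass_json-decorated class
    def __init__(self, name: str) -> None:
        self.name = name
    def to_json(self) -> str:
        return f'{{"name": "{self.name}"}}'

print(dump(Person("lidatong")))  # {"name": "lidatong"}
```
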
|
<python><python-typing><python-dataclasses>
|
2023-01-16 23:23:29
| 1
| 18,140
|
davetapley
|
75,140,592
| 5,019,169
|
How to handle a global context object in the following scenario?
|
<p>I have these two classes:</p>
<pre><code>class TestSuite(object):
def __init__(self, context):
self._tests = [
Test(test_data) for test_data in context["tests"]
]
self.context = context
@property
def runtime(self):
try:
return RuntimeFactory.create_runtime(self.context)
except:
            print("runtime_id is missing or invalid, handle with a proper exception")
class Test(object):
def __init__(self, context):
self.context = context
</code></pre>
<p>The <code>context</code> object here contains the data based on which all the classes in the application lifecycle get instantiated. Now, inside the <code>Test</code> class, I need the <code>runtime</code> property of <code>TestSuite</code>, because all <code>Test</code> instances share the same runtime naturally.</p>
<p>My problem is,</p>
<ol>
<li>If I pass the full <code>context</code> throughout the application flow, I can't know inside a <code>Test</code> object which test I am dealing with (as it's a list with no explicit index).</li>
<li>If I pass <code>context["tests"][index]</code> into the <code>Test</code> object as a local <code>context</code>, I lose <code>context["runtime"]</code>.</li>
</ol>
<p>Now, I can pass two <code>context</code> (<code>runtime_context</code> and <code>test_context</code>), but that kind of violates the whole purpose of a <code>context</code> object (To pass one single <code>context</code>, which moves around the app, and classes use it as needed).</p>
<p>I can think of some solutions:</p>
<ol>
<li>Update the local <code>context</code> by adding <code>runtime</code> as a key before creating each <code>Test</code> object.</li>
<li>Create the <code>Test</code> objects with local <code>context</code> and assign the <code>runtime</code> attribute of <code>Test</code> object from <code>TestSuite</code> class.</li>
<li>Create the <code>Test</code> objects with global <code>context</code> and assign an <code>index/id</code> attribute to <code>Test</code> object from <code>TestSuite</code> class.</li>
</ol>
<p>But none of this seems quite convincing to me. I have this situation on different layers (A test has a list of input for example), changing all those classes make the code a bit messy in my opinion.</p>
<p>Any suggestions on how I can solve this problem in a Pythonic way with less coupling?</p>
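<p>For illustration, here is a minimal sketch of one low-coupling variant (assuming nothing beyond the classes shown; the runtime value is a stand-in for <code>RuntimeFactory.create_runtime</code>): give each <code>Test</code> a reference back to its suite and expose <code>runtime</code> as a delegating property, so each test keeps only its own slice of the context.</p>

```python
class TestSuite(object):
    def __init__(self, context):
        self.context = context
        self._tests = [Test(self, test_data) for test_data in context["tests"]]

    @property
    def runtime(self):
        # stand-in for RuntimeFactory.create_runtime(self.context)
        return self.context["runtime"]

class Test(object):
    def __init__(self, suite, context):
        self._suite = suite
        self.context = context  # only this test's slice of the context

    @property
    def runtime(self):
        return self._suite.runtime  # shared runtime, fetched lazily

suite = TestSuite({"runtime": "py310", "tests": [{"name": "t1"}, {"name": "t2"}]})
print(suite._tests[0].runtime)   # py310
print(suite._tests[1].context)   # {'name': 't2'}
```

<p>The same parent-reference pattern can repeat one layer down (a test holding its inputs), so each object only ever receives its local slice plus one back-pointer.</p>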
|
<python><class><oop><design-patterns>
|
2023-01-16 23:13:54
| 0
| 11,224
|
Ahasanul Haque
|
75,140,464
| 14,673,832
|
Unexpected output for a class methods in Python
|
<p>Which of the following correctly describes the complete output of executing the Python polymorphic functions below? I found this question online, and I think the output is unexpected. The <code>Primary</code> class explicitly inherits from <code>object</code>; shouldn't that matter? Also, can we pass something named <code>classmethod</code> into a function as an argument?</p>
<pre><code>class Primary(object):
def show(self):
print("Primary class is called")
class Secondary(Primary):
def show(self):
print("Secondary class is called")
def result(classmethod):
classmethod.show()
primaryobj = Primary()
secondaryobj = Secondary()
result(primaryobj)
result(secondaryobj)
</code></pre>
<p>The output that I got is :</p>
<pre><code>Primary class is called
Secondary class is called
</code></pre>
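<p>Two points the snippet invites, shown in a minimal sketch: inheriting from <code>object</code> is implicit in Python 3 (so spelling it out changes nothing), and <code>classmethod</code> in <code>def result(classmethod):</code> is just a parameter name that shadows the built-in decorator, not anything special. Method lookup happens on the object's own class at call time, which is why each call prints its own class's message:</p>

```python
class Primary:  # inheriting from object is implicit in Python 3
    def show(self):
        return "Primary class is called"

class Secondary(Primary):
    def show(self):
        return "Secondary class is called"

def result(obj):  # any name works; 'classmethod' was only shadowing a builtin
    return obj.show()  # dispatched on type(obj) at call time

print(result(Primary()))    # Primary class is called
print(result(Secondary()))  # Secondary class is called
```
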
|
<python><polymorphism><class-method>
|
2023-01-16 22:54:56
| 0
| 1,074
|
Reactoo
|
75,140,280
| 7,668,467
|
Polars Aggregate Multiple Rows into One Row
|
<p>I have a df created like this:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.from_repr("""
┌───────────────┬──────────────┬───────────────┐
│ schema_name ┆ table_name ┆ column_name │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ str │
╞═══════════════╪══════════════╪═══════════════╡
│ test_schema ┆ test_table ┆ test_column │
│ test_schema ┆ test_table ┆ test_column │
│ test_schema_2 ┆ test_table_2 ┆ test_column_2 │
└───────────────┴──────────────┴───────────────┘
""")
</code></pre>
<p>I would like to use polars to aggregate the <code>column_name</code> field by <code>schema_name</code> and <code>table-name</code> so that multiple values from <code>column_name</code> are combined into one row. The target dataset is this:</p>
<pre><code>shape: (2, 3)
┌───────────────┬──────────────┬──────────────────────────┐
│ schema_name ┆ table_name ┆ column_name │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ str │
╞═══════════════╪══════════════╪══════════════════════════╡
│ test_schema_2 ┆ test_table_2 ┆ test_column_2 │
│ test_schema ┆ test_table ┆ test_column, test_column │
└───────────────┴──────────────┴──────────────────────────┘
</code></pre>
<p>I can aggregate the values into a list with this:</p>
<pre class="lang-py prettyprint-override"><code>df.group_by('schema_name','table_name').agg(pl.col('column_name').alias('column_list'))
</code></pre>
<pre><code>shape: (2, 3)
┌───────────────┬──────────────┬────────────────────────────────┐
│ schema_name ┆ table_name ┆ column_list │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ list[str] │
╞═══════════════╪══════════════╪════════════════════════════════╡
│ test_schema_2 ┆ test_table_2 ┆ ["test_column_2"] │
│ test_schema ┆ test_table ┆ ["test_column", "test_column"] │
└───────────────┴──────────────┴────────────────────────────────┘
</code></pre>
<p>How do I then convert the list field (<code>column_list</code>) into a comma separated string? With pandas, I would do something like this:</p>
<pre><code>df['column_list_string'] = [','.join(map(str, l)) for l in df['column_list']]
</code></pre>
<p>However, I can't figure out how to use <code>.join()</code> in combination with polars <code>.agg()</code>.</p>
<p>Alternatively, how would I go straight from multiple rows to one row without using the list as an intermediate step?</p>
|
<python><python-polars>
|
2023-01-16 22:27:20
| 1
| 2,434
|
OverflowingTheGlass
|
75,140,194
| 683,741
|
Output linux CLI output to a datatable in python
|
<p>In Python, what is the best way to take the output from a CLI command and add it into a datatable?</p>
<p>I've been using <code>pandas</code> for the datatable, but I can't get the right result. I'd like to take the results of the command into a datatable of some type, manipulate the data in Python, and then add further processing. I've been trying a few variations of the code below, but without success:</p>
<pre><code>data_frame = pd.DataFrame()
cmd = "ls -al"
output = subprocess.check_output(['bash', '-c', cmd])
for line in output.splitlines():
data_frame = pd.append([data_frame, line])
</code></pre>
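<p>For reference, a sketch of one working route: split the captured text into rows first, then build the frame in a single constructor call (appending row by row is both slow and, in recent pandas, no longer supported via <code>pd.append</code>). The column names here are assumptions about <code>ls -al</code> output, and <code>maxsplit</code> keeps file names containing spaces intact.</p>

```python
import subprocess
import pandas as pd

output = subprocess.check_output(["ls", "-al"], text=True)
lines = output.splitlines()[1:]  # drop the leading "total N" line

# ls -al yields 9 whitespace-separated fields; maxsplit=8 keeps
# any spaces inside the file name.
rows = [line.split(maxsplit=8) for line in lines]
cols = ["perms", "links", "owner", "group", "size", "month", "day", "time", "name"]
df = pd.DataFrame(rows, columns=cols)
print(df[["size", "name"]].head())
```
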
|
<python><pandas><linux>
|
2023-01-16 22:13:37
| 0
| 1,615
|
e-on
|
75,140,078
| 5,924,264
|
TypeError: cannot subtract DatetimeArray from ndarray when prepending a row to dataframe
|
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({"start": [pd.to_datetime("2001-01-01 08:00:00+00:00")]})
df1 = pd.DataFrame([np.nan], columns=df.columns)
df1[["start"]] = [
df["start"].iloc[0],
]
df = df1.append(df, ignore_index=True)
df["start"] - df["start"].dt.normalize()
</code></pre>
<p>produces the error</p>
<pre><code>TypeError: cannot subtract DatetimeArray from ndarray
</code></pre>
<p>If I instead construct <code>df1</code> directly</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({"start": [pd.to_datetime("2001-01-01 08:00:00+00:00")]})
df1 = pd.DataFrame([df["start"].iloc[0],], columns=df.columns)
df = df1.append(df, ignore_index=True)
df["start"] - df["start"].dt.normalize()
</code></pre>
<p>I don't get an error, so I am wondering what the problem with the first approach is.</p>
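<p>A diagnostic worth running on both variants (a sketch of the dtype difference only, not a full explanation): a frame seeded with <code>np.nan</code> starts out as a <code>float64</code> column, so whether the column ends up datetime-typed depends on what the later assignment and append infer, whereas direct construction from a Timestamp is datetime-typed from the start. Checking <code>.dtype</code> after each step of the failing version shows where the column stops being datetime-like.</p>

```python
import numpy as np
import pandas as pd

# Route 1: seeded with NaN -> float64 column before any assignment
a = pd.DataFrame([np.nan], columns=["start"])
print(a["start"].dtype)  # float64

# Route 2: built directly from a Timestamp -> datetime dtype immediately
b = pd.DataFrame([pd.Timestamp("2001-01-01 08:00:00", tz="UTC")],
                 columns=["start"])
print(b["start"].dtype)
```
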
|
<python><pandas><dataframe>
|
2023-01-16 21:57:44
| 0
| 2,502
|
roulette01
|
75,140,072
| 8,277,512
|
Matplotlib pick_event not working with geopandas dataframe
|
<p>Here is my <a href="https://github.com/leej11/parkrun_stat_mapper/blob/main/map_of_my_parkruns.ipynb" rel="nofollow noreferrer">Jupyer Notebook source code</a>.</p>
<p>But a hard-coded reproducible example is below. (You will need access to the UK.geojson file from my Github or the true source: <a href="http://geoportal1-ons.opendata.arcgis.com/datasets/687f346f5023410ba86615655ff33ca9_1.geojson" rel="nofollow noreferrer">http://geoportal1-ons.opendata.arcgis.com/datasets/687f346f5023410ba86615655ff33ca9_1.geojson</a>)</p>
<p>I have a geopandas DataFrame to plot the UK map and then plot certain coordinates (park run locations) on top of it.</p>
<p>I want to be able to interactively 'pick' the coordinates and then I will code it to show me information about that specific run location.</p>
<p>I can't figure out why the <code>fig.canvas.mpl_connect('pick_event', onclick)</code> call I'm making does not work.</p>
<p>Please can someone help me figure this out?</p>
<pre><code>import pandas as pd
import geopandas as gpd
import numpy as np
import matplotlib.pyplot as plt
##### Create hardcoded dataset ######
df = pd.DataFrame.from_dict({0: {'parkrun_name': 'Beeston',
'location_name': 'Weirfields Recreation Ground',
'longitude': -1.201737,
'latitude': 52.913592,
'Run Date': '12/11/2022',
'Run Number': 370.0,
'Pos': 107.0,
'Time': '26:05',
'Age Grade': '49.46%',
'PB?': np.nan},
1: {'parkrun_name': 'Colwick',
'location_name': 'Colwick Country Park, Nottingham',
'longitude': -1.09786,
'latitude': 52.945171,
'Run Date': '22/10/2022',
'Run Number': 511.0,
'Pos': 127.0,
'Time': '29:44',
'Age Grade': '43.39%',
'PB?': np.nan},
2: {'parkrun_name': 'Exmouth',
'location_name': 'Exmouth',
'longitude': -3.412392,
'latitude': 50.614697,
'Run Date': '24/12/2022',
'Run Number': 189.0,
'Pos': 197.0,
'Time': '25:44',
'Age Grade': '50.13%',
'PB?': 'PB'},
}, orient='index')
###### Read in the UK Map File ######
uk = gpd.read_file("uk.geojson")
###### Convert df to geodf ######
gdf = gpd.GeoDataFrame(
df, geometry=gpd.points_from_xy(df.longitude, df.latitude)
)
gdf.crs = "EPSG:4326"
gdf = gdf.to_crs(uk.crs)
</code></pre>
<pre><code>###### Plot interactive map ######
%matplotlib widget
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
uk.plot(ax=ax, alpha=0.8)
def onclick(event):
ax = plt.gca()
# ax.set_title(f"You selected {event.x}, {event.y}")
line = event.artist
xdata, ydata = line.get_data()
ind = event.ind
tx = f"{np.array([xdata[ind], ydata[ind]]).T}"
ax.set_title(f"{tx}")
gdf[gdf['Run Date'].isna()].plot(ax=ax, color='black', marker='.', markersize=8, alpha=0.2, picker=20)
gdf[gdf['Run Date'].notna()].plot(ax=ax, color='#AAFF00', marker='.', markersize=50, picker=20)
ax.set_xlim(-7,2)
ax.set_ylim(49,60)
cid = fig.canvas.mpl_connect('pick_event', onclick)
</code></pre>
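<p>One detail that may matter here (a generic sketch, independent of the geojson data): geopandas plots point layers as a matplotlib <code>PathCollection</code>, not a <code>Line2D</code>, so inside the handler <code>event.artist.get_data()</code> does not exist — collections expose the plotted coordinates through <code>get_offsets()</code>, indexed by <code>event.ind</code>:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
pts = ax.scatter([0, 1, 2], [0, 1, 4], picker=20)  # a PathCollection

def onclick(event):
    # Collections have no get_data(); the picked coordinates live in
    # get_offsets(), indexed by event.ind
    xy = np.asarray(event.artist.get_offsets())[event.ind]
    ax.set_title(f"{xy}")

fig.canvas.mpl_connect("pick_event", onclick)
print(np.asarray(pts.get_offsets())[:1])  # the array the handler will index
```
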
|
<python><matplotlib><jupyter-notebook><geopandas><matplotlib-widget>
|
2023-01-16 21:57:06
| 1
| 324
|
Liam Gower
|
75,140,070
| 2,441,615
|
Algorithm for grouping millions of names into groups of similar names
|
<p>I want to group the names in a list of a few million names into groups of similar names. For example, if the names "John", "John12" and "azJohn" appear in the list, they should all go into the same group. A naive approach could be to use an algorithm like this:</p>
<pre class="lang-py prettyprint-override"><code>import Levenshtein
def group_similar_names(names, threshold=2):
groups = {}
for name in names:
for group_name, group_list in groups.items():
if Levenshtein.distance(name, group_name) <= threshold:
group_list.append(name)
break
else:
groups[name] = [name]
return groups.values()
</code></pre>
<p>Alas, this algorithm does not scale to a dataset with millions of names.</p>
<p>Can anyone come up with an algorithm that works for a dataset with millions of names? The algorithm may be implemented in any language. It is fine for the algorithm to put the same name into multiple groups, but preferably it would only put the name into the group(s) in which the name matches best.</p>
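<p>A common way to make this tractable is blocking: index every name by cheap keys (here, lowercase character trigrams), compare only names that share enough keys, and merge matches with a union-find. The sketch below is illustrative — the <code>min_shared</code> threshold is an assumption, and a hot bucket (one trigram shared by very many names) can still degrade to quadratic work, so real deployments usually cap bucket sizes or use MinHash/LSH — but each name only meets its bucket-mates, which is near-linear in practice:</p>

```python
from collections import defaultdict

def trigrams(name):
    s = name.lower()
    return {s[i:i + 3] for i in range(len(s) - 2)} or {s}

def group_similar(names, min_shared=2):
    # Union-find over name indices, with path halving
    parent = list(range(len(names)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Blocking: bucket names by trigram
    buckets = defaultdict(list)
    for idx, name in enumerate(names):
        for gram in trigrams(name):
            buckets[gram].append(idx)

    # Count shared trigrams per candidate pair, merge above threshold
    shared = defaultdict(int)
    for members in buckets.values():
        for a in members:
            for b in members:
                if a < b:
                    shared[(a, b)] += 1
    for (a, b), count in shared.items():
        if count >= min_shared:
            union(a, b)

    groups = defaultdict(list)
    for idx, name in enumerate(names):
        groups[find(idx)].append(name)
    return list(groups.values())

groups = group_similar(["John", "John12", "azJohn", "Mary"])
print(groups)
```
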
|
<python><algorithm><grouping>
|
2023-01-16 21:56:50
| 1
| 659
|
K. Claesson
|
75,139,914
| 20,895,654
|
Sort dictionary by multiple values, where amount of values can vary
|
<p>To come straight to the point:</p>
<pre><code>def sortBy(dict, byWhat):
# byWhat is a list of 1 - 10 strings, which can include any of allAttributes
# allAttributes = ['name', 'kingdom', 'diff', 'tier', 'type', 'founder', 'prover', 'server', 'extra', 'link']
# the lower the index, the higher the sorting priority -> byWhat = ['name', 'diff'] then sort by name first and then if names are the same by diff
# dict has the name as key and then allAttributes above in order as a list for the key's value, so for example:
# dict[Jump] = [Jump, Mushroom Kingdom, 10/10, Triple Jump, Dude, Dude2, Main Server, Cool jump description, https://twitter.com]
# All dictionary entries have those 10 attributes, no exceptions
return dict # but sorted
</code></pre>
<p>I tried for a while, but my knowledge ends at sorting by a varying number of specifiers with a lambda. I also can't just call the function multiple times, because then it won't sort by two attributes, only by the last attribute the function was called with.</p>
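<p>For what it's worth, a sketch of the usual pattern: map each requested attribute name to its index once, then build a tuple key in the lambda. Tuples compare element by element, which gives exactly the "first by name, then by diff" tie-breaking for any number of specifiers, and since dicts keep insertion order in Python 3.7+ the sorted dict round-trips. The shortened sample entries and names below are illustrative:</p>

```python
ALL_ATTRIBUTES = ['name', 'kingdom', 'diff', 'tier', 'type',
                  'founder', 'prover', 'server', 'extra', 'link']

def sort_by(entries, by_what):
    indices = [ALL_ATTRIBUTES.index(attr) for attr in by_what]
    # tuples compare element by element: earlier attributes win,
    # later ones only break ties
    return dict(sorted(entries.items(),
                       key=lambda item: tuple(item[1][i] for i in indices)))

# shortened entries: only the attributes used by this sort are filled in
data = {
    "B": ["B", "K1", "5/10"],
    "A": ["A", "K2", "9/10"],
    "C": ["C", "K1", "5/10"],
}
ordered = list(sort_by(data, ["diff", "name"]))
print(ordered)  # ['B', 'C', 'A']
```

<p>One caveat: this compares strings, so <code>"10/10"</code> sorts before <code>"9/10"</code> lexically; numeric fields may need parsing inside the key.</p>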
|
<python><dictionary><sorting><lambda>
|
2023-01-16 21:34:05
| 2
| 346
|
JoniKauf
|
75,139,852
| 558,639
|
Python file organization for chainable functions
|
<p>I'm writing a Python3.x framework that has <em>generators</em> and <em>filters</em>. I have a compact syntax for chaining the output of generators and filters into filters, but file organization feels inelegant. Here's what I mean.</p>
<p>Assume Renderer is the super class for both generators and filters:</p>
<pre><code># file: renderer.py -- defines the common superclass used by generators and filters
class Renderer(object):
    def render(self):
        # Every subclass provides a `render` method that emits some data...
        pass
</code></pre>
<pre><code># file: gen1.py -- defines a generator
class Gen1(Renderer):
def __init__(self):
super(Gen1, self).__init__()
def render(self):
... emit some data
</code></pre>
<pre><code># file: filt1.py -- defines a filter that takes any Renderer object as an input
class Filt1(Renderer):
def __init__(self, input, param):
super(Filt1, self).__init__()
self._input = input
self._param = param
    def render(self):
... call self._input.render() to fetch and act on data before emitting it
</code></pre>
<pre><code># file: filt2.py -- defines a filter that takes any Renderer object as an input
class Filt2(Renderer):
def __init__(self, input):
super(Filt2, self).__init__()
self._input = input
    def render(self):
... call self._input.render() to fetch and act on data before emitting it
</code></pre>
<pre><code># file: render_module.py -- a module file to bring in all the components
from renderer import Renderer
from gen1 import Gen1
from filt1 import Filt1
from filt2 import Filt2
</code></pre>
<h3>What I'd like</h3>
<p>What I'd like is for a user of the platform to be able to write code like this, which chains the output of <code>gen1</code> into <code>filt1</code>, and the output of <code>filt1</code> into <code>filt2</code>:</p>
<pre><code>import render_module as rm
chain = rm.Gen1().filt1(123).filt2()
chain.render()
</code></pre>
<h3>What I've done</h3>
<p>What I've done is add the following to renderer.py. This works, but see "The Problem to Solve" below.</p>
<pre><code>class Renderer(object):
def render():
# emit some data...
pass
def filt1(self, param):
return rm.Filt1(self, parm)
def filt2(self):
return rm.Filt1(self)
import render_module as rm # at end of file to avoid a circular dependency
</code></pre>
<h3>The Problem to Solve</h3>
<p>It feels wrong to pollute the common superclass with specific mentions of each subclass. The clearest indication of the code smell is the <code>import</code> statement at the end of <code>renderer.py</code>.</p>
<p>But I haven't figured out a better way to refactor and organize the files. What's the pythonic approach out of this conundrum?</p>
|
<python><python-3.x><import><dependencies>
|
2023-01-16 21:26:48
| 0
| 35,607
|
fearless_fool
|
75,139,782
| 5,961,077
|
How can I sort out an object is datetime or date in python?
|
<p>I am trying to develop a logic that depends on whether the input is a date or a datetime. To achieve this goal, I used <code>isinstance</code> with <code>datetime.date</code> and <code>datetime.datetime</code>. Unfortunately, it seems that a <code>datetime.datetime</code> object is considered an instance of <code>datetime.date</code>.</p>
<pre class="lang-py prettyprint-override"><code>import datetime
date_obj = datetime.date.today()
datetime_obj = datetime.datetime.now()
type(date_obj)
# <class 'datetime.date'>
type(datetime_obj)
# <class 'datetime.datetime'>
isinstance(date_obj, datetime.date)
# True
isinstance(datetime_obj, datetime.date)
# True
isinstance(date_obj, datetime.datetime)
# False
isinstance(datetime_obj, datetime.date)
# True
</code></pre>
<p>I suspected that <code>datetime.date</code> might be considered a subclass of <code>datetime.datetime</code> but that's not the case:</p>
<pre class="lang-py prettyprint-override"><code>issubclass(datetime.date, datetime.datetime)
# False
issubclass(datetime.datetime, datetime.date)
# True
</code></pre>
<p>What's the Pythonic way of figuring out whether an object is a date or a datetime?</p>
<p>P.S. I checked <a href="https://stackoverflow.com/questions/16151402/python-how-can-i-check-whether-an-object-is-of-type-datetime-date">this related question</a>, but that doesn't resolve my issue.</p>
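<p>Since <code>datetime.datetime</code> is a subclass of <code>datetime.date</code> (as the <code>issubclass</code> checks above show), the usual trick is simply to test for the more specific class first — or compare <code>type(obj)</code> exactly if subclasses of either should not match. A sketch:</p>

```python
import datetime

def describe(obj):
    # Order matters: every datetime IS a date, so check datetime first.
    if isinstance(obj, datetime.datetime):
        return "datetime"
    if isinstance(obj, datetime.date):
        return "date"
    return "other"

print(describe(datetime.date.today()))    # date
print(describe(datetime.datetime.now()))  # datetime

# Exact-type alternative, if subclasses should not match:
print(type(datetime.date.today()) is datetime.date)  # True
```
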
|
<python><python-3.x>
|
2023-01-16 21:17:19
| 2
| 1,411
|
Mehdi Zare
|
75,139,755
| 3,875,372
|
New scrape table data from a hover popup with Selenium and Python
|
<p>I had this Selenium hover scrape working a few years ago, and I remember it was a challenge to select the correct hover table element, which only shows on hover. The website has undergone a complete style overhaul (it seems like Tailwind CSS), and even though I've used the inspector with a forced hover state, Selenium now says that the hover table I want to scrape is either not an element it recognizes or not an interactable element, depending on which selector I choose. All of my other modifications have found the updated elements just fine. How can I solve either or both of these issues, for the present and the future? Cheers</p>
<p>Image of hover:
<a href="https://i.sstatic.net/EIfrU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EIfrU.png" alt="enter image description here" /></a></p>
<p>Sample Code (and error):</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.action_chains import ActionChains
driver = webdriver.Chrome(executable_path="config/chromedriver")
driver.maximize_window()
wait = WebDriverWait(driver, 50)
action = ActionChains(driver)
driver.get("https://www.oddsportal.com/tennis/australia/wta-australian-open/jimenez-kasintseva-victoria-lys-eva-IVZa5PVf/")
#first_td = WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, "//tr[@class='lo odd']/td[2]")))
#ActionChains(driver).move_to_element(first_td).perform()
tool_tip_text = driver.find_element(By.CSS_SELECTOR, "#tooltip").get_attribute('innerText')
print(tool_tip_text)
"""Error: selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"#tooltip"}"""
</code></pre>
<p>Desired Output:</p>
<pre><code>20 Sep, 17:42 2.30 +0.03
20 Sep, 17:29 2.27 +0.02
20 Sep, 17:25 2.25 -0.20
20 Sep, 17:24 2.45 +0.20
20 Sep, 17:20 2.25 -0.20
20 Sep, 17:19 2.45 +0.20
20 Sep, 16:58 2.25 -0.20
20 Sep, 16:56 2.45 +0.20
20 Sep, 16:30 2.25 -0.20
20 Sep, 16:29 2.45 +0.18
20 Sep, 16:23 2.27 -0.18
20 Sep, 16:21 2.45 +0.20
20 Sep, 15:52 2.25 -0.20
20 Sep, 15:51 2.45 +0.20
20 Sep, 15:45 2.25 -0.20
20 Sep, 15:42 2.45 +0.20
20 Sep, 15:41 2.25 -0.20
20 Sep, 15:36 2.45 +0.12
20 Sep, 15:16 2.33 -0.12
20 Sep, 15:14 2.45 +0.12
Opening odds:
19 Sep, 19:00 2.33
</code></pre>
|
<python><html><selenium><screen-scraping>
|
2023-01-16 21:13:05
| 1
| 367
|
DNburtonguster
|
75,139,369
| 8,925,864
|
Plotting probability of a list of floating point values in Python
|
<p>I am creating a list <code>ans</code> with the following code, and I have tried to plot histograms of it with both matplotlib and seaborn using <code>density=True</code>, but I still don't get a probability distribution of the values in <code>ans</code>. Here is the code.</p>
<pre><code>import math
import itertools

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

num=100000
ans=[]
T=4.5*math.pi
num_bins=50
for i in range(num):
val=0.5*np.random.chisquare(1)+np.random.exponential(1)
q=np.random.randn()
repeats=int(T/(2*q))
listofrepeats=list(itertools.repeat(val,repeats))
for val in listofrepeats:
ans.append(val)
#plt.hist(ans,bins=num_bins,normed=True)
sns.distplot(ans, hist=True, kde=False,
bins=num_bins, color = 'blue',
hist_kws={'edgecolor':'black'})
x=np.linspace(0.01,20,1000)
y=[0.5*np.exp(-0.5*n) for n in x]
plt.plot(x,y,lw=3,c='r',label='Chi sqrd with df=2')
plt.legend(loc='upper right')
plt.show()
</code></pre>
<p>The density plot I get is <a href="https://i.sstatic.net/gyToE.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gyToE.jpg" alt="enter image description here" /></a> and as you can see, the density has large values. I am looking for a plotting tool wherein a probability is calculated for each bin and the probabilities sum to one. Is there such a tool? Thanks.</p>
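<p>For the "probabilities sum to one" behaviour specifically: <code>density=True</code> normalizes so that bar height × bin width integrates to one, which is why narrow bins can show values above 1. To make each bar a per-bin probability instead, the standard trick is the <code>weights</code> argument, giving every sample a weight of <code>1/N</code> (a generic sketch with synthetic data):</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless for this sketch
import matplotlib.pyplot as plt

data = np.random.exponential(1.0, size=10_000)

# Each sample weighs 1/N, so bin heights are probabilities that sum to 1.
counts, bins, _ = plt.hist(data, bins=50,
                           weights=np.ones_like(data) / len(data))
print(round(counts.sum(), 6))  # 1.0
```
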
|
<python><matplotlib>
|
2023-01-16 20:29:39
| 0
| 305
|
q2w3e4
|
75,139,276
| 9,428,990
|
Reduce size of authorization python AWS Lambda@Edge
|
<p>I'm trying to implement authorization in my CloudFront distribution. It has worked so far, until I ran into a size limitation. I'm now getting the CloudFront error message <code>Max allowed: 1048576</code>, which is roughly 1 MB. After installing the <code>authlib</code> package, the total size is around 6 MB. My method for validating tokens looks roughly like this:</p>
<pre><code>from authlib.jose import JsonWebToken
jwk = get_jwk()
claims_options = {
"iss": {"essential": True, "value": ISSUER},
"aud": {"essential": True, "value": AUDIENCE}
}
jwt = JsonWebToken()
claims = jwt.decode(token, jwk, claims_options=claims_options)
claims.validate()
</code></pre>
<p>The whole thing works beautifully until the size limitation.</p>
<p>My ideas to get around this are:</p>
<ol>
<li>Find another package than authlib that is smaller/more efficient.</li>
<li>Write my own code which validates hash signature of JWT (get around the need for authlib package).</li>
<li>Write the Lambda in javascript to leverage the <a href="https://docs.aws.amazon.com/cdk/api/v1/docs/aws-lambda-nodejs-readme.html" rel="nofollow noreferrer">NodeJSfunction</a>, which according to docs are efficient in packaging the lambda. In the hopes of that its enough.</li>
</ol>
<p>Perhaps there are more alternatives, but these are the ones I could come up with, listed in descending order of desirability. I'd appreciate assistance with any of these options, or perhaps a totally different solution.</p>
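<p>Before committing to any of the options, it can help to measure which dependency actually dominates the bundle. A small stdlib sketch (the <code>package/</code> build-directory name is a hypothetical example):</p>

```python
import os

LIMIT = 1_048_576  # the 1 MB limit reported by CloudFront

def dir_size(root: str) -> int:
    """Total size in bytes of all files under root (recursively)."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            total += os.path.getsize(os.path.join(dirpath, name))
    return total

# e.g., for a build directory named "package/":
# sizes = {d: dir_size(os.path.join("package", d)) for d in os.listdir("package")}
# print(sorted(sizes.items(), key=lambda kv: -kv[1]), LIMIT)
```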
|
<python><python-3.x><aws-lambda><amazon-cloudfront><aws-lambda-edge>
|
2023-01-16 20:17:28
| 2
| 719
|
Frankster
|
75,139,201
| 2,023,745
|
How to recursively traverse a nested dictionary given a path?
|
<p>Given an n-level nested dictionary and a path (e.g. <code>['a', 'b', 'c', 'd']</code>), how can I traverse the nested dictionary to see if the path completely exists in it or not?</p>
<p>If the path doesn't exist, I want to return <code>None</code>; otherwise I want to return the value at the end of the path. I'm not quite sure why my function isn't working.</p>
<pre><code>def verify_path(path, d):
    if len(path) == 0: return
    if path[0] in d:
        return verify_path(path[1:], d[path[0]])
    else:
        return None

def main():
    d = {
        'a1': {
            'b1': {'asdf.txt': 10},
            'b2': {
                'c1': {'qwerty.pdf': 1},
            }
        },
        'a2': {'foo.bar': 99},
        'a3': {
            'b3': {
                'c2': {'img.heic': 100},
            },
        },
    }
    print(verify_path('/a2/foo.bar'.split('/'), d))
</code></pre>
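<p>For what it's worth, two things appear to bite here: the bare <code>return</code> for the empty path returns <code>None</code> even on success, and <code>'/a2/foo.bar'.split('/')</code> produces a leading empty string that can never match a key. A hedged sketch of a fix:</p>

```python
def verify_path(path, d):
    # empty path: every component matched, so return the current value
    if not path:
        return d
    # only descend while we still have a dict and the key exists
    if isinstance(d, dict) and path[0] in d:
        return verify_path(path[1:], d[path[0]])
    return None

d = {'a2': {'foo.bar': 99}}
# strip('/') removes the empty first component created by the leading slash
print(verify_path('/a2/foo.bar'.strip('/').split('/'), d))  # → 99
print(verify_path('a2/missing'.split('/'), d))              # → None
```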
|
<python><recursion>
|
2023-01-16 20:07:55
| 1
| 8,311
|
TheRealFakeNews
|
75,139,126
| 10,299,633
|
error when using SKforecast: UserWarning: `y` has DatetimeIndex index but no frequency. Index is overwritten with a RangeIndex of step 1
|
<p>I am trying to use <code>skforecast</code> for time series analysis, however I am getting a warning telling me that the series has no frequency, as if the index were not a <code>DatetimeIndex</code> — but in fact it is.</p>
<p>Here is the code:</p>
<pre><code>import yfinance as yf
import datetime as dt
from lightgbm import LGBMRegressor
from skforecast.ForecasterAutoreg import ForecasterAutoreg
from skforecast.model_selection import grid_search_forecaster

spxl = yf.Ticker("SPXL")
hist = spxl.history(start="2015-01-01")
hist = hist.asfreq("D")
data = hist.dropna()
type(data.index)
#Output: pandas.core.indexes.datetimes.DatetimeIndex

#Split data into train-val-test
#==============================================================================
data = data.loc['2015-01-01': '2022-12-31']
end_train = '2019-12-31'
end_validation = '2020-12-31'
data_train = data.loc[: end_train, :].copy()
data_val = data.loc[end_train:end_validation, :].copy()
data_test = data.loc[end_validation:, :].copy()

#Create forecaster
#==============================================================================
forecaster = ForecasterAutoreg(
    regressor = LGBMRegressor(),
    lags      = 7
)

#Grid search of hyper-parameters and lags
#==============================================================================
#Regressor hyper-parameters
param_grid = {
    'n_estimators': [100, 500],
    'max_depth': [3, 5, 10],
    'learning_rate': [0.01, 0.1]
}

#Lags used as predictors
lags_grid = [7]
</code></pre>
<p>Here where the warning is triggered, when creating forecaster:</p>
<pre><code>results_grid_q10 = grid_search_forecaster(
    forecaster         = forecaster,
    y                  = data.loc[:end_validation, 'Close'],
    param_grid         = param_grid,
    lags_grid          = lags_grid,
    steps              = 7,
    refit              = True,
    metric             = 'mean_squared_error',
    initial_train_size = int(len(data_train)),
    fixed_train_size   = False,
    return_best        = True,
    verbose            = False
)
</code></pre>
<p>I can not seem to understand what I am doing wrong!</p>
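<p>A note on the likely cause: the warning is about the <code>freq</code> attribute, not the index type. <code>dropna()</code> punches holes in the daily calendar, so pandas sets <code>index.freq</code> to <code>None</code> and skforecast falls back to a <code>RangeIndex</code>. A sketch of the difference on toy data — filling instead of dropping is one way to keep the frequency, though whether filling is appropriate for price data is a modelling choice:</p>

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2015-01-01", periods=10, freq="D")
s = pd.Series(np.arange(10.0), index=idx)
s.iloc[3] = np.nan  # simulate a non-trading day

# dropping rows leaves gaps, so the DatetimeIndex loses its freq
dropped = s.dropna()
print(dropped.index.freq)  # None  -> triggers the skforecast warning

# forward-filling keeps the calendar regular and freq intact
filled = s.asfreq("D").ffill()
print(filled.index.freq)   # <Day>
```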
|
<python><pandas><datetime><time-series><yfinance>
|
2023-01-16 19:57:56
| 1
| 327
|
Sam.H
|
75,139,027
| 7,318,120
|
how to resolve google sheets API error with python
|
<p>I am trying to read / write to google sheets using python.</p>
<p>I run this boilerplate code, but get the following error:</p>
<pre><code>APIError: {'code': 403, 'message': 'Request had insufficient authentication scopes.', 'errors': [{'message': 'Insufficient Permission', 'domain': 'global', 'reason': 'insufficientPermissions'}], 'status': 'PERMISSION_DENIED', 'details': [{'@type': 'type.googleapis.com/google.rpc.ErrorInfo', 'reason': 'ACCESS_TOKEN_SCOPE_INSUFFICIENT', 'domain': 'googleapis.com', 'metadata': {'method': 'google.apps.drive.v3.DriveFiles.List', 'service': 'drive.googleapis.com'}}]}
</code></pre>
<p>What I have done:</p>
<ul>
<li>I have followed the instructions and set up the Google API key.</li>
<li>I have downloaded the JSON key (named it <code>google_keys.json</code>).</li>
<li>I created a new Google Sheet called <code>test sheet</code> for testing.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import gspread
from google.oauth2.service_account import Credentials
# Use the credentials to create a client to interact with the Google Drive API
scopes = ['https://www.googleapis.com/auth/spreadsheets']
creds = Credentials.from_service_account_file('google_keys.json', scopes=scopes)
client = gspread.authorize(creds)
# Open a sheet from a spreadsheet in Google Sheets
sheet = client.open("test sheet").worksheet("Sheet1")
# Read a range of cells
cell_list = sheet.range('A1:C7')
# Print the values in the range of cells
for cell in cell_list:
    print(cell.value)
# Write values to the sheet
sheet.update_acell('B2', "I am writing to B2")
</code></pre>
<p>It appears to be an authentication problem.
So my question is: what else do I need to do to get this test code working?</p>
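<p>One likely culprit, judging from the <code>drive.googleapis.com</code> detail in the error: <code>client.open("test sheet")</code> looks the spreadsheet up by title via the Drive API, so the credentials need a Drive scope in addition to the Sheets one. A hedged sketch (the API calls are commented out because they need real credentials):</p>

```python
# Request both scopes; the Drive scope is what the by-title lookup needs.
scopes = [
    "https://www.googleapis.com/auth/spreadsheets",
    "https://www.googleapis.com/auth/drive",
]

# creds = Credentials.from_service_account_file("google_keys.json", scopes=scopes)
# client = gspread.authorize(creds)
# sheet = client.open("test sheet").worksheet("Sheet1")
# Alternatively, client.open_by_key(<spreadsheet id>) avoids the Drive lookup.
```

<p>Note that the sheet must also be shared with the service account's email address for it to be visible at all.</p>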
|
<python><google-sheets><google-sheets-api>
|
2023-01-16 19:45:21
| 1
| 6,075
|
darren
|
75,138,791
| 17,487,457
|
find the median frequency of the resultant fft magnitude
|
<p>I am confused about how to compute the median frequency from the FFT I computed over my data, using the following function:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.fft import fft, fftfreq
def fft_transfrom(data):
    # data has a 4D shape
    fourier = fft(data, axis=2)
    mag = np.round(np.abs(fourier), 0)
    fs = 5  # number of samples along the transformed axis
    freq = fftfreq(fs, d=1/fs)
    return mag, freq
</code></pre>
<p>And this sample data:</p>
<pre class="lang-py prettyprint-override"><code>A = np.array([
[[[0, 1, 2, 3],
[3, 0, 1, 2],
[2, 3, 0, 1],
[1, 3, 2, 1],
[1, 2, 3, 0]]],
[[[9, 8, 7, 6],
[5, 4, 3, 2],
[0, 9, 8, 3],
[1, 9, 2, 3],
[1, 0, -1, 2]]],
[[[0, 7, 1, 2],
[1, 2, 1, 0],
[0, 2, 0, 7],
[-1, 3, 0, 1],
[1, 0, 1, 0]]]
])
</code></pre>
<p>So I get the magnitude and frequencies like so:</p>
<pre class="lang-py prettyprint-override"><code>magnitude, frequency = fft_transfrom(A)
magnitude
array([[[[ 7., 9., 8., 7.],
[ 3., 4., 3., 3.],
[ 2., 2., 1., 2.],
[ 2., 2., 1., 2.],
[ 3., 4., 3., 3.]]],
[[[16., 30., 19., 16.],
[11., 7., 7., 2.],
[ 6., 11., 9., 5.],
[ 6., 11., 9., 5.],
[11., 7., 7., 2.]]],
[[[ 1., 14., 3., 10.],
[ 2., 4., 2., 6.],
[ 2., 7., 1., 7.],
[ 2., 7., 1., 7.],
[ 2., 4., 2., 6.]]]])
</code></pre>
<p>And sorted like so:</p>
<pre class="lang-py prettyprint-override"><code>sorted_magnitude = np.sort(magnitude, axis=2)
sorted_magnitude
array([[[[ 2., 2., 1., 2.],
[ 2., 2., 1., 2.],
[ 3., 4., 3., 3.],
[ 3., 4., 3., 3.],
[ 7., 9., 8., 7.]]],
[[[ 6., 7., 7., 2.],
[ 6., 7., 7., 2.],
[11., 11., 9., 5.],
[11., 11., 9., 5.],
[16., 30., 19., 16.]]],
[[[ 1., 4., 1., 6.],
[ 2., 4., 1., 6.],
[ 2., 7., 2., 7.],
[ 2., 7., 2., 7.],
[ 2., 14., 3., 10.]]]])
</code></pre>
<p>Each column is now sorted along the frequency axis.</p>
<p>But I have a problem finding the frequency that corresponds to the middle of the sorted power spectrum:</p>
<pre class="lang-py prettyprint-override"><code>median_frequency = frequency[np.where(magnitude ==
sorted_magnitude[int(len(magnitude)/2)])]
IndexError: too many indices for array: array is 1-dimensional, but 4 were indexed
</code></pre>
<p>How do I get the frequency of the middle <code>magnitude</code> in each array?</p>
<ol start="2">
<li>I can directly get the median of each array in magnitude using:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>np.median(magnitude, axis=2)
array([[[ 3., 4., 3., 3.]],
[[11., 11., 9., 5.]],
[[ 2., 7., 2., 7.]]])
</code></pre>
<p>How then do I get the frequency corresponding to the median power spectrum?</p>
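<p>One way to read off the frequency of the median-magnitude bin, sticking to the question's layout (frequency on <code>axis=2</code>): <code>argsort</code> along that axis, take the middle position, and use it to index <code>freq</code> directly. A sketch on random data of the same shape:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
mag = rng.random((3, 1, 5, 4))       # (arrays, 1, frequency bins, columns)
freq = np.fft.fftfreq(5, d=1 / 5)

# index of the median magnitude along the frequency axis, per column
order = np.argsort(mag, axis=2)
median_idx = order[:, :, mag.shape[2] // 2, :]  # (3, 1, 4)

median_freq = freq[median_idx]
print(median_freq.shape)  # (3, 1, 4): one median frequency per column
```

<p>With an odd number of bins the picked magnitude equals <code>np.median(..., axis=2)</code>; for an even count the median is an average of two bins and there is no single corresponding frequency.</p>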
|
<python><arrays><numpy><fft>
|
2023-01-16 19:20:27
| 0
| 305
|
Amina Umar
|
75,138,769
| 11,462,274
|
Using sklearn and keras to check how reliable a cumulative sum is for the long term
|
<p>Part of my CSV file:</p>
<pre class="lang-none prettyprint-override"><code>cumulative_sum
0.2244
0.75735
1.74845
1.93545
2.15985
2.8611
1.8611
2.2538
2.88025
3.83395
2.83395
5.0312
5.4426000000000005
5.5735
4.5735
3.5735
2.5735
3.38695
2.38695
1.3869500000000001
1.7048500000000002
2.47155
3.6309500000000003
4.1078
4.640750000000001
3.6407500000000006
4.3420000000000005
3.3420000000000005
3.6318500000000005
3.8282000000000003
4.08065
3.0806500000000003
3.7725500000000003
4.2868
3.2868000000000004
3.9974000000000003
4.7454
3.7454
4.7178
5.129200000000001
5.297500000000001
4.297500000000001
4.708900000000002
3.7089000000000016
</code></pre>
<p>I want to analyze how reliable the growth of this cumulative sum is for the long term:</p>
<pre class="lang-python prettyprint-override"><code>import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import LSTM, Dense
df = pd.read_csv("gradual_cumsum.csv")
scaler = MinMaxScaler(feature_range=(0, 1))
data = scaler.fit_transform(df)
train_size = int(len(data) * 0.8)
test_size = len(data) - train_size
train, test = data[0:train_size,:], data[train_size:len(data),:]
X_train = train[:, 0:1]
y_train = train[:, 1:2]
X_test = test[:, 0:1]
y_test = test[:, 1:2]
X_train = np.reshape(X_train, (X_train.shape[0], 1, X_train.shape[1]))
X_test = np.reshape(X_test, (X_test.shape[0], 1, X_test.shape[1]))
model = Sequential()
model.add(LSTM(50, input_shape=(X_train.shape[1], X_train.shape[2])))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(X_train, y_train, epochs=100, batch_size=1, verbose=1)
test_loss = model.evaluate(X_test, y_test, verbose=0)
print(f'Test Loss: {test_loss}')
</code></pre>
<p>But every epoch generates <code>loss: nan</code>:</p>
<pre><code>Epoch 1/100
849/849 [==============================] - 2s 1ms/step - loss: nan
</code></pre>
<p>And obviously the end result is:</p>
<pre><code>Test Loss: nan
</code></pre>
<p>What am I missing that is generating this failure?</p>
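<p>A possible cause worth checking: the CSV has a single column, so after scaling, <code>train[:, 1:2]</code> selects zero columns — the targets have shape <code>(n, 0)</code>, the loss is computed over nothing, and Keras reports <code>nan</code>. For a univariate series, inputs and targets have to be built from lagged values of the same column. A minimal sketch of the slicing problem and a lag-1 fix:</p>

```python
import numpy as np

data = np.arange(10.0).reshape(-1, 1)  # stand-in for the scaled 1-column CSV

y_bad = data[:, 1:2]
print(y_bad.shape)  # (10, 0): empty targets -> loss: nan

# lag-1 supervised pairs: predict step t from step t-1
X = data[:-1]  # values at t-1
y = data[1:]   # values at t
print(X.shape, y.shape)  # (9, 1) (9, 1)
# X would then be reshaped to (samples, timesteps=1, features=1) for the LSTM
```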
|
<python><tensorflow><keras><sklearn-pandas>
|
2023-01-16 19:18:30
| 0
| 2,222
|
Digital Farmer
|
75,138,558
| 11,780,316
|
How can I SHA1-HMAC a string in Python
|
<p>I've been trying to figure out how to use HMAC to hash a string in Python for a TOTP generator I've already made in PHP. In PHP, I use the command <code>echo -n "0x00000000035362f9" | xxd -r -p | openssl dgst -sha1 -mac HMAC -macopt hexkey:0080700040024c8581c2</code> which returns the desired value <code>7289e7b135d54b86a462a53da93ef6ad28b902f8</code>. However, when I use the <code>hmac</code> library in Python 3.10</p>
<pre class="lang-py prettyprint-override"><code>from hashlib import sha1
from hmac import new
key = "00000000035362f9"
msg = "0080700040024c8581c2"
byte_key = bytes(key, "UTF-8")
message = msg.encode()
print(new(byte_key, message, sha1).hexdigest())
print(new(message, byte_key, sha1).hexdigest())
</code></pre>
<p>The printed values are <code>b05fe172b6a8a20767c18e1bfba159f0ea54c2bd</code> and <code>0fe109b32b17aeff840255558e6b5c8ff3d8a115</code>, neither match what I want.</p>
<p>I've tried making the key and message hex values first, I've made them raw bytes using <code>b'0080700040024c8581c2'</code>, encoding them using UTF-8 and ASCII, and none of the solutions have worked.</p>
<p>I've looked at other posts relating to this, and none of them worked. <br>
<a href="https://stackoverflow.com/questions/35034637/python-hmac-sha1-calculation">Python hmac (sha1) calculation</a> <br>
<a href="https://stackoverflow.com/questions/62966599/python-hmac-openssl-equivalent">Python HMAC OpenSSL equivalent</a> <br>
<a href="https://stackoverflow.com/questions/53029229/hmac-returning-different-hexdigest-values-to-openssl">hmac returning different hexdigest values to openssl</a> <br>
<a href="https://stackoverflow.com/questions/48303874/why-python-and-node-jss-hmac-result-is-different-in-this-code">Why Python and Node.js's HMAC result is different in this code?</a> <br>
<a href="https://stackoverflow.com/questions/52518872/implementing-sha1-hmac-with-python">Implementing SHA1-HMAC with Python</a></p>
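<p>For comparison: in the shell pipeline, <code>xxd -r -p</code> turns the hex <em>message</em> into raw bytes and <code>-macopt hexkey:…</code> decodes the <em>key</em> from hex — so both values must be hex-decoded in Python, and note that in the snippet above the key and message appear swapped relative to the openssl command. A sketch (whether this reproduces the exact target digest also depends on how <code>xxd</code> treats the <code>0x</code> prefix):</p>

```python
import hmac
from hashlib import sha1

key = bytes.fromhex("0080700040024c8581c2")  # the openssl hexkey
msg = bytes.fromhex("00000000035362f9")      # the piped-in value, "0x" dropped

digest = hmac.new(key, msg, sha1).hexdigest()
print(digest)

# encoding the hex strings as UTF-8 text instead gives a different MAC:
wrong = hmac.new(b"0080700040024c8581c2", b"00000000035362f9", sha1).hexdigest()
print(digest != wrong)  # True
```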
|
<python><hash><sha1><hmac>
|
2023-01-16 18:56:04
| 1
| 367
|
lwashington27
|
75,138,435
| 15,724,084
|
yield scrapy.Request does not invoke parse function on each iteration
|
<p>In my code I have two functions inside a Scrapy class.
<code>start_requests</code> takes data from an Excel workbook and assigns a value to the <code>plate_num_xlsx</code> variable.</p>
<pre><code>def start_requests(self):
    df = pd.read_excel('data.xlsx')
    columnA_values = df['PLATE']
    for row in columnA_values:
        global plate_num_xlsx
        plate_num_xlsx = row
        print("+", plate_num_xlsx)
        base_url = f"https://dvlaregistrations.dvla.gov.uk/search/results.html?search={plate_num_xlsx}&action=index&pricefrom=0&priceto=&prefixmatches=&currentmatches=&limitprefix=&limitcurrent=&limitauction=&searched=true&openoption=&language=en&prefix2=Search&super=&super_pricefrom=&super_priceto="
        url = base_url
        yield scrapy.Request(url, callback=self.parse)
</code></pre>
<p>On each iteration it should invoke the <code>parse()</code> method of the Scrapy class, and inside that method the newly assigned value of <code>plate_num_xlsx</code> needs to be compared against the parsed value.
As I understood from the print statements, it first assigns all the values and only then calls <code>parse()</code> with the last value still in place. But for my crawler to function properly, I need each <code>parse()</code> call to use the value from the iteration that scheduled the request.
The code is below:</p>
<pre><code>import scrapy
from scrapy.crawler import CrawlerProcess
import pandas as pd

itemList = []

class plateScraper(scrapy.Spider):
    name = 'scrapePlate'
    allowed_domains = ['dvlaregistrations.dvla.gov.uk']

    def start_requests(self):
        df = pd.read_excel('data.xlsx')
        columnA_values = df['PLATE']
        for row in columnA_values:
            global plate_num_xlsx
            plate_num_xlsx = row
            print("+", plate_num_xlsx)
            base_url = f"https://dvlaregistrations.dvla.gov.uk/search/results.html?search={plate_num_xlsx}&action=index&pricefrom=0&priceto=&prefixmatches=&currentmatches=&limitprefix=&limitcurrent=&limitauction=&searched=true&openoption=&language=en&prefix2=Search&super=&super_pricefrom=&super_priceto="
            url = base_url
            yield scrapy.Request(url, callback=self.parse)

    def parse(self, response):
        for row in response.css('div.resultsstrip'):
            plate = row.css('a::text').get()
            price = row.css('p::text').get()
            a = plate.replace(" ", "").strip()
            print(plate_num_xlsx, a, a == plate_num_xlsx)
            if plate_num_xlsx == plate.replace(" ", "").strip():
                item = {"plate": plate.strip(), "price": price.strip()}
                itemList.append(item)
                yield item
            else:
                item = {"plate": plate_num_xlsx, "price": "-"}
                itemList.append(item)
                yield item

        with pd.ExcelWriter('output_res.xlsx', mode='r+', if_sheet_exists='overlay') as writer:
            df_output = pd.DataFrame(itemList)
            df_output.to_excel(writer, sheet_name='result', index=False, header=True)

process = CrawlerProcess()
process.crawl(plateScraper)
process.start()
</code></pre>
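<p>For context, the effect described above is not Scrapy re-ordering anything: every <code>start_requests</code> iteration runs (reassigning the global each time) before any response comes back, so every <code>parse()</code> call reads the final value. The same late-binding behaviour can be shown in plain Python, and binding the value per callback is the fix — in Scrapy that is what <code>cb_kwargs</code> is for, e.g. <code>scrapy.Request(url, callback=self.parse, cb_kwargs={'plate_num_xlsx': row})</code> together with <code>def parse(self, response, plate_num_xlsx):</code>. A sketch of the effect without Scrapy:</p>

```python
plates = ["AA1", "BB2", "CC3"]

# reading a shared variable later: every callback sees only the last value
callbacks = []
for plate_num_xlsx in plates:
    callbacks.append(lambda: plate_num_xlsx)
print([cb() for cb in callbacks])  # ['CC3', 'CC3', 'CC3']

# binding the value at creation time (what cb_kwargs does per request)
bound = [lambda p=plate: p for plate in plates]
print([cb() for cb in bound])      # ['AA1', 'BB2', 'CC3']
```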
|
<python><scrapy>
|
2023-01-16 18:43:25
| 1
| 741
|
xlmaster
|
75,138,423
| 2,529,619
|
Fast vector addition of multiple columns in pandas
|
<p>I need to add up multiple columns of large DataFrames in a loop, and it is taking a very long time. The bottleneck is in creating copies of the DataFrame.</p>
<p>Here is a simplified code sample of what I am trying to do:</p>
<pre><code>import itertools as it
import numpy as np
import pandas as pd

def sum_combo_ranks(df, combos: list[str]) -> pd.DataFrame:
    """Returns the sum of the composite ranks for each combination of columns.

    Returns:
        DataFrame
        Each column contains a single system
        Each row is a date/ticker
        Each cell contains the aggregated rank for that stock on that date.
    """
    # Each DataFrame in lst is the sum of a single combination of selected columns;
    # This step is a bottleneck.
    # 2.9s
    lst = [df[x] for x in combos]
    # For each subset, add each row across.
    # 2.1s
    sums = [x.sum(axis=1) for x in lst]
    # Join the list of df's into a single DataFrame.
    # 0.3s
    new_df = pd.DataFrame(sums)
    return new_df

# Sample DataFrame. In real life it has real numbers.
df = pd.DataFrame(np.zeros([6000, 100]))
# Generate a list of multiple combinations of 98 column numbers out of 100.
combos = list(list(x) for x in it.combinations(range(100), 98))[:1000]
result = sum_combo_ranks(df, combos)
</code></pre>
<p>The biggest chunk of time is spent making numerous copies of the relevant columns. There should be a way to add up the relevant columns by reference.</p>
<p>Is this possible? How can we speed up this operation?</p>
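<p>One way to avoid the per-combination copies is to leave pandas entirely: convert the frame to a NumPy array once, encode the combinations as a 0/1 membership matrix, and compute all the sums with a single matrix product. A sketch on a small frame (not benchmarked here, but it replaces a thousand column selections with one matmul):</p>

```python
import itertools as it
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.default_rng(0).random((100, 10)))
combos = [list(c) for c in it.combinations(range(10), 8)][:20]

arr = df.to_numpy()                          # one conversion, no copies per combo
mask = np.zeros((len(combos), arr.shape[1]))
for r, combo in enumerate(combos):
    mask[r, combo] = 1.0                     # 1 where the column is in the combo

sums = arr @ mask.T                          # (rows, n_combos): all sums at once
expected = df[combos[0]].sum(axis=1).to_numpy()
print(np.allclose(sums[:, 0], expected))     # True
```

<p>The matmul is a single BLAS call, so the cost scales with one pass over the data rather than with the number of combinations' copies.</p>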
|
<python><pandas><python-itertools>
|
2023-01-16 18:42:32
| 2
| 7,582
|
ChaimG
|
75,138,404
| 14,742,509
|
replace values in a category column based on the order they appear
|
<p>I have a DataFrame with 1k records similar to the below one. Column B is a group column in sorted order.</p>
<pre><code>import pandas as pd
df = pd.DataFrame([['red', 0], ['green', 0], ['blue', 16],['white', 58],['yellow', 59], ['purple', 71], ['violet', 82],['grey', 82]], columns=['A','B'])
df
A B
0 red 0
1 green 0
2 blue 16
3 white 58
4 yellow 59
5 purple 71
6 violet 82
7 grey 82
</code></pre>
<p>How could I update column B to get an output like the below one (to use the column as a category later)?</p>
<pre><code>output_df = pd.DataFrame([['red', 'group1'], ['green', 'group1'], ['blue', 'group2'],['white', 'group3'],['yellow', 'group4'], ['purple', 'group5'], ['violet', 'group6'],['grey', 'group6']], columns=['A','B'])
output_df
A B
0 red group1
1 green group1
2 blue group2
3 white group3
4 yellow group4
5 purple group5
6 violet group6
7 grey group6
</code></pre>
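<p>One way to do this is <code>pandas.factorize</code>, which assigns consecutive codes in order of first appearance — exactly the desired numbering since column B is already sorted:</p>

```python
import pandas as pd

df = pd.DataFrame([['red', 0], ['green', 0], ['blue', 16], ['white', 58],
                   ['yellow', 59], ['purple', 71], ['violet', 82], ['grey', 82]],
                  columns=['A', 'B'])

codes, _uniques = pd.factorize(df['B'])          # 0, 0, 1, 2, 3, 4, 5, 5
df['B'] = 'group' + pd.Series(codes + 1, index=df.index).astype(str)
print(df['B'].tolist())
# ['group1', 'group1', 'group2', 'group3', 'group4', 'group5', 'group6', 'group6']
```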
|
<python><pandas><dataframe>
|
2023-01-16 18:40:20
| 1
| 329
|
DOT
|
75,138,362
| 7,190,950
|
Python Django nginx uWsgi getting failed on specific callback endpoints
|
<p>I'm running a Django web app in a Docker container, using Nginx with uWSGI.
Overall the web app works just fine; it fails only on specific callback endpoints during social app (Google, Facebook) registration.</p>
<p>Below is the command I use to run uWSGI:</p>
<pre><code>uwsgi --socket :8080 --master --strict --enable-threads --vacuum --single-interpreter --need-app --die-on-term --module config.wsgi
</code></pre>
<p>Below is the endpoint where it fails (Django allauth lib)</p>
<pre><code>accounts/google/login/callback/?state=..........
</code></pre>
<p>Below is the error message:</p>
<pre><code>!! uWSGI process 27 got Segmentation Fault !!!
...
upstream prematurely closed connection while reading response header from upstream, client: ...
...
DAMN ! worker 1 (pid: 27) died :( trying respawn ...
Respawned uWSGI worker 1 (new pid: 28)
</code></pre>
<p>Just FYI, this works without any issues in a local Docker container but fails in the GCP container. It used to work fine on GCP as well, so probably something happened after recent dependency updates.</p>
<pre><code>Environment:
Python: 3.9.16
Django: 3.2.3
allauth: 0.44.0 (Django authentication library)
Nginx: nginx/1.23.3
uWSGI: 2.0.20
</code></pre>
|
<python><django><docker><uwsgi><django-allauth>
|
2023-01-16 18:35:39
| 0
| 421
|
Arman Avetisyan
|
75,138,334
| 4,418,481
|
Dash Leaflet popup to show multiline with links
|
<p>I have made a simple map using dash-leaflet and wanted to add a popup based on some data that I have.</p>
<p>This data contains names, prices and links which I would like to show in the popup.</p>
<p>I tried to do the following but it gave this result:</p>
<pre><code>markers = []
for index, row in data.iterrows():
    # title = str(row['title'])
    # price = str("{:,}".format(row['price']))
    title = "hello"  # example value
    price = "1"  # example value
    link = "<a href='https://www.w3schools.com/'>Visit W3Schools.com!</a>"
    marker_text = title + "\n" + price + "\n" + "<b>" + link + "</b>"
    markers.append(
        dl.Marker(
            title=row['title'],
            position=(row['latitude'], row['longitude']),
            children=[
                dl.Popup(marker_text),
            ],
        )
    )
children = dl.MarkerClusterGroup(id="markers", children=markers)
</code></pre>
<p><a href="https://i.sstatic.net/o4Hyk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o4Hyk.png" alt="enter image description here" /></a></p>
<p>I also tried to format the <code>marker_text</code> with <code>&lt;br&gt;</code> tags and to insert an HTML element into the <code>dl.Popup</code>, but neither worked.</p>
<p>When I inspect the element it looks like this:
<a href="https://i.sstatic.net/5o5gb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5o5gb.png" alt="enter image description here" /></a>
Any ideas?</p>
<p>Thank you</p>
|
<python><plotly-dash><dash-leaflet>
|
2023-01-16 18:33:31
| 1
| 1,859
|
Ben
|
75,138,131
| 443,908
|
pythonic way of expressing set with infinite number
|
<p>I'm leading a Python study group in a couple of weekends and thought I would go over some simple discrete math ideas, starting with sets. Is there a pythonic way of expressing a set like</p>
<p>A = {x | -inf < x < 6}</p>
<p>A simple boolean check of <code>n < 6</code> will obviously work, but I am wondering if there is a way to make it into a set-like object so I can do things like <code>n in myset</code> and <code>mysubset.issubset(myset)</code>.</p>
<p>Thanks</p>
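<p>For what it's worth, a small class implementing <code>__contains__</code> (and, say, a subset test against another interval) can stand in for such an infinite set — a sketch, not a full <code>set</code> replacement:</p>

```python
import math

class Interval:
    """The set {x | lo < x < hi}, with infinite endpoints allowed."""
    def __init__(self, lo=-math.inf, hi=math.inf):
        self.lo, self.hi = lo, hi

    def __contains__(self, x):        # enables `x in A`
        return self.lo < x < self.hi

    def issubset(self, other):        # interval containment, not element-wise
        return other.lo <= self.lo and self.hi <= other.hi

A = Interval(hi=6)                    # {x | -inf < x < 6}
print(5 in A, 6 in A)                 # True False
print(Interval(0, 3).issubset(A))     # True
```

<p>For the study group it may also be worth mentioning that sympy ships a ready-made <code>sympy.Interval</code> with set operations built in.</p>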
|
<python><set>
|
2023-01-16 18:14:18
| 0
| 1,654
|
badperson
|
75,138,086
| 3,312,636
|
Unable to read file from DBFS mount
|
<p>I am using Python code in Databricks to read a JSON using the below code.</p>
<pre><code>import json

my_json = ''
with open('/dbfs/mnt/my_mount/my_json.json', 'r') as fp:
    my_json = json.load(fp)
</code></pre>
<p>But I am getting the below error</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: '/dbfs/mnt/my_mount/my_json.json'
</code></pre>
<p>However, if I try to read it using a Spark DataFrame, I am able to read the contents of the JSON file, which I verified using the <code>.show()</code> method of the Spark DataFrame.</p>
<pre><code>df = spark.read.text('/dbfs/mnt/my_mount/my_json.json', wholetext=True)
</code></pre>
<p>How can I read the file using <code>with open</code>?</p>
<p>Below is the Databricks Runtime I am using.</p>
<p><a href="https://i.sstatic.net/1VUby.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1VUby.png" alt="enter image description here" /></a></p>
<p>Please let me know if you need any further details.</p>
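<p>One thing worth noting: the two APIs want different spellings of the same location — Python's <code>open()</code> goes through the FUSE mount (<code>/dbfs/…</code>), while <code>spark.read</code> wants the <code>dbfs:/…</code> URI (passing <code>/dbfs/mnt/…</code> to Spark actually points at <code>dbfs:/dbfs/mnt/…</code>). If <code>open()</code> still fails with the <code>/dbfs</code> prefix, the cluster's access mode may simply not expose the FUSE mount at all. A small helper to keep the two spellings straight (a sketch, not a Databricks API):</p>

```python
def to_local_path(path: str) -> str:
    """Spelling for Python file APIs such as open(): /dbfs/... prefix."""
    if path.startswith("dbfs:/"):
        return "/dbfs/" + path[len("dbfs:/"):].lstrip("/")
    return path

def to_spark_path(path: str) -> str:
    """Spelling for spark.read: dbfs:/... URI, without the FUSE prefix."""
    if path.startswith("/dbfs/"):
        return "dbfs:/" + path[len("/dbfs/"):]
    return path

print(to_local_path("dbfs:/mnt/my_mount/my_json.json"))  # /dbfs/mnt/my_mount/my_json.json
print(to_spark_path("/dbfs/mnt/my_mount/my_json.json"))  # dbfs:/mnt/my_mount/my_json.json
```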
|
<python><json><pyspark><databricks>
|
2023-01-16 18:08:22
| 0
| 21,579
|
Sarath Subramanian
|
75,137,717
| 7,427,859
|
Eventlet + DNS Python Attribute Error: module "dns.rdtypes" has no attribute ANY
|
<p>I know that someone else will face this problem. I had it today and was able to fix it promptly, so I want to share my solution:</p>
<p>Problem:</p>
<pre><code>from flask_socketio import SocketIO
</code></pre>
<p>You will receive an output error with something like:</p>
<blockquote>
<p>Attribute Error: module "dns.rdtypes" has no attribute ANY</p>
</blockquote>
<p>This only happens if you have installed eventlet, because it installs dnspython with it.</p>
<p>The solution is simple: just reinstall dnspython, pinning the previous release:</p>
<blockquote>
<p>python3 -m pip install dnspython==2.2.1</p>
</blockquote>
<p>The problem should disappear.</p>
|
<python><flask-socketio><eventlet><dnspython>
|
2023-01-16 17:32:09
| 2
| 496
|
Indesejavel Coisa
|
75,137,677
| 2,912,349
|
How to compute the Jacobian of a pairwise distance function (`scipy.spatial.pdist`)
|
<h2>Context</h2>
<p>I am the author and maintainer of <a href="https://github.com/paulbrodersen/netgraph" rel="nofollow noreferrer">netgraph</a>, a python library for creating network visualisations.
I am currently trying to optimise a routine that computes a set of <code>N</code> node positions for networks in which each edge has a defined length.
An example can be found <a href="https://stackoverflow.com/a/75083544/2912349">here</a>.</p>
<h2>Problem</h2>
<p>At its core, the routine runs <code>scipy.optimize.minimize</code> to compute the positions that maximise the total distance between nodes:</p>
<pre><code>def cost_function(positions):
    return 1. / np.sum((pdist(positions.reshape((-1, 2))))**power)

result = minimize(cost_function, initial_positions.flatten(), method='SLSQP',
                  jac="2-point", constraints=[nonlinear_constraint])
</code></pre>
<ul>
<li><code>positions</code> are an (unravelled) numpy array of (x, y) tuples.</li>
<li><code>power</code> is a smallish number that limits the influence of large distances (to encourage compact node layouts) but for the purpose of this question could be assumed to be 1.</li>
<li><code>pdist</code> is the pairwise distance function in <code>scipy.spatial</code>.</li>
</ul>
<p>The minimisation ( / maximisation) is constrained using the following non-linear constraint:</p>
<pre><code>lower_bounds = ... # (squareform of an) (N, N) distance matrix of the sum of node sizes (i.e. nodes should not overlap)
upper_bounds = ... # (squareform of an) (N, N) distance matrix constructed from the given edge lengths

def constraint_function(positions):
    positions = np.reshape(positions, (-1, 2))
    return pdist(positions)

nonlinear_constraint = NonlinearConstraint(constraint_function, lb=lower_bounds, ub=upper_bounds, jac='2-point')
</code></pre>
<p>For toy examples, the optimisation completes correctly and quickly. However, even for smallish networks, the running time is fairly abysmal.
My current implementation uses finite differences to approximate the gradients (<code>jac='2-point'</code>).
To speed up the computation, I would like to compute the Jacobians explicitly.</p>
<p>Following several Math Stackexchange posts (<a href="https://math.stackexchange.com/questions/4212391/is-the-hessian-matrix-of-euclidean-norm-function-positive-definite">1</a>, <a href="https://math.stackexchange.com/questions/1552653/jacobian-of-second-norm">2</a>), I computed the Jacobian of the pairwise distance function as follows:</p>
<pre><code>def delta_constraint(positions):
    positions = np.reshape(positions, (-1, 2))
    total_positions = positions.shape[0]
    delta = positions[np.newaxis, :, :] - positions[:, np.newaxis, :]
    distance = np.sqrt(np.sum(delta ** 2, axis=-1))
    jac = delta / distance[:, :, np.newaxis]
    squareform_indices = np.triu_indices(total_positions, 1)
    return jac[squareform_indices]

nonlinear_constraint = NonlinearConstraint(constraint_function, lb=lower_bounds, ub=upper_bounds, jac=delta_constraint)
</code></pre>
<p>However, this results in a <code>ValueError</code>, as the shape of the output is incorrect. For the triangle example, the expected output shape is (3, 6), whereas the function above returns a (3, 2) array (i.e. 3 pairwise distances by 2 dimensions). For the square the expected output is (6, 8), whereas the actual is (6, 2). Any help deriving and implementing the correct callable(s) for the <code>jac</code> arguments to <code>NonlinearConstraint</code> and <code>minimize</code> would be appreciated.</p>
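<p>For reference, the missing piece seems to be scattering the per-pair gradients into the full <code>(n_pairs, 2N)</code> matrix: the pair <code>(i, j)</code> contributes <code>(x_i - x_j)/d_ij</code> to node <code>i</code>'s two columns and its negation to node <code>j</code>'s, all other columns being zero. A sketch, checked against central finite differences:</p>

```python
import numpy as np
from scipy.spatial.distance import pdist

def pdist_jacobian(positions):
    """Jacobian of pdist w.r.t. the flattened (N, 2) positions: (N*(N-1)/2, 2N)."""
    n = positions.shape[0]
    i, j = np.triu_indices(n, 1)            # same pair ordering as pdist/squareform
    diff = positions[i] - positions[j]      # (P, 2)
    dist = np.linalg.norm(diff, axis=1)     # (P,)
    grad = diff / dist[:, None]             # d dist_ij / d x_i
    jac = np.zeros((len(i), 2 * n))
    rows = np.arange(len(i))[:, None]
    jac[rows, np.stack([2 * i, 2 * i + 1], axis=1)] = grad
    jac[rows, np.stack([2 * j, 2 * j + 1], axis=1)] = -grad
    return jac

pos = np.random.default_rng(0).random((4, 2))
J = pdist_jacobian(pos)
print(J.shape)  # (6, 8), matching the square example
```

<p>Note the gradient is undefined for coincident points (<code>dist == 0</code>), which the lower bounds should already exclude.</p>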
<h2>Note</h2>
<p>I would like to avoid the use of autograd/jax/numdifftools (as in <a href="https://stackoverflow.com/questions/41137092/jacobian-and-hessian-inputs-in-scipy-optimize-minimize">this question</a>), as I would like to keep the number of dependencies of my library small.</p>
<h2>Minimal working example(s)</h2>
<pre><code>#!/usr/bin/env python
"""
Create a node layout with fixed edge lengths but unknown node positions.
"""
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint
from scipy.spatial.distance import pdist, squareform
def get_geometric_node_layout(edges, edge_length, node_size=0., power=0.2, maximum_iterations=200, origin=(0, 0), scale=(1, 1)):
"""Node layout for defined edge lengths but unknown node positions.
Node positions are determined through non-linear optimisation: the
total distance between nodes is maximised subject to the constraint
imposed by the edge lengths, which are used as upper bounds.
If provided, node sizes are used to set lower bounds.
Parameters
----------
edges : list
The edges of the graph, with each edge being represented by a (source node ID, target node ID) tuple.
edge_lengths : dict
Mapping of edges to their lengths.
node_size : scalar or dict, default 0.
Size (radius) of nodes.
Providing the correct node size minimises the overlap of nodes in the graph,
which can otherwise occur if there are many nodes, or if the nodes differ considerably in size.
power : float, default 0.2.
The cost being minimised is the inverse of the sum of distances.
The power parameter is the exponent applied to each distance before summation.
Large values result in positions that are stretched along one axis.
Small values decrease the influence of long distances on the cost
and promote a more compact layout.
maximum_iterations : int
Maximum number of iterations of the minimisation.
origin : tuple, default (0, 0)
The (float x, float y) coordinates corresponding to the lower left hand corner of the bounding box specifying the extent of the canvas.
scale : tuple, default (1, 1)
The (float x, float y) dimensions representing the width and height of the bounding box specifying the extent of the canvas.
Returns
-------
node_positions : dict
Dictionary mapping each node ID to (float x, float y) tuple, the node position.
"""
# TODO: assert triangle inequality
# TODO: assert that the edges fit within the canvas dimensions
# ensure that graph is bi-directional
edges = edges + [(target, source) for (source, target) in edges] # forces copy
edges = list(set(edges))
# upper bound: pairwise distance matrix with unknown distances set to the maximum possible distance given the canvas dimensions
lengths = []
for (source, target) in edges:
if (source, target) in edge_length:
lengths.append(edge_length[(source, target)])
else:
lengths.append(edge_length[(target, source)])
sources, targets = zip(*edges)
nodes = sources + targets
unique_nodes = set(nodes)
indices = range(len(unique_nodes))
node_to_idx = dict(zip(unique_nodes, indices))
source_indices = [node_to_idx[source] for source in sources]
target_indices = [node_to_idx[target] for target in targets]
total_nodes = len(unique_nodes)
max_distance = np.sqrt(scale[0]**2 + scale[1]**2)
distance_matrix = np.full((total_nodes, total_nodes), max_distance)
distance_matrix[source_indices, target_indices] = lengths
distance_matrix[np.diag_indices(total_nodes)] = 0
upper_bounds = squareform(distance_matrix)
# lower bound: sum of node sizes
if isinstance(node_size, (int, float)):
sizes = node_size * np.ones((total_nodes))
elif isinstance(node_size, dict):
sizes = np.array([node_size[node] if node in node_size else 0. for node in unique_nodes])
sum_of_node_sizes = sizes[np.newaxis, :] + sizes[:, np.newaxis]
sum_of_node_sizes -= np.diag(np.diag(sum_of_node_sizes)) # squareform requires zeros on diagonal
lower_bounds = squareform(sum_of_node_sizes)
def cost_function(positions):
return 1. / np.sum((pdist(positions.reshape((-1, 2))))**power)
def constraint_function(positions):
positions = np.reshape(positions, (-1, 2))
return pdist(positions)
initial_positions = _initialise_geometric_node_layout(edges)
nonlinear_constraint = NonlinearConstraint(constraint_function, lb=lower_bounds, ub=upper_bounds, jac='2-point')
result = minimize(cost_function, initial_positions.flatten(), method='SLSQP',
jac="2-point", constraints=[nonlinear_constraint], options=dict(maxiter=maximum_iterations))
if not result.success:
print("Warning: could not compute valid node positions for the given edge lengths.")
print(f"scipy.optimize.minimize: {result.message}.")
node_positions_as_array = result.x.reshape((-1, 2))
node_positions = dict(zip(unique_nodes, node_positions_as_array))
return node_positions
def _initialise_geometric_node_layout(edges):
sources, targets = zip(*edges)
total_nodes = len(set(sources + targets))
return np.random.rand(total_nodes, 2)
if __name__ == '__main__':
import matplotlib.pyplot as plt
def plot_graph(edges, node_layout):
# poor man's graph plotting
fig, ax = plt.subplots()
for source, target in edges:
x1, y1 = node_layout[source]
x2, y2 = node_layout[target]
ax.plot([x1, x2], [y1, y2], color='darkgray')
ax.set_aspect('equal')
################################################################################
# triangle with right angle
edges = [
(0, 1),
(1, 2),
(2, 0)
]
lengths = {
(0, 1) : 3,
(1, 2) : 4,
(2, 0) : 5,
}
pos = get_geometric_node_layout(edges, lengths, node_size=0)
plot_graph(edges, node_layout=pos)
plt.show()
################################################################################
# square
edges = [
(0, 1),
(1, 2),
(2, 3),
(3, 0),
]
lengths = {
(0, 1) : 0.5,
(1, 2) : 0.5,
(2, 3) : 0.5,
(3, 0) : 0.5,
}
pos = get_geometric_node_layout(edges, lengths, node_size=0)
plot_graph(edges, node_layout=pos)
plt.show()
</code></pre>
<h2>Edit: Realistic use case for timing</h2>
<p>Below is a more realistic use case that I am using to time my code.
I have incorporated @adrianop01's computation of the Jacobian for the constraint. It also includes a superior initialisation. It requires the additional dependencies <code>networkx</code> and <code>netgraph</code>, both of which can be installed via pip.</p>
<pre><code>#!/usr/bin/env python
"""
Create a node layout with fixed edge lengths but unknown node positions.
"""
import numpy as np
from itertools import combinations
from scipy.optimize import minimize, NonlinearConstraint
from scipy.spatial.distance import pdist, squareform
from netgraph._node_layout import _rescale_to_frame
def get_geometric_node_layout(edges, edge_length, node_size=0., power=0.2, maximum_iterations=200, origin=(0, 0), scale=(1, 1)):
"""Node layout for defined edge lengths but unknown node positions.
Node positions are determined through non-linear optimisation: the
total distance between nodes is maximised subject to the constraint
imposed by the edge lengths, which are used as upper bounds.
If provided, node sizes are used to set lower bounds.
Parameters
----------
edges : list
The edges of the graph, with each edge being represented by a (source node ID, target node ID) tuple.
    edge_length : dict
Mapping of edges to their lengths.
node_size : scalar or dict, default 0.
Size (radius) of nodes.
Providing the correct node size minimises the overlap of nodes in the graph,
which can otherwise occur if there are many nodes, or if the nodes differ considerably in size.
power : float, default 0.2.
The cost being minimised is the inverse of the sum of distances.
The power parameter is the exponent applied to each distance before summation.
Large values result in positions that are stretched along one axis.
Small values decrease the influence of long distances on the cost
and promote a more compact layout.
maximum_iterations : int
Maximum number of iterations of the minimisation.
origin : tuple, default (0, 0)
The (float x, float y) coordinates corresponding to the lower left hand corner of the bounding box specifying the extent of the canvas.
scale : tuple, default (1, 1)
The (float x, float y) dimensions representing the width and height of the bounding box specifying the extent of the canvas.
Returns
-------
node_positions : dict
Dictionary mapping each node ID to (float x, float y) tuple, the node position.
"""
# TODO: assert triangle inequality
# TODO: assert that the edges fit within the canvas dimensions
# ensure that graph is bi-directional
edges = edges + [(target, source) for (source, target) in edges] # forces copy
edges = list(set(edges))
# upper bound: pairwise distance matrix with unknown distances set to the maximum possible distance given the canvas dimensions
lengths = []
for (source, target) in edges:
if (source, target) in edge_length:
lengths.append(edge_length[(source, target)])
else:
lengths.append(edge_length[(target, source)])
sources, targets = zip(*edges)
nodes = sources + targets
unique_nodes = set(nodes)
indices = range(len(unique_nodes))
node_to_idx = dict(zip(unique_nodes, indices))
source_indices = [node_to_idx[source] for source in sources]
target_indices = [node_to_idx[target] for target in targets]
total_nodes = len(unique_nodes)
max_distance = np.sqrt(scale[0]**2 + scale[1]**2)
distance_matrix = np.full((total_nodes, total_nodes), max_distance)
distance_matrix[source_indices, target_indices] = lengths
distance_matrix[np.diag_indices(total_nodes)] = 0
upper_bounds = squareform(distance_matrix)
# lower bound: sum of node sizes
if isinstance(node_size, (int, float)):
sizes = node_size * np.ones((total_nodes))
elif isinstance(node_size, dict):
sizes = np.array([node_size[node] if node in node_size else 0. for node in unique_nodes])
sum_of_node_sizes = sizes[np.newaxis, :] + sizes[:, np.newaxis]
sum_of_node_sizes -= np.diag(np.diag(sum_of_node_sizes)) # squareform requires zeros on diagonal
lower_bounds = squareform(sum_of_node_sizes)
invalid = lower_bounds > upper_bounds
lower_bounds[invalid] = upper_bounds[invalid] - 1e-8
def cost_function(positions):
# return -np.sum((pdist(positions.reshape((-1, 2))))**power)
return 1. / np.sum((pdist(positions.reshape((-1, 2))))**power)
def cost_jacobian(positions):
# TODO
pass
def constraint_function(positions):
positions = np.reshape(positions, (-1, 2))
return pdist(positions)
# adapted from https://stackoverflow.com/a/75154395/2912349
total_pairs = int((total_nodes - 1) * total_nodes / 2)
source_indices, target_indices = np.array(list(combinations(range(total_nodes), 2))).T # node order thus (0,1) ... (0,N-1), (1,2),...(1,N-1),...,(N-2,N-1)
rows = np.repeat(np.arange(total_pairs).reshape(-1, 1), 2, axis=1)
source_columns = np.vstack((source_indices*2, source_indices*2+1)).T
target_columns = np.vstack((target_indices*2, target_indices*2+1)).T
def constraint_jacobian(positions):
positions = np.reshape(positions, (-1, 2))
pairwise_distances = constraint_function(positions)
jac = np.zeros((total_pairs, 2 * total_nodes))
jac[rows, source_columns] = (positions[source_indices] - positions[target_indices]) / pairwise_distances.reshape((-1, 1))
jac[rows, target_columns] = -jac[rows, source_columns]
return jac
initial_positions = _initialise_geometric_node_layout(edges, edge_length)
nonlinear_constraint = NonlinearConstraint(constraint_function, lb=lower_bounds, ub=upper_bounds, jac=constraint_jacobian)
result = minimize(cost_function, initial_positions.flatten(), method='SLSQP',
jac='2-point', constraints=[nonlinear_constraint], options=dict(maxiter=maximum_iterations))
# result = minimize(cost_function, initial_positions.flatten(), method='trust-constr',
# jac=cost_jacobian, constraints=[nonlinear_constraint])
if not result.success:
print("Warning: could not compute valid node positions for the given edge lengths.")
print(f"scipy.optimize.minimize: {result.message}.")
node_positions_as_array = result.x.reshape((-1, 2))
node_positions_as_array = _rescale_to_frame(node_positions_as_array, np.array(origin), np.array(scale))
node_positions = dict(zip(unique_nodes, node_positions_as_array))
return node_positions
# # slow
# def _initialise_geometric_node_layout(edges, edge_length=None):
# sources, targets = zip(*edges)
# total_nodes = len(set(sources + targets))
# return np.random.rand(total_nodes, 2)
# much faster
def _initialise_geometric_node_layout(edges, edge_length=None):
"""Initialises the node positions using the FR algorithm with weights.
Shorter edges are given a larger weight such that the nodes experience a strong attractive force."""
from netgraph import get_fruchterman_reingold_layout
if edge_length:
edge_weight = dict()
for edge, length in edge_length.items():
edge_weight[edge] = 1 / length
else:
edge_weight = None
    node_positions = get_fruchterman_reingold_layout(edges, edge_weight=edge_weight) # pass the computed weights
return np.array(list(node_positions.values()))
if __name__ == '__main__':
from time import time
import matplotlib.pyplot as plt
import networkx as nx # pip install networkx
from netgraph import Graph # pip install netgraph
fig, (ax1, ax2) = plt.subplots(1, 2)
g = nx.random_geometric_graph(50, 0.3, seed=2)
node_positions = nx.get_node_attributes(g, 'pos')
plot_instance = Graph(g,
node_layout=node_positions,
node_size=1, # netgraph rescales node sizes by 0.01
node_edge_width=0.1,
edge_width=0.1,
ax=ax1,
)
ax1.axis([0, 1, 0, 1])
ax1.set_title('Original node positions')
def get_euclidean_distance(p1, p2):
return np.sqrt(np.sum((np.array(p1)-np.array(p2))**2))
edge_length = dict()
for (source, target) in g.edges:
edge_length[(source, target)] = get_euclidean_distance(node_positions[source], node_positions[target])
tic = time()
new_node_positions = get_geometric_node_layout(list(g.edges), edge_length, node_size=0.01)
toc = time()
print(f"Time elapsed : {toc-tic}")
Graph(g,
node_layout=new_node_positions,
node_size=1,
node_edge_width=0.1,
edge_width=0.1,
ax=ax2,
)
ax2.axis([0, 1, 0, 1])
ax2.set_title('Reconstructed node positions')
plt.show()
</code></pre>
<hr />
<h2>2nd edit</h2>
<p>Here are some preliminary results I obtained when testing @spinkus' and related solutions.
My implementation of his code looks like this:</p>
<pre><code>def cost_function(positions):
return -np.sum((pdist(positions.reshape((-1, 2))))**2)
def cost_jacobian(positions):
positions = positions.reshape(-1, 2)
delta = positions[np.newaxis, :] - positions[:, np.newaxis]
jac = -2 * np.sum(delta, axis=0)
return jac.ravel()
</code></pre>
<p>Unfortunately, this cost function takes significantly longer to converge: 13 seconds in a best of 5 with a large variance in the timings (up to a minute).
This is independent of whether I use the explicit Jacobian or approximate it using the finite difference approach.
Furthermore, the minimisation often ends prematurely with "scipy.optimize.minimize: Inequality constraints incompatible." and "scipy.optimize.minimize: Positive directional derivative for linesearch."
My bet (although I have little evidence to back it up) is that the absolute value of the cost matters. My original cost function decreases both in value and in absolute value, whereas the minimisation increases the absolute value of the @spinkus cost function (however, see @spinkus' excellent comment below on why that might be somewhat of a red herring and lead to less accurate solutions).</p>
<p>I also understood (I think) why my original cost function is not amenable to computing the Jacobians.
Let <code>power</code> be 0.5, then the cost function and Jacobian take this form (unless my algebra is wrong again):</p>
<pre><code>def cost_function(positions):
return 1. / np.sum((pdist(positions.reshape((-1, 2))))**0.5)
def cost_jacobian(positions):
positions = positions.reshape(-1, 2)
delta = positions[np.newaxis, :] - positions[:, np.newaxis]
distance = np.sqrt(np.sum(delta**2, axis=-1))
denominator = -2 * np.sqrt(delta) * distance[:, :, np.newaxis]
denominator[np.diag_indices_from(denominator[:, :, 0]),:] = 1
jac = 1 / denominator
return np.sum(jac, axis=0).ravel() - 1
</code></pre>
<p>The problematic term is the <code>sqrt(delta)</code>, where <code>delta</code> are the vectors between all points.
Ignoring the diagonals, half of the entries in this matrix are necessarily negative and thus the Jacobian cannot be computed.</p>
<p>However, the purpose of the power is simply to decrease the importance of large distances on the cost.
Any monotonically increasing function with decreasing derivative will do.
Using <code>log(x + 1)</code> instead of the power results in these functions:</p>
<pre><code>def cost_function(positions):
return 1 / np.sum(np.log(pdist(positions.reshape((-1, 2))) + 1))
def cost_jacobian(positions):
positions = positions.reshape(-1, 2)
delta = positions[np.newaxis, :] - positions[:, np.newaxis]
distance2 = np.sum(delta**2, axis=-1)
distance2[np.diag_indices_from(distance2)] = 1
jac = -delta / (distance2 + np.sqrt(distance2))[..., np.newaxis]
return np.sum(jac, axis=0).ravel()
</code></pre>
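<p>One quick sanity check before timing anything: validate a hand-derived Jacobian against a finite-difference estimate with <code>scipy.optimize.check_grad</code>. For a cost of the form <code>1/S</code>, the chain rule contributes an extra <code>-1/S**2</code> factor that the explicit Jacobians above appear to omit (this is my own derivation and worth double-checking), which would mis-scale the gradients handed to SLSQP:</p>

```python
import numpy as np
from scipy.optimize import check_grad
from scipy.spatial.distance import pdist

def cost_function(positions):
    return 1.0 / np.sum(np.log(pdist(positions.reshape((-1, 2))) + 1))

def cost_jacobian(positions):
    # chain rule for C = 1/S: dC/dx = -(1/S**2) * dS/dx -- the 1/S**2
    # factor is the piece that is easy to drop (my algebra, please verify)
    S = np.sum(np.log(pdist(positions.reshape((-1, 2))) + 1))
    xy = positions.reshape(-1, 2)
    delta = xy[np.newaxis, :] - xy[:, np.newaxis]
    distance2 = np.sum(delta**2, axis=-1)
    distance2[np.diag_indices_from(distance2)] = 1
    jac = -delta / (distance2 + np.sqrt(distance2))[..., np.newaxis]
    return np.sum(jac, axis=0).ravel() / S**2

rng = np.random.default_rng(0)
x0 = rng.random(10)                 # 5 points in 2-D, flattened
err = check_grad(cost_function, cost_jacobian, x0)
print(err)                          # small only if the analytic gradient is right
```

<p>If <code>check_grad</code> reports a large discrepancy, the line search receives gradients of the wrong magnitude, which would be consistent with the long and highly variable run times.</p>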
<p>With the finite difference approximation, the minimisation terminates in 0.5 seconds.
However, with the explicit Jacobian, the best run times were 4 seconds, albeit still with a very large variance, with run times up to a minute and longer.</p>
<h4>Tl;dr.</h4>
<p>I still don't understand why the minimisation does not run faster with explicit Jacobians.</p>
|
<python><optimization><scipy><scipy-optimize>
|
2023-01-16 17:27:22
| 2
| 12,703
|
Paul Brodersen
|
75,137,576
| 1,444,043
|
Problems with logging handler propagation - messages repeated
|
<p>I'm trying to set up logging for a module and have the following minimal reproducible example...</p>
<pre class="lang-py prettyprint-override"><code>import sys
from pathlib import Path
from datetime import datetime
import logging
start = datetime.now()
LOG_CONFIG = logging.basicConfig(
filename=Path().cwd().stem + f"-{start.strftime('%Y-%m-%d-%H-%M-%S')}.log", filemode="w"
)
LOG_FORMATTER = logging.Formatter(
fmt="[I %(levelname)-8s : %(message)s]"
)
LOG_DEBUG_FORMATTER = logging.Formatter(
fmt="[D %(levelname)-8s : %(message)s]"
)
LOG_ERROR_FORMATTER = logging.Formatter(
fmt="[E %(levelname)-8s : %(message)s]"
)
LOG_WARNING_FORMATTER = logging.Formatter(
fmt="[W %(levelname)-8s : %(message)s]"
)
LOGGER_NAME = "test"
def setup_logger(log_name: str = LOGGER_NAME) -> logging.Logger:
out_stream_handler = logging.StreamHandler(sys.stdout)
out_stream_handler.propagate = False
out_stream_handler.setLevel(logging.INFO)
out_stream_handler.setFormatter(LOG_FORMATTER)
debug_stream_handler = logging.StreamHandler(sys.stderr)
debug_stream_handler.propagate = False
debug_stream_handler.setLevel(logging.DEBUG)
debug_stream_handler.setFormatter(LOG_DEBUG_FORMATTER)
err_stream_handler = logging.StreamHandler(sys.stderr)
err_stream_handler.propagate = False
err_stream_handler.setLevel(logging.ERROR)
err_stream_handler.setFormatter(LOG_ERROR_FORMATTER)
warning_stream_handler = logging.StreamHandler(sys.stderr)
warning_stream_handler.propagate = False
warning_stream_handler.setLevel(logging.WARNING)
warning_stream_handler.setFormatter(LOG_WARNING_FORMATTER)
logger = logging.getLogger(log_name)
logger.setLevel(logging.INFO)
logger.propagate = False
if logger.hasHandlers():
logger.handlers = []
if not logger.handlers:
logger.addHandler(out_stream_handler)
logger.addHandler(debug_stream_handler)
logger.addHandler(err_stream_handler)
logger.addHandler(warning_stream_handler)
return logger
</code></pre>
<p>I read the documentation about <code>propagate</code> and have explicitly set it to <code>False</code> for each handler and the <code>logger</code>.</p>
<p>When I evaluate the following...</p>
<pre><code>LOGGER = setup_logger(log_name=LOGGER_NAME)
LEVELS = {"ERROR": logging.ERROR,
"WARNING": logging.WARNING,
"INFO": logging.INFO,
"DEBUG": logging.DEBUG}
for key, value in LEVELS.items():
print(f"################### setLevel : {key}")
LOGGER.setLevel(value)
LOGGER.error("ERROR Message")
LOGGER.warning("WARNING Message")
LOGGER.info("INFO Message")
LOGGER.debug("DEBUG Message")
</code></pre>
<p>...it results in <code>LOGGER.error()</code> being printed four times, <code>LOGGER.warning()</code> three, <code>LOGGER.info()</code> twice and <code>LOGGER.debug()</code> once, regardless of the <code>setLevel</code>. So <code>setLevel</code> works as I'd expect, only outputting the relevant level of messages, but I can't figure out why the messages are repeated.</p>
<pre><code>################### setLevel(ERROR)
I [ERROR : ERROR Message]
D [ERROR : ERROR Message]
E [ERROR : ERROR Message]
W [ERROR : ERROR Message]
################### setLevel(WARNING)
I [ERROR : ERROR Message]
D [ERROR : ERROR Message]
E [ERROR : ERROR Message]
W [ERROR : ERROR Message]
I [WARNING : WARNING Message]
D [WARNING : WARNING Message]
W [WARNING : WARNING Message]
################### setLevel(INFO)
I [ERROR : ERROR Message]
D [ERROR : ERROR Message]
E [ERROR : ERROR Message]
W [ERROR : ERROR Message]
I [WARNING : WARNING Message]
D [WARNING : WARNING Message]
W [WARNING : WARNING Message]
I [INFO : INFO Message]
D [INFO : INFO Message]
################### setLevel(DEBUG)
I [ERROR : ERROR Message]
D [ERROR : ERROR Message]
E [ERROR : ERROR Message]
W [ERROR : ERROR Message]
I [WARNING : WARNING Message]
D [WARNING : WARNING Message]
W [WARNING : WARNING Message]
I [INFO : INFO Message]
D [INFO : INFO Message]
D [DEBUG : DEBUG Message]
</code></pre>
<p>EDIT:</p>
<p>Python version is 3.10.9</p>
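<p>A minimal sketch of the mechanism I suspect: a logger hands every record it emits to <em>all</em> of its handlers, and each handler's level only filters what that one handler prints, so four stream handlers with levels DEBUG..ERROR all print an ERROR record. (Also, <code>propagate</code> is a <code>Logger</code> attribute; setting it on a handler is a silent no-op.) One handler per destination avoids the repeats:</p>

```python
import io
import logging

# A logger passes each record to ALL attached handlers; handler levels
# filter, they do not route. One handler per destination is enough.
stream = io.StringIO()                 # stand-in for sys.stdout/sys.stderr
logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)
logger.propagate = False
logger.handlers = []

handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("[%(levelname)-8s : %(message)s]"))
logger.addHandler(handler)

logger.error("ERROR Message")
print(stream.getvalue())               # the record appears exactly once
```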
|
<python><logging>
|
2023-01-16 17:19:02
| 1
| 2,447
|
slackline
|
75,137,538
| 8,967,422
|
Python How to run scripts from a subdirectory?
|
<p>I have such structure of project:</p>
<pre><code>lib/
...
scripts/
...
</code></pre>
<p>I have many Python scripts in the <code>scripts/</code> directory. All of them contain relative imports: <code>from lib import ...</code></p>
<p>So, how can I easily run the scripts from the project root <code>/</code> without changing them (without writing <code>chdir</code> in each script)?</p>
<p>Could I use some <code>__init__</code> file to change the working directory? Or is there a special command to run Python scripts from the root folder? Any other ways?</p>
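<p>A hedged sketch of the two usual approaches (directory and module names below are made up): either run each script as a module from the project root with <code>python -m scripts.myscript</code>, or make the root importable before importing <code>lib</code>. Demonstrating the second with a throwaway project:</p>

```python
import sys
import tempfile
from pathlib import Path

# Build a throwaway project with a lib/ package, as in the question:
root = Path(tempfile.mkdtemp())
(root / "lib").mkdir()
(root / "lib" / "__init__.py").write_text("VALUE = 42\n")

# Putting the root on sys.path is what launching from the root (or
# "python -m scripts.myscript") effectively gives you:
sys.path.insert(0, str(root))
from lib import VALUE
print(VALUE)
```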
|
<python><directory><python-import><relative-import>
|
2023-01-16 17:15:09
| 3
| 486
|
Alex Poloz
|
75,137,516
| 12,621,824
|
Python: Call function that returns two strings inside map function
|
<p>Hello, I am trying to create a function that is called inside a map function, splits the string that has been passed as input, and returns two processed strings. To make this clearer, here is my code (it doesn't seem to return anything).</p>
<pre><code>def prepare_data(data):
x1, x2 = data.split(" ", 1) # split only 1 time at the space
return x1.strip("\""), x2
if __name__ == "__main__":
print(list(map(prepare_data, '"word_1" rest of sentence')))
</code></pre>
<p>Any suggestions would be appreciated. Cheers!</p>
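<p>In case it helps pinpoint the issue, a sketch: <code>map()</code> applies the function to every element of an iterable, and a string's elements are its characters, so the function receives one character at a time. The input sentences below are made up:</p>

```python
def prepare_data(data):
    x1, x2 = data.split(" ", 1)   # split only once, at the first space
    return x1.strip('"'), x2

# map() iterates over its second argument. A bare string iterates
# character by character, so prepare_data would be called on single
# characters (where split() yields one piece and the 2-way unpacking
# fails). Passing a *list* of sentences gives one call per sentence:
sentences = ['"word_1" rest of sentence', '"word_2" more text']
result = list(map(prepare_data, sentences))
print(result)
# [('word_1', 'rest of sentence'), ('word_2', 'more text')]
```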
|
<python><map-function>
|
2023-01-16 17:13:05
| 2
| 529
|
C96
|
75,137,492
| 14,091,382
|
AttributeError: 'Kernel' object has no attribute 'masker' during training of SVC with rbf kernel
|
<p>I am training my SVC model and I am getting this error with the Shap Library. Have tried some methods including shap.maskers.Independent but the same error keeps appearing. Am I missing out something?</p>
<p>Codes</p>
<pre><code>model = SVC (C = 10.0, gamma = 0.01, kernel = 'rbf', probability=True)
model.fit(X_resampled,y_resampled)
y_predict= model.predict_proba(X_resampled_test)[:,1] # for AUC calculation, ploting ROC-AUC and Precision_
from sklearn.metrics import log_loss
print("Log-loss on logit: {:6.4f}".format(log_loss(y_resampled_test, y_predict)))
plt.figure()
# # #Get shap values
# feature_names = X_train.columns
explainer = shap.KernelExplainer(model.predict_proba, X_resampled, link="logit")
shap_values = explainer(X_resampled, X_resampled_test)
</code></pre>
<p>Error</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_22576/2256358008.py in <module>
4 # feature_names = X_train.columns
5 explainer = shap.KernelExplainer(model.predict_proba, X_resampled, link="logit")
----> 6 shap_values = explainer(X_resampled, X_resampled_test)
c:\Users\danie\anaconda3\lib\site-packages\shap\explainers\_explainer.py in __call__(self, max_evals, main_effects, error_bounds, batch_size, outputs, silent, *args, **kwargs)
205 start_time = time.time()
206
--> 207 if issubclass(type(self.masker), maskers.OutputComposite) and len(args)==2:
208 self.masker.model = models.TextGeneration(target_sentences=args[1])
209 args = args[:1]
AttributeError: 'Kernel' object has no attribute 'masker'
</code></pre>
|
<python><machine-learning><scikit-learn><shap><svc>
|
2023-01-16 17:10:23
| 1
| 333
|
DDM
|
75,137,474
| 11,261,546
|
eigenvalues of a simple RGB image
|
<p>I'm trying to replicate the results of an image treatment algorithm I saw in a paper.</p>
<p>An important step is to calculate the eigenvalues of a squared <strong>image</strong>.</p>
<p>When reading this I thought they may mean the eigenvalues of each channel (i.e R G and B) as they are using colored images.</p>
<p>However, as mentioned in the paper, a squared image actually</p>
<blockquote>
<p>defines a square matrix in the three-dimensional RGB color space, from which three eigenvectors, orthogonal to one another in
the three-dimensional RGB color space, can be calculated in
the traditional manner. Within the context presented herein, the eigenvectors point in the direction of color change. The
eigenvector having the largest eigenvalue points in the direction of the most amount of color change, while the other
eigenvectors, by definition, point in orthogonal directions that
are also orthogonal to one another.</p>
</blockquote>
<p>Which makes sense. Then I started looking for tools to do this calculation. I found candidate functions in OpenCV and NumPy, but I ran into trouble with both:</p>
<ul>
<li>OpenCV's eigen():</li>
</ul>
<pre><code>img = cv2.imread("/home/my_user/Desktop/DSC02654.jpeg")
roi = img[0:10, 0:10, :]
roi = np.float32(roi)
print(roi.dtype) # float32
print(roi.shape) # (10, 10, 3)
res = cv2.eigen(roi)
</code></pre>
<p>With this I get <code>error: OpenCV(4.6.0) /io/opencv/modules/core/src/lapack.cpp:1390: error: (-215:Assertion failed) type == CV_32F || type == CV_64F in function 'eigen'</code>. Any clue? The type seems right.</p>
<ul>
<li>Numpy's eigvals():</li>
</ul>
<pre><code>#same roi variable
a = LA.eigvals(roi)
</code></pre>
<p>Here I get <code>LinAlgError: Last 2 dimensions of the array must be square</code>, which means numpy is not calculating the values over a space with 3D elements (I tried using only one channel and I do get the values). Any clue on a NumPy method that accepts this?</p>
<p>Thanks!</p>
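<p>One plausible reading of the paper's description (an assumption on my part, essentially a PCA of the colors): treat the ROI's pixels as points in 3-D RGB space and eigendecompose their 3×3 covariance matrix. The eigenvector with the largest eigenvalue then points in the direction of most color change, and the three eigenvectors are mutually orthogonal, as the quoted passage requires:</p>

```python
import numpy as np

# Stand-in for img[0:10, 0:10, :] -- random pixels instead of a file:
rng = np.random.default_rng(0)
roi = rng.random((10, 10, 3)).astype(np.float32)

pixels = roi.reshape(-1, 3)             # (100, 3): one RGB sample per pixel
cov = np.cov(pixels, rowvar=False)      # 3x3 covariance in RGB space
eigvals, eigvecs = np.linalg.eigh(cov)  # symmetric matrix -> use eigh
order = np.argsort(eigvals)[::-1]       # largest eigenvalue first
principal_direction = eigvecs[:, order[0]]
print(cov.shape, eigvals[order])
```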
|
<python><numpy><opencv><image-processing>
|
2023-01-16 17:08:50
| 0
| 1,551
|
Ivan
|
75,137,470
| 6,156,353
|
Can I attach migrations to a corresponding data warehouse model using SQLAlchemy and Alembic?
|
<p>Let's say I want to build a data warehouse using these two tools. I was thinking of something like</p>
<pre><code>root
- database_schema
-- table1.py (SQLAlchemy model)
-- table2.py
...
- database_schema2
-- table1.py
...
- alembic
</code></pre>
<p>However, alembic is creating all migrations in one folder (versions). Is it possible to "attach" migrations to the corresponding model? So e.g. I would have something like this:</p>
<pre><code>root
- database_schema
-- table1
--- migrations
---- migration1.py (Alembic migration)
--- table1.py (SQLAlchemy model)
-- table2
--- migrations
--- table2.py
...
- database_schema2
-- table1.py
...
- alembic
</code></pre>
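<p>Alembic cannot attach a revision file next to each model automatically, but <code>alembic.ini</code> accepts several revision folders via <code>version_locations</code>, which together with branch labels lets each model package own its migrations. A hypothetical sketch for the layout above (paths are illustrative):</p>

```ini
# alembic.ini (sketch -- paths are hypothetical)
[alembic]
script_location = alembic
# space-separated list of folders Alembic will scan for revision files:
version_locations = database_schema/table1/migrations database_schema/table2/migrations
```

<p>New revisions can then be directed into a specific folder with <code>alembic revision --version-path database_schema/table1/migrations</code>, typically combined with a branch label per package.</p>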
|
<python><sqlalchemy><data-warehouse><alembic>
|
2023-01-16 17:08:32
| 1
| 1,371
|
romanzdk
|
75,137,319
| 3,507,584
|
Plotly Scatterpolar change fill, line and hover color
|
<p>I have the following example. I can change the fill color to 'green' following the <a href="https://plotly.github.io/plotly.py-docs/generated/plotly.graph_objects.Scatterpolar.html?highlight=scatterpolar#plotly.graph_objects.Scatterpolar" rel="nofollow noreferrer">documentation</a> but I cannot find how to change the line/dots and tooltip colours to the same one. What's the best way to change all into one colour?</p>
<pre><code>import numpy as np
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go

df = px.data.wind()
df_test = df[df["strength"]=='0-1']
fig = go.Figure()
fig.add_trace(go.Scatterpolar(r=df_test['frequency'], theta=df_test['direction'],fill='toself',fillcolor='green'))
fig.show(config= {'displaylogo': False})
</code></pre>
<p><a href="https://i.sstatic.net/gK5oq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gK5oq.png" alt="Example" /></a></p>
|
<python><charts><scatter-plot><plotly>
|
2023-01-16 16:54:18
| 1
| 3,689
|
User981636
|
75,137,238
| 4,397,312
|
Can I pass 'wandb' object to another class in python?
|
<p>I have written a modular code and I wanted to pass a wandb object to a class which has been written in another .py file. I instantiated a wandb object using:</p>
<pre><code>import wandb
exp_name = "expriment name"
run = wandb.init(config = wandb.config, project= exp_name, entity="username")
</code></pre>
<p>at the top of the main.py file. Now whenever I need to log anything I use <code>run.log({'Accuracy/train': 100.0 * n_class_corrected / total_class_samples}, step=iteration)</code>
and it works when I call it within main.py or pass it to a function. But when I pass it to a class defined in another .py file, it does not log anything.
Overall, my question is: how should I pass a wandb object to a class that lives in another .py file? Is there any consideration that I should pay attention to?</p>
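<p>A hedged sketch of what should work (the <code>Trainer</code> class and the fake run below are illustrative, not wandb API): the object returned by <code>wandb.init(...)</code> is an ordinary Python object, so it can be passed into a class in another .py file like any other dependency, as long as the class stores and uses that same handle rather than calling <code>wandb.init</code> again:</p>

```python
# Hypothetical class living in another module -- it receives the run:
class Trainer:
    def __init__(self, run):
        self.run = run                       # keep the handle, don't re-init

    def step(self, acc, iteration):
        self.run.log({"Accuracy/train": acc}, step=iteration)

# Stand-in for wandb.init(...), to show the mechanics without an account:
class FakeRun:
    def __init__(self):
        self.calls = []
    def log(self, data, step=None):
        self.calls.append((data, step))

run = FakeRun()
Trainer(run).step(99.0, iteration=10)
print(run.calls)  # [({'Accuracy/train': 99.0}, 10)]
```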
|
<python><logging><wandb>
|
2023-01-16 16:46:14
| 1
| 717
|
Milad Sikaroudi
|
75,137,217
| 10,658,339
|
How to get values outside an interval pandas DataFrame
|
<p>I'm trying to get the values outside an interval in a pandas DataFrame, and I want to avoid iterating over the rows. Is there any way to do that?</p>
<p>This is what I was trying, but it gives the error</p>
<blockquote>
<p>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD'))
fence_low = 30
fence_high = 70
df_out = df[(df['A'] <= fence_low) or (df['A'] >= fence_high)]
df_out
</code></pre>
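<p>For reference, the error comes from combining boolean Series with Python's <code>or</code>, which asks each Series for a single truth value; element-wise masks are combined with <code>|</code> (and <code>&amp;</code>, <code>~</code>), keeping the parentheses. A sketch:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=list('ABCD'))
fence_low, fence_high = 30, 70

# `or` collapses each Series to one bool (hence the ValueError);
# | combines the two masks element by element instead:
df_out = df[(df['A'] <= fence_low) | (df['A'] >= fence_high)]
print(df_out['A'].between(fence_low + 1, fence_high - 1).any())  # False
```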
|
<python><pandas><dataframe><filtering><apply>
|
2023-01-16 16:44:17
| 3
| 527
|
JCV
|
75,137,163
| 17,103,465
|
How to reset index for pandas.core.series
|
<p>My dataframe contains a string column 'code_desc' which has codes in it.</p>
<p>I have a pandas.core.series generated by the code below:</p>
<pre><code>d = {'R438': 'The person is a Ninja',
'R727': 'The person is a Pirate',
.....}
pat = r'\b(%s)\b' % '|'.join(d)
codes = df['codes_desc'].str.extractall(pat)[0]
df['reference'] = codes.map(d).groupby(level=0).agg(', '.join)
</code></pre>
<p>Here is the output of 'codes'</p>
<pre><code> matches
0 0 R438
1 R727
2 R662
1 0 R438
1 R727
2 R662
2 0 R438
1 R727
2 R662
3 0 R438
1 R727
2 R662
4 0 R438
1 R727
2 R662
5 0 R438
1 R727
2 R662
0 0 R227
1 R438
2 R727
5 0 R223
4 0 R223
</code></pre>
<p>I have tried using <code>.reset_index(level=1, inplace=True, drop=True)</code> but it does not reset the index in the way I want.</p>
<p>Expected output :</p>
<pre><code> matches
0 0 R438
1 R727
2 R662
1 0 R438
1 R727
2 R662
2 0 R438
1 R727
2 R662
3 0 R438
1 R727
2 R662
4 0 R438
1 R727
2 R662
5 0 R438
1 R727
2 R662
6 0 R227
1 R438
2 R727
7 0 R223
8 0 R223
</code></pre>
<p>May you please let me know; how to achieve this.</p>
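<p>Assuming the goal is what the expected output suggests, namely renumbering each consecutive run of equal outer labels as 0, 1, 2, ... while keeping the inner match number, a run-length renumbering sketch (the toy series below stands in for <code>codes</code>):</p>

```python
import pandas as pd

# Toy stand-in for `codes`: outer level repeats 0 later on, as in the post
idx = pd.MultiIndex.from_tuples(
    [(0, 0), (0, 1), (5, 0), (0, 0), (4, 0)], names=[None, "match"]
)
codes = pd.Series(["R438", "R727", "R223", "R227", "R223"], index=idx)

# Each change in the outer label starts a new group number:
outer = pd.Series(codes.index.get_level_values(0))
new_outer = outer.ne(outer.shift()).cumsum() - 1
codes.index = pd.MultiIndex.from_arrays(
    [new_outer, codes.index.get_level_values(1)], names=idx.names
)
print(codes.index.get_level_values(0).tolist())  # [0, 0, 1, 2, 3]
```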
|
<python><pandas>
|
2023-01-16 16:38:22
| 0
| 349
|
Ash
|
75,137,090
| 6,464,525
|
pyflink AttributeError: type object 'EnvironmentSettings' has no attribute 'in_streaming_mode'
|
<pre class="lang-py prettyprint-override"><code>%pyflink
from pyflink.table import EnvironmentSettings, StreamTableEnvironment
env_settings = EnvironmentSettings.in_streaming_mode()
table_env = StreamTableEnvironment.create(environment_settings=env_settings)
</code></pre>
<p>Will fail with the following context:</p>
<pre><code>AttributeError Traceback (most recent call last)
<ipython-input-48-cfd5e4663f51> in <module>
1 from pyflink.table import EnvironmentSettings, StreamTableEnvironment
2
----> 3 env_settings = EnvironmentSettings.in_streaming_mode()
4 table_env = StreamTableEnvironment.create(environment_settings=env_settings)
AttributeError: type object 'EnvironmentSettings' has no attribute 'in_streaming_mode'
</code></pre>
<p>I am running a Zeppelin notebook in AWS Kinesis Data Analytics, and all of the documentation suggests that this is the correct way to initialise the environment.</p>
<p>This was pulled directly from their getting started repository amazon-kinesis-data-analytics-java-examples <code>python\GettingStarted\getting-started.py</code></p>
<p>According to flink <a href="https://github.com/apache/flink/blob/release-1.16/flink-python/pyflink/examples/table/word_count.py" rel="nofollow noreferrer">docs</a> examples</p>
<pre><code>t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
</code></pre>
<p>is sufficient but alas it fails for the same reason.</p>
<p>And showcased in the API docs themselves <a href="https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/dev/python/table/intro_to_table_api/" rel="nofollow noreferrer">here</a></p>
<p>I cannot find an example where <code>in_streaming_mode()</code> is not called, yet it isn't an attribute on <code>EnvironmentSettings</code> in my environment.</p>
<p>What am I doing wrong? Where is some worthwhile documentation on this stuff?</p>
|
<python><apache-flink><amazon-kinesis><pyflink>
|
2023-01-16 16:31:48
| 1
| 344
|
hi im Bacon
|
75,136,991
| 3,004,472
|
Reading multiple files from different aws S3 in Spark parallelly
|
<p>I have a scenario where I need to read many files (in CSV or Parquet) from S3 buckets in different locations and with different schemas.</p>
<p>My purpose is to extract all metadata information from the different S3 locations, keep it as a DataFrame, and save it as a CSV file in S3 itself. The problem is that I have a lot of S3 locations to read the files from (partitioned). My sample S3 locations look like
<pre><code>s3://myRawbucket/source1/filename1/year/month/day/16/f1.parquet
s3://myRawbucket/source2/filename2/year/month/day/16/f2.parquet
s3://myRawbucket/source3/filename3/year/month/day/16/f3.parquet
s3://myRawbucket/source100/filename100/year/month/day/16/f100.parquet
s3://myRawbucket/source150/filename150/year/month/day/16/f150.parquet and .......... so on
</code></pre>
<p>All I need to do is use Spark to read these files (around 200), apply some transformations if required, and extract the header information, counts, S3 location information, and data types.</p>
<p>What is an efficient way to read all these files (different schemas), process them with Spark DataFrames, and save the result as CSV in an S3 bucket? Please bear with me as I am new to the Spark world. I am using Python (PySpark).</p>
|
<python><apache-spark><pyspark><apache-spark-sql><boto3>
|
2023-01-16 16:23:22
| 1
| 880
|
BigD
|
75,136,853
| 2,423,198
|
Use shift in pd.NamedAgg groupby in pandas
|
<p>I find it very convenient to use <code>pd.NamedAgg</code> in order to give names to columns after aggregation, but I don't know how to handle non-aggregating functions like <code>shift</code>. The code below doesn't work because of the 3rd aggregation.</p>
<pre><code>df.groupby(["ID", "Day", "hour"]).agg(
profit_avg=pd.NamedAgg(column='profit', aggfunc='mean'),
volume_max=pd.NamedAgg(column='volume', aggfunc='max'),
pn_price_shift=pd.NamedAgg(column='price', aggfunc='shift')
)
</code></pre>
<p>I get this error:</p>
<pre><code>NotImplementedError: Can only union MultiIndex with MultiIndex or Index of tuples, try
mi.to_flat_index().union(other) instead.
</code></pre>
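<p>For context, <code>.agg</code> expects reductions (one value per group), while <code>shift</code> is a transformation (one value per row), so it cannot be expressed as a <code>NamedAgg</code>. The usual split computes the shift separately with <code>groupby().shift()</code>; a sketch with made-up column values:</p>

```python
import pandas as pd

# Toy frame with the columns from the question (values invented):
df = pd.DataFrame({
    "ID":     [1, 1, 1, 2, 2],
    "Day":    [1, 1, 2, 1, 1],
    "hour":   [0, 0, 0, 0, 0],
    "profit": [10.0, 20.0, 30.0, 40.0, 50.0],
    "volume": [5, 7, 6, 2, 3],
    "price":  [1.0, 2.0, 3.0, 4.0, 5.0],
})

# shift is per-row (a transformation), so keep it outside .agg:
df["pn_price_shift"] = df.groupby(["ID", "Day", "hour"])["price"].shift()

# ...and use .agg only for true aggregations:
out = df.groupby(["ID", "Day", "hour"]).agg(
    profit_avg=pd.NamedAgg(column="profit", aggfunc="mean"),
    volume_max=pd.NamedAgg(column="volume", aggfunc="max"),
)
print(out)
```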
|
<python><pandas><group-by>
|
2023-01-16 16:11:39
| 0
| 3,400
|
deltascience
|
75,136,745
| 16,521,194
|
subprocess.Popen raises error while command works in shell
|
<h2>Issue</h2>
<p>I have this command, which works in the shell:</p>
<pre class="lang-bash prettyprint-override"><code><ABS_PATH>/odbc2parquet query --connection-string "Driver={<DRIVER_NAME>};SERVER=<SERVER_IP>,<SERVER_PORT>;DATABASE=<DB_NAME>;UID=<UID>;PWD=<PWD>" <OUTPUT_PATH> "<QUERY>"
</code></pre>
<p>But the following python code doesn't work:</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
from typing import List
cmd_odbc: List[str] = [
"<ABS_PATH>/odbc2parquet",
"query",
"--connection-string",
'"Driver={<DRIVER_NAME>};SERVER=<SERVER_IP>,<SERVER_PORT>;DATABASE=<DB_NAME>;UID=<UID>;PWD=<PWD>"',
"<OUTPUT_PATH>",
'"<QUERY>"'
]
process_odbc: subprocess.Popen = subprocess.Popen(cmd_odbc)
process_odbc.wait()
</code></pre>
<p>This raises the following error:</p>
<pre><code>Error: ODBC emitted an error calling 'SQLDriverConnect':
State: IM002, Native error: 0, Message: [unixODBC][Driver Manager]Data source name not found and no default driver specified
</code></pre>
<p>Instead, I have to do the following workaround</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
from typing import List
cmd_odbc: List[str] = [
"<ABS_PATH>/odbc2parquet",
"query",
"--connection-string",
'"Driver={<DRIVER_NAME>};SERVER=<SERVER_IP>,<SERVER_PORT>;DATABASE=<DB_NAME>;UID=<UID>;PWD=<PWD>"',
"<OUTPUT_PATH>",
'"<QUERY>"'
]
process_odbc: subprocess.Popen = subprocess.Popen(" ".join(cmd_odbc), shell=True)
process_odbc.wait()
</code></pre>
<h2>Question</h2>
<p>Why doesn't the first code work while the second does? What is the main difference here?</p>
<p>Thanks to everyone reading.</p>
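<p>A likely explanation, sketched below: with <code>shell=False</code> (the default) each list element is passed verbatim as one argv entry, so the embedded quotes become literal characters of the connection string; with <code>shell=True</code> the shell strips them first. Dropping the inner quotes from the list form (placeholders kept illustrative, as in the question) should make both variants equivalent:</p>

```python
# With shell=False, no shell ever runs: every list element reaches the
# program verbatim as a single argv entry. The inner '"..."' quotes are NOT
# stripped -- they become part of the connection string itself, so the
# driver name no longer matches and unixODBC reports IM002.
cmd_odbc = [
    "<ABS_PATH>/odbc2parquet",
    "query",
    "--connection-string",
    # no surrounding '"..."' -- the list element already IS one argument
    "Driver={<DRIVER_NAME>};SERVER=<SERVER_IP>,<SERVER_PORT>;DATABASE=<DB_NAME>;UID=<UID>;PWD=<PWD>",
    "<OUTPUT_PATH>",
    "<QUERY>",  # likewise no extra quotes
]
# subprocess.Popen(cmd_odbc) would now receive the same arguments the shell
# produced in the working command (not executed here: paths are placeholders).
```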
|
<python><subprocess>
|
2023-01-16 16:01:52
| 0
| 1,183
|
GregoirePelegrin
|
75,136,588
| 12,957,790
|
Storing and displaying emojis with mysqldb and Python Flask
|
<p>I have a database with a table called <code>posts</code> and I have two fields called <code>title</code> and <code>description</code>. Both <code>title</code> and <code>description</code> can contain emoji characters.</p>
<p>The solutions to similar problems told me to <code>convert the character set to utf8mb4</code> and <code>collate to utf8mb4_bin</code> and also change the column type from <code>text</code> to <code>NVARCHAR(255)</code>. I've even tried different combinations of using <code>utf8</code>, <code>utf8mb4</code>, <code>utf8mb4_unicode_ci</code> and <code>utf8mb4_general_ci</code>. They all don't work in displaying emojis.</p>
<p><a href="https://i.sstatic.net/9rEVr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9rEVr.png" alt="error" /></a></p>
<p>Instead, this error will pop up once I try displaying the emojis.
Anyone have a workaround for this?</p>
<p>I tried using different combinations of unicodes and collations to the database, but all don't seem to work.</p>
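<p>One thing worth double-checking (a sketch under assumptions, not a confirmed fix): the client connection itself must also be opened with the <code>utf8mb4</code> charset, otherwise the session can stay on a 3-byte encoding even when the columns are <code>utf8mb4</code>. Credentials below are illustrative:</p>

```python
# Connection settings that make the MySQL client session speak full 4-byte
# UTF-8. Without charset='utf8mb4' the session character set may stay on
# latin1 or 3-byte utf8, and emoji fail even though the COLUMN is utf8mb4.
conn_kwargs = dict(
    host="localhost",
    user="app",          # illustrative credentials
    passwd="secret",
    db="mydb",
    charset="utf8mb4",   # the piece that is often missing
    use_unicode=True,
)
# conn = MySQLdb.connect(**conn_kwargs)  # not executed here
```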
|
<python><mysql><flask><encoding><emoji>
|
2023-01-16 15:47:50
| 1
| 575
|
Bob
|
75,136,584
| 10,967,961
|
Creating a dummy out of a list variable python
|
<p>I have a pandas dataframe looking like this:</p>
<pre><code>docdb tech_classes
1187498 ['Y02P 20/10']
1236571 ['Y02B 30/13' 'Y02B 30/12' 'Y02P 20/10']
1239098 ['Y10S 426/805' 'Y02A 40/81']
...
</code></pre>
<p>What I would like to do is to create N dummy variables, where N is the total number of names appearing in the variable tech_classes (please note that Y02P 20/10 is a single unique name, as if it were Y02P_20/10, and likewise Y02B 30/13 and the others). The variables should be dummies with value 1 whenever a docdb has that class inside tech_classes.</p>
<p>In other words the result of the above example should look like this:</p>
<pre><code>docdb Y02P_20/10 Y02B_30/13 Y02B_30/12 Y02A_40/81 Y10S_426/805 ...
1187498 1 0 0 0 0
1236571 1 1 1 0 0
1239098 0 0 0 1 1
...
</code></pre>
<p>Thanks a lot!</p>
<p>P.s. I know that there is a get_dummies in pandas, but it does not quite work here, as tech_classes is not in list form.
Specifically:</p>
<pre><code>df_patents.head().to_dict('list')
</code></pre>
<p>gives:</p>
<pre><code>{'docdb_family_id': [1187498, 1226468, 1236571, 1239098, 1239277],
'tech_fields_cited': ["['Y02P_20_10']",
"['Y10T_156_1023']",
"['Y02B_30_13','Y02B_30_12','Y02E_60_14','Y02B_10_70']",
"['Y10S_426_805','Y02A_40_81']",
"['Y02E_60_10','Y02T_90_12','Y02T_10_7072','Y02T_90_14','Y02T_10_70']"],
'patindocdb_years': ['[1998 1999 1996]',
'[1996 1992 1994 1993 1997]',
'[1991 1993 1990 1996]',
'[1995 1992 1993]',
'[1996 1993 1992]'],
'appln_auth': ['DE', 'DE', 'WO', 'WO', 'WO'],
'appln_nr': ['19581932', '4042441', '9002512', '9103158', '9105114'],
'earliest_publn_year': [1998, 1992, 1991, 1992, 1993],
'nb_citing_docdb_fam_y': [5, 17, 35, 32, 35],
'person_ctrycode': ["['RU']", "['DE']", "['US']", "['US']", "['IL']"],
'fronteer': [0, 0, 0, 0, 0],
'distance': [9999, 2, 9999, 9999, 9999],
'oecd_fields': ['[nan]', '[nan]', '[nan]', '[nan]', '[nan]'],
'nr_green': [1, 3, 5, 4, 10],
'pctage_green': [0.2, 0.17647059, 0.14285715, 0.125, 0.2857143],
'id_mas': [1, 2, 3, 4, 5],
'avg_dist_citing': ['[0.6666666666666666]',
'[2.5]',
'[inf]',
'[inf]',
'[inf]'],
'dist_citing_patents2': ['[1, 1, 0]',
'[3, 3, 1, 3, 2, 3]',
'[5, 99999, 5, 2, 5, 99999, 4, 6, 99999, 6, 7, 7, 2, 0, 1, 0, 0, 0, 1, 0, 3, 1, 1]',
'[99999, 99999, 99999, 99999, 99999, 99999, 2, 2, 2, 99999, 99999, 2, 2, 99999, 4, 99999, 3, 2, 0, 1, 1, 1, 3, 99999, 99999]',
'[99999, 1, 1, 1, 1, 3, 1, 1, 1, 99999, 6, 1, 2, 99999, 5, 4, 3, 0, 2, 1, 1, 1, 1, 2, 1, 1, 0, 0, 2, 0, 3, 2]'],
'id_us': [3, 4, 5, 6, 7],
'y_tr1': [0.60000002, 0.05882353, 0.25714287, 0.125, 0.51428574],
'y_tr2': [0.60000002, 0.11764706, 0.31428573, 0.3125, 0.65714288],
'y_tr3': [0.60000002, 0.35294119, 0.34285715, 0.375, 0.74285716],
'y_tr4': [0.60000002, 0.35294119, 0.37142858, 0.40625, 0.77142859],
'y_tr5': [0.60000002, 0.35294119, 0.45714286, 0.40625, 0.80000001]}
</code></pre>
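<p>For what it's worth, a sketch that works on the comma-separated string form shown by <code>to_dict</code> above (the space-separated form earlier in the question would need a different parser): <code>ast.literal_eval</code> restores the lists, then <code>explode</code> plus <code>crosstab</code> builds the dummies:</p>

```python
import ast
import pandas as pd

df = pd.DataFrame({
    "docdb": [1187498, 1236571, 1239098],
    "tech_classes": [
        "['Y02P_20_10']",
        "['Y02B_30_13','Y02B_30_12','Y02P_20_10']",
        "['Y10S_426_805','Y02A_40_81']",
    ],
})

# The stringified lists are valid Python literals, so literal_eval restores
# them; explode gives one row per (docdb, class) pair, and crosstab pivots
# those pairs into 0/1 dummy columns.
df["tech_classes"] = df["tech_classes"].apply(ast.literal_eval)
long_form = df.explode("tech_classes")
dummies = pd.crosstab(long_form["docdb"], long_form["tech_classes"])
```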
|
<python><pandas><dummy-variable>
|
2023-01-16 15:47:35
| 2
| 653
|
Lusian
|
75,136,484
| 15,724,084
|
pandas getting lastrow index of excel file and appending data to it, needs some fix
|
<p>I am trying to get the last row of the column <code>plate</code> and append data after it, but it gives a corrupt-file error even though Scrapy itself is working properly.</p>
<p>I guess the error is due to the lines below, where I first use a pandas <code>ExcelWriter</code> object and then a DataFrame to get the last row.</p>
<pre><code> with pd.ExcelWriter('output_res.xlsx', mode='r+',if_sheet_exists='overlay') as writer:
df_last=pd.DataFrame('output_res.xlsx')
lastRow=df_last['plate'].iget(-1)
df_output = pd.DataFrame(itemList)
df_output.to_excel(writer, sheet_name='result', index=False, header=True,startrow=lastRow)
</code></pre>
<p>The variable <code>lastRow</code> is, I guess, never properly assigned, which is why it does not pass a usable value to the <code>to_excel</code> method.</p>
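<p>Two things look off in the snippet above (sketched here on an in-memory stand-in, since I don't have the file): <code>pd.DataFrame('output_res.xlsx')</code> only wraps the file-name string in a DataFrame instead of reading the sheet, and <code>Series</code> has no <code>.iget()</code> method:</p>

```python
import pandas as pd

# Stand-in for pd.read_excel('output_res.xlsx') -- pd.DataFrame('...') would
# only wrap the file-name STRING, not read the file contents.
existing = pd.DataFrame({"plate": ["AA1 BCD", "BB2 XYZ"], "price": ["100", "-"]})

last_plate = existing["plate"].iloc[-1]  # .iloc[-1], not the removed .iget(-1)
next_row = len(existing) + 1             # first empty row: header + data rows
```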
<pre><code>import scrapy
from scrapy.crawler import CrawlerProcess
import pandas as pd
class plateScraper(scrapy.Spider):
name = 'scrapePlate'
allowed_domains = ['dvlaregistrations.dvla.gov.uk']
def start_requests(self):
df=pd.read_excel('data.xlsx')
columnA_values=df['PLATE']
for row in columnA_values:
global plate_num_xlsx
plate_num_xlsx=row
base_url =f"https://dvlaregistrations.dvla.gov.uk/search/results.html?search={plate_num_xlsx}&action=index&pricefrom=0&priceto=&prefixmatches=&currentmatches=&limitprefix=&limitcurrent=&limitauction=&searched=true&openoption=&language=en&prefix2=Search&super=&super_pricefrom=&super_priceto="
url=base_url
yield scrapy.Request(url)
def parse(self, response):
itemList=[]
for row in response.css('div.resultsstrip'):
plate = row.css('a::text').get()
price = row.css('p::text').get()
if plate_num_xlsx==plate.replace(" ","").strip():
item= {"plate": plate.strip(), "price": price.strip()}
itemList.append(item)
yield item
else:
item = {"plate": plate.strip(), "price": "-"}
itemList.append(item)
yield item
with pd.ExcelWriter('output_res.xlsx', mode='r+',if_sheet_exists='overlay') as writer:
df_last=pd.DataFrame('output_res.xlsx')
lastRow=df_last['plate'].iget(-1)
df_output = pd.DataFrame(itemList)
df_output.to_excel(writer, sheet_name='result', index=False, header=True,startrow=lastRow)
process = CrawlerProcess()
process.crawl(plateScraper)
process.start()
</code></pre>
<p>gives an error</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\utils\defer.py", line 240, in iter_errback
yield next(it)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\utils\python.py", line 338, in __next__
return next(self.data)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\utils\python.py", line 338, in __next__
return next(self.data)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\core\spidermw.py", line 79, in process_sync
for r in iterable:
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 29, in <genexpr>
return (r for r in result or () if self._filter(r, spider))
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\core\spidermw.py", line 79, in process_sync
for r in iterable:
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 336, in <genexpr>
return (self._set_referer(r, response) for r in result or ())
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\core\spidermw.py", line 79, in process_sync
for r in iterable:
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 28, in <genexpr>
return (r for r in result or () if self._filter(r, spider))
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\core\spidermw.py", line 79, in process_sync
for r in iterable:
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 32, in <genexpr>
return (r for r in result or () if self._filter(r, response, spider))
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\core\spidermw.py", line 79, in process_sync
for r in iterable:
File "C:\pythonPro\w_crawl\SimonDarak\scrpy_00.py", line 33, in parse
with pd.ExcelWriter('output_res.xlsx', mode='a',if_sheet_exists='overlay') as writer:
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\io\excel\_openpyxl.py", line 73, in __init__
self._book = load_workbook(self._handles.handle, **engine_kwargs)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\openpyxl\reader\excel.py", line 317, in load_workbook
reader.read()
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\openpyxl\reader\excel.py", line 282, in read
self.read_worksheets()
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\openpyxl\reader\excel.py", line 228, in read_worksheets
ws_parser.bind_all()
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\openpyxl\worksheet\_reader.py", line 448, in bind_all
self.bind_cells()
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\openpyxl\worksheet\_reader.py", line 351, in bind_cells
for idx, row in self.parser.parse():
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\openpyxl\worksheet\_reader.py", line 144, in parse
for _, element in it:
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\xml\etree\ElementTree.py", line 1255, in iterator
data = source.read(16 * 1024)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\zipfile.py", line 925, in read
data = self._read1(n)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\zipfile.py", line 1015, in _read1
self._update_crc(data)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\zipfile.py", line 943, in _update_crc
raise BadZipFile("Bad CRC-32 for file %r" % self.name)
zipfile.BadZipFile: Bad CRC-32 for file 'xl/worksheets/sheet1.xml'
Process finished with exit code -1
</code></pre>
|
<python><pandas><scrapy>
|
2023-01-16 15:38:58
| 1
| 741
|
xlmaster
|
75,136,404
| 17,561,414
|
explode function spark python
|
<p>I have the following data structure in JSON file and would like to flatten the data.</p>
<pre><code> root
|-- _embedded: struct (nullable = true)
| |-- items: array (nullable = true)
| | |-- element: struct (containsNull = true)
| | | |-- _links: struct (nullable = true)
| | | | |-- self: struct (nullable = true)
| | | | | |-- href: string (nullable = true)
| | | |-- associations: struct (nullable = true)
| | | | |-- ERP_PIM: struct (nullable = true)
| | | | | |-- groups: array (nullable = true)
| | | | | | |-- element: string (containsNull = true)
| | | | | |-- product_models: array (nullable = true)
| | | | | | |-- element: string (containsNull = true)
| | | | | |-- products: array (nullable = true)
| | | | | | |-- element: string (containsNull = true)
| | | | |-- PACK: struct (nullable = true)
| | | | | |-- groups: array (nullable = true)
| | | | | | |-- element: string (containsNull = true)
| | | | | |-- product_models: array (nullable = true)
| | | | | | |-- element: string (containsNull = true)
| | | | | |-- products: array (nullable = true)
| | | | | | |-- element: string (containsNull = true)
| | | | |-- SUBSTITUTION: struct (nullable = true)
| | | | | |-- groups: array (nullable = true)
| | | | | | |-- element: string (containsNull = true)
| | | | | |-- product_models: array (nullable = true)
| | | | | | |-- element: string (containsNull = true)
| | | | | |-- products: array (nullable = true)
| | | | | | |-- element: string (containsNull = true)
| | | | |-- UPSELL: struct (nullable = true)
| | | | | |-- groups: array (nullable = true)
| | | | | | |-- element: string (containsNull = true)
| | | | | |-- product_models: array (nullable = true)
| | | | | | |-- element: string (containsNull = true)
| | | | | |-- products: array (nullable = true)
| | | | | | |-- element: string (containsNull = true)
| | | | |-- X_SELL: struct (nullable = true)
| | | | | |-- groups: array (nullable = true)
| | | | | | |-- element: string (containsNull = true)
| | | | | |-- product_models: array (nullable = true)
| | | | | | |-- element: string (containsNull = true)
| | | | | |-- products: array (nullable = true)
| | | | | | |-- element: string (containsNull = true)
| | | |-- categories: array (nullable = true)
| | | | |-- element: string (containsNull = true)
| | | |-- created: string (nullable = true)
| | | |-- enabled: boolean (nullable = true)
| | | |-- family: string (nullable = true)
| | | |-- groups: array (nullable = true)
| | | | |-- element: string (containsNull = true)
| | | |-- identifier: string (nullable = true)
| | | |-- metadata: struct (nullable = true)
| | | | |-- workflow_status: string (nullable = true)
| | | |-- parent: string (nullable = true)
| | | |-- updated: string (nullable = true)
| | | |-- values: struct (nullable = true)
| | | | |-- Contrex_table: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- UFI_Table: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: array (nullable = true)
| | | | | | | |-- element: struct (containsNull = true)
| | | | | | | | |-- UFI: string (nullable = true)
| | | | | | | | |-- company: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- add_reg_info: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- adr_transport_class: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- adr_transport_label: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: array (nullable = true)
| | | | | | | |-- element: string (containsNull = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- adr_tunnel_restriction_code: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- allergen: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: array (nullable = true)
| | | | | | | |-- element: string (containsNull = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
| | | | |-- api: array (nullable = true)
| | | | | |-- element: struct (containsNull = true)
| | | | | | |-- data: string (nullable = true)
| | | | | | |-- locale: string (nullable = true)
| | | | | | |-- scope: string (nullable = true)
|-- _links: struct (nullable = true)
| |-- first: struct (nullable = true)
| | |-- href: string (nullable = true)
| |-- next: struct (nullable = true)
| | |-- href: string (nullable = true)
| |-- self: struct (nullable = true)
| | |-- href: string (nullable = true)
</code></pre>
<p>A short version of the data:</p>
<pre><code> {"items": [
{
"_links": {
"self": {
"href": "products\/PCR-0006894-SAMKG0-PC"
}
},
"identifier": "pcr-1",
"enabled": true,
"family": "products",
"categories": [
"239",
"CL1_MEA",
"D00",
"D001",
"EMEA",
"MAR",
"MARKET_SEGMENT_VIEW",
"PC",
"Validated_SHEQ"
],
"groups": [],
"parent": "PCR-0006894-SAMKG0"
}]}
</code></pre>
<p><a href="https://i.sstatic.net/JOo5M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JOo5M.png" alt="enter image description here" /></a></p>
<p>The goal is not to mention <code>keys</code> manually but to explode only the ones whose <code>values</code> are of <code>str</code> type.</p>
<p>I have this code where I am trying to flatten the entire JSON file:</p>
<pre><code>def flatten_df(nested_df):
flat_cols = [c[0] for c in nested_df.dtypes if c[1][:6] != 'struct']
nested_cols = [c[0] for c in nested_df.dtypes if c[1][:6] == 'struct']
flat_df = nested_df.select(flat_cols +
[F.col(nc+'.'+c).alias(nc+'_'+c)
for nc in nested_cols
for c in nested_df.select(nc+'.*').columns])
return flat_df
</code></pre>
<p>Desired output:
<a href="https://i.sstatic.net/J5gxX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J5gxX.png" alt="enter image description here" /></a></p>
|
<python><apache-spark><apache-spark-sql><azure-databricks><flatten>
|
2023-01-16 15:32:24
| 1
| 735
|
Greencolor
|
75,136,383
| 5,224,236
|
pip install fails when Installing build dependencies
|
<p>Installing packages in my lovely locked-down IT environment</p>
<pre><code>python -m pip --default-timeout=1000 install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host files.pythonhosted.org pyautogui
</code></pre>
<p>Using this I am able to install packages such as <code>matplotlib</code> without any issue; however, <code>pyautogui</code> fails with the following at the <code>Installing build dependencies</code> step:</p>
<pre><code> Using cached PyAutoGUI-0.9.53.tar.gz (59 kB)
Collecting pymsgbox
Using cached PyMsgBox-1.0.9.tar.gz (18 kB)
Installing build dependencies ... error
ERROR: Command errored out with exit status 2
.....
socket.timeout: The read operation timed out
.....
HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Read timed out.
</code></pre>
<p>Any help is most welcome.</p>
|
<python><pip><pyautogui>
|
2023-01-16 15:30:28
| 1
| 6,028
|
gaut
|
75,136,353
| 5,448,626
|
Problem uploading excel file to SharePoint with LDAP
|
<p>I am trying to upload an Excel file to SharePoint, using the code I found <a href="https://learn.microsoft.com/en-us/answers/questions/142065/ways-to-access-upload-documents-to-sharepoint-site" rel="nofollow noreferrer">here</a>, but so far, I cannot manage to make it work with my LDAP account. Getting this error:</p>
<blockquote>
<p>raise ShareplumRequestError("Shareplum HTTP Post Failed", err)<br />
shareplum.errors.ShareplumRequestError: Shareplum HTTP Post Failed :<br />
HTTPSConnectionPool(host='login.microsoftonline.com', port=443): Max retries exceeded with url:
/extSTS.srf (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at
0x000001B24A126B08>: Failed to establish a new connection: [WinError 10061] No connection could
be made because the target machine actively refused it'))</p>
</blockquote>
<p>with this code:</p>
<pre><code>from shareplum import Office365
from shareplum import Site
from shareplum.site import Version
def main():
sp = r"//foo.sharepoint.com/sites/foobarwiki/Shared Documents/SourceExcelFile.xlsx"
cp = r"C:\Users\Git\SourceExcelFile.xlsx"
authcookie = Office365('https://foo.sharepoint.com', username='un', password='pwd').GetCookies()
site = Site(r'https://foo.sharepoint.com/sites/foobarwiki/', version=Version.v365, authcookie=authcookie);
folder = site.Folder('Shared Documents/foobarbar/')
with open(cp, mode='rb') as file:
fileContent = file.read()
folder.upload_file(fileContent, "myfile.txt")
print("Done!")
if __name__ == "__main__":
main()
</code></pre>
<p>As this can be considered an <a href="https://mywiki.wooledge.org/XyProblem" rel="nofollow noreferrer">xy question</a>: in the meantime, I tried simply using <code>shutil.copy(cp, sp)</code> and <code>.copy2</code>, as I am quite happy to access the SharePoint in other ways, but still no success there, as I guess <code>shutil</code> does not like SharePoint a lot.</p>
<p>Any ideas?</p>
|
<python><authentication><sharepoint><upload><ldap>
|
2023-01-16 15:27:35
| 0
| 43,725
|
Vityata
|
75,136,212
| 5,522,036
|
Why pandas prefers float64 over Int64?
|
<p>I have a pandas dataframe with a column, with 3 unique values: <code>[0, None, 1]</code>
When I run this line:</p>
<pre><code>test_data = test_data.apply(pd.to_numeric, errors='ignore')
</code></pre>
<p>the above mentioned column data type is converted to <code>float64</code></p>
<p>Why not <code>int64</code>? Technically integer type can handle <code>None</code> values, so I'm confused why it didn't pick <code>int64</code>?</p>
<p>Thanks for the help.</p>
<p>Edit:
Having read about the difference between <code>int64</code> and <code>Int64</code>, why doesn't pandas choose <code>Int64</code> then?</p>
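<p>For reference, a sketch of the behaviour in question: the NumPy-backed <code>int64</code> has no representation for a missing value, so pandas falls back to <code>float64</code> (where <code>NaN</code> is a float); the nullable <code>Int64</code> extension dtype has to be requested explicitly or inferred with <code>convert_dtypes</code>:</p>

```python
import pandas as pd

s = pd.Series([0, None, 1])
# NumPy's int64 cannot store a missing value, so pandas upcasts to float64
# rather than silently switching to the Int64 extension dtype.
print(s.dtype)  # float64

# The nullable dtype must be opted into explicitly (note the capital "I"):
nullable = s.astype("Int64")
print(nullable.dtype)  # Int64

# convert_dtypes() infers the nullable extension types automatically:
inferred = pd.Series([0, None, 1]).convert_dtypes()
print(inferred.dtype)  # Int64
```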
|
<python><pandas><dataframe>
|
2023-01-16 15:14:27
| 1
| 2,470
|
Ish Thomas
|
75,136,052
| 14,578,331
|
Reducing file size of animated Plotly Express Choropleth plot
|
<p>I am plotting numerical data using <code>plotly</code> Express <code>Choropleth</code> and that data has changing values over time, which makes an animation sensible to use. The resulting file size is, however, quite large and I would like to reduce it for web applications.</p>
<p>The problem seems to be that Plotly saves the underlying geojson for each animation frame separately to the HTML-file. Is there a way to let Plotly reuse the single geojson for all animation frames?</p>
<p><strong>What I tried, but had insufficient effect:</strong></p>
<ul>
<li>Reduce the accuracy with which the coordinates of the Polygons are saved</li>
<li>Reduce the number of animation frames (but that is no option long-term)</li>
</ul>
<p><strong>The code I am using is:</strong></p>
<pre><code> fig = px.choropleth(df,
geojson=regionalized_data,
locations=df['statistische_kennziffer'],
color=df[disp_value],
featureidkey='properties.ARS',
fitbounds='locations',
hover_data=hover_data_present,
labels=readable_labels_local,
animation_frame=df['Inbetriebnahmemonat'] if 'Inbetriebnahmemonat' in df.columns else None,
animation_group=df['statistische_kennziffer'] if 'Inbetriebnahmemonat' in df.columns else None,
basemap_visible=False,
)
</code></pre>
<p>But I do not expect this to really change anything with regards to my question.</p>
|
<python><plotly><choropleth>
|
2023-01-16 15:01:11
| 0
| 1,016
|
C Hecht
|
75,135,947
| 6,803,924
|
How to implement an n-gram inverted index from a paper (Python)?
|
<p>I am trying to implement an <strong>n-Gram/2L-approximation index</strong> from a paper by Min-Soo Kim, Kyu-Young Whang and Jae-Gil Lee, found here: <a href="http://infolab.dgist.ac.kr/%7Emskim/papers/CSSE07.pdf" rel="nofollow noreferrer">http://infolab.dgist.ac.kr/~mskim/papers/CSSE07.pdf</a></p>
<p>Building the index is quite straightforward. Where I am getting lost is the querying algorithm,
specifically performing the <em>merge outer join</em>.</p>
<blockquote>
<p>The algorithm extracts n-grams from the query string
Q by the 1-sliding technique and searches the posting lists
of those n-grams in the front-end index. Then, the algorithm performs merge outer join among those posting lists
using the m-subsequence identifier as the join attribute and
finds the set {Si} of candidate m-subsequences that satisfy
the necessary condition in Theorem 1.</p>
</blockquote>
<p>So far, I got this piece of code:</p>
<pre class="lang-py prettyprint-override"><code>from math import ceil, floor
class NGramIndex:
def __init__(self, m: int, n: int):
self.m: int = m # m-subsequence length
self.n: int = n # n-gram length
self.backend_index = dict()
self.frontend_index = dict()
self.msubseq_set = [] # Set of msubsequences
def append(self, doc: str, doc_id: int):
N = len(doc)
max_range = ceil(N / self.m)
for i in range(0, max_range):
offset = i * self.m
msubseq = doc[i * self.m: i * self.m + self.m]
# if extracted subseq is smaller, pad it with extra-char
if len(msubseq) < self.m:
msubseq += '$' * (self.m - len(msubseq))
if msubseq not in self.backend_index:
self.backend_index[msubseq] = [(doc_id, [offset])]
elif self.backend_index[msubseq][-1][0] == doc_id:
self.backend_index[msubseq][-1][1].append(offset)
else:
self.backend_index[msubseq].append((doc_id, [offset]))
if msubseq in self.msubseq_set:
# subseq_id is the unique identifier in msubseq_set
subseq_id = self.msubseq_set.index(msubseq)
else:
self.msubseq_set.append(msubseq)
subseq_id = len(self.msubseq_set) - 1
max_q_range = self.m - self.n + 1
for ngram_offset in range(0, max_q_range):
ngram = msubseq[ngram_offset:ngram_offset + self.n]
if ngram not in self.frontend_index:
self.frontend_index[ngram] = [(subseq_id, [ngram_offset])]
elif self.frontend_index[ngram][-1][0] == subseq_id:
self.frontend_index[ngram][-1][1].append(ngram_offset)
else:
self.frontend_index[ngram].append((subseq_id, [ngram_offset]))
def query(self, query_word: str, k: int):
"""
Query the index for results
k = error tolerance (threshold)
"""
t = floor((len(query_word) + 1) / self.m) - 1
eps = floor(k / t)
# r is used for filtration later
r = (self.m - self.n + 1) - (eps * self.n)
postings = []
for i in range(0, len(query_word) - self.n + 1):
ngram = query_word[i: i + self.n]
if ngram in self.frontend_index:
postings.append(self.frontend_index[ngram])
# TODO: Perform merge outer join?
</code></pre>
<p>and I am unsure how to proceed with the join.</p>
<p>I am glad for any suggestions, references, anything.</p>
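<p>In case it helps to make the question concrete, here is how I currently understand the join (a simplified sketch, not the paper's exact algorithm: it only tallies matching n-gram occurrences per m-subsequence identifier and applies the threshold <code>r</code>, ignoring offset consistency):</p>

```python
from collections import defaultdict

def merge_outer_join(postings, r):
    """postings: one posting list per query n-gram, each a list of
    (subseq_id, offsets) pairs. A merge OUTER join keeps every id that
    appears in ANY list; here we tally how many n-gram occurrences matched
    each m-subsequence and keep those reaching the threshold r."""
    match_counts = defaultdict(int)
    for plist in postings:
        for subseq_id, offsets in plist:
            match_counts[subseq_id] += len(offsets)
    return {sid for sid, count in match_counts.items() if count >= r}

# Toy posting lists for three query n-grams:
candidates = merge_outer_join(
    [[(0, [0]), (2, [1, 3])], [(0, [1])], [(2, [2])]],
    r=2,
)
```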
|
<python><indexing><string-matching><n-gram><inverted-index>
|
2023-01-16 14:52:41
| 0
| 329
|
Lukáš Moravec
|
75,135,847
| 4,700,367
|
mypy: error: Module "multiprocessing.managers" has no attribute "EventProxy" [attr-defined]
|
<p>I have a method which takes a <code>multiprocessing.Manager().Event()</code>, for the purposes of gracefully stopping my app.</p>
<p>When I run mypy against it, it complains that it's not a valid type:</p>
<pre class="lang-none prettyprint-override"><code>error: Module "multiprocessing.managers" has no attribute "EventProxy" [attr-defined]
</code></pre>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing.managers import EventProxy
class App:
def run(self, stop_event: EventProxy):
...
with Manager() as manager:
stop_event = manager.Event()
a = App()
a.run(stop_event)
</code></pre>
<p>I used the type <code>EventProxy</code> because when I checked the <code>type()</code> of the <code>multiprocessing.Manager().Event()</code> it said it was a <code>multiprocessing.managers.EventProxy</code>.</p>
<p>How can I type hint this method so that mypy won't complain?</p>
<p>Versions:</p>
<pre class="lang-none prettyprint-override"><code>Python 3.9.14
mypy 0.991
mypy-extensions 0.4.3
</code></pre>
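<p>One workaround I have seen (a sketch, assuming only the <code>Event</code> interface is needed): <code>EventProxy</code> is generated at runtime, so mypy cannot see it; since the manager's proxy forwards the same <code>is_set()</code>/<code>set()</code>/<code>wait()</code> methods, <code>threading.Event</code> works as a structural stand-in for the annotation:</p>

```python
import threading

class App:
    def run(self, stop_event: threading.Event) -> None:
        # Manager().Event() returns a proxy exposing the same interface
        # (is_set/set/wait/clear), so annotating with threading.Event keeps
        # mypy happy while still accepting the managed proxy at runtime.
        while not stop_event.is_set():
            break  # placeholder for real work

# Works with a plain Event; a Manager().Event() proxy quacks the same way.
ev = threading.Event()
App().run(ev)
ev.set()
App().run(ev)
```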
|
<python><python-3.x><mypy><python-typing>
|
2023-01-16 14:43:46
| 1
| 438
|
Sam Wood
|
75,135,830
| 4,152,567
|
Tensorboard error: An op outside of the function building code is being passed a "Graph" tensor
|
<p>The following code replicates a Tensorboard error I keep getting. <strong>Complete error:</strong></p>
<pre><code> TypeError: An op outside of the function building code is being passed a "Graph" tensor.
</code></pre>
<p>It is possible to have Graph tensors
leak out of the function building context by including a tf.init_scope in your function building code.
For example, the following function will fail:</p>
<pre><code> def has_init_scope():
my_constant = tf.constant(1.)
with tf.init_scope():
added = my_constant * 2
</code></pre>
<p>The graph tensor has name: output_4/kernel:0</p>
<p><strong>With</strong> the <strong>tensorboard callback</strong> I get the error; without it I don't. Below, the callback is commented out. Any help is greatly appreciated.</p>
<p><strong>Code:</strong></p>
<pre><code>input_gt_boxes = keras.layers.Input(
shape=[None, 4], name="input_gt_boxes", dtype=tf.float32)
output_ = keras.layers.Dense(1, name='output')(input_gt_boxes)
model_test_gt_layer_ = tf.keras.models.Model([input_gt_boxes],
[output_],
name="m")
model_test_gt_layer_.compile(tf.keras.optimizers.SGD(), loss='mse', \
experimental_run_tf_function=False,
#run_eagerly=True,
)
model_test_gt_layer_.summary()
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)
a = np.concatenate([np.expand_dims(np.arange(4).reshape(1,4),axis=0) for _ in range(100)], axis=0)
o= np.concatenate([np.zeros(100).reshape(100,1) for _ in range(1)], axis=0)
model_test_gt_layer_.fit(a, o, \
epochs=5, \
callbacks=[
#tensorboard_callback,
], \
verbose=1,\
use_multiprocessing= False,)
</code></pre>
|
<python><tensorflow><keras><tensorboard>
|
2023-01-16 14:42:14
| 1
| 512
|
Mihai.Mehe
|
75,135,759
| 1,574,551
|
Detect language in pandas column in python
|
<p>I would like to detect the language in a pandas column in Python. After detecting it, I want to write the language code as a column in the pandas DataFrame. Below are my code and what I tried, but I got an error. Please help.</p>
<p>Thank you.</p>
<pre><code>data = {'text': ["It is a good option", "Better to have this way", "es un portal informático para geeks", "は、ギーク向けのコンピューターサイエンスポータルです"]}
# Create DataFrame
df = pd.DataFrame(data)
#get the language
for i in df['text']:
# Language Detection
df['lang'] = TextBlob(i)
</code></pre>
<p><a href="https://i.sstatic.net/uE6vN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uE6vN.png" alt="enter image description here" /></a></p>
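<p>Part of the problem is the loop itself: <code>df['lang'] = TextBlob(i)</code> reassigns the whole column on every iteration. A sketch of the column-wise pattern, with a toy detector standing in for a real one (for instance <code>langdetect.detect</code>; TextBlob's own <code>detect_language</code> was deprecated in newer versions, so relying on it is an assumption):</p>

```python
import pandas as pd

def detect_lang(text: str) -> str:
    # Toy stand-in for a real detector such as langdetect.detect:
    # flags Japanese kana, otherwise assumes English.
    return "ja" if any("\u3040" <= ch <= "\u30ff" for ch in text) else "en"

df = pd.DataFrame({"text": ["It is a good option", "は、ギーク向けのポータルです"]})

# apply() runs the detector once per row and assigns the whole column in
# one go -- no reassignment of df['lang'] inside a Python loop.
df["lang"] = df["text"].apply(detect_lang)
```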
|
<python><pandas><textblob>
|
2023-01-16 14:37:26
| 2
| 1,332
|
melik
|
75,135,721
| 9,625,038
|
Standard application for a filetype not available inside service
|
<p>I've created a service which accepts PDF files from other computers using a socket, and then prints it to a connected printer. The code for this is written in Python.</p>
<p>I've tested this application by running the Python script manually, and everything works as expected. The script creates a socket, accepts a PDF file, pushes it to its queue and then prints the file.</p>
<p>I've created a Windows Service for this script using NSSM - the Non-Sucking Service Manager.</p>
<p>The service runs my Python script fine as well, only, when it is trying to print to a printer, I get an error that there is no application associated for the PDF file. Which is weird, because I do have a standard program assigned to PDF (Adobe Acrobat Reader), and it does work when running the script manually.</p>
<p>The Python script executes PowerShell commands to set the default printer and then print the file using Adobe (which prints to the default printer).</p>
<p>Here is the snippet from my script that is responsible for this printing:</p>
<pre class="lang-py prettyprint-override"><code>cmd_set_default_printer = "powershell.exe (New-Object -ComObject WScript.Network).SetDefaultPrinter('{0}')".format(printer_data['name'])
cmd_print_file = "powershell.exe Start-Process -FilePath '{0}' -Verb Print".format(item['file'])
cmd_close_acrobat = "powershell.exe Stop-Process -Name Acrobat -Force"
cmd_delete_file = "powershell.exe Remove-Item -Path '{0}'".format(item['file'])
self.logger.info('[+] Printing file {0}'.format(item['file']))
p = subprocess.Popen(cmd_set_default_printer, stdout=subprocess.PIPE)
p_out = p.communicate()
if p.returncode != 0: # non-zero return code means a failure
self.logger.error('[!] An error occured: {0}'.format(p_out))
self.db.set_item_status(item['id'], self.db.STATUS_FAILED)
continue
time.sleep(2)
p = subprocess.Popen(cmd_print_file, stdout=subprocess.PIPE)
p_out = p.communicate()
if p.returncode != 0:
self.logger.error('[!] An error occured: {0}'.format(p_out))
self.db.set_item_status(item['id'], self.db.STATUS_FAILED)
continue
time.sleep(5)
self.logger.info('[+] OK. Deleting file {0}'.format(item['file']))
p = subprocess.Popen(cmd_close_acrobat, stdout=subprocess.PIPE)
p_out = p.communicate()
p = subprocess.Popen(cmd_delete_file, stdout=subprocess.PIPE)
p_out = p.communicate()
</code></pre>
<p>When running the service and pushing a file to it, I get an error.
These are my logs:</p>
<pre><code>2023-01-16 15:13:20,589 - server_logger - INFO - [*] Listening as 0.0.0.0:50001
2023-01-16 15:13:20,620 - server_logger - INFO - [*] Connected to database
2023-01-16 15:20:40,916 - server_logger - INFO - [+] ('192.168.1.252', 44920) is connected.
2023-01-16 15:20:40,916 - server_logger - INFO - [+] Receiving new file... saving as wbcfaolzropovcui.pdf
2023-01-16 15:20:40,916 - server_logger - INFO - [+] Queue file for printing...
2023-01-16 15:20:40,942 - server_logger - INFO - [+] Queued.
2023-01-16 15:20:40,942 - server_logger - INFO - [+] Done receiving, closing socket.
2023-01-16 15:20:40,942 - server_logger - INFO - [+] Socket closed.
2023-01-16 15:20:41,309 - server_logger - INFO - [+] Printing file C:\.cloudspot\ecosyerp-printing\python_backend\print_queue\wbcfaolzropovcui.pdf
2023-01-16 15:20:44,012 - server_logger - ERROR - [!] An error occured: (b"Start-Process : This command cannot be run due to the error: Er is geen toepassing gekoppeld aan het opgegeven bestand \nvoor deze bewerking.\nAt line:1 char:1\n+ Start-Process -FilePath 'C:\\.cloudspot\\ecosyerp-printing\\python_backe ...\n+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n + CategoryInfo : InvalidOperation: (:) [Start-Process], InvalidOperationException\n + FullyQualifiedErrorId : InvalidOperationException,Microsoft.PowerShell.Commands.StartProcessCommand\n \n", None)
</code></pre>
<p>The error is in Dutch but translates to:</p>
<pre><code>No application is associated with the specified file for this operation
</code></pre>
<p>Which leaves me scratching my head because when I do not run the script as a service, but directly from the CMD, then it works without a problem.</p>
<p>Is there any reason why inside a service it would not work and outside it does?</p>
|
<python><powershell><windows-services>
|
2023-01-16 14:33:46
| 1
| 591
|
Alexander Schillemans
|
75,135,634
| 943,524
|
Is that a valid BFS?
|
<p>I recently wrote a function to traverse a graph, along those lines:</p>
<pre><code>def traverse(batch: list[str], seen: set[str]):
    if not batch:
        return
    new_batch = []
    for node in batch:
        print(node)
        new_batch.extend(n for n in neighbours(node) if n not in seen)
    traverse(new_batch, seen.union(new_batch))

traverse([start_node], {start_node})
</code></pre>
<p>I didn't give it much thought then, but now that I look at it, it seems this is actually a BFS on the graph.</p>
<p>My question is: is this a correct implementation of BFS, and why do BFS algorithms usually employ a queue instead of a recursion like this?</p>
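<p>For comparison, a minimal sketch of the conventional queue-based BFS (the <code>neighbours</code> callable is assumed, as in the question, to return an iterable of adjacent nodes):</p>

```python
from collections import deque

def bfs(start, neighbours):
    # Queue-based BFS: iterative, so there is no recursion-depth limit,
    # and marking a node as seen at *enqueue* time prevents it from
    # entering the frontier twice within the same level.
    seen = {start}
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for n in neighbours(node):
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return order

graph = {1: [2, 3], 2: [4], 3: [4], 4: []}
print(bfs(1, graph.__getitem__))  # [1, 2, 3, 4] — node 4 visited once
```

The batch-recursion in the question visits nodes in the same level order, but nodes sharing a parent level can be appended to <code>new_batch</code> more than once, and deep graphs can hit Python's recursion limit — the queue version avoids both.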
|
<python><graph-theory><breadth-first-search><graph-traversal>
|
2023-01-16 14:26:23
| 1
| 1,439
|
Weier
|
75,135,628
| 14,667,788
|
How to insert a string into a query in Python pymysql
|
<p>I have a following query:</p>
<pre class="lang-py prettyprint-override"><code>cursor = connection.cursor()
query = """
    SELECT *
    FROM `my_database`.table_a
"""
result = cursor.execute(query)
</code></pre>
<p>which works as expected. But I need to be able to change <code>my_database</code> via <code>cursor.execute</code>. I tried:</p>
<pre class="lang-py prettyprint-override"><code>cursor = connection.cursor()
query = """
    SELECT *
    FROM %s.table_a
"""
result = cursor.execute(query, ("my_database",))
</code></pre>
<p>which gives an error <code>pymysql.err.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ''my_database'.table_a at line 2") </code></p>
<p>How can I insert the database name via <code>cursor.execute</code>, please?</p>
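<p>Background for the error: <code>%s</code> placeholders only bind <em>values</em>, which pymysql quotes as string literals — hence <code>'my_database'.table_a</code> in the message. Identifiers such as database or table names have to be interpolated into the SQL text yourself, validated first to avoid injection. A hedged sketch with an assumed helper name:</p>

```python
def build_query(db_name):
    # Identifiers cannot be bound with %s -- the driver would emit
    # 'my_database' (a quoted string), producing the syntax error seen.
    # Whitelist-validate the name, then interpolate it with backticks.
    if not db_name.isidentifier():
        raise ValueError(f"invalid database name: {db_name!r}")
    return f"SELECT * FROM `{db_name}`.table_a"

query = build_query("my_database")
# then: result = cursor.execute(query)
print(query)
```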
|
<python><mysql><pymysql>
|
2023-01-16 14:25:55
| 1
| 1,265
|
vojtam
|
75,135,608
| 15,376,262
|
How to select multiple children from HTML tag with Python/BeautifulSoup if exists?
|
<p>I'm currently scraping elements from a webpage. Let's say I'm iterating over an HTML response and a part of that response looks like this:</p>
<pre><code><div class="col-sm-12 col-md-5">
    <div class="material">
        <div class="material-parts">
            <span class="material-part" title="SLT-4 2435">
                <img src="/images/train-material/mat_slt4.png"/> </span>
            <span class="material-part" title="SLT-6 2631">
                <img src="/images/train-material/mat_slt6.png"/> </span>
        </div>
    </div>
</div>
</code></pre>
<p>I know I can access the <code>title</code> of the first <code>span</code> element like so:</p>
<pre><code>row[-1].find('span')['title']
"SLT-4 2435
</code></pre>
<p>But I would also like to get the second <code>title</code> under a <code>span</code> (if it exists) and combine them into one string, like so: <code>"SLT-4 2435, SLT-6 2631"</code></p>
<p>Any ideas?</p>
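<p>One possible sketch: collect <em>every</em> matching span with <code>find_all</code> (rather than <code>find</code>, which stops at the first) and join the titles:</p>

```python
from bs4 import BeautifulSoup

html = """<div class="col-sm-12 col-md-5">
<div class="material">
<div class="material-parts">
<span class="material-part" title="SLT-4 2435">
<img src="/images/train-material/mat_slt4.png"/> </span>
<span class="material-part" title="SLT-6 2631">
<img src="/images/train-material/mat_slt6.png"/> </span>
</div>
</div>
</div>"""

soup = BeautifulSoup(html, "html.parser")
# find_all returns every matching span -- one, two, or more --
# so this handles the "if it exists" case automatically.
titles = [span["title"] for span in soup.find_all("span", class_="material-part")]
print(", ".join(titles))  # SLT-4 2435, SLT-6 2631
```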
|
<python><html><css><web-scraping><beautifulsoup>
|
2023-01-16 14:24:48
| 2
| 479
|
sampeterson
|
75,135,540
| 10,197,418
|
Python polars: modify every nth row
|
<p>Given a polars DataFrame in Python, how can I modify every nth element in a series?</p>
<pre class="lang-py prettyprint-override"><code># have
df = pl.DataFrame(pl.Series("a", [1, -1, 1, -1, 1]))
# want
# [1, 1, 1, 1, 1]
# selecting works fine:
df["a", 1::2]
shape: (2,)
Series: 'a' [i64]
[
-1
-1
]
# but modification fails:
df["a", 1::2] *= -1
Traceback (most recent call last):
  File "/tmp/ipykernel_103522/957012809.py", line 1, in <cell line: 1>
    df["a", 1::2] *= -1
  File "/home/.../.pyenv/versions/3.10.9/lib/python3.10/site-packages/polars/internals/dataframe/frame.py", line 1439, in __setitem__
    raise ValueError(f"column selection not understood: {col_selection}")
ValueError: column selection not understood: slice(1, None, 2)
</code></pre>
<pre><code>pl.__version__
'0.15.14'
</code></pre>
<p><a href="https://stackoverflow.com/q/25055712/10197418">pandas version of the question</a></p>
|
<python><indexing><slice><python-polars>
|
2023-01-16 14:20:03
| 1
| 26,076
|
FObersteiner
|
75,135,532
| 11,945,144
|
Best parameters for UMAP + HistGradientBoostingClassifier
|
<p>I'm trying to find the best parameters for the UMAP (dimensionality reduction) model together with HistGradientBoostingClassifier.</p>
<p>The loop I have created is:</p>
<pre><code>vectorizer = TfidfVectorizer(use_idf=True, max_features=6000)
corpus = list(df['comment'])
X = vectorizer.fit_transform(corpus)
y = df['CONTACT']

n_componentes = [2, 10, 20, 40, 60, 80, 100, 150, 200]
for component in n_componentes:
    reducer = umap.UMAP(metric='cosine', n_components=component)
    embedding = reducer.fit_transform(X)
    print(f"Component: {embedding.shape}")
    X_train, X_test, y_train, y_test = train_test_split(embedding, y, test_size=0.2, random_state=123, stratify=y)
    clf = HistGradientBoostingClassifier()
    n_iter_search = 20
    random_search = RandomizedSearchCV(clf,
                                       param_distributions=parameters,
                                       n_iter=n_iter_search,
                                       scoring='accuracy',
                                       random_state=123)
    random_search.fit(X_train, y_train)
    print(f"Best Parameters {random_search.best_params_}")
    print(f"DBCV score :{random_search.best_estimator_.relative_validity_}")
</code></pre>
<p>The run time is about 4 hours for a single iteration of the loop.
Can you suggest a more optimized way to perform this task?
Thank you!</p>
|
<python><for-loop><hyperparameters><boosting><runumap>
|
2023-01-16 14:19:19
| 1
| 343
|
Maite89
|
75,135,519
| 9,794,068
|
Iteration count in recursive function
|
<p>I am writing a recursive function to make permutations of digits from <code>0</code> to <code>n</code>. The program will return the <code>th</code> permutation that is obtained. It all works well but I had to use the cheap trick of defining <code>count</code> as a list, that is <code>count=[0]</code>. In this way I am using the properties of lists in order to properly update the variable <code>count[0]</code> at each iteration.</p>
<p>Ideally, what I would like to do is to define <code>count</code> as an integer number instead. However, this does not work because <code>count</code> is then updated only locally, within the scope of the function at the time it is called.</p>
<p>What is the proper way to count the iterations in a recursive function like this?</p>
<p>Below I show the code. It works, but I hate the way I am using <code>count</code> here.</p>
<pre><code>
import numpy as np

N = 10
available = np.ones(N)

def permutations(array, count=[0], n=N, start=0, end=N, th=100):
    if count[0] < th:
        for i in range(n):
            if available[i]:
                array[start] = i
                if end - start > 1:
                    available[i] = 0
                    permutations(array, count, n, start + 1, end)
                    available[i] = 1
                else:
                    count[0] += 1
                    break
    if count[0] == th:
        a = ''.join(str(i) for i in array)
        return a

def main():
    array = [0 for _ in range(N)]
    count = [0]
    print(permutations(array, count, N, start=0, end=N))

if __name__ == "__main__":
    main()
</code></pre>
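<p>A sketch of the closure-based alternative: declaring the counter <code>nonlocal</code> in a nested recursive helper lets a plain integer be updated across all recursion levels, so the mutable-default trick is no longer needed. Function and variable names below are illustrative, not from the original:</p>

```python
def nth_permutation(n, th):
    # Return the th permutation (1-based, lexicographic order)
    # of the digits 0..n-1, counting leaves as we recurse.
    count = 0
    result = None

    def rec(prefix, remaining):
        nonlocal count, result          # plain int/str, no list wrapper
        if result is not None:          # already found: prune the rest
            return
        if not remaining:
            count += 1
            if count == th:
                result = "".join(map(str, prefix))
            return
        for i in remaining:
            rec(prefix + [i], [x for x in remaining if x != i])

    rec([], list(range(n)))
    return result

print(nth_permutation(3, 6))  # 210
```

An alternative with the same effect is to have the recursive function <code>return</code> the updated count to its caller, which avoids shared state entirely.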
|
<python><recursion><global-variables>
|
2023-01-16 14:18:35
| 2
| 530
|
3sm1r
|
75,135,495
| 6,290,062
|
pythonic equivalent of R double pivot and filter
|
<p>A task I come up against reasonable often is something like the following transformation:</p>
<p>from:</p>
<pre><code> home_team_id away_team_id home_team away_team
1 1 2 Arsenal Tottenham
2 2 3 Tottenham Chelsea
</code></pre>
<p>to</p>
<pre><code> team value
1 Arsenal 1
2 Tottenham 2
3 Tottenham 2
4 Chelsea 3
</code></pre>
<p>in my head I refer to this as a 'double pivot' though maybe there's a more fitting name.</p>
<p>In R I can do this by (written off the top of my head; I'm sure the gsub can be optimised/cleaned somewhat):</p>
<pre><code>library(tidyverse)
example_df_R = data.frame(
home_team_id = c(1, 2),
away_team_id = c(2,3),
home_team = c("Arsenal", "Tottenham"),
away_team = c("Tottenham", "Chelsea")
)
example_df_R %>%
pivot_longer(cols = ends_with("id")) %>%
pivot_longer(cols = ends_with("team"), values_to = "team", names_to = "team_location") %>%
filter(gsub("_id$", "", name) == team_location) %>%
select(team, value)
</code></pre>
<p>In python it <em>feels</em> like this should be the equivalent:</p>
<pre><code>import pandas as pd

example_df_py = pd.DataFrame(
    {
        "home_team_id": [1, 2],
        "away_team_id": [2, 3],
        "home_team": ["Arsenal", "Tottenham"],
        "away_team": ["Tottenham", "Chelsea"],
    }
)

result = (
    example_df_py.melt(id_vars=["home_team", "away_team"])
    .melt(id_vars=["variable", "value"], var_name="team_location", value_name="team")
    .loc[lambda dfr: dfr["variable"].str.startswith(dfr["team_location"].iloc[0])][
        ["team", "value"]
    ]
)
result
</code></pre>
<p>however that gives me:</p>
<pre><code> team value
0 Arsenal 1
1 Tottenham 2
4 Tottenham 1
5 Chelsea 2
</code></pre>
<p>I fully understand why I get that result (I've included the <code>iloc</code>, which means it isn't running row-by-row on both columns, just to make the code run), but I'm not sure what the correct, 'elegant' (preferably chainable, for the context I frequently work in) pythonic equivalent is for the R posted above.</p>
<p>Many thanks!</p>
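<p>One chainable pandas sketch that sidesteps the double melt entirely: pair each <code>*_team</code> column with its matching <code>*_team_id</code>, rename both pairs to the same headers, and stack them with <code>pd.concat</code>, so each id stays with its own team:</p>

```python
import pandas as pd

example_df_py = pd.DataFrame(
    {
        "home_team_id": [1, 2],
        "away_team_id": [2, 3],
        "home_team": ["Arsenal", "Tottenham"],
        "away_team": ["Tottenham", "Chelsea"],
    }
)

# Select each (team, id) column pair, give both the same headers,
# then stack the pairs vertically.
result = pd.concat(
    [
        example_df_py[[f"{loc}_team", f"{loc}_team_id"]]
        .set_axis(["team", "value"], axis=1)
        for loc in ("home", "away")
    ],
    ignore_index=True,
)
print(result["team"].tolist())   # ['Arsenal', 'Tottenham', 'Tottenham', 'Chelsea']
print(result["value"].tolist())  # [1, 2, 2, 3]
```

For many column pairs, <code>pd.wide_to_long</code> is another option, but it requires renaming columns to a <code>stub_suffix</code> pattern first.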
|
<python><r><filter><pivot>
|
2023-01-16 14:16:41
| 1
| 917
|
Robert Hickman
|
75,135,415
| 1,711,088
|
Selenium. Python. Interact with extensions
|
<p>I'm using Selenium, Chrome and Python 3.10.
I'm trying to automate login to the "opensea.io" site. The site doesn't have a classic login page; you can only log in through the installed MetaMask wallet extension. When I click "Login", the extension opens a popup and waits for my actions (clicking buttons and entering a password).</p>
<p>My Python script opens the login page ('https://opensea.io/login'), finds the MetaMask button and clicks it. But after that the Chrome extension opens a popup that is not a page or a tab, and I can't find a way to interact with this popup window, to click a button for example.</p>
<p><a href="https://i.sstatic.net/PwU3A.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PwU3A.jpg" alt="enter image description here" /></a></p>
<p>How can I connect to this popup to interact with it? I want to find buttons and click them, for example, or fill in input fields.</p>
<p>Chrome Task Manager says this window is part of an extension:
<a href="https://i.sstatic.net/BhBtN.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BhBtN.jpg" alt="enter image description here" /></a></p>
|
<python><selenium><google-chrome-extension><automation><findelement>
|
2023-01-16 14:10:44
| 1
| 976
|
Massimo
|
75,135,100
| 10,437,727
|
What Python unittest command to run here?
|
<p>Here's my project structure:</p>
<pre><code>├── compute_completeness_service
│ ├── __init__.py
│ ├── app.py
│ ├── tests
│ │ ├── integration
│ │ │ ├── __init__.py
│ │ │ └── test__init__.py
│ │ └── unit
│ │ ├── __init__.py
│ │ ├── __pycache__
│ │ └── test_utils.py
│ └── utils
│ ├── __init__.py
│ └── __pycache__
├── data_quality
│ ├── __init__.py
│ ├── app.py
│ ├── helpers
│ │ ├── __init__.py
│ └── tests
│ └── unit
│ ├── __init__.py
│ ├── __pycache__
│ └── test_helpers.py
</code></pre>
<p>I can run these commands from the directory containing <code>compute_completeness_service</code> and <code>data_quality</code>:</p>
<pre class="lang-bash prettyprint-override"><code>python3 -m unittest discover -v -s ./compute_completeness_service/tests -p "test_*.py"
python3 -m unittest discover -v -s ./data_quality/tests -p "test_*.py"
</code></pre>
<p>but is there one command to run both of them? Because when I run this command, I get no tests:</p>
<pre class="lang-bash prettyprint-override"><code>python3 -m unittest discover -v -t . -p "test_*.py"
----------------------------------------------------------------------
Ran 0 tests in 0.000s
</code></pre>
<p>TIA!</p>
|
<python><python-3.x><python-unittest>
|
2023-01-16 13:45:39
| 1
| 1,760
|
Fares
|
75,134,954
| 15,229,310
|
Decorate all functions of a class in Python 3
|
<p>This has been asked before, but all the solutions seem to be from around 2011 for early Python 2.x, unusable for recent (3.6+) versions.</p>
<p>Having</p>
<pre class="lang-py prettyprint-override"><code>def function_decorator(fn):
    def wrapper(*args, **kwargs):
        print('decorated')
        return fn(*args, **kwargs)
    return wrapper
</code></pre>
<p>and</p>
<pre class="lang-py prettyprint-override"><code>class C:
    @function_decorator
    def f1(self):
        pass

    # ... many other functions

    def f10(self):
        pass
</code></pre>
<p>what should I do so that the class <code>C</code> methods <code>f1</code>, ..., <code>f10</code> will be decorated with <code>function_decorator</code>? (Obviously without typing it all in manually.)</p>
<p>My actual use case is having a generic parent class with generic methods, while looking to create several children, with each child having slightly different behavior of inherited methods.</p>
<p>E.g. <code>Child1</code> applies <code>decorator1</code> to all parent methods, <code>Child2</code> applies <code>decorator2</code> etc.</p>
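<p>A class-decorator sketch that works on Python 3.6+: iterate over the attributes defined directly on the class and wrap every plain function. The helper names are illustrative; the filter (here, skipping dunder names) should be adapted to taste:</p>

```python
import functools

def function_decorator(fn):
    @functools.wraps(fn)          # preserve name/docstring of the wrapped method
    def wrapper(*args, **kwargs):
        print('decorated')
        return fn(*args, **kwargs)
    return wrapper

def decorate_all_methods(decorator):
    # Class decorator: applies `decorator` to every plain function
    # defined directly on the class body (dunder methods are skipped).
    def apply(cls):
        for name, attr in list(vars(cls).items()):
            if callable(attr) and not name.startswith("__"):
                setattr(cls, name, decorator(attr))
        return cls
    return apply

@decorate_all_methods(function_decorator)
class C:
    def f1(self):
        return "f1"
    def f10(self):
        return "f10"
```

For the parent/child use case, each child can be declared with a different decorator argument (<code>@decorate_all_methods(decorator2)</code>), or the wrapping can be moved into the parent's <code>__init_subclass__</code> so subclasses decorate themselves automatically.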
|
<python><python-3.x><python-decorators>
|
2023-01-16 13:33:59
| 1
| 349
|
stam
|
75,134,930
| 15,724,084
|
pandas appending data works online for few rows
|
<p>My script writes to the Excel file starting from row 2 on each iteration, but I need it to append the data under the last row each time.</p>
<p>The code needs to write the new data in bulk starting from the last row.</p>
<p>The code is below:</p>
<pre><code>import scrapy
from scrapy.crawler import CrawlerProcess
import pandas as pd

class plateScraper(scrapy.Spider):
    name = 'scrapePlate'
    allowed_domains = ['dvlaregistrations.direct.gov.uk']

    def start_requests(self):
        df = pd.read_excel('data.xlsx')
        columnA_values = df['PLATE']
        for row in columnA_values:
            global plate_num_xlsx
            plate_num_xlsx = row
            base_url = f"https://dvlaregistrations.direct.gov.uk/search/results.html?search={plate_num_xlsx}&action=index&pricefrom=0&priceto=&prefixmatches=&currentmatches=&limitprefix=&limitcurrent=&limitauction=&searched=true&openoption=&language=en&prefix2=Search&super=&super_pricefrom=&super_priceto="
            url = base_url
            yield scrapy.Request(url)

    def parse(self, response):
        itemList = []
        for row in response.css('div.resultsstrip'):
            plate = row.css('a::text').get()
            price = row.css('p::text').get()
            if plate_num_xlsx == plate.replace(" ", "").strip():
                item = {"plate": plate.strip(), "price": price.strip()}
                itemList.append(item)
                yield item
            else:
                item = {"plate": plate.strip(), "price": "-"}
                itemList.append(item)
                yield item
        with pd.ExcelWriter('output_res.xlsx', mode='a', if_sheet_exists='overlay') as writer:
            df_output = pd.DataFrame(itemList)
            df_output.to_excel(writer, sheet_name='result', index=False, header=True)

process = CrawlerProcess()
process.crawl(plateScraper)
process.start()
</code></pre>
<p>It writes the data in bulk, but it rewrites the same dozen or so rows every time instead of appending below and moving down. Strange, isn't it? I would like to hear the reason and how to fix it so that all the data is written from top to bottom.</p>
|
<python><pandas><scrapy>
|
2023-01-16 13:31:21
| 1
| 741
|
xlmaster
|
75,134,922
| 859,227
|
Removing extra spaces in print()
|
<p>I want to show a progress like</p>
<pre><code>[500/5000] [1000/5000] [1500/5000] ...
</code></pre>
<p>With the following print line</p>
<pre><code>print("[",str(c),"/",str(len(my_list)),"] ", end='')
</code></pre>
<p>I see extra spaces like <code>[ 1000 / 55707 ] [ 2000 / 55707 ]</code>. I also tried</p>
<pre><code>print('[{}'.format(c),"/",'{}]'.format(len(my_list))," ", end='')
</code></pre>
<p>But the output is <code>[1000 / 55707] [2000 / 55707]</code>.</p>
<p>How can I fix that?</p>
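<p>The extra spaces come from <code>print</code>'s default <code>sep=' '</code>, which is inserted between every comma-separated argument. Either suppress it with <code>sep=''</code> or build the whole token with a single f-string:</p>

```python
c, total = 1000, 55707

# f-string: one argument, so no separator is ever inserted
print(f"[{c}/{total}] ", end="")

# equivalent: keep the separate arguments but suppress the separator
print("[", c, "/", total, "] ", sep="", end="")
```

Both lines emit <code>[1000/55707] </code> with no spaces around the numbers.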
|
<python><python-3.x>
|
2023-01-16 13:30:28
| 1
| 25,175
|
mahmood
|
75,134,906
| 2,710,058
|
Python async, future behaviour
|
<pre><code>from asyncio import Future
import asyncio

async def plan_fut(future_obj):
    print('future started')
    await asyncio.sleep(1)
    future_obj.set_result('Future completes')

def create() -> Future:
    future_obj = Future()
    asyncio.create_task(plan_fut(future_obj))
    return future_obj

async def main():
    future_obj = create()
    result = await future_obj
    print(result)

asyncio.run(main())
</code></pre>
<p>Result</p>
<pre><code>future started
Future completes
</code></pre>
<p>If I change function create to</p>
<pre><code>async def create() -> Future:
    future_obj = Future()
    asyncio.create_task(plan_fut(future_obj))
    return future_obj
</code></pre>
<p>result is</p>
<pre><code><Future pending>
future started
</code></pre>
<ol>
<li>I understand case 1, where <code>create</code> is a plain function and <code>main</code> awaits the future object.</li>
<li>I don't understand case 2: when <code>create</code> is async, why does <code>main</code> behave strangely? Shouldn't it wait for the future to complete in this case as well?</li>
</ol>
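<p>A sketch of the explanation for case 2: once <code>create</code> is declared <code>async</code>, calling it returns a <em>coroutine</em>, and <code>await future_obj</code> in <code>main</code> merely runs that coroutine — whose return value is the still-pending Future, hence the <code>&lt;Future pending&gt;</code> print. Awaiting twice restores the expected behaviour (this version also swaps <code>Future()</code> for <code>loop.create_future()</code>, the generally recommended way to make a Future bound to the running loop, and shortens the sleep):</p>

```python
import asyncio

async def plan_fut(fut):
    await asyncio.sleep(0)
    fut.set_result("Future completes")

async def create() -> asyncio.Future:
    fut = asyncio.get_running_loop().create_future()
    asyncio.create_task(plan_fut(fut))
    return fut

async def main():
    fut = await create()   # first await: run the coroutine, obtain the Future
    return await fut       # second await: wait for the Future's result

result = asyncio.run(main())
print(result)  # Future completes
```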
|
<python><python-asyncio>
|
2023-01-16 13:29:14
| 0
| 391
|
Pramod
|
75,134,878
| 710,955
|
Python: NLTK and spaCy don't give the same result when tokenizing sentences in French
|
<p>I want to split french text into sentences.</p>
<p>With <a href="https://www.nltk.org/_modules/nltk/tokenize.html#sent_tokenize" rel="nofollow noreferrer">NLTK</a>, I use the sentence tokenizer directly as follows:</p>
<pre><code>import nltk.data

tokenizer = nltk.data.load('tokenizers/punkt/french.pickle')
tokens = tokenizer.tokenize("Film culte, classique parmi les classiques.Enfin un conte de Noël bien adapté aux tout-petits sans les prendre pour des attardés.")
for sentence in tokens:
    print(sentence)
</code></pre>
<p>But I got just one sentence:</p>
<blockquote>
<p>Film culte, classique parmi les classiques.Enfin un conte de Noël bien
adapté aux tout-petits sans les prendre pour des attardés.</p>
</blockquote>
<p>With <a href="https://spacy.io/models/fr" rel="nofollow noreferrer">Spacy</a>, I do this:</p>
<pre><code>import spacy

nlp = spacy.load("fr_core_news_sm")
doc = nlp("Film culte, classique parmi les classiques.Enfin un conte de Noël bien adapté aux tout-petits sans les prendre pour des attardés.")
for sentence in doc.sents:
    print(sentence.text)
</code></pre>
<p>I get both sentences, which is correct:</p>
<blockquote>
<p>Film culte, classique parmi les classiques.</p>
</blockquote>
<blockquote>
<p>Enfin un conte de Noël bien adapté aux tout-petits sans les prendre pour des attardés.</p>
</blockquote>
<p>Why doesn't it work well with NLTK?</p>
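<p>The short answer: NLTK's punkt tokenizer only considers splitting where a period is followed by whitespace, and <code>classiques.Enfin</code> has none, while spaCy's statistical segmenter can still recover the boundary. One hedged workaround is to pre-insert the missing space before handing the text to NLTK:</p>

```python
import re

text = ("Film culte, classique parmi les classiques.Enfin un conte de Noël "
        "bien adapté aux tout-petits sans les prendre pour des attardés.")

# Insert a space after any period glued directly to a capitalised word.
# This is a heuristic -- it can misfire on abbreviations, URLs, or
# decimal-free version numbers, so adapt the pattern to your corpus.
fixed = re.sub(r"\.(?=[A-ZÀ-Ý])", ". ", text)
print(fixed)
```

After this preprocessing, <code>tokenizer.tokenize(fixed)</code> should return the two sentences, matching spaCy.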
|
<python><nlp><nltk><spacy>
|
2023-01-16 13:26:39
| 1
| 5,809
|
LeMoussel
|
75,134,626
| 991,710
|
Properly form JSON with df.to_json with dataframe containing nested json
|
<p>I have the following situation:</p>
<pre><code>id items
3b68b7b2-f42c-418b-aa88-02450d66b616 [{quantity=3.0, item_id=210defdb-de69-4d03-bddd-7db626cd501b, description=Abc}, {quantity=1.0, item_id=ff457660-5f30-4432-a5af-564a9dee0029, description=xyz . 23}, {quantity=10.0, item_id=8dbd22f2-cc13-4776-b58c-4d6fe0f3463e, description=abc def}]
</code></pre>
<p>where one of my columns has a nested JSON list inside of it.
I wish to output the data of this dataframe as proper JSON, <em>including</em> the nested list.</p>
<p>So, for example, calling <code>df.to_json(orient='records', indent=4)</code> on the above dataframe yields:</p>
<pre class="lang-json prettyprint-override"><code>[
{
"id": "3b68b7b2-f42c-418b-aa88-02450d66b616",
"items": "[{quantity=3.0, item_id=210defdb-de69-4d03-bddd-7db626cd501b, description=Abc}, {quantity=1.0, item_id=ff457660-5f30-4432-a5af-564a9dee0029, description=xyz . 23}, {quantity=10.0, item_id=8dbd22f2-cc13-4776-b58c-4d6fe0f3463e, description=abc def}]"
}
]
</code></pre>
<p>whereas I want:</p>
<pre class="lang-json prettyprint-override"><code>[
{
"id": "3b68b7b2-f42c-418b-aa88-02450d66b616",
"items": [
{
"quantity": 3.0,
"item_id": "210defdb-de69-4d03-bddd-7db626cd501b",
"description": "Abc"
},
{
"quantity": 1.0,
"item_id": "ff457660-5f30-4432-a5af-564a9dee0029",
"description": "xyz . 23"
},
{
"quantity": 10.0,
"item_id": "8dbd22f2-cc13-4776-b58c-4d6fe0f3463e",
"description": "abc def"
}
]
}
]
</code></pre>
<p>Is this possible using <code>df.to_json()</code>? I have tried to use regex to parse the resulting string, but due to the data contained therein, it is unfortunately extremely difficult to "jsonify" the fields I want.</p>
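<p>If the <code>items</code> cells really are strings of the form <code>{key=value, ...}</code> rather than Python lists, one hedged sketch is to parse them into real lists of dicts first; after that, serialising nests naturally. The parser below assumes values contain no commas, braces, or <code>=</code> signs (true of the sample shown, but worth verifying against the full data):</p>

```python
import json
import re

def parse_items(s):
    # Parse "[{k=v, ...}, {k=v, ...}]" pseudo-JSON into a list of dicts.
    items = []
    for block in re.findall(r"\{(.*?)\}", s):
        item = {}
        for key, value in re.findall(r"(\w+)=([^,}]+)", block):
            value = value.strip()
            try:
                item[key] = float(value)   # quantity=3.0 -> 3.0
            except ValueError:
                item[key] = value          # ids/descriptions stay strings
        items.append(item)
    return items

s = ("[{quantity=3.0, item_id=210defdb-de69-4d03-bddd-7db626cd501b, description=Abc}, "
     "{quantity=1.0, item_id=ff457660-5f30-4432-a5af-564a9dee0029, description=xyz . 23}]")
parsed = parse_items(s)
print(json.dumps(parsed, indent=4))
```

Applied column-wise (<code>df['items'] = df['items'].apply(parse_items)</code>), a subsequent <code>df.to_json(orient='records')</code> or <code>json.dumps(df.to_dict(orient='records'))</code> emits the nested structure instead of a quoted string.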
|
<python><json><pandas>
|
2023-01-16 13:04:35
| 1
| 3,744
|
filpa
|
75,134,616
| 15,368,670
|
Simple check for projected crs
|
<p>How can I easily check whether a CRS is a projected one in geopandas?</p>
<p>Basically a function:</p>
<pre><code>is_projected_crs(crs)
</code></pre>
<p>returning True or False.</p>
<p>I need that because I am writing some code to avoid this warning from geopandas:</p>
<pre><code>Geometry is in a geographic CRS. Results from 'area' are likely incorrect. Use 'GeoSeries.to_crs()' to re-project geometries to a projected CRS before this operation.
</code></pre>
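<p>A sketch of one approach: geopandas CRS objects are pyproj <code>CRS</code> instances, which already expose an <code>is_projected</code> property, so the check can be a thin wrapper (assuming pyproj 2+; <code>from_user_input</code> also accepts EPSG codes, WKT, and proj strings, so the same function works on <code>gdf.crs</code> directly):</p>

```python
from pyproj import CRS

def is_projected_crs(crs) -> bool:
    # Accepts a pyproj/geopandas CRS object, "EPSG:xxxx" string,
    # WKT, etc., and defers to pyproj's own classification.
    return CRS.from_user_input(crs).is_projected

print(is_projected_crs("EPSG:4326"))  # False (geographic, degrees)
print(is_projected_crs("EPSG:3857"))  # True  (Web Mercator, metres)
```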
|
<python><geopandas><coordinate-systems>
|
2023-01-16 13:03:47
| 1
| 719
|
Oily
|
75,134,553
| 13,058,538
|
Installation of Couchbase Python SDK 3.0.10 failing on python 3.8.5
|
<p>I am unable to install version 3.0.10 of the Couchbase Python SDK (I specifically need this version). I am using Ubuntu 22.04 and have set the global Python to 3.8.5 with pyenv. I am also using poetry with a venv (installation fails both in the venv and globally). It seems I have tried everything: different Python versions (3.8, 3.9, 3.10), having all requirements set up, installing docutils. At this point I don't know what else there is to try.</p>
<p>The requirements mentioned in the <a href="https://docs.couchbase.com/python-sdk/3.0/project-docs/sdk-release-notes.html" rel="nofollow noreferrer">documentation</a> are installed:</p>
<pre><code>sudo apt install git-all python3-dev python3-pip python3-setuptools cmake build-essential libssl-dev
sudo apt-get install libssl-dev
</code></pre>
<p>Here is the error that I am receiving:</p>
<pre><code>$ python3 -m pip install couchbase==3.0.10
Collecting couchbase==3.0.10
Using cached couchbase-3.0.10.tar.gz (1.5 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Installing backend dependencies ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: attrs>=19.1.0 in ./.venv/lib/python3.8/site-packages (from couchbase==3.0.10) (21.4.0)
Requirement already satisfied: wrapt>=1.11.2 in ./.venv/lib/python3.8/site-packages (from couchbase==3.0.10) (1.14.1)
Requirement already satisfied: pyrsistent>=0.15.2 in ./.venv/lib/python3.8/site-packages (from couchbase==3.0.10) (0.18.1)
Requirement already satisfied: mypy-extensions in ./.venv/lib/python3.8/site-packages (from couchbase==3.0.10) (0.4.3)
Requirement already satisfied: six in ./.venv/lib/python3.8/site-packages (from couchbase==3.0.10) (1.16.0)
Building wheels for collected packages: couchbase
Building wheel for couchbase (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for couchbase (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [900 lines of output]
/tmp/pip-install-jdv9fnu7/couchbase_3a9b26b317cc4b378fb5eef06c0a77be/gen_config.py:55: UserWarning: problem: Traceback (most recent call last):
File "/tmp/pip-install-jdv9fnu7/couchbase_3a9b26b317cc4b378fb5eef06c0a77be/gen_config.py", line 36, in get_lcb_min_version
import docutils.parsers.rst
ModuleNotFoundError: No module named 'docutils'
warnings.warn("problem: {}".format(traceback.format_exc()))
ERROR:root:{'ext_modules': [<cmake_build.CMakeExtension('couchbase_core._libcouchbase') at 0x7f823cb3ae20>]}
INFO:root:running bdist_wheel
INFO:root:running build
INFO:root:running build_py
INFO:root:creating build
INFO:root:creating build/lib.linux-x86_64-cpython-38
INFO:root:creating build/lib.linux-x86_64-cpython-38/txcouchbase
INFO:root:copying txcouchbase/cluster.py -> build/lib.linux-x86_64-cpython-38/txcouchbase
INFO:root:copying txcouchbase/iops.py -> build/lib.linux-x86_64-cpython-38/txcouchbase
INFO:root:copying txcouchbase/__init__.py -> build/lib.linux-x86_64-cpython-38/txcouchbase
INFO:root:creating build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:creating build/lib.linux-x86_64-cpython-38/couchbase_core/iops
INFO:root:copying couchbase_core/iops/base.py -> build/lib.linux-x86_64-cpython-38/couchbase_core/iops
INFO:root:copying couchbase_core/iops/select.py -> build/lib.linux-x86_64-cpython-38/couchbase_core/iops
INFO:root:copying couchbase_core/iops/__init__.py -> build/lib.linux-x86_64-cpython-38/couchbase_core/iops
INFO:root:creating build/lib.linux-x86_64-cpython-38/couchbase
INFO:root:copying couchbase/bucket.py -> build/lib.linux-x86_64-cpython-38/couchbase
INFO:root:copying couchbase/exceptions.py -> build/lib.linux-x86_64-cpython-38/couchbase
INFO:root:copying couchbase/cluster.py -> build/lib.linux-x86_64-cpython-38/couchbase
INFO:root:copying couchbase/auth.py -> build/lib.linux-x86_64-cpython-38/couchbase
INFO:root:copying couchbase/options.py -> build/lib.linux-x86_64-cpython-38/couchbase
INFO:root:copying couchbase/JSONdocument.py -> build/lib.linux-x86_64-cpython-38/couchbase
INFO:root:copying couchbase/n1ql.py -> build/lib.linux-x86_64-cpython-38/couchbase
INFO:root:copying couchbase/diagnostics.py -> build/lib.linux-x86_64-cpython-38/couchbase
INFO:root:copying couchbase/analytics.py -> build/lib.linux-x86_64-cpython-38/couchbase
INFO:root:copying couchbase/durability.py -> build/lib.linux-x86_64-cpython-38/couchbase
INFO:root:copying couchbase/search.py -> build/lib.linux-x86_64-cpython-38/couchbase
INFO:root:copying couchbase/result.py -> build/lib.linux-x86_64-cpython-38/couchbase
INFO:root:copying couchbase/__init__.py -> build/lib.linux-x86_64-cpython-38/couchbase
INFO:root:copying couchbase/collection.py -> build/lib.linux-x86_64-cpython-38/couchbase
INFO:root:copying couchbase/subdocument.py -> build/lib.linux-x86_64-cpython-38/couchbase
INFO:root:copying couchbase/mutation_state.py -> build/lib.linux-x86_64-cpython-38/couchbase
INFO:root:creating build/lib.linux-x86_64-cpython-38/couchbase_core/asynchronous
INFO:root:copying couchbase_core/asynchronous/view.py -> build/lib.linux-x86_64-cpython-38/couchbase_core/asynchronous
INFO:root:copying couchbase_core/asynchronous/n1ql.py -> build/lib.linux-x86_64-cpython-38/couchbase_core/asynchronous
INFO:root:copying couchbase_core/asynchronous/client.py -> build/lib.linux-x86_64-cpython-38/couchbase_core/asynchronous
INFO:root:copying couchbase_core/asynchronous/analytics.py -> build/lib.linux-x86_64-cpython-38/couchbase_core/asynchronous
INFO:root:copying couchbase_core/asynchronous/__init__.py -> build/lib.linux-x86_64-cpython-38/couchbase_core/asynchronous
INFO:root:copying couchbase_core/asynchronous/rowsbase.py -> build/lib.linux-x86_64-cpython-38/couchbase_core/asynchronous
INFO:root:copying couchbase_core/asynchronous/events.py -> build/lib.linux-x86_64-cpython-38/couchbase_core/asynchronous
INFO:root:creating build/lib.linux-x86_64-cpython-38/acouchbase
INFO:root:copying acouchbase/cluster.py -> build/lib.linux-x86_64-cpython-38/acouchbase
INFO:root:copying acouchbase/iterator.py -> build/lib.linux-x86_64-cpython-38/acouchbase
INFO:root:copying acouchbase/__init__.py -> build/lib.linux-x86_64-cpython-38/acouchbase
INFO:root:copying acouchbase/asyncio_iops.py -> build/lib.linux-x86_64-cpython-38/acouchbase
INFO:root:creating build/lib.linux-x86_64-cpython-38/couchbase/management
INFO:root:copying couchbase/management/users.py -> build/lib.linux-x86_64-cpython-38/couchbase/management
INFO:root:copying couchbase/management/queries.py -> build/lib.linux-x86_64-cpython-38/couchbase/management
INFO:root:copying couchbase/management/collections.py -> build/lib.linux-x86_64-cpython-38/couchbase/management
INFO:root:copying couchbase/management/analytics.py -> build/lib.linux-x86_64-cpython-38/couchbase/management
INFO:root:copying couchbase/management/admin.py -> build/lib.linux-x86_64-cpython-38/couchbase/management
INFO:root:copying couchbase/management/buckets.py -> build/lib.linux-x86_64-cpython-38/couchbase/management
INFO:root:copying couchbase/management/search.py -> build/lib.linux-x86_64-cpython-38/couchbase/management
INFO:root:copying couchbase/management/views.py -> build/lib.linux-x86_64-cpython-38/couchbase/management
INFO:root:copying couchbase/management/__init__.py -> build/lib.linux-x86_64-cpython-38/couchbase/management
INFO:root:copying couchbase/management/generic.py -> build/lib.linux-x86_64-cpython-38/couchbase/management
INFO:root:creating build/lib.linux-x86_64-cpython-38/couchbase/asynchronous
INFO:root:copying couchbase/asynchronous/search.py -> build/lib.linux-x86_64-cpython-38/couchbase/asynchronous
INFO:root:copying couchbase/asynchronous/__init__.py -> build/lib.linux-x86_64-cpython-38/couchbase/asynchronous
INFO:root:copying couchbase_core/analytics_ingester.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/mockserver.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/transcoder.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/cluster.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/supportability.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/_logutil.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/n1ql.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/_bootstrap.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/_pyport.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/client.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/connstr.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/analytics.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/durability.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/bucketmanager.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/transcodable.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/_version.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/crypto.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/priv_constants.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/experimental.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/result.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/_ixmgmt.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/mapper.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/__init__.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/user_constants.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/items.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:copying couchbase_core/subdocument.py -> build/lib.linux-x86_64-cpython-38/couchbase_core
INFO:root:creating build/lib.linux-x86_64-cpython-38/couchbase_core/views
INFO:root:copying couchbase_core/views/iterator.py -> build/lib.linux-x86_64-cpython-38/couchbase_core/views
INFO:root:copying couchbase_core/views/__init__.py -> build/lib.linux-x86_64-cpython-38/couchbase_core/views
INFO:root:copying couchbase_core/views/params.py -> build/lib.linux-x86_64-cpython-38/couchbase_core/views
INFO:root:running build_ext
Build type: CMAKE_HYBRID, cmake:True
[]
{}
set base as None
Got platform linux
got pkgs ['libcouchbase.so.6']
Got platform linux
yielding binary libcouchbase.so.6 : /tmp/pip-install-jdv9fnu7/couchbase_3a9b26b317cc4b378fb5eef06c0a77be/couchbase_core/libcouchbase.so.6
From: temp_build_dir build/temp.linux-x86_64-cpython-38 and ssl_relative_path None Got ssl_abs_path /tmp/pip-install-jdv9fnu7/couchbase_3a9b26b317cc4b378fb5eef06c0a77be/build/temp.linux-x86_64-cpython-38/openssl
</code></pre>
<p>This is not the full error log; I cut out the middle part since it did not seem to contain any useful error messages. Here is the ending:</p>
<pre><code>INFO:root:creating build/temp.linux-x86_64-cpython-38/src
INFO:root:gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Ibuild/temp.linux-x86_64-cpython-38/install/include -I/home/my-pc/PycharmProjects/my-project/.venv/include -I/home/my-pc/.pyenv/versions/3.8.5/include/python3.8 -Ibuild/temp.linux-x86_64-cpython-38/install/include -c src/bucket.c -o build/temp.linux-x86_64-cpython-38/src/bucket.o -Wno-strict-prototypes -fPIC -std=c11 -Wuninitialized -Wswitch -Werror -Wno-missing-braces -DPYCBC_LCB_API=0x02FF04 -DPYCBC_LCB_API=0x02FF04
INFO:root:gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Ibuild/temp.linux-x86_64-cpython-38/install/include -I/home/my-pc/PycharmProjects/my-project/.venv/include -I/home/my-pc/.pyenv/versions/3.8.5/include/python3.8 -Ibuild/temp.linux-x86_64-cpython-38/install/include -c src/callbacks.c -o build/temp.linux-x86_64-cpython-38/src/callbacks.o -Wno-strict-prototypes -fPIC -std=c11 -Wuninitialized -Wswitch -Werror -Wno-missing-braces -DPYCBC_LCB_API=0x02FF04 -DPYCBC_LCB_API=0x02FF04
INFO:root:gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Ibuild/temp.linux-x86_64-cpython-38/install/include -I/home/my-pc/PycharmProjects/my-project/.venv/include -I/home/my-pc/.pyenv/versions/3.8.5/include/python3.8 -Ibuild/temp.linux-x86_64-cpython-38/install/include -c src/cntl.c -o build/temp.linux-x86_64-cpython-38/src/cntl.o -Wno-strict-prototypes -fPIC -std=c11 -Wuninitialized -Wswitch -Werror -Wno-missing-braces -DPYCBC_LCB_API=0x02FF04 -DPYCBC_LCB_API=0x02FF04
INFO:root:gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Ibuild/temp.linux-x86_64-cpython-38/install/include -I/home/my-pc/PycharmProjects/my-project/.venv/include -I/home/my-pc/.pyenv/versions/3.8.5/include/python3.8 -Ibuild/temp.linux-x86_64-cpython-38/install/include -c src/connevents.c -o build/temp.linux-x86_64-cpython-38/src/connevents.o -Wno-strict-prototypes -fPIC -std=c11 -Wuninitialized -Wswitch -Werror -Wno-missing-braces -DPYCBC_LCB_API=0x02FF04 -DPYCBC_LCB_API=0x02FF04
INFO:root:gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Ibuild/temp.linux-x86_64-cpython-38/install/include -I/home/my-pc/PycharmProjects/my-project/.venv/include -I/home/my-pc/.pyenv/versions/3.8.5/include/python3.8 -Ibuild/temp.linux-x86_64-cpython-38/install/include -c src/constants.c -o build/temp.linux-x86_64-cpython-38/src/constants.o -Wno-strict-prototypes -fPIC -std=c11 -Wuninitialized -Wswitch -Werror -Wno-missing-braces -DPYCBC_LCB_API=0x02FF04 -DPYCBC_LCB_API=0x02FF04
INFO:root:gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Ibuild/temp.linux-x86_64-cpython-38/install/include -I/home/my-pc/PycharmProjects/my-project/.venv/include -I/home/my-pc/.pyenv/versions/3.8.5/include/python3.8 -Ibuild/temp.linux-x86_64-cpython-38/install/include -c src/convert.c -o build/temp.linux-x86_64-cpython-38/src/convert.o -Wno-strict-prototypes -fPIC -std=c11 -Wuninitialized -Wswitch -Werror -Wno-missing-braces -DPYCBC_LCB_API=0x02FF04 -DPYCBC_LCB_API=0x02FF04
INFO:root:gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Ibuild/temp.linux-x86_64-cpython-38/install/include -I/home/my-pc/PycharmProjects/my-project/.venv/include -I/home/my-pc/.pyenv/versions/3.8.5/include/python3.8 -Ibuild/temp.linux-x86_64-cpython-38/install/include -c src/counter.c -o build/temp.linux-x86_64-cpython-38/src/counter.o -Wno-strict-prototypes -fPIC -std=c11 -Wuninitialized -Wswitch -Werror -Wno-missing-braces -DPYCBC_LCB_API=0x02FF04 -DPYCBC_LCB_API=0x02FF04
INFO:root:gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Ibuild/temp.linux-x86_64-cpython-38/install/include -I/home/my-pc/PycharmProjects/my-project/.venv/include -I/home/my-pc/.pyenv/versions/3.8.5/include/python3.8 -Ibuild/temp.linux-x86_64-cpython-38/install/include -c src/crypto.c -o build/temp.linux-x86_64-cpython-38/src/crypto.o -Wno-strict-prototypes -fPIC -std=c11 -Wuninitialized -Wswitch -Werror -Wno-missing-braces -DPYCBC_LCB_API=0x02FF04 -DPYCBC_LCB_API=0x02FF04
INFO:root:gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Ibuild/temp.linux-x86_64-cpython-38/install/include -I/home/my-pc/PycharmProjects/my-project/.venv/include -I/home/my-pc/.pyenv/versions/3.8.5/include/python3.8 -Ibuild/temp.linux-x86_64-cpython-38/install/include -c src/ctranscoder.c -o build/temp.linux-x86_64-cpython-38/src/ctranscoder.o -Wno-strict-prototypes -fPIC -std=c11 -Wuninitialized -Wswitch -Werror -Wno-missing-braces -DPYCBC_LCB_API=0x02FF04 -DPYCBC_LCB_API=0x02FF04
INFO:root:gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Ibuild/temp.linux-x86_64-cpython-38/install/include -I/home/my-pc/PycharmProjects/my-project/.venv/include -I/home/my-pc/.pyenv/versions/3.8.5/include/python3.8 -Ibuild/temp.linux-x86_64-cpython-38/install/include -c src/exceptions.c -o build/temp.linux-x86_64-cpython-38/src/exceptions.o -Wno-strict-prototypes -fPIC -std=c11 -Wuninitialized -Wswitch -Werror -Wno-missing-braces -DPYCBC_LCB_API=0x02FF04 -DPYCBC_LCB_API=0x02FF04
INFO:root:gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Ibuild/temp.linux-x86_64-cpython-38/install/include -I/home/my-pc/PycharmProjects/my-project/.venv/include -I/home/my-pc/.pyenv/versions/3.8.5/include/python3.8 -Ibuild/temp.linux-x86_64-cpython-38/install/include -c src/ext.c -o build/temp.linux-x86_64-cpython-38/src/ext.o -Wno-strict-prototypes -fPIC -std=c11 -Wuninitialized -Wswitch -Werror -Wno-missing-braces -DPYCBC_LCB_API=0x02FF04 -DPYCBC_LCB_API=0x02FF04
INFO:root:gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Ibuild/temp.linux-x86_64-cpython-38/install/include -I/home/my-pc/PycharmProjects/my-project/.venv/include -I/home/my-pc/.pyenv/versions/3.8.5/include/python3.8 -Ibuild/temp.linux-x86_64-cpython-38/install/include -c src/fts.c -o build/temp.linux-x86_64-cpython-38/src/fts.o -Wno-strict-prototypes -fPIC -std=c11 -Wuninitialized -Wswitch -Werror -Wno-missing-braces -DPYCBC_LCB_API=0x02FF04 -DPYCBC_LCB_API=0x02FF04
INFO:root:gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Ibuild/temp.linux-x86_64-cpython-38/install/include -I/home/my-pc/PycharmProjects/my-project/.venv/include -I/home/my-pc/.pyenv/versions/3.8.5/include/python3.8 -Ibuild/temp.linux-x86_64-cpython-38/install/include -c src/get.c -o build/temp.linux-x86_64-cpython-38/src/get.o -Wno-strict-prototypes -fPIC -std=c11 -Wuninitialized -Wswitch -Werror -Wno-missing-braces -DPYCBC_LCB_API=0x02FF04 -DPYCBC_LCB_API=0x02FF04
INFO:root:gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Ibuild/temp.linux-x86_64-cpython-38/install/include -I/home/my-pc/PycharmProjects/my-project/.venv/include -I/home/my-pc/.pyenv/versions/3.8.5/include/python3.8 -Ibuild/temp.linux-x86_64-cpython-38/install/include -c src/htresult.c -o build/temp.linux-x86_64-cpython-38/src/htresult.o -Wno-strict-prototypes -fPIC -std=c11 -Wuninitialized -Wswitch -Werror -Wno-missing-braces -DPYCBC_LCB_API=0x02FF04 -DPYCBC_LCB_API=0x02FF04
INFO:root:gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Ibuild/temp.linux-x86_64-cpython-38/install/include -I/home/my-pc/PycharmProjects/my-project/.venv/include -I/home/my-pc/.pyenv/versions/3.8.5/include/python3.8 -Ibuild/temp.linux-x86_64-cpython-38/install/include -c src/iops.c -o build/temp.linux-x86_64-cpython-38/src/iops.o -Wno-strict-prototypes -fPIC -std=c11 -Wuninitialized -Wswitch -Werror -Wno-missing-braces -DPYCBC_LCB_API=0x02FF04 -DPYCBC_LCB_API=0x02FF04
src/iops.c: In function ‘update_event’:
src/iops.c:412:17: error: array subscript ‘lcb_uint32_t {aka unsigned int}[0]’ is partly outside array bounds of ‘short int[1]’ [-Werror=array-bounds]
412 | usecs = *(lcb_uint32_t*)arg;
| ^~~~~~~~~~~~~~~~~~~
src/iops.c:516:69: note: while referencing ‘flags’
516 | update_event(lcb_io_opt_t io, lcb_socket_t sock, void *event, short flags,
| ~~~~~~^~~~~
src/iops.c: In function ‘delete_event’:
src/iops.c:412:17: error: array subscript ‘lcb_uint32_t {aka unsigned int}[0]’ is partly outside array bounds of ‘short int[1]’ [-Werror=array-bounds]
412 | usecs = *(lcb_uint32_t*)arg;
| ^~~~~~~~~~~~~~~~~~~
src/iops.c:548:11: note: while referencing ‘tmp’
548 | short tmp = 0;
| ^~~
cc1: all warnings being treated as errors
Got platform linux
self.base is ['build', 'temp.linux-x86_64-cpython-38']
self.base is ['build', 'temp.linux-x86_64-cpython-38']
self.base is ['build', 'temp.linux-x86_64-cpython-38']
self.base is ['build', 'temp.linux-x86_64-cpython-38']
copying build/temp.linux-x86_64-cpython-38/install/lib/Release/libcouchbase.so.6 to /tmp/pip-install-jdv9fnu7/couchbase_3a9b26b317cc4b378fb5eef06c0a77be/build/lib.linux-x86_64-cpython-38/couchbase_core/libcouchbase.so.6
success
self.base is ['build', 'temp.linux-x86_64-cpython-38']
self.base is ['build', 'temp.linux-x86_64-cpython-38']
self.base is ['build', 'temp.linux-x86_64-cpython-38']
self.base is ['build', 'temp.linux-x86_64-cpython-38']
copying build/temp.linux-x86_64-cpython-38/install/lib/Release/libcouchbase.so.6 to /tmp/pip-install-jdv9fnu7/couchbase_3a9b26b317cc4b378fb5eef06c0a77be/couchbase_core/libcouchbase.so.6
success
self.base is ['build', 'temp.linux-x86_64-cpython-38']
self.base is ['build', 'temp.linux-x86_64-cpython-38']
error: command '/usr/bin/gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for couchbase
Failed to build couchbase
ERROR: Could not build wheels for couchbase, which is required to install pyproject.toml-based projects
</code></pre>
<p>I have already tried addressing the <code>ModuleNotFoundError: No module named 'docutils'</code> mentioned at the beginning of the log. I have docutils installed both in the venv and globally:</p>
<pre><code>$ pip3 install docutils
Requirement already satisfied: docutils in ./.venv/lib/python3.8/site-packages (0.19)
</code></pre>
<p>Installing with <code>poetry install</code> returns the same result, although some pip paths, for example, have changed (or are represented differently), which seems to be related to a different Python version (3.10).</p>
<p>Here is the error from using poetry install:</p>
<pre><code> Installing dependencies from lock file
Package operations: 6 installs, 0 updates, 0 removals
• Installing couchbase (3.0.10): Failed
CalledProcessError
Command '['/home/my-pc/PycharmProjects/my-project/.venv/bin/python', '-m', 'pip', 'install', '--use-pep517', '--disable-pip-version-check', '--isolated', '--no-input', '--prefix', '/home/my-pc/PycharmProjects/my-project/.venv', '--no-deps', '/home/my-pc/.cache/pypoetry/artifacts/ee/68/10/ea670bac6fb66b1486d57a260265313756a45ab7bdee068a46fe64db38/couchbase-3.0.10.tar.gz']' returned non-zero exit status 1.
at ~/.pyenv/versions/3.10.4/lib/python3.10/subprocess.py:524 in run
520│ # We don't call process.wait() as .__exit__ does that for us.
521│ raise
522│ retcode = process.poll()
523│ if check and retcode:
→ 524│ raise CalledProcessError(retcode, process.args,
525│ output=stdout, stderr=stderr)
526│ return CompletedProcess(process.args, retcode, stdout, stderr)
527│
528│
The following error occurred when trying to handle this error:
EnvCommandError
Command ['/home/my-pc/PycharmProjects/my-project/.venv/bin/python', '-m', 'pip', 'install', '--use-pep517', '--disable-pip-version-check', '--isolated', '--no-input', '--prefix', '/home/my-pc/PycharmProjects/my-project/.venv', '--no-deps', '/home/my-pc/.cache/pypoetry/artifacts/ee/68/10/ea670bac6fb66b1486d57a260265313756a45ab7bdee068a46fe64db38/couchbase-3.0.10.tar.gz'] errored with the following return code 1, and output:
Processing /home/my-pc/.cache/pypoetry/artifacts/ee/68/10/ea670bac6fb66b1486d57a260265313756a45ab7bdee068a46fe64db38/couchbase-3.0.10.tar.gz
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Building wheels for collected packages: couchbase
Building wheel for couchbase (pyproject.toml): started
Building wheel for couchbase (pyproject.toml): finished with status 'error'
error: subprocess-exited-with-error
× Building wheel for couchbase (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [834 lines of output]
/tmp/pip-req-build-lmxka0wp/gen_config.py:55: UserWarning: problem: Traceback (most recent call last):
File "/tmp/pip-req-build-lmxka0wp/gen_config.py", line 36, in get_lcb_min_version
import docutils.parsers.rst
ModuleNotFoundError: No module named 'docutils'
</code></pre>
<p>Apologies if the question looks messy, but there are a lot of logs.
Perhaps someone has encountered a similar issue and knows how to solve it?
All help is appreciated.</p>
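<p>For what it's worth, the docutils message is only a <code>UserWarning</code> raised from <code>gen_config.py</code>; the build is actually killed by the two <code>-Werror=array-bounds</code> errors in <code>src/iops.c</code>. Newer GCC releases report this array-bounds case, and the couchbase 3.0.10 source build compiles with <code>-Werror</code>, which turns the warning fatal. Two untested workarounds (a sketch only — adjust versions to your project):</p>

```shell
# Option 1: demote the specific warning so -Werror no longer aborts the build
CFLAGS="-Wno-error=array-bounds" pip install couchbase==3.0.10

# Option 2: move to the 4.x SDK line, which ships prebuilt wheels
pip install "couchbase>=4.0"
```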
|
<python><couchbase>
|
2023-01-16 12:56:54
| 0
| 523
|
Dave
|
75,134,485
| 6,209,004
|
Quarto batch processing in python
|
<p>I am doing batch rendering with Quarto using the following script, run from VS Code:</p>
<pre class="lang-py prettyprint-override"><code>import os
import glob
from pathlib import Path
PHASE = [0, 0.1, 0.2]
for f in PHASE:
os.system(
f"quarto render individual_reference_template_copy.qmd --output phase_{f}.html -P phase:{f} --to html --no-cache"
)
</code></pre>
<p><strong>individual_reference_template_copy.qmd</strong></p>
<pre><code>---
execute:
echo: true
format:
html:
code-fold: true
theme: cosmo
jupyter: python3
---
```{python}
import numpy as np
import matplotlib.pyplot as plt
```
```{python}
#| tags: [parameters]
phase = 0
```
```{python}
x = np.arange(0,2*np.pi, 0.001)
y = np.sin(x+phase)
plt.scatter(x, y)
```
</code></pre>
<p>The generated reports all show the same figure because the file</p>
<p><code>individual_reference_template_files\figure-html\cell-5-output-2.png</code></p>
<p>keeps getting overwritten.</p>
<p><strong>EDIT</strong><br />
Replaced the original post with a minimal example.</p>
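<p>Since every render of the same <code>.qmd</code> writes its figures into the same <code>individual_reference_template_files/figure-html/</code> folder, each run overwrites the previous run's images. One workaround sketch (untested; <code>--output-dir</code> is a documented <code>quarto render</code> option, but treat the exact directory layout as an assumption) is to give each phase its own output directory:</p>

```python
import subprocess

PHASES = [0, 0.1, 0.2]
QMD = "individual_reference_template_copy.qmd"

def build_commands(phases, qmd=QMD):
    """One `quarto render` command per phase, each writing into its own
    output directory so figure-html assets cannot overwrite each other."""
    return [
        [
            "quarto", "render", qmd,
            "--output", f"phase_{f}.html",
            "--output-dir", f"reports/phase_{f}",  # separate dir per phase
            "-P", f"phase:{f}",
            "--to", "html", "--no-cache",
        ]
        for f in phases
    ]

# for cmd in build_commands(PHASES):
#     subprocess.run(cmd, check=True)
```

<p>Using a list argument with <code>subprocess.run</code> also avoids the shell-quoting pitfalls of <code>os.system</code>.</p>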
|
<python><quarto>
|
2023-01-16 12:51:06
| 1
| 1,908
|
Kresten
|
75,134,424
| 12,877,988
|
How to encrypt decrypted text by django-fernet-fields?
|
<p>I am using django-fernet-fields, and the documentation says: "By default, django-fernet-fields uses your SECRET_KEY setting as the encryption key." My <code>SECRET_KEY</code> in Django is the one created by Django; let's say it is "XXXX".</p>
<p>My encrypted text is "gAAAAABfIrFwizr11ppteAXE3MOMItPDNfNkr5a4HcS3oiT7ih4Ln7y6TeCC5uXWPS3Yup_0s7whK3T44ndNlJRgc0Ii4_s3_A=="</p>
<p>When I try to decrypt it using the code below,</p>
<pre><code>token = SECRET_KEY
f = Fernet(token)
d =f.decrypt(b"gAAAAABfIrFwizr11ppteAXE3MOMItPDNfNkr5a4HcS3oiT7ih4Ln7y6TeCC5uXWPS3Yup_0s7whK3T44ndNlJRgc0Ii4_s3_A==")
print(d)
</code></pre>
<p>it throws an error: <strong>ValueError: Fernet key must be 32 url-safe base64-encoded bytes.</strong>
How can I avoid this error?</p>
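<p><code>Fernet()</code> requires a key that is exactly 32 bytes, url-safe base64-encoded (44 characters), and a Django <code>SECRET_KEY</code> is not in that format — django-fernet-fields derives its Fernet keys from <code>SECRET_KEY</code> rather than using it directly. A minimal stdlib sketch of turning an arbitrary secret into a validly formatted key:</p>

```python
import base64
import hashlib

def make_fernet_key(secret: str) -> bytes:
    """Hash an arbitrary secret down to exactly 32 bytes, then
    url-safe-base64 encode it -- the format Fernet() requires."""
    digest = hashlib.sha256(secret.encode("utf-8")).digest()  # 32 bytes
    return base64.urlsafe_b64encode(digest)                   # 44 chars

key = make_fernet_key("XXXX")
print(len(key))  # 44
```

<p>Note this SHA-256 derivation is only illustrative: django-fernet-fields uses its own key-derivation function, so a key produced this way will not decrypt ciphertexts the library created. To decrypt those, go through the library's own field/key-derivation code rather than constructing a <code>Fernet</code> by hand.</p>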
|
<python><django><fernet>
|
2023-01-16 12:45:33
| 0
| 1,497
|
Elvin Jafarov
|
75,134,415
| 11,662,972
|
How to set the format of the marginal graph in dash plotly?
|
<p>I have a scatterplot with a <code>marginal_y</code> subgraph made with:</p>
<pre class="lang-py prettyprint-override"><code>fig = px.scatter(df_filtrada,
x=df_filtrada.index,
y=pto,
marginal_y = "violin")
</code></pre>
<p>When I update some properties of the main graph, such as the background color, the change also affects the marginal subgraph:</p>
<pre><code>fig.layout.paper_bgcolor='rgba(0,0,0,0)'
fig.layout.plot_bgcolor='rgba(0,0,0,0)'
</code></pre>
<p>However, I haven't managed to remove the grid of the marginal axis:
<a href="https://i.sstatic.net/cH5ya.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cH5ya.png" alt="enter image description here" /></a></p>
<p>How can I format the <code>marginal_y</code> chart properties, such as the <code>showgrid</code> property? And can other properties, such as the title, be adjusted as well?</p>
|
<python><plotly-dash><plotly>
|
2023-01-16 12:44:54
| 1
| 385
|
Adrian Fischer
|
75,134,342
| 9,884,812
|
Django CaptureQueriesContext can't see my objects filter query in captures_queries
|
<p>I want to log the raw SQL statements in my Django testcases.<br />
I can see the INSERT sql statements, but I can't see the SELECT sql statements in the log.</p>
<p>I want to see every SQL statement in the log, whether it is CREATE, SELECT, UPDATE or something else.</p>
<p><strong>Output</strong></p>
<pre><code>$ python manage.py test
<OUTPUT OMITTED>
Found 1 test(s).
Running tests...
----------------------------------------------------------------------
[{'sql': 'INSERT INTO "myapp_testtabletwo" ("test_field") VALUES (\'abc\') RETURNING "myapp_testtabletwo"."id"', 'time': '0.001'}]
.
----------------------------------------------------------------------
Ran 1 test in 0.113s
OK
Destroying test database for alias 'default'...
Closing active connection
</code></pre>
<p><strong>tests.py</strong></p>
<pre><code>from django.db import connection
from django.test import TestCase
from django.test.utils import CaptureQueriesContext
from my_app.models import TestTableTwo
class DatabaseExampleTests(TestCase):
databases = '__all__'
def test_example(self):
with CaptureQueriesContext(connection) as ctx:
created_object = TestTableTwo.objects.create(test_field="abc")
all_objects = TestTableTwo.objects.all()
print(ctx.captured_queries)
</code></pre>
<p><strong>models.py</strong></p>
<pre><code>from django.db import models
class TestTableTwo(models.Model):
id = models.AutoField(primary_key=True)
test_field = models.CharField(max_length=100, blank=True, null=True)
</code></pre>
<p><strong>settings.py</strong></p>
<pre><code>DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': "testpostgres",
'USER': "postgres",
'PASSWORD': "password",
'HOST': "postgres"
}
}
</code></pre>
<p><strong>Version</strong></p>
<pre><code>$ python -V
Python 3.9.15
$ pip list
# output omitted
Package Version
---------------------- ---------
Django 4.1
psycopg2 2.9.5
</code></pre>
<p><strong>Edit:</strong><br />
When I change <code>all_objects = TestTableTwo.objects.all()</code> to <code>print(TestTableTwo.objects.all())</code>, I see the SELECT sql statement in the log.<br />
But I don't understand why it works with the <code>print()</code> statement.</p>
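<p>QuerySets are lazy: <code>TestTableTwo.objects.all()</code> only builds a query, and the SELECT is sent to the database when the queryset is evaluated — iterated, sliced, <code>list()</code>-ed, or printed (<code>print()</code> calls <code>repr()</code>, which evaluates it). That is why the SELECT only shows up once you print inside the capture block. A pure-Python sketch of the same pattern (the <code>LazyQuery</code> class is a toy stand-in, not Django code):</p>

```python
executed_sql = []  # stands in for CaptureQueriesContext's captured_queries

class LazyQuery:
    """Toy stand-in for a Django QuerySet: constructing it is free;
    only iterating it actually 'runs' the SQL."""
    def __init__(self, sql):
        self.sql = sql

    def __iter__(self):
        executed_sql.append(self.sql)  # SQL fires only on evaluation
        return iter([("row1",), ("row2",)])

qs = LazyQuery('SELECT * FROM "myapp_testtabletwo"')
assert executed_sql == []              # nothing executed yet

rows = list(qs)                        # evaluation forces the query
assert executed_sql == ['SELECT * FROM "myapp_testtabletwo"']
```

<p>In the test, forcing evaluation inside the context, e.g. <code>list(TestTableTwo.objects.all())</code>, should make the SELECT appear in <code>ctx.captured_queries</code>.</p>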
|
<python><django>
|
2023-01-16 12:38:17
| 1
| 539
|
Ewro
|
75,134,246
| 19,580,067
|
No module named 'mmcv._ext'
|
<p>I tried to train the model but got the following mmcv error:</p>
<pre><code>No module named 'mmcv._ext'
</code></pre>
<p>The mmcv library is already installed and imports fine:</p>
<pre><code>mmcv version = 1.4.0
Cuda version = 10.0
</code></pre>
<p>Any suggestions to fix this issue?</p>
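<p><code>No module named 'mmcv._ext'</code> usually means the lite <code>mmcv</code> package (which ships without the compiled CUDA/C++ ops) is installed instead of <code>mmcv-full</code>, or that the installed wheel was built for a different torch/CUDA combination. A hypothetical fix (the torch version in the find-links URL is an assumption — match it to your installed torch, and note that not every mmcv/CUDA/torch combination has prebuilt wheels):</p>

```shell
pip uninstall -y mmcv
# pick the find-links URL matching YOUR cuda/torch versions
# (cu100/torch1.4.0 here is only an example combination)
pip install mmcv-full==1.4.0 -f https://download.openmmlab.com/mmcv/dist/cu100/torch1.4.0/index.html
```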
|
<python><pytorch><openmmlab>
|
2023-01-16 12:28:54
| 6
| 359
|
Pravin
|
75,134,135
| 5,392,813
|
Python not respecting __init__ on windows, ModuleNotFoundError
|
<p>The following code works on macOS, but not on Windows:</p>
<pre><code>src
┣ __init__.py
┣ greeter.py
┗ helper.py
</code></pre>
<p>greeter.py</p>
<pre class="lang-py prettyprint-override"><code>from src.helper import world
def hello():
print("Hello {}".format(world()))
if __name__ == "__main__":
hello()
</code></pre>
<p>helper.py</p>
<pre class="lang-py prettyprint-override"><code>def world():
return "World"
</code></pre>
<p>Error on Windows:
<strong>ModuleNotFoundError</strong></p>
<p><a href="https://i.sstatic.net/7fQ78.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7fQ78.png" alt="enter image description here" /></a></p>
<p>Python on both systems is 3.9.13, 64-bit.</p>
<p>What am I missing here?</p>
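<p>A likely cause (an assumption, since the exact run command is not shown): when <code>greeter.py</code> is executed directly as a script, Python puts the <code>src</code> directory itself on <code>sys.path</code>, not its parent, so the absolute import <code>from src.helper import world</code> cannot be resolved. Running the file as a module from the project root (the directory that contains <code>src/</code>) puts that root on <code>sys.path</code> on both platforms:</p>

```shell
# run from the project root, i.e. the directory that CONTAINS src/
python -m src.greeter
```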
|
<python><import><modulenotfounderror>
|
2023-01-16 12:20:05
| 0
| 665
|
flymg
|
75,134,118
| 245,362
|
Get Python ID as a number for PyO3 PyAny object in Rust
|
<p>I'm using PyO3 to build a Python module in Rust. I have a class in Rust which accepts a <code>PyAny</code> to refer to an object in Python. As part of the hash function for the Rust class, I want to use the Python ID of this object so I can deduplicate the Rust class if the same Python object is referenced in multiple versions of the Rust class. I can see the Python ID in the <code>PyAny</code> object in Rust, but can't figure out how to get it into a plain number that I can pass to the hasher.</p>
<p>For example, I have the following in Rust:</p>
<pre class="lang-rust prettyprint-override"><code>#[pyclass]
pub struct MyClass {
obj: Option<Py<PyAny>>,
}
#[pymethods]
impl MyClass {
#[new]
fn new(obj: Option<Py<PyAny>>) -> Self {
if obj.is_some() {
println!("Obj: {:?}", obj.as_ref());
}
Self { obj }
}
}
</code></pre>
<p>Then, I can run in Python:</p>
<pre class="lang-py prettyprint-override"><code>obj = [1,2,3,4]
print(hex(id(obj)))
# '0x103da9100'
MyClass(obj)
# Obj: Some(Py(0x103da9100))
</code></pre>
<p>Both Python and Rust are showing the same number for the ID, which is great, but how can I get this number <code>0x103da9100</code> into a Rust variable? It looks like <code>PyAny</code> is just a tuple struct, so I tried the following but Rust complains that the fields of <code>PyAny</code> are private:</p>
<pre class="lang-rust prettyprint-override"><code>let obj_id = obj?.0;
</code></pre>
|
<python><rust><pyo3>
|
2023-01-16 12:18:39
| 1
| 563
|
David Chanin
|
75,134,088
| 9,749,124
|
How to select number of topics for Latent Dirichlet Allocation Topic-Model
|
<p>I am new to topic modeling and I came across the LDA model, but I am not sure if I am using it correctly. As far as I can tell from the documentation, the parameter called <code>n_components</code> represents the number of topics, am I right?
This is my code:</p>
<pre><code>import pandas as pd
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
# Reading Dataset (all textual data)
df = pd.read_csv("final_dataset.csv")
df = df[df.groupby('Textelements')['MainCategoryName'].transform('nunique') == 1]
df = df.drop_duplicates()
df = df.dropna()
df = df.reset_index(drop = True)
print(df.shape) # (63197, 2)
# I am selecting 5000 rows
df = df.head(5000)
# Shuffling the dataset
for i in range(4):
df = df.sample(frac=1)
features = df["Textelements"]
print(type(features)) # <class 'pandas.core.series.Series'>
transformerVectoriser = TfidfVectorizer(analyzer='word', ngram_range=(1,1), max_features = 5000)
vectorized_features = transformerVectoriser.fit_transform(features)
print(type(vectorized_features)) # <class 'scipy.sparse.csr.csr_matrix'>
# Build LDA Model
model = LatentDirichletAllocation(n_components = 20, # Number of topics
max_iter = 10, # Max learning iterations
learning_method='online',
random_state=100, # Random state
batch_size=128, # n docs in each learning iter
evaluate_every = -1, # compute perplexity every n iters, default: Don't
n_jobs = -1, # Use all available CPUs
)
model = model.fit_transform(vectorized_features)
</code></pre>
<p>My question is: how do I calculate accuracy?
I do not know whether I should use 5, 10 or 500 topics (the n_components parameter of the model, right?), so I want to see the accuracy for each specific number of topics in order to know what value to set for n_components.</p>
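<p>A note on the premise: <code>LatentDirichletAllocation</code> is unsupervised, so there is no accuracy score. Common proxies are held-out perplexity via <code>model.perplexity()</code> (lower is better) and topic-coherence measures (e.g. via gensim). Also, LDA is formulated over raw term counts, so <code>CountVectorizer</code> is usually a better fit than <code>TfidfVectorizer</code>. A rough model-selection sketch on synthetic counts (the candidate values and data are placeholders):</p>

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.RandomState(0)
X = rng.poisson(1.0, size=(200, 50))       # synthetic doc-term count matrix

scores = {}
for k in (2, 3, 5):                        # candidate numbers of topics
    lda = LatentDirichletAllocation(n_components=k, max_iter=5,
                                    random_state=0)
    lda.fit(X)
    scores[k] = lda.perplexity(X)          # lower is better

best_k = min(scores, key=scores.get)
print(scores, "-> chosen n_components:", best_k)
```

<p>In practice, compute perplexity on a held-out split rather than the training matrix, otherwise larger <code>n_components</code> tends to win by overfitting.</p>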
|
<python><scikit-learn><topic-modeling>
|
2023-01-16 12:16:33
| 0
| 3,923
|
taga
|
75,133,876
| 17,696,880
|
Concatenate each element of a list with all the elements of its list and put these strings inside a new list
|
<pre class="lang-py prettyprint-override"><code>#Load the input lists
load_names_list = ['Katherine', 'María Jose', 'Steve']
load_surnames_list = ['Taylor', 'Johnson', 'White', 'Clark']
#Shuffle the data lists: ['element_AA element_AA' , 'element_AA element_AB' , ... ]
names_with_names_list = []
surnames_with_surnames_list = []
names_with_surnames_list = []
print(names_with_names_list)
print(surnames_with_surnames_list)
print(names_with_surnames_list)
</code></pre>
<ul>
<li><p><code>names_with_names_list</code> matches each name with each of the names within the list <code>load_names_list</code>, for example: <code>name_A name_A</code> , <code>name_A name_B</code>, always leaving a space in between</p>
</li>
<li><p><code>surnames_with_surnames_list</code> It is the same process as with the elements of list <code>names_with_names_list</code>, but using the list <code>load_surnames_list</code></p>
</li>
<li><p><code>names_with_surnames_list</code> this case is somewhat different since you must test all possible orders between the elements of the 2 lists ( <code>load_names_list</code> and <code>load_surnames_list</code> )</p>
</li>
</ul>
<p>The concatenation that is giving me the most problems is the one required to obtain the list <code>names_with_surnames_list</code></p>
<p>This would be the output that should be returned when printing each of these 3 lists of strings.</p>
<pre><code>#for names_with_names_list
['Katherine Katherine', 'Katherine María Jose', 'Katherine Steve', 'María Jose Katherine', 'María Jose María Jose', 'María Jose Steve', 'Steve Katherine', 'Steve María Jose', 'Steve Steve']
#for surnames_with_surnames_list
['Taylor Taylor', 'Taylor Johnson', 'Taylor White', 'Taylor Clark', 'Johnson Taylor', 'Johnson Johnson', 'Johnson White', 'Johnson Clark', 'White Taylor', 'White Johnson', 'White White', 'White Clark', 'Clark Taylor', 'Clark Johnson', 'Clark White', 'Clark Clark']
#for names_with_surnames_list
['Katherine Taylor', 'Katherine Johnson', 'Katherine White', 'Katherine Clark', 'María Jose Taylor', 'María Jose Johnson', 'María Jose White', 'María Jose Clark', 'Steve Taylor', 'Steve Johnson', 'Steve White', 'Steve Clark', 'Taylor Katherine', 'Taylor María Jose', 'Taylor Steve', 'Johnson Katherine', 'Johnson María Jose', 'Johnson Steve', 'White Katherine', 'White María Jose', 'White Steve', 'Clark Katherine', 'Clark María Jose', 'Clark Steve']
</code></pre>
<p>What should I do to get these 3 lists from the 2 input lists? (note that there is always a whitespace between the elements)</p>
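<p>All three outputs are Cartesian products, which <code>itertools.product</code> generates in exactly the required order; <code>names_with_surnames_list</code> is the product taken in both directions, concatenated:</p>

```python
from itertools import product

load_names_list = ['Katherine', 'María Jose', 'Steve']
load_surnames_list = ['Taylor', 'Johnson', 'White', 'Clark']

# every pair within one list (including an element with itself)
names_with_names_list = [f"{a} {b}" for a, b in product(load_names_list, repeat=2)]
surnames_with_surnames_list = [f"{a} {b}" for a, b in product(load_surnames_list, repeat=2)]

# across the two lists, in both orders
names_with_surnames_list = (
    [f"{a} {b}" for a, b in product(load_names_list, load_surnames_list)]
    + [f"{a} {b}" for a, b in product(load_surnames_list, load_names_list)]
)

print(names_with_names_list[:2])  # ['Katherine Katherine', 'Katherine María Jose']
```

<p>The f-string <code>f"{a} {b}"</code> supplies the single space between the elements.</p>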
|
<python><arrays><python-3.x><string><list>
|
2023-01-16 11:57:19
| 1
| 875
|
Matt095
|
75,133,842
| 15,724,084
|
scrapy python project does not export data to excel with pandas
|
<p>My script is below. First it successfully reads the <code>plate_num_xlsx</code> value from the Excel file <code>data.xlsx</code>, then it asks Scrapy to scrape data from the URL. On each <code>parse()</code> invocation, I take the parsed values into <code>item</code> and then try to export them to Excel with pandas.</p>
<pre><code> if plate_num_xlsx==plate.replace(" ","").strip():
item= {"plate": plate.strip(), "price": price.strip()}
else:
item = {"plate": plate.strip(), "price": "-"}
df_output=pd.DataFrame([item],columns=["PLATE","PRICE"])
df_output.to_excel("output_res.xlsx",sheet_name="result",index=False,header=True)
</code></pre>
<p>The Excel file <code>output_res.xlsx</code> is created successfully, but the data parsed into <code>item</code> is not being written to that file. What could the issue be?</p>
<pre><code>import scrapy
from scrapy.crawler import CrawlerProcess
import pandas as pd
class plateScraper(scrapy.Spider):
name = 'scrapePlate'
allowed_domains = ['dvlaregistrations.direct.gov.uk']
def start_requests(self):
df=pd.read_excel('data.xlsx')
columnA_values=df['PLATE']
for row in columnA_values:
global plate_num_xlsx
plate_num_xlsx=row
base_url =f"https://dvlaregistrations.dvla.gov.uk/search/results.html?search={plate_num_xlsx}&action=index&pricefrom=0&priceto=&prefixmatches=&currentmatches=&limitprefix=&limitcurrent=&limitauction=&searched=true&openoption=&language=en&prefix2=Search&super=&super_pricefrom=&super_priceto="
url=base_url
yield scrapy.Request(url)
def parse(self, response):
for row in response.css('div.resultsstrip'):
plate = row.css('a::text').get()
price = row.css('p::text').get()
if plate_num_xlsx==plate.replace(" ","").strip():
item= {"plate": plate.strip(), "price": price.strip()}
else:
item = {"plate": plate.strip(), "price": "-"}
df_output=pd.DataFrame([item],columns=["PLATE","PRICE"])
df_output.to_excel("output_res.xlsx",sheet_name="result",index=False,header=True)
process = CrawlerProcess()
process.crawl(plateScraper)
process.start()
</code></pre>
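<p>A likely cause: each <code>parse()</code> call builds a one-row DataFrame and rewrites <code>output_res.xlsx</code> from scratch, so at best only the last response's row survives. The usual pattern is to accumulate items across all <code>parse()</code> calls (e.g. on <code>self</code>) and write the file once when the spider closes. A pure-Python sketch of that accumulate-then-write-once pattern (csv is used here so the sketch runs without pandas; in the spider you would append to <code>self.rows</code> in <code>parse()</code> and call <code>pd.DataFrame(self.rows).to_excel(...)</code> once in <code>closed()</code>):</p>

```python
import csv, os, tempfile

rows = []  # in the spider this would be self.rows, appended to in parse()

def parse_one(plate, price):
    """Stand-in for one parse() call: collect, don't write."""
    rows.append({"PLATE": plate, "PRICE": price})

def spider_closed(path):
    """Stand-in for Spider.closed(): write everything exactly once."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["PLATE", "PRICE"])
        writer.writeheader()
        writer.writerows(rows)

parse_one("ABC 123", "£250")
parse_one("XYZ 9", "-")
out_path = os.path.join(tempfile.mkdtemp(), "output_res.csv")
spider_closed(out_path)
```

<p>Note also that the module-level global <code>plate_num_xlsx</code> is rebound by the <code>start_requests</code> loop before the (asynchronous) responses arrive, so every callback sees the last plate; passing the plate to the callback via <code>cb_kwargs</code> is the usual fix.</p>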
|
<python><excel><pandas><scrapy>
|
2023-01-16 11:52:30
| 2
| 741
|
xlmaster
|
75,133,682
| 5,810,060
|
'float' object is not iterable", 'occurred at index 1'
|
<p>I have the following dataset:</p>
<pre><code>company_name_Ignite Mate Bence Raul Marina
01 TELECOM LTD NaN 01 Telecom, Ltd. 01 Telecom, Ltd. NaN
0404 Investments Ltd NaN 0404 INVESTMENTS LIMITED 0404 INVESTMENTS LIMITED NaN
</code></pre>
<p>I have a custom function that compares the Mate, Bence, Raul and Marina columns against the 'company_name_Ignite' column and returns a similarity score for each column against the company_name_Ignite column.</p>
<pre><code>for col in ['Mate', 'Bence', 'Raul','Marina']:
df[f"{col}_score"] = df.apply(lambda x: similar(x["company_name_Ignite"], x[col]) * 100,
axis=1)
</code></pre>
<p>The problem is that when I try to run the code I get the error below:</p>
<pre><code>TypeError Traceback (most recent call last)
<ipython-input-93-dc1c54d95f98> in <module>()
1 for col in ['Mate', 'Bence', 'Raul','Marina']:
----> 2 df[f"{col}_score"] = df.apply(lambda x: similar(x["company_name_Ignite"], x[col])
* 100, axis=1)
c:\ProgramData\Anaconda3\lib\site-packages\pandas\core\frame.py in apply(self, func, axis,
broadcast, raw, reduce, result_type, args, **kwds)
6002 args=args,
6003 kwds=kwds)
-> 6004 return op.get_result()
6005
6006 def applymap(self, func):
c:\ProgramData\Anaconda3\lib\site-packages\pandas\core\apply.py in get_result(self)
140 return self.apply_raw()
141
--> 142 return self.apply_standard()
143
144 def apply_empty_result(self):
c:\ProgramData\Anaconda3\lib\site-packages\pandas\core\apply.py in apply_standard(self)
246
247 # compute the result using the series generator
--> 248 self.apply_series_generator()
249
...
--> 311 for i, elt in enumerate(b):
312 indices = b2j.setdefault(elt, [])
313 indices.append(i)
TypeError: ("'float' object is not iterable", 'occurred at index 1')
</code></pre>
<p>Can I please get some help with why this is happening, as I don't see any errors in the code?</p>
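For context, this traceback typically means a NaN cell (a <code>float</code>) reached the string comparison inside <code>similar</code>. A minimal sketch of a NaN guard, assuming <code>similar</code> is <code>difflib.SequenceMatcher</code>-based (the real implementation is not shown in the question):

```python
import pandas as pd
from difflib import SequenceMatcher

def similar(a, b):
    # Hypothetical stand-in for the question's similar(): missing cells arrive
    # as float('nan'), which SequenceMatcher cannot iterate, so guard first.
    if pd.isna(a) or pd.isna(b):
        return 0.0
    return SequenceMatcher(None, a, b).ratio()

df = pd.DataFrame({
    "company_name_Ignite": ["01 TELECOM LTD", "0404 Investments Ltd"],
    "Mate": [float("nan"), "0404 INVESTMENTS LIMITED"],
})
df["Mate_score"] = df.apply(lambda x: similar(x["company_name_Ignite"], x["Mate"]) * 100,
                            axis=1)
```

With the guard in place, NaN rows score 0 instead of raising; alternatively `df.fillna("")` before the loop has the same effect.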
|
<python><python-3.x><pandas><similarity>
|
2023-01-16 11:37:45
| 1
| 906
|
Raul Gonzales
|
75,133,613
| 5,758,423
|
Making decorators produce pickle-serializable functions
|
<p>Why is it that this works:</p>
<pre><code>import pickle
from functools import wraps

def wrapper(func):
    @wraps(func)
    def _func(*args, **kwargs):
        print(args, kwargs)
        return func(*args, **kwargs)
    return _func

def foo(a: int):
    return a + 1

foo = wrapper(foo)
pickle.dumps(foo)  # okay!
</code></pre>
<p>But this does not:</p>
<pre><code>def foo(a: int):
    return a + 1

_foo = wrapper(foo)
pickle.dumps(_foo)  # not okay!
</code></pre>
<p>The latter gives me the error:</p>
<pre><code>PicklingError: Can't pickle <function foo at 0x12e116b00>: it's not the same object as __main__.foo
</code></pre>
<p>If I do a</p>
<pre><code>_foo.__qualname__ = '_foo'
</code></pre>
<p>it'll make <code>_foo</code> pickle-able though.</p>
<p>But I'd like my <code>wrapper</code> function to take care of that so I'll be able to use it to decorate functions without losing their serializability or having to add the <code>__qualname__</code> thing "manually".</p>
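One way to see what is going on: pickle serializes a function by reference (its <code>__module__</code> plus <code>__qualname__</code>), and <code>@wraps</code> copies <code>foo</code>'s qualname onto the wrapper. Rebinding the result to the original name, as the first snippet does, keeps that lookup consistent. A sketch of the working case (no library API assumed beyond <code>functools.wraps</code> and <code>pickle</code>):

```python
import pickle
from functools import wraps

def wrapper(func):
    @wraps(func)
    def _func(*args, **kwargs):
        return func(*args, **kwargs)
    return _func

def foo(a: int):
    return a + 1

# Rebinding the original name means the module attribute named by
# foo.__qualname__ is the wrapper itself, so pickle's identity check passes.
foo = wrapper(foo)
restored = pickle.loads(pickle.dumps(foo))
```

A decorator cannot discover the variable name its result will be bound to, which is why binding to a different name (`_foo`) requires fixing `__qualname__` by hand.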
|
<python><pickle>
|
2023-01-16 11:30:57
| 0
| 2,432
|
thorwhalen
|
75,133,570
| 4,041,387
|
redis-py compatible with redis-server using docker image redis:5.0.9-alpine
|
<p>We have a Redis server running as a StatefulSet on a K8s cluster with 3 replicas, using the Docker image <code>redis:5.0.9-alpine</code>. The server is running with cluster mode set to <code>yes</code>.</p>
<p>Now we are trying to connect to this cluster from Python code using the official Redis Python package, i.e. <a href="https://github.com/redis/redis-py" rel="nofollow noreferrer">https://github.com/redis/redis-py</a>, but we are getting the below error:</p>
<pre><code>File "/code/./app/client/redisclient.py", line 40, in create_redis_cluster_mode
rc = Redis(startup_nodes=nodes, **{'password':redis_pwd, 'decode_responses':True})
File "/usr/local/lib/python3.9/site-packages/redis/cluster.py", line 588, in __init__
self.nodes_manager = NodesManager(
File "/usr/local/lib/python3.9/site-packages/redis/cluster.py", line 1330, in __init__
self.initialize()
File "/usr/local/lib/python3.9/site-packages/redis/cluster.py", line 1632, in initialize
self.default_node = self.get_nodes_by_server_type(PRIMARY)[0]
IndexError: list index out of range
</code></pre>
<p>Python code:</p>
<pre><code>from redis.cluster import RedisCluster as Redis
from redis.cluster import ClusterNode

def create_redis_cluster():
    nodes = [ClusterNode(redis_host, redis_port, server_type='primary')]
    rc = Redis(startup_nodes=nodes, **{'password': redis_pwd, 'decode_responses': True})
</code></pre>
<p>Initially we were using the latest version of the Redis package, but we downgraded to 4.1.0 and still got the same error.</p>
<p>Note: <code>redis:5.0.9-alpine</code> is a two-year-old Docker image, and we cannot move to the latest version as we have a lot of dependencies around it.</p>
<hr />
<p><strong>New finding:</strong> I exec'd into all 3 pods of the Redis StatefulSet, and issuing the <code>cluster info</code> command shows <code>cluster_state:fail</code>:</p>
<pre><code>127.0.0.1:6379> cluster info
cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:1
cluster_size:0
cluster_current_epoch:0
cluster_my_epoch:0
cluster_stats_messages_sent:0
cluster_stats_messages_received:0
127.0.0.1:6379> cluster nodes
7ddd7d8ce5352db7b3d1f168b4c7db94535d6b2e :6379@16379 myself,master - 0 0 0 connected
</code></pre>
<p>Any help fixing this would be greatly appreciated!</p>
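For what it's worth, the <code>IndexError</code> fires because redis-py found no primary node with assigned slots, which matches the <code>cluster_state:fail</code> / <code>cluster_slots_assigned:0</code> output above. A small server-free sketch for pre-checking that output before constructing <code>RedisCluster</code> (the helper name is made up for illustration):

```python
def parse_cluster_info(raw: str) -> dict:
    # CLUSTER INFO returns one "key:value" pair per line; fold them into a dict.
    info = {}
    for line in raw.strip().splitlines():
        key, _, value = line.partition(":")
        info[key.strip()] = value.strip()
    return info

sample = """cluster_state:fail
cluster_slots_assigned:0
cluster_known_nodes:1"""

info = parse_cluster_info(sample)
print(info["cluster_state"])  # fail
```

A healthy cluster reports <code>cluster_state:ok</code> with all 16384 slots assigned; with a single known node and zero slots, the three pods never actually formed a cluster, so the fix is on the server side (e.g. joining the pods with <code>redis-cli --cluster create</code>) rather than in the client library version.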
|
<python><kubernetes><redis><redis-py>
|
2023-01-16 11:27:04
| 0
| 425
|
Imran
|