QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (stringdate, 2022-12-10 09:42:47 – 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars) |
|---|---|---|---|---|---|---|---|---|
75,759,646
| 127,670
|
Are prepared statements supported with Azure Cosmos Cassandra API?
|
<p>Are prepared statements supported with Azure Cosmos Cassandra API with Python?</p>
<p>It <em>appears</em> not - when I execute</p>
<pre><code>stmt = session.prepare("SELECT provider FROM providers WHERE country_code=?")
</code></pre>
<p>I get the following exception:</p>
<pre><code>Traceback (most recent call last):
File "cosmos-cql.py", line 42, in <module>
select_provider_stmt = session.prepare("SELECT provider FROM providers WHERE country_code=?")
File "cassandra\cluster.py", line 3072, in cassandra.cluster.Session.prepare
File "cassandra\cluster.py", line 3069, in cassandra.cluster.Session.prepare
File "cassandra\cluster.py", line 4901, in cassandra.cluster.ResponseFuture.result
File "cassandra\connection.py", line 1229, in cassandra.connection.Connection.process_msg
File "cassandra\protocol.py", line 1196, in cassandra.protocol._ProtocolHandler.decode_message
File "cassandra\protocol.py", line 744, in cassandra.protocol.ResultMessage.recv_body
File "cassandra\protocol.py", line 734, in cassandra.protocol.ResultMessage.recv
File "cassandra\protocol.py", line 775, in cassandra.protocol.ResultMessage.recv_results_prepared
File "cassandra\protocol.py", line 819, in cassandra.protocol.ResultMessage.recv_prepared_metadata
File "cassandra\protocol.py", line 1321, in cassandra.protocol.read_short
File "C:\Users\Ian\.conda\envs\enerlytics\lib\site-packages\cassandra\marshal.py", line 22, in <lambda>
unpack = lambda s: packer.unpack(s)[0]
struct.error: unpack requires a buffer of 2 bytes
</code></pre>
<p>I'd normally expect to resolve this in a few minutes by googling, but I can find absolutely nothing about prepared statements and Cosmos Cassandra API. It's as if either the problem doesn't exist (I'm making some really silly mistake) or no one else has even thought to try it.</p>
<p>I am using version 3.25.1 of DataStax's Cassandra driver.</p>
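A possible workaround, in case server-side preparation is the failing step: the DataStax Python driver also supports client-side parameter binding for plain (non-prepared) statements using %s placeholders, which avoids sending a PREPARE round-trip to the endpoint at all. This is only a sketch — `session` and the table come from the question, and whether the Cosmos endpoint accepts it is untested here.

```python
# Hedged sketch: %s-style binding with a simple (non-prepared) statement.
# For simple statements the cassandra-driver substitutes parameters
# client-side, so no PREPARE message is ever sent to the server.
query = "SELECT provider FROM providers WHERE country_code=%s"
params = ("GB",)  # illustrative country code

# With a live session from the question (not executed here):
# rows = session.execute(query, params)
```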
|
<python><azure-cosmosdb><azure-cosmosdb-cassandra-api>
|
2023-03-16 17:24:51
| 2
| 6,235
|
Ian Goldby
|
75,759,641
| 13,039,962
|
Issues avoiding division by zero in pandas
|
<p>I have this df:</p>
<pre><code> A B
0 2 8
1 3 1
2 5 0
3 -1 2
4 2 1
5 3 0
.. .. ..
</code></pre>
<p>I want to calculate <code>((df['B']-df['A'])/df['B'])*100</code> in a new column, so I wrote this code to avoid division by zero:</p>
<pre><code>if data['B']!=0:
data['D%']=((data['B']-data['A'])/data['B'])*100
else:
data['D%']=np.nan
</code></pre>
<p>But I got this error: <code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().</code></p>
<p>Would you mind helping me?</p>
<p>Thanks in advance.</p>
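One way a reader might sidestep the ambiguous-truth-value error is to vectorize the condition with `np.where`, which decides element by element instead of asking for a single truth value of the whole Series (a sketch using a small frame shaped like the question's):

```python
import numpy as np
import pandas as pd

data = pd.DataFrame({'A': [2, 3, 5, -1, 2, 3], 'B': [8, 1, 0, 2, 1, 0]})

# np.where evaluates the condition per element; rows where B == 0 get NaN.
# (pandas turns the division-by-zero results into inf, which np.where
# then discards in favour of the NaN branch.)
data['D%'] = np.where(data['B'] != 0,
                      (data['B'] - data['A']) / data['B'] * 100,
                      np.nan)
```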
|
<python><pandas>
|
2023-03-16 17:24:17
| 2
| 523
|
Javier
|
75,759,564
| 12,845,199
|
Flag element in groupby if it is equal to its successor in the next row
|
<p>I have the following df</p>
<pre><code>df = pd.DataFrame(
{'id':[1,1,1,2,2,2,3,3,3],
'value':['pot','pot','jebus','pot','jebus','pot','pot','jebus','jebus']})
</code></pre>
<p>What I want to do is to identify whether an id contains repeated values, but only if a row is followed by another row with the same value. So if I have 'pot' and right after that 'pot' again, I want to flag both as true.</p>
<p>One thing worth noting: it needs to be scoped to the ids. So if I have 'pot' in the last row of one id and 'pot' in the first row of a different id, I don't want to flag those values.</p>
<p>The value must be followed by the same value in the next row, meaning 'pot', 'jebus', 'pot' gets no flag.</p>
<p>Wanted result:</p>
<pre><code>s = {true,true,false,false,false,false,true,true}
</code></pre>
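A sketch of one possible approach: compare each value with its per-group neighbours via groupby shifts, so matches never leak across ids (column names taken from the question):

```python
import pandas as pd

df = pd.DataFrame(
    {'id': [1, 1, 1, 2, 2, 2, 3, 3, 3],
     'value': ['pot', 'pot', 'jebus', 'pot', 'jebus', 'pot', 'pot', 'jebus', 'jebus']})

g = df.groupby('id')['value']
# True when the previous OR next row *within the same id* holds the same
# value; shift() inserts NaN at group edges, so ids never bleed together.
df['flag'] = g.shift().eq(df['value']) | g.shift(-1).eq(df['value'])
```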
|
<python><pandas>
|
2023-03-16 17:17:44
| 2
| 1,628
|
INGl0R1AM0R1
|
75,759,563
| 1,258,509
|
How to create multiple models from a single endpoint w/ Django RF
|
<p>I want to have a single endpoint that accepts a POST request and creates an "Order" that can be one of two types. The User should be able to submit an order (of either type) as JSON, and create a new Order.</p>
<p>Behind the scenes, two models are created. One "Base Order" Model that describes some common characteristics, and one "Type A Order" or "Type B Order" that describes the characteristics specific to that order. The actual order types and their fields will be vastly more complicated so here are just described as A and B with a few charFields.</p>
<p>The general idea though is that the same endpoint is used to create two models, a <code>BaseOrder</code> model, and a model of a type determined by the <code>order_type</code> field submitted.</p>
<p>I think the conditional part of this (which order to make) should not be too difficult. But <strong>I'm unsure how to create two separate models from the same endpoint, even if it was always of the same type.</strong></p>
<p>Models:</p>
<pre><code>class BaseOrder(models.Model):
name = models.CharField(max_length=80)
timestamp = models.DateTimeField(null=False)
class OrderType(models.TextChoices):
TYPEA = "A", "Placeholder A"
TYPEB = "B", "Placeholder B"
order_type = models.CharField(
max_length=16,
verbose_name="Order Type",
choices=OrderType.choices,
)
class TypeAOrder(models.Model):
a_field1 = models.CharField(max_length=80)
a_field2 = models.CharField(max_length=80)
a_field3 = models.CharField(max_length=80)
class TypeBOrder(models.Model):
b_field1 = models.CharField(max_length=80)
b_field2 = models.CharField(max_length=80)
</code></pre>
<p>A POST request to the /order endpoint that contains a Type A order might look like this:</p>
<pre><code>{
    "name": "test order",
    "timestamp": "2022-08-01T12:32:10Z",
    "order_type": "A",
    "a_field1": "important",
    "a_field2": "data",
    "a_field3": "something"
}
</code></pre>
<p>And doing that would create TWO separate models. The <code>BaseOrder</code> Model and the <code>TypeAOrder</code> Model.</p>
<p>I found <a href="https://github.com/MattBroach/DjangoRestMultipleModels" rel="nofollow noreferrer">this library</a> that seems to be able to do this for combining models in a GET request, but it does not seem to support creating models. I'm assuming it's possible...</p>
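Framework aside, the dispatch itself can be sketched without Django: split the incoming payload into base fields and type-specific fields, then (for instance inside a DRF serializer's overridden create()) save each part to its model. Field names below follow the question; the helper itself is illustrative, not DRF API.

```python
# Illustrative payload splitter; in DRF this logic would typically live in
# a serializer's create() (or a service it calls), which would then do
# BaseOrder.objects.create(**base) and TypeAOrder.objects.create(**typed).
TYPE_FIELDS = {
    "A": ("a_field1", "a_field2", "a_field3"),
    "B": ("b_field1", "b_field2"),
}

def split_order_payload(payload):
    base = {k: payload[k] for k in ("name", "timestamp", "order_type")}
    typed = {k: payload[k] for k in TYPE_FIELDS[payload["order_type"]]
             if k in payload}
    return base, typed
```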
|
<python><django><django-rest-framework><model>
|
2023-03-16 17:17:44
| 3
| 1,467
|
Brian C
|
75,759,544
| 10,192,593
|
Joining two multi-dimensional arrays by adding a new dimension in Python
|
<p>Let's say I have two arrays of shape (3,2,3)</p>
<pre><code>a = np.array([[[1,2,3],[4,5,6]],[[7,8,9],[10,11,12]],[[13,14,15],[16,17,18]]])
b = np.array([[[1,2,3],[4,5,6]],[[7,8,9],[10,11,12]],[[13,14,15],[16,17,18]]])
a.shape
b.shape
</code></pre>
<p>I would like to join these two arrays by adding a new dimension to get (2,3,2,3) like this:</p>
<pre><code>c = np.array([[[[1,2,3],[4,5,6]],[[7,8,9],[10,11,12]],[[13,14,15],[16,17,18]]], [[[1,2,3],[4,5,6]],[[7,8,9],[10,11,12]],[[13,14,15],[16,17,18]]]])
c.shape
</code></pre>
<p>How would I do this?</p>
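For reference, numpy has a builtin for exactly this shape change: `np.stack` joins arrays along a brand-new leading axis, unlike `np.concatenate`, which joins along an existing one.

```python
import numpy as np

a = np.array([[[1, 2, 3], [4, 5, 6]],
              [[7, 8, 9], [10, 11, 12]],
              [[13, 14, 15], [16, 17, 18]]])
b = a.copy()

# stack inserts a new axis (axis=0 by default): (3,2,3) + (3,2,3) -> (2,3,2,3)
c = np.stack([a, b])
```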
|
<python><numpy>
|
2023-03-16 17:16:22
| 1
| 564
|
Stata_user
|
75,759,542
| 13,340,923
|
How to catch errors/exception in gekko when you have an infeasible solution?
|
<p>I am getting an <code>Exception: @error: Solution Not Found</code> with gekko. By using <code>disp=True</code>
I got more details: <code>Unsuccessful with error code 0</code> and
<code>Warning: no more possible trial points and no integer solution Maximum iterations</code>. <br/>
I checked the documentation and there's no indication of how to find out the type of the error. <br />
To summarize: I'm trying to catch the error type so that I can handle it and return a proper explanation of the error. <br/>
Does anyone have an idea of how to find the type of the error?</p>
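Gekko surfaces solver failures as a plain Exception whose message carries the APMonitor error text, so one hedged pattern is to catch it and branch on the string. This is a sketch: the message matching is an assumption based on the error shown above and may need adjusting for other failure modes.

```python
def solve_or_report(model):
    """Run model.solve() and translate a solver failure into a response
    dict instead of an uncaught exception. `model` is any object with a
    gekko-style solve(); the substring check mirrors the error in the
    question."""
    try:
        model.solve(disp=False)
        return {"status": "ok"}
    except Exception as exc:
        if "Solution Not Found" in str(exc):
            return {"status": "infeasible", "detail": str(exc)}
        raise  # unrelated errors still propagate
```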
|
<python><nonlinear-optimization><gekko>
|
2023-03-16 17:16:15
| 1
| 375
|
ElVincitore
|
75,759,437
| 1,107,226
|
How to print text with specific format to a printer in Python?
|
<p>I am printing a text string to a printer like so:</p>
<pre class="lang-py prettyprint-override"><code>import os
string_value = 'Hello, World!'
printer = os.popen('lpr -P Canon_iP110_series', 'w')
printer.write(string_value)
printer.close()
</code></pre>
<p>This works perfectly, printing the text to the printer in what I assume is the default font/color (black).</p>
<p>I want to change some features of the text, though. For example, I want to bold the word 'Hello', perhaps, or print 'World' in green maybe.</p>
<p>I have found several answers having to do with "printing" text, but they're giving escape codes relating to the terminal/console output - not to a printer. Eg, <a href="https://stackoverflow.com/q/8924173/1107226">How can I print bold text in Python?</a></p>
<p>These escape codes do work when I print the string to the console. Eg,</p>
<pre class="lang-py prettyprint-override"><code>bold_on = '\033[1m'
bold_off = '\033[0m'
string_value = '{0}Hello{1}, World!'.format(bold_on, bold_off)
print(string_value)
</code></pre>
<p>does output:</p>
<blockquote>
<p><strong>Hello</strong>, World!</p>
</blockquote>
<p>to the console. However, it does not bold the 'Hello' text on the printer.</p>
<p>How can I bold text at least, and possibly change other font attributes, when printing text to a printer in Python?</p>
<p>(Python 3.9+)</p>
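ANSI escape codes are interpreted by terminals, not printers; lpr forwards the bytes untouched. Bolding has to come from a page description the printer understands, such as PostScript, a rendered PDF, or an image. A minimal hedged sketch, assuming a PostScript-capable queue (many CUPS queues rasterize PostScript even for non-PostScript printers); font names and coordinates are illustrative:

```python
import os  # used by the commented lpr pipe below

# A tiny PostScript page: 'Hello' in a bold face, the rest regular.
ps_page = """%!PS
/Helvetica-Bold findfont 14 scalefont setfont
72 720 moveto (Hello) show
/Helvetica findfont 14 scalefont setfont
(, World!) show
showpage
"""

# Piping to the queue from the question (commented out so the sketch
# does not actually print):
# printer = os.popen('lpr -P Canon_iP110_series', 'w')
# printer.write(ps_page)
# printer.close()
```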
|
<python><text><printing>
|
2023-03-16 17:07:20
| 1
| 8,899
|
leanne
|
75,759,382
| 8,068,733
|
ImportError: cannot import name 'runtime' in OpenVINO
|
<p>I am trying to run the openvino notebook for object detection task as follows</p>
<pre><code>from openvino import runtime as ov
</code></pre>
<p>But I get the following error</p>
<pre><code>---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-5-5dff2ebd5508> in <module>
8 import numpy as np
9 from IPython import display
---> 10 from openvino import runtime as ov
11 from openvino.tools.mo.front import tf as ov_tf_front
12
ImportError: cannot import name 'runtime'
</code></pre>
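`openvino.runtime` was introduced with the 2022.1 release, so this ImportError usually means an older openvino package is installed; upgrading (e.g. `pip install -U openvino`) is the usual fix. A defensive probe (stdlib-only sketch) can confirm which situation applies without crashing the notebook:

```python
import importlib.util

def runtime_api_available():
    """True if the installed openvino exposes the `openvino.runtime`
    module (new API, 2022.1+); False if openvino is missing or too old."""
    try:
        return importlib.util.find_spec("openvino.runtime") is not None
    except ModuleNotFoundError:  # openvino itself is not installed
        return False
```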
|
<python><machine-learning><intel><openvino>
|
2023-03-16 17:02:31
| 2
| 3,135
|
Ashok Kumar Jayaraman
|
75,759,348
| 14,349,010
|
Using weights and biases for storing pytorch experiments of different architectures
|
<p>Following the basic pytorch <a href="https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html" rel="nofollow noreferrer">tutorial</a>, I'm trying to experiment with different CNN architectures (different number of layers, channels per layer, etc.) and I want to stay organized, so I'm trying to use <code>wandb</code>.</p>
<p>I'm looking for a simple way to try out different architectures and save both the results (accuracies) of each architecture and the architecture itself. Currently what I have is a code snippet that allows me to save the model with its weights, but I also need a way to reproduce my results, including reproducing the code for my class (using <code>repr(model)</code> I was able to save the model's parameters, but not the code of the class itself, which is unfortunate).</p>
<p>Is there a right/better way to manage my experiments and easily reproduce my results including the code of the class? I considered using the <code>inspect</code> module to save the source code but I'm both not sure this is the best way and google colab doesn't support this module.</p>
<p>This is the code I have currently:</p>
<pre class="lang-py prettyprint-override"><code>torch.manual_seed(0)
# MODEL DESCRIPTION
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = torch.flatten(x, 1) # flatten all dimensions except batch
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
model = Net()
model_desc = repr(model)
# TRAINING....
# EVALUATING...
accuracy = ...
# SAVE RESULTS
run = wandb.init(project='test-project', name=model_desc)
artifact = wandb.Artifact('model', type='model')
artifact.add_file(MODEL_PATH)
# 2. Save mode inputs and hyperparameters
config = run.config
config.test_number = 1
config.model_desc = model_desc
# 3. Log metrics over time to visualize performance
run.log({"accuracy": accuracy})
# 4. Log an artifact to W&B
run.log_artifact(artifact)
run.finish()
</code></pre>
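On the reproducing-the-class-code part: wandb can snapshot source files itself via `run.log_code()`, and for classes defined in a plain .py file the stdlib `inspect.getsource` works too. Both can fail for classes typed directly into some notebook cells, which is worth guarding against — a small stdlib-only sketch:

```python
import inspect

def model_source_or_none(obj):
    """Return the source text of a class/function when Python can locate
    it, else None (e.g. for objects defined interactively with no backing
    file). The string can be logged next to repr(model) in the config."""
    try:
        return inspect.getsource(obj)
    except (OSError, TypeError):
        return None
```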
|
<python><machine-learning><pytorch><wandb><python-inspect>
|
2023-03-16 16:59:20
| 1
| 373
|
Ariel Yael
|
75,759,146
| 19,980,284
|
Convert dataframe from wide to long format on multiple columns with differing names
|
<p>I have a dataframe with shape 500x200 and I'd like to pivot/melt it based on a subset of the columns. Here is an example test dataframe where I have an <code>id</code> column, three <code>case</code> columns, and an additional column with data for each id.</p>
<pre class="lang-py prettyprint-override"><code>pd.DataFrame({'id': [1,2], 'case1': [3,1], 'case2': [3,2], 'case3': [3,2], 'vpd': [2,1]})
id case1 case2 case3 vpd
0 1 3 3 3 2
1 2 1 2 2 1
</code></pre>
<p>I'd like to pivot on the case columns only, like so:</p>
<pre class="lang-py prettyprint-override"><code>pd.DataFrame({'index': ['case1', 'case2', 'case3', 'case1', 'case2', 'case3'], 'id': [1,1,1,2,2,2], 'vpd': [2,2,2,1,1,1],
'case': [3,3,3,1,2,2]}).set_index('index')
id vpd case
index
case1 1 2 3
case2 1 2 3
case3 1 2 3
case1 2 1 1
case2 2 1 2
case3 2 1 2
</code></pre>
<p>Where each case column becomes a row in the pivoted dataframe. This sort of seems to get at what I want:</p>
<pre class="lang-py prettyprint-override"><code>pd.wide_to_long(test_df, "case", i="id", j="case#").reset_index()
id case# vpd case
0 1 1 2 3
1 2 1 1 1
2 1 2 2 3
3 2 2 1 2
4 1 3 2 3
5 2 3 1 2
</code></pre>
<p>But not exactly. Any other ideas for how to get to my desired output?</p>
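One melt-based sketch that reaches the desired shape (on the real 500x200 frame the id/value column lists would be built programmatically; names here come from the example):

```python
import pandas as pd

test_df = pd.DataFrame({'id': [1, 2], 'case1': [3, 1], 'case2': [3, 2],
                        'case3': [3, 2], 'vpd': [2, 1]})

# melt keeps id/vpd as identifiers and turns every case column into a row;
# sorting by id then by the former column name reproduces the target order.
out = (test_df
       .melt(id_vars=['id', 'vpd'], var_name='index', value_name='case')
       .sort_values(['id', 'index'])
       .set_index('index'))
```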
|
<python><pandas><dataframe><pivot><melt>
|
2023-03-16 16:42:05
| 1
| 671
|
hulio_entredas
|
75,758,944
| 6,494,707
|
Manual weight updates: torch.autograd returns none, why?
|
<p>I am new to pytorch and it may sound a simple question, sorry for that. I have written a function that updates the parameters of a network manually:</p>
<pre><code>def update_params(self, loss, update_lr):
# parameter update
updated_params = OrderedDict()
for name, param in self.graph_model.gnn.named_parameters():
if param.requires_grad:
grad = torch.autograd.grad(loss, param, create_graph=True, allow_unused=True)
if grad is None:
updated_params = param
else:
pdb.set_trace()
updated_params = param - update_lr * grad
updated_params[name] = updated_params
return updated_params
</code></pre>
<p>the loss is as follows:</p>
<pre><code>loss
tensor([0.0693], device='cuda:0', grad_fn=<AddBackward0>)
</code></pre>
<p>and the first param of loop is :</p>
<pre><code>(Pdb) param
Parameter containing:
tensor([[-0.2142, -0.1182, -0.2988, ..., 0.2933, -0.0804, -0.3286],
[-0.1250, 0.2673, 0.1617, ..., 0.2363, 0.2026, -0.2973],
[ 0.0588, 0.2348, -0.2333, ..., 0.1882, 0.0286, -0.3238],
...,
[-0.1961, 0.1434, 0.0306, ..., 0.3135, 0.2239, -0.0953],
[ 0.1190, 0.2062, -0.2643, ..., 0.3116, 0.1146, -0.1994],
[ 0.0340, -0.2294, 0.2095, ..., -0.2376, 0.0456, 0.3151]],
device='cuda:0', requires_grad=True)
</code></pre>
<p>the grad is none for the first param (first iteration in the loop)</p>
<pre><code>(pdb) grad
(None,)
</code></pre>
<p>however, when I check</p>
<pre><code>(Pdb) grad is None
False
</code></pre>
<p>it returns False (means goes to the <code>else</code>). I am not sure where I am doing the mistake?</p>
|
<python><deep-learning><pytorch><torchvision><pytorch-geometric>
|
2023-03-16 16:22:15
| 0
| 2,236
|
S.EB
|
75,758,917
| 2,100,039
|
DLL load failed while importing _netCDF4 in Python
|
<p>I am trying to download and plot NCAR reanalysis data, which requires importing the netCDF4 module. Here is a link to a python program similar to mine that is used to plot this NCAR .nc data:
<a href="https://geoclimatologyblog.wordpress.com/2016/04/24/plot-highs-and-lows-in-a-ncepncar-reanalysis-slp-file-with-python/" rel="nofollow noreferrer">https://geoclimatologyblog.wordpress.com/2016/04/24/plot-highs-and-lows-in-a-ncepncar-reanalysis-slp-file-with-python/</a></p>
<p>Here is my program below. What I do not understand is that I DO have netcdf4 installed in my environment, and I have even created a new environment from scratch with python 3.8.16, xarray, netcdf4, and pandas, AND I still get this error. Please help, thank you.</p>
<pre><code>import os
os.environ["PROJ_LIB"] = 'C:\\Users\\Yury\\anaconda3\\Library\\share'
from sys import exit
#import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
import cartopy.crs as ccrs
#https://psl.noaa.gov/repository/entry/show?entryid=synth%3Ae570c8f9-ec09-4e89-93b4-babd5651e7a9%3AL25jZXAucmVhbmFseXNpcy5kZXJpdmVkL3N1cmZhY2Uvc2xwLm1vbi5tZWFuLm5j
# Download the dataset from above url
#url = 'https://rda.ucar.edu/thredds/fileServer/ds083.2/pressure/mon-anom/slp.mon.ltm.nc'
#file_path = '//PORFILER03.ar.local/gtdshare/IDL/wtypes/data_NCEP_slp/slp.mon.mean.nc'
file_path = 'C:/Users/U321103/Downloads/mslp.mon.mean.nc'
with open(file_path,'r')as file:
dataset = xr.open_dataset(file_path)
</code></pre>
<p>My error:</p>
<pre><code>runfile('//porfiler03.ar.local/gtdshare/VORTEX/MONTHLY_VARIABILITY/scripts/NCAR_Get_MSLP_USA_WORKING_MONTHLY.py', wdir='//porfiler03.ar.local/gtdshare/VORTEX/MONTHLY_VARIABILITY/scripts')
Traceback (most recent call last):
File ~\Anaconda3\envs\Maps\lib\site-packages\spyder_kernels\py3compat.py:356 in compat_exec
exec(code, globals, locals)
File \\porfiler03.ar.local\gtdshare\vortex\monthly_variability\scripts\ncar_get_mslp_usa_working_monthly.py:23
dataset = xr.open_dataset(file_path)
File ~\Anaconda3\envs\Maps\lib\site-packages\xarray\backends\api.py:539 in open_dataset
backend_ds = backend.open_dataset(
File ~\Anaconda3\envs\Maps\lib\site-packages\xarray\backends\netCDF4_.py:572 in open_dataset
store = NetCDF4DataStore.open(
File ~\Anaconda3\envs\Maps\lib\site-packages\xarray\backends\netCDF4_.py:343 in open
import netCDF4
File ~\Anaconda3\envs\Maps\lib\site-packages\netCDF4\__init__.py:3
from ._netCDF4 import *
ImportError: DLL load failed while importing _netCDF4: The specified procedure could not be found.
</code></pre>
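This error comes from the netCDF4 binary extension itself, not from xarray or the script; on Windows it often indicates mismatched HDF5/netcdf DLLs in the environment (for example, mixing pip and conda installs of netcdf4/h5py). A hedged first diagnostic is to import netCDF4 directly, bypassing xarray, and see whether the same DLL failure reproduces:

```python
def netcdf4_status():
    """Attempt the import that xarray performs internally and report either
    the library version or the failure text, to localize the DLL problem."""
    try:
        import netCDF4
        return f"ok: netCDF4 {netCDF4.__version__}"
    except ImportError as exc:
        # Same DLL error here => reinstall netcdf4 (and hdf5) from a single
        # channel in a fresh environment.
        return f"broken: {exc}"

print(netcdf4_status())
```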
|
<python><netcdf4>
|
2023-03-16 16:19:21
| 0
| 1,366
|
user2100039
|
75,758,845
| 1,112,325
|
Convert string to date with multiple formats
|
<p>I have dates in the following format in the same file:</p>
<pre><code>"%m/%d/%Y" --> YEAR with 4 digits
"%m/%d/%y" --> YEAR with 2 digits
</code></pre>
<p>I want to be able to parse both formats. Here's my attempt:</p>
<pre><code>df[["field1", "field2"]] = pd.to_datetime(
df[["field1", "field2"]], format="%m/%d/%Y", errors="coerce"
).fillna(pd.to_datetime(df[["field1", "field2"]], format="%m/%d/%y"))
</code></pre>
<p>But that is not working:</p>
<pre><code>ValueError: to assemble mappings requires at least that [year, month, day] be specified: [day,month,year] is missing
</code></pre>
<p>If I try with only one series:</p>
<pre><code>df["field1"] = pd.to_datetime(
df["field1"], format="%m/%d/%Y", errors="coerce"
).fillna(pd.to_datetime(df["field1"], format="%m/%d/%y"))
</code></pre>
<p>I get a ValueError, and it doesn't seem like both formats are being tried.</p>
<pre><code>ValueError: unconverted data remains: 21
</code></pre>
<p>What is the most efficient way to use different date formats?</p>
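A per-series sketch: parse with `errors="coerce"` under both formats and fill, trying the two-digit pattern first — strptime's `%Y` will happily match a two-digit year, parsing "03/16/23" as year 23, so `%Y`-first gives wrong hits. (The first error in the question is different: `pd.to_datetime` on a whole DataFrame tries to *assemble* dates from year/month/day columns, so this must be applied per column, e.g. via `df[cols].apply(...)`. On pandas 2.0+, `format="mixed"` is another option.)

```python
import pandas as pd

s = pd.Series(["03/16/2023", "03/16/23"])

# %y first: "03/16/2023" fails it (unconverted "23") and becomes NaT,
# then the %Y pass fills those gaps. Both passes must coerce.
parsed = pd.to_datetime(s, format="%m/%d/%y", errors="coerce")
parsed = parsed.fillna(pd.to_datetime(s, format="%m/%d/%Y", errors="coerce"))
```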
|
<python><pandas><date><datetime>
|
2023-03-16 16:12:44
| 1
| 3,037
|
briba
|
75,758,669
| 12,944,030
|
ERROR when running airflow task: Log file does not exist Failed to fetch log file from worker. Request URL is missing an http:// or https:// protocol
|
<p>I am running a DAG in airflow (Docker compose ) containing a</p>
<ul>
<li>DummyOperator : start</li>
<li>PythonOperator : simple python function to print text</li>
<li>DummyOperator : end</li>
</ul>
<p><a href="https://i.sstatic.net/ScbUc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ScbUc.png" alt="enter image description here" /></a></p>
<p>when I run the DAG I get this error :</p>
<blockquote>
<p>*** Log file does not exist: /opt/airflow/logs/dag_id=TEST/run_id=manual__2023-03-16T20:02:49.256977+00:00/task_id=test/attempt=1.log</p>
<p>*** Fetching from: http://:8793/log/dag_id=TEST/run_id=manual__2023-03-16T20:02:49.256977+00:00/task_id=test/attempt=1.log</p>
<p>*** Failed to fetch log file from worker. Request URL is missing an 'http://' or 'https://' protocol.</p>
</blockquote>
|
<python><airflow>
|
2023-03-16 15:58:17
| 1
| 349
|
moe_
|
75,758,644
| 913,098
|
How to check for existence of multiple keys in s3 bucket in boto3?
|
<p>I have a long list of keys I want to check for existence in s3. They have different prefixes, and I want this to be fast, thus checking one by one won't do the trick.</p>
<p><a href="https://stackoverflow.com/a/38376288/913098">This</a> checks one by one<br />
<a href="https://stackoverflow.com/a/34562141/913098">This</a> assumes similar prefixes.</p>
<p>I need a call similar to the following</p>
<pre><code>def keys_that_exist(keys: Sequence[str]) -> Sequence[bool]:
return [True if i % 2 == 0 else False for i, key in enumerate(keys)] # just some example output.
</code></pre>
<p>How to do this without an API call to s3 for each and every key, using boto3 or another Python library?</p>
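If one request per key is unavoidable (S3 has no batch-HEAD operation), issuing the requests concurrently keeps wall-clock time close to a single round-trip; the only truly call-free routes are listing shared prefixes or an S3 Inventory report. A hedged sketch — the client is injected so the function can be exercised without AWS, and the broad `except` stands in for `botocore.exceptions.ClientError` (404) in real use:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Sequence

def keys_that_exist(s3_client, bucket: str, keys: Sequence[str],
                    max_workers: int = 32) -> Sequence[bool]:
    """HEAD each key in parallel; still one API call per key, but the
    calls overlap so total latency grows slowly with len(keys)."""
    def exists(key: str) -> bool:
        try:
            s3_client.head_object(Bucket=bucket, Key=key)
            return True
        except Exception:  # botocore ClientError (404/403) in practice
            return False
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(exists, keys))
```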
|
<python><amazon-web-services><amazon-s3><boto3>
|
2023-03-16 15:56:28
| 0
| 28,697
|
Gulzar
|
75,758,625
| 13,916,049
|
How to match the index of a dataframe with a column of another dataframe and replace it with the values of an adjacent column?
|
<p>If the "Composite_Element_REF" column values of the <code>sym</code> dataframe match the index of <code>df_normal_symbol</code>, I want to replace the index of <code>df_normal_symbol</code> with the values of the adjacent column in the <code>sym</code> dataframe, i.e., the <code>Gene_Symbol</code> column.</p>
<pre><code>df_normal_symbol.index = df_normal.loc[sym["Composite_Element_REF"], df_normal.index].values
</code></pre>
<p>Traceback:</p>
<pre><code>---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Input In [38], in <cell line: 1>()
----> 1 df_normal_symbol.index = df_normal.loc[sym["Composite_Element_REF"], df_normal.index].values
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/pandas/core/indexing.py:961, in _LocationIndexer.__getitem__(self, key)
959 if self._is_scalar_access(key):
960 return self.obj._get_value(*key, takeable=self._takeable)
--> 961 return self._getitem_tuple(key)
962 else:
963 # we by definition only have the 0th axis
964 axis = self.axis or 0
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/pandas/core/indexing.py:1147, in _LocIndexer._getitem_tuple(self, tup)
1145 # ugly hack for GH #836
1146 if self._multi_take_opportunity(tup):
-> 1147 return self._multi_take(tup)
1149 return self._getitem_tuple_same_dim(tup)
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/pandas/core/indexing.py:1098, in _LocIndexer._multi_take(self, tup)
1082 """
1083 Create the indexers for the passed tuple of keys, and
1084 executes the take operation. This allows the take operation to be
(...)
1095 values: same type as the object being indexed
1096 """
1097 # GH 836
-> 1098 d = {
1099 axis: self._get_listlike_indexer(key, axis)
1100 for (key, axis) in zip(tup, self.obj._AXIS_ORDERS)
1101 }
1102 return self.obj._reindex_with_indexers(d, copy=True, allow_dups=True)
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/pandas/core/indexing.py:1099, in <dictcomp>(.0)
1082 """
1083 Create the indexers for the passed tuple of keys, and
1084 executes the take operation. This allows the take operation to be
(...)
1095 values: same type as the object being indexed
1096 """
1097 # GH 836
1098 d = {
-> 1099 axis: self._get_listlike_indexer(key, axis)
1100 for (key, axis) in zip(tup, self.obj._AXIS_ORDERS)
1101 }
1102 return self.obj._reindex_with_indexers(d, copy=True, allow_dups=True)
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/pandas/core/indexing.py:1327, in _LocIndexer._get_listlike_indexer(self, key, axis)
1324 ax = self.obj._get_axis(axis)
1325 axis_name = self.obj._get_axis_name(axis)
-> 1327 keyarr, indexer = ax._get_indexer_strict(key, axis_name)
1329 return keyarr, indexer
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/pandas/core/indexes/base.py:5782, in Index._get_indexer_strict(self, key, axis_name)
5779 else:
5780 keyarr, indexer, new_indexer = self._reindex_non_unique(keyarr)
-> 5782 self._raise_if_missing(keyarr, indexer, axis_name)
5784 keyarr = self.take(indexer)
5785 if isinstance(key, Index):
5786 # GH 42790 - Preserve name from an Index
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/pandas/core/indexes/base.py:5845, in Index._raise_if_missing(self, key, indexer, axis_name)
5842 raise KeyError(f"None of [{key}] are in the [{axis_name}]")
5844 not_found = list(ensure_index(key)[missing_mask.nonzero()[0]].unique())
-> 5845 raise KeyError(f"{not_found} not in index")
</code></pre>
<p>Expected output:</p>
<pre><code>pd.DataFrame({'TCGA-CZ-5457-11A': {nan: 0.102035759907132,
'VDAC3': 0.893345348116849,
'ACTN1': 0.847131904106541,
'ATP2A1': 0.580488869725658,
'SFRP1': 0.470767306311169,
nan: 0.147416341092933,
'NIPA2': 0.0120942766037886},
'TCGA-BQ-5888-11A': {nan: 0.147149659097321,
'VDAC3': 0.910195291355705,
'ACTN1': 0.816669300689161,
'ATP2A1': 0.514358122653833,
'SFRP1': 0.441313292788889,
nan: 0.245573257728479,
'NIPA2': 0.0147939578910346},
'TCGA-B0-4846-11A': {nan: 0.113480434528015,
'VDAC3': 0.886088576813537,
'ACTN1': 0.664793188247786,
'ATP2A1': 0.516081593815069,
'SFRP1': 0.400027063258341,
nan: 0.190871544331105,
'NIPA2': 0.0099210543418163},
'TCGA-CJ-4920-11A': {nan: 0.111657157534977,
'VDAC3': 0.918934002233238,
'ACTN1': 0.773517265412361,
'ATP2A1': 0.571990354691741,
'SFRP1': 0.489779654823996,
nan: 0.24188997202946,
'NIPA2': 0.0097521875052793,
'cg00000658': 0.919644862137697,
'cg00000721': 0.94229345837988},
'TCGA-B0-4849-11A': {nan: 0.13898299158527,
'VDAC3': 0.892691778501429,
'ACTN1': 0.697181652158477,
'ATP2A1': 0.47288614791789,
'SFRP1': 0.377593967259526,
nan: 0.149294919317939,
'NIPA2': 0.0107697567839102},
'TCGA-BQ-5891-11A': {nan: 0.0943910860490585,
'VDAC3': 0.798899904372697,
'ACTN1': 0.689450514637892,
'ATP2A1': 0.568046821756013,
'SFRP1': 0.464626018317553,
nan: 0.231639837864006,
'NIPA2': 0.0487962187571897},
'TCGA-BP-5186-11A': {'cg00000165': 0.110112361205661,
'VDAC3': 0.827523582109836,
'ACTN1': 0.757610109046985,
'ATP2A1': 0.484209696051666,
'SFRP1': 0.412811564854099,
nan: 0.167420794630144,
'NIPA2': 0.0104916507529456},
'TCGA-A3-3373-11A': {nan: 0.117830727124756,
'VDAC3': 0.90581935721054,
'ACTN1': 0.761457792189881,
'ATP2A1': 0.507633250448944,
'SFRP1': 0.51611998698701,
nan: 0.1737386620934,
'NIPA2': 0.0108894792403789},
'TCGA-BP-5180-11A': {nan: 0.119205137521098,
'VDAC3': 0.891261719087507,
'ACTN1': 0.746767379239554,
'ATP2A1': 0.463089282194905,
'SFRP1': 0.464692516947339,
nan: 0.228609755811405,
'NIPA2': 0.0095536851256427}})
</code></pre>
<p>Data:</p>
<p><code>df_normal_symbol</code></p>
<pre><code>pd.DataFrame({'TCGA-CZ-5457-11A': {'cg00000165': 0.102035759907132,
'cg00000236': 0.893345348116849,
'cg00000289': 0.847131904106541,
'cg00000292': 0.580488869725658,
'cg00000321': 0.470767306311169,
'cg00000363': 0.147416341092933,
'cg00000622': 0.0120942766037886,
'cg00000658': 0.93695494977688,
'cg00000721': 0.975854444522775},
'TCGA-BQ-5888-11A': {'cg00000165': 0.147149659097321,
'cg00000236': 0.910195291355705,
'cg00000289': 0.816669300689161,
'cg00000292': 0.514358122653833,
'cg00000321': 0.441313292788889,
'cg00000363': 0.245573257728479,
'cg00000622': 0.0147939578910346,
'cg00000658': 0.933589698841974,
'cg00000721': 0.93311604425552},
'TCGA-B0-4846-11A': {'cg00000165': 0.113480434528015,
'cg00000236': 0.886088576813537,
'cg00000289': 0.664793188247786,
'cg00000292': 0.516081593815069,
'cg00000321': 0.400027063258341,
'cg00000363': 0.190871544331105,
'cg00000622': 0.0099210543418163,
'cg00000658': 0.863861413753196,
'cg00000721': 0.935039379256587},
'TCGA-CJ-4920-11A': {'cg00000165': 0.111657157534977,
'cg00000236': 0.918934002233238,
'cg00000289': 0.773517265412361,
'cg00000292': 0.571990354691741,
'cg00000321': 0.489779654823996,
'cg00000363': 0.24188997202946,
'cg00000622': 0.0097521875052793,
'cg00000658': 0.919644862137697,
'cg00000721': 0.94229345837988},
'TCGA-B0-4849-11A': {'cg00000165': 0.13898299158527,
'cg00000236': 0.892691778501429,
'cg00000289': 0.697181652158477,
'cg00000292': 0.47288614791789,
'cg00000321': 0.377593967259526,
'cg00000363': 0.149294919317939,
'cg00000622': 0.0107697567839102,
'cg00000658': 0.855919013625267,
'cg00000721': 0.927295110742551},
'TCGA-BQ-5891-11A': {'cg00000165': 0.0943910860490585,
'cg00000236': 0.798899904372697,
'cg00000289': 0.689450514637892,
'cg00000292': 0.568046821756013,
'cg00000321': 0.464626018317553,
'cg00000363': 0.231639837864006,
'cg00000622': 0.0487962187571897,
'cg00000658': 0.879745629519866,
'cg00000721': 0.575514399845868},
'TCGA-BP-5186-11A': {'cg00000165': 0.110112361205661,
'cg00000236': 0.827523582109836,
'cg00000289': 0.757610109046985,
'cg00000292': 0.484209696051666,
'cg00000321': 0.412811564854099,
'cg00000363': 0.167420794630144,
'cg00000622': 0.0104916507529456,
'cg00000658': 0.889507665618008,
'cg00000721': 0.956223420054809},
'TCGA-A3-3373-11A': {'cg00000165': 0.117830727124756,
'cg00000236': 0.90581935721054,
'cg00000289': 0.761457792189881,
'cg00000292': 0.507633250448944,
'cg00000321': 0.51611998698701,
'cg00000363': 0.1737386620934,
'cg00000622': 0.0108894792403789,
'cg00000658': 0.831762722499429,
'cg00000721': 0.950671976784028},
'TCGA-BP-5180-11A': {'cg00000165': 0.119205137521098,
'cg00000236': 0.891261719087507,
'cg00000289': 0.746767379239554,
'cg00000292': 0.463089282194905,
'cg00000321': 0.464692516947339,
'cg00000363': 0.228609755811405,
'cg00000622': 0.0095536851256427,
'cg00000658': 0.922630855301534,
'cg00000721': 0.958168591617036}})
</code></pre>
<p><code>sym</code></p>
<pre><code>pd.DataFrame({'Composite_Element_REF': {1: 'cg00000108',
2: 'cg00000109',
3: 'cg00000165',
4: 'cg00000236',
5: 'cg00000289',
6: 'cg00000292',
7: 'cg00000321',
8: 'cg00000363',
9: 'cg00000622'},
'Gene_Symbol': {1: 'C3orf35',
2: 'FNDC3B',
3: nan,
4: 'VDAC3',
5: 'ACTN1',
6: 'ATP2A1',
7: 'SFRP1',
8: nan,
9: 'NIPA2'}})
</code></pre>
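A sketch of a direct route: build a Series mapping Composite_Element_REF to Gene_Symbol and pass the index through it, falling back to the original label for probes absent from sym — which is why cg00000658/cg00000721 survive unchanged in the expected output. Shown here on minimal stand-ins with the question's shapes:

```python
import pandas as pd

# Minimal stand-ins: one NaN symbol, one mapped symbol, one probe not in sym.
df_normal_symbol = pd.DataFrame(
    {'TCGA-CZ-5457-11A': [0.102, 0.893, 0.937]},
    index=['cg00000165', 'cg00000236', 'cg00000658'])
sym = pd.DataFrame({'Composite_Element_REF': ['cg00000165', 'cg00000236'],
                    'Gene_Symbol': [float('nan'), 'VDAC3']})

mapping = sym.set_index('Composite_Element_REF')['Gene_Symbol']
# .get(k, k) keeps probes that sym does not cover under their original id.
df_normal_symbol.index = [mapping.get(k, k) for k in df_normal_symbol.index]
```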
|
<python><pandas><dataframe>
|
2023-03-16 15:55:10
| 1
| 1,545
|
Anon
|
75,758,581
| 11,968,226
|
Python convert two lines of Excel to CSV without row or column numbers
|
<p>I want to convert a <code>xlsx-file</code> to <code>.csv</code>. I tried it with this function:</p>
<pre><code>def excel_to_csv(excel_file, csv_file):
# read Excel file into pandas dataframe
df = pd.read_excel(excel_file)
# write dataframe to CSV file
df.to_csv(csv_file, index=False, header=False)
</code></pre>
<p>This is my Excel-file:</p>
<p><a href="https://i.sstatic.net/kynrq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kynrq.png" alt="enter image description here" /></a></p>
<p>It is always two lines.</p>
<p>But when running the function above it removes the first line of my Excel file.
That happens because I set <code>header=False</code>. However, if I don't set that, it places stray <code>1</code>s in my <code>csv</code> file.</p>
<p>As workaround I could add an empty first line, but is there a better solution for this?</p>
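The first row disappears because `read_excel` consumes the first sheet row as column names by default, and the stray values are pandas' auto-generated column labels leaking into the output. Reading with `header=None` treats every row as data, so no empty-line workaround is needed — a sketch of the adjusted function:

```python
import pandas as pd

def excel_to_csv(excel_file, csv_file):
    # header=None: do not promote the first Excel row to column names.
    df = pd.read_excel(excel_file, header=None)
    # header=False: do not write pandas' auto-generated 0..N column labels.
    df.to_csv(csv_file, index=False, header=False)
```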
|
<python><excel><csv><export-to-csv>
|
2023-03-16 15:51:39
| 1
| 2,404
|
Chris
|
75,758,568
| 80,544
|
Will using time.sleep(1) in a loop block other routes in my FastAPI server?
|
<p>I have a route that sends a number of emails. To avoid rate limits, I am using <code>time.sleep(1)</code> between emails. If I understand correctly, the route will run in its own thread or coroutine and this will not block other requests, but I thought it would be good to confirm this with the community. Here is a code example (simplified to focus on the issue):</p>
<pre><code>@router.get("/send_a_bunch_of_emails")
def send_a_bunch_of_emails(db: Session = Depends(get_db)):
users = get_a_bunch_of_users(db)
for user in users:
send_email(to=user.email)
time.sleep(1) # Maximum of 1 email per second
</code></pre>
<p>I just want to confirm that if, hypothetically, this sent 10 emails, it wouldn't block FastAPI for 10 seconds. Based on my testing this doesn't appear to be the case, but I'm wary of gotchas.</p>
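For reference, a plain <code>def</code> route is executed in a worker threadpool, so a blocking <code>time.sleep</code> occupies one worker rather than the event loop. A rough stand-alone sketch of that behaviour (plain <code>concurrent.futures</code>, not actual FastAPI):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the email route: blocks its own worker thread with sleeps.
def send_a_bunch_of_emails():
    for _ in range(3):
        time.sleep(0.1)  # stand-in for the 1-second rate limit
    return "done"

start = time.monotonic()
with ThreadPoolExecutor() as pool:
    future = pool.submit(send_a_bunch_of_emails)
    # The submitting ("event loop") thread returns immediately and stays free.
    still_responsive = time.monotonic() - start < 0.05
    result = future.result()

print(still_responsive, result)  # True done
```

The gotcha is the reverse case: an <code>async def</code> route calling <code>time.sleep</code> would block the event loop; there <code>await asyncio.sleep(1)</code> is needed instead.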
|
<python><fastapi><blocking>
|
2023-03-16 15:51:01
| 2
| 5,161
|
Glenn
|
75,758,516
| 11,232,438
|
Why is my JSON not loading if all lines are delimited?
|
<p>I'm trying to load a sample json:</p>
<pre><code>{
"bot_token": "##############################",
"GOOGLE_APPLICATION_CREDENTIALS": "assets/firebase-adminsdk.json",
"databaseURL": "https://########-default-rtdb.firebaseio.com/",
"storageBucket": "gs://##########.appspot.com",
"owner_id": 0000000000,
"admins": [0000000000]
}
</code></pre>
<p>Nothing special:</p>
<pre><code>import json

# Load configuration file
with open('assets/config.json', 'r') as f:
config = json.load(f)
</code></pre>
<p>But I'm getting this error:</p>
<pre><code>Exception has occurred: JSONDecodeError
Expecting ',' delimiter: line 6 column 16 (char 251)
File "D:\SingleStoreBot\bot_init.py", line 58, in <module>
config = json.load(f)
^^^^^^^^^^^^
json.decoder.JSONDecodeError: Expecting ',' delimiter: line 6 column 16 (char 251)
</code></pre>
<p>All the lines look properly delimited, so where is the error?</p>
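For reference, JSON forbids leading zeros in number literals, so placeholder values like <code>0000000000</code> break parsing: the parser reads the first <code>0</code> as a complete number and then expects a comma, which matches the reported error. A minimal reproduction:

```python
import json

# The parser accepts "0" as a full number, then hits the next "0" where it
# expects "," or "}" - hence "Expecting ',' delimiter".
bad = '{"owner_id": 0000000000}'
try:
    json.loads(bad)
    error = None
except json.JSONDecodeError as exc:
    error = exc.msg

print(error)                          # Expecting ',' delimiter
print(json.loads('{"owner_id": 0}'))  # {'owner_id': 0}
```

Quoting the placeholder (<code>"0000000000"</code>) or using a plain <code>0</code> makes the file parse.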
|
<python><json>
|
2023-03-16 15:46:55
| 1
| 745
|
kuhi
|
75,758,436
| 2,011,284
|
Set TTL for Supervisor jobs
|
<p>I've been using Supervisor for over a year now to run 40 jobs on a project.<br />
Today, without any changes to the code or the server, two jobs got stuck, which caused some problems with the services I provide to my customers.</p>
<p>These jobs are very light, written in Python, and they usually process the workload in under 2 minutes.<br />
However, they were stuck for hours.
Inside the code, I can't see anything that could've caused this.</p>
<p>Since I know 5 minutes would be more than enough to run the job, is there a way for me to set a TTL for these jobs?</p>
|
<python><supervisord>
|
2023-03-16 15:39:51
| 1
| 4,927
|
CIRCLE
|
75,758,203
| 10,242,641
|
Assign Group ID Based on Time Interval PySpark
|
<p>I have a PySpark data frame like the following (a snapshot; I can have multiple dates)</p>
<pre><code>UID Time
1 10/1/2016 7:25:52 AM
1 10/1/2016 8:53:38 AM
1 10/1/2016 11:18:50 AM
1 10/1/2016 11:19:32 AM
2 10/1/2016 10:25:36 AM
2 10/1/2016 10:28:08 AM
3 10/1/2016 10:57:41 AM
3 10/1/2016 8:57:10 PM
</code></pre>
<p>I want to assign a unique identifier to each set of actions carried out by a user at most N hours after the previous one. As an example, with a time range of 3 hours, the output should be something like this:</p>
<pre><code>UID Time GROUP_ID
1 10/1/2016 7:25:52 AM 1
1 10/1/2016 8:53:38 AM 1
1 10/1/2016 11:18:50 AM 1
1 10/1/2016 3:19:32 PM 2
2 10/1/2016 10:25:36 AM 3
2 10/1/2016 10:28:08 AM 3
3 10/1/2016 10:57:41 AM 4
3 10/1/2016 8:57:10 PM 5
</code></pre>
<p>Any help?</p>
<p>Thanks</p>
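One common approach is gap-and-island sessionisation: flag rows whose gap to the previous row of the same user exceeds N hours, then take a cumulative sum of the flags. A sketch of the idea in pandas (the same lag/cumulative-sum pattern translates to PySpark window functions); timestamps follow the expected-output table:

```python
import pandas as pd

df = pd.DataFrame({
    "UID": [1, 1, 1, 1, 2, 2, 3, 3],
    "Time": pd.to_datetime(
        ["10/1/2016 7:25:52 AM", "10/1/2016 8:53:38 AM",
         "10/1/2016 11:18:50 AM", "10/1/2016 3:19:32 PM",
         "10/1/2016 10:25:36 AM", "10/1/2016 10:28:08 AM",
         "10/1/2016 10:57:41 AM", "10/1/2016 8:57:10 PM"],
        format="%m/%d/%Y %I:%M:%S %p"),
})

df = df.sort_values(["UID", "Time"])
# A new group starts at each user's first row or after a gap of more than 3 hours.
gap = df.groupby("UID")["Time"].diff() > pd.Timedelta(hours=3)
new_group = gap | ~df["UID"].duplicated()
df["GROUP_ID"] = new_group.cumsum()

print(df["GROUP_ID"].tolist())  # [1, 1, 1, 2, 3, 3, 4, 5]
```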
|
<python><apache-spark><pyspark>
|
2023-03-16 15:17:50
| 1
| 391
|
mht
|
75,758,195
| 1,008,883
|
SqlAlchemy TEXT is not defined
|
<p>I recently discovered an issue where a query I wanted to run on MS SQL Server was not executing with an error <code>Textual SQL expression '\n SELECT CONVE...' should be explicitly declared as text('\n SELECT CONVE...')</code></p>
<p>This seemed fairly straightforward: I just need to cast the query as text. I've attempted to do this but now always see <code>exception: name 'TEXT' not found</code>.</p>
<p>I've tried with:
<code>from sqlalchemy import Text</code> and</p>
<p><code>from sqlalchemy import text</code> and</p>
<p><code>from sqlalchemy import TEXT</code></p>
<p>I am using SQLAlchemy <code>v2.0.6</code>. My IDE doesn't have any issues with the import. Wondering if anyone has any ideas.</p>
|
<python><sqlalchemy>
|
2023-03-16 15:17:39
| 0
| 644
|
jackie
|
75,758,160
| 11,913,986
|
compare columns with list of string values in two different df and return ID which has the highest match between the lists
|
<p>I have a pyspark dataframe 'jobs' like this:</p>
<pre><code> jobs=
id position keywords
5663123 A ["Engineer","Quality"]
5662986 B ['Java']
5663237 C ['Art', 'Paint', 'Director']
5663066 D ["Junior","Motion","Designer"]
5663039 E ['Junior', 'Designer']
5663153 F ["Client","Specialist"]
5663266 G ['Pyhton']
</code></pre>
<p>And I have another dataframe named 'people' as:</p>
<pre><code>people=
pid skills
5662321 ["Engineer","L2"]
5663383 ["Quality","Engineer","L2"]
5662556 ["Art","Director"]
5662850 ["Junior","Motion","Designer"]
5662824 ['Designer', 'Craft', 'Junior']
5652496 ["Client","Support","Specialist"]
5662949 ["Community","Manager"]
</code></pre>
<p>I want to do is match the list values of people['skills'] with jobs['keywords']</p>
<p>If the match is at least 2 tokens, i.e. len(set(A) & set(B)) >= 2, then return the ID of that particular job from the jobs table (jobs['id']) in a new list column people['match'], because there could be more than one match; None otherwise.</p>
<p>The final people df should look like:</p>
<pre><code>people=
pid skills match
5662321 ["Engineer","L2"] None
5663383 ["Quality","Engineer","L2"] [5663123]
5662556 ["Art","Director"] [5663237]
5662850 ["Junior","Motion","Designer"] [5663066,5663039]
5662824 ['Designer', 'Craft', 'Junior'] [5663066,5663039]
5652496 ["Client","Support","Specialist"] [5663153]
5662949 ["Community","Manager"] None
</code></pre>
<p>I currently have a solution in place which is not efficient at all:
right now I iterate over the Spark dataframes row-wise, which takes a lot of time for a large df.</p>
<p>I am open to pandas solutions as well.</p>
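As a sketch of a pandas alternative that at least avoids re-building sets on every comparison (sample rows trimmed for brevity): precompute the job keyword sets once, then map each skills list over them with set intersection:

```python
import pandas as pd

jobs = pd.DataFrame({
    "id": [5663123, 5663066, 5663039],
    "keywords": [["Engineer", "Quality"],
                 ["Junior", "Motion", "Designer"],
                 ["Junior", "Designer"]],
})
people = pd.DataFrame({
    "pid": [5662321, 5663383, 5662850],
    "skills": [["Engineer", "L2"],
               ["Quality", "Engineer", "L2"],
               ["Junior", "Motion", "Designer"]],
})

# Build the job keyword sets once instead of on every comparison.
job_sets = [(jid, set(kw)) for jid, kw in zip(jobs["id"], jobs["keywords"])]

def matches(skills):
    s = set(skills)
    hits = [jid for jid, kw in job_sets if len(kw & s) >= 2]
    return hits or None

people["match"] = people["skills"].map(matches)
print(people["match"].tolist())  # [None, [5663123], [5663066, 5663039]]
```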
|
<python><python-3.x><dataframe><apache-spark><pyspark>
|
2023-03-16 15:15:17
| 1
| 739
|
Strayhorn
|
75,757,890
| 2,340,703
|
Python overload single argument
|
<p>Let's assume I have a function with many arguments, e.g.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
def f(df: pd.DataFrame, a: int, b: int, c: int, d: int, inplace: bool = True) -> Optional[pd.DataFrame]:
raise NotImplementedError
</code></pre>
<p>The function will modify the dataframe if <code>inplace=True</code> and return a modified copy if <code>inplace=False</code>.</p>
<p>I know I can do</p>
<pre class="lang-py prettyprint-override"><code>@overload
def f(df: pd.DataFrame, a: int, b: int, c: int, d: int, inplace: Literal[True] = True) -> None:
...
@overload
def f(df: pd.DataFrame, a: int, b: int, c: int, d: int, inplace: Literal[False]) -> pd.DataFrame:
...
</code></pre>
<p>to inform the typing system about this fact.</p>
<p>However, I'm wondering if there's a way to do this without repeating the entire function definition, which seems cumbersome if there are many arguments. I'm looking for something like</p>
<pre class="lang-py prettyprint-override"><code>@overload
def f(..., inplace: Literal[True]) -> None:
...
@overload
def f(..., inplace: Literal[False]) -> pd.DataFrame:
...
</code></pre>
<p><strong>EDIT:</strong> Several people have made the point that the API design itself may be flawed. While this may be true, in this particular case I am retrospectively adding type hints to an existing library and changing the interface is not an option.</p>
|
<python><overloading><python-typing>
|
2023-03-16 14:49:41
| 1
| 2,950
|
Gregor Sturm
|
75,757,786
| 113,586
|
Getting the base class in a descriptor called via super()
|
<p>I have a following (simplified of course) descriptor:</p>
<pre class="lang-py prettyprint-override"><code>class d:
def __init__(self, method):
self.method = method
def __get__(self, instance, owner=None):
print(instance, owner, self.method)
return self.method(instance)
</code></pre>
<p>In <code>__get__()</code> I want to access the class where the decorated function is defined, but the <code>owner</code> argument is <code>B</code> on both invocations of <code>__get__()</code> that happen in the following code:</p>
<pre class="lang-py prettyprint-override"><code>class A:
@d
def f(self):
return "A"
class B(A):
@d
def f(self):
return super().f + "B"
print(B().f)
</code></pre>
<p>I've checked <a href="https://docs.python.org/3.11/howto/descriptor.html#invocation-from-super" rel="nofollow noreferrer">the Descriptor HowTo Guide section</a> on calling descriptors via <code>super()</code> and it says that this invocation indeed passes the subclass type to the parent class <code>__get__()</code>. Does it suggest I may need to define <code>__getattribute__()</code> to get what I want, or is there a different way? I understand that the <code>super()</code> call doesn't just return <code>A</code> but a proxy for <code>B</code> but I feel like there should be a way to get <code>A</code> in the descriptor.</p>
<p>I'll also appreciate a clearer explanation of what is happening in my code.</p>
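For what it's worth, the class a descriptor is defined in can be captured independently of <code>__get__</code>'s <code>owner</code> via <code>__set_name__</code>, which Python calls once at class-creation time with the defining class. A sketch on a trimmed version of the descriptor:

```python
class d:
    def __init__(self, method):
        self.method = method

    def __set_name__(self, owner, name):
        # Called when the class body finishes executing; `owner` here is the
        # class the descriptor is literally defined in, unaffected by super().
        self.defining_class = owner

    def __get__(self, instance, owner=None):
        return self.method(instance)


class A:
    @d
    def f(self):
        return "A"


class B(A):
    @d
    def f(self):
        return super().f + "B"


print(B().f)  # AB
print(A.__dict__["f"].defining_class, B.__dict__["f"].defining_class)
```

The <code>owner</code> passed to <code>__get__</code> stays <code>B</code> in both invocations because <code>super()</code> deliberately rebinds the lookup to the most derived type; <code>__set_name__</code> is the hook that sees the lexical defining class.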
|
<python><inheritance><python-descriptors>
|
2023-03-16 14:41:06
| 1
| 25,704
|
wRAR
|
75,757,778
| 4,942,661
|
Is there an equivalent pip command for poetry?
|
<p>I need to install a package with a flag. Using pip, I run <code>pip3 install snowflakedb-snowflake-connector-python --no-use-pep517</code>.</p>
<p>But as the project is using poetry to manage the dependencies, how can I run the same as with pip?</p>
<p>I tried <code>poetry add snowflake-connector-python --extras ["--no-use-pep517"]</code> but it doesn't seem to reflect the flag that was used.</p>
<p>By the way, I can't find any related options in the <a href="https://python-poetry.org/docs/cli/#add" rel="nofollow noreferrer">docs</a>.</p>
|
<python><pip><snowflake-cloud-data-platform><python-poetry>
|
2023-03-16 14:40:33
| 0
| 1,022
|
Italo Lemos
|
75,757,585
| 11,572,712
|
hasura.io - connect postgres database - what is the correct database URL?
|
<p>Currently I am trying to create a GraphQL API and I would like to connect to an already running PostgreSQL database.
I looked up the connection settings in pgAdmin and found the following:</p>
<pre><code>Hostname: localhost
Port: 5432
Username: postgres
Password: postgres
Database: SuperMart_DB
</code></pre>
<p>I tried to connect and inserted the following Database URLs in hasura:</p>
<pre><code>postgresql://postgres:postgres@127.0.0.1:5432/SuperMart_DB
postgresql+psycopg2://postgres:postgres@localhost:5432/SuperMart_DB
</code></pre>
<p>I also tried different combinations of <code>localhost</code> and <code>127.0.0.1</code> in the URLs above, but nothing worked.</p>
<p>I get this error:</p>
<pre><code>Inconsistent object: connection error
</code></pre>
<p>Could you tell me what I should try to do?</p>
<p>When I do it with python it is working and I can query the database:</p>
<pre><code>conn = psycopg2.connect(host="127.0.0.1", port = 5432, database="SuperMart_DB", user="postgres", password="postgres")
</code></pre>
|
<python><database><postgresql><hasura>
|
2023-03-16 14:22:28
| 0
| 1,508
|
Tobitor
|
75,757,548
| 4,151,075
|
Decode 32bit float with decade exponent
|
<p>I have this definition of a binary data (received from ModBus):</p>
<pre><code>Signed Measurement (32 bit):
Decade Exponent (Signed 8 bit)
Binary Signed value (24 bit)
Example: -123456 * 10^-3 = FDFE 1DC0 (hex)
</code></pre>
<p>I was trying to use <code>struct.unpack</code> to get the actual float from the binary data, but it yields the wrong value (probably correct for a plain IEEE float, because NodeJS also yields the same value).</p>
<pre class="lang-py prettyprint-override"><code>In [92]: result = b'\xfd\x01\xe2\x40'
In [93]: struct.unpack(">f", result)
Out[93]: (-1.0790323038781103e+37,)
</code></pre>
<p>I've managed to get it in two goes:</p>
<pre class="lang-py prettyprint-override"><code>In [86]: result = b'\xfd\x01\xe2\x40'
In [87]: decade_exponent = struct.unpack(">b", result[0:1])[0]
In [88]: decade_exponent
Out[88]: -3
In [89]: binary_value = int.from_bytes(result[1:], 'big')
In [90]: binary_value
Out[90]: 123456
In [91]: binary_value * 10 ** decade_exponent
Out[91]: 123.456
</code></pre>
<p>But I was wondering if there is a simpler way to do it.</p>
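For reference, the two steps fold into a small helper. One detail worth noting: the 24-bit value is signed per the quoted definition, so <code>int.from_bytes(..., signed=True)</code> is needed; with it, the documentation's own example <code>FDFE 1DC0</code> decodes to -123.456 (a sketch, not a full ModBus decoder):

```python
import struct

def decode_measurement(raw: bytes) -> float:
    # Byte 0: signed 8-bit decade exponent; bytes 1-3: signed 24-bit value.
    exponent = struct.unpack(">b", raw[:1])[0]
    value = int.from_bytes(raw[1:], "big", signed=True)
    return value * 10.0 ** exponent

print(decode_measurement(b"\xfd\x01\xe2\x40"))  # ~123.456
print(decode_measurement(b"\xfd\xfe\x1d\xc0"))  # ~-123.456
```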
|
<python>
|
2023-03-16 14:20:26
| 0
| 1,269
|
Marek
|
75,757,540
| 2,636,044
|
Partial unpacking of kwargs
|
<p>Let's say I have the following classes (consider that the classes have a couple more attributes; this is abbreviated for simplicity):</p>
<pre><code>class City:
def __init__(self, name=None, people=None, population=0):
self.name: str = name
self.people: Optional[List[Person]] = people
self.population: int = population
class Person:
def __init__(self, name='Anon'):
self.name: str = name
</code></pre>
<p>I have a JSON file (the context of my issue is supporting i18n, so the problem is similar but not exactly the same data):</p>
<pre><code>{
'London': {
'citizens': [
{'name': 'Bob'}, {'name': 'Alice'}
],
'population': 2
},
'NY': {
'citizens': [
{'name': 'John'}, {'name': 'Dorothy'}
],
'population': 2
}
}
</code></pre>
<p>at some point in my code, I'm trying to initialise an object based on that JSON, something like:</p>
<pre><code>london = City(
name='London',
population=data.get('London').get('population'),
people=[Person(name=p.get('name')) for p in data.get('London').get('citizens')]
)
</code></pre>
<p>If you notice, the names of the keyword arguments are the same, so using <code>kwargs</code> is a possibility; the problem is that I'm having to initialise the <code>Person</code> class as part of the unpacking. This actual class is a bit more generic so I can't really add extra behaviour to it, trying to be the least invasive possible.</p>
<p>I know about extended iterable unpacking but I was wondering if that would work for <code>kwargs</code> as well, I tried this but it didn't work:</p>
<pre><code>def q(**kwargs):
return kwargs
x = {'a': 1, 'b': 2, 'c': 3}
q(a=5, **x)
Traceback (most recent call last):
TypeError: q() got multiple values for keyword argument 'a'
</code></pre>
<p>(This code doesn't work) Ideally it would look something like:</p>
<pre><code>london = City(name='London', people=[Person(**p) for p in data.get('London').get('citizens')], **data.get('London'))
</code></pre>
<p>Is there any way to achieve this?</p>
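One workaround for the "multiple values" error (a sketch of the mechanism, not necessarily the cleanest design for the <code>City</code> case): merge the dicts first, so the override happens with plain dict semantics where later keys win, and unpack only once:

```python
def q(**kwargs):
    return kwargs

x = {'a': 1, 'b': 2, 'c': 3}

# Merging first means `a` reaches the call exactly once.
print(q(**{**x, 'a': 5}))    # {'a': 5, 'b': 2, 'c': 3}
print(q(**(x | {'a': 5})))   # Python 3.9+ dict union: same result
```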
|
<python>
|
2023-03-16 14:19:52
| 1
| 1,339
|
Onilol
|
75,757,472
| 731,351
|
Create a new column based on condition/comparison of two existing columns in Polars
|
<p>I am trying to create a new column in Polars data frame based on comparison of two existing columns:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.from_repr("""
βββββββ¬ββββββ
β a β b β
β --- β --- β
β i64 β i64 β
βββββββͺββββββ‘
β 2 β 20 β
β 30 β 3 β
βββββββ΄ββββββ
""")
</code></pre>
<p>When I do:</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(pl.map_batches(["a", "b"], lambda s: "+" if s[0] < s[1] else "-").alias("strand"))
</code></pre>
<p>I am getting an error:</p>
<pre class="lang-py prettyprint-override"><code># InvalidOperationError: UDF called without return type, but was not able to infer the output type.
</code></pre>
<p>I am able to create a boolean column:</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(pl.map_batches(["a", "b"], lambda s: s[0] < s[1] ).alias("strand"))
</code></pre>
<pre><code>βββββββ¬ββββββ¬βββββββββ
β a β b β strand β
β --- β --- β --- β
β i64 β i64 β bool β
βββββββͺββββββͺβββββββββ‘
β 2 β 20 β true β
β 30 β 3 β false β
βββββββ΄ββββββ΄βββββββββ
</code></pre>
<p>So with extra steps I should get the column with the desired "+" and "-", but is there some simpler way?</p>
<pre><code>βββββββ¬ββββββ¬βββββββββ
β a | b | strand β
β --- | --- | --- β
β i64 | i64 | str β
βββββββͺββββββͺβββββββββ‘
β 2 | 20 | + β
β 30 | 3 | - β
βββββββ΄ββββββ΄βββββββββ
</code></pre>
|
<python><dataframe><python-polars>
|
2023-03-16 14:13:37
| 1
| 529
|
darked89
|
75,757,008
| 19,067,218
|
Python throws "TypeError: 'module' object is not callable" while importing selenium
|
<p>I'm trying to import selenium to start a scratch project but I keep getting an error. I have tried a couple of solutions but none has worked for me.</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
driver = webdriver.chrome()
driver.get("https://google.com")
</code></pre>
<p>The error I get is</p>
<blockquote>
<p>Traceback (most recent call last): File
"C:\Users\ruben\PycharmProjects\pythonProject\main.py", line 5, in
driver = webdriver.chrome() TypeError: 'module' object is not callable</p>
</blockquote>
<p>I couldn't find any cure for this.</p>
<p>I have tried to add the path to the webdriver's location to <code>webdriver.chrome()</code> as an argument, but that didn't help.</p>
|
<python><selenium-webdriver>
|
2023-03-16 13:34:02
| 1
| 344
|
llRub3Nll
|
75,756,942
| 9,021,547
|
Python inequality join with group by
|
<p>I have the following two dataframes</p>
<pre><code>import pandas as pd
dates = ['31-12-2015', '31-12-2016', '31-12-2017', '31-12-2018']
df1 = pd.DataFrame({'id': [1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4],
't': dates*4,
'stage': [1,2,2,3,1,1,2,3,1,1,1,3,2,1,1,3]})
df2 = df1.loc[df1['stage'] == 1]
</code></pre>
<p>What is the most efficient way of doing the operation below in python?</p>
<pre><code>Select a.id
,a.t
,max(b.stage = 2) as flag
From df2 as a
Left join df1 as b
On a.id = b.id and a.t < b.t
Group by 1,2
</code></pre>
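One pandas way to express this (a sketch: the timestamps are parsed so the <code>a.t &lt; b.t</code> comparison is chronological, and a final left merge keeps any df2 rows with no later partner, as the SQL left join would):

```python
import pandas as pd

dates = ['31-12-2015', '31-12-2016', '31-12-2017', '31-12-2018']
df1 = pd.DataFrame({'id': [1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4],
                    't': pd.to_datetime(dates*4, format='%d-%m-%Y'),
                    'stage': [1,2,2,3,1,1,2,3,1,1,1,3,2,1,1,3]})
df2 = df1.loc[df1['stage'] == 1]

# Self-merge on id, keep only pairs where the right-hand time is later,
# then aggregate the stage==2 indicator per (id, t).
pairs = df2.merge(df1, on='id', suffixes=('', '_b'))
pairs = pairs[pairs['t'] < pairs['t_b']]
flag = (pairs['stage_b'].eq(2)
        .groupby([pairs['id'], pairs['t']]).max()
        .rename('flag').reset_index())

# Left merge back so df2 rows without any later row keep a NaN flag.
out = df2.merge(flag, on=['id', 't'], how='left')
print(out['flag'].fillna(False).tolist())  # [True, True, True, False, False, False, False, False]
```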
|
<python><group-by><left-join>
|
2023-03-16 13:28:40
| 1
| 421
|
Serge Kashlik
|
75,756,705
| 14,427,209
|
Getting error when trying to install some python packages in Amazon EC2 instance
|
<p>I am trying to install some packages from our requirements.txt file, but we are constantly seeing the below error. Any guidance on how to resolve it? Maybe some error in the preshed or blis packages? Are there any dependencies we have to preinstall here? We have Python 3.7.9 installed (64-bit).</p>
<pre><code> /tmp/pip-install-aw5ivhl1/blis_f463d355a4f44a76bf2cb429c1a0aa28/blis/_src/include/linux-x86_64/blis.h:1644:13: warning: βbli_obj_init_subpart_fromβ defined but not used [-Wunused-function]
1644 | static void bli_obj_init_subpart_from( obj_t* a, obj_t* b )
| ^~~~~~~~~~~~~~~~~~~~~~~~~
error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for blis
Running setup.py clean for blis
Failed to build preshed thinc blis
Installing collected packages: wasabi, plac, cymem, wheel, tqdm, srsly, setuptools, preshed, numpy, murmurhash, Cython, blis, thinc
Running setup.py install for preshed: started
Running setup.py install for preshed: finished with status 'error'
error: subprocess-exited-with-error
Γ Running setup.py install for preshed did not run successfully.
β exit code: 1
β°β> [14 lines of output]
/home/ubuntu/.local/lib/python3.10/site-packages/setuptools/__init__.py:85: _DeprecatedInstaller: setuptools.installer and fetch_build_eggs are deprecated. Requirements should be satisfied by a PEP 517 installer. If you are using pip, you can try `pip install --use-pep517`.
dist.fetch_build_eggs(dist.setup_requires)
running install
/home/ubuntu/.local/lib/python3.10/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
running build_ext
building 'preshed.maps' extension
x86_64-linux-gnu-gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -fPIC -I/usr/include/python3.10 -I/usr/include/python3.10 -c preshed/maps.cpp -o build/temp.linux-x86_64-cpython-310/preshed/maps.o -O3 -Wno-strict-prototypes -Wno-unused-function
cc1plus: warning: command-line option β-Wno-strict-prototypesβ is valid for C/ObjC but not for C++
cc1plus: fatal error: preshed/maps.cpp: No such file or directory
compilation terminated.
error: command '/usr/bin/x86_64-linux-gnu-gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
Γ Encountered error while trying to install package.
β°β> preshed
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
Γ pip subprocess to install build dependencies did not run successfully.
β exit code: 1
β°β> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
|
<python><python-3.x><amazon-web-services><ubuntu><package>
|
2023-03-16 13:08:13
| 0
| 317
|
TECH FREEKS
|
75,756,665
| 11,192,976
|
How to make Pylance understand Pydantic's `allow_population_by_field_name` for initializers?
|
<p>In my current project, we are using an OpenAPI-to-TypeScript-API generator, that generates automatically typed functions for calling API endpoints via Axios. In Python, we use <code>snake_case</code> for our class properties, while in TypeScript we use <code>camelCase</code>.</p>
<p>Using this setup, we have found that the alias property (<code>Field(..., alias="***")</code>) is very helpful, combined with the <code>allow_population_by_field_name = True</code> property.</p>
<p>Simple model class example:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, Field
class MyClass(BaseModel):
my_property: str = Field(..., alias="myProperty")
class Config:
allow_population_by_field_name = True
if __name__ == "__main__":
my_object = MyClass(my_property="banana")
</code></pre>
<p>The question is: Why doesn't Pylance understand that I want to be able to write <code>MyClass(my_property="banana")</code>? See screenshot from vscode below:</p>
<img src="https://i.sstatic.net/VZNWg.png" alt="Screenshot of error in VSCode">
<p>Is there any way to configure Pylance/Pydantic/vscode to better understand this?</p>
<p>"Just living with" this is problematic because having red lines like this in our code promotes a mindset of "red is probably ok", so they lose their power.</p>
<hr />
<h2>Edit May 8th, 2023:</h2>
<p>We found a way to "circumvent" this problem, by creating a wrapper class, containing an <code>alias_generator</code>. It doesn't seem like pylance picks up on this, and therefore it acts as if it doesn't exist.</p>
<p>I'm sure using the system like this isn't something people do a lot, but this helped our team. Also it allows us to not write <code>= Field(..., alias="camelCase")</code> everywhere.</p>
|
<python><python-typing><pydantic><pyright>
|
2023-03-16 13:05:04
| 1
| 1,565
|
Sebastian
|
75,756,601
| 2,729,627
|
Is the Python garbage collection not working? Problematic processing of enormous XML files
|
<p>I'm trying to process a 4.6GB XML file with the following code:</p>
<pre><code>import gc
import os

import pandas as pd
import psutil
import xml.etree.ElementTree as ET

context = ET.iterparse(file_name_data, events=("start", "end"))
in_pandcertificaat = False
pandcertificaat = {}
pandcertificaten = []
number_of_pickles = 0
for index, (event, elem) in enumerate(context):
if event == "start" and elem.tag == "Pandcertificaat":
in_pandcertificaat = True
pandcertificaat = {} # Initiate empty pandcertificaat.
continue
elif event == "end" and elem.tag == "Pandcertificaat":
in_pandcertificaat = False
pandcertificaten.append(pandcertificaat)
continue
elif in_pandcertificaat:
pandcertificaat[elem.tag] = elem.text
else:
pass
if index % iteration_interval_for_internal_memory_check == 0:
print(f"index = {index:.2e}")
process = psutil.Process(os.getpid())
internal_memory_usage_in_mb = process.memory_info().rss / (1024 * 1024)
print(f"Memory usage = {internal_memory_usage_in_mb:.2f} * MB.")
if internal_memory_usage_in_mb > internal_memory_usage_limit_for_splitting_data_in_mb:
df = pd.DataFrame(pandcertificaten)
path_temporary_storage_data_frame = f"{base_path_temporary_storage_data_frame}{number_of_pickles}.{file_name_extension_pickle}"
df.to_pickle(path_temporary_storage_data_frame)
print(f"Intermediately saving data frame to {path_temporary_storage_data_frame} to save internal memory.")
number_of_pickles += 1
pandcertificaten.clear()
gc.collect()
</code></pre>
<p>As you can see, I try to save RAM by periodically writing the Pandas data frames to files on disk, but for some reason the RAM usage still keeps increasing, even after adding <code>gc.collect()</code> to hopefully force garbage collection.</p>
<p>This is an example of the output I'm getting:</p>
<pre><code>index = 3.70e+07
Memory usage = 2876.80 * MB.
Intermediately saving data frame to data_frame_pickles/26.pickle to save internal memory.
index = 3.80e+07
Memory usage = 2946.93 * MB.
Intermediately saving data frame to data_frame_pickles/27.pickle to save internal memory.
index = 3.90e+07
Memory usage = 3017.31 * MB.
Intermediately saving data frame to data_frame_pickles/28.pickle to save internal memory.
</code></pre>
<p>What am I doing wrong?</p>
<p>UPDATE 2023-03-17, 14:37.</p>
<p>The problem just got weirder. If I comment out everything in the for loop, the RAM usage still keeps increasing over time. I believe it follows that there is a problem with <code>iterparse</code>. The out-of-RAM problem occurs both with <code>lxml</code> and with <code>xml.etree.ElementTree</code>. I did not try the <code>XMLPullParser</code> yet, as suggested by @Hermann12.</p>
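For what it's worth, <code>iterparse</code> keeps every parsed element attached to the growing tree, which would explain growth even with an empty loop body; the usual recipe is to clear each record once processed and detach finished records from the root. A self-contained sketch (hypothetical minimal XML):

```python
import io
import xml.etree.ElementTree as ET

xml = b"<root>" + b"<Pandcertificaat><tag>x</tag></Pandcertificaat>" * 1000 + b"</root>"

context = ET.iterparse(io.BytesIO(xml), events=("start", "end"))
_, root = next(context)          # the first event is the start of the root element

count = 0
for event, elem in context:
    if event == "end" and elem.tag == "Pandcertificaat":
        count += 1               # ... process the record here ...
        elem.clear()             # free this record's children and text
        del root[:]              # detach finished records from the root
print(count, len(root))  # 1000 0
```

With this pattern memory stays at roughly one record, regardless of file size.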
|
<python><pandas><xml><large-data>
|
2023-03-16 12:57:49
| 0
| 715
|
Adriaan
|
75,756,535
| 10,164,750
|
psycopg2 execute_values - Update Query with Multiple Place holder - Throwing error - column is of type date but expression is of type text
|
<p>I am trying to replicate the answer provided by @jjanes to handle one of the <code>UPDATE</code> scenarios.</p>
<p><a href="https://stackoverflow.com/questions/70470868/psycopg2-execute-values-the-query-contains-more-than-one-s-placeholder">Psycopg2 execute values -the query contains more than one '%s' placeholder</a></p>
<p>My code is as follows:</p>
<pre><code>UPDATE_QUERY = """UPDATE SHARE_RETURN_DOC
SET TO_DB_TS = v1,
SUPERSEDED_DT = v2,
FIRST_SEEN_DT = v3,
LAST_SEEN_DT = v4,
BATCH_ID = v5,
DATA_PROC_ID = v6,
CO_REG_DEBT = v7,
DIR_DTL_CHG_IN = v8,
SHRHLDR_LIST_CD = v9,
SHRHLDR_LIST_CD = v10,
SHRHLDR_LEGAL_STAT = v11,
SHRHLDR_REFRESH_CD = v12,
SHRHLDR_SUPRESS_IN = v13,
BULK_LIST_ID = v14,
DOC_TYPE_CD = v15,
JACKET_NR = v16
FROM (VALUES %s) u(id1, id2, v1, v2, v3, v4, v5, v6, v7, v8,
v9, v10, v11, v12, v13, v14, v15, v16)
WHERE SHARE_RETURN_DOC.REG_NB = u.id1
AND SHARE_RETURN_DOC.ANN_RTN_DT = u.id2"""
</code></pre>
<p><code>Method</code> to Execute the <code>Update SQL</code>:</p>
<pre><code>def tableUpdate(connection, cursor, query, dataframe):
data = []
dataframe = dataframe.toPandas()
for x in dataframe.to_numpy():
data.append(tuple(x))
try:
print(data)
extras.execute_values(cursor, query, data)
connection.commit()
except (Exception, Error) as error:
print("Error: %s" % error)
connection.rollback()
return 1
finally:
cursor.close()
</code></pre>
<p>The <code>data</code> I am trying to pass to <code>extras.execute_values</code> is a <code>list</code> of 3 <code>tuples</code>, so I have 3 rows to <code>UPDATE</code> in the table. The <code>data</code> is as below:</p>
<pre><code>[('XXXXXXXX', datetime.date(2022, 12, 1), Timestamp('2023-03-16 09:51:31.634000'), **None**, datetime.date(2023, 3, 16), datetime.date(2023, 3, 16), '5e3642ee-3f91-4360-8694-d60e66317af2', 1, 0, 'N', '2', None, ' ', ' ', 'N', 'N', 'C', 'XXXWUC94'), ('ZZZZZZZZ', datetime.date(2022, 11, 1), Timestamp('2023-03-16 09:51:31.634000'), None, datetime.date(2023, 3, 16), datetime.date(2023, 3, 16), '5e3642ee-3f91-4360-8694-d60e66317af2', 1, 0, 'N', '2', None, ' ', ' ', 'N', 'N', 'C', 'ZZZWUC95'), ('YYYYYYYY', datetime.date(2022, 11, 21), Timestamp('2023-03-16 09:51:31.634000'), None, datetime.date(2023, 3, 16), datetime.date(2023, 3, 16), '5e3642ee-3f91-4360-8694-d60e66317af2', 1, 0, 'N', '7', None, ' ', ' ', 'N', 'N', 'C', 'YYYYY9D')]
</code></pre>
<p>However, I am getting an <code>Error</code> while running the job:</p>
<p>The column expects a <code>date</code> value, but the input data rows contain <code>None</code>.</p>
<pre><code>column "superseded_dt" is of type date but expression is of type text
LINE 3: ... SUPERSEDED_DT = v2,
^
</code></pre>
<p>I tried the following options, which led to other errors:
<code>SUPERSEDED_DT = v2::date[],</code>
<code>FROM (VALUES %s) u(id1, id2, v1, v2::date[], v3, v4, v5, v6, v7, v8,</code></p>
<p>Could you please help me fix the issue?</p>
<p>Thank you a lot!</p>
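For reference, the usual fix when a <code>VALUES</code> column contains only <code>NULL</code>s is to cast where the value is used, since Postgres types an untyped all-<code>NULL</code> column as <code>text</code> and the alias list <code>u(...)</code> cannot carry casts. A sketch of the relevant lines only (not the full statement):

```
UPDATE SHARE_RETURN_DOC
SET TO_DB_TS      = u.v1::timestamp,
    SUPERSEDED_DT = u.v2::date,
    ...
FROM (VALUES %s) u(id1, id2, v1, v2, ...)
WHERE SHARE_RETURN_DOC.REG_NB = u.id1
  AND SHARE_RETURN_DOC.ANN_RTN_DT = u.id2
```

The earlier attempts failed because <code>::date[]</code> is an array cast and because casts are not allowed in the column alias list.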
|
<python><sql><postgresql><psycopg2>
|
2023-03-16 12:51:48
| 1
| 331
|
SDS
|
75,756,299
| 11,198,671
|
Covering a 2D plotting area with lattice points
|
<p>My goal is to cover the plotting area with lattice points.</p>
<p>In this example we are working in 2D. We call the set Λ ⊆ R^2 a lattice if there exists a basis B of R^2 with Λ = Λ(B). The set Λ(B) is the set of all integer linear combinations of the basis vectors, so Λ(B) = {x*b1 + y*b2 | x, y integers}.</p>
<p>In other words, we get a grid by calculating the integer combinations of the given basis vectors. For the standard basis E=[[1,0]^T, [0,1]^T] we can write the point [8,4]^T as 8 * [1,0]^T + 4 * [0,1]^T where both 8 and 4 are integers. The set of all integer combinations (hence the lattice Λ) then looks like this:</p>
<p><a href="https://i.sstatic.net/7eosi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7eosi.png" alt="Standard basis lattice" /></a></p>
<p>If we change the basis, this will result in a different lattice. Here is an example for b1=[2,3]^T, b2=[4,5]^T:
<a href="https://i.sstatic.net/xpfIV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xpfIV.png" alt="Different lattice basis" /></a></p>
<p>In order to produce these images I am using the following Python code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
from typing import List, Tuple
def plotLattice(ax: plt.Axes, basis_vectors: List[np.ndarray],
ldown: np.ndarray, rup: np.ndarray, color: str,
linewidth: float, alpha: float) -> List[np.ndarray]:
"""
Draws a two-dimensional lattice.
Args:
ax: The Matplotlib Axes instance to plot on.
basis_vectors: A list of two NumPy arrays representing the basis vectors of the lattice.
ldown: A NumPy array representing the lower left corner of the rectangular area to draw the lattice in.
rup: A NumPy array representing the upper right corner of the rectangular area to draw the lattice in.
color: A string representing the color of the lattice points and basis vectors.
linewidth: A float representing the linewidth of the lattice points.
alpha: A float representing the alpha value of the lattice points.
Returns:
A list of NumPy arrays representing the lattice points.
"""
# get the basis vectors
b1, b2 = np.array(basis_vectors[0]), np.array(basis_vectors[1])
# list to hold the lattice points
points = []
# upper bounds for the for loops
xmax, ymax = 0, 0
if b1[0] == 0:
xmax = np.floor(rup[0] / abs(b2[0]))
elif b2[0] == 0:
xmax = np.floor(rup[0] / abs(b1[0]))
else:
xmax = np.floor(rup[0] / min(abs(b1[0]),abs(b2[0])))
if b1[1] == 0:
ymax = np.floor(rup[1] / abs(b2[1]))
elif b2[1] == 0:
ymax = np.floor(rup[1] / abs(b1[1]))
else:
ymax = np.floor(rup[1] / min(abs(b1[1]),abs(b2[1])))
# increase the bounds by 1
xmax = int(xmax) + 1
ymax = int(ymax) + 1
# get the lower bounds for the for loops
xmin, ymin = -int(xmax), -int(ymax)
for i in range(xmin, int(xmax)):
for j in range(ymin, int(ymax)):
# make the linear combination
p = i * b1 + j * b2
# if the point is within the plotting area, plot it and add the point to the list
if ldown[0] <= p[0] <= rup[0] and ldown[1] <= p[1] <= rup[1]:
ax.scatter(p[0], p[1], color=color, linewidths=linewidth, alpha=alpha)
points.append(p)
# plot basis vectors
ax.quiver(0, 0, b1[0], b1[1], color=color, scale_units='xy', scale=1, alpha=1)
ax.quiver(0, 0, b2[0], b2[1], color=color, scale_units='xy', scale=1, alpha=1)
    return points


if __name__ == '__main__':
# pick a basis
b1 = np.array([2, 3])
b2 = np.array([-4, 5])
basis_vectors = [b1, b2]
# define the plotting area
ldown = np.array([-15, -15])
rup = np.array([15, 15])
fig, ax = plt.subplots()
points = plotLattice(ax, basis_vectors, ldown, rup, 'blue', 3, 0.25)
# resize the plotting window
mngr = plt.get_current_fig_manager()
mngr.resize(960, 1080)
# tune axis
ax.set_aspect('equal')
ax.grid(True, which='both')
ax.set_xlim(ldown[0], rup[0])
ax.set_ylim(ldown[1], rup[1])
# show the plot
plt.show()
</code></pre>
<p>And now we get to the problem. For the basis vectors b1=[1,1], b2=[1,2] the code does not cover the plotting area:</p>
<p><a href="https://i.sstatic.net/gFqr5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gFqr5.png" alt="problem1" /></a></p>
<p>We can increase the problem by choosing some not nicely orthogonal vectors:</p>
<p><a href="https://i.sstatic.net/dsI4d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dsI4d.png" alt="problem2" /></a></p>
<p>So, the problem arises whenever the vectors get closer to each other, i.e. when their dot product is large. Now consider the example I picked before:</p>
<p><a href="https://i.sstatic.net/gFqr5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gFqr5.png" alt="problem1" /></a></p>
<p>My approach was to take the minimum of the absolute coordinate values and to estimate how many points I can have along one axis. This is also the approach you can see in the code. Taking the minimum of the x coordinates of <code>b1=[1,1]</code> and <code>b2=[1,2]</code> we get 1. Taking the minimum of the y coordinates we also get 1. My plotting area is defined by the square which is given by the points <code>ldown=[-15,-15]</code> and <code>rup=[15,15]</code>. Hence I know that I can have at most <code>floor(rup[0]/1) = 15</code> points along the x-axis, and at most <code>floor(rup[1]/1) = 15</code> along the y-axis. Including the zero point this results in 31 points for each axis, so that I expect to see 31*31 = 961 points on the plot. Hence, I think that I am done and set <code>xmax=15, xmin=-15, ymax=15, ymin=-15</code>.</p>
<p>But this gives me the result presented above, so the calculation is wrong. Then I say, "Ok, I know that the point <code>[15,-15]</code> has to be in the plot". Hence I can solve the system <code>Bx = [15,-15]^T</code>. This yields the vector <code>x=[45,-30]</code>, so I can set <code>xmax=45, ymin=-30</code>. Doing the same for the point <code>[-15,15]</code> gives me the vector <code>x=[-45,30]</code>, so I can set <code>xmin=-45, ymax=30</code>. The resulting plot is:</p>
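<p>For concreteness, here is the <code>Bx = b</code> corner solve from above as a small NumPy sketch (using the basis and corner from this paragraph):</p>

```python
import numpy as np

# basis vectors b1 = [1, 1], b2 = [1, 2] as the columns of B
B = np.column_stack(([1, 1], [1, 2]))

# which integer combination of b1, b2 reaches the corner [15, -15]?
x = np.linalg.solve(B, [15, -15])
print(x)  # [ 45. -30.]
```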
<p><a href="https://i.sstatic.net/567hu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/567hu.png" alt="enter image description here" /></a></p>
<p>This looks almost right, but you can notice that the points <code>[15,-15]</code> and <code>[-15,15]</code> are missing from the plot. Hence I have to enlarge the bounds by 1 by setting <code>xmax=46, xmin=-46, ymax=31, ymin=-31</code>. After that, the whole area is covered.</p>
<p>So, the drawback of this mechanism is that I cheated a bit. Here, I just knew that the point <code>[15,-15]</code> must be on the plot. I could solve the equation system and determine the bounds for the <code>for</code> loop. Incidentally, this point was also the most distant point from the origin, so I knew that covering it would automatically cover the whole plotting plane. However, there are basis vectors for which I cannot determine such a point, and we can just pick one of the previous plots:</p>
<p><a href="https://i.sstatic.net/xpfIV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xpfIV.png" alt="Different lattice basis" /></a></p>
<p>Here, my approach would say that we can have <code>min(2,4) = 2</code> points along the x-axis and <code>min(3,5) = 3</code> points along the y-axis. But I cannot simply say that the point <code>[14,-9]=[7*2,-3*3]</code> is on the plot (because it is not). Moreover, I cannot even say which of the points <code>[12,-12], [12,-15], [14,-12], [14,-15]</code> are part of the plot and which are not. Knowing the plot, I see that <code>[12,-15]</code> and <code>[14,-12]</code> must be in it. Without knowing that, I do not even know for which point I have to solve the <code>Bx=b</code> system.</p>
<p>Choosing a different basis or a different (not origin-centered) plotting area makes the problem surprisingly complex for me, even though we are acting in a 2D plane only.</p>
<p>So, now that the problem is described, I can formulate it as follows: given the points <code>rup</code> and <code>ldown</code> of the plotting area and a basis <code>b1, b2</code>, determine the bounds <code>xmax, xmin, ymax, ymin</code> for the <code>for</code> loops so that the whole plotting area gets covered by the lattice points.</p>
<p>I am not even asking for the code to be efficient at the moment; however, solutions of the type <code>xmax = sys.maxsize</code> or <code>xmax = 100 * rup[0]</code> do not count.</p>
<p>What would be your approach?</p>
|
<python><algorithm><matplotlib><plot><mathematical-lattices>
|
2023-03-16 12:29:28
| 1
| 345
|
jupiter_jazz
|
75,756,180
| 3,909,896
|
Upgrade Python patch version from 3.8.10 to 3.8.16 on Windows without binary installers
|
<p>According to the <a href="https://www.python.org/downloads/release/python-3816/" rel="nofollow noreferrer">docs</a>, 3.8.10 was the last Python 3.8 version with binary installers. I need to install 3.8.16 on my local system. So I installed 3.8.10 first and then wanted to upgrade.</p>
<p>After extracting the tar.gz file from <a href="https://www.python.org/downloads/release/python-3816/" rel="nofollow noreferrer">here</a>, I wanted to run the <code>setup.py</code> file, but got an error:</p>
<pre class="lang-py prettyprint-override"><code>PS C:\Users\XXXXXX\Downloads\Python-3.8.16> python .\setup.py
Traceback (most recent call last):
File ".\setup.py", line 2416, in <module>
main()
File ".\setup.py", line 2374, in main
set_compiler_flags('CFLAGS', 'PY_CFLAGS_NODIST')
File ".\setup.py", line 86, in set_compiler_flags
sysconfig.get_config_vars()[compiler_flags] = flags + ' ' + py_flags_nodist
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
</code></pre>
<p>How can I upgrade the Python version on Windows?</p>
|
<python>
|
2023-03-16 12:19:55
| 0
| 3,013
|
Cribber
|
75,755,958
| 848,746
|
curl to python requests conversion for solr query
|
<p>I have a bit of a bizzare problem. I have a solr index, which I query using curl like so:</p>
<pre><code>curl 'http://localhost:8984/solr/my_index/select?indent=on&q="galvin%20life%20sciences"~0&wt=json&sort=_docid_%20desc&rows=5'
</code></pre>
<p>and I get (note the <code>q</code> string and the tilde operator which I use for proximity search):</p>
<pre><code>{
"responseHeader":{
"status":0,
"QTime":1,
"params":{
"q":"\"galvin life sciences\"~0",
"indent":"on",
"sort":"_docid_ desc",
"rows":"5",
"wt":"json"}},
"response":{"numFound":61,"start":0,"numFoundExact":true,"docs":[
</code></pre>
<p>Now, I am trying to replicate the same thing in python using:</p>
<pre><code>resp=requests.get('http://localhost:8984/solr/my_index/select?q=' + "galvin%20life%20sciences"+"~0" + '&wt=json&rows=5&start=0&fl=id,org*,score')
</code></pre>
<p>and I get this:</p>
<pre><code>[
{
"responseHeader": {
"status": 0,
"QTime": 0,
"params": {
"q": "galvin life sciences~0",
"fl": "id,org*,score",
"start": "0",
"rows": "5",
"wt": "json"
}
},
"response": {
"numFound": 3505398,
"start": 0,
"maxScore": 9.792607,
"numFoundExact": true,
"docs": [
</code></pre>
<p>You can see that the queries are somehow different:</p>
<pre><code>curl: "q":"\"galvin life sciences\"~0",
requests: "q": "galvin life sciences~0",
</code></pre>
<p>so I am getting wrong results when using requests.</p>
<p>I am not sure what I should do in requests to make the queries match.</p>
<p>I have tried the solution of @Mats:</p>
<pre><code>requests.get('http://localhost:8984/solr/my_index/select', params={
'q': '"galvin life sciences"~0',
'wt': 'json',
'rows': 5,
'start': 0,
'fl': 'id,org*,score',
})
</code></pre>
<p>but now I am not able to pass the variable to it (how annoying). So I have:</p>
<pre><code>q_solr="Galvin life sciences"
requests.get('http://localhost:8984/solr/my_index/select', params={
'q': q_solr+'~0',
'wt': 'json',
'rows': 5,
'start': 0,
'fl': 'id,org*,score',
})
</code></pre>
<p>but this gives me no results at all. What am I doing wrong?</p>
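<p>For what it's worth, my current suspicion is that the phrase quotes have to be part of the query string itself, not consumed by the Python string literal. A sketch of what I mean (just the params construction, against the same endpoint as above):</p>

```python
q_solr = "Galvin life sciences"

# the double quotes around the phrase are part of the query value itself,
# not Python syntax
params = {
    'q': f'"{q_solr}"~0',
    'wt': 'json',
    'rows': 5,
    'start': 0,
    'fl': 'id,org*,score',
}
print(params['q'])  # "Galvin life sciences"~0
```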
|
<python><curl><python-requests><solr>
|
2023-03-16 11:58:27
| 1
| 5,913
|
AJW
|
75,755,923
| 50,065
|
pip install gives ResolutionImpossible: the user requested flax 0.6.8 but t5x depends on flax 0.6.8
|
<p>I am trying to install a requirements file that depends on versions of the packages <code>flax</code> and <code>t5x</code> at specific commits.</p>
<p>The problem can be reproduced with the following command:</p>
<pre><code>pip install "flax @ git+https://github.com/google/flax@ccf48a62acafd1f8658d60e21457c4bb57b25b95" "t5x @ git+https://github.com/google-research/t5x.git@24feab6892a79b3fc465b7b3f2d1c77fc437b67a" jestimator clu orbax
</code></pre>
<p>This gives the following output:</p>
<pre><code>Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting flax@ git+https://github.com/google/flax@ccf48a62acafd1f8658d60e21457c4bb57b25b95
Cloning https://github.com/google/flax (to revision ccf48a62acafd1f8658d60e21457c4bb57b25b95) to /tmp/pip-install-wh571lzf/flax_b627060d1d0a434bb44bed9c11cff293
Running command git clone --filter=blob:none --quiet https://github.com/google/flax /tmp/pip-install-wh571lzf/flax_b627060d1d0a434bb44bed9c11cff293
Running command git rev-parse -q --verify 'sha^ccf48a62acafd1f8658d60e21457c4bb57b25b95'
Running command git fetch -q https://github.com/google/flax ccf48a62acafd1f8658d60e21457c4bb57b25b95
Running command git checkout -q ccf48a62acafd1f8658d60e21457c4bb57b25b95
Resolved https://github.com/google/flax to commit ccf48a62acafd1f8658d60e21457c4bb57b25b95
Preparing metadata (setup.py) ... done
Collecting t5x@ git+https://github.com/google-research/t5x.git@24feab6892a79b3fc465b7b3f2d1c77fc437b67a
Cloning https://github.com/google-research/t5x.git (to revision 24feab6892a79b3fc465b7b3f2d1c77fc437b67a) to /tmp/pip-install-wh571lzf/t5x_84d8a508d3a944e893250265fbc32f1a
Running command git clone --filter=blob:none --quiet https://github.com/google-research/t5x.git /tmp/pip-install-wh571lzf/t5x_84d8a508d3a944e893250265fbc32f1a
Running command git rev-parse -q --verify 'sha^24feab6892a79b3fc465b7b3f2d1c77fc437b67a'
Running command git fetch -q https://github.com/google-research/t5x.git 24feab6892a79b3fc465b7b3f2d1c77fc437b67a
Running command git checkout -q 24feab6892a79b3fc465b7b3f2d1c77fc437b67a
Resolved https://github.com/google-research/t5x.git to commit 24feab6892a79b3fc465b7b3f2d1c77fc437b67a
Preparing metadata (setup.py) ... done
Collecting jestimator
Using cached jestimator-0.3.3-py3-none-any.whl (15 kB)
Collecting clu
Using cached clu-0.0.8-py3-none-any.whl (96 kB)
Collecting orbax
Using cached orbax-0.1.4-py3-none-any.whl (76 kB)
Requirement already satisfied: numpy>=1.12 in /usr/local/lib/python3.9/dist-packages (from flax@ git+https://github.com/google/flax@ccf48a62acafd1f8658d60e21457c4bb57b25b95) (1.22.4)
Requirement already satisfied: jax>=0.4.2 in /usr/local/lib/python3.9/dist-packages (from flax@ git+https://github.com/google/flax@ccf48a62acafd1f8658d60e21457c4bb57b25b95) (0.4.4)
Requirement already satisfied: msgpack in /usr/local/lib/python3.9/dist-packages (from flax@ git+https://github.com/google/flax@ccf48a62acafd1f8658d60e21457c4bb57b25b95) (1.0.5)
Collecting optax
Downloading optax-0.1.4-py3-none-any.whl (154 kB)
βββββββββββββββββββββββββββββββββββββββ 154.9/154.9 KB 7.9 MB/s eta 0:00:00
Collecting tensorstore
Downloading tensorstore-0.1.33-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (8.4 MB)
ββββββββββββββββββββββββββββββββββββββββ 8.4/8.4 MB 87.0 MB/s eta 0:00:00
Collecting rich>=11.1
Downloading rich-13.3.2-py3-none-any.whl (238 kB)
ββββββββββββββββββββββββββββββββββββββ 238.7/238.7 KB 28.1 MB/s eta 0:00:00
Requirement already satisfied: typing_extensions>=4.1.1 in /usr/local/lib/python3.9/dist-packages (from flax@ git+https://github.com/google/flax@ccf48a62acafd1f8658d60e21457c4bb57b25b95) (4.5.0)
Requirement already satisfied: PyYAML>=5.4.1 in /usr/local/lib/python3.9/dist-packages (from flax@ git+https://github.com/google/flax@ccf48a62acafd1f8658d60e21457c4bb57b25b95) (6.0)
Requirement already satisfied: absl-py in /usr/local/lib/python3.9/dist-packages (from clu) (1.4.0)
Requirement already satisfied: etils[epath] in /usr/local/lib/python3.9/dist-packages (from clu) (1.0.0)
Requirement already satisfied: wrapt in /usr/local/lib/python3.9/dist-packages (from clu) (1.15.0)
Requirement already satisfied: packaging in /usr/local/lib/python3.9/dist-packages (from clu) (23.0)
Requirement already satisfied: jaxlib in /usr/local/lib/python3.9/dist-packages (from clu) (0.4.4+cuda11.cudnn82)
Collecting ml-collections
Downloading ml_collections-0.1.1.tar.gz (77 kB)
ββββββββββββββββββββββββββββββββββββββββ 77.9/77.9 KB 11.3 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Collecting cached_property
Downloading cached_property-1.5.2-py2.py3-none-any.whl (7.6 kB)
Requirement already satisfied: importlib_resources in /usr/local/lib/python3.9/dist-packages (from orbax) (5.12.0)
Collecting jax>=0.4.2
Downloading jax-0.4.6.tar.gz (1.2 MB)
ββββββββββββββββββββββββββββββββββββββββ 1.2/1.2 MB 49.7 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Collecting clu@ git+https://github.com/google/CommonLoopUtils#egg=clu
Cloning https://github.com/google/CommonLoopUtils to /tmp/pip-install-wh571lzf/clu_5837e209762c457097ce7de4a98ad6d3
Running command git clone --filter=blob:none --quiet https://github.com/google/CommonLoopUtils /tmp/pip-install-wh571lzf/clu_5837e209762c457097ce7de4a98ad6d3
Resolved https://github.com/google/CommonLoopUtils to commit 6dd92f9d8db4b86234c42e302160cf4c2bf163af
Preparing metadata (setup.py) ... done
Collecting flax@ git+https://github.com/google/flax#egg=flax
Cloning https://github.com/google/flax to /tmp/pip-install-wh571lzf/flax_061afe6d94984c2499afac61f22cf5af
Running command git clone --filter=blob:none --quiet https://github.com/google/flax /tmp/pip-install-wh571lzf/flax_061afe6d94984c2499afac61f22cf5af
Resolved https://github.com/google/flax to commit 4efd541ded8e31326a5ca3de5955c5274ad3d24c
Preparing metadata (setup.py) ... done
Collecting orbax@ git+https://github.com/google/orbax#egg=orbax
Cloning https://github.com/google/orbax to /tmp/pip-install-wh571lzf/orbax_438ab2b57ec84691be847b96d1af9a44
Running command git clone --filter=blob:none --quiet https://github.com/google/orbax /tmp/pip-install-wh571lzf/orbax_438ab2b57ec84691be847b96d1af9a44
Resolved https://github.com/google/orbax to commit 37950426ce0d65ddbd28e29c35f9aebef0ba07e3
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting seqio@ git+https://github.com/google/seqio#egg=seqio
Cloning https://github.com/google/seqio to /tmp/pip-install-wh571lzf/seqio_9cc1aec2821d45cda70e6a0dcef045db
Running command git clone --filter=blob:none --quiet https://github.com/google/seqio /tmp/pip-install-wh571lzf/seqio_9cc1aec2821d45cda70e6a0dcef045db
Resolved https://github.com/google/seqio to commit 99863074f559b38eadc9c5924fab65beb60bc610
Preparing metadata (setup.py) ... done
INFO: pip is looking at multiple versions of orbax to determine which version is compatible with other requirements. This could take a while.
Collecting orbax
Downloading orbax-0.1.3-py3-none-any.whl (74 kB)
ββββββββββββββββββββββββββββββββββββββββ 74.2/74.2 KB 6.5 MB/s eta 0:00:00
INFO: pip is looking at multiple versions of clu to determine which version is compatible with other requirements. This could take a while.
Collecting clu
Downloading clu-0.0.7-py3-none-any.whl (92 kB)
ββββββββββββββββββββββββββββββββββββββββ 92.8/92.8 KB 11.9 MB/s eta 0:00:00
Requirement already satisfied: tensorflow-datasets in /usr/local/lib/python3.9/dist-packages (from clu) (4.8.3)
Requirement already satisfied: tensorflow in /usr/local/lib/python3.9/dist-packages (from clu) (2.11.0)
Downloading clu-0.0.6-py3-none-any.whl (77 kB)
ββββββββββββββββββββββββββββββββββββββββ 77.8/77.8 KB 11.4 MB/s eta 0:00:00
Downloading clu-0.0.5-py3-none-any.whl (77 kB)
ββββββββββββββββββββββββββββββββββββββββ 77.5/77.5 KB 9.5 MB/s eta 0:00:00
Downloading clu-0.0.4-py3-none-any.whl (76 kB)
ββββββββββββββββββββββββββββββββββββββββ 76.1/76.1 KB 7.9 MB/s eta 0:00:00
Downloading clu-0.0.3-py3-none-any.whl (73 kB)
ββββββββββββββββββββββββββββββββββββββββ 73.0/73.0 KB 8.9 MB/s eta 0:00:00
Downloading clu-0.0.2-py3-none-any.whl (68 kB)
ββββββββββββββββββββββββββββββββββββββββ 68.8/68.8 KB 9.6 MB/s eta 0:00:00
INFO: pip is looking at multiple versions of <Python from Requires-Python> to determine which version is compatible with other requirements. This could take a while.
INFO: pip is looking at multiple versions of jestimator to determine which version is compatible with other requirements. This could take a while.
Collecting jestimator
Downloading jestimator-0.3.2-py3-none-any.whl (14 kB)
INFO: pip is looking at multiple versions of clu to determine which version is compatible with other requirements. This could take a while.
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
INFO: pip is looking at multiple versions of flax to determine which version is compatible with other requirements. This could take a while.
ERROR: Cannot install clu==0.0.2, flax 0.6.8 (from git+https://github.com/google/flax@ccf48a62acafd1f8658d60e21457c4bb57b25b95), jestimator==0.3.2 and t5x==0.0.0 because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested flax 0.6.8 (from git+https://github.com/google/flax@ccf48a62acafd1f8658d60e21457c4bb57b25b95)
jestimator 0.3.2 depends on flax
clu 0.0.2 depends on flax
t5x 0.0.0 depends on flax 0.6.8 (from git+https://github.com/google/flax#egg=flax)
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
</code></pre>
<p>What I don't understand about this error message is that I request <code>flax</code> version <code>0.6.8</code>, and <code>t5x</code> depends on that same version <code>0.6.8</code>, yet <code>pip</code> is unable to resolve this.</p>
<p>I'm certain there must be a resolution because the <code>requirements.txt</code> that I am using with these specific pinned versions is the output from <code>pip freeze > requirements.txt</code> from the virtual environment from a colleague.</p>
|
<python><pip><dependency-management><flax>
|
2023-03-16 11:55:10
| 0
| 23,037
|
BioGeek
|
75,755,801
| 19,336,534
|
Changing column values in python
|
<p>I have a dataframe of shape:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Col1</th>
<th>Col2</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.3</td>
<td>1</td>
</tr>
<tr>
<td>0.22</td>
<td>0</td>
</tr>
<tr>
<td>0.89</td>
<td>0</td>
</tr>
<tr>
<td>0.12</td>
<td>1</td>
</tr>
<tr>
<td>0.54</td>
<td>0</td>
</tr>
<tr>
<td>0.11</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>Assume that this dataset is sorted by time, so that <code>df.iloc[1]</code> comes before <code>df.iloc[2]</code>. Also assume that Col2 is binary.<br />
What I would like to do is change the value of each Col2 sample as follows:
<code>df.iloc[i]['Col2']</code> is 1 if any of the next 2 samples is 1 in the dataframe, else it is 0. Leave the last 2 elements of the dataframe unchanged.<br />
For example the result here would be:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Col1</th>
<th>Col2</th>
</tr>
</thead>
<tbody>
<tr>
<td>0.3</td>
<td>0</td>
</tr>
<tr>
<td>0.22</td>
<td>1</td>
</tr>
<tr>
<td>0.89</td>
<td>1</td>
</tr>
<tr>
<td>0.12</td>
<td>1</td>
</tr>
<tr>
<td>0.54</td>
<td>1</td>
</tr>
<tr>
<td>0.11</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>What i have done so far:</p>
<pre><code>for i in range(df.shape[0] - 2):
    # take the maximum over the next two samples (positions i+1 and i+2)
    df.iloc[i, df.columns.get_loc('Col2')] = max(df.iloc[j]['Col2'] for j in range(i + 1, i + 3))
</code></pre>
<p>I think the code works correctly but since my dataset is very large it takes too much time to run. Is there a more elegant and computationally friendly solution?</p>
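<p>(edit) A vectorized sketch I have been experimenting with, using <code>shift</code>; on the toy data it reproduces the expected table, assuming a row falls back to its original value only when both lookahead values are missing:</p>

```python
import pandas as pd

df = pd.DataFrame({'Col1': [0.3, 0.22, 0.89, 0.12, 0.54, 0.11],
                   'Col2': [1, 0, 0, 1, 0, 1]})

# look at the next two samples for every row; rows with no lookahead
# at all keep their original value
nxt = pd.concat([df['Col2'].shift(-1), df['Col2'].shift(-2)], axis=1)
df['Col2'] = nxt.max(axis=1).fillna(df['Col2']).astype(int)
print(df['Col2'].tolist())  # [0, 1, 1, 1, 1, 1]
```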
|
<python><python-3.x><pandas><dataframe>
|
2023-03-16 11:43:40
| 1
| 551
|
Los
|
75,755,772
| 6,532,300
|
TypeError("__init__() got an unexpected keyword argument 'phonemes'") with NeMo
|
<p>I'm trying to run a script based on NeMo and am stuck with this error:</p>
<pre><code>File "use_me.py", line 7, in <module>
spec_generator = FastPitchModel.restore_from(model_path)
File "/data/fburkhardt/tts/fastpitch_nemo/venv/lib/python3.8/site-packages/nemo/core/classes/modelPT.py", line 434, in restore_from
instance = cls._save_restore_connector.restore_from(
File "/data/fburkhardt/tts/fastpitch_nemo/venv/lib/python3.8/site-packages/nemo/core/connectors/save_restore_connector.py", line 239, in restore_from
loaded_params = self.load_config_and_state_dict(
File "/data/fburkhardt/tts/fastpitch_nemo/venv/lib/python3.8/site-packages/nemo/core/connectors/save_restore_connector.py", line 162, in load_config_and_state_dict
instance = calling_cls.from_config_dict(config=conf, trainer=trainer)
File "/data/fburkhardt/tts/fastpitch_nemo/venv/lib/python3.8/site-packages/nemo/core/classes/common.py", line 507, in from_config_dict
raise e
File "/data/fburkhardt/tts/fastpitch_nemo/venv/lib/python3.8/site-packages/nemo/core/classes/common.py", line 499, in from_config_dict
instance = cls(cfg=config, trainer=trainer)
File "/data/fburkhardt/tts/fastpitch_nemo/venv/lib/python3.8/site-packages/nemo/collections/tts/models/fastpitch.py", line 94, in __init__
self._setup_tokenizer(cfg)
File "/data/fburkhardt/tts/fastpitch_nemo/venv/lib/python3.8/site-packages/nemo/collections/tts/models/fastpitch.py", line 216, in _setup_tokenizer
self.vocab = instantiate(cfg.text_tokenizer, **text_tokenizer_kwargs)
File "/data/fburkhardt/tts/fastpitch_nemo/venv/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 222, in instantiate
return instantiate_node(
File "/data/fburkhardt/tts/fastpitch_nemo/venv/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 339, in instantiate_node
return _call_target(_target_, partial, args, kwargs, full_key)
File "/data/fburkhardt/tts/fastpitch_nemo/venv/lib/python3.8/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 97, in _call_target
raise InstantiationException(msg) from e
hydra.errors.InstantiationException: Error in call to target 'nemo.collections.common.tokenizers.text_to_speech.tts_tokenizers.GermanCharsTokenizer':
TypeError("__init__() got an unexpected keyword argument 'phonemes'")
full_key: text_tokenizer
</code></pre>
<p>When trying to run the script to test this model:</p>
<p><a href="https://huggingface.co/inOXcrm/German_multispeaker_FastPitch_nemo" rel="nofollow noreferrer">https://huggingface.co/inOXcrm/German_multispeaker_FastPitch_nemo</a></p>
<p>So far I was unable to google a solution. I simply ran
<code>pip install nemo_toolkit['all']</code>
and
<code>pip install pynini==2.1.5</code>
as suggested by the NeMo installation instructions.</p>
<p>any hints?</p>
|
<python>
|
2023-03-16 11:40:19
| 1
| 313
|
Felix Burkhardt
|
75,755,766
| 11,680,331
|
Python local imports still not working: ModuleNotFoundError or ImportError
|
<p>How can I import <code>src/constants.py</code> from within <code>src/data/compute_embeddings.py</code>?</p>
<h3>Project structure</h3>
<p>I have the following project structure:</p>
<pre><code>.
βββ data
βΒ Β βββ raw
βΒ Β βββ sample.xlsx
βββ __init__.py
βββ notebooks
βΒ Β βββ clustering.ipynb
βββ src
βββ constants.py
βββ data
βΒ Β βββ compute_embeddings.py
βΒ Β βββ __init__.py
βββ embeddings.py
βββ __init__.py
</code></pre>
<h3>Error</h3>
<p>Now, within <code>src/data/compute_embeddings.py</code>, I try to import <code>src/constants.py</code>. Here is what happens when I run <code>python src/data/compute_embeddings.py</code>:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Import method</th>
<th>Error</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>from src.constants import ...</code></td>
<td><code>ModuleNotFoundError: No module named 'src'</code></td>
</tr>
<tr>
<td><code>from ..constants import ...</code></td>
<td><code>ImportError: attempted relative import with no known parent package</code></td>
</tr>
</tbody>
</table>
</div>
<p>I tried running as a module with <code>python -m src/data/compute_embeddings</code>, but I get <code>No module named src/data/compute_embeddings</code>.</p>
<h3>Undesirable fix</h3>
<p>I am however able to add <code>src</code> to <code>sys.path</code> and then my import works. However, I would rather not have to include <code>import sys</code> + <code>sys.path.append(...)</code> at the beginning of every file.</p>
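<p>For completeness, the workaround I mentioned looks roughly like this (wrapped in a helper here just for illustration; the two-levels-up path arithmetic matches my project layout):</p>

```python
import os
import sys

def add_project_root(file_path: str) -> str:
    """Append the project root (two directories above file_path) to sys.path."""
    root = os.path.abspath(os.path.join(os.path.dirname(file_path), "..", ".."))
    if root not in sys.path:
        sys.path.append(root)
    return root

# at the top of src/data/compute_embeddings.py I would call:
# add_project_root(__file__)
```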
<p>Thank you in advance for your help!</p>
|
<python><module><python-import><importerror><file-structure>
|
2023-03-16 11:39:36
| 0
| 847
|
Joris Limonier
|
75,755,691
| 16,343,968
|
Confused about decimal precision in Python
|
<p>I use the Python decimal module to get around floating point errors. From what I understand, I can set how many decimal places a decimal can have by setting the precision in the context. But I recently discovered that my understanding of decimal precision is likely wrong, as this is what happens when I run this code:</p>
<pre class="lang-py prettyprint-override"><code>from decimal import Decimal, getcontext
getcontext().prec = 5
a = Decimal("80.05289")
b = Decimal("0.00015")
c = a * b
print(c)
</code></pre>
<p>Without the precision setting, this code would print <code>0.0120079335</code>, which is the exact result of the calculation. With the precision set to 5, I expected the result to be <code>0.01201</code>, because the 7 would be rounded up.</p>
<p>The weird thing is that neither of these happened, and the result given to me by Python was <code>0.012008</code>, as if I had set the precision to 6. Can someone explain what happened, and how can I fix this issue to always have only 5 decimal places?</p>
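<p>(edit) The only way I have found so far to get exactly 5 decimal places is to <code>quantize</code> the result explicitly, which sidesteps the context precision rather than explaining it:</p>

```python
from decimal import Decimal, ROUND_HALF_UP

# at the default context precision the product is exact
c = Decimal("80.05289") * Decimal("0.00015")
print(c)  # 0.0120079335

# round to exactly 5 decimal places
print(c.quantize(Decimal("0.00001"), rounding=ROUND_HALF_UP))  # 0.01201
```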
|
<python><python-3.x><python-decimal>
|
2023-03-16 11:32:43
| 1
| 830
|
Vladislav KoreckΓ½
|
75,755,441
| 11,318,472
|
Why does saving `to_netcdf` without `encoding=` change some values to `nan`?
|
<p>I'm struggling to understand a problem in my code when saving an <code>xarray.Dataset</code> as <code>netCDF</code>. The file does not contain any <code>nan</code> values. However, after saving and loading, it suddenly does for one value:</p>
<ol>
<li><p>Before: no <code>nan</code> in the original data before saving:</p>
<pre class="lang-py prettyprint-override"><code>
> ds.where(lambda x: x.isnull(), drop=True).coords
Coordinates:
* x (x) float64
* y (y) float64
* time (time) datetime64[ns]
lon (x) float64
lat (y) float64
</code></pre>
</li>
<li><p>Saving:</p>
<pre class="lang-py prettyprint-override"><code>> ds.to_netcdf("manual_save.nc")
</code></pre>
</li>
<li><p>Loading: Now a <code>nan</code> appears for a single data entry. Only this entry is affected. The effect is reproducible.</p>
<pre class="lang-py prettyprint-override"><code>> xr.open_dataset("manual_save.nc").where(lambda x: x.isnull(), drop=True).coords
Coordinates:
* x (x) float64 -3.5
* y (y) float64 57.0
* time (time) datetime64[ns] 2023-02-01
lon (x) float64 -3.5
lat (y) float64 57.0
</code></pre>
</li>
</ol>
<p><em>I don't understand why this is happening, can someone explain and offer a good solution?</em></p>
<p><strong>More details</strong></p>
<ol>
<li><p>Here's the value before and after saving+loading of the affected entry:</p>
<pre class="lang-py prettyprint-override"><code># Before saving+loading
> ds["soil temperature"].sel(x=-3.5, y=57, time="2023-02-01 00:00").load()
<xarray.DataArray 'soil temperature' ()>
array(275.88766, dtype=float32)
Coordinates:
x float64 -3.5
y float64 57.0
time datetime64[ns] 2023-02-01
lon float64 -3.5
lat float64 57.0
Attributes:
units: K
long_name: Soil temperature level 4
module: era5
feature: temperature
# After saving+loading
> xr.open_dataset("manual_save.nc")["soil temperature"].sel(x=-3.5, y=57, time="2023-02-01 00:00").load()
<xarray.DataArray 'soil temperature' ()>
array(nan, dtype=float32)
Coordinates:
x float64 -3.5
y float64 57.0
time datetime64[ns] 2023-02-01
lon float64 -3.5
lat float64 57.0
Attributes:
units: K
long_name: Soil temperature level 4
module: era5
feature: temperature
</code></pre>
</li>
<li><p>Before saving the data is represented as a <code>dask.array</code> in <code>xarray</code>, requiring the <code>.load()</code> to show the value. Without <code>.load()</code> it looks like this before saving:</p>
<pre class="lang-py prettyprint-override"><code>> ds["soil temperature"].sel(x=-3.5, y=57, time="2023-02-01 00:00")
<xarray.DataArray 'soil temperature' ()>
dask.array<getitem, shape=(), dtype=float32, chunksize=(), chunktype=numpy.ndarray>
Coordinates:
x float64 -3.5
y float64 57.0
time datetime64[ns] 2023-02-01
lon float64 -3.5
lat float64 57.0
Attributes:
units: K
long_name: Soil temperature level 4
module: era5
feature: temperature
</code></pre>
</li>
<li><p>Here's a peak at the full <code>xarray.DataSet</code>. <em>No</em> other entries are affected by the problem:</p>
<pre class="lang-py prettyprint-override"><code>> ds
<xarray.Dataset>
Dimensions: (x: 23, y: 25, time: 48)
Coordinates:
* x (x) float64 -4.0 -3.75 -3.5 -3.25 ... 0.75 1.0 1.25 1.5
* y (y) float64 56.0 56.25 56.5 56.75 ... 61.5 61.75 62.0
* time (time) datetime64[ns] 2023-01-31 ... 2023-02-01T23:00:00
lon (x) float64 -4.0 -3.75 -3.5 -3.25 ... 0.75 1.0 1.25 1.5
lat (y) float64 56.0 56.25 56.5 56.75 ... 61.5 61.75 62.0
Data variables:
temperature (time, y, x) float32 dask.array<chunksize=(24, 25, 23), meta=np.ndarray>
soil temperature (time, y, x) float32 dask.array<chunksize=(24, 25, 23), meta=np.ndarray>
Attributes:
module: era5
prepared_features: ['temperature']
chunksize_time: 100
Conventions: CF-1.6
history: 2023-03-13 09:15:56 GMT by grib_to_netcdf-2.25.1: /op...
</code></pre>
</li>
<li><p>I can workaround the issue by specifying an compression with <code>zlib</code> via <code>encoding</code>:</p>
<pre class="lang-py prettyprint-override"><code>
> ds.to_netcdf("manual_save_with_zlib.nc", encoding={'soil temperature': {'zlib': True, 'complevel': 1}})
> xr.open_dataset("manual_save_with_zlib.nc")["soil temperature"].sel(x=-3.5, y=57, time="2023-02-01 00:00").load()
<xarray.DataArray 'soil temperature' ()>
array(275.88766, dtype=float32)
Coordinates:
x float64 -3.5
y float64 57.0
time datetime64[ns] 2023-02-01
lon float64 -3.5
lat float64 57.0
Attributes:
units: K
long_name: Soil temperature level 4
module: era5
feature: temperature
</code></pre>
</li>
<li><p>The DataSet is created quite deep <a href="https://github.com/PyPSA/atlite/blob/17c81f9bee46752a89e31d5c28dc9e0b5fb107b9/atlite/data.py#L171" rel="nofollow noreferrer">inside the code of a library of ours</a> from the online API of <a href="https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-single-levels" rel="nofollow noreferrer">ERA5</a>, so I don't know how to create a MWE to share for this issue. The API access and retrieved data all seem to work fine as always.</p>
</li>
<li><p>(edit) As suggested by psalt I tried <code>.compute()</code> before saving and explicitly specifying <code>compute=True</code> while saving to remove this potential <code>dask</code> stumbling block. Neither changes the result; after loading, the <code>nan</code> values still exist. Here's what I did:</p>
<pre class="lang-py prettyprint-override"><code>> ds.compute().to_netcdf("manual_save_pre-compute.nc")
> ds.to_netcdf("manual_save-and-compute.nc", compute=True)
</code></pre>
</li>
<li><p>(edit) I also tried saving to <code>zarr</code> but without any success either. The same problem occurs there after loading.</p>
</li>
<li><p>(out of date)</p>
</li>
</ol>
<blockquote>
<p>(edit) I'm sharing the affected <code>DataSet</code> as <code>pickle</code> because all standard methods from <code>xarray</code> interfere with the problem. If you unpickle the version and then save the DataSet as described above, you can reproduce the problem. You can <a href="https://fex.hrz.uni-giessen.de/fop/YWayEkb2/manual_save.pickle" rel="nofollow noreferrer">download the pickle file here</a>.</p>
<pre class="lang-py prettyprint-override"><code>import pickle

# Code used for creating the pickle
f = open("manual_save.pickle", "wb")
pickle.dump(ds, f, protocol=pickle.HIGHEST_PROTOCOL)
f.close()

# Code for unpickling
with open("manual_save.pickle", "rb") as f:
    ds = pickle.load(f)
</code></pre>
</blockquote>
<ol start="9">
<li><p>(edit) I've managed to track down the error to an instable <code>netCDF</code> file. You can <a href="https://fex.hrz.uni-giessen.de/fop/34z4pt2G/instable-datafile.nc" rel="nofollow noreferrer">download the file here</a>. Tested with <code>xarray=2023.2.0</code> the following code seems to create a <code>nan</code> value out of thin air:</p>
<pre class="lang-py prettyprint-override"><code>import xarray as xr
ds = xr.open_mfdataset("instable-datafile.nc")
display("This contains no nan values", ds["t2m"].values)
ds.to_netcdf("collapsed-datafile.nc")
display("This contains nan values", xr.open_dataset("collapsed-datafile.nc")["t2m"].values)
# Output
'This contains no nan values'
array([[[278.03146, 278.4846 ],
[278.50998, 278.6799 ]],
[[277.91476, 278.4109 ],
[278.36594, 278.571 ]]], dtype=float32)
'This contains nan values'
array([[[278.03146, 278.4846 ],
[278.50998, 278.6799 ]],
[[ nan, 278.4109 ],
[278.36594, 278.571 ]]], dtype=float32)
</code></pre>
</li>
</ol>
<p>I'm happy to provide more information. Just let me know.</p>
|
<python><python-xarray><netcdf4>
|
2023-03-16 11:10:57
| 1
| 1,319
|
euronion
|
75,755,367
| 848,510
|
Unable to save data using pyarrow to gcs
|
<p>I am trying to save a data frame to GCS bucket, and it fails with the below error:</p>
<blockquote>
<p>TypeError: <code>__cinit__()</code> got an unexpected keyword argument
'existing_data_behavior'</p>
</blockquote>
<p>The code I use for saving the data is below; <code>pd_df</code> is the pandas DataFrame which I am trying to save as a partitioned Parquet file.</p>
<pre><code>table_w = pyarrow.Table.from_pandas(pd_df)
gs = gcsfs.GCSFileSystem()
pyarrow.parquet.write_to_dataset(table_w, root_path="gs://bucket/test", existing_data_behavior= "delete_matching", partition_cols =["group_id"], filesystem =gs, flavor="spark")
</code></pre>
<p>What am I doing wrong here?</p>
<p>OR How to overwrite the folder while writing dataset as parquet?</p>
|
<python><google-cloud-platform><parquet><pyarrow><google-cloud-storage>
|
2023-03-16 11:04:25
| 0
| 3,340
|
Tom J Muthirenthi
|
75,755,232
| 17,157,092
|
How to avoid iterating over dataframe rows
|
<p>I have a dataframe <code>df</code> with columns 'Name', 'Date' and 'Id'. The 'Id' column is initially all 0 and I want to populate it as follows: the rows that have the same 'Date' and <code>same_names(name_i, name_j) == True</code> take the same Id.</p>
<p>I managed to do it with a for-loop which goes over the rows of the df:</p>
<pre><code>from collections import defaultdict
import pandas as pd
def same_names(name1, name2):
name1_parts = name1.split()
name2_parts = name2.split()
# Compare last names
return name1_parts[-1] == name2_parts[-1]
# Sample data
data = [
['John Smith', '2022-01-01', 0],
['Mary Johnson', '2022-01-04', 0],
['Mark Williams', '2022-01-02', 0],
['Jessica Brown', '2022-01-03', 0],
['David Lee', '2022-01-03', 0],
['John Brown', '2022-01-02', 0],
['Frank Johnson', '2022-01-04', 0],
['Mary Lee', '2022-01-03', 0],
['David Lee', '2022-01-03', 0]
]
header = ['Name', 'Date', 'Id']
df = pd.DataFrame(data, columns=header)
date_to_index = defaultdict(list)
for index, row in df.iterrows():
date = row['Date']
if date in date_to_index:
for i in date_to_index[date]:
prev_row = df.iloc[i]
if same_names(prev_row['Name'], row['Name']):
df.at[index, 'Id'] = prev_row['Id']
else:
df.at[index, 'Id'] = df["Id"].max() + 1
date_to_index[date].append(index)
else:
if index > 0:
df.at[index, 'Id'] = df["Id"].max() + 1
date_to_index[date].append(index)
df.sort_values(by="Id", inplace=True, ignore_index=True)
print(df)
</code></pre>
<p>Result:</p>
<pre><code> Name Date Id
0 John Smith 2022-01-01 0
1 Mary Johnson 2022-01-04 1
2 Frank Johnson 2022-01-04 1
3 Mark Williams 2022-01-02 2
4 Jessica Brown 2022-01-03 3
5 David Lee 2022-01-03 4
6 Mary Lee 2022-01-03 4
7 David Lee 2022-01-03 4
8 John Brown 2022-01-02 5
</code></pre>
<p>Is there a way to vectorize this code (or make it faster in any other way)? Maybe using <code>groupby</code> somehow, but the problem is that I compare the names using a function <code>same_names</code> and not just by equality, which complicates it.</p>
<p><strong>Note</strong>: the <code>same_names()</code> function may be different from the example because in reality the names can be more messy (e.g. instead of 'Mary Johnson' it can be 'Johnson Mary' or 'Ms Mary Johnson', so I still have to figure out what <code>same_names()</code> will be).</p>
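If `same_names()` really does reduce to comparing last names (an assumption based on the example above), the loop can be vectorized with `groupby` on a derived key plus `ngroup`; a hedged sketch that reproduces the Id column shown above:

```python
# Sketch, assuming the grouping key is (Date, last name of Name).
import pandas as pd

data = [
    ['John Smith', '2022-01-01', 0], ['Mary Johnson', '2022-01-04', 0],
    ['Mark Williams', '2022-01-02', 0], ['Jessica Brown', '2022-01-03', 0],
    ['David Lee', '2022-01-03', 0], ['John Brown', '2022-01-02', 0],
    ['Frank Johnson', '2022-01-04', 0], ['Mary Lee', '2022-01-03', 0],
    ['David Lee', '2022-01-03', 0],
]
df = pd.DataFrame(data, columns=['Name', 'Date', 'Id'])

# Derive the last name once, then number the groups in order of appearance
last = df['Name'].str.split().str[-1]
df['Id'] = df.groupby(['Date', last], sort=False).ngroup()
print(df['Id'].tolist())
```

A messier `same_names()` (e.g. honorifics, swapped name order) would need a normalization function applied to `Name` first, so that equal keys mean "same person".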
|
<python><pandas><dataframe><vectorization>
|
2023-03-16 10:52:19
| 2
| 302
|
orbit
|
75,755,045
| 4,742,411
|
PyQt6.QWidgets has no attribute qApp
|
<p>I am migrating (someone else his) code from pyqt5 to pyqt6 and the following code is giving me an error:</p>
<pre><code>def quitApp(self):
    """ Exit application callback """
    QtWidgets.qApp.quit
</code></pre>
<p>It seems that this no longer works in PyQt6 (I get the error in the title). I tried to find out how this is supposed to be done in PyQt6, but unfortunately I could not find anything.</p>
<p>Anyone any ideas?</p>
|
<python><pyqt><pyqt6>
|
2023-03-16 10:37:50
| 1
| 1,065
|
user180146
|
75,755,039
| 18,481,179
|
polars and applying a UDF by column
|
<p>Importing text into a polars series arranges data in columns, which is what makes polars (Arrow) so efficient. The method <code>map_batches</code> will apply a user-defined function (UDF) across rows but not columns. So I am in search of the fastest method to apply my UDF to each column.</p>
<p>This does what I need but it results in a 1 x n df with encapsulated series that are a pain to unpack. Does anyone have a better idea to unpack this or how to apply by columns?</p>
<p>Thanks!</p>
<p>Currently, I <code>select</code> and use <code>map_batches</code> as follows on a 2000 x n df:</p>
<pre class="lang-py prettyprint-override"><code>df_baseline_series = df.select(
    pl.all().map_batches(lambda y: get_baseline(y.to_numpy()))
)
</code></pre>
<p>The function <code>get_baseline</code> uses a <code>scikit-ued</code> function to return a <code>np.ndarray</code>:</p>
<pre class="lang-py prettyprint-override"><code>baseline_dt(data, wavelet='qshift3', level=6, max_iter=1)
</code></pre>
<p>This gives me a 1 x n df, with each element being a doubly-nested series (i.e., a series of a series).</p>
|
<python><python-polars>
|
2023-03-16 10:37:03
| 1
| 339
|
Sam
|
75,754,951
| 11,235,205
|
Pandas: how to turn time-series rows into a contiguous set of columns?
|
<p>I have a table that stores the steps by which an item is transferred (purposely made in Python Pandas for ease of reproduction):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame([
    (1, 1, '2023-01-01 00:00:00', 'Arrive in Warehouse', 'PS5'),
    (2, 1, '2023-02-01 00:00:00', 'Packaging', 'PS5'),
    (3, 1, '2023-03-01 00:00:00', 'Shipping', 'PS5'),
    (4, 1, '2023-04-01 00:00:00', 'Received', 'PS5'),
    (5, 2, '2023-01-01 00:00:00', 'Arrive in Warehouse', 'Fan'),
    (6, 2, '2023-01-02 00:00:00', 'Checking failures', 'Fan'),
    (7, 2, '2023-01-03 00:00:00', 'Shipping', 'Fan'),
    (8, 2, '2023-01-04 00:00:00', 'Received', 'Fan')
], columns = ['PK', 'ID', 'DATE', 'STATUS', 'NAME'])
</code></pre>
<p>I want to do something like PIVOT for the column <code>STATUS</code> that "breaks" it into multiple columns and inserts the corresponding <code>DATE</code> as the value. Expected output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>Arrive in Warehouse</th>
<th>Checking failures</th>
<th>Packaging</th>
<th>Shipping</th>
<th>Received</th>
<th>Name</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2023-01-01</td>
<td></td>
<td>2023-02-01</td>
<td>2023-03-01</td>
<td>2023-04-01</td>
<td>PS5</td>
</tr>
<tr>
<td>2</td>
<td>2023-01-01</td>
<td>2023-01-02</td>
<td></td>
<td>2023-01-03</td>
<td>2023-01-04</td>
<td>Fan</td>
</tr>
</tbody>
</table>
</div>
<p>Some assumptions and requirements that I have to follow:</p>
<ul>
<li>The table is already sorted by <code>ID, NAME</code>. The <code>STATUS</code> is also sorted in the correct flow for each <code>ID</code>.</li>
<li>The name and flow of <code>STATUS</code> is known beforehand in a separate dataframe, it looks like <code>(1, 'Arrive in Warehouse'), (2, 'Checking failures'), (3, 'Packaging'),...</code>, if that helps</li>
<li>The "pivot" process should not change the order of the columns, and should be able to generalize to an arbitrary number of columns after <code>STATUS</code>. As you see in the example, <code>Name</code> is the last column in the table, and it should also be the last column in the "pivoted" table. In my actual table, there are 3 columns before the <code>DATE</code> column and a lot of them after the <code>STATUS</code> column.</li>
<li>Each combination of <code>ID-DATE-STATUS</code> appears only once in the table, so no need to de-duplicate it.</li>
</ul>
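Given those assumptions, one hedged sketch of the pivot itself (assuming a pandas version that supports a list of index columns in `pivot`, roughly 1.1+), with the known status flow driving the column order:

```python
import pandas as pd

df = pd.DataFrame([
    (1, 1, '2023-01-01 00:00:00', 'Arrive in Warehouse', 'PS5'),
    (2, 1, '2023-02-01 00:00:00', 'Packaging', 'PS5'),
    (3, 1, '2023-03-01 00:00:00', 'Shipping', 'PS5'),
    (4, 1, '2023-04-01 00:00:00', 'Received', 'PS5'),
    (5, 2, '2023-01-01 00:00:00', 'Arrive in Warehouse', 'Fan'),
    (6, 2, '2023-01-02 00:00:00', 'Checking failures', 'Fan'),
    (7, 2, '2023-01-03 00:00:00', 'Shipping', 'Fan'),
    (8, 2, '2023-01-04 00:00:00', 'Received', 'Fan'),
], columns=['PK', 'ID', 'DATE', 'STATUS', 'NAME'])

# The known status flow (from the separate dataframe) fixes the column order
status_order = ['Arrive in Warehouse', 'Checking failures',
                'Packaging', 'Shipping', 'Received']

wide = df.pivot(index=['ID', 'NAME'], columns='STATUS', values='DATE')
wide = wide.reindex(columns=status_order).reset_index()
wide = wide[['ID'] + status_order + ['NAME']]  # move NAME back to last
print(wide)
```

Missing statuses come out as `NaN`; with more trailing columns, the final reordering line would list them after `status_order` in their original order.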
|
<python><pandas>
|
2023-03-16 10:31:22
| 3
| 2,779
|
Long Luu
|
75,754,842
| 4,432,671
|
NumPy: argsort to give row-only permutation?
|
<p>I can use <code>np.argsort()</code> to create a permutation which if used as an indexing expression returns elements in order:</p>
<pre><code>>>> a = np.array([9, 3, 5, 2, 5, 1, 0, 8])
>>> idx = np.argsort(a)
>>> a[idx]
array([0, 1, 2, 3, 5, 5, 8, 9])
</code></pre>
<p>All well and good. Now I want to use this technique on a matrix, to give a permutation which if used for indexing would select the rows in order. Note that I don't want the results of applying <code>argsort()</code> on axis 0, as this gives a permutation per column.</p>
<p>In other words,</p>
<pre><code>>>> a = np.array([7, 3, 4, 7, 9, 7, 3, 4, 5]).reshape((3,3))
>>> a
array([[7, 3, 4],
[7, 9, 7],
[3, 4, 5]])
>>> np.find_lexicographic_row_permutation(a) # pseudo-code
array([2, 0, 1])
</code></pre>
<p>In other words, the permutation would reorder the matrix's rows in 'lexicographic' order.</p>
<p>How can I achieve this in a generic way?</p>
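For reference, `np.lexsort` produces exactly this permutation; its last key is the primary sort key, so the columns are fed in reverse order:

```python
import numpy as np

a = np.array([7, 3, 4, 7, 9, 7, 3, 4, 5]).reshape((3, 3))

# lexsort sorts by the LAST key first, so reverse the columns:
# primary key = first column, secondary key = second column, ...
perm = np.lexsort(a.T[::-1])
print(perm)  # -> [2 0 1]
```

Indexing with `a[perm]` then yields the rows in lexicographic order for any number of columns.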
|
<python><numpy>
|
2023-03-16 10:19:46
| 1
| 3,737
|
xpqz
|
75,754,735
| 11,328,614
|
GET/POST request with additional non named parameter using python requests
|
<p>I'm currently dealing with the implementation of an Anel power outlet manager. The Anel power outlet supports the following requests:
<a href="https://forum.anel.eu/viewtopic.php?f=52&t=888&sid=39081b8e472aaae7ffcec4cd3fb41e83" rel="nofollow noreferrer">https://forum.anel.eu/viewtopic.php?f=52&t=888&sid=39081b8e472aaae7ffcec4cd3fb41e83</a></p>
<p>However, the special form of the HTTP request, which specifies the concatenated user password comma-separated in the URL, gives me a headache because it appears to be unsupported by the Python requests package.</p>
<p><em>Therefore, here is my question:</em><br />
How do I send an HTTP request in one of the forms</p>
<ol>
<li><code>http://IP?param=value,userpassword</code></li>
<li><code>http://IP?param=value&userpassword</code> or</li>
<li><code>http://IP?param&userpassword</code></li>
</ol>
<p>e.g.</p>
<ol>
<li><code>http://192.168.0.244?Stat&userpassword</code></li>
<li><code>http://192.168.0.244?Sw=0xc0&userpassword</code></li>
</ol>
<p>using the python requests package? I can sent the request manually using the browser and get the appropriate response. However, I cannot send it programmatically.</p>
<p>I'm struggling with the syntax as the requests documentation tells nothing about unnamed parameters (<a href="https://requests.readthedocs.io/en/latest/user/quickstart/#make-a-request" rel="nofollow noreferrer">https://requests.readthedocs.io/en/latest/user/quickstart/#make-a-request</a>).</p>
<p>It tells me about passing parameters as key-value pairs using a dict or a tuple or about passing parameters in the body of the request or dealing with response objects. But all this is not my problem.
It's just about sending the "userpassword" string, which can either be passed as concatenated cleartext or the same base64 encoded separated by a comma or ampersand after the named parameter.</p>
<pre><code>req = requests.get("IP", {"Param": "Value"}, "userpassword")
Traceback (most recent call last):
File "/home/user/pycharm-231.7864.77/plugins/python/helpers/pydev/pydevconsole.py", line 364, in runcode
coro = func()
File "<input>", line 1, in <module>
TypeError: get() takes from 1 to 2 positional arguments but 3 were given
</code></pre>
<pre><code>req = requests.post("http://IP", {"Param": "Value"}, "userpassword")
req
<Response [401]>
</code></pre>
<pre><code>req = requests.post("http://IP", {"Param": "Value", None: "userpassword"})
req
<Response [401]>
</code></pre>
<p>However, assembling the URL manually works.</p>
<pre><code>req = requests.get("http://IP?Param=Value,userpassword", allow_redirects=False)
req.content
b'304 Redirect: /u_res.htm\r\n'
req2 = requests.get("http://IP/u_res.htm")
req2.text
"blablablabla"
</code></pre>
<p>Unfortunately, since I want an abstraction layer, I would not like my code to depend on the exact details of the request URL format.</p>
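One option that may avoid hand-assembling the URL: requests also accepts `params` as a raw string, which is appended after the `?` largely untouched. A hedged sketch (IP taken from the question, request built but not sent, so the final URL can be inspected):

```python
import requests

# Build (but do not send) the request to inspect the resulting URL;
# passing params as a plain string keeps the bare "userpassword" token.
prepared = requests.Request(
    "GET", "http://192.168.0.244", params="Stat&userpassword"
).prepare()
print(prepared.url)
```

The comma-separated form should work the same way with `params="Sw=0xc0,userpassword"`, since commas are among the characters requests leaves unescaped, though that is worth verifying against the device.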
|
<python><python-3.x><python-requests><httprequest><positional-argument>
|
2023-03-16 10:10:28
| 2
| 1,132
|
WΓΆr Du Schnaffzig
|
75,754,711
| 11,760,835
|
Embed SpoutGL receiver in Qt widget
|
<p>I'm trying to embed a SpoutGL (<a href="https://github.com/jlai/Python-SpoutGL" rel="nofollow noreferrer">link to bindings</a>) program that receives graphics into a Qt (PySide6) window.
How can I do that?</p>
<p>The only example I have found uses PyGame to create a surface, and then an image of it is converted into a QImage, but that is not real-time processing. The <a href="https://github.com/jlai/Python-SpoutGL/blob/main/examples/texture/receiver.py" rel="nofollow noreferrer">other example</a> uses the PyGame event loop in order to receive data in real time, but I would have to embed the PyGame window into a QWidget, which is not good.</p>
<p>I have this code:</p>
<pre class="lang-py prettyprint-override"><code>import time
import SpoutGL
import contextlib
from OpenGL.GL import *
from PySide6.QtCore import QObject, Signal, Qt, QThread
from PySide6.QtGui import QImage, QPixmap
from PySide6.QtWidgets import QMainWindow, QLabel, QWidget, QVBoxLayout, QApplication

with contextlib.redirect_stdout(None):
    import pygame

DISPLAY_WIDTH = 800
DISPLAY_HEIGHT = 600
SENDER_NAME = "SpoutGL-texture-test"

def setProjection(width, height):
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    glOrtho(0, width, height, 0, 1, -1)
    glMatrixMode(GL_MODELVIEW)

def drawSquare(width, height):
    glEnable(GL_TEXTURE_2D)
    glBegin(GL_QUADS)
    glTexCoord(0, 0)
    glVertex2f(0, 0)
    glTexCoord(1, 0)
    glVertex2f(width, 0)
    glTexCoord(1, 1)
    glVertex2f(width, height)
    glTexCoord(0, 1)
    glVertex2f(0, height)
    glEnd()
    glDisable(GL_TEXTURE_2D)

class SpoutWorker(QObject):
    send_image = Signal(QImage)

    def run(self):
        # Initialise screen
        pygame.init()
        pygame.display.set_mode((DISPLAY_WIDTH, DISPLAY_HEIGHT),
                                pygame.OPENGL | pygame.HIDDEN)

        receiveTextureID = glGenTextures(1)
        glBindTexture(GL_TEXTURE_2D, receiveTextureID)
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
        glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0,
                         DISPLAY_WIDTH, DISPLAY_HEIGHT, 0)
        setProjection(DISPLAY_WIDTH, DISPLAY_HEIGHT)
        glClearColor(0.0, 0.0, 0.0, 1.0)

        with SpoutGL.SpoutReceiver() as receiver:
            receiver.setReceiverName(SENDER_NAME)
            width = 0
            height = 0
            while not QThread.currentThread().isInterruptionRequested():
                result = receiver.receiveTexture(
                    receiveTextureID, GL_TEXTURE_2D, False, 0)
                if receiver.isUpdated():
                    width = receiver.getSenderWidth()
                    height = receiver.getSenderHeight()
                    print("Updated")
                    # Initialize or update texture size
                    glActiveTexture(GL_TEXTURE0)
                    glBindTexture(GL_TEXTURE_2D, receiveTextureID)
                    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width,
                                 height, 0, GL_RGBA, GL_UNSIGNED_BYTE, None)
                    setProjection(width, height)

                glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

                # Draw texture
                glActiveTexture(GL_TEXTURE0)
                glBindTexture(GL_TEXTURE_2D, receiveTextureID)
                drawSquare(width, height)

                # Get the display image
                image_str = pygame.image.tostring(pygame.display.get_surface(), "RGB", 1)
                buffer = QImage(image_str, DISPLAY_WIDTH, DISPLAY_HEIGHT, QImage.Format_RGB888)
                if buffer.isNull():
                    print("Buffer is null")
                    continue
                self.send_image.emit(buffer)

                # Wait until the next frame is ready
                # Wait time is in milliseconds; note that 0 will return immediately
                receiver.waitFrameSync(SENDER_NAME, 1000)
                time.sleep(0.1)

        print("Thread finished")

class SpoutReceiver(QMainWindow):
    def __init__(self):
        QMainWindow.__init__(self)
        self.main_widget = QWidget()
        self.setCentralWidget(self.main_widget)
        self.main_layout = QVBoxLayout()
        self.main_widget.setLayout(self.main_layout)

        self.spout_worker = SpoutWorker()
        self.spout_worker.send_image.connect(self.update_image)
        self.spout_worker_thread = QThread()
        self.spout_worker.moveToThread(self.spout_worker_thread)
        self.spout_worker_thread.started.connect(self.spout_worker.run)
        self.spout_worker_thread.start()

        self.image_label = QLabel()
        self.image_label.setAlignment(Qt.AlignCenter)
        self.main_layout.addWidget(self.image_label)

    def update_image(self, image):
        flipped_image = image.mirrored()
        qpixmap = QPixmap.fromImage(flipped_image)
        self.image_label.setPixmap(qpixmap)

    def closeEvent(self, event):
        self.spout_worker_thread.requestInterruption()
        self.spout_worker_thread.quit()
        self.spout_worker_thread.wait()
        QMainWindow.closeEvent(self, event)

if __name__ == "__main__":
    app = QApplication()
    window = SpoutReceiver()
    window.show()
    app.exec()
</code></pre>
<p>It uses a QThread to run PyGame at the same time and SpoutGL to receive the graphics. However, although <code>receiveTexture</code> returns True, no image is received.</p>
|
<python><opengl><pyqt><pyside><spout>
|
2023-03-16 10:08:26
| 0
| 394
|
Jaime02
|
75,754,655
| 17,034,564
|
Scrape pdfs links from dynamic content in div in Python with Selenium
|
<p>I am trying to scrape some PDF links from the <a href="https://www.bundestag.de/protokolle" rel="nofollow noreferrer">Bundestag's website</a>.
There is an <a href="https://stackoverflow.com/questions/54633910/find-dynamic-content-in-div-not-iframe-python-selenium">old question</a> regarding the same issue, but it does not provide an answer. However, I post it here for context and for further information in case it is needed.</p>
<p>My code works only partially, since it outputs and appends to <code>pdf_links</code> only the link of the first PDF, while it should scrape all the links from the page and then move (through <code>button</code>) to the next page.</p>
<p>My code looks like this:</p>
<pre><code>import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome("C:/Program Files/Chrome Driver/chromedriver.exe")
driver.get('https://www.bundestag.de/protokolle')

button_count = 0
pdf_links = []  # Initialize empty list to store links
mywaits = 100

for a in range(10):
    WebDriverWait(driver, mywaits).until(EC.presence_of_element_located(
        (By.CSS_SELECTOR, "div.slick-initialized")))
    elements = driver.find_elements(By.CSS_SELECTOR, "div.slick-initialized")
    for element in elements:
        pdf_link = element.find_element(By.CSS_SELECTOR, "a")
        pdf_links.append(pdf_link.get_attribute('href'))  # Append the href attribute to list
    WebDriverWait(driver, mywaits).until(EC.element_to_be_clickable(
        (By.CSS_SELECTOR, "div.slick-initialized")))
    button = driver.find_element(By.CSS_SELECTOR, "div.slick-initialized")
    button.click()
    print(pdf_links)
    button_count += 1
    time.sleep(1)  # Add a short delay to allow the new content to load

driver.close()
print(pdf_links)
</code></pre>
<p>And this is the output:</p>
<pre><code>['https://dserver.bundestag.de/btp/20/20090.pdf']
['https://dserver.bundestag.de/btp/20/20090.pdf', 'https://dserver.bundestag.de/btp/20/20090.pdf']
['https://dserver.bundestag.de/btp/20/20090.pdf', 'https://dserver.bundestag.de/btp/20/20090.pdf', 'https://dserver.bundestag.de/btp/20/20090.pdf']...
</code></pre>
<p>For additional context, this is what the website looks like. In red is the document whose link gets appended to the list, and in green is the button to move to the next page of documents.</p>
<p><a href="https://i.sstatic.net/pL2Tq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pL2Tq.png" alt="enter image description here" /></a></p>
|
<python><selenium-webdriver><web-scraping><css-selectors><webdriverwait>
|
2023-03-16 10:03:56
| 2
| 678
|
corvusMidnight
|
75,754,609
| 4,730,164
|
Set style for pandas without changing its type
|
<p>I would like to change the style of my pandas DataFrame without changing its type.</p>
<p>Here a small example</p>
<pre><code>import pandas as pd

df = pd.DataFrame([[19, 439], [19, 439]], columns=['COOL', 'NOTCOOL'])

def colour_col(col):
    if col.name == 'COOL':
        return ['background-color: red' for c in col.values]

df = df.style.apply(colour_col)
df
</code></pre>
<p><a href="https://i.sstatic.net/8QYWn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8QYWn.png" alt="enter image description here" /></a></p>
<p>But, obviously, <code>df</code> is now a <code>pandas.io.formats.style.Styler</code>,
so I cannot access the 'COOL' column anymore:</p>
<pre><code>df['COOL']
TypeError: 'Styler' object is not subscriptable
</code></pre>
<p>How can I style my DataFrame only when it is displayed?</p>
|
<python><pandas><dataframe>
|
2023-03-16 10:00:42
| 1
| 753
|
olivier dadoun
|
75,754,471
| 6,653,706
|
How to interact through a UI with a Python script
|
<p>I created a prototype of a small rover with a Raspberry Pi; using Python, I created a script that allows me to move it simply by pressing a few keys on the keyboard.</p>
<p>The script essentially waits for a key to be pressed to "command" the rover motors, the code works through a terminal shell, now what I would like is to create a web UI or an Android app that allows me to command the rover without using a shell.</p>
<p>The source code is something like that:</p>
<pre><code>#!/usr/bin/env python
import RPi.GPIO as GPIO
from time import sleep

print("\n")
print("W-forward S-backward A-left turn D-right turn l-low m-medium h-high e-exit")
print("\n")

a = 11
b = 15
c = 16
d = 18

def init():
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(a, GPIO.OUT)
    GPIO.setup(b, GPIO.OUT)
    GPIO.setup(c, GPIO.OUT)
    GPIO.setup(d, GPIO.OUT)
    p = GPIO.PWM(d, 1000)
    p.start(25)

def forward():
    init()
    # MORE STUFF

def reverse():
    init()
    # MORE STUFF

def left_turn():
    init()
    # MORE STUFF

def right_turn():
    init()
    # MORE STUFF

while(1):
    x = input()
    if x == 'w':
        print("forward")
        forward()
        x = 'z'
    elif x == 's':
        print("backward")
        reverse()
        x = 'z'
    elif x == 'a':
        print("left")
        left_turn()
        x = 'z'
    elif x == 'd':
        print("right")
        right_turn()
        x = 'z'
    elif x == 'e':
        GPIO.cleanup()
        break
    else:
        print("<<< wrong data >>>")
        print("please enter the defined data to continue.....")
</code></pre>
<p>How can I create a graphical interface (basically formed by 4 buttons that will represent the directions) that can interact with a python script? Is it possible to continue to use a script as a basis for moving my rover or should I implement a different solution?</p>
<p>More than a solution I'm interested in understanding if I can use the base script and if I can hook it to a UI.</p>
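Yes, the base script can be kept: the existing motor functions just need to be triggered by HTTP requests instead of keyboard input. A hedged sketch using only the standard library, with the four functions as stand-ins for the real ones from the script:

```python
# Minimal web UI sketch: four buttons POSTing to paths that call the
# motor functions. forward/reverse/left_turn/right_turn are placeholders
# for the rover script's real functions.
from http.server import BaseHTTPRequestHandler, HTTPServer

def forward(): print("forward")
def reverse(): print("backward")
def left_turn(): print("left")
def right_turn(): print("right")

ACTIONS = {"/forward": forward, "/backward": reverse,
           "/left": left_turn, "/right": right_turn}

# One HTML form per action, each rendered as a button
PAGE = "<html><body>" + "".join(
    f'<form action="{path}" method="post"><button>{path[1:]}</button></form>'
    for path in ACTIONS) + "</body></html>"

class RoverHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE.encode())

    def do_POST(self):
        action = ACTIONS.get(self.path)
        if action:
            action()                     # call the matching motor function
        self.send_response(303)          # redirect back to the button page
        self.send_header("Location", "/")
        self.end_headers()

# To run on the rover (this call blocks):
# HTTPServer(("0.0.0.0", 8000), RoverHandler).serve_forever()
```

The page is then reachable from any phone or browser on the same network, which also covers the Android use case without writing a native app.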
|
<python><java><android><html>
|
2023-03-16 09:48:15
| 1
| 1,117
|
Mattia
|
75,754,443
| 1,780,761
|
Python: copy file to clipboard
|
<p>I need a way to copy a PNG image with its known path to the windows clipboard. My program has a list of files, when one is selected a button is pressed and the file should be copied in the clipboard and allow for pasting somewhere else.</p>
<p>All I found online deals with images, by copying the image itself to the clipboard so it can be pasted in a chat or in paint. However, I couldn't find anything that allows me to copy <strong>the file itself</strong>, NOT its content, to the clipboard.</p>
<p>The idea would be to have a function that accepts a filepath as input and adds that file to the clipboard.</p>
<p>Note: I have also tried reading through <a href="http://timgolden.me.uk/pywin32-docs/win32clipboard__SetClipboardData_meth.html" rel="nofollow noreferrer">this (win32clipboard)</a> but have not been able to understand it.</p>
|
<python><file><copy><clipboard>
|
2023-03-16 09:44:59
| 1
| 4,211
|
sharkyenergy
|
75,753,922
| 3,389,669
|
Failed tree-less parsing using python lark
|
<p>I use <a href="https://lark-parser.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">lark</a> to parse lines of a log file. The log file contains some equations, and I would like to extract the left-hand side and right-hand side of the equations and store them in a dictionary. However, in my context the parsing seems to be rather slow. I accelerated the parsing by switching from the Earley algorithm to LALR(1) parsing, as suggested by the <a href="https://lark-parser.readthedocs.io/en/latest/json_tutorial.html#step-2-lalr-1" rel="nofollow noreferrer">lark tutorial</a>.
However, I would like to squeeze out the last bit of performance by also <a href="https://lark-parser.readthedocs.io/en/latest/json_tutorial.html#step-3-tree-less-lalr-1" rel="nofollow noreferrer">"going tree-less"</a>. Unfortunately, it does not work as expected. Consider the following MWE:</p>
<pre class="lang-py prettyprint-override"><code>from lark import Lark, Transformer

OPT = True

def parser() -> Lark:
    grammar = r"""
        equations: [equation ("," equation)*]
        equation: identifier "=" rhs
        identifier: CNAME
        rhs: num
           | vector
        vector: "[" [num ("," num)*] "]"
        num: SIGNED_NUMBER

        %import common.CNAME
        %import common.SIGNED_NUMBER
        %import common.WS
        %ignore WS
        """
    if OPT:
        eq_parser = Lark(grammar, start="equations", parser="lalr", transformer=ToDict)
    else:
        eq_parser = Lark(grammar, start="equations", parser="lalr")
    return eq_parser

class ToDict(Transformer):
    def equations(self, eqs):
        return {lhs: rhs for eq in eqs for lhs, rhs in eq.items()}

    def equation(self, eq):
        (ident, _rhs) = eq
        return {ident: _rhs}

    def rhs(self, num_vec):
        (num_vec,) = num_vec
        return num_vec

    def identifier(self, ident):
        (ident,) = ident
        return str(ident)

    def num(self, n):
        (n,) = n
        return float(n)

    def vector(self, vec):
        return list(vec)

if __name__ == "__main__":
    line = "a=3.14, b=[1.41, 1.732]"
    prsr = parser()
    if OPT:
        parsed = prsr.parse(line)
    else:
        parsed = ToDict().transform(prsr.parse(line))
    print(parsed)
</code></pre>
<p>If <code>OPT</code> is set to <code>False</code>, the expression <code>{'a': 3.14, 'b': [1.41, 1.732]}</code> is returned as expected. But if <code>OPT</code> is set to <code>True</code>, this happens:</p>
<pre><code>Traceback (most recent call last):
File ".../mwe.py", line 55, in <module>
parsed = prsr.parse(line)
File ".../anaconda3/envs/lark/lib/python3.10/site-packages/lark/lark.py", line 625, in parse
return self.parser.parse(text, start=start, on_error=on_error)
File ".../anaconda3/envs/lark/lib/python3.10/site-packages/lark/parser_frontends.py", line 96, in parse
return self.parser.parse(stream, chosen_start, **kw)
File ".../anaconda3/envs/lark/lib/python3.10/site-packages/lark/parsers/lalr_parser.py", line 41, in parse
return self.parser.parse(lexer, start)
File ".../anaconda3/envs/lark/lib/python3.10/site-packages/lark/parsers/lalr_parser.py", line 171, in parse
return self.parse_from_state(parser_state)
File ".../anaconda3/envs/lark/lib/python3.10/site-packages/lark/parsers/lalr_parser.py", line 179, in parse_from_state
state.feed_token(token)
File ".../anaconda3/envs/lark/lib/python3.10/site-packages/lark/parsers/lalr_parser.py", line 150, in feed_token
value = callbacks[rule](s)
TypeError: ToDict.identifier() missing 1 required positional argument: 'ident'
</code></pre>
<p>What is the error message trying to tell me?</p>
|
<python><lark-parser>
|
2023-03-16 08:58:34
| 1
| 819
|
user3389669
|
75,753,909
| 12,016,688
|
Defining an iterator without implementing the iterator protocol
|
<p>As noted <a href="https://docs.python.org/3/library/stdtypes.html#iterator-types" rel="nofollow noreferrer">here</a> iterator objects are required to implement both <code>__iter__</code> and <code>__next__</code> methods.</p>
<blockquote>
<p>The iterator objects themselves are required to support the
following two methods, which together form the iterator protocol:</p>
<p><strong><code>iterator.__iter__()</code></strong> Return the iterator object itself. This is
required to allow both containers and iterators to be used with the
for and in statements. This method corresponds to the tp_iter slot of
the type structure for Python objects in the Python/C API.</p>
<p><strong><code>iterator.__next__()</code></strong> Return the next item from the iterator. If there
are no further items, raise the StopIteration exception. This method
corresponds to the tp_iternext slot of the type structure for Python objects in the Python/C API.</p>
</blockquote>
<p>By this definition, in the snippet below I have not implemented an iterator:</p>
<pre class="lang-py prettyprint-override"><code>class Test:
    def __init__(self, size):
        self.size = size

    def __iter__(self):
        return TestIter(self.size)


class TestIter:
    def __init__(self, target=20):
        self.counter = 0
        self.target = target

    def __next__(self):
        if self.counter > self.target:
            raise StopIteration
        self.counter += 1
        return self.counter
</code></pre>
<p>Because neither <code>Test</code> nor <code>TestIter</code> has both <code>__iter__</code> and <code>__next__</code> methods defined. But the <code>Test</code> class has the full functionality of an iterator.</p>
<pre class="lang-py prettyprint-override"><code>>>> for i in Test(5):
... print(i)
...
1
2
3
4
5
6
>>>
>>> list(Test(5))
[1, 2, 3, 4, 5, 6]
>>>
>>> it = iter(Test(5))
>>> next(it)
1
>>> next(it)
2
>>> next(it)
3
>>> next(it)
4
>>> next(it)
5
>>> next(it)
6
>>> next(it)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/tmp/iterators.py", line 18, in __next__
raise StopIteration
StopIteration
>>>
</code></pre>
<p>Is this technically correct that <code>Test</code> is not an iterator according to the definition?</p>
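For reference, the abstract base classes in `collections.abc` give a concrete way to check the protocol question above: the `for` loop works because it only needs `__iter__` on `Test` and `__next__` on the returned object, yet neither class satisfies the full `Iterator` ABC.

```python
from collections.abc import Iterable, Iterator

class Test:
    def __init__(self, size):
        self.size = size
    def __iter__(self):
        return TestIter(self.size)

class TestIter:
    def __init__(self, target=20):
        self.counter = 0
        self.target = target
    def __next__(self):
        if self.counter > self.target:
            raise StopIteration
        self.counter += 1
        return self.counter

print(isinstance(Test(5), Iterable))        # True: Test defines __iter__
print(isinstance(Test(5), Iterator))        # False: Test lacks __next__
print(isinstance(iter(Test(5)), Iterator))  # False: TestIter lacks __iter__
```

So `Test` is an *iterable* but not an *iterator*, and `TestIter` is not an iterator in the protocol sense either, even though iteration happens to work.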
|
<python><iterator>
|
2023-03-16 08:57:56
| 0
| 2,470
|
Amir reza Riahi
|
75,753,868
| 12,931,358
|
Error: Object of type IntervalStrategy is not JSON serializable when adding `indent` to pretty-print JSON
|
<p>I want to save a dataclass to a JSON file; it works fine without the parameter <code>indent</code>.</p>
<pre><code>class EnhancedJSONEncoder(json.JSONEncoder):
    def default(self, o):
        if dataclasses.is_dataclass(o):
            return dataclasses.asdict(o)
        # return super().default(o)

model_json = json.dumps(model_args, cls=EnhancedJSONEncoder)
</code></pre>
<p>model_args is a dataclass object, take a simple example,</p>
<pre><code>from dataclasses import dataclass
@dataclass
class Model_args:
x: str
model_args = Model_args(x="bar")
</code></pre>
<p>However, when I add indent, for example,</p>
<pre><code>model_json = json.dumps(model_args, cls=EnhancedJSONEncoder,indent=4)
</code></pre>
<p>it shows</p>
<pre><code>raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type IntervalStrategy is not JSON serializable
</code></pre>
<p>I want to write the JSON file with line breaks and indentation (to make it look better):</p>
<pre><code> with open("model_args.json", "w") as f:
f.write(model_json)
</code></pre>
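<p>The error names <code>IntervalStrategy</code>, a HuggingFace enum, which suggests the dataclass contains enum fields that <code>dataclasses.asdict</code> leaves untouched. A sketch of an encoder that handles that case and falls back to the base class (the current implicit <code>return None</code> silently encodes unknown objects as <code>null</code>) is below; the assumption that the failing field is an enum comes only from the error message:</p>

```python
import dataclasses
import enum
import json


class EnhancedJSONEncoder(json.JSONEncoder):
    def default(self, o):
        if dataclasses.is_dataclass(o):
            # asdict() converts nested dataclasses but leaves enum values as-is
            return dataclasses.asdict(o)
        if isinstance(o, enum.Enum):
            # enums (e.g. IntervalStrategy) are not JSON serializable by default
            return o.value
        # raise the standard TypeError instead of implicitly returning None
        return super().default(o)
```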
|
<python><json><python-3.x><python-dataclasses>
|
2023-03-16 08:53:33
| 2
| 2,077
|
4daJKong
|
75,753,615
| 11,680,995
|
How to fix "Canceled future for execute_request message" error when using Polars?
|
<p>I'm using Polars to process a 35 GB file in Python, and I'm running the following line of code:</p>
<pre><code>df.unique(subset=['id']).select(pl.count()).collect(streaming=True).item()
</code></pre>
<p>However, I'm getting the following error:</p>
<pre><code>Canceled future for execute_request message before replies were done
The Kernel crashed while executing code in the the current cell or a previous cell. Please review the code in the cell(s) to identify a possible cause of the failure. Click here for more info. View Jupyter log for further details.
</code></pre>
<p>I believe this error is related to Polars lazy evaluation, but I'm not sure how to fix it. Can someone provide guidance on how to resolve this issue? Thank you in advance</p>
<p>update:
The version of Polars I am using is 0.16.6<br />
Here is the Jupyter log:</p>
<pre><code> warn(
c:\Users\user\anaconda3\lib\site-packages\traitlets\traitlets.py:2495: FutureWarning: Supporting extra quotes around Bytes is deprecated in traitlets 5.0. Use '50d3e1a5-c144-464a-b1ed-5dc5d79b04f0' instead of 'b"50d3e1a5-c144-464a-b1ed-5dc5d79b04f0"'.
warn(
memory allocation of 301430844 bytes failed
info 13:50:23.361: Dispose Kernel process 14504.
error 13:50:23.365: Raw kernel process exited code: 3221226505
error 13:50:23.412: Error in waiting for cell to complete [Error: Canceled future for execute_request message before replies were done
at t.KernelShellFutureHandler.dispose (c:\Users\user\.vscode\extensions\ms-toolsai.jupyter-2023.2.1200692131\out\extension.node.js:2:33213)
at c:\Users\user\.vscode\extensions\ms-toolsai.jupyter-2023.2.1200692131\out\extension.node.js:2:52265
at Map.forEach (<anonymous>)
at y._clearKernelState (c:\Users\user\.vscode\extensions\ms-toolsai.jupyter-2023.2.1200692131\out\extension.node.js:2:52250)
at y.dispose (c:\Users\user\.vscode\extensions\ms-toolsai.jupyter-2023.2.1200692131\out\extension.node.js:2:45732)
at c:\Users\user\.vscode\extensions\ms-toolsai.jupyter-2023.2.1200692131\out\extension.node.js:17:127079
at ee (c:\Users\user\.vscode\extensions\ms-toolsai.jupyter-2023.2.1200692131\out\extension.node.js:2:1552543)
at dh.dispose (c:\Users\user\.vscode\extensions\ms-toolsai.jupyter-2023.2.1200692131\out\extension.node.js:17:127055)
at hh.dispose (c:\Users\user\.vscode\extensions\ms-toolsai.jupyter-2023.2.1200692131\out\extension.node.js:17:134354)
at process.processTicksAndRejections (node:internal/process/task_queues:96:5)]
warn 13:50:23.418: Cell completed with errors {
message: 'Canceled future for execute_request message before replies were done'
}
info 13:50:23.433: Cancel all remaining cells true || Error || undefined
</code></pre>
|
<python><python-polars>
|
2023-03-16 08:23:46
| 0
| 343
|
roei shlezinger
|
75,753,568
| 12,436,050
|
ValueError: Unterminated string starting at: Error while calling an API endpoint recursively in Python 3.7
|
<p>I am trying to connect to an API endpoint, fetch the JSON response, and extend a list with each page of results (looping through the pages: about 200 pages and 41,500 items in total). Finally, I would like to write the complete list to a single JSON file. I am running the code below to perform this task.</p>
<pre><code>url = "https://abcd../v1/org"
headers = {'Accept': 'application/json'}
params = {'pagesize': '200', 'versions' : 'true'}
organisations = []
new_results = True
page = 1
while new_results:
org_url = url + f"?page={page}"
response_api = requests.get(org_url, auth=('asdff', '364gbudgsh$'), params = params, headers = headers).json()
organisations.extend(response_api)
page += 1
with open('organisations.json', 'w') as f:
for line in organisations:
f.write(f"{line}\n")
</code></pre>
<p>However, after some API calls, I am getting this error.</p>
<pre><code>ValueError: Unterminated string starting at:
</code></pre>
<p>Any suggestion to improve the API call or to solve this error would be really helpful.</p>
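<p>Two things in the loop are worth fixing regardless of the error: <code>new_results</code> is never set to <code>False</code>, so the loop never terminates, and writing dicts with an f-string produces Python reprs (single quotes) rather than valid JSON. A sketch with a hypothetical <code>fetch_page</code> callable standing in for the <code>requests.get(...).json()</code> call:</p>

```python
import json


def fetch_all(fetch_page):
    """Collect every page until the API returns an empty page.

    `fetch_page(page)` is a hypothetical callable wrapping one
    requests.get(...).json() call; an empty list signals the last page.
    """
    organisations = []
    page = 1
    while True:
        batch = fetch_page(page)
        if not batch:          # terminate instead of looping forever
            break
        organisations.extend(batch)
        page += 1
    return organisations


def save(organisations, path="organisations.json"):
    # json.dump writes valid JSON; f-string writing of dicts produces
    # Python reprs with single quotes, which json.loads later rejects
    with open(path, "w") as f:
        json.dump(organisations, f)
```

<p>The <code>ValueError: Unterminated string</code> itself usually means one response body was truncated or was not JSON at all (e.g. an error page), so it is also worth checking <code>response.status_code</code> before calling <code>.json()</code>.</p>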
|
<python><python-requests>
|
2023-03-16 08:18:36
| 1
| 1,495
|
rshar
|
75,753,553
| 8,551,360
|
Elastic Beanstalk server slows down whenever a instance is added or removed on auto-scaling (Python 3.8 running on 64bit Amazon Linux 2)
|
<p>So we have recently uploaded a new enviornment for our product</p>
<p>We are using python 3.8</p>
<p>Elastic beanstalk is of application type</p>
<p><strong>This is my Auto Scaling group configuration</strong></p>
<p><a href="https://i.sstatic.net/zni2D.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zni2D.png" alt="enter image description here" /></a></p>
<p><strong>These are my scaling trigger configurations</strong></p>
<p><a href="https://i.sstatic.net/VdCmz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VdCmz.png" alt="enter image description here" /></a></p>
<p><strong>I can see there are few spikes from the monitoring but these things were happening before also, at then we were not facing this issue</strong></p>
<p><a href="https://i.sstatic.net/P3tIe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/P3tIe.png" alt="enter image description here" /></a></p>
<p>Is there something I am missing in my configuration? We have used exactly the same configuration before and it was working fine; we have just upgraded from Python 3.6 to Python 3.8.</p>
<p>Any help would be appreciated.</p>
|
<python><amazon-web-services><amazon-ec2><amazon-elastic-beanstalk>
|
2023-03-16 08:16:30
| 2
| 548
|
Harshit verma
|
75,753,322
| 981,764
|
base64 encoding string with special char in python
|
<p>I'm trying to encode a string like so:</p>
<pre><code>base64.b64encode(b'username\with\backslash:password')
</code></pre>
<p>but printing only the string results in this</p>
<pre><code>print('username\with\backslash:password')
>> username\withackslash:password
</code></pre>
<p>When escaping the backslashes with another backslash it prints this</p>
<pre><code>print('username\\with\\backslash:password')
>> username\with\backslash:password
</code></pre>
<p>which looks good.
But when I base64-encode and afterwards decode it, the extra slashes show up again:</p>
<pre><code>encoded=base64.b64encode(b'username\\with\\backslash:password')
print(encoded)
print(base64.b64decode(encoded))
>> b'dXNlcm5hbWVcd2l0aFxiYWNrc2xhc2g6cGFzc3dvcmQ='
>> b'username\\with\\backslash:password'
</code></pre>
<p>It encodes the escaping characters as well.</p>
<p>The same without escaping</p>
<pre><code>encoded=base64.b64encode(b'username\with\backslash:password')
print(encoded)
print(base64.b64decode(encoded))
>> b'dXNlcm5hbWVcd2l0aAhhY2tzbGFzaDpwYXNzd29yZA=='
>> b'username\\with\x08ackslash:password'
</code></pre>
<p>Can anyone give me a hint on how to encode this correctly?
I need this for HTTP request authorization headers, and my encoded strings have not been accepted so far.</p>
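<p>The root cause is that in a normal (non-raw) literal, <code>\b</code> is the backspace character <code>\x08</code>, which is why the unescaped string comes out mangled; the doubled backslashes in the decoded output are only <code>repr()</code> escaping of the bytes, which really hold single backslashes. A sketch of the usual shape for a Basic auth header, using a raw literal:</p>

```python
import base64

# rb'...' keeps the backslashes literal; in a plain literal, \b is backspace
credentials = rb"username\with\backslash:password"
token = base64.b64encode(credentials).decode("ascii")
headers = {"Authorization": "Basic " + token}

# the round trip restores the exact bytes
assert base64.b64decode(token) == credentials
```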
|
<python><encoding><base64>
|
2023-03-16 07:44:37
| 0
| 1,468
|
axel wolf
|
75,753,296
| 8,599,834
|
How to tell Mypy that an external package is annotated?
|
<p>I am using an external package that is fully annotated, but does not include the <code>py.typed</code> marker.</p>
<p>I have submitted a pull request to add it, but in the meantime, I'd like Mypy to know that the package is actually typed, possibly without ignoring the annotations.</p>
<p>By searching the internet, I can only find ways to make stubs (which would be overkill and unnecessary in this case) or ignore the package's types (which would basically opt me out of type checking it).</p>
<p>How do I mark an external package as typed?</p>
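<p>Until the pull request lands, one workaround (a local hack, not an official mypy feature) is to create the PEP 561 <code>py.typed</code> marker inside your installed copy of the package; mypy then uses the inline annotations. A sketch, where the package name passed in is whatever the distribution is actually called:</p>

```python
import importlib.util
import pathlib


def mark_package_typed(package_name: str) -> pathlib.Path:
    """Create the PEP 561 `py.typed` marker inside a locally installed package
    so mypy treats its inline annotations as usable."""
    spec = importlib.util.find_spec(package_name)
    if spec is None or spec.origin is None:
        raise ModuleNotFoundError(package_name)
    marker = pathlib.Path(spec.origin).parent / "py.typed"
    marker.touch()
    return marker
```

<p>This only edits your local environment and is undone by reinstalling the package.</p>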
|
<python><python-typing><mypy>
|
2023-03-16 07:41:01
| 0
| 2,742
|
theberzi
|
75,753,120
| 965,002
|
How to calculate the angle rotated of the cut out image?
|
<p>I have a 200*200 square image. I draw a circle of diameter 100 centered at the image's center, cut the image into two parts along the circle, rotate the circular part by some angle, and save it as <em>circle.jpg</em>; the rest is saved as <em>background.jpg</em>. What algorithm do I need to calculate the angle by which <em>circle.jpg</em> must be rotated back so that combining it with <em>background.jpg</em> reproduces the original image? Is it possible to do this using OpenCV?</p>
<p>background.jpg:</p>
<p><a href="https://i.sstatic.net/P9cVk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/P9cVk.png" alt="background.jpg" /></a></p>
<p>circle.jpg:</p>
<p><a href="https://i.sstatic.net/lpD2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lpD2C.png" alt="circle.jpg" /></a></p>
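<p>One way to frame this, sketched here with plain Python lists: sample an intensity profile at equally spaced angles just inside the circle's rim in both images (a preprocessing step you would do with OpenCV, e.g. a polar warp, not shown here), then pick the cyclic shift that best aligns the two profiles:</p>

```python
def best_rotation(profile_a, profile_b):
    """Return the rotation (degrees) that maps profile_b onto profile_a.

    profile_a / profile_b are intensity samples taken at equally spaced
    angles just inside the circle's rim (hypothetical preprocessing step).
    """
    n = len(profile_a)

    def misfit(k):
        # sum of squared differences after rotating profile_b by k steps
        return sum((profile_b[(i - k) % n] - profile_a[i]) ** 2 for i in range(n))

    k = min(range(n), key=misfit)
    return 360.0 * k / n
```

<p>For real images, cross-correlating the two profiles (e.g. via FFT) does the same search faster and more robustly than this brute-force loop.</p>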
|
<python><opencv>
|
2023-03-16 07:19:07
| 2
| 487
|
hoozecn
|
75,753,100
| 726,802
|
DJango pick Operating system username instead of user name present in settings file
|
<p><strong>About the issue</strong></p>
<p>I created the project using command <code>django-admin startproject test</code> and updated the settings file to have mysql database details. Apart from that nothing added/updated/deleted in code.</p>
<p><strong>Database details in config file</strong></p>
<pre><code>DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
"Name": "django",
"User": "root",
"Password": "",
"Host": "localhost",
"Port": "3306"
}
}
</code></pre>
<p>Then ran the command <code>python manage.py runserver</code> and gave me very strange error.</p>
<p>It says Access denied for user pankaj@localhost</p>
<p><strong>Question</strong></p>
<p>The settings file has username = root, so why does Django pick up the operating-system username?</p>
<p><a href="https://i.sstatic.net/Xmg2I.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Xmg2I.png" alt="enter image description here" /></a></p>
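<p>Django only reads upper-case keys in <code>DATABASES</code>; mixed-case keys like <code>"User"</code> are silently ignored, so the MySQL client falls back to the operating-system login name, which is why the error mentions pankaj@localhost. A corrected settings sketch:</p>

```python
# keys must be upper-case; "Name"/"User"/"Password" are silently ignored
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "django",
        "USER": "root",
        "PASSWORD": "",
        "HOST": "localhost",
        "PORT": "3306",
    }
}
```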
|
<python><django>
|
2023-03-16 07:16:38
| 1
| 10,163
|
Pankaj
|
75,753,067
| 6,182,971
|
Python: Reddit PRAW extremely slow pulling comments for large thread
|
<p>I want to build a sentiment bot based on Reddit comments, so I've started working with the Reddit PRAW library for Python. Just googling this topic, it seems that getting all thread comments can be somewhat tricky.</p>
<p>The PRAW code itself is fairly simple (based on this documentation page: <a href="https://praw.readthedocs.io/en/stable/tutorials/comments.html" rel="nofollow noreferrer">https://praw.readthedocs.io/en/stable/tutorials/comments.html</a>), but it also is very slow for a large thread.</p>
<p>I also came across this SO post about getting the threads in JSON: <a href="https://stackoverflow.com/questions/66178703/retrieving-all-comments-from-a-thread-on-reddit">Retrieving all comments from a thread on Reddit</a>. The solution per the question is to access response items of <code>kind=more</code>. I made a solution using direct calls to the API (i.e., no PRAW), but I'm getting inconsistent results for the number of comments returned.</p>
<p><strong>PRAW method:</strong></p>
<pre><code>import praw
reddit = praw.Reddit(client_id="<MYKEY>",
client_secret="<MY_SECRET_KEY>",
user_agent="USERAGENT",
check_for_async=False)
url = "https://www.reddit.com/r/CryptoCurrency/comments/11rfcjy/daily_general_discussion_march_15_2023_gmt0/"
submission = reddit.submission(url=url)
submission.comments.replace_more(limit=None)
comments = submission.comments.list()
print("top level comments:", submission.comments.__len__())
print("total comments:", len(comments))
</code></pre>
<p><strong>JSON API method:</strong></p>
<pre><code>import requests
import time
import numpy as np
import random
# details for getting a token can be found here: https://towardsdatascience.com/how-to-use-the-reddit-api-in-python-5e05ddfd1e5c
TOKEN = get_reddit_token()
headers = {"user-agent": "Mozilla/5.0"}
url = "https://www.reddit.com/r/CryptoCurrency/comments/11rfcjy/daily_general_discussion_march_15_2023_gmt0/.json"
req = requests.get(url, headers=headers)
res = req.json()
body = res[1]["data"]["children"]
thread_id = body[-1]["data"]["parent_id"]
all_comments = {c["data"]["id"]: c for c in body if c["kind"] == "t1"}
comment_ids = [c["data"]["id"] for c in body if c["kind"] == "t1"]
comment_ids += body[-1]["data"]["children"]
def get_more_comments(more_ids, get_replies=True):
more_children = ",".join(more_ids)
more_url = f"https://oauth.reddit.com/api/morechildren/.json?api_type=json&link_id={thread_id}&children={more_children}&sort=top"
headers = {'user-agent': 'USERAGENT', 'Authorization': f"bearer {TOKEN}"}
res = requests.get(more_url, headers=headers)
comments = res.json()
comments = comments["json"]["data"]["things"]
t1_comments = [c for c in comments if c["kind"] == "t1"]
more_comments = [c for c in comments if c["kind"] == "more"]
print("new comments", len(t1_comments))
print("more comments", len(more_comments))
for comment in t1_comments:
comment_id = comment["data"]["id"]
all_comments[comment_id] = comment
if get_replies:
more_comments = [c["data"]["children"] for c in more_comments]
else:
more_comments = [c["data"]["children"] for c in more_comments if c["data"]["parent_id"] == thread_id]
# flatten list of lists
more_comments = [c for c_list in more_comments for c in c_list]
return more_comments
for i in range(100):
print(i)
existing_comments = list(all_comments.keys())
eligible_comments = np.isin(comment_ids, existing_comments)
eligible_comments = np.array(comment_ids)[~eligible_comments].tolist()
more_ids = eligible_comments[:100]
more_comments = get_more_comments(more_ids)
comment_ids += more_comments
comment_ids = list(set(comment_ids))
random.shuffle(comment_ids)
time.sleep(1)
print("top level comments:", len([k for k,v in all_comments.items() if v["data"]["depth"] == 0]))
print("total comments:", len(all_comments.keys()))
</code></pre>
<p>The basic idea of the second method is to get the initial json response of the thread (note that the URLs are the same in both examples except that the second has <code>.json</code> appended to the end) and capture the comment ids for any items that have <code>kind=more</code>.</p>
<p>For this example, I'm iterating an arbitrary number of times to try pulling new comments but after a while it stops getting new comments.
This runs quickly even sleeping between requests, so it would be great if I could use this method if nothing can be done about the speed of the PRAW <code>replace_more</code> method, but I want to get all comments.</p>
<p>The PRAW method took over 30 minutes to run and returned 1259 top comments and 4686 total comments as of the time I ran this. The JSON method returned 1259 top comments (that's good since it matches PRAW) and 4511 total comments (fewer than PRAW).</p>
<p><strong>Note:</strong> When I started writing this question, I was getting many fewer comments for the JSON method, but adding <code>sort=top</code> to the comments URL fixed most of that (I've updated this in the code; <code>sort=new</code> also works). Even though the results are a lot closer now, I'm going to post in case anyone can point out why I'm not getting all of the comments.</p>
<p>I'd like to get 100% completeness if possible and the question might help others trying to scrape comments more efficiently.</p>
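<p>One possible source of the gap: the <code>for i in range(100)</code> loop is an arbitrary cutoff, and the shuffle makes coverage nondeterministic. A sketch of a fixpoint loop that mirrors <code>get_more_comments()</code> above (injected here as <code>fetch_more</code> so it can be tested) and stops exactly when no unresolved ids remain:</p>

```python
def resolve_all(initial_ids, fetch_more, batch_size=100):
    """Keep requesting children until no unresolved comment ids remain.

    fetch_more(ids) should return the ids of any further 'more' stubs,
    mirroring get_more_comments() above.
    """
    seen = set()
    frontier = list(initial_ids)
    while frontier:
        # take one batch, skipping ids we have already resolved
        batch = [i for i in frontier[:batch_size] if i not in seen]
        frontier = frontier[batch_size:]
        if not batch:
            continue
        seen.update(batch)
        frontier.extend(fetch_more(batch))
    return seen
```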
|
<python><reddit><praw>
|
2023-03-16 07:11:00
| 1
| 3,174
|
SuperCodeBrah
|
75,752,849
| 45,843
|
Can language model inference on a CPU, save memory by quantizing?
|
<p>For example, according to <a href="https://cocktailpeanut.github.io/dalai/#/" rel="nofollow noreferrer">https://cocktailpeanut.github.io/dalai/#/</a> the relevant figures for LLaMA-65B are:</p>
<ul>
<li>Full: The model takes up 432.64GB</li>
<li>Quantized: 5.11GB * 8 = 40.88GB</li>
</ul>
<p>The full model won't fit in memory on even a high-end desktop computer.</p>
<p>The quantized one would. (But would not fit in video memory on even a $2000 Nvidia graphics card.)</p>
<p>However, CPUs don't generally support anything less than fp32. And when I've tried running Bloom 3B and 7B on a machine without a GPU, sure enough, the memory consumption has seemed to be 12 and 28GB respectively.</p>
<p>Is there a way to gain the memory savings of quantization, when running the model on a CPU?</p>
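<p>In principle, yes: CPU inference runtimes (llama.cpp, which produces the 4-bit files behind the dalai figures quoted above, is one) store weights as low-bit integers plus per-block scales and dequantize on the fly, so resident memory tracks the quantized size rather than fp32. A toy sketch of the storage idea, not any real library's file format:</p>

```python
def quantize(weights, bits=8):
    """Toy symmetric quantization: floats become small ints plus one scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) for w in weights], scale


def dequantize(qweights, scale):
    """Recover approximate floats at inference time."""
    return [q * scale for q in qweights]
```

<p>The memory saving comes from storing the int list (1 byte per weight at 8 bits) instead of 4-byte floats; the dequantized values are only materialized transiently per block during the matrix multiply.</p>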
|
<python><machine-learning><neural-network><cpu><half-precision-float>
|
2023-03-16 06:41:19
| 1
| 34,049
|
rwallace
|
75,752,827
| 16,396,496
|
how to make def to get str(name of Dataframe or list) in python
|
<pre><code>def get_name(list):
??
li=[1]
get_name(li) # -> 'li' simple string, 'li'
f'{get_name(li)}' # -> 'li' not [1]
</code></pre>
<p>I think this is a very simple question, but I can't solve it.
Actually, the value of the list is not needed.
Desired input: li (not 'li'); desired output: 'li' (not li).</p>
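<p>Python objects do not carry the names of variables bound to them, so there is no exact solution; for debugging, f-strings offer <code>f"{li=}"</code>, and otherwise the best effort is to search a namespace for names bound to the same object. A sketch:</p>

```python
def get_name(obj, namespace):
    """Return the first name in `namespace` bound to exactly this object,
    e.g. get_name(li, globals()). Best effort: aliases make names ambiguous."""
    for name, value in namespace.items():
        if value is obj:
            return name
    return None
```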
|
<python><pandas>
|
2023-03-16 06:36:25
| 0
| 341
|
younghyun
|
75,752,747
| 1,139,541
|
How to create 2 environments in tox with different library version
|
<p>I need to test my library on the same Python version but with different versions of a dependency, using <code>tox</code>. I have tried this:</p>
<pre><code>[tox]
requires =
tox>=4
env_list = py{37,38,39,310,311,typeguard_{v2,v3}}
[testenv]
description = run unit tests
deps =
-rrequirements.txt
commands =
pytest {posargs:tests}
[testenv:typeguard_{v2,v3}]
description = run tests with typeguard versions
base_python =
py310: python3.10
deps =
v2: typeguard<3.0.0
v3: typeguard>=3.0.0
-rrequirements.txt
commands =
pytest {posargs:tests}
</code></pre>
<p>But it looks like it uses the latest <code>typeguard</code> version in all tests. The <code>requirements.txt</code> holds <code>typeguard</code> without a specific version.</p>
<p>How can this be configured to create two environments with different <code>typeguard</code> versions?</p>
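<p>One likely culprit, judging only from the config shown: the brace expansion nests <code>typeguard_{v2,v3}</code> inside <code>py{...}</code>, so <code>env_list</code> generates names like <code>pytypeguard_v2</code> that never match <code>[testenv:typeguard_{v2,v3}]</code>, and the version-conditional deps never apply. A sketch of the intended list:</p>

```ini
[tox]
requires =
    tox>=4
env_list = py{37,38,39,310,311}, typeguard_{v2,v3}
```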
|
<python><tox>
|
2023-03-16 06:25:59
| 1
| 852
|
Ilya
|
75,752,586
| 11,713,196
|
Cannot import LSHForest from sklearn in Google colab
|
<p>When importing <code>LSHForest</code> in Google Colab, I get <code>ImportError: cannot import name 'LSHForest' from 'sklearn.neighbors' (/usr/local/lib/python3.9/dist-packages/sklearn/neighbors/__init__.py)</code>.</p>
<p>I took the code from the official documentation <a href="https://scikit-learn.org/0.16/modules/generated/sklearn.neighbors.LSHForest.html" rel="nofollow noreferrer">Link</a></p>
<p>The code is:</p>
<pre><code>from sklearn.neighbors import LSHForest
</code></pre>
|
<python><scikit-learn><google-colaboratory>
|
2023-03-16 05:55:01
| 1
| 429
|
tarmas99
|
75,752,540
| 983,391
|
How to plot multiple pie chart or bar chart from multi-index dataframe?
|
<p>Dataframe is as below.</p>
<p>There are multiple index as <code>category_1</code> & <code>category_2</code> and two column as <code>count</code> & <code>%</code></p>
<p>For each <code>category_1</code>, I need to draw a pie chart of <code>category_2</code> & <code>%</code> and a bar chart of <code>category_2</code> & <code>count</code>.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th></th>
<th>count</th>
<th>%</th>
</tr>
</thead>
<tbody>
<tr>
<td>category_1</td>
<td>category_2</td>
<td></td>
<td></td>
</tr>
<tr>
<td>Bady and Kids</td>
<td>Baby Care</td>
<td>220</td>
<td>27.5</td>
</tr>
<tr>
<td></td>
<td>Toys</td>
<td>176</td>
<td>22</td>
</tr>
<tr>
<td></td>
<td>Boys Clothing</td>
<td>156</td>
<td>19.5</td>
</tr>
<tr>
<td></td>
<td>Girls Clothing</td>
<td>144</td>
<td>18</td>
</tr>
<tr>
<td></td>
<td>Baby Boy Clothing</td>
<td>104</td>
<td>13</td>
</tr>
<tr>
<td>Women's wear</td>
<td>Party Dresses</td>
<td>224</td>
<td>28</td>
</tr>
<tr>
<td></td>
<td>Sports Wear</td>
<td>188</td>
<td>23.5</td>
</tr>
<tr>
<td></td>
<td>Swim wear</td>
<td>140</td>
<td>17.5</td>
</tr>
<tr>
<td></td>
<td>Winter Wear</td>
<td>128</td>
<td>16</td>
</tr>
<tr>
<td></td>
<td>Watches</td>
<td>120</td>
<td>15</td>
</tr>
</tbody>
</table>
</div>
|
<python><pandas><matplotlib><bar-chart><pie-chart>
|
2023-03-16 05:45:53
| 2
| 12,382
|
Priyank Patel
|
75,751,931
| 5,679,047
|
Import succeeds in interpreter and fails in script (same Python executable) "No module named `src`"
|
<p>I have a program designed to automatically parse code in a repository and create an API based on a specification. Parsing the code requires using <code>importlib</code> to import the code I'm going to parse. The line where things diverge as mentioned in the title is <code>spec = importlib.util.find_spec(modulename, package='package_name')</code>; in the interpreter, it works perfectly. I can even import the function that calls this function from the script, and that works too.</p>
<p>The target repository that is being operated on has the structure</p>
<pre><code>__init__.py
setup.py
src (directory containing all Python files including an __init__.py)
</code></pre>
<p>where <code>setup.py</code> looks like</p>
<pre><code>import setuptools

setuptools.setup(name='package_name',
version='0.1',
author='Me',
description='For doing secret corporate stuff',
packages=['package_name'],
package_dir={'package_name': 'src'},
package_data={'package_name': ['file1',
'file2']},
install_requires=['various', 'stuff'])
</code></pre>
<p>I'm definitely using the same Python in both cases. I'm not installing the repository as a package but rather cloning it and attempting to operate on it from the root directory. On multiple computers, I'm able to import <code>src</code> and <code>src.module</code> for each <code>module</code> in the <code>src</code> directory in the interpreter, but cannot do it from a script.</p>
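<p>One difference worth ruling out (an assumption, since the full <code>sys.path</code> is not shown): in the interactive interpreter, <code>sys.path[0]</code> is the current directory, while for a script it is the directory containing the script, so <code>src</code> can resolve in one but not the other. A hypothetical helper that reproduces the interpreter's behavior:</p>

```python
import importlib
import importlib.util
import os
import sys


def find_spec_from_cwd(module_name):
    """Resolve a module relative to the current working directory, the way
    the interactive interpreter does (sys.path[0] is the cwd in the REPL,
    but the script's own directory when running a file)."""
    cwd = os.getcwd()
    if cwd not in sys.path:
        sys.path.insert(0, cwd)
    importlib.invalidate_caches()
    return importlib.util.find_spec(module_name)
```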
|
<python><python-packaging><python-importlib>
|
2023-03-16 03:31:54
| 0
| 681
|
Zorgoth
|
75,751,907
| 887,290
|
OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root
|
<p>I have a weird problem that has only occurred since today in my GitHub workflow. These are the relevant commands.</p>
<pre><code>pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
pip3 install mmengine==0.6.0 mmcv==2.0.0rc3 mmdet==3.0.0rc5 mmaction2==1.0rc3
</code></pre>
<p>The former succeeded. The latter stops with following error:</p>
<pre><code>Collecting mmengine==0.6.0
Using cached mmengine-0.6.0-py3-none-any.whl (360 kB)
Collecting mmcv==2.0.0rc3
Using cached mmcv-2.0.0rc3.tar.gz (424 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
Γ python setup.py egg_info did not run successfully.
β exit code: 1
β°β> [18 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-uml22xq3/mmcv_89a43e000b91495e88399ffe3c493514/setup.py", line 329, in <module>
ext_modules=get_extensions(),
^^^^^^^^^^^^^^^^
File "/tmp/pip-install-uml22xq3/mmcv_89a43e000b91495e88399ffe3c493514/setup.py", line 290, in get_extensions
ext_ops = extension(
^^^^^^^^^^
File "/home/github/.pyenv/versions/miniconda3-3.10-22.11.1-1/envs/heavi-analytic/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1048, in CUDAExtension
library_dirs += library_paths(cuda=True)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/github/.pyenv/versions/miniconda3-3.10-22.11.1-1/envs/heavi-analytic/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 1179, in library_paths
if (not os.path.exists(_join_cuda_home(lib_dir)) and
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/github/.pyenv/versions/miniconda3-3.10-22.11.1-1/envs/heavi-analytic/lib/python3.11/site-packages/torch/utils/cpp_extension.py", line 2223, in _join_cuda_home
raise EnvironmentError('CUDA_HOME environment variable is not set. '
OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
Γ Encountered error while generating package metadata.
β°β> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p>Any idea?</p>
<p>UPDATE 1: It turns out that the PyTorch version installed is 2.0.0, which is not what I wanted.</p>
|
<python><pytorch><openmmlab>
|
2023-03-16 03:26:57
| 2
| 2,153
|
ThomasEdwin
|
75,751,891
| 1,033,591
|
Can I make the class name to be a variable in python?
|
<p>Can I make the class name a variable in newer Python releases?</p>
<p>I mean, can I make the following instructions briefer?</p>
<pre><code>if target == "course":
    return Course.objects.all()[start:end]
elif target == "Subject":
    return Subject.objects.all()[start:end]
elif target == "Teacher":
    return Teacher.objects.all()[start:end]
</code></pre>
<p>to be briefer:</p>
<pre><code> return VARIABLE.objects.all()[start:end]
</code></pre>
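<p>Class objects are ordinary values, so the usual pattern is an explicit mapping from the target string to the model class; this also whitelists what a request can reach, unlike <code>eval()</code> or <code>globals()</code> lookups on user input. A sketch with stand-in classes, since the real Django models are not shown:</p>

```python
class _FakeManager:
    """Stand-in for a Django model manager, just for illustration."""

    def __init__(self, rows):
        self._rows = rows

    def all(self):
        return self._rows


class Course:
    objects = _FakeManager(["algebra", "biology", "chemistry"])


class Subject:
    objects = _FakeManager(["math", "science"])


# explicit whitelist mapping target strings to model classes
MODELS = {"course": Course, "Subject": Subject}


def fetch(target, start, end):
    return MODELS[target].objects.all()[start:end]
```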
|
<python><django>
|
2023-03-16 03:23:44
| 2
| 2,147
|
Alston
|
75,751,881
| 19,491,471
|
mean of a column after group by returns nan
|
<p>I have this:</p>
<pre><code>df = name year. salary. d.
a 1990. 3. 5
b 1992. 90. 1
c 1990. 234. 3
...
</code></pre>
<p>I am trying to group my data frame based on year, and then get the average of the salaries in that year. Then my goal is to assign it to a new column. This is what I do:</p>
<pre><code>df['averageSalaryPerYear'] = df.groupby('year')['salary'].mean()
</code></pre>
<p>I do get the correct results for df.groupby('year')['salary'].mean(), since when I print them, I get a column of numbers in scientific notation. However, when I assign it to df['averageSalaryPerYear'], they all turn into nan. I am not sure why this is happening as the printed values seem to be fine, although they are in scientific notation like this:</p>
<p>1990 1.707235e+07</p>
<p>1991 2.357879e+07</p>
<p>1992 3.098244e+07</p>
<p>which is year and avgOfSalary</p>
<p>Why is this happening? I want my new column to show the correct results of averages.
...</p>
<p>Thanks</p>
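<p>The NaNs come from index alignment: <code>groupby('year')['salary'].mean()</code> is indexed by year, while the frame is indexed by row number, so assigning the result back matches almost nothing. <code>transform('mean')</code> returns the same statistic aligned to the original rows. A sketch on a small frame:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "name": ["a", "b", "c"],
    "year": [1990, 1992, 1990],
    "salary": [3.0, 90.0, 234.0],
})

# transform keeps the original row index, so assignment aligns correctly
df["averageSalaryPerYear"] = df.groupby("year")["salary"].transform("mean")
```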
|
<python><pandas>
|
2023-03-16 03:21:25
| 1
| 327
|
Amin
|
75,751,811
| 15,392,319
|
How to omit certain parts of a SQL query
|
<p><em>Disclaimer that I'm highly aware there's a better way to word the question, if someone wants to suggest a better way I'd be happy to change it</em></p>
<p>I've been working with DynamoDB for the past 4 years and now I'm switching one of my tables over to PostgreSQL. I've been using the psycopg2 Python library to test some queries. Users will need to filter against the database, so it'd be nice to have an all-in-one filter query.</p>
<p>I'd like users to be able to select multiple values for a given filter or none, in the case of none, that field shouldn't be filtered against. Here's what a basic query might look like (this is just an example that will accomplish asking my question).</p>
<pre class="lang-py prettyprint-override"><code>conn = psycopg2.connect(host=ENDPOINT, port=PORT, database=DBNAME, user=USER, password=PASSWORD, sslrootcert="SSLCERTIFICATE")
cur = conn.cursor()
sql = """
SELECT * FROM table_name
WHERE column_1 in %s AND column_2 in %s
ORDER BY datetime DESC
"""
sql_values = (("XXXXYXY", "XXXYYXXY"), ("ZGZGZGZGGG","GZGGGGZGG"))
cur.execute(sql, sql_values)
</code></pre>
<p>And here's the sort of query in which no value is present for column_2:</p>
<pre class="lang-py prettyprint-override"><code>conn = psycopg2.connect(host=ENDPOINT, port=PORT, database=DBNAME, user=USER, password=PASSWORD, sslrootcert="SSLCERTIFICATE")
cur = conn.cursor()
sql = """
SELECT * FROM table_name
WHERE column_1 in %s AND column_2 in %s
ORDER BY datetime DESC
"""
sql_values = (("XXXXYXY", "XXXYYXXY"), ())
cur.execute(sql, sql_values)
</code></pre>
<p>Obviously, this wouldn't work. In short, I'd like it to only query against columns that have data present. What would be the most efficient way to accomplish this?</p>
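<p>One common approach, sketched here for the two-column example: build the WHERE clause only from filters that actually have values, keep the values themselves as bound parameters (psycopg2 adapts a Python tuple passed for <code>%s</code> into a parenthesized list, which is why <code>IN %s</code> works), and take column names only from a fixed whitelist, never from user input:</p>

```python
ALLOWED_COLUMNS = {"column_1", "column_2"}  # whitelist from the example above


def build_query(filters):
    """filters: {column_name: tuple_of_values}; empty tuples are skipped."""
    clauses, params = [], []
    for column, values in filters.items():
        if column not in ALLOWED_COLUMNS:
            raise ValueError("unexpected column: " + column)
        if values:
            clauses.append(column + " IN %s")
            params.append(tuple(values))
    where = " WHERE " + " AND ".join(clauses) if clauses else ""
    return "SELECT * FROM table_name" + where + " ORDER BY datetime DESC", params
```

<p>Then <code>cur.execute(*build_query(filters))</code> runs the composed query with properly bound values.</p>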
|
<python><sql><postgresql><psycopg2>
|
2023-03-16 03:07:59
| 1
| 428
|
cmcnphp
|
75,751,638
| 1,857,373
|
Graph add_nodes_from() nodes not working, add_edges_from() edges not working properly for data no dependent node/edge DAG is created
|
<p><strong>Problem</strong></p>
<p>This code does not build a proper graph from the nodes and edges read from my CSV files: the nodes passed to add_nodes_from() and the edges passed to add_edges_from() are not picked up correctly. What is the proper way to feed nodes, edges, and node labels into networkx so the graph is built correctly? I have tried many different approaches and none have worked; I'm making no progress on this issue and need to get it done this week.</p>
<p>My code is not capturing nodes and adding edges.</p>
<p><strong>Code</strong></p>
<pre><code>import networkx as nx
from sknetwork.data import from_edge_list, from_adjacency_list, from_csv
edges = from_csv("real_edges.csv")
nodes = from_csv("real-fnodes.csv")
_nodes = [n for n in nodes]
_edges = [n for n in edges]
G = nx.Graph()
G.add_nodes_from(_nodes)
G.add_edges_from(_edges)
print(nx.info(G))
</code></pre>
<p><strong>Data</strong></p>
<p>Sample Data nodes</p>
<pre><code>(0, 1265) 1
(0, 1338) 1
(0, 1413) 1
(0, 1643) 1
(0, 1719) 1
(0, 1806) 1
(0, 2052) 1
(0, 2128) 1
(0, 2641) 1
(0, 2872) 1
(0, 3100) 1
(0, 3244) 1
: :
(5778, 52) 1
(5779, 3) 4
(5780, 3) 4
(5781, 1) 1
(5781, 3) 2
(5781, 29) 1
(5782, 3) 4
(5783, 3) 4
(5784, 3) 4
(5785, 3) 4
(5786, 3) 4
(5787, 3) 4
(5788, 3) 2
(5788, 5) 1
(5788, 47) 1
(5789, 3) 4
(5790, 3) 2
(5790, 5) 1
(5790, 47) 1
(5791, 3) 4
(5792, 3) 2
(5792, 5) 1
(5792, 29) 1
</code></pre>
<p>Sample Nodes Data</p>
<pre><code>[['0', '1', '2', '3'], ['5.0', '3.0', '1.0', '29.0'], ['5.0', '3.0', '1.0', '29.0'], ['5.0', '47.0', '1.0', '29.0'], ['5.0', '47.0', '1.0', '29.0'], ['5.0', '47.0', '1.0', '29.0'], ['5.0', '47.0', '1.0', '29.0'], ['5.0', '3.0', '1.0', '29.0'], ['5.0', '3.0', '1.0', '29.0'], ['5.0', '3.0', '1.0', '29.0'], ['5.0', '3.0', '1.0', '29.0'], ['5.0', '3.0', '1.0', '29.0'], ['5.0', '3.0', '1.0', '29.0'], ['40.0', '13.0', '1.0', '29.0'], ['40.0', '13.0', '1.0', '29.0'], ['40.0', '13.0', '1.0', '29.0'], ['5.0', '52.0', '1.0', '29.0'], ['5.0', '52.0', '1.0', '29.0'], ['5.0', '52.0', '1.0', '29.0'], ['5.0', '42.0', '1.0', '29.0'], ['5.0', '42.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['5.0', '42.0', '1.0', '29.0'], ['5.0', '42.0', '1.0', '29.0'], ['5.0', '42.0', '1.0', '29.0'], ['5.0', '42.0', '1.0', '29.0'], ['5.0', '48.0', '1.0', '29.0'], ['5.0', '48.0', '1.0', '29.0'], ['5.0', '42.0', '1.0', '29.0'], ['5.0', '42.0', '1.0', '29.0'], ['21.0', '17.0', '1.0', '29.0'], ['21.0', '17.0', '1.0', '29.0'], ['21.0', '17.0', '1.0', '29.0'], ['21.0', '17.0', '1.0', '29.0'], ['21.0', '17.0', '1.0', '29.0'], ['21.0', '17.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['5.0', '47.0', '1.0', '29.0'], ['5.0', 
'47.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['5.0', '47.0', '1.0', '29.0'], ['5.0', '47.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['5.0', '47.0', '1.0', '29.0'], ['5.0', '47.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['5.0', '47.0', '1.0', '29.0'], ['5.0', '47.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['5.0', '47.0', '1.0', '29.0'], ['5.0', '47.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['5.0', '47.0', '1.0', '29.0'], ['5.0', '47.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['5.0', '47.0', '1.0', '29.0'], ['5.0', '47.0', '1.0', '29.0'], ['5.0', '47.0', '1.0', '29.0'], ['5.0', '47.0', '1.0', '29.0'], ['5.0', '47.0', '1.0', '29.0'], ['5.0', '47.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '11.0', '1.0', '29.0'], ['3.0', '11.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], 
['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['17.0', '52.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['1.0', '45.0', '1.0', '29.0'], ['1.0', '45.0', '1.0', '29.0'], ['1.0', '30.0', '1.0', '29.0'], ['1.0', '30.0', '1.0', '29.0'], ['5.0', '29.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['5.0', '48.0', '1.0', '29.0'], ['17.0', '52.0', '1.0', '29.0'], ['5.0', '42.0', '1.0', '29.0'], ['5.0', '42.0', '1.0', '29.0'], ['17.0', '52.0', '1.0', '29.0'], ['48.0', '44.0', '1.0', '29.0'], ['0.0', '52.0', '1.0', '29.0'], ['17.0', '52.0', '1.0', '29.0'], ['40.0', '13.0', '1.0', '29.0'], ['5.0', '52.0', '1.0', '29.0'], ['5.0', '52.0', '1.0', '29.0'], ['17.0', '52.0', '1.0', '29.0'], ['17.0', '52.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['40.0', '13.0', '1.0', '29.0'], ['34.0', '54.0', '1.0', '29.0'], ['5.0', '0.0', '1.0', '29.0'], ['5.0', '0.0', '1.0', '29.0'], ['40.0', '13.0', '1.0', '29.0'], ['3.0', '3.0', '1.0', '29.0'], ['5.0', '3.0', '5.0', '47.0'], ['5.0', '3.0', '5.0', '47.0'], ['1.0', '29.0', '5.0', '47.0'], ['1.0', '29.0', '5.0', '47.0'], ['5.0', '47.0', '5.0', '47.0'], ['5.0', '3.0', '5.0', '47.0'], ['5.0', '3.0', '5.0', '47.0'], ['5.0', '3.0', '5.0', '47.0'], ['5.0', '52.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['5.0', '42.0', '5.0', '47.0'], ['5.0', '42.0', '5.0', '47.0'], ['5.0', '48.0', '5.0', '47.0'], ['5.0', '42.0', '5.0', '47.0'], ['21.0', '17.0', '5.0', '47.0'], ['21.0', '17.0', '5.0', '47.0'], ['21.0', '17.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], 
['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['5.0', '47.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['5.0', '47.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['5.0', '47.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['5.0', '47.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['5.0', '47.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['5.0', '47.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['5.0', '47.0', '5.0', '47.0'], ['5.0', '47.0', '5.0', '47.0'], ['5.0', '47.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '11.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['0.0', '52.0', '5.0', '47.0'], ['5.0', '3.0', '5.0', '47.0'], ['5.0', '3.0', '5.0', '47.0'], ['1.0', '29.0', '5.0', '47.0'], ['1.0', '29.0', '5.0', '47.0'], ['5.0', '47.0', '5.0', '47.0'], ['5.0', '3.0', '5.0', '47.0'], ['5.0', '3.0', '5.0', '47.0'], ['5.0', '3.0', '5.0', '47.0'], ['40.0', '13.0', '5.0', '47.0'], ['40.0', '13.0', '5.0', 
'47.0'], ['40.0', '13.0', '5.0', '47.0'], ['5.0', '52.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['5.0', '42.0', '5.0', '47.0'], ['5.0', '42.0', '5.0', '47.0'], ['5.0', '48.0', '5.0', '47.0'], ['5.0', '42.0', '5.0', '47.0'], ['21.0', '17.0', '5.0', '47.0'], ['21.0', '17.0', '5.0', '47.0'], ['21.0', '17.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['5.0', '47.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['5.0', '47.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['5.0', '47.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['5.0', '47.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['5.0', '47.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['5.0', '47.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['5.0', '47.0', '5.0', '47.0'], ['5.0', '47.0', '5.0', '47.0'], ['5.0', '47.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '11.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', 
'5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['3.0', '3.0', '5.0', '47.0'], ['48.0', '44.0', '5.0', '47.0'], ['5.0', '3.0', '5.0', '3.0'], ['1.0', '29.0', '5.0', '3.0'], ['1.0', '29.0', '5.0', '3.0'], ['5.0', '47.0', '5.0', '3.0'], ['5.0', '47.0', '5.0', '3.0'], ['5.0', '3.0', '5.0', '3.0'], ['5.0', '3.0', '5.0', '3.0'], ['40.0', '13.0', '5.0', '3.0'], ['5.0', '52.0', '5.0', '3.0'], ['5.0', '52.0', '5.0', '3.0'], ['5.0', '52.0', '5.0', '3.0'], ['3.0', '3.0', '5.0', '3.0'], ['3.0', '3.0', '5.0', '3.0'], ['5.0', '42.0', '5.0', '3.0'], ['5.0', '42.0', '5.0', '3.0'], ['5.0', '48.0', '5.0', '3.0'], ['5.0', '42.0', '5.0', '3.0'], ['21.0', '17.0', '5.0', '3.0'], ['21.0', '17.0', '5.0', '3.0'], ['21.0', '17.0', '5.0', '3.0'], ['3.0', '3.0', '5.0', '3.0'], ['3.0', '3.0', '5.0', '3.0'], ['3.0', '3.0', '5.0', '3.0'], ['3.0', '3.0', '5.0', '3.0'], ['3.0', '3.0', '5.0', '3.0'], ['3.0', '3.0', '5.0', '3.0'], ['3.0', '3.0', '5.0', '3.0'], ['3.0', '3.0', '5.0', '3.0'], ['3.0', '3.0', '5.0', '3.0'], ['3.0', '3.0', '5.0', '3.0'], ['3.0', '3.0', '5.0', '3.0'], ['3.0', '3.0', '5.0', '3.0'], ['3.0', '3.0', '5.0', '3.0'], ['3.0', '3.0', '5.0', '3.0'], ['5.0', '47.0', '5.0', '3.0']
</code></pre>
<p>Sample Data edges</p>
<pre><code>(0, 682) 1
(0, 683) 1
(0, 755) 1
(0, 756) 1
(0, 794) 1
(0, 807) 1
(0, 808) 1
(0, 883) 1
(0, 884) 1
(0, 965) 1
: :
(5793, 1492) 1
(5793, 1968) 1
(5793, 2050) 1
(5793, 2347) 1
(5793, 2491) 1
(5793, 3171) 1
(5793, 3549) 1
(5793, 3772) 1
(5793, 5469) 1
(5793, 5504) 1
(5793, 5727) 1
(5793, 5766) 1
(5793, 5767) 1
(5793, 5768) 1
(5793, 5769) 1
</code></pre>
|
<python><graph><networkx>
|
2023-03-16 02:21:22
| 1
| 449
|
Data Science Analytics Manager
|
75,751,564
| 1,311,866
|
f-string and multiprocessing.Manager dict and list
|
<p>I am writing this question because what I am experiencing does not seem like it should be the desired behavior:</p>
<p>The following code is pretty much lifted from the <a href="https://docs.python.org/3.9/library/multiprocessing.html" rel="nofollow noreferrer">3.9.16 documentation on <code>multiprocessing.Manager</code></a></p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing import Process, Manager

def f(d, l):
    d[1] = '1'
    d['2'] = 2
    d[0.25] = None
    l.reverse()

def multi_foo():
    with Manager() as manager:
        d = manager.dict()
        l = manager.list(range(10))

        jobs = []
        for _ in range(1):
            p = Process(target=f, args=(d, l))
            jobs.append(p)
            p.start()

        for _j in jobs:
            _j.join()

        print(f"{d=}")
        print(d)
        print(f"{l=}")
        print(l)
        ell = l[:]
        print(f"{ell=}")

if __name__ == '__main__':
    multi_foo()
</code></pre>
<p>This is the output that running the above code yields:</p>
<pre><code>d=<DictProxy object, typeid 'dict' at 0x100cf2550>
{1: '1', '2': 2, 0.25: None}
l=<ListProxy object, typeid 'list' at 0x100f82790>
[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
ell=[9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
</code></pre>
<p>It seems to me that the 3rd line of the output should look more like the 5th line, so the <code>f-string</code> output on lines 1 and 3 is not what I would expect.</p>
<p>I am looking for an explanation for why I should expect what I get in the 1st and 3rd line of the output.</p>
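<p>For what it's worth, the split can be reproduced without <code>Manager</code> at all: the f-string <code>=</code> debug specifier formats with <code>repr()</code>, while <code>print()</code> uses <code>str()</code>, so any object whose <code>__repr__</code> and <code>__str__</code> differ shows the same behavior. The <code>Shim</code> class below is a made-up stand-in for a proxy object:</p>

```python
# Stand-in for a proxy object whose repr() and str() differ,
# like multiprocessing's DictProxy/ListProxy.
class Shim:
    def __repr__(self):
        return "<DictProxy object, typeid 'dict'>"

    def __str__(self):
        return "{1: '1', '2': 2}"


s = Shim()
print(f"{s=}")  # the '=' debug specifier uses repr() by default
print(s)        # print() uses str()
```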
|
<python><f-string><multiprocessing-manager>
|
2023-03-16 02:02:03
| 1
| 3,737
|
tipanverella
|
75,751,478
| 10,215,301
|
How can I include external Python scripts while executing their codes in Quarto?
|
<p>I am rendering a Quarto markdown file as a parent file (<code>test.qmd</code> in the MWE). While rendering, I am trying to include an external Python script (<code>lm.py</code>) in the parent file, execute its code, and refer to the result of that code (<code>model</code> in <code>lm.py</code>) from the parent file. However, I face a <code>NameError</code> as shown below, and I am unable to access the result from the parent file:</p>
<pre><code>Executing 'test.ipynb'
Cell 1/2...Done
Cell 2/2...ERROR:
An error occurred while executing the following cell:
------------------
print(model.summary())
------------------
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_18968\3590939978.py in <module>
----> 1 print(model.summary())
NameError: name 'model' is not defined
NameError: name 'model' is not defined
</code></pre>
<p>Then, how can I include an external Python script and execute its code when I render a Quarto markdown file?</p>
<h1>MWE</h1>
<h2><code>test.qmd</code></h2>
<pre><code>---
format: pdf
execute:
cache: false
---
```{python}
#| echo: true
#| eval: true
#| file: lm.py
```
```{python}
print(model.summary())
```
</code></pre>
<h2><code>lm.py</code></h2>
<pre class="lang-py prettyprint-override"><code>import statsmodels.api as sm
import pandas as pd
mtcars = sm.datasets.get_rdataset('mtcars').data
model = sm.formula.ols('am ~ cyl + mpg', data = mtcars).fit()
</code></pre>
|
<python><execution><evaluation><quarto>
|
2023-03-16 01:39:22
| 1
| 3,723
|
Carlos Luis Rivera
|
75,751,158
| 1,263,739
|
How to extract and combine text and tables from PDF using AWS Textract
|
<p>I am using the <a href="https://aws-samples.github.io/amazon-textract-textractor/index.html" rel="nofollow noreferrer">textractor</a> package to extract the text and the table that is present in a pdf document through AWS Textract:</p>
<pre><code>from textractor import Textractor
from textractor.data.constants import TextractFeatures
extractor = Textractor(region_name='us-east-1')
document = extractor.start_document_analysis(
file_source="s3://<document>.pdf",
features=[TextractFeatures.TABLES],
)
text = document.document.pages[0].text
table_csv = document.document.pages[0].tables[0].to_csv()
</code></pre>
<p>This works well. However, I want to combine into a single text string (1) the text of the page with (2) the table on the page, but WITHOUT overlapping text. Right now, the <code>text</code> variable also contains the text extracted from the <code>table_csv</code> content, so if I just concatenate the strings, there will be duplicated information.</p>
<p>Is there a clean way to remove the overlapping text to achieve this?</p>
|
<python><ocr><amazon-textract>
|
2023-03-16 00:21:59
| 1
| 2,374
|
kahlo
|
75,751,119
| 6,676,101
|
In general, what is a pythonic way to write a decorator?
|
<p>In general, I want to know how to write a simple decorator in python.</p>
<p>However, it might help to have a specific example.</p>
<p>Consider the following function:</p>
<pre class="lang-python prettyprint-override"><code>def pow(base:float, exp:int):
    """
    +------------------------------------------+
    |                 EXAMPLES                 |
    +------------------------------------------+
    | BASE | EXPONENT |         OUTPUT         |
    +------+----------+------------------------+
    |  2   |    5     | 2^5    | 32            |
    |  2.5 |    7     | 2.5^7  | 610.3515625   |
    |  10  |    3     | 10^3   | 1000          |
    |  0.1 |    5     | 0.1^5  | 0.00001       |
    |  7   |    0     | 7^0    | 1             |
    +------+----------+--------+---------------+
    """
    base = float(base)
    # convert `exp` to string to avoid flooring, or truncating, floats
    exp = int(str(exp))
    if exp > 0:
        return base * pow(base, exp-1)
    else:  # exp == 0
        return 1
</code></pre>
<p>With the original implementation, the following function call will result in an error:</p>
<pre class="lang-python prettyprint-override"><code>raw_numbers = [0, 0]
raw_numbers[0] = input("Type a base on the command line and press enter")
raw_numbers[1] = input("Type an exponent (power) on the command line and press enter")
numbers = [float(num.strip()) for num in raw_numbers]
# As an example, maybe numbers == [4.5, 6]
result = pow(numbers)
print(result)
</code></pre>
<hr />
<p>Suppose that we want to decorate the <code>pow</code> function so that both of the following two calls are valid:</p>
<blockquote>
<ol>
<li><p><code>result = pow(numbers)</code> where <code>numbers</code> is a reference to the list-object <code>[4.5, 6]</code></p>
</li>
<li><p><code>result = pow(4.5, 6)</code></p>
</li>
</ol>
</blockquote>
<hr />
<p>We want to use a decorator named something similar to <code>flatten_args</code>...</p>
<pre class="lang-python prettyprint-override"><code>@flatten_args
def pow(*args):
    pass
</code></pre>
<p>How do we write such a decorator?</p>
<p>Also, how do we preserve the doc-string when we decorate a callable?</p>
<pre class="lang-python prettyprint-override"><code>print(pow.__doc__)
</code></pre>
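<p>As a starting point, here is one way such a decorator might be sketched. The <code>flatten_args</code> name and the "unpack a lone list/tuple argument" rule are assumptions from the question, not a canonical recipe; <code>functools.wraps</code> also takes care of the doc-string question:</p>

```python
import functools


def flatten_args(func):
    """Allow func(a, b) and func([a, b]) interchangeably (hypothetical rule)."""
    @functools.wraps(func)  # copies __doc__, __name__, etc. onto the wrapper
    def wrapper(*args, **kwargs):
        # If the single positional argument is a list or tuple, unpack it.
        if len(args) == 1 and isinstance(args[0], (list, tuple)):
            args = tuple(args[0])
        return func(*args, **kwargs)
    return wrapper


@flatten_args
def pow(base, exp):
    """Raise base to a non-negative integer power."""
    result = 1.0
    for _ in range(int(exp)):
        result *= float(base)
    return result


print(pow(4.5, 6))    # 8303.765625
print(pow([4.5, 6]))  # 8303.765625
print(pow.__doc__)    # docstring survives thanks to functools.wraps
```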
|
<python><python-3.x><decorator><python-decorators><functor>
|
2023-03-16 00:13:19
| 2
| 4,700
|
Toothpick Anemone
|
75,751,096
| 1,585,767
|
How to set same encoding in Python String as when reading it from file
|
<p>I'm building an AWS Lambda to generate a JWT from a PEM. The example I'm using is taken from this link:</p>
<p><a href="https://docs.github.com/en/apps/creating-github-apps/authenticating-with-a-github-app/generating-a-json-web-token-jwt-for-a-github-app" rel="nofollow noreferrer">https://docs.github.com/en/apps/creating-github-apps/authenticating-with-a-github-app/generating-a-json-web-token-jwt-for-a-github-app</a></p>
<p>When I read the pem from a file, everything works ok. This is the code that does the job:</p>
<pre><code># Open PEM
with open(pem, 'rb') as pem_file:
    signing_key = jwt.jwk_from_pem(pem_file.read())
</code></pre>
<p>However, I need to get the content of the pem file from an AWS secret. Therefore, this is the code that I'm using:</p>
<pre><code># get PEM from AWS secret
secret = get_secret()
signing_key = jwt.jwk_from_pem(secret)
</code></pre>
<p>When I run this code, I get the error below:</p>
<pre><code>{
"errorMessage": "from_buffer() cannot return the address of a unicode object",
"errorType": "TypeError",
"stackTrace": [
" File \"/var/task/handler.py\", line 19, in handler\n signing_key = jwt.jwk_from_pem(secret)\n",
" File \"/var/task/jwt/jwk.py\", line 405, in jwk_from_pem\n return jwk_from_bytes(\n",
" File \"/var/task/jwt/jwk.py\", line 384, in jwk_from_bytes\n return jwk_from_private_bytes(\n",
" File \"/var/task/jwt/jwk.py\", line 328, in wrapper\n return func(content, loader, **kwargs)\n",
" File \"/var/task/jwt/jwk.py\", line 345, in jwk_from_private_bytes\n privkey = private_loader(content, password, backend) # type: ignore[operator] # noqa: E501\n",
" File \"/var/task/cryptography/hazmat/primitives/serialization/base.py\", line 24, in load_pem_private_key\n return ossl.load_pem_private_key(\n",
" File \"/var/task/cryptography/hazmat/backends/openssl/backend.py\", line 949, in load_pem_private_key\n return self._load_key(\n",
" File \"/var/task/cryptography/hazmat/backends/openssl/backend.py\", line 1169, in _load_key\n mem_bio = self._bytes_to_bio(data)\n",
" File \"/var/task/cryptography/hazmat/backends/openssl/backend.py\", line 630, in _bytes_to_bio\n data_ptr = self._ffi.from_buffer(data)\n"
]
}
</code></pre>
<p>It seems that, for some reason, when I read the PEM from the file it has the correct encoding, but when I get a string with the same value from the AWS secret, the library doesn't like the encoding.</p>
<p>Any suggestions?</p>
<p><strong>EDIT:</strong> Here's the <code>get_secret</code> function:</p>
<pre><code>def get_secret():
    secret_name = "pem"
    region_name = "eu-west-1"

    # Create a Secrets Manager client
    session = boto3.session.Session()
    client = session.client(
        service_name='secretsmanager',
        region_name=region_name
    )

    try:
        get_secret_value_response = client.get_secret_value(
            SecretId=secret_name
        )
    except ClientError as e:
        # For a list of exceptions thrown, see
        # https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html
        raise e

    # Decrypts secret using the associated KMS key.
    secret = json.loads(get_secret_value_response['SecretString'])
    return secret['pem']
</code></pre>
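<p>For context on the error: <code>open(pem, 'rb')</code> yields <code>bytes</code>, while <code>SecretString</code> is a <code>str</code>, and the traceback dies where the library hands the value to CFFI's <code>from_buffer</code>, which only accepts bytes-like objects. A minimal sketch of the type difference (the PEM text below is a placeholder, not a real key):</p>

```python
# A bytes-expecting API sees two different types depending on the source:
pem_text = "-----BEGIN PRIVATE KEY-----\nplaceholder\n-----END PRIVATE KEY-----\n"
pem_bytes = pem_text.encode("utf-8")  # what open(..., 'rb').read() would give

print(type(pem_text).__name__)   # str
print(type(pem_bytes).__name__)  # bytes
```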
|
<python><aws-lambda><encoding>
|
2023-03-16 00:06:05
| 1
| 10,747
|
Andres
|
75,751,088
| 10,592,609
|
Is there a more pythonic way of writing this?
|
<p>I wrote this code and it works. I wanted to get some other thoughts on how the code can be improved. Thanks.</p>
<p>The main objective of the code is to determine whether it is possible to obtain a strictly increasing sequence by removing no more than one element from the array.</p>
<pre><code>def hh(s):
    o = True
    for i in range(len(s)-1):
        if (s[i+1] <= s[i]):
            o = False
    return o
</code></pre>
</code></pre>
<pre><code>def solution(sequence):
    o = False
    s = sequence
    for i in range(len(s)):
        d = s.pop(i)
        if hh(s):
            o = True
        s.insert(i, d)
    return o
</code></pre>
<pre><code>solution([1,3,2,1]) should return false
solution([1,3,2]) should return true
</code></pre>
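<p>For comparison, here is a sketch of one more idiomatic variant. The semantics are assumed identical to <code>solution</code> above (the same brute-force idea), just without mutating the input list:</p>

```python
def almost_increasing(seq):
    """Return True if removing at most one element can make seq strictly increasing."""
    def strictly_increasing(s):
        return all(a < b for a, b in zip(s, s[1:]))

    if strictly_increasing(seq):
        return True
    # Brute force: try removing each element once (O(n^2), like the original,
    # but using slices instead of pop/insert on the caller's list).
    return any(strictly_increasing(seq[:i] + seq[i + 1:]) for i in range(len(seq)))


print(almost_increasing([1, 3, 2, 1]))  # False
print(almost_increasing([1, 3, 2]))     # True
```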
|
<python><python-3.x>
|
2023-03-16 00:04:31
| 1
| 2,753
|
Babatunde Mustapha
|
75,750,742
| 18,313,588
|
Get all elements with count less than 50 from a counter
|
<p>Code</p>
<pre><code>from itertools import dropwhile

for key, count in dropwhile(lambda key_count: key_count[1] < 50, countered.most_common()):
    del countered[key]
</code></pre>
<p>Input which is the <code>countered</code> variable looks like this</p>
<pre><code>[('Required Weather API', 100),
('New York City Museums', 70),
('New York City Art Galleries', 48),
('Zillow Home Value Searching API', 38)]
</code></pre>
<p>I used the code above to try to get the items with a count less than 50 into a list, but instead I get back an empty counter, <code>Counter()</code>.</p>
<p>Expected output</p>
<pre><code>[('New York City Art Galleries', 48),
('Zillow Home Value Searching API', 38)]
</code></pre>
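<p>For reference, the expected output can be produced without <code>dropwhile</code> at all; a plain comprehension over <code>most_common()</code> is one hedged alternative:</p>

```python
from collections import Counter

countered = Counter({
    'Required Weather API': 100,
    'New York City Museums': 70,
    'New York City Art Galleries': 48,
    'Zillow Home Value Searching API': 38,
})

# most_common() yields (key, count) pairs sorted by count, descending;
# keep only the pairs with a count below 50.
small = [(k, c) for k, c in countered.most_common() if c < 50]
print(small)
# [('New York City Art Galleries', 48), ('Zillow Home Value Searching API', 38)]
```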
<p>Thanks</p>
|
<python><python-3.x><counter><python-itertools>
|
2023-03-15 22:46:36
| 2
| 493
|
nerd
|
75,750,656
| 16,665,831
|
Kafka consumer not appending consumed data into list in docker/ docker compose
|
<p>I developed a Python app that reads a file, transforms it, sends it to a Kafka topic, and then receives the data with a consumer. I deployed this pipeline with Docker and Docker Compose. According to the logs, the producer sends data to the Kafka topic and the consumer can receive it line by line, but when the code reaches the <code>[msg.value for msg in consumer]</code> line, it does not append any data to the list. I build 3 containers: zookeeper, broker, and client (which runs three Python scripts).
Following is my <code>docker-compose.yml</code> file:</p>
<pre><code>---
version: '3.5'

networks:
  rmoff_kafka:
    name: rmoff_kafka

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:5.5.0
    container_name: zookeeper
    networks:
      - rmoff_kafka
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181

  broker:
    image: confluentinc/cp-kafka:5.5.0
    container_name: broker
    ports:
      - 9092:9092
    networks:
      - rmoff_kafka
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

  client:
    container_name: client
    build:
      context: ./
      dockerfile: Dockerfile
    depends_on:
      - broker
    networks:
      - rmoff_kafka
</code></pre>
<p>And following is my Dockerfile for the client:</p>
<pre><code>FROM python:3.7
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "consumer.py"]
</code></pre>
<p>Following is part of my Kafka producer script:</p>
<pre><code>def produce_data_with_kafka(data):
    """
    Kafka Producer writes data to topic.
    Args:
        data: dataframe: input dataframe
    Returns: None
    """
    try:
        logger.info("Kafka Producer setting up...")
        producer = KafkaProducer(
            bootstrap_servers=['broker:9092'],
            value_serializer=lambda x: json.dumps(x, default=str).encode("utf-8")
        )
    except Exception:
        raise Exception("Error when setting up Kafka Producer!")

    for row in data.values:
        producer.send("topic_test", key=None, value=row.flatten().tolist())
    producer.flush()
    logger.info("Data sent to Kafka topic...")
</code></pre>
<p>And following is part of my Kafka consumer script:</p>
<pre><code>def consume_data_with_kafka():
    """
    Kafka Consumer reads data from topic.
    Args: None
    Returns:
        data: dataframe: input dataframe
    """
    try:
        logger.info("Kafka Consumer setting up...")
        consumer = KafkaConsumer(
            "topic_test",
            bootstrap_servers=['broker:9092'],
            value_deserializer=lambda x: json.loads(x.decode("utf-8")),
            auto_offset_reset='earliest',
            enable_auto_commit=False
        )
        consumer.subscribe(["topic_test"])
        result = pd.DataFrame(
            [msg.value for msg in consumer],
            columns=my_cols,
        )
        consumer.commit()
        return result
    except Exception:
        raise Exception("Error when setting up Kafka Consumer!")
</code></pre>
<p>Finally, following are the client logs from docker-compose when I run the client container:</p>
<pre><code>2023-03-16 01:18:15 2023-03-15 22:18:15,648 - Stream Data Processing - Data Processer File - INFO - Reading raw data from input...
2023-03-16 01:18:15 2023-03-15 22:18:15,671 - Stream Data Processing - Data Processer File - INFO - Preprocessesing input data...
2023-03-16 01:18:15 2023-03-15 22:18:15,766 - Stream Data Processing - Data Processer File - INFO - Finding session ids for unique ips...
2023-03-16 01:18:17 2023-03-15 22:18:17,548 - Stream Data Processing - Data Processer File - INFO - Finding average times for unique session ids...
2023-03-16 01:18:19 2023-03-15 22:18:19,964 - Stream Data Processing - Data Processer File - INFO - Merging get_post_counts data with two times transformed data...
2023-03-16 01:18:19 2023-03-15 22:18:19,964 - Stream Data Processing - Data Processer File - INFO - Finding get and post counts for unique ips...
2023-03-16 01:18:19 2023-03-15 22:18:19,976 - Stream Data Processing - Kafka Producer File - INFO - Kafka Producer setting up...
2023-03-16 01:18:19 2023-03-15 22:18:19,978 - kafka.conn - INFO - <BrokerConnection node_id=bootstrap-0 host=broker:9092 <connecting> [IPv4 ('172.31.0.3', 9092)]>: connecting to broker:9092 [('172.31.0.3', 9092) IPv4]
2023-03-16 01:18:19 2023-03-15 22:18:19,978 - kafka.conn - INFO - Probing node bootstrap-0 broker version
2023-03-16 01:18:19 2023-03-15 22:18:19,978 - kafka.conn - INFO - <BrokerConnection node_id=bootstrap-0 host=broker:9092 <connecting> [IPv4 ('172.31.0.3', 9092)]>: Connection complete.
2023-03-16 01:18:20 2023-03-15 22:18:20,097 - kafka.conn - INFO - Broker version identified as 2.5.0
2023-03-16 01:18:20 2023-03-15 22:18:20,097 - kafka.conn - INFO - Set configuration api_version=(2, 5, 0) to skip auto check_version requests on startup
2023-03-16 01:18:20 2023-03-15 22:18:20,269 - kafka.conn - INFO - <BrokerConnection node_id=1 host=broker:9092 <connecting> [IPv4 ('172.31.0.3', 9092)]>: connecting to broker:9092 [('172.31.0.3', 9092) IPv4]
2023-03-16 01:18:20 2023-03-15 22:18:20,269 - kafka.conn - INFO - <BrokerConnection node_id=1 host=broker:9092 <connecting> [IPv4 ('172.31.0.3', 9092)]>: Connection complete.
2023-03-16 01:18:20 2023-03-15 22:18:20,270 - kafka.conn - INFO - <BrokerConnection node_id=bootstrap-0 host=broker:9092 <connected> [IPv4 ('172.31.0.3', 9092)]>: Closing connection.
2023-03-16 01:18:36 2023-03-15 22:18:36,161 - Stream Data Processing - Kafka Producer File - INFO - Data sent to Kafka topic...
2023-03-16 01:18:36 2023-03-15 22:18:36,161 - kafka.conn - INFO - <BrokerConnection node_id=1 host=broker:9092 <connected> [IPv4 ('172.31.0.3', 9092)]>: Closing connection.
2023-03-16 01:18:36 2023-03-15 22:18:36,163 - Stream Data Processing - Kafka Consumer File - INFO - Kafka Consumer setting up...
2023-03-16 01:18:36 2023-03-15 22:18:36,163 - kafka.conn - INFO - <BrokerConnection node_id=bootstrap-0 host=broker:9092 <connecting> [IPv4 ('172.31.0.3', 9092)]>: connecting to broker:9092 [('172.31.0.3', 9092) IPv4]
2023-03-16 01:18:36 2023-03-15 22:18:36,164 - kafka.conn - INFO - Probing node bootstrap-0 broker version
2023-03-16 01:18:36 2023-03-15 22:18:36,164 - kafka.conn - INFO - <BrokerConnection node_id=bootstrap-0 host=broker:9092 <connecting> [IPv4 ('172.31.0.3', 9092)]>: Connection complete.
2023-03-16 01:18:36 2023-03-15 22:18:36,368 - kafka.conn - INFO - Broker version identified as 2.5.0
2023-03-16 01:18:36 2023-03-15 22:18:36,369 - kafka.conn - INFO - Set configuration api_version=(2, 5, 0) to skip auto check_version requests on startup
2023-03-16 01:18:36 2023-03-15 22:18:36,369 - kafka.consumer.subscription_state - INFO - Updating subscribed topics to: ('topic_test',)
2023-03-16 01:18:36 2023-03-15 22:18:36,369 - kafka.consumer.subscription_state - WARNING - subscription unchanged by change_subscription(['topic_test'])
2023-03-16 01:18:36 2023-03-15 22:18:36,425 - kafka.consumer.subscription_state - INFO - Updated partition assignment: [TopicPartition(topic='topic_test', partition=0)]
2023-03-16 01:18:36 2023-03-15 22:18:36,426 - kafka.conn - INFO - <BrokerConnection node_id=1 host=broker:9092 <connecting> [IPv4 ('172.31.0.3', 9092)]>: connecting to broker:9092 [('172.31.0.3', 9092) IPv4]
2023-03-16 01:18:36 2023-03-15 22:18:36,426 - kafka.conn - INFO - <BrokerConnection node_id=1 host=broker:9092 <connecting> [IPv4 ('172.31.0.3', 9092)]>: Connection complete.
2023-03-16 01:18:36 2023-03-15 22:18:36,426 - kafka.conn - INFO - <BrokerConnection node_id=bootstrap-0 host=broker:9092 <connected> [IPv4 ('172.31.0.3', 9092)]>: Closing connection.
</code></pre>
<p>I am missing one point somewhere, but I could not find it.</p>
|
<python><docker><apache-kafka><docker-compose><dockerfile>
|
2023-03-15 22:32:17
| 0
| 309
|
Ugur Selim Ozen
|
75,750,646
| 2,684,771
|
How do I use bindtags with ttk.Entry?
|
<p>Following <a href="https://stackoverflow.com/q/11541262/2684771">this question</a>, I can setup bindtags to get events that include the keypress that activated the event.</p>
<p>When I change <code>tk.Entry</code> to <code>ttk.Entry</code>, the events still work, but I am unable to select the entry contents or click to give the <code>ttk.Entry</code> focus. Tabbing between fields does select all, but selecting part of the contents does not work.</p>
<p>How do bindtags differ with ttk widgets? Or how do I configure ttk bindtags to allow mouse clicks and content selection?</p>
<pre><code>import tkinter as tk
from tkinter import ttk

root = tk.Tk()

status = ttk.Label(root)
status.grid(row=2, column=0)

def OnKeyPress(event):
    value = event.widget.get()
    string = "value of %s is '%s'" % (event.widget._name, value)
    status.configure(text=string)

tk_entry = tk.Entry(root, name="tk_entry")
tk_entry.bindtags(('Entry', '.tk_entry', '.', 'all'))
tk_entry.grid(row=0, column=0, padx=3, pady=3)
tk_entry.bind("<KeyPress>", OnKeyPress)

# this entry content is unable to be selected
ttk_entry = ttk.Entry(root, name="ttk_entry")
ttk_entry.bindtags(('Entry', '.ttk_entry', '.', 'all'))
ttk_entry.grid(row=1, column=0, padx=3, pady=3)
ttk_entry.bind("<KeyPress>", OnKeyPress)

root.mainloop()
</code></pre>
</code></pre>
|
<python><tkinter><tk-toolkit><ttk>
|
2023-03-15 22:31:17
| 1
| 7,151
|
sambler
|
75,750,635
| 12,131,472
|
how do I insert each row of a small dataframe before the matching rows of a big dataframe?
|
<p>I have one small and one big dataframe</p>
<p>the small one</p>
<pre><code> WS period shortCode identifier
6 197.78 2023-03-10 TC2-FFA spot
7 196.79 2023-03-10 TC5-FFA spot
8 253.13 2023-03-10 TC6-FFA spot
9 198.13 2023-03-13 TC12-FFA spot
10 166.67 2023-03-10 TC14-FFA spot
11 217.86 2023-03-10 TC17-FFA spot
18 97.00 2023-03-10 TD3-FFA spot
19 172.19 2023-03-10 TD7-FFA spot
20 205.71 2023-03-13 TD8-FFA spot
21 175.63 2023-03-10 TD19-FFA spot
22 115.45 2023-03-10 TD20-FFA spot
23 11350000.00 2023-03-10 TD22-FFA spot
24 232.14 2023-03-10 TD25-FFA spot
</code></pre>
<p>the big one with MultiIndex</p>
<pre><code>datumUnit $/mt WS
identifier period shortCode
TC2BALMO Mar 23 TC2-FFA 39.376 228.930
TC2CURMON Mar 23 TC2-FFA 35.946 208.988
TC2+1_M Apr 23 TC2-FFA 38.444 223.512
TC2+2_M May 23 TC2-FFA 37.786 219.686
TC2+3_M Jun 23 TC2-FFA 36.613 212.866
... ...
TD25+3Q Q4 23 TD25-FFA 42.909 185.432
TD25+4Q Q1 24 TD25-FFA 39.000 NaN
TD25+5Q Q2 24 TD25-FFA 32.421 NaN
TD25+1CAL Cal 24 TD25-FFA 34.250 NaN
TD25+2CAL Cal 25 TD25-FFA 33.955 NaN
</code></pre>
<p>This is its MultiIndex</p>
<pre><code>MultiIndex([( 'TC2BALMO', 'Mar 23', 'TC2-FFA'),
('TC2CURMON', 'Mar 23', 'TC2-FFA'),
( 'TC2+1_M', 'Apr 23', 'TC2-FFA'),
...
( 'TD25+4Q', 'Q1 24', 'TD25-FFA'),
( 'TD25+5Q', 'Q2 24', 'TD25-FFA'),
('TD25+1CAL', 'Cal 24', 'TD25-FFA'),
('TD25+2CAL', 'Cal 25', 'TD25-FFA')],
names=['identifier', 'period', 'shortCode'], length=198)
</code></pre>
<p>I wish to insert the "spot" line of the small dataframe on top of the 2nd dataframe for each shortCode without changing the order of the big dataframe</p>
<p>intended result</p>
<pre><code>datumUnit $/mt WS
identifier period shortCode
spot 23-03-10 TC2-FFA NaN 197.78
TC2BALMO Mar 23 TC2-FFA 39.376 228.930
TC2CURMON Mar 23 TC2-FFA 35.946 208.988
TC2+1_M Apr 23 TC2-FFA 38.444 223.512
TC2+2_M May 23 TC2-FFA 37.786 219.686
TC2+3_M Jun 23 TC2-FFA 36.613 212.866
... ...
spot 23-03-10 TD25-FFA NaN 232.14
TD25BALMO Mar 23 TD25-FFA 48.902 211.331
TD25CURMON Mar 23 TD25-FFA 53.254 230.138
TD25+1_M Apr 23 TD25-FFA 46.815 202.312
TD25+2_M May 23 TD25-FFA 43.717 188.924
TD25+3_M Jun 23 TD25-FFA 41.571 179.650
TD25+4_M Jul 23 TD25-FFA 40.776 176.214
TD25+5_M Aug 23 TD25-FFA 40.281 174.075
TD25CURQ Q1 23 TD25-FFA 46.668 201.677
TD25+1Q Q2 23 TD25-FFA 44.035 190.298
TD25+2Q Q3 23 TD25-FFA 40.367 174.447
TD25+3Q Q4 23 TD25-FFA 42.909 185.432
TD25+4Q Q1 24 TD25-FFA 39.000 NaN
TD25+5Q Q2 24 TD25-FFA 32.421 NaN
TD25+1CAL Cal 24 TD25-FFA 34.250 NaN
TD25+2CAL Cal 25 TD25-FFA 33.955 NaN
</code></pre>
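One way to get this interleaving (a sketch with toy data standing in for the two frames; column and level names are taken from the question) is to concatenate the matching "spot" row in front of each shortCode group, iterating the groups in their original order:

```python
import pandas as pd

small = pd.DataFrame({'WS': [197.78, 232.14],
                      'period': ['2023-03-10', '2023-03-10'],
                      'shortCode': ['TC2-FFA', 'TD25-FFA'],
                      'identifier': ['spot', 'spot']})
big = pd.DataFrame({'identifier': ['TC2BALMO', 'TC2+1_M', 'TD25BALMO'],
                    'period': ['Mar 23', 'Apr 23', 'Mar 23'],
                    'shortCode': ['TC2-FFA', 'TC2-FFA', 'TD25-FFA'],
                    '$/mt': [39.376, 38.444, 48.902],
                    'WS': [228.930, 223.512, 211.331]}
                   ).set_index(['identifier', 'period', 'shortCode'])

# give the spot rows the same MultiIndex shape as the big frame
spot = small.set_index(['identifier', 'period', 'shortCode'])

# concatenate the spot row in front of each shortCode group;
# sort=False preserves the big frame's original group order
parts = []
for code, grp in big.groupby(level='shortCode', sort=False):
    parts.append(spot[spot.index.get_level_values('shortCode') == code])
    parts.append(grp)
result = pd.concat(parts)
```

Columns missing from the spot rows (here `$/mt`) come out as NaN, matching the intended result.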
|
<python><pandas><dataframe>
|
2023-03-15 22:30:01
| 1
| 447
|
neutralname
|
75,750,556
| 417,896
|
pyarrow - Write binary file in arrow format
|
<p>How do I write a file <code>./my-data.arrow</code> using the arrow binary format?</p>
<p>Note I don't want a Parquet file.</p>
<pre><code>import pyarrow as pa
data = [('A', 1), ('B', 2)]
schema = pa.schema([
('name', pa.string()),
('id', pa.int64())
])
</code></pre>
|
<python><pyarrow><apache-arrow>
|
2023-03-15 22:14:27
| 1
| 17,480
|
BAR
|
75,750,489
| 3,595,231
|
How to refer a lib in different location in python?
|
<p>This is the structure of our team code in python,</p>
<pre><code>`tree -L 1 apps/sockstunnel/sockstunnel auto_tasks/FEXT_STACKER/`
apps/sockstunnel/sockstunnel
├── api.py
├── db.py
├── exceptions.py
├── flags.py
├── frame.py
├── __init__.py
├── libfix.py
├── notification.py
├── protocols.py
├── proxy_router.py
└── service.py
auto_tasks/FEXT_STACKER/
├── docker-compose.yml
├── Dockerfile
├── licenses
├── README
├── stacker.py
└── tmp
<p>of which,</p>
<ol>
<li>"apps" and "auto_tasks" are in the same dir.</li>
<li>"auto_tasks/FEXT_STACKER/stacker.py" is the project I am currently working on, and it needs to refer to a call inside this "apps/sockstunnel/sockstunnel/protocols.py" file.</li>
</ol>
<p>My question is: what can I do on the "stacker.py" side so that the contents of "protocols.py" can be imported? I am using python3.9.</p>
<p><strong>Some Update</strong>:</p>
<p>I have tried these 2 ways in my stacker.py:</p>
<p>A) :</p>
<pre><code>import sys
sys.path.append("/../../apps/sockstunnel")
</code></pre>
<p>B)</p>
<pre><code>import sys
sys.path.append("/absolute/path/to/this/<apps/sockstunnel>")
</code></pre>
<p>The 1st one fails, and the 2nd one works. But obviously, I would like to have the 1st option in my code. So what should I do?</p>
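A leading `/` makes the path absolute (rooted at `/`), which is why option A fails; entries on `sys.path` must be real paths, not paths relative to the script. One portable pattern is to resolve the target relative to the current file. A sketch (the helper name is mine):

```python
import os
import sys

def add_relative_path(anchor_file, *parts):
    """Append a directory, given relative to anchor_file's directory, to sys.path."""
    base = os.path.dirname(os.path.abspath(anchor_file))
    target = os.path.normpath(os.path.join(base, *parts))
    if target not in sys.path:
        sys.path.append(target)
    return target

# in stacker.py (two levels up, then into apps/sockstunnel):
# add_relative_path(__file__, '..', '..', 'apps', 'sockstunnel')
```

This keeps working regardless of the current working directory when the script is launched.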
|
<python><python-3.x>
|
2023-03-15 22:05:44
| 1
| 765
|
user3595231
|
75,750,421
| 18,313,588
|
Change pandas column values to become lists in pandas dataframe
|
<p>Currently, I have</p>
<p><code>df['fruits']</code></p>
<pre><code>apple,
orange,
pineapple,
banana,tomato,
cabbage,watermelon,
Name: fruits, Length: 5, dtype: object
</code></pre>
<p>How can I change the above into lists and remove the trailing commas in the pandas dataframe?</p>
<p>Expected output of <code>df['fruits']</code></p>
<pre><code>[apple]
[orange]
[pineapple]
[banana,tomato]
[cabbage,watermelon]
Name: fruits, Length: 5, dtype: object
</code></pre>
<p>Thanks</p>
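Assuming the values are strings with a trailing comma (as the `dtype: object` output suggests), stripping that comma and splitting on the ones that remain gives lists; a sketch:

```python
import pandas as pd

df = pd.DataFrame({'fruits': ['apple,', 'orange,', 'pineapple,',
                              'banana,tomato,', 'cabbage,watermelon,']})
# drop the trailing comma, then split on the separators that remain
df['fruits'] = df['fruits'].str.rstrip(',').str.split(',')
```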
|
<python><python-3.x><pandas><dataframe>
|
2023-03-15 21:55:26
| 1
| 493
|
nerd
|
75,750,390
| 12,193,952
|
How to calculate rolling Volume Profile using Pandas faster?
|
<h2>Problem</h2>
<p>I am doing some market analysis and as a part of technical analysis, I am using a "rolling" <a href="https://www.tradingview.com/support/solutions/43000502040-volume-profile/" rel="nofollow noreferrer">Volume Profile</a> indicator. It calculates the price level with the highest volume in last x candles (<code>window</code>) and it's calculated every candle.</p>
<p>This creates an issue, because the size of the dataframe values that needs to be processed is multiplied by the <code>window</code>. E.g. if the dataframe has <code>1000</code> rows and the <code>window</code> is set to <code>10</code>, it has to process <code>10000</code> rows.</p>
<p>I usually use a <code>window</code> parameter between <strong>5 - 30</strong> and the usual dataframe size is <strong>1.5 - 2.5 million rows</strong> <em>(because it is so slow, I was never able to set a window larger than 30)</em>.</p>
<p>I need the "point of control" only, which is the price level with the <strong>highest volume</strong> in the <code>window</code>.</p>
<p><a href="https://i.sstatic.net/Ci5Vb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ci5Vb.png" alt="vrp_tradingview" /></a></p>
<p>This is how the implementation looks on the chart:
<a href="https://i.sstatic.net/06O8G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/06O8G.png" alt="vrp_real" /></a></p>
<hr />
<h2>Question</h2>
<ul>
<li>Is there any way to speed up the process by omitting the "costly" <code>for</code> loop and leveraging <code>pandas</code> or <code>numpy</code> functions?</li>
<li>Is it possible to group values with same price level faster than using <code>pandas.DataFrame.groupby</code> method?</li>
<li>Is it even possible to lower the processing time under <code>120 sec / 1M</code> rows?</li>
</ul>
<hr />
<h3>A selection of the used tech</h3>
<ul>
<li>Python <code>3.10</code></li>
<li>Pandas <code>1.4.4</code></li>
<li>numpy <code>1.24.2</code></li>
<li>running in <code>AWS ECS Fargate</code> (but results on local are quite similar); 1 vCPU and 8 GB of memory <em>(threading/parallelism might not be so effective)</em></li>
</ul>
<hr />
<h2>Algorithm variables explained</h2>
<ul>
<li><strong>point of control</strong>: the price level with the highest volume in period</li>
<li><strong>window</strong>: number of candles used to calculate volume profile (last x candles)</li>
<li><strong>precision</strong>: price level that I round price to (if set to <code>10</code> => <code>13 -> 10</code>; set to <code>100</code> => <code>18421 -> 18400</code>)</li>
</ul>
<hr />
<h2>Current implementation</h2>
<p><em>Note: I have removed some comments from the code.</em>
<em>Note: I understand the calculation is approximate; however, I do not have tick data to do a more precise calculation.</em></p>
<p>Dataframe example</p>
<pre class="lang-py prettyprint-override"><code> datetime close volume
0 2019-11-05 22:17:00 9311.73 17.535066
1 2019-11-05 22:18:00 9306.09 19.708704
2 2019-11-05 22:19:00 9307.72 17.067911
3 2019-11-05 22:20:00 9303.72 39.139451
4 2019-11-05 22:21:00 9300.23 19.286789
5 2019-11-05 22:22:00 9300.43 49.699525
6 2019-11-05 22:23:00 9299.52 8.923734
7 2019-11-05 22:24:00 9299.82 27.646121
8 2019-11-05 22:25:00 9307.83 14.054908
9 2019-11-05 22:26:00 9314.66 16.678026
10 2019-11-05 22:27:00 9317.17 13.971519
11 2019-11-05 22:28:00 9323.35 30.626054
12 2019-11-05 22:29:00 9323.48 31.870815
13 2019-11-05 22:30:00 9323.41 12.434114
14 2019-11-05 22:31:00 9327.88 6.688897
15 2019-11-05 22:32:00 9332.30 9.494066
16 2019-11-05 22:33:00 9328.17 11.473973
17 2019-11-05 22:34:00 9324.59 16.432248
18 2019-11-05 22:35:00 9319.58 15.865908
19 2019-11-05 22:36:00 9318.79 10.796839
</code></pre>
<p>This locates/computes the POC</p>
<pre class="lang-py prettyprint-override"><code># Now, create a rolling maximum of last x candles only
def get_poc(input_df,
precision):
profile = input_df.groupby('close')[['volume']].sum()
point_of_control = profile['volume'].idxmax()
# In order to draw line just in the middle of the bar with the highest
# value we need to add half of the precision to the value
point_of_control_middle = point_of_control + (precision / 2)
return point_of_control_middle
</code></pre>
<p>Main def</p>
<pre class="lang-py prettyprint-override"><code>def generate_vrp(input_df,
precision,
window):
logger.info("> Generating volume profiles (VRP)")
num_rows = input_df.shape[0]
# Price levels
price_levels = pd.DataFrame(columns=['datetime', 'vrp_level'])
input_df['close'] = (input_df['close'] / precision).round() * precision
# In order to get to the last value, we need to do :"- window + 1"
for i in tqdm(range(1, num_rows)):
if i < window:
rows = input_df[0:i].copy()
else:
rows = input_df[i - window:i].copy()
# Get datetime from last row in the dataset
last_row = rows.iloc[-1]
# Get the volume profile
new_data = pd.DataFrame({'datetime': last_row['datetime'],
'vrp_level': get_poc(rows,
precision)},
index=[0])
price_levels = pd.concat([price_levels, new_data], ignore_index=True)
return price_levels
</code></pre>
<p>Method calling</p>
<pre class="lang-py prettyprint-override"><code>window = 30 # 10 candles
precision = 10 # the volume profile "step" is 100
generate_vrp(df, precision, window)
</code></pre>
<p>Example response <em>(just column with datetime and vrp_level/poc)</em></p>
<ul>
<li>precision: <code>10</code></li>
<li>window: <code>30</code></li>
</ul>
<pre class="lang-py prettyprint-override"><code> datetime vrp_level
0 2019-11-05 22:17:00 9315.0
1 2019-11-05 22:18:00 9315.0
2 2019-11-05 22:19:00 9315.0
3 2019-11-05 22:20:00 9315.0
4 2019-11-05 22:21:00 9305.0
5 2019-11-05 22:22:00 9305.0
6 2019-11-05 22:23:00 9305.0
7 2019-11-05 22:24:00 9305.0
8 2019-11-05 22:25:00 9305.0
9 2019-11-05 22:26:00 9305.0
10 2019-11-05 22:27:00 9305.0
11 2019-11-05 22:28:00 9305.0
12 2019-11-05 22:29:00 9305.0
13 2019-11-05 22:30:00 9305.0
14 2019-11-05 22:31:00 9305.0
15 2019-11-05 22:32:00 9305.0
16 2019-11-05 22:33:00 9305.0
17 2019-11-05 22:34:00 9305.0
18 2019-11-05 22:35:00 9305.0
19 2019-11-05 22:36:00 9305.0
20 2019-11-05 22:37:00 9325.0
</code></pre>
<p>And profiling from <a href="https://github.com/plasma-umass/scalene" rel="nofollow noreferrer">scalene</a> shows, that potential issue might be the <code>groupby</code> and <code>get_poc</code> functions.
<a href="https://i.sstatic.net/k82Sr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/k82Sr.png" alt="scalene-profiling" /></a></p>
<h2>Possible solutions / tried approaches</h2>
<ul>
<li>❌: tried; did not work</li>
<li>💡: an idea I am going to test</li>
<li>🚧: did not completely solve the problem, but helped towards the solution</li>
<li>✅: working solution</li>
<h4>❌ Test bfolkens/py-market-profile</h4>
<p>The idea is to test <a href="https://github.com/bfolkens/py-market-profile" rel="nofollow noreferrer">py-market-profile</a> instead of <code>get_poc</code> method. In my local tests, it showed extremely good performance results in volume profile calculations.</p>
<p>Test results after this implementation are not... good. Scalene points out this line as the bottleneck: <code>mp_slice = mp[i - window:i]</code>. <strong>It's worse than the initial solution.</strong></p>
<pre class="lang-py prettyprint-override"><code>def generate_vrp(df, precision, window):
num_rows = df.shape[0]
price_levels = pd.DataFrame(columns=['datetime', 'vrp_level'])
df['close'] = (df['close'] / precision).round() * precision
df = df.rename(columns={'close': 'Close',
'volume': 'Volume'})
mp = MarketProfile(df)
for i in tqdm(range(1, num_rows)):
if i < window:
mp_slice = mp[0:i]
else:
mp_slice = mp[i - window:i]
poc = mp_slice.poc_price
last_row = df.iloc[i]
new_data = pd.DataFrame({'datetime': last_row['datetime'],
'vrp_level': poc},
index=[0])
price_levels = pd.concat([price_levels, new_data], ignore_index=True)
return price_levels
</code></pre>
<p><a href="https://i.sstatic.net/1lvzr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1lvzr.png" alt="scalene-profile" /></a></p>
<hr />
<h2>Solutions benchmark</h2>
<ul>
<li>Using 1k, 10k, 20k, 50k and 100k dataframe rows; measured using <code>timeit</code>
<a href="https://i.sstatic.net/vFE1U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vFE1U.png" alt="benchmarks" /></a></li>
</ul>
<hr />
<p><em>Even if I don't get any answer, I will continue testing and editing until I find a solution and hopefully help someone else.</em></p>
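For what it's worth, one direction that avoids the per-window `groupby` entirely is a running histogram: quantize prices to integer level codes once, then each step add the incoming candle's volume and subtract the outgoing one, taking `argmax` over the (small) set of levels. A numpy sketch (note the window here includes the current row, which differs slightly from the loop above):

```python
import numpy as np

def rolling_poc(close, volume, window, precision):
    """Rolling point of control: price level (plus half the precision)
    with the highest summed volume over the trailing `window` candles."""
    close = np.asarray(close, dtype=float)
    volume = np.asarray(volume, dtype=float)
    levels = np.round(close / precision).astype(np.int64)
    # map each quantized price to a compact integer code
    uniq, codes = np.unique(levels, return_inverse=True)
    hist = np.zeros(len(uniq))
    out = np.empty(len(close))
    for i in range(len(close)):
        hist[codes[i]] += volume[i]                        # candle enters window
        if i >= window:
            hist[codes[i - window]] -= volume[i - window]  # candle leaves window
        out[i] = uniq[hist.argmax()] * precision + precision / 2
    return out
```

The histogram update is O(1) per candle and `argmax` is over the number of distinct price levels rather than `window` rows, so the cost no longer scales with the window size.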
|
<python><pandas><dataframe><numpy><performance>
|
2023-03-15 21:50:47
| 1
| 873
|
FN_
|
75,750,365
| 14,605,345
|
Extracting three columns from a single column using pandas
|
<p>I have a csv file in which a column is itself a dictionary. This column contains three attributes each of which I want as a separate column in the resultant dataframe.</p>
<p>From the answer <a href="https://stackoverflow.com/questions/34304899/how-to-split-a-single-column-into-three-columns-in-pandas-python">How to split a single column into three columns in pandas (python)?</a> I am trying to use the following line of code to achieve the desired result:</p>
<pre><code>df[['one', 'two', 'three']] = pd.DataFrame([ x.split(',') for x in df['statistics'].tolist() ])
</code></pre>
<p>But when I execute the above line of code I get the following error:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_15084\3622721730.py in <module>
----> 1 df[['one', 'two', 'three']] = pd.DataFrame([ x.split(',') for x in df['statistics'].tolist() ])
~\AppData\Local\Temp\ipykernel_15084\3622721730.py in <listcomp>(.0)
----> 1 df[['one', 'two', 'three']] = pd.DataFrame([ x.split(',') for x in df['statistics'].tolist() ])
AttributeError: 'dict' object has no attribute 'split'
</code></pre>
<p>I attach the df for ready reference:</p>
<pre><code>kind etag id statistics
youtube#video WerWpr9_ht7SPd646jeYvOrMdFU 1isoZVQ9DxY {'viewCount': '133155', 'likeCount': '9199', 'favoriteCount': '0', 'commentCount': '1271'}
youtube#video IkNRr3T_bPnPpfsRPJ9ZmpFLWQI 2izTju-uxrk {'viewCount': '103436', 'likeCount': '3930', 'favoriteCount': '0', 'commentCount': '712'}
youtube#video ea_8Q2h6XDamfLZNhIL0HM3UZw4 oUOUI4_mS5c {'viewCount': '61008', 'likeCount': '3119', 'favoriteCount': '0', 'commentCount': '210'}
youtube#video LjxX4UdBSR88LO41UtUf6cSBsV4 ONrmi30DkJc {'viewCount': '58111', 'likeCount': '2885', 'favoriteCount': '0', 'commentCount': '141'}
youtube#video D98h38VbjEri485pD7dYrOyfoGM RA7t76Ie1TE {'viewCount': '77895', 'likeCount': '3394', 'favoriteCount': '0', 'commentCount': '216'}
youtube#video 4sa3me5UXvRmHb_4rNUKG0XhuVs boomn3StWJ0 {'viewCount': '57257', 'likeCount': '3187', 'favoriteCount': '0', 'commentCount': '159'}
youtube#video e37d1Q_PIJj0ckLAE1Sv-ukVHDw AV3vptOJVaE {'viewCount': '67967', 'likeCount': '3371', 'favoriteCount': '0', 'commentCount': '207'}
youtube#video Ly4sowP9gxeM-3iNgLUUWydTiaU vq6PEiPXGVk {'viewCount': '213144', 'likeCount': '8917', 'favoriteCount': '0', 'commentCount': '550'}
youtube#video ubupKrV7LSJJCmyw4PBPY91BmPo toDp4JS5cwI {'viewCount': '316336', 'likeCount': '9160', 'favoriteCount': '0', 'commentCount': '747'}
youtube#video g6W6BiuT7Af1alJmvmNtgXzZVLw qFOcxBGmOjQ {'viewCount': '468641', 'likeCount': '16106', 'favoriteCount': '0', 'commentCount': '1021'}
youtube#video jhRggyXoTq_PAghKVfqVaZptT8I 6SOKGnf84Ik {'viewCount': '210653', 'likeCount': '10222', 'favoriteCount': '0', 'commentCount': '591'}
youtube#video 2kXYv_ycWt_AhVLV7ZfQ7KR6zFo q-wZ1819y7c {'viewCount': '214089', 'likeCount': '11232', 'favoriteCount': '0', 'commentCount': '571'}
youtube#video p7RePnFd9fXm6PU_UEBCSDs-iyQ 8I4S5Ery92s {'viewCount': '352246', 'likeCount': '15854', 'favoriteCount': '0', 'commentCount': '655'}
youtube#video mJ3OiBk5QpRTlJs-TH_rzEDHLJE aeSqTAwm5NI {'viewCount': '347399', 'likeCount': '13567', 'favoriteCount': '0', 'commentCount': '713'}
youtube#video iQWVTcoYkgmNjJTy93eo6fqdbrM yPwIprzFfF0 {'viewCount': '361987', 'likeCount': '15262', 'favoriteCount': '0', 'commentCount': '559'}
youtube#video XArq68sxje-985r9BAvs05Jj-HA Gg0wYPxbmjA {'viewCount': '1466364', 'likeCount': '52941', 'favoriteCount': '0', 'commentCount': '4278'}
youtube#video F0_58PVsa6pPEmphN1sEYZBe0sU ZcjXo8KtWRY {'viewCount': '230492', 'likeCount': '7322', 'favoriteCount': '0', 'commentCount': '622'}
youtube#video emkAGoMq-kgWTEwJeNOh3EshkiU ur7hLYv404I {'viewCount': '279350', 'likeCount': '9968', 'favoriteCount': '0', 'commentCount': '1187'}
youtube#video fXqmKxY3vFPYnutf0MqQKoyZQV4 wpgA-rRBqs8 {'viewCount': '215555', 'likeCount': '7564', 'favoriteCount': '0', 'commentCount': '451'}
youtube#video 2ml-vwsPQ_5jdgA2UdxoTc4ZXnk sG5rnRb-FI8 {'viewCount': '283075', 'likeCount': '9599', 'favoriteCount': '0', 'commentCount': '747'}
</code></pre>
<p>I require the resultant df to be as follows:</p>
<p><a href="https://i.sstatic.net/7iykI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7iykI.png" alt="enter image description here" /></a></p>
|
<python><pandas><dataframe>
|
2023-03-15 21:46:53
| 1
| 581
|
Huzefa Sadikot
|
75,750,325
| 741,692
|
pyserial slow disconnection
|
<p>I found out my pyserial connection is taking a long time to disconnect, not sure why this is, because opening and sending/receiving seems to be quite fast and in line with the baudrates (225bits -> 0.09375 seconds). I wrote a minimal piece of code to showcase my problem. It is running on a raspberry pi3.</p>
<pre><code>#!/usr/bin/env python3
import datetime
import serial
def send_and_receive():
response_line = None
try:
with serial.Serial('/dev/ttyUSB2', 2400, timeout=1, write_timeout=1) as s:
print(datetime.datetime.now().time(), "Executing command via serialio...")
s.flushInput()
s.flushOutput()
s.write(b'QVFWb\x99\r')
response_line = s.read_until(b"\r")
s.read_all()
print(datetime.datetime.now().time(),"serial response was: %s", response_line)
#s.close()
print(datetime.datetime.now().time(), "closed")
return response_line
except Exception as e:
print(f"Serial read error: {e}")
for i in range(10):
print(datetime.datetime.now().time(), '--------------------')
send_and_receive()
</code></pre>
<p>This is the output I am getting (notice over 1 second of overhead in "closing" the connection):</p>
<pre><code>21:31:43.739869 --------------------
21:31:43.744267 Executing command via serialio...
21:31:43.851322 serial response was: %s b'(VERFW:000SR.34\x16\xef\r'
21:31:44.983403 closed
21:31:44.983702 --------------------
21:31:44.987723 Executing command via serialio...
21:31:45.094942 serial response was: %s b'(VERFW:000SR.34\x16\xef\r'
21:31:46.193361 closed
21:31:46.193688 --------------------
21:31:46.197638 Executing command via serialio...
21:31:46.307374 serial response was: %s b'(VERFW:000SR.34\x16\xef\r'
</code></pre>
<p>If I manually add the <code>s.close()</code> then the time will be spent on that call and the connection will exit the with block immediately.</p>
|
<python><serial-port><pyserial>
|
2023-03-15 21:39:47
| 0
| 8,420
|
DarkZeros
|
75,750,225
| 1,306,907
|
Automate a pipenv update run before merging
|
<p>My team has decided they want to enforce installing the latest version of packages before merging to master so that they don't become too out of date. So someone set up a pre-commit hook to call <code>pipenv update</code> to update the lock file and then install new versions of dependencies.</p>
<p>However, this has caused a major slow down in my development cycle. The update command can take almost 5 minutes to run each time I commit, and if it actually updates anything the hook fails and must run again for another 5 minutes.</p>
<p>Is there a best practice for keeping the lock file up-to-date? If this is it, can it be sped up?</p>
|
<python><pipenv>
|
2023-03-15 21:26:49
| 0
| 631
|
ScoobyDrew18
|
75,750,132
| 5,132,559
|
PyMongo save List of dict in subfield with own ObjectId for every dict entry
|
<p>I'm creating a list of objects in my script, and want to save/update them in a subfield of a MongoDB document.</p>
<p>My Objects looks like this:</p>
<pre><code>class IndividualRating:
def __init__(self, place=0, player="", team="", games="", legs=""):
self.place = place
self.player = player
self.team = team
self.games = games
self.legs = legs
</code></pre>
<p>In my process I'm generating a list of these objects:</p>
<pre><code>ratings = []
for row in rows:
rating = IndividualRating()
# defining here the values
ratings.append(rating.__dict__)
</code></pre>
<p>Then later I want to save or update this in my MongoDB:</p>
<pre><code>def save_ratings(league_id, season_id, ratings):
database.individualratings.update_one(
{
"seasonId": season_id,
"leagueId": league_id
},
{
"$setOnInsert": {
"seasonId": season_id,
"leagueId": league_id
},
"$set": {
"player": ratings
}
},
upsert=True
)
</code></pre>
<p>This code works in principle, but the objects under "player" are saved without an ObjectId, and I want an ObjectId for every object under "player".</p>
<p>As Example</p>
<pre><code>{
_id: ObjectId('someLongId'),
seasonId: 56267,
leagueId: 27273,
player: [
{
<- Here should be an ObjectId
place: 1,
player: "Some Player Name",
team: "Some Team Name",
games: "2:0",
legs: "10:0"
},
{
<- Here should be an ObjectId
place: 2,
player: "Some Player Name",
team: "Some Team Name",
games: "2:0",
legs: "10:0"
}
]
}
</code></pre>
<p>How do I insert a list of objects (dicts) with ObjectIds in a subfield of a document?</p>
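ObjectIds are generated client-side, so you can stamp one onto each dict before the update. A sketch (the uuid fallback is only there so the snippet runs without pymongo installed):

```python
try:
    from bson import ObjectId  # ships with pymongo
except ImportError:
    import uuid
    ObjectId = uuid.uuid4  # stand-in for environments without pymongo

def with_ids(dicts):
    """Return copies of the dicts, each with a fresh _id."""
    return [{'_id': ObjectId(), **d} for d in dicts]

ratings = with_ids([{'place': 1, 'player': 'Some Player Name'},
                    {'place': 2, 'player': 'Some Player Name'}])
# then pass `ratings` as the value of "$set": {"player": ratings}
```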
|
<python><mongodb><pymongo>
|
2023-03-15 21:15:21
| 1
| 1,562
|
Tobias
|
75,750,025
| 6,676,101
|
Inside of `__new__`, why is `cls` required instead of `args[0]`?
|
<p>I have two almost identical implementations of the <code>__new__</code> method for a class.</p>
<p>However, in one case, there is an error message.</p>
<p>In the other case, it works fine.</p>
<pre class="lang-python prettyprint-override"><code>class Klass1():
# causes RuntimeError: super(): no arguments
def __new__(*args):
obj = super().__new__(args[0])
return obj
class Klass2():
# This code here works just fine
def __new__(cls, *args):
obj = super().__new__(cls)
return obj
instance1 = Klass1()
instance2 = Klass2()
</code></pre>
<p>Why would it matter whether we write <code>cls</code> or <code>args[0]</code>?</p>
<p>What is causing the <code>RuntimeError</code> exception to be raised?</p>
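The short version, as I understand CPython's behavior: zero-argument <code>super()</code> is filled in by the compiler from the enclosing class's <code>__class__</code> cell plus the function's first <em>named</em> parameter; with a bare <code>*args</code> there is no first-parameter slot, so <code>super()</code> raises <code>RuntimeError: super(): no arguments</code>. Passing the arguments explicitly sidesteps the implicit lookup:

```python
class Klass1:
    def __new__(*args):
        # explicit two-argument super() doesn't rely on the implicit
        # first-parameter lookup, so it works even with only *args
        obj = super(Klass1, args[0]).__new__(args[0])
        return obj

instance1 = Klass1()
```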
|
<python><python-3.x><constructor><constructor-overloading>
|
2023-03-15 20:59:51
| 0
| 4,700
|
Toothpick Anemone
|
75,749,945
| 506,824
|
Why are breakpoints in site-packages code disabled when I start debugger?
|
<pre><code>MacOS Monterey
Python 3.9.13
VS Code 1.76.2
pytest==7.2.0
</code></pre>
<p>I'm running from a virtualenv. When I debug one of my unit tests and try to step into a library function, it doesn't step in, it just runs to completion. Explicitly setting breakpoints in the library code has no effect. I have justMyCode set to false in launch.json:</p>
<pre><code>{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"justMyCode": false
}
]
}
</code></pre>
<p>Per <a href="https://stackoverflow.com/questions/57373359">Set breakpoint in imported python module in vs code</a>, I tried adding site-packages to my workspace, but that didn't make any difference.</p>
<p>I can see the breakpoints set in the library code: <a href="https://i.sstatic.net/Z8SNj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z8SNj.png" alt="breakpoints set" /></a></p>
<p>As soon as I do "Debug test", the breakpoints in the library code get disabled: <a href="https://i.sstatic.net/gCfm6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gCfm6.png" alt="disabled breakpoints" /></a></p>
|
<python><visual-studio-code><vscode-debugger>
|
2023-03-15 20:49:44
| 1
| 2,177
|
Roy Smith
|
75,749,862
| 3,826,115
|
Using a checkbox widget in Holoviews to toggle visibility of Point plot
|
<p>I have the following code which gives me a scatter plot of two groups of points:</p>
<pre><code>import pandas as pd
import holoviews as hv
hv.extension('bokeh')
df1 = pd.DataFrame(data = {'x':range(1,5), 'y':range(1,5)})
df2 = pd.DataFrame(data = {'x':range(5,10), 'y':range(5,10)})
p1 = hv.Points(df1, label = 'a').opts(marker = 's')
p2 = hv.Points(df2, label = 'b').opts(marker = 'o')
options = hv.opts.Points(size = 10, show_legend = True)
(p1*p2).opts(options)
</code></pre>
<p>I want to add checkboxes to toggle the visibility of the point groups. I know I can add the checkboxes like this:</p>
<pre><code>a_checkbox = pn.widgets.Checkbox(name='a', value=True)
b_checkbox = pn.widgets.Checkbox(name='b', value=True)
pn.Column(a_checkbox, b_checkbox, ((p1*p2).opts(options)))
</code></pre>
<p>But I'm not sure how to make them interactive. Can anyone help with this?</p>
<p>Thanks!</p>
|
<python><bokeh><holoviews>
|
2023-03-15 20:40:18
| 1
| 1,533
|
hm8
|