| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string) |
|---|---|---|---|---|---|---|---|---|
77,642,624
| 247,542
|
How to add translation support to Django constants?
|
<p>I have a Django application with some dictionaries defined in <code>constants.py</code>.</p>
<p>The values in the dictionary represent user-readable labels, and I want to add translation support, so Django's i18n feature will translate them based on locale.</p>
<p>I thus added to the top of my <code>constants.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from django.utils.translation import gettext_lazy as _
</code></pre>
<p>and then applied it like this:</p>
<pre class="lang-py prettyprint-override"><code>{
"code": _("some name to be translated"),
# ....
}
</code></pre>
<p>This mostly seems to work, but I'm running into problems with Celery and multiprocessing support, and anything that tries to import my models in anything but the most vanilla manner.</p>
<p>Specifically, when trying to do any kind of multiprocessing, Django raises this error:</p>
<p><code>django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.</code></p>
<p>which itself is caused by a different error:</p>
<p><code>django.core.exceptions.AppRegistryNotReady: The translation infrastructure cannot be initialized before the apps registry is ready. Check that you don't make non-lazy gettext calls at import time.</code></p>
<p>The traceback for the latter starts at my use of <code>gettext_lazy</code> in <code>constants.py</code>.</p>
<p>Clearly, something I'm doing is not copacetic.</p>
<p>Is there an accepted best practice for applying translation support to constants in Django?</p>
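<p>As far as I understand, <code>gettext_lazy</code> itself defers translation; this traceback usually appears when something coerces the lazy proxy to a string at import time (sorting, joining, formatting). A framework-free sketch of that deferral idea, with purely illustrative names (none of these are Django APIs):</p>

```python
# Framework-free sketch of the "lazy translation" idea behind gettext_lazy:
# defer the catalog lookup until the value is rendered, not at import time.
# TRANSLATIONS, _LazyString and _lazy are illustrative stand-ins, not Django APIs.

TRANSLATIONS = {}  # filled later, e.g. once the "app registry" is ready

class _LazyString:
    """Proxy that looks up its translation only when coerced to str."""
    def __init__(self, msgid):
        self.msgid = msgid
    def __str__(self):
        # The lookup happens here, at render time -- not at import time.
        return TRANSLATIONS.get(self.msgid, self.msgid)

def _lazy(msgid):
    return _LazyString(msgid)

# Module-level "constants" are safe: nothing is translated at import.
STATUS_LABELS = {"code": _lazy("some name to be translated")}

# Danger: any str() coercion in module scope (e.g. sorting labels by str)
# would force the lookup before the catalog exists.

# Later, once the catalog is loaded, rendering works:
TRANSLATIONS["some name to be translated"] = "un nom à traduire"
```

<p>If that model is right, the thing to hunt for is not the dictionary itself but whatever consumes it at import time and forces the proxies to strings.</p>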
|
<python><django><django-i18n>
|
2023-12-11 22:03:58
| 2
| 65,489
|
Cerin
|
77,642,342
| 9,457,900
|
unable to call blpapi function (InstrumentsListRequest) using xbbg
|
<p>I am trying to run the blpapi instruments example using xbbg:</p>
<pre><code>from xbbg.core import conn, process
from xbbg import blp
from datetime import date
def allInstruments(): # Return all govts with the given ticker, matured or not
req = process.create_request(service='//blp/instruments', request='instrumentListRequest')
req.set('query', "Dhaka")
req.set('maxResults', 10)
def _process_instruments(msg): # Process the response
print(msg)
pass
conn.send_request(request=req)
processed_elements_list = list(process.rec_events(func=_process_instruments))
for i in processed_elements_list:
print(i)
allInstruments()
</code></pre>
<p>It gives me the following error, but I still get a result:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
h:\tanjin-work\xbbg\Examples.ipynb Cell 45 line 2
18 for i in processed_elements_list:
19 print(i)
---> 21 allGovts()
h:\tanjin-work\xbbg\Examples.ipynb Cell 45 line 1
14 conn.send_request(request=req)
16 # Process events and get the list of elements
---> 17 processed_elements_list = list(process.rec_events(func=_process_instruments))
19 # Iterate over the processed elements
20 for i in processed_elements_list:
File h:\tanjin-work\xbbg\xbbg\core\process.py:197, in rec_events(func, **kwargs)
195 if ev.eventType() in responses:
196 for msg in ev:
--> 197 for r in func(msg=msg, **kwargs):
198 yield r
199 if ev.eventType() == blpapi.Event.RESPONSE:
TypeError: 'NoneType' object is not iterable
</code></pre>
<p>Result snapshot:</p>
<pre><code>CID: {[ valueType=AUTOGEN classId=0 value=65 ]}
RequestId: 2838b8e0-83d5-4fb7-855f-d2122184f5c2
InstrumentListResponse = {
results[] = {
results = {
security = "MBL BD<equity>"
description = "Marico Bangladesh Ltd (Dhaka)"
}
results = {
security = "BATBC BD<equity>"
description = "British American Tobacco Bangladesh Co Ltd (Dhaka)"
}
results = {
security = "DSEX<index>"
description = "Bangladesh Dhaka Stock Exchange Broad Index"
}
results = {
security = "BRAC BD<equity>"
description = "BRAC Bank PLC (Dhaka)"
}
results = {
security = "SQUARE BD<equity>"
description = "Square Pharmaceuticals PLC (Dhaka)"
}
.....
</code></pre>
<p>It shows all the data, so the <code>maxResults</code> value is apparently not being applied.</p>
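<p>The traceback points at <code>for r in func(msg=msg, **kwargs)</code> inside <code>rec_events</code>: the handler's return value is iterated, and a handler that only prints returns <code>None</code>. A library-free sketch of that mechanism (<code>rec_events_like</code> is a stand-in, not the real xbbg function):</p>

```python
# Library-free sketch of the TypeError: rec_events iterates over the
# handler's return value (`for r in func(msg=...)`, process.py line 197).
# A handler that only prints implicitly returns None, hence
# "'NoneType' object is not iterable". A generator handler fixes this.

def rec_events_like(func, messages):
    out = []
    for msg in messages:
        for r in func(msg=msg):   # mirrors the failing loop
            out.append(r)
    return out

def printing_handler(msg):
    print(msg)                    # like _process_instruments: returns None

def yielding_handler(msg):
    print(msg)
    yield msg.upper()             # a generator is iterable, so the loop works

try:
    rec_events_like(printing_handler, ["a"])
    raised = False
except TypeError:
    raised = True

results = rec_events_like(yielding_handler, ["a", "b"])
```

<p>So one guess is that making <code>_process_instruments</code> yield its processed elements (instead of returning <code>None</code>) would remove the error without changing the printed output.</p>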
|
<python><bloomberg><quantitative-finance><blpapi>
|
2023-12-11 20:54:32
| 1
| 2,644
|
Tanjin Alam
|
77,642,220
| 706,354
|
Pandas: how do I restart expanding function with each new day in my time-series?
|
<p>I am using the expanding method on my dataset. I know how to use it, for example:</p>
<pre><code>data["someColumn"].expanding().mean()
</code></pre>
<p>The challenge is that my dataset contains time series, and I need to "restart" the expanding method when a new day starts. That is, when a new day starts, expanding should treat the first row of that day as the only available data point, the second row as the second data point, and so on until the day ends.</p>
<p>How can I achieve it?</p>
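<p>One common pattern for "restart per day" is to group on the calendar date before expanding, so each day gets its own window. A small sketch (column and index names are illustrative):</p>

```python
import pandas as pd

# Sketch: restart an expanding mean at each new day by grouping on the
# calendar date of a DatetimeIndex. Column/index names are illustrative.
idx = pd.to_datetime([
    "2023-12-11 09:00", "2023-12-11 10:00", "2023-12-11 11:00",
    "2023-12-12 09:00", "2023-12-12 10:00",
])
data = pd.DataFrame({"someColumn": [1.0, 3.0, 5.0, 10.0, 20.0]}, index=idx)

# groupby(...).expanding() computes the expanding mean within each day only.
per_day = (
    data.groupby(data.index.date)["someColumn"]
        .expanding()
        .mean()
        .reset_index(level=0, drop=True)  # drop the date group level
)
```

<p>Here the first day yields 1, 2, 3 (expanding means of 1, 3, 5) and the second day restarts at 10, then 15.</p>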
|
<python><pandas>
|
2023-12-11 20:19:45
| 1
| 3,505
|
MiamiBeach
|
77,641,958
| 2,386,605
|
How to close sessions on Postgresql with sqlalchemy?
|
<p>In this configuration</p>
<pre><code>from sqlalchemy.ext.asyncio import AsyncSession, create_async_engine, async_sessionmaker
db_engine = create_async_engine('<DATABASE_URL>', echo=False, future=True)
async_session = async_sessionmaker(db_engine, class_=AsyncSession, expire_on_commit=False)
async def get_session() -> AsyncSession:
async with async_session() as session:
yield session
await session.close()
</code></pre>
<p>I can connect to the Postgresql DB, but somehow several connections stay open. Is that an issue, and how can I fix it?</p>
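<p>If I understand connection pooling correctly, "closed" sessions check their connection back into the engine's pool rather than closing the TCP connection, so idle open connections are expected. A plain-Python sketch of that behaviour (these classes are illustrative, not SQLAlchemy APIs):</p>

```python
# Plain-Python sketch of connection pooling, to illustrate why closed
# sessions can still leave connections open: "close" checks the connection
# back into the pool; only disposing the pool really closes it.
# FakeConnection/FakePool are illustrative stand-ins, not SQLAlchemy APIs.

class FakeConnection:
    def __init__(self):
        self.open = True
    def close(self):
        self.open = False

class FakePool:
    def __init__(self):
        self._idle = []
    def checkout(self):
        return self._idle.pop() if self._idle else FakeConnection()
    def checkin(self, conn):
        # session.close() ends up here: the connection stays open, idle
        self._idle.append(conn)
    def dispose(self):
        # engine disposal ends up here: idle connections are really closed
        for c in self._idle:
            c.close()
        self._idle.clear()

pool = FakePool()
conn = pool.checkout()
pool.checkin(conn)          # "session closed", connection still open
still_open = conn.open
pool.dispose()              # pool disposed, connection actually closed
now_open = conn.open
```

<p>In SQLAlchemy terms, the counterparts would be <code>await db_engine.dispose()</code> at shutdown, or tuning the engine's pool size if the number of idle connections is the concern.</p>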
|
<python><postgresql><sqlalchemy><sqlmodel>
|
2023-12-11 19:30:15
| 1
| 879
|
tobias
|
77,641,923
| 5,235,665
|
Python GNUPG Unknown system error when loading private key
|
<p><strong>Please note:</strong> Even though I mention Azure Databricks here, I believe this is a Python/GNUPG problem at heart, and as such, can be answered by anybody with Python/GNUPG encryption experience.</p>
<hr />
<p>I have the following Python code in my Azure Databricks notebook:</p>
<pre><code>%python
from pyspark.sql import SparkSession
from pyspark.sql.functions import input_file_name, lit
from pyspark.sql.types import StringType
import os
import gnupg
from azure.storage.blob import BlobServiceClient, BlobPrefix
import hashlib
from pyspark.sql import Row
from pyspark.sql.functions import collect_list
# Initialize Spark session
spark = SparkSession.builder.appName("DecryptData").getOrCreate()
storage_account_name = "mycontainer"
storage_account_key = "<redacted>"
spark.conf.set(f"fs.azure.account.key.{storage_account_name}.blob.core.windows.net", storage_account_key)
clientsDF = spark.read.table("myapp.internal.Clients")
row = clientsDF.first()
clientsLabel = row["Label"]
encryptedFilesSource = f"wasbs://{clientsLabel}@mycontainer.blob.core.windows.net/data/*"
decryptedDF = spark.sql(f"""
SELECT
REVERSE(SUBSTRING_INDEX(REVERSE(input_file_name()), '/', 1)) AS FileName,
REPLACE(value, '"', '[Q]') AS FileData,
'{clientsLabel}' as ClientLabel
FROM
read_files(
'{encryptedFilesSource}',
format => 'text',
wholeText => true
)
""")
decryptedDF.show()
decryptedDF = decryptedDF.select("FileData");
encryptedData = decryptedDF.first()['FileData']
def decrypt_pgp_data(encrypted_data, private_key_data, passphrase):
# Initialize GPG object
gpg = gnupg.GPG()
print("Loading private key...")
# Load private key
private_key = gpg.import_keys(private_key_data)
if private_key.count == 1:
keyid = private_key.fingerprints[0]
gpg.trust_keys(keyid, 'TRUST_ULTIMATE')
print("Private key loaded, attempting decryption...")
try:
decrypted_data = gpg.decrypt(encrypted_data, passphrase=passphrase, always_trust=True)
except Exception as e:
print("Error during decryption:", e)
return
print("Decryption finished and decrypted_data is of type: " + str(type(decrypted_data)))
if decrypted_data.ok:
print("Decryption successful!")
print("Decrypted Data:")
print(decrypted_data.data.decode())
else:
print("Decryption failed.")
print("Status:", decrypted_data.status)
print("Error:", decrypted_data.stderr)
print("Trust Level:", decrypted_data.trust_text)
print("Valid:", decrypted_data.valid)
private_key_data = '''-----BEGIN PGP PRIVATE KEY BLOCK-----
<redacted>
-----END PGP PRIVATE KEY BLOCK-----'''
passphrase = '<redacted>'
encrypted_data = b'encryptedData'
decrypt_pgp_data(encrypted_data, private_key_data, passphrase)
</code></pre>
<p>As you can see, I am reading PGP-encrypted files from an Azure Blob Storage account container into a Dataframe, and then sending the first row (I'll change this notebook to work on all rows later) through a decrypter function that uses GNUPG.</p>
<p>When this runs it gives me the following output in the driver logs:</p>
<pre><code>+--------------------+--------------------+-------+
| FileName| FileData| ClientLabel |
+--------------------+--------------------+-------+
| fizz.pgp|���mIj�h�#{... | acme|
+--------------------+--------------------+-------+
Decrypting: <redacted>
Loading private key...
WARNING:gnupg:gpg returned a non-zero error code: 2
Private key loaded, attempting decryption...
Decryption finished and decrypted_data is of type: <class 'gnupg.Crypt'>
Decryption failed.
Status: no data was provided
Error: gpg: no valid OpenPGP data found.
[GNUPG:] NODATA 1
[GNUPG:] NODATA 2
[GNUPG:] FAILURE decrypt 4294967295
gpg: decrypt_message failed: Unknown system error
Trust Level: None
Valid: False
</code></pre>
<p>Can anyone spot why decryption is failing, or help me troubleshoot it to pin down the culprit? Attaching a debugger is not an option since this is happening inside a notebook. I'm thinking:</p>
<ol>
<li>Perhaps I'm using the GNUPG API completely wrong</li>
<li>Perhaps there's something malformed or improperly formatted with the private key I'm reading in from an in-memory string variable</li>
<li>Perhaps the encrypted data is malformed (I've seen some internet rumblings of endianness causing this type of error)</li>
<li>Maybe GNUPG isn't trusting my private key for some reason</li>
</ol>
<p>Can anyone spot where I'm going awry?</p>
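<p>One more possibility (an assumption, not verified against this setup): the <code>���</code> in the <code>FileData</code> column suggests the binary PGP files were read through a text decoder, and a lossy UTF-8 decode destroys the OpenPGP packets before gpg ever sees them, which would explain <code>NODATA</code>. A stdlib-only sketch of why that round trip is not reversible:</p>

```python
# Sketch: why reading binary data through a text decoder corrupts it.
# Binary OpenPGP bytes are generally not valid UTF-8; decoding with
# replacement characters (what a text reader effectively does) is not
# reversible, so what reaches gpg is no longer OpenPGP data.

binary_pgp_like = bytes([0x85, 0x01, 0x0C, 0x03, 0xFF, 0xFE])  # arbitrary non-UTF-8 bytes

as_text = binary_pgp_like.decode("utf-8", errors="replace")   # lossy text read
round_tripped = as_text.encode("utf-8")

corrupted = round_tripped != binary_pgp_like
```

<p>If that is the cause, reading the files as raw bytes (for example via a binary file reader rather than <code>format => 'text'</code>) before handing them to the decrypter should be the fix.</p>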
|
<python><encryption><azure-databricks><gnupg><pgp>
|
2023-12-11 19:24:18
| 2
| 845
|
hotmeatballsoup
|
77,641,455
| 1,467,552
|
How to limit memory usage while scanning parquet from S3 and join using Polars?
|
<p>As a follow-up question to <a href="https://stackoverflow.com/questions/76770890/using-scan-parquet-and-scan-pyarrow-dataset-on-s3-loads-entire-dataset-into-memo">this question</a>, I would like to find a way to limit the memory usage when scanning, filtering and joining a large dataframe saved in S3 cloud, with a tiny local dataframe.</p>
<p>Suppose my code looks like the following:</p>
<pre><code>import pyarrow.dataset as ds
import polars as pl
import s3fs
# S3 credentials
secret = ...
key = ...
endpoint_url = ...
# some tiny dataframe
sellers_df = pl.DataFrame({'seller_id': ['0332649223', '0192491683', '0336435426']})
# scan, filter and join with huge dataframe on S3
fs = s3fs.S3FileSystem(endpoint_url=endpoint_url, key=key, secret=secret)
dataset = ds.dataset(f'{s3_bucket}/benchmark_dt/dt_partitions', filesystem=fs, partitioning='hive')
scan_df = pl.scan_pyarrow_dataset(dataset) \
.filter(pl.col('dt') >= '2023-05-17') \
.filter(pl.col('dt') <= '2023-10-18') \
.join(sellers_df.lazy(), on='seller_id', how='inner').collect()
</code></pre>
<p>And my parquet files layout, looks like the following:</p>
<pre><code>-- dt_partitions
-- dt=2023-06-09
-- data.parquet
-- dt=2023-06-10
-- data.parquet
-- dt=2023-06-11
-- data.parquet
-- dt=2023-06-12
-- data.parquet
...
</code></pre>
<p>When running the code I notice that Polars first loads the entire dataset (for the given dates) into memory, and only <strong>afterwards</strong> performs the join.<br />
This causes me severe memory problems.</p>
<p>Is there any way to perform the join in pre-defined batches/streaming to save memory?</p>
<p>Thanks in advance.</p>
<p><strong>Edit:</strong></p>
<p>This is the explain plan (you can see no streaming applied):</p>
<pre><code>INNER JOIN:
LEFT PLAN ON: [col("seller_id")]
PYTHON SCAN
PROJECT */3 COLUMNS
SELECTION: ((pa.compute.field('dt') >= '2023-10-17') & (pa.compute.field('dt') <= '2023-10-18'))
RIGHT PLAN ON: [col("seller_id")]
DF ["seller_id"]; PROJECT */1 COLUMNS; SELECTION: "None"
END INNER JOIN
INNER JOIN:
LEFT PLAN ON: [col("seller_id")]
</code></pre>
<p>However, when using <code>is_in</code>:</p>
<pre><code>PYTHON SCAN
PROJECT */3 COLUMNS
SELECTION: ((pa.compute.field('seller_id')).isin(["0332649223","0192491683","0336435426","3628932648","5241104373","1414317462","4028203396","6445502649","1131069079","9027417785","6509736571","9214134975","7722199293","1617136891","8786329949","8260764409","5103636478","3444202168","9066806312","3961998994","7345385102","2756955097","7038039666","0148664533","5120870693","8843132164","6424549457","8242686761","3148647530","8329075741","0803877447","2228154163","8661602117","2544985488","3241983296","4756084729","5317176976","0658022895","3802149808","2368104663","0835399702","0806598632","9753553141","3473629988","1145080603","5731199445","7622500016","4980968502","6713967792","8469333969"]) & ((pa.compute.field('dt') >= '2023-10-17') & (pa.compute.field('dt') <= '2023-10-18')))
</code></pre>
<p>Followed @Dean MacGregor answer, added <code>os.environ['AWS_ALLOW_HTTP'] = 'true'</code> and it worked:</p>
<pre><code>--- STREAMING
INNER JOIN:
LEFT PLAN ON: [col("seller_id")]
Parquet SCAN s3://test-bucket/benchmark_dt/dt_partitions/dt=2023-10-17/part-0.parquet
PROJECT */3 COLUMNS
RIGHT PLAN ON: [col("seller_id")]
DF ["seller_id"]; PROJECT */1 COLUMNS; SELECTION: "None"
END INNER JOIN --- END STREAMING
</code></pre>
|
<python><python-polars><pyarrow>
|
2023-12-11 17:52:27
| 1
| 1,170
|
barak1412
|
77,641,146
| 3,737,186
|
Extract response data from custom LLM in langchain through callbacks?
|
<p>When writing an integration for <a href="https://python.langchain.com/docs/modules/model_io/llms/custom_llm" rel="nofollow noreferrer">a custom LLM</a> in langchain, is it possible to add support for collecting information from the actual requests by means of the <a href="https://python.langchain.com/docs/modules/callbacks/" rel="nofollow noreferrer">callbacks</a>, as they exist for example for OpenAI through <code>get_openai_callback</code> (see <a href="https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.manager.get_openai_callback.html" rel="nofollow noreferrer">docs</a> and <a href="https://python.langchain.com/docs/modules/model_io/llms/token_usage_tracking" rel="nofollow noreferrer">this example about tracking token usage</a>)?</p>
<p>Concretely I would like to be able to extract response data like <code>stop_reason</code> from the Watsonx granite model response. I am using the <code>ibm-watson-machine-learning</code> package which provides the <code>WatsonxLLM</code> class that implements <code>LLM</code> from langchain.</p>
<p>I have without success tried to replicate something similar as done <a href="https://github.com/langchain-ai/langchain/blob/c0f4b95aa9961724ab4569049b4c3bc12ebbacfc/libs/langchain/langchain/callbacks/manager.py#L62" rel="nofollow noreferrer">for openai inside the langchain library</a>:</p>
<pre><code>class _WatsonXCallbackHandler(BaseCallbackHandler):
def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
"""Run when LLM ends running."""
if response.llm_output is None:
return None
def __copy__(self) -> "_WatsonXCallbackHandler":
"""Return a copy of the callback handler."""
return self
def __deepcopy__(self, memo: Any) -> "_WatsonXCallbackHandler":
"""Return a deep copy of the callback handler."""
return self
watsonx_callback_var: ContextVar[Optional[_WatsonXCallbackHandler]] = ContextVar(
"watsonx_callback", default=None
)
class MyServiceIBMWatsonx(MyService):
@contextmanager
def get_langchain_callback(self) -> Generator[_WatsonXCallbackHandler, None, None]:
cb = _WatsonXCallbackHandler()
watsonx_callback_var.set(cb)
yield cb
watsonx_callback_var.set(None)
</code></pre>
<p>This works to the point that <code>on_llm_end</code> is called but <code>response</code> is <code>None</code>. I suspect that something is missing, either something that requires a further extension or addition to the custom LLM or (ideally) something else somewhere (because I cannot change this custom LLM) that allows translating the actual response to the <code>LLMResult</code> data structure.</p>
<p>Or maybe there is a completely different approach I should try without using callbacks at all?</p>
<p>Mind, this question is not specific to Watsonx but is interesting generally for custom LLMs.</p>
|
<python><langchain><py-langchain>
|
2023-12-11 16:50:44
| 0
| 3,508
|
Sjoerd222888
|
77,641,145
| 11,517,893
|
How to prevent race condition while working with sessions?
|
<p>I’m working on an e-commerce website with a cart and products. A product’s primary key is added to the user’s session data in a dictionary <code>'cart_content'</code>. This dictionary holds the product’s primary key as key and the amount of the given product as value.</p>
<p>I successfully reproduced a race condition bug by simultaneously adding a product twice, with an amount big enough that a single addition sells out the product (e.g. 3 products left and adding 3 twice). Both requests increment the cart, but there is not enough stock (in our example, 3 left but 6 in the cart).</p>
<p>How can I prevent race conditions like the example above, including in a multi-user case? Is there, for example, some sort of locking mechanism that prevents <code>add_to_cart()</code> from being executed concurrently?</p>
<p><strong>Edit: I switched from SQLite to PostgreSQL because I wanted to test <a href="https://docs.djangoproject.com/en/5.0/ref/models/querysets/#select-for-update" rel="nofollow noreferrer"><code>select_for_update()</code></a></strong>.</p>
<p>core/models.py :</p>
<pre class="lang-python prettyprint-override"><code>class Product(models.Model):
…
number_in_stock = models.IntegerField(default=0)
@property
def number_on_hold(self):
result = 0
for s in Session.objects.all():
amount = s.get_decoded().get('cart_content', {}).get(str(self.pk))
if amount is not None:
result += int(amount)
return result
…
</code></pre>
<p>cart/views.py :</p>
<pre class="lang-python prettyprint-override"><code>def add_to_cart(request):
if (request.method == "POST"):
pk = request.POST.get('pk', None)
amount = int(request.POST.get('amount', 0))
if pk and amount:
p = Product.objects.get(pk=pk)
if amount > p.number_in_stock - p.number_on_hold:
return HttpResponse('1')
if not request.session.get('cart_content', None):
request.session['cart_content'] = {}
if request.session['cart_content'].get(pk, None):
request.session['cart_content'][pk] += amount
else:
request.session['cart_content'][pk] = amount
request.session.modified = True
return HttpResponse('0')
return HttpResponse('', status=404)
</code></pre>
<p>cart/urls.py:</p>
<pre class="lang-python prettyprint-override"><code>urlpatterns = [
…
path("add-to-cart", views.add_to_cart, name="cart-add-to-cart"),
…
]
</code></pre>
<p>cart/templates/cart/html/atoms/add_to_cart.html :</p>
<pre class="lang-html prettyprint-override"><code><div class="form-element add-to-cart-amount">
<label for="addToCartAmount{{ product_pk }}"> {% translate "Amount" %} : </label>
<input type="number" id="addToCartAmount{{ product_pk }}" />
</div>
<div class="form-element add-to-cart">
<button class="btn btn-primary button-add-to-cart" data-product-pk="{{ product_pk }}" data-href="{% url 'cart-add-to-cart' %}"><span> {% translate "Add to cart" %} </span></button>
</div>
</code></pre>
<p>cart/static/cart/js/main.js:</p>
<pre class="lang-javascript prettyprint-override"><code>$(".button-add-to-cart").click(function(event) {
event.preventDefault();
let product_pk = $(this).data("productPk");
let amount = $(this).parent().parent().find("#addToCartAmount" + product_pk).val();
$.ajax({
url: $(this).data("href"),
method: 'POST',
data: {
pk: product_pk,
amount: amount
},
success: function(result) {
switch (result) {
case '1':
alert(gettext('Amount exceeded'));
break;
case '0':
alert(interpolate(gettext('Successfully added %s items to the cart.'), [amount]))
break;
default:
alert(gettext('Unknown error'));
}
}
});
});
</code></pre>
<p>The JavaScript used to reproduce the race condition; it doesn’t always work the first time, so repeat two or three times until you get the behaviour I mentioned:</p>
<pre class="lang-javascript prettyprint-override"><code>async function test() {
document.getElementsByClassName('button-add-to-cart')[0].click();
}
test(); test();
</code></pre>
|
<javascript><python><django><race-condition>
|
2023-12-11 16:50:36
| 2
| 363
|
Zatigem
|
77,641,139
| 22,466,650
|
How to parse an html table with a fixed shape?
|
<p>I receive an HTML table that always has the same shape; only the values differ each time.</p>
<pre><code>html = '''
<table align="center">
<tr>
<th>Name</th>
<td>NAME A</td>
<th>Status</th>
<td class="IN PROGRESS">IN PROGRESS</td>
</tr>
<tr>
<th>Category</th>
<td COLSPAN="3">CATEGORY A</td>
</tr>
<tr>
<th>Creation date</th>
<td>13/01/23 23:00</td>
<th>End date</th>
<td></td>
</tr>
</table>
'''
</code></pre>
<p>I need to convert it to a dataframe, but pandas is giving me a weird format:</p>
<pre><code>print(pd.read_html(html)[0])
0 1 2 3
0 Name NAME A Status IN PROGRESS
1 Category CATEGORY A CATEGORY A CATEGORY A
2 Creation date 13/01/23 23:00 End date NaN
</code></pre>
<p>I feel like we need to use BeautifulSoup, but I'm not sure how:</p>
<pre><code>from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
</code></pre>
<p>Can you help me with that?</p>
<p>My expected output is this dataframe:</p>
<pre><code>     Name    Category       Status   Creation date End date
0  NAME A  CATEGORY A  IN PROGRESS  13/01/23 23:00      NaN
</code></pre>
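<p>Since the shape is fixed, one option that needs no third-party parser at all is pairing each <code>&lt;th&gt;</code> with the <code>&lt;td&gt;</code> that follows it using the stdlib <code>html.parser</code>. A sketch (the class name is illustrative):</p>

```python
# Stdlib sketch: for a fixed-shape key/value table, pair each <th> header
# with the <td> that follows it, producing one dict = one dataframe row.
from html.parser import HTMLParser

class KeyValueTable(HTMLParser):
    def __init__(self):
        super().__init__()
        self.pairs = {}
        self._mode = None      # currently inside 'th' or 'td'
        self._key = None
        self._buf = []
    def handle_starttag(self, tag, attrs):
        if tag in ("th", "td"):
            self._mode = tag
            self._buf = []
    def handle_data(self, data):
        if self._mode:
            self._buf.append(data)
    def handle_endtag(self, tag):
        text = "".join(self._buf).strip()
        if tag == "th":
            self._key = text               # remember the header...
        elif tag == "td" and self._key is not None:
            self.pairs[self._key] = text if text else None  # ...bind its value
        if tag in ("th", "td"):
            self._mode = None

html = '''
<table>
<tr><th>Name</th><td>NAME A</td><th>Status</th><td>IN PROGRESS</td></tr>
<tr><th>Category</th><td COLSPAN="3">CATEGORY A</td></tr>
<tr><th>Creation date</th><td>13/01/23 23:00</td><th>End date</th><td></td></tr>
</table>
'''
parser = KeyValueTable()
parser.feed(html)
```

<p>From there, <code>pd.DataFrame([parser.pairs])</code> should give the one-row dataframe with the headers as column names. The same pairing logic is straightforward to express with BeautifulSoup as well.</p>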
|
<python><web-scraping><beautifulsoup><html-table><python-requests>
|
2023-12-11 16:49:14
| 2
| 1,085
|
VERBOSE
|
77,641,087
| 2,386,113
|
FFT values computed using Python and MATLAB don't match?
|
<p>I have a super simple test code to compute FFT in MATLAB, which I am trying to convert to Python but the computed values do not match.</p>
<p><strong>MATLAB Code:</strong></p>
<pre class="lang-matlab prettyprint-override"><code>rect=zeros(100,1);
ffrect=zeros(100,1);
for j=45:55
rect(j,1)=1;
end
frect=fft(rect);
</code></pre>
<p><strong>Python Code</strong></p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.fft import fft, ifft, fftshift, ifftshift
rect = np.zeros((100, 1))
for j in range(44, 55):
rect[j] = 1
frect = fft(rect)
</code></pre>
<p><strong>Problem:</strong></p>
<p>The values of <code>frect</code> computed using MATLAB and Python do not match (or do not even look similar). The values computed by Python have only zeros for the complex component and 1s for the real part.</p>
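<p>A likely culprit (an educated guess, matching the described symptom): <code>rect</code> is shape <code>(100, 1)</code>, and both <code>scipy.fft.fft</code> and <code>numpy.fft.fft</code> transform along the <em>last</em> axis by default, which here has length 1, so each "FFT" is the identity and the output equals the input. MATLAB's <code>fft</code> works column-wise on a column vector. A sketch of both the symptom and the fix:</p>

```python
import numpy as np
from numpy.fft import fft

# Sketch: fft along the last axis of a (100, 1) array is 100 independent
# length-1 FFTs, i.e. the identity -- matching the reported symptom.
# A 1-D array (or axis=0) reproduces MATLAB's column-wise fft(rect).
rect_2d = np.zeros((100, 1))
rect_2d[44:55] = 1            # same elements as MATLAB's rect(45:55)

identity_out = fft(rect_2d)   # default axis=-1 has length 1
is_identity = np.allclose(identity_out, rect_2d)

rect_1d = np.zeros(100)
rect_1d[44:55] = 1
frect = fft(rect_1d)          # matches MATLAB's fft of the column vector
same_as_axis0 = np.allclose(frect, fft(rect_2d, axis=0).ravel())
```

<p>So either flattening <code>rect</code> to 1-D or passing <code>axis=0</code> should make the Python values agree with MATLAB.</p>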
|
<python><matlab><fft>
|
2023-12-11 16:42:04
| 1
| 5,777
|
skm
|
77,640,899
| 15,453,560
|
Not seeing column names when reading csv from s3 in pandas
|
<p>I am using the following bit of code to read the iris dataset from an s3 bucket.</p>
<pre><code>import pandas as pd
import s3fs
s3_path = 's3://h2o-public-test-data/smalldata/iris/iris.csv'
s3 = s3fs.S3FileSystem(anon=True)
with s3.open(s3_path, 'rb') as f:
df = pd.read_csv(f, header = True)
</code></pre>
<p>However, the column names are just the contents of the first row of the dataset. How do I fix that?</p>
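<p>For reference, pandas' <code>header</code> parameter expects a row number (or <code>None</code>), not a boolean; <code>header=0</code>, which is also the default, takes the first row as the column names. A minimal self-contained sketch with inline data standing in for the S3 file:</p>

```python
import io
import pandas as pd

# Sketch: header is a row index, not a boolean. header=0 (the default)
# uses the first row as column names. The CSV text below is a stand-in
# for the file object returned by s3.open(...).
csv = io.StringIO("sepal_len,sepal_wid,species\n5.1,3.5,setosa\n4.9,3.0,setosa\n")
df = pd.read_csv(csv, header=0)
```

<p>So replacing <code>header = True</code> with <code>header=0</code> (or simply omitting the argument) should give proper column names.</p>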
|
<python><pandas><amazon-web-services><amazon-s3><read-csv>
|
2023-12-11 16:11:49
| 2
| 657
|
Muhammad Kamil
|
77,640,864
| 22,466,650
|
How to avoid redundant printings when using a decorator on nested functions?
|
<p>I have two functions <code>func1</code> and <code>func2</code> on which I'm applying a basic timer decorator <code>@time_elapsed</code>. The problem is that <code>func1</code> is called within <code>func2</code>, which causes its timing to be printed too.</p>
<p>How can I structure my code to apply the decorator only once to <code>func2</code> while still allowing <code>func1</code> to be called independently (with a decorator of course)?</p>
<p>Here is my code:</p>
<pre><code>import time
from functools import wraps
def time_elapsed(func):
@wraps(func)
def wrapper(*args, **kwargs):
start_time = time.time()
result = func(*args, **kwargs)
elapsed_time = time.time() - start_time
print(f'{func.__name__} took {elapsed_time:.2f} seconds.')
return result
return wrapper
@time_elapsed
def func1():
time.sleep(2)
@time_elapsed
def func2():
func1()
time.sleep(3)
</code></pre>
<p><strong>Current behaviour :</strong></p>
<pre><code>func1()
# func1 took 2.01 seconds.
</code></pre>
<pre><code>func2()
# func1 took 2.00 seconds.
# func2 took 5.01 seconds.
</code></pre>
<p><strong>Desired behaviour :</strong></p>
<pre><code>func1()
# func1 took 2.01 seconds.
</code></pre>
<pre><code>func2()
# func2 took 5.01 seconds.
</code></pre>
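<p>One way to get the desired behaviour with both functions still decorated is a re-entrancy depth counter, so that only the outermost decorated call reports. A sketch (short sleeps are used here just to keep the example fast; <code>reports</code> is an extra list kept alongside the prints purely to make the behaviour inspectable):</p>

```python
import time
from functools import wraps

# Sketch: a depth counter so only the outermost decorated call reports its
# timing. Inner decorated calls still run, but stay silent. Not thread-safe
# as written; a contextvars.ContextVar would be the threaded variant.
_depth = 0
reports = []   # mirrors the printed messages, for inspection

def time_elapsed(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        global _depth
        _depth += 1
        start = time.time()
        try:
            result = func(*args, **kwargs)
        finally:
            _depth -= 1
        if _depth == 0:  # we are the outermost decorated call
            msg = f'{func.__name__} took {time.time() - start:.2f} seconds.'
            reports.append(msg)
            print(msg)
        return result
    return wrapper

@time_elapsed
def func1():
    time.sleep(0.02)

@time_elapsed
def func2():
    func1()
    time.sleep(0.03)
```

<p>Calling <code>func2()</code> then prints only the <code>func2</code> line, while a direct <code>func1()</code> call still prints its own timing.</p>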
|
<python><decorator>
|
2023-12-11 16:07:17
| 2
| 1,085
|
VERBOSE
|
77,640,840
| 8,839,068
|
Catch RateLimitError in calls to OpenAI API with python and tenacity's @retry
|
<p>I am using python to query the OpenAI API.</p>
<p>I have queries in a list of dictionaries that contain the parameters for each query (a 'system_message' with the instruction and a 'user_message' with the query-specific input).</p>
<p>I define a function <code>complete_prompt_with_backoff</code> using <code>tenacity</code>'s <code>@retry</code> decorator, and two wrapper functions (for other reasons; this is not the required setup but the top-level function needs to take in the entire list of dictionaries).</p>
<p>I then run the following code</p>
<pre><code>from openai import AsyncOpenAI
from tenacity import retry, wait_random_exponential, stop_after_attempt
import asyncio
client_async = AsyncOpenAI(api_key="<api_key>", organization="<org_id>")
queries_list_of_dicts = 100 * [{'system_message': 'A description of the role. You are an assistant classifying input. [...]. Return the response as a JSON containing the numerical classification ("class") and the justification ("justification"). For example: {"class": 1, "justification": "this is a positive statement because"}', 'user_message': "Your input to classify is: 'The day was miserable and cold'"},
{'system_message': 'A description of the role. You are an assistant classifying input. [...]. Return the response as a JSON containing the numerical classification ("class") and the justification ("justification"). For example: {"class": 1, "justification": "this is a positive statement because"}', 'user_message': "Your input to classify is: 'The day was sunny and warm.'"}]
@retry(wait=wait_random_exponential(min=1, max=60), stop=stop_after_attempt(6))
async def complete_prompt_with_backoff(client, system_message: str, user_message: str, **kwargs) -> dict:
return await client.chat.completions.create(
messages=[{"role": "system", "content": system_message}, {"role": "user", "content": user_message, }],
model="gpt-3.5-turbo", )
async def run_compl_backoff_var_prompt_and_get_result_pass_through_args(**kwargs):
return await complete_prompt_with_backoff(**kwargs)
async def wrapper(queries_ls_dicts, **kwargs):
return await asyncio.gather(*[run_compl_backoff_var_prompt_and_get_result_pass_through_args(client=client_async, **q) for q in queries_ls_dicts])
results = asyncio.run(wrapper(queries_list_of_dicts))
for res in results:
print(f"\t{res}")
print("done")
</code></pre>
<p>This works until I increase the number of queries and run into the rate limit, throwing the below error. (The code above is in <code>this_file.py</code>; paths shortened.)</p>
<pre><code>Traceback (most recent call last):
File "[...]/.venv/lib/python3.11/site-packages/tenacity/_asyncio.py", line 50, in __call__
result = await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/this_file.py", line 21, in complete_prompt_with_backoff
return await client.chat.completions.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/.venv/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 1199, in create
return await self._post(
^^^^^^^^^^^^^^^^^
File "[...]/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 1482, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 1283, in request
return await self._request(
^^^^^^^^^^^^^^^^^^^^
File "[...]/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 1314, in _request
return await self._retry_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 1364, in _retry_request
return await self._request(
^^^^^^^^^^^^^^^^^^^^
File "[...]/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 1314, in _request
return await self._retry_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 1364, in _retry_request
return await self._request(
^^^^^^^^^^^^^^^^^^^^
File "[...]/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 1326, in _request
raise self._make_status_error_from_response(err.response) from None
openai.RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-3.5-turbo in organization org-jiWnW3BOAw31ibSIFFQ9riSc on requests per day (RPD): Limit 10000, Used 10000, Requested 1. Please try again in 8.64s. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'requests', 'param': None, 'code': 'rate_limit_exceeded'}}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "[...]/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/runpy.py", line 198, in _run_module_as_main
return _run_code(code, main_globals, None,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/runpy.py", line 88, in _run_code
exec(code, run_globals)
File "[...]/.vscode/extensions/ms-python.python-2023.20.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 39, in <module>
cli.main()
File "[...]/.vscode/extensions/ms-python.python-2023.20.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 430, in main
run()
File "[...]/.vscode/extensions/ms-python.python-2023.20.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "[...]/.vscode/extensions/ms-python.python-2023.20.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/.vscode/extensions/ms-python.python-2023.20.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "[...]/.vscode/extensions/ms-python.python-2023.20.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "[...]/this_file.py", line 31, in <module>
results = asyncio.run(wrapper(queries_list_of_dicts))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "[...]/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/python@3.11/3.11.6_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "this_file.py", line 29, in wrapper
return await asyncio.gather(*[run_compl_backoff_var_prompt_and_get_result_pass_through_args(client=client_async, **q) for q in queries_ls_dicts])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "this_file.py", line 26, in run_compl_backoff_var_prompt_and_get_result_pass_through_args
return await complete_prompt_with_backoff(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/.venv/lib/python3.11/site-packages/tenacity/_asyncio.py", line 88, in async_wrapped
return await fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/.venv/lib/python3.11/site-packages/tenacity/_asyncio.py", line 47, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "[...]/.venv/lib/python3.11/site-packages/tenacity/__init__.py", line 326, in iter
raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x105b38190 state=finished raised RateLimitError>]
</code></pre>
<p>Can someone point out to me how to better manage the rate limit? Maybe I'm also not using the <code>@retry</code> decorator appropriately or have an issue with <code>asyncio</code> that I'm not aware of.</p>
<p>Using the retry function is not a necessity, but I need to be able to submit a large number of prompts (2-3k) with rather long <code>'system_message'</code> and <code>'user_message'</code> components and have them all run through in one go.</p>
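<p>For illustration, one common pattern is to cap concurrency with a semaphore so only a bounded number of requests are in flight at once. This is a minimal sketch with a placeholder coroutine standing in for the actual OpenAI call (the function name and the sleep are assumptions, not part of the code above):</p>

```python
import asyncio

# Allow at most 10 requests in flight at any moment.
sem = asyncio.Semaphore(10)

async def complete_one(query):
    # Placeholder for the real API call; sleeps instead of hitting the API.
    async with sem:
        await asyncio.sleep(0.01)
        return f"result for {query}"

async def wrapper(queries):
    return await asyncio.gather(*(complete_one(q) for q in queries))

results = asyncio.run(wrapper(["a", "b", "c"]))
print(results)  # ['result for a', 'result for b', 'result for c']
```

With a rate-limited API, the semaphore bound (or an added per-request delay) would be tuned to stay under the provider's requests-per-minute limit.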
<p>Many thanks for any and all pointers.</p>
|
<python><python-3.x><async-await><python-asyncio><openai-api>
|
2023-12-11 16:03:25
| 1
| 4,240
|
Ivo
|
77,640,717
| 18,618,577
|
problem on sys.stdout used with the buffer
|
<p>In some old code on GitHub that I used to use, the devs seem to have planned a Python 3 version, but something doesn't work anymore. Here is the piece of code (I don't know if it's enough):</p>
<pre><code># ---------
# Specifics
# ---------
if is_py2:
    if is_py26:
        from logging import Handler

        class NullHandler(Handler):
            def emit(self, record):
                pass
        from ordereddict import OrderedDict
    else:
        from logging import NullHandler
        from collections import OrderedDict
    from io import StringIO

    def to_char(string):
        if len(string) == 0:
            return bytes('')
        return bytes(string[0])

    bytes = str
    str = str
    stdout = sys.stdout
    xrange = xrange

elif is_py3:
    from logging import NullHandler
    from collections import OrderedDict
    from io import StringIO

    def to_char(string):
        if len(string) == 0:
            return str('')
        return str(string[0])

    str = str
    bytes = bytes
    stdout = sys.stdout.buffer
    xrange = range
</code></pre>
<p>And the error I get on Spyder trying to use it :</p>
<pre><code> File ~/miniconda3/envs/spyder-env/lib/python3.11/site-packages/spyder_kernels/py3compat.py:356 in compat_exec
exec(code, globals, locals)
File ~/miniconda3/envs/spyder-env/lib/python3.11/site-packages/pylink/sanstitre1.py:9
from pyvantagepro import VantagePro2
File ~/miniconda3/envs/spyder-env/lib/python3.11/site-packages/pyvantagepro/__init__.py:13
from .logger import LOGGER, active_logger
File ~/miniconda3/envs/spyder-env/lib/python3.11/site-packages/pyvantagepro/logger.py:14
from .compat import NullHandler
File ~/miniconda3/envs/spyder-env/lib/python3.11/site-packages/pyvantagepro/compat.py:91
stdout = sys.stdout.buffer
AttributeError: 'TTYOutStream' object has no attribute 'buffer'
</code></pre>
<p>I tried to figure it out, but I'm not good enough in Python. How can I resolve this error? If I didn't include enough code, I'll make a new topic with all the links and code.</p>
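<p>For reference, a common workaround sketch for this class of error (an assumption about a fix, not tested against pyvantagepro) is to fall back to the text stream when the console's stdout replacement has no <code>buffer</code> attribute:</p>

```python
import sys

# Some IDE consoles (e.g. Spyder's TTYOutStream) replace sys.stdout with an
# object that lacks the `buffer` attribute of a regular text stream; fall
# back to the stream itself in that case.
stdout = getattr(sys.stdout, 'buffer', sys.stdout)
```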
|
<python><buffer><stdout>
|
2023-12-11 15:42:06
| 0
| 305
|
BenjiBoy
|
77,640,695
| 363,796
|
"post" keyword in Python source distribution filenames
|
<p>Can someone explain the following behavior to me?</p>
<pre class="lang-none prettyprint-override"><code>$ python setup.py sdist
.........
$ls -l dist/
-rw-r--r--. 1 zenzic users 5.8K Oct 17 17:00 my_prog-3.0.3-20231017.tar.gz
-rw-r--r--. 1 zenzic users 6.0K Dec 9 19:48 my_prog-3.0.4.post20231209.tar.gz
-rw-r--r--. 1 zenzic users 5.9K Dec 11 15:15 my_prog-3.0.4-20231211.tar.gz
</code></pre>
<p>What causes the presence or absence of <code>post</code> in the filename? I have not altered setup.cfg. The only change to setup.py is the version number, and that doesn't <em>seem</em> to be causing the difference. I'm sure there's an obvious explanation, but it's escaped me for a ridiculous length of time now. Thanks for any enlightenment you can shed!</p>
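<p>One thing worth checking (a guess at the cause, not a confirmed diagnosis): PEP 440 treats a trailing <code>-NNN</code> segment as an implicit post-release, and newer setuptools normalizes the version when naming the sdist. The <code>packaging</code> library shows the effect:</p>

```python
from packaging.version import Version

# PEP 440 normalization turns an implicit post-release spelled with a dash
# into the explicit ".postN" form.
print(Version("3.0.4-20231209"))  # 3.0.4.post20231209
```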
|
<python><setuptools>
|
2023-12-11 15:38:15
| 0
| 1,653
|
zenzic
|
77,640,569
| 8,030,746
|
How to insert scraped data using flask_sqlalchemy models?
|
<p>To be able to easily use flask-paginate, I switched to Flask-SQLAlchemy in my Flask app. However, this is my first time using something like this, and I'm having some issues understanding what it is I need to do to insert scraped content into my database now.</p>
<p>Previously, I appended scraped content, and then used that to save it to my database. For example:</p>
<pre><code># previous code to access tables
jobs = []
for row in tables:
    jobs.append({
        'title': row.find_element(By.CSS_SELECTOR, '.classhere').text,
        'info': row.find_element(By.CSS_SELECTOR, '.classhere').text,
        'location': row.find_element(By.CSS_SELECTOR, '.classhere').text,
        'link': row.find_element(By.CSS_SELECTOR, '.classhere').get_attribute('href'),
    })
</code></pre>
<p>And then, with that, I did:</p>
<pre><code>for job in jobs:
    c.execute('''INSERT INTO jobs(title, info, location, link) VALUES(?,?,?,?)''',
              (job['title'], job['info'], job['location'], job['link']))
conn.commit()
</code></pre>
<p>I wrote the table inside the schema.sql file and then ran init_db.py to create it. To get the information, it was pretty straightforward as well.</p>
<p>After I followed <a href="https://www.digitalocean.com/community/tutorials/how-to-use-flask-sqlalchemy-to-interact-with-databases-in-a-flask-application" rel="nofollow noreferrer">this tutorial</a>, and installed flask_sqlalchemy and created a model that looks like this:</p>
<pre><code>class Job(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.Text, nullable=False)
    info = db.Column(db.Text)
    location = db.Column(db.Text, nullable=False)
    link = db.Column(db.Text, nullable=False)

    def __init__(self, title, info, location, link):
        self.title = title
        self.info = info
        self.location = location
        self.link = link

    def __repr__(self):
        return f'<Job {self.title}>'
</code></pre>
<p>Then created it with db.create_all() via terminal, and added a <code>jobs = Job.query.all()</code> to my home route. Now I can't figure out how I should change the above code so that when I'm running my scraping scripts (Selenium), it adds the scraped data to the database? I know I'm missing some information here, and though it might be simple, I can't connect the dots. All the other tutorials I found just use the terminal and insert dummy data that way, which is not what I need here. What do I need to write to accomplish the above scraping and inserting code? Do I just use a similar approach with execute and <code>INSERT INTO</code> as I did above, or is there something else I should be doing? I'm having trouble searching for similar examples (no idea what else to type), so even a link could help me a lot.</p>
<p>Thank you!</p>
|
<python><flask>
|
2023-12-11 15:14:52
| 1
| 851
|
hemoglobin
|
77,640,551
| 9,974,205
|
Looking for alternative to nested loops in Python
|
<p>I have developed the following code to check if groups of three people are connected at the same time:</p>
<pre><code>import pandas as pd
from itertools import combinations

data = {
    'User': ['Esther','Jonh', 'Ann', 'Alex', 'Jonh', 'Alex', 'Ann', 'Beatrix'],
    'InitialTime': ['01/01/2023 00:00:00','01/01/2023 00:00:00', '01/01/2023 00:00:05', '01/01/2023 00:00:07', '01/01/2023 00:00:12', '01/01/2023 00:00:14', '01/01/2023 00:00:15', '01/01/2023 00:00:16'],
    'FinalTime': ['01/01/2023 00:10:00','01/01/2023 00:00:10', '01/01/2023 00:00:12', '01/01/2023 00:00:12','01/01/2023 00:00:16', '01/01/2023 00:00:16', '01/01/2023 00:00:17', '01/01/2023 00:00:17']
}
df = pd.DataFrame(data)

def calculate_overlapped_time(df):
    df['InitialTime'] = pd.to_datetime(df['InitialTime'], format='%d/%m/%Y %H:%M:%S')
    df['FinalTime'] = pd.to_datetime(df['FinalTime'], format='%d/%m/%Y %H:%M:%S')
    overlapped_time = {}
    for i, row_i in df.iterrows():
        for j, row_j in df.iterrows():
            for k, row_k in df.iterrows():
                if i != j and i != k and j != k:
                    initial_time = max(row_i['InitialTime'], row_j['InitialTime'], row_k['InitialTime'])
                    final_time = min(row_i['FinalTime'], row_j['FinalTime'], row_k['FinalTime'])
                    superposicion = max(0, (final_time - initial_time).total_seconds())
                    clave = f"{row_i['User']}-{row_j['User']}-{row_k['User']}"
                    if clave not in overlapped_time:
                        overlapped_time[clave] = 0
                    overlapped_time[clave] += superposicion
    results = pd.DataFrame(list(overlapped_time.items()), columns=['Group', 'OverlappingTime'])
    results['OverlappingTime'] = results['OverlappingTime'].astype(int)
    return results

results_df = calculate_overlapped_time(df)
</code></pre>
<p>I want to calculate the overlapping time for groups of roughly 10 people, so code with that many nested loops becomes impractical.</p>
<p>Can somebody please tell me if there is an alternative that makes this code scalable to bigger group sizes without nested for loops?</p>
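<p>To make the idea concrete, one possible restructuring (a sketch, not a drop-in replacement: <code>itertools.combinations</code> yields each unordered group exactly once, whereas the triple loop above visits every ordering) parametrizes the group size <code>k</code>:</p>

```python
import pandas as pd
from itertools import combinations

def calculate_overlapped_time(df, k=3):
    df = df.copy()
    df['InitialTime'] = pd.to_datetime(df['InitialTime'], format='%d/%m/%Y %H:%M:%S')
    df['FinalTime'] = pd.to_datetime(df['FinalTime'], format='%d/%m/%Y %H:%M:%S')
    overlapped = {}
    # One pass over all unordered groups of size k instead of k nested loops.
    for rows in combinations(df.itertuples(index=False), k):
        start = max(r.InitialTime for r in rows)
        end = min(r.FinalTime for r in rows)
        key = '-'.join(r.User for r in rows)
        overlapped[key] = overlapped.get(key, 0) + max(0, (end - start).total_seconds())
    return pd.DataFrame(list(overlapped.items()), columns=['Group', 'OverlappingTime'])
```

For k = 10 this still enumerates C(n, 10) groups, so for large n an interval-sweep approach would additionally be needed; the sketch only removes the hand-written nesting.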
|
<python><pandas><for-loop><combinations><scalability>
|
2023-12-11 15:13:05
| 1
| 503
|
slow_learner
|
77,640,549
| 9,404,261
|
What is the correct numba decorator to return an array of different size than the parameter passed?
|
<p>I want to use numba to vectorize a function to count the occurrences of unique values.
The function receives a numpy array of any length, and returns a numpy array of length 257.</p>
<p>But I do not understand how to specify the decorator. It doesn't even compile.
I read the documentation <a href="https://numba.readthedocs.io/en/stable/user/vectorize.html#the-vectorize-decorator" rel="nofollow noreferrer">here</a>, but I came up empty-handed. It says nothing about differing array sizes.</p>
<pre><code>@nb.guvectorize([(nb.uint8[:], nb.uint64[:])], "(n) -> (m)", target="parallel")
def count_occurrences(byte_view):
    """
    Counts the occurrences of each element in a byte array and returns a new array with the counts.
    """
    # adds a zero to the beginning of the array, for convenience
    count = np.zeros(1 + 256, dtype=np.uint64)
    count[1 + byte_view] += 1
    return count

sample = np.random.randint(1, 100, 100, dtype=np.uint8)
counts = count_occurrences(sample)
</code></pre>
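<p>As an aside, the intended semantics can be expressed in plain NumPy with <code>np.bincount</code> (a sketch of what the kernel should compute, not a fix for the decorator; note that NumPy's fancy-index <code>+= 1</code> does not accumulate repeated indices, so <code>bincount</code> is also the safer way to count):</p>

```python
import numpy as np

# Count occurrences of each byte value; count[1 + v] is how many times v
# appears, matching the leading zero slot in the original.
def count_occurrences(byte_view):
    return np.bincount(1 + byte_view.astype(np.intp), minlength=257).astype(np.uint64)

sample = np.random.randint(1, 100, 100, dtype=np.uint8)
counts = count_occurrences(sample)
print(counts.shape, counts.sum())  # (257,) 100
```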
|
<python><vectorization><numba>
|
2023-12-11 15:12:33
| 1
| 609
|
tutizeri
|
77,640,419
| 5,170,442
|
How to have parent method accept additional arguments compared to child method without linting issues
|
<p>I have a data structure where different objects are considered "equivalent" if certain parameters/arguments match. I want to implement a method in the parent class that does some of the checks that must apply to all children in order to make them equivalent. In addition, depending on the child class, some additional checks need to pass as well.</p>
<p>I implemented the following structure:</p>
<pre><code>from abc import ABC, abstractmethod

class Parent(ABC):
    @abstractmethod
    def isequivalent(
        self,
        other,
        checks: list[bool],
    ) -> bool:
        if not <generic condition1>:
            return False
        if not <generic condition2>:
            return False
        # etc
        for check in checks:
            if not check:
                return False
        return True

class Child(Parent):
    def isequivalent(
        self,
        other,
    ) -> bool:
        checks = [
            <specific condition 1>,
            <specific condition 2>,
            # etc
        ]
        return super().isequivalent(other, checks)
</code></pre>
<p>This works fine in terms of functionality. However, I get a "pylint: arguments-differ" message from my linter. I've tried to look up the correct implementation (e.g. <a href="https://pylint.pycqa.org/en/latest/user_guide/messages/warning/arguments-differ.html" rel="nofollow noreferrer">here</a>), but all the examples are where the child has more arguments and the solutions don't fit my situation.</p>
<p>I don't want to add a <code>checks: list | None = None</code> optional argument to the child methods, as I don't want it to be accessible/editable when instantiating the class.</p>
<p>Note also that the parent class is an <code>ABC</code>, so there will never be an instance of the parent itself.</p>
<p>Is there a correct way to implement this, apart from just overruling the linter?</p>
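<p>One restructuring that keeps identical signatures (a sketch of the template-method pattern, not necessarily the canonical answer) moves the child-specific conditions into an overridable hook, so <code>isequivalent</code> never differs between parent and child:</p>

```python
from abc import ABC, abstractmethod

class Parent(ABC):
    def isequivalent(self, other) -> bool:
        # Generic checks shared by all children (placeholder condition here).
        if type(self) is not type(other):
            return False
        return all(self._extra_checks(other))

    @abstractmethod
    def _extra_checks(self, other) -> list:
        """Child-specific equivalence conditions."""

class Child(Parent):
    def __init__(self, x):
        self.x = x

    def _extra_checks(self, other) -> list:
        return [self.x == other.x]

print(Child(1).isequivalent(Child(1)))  # True
print(Child(1).isequivalent(Child(2)))  # False
```

Because the public method is defined once on the parent and children only override the protected hook, pylint has no differing signatures to complain about, and <code>checks</code> is not exposed to callers.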
|
<python><inheritance><pylint>
|
2023-12-11 14:52:36
| 0
| 653
|
db_
|
77,640,168
| 3,917,215
|
Decode hierarchical data structure using Pandas
|
<p>I have a Pandas dataframe with a sample hierarchical data as below</p>
<pre><code>df1 = pd.DataFrame({'Level': ['0','1','2','2','3','2','3','1','2','3','3','4'],
                    'Type': ['P','U','D','I','D','PR','D','U','U','D','PR','D'],
                    'ID': ['P1','U1','D1','I1','D2','PR1','D3','U4','U8','DB1','PR3','DF1'],
                    'Value': ['','1', '0','1','0','1','0','1','1','0','1','0']
                    })
df1
</code></pre>
<p>I would like to create a parent-child mapping based on the 'Level' column, which will be used to build another dataframe following the hierarchy 0->1->2->3.</p>
<p>The 'Level' column defines the parent-child relation: for any level other than 0, the most recent row at the immediately previous level is the parent.</p>
<p>Expected output for the sample data is below</p>
<pre><code>df2 = pd.DataFrame({'ID1': ['P','P','U1','U1','I1','PR1','U4','U8','U8','PR3'],
                    'ID2': ['U1','U4','D1','I1','D2','D3','U8','DB1','PR3','DF1'],
                    'Value':['1','1','0','1','0','0','1','0','1','0']
                    })
df2
</code></pre>
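<p>For what it's worth, the relation "the parent is the most recent row at the previous level" can be sketched with a per-level dictionary. Note two assumptions versus the expected table above: this yields the level-1 parent as <code>P1</code> rather than <code>P</code>, and it also emits a <code>U1-PR1</code> pair that the expected output omits:</p>

```python
import pandas as pd

df1 = pd.DataFrame({'Level': ['0','1','2','2','3','2','3','1','2','3','3','4'],
                    'Type': ['P','U','D','I','D','PR','D','U','U','D','PR','D'],
                    'ID': ['P1','U1','D1','I1','D2','PR1','D3','U4','U8','DB1','PR3','DF1'],
                    'Value': ['','1','0','1','0','1','0','1','1','0','1','0']})

# The parent of a row at level L is the last ID seen at level L - 1.
last_at_level = {}
pairs = []
for level, id_, value in zip(df1['Level'].astype(int), df1['ID'], df1['Value']):
    if level > 0:
        pairs.append({'ID1': last_at_level[level - 1], 'ID2': id_, 'Value': value})
    last_at_level[level] = id_

df2 = pd.DataFrame(pairs)
print(df2)
```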
|
<python><pandas><dataframe>
|
2023-12-11 14:08:23
| 1
| 353
|
Osceria
|
77,640,020
| 9,284,651
|
Data Frame- Remove all after year but keep information about year
|
<p>My DF looks like below:</p>
<pre><code>id date
1 21 July 2023 (abcd)
2 22 July 2023 00:00:01
3 23 July 2023 -abcda
</code></pre>
<p>I need to remove everything after the year (2023), but I want to keep the year itself. So the result should be:</p>
<pre><code>id date
1 21 July 2023
2 22 July 2023
3 23 July 2023
</code></pre>
<p>I used this, but it doesn't keep the year:</p>
<pre><code>df['date'].str.rsplit('2023', 1).str.get(0)
</code></pre>
<p>I can't just append the year '2023' to the string that would be left after this operation, because the year can change. But I can deal with this. I just need to get the result.</p>
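<p>For reference, a regex-based sketch that keeps whatever 4-digit year appears, instead of hard-coding 2023:</p>

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 2, 3],
                   'date': ['21 July 2023 (abcd)',
                            '22 July 2023 00:00:01',
                            '23 July 2023 -abcda']})

# Keep everything up to and including the first 4-digit run (the year).
df['date'] = df['date'].str.extract(r'^(.*?\d{4})', expand=False)
print(df['date'].tolist())  # ['21 July 2023', '22 July 2023', '23 July 2023']
```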
<p>Regards
Tomasz</p>
|
<python><pandas><dataframe><date><split>
|
2023-12-11 13:43:09
| 2
| 403
|
Tmiskiewicz
|
77,639,975
| 11,064,604
|
Rearranging LGBMClassifier predict_proba outputs columns
|
<p>I am training an <code>LGBMClassifier</code> for the purpose of using its <code>predict_proba</code> method. The target has 3 classes: <strong>a, b, and c</strong>. I want to ensure that the model <code>predict_proba</code> outputs the probabilities of the columns in order <strong>b, a, c</strong>.<br />
<em>Is there a way to ensure that the output of</em> <code>LGBMClassifier</code> <code>predict_proba</code> <em>has the above ordering?</em></p>
<pre><code>import pandas as pd
from lightgbm import LGBMClassifier
import numpy as np

# data
features = ['feat_1']
TARGET = 'target'
df = pd.DataFrame({
    'feat_1': np.random.uniform(size=100),
    'target': np.random.choice(a=['b','c','a'], size=100)
})

# training
model = LGBMClassifier()
model.fit(df[features], df[TARGET])
print(model.classes_)
</code></pre>
<blockquote>
<p>['a','b','c']</p>
</blockquote>
<h3>Things I Have Tried</h3>
<ol>
<li>Just rearrange the <code>.classes_</code> attribute.<br />
<code>model.classes_ = ['b','a','c']</code></li>
</ol>
<blockquote>
<p>AttributeError: can't set attribute 'classes_'</p>
</blockquote>
<ol start="2">
<li>Manually rearrange the columns based on the <code>.classes_</code> attribute.</li>
</ol>
<pre><code>desired_order = ['b','a','c']
correct_idx = [list(model._classes).index(val) for val in desired_order]
model.predict_proba(test[features])[:, correct_idx]
</code></pre>
<p>#2 works, but I would just as soon not have to permute the column order every <code>predict_proba</code> call.</p>
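<p>A thin wrapper can hide the permutation so it is computed once instead of at every call site (a sketch with a dummy model standing in for the fitted <code>LGBMClassifier</code>):</p>

```python
import numpy as np

class ReorderedProba:
    """Wraps a fitted classifier and reorders predict_proba columns."""
    def __init__(self, model, class_order):
        self.model = model
        self.idx = [list(model.classes_).index(c) for c in class_order]

    def predict_proba(self, X):
        return self.model.predict_proba(X)[:, self.idx]

# Stand-in for a fitted LGBMClassifier, whose classes_ are ['a', 'b', 'c'].
class DummyModel:
    classes_ = np.array(['a', 'b', 'c'])
    def predict_proba(self, X):
        return np.array([[0.2, 0.5, 0.3]])

wrapped = ReorderedProba(DummyModel(), ['b', 'a', 'c'])
print(wrapped.predict_proba(None))  # [[0.5 0.2 0.3]]
```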
|
<python><machine-learning><lightgbm>
|
2023-12-11 13:35:28
| 1
| 353
|
Ottpocket
|
77,639,943
| 10,333,668
|
Tkinter theme is slow
|
<p>I'm using <a href="https://github.com/rdbende/Azure-ttk-theme" rel="nofollow noreferrer">this</a> theme in my Python tkinter program. My problem is that when I create around 20 buttons, it becomes very slow. I know this is because all the buttons have images. I already got a suggestion to use OpenCV instead of tkinter to load the images, but the images are loaded in the Tcl file, not in the Python file.</p>
<p>Any suggestions to overcome this issue?</p>
<p>UPDATE:</p>
<pre><code>def draw(self):
    self['width'] = self.width
    self['height'] = self.height
    self['background'] = COLORS['GRAY3']

    s_btn_add_scene = ttk.Style()
    s_btn_add_scene.configure('scene.TButton', font=(FONT_NAME, self.font_size), padding=(0, 0))

    btn_add_scene = ttk.Button(self.viewport, text='Add Scene', style='scene.TButton')
    cf_scenes = CollapsingFrame(self.viewport, padding=10)
    btn_remove = ttk.Button(None, text='Remove', style='scene.TButton')
    btn_add = ttk.Button(None, text='+', style='scene.TButton', width=3)

    btn_add_scene.pack(anchor='nw', pady=10, padx=8)
    cf_scenes.pack(expand=True, fill='x', anchor='n')

    for i in range(20):
        # TODO: remove
        go = randint(0, 100)
        frame1 = ttk.Frame(cf_scenes, padding=10)
        for j in range(go):
            ttk.Label(frame1, text=f"GameEntity{j}").pack(anchor='w')
        cf_scenes.add(frame1, title=f"NewScene{i}", collapsed=True, widgets=[btn_remove, btn_add])
</code></pre>
|
<python><tkinter><tcl><ttk><ttkthemes>
|
2023-12-11 13:29:53
| 2
| 377
|
Bálint Cséfalvay
|
77,639,922
| 18,649,992
|
Discrete Difference on Sharded JAX Arrays
|
<p>Is it possible to speed up discrete difference calculations on JAX arrays sharded across CPU cores?</p>
<p>The following is an attempt to follow the documentation of JAX automatic parallelism. It uses AOT compilation of essentially a JAX NumPy API call for different sharded arrays. Several device meshes are used to test partitioning across and along the difference direction. The measured run times show that there is no benefit to sharding this way.</p>
<pre><code>import os
os.environ["XLA_FLAGS"] = (
    f'--xla_force_host_platform_device_count=8'
)

import jax as jx
import jax.numpy as jnp
import jax.experimental.mesh_utils as jxm
import jax.sharding as jsh

def calc_fd_kernel(x):
    # Calculate 1st-order fd along the first axis
    return jnp.diff(
        x, 1, axis=0, prepend=jnp.zeros((1, *x.shape[1:]))
    )

def make_fd(shape, shardings):
    # Compiled fd kernel factory
    return jx.jit(
        calc_fd_kernel,
        in_shardings=shardings,
        out_shardings=shardings,
    ).lower(
        jx.ShapeDtypeStruct(shape, jnp.dtype('f8'))
    ).compile()

# Create 2D array to partition
n = 2**12
shape = (n, n,)
x = jx.random.normal(jx.random.PRNGKey(0), shape, dtype='f8')

shardings_test = {
    (1, 1,): jsh.PositionalSharding(jxm.create_device_mesh((1,), devices=jx.devices("cpu")[:1])).reshape(1, 1),
    (8, 1,): jsh.PositionalSharding(jxm.create_device_mesh((8,), devices=jx.devices("cpu")[:8])).reshape(8, 1),
    (1, 8,): jsh.PositionalSharding(jxm.create_device_mesh((8,), devices=jx.devices("cpu")[:8])).reshape(1, 8),
}

x_test = {
    mesh: jx.device_put(x, shardings)
    for mesh, shardings in shardings_test.items()
}

calc_fd_test = {
    mesh: make_fd(shape, shardings)
    for mesh, shardings in shardings_test.items()
}

for x_mesh, calc_fd_mesh in zip(x_test.values(), calc_fd_test.values()):
    %timeit calc_fd_mesh(x_mesh).block_until_ready()

# (1, 1)
# 48.9 ms ± 414 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
# (8, 1)
# 977 ms ± 34.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# (1, 8)
# 48.3 ms ± 1.03 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
</code></pre>
|
<python><jax><spmd>
|
2023-12-11 13:27:35
| 1
| 440
|
DavidJ
|
77,639,750
| 2,583,346
|
python plotly - arranging odd number of subplots
|
<p>Toy example:</p>
<pre><code>import plotly.graph_objects as go
from plotly.subplots import make_subplots
fig = make_subplots(rows=2, cols=3,
                    shared_yaxes=True,
                    horizontal_spacing=0.02)
fig.add_scatter(x=[1,2,3],y=[1,2,3], row=1, col=1)
fig.add_scatter(x=[1,2,3],y=[1,2,3], row=1, col=2)
fig.add_scatter(x=[1,2,3],y=[1,2,3], row=1, col=3)
fig.add_scatter(x=[1,2,3],y=[1,2,3], row=2, col=1)
fig.add_scatter(x=[1,2,3],y=[1,2,3], row=2, col=2)
fig.update_layout(showlegend=False)
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/h1BRy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h1BRy.png" alt="enter image description here" /></a></p>
<p>I would like to push the two subplots in the bottom row to the right, so they are placed in the middle. Something like this:
<a href="https://i.sstatic.net/nLpKj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nLpKj.png" alt="enter image description here" /></a></p>
<p>Is there a way to do it programatically? I've tried playing with the <code>specs</code> option of <code>make_subplots</code>, but without success.</p>
|
<python><plotly><subplot>
|
2023-12-11 12:54:09
| 1
| 1,278
|
soungalo
|
77,639,418
| 9,915,864
|
How can I pass data back and forth between Flask template and Python code (context_processor)?
|
<p>I'll preface by stating I'm still pretty new to Flask. I set up my <code>app.py</code> file to import view functions from a <code>views.py</code> file using <code>add_url_rule()</code>. I'm stuck on how to pass data back and forth between my HTML template and the Python code.</p>
<p>GOAL: Display HTML snippets based on user selections.<br/>
<strong>CLARIFICATION/UPDATE</strong><br/>
Specifically: I need collect a bunch of data. Then I need to form a directory path based on the shelf name selected, get its matching <code>directory_path</code> and concatenate it with an <code><input type='text'></code> directory name, and then add it to the earlier collection of data. The final button on the page will send all the data to be processed.<br/>
On my HTML page, I've got an HTML <code><form></code> to collect a bunch of data. I want to hold that data for the HTML page to use later. Then in the next <code><form></code>, the user types in a directory name & clicks a button. That <code>action</code> retrieves a directory path and concatenates the directory name to the path. The resulting path needs to be added to the earlier collected data. The final button on the page will execute a Python function with all the data. <br/></p>
<p>A work-around solution I've found is using <code>session</code>. <em>See my comment below</em>. It's working for me, but not sure if that's bad practice. Is it? That's why I was wondering about <code>context_processor</code>.</p>
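<p>The session work-around, for reference, looks roughly like this minimal sketch (route names and the stored dict are placeholders, and the hard-coded secret key is for illustration only):</p>

```python
from flask import Flask, session

app = Flask(__name__)
app.secret_key = 'dev-only-placeholder'  # hypothetical key for the sketch

@app.route('/set')
def set_options():
    # Stash the collected form data server-side, keyed to the user's session.
    session['options'] = {'shelf_choice': 'Test Shelf'}
    return 'stored'

@app.route('/get')
def get_options():
    return session.get('options', {})

client = app.test_client()
client.get('/set')
print(client.get('/get').json)  # {'shelf_choice': 'Test Shelf'}
```

Flask signs the session cookie with the secret key, so small per-user state like this is common practice; large or sensitive payloads belong in a server-side store instead.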
<p>Shelves are currently stored as a list of dictionaries and a dataframe:</p>
<pre><code>shelf_dict = [{'label': 'Shelf Name',
'url':'http://shelf_url',
'save_dir': 'path/to/save_directory'},
{'label': 'Shelf Name',
'url':'http://shelf_url',
'save_dir': 'path/to/save_directory'}
]
</code></pre>
<p>Where I'm at: I've figured out that if I pass data from the HTML page to the Python code and back to the same HTML page, I have to pass both the old and new data for that HTML page to render.</p>
<ol>
<li>I've seen how to call the Flask <code>context_processor</code> <a href="https://flask.palletsprojects.com/en/2.2.x/templating/#context-processors" rel="nofollow noreferrer">in the docs</a>. Can the context_processor work here?</li>
<li>If so, how would I call it given the structure of my application right now? I've only found examples with the Flask app as an object: <code>@app.context_processor</code>. How would I avoid circular referencing here?</li>
<li>If context_processor doesn't work for this, is there another approach besides JS/AJAX?
<br/></li>
</ol>
<p>Thanks!</p>
<p>The <code>test-response</code> HTML snippet prints the expected info. <br>
The commented-out <code><form></code> in the HTML are where I'm stuck.</p>
<p>app.py:</p>
<pre><code>from flask import Flask, app_context

import views

app = Flask(__name__)

app.add_url_rule('/', view_func=views.index)
app.add_url_rule('/index', view_func=views.index)
app.add_url_rule('/download_shelf',
                 view_func=views.download_shelf_func,
                 methods=['POST', 'GET'])

if __name__ == "__main__":
    app.config['SERVER_NAME'] = f"127.0.0.1:5000"
    app.run(debug=True, use_reloader=True)
</code></pre>
<p>views.py:</p>
<pre><code>from flask import render_template, request, redirect
from shelf_df import shelf_dict

def download_shelf_func():
    return render_template("download_page.html", shelf_dict=shelf_dict)

def download_options():
    def get_download_path(label):
        return shelf_df[shelf_df['label']==label]['save_dir'].item()

    option_values = {}
    if request.values:
        print(f"{request.args.get('shelf_choice') = }")
        if request.args.get('shelf_choice'):
            shelf_choice = request.args.get('shelf_choice')
            download_path = get_download_path(shelf_choice)
        if request.args.get('pg_size'):
            pg_size = request.args.get('pg_size')
        if request.values.get('sort_order'):
            sort_order = request.args.get('sort_order')
    return render_template("download_page.html",
                           shelf_dict=shelf_dict,
                           option_values={
                               'shelf_choice': shelf_choice,
                               'pg_size': pg_size,
                               'sort_order': sort_order,
                               'download_path': download_path
                           })

# def download_func():
#     pass
</code></pre>
<p>HTML</p>
<pre><code>{%block content%}
<div class="container">
<form action="{{ url_for('download_options') }}", id="option-results",
method="['POST','GET']}}">
<div class="options">
<div class="dropdown-menus">
<div class="shelf-choice">
<legend> Select shelf:</legend>
<select id="shelf-choice-select" name="shelf_choice" method="GET" action="/">
{% for i, label in shelf_dict['label'].items() %}
{% if "Test" in label %}:
<option value= "{{ label }}" selected>{{label}}</option>"
{% else: %}:
<option value= "{{ label }}">{{ label }}</option>"
{% endif %}
{% endfor %}
</select>
</div>
{# Trimmed out pg-size menu #}
</div>
{# Trimmed out sort-order buttons #}
</div>
<div class="button-group">
<input type="submit" id="submit-options" value="Set options" />
</div>
</form>
</div>
<div class="container" id="test-response">
{% if option_values %}
{% for key, val in option_values.items() %}
{{ key }}: {{ val }}<br />
{% endfor %}
{% endif %}
{#
<div class="container">
{# Need to concatenate a download_path+date # }
<form action="{{ url_for('download_func') }}" id="newPathName" name="directoryBrowser" method="['POST', 'GET']">
<input type="text" id="directory-name" name="directoryName" placeholder="Today's Date">
<input type="submit" id="download-btn" name="download_button">
</form>
</div>
#}
</div>
</code></pre>
|
<python><html><flask>
|
2023-12-11 12:01:00
| 0
| 341
|
Meghan M.
|
77,639,348
| 3,898,523
|
delta-rs package incurs high costs on GCS
|
<p>I'm using the <a href="https://github.com/delta-io/delta-rs" rel="nofollow noreferrer">delta-rs package</a> to store files on the Google Cloud Storage dual-region bucket. I use the following code to store the data:</p>
<pre><code>def save_data(self, df: Generator[pa.RecordBatch, Any, None]):
    write_deltalake(
        f"gs://<my-bucket-name>",
        df,
        schema=df_schema,
        partition_by="my_id",
        mode="append",
        max_rows_per_file=self.max_rows_per_file,
        max_rows_per_group=self.max_rows_per_file,
        min_rows_per_group=int(self.max_rows_per_file / 2)
    )
</code></pre>
<p>The input data is a generator since I'm taking the data from a Postgres database in batches. I am saving similar data into two different tables and I'm also saving a SUCCESS file for each uploaded partition.</p>
<p>I have around 25,000 partitions and most of them only have a single parquet file in them. The total number of rows that I've inserted is around 700,000,000. This incurred the following costs:</p>
<ul>
<li>Class A operations: 127,000.</li>
<li>Class B operations: 109,856,507.</li>
<li>Download Worldwide Destinations: 300 gibibyte.</li>
</ul>
<p>The number of class A operations makes sense to me when accounting for 2 writes per partition + an additional success file -- these are inserts. Some partitions probably have more than 1 file, so the number is a bit higher than 25,000 (number of partitions) x 3.</p>
<p>I can't figure out where so many class B operations and so much Download Worldwide Destinations traffic come from. I assume it comes from the implementation of delta-rs.</p>
<p>Can you provide any insights into why the costs are so high and how I would need to change the code to decrease them?</p>
|
<python><google-cloud-storage><delta-lake><cost-management><delta-rs>
|
2023-12-11 11:48:08
| 1
| 305
|
gregorp
|
77,639,326
| 19,363,912
|
Nested dictionary with class and instance attributes
|
<p>I store some configuration details across several class attributes and one main class which references each of them. E.g.</p>
<pre><code>class A:
    a = 1

class B:
    b = 2
    def __init__(self):
        self.a_ = A()

x = B()
</code></pre>
<p>I would like to display all class (and instance) attributes in a dictionary, i.e.</p>
<pre><code>{'b': 2, 'a_': {'a': 1}}
</code></pre>
<p>I understood <code>__dict__</code> does not access class attributes.</p>
<pre><code>x.__dict__ # {'a_': <__main__.A at 0x1b888d9e530>}
x.__dict__['a_'].__dict__ # {} -- why not {'a': 1}?
</code></pre>
<p>So I tried below class methods but the result is still not right:</p>
<pre><code>    @classmethod
    def to_dict(cls):
        return vars(cls)

    @classmethod
    def to_dict2(cls):
        return cls.__dict__
</code></pre>
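<p>A recursive helper along these lines produces the desired shape (a sketch; it merges class and instance attributes and skips dunder names and callables, which may be too blunt for real configurations):</p>

```python
def to_nested_dict(obj):
    # Merge class attributes with instance attributes (instance wins),
    # recursing into any value that itself carries a __dict__.
    out = {}
    for k, v in {**vars(type(obj)), **vars(obj)}.items():
        if k.startswith('__') or callable(v):
            continue
        out[k] = to_nested_dict(v) if hasattr(v, '__dict__') else v
    return out

class A:
    a = 1

class B:
    b = 2
    def __init__(self):
        self.a_ = A()

print(to_nested_dict(B()))  # {'b': 2, 'a_': {'a': 1}}
```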
|
<python>
|
2023-12-11 11:43:59
| 2
| 447
|
aeiou
|
77,639,286
| 583,464
|
3d numpy array calculate mean of columns with nans
|
<p>I have this 3d array:</p>
<pre><code>import numpy as np
import numpy.ma as ma

a = np.array([[[1, 2, 3], [4, np.nan, 6], [7, 8, 9]],
              [[11, 12, 13], [14, np.nan, 16], [17, 18, 19]]])

a.shape
(2, 3, 3)

array([[[ 1.,  2.,  3.],
        [ 4., nan,  6.],
        [ 7.,  8.,  9.]],

       [[11., 12., 13.],
        [14., nan, 16.],
        [17., 18., 19.]]])
</code></pre>
<p>So, I have 2 sets of 2d data. I want to calculate the column mean of each set and replace the nans with that value.</p>
<p>So, I want the result:</p>
<pre><code>array([[[ 1.,  2.,  3.],
        [ 4.,  5.,  6.],
        [ 7.,  8.,  9.]],

       [[11., 12., 13.],
        [14., 15., 16.],
        [17., 18., 19.]]])

((2 + 8) / 2 = 5)
((12 + 18) / 2 = 15)
</code></pre>
<p>I tried:</p>
<pre><code>a = np.where(np.isnan(a),
             ma.array(a, mask=np.isnan(a)).mean(axis=1),
             a)
</code></pre>
<p>with mean along axis 1 but it gives:</p>
<pre><code>operands could not be broadcast together with shapes (2,3,3) (2,3) (2,3,3)
</code></pre>
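<p>For comparison, <code>np.nanmean</code> with <code>keepdims=True</code> keeps the reduced axis, so the (2, 1, 3) means broadcast cleanly against the (2, 3, 3) array (a sketch of that approach):</p>

```python
import numpy as np

a = np.array([[[1, 2, 3], [4, np.nan, 6], [7, 8, 9]],
              [[11, 12, 13], [14, np.nan, 16], [17, 18, 19]]])

# Per-set column means, ignoring NaNs; keepdims=True yields shape (2, 1, 3).
col_means = np.nanmean(a, axis=1, keepdims=True)
filled = np.where(np.isnan(a), col_means, a)
print(filled[0, 1, 1], filled[1, 1, 1])  # 5.0 15.0
```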
|
<python><numpy>
|
2023-12-11 11:38:17
| 1
| 5,751
|
George
|
77,638,814
| 13,023,647
|
Searching a list by an undefined value
|
<p>There is a certain list that I get after parsing a certain html page:</p>
<p>Could you tell me please, how can I do a search for a specific value?</p>
<p>Let's say I specify only part of the value <code>KES_</code> and the search should give me the entire value <code>KES_2023.z</code>.</p>
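<p>For a plain list of strings, a substring filter is the obvious sketch (the sample values below are assumptions, since the parsed list itself is not shown above):</p>

```python
# Hypothetical sample, standing in for the list parsed from the HTML page.
values = ['ABC_2022.z', 'KES_2023.z', 'XYZ_2021.z']

# Keep every entry containing the partial value.
matches = [v for v in values if 'KES_' in v]
print(matches)  # ['KES_2023.z']
```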
<p>Thank you very much!</p>
|
<python><python-3.x><list><find>
|
2023-12-11 10:09:19
| 3
| 374
|
Alex Rebell
|
77,638,147
| 11,046,379
|
Pivot Pandas dataframe and take missed values from other dataframe
|
<p>There are two dataframes:</p>
<pre><code>table1
id | time | status
-----------------------
1 | 10:00 | conn |
1 | 10:01 | disconn |
2 | 10:02 | conn |
2 | 10:03 | disconn |
3 | 10:04 | conn |
table2
id | time |
------------
3 | 10:05 |
</code></pre>
<p>If there is no disconn time value for a certain id, then take it from table2.
How do I get the desired result?</p>
<pre><code>id | conn | disconn|
--------------------
1 | 10:00| 10:01 |
2 | 10:02| 10:03 |
3 | 10:04| 10:05 |
</code></pre>
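<p>A pivot plus a fill from table2 sketches one route (using the sample values above):</p>

```python
import pandas as pd

table1 = pd.DataFrame({'id': [1, 1, 2, 2, 3],
                       'time': ['10:00', '10:01', '10:02', '10:03', '10:04'],
                       'status': ['conn', 'disconn', 'conn', 'disconn', 'conn']})
table2 = pd.DataFrame({'id': [3], 'time': ['10:05']})

# Pivot to one row per id with conn/disconn columns, then fill missing
# disconn values from table2, aligned on id.
out = table1.pivot(index='id', columns='status', values='time')
out['disconn'] = out['disconn'].fillna(table2.set_index('id')['time'])
out = out.reset_index()[['id', 'conn', 'disconn']]
print(out)
```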
|
<python><pandas><dataframe>
|
2023-12-11 08:01:46
| 1
| 1,658
|
harp1814
|
77,637,697
| 5,865,448
|
Given set of points, how to approximate a rectangle using opencv?
|
<p>I have a set of points as shown in the image attached. How can I approximate or draw a rectangle using these points?</p>
<p>The easiest approach would be to take (min(x), min(y)) as the top left and (max(x), max(y)) as the bottom right, but that won't cope with any anomalous points. Here is sample data to try with:</p>
<pre><code>[(569, 800), (569, 272), (404, 801), (404, 274), (624, 801), (624, 273), (457, 800), (457, 277), (516, 800), (516, 273), (686, 800), (686, 272), (681, 801), (681, 271), (343, 274), (352, 801), (399, 801), (399, 274), (349, 273), (341, 795), (442, 801), (442, 274), (462, 800), (462, 277), (415, 274), (433, 801), (411, 801), (420, 274), (431, 274), (451, 800), (451, 274), (676, 791), (344, 279), (676, 279), (347, 801), (356, 275), (483, 800), (342, 790), (676, 801), (473, 277), (344, 319), (667, 324), (346, 364), (667, 364), (347, 405), (664, 411), (345, 447), (666, 452), (387, 801), (387, 276), (347, 490), (654, 496), (409, 275), (342, 627), (655, 621), (342, 620), (666, 539)]
</code></pre>
<p><a href="https://i.sstatic.net/3WUvl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3WUvl.png" alt="enter image description here" /></a></p>
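<p>One way to be robust against a few stray points is to take low/high percentiles of the coordinates instead of the strict min/max (if a rotated rectangle is acceptable, <code>cv2.minAreaRect</code> on the point array is an alternative). A pure-NumPy sketch — the percentile thresholds here are arbitrary choices, and the outlier point is made up for illustration:</p>

```python
import numpy as np

pts = np.array([(569, 800), (569, 272), (404, 801), (404, 274),
                (343, 274), (352, 801), (667, 324), (100, 100)])  # last point is a fake outlier

# 2nd/98th percentiles instead of min/max shrug off a small fraction of stray points
x0, y0 = np.percentile(pts, 2, axis=0)
x1, y1 = np.percentile(pts, 98, axis=0)
rect = (int(x0), int(y0), int(x1), int(y1))  # (left, top, right, bottom)
```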
|
<python><opencv>
|
2023-12-11 05:55:49
| 1
| 337
|
Ali Waqas
|
77,637,675
| 2,780,906
|
np.vectorize and relativedelta returning "relativedelta only diffs datetime/date"
|
<p>I have a pandas dataframe with two datetime64[ns] columns ("d1" and "d2") representing dates. I would like to create a third column calculated as the difference between these two dates. I can't use a simple days/365 style calculation, so I am requiring relativedelta.</p>
<p>Using relativedelta works fine on one row:</p>
<pre><code>import dateutil.relativedelta as relativedelta
relativedelta.relativedelta(df["d1"][0],df["d2"][0])
> relativedelta(years=+1)
</code></pre>
<p>But it fails on columns. So I vectorize it:</p>
<pre><code>date_diffs=np.vectorize(relativedelta.relativedelta)
</code></pre>
<p>And then I try</p>
<pre><code>date_diffs(df["d1"],df["d2"])
</code></pre>
<p>But this returns <code>TypeError: relativedelta only diffs datetime/date</code></p>
<p>How do I fix this? Or should I simply use the <code>apply</code> statement or a for-loop?</p>
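<p>The vectorized call fails because <code>np.vectorize</code> hands <code>relativedelta</code> raw <code>numpy.datetime64</code> scalars, which it rejects. Converting the columns to plain Python datetimes first sidesteps that. A sketch with made-up dates (column names as in the question):</p>

```python
import numpy as np
import pandas as pd
from dateutil.relativedelta import relativedelta

df = pd.DataFrame({"d1": pd.to_datetime(["2021-03-01", "2022-06-15"]),
                   "d2": pd.to_datetime(["2020-03-01", "2020-06-15"])})

# hand relativedelta real datetime objects, not numpy.datetime64 scalars
date_diffs = np.vectorize(relativedelta)
deltas = date_diffs(df["d1"].dt.to_pydatetime(), df["d2"].dt.to_pydatetime())
```

<p><code>df.apply(lambda r: relativedelta(r["d1"], r["d2"]), axis=1)</code> with <code>Timestamp</code> values would also work; the vectorize gain here is mostly cosmetic since <code>relativedelta</code> is a Python-level call either way.</p>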
|
<python><pandas><date><vectorization>
|
2023-12-11 05:46:58
| 1
| 397
|
Tim
|
77,637,562
| 10,027,592
|
How does assignment to another assignment(adding an element in a dictionary) work?
|
<p>Let's imagine a dictionary and we want to get some kind of a "reference" to a value that hasn't been populated yet.</p>
<pre><code>>>> d = {} # This dictionary may have a huge number of elements.
>>> a = d['John'] = {} # I hope 'a' works as if some reference to the empty value.
>>> d
{'John': {}}
>>> a['age'] = 20
>>> d
{'John': {'age': 20}} # worked as hoped, but how did the 2nd line above work?
</code></pre>
<p>In the 2nd line in the above example, I wanted the <code>a</code> to work as a reference to the empty value, and it worked as I hoped. But how exactly does <code>a = d['John'] = {}</code> work?</p>
<p>It would make some sense if it were equivalent to <code>a = (d['John'] = {})</code> and if assignment were an expression. But in Python, assignment is not an expression as far as I know. Maybe <code>myDict['someKey'] = 'someValue'</code> is not a typical assignment in Python?</p>
<p>Could somebody explain how the 2nd line works?</p>
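<p>For what it's worth, <code>a = d['John'] = {}</code> is a single chained assignment statement, not a nested expression: the right-most <code>{}</code> is evaluated once, then bound to each target from left to right, so both names end up referring to the same dict object. A small demonstration:</p>

```python
d = {}
a = d['John'] = {}   # one dict object, two targets bound left to right

assert a is d['John']  # same object, so mutation through either name shows in both
a['age'] = 20
```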
|
<python><python-3.x><variable-assignment>
|
2023-12-11 05:04:42
| 0
| 4,226
|
starriet 차주녕
|
77,637,539
| 9,108,781
|
Why BeautifulSoup can't find this supposed-to-be xbrl related "ix" tag?
|
<p>It turns out that the tag name should be: "ix:nonfraction"</p>
<p>This does not work. No "xi" tag is found.</p>
<pre><code>from bs4 import BeautifulSoup
text = """
<td style="BORDER-BOTTOM:0.75pt solid #7f7f7f;white-space:nowrap;vertical-align:bottom;text-align:right;">$ <ix:nonfraction name="ecd:AveragePrice" contextref="P01_01_2022To12_31_2022" unitref="Unit_USD" decimals="2" scale="0" format="ixt:num-dot-decimal">97.88</ix:nonfraction>
</td>
"""
soup = BeautifulSoup(text, 'lxml')
print(soup)
ix_tags = soup.find_all('ix')
print(ix_tags)
</code></pre>
<p>But the following works. I don't see a difference. Why is it? Thanks a lot!</p>
<pre><code>html_content = """
<html>
<body>
<ix>Tag 1</ix>
<ix>Tag 2</ix>
<ix>Tag 3</ix>
<p>Not an ix tag</p>
</body>
</html>
"""
soup = BeautifulSoup(html_content, 'lxml')
ix_tags = soup.find_all('ix')
for tag in ix_tags:
print(tag.text)
</code></pre>
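<p>In the first snippet the tag's name is the whole string <code>ix:nonfraction</code>, not <code>ix</code>, which is why <code>find_all('ix')</code> comes back empty; in the second snippet the tags really are named <code>ix</code>. Searching by the full (lower-cased) name or a regex prefix should work. A sketch — it uses the stdlib <code>html.parser</code> instead of <code>lxml</code> to stay self-contained, but the same name-based lookup applies:</p>

```python
import re
from bs4 import BeautifulSoup

text = '<td>$ <ix:nonfraction name="ecd:AveragePrice">97.88</ix:nonfraction></td>'
soup = BeautifulSoup(text, "html.parser")

exact = soup.find_all("ix:nonfraction")           # match the full tag name
by_prefix = soup.find_all(re.compile(r"^ix:"))    # match any ix:* tag
```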
|
<python><beautifulsoup>
|
2023-12-11 04:57:25
| 2
| 943
|
Victor Wang
|
77,637,307
| 8,523,868
|
How to print all links from the website in robot framwok
|
<p>Thank you so much in advance for answering my question.
Below I tried to print all the link names in the console.
I was able to get the count of links, but I could not print the names of the links.
Please help me solve this.</p>
<pre><code>*** Settings ***
Library SeleniumLibrary
*** Variables ***
*** Test Cases ***
GetAllLInks
open browser https://tnreginet.gov.in/portal/ Firefox
Maximize Browser Window
${nooflinks}= Get Element Count xpath://a
Log To Console ${nooflinks}
@{linkItems} create list
FOR ${i} IN RANGE 1 ${nooflinks}+1
${linktext}= get text (xpath://a)$[i]
lOG TO CONSOLE ${linktext}
END
*** Keywords ***
</code></pre>
<p>Below I am getting the error</p>
<pre><code>Started: E:\pycharm\projecttelegram\rautomation\alllinks.robot
==============================================================================
Alllinks
==============================================================================
[info] Opening browser 'Firefox' to base url 'https://tnreginet.gov.in/portal/'.
GetAllLInks 175
[info (+7.14s)] ${nooflinks} = 175
[info] @{linkItems} = [ ]
[info (+0.07s)] </td></tr><tr><td colspan="3"><a href="selenium-screenshot-1.png"><img src="selenium-screenshot-1.png" width="800px"></a>
[FAIL] Element with locator '(xpath://a)$[i]' not found.
| FAIL |
Element with locator '(xpath://a)$[i]' not found.
------------------------------------------------------------------------------
Alllinks | FAIL |
1 test, 0 passed, 1 failed
==============================================================================
Output: E:\pycharm\projecttelegram\log\output.xml
Log: E:\pycharm\projecttelegram\log\log.html
Report: E:\pycharm\projecttelegram\log\report.html
Robot Run Terminated (code: 0)
</code></pre>
|
<python><automation><robotframework>
|
2023-12-11 03:15:33
| 3
| 911
|
vivek rajagopalan
|
77,636,915
| 235,472
|
How to find out if a symbolic link points to a missing directory?
|
<p>I have a symbolic link to a directory that does not exist:</p>
<pre><code>~/ramdisk -> /dev/shm/mydir
</code></pre>
<p>When running my Python code, I get the error:</p>
<blockquote>
<p>FileNotFoundError: [Errno 2] No such file or directory: '~/ramdisk'</p>
</blockquote>
<p>How can I avoid this error by detecting beforehand that the target directory is missing?</p>
<p>The only method I found is <code>symlink()</code>, which creates a symbolic link.</p>
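<p>A link is "broken" exactly when <code>os.path.islink()</code> is true but <code>os.path.exists()</code> (which follows links) is false. Also note that <code>'~/ramdisk'</code> is not expanded automatically — pass it through <code>os.path.expanduser()</code> first, which may well be the cause of the <code>FileNotFoundError</code> here. A sketch that builds a broken link in a temp directory just for demonstration:</p>

```python
import os
import tempfile

d = tempfile.mkdtemp()
link = os.path.join(d, "ramdisk")
os.symlink(os.path.join(d, "missing_dir"), link)  # target does not exist

# islink() inspects the link itself; exists() follows it to the (missing) target
is_broken = os.path.islink(link) and not os.path.exists(link)
```

<p><code>os.path.lexists()</code> is also handy: it is true for the link itself even when the target is gone.</p>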
|
<python><python-3.x><symlink>
|
2023-12-11 00:06:52
| 1
| 13,528
|
Pietro
|
77,636,721
| 609,782
|
How to get a certain value from enum?
|
<p>I have the following class:</p>
<pre><code>class YesOrNo(enum.Enum):
YES = "Y"
NO = "N"
</code></pre>
<p>I am getting my inputs such as <code>YesOrNo("true")</code> or <code>YesOrNo("false")</code>,</p>
<p>To make this work, I think I need to change the class to:</p>
<pre><code>class YesOrNo(enum.Enum):
YES = "true"
NO = "false"
</code></pre>
<p>But, I also have a case where whenever a variable's value is saved as <code>YesOrNo.YES.value</code>, it should answer <code>"Y"</code>.</p>
<p>I can't seem to figure out how to pull that off.</p>
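<p>One way to keep <code>"Y"</code>/<code>"N"</code> as the stored values while still accepting <code>"true"</code>/<code>"false"</code> as lookups is the <code>_missing_</code> hook, which <code>Enum</code> calls when a value isn't found directly. A sketch, assuming the input strings are exactly <code>"true"</code> and <code>"false"</code>:</p>

```python
import enum

class YesOrNo(enum.Enum):
    YES = "Y"
    NO = "N"

    @classmethod
    def _missing_(cls, value):
        # fall back to the "true"/"false" spelling; None keeps the normal ValueError
        return {"true": cls.YES, "false": cls.NO}.get(value)
```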
|
<python><enums>
|
2023-12-10 22:46:33
| 1
| 5,792
|
Darpan
|
77,636,586
| 3,608,005
|
Extracting titles from PDFs based on formatting via classification
|
<p>I have about 20000 pdfs with probably 100 different layouts. Unfortunately, not all PDFs contain clean metadata. People are often lazy and did not always provide the title with it or sometimes a very short one or even worse just the file name as title.</p>
<p>Because of the number of documents, I started with PyMuPDF before diving into deep learning (e.g layout-parser, layout_lm etc.).</p>
<p>A naive implementation would be to filter for the text with the largest font size at the beginning of the document. But as can be seen here that does not work well for different layouts:
<a href="https://www.ema.europa.eu/en/documents/overview/vectormune-nd-epar-summary-public_en.pdf" rel="nofollow noreferrer">title and subtitle</a>,
<a href="https://www.ema.europa.eu/en/documents/scientific-guideline/ich-q-7-good-manufacturing-practice-active-pharmaceutical-ingredients-step-5_en.pdf" rel="nofollow noreferrer">old format</a>,
<a href="https://www.ema.europa.eu/system/files/documents/scientific-guideline/wc500191492_en.pdf" rel="nofollow noreferrer">simple example for which a naive solution works well</a></p>
<p>I thought about training a classifier based on a feature matrix created like:</p>
<pre><code>def create_feature_matrix(blocks):
"""
blocks created using pymupdf like:
# Open the PDF file
doc = fitz.open(pdf_path)
# Extract data from the first page
page = doc[0]
text_instances = page.get_text("dict")["blocks"]
"""
text_instances = blocks
# Initialize the feature matrix
feature_matrix = []
# Iterate through text instances and extract features
for instance in text_instances:
if "lines" in instance:
for line in instance["lines"]:
for span in line["spans"]:
# Extract text, color, and positional information
text = span["text"]
color = span["color"]
size = span["size"]
font = span["font"]
bbox = span["bbox"] # bbox = (x0, y0, x1, y1)
feature_matrix.append({
"text": text,
"color": color,
"size": size,
"font": font,
"x0": bbox[0],
"y0": bbox[1],
"x1": bbox[2],
"y1": bbox[3]
})
return feature_matrix
</code></pre>
<p>this code could be used together with PyMuPDF like so:</p>
<pre><code>import pandas as pd
import fitz
pdf_path = some_path
doc = fitz.open(pdf_path)
# Extract data from the first page
page = doc[0]
blocks = page.get_text("dict")["blocks"]
FM_for_one_page = pd.DataFrame(create_feature_matrix(blocks))
</code></pre>
<p>My plan is to manually label the rows based on the text like: text is title: 1, not a title: 0.</p>
<p>This solution might be an improvement to my naive rule based classifier. However, I doubt my approach is very robust and it opens more questions:</p>
<ul>
<li>Would it be ok to just concatenate the feature matrices for all first pages? I do lose the information about the page boundaries, but I have no idea what else I could do.</li>
<li>The features of the titles depend on the surrounding structure and sequence. What model could capture that?</li>
<li>Do you think I am on a right path? If not, how would you tackle such a challenge?</li>
</ul>
|
<python><pdf><nlp>
|
2023-12-10 21:51:00
| 2
| 5,448
|
Moritz
|
77,636,518
| 539,251
|
Python for .NET - python lambda parameter in function definition called from C#
|
<p>I have the following code I want to convert to Python for .NET (so I want to embed Python in C#)</p>
<pre><code>import numpy as np
from scipy.optimize import curve_fit
x = np.array([4, 3, 2, 1])
y = np.array([1.43, 1.84, 1.36, 1.08])
model = lambda t, a, b: a * np.exp(-b * t)
params = curve_fit(model, x, y)
</code></pre>
<p>how to convert this to Python for .NET?</p>
<pre><code>PythonEngine.Initialize();
using (Py.GIL())
{
dynamic np = Py.Import("numpy");
dynamic sp = Py.Import("scipy.optimize");
var x = new List<double> { 1, 2, 3, 4 };
var y = new List<double> { 1.43, 1.84, 1.36, 1.08 };
var objective = (float t, float alpha, float kappa) => alpha * np.exp(-kappa * t);
    dynamic parameters = sp.curve_fit(objective, x, y);
}
</code></pre>
<p><code>objective</code>, however, is not accepted. Does anyone have an idea? I would prefer a solution without something like</p>
<pre><code>scope.Import("numpy");
scope.Exec("model = lambda t, a, b: a * numpy.exp(-b * t)");
var model = scope.Eval("model");
</code></pre>
|
<python><c#><.net><embed><python.net>
|
2023-12-10 21:25:26
| 1
| 1,545
|
BigChief
|
77,636,346
| 7,391,480
|
Pandas table reshape puzzle - list all items and fill in blanks with NaN or 0
|
<p>I have a table I'm trying to reshape in a particular way.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'race': ['one', 'one', 'one', 'two', 'two', 'two'],
'type': ['D', 'K', 'G', 'D', 'D', 'K'],
'item': ['x', 'y', 'z', 'q', 'x', 'y'],
'level': [1, 2, 1, 6, 2, 3]})
df
</code></pre>
<p>The resulting dataframe:</p>
<pre><code> race type item level
0 one D x 1
1 one K y 2
2 one G z 1
3 two D q 6
4 two D x 2
5 two K y 3
</code></pre>
<p>I would like to reshape it in this format:</p>
<pre><code> D K G
item level item level item level
race
one x 1 y 2 z 1
two q 6 y 3 NaN NaN
two x 2 NaN NaN NaN NaN
</code></pre>
<ul>
<li>My goal is simply to lay out the information in a different format
for a human to read.</li>
<li>There is no data aggregation.</li>
<li><code>item</code> is unique within <code>race</code> but it can appear in multiple different races.</li>
<li>The tricky part is the <code>race</code> column or index must expand to fit the number of items within the race. In the example above, there are two 'D' items in race 'two' so race 'two' repeats twice in 2 rows to accommodate both items. If there
were 5 'K' items in race 'two', race 'two' would need to repeat 5
times.</li>
<li>The number of 'D' 'K' 'G' items in each race is random and they are not related to each other. When there is not an item available, that cell is filled with 'NaN' or 0.</li>
</ul>
<p>How can I achieve my desired table shape?</p>
<p>I've already tried:</p>
<pre><code>df.pivot(index='race', columns='type', values=['level', 'item'])
</code></pre>
<p>which gives error:</p>
<pre><code>ValueError: Index contains duplicate entries, cannot reshape
</code></pre>
<p>Is there another way with <code>pd.pivot</code>, <code>pd.groupby</code>, <code>pd.pivot_table</code>, or <code>pd.crosstab</code> or another pandas or dataframe method that can work?</p>
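<p>A common pattern for the "duplicate entries" pivot error is to deduplicate the index with a per-group counter from <code>groupby(...).cumcount()</code> and pivot on <code>(race, n)</code>. A sketch — it needs pandas ≥ 1.1 (list-valued <code>pivot</code> index), and the columns come out alphabetically (D, G, K) rather than D, K, G:</p>

```python
import pandas as pd

df = pd.DataFrame({'race': ['one', 'one', 'one', 'two', 'two', 'two'],
                   'type': ['D', 'K', 'G', 'D', 'D', 'K'],
                   'item': ['x', 'y', 'z', 'q', 'x', 'y'],
                   'level': [1, 2, 1, 6, 2, 3]})

# a running counter within each (race, type) group makes the pivot index unique
df['n'] = df.groupby(['race', 'type']).cumcount()
out = (df.pivot(index=['race', 'n'], columns='type', values=['item', 'level'])
         .swaplevel(axis=1)        # put 'type' on top: (D, item), (D, level), ...
         .sort_index(axis=1)
         .droplevel('n'))          # hide the helper counter again
```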
|
<python><pandas><dataframe><pivot><pivot-table>
|
2023-12-10 20:34:29
| 2
| 1,364
|
edge-case
|
77,636,251
| 2,813,606
|
Create a dictionary from Pandas column conditioned on values from another column
|
<p>I have a tennis dataset that looks like the following:</p>
<pre><code>tourney_id = ['French Open 2018','French Open 2018','Wimbledon 2018','Wimbledon 2018','Australian Open 2019','Australian Open 2019','US Open 2019','US Open 2019']
player_name = ['Novak Djokovic','Roger Federer','Andy Murray','Rafael Nadal','John Isner','Novak Djokovic','Andy Murray','Roger Federer']
match_num = [103, 103, 217, 217, 104, 104, 243, 243]
df = pd.DataFrame(list(zip(tourney_id, player_name, match_num)),
columns =['TournamentID','Name','MatchID'])
</code></pre>
<p>I want to create a dictionary where the keys are the players and the items are also the players (opponents). So it would look like the following based on my dataset:</p>
<pre><code>{'Novak Djokovic': ['Roger Federer','John Isner'],
'Roger Federer': ['Novak Djokovic','Andy Murray'],
'Andy Murray': ['Rafael Nadal','Roger Federer'],
'Rafael Nadal': ['Andy Murray'],
'John Isner': ['Novak Djokovic']}
</code></pre>
<p>I want to identify players who have played each other when they have the same values for both TournamentID and MatchID.</p>
<p>The last thing I tried was: <code>df.set_index(['TournamentID','MatchID'])['Name'].to_dict()</code> but that's not quite what I'm looking for.</p>
<p>What can I try next?</p>
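<p>Since each (TournamentID, MatchID) pair identifies one match between exactly two players, grouping on that pair and cross-registering the two names is one straightforward sketch (list order within each player's opponents may differ from the example, since groupby sorts the keys):</p>

```python
import pandas as pd

tourney_id = ['French Open 2018', 'French Open 2018', 'Wimbledon 2018', 'Wimbledon 2018',
              'Australian Open 2019', 'Australian Open 2019', 'US Open 2019', 'US Open 2019']
player_name = ['Novak Djokovic', 'Roger Federer', 'Andy Murray', 'Rafael Nadal',
               'John Isner', 'Novak Djokovic', 'Andy Murray', 'Roger Federer']
match_num = [103, 103, 217, 217, 104, 104, 243, 243]
df = pd.DataFrame({'TournamentID': tourney_id, 'Name': player_name, 'MatchID': match_num})

opponents = {}
# each (TournamentID, MatchID) group holds the two players of one match
for _, g in df.groupby(['TournamentID', 'MatchID']):
    a, b = g['Name'].tolist()
    opponents.setdefault(a, []).append(b)
    opponents.setdefault(b, []).append(a)
```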
|
<python><pandas><list><dictionary><python-zip>
|
2023-12-10 20:07:47
| 5
| 921
|
user2813606
|
77,636,213
| 6,619,979
|
Is it possible to avoid encoding padding when creating a sequence data encoder in PyTorch?
|
<p>I am attempting to make an observation history encoder, where my goal is for a model that takes in as input a variable length sequence of dimension [Time, Batch, Features] (where sequences are padded to fit a fixed Time length) and outputs a variable of dimension [Batch, New_Features]. My concern is that when I’m doing dimensionality reduction with FC layers, they will take the padded data into consideration. Is there any way to avoid this? Or is this something I don't need to worry about because the padding will naturally become part of the unique encodings?</p>
|
<python><deep-learning><pytorch><reinforcement-learning><autoencoder>
|
2023-12-10 19:56:25
| 1
| 833
|
Yuerno
|
77,636,192
| 1,743,551
|
`AttributeError: 'Series' object has no attribute 'iteritems'` in simple custom Pyfolio example
|
<p>I try to get a custom Pyfolio example running in a Jupyter Notebook on OSx. But I either get</p>
<p><code>AttributeError: 'Series' object has no attribute 'iteritems'</code> with the Pandas > 2.0.0 or</p>
<p><code>IndexError: index -1 is out of bounds for axis 0 with size 0</code> with the Pandas < 2.0.0.</p>
<p>These are my steps for the latest Pandas version:</p>
<ol>
<li>Create a <code>requirements.txt</code> with</li>
</ol>
<pre><code>pyfolio
jupyter
pandas
</code></pre>
<ol start="2">
<li><code>virtualenv --python python3 env</code></li>
<li><code>source ./env/bin/activate</code></li>
<li><code>pip3 install -r requirements.txt</code></li>
<li><code>jupyter notebook</code>
The Notebook looks like this:</li>
</ol>
<pre><code>import pyfolio as pf
import pandas as pd
</code></pre>
<pre><code>return_values = {
'2023-01-01': 0.005,
'2023-01-02': -0.002,
'2023-01-03': 0.003,
'2023-01-04': -0.002,
'2023-01-05': 0.006,
}
dates = pd.to_datetime(list(return_values.keys()))
returns = pd.Series(list(return_values.values()), index=dates)
returns
</code></pre>
<pre><code>data = {
'AAPL': [5000, 5200, 5100, 5300, 5400],
'MSFT': [3000, 3050, 3100, 3150, 3200],
'GOOG': [7000, 6900, 7100, 7200, 7300],
}
dates = ['2023-01-01', '2023-01-02', '2023-01-03', '2023-01-04', '2023-01-05']
dates = pd.to_datetime(dates)
positions = pd.DataFrame(data, index=dates)
positions['cash'] = [1000, 1500, 1200, 1100, 1300]
positions
</code></pre>
<pre><code>data = {
'date': ['2023-01-01', '2023-01-02', '2023-01-03', '2023-01-04', '2023-01-05'],
'symbol': ['AAPL', 'MSFT', 'GOOG', 'MSFT', 'AAPL'],
'amount': [10, -5, 15, 11, -4], # Positive for buys, negative for sells
'price': [150, 200, 1000, 240, 110]
}
transactions = pd.DataFrame(data)
transactions['date'] = pd.to_datetime(transactions['date'])
transactions.set_index('date', inplace=True)
transactions
</code></pre>
<p><code>pf.create_full_tear_sheet(returns, positions=positions, transactions=transactions)</code></p>
<p>With that last call I get</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/_n/_pkw_mkx6z72n2b2jy6vj64w0000gn/T/ipykernel_97539/850986836.py in ?()
----> 1 pf.create_full_tear_sheet(returns, positions=positions, transactions=transactions)
~/privat/trading/pyfoliotest/env/lib/python3.10/site-packages/pyfolio/tears.py in ?(returns, positions, transactions, market_data, benchmark_rets, slippage, live_start_date, sector_mappings, bayesian, round_trips, estimate_intraday, hide_positions, cone_std, bootstrap, unadjusted_returns, style_factor_panel, sectors, caps, shares_held, volumes, percentile, turnover_denom, set_context, factor_returns, factor_loadings, pos_in_dollars, header_rows, factor_partitions)
197
198 positions = utils.check_intraday(estimate_intraday, returns,
199 positions, transactions)
200
--> 201 create_returns_tear_sheet(
202 returns,
203 positions=positions,
204 transactions=transactions,
~/privat/trading/pyfoliotest/env/lib/python3.10/site-packages/pyfolio/plotting.py in ?(*args, **kwargs)
48 def call_w_context(*args, **kwargs):
49 set_context = kwargs.pop('set_context', True)
50 if set_context:
51 with plotting_context(), axes_style():
---> 52 return func(*args, **kwargs)
53 else:
54 return func(*args, **kwargs)
~/privat/trading/pyfoliotest/env/lib/python3.10/site-packages/pyfolio/tears.py in ?(returns, positions, transactions, live_start_date, cone_std, benchmark_rets, bootstrap, turnover_denom, header_rows, return_fig)
492
493 if benchmark_rets is not None:
494 returns = utils.clip_returns_to_benchmark(returns, benchmark_rets)
495
--> 496 plotting.show_perf_stats(returns, benchmark_rets,
497 positions=positions,
498 transactions=transactions,
499 turnover_denom=turnover_denom,
~/privat/trading/pyfoliotest/env/lib/python3.10/site-packages/pyfolio/plotting.py in ?(returns, factor_returns, positions, transactions, turnover_denom, live_start_date, bootstrap, header_rows)
644 APPROX_BDAYS_PER_MONTH)
645 perf_stats = pd.DataFrame(perf_stats_all, columns=['Backtest'])
646
647 for column in perf_stats.columns:
--> 648 for stat, value in perf_stats[column].iteritems():
649 if stat in STAT_FUNCS_PCT:
650 perf_stats.loc[stat, column] = str(np.round(value * 100,
651 1)) + '%'
~/privat/trading/pyfoliotest/env/lib/python3.10/site-packages/pandas/core/generic.py in ?(self, name)
6200 and name not in self._accessors
6201 and self._info_axis._can_hold_identifiers_and_holds_name(name)
6202 ):
6203 return self[name]
-> 6204 return object.__getattribute__(self, name)
AttributeError: 'Series' object has no attribute 'iteritems'
</code></pre>
<p>As far as I can see in the <a href="https://github.com/quantopian/pyfolio/blob/master/pyfolio/tears.py#L89" rel="nofollow noreferrer">DocStrings</a>, the positions argument is indeed supposed to be a DataFrame, yet the failing <code>iteritems</code> call is made on a Series, not a DataFrame.
As can be seen in another <a href="https://stackoverflow.com/a/76202088/1743551">SO post</a>, <code>iteritems</code> was removed in pandas >= 2.0.0, so I tried to downgrade. Now I actually see the backtest result table (without graphs), but I get this error:</p>
<pre><code>---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
Cell In[5], line 1
----> 1 pf.create_full_tear_sheet(returns, positions=positions, transactions=transactions)
File ~/privat/trading/options-backtesting/env/lib/python3.10/site-packages/pyfolio/tears.py:201, in create_full_tear_sheet(returns, positions, transactions, market_data, benchmark_rets, slippage, live_start_date, sector_mappings, bayesian, round_trips, estimate_intraday, hide_positions, cone_std, bootstrap, unadjusted_returns, style_factor_panel, sectors, caps, shares_held, volumes, percentile, turnover_denom, set_context, factor_returns, factor_loadings, pos_in_dollars, header_rows, factor_partitions)
195 returns = txn.adjust_returns_for_slippage(returns, positions,
196 transactions, slippage)
198 positions = utils.check_intraday(estimate_intraday, returns,
199 positions, transactions)
--> 201 create_returns_tear_sheet(
202 returns,
203 positions=positions,
204 transactions=transactions,
205 live_start_date=live_start_date,
206 cone_std=cone_std,
207 benchmark_rets=benchmark_rets,
208 bootstrap=bootstrap,
209 turnover_denom=turnover_denom,
210 header_rows=header_rows,
211 set_context=set_context)
213 create_interesting_times_tear_sheet(returns,
214 benchmark_rets=benchmark_rets,
215 set_context=set_context)
217 if positions is not None:
File ~/privat/trading/options-backtesting/env/lib/python3.10/site-packages/pyfolio/plotting.py:52, in customize.<locals>.call_w_context(*args, **kwargs)
50 if set_context:
51 with plotting_context(), axes_style():
---> 52 return func(*args, **kwargs)
53 else:
54 return func(*args, **kwargs)
File ~/privat/trading/options-backtesting/env/lib/python3.10/site-packages/pyfolio/tears.py:504, in create_returns_tear_sheet(returns, positions, transactions, live_start_date, cone_std, benchmark_rets, bootstrap, turnover_denom, header_rows, return_fig)
494 returns = utils.clip_returns_to_benchmark(returns, benchmark_rets)
496 plotting.show_perf_stats(returns, benchmark_rets,
497 positions=positions,
498 transactions=transactions,
(...)
501 live_start_date=live_start_date,
502 header_rows=header_rows)
--> 504 plotting.show_worst_drawdown_periods(returns)
506 vertical_sections = 11
508 if live_start_date is not None:
File ~/privat/trading/options-backtesting/env/lib/python3.10/site-packages/pyfolio/plotting.py:1664, in show_worst_drawdown_periods(returns, top)
1648 def show_worst_drawdown_periods(returns, top=5):
1649 """
1650 Prints information about the worst drawdown periods.
1651
(...)
1661 Amount of top drawdowns periods to plot (default 5).
1662 """
-> 1664 drawdown_df = timeseries.gen_drawdown_table(returns, top=top)
1665 utils.print_table(
1666 drawdown_df.sort_values('Net drawdown in %', ascending=False),
1667 name='Worst drawdown periods',
1668 float_format='{0:.2f}'.format,
1669 )
File ~/privat/trading/options-backtesting/env/lib/python3.10/site-packages/pyfolio/timeseries.py:991, in gen_drawdown_table(returns, top)
973 """
974 Places top drawdowns in a table.
975
(...)
987 Information about top drawdowns.
988 """
990 df_cum = ep.cum_returns(returns, 1.0)
--> 991 drawdown_periods = get_top_drawdowns(returns, top=top)
992 df_drawdowns = pd.DataFrame(index=list(range(top)),
993 columns=['Net drawdown in %',
994 'Peak date',
995 'Valley date',
996 'Recovery date',
997 'Duration'])
999 for i, (peak, valley, recovery) in enumerate(drawdown_periods):
File ~/privat/trading/options-backtesting/env/lib/python3.10/site-packages/pyfolio/timeseries.py:956, in get_top_drawdowns(returns, top)
954 drawdowns = []
955 for t in range(top):
--> 956 peak, valley, recovery = get_max_drawdown_underwater(underwater)
957 # Slice out draw-down period
958 if not pd.isnull(recovery):
File ~/privat/trading/options-backtesting/env/lib/python3.10/site-packages/pyfolio/timeseries.py:895, in get_max_drawdown_underwater(underwater)
893 valley = np.argmin(underwater) # end of the period
894 # Find first 0
--> 895 peak = underwater[:valley][underwater[:valley] == 0].index[-1]
896 # Find last 0
897 try:
File ~/privat/trading/options-backtesting/env/lib/python3.10/site-packages/pandas/core/indexes/base.py:5320, in Index.__getitem__(self, key)
5317 if is_integer(key) or is_float(key):
5318 # GH#44051 exclude bool, which would return a 2d ndarray
5319 key = com.cast_scalar_indexer(key, warn_float=True)
-> 5320 return getitem(key)
5322 if isinstance(key, slice):
5323 # This case is separated from the conditional above to avoid
5324 # pessimization com.is_bool_indexer and ndim checks.
5325 result = getitem(key)
File ~/privat/trading/options-backtesting/env/lib/python3.10/site-packages/pandas/core/arrays/datetimelike.py:358, in DatetimeLikeArrayMixin.__getitem__(self, key)
350 """
351 This getitem defers to the underlying array, which by-definition can
352 only handle list-likes, slices, and integer scalars
353 """
354 # Use cast as we know we will get back a DatetimeLikeArray or DTScalar,
355 # but skip evaluating the Union at runtime for performance
356 # (see https://github.com/pandas-dev/pandas/pull/44624)
357 result = cast(
--> 358 "Union[DatetimeLikeArrayT, DTScalarOrNaT]", super().__getitem__(key)
359 )
360 if lib.is_scalar(result):
361 return result
File ~/privat/trading/options-backtesting/env/lib/python3.10/site-packages/pandas/core/arrays/_mixins.py:289, in NDArrayBackedExtensionArray.__getitem__(self, key)
283 def __getitem__(
284 self: NDArrayBackedExtensionArrayT,
285 key: PositionalIndexer2D,
286 ) -> NDArrayBackedExtensionArrayT | Any:
287 if lib.is_integer(key):
288 # fast-path
--> 289 result = self._ndarray[key]
290 if self.ndim == 1:
291 return self._box_func(result)
IndexError: index -1 is out of bounds for axis 0 with size 0
</code></pre>
<pre><code>print(pf.__version__)
print(pd.__version__)
</code></pre>
<p>returns:</p>
<pre><code>0.9.2
2.1.4
</code></pre>
<p>What can I do to get this simple Pyfolio example running from a recent version in a Jupyter Notebook?</p>
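<p>pyfolio 0.9.2 predates pandas 2.x, where <code>Series.iteritems</code> was removed in favour of <code>Series.items</code>. A commonly suggested stopgap — a monkey-patch, not a supported API — is to alias the old name back before importing pyfolio; maintained forks such as <code>pyfolio-reloaded</code> are the cleaner route:</p>

```python
import pandas as pd

# pandas >= 2.0 removed iteritems; restore it as an alias for items
if not hasattr(pd.Series, "iteritems"):
    pd.Series.iteritems = pd.Series.items
if not hasattr(pd.DataFrame, "iteritems"):
    pd.DataFrame.iteritems = pd.DataFrame.items

# import pyfolio only *after* the shim is in place:
# import pyfolio as pf
```

<p>This only addresses the <code>iteritems</code> crash; the separate <code>IndexError</code> in the drawdown table comes from pyfolio's own code and would need a fork or patch as well.</p>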
|
<python><pandas><pyfolio>
|
2023-12-10 19:51:10
| 1
| 1,855
|
Sandro
|
77,636,125
| 13,492,584
|
Python PyCharm - cannot install contextily
|
<p>I am using <strong>Python</strong> with <strong>PyCharm</strong> and I am trying to run this code that I found <a href="https://networkx.org/documentation/stable/auto_examples/geospatial/plot_points.html" rel="nofollow noreferrer">here</a>:</p>
<pre><code>from libpysal import weights, examples
from contextily import add_basemap
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np
import geopandas
cases = geopandas.read_file("cholera_cases.gpkg")
coordinates = np.column_stack((cases.geometry.x, cases.geometry.y))
knn3 = weights.KNN.from_dataframe(cases, k=3)
dist = weights.DistanceBand.from_array(coordinates, threshold=50)
knn_graph = knn3.to_networkx()
dist_graph = dist.to_networkx()
positions = dict(zip(knn_graph.nodes, coordinates))
f, ax = plt.subplots(1, 2, figsize=(8, 4))
for i, facet in enumerate(ax):
cases.plot(marker=".", color="orangered", ax=facet)
add_basemap(facet)
facet.set_title(("KNN-3", "50-meter Distance Band")[i])
facet.axis("off")
nx.draw(knn_graph, positions, ax=ax[0], node_size=5, node_color="b")
nx.draw(dist_graph, positions, ax=ax[1], node_size=5, node_color="b")
plt.show()
</code></pre>
<p>The problem is that, after I installed the imported libraries, I get:</p>
<pre><code>Traceback (most recent call last):
File "...my.path...", line 14, in <module>
from contextily import add_basemap
ModuleNotFoundError: No module named 'contextily'
</code></pre>
<p>If I try to install the missing module (<code>pip install contextily</code>) I get:</p>
<pre><code>...
Collecting rasterio
Using cached rasterio-1.2.10.tar.gz (2.3 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [2 lines of output]
INFO:root:Building on Windows requires extra options to setup.py to locate needed GDAL files. More information is available in the README.
ERROR: A GDAL API version must be specified. Provide a path to gdal-config using a GDAL_CONFIG environment variable or use a GDAL_VERSION environment variable.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>I need your help: the answers from related questions did not work well. What can I do?</p>
|
<python><pip><networkx><geospatial><contextily>
|
2023-12-10 19:27:59
| 1
| 644
|
hellomynameisA
|
77,636,095
| 8,030,746
|
How to scrape a very nested element on a page with Selenium?
|
<p>I'm scraping <a href="https://higher.gs.com/results?&page=1&sort=RELEVANCE" rel="nofollow noreferrer">this page</a> for jobs. However, I'm having some trouble properly getting and scraping an element describing job positions (Analyst, Vice President, Associate, etc.).</p>
<p>To get to the job card, I'm using full XPATH (which already looks messy but I have no idea how else to do it). The code is:</p>
<pre><code>checked = wait.until(
EC.presence_of_all_elements_located(
(By.XPATH, '//*[@id="__next"]/main/div/div[2]/div/div/div[2]/div/div[2]/div/div/div[2]/div'))
)
</code></pre>
<p>And then to get the text inside these elements:
<a href="https://i.sstatic.net/ta1Sv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ta1Sv.png" alt="enter image description here" /></a></p>
<p>What should I do? I tried using XPATH again, but it just gets the first element's info, and repeats. For example:</p>
<pre><code>jobs = []
for row in checked:
jobs.append({
'info': row.find_element(By.XPATH, '//*[@id="__next"]/main/div/div/div/div/div/div/div[2]/div/div/div[2]/div[5]/div/div[1]/a/div/div[2]/span[2]').text
})
print(jobs)
</code></pre>
<p>Just gives me this result:</p>
<pre><code>[{'info': 'Analyst'}, {'info': 'Analyst'}, {'info': 'Analyst'}, {'info': 'Analyst'}, {'info': 'Analyst'}, {'info': 'Analyst'}, {'info': 'Analyst'}, {'info': 'Analyst'}, {'info': 'Analyst'}, {'info': 'Analyst'}]
</code></pre>
<p>How do I properly get the XPath for all these elements? What's a cleaner approach? I tried <code>By.CSS_SELECTOR</code>, too, but the elements are so deeply nested, with generic repeating classes, that I have no clue how to approach this.</p>
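<p>The repeated first result is the classic symptom: an XPath starting with <code>//</code> always searches from the document root, even when called on <code>row</code>. Prefixing the expression with a dot (<code>.//span[...]</code>) makes it relative to the current element. Since a live browser can't be shown here, this sketch demonstrates the same relative-vs-absolute distinction with the stdlib ElementTree on made-up markup; in Selenium the fix is simply <code>row.find_element(By.XPATH, './/...')</code>:</p>

```python
import xml.etree.ElementTree as ET

html = """<root>
  <div class="card"><span>Analyst</span></div>
  <div class="card"><span>Vice President</span></div>
</root>"""
tree = ET.fromstring(html)

cards = tree.findall(".//div")
# './/span' is evaluated relative to each card; a root-anchored '//span'
# in Selenium would keep returning the first match for every row
titles = [card.find(".//span").text for card in cards]
```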
<p>Thank you!</p>
|
<python><selenium-webdriver><web-scraping>
|
2023-12-10 19:15:52
| 1
| 851
|
hemoglobin
|
77,635,789
| 1,914,781
|
reshape dataframe according to duration
|
<p>I would like to reshape a dataframe according to the rules below:</p>
<ol>
<li>For each row, add a new row below it with <code>ts + dur</code>, copying the other columns.</li>
<li>Then add a row of <code>None</code> values after it.</li>
</ol>
<p>example:</p>
<pre><code>import pandas as pd
import itertools
def segments(df):
start = df['ts']
end = start + df['dur']
lst = list(zip(start, end, itertools.cycle([None])))
return lst
df = pd.DataFrame({
'ts': [1, 5, 10 ],
'dur': [1, 2, 1 ],
'typ': [0, 1, 0]
})
print(df)
lst = segments(df)
print(lst)
</code></pre>
<p>current output is a list:</p>
<pre><code> ts dur typ
0 1 1 0
1 5 2 1
2 10 1 0
[(1, 2, None), (5, 7, None), (10, 11, None)]
</code></pre>
<p>expected output:</p>
<pre><code> ts typ
0 1 0
1 2 0
2 NA NA
3 5 1
4 7 1
5 NA NA
6 10 0
7 11 0
8 NA NA
</code></pre>
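<p>For reference, one straightforward sketch of this reshape (the function name <code>expand</code> is mine, not from the question): emit the two boundary rows plus a NaN separator for each input row:</p>

```python
import numpy as np
import pandas as pd

def expand(df):
    # For each input row emit (ts, typ) and (ts + dur, typ),
    # followed by an all-NaN separator row
    rows = []
    for _, r in df.iterrows():
        rows.append({'ts': r['ts'], 'typ': r['typ']})
        rows.append({'ts': r['ts'] + r['dur'], 'typ': r['typ']})
        rows.append({'ts': np.nan, 'typ': np.nan})
    return pd.DataFrame(rows)

df = pd.DataFrame({'ts': [1, 5, 10], 'dur': [1, 2, 1], 'typ': [0, 1, 0]})
out = expand(df)
```

<p>Note the columns become floats once NaN is mixed in, which matches the <code>NA</code> rows in the expected output.</p>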
|
<python><pandas><dataframe>
|
2023-12-10 17:42:28
| 2
| 9,011
|
lucky1928
|
77,635,741
| 5,377,910
|
Pyarmor ModuleNotFoundError: No module named 'pyarmor_runtime_000000'
|
<p>I'm trying to obfuscate my project but got the error ModuleNotFoundError: No module named 'pyarmor_runtime_000000'. So I tried to obfuscate a very simple folder containing only 2 files, as follows:</p>
<pre><code>test
├── __init__.py
├── test2.py
└── test.py
</code></pre>
<p>The content of each file is simply a print statement. I used the command <code>pyarmor gen test</code> to obfuscate, which results in this structure of the dist folder:</p>
<pre><code>├── pyarmor_runtime_000000
│ ├── __init__.py
│ └── pyarmor_runtime.so
└── test
├── __init__.py
├── test2.py
└── test.py
</code></pre>
<p>Then I run <code>python PATH/dist/test/test.py</code>, but I get the error <code>ModuleNotFoundError: No module named 'pyarmor_runtime_000000'</code>.</p>
<p>I'm using the latest version of Pyarmor; my OS is Ubuntu and my Python version is 3.10.</p>
|
<python><obfuscation><pyarmor>
|
2023-12-10 17:28:03
| 2
| 401
|
shrouk mansour
|
77,635,685
| 957,052
|
Getting amplitude value of mp3 file played in Python on Raspberry Pi
|
<p>I am playing an mp3 file with Python on a Raspberry Pi:</p>
<pre><code>output_file = "sound.mp3"
pygame.mixer.init()
pygame.mixer.music.load(output_file)
pygame.mixer.music.play()
while pygame.mixer.music.get_busy():
pass
sleep(0.2)
</code></pre>
<p>Now while the mp3 file is playing, I would like to get the current amplitude of the sound played. Is there a way to do this? The only examples I was able to find work on a microphone input but not with a file being played.</p>
|
<python><audio>
|
2023-12-10 17:13:00
| 3
| 1,542
|
Reto
|
77,635,659
| 19,694,624
|
Can't reply to a message discord py slash command using cogs
|
<p>I am struggling to send a basic reply in discord.py from a cog. The problem is that the bot's response stays pending and eventually fails.</p>
<p>The problem:</p>
<p><a href="https://i.sstatic.net/6D53c.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6D53c.png" alt="enter image description here" /></a></p>
<p>Here is the code snippet of my cog:</p>
<pre><code>import discord
from discord import app_commands
from discord.ext import commands
class Example(commands.Cog):
def __init__(self, bot: commands.Bot):
self.bot = bot
@commands.Cog.listener()
async def on_ready(self):
print("ChatGPT cog connected.")
@commands.command()
async def sync(self, ctx) -> None:
fmt = await ctx.bot.tree.sync(guild=ctx.guild)
await ctx.send(f"Synced {len(fmt)} commands")
@app_commands.command(name="ping", description="desc")
async def ping(self, ctx):
await ctx.reply("pong")
</code></pre>
<p>And here is the code snippet of the main file:</p>
<pre><code>import discord
from discord.ext import commands
import os
from tools.config import TOKEN
import asyncio
client = commands.Bot(command_prefix="!", intents=discord.Intents.all())
@client.event
async def on_ready():
print("Success: Bot is connected to Discord")
async def load():
for filename in os.listdir("./cogs"):
if filename.endswith("py"):
await client.load_extension(f"cogs.{filename[:-3]}")
async def main():
await load()
await client.start(TOKEN)
if __name__ == "__main__":
asyncio.run(main())
</code></pre>
|
<python><python-3.x><discord><discord.py>
|
2023-12-10 17:04:13
| 2
| 303
|
syrok
|
77,635,629
| 7,563,454
|
Python: Set object property from a string name
|
<p>I want a class that can be initialized with a dictionary, assigning each entry in the dictionary as an attribute on itself. I wrote the following, which I expected to just work:</p>
<pre><code>class dat:
def __init__(self, data: dict):
for d in data:
self[d] = data[d]
</code></pre>
<p>But that produces the error:</p>
<pre><code>TypeError: 'dat' object does not support item assignment
</code></pre>
<p>How do I set a property on <code>self</code> by its name? Internally I'd write something like <code>self.prop = 0</code> when I know what <code>prop</code> is called, but since this comes from a dictionary, <code>prop</code> can be anything. The only way I know to set those is <code>self[prop]</code>, but if that doesn't work on objects, I presume there's some built-in syntax or function of the form <code>self.set(prop)</code>?</p>
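<p>For what it's worth, the usual tool for this is the built-in <code>setattr</code> (a sketch using the class name from the question):</p>

```python
class dat:
    def __init__(self, data: dict):
        for key, value in data.items():
            # setattr(obj, name, value) is the dynamic form of obj.name = value
            setattr(self, key, value)

d = dat({'prop': 0, 'name': 'x'})
```

<p>The reverse operation is <code>getattr(d, 'prop')</code>; supporting <code>self[key] = value</code> itself would instead require defining <code>__setitem__</code> on the class.</p>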
|
<python><python-3.x>
|
2023-12-10 16:57:01
| 1
| 1,161
|
MirceaKitsune
|
77,635,607
| 827,927
|
How to draw a two-dimensional function whose domain is a triangle?
|
<p>I would like to plot the function f(x,y) = x^2 + y^2 defined on all pairs x,y such that x + y ≤ 1, x ≥ 0, y ≥ 0. Currently, I use <code>imshow</code>:</p>
<pre><code>x = np.arange(0, 1, 0.01)
y = np.arange(0, 1, 0.01)
Z = compute_function(x, y) # evaluation of the function on the grid
imshow(Z, cmap=cm.RdBu, extent=(min(x),max(x),min(y), max(y)), origin='lower')
</code></pre>
<p>It looks like this, which is quite ugly:</p>
<p><a href="https://i.sstatic.net/4RGD7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4RGD7.png" alt="enter image description here" /></a></p>
<p>How can I make this plot nicer, showing only the relevant triangle (at the bottom-left)?</p>
<p>P.S. If possible, I would like to present the relevant triangle, not as a right-angled triangle, but as a symmetric, equilateral triangle</p>
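<p>One common trick (a sketch, not necessarily the prettiest rendering) is to mask points outside the triangle with NaN before calling <code>imshow</code>, since NaN cells are left blank:</p>

```python
import numpy as np

x = np.arange(0, 1, 0.01)
y = np.arange(0, 1, 0.01)
X, Y = np.meshgrid(x, y)
Z = X**2 + Y**2            # the example function
Z[X + Y > 1] = np.nan      # hide everything outside x + y <= 1
# imshow(Z, cmap=cm.RdBu, extent=(0, 1, 0, 1), origin='lower')
# now shows only the bottom-left triangle
```

<p>For the equilateral presentation, one hedged option is matplotlib's triangulation tools (<code>matplotlib.tri</code> / <code>tripcolor</code>) after mapping (x, y) into a skewed coordinate system.</p>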
|
<python><matplotlib>
|
2023-12-10 16:51:49
| 1
| 37,410
|
Erel Segal-Halevi
|
77,635,495
| 10,452,700
|
How can I plot errorbar-style min & max values with a scatter plot, instead of mean & std, over time within a dataframe in Python?
|
<p>Let's say I have the following datafarme:</p>
<pre class="lang-none prettyprint-override"><code>+-------------------+--------+--------+--------+
|timestamp |average |min |max |
+-------------------+--------+--------+--------+
|2021-08-11 04:05:06|2.0 |1.8 |2.2 |
|2021-08-11 04:15:06|2.3 |2.0 |2.7 |
|2021-08-11 09:15:26|2.5 |2.3 |2.8 |
|2021-08-11 11:04:06|2.3 |2.1 |2.6 |
|2021-08-11 14:55:16|2.6 |2.2 |2.9 |
|2021-08-13 04:12:11|2.1 |1.7 |2.3 |
+-------------------+--------+--------+--------+
</code></pre>
<p>I want to plot the <code>average</code> values as a <em>scatter plot</em>, plot the <code>min</code> and <code>max</code> columns as dashed <em>line plots</em>, and then fill the areas between <code>average</code> & <code>max</code> and between <code>average</code> & <code>min</code> <strong>separately</strong>, as shown in fig. 1 in the table below.</p>
<p>I found some close examples:</p>
<ul>
<li><a href="https://stackoverflow.com/a/43384034/10452700">How to create a min-max plot by month with fill_between</a></li>
<li><a href="https://stackoverflow.com/a/43384034/10452700">How to plot min max line plot in python pandas</a></li>
</ul>
<hr />
<p>I aim to develop this into something like <a href="https://matplotlib.org/stable/plot_types/stats/errorbar_plot.html" rel="nofollow noreferrer"><code>plt.errorbar()</code></a> (which deals with mean and std; see <a href="https://stackoverflow.com/a/57453989/10452700">example 1</a>, <a href="https://stackoverflow.com/q/57120164/10452700">example 2</a>), <strong>but</strong> I just want to illustrate <code>min</code> and <code>max</code> values instead of mean & std over time, as follows:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;"><img src="https://i.imgur.com/YcRNG0m.png" alt="img" /></th>
<th style="text-align: center;"><img src="https://i.imgur.com/yX9wTM8.png" alt="img" /></th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">Fig. 1: without errorbar style.</td>
<td style="text-align: center;">Fig. 2: with errorbar style.</td>
</tr>
</tbody>
</table>
</div>
<p>Sadly, I could not find an output example for <em>fig 2</em>, since errorbar examples normally show mean and std at the data points. For <em>fig 1</em>, this <a href="https://stackoverflow.com/q/57120164/10452700">post</a> (in the <a href="/questions/tagged/r" class="post-tag" title="show questions tagged 'r'" aria-label="show questions tagged 'r'" rel="tag" aria-labelledby="tag-r-tooltip-container">r</a> language) is part of what I want, but the areas should be filled with different colors (light red and light blue) separately.</p>
<p>Any help and guidance will be appreciated.</p>
|
<python><pandas><matplotlib><seaborn><plotly-express>
|
2023-12-10 16:16:47
| 2
| 2,056
|
Mario
|
77,635,481
| 10,620,003
|
colorize the background of different parts of the graph based on the event
|
<p>I have code which creates two subplots. I want to colorize the figure based on the values of the black curve, using three colors: for each stretch where the black curve equals one, the single index before the stretch should get one color, the stretch itself a second color, and the two indices after it a third color. For example:</p>
<p>From the point where the black curve becomes one: indices [9:10] blue, [10:14] red, and [14:16] green; likewise [23:24] blue, [24:35] red, and [35:37] green.
Can you please help me with that?</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
event = pd.DataFrame(np.random.randint(1, size = (56, 1)))
event.iloc[10:14, :] = 1
event.iloc[24:36, :] = 1
plt.figure(figsize=(38, 16))
ax1 = plt.subplot(2, 2, 1)
ax2 = ax1.twinx()
ax1.plot(pd.DataFrame(np.random.randint(300, size = (56,1))), label='1', color='g')
ax1.plot(pd.DataFrame(np.random.randint(3, size = (56,1))), label='2', color='r')
ax1.set_ylabel('m', fontsize=20)
ax1.legend(loc=1, prop={'size': 20})
ax1.set_title('0', fontsize=30)
#ax1.set_xticks([4, 8, 12, 16, 20, 24, 28, 32, 36, 40])
#ax1.set_xticklabels(['10 AM', '11 AM', '12', '1 PM', '2 PM', '3 PM', '4 PM', '5 PM', '6 PM', '7PM'])
ax1.tick_params(axis='y', labelsize=20) # Set font size of y-tick labels
ax1.tick_params(axis='x', labelsize=20)
ax2.plot(event, color='k')
ax2.set_ylabel('t', color='k', fontsize=20)
ax2.tick_params(axis='y', labelsize=20) # Set font size of y-tick labels
ax3 = plt.subplot(2, 2, 2)
ax4 = ax3.twinx() # Create a second y-axis
ax3.plot(pd.DataFrame(np.random.randint(300, size = (56,1))), label='1', color='g')
ax3.plot(pd.DataFrame(np.random.randint(400, size = (56,1))), label='2', color='r')
ax3.legend(loc=1, prop={'size': 20})
# ax3.set_xticks([4, 8, 12, 16, 20, 24, 28, 32, 36, 40])
# ax3.set_xticklabels(['10 AM', '11 AM', '12', '1 PM', '2 PM', '3 PM', '4 PM', '5 PM', '6 PM', '7PM'])
ax4.plot(event, color='k')
ax4.set_ylabel('t', color='k', fontsize=20)
ax4.tick_params(axis='y', labelsize=20)
ax3.set_ylabel('m', fontsize=30)
ax3.legend(loc=1, prop={'size': 20})
ax3.set_title('1', fontsize=30)
ax3.tick_params(axis='y', labelsize=20)
ax3.tick_params(axis='x', labelsize=20)
plt.show()
</code></pre>
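<p>A hedged sketch of computing those windows: detect the rising and falling edges of the event column with <code>np.diff</code>, after which each window can be painted with <code>axvspan</code> (the -1 and +2 offsets are taken from the example indices in the question):</p>

```python
import numpy as np
import pandas as pd

event = pd.DataFrame(np.zeros((56, 1), dtype=int))  # same shape as the question's
event.iloc[10:14, :] = 1
event.iloc[24:36, :] = 1

ev = event[0].to_numpy()
starts = np.flatnonzero(np.diff(ev, prepend=0) == 1)   # first index of each 1-run
ends = np.flatnonzero(np.diff(ev, append=0) == -1) + 1  # one past the last index

spans = [((s - 1, s), (s, e), (e, e + 2)) for s, e in zip(starts, ends)]
# drawing, e.g. on ax1:
# for (b0, b1), (r0, r1), (g0, g1) in spans:
#     ax1.axvspan(b0, b1, color='blue', alpha=0.2)
#     ax1.axvspan(r0, r1, color='red', alpha=0.2)
#     ax1.axvspan(g0, g1, color='green', alpha=0.2)
```

<p>The same <code>axvspan</code> calls can be repeated for <code>ax3</code> to color the second subplot.</p>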
|
<python><pandas><numpy><matplotlib>
|
2023-12-10 16:12:25
| 1
| 730
|
Sadcow
|
77,635,238
| 13,135,901
|
Filter rows based on max/min value in a group in Pandas
|
<p>I have a dataframe:</p>
<pre><code>mydict = [
{'HH': True, 'LL': False, 'High': 10, 'Low': 1},
{'HH': False, 'LL': True, 'High': 100, 'Low': 20},
{'HH': True, 'LL': False, 'High': 32, 'Low': 1},
{'HH': True, 'LL': False, 'High': 30, 'Low': 1},
{'HH': True, 'LL': False, 'High': 31, 'Low': 1},
{'HH': False, 'LL': True, 'High': 100, 'Low': 40},
{'HH': False, 'LL': True, 'High': 100, 'Low': 45},
{'HH': False, 'LL': True, 'High': 100, 'Low': 42},
{'HH': False, 'LL': True, 'High': 100, 'Low': 44},
{'HH': True, 'LL': False, 'High': 50, 'Low': 1},
]
df = pd.DataFrame(mydict)
print(df)
</code></pre>
<pre><code> HH LL High Low
0 True False 10 1
1 False True 100 20
2 True False 32 1
3 True False 30 1
4 True False 31 1
5 False True 100 40
6 False True 100 45
7 False True 100 42
8 False True 100 44
9 True False 50 1
</code></pre>
<p>I am trying to find peak values on a chart. So if there are several consecutive <code>True</code> values in either <code>HH</code> or <code>LL</code>, I want to keep only the row with the highest <code>High</code> or lowest <code>Low</code> accordingly. I tried doing it like this:</p>
<pre><code>check = True
while check:
df2 = df[df.HH | df.LL]
h1 = df2.HH & df2.HH.shift()
h2 = df2.High < df2.High.shift()
h3 = df2.HH & df2.HH.shift(-1)
h4 = df2.High < df2.High.shift(-1)
l1 = df2.LL & df2.LL.shift()
l2 = df2.Low > df2.Low.shift()
l3 = df2.LL & df2.LL.shift(-1)
l4 = df2.Low > df2.Low.shift(-1)
df3 = df2[(h1 & h2 | h3 & h4) | (l1 & l2 | l3 & l4)]
df.loc[df.index.isin(df3.index), ["HH", "LL"]] = False
check = not df3.empty
</code></pre>
<p>And it seems to produce the desired result on a small example dataframe:</p>
<pre><code> HH LL High Low
0 True False 10 1
1 False True 100 20
2 True False 32 1
3 False False 30 1
4 False False 31 1
5 False True 100 40
6 False False 100 45
7 False False 100 42
8 False False 100 44
9 True False 50 1
</code></pre>
<p>But for a reason I have yet to figure out, it still leaves occasional repeated peaks in a bigger dataframe.</p>
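<p>For what it's worth, a run-based sketch that avoids the <code>while</code> loop entirely (assuming, as in the sample, that <code>HH</code> and <code>LL</code> are complementary): label each run of consecutive equal <code>HH</code> values with a cumsum trick, then keep only the extreme row per run:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'HH':   [True, False, True, True, True, False, False, False, False, True],
    'LL':   [False, True, False, False, False, True, True, True, True, False],
    'High': [10, 100, 32, 30, 31, 100, 100, 100, 100, 50],
    'Low':  [1, 20, 1, 1, 1, 40, 45, 42, 44, 1],
})

# The counter increments whenever HH flips, so each block of consecutive
# equal HH values gets its own run id
run_id = (df['HH'] != df['HH'].shift()).cumsum()

keep = pd.Series(False, index=df.index)
for _, run in df.groupby(run_id):
    if run['HH'].iloc[0]:
        keep.loc[run['High'].idxmax()] = True   # highest High wins in an HH run
    else:
        keep.loc[run['Low'].idxmin()] = True    # lowest Low wins in an LL run

df.loc[~keep, ['HH', 'LL']] = False
```

<p>On the sample this reproduces the expected output in one pass, which may also sidestep whatever the shift-based loop misses on larger data.</p>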
|
<python><pandas>
|
2023-12-10 14:54:21
| 1
| 491
|
Viktor
|
77,635,111
| 13,285,779
|
Error building the ONNX library from scratch Ubuntu 22.04.2
|
<p>I'm trying to build the ONNX library from source, going step by step according to <a href="https://github.com/onnx/onnx/tree/main" rel="nofollow noreferrer">the official manual</a>. When I reach the final step of executing the <code>pip install -e .</code> command, I encounter some errors. As I understand it, the library has been built, but something is still wrong. There were other errors that I was able to resolve by installing additional packages and dependencies, but for this one I cannot figure out what's wrong. Here's the error output I get:</p>
<pre><code>Using pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)
Obtaining file:///home/curiouspan/studyingC/intel_test/onnx
Running command pip subprocess to install build dependencies
Collecting setuptools>=61
Using cached setuptools-69.0.2-py3-none-any.whl (819 kB)
Collecting protobuf>=3.20.2
Using cached protobuf-4.25.1-cp37-abi3-manylinux2014_x86_64.whl (294 kB)
Installing collected packages: setuptools, protobuf
Successfully installed protobuf-4.25.1 setuptools-69.0.2
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Installing build dependencies ... done
Running command Checking if build backend supports build_editable
Checking if build backend supports build_editable ... done
Running command Getting requirements to build wheel
running egg_info
writing manifest file 'UNKNOWN.egg-info/SOURCES.txt'
Getting requirements to build wheel ... done
Running command pip subprocess to install backend dependencies
Collecting wheel
Using cached wheel-0.42.0-py3-none-any.whl (65 kB)
Installing collected packages: wheel
Successfully installed wheel-0.42.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Installing backend dependencies ... done
Running command Preparing metadata (pyproject.toml)
running dist_info
creating /tmp/pip-modern-metadata-7id_y34f/UNKNOWN.egg-info
writing manifest file '/tmp/pip-modern-metadata-7id_y34f/UNKNOWN.egg-info/SOURCES.txt'
writing manifest file '/tmp/pip-modern-metadata-7id_y34f/UNKNOWN.egg-info/SOURCES.txt'
Preparing metadata (pyproject.toml) ... done
Installing collected packages: UNKNOWN
Running setup.py develop for UNKNOWN
Running command python setup.py develop
running develop
/usr/lib/python3/dist-packages/setuptools/command/easy_install.py:158: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
/usr/lib/python3/dist-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
/usr/lib/python3/dist-packages/pkg_resources/__init__.py:116: PkgResourcesDeprecationWarning: 1.1build1 is an invalid version and will not be supported in a future release
warnings.warn(
running egg_info
writing manifest file 'UNKNOWN.egg-info/SOURCES.txt'
running build_ext
running cmake_build
-- ONNX_PROTOC_EXECUTABLE: /usr/bin/protoc
-- Protobuf_VERSION: 3.21.12
Generated: /home/curiouspan/studyingC/intel_test/onnx/.setuptools-cmake-build/onnx/onnx-ml.proto
Generated: /home/curiouspan/studyingC/intel_test/onnx/.setuptools-cmake-build/onnx/onnx-operators-ml.proto
Generated: /home/curiouspan/studyingC/intel_test/onnx/.setuptools-cmake-build/onnx/onnx-data.proto
-- Could NOT find pybind11 (missing: pybind11_DIR)
-- pybind11 v2.10.4
--
-- ******** Summary ********
-- CMake version : 3.22.1
-- CMake command : /usr/bin/cmake
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler version : 11.3.0
-- CXX flags : -Wnon-virtual-dtor
-- Build type : Release
-- Compile definitions : __STDC_FORMAT_MACROS
-- CMAKE_PREFIX_PATH :
-- CMAKE_INSTALL_PREFIX : /usr/local
-- CMAKE_MODULE_PATH :
--
-- ONNX version : 1.16.0
-- ONNX NAMESPACE : onnx
-- ONNX_USE_LITE_PROTO : OFF
-- USE_PROTOBUF_SHARED_LIBS : OFF
-- Protobuf_USE_STATIC_LIBS : ON
-- ONNX_DISABLE_EXCEPTIONS : OFF
-- ONNX_DISABLE_STATIC_REGISTRATION : OFF
-- ONNX_WERROR : OFF
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
-- ONNX_BUILD_SHARED_LIBS :
-- BUILD_SHARED_LIBS :
--
-- Protobuf compiler : /usr/bin/protoc
-- Protobuf includes : /usr/include
-- Protobuf libraries : /usr/lib/x86_64-linux-gnu/libprotobuf.a
-- BUILD_ONNX_PYTHON : ON
-- Python version :
-- Python executable : /usr/bin/python3
-- Python includes : /usr/include/python3.10
-- Configuring done
-- Generating done
-- Build files have been written to: /home/curiouspan/studyingC/intel_test/onnx/.setuptools-cmake-build
[ 2%] Built target gen_onnx_proto
[ 8%] Built target gen_onnx_operators_proto
[ 8%] Built target gen_onnx_data_proto
Consolidate compiler generated dependencies of target onnx_proto
[ 22%] Built target onnx_proto
Consolidate compiler generated dependencies of target onnx
[ 97%] Built target onnx
Consolidate compiler generated dependencies of target onnx_cpp2py_export
[100%] Built target onnx_cpp2py_export
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/home/curiouspan/studyingC/intel_test/onnx/setup.py", line 321, in <module>
setuptools.setup(
File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/usr/lib/python3/dist-packages/setuptools/_distutils/core.py", line 148, in setup
return run_commands(dist)
File "/usr/lib/python3/dist-packages/setuptools/_distutils/core.py", line 163, in run_commands
dist.run_commands()
File "/usr/lib/python3/dist-packages/setuptools/_distutils/dist.py", line 967, in run_commands
self.run_command(cmd)
File "/usr/lib/python3/dist-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/home/curiouspan/studyingC/intel_test/onnx/setup.py", line 253, in run
return super().run()
File "/usr/lib/python3/dist-packages/setuptools/command/develop.py", line 34, in run
self.install_for_development()
File "/usr/lib/python3/dist-packages/setuptools/command/develop.py", line 114, in install_for_development
self.run_command('build_ext')
File "/usr/lib/python3/dist-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/usr/lib/python3/dist-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/home/curiouspan/studyingC/intel_test/onnx/setup.py", line 259, in run
return super().run()
File "/usr/lib/python3/dist-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/usr/lib/python3/dist-packages/setuptools/_distutils/command/build_ext.py", line 339, in run
self.build_extensions()
File "/home/curiouspan/studyingC/intel_test/onnx/setup.py", line 288, in build_extensions
if self.editable_mode:
File "/usr/lib/python3/dist-packages/setuptools/_distutils/cmd.py", line 103, in __getattr__
raise AttributeError(attr)
AttributeError: editable_mode
error: subprocess-exited-with-error
× python setup.py develop did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /usr/bin/python3 -c '
exec(compile('"'"''"'"''"'"'
# This is <pip-setuptools-caller> -- a caller that pip uses to run setup.py
#
# - It imports setuptools before invoking setup.py, to enable projects that directly
# import from `distutils.core` to work with newer packaging standards.
# - It provides a clear error message when setuptools is not installed.
# - It sets `sys.argv[0]` to the underlying `setup.py`, when invoking `setup.py` so
# setuptools doesn'"'"'t think the script is `-c`. This avoids the following warning:
# manifest_maker: standard file '"'"'-c'"'"' not found".
# - It generates a shim setup.py, for handling setup.cfg-only projects.
import os, sys, tokenize
try:
import setuptools
except ImportError as error:
print(
"ERROR: Can not execute `setup.py` since setuptools is not available in "
"the build environment.",
file=sys.stderr,
)
sys.exit(1)
__file__ = %r
sys.argv[0] = __file__
if os.path.exists(__file__):
filename = __file__
with tokenize.open(__file__) as f:
setup_py_code = f.read()
else:
filename = "<auto-generated setuptools caller>"
setup_py_code = "from setuptools import setup; setup()"
exec(compile(setup_py_code, filename, "exec"))
'"'"''"'"''"'"' % ('"'"'/home/curiouspan/studyingC/intel_test/onnx/setup.py'"'"',), "<pip-setuptools-caller>", "exec"))' develop --no-deps
cwd: /home/curiouspan/studyingC/intel_test/onnx/
error: subprocess-exited-with-error
× python setup.py develop did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
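<p>One hedged guess at the cause: <code>AttributeError: editable_mode</code> is a known symptom of an outdated setuptools; the traceback shows Ubuntu's system copy under <code>/usr/lib/python3/dist-packages</code> (setuptools 59.x, which predates the <code>editable_mode</code> attribute). A sketch of a workaround is to build inside a virtual environment with current tooling:</p>

```shell
# Sketch: isolate from Ubuntu's old system setuptools, then retry the build
python3 -m venv ~/onnx-venv
. ~/onnx-venv/bin/activate
python -m pip install --upgrade pip setuptools wheel
# then, from the onnx source directory, retry:
#   pip install -e .
```

<p>The venv path is a placeholder; the key point is that <code>pip install -e .</code> must see setuptools >= 64 rather than the distro copy.</p>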
|
<python><linux><pip><onnx>
|
2023-12-10 14:09:24
| 1
| 1,359
|
CuriousPan
|
77,635,009
| 7,453,683
|
Python extract only exceptions from traceback
|
<p>I have nested Python exceptions. I want to extract all of them from a traceback into a list, discarding everything else (files, code lines, etc.).</p>
<p>The code below extracts everything from a traceback as a list of strings</p>
<pre class="lang-py prettyprint-override"><code>import traceback
import requests
try:
r = requests.get('https://thisdoesntexist.test')
except Exception as e:
exc = traceback.format_exception(e)
print(exc)
</code></pre>
<p>Output:</p>
<pre><code>[
'Traceback (most recent call last):\n',
' File "c:\\Python312\\Lib\\site-packages\\urllib3\\connection.py", line 203, in _new_conn\n sock = connection.create_connection(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n',
' File "c:\\Python312\\Lib\\site-packages\\urllib3\\util\\connection.py", line 60, in create_connection\n for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n',
' File "c:\\Python312\\Lib\\socket.py", line 963, in getaddrinfo\n for res in _socket.getaddrinfo(host, port, family, type, proto, flags):\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n',
'socket.gaierror: [Errno 11001] getaddrinfo failed\n',
'\nThe above exception was the direct cause of the following exception:\n\n',
'Traceback (most recent call last):\n',
# Output truncated
'requests.exceptions.ConnectionError: HTTPSConnectionPool(host=\'thisdoesntexist.test\', port=443): Max retries exceeded with url: / (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x0000025E6C236600>: Failed to resolve \'thisdoesntexist.test\' ([Errno 11001] getaddrinfo failed)"))\n'
]
</code></pre>
<p>But i only need this:</p>
<pre><code>[
'socket.gaierror: [Errno 11001] getaddrinfo failed',
'urllib3.exceptions.NameResolutionError: <urllib3.connection.HTTPSConnection object at 0x0000025E6C236600>: Failed to resolve 'thisdoesntexist.test' ([Errno 11001] getaddrinfo failed)',
'urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host=\'thisdoesntexist.test\', port=443): Max retries exceeded with url: / (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x0000025E6C236600>: Failed to resolve \'thisdoesntexist.test\' ([Errno 11001] getaddrinfo failed)"))',
'requests.exceptions.ConnectionError: HTTPSConnectionPool(host=\'thisdoesntexist.test\', port=443): Max retries exceeded with url: / (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x0000025E6C236600>: Failed to resolve \'thisdoesntexist.test\' ([Errno 11001] getaddrinfo failed)"))\n'
]
</code></pre>
<p>Is there a better way of achieving this without iterating over strings and pattern matching?</p>
<p><strong>NB:</strong> It would be perfect if I could get a type of the exception in addition to the exception message string, e.g:</p>
<pre><code>[
{"type": "socket.gaierror", "message": 'socket.gaierror: [Errno 11001] getaddrinfo failed'},
{"type": "urllib3.exceptions.NameResolutionError", "message": 'urllib3.exceptions.NameResolutionError: <urllib3.connection.HTTPSConnection object at 0x0000025E6C236600>: Failed to resolve \'thisdoesntexist.test\' ([Errno 11001] getaddrinfo failed)'},
# etc
]
</code></pre>
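<p>For what it's worth, instead of pattern-matching the formatted strings you can walk the exception objects themselves via <code>__cause__</code>/<code>__context__</code> (a sketch; the helper name is mine, and note <code>str(exc)</code> gives the message without the type prefix shown in the desired output above):</p>

```python
def exception_chain(exc):
    """Walk __cause__/__context__ and return type/message dicts, oldest first."""
    chain = []
    while exc is not None:
        t = type(exc)
        chain.append({
            "type": f"{t.__module__}.{t.__qualname__}",
            "message": str(exc),
        })
        # __cause__ is set by `raise ... from ...`; __context__ by implicit chaining
        exc = exc.__cause__ or exc.__context__
    chain.reverse()  # innermost exception first, matching traceback order
    return chain
```

<p>In the <code>except Exception as e:</code> block, <code>exception_chain(e)</code> would then yield one dict per exception in the chain, with no string parsing.</p>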
|
<python><python-3.x><exception><traceback>
|
2023-12-10 13:37:17
| 1
| 861
|
Superbman
|
77,634,955
| 12,671,057
|
Why is the simpler loop slower?
|
<p>Called with <code>n = 10**8</code>, the simple loop is consistently significantly slower for me than the complex one, and I don't see why:</p>
<pre><code>def simple(n):
while n:
n -= 1
def complex(n):
while True:
if not n:
break
n -= 1
</code></pre>
<p>Some times in seconds:</p>
<pre class="lang-none prettyprint-override"><code>simple 4.340795516967773
complex 3.6490490436553955
simple 4.374553918838501
complex 3.639145851135254
simple 4.336690425872803
complex 3.624480724334717
Python: 3.11.4 (main, Sep 9 2023, 15:09:21) [GCC 13.2.1 20230801]
</code></pre>
<p>Here's the looping part of the bytecode as shown by <code>dis.dis(simple)</code>:</p>
<pre><code> 6 >> 6 LOAD_FAST 0 (n)
8 LOAD_CONST 1 (1)
10 BINARY_OP 23 (-=)
14 STORE_FAST 0 (n)
5 16 LOAD_FAST 0 (n)
18 POP_JUMP_BACKWARD_IF_TRUE 7 (to 6)
</code></pre>
<p>And for <code>complex</code>:</p>
<pre><code> 10 >> 4 LOAD_FAST 0 (n)
6 POP_JUMP_FORWARD_IF_TRUE 2 (to 12)
11 8 LOAD_CONST 0 (None)
10 RETURN_VALUE
12 >> 12 LOAD_FAST 0 (n)
14 LOAD_CONST 2 (1)
16 BINARY_OP 23 (-=)
20 STORE_FAST 0 (n)
9 22 JUMP_BACKWARD 10 (to 4)
</code></pre>
<p>So it looks like the complex one does more work per iteration (two jumps instead of one). Then why is it faster?</p>
<p>Seems to be a Python 3.11 phenomenon, see the comments.</p>
<p>Benchmark script (<a href="https://ato.pxeger.com/run?1=ZZBBDoIwEEXjtocws6MQIBI3hIQ7uHBnTIPahkaYklJUzuKGjR7K0wgUotG_mt--_E7__Vm1JlfYdY_GiCB-LZZCqxKMLDnIslLajDOZ5rqtCTlxAXV_UHCKbkKg1zWXBQe0ZhBCkEJk2aMa2NsPvNUN__BSACrznTDooHl2_ssUSoMAibCzW_jzC3vwYG0TDKTj4tQdraDRyvNiayot0VARMoZZyRnzJxICMC4h9trZjMUkjj98OrxwXUuFrq1pamtu7Q0" rel="noreferrer">Attempt This Online!</a>):</p>
<pre class="lang-py prettyprint-override"><code>from time import time
import sys
def simple(n):
while n:
n -= 1
def complex(n):
while True:
if not n:
break
n -= 1
for f in [simple, complex] * 3:
t = time()
f(10**8)
print(f.__name__, time() - t)
print('Python:', sys.version)
</code></pre>
|
<python><performance><cpython><python-internals><python-3.11>
|
2023-12-10 13:17:37
| 1
| 27,959
|
Kelly Bundy
|
77,634,823
| 334,155
|
Download Python packages with correct dependencies for offline install
|
<p>I am trying to install Apache Airflow 2.7.3 offline.
I have read about <code>pip download</code>, requirements files, and constraint files, but I am still not able to do this.</p>
<p>Below is my full script to demonstrate the problem.</p>
<pre><code>AIRFLOW_VERSION=2.7.3
PYTHON_VERSION="$(python --version | cut -d " " -f 2 | cut -d "." -f 1-2)"
CONSTRAINT_URL="https://raw.githubusercontent.com/apache/airflow/constraints-$AIRFLOW_VERSION/constraints-$PYTHON_VERSION.txt"
# download constraint file locally to be referred to later
curl -L $CONSTRAINT_URL --output constraints-$AIRFLOW_VERSION-$PYTHON_VERSION.txt
# create a folder to store downloaded packages
mkdir pip-download && cd pip-download
# pip download packages with constraints
pip download "apache-airflow==$AIRFLOW_VERSION" --constraint ../constraints-$AIRFLOW_VERSION-$PYTHON_VERSION.txt
# pip install packages locally with constraints
pip install apache-airflow==$AIRFLOW_VERSION --no-index --find-links=. --constraint ../constraints-$AIRFLOW_VERSION-$PYTHON_VERSION.txt
</code></pre>
<p>However, running this will results in the following error</p>
<pre><code>Processing ./connexion-2.14.2-py2.py3-none-any.whl (from apache-airflow==2.7.3)
Processing ./cron_descriptor-1.4.0.tar.gz (from apache-airflow==2.7.3)
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [4 lines of output]
Looking in links: .
Processing ./setuptools-69.0.2-py3-none-any.whl
ERROR: Could not find a version that satisfies the requirement wheel (from versions: none)
ERROR: No matching distribution found for wheel
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
(env) [airflow@89a126f62807 airflow]$
</code></pre>
<p>What am I doing wrong? Note that installing with the following command would work.</p>
<pre><code>AIRFLOW_VERSION=2.7.3
PYTHON_VERSION="$(python --version | cut -d " " -f 2 | cut -d "." -f 1-2)"
CONSTRAINT_URL="https://raw.githubusercontent.com/apache/airflow/constraints-$AIRFLOW_VERSION/constraints-$PYTHON_VERSION.txt"
pip install apache-airflow==$AIRFLOW_VERSION --constraint $CONSTRAINT_URL
</code></pre>
<p>Testing</p>
<pre><code>(env) [airflow@89a126f62807 airflow]$ airflow version
2.7.3
(env) [airflow@89a126f62807 airflow]$
</code></pre>
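<p>A hedged reading of the error: <code>cron_descriptor</code> comes down as an sdist (<code>.tar.gz</code>), so installing it triggers an isolated build that needs <code>wheel</code>, which was never downloaded into the offline folder. Two possible fixes (sketches, not tested against this exact mirror):</p>

```shell
# Option 1: also download the build backends into the offline folder,
# so the isolated build environment can find wheel/setuptools there
pip download setuptools wheel -d .

# Option 2: add --no-build-isolation to the offline install so the
# already-installed setuptools/wheel are used when building sdists:
#   pip install "apache-airflow==$AIRFLOW_VERSION" --no-index --find-links=. \
#       --no-build-isolation \
#       --constraint "../constraints-$AIRFLOW_VERSION-$PYTHON_VERSION.txt"
```

<p>Option 1 keeps the original install command unchanged; Option 2 assumes the target machine already has setuptools and wheel installed.</p>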
|
<python><pip><airflow>
|
2023-12-10 12:41:02
| 0
| 619
|
zaidwaqi
|
77,634,768
| 200,160
|
SQLalchemy can't communicate to MySQL server upon app start
|
<p>I'm having trouble with SQLAlchemy in a Flask app during the first minute after the app starts (or restarts).
The log shows exceptions like:</p>
<blockquote>
<p>sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (2006, 'MySQL server has gone away') <---- most often one</p>
</blockquote>
<blockquote>
<p>sqlalchemy.exc.ResourceClosedError: This result object does not return rows. It has been closed automatically.</p>
</blockquote>
<blockquote>
<p>sqlalchemy.exc.NoSuchColumnError: "Could not locate column in row for column 'users.id'"</p>
</blockquote>
<blockquote>
<p>sqlalchemy.exc.NoSuchColumnError: "Could not locate column in row for column 'payments.id'"</p>
</blockquote>
<blockquote>
<p>sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (2013, 'Lost connection to MySQL server during query')</p>
</blockquote>
<p>Then everything gets back to normal. It's not a critical issue but annoying.</p>
<p>I tried to run warming up queries upon application start:</p>
<pre><code> with app.app_context():
for _ in range(5):
try:
db.session.execute(statement)
logger.info("DB connection is successfully established")
return
except Exception as e:
logger.warning(e)
time.sleep(1)
raise Exception("Couldn't establish a DB connection")
</code></pre>
<p>It passes through just fine but then I see same issues.</p>
<p>It doesn't happen in the development environment, only in production, where the Flask app runs on a uWSGI server. Is there a way to fix it?</p>
<p>Update: connection URI looks like this: <code>"mysql+mysqldb://user:password@localhost/mydb?unix_socket=/var/run/mysqld/mysqld.sock"</code></p>
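<p>One possible mitigation sketch, assuming the drops come from MySQL's <code>wait_timeout</code> closing idle pooled connections: enable <code>pool_pre_ping</code> (and optionally <code>pool_recycle</code>) on the engine. SQLite is used below only so the snippet is self-contained; the options are the same for the <code>mysql+mysqldb</code> URI above.</p>

```python
from sqlalchemy import create_engine, text

# pool_pre_ping issues a lightweight ping before each connection checkout
# and transparently replaces stale connections; pool_recycle proactively
# discards connections older than N seconds (keep N below wait_timeout).
engine = create_engine(
    "sqlite://",          # stand-in URI; use the mysql+mysqldb URI in production
    pool_pre_ping=True,
    pool_recycle=280,
)

with engine.connect() as conn:
    value = conn.execute(text("SELECT 1")).scalar()
```

<p>With Flask-SQLAlchemy these options can presumably be passed via <code>SQLALCHEMY_ENGINE_OPTIONS = {"pool_pre_ping": True, "pool_recycle": 280}</code> in the app config.</p>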
|
<python><sqlalchemy><flask-sqlalchemy><uwsgi>
|
2023-12-10 12:19:08
| 1
| 825
|
Ralfeus
|
77,634,718
| 2,996,797
|
Server for saving TCP packets from routers
|
<p>Asking for advice here.</p>
<p>So I have a business that sells a specific kind of routers. One of the features is to monitor the activity of the router (i.e. making sure it's "live"). I asked the manufacturer of the routers to program them to send a TCP "ping" packet to my server IP address, to a specific port, every 15 minutes, with very basic data for monitoring (router serial number, "I'm alive" string).
I have a Windows Server running on a static IP which receives the data.</p>
<p>I tested this by sending a packet using <code>netcat</code>, and I can see it's received correctly. I used freeware called <a href="https://packetsender.com/" rel="nofollow noreferrer">Packet Sender</a> to open a specific TCP port and log the incoming traffic.</p>
<p>What I need to do now is run code on my server which listens to the specific port, and saves the data (router serial number, time of transmission, etc.) in a DB. This way, customers will later be able to log into their account, and view their router's "activity". There will also be advanced features like notifying the customer whenever the server doesn't receive data from their router over a period of X minutes/hours, etc.</p>
<p>I'm asking for advice as to how to set this up in the best way. I read about simple Python code that can listen to a port (like in <a href="https://stackoverflow.com/a/47316167/2996797">this answer</a>), and I guess I can use that to save the data to a local DB.
But would that be best? What about race conditions (i.e. multiple routers sending data at the same time)? I guess these transmissions would need to enter into some basic queue and be dealt with one by one? Is there some good existing framework I can use that will do all this for me?</p>
<p>Any advice as to how to implement this would be great. I know some basic Python backend programming.</p>
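<p>A minimal stdlib-only sketch of the listening side, assuming a payload like <code>b"SN123,I'm alive"</code> per connection (the real router format may differ; the names here are made up). Because <code>asyncio</code> runs all handlers on one event loop, the SQLite writes are naturally serialized, which addresses the race-condition concern at this load:</p>

```python
import asyncio
import sqlite3
import time

# In-memory DB for the sketch; a file path would persist pings across restarts.
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE pings (serial TEXT, seen_at REAL)")

async def handle(reader, writer):
    # One short ping per connection, assumed format: "<serial>,I'm alive"
    data = await reader.read(1024)
    serial = data.decode(errors="replace").split(",")[0].strip()
    db.execute("INSERT INTO pings VALUES (?, ?)", (serial, time.time()))
    db.commit()
    writer.close()
    await writer.wait_closed()

async def serve(host="0.0.0.0", port=9999):
    # Bind the listener; handle() runs once per incoming connection.
    server = await asyncio.start_server(handle, host, port)
    async with server:
        await server.serve_forever()

# asyncio.run(serve())  # start the listener
```

<p>A later "router went quiet" feature could then be a periodic query over <code>seen_at</code> per serial number.</p>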
|
<python><logging><server><network-programming>
|
2023-12-10 12:05:39
| 1
| 530
|
Cauthon
|
77,634,487
| 3,433,875
|
Improve Tesseract Accuracy on my book images, for Google Books API
|
<p>I am trying to upload my library to goodreads, so I am taking a picture to get the details of each book and then I use Google books api to get the isbn number.</p>
<p>Unfortunately, pytesseract is not detecting the text properly.</p>
<p>Here is one image:
<a href="https://i.sstatic.net/kFQVA.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kFQVA.jpg" alt="enter image description here" /></a></p>
<p>The result after thresholding:</p>
<p><a href="https://i.sstatic.net/vuqRg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vuqRg.png" alt="thresh" /></a></p>
<p>and here is my code:</p>
<pre><code>import pytesseract
from pytesseract import Output
import cv2
import pandas as pd
#We then read the image with text
img=cv2.imread(image_path)
original = img.copy()
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (3,3), 0)
# threshold the image, setting all foreground pixels to
# 255 and all background pixels to 0
thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
text = pytesseract.image_to_data(thresh, lang='eng', output_type='data.frame')
</code></pre>
<p>but the results are far from great.
I have tried to deskew the images and process them differently but the results are not better.</p>
<p>Is there a way to improve the image processing or another library method I should try that might work better?</p>
|
<python><ocr><tesseract><python-tesseract><google-books-api>
|
2023-12-10 10:50:46
| 0
| 363
|
ruthpozuelo
|
77,634,437
| 1,815,047
|
Flask-WTF reCAPTCHA ERROR for site owner: Invalid site key
|
<p>I'm using Flask 2.2.5 and Flask-WTF 1.1.1. <br />I want to use a Recaptcha field, and I get <strong>ERROR for site owner: Invalid site key</strong>
printed on the widget.</p>
<p><a href="https://i.sstatic.net/U4a37.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U4a37.png" alt="enter image description here" /></a></p>
<p>Here's my code:</p>
<pre><code>from flask_wtf import FlaskForm, RecaptchaField
class CommentForm(FlaskForm):
text = StringField("Some text", validators=[DataRequired()])
recaptcha = RecaptchaField()
submit = SubmitField("Go")
app.config['RECAPTCHA_PUBLIC_KEY'] = os.environ.get('RPUB_K')
app.config['RECAPTCHA_PRIVATE_KEY'] = os.environ.get('RPR_K')
</code></pre>
<p>I'm using Jinja 3.1.2 to render the form</p>
<pre><code>{{ render_form(form, novalidate=True, button_map={"submit": "primary"}) }}
</code></pre>
<p>What is missing/wrong?
Thanks</p>
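<p>One common cause of this exact widget message is <code>os.environ.get</code> silently returning <code>None</code> (env var unset, or set after the app started); another is using reCAPTCHA v3 keys where Flask-WTF's <code>RecaptchaField</code> expects v2 keys. A hedged fail-fast sketch, reusing the <code>RPUB_K</code>/<code>RPR_K</code> names above:</p>

```python
import os

def load_recaptcha_keys():
    # Fail fast instead of letting Flask-WTF render the widget with key=None.
    public = os.environ.get("RPUB_K")
    private = os.environ.get("RPR_K")
    if not public or not private:
        raise RuntimeError("reCAPTCHA env vars RPUB_K/RPR_K are not set")
    return public, private
```

<p>Calling this before assigning <code>app.config['RECAPTCHA_PUBLIC_KEY']</code> would surface a missing key immediately at startup.</p>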
|
<python><recaptcha>
|
2023-12-10 10:30:15
| 1
| 734
|
Ahadu Tsegaye Abebe
|
77,634,338
| 19,041,863
|
Histogram unequal number on the Y axis
|
<p>I want to plot a histogram in a Matplotlib subplot. I have the code</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
def plot_Diff(data):
fig = plt.figure(figsize=(10, 10))
ax =fig.add_subplot(423)
x= dfe['O18ad']
bins=[-20, -19, -18, -17, -16, -15, -14, -13, -12, -11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1]
old_yticks = ax.get_yticks()
bin_counts, _, bars = plt.hist(x, bins, alpha=0.65, label='old', edgecolor='black', color='lightgrey')
new_ticks = old_yticks[old_yticks < bin_counts.max()]
new_ticks = np.append(new_ticks, bin_counts.max())
ax.set_yticks(new_ticks)
plt.show()
</code></pre>
<p>my data:</p>
<pre><code>0 -4.268475
1 -4.265793
2 -4.263120
3 -4.260457
4 -4.257803
...
359995 -7.813345
359996 -7.821394
359997 -7.773479
359998 -7.807605
359999 -7.797769
</code></pre>
<p>The result looks like this
<a href="https://i.sstatic.net/UHfYE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UHfYE.png" alt="enter image description here" /></a></p>
<p>How can I get the maximum value to be displayed automatically on the Y axis?</p>
<p>When I use the suggested answer, it looks like this! I have also adapted the code accordingly:</p>
<p>
<a href="https://i.sstatic.net/6jvLe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6jvLe.png" alt="enter image description here" /></a></p>
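<p>The tick logic can be checked without plotting by computing the counts with <code>np.histogram</code> first (same bin edges as above); the max count is then appended as a tick. A small sketch with stand-in data:</p>

```python
import numpy as np

# Small stand-in sample; the real data is dfe['O18ad'].
x = np.array([-4.27, -4.26, -4.26, -7.81, -7.82, -7.77])
bins = np.arange(-20, 2)  # edges -20, -19, ..., 1, same as in the code above

counts, _ = np.histogram(x, bins=bins)
old_ticks = np.arange(0, counts.max() + 2)      # stand-in for ax.get_yticks()
new_ticks = old_ticks[old_ticks < counts.max()] # drop ticks above the max count
new_ticks = np.append(new_ticks, counts.max())  # force the max onto the axis
```

<p>Note that in the original code <code>ax.get_yticks()</code> is read <em>before</em> <code>plt.hist</code> runs, so it still holds the default 0..1 ticks; reading it after the histogram is drawn may be what was intended.</p>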
|
<python><matplotlib><histogram>
|
2023-12-10 09:50:51
| 1
| 303
|
Weiss
|
77,634,299
| 2,573,075
|
I try to build a basic Odoo 15 module but gives error at inheritance
|
<p>I'm trying to create a graphical reporting module, but it gives an error at inheritance.
I'm starting simple by inheriting the crm module so I can manage it.
My module is:</p>
<pre><code>class XPFReporting(models.Model):
"""
This is the reporting system that will take all data from crm to further filter and order it
"""
_name = 'xpf.reporting'
_description = "XPF Reporting"
_inherit = 'crm.lead'
custom_field = fields.Char(string='Custom Field')
</code></pre>
<p>and my view:</p>
<pre><code><?xml version="1.0" encoding="utf-8" ?>
<odoo>
<!-- View for model -->
<record id="view_xpf_reporting_tree" model="ir.ui.view">
<field name="name">xpf.reporting.tree</field>
<field name="model">xpf.reporting</field>
<field name="arch" type="xml">
<tree string="default tree">
<field name="custom_field"/>
<!-- <field name="*" /> -->
</tree>
</field>
</record>
<!-- Define the button to generate PDF report -->
<record id="view_xpf_reporting_form" model="ir.ui.view">
<field name="name">xpf.reporting.form</field>
<field name="model">xpf.reporting</field>
<field name="arch" type="xml">
<form>
<field name="custom_field"/>
<!--
group - Place holder for filtering
-->
<footer>
<button string="Generate PDF Report" type="object" class="oe_highlight" icon="fa-file-pdf-o"
name="generate_pdf_report"/>
</footer>
</form>
</field>
</record>
<record id="action_xpf_reporting" model="ir.actions.act_window">
<field name="name">XPF Reporting</field>
<field name="res_model">xpf.reporting</field>
<field name="view_mode">tree,form</field>
</record>
<menuitem id="menu_xpf_reporting" name="XPF Reporting" action="action_xpf_reporting" parent="" sequence="1"/>
</odoo>
</code></pre>
<p>The issue is that when I try to install it, it returns:
<code>TypeError: Many2many fields xpf.reporting.tag_ids and crm.lead.tag_ids use the same table and columns</code></p>
<p>Any idea is gladly appreciated.
Merci</p>
|
<python><xml><module><odoo><odoo-15>
|
2023-12-10 09:33:44
| 1
| 633
|
Claudiu
|
77,633,752
| 10,455,471
|
Python Setuptools / pyproject.toml - Console Scripts Can't Import Main Package Name: ModuleNotFoundError
|
<h2>Issue</h2>
<p>I'm making a python template which is supposed to be able to be run in 3 different ways:</p>
<ol>
<li>Run <code>python python3-template.py</code> in the main directory (just imports the directory-based package <code>python3_template</code> and runs <code>python3_template.run()</code>)</li>
<li>Build a pyinstaller <code>.exe</code> from <code>python3-template.py</code> and run that</li>
<li>Build a wheel with the console script <code>python3-template = "python3_template:run"</code> and run that</li>
</ol>
<p>I have <code>1</code> and <code>2</code> working perfectly, and the wheel for <code>3</code> builds with all of the data included. However, after installing the wheel, and trying to run <code>python3-template</code>, I get this error:</p>
<pre><code>Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\Ella\miniconda3\envs\build\Scripts\python3-template.exe\__main__.py", line 4, in <module>
ModuleNotFoundError: No module named 'python3_template'
</code></pre>
<p>Additionally, I can't <code>import python3_template</code> in the interactive interpreter, with the same error. I can import the package when I am in the main project directory just fine (of course, it's loading from the <code>python3_project</code> directory), but I can't get it to import as a regular installed package.</p>
<h2>Project Info</h2>
<p>Here is the directory structure:</p>
<pre><code>python3-template/
├── docs/
│ └── example.jpg
├── python3_template/
│ ├── constants/
│ │ ├── __init__.py
│ │ ├── defaults.py
│ │ ├── enums.py
│ │ ├── links.py
│ │ ├── paths.py
│ │ ├── platform.py
│ │ ├── resources.py
│ │ └── version.py
│ ├── helpers/
│ │ ├── __init__.py
│ │ └── core.py
│ ├── resources/
│ │ ├── icon.ico
│ │ └── icon.png
│ ├── __init__.py
│ ├── calculator.py
│ ├── main.py
│ └── version.yml
├── build.bat
├── environment.yml
├── LICENSE
├── MANIFEST.in
├── pyproject.toml
├── python3-template.py
├── README.md
└── README_pypi.md
</code></pre>
<p>And here is the <code>pyproject.toml</code>:</p>
<pre class="lang-js prettyprint-override"><code>[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
[project]
name = "python3-template"
version = "0.1.0"
description = "A Python 3 Template"
readme = "README_pypi.md"
license= {file = "LICENSE"}
authors = [
{name = "Ella Jameson", email = "ellagjameson@gmail.com"}
]
classifiers = []
dependencies = [
"pyyaml"
]
[project.urls]
homepage = "https://github.com/nimaid/python3-template"
repository = "https://github.com/nimaid/python3-template"
issues = "https://github.com/nimaid/python3-template/issues"
[project.scripts]
python3-template = "python3_template:run"
[tool.setuptools]
include-package-data = true
[tool.setuptools.packages.find]
where = ["python3_template"]
</code></pre>
<p>I've even made a branch of my repository that you can play with if you need to: <a href="https://github.com/nimaid/python3-template/tree/init-but-can%27t-run-script-import-error" rel="nofollow noreferrer">https://github.com/nimaid/python3-template/tree/init-but-can't-run-script-import-error</a></p>
<h2>What I've Tried So Far</h2>
<ul>
<li>Renaming the project to have no hyphens or underscore (no effect)</li>
<li>Using different script endpoints like <code>python3_template.main:run</code> (no effect)</li>
<li>Changing the package name to use underscores instead of dashes (<code>setuptools</code> overrides this back to a dash)</li>
</ul>
<h2>Summary</h2>
<p>I have no clue why this isn't working with this template. I've released packages in the past with what I'm pretty sure is the same structure, but I may have made an error in transferring those patterns into this template. I have no other ideas as to why it won't work, and I would very much appreciate any help you can give! Thank you!</p>
|
<python><python-3.x><setuptools><importerror><pyproject.toml>
|
2023-12-10 05:14:57
| 1
| 648
|
Ella Jameson
|
77,633,722
| 15,209,066
|
How to make pypdf2 Annotations printable
|
<p>I'm Using the <code>PyPDF2</code> Library to do some annotations to a pdf. Here's the code that I'm using</p>
<pre><code>from PyPDF2 import PdfReader, PdfWriter
from PyPDF2.generic import AnnotationBuilder
reader = PdfReader("Example.pdf")
page = reader.pages[0]
writer = PdfWriter()
writer.add_page(page)
annotation = AnnotationBuilder.free_text(
"Hello World",
rect=(50, 550, 200, 650),
font="Arial",
bold=True,
italic=True,
font_size="20pt",
font_color="00ff00",
border_color="0000ff",
background_color="cdcdcd",
)
writer.add_annotation(page_number=0, annotation=annotation)
with open("annotated-pdf.pdf", "wb") as fp:
writer.write(fp)
</code></pre>
<p>But the thing here is that The annotations are shown in the Acrobat reader</p>
<p><a href="https://i.sstatic.net/w5phq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w5phq.png" alt="the ss from the acrobat" /></a></p>
<p>but when it comes to the printable preview the annotations are vanished. Is there any way that I can make those annotations printable</p>
<p><a href="https://i.sstatic.net/mw03y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mw03y.png" alt="enter image description here" /></a></p>
|
<python><pdf><acrobat><pypdf>
|
2023-12-10 04:57:13
| 0
| 377
|
rav2001
|
77,633,675
| 10,620,003
|
Build a df from two other df based on 0,1 in the df1 and the values in df2
|
<p>I have two dfs. One df has only 1's and 0's (<strong>df_one_zero</strong>), and the other one has different values (<strong>df_value_total</strong>). These two have thousands of rows and columns!</p>
<p>The first column of each df is the id, and we don't want to change that at all.</p>
<p>I want to move through the columns with a sliding window of 5, so in each window I want to work with these two: df_one_zero_window and df_value_window.</p>
<p>In each window, the columns where the 1's start and end are important.</p>
<p>Then I want to create another df_out with the same shape as df_one_zero (initially all zeros). Suppose that in a row, the 1 starts in column col and ends in column col_end.</p>
<p>Put the value <strong>df_out(row, col-1) = df_value_window(row, col-1) - df_value_window(row, col)</strong>, and the other values are zero. (If the 1 starts at index 0 or ends at the last column, that's OK; no value needs to be put there.)
Also, if the 1 in df_one_zero_window ends at col_end, then <strong>df_out(row, col_end+1) = df_value_window(row, col_end+1) - df_value_window(row, col_end)</strong>. In the following dfs, I want to create df_out = df2. The values in df_value_total are very diverse; here I only selected some easy numbers for my df.</p>
<pre><code>## only has zero and 1
df = pd.DataFrame()
df['id'] = ['a', 'b', 'c']
df['0'] = [0, 0, 0]
df['1'] = [1, 0, 1]
df['2'] = [1, 1, 1]
df['3'] = [0, 0, 0]
df['4'] = [0, 0, 0]
df['5'] = [0, 0, 0]
df['6'] = [0, 1, 1]
df['7'] = [0, 0, 1]
df['8'] = [0, 0, 0]
df['9'] = [0, 0, 0]
df['10'] = [0, 0, 0]
df['11'] = [0, 0, 1]
df['12'] = [1, 1, 1]
df['13'] = [1, 0, 0]
df['14'] = [0, 0, 0]
df['15'] = [0, 0, 0]
df['16'] = [0, 1, 1]
df['17'] = [1, 1, 0]
df['18'] = [0, 0, 0]
df['19'] = [0, 0, 0]
## this is that which has different values
df1 = pd.DataFrame()
df1['id'] = ['a', 'b', 'c']
df1['0'] = [4, 0, 9]
df1['1'] = [0, 0, 1]
df1['2'] = [1, 1, 3]
df1['3'] = [6, 2, 0]
df1['4'] = [0, 0, 0]
df1['5'] = [0, 5, 0]
df1['6'] = [0, 1, 2]
df1['7'] = [0, 0, 1]
df1['8'] = [0, 0, 3]
df1['9'] = [0, 0, 0]
df1['10'] = [0, 0, 0]
df1['11'] = [0, 0, 1]
df1['12'] = [1, 1, 1]
df1['13'] = [1, 3, 4]
df1['14'] = [9, 0, 0]
df1['15'] = [0, 0, 0]
df1['16'] = [2, 1, 1]
df1['17'] = [1, 1, 4]
df1['18'] = [0, 5, 0]
df1['19'] = [0, 0, 0]
</code></pre>
<p>I tried to do some parts, but I couldn't track where the 1 finishes, and I also think it is not optimal! Can you please help me with that?</p>
<pre><code>def generate_df_out(df_one_zero, df_value_total, window_size=5):
for col in range(1, len(df_one_zero.columns), window_size):
df1_window = df_one_zero.iloc[:, col:col + window_size]
df_value_window = df_value_total.iloc[:, col:col + window_size]
for row in range(df1_window.shape[0]):
start_idx = 0
for col in range(window_size):
if df1_window.iloc[row, col] == 1 and start_idx==0:
df_out.iloc[row, col-1] = df_value_window.iloc[row, col] - df_value_window.iloc[row, col-1]
start_idx += col
return df_out
df_out = generate_df_out(df, df1)
</code></pre>
<p>The output I want is like this:</p>
<pre><code>df2 = pd.DataFrame()
df2['id'] = ['a', 'b', 'c']
df2['0'] = [4, 0, 8]
df2['1'] = [0, -1, 0]
df2['2'] = [0, 1, 0]
df2['3'] = [5, 1, -1]
df2['4'] = [0, 0, 0]
df2['5'] = [0, 4, -1]
df2['6'] = [0, 0, 0]
df2['7'] = [0, -1, 0]
df2['8'] = [0, 0, 2]
df2['9'] = [0, 0, 0]
df2['10'] = [0, 0, -1]
df2['11'] = [-1, -1, 0]
df2['12'] = [0, 0, 0]
df2['13'] = [0, 2, 3]
df2['14'] = [9, 0, 0]
df2['15'] = [0, -1, -1]
df2['16'] = [1, 0, 0]
df2['17'] = [0, 0, 3]
df2['18'] = [-1, 4, 0]
df2['19'] = [0, 0, 0]
df2
id 0 1 2 3 4 5 6 7 8 ... 10 11 12 13 14 15 16 17 18 19
0 a 4 0 0 5 0 0 0 0 0 ... 0 -1 1 1 9 0 1 0 -1 0
1 b 0 -1 1 1 0 4 0 -1 0 ... 0 -1 0 2 0 -1 0 0 4 0
2 c 8 0 0 -1 0 -1 0 0 2 ... -1 0 0 3 0 -1 0 3 0 0
</code></pre>
|
<python><pandas><dataframe>
|
2023-12-10 04:28:32
| 1
| 730
|
Sadcow
|
77,633,472
| 3,884,713
|
A Simple Toy ML problem that surprisingly fails to learn anything
|
<p>This is a much simplified network from a real problem that, to me, has a surprising <strong>INability</strong> to learn a simple task via backprop, i.e., it can't overfit or learn at all. This simple version has come at the cost of many gray hairs and much simplification, and I truly ignored the option that something this <em>simple</em> would fail to learn, until finally I tested it, and, sure enough, it fails.</p>
<p><em>(Runnable version at bottom)</em></p>
<p>It is a FFNN tasked with turning input vectors into output vectors, all drawn from <code>randn</code>, and all with the same dimension.</p>
<ol>
<li><p><code>input</code> vectors are compared via cosine similarity with a model parameter called <code>predicate</code>.</p>
</li>
<li><p>if <code>predicate</code> approaches <code>1.0</code> the network should output it's model parameter vector <code>true</code>. Otherwise, output parameter <code>false</code>.</p>
</li>
<li><p><code>loss</code> is defined as <code>1 - cosine_similarity(output, expected)</code></p>
</li>
</ol>
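<p>As a sanity check on the setup: with <code>VEC_SIZE = 128</code>, the cosine similarity between two independent <code>randn</code> vectors concentrates near 0 (spread roughly <code>1/sqrt(128) ≈ 0.09</code>), so <code>matched</code> is near 0 for negatives and only near 1 when the input is the predicate itself. A quick check sketch:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
VEC_SIZE = 128

def cos(a, b):
    # plain cosine similarity of two vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# cosine similarity over many independent random pairs
sims = [cos(rng.standard_normal(VEC_SIZE), rng.standard_normal(VEC_SIZE))
        for _ in range(200)]
spread = float(np.std(sims))   # roughly 1/sqrt(128) ~ 0.09
```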
<p>Note, if you cheat and set the model's internal vectors to the expected values, the model performs with expectedly great accuracy and low loss, and learning doesn't degrade (too much), so there is a stable fixed point in the loss landscape.</p>
<p>Here's the model:</p>
<pre><code>class Sim(nn.Module):
def __init__(self, ):
super(Sim, self).__init__()
self.predicate = nn.Parameter(torch.randn(VEC_SIZE) * 1e-2)
self.true = nn.Parameter(torch.randn(VEC_SIZE) * 1e-2)
self.false = nn.Parameter(torch.randn(VEC_SIZE) * 1e-2)
def forward(self, input):
# input : [batch_size, vec_size]
# output : [batch_size, vec_size]
batch_size = input.size(0)
predicate = self.predicate.unsqueeze(0)
matched = torch.cosine_similarity(predicate, input, dim=1)
return (
einsum('v, b -> bv', self.true, matched) +
einsum('v, b -> bv', self.false, 1 - matched)
)
</code></pre>
<p>The <code>self.predicate</code> should eventually approximate <code>predicate_vec</code>, and the same for <code>self.true</code> and <code>self.false</code>. If it learns these expected values, it should have the lowest loss. Here's the data:</p>
<pre><code>predicate_vec = torch.randn(VEC_SIZE)
true_vec = torch.randn(VEC_SIZE)
false_vec = torch.randn(VEC_SIZE)
dataset = (
# positives
[(predicate_vec, true_vec) for _ in range(N_DATASET_POS)] +
# negatives
[(torch.randn(VEC_SIZE), false_vec) for _ in range(N_DATASET_POS)]
)
dataset_loader = torch.utils.data.DataLoader(dataset,
batch_size=BATCH_SIZE,
shuffle=True)
</code></pre>
<p>When this runs, the output looks something like this, where the loss does decrease some, but accuracy at the task doesn't improve at all.</p>
<pre><code>Epoch 260, Training Loss: 0.005365743
Epoch 270, Training Loss: 0.005237889
Epoch 280, Training Loss: 0.005211671
Epoch 290, Training Loss: 0.005140129
Epoch 300, Training Loss: 0.005135684
SIMILARITY OF LEARNED VECS: p=-0.352 t=-0.244 f=0.266
</code></pre>
<p>If I "cheat" and set the model's internal params to the known-good values, the loss plummets, and the params are robust to the training procedure. Notice, the cos-sim of <code>predicate</code>, <code>true</code>, and <code>false</code> are all around <code>1.0</code> as expected:</p>
<pre><code>Epoch 570, Training Loss: 0.003478402
Epoch 580, Training Loss: 0.003488328
Epoch 590, Training Loss: 0.003480982
Epoch 600, Training Loss: 0.003456787
SIMILARITY OF LEARNED VECS: p=0.990 t=0.994 f=0.992
</code></pre>
<p><strong>Runnable version:</strong></p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.metrics import f1_score
from torch import einsum
import pdb
import torch
import numpy as np
import random
import string
from datasets import Dataset
torch.set_printoptions(precision=3)
SEED = 42
torch.manual_seed(SEED)
np.random.seed(SEED)
random.seed(SEED)
DEVICE = 'cuda'
##########
# Params
NUM_EPOCHS = 1000
BATCH_SIZE = 10
GRAD_CLIP = 10.0
LR = 1e-2
WD = 0
N_DATASET_POS = 100
N_DATASET_NEG = 100
VEC_SIZE = 128
##########
# Data
# We'll try and find these same vectors being learned within the network.
predicate_vec = torch.randn(VEC_SIZE)
true_vec = torch.randn(VEC_SIZE)
false_vec = torch.randn(VEC_SIZE)
dataset = (
# positives
[(predicate_vec, true_vec) for _ in range(N_DATASET_POS)] +
# negatives
[(torch.randn(VEC_SIZE), false_vec) for _ in range(N_DATASET_POS)]
)
dataset_loader = torch.utils.data.DataLoader(dataset,
batch_size=BATCH_SIZE,
shuffle=True)
##########
# Model
class Sim(nn.Module):
def __init__(self, ):
super(Sim, self).__init__()
self.predicate = nn.Parameter(torch.randn(VEC_SIZE) * 1e-2)
self.true = nn.Parameter(torch.randn(VEC_SIZE) * 1e-2)
self.false = nn.Parameter(torch.randn(VEC_SIZE) * 1e-2)
def forward(self, input):
# input : [batch_size, vec_size]
# output : [batch_size, vec_size]
batch_size = input.size(0)
predicate = self.predicate.unsqueeze(0)
matched = torch.cosine_similarity(predicate, input, dim=1)
return (
einsum('v, b -> bv', self.true, matched) +
einsum('v, b -> bv', self.false, 1 - matched)
)
def run_epoch(data_loader, model, optimizer):
model.train()
total_loss = 0
all_predictions = []
all_true_values = []
for batch in data_loader:
input_tensor, target_tensor = batch
input_tensor = input_tensor.to(DEVICE)
target_tensor = target_tensor.to(DEVICE)
model.zero_grad()
output = model(input_tensor)
loss = (1 - torch.cosine_similarity(target_tensor, output.unsqueeze(1))).mean()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=GRAD_CLIP)
optimizer.step()
total_loss += loss.item() / target_tensor.size(1)
return total_loss / len(data_loader)
##########
# Go
model = Sim()
model = model.to(DEVICE)
##########
# Cheat and set to expected value
# with torch.no_grad():
# model.predicate[:] = predicate_vec
# model.true[:] = true_vec
# model.false[:] = false_vec
optimizer = torch.optim.AdamW(model.parameters(), lr=LR, weight_decay=WD)
def check(model):
''' Check how much internal vectors are aligning to known vectors. '''
p = torch.cosine_similarity(model.predicate.to(DEVICE), predicate_vec.to(DEVICE), dim=0)
t = torch.cosine_similarity(model.true.to(DEVICE), true_vec.to(DEVICE), dim=0)
f = torch.cosine_similarity(model.false.to(DEVICE), false_vec.to(DEVICE), dim=0)
print(f'SIMILARITY OF LEARNED VECS: p={p:>.3f} t={t:>.3f} f={f:>.3f}')
for epoch in range(NUM_EPOCHS):
loss = run_epoch(dataset_loader, model, optimizer)
if epoch % 10 == 0:
print(f'Epoch {epoch}, Training Loss: {loss:>.9f}')
if epoch % 100 == 0:
check(model)
</code></pre>
|
<python><machine-learning><pytorch><neural-network><cosine-similarity>
|
2023-12-10 02:00:35
| 0
| 3,806
|
Josh.F
|
77,633,368
| 1,434,495
|
How come using np.linalg.norm introduces unseen numerical inequality but writing it out does not?
|
<p>Let's say I have these two variables:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
a = np.array([[ 0, 1, 10, 2, 5]])
b = np.array([[ 0, 1, 18, 15, 5],
[13, 9, 23, 3, 22],
[ 2, 10, 17, 4, 8]])
</code></pre>
<p><strong>Method 1</strong></p>
<pre class="lang-py prettyprint-override"><code>m1 = -np.linalg.norm(a[:, np.newaxis, :] - b[np.newaxis, :, :], axis=-1) ** 2 / 2
</code></pre>
<p><strong>Method 2</strong></p>
<pre class="lang-py prettyprint-override"><code>m2 = -np.sum(np.square(a[:, np.newaxis, :] - b[np.newaxis, :, :]), axis=-1) / 2
</code></pre>
<p>Both of these outputs look alike (at least according to <code>print()</code>):</p>
<pre class="lang-py prettyprint-override"><code>array([[-116.5, -346. , -73.5]])
</code></pre>
<p>But</p>
<pre class="lang-py prettyprint-override"><code>>>> np.array_equal(m1, m2)
False
</code></pre>
<p>What makes it interesting is that defining a literal to check equality leads to:</p>
<pre class="lang-py prettyprint-override"><code>>>> sanity_check = np.array([[-116.5, -346. , -73.5]])
>>> np.array_equal(sanity_check, m1)
False
>>> np.array_equal(sanity_check, m2)
True
</code></pre>
<p>How come using the method with <code>np.linalg.norm</code> is the odd one out? If <code>m1</code> is unequal, how come its <code>print()</code> looks equal?</p>
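<p>A plausible mechanism: the norm route computes <code>sqrt(sum of squares)</code> and then squares it back, and <code>sqrt(x)**2</code> is not guaranteed to round-trip exactly in float64, so <code>m1</code> can differ from <code>m2</code> by about one ULP while <code>print</code>'s default precision hides it. A check sketch:</p>

```python
import numpy as np

a = np.array([[0, 1, 10, 2, 5]], dtype=float)
b = np.array([[0, 1, 18, 15, 5],
              [13, 9, 23, 3, 22],
              [2, 10, 17, 4, 8]], dtype=float)

m1 = -np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1) ** 2 / 2
m2 = -np.sum(np.square(a[:, None, :] - b[None, :, :]), axis=-1) / 2

max_diff = float(np.abs(m1 - m2).max())   # tiny, but possibly nonzero
agree = bool(np.allclose(m1, m2))         # tolerance-based comparison
```

<p><code>np.array_equal</code> demands bit-exact equality; <code>np.allclose</code> (or <code>np.testing.assert_allclose</code>) is the appropriate comparison for floating-point results like these.</p>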
|
<python><numpy><linear-algebra>
|
2023-12-10 00:55:17
| 2
| 2,957
|
Flair
|
77,633,295
| 16,844,801
|
Warning when using Hugging Face model
|
<p>When I am using HF I keep getting this warning and the models seem to keep hallucinating:</p>
<pre><code>import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Let's chat for 5 lines
for step in range(5):
    # encode the new user input, add the eos_token and return a tensor in Pytorch
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty print last output tokens from bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
</code></pre>
<p>The output:</p>
<pre><code>A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.
DialoGPT: What is love?
A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.
DialoGPT: I love lamp
A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.
DialoGPT: I love lamp
A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.
DialoGPT: Only lamp
A decoder-only architecture is being used, but right-padding was detected! For correct generation results, please set `padding_side='left'` when initializing the tokenizer.
DialoGPT: I love lamp
</code></pre>
<p>I tried <code>tokenizer.padding_side='left'</code> and still nothing</p>
|
<python><huggingface-transformers>
|
2023-12-10 00:09:38
| 2
| 434
|
Baraa Zaid
|
77,633,279
| 13,135,901
|
Vectorize an algorithm with numpy
|
<p>I have two arrays of 1's and 0's:</p>
<pre><code>a = [1 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0]
b = [0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1]
</code></pre>
<p>I want to make sure that the "1" always "jumps" between the arrays as I go from left to right, never appearing in the same array twice in a row before appearing in the other array.</p>
<pre><code>a = [1 0 0 1 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0]
b = [0 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]
</code></pre>
<p>I can do it using pandas and iteration:</p>
<pre><code>df = pd.DataFrame({"A": a, "B": b, })
df2 = df[(df.A > 0) | (df.B > 0)]
i = 0
for idx in df2.index:
try:
if df2.at[idx, 'A'] == df2.at[df2.index[i + 1], 'A']:
df.at[idx, 'A'] = 0
if df2.at[idx, 'B'] == df2.at[df2.index[i + 1], 'B']:
df.at[idx, 'B'] = 0
i += 1
except IndexError:
pass
</code></pre>
<p>But it is not efficient. How can I vectorize it to make it faster?</p>
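<p>A possible vectorized sketch, assuming (as in the sample) that a 1 never occurs in both arrays at the same index; it mirrors the loop's behavior of zeroing the earlier of two consecutive same-array 1's:</p>

```python
import numpy as np

a = np.array([1,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0])
b = np.array([0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1])

idx = np.flatnonzero((a > 0) | (b > 0))   # positions holding any 1
src = (b[idx] > 0).astype(int)            # 0 -> the 1 is in a, 1 -> in b

# a 1 is "duplicated" when the *next* 1 comes from the same array
dup = np.r_[src[:-1] == src[1:], False]
drop = idx[dup]

a[drop[src[dup] == 0]] = 0                # zero duplicated 1's in a
b[drop[src[dup] == 1]] = 0                # zero duplicated 1's in b
```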
|
<python><pandas><numpy>
|
2023-12-10 00:00:00
| 1
| 491
|
Viktor
|
77,632,945
| 2,474,008
|
tk.TopLevel is making two windows appear, but why?
|
<p>The code below generates my main program (frmMain), which sometimes opens forms. But each time I open a form, the code below makes TWO forms open: the proper form, and one extra one, which is blank. The standard top-right close button only works on the proper form, and the extra one is invulnerable. But why does it appear, and why is it invulnerable?</p>
<pre><code>frmMain=tk.Tk()
def createForm():
lFrm = tk.Toplevel() #Create a new form
initForm(lFrm) #tk.inits the form and sets the mass onFocus event bubble
return lFrm
def initForm(pFrm):
tk.Toplevel.__init__(pFrm) #instead of super
#super().__init__(master=pFrm)
setWindowFocusEvent(pFrm) #when any win gets focus, all windows come to fore in order (as per std MDI app behaviour)
</code></pre>
<p>An example of where I might call this is below, but it causes TWO windows to appear:</p>
<pre><code>def listBands():
global frmMain
frmMain.lListBandsFrm = createForm()
</code></pre>
<p>There are some threads on SO about two windows appearing at once, but in those examples I can see what the cause was. But in my case, I really can't. I'm not calling any spurious tk.Tk(), and I'm only calling tk.Toplevel() once for each form. Stepping through the code reveals that both windows appear simultaneously when createForm finishes.</p>
<p>I've tried omitting the <code>__init__()</code> call, but that of course breaks everything, e.g., I can't call <code>.bind</code> until after <code>__init__</code>.</p>
<p>The answer is... not to fiddle with <code>__init__</code>, and to properly use inheritance and subclassing. Here it is:</p>
<pre><code>class FrmMDI(tk.Toplevel): #New form, with my own setup as required
def __init__(self): #Autocalled when obj is instantiated
super().__init__() #Calls the init() of the parent
setWindowFocusEvent(self) #My own prep, to improve MDI in Windows
def createForm(): #This was giving me double-windows
lFrm = FrmMDI() #Create a new form using subclass of .Toplevel
return lFrm
</code></pre>
|
<python><tk-toolkit>
|
2023-12-09 21:18:10
| 2
| 369
|
Vexen Crabtree
|
77,632,886
| 1,165,727
|
mypy doesn't recognize the latest version of attrs?
|
<p>Is some special setup needed to help <code>mypy</code> recognize standard <code>attrs</code> usage?</p>
<pre><code>somefile.py:7: error: Cannot find implementation or library stub for module named "attr" [import-not-found]
</code></pre>
<p>(followed by a bunch on errors due to mypy unable to understand the structure of the attrs class)</p>
<p>Here is the relevant output of <code>pip list</code>:</p>
<pre><code>attrs 23.1.0
mypy 1.7.1
mypy-extensions 1.0.0
types-attrs 19.1.0
typing_extensions 4.5.0
</code></pre>
|
<python><mypy><python-attrs>
|
2023-12-09 21:03:02
| 1
| 997
|
Y123
|
77,632,539
| 15,520,615
|
How to suppress Error: SyntaxWarning: "is" with a literal. Did you mean "=="?
|
<p>On my Databricks Community Edition the code <code>if len(crValue) is 0:</code> results in the error:</p>
<pre class="lang-none prettyprint-override"><code>Error: SyntaxWarning: "is" with a literal. Did you mean "=="?
</code></pre>
<p>Is it possible to suppress the error so as not to have the change the code to <code>if len(crValue) == 0:</code>?</p>
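<p>One approach — and this is a sketch, not a recommendation over simply switching to <code>==</code> — is to install an "ignore" filter for <code>SyntaxWarning</code> before the offending code is compiled; the filter only helps for code compiled after it is set:</p>

```python
import warnings

# suppress the compiler's "is with a literal" SyntaxWarning for code
# compiled from here on (the identity comparison itself is unchanged,
# so `==` remains the proper long-term fix)
warnings.filterwarnings("ignore", category=SyntaxWarning)

# compiling code containing `is 0` no longer emits the warning
compile("crValue = ''\nif len(crValue) is 0:\n    pass\n", "<example>", "exec")
```

<p>In a notebook environment like Databricks, the filter should run in a cell before the one containing the comparison, since each cell is compiled when it executes.</p>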
|
<python><databricks><azure-databricks>
|
2023-12-09 19:04:32
| 1
| 3,011
|
Patterson
|
77,632,195
| 11,530,571
|
Custom optimizer for TensorFlow
|
<p>I'm trying to experiment with custom optimization algorithms for neural networks on TensorFlow, but I'm stuck with the lack of information on the topic. What I need is some code that will get me at each iteration a vector x (current point) and a vector g (gradient at x), then I'll update x, and then some code to set updated values back. Here's what I have at the moment:</p>
<pre class="lang-py prettyprint-override"><code>from tensorflow.python.framework import ops
from tensorflow.python.ops import gen_training_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.training import optimizer
from tensorflow.python.util.tf_export import tf_export
import tensorflow as tf
import numpy as np
class TestGD(optimizer.Optimizer):
def __init__(self, rad=0.01,
use_locking=False, name="TestGD"):
super(TestGD, self).__init__(use_locking, name)
self._radius = rad
def _create_slots(self, var_list):
num_dims = len(var_list)
self._beta = (num_dims - 1) / (num_dims + 1)
self._B_matrix = np.identity(num_dims)
def _prepare(self):
self._radn_t = ops.convert_to_tensor(self._call_if_callable(self._radius), name="beta")
self._beta_t = ops.convert_to_tensor(self._call_if_callable(self._beta), name="beta")
self._B_matrix_t = ops.convert_to_tensor(self._call_if_callable(self._B_matrix), name="B")
def _apply_dense(self, grad, var):
return self._resource_apply_dense(grad, var)
def _resource_apply_dense(self, grad, var):
print(grad.shape, "<-----------")
#I'm planning to implement my algorithm somewhere here
var_update = tf.compat.v1.assign_sub(var, 0.01 * grad)
return tf.group(var_update)
def _apply_sparse(self, grad, var):
raise NotImplementedError("Sparse gradient updates are not supported.")
# Build LeNet model
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(6, kernel_size=(5, 5), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Conv2D(16, kernel_size=(5, 5), activation='relu'),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(120, activation='relu'),
tf.keras.layers.Dense(84, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
# Use your custom optimizer
#custom_optimizer = SimpleGD(learning_rate=0.001)
custom_optimizer = TestGD()
# Compile the model with your custom optimizer
model.compile(optimizer=custom_optimizer,
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# Getting dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0 # Normalize pixel values to between 0 and 1
x_train = x_train[..., tf.newaxis].astype("float32")
x_test = x_test[..., tf.newaxis].astype("float32")
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=60000).batch(64)
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
test_dataset = test_dataset.batch(64)
# training
model.fit(train_dataset, epochs=5)
# evaluation
test_loss, test_acc = model.evaluate(test_dataset)
print(f"Test accuracy: {test_acc}")
</code></pre>
<p>The problem is, I get very strange shapes for <code>grad</code> and <code>var</code>, they're definitely not vectors. What should I do to reduce the problem to x and g vectors and how do I correctly update results after my minimization step?</p>
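<p>The shapes look strange because Keras calls <code>_resource_apply_dense</code> once per trainable variable, with that variable's own shape (a Conv2D kernel arrives as a rank-4 tensor, its bias as rank-1). To reduce the problem to x and g vectors, one common pattern — sketched here with NumPy and hypothetical shapes — is to flatten and concatenate the per-variable pieces, update the vector, then split it back:</p>

```python
import numpy as np

# hypothetical per-variable gradients: a Conv2D kernel (5, 5, 1, 6) and its bias (6,)
grads = [np.ones((5, 5, 1, 6)), np.zeros(6)]

# flatten each piece and concatenate into one gradient vector g
g = np.concatenate([a.ravel() for a in grads])

# ... run the custom minimization step on the vector here ...

# split the (updated) vector back into the original per-variable shapes
sizes = [a.size for a in grads]
shapes = [a.shape for a in grads]
parts = np.split(g, np.cumsum(sizes)[:-1])
restored = [p.reshape(s) for p, s in zip(parts, shapes)]
```

<p>In TensorFlow the same idea is <code>tf.reshape(grad, [-1])</code> per variable plus <code>tf.concat</code>, with the inverse split applied before <code>assign_sub</code>.</p>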
|
<python><tensorflow><machine-learning><keras>
|
2023-12-09 17:23:42
| 1
| 456
|
guest
|
77,632,015
| 10,934,417
|
Create list of items repeated N times without repeating itself
|
<p>I would like to repeat a list N times but without repeating an item itself. For example,</p>
<pre><code>from itertools import *
items = [ _ for _ in range(3)]
# repeat each item twice
row = sorted(list(chain(*repeat(items, 2))))
row
[0, 0, 1, 1, 2, 2]
</code></pre>
<p>BUT, I would like to create another list (col), which also has 6 items:</p>
<pre><code>col = [1, 2, 0, 2, 0, 1]
</code></pre>
<p>The goal of these lists is to create an adjacency matrix with diagonal items = 0. (COO format is my final goal so random.shuffle does not meet my need)</p>
<pre><code>row = [0,0,1,1,2,2]
col = [1,2,0,2,0,1]
value = [1,1,1,1,1,1]
import scipy
mtx = scipy.sparse.coo_matrix((value, (row, col)), shape=(3, 3))
mtx.todense()
matrix([[0, 1, 1],
[1, 0, 1],
[1, 1, 0]])
</code></pre>
<p>I need the information into a COO format. Any suggestions? Thanks in advance!</p>
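<p>One way to build the off-diagonal COO indices directly, without shuffling (a sketch for a complete graph on <code>n</code> nodes):</p>

```python
import numpy as np

n = 3
# each node appears n-1 times as a row index
row = np.repeat(np.arange(n), n - 1)
# for node i, the column indices are every node except i itself
col = np.concatenate([np.delete(np.arange(n), i) for i in range(n)])
value = np.ones(n * (n - 1), dtype=int)
```

<p>Feeding <code>row</code>, <code>col</code> and <code>value</code> to <code>scipy.sparse.coo_matrix</code> as in the question then yields the all-ones matrix with a zero diagonal.</p>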
|
<python><numpy><adjacency-matrix>
|
2023-12-09 16:33:06
| 2
| 641
|
DaCard
|
77,631,847
| 15,520,615
|
Unable to create ConnectionString variable to connect to Azure Event Hub: Error: JavaPackage' object is not callable
|
<p>I am trying to generate dummy data for our Event Hub using the Python library <strong>dbldatagen</strong>, with the following Python code in Databricks:</p>
<pre><code>import dbldatagen as dg
from pyspark.sql.types import IntegerType, StringType, FloatType
import json
from pyspark.sql.types import StructType, StructField, IntegerType, DecimalType, StringType, TimestampType, Row
from pyspark.sql.functions import *
import pyspark.sql.functions as F
num_rows = 10 * 1000000 # number of rows to generate
num_partitions = 8 # number of Spark dataframe partitions
delay_reasons = ["Air Carrier", "Extreme Weather", "National Aviation System", "Security", "Late Aircraft"]
# will have implied column `id` for ordinal of row
flightdata_defn = (dg.DataGenerator(spark, name="flight_delay_data", rows=num_rows, partitions=num_partitions)
#.withColumn("body",StringType(), False)
.withColumn("flightNumber", "int", minValue=1000, uniqueValues=10000, random=True)
.withColumn("airline", "string", minValue=1, maxValue=500, prefix="airline", random=True, distribution="normal")
.withColumn("original_departure", "timestamp", begin="2020-01-01 01:00:00", end="2020-12-31 23:59:00", interval="1 minute", random=True)
.withColumn("delay_minutes", "int", minValue=20, maxValue=600, distribution=dg.distributions.Gamma(1.0, 2.0))
.withColumn("delayed_departure", "timestamp", expr="cast(original_departure as bigint) + (delay_minutes * 60) ", baseColumn=["original_departure", "delay_minutes"])
.withColumn("reason", "string", values=delay_reasons, random=True)
)
df_flight_data = flightdata_defn.build(withStreaming=True, options={'rowsPerSecond': 10})
streamingDelays = (
df_flight_data
.groupBy(
#df_flight_data.body,
df_flight_data.flightNumber,
df_flight_data.airline,
df_flight_data.original_departure,
df_flight_data.delay_minutes,
df_flight_data.delayed_departure,
df_flight_data.reason,
window(df_flight_data.original_departure, "1 hour")
)
.count()
)
writeConnectionString = sparkContext._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(connectionString)
checkpointLocation = "///checkpoint"
# ehWriteConf['eventhubs.connectionString'] = sc._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(connectionString)
# ehWriteConf = {
# 'eventhubs.connectionString' : writeConnectionString
# }
ehWriteConf = {
'eventhubs.connectionString' : writeConnectionString
}
</code></pre>
<p>However, the following line of code gives me the error</p>
<pre><code>writeConnectionString = sparkContext._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(connectionString)
</code></pre>
<p>TypeError: 'JavaPackage' object is not callable</p>
|
<python><databricks><azure-databricks>
|
2023-12-09 15:38:41
| 1
| 3,011
|
Patterson
|
77,631,672
| 8,030,794
|
Connect to remote vps server postgres?
|
<p>I have a VPS server with PostgreSQL. I changed the settings to enable remote connections:</p>
<ol>
<li>listen_addresses = *</li>
<li>host all all 0.0.0.0/0 md5</li>
<li>sudo ufw allow 5432/tcp</li>
</ol>
<p>Then I need to connect in Python with psycopg2:</p>
<pre><code>conn = psycopg2.connect(database='fr3sto', user='**', password='**',host='***' )
</code></pre>
<p>In the host I write the address of my VPS. The user and password are role credentials in Postgres on the VPS.
But I receive the error message <code>connection to server at "***", port 5432 failed: Connection refused (0x0000274D/10061) Is the server running on that host and accepting TCP/IP connections?</code>
What am I doing wrong?</p>
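<p>A "Connection refused" happens before authentication, so a plain socket probe (host and port are placeholders here) can separate network-level problems — firewall, <code>listen_addresses</code>, service not restarted — from credential problems before psycopg2 is involved. A sketch:</p>

```python
import socket

def port_open(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

<p>If this returns False for your VPS on 5432, check that PostgreSQL was restarted after editing the config (the setting must be quoted: <code>listen_addresses = '*'</code>) and that your hosting provider's firewall also allows port 5432, in addition to ufw.</p>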
|
<python><postgresql>
|
2023-12-09 14:43:57
| 0
| 465
|
Fresto
|
77,631,627
| 13,985,175
|
Most efficient way to replace elements based on first 3 characters of every original element in very large dataframe for over 20,000 selected columns
|
<p>I am trying to replace entire elements in an array based upon the first 3 characters of all original elements in specific columns of a large dataframe (where applicable based on a dictionary). This process needs to be repeated many times. Given that there are over 20,000 selected columns, the for-loop below is very slow.</p>
<p>Please note that all the elements within are strings.</p>
<pre><code> d = {'0/0': 0, '0/1': 1, '1/0': 1, '1/1': 2, "./.": 3}
cols = list(set(merged.columns) - set(["subject", "Group"]))
for col in cols:
merged[col] = merged[col].str[:3].replace(d)
</code></pre>
<p>I have made an attempt to use lambda functions (please see below); however, this has been slow as well. I believe it is the apply function that slowed things down. (Note: using <code>applymap</code> was slow as well.)</p>
<pre><code>d = {'0/0': 0, '0/1': 1, '1/0': 1, '1/1': 2, "./.": 3}
cols = list(set(merged.columns) - set(["subject", "Group"]))
merged[cols] = merged[cols].apply(lambda x: x.str[:3].replace(d))
</code></pre>
<p>I am seeking more efficient approaches, such as vectorisation, but have not been able to identify a way forward.</p>
<p>An example of the data can be seen below (please note that it is much smaller than the actual data and the strings within each cell are much longer)</p>
<pre><code>data = {
'Sample1': ['0/0:0,1:33', '0/1:2,3:32', '1/0:4,5', '1/1:6,7', './.:8,9'],
'Sample2': ['0/0:10,11', '0/1:12,13', '1/0:14,15', '1/1:16,17', './.:18,19'],
'Sample3': ['0/0:20,21', '0/1:22,23', '1/0:24,25:23', '1/1:26,27', './.:28,29'],
}
df = pd.DataFrame(data)
</code></pre>
<p>Update: Data sample with a representative size</p>
<pre><code>import numpy as np
import pandas as pd
sample = ['0/0:0,1:33', '0/1:2,3:32', '1/0:4,5', '1/1:6,7', './.:8,9', '0/0:10,11',
'0/1:12,13', '1/0:14,15', '1/1:16,17', './.:18,19', '0/4:20,21',
'0/1:22,23', '1/0:24,25:23', '1/1:26,27', './.:28,29']
df = pd.DataFrame(np.random.choice(sample, (2000, 20000)))
</code></pre>
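<p>A sketch of one faster route: stack the selected columns into one long Series, slice and map once, then unstack. (One behavioural assumption to note: keys missing from <code>d</code>, such as the <code>'0/4'</code> in the larger sample, become <code>NaN</code> with <code>map</code>, whereas <code>replace</code> leaves them untouched.)</p>

```python
import pandas as pd

d = {'0/0': 0, '0/1': 1, '1/0': 1, '1/1': 2, './.': 3}
df = pd.DataFrame({
    'Sample1': ['0/0:0,1:33', '0/1:2,3:32', './.:8,9'],
    'Sample2': ['1/1:16,17', '1/0:14,15', '0/1:12,13'],
})
cols = list(df.columns)

# one pass over a single long Series; Series.map is a plain dict lookup,
# which is much cheaper than Series.replace
stacked = df[cols].stack()
mapped = stacked.str[:3].map(d).unstack()
df[cols] = mapped[cols]
```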
|
<python><pandas><dataframe><performance>
|
2023-12-09 14:30:50
| 5
| 331
|
AIBball
|
77,631,580
| 10,570,372
|
Understanding Mypy Error Reporting with Default `--follow-imports=normal`
|
<p>I'm currently working with <code>mypy</code> for type checking in a Python project and have encountered a behavior that I'd like to understand better regarding how <code>mypy</code> reports errors, especially under the default <code>--follow-imports=normal</code> setting.</p>
<p>Consider a scenario where I have three files:</p>
<ul>
<li><code>foo.py</code> (which is the main file I'm checking)</li>
<li><code>a.py</code> (imported by <code>foo.py</code>)</li>
<li><code>b.py</code> (also imported by <code>foo.py</code>)</li>
</ul>
<p>When I run <code>mypy</code> to check <code>foo.py</code>, I get an output that indicates errors spread across all three files (e.g., "Found 25 errors in 3 files (checked 1 source file)"). My understanding is that mypy, under <code>--follow-imports=normal</code>, should primarily report errors in the file explicitly being checked (<code>foo.py</code> in this case), and only include errors from the imported files (<code>a.py</code> and <code>b.py</code>) if they directly affect the type correctness of <code>foo.py</code>.</p>
<p>However, I'm unclear on the following points:</p>
<ol>
<li>Does <code>mypy</code> report all errors from <code>a.py</code> and <code>b.py</code> regardless of whether they directly impact <code>foo.py</code>?</li>
<li>Or does it only report errors from <code>a.py</code> and <code>b.py</code> that have a direct consequence on the type safety of <code>foo.py</code>?</li>
</ol>
<p>The reason I am asking is I am facing some peculiar errors when running a <code>mypy</code> check on Github actions (for a public repo):</p>
<pre class="lang-bash prettyprint-override"><code>mypy --follow-imports=normal foo.py
</code></pre>
<pre class="lang-bash prettyprint-override"><code>Found 25 errors in 3 files (checked 1 source file)
</code></pre>
<p>I have read the <code>mypy</code> documentation:</p>
<ul>
<li><a href="https://mypy.readthedocs.io/en/stable/running_mypy.html" rel="nofollow noreferrer">https://mypy.readthedocs.io/en/stable/running_mypy.html</a></li>
<li><a href="https://mypy.readthedocs.io/en/stable/existing_code.html" rel="nofollow noreferrer">https://mypy.readthedocs.io/en/stable/existing_code.html</a></li>
</ul>
<p>but still isn't clear.</p>
|
<python><mypy>
|
2023-12-09 14:16:32
| 0
| 1,043
|
ilovewt
|
77,631,531
| 9,607,072
|
How to connect to google cloud docker service using python in cloud run
|
<p>I would like to connect to a Docker service from a Cloud Run serverless service with Python. For example, when I run this inside a Cloud Run service:</p>
<pre><code>import docker
client = docker.from_env()
</code></pre>
<p>I get this error: <code>docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))</code></p>
<p>I should create an <code>env</code> file, but I don't know what its content should be.</p>
<p>I want to open and close various containers using this docker client.
What should I specify in my env file to connect to my GCP project?</p>
|
<python><docker><google-cloud-platform>
|
2023-12-09 14:00:46
| 0
| 1,243
|
Kevin
|
77,631,428
| 13,130,804
|
How to enforce to https in Streamlit?
|
<p>Hi, I have an application hosted with Heroku. I have a custom domain and I acquired an automated SSL certificate with Heroku. It works fine with no issue. My problem is that I would like to force the URL to always be redirected to the https version. In other apps that I have built with Flask, I was able to enforce the https domain using Talisman. But this specific application was developed in Streamlit and therefore uses a Tornado environment.</p>
<p>Specific example: when I type simply "my_webpage33.com" or "http://my_webpage33.com" it should go to "https://my_webpage33.com" so that no security warning message is displayed in the browser regarding the SSL certificate.</p>
<p>I asked the Heroku support team, but they replied that the issue falls outside the nature of the Heroku support policy.</p>
<p>Thanks!</p>
|
<python><heroku><ssl-certificate><streamlit>
|
2023-12-09 13:26:16
| 0
| 446
|
Miguel Gonzalez
|
77,631,410
| 264,136
|
module 'pexpect' has no attribute 'spawn'
|
<p>OS: Windows 10</p>
<pre><code>ssh_command = f"ssh {ssh_username}@{ssh_address} -p {ssh_port}"
ssh_session = pexpect.spawn(ssh_command, encoding='utf-8')
</code></pre>
<p>Error:</p>
<pre><code>(Python310_Services_VENV) C:\UPScripts>python test.py
Trying to establish SSH connection.
Traceback (most recent call last):
File "C:\UPScripts\test.py", line 13, in check_power_status
ssh_session = pexpect.spawn(ssh_command, encoding='utf-8')
AttributeError: module 'pexpect' has no attribute 'spawn'
</code></pre>
|
<python>
|
2023-12-09 13:20:38
| 1
| 5,538
|
Akshay J
|
77,631,385
| 6,089,311
|
How to add data into hover in Hvplot (bokeh) candlestick without missing-date gaps
|
<p>I have a simple example, run in a Jupyter notebook:</p>
<ul>
<li>hvplot.ohlc without missing-date gaps</li>
<li>wanted to fix Hover (add Date in the right format, and add Volume info)</li>
</ul>
<p>packages:</p>
<p>python 3.12<br />
bokeh 3.3.2<br />
hvplot 0.9.0<br />
holoviews 1.18.1</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import hvplot.pandas
data = pd.DataFrame({
"Open": [100.00, 101.25, 102.75],
"High": [104.10, 105.50, 110.00],
"Low": [94.00, 97.10, 99.20],
"Close": [101.15, 99.70, 109.50],
"Volume": [10012, 5000, 18000],
}, index=[pd.Timestamp("2022-08-01"), pd.Timestamp("2022-08-03"), pd.Timestamp("2022-08-04")])
df = pd.DataFrame(data)
# remove datetime gaps
df = df.reset_index(names="Date")
df['Idx'] = pd.RangeIndex(0, df.shape[0], 1)
# fix hover ------
from bokeh.models import HoverTool
hover = HoverTool(
tooltips=[
('Date', '@Date{%Y-%m-%d}'),
('Open', '@Open{0.00}'),
('High', '@High{0.00}'),
('Low', '@Low{0.00}'),
('Close', '@Close{0.00}'),
('Volume', '@Volume{0}'),
],
formatters={'@Date': 'datetime'},
mode='vline'
)
# fix hover ------
ohlc_cols = ["Open", "High", "Low", "Close"]
ohlc = df.hvplot.ohlc(x='Idx', y=ohlc_cols, hover_cols=["Date", *ohlc_cols, "Volume"], tools=[hover])
# fix x tick labels ------
import holoviews as hv
from bokeh.io import show
fig = hv.render(ohlc)
fig.xaxis.major_label_overrides = {
i: dt.strftime("%b %d") for i, dt in enumerate(df['Date'])
}
# fix x tick labels ------
show(fig)
</code></pre>
<p>But the output is:
<a href="https://i.sstatic.net/4G04V.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4G04V.png" alt="enter image description here" /></a></p>
|
<python><bokeh><holoviews><candlestick-chart><hvplot>
|
2023-12-09 13:09:17
| 1
| 586
|
Jan
|
77,631,313
| 11,341,498
|
Python root logger handlers do not get named loggers records?
|
<p>I have the following setup:</p>
<ul>
<li>root logger to log everywhere</li>
<li>my programm adds a new handler after startup via an callback (for example a database) with <code>CallbackHandler</code></li>
<li>named logger in my modules do not call into the root-logger</li>
</ul>
<p><a href="https://www.online-python.com/tI0vGAqC9d" rel="nofollow noreferrer">Online Python Compiler</a> <sub><- you need to have <em>main.py</em> selected to Run <em>main.py</em> in there</sub></p>
<p><em>main.py</em></p>
<pre class="lang-py prettyprint-override"><code>import logging
import logging.config
import MyLogger
from MyApp import MyApp
MyLogger.init()
_logger = logging.getLogger() # root
def main() :
_logger.error( "main - root logger" )
app = MyApp() # setup app and attach CallbackHandler to root logger
app.testLog() # call named logger - should call root logger & callback handler
if __name__ == "__main__" :
main()
</code></pre>
<p><em>MyLogger.py</em></p>
<pre class="lang-py prettyprint-override"><code>import logging
from logging import LogRecord
import logging.config
import os
from typing import Callable
LOG_PATH = "./logs"
LOGGING_CONFIG : dict = {
"version" : 1 ,
'formatters': {
'simple': {
'format': '%(name)s %(message)s'
},
},
"handlers" : {
"ConsoleHandler" : {
"class" : "logging.StreamHandler" ,
"formatter" : "simple" ,
} ,
} ,
"root" : {
"handlers" : [
"ConsoleHandler" ,
] ,
"level" : "DEBUG" ,
}
}
def init() :
os.makedirs( LOG_PATH , exist_ok = True )
logging.config.dictConfig( LOGGING_CONFIG )
class CallbackHandler( logging.Handler ) :
def __init__( self , level = logging.DEBUG , callback : Callable = None ) :
super().__init__( level )
self._callback = callback
def emit( self , record : LogRecord ) :
if self._callback is not None :
self._callback( record.name + " | " + record.msg )
</code></pre>
<p><em>MyApp.py</em></p>
<pre class="lang-py prettyprint-override"><code>import logging
from MyLogger import CallbackHandler
_logger = logging.getLogger( __name__ )
class MyApp :
def __init__( self ) :
rootLogger = logging.getLogger()
rootLogger.addHandler( CallbackHandler( callback = self.myCallback ) )
def myCallback( self , msg : str ) :
print( "CALLBACK: " + msg )
def testLog( self ) :
_logger.error( "MyApp.testLog() - named logger" )
</code></pre>
<p>The <a href="https://docs.python.org/3/howto/logging-cookbook.html" rel="nofollow noreferrer">docs</a> say named loggers do not inherit the parent's handlers,
but they propagate their log records to the parent/root logger, which has handlers attached.
However, those handlers are not called when I log through a named logger.</p>
<p>The Problem: <strong><code>CallbackHandler.emit()</code> is not called</strong></p>
<p>(if I remove the <code>__name__</code> in <code>MyApp.py: logging.getLogger()</code>, the root logger gets referenced and the Callback-Handler is called)</p>
<p>How do I :</p>
<ol>
<li>initialize the root logger</li>
<li>later in my program attach a custom Handler to the root logger</li>
<li>use named loggers in my program</li>
<li>propagate the the logs from named loggers to the root logger</li>
<li>such that the logs use the custom root-logger-handler</li>
</ol>
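<p>One likely culprit (an assumption, since it depends on import order): <code>dictConfig()</code> defaults to <code>disable_existing_loggers=True</code>, and <code>from MyApp import MyApp</code> creates the <code>"MyApp"</code> logger <em>before</em> <code>MyLogger.init()</code> runs — so that named logger is disabled and silently drops its records, while the root logger itself keeps working. A minimal, runnable sketch of the fix:</p>

```python
import logging
import logging.config

# the named logger exists *before* dictConfig() runs, exactly like the
# module-level logger created when main.py imports MyApp
named = logging.getLogger("MyApp")

logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,  # without this line, `named` is silenced
    "handlers": {"ConsoleHandler": {"class": "logging.StreamHandler"}},
    "root": {"handlers": ["ConsoleHandler"], "level": "DEBUG"},
})

# a handler attached to the root logger later, as MyApp.__init__ does
captured = []

class CallbackHandler(logging.Handler):
    def emit(self, record):
        captured.append(record.getMessage())

logging.getLogger().addHandler(CallbackHandler())
named.error("MyApp.testLog() - named logger")  # propagates to the root handlers
```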
|
<python><logging>
|
2023-12-09 12:46:24
| 2
| 323
|
nonsensation
|
77,631,275
| 18,756,733
|
How to free up memory every time a condition is met in a loop?
|
<p>I am scraping YouTube comments using the following code:</p>
<pre><code>i=0
while True:
load_more_comments=driver.find_element(By.CSS_SELECTOR,'main[class="container-xl"] button[id="showMoreBtn"]')
driver.execute_script("arguments[0].click();", load_more_comments)
driver.execute_script("arguments[0].scrollIntoView(true);", load_more_comments)
if i % 50 == 0 and i != 0:
html = driver.page_source
joblib.dump(html,open(f'html_{i}.pickle','wb'))
del html
gc.collect()
print(i,end='\r')
i+=1
time.sleep(1)
</code></pre>
<p>After some time I get an "out of memory" error.</p>
<p><code>gc.collect()</code> was supposed to clean up memory when the specified condition was met, but it does not. Also, every time the file is saved, its size is larger than it was the previous time.</p>
<p>How can I prevent this accumulation and free up memory every time the file is saved?</p>
|
<python><selenium-webdriver><web-scraping><memory>
|
2023-12-09 12:34:27
| 0
| 426
|
beridzeg45
|
77,631,051
| 7,658,051
|
line 158, in get_app_config return self.app_configs[app_label] KeyError: 'account'
|
<p>I am developing a django project, the main directory/project name is <code>django-basic-ecommerce</code>.</p>
<p>One of the apps name is <code>accounts</code> (note that it has the final "s").</p>
<p>as I run</p>
<pre><code>python manage.py makemigrations
</code></pre>
<p>I get</p>
<pre><code>Traceback (most recent call last):
File "/my/path/basic-django-ecommerce/venv/lib/python3.10/site-packages/django/apps/registry.py", line 158, in get_app_config
return self.app_configs[app_label]
KeyError: 'account'
During handling of the above exception, another exception occurred:
[...]
/my/path/basic-django-ecommerce/venv/lib/python3.10/site-packages/django/apps/registry.py", line 165, in get_app_config
raise LookupError(message)
LookupError: No installed app with label 'account'.
During handling of the above exception, another exception occurred:
[...]
/my/path/basic-django-ecommerce/venv/lib/python3.10/site-packages/django/contrib/auth/__init__.py", line 176, in get_user_model
raise ImproperlyConfigured(django.core.exceptions.ImproperlyConfigured: AUTH_USER_MODEL refers to model 'account.User' that has not been installed
</code></pre>
<p>The last message makes me think that I missed some configurations for <code>account.User</code>, however, it is correctly installed in <code>mainapp_ecommerce/settings.py</code></p>
<pre><code>INSTALLED_APPS = [
...
'accounts',
...
'django.contrib.auth',
...
]
AUTH_USER_MODEL = 'account.User'
# it changes the builtin model user to User custom defined model
</code></pre>
<p>And about that <code>AUTH_USER_MODEL</code>: it refers to the <code>account.User</code> model defined in <code>accounts/models.py</code> as</p>
<pre><code>class User(AbstractBaseUser):
email = models.EmailField(max_length=255, unique=True)
# full_name = models.CharField(max_length=255, blank=True, null=True)
active = models.BooleanField(default=True) # is h* allowed to login?
staff = models.BooleanField(default=False) # staff user, not superuser
admin = models.BooleanField(default=False) # superuser
timestamp = models.DateTimeField(auto_now_add=True) # superuser
USERNAME_FIELD = 'username'
# email and password are required
REQUIRED_FIELDS = []
objects = UserManager
</code></pre>
<p>So, what could be the problem here?</p>
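<p>The traceback's <code>KeyError: 'account'</code> (no trailing "s") hints that <code>AUTH_USER_MODEL</code> references an app label that does not match the installed app <code>accounts</code>. A sketch of the likely fix in <code>settings.py</code>:</p>

```python
# settings.py — the app label in AUTH_USER_MODEL must match INSTALLED_APPS
AUTH_USER_MODEL = 'accounts.User'  # 'accounts', not 'account'
```

<p>Separately, <code>objects = UserManager</code> in the model assigns the manager class itself rather than an instance; the usual pattern is <code>objects = UserManager()</code>.</p>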
|
<python><django><django-models><django-migrations>
|
2023-12-09 11:14:01
| 1
| 4,389
|
Tms91
|
77,630,934
| 1,234,434
|
python regex match for non-fraction and fractions
|
<p>I have the following text in a pandas column, and I'm trying to create a regular expression to sort it into individual columns:</p>
<pre><code>2 Table $75
5 Chairs 875 Teabags
9/10 gel 125 Dishwasher tablets
</code></pre>
<p>My first attempt works for the first line of text and partially for the last line, but mostly fails on the middle line:</p>
<pre><code>^(\d+)\D+(\d+)\D+(\d+)
</code></pre>
<p>But my second iteration to account for an "or" condition fails to cause any reasonable match:</p>
<pre><code>^(\d+|\d+\/\d+)\D+(\d+)\D+(\d+)
</code></pre>
<p>Example <a href="https://regex101.com/r/H74dJb/1" rel="nofollow noreferrer">here</a></p>
<p>How can I restructure the above to account for the differences and catch all the numbers and fractions?</p>
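<p>The ordered alternation <code>(\d+|\d+\/\d+)</code> never reaches the fraction branch: <code>\d+</code> already matches the numerator, so the regex succeeds without it. Folding the optional fraction into a single group avoids that. A sketch (the four-field layout is an assumption drawn from the three sample lines):</p>

```python
import re

# group 1: number with optional "/denominator"; group 2: item word;
# group 3: number with optional leading "$"; group 4: trailing description
pattern = re.compile(r'^(\d+(?:/\d+)?)\s+(\S+)\s+\$?(\d+)\s*(.*)$')

lines = [
    "2 Table $75",
    "5 Chairs 875 Teabags",
    "9/10 gel 125 Dishwasher tablets",
]
rows = [pattern.match(line).groups() for line in lines]
```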
|
<python><regex>
|
2023-12-09 10:27:31
| 1
| 1,033
|
Dan
|
77,630,908
| 2,722,968
|
Discover tests in sub-package
|
<p>I have a package with several hundred tests organized in modules, and I'm in the process of moving some of those tests into their own sub-package for ease of maintenance. My question is how to organize the files so Python's std <code>unittest</code> module can discover <em>and</em> isolate them if I only want to execute some of the tests.</p>
<p>The current directory layout goes something like the following:</p>
<pre><code>/my_project/[...]
/test/__init__.py
/test/test_missiles.py
/test/test_launch_procedure.py
/test/fallback/__init__.py
/test/fallback/test_sticks.py
/test/fallback/test_stones.py
</code></pre>
<p>I can have <code>unittest</code> discover and execute the <code>fallback</code> tests individually by means of</p>
<pre class="lang-bash prettyprint-override"><code>python3 -m unittest test.fallback.test_sticks
</code></pre>
<p>yet what I want to do is execute all <code>fallback</code>-related tests in one go. However, <code>unittest</code> does not discover the modules inside the sub-package; e.g.</p>
<pre class="lang-bash prettyprint-override"><code>python3 -m unittest test.fallback
</code></pre>
<p>... will execute no tests at all.</p>
<p>Using filenames instead of paths also works, yet only for single files; e.g. <code>-m unittest test/fallback/test_sticks.py</code> works for a single module, but not for to-be-expanded patterns like <code>test/fallback/test_s*.py</code>.</p>
<p>Do I need the <code>fallback</code>-test-subpackage to somehow assist with test-discovery, so a normal <code>python3 -m unittest test.fallback</code> would load and execute all tests from that package?</p>
|
<python><python-3.x>
|
2023-12-09 10:15:59
| 0
| 17,346
|
user2722968
|
77,630,854
| 20,176,161
|
Group by: TypeError: sequence item 0: expected str instance, float found
|
<p>I am trying to do a groupby in the following dataframe.</p>
<p><a href="https://i.sstatic.net/w3Agf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w3Agf.png" alt="enter image description here" /></a></p>
<p>I am looking for a unique combination of <code>ville</code>, <code>arrondissement</code>, <code>quartier</code> and <code>quartier_av</code> where column <code>quartier_av</code> would be a list. The output I am looking for is as:</p>
<p><a href="https://i.sstatic.net/8h6Yj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8h6Yj.png" alt="enter image description here" /></a></p>
<p>Here is the code i tried :</p>
<pre><code>df.groupby(['ville','arrondissement','quartier'])['quartier_av'].agg(lambda x: ','.join(x)).reset_index()
</code></pre>
<p>I am getting this error:</p>
<pre><code> TypeError: sequence item 0: expected str instance, float found
</code></pre>
<p>When reading the forum, I realised that the error might come from NaN: <code>NaN is a float, so trying to do string operations on it doesn't work</code>.</p>
<p>I converted the column to string as follows:</p>
<pre><code>df['quartier_av']=df['quartier_av'].astype(str)
</code></pre>
<p>But I am still getting the same error.</p>
<p>Can someone comment, please? Thanks.</p>
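<p>A runnable sketch of the usual fix: since the <code>NaN</code> cells are floats, <code>','.join()</code> fails on them, and dropping (or filling) the missing values inside the aggregation sidesteps the problem regardless of the column's dtype history:</p>

```python
import numpy as np
import pandas as pd

# toy data with a missing quartier_av (hypothetical values)
df = pd.DataFrame({
    'ville': ['Paris', 'Paris'],
    'arrondissement': ['1er', '1er'],
    'quartier': ['Q1', 'Q1'],
    'quartier_av': ['Louvre', np.nan],
})

out = (df.groupby(['ville', 'arrondissement', 'quartier'])['quartier_av']
         .agg(lambda x: ','.join(x.dropna().astype(str)))
         .reset_index())
```

<p>If <code>astype(str)</code> on the column appeared not to help, one possibility is that the converted frame wasn't the one actually being grouped (e.g. the assignment happened on a copy).</p>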
|
<python><dataframe><group-by>
|
2023-12-09 09:57:16
| 2
| 419
|
bravopapa
|
77,630,719
| 12,276,279
|
From a pandas dataframe containing net exports between any two countries, how can I get second dataframe containing net exports for each country?
|
<p>I have a dataframe <code>df</code> containing net exports between any two countries in column <code>From</code> and <code>To</code> respectively.</p>
<p><code>df.to_dict()</code> returns</p>
<pre><code>{'From': {0: 'A', 1: 'A', 2: 'B', 3: 'C', 4: 'D'},
'To': {0: 'B', 1: 'C', 2: 'C', 3: 'D', 4: 'A'},
'Net exports': {0: 3, 1: 6, 2: 2, 3: 2, 4: 5}}
</code></pre>
<p>It looks as follows: <a href="https://i.sstatic.net/y6BRe.png" rel="nofollow noreferrer">https://i.sstatic.net/y6BRe.png</a></p>
<p>I want to get a second dataframe <code>df2</code> which shows net trade per country.
It means, if a country is in <code>From</code> column in <code>df</code>, its value needs to be added.
If the country is in <code>To</code> column in <code>df</code>, its value needs to be subtracted.</p>
<p>It should look something like this: <a href="https://i.sstatic.net/I1xrE.png" rel="nofollow noreferrer">https://i.sstatic.net/I1xrE.png</a></p>
<p>For example,
in <code>df</code>, A to B is 3, A to C is 6 and D to A is 5. Hence, the value of A in <code>df2</code> is 3+6-5 = 4.</p>
<p>Note, sometimes some countries may not appear in both From and To columns in <code>df</code>.
To get list of all countries in both columns, I could use</p>
<pre><code>all_countries = list(set(df["From"]).union(set(df["To"])))
all_countries
</code></pre>
<p>However, what would be the process to proceed to the next step to get <code>df2</code> from <code>df</code>?</p>
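<p>A sketch of one way to get <code>df2</code>: sum the flows per <code>From</code> country, subtract the sums per <code>To</code> country, and reindex over the union so countries appearing in only one column still show up:</p>

```python
import pandas as pd

df = pd.DataFrame({'From': ['A', 'A', 'B', 'C', 'D'],
                   'To':   ['B', 'C', 'C', 'D', 'A'],
                   'Net exports': [3, 6, 2, 2, 5]})

all_countries = sorted(set(df['From']) | set(df['To']))
outgoing = df.groupby('From')['Net exports'].sum()
incoming = df.groupby('To')['Net exports'].sum()

# fill_value=0 handles countries that only ever export or only ever import
net = outgoing.sub(incoming, fill_value=0).reindex(all_countries, fill_value=0)
df2 = net.rename('Net trade').rename_axis('Country').reset_index()
```

<p>For the sample data this gives A = 3 + 6 - 5 = 4, matching the worked example in the question.</p>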
|
<python><pandas><dataframe><math><group-by>
|
2023-12-09 09:06:36
| 4
| 1,810
|
hbstha123
|
77,630,640
| 9,640,238
|
gspread numberFormat is not applied for a date
|
<p>With gspread, I'm trying to apply date formatting for a column, but it doesn't seem to work. Here's a minimal example:</p>
<pre class="lang-py prettyprint-override"><code>sh = gc.create("new")
ws = sh.get_worksheet(0)
ws.update("A1", "20/10/2023")
ws.format(
"A1",
{
"numberFormat": {
"pattern": "dd-mmm-yyyy",
"type": "DATE",
},
"textFormat": {"bold": True},
},
)
</code></pre>
<p>Looking at the results in Google Sheets, the text formatting is applied, but not the date format. Any idea?</p>
|
<python><formatting><gspread>
|
2023-12-09 08:34:45
| 1
| 2,690
|
mrgou
|
77,630,569
| 1,696,434
|
Making a common DB for one app across multiple django projects
|
<p>I have 3 distinct Django projects (or instances) running on the same Ubuntu server (for the same user) in different folders. Let's call these instances D1, D2 and D3. All 3 projects use sqlite3 as their DB.</p>
<p>Aside from their inherent differences, all 3 projects share a single common Django model called "Word" (which is basically responsible for storing word images). There can be millions of Word entries for each of the Django projects at any given time.</p>
<p>The problem I am encountering is that I need to frequently transfer Word instances from one project to another. The volume of words to be transferred is huge (millions of Word model instances per day).</p>
<p>My current solution is very inefficient: I export all the Word entries from the D1 project to a single folder and import all the entries from that folder into the D2 project.</p>
<p>Is there any way to make a common database that all 3 projects can access for the "Word" model? This would make life much easier: to transfer the words from D1 to D2, I would just have to change a field (probably <code>belongs_to</code>) on all the concerned words from "D1" to "D2".</p>
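<p>If a shared database is the right approach, this is roughly what I imagine the router side would look like (the alias, paths and names here are just my guesses, not working configuration):</p>

```python
# Sketch of a Django database router that sends the shared "Word" model
# to a common SQLite file used by all three projects. The "words" alias
# and the routing logic are hypothetical.

SHARED_DB_ALIAS = "words"


class WordRouter:
    """Route all reads/writes for the Word model to the shared database."""

    def _is_word(self, model):
        return model._meta.model_name == "word"

    def db_for_read(self, model, **hints):
        return SHARED_DB_ALIAS if self._is_word(model) else None

    def db_for_write(self, model, **hints):
        return SHARED_DB_ALIAS if self._is_word(model) else None

    def allow_migrate(self, db, app_label, model_name=None, **hints):
        # Only migrate the Word model on the shared database.
        if model_name == "word":
            return db == SHARED_DB_ALIAS
        return None  # no opinion about everything else
```

<p>Each project's <code>settings.py</code> would then add a second entry under <code>DATABASES</code> pointing at the same SQLite file and list this router in <code>DATABASE_ROUTERS</code>.</p>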
<p>Any help on how to implement this solution or some other idea to solve this issue will be appreciated.</p>
<p>-Thanks</p>
|
<python><django><sqlite><django-models>
|
2023-12-09 08:03:26
| 1
| 359
|
Krishna
|
77,630,410
| 8,176,763
|
fastapi swagger interface showing operation level options override server options
|
<p>Recently I have been seeing a popup that tells me some operation-level options override the global server options:</p>
<p>As per image:</p>
<p><a href="https://i.sstatic.net/hxem3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hxem3.png" alt="enter image description here" /></a></p>
<p>I do not understand whether this is a bug in my application or just normal behavior. I would prefer not to show the user that message.</p>
<p>EDIT:</p>
<p>This is my <code>main.py</code> file:</p>
<pre><code>from fastapi import FastAPI, Depends
from ml.model_recommendation import predict_intention
from ml.training_ml import create_ml
import crud, models, schemas
from db import SessionLocal, engine
from sqlalchemy.orm import Session
from typing import Optional
from enum import Enum
import numpy as np

app = FastAPI(title="ML prediction", description="API to serve data used for prediction of intended remediation date (IRD)")


class Tags(Enum):
    ITEMS = "Retrieve ITSO and Software Versions"
    DELETE = "Delete Data"
    INSERT = "Get IRD predictions"
    DOWNLOAD = "Download Data"


@app.on_event("startup")
def on_startup():
    models.Base.metadata.create_all(bind=engine)


# Dependency
def get_db():
    db = SessionLocal()
    try:
        yield db
    finally:
        db.close()


@app.get(
    "/service_owners/",
    response_model=list[schemas.Evergreen],
    tags=[Tags.ITEMS]
)
def read_owners(service_owner: str, software_product_version_name: Optional[str] = None, db: Session = Depends(get_db), skip: int = 0, limit: int = 100):
    owners = crud.get_service_owners(db, service_owner=service_owner, software_product_version_name=software_product_version_name, skip=skip, limit=limit)
    return owners
</code></pre>
<p>This is my <code>crud.py</code> function:</p>
<pre><code>import models
from sqlalchemy.orm import Session
from sqlalchemy import select
from typing import Optional


def get_service_owners(db: Session, service_owner: str, software_product_version_name: Optional[str] = None, skip: int = 0, limit: int = 100):
    if software_product_version_name:
        stmt = (
            select(models.Evergreen.service_owner, models.Evergreen.software_product_version_name)
            .where(models.Evergreen.service_owner.ilike(f'%{service_owner}%'))
            .where(models.Evergreen.software_product_version_name.ilike(f'%{software_product_version_name}%'))
            .distinct().offset(skip).limit(limit)
        )
        return db.execute(stmt).all()
    stmt = (
        select(models.Evergreen.service_owner, models.Evergreen.software_product_version_name)
        .where(models.Evergreen.service_owner.ilike(f'%{service_owner}%'))
        .distinct().offset(skip).limit(limit)
    )
    return db.execute(stmt).all()
</code></pre>
<p>And this my <code>schema.py</code> for validation:</p>
<pre><code>from typing import Optional
from pydantic import BaseModel
from typing_extensions import TypedDict


class Evergreen(BaseModel):
    service_owner: Optional[str]
    software_product_version_name: Optional[str]

    class Config:
        from_attributes = True


class Items(TypedDict):
    service_owner: str
    software_product_version_name: str


class Pred(TypedDict):
    service_owner: str
    software_product_version_name: str
    future_expectation: int
</code></pre>
|
<python><swagger><fastapi>
|
2023-12-09 06:53:44
| 1
| 2,459
|
moth
|
77,630,159
| 6,533,037
|
Get string typed in input after screen reset
|
<p>I have a websocket chat application. The user can send and receive messages simultaneously. The problem I had, however, is that because it is a CLI application, the user's input line got mixed up with the incoming messages.</p>
<p>You would have something like:</p>
<pre><code>Message: Hello
Type message: Hi
Message: How are you
</code></pre>
<p>I want to have the input-field for the user always below the 'chat-screen'.</p>
<pre><code>Message: Hello
Message: How are you
Type message: Hi
</code></pre>
<p>I have achieved this by putting all the received messages in a list. When a new message is received the screen gets emptied and the content of the chat-list is displayed. After looping over the chat-list, we display the input-field again. This works fine.</p>
<p><strong>The problem I have:</strong> When a message comes in while the user is typing in the input-field, the screen gets emptied to display the chat, but the message he was typing in the input-field gets emptied as well.</p>
<pre><code># Chat session
Hello
How are you
Type message: Hi, I am o
# Message comes in > screen gets emptied to display chat again > user-input is also emptied
Hello
How are you
Test
Type message:
# Wanted behavior (The message the user was typing is still visible in input-field)
Hello
How are you
Test
Type message: Hi, I am oka
</code></pre>
<p><strong>Note:</strong> What is interesting, though, is that when the screen and the input get emptied and the user presses enter, the message typed before the emptying is still sent. That means Python must save the entered string in a buffer somewhere.</p>
<p><strong>Current behavior (Input gets reset during incoming message):</strong></p>
<p><a href="https://i.sstatic.net/xMo0M.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xMo0M.gif" alt="Current behavior" /></a></p>
<p><strong>Wanted behavior (Input progress gets saved during incoming message):</strong></p>
<p><a href="https://i.sstatic.net/Ljesa.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ljesa.gif" alt="Wanted behavior" /></a></p>
<p>And like I said: even if the screen gets emptied, pressing enter results in the content of the input-field from before the emptying being sent, so Python must have a buffer somewhere containing the string. Could I use some kind of stream or stdin buffer to retrieve the message?</p>
<p><strong>Code:</strong></p>
<pre class="lang-py prettyprint-override"><code>import sys
import asyncio
import aioconsole

chat = []


async def start():
    sendTask = asyncio.ensure_future(send())
    receiveTask = asyncio.ensure_future(receiveSimulator())
    await asyncio.gather(sendTask, receiveTask)


async def send():
    while True:
        # Here I somehow need to capture the input of the user
        # even if he hasn't pressed enter
        sys.stdout.flush()
        sys.stdout.write("Type message: ")
        message = await aioconsole.ainput("")

        # Note that when the user sends a message, it is not directly
        # inputted in the chat. But first send to the server who
        # echo's it. Hence why we simulate it by calling receive
        receive(message)


def receive(message, sim=False):
    chat.append(message)
    reset_chat_screen(sim=sim)


# This function is purely to simulate a incoming message
async def receiveSimulator():
    await asyncio.sleep(8)
    receive("Incoming message", sim=True)


def reset_chat_screen(sim=False):
    # Empty screen
    print(chr(27) + "[2J")

    # Show all messages including new incoming one
    for message in chat:
        print(message)

    # Here I somehow need to put the captured input before the screen was emptied
    # e.g: sys.stdout.write("Type message: " + captured_input)
    if sim:
        sys.stdout.write("Type message: I am oka")
    else:
        sys.stdout.write("Type message: ")
    sys.stdout.flush()


if __name__ == "__main__":
    asyncio.get_event_loop().run_until_complete(start())
</code></pre>
|
<python><websocket>
|
2023-12-09 04:44:34
| 1
| 1,683
|
O'Niel
|
77,630,116
| 3,486,773
|
How to stop tkinter thread but keep window open?
|
<p>I have a button that when pressed starts a new thread:</p>
<pre><code>import tkinter as tk
import threading

db = tk.Button(
    root, text='Download!', height=2, width=30, background='light blue',
    command=lambda: threading.Thread(target=runYtAction).start()
).grid(row=22, column=0, sticky=tk.W, padx=120, pady=(0, 10))
</code></pre>
<p>How do I stop this thread, i.e. "cancel" whatever it was doing, by clicking a cancel button?</p>
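<p>What I was considering is a stop flag that the worker checks between steps (since I read that a running thread can't simply be killed), but I'm not sure this is the right pattern:</p>

```python
import threading

stop_event = threading.Event()
progress = []  # stand-in for the work the real download would do


def runYtAction():
    # Hypothetical version of the worker: the job is split into small
    # steps so it can check the flag and bail out between them.
    for step in range(10):
        if stop_event.is_set():
            return  # cancelled
        progress.append(step)


def cancel():
    # Would be wired to the cancel button's command=
    stop_event.set()
```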
|
<python><multithreading><tkinter>
|
2023-12-09 04:22:41
| 1
| 1,278
|
user3486773
|
77,630,091
| 4,225,430
|
How to modify subsetting and datetime handling with .loc[] to avoid warning?
|
<p>I am trying to practice time series with real data. The difficulty is in the data wrangling.</p>
<p>This exercise is to show the local passenger departure trend of one of the borders in Hong Kong in 2023.</p>
<p>Jupyter warned me twice, about subsetting and datetime handling, but I do not know how to change the code accordingly or why the warning is needed. I would be grateful if you could point out the solution. Thank you very much.</p>
<p>The code is here:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from datetime import date
from datetime import datetime
df = pd.read_csv("https://www.immd.gov.hk/opendata/eng/transport/immigration_clearance/statistics_on_daily_passenger_traffic.csv")
df = df.iloc[: , :-1]
df = df[df["Date"].str.contains("2023") == True]
</code></pre>
<p>First warning:</p>
<pre><code>df["Date"] = df["Date"].apply(lambda x: datetime.strptime(str(x), "%d-%m-%Y"))
</code></pre>
<pre><code>SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df["Date"] = df["Date"].apply(lambda x: datetime.strptime(str(x), "%d-%m-%Y"))
</code></pre>
<p>Continue the script:</p>
<pre><code>options = ['Airport', 'Express Rail Link West Kowloon', 'Lo Wu', 'Lok Ma Chau Spur Line', 'Heung Yuen Wai', 'Hong Kong-Zhuhai-Macao Bridge', 'Shenzhen Bay']
df_clean = df.loc[df['Control Point'].isin(options)]
df_XRL = df[df["Control Point"].str.contains("Heung Yuen Wai") & df["Arrival / Departure"].str.contains("Departure")]
df_XRL = df_XRL[["Date","Hong Kong Residents"]]
</code></pre>
<p>The second warning:</p>
<pre><code>df_XRL['Month'] = pd.DatetimeIndex(df_XRL['Date']).strftime("%b")
df_XRL['Week day'] = pd.DatetimeIndex(df_XRL['Date']).strftime("%a")
</code></pre>
<pre><code>A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df_XRL['Month'] = pd.DatetimeIndex(df_XRL['Date']).strftime("%b")
C:\Users\User\AppData\Local\Temp\ipykernel_28232\787368475.py:2: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df_XRL['Week day'] = pd.DatetimeIndex(df_XRL['Date']).strftime("%a")
</code></pre>
<p>Continue the script:</p>
<pre><code>from numpy import nan
monthOrder = ['Jan', 'Feb', 'Mar', 'Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec']
dayOrder = ['Mon','Tue','Wed','Thu','Fri','Sat','Sun']
pivot_XRL = pd.pivot_table(df_XRL, index=['Month'],
values=['Hong Kong Residents'],
columns=['Week day'], aggfunc=('sum')).loc[monthOrder, (slice(None), dayOrder)]
pivot_XRL.plot(figsize = (20,8))
plt.grid()
plt.legend(loc='best');
plt.xlabel('Month',fontsize=15)
plt.ylabel('Persons',fontsize=15)
plt.rc('xtick',labelsize=15)
plt.rc('ytick',labelsize=15)
plt.legend(fontsize="x-large")
</code></pre>
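<p>To narrow it down, I made this small self-contained version of the same pattern. Taking an explicit <code>.copy()</code> of the subset seems to silence the warning, but I don't understand why it is needed:</p>

```python
import pandas as pd

df = pd.DataFrame({"Date": ["01-01-2023", "02-01-2022"], "v": [1, 2]})

# Same chained-subsetting pattern, but with an explicit copy so the
# subset is an independent frame rather than a view of df:
df23 = df[df["Date"].str.contains("2023")].copy()
df23["Date"] = pd.to_datetime(df23["Date"], format="%d-%m-%Y")
df23["Month"] = df23["Date"].dt.strftime("%b")
print(df23["Month"].tolist())  # ['Jan']
```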
|
<python><pandas><datetime><pandas-loc>
|
2023-12-09 04:08:04
| 0
| 393
|
ronzenith
|
77,629,977
| 8,497,844
|
What is this structure called?
|
<p>Some days ago I saw this Python <code>string</code> structure:</p>
<pre><code>str = (
'this is a very'
'long string too'
'for sure ...'
)
</code></pre>
<p>I'm interested in what this structure is called. It's not a <code>list</code> or a <code>tuple</code>.</p>
<p>Sorry if this is maybe a simple question, but I tried to find it in the official Python manual and couldn't. It would be nice if you could attach a link to the manual.</p>
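<p>From experimenting, the pieces seem to get glued together into a single string (note there are no commas, so it doesn't look like a tuple); I renamed the variable to avoid shadowing the built-in <code>str</code>:</p>

```python
# The same construct as above, just renamed:
s = (
    'this is a very'
    'long string too'
    'for sure ...'
)
print(s)        # this is a verylong string toofor sure ...
print(type(s))  # <class 'str'>
```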
|
<python>
|
2023-12-09 02:57:49
| 0
| 727
|
Pro
|
77,629,929
| 1,945,925
|
python lxml xpath query fails on hardcoded url but works on a bytes string
|
<p>I am trying to extract an xml attribute <code>parsable-cite</code> from the <code>text</code> tag. I am parsing an xml from the url "https://www.congress.gov/118/bills/hr61/BILLS-118hr61ih.xml".</p>
<p>The code I'm using is the following (Replit here <a href="https://replit.com/join/ohhztxpqdr-aam88" rel="nofollow noreferrer">https://replit.com/join/ohhztxpqdr-aam88</a>) and writing here for convenience:</p>
<pre><code>from lxml import etree
import requests

url = "https://www.congress.gov/118/bills/hr61/BILLS-118hr61ih.xml"
response = requests.get(url)
xml_response = response.content

tree = etree.fromstring(xml_response)
result = tree.xpath("//text[contains(., 'is amended')]")
for r in result:
    external_xref = r.find("external-xref")
    print(external_xref.attrib)
</code></pre>
<p>I get an error showing that I'm accessing <code>None</code>, i.e. the XPath query didn't find a match:</p>
<pre><code>AttributeError: 'NoneType' object has no attribute 'attrib'
</code></pre>
<p>When I use the same code and instead use the snippet of the text node directly, I get the following:</p>
<pre><code>text = b'<text display-inline="no-display-inline">Section 4702 of the Matthew Shepard and James Byrd Jr. Hate Crimes Prevention Act (<external-xref legal-doc="usc" parsable-cite="usc/18/249">18 U.S.C. 249</external-xref> note) is amended by adding at the end the following: </text>'
tree = etree.fromstring(text)
result = tree.xpath("//text[contains(., 'is amended')]")
for r in result:
    external_xref = r.find("external-xref")
    print(external_xref.attrib)
</code></pre>
<pre><code>{'legal-doc': 'usc', 'parsable-cite': 'usc/18/249'}
</code></pre>
<p>The issue seems to come from processing the content from the url directly. Any recommendations on how to proceed?</p>
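<p>One guess I had is that the full document declares a namespace that my small standalone snippet doesn't have, which would make a plain <code>//text</code> query match nothing. This self-contained check (no network; the namespace URI is invented) shows the behavior I mean:</p>

```python
from lxml import etree

# A document with a default namespace (URI is made up for the test):
doc = b'<root xmlns="http://example.com/ns"><text>is amended</text></root>'
tree = etree.fromstring(doc)

# The plain query finds nothing; the namespace-agnostic one still matches:
plain = tree.xpath("//text[contains(., 'is amended')]")
ns_free = tree.xpath("//*[local-name()='text'][contains(., 'is amended')]")
print(len(plain), len(ns_free))  # 0 1
```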
<p>Thanks</p>
|
<python><xml><xpath><lxml>
|
2023-12-09 02:24:15
| 1
| 552
|
Andre Marin
|