QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,087,173 | 3,650,983 | tqdm: print average information at the end of the loop | <p>During the run, tqdm prints current information for the unit (in my case, the batch) like accuracy and loss.</p>
<p>Is there an elegant way to have tqdm print, on the same line after the last iteration, the average accuracy/loss of all the batches that ran?</p>
<p>For now I'm doing it "manually" by saving the accuracy/loss data for each batch and applying the mean when the loop finishes.</p>
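<p>A minimal sketch of one way to do this (keeping the bar object open and writing the averages into its postfix before it closes; the loop body and metrics are illustrative):</p>

```python
from tqdm import tqdm

losses = []
with tqdm(range(5)) as pbar:
    for batch in pbar:
        loss = 1.0 / (batch + 1)              # stand-in for a real batch loss
        losses.append(loss)
        pbar.set_postfix(loss=f"{loss:.3f}")
    # After the last iteration the bar is still open, so setting the postfix
    # to the running average leaves it on the same line when the bar closes.
    avg = sum(losses) / len(losses)
    pbar.set_postfix(avg_loss=f"{avg:.3f}")
```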
| <python><machine-learning><pytorch><torch><tqdm> | 2023-09-12 08:09:27 | 0 | 4,119 | ChaosPredictor |
77,087,153 | 894,827 | Extracting data from PDF files using Python | <p>New to Python here, and I have a challenge: to extract order information presented in a PDF file. I can log onto the website to convert the orders to PDF; however, what I require is to export them into Excel in the form of a table.</p>
<p>The columns required are order number, order date, code, items, quantity, price, total.</p>
<p>The PDF file has multiple pages, and the order date/order number is presented within a different section; for every order, the code should extract the last order date/order number at the top of the page.</p>
<p>An example of what the PDF pages looks like can be found below, there are multiple pages on the file. The output that I am looking for can be seen below. I have contacted the company and they do not have API's that can be exposed.</p>
<p><a href="https://i.sstatic.net/jSAO9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jSAO9.png" alt="Invoice sample" /></a></p>
<p><a href="https://i.sstatic.net/VUO3A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VUO3A.png" alt="Sample of what the python code should output" /></a></p>
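<p>As a hedged sketch (not a full solution): libraries such as <code>pdfplumber</code> or <code>camelot</code> can extract the raw text and tables per page, after which the order header at the top of the page can be carried down to each line item. The parsing step on hypothetical extracted text, assuming a made-up layout, might look like:</p>

```python
# Hypothetical text as a PDF library might extract it from one page;
# the real invoice layout will differ, so this only illustrates the approach.
page_text = """Order No: 1001 Order Date: 2023-09-01
CODE01 Widget 2 5.00 10.00
CODE02 Gadget 1 7.50 7.50"""

def parse_page(text):
    lines = text.splitlines()
    # The order header at the top of the page applies to every row below it
    header = lines[0]
    order_no = header.split("Order No:")[1].split()[0]
    order_date = header.split("Order Date:")[1].strip()
    rows = []
    for line in lines[1:]:
        code, item, qty, price, total = line.split()
        rows.append({"order_no": order_no, "order_date": order_date,
                     "code": code, "item": item, "quantity": int(qty),
                     "price": float(price), "total": float(total)})
    return rows

rows = parse_page(page_text)
```

<p>A list of such dicts can then be written to Excel, e.g. with <code>pandas.DataFrame(rows).to_excel(...)</code>.</p>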
| <python> | 2023-09-12 08:07:33 | 0 | 1,099 | learner |
77,087,129 | 13,087,576 | Converting characters like '³' to Integer in Python | <p>I have the character '³' in a dataset that I'm processing.</p>
<p>The general idea is to detect whether a character is an integer, convert it into an integer, and process it further.</p>
<pre class="lang-py prettyprint-override"><code>>>> x = '³'
>>> x.isdigit() # Returns True
True
</code></pre>
<p>Python detects this character as a digit, but raises the following error when I try to convert it:</p>
<pre class="lang-py prettyprint-override"><code>>>> int(x)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: invalid literal for int() with base 10: '³'
</code></pre>
<p>I would like it if such characters could also be converted to an integer, to ease further processing.</p>
<p>Not sure if this helps, but here is my locale info</p>
<pre class="lang-py prettyprint-override"><code>>>> import locale
>>> locale.getdefaultlocale()
('en_US', 'UTF-8')
</code></pre>
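<p>For reference, the standard library distinguishes these cases: <code>str.isdigit()</code> is true for superscript digits, but <code>int()</code> only accepts decimal digits (the <code>str.isdecimal()</code> set). <code>unicodedata</code> can recover the numeric value:</p>

```python
import unicodedata

x = '³'
# isdigit() is True for SUPERSCRIPT THREE, but isdecimal() is False,
# which is why int(x) raises ValueError.
print(x.isdigit(), x.isdecimal())   # True False
value = unicodedata.digit(x)        # the character's digit value as an int
```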
| <python> | 2023-09-12 08:05:02 | 1 | 307 | Sai Prashanth |
77,087,115 | 1,668,622 | How do I avoid pollution of a .venv folder with files owned by root when running a script with root privileges? | <p>In a virtual environment (created with Poetry) I'm running a Python script regularly as the current user via</p>
<pre class="lang-bash prettyprint-override"><code>poetry run ./script.py
</code></pre>
<p>or</p>
<pre class="lang-bash prettyprint-override"><code>.venv/bin/python3 script.py
</code></pre>
<p>From within this script I need to run another script with <code>root</code> privileges, which boils down to running that script via sudo:</p>
<pre class="lang-bash prettyprint-override"><code>sudo .venv/bin/python3 other_script.py
</code></pre>
<p>or</p>
<pre class="lang-bash prettyprint-override"><code>ssh root@localhost /<path>/<to>/.venv/bin/python3 /<path>/<to>/other_script.py
</code></pre>
<p>(I probably could also <code>setuid</code> the script but I'd prefer the <code>ssh</code>-way since I need it anyway. Also installing the virtual environment for <code>root</code> is not an option for me)</p>
<p>Running a script this way will create <code>__pycache__</code> folders and <code>.pyc</code> files owned by root and running <code>python3 -B</code> or setting <a href="https://stackoverflow.com/questions/154443/how-to-avoid-pyc-files"><code>PYTHONDONTWRITEBYTECODE</code></a> will not create any bytecode files at all (which I'd also like to avoid).</p>
<p>On an abstract level - is there a nice way to run a Python3 script from an existing virtual environment owned by current user with <code>root</code> privileges without having to fiddle with <code>setuid</code> or missing <code>pyc</code> files?
Can I somehow tell Python (or the current environment) to create <code>pyc</code> files/folder with certain ownership or at a location owned by <code>root</code>?</p>
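<p>One candidate answer to the last question (hedged, as a sketch): Python 3.8+ honours <code>PYTHONPYCACHEPREFIX</code> (or <code>-X pycache_prefix</code>), which redirects all bytecode into a separate tree, so a root-invoked run can write its <code>.pyc</code> files outside the <code>.venv</code>. A self-contained demonstration:</p>

```python
import os
import pathlib
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    cache = os.path.join(tmp, "pycache-tree")
    mod = pathlib.Path(tmp) / "mymod.py"
    mod.write_text("X = 1\n")
    # Importing mymod compiles it; PYTHONPYCACHEPREFIX redirects the
    # resulting bytecode away from the source directory.
    subprocess.run(
        [sys.executable, "-c", "import mymod"],
        env={**os.environ, "PYTHONPATH": tmp, "PYTHONPYCACHEPREFIX": cache},
        check=True,
    )
    cache_created = os.path.isdir(cache)
    no_local_cache = not (pathlib.Path(tmp) / "__pycache__").exists()
```

<p>So something like <code>sudo PYTHONPYCACHEPREFIX=/root/.pycache .venv/bin/python3 other_script.py</code> (path hypothetical) should keep the <code>.venv</code> clean while still writing bytecode.</p>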
| <python><linux><root><pyc><file-ownership> | 2023-09-12 08:02:48 | 0 | 9,958 | frans |
77,087,110 | 3,302,016 | Rewrite apply function in pandas using simple DataFrame calculation | <p>I have this piece of code which calculates the values of a column based on the values of some existing columns in a pandas DataFrame.</p>
<pre><code>def get_prj_yield(row):
    try:
        prj_yield = row['prj_rev'] / (row['ds'] + row['otb_demand'])
        if pandas.isnull(prj_yield):
            prj_yield = row['otb_rev'] / row['otb_demand']
        return prj_yield
    except ZeroDivisionError:
        return 0
</code></pre>
<p>This function gets called on the dataframe using the <code>apply</code> function:</p>
<pre><code>df['prj_yield'] = output_df.apply(get_prj_yield, axis=1)
</code></pre>
<p>The existing dataframe has more than 1M rows, and I want to know if this function can be rewritten using plain DataFrame calculations. Would that improve the resources consumed?</p>
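<p>Yes — a vectorised version avoids the per-row Python call overhead of <code>apply</code> and is usually much faster at this scale. One hedged sketch (note that pandas division by zero yields <code>inf</code>/<code>NaN</code> instead of raising, so the edge-case semantics differ slightly from the exception-based original and should be checked):</p>

```python
import numpy as np
import pandas as pd

# Small sample frame standing in for the real data
df = pd.DataFrame({
    "prj_rev":    [10.0, np.nan, 8.0],
    "ds":         [2.0,  3.0,    0.0],
    "otb_demand": [3.0,  1.0,    0.0],
    "otb_rev":    [6.0,  4.0,    5.0],
})

# The try/except branch becomes a fillna/replace chain:
primary = df["prj_rev"] / (df["ds"] + df["otb_demand"])
fallback = df["otb_rev"] / df["otb_demand"]
df["prj_yield"] = (primary.fillna(fallback)
                          .replace([np.inf, -np.inf], 0)
                          .fillna(0))
```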
| <python><pandas><dataframe><apply> | 2023-09-12 08:01:38 | 1 | 4,859 | Mohan |
77,086,063 | 13,238,846 | Langchain pinecone similarity search error : Invalid type for variable 'namespace' | <p>I'm trying to get similarity search working with Pinecone's existing index, but I'm getting the following error when passing the query. This only happens on my local machine; everything works fine in Colab.</p>
<pre><code>pinecone.core.client.exceptions.ApiTypeError: Invalid type for variable 'namespace'. Required value type is str and passed type was NoneType at ['namespace']
</code></pre>
<p>My implementation is like follows.</p>
<pre><code>embeddings = OpenAIEmbeddings(
    openai_api_key=openai_api_key,
    model=embedding_model,
)
pinecone.init(
    api_key=os.getenv("pineconeapikey"),  # find at app.pinecone.io
    environment=os.getenv("pineconeenvironment"),  # next to api key in console
)

def data(query):
    """Answer product-related questions. Function requires user query."""
    docsearch = Pinecone.from_existing_index(index_name, embeddings)
    answer = docsearch.similarity_search(query)
    return answer
</code></pre>
| <python><callback><langchain><pinecone> | 2023-09-12 07:53:54 | 2 | 427 | Axen_Rangs |
77,087,044 | 12,888,866 | Python recursive dictionary value finder returns None | <p>I am working with multiple dictionaries which all have different depths and I am looking for the key <code>"values"</code> in all of them. I am not very familiar with recursive functions, but for this problem I attempted to write one. According to my print statements, the function arrives at the key I'm looking for, but it returns <code>None</code>.</p>
<p>An example of such a dictionary is this:</p>
<pre><code>d = {'key1': {'key2': {'values': [0, 1, 2]}}}
</code></pre>
<p>I wrote this function to find the key <code>"values"</code>, but the output is not what I expected:</p>
<pre><code>def extract_key(dict_, key, path=[]):
    if not isinstance(dict_, dict):
        raise ValueError("went too deep")
    else:
        keys = list(dict_.keys())  # get all keys, list for indexing
        print("path:", path)
        print("keys:", keys)
        if key not in keys:  # continue looking for key
            next_key = keys[0]
            print("next_key:", next_key)
            next_dict = dict_[next_key]  # look with first key of keys
            path.append(next_key)
            extract_key(next_dict, key, path)
        else:  # found key, return values
            print("out:", key)
            return dict_[key]

output = extract_key(d, "values")
print(output)
</code></pre>
<p>Output:</p>
<pre><code>path: []
keys: ['key1']
next_key: key1
path: ['key1']
keys: ['key2']
next_key: key2
path: ['key1', 'key2']
keys: ['values']
out: values
None
</code></pre>
<p>Why does it not return the list I'm looking for but also not an error, and how do I fix this?</p>
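<p>For comparison, a hedged sketch with the result of the recursive call propagated (the usual cause of this symptom is that the inner call's return value is computed and then discarded):</p>

```python
def extract_key(dict_, key, path=None):
    path = [] if path is None else path         # avoid a mutable default arg
    if not isinstance(dict_, dict):
        raise ValueError("went too deep")
    if key in dict_:
        return dict_[key]
    next_key = next(iter(dict_))                # descend via the first key
    path.append(next_key)
    return extract_key(dict_[next_key], key, path)  # return the inner result

d = {'key1': {'key2': {'values': [0, 1, 2]}}}
result = extract_key(d, "values")
```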
| <python><dictionary><recursion> | 2023-09-12 07:51:29 | 1 | 377 | Timo |
77,086,913 | 12,880,432 | How to get split months from two intervals? | <p>I have two dates in YYYYMM format:</p>
<pre><code>start : 202307
end : 202612
</code></pre>
<p>I want to split them into intervals, based on a provided interval size.</p>
<p>For example, <code>split_months('202307', '202405', 5)</code> will give me</p>
<pre><code>((202307,202311), (202312,202404), (202405,202405))
</code></pre>
<p>I tried with the below code, but I'm stuck on the logic:</p>
<pre><code>from datetime import datetime
from dateutil import relativedelta

def split_months(start, end, intv):
    periodList = []
    periodRangeSplitList = []
    start = datetime.strptime(start, "%Y%m")
    end = datetime.strptime(end, "%Y%m")
    mthDiff = relativedelta.relativedelta(end, start)
    if mthDiff == 0:
        periodRangeSplitList.append((start, start))
        return periodRangeSplitList
    diff = mthDiff / intv
    print(diff)
    for i in range(intv):
        periodList.append((start + diff * i).strftime("%Y%m"))
    periodList.append(end.strftime("%Y%m"))
    print(periodList)
</code></pre>
<p>I tried the above code, but it's not working. Can anyone share any suggestions?</p>
<p>Thanks</p>
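<p>One stdlib-only sketch that reproduces the example output (converting YYYYMM to a linear month index avoids date arithmetic entirely; names are illustrative):</p>

```python
def split_months(start, end, intv):
    """Split the inclusive range [start, end] (YYYYMM strings)
    into chunks of at most `intv` months."""
    def to_index(ym):                 # months on a linear axis
        return int(ym[:4]) * 12 + int(ym[4:]) - 1

    def to_yyyymm(i):                 # back to a YYYYMM integer
        return (i // 12) * 100 + (i % 12) + 1

    s, e = to_index(start), to_index(end)
    return tuple((to_yyyymm(i), to_yyyymm(min(i + intv - 1, e)))
                 for i in range(s, e + 1, intv))

print(split_months('202307', '202405', 5))
# ((202307, 202311), (202312, 202404), (202405, 202405))
```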
| <python><python-3.x> | 2023-09-12 07:31:34 | 3 | 610 | Maria |
77,086,896 | 311,660 | Adjusting the values in an array to be at least minimum distance apart | <p>I have an array of floats in a specific order</p>
<pre><code>[99.3, 100.0, 98.6]
</code></pre>
<p>I need to ensure that they are a minimum distance apart, for this example say they must be at least 3 units apart. The maximum value should not be adjusted.</p>
<p>The other values can all be adjusted to pass the spacing rule.</p>
<p>Their original values are used to display a percentage, and they should always remain in that order so they match with a graph. The adjusted values will ultimately be mapped to display coordinates (this is why the spacing is important, so items do not overlap)</p>
<p>So far I have:</p>
<pre class="lang-py prettyprint-override"><code>from copy import copy

def adjust_spacing(values, minimum_spacing):
    output_values = copy(values)
    adjusted_values = []
    adjusted_indexes = []
    max_val = max(values)
    max_val_index = values.index(max_val)
    # Add the maximum value as is
    adjusted_values.append(max_val)
    adjusted_indexes.append(max_val_index)
    current_max = max_val
    while len(adjusted_values) != len(values):
        # Find the next maximum value (excluding the current max) and its index
        next_value = max(val for val in values if val < current_max)
        next_value_index = values.index(next_value)
        # Calculate the adjusted next value
        adjusted_next_value = current_max - minimum_spacing
        adjusted_values.append(adjusted_next_value)
        adjusted_indexes.append(next_value_index)
        current_max = adjusted_next_value
    while len(adjusted_values) > 0:
        value = adjusted_values.pop()
        index = adjusted_indexes.pop()
        output_values[index] = value
    return output_values

result = adjust_spacing([99.0, 100.0, 95.0], 4)
print(f"=> {result}")
</code></pre>
<p>and when run I get:</p>
<pre class="lang-py prettyprint-override"><code>values = [99.3, 100.0, 98.6]
minimum_distance = 4
adjust_spacing(values, minimum_distance)
=> [96.0, 100.0, 92.0]
</code></pre>
<p>While that's correct, surely there is a better, more idiomatic way to do this.</p>
<p>It'd be even better if it worked with a larger array. (this breaks with len > 3... oof!)</p>
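<p>A more compact sketch that also handles longer arrays: sort the indices by value (descending) once, then walk down that ranking and clamp each value to at least <code>minimum_spacing</code> below the previous one. The maximum stays fixed and the original positions are preserved:</p>

```python
def adjust_spacing(values, minimum_spacing):
    # Indices ordered from the largest value (kept as-is) downwards
    order = sorted(range(len(values)), key=lambda i: values[i], reverse=True)
    out = list(values)
    for prev, cur in zip(order, order[1:]):
        # Clamp each value to sit at least `minimum_spacing` below the
        # (already adjusted) value ranked just above it
        out[cur] = min(out[cur], out[prev] - minimum_spacing)
    return out

print(adjust_spacing([99.0, 100.0, 95.0], 4))   # [96.0, 100.0, 92.0]
```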
| <python><arrays><python-3.x> | 2023-09-12 07:29:25 | 1 | 30,444 | ocodo |
77,086,743 | 8,961,082 | Airflow dag hanging on running state when interacting with AWS S3 bucket | <p>Simple Airflow DAGs are working for me, however, when I try to interact with an S3 bucket, the DAG will just hang on a running state. I'm using airflow S3 hook. If I run the code it will work but as a DAG it hangs. The function interacts with an API and saves the json file to an S3 bucket. Add your own bucket name to try the function out.</p>
<pre><code>from airflow.providers.amazon.aws.hooks.s3 import S3Hook
import requests
import json
import boto3


def dump_json_to_s3(url, bucket_name, file_name):
    """
    :param url: api
    :param bucket_name: s3 bucket name
    :param file_name: json file name
    :return:
    """
    try:
        print("Begin process ...")
        # Retrieve JSON file from API
        response = requests.get(url=url)
        # S3 key path
        key_path = f"asset/{file_name}.json"
        # Initialise hook
        hook = S3Hook('s3_conn')
        print(response.text)
        hook.load_string(
            string_data=response.text,
            key=key_path,
            bucket_name=bucket_name,
            replace=True
        )
        # Log a success message
        print(f"Success: JSON object has been uploaded to {key_path} S3 key path.")
    # Exceptions
    except requests.exceptions.RequestException as e:
        # Log an error message
        print(f"Request Error: {e}")
    except boto3.exceptions.S3UploadFailedError as e:
        # Log an error message
        print(f"S3 Upload Error: {e}")
    except Exception as e:
        # Log an error message
        print(f"Unexpected Error: {e}")
    return


# If you want to try the function without the DAG you can run below
url = "http://engineering-exam.s3-website.ap-southeast-2.amazonaws.com/"
bucket_name = "INSERT YOUR S3 BUCKET NAME"
dump_json_to_s3(url=url, bucket_name=bucket_name, file_name='asset_new')
</code></pre>
<p>DAG below</p>
<pre><code>from airflow import DAG
from airflow.operators.python import PythonOperator
from datetime import datetime, timedelta
from dump_json_to_s3_file import *

default_args = {
    'owner': 'airflow',
    'start_date': datetime(2023, 9, 1),
}

dag = DAG(
    dag_id='dump_to_s3',
    schedule_interval=None,
    default_args=default_args,
    catchup=False
)

convert_json = PythonOperator(
    task_id="convertor",
    python_callable=dump_json_to_s3,  # Reference the function without calling it
    op_args=[  # Provide arguments as a list if needed
        "http://engineering-exam.s3-website.ap-southeast-2.amazonaws.com/",
        "INSERT YOUR S3 BUCKET NAME",
        "airflow_tester"
    ],
    dag=dag
)

convert_json
</code></pre>
| <python><amazon-s3><airflow><directed-acyclic-graphs> | 2023-09-12 07:04:18 | 1 | 377 | d789w |
77,086,529 | 20,732,098 | HTML Error Message will be returned instead of the custom JSON error message | <p>I have an API_1 that I call with Postman. If there is an error in the calculation, an error message will be output in JSON format, as well as the status code. I use the following code for this:</p>
<pre><code>@app.errorhandler(Exception)
def handle_default_error(e):
    message = e.description
    return jsonify(error=message), e.code
</code></pre>
<p>The output in Postman looks like this:
<a href="https://i.sstatic.net/vZdBi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vZdBi.png" alt="enter image description here" /></a></p>
<p>This is the correct JSON error message that will be returned.</p>
<p>My goal is that API_1 is called by API_Total. If I now call API_Total with Postman, which then calls API_1 and an error occurs there in the calculation, the JSON error message should normally be passed from API_1 to API_Total, and then output by API_Total. However, I get the following output:</p>
<pre><code><html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=utf-8">
<title>Error response</title>
</head>
<body>
<h1>Error response</h1>
<p>Error code: 400</p>
<p>Message: Bad Request.</p>
<p>Error code explanation: 400 - Bad request syntax or unsupported method.</p>
</body>
</code></pre>
<p>But I don't want the JSON error message to be replaced by an HTML error message.</p>
<pre><code>return jsonify(error=message), e.code, {'Content-Type': 'application/json'}
</code></pre>
<p>Unfortunately, this does not change anything and there is still no JSON error message.</p>
<pre><code># Simplified procedure

# 1. Postman calls API_Total
@app.route('/api/API_Total', methods=['GET'])

# 2. API_1 is called by API_Total
@app.route('/api/API_1', methods=['GET'])
....
try:
    response = requests.request("GET", url, headers=headers_content, data=payload_content, proxies=proxies)
except:
    return None

# 3. API_1
# Performing the calculations
# If an error occurs:
return abort(404, 'Bad Request')

# app.errorhandler is called
@app.errorhandler(Exception)
def handle_default_error(e):
    message = e.description
    # Return to API_Total
    return jsonify(error=message), e.code

# 4. When an error occurs, API_Total displays it as HTML rather than JSON unless I set the status code to 200.
</code></pre>
<p>This is where I have the problem (#3). If I return it using <code>jsonify</code> and do not set the status code to 200, the error message is output as HTML. But if I use <code>return jsonify</code> and set the status code to 200, the error message is correctly output as JSON. However, I generate the error message and the status code using <code>abort</code>, and I would like to return the status code correctly rather than defaulting it to 200.</p>
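<p>For reference, a self-contained sketch (using Flask's test client; route and message are illustrative) showing that the error handler itself does return JSON with the right status code. If API_Total shows HTML, the conversion likely happens in the layer that relays API_1's response, rather than in this handler:</p>

```python
from flask import Flask, abort, jsonify

app = Flask(__name__)

@app.errorhandler(Exception)
def handle_default_error(e):
    # werkzeug HTTPExceptions carry .description and .code; fall back to 500
    return jsonify(error=getattr(e, "description", str(e))), getattr(e, "code", 500)

@app.route("/api/API_1")
def api_1():
    abort(404, "Bad Request")

client = app.test_client()
resp = client.get("/api/API_1")
```

<p>To propagate the downstream error, API_Total could forward <code>response.content</code> and <code>response.status_code</code> from its call to API_1 instead of returning <code>None</code> from the <code>except</code> block.</p>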
| <python><flask><pycharm> | 2023-09-12 06:23:52 | 1 | 336 | ranqnova |
77,086,484 | 1,145,666 | How do I get the body of a PUT request in SimpleHTTPRequestHandler | <p>I have this super simple HTTP server to test some REST endpoints:</p>
<pre><code>import http.server

class MyHandler(http.server.SimpleHTTPRequestHandler):
    def do_PUT(self):
        self.send_response(200)
        self.end_headers()

    def do_POST(self):
        self.send_response(409)
        self.end_headers()

server_address = ('', 8000)
httpd = http.server.HTTPServer(server_address, MyHandler)
httpd.serve_forever()
</code></pre>
<p>But, now I want to check if my PUT call sends the correct body, so I want to just print the request body in the <code>do_PUT</code> method. I tried looking for more info in the <a href="https://docs.python.org/3/library/http.server.html#http.server.SimpleHTTPRequestHandler" rel="nofollow noreferrer">documentation</a>, but it doesn't even mention the <code>do_PUT</code> method in the <code>SimpleHTTPRequestHandler</code> class (or any other class for that matter).</p>
<p>How can I access the request body in the <code>do_PUT</code> method of a inherited <code>SimpleHTTPRequestHandler</code> ?</p>
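<p>A sketch of the usual approach: the base handler doesn't parse request bodies for you, but the request stream is available as <code>self.rfile</code>, and the <code>Content-Length</code> header tells you how many bytes to read:</p>

```python
import http.server
import threading
import urllib.request

class MyHandler(http.server.SimpleHTTPRequestHandler):
    def do_PUT(self):
        # Read exactly Content-Length bytes of the request body
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        self.server.last_body = body            # stash for inspection
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):               # silence request logging
        pass

# Quick self-test against an ephemeral port
server = http.server.HTTPServer(("127.0.0.1", 0), MyHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/", data=b'{"key": 1}', method="PUT")
urllib.request.urlopen(req)
server.shutdown()
```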
| <python><http.server> | 2023-09-12 06:13:53 | 1 | 33,757 | Bart Friederichs |
77,086,419 | 956,424 | multilevel user admin in django | <p>I need to create multi-level users with roles, using the same model validation as specified in admin.py, except that each of these users will have different roles. For example, a Customer can create multiple custom users, each with a different role, and each customer should see only his own records in the admin panel. Models: Customer, Custom_user (customer foreign key), Role (permission), customer_customuser (custom_user_id, role_id). I need to reuse the same permissions available in the default Django admin panel for the different models (add, edit, delete, view), but I have also customized admin.py so that each customer administers via changelist_view, filtering the queryset objects accordingly. How can this be achieved? The models:</p>
<pre><code>class Role(models.Model):
    name = models.CharField(max_length=100, unique=True)
    permissions = models.ManyToManyField(Permission)

    def __str__(self):
        return self.name


class CustomUserManager(BaseUserManager):
    def create_user(self, username, email, password=None):
        if not email:
            raise ValueError("The Email field must be set")
        email = self.normalize_email(email)
        user = self.model(username=username, email=email)
        user.set_password(password)
        user.save(using=self._db)
        return user

    def create_superuser(self, username, email, password=None):
        user = self.create_user(username, email, password)
        user.is_staff = True
        user.is_superuser = True
        user.save(using=self._db)
        return user


class CustomUser(AbstractBaseUser, PermissionsMixin):
    username = models.CharField(max_length=150, unique=True)
    email = models.EmailField(unique=True)
    is_active = models.BooleanField(default=True)
    is_staff = models.BooleanField(default=False)
    # Specify a related_name below
    role = models.ForeignKey(Role, on_delete=models.CASCADE,
                             related_name='custom_users')
    # Specify a related_name for the groups below
    groups = models.ManyToManyField(Group, blank=True,
                                    related_name='custom_users')
    # Add a custom related_name for user_permissions below
    user_permissions = models.ManyToManyField(
        Permission,
        blank=True,
        related_name='custom_users_permissions'
    )

    objects = CustomUserManager()

    USERNAME_FIELD = 'email'
    REQUIRED_FIELDS = ['username']

    def __str__(self):
        return self.username

    # class Meta:
    #     permissions = [
    #         ("can_view_custom_user", "Can view custom users"),
    #         # Define other permissions here as needed
    #         ("view_domain", "Can view domain"),
    #         ("add_domain", "Can add domain"),
    #         ("change_domain", "Can change domain")
    #     ]


# Create your models here.
class AbstractPerson(models.Model):
    first_name = models.CharField(max_length=60)
    last_name = models.CharField(max_length=60)
    email = models.EmailField(unique=True)
    mobile = models.CharField(max_length=10, validators=[mobile_regex])
    muser_id = models.IntegerField(null=True, blank=True)


class Customer(AbstractPerson):
    # multiadmin line
    # Each Customer can have multiple CustomUser instances with different roles
    custom_users = models.ManyToManyField(CustomUser,
                                          through='CustomerCustomUser')
</code></pre>
<p>Roles should be the same as available in the default django users/groups</p>
| <python><django> | 2023-09-12 05:59:26 | 1 | 1,619 | user956424 |
77,086,208 | 8,089,312 | x axis tick labels are one position ahead of data points | <p>I have a seaborn lineplot that shows two categories' quarterly count changes. The issue is that the x-axis labels are one position ahead of the data points. I adjusted my code, but it didn't help.</p>
<p>Create example data</p>
<pre><code>import pandas as pd
import random
import matplotlib.pyplot as plt
import seaborn as sns

categories = ['A', 'B']
quarters = []
for year in range(2014, 2024):
    for quarter in range(1, 5):
        quarters.append(f"{year}-Q{quarter}")

data = []
for category in categories:
    for quarter in quarters:
        count = random.randint(40000, 500000)
        data.append({'category': category, 'quarter': quarter, 'count': count})

df = pd.DataFrame(data)
</code></pre>
<p>Create the chart</p>
<pre><code># Create the lineplot
plt.figure(figsize=(10, 6))
ax = sns.lineplot(x="quarter", y="count", hue="category", data=df, marker="o")
plt.xlabel("Quarter")
plt.ylabel("Count")
plt.xticks(rotation=45)

# Set the x-axis tick positions and labels based on the quarters in the DataFrame
x_positions = range(len(quarters))
ax.set_xticks(x_positions)
ax.set_xticklabels(quarters, rotation=45)  # Display every label

plt.tight_layout()
plt.show()
</code></pre>
<p>The chart
<a href="https://i.sstatic.net/Uqb4z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Uqb4z.png" alt="enter image description here" /></a></p>
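<p>One hedged suspicion: with 45° rotation and the default <code>ha="center"</code>, each label's centre sits under its tick, but the rotated text visually leans a full position to the side. Anchoring the labels usually removes the apparent offset (a self-contained sketch of the tick-label call only, with a stand-in plot):</p>

```python
import matplotlib
matplotlib.use("Agg")                      # headless backend for the demo
import matplotlib.pyplot as plt

quarters = [f"{y}-Q{q}" for y in range(2014, 2024) for q in range(1, 5)]
fig, ax = plt.subplots(figsize=(10, 6))
ax.plot(range(len(quarters)), range(len(quarters)), marker="o")
ax.set_xticks(range(len(quarters)))
# Right-align each label and rotate it about its anchor point so the text
# ends exactly at its tick instead of appearing shifted:
ax.set_xticklabels(quarters, rotation=45, ha="right", rotation_mode="anchor")
```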
| <python><matplotlib><datetime><seaborn><xticks> | 2023-09-12 04:56:22 | 2 | 1,744 | Osca |
77,086,128 | 8,595,891 | How to pass worker options/parameters in gunicorn | <p>I am running an app which needs <code>uvicorn</code>'s asyncio loop; by default uvicorn uses <code>auto</code>, which sometimes randomly assigns <code>uvloop</code>, and that breaks the behavior. So I use the following command:</p>
<pre><code>uvicorn myapp.server.api:app --loop asyncio --port 7474
</code></pre>
<p>This forces uvicorn to use the <code>asyncio</code> loop, and it works as expected.</p>
<p>Now I am trying to move this setup to <code>gunicorn</code> with <code>uvicorn</code> as the worker, but I couldn't find a way to pass this <code>loop</code> option to uvicorn.</p>
<pre><code>gunicorn myapp.server.api:app -k uvicorn.workers.UvicornWorker --bind 0.0.0.0:7474
</code></pre>
<p>But this ends up using the default value, i.e. <code>auto</code>, and selects <code>uvloop</code> as the loop type. How can I force it to use the <code>asyncio</code> loop? Help is appreciated.</p>
| <python><python-asyncio><fastapi><gunicorn><uvicorn> | 2023-09-12 04:31:15 | 1 | 1,362 | Pranjal Doshi |
77,086,012 | 432,509 | Python multiprocessing: how to constrain the amount of memory used by a Pool? | <p>When working on a script, you may wish to use all available cores for best performance.</p>
<p>There are cases, however, when each sub-process uses a lot of memory, where it may be useful to set a memory threshold for the pool to prevent each task in the pool from running out of memory or causing the system to swap.</p>
<p>Is there a way of setting a memory threshold so new processes are only started when the combined memory use of all other tasks in the pool is under a threshold? (or something roughly equivalent).</p>
| <python><multiprocessing> | 2023-09-12 03:56:25 | 1 | 49,183 | ideasman42 |
77,085,952 | 2,626,865 | Should probabilities be managed outside of Hypothesis? | <pre class="lang-py prettyprint-override"><code>from hypothesis import strategies as st
import random
import time
import math
</code></pre>
<p>Imagine your code expects a list sampled from four elements. Some elements are more likely than others. You don't know the list's length, so you've modeled the list as a series of Bernoulli trials, where a false outcome concludes the list.</p>
<p>List elements and probabilities:</p>
<ul>
<li>"a": 10</li>
<li>"b": 20</li>
<li>"c": 25</li>
<li>"d": 35</li>
</ul>
<p>Failure probability: 0.1, which leads to an expected list length of 9 elements.</p>
<p>Here's the strategy:</p>
<pre class="lang-py prettyprint-override"><code>@st.composite
def choices_bernoulli(draw, population, weights, failure_weight):
    """
    Simulate a Bernoulli process that stops after the first failure,
    with failure probability given by 'failure_weight'.
    """
    random = draw(st.randoms())
    fail = object()
    results = []
    while True:
        choice = random.choices(
            (*population, fail), (*weights, failure_weight), k=1)[0]
        if choice is fail:
            break
        results.append(choice)
    return results

choices_bernoulli("abcd", [10, 20, 25, 35], 10)
</code></pre>
<p>For me, this takes a depressing ~35 seconds to generate 100 samples. But even before that, let's consider that there are actually two random processes. One samples an item from a pool of known elements, and returns a known element. The other simulates a random process to generate the list's length - a new unknown quantity. Should either of these processes be specified outside of the Hypothesis?</p>
<p>One aspect of the Hypothesis is to sample the search space for test data. As a perfect fuzzer Hypothesis would ideally consider every possible combination of choices. From this perspective there's no reason to specify choice probabilities - every combination will be sampled. Unfortunately, the search space is almost always too large. If every combination can't be sampled, one solution is to sample the search space uniformly, which is how shrinking works. In strategies like <code>one_of</code>, you always order smaller sub-spaces first as that's what the Hypothesis will shrink towards. Uniform sampling attempts to exercise all code paths regardless of bias in likely actual data. Another solution is to push Hypothesis towards exploring more fully the sample space which produces more likely choices. That's what specifying choice probabilities does. Unfortunately, I've come to believe that most bugs lie in unexpected inputs, not in subtle variations of the most common denominator. So there - I've just convinced myself that specifying choice probabilities is not a very good idea. Even if it was, the Hypothesis manipulates the random generator to invalidate the choice probabilities, though this can be overridden using the <code>use_true_random</code> parameter. Below is a comparison of <code>random.choice</code> with quadratic weights compared to a random from Hypothesis. Unlike <code>strategies.sampled_from</code>, the Hypothesis doesn't seem to be attempting any shrinking. I'm guessing that the Hypothesis can only manipulate the sample space or state of the random number generator and has no control over the output of any random functions. As for the strategy above, it generates samples with an average length of 3-4 rather than 9.</p>
<pre><code>random.choices: false random random.choices: true random strategies.sampled_from
0: *********************** 0: 0: **************************
1: ******************* 1: 1: **********************
2: ***************************** 2: ** 2: **********************
3: ************************ 3: **** 3: **************************
4: ******************** 4: ***** 4: ***************************
5: ************************** 5: ******** 5: ************************
6: ********************* 6: ************* 6: **************************
7: ********************** 7: ***************** 7: ***************************
8: ************************* 8: ************************** 8: *****************************
9: ********************** 9: ***************************** 9: *********************
</code></pre>
<p>Now that I've decided to avoid probabilities in lieu of sampling the search space uniformly, how can I do this for variables that are measured solely as probabilities, such as list length? My first attempt was to discard the binomial distribution in lieu of generating data that would share some of the original statistics, such as mean. How about sampling the list length from a uniform distribution with the same mean as the original negative binomial distribution? Well, it didn't work and I shouldn't have been surprised. The list strategy from Hypothesis isn't normally distributed! The strategy below generates lists with an average length of 4-5 rather than the expected 9.</p>
<pre><code>@st.composite
def choices_bernoulli_mean(draw, population, failure_weight):
    fail_norm = failure_weight / 100
    expected_k = 1 * (1 - fail_norm) / fail_norm
    max_size = int(expected_k * 2)
    return draw(st.lists(
        st.sampled_from(population), min_size=0, max_size=max_size))
</code></pre>
<p>Now I decide to sample the length directly from the negative binomial distribution rather than simulating a random process. The list strategy will be fed a fixed length, this sample. Again we run into the same issue, this time with <code>random.uniform</code>, which is no longer uniform. It's the same when replacing the uniform distribution with a floats strategy. Both have a penchant for extremes, and extremely large lists are very slow to generate - both time out with <code>Unsatisfiable</code>.</p>
<pre><code>@st.composite
def choices_bernoulli_dist(draw, population, failure_weight):
    def inverse(i, p):
        if i <= 0 or i > p:
            raise ValueError
        return math.floor(math.log(i / p) / math.log(1 - p))

    p = failure_weight / 100
    random = draw(st.randoms(use_true_random=False))
    i = random.uniform(0, p)
    hypothesis.assume(0 < i <= p)
    # An alternative source:
    # i = draw(st.floats(
    #     min_value=0, max_value=p, exclude_min=True, exclude_max=False))
    k = inverse(i, p)
    return draw(st.lists(
        st.sampled_from(population), min_size=k, max_size=k))
</code></pre>
<p>From this point, we can either sample the negative binomial distribution with a true random number generator, or we can accept the Hypothesis <code>random</code> and impose our own minimum and maximum list sizes. If we impose limits our problem transforms to one of sampling from a pool of known elements, for which we decided not to impose any external distributions. Even if we don't impose limits it's now clear that the random process is biasing the data we can generate in ways that may not be compatible with how the Hypothesis searches the sample space. Of course, we don't instead want a uniform distribution. A 99th-percentile sample of from the negative binomial would be a list of 43 members, and that's just too long for a uniform distribution. So perhaps we only want to consider 75% of possible samples (length 13). Or perhaps 50% (length 6). And once again we are imposing limits. Perhaps we want a Hypothesis-driven distribution, which attempts only a few large values that then shrink down. I honestly have no idea. What are your thoughts?</p>
<p>As for performance, the real bottlenecks are the calls to <code>st.sampled_from</code> or <code>random.choices</code>. By comparison, generating the list itself is downright cheap.</p>
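<p>As an aside, the inverse-CDF trick in <code>inverse</code> can be sanity-checked outside Hypothesis with nothing but the standard library. This is only a sketch of that check (the sample count and tolerance are arbitrary choices): feeding uniform variates on (0, p] through the inverse should yield geometric counts with mean (1 - p) / p.</p>

```python
import math
import random

def inverse(i, p):
    if i <= 0 or i > p:
        raise ValueError
    return math.floor(math.log(i / p) / math.log(1 - p))

p = 0.25
rng = random.Random(0)
# lower bound slightly above 0 so the domain check never trips
samples = [inverse(rng.uniform(1e-12, p), p) for _ in range(20_000)]

mean = sum(samples) / len(samples)
# a geometric "failures before first success" count has mean (1 - p) / p = 3.0
print(round(mean, 2))
```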
| <python><probability><python-hypothesis> | 2023-09-12 03:38:14 | 1 | 2,131 | user19087 |
77,085,915 | 14,154,784 | JavaScript on Site Prevents `requests.get` from Working | <p>I’m trying to write a simple web-scraper, practicing on <a href="https://weighttraining.guide/exercises/standing-dumbbell-overhead-triceps-extension/" rel="nofollow noreferrer">this site</a> which has dynamic content.</p>
<p>My strategy is to use Selenium to get the page source so I have all the dynamic content, then scrape using Beautiful Soup. Basically, exactly the strategy <a href="https://stackoverflow.com/questions/14529849/python-scraping-javascript-using-selenium-and-beautiful-soup">here</a>.</p>
<p>I’m stuck on Step 1, however: I can’t get Selenium to even get the page. The following script gets through the ‘driver loaded’ print statement and then freezes:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537"
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument(f"user-agent={user_agent}")

try:
    driver = webdriver.Chrome(executable_path='/opt/homebrew/bin/chromedriver', options=chrome_options)
    print("Driver loaded")
    driver.get("https://weighttraining.guide/exercises/standing-dumbbell-overhead-triceps-extension/")
    print("Selenium get successful")
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CLASS_NAME, "entry-content"))
    )
    print("Page loaded")
    driver.quit()
except Exception as e:
    print(f"An error occurred: {e}")
</code></pre>
<p>When I run this script I see Chrome open the website and I can see it load successfully, but then the script freezes. I’ve tried this with and without any of the <code>chrome_options</code>, <code>user_agent</code>, and <code>WebDriverWait</code> portions. Nothing seems to work.</p>
<p>Please help!</p>
| <python><selenium-webdriver><web-scraping><beautifulsoup><python-requests> | 2023-09-12 03:27:06 | 1 | 2,725 | BLimitless |
77,085,879 | 1,492,229 | LIME gives this error "classifier models without probability scores" in python | <p>It is my first time using LIME and I have never used any interpretation technique before.</p>
<p>Most likely I am doing something wrong, but I cannot figure out what it is.</p>
<p>I tried googling and going through SO questions to find a way to resolve this, but did not find anything that helped.</p>
<p>my dataset <strong>df_reps</strong> looks like this</p>
<pre><code>Toyota Horse Toyota Gear... Mazda Night King
Green Mazda King Toyota ... Blue Mazda Toyota
...
...
Gear Tyre Toyota Geaer ... Horse Blue Park
Laptop Invoice Toyota ... Horse Mango Kitkat
</code></pre>
<p>and labels to predict, is whether the customer approved of not so the labels are only 0 and 1</p>
<p>Here is my code</p>
<pre><code>def BOW(df):
    CountVec = CountVectorizer()  # to use only bigrams: ngram_range=(2,2)
    Count_data = CountVec.fit_transform(df)
    Count_data = Count_data.astype(np.uint8)
    cv_dataframe = pd.DataFrame(Count_data.toarray(), columns=CountVec.get_feature_names_out(), index=df.index)  # <- HERE
    return cv_dataframe.astype(np.uint8)

df = BOW(df_reps)
y = df_Labels  # this is either 0 or 1
X = df

X_train, X_test, y_train, y_test = train_test_split(X, y)
clf = RandomForestClassifier(max_depth=100)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
</code></pre>
<p>I converted the text into tabular format using BOW.</p>
<p>Here is the part for LIME:</p>
<pre><code>explainer = LimeTabularExplainer(X_train.values, feature_names=X_train.columns, verbose=True, mode='classification')
exp = explainer.explain_instance(X_test.values[1], clf.predict, num_features=10000)
</code></pre>
<p>But I am getting this error:</p>
<blockquote>
<p>NotImplementedError: LIME does not currently support classifier models
without probability scores. If this conflicts with your use case,
please let us know: <a href="https://github.com/datascienceinc/lime/issues/16" rel="nofollow noreferrer">https://github.com/datascienceinc/lime/issues/16</a></p>
</blockquote>
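<p>For context on what LIME is complaining about: in classification mode it expects a function that returns per-class probability scores, which for scikit-learn classifiers is <code>predict_proba</code> rather than <code>predict</code>. A sketch of the difference on synthetic data (nothing here is specific to the bag-of-words features):</p>

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((40, 5))
y = (X[:, 0] > 0.5).astype(int)

clf = RandomForestClassifier(random_state=0).fit(X, y)

labels = clf.predict(X[:2])        # hard labels, shape (2,)
probas = clf.predict_proba(X[:2])  # probability scores, shape (2, n_classes)
print(labels.shape, probas.shape)
```

<p>So the <code>explain_instance</code> call would be handed <code>clf.predict_proba</code> instead of <code>clf.predict</code>.</p>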
| <python><nlp><lime> | 2023-09-12 03:14:13 | 1 | 8,150 | asmgx |
77,085,832 | 1,471,980 | how do you force write pandas styles to excel file in Pandas | <p>I have a data frame like this:</p>
<p>df</p>
<pre><code>Server   CPU_Usage   Memory_Usage
Prod1    80          30
Prod2    70          10
Prod3    20          12
</code></pre>
<p>I need to apply Pandas Style to show % sign next to the numbers like below:</p>
<pre><code>Server   CPU_Usage   Memory_Usage
Prod1    80%         30%
Prod2    70%         10%
Prod3    20%         12%
</code></pre>
<p>I tried this:</p>
<pre><code>df_style = df.style.format({'CPU_Usage': '{:}%', 'Memory_Usage': '{:}%'})
</code></pre>
<p>When I display df_style, it shows up correctly in the Jupyter notebook. But when I write this to Excel as below:</p>
<pre><code>df_style.to_excel(r'report.xlsx', engine='openpyxl')
</code></pre>
<p>The percent sign (%) disappears from the Excel file. Any ideas on how to force writing % next to CPU_Usage and Memory_Usage in the Excel file?</p>
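<p>For what it's worth, <code>Styler.format</code> only changes the rendered display string, and those display strings do not appear to be carried through <code>Styler.to_excel</code>, which writes the underlying cell values plus styling. One blunt workaround (a sketch) is to bake the % into the data itself before writing:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Server": ["Prod1", "Prod2", "Prod3"],
    "CPU_Usage": [80, 70, 20],
    "Memory_Usage": [30, 10, 12],
})

out = df.copy()
for col in ["CPU_Usage", "Memory_Usage"]:
    # the cells now literally contain strings like "80%"
    out[col] = out[col].astype(str) + "%"

# out.to_excel("report.xlsx", index=False)
print(out["CPU_Usage"].tolist())
```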
| <python><pandas><pandas-styles> | 2023-09-12 02:57:22 | 2 | 10,714 | user1471980 |
77,085,817 | 12,394,134 | Map user-defined function on multiple polars columns | <p>I am doing a bit of data munging on a <code>polars.Dataframe</code> and I could write the same expression twice, but I would ideally like to cut down on that a bit. So I was thinking that I could just create a user-defined function that just plugs in the column names.</p>
<p>But, I know that polars tends to be a bit reluctant to let people bring in user-defined functions (and for good reasons), but it feels a bit tedious for me to write out the same expression over and over again, but with different columns.</p>
<p>So let's say that I have a polars dataframe like this:</p>
<pre><code>import polars as pl
df = pl.DataFrame({
'a':['Strongly Disagree', 'Disagree', 'Agree', 'Strongly Agree'],
'b':['Strongly Agree', 'Agree', 'Disagree', 'Strongly Disagree'],
'c':['Agree', 'Strongly Agree', 'Strongly Disagree', 'Disagree']
})
</code></pre>
<p>And, I could just use the <code>when-then-otherwise</code> expression to convert these three to numeric columns:</p>
<pre><code>df_clean = df.with_columns(
pl.when(
pl.col('a') == 'Strongly Disagree'
).then(
pl.lit(1)
).when(
pl.col('a') == 'Disagree'
).then(
pl.lit(2)
).when(
pl.col('a') == 'Agree'
).then(
pl.lit(3)
).when(
pl.col('a') == 'Strongly Agree'
).then(
pl.lit(4)
)
)
</code></pre>
<p>But I don't want to write this out two more times.</p>
<p>So I was thinking, I could just write a function so then I could just map over <code>a</code>, <code>b</code>, and <code>c</code>, but this seems like it wouldn't work.</p>
<p>Anyone have any advice for the most efficient way to do this?</p>
| <python><python-polars> | 2023-09-12 02:53:22 | 2 | 326 | Damon C. Roberts |
77,085,415 | 4,002,633 | Python, Pandas, Dataframes: Inexplicable characteristic difference that I cannot identify | <p>In a code base that has worked wonderfully for a long time, suddenly I have a crash. I have traced it nominally to a Pandas idiosyncrasy that has me confused. Essentially I have two CSV files, a reference and a result, and I load each of these into a dataframe for a subsequent comparison.</p>
<p>But out of the blue, two CSV files that look comparable to me, certainly have the same shape and the same characteristic content, load in very different ways, one of them very cryptically, with the <code>read_csv()</code> method. I have skipped the first line in each file, which has the column headers, and used a custom set. This dialog with Python illustrates the problem in a way I find hard to explain in natural language. But if I must, it is this: two seemingly identical dataframes, in shape and character, behave very differently when sliced.</p>
<p>In the following Python dialog you will see that reference behaves as expected, while result, for some reason, when slicing one column out, returns a 2D thing; and yet that is judged purely on the repr, since the actual types and dimensions of the slices agree. In this example I'm slicing column 2.</p>
<p>It looks almost as if result in column 2 has the whole CSV row. The problem is, if I slice column 1 or 0 or 3 or 4 I get the same result!</p>
<p>What could be causing this oddity, that makes it very hard for me to compare slices?</p>
<pre class="lang-py prettyprint-override"><code>>>> reference = pandas.read_csv(reference_file, skiprows=1, header=None, names=column_names, delimiter=delimiter, low_memory=False)
>>> result = pandas.read_csv(result_file, skiprows=1, header=None, names=column_names, delimiter=delimiter, low_memory=False)
>>> type(reference)
<class 'pandas.core.frame.DataFrame'>
>>> type(result)
<class 'pandas.core.frame.DataFrame'>
>>> reference.shape
(70698, 1426)
>>> result.shape
(70698, 1426)
>>> reference.iloc[:, 2].size
70698
>>> result.iloc[:, 2].size
70698
>>> reference.iloc[:, 2]
0 -9.900000e+37
1 -9.900000e+37
2 -9.900000e+37
3 -9.900000e+37
4 -9.900000e+37
...
70693 -9.900000e+37
70694 -9.900000e+37
70695 -9.900000e+37
70696 -9.900000e+37
70697 -9.900000e+37
Name: Distance_vl, Length: 70698, dtype: float64
>>> result.iloc[:, 2]
0 0.000000 -9.900000e+37 1970-01-19 13:54:44 464.0 NaN NaN -1.524544 1.524544 -1.524544 1.524544 2800 0.0 0.0 ... 0.0 0.0
1 0.000000 -9.900000e+37 1970-01-19 13:54:44 514.0 20.276414 -87.380000 -1.524544 1.524544 -1.524544 1.524544 2800 0.0 0.0 ... 0.0 0.0
2 0.000320 -9.900000e+37 1970-01-19 13:54:44 914.0 20.276409 -87.380000 -1.524544 1.524544 -1.524544 1.524544 2800 0.0 0.0 ... 0.0 0.0
3 0.000325 -9.900000e+37 1970-01-19 13:54:44 920.0 20.276409 -87.380000 -1.524544 1.524544 -1.524544 1.524544 2800 0.0 0.0 ... 0.0 0.0
4 0.000485 -9.900000e+37 1970-01-19 13:54:45 146.0 20.276406 -87.380000 -1.524544 1.524544 -1.524544 1.524544 2800 0.0 0.0 ... 0.0 0.0
...
70693 2.473177 -9.900000e+37 1970-01-19 15:09:15 24.0 20.278198 -87.381172 -1.524544 1.524544 -1.524544 1.524544 2800 0.0 0.0 ... 0.0 0.0
70694 2.473177 -9.900000e+37 1970-01-19 15:09:15 76.0 20.278198 -87.381172 -1.524544 1.524544 -1.524544 1.524544 2800 0.0 0.0 ... 0.0 0.0
70695 2.473177 -9.900000e+37 1970-01-19 15:09:15 147.0 20.278198 -87.381172 -1.524544 1.524544 -1.524544 1.524544 2800 0.0 0.0 ... 0.0 0.0
70696 2.473177 -9.900000e+37 1970-01-19 15:09:15 159.0 20.278198 -87.381172 -1.524544 1.524544 -1.524544 1.524544 2800 0.0 0.0 ... 0.0 0.0
70697 2.473177 -9.900000e+37 1970-01-19 15:09:15 267.0 20.278198 -87.381172 -1.524544 1.524544 -1.524544 1.524544 2800 0.0 0.0 ... 0.0 0.0
Name: Distance_vl, Length: 70698, dtype: float64
>>> type(reference.iloc[:, 2])
<class 'pandas.core.series.Series'>
>>> type(result.iloc[:, 2])
<class 'pandas.core.series.Series'>
>>> reference.iloc[:, 2].size
70698
>>> result.iloc[:, 2].size
70698
>>> reference.iloc[:, 2].shape
(70698,)
>>> result.iloc[:, 2].shape
(70698,)
>>> type(result.iloc[:, 2][0])
<class 'pandas.core.series.Series'>
>>> type(reference.iloc[:, 2][0])
<class 'numpy.float64'>
>>> result.iloc[:, 2][0]
0.0 -9.900000e+37 1970-01-19 13:54:44 464.0 -999.0 -999.0 -1.524544 1.524544 -1.524544 1.524544 2800 0.0 ... 0.0 0.0
Name: Distance_vl, dtype: float64
>>> type(result.iloc[:, 2][1])
<class 'pandas.core.series.Series'>
>>> result.iloc[:, 2][1]
0.0 -9.900000e+37 1970-01-19 13:54:44 514.0 20.276414 -87.38 -1.524544 1.524544 -1.524544 1.524544 2800 0.0 ... 0.0 0.0
Name: Distance_vl, dtype: float64
</code></pre>
<p><strong>Note:</strong> <code>result.iloc[:, 2]</code> has very long lines of 0s, and SO can't take all the data, so I inserted an ellipsis (...) manually in its repr to post here. Essentially it looks like the repr of <code>result.iloc[:, 2]</code> is showing <code>result</code>, not one column of it.</p>
| <python><pandas><csv> | 2023-09-12 00:11:36 | 0 | 2,192 | Bernd Wechner |
77,084,988 | 9,757,174 | "The merchandise variant referenced by this term condition could not be found" - Shopify API Error | <p>I am creating a custom app on shopify and trying to create a draft order. When I create the draft order, I get the following error.</p>
<pre><code>{
"errors": {
"base": [
"The merchandise variant referenced by this term condition could not be found."
]
}
}
</code></pre>
<p>The code that I have written is as follows:</p>
<pre><code># Create a Shopify session
access_token = '<ACCESS_TOKEN>'
shop_url = "<URL>"

# session = shopify.Session(cart.storeId, GLOBAL_SHOPIFY_API_VERSION, access_token)
# shopify.ShopifyResource.activate_session(session)

# # Create a new draft order
# new_draft_order = shopify.DraftOrder()
# new_draft_order.line_items = {
#     "variant_id": cart.cartItem.get("variantId"),
#     "quantity": cart.cartItem.get("quantity")
# }
# await new_draft_order.save()

# Set the headers and the data for the request
headers = {
    'X-Shopify-Access-Token': access_token,
    'Content-Type': 'application/json',
}
json_data = {
    'draft_order': {
        'line_items': [
            {
                'varianct_id': cart.cartItem.get("variantId"),
                'quantity': cart.cartItem.get("quantity"),
            },
        ],
    },
}

response = requests.post(
    f'https://{shop_url}/admin/api/{GLOBAL_SHOPIFY_API_VERSION}/draft_orders.json',
    headers=headers,
    json=json_data,
)

# Check if the request was successful
if response.status_code == 200:
    return response.json()
if response.status_code == 202:
    return response.json()
return response.json()
</code></pre>
<p>I have used the API Access token for the endpoint so it should be working. The variant ID is also correct and can be viewed on the shopify platform.</p>
| <python><shopify><fastapi><shopify-app><shopify-api> | 2023-09-11 22:01:09 | 1 | 1,086 | Prakhar Rathi |
77,084,912 | 1,100,107 | How to clip a PyVista mesh to a ball? | <p>I have this PyVista isosurface:</p>
<pre class="lang-py prettyprint-override"><code>from math import pi, cos, sin
import pyvista as pv
import numpy as np

def strange_surface(x, y, z, A, B):
    return (
        z**4 * B**2
        + 4 * x * y**2 * A * B**2
        + x * z**2 * A * B**2
        - 2 * z**4 * A
        - 4 * x * y**2 * B**2
        - x * z**2 * B**2
        + 3 * z**2 * A * B**2
        - 2 * z**4
        - x * A * B**2
        - 2 * z**2 * A
        + x * B**2
        + A * B**2
        + 2 * z**2
        - B**2
    )

# generate data grid for computing the values
X, Y, Z = np.mgrid[(-3.05):3.05:250j, (-3.05):3.05:250j, (-3.05):3.05:250j]
# create a structured grid
grid = pv.StructuredGrid(X, Y, Z)
# compute and assign the values
A = cos(2.5*pi/4)
B = sin(2.5*pi/4)
values = strange_surface(X, Y, Z, A, B)
grid.point_data["values"] = values.ravel(order="F")
isosurf = grid.contour(isosurfaces=[0])

mesh = isosurf.extract_geometry()
dists = np.linalg.norm(mesh.points, axis=1)
dists = (dists - dists.min()) / (dists.max() - dists.min())

pltr = pv.Plotter(window_size=[512, 512], off_screen=False)
pltr.background_color = "#363940"
pltr.set_focus(mesh.center)
pltr.set_position((17, 14, 12))
pltr.camera.zoom(1)
mesh["dist"] = dists
pltr.add_mesh(
    mesh,
    smooth_shading=True,
    specular=0.9,
    color="yellow",
)
pltr.show()
</code></pre>
<p>I want to clip it to the ball of radius 3. That is to say, I want to discard all points whose distance to the origin is greater than 3. How can I do that? The boundary where the mesh is cut must be smooth.</p>
<p>PS: it seems PyVista has changed since the last time I used it, I'm not sure everything in my code is needed.</p>
| <python><3d><pyvista> | 2023-09-11 21:38:17 | 1 | 85,219 | Stéphane Laurent |
77,084,784 | 11,277,108 | Find string in text of script element | <p>I'm trying to scrape a page where I want to wait until a string has been detected in a <code>script</code> element before returning the page's HTML.</p>
<p>Here's my MRE scraper:</p>
<pre><code>from scrapy import Request, Spider
from scrapy.crawler import CrawlerProcess
from scrapy_playwright.page import PageMethod


class FlashscoreSpider(Spider):
    name = "flashscore"
    custom_settings = {
        "DOWNLOAD_HANDLERS": {
            "http": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
            "https": "scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler",
        },
        "TWISTED_REACTOR": "twisted.internet.asyncioreactor.AsyncioSelectorReactor",
        "REQUEST_FINGERPRINTER_IMPLEMENTATION": "2.7",
    }

    def start_requests(self):
        yield Request(
            url="https://www.flashscore.com/match/WKM03Vff/#/match-summary/match-summary",
            meta=dict(
                dont_redirect=True,
                playwright=True,
                playwright_page_methods=[
                    PageMethod(
                        method="wait_for_selector",
                        selector="//script[contains(text(), 'WKM03Vff')]",
                        timeout=5000,
                    ),
                ],
            ),
            callback=self.parse,
        )

    def parse(self, response):
        print("I've loaded the page ready to parse!!!")


if __name__ == "__main__":
    process = CrawlerProcess()
    process.crawl(FlashscoreSpider)
    process.start()
</code></pre>
<p>This results in the following error:</p>
<pre><code>playwright._impl._api_types.TimeoutError: Timeout 5000ms exceeded.
</code></pre>
<p>My understanding is that this is because there are multiple text nodes in <code>script</code> and I'm only picking up the first one with the XPath. As the string I'm looking for is in a later node, I get the <code>TimeoutError</code>.</p>
<p>This <a href="https://stackoverflow.com/a/50822483/11277108">answer</a> gives a neat solution however scrapy doesn't support XPath 2.0 so when I use:</p>
<pre><code>"string-join(//script/text()[normalize-space()], ' ')"
</code></pre>
<p>I get the following error:</p>
<pre><code>playwright._impl._api_types.Error: Unexpected token "string-join(" while parsing selector "string-join(//script/text()[normalize-space()], ' ')"
</code></pre>
<p>There is an alternative given in the comments to the answer but my worry there is a changing number of text nodes.</p>
<p>From some fairly intensive googling I don't think there is a robust XPath solution. However, is there a CSS equivalent? I've tried:</p>
<pre><code>"script:has-text('WKM03Vff')"
</code></pre>
<p>However, that results in a <code>Timeout</code> exception again.</p>
| <python><xpath><scrapy><scrapy-playwright> | 2023-09-11 21:08:05 | 1 | 1,121 | Jossy |
77,084,774 | 12,436,050 | Filter pandas dataframe if value is either 'True' or null | <p>I have following dataframe</p>
<pre><code>A   B    C
1   ABC  True
2   DEF  False
3   GHI
</code></pre>
<p>I would like to filter this dataframe and select those rows where column C is either 'True' or blank.</p>
<pre><code>A   B    C
1   ABC  True
3   GHI
</code></pre>
<p>I tried many approaches and below is the latest effort but none of them are working.</p>
<pre><code>df2= df.loc[df['C'] | df['C'].isnull()]
</code></pre>
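<p>For reference, one shape this usually takes (a sketch, assuming the blanks come in as NaN so the column holds True/False/NaN; <code>eq(True)</code> is False for NaN, so the union with the null check gives exactly the two wanted rows):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "A": [1, 2, 3],
    "B": ["ABC", "DEF", "GHI"],
    "C": [True, False, np.nan],
})

# keep rows where C is True or missing
df2 = df[df["C"].eq(True) | df["C"].isna()]
print(df2["A"].tolist())
```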
<p>Any help is highly appreciated!</p>
| <python><pandas><dataframe> | 2023-09-11 21:05:41 | 1 | 1,495 | rshar |
77,084,682 | 21,370,869 | How to separate my program output from the Path being displayed on terminal | <p>When I run a simple python program, like the following:</p>
<pre><code>print("Hello World")
</code></pre>
<p>The output on the terminal becomes:</p>
<pre><code>c:; cd 'c:\Users\super\long\path\here'; & 'c:\Users\super\long\path\here\MyPyENV\Scripts\python.exe' 'c:\Users\super\.vscode\extensions\ms-python.python-2023.8.0\pythonFiles\lib\python\debugpy\adapter/../..\debugpy\launcher' '53164' '--' 'c:\Users\super\long\path\here\Chapter 5\HelloWorld.py'
Hello world
</code></pre>
<p>I have a very small monitor, so there is line wrapping that makes it hard for me to read and, worse, I sometimes have to scroll down just to see the actual output of my program, <code>Hello World</code>.</p>
<p>Searching around I came across an answer for <a href="https://stackoverflow.com/questions/62919341/vscode-how-to-automatically-clear-python-terminal-output-window-before-each-run">THIS</a> question, the solution works as intended:</p>
<pre><code>import os
os.system("cls")
print("Hello World")
</code></pre>
<p>but I also have a slow computer, so now my terminal will first display:</p>
<pre><code>c:; cd 'c:\Users\super\long\path\here'; & 'c:\Users\super\long\path\here\MyPyENV\Scripts\python.exe' 'c:\Users\super\.vscode\extensions\ms-python.python-2023.8.0\pythonFiles\lib\python\debugpy\adapter/../..\debugpy\launcher' '53164' '--' 'c:\Users\super\long\path\here\Chapter 5\HelloWorld.py'
</code></pre>
<p>Then about a second later, clear the screen and only output:</p>
<pre><code> Hello world
</code></pre>
<p>The waiting time is really not ideal.</p>
<p>Since my terminal is PowerShell, I figured If I included <code>Clear-Host</code> in my <code>launch.json</code> for my Python debug profile:</p>
<pre><code>{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "PYrun",
            "type": "python",
            "request": "launch",
            // "program": "${file} ; clear-host", // I also tried this line
            "program": "${file}",
            "args": ["clear-host"],
            "console": "integratedTerminal",
            "justMyCode": true
        }
    ]
}
</code></pre>
<p>But it does not clear the terminal before running the python program.</p>
<p>Is there a way to simply clear my terminal before running the currently active file via debugging, or in general to make the output separate and clearly discernible from the path?</p>
<p>I am on:</p>
<ul>
<li>PowerShell 7</li>
<li>Windows 11</li>
<li>Python 3.11</li>
</ul>
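<p>One detail worth noting: the one-second pause from <code>os.system("cls")</code> is the cost of spawning a shell. Printing the ANSI clear sequence directly has the same visual effect with no subprocess (this is a sketch and assumes the terminal understands ANSI escapes, which PowerShell 7 on Windows 11 does):</p>

```python
# "\033[2J" clears the screen, "\033[H" moves the cursor home
CLEAR = "\033[2J\033[H"
print(CLEAR, end="", flush=True)
print("Hello World")
```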
<p>Any help would be greatly appreciated!</p>
| <python><visual-studio-code> | 2023-09-11 20:39:36 | 0 | 1,757 | Ralf_Reddings |
77,084,622 | 5,036,928 | PyInstaller Hidden Imports Param for Local Imports | <p>(Open to suggestions for a more descriptive title)</p>
<p>I am trying to package my script using Pyinstaller and get the following when trying to run the exe:</p>
<pre><code> File "PyInstaller\loader\pyimod02_importers.py", line 385, in exec_module
File "apryse_sdk\__init__.py", line 17, in <module>
File "PyInstaller\loader\pyimod02_importers.py", line 385, in exec_module
File "PDFNetPython3\__init__.py", line 17, in <module>
ModuleNotFoundError: No module named 'PDFNetPython'
</code></pre>
<p>The import in the main script that solicits this error looks like:</p>
<pre><code>from apryse_sdk import *
</code></pre>
<p>where the associated <code>__init__.py</code> contains the nested import:</p>
<pre><code>...
from PDFNetPython import *
</code></pre>
<p>Notice that here <code>PDFNetPython</code> is imported locally. See the file structure:</p>
<pre><code> Directory: C:\Program Files\Anaconda2022\Lib\site-packages\apryse_sdk
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 9/11/2023 12:56 PM __pycache__
-a---- 9/7/2023 1:13 PM 45508632 PDFNetC.dll
-a---- 9/7/2023 1:13 PM 12 PDFNetPyLibInfo
-a---- 9/7/2023 1:13 PM 658347 PDFNetPython.py
-a---- 9/7/2023 1:13 PM 7536128 _PDFNetPython.pyd
-a---- 9/11/2023 1:33 PM 474 __init__.py
</code></pre>
<p>I keep seeing that the syntax for the <code>--hidden-import</code> param is just <code>--hidden-import=<Module></code> but does this apply to local imports? It doesn't seem to me that it does. What is the workaround?</p>
| <python><pyinstaller><python-import><python-module> | 2023-09-11 20:29:24 | 1 | 1,195 | Sterling Butters |
77,084,573 | 3,781,009 | Identifying location in list of runtimeWarning in python | <p>I have a script that loops through a list and at certain points generates a RuntimeWarning. I want to flag these locations, but not kill the loop; I just want to identify the locations of the warnings.</p>
<p>Here is an example.</p>
<pre><code>import numpy as np

#define NumPy arrays
x = np.array([4, 5, 5, 7, 0, 10])
y = np.array([2, 4, 6, 7, 0, 2])

#divide the values in x by the values in y
for i in range(0, len(x)):
    tmp_x = x[i]
    tmp_y = y[i]
    print(np.divide(tmp_x, tmp_y))
</code></pre>
<p>Setting warnings to errors breaks the loop, which I don't want to do.</p>
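<p>A sketch of one way to record where warnings fire without killing the loop, using the standard <code>warnings</code> module (the <code>simplefilter("always")</code> matters, since repeat warnings from the same line are otherwise suppressed):</p>

```python
import warnings

import numpy as np

x = np.array([4, 5, 5, 7, 0, 10])
y = np.array([2, 4, 6, 7, 0, 2])

flagged = []
for i in range(len(x)):
    with warnings.catch_warnings(record=True) as caught:
        # "always" so repeat warnings are still recorded each iteration
        warnings.simplefilter("always")
        np.divide(x[i], y[i])
    if caught:
        flagged.append(i)

print(flagged)  # for this data, only index 4 (0/0) should be flagged
```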
| <python> | 2023-09-11 20:19:41 | 2 | 1,269 | user44796 |
77,084,500 | 1,802,225 | <class 'psycopg2.errors.UndefinedColumn'>: column does not exist when JOIN | <ul>
<li><strong>python</strong>: 3.9.17</li>
<li><strong>psycopg2</strong>: 2.9.7</li>
<li><strong>OS</strong>: Ubuntu 20.0</li>
<li><strong>postgre</strong>: 12</li>
</ul>
<p>I did all I could and I still get this error:</p>
<pre><code>2023-09-11 22:45:35,033 ERROR <class 'psycopg2.errors.UndefinedColumn'>: column u.id does not exist
LINE 1: ...id, u.balance FROM key k JOIN user u ON k.user_id=u.id WHERE...
^
</code></pre>
<p>Python code:</p>
<pre class="lang-py prettyprint-override"><code># this all below return the same error
# sql = """SELECT k.user_id, k.custom_rates, u.id, u.balance FROM key k JOIN user u ON k.user_id=u.id WHERE k.key='test'"""
# sql = "SELECT k.user_id, k.custom_rates, u.id, u.balance FROM user u JOIN key k ON k.user_id=u.id WHERE k.key='test'"
# sql = "SELECT k.user_id, k.custom_rates, u.id, u.balance FROM key k INNER JOIN user u ON k.user_id=u.id WHERE k.key='test'"
sql = "SELECT k.user_id, k.custom_rates, u.id, u.balance FROM key k JOIN user u ON k.user_id=u.id WHERE k.key='test'"
cursor.execute(sql)
res = cursor.fetchone()
</code></pre>
<p><strong>key</strong> and <strong>user</strong> tables:
<a href="https://i.sstatic.net/Owcej.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Owcej.jpg" alt="enter image description here" /></a></p>
<p>What is the mistake?</p>
| <python><postgresql><psycopg2><psql> | 2023-09-11 20:04:09 | 1 | 1,770 | sirjay |
77,084,311 | 5,038,503 | Optional, dynamic dependencies at Python runtime | <p>I'm building a complex Python application with a maximalist approach in terms of support for different libraries, some of which are machine learning libraries that use several gigabytes. I am now getting ready to make a release of the project, but I've realized the build size is enormous and 99% of it is not my code, and some of it may never be called by the user.</p>
<p>What I'd like to be able to do is have the user download a release with the minimal dependencies, and if they choose to use other ones in the GUI, prompt them and install them at runtime, and maybe even rebuild the application with <code>pyinstaller</code>.</p>
<p>I don't expect all my users to be developers, so just having a configurable build script isn't the best option, and I don't want to ship multiple releases.</p>
<p>Is what I'm describing possible with Python, and is it a common pattern? How would I accomplish something like this?</p>
<p>If this question is better suited to the Software Engineering stack exchange, I can move it there.</p>
<p>EDIT:
Some of my packages are also Linux-only or Windows-only. I want to release my project as a portable executable with something like PyInstaller. I don't expect my users to understand how to manage a <code>venv</code> or system Python version either. I suppose I could understand making another piece of software that installs and builds the program and manages its own Python version, <code>venv</code>, and executable like an installed system package. Is such a complicated, overbearing solution necessary to handle this?</p>
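<p>For the "prompt and install at runtime" piece specifically, the usual minimal pattern looks like the sketch below (<code>ensure_package</code> is a made-up helper name, and note this only works while the app runs from a real Python environment; a frozen PyInstaller binary has no <code>pip</code>, which is the harder half of the problem):</p>

```python
import importlib
import importlib.util
import subprocess
import sys

def ensure_package(module_name, pip_name=None):
    """Import module_name, pip-installing pip_name first if it is missing."""
    if importlib.util.find_spec(module_name) is None:
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", pip_name or module_name]
        )
    return importlib.import_module(module_name)

json_mod = ensure_package("json")  # already present, so no install is attempted
```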
| <python><pip><setuptools> | 2023-09-11 19:25:11 | 4 | 2,024 | Tessa Painter |
77,084,041 | 13,584,963 | Compare numpy 3D array with 2D array aisle by aisle | <p>I have large amount of m-by-n 2D arrays that need to be compared with one reference m-by-n 2D array (of course it's not all 1),</p>
<pre><code>arrays = np.random.rand(m, n, k)
array_ref = np.ones((m, n))
result = np.zeros_like(arrays, dtype=np.bool_)
for k_id in range(k):
    result[:, :, k_id] = arrays[:, :, k_id] > array_ref
arrays[result] = np.nan
</code></pre>
<p>Right now it takes 0.4s per 128 comparisons, I'm wondering if there's a direct comparison that can be faster, without using the for-loop? I hope it to be something simplified like <code>arrays[arrays>array_ref]=np.nan</code>, but as long as it runs faster it's ok.</p>
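<p>There is a loop-free spelling via broadcasting (a sketch on small toy sizes; the only trick is adding a trailing axis so the (m, n) reference lines up against (m, n, k)):</p>

```python
import numpy as np

m, n, k = 4, 5, 3
rng = np.random.default_rng(0)
arrays = rng.random((m, n, k))
array_ref = np.full((m, n), 0.5)

# (m, n, 1) broadcasts against (m, n, k): one vectorised comparison
result = arrays > array_ref[:, :, None]
arrays[result] = np.nan
print(result.shape)
```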
<p>Thanks in advance!</p>
| <python><numpy><multiprocessing><mask> | 2023-09-11 18:32:28 | 0 | 311 | Crear |
77,083,986 | 8,521,346 | imap-tools Access Raw Message Data | <p>How do you access the raw message data of an email when using imap-tools?
Specifically so it can then be loaded into the email.message_from_bytes() function for forwarding?</p>
<pre><code>from imap_tools import MailBox, AND

with MailBox('imap.gmail.com').login('asdf@gmail.com', '123456', 'INBOX') as mailbox:
    # get unseen emails from INBOX folder
    for msg in mailbox.fetch(AND(seen=False), mark_seen=False):
        pass  # get the raw data from msg
</code></pre>
| <python><smtp><imap> | 2023-09-11 18:21:08 | 2 | 2,198 | Bigbob556677 |
77,083,861 | 595,305 | Maintain responsiveness of PyQt5 and other threads during call to intensive non-Python module? | <p>This is the first time I've tried writing a Rust module (using PyO3) to be called from Python. OS is W10.</p>
<p>Before, when just using Python, I spent quite a bit of time trying to understand how to maintain responsiveness of PyQt5's "gui thread" (aka "main application thread") when running background threads. I was was originally inspired by this: <a href="https://mayaposch.wordpress.com/2011/11/01/how-to-really-truly-use-qthreads-the-full-explanation/" rel="nofollow noreferrer">How to really, truly use QThreads</a>.</p>
<p>The intensive Rust module is being started from inside such a spawned <code>QThread</code>. NB the Rust module is currently executing in the same <em>process</em> as the Python app. It is of course possible to run the Rust module in a separate process: then these problems don't occur. But I'm trying to understand why they occur if everything is happening in the same process.</p>
<p>When I run the Python app the first thing is to <code>show</code> a very simple <code>QMainWindow</code> in the Python, in the main (gui) thread, obviously. I spend the rest of the time wiggling my mouse cursor in front of this.</p>
<p>Next the Rust module is started up from the <code>QThread</code>. I'm currently using a threadpool in Rust, and limiting the number of threads to 4. My machine has 8 "logical processors".</p>
<p>Despite using only a maximum of 4 threads in the Rust module, the same thing happens as when I make no attempt to restrict thread use in the Rust module: after about 5 seconds my mouse cursor turns into the dreaded W10 blue spinner, indicating that my app's gui is not responsive. This then lasts to the end of the Rust task, about another 10 seconds.</p>
<p>So I then thought about monitoring thread use in the Python app while all this is happening: I started a simple <code>threading.Thread</code> in the Python, which starts before the Rust module starts. This is meant to print out, each second, <code>threading.active_count()</code>. But it doesn't. It prints the first value out... and then nothing happens until the Rust module has finished!</p>
<p>So it appears that both the PyQt5 gui thread and also any random additional Python threads are being severely affected by what's happening in the Rust module, even if I limit the threads used in it. Conversely, the blue spinner doesn't appear immediately. It's taking a good 5 seconds or so.</p>
<p>Two other things I tried: restricting the number of threads in the Rust threadpool to 1: same blue spinner. And I also put a few lines like this in the Rust code at a place meaning this would be executed frequently:</p>
<pre><code>thread::sleep(Duration::from_millis(1));
</code></pre>
<p>... which is what I need to do in Python to stop non-gui tasks killing the gui responsiveness. Lines like this have no effect.</p>
<p>I'm not at all sure whether this is specific to Rust. I suspect it's probably more a Python question and specifically a PyQt question. Or maybe it's about the mechanisms in PyO3, perhaps? Any insight about these problems or suggestions about where to look for solutions would be helpful.</p>
| <python><multithreading><rust><pyqt5><pyo3> | 2023-09-11 17:58:29 | 1 | 16,076 | mike rodent |
77,083,578 | 1,852,526 | Filter List of T or list of some type in Python | <p>I am new to Python. I have a List of T i.e. List of PackageReference objects. The PackageReference object is defined as follows:</p>
<pre><code>class PackageReference:
    def __init__(self, ecosystem, repository_root, repository_name, source, package_name, version):
        self.ecosystem = ecosystem
        self.repository_root = repository_root
        self.repository_name = repository_name
        self.source = source
        self.package_name = package_name
        self.version = version

    def __repr__(self):
        return f'PackageReference({self.ecosystem} {self.package_name} {self.version} {self.repository_name} {self.source})'
</code></pre>
<p>Now I am fetching a list of PackageReference items to which I want to apply a filter. The filter condition: I want all the items where <code>ecosystem == 'NuGet'</code>. In C# you would do something like <code>list.Where(x=>x.ecosystem=="NuGet")</code>. I am trying to apply the filter like this in Python, but after I apply the filter, the list is empty.</p>
<pre><code>package_references = collect_package_references(patterns, commits, rules)
# Filter the packages based on criteria
filtered_list=filter(lambda item:item.ecosystem=='NuGet',package_references) #The filtered_list is empty.
</code></pre>
<p>Before filter as I am debugging this is the screenshot of the package_references.</p>
<p><a href="https://i.sstatic.net/H3BR2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H3BR2.png" alt="package_references list" /></a></p>
<p>Upon setting breakpoint on filtered_list= line, it shows me the following:</p>
<p><a href="https://i.sstatic.net/564E4.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/564E4.jpg" alt="result" /></a></p>
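<p>For reference, a minimal sketch of the likely cause (the two-field class and package names below are made-up stand-ins for the real <code>PackageReference</code> data): in Python 3, <code>filter()</code> returns a <em>lazy iterator</em>, which a debugger shows as empty until it is consumed. Materialise it with <code>list()</code> or a list comprehension, the closest analogue of C#'s <code>.Where(...).ToList()</code>:</p>

```python
class PackageReference:
    """Trimmed-down stand-in for the class in the question."""
    def __init__(self, ecosystem, package_name):
        self.ecosystem = ecosystem
        self.package_name = package_name

refs = [
    PackageReference("NuGet", "PkgA"),  # hypothetical entries
    PackageReference("npm", "PkgB"),
]

# filter() is lazy: this object looks empty in a debugger until it is consumed.
lazy = filter(lambda r: r.ecosystem == "NuGet", refs)

# Materialise it instead:
filtered_list = [r for r in refs if r.ecosystem == "NuGet"]
# or equivalently: filtered_list = list(filter(lambda r: r.ecosystem == "NuGet", refs))
print([r.package_name for r in filtered_list])
```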
| <python><list><filter> | 2023-09-11 17:07:18 | 0 | 1,774 | nikhil |
77,083,408 | 3,851,085 | Django create Polygon from list of lat/long coordinates | <p>I have a list of lat/long coordinates that make up a polygon. How do I instantiate a Polygon, to be used as a PolygonField?</p>
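<p>A hedged sketch of the usual approach (the coordinates are made up, and the GeoDjango import is commented out so the ring-building logic stands alone): <code>django.contrib.gis.geos.Polygon</code> takes a sequence of <code>(x, y)</code> tuples, i.e. <code>(longitude, latitude)</code>, and GEOS requires the ring to be explicitly closed:</p>

```python
# from django.contrib.gis.geos import Polygon  # GeoDjango import (needs GEOS/GDAL)

# Hypothetical (longitude, latitude) pairs -- note the x/y order.
coords = [(-0.128, 51.507), (-0.127, 51.509), (-0.125, 51.506)]

ring = list(coords)
if ring[0] != ring[-1]:
    ring.append(ring[0])  # GEOS wants first point == last point

# polygon = Polygon(ring)               # usable as a PolygonField value, e.g.
# MyModel.objects.create(area=polygon)  # 'MyModel'/'area' are placeholder names
print(ring)
```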
| <python><django><django-models><geodjango> | 2023-09-11 16:41:56 | 1 | 1,110 | Software Dev |
77,083,405 | 2,071,807 | How to mock pathlib.Path.read_text for a particular file | <p>The unittest docs show you <a href="https://docs.python.org/3/library/unittest.mock.html#unittest.mock.mock_open" rel="nofollow noreferrer">how to use mock_open</a> to mock <em>direct</em> calls to <code>builtins.open</code>.</p>
<p>But how about mocking <code>pathlib</code>'s <code>read_text</code> like this:</p>
<pre class="lang-py prettyprint-override"><code>import pathlib
pathlib.Path("/path/to/file").read_text()
</code></pre>
<p>Using the recipe in the unittest docs doesn't patch this correctly. It must use <code>builtins.open</code> under the hood somewhere, but where?</p>
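<p>For what it's worth, <code>pathlib</code> reaches <code>open</code> through its own module bindings (<code>Path.open</code> calling <code>io.open</code>) rather than looking the name up on <code>builtins</code>, which is one reason the <code>mock_open</code> recipe can miss it. A sketch that sidesteps the question by patching the method on the class itself, delegating for every path except the one of interest (the path and fake contents are placeholders):</p>

```python
import pathlib
from unittest import mock

real_read_text = pathlib.Path.read_text

def fake_read_text(self, *args, **kwargs):
    # Only fake the one file we care about; every other path behaves normally.
    if self == pathlib.Path("/path/to/file"):
        return "fake contents"
    return real_read_text(self, *args, **kwargs)

with mock.patch.object(pathlib.Path, "read_text", fake_read_text):
    result = pathlib.Path("/path/to/file").read_text()

print(result)  # -> fake contents
```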
| <python><python-unittest> | 2023-09-11 16:41:29 | 2 | 79,775 | LondonRob |
77,083,211 | 5,217,293 | Gridspec spanning fractional column width | <p>I'd like a plot that has two subplots in the first row, each spanning 1.5 columns, and three plots in the second row, each a column wide. Is that possible with matplotlib and gridspec? From the <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.gridspec.GridSpec.html#examples-using-matplotlib-gridspec-gridspec" rel="nofollow noreferrer">examples</a>, it doesn't appear to be so. The <code>width_ratios</code> argument also won't work since that affects all rows.</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
fig = plt.figure(figsize=(10, 6))
gs = GridSpec(2, 3)
# First row (1.5 column width)
ax1 = plt.subplot(gs[0, 0])
ax2 = plt.subplot(gs[0, 1])
# Second row (one-third width)
ax3 = plt.subplot(gs[1, 0])
ax4 = plt.subplot(gs[1, 1])
ax5 = plt.subplot(gs[1, 2])
ax1.set_title('Subplot 1')
ax2.set_title('Subplot 2')
ax3.set_title('Subplot 3')
ax4.set_title('Subplot 4')
ax5.set_title('Subplot 5')
plt.tight_layout()
plt.show()
</code></pre>
<p>The code above produces this. I just can't figure out how to make subplots 1 and 2 span 1.5 columns.</p>
<p><a href="https://i.sstatic.net/AKEcL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AKEcL.png" alt="incorrect spacing" /></a></p>
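<p>One common workaround (a sketch, not verified against every matplotlib version) is to use a finer grid whose column count is a common multiple of both rows' layouts: with 6 columns, a 1.5-column-wide plot spans 3 cells and a one-third-width plot spans 2 cells:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec

fig = plt.figure(figsize=(10, 6))
gs = GridSpec(2, 6)  # 6 = least common multiple of 2 and 3 subplots per row

# First row: two subplots, each spanning 3 of 6 cells (1.5 "columns").
ax1 = fig.add_subplot(gs[0, 0:3])
ax2 = fig.add_subplot(gs[0, 3:6])
# Second row: three subplots, each spanning 2 of 6 cells (one-third width).
ax3 = fig.add_subplot(gs[1, 0:2])
ax4 = fig.add_subplot(gs[1, 2:4])
ax5 = fig.add_subplot(gs[1, 4:6])

for i, ax in enumerate([ax1, ax2, ax3, ax4, ax5], start=1):
    ax.set_title(f"Subplot {i}")
fig.tight_layout()
```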
| <python><matplotlib><subplot><matplotlib-gridspec> | 2023-09-11 16:11:04 | 2 | 1,023 | K. Shores |
77,083,199 | 130,208 | In python doit how to run task only when some condition is met | <p>I want to achieve something really simple with the python doit module.
I have task_entry and three other tasks: task_a, task_b, and task_c.
From task_entry I want to control which of the tasks a, b, or c gets evaluated.
Let's say there is a config file from which task_entry finds out which of a, b, or c to execute.
I am not able to express this simple requirement in the doit framework.
file_dep only controls the order of execution, not whether a task is selected at all. Below is the MWE:</p>
<pre><code># dodo.py
import configparser
config = configparser.ConfigParser()
config.read("config.ini")
def choose_task():
    task_to_run = config.get("config", "task_to_run")
    return {"task_dep": [task_to_run]}

def task_entry():
    return {
        "actions": None,
        "calc_dep": choose_task,
    }

def task_a():
    return {
        "actions": ["echo 'Task A'"],
    }

def task_b():
    return {
        "actions": ["echo 'Task B'"],
    }

def task_c():
    return {
        "actions": ["echo 'Task C'"],
    }
</code></pre>
<p>Currently all tasks A, B, and C run. I would like only the task specified in the config file to run.</p>
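<p>For comparison, a hedged sketch of one alternative: since plain <code>doit</code> runs every task by default, you can compute the chosen task at module level and make it the only <em>default</em> task via the module-level <code>DOIT_CONFIG</code> dict that doit reads from dodo.py. Reading <code>config.ini</code> is simulated below with an in-memory string so the sketch runs standalone; the section/option names are taken from the question:</p>

```python
import configparser

config = configparser.ConfigParser()
# Stand-in for config.read("config.ini"); here the file selects task "b".
config.read_string("[config]\ntask_to_run = b\n")

task_to_run = config.get("config", "task_to_run")

# doit reads this module-level dict: running plain `doit` now executes only task_b.
DOIT_CONFIG = {"default_tasks": [task_to_run]}

def task_b():
    return {"actions": ["echo 'Task B'"]}
```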
| <python><doit> | 2023-09-11 16:08:15 | 1 | 2,065 | Kabira K |
77,083,082 | 20,830,264 | openai.error.InvalidRequestError: The specified base model does not support fine-tuning. when fine-tune Azure OpenAI model | <p>I'm running the following Python code for a fine-tune OpenAI task:</p>
<pre class="lang-py prettyprint-override"><code>import openai
from openai import cli
import time
import shutil
import json
openai.api_key = "*********************"
openai.api_base = "https://*********************"
openai.api_type = 'azure'
openai.api_version = '2023-05-15'
deployment_name ='*********************'
training_file_name = 'training.jsonl'
validation_file_name = 'validation.jsonl'
# Samples data are fake
sample_data = [
{"prompt": "Questa parte del testo e’ invece in italiano, perche’ Giuseppe Coco vive a Milano, codice postale 09576.", "completion": "[type: LOCATION, start: 36, end: 44, score: 0.85, type: PERSON, start: 54, end: 72, score: 0.85, type: LOCATION, start: 75, end: 81, score: 0.85]"},
{"prompt": "In this fake document, we describe the ambarabacicicoco, of Alfred Johnson, who lives in Paris (France), the zip code is 21076, and his phone number is +32 475348723.", "completion": "[type: AU_TFN, start: 157, end: 166, score: 1.0, type: PERSON, start: 60, end: 74, score: 0.85, type: LOCATION, start: 89, end: 94, score: 0.85, type: LOCATION, start: 97, end: 103, score: 0.85, type: PHONE_NUMBER, start: 153, end: 166, score: 0.75]"},
{"prompt": "This document is a fac simile", "completion": "[]"},
{"prompt": "Here there are no PIIs", "completion": "[]"},
{"prompt": "Questa parte del testo e’ invece in italiano, perche’ Giuseppe Coco vive a Milano, codice postale 09576.", "completion": "[type: LOCATION, start: 36, end: 44, score: 0.85, type: PERSON, start: 54, end: 72, score: 0.85, type: LOCATION, start: 75, end: 81, score: 0.85]"},
{"prompt": "In this fake document, we describe the ambarabacicicoco, of Alfred Johnson, who lives in Paris (France), the zip code is 21076, and his phone number is +32 475348723.", "completion": "[type: AU_TFN, start: 157, end: 166, score: 1.0, type: PERSON, start: 60, end: 74, score: 0.85, type: LOCATION, start: 89, end: 94, score: 0.85, type: LOCATION, start: 97, end: 103, score: 0.85, type: PHONE_NUMBER, start: 153, end: 166, score: 0.75]"},
{"prompt": "This document is a fac simile", "completion": "[]"},
{"prompt": "Here there are no PIIs", "completion": "[]"},
{"prompt": "10 August 2023", "completion": "[type: DATE_TIME, start: 0, end: 14, score: 0.85]"},
{"prompt": "Marijn De Belie, Manu Brehmen (Deloitte Belastingconsulenten)", "completion": "[type: PERSON, start: 0, end: 15, score: 0.85, type: PERSON, start: 17, end: 29, score: 0.85]"},
{"prompt": "The content expressed herein is based on the facts and assumptions you have provided us. We have assumed that these facts and assumptions are correct, complete and accurate.", "completion": "[]"},
{"prompt": "This letter is solely for your benefit and may not be relied upon by anyone other than you.", "completion": "[]"},
{"prompt": "Dear Mr. Mahieu,", "completion": "[type: PERSON, start: 9, end: 15, score: 0.85]"},
{"prompt": "Since 1 January 2018, a capital reduction carried out in accordance with company law rules is partly imputed on the taxable reserves of the SPV", "completion": "[type: DATE_TIME, start: 6, end: 20, score: 0.85]"},
]
# Generate the training dataset file.
print(f'Generating the training file: {training_file_name}')
with open(training_file_name, 'w') as training_file:
    for entry in sample_data:
        json.dump(entry, training_file)
        training_file.write('\n')
# Copy the validation dataset file from the training dataset file.
# Typically, your training data and validation data should be mutually exclusive.
# For the purposes of this example, you use the same data.
print(f'Copying the training file to the validation file')
shutil.copy(training_file_name, validation_file_name)
def check_status(training_id, validation_id):
    train_status = openai.File.retrieve(training_id)["status"]
    valid_status = openai.File.retrieve(validation_id)["status"]
    print(f'Status (training_file | validation_file): {train_status} | {valid_status}')
    return (train_status, valid_status)
# Upload the training and validation dataset files to Azure OpenAI.
training_id = cli.FineTune._get_or_upload(training_file_name, True)
validation_id = cli.FineTune._get_or_upload(validation_file_name, True)
# Check the upload status of the training and validation dataset files.
(train_status, valid_status) = check_status(training_id, validation_id)
# Poll and display the upload status once per second until both files succeed or fail to upload.
while train_status not in ["succeeded", "failed"] or valid_status not in ["succeeded", "failed"]:
    time.sleep(1)
    (train_status, valid_status) = check_status(training_id, validation_id)
# This example defines a fine-tune job that creates a customized model based on curie,
# with just a single pass through the training data. The job also provides
# classification-specific metrics by using our validation data, at the end of that epoch.
create_args = {
"training_file": training_id,
"validation_file": validation_id,
"model": "curie",
"n_epochs": 1,
"compute_classification_metrics": True,
"classification_n_classes": 3
}
# Create the fine-tune job and retrieve the job ID and status from the response.
resp = openai.FineTune.create(**create_args)
job_id = resp["id"]
status = resp["status"]
# You can use the job ID to monitor the status of the fine-tune job.
# The fine-tune job might take some time to start and complete.
print(f'Fine-tuning model with job ID: {job_id}.')
# Get the status of our fine-tune job.
status = openai.FineTune.retrieve(id=job_id)["status"]
# If the job isn't yet done, poll it every 2 seconds.
if status not in ["succeeded", "failed"]:
    print(f'Job not in terminal status: {status}. Waiting.')
    while status not in ["succeeded", "failed"]:
        time.sleep(2)
        status = openai.FineTune.retrieve(id=job_id)["status"]
        print(f'Status: {status}')
else:
    print(f'Fine-tune job {job_id} finished with status: {status}')
# Check if there are other fine-tune jobs in the subscription.
# Your fine-tune job might be queued, so this is helpful information to have
# if your fine-tune job hasn't yet started.
print('Checking other fine-tune jobs in the subscription.')
result = openai.FineTune.list()
print(f'Found {len(result)} fine-tune jobs.')
# Retrieve the name of the customized model from the fine-tune job.
result = openai.FineTune.retrieve(id=job_id)
if result["status"] == 'succeeded':
    model = result["fine_tuned_model"]

    # Create the deployment for the customized model by using the standard scale type
    # without specifying a scale capacity.
    print(f'Creating a new deployment with model: {model}')
    result = openai.Deployment.create(model=model, scale_settings={"scale_type":"standard", "capacity": None})

    # Retrieve the deployment job ID from the results.
    deployment_id = result["id"]
</code></pre>
<p>Based on this Microsoft official documentation:
<a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/fine-tuning?pivots=programming-language-python" rel="nofollow noreferrer">Microsoft documentation for OpenAI fine-tuning</a></p>
<p>Now, when I run this script I get the following error:</p>
<pre><code>openai.error.InvalidRequestError: The specified base model does not support fine-tuning.
</code></pre>
<p>Based on a similar question (<a href="https://learn.microsoft.com/en-us/answers/questions/1190892/getting-error-while-finetuning-gpt-3-model-using-a" rel="nofollow noreferrer">similar question</a>), it seems that the problem is related to the region where my OpenAI service is deployed, indeed my OpenAI service is deployed in East US, and as far as I understood, the only available region for fine-tuning is Central US. The problem is that I don't see Central US as an available region to deploy an OpenAI service:</p>
<p><a href="https://i.sstatic.net/t05Ru.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t05Ru.png" alt="regions available for OpenAI deployment" /></a></p>
<p>Note that I also tried "North Central US", and got the same error.</p>
<p>Do you know what could be the reason for this error?</p>
| <python><azure><azure-openai> | 2023-09-11 15:50:14 | 1 | 315 | Gregory |
77,082,810 | 3,247,006 | What is & How to know the default screenshot's window size of browsers in Selenium? | <p>I ran the code below to take 3 screenshots of Django Admin on Google Chrome, Microsoft Edge and Firefox. *I use <a href="https://github.com/django/django" rel="nofollow noreferrer">Django</a>, <a href="https://github.com/pytest-dev/pytest-django" rel="nofollow noreferrer">pytest-django</a> and <a href="https://github.com/SeleniumHQ/selenium" rel="nofollow noreferrer">Selenium</a>:</p>
<pre class="lang-py prettyprint-override"><code>import os
import pytest
from selenium import webdriver
def take_screenshot(driver, name):
    os.makedirs(os.path.join("screenshot", os.path.dirname(name)), exist_ok=True)
    driver.save_screenshot(os.path.join("screenshot", name))

@pytest.fixture(params=["chrome", "edge", "firefox"], scope="class")
def driver_init(request):
    if request.param == "chrome":
        web_driver = webdriver.Chrome()
        request.cls.browser = "chrome"
    if request.param == "edge":
        web_driver = webdriver.Edge()
        request.cls.browser = "edge"
    if request.param == "firefox":
        web_driver = webdriver.Firefox()
        request.cls.browser = "firefox"
    request.cls.driver = web_driver
    yield
    web_driver.close()

@pytest.mark.usefixtures("driver_init")
class Screenshot:
    def screenshot_admin(self, live_server):
        self.driver.get(("%s%s" % (live_server.url, "/admin/")))
        take_screenshot(self.driver, "admin/" + self.browser + ".png")
</code></pre>
<p>Then, I could take 3 screenshots which have different window size depending on the browsers as shown below:</p>
<p><code>chrome.png</code>:</p>
<p><a href="https://i.sstatic.net/ihKJL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ihKJL.png" alt="chrome.png" /></a></p>
<p><code>edge.png</code>:</p>
<p><a href="https://i.sstatic.net/krE7q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/krE7q.png" alt="edge.png" /></a></p>
<p><code>firefox.png</code>:</p>
<p><a href="https://i.sstatic.net/H6T2l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H6T2l.png" alt="firefox.png" /></a></p>
<p>So, what is the default window size used for screenshots in each browser, and how can I find it out in Selenium?</p>
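<p>As a side note, one way to inspect the size a browser actually used is <code>driver.get_window_size()</code> (or <code>get_window_rect()</code>) before saving, or reading the saved PNG's IHDR header afterwards. A self-contained sketch of the latter; the 1050×692 used in the demo is a made-up size, not a browser default:</p>

```python
import struct

def png_size(png_bytes):
    """Read width/height from a PNG's IHDR chunk (bytes 16-24 of the file)."""
    if png_bytes[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    return struct.unpack(">II", png_bytes[16:24])

# Demo on a hand-built header; with a real file, pass Path(name).read_bytes().
header = (
    b"\x89PNG\r\n\x1a\n"                # PNG signature
    + struct.pack(">I", 13) + b"IHDR"   # IHDR chunk length + type
    + struct.pack(">II", 1050, 692)     # width, height (big-endian u32)
)
print(png_size(header))  # -> (1050, 692)
```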
| <python><django><selenium-webdriver><screenshot><pytest-django> | 2023-09-11 15:07:37 | 1 | 42,516 | Super Kai - Kazuya Ito |
77,082,788 | 7,454,706 | Pandas Multi-index on 2 Levels | <p>I have a json which is in the following format :</p>
<pre><code>data = {
'Low': {
'2023.07.01': {"u": "mo", 'N': 1, 'O': 2, "PN": 22, "PO": 34},
'2023.07.02': {"u": "no", 'N': 1, 'O': 2, "PN": 22, "PO": 34}
},
'Medium': {
'2023.07.01': {"u": "no", 'N': 1, 'O': 2, "PN": 22, "PO": 34},
'2023.07.02': {"u": "mo", 'N': 1, 'O': 2, "PN": 22, "PO": 34}
},
'High': {
'2023.07.01': {"u": "no", 'N': 122, 'O': 2, "PN": 212, "PO": 334},
'2023.07.02': {"u": "mo", 'N': 13, 'O': 2, "PN": 2, "PO": 342}
}
}
</code></pre>
<p>How can i create a multi level dataframe with the following structure :</p>
<pre><code>              Low            Medium          High
              N  PN  O  PN   N  PN  O  PN   N  PN  O  PN
Date  U
</code></pre>
<p>I have tried various ways like <code>df.stack</code> and <code>df.pivot</code> but I was not able to get the exact format I needed.</p>
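<p>For what it's worth, a sketch of one way to get the two-level columns (trimmed to two outer keys to stay short): build one DataFrame per outer key with the dates as the index, then <code>concat</code> along <code>axis=1</code> so the dict keys become the top column level. Note that moving <code>u</code> into the row index only works if it is consistent across Low/Medium/High, which it is not in the sample data:</p>

```python
import pandas as pd

data = {
    "Low": {
        "2023.07.01": {"u": "mo", "N": 1, "O": 2, "PN": 22, "PO": 34},
        "2023.07.02": {"u": "no", "N": 1, "O": 2, "PN": 22, "PO": 34},
    },
    "High": {
        "2023.07.01": {"u": "no", "N": 122, "O": 2, "PN": 212, "PO": 334},
        "2023.07.02": {"u": "mo", "N": 13, "O": 2, "PN": 2, "PO": 342},
    },
}

# One DataFrame per outer key, with the dates as the index.
frames = {level: pd.DataFrame.from_dict(d, orient="index") for level, d in data.items()}
df = pd.concat(frames, axis=1)  # dict keys -> top level of the column MultiIndex
df.index.name = "Date"
print(df.loc["2023.07.01", ("High", "N")])
```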
| <python><pandas><multi-index> | 2023-09-11 15:04:46 | 2 | 2,047 | ASHu2 |
77,082,780 | 913,347 | HTTPServer: can't listen on Wifi when wire is connected (Windows) | <p>I've encountered an unusual behavior with Python's HTTPServer. My computer has both Ethernet and WiFi connections active simultaneously. I intended for the server to exclusively respond to incoming requests from the WiFi interface, so I configured it accordingly as described below. However, I've encountered an issue where requests are not being served when the Ethernet cable is connected. Strangely enough, when I disconnect the cable, requests are handled successfully.</p>
<p>I suspect that this issue may be related to interface priorities, but I'm not entirely sure. Is there a way to work around this problem? It's worth noting that the Ethernet connection is used for remote control, while the WiFi is dedicated to managing devices within a separate network.</p>
<p>Here is the result for <em>ipconfig</em>:</p>
<pre><code>Ethernet adapter Ethernet:
Connection-specific DNS Suffix . :
Link-local IPv6 Address . . . . . : fe80::5edd:...
IPv4 Address. . . . . . . . . . . : 192.168.0.29
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 192.168.0.1
Wireless LAN adapter Wi-Fi:
Connection-specific DNS Suffix . :
Link-local IPv6 Address . . . . . : fe80::904:...
IPv4 Address. . . . . . . . . . . : 192.168.32.100
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 192.168.32.1
</code></pre>
<p>I want my server to listen on the WiFi, so I do this:</p>
<pre><code>import socket
from http.server import BaseHTTPRequestHandler, HTTPServer
addr = ('192.168.32.100', 3101)
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(addr)
sock.listen()
</code></pre>
<p>Then I run several threads with this code:</p>
<pre><code>httpd = HTTPServer(addr, BaseHTTPRequestHandler, False)
# Prevent the HTTP server from re-binding every handler.
# https://stackoverflow.com/questions/46210672/
httpd.socket = sock
httpd.server_bind = httpd.server_close = lambda self: None
httpd.serve_forever()
</code></pre>
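<p>As a sanity check that the bind-and-serve code itself is fine (so the problem is in routing rather than in the server), a self-contained variant on the loopback interface; the trivial handler and port 0 (let the OS pick an ephemeral port) are this sketch's own choices:</p>

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the sketch quiet

# Bind explicitly to one address, exactly as in the question (loopback here).
httpd = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 -> OS picks a free port
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

body = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
httpd.shutdown()
print(body)  # -> b'ok'
```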
<p>EDIT: following @Emmanuel-BERNAT response, I'm adding here the result for <code>route print</code>:</p>
<pre><code>Interface List
3...c0 25 a5 c8 cf ea ......Intel(R) Ethernet Connection (17) I219-LM
12...70 a8 d3 d5 d5 ff ......Microsoft Wi-Fi Direct Virtual Adapter
4...72 a8 d3 d5 d5 fe ......Microsoft Wi-Fi Direct Virtual Adapter #2
18...70 a8 d3 d5 d5 fe ......Intel(R) Wi-Fi 6E AX211 160MHz
10...70 a8 d3 d5 d6 02 ......Bluetooth Device (Personal Area Network)
1...........................Software Loopback Interface 1
===========================================================================
IPv4 Route Table
===========================================================================
Active Routes:
Network Destination Netmask Gateway Interface Metric
0.0.0.0 0.0.0.0 192.168.0.1 192.168.0.29 25
0.0.0.0 0.0.0.0 192.168.32.1 192.168.32.100 40
127.0.0.0 255.0.0.0 On-link 127.0.0.1 331
127.0.0.1 255.255.255.255 On-link 127.0.0.1 331
127.255.255.255 255.255.255.255 On-link 127.0.0.1 331
192.168.0.0 255.255.255.0 On-link 192.168.0.29 281
192.168.0.29 255.255.255.255 On-link 192.168.0.29 281
192.168.0.255 255.255.255.255 On-link 192.168.0.29 281
192.168.32.0 255.255.255.0 On-link 192.168.32.100 296
192.168.32.100 255.255.255.255 On-link 192.168.32.100 296
192.168.32.255 255.255.255.255 On-link 192.168.32.100 296
224.0.0.0 240.0.0.0 On-link 127.0.0.1 331
224.0.0.0 240.0.0.0 On-link 192.168.0.29 281
224.0.0.0 240.0.0.0 On-link 192.168.32.100 296
255.255.255.255 255.255.255.255 On-link 127.0.0.1 331
255.255.255.255 255.255.255.255 On-link 192.168.0.29 281
255.255.255.255 255.255.255.255 On-link 192.168.32.100 296
===========================================================================
Persistent Routes:
None
IPv6 Route Table
===========================================================================
Active Routes:
If Metric Network Destination Gateway
1 331 ::1/128 On-link
3 281 fe80::/64 On-link
18 296 fe80::/64 On-link
18 296 fe80::904:e4c2:.../128 On-link
3 281 fe80::5edd:a253:.../128 On-link
1 331 ff00::/8 On-link
3 281 ff00::/8 On-link
18 296 ff00::/8 On-link
===========================================================================
Persistent Routes:
None
</code></pre>
| <python><python-sockets><basehttpserver> | 2023-09-11 15:03:10 | 1 | 6,845 | ishahak |
77,082,739 | 2,626,865 | selecting conditional code paths in hypothesis | <p>Most conditional strategies seem to be data-driven. But what if I want to select a code path independently of any generated data?</p>
<p>For example, let's convert the grammar <code>rule_a = rule_optional? rule_b</code> to a strategy:</p>
<pre class="lang-py prettyprint-override"><code>@strategy.composite
def rule_a(draw):
    elem_b = draw(rule_b())
    if draw(strategy.sampled_from([False, True])):
        elem_opt = draw(rule_optional())
        elem_b = combine(elem_opt, elem_b)
    return elem_b
</code></pre>
<p>Say I'm generating a data structure from a grammar, and to test a particular behavior I want to modify the data structure by replacing a single randomly-selected node from a subset of specific locations in the syntax tree. Rewriting the grammar to generate this modified data is too complicated, so I modify the generated structure instead:</p>
<pre class="lang-py prettyprint-override"><code>@strategy.composite
def generate_maybe_modified(draw):
    data = draw(generate())
    locations = []
    for location, node in iterate(data):
        if match_specific(data, location, node):
            locations.append(location)
    if not draw(strategy.sampled_from([False, True])):
        return data
    new_elem = draw(modified())  # renamed so the modified() strategy is not shadowed
    location = draw(strategy.sampled_from(locations))
    data = replace(location, data, new_elem)
    return data
</code></pre>
<p>The common idea is to select a code path by randomly selecting from <code>[False, True]</code>, where <code>False</code> is first to shrink towards the shortest code path. Is this the most efficient solution?</p>
<p>Is there any difference between <code>draw(strategy.sampled_from([False, True]))</code> and <code>random.choice([False, True])</code> where <code>random</code> is managed by Hypothesis, i.e. an instance of a <code>HypothesisRandom</code> subclass?</p>
<p>Can I manage probabilities outside of Hypothesis? That is, can I increase the chance of the simpler code path being drawn by instead using:</p>
<pre class="lang-py prettyprint-override"><code>random.choices([False, True], [90, 10])[0]
</code></pre>
| <python><conditional-statements><python-hypothesis> | 2023-09-11 14:56:55 | 1 | 2,131 | user19087 |
77,082,653 | 13,023,224 | Order nested lists inside column | <p>I have this dataset:</p>
<pre><code>df = pd.DataFrame({'Name':['John', 'Rachel', 'Adam','Joe'],
'Age':[95, 102, 31,np.nan],
'Scores':[np.nan, [80, 82, 78], [25, 20, 30, 60, 21],np.nan]
})
</code></pre>
<p>I would like to sort values inside column 'scores'.</p>
<p>Desired output:</p>
<pre><code>     Name    Age  Scores
     John   95.0  NaN
   Rachel  102.0  [78,80,82]
     Adam   31.0  [20,21,25,30,60]
      Joe    NaN  NaN
</code></pre>
<p>I have tried the solutions from this <a href="https://stackoverflow.com/questions/67959535/python-pandas-sort-values-with-nested-list">answer</a>, as well as the code</p>
<pre><code>df.sort_values(by=["Scores"], na_position="first")
</code></pre>
<p>but the result was not the desired one.</p>
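<p>For reference, a sketch of the per-cell approach: <code>sort_values</code> orders the <em>rows</em> of the frame, so sorting inside each list needs an element-wise <code>apply</code> that skips the NaN cells:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Name': ['John', 'Rachel', 'Adam', 'Joe'],
                   'Age': [95, 102, 31, np.nan],
                   'Scores': [np.nan, [80, 82, 78], [25, 20, 30, 60, 21], np.nan]})

# Sort inside each cell; leave non-list cells (the NaNs) untouched.
df['Scores'] = df['Scores'].apply(lambda v: sorted(v) if isinstance(v, list) else v)
print(df['Scores'].tolist())
```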
| <python><pandas><sorting><nested-lists> | 2023-09-11 14:48:12 | 4 | 571 | josepmaria |
77,082,612 | 2,173,773 | Pytest mock when using multiprocessing: How to wait until method is called? | <p>I would like to test the following code using pytest and <a href="https://pytest-mock.readthedocs.io/en/latest/usage.html" rel="nofollow noreferrer">pytest-mock</a> (this is a minimal example from a larger program):</p>
<pre><code>import logging
import multiprocessing
import os
def task():
    logging.info('task started')
    os.execvp('gedit', ['gedit', '/tmp/foo.txt'])

def main():
    process = multiprocessing.Process(target=task, daemon=False)
    process.start()

def test_main(mocker):
    mock = mocker.MagicMock()
    mocker.patch(__name__ + ".os.execvp", mock)
    main()
    assert mock.called

if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    main()
</code></pre>
<p>Current output from running <code>pytest t.py</code> is:</p>
<pre><code>> assert mock.called
E AssertionError: assert False
E + where False = <MagicMock id='139999769857040'>.called
t.py:21: AssertionError
</code></pre>
<p>I suspect this is because the <code>assert mock.called</code> is called before the mock has been called in the separate process. What is the best way to solve this?</p>
| <python><pytest><python-multiprocessing><pytest-mock> | 2023-09-11 14:42:31 | 2 | 40,918 | Håkon Hægland |
77,082,550 | 583,187 | Search string at the beginning of line is not returned by regex and lines with $ are not ignored | <p>From a text file I want to extract everything that starts with a specific "keyword" (including the keyword itself) up to an "end_word" (excluding the end_word itself). Newline characters must be preserved as well. Every extracted string is stored in a dictionary.
I have the following Python code, which works except for two issues:</p>
<p><strong>I do not get the starting keyword in my output, and all lines starting with $ are extracted as well...</strong></p>
<pre><code>import re
keywords = [ "GRID" , "CHEXA" , "FORCE" , "FORCE*" , "MOMENT" , "MOMENT*" ]
end_words = [ "GRID" , "CHEXA" , "FORCE" , "FORCE*" , "MOMENT" , "MOMENT*" , "\$" ]
# Create a regular expression pattern
end_words_pattern = "|".join(f"(?=.*{re.escape(word)})" for word in end_words)
# Read the file
CardsFound = {}
with open("D:/home/bdf_in_test_full.dat", "r") as file:
    file_data = file.read()

for keyword in keywords:
    pattern = fr"^{re.escape(keyword)}((?:.*\n)+?(?={end_words_pattern}\n))"
    matches = re.findall(pattern, file_data, re.MULTILINE | re.DOTALL)
    CardsFound[keyword] = matches

for keyword in keywords:
    print(CardsFound[keyword])
    print("\n\n")
</code></pre>
<p>My input sample looks like this:</p>
<pre><code> $$
$$ GRID Data
$$
GRID 1 0.0 37.4999819.99999
GRID 2 -13.750233.8154619.99999
GRID 3 130.0 -405.0 39.99871
CHEXA 32662 2 51318 76683 48931 14427 76517 88177
+ 55490 48762 51318 76683 48931 14427 76517 88177
+ 51318 76683 48931 14427 76517 88177
CHEXA 32663 2 76683 48933 13278 48931 88177 55494
+ 17304 55490
CHEXA 32664 2 51311 76677 88177 76517 14422 48924
+ 55488 48760
$ PSHELL Data
$ PSOLID Data
$HWCOLOR PROP 2 3
PSOLID 2 1 0
$HWCOLOR PROP 14 4
PSOLID 14 1 0
$$
$$ MAT1 Data
$$
$ PBAR Data
$ MAT1 Data
$HMNAME MAT 1"steel" "MAT1"
$HWCOLOR MAT 1 3
MAT1 1200000.0 0.3 0.0
$$
$$
$$
$$ SPC Data
$$
SPC 1 2656 123 0.0
SPC 1 2697 23 0.0
SPC 1 3239 3 0.0
$$
$$ FORCE Data
$$
FORCE 2 2699 01.0 1000.0 0.0 0.0
FORCE 3 2699 01.0 0.0 1000.0 0.0
FORCE 4 2699 01.0 0.0 0.0 1000.0
FORCE* 29 7928000 102 0.
* .57735 .57735 .57735
FORCE* 29 7929000 102 16221.9
* -.0330417 -.0458638 .998401
$ Nodal Forces of Load Set : W06_MZFW_Gust_VB_TMSet102
MOMENT* 30 7906000 102 1.02446+7
* .188616 .979433 .071665
MOMENT* 30 7907000 102 1.4082+7
* .966316 .257311 .00498565
$
</code></pre>
<p>These lines in the text file</p>
<pre><code>CHEXA 32664 2 51311 76677 88177 76517 14422 48924
+ 55488 48760
$ PSHELL Data
</code></pre>
<p>should return an entry in the dict like this:</p>
<pre><code>'CHEXA 32664 2 51311 76677 88177 76517 14422 48924\n+ 55488 48760'
</code></pre>
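<p>For comparison, a hedged sketch of a regex shape that addresses both symptoms: it keeps the keyword inside the match (no capturing group around the tail) and treats <code>$</code> and the other keywords as hard stops via a negative lookahead applied to each following line. The five-line sample is trimmed from the input above:</p>

```python
import re

text = ("CHEXA 32663 2 76683\n"
        "+ 17304 55490\n"
        "CHEXA 32664 2 51311\n"
        "+ 55488 48760\n"
        "$ PSHELL Data\n")

stops = ["GRID", "CHEXA", "FORCE", "MOMENT", "$"]  # FORCE*/MOMENT* share these prefixes
stop_alt = "|".join(re.escape(w) for w in stops)

keyword = "CHEXA"
# Match the keyword line, then any following lines that do NOT start with a stop word.
pattern = rf"^{re.escape(keyword)}.*(?:\n(?!(?:{stop_alt})).*)*"
matches = re.findall(pattern, text, re.MULTILINE)
print(matches)
```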
| <python><regex> | 2023-09-11 14:32:13 | 3 | 2,841 | Lumpi |
77,082,545 | 2,725,810 | Making sense of the log for the cold start of AWS Lambda | <p>My AWS Lambda measures the time (in milliseconds) of copying two ML models from <code>/var/task</code> to <code>/tmp</code> and initializing these models during the cold start. Here is the relevant code (i.e. the code that executes outside of any function):</p>
<pre class="lang-py prettyprint-override"><code>import json
import subprocess
import time
def ms_now():
    return int(time.time_ns() / 1000000)

class Timer():
    def __init__(self):
        self.start = ms_now()

    def stop(self):
        return ms_now() - self.start

def shell_command(command):
    print("Executing: ", command, flush=True)
    result = subprocess.run(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
    if result.returncode == 0:
        print("Success", flush=True)
    else:
        print("Error:", result.stderr, flush=True)

def copy():
    shell_command(f"cp -r /var/task/torch /tmp/")
    shell_command(f"cp -r /var/task/huggingface /tmp/")
# Copying the models
timer = Timer()
copy()
print("Time to copy:", timer.stop(), flush=True)
# Initializing the models
timer = Timer()
from sentence_transformers import SentenceTransformer
from punctuators.models import PunctCapSegModelONNX
model_name = "pcs_en"
model_sentences = PunctCapSegModelONNX.from_pretrained(model_name)
model_embeddings = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
print("Time to initialize:", timer.stop(), flush=True)
</code></pre>
<p>The first three invocations after updating the Lambda timed out (the timeout is set at 30 seconds). The CloudWatch logs follow. For convenience, I added (manually) a separator line where I believe one invocation ends and another one begins.</p>
<pre><code>2023-09-11T16:57:36.315+03:00 Executing: cp -r /var/task/torch /tmp/
2023-09-11T16:57:46.061+03:00 Executing: cp -r /var/task/torch /tmp/
2023-09-11T16:58:16.020+03:00 START RequestId: ac4c4611-7c49-4eb0-94f2-238845e0500a Version: $LATEST
2023-09-11T16:58:16.022+03:00 2023-09-11T13:58:16.022Z ac4c4611-7c49-4eb0-94f2-238845e0500a Task timed out after 30.03 seconds
2023-09-11T16:58:16.022+03:00 END RequestId: ac4c4611-7c49-4eb0-94f2-238845e0500a
2023-09-11T16:58:16.022+03:00 REPORT RequestId: ac4c4611-7c49-4eb0-94f2-238845e0500a Duration: 30032.19 ms Billed Duration: 30000 ms Memory Size: 3000 MB Max Memory Used: 616 MB
---------------------------------------------------------------------------------------------------------------------
2023-09-11T16:58:16.621+03:00 Executing: cp -r /var/task/torch /tmp/
2023-09-11T16:58:26.956+03:00 Executing: cp -r /var/task/torch /tmp/
2023-09-11T16:58:56.915+03:00 START RequestId: 81ce18e4-9ff8-41a4-8f30-a540343c225f Version: $LATEST
2023-09-11T16:58:56.918+03:00 2023-09-11T13:58:56.918Z 81ce18e4-9ff8-41a4-8f30-a540343c225f Task timed out after 30.03 seconds
2023-09-11T16:58:56.918+03:00 END RequestId: 81ce18e4-9ff8-41a4-8f30-a540343c225f
---------------------------------------------------------------------------------------------------------------------
2023-09-11T16:58:56.918+03:00 REPORT RequestId: 81ce18e4-9ff8-41a4-8f30-a540343c225f Duration: 30033.08 ms Billed Duration: 30000 ms Memory Size: 3000 MB Max Memory Used: 760 MB
2023-09-11T16:58:57.230+03:00 Executing: cp -r /var/task/torch /tmp/
2023-09-11T16:59:48.954+03:00 Executing: cp -r /var/task/torch /tmp/
2023-09-11T17:00:12.242+03:00 Success
2023-09-11T17:00:12.242+03:00 Executing: cp -r /var/task/huggingface /tmp/
2023-09-11T17:00:15.354+03:00 Success
2023-09-11T17:00:15.354+03:00 Time to copy: 26400
2023-09-11T17:00:18.915+03:00 START RequestId: aead35f0-9b9e-4859-bce3-e56a6fbd4e87 Version: $LATEST
2023-09-11T17:00:18.924+03:00 2023-09-11T14:00:18.924Z aead35f0-9b9e-4859-bce3-e56a6fbd4e87 Task timed out after 30.04 seconds
2023-09-11T17:00:18.924+03:00 END RequestId: aead35f0-9b9e-4859-bce3-e56a6fbd4e87
2023-09-11T17:00:18.924+03:00 REPORT RequestId: aead35f0-9b9e-4859-bce3-e56a6fbd4e87 Duration: 30039.63 ms Billed Duration: 30000 ms Memory Size: 3000 MB Max Memory Used: 1629 MB
---------------------------------------------------------------------------------------------------------------------
2023-09-11T17:00:19.750+03:00 Executing: cp -r /var/task/torch /tmp/
2023-09-11T17:00:23.783+03:00 Success
2023-09-11T17:00:23.783+03:00 Executing: cp -r /var/task/huggingface /tmp/
2023-09-11T17:00:24.076+03:00 Success
2023-09-11T17:00:24.076+03:00 Time to copy: 4326
2023-09-11T17:00:28.374+03:00 The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
2023-09-11T17:00:28.377+03:00 0it [00:00, ?it/s] 0it [00:00, ?it/s]
2023-09-11T17:00:59.270+03:00 Executing: cp -r /var/task/torch /tmp/
2023-09-11T17:01:02.233+03:00 Success
2023-09-11T17:01:02.233+03:00 Executing: cp -r /var/task/huggingface /tmp/
2023-09-11T17:01:02.470+03:00 Success
2023-09-11T17:01:02.470+03:00 Time to copy: 3201
2023-09-11T17:01:07.459+03:00 Time to initialize: 4989
</code></pre>
<p>This log surprises me in many ways:</p>
<ol>
<li>During the very first invocation, the command to copy the first model (i.e. <code>cp -r /var/task/torch /tmp/</code>) is executed twice.</li>
<li>The first model is only about 90 MB. Nonetheless, copying it during the first invocation took 10 seconds and 20 seconds respectively, which is terribly slow for this amount.</li>
<li>If it takes so long to copy the model, how come it succeeded on the fourth attempt with both models (the second one is about 220 MB) taking 3 seconds to copy?</li>
<li>During the last invocation, both copy operations are repeated.</li>
</ol>
<p>What is going on in this log? Also, I would expect the function to succeed on the first invocation and take under 10 seconds, as it consists of copying ~300 MB of two models (which we saw could take only 3 seconds) and initializing the models. How do I make this happen?</p>
<p>P.S. The question at AWS Re:Post: <a href="https://repost.aws/questions/QU0ZTbt04OSBGTmt0tJpRIQA/making-sense-of-the-log-for-the-cold-start-of-aws-lambda" rel="nofollow noreferrer">https://repost.aws/questions/QU0ZTbt04OSBGTmt0tJpRIQA/making-sense-of-the-log-for-the-cold-start-of-aws-lambda</a></p>
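<p>For what it's worth, the repeated <code>Executing: cp -r /var/task/torch /tmp/</code> lines suggest the module-level copy code is being run more than once, possibly because the initialization phase is restarted after a timeout. A sketch of an idempotent copy guard (plain Python, not AWS-specific; the marker-file approach is my own assumption, not something Lambda requires):</p>

```python
import os
import shutil


def ensure_copied(src: str, dst_root: str = "/tmp") -> str:
    """Copy `src` into `dst_root` only if a previous copy completed.

    A marker file distinguishes a finished copy from a partial one left
    behind by an aborted initialization, so re-running the init code
    after the first successful copy becomes a cheap no-op.
    """
    dst = os.path.join(dst_root, os.path.basename(src))
    done_marker = dst + ".done"
    if os.path.exists(done_marker):
        return dst  # previous copy completed; nothing to do
    if os.path.isdir(dst):
        shutil.rmtree(dst)  # remove a partial copy from an aborted run
    shutil.copytree(src, dst)
    open(done_marker, "w").close()  # mark the copy as complete
    return dst
```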
| <python><shell><aws-lambda><aws-sam><aws-sam-cli> | 2023-09-11 14:31:38 | 0 | 8,211 | AlwaysLearning |
77,082,541 | 20,831,707 | how to install python3 in postgres | <p>I'm trying to work with PL/Python in PostgreSQL,
so I run the command:</p>
<pre><code>CREATE EXTENSION plpython3u
</code></pre>
<p>but I get the error:</p>
<pre><code>Erro SQL [58P01]: ERROR: could not load library "C:/Program Files/PostgreSQL/15/lib/plpython3.dll": unknown error 126
</code></pre>
<p>I read some solutions like this:
<a href="https://stackoverflow.com/questions/47907232/could-not-load-library-plpython3-dll">solution</a></p>
<p>but none of them helped me.
I checked whether the DLL exists in the PostgreSQL directory, and it is present there.</p>
<p>I'm using Windows 11, I have the latest version of Python installed on my computer, and the environment variables are set correctly. What could be causing this error?
I'm using DBeaver.</p>
<p><strong>Edit 1</strong></p>
<p>I followed a language pack installation tutorial from this link:
<a href="https://www.enterprisedb.com/docs/language_pack/latest/installing/windows/" rel="nofollow noreferrer">EDB Docs</a></p>
<p>I also installed the correct version of Python as described in this postgresql document:</p>
<pre><code>C:/ProgramFiles/PostgreSQL/15/doc/installation-notes.html
</code></pre>
<p>I added a system/user environment variable called PYTHONHOME with the path:</p>
<pre><code>C:\edb\languagepack\v3\Python-3.10
</code></pre>
<p>but I still get the same error as above.</p>
<p>I also added all the Python environment variables (the Python installer already offers the option to set them automatically).</p>
| <python><postgresql><plpython> | 2023-09-11 14:31:15 | 0 | 389 | Guilherme Rodrigues |
77,082,451 | 1,652,219 | How to test if any date matches a specific year-month within each group in Polars? | <h3>Intro</h3>
<p>In Polars I would like to do quite complex queries, and I would like to simplify the process by dividing the operations into methods. Before I can do that, I need to find out how to provide these functions with multiple columns and variables.</p>
<h3>Example Data</h3>
<pre class="lang-py prettyprint-override"><code># Libraries
import polars as pl
from datetime import datetime
# Data
test_data = pl.DataFrame({
"class": ['A', 'A', 'A', 'B', 'B', 'C'],
"date": [datetime(2020, 1, 31), datetime(2020, 2, 28), datetime(2021, 1, 31),
datetime(2022, 1, 31), datetime(2023, 2, 28),
datetime(2020, 1, 31)],
"status": [1,0,1,0,1,0]
})
</code></pre>
<h3>The Problem</h3>
<p>For each group, I would like to know if a reference date overlaps with the year-month in the date column of the dataframe.</p>
<p>I would like to do something like this.</p>
<pre class="lang-py prettyprint-override"><code># Some date
reference_date = datetime(2020, 1, 2)
# What I would expect the query to look like
(test_data
.group_by("class")
.agg(
pl.col("status").count().alias("row_count"), #just to show code that works
pl.lit(reference_date).alias("reference_date"),
pl.col("date", "status")
.map_elements(lambda group: myfunc(group, reference_date))
.alias("point_in_time_status")
)
)
</code></pre>
<pre class="lang-py prettyprint-override"><code># The desired output
pl.from_repr("""
┌───────┬─────────────────────┬──────────────────────┐
│ class ┆ reference_date ┆ point_in_time_status │
│ --- ┆ --- ┆ --- │
│ str ┆ datetime[μs] ┆ i64 │
╞═══════╪═════════════════════╪══════════════════════╡
│ A ┆ 2020-01-02 00:00:00 ┆ 1 │
│ B ┆ 2020-01-02 00:00:00 ┆ 0 │
│ C ┆ 2020-01-02 00:00:00 ┆ 0 │
└───────┴─────────────────────┴──────────────────────┘
""")
</code></pre>
<p>But I simply cannot find any solutions for performing operations on groups. Some suggest using <code>pl.struct</code>, but that just outputs an object without columns or anything obvious to work with.</p>
<h3>Example in R of the same operation</h3>
<pre class="lang-r prettyprint-override"><code># Loading library
library(tidyverse)
# Creating dataframe
df <- data.frame(
date = c(as.Date("2020-01-31"),
as.Date("2020-02-28"), as.Date("2021-01-31"),
as.Date("2022-01-31"), as.Date("2023-02-28"),
as.Date("2020-01-31")),
status = c(1,0,1,0,1,0),
class = c("A","A","A","B","B","C"))
# Finding status in overlapping months
ref_date = as.Date("2020-01-02")
df %>%
group_by("class") %>%
filter(format(date, "%Y-%m") == format(ref_date, "%Y-%m")) %>%
filter(status == 1)
</code></pre>
| <python><dataframe><datetime><group-by><python-polars> | 2023-09-11 14:20:36 | 1 | 3,944 | Esben Eickhardt |
77,082,304 | 5,769,814 | Opposite of numpy.reduce | <p>So in <code>numpy</code> we have a <code>reduce</code> function with which we can reduce one dimension of an array by applying a function to the elements across that dimension. Is there also an inverse of this function that would take a single element and expand it to a whole new dimension?</p>
<p>Let's say I have these two arrays:</p>
<pre><code>class A:
def __init__(self, a, b, c):
self.value = a, b, c
a = np.array([A(1,2,3), A(4,5,6)])
b = np.array([1<<8 | 2<<4 | 3, 4<<8 | 5<<4 | 6])
</code></pre>
<p>And I would like to transform either of these two arrays into</p>
<pre><code>np.array([[1,2,3],[4,5,6]])
</code></pre>
<p>What I'm currently doing is the following:</p>
<pre><code>def expand(arr):
a = np.empty((*arr.shape, 3))
for x, y in np.ndindex(arr.shape):
if isinstance(arr[x,y], A):
a[x,y] = arr[x,y].value
else:
a[x,y] = arr[x,y] >> 8, arr[x,y] >> 4 & 0xF, arr[x,y] & 0xF
return a
</code></pre>
<p>This works, but I'd like to avoid (slow?) iteration since it goes against the spirit of <code>numpy</code>.</p>
<p>I've also tried a solution using <code>np.vectorize</code>, but it doesn't work as I'd want it to:</p>
<pre><code>def expanded(element):
if isinstance(element, A):
return element.value
return element >> 8, element >> 4 & 0xF, element & 0xF
f = np.vectorize(expanded)
f(a) # Prints a tuple of arrays instead of the desired single array
</code></pre>
<p>Is there a better way to expand a single value to a new dimension, either via some mathematical operation or via object attribute access?</p>
| <python><numpy><vectorization> | 2023-09-11 14:00:45 | 1 | 1,324 | Mate de Vita |
77,081,953 | 6,440,589 | How to test whether timezone exists in Python/pandas? | <p>I am working with pandas' <code>tz_localize</code> function to convert dates from local time to UTC:</p>
<pre><code>time_zone = 'America/Santiago'
pd.to_datetime(pd.to_datetime(df['mydate']).apply(lambda x: datetime.strftime(x, '%d-%m-%Y %H:%M:%S'))).dt.tz_localize(time_zone).dt.tz_convert('UTC').dt.tz_localize(None)
</code></pre>
<p>This works well, but now I would like to check whether the user-defined <code>time_zone</code> actually exists prior to passing it into <code>tz_localize</code>. I found <a href="https://gist.github.com/heyalexej/8bf688fd67d7199be4a1682b3eec7568" rel="nofollow noreferrer">lists of available time zones</a> online, but I do not know how to access this information directly in Python.</p>
<p>I have seen <a href="https://docs.python.org/3/library/zoneinfo.html" rel="nofollow noreferrer">this page</a> about zoneinfo but I am looking for a Python/pandas alternative.</p>
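<p>One option, as a sketch: the standard-library <code>zoneinfo</code> module (Python 3.9+) exposes the same IANA database, so the name can be validated before calling <code>tz_localize</code> (on some systems, notably Windows, this may additionally require installing the <code>tzdata</code> package):</p>

```python
from zoneinfo import available_timezones


def tz_exists(name: str) -> bool:
    # available_timezones() returns the set of all known IANA zone names;
    # an alternative is to construct ZoneInfo(name) inside a
    # try/except zoneinfo.ZoneInfoNotFoundError block.
    return name in available_timezones()


print(tz_exists("America/Santiago"))  # True
```

<p>If staying within the pandas stack is preferred, <code>pytz.all_timezones</code> (pytz being a long-time pandas dependency) exposes an equivalent list.</p>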
| <python><pandas><timezone><utc> | 2023-09-11 13:11:59 | 1 | 4,770 | Sheldon |
77,081,785 | 394,957 | Manipulating particular values of a previous tensorflow layer | <p>I'm trying to build a tensorflow model in which the final output is a decaying function of the original data and the classifier.</p>
<p>Thus, I've got the following layer:</p>
<pre><code>class HalfLifeLayer(tf.keras.layers.Layer):
def __init__(self, **kwargs):
super(HalfLifeLayer, self).__init__(**kwargs)
def call(self, inputs):
return tf.exp(-inputs[:,5]/inputs[:,0] - inputs[:,6]/inputs[:,1])
</code></pre>
<p>and the following setup:</p>
<pre><code>from tensorflow.keras.layers import Input, Dense, Concatenate
from tensorflow.keras import Model
I = Input(shape=(None, 6))
d1 = Dense(8, kernel_regularizer='l2')
d1 = Dense(8, kernel_regularizer='l2')
d1 = Dense(8, kernel_regularizer='l2')(I)
d2 = Dense(8, kernel_regularizer='l2')(d1)
d3 = Dense(8, kernel_regularizer='l2')(d2)
d4 = Dense(2, kernel_regularizer='l2')(d3)
# A
c = Concatenate()([d4, I])
hl = HalfLifeLayer()(c)
model = Model(inputs=I, outputs=hl)
# B
model.compile(loss=tf.keras.losses.BinaryFocalCrossentropy(), optimizer=tf.keras.optimizers.Adam())
model.fit(fit_data, fit_labels, validation_split=0.1)
</code></pre>
<p>Then I get an error:</p>
<pre><code>ValueError: Can not squeeze dim[0], expected a dimension of 1, got 32 for '{{node binary_focal_crossentropy/weighted_loss/Squeeze}} = Squeeze[T=DT_FLOAT, squeeze_dims=[-1]](Cast)' with input shapes: [32].
</code></pre>
<p>I think this should be equivalent to removing the code between <code># A</code> and <code># B</code> and instead doing</p>
<pre><code>output = tf.exp(-I[:,3]/d4[:,0] - I[:,4]/d4[:,1])
model = Model(input=I, output=output)
</code></pre>
<p>but this gives a different error:</p>
<pre><code>ValueError: Exception encountered when calling layer "tf.math.truediv_12" (type TFOpLambda).
Dimensions must be equal, but are 6 and 2 for '{{node tf.math.truediv_12/truediv}} = RealDiv[T=DT_FLOAT](Placeholder, Placeholder_1)' with input shapes: [?,6], [?,2].
Call arguments received by layer "tf.math.truediv_12" (type TFOpLambda):
• x=tf.Tensor(shape=(None, 6), dtype=float32)
• y=tf.Tensor(shape=(None, 2), dtype=float32)
• name=None
</code></pre>
<p>It doesn't seem to be recognizing the dimensions of my data or neurons correctly. How would I resolve this problem?</p>
| <python><tensorflow><keras> | 2023-09-11 12:50:16 | 0 | 1,955 | Mark C. |
77,081,746 | 15,222,211 | FastAPI - multiple examples for Body in Response | <p>I need to create multiple examples for the Response Body, to display it in API documentation <a href="http://127.0.0.1:8000/docs" rel="nofollow noreferrer">http://127.0.0.1:8000/docs</a>.
I found an example for the Request Body in the <a href="https://fastapi.tiangolo.com/tutorial/schema-extra-example/#openapi-examples-in-the-docs-ui" rel="nofollow noreferrer">documentation</a> (there is a drop-down list of: "A normal example", "An example with converted data", etc.), but I require the same approach for the Response Body.</p>
<pre class="lang-py prettyprint-override"><code>import uvicorn
from fastapi import FastAPI
from pydantic import BaseModel
app = FastAPI()
class Item(BaseModel):
value: str
model_config = {
"json_schema_extra": {
"examples": [
{"value": "A normal example"}, # working
{"value": "An example with converted data"}, # NOT WORKING
{"value": "Invalid data ..."}, # NOT WORKING
]
}
}
@app.get(path="/")
def get1() -> Item:
return Item(value="a")
if __name__ == "__main__":
uvicorn.run(app)
</code></pre>
| <python><fastapi><pydantic> | 2023-09-11 12:44:21 | 2 | 814 | pyjedy |
77,081,709 | 21,365,399 | not sure how to interpret and solve ImportError from lxml using py2exe | <p>I'm trying to build a standalone executable file from a python script using <a href="https://pypi.org/project/py2exe/" rel="nofollow noreferrer">py2exe</a></p>
<p>The creation of the .exe file does not output any warnings or errors, but when I actually try to execute it from the command prompt it outputs the following error:</p>
<blockquote>
<p>Traceback (most recent call last):</p>
<p>File "", line 259, in load_module</p>
<p>File "", line 15, in </p>
<p>File "", line 13, in __load</p>
<p>ImportError: (cannot import name _elementpath) 'C:\CAD\Workspace\username\dev-env\dist\lxml.etree.pyd'</p>
<p>During handling of the above exception, another exception occurred:</p>
<p>Traceback (most recent call last):
File "", line 259, in load_module</p>
<p>File "", line 15, in </p>
<p>File "", line 11, in __load
...</p>
<p>File "", line 627, in _load_backward_compatible
File "", line 261, in load_module</p>
<p>KeyError: 'lxml.objectify'</p>
</blockquote>
<p>The call I use, in the <code>dist</code> folder, is the following: <code>mdf2mat.exe C:\CAD\username\usefulScripts\dev-folder\dosingFromCANape.mdf</code></p>
<p>And this is the code of the script:</p>
<pre><code>import mdfreader
import sys
fileIn = sys.argv[1]
mdfFile = mdfreader.Mdf(fileIn)
mdfFile.export_to_matlab()
</code></pre>
<p>and lastly, this is the code i use to create the .exe:</p>
<pre><code>py2exe.freeze(console={"C:\CAD\Workspace\username\dev-env\mdf2mat.py"})
</code></pre>
<p>Now from the <a href="http://www.py2exe.org/index.cgi/WorkingWithVariousPackagesAndModules" rel="nofollow noreferrer">list</a> of packages supported by py2exe, <code>lxml</code> has this remark:</p>
<blockquote>
<p>if missing _elementhpath, either pull whole lxml library in packages=..., or put "from lxml import _elementhpath as _dummy" somewhere in code; in both cases also pull gzip in packages=...</p>
</blockquote>
<p>which describes exactly my situation.</p>
<p>Since I'm quite a beginner with this, I have no idea what it means to 'pull the whole library in packages' or 'pull gzip in packages', and hence I cannot solve the issue.</p>
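<p>As far as I understand the remark, "pulling a package" means naming it in the build options so py2exe bundles every submodule rather than only the imports it detects; the other suggested fix is adding <code>from lxml import _elementpath as _dummy</code> near the top of <code>mdf2mat.py</code>. A sketch of what the options could look like (the exact option keys are my assumption based on py2exe's <code>freeze</code> interface; the Windows-only call is left commented out):</p>

```python
# Hypothetical build script; run it on Windows where py2exe is installed.
freeze_options = {
    "packages": ["lxml", "gzip"],        # pull the whole lxml package, plus gzip
    "includes": ["lxml._elementpath"],   # explicitly bundle the hidden submodule
}

# import py2exe
# py2exe.freeze(
#     console=[{"script": r"C:\CAD\Workspace\username\dev-env\mdf2mat.py"}],
#     options=freeze_options,
# )
```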
| <python><exe><lxml><py2exe> | 2023-09-11 12:39:41 | 0 | 656 | Ferro Luca |
77,081,560 | 17,561,414 | change feed data in databricks | <p>How can I pass the latest version as <code>startingVersion</code> and <code>endingVersion</code> while I'm trying to read change feed data in Databricks?</p>
<p>Code I have</p>
<pre><code>df1 = spark.read.format("delta") \
.option("readChangeFeed", "true") \
.option("startingVersion", 45) \
.option("endingVersion", 45) \
.table("mdp_prd.bronze.nrq_customerassetproperty_autoloader_nodups")
</code></pre>
<p>The goal is to avoid hardcoding <code>startingVersion</code> and <code>endingVersion</code>, and to avoid using some kind of watermark table. Is there a built-in option for this feature that automatically detects the latest <code>_commit_version</code> value?</p>
<p>P.S. Streaming is not an option either.</p>
| <python><azure><databricks><azure-databricks> | 2023-09-11 12:18:24 | 1 | 735 | Greencolor |
77,081,425 | 5,308,892 | Match functions with more than two arguments with Regex | <p>I want to write a regular expression in Python that matches functions with more than two arguments, such that the following expressions match:</p>
<pre><code>function(1, 2, 3)
function(1, 2, 3, 4)
function(1, function(1, 2), 3)
function(function(function(1, 2), 2), 2, 3, 4)
</code></pre>
<p>but the following don't:</p>
<pre><code>function(1, 2)
function(1, function(1, function(1, 2)))
function(1, function(function(1), 2))
</code></pre>
<p>My best attempt was the following expression, which only works for the cases without nested functions:</p>
<pre><code>\w+\((?:.*,){2,}.*\)
</code></pre>
<p>What expression should I use instead?</p>
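<p>For context: the standard-library <code>re</code> module cannot match arbitrarily nested balanced parentheses, since that requires recursion, which PCRE and the third-party <code>regex</code> module support via <code>(?R)</code> and <code>(?&amp;name)</code> subroutine calls. As an alternative sketch, a small scanner that counts top-level commas handles the nesting exactly; it checks the outermost call only, which is consistent with all the examples above:</p>

```python
def has_more_than_two_args(call: str) -> bool:
    """True if the outermost function call in `call` has three or more arguments."""
    depth = 0
    top_level_commas = 0
    # Scan from the first opening parenthesis, tracking nesting depth.
    for ch in call[call.index("("):]:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        elif ch == "," and depth == 1:
            # A comma at depth 1 separates the outermost call's arguments.
            top_level_commas += 1
    return top_level_commas >= 2
```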
| <python><regex><pcre><parentheses> | 2023-09-11 11:59:12 | 2 | 2,146 | cabralpinto |
77,081,386 | 4,451,521 | In which case to use action store_false in python argparse? | <p>This question is about the very useful argparse Python library.</p>
<p>We know the basic usage:</p>
<pre><code>import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--useOptionA",help="activate the useOptionA option", action="store_true")
args=parser.parse_args()
print(args)
</code></pre>
<p>So logically, if we do:</p>
<pre><code>python program.py
Namespace(useOptionA=False)
python program.py --useOptionA
Namespace(useOptionA=True)
</code></pre>
<p>It makes sense, right?</p>
<p>My question is, in which cases (or how) can we use the <code>action="store_false"</code> option?</p>
<p>I tried</p>
<pre><code>import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--noOptionB",help="do not usethe useOptionB option", action="store_false")
args=parser.parse_args()
print(args)
</code></pre>
<p>but then I get</p>
<pre><code>python program.py
Namespace(noOptionB=True)
python program.py --noOptionB
Namespace(noOptionB=False)
</code></pre>
<p>which does not make sense, if all I want is to have a variable <code>useOptionB</code> set true or false.
(unless I make <code>useOptionB = not args.noOptionB</code>, but that feels a bit weird, doesn't it?)</p>
<p>The alternative also does not make sense</p>
<pre><code>import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--useOptionC",help="activate the useOptionC option", action="store_false")
args=parser.parse_args()
print(args)
</code></pre>
<p>so</p>
<pre><code>python program.py
Namespace(useOptionC=True)
python program.py --useOptionC
Namespace(useOptionC=False)
</code></pre>
<p>In that case the default is to use option C, but the argument that says "use" actually deactivates it.</p>
<p>It is a matter of semantics I suppose, but can someone write an example that makes sense of that option?</p>
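<p>One pattern that may make the semantics read naturally: pair <code>action="store_false"</code> with an explicit <code>dest</code>, so a negative-sounding flag still populates a positively named variable (Python 3.9+ also offers <code>argparse.BooleanOptionalAction</code>, which generates <code>--flag/--no-flag</code> pairs automatically):</p>

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--no-optionB",
    dest="useOptionB",      # the flag clears this positively named variable
    action="store_false",   # so the default value is True
    help="deactivate the useOptionB option",
)

print(parser.parse_args([]))                 # Namespace(useOptionB=True)
print(parser.parse_args(["--no-optionB"]))   # Namespace(useOptionB=False)
```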
| <python><argparse> | 2023-09-11 11:50:14 | 2 | 10,576 | KansaiRobot |
77,081,344 | 3,560,671 | SQLAlchemy ORM add Session Sleeping and not Committing on SQL Server 2022 | <p>I'm following <a href="https://docs.sqlalchemy.org/en/20/orm/quickstart.html" rel="nofollow noreferrer">SQLAlchemy ORM's Quick Start tutorial</a> and can't get the first code to work on our company's SQL Server 2022 database. I'm able to create tables (and commit them), but I'm unable to commit after adding the data. The script doesn't do anything after the insert statements. No commit is performed.</p>
<p>To troubleshoot and perform some further testing, I've set up my own SQL Server database (using AWS RDS). To my surprise, the exact same code from the tutorial (with a different connection string, of course) works without any problems.
I've therefore reached out to our DBAs, and they see the session sleeping without performing a commit.</p>
<p>Here's my Python code (which is copy+pasted from the SQLAlchemy ORM Quick Start):</p>
<pre class="lang-py prettyprint-override"><code>from typing import List
from typing import Optional
from sqlalchemy import ForeignKey
from sqlalchemy import String
from sqlalchemy.orm import Session
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import relationship
import engine_creator
class Base(DeclarativeBase):
pass
class User(Base):
__tablename__ = "user_account"
id: Mapped[int] = mapped_column(primary_key=True)
name: Mapped[str] = mapped_column(String(30))
fullname: Mapped[Optional[str]]
addresses: Mapped[List["Address"]] = relationship(
back_populates="user", cascade="all, delete-orphan"
)
class Address(Base):
__tablename__ = "address"
id: Mapped[int] = mapped_column(primary_key=True)
email_address: Mapped[str]
user_id: Mapped[int] = mapped_column(ForeignKey("user_account.id"))
user: Mapped["User"] = relationship(back_populates="addresses")
if __name__ == '__main__':
engine = engine_creator.create_engine(echo=True, future=True)
Base.metadata.create_all(engine)
with Session(engine) as session:
spongebob = User(
name="spongebob",
fullname="Spongebob Squarepants",
addresses=[Address(email_address="spongebob@sqlalchemy.org")],
)
sandy = User(
name="sandy",
fullname="Sandy Cheeks",
addresses=[
Address(email_address="sandy@sqlalchemy.org"),
Address(email_address="sandy@squirrelpower.org"),
],
)
patrick = User(name="patrick", fullname="Patrick Star")
session.add_all([spongebob, sandy, patrick])
session.commit()
</code></pre>
<p>Running this code on my own database executes successfully with the following debug output:</p>
<pre><code>/Users/thomasv/PycharmProjects/sqlalchemy_orm/venv/bin/python /Users/thomasv/PycharmProjects/sqlalchemy_orm/src/main_tutorial.py
2023-09-11 13:27:53,100 INFO sqlalchemy.engine.Engine SELECT CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR)
2023-09-11 13:27:53,100 INFO sqlalchemy.engine.Engine [raw sql] ()
2023-09-11 13:27:53,156 INFO sqlalchemy.engine.Engine SELECT schema_name()
2023-09-11 13:27:53,157 INFO sqlalchemy.engine.Engine [generated in 0.00025s] ()
2023-09-11 13:27:53,315 INFO sqlalchemy.engine.Engine SELECT CAST('test max support' AS NVARCHAR(max))
2023-09-11 13:27:53,316 INFO sqlalchemy.engine.Engine [generated in 0.00029s] ()
2023-09-11 13:27:53,368 INFO sqlalchemy.engine.Engine SELECT 1 FROM fn_listextendedproperty(default, default, default, default, default, default, default)
2023-09-11 13:27:53,369 INFO sqlalchemy.engine.Engine [generated in 0.00031s] ()
2023-09-11 13:27:53,472 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2023-09-11 13:27:53,477 INFO sqlalchemy.engine.Engine SELECT [INFORMATION_SCHEMA].[TABLES].[TABLE_NAME]
FROM [INFORMATION_SCHEMA].[TABLES]
WHERE ([INFORMATION_SCHEMA].[TABLES].[TABLE_TYPE] = CAST(? AS NVARCHAR(max)) OR [INFORMATION_SCHEMA].[TABLES].[TABLE_TYPE] = CAST(? AS NVARCHAR(max))) AND [INFORMATION_SCHEMA].[TABLES].[TABLE_NAME] = CAST(? AS NVARCHAR(max)) AND [INFORMATION_SCHEMA].[TABLES].[TABLE_SCHEMA] = CAST(? AS NVARCHAR(max))
2023-09-11 13:27:53,477 INFO sqlalchemy.engine.Engine [generated in 0.00031s] ('BASE TABLE', 'VIEW', 'user_account', 'dbo')
2023-09-11 13:27:53,586 INFO sqlalchemy.engine.Engine SELECT [INFORMATION_SCHEMA].[TABLES].[TABLE_NAME]
FROM [INFORMATION_SCHEMA].[TABLES]
WHERE ([INFORMATION_SCHEMA].[TABLES].[TABLE_TYPE] = CAST(? AS NVARCHAR(max)) OR [INFORMATION_SCHEMA].[TABLES].[TABLE_TYPE] = CAST(? AS NVARCHAR(max))) AND [INFORMATION_SCHEMA].[TABLES].[TABLE_NAME] = CAST(? AS NVARCHAR(max)) AND [INFORMATION_SCHEMA].[TABLES].[TABLE_SCHEMA] = CAST(? AS NVARCHAR(max))
2023-09-11 13:27:53,586 INFO sqlalchemy.engine.Engine [cached since 0.1096s ago] ('BASE TABLE', 'VIEW', 'address', 'dbo')
2023-09-11 13:27:53,696 INFO sqlalchemy.engine.Engine COMMIT
2023-09-11 13:27:53,754 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2023-09-11 13:27:53,755 INFO sqlalchemy.engine.Engine INSERT INTO user_account (name, fullname) OUTPUT inserted.id, inserted.id AS id__1 SELECT p0, p1 FROM (VALUES (?, ?, 0), (?, ?, 1), (?, ?, 2)) AS imp_sen(p0, p1, sen_counter) ORDER BY sen_counter
2023-09-11 13:27:53,755 INFO sqlalchemy.engine.Engine [generated in 0.00010s (insertmanyvalues) 1/1 (ordered)] ('spongebob', 'Spongebob Squarepants', 'sandy', 'Sandy Cheeks', 'patrick', 'Patrick Star')
2023-09-11 13:27:53,858 INFO sqlalchemy.engine.Engine INSERT INTO address (email_address, user_id) OUTPUT inserted.id, inserted.id AS id__1 SELECT p0, p1 FROM (VALUES (?, ?, 0), (?, ?, 1), (?, ?, 2)) AS imp_sen(p0, p1, sen_counter) ORDER BY sen_counter
2023-09-11 13:27:53,858 INFO sqlalchemy.engine.Engine [generated in 0.00008s (insertmanyvalues) 1/1 (ordered)] ('spongebob@sqlalchemy.org', 4, 'sandy@sqlalchemy.org', 5, 'sandy@squirrelpower.org', 5)
2023-09-11 13:27:53,970 INFO sqlalchemy.engine.Engine COMMIT
Process finished with exit code 0
</code></pre>
<p>When I run this exact same script using our company's SQL Server database, I get the following output:</p>
<pre><code>/Users/thomasv/PycharmProjects/sqlalchemy_orm/venv/bin/python /Users/thomasv/PycharmProjects/sqlalchemy_orm/src/main_tutorial.py
2023-09-11 13:29:31,059 INFO sqlalchemy.engine.Engine SELECT CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR)
2023-09-11 13:29:31,059 INFO sqlalchemy.engine.Engine [raw sql] ()
2023-09-11 13:29:31,095 INFO sqlalchemy.engine.Engine SELECT schema_name()
2023-09-11 13:29:31,095 INFO sqlalchemy.engine.Engine [generated in 0.00027s] ()
2023-09-11 13:29:31,215 INFO sqlalchemy.engine.Engine SELECT CAST('test max support' AS NVARCHAR(max))
2023-09-11 13:29:31,215 INFO sqlalchemy.engine.Engine [generated in 0.00042s] ()
2023-09-11 13:29:31,251 INFO sqlalchemy.engine.Engine SELECT 1 FROM fn_listextendedproperty(default, default, default, default, default, default, default)
2023-09-11 13:29:31,251 INFO sqlalchemy.engine.Engine [generated in 0.00030s] ()
2023-09-11 13:29:31,322 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2023-09-11 13:29:31,327 INFO sqlalchemy.engine.Engine SELECT [INFORMATION_SCHEMA].[TABLES].[TABLE_NAME]
FROM [INFORMATION_SCHEMA].[TABLES]
WHERE ([INFORMATION_SCHEMA].[TABLES].[TABLE_TYPE] = CAST(? AS NVARCHAR(max)) OR [INFORMATION_SCHEMA].[TABLES].[TABLE_TYPE] = CAST(? AS NVARCHAR(max))) AND [INFORMATION_SCHEMA].[TABLES].[TABLE_NAME] = CAST(? AS NVARCHAR(max)) AND [INFORMATION_SCHEMA].[TABLES].[TABLE_SCHEMA] = CAST(? AS NVARCHAR(max))
2023-09-11 13:29:31,327 INFO sqlalchemy.engine.Engine [generated in 0.00040s] ('BASE TABLE', 'VIEW', 'user_account', 'dbo')
2023-09-11 13:29:31,384 INFO sqlalchemy.engine.Engine SELECT [INFORMATION_SCHEMA].[TABLES].[TABLE_NAME]
FROM [INFORMATION_SCHEMA].[TABLES]
WHERE ([INFORMATION_SCHEMA].[TABLES].[TABLE_TYPE] = CAST(? AS NVARCHAR(max)) OR [INFORMATION_SCHEMA].[TABLES].[TABLE_TYPE] = CAST(? AS NVARCHAR(max))) AND [INFORMATION_SCHEMA].[TABLES].[TABLE_NAME] = CAST(? AS NVARCHAR(max)) AND [INFORMATION_SCHEMA].[TABLES].[TABLE_SCHEMA] = CAST(? AS NVARCHAR(max))
2023-09-11 13:29:31,384 INFO sqlalchemy.engine.Engine [cached since 0.05768s ago] ('BASE TABLE', 'VIEW', 'address', 'dbo')
2023-09-11 13:29:31,442 INFO sqlalchemy.engine.Engine COMMIT
2023-09-11 13:29:31,481 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2023-09-11 13:29:31,483 INFO sqlalchemy.engine.Engine INSERT INTO user_account (name, fullname) OUTPUT inserted.id, inserted.id AS id__1 SELECT p0, p1 FROM (VALUES (?, ?, 0), (?, ?, 1), (?, ?, 2)) AS imp_sen(p0, p1, sen_counter) ORDER BY sen_counter
2023-09-11 13:29:31,483 INFO sqlalchemy.engine.Engine [generated in 0.00014s (insertmanyvalues) 1/1 (ordered)] ('spongebob', 'Spongebob Squarepants', 'sandy', 'Sandy Cheeks', 'patrick', 'Patrick Star')
2023-09-11 13:29:31,742 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2023-09-11 13:29:31,744 INFO sqlalchemy.engine.Engine INSERT INTO address (email_address, user_id) OUTPUT inserted.id, inserted.id AS id__1 SELECT p0, p1 FROM (VALUES (?, ?, 0), (?, ?, 1), (?, ?, 2)) AS imp_sen(p0, p1, sen_counter) ORDER BY sen_counter
2023-09-11 13:29:31,745 INFO sqlalchemy.engine.Engine [generated in 0.00021s (insertmanyvalues) 1/1 (ordered)] ('spongebob@sqlalchemy.org', 4, 'sandy@sqlalchemy.org', 5, 'sandy@squirrelpower.org', 5)
</code></pre>
<p>As you can see, there is no <code>COMMIT</code> after the <code>INSERT INTO</code> statements. However, there is a <code>COMMIT</code> after the <code>CREATE TABLE</code> statements, and the tables are correctly created.
Our team doesn't manage the database, and our DBAs can't determine the problem. They have been able to determine that the session is sleeping.</p>
<p>Can anyone point me in the direction of things I can attempt or I can ask my DBAs to look into (in terms of SQL Server configuration)?</p>
<p>Thank you for any help you can offer!</p>
| <python><sql-server><sqlalchemy><orm><pyodbc> | 2023-09-11 11:43:10 | 0 | 897 | Thomas Vanhelden |
77,081,311 | 17,561,414 | how to create the watermark table in databricks | <p>I would like to have a watermark table created in Databricks with one column (<code>version</code>) and its initial value 1. This will be the starting point. Every time the Python script finishes running, I want to increment the value by 1.</p>
<p>The goal is to use this value later, before the Python code runs.</p>
| <python><azure-databricks><watermark> | 2023-09-11 11:38:43 | 1 | 735 | Greencolor |
77,081,025 | 9,731,056 | Record and Speak at the same time using Twilio | <p>I am currently developing a Python Flask application that utilizes Twilio, and I am encountering the following challenge:</p>
<ul>
<li><p>The application needs to <strong>maintain continuous listening</strong> capabilities to transcribe the user's speech into text.</p>
</li>
<li><p>Once the user finishes speaking, the application should accurately repeat what the user has said. However, <strong>it must also remain attentive to any additional input from the user</strong>, allowing it to <strong>promptly cease repeating if the user interrupts</strong>.</p>
</li>
</ul>
<p>In simpler terms, I'm seeking a way to have TwiML instructions run concurrently, like:</p>
<pre><code>response = VoiceResponse()
response.say(text)
</code></pre>
<pre><code>response = VoiceResponse()
response.gather(input='speech', action=URL)
</code></pre>
<p>However, it appears that Twilio follows a <strong>linear execution of its TwiML document</strong>. Any guidance on how to work around this issue would be greatly appreciated. Thank you in advance for your assistance!</p>
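<p>One direction worth noting, as a sketch rather than a verified solution: Twilio's <code>&lt;Start&gt;&lt;Stream&gt;</code> verb forks the call audio to a WebSocket asynchronously and then immediately continues to the next TwiML verb, so transcription can keep running while a <code>&lt;Say&gt;</code> plays; interrupting the playback when new speech arrives would then be handled on the application side. The TwiML below is written as a plain string (the WebSocket URL is a placeholder), and the same document can also be built with <code>VoiceResponse</code> plus the <code>Start</code>/<code>Stream</code> classes from the helper library:</p>

```python
# Hand-written TwiML: <Start> is asynchronous, so <Say> begins
# while the audio stream keeps flowing to the WebSocket endpoint.
twiml = """<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Start>
    <Stream url="wss://example.com/audio" />
  </Start>
  <Say>Repeating what you said, while still listening.</Say>
</Response>"""

print(twiml)
```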
| <python><asynchronous><concurrency><twilio><voice> | 2023-09-11 10:55:18 | 0 | 645 | Tbertin |
77,080,720 | 12,236,313 | Celery signatures .s(): early or late binding? | <p>This <a href="https://adamj.eu/tech/2022/08/22/use-partial-with-djangos-transaction-on-commit/" rel="nofollow noreferrer">blog post</a> of Adam Johnson perfectly illustrates the difference between <em><strong>early binding</strong></em> and <em><strong>late binding</strong></em> closures in Python. This is also explained <a href="https://docs.python-guide.org/writing/gotchas/#late-binding-closures" rel="nofollow noreferrer">here</a>.</p>
<p>I've got a function <code>my_func</code> which corresponds to a Celery task (it's decorated with <code>@shared_task</code>). It expects some arguments. I run it with the following piece of code, using Celery signatures as described <a href="https://docs.celeryq.dev/en/stable/userguide/canvas.html#signatures" rel="nofollow noreferrer">in the documentation</a> (see also <a href="https://stackoverflow.com/questions/26942604/celery-and-transaction-atomic#answer-40050417">this StackOverflow answer</a>):</p>
<pre class="lang-py prettyprint-override"><code>from functools import partial
from django.db import transaction
from example.tasks import my_func
# next line can be found at the end of some other functions which involve a transaction
transaction.on_commit(my_func.s(param1, param2).delay))
</code></pre>
<p><strong>Are Celery signatures early binding or late binding?</strong>
And how to demonstrate it in an elegant way?</p>
<p>If they are early binding, the following line of code should be equivalent:</p>
<pre class="lang-py prettyprint-override"><code>transaction.on_commit(partial(my_func.delay, param1, param2))
</code></pre>
<p>If they are late binding, I think I could be facing a pesky bug with my current code, in some edge cases...</p>
| <python><django><celery><late-binding><early-binding> | 2023-09-11 10:10:53 | 1 | 1,030 | scūriolus |
77,080,169 | 1,422,096 | How to do a try / retry / except? | <p>Is there a way to avoid nested <code>try / except</code> when there are <strong>3 cases</strong> (a bit like <code>if</code> <code>elif</code> <code>else</code>)?</p>
<pre><code>try:
name = open("test.txt", "r").read() # test in current folder
except FileNotFoundError:
try:
name = open("c:/test.txt", "r").read() # test in drive root
except FileNotFoundError:
name = "empty" # not found
</code></pre>
<p>Note: this is just an example of a general case <code>try/retry/except</code>, so here I'm not interested for a solution with <code>if os.path.exists(...)</code>, which I already know, or other solutions which would be specific to this file open example.</p>
| <python><exception><try-except> | 2023-09-11 08:49:43 | 2 | 47,388 | Basj |
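A dependency-free way to generalize this try/retry/except pattern to any number of fallbacks is a loop over zero-argument callables; a minimal sketch (the file paths are just the question's examples and will usually be missing):

```python
def first_success(attempts, exceptions, default):
    """Try each zero-argument callable in turn; return the first result
    that does not raise one of `exceptions`, else the default."""
    for attempt in attempts:
        try:
            return attempt()
        except exceptions:
            continue
    return default

name = first_success(
    [lambda: open("test.txt").read(),       # test in current folder
     lambda: open("c:/test.txt").read()],   # test in drive root
    FileNotFoundError,
    "empty",                                # not found anywhere
)
```

This scales to any number of fallback locations without nesting, and `exceptions` can be a tuple to retry on several error types.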
77,080,078 | 10,518,698 | Video output is not showing in OpenPifPaf | <p>I am trying to detect pose from my Webcamera using OpenPifPaf Library (<a href="https://openpifpaf.github.io/tutorial_opencv.html" rel="nofollow noreferrer">https://openpifpaf.github.io/tutorial_opencv.html</a>). However I am always getting this error, <code>AttributeError: 'NoneType' object has no attribute '__array_interface__'</code></p>
<p>This is what I have tried</p>
<pre><code># importing all the libraries
import openpifpaf
import cv2
VIDEO_LINK = 0 # 0 for webcamera
video = cv2.VideoCapture(VIDEO_LINK)
predictor = openpifpaf.Predictor(checkpoint='shufflenetv2k16')
annotation_painter = openpifpaf.show.AnnotationPainter()
while(True):
_, frame = video.read()
predictions, gt_anns, meta = predictor.numpy_image(frame)
# cv2.imshow('frame', frame)
with openpifpaf.show.Canvas.image(frame) as ax:
img = annotation_painter.annotations(ax, predictions)
cv2.imshow('frame2', img)
# to exit the video live
if cv2.waitKey(33) & 0xFF == ord('q'):
break
video.release()
cv2.destroyAllWindows()
</code></pre>
<p>I can get the pose estimate on images, but I can't get it on video. Any expert advice would be appreciated.</p>
<p>This is the complete error traceback message:</p>
<pre><code>src\openpifpaf\csrc\src\cif_hr.cpp:102: UserInfo: resizing cifhr buffer
src\openpifpaf\csrc\src\occupancy.cpp:53: UserInfo: resizing occupancy buffer
Traceback (most recent call last):
File "d:/Projects/HumanPoseEstimation/openpifpaf_test_on_video.py", line 2, in <module>
import openpifpaf
File "D:\Projects\HumanPoseEstimation\human_pose_estimation_venv\lib\site-packages\openpifpaf\__init__.py", line 41, in <module>
plugin.register()
File "D:\Projects\HumanPoseEstimation\human_pose_estimation_venv\lib\site-packages\openpifpaf\plugin.py", line 31, in register
module = importlib.import_module(name)
File "C:\Users\AppData\Local\Programs\Python\Python38\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "d:\Projects\HumanPoseEstimation\openpifpaf_test_on_video.py", line 16, in <module>
cv2.imshow('frame2', img)
cv2.error: OpenCV(4.8.0) D:\a\opencv-python\opencv-python\opencv\modules\highgui\src\window.cpp:971: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'cv::imshow'
</code></pre>
| <python><opencv><pose-estimation><openpifpaf> | 2023-09-11 08:36:53 | 0 | 513 | JSVJ |
77,080,024 | 2,900,959 | How to fetch DynamoDB records with FilterExpression=Attr.exists via boto3? | <p>I have the following code:</p>
<pre><code>import boto3
from boto3.dynamodb.conditions import Attr
resource = boto3.resource("dynamodb")
table = resource.Table(MY_TABLE_NAME)
records = []
pagination = {}
while True:
response = table.scan(FilterExpression=Attr(attr_name).exists(), **pagination)
records += response["Items"]
if response.get("LastEvaluatedKey"):
pagination = {"ExclusiveStartKey": response["LastEvaluatedKey"]}
else:
break
</code></pre>
<p>When I run it, I get the empty <code>records</code> list.</p>
<p>However, the following code works and the list contains all the records with the given attribute:</p>
<pre><code>resource = boto3.resource("dynamodb")
table = resource.Table(MY_TABLE_NAME)
records = []
pagination = {}
while True:
response = table.scan(
FilterExpression="attribute_exists(#1)",
ExpressionAttributeNames={"#1": attr_name},
**pagination
)
records += response["Items"]
if response.get("LastEvaluatedKey"):
pagination = {"ExclusiveStartKey": response["LastEvaluatedKey"]}
else:
break
</code></pre>
<p>I really like the neat <code>Attr</code> syntax and want to keep it consistent across the codebase (as I use this approach in the other places of my code). So, maybe I'm doing something wrong with my query?</p>
<p>I'm using boto3 v1.28.44</p>
| <python><amazon-dynamodb><boto3> | 2023-09-11 08:30:02 | 1 | 398 | MartinSolie |
77,079,893 | 12,983,543 | JWE Invalid Invalid Initialization Vector length | <p>I need to encrypt using Python with the A256GCM algorithm, and getting back a JWT that I need to send to a Typescript backend, that then needs to decrypt it and consume its content.</p>
<p>This is the code that in the backend is used to decrypt what is received:</p>
<pre><code>import { jwtDecrypt } from 'jose'
result = await jwtDecrypt(request.data, this.key)
</code></pre>
<p>And this is the code that I use on the Python application to encode and then send the code to the backend.</p>
<pre><code>self.key = secrets.token_bytes(32)
self.b64s = base64.b64encode(self.key).decode('utf-8')
data_payload = json.dumps(self.payload)
self.token = jwe.encrypt(data_payload, self.key,
algorithm='dir', encryption='A256GCM').decode('utf-8')
</code></pre>
<p>I checked whether what arrives and what I produced are the same, and found that the backend receives exactly what I am sending, so the transmission of the data is working correctly.</p>
<p>Here is an example of generated data</p>
<ul>
<li>key: BUCHFQE9nxIo4TwT613Xxm9wZ3U0zOmF7x7La2DBNj0=</li>
<li>JWT: eyJhbGciOiJkaXIiLCJlbmMiOiJBMjU2R0NNIn0..Fld1u1UAP0Hej4QM2bp1Og.1T854v93imiIlbFCVUCmMViZsSSb305u7cqDpj2LlITkfH0kjptcPGBK1OQtI5HjA9e6kjmQfuyUncDQ1wCxbCpY84Qe_jAnRorywdBMPxhwQZN860qJlkN4ZvOK8sLX-FgkKekw2Nmq3g09KjZpksYVtpkHEYB0zb7c2ZmMS4W7rvEk2K6YJoPtO3LX1ophhVNQVWWQljF1T60RoKgf_dPOGDMb051raN-w5aPykv9Da62_GGQtv6c6o5vMd7VNiBQaQ2uqEELikQ-85RgFZoX-8rieuEyR6O622ocCWDR0En6DrFenXYBk74K4WffiPAqpun0.QcN5OfrTGIFPkwzwiE3Png</li>
</ul>
<p>But then, when is time to decrypt in the backend, it says the following error:</p>
<pre><code>JWEInvalid: Invalid Initialization Vector length
</code></pre>
<p>I have not used an IV anywhere, and the server implementation (which I have to stick with, since it is third party and I cannot change it) does not require one.</p>
<p>Do you have some idea where the problem might be? How could I fix this?</p>
<p>One thing that popped into my mind was that JWE is not the same as JWT, so I tried to use that library in Python, but this does not work, since <code>jose</code> does not yet implement the A256GCM algorithm.</p>
<p>Does someone have some idea or pointer? Thank you a lot</p>
| <python><encryption><aes> | 2023-09-11 08:09:17 | 0 | 614 | Matteo Possamai |
77,079,773 | 5,567,893 | python map function returns NaN value even though the key exists in the dictionary | <p>I have trouble converting values using <code>map</code> and a <code>dictionary</code>.</p>
<p>Although I checked a similar situation here (<a href="https://stackoverflow.com/questions/61369944/dictionary-mapping-return-nan">Dictionary mapping return Nan</a>), I can't understand why it didn't work well.</p>
<p>I have two dataframes, one is the main dataframe and another one is for the dictionary.</p>
<pre class="lang-py prettyprint-override"><code>df1
# Gene Name Drug IDs
#0 GLS2 DB00130
#1 GLS2 DB11118
#2 F13A1 DB00130
#3 NOS2 DB00997
#4 NOS2 DB09237
#...
id_converter
# DrugBank ID of Ligand DrugName
#0 DB00001 Lepirudin
#1 DB00002 Cetuximab
#2 DB00003 Dornase alfa
#...
</code></pre>
<p>I made <code>id_converter</code> to the dictionary as below:</p>
<pre class="lang-py prettyprint-override"><code>drugbank_mapping = dict(id_converter[['DrugBank ID of Ligand', 'DrugName']].values)
</code></pre>
<p>Then, I converted Drug IDs to DrugName, but it returned NaN even though the keys are in the dictionary:</p>
<pre class="lang-py prettyprint-override"><code>df1['Drug IDs_name'] = df1['Drug IDs'].map(drugbank_mapping)
df1
# Gene Name Drug IDs Drug IDs_name
#0 GLS2 DB00130 L-Glutamine
#1 GLS2 DB11118 NaN
#2 F13A1 DB00130 L-Glutamine
#3 NOS2 DB00997 Doxorubicin
#4 NOS2 DB09237 NaN
#...
drugbank_mapping['DB11118']
#'Ammonia'
drugbank_mapping['DB09237']
#'Levamlodipine'
</code></pre>
<p>Moreover, I tried <code>pd.merge</code>, but it returned the same result removing the <code>NaN</code> values.</p>
<p>Can anyone give me an idea for this problem?
Thank you for reading.</p>
| <python><pandas><dataframe><dictionary> | 2023-09-11 07:47:07 | 0 | 466 | Ssong |
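When `map` returns NaN even though the key looks present, the usual culprit is that the keys do not match exactly (hidden whitespace or a dtype mismatch). A hedged sketch reproducing and fixing that with `str.strip` (the trailing space below is deliberate and illustrative of what a real data file might contain):

```python
import pandas as pd

df1 = pd.DataFrame({"Drug IDs": ["DB00130", "DB11118 "]})  # note trailing space
mapping = {"DB00130": "L-Glutamine", "DB11118": "Ammonia"}

# Raw map: the padded key silently misses the dictionary and yields NaN.
raw = df1["Drug IDs"].map(mapping)

# Normalizing both sides before mapping fixes the lookup.
clean_mapping = {k.strip(): v for k, v in mapping.items()}
df1["Drug IDs_name"] = df1["Drug IDs"].str.strip().map(clean_mapping)
```

Printing `repr()` of a failing key from both the column and the dictionary is a quick way to spot this kind of invisible mismatch.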
77,079,762 | 14,830,534 | Can you reconstruct a neural network from the weights file (.h5) only? | <p>If you want to keep your Neural Network architecture secret and still want to use it in an application, would somebody to be able to reverse engineer the Neural Network from the weights file (.h5) only?</p>
<p>The weights are an output of <code>model.save_weights()</code> and are loaded back into the model with <code>model.load_weights()</code>. All other application code is properly encrypted in this case.</p>
| <python><tensorflow><deep-learning> | 2023-09-11 07:45:18 | 1 | 1,106 | Jan Willem |
77,079,553 | 1,028,133 | Filtering PyTables PerformanceWarning with warnings.filterwarnings() fails | <p>There are a number of answers on this website detailing how one can ignore specific warnings in python (either <a href="https://stackoverflow.com/questions/63979540/python-how-to-filter-specific-warning">by category</a> or by <a href="https://stackoverflow.com/questions/9134795/how-to-get-rid-of-specific-warning-messages-in-python-while-keeping-all-other-wa">providing a regex to match a warning message</a>).</p>
<p>However, none of these seem to work when I try to suppress <code>PerformanceWarning</code>s coming from PyTables.</p>
<p>Here's an MWE:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import warnings
from tables import NaturalNameWarning, PerformanceWarning
data = {
'a' : 1,
'b' : 'two'
}
df = pd.DataFrame.from_dict(data, orient = 'index') # mixed types will trigger PerformanceWarning
dest = pd.HDFStore('warnings.h5', 'w')
#dest.put('data', df) # mixed type will produce a PerformanceWarning
#dest.put('data 1', df) # space in 'data 1' will trigger NaturalNameWarning in addition to the PerformanceWarning
warnings.filterwarnings('ignore', category = NaturalNameWarning) # NaturalNameWarnings ignored
warnings.filterwarnings('ignore', category = PerformanceWarning) # no effect
warnings.filterwarnings('ignore', message='.*PyTables will pickle') # no effect
#warnings.filterwarnings('ignore') # kills all warnings, not what I want
dest.put('data 2', df) # PerformanceWarning
dest.close()
</code></pre>
<p>Using a context manager doesn't help either:</p>
<pre><code>with warnings.catch_warnings():
warnings.filterwarnings("ignore", category=PerformanceWarning) # no effect
warnings.filterwarnings('ignore', message='.*PyTables') # no effect
dest.put('data 6', df)
</code></pre>
<p>Nor does using <code>warnings.simplefilter()</code> instead of <code>warnings.filterwarnings()</code>.</p>
<p>Perhaps relevant, here is the PerformanceWarning:</p>
<pre><code>test.py:21: PerformanceWarning:
your performance may suffer as PyTables will pickle object types that it cannot
map directly to c-types [inferred_type->mixed-integer,key->block0_values] [items->Int64Index([0], dtype='int64')]
dest.put('data 2', df) # PerformanceWarning
</code></pre>
<p>Contrast this with the <code>NaturalNameWarning</code>, which doesn't come from the offending line in <code>test.py</code>, but from <code>tables/path.py</code>:</p>
<pre><code>/home/user/.local/lib/python3.8/site-packages/tables/path.py:137: NaturalNameWarning: object name is not a valid Python identifier: 'data 2'; it does not match the pattern ``^[a-zA-Z_][a-zA-Z0-9_]*$``; you will not be able to use natural naming to access this object; using ``getattr()`` will still work, though
check_attribute_name(name)
</code></pre>
<p>This is with tables 3.7.0/python 3.8.10. Any ideas?</p>
| <python><pandas><pytables> | 2023-09-11 07:11:35 | 1 | 744 | the.real.gruycho |
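One likely cause, worth verifying against your pandas version: the warning emitted by pandas' HDF code is `pandas.errors.PerformanceWarning`, a different class from `tables.PerformanceWarning`, so a filter on the PyTables class never matches. A minimal check that a filter on pandas' class does suppress such a warning:

```python
import warnings
from pandas.errors import PerformanceWarning  # pandas' class, not tables'

with warnings.catch_warnings():
    warnings.simplefilter("error")  # promote any surviving warning to an error
    warnings.filterwarnings("ignore", category=PerformanceWarning)
    # Hypothetical stand-in for the warning HDFStore.put would emit,
    # assuming it uses pandas' PerformanceWarning class.
    warnings.warn("PyTables will pickle object types ...", PerformanceWarning)
    suppressed = True  # reached only if the filter matched
```

If that is the cause, swapping the import in the original script should make both the category filter and the message-regex filter behave as expected.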
77,079,524 | 5,359,846 | How to expect a count bigger or smaller than a value? | <p>Using <code>Playwright</code> and <code>Python</code>, how can I assert that a locator's count is bigger or smaller than a given value?</p>
<p>For example, this code expects a count of exactly 2.</p>
<p>How do I achieve count >= 2 <strong>(only bigger)</strong>?</p>
<pre><code>expect(self.page.locator('MyLocator')).to_have_count(2, timeout=20 * 1000)
</code></pre>
| <python><playwright><playwright-python> | 2023-09-11 07:06:08 | 4 | 1,838 | Tal Angel |
77,079,458 | 4,451,521 | Copy a file to a different place with a new name with Python | <p>I have a file in some location, and I want to copy that file to another location but with a different name. This operation will be repeated many times, and in every case the basename of the file to copy is the same (only the location varies), which is why it needs renaming.</p>
<p>Anyway, a way to do this is explained in <a href="https://pythonguides.com/python-copy-file/" rel="nofollow noreferrer">this guide</a>.<br />
The code is</p>
<pre><code>import shutil
import os
# Specify the source file, the destination for the copy, and the new name
source_file = 'path/to/your/source/file.txt'
destination_directory = 'path/to/your/destination/'
new_file_name = 'new_file.txt'
# Copy the file
shutil.copy2(source_file, destination_directory)
# Get the base name of the source file
base_name = os.path.basename(source_file)
# Construct the paths to the copied file and the new file name
copied_file = os.path.join(destination_directory, base_name)
new_file = os.path.join(destination_directory, new_file_name)
# Rename the file
os.rename(copied_file, new_file)
</code></pre>
<p>So far so good.
My question: by <em>only</em> using <code>os.rename</code> we would be able to <em>"move"</em> the file. Is copying really this complicated in comparison? Isn't there some method to just copy and rename with a single command?</p>
<p>NOTE: Before someone flags this question as a duplicate, notice that <em>I am giving the solution to the basic problem</em> right above.
What I am asking (and therefore not a duplicate) is whether, in terms of writing effective Python, there is a simpler way of doing this.</p>
| <python><file-copying><file-move> | 2023-09-11 06:54:08 | 0 | 10,576 | KansaiRobot |
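For the copy itself, `shutil.copy2` (like `shutil.copy`) already accepts a full destination *file* path, so copy-plus-rename is a single call and the separate `os.rename` step is unnecessary; a self-contained sketch using temporary directories:

```python
import os
import shutil
import tempfile

src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()
source_file = os.path.join(src_dir, "file.txt")
with open(source_file, "w") as f:
    f.write("some content")

# copy2 takes a destination file path, so the copy is renamed in one call.
new_file = os.path.join(dst_dir, "new_file.txt")
shutil.copy2(source_file, new_file)
```

The docs state that when the destination is a directory the source basename is kept, which is why passing the directory alone forced the rename dance in the first place.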
77,079,088 | 7,223,184 | How to get a rounded-corners color clip using MoviePy? | <p>I'm working on a small video project using Python and MoviePy, and I want to create a color clip with rounded corners to use as a background. However, I'm unsure how to achieve this effect; so far I only have a rectangular background. Can anyone give me a small example of how to add rounded corners to a color clip in MoviePy?</p>
<pre><code>from moviepy.editor import VideoFileClip, TextClip, CompositeVideoClip, ColorClip

video_clip = VideoFileClip("input.mp4")  # base clip referenced below (path is illustrative)
# Create a color clip as the background
color_clip = ColorClip(size=(video_clip.size[0], video_clip.size[1]),
color=(0, 0, 0), duration=video_clip.duration)
# Set the opacity of the color clip
color_clip = color_clip.set_opacity(0.4)
# Set the position of the color clip and text clip
color_clip = color_clip.set_position('center')
</code></pre>
| <python><moviepy> | 2023-09-11 05:23:19 | 1 | 680 | Aness |
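MoviePy has no built-in rounded-corner option as far as I know; a common approach is to build an alpha mask as a NumPy array and attach it to the clip. A sketch of just the mask construction (pure NumPy, so it is easy to verify on its own):

```python
import numpy as np

def rounded_mask(w, h, radius):
    """Alpha mask (1.0 inside, 0.0 outside) for a w x h rounded rectangle."""
    ys, xs = np.mgrid[0:h, 0:w]
    # Clamp each pixel onto the inner (shrunken) rectangle, then measure
    # its distance to that rectangle; pixels farther than `radius` from it
    # lie outside the rounded shape.
    cx = np.clip(xs, radius, w - 1 - radius)
    cy = np.clip(ys, radius, h - 1 - radius)
    return (np.hypot(xs - cx, ys - cy) <= radius).astype(float)

mask = rounded_mask(200, 100, 20)
```

In MoviePy 1.x this array can then reportedly be wrapped as `ImageClip(mask, ismask=True)` and attached with `color_clip.set_mask(...)`; those names are assumptions worth double-checking against your MoviePy version.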
77,079,060 | 972,647 | Running many CLI processes from Python, avoid CPU overload | <p>I'm creating a Python script to launch many processes via the CLI: one process per file found in a certain directory. I then loop over the files and launch the process that works on each file.</p>
<pre><code>for path in pathlist:
# Prepare cli call
p = subprocess.Popen(cmd...)
processes.append(p)
</code></pre>
<p>I'm also adding all processes to a list to wait for them at the end of the script. Since there can be 100s of files I don't want to overload the CPU and make things slower due to too many context switches. Plus at some point memory will also become a limiting factor.</p>
<p>How can I control in above "logic" to not "flood" the cpu/OS and slow things down?</p>
| <python><command-line-interface> | 2023-09-11 05:16:54 | 2 | 7,652 | beginner_ |
77,079,039 | 9,983,652 | how to upgrade to pandas 2.0 in anaconda? | <p>I am using Anaconda. Currently I have pandas 1.3 installed. How do I upgrade pandas to 2.0? I checked the website below, and pandas 2.0.3 is available in Anaconda with the following command:</p>
<pre><code>conda install -c anaconda pandas
</code></pre>
<p><a href="https://anaconda.org/anaconda/pandas" rel="nofollow noreferrer">https://anaconda.org/anaconda/pandas</a></p>
<p>However, if I use the command below, it shows that only pandas 1.4 will be installed.</p>
<pre><code>conda install -c anaconda pandas
</code></pre>
<p>So I tried the command below to install pandas 2.0.3 directly, but it has been stuck on the environment-solving step for a long time, as shown below. I'm not sure which is the better way to upgrade to pandas 2.0. Thanks</p>
<pre class="lang-none prettyprint-override"><code>conda install pandas=2.0.3
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: |
</code></pre>
| <python><pandas><anaconda> | 2023-09-11 05:09:27 | 1 | 4,338 | roudan |
77,078,795 | 271,351 | Add Image to Excel file after pandas.to_excel() | <p>I am using <code>xlsxwriter</code> as the <code>engine</code> that I pass to <code>pandas.ExcelWriter</code>. I have a DataFrame and I call <code>to_excel()</code> on it passing the writer that I've previously acquired. I then try grabbing the worksheet using <code>get_worksheet_by_name()</code>, which seems to work well, and then calling <code>worksheet.insert_image()</code> to insert an image.</p>
<p>This isn't working. I suspect that <code>to_excel()</code> causes the worksheet to be written out and thus unavailable for mutation later on. Is this correct? If so, would I have to read every cell in the DataFrame and write it to Excel myself using <code>xlsxwriter</code> so that I can do the image handling too? Or is there a way to tell <code>to_excel()</code> not to finalize the sheet?</p>
| <python><pandas><export-to-excel><xlsxwriter> | 2023-09-11 03:42:12 | 1 | 4,573 | cjbarth |
77,078,680 | 11,262,633 | Average of datetime.datetime.now() is overstated | <p>If I make a list of time deltas, the average is larger than if I average the microsecond values from these deltas. Why is that?</p>
<pre><code>import datetime
import time
import numpy as np
l = []
l_m = []
for _ in range(5):
start = datetime.datetime.now()
time.sleep(1)
l.append((datetime.datetime.now() - start))
l_m.append((datetime.datetime.now() - start).microseconds / 1000000)
print(l)
print(np.mean(l), np.mean(l_m))
</code></pre>
<p>gives</p>
<pre><code>[datetime.timedelta(seconds=1, microseconds=846), datetime.timedelta(seconds=1, microseconds=1017), datetime.timedelta(seconds=1, microseconds=1010), datetime.timedelta(seconds=1, microseconds=1013), datetime.timedelta(seconds=1, microseconds=887)]
0:00:01.000955 0.9639999999999999
</code></pre>
<p>This is Python 3.8.10 on Linux.</p>
| <python> | 2023-09-11 02:53:26 | 1 | 1,260 | mherzog |
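The discrepancy comes from `timedelta.microseconds`, which is only the sub-second *field* of the delta (0..999999), not a duration: the whole-seconds part is silently dropped, so deltas on either side of a one-second boundary contribute wildly different values to the mean. `total_seconds()` is the number to average:

```python
import datetime

d = datetime.timedelta(seconds=1, microseconds=846)

# .microseconds is just the sub-second field; the 1 whole second is dropped,
# so a 1.000846 s delta contributes only 0.000846 here, while a delta of
# 0.999 s would contribute 0.999 -- hence the skewed average.
frac = d.microseconds / 1_000_000
full = d.total_seconds()
```

Replacing `.microseconds / 1000000` with `.total_seconds()` in the loop makes both averages agree.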
77,078,298 | 10,138,470 | What distance measure can I use that factors in order? | <p>I have a few lists that have the same IDs, which are strings. They are as follows:</p>
<pre><code>list1 = ["1", "2", "3", "4", "5"]
list2 = ["1", "2", "3", "5", "4"]
list3 = ["1", "5", "4", "3", "2"]
list4 = ["4", "2", "5", "3", "1"]
</code></pre>
<p>What measure can I use to determine the lists that are closest to each other here in terms of order? Ideally <code>list1</code> and <code>list2</code> should be the closest here.</p>
<p>Does the spearman correlation make sense here?</p>
| <python><distance><levenshtein-distance> | 2023-09-11 00:04:09 | 2 | 445 | Hummer |
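Spearman correlation (or Kendall's tau) on the rank vectors is a reasonable choice here. For permutations of the same IDs, an easy dependency-free metric is Spearman's footrule: the total displacement of each ID between the two orderings:

```python
def footrule(a, b):
    """Spearman footrule: total rank displacement of each ID between orderings."""
    pos = {item: i for i, item in enumerate(b)}
    return sum(abs(i - pos[item]) for i, item in enumerate(a))

list1 = ["1", "2", "3", "4", "5"]
list2 = ["1", "2", "3", "5", "4"]
list3 = ["1", "5", "4", "3", "2"]

d12 = footrule(list1, list2)   # small: only "4" and "5" are swapped
d13 = footrule(list1, list3)   # larger: most IDs are displaced
```

With SciPy available, `scipy.stats.spearmanr` or `scipy.stats.kendalltau` on the position vectors gives a normalized correlation instead of a raw distance, which is easier to compare across lists of different lengths.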
77,078,235 | 2,458,922 | Pandas Find distance amoung the group | <p>Given data set contains,</p>
<pre><code>Brand | Sector | Year | Price | Sales
B1    | S1     | 2023 | 45900 | 400
B1    | S1     | 2022 | 45000 | 500
B2    | S1     | 2022 | 45400 | 520
</code></pre>
<p>A group is defined by the same Brand and Sector, and the distance within a group is the change in Price divided by the change in Sales. Let's say there are 4 members (A, B, C, D) for Brand B1 and Sector S1, each member having its own Sales count and Price. We can form 6 pairs,
AB, AC, AD, BC, BD, CD, and for each pair find the change in Price divided by the change in Sales:
(A.Price - B.Price) / (A.Sale - B.Sale)</p>
<p>I tried things like df.diff(), df.rolling(), etc., but those can only compare one element with the next.</p>
| <python><pandas> | 2023-09-10 23:29:05 | 1 | 1,731 | user2458922 |
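All unordered pairs within a group are exactly what `itertools.combinations` produces, so one hedged sketch (column names taken from the question, data from its sample) is to loop over `groupby` groups rather than rely on `diff`-style adjacent comparisons:

```python
from itertools import combinations
import pandas as pd

df = pd.DataFrame({
    "Brand":  ["B1", "B1", "B2"],
    "Sector": ["S1", "S1", "S1"],
    "Year":   [2023, 2022, 2022],
    "Price":  [45900, 45000, 45400],
    "Sales":  [400, 500, 520],
})

rows = []
for (brand, sector), grp in df.groupby(["Brand", "Sector"]):
    # Every unordered pair of members within the Brand+Sector group.
    for a, b in combinations(grp.itertuples(index=False), 2):
        if a.Sales != b.Sales:                       # avoid division by zero
            rows.append({
                "Brand": brand,
                "Sector": sector,
                "slope": (a.Price - b.Price) / (a.Sales - b.Sales),
            })

result = pd.DataFrame(rows)
```

For a group of 4 members this yields the 6 pairs AB, AC, AD, BC, BD, CD automatically; a `merge` of the grouped frame with itself (filtered to distinct rows) is a vectorized alternative if the groups are large.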
77,078,098 | 7,869,636 | How to specify para mutual exclusive groups (sets) of arguments in argparse? Or make optional positional parameter with any number of args | <p>I want the following usage:</p>
<p>Usage 1: <code>my_launch.py --list-installed</code><br />
Usage 2: <code>my_launch.py [-a | -b | -c <arg_c>] <appname [args ...]></code></p>
<p>Is it possible to combine these with argparse?</p>
<p>If the <code>--list-installed</code> is used, then there should not be any other arguments.<br />
Otherwise, appname is required, followed by zero or more arguments.</p>
<p>I have read about subparser, but they are not appropriate for my case, because they require the command to start from defined word (a subcommand).</p>
<p>I thought I could just collect everything in namespace, and then do my own logic. But the problem is that I cannot make an appname with its parameters captured. In usage 2 it is mandatory, in usage 1 it is prohibited.</p>
<p>Is it possible to make appname to collect all args behind it (I do it like this: <code>parser.add_argument("appname", nargs=argparse.REMAINDER)</code>), and at the same time make the whole appname optional?</p>
<p>I also read about mutually exclusive groups, but the term is ambiguous. Actually, they are not "groups", but a set from which only one can be used. But I wanted two actually groups, one group for usage 1, another group for usage 2. And make them mutually exclusive. That is why I used "para" in question.</p>
| <python><argparse> | 2023-09-10 22:28:39 | 0 | 839 | Ashark |
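One workaround is a two-stage parse: peek for `--list-installed` before building the main parser, and let the main parser own the mandatory `appname` plus a `REMAINDER` positional that swallows everything after it. A sketch (flag names follow the question; the error wording is illustrative):

```python
import argparse

def parse(argv):
    # Usage 1: a lone --list-installed, checked before the main parser runs.
    if "--list-installed" in argv:
        if len(argv) > 1:
            raise SystemExit("--list-installed takes no other arguments")
        return argparse.Namespace(list_installed=True, appname=None, args=[])

    # Usage 2: [-a | -b | -c arg_c] appname [args ...]
    parser = argparse.ArgumentParser()
    group = parser.add_mutually_exclusive_group()
    group.add_argument("-a", action="store_true")
    group.add_argument("-b", action="store_true")
    group.add_argument("-c", metavar="arg_c")
    parser.add_argument("appname")
    parser.add_argument("args", nargs=argparse.REMAINDER)
    ns = parser.parse_args(argv)
    ns.list_installed = False
    return ns

ns = parse(["-a", "myapp", "--some-app-flag", "x"])
```

Because `REMAINDER` starts after the first positional, even option-looking strings such as `--some-app-flag` end up in `ns.args` instead of being parsed by the launcher itself.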
77,078,086 | 2,068,311 | Cannot upload content to WordPress folder /wp-content/uploads/woocommerce_uploads/ | <p>I'm trying to write a Python script that will upload images and PDFs to WordPress. I would like the images to be uploaded to the folder '/wp-content/uploads/' and the PDF files to the folder '/wp-content/uploads/woocommerce_uploads/'.</p>
<pre><code>import requests
import os
class UploadProductMedia:
def __init__(self, media_endpoint, wordpress_username, wordpress_password):
self.media_endpoint = media_endpoint
self.wordpress_username = wordpress_username
self.wordpress_password = wordpress_password
pass
def upload_media(self, file_path):
print(f"Uploading media {file_path}")
with open(file_path, 'rb') as file:
# Prepare the data to send in the request
data = {
'file': (os.path.basename(file_path), file),
}
# Determine the upload directory based on the file type
upload_directory = '/wp-content/uploads/'
if file_path.lower().endswith(('.pdf')):
upload_directory = '/wp-content/uploads/woocommerce_uploads/'
print(f"Uploading to directory {upload_directory}")
# Send a POST request to the media endpoint with authentication
response = requests.post(f"{self.media_endpoint}?upload_to={upload_directory}", auth=(self.wordpress_username, self.wordpress_password), files=data)
if response.status_code == 201:
response_data = response.json()
uploaded_media_uri = response_data.get('guid', {}).get('rendered', '')
print(f'Successfully uploaded: {file_path}')
return uploaded_media_uri
else:
print(f'Failed to upload: {file_path}')
print(f'Response: {response.status_code} - {response.text}')
return None
if __name__ == "__main__":
# Example usage:
print("=== Starting ===")
for file in ["TestData\pear1.png", "TestData\pear1.pdf"]:
UploadProductMedia('https://mysite.co.uk/wp-json/wp/v2/media', 'myusername', 'mypassword').upload_media(file)
print("=== Finished ===")
</code></pre>
<p>The console logs indicate that the program worked as expected.</p>
<pre><code>=== Starting ===
Uploading media TestData\pear1.png
Uploading to directory /wp-content/uploads/
Successfully uploaded: TestData\pear1.png
Uploading media TestData\pear1.pdf
Uploading to directory /wp-content/uploads/woocommerce_uploads/
Successfully uploaded: TestData\pear1.pdf
=== Finished ===
</code></pre>
<p>However, the pdf file was uploaded to the incorrect directory. The pdf was uploaded to
<a href="https://mysite.co.uk/wp-content/uploads/2023/09/pear1.pdf" rel="nofollow noreferrer">https://mysite.co.uk/wp-content/uploads/2023/09/pear1.pdf</a></p>
<p>Please, can you help explain why the pdf files are not being uploaded to /wp-content/uploads/woocommerce_uploads/</p>
| <python><wordpress><rest> | 2023-09-10 22:24:35 | 1 | 654 | Andrew Seaford |
77,078,023 | 604,063 | Progress bar when copying one large file in Python? | <p>I want to use <a href="https://tqdm.github.io/" rel="nofollow noreferrer">tqdm</a> to display a progress bar when copying a single large file from one filepath to another.</p>
<p>This is not the same as <a href="https://stackoverflow.com/questions/62342365/display-progress-bar-in-console-while-copying-files-using-tqdm-in-python">showing a progress bar when copying multiple small files</a>.</p>
| <python><file-copying><tqdm> | 2023-09-10 21:56:13 | 1 | 8,125 | David Foster |
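The usual pattern is to copy in chunks and advance the bar by each chunk's size; a stdlib-only sketch with a pluggable callback (so it runs even without tqdm installed):

```python
import os

def copy_with_progress(src, dst, on_chunk=lambda n: None, chunk_size=1024 * 1024):
    """Copy src to dst in chunks, calling on_chunk(bytes_written) per chunk."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while chunk := fin.read(chunk_size):
            fout.write(chunk)
            on_chunk(len(chunk))
```

With tqdm the callback is simply the bar's `update` method, roughly: `with tqdm(total=os.path.getsize(src), unit="B", unit_scale=True) as bar: copy_with_progress(src, dst, bar.update)`. A follow-up `shutil.copystat(src, dst)` would mimic `shutil.copy2`'s metadata handling.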
77,077,807 | 11,561,158 | Django query: transform two for loops into a queryset | <p>I have 3 simple models:</p>
<pre><code>from django_better_admin_arrayfield.models.fields import ArrayField
class BuildItem(models.Model):
base = models.ForeignKey(Item, on_delete=models.CASCADE)
level = models.IntegerField(default=1)
mandatory_skills = ArrayField(models.IntegerField(null=True, blank=True), default=list)
class Item(BaseModel):
name = models.CharField(max_length=100)
class Skill(BaseModel):
item = models.ForeignKey(Item, on_delete=models.CASCADE, blank=True, null=True, related_name="skills")
effect = models.IntegerField(null=True, blank=True)
value = models.IntegerField(null=True, blank=True)
</code></pre>
<p>I would like to retrieve the sum of the value fields of the skills multiplied by the level field of builditem, where the pk fields of the skills are in the mandatory_skills list and the effect field of the skill is equal to 1.</p>
<p>I wrote some code that works with for loop:</p>
<pre><code>array_life = []
for builditem in BuildItem.objects.all():
for skill in builditem.base.skills.all():
if skill.pk in builditem.mandatory_skills:
if skill.effect == 1 and skill.value < 2:
array_life.append(skill.value * builditem.level)
sum_life = sum(array_life)
</code></pre>
<p>I would like to transform this code into a queryset.
<em>(I tried many things but without success)</em></p>
<p>Can anyone help me ?</p>
| <python><django> | 2023-09-10 20:42:17 | 1 | 863 | Hippolyte BRINGER |
77,077,694 | 12,236,313 | Using Celery tasks in Django to update aggregated data | <p>In my Django project, I have a use case where I have some complex Python functions which are <strong>computationally expensive</strong>: they involve numerous calculations based on a recurrence relation. They are already well optimized but still take a few seconds to run in some cases.</p>
<p>In this context, I'm thinking about <strong>using Celery tasks</strong> to carry out the calculations and to update aggregated data in dedicated tables of my PostgreSQL database. However, I have a feeling it's not going to be easy to <strong>ensure the integrity of the data</strong> and to <strong>avoid unnecessary calculations</strong>. Race conditions and unreliable error handling / retry mechanisms are some of the pitfalls I have in mind.</p>
<p>What are the best practices in this case? What do I need to pay particular attention to?</p>
<p>Are you aware of open source Django projects doing something similar?</p>
<p>I'm using Django 4.2, PostgreSQL 15, Redis 7.2 (as a cache), Celery 5.3, RabbitMQ 3.12 (as a broker).</p>
| <python><django><celery> | 2023-09-10 19:57:33 | 0 | 1,030 | scūriolus |
77,077,628 | 10,671,956 | python3 TypeError: '<' not supported when trying to implement A* using Python heapq | <p>I am trying to implement the A* algorithm with a priority queue by leveraging the Python library <em>heapq</em>. Every state will be stored as a <strong>Node</strong> instance:</p>
<pre><code>class Node:
def __init__(self, state, parent, action, depth):
self.path_cost = 0
self.state = state
self.parent = parent
self.action = action
self.depth = depth
</code></pre>
<p>The code is too long to paste here, but the error occurs in the following block.</p>
<pre><code> def astar_insert(self, node):
hq.heappush(self.pri_que, (node.path_cost, node))
self.visited_pri.add(str(node.state))
def astar_pop(self):
node = hq.heappop(self.pri_que)[1] # <-------------------- this line
self.visited_pri.discard(str(node.state))
return node
</code></pre>
<p>The full error is:</p>
<pre><code>TypeError: '<' not supported between instances of 'Node' and 'Node'
</code></pre>
<p>I am very confused. When I run the code and print the solution, it seems to work well for a while before failing.</p>
| <python><python-3.x><priority-queue><a-star> | 2023-09-10 19:35:21 | 1 | 1,505 | Rieder |
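The failure is intermittent because heap entries are tuples compared element-wise: as long as the `path_cost` values differ, the `Node`s are never compared, but as soon as two costs tie, Python falls through to comparing `Node < Node` and raises `TypeError`. The standard fix is a strictly increasing counter as a tie-breaker (defining `__lt__` on `Node` also works); a minimal sketch with a simplified `Node`:

```python
import heapq
import itertools

class Node:
    def __init__(self, state, path_cost=0):
        self.state = state
        self.path_cost = path_cost

pq = []
counter = itertools.count()   # strictly increasing, so ties never reach the Node

def astar_insert(node):
    heapq.heappush(pq, (node.path_cost, next(counter), node))

def astar_pop():
    return heapq.heappop(pq)[2]

astar_insert(Node("A", 5))
astar_insert(Node("B", 5))    # same cost: would raise TypeError without counter
astar_insert(Node("C", 1))
```

As a bonus, the counter makes equal-cost entries pop in FIFO insertion order, which keeps the search deterministic.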
77,077,603 | 3,476,463 | run llama-2-70B-chat model on single gpu | <p>I'm running PyTorch on an Ubuntu Server 18.04 LTS. I have an NVIDIA GPU with 8 GB of RAM. I'd like to experiment with the new llama-2-70B-chat model. I'm trying to use peft and bitsandbytes to reduce the hardware requirements, as described in the link below:</p>
<p><a href="https://www.youtube.com/watch?v=6iHVJyX2e50" rel="nofollow noreferrer">https://www.youtube.com/watch?v=6iHVJyX2e50</a></p>
<p>is it possible to work with the llama-2-70B-chat model on a single gpu with 8GB of ram? I don't care if it's quick, I just want experiment and see what kind of quality responses I can get out of it.</p>
| <python><pytorch><gpu><large-language-model><llama> | 2023-09-10 19:27:06 | 3 | 4,615 | user3476463 |
77,077,056 | 12,944,030 | optimize insert operation in MySQL (InnoDB) | <p>I have this table:</p>
<pre><code>create table tab3(
id int not null auto_increment,
phrase text,
link_1 int,
link_2 int,
primary key (id),
foreign key (link_1) references tab1 (id),
foreign key (link_2) references tab2 (id));
</code></pre>
<p>I am inserting around 400k rows into this table with Python. This is the insert statement:</p>
<pre><code>INSERT INTO tab3(phrase, link_1, link_2)
VALUES(
%s,
(select id from tab1 where tab1.col1 = %s),
(select id from tab2 where tab2.col2 = %s));
</code></pre>
<p>I have an index on both tab1.col1 and tab2.col2, but the insertion is taking a long time: around 5 minutes per 1,000 rows.</p>
<p>I've tried many different techniques <a href="https://dev.mysql.com/doc/refman/8.0/en/insert-optimization.html#:%7E:text=You%20can%20use%20the%20following,separate%20single%2Drow%20INSERT%20statements." rel="nofollow noreferrer">from the official docs of MySQL</a> such as:</p>
<ul>
<li>using cursor.execute(stmt,param)</li>
<li>using cursor.executemany(stmt, param<strong>s</strong>)</li>
<li>multiple processes ( billiard <a href="https://pypi.org/project/billiard/" rel="nofollow noreferrer">https://pypi.org/project/billiard/</a> )</li>
<li>blocking the commit until all chunk of data is inserted and then commit changes</li>
<li>encapsulating the insert stmt inside one transaction ( with <strong>START TRANSACTION</strong> )</li>
</ul>
<p>But none of the above gave a meaningful improvement.</p>
| <python><mysql> | 2023-09-10 16:43:03 | 1 | 349 | moe_ |
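One pattern worth trying (a sketch under the assumption that the distinct `tab1.col1` and `tab2.col2` values fit in memory) is to fetch the two lookup tables once into Python dicts and then issue a single `executemany()` batch of plain values, eliminating the two correlated subqueries per row. The stdlib `sqlite3` module stands in for MySQL here; with `mysql.connector` the pattern is the same, just with `%s` placeholders:

```python
import sqlite3

# sqlite3 stands in for MySQL so the sketch runs anywhere; table names and
# sample data are made up to mirror the question's schema.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tab1 (id INTEGER PRIMARY KEY, col1 TEXT)")
cur.execute("CREATE TABLE tab2 (id INTEGER PRIMARY KEY, col2 TEXT)")
cur.execute("CREATE TABLE tab3 (id INTEGER PRIMARY KEY, phrase TEXT, link_1 INT, link_2 INT)")
cur.executemany("INSERT INTO tab1 (col1) VALUES (?)", [("a",), ("b",)])
cur.executemany("INSERT INTO tab2 (col2) VALUES (?)", [("x",), ("y",)])

# One pass each to build the lookups, instead of 400k correlated subqueries.
id1 = dict(cur.execute("SELECT col1, id FROM tab1"))
id2 = dict(cur.execute("SELECT col2, id FROM tab2"))

data = [("hello", "a", "x"), ("world", "b", "y")]  # (phrase, col1 key, col2 key)
rows = [(phrase, id1[c1], id2[c2]) for phrase, c1, c2 in data]
cur.executemany("INSERT INTO tab3 (phrase, link_1, link_2) VALUES (?, ?, ?)", rows)
conn.commit()  # single commit at the end of the batch
```

Combined with a single commit per batch, this moves the per-row lookup cost from the server to two in-memory dict reads.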
77,077,029 | 15,303,234 | os.scandir throws mysterious NotADirectory error | <p>While I was writing a function to loop through a folder, I ran into an error.</p>
<p>code here:</p>
<pre><code>import os
from pathlib import Path
path = Path(r'D:\full')
for i in os.scandir(path):#folders
for j in os.scandir(i.path):#subdict
for k in os.scandir(j.path):#files
print(k.path)
</code></pre>
<p>it seems to stop whilst a specific folder, a specific file
one that seems to be called 12.288.csv
the full error is here:</p>
<pre><code>Traceback (most recent call last):
File "D:\project\MOTOR.py", line 7, in <module>
for k in os.scandir(j.path):#files
NotADirectoryError: [WinError 267] The directory name is invalid: 'D:\\full\\normal\\12.288.csv'
</code></pre>
<p>Interestingly though, the path does exist. What is wrong?</p>
<p>Alright, I already tried:</p>
<ul>
<li>double slashes</li>
<li>a raw string</li>
<li>checking if the directory exists</li>
<li>forward slashes and backslashes</li>
</ul>
| <python><filesystems><file-not-found> | 2023-09-10 16:38:06 | 0 | 301 | TheHappyBee |
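For context, `os.scandir()` yields plain files as well as directories, so a stray file sitting at the sub-directory level (like `12.288.csv` here) makes the inner `scandir()` call fail. A sketch of the guard, demonstrated on a hypothetical throwaway tree:

```python
import os
import tempfile

# Build a throwaway tree mimicking the question: a stray CSV file sits at the
# level where only sub-directories were expected.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "normal", "sub"))
open(os.path.join(root, "normal", "12.288.csv"), "w").close()  # stray file
open(os.path.join(root, "normal", "sub", "data.csv"), "w").close()

found = []
for i in os.scandir(root):          # folders
    if not i.is_dir():
        continue                    # skip files at the top level
    for j in os.scandir(i.path):    # sub-directories
        if not j.is_dir():
            continue                # skips stray files like 12.288.csv
        for k in os.scandir(j.path):  # files
            found.append(k.name)
```

The `entry.is_dir()` check at each level is the key: without it, the inner `os.scandir(j.path)` is eventually handed a file path and raises `NotADirectoryError`.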
77,076,981 | 7,809,915 | Pyinstaller throws AssertionError | <p>I'm trying to turn a Python app (with a PyQt dialog) into an exe using pyinstaller. Unfortunately this doesn't work and returns the following output to the console:</p>
<p>There's only one (!) Python script, app.py. It contains some functions and a class for the gui. Nothing special.</p>
<pre><code>C:\Users\m14\Desktop\PythonApp>pyinstaller --onefile app.py
1595 INFO: PyInstaller: 5.13.0
1597 INFO: Python: 3.7.0
1597 INFO: Platform: Windows-10-10.0.19041-SP0
1598 INFO: wrote C:\Users\m14\Desktop\PythonApp\app.spec
1606 INFO: Extending PYTHONPATH with paths
['C:\\Users\\m14\\Desktop\\PythonApp']
2777 INFO: checking Analysis
2779 INFO: Building Analysis because Analysis-00.toc is non existent
2779 INFO: Initializing module dependency graph...
2790 INFO: Caching module graph hooks...
2899 INFO: Analyzing base_library.zip ...
5226 INFO: Loading module hook 'hook-heapq.py' from 'C:\\Users\\m14\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\PyInstaller\\hooks'...
5403 INFO: Loading module hook 'hook-encodings.py' from 'C:\\Users\\m14\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\PyInstaller\\hooks'...
6438 INFO: Loading module hook 'hook-pickle.py' from 'C:\\Users\\m14\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\PyInstaller\\hooks'...
9095 INFO: Caching module dependency graph...
9287 INFO: running Analysis Analysis-00.toc
9304 INFO: Adding Microsoft.Windows.Common-Controls to dependent assemblies of final executable
required by C:\Users\m14\AppData\Local\Programs\Python\Python37\python.exe
9872 INFO: Analyzing C:\Users\m14\Desktop\PythonApp\app.py
11131 INFO: Processing pre-safe import module hook six.moves from 'C:\\Users\\m14\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\PyInstaller\\hooks\\pre_safe_import_module\\hook-six.moves.py'.
11583 INFO: Loading module hook 'hook-xml.etree.cElementTree.py' from 'C:\\Users\\m14\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\PyInstaller\\hooks'...
11584 INFO: Loading module hook 'hook-xml.py' from 'C:\\Users\\m14\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\PyInstaller\\hooks'...
12124 INFO: Loading module hook 'hook-xml.dom.domreg.py' from 'C:\\Users\\m14\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\PyInstaller\\hooks'...
12566 INFO: Loading module hook 'hook-lxml.py' from 'C:\\Users\\m14\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
13816 INFO: Loading module hook 'hook-docx.py' from 'C:\\Users\\m14\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
15648 INFO: Loading module hook 'hook-PyQt5.py' from 'C:\\Users\\m14\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\PyInstaller\\hooks'...
15937 INFO: Loading module hook 'hook-PyQt5.uic.py' from 'C:\\Users\\m14\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\PyInstaller\\hooks'...
16317 INFO: Processing pre-find module path hook PyQt5.uic.port_v2 from 'C:\\Users\\m14\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\PyInstaller\\hooks\\pre_find_module_path\\hook-PyQt5.uic.port_v2.py'.
16681 INFO: Processing module hooks...
16681 INFO: Loading module hook 'hook-lxml.etree.py' from 'C:\\Users\\m14\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
16938 INFO: Loading module hook 'hook-difflib.py' from 'C:\\Users\\m14\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\PyInstaller\\hooks'...
18560 INFO: Loading module hook 'hook-platform.py' from 'C:\\Users\\m14\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\PyInstaller\\hooks'...
19896 INFO: Loading module hook 'hook-lxml.isoschematron.py' from 'C:\\Users\\m14\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
19920 WARNING: Hidden import "sip" not found!
19920 INFO: Loading module hook 'hook-PyQt5.QtCore.py' from 'C:\\Users\\m14\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\PyInstaller\\hooks'...
20126 INFO: Loading module hook 'hook-PyQt5.QtGui.py' from 'C:\\Users\\m14\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\PyInstaller\\hooks'...
20312 INFO: Loading module hook 'hook-PyQt5.QtWidgets.py' from 'C:\\Users\\m14\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\PyInstaller\\hooks'...
20431 INFO: Loading module hook 'hook-lxml.objectify.py' from 'C:\\Users\\m14\\AppData\\Local\\Programs\\Python\\Python37\\lib\\site-packages\\_pyinstaller_hooks_contrib\\hooks\\stdhooks'...
Traceback (most recent call last):
File "C:\Users\m14\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\m14\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\m14\AppData\Local\Programs\Python\Python37\Scripts\pyinstaller.exe\__main__.py", line 7, in <module>
File "C:\Users\m14\AppData\Local\Programs\Python\Python37\lib\site-packages\PyInstaller\__main__.py", line 194, in _console_script_run
run()
File "C:\Users\m14\AppData\Local\Programs\Python\Python37\lib\site-packages\PyInstaller\__main__.py", line 180, in run
run_build(pyi_config, spec_file, **vars(args))
File "C:\Users\m14\AppData\Local\Programs\Python\Python37\lib\site-packages\PyInstaller\__main__.py", line 61, in run_build
PyInstaller.building.build_main.main(pyi_config, spec_file, **kwargs)
File "C:\Users\m14\AppData\Local\Programs\Python\Python37\lib\site-packages\PyInstaller\building\build_main.py", line 1019, in main
build(specfile, distpath, workpath, clean_build)
File "C:\Users\m14\AppData\Local\Programs\Python\Python37\lib\site-packages\PyInstaller\building\build_main.py", line 944, in build
exec(code, spec_namespace)
File "C:\Users\m14\Desktop\PythonApp\app.spec", line 20, in <module>
noarchive=False,
File "C:\Users\m14\AppData\Local\Programs\Python\Python37\lib\site-packages\PyInstaller\building\build_main.py", line 429, in __init__
self.__postinit__()
File "C:\Users\m14\AppData\Local\Programs\Python\Python37\lib\site-packages\PyInstaller\building\datastruct.py", line 184, in __postinit__
self.assemble()
File "C:\Users\m14\AppData\Local\Programs\Python\Python37\lib\site-packages\PyInstaller\building\build_main.py", line 599, in assemble
deps_proc = DependencyProcessor(self.graph, self.graph._additional_files_cache)
File "C:\Users\m14\AppData\Local\Programs\Python\Python37\lib\site-packages\PyInstaller\building\toc_conversion.py", line 51, in __init__
self._distributions.update(self._get_distribution_for_node(node))
File "C:\Users\m14\AppData\Local\Programs\Python\Python37\lib\site-packages\PyInstaller\building\toc_conversion.py", line 88, in _get_distribution_for_node
assert len(dists) == 1
AssertionError
C:\Users\m14\Desktop\PythonApp>
</code></pre>
<p>Any idea?</p>
| <python><python-3.x><pyinstaller> | 2023-09-10 16:26:36 | 1 | 490 | M14 |
77,076,663 | 11,416,654 | RNG Challenge Python | <p>I am trying to solve a CTF challenge in which the goal is to guess the generated number. Since the number is huge and you only have 10 attempts per round, I don't think binary search or any similar algorithm can solve it. I suspect it has something to do with recovering the state of the random generator and being able to generate the next number, but I have no idea where to start. Do you have any idea?
Here's the code of the challenge:</p>
<pre><code>#!/usr/bin/env python3
import signal
import os
import random
TIMEOUT = 300
assert("FLAG" in os.environ)
FLAG = os.environ["FLAG"]
assert(FLAG.startswith("CCIT{"))
assert(FLAG.endswith("}"))
def handle():
for i in range(625):
print(f"Round {i+1}")
guess_count = 10
to_guess = random.getrandbits(32)
while True:
print("What do you want to do?")
print("1. Guess my number")
print("2. Give up on this round")
print("0. Exit")
choice = int(input("> "))
if choice == 0:
exit()
elif choice == 1:
guess = int(input("> "))
if guess == to_guess:
print(FLAG)
exit()
elif guess < to_guess:
print("My number is higher!")
guess_count -= 1
else:
print("My number is lower!")
guess_count -= 1
elif choice == 2:
print(f"You lost! My number was {to_guess}")
break
if guess_count == 0:
print(f"You lost! My number was {to_guess}")
break
if __name__ == "__main__":
signal.alarm(TIMEOUT)
handle()
</code></pre>
| <python><random><ctf> | 2023-09-10 15:06:12 | 1 | 823 | Shark44 |
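The intuition about the seed is on the right track: `random` is a Mersenne Twister (MT19937), and 624 consecutive 32-bit outputs determine its entire internal state. A sketch of the standard state-recovery attack (an assumption about the intended solution, not confirmed by the challenge): give up on the first 624 rounds, which prints each `to_guess`, undo the output tempering, install the recovered state into a fresh `random.Random`, and guess round 625 exactly:

```python
import random

def untemper(y):
    """Invert MT19937's output tempering to recover one state word."""
    y ^= y >> 18                        # invert y ^= y >> 18
    y ^= (y << 15) & 0xEFC60000         # invert y ^= (y << 15) & 0xEFC60000
    x = y
    for _ in range(5):                  # invert y ^= (y << 7) & 0x9D2C5680
        x = y ^ ((x << 7) & 0x9D2C5680)
    y = x
    x = y
    for _ in range(3):                  # invert y ^= y >> 11
        x = y ^ (x >> 11)
    return x & 0xFFFFFFFF

# Stand-in for the 624 numbers the server reveals when you give up each round.
outputs = [random.getrandbits(32) for _ in range(624)]
state = [untemper(o) for o in outputs]

clone = random.Random()
clone.setstate((3, tuple(state + [624]), None))  # index 624: next call retwists
predicted = clone.getrandbits(32)
actual = random.getrandbits(32)                  # round 625's to_guess
```

Against the challenge, option 2 ("give up") on rounds 1–624 collects the outputs; the clone then predicts round 625's number on the first guess.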
77,076,597 | 837,451 | Is it possible to get pydantic v2 to dump json with sorted keys? | <p>In the pydantic v1 there was an option to add kwargs which would get passed to <code>json.dumps</code> via <a href="https://docs.pydantic.dev/1.10/usage/exporting_models/#modeljson" rel="noreferrer"><code>**dumps_kwargs</code></a>. However, in pydantic v2 if you try to add extra kwargs to <code>BaseModel.json()</code> it fails with the error <code>TypeError: `dumps_kwargs` keyword arguments are no longer supported.</code></p>
<p>Here is example code with a workaround using <code>dict()</code>/<code>model_dump()</code>. This is good enough as long as the types are simple, but it won't work for the more complex data types that pydantic knows how to serialize.</p>
<p>Is there a way to get <code>sort_keys</code> to work in pydantic v2 in general?</p>
<pre class="lang-py prettyprint-override"><code>import json
from pydantic import BaseModel
class JsonTest(BaseModel):
b_field: int
a_field: str
obj = JsonTest(b_field=1, a_field="one")
# this worked in pydantic v1 but raises a TypeError in v2
# print(obj.json(sort_keys=True)
print(obj.model_dump_json())
# {"b_field":1,"a_field":"one"}
# workaround for simple objects
print(json.dumps(obj.model_dump(), sort_keys=True))
# {"a_field": "one", "b_field": 1}
</code></pre>
| <python><json><pydantic> | 2023-09-10 14:48:01 | 3 | 4,688 | mmdanziger |
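For complex field types, one stdlib-only workaround is to let pydantic serialize first and then re-parse and re-dump the resulting JSON string with `sort_keys=True`. The string below stands in for what `obj.model_dump_json()` would return, so the sketch runs without pydantic:

```python
import json

# Stand-in for the output of model_dump_json(); the datetime-like field is
# shown already serialized, since pydantic handled it in the first pass.
dumped = '{"b_field": 1, "a_field": "2023-09-10T14:48:01"}'
sorted_json = json.dumps(json.loads(dumped), sort_keys=True)
print(sorted_json)
```

The extra round-trip costs a parse/serialize cycle, and edge cases (float formatting, non-string keys) can differ subtly between pydantic's serializer and `json.dumps`, but it sorts keys for any model pydantic can dump.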
77,076,413 | 893,254 | How can I test some Python code which makes use of the requests package? | <p>I have a Python script which uses the <code>requests</code> package to make a <code>get</code> request to a url to download a (large) file.</p>
<p>Because the file is large, I cannot use it in its current state for testing. (It takes about 20 minutes to download.)</p>
<p>Is there a way I can use another Python package to setup a local process/server which can provide a "fake" (or just shorter) version of this file for testing purposes? It doesn't matter if I have to change the url for testing, and indeed I would expect to do so.</p>
<p>Under test conditions, the application can just ask for a localhost address. <code>localhost:/test_endpoint</code>, or something.</p>
<p>The code I use to make the request to download the file is just two lines.</p>
<pre><code>url = 'http://blaa_blaa.txt'
request = requests.get(url, allow_redirects=True)
</code></pre>
| <python><python-3.x> | 2023-09-10 13:55:45 | 1 | 18,579 | user2138149 |
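A sketch of one way to do this with only the standard library: spin up a throwaway `http.server` on a random free port in a background thread and point the code under test at it. Here `urllib` plays the client so the example is dependency-free, but `requests.get(url)` against the same URL should behave the same (the handler class and body are made up for the demo):

```python
import http.server
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    """Serves a tiny stand-in for the large production file."""
    def do_GET(self):
        body = b"small fake file\n"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging in tests

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/test_endpoint"
data = urllib.request.urlopen(url).read()
server.shutdown()
```

Libraries such as `responses` or `pytest-httpserver` wrap this same idea with less boilerplate, if adding a test dependency is acceptable.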
77,076,399 | 11,748,924 | using finally clause vs without using finally clause | <p>What is the difference between this code that uses a <code>finally</code> statement and the code without a <code>finally</code> statement?</p>
<pre><code># code 1
a = 10
b = 0
try:
c = a / b
print(c)
except ZeroDivisionError as error:
print(error)
finally:
print('Finishing up.')
</code></pre>
<p>And this is the code without a finally statement; the two look the same. Is there a difference, such as performance?</p>
<pre><code># code 2
a = 10
b = 0
try:
c = a / b
print(c)
except ZeroDivisionError as error:
print(error)
print('Finishing up.') # without using finally statement.
</code></pre>
<p>Here is a flowchart I got from this <a href="https://www.pythontutorial.net/python-basics/python-try-except-finally/" rel="nofollow noreferrer">site</a>:
<a href="https://i.sstatic.net/cLWS1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cLWS1.png" alt="try-except-finally" /></a></p>
<p>In my real application, I'm using OpenCV in a frame loop to handle errors, but I have no idea whether I should use finally or not.</p>
<pre><code> while ret:
try:
# Write the preview frame to the video file
video_writer_preview.write(frame)
log_dict[pid] = f'Frame {view_class_dict[pid]} inferencing... '
# Inference
# frame = inference_detect_mtcnn(frame)
if is_dopler:
frame = inference_detect_yoloDopler(frame)
else:
if is_abnclass:
if view_class_dict[pid] == '4CH':
frame = inference_detect_yolo4CH(frame)
elif view_class_dict[pid] == '5CH':
frame = inference_detect_yolo5CH(frame)
elif view_class_dict[pid] == 'LA':
frame = inference_detect_yoloLA(frame)
elif view_class_dict[pid] == 'SA':
frame = inference_detect_yoloSA(frame)
elif view_class_dict[pid] == 'SUB':
frame = inference_detect_yoloSUB(frame)
# Write the processed frame to the video file
video_writer_processed.write(frame)
except Exception as e:
log_dict[pid] = f'Frame inference error! {e}'
print(f"error: {e}")
finally: # shall I use finally clause?
# Read the next frame
ret, frame = capture.read()
progress_dict[pid] += 1
</code></pre>
| <python><error-handling><try-catch-finally> | 2023-09-10 13:53:25 | 0 | 1,252 | Muhammad Ikhwan Perwira |
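The two snippets in the question happen to behave the same because the `except` clause runs to completion. They diverge as soon as the protected block leaves early: `finally` still executes after a `return`, `break`, `continue`, or an exception the handler doesn't catch, while a trailing `print` would be skipped. A small sketch:

```python
# finally runs on every exit path, including early returns from try or except.
log = []

def divide(a, b):
    try:
        return a / b
    except ZeroDivisionError:
        log.append("caught")
        return None
    finally:
        log.append("finishing up")  # runs on both paths, even after return

divide(10, 2)   # try returns early; finally still fires
divide(10, 0)   # except returns early; finally still fires
```

Applied to the frame loop: putting `capture.read()` in `finally` guarantees the loop advances to the next frame even if an unexpected exception escapes the handler, which is usually the desired behavior.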
77,076,359 | 16,222,937 | Import could not be resolved for modules in same folder | <p>I have two .py files (modules). I'm trying to import one into the other:</p>
<pre><code>from place import Place
</code></pre>
<p><code>place</code> is the name of the .py file and <code>Place</code> is the name of the class inside it. I am getting this error:</p>
<blockquote>
<p>Import "place" could not be resolved Pylance(reportMissingImports)</p>
</blockquote>
<p>I have Python installed correctly and modules are both in the same folder. Answers tried:</p>
<ul>
<li><a href="https://stackoverflow.com/q/72019083/16222937">how do I solve the Pylance(reportMissingImports)?</a> (checking environment/interpreter).</li>
<li><a href="https://stackoverflow.com/questions/68887729/vs-pylance-warning-import-module-could-not-be-resolved">VS/Pylance warning: import "module" could not be resolved</a> (check environment).</li>
<li><a href="https://stackoverflow.com/questions/74539519/vscode-import-could-not-be-resolved-python">vscode import could not be resolved python</a> (restart Visual Studio Code).</li>
</ul>
<p>file_to_do_import.py:</p>
<pre><code>from place import Place
</code></pre>
<p>place.py:</p>
<pre><code>class Place:
def __init__(self, place_name, place_address, num_days, total_cost):
# Instance variables for each book instance!!!!
self.place_name = place_name
self.place_address = place_address
self.num_days = num_days
self.total_cost = total_cost
def __str__(self):
# Instance method to return book information as a string!!!!
return "{} by {}, published in {}".format(self.place_name, self.place_address, self.num_days, self.total_cost)
</code></pre>
| <python><pylance> | 2023-09-10 13:41:58 | 1 | 443 | CloakedArrow |
77,076,185 | 5,859,885 | SciPy - Warning when there are Contradictory constraints? | <p>I am using the SciPy package from Python in order to solve a minimization problem with many constraints.</p>
<p>Let's say I have contradictory constraints on my solution. For the purposes of this question, let's just say I have these constraints:</p>
<pre><code> def con1(x):
return 0.9 * x[0] - x[1]
def con2(x):
return 0.9 * x[1] - x[0]
</code></pre>
<p>With these bounds <code>[(0.001, None), (0.001, None)]</code> so that they are both positive.</p>
<p>When I run <code>scipy.optimize.minimize</code> with these constraints: <code>minimize(objective, initial_guess, method='SLSQP', bounds=bounds, constraints=cons)</code></p>
<p>Is there any way for scipy to tell me that these are contradictory constraints? As it stands, I either just get a solution which ignores my conditions or a bounds error.</p>
<p>Is there any way for scipy to directly tell me that the conditions cannot be satisfied?</p>
| <python><scipy><scipy-optimize><scipy-optimize-minimize> | 2023-09-10 12:52:35 | 0 | 1,461 | the man |
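SciPy's SLSQP does not diagnose infeasibility directly; it only returns a non-success status or a poor point. One hedged sketch of a manual pre-check for this particular pair of constraints: chaining them gives x0 ≤ 0.9·x1 ≤ 0.81·x0, which no point respecting the positive bounds can satisfy, and a brute-force sample confirms it without calling SciPy at all:

```python
# Manual feasibility probe for the question's constraints (no scipy needed).
# con1: 0.9*x0 - x1 >= 0 and con2: 0.9*x1 - x0 >= 0 chain to x0 <= 0.81*x0,
# impossible for any x0 >= 0.001.
def feasible(x):
    return (0.9 * x[0] - x[1] >= 0
            and 0.9 * x[1] - x[0] >= 0
            and all(v >= 0.001 for v in x))

# Sample a grid over the bounded region; nothing satisfies both constraints.
samples = [(a / 10, b / 10) for a in range(1, 100) for b in range(1, 100)]
any_feasible = any(feasible(s) for s in samples)
```

A sampling probe like this is only a heuristic (it can miss thin feasible regions); for a rigorous answer, a feasibility phase such as minimizing total constraint violation and checking whether the optimum reaches zero is the usual approach.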
77,076,144 | 1,045,800 | Create a single object from chained comparison in Python | <p>I am toying around with chained comparison in Python, that is, something like <code>a < b < c</code>. According to the docs, it is evaluated as <code>(a < b) and (b < c)</code>, where <code>b</code> is evaluated only once.</p>
<p>I do this to save inequalities of symbolic expressions to a list. For example:</p>
<pre><code>l = Keeper()
a = Symbol()
b = Symbol()
c = Symbol()
l.add(a < b)
l.add(b < c)
</code></pre>
<p>This works nicely. Now I've seen in the HiGHS documentation something and am baffled how they do it. <a href="https://ergo-code.github.io/HiGHS/dev/interfaces/python/example-py/" rel="nofollow noreferrer">Here</a> they give this example:</p>
<pre><code>x0 = h.addVar(lb = 0, ub = 4)
x1 = h.addVar(lb = 1, ub = 7)
h.addConstr(5 <= x0 + 2*x1 <= 15)
</code></pre>
<p>with the implication that this means what one would expect with standard mathematic notation.</p>
<p>How do they do it? The best I can do is create either the <code>5 <= x0 + 2*x1</code> part or the <code>x0 + 2*x1 <= 15</code>, but never both.</p>
<p>Any ideas?</p>
| <python><highs> | 2023-09-10 12:40:11 | 1 | 5,420 | cxxl |
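One way libraries pull this off (a hypothetical sketch of the mechanism, not HiGHS's actual implementation) relies on two facts: `a <= b <= c` desugars to `(a <= b) and (b <= c)` with `b` evaluated once, and `x and y` returns `y` when `x` is truthy. If each comparison mutates the shared expression object and returns it truthily, the final value of the chain carries both bounds:

```python
class Expr:
    """Made-up expression object recording the bounds a chain imposes on it."""
    def __init__(self):
        self.lb = None
        self.ub = None

    def __ge__(self, other):   # hit by `5 <= expr` via the reflected comparison
        self.lb = other
        return self

    def __le__(self, other):   # hit by `expr <= 15`
        self.ub = other
        return self

    def __bool__(self):        # keep the implicit `and` short-circuit going
        return True

e = Expr()
constraint = 5 <= e <= 15      # both bounds land on the same object
```

`5 <= e` ends up calling the reflected `e.__ge__(5)` because `int` returns `NotImplemented` when compared against an unknown type; the truthy result then lets `and` evaluate and return `e <= 15`.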
77,076,122 | 1,473,517 | scipy's direct fails (almost) immediately on this toy optimization problem | <p>Consider the following simple MWE:</p>
<pre><code>import numpy as np
from scipy.optimize import direct
def score(x):
parity_in_range = len([v for v in x if 4 <= v <= 6])%3
main_score = np.max(np.abs(np.diff(x)))
return main_score + parity_in_range
length = 20
bounds = [(0,10)] * length
result = direct(score, locally_biased=False, bounds=bounds, maxiter=10000, maxfun=10000)
print(result)
</code></pre>
<p>An optimal solution is to make all the parameters equal and not between 4 and 6. E.g. all 3s. This gives a function value of 0. The optimization works with varying degrees of success with the different optimizers of scipy but it fails almost instantly with direct. It gives:</p>
<pre><code> message: The volume of the hyperrectangle containing the lowest function value found is below vol_tol=1e-16
success: True
status: 4
fun: 2.0
x: [ 5.000e+00 5.000e+00 ... 5.000e+00 5.000e+00]
nit: 2
nfev: 157
</code></pre>
<p>I am not sure that it should report success but the real problem is that it gives up after 157 function evaluations with that warning.</p>
<p>Is there any way to get direct to optimize this function?</p>
| <python><scipy><scipy-optimize> | 2023-09-10 12:34:23 | 1 | 21,513 | Simd |
77,076,050 | 689,242 | pyserial gets no response from the bootloader in embedded device | <p>I have an embedded device configured to boot in to the bootloader which expects to communicate with the user on the USART1 after reboot.</p>
<p>This works if I open a serial terminal application <em>(cutecom)</em> and connect to the device <code>/dev/ttyUSB0</code> using:</p>
<ul>
<li>115200 baud rate,</li>
<li>8 data bits,</li>
<li>even parity</li>
<li>1 stop bit</li>
</ul>
<p>After connecting I reboot the device, send <code>0x7f</code> to the bootloader and bootloader acknowledges with <code>0x79</code>. Here is how it looks:</p>
<p><a href="https://i.sstatic.net/1F91m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1F91m.png" alt="enter image description here" /></a></p>
<p>This works flawlessly.</p>
<hr />
<p>Now, I want to automate this with the following Python script, which was supposed to connect to the device automatically, write <code>0x7f</code>, and then keep reading in order to receive the expected acknowledge <code>0x79</code> from the bootloader. But the acknowledge never comes...</p>
<pre class="lang-py prettyprint-override"><code>import serial.tools.list_ports
import serial
s = serial.Serial(
port="/dev/ttyUSB0",
baudrate=115200,
bytesize=8,
parity=serial.PARITY_EVEN,
stopbits=1,
timeout=120
)
s.write(bytes([0x7f]))
while True:
readData = s.read()
print(readData)
</code></pre>
<p>What am I missing? Isn't this simple enough? I was thinking that the bootloader might respond too fast. Is this possible? How can I solve this?</p>
| <python><pyserial> | 2023-09-10 12:16:10 | 1 | 1,505 | 71GA |
77,076,045 | 20,285,843 | MySQL Connector / CharacterSet / two connections | <p>I am facing a problem using python, mysql.connector (8.1.0) and trying to open 2 connections on 2 different servers:</p>
<p>If I run :</p>
<pre><code>from mysql.connector import MySQLConnection
if __name__ in '__main__':
# A
try:
c1 = MySQLConnection(
host='host1',
user='*',
password='*',
database='A'
)
c1.close()
except Exception as e:
print(e)
finally:
print('c1')
# B
try:
c2 = MySQLConnection(
host='host2',
user='*',
password='*',
database='B'
)
c2.close()
except Exception as e:
print(e)
finally:
print('c2')
</code></pre>
<p>I get the exception <code>Character set 'utf8' unsupported</code> for c2.</p>
<p>If I run only part B, it's Ok. It's as if something was set globally after the first connection.</p>
<p>any idea?</p>
<p><strong>EDIT:</strong> Got it! <code>CharacterSet.desc</code> is a class variable set at the beginning.</p>
<pre><code>from mysql.connector import MySQLConnection as MySQLConnection
from mysql.connector.constants import CharacterSet
if __name__ in '__main__':
desc = CharacterSet.desc.copy()
try:
c1 = MySQLConnection(
host='host1',
user='*',
password='*',
database='A'
)
c1.close()
except Exception as e:
print(e)
finally:
print('c1')
CharacterSet.desc = desc
try:
c2 = MySQLConnection(
host='host2',
user='*',
password='*',
database='B'
)
c2.close()
except Exception as e:
print(e)
finally:
print('c2')
</code></pre>
<p>It works now</p>
| <python><mysql-connector><charset> | 2023-09-10 12:13:51 | 3 | 529 | yotheguitou |
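As a toy reproduction of the pitfall the question describes, entirely with a made-up class: a mutable class attribute is shared by every connection, so whatever the first connection's handshake does to it leaks into the second. Saving a copy up front and restoring it undoes the mutation:

```python
class CharacterSetLike:
    """Made-up stand-in for mysql.connector.constants.CharacterSet."""
    desc = {"utf8": "supported"}  # class-level, shared by all connections

def first_connection():
    # Stand-in for the first server's handshake trimming the shared table.
    CharacterSetLike.desc.pop("utf8")

saved = CharacterSetLike.desc.copy()        # snapshot before connecting
first_connection()
assert "utf8" not in CharacterSetLike.desc  # a second connection would now fail
CharacterSetLike.desc = saved               # restore the snapshot
```

This is the classic shared-mutable-class-state problem: module-level or class-level dicts persist across uses within one process, so per-connection state should either be copied per use or restored afterwards.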
77,075,930 | 1,942,868 | How can I make the queryset filter `or` conditions dynamically from the array? | <p>I have an <code>array</code> and am trying to use it as a filter key for the database.</p>
<p>I want to build this dynamically from <code>array = ["AC", "BC"]</code>:</p>
<pre><code> queryset = queryset.filter(Q(username__icontains="AC")| Q(username__icontains="BC"))
</code></pre>
<p>For example, I try like this below but it is obviously wrong.</p>
<pre><code>array = ["AC","BC"]
qs = []
for k in array:
qs.append(Q(username__icontains=k))
queryset = queryset.filter(qs.join('|'))
</code></pre>
<p>How can I do this ?</p>
| <python><django><filter> | 2023-09-10 11:35:06 | 2 | 12,599 | whitebear |
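The idiomatic Django pattern is `functools.reduce` with `operator.or_`, since `Q` objects combine with the `|` operator. The real call would be `queryset.filter(reduce(or_, (Q(username__icontains=k) for k in array)))`; the tiny stand-in class below only mimics `Q`'s `|` so the sketch runs without Django:

```python
from functools import reduce
from operator import or_

class FakeQ:
    """Made-up stand-in mimicking django.db.models.Q's | combination."""
    def __init__(self, expr):
        self.expr = expr

    def __or__(self, other):
        return FakeQ(f"({self.expr} | {other.expr})")

array = ["AC", "BC"]
combined = reduce(or_, (FakeQ(f"username__icontains={k}") for k in array))
```

With real `Q` objects, the reduced result is a single `Q` that `filter()` accepts directly; one edge case to guard is an empty `array`, where `reduce` needs an initial value (e.g. `Q()` combined with `&`, or skipping the filter entirely).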
77,075,862 | 15,212,664 | Interact with an exe file which is converted from py file | <p>I have the following Python script. I run it to convert some PDFs to Excel: I enter the folder name where the folders with the PDFs exist, press Enter, and the script processes the PDF files and returns a structured Excel file.</p>
<p>I tried to convert the py file to an exe file using the following commands</p>
<pre><code>pip install pyinstaller
pyinstaller --onefile myscript.py
</code></pre>
<p>When i execute this i am getting back an exe file but when i run it a cmd pop up window open and close very quickly without giving me the opportunity to enter the folder name.
Can anyone help on that ?</p>
<p>This is the python script.</p>
<pre><code>import pandas as pd
import re
import os
from datetime import date
import datetime
import tkinter as tk
from tkinter import simpledialog
def get_folder_name():
root = tk.Tk()
root.withdraw() # Hide the main window
folder = simpledialog.askstring("Folder Name", "Enter the folder name (e.g., 23.08.2023):")
return folder
def process_pdfs_and_generate_excel(base_directory, folder):
dfs = []
folder_path = os.path.join(base_directory, folder)
if not os.path.exists(folder_path):
print(f"Folder '{folder}' does not exist.")
return None
for root, _, files in os.walk(folder_path):
for filename in files:
if filename.endswith('.txt'):
filepath = os.path.join(root, filename)
with open(filepath, 'r') as file:
lines = file.readlines()
start_line = 18
weighted_average_line_index = None
for i, line in enumerate(lines[start_line:], start=start_line):
if "WEIGHTED AVERAGE" in line:
weighted_average_line_index = i
break
if weighted_average_line_index is not None:
table_data = [line.strip().split('|') for line in lines[start_line:weighted_average_line_index] if '|' in line]
column_names = ['ARRIVAL_DATE', 'CONTAINER_NO', 'GR_QTY', 'LOT_NUMBER', 'OTHER_PAPERS', 'MOISTURE', 'PROHIBITIVE', 'File Name', 'Folder Name', 'EXCEL_FILE_ID']
mapped_data = {}
for column_name, index in zip(column_names, [2, 3, 5, 6, 7, 8, 9, None, None, None]):
mapped_data[column_name] = []
for row in table_data:
if column_name == 'Folder Name':
mapped_data[column_name].append(folder)
elif column_name == 'File Name':
mapped_data[column_name].append(filename)
elif column_name == 'EXCEL_FILE_ID':
mapped_data[column_name].append(f'PT_EKAMAS_{folder}')
elif len(row) > index:
mapped_data[column_name].append(row[index].strip())
else:
mapped_data[column_name].append('')
invoice_pattern = r"Invoice No\.\s*:\s*(\d+)"
invoice_no = ''
for line in lines:
invoice_match = re.search(invoice_pattern, line)
if invoice_match:
invoice_no = invoice_match.group(1)
break
ordered_grade_column = 'ORDERED_GRADE'
ordered_quality_column = 'ORDERED_QUALITY'
ordered_grade_pattern = r"Material No\.\s*:\s*\d+\s*-\s*(.*?);"
ordered_quality_pattern = r"Material No\.\s*:\s*\d+\s*-\s*.*?;\s*(.*?)\s*,"
ordered_grade = ''
ordered_quality = ''
if len(lines) >= 7:
ordered_grade_match = re.search(ordered_grade_pattern, lines[6])
ordered_quality_match = re.search(ordered_quality_pattern, lines[6])
if ordered_grade_match:
ordered_grade = ordered_grade_match.group(1).strip()
if ordered_quality_match:
ordered_quality = ordered_quality_match.group(1).strip()
entity_line = lines[5].strip()
entity_match = re.search(r"Vendor No\.\s*:\s*(.*)", entity_line)
entity = entity_match.group(1).strip().split(' - ')[-1] if entity_match else ''
if 'UK' in entity:
entity = 'UK'
elif 'GREECE' in entity:
entity = 'GR'
elif 'ITALY' in entity:
entity = 'IT'
elif 'LAUSANNE' in entity:
entity = 'CH'
elif 'VIPA RECYCLING (IRELAND) LTD' in entity:
entity = 'IE'
mapped_data['Entity'] = [entity] * len(table_data)
mapped_data['Invoice No'] = [invoice_no] * len(table_data)
mapped_data[ordered_grade_column] = [ordered_grade] * len(table_data)
mapped_data[ordered_quality_column] = [ordered_quality] * len(table_data)
df = pd.DataFrame(mapped_data)
dfs.append(df)
result_df = pd.concat(dfs, ignore_index=True)
formatted_date = datetime.datetime.now().strftime('%Y%m%d_%H%M%S')
# Construct the folder path with the date
folder_date = datetime.datetime.now().strftime('%Y%m%d')
folder_path = os.path.join(base_directory, 'claims', folder_date)
os.makedirs(folder_path, exist_ok=True)
# Construct the Excel file path
excel_filename = f'mycust{folder}_{formatted_date}.xlsx'
excel_filepath = os.path.join(folder_path, excel_filename)
# Add a new column with the name of the output Excel file
result_df['EXCEL_FILE_NAME'] = excel_filename
result_df['CONTAINER_NO'] = result_df['CONTAINER_NO'].apply(lambda x: re.search(r"/(.*?)/", x).group(1) if "/" in x else None)
result_df.to_excel(excel_filepath, index=False)
return excel_filepath
if __name__ == "__main__":
base_directory = 'mydirectory'
# Get the folder name from the user using the GUI
folder = get_folder_name()
if folder:
excel_filepath = process_pdfs_and_generate_excel(base_directory, folder)
if excel_filepath:
print(f"Excel file saved at: {excel_filepath}")
# Add an input prompt to keep the command prompt window open
input("Press Enter to exit...")
</code></pre>
| <python><pyinstaller><exe> | 2023-09-10 11:13:16 | 1 | 327 | Lefteris Kyprianou |
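A common reason the window closes despite the `input()` at the end is an exception raised earlier (for example, from tkinter or a missing directory inside the frozen exe), which kills the process before the prompt is reached. One hedged sketch of a guard that keeps the console open long enough to read the traceback (`main()` here is a stand-in that deliberately fails):

```python
import traceback

def main():
    # Stand-in for the real entry point; raises to simulate a startup failure
    # that would otherwise close the console before input() is reached.
    raise RuntimeError("simulated startup failure")

try:
    main()
except Exception:
    tb = traceback.format_exc()
    print(tb)
    # input("Press Enter to exit...")  # uncomment in the packaged exe
```

Alternatively, running the exe from an already-open `cmd` window (instead of double-clicking it) leaves any traceback visible after the process exits, which is the quickest way to see what is actually failing.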