QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,864,661
| 2,251,736
|
Running flask for multiple port number in a single thread
|
<p>In our setup we have 4 cameras. I'm creating 4 individual threads, in which I capture the raw camera frames using <code>cv2.VideoCapture("some_rtsp_stream")</code>, apply some image processing logic using OpenCV, and produce the desired output frame.</p>
<p>What I need is a single-threaded Flask module or architecture which will accept the output frames and display them on a webpage, with a different port number per camera.</p>
<pre><code>CAMERA1 --> 192.168.0.50:5000
CAMERA2 --> 192.168.0.50:5001
CAMERA3 --> 192.168.0.50:5002
CAMERA4 --> 192.168.0.50:5003
</code></pre>
<p>Work that I have done so far:</p>
<p>Following is a flask file I am using:</p>
<pre><code>from flask import Flask, Response, render_template, request
import threading

app_flask = Flask(__name__)

result_frame = None

@app_flask.route('/upload', methods=['POST'])
def upload():
    global result_frame
    # keep jpg data in global variable
    result_frame = request.data
    return "OK"

def start_flask(ip_addr, f_port):
    app_flask.run(host=ip_addr, port=f_port, debug=False, use_reloader=False)

def gen():
    while True:
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n'
               b'\r\n' + result_frame + b'\r\n')

@app_flask.route('/video')
def video():
    if result_frame:
        # if you use `boundary=other_name` then you have to yield `b--other_name\r\n`
        return Response(gen(), mimetype='multipart/x-mixed-replace; boundary=frame')
    else:
        return ""

@app_flask.route('/')
def index():
    return render_template('index.html')
</code></pre>
<p>So in order to start the flask app with dynamic port number:</p>
<pre><code>thread1 = threading.Thread(target=lambda: start_flask("192.168.0.50","5000"))
thread1.start()
thread2 = threading.Thread(target=lambda: start_flask("192.168.0.50","5001"))
thread2.start()
thread3 = threading.Thread(target=lambda: start_flask("192.168.0.50","5002"))
thread3.start()
thread4 = threading.Thread(target=lambda: start_flask("192.168.0.50","5003"))
thread4.start()
</code></pre>
<p>And to post the frame to the <code>/upload</code> route (this runs inside the individual camera thread, where I get <code>result</code> as the OpenCV output frame):</p>
<pre><code> _, imdata = cv2.imencode('.JPG', result, [cv2.IMWRITE_JPEG_QUALITY, 90])
requests.post('http://192.168.0.50:'+port_no+'/upload', data=imdata.tobytes())
</code></pre>
<p>Now, my question is: how can we stream the output to the web browser using a good single-threaded Flask architecture, with better memory management? When I start more than 2 Flask threads, all subsequent threads show a significant amount of lag in the output.</p>
<p>An entirely different approach would also be fine.</p>
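Since four full Flask servers in one process are the likely source of the lag, one common alternative is a single app on a single port that keys the streams by camera id. This is a hedged sketch, not the original code: the `frames` dict, the `/upload/&lt;cam_id&gt;` and `/video/&lt;cam_id&gt;` routes, and the camera ids are all invented for illustration.

```python
import time
from flask import Flask, Response, request

app = Flask(__name__)
frames = {}  # cam_id -> latest JPEG bytes

@app.route('/upload/<cam_id>', methods=['POST'])
def upload(cam_id):
    # each camera thread posts to its own id, e.g. /upload/cam1
    frames[cam_id] = request.data
    return "OK"

def gen(cam_id):
    while True:
        frame = frames.get(cam_id)
        if frame:
            yield (b'--frame\r\nContent-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')
        else:
            time.sleep(0.01)  # avoid a busy spin before the first frame arrives

@app.route('/video/<cam_id>')
def video(cam_id):
    return Response(gen(cam_id), mimetype='multipart/x-mixed-replace; boundary=frame')

# app.run(host='0.0.0.0', port=5000, threaded=True)  # one server, one port
```

The stream URLs become `192.168.0.50:5000/video/cam1` … `/video/cam4` instead of four ports, so only one server (and one request-handling pool) exists.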
|
<python><multithreading><opencv><flask>
|
2023-03-28 09:45:58
| 0
| 444
|
Vikrant
|
75,864,649
| 8,179,672
|
Test FastAPI with Big Query client in backend
|
<p>I'm writing a REST API in FastAPI where I have to use BigQuery for fetching data. I have problems creating unit tests for positive scenarios where everything goes fine. I'm a beginner in API unit testing and I don't know how to mock the BigQuery client.</p>
<p>Below please find the simplified code showing the overall idea of what I want to achieve. Ask me if you need more details and I will edit the post.</p>
<p><code>main.py</code></p>
<pre><code>from google.cloud import bigquery
from fastapi import FastAPI, Depends

app = FastAPI()

def get_bq_client():
    with bigquery.Client() as client:
        return client

def prepare_query(val1):
    query = f"""
        SELECT name, item_id, price
        FROM `bigquery-my-data.products`
        WHERE item_id = {val1}
    """
    return query

@app.get("/v1/items/{item_id}")
async def get_price(item_id: int,
                    bq_client: bigquery.client.Client = Depends(get_bq_client)
                    ):
    query = prepare_query(item_id)
    query_job = bq_client.query(query)
    result = query_job.result().to_dataframe()
    return result.to_json()
</code></pre>
<p><code>test_main.py</code></p>
<pre><code>from fastapi.testclient import TestClient
from main import app

client = TestClient(app)

def test_get_price_correct():
    response = client.get("/v1/items/123")
    expected = {"name": "Cola",
                "item_id": 123,
                "price": 1.12}
    pass
</code></pre>
<p>I've found this thread but I failed to adapt it for mocking the client response: <a href="https://stackoverflow.com/questions/53700181/python-unit-testing-google-bigquery">Python Unit Testing Google Bigquery</a>.</p>
|
<python><testing><mocking><pytest><fastapi>
|
2023-03-28 09:45:16
| 1
| 739
|
Roberto
|
75,864,479
| 5,246,211
|
How to list Dataproc operations in `google-cloud-dataproc` client
|
<p>I am looking for a way to do something similar to CLI's <code>gcloud dataproc operations list --filter "..."</code>.</p>
<p>The minimal code example:</p>
<pre class="lang-py prettyprint-override"><code>from google.cloud import dataproc_v1

region = 'us-west1'
client_options = {"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
dataproc_cluster_client = dataproc_v1.ClusterControllerClient(client_options=client_options)

def list_operations(dataproc_cluster_client, region):
    for op in dataproc_cluster_client.list_operations(
        request={"filter": f"operationType = CREATE AND labels.goog-dataproc-location:{region}"}
    ):
        print(op)

list_operations(dataproc_cluster_client, region)
</code></pre>
<p>The error:</p>
<pre><code>grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Invalid resource field value in the request."
debug_error_string = "UNKNOWN:Error received from peer ipv4:xxx.xxx.xx.xxx:443 {created_time:"2023-03-28T11:23:00.466125+02:00", grpc_status:3, grpc_message:"Invalid resource field value in the request."}"
</code></pre>
<p>What's wrong? I have failed to find any documentation about this <code>resource field value</code>, its possible values, and how to actually pass it in the request.</p>
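A hedged guess at the cause: `list_operations` on the Dataproc clients is the generic `google.longrunning` ListOperations mixin, and its request also takes a `name` field pointing at the region's operations collection; omitting it (or pointing it at the wrong resource) is a plausible source of this INVALID_ARGUMENT. A sketch, with `my-project` as a placeholder project id:

```python
def operations_collection(project_id: str, region: str) -> str:
    # The operations *collection* resource name for a Dataproc region.
    return f"projects/{project_id}/regions/{region}/operations"

def list_dataproc_operations(project_id, region, op_filter="operationType = CREATE"):
    # Deferred import: requires the google-cloud-dataproc package and credentials.
    from google.cloud import dataproc_v1
    client = dataproc_v1.ClusterControllerClient(
        client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"})
    return client.list_operations(request={
        "name": operations_collection(project_id, region),
        "filter": op_filter,
    })

print(operations_collection("my-project", "us-west1"))
```

If that works, the filter syntax itself can then be narrowed separately from the resource-name problem.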
|
<python><google-cloud-platform><google-cloud-dataproc>
|
2023-03-28 09:27:45
| 2
| 968
|
egordoe
|
75,864,326
| 15,011,154
|
Unable to add text to some PDFs with PyMuPDF
|
<p>I'm writing a Python (v3.11) script, using the library PyMuPDF, to write some text to a PDF.
The script works fine with some PDFs, but with others it doesn't write the text. It doesn't show any errors; the output file is created, but without the text added.</p>
<p>Here is the script:</p>
<pre class="lang-py prettyprint-override"><code>import fitz

def processPdf(input_file, output_file, text):
    doc = fitz.open(input_file)
    page = doc.load_page(0)
    x = 100
    y = 100
    insertText(page, text, x, y)
    #insertTextbox(page, text, x, y)
    doc.save(output_file)
    doc.close()

def insertText(page, text, x, y):
    p = fitz.Point(x, y)
    page.insert_text(
        p,
        text,
        fontname = "helv",
        fontsize = 11,
        color = (1, 0, 0)
    )

def insertTextbox(page, text, x, y):
    rect = (x, y, x+200, y+200)
    page.draw_rect(rect, color=(0.25, 1, 0, 0.25))
    rc = page.insert_textbox(
        rect,
        text,
        fontname = "helv",
        fontsize = 11,
        align = 1
    )

processPdf("input_file.pdf", "output_file.pdf", "123")
</code></pre>
<p>I tried both methods <code>insert_text</code> and <code>insert_textbox</code>, but neither works.</p>
<ol>
<li>The PDF is not write protected or password protected.</li>
<li>As you can see, I used small values as coordinates, so to be sure the text will be inserted inside the margins of a normal PDF (the PDFs are A4 papersize).</li>
<li>The text is a simple "123", so it doesn't contain special characters.</li>
<li>I've already used the method <code>doc.get_layers()</code> to check if there are layers that can cover the inserted text and there are none.</li>
</ol>
<p>What can I check more?</p>
<p>Thank you all in advance.</p>
<p>Edit: I tried to modify the PDF's content with an app called UPDF, in order to hide the sensitive content and share the problematic file with you. Of course, after saving the file (using UPDF's Flatten and Save function), my script was able to write the "123" text into the modified file. What could UPDF have changed in the problematic file to make it modifiable by my script? (I only used UPDF to replace some text with "---" and put black squares over a couple of images.)</p>
|
<python><python-3.x><pdf>
|
2023-03-28 09:09:48
| 1
| 555
|
Ma3x
|
75,864,104
| 221,270
|
Keras - get number of samples used to build the model
|
<p>I have a Keras model for image classification saved as an HDF5 file. Is it possible to trace back the number of samples (images) used to create the model?</p>
|
<python><tensorflow><keras>
|
2023-03-28 08:46:58
| 1
| 2,520
|
honeymoon
|
75,864,073
| 4,253,946
|
Use of UnstructuredPDFLoader unstructured package not found, please install it with `pip install unstructured
|
<p>I have a newly created environment in Anaconda (conda 22.9.0 and Python 3.10.10). I installed langchain with <code>pip install langchain</code> (<code>conda install langchain</code> does not work). According to the quickstart guide <a href="https://python.langchain.com/en/latest/getting_started/getting_started.html" rel="noreferrer">I have to install one model provider</a>, so I installed openai (<code>pip install openai</code>).</p>
<p>Then I enter the Python console and try to load a PDF using the class <a href="https://python.langchain.com/en/latest/modules/indexes/document_loaders/examples/pdf.html#using-unstructured" rel="noreferrer">UnstructuredPDFLoader</a>, and I get the following error. What could the problem be?</p>
<pre><code>(langchain) C:\Users\user>python
Python 3.10.10 | packaged by Anaconda, Inc. | (main, Mar 21 2023, 18:39:17) [MSC v.1916 64 bit (AMD64)] on win32
>>> from langchain.document_loaders import UnstructuredPDFLoader
>>> loader = UnstructuredPDFLoader("C:\\<path-to-data>\\data\\name-of-file.pdf")
Traceback (most recent call last):
File "C:\<path-to-anaconda>\envs\langchain\lib\site-packages\langchain\document_loaders\unstructured.py", line 32, in __init__
import unstructured # noqa:F401
ModuleNotFoundError: No module named 'unstructured'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\<path-to-anaconda>\envs\langchain\lib\site-packages\langchain\document_loaders\unstructured.py", line 90, in __init__
super().__init__(mode=mode, **unstructured_kwargs)
File "C:\<path-to-anaconda>\envs\langchain\lib\site-packages\langchain\document_loaders\unstructured.py", line 34, in __init__
raise ValueError(
ValueError: unstructured package not found, please install it with `pip install unstructured`
</code></pre>
|
<python><conda><openai-api><langchain>
|
2023-03-28 08:44:10
| 3
| 376
|
Edu
|
75,863,988
| 11,809,811
|
os.path.join with forward and backward slashes
|
<p>I want to import multiple images from a folder using os.path.join and os.walk. Here is the code so far:</p>
<pre><code>import os

path = '../images' # path to folder
for _, __, image_paths in os.walk(path):
    for file_name in image_paths:
        full_path = os.path.join(path, file_name)
</code></pre>
<p>When I print the full path I get results like</p>
<pre><code>../images\00.png
../images\01.png
../images\02.png
</code></pre>
<p>So I got forward and backward slashes. Is that going to be a problem? I can use the paths to import images just fine, but I am worried it might cause errors somewhere down the line.</p>
<p>Along with that, I see people use forward and backward slashes more or less interchangeably; is there one that should be used?</p>
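Mixed separators are generally harmless on Windows (the Windows API accepts both), but they can be normalized if consistency matters; a small sketch of the two usual options:

```python
import os
from pathlib import Path

# os.path.join appends with the OS separator, which is why a '/'-style prefix
# plus a joined file name yields mixed slashes on Windows. normpath collapses
# everything to the native style:
mixed = os.path.join('../images', '00.png')
print(os.path.normpath(mixed))   # '..\\images\\00.png' on Windows, '../images/00.png' elsewhere

# pathlib joins and prints consistently, and is usually the cleaner choice:
p = Path('../images') / '00.png'
print(p)
```

Code that only ever passes the path back to the same OS (e.g. `cv2.imread`) will work either way; normalization mostly matters when paths are stored, compared as strings, or shared across platforms.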
|
<python>
|
2023-03-28 08:35:16
| 1
| 830
|
Another_coder
|
75,863,975
| 5,868,293
|
Merge with one column and also with another if value between other values in pandas
|
<p>I have the following dataframes</p>
<pre><code>import pandas as pd

foo1 = pd.DataFrame({'id': [1, 1, 2, 2],
                     'phase': ['Pre', 'Post', 'Pre', 'Post'],
                     'date_start': ['2022-07-24', '2022-12-25', '2022-09-30', '2022-12-25'],
                     'date_end': ['2022-07-30', '2023-03-07', '2022-10-05', '2023-03-04']})

foo2 = pd.DataFrame({'id': [1, 1, 1, 1,
                            2, 2, 2, 2],
                     'date': ['2022-07-24', '2022-07-25', '2022-12-26', '2023-01-01',
                              '2022-10-04', '2022-11-25', '2022-12-26', '2023-03-01']})

print(foo1, '\n', foo2)

   id phase  date_start    date_end
0   1   Pre  2022-07-24  2022-07-30
1   1  Post  2022-12-25  2023-03-07
2   2   Pre  2022-09-30  2022-10-05
3   2  Post  2022-12-25  2023-03-04

   id        date
0   1  2022-07-24
1   1  2022-07-25
2   1  2022-12-26
3   1  2023-01-01
4   2  2022-10-04
5   2  2022-11-25
6   2  2022-12-26
7   2  2023-03-01
</code></pre>
<p>I would like to get the <code>phase</code> column in <code>foo2</code> by merging on <code>id</code> <strong>and</strong> if <code>date</code> is between <code>date_start</code> and <code>date_end</code>. If <code>date</code> is not within the range <code>[date_start,date_end]</code> then the phase column should have <code>NaN</code></p>
<p>The resulting dataframe should look like this:</p>
<pre><code>   id        date phase
0   1  2022-07-24   Pre
1   1  2022-07-25   Pre
2   1  2022-12-26  Post
3   1  2023-01-01  Post
4   2  2022-10-04   Pre
5   2  2022-11-25   NaN
6   2  2022-12-26  Post
7   2  2023-03-01  Post
</code></pre>
<p>How could I do that ?</p>
<p>I found <a href="https://stackoverflow.com/questions/46525786/how-to-join-two-dataframes-for-which-column-values-are-within-a-certain-range">this</a> but it does not include "merging as well with <code>id</code>"</p>
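One way to do it on the sample frames above: a many-to-many merge on `id`, keep only the rows whose `date` falls inside the interval, then join the surviving phases back onto `foo2` so every unmatched row becomes `NaN` (this sketch assumes the `[date_start, date_end]` intervals for a given `id` do not overlap).

```python
import pandas as pd

foo1 = pd.DataFrame({'id': [1, 1, 2, 2],
                     'phase': ['Pre', 'Post', 'Pre', 'Post'],
                     'date_start': ['2022-07-24', '2022-12-25', '2022-09-30', '2022-12-25'],
                     'date_end': ['2022-07-30', '2023-03-07', '2022-10-05', '2023-03-04']})
foo2 = pd.DataFrame({'id': [1, 1, 1, 1, 2, 2, 2, 2],
                     'date': ['2022-07-24', '2022-07-25', '2022-12-26', '2023-01-01',
                              '2022-10-04', '2022-11-25', '2022-12-26', '2023-03-01']})

# work with real datetimes, not strings
for c in ('date_start', 'date_end'):
    foo1[c] = pd.to_datetime(foo1[c])
foo2['date'] = pd.to_datetime(foo2['date'])

# 1. merge on id (many-to-many), 2. keep in-range rows,
# 3. join phase back via foo2's original index so misses become NaN
m = foo2.reset_index().merge(foo1, on='id')
m = m[m['date'].between(m['date_start'], m['date_end'])]
out = foo2.join(m.set_index('index')['phase'])
print(out)
```

`reset_index()` preserves `foo2`'s row identity through the merge, which is what makes the final `join` line up row by row.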
|
<python><pandas>
|
2023-03-28 08:34:14
| 1
| 4,512
|
quant
|
75,863,754
| 13,839,945
|
Dimension of target and label in Pytorch
|
<p>I know this is probably discussed somewhere but I couldn't find it. I always have a mismatch of shapes between target and label when using PyTorch. For a batch size of 64 I get <code>[64, 1]</code> for the target and <code>[64]</code> for the label. I always fix this using <code>label.view(-1, 1)</code> inside the loss function.</p>
<p>I was wondering if there is a "best way" to fix this. Because I could also just use <code>target.view(-1)</code> to get the same result. Or I could even change the output in the network to <code>output.view(-1)</code>. Maybe it's also better to use something like <code>.reshape()</code>?</p>
<p>The mismatch probably comes from inside the dataloader, which gets y_train, y_test as a Series, not as a dataframe (since <code>y = X.pop(target_name)</code>), so <code>y_train.values</code> gives a 1D array. Should I fix it here?</p>
<p>I am happy for any kind of feedback :) If needed, I could also provide a small example of the process, but I think the question should also work without since it is a general problem.</p>
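The cheapest place to fix it is usually where `y` is created, so every batch already carries the `(batch, 1)` shape and no `view`/`reshape` is needed in the loss. A minimal sketch with numpy/pandas standing in for the tensors (the column names are invented; the `view` vs. `reshape` distinction matters little here — `view` requires contiguous memory, `reshape` copies only when it must):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'feature': [0.1, 0.2, 0.3, 0.4],
                   'target':  [0., 1., 1., 0.]})

y = df.pop('target')               # a Series -> .values is 1-D, shape (4,)
labels = y.values.reshape(-1, 1)   # reshape once, at dataset build time
print(labels.shape)                # (4, 1): matches a model output of (batch, 1)
```

Fixing the shape once at the data boundary keeps the model, the loss, and any metric code free of per-call reshaping.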
|
<python><pytorch>
|
2023-03-28 08:08:47
| 1
| 341
|
JD.
|
75,863,595
| 10,998,672
|
How to download file from sharepoint using office365-rest-python-api
|
<p>I was trying to use that lib to connect with my SharePoint and download the file: <a href="https://github.com/vgrem/Office365-REST-Python-Client" rel="nofollow noreferrer">https://github.com/vgrem/Office365-REST-Python-Client</a></p>
<p>I tried two approaches for auth:</p>
<ol>
<li>UserCredential</li>
<li>ClientCredential</li>
</ol>
<p>Code:</p>
<pre><code>client_credentials = ClientCredential(f'{client_id}',f'{client_secret}')
ctx = ClientContext(url).with_credentials(client_credentials)
web = ctx.web.get_folder_by_server_relative_path("Shared Documents/Documents").expand(["Files", "Folders"]).get().execute_query()
#web = ctx.web
ctx.load(web)
ctx.execute_query()
print("Web title: {0}".format(web.properties['Title']))
</code></pre>
<p>I registered the app in Azure Portal with a tutorial from GitLab site but in the first example I have an error:</p>
<pre><code>ValueError: Cannot get binary security token from https://login.microsoftonline.com/extSTS.srf
</code></pre>
<p>in client creds I have:</p>
<pre><code>Forbidden 403 Error
</code></pre>
<p>I have already checked many possibilities:</p>
<ul>
<li>whether the user given is correct - email not login</li>
<li>whether the password is correct - if I enter the wrong one there will be another error</li>
<li>whether the SharePoint page is correct - if I enter a non-existent one, I get another error</li>
<li>whether the query produced by the application is ok - if I type it directly in the browser while logged into SharePoint it returns the correct result.</li>
</ul>
<p>How did I register the applications in Azure?</p>
<ul>
<li>I went into AD Azure, created a new application</li>
<li>I generated a secret for it</li>
<li>I added to API Permissions read access to SharePoint.</li>
</ul>
<p>Is there anything else I should do? Did I leave something out? Maybe someone has encountered a similar problem?</p>
<p>I've run out of ideas - all links in google are already in purple.</p>
|
<python><azure><sharepoint><office365>
|
2023-03-28 07:51:00
| 1
| 1,185
|
martin
|
75,863,518
| 6,057,371
|
pandas iterate over many dataframe and create list of occurance per key
|
<p>I have a few hundred dataframes with the same structure.
I want to aggregate per key as follows:
for the list columns, create a list of lists (where each inner list is the value from a specific dataframe).
For example, in the case of 2 dataframes:</p>
<pre><code>df1 =
Key   C1     C2   C3
A     [1,2]  6    b
B     [6,1]  9    c

df2 =
Key   C1     C2   C3
B     [5,8]  2    t
A     [7,2]  3    z

df_agg =
Key   C1             C2     C3
A     [[1,2],[7,2]]  [6,3]  [b,z]
B     [[6,1],[5,8]]  [9,2]  [c,t]
</code></pre>
<p>Please notice I have a few hundred dataframes: df1, df2, ..., dfn</p>
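A sketch on two small frames (the same idea scales to a list of hundreds): concatenate everything once, then group by `Key` and collect each column into a list. One `concat` plus one `groupby` is far cheaper than merging frame by frame in a loop.

```python
import pandas as pd

df1 = pd.DataFrame({'Key': ['A', 'B'], 'C1': [[1, 2], [6, 1]], 'C2': [6, 9], 'C3': ['b', 'c']})
df2 = pd.DataFrame({'Key': ['B', 'A'], 'C1': [[5, 8], [7, 2]], 'C2': [2, 3], 'C3': ['t', 'z']})
dfs = [df1, df2]  # in practice: the few hundred dataframes

# groupby preserves the order rows arrived in, so each per-key list keeps
# the df1, df2, ..., dfn ordering from the concat.
df_agg = pd.concat(dfs).groupby('Key').agg(list).reset_index()
print(df_agg)
```

If the dataframes live in files, building `dfs` with a generator (`pd.concat(read(f) for f in files)`) keeps peak memory close to a single pass over the data.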
|
<python><pandas><dataframe><aggregate><data-munging>
|
2023-03-28 07:39:40
| 1
| 2,050
|
Cranjis
|
75,863,494
| 11,167,518
|
Python custom module not found, but path is in PYTHONPATH
|
<p>I searched for a similar question, but none of the solutions I've found have solved my problem.</p>
<p>I'm trying to use a local module as a python library, and I want to be able to import it everywhere when using python. I'm using a Mac OS Ventura and python 3.9.13.</p>
<p>The module is in the following folder structure:</p>
<pre><code>TestLibrary
├──__init__.py
└──test_library_1.py
</code></pre>
<p>I've set the env variable as follows, in the .zshrc file in the user folder:</p>
<pre><code>export PYTHONPATH="/Users/my_user/path_to_folder/TestLibrary"
</code></pre>
<p>The variable seems ok, as within python, printing sys.path gives this result:</p>
<pre><code>>>> import sys
>>> sys.path
['', '/Users/my_user/path_to_folder/TestLibrary', '/Library/Frameworks/Python.framework/Versions/3.9/lib/python39.zip', '/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9', '/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload', '/Users/my_user/Library/Python/3.9/lib/python/site-packages', '/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages']
</code></pre>
<p>But if I try to import:</p>
<pre><code>>>> import TestLibrary
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'TestLibrary'
</code></pre>
<p>Same happens if I try <code>from TestLibrary import test_library</code></p>
<p>I have always worked with Windows and I'm still getting used to a Mac, so I don't know if I'm setting the env variable wrong or if anything is wrong with the module.</p>
<p>Thanks in advance!</p>
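A likely cause (hedged): `PYTHONPATH` must contain the <em>parent</em> of the package folder, not the package folder itself. With `.../TestLibrary` on `sys.path`, its contents are importable (`import test_library_1` would work) but the name `TestLibrary` is not. A self-contained reproduction with a throwaway package:

```python
import os
import sys
import tempfile

# Build a disposable TestLibrary package to demonstrate the path rule.
parent = tempfile.mkdtemp()
pkg = os.path.join(parent, 'TestLibrary')
os.makedirs(pkg)
open(os.path.join(pkg, '__init__.py'), 'w').close()
open(os.path.join(pkg, 'test_library_1.py'), 'w').close()

# PYTHONPATH=".../TestLibrary" is equivalent to putting `pkg` on sys.path:
# only test_library_1 would be importable. Putting the PARENT on sys.path
# is what makes `import TestLibrary` resolve:
sys.path.insert(0, parent)
import TestLibrary
from TestLibrary import test_library_1
print(TestLibrary.__file__)
```

So in `.zshrc` the line should be `export PYTHONPATH="/Users/my_user/path_to_folder"` (one level up from the package).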
|
<python><macos><environment-variables><python-module>
|
2023-03-28 07:35:21
| 0
| 602
|
jcf
|
75,863,425
| 10,994,166
|
java.lang.OutOfMemoryError: GC overhead limit exceeded Pyspark
|
<p>I'm trying to join two dataframe in Pyspark, here are tables details:</p>
<pre><code>df1.count(): 9989352358 (2 columns)
df2.count(): 64000000 (1 column)
</code></pre>
<p>Now every time I join them I can see in the Spark UI that, out of 1000 tasks, 1 task always fails; sometimes it gives <code>GC overhead limit exceeded</code>, sometimes <code>java.util.concurrent.TimeoutException</code>, and sometimes a <code>heartbeat timeout</code>.</p>
<p>But I got this in the logs, which I think is the main reason:</p>
<pre><code>23/03/28 07:10:13 INFO org.apache.parquet.hadoop.InternalParquetRecordWriter: mem size 134387820 > 134217728: flushing 4840100 records to disk.
23/03/28 07:10:13 INFO org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 133711235
23/03/28 07:10:30 INFO org.apache.parquet.hadoop.InternalParquetRecordWriter: mem size 134319884 > 134217728: flushing 4900100 records to disk.
23/03/28 07:10:30 INFO org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 134321506
23/03/28 07:10:45 INFO org.apache.parquet.hadoop.InternalParquetRecordWriter: mem size 134362428 > 134217728: flushing 4800100 records to disk.
23/03/28 07:10:45 INFO org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 133932237
23/03/28 07:10:52 INFO org.apache.parquet.hadoop.InternalParquetRecordWriter: mem size 134568280 > 134217728: flushing 4820100 records to disk.
23/03/28 07:10:52 INFO org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 133781816
23/03/28 07:11:00 INFO org.apache.parquet.hadoop.InternalParquetRecordWriter: mem size 134445336 > 134217728: flushing 4920100 records to disk.
23/03/28 07:11:00 INFO org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 134417873
23/03/28 07:11:08 INFO org.apache.parquet.hadoop.InternalParquetRecordWriter: mem size 134452136 > 134217728: flushing 4870100 records to disk.
23/03/28 07:11:08 INFO org.apache.parquet.hadoop.InternalParquetRecordWriter: Flushing mem columnStore to file. allocated memory: 134350885
</code></pre>
<p>I feel like I'm running out of memory, but I think I have given enough resources to my Spark job. Here's my Spark config:</p>
<pre><code>spark_config["spark.executor.memory"] = "16G"
spark_config["spark.executor.memoryOverhead"] = "8G"
spark_config["spark.executor.cores"] = "8"
spark_config["spark.driver.memory"] = "10G"
spark_config["spark.network.timeout"] = "20000s"
spark_config["spark.executor.heartbeatInterval"] = "1000000"
spark_config["spark.dynamicAllocation.enabled"] = "true"
spark_config["spark.sql.execution.arrow.pyspark.enabled"] = "true"
spark_config["spark.shuffle.service.enabled"] = "true"
spark_config["spark.dynamicAllocation.minExecutors"] = "200"
spark_config["spark.dynamicAllocation.maxExecutors"] = "250"
</code></pre>
<p>I have tried so many things but nothing is working out; can someone tell me what the real reason might be and how I can fix it? I have tried repartitioning as well, but that didn't help either.</p>
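One straggler task out of 1000 usually points at key skew rather than at too little total memory, so a hedged sketch of settings worth trying (these are standard Spark 3.x configuration keys; the values are illustrative, not tuned for this cluster):

```python
spark_config = {}
spark_config["spark.sql.adaptive.enabled"] = "true"            # adaptive query execution
spark_config["spark.sql.adaptive.skewJoin.enabled"] = "true"   # split skewed partitions at runtime
spark_config["spark.sql.shuffle.partitions"] = "4000"          # smaller, more even shuffle tasks

# If the 1-column df2 (64M rows) is small enough to fit on each executor,
# a broadcast join removes the shuffle of the 10B-row side entirely:
#   from pyspark.sql.functions import broadcast
#   df1.join(broadcast(df2), "join_key")
```

If AQE's skew-join splitting fixes the straggler, that confirms a hot join key; salting the key is the manual fallback on Spark 2.x.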
|
<python><apache-spark><pyspark>
|
2023-03-28 07:27:47
| 1
| 923
|
Chris_007
|
75,863,152
| 5,197,270
|
pytest fixture vs global variable
|
<p>I know that when multiple tests use the same variable, it should be defined as a fixture, so that it gets initialized once, and can be re-used.</p>
<p>What I don't understand, however, is what advantage it offers (apart from looking cleaner) over a simple global variable, that also gets initialized once, and is accessible to all tests as well.</p>
|
<python><pytest>
|
2023-03-28 06:50:45
| 2
| 411
|
scott_m
|
75,863,105
| 189,035
|
How to programmatically getting link to CSV behind javascript page?
|
<p>I'm using python and I'm trying to get the link from which the CSV come from when I click on the <code>DATA V CSV</code> button at the bottom of <a href="https://www.ceps.cz/en/all-data#AktualniSystemovaOdchylkaCR" rel="nofollow noreferrer">this page</a>.</p>
<p>I tried <code>beautifulsoup</code>:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
url = 'https://www.ceps.cz/en/all-data#AktualniSystemovaOdchylkaCR'
response = requests.get(url)
soup = BeautifulSoup(response.content, 'html.parser')
# Find the link to the CSV file
csv_link = soup.find('a', string='DATA V CSV').get('href')
</code></pre>
<p>I also tried:</p>
<p><code>soup.find("button", {"id":"DATA V CSV"})</code></p>
<p>but it doesn't find the link behind <code>DATA V CSV</code>.</p>
|
<python><web-scraping><beautifulsoup>
|
2023-03-28 06:44:38
| 2
| 5,809
|
user189035
|
75,862,874
| 11,901,732
|
String format empty string caused extra space in print
|
<p>I want to string format a sentence as below:</p>
<pre><code>integer = 1
if integer != 1:
    n_val, book = integer, 'books'
else:
    n_val, book = '', 'book'

print(f'Fetch the top {n_val} children {book}.')
</code></pre>
</code></pre>
<p>and I expected to see:</p>
<pre><code>Fetch the top 3 children books.
</code></pre>
<p>or</p>
<pre><code>Fetch the top children book.
</code></pre>
<p>It works if integer is not 1; however, when <code>integer = 1</code>, the string format gives me an additional space in the place of the integer, as shown below (note the double space after "top"):</p>
<pre><code>Fetch the top  children book.
</code></pre>
<p>How do I get rid of the space when <code>integer =1</code>?</p>
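The extra space is the literal one between <code>{n_val}</code> and <code>children</code> in the f-string; when <code>n_val</code> is empty, that separator is still printed. One fix is to fold the space into the conditional fragment so it disappears together with the number (the function name here is invented for the sketch):

```python
def fetch_sentence(integer):
    # the trailing space travels with the number, so it vanishes with it
    n_part = f'{integer} ' if integer != 1 else ''
    book = 'books' if integer != 1 else 'book'
    return f'Fetch the top {n_part}children {book}.'

print(fetch_sentence(3))  # Fetch the top 3 children books.
print(fetch_sentence(1))  # Fetch the top children book.
```

The same trick works for any optional fragment: build `"value + separator"` as one unit instead of placing the separator in the template.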
|
<python><string-formatting>
|
2023-03-28 06:15:45
| 1
| 5,315
|
nilsinelabore
|
75,862,688
| 3,573,626
|
Python Postgres - psycopg2 insert onto a table with columns that includes curly bracket
|
<p>I have the following function that insert dataframe into a postgres table:</p>
<pre><code>def insert(conn, df, table, return_field_list):
    tuples = [tuple(x) for x in df.to_numpy()]
    cols = ','.join(list(df.columns))
    query = "INSERT INTO {} ({}) VALUES (%%s)".format(table, cols)
    sub_stmt = ' RETURNING {} '.format(' ,'.join(return_field_list))
    query += sub_stmt
    cursor = conn.cursor()
    try:
        extras.execute_values(cursor, query, tuples)
        conn.commit()
        return cursor.fetchall()
    except (Exception, psycopg2.DatabaseError) as error:
        print("Error: %s" % error)
        conn.rollback()
        cursor.close()
        return None
    finally:
        if cursor is not None:
            cursor.close()
</code></pre>
<p>Some of my table columns include special characters, e.g. "Ni[%{wt}]", "S[%{wt}]".
I 'escaped' the dataframe columns with double quotes (except the id and name columns) before passing the dataframe to the function above:</p>
<pre><code>df.rename(columns = lambda col: f'"{col}"' if col not in ('id', 'name') else col, inplace=True)
</code></pre>
<p>However, the function returns the following error:</p>
<pre><code>Error: unsupported format character: '{'
</code></pre>
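A hedged diagnosis: before substituting parameters, psycopg2 scans the whole statement for %-style placeholders, so the literal <code>%</code> inside a column name like <code>Ni[%{wt}]</code> is read as the start of a format spec and the following <code>{</code> raises exactly this "unsupported format character" error. Doubling every literal <code>%</code> in the identifiers (in addition to the double quotes) should avoid it; separately, <code>execute_values</code> expects a single <code>VALUES %s</code> template, not <code>VALUES (%%s)</code>. A string-building sketch, no database needed (`my_table` is a placeholder):

```python
def quote_col(col):
    # Double-quote the identifier AND double any literal '%', which psycopg2
    # would otherwise parse as a placeholder introducer.
    return '"' + col.replace('%', '%%') + '"'

cols = ['id', 'name', 'Ni[%{wt}]', 'S[%{wt}]']
col_sql = ','.join(c if c in ('id', 'name') else quote_col(c) for c in cols)
query = "INSERT INTO my_table ({}) VALUES %s RETURNING id".format(col_sql)
print(query)
```

The more robust long-term route is composing the statement with `psycopg2.sql.Identifier` instead of hand-quoting, but the doubling rule above is the minimal change to the existing function.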
|
<python><postgresql><dataframe><escaping><special-characters>
|
2023-03-28 05:45:15
| 1
| 1,043
|
kitchenprinzessin
|
75,862,665
| 10,964,685
|
How to adjust cell line width for hexbin mapbox?
|
<p>Is it possible to adjust the cell linewidth using Plotly hexbin mapbox? I've had a look at the doco and can't find anything. If the linewidth can't be adjusted, can the color?</p>
<p>If I try to pass the linewidth parameter, it returns an error.</p>
<pre><code>import pandas as pd
import plotly.figure_factory as ff

df = pd.DataFrame({
    'LAT': [-45,-44,-42,-41,-46,-44,-43,-45,-41,-45,-47],
    'LON': [-70,-71,-72,-72,-73,-74,-71,-70,-72,-70,-72],
})

fig = ff.create_hexbin_mapbox(data_frame = df,
                              lat = 'LAT',
                              lon = 'LON',
                              nx_hexagon = 5,
                              opacity = 0.5,
                              labels = {'color': 'Point Count'},
                              linewidth = 0.5,
                              mapbox_style = 'carto-positron',
                              zoom = 4
                              )
fig.show()
</code></pre>
</code></pre>
<p>Output:</p>
<pre><code>TypeError: create_hexbin_mapbox() got an unexpected keyword argument 'linewidth'
</code></pre>
|
<python><plotly>
|
2023-03-28 05:40:28
| 1
| 392
|
jonboy
|
75,862,606
| 8,973,609
|
Calculate percentage of win attempts in pandas DataFrame
|
<p>I have the following pandas <code>DataFrame</code> and I am trying to solve a small lottery exercise. I would like to calculate the percentage of persons who won on their <code>nth</code> attempt (1st attempt, 2nd attempt, 3rd attempt, and so on...). For some reason I am getting a total percentage above 100%. Not sure why... Can anyone spot the problem?</p>
<p>Input:</p>
<pre><code>| person | result |
|--------|--------|
| a | loss |
| b | win |
| a | loss |
| c | loss |
| d | loss |
| c | loss |
| c | win |
| d | win |
</code></pre>
<p>Expected output:</p>
<pre><code>Percentage of people who won on their attempt 1: 25.00% # Person B
Percentage of people who won on their attempt 2: 25.00% # Person D
Percentage of people who won on their attempt 3: 25.00% # Person C
</code></pre>
<pre><code>attempts_per_person = data.groupby('person').count()
max_attempts = 3  # I only care about the first 3 attempts for now
res = []
for i in range(max_attempts):
    num_wins = data.loc[data.groupby('person').cumcount() == i, 'result'].eq('win').sum()
    res.append((num_wins / attempts_per_person.shape[0]) * 100)

for i, pct in enumerate(res):
    print(f"Percentage of people who won on their attempt {i+1}: {pct:.2f}%")
</code></pre>
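For comparison, a vectorized version of the same computation on the sample data: number each person's attempts with `cumcount`, count wins per attempt number, and divide by the number of distinct people. Note the percentages need not sum to 100, since some people (person a here) never win.

```python
import pandas as pd

data = pd.DataFrame({'person': ['a', 'b', 'a', 'c', 'd', 'c', 'c', 'd'],
                     'result': ['loss', 'win', 'loss', 'loss', 'loss', 'loss', 'win', 'win']})

data['attempt'] = data.groupby('person').cumcount() + 1   # 1-based attempt number
pct = (data.loc[data['result'] == 'win']
           .groupby('attempt').size()
           .div(data['person'].nunique())
           .mul(100))

for attempt, p in pct.items():
    print(f"Percentage of people who won on their attempt {attempt}: {p:.2f}%")
```

With these inputs each of attempts 1, 2 and 3 gives 25.00%; if a run of the original loop ever reports more than 100% in total, comparing its per-attempt counts against this version should show which attempt is being double-counted.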
|
<python><pandas>
|
2023-03-28 05:30:21
| 1
| 507
|
konichiwa
|
75,862,378
| 10,964,685
|
Plot difference between two Plotly hexbin maps
|
<p>I've seen posts relating to plotting the difference between two hexbin maps in matplotlib. I couldn't find anything executing the same process but for Plotly hexbin map box plots. If I have two separate hexbin subplots <code>(t, y)</code>, is it possible to produce a single plot that subtracts the difference between <code>t</code> and <code>y</code>?</p>
<pre><code>import pandas as pd
import plotly.express as px
import plotly.graph_objs as go
import plotly.figure_factory as ff
from plotly.subplots import make_subplots

data = pd.DataFrame({
    'Cat': ['t','y','y','t','t','t','t','y','y','y','t','y'],
    'LAT': [5,6,7,5,6,7,5,6,7,5,6,7],
    'LON': [10,11,12,10,11,12,10,11,12,10,11,12],
})
data = pd.concat([data]*5)

df_t = data[data['Cat'] == 't']
df_y = data[data['Cat'] == 'y']

fig = make_subplots(
    rows = 2,
    cols = 1,
    subplot_titles = ('t', 'y'),
    specs = [[{"type": "choroplethmapbox"}], [{"type": "choroplethmapbox"}]],
    vertical_spacing = 0.05,
    horizontal_spacing = 0.05
)

fig2 = ff.create_hexbin_mapbox(data_frame=df_t,
                               lat="LAT", lon="LON",
                               nx_hexagon=5,
                               opacity=0.5,
                               labels={"color": "Point Count"},
                               mapbox_style='carto-positron',
                               )
fig3 = ff.create_hexbin_mapbox(data_frame=df_y,
                               lat="LAT", lon="LON",
                               nx_hexagon=5,
                               opacity=0.5,
                               labels={"color": "Point Count"},
                               mapbox_style='carto-positron',
                               )

fig.add_trace(fig2.data[0], row=1, col=1)
fig.update_mapboxes(zoom=4, style='carto-positron')
fig.add_trace(fig3.data[0], row=2, col=1)
fig.update_mapboxes(zoom=4, style='carto-positron')
fig.update_layout(height=600, margin=dict(t=20, b=0, l=0, r=0))
fig.show()
</code></pre>
<p>intended output:</p>
<p>The bottom left bin for <code>t</code> has 15 points, while <code>y</code> has 5. So this will total 10. The middle bin has 10 points for both so will result in 0. The top right has 5 for <code>t</code> and 15 for <code>y</code>, coming to -10. But I'll set <code>vmin</code> to 0 to ensure no negative values.</p>
<p><a href="https://i.sstatic.net/Jn0Qf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jn0Qf.png" alt="enter image description here" /></a></p>
<p>Edit 2:</p>
<p>If I alter the input data to have different size arrays and include <code>min_count = 1</code> as a parameter, I get an error.</p>
<pre><code>data = pd.DataFrame({
'Cat': ['t','y','y','t','t','t','t','y','y','y','t','y','y'],
'LAT': [5,6,7,5,6,7,5,6,7,5,6,7,8],
'LON': [10,11,12,10,11,12,10,11,12,10,11,12,8],
})
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/var/folders/bf/09nyl3td65j2lty5m7138ndw0000gn/T/ipykernel_78237/2142200526.py in <module>
     47
     48 fig = go.Figure(fig2)
---> 49 fig.data[0]['z'] = (fig2.data[0]['z'] - fig3.data[0]['z']).clip(min=0)
     50 cmax, cmin = max(fig.data[0]['z']), min(fig.data[0]['z'])
     51
ValueError: operands could not be broadcast together with shapes (3,) (4,)
</code></pre>
|
<python><plotly>
|
2023-03-28 04:37:20
| 1
| 392
|
jonboy
|
75,862,353
| 14,154,784
|
Django form save: Object is None, but form is valid
|
<p>I expect the <a href="https://stackoverflow.com/questions/15184000/django-overriden-form-save-method-returns-none">answer here</a> is related to the problem I have, but it is unfortunately not the same and I have not been able to use it to solve the problem. I also tried following the <a href="https://stackoverflow.com/questions/45221097/add-data-to-django-form-before-it-is-saved">method here</a>, which also is not proving successful in solving this particular issue.</p>
<p><strong>Here is the issue</strong>: I am trying to use a simple form to create a new object. Some of the info comes from the user, some of the info comes from the system. When I process the POST request, using simple print statements I can see that the form is valid, and that it has the right data. But then when I go to save it tells me the object is None so it's not saving anything.</p>
<p>Below follows:</p>
<ul>
<li>views.py</li>
<li>forms.py</li>
<li>models.py</li>
<li>template.html</li>
</ul>
<p>What am I missing? Thank you!</p>
<hr>
<p><strong>views.py</strong></p>
<pre><code>def add_set_to_active_workout(request):
workout = get_active_workout()
if request.method == "POST":
form = SetForm(request.POST)
if form.is_valid():
print(f"Cleaned data is: {form.cleaned_data}") # This prints what I'd expect, namely, the user submitted data.
set = form.save(commit=False)
set.workout = workout
print(f"Set is {set}") # This prints: Set is Set object (None)
set.save()
print("set saved")
return HttpResponseRedirect(reverse("log:active_workout"))
else:
return render(request, "log/active_workout.html", {
"set_form": form,
'workout': workout
})
else:
return render(request, "log/active_workout.html", {
"set_form": SetForm(), 'workout': workout,
})
</code></pre>
<p><strong>The print statement <code>print(f"Set is {set}")</code> outputs <code>Set is Set object (None)</code>. This is the problem to solve.</strong> Here are the exact print statements:</p>
<p><code>Cleaned data is: {'exercise': <Exercise: Deadlift>, 'actual_weight': 225.0, 'actual_reps': 12, 'actual_difficulty': 'easy'}</code></p>
<p><code>Set is Set object (None)</code></p>
<p><code>set saved</code></p>
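<p>For what it's worth, as I understand it <code>Set object (None)</code> is just Django's default <code>__str__</code>, which renders the class name plus the primary key — and the pk stays <code>None</code> until <code>save()</code> actually hits the database. A minimal stand-in illustrating this (plain Python, not actual Django code):</p>

```python
# Minimal sketch (no database needed): Django's default Model.__str__
# renders "<ClassName> object (<pk>)", and pk is None until save() runs.
class FakeModel:
    def __init__(self):
        self.pk = None          # not saved yet

    def save(self):
        self.pk = 1             # the database assigns a primary key on save

    def __str__(self):
        return f"{type(self).__name__} object ({self.pk})"

obj = FakeModel()
print(obj)    # FakeModel object (None)  -- same shape as in the question
obj.save()
print(obj)    # FakeModel object (1)
```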
<br>
<p>forms.py</p>
<pre><code>class SetForm(forms.ModelForm):
class Meta:
model = Set
fields = ['exercise', 'actual_weight', 'actual_reps', 'actual_difficulty']
</code></pre>
<br>
<p>models.py</p>
<pre><code>class Set(models.Model):
exercise = models.ForeignKey(Exercise, related_name="sets", on_delete=models.SET("Exercise Deleted!"))
workout = models.ForeignKey(Workout, related_name="sets", on_delete=models.CASCADE)
actual_reps = models.PositiveSmallIntegerField()
actual_weight = models.FloatField()
difficulty_options = (
("easy", "easy"),
("medium", "medium"),
("hard", "hard")
)
actual_difficulty = models.CharField(choices=difficulty_options, max_length=6)
set_complete = models.DateTimeField(auto_now_add=True)
</code></pre>
<br>
<p>template.html</p>
<pre><code><h2>Add a Set:</h2>
<form action="{% url 'log:add_set_to_active_workout' %}" method="POST">
{% csrf_token %}
{{ set_form.as_p }}
<input type="submit" value="Save Set" class="btn btn-secondary">
</form>
</code></pre>
|
<python><django><django-models><django-views><django-forms>
|
2023-03-28 04:31:32
| 2
| 2,725
|
BLimitless
|
75,862,277
| 2,946,773
|
ATM cash withdraw algorithm to distribute notes using $20 and $50 notes only
|
<p>I want to begin by acknowledging that <strong>I know</strong> there are a ton of similar questions on SO and other websites, but all proposed solutions seem to have the same problem for my specific example.</p>
<p>Using only <strong>$20</strong> and <strong>$50</strong> notes, I'd like to calculate the smallest number of notes that adds up to the desired amount.</p>
<p>Although my question is language-agnostic, I'll use Python for simplicity. I see a lot of people suggesting something like this:</p>
<pre><code>def calculate_notes(amount, notes):
remainder = amount
results = {}
for note in notes:
n, remainder = divmod(remainder, note)
results[note] = n
return results
</code></pre>
<p>However, the method above returns the wrong result for many different scenarios, here are a couple:</p>
<pre><code>print(calculate_notes(110, [50, 20])) # Outputs {50: 2, 20: 0}, it should be {50: 1, 20: 3}
print(calculate_notes(130, [50, 20])) # Outputs {50: 2, 20: 1}, it should be {50: 1, 20: 4}
</code></pre>
<p>I mean, I can make it work by adding a bunch of "if" statements, but I'm wondering if there's a way to <em>calculate it properly</em>.</p>
<p>Invalid amounts like $10, $25 and $30 can be ignored.</p>
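<p>One possible fix, sketched under the assumption that there are exactly two denominations: instead of a pure greedy pass, start from the maximum count of the larger note and back off one at a time until the remainder divides evenly by the smaller note. For two denominations this yields the fewest notes whenever a solution exists:</p>

```python
def calculate_notes(amount, notes):
    """Return the fewest notes summing to `amount`, or None if impossible.

    Greedy with backtracking: try the most large notes first, then give
    one back at a time until the remainder divides evenly by the small note.
    Assumes exactly two denominations, e.g. [50, 20].
    """
    big, small = sorted(notes, reverse=True)
    for n_big in range(amount // big, -1, -1):
        remainder = amount - n_big * big
        if remainder % small == 0:
            return {big: n_big, small: remainder // small}
    return None  # e.g. $30 cannot be paid with 50s and 20s

print(calculate_notes(110, [50, 20]))  # {50: 1, 20: 3}
print(calculate_notes(130, [50, 20]))  # {50: 1, 20: 4}
```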
|
<python><algorithm>
|
2023-03-28 04:12:34
| 3
| 10,705
|
AndreFeijo
|
75,862,231
| 992,421
|
T5 multilabel classification using tf
|
<p>I am trying to do multilabel classification on a corpus of data which also has labels. The data looks like this after adding the task tag in front of each row:</p>
<pre><code>print(texts)
0    multilabel classification: how time changes th...
1    multilabel classification: hawaii has been in ...
2    multilabel classification: not all alaskans ar...
3    multilabel classification: you should read rap...
4    multilabel classification: giving stupid kids ...
</code></pre>
<p>I am trying to tokenize the above, and since there are multiple rows I am guessing I have to loop. What I am trying to understand is: how do I get the input_ids and attention_mask? Should I loop to get them for each row, or tokenize the entire text at once?
Am I doing it right above by adding the tag <code>multilabel classification:</code> to each row? I am totally confused about whether my assumption is wrong or whether this is the way to do it.</p>
<p>my code is:</p>
<pre><code>src_tokenized = TOKENIZER.encode_plus(
texts[0],
max_length=SRC_MAX_LENGTH,
pad_to_max_length=True,
truncation=True,
return_attention_mask=True,
return_token_type_ids=False,
return_tensors='tf'
)
src_input_ids = src_tokenized['input_ids']
src_attention_mask = src_tokenized['attention_mask']
t5_summary_ids = t5_model.generate(src_input_ids)
</code></pre>
<p>I feel I am doing it wrong by running it row by row, but I am not sure. Every multilabel classification example I found by googling uses PyTorch, not TF.</p>
<p>Appreciate all the help. TIA</p>
|
<python><tensorflow><huggingface-transformers>
|
2023-03-28 04:01:04
| 0
| 850
|
Ram
|
75,862,162
| 3,055,164
|
Programmatically invoking method chaining in Python
|
<p>I have a simple method which needs to be chained depending on the list of dictionaries. Following is the example.</p>
<pre><code>e = Example()
instance_copy = (e.perform_action("A", "Good")
                  .perform_action("B", "Very Good")
                  .perform_action("C", "Poor"))
</code></pre>
<p>Now, I want to automate this flow with the input given from dictionary</p>
<pre><code>e = Example()
d = {'A': 'Good', 'B': 'Very Good', 'C': 'Poor', 'D': 'Very Poor'}
# chain methods using e.perform_action(k, v) from dict `d.items()` identical to above
</code></pre>
<p>How can I achieve this in Python?</p>
<p>PS: Please ignore the class implementation; I am looking for a way to chain the method programmatically for an arbitrary number of calls (bounded by the size of the dict).</p>
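<p>One common sketch of this pattern uses <code>functools.reduce</code> over <code>d.items()</code>: each step feeds the object returned by the previous call into the next <code>perform_action</code>, exactly mirroring a hand-written chain. The <code>Example</code> class below is a minimal stand-in of my own (the original implementation isn't shown in the question) — the only assumption is that <code>perform_action</code> returns the instance, which any chainable method must do:</p>

```python
from functools import reduce

class Example:
    """Minimal stand-in: perform_action records its arguments and
    returns self so calls can be chained."""
    def __init__(self):
        self.actions = []

    def perform_action(self, key, value):
        self.actions.append((key, value))
        return self  # returning self is what makes chaining possible

d = {'A': 'Good', 'B': 'Very Good', 'C': 'Poor', 'D': 'Very Poor'}

# Equivalent to e.perform_action('A', 'Good').perform_action('B', ...)...
e = reduce(lambda obj, kv: obj.perform_action(*kv), d.items(), Example())
print(e.actions)
# [('A', 'Good'), ('B', 'Very Good'), ('C', 'Poor'), ('D', 'Very Poor')]
```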
|
<python><python-3.x><python-2.7>
|
2023-03-28 03:42:33
| 2
| 401
|
rishm.msc
|
75,861,803
| 4,259,243
|
How to have multiple sets of scatter markers with different colormaps in Plotly Express?
|
<p>I'm trying to make a plot with two (or more) sets of points that
are plotted according to different colormaps.</p>
<p>There is a similar question <a href="https://stackoverflow.com/questions/60458220/two-or-three-colorbars-for-one-plot-in-plotly">here</a>, however when I try to modify the example answer that used Express, I still only get one colormap for all points, and only one colorbar instead of two.</p>
<p>The following is a minimal (non-)working example. I want one cluster of points shown with the 'inferno' colormap, another set of points plotted using the 'viridis' colormap, and a colorbar for each:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import plotly.express as px
npoints = 100
data1 = np.random.rand(npoints,4)
columns = ['x','y','z','c']
df1 = pd.DataFrame(data1, columns=columns)
fig = px.scatter_3d(df1, x='x', y='y', z='z', color='c',
color_continuous_scale='inferno')
fig.update_coloraxes(colorscale="inferno")
fig.layout.coloraxis.colorbar.x = 1.1 # move 1st colorbar to make room for 2nd?
data2 = 0.5*np.random.rand(npoints,4)
df2 = pd.DataFrame(data2, columns=columns)
fig2 = px.scatter_3d(df2, x='x', y='y', z='z', color='c',
color_continuous_scale='viridis')
fig2.update_coloraxes(colorscale="viridis")
fig.add_traces(list(fig2.select_traces())) # add fig2 to fig
fig.show()
</code></pre>
<p>But instead all points have the same 'inferno' map and there's only one colorbar:</p>
<p><a href="https://i.sstatic.net/DcpTt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DcpTt.png" alt="resulting image only has one colormap" /></a></p>
<p>Can someone assist in fixing this?</p>
<p>(I don't understand why my use case is different from the similar question -- I tried similar code but it doesn't work. If my question were simply a "duplicate" then I wouldn't need to ask this.)</p>
<p><strong>EDIT</strong>: One other potential duplicate is here: <a href="https://stackoverflow.com/questions/67504743/plotly-how-to-use-two-color-scales-in-a-single-plotly-map-figure">Plotly: How to use two color scales in a single plotly map figure?</a>, however, trying to implement what I see there, i.e. similarly trying to move the colorbar layout, copying coloraxes, using <code>fig.add_trace(fig2.data[0])</code>,... still results in only one colormap for my case.</p>
<p><strong>EDIT 2</strong>: User error. Forgot to include the <code>fig['data'][1]...</code> info from code mentioned in EDIT 1. With that, it works as expected. My question is therefore a dupe. Closing.</p>
|
<python><plotly><colormap>
|
2023-03-28 02:06:00
| 0
| 1,542
|
sh37211
|
75,861,767
| 1,039,860
|
Is there any way to provide additional information about member variables in python?
|
<p>I want to create a function that creates a dialog from different classes, providing a text (or checkbox for boolean, etc.) input for each member variable. I understand how to get a list of the variables from a class and how to use hints to specify variable types, but I'd like to provide additional information (e.g. the description, a default value, whether it is required, whether it is a password, etc.)</p>
<p>I was hoping to use something like annotations to further describe each variable for this function to better infer how to present the input to the user (but they are only available for functions and not variables.)</p>
<p>Any suggestions would be most welcome</p>
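<p>One sketch worth considering: since Python 3.9, <code>typing.Annotated</code> does let you attach arbitrary metadata to variable annotations, and <code>get_type_hints(..., include_extras=True)</code> recovers it at runtime. The <code>Settings</code> class and the metadata dict keys below are hypothetical names of my own, just to illustrate the mechanism:</p>

```python
from typing import Annotated, get_type_hints

class Settings:
    # Annotated attaches arbitrary metadata (here: a dict) to each variable's hint
    host: Annotated[str, {"description": "Server address", "default": "localhost"}]
    password: Annotated[str, {"description": "Login secret", "password": True}]
    retries: Annotated[int, {"description": "Retry count", "default": 3}]

# include_extras=True keeps the Annotated wrapper instead of stripping it
hints = get_type_hints(Settings, include_extras=True)
for name, hint in hints.items():
    base_type = hint.__origin__       # the underlying type, e.g. str or int
    meta = hint.__metadata__[0]       # the metadata dict attached above
    print(name, base_type, meta)
```

<p>A dialog builder could then inspect <code>base_type</code> to pick the widget (checkbox for bool, text field for str) and <code>meta</code> for descriptions, defaults, and password masking.</p>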
|
<python><tags><introspection>
|
2023-03-28 01:59:26
| 0
| 1,116
|
jordanthompson
|
75,861,680
| 14,328,098
|
How to adjust the label position of the main scale?
|
<p><a href="https://i.sstatic.net/fNysR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fNysR.png" alt="enter image description here" /></a></p>
<p>I want to draw this type of graph below based on data in excel. In this way, I don't need to download the excel file from the server every time to generate it.</p>
<pre><code>plt.switch_backend('Agg')
plt.subplots_adjust(right = 0.98, top = 0.96, bottom=0.3,left=0.1)
ax = plt.gca()
ipc = range(1,7)
x = np.arange(0,len(ipc))
print(x)
ax.bar(x,ipc, edgecolor="b", width=0.3)
xmajorLocator = FixedLocator(np.arange(-0.5,len(ipc),2))
# xmajorFormatter = FormatStrFormatter(major)
xminorLocator = FixedLocator(range(0,len(ipc),1))
ax.xaxis.set_minor_locator(xminorLocator)
ax.xaxis.set_major_locator(xmajorLocator)
types=["apple","banana"]
times=["past","now","future"]
# plt.setp(ax.xaxis.get_majorticklabels(), rotation=90, fontsize=10)
plt.setp(ax.xaxis.get_minorticklabels(), rotation=90, fontsize=8)
add = lambda x,pos:types[pos%2]
sub = lambda x,pos:times[pos%3]
ax.xaxis.set_minor_formatter(add)
ax.xaxis.set_major_formatter(sub)
plt.xlim(-1,7)
ax.tick_params(axis ='x', which ='minor',pad = 0,
labelsize = 8, colors ='b')
ax.tick_params(axis ='x', which ='major', pad = 10, length=30, width=0.35,
labelsize = 20, colors ='k', )
</code></pre>
<p>By setting major and minor ticks, I achieved the result below.
But what I want is that the labels of the major ticks should be in the middle of each set of minor ticks.
How should I adjust it?</p>
<p>Or is there a better way to achieve the above functionality?
<a href="https://i.sstatic.net/9IQRB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9IQRB.png" alt="enter image description here" /></a></p>
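<p>For reference, one way to get the behaviour described above (a sketch, assuming Matplotlib ≥ 3.3 for callable formatters, and groups of two bars as in the example): place the major ticks at the centre of each group rather than at the boundaries, and hide their tick marks with <code>length=0</code> so only the centred labels remain:</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, as on the server
import matplotlib.pyplot as plt
from matplotlib.ticker import FixedLocator

types = ["apple", "banana"]
times = ["past", "now", "future"]
ipc = range(1, 7)
x = np.arange(len(ipc))

fig, ax = plt.subplots()
ax.bar(x, ipc, edgecolor="b", width=0.3)

# Minor ticks: one per bar, labelled with the type
ax.xaxis.set_minor_locator(FixedLocator(x))
ax.xaxis.set_minor_formatter(lambda val, pos: types[pos % 2])
plt.setp(ax.xaxis.get_minorticklabels(), rotation=90, fontsize=8)

# Major ticks: one per group of two bars, placed at the group centre;
# length=0 hides the tick mark itself so only the label shows
centers = np.arange(0.5, len(ipc), 2)          # 0.5, 2.5, 4.5
ax.xaxis.set_major_locator(FixedLocator(centers))
ax.xaxis.set_major_formatter(lambda val, pos: times[pos % 3])
ax.tick_params(axis="x", which="major", pad=25, length=0, labelsize=20)

ax.set_xlim(-1, 7)
fig.canvas.draw()
```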
|
<python><matplotlib>
|
2023-03-28 01:35:43
| 0
| 816
|
Gerrie
|
75,861,483
| 1,915,230
|
sorting/merging a binary file in-place, which translates to sorting an array that contains two parts - both of which are already sorted
|
<p>First and foremost, I'd like to stress that this code would never be run in production (I am fully aware that there are dedicated solutions called time-series databases). I'm just doing this to keep my brain active by solving a fun project. Here's the problem I'm trying to solve.</p>
<p>Imagine I'm gathering a list of samples. Each of these samples is actually a structure (more on which later) that I can serialize and deserialize into a stream of bytes. The idea is as follows: I start with an empty file and record first samples. Each portion of the samples can easily fit into memory (let's assume that there are 1000 of them each time I'd like to insert them). So because I start with an empty file, I would sort the new samples in memory and then simply write them into the file. On the second iteration, I'd do the same, but I'd append the new samples to the end of the file. Here comes the tricky part - I need to sort the file. I thought about couple of ideas:</p>
<ul>
<li>reading file back, sorting it in memory and writing it back to disk (which I would not like to do, given that the file can grow in size)</li>
<li>chunking the file - so that even if there are.. say, 1_000_000 samples recorded, I'd end up with 10 files with 100_000 samples in each of the chunks (which could be beneficial I suppose because I could very easily read the whole file back into memory, sort it there and save it back to the disk and I could introduce some modulo function to determine which chunk should I use)</li>
<li>sorting the file in place. That's the path I went through, since it seemed the least resource-intensive solution.</li>
</ul>
<p>Assumptions:</p>
<ul>
<li>the file on the disk, prior to inserting new samples is already sorted</li>
<li>I have a total control over the new samples, therefore I can sort them in memory as well.</li>
</ul>
<p>here's a piece of code I wrote for treating a file like a pseudo-array:</p>
<pre><code>import os
import struct
from datetime import datetime
from typing import BinaryIO
class FileWrapper:
def __init__(self, file: BinaryIO, fmt: str):
self.file = file
self.fmt = fmt
self.sizeof_struct = struct.calcsize(fmt)
def __getitem__(self, index: int) -> int:
buffer = self.read_at(index)
_data = struct.unpack(self.fmt, buffer)
# timestamp is always the first value
return _data[0]
def __len__(self) -> int:
self.file.seek(0, os.SEEK_END)
return self.file.tell() // self.sizeof_struct
def __setitem__(self, index: int, content: bytes) -> None:
self.file.seek(index * self.sizeof_struct)
self.file.write(content)
## helper method
def read_at(self, index: int) -> bytes:
self.file.seek(index * self.sizeof_struct)
buffer = self.file.read(self.sizeof_struct)
return buffer
def swap(self, i: int, j: int) -> None:
if i == j:
return
buffer_i = self.read_at(i)
buffer_j = self.read_at(j)
self[i] = buffer_j
self[j] = buffer_i
</code></pre>
<p>I can use this class in the following way:</p>
<pre><code>with open("input.bin", "r+b") as input_file:
fileWrapper = FileWrapper(input_file, "<IIdI")
...
</code></pre>
<p>because of how that's implemented, I can obtain the timestamp of the given sample via <code>__getitem__</code> and treat the file as a sort of array (a random-access file, for lack of a better word).</p>
<p>So before I started the tests on an actual file, I wanted to try this out on a "pure" array containing a list of integers. Needless to say, there is a multitude of options for sorting algorithms, however most of them don't sort in-place. What I wanted to avoid was a situation where I'd copy huge chunks of the file around. Here's the algorithm I came up with using a piece of paper (though I'm afraid it's essentially a bubble sort):</p>
<pre><code>def merge(arr, start, mid, end):
start2 = end
last_swap = 0
swaps = 0
lookups = 0
while start2 > mid:
lookups += 1
if arr[start] <= arr[start2]:
start += 1
else:
swaps += 1
arr[start], arr[start2] = arr[start2], arr[start]
if last_swap == 0:
last_swap = start
start += 1
if start >= start2:
start = last_swap
start2 -= 1
    n = len(arr)
    print(f"lookups: {lookups}; swaps: {swaps} for n={n}")
</code></pre>
<p>What worries me about this is the number of lookups and swaps. Here's the example invocation and some stats:</p>
<pre><code>data = [0, 1, 1, 2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 13, 13, 13, 13, 13, 13, 40]
new_items = [12, 5, 5, 5, 1, 1, 1, 0, 1, 18, 18, 8, 20]
new_items.sort(reverse=True)
num_new_items = len(new_items)
data.extend(new_items)
pa(data, num_new_items, "[input]")
size = len(data) - 1
merge(data, 0, size - num_new_items, size)
pa(data, num_new_items, "[final]")
[ ... ]
lookups: 326; swaps: 90 for n=33
[final]: [ 0 0 1 1 1 1 1 1 2 3 5 5 5 5 6 7 8 8 9 10] | [ 11 12 12 13 13 13 13 13 13 18 18 20 40] | start=None; start2=None
</code></pre>
<p>For 100_000 random integers and 1_000 "new" points the stats are <code>lookups: 95742262; swaps: 9716 for n=101000</code>, which is really, really bad.</p>
<p>Is it possible that it could damage the hard drive? Or would it just be desperately slow? There's one more approach I thought of, which would be the following:</p>
<ol>
<li>iterate over the already sorted file, and iterate over the new samples</li>
<li>if the n-th sample is less than the n-th record from the file, then:
<ul>
<li>create a new file, copying bytes from the input file from 0 up to the offset at position n</li>
<li>write samples until the value is greater than the (n+1)-th position in the original file</li>
</ul>
</li>
<li>repeat</li>
</ol>
<p>of course it violates the basic assumption that everything must be done in-place. Though maybe it would be better for the hardware? I was basically worried that if the file were fairly large, say, 4 gigabytes worth of binary data, then I'd end up copying huge blocks of the file over the hard drive.</p>
<p>initially I tried using radix sort, however I quickly realized that it would obliterate the hard drive as it moves the data <em>a lot</em> - that's the nature of it. I also tried insertion sort, merge sort and quicksort.</p>
<p>I'd like to stress that (to the best of my knowledge) this is not a typical sorting question. Basically, when we sort an array of integers in memory, both lookups and swaps are essentially "free", however when the disk is taken into account, I believe the nature is somewhat different.</p>
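<p>For comparison with the in-place approaches above, here is a sketch of the "new file" variant done with purely sequential I/O (assuming the <code>&lt;IIdI&gt;</code> record layout from the question, timestamp first). It is not in-place, but <code>heapq.merge</code> does a single sequential read of each sorted input and a single sequential write, and the <code>os.replace</code> swap is atomic — sequential access is generally far kinder to a disk than the random seeks an in-place merge needs:</p>

```python
import heapq
import os
import struct
import tempfile

FMT = "<IIdI"                 # same record layout as in the question
SIZE = struct.calcsize(FMT)

def iter_records(path):
    """Yield (timestamp, raw_bytes) pairs from an already-sorted binary file."""
    with open(path, "rb") as f:
        while chunk := f.read(SIZE):
            yield struct.unpack(FMT, chunk)[0], chunk

def merge_into(path, new_samples):
    """Merge already-sorted new samples (iterable of (timestamp, raw_bytes))
    into the sorted file with one sequential read and one sequential write,
    then atomically swap the result in place of the original."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
    with os.fdopen(fd, "wb") as out:
        merged = heapq.merge(iter_records(path), new_samples,
                             key=lambda rec: rec[0])
        for _, raw in merged:
            out.write(raw)
    os.replace(tmp, path)     # atomic rename on POSIX
```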
|
<python><algorithm><sorting><time-series><in-place>
|
2023-03-28 00:45:45
| 1
| 864
|
toudi
|
75,861,384
| 4,133,188
|
AttributeError: module 'tensorflow.compat.v2.__internal__' has no attribute 'monitoring'
|
<p>I recently installed <code>tensorflow</code> and <code>tensorflow_hub</code> in a <code>miniconda</code> environment and have an issue when trying to run a script. I receive the following error:</p>
<pre><code>Traceback (most recent call last):
File "/test/models.py", line 3, in <module>
import tensorflow_hub as hub
File "/miniconda3/envs/CFS/lib/python3.9/site-packages/tensorflow_hub/__init__.py", line 90, in <module>
from tensorflow_hub.estimator import LatestModuleExporter
File "/miniconda3/envs/CFS/lib/python3.9/site-packages/tensorflow_hub/estimator.py", line 62, in <module>
class LatestModuleExporter(tf_estimator.Exporter):
File "/miniconda3/envs/CFS/lib/python3.9/site-packages/tensorflow/python/util/lazy_loader.py", line 62, in __getattr__
module = self._load()
File "/miniconda3/envs/CFS/lib/python3.9/site-packages/tensorflow/python/util/lazy_loader.py", line 45, in _load
module = importlib.import_module(self.__name__)
File "/miniconda3/envs/CFS/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/miniconda3/envs/CFS/lib/python3.9/site-packages/tensorflow_estimator/__init__.py", line 10, in <module>
from tensorflow_estimator._api.v1 import estimator
File "/miniconda3/envs/CFS/lib/python3.9/site-packages/tensorflow_estimator/_api/v1/estimator/__init__.py", line 13, in <module>
from tensorflow_estimator._api.v1.estimator import tpu
File "/miniconda3/envs/CFS/lib/python3.9/site-packages/tensorflow_estimator/_api/v1/estimator/tpu/__init__.py", line 14, in <module>
from tensorflow_estimator.python.estimator.tpu.tpu_estimator import TPUEstimator
File "/miniconda3/envs/CFS/lib/python3.9/site-packages/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py", line 108, in <module>
_tpu_estimator_gauge = tf.compat.v2.__internal__.monitoring.BoolGauge(
AttributeError: module 'tensorflow.compat.v2.__internal__' has no attribute 'monitoring'
</code></pre>
<p>Package info is here:</p>
<pre><code>$ conda list
# packages in environment at /home/chris/miniconda3/envs/CFS:
#
# Name Version Build Channel
...
_tflow_select 2.3.0 mkl
...
keras-preprocessing 1.1.2 pyhd8ed1ab_0 conda-forge
...
numpy 1.22.3 py39hc58783e_2 conda-forge
...
pip 22.2.2 py39h06a4308_0
...
python 3.9.5 h12debd9_4 anaconda
...
setuptools 65.5.0 py39h06a4308_0
...
tensorboard 2.11.2 pyhd8ed1ab_0 conda-forge
tensorboard-data-server 0.6.0 py39hd97740a_2 conda-forge
tensorboard-plugin-wit 1.8.1 pyhd8ed1ab_0 conda-forge
tensorflow 2.4.1 mkl_py39h4683426_0
tensorflow-base 2.4.1 mkl_py39h43e0292_0
tensorflow-estimator 2.6.0 py39he80948d_0 conda-forge
tensorflow-hub 0.13.0 pyh56297ac_0 conda-forge
...
</code></pre>
<p>I don't have a tpu, but from the error, it looks like <code>tensorflow</code> is trying to use one?</p>
|
<python><tensorflow><anaconda><tensorflow-hub>
|
2023-03-28 00:19:08
| 1
| 771
|
BeginnersMindTruly
|
75,861,332
| 1,303,826
|
Build PySide2 for Apple Silicon architecture
|
<p>I’m trying to build my project dependencies for native Apple Silicon support. However, I’m having issues compiling PySide2. I know… I know… it’s too old and it’s better to use PySide6. We want to upgrade to that too, but my point is to get arm64 support working first.</p>
<p>To build PySide2 I’m using:</p>
<ul>
<li>macOS 12.6</li>
<li>XCode 14</li>
<li>I compiled Python 3.9 with no issues.</li>
<li>QT 5.15.10</li>
<li>libclang 12</li>
<li>pyside-setup (source code) 5.15.2.1</li>
</ul>
<p>My error message when I try to build PySide2 with <code>python setup.py install --qmake=...</code> is: <code>Something's broken. UCHAR_MAX should be defined in limits.h.</code></p>
<p>I realized that since QT 5.15.4 I can use the option <code>QMAKE_APPLE_DEVICE_ARCHS="x86_64 arm64"</code> but I’m not sure how to pass it to pyside-setup.</p>
<p>I have a couple questions:</p>
<ol>
<li><p>Is it possible to compile PySide2 (QT5) for arm?</p>
</li>
<li><p>How can I build PySide2 locally without a Qt license, just for development purposes? I have the license file only on CI, and when I try locally I get:</p>
<pre><code>Error: Qt license file was not found!
Project ERROR: License check failed! Giving up ...
error: Failed to query for Qt's QMAKE_MACOSX_DEPLOYMENT_TARGET.
</code></pre>
</li>
</ol>
|
<python><qt><pyside2>
|
2023-03-28 00:04:19
| 2
| 558
|
po5i
|
75,861,304
| 1,185,242
|
Can you calculate the average distance between a set of shapes in shapely?
|
<p>For a set of N polygons, what is the fastest way to calculate the average distance between each shape and its nearest neighbour? Here the nearest neighbour of a shape is the shape that has a point closer to any point on the current shape than any other shape does.</p>
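<p>As a baseline sketch (brute force, so O(N²) — for large N a spatial index such as Shapely's <code>STRtree</code> would be the faster route), <code>Polygon.distance</code> already returns the minimum point-to-point distance between two shapes, so the average nearest-neighbour distance is just a min over every other polygon, averaged. The <code>average_nn_distance</code> name and the example squares are my own:</p>

```python
from shapely.geometry import box

def average_nn_distance(polygons):
    """Mean over all polygons of the distance to their nearest neighbour.

    Brute force O(N^2): shape.distance(other) gives the minimum distance
    between any point of one shape and any point of the other.
    """
    nearest = []
    for i, a in enumerate(polygons):
        d = min(a.distance(b) for j, b in enumerate(polygons) if j != i)
        nearest.append(d)
    return sum(nearest) / len(nearest)

# Three unit squares with gaps of 1 and 4 between consecutive squares
squares = [box(0, 0, 1, 1), box(2, 0, 3, 1), box(7, 0, 8, 1)]
print(average_nn_distance(squares))  # (1 + 1 + 4) / 3 = 2.0
```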
|
<python><shapely>
|
2023-03-27 23:56:12
| 1
| 26,004
|
nickponline
|
75,861,236
| 2,717,373
|
Python list comprehension to create list of pairs, two at a time
|
<p>I am trying to generate a list of pairs using a comprehension, that creates the pairs two at a time. I can create a list of lists, where each sub-list is two pairs, e.g.,:</p>
<pre class="lang-py prettyprint-override"><code>mylist = [[(f'2i_{i}',2*i),(f'8i_{i}',8*i)] for i in range(1,4)]
# [[('2i_1', 2), ('8i_1', 8)], [('2i_2', 4), ('8i_2', 16)], [('2i_3', 6), ('8i_3', 24)]]
</code></pre>
<p>or a list of pairs of pairs, e.g.,:</p>
<pre class="lang-py prettyprint-override"><code>mylist = [((f'2i_{i}',2*i),(f'8i_{i}',8*i)) for i in range(1,4)]
# [(('2i_1', 2), ('8i_1', 8)), (('2i_2', 4), ('8i_2', 16)), (('2i_3', 6), ('8i_3', 24))]
</code></pre>
<p>but what I want it to look like is:</p>
<pre class="lang-py prettyprint-override"><code># [('2i_1', 2), ('8i_1', 8), ('2i_2', 4), ('8i_2', 16), ('2i_3', 6), ('8i_3', 24)]
</code></pre>
<p>Is there a simple way of using a comprehension to create a list like this?</p>
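<p>One common pattern for this (a sketch of one option, not the only one) is a double-<code>for</code> comprehension: the outer loop runs over <code>i</code> and the inner loop runs over a tuple of the two pairs, flattening as it goes:</p>

```python
# Outer loop over i, inner loop over the two pairs for that i:
# the result is a single flat list of pairs.
mylist = [pair
          for i in range(1, 4)
          for pair in ((f'2i_{i}', 2 * i), (f'8i_{i}', 8 * i))]
print(mylist)
# [('2i_1', 2), ('8i_1', 8), ('2i_2', 4), ('8i_2', 16), ('2i_3', 6), ('8i_3', 24)]
```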
|
<python><list><list-comprehension>
|
2023-03-27 23:38:10
| 0
| 1,373
|
guskenny83
|
75,861,042
| 303,624
|
Tkinter buttons disappear when window is resized
|
<p>This is the start of a simple text editor. Everything works, except if I make the window shorter, the buttons disappear. How can I solve this?</p>
<pre><code>from tkinter import *
top = Tk()
def main():
frame1 = Frame(top)
frame1.pack(side='top', fill=BOTH, expand=True)
scrollbar = Scrollbar(frame1, orient="vertical")
text = Text(frame1, height=40, width=80, yscrollcommand = scrollbar.set)
scrollbar.config(command=text.yview)
scrollbar.pack(side="right", fill=Y)
text.pack(side=TOP, fill=BOTH, expand=True)
frame2 = Frame(top)
frame2.pack(side=BOTTOM, fill=X)
loadButton = Button(frame2, text="Load", command=quit)
spacer = Label(frame2, text='If the buttons disappear, make the window taller')
quitButton = Button(frame2, text="Save and Quit", command=quit)
loadButton.pack(side=LEFT, pady=12)
spacer.pack(side=LEFT)
quitButton.pack(side=RIGHT)
mainloop()
def quit():
top.withdraw()
top.destroy()
main()
</code></pre>
|
<python><tkinter>
|
2023-03-27 22:56:07
| 1
| 1,263
|
David Matuszek
|
75,860,881
| 11,922,765
|
Python Find matching item in a list of dictionaries
|
<p>I have a big list of dictionaries. Each dictionary holds one time-series data point for a sensor: the sensor id, the time the data point was collected, and its value. I want to find the index of the entry for a specific sensor and date, so I can update that sensor's value.
My code:</p>
<pre><code>big_list = [
    {'sensor': 12, 'time': '2022-02-03', 'value': 10},
    {'sensor': 22, 'time': '2022-02-03', 'value': 12},
    {'sensor': 32, 'time': '2022-02-03', 'value': 24},
    {'sensor': 12, 'time': '2022-02-04', 'value': 17},
    {'sensor': 22, 'time': '2022-02-04', 'value': 13},
    {'sensor': 32, 'time': '2022-02-04', 'value': 21}]

# Find index of an item
item_to_find = {'time': '2022-02-03', 'sensor': 32}

# solution
big_list.index(item_to_find)
</code></pre>
<p>Present output:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: {'sensor': 32, 'time': '2022-02-03'} is not in list
</code></pre>
<p>Expected output:</p>
<pre><code>2
</code></pre>
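<p><code>list.index</code> fails here because it compares whole dictionaries for equality, and no element equals the partial dict. One sketch of an alternative (the <code>find_index</code> helper name is mine): scan with <code>enumerate</code> and match on a subset of keys:</p>

```python
big_list = [
    {'sensor': 12, 'time': '2022-02-03', 'value': 10},
    {'sensor': 22, 'time': '2022-02-03', 'value': 12},
    {'sensor': 32, 'time': '2022-02-03', 'value': 24},
    {'sensor': 12, 'time': '2022-02-04', 'value': 17},
]

def find_index(records, **criteria):
    """Index of the first dict whose items include all of `criteria`."""
    return next(
        (i for i, rec in enumerate(records)
         if all(rec.get(k) == v for k, v in criteria.items())),
        None,  # returned when nothing matches
    )

idx = find_index(big_list, time='2022-02-03', sensor=32)
print(idx)                    # 2
big_list[idx]['value'] = 99   # update the matched entry in place
```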
|
<python><list><numpy><dictionary><numpy-ndarray>
|
2023-03-27 22:26:16
| 7
| 4,702
|
Mainland
|
75,860,805
| 967,621
|
Pass dropdown menu selection to Pyodide
|
<p>I am trying to pass the value selected by the user from the dropdown menu to the Python code in Pyodide. In the simplified example below, I am trying to:</p>
<ul>
<li>Read the user-selected input file</li>
<li>Convert file contents to lowercase</li>
<li>Append the user-selected value of "strand" (for example, the string "negative")</li>
<li>Write the results to file 'out.txt' that can be downloaded by the user</li>
</ul>
<p>For the input file with contents <code>Hello World!</code>, I expect that the output file to have:</p>
<pre><code>hello world!
negative
</code></pre>
<p>Instead, I got:</p>
<pre><code>hello world!
[object HTMLSelectElement]
</code></pre>
<p>Full page code:</p>
<pre><code><!doctype html>
<html>
<head>
<script src="https://cdn.jsdelivr.net/pyodide/v0.22.1/full/pyodide.js"></script>
</head>
<body>
Input file:
<button>Select file</button>
<br>
<br>
Strand:
<select id="strand" class="form-control sprites-arrow-down" name="strand" value>
<option id="val" value="positive" selected>positive</option>
<option id="val" value="negative">negative</option>
</select>
<br>
<br>
<script type="text/javascript">
async function main() {
// Get the file contents into JS
const [fileHandle] = await showOpenFilePicker();
const fileData = await fileHandle.getFile();
const contents = await fileData.text();
var d = document;
d.g = d.getElementById;
var strand = d.g("strand");
// Create the Python convert toy function
let pyodide = await loadPyodide();
let convert = pyodide.runPython(`
from pyodide.ffi import to_js
def convert(contents, strand):
return to_js(contents.lower() + str(strand))
convert
`);
let result = convert(contents, strand);
console.log(result);
const blob = new Blob([result], {type : 'application/text'});
let url = window.URL.createObjectURL(blob);
var downloadLink = document.createElement("a");
downloadLink.href = url;
downloadLink.text = "Download output";
downloadLink.download = "out.txt";
document.body.appendChild(downloadLink);
}
const button = document.querySelector('button');
button.addEventListener('click', main);
</script>
</body>
</html>
</code></pre>
<p>The code is based on this post: <a href="https://stackoverflow.com/q/75806497/967621">Select and read a file from user's filesystem</a></p>
|
<javascript><python><html-select><webassembly><pyodide>
|
2023-03-27 22:13:03
| 1
| 12,712
|
Timur Shtatland
|
75,860,721
| 3,103,957
|
Since everything is an object in Python, which is the topmost object?
|
<p>In Python, it is said that everything is an object. Could someone please tell me which is the topmost object?</p>
<p>Do we have any such thing in Java? Java does have a parent class called "Object" from which all other classes inherit. Not everything is an object in Java.</p>
<p>Is the OO principle w.r.t the above, different between Python and Java?</p>
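<p>A quick interpreter check makes the relationship concrete: every class's MRO ends at <code>object</code>, <code>object</code> itself has no bases, and — unlike Java — even classes (including <code>type</code> itself) are objects:</p>

```python
# Every class ultimately inherits from `object`, and object has no base class
print(int.__mro__)         # (<class 'int'>, <class 'object'>)
print(object.__bases__)    # ()

# Unlike Java, classes themselves are objects too
assert isinstance(42, object)
assert isinstance(int, object)
assert isinstance(type, object)
assert type(object) is type           # object is an instance of type...
assert type.__bases__ == (object,)    # ...and type inherits from object
```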
|
<python><java>
|
2023-03-27 21:59:46
| 2
| 878
|
user3103957
|
75,860,641
| 673,859
|
big_modeling.py not finding the offload_dir
|
<p>I'm trying to load a large model on my local machine and trying to offload some of the compute to my CPU since my GPU isn't great (Macbook Air M2). Here's my code:</p>
<pre><code>from peft import PeftModel
from transformers import AutoTokenizer, GPTJForCausalLM, GenerationConfig
from transformers import BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
offload_folder="/Users/matthewberman/Desktop/offload"
model = GPTJForCausalLM.from_pretrained(
"EleutherAI/gpt-j-6B",
device_map="auto",
offload_folder=offload_folder,
quantization_config=quantization_config
)
model = PeftModel.from_pretrained(model, "samwit/dolly-lora", offload_dir=offload_folder)
</code></pre>
<p>However, I get this error:</p>
<pre><code>ValueError: We need an `offload_dir` to dispatch this model according to this `device_map`, the following submodules need to be offloaded: base_model.model.transformer.h.10, base_model.model.transformer.h.11, base_model.model.transformer.h.12, base_model.model.transformer.h.13, base_model.model.transformer.h.14, base_model.model.transformer.h.15, base_model.model.transformer.h.16, base_model.model.transformer.h.17, base_model.model.transformer.h.18, base_model.model.transformer.h.19, base_model.model.transformer.h.20, base_model.model.transformer.h.21, base_model.model.transformer.h.22, base_model.model.transformer.h.23, base_model.model.transformer.h.24, base_model.model.transformer.h.25, base_model.model.transformer.h.26, base_model.model.transformer.h.27, base_model.model.transformer.ln_f, base_model.model.lm_head.
</code></pre>
<p>I am definitely pointing to a valid offload directory as the previous method uses offload_folder successfully (I see things being put in there).</p>
<p>What am I doing wrong?</p>
|
<python><peft>
|
2023-03-27 21:47:46
| 1
| 8,651
|
Matthew Berman
|
75,860,567
| 17,810,039
|
Python Flet AttributeError: module 'flet_core.page' has no attribute 'controls'
|
<p>In summary, I am at the very beginning of the project.</p>
<p>What I'm trying to do is use a custom 'navigation rail' for navigation between pages. Now there is a problem: when I use the custom navigation rail alone, the program works, but when I try to integrate it into my project, I get this error:
<code>AttributeError: module 'flet_core.page' has no attribute 'controls'</code></p>
<p>This is the code that works when I use the custom navigation rail alone.</p>
<pre><code>import flet
from flet import *
from functools import partial
import time
class ModernNavBar(UserControl):
def __init__(self,func):
self.func = func
super().__init__()
def HighLight(self,e):
print("e = "+str(e.data))
if e.data == "true":
print("entered")  # was "girdi", Turkish for "entered"
e.control.bgcolor = "white10"
e.control.update()
e.control.content.controls[0].icon_color = "white"
e.control.content.controls[1].color = "white"
e.control.content.update()
else:
e.control.bgcolor=None
e.control.update()
e.control.content.controls[0].icon_color = "white54"
e.control.content.controls[1].color = "white54"
e.control.content.update()
def UserData(self,initialise:str,name:str,description:str):
return Container(
content=Row(
controls=[
Container(
width=42,
height=42,
bgcolor="bluegrey900",
alignment=alignment.center,
border_radius=8,
content=Text(
color="white",
value=initialise,
size=20,
weight="bold",
),
),
Column(
alignment=MainAxisAlignment.CENTER,
spacing=1,
controls=[
Text(
color="white",
value=name,
size=11,
weight="bold",
opacity=1,
animate_opacity=200,
),
Text(
color="white54",
value=description,
size=9,
weight="bold",
opacity=1,
animate_opacity=200,
),
]
)
]
)
)
def ContainedIcon(self,icon_name:str,text:str):
return Container(
width=180,
height=45,
border_radius=10,
on_hover=lambda e: self.HighLight(e),
on_click=None,
content=Row(
controls=[
IconButton(
icon=icon_name,
icon_size=18,
icon_color="white54",
style=ButtonStyle(
shape={
"":RoundedRectangleBorder(radius=7)
},
overlay_color={"":"transparent"}
)
),
Text(
color="white54",
value=text,
size=9,
weight="bold",
opacity=1,
animate_opacity=200,
),
]
)
)
def build(self):
return Container(
width=200,
height=580,
padding=padding.only(top=10),
alignment=alignment.center,
content=Column(
alignment=MainAxisAlignment.START,
horizontal_alignment=CrossAxisAlignment.CENTER,
controls=[
self.UserData("LI","Line Indent","Software"),
Container(
width=24,
height=24,
bgcolor="bluegrey800",
border_radius=8,
on_click=self.func
),
Divider(height=5,color="transparent"),
self.ContainedIcon(icons.DASHBOARD,"Dashboard"),
self.ContainedIcon(icons.ANALYTICS, "Analytics"),
Divider(height=50,color="white24"),
self.ContainedIcon(icons.SETTINGS, "Settings"),
]
)
)
def main(page: Page):
page.title = "Flet Modern Sidebar"
page.horizontal_alignment = "center"
page.vertical_alignment = "center"
def AnimateSideBar(e):
if page.controls[0].width != 62:
for item in (
page.controls[0]
.content.controls[0]
.content.controls[0]
.content.controls[1]
.controls[:]
):
item.opacity = 0
item.update()
for items in page.controls[0].content.controls[0].content.controls[3:]:
if isinstance(items, Container):
items.content.controls[1].opacity = 0
items.content.update()
time.sleep(0.2)
page.controls[0].width = 62
page.controls[0].update()
else:
page.controls[0].width = 200
page.controls[0].update()
time.sleep(0.2)
for item in (
page.controls[0]
.content.controls[0]
.content.controls[0]
.content.controls[1]
.controls[:]
):
item.opacity = 1
item.update()
for items in page.controls[0].content.controls[0].content.controls[3:]:
if isinstance(items, Container):
items.content.controls[1].opacity = 1
items.content.update()
page.add(Container(
content=ModernNavBar(AnimateSideBar),
padding=10,
width=200,
height=580,
bgcolor="black",
border_radius=20,
alignment=alignment.center,
animate = animation.Animation(500, "decelerate")
))
page.update()
print(str(page.controls[0]))
if __name__ == "__main__":
flet.app(target=main)
</code></pre>
<p>When you run the page you see above, the program works perfectly.However, when I try to integrate it into a project as I will show below, I get an error.</p>
<p>I explain my code below:</p>
<ul>
<li>main page = the part required for the application to start</li>
<li>route page = I control my app's navigation processes.</li>
<li>home page = the first page my application opens. On this page, I put the navigation rail in a row. There will be navigation rail on the left side of the window, and other content on the right. When I press the button that triggers the animation inside the navigation rail, it gives an error</li>
</ul>
<p>What I'm trying to do is fix the animation of the navigation rail and write a function for each item to go to another page</p>
<p><strong>main page</strong></p>
<pre><code> from flet import *
from core.init.navigation.route import Router
def main(page : Page):
def route_change(route):
page.views.clear()
page.views.append(
Router(page)[page.route]
)
page.on_route_change=route_change
page.go("/")
app(target=main)
</code></pre>
<p><strong>route page</strong></p>
<pre><code> from flet import *
from product.screens.main.main import Main
def Router(page):
return {
"/":View(
route="/",
controls=[
Main(page)
]
),
"/b": View(
route="/b",
controls=[
Container(height=800, width=350, bgcolor="blue")
]
),
}
</code></pre>
<p><strong>home page</strong></p>
<pre><code> import time
from flet import *
from product.components.side_bar import ModernNavBar
class Main(UserControl):
def __init__(self,page):
super().__init__()
self.page = page
page.controls.append(self)
page.update()
print("page ===== "+str(page.controls[0]))
def _getControlName(self):
return "Main"
def AnimateSideBar(e, self=None):
if page.controls[0].width != 62:
for item in (
page.controls[0]
.content.controls[0]
.content.controls[0]
.content.controls[1]
.controls[:]
):
item.opacity = 0
item.update()
for items in page.controls[0].content.controls[0].content.controls[3:]:
if isinstance(items, Container):
items.content.controls[1].opacity = 0
items.content.update()
time.sleep(0.2)
page.controls[0].width = 62
page.controls[0].update()
else:
page.controls[0].width = 200
page.controls[0].update()
time.sleep(0.2)
for item in (
page.controls[0]
.content.controls[0]
.content.controls[0]
.content.controls[1]
.controls[:]
):
item.opacity = 1
item.update()
for items in page.controls[0].content.controls[0].content.controls[3:]:
if isinstance(items, Container):
items.content.controls[1].opacity = 1
items.content.update()
def build(self):
return Container(
animate=animation.Animation(500, "decelerate"),
height=800,
width=200,
bgcolor="bluegrey",
content=Row(
controls=[
ModernNavBar(self.AnimateSideBar),
Text("Welcome to the homepage"),
]
)
)
</code></pre>
<p><strong>When you click the button you see in the photo, it gives the following error.</strong></p>
<p><a href="https://i.sstatic.net/zSs9C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zSs9C.png" alt="enter image description here" /></a></p>
<blockquote>
<p>Exception in thread Thread-8: Traceback (most recent call last):<br />
File
"C:\Users\Yonet\AppData\Local\Programs\Python\Python39\lib\threading.py",
line 973, in _bootstrap_inner
self.run() File "C:\Users\Yonet\AppData\Local\Programs\Python\Python39\lib\threading.py",
line 910, in run
self._target(*self._args, **self._kwargs) File "D:\pycharmProjectFolder\pythonProject\venv\lib\site-packages\flet_core\event_handler.py",
line 28, in __sync_handler
h(r) File "D:\pycharmProjectFolder\pythonProject\product\screens\main\main.py",
line 24, in AnimateSideBar
if page.controls[0].width != 62: AttributeError: module 'flet_core.page' has no attribute 'controls'</p>
</blockquote>
|
<python><flet>
|
2023-03-27 21:35:12
| 1
| 382
|
Hasancan Çakıcıoğlu
|
75,860,309
| 16,305,340
|
socket is terminating for some reason
|
<p>I am a newbie in Python and I am trying to build a server-client program. When either the server or the client sends something, the other side should keep listening until the sender finishes its message, so I wrote a function called <code>recvTillTimeOut</code> which is supposed to receive until a timeout occurs. This is its body:</p>
<pre><code>def recvTillTimeOut(s):
s.settimeout(1)
data = b''
while True:
try:
data += s.recv(1024)
print(data)
except:
print("error")
break
return data
</code></pre>
<p>but every time, the console reports an exception in this function right after the connection between the server and the client is established, so I don't know what I did wrong.</p>
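<p>A timeout is a fragile end-of-message signal (a slow network and a finished sender look identical to the receiver). A common alternative, sketched below with made-up messages, is to length-prefix every message so the receiver knows exactly how many bytes to read:</p>

```python
import socket
import struct

def send_msg(sock, data: bytes) -> None:
    # Prefix each message with a 4-byte big-endian length header.
    sock.sendall(struct.pack('>I', len(data)) + data)

def recv_exact(sock, n: int) -> bytes:
    # Read exactly n bytes, looping until done or the peer closes.
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed before full message arrived")
        buf += chunk
    return buf

def recv_msg(sock) -> bytes:
    # Read the length header first, then exactly that many payload bytes.
    (length,) = struct.unpack('>I', recv_exact(sock, 4))
    return recv_exact(sock, length)

# Demo with a local socket pair (no network or second process needed).
a, b = socket.socketpair()
send_msg(a, b'hello from the server')
print(recv_msg(b))  # b'hello from the server'
a.close(); b.close()
```

With this framing, <code>send_msg</code>/<code>recv_msg</code> would replace the raw <code>send</code>/<code>recv</code> calls on both sides, and no timeout is needed to delimit messages.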
<hr />
<h2>server code</h2>
<pre><code>import socket
# package needed in serializing and deserializing data
import pickle
# this is a function to receive till timeout occurs
def recvTillTimeOut(s):
s.settimeout(1)
data = b''
while True:
try:
data += s.recv(1024)
print(data)
except:
print("error")
break
return data
# this is the least number of bits needed for encryption/decryption to be done correctly
# generating both public and private keys
PublicKey = [2342314, 422421]
# Creating a socket
mySocket = socket.socket()
# getting hostname
host = socket.gethostname()
# the port that communication will happen through
portNum = 12345
# the port isn't created yet, as I am a server
mySocket.bind((host, portNum))
# make at most 2 clients to be served simultaneously
mySocket.listen(2)
# accept a connection
clientConn, clientAddr = mySocket.accept()
try:
# receive public key of the client and deserialize IT
data = recvTillTimeOut(mySocket)
clientkey = pickle.loads(data)
print(clientkey)
# serialize the public key and send it
data = pickle.dumps(PublicKey)
mySocket.send(data)
while True:
# receiving data from the client
data = clientConn.recv(1024).decode()
if not data:
# if the connection is closed then end the program
break
# printing the received message
print("from connected user: " + str(data))
# send response to the client
data = input(" -> ")
clientConn.send(data.encode())
except:
# close the socket
mySocket.close()
# close the socket
mySocket.close()
</code></pre>
<hr />
<h2>Client code</h2>
<pre><code>import socket
import pickle
# this is a function to receive till timeout occurs
def recvTillTimeOut(s):
s.settimeout(1)
data = b''
while True:
try:
data += s.recv(1024)
except:
break
return data
# generating both public and private keys
PublicKey = [100123120, 5124210]
# getting hostname
host = socket.gethostname()
# the port that communication will happen through
portNum = 12345
# Creating a socket
mySocket = socket.socket()
# connect to the server
mySocket.connect((host, portNum))
try:
# serialize the public key and send it
data = pickle.dumps(PublicKey)
mySocket.send(data)
# receive public key of the server and deserialize IT
data = recvTillTimeOut(mySocket)
serverkey = pickle.loads(data)
print(serverkey)
# the port is already open, so I am a client
while True:
# take input from the user
message = input(" -> ")
# end the connection
if message == 'bye':
break
# send message to the server
mySocket.send(message.encode())
# receive data from the server
data = mySocket.recv(1024).decode()
print('Received from server: ' + data)
except:
# close the socket
mySocket.close()
# close the socket
mySocket.close()
</code></pre>
|
<python><sockets>
|
2023-03-27 20:58:00
| 1
| 1,893
|
abdo Salm
|
75,860,270
| 14,461,379
|
Why does flask redirect parse character & to HTML entity & for external URLs?
|
<p>I want to redirect users to an external URL with parameters in a flask application that I've set up, I use <code>urlencode</code> like so:</p>
<pre class="lang-py prettyprint-override"><code>parameters = {'foo': 'bar', 'baz': 'qux'}
link = 'https://some.domain.com?%s' % urlencode(parameters)
return redirect(link, code=302)
</code></pre>
<p>If I cUrl my endpoint, let's say <code>curl https://my.domain.com/endpoint</code>, it returns a HTML document like so:</p>
<pre class="lang-html prettyprint-override"><code><!doctype html>
<html lang=en>
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to the target URL: <a href="https://some.domain.com?foo=bar&amp;baz=qux">https://some.domain.com?foo=bar&amp;baz=qux</a>. If not, click the link.
</code></pre>
<p>Note the link being parsed:</p>
<pre><code>https://some.domain.com?foo=bar&amp;baz=qux
</code></pre>
<p>If I simply print the <code>link</code> it is properly encoded:</p>
<pre><code>https://some.domain.com?foo=bar&baz=qux
</code></pre>
<p>So why does <code>redirect</code> generate this HTML page with <code>&</code> characters parsed as HTML entity <code>&amp;</code> where it is not required? How can I avoid this?</p>
<p>I am using Python <code>3.9.2</code>.</p>
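<p>For what it's worth, the escaping is an HTML rule rather than a Flask quirk: inside an HTML document, a literal <code>&amp;</code> must be written as the entity <code>&amp;amp;</code>, and browsers decode it before following the link. The <code>Location</code> header of the 302 response should carry the raw, unescaped URL; the HTML page is only a fallback for clients that ignore the header. A small sketch of the round trip:</p>

```python
from html import escape, unescape

url = 'https://some.domain.com?foo=bar&baz=qux'

html_view = escape(url)   # what ends up in the redirect page's HTML body
print(html_view)          # https://some.domain.com?foo=bar&amp;baz=qux

# A browser decodes the entity before following the link:
print(unescape(html_view) == url)  # True
```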
|
<python><flask><urlencode>
|
2023-03-27 20:52:47
| 0
| 331
|
gbrl
|
75,860,165
| 2,254,971
|
How should I read this python traceback/what is causing "breaks" in?
|
<p><strong>I know what a KeyError is and why I am getting it; this question is about what is happening to the call stack, not why the KeyError occurs. I am also not asking what the pandas operation does specifically — the question applies any time I see a call stack that appears to be "divided".</strong></p>
<p>I have experience with C# and .NET languages, but I needed to do some analysis and a friend suggested I use pandas in a Jupyter notebook. While trying to read a call stack generated by pandas after a KeyError, I realized I don't understand why a traceback is formatted the way it is. I'd like to compare traceback A and traceback B and understand why A behaves differently from B.</p>
<p>traceback A
<a href="https://i.sstatic.net/Hy7uv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Hy7uv.png" alt="enter image description here" /></a></p>
<p>traceback B
<a href="https://i.sstatic.net/6YQIg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6YQIg.png" alt="enter image description here" /></a></p>
<p>I understand every line of traceback B, it tells me exactly the order of calls and line numbers where my error is occurring. We go directly from one function call to another and only print out the exception name at the beginning along with the name and message at the end. What I don't understand is why we appear to reset the stack trace for the callstack in traceback A; I'm guessing maybe we catch the exception, print the traceback and then rethrow, but that's only a guess. What exactly is happening in the exception handling to cause this?</p>
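<p>For a concrete guess at what traceback A shows: an exception raised while another is being handled is <em>chained</em>, and Python prints both stacks separated by a line such as "During handling of the above exception, another exception occurred". Pandas does something like this internally, catching the low-level <code>KeyError</code> and raising a friendlier one. A minimal reproduction (the helper below is hypothetical, not pandas internals):</p>

```python
import traceback

def lookup(d, key):
    # Hypothetical stand-in for a library routine: catch the low-level
    # KeyError and raise a new, friendlier one while handling it.
    try:
        return d[key]
    except KeyError:
        raise KeyError(f"{key!r} not found in index")

try:
    lookup({"a": 1}, "b")
except KeyError as exc:
    # __context__ keeps the original exception that was being handled.
    print(type(exc.__context__).__name__)  # KeyError
    text = "".join(traceback.format_exception(type(exc), exc, exc.__traceback__))
    print("During handling of the above exception" in text)  # True
```

So the "reset" in the middle of traceback A is not a re-printed stack: it is the boundary between the stack of the original exception and the stack of the new one raised while handling it.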
|
<python><stack-trace>
|
2023-03-27 20:37:09
| 0
| 730
|
Sidney
|
75,860,164
| 12,436,050
|
Groupby and concatenate unique values by separator in Pandas dataframa
|
<p>I have following pandas dataframe.</p>
<pre><code> org_id org_name location_id loc_status city country
0 100023310 advance GmbH LOC-100052061 ACTIVE Planegg Germany
1 100023310 advance GmbH LOC-100032442 ACTIVE Planegg Germany
2 100023310 advance GmbH LOC-100042003 INACTIVE Planegg Germany
3 100004261 Beacon Limited LOC-100005615 ACTIVE Tunbridge Wells United Kingdom
4 100004261 Beacon Limited LOC-100000912 ACTIVE Crowborough United Kingdom
</code></pre>
<p>I would like to group the rows by the columns org_id and org_name, and for each of the other columns concatenate the unique values with a '|' separator.</p>
<p>I am using following lines of code.</p>
<pre><code>gr_columns = [x for x in df.columns if x not in ['location_id', 'loc_status','city', 'country']]
df.groupby(gr_columns).agg(lambda col: '|'.join(col))
</code></pre>
<p>However, the final dataframe has some of the columns missing (city and country). I am getting following output.</p>
<pre><code> org_id org_name location_id loc_status
1 100023310 advance GmbH LOC-100052061|LOC-100032442|LOC-100042003 ACTIVE|INACTIVE
2 100004261 Beacon Limited LOC-100005615 ACTIVE
</code></pre>
<p>With the following warning as well.</p>
<pre><code>
FutureWarning: Dropping invalid columns in DataFrameGroupBy.agg is deprecated. In a future version, a TypeError will be raised. Before calling .agg, select only columns which should be valid for the function.
df.groupby(gr_columns).agg(lambda col: ','.join(col))
</code></pre>
<p>The expected output is:</p>
<pre><code> org_id org_name location_id loc_status city country
1 100023310 advance GmbH LOC-100052061|LOC-100032442|LOC-100042003 ACTIVE|INACTIVE Planegg Germany
2 100004261 Beacon Limited LOC-100005615 ACTIVE Tunbridge Wells|Crowborough United Kingdom
</code></pre>
<p>Any help is highly appreciated.</p>
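<p>A sketch of one possible fix, using abbreviated sample rows: joining the <em>unique</em> values of each column after casting to string means the lambda returns a string for every column, so <code>agg</code> no longer drops any of them and the deprecation warning goes away:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'org_id': [100023310, 100023310, 100004261, 100004261],
    'org_name': ['advance GmbH', 'advance GmbH', 'Beacon Limited', 'Beacon Limited'],
    'loc_status': ['ACTIVE', 'INACTIVE', 'ACTIVE', 'ACTIVE'],
    'city': ['Planegg', 'Planegg', 'Tunbridge Wells', 'Crowborough'],
})

# One groupby pass; every non-grouped column is aggregated to a
# '|'-joined string of its unique values.
out = (df.groupby(['org_id', 'org_name'], sort=False)
         .agg(lambda col: '|'.join(col.astype(str).unique()))
         .reset_index())
print(out)
```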
|
<python><pandas><group-by><aggregate>
|
2023-03-27 20:37:04
| 3
| 1,495
|
rshar
|
75,859,965
| 21,420,742
|
Creating a column that that compares values from one Dataframe to another in Python
|
<p>I have 2 data frames: one is employee history and the other is a hiring report. What I want is to add a new column <strong>Vacancy</strong> to the first data frame: if a person's ID belongs to a manager, appears in both data frames, and has a hiring-report row whose status is not 'Open', then the role is marked Vacant. Below is what the data frames look like:</p>
<p>DF1:</p>
<pre><code>ID ManagerID Job Status
1 3 Sales Active
2 3 Sales Active
3 10 Sales Manager Active
4 5 Tech Active
5 12 Tech Manager Active
6 3 Sales Active
</code></pre>
<p>DF2:</p>
<pre><code>Hiring_ID Job_Title Status # of Roles
3 Associate Open 8
10 Consultant Open 3
10 Lead Open 1
7 Advisor Open 2
3 Cashier NaN 4
</code></pre>
<p>From here I tried using:</p>
<p><code>df['Vacant'] = np.where((df['ID'].isin(df2['Hiring_ID'])) &amp; (df2['Status'].isnull()), 'Yes', 'No')</code></p>
<p>With this code I get no results, when I should be getting <strong>ID 3 with Cashiers</strong> as a vacancy. Is there a better method than using <code>.isin</code>?</p>
<p>This is the output I am looking for:</p>
<pre><code> ID Manager_ID Job Status Vacancy
1 3 Sales Active No
2 3 Sales Active No
3 10 Sales Manager Active Yes
4 5 Tech Active No
5 12 Tech Manager Active No
6 3 Sales Active No
</code></pre>
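<p>One possible reading of the requirement, sketched with the sample data (an interpretation, not a definitive fix): the original attempt pairs <code>df['ID']</code> with <code>df2['Status']</code> row by row, which only lines up by position. Computing the set of IDs that have a non-Open (NaN) status row in df2 first makes the membership test unambiguous:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'ID': [1, 2, 3, 4, 5, 6],
                   'ManagerID': [3, 3, 10, 5, 12, 3],
                   'Job': ['Sales', 'Sales', 'Sales Manager', 'Tech',
                           'Tech Manager', 'Sales'],
                   'Status': ['Active'] * 6})
df2 = pd.DataFrame({'Hiring_ID': [3, 10, 10, 7, 3],
                    'Job_Title': ['Associate', 'Consultant', 'Lead',
                                  'Advisor', 'Cashier'],
                    'Status': ['Open', 'Open', 'Open', 'Open', np.nan]})

# IDs that have at least one hiring-report row whose status is NaN (not Open).
ids_with_nan_status = df2.loc[df2['Status'].isna(), 'Hiring_ID']

# Vacant = manager job title AND present among those IDs.
df['Vacancy'] = np.where(df['ID'].isin(ids_with_nan_status)
                         & df['Job'].str.contains('Manager'),
                         'Yes', 'No')
print(df[['ID', 'Job', 'Vacancy']])
```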
|
<python><python-3.x><pandas><dataframe><numpy>
|
2023-03-27 20:08:46
| 1
| 473
|
Coding_Nubie
|
75,859,915
| 3,726,546
|
Compare rows from the same dataframe and update/create new columns
|
<p>I am comparing values from different rows of the same dataframe and updating the <code>score</code> column with data from the afore matched row. The following code works fine for what I need, however it is not an efficient solution and takes hours to run for a 20K DF.</p>
<p>What is a faster vectorised way to achieve the same result?</p>
<p>I tried the below iterative method:</p>
<pre><code>df = pd.DataFrame({'date': ['2022-10-01', '2022-10-02', '2022-10-03', '2022-10-01', '2022-10-02'],
'symbol': ['XYZ', 'XYZ', 'XYZ', 'ABC', 'DEF'],
'tenor': ['2022-10-31', '2022-11-30', '2022-12-31', '2022-10-31', '2022-11-30'],
'score': [2, 3, 4, 6, 7],
})
ref_symbol = 'XYZ'
df_xyz = df.loc[df['symbol'] == ref_symbol]
sym_to_compare = list(df.symbol.unique())
sym_to_compare.remove(ref_symbol)
score_columns = [f'score_{sym}' for sym in sym_to_compare]
df[score_columns] = [np.NAN] * len(sym_to_compare)
for idx, row in df_xyz.iterrows():
for sym in sym_to_compare:
match = df.loc[(df.symbol == sym) & (df.tenor == row.tenor) & (df.date == row.date)]
if len(match.index):
df.at[idx, f'score_{sym}'] = match.score
</code></pre>
<p>Dataframes before and after transform:</p>
<pre><code># Original DF
date symbol tenor score
0 2022-10-01 XYZ 2022-10-31 2
1 2022-10-02 XYZ 2022-11-30 3
2 2022-10-03 XYZ 2022-12-31 4
3 2022-10-01 ABC 2022-10-31 6
4 2022-10-02 DEF 2022-11-30 7
# Transformed DF
date symbol tenor score score_ABC score_DEF
0 2022-10-01 XYZ 2022-10-31 2 6.0 NaN
1 2022-10-02 XYZ 2022-11-30 3 NaN 7.0
2 2022-10-03 XYZ 2022-12-31 4 NaN NaN
3 2022-10-01 ABC 2022-10-31 6 NaN NaN
4 2022-10-02 DEF 2022-11-30 7 NaN NaN
</code></pre>
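<p>One way to vectorise the nested loop (a sketch with the question's sample data): pivot the non-reference rows so each symbol becomes a <code>score_&lt;sym&gt;</code> column, then merge that back onto the frame by <code>(date, tenor)</code> in a single pass instead of a per-row lookup:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'date': ['2022-10-01', '2022-10-02', '2022-10-03', '2022-10-01', '2022-10-02'],
                   'symbol': ['XYZ', 'XYZ', 'XYZ', 'ABC', 'DEF'],
                   'tenor': ['2022-10-31', '2022-11-30', '2022-12-31', '2022-10-31', '2022-11-30'],
                   'score': [2, 3, 4, 6, 7]})
ref_symbol = 'XYZ'

# Wide table: one column per non-reference symbol, keyed by (date, tenor).
others = df[df['symbol'] != ref_symbol]
wide = (others.pivot_table(index=['date', 'tenor'], columns='symbol', values='score')
              .add_prefix('score_')
              .reset_index())

out = df.merge(wide, on=['date', 'tenor'], how='left')

# Only the reference rows should carry the comparison columns.
score_cols = [c for c in out.columns if c.startswith('score_')]
out.loc[out['symbol'] != ref_symbol, score_cols] = np.nan
print(out)
```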
|
<python><pandas><dataframe><group-by>
|
2023-03-27 20:01:40
| 2
| 409
|
B Jacob
|
75,859,799
| 2,256,085
|
return contour label positions
|
<p>The reproducible example below shows nearly concentric contours; it captures something I'm trying to do with matplotlib for comparing a 2D analytical solution to a numerical method solution. Because the two solution have small differences, the contours do not plot on top of one another.</p>
<pre><code>import numpy
import matplotlib.pyplot as plt
delta = 0.025
x = numpy.arange(-3.0, 3.0, delta)
y = numpy.arange(-2.0, 2.0, delta)
X, Y = numpy.meshgrid(x, y)
Z1 = numpy.exp(-X**2 - Y**2)
Z2 = numpy.exp(-(X - 1)**2 - (Y - 1)**2)
Z = (Z1 - Z2) * 2
Z2 = Z * 1.1
# adding labels to the line contours
fig, ax = plt.subplots()
CS = ax.contour(X, Y, Z)
CS2 = ax.contour(X, Y, Z2)
ax.clabel(CS, inline=1, fmt='%1d', fontsize=10)
#ax.clabel(CS2, inline=True, fmt='%1d', fontsize=10)
ax.set_title('Simplest default with labels')
plt.show()
</code></pre>
<p>Is there a way to return the position of where labels were added to CS so that a "blank patch" could be drawn on CS2?</p>
<p>Simply adding contour labels to <code>ax.clabel(CS2,...)</code> (uncomment the 3rd-to-last line) isn't quite what I would like to do. For example, in the plot below, generated by uncommenting the 3rd-to-ast line, the lower contour labels identified by "1" aren't near each other. In the real-world problem, it would be preferrable to simply label 1 set of contours and add a blank spot nearby in the other set of contours.</p>
<p><a href="https://i.sstatic.net/YgVGi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YgVGi.png" alt="enter image description here" /></a></p>
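<p>One possible handle on this: <code>ax.clabel</code> returns the label <code>Text</code> artists, so their data-coordinate positions can be read back afterwards — e.g. to decide where to place blank patches on CS2. A reduced sketch (coarser grid than the question, headless backend so it runs anywhere):</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for non-interactive runs
import numpy as np
import matplotlib.pyplot as plt

delta = 0.25
x = np.arange(-3.0, 3.0, delta)
y = np.arange(-2.0, 2.0, delta)
X, Y = np.meshgrid(x, y)
Z = (np.exp(-X**2 - Y**2) - np.exp(-(X - 1)**2 - (Y - 1)**2)) * 2

fig, ax = plt.subplots()
CS = ax.contour(X, Y, Z)

# clabel returns a list of Text artists; get_position() gives (x, y)
# in data coordinates for each label.
texts = ax.clabel(CS, inline=True, fmt='%1.1f', fontsize=10)
positions = [t.get_position() for t in texts]
print(positions[:3])
```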
|
<python><matplotlib>
|
2023-03-27 19:46:05
| 1
| 469
|
user2256085
|
75,859,754
| 8,487,782
|
Python Selenium Can not solving reCaptcha
|
<p>2captcha is not solving the captcha on my target site. I am trying to use Python with Selenium. My target site is: <a href="https://visa.vfsglobal.com/ind/en/ltu/login" rel="nofollow noreferrer">https://visa.vfsglobal.com/ind/en/ltu/login</a></p>
<p>I get a result from the 2captcha API, but when I click the button I get an error.</p>
<pre><code>pageurl = 'https://visa.vfsglobal.com/ind/en/ltu/login'
google_site_key = '6LfDUY8bAAAAAPU5MWGT_w0x5M-8RdzC29SClOfI'
service_key = '2CAPTCH KEY'
driver = webdriver.Chrome()
driver.get(pageurl)
WebDriverWait(driver, 50).until(
EC.invisibility_of_element_located((By.XPATH, '/html/body/div[1]')))
# Allow Cookie
try:
WebDriverWait(driver, 5) \
.until(EC.element_to_be_clickable((By.CSS_SELECTOR,
"div#onetrust-button-group button#onetrust-accept-btn-handler"))) \
.click()
except:
pass
email = driver.find_element(By.XPATH, '//*[@id="mat-input-0"]')
email.send_keys(str('EMAIL'))
password = driver.find_element(By.XPATH, '//*[@id="mat-input-1"]')
password.send_keys(str('PASSWORD'))
time.sleep(5)
url = "http://2captcha.com/in.php?key=" + service_key + "&method=userrecaptcha&googlekey=" + google_site_key + "&pageurl=" + pageurl
resp = requests.get(url)
if resp.text[0:2] != 'OK':
quit('Service error. Error code:' + resp.text)
captcha_id = resp.text[3:]
fetch_url = "http://2captcha.com/res.php?key="+ service_key + "&action=get&id=" + captcha_id
for i in range(1, 10):
time.sleep(5) # wait 5 sec.
resp = requests.get(fetch_url)
print(resp.text)
if resp.text[0:2] == 'OK':
break
driver.execute_script('var element=document.getElementById("g-recaptcha-response"); element.style.display="";')
driver.execute_script("""
document.getElementById("g-recaptcha-response").innerHTML = arguments[0]
""", resp.text[3:])
driver.execute_script('var element=document.getElementById("g-recaptcha-response"); element.style.display="none";')
login = driver.find_element(By.XPATH, "//span[contains(text(),'Sign In')]")
driver.execute_script("arguments[0].scrollIntoView();", login)
time.sleep(2)
driver.execute_script("arguments[0].click();", login)
</code></pre>
<p>See the attached screenshot for the error:</p>
<p><a href="https://i.sstatic.net/G1DoI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/G1DoI.png" alt="error" /></a></p>
|
<python><selenium-webdriver><captcha>
|
2023-03-27 19:40:34
| 0
| 400
|
A S M Saief
|
75,859,739
| 5,188,353
|
How to bulk insert into a Presto DB using Python
|
<p>In my environment, I have to handle a lot of different data saved in Excel and CSV files. My python script reads all these Excel and CSV files, extracts the relevant data into a dataframe, and does some transformation.</p>
<p>The final step is to load the data into a Presto DB. The current code simply loops over the data and inserts it into the Presto DB in batches of 500 records using an insert statement. This method seems complicated and slow. I also tried inserting every single record individually, but that performance is of course unacceptably slow.</p>
<p>Is there any elegant method to bulk insert records from external data sources (Excel, CSV, ...) to a Presto database using python (and maybe an elegant insert statement)?</p>
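<p>One common pattern (a sketch with hypothetical table and column names — the placeholder token and <code>execute</code> call depend on your DB-API driver) is to build a single multi-row <code>INSERT ... VALUES (...), (...)</code> statement per batch, so each round trip carries many rows:</p>

```python
from itertools import islice

def chunked(rows, size):
    # Yield successive batches of `size` rows from any iterable.
    it = iter(rows)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

def multi_row_insert(table, columns, batch):
    # One INSERT statement with a (?, ?, ...) group per row; parameter
    # placeholders keep quoting and escaping out of our hands.
    row_ph = '(' + ', '.join(['?'] * len(columns)) + ')'
    sql = (f"INSERT INTO {table} ({', '.join(columns)}) "
           f"VALUES {', '.join([row_ph] * len(batch))}")
    params = [v for row in batch for v in row]
    return sql, params

rows = [(1, 'a'), (2, 'b'), (3, 'c')]
for batch in chunked(rows, 2):
    sql, params = multi_row_insert('my_table', ['id', 'name'], batch)
    print(sql)
    # cursor.execute(sql, params)  # with your Presto/Trino DB-API cursor
```

For truly large loads, writing the data to a file on object storage and creating an external table over it is usually faster still, but the batching above already removes most per-statement overhead.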
|
<python><sql><insert><presto><bulkinsert>
|
2023-03-27 19:38:22
| 2
| 675
|
clex
|
75,859,706
| 3,605,534
|
How to calculate real accuracy in CIFAR100 Tensorflow after retrain pretrained InceptionResNetV2 model
|
<p>I was working with CIFAR100 dataset. I used InceptionResNetV2 to build my model. I trained for 25 epochs using data augmentation. Even though this dataset has 32x32 images, I upscaled them to 75x75. The result is as follow</p>
<p><a href="https://i.sstatic.net/0hZbx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0hZbx.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/1itI6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1itI6.png" alt="enter image description here" /></a></p>
<p>After that I saved the weights on a h5 file to work on it later. Now, I am loading the model as follow:</p>
<pre><code>from sklearn.metrics import classification_report
import tensorflow as tf, numpy as np
from keras.applications.inception_resnet_v2 import InceptionResNetV2
from keras.preprocessing.image import ImageDataGenerator
from keras.utils import to_categorical
from keras.layers import Dense, GlobalAveragePooling2D, Dropout
from keras.models import Model, Sequential
tf.keras.backend.clear_session()
(x_train, y_train), (x_test, y_test)=tf.keras.datasets.cifar100.load_data(label_mode='fine')
print(x_train.shape, y_train.shape, x_test.shape, y_test.shape)
#We found names of the classes For learning and visual reason
labels = ['apple', 'aquarium_fish', 'baby', 'bear', 'beaver', 'bed', 'bee', 'beetle', 'bicycle', 'bottle', 'bowl', 'boy', 'bridge', 'bus', 'butterfly',
'camel', 'can', 'castle', 'caterpillar', 'cattle', 'chair', 'chimpanzee', 'clock', 'cloud', 'cockroach', 'couch', 'crab', 'crocodile', 'cup',
'dinosaur', 'dolphin', 'elephant', 'flatfish', 'forest', 'fox', 'girl', 'hamster', 'house', 'kangaroo', 'computer_keyboard',
'lamp', 'lawn_mower', 'leopard', 'lion', 'lizard', 'lobster', 'man', 'maple_tree', 'motorcycle', 'mountain', 'mouse', 'mushroom',
'oak_tree', 'orange', 'orchid', 'otter', 'palm_tree', 'pear', 'pickup_truck', 'pine_tree', 'plain', 'plate', 'poppy', 'porcupine', 'possum',
'rabbit', 'raccoon', 'ray', 'road', 'rocket', 'rose',
'sea', 'seal', 'shark', 'shrew', 'skunk', 'skyscraper', 'snail', 'snake', 'spider', 'squirrel', 'streetcar', 'sunflower', 'sweet_pepper',
'table', 'tank', 'telephone', 'television', 'tiger', 'tractor', 'train', 'trout', 'tulip', 'turtle',
'wardrobe', 'whale', 'willow_tree', 'wolf', 'woman', 'worm']
train_datagen = ImageDataGenerator(
rescale=1.0/255.0,
height_shift_range=0.2,
width_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
rotation_range=40,
horizontal_flip=True,
vertical_flip=True,
fill_mode='nearest',
)
test_datagen = ImageDataGenerator(
# Your Code Here
rescale=1.0/255.0)
# Convert labels to one hot encoding matrix
print(y_train.shape, y_test.shape)
train_y = to_categorical(y_train, 100)
test_y = to_categorical(y_test, 100)
print(train_y.shape, test_y.shape)
def build_IncV4_model(input_shape, n_classes):
base4 = InceptionResNetV2(input_shape=input_shape,
weights="imagenet",
include_top=False,
classes=n_classes)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)
model = Sequential()
model.add(base4)
model.add(GlobalAveragePooling2D())
model.add(Dropout(0.5))
model.add(Dense(n_classes, activation='softmax'))
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
return model
mymodel = build_IncV4_model(input_shape=(75, 75, 3), n_classes=len(labels))
mymodel.summary()
mymodel.load_weights('./cifar100_incv4_weights.h5')
# Evaluate model on validation data
mymodel.evaluate(test_datagen.flow(tf.image.resize(x_test, [75, 75]), test_y))
</code></pre>
<p>The result of the evaluate function is: [1.1647623777389526, 0.7148000001907349]. The problem come out when I try to create a classification report using sklearn. All the metrics are 0. It only shows 1 in recall for the 73 class and I got this warning "UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use <code>zero_division</code> parameter to control this behavior.
_warn_prf(average, modifier, msg_start, len(result))". I am not sure if one hot encoding was applied properly.</p>
<pre><code>data = mymodel.predict(tf.image.resize(x_test, [75, 75]))
pred = np.argmax(data, axis=1)
print(classification_report(y_test, pred, labels=list(range(len(labels)))))
</code></pre>
<p>Thanks in advance for all your comments.</p>
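<p>One thing worth checking (an assumption, since the notebook isn't reproducible here): CIFAR-100's <code>y_test</code> has shape <code>(N, 1)</code> while <code>np.argmax(..., axis=1)</code> returns shape <code>(N,)</code>. Elementwise comparisons between the two silently broadcast to an <code>(N, N)</code> matrix, which can wreck any metric computed from them; flattening the labels first avoids it:</p>

```python
import numpy as np

y_test = np.array([[3], [7], [3]])   # CIFAR-style (N, 1) label column
pred = np.array([3, 7, 1])           # argmax output, shape (N,)

# (N, 1) vs (N,) broadcasts to an (N, N) matrix -- rarely what you want.
print((y_test == pred).shape)            # (3, 3)

# Flattening restores the intended per-sample comparison.
print((y_test.ravel() == pred).shape)    # (3,)
print((y_test.ravel() == pred).mean())   # per-sample accuracy
```

If the shapes check out, passing <code>y_test.ravel()</code> to <code>classification_report</code> together with the integer <code>pred</code> vector is the safe form.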
|
<python><tensorflow><scikit-learn>
|
2023-03-27 19:31:56
| 1
| 945
|
GSandro_Strongs
|
75,859,687
| 14,608,529
|
How to use variable volume control for mac via python?
|
<p>I found this Python code to control the Mac volume:</p>
<pre><code>osascript.osascript("set volume output volume 35")
</code></pre>
<p>I want to use a variable to control this volume instead like:</p>
<pre><code>volumes = [35, 35, 30, 35, 45, 45, 45, 45, 45, 40, 45, 45, 45, 45, 55, 45, 50, 45, 50, 55]
for i, n in enumerate(frequencies):
osascript.osascript("set volume output volume %s" (volumes[i]))
</code></pre>
<p>I'm getting this error though:</p>
<pre><code>TypeError: 'str' object is not callable
</code></pre>
<p>Is there any way I can input volume directly here?</p>
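<p>For reference, the <code>TypeError</code> comes from <code>"..." (volumes[i])</code> <em>calling</em> the string — the <code>%</code> operator (or an f-string) is what performs the interpolation. A minimal sketch with the <code>osascript</code> call commented out so it runs off macOS too:</p>

```python
volumes = [35, 30, 45]

for v in volumes:
    # "..." (v) would *call* the string, hence "'str' object is not callable".
    cmd = "set volume output volume %s" % v   # percent formatting
    cmd_f = f"set volume output volume {v}"   # equivalent f-string
    print(cmd)
    # osascript.osascript(cmd)  # would actually set the volume on macOS
```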
|
<python><python-3.x><string><macos><volume>
|
2023-03-27 19:30:16
| 3
| 792
|
Ricardo Francois
|
75,859,649
| 1,457,380
|
Negate findall in Python re module
|
<p>I would like to strip every instance of a solution environment in a <code>.tex</code> file (i.e., text between <code>\begin{solution}</code> and <code>\end{solution}</code>) and save the output to a new file.</p>
<p>The code snippet below grabs the solution environment: How can I negate that to strip it instead?</p>
<p>Applying a "negation" caret <code>^</code> as in <code>re.findall(r'^\\begin{solution}(.*?)\\end{solution}', data, re.S)</code> works <a href="https://regex101.com/r/JGTzst/2" rel="nofollow noreferrer">here</a>, but did not work for me within a Python session (it returned an empty list <code>[]</code>). I did try to wrap with parentheses.</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import re
data = open(file).read()
re.findall(r'\\begin{solution}(.*?)\\end{solution}', data, re.S)
</code></pre>
<p>Eventually I'll strip the two empty lines that are created by the substitution and save the cleaned text to a new file (without overwriting).</p>
<p><strong>Input:</strong></p>
<pre class="lang-none prettyprint-override"><code>\documentclass{article}
\usepackage{verbatim}
\title{Strip the solutions environment}
% solution environment
\newenvironment{solution}
{\noindent\textbf{Solution:\newline}}{}
\begin{document}
\section{Introduction}
\subsection{Objectives}
None whatsoever:
\begin{itemize}
\item True
\item False
\end{itemize}
\begin{solution}
The correct answer is True.
% adding a comment here
\begin{comment}
make this a bit of a challenge with a nested environment:
~ tilde
` backtick
{ open brace
} close brace
[ open bracket
] close bracket
! exclamation mark
% percent
^ carat
* asterisk
– hyphen
= equals
+ plus
_ underscore
| pipe
\ backslash
/ forward slash
@ at
: colon
; semicolon
< less than
> greater than
? question mark
. period
, comma
# pound
& ampersand
$ dollar
( open parenthesis
) close parenthesis
\end{comment}
\end{solution}
\end{document}
</code></pre>
<p><strong>Expected output:</strong></p>
<pre class="lang-none prettyprint-override"><code>\documentclass{article}
\usepackage{verbatim}
\title{Strip the solutions environment}
% solution environment
\newenvironment{solution}
{\noindent\textbf{Solution:\newline}}{}
\begin{document}
\section{Introduction}
\subsection{Objectives}
None whatsoever:
\begin{itemize}
\item True
\item False
\end{itemize}
\end{document}
</code></pre>
|
<python><python-3.x><regex><python-re>
|
2023-03-27 19:25:46
| 1
| 10,646
|
PatrickT
|
75,859,426
| 2,112,406
|
matplotlib change vertical padding only between two specific columns of subplots
|
<p>I'm creating a 2 by 3 array of plots with:</p>
<pre><code>fig, ax = plt.subplots(
nrows=2, ncols=3, figsize=(
22, 10), gridspec_kw={
'width_ratios': [
0.7, 0.7, 1]})
</code></pre>
<p>I can adjust the vertical spacing with</p>
<pre><code>plt.tight_layout(w_pad=5.0)
</code></pre>
<p>which adds a ton of space between each column in the array of plots.</p>
<p>But I want to put extra padding between column 2 and 3, and not between column 1 and 2. Is this possible?</p>
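One way this can be approximated (a sketch, not necessarily the canonical matplotlib approach) is to add a narrow, invisible spacer column between columns 2 and 3:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

# 4 grid columns: the third is a narrow spacer whose axes are hidden
fig, ax = plt.subplots(nrows=2, ncols=4, figsize=(22, 10),
                       gridspec_kw={'width_ratios': [0.7, 0.7, 0.15, 1]})
for a in ax[:, 2]:
    a.set_visible(False)  # hide the spacer column's axes
```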
|
<python><matplotlib>
|
2023-03-27 18:58:43
| 1
| 3,203
|
sodiumnitrate
|
75,859,357
| 2,595,546
|
changes() function in python sqlite3 wrapper?
|
<p>Python's sqlite3 library has a total_changes property on its connection object. However, I would like to know the number of changes made by my last query, which I would typically query using the sqlite3 changes() function.</p>
<p>Is there a wrapper for that function in the sqlite3 python library?</p>
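As far as I know there is no direct wrapper named changes(), but the same number can usually be obtained from Cursor.rowcount for DML statements, or by differencing total_changes around the statement. A sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")
con.execute("INSERT INTO t VALUES (1), (2), (3)")

before = con.total_changes
cur = con.execute("UPDATE t SET x = x + 1")
changed = con.total_changes - before   # rows changed by the last statement
print(changed, cur.rowcount)           # rowcount also reports modified rows for DML
```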
|
<python><database><sqlite>
|
2023-03-27 18:51:17
| 1
| 868
|
Fly
|
75,859,326
| 12,734,492
|
PySpark : How to merge two json columns to new column
|
<p>Pyspark Table:</p>
<pre><code>Col1
{'table': [{'name': 'XXS',
'ranges': {'chestc': {'min': 87.88, 'max': 87.88},
'waistc': {'min': 58.42, 'max': 58.42}}},
{'name': 'XS',
'ranges': {'chestc': {'min': 94.22, 'max': 94.22},
'waistc': {'min': 66.04, 'max': 66.04}}},
{'name': 'S',
'ranges': {'chestc': {'min': 100.58, 'max': 100.58},
'waistc': {'min': 73.66, 'max': 73.66}}},
{'name': 'M',
'ranges': {'chestc': {'min': 106.92, 'max': 106.92},
'waistc': {'min': 81.28, 'max': 81.28}}},
{'name': 'L',
'ranges': {'chestc': {'min': 114.54, 'max': 114.54},
'waistc': {'min': 93.98, 'max': 93.98}}},
{'name': 'XL',
'ranges': {'chestc': {'min': 122.16, 'max': 122.16},
'waistc': {'min': 106.68, 'max': 106.68}}},
{'name': 'XXL',
'ranges': {'chestc': {'min': 131.06, 'max': 131.06},
'waistc': {'min': 121.92, 'max': 121.92}}}],
'measurement_system': 'metric'}
Col2
{
"gender": "male",
"measurement_system": "metric",
"measurements": {
"height": 178,
"weight": 99
}
}
</code></pre>
<p><a href="https://i.sstatic.net/F8T9i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F8T9i.png" alt="enter image description here" /></a></p>
<p>I need create Col3 like :</p>
<pre><code>{params:{'table': [{'name': 'XXS',
'ranges': {'chestc': {'min': 87.88, 'max': 87.88},
'waistc': {'min': 58.42, 'max': 58.42}}},
{'name': 'XS',
'ranges': {'chestc': {'min': 94.22, 'max': 94.22},
'waistc': {'min': 66.04, 'max': 66.04}}},
{'name': 'S',
'ranges': {'chestc': {'min': 100.58, 'max': 100.58},
'waistc': {'min': 73.66, 'max': 73.66}}},
{'name': 'M',
'ranges': {'chestc': {'min': 106.92, 'max': 106.92},
'waistc': {'min': 81.28, 'max': 81.28}}},
{'name': 'L',
'ranges': {'chestc': {'min': 114.54, 'max': 114.54},
'waistc': {'min': 93.98, 'max': 93.98}}},
{'name': 'XL',
'ranges': {'chestc': {'min': 122.16, 'max': 122.16},
'waistc': {'min': 106.68, 'max': 106.68}}},
{'name': 'XXL',
'ranges': {'chestc': {'min': 131.06, 'max': 131.06},
'waistc': {'min': 121.92, 'max': 121.92}}}],
'measurement_system': 'metric'},
"gender": "male",
"measurement_system": "metric",
"measurements": {
"height": 178,
"weight": 99
}
}
</code></pre>
<p>When I try :</p>
<pre><code>table = table.withColumn(
"params",
F.struct(
F.col("Col1"),
F.col("Col2")
)
)
</code></pre>
<p>I get Row object like :</p>
<pre><code>Row(Col1='{"table":[{"name":"26W/30L","ranges":{"height":{"min":120.000000,"max":175.000000},"hipc":{"min":80.000000,"max":86.000000}}},{"name":"28W/32L","ranges":{"height":{"min":175.000000,"max":220.000000},"hipc":{"min":88.000000,"max":90.000000}}},{"name":"31W/32L","ranges":{"height":{"min":175.000000,"max":220.000000},"hipc":{"min":98.000000,"max":99.000000}}},{"name":"34W/30L","ranges":{"height":{"min":120.000000,"max":175.000000},"hipc":{"min":103.000000,"max":106.500000}}},{"name":"38W/30L","ranges":{"height":{"min":120.000000,"max":175.000000},"hipc":{"min":115.000000,"max":120.000000}}},{"name":"26W/32L","ranges":{"height":{"min":175.000000,"max":220.000000},"hipc{"min":80.000000,"max":86.000000}}}}],"measurement_system":"metric"}',
Col2='{"gender":"male","measurement_system":"metric","measurements":{"height":150,"weight":50}}')
</code></pre>
<p>(This is an example row.)</p>
<p>and when I try convert to JSON:</p>
<pre><code>table = table.withColumn(
"params",
F.to_json(
F.struct(
F.col("Col1"),
F.col("Col2")
)
)
)
</code></pre>
<p>I get json with slashes like:</p>
<pre><code>{
"params": {
"Col1": "{\"table\":[{\"name\":\"XXS\",\"ranges\":{\"height\":{\"min\":120.000000,\"max\":175.000000},\"hipc\":{\"min\":80.000000,\"max\":86.000000}}},{\"name\":\"XS\",\"ranges\":{\"height\":{\"min\":175.000000,\"max\":220.000000},\"hipc\":{\"min\":88.000000,\"max\":90.000000}}},{\"name\":\"M\",\"ranges\":{\"height\":{\"min\":175.000000,\"max\":220.000000},\"hipc\":{\"min\":98.000000,\"max\":99.000000}}},{\"name\":\"S\",\"ranges\":{\"height\":{\"min\":120.000000,\"max\":175.000000},\"hipc\":{\"min\":103.000000,\"max\":106.500000}}},{\"name\":\"L\",\"ranges\":{\"height\":{\"min\":120.000000,\"max\":175.000000},\"hipc\":{\"min\":115.000000,\"max\":120.000000}}},{\"name\":\"XL\",\"ranges\":{\"height\":{\"min\":175.000000,\"max\":220.000000},\"hipc\":{\"min\":80.000000,\"max\":86.000000}}},{\"name\":\"XXL\",\"ranges\":{\"height\":{\"min\":175.000000,\"max\":220.000000},\"hipc\":{\"min\":86.000000,\"max\":88.000000}}}}}],\"measurement_system\":\"metric\"}",
"Col2": "{\"gender\":\"male\",\"measurement_system\":\"metric\",\"measurements\":{\"height\":150,\"weight\":50}"
}
}
</code></pre>
<p>The constraint is that I cannot use UDFs, plain (non-PySpark) Python functions, or actions like collect(). I need to stay in PySpark.</p>
<p>How I can get a column in normal JSON format?</p>
|
<python><json><apache-spark><pyspark><apache-spark-sql>
|
2023-03-27 18:47:06
| 1
| 487
|
Galat
|
75,859,096
| 7,655,687
|
How to load a csv file into tksheet
|
<p>How do you save and load CSV files with the Python tkinter table widget <code>tksheet</code>, found here: <a href="https://github.com/ragardner/tksheet" rel="nofollow noreferrer">https://github.com/ragardner/tksheet</a>, using a standard tkinter window?</p>
|
<python><python-3.x><tkinter>
|
2023-03-27 18:17:56
| 1
| 2,005
|
ragardner
|
75,859,074
| 6,695,297
|
Getting RateLimitError while implementing openai GPT with Python
|
<p>I have started to implement the OpenAI GPT model in Python, but I get a RateLimitError after sending even a single request.</p>
<p>My code looks like this</p>
<pre><code>import openai
key = '<SECRET-KEY>'
openai.api_key = key
model_engine = 'text-ada-001'
prompt = 'Hi, How are you today?'
completion = openai.Completion.create(engine=model_engine, prompt=prompt, max_tokens=2048, n=1, stop=None, temperature=0.5)
print(completion.choices)
</code></pre>
<p>This is what error I am getting</p>
<pre><code>openai.error.RateLimitError: You exceeded your current quota, please check your plan and billing details.
</code></pre>
<p>So, how do I do development without hitting this error? I checked the docs: they describe a free tier with limitations, but I am at the initial stage and have sent only 5-6 requests in an hour.</p>
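Independent of any quota issue, rate-limited APIs are commonly called through a retry-with-exponential-backoff wrapper. A generic sketch (using RuntimeError as a stand-in for openai.error.RateLimitError):

```python
import random
import time

def with_backoff(call, retries=5, base=0.01):
    """Retry `call`, sleeping exponentially longer after each rate-limit error."""
    for attempt in range(retries):
        try:
            return call()
        except RuntimeError:  # stand-in for openai.error.RateLimitError
            time.sleep(base * 2 ** attempt + random.uniform(0, base))
    raise RuntimeError("still rate-limited after %d retries" % retries)

# demo: a call that fails twice before succeeding
state = {"n": 0}
def flaky():
    state["n"] += 1
    if state["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

result = with_backoff(flaky)
```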
<p>Thanks advance for your help.</p>
|
<python><python-3.x><openai-api><gpt-3>
|
2023-03-27 18:14:46
| 3
| 1,323
|
Shubham Srivastava
|
75,859,013
| 3,016,483
|
Python regex for matching a substring with maximum 1 space
|
<p>I'm trying to write a regex which will match a substring with at least one letter, at least one digit, at most one space, and a total length of 7 characters.</p>
<p>Here is what I have come up with until now:</p>
<pre><code>r'\b(?=.*[a-zA-Z])(?=.*\d)[a-zA-Z\d\s]{7}(?!=[ ]{2,})\b'
</code></pre>
<p>Examples</p>
<pre><code>1. "abc 12 3 vha bnf1254"
2. "abc 123 vha ss1234"
</code></pre>
<p>Expected outputs:</p>
<pre><code>1. ["bnf1254"]
2. ["abc 123", "123 vha", "ss1234"]
</code></pre>
|
<python><regex>
|
2023-03-27 18:07:31
| 2
| 612
|
Illuminati0x5B
|
75,858,726
| 496,289
|
Configuring log4j with IDE / pyspark shell to log to console and file using properties file
|
<p>In my development env on my laptop (Ubuntu box), I write my "pyspark code" and run it through <code>pytest</code> to test. I inject the spark session that's created using <code>SparkSession.builder.getOrCreate()</code> into my tests. This "pyspark code" has a bunch of log statements at info level. None of them show up anywhere, so I'm forced to write <code>print()</code>, which I later remove.</p>
<p>All I want is:</p>
<ul>
<li>Ability to set log level for all my loggers (<code>My.*</code>)</li>
<li>Write to console and file (appenders)</li>
<li>Some docs that describe the format specs. <code>%t, %m, %n, %d{YYYY-MM-DD}, ...</code></li>
</ul>
<p>I read:</p>
<ul>
<li><a href="https://spark.apache.org/docs/latest/configuration.html#configuring-logging" rel="nofollow noreferrer">Configuring Logging</a> in official documentation. It just says add <code>log4j2.properties</code>. Could not find any docs on format of <code>log4j2.properties</code>. <a href="https://github.com/apache/spark/blob/master/conf/log4j2.properties.template" rel="nofollow noreferrer">Sample <code>log4j2.properties.template</code></a> is VERY limited. <strong>NOTE that it says</strong> nothing about using <code>xml</code> or other formats, just <code>properties</code>.</li>
<li><a href="https://duckduckgo.com/?q=%22log4j2.properties%22%20site%3Alogging.apache.org" rel="nofollow noreferrer">All official log4j2</a> docs point to <a href="https://logging.apache.org/log4j/2.x/manual/configuration.html#Properties" rel="nofollow noreferrer">the same place</a>, which has no docs on contents of <code>log4j2.properties</code>. It gives some samples, but those are useless e.g. where do I specify filename for a file appender (not rolling file appender).</li>
<li>I even tried the <a href="https://logging.apache.org/log4j/1.2/manual.html#configuration" rel="nofollow noreferrer">log4j (not log4j2) docs</a>, this clearly doesn't work (<code>ConfigurationException: No type attribute provided for Appender my_console</code>).</li>
</ul>
<p>I don't want to figure it out by trial and error. Just looking for docs.</p>
<p><code>my_spark_module.py</code></p>
<pre class="lang-py prettyprint-override"><code>class MySparkModule:
def __init__(self, spark):
self.log = spark.sparkContext._jvm.org.apache.log4j.LogManager.getLogger('My.SparkModule')
self.df = spark.createDataFrame([('data1', '2022-01-01T00:00:00'), ('data2', '2022-02-01T00:00:00')],
schema=['data', 'ts_column'])
self.log.info('exiting init()')
def get_count(self):
return self.df.count()
</code></pre>
<p><code>test_my_spark_module.py</code></p>
<pre class="lang-py prettyprint-override"><code>import pytest
from pyspark.sql import SparkSession
@pytest.fixture(autouse=True, scope='module')
def spark_session():
return SparkSession.builder.getOrCreate()
class TestMySparkModule:
def test_tables(self):
spark_session = SparkSession.builder.getOrCreate()
log = spark_session.sparkContext._jvm.org.apache.log4j.LogManager.getLogger('My.TestMySparkModule')
log.info('something I wanted to log')
assert MySparkModule(spark_session).get_count() == 1, 'expected count to be 1'
</code></pre>
|
<python><java><logging><pyspark><log4j>
|
2023-03-27 17:34:04
| 1
| 17,945
|
Kashyap
|
75,858,681
| 9,481,731
|
cannot import requests in python script running on jenkins container
|
<p>The requests module is not found by my Python script even though it is installed. Is there some path I need to tell the script to look at?
I am trying to run Python in an Ubuntu-based Jenkins container.</p>
<p>version setup from container is below</p>
<pre><code>python --version
Python 2.7.18
pip --version
pip 20.0.2 from /usr/lib/python3/dist-packages/pip (python 3.8)
uname -a
Linux code-scan-coverage-pbpbl 5.4.0-65-generic #73~18.04.1-Ubuntu SMP Tue Jan 19 09:02:24 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
which python
/usr/bin/python
pip install requests
</code></pre>
<p>Jenkinsfile calling the Python script:</p>
<pre><code>sh "pip install requests && ${workspace}/src/jenkins/ci/pba/pba_coverage_scan.sh"
result = sh(script: "python ${workspace}/src/jenkins/ci/pba/api.py",returnStdout: true).trim()
</code></pre>
<p>output</p>
<pre><code>Requirement already satisfied: requests in /usr/lib/python3/dist-packages (2.22.0)
+ python /home/jenkins/agent/workspace/team/api.py
Traceback (most recent call last):
File "/home/jenkins/agent/workspace/team/api.py", line 2, in <module>
import json, requests, os
ImportError: No module named requests
</code></pre>
<p>python script is</p>
<pre><code>import json, requests, os
def stuff_api():
...
</code></pre>
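One thing the output above suggests (an observation, not a certain diagnosis): the pip shown lives in <code>/usr/lib/python3/dist-packages</code> and belongs to Python 3.8, while <code>python</code> is 2.7, so requests may be installed into a different interpreter than the one running the script. A quick diagnostic sketch:

```python
import sys

# drop these lines into api.py to see which interpreter Jenkins actually runs,
# then install with that same interpreter: `<interpreter> -m pip install requests`
print(sys.executable)
print(sys.version_info[:2])
```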
|
<python><python-2.7><jenkins>
|
2023-03-27 17:28:53
| 1
| 1,832
|
AhmFM
|
75,858,629
| 2,772,805
|
Separate vtk self intersected polydata into multiple polygons from duplicate points?
|
<p>From a vtk self intersected polydata, I would like to separate it into multiple polygons.</p>
<p>Note that intersections in the initial polygon can be detected from duplicate points in the list of points that form the polygon.</p>
<p>Get the test file from a wget <a href="https://thredds-su.ipsl.fr/thredds/fileServer/ipsl_thredds/brocksce/tmp/poly_11.vtk" rel="nofollow noreferrer">https://thredds-su.ipsl.fr/thredds/fileServer/ipsl_thredds/brocksce/tmp/poly_11.vtk</a></p>
<p>then</p>
<pre><code>import pyvista as pv
a = pv.read('poly_11.vtk')
pl = pv.Plotter()
pl.add_mesh(a)
viewer = pl.show(jupyter_backend='trame', return_viewer=True)
display(viewer)
</code></pre>
<p><a href="https://i.sstatic.net/0FC46.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0FC46.png" alt="enter image description here" /></a></p>
<p>In fact I would like to describe its coordinates anti-clockwise as requested by the matplotlib path structure.</p>
<p>A solution could be to separate the polydata into 2 convex polygons then use the shapely orient function as needed (<a href="https://shapely.readthedocs.io/en/stable/manual.html?highlight=orient#shapely.geometry.polygon.orient" rel="nofollow noreferrer">https://shapely.readthedocs.io/en/stable/manual.html?highlight=orient#shapely.geometry.polygon.orient</a>).</p>
<p>So how can I get 2 convex polygons from this set of lines (polydata)?</p>
|
<python><vtk><shapely><pyvista>
|
2023-03-27 17:23:11
| 1
| 429
|
PBrockmann
|
75,858,590
| 11,453,690
|
how set Month as string in Python datetime
|
<p>I found this example of how to set a time and convert it back to a readable form:</p>
<pre><code>import datetime
x = datetime.datetime(2020, 5, 17)
print(x.strftime("%Y %b %d"))
</code></pre>
<p>My question is how to set a new date with the <code>Month</code> given as a string.
Is it possible? Is there some parameter/function for this?</p>
<p><code>y = datetime.datetime(2020, May, 17)</code></p>
<p>Update:
How can I show the difference between two times in days only?</p>
<pre><code>x = datetime.datetime(2024, 5, 17)
now = datetime.datetime.now()
print('Time difference:', str(x-now))
</code></pre>
<p>Current output is :
<code>Time difference: 416 days, 9:37:06.470952</code></p>
<p>OK , I got it</p>
<p><code> print('Time difference:', (x-now).days)</code></p>
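A sketch covering both parts: strptime with the %B directive parses a month given by name, and .days gives the whole-day difference:

```python
import datetime

# month as an English name via the %B directive (%b for abbreviated names)
y = datetime.datetime.strptime("2020 May 17", "%Y %B %d")

x = datetime.datetime(2024, 5, 17)
delta_days = (x - y).days   # difference as whole days only
print(y, delta_days)
```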
|
<python>
|
2023-03-27 17:18:00
| 2
| 619
|
andrew
|
75,858,415
| 14,978,092
|
Changing Hexa color not changing output color in Image recoloring
|
<p>I am using the code below to change the color of a target object. I pass a hex code and an image, and the job of the code is to recolor it. But whatever hex color I pass, the output is always blue after recoloring; changing the hex code has no effect. What is the issue in my code?</p>
<p><strong>Image</strong></p>
<p><a href="https://i.sstatic.net/ZNoCT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZNoCT.png" alt="Image" /></a></p>
<p><strong>Code:</strong></p>
<pre><code>import cv2
import numpy as np
import argparse
import matplotlib.pyplot as plt
# Define command-line arguments
parser = argparse.ArgumentParser(description='Recolor images based on hexcode or sample image')
parser.add_argument('image_paths', metavar='path', type=str, nargs='+', help='path(s) to image(s) to recolor')
parser.add_argument('--hexcode', type=str, help='hexcode of the desired color')
parser.add_argument('--sample', type=str, help='path to a sample image to match the color')
# Parse the command-line arguments
args = parser.parse_args(['/home/hamza-developer/Pictures/1.png','--hexcode', '#0000FF'])
image_paths = args.image_paths
hexcode = args.hexcode
sample_path = args.sample
# Define color adjustments based on hexcode
def hex_to_rgb(hexcode):
r = int(hexcode[1:3], 16)
g = int(hexcode[3:5], 16)
b = int(hexcode[5:7], 16)
return (b, g, r) # OpenCV uses BGR format
# Define color adjustments based on sample image
def get_color_adjustments(sample_img):
# Convert sample image to HSV color space for color thresholding
hsv = cv2.cvtColor(sample_img, cv2.COLOR_BGR2HSV)
# Define color range based on hue, saturation, and value channels
lower = np.array([hsv[:,:,0].min(), hsv[:,:,1].min(), hsv[:,:,2].min()])
upper = np.array([hsv[:,:,0].max(), hsv[:,:,1].max(), hsv[:,:,2].max()])
# Define color adjustments based on the color range
color = np.array([np.mean(lower[:2]), np.mean(lower[1:]), np.mean(upper[1:])])
diff = np.array([50, 50, 50]) # acceptable color range
return (color, diff)
# Recolor an image based on a color adjustment
def recolor_image(img, color, diff):
# Create a mask of pixels within the acceptable color range
mask = cv2.inRange(img, color - diff, color + diff)
# Replace pixels within the mask with the desired color
img[mask > 0] = color
return img
# Load and recolor each image
for path in image_paths:
# Load the image
img = cv2.imread(path)
# Get the color adjustment
if hexcode:
color = hex_to_rgb(hexcode)
diff = np.array([50, 50, 50]) # acceptable color range
elif sample_path:
sample_img = cv2.imread(sample_path)
color, diff = get_color_adjustments(sample_img)
else:
print('Error: No color specified')
break
# Recolor the image
recolored_img = recolor_image(img, color, diff)
recolored_path = path.replace('.jpg', '_recolored.jpg')
plt.imshow(recolored_img)
plt.show()
</code></pre>
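One thing worth checking (an assumption about the symptom, not a certain diagnosis): cv2.imread returns BGR arrays, while plt.imshow expects RGB, so displaying the BGR array directly swaps red and blue and can make a red recolor look blue. A tiny sketch of the channel swap:

```python
import numpy as np

bgr = np.zeros((2, 2, 3), dtype=np.uint8)
bgr[:, :] = (0, 0, 255)      # pure red in OpenCV's BGR ordering
rgb = bgr[:, :, ::-1]        # reverse the channel axis: same effect as COLOR_BGR2RGB
print(rgb[0, 0])             # now (255, 0, 0), which matplotlib displays as red
```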
|
<python><opencv><matplotlib>
|
2023-03-27 16:57:29
| 1
| 590
|
Hamza
|
75,858,404
| 6,003,629
|
How can I mock the instantiation of a Class
|
<p>I can't seem to figure out how to mock the instantiation of a class; any pointers would be greatly appreciated. Here is what I am trying to do:</p>
<p>I would like to test the method <code>ClassA.some_method()</code> and specifically if <code>kafka_producer.flush()</code> was called, however I don't want <code>KafkaProducer</code> to be instantiated because it makes some requests which fail in my test environment.</p>
<pre class="lang-py prettyprint-override"><code>class ClassA:
def some_method(self):
# Do some stuff ...
kafka_producer = KafkaProducer(...)
# Do some more stuff ...
kafka_producer.flush()
</code></pre>
<p>I have tried to use <code>mock.patch</code> as follows but the requests are still made and fail before getting to the relevant test part:</p>
<pre><code>with mock.patch.object(kafka, "KafkaProducer", autospec=True) as kafka_producer:
class_a.some_method()
kafka_producer.assert_called()
</code></pre>
<p>Any suggestions?</p>
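A common pitfall is patching KafkaProducer in the kafka package instead of in the module where ClassA looks the name up. A self-contained sketch of patching at the point of use (everything here is defined locally for illustration; the stand-in class simulates the failing network calls):

```python
import sys
from unittest import mock

class KafkaProducer:                      # stand-in for the real kafka.KafkaProducer
    def __init__(self, *args, **kwargs):
        raise ConnectionError("would hit the network")
    def flush(self):
        pass

class ClassA:
    def some_method(self):
        producer = KafkaProducer()        # name looked up in THIS module
        producer.flush()

# patch the name where it is *looked up* (this module), not where it is defined
with mock.patch.object(sys.modules[__name__], "KafkaProducer") as producer_cls:
    ClassA().some_method()                # no ConnectionError: the mock is used
    flush_called = producer_cls.return_value.flush.called
print(flush_called)
```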
|
<python><unit-testing><pytest><kafka-python>
|
2023-03-27 16:55:35
| 1
| 385
|
jimfawkes
|
75,858,396
| 9,418,115
|
Python Oracledb - Segmentation Fault Error
|
<p>When I use <a href="https://python-oracledb.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">oracledb</a> Python module in Thick mode, I get a <code>Segmentation fault</code> error on a huge database request with Oracle v11.2 client.</p>
<p><strong>Code:</strong></p>
<pre><code># main.py
import oracledb
oracledb.init_oracle_client(lib_dir='path/to/lib', config_dir='path/to/config')
connection = oracledb.connect(user='user', password='pwd', dsn=cs, sid=sid, encoding='utf-8')
cursor = connection.cursor()
cursor.arraysize = 100
results = cursor.execute('SELECT * FROM MY_TABLE')
</code></pre>
<p><strong>Error:</strong></p>
<pre class="lang-bash prettyprint-override"><code>python main.py
Segmentation fault
</code></pre>
<p>What is the problem with this request?</p>
|
<python><oracle-database><python-oracledb>
|
2023-03-27 16:54:51
| 2
| 2,410
|
Adrien Arcuri
|
75,858,336
| 17,973,259
|
Is there a better way to define multiple boolean variables?
|
<p>I have multiple Boolean variables in my code and now they are defined like this:</p>
<pre><code>self.paused, self.show_difficulty, self.resizable, self.high_score_saved, \
self.show_high_scores, self.show_game_modes = \
False, False, True, False, False, False
</code></pre>
<p>And I thought of refactoring the code like this to improve readability.</p>
<pre><code>ui_options = {
"paused": False,
"show_difficulty": False,
"resizable": False,
"high_score_saved": False,
"show_high_scores": False,
"show_game_modes": False
}
</code></pre>
<p>Is there a better way to do it?</p>
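The dict works; another common option (sketched here, not necessarily better for every program) is a dataclass, which keeps attribute-style access and default values in one place:

```python
from dataclasses import dataclass

@dataclass
class UIOptions:
    paused: bool = False
    show_difficulty: bool = False
    resizable: bool = True
    high_score_saved: bool = False
    show_high_scores: bool = False
    show_game_modes: bool = False

opts = UIOptions()          # all defaults; override per field: UIOptions(paused=True)
print(opts.resizable, opts.paused)
```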
|
<python><boolean>
|
2023-03-27 16:47:50
| 1
| 878
|
Alex
|
75,858,239
| 10,430,394
|
Beautifulsoup cannot parse htm created in word?
|
<p>I created a .htm file (the stripped type without a data folder) in MS Word 365. I used to create those types of files all the time and load them into a .py script all the time using bs4, but now it doesn't work for some reason. That might've been caused by me writing a previous version to a new file with <code>errors='ignore'</code>, but I am not certain. Now when I read it in, there is a bunch of spaces in between everything, so I'm guessing there is some kind of whitespace character that doesn't exist in the encoding that I'm using to read the file in?</p>
<p>Anyways, this is the <a href="https://github.com/p3rAsperaAdAstra/TOPAS-Param-Tables-public-" rel="nofollow noreferrer">link to the public repository where I keep my project</a>:</p>
<p>In there, the minimal working example is contained within the FOR_SO.py script. The .htm file is called "broken.htm". I'm gonna post the minimal code here as well, but the file is too big to upload here.</p>
<p>How did I break it so badly?</p>
<pre class="lang-py prettyprint-override"><code>from bs4 import BeautifulSoup
def html_parser(path):
'''Opens the template.htm and returns it as a bs4 Object'''
with open(path,'r') as inf:
soup = BeautifulSoup(inf,'html.parser')
return soup
resource = html_parser('broken.htm')
print(resource)
table = resource.findAll('table')[-1]
print(table)
</code></pre>
<p>EDIT: I have realised that the chars "<" and ">" are being read as the escape sequences "&lt;" and "&gt;". Are those escape chars? How can I read them in properly? Can I just save the file again with a different codec somehow?</p>
|
<python><html><beautifulsoup><encoding>
|
2023-03-27 16:36:54
| 1
| 534
|
J.Doe
|
75,858,143
| 10,755,782
|
Create a directory, only if it doesn't exist using python os.mkdir()
|
<p>We can create a directory, only if it does not exist using</p>
<p><code> mkdir -p ./my_directory/</code></p>
<p>How can we achieve the same using <code>os.mkdir()</code> in python?
<code>os.mkdir("./my_directory/")</code> fails if <code>./my_directory/</code> already exists and <code>os.mkdir(" -p ./my_directory/")</code> is not working.</p>
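A sketch of the idempotent equivalent of <code>mkdir -p</code>, using os.makedirs with exist_ok (writing into a temporary directory so the snippet is safe to run anywhere):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "my_directory")
os.makedirs(path, exist_ok=True)   # creates it (and any missing parents)
os.makedirs(path, exist_ok=True)   # no error the second time, like `mkdir -p`
print(os.path.isdir(path))
```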
|
<python><operating-system><mkdir>
|
2023-03-27 16:27:09
| 1
| 660
|
brownser
|
75,858,139
| 8,958,754
|
Python kivy, enlarging an image in
|
<p>In Python Kivy, I have a FloatLayout as my self.root; later I add an Image to it with a pos_hint.</p>
<pre><code>self.avatar.image = Image(source=self.avatar.current_images[0], keep_ratio=False, pos_hint={'center_x':0.5, 'center_y': 0.5})
</code></pre>
<p>I then add it to root on startup.</p>
<p>I want to be able to increase the image size with a click of a button, but it always gets too big.</p>
<p>I've tried many things such as even increasing it by a tiny amount like so</p>
<pre><code>self.avatar.image.size_hint = [originalW*1.005, originalH*1.005]
</code></pre>
<p>I've tried many suggestions, such as scale, updating self.avatar.image.size, or changing keep_ratio.</p>
<p>None of this works: either nothing changes at all, the image gets way too big from a tiny increment (going off the screen), or, when I modify the .size property, the image never visibly changes size.</p>
<p>So my question for you is, what is the normal process for increasing an image size in a floatlayout? Is this even possible? Am I missing something?</p>
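One detail that may explain the jump in size (an assumption based on the snippet above, where originalW/originalH look like pixel sizes): size_hint is a fraction of the parent widget, not pixels, so multiplying a pixel width by 1.005 yields a hint hundreds of parents wide. Scaling the fractional hint itself behaves incrementally:

```python
# size_hint values are fractions of the parent widget, not pixel sizes
size_hint = [0.5, 0.5]                       # half the parent in each dimension
size_hint = [s * 1.05 for s in size_hint]    # grow by 5% per button click
print(size_hint)
```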
|
<python><kivy><kivy-language>
|
2023-03-27 16:26:54
| 1
| 855
|
Murchie85
|
75,858,068
| 3,387,716
|
Multiprocessing. How to output the computed records of a text file in the same order as they where read?
|
<p>I have a text file of about 300GiB in size that has a header followed by data records. Here's a dummy <code>input.txt</code>:</p>
<pre class="lang-none prettyprint-override"><code># header
# etc... (the number of lines in the header can vary)
record #1
record #2
record #3
record #4
record #5
record #6
record #7
record #8
record #9
...
</code></pre>
<p>Given the input file size, processing it one line at a time is slow, <strong>and CPU-bound</strong>, so I decided to add some parallelism to the code:</p>
<p>Here's a sample code (it took me some time to get it right; now it works as expected):</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python
import sys
import multiprocessing
def worker(queue,lock,process_id):
while True:
data = queue.get()
if data == None:
break
# whatever processing that takes some time
for x in range(1000000):
data.split()
lock.acquire()
print( data.rstrip() + " computed by process #" + process_id )
sys.stdout.flush()
lock.release()
if __name__ == '__main__':
queue = multiprocessing.Queue()
lock = multiprocessing.Lock()
workers = []
num_processes = 4
for process_id in range(num_processes-1):
p = multiprocessing.Process(target=worker, args=(queue,lock,str(process_id)))
p.start()
workers.append(p)
with open('input.txt') as handler:
try:
# read the header
line = next(handler)
while line.startswith('#'):
line = next(handler)
# send the records to the queue
while True:
queue.put(line)
line = next(handler)
except StopIteration:
for p in workers:
queue.put(None)
finally:
for p in workers:
p.join()
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>record #4 computed by process #2
record #3 computed by process #1
record #1 computed by process #0
record #2 computed by process #3
record #5 computed by process #2
record #6 computed by process #1
record #7 computed by process #0
record #8 computed by process #3
record #9 computed by process #2
</code></pre>
<p><strong>My problem is that the ordering in the output should be the same as in the input. How can I do it efficiently?</strong></p>
<p><strong>Also:</strong> The architecture of the code may seem weird (I didn't find any example that looks like my code), so if there's another, more standard and efficient way to do the same thing, it would be great if you could share it.</p>
|
<python><multiprocessing>
|
2023-03-27 16:18:17
| 1
| 17,608
|
Fravadona
|
75,858,010
| 13,014,864
|
PySpark add rank column to large dataset
|
<p>I have a large dataframe and I want to compute a metric based on the rank of one of the columns. This metric really only depends on two columns from the dataframe, so I first select the two columns I care about, then compute the metric. Once the two relevant columns are selected, the dataframe looks something like this:</p>
<pre class="lang-py prettyprint-override"><code>score | truth
-----------------
0.7543 | 0
0.2144 | 0
0.5698 | 1
0.9221 | 1
</code></pre>
<p>The analytic that we want to calculate is called "average percent rank" and we want to calculate it for the ranks of data where <code>truth == 1</code>. So the process is to compute the percent rank for every data point, then select the rows where <code>truth == 1</code>, and finally compute the average percent rank of those data points. However, when we try to compute this, we get OOM errors. One of the issues is that using the <code>pyspark.sql</code> function <code>rank</code> requires using <code>Window</code>, and we want the window to include the entire dataframe (same fore <code>percent_rank</code>). Some code:</p>
<pre class="lang-py prettyprint-override"><code>w = Window.orderBy(F.col("score"))
avg_percent_rank = (
df
.select("score", "truth")
.withColumn("percent_rank", F.percent_rank().over(w))
.filter(F.col("truth") == 1)
.agg(F.mean(F.col("percent_rank")))
)
</code></pre>
<p>This results in an OOM error. There are over 6 billion records, and we need to build this for datasets that may be a hundred times larger. Ultimately, the critical operation is the sorting and indexing; we can derive <code>percent_rank</code> from this by dividing by the total number of rows.</p>
<p>Is there a better approach to computing rank than using a <code>Window</code> function?</p>
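Regardless of how it is executed in Spark, the metric itself only needs each positive row's global rank and the total count. A pure-Python sketch of the arithmetic being distributed (not a Spark solution; it assumes distinct scores, where ties would make percent_rank differ):

```python
scores = [0.7543, 0.2144, 0.5698, 0.9221]
truth  = [0, 0, 1, 1]

N = len(scores)
order = sorted(range(N), key=lambda i: scores[i])
rank = {idx: r for r, idx in enumerate(order)}          # 0-based global position
percent_rank = [rank[i] / (N - 1) for i in range(N)]    # (rank - 1)/(N - 1) in 1-based terms
avg_pr = sum(p for p, t in zip(percent_rank, truth) if t == 1) / sum(truth)
print(avg_pr)
```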
|
<python><apache-spark><sorting><pyspark><indexing>
|
2023-03-27 16:13:01
| 1
| 931
|
CopyOfA
|
75,857,923
| 283,538
|
get columns from numpy array in specific format given column indices
|
<p>I have a numpy array like this:</p>
<pre><code>import numpy as np
# Create a NumPy array
array_data = np.array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
</code></pre>
<p>and yes I can get the 2nd column like this:</p>
<pre><code>column_index = 1
selected_column = array_data[:,1]
print(selected_column)
</code></pre>
<p>giving:</p>
<pre><code>[2 5 8]
</code></pre>
<p>I need it like this though:</p>
<pre><code>[[2]
[5]
[8]]
</code></pre>
<p>which I can get using:</p>
<pre><code>selected_column = array_data[:, column_index:column_index+1]
print(selected_column)
</code></pre>
<p>You may wonder why I ask. I would actually like to provide several column indices (coming from a pandas DataFrame with given column names) whilst still getting an array shaped like the last output.</p>
<pre><code>selected_column = array_data[:, column_index:column_index+1]
</code></pre>
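Indexing with a list of column indices keeps the column axis, which may be exactly what is needed here (a sketch):

```python
import numpy as np

array_data = np.array([[1, 2, 3],
                       [4, 5, 6],
                       [7, 8, 9]])

one = array_data[:, [1]]         # shape (3, 1): [[2], [5], [8]]
several = array_data[:, [0, 2]]  # shape (3, 2): columns 0 and 2 together
print(one.shape, several.shape)
```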
|
<python><pandas><numpy>
|
2023-03-27 16:03:22
| 2
| 17,568
|
cs0815
|
75,857,827
| 10,192,593
|
Plots using for loop in Python
|
<p>I have a numpy array of shape (2,2,1000) representing income-groups, age-groups and a sample of 1000 observations for each group.</p>
<p>I am trying to use a for-loop to plot the 4 combinations of values:</p>
<pre><code> 1. < 18 age, i0 income
2. < 18 age, i1 income
3. >= 18 age, i0 income
4. >= 18 age, i1 income
</code></pre>
<p>so the end result would be 4 separate plots next to each other where the x and y axes change based on the list above.
My problem is that I get all 4 plots printed on the same graph. How can I put them on separate graphs?</p>
<p>Here is my code:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
elasticity = np.random.rand(2,2,1000)
print(elasticity.shape)
income = ['i0','i1']
age_gr= ['<=18','>18']
for i in range(len(age_gr)):
for j in range((len(income))):
plt.plot(elasticity[:,j,:], elasticity[i,:,:])
plt.subplot(i,j)
plt.show()
</code></pre>
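For reference, a sketch of the usual pattern: create the grid once with plt.subplots and draw on each Axes object, rather than calling plt.subplot after plotting (the hist calls are purely illustrative stand-ins for the real plots):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

elasticity = np.random.rand(2, 2, 1000)
income = ['i0', 'i1']
age_gr = ['<=18', '>18']

fig, axes = plt.subplots(2, 2)            # one Axes per (age, income) pair
for i in range(len(age_gr)):
    for j in range(len(income)):
        axes[i, j].hist(elasticity[i, j, :])
        axes[i, j].set_title(f"age {age_gr[i]}, income {income[j]}")
fig.tight_layout()
```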
|
<python><for-loop><matplotlib>
|
2023-03-27 15:52:56
| 1
| 564
|
Stata_user
|
75,857,794
| 15,326,565
|
How does Cloudflare detect that I am a bot even though I have provided the cf_clearance cookie?
|
<p>How does Cloudflare even know that this request came from a script, even though I provided all the data, cookies and parameters of a normal request? What does it check for? Am I doing something wrong? For example (I have redacted some of the values):</p>
<pre class="lang-py prettyprint-override"><code>import requests
cookies = {
'__Host-next-auth.csrf-token': '...',
'cf_clearance': '...',
'oai-asdf-ugss': '...',
'oai-asdf-gsspc': '...',
'intercom-id-dgkjq2bp': '...',
'intercom-session-dgkjq2bp': '',
'intercom-device-id-dgkjq2bp': '...',
'_cfuvid': '...',
'__Secure-next-auth.callback-url': '...',
'cf_clearance': '...',
'__cf_bm': '...',
'__Secure-next-auth.session-token': '...',
}
headers = {
'authority': 'chat.openai.com',
'accept': 'text/event-stream',
'accept-language': 'en-IN,en-US;q=0.9,en;q=0.8',
'authorization': 'Bearer ...',
'content-type': 'application/json',
'cookie': '__Host-next-auth.csrf-token=...',
'origin': 'https://chat.openai.com',
'referer': 'https://chat.openai.com/chat',
'sec-ch-ua': '"Brave";v="111", "Not(A:Brand";v="8", "Chromium";v="111"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': '"Linux"',
'sec-fetch-dest': 'empty',
'sec-fetch-mode': 'cors',
'sec-fetch-site': 'same-origin',
'sec-gpc': '1',
'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36',
}
json_data = {
...
}
response = requests.post('https://chat.openai.com/backend-api/conversation', cookies=cookies, headers=headers, json=json_data)
</code></pre>
<p>I have tried different user agents to no avail, but I can't seem to figure out what's causing the problem in the first place.</p>
<p>The response comes back with error code <code>403</code> and HTML something like:</p>
<pre class="lang-html prettyprint-override"><code><html>
...
...
<h1>Access denied</h1>
<p>You do not have access to chat.openai.com.</p><p>The site owner may have set restrictions that prevent you from accessing the site.</p>
<ul class="cferror_details">
<li>Ray ID: ...</li>
<li>Timestamp: ...</li>
<li>Your IP address: ...</li>
<li class="XXX_no_wrap_overflow_hidden">Requested URL: chat.openai.com/backend-api/conversation </li>
<li>Error reference number: ...</li>
<li>Server ID: ...</li>
<li>User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36</li>
</ul>
...
...
</html>
</code></pre>
|
<python><cookies><python-requests><session-cookies><cloudflare>
|
2023-03-27 15:49:49
| 3
| 857
|
Anm
|
75,857,731
| 1,194,864
|
Sort a vector in PyTorch
|
<p>I am performing a prediction using an input image and a pre-trained classifier on <code>ImageNet</code> using <code>PyTorch</code>. What I would like to do is to calculate the value for each class and return the 10 highest values. My code looks like:</p>
<pre><code>img = imread_img('image.png')
input = pre_processing(img) # normalize image, transpose and return a tensor
# load model
# model_type = 'vgg19'
model = models.vgg19(pretrained=True)
# run it on a GPU if available:
cuda = torch.cuda.is_available()
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print('cuda:', cuda, 'device:', device)
model = model.to(device)
# set model to evaluation
model.eval()
out = model(input)
print (out.shape)
out = F.softmax(out, dim=1)
out= torch.sort(out, descending=True)
# top = out[:][0] # that returns only the values and not a tuple
</code></pre>
<p>The sort function returns a tuple of the sorted values and their indices. How can I keep the 5 highest values after the sort?</p>
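I also came across torch.topk, which seems to return both values and indices directly, but I'm not certain it's the right tool; this is a sketch on a dummy tensor, not my real model output:

```python
import torch
import torch.nn.functional as F

out = torch.rand(1, 1000)            # stand-in for the classifier's logits
probs = F.softmax(out, dim=1)
# topk returns the k largest values and their (class) indices, sorted descending
values, indices = torch.topk(probs, k=5, dim=1)
print(values.shape, indices.shape)
```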
|
<python><image><jupyter-notebook><pytorch>
|
2023-03-27 15:43:08
| 2
| 5,452
|
Jose Ramon
|
75,857,726
| 15,376,262
|
Remove group from pandas df if at least one group member consistently meets condition
|
<p>I have a pandas dataframe that looks like this:</p>
<pre><code>import pandas as pd
d = {'name': ['peter', 'peter', 'peter', 'peter', 'peter', 'peter', 'david', 'david', 'david', 'david', 'david', 'david'],
'class': ['A', 'B', 'A', 'B', 'A', 'B', 'A', 'B', 'C', 'A', 'B', 'C'],
'value': [2, 0, 3, 5, 0, 0, 4, 7, 0, 9, 1, 0]}
df = pd.DataFrame(data=d)
df
name class value
peter A 2
peter B 0
peter A 3
peter B 5
peter A 0
peter B 0
david A 4
david B 7
david C 0
david A 9
david B 1
david C 0
</code></pre>
<p>I would like to group this dataframe by <code>name</code> and <code>class</code> and delete a whole <code>name</code> group if at least one group member constantly equals 0. In the example above, all <code>C</code>'s of the <code>david</code> group equal 0. For that reason, I would like to <em>remove</em> the <code>david</code> group and <em>keep</em> the <code>peter</code> group, see desired output below. Any advice on how to achieve this?</p>
<pre><code>name class value
peter A 2
peter B 0
peter A 3
peter B 5
peter A 0
peter B 0
</code></pre>
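What I have tried so far is to flag each (name, class) group whose values are all zero and then drop every name that owns such a group. It seems to reproduce the output above, but I'm not sure it's the cleanest way:

```python
import pandas as pd

d = {'name': ['peter'] * 6 + ['david'] * 6,
     'class': ['A', 'B', 'A', 'B', 'A', 'B', 'A', 'B', 'C', 'A', 'B', 'C'],
     'value': [2, 0, 3, 5, 0, 0, 4, 7, 0, 9, 1, 0]}
df = pd.DataFrame(data=d)

# True on every row whose (name, class) group consists only of zeros
all_zero = df.groupby(['name', 'class'])['value'].transform(lambda s: s.eq(0).all())
bad_names = df.loc[all_zero, 'name'].unique()
result = df[~df['name'].isin(bad_names)]
print(result)
```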
|
<python><pandas><dataframe>
|
2023-03-27 15:42:41
| 3
| 479
|
sampeterson
|
75,857,719
| 11,819,955
|
Matplotlib axis not appearing in saved images
|
<p><strong>I'll submit this question only because I couldn't find any similar questions.</strong></p>
<p>Today I was making plots using Python 3.10.4 and matplotlib 3.5.2.
I used <code>matplotlib.pyplot.savefig(f"[ImageSpecificName{specifier}].png")</code> in a loop to save a series of figures that I then sent to a number of people. One of those people was unable to view the titles/axis labels/anything outside the plot area. All other recipients could view them without issue. How is it that one person was unable to see the axis?</p>
|
<python><matplotlib><png>
|
2023-03-27 15:42:13
| 1
| 437
|
Messypuddle
|
75,857,669
| 21,376,217
|
How does the IDAT block of a PNG image store pixel information?
|
<p>This is my Python code:</p>
<pre class="lang-py prettyprint-override"><code>import struct
import zlib
PNG_HEADER = b'\x89\x50\x4e\x47\x0d\x0a\x1a\x0a'
IHDR = b'\x00\x00\x00\x0d\x49\x48\x44\x52\x00\x00\x00\x02\x00\x00\x00\x02\x08\x02\x00\x00\x00'
IHDR += struct.pack('>I', zlib.crc32(IHDR[4:]))
sRGB = b'\x00\x00\x00\x01\x73\x52\x47\x42\x00\xAE\xCE\x1C\xE9'
pixel = zlib.compress(b'\xff\x00\x00\x00\xff\x00\x00\x00\xff\x80\x80\x80')
IDAT = struct.pack('>I', len(pixel)) + b'IDAT' + pixel
IDAT += struct.pack('>I', zlib.crc32(IDAT[4:]))
PNG_END = b'\x00\x00\x00\x00\x49\x45\x4e\x44\xae\x42\x60\x82'
file = PNG_HEADER + IHDR + sRGB + IDAT + PNG_END
with open('test.png', 'wb') as f:
f.write(file)
</code></pre>
<p>This is the PNG image:
<img src="https://i.sstatic.net/TwMdi.png" alt="" /></p>
<p>The pixels in the code are represented as RGB channels without transparent channels, but the actual generated image is not the color set in the "pixel".<br>
I tried to change the "color type" from 02 to 04 or 06, but I still couldn't achieve the colors in the 'pixel'.<br>
An attempt was also made to change the image depth from 08 to 10.<br>
I also tried adding transparent channels to the colors in "pixel", but it still didn't work.<br>
I used the Windows system's "Paint tool" to create an 8x5 PNG image and decompressed the data in its IDAT block using zlib, but I found that I simply couldn't make sense of the result.<br>
So now I want to know how the pixel information in the IDAT block is stored. I went to the official website of libpng, but I couldn't find the specific information.<br></p>
<p>I have consulted many documents, but I still cannot understand how to store IDAT blocks.<br>
For example, under what circumstances does it have four channels (RGBA)? Under what circumstances do you only have three channels (RGB)? And what changes will image "bit depth" and "color type" bring to the image?</p>
<p>For example, if I manually change the "image bit depth" to 08 and the "color type" to 06, then I may be able to use the four channels of RGBA, but the next row of pixels will not be displayed, such as:</p>
<pre><code>IHDR:
width: 00 00 00 02
height: 00 00 00 02
bit depth: 08
color type: 06
Pixel information I manually wrote (uncompressed data):
background color? 00
First pixel: ff 00 00 ff
Second pixel: 00 ff 00 ff
Third pixel: 00 00 ff ff
Fourth pixel: ff 00 ff ff
The third and fourth pixels will not be visible using the image viewer, but will only see a white background.
</code></pre>
<p>The following is a link to the webpage I have referred to:</p>
<pre><code>https://www.w3.org/TR/png/
https://zh.wikipedia.org/wiki/PNG
http://www.libpng.org/pub/png/book/chapter11.html
http://www.libpng.org/pub/png/book/chapter08.html
http://www.libpng.org/pub/png/book/toc.html
http://www.libpng.org/pub/png/spec/1.2/png-1.2.pdf
http://www.libpng.org/pub/png/spec/1.2/PNG-Chunks.html
http://libpng.org/pub/png/
https://www.sciencedirect.com/science/article/pii/S174228761930163X
https://www.youtube.com/watch?v=BLnOD1qC-Vo&ab_channel=sandalaz
https://www.codeproject.com/Articles/581298/PNG-Image-Steganography-with-libpng
https://www.cnblogs.com/flylong0204/articles/4955235.html
https://www.cnblogs.com/senior-engineer/p/9548347.html
https://www.cnblogs.com/ECJTUACM-873284962/p/8986391.html
https://www.nayuki.io/page/png-file-chunk-inspector
</code></pre>
|
<python><image><image-processing><png><libpng>
|
2023-03-27 15:36:56
| 1
| 402
|
S-N
|
75,857,623
| 13,494,917
|
Executing an sql statement into a dataframe and receiving this error ValueError: setting an array element with a sequence
|
<p>For some reason this line of code isn't working for me anymore on a specific table. And I'm not sure how I'd go about fixing it.</p>
<pre class="lang-py prettyprint-override"><code>sql_query = pd.read_sql_query("SELECT * FROM "+key+"."+table+"", conn, chunksize=10000)
</code></pre>
<p>Error I receive:</p>
<blockquote>
<p>ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (74,) + inhomogeneous part.</p>
</blockquote>
<p>This used to work, but I've been receiving this error recently</p>
|
<python><pandas><sqlalchemy><azure-sql-database>
|
2023-03-27 15:31:54
| 1
| 687
|
BlakeB9
|
75,857,558
| 9,640,238
|
Find events in log that occur after a specific event
|
<p>I have a log of events to analyze that looks like this:</p>
<pre><code>+----+---------------------+----------+--------+
| id | timestamp | record | event |
+====+=====================+==========+========+
| 1 | 2023-03-01 13:17:05 | record03 | Edit |
+----+---------------------+----------+--------+
| 2 | 2023-03-02 02:57:49 | record02 | Edit |
+----+---------------------+----------+--------+
| 3 | 2023-03-03 00:41:13 | record03 | Locked |
+----+---------------------+----------+--------+
| 4 | 2023-03-03 14:54:34 | record03 | View |
+----+---------------------+----------+--------+
| 5 | 2023-03-04 07:29:55 | record03 | Edit |
+----+---------------------+----------+--------+
| 6 | 2023-03-05 02:15:10 | record02 | Locked |
+----+---------------------+----------+--------+
| 7 | 2023-03-05 04:47:33 | record01 | View |
+----+---------------------+----------+--------+
| 8 | 2023-03-05 15:39:04 | record02 | View |
+----+---------------------+----------+--------+
| 9 | 2023-03-06 08:36:22 | record03 | View |
+----+---------------------+----------+--------+
| 10 | 2023-03-06 18:37:28 | record02 | View |
+----+---------------------+----------+--------+
</code></pre>
<p>What I'm looking for is any "Edit" event that occurs after a "Locked" event for a given record. For each record, any event that occurred prior to a "Locked" event can be ignored. Any "Edit" event that occurs after the "Locked" event must be reported.</p>
<p>For example, in the sample data above, only row 5 should be returned as it has an "Edit" event after a "Locked" event. If everything is working properly, there shouldn't be any "Edit" events after a "Locked" event in the log. Any method that identifies row 5 among a list of results would be acceptable.</p>
<p>I've been trying to use groupby() and first(), but I'm struggling to figure out how to return the first occurrence of "Edit" for a given record after any occurrence of "Locked".</p>
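One idea I'm experimenting with is a cumulative flag: once "Locked" has been seen for a record, every later "Edit" for that record is a violation. I'm not sure this is robust, but on the sample it picks out row 5:

```python
import pandas as pd

df = pd.DataFrame({
    'id': range(1, 11),
    'timestamp': pd.to_datetime([
        '2023-03-01 13:17:05', '2023-03-02 02:57:49', '2023-03-03 00:41:13',
        '2023-03-03 14:54:34', '2023-03-04 07:29:55', '2023-03-05 02:15:10',
        '2023-03-05 04:47:33', '2023-03-05 15:39:04', '2023-03-06 08:36:22',
        '2023-03-06 18:37:28']),
    'record': ['record03', 'record02', 'record03', 'record03', 'record03',
               'record02', 'record01', 'record02', 'record03', 'record02'],
    'event': ['Edit', 'Edit', 'Locked', 'View', 'Edit',
              'Locked', 'View', 'View', 'View', 'View'],
})

df = df.sort_values('timestamp')
# cummax makes the per-record "Locked seen yet?" flag sticky
locked_seen = df['event'].eq('Locked').groupby(df['record']).cummax()
violations = df[locked_seen & df['event'].eq('Edit')]
print(violations)
```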
<p>Thanks in advance for any tip!</p>
|
<python><pandas><time-series>
|
2023-03-27 15:24:30
| 1
| 2,690
|
mrgou
|
75,857,537
| 10,687,615
|
Add new column with incremental increase
|
<p>I have a dataframe that looks like this:</p>
<pre><code> Date LOC_A
0 2022-07-01 154
1 2022-07-02 162
2 2022-07-03 170
3 2022-07-04 169
4 2022-07-05 201
</code></pre>
<p>I would like to create a new column based on the data in column <code>LOC_A</code>, where the new column, let's call it <code>VOL_SCORE</code>, starts at 0 when <code>LOC_A</code> equals 160 and then increases by 1 every time the value in <code>LOC_A</code> increases by 1. Anything below 160 would be 0.</p>
<pre><code> Date LOC_A VOL_SCORE
0 2022-07-01 154 0
1 2022-07-02 162 3
2 2022-07-03 170 11
3 2022-07-04 169 10
4 2022-07-05 201 42
</code></pre>
<p>There's a similar question but I'm not sure how to apply that answer to my situation since Im attempting to create a new column.</p>
<p><a href="https://stackoverflow.com/questions/38862293/how-to-add-incremental-numbers-to-a-new-column-using-pandas#:%7E:text=For%20a%20pandas%20DataFrame%20whose%20index%20starts%20at,the%20end%3A%20df%20%5B%27New_ID%27%5D%20%3D%20df.index%20%2B%20880">https://stackoverflow.com/questions/38862293/how-to-add-incremental-numbers-to-a-new-column-using-pandas#:~:text=For%20a%20pandas%20DataFrame%20whose%20index%20starts%20at,the%20end%3A%20df%20%5B%27New_ID%27%5D%20%3D%20df.index%20%2B%20880</a></p>
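Working backwards from my sample numbers, the score looks like LOC_A minus a baseline, clipped at zero: 162 -> 3, 170 -> 11 and 201 -> 42 all fit a baseline of 159, even though I said 160 above, so treat the exact baseline as my guess:

```python
import pandas as pd

df = pd.DataFrame({'Date': pd.to_datetime(['2022-07-01', '2022-07-02', '2022-07-03',
                                           '2022-07-04', '2022-07-05']),
                   'LOC_A': [154, 162, 170, 169, 201]})

# Subtract the baseline and clip negative scores to zero
df['VOL_SCORE'] = (df['LOC_A'] - 159).clip(lower=0)
print(df)
```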
|
<python><pandas>
|
2023-03-27 15:22:23
| 1
| 859
|
Raven
|
75,857,496
| 6,071,697
|
How to Django query only latest rows
|
<p>I have the following Django models:</p>
<pre><code>class Amount(Model):
hold_fk = ForeignKey(to=Hold)
creation_time = DateTimeField()
class Hold(Model):
group_fk = ForeignKey(to=Group)
class Group(Model):
pass
</code></pre>
<p>Each <code>Group</code> has multiple holds and each <code>Hold</code> has multiple amounts.</p>
<p>I have a Group instance. I want to get all the amounts that belong to the group, but I only want the latest one from each hold (newest creation time).</p>
<p>How do I accomplish that using Django?</p>
|
<python><django>
|
2023-03-27 15:18:15
| 1
| 622
|
Epic
|
75,857,347
| 3,525,290
|
Evaluating 2 list using all function throwing error in python
|
<p>I am using the <code>all</code> function to compare list items to see if they are less than or equal to 10 apart. I am testing whether each value in l2 is no more than 10 lower than the corresponding value in l1, but I am getting a syntax error.</p>
<pre><code>l1 = [10, 20, 30, 40, 50]
l2 = [50, 75, 30, 20, 40]
all([result for x,y in l1,l2 if x - y<=10 ])
SyntaxError: invalid syntax
all([result for x,y in l1,l2 if x - y<=10 ])
</code></pre>
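For reference, I think what I was missing is zip to pair the two lists element-wise; something like this parses, though I'm still not sure the condition expresses exactly what I described:

```python
l1 = [10, 20, 30, 40, 50]
l2 = [50, 75, 30, 20, 40]

# zip pairs l1[i] with l2[i]; the generator expression feeds all() directly
result = all(x - y <= 10 for x, y in zip(l1, l2))
print(result)  # False, because 40 - 20 = 20 > 10
```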
|
<python><python-3.x><list-comprehension><python-all-function>
|
2023-03-27 15:04:12
| 1
| 1,619
|
user3525290
|
75,857,317
| 2,912,859
|
sshtunnel is not using specified key and not using private key password
|
<p>I'm writing a script that needs to connect to a MySQL server via SSH. I have the following:</p>
<pre><code>import mysql.connector
from sshtunnel import SSHTunnelForwarder
def query_mysql_server(query):
with SSHTunnelForwarder(
('ssh_server_ip', 22),
ssh_username='sshuser',
ssh_pkey='/Users/myhomedir/.ssh/id_rsa',
ssh_private_key_password='my_ssh_key_passphrase',
remote_bind_address=('127.0.0.1', 3306)
) as server:
conn = mysql.connector.connect(
host='127.0.0.1',
port=server.local_bind_port,
user='mysqluser',
password='mysqluserpass',
database='mydb'
)
cursor = conn.cursor()
cursor.execute(query)
results = cursor.fetchall()
for row in results:
print(row)
cursor.close()
conn.close()
query = "SELECT * FROM users;"
query_mysql_server(query)
</code></pre>
<p>Running this results in the error <code>ERROR | Password is required for key /Users/myhomedir/.ssh/id_rsa</code>.
I've also tried using a different key (<code>/Users/myhomedir/.ssh/app_key</code>), that doesn't have a pass phrase set at all and get exactly the same error, referring to the "default" key <code>id_rsa</code>, so an alternative key is not picked up for some reason.</p>
<p>Both keys are added to the ssh authentication agent using <code>ssh-add</code>. The default key (id_rsa) is an RSA key, not an OpenSSH key.</p>
<p>System is macOS.</p>
<p>Any help is appreciated!</p>
|
<python><mysql><ssh>
|
2023-03-27 15:00:11
| 3
| 344
|
equinoxe5
|
75,857,226
| 8,414,280
|
Parallel sorting a dictionary and returning the first k items
|
<p>One approach to returning the first <code>k</code> items of a sorted dictionary is shown in the code snippet below:</p>
<pre class="lang-py prettyprint-override"><code>dict(sorted(dictionary.items(), key=lambda item: item[1], reverse=True)[:k])
</code></pre>
<p>Is there a more efficient approach (for instance, using parallelism as promoted by <a href="https://numba.pydata.org/" rel="nofollow noreferrer">numba</a>)?</p>
<p>To better understand, suppose that the dictionary looks like this:</p>
<pre class="lang-py prettyprint-override"><code>import random
dictionary = {}
num_docs, num_queries = 128, 128000
for query_idx in range(num_queries):
docs_scores = {}
for doc_idx in range(num_docs):
docs_scores[f"doc_{doc_idx}"] = random.random()
dictionary[f"query_{query_idx}"] = docs_scores
# {
# "query_0":{
# 'doc_0': 0.108, 'doc_1': 0.143, ..., 'doc_127': 0.447
# },
# ...
# "query_127999":{
# 'doc_0': 0.847, 'doc_1': 0.431, ..., 'doc_127': 0.744
# }
# }
</code></pre>
<p>And the task requires selecting the k best-scored documents for every query (for a huge set of queries).</p>
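For comparison, I also considered heapq.nlargest, which avoids sorting all the items when k is small; I don't know whether it actually beats sorted here, so this is just the variant I'd benchmark (shown on one query's scores):

```python
import heapq
import random

random.seed(0)
docs_scores = {f"doc_{i}": random.random() for i in range(128)}
k = 10

# nlargest runs in O(n log k) rather than the O(n log n) of a full sort
top_k = dict(heapq.nlargest(k, docs_scores.items(), key=lambda item: item[1]))
print(len(top_k))
```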
|
<python><sorting><numba>
|
2023-03-27 14:51:46
| 0
| 744
|
Celso França
|
75,857,007
| 3,971,855
|
How to get a list of dictionary in a single column of pandas dataframe
|
<p>Hi, I have a dataframe in 1NF that I want to change into a different format so that I can access those values.</p>
<p>This is how my dataframe looks like</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Code</th>
<th style="text-align: center;">ProvType</th>
<th style="text-align: right;">Alias</th>
<th style="text-align: right;">spec_code</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1A12</td>
<td style="text-align: center;">A</td>
<td style="text-align: right;">Hi</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">1A12</td>
<td style="text-align: center;">B</td>
<td style="text-align: right;">Hi</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: left;">1A12</td>
<td style="text-align: center;">B</td>
<td style="text-align: right;">Hola</td>
<td style="text-align: right;">2</td>
</tr>
<tr>
<td style="text-align: left;">1A12</td>
<td style="text-align: center;">A</td>
<td style="text-align: right;">Pola</td>
<td style="text-align: right;">3</td>
</tr>
<tr>
<td style="text-align: left;">1b32</td>
<td style="text-align: center;">C</td>
<td style="text-align: right;">Cola</td>
<td style="text-align: right;">7</td>
</tr>
<tr>
<td style="text-align: left;">1b32</td>
<td style="text-align: center;">D</td>
<td style="text-align: right;">Cola</td>
<td style="text-align: right;">6</td>
</tr>
<tr>
<td style="text-align: left;">1b32</td>
<td style="text-align: center;">A</td>
<td style="text-align: right;">Mola</td>
<td style="text-align: right;">1</td>
</tr>
</tbody>
</table>
</div><div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Code</th>
<th style="text-align: center;">aliasList</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1A12</td>
<td style="text-align: center;">[{alias:Hi,provtypelist:[A,B],specCodeList:[1,2]},{alias:Hola,provtypelist:[B],specCodeList:[1]},{alias:Pola,provtypelist:[A],specCodeList:[3]}]</td>
</tr>
<tr>
<td style="text-align: left;">1b32</td>
<td style="text-align: center;">[{alias:Cola,provtypelist:[C,D],specCodeList:[7,6]},{alias:Mola,provtypelist:[A],specCodeList:[1]}]</td>
</tr>
</tbody>
</table>
</div>
<p>I want my dataframe to look like the second table above. I don't know what the code/groupby would look like, so any help is appreciated.</p>
<p>The reason I want my dataframe in this shape is so that I can insert the aliasList column into an OpenSearch index with a nested datatype.</p>
<p>Another way will also be appreciated.</p>
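For completeness, here is what I have pieced together with two groupbys; it builds the list-of-dicts shape from my sample data, though I'm not sure it's efficient for large frames:

```python
import pandas as pd

df = pd.DataFrame({
    'Code': ['1A12', '1A12', '1A12', '1A12', '1b32', '1b32', '1b32'],
    'ProvType': ['A', 'B', 'B', 'A', 'C', 'D', 'A'],
    'Alias': ['Hi', 'Hi', 'Hola', 'Pola', 'Cola', 'Cola', 'Mola'],
    'spec_code': [1, 2, 2, 3, 7, 6, 1],
})

# First collapse to one row per (Code, Alias), collecting the lists ...
per_alias = (df.groupby(['Code', 'Alias'], sort=False)
               .agg(provtypelist=('ProvType', list), specCodeList=('spec_code', list))
               .reset_index())
# ... then turn each row into a dict and collect one list of dicts per Code
per_alias['entry'] = per_alias.apply(
    lambda r: {'alias': r['Alias'],
               'provtypelist': r['provtypelist'],
               'specCodeList': r['specCodeList']}, axis=1)
result = (per_alias.groupby('Code', sort=False)['entry']
          .agg(list).reset_index(name='aliasList'))
print(result)
```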
|
<python><pandas><dataframe><dictionary><group-by>
|
2023-03-27 14:31:52
| 1
| 309
|
BrownBatman
|
75,856,742
| 13,294,769
|
Is it possible to use SQLAlchemy create_engine and pass a certificate from an environment variable?
|
<p>The title pretty much sums it up. I can connect using <code>sslrootcert</code> as an argument to <code>create_engine</code>, assuming that it points to an existing certificate in my local storage. However, I would like to pass the certificate using an environment variable, or load it from a vault, etc..</p>
<p>Is this possible? I haven't found any documentation or example unfortunately.</p>
<p>If this is not possible, can someone explain why?</p>
<p>If this is a dumb question, can someone explain why?</p>
<p>EDIT:
I want to be able to load the certificate as a string from an environment variable. I want to pass my certificate as a string to the <code>create_engine</code> function. I do not wish to load an env var string that would point to a file in the filesystem.</p>
|
<python><sql><python-3.x><sqlalchemy><psycopg2>
|
2023-03-27 14:08:06
| 1
| 1,063
|
doublethink13
|
75,856,699
| 5,868,293
|
Keep only rows for which sum is less or equal than a threshold
|
<p>I have the following data:</p>
<pre><code>import pandas as pd

pd.DataFrame({'id': [1,1,1,1,1,2,2,2,2],
'day':[1,2,3,4,5,1,2,3,4],
'value':[5,6,7,8,9,11,12,13,14],
'day_before': [4,4,4,4,4,3,3,3,3]})
id day value day_before
0 1 1 5 4
1 1 2 6 4
2 1 3 7 4
3 1 4 8 4
4 1 5 9 4
5 2 1 11 3
6 2 2 12 3
7 2 3 13 3
8 2 4 14 3
</code></pre>
<p>For each <code>id</code>, I want to keep the <code>day</code>s greater than <code>day_before</code>, plus those smaller than or equal to <code>day_before</code> for which the running sum of <code>value</code> (counting down from <code>day_before</code>) stays smaller than or equal to 20.</p>
<p>The resulting dataframe is this</p>
<pre><code>pd.DataFrame({'id': [1,1,1,2,2],
'day':[3,4,5,3,4],
'value':[7,8,9,13,14],
'day_before': [4,4,4,3,3]})
id day value day_before
0 1 3 7 4
1 1 4 8 4
2 1 5 9 4
3 2 3 13 3
4 2 4 14 3
</code></pre>
<p><strong>Explanation of result</strong></p>
<p>For <code>id==1</code>:</p>
<ul>
<li>I am keeping the <code>day</code>s > 4</li>
<li>Sum of of <code>value</code> for <code>day==4</code> is 8 -> I keep it</li>
<li>Sum of of <code>value</code> for <code>day==4 or day==3</code> is 15 -> Still smaller than 20, so I keep them</li>
<li>Sum of of <code>value</code> for <code>day==4 or day==3 or day==2</code> is 21 -> larger than 20 so I keep only <code>day==4 or day==3</code></li>
</ul>
<p>For <code>id==2</code>:</p>
<ul>
<li>I am keeping the <code>day</code>s > 3</li>
<li>Sum of of <code>value</code> for <code>day==3</code> is 13 -> I keep it</li>
<li>Sum of of <code>value</code> for <code>day==3 or day==2</code> is 25 -> larger than 20 so I keep only <code>day==3</code></li>
</ul>
<p>How could I do that?</p>
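The brute-force approach I've sketched, going from my own examples (days above day_before are always kept, then I walk down from day_before accumulating value until the running sum would exceed 20); I'd welcome something more idiomatic:

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 1, 1, 1, 1, 2, 2, 2, 2],
                   'day': [1, 2, 3, 4, 5, 1, 2, 3, 4],
                   'value': [5, 6, 7, 8, 9, 11, 12, 13, 14],
                   'day_before': [4, 4, 4, 4, 4, 3, 3, 3, 3]})

def keep(g):
    # rows strictly after day_before are always kept
    after = g[g['day'] > g['day_before']]
    # walk down from day_before while the running sum stays <= 20
    before = g[g['day'] <= g['day_before']].sort_values('day', ascending=False)
    before = before[before['value'].cumsum() <= 20]
    return pd.concat([before, after]).sort_values('day')

result = df.groupby('id', group_keys=False).apply(keep).reset_index(drop=True)
print(result)
```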
|
<python><pandas>
|
2023-03-27 14:03:35
| 1
| 4,512
|
quant
|
75,856,674
| 13,799,627
|
How to change the cell width of an exported jupyter notebook as html?
|
<p>I exported my .ipynb with jupyter notebook as html.<br />
When using the built-in export in jupyter-notebook via "File" -> "Export as" -> "HTML", the cells become infinitely wide, which can look very bad when there are images included within some markdown cells.<br />
Naturally, jupyter-notebook has a limited cell width by default, which I would also like to have in the exported html.</p>
<p>This is an example how it looks like in jupyter notebook:
<a href="https://i.sstatic.net/dh2OH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dh2OH.png" alt="enter image description here" /></a></p>
<p>And this is how it looks like when opening the exported html file in any viewer or browser:
<a href="https://i.sstatic.net/3pXek.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3pXek.png" alt="enter image description here" /></a></p>
<p>Is there any option to limit the cell width for the exported html either directly within jupyter or manually afterwards? Note: I need to use an .html export since I want to be able to interact with some bokeh figures directly in the file afterwards.</p>
<p>Thanks for any advices.</p>
|
<python><html><jupyter-notebook><jupyter>
|
2023-03-27 14:01:01
| 0
| 535
|
Crysers
|
75,856,666
| 8,467,078
|
mypy doesn't understand copy.copy
|
<p>I'm having issues getting mypy to understand that the functions <code>copy.deepcopy</code> (and also <code>copy.copy</code>) from the <code>copy</code> standard library are callables. Even more so when these functions are assigned to a variable via a <code>x if y else z</code> expression. Below I included an example with two generic functions, to illustrate that the issue is not the assignment itself. The code itself runs without any errors and the lists at the bottom all print to the expected results.</p>
<p>However, when running mypy over this code, I get three different error messages, included below.</p>
<p>The generic example works fine, both with <code>map</code> and using list comprehension, no errors:</p>
<pre><code>def add(x: int) -> int:
return x + 5
def mult(x: int) -> int:
return x * 5
do_mult = True
myfct = mult if do_mult else add
numbers = [2, 3, 4]
list(map(myfct, numbers)) # ok
[myfct(i) for i in numbers] # ok
</code></pre>
<p>First set things up, no issues so far:</p>
<pre><code>import copy
do_deepcopy = True
copyfct = copy.deepcopy if do_deepcopy else copy.copy
numbers = [2, 3, 4]
</code></pre>
<p>Here's where it gets weird:</p>
<pre><code>list(map(copyfct, numbers)) # error
</code></pre>
<p>produces:</p>
<pre><code>error: No overload variant of "map" matches argument types "function", "List[int]" [call-overload]
</code></pre>
<p>and some notes about possible overload variants.</p>
<p>If I try the same thing using a list comprehension instead of <code>map</code>:</p>
<pre><code>[copyfct(i) for i in numbers] # error
</code></pre>
<p>I get:</p>
<pre><code>error: Cannot call function of unknown type [operator]
</code></pre>
<p>Now if I apply the function directly, without the assignment to a variable, the list comprehension case is accepted by mypy:</p>
<pre><code>[copy.deepcopy(i) for i in numbers] # ok
</code></pre>
<p>However, trying the same with <code>map</code></p>
<pre><code>list(map(copy.deepcopy, numbers)) # error
</code></pre>
<p>will get me:</p>
<pre><code>error: Argument 1 to "map" has incompatible type "Callable[[_T, Optional[Dict[int, Any]], Any], _T]"; expected "Callable[[int], _T]" [arg-type]
</code></pre>
<p>I do not know what's going on here, it seems to me as though mypy has some issue understanding the <code>copy</code> functions.</p>
<p>Regarding versions, I'm running mypy 1.1.1 and python 3.9.16</p>
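The only workaround I've found so far is to annotate the variable explicitly, which gives mypy a single concrete callable type; the code behaves identically at runtime, but it doesn't explain the underlying behaviour:

```python
import copy
from typing import Any, Callable

do_deepcopy = True
# An explicit annotation side-steps whatever mypy infers for the
# union of copy.deepcopy's and copy.copy's signatures
copyfct: Callable[[Any], Any] = copy.deepcopy if do_deepcopy else copy.copy

numbers = [2, 3, 4]
copied = list(map(copyfct, numbers))
print(copied)  # [2, 3, 4]
```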
|
<python><mypy>
|
2023-03-27 13:59:56
| 0
| 345
|
VY_CMa
|
75,856,568
| 10,938,315
|
How to find exact substring in list of substrings?
|
<p>Say I have this dictionary:</p>
<pre><code>lookup = {"text_text": 1, "text_text_num": 1}
</code></pre>
<p>And a list of strings:</p>
<pre><code>my_strings = ["text_text_part1", "text_text_part2", "text_text_another_part3", "text_text_num_something_part3"]
</code></pre>
<p>How can I ensure that <code>my_strings[0]</code>, <code>my_strings[1]</code> and <code>my_strings[2]</code> are only matched with <code>text_text</code> from <code>lookup</code> while <code>my_strings[3]</code> is matched with <code>text_text_num</code>? The suffix is dynamic and I can't regex the desired part from my strings because I don't know when they stop.</p>
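What I've come up with so far is to try the lookup keys longest-first and require the match to end at an underscore boundary; it handles the four strings above, but I don't know whether it generalises:

```python
lookup = {"text_text": 1, "text_text_num": 1}
my_strings = ["text_text_part1", "text_text_part2",
              "text_text_another_part3", "text_text_num_something_part3"]

def best_match(s, lookup):
    # longest keys first, so "text_text_num" is preferred over "text_text"
    for key in sorted(lookup, key=len, reverse=True):
        if s == key or s.startswith(key + "_"):
            return key
    return None

matches = [best_match(s, lookup) for s in my_strings]
print(matches)
```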
|
<python>
|
2023-03-27 13:49:13
| 1
| 881
|
Omega
|
75,856,491
| 4,837,637
|
Python SqlAlchemy Google Cloud Sql connection: event loop closed error
|
<p>I'm trying to connect with my Google Cloud Sql db using google.cloud.sql.connector, and this is the code to connect:</p>
<pre><code>def getconn():
with Connector() as connector:
conn = connector.connect(
instance_connection_name,
"pg8000",
user = db_user,
password = db_pass,
db = db_name,
ip_type = IPTypes.PUBLIC
)
return conn
# create connection pool
pool = sqlalchemy.create_engine(
"postgresql+pg8000://",
creator=getconn
)
</code></pre>
<p>and this is the code to do a simple select:</p>
<pre><code>with pool.connect() as db_conn:
# insert into database
#db_conn.execute(insert_stmt, parameters={"id": "book1", "title": "Book One"})
# query database
result = db_conn.execute(sqlalchemy.text("SELECT * from users")).fetchall()
# Do something with the results
for row in result:
print(row)
connector.close()
</code></pre>
<p>When I run the code, the connection works and all the records of the table are printed, but afterwards I receive this error:</p>
<pre><code>Exception ignored in: <function _ProactorBasePipeTransport.__del__ at 0x0000026D24CD0A60>
Traceback (most recent call last):
File "C:\Program Files\Python310\lib\asyncio\proactor_events.py", line 116, in __del__
self.close()
File "C:\Program Files\Python310\lib\asyncio\proactor_events.py", line 108, in close
self._loop.call_soon(self._call_connection_lost, None)
File "C:\Program Files\Python310\lib\asyncio\base_events.py", line 750, in call_soon
self._check_closed()
File "C:\Program Files\Python310\lib\asyncio\base_events.py", line 515, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
Exception ignored in: <function _ProactorBasePipeTransport.__del__ at 0x0000026D24CD0A60>
Traceback (most recent call last):
File "C:\Program Files\Python310\lib\asyncio\proactor_events.py", line 116, in __del__
self.close()
File "C:\Program Files\Python310\lib\asyncio\proactor_events.py", line 108, in close
self._loop.call_soon(self._call_connection_lost, None)
File "C:\Program Files\Python310\lib\asyncio\base_events.py", line 750, in call_soon
self._check_closed()
File "C:\Program Files\Python310\lib\asyncio\base_events.py", line 515, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
</code></pre>
<p>How can I fix it?</p>
|
<python><postgresql><google-cloud-platform><sqlalchemy><google-cloud-sql>
|
2023-03-27 13:42:44
| 2
| 415
|
dev_
|
75,856,326
| 4,262,876
|
pynput keyboard typing upper case letter
|
<p>I'm facing a weird issue here with the pynput library and maybe one of you might enlighten me.</p>
<p>So this is a simple code that will help you to reproduce the behavior I'm having.</p>
<p>Install pynput with <code>pip install pynput</code>, then copy and paste the code into a .py file and run it. Then, in a notepad, try hitting the "ENTER" key and then the ";" key.</p>
<p>They both call the execute_command method, but one prints everything correctly and the other prints everything, or almost everything, in upper case.</p>
<p><a href="https://i.sstatic.net/a7rXV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/a7rXV.png" alt="result" /></a></p>
<pre><code>from pynput.keyboard import Key, Controller, Listener
class KeyListener:
def __init__(self):
self.listener = Listener(on_press=self.on_press)
self.listener.start()
self.keyboard = Controller()
def on_press(self, key):
try:
if key == Key.esc:
self.stop()
elif key == Key.enter or key.char == ";":
# Execute the command when "Enter" or ";" is detected
self.execute_command()
except AttributeError:
pass
def execute_command(self):
# just print something
self.keyboard.type('Test from the execute command')
def stop(self):
self.listener.stop()
def main():
listener = KeyListener()
listener.listener.join()
if __name__ == "__main__":
main()
</code></pre>
<p>I appreciate your help.</p>
<p>FYI: I'm currently using a (german) qwertz keyboard</p>
|
<python><python-3.x><python-2.7><keyboard><pynput>
|
2023-03-27 13:26:15
| 0
| 924
|
1020rpz
|
75,856,310
| 14,789,957
|
Grouping dataframe by similar non matching values
|
<p>If I have a pandas dataframe with the following columns: <strong>id</strong>, <strong>num</strong>, <strong>amount</strong>.</p>
<p>I want to group the dataframe such that all rows in each group have the same <strong>id</strong> and <strong>amount</strong>, and where each row's <strong>num</strong> differs from the next row's <strong>num</strong> by no more than 10.</p>
<p>For the same <strong>id</strong>, if one row to the next does not have the same <strong>amount</strong>, or if the absolute difference between the two <strong>num</strong> values is more than 10, a new grouping starts. A row with a different <strong>id</strong> in the middle does not break a grouping.</p>
<p>How can I go about doing this?</p>
<p>I have not managed to build a grouping where I'm not looking for exactly matching values (like here, where the values need to be close but not matching). I'm assuming this needs some custom grouping function, but I've been having trouble putting one together.</p>
<p><strong>Example dataframe:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>amount</th>
<th>num</th>
</tr>
</thead>
<tbody>
<tr>
<td>aaa-aaa</td>
<td>130</td>
<td>12</td>
</tr>
<tr>
<td>aaa-aaa</td>
<td>130</td>
<td>39</td>
</tr>
<tr>
<td>bbb-bbb</td>
<td>270</td>
<td>41</td>
</tr>
<tr>
<td>ccc-ccc</td>
<td>130</td>
<td>19</td>
</tr>
<tr>
<td>bbb-bbb</td>
<td>270</td>
<td>37</td>
</tr>
<tr>
<td>aaa-aaa</td>
<td>130</td>
<td>42</td>
</tr>
<tr>
<td>aaa-aaa</td>
<td>380</td>
<td>39</td>
</tr>
</tbody>
</table>
</div>
<p><strong>Expected Groups:</strong></p>
<p>Group 1:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>amount</th>
<th>num</th>
</tr>
</thead>
<tbody>
<tr>
<td>aaa-aaa</td>
<td>130</td>
<td>12</td>
</tr>
</tbody>
</table>
</div>
<p>Group 2:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>amount</th>
<th>num</th>
</tr>
</thead>
<tbody>
<tr>
<td>aaa-aaa</td>
<td>130</td>
<td>39</td>
</tr>
<tr>
<td>aaa-aaa</td>
<td>130</td>
<td>42</td>
</tr>
</tbody>
</table>
</div>
<p>Group 3:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>amount</th>
<th>num</th>
</tr>
</thead>
<tbody>
<tr>
<td>bbb-bbb</td>
<td>270</td>
<td>41</td>
</tr>
<tr>
<td>bbb-bbb</td>
<td>270</td>
<td>37</td>
</tr>
</tbody>
</table>
</div>
<p>Group 4:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>amount</th>
<th>num</th>
</tr>
</thead>
<tbody>
<tr>
<td>ccc-ccc</td>
<td>130</td>
<td>19</td>
</tr>
</tbody>
</table>
</div>
<p>Group 5:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>amount</th>
<th>num</th>
</tr>
</thead>
<tbody>
<tr>
<td>aaa-aaa</td>
<td>380</td>
<td>39</td>
</tr>
</tbody>
</table>
</div>
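<p>One way to sketch this (not necessarily the only approach): group by <code>id</code> and <code>amount</code>, then start a new sub-group whenever consecutive <code>num</code> values within that pair jump by more than 10:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "id": ["aaa-aaa", "aaa-aaa", "bbb-bbb", "ccc-ccc", "bbb-bbb", "aaa-aaa", "aaa-aaa"],
    "amount": [130, 130, 270, 130, 270, 130, 380],
    "num": [12, 39, 41, 19, 37, 42, 39],
})

# Within each (id, amount) pair, a jump of more than 10 between consecutive
# `num` values starts a new sub-group; cumsum turns the break flags into labels.
df["grp"] = df.groupby(["id", "amount"], sort=False)["num"].transform(
    lambda s: (s.diff().abs() > 10).cumsum()
)

groups = [g.drop(columns="grp")
          for _, g in df.groupby(["id", "amount", "grp"], sort=False)]
n_groups = len(groups)
```

On the example data this yields the five expected groups; rows of other ids in between do not break a grouping because the diff is computed per (id, amount) pair, not over the whole frame.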
|
<python><pandas><dataframe><grouping>
|
2023-03-27 13:24:46
| 2
| 785
|
yem
|
75,856,240
| 5,896,319
|
How to display celery output on the frontend?
|
<p>I have a celery task in my project that triggers a command from another project.
Everything is working well. I'm running the celery worker as</p>
<pre><code>celery -A fdaconfig worker --loglevel=info
</code></pre>
<p>How can I display the terminal output on the frontend?</p>
<pre><code>@shared_task
def trigger_model(main_config, runmode_config, modelreport):
    with cd("/another_project"):
        spark_submit_str = "python3 run_trigger.py " + "/this_project" + main_config
        process = subprocess.Popen(spark_submit_str, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                                   universal_newlines=True, shell=True)
        stdout, stderr = process.communicate()
        if process.returncode != 0:
            print(stderr)
        print(stdout)
</code></pre>
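<p>Celery stores whatever a task <em>returns</em> in the result backend, so one common pattern (a sketch, assuming a result backend is configured) is to return the captured output instead of printing it; the frontend can then poll the task id via <code>AsyncResult</code>. The capture itself needs nothing Celery-specific:</p>

```python
import subprocess

def run_and_capture(cmd: str) -> dict:
    # Return (rather than print) the output; inside a @shared_task this
    # return value would land in Celery's result backend for polling.
    proc = subprocess.run(cmd, capture_output=True, text=True, shell=True)
    return {"returncode": proc.returncode,
            "stdout": proc.stdout,
            "stderr": proc.stderr}

result = run_and_capture("echo hello")
```

A Django view could then look up <code>AsyncResult(task_id).result</code> and return it as JSON; live streaming while the command runs would need something extra (periodic writes to a shared store, or websockets).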
|
<python><django><celery>
|
2023-03-27 13:17:31
| 1
| 680
|
edche
|
75,856,221
| 3,614,197
|
Stacked bar chart from time series data when stacked divisions are in the same column - Pandas
|
<p>I have a dataframe that look like the following</p>
<pre><code>    TrgID              SenName  SignalToNoise        date
0   20201001000732016  a             1.645613  2020-10-01
1   20201001000732016  b             2.601088  2020-10-01
2   20201001000732016  c             1.253404  2020-10-01
3   20201001000732017  a             6.062578  2020-10-01
4   20201001000732017  b             2.753620  2020-10-01
5   20201001000732017  c             3.671336  2020-10-01
6   20201001000732018  a             1.466516  2020-10-01
7   20201001000732018  b             1.232844  2020-10-01
8   20201001000732018  c             2.028571  2020-10-01
9   20210331234440962  a            11.182038  2020-10-02
10  20210331234440962  b            11.413975  2020-10-02
11  20210331234440962  c            14.690728  2020-10-02
12  20210331234440963  a             1.228948  2020-10-02
13  20210331234440963  b             1.105445  2020-10-02
14  20210331234440963  c             2.035442  2020-10-02
15  20210331234440964  a             2.453167  2020-10-02
16  20210331234440964  b             2.075166  2020-10-02
17  20210331234440964  c             1.140017  2020-10-02
</code></pre>
<p>I would like to create a stacked bar chart where the x-axis is the date and the y-axis shows the "SignalToNoise" values stacked by "SenName".</p>
<p>what I have so far is:</p>
<pre><code>ax = df.plot.bar(x = 'date', y = 'SignalToNoise')
</code></pre>
<p>However, I just get a separate bar for each entry in my df. Most of the examples I've found demonstrate values from different columns. Any help greatly appreciated.</p>
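<p>A common way to get there (a sketch with made-up values) is to pivot so that each <code>SenName</code> becomes its own column, then plot with <code>stacked=True</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "SenName": ["a", "b", "c", "a", "b", "c"],
    "SignalToNoise": [1.6, 2.6, 1.2, 11.2, 11.4, 14.7],
    "date": ["2020-10-01"] * 3 + ["2020-10-02"] * 3,
})

# One column per sensor, one row per date; aggfunc="sum" totals duplicate
# entries per day (use "mean" instead if an average per day is wanted).
pivoted = df.pivot_table(index="date", columns="SenName",
                         values="SignalToNoise", aggfunc="sum")

# ax = pivoted.plot.bar(stacked=True)  # each bar is a date, stacked by SenName
```

Whether summing many trigger-IDs per day is the right aggregation is a modelling choice, but the pivot step is what turns "stacked divisions in the same column" into the column-per-series shape that <code>plot.bar</code> expects.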
|
<python><pandas><matplotlib>
|
2023-03-27 13:14:45
| 0
| 636
|
Spooked
|
75,856,169
| 8,981,425
|
Pandas Datetime reduced ISO format
|
<p>I have to read tables where some columns may be dates. These tables are defined by the user so the format may change from one to another. For example, these would be valid formats:</p>
<pre><code>'1998'
'2003-10-1'
'2003/5'
'2004/4/5 10:40'
</code></pre>
<p>Pandas pd.to_datetime() handles this really well, but when it is time to display the date I want to display just the part of the DateTime defined by the user. Continuing with the previous example:</p>
<pre><code>'1998' --> '1998'
'2003-10-1' --> '2003-10-1'
'2003/5' --> '2003-05'
'2004/4/5 10:40' --> '2004-04-05T10:40'
</code></pre>
<p>What I am trying to avoid is, when a user just defines the year <code>'1998'</code>, to display the full ISO DateTime <code>'1998-01-01T00:00'</code>. I would like to know if pandas or python provide such functionality.</p>
<p>If I have to create my own function I guess I would force the user to follow the ISO 8601 and use a regex to extract the groups of the date.</p>
<p><strong>EDIT</strong>:</p>
<p>Since it seems it is not possible to solve it using pandas I have decided to enforce the use of ISO8601 and use these functions:</p>
<pre><code>import re
from pandas._libs.tslib import _test_parse_iso8601

def is_iso8601(string):
    try:
        _test_parse_iso8601(string)
        return True
    except ValueError:
        return False

def capture_iso_date(text):
    if not is_iso8601(text):
        raise ValueError("Date format not valid")
    date_parts = ["year", "month", "day", "hour", "minute", "second"]
    regex = r"(\d{4})(?:-(\d{2}))?(?:-(\d{2}))?(?:[\s,T](\d{2}))?(?::(\d{2}))?(?::(\d{2}))?"
    match = re.search(regex, text)
    # This is not necessary, but solves linter problems
    if match is None: raise ValueError("Date format not valid")
    date_list = match.groups()
    date_dict = {date_parts[i]: date_list[i] for i in range(len(date_list))}
    return date_dict

def format_iso_date(date_dict):
    ret: str = ""
    if date_dict['year']: ret += date_dict['year']
    else: return None
    if date_dict['month']: ret += "-" + date_dict['month']
    else: return ret
    if date_dict['day']: ret += "-" + date_dict['day']
    else: return ret
    if date_dict['hour']: ret += " " + date_dict['hour']
    else: return ret
    if date_dict['minute']: ret += ":" + date_dict['minute']
    else: return ret + ':00'
    if date_dict['second']: ret += ":" + date_dict['second']
    else: return ret
    return ret

format_iso_date(capture_iso_date("2021-03-28T12"))
</code></pre>
<p>Thanks to <a href="https://stackoverflow.com/users/10197418/fobersteiner">FObersteiner </a> for pointing me this question: <a href="https://stackoverflow.com/questions/46842793/datetime-conversion-how-to-extract-the-inferred-format">Datetime conversion - How to extract the inferred format?</a></p>
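<p>A pandas-free alternative sketch (assuming the planned ISO 8601 enforcement, i.e. <code>-</code> and <code>T</code> separators): try <code>strptime</code> formats from coarsest to finest and remember which one matched, so the same format can be reused for display:</p>

```python
from datetime import datetime

# Reduced ISO-8601 precisions, coarsest to finest; the first format that
# parses tells us how much of the date the user actually specified.
FORMATS = ["%Y", "%Y-%m", "%Y-%m-%d", "%Y-%m-%dT%H:%M", "%Y-%m-%dT%H:%M:%S"]

def parse_with_precision(text: str):
    for fmt in FORMATS:
        try:
            return datetime.strptime(text, fmt), fmt
        except ValueError:
            continue
    raise ValueError(f"Not a reduced ISO 8601 date: {text!r}")

dt, fmt = parse_with_precision("2004-04-05T10:40")
display = dt.strftime(fmt)  # round-trips at the user's precision
```

<code>strptime</code> is lenient about zero-padding, so inputs like <code>'2003-10-1'</code> still parse with <code>%Y-%m-%d</code>; only inputs using <code>/</code> separators would need normalizing first.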
|
<python><pandas><datetime><iso8601>
|
2023-03-27 13:10:45
| 0
| 367
|
edoelas
|
75,855,999
| 3,103,767
|
How to broadcast using asyncio's datagram endpoint?
|
<p>I have tried to build on <a href="https://docs.python.org/3/library/asyncio-protocol.html#udp-echo-client" rel="nofollow noreferrer">asyncio's UDP echo client example</a> to make a broadcaster (for Wake-on-LAN, but I cut some details below to keep the code short). The code below always fails to send. I have tried other, non-broadcast IPs; this doesn't matter. Setting allow_broadcast to False does let the code complete. How can I make it work when broadcasting? I am on Windows 11, Python 3.10. NB: I have commented out two lines to make sure the socket isn't closed too early (then I get other errors; those are a later worry).</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from typing import Optional

BROADCAST_IP = "255.255.255.255"
DEFAULT_PORT = 9


class _WOLProtocol:
    def __init__(self, *messages):
        self.packets = messages
        self.done = asyncio.get_running_loop().create_future()
        self.transport = None

    def connection_made(self, transport):
        for p in self.packets:
            transport.sendto(p.encode())
        #transport.close()

    def error_received(self, exc):
        self.done.set_exception(exc)

    def connection_lost(self, exc):
        print('closing')
        #self.done.set_result(None)


async def send_magic_packet(
    *macs: str,
    ip_address: str = BROADCAST_IP,
    port: int = DEFAULT_PORT,
    interface: Optional[str] = None
) -> None:
    loop = asyncio.get_running_loop()
    transport, protocol = await loop.create_datagram_endpoint(
        lambda: _WOLProtocol(*macs),
        remote_addr=(ip_address, port),
        allow_broadcast=True,
        local_addr=(interface, 0) if interface else None
    )
    try:
        await protocol.done
    finally:
        transport.close()


if __name__ == "__main__":
    asyncio.run(send_magic_packet('test'))
</code></pre>
<p>Error i get:</p>
<pre><code>Exception in callback _ProactorDatagramTransport._loop_reading()
handle: <Handle _ProactorDatagramTransport._loop_reading()>
Traceback (most recent call last):
  File "C:\Program Files\Python310\lib\asyncio\proactor_events.py", line 570, in _loop_reading
    self._read_fut = self._loop._proactor.recv(self._sock,
  File "C:\Program Files\Python310\lib\asyncio\windows_events.py", line 458, in recv
    ov.WSARecv(conn.fileno(), nbytes, flags)
OSError: [WinError 10022] An invalid argument was supplied

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Program Files\Python310\lib\asyncio\events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "C:\Program Files\Python310\lib\asyncio\proactor_events.py", line 576, in _loop_reading
    self._protocol.error_received(exc)
  File ".....\code.py", line 23, in error_received
    self.done.set_exception(exc)
asyncio.exceptions.InvalidStateError: invalid state
closing
Traceback (most recent call last):
  File ".....\code.py", line 50, in <module>
    asyncio.run(send_magic_packet('test'))
  File "C:\Program Files\Python310\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "C:\Program Files\Python310\lib\asyncio\base_events.py", line 649, in run_until_complete
    return future.result()
  File ".....\code.py", line 45, in send_magic_packet
    await protocol.done
  File "C:\Program Files\Python310\lib\asyncio\proactor_events.py", line 530, in _loop_writing
    self._write_fut = self._loop._proactor.send(self._sock,
  File "C:\Program Files\Python310\lib\asyncio\windows_events.py", line 541, in send
    ov.WSASend(conn.fileno(), buf, flags)
OSError: [WinError 10057] A request to send or receive data was disallowed because the socket is not connected and (when sending on a datagram socket using a sendto call) no address was supplied
</code></pre>
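<p>Separately from the broadcast question: the <code>InvalidStateError</code> in the middle of the traceback happens because the done-future gets resolved twice (once in <code>error_received</code>, then again later). A minimal sketch of guarding a future against double resolution (not a full WOL fix):</p>

```python
import asyncio

class SafeFutureProtocol:
    """Resolve the done-future at most once, so error_received and
    connection_lost firing in sequence cannot raise InvalidStateError."""
    def __init__(self):
        self.done = asyncio.get_running_loop().create_future()

    def error_received(self, exc):
        if not self.done.done():
            self.done.set_exception(exc)

    def connection_lost(self, exc):
        if not self.done.done():
            self.done.set_result(None)

async def demo():
    p = SafeFutureProtocol()
    p.error_received(OSError("first"))
    p.connection_lost(None)  # second resolution is silently skipped
    try:
        await p.done
    except OSError as e:
        return str(e)

outcome = asyncio.run(demo())
```

With a guard like this, only the first error surfaces to <code>await protocol.done</code> and the secondary <code>InvalidStateError</code> disappears, which should make the underlying Windows socket errors easier to diagnose.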
<p>For reference, the corresponding sync function body for <code>send_magic_packet()</code> would be the below, which works fine:</p>
<pre><code># credit: https://github.com/remcohaszing/pywakeonlan/blob/main/wakeonlan/__init__.py
import socket

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    if interface is not None:
        sock.bind((interface, 0))
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.connect((ip_address, port))
    for packet in macs:
        sock.send(packet.encode())
</code></pre>
|
<python><udp><python-asyncio><broadcast>
|
2023-03-27 12:54:24
| 1
| 983
|
Diederick C. Niehorster
|
75,855,883
| 9,998,989
|
Ways to suppress prints from joblib parallelization
|
<p>I have a script that uses parallelization. Can I suppress the parallelization prints with a simple contextlib suppressor, or do I have to initialize a mute inside the subprocesses?</p>
<pre><code>from joblib import Parallel, delayed
import os
import contextlib

def start():
    print('HELLO')

def para(i):
    print('hello')

a = Parallel(n_jobs=2)(delayed(para)(i) for i in [0, 1, 2])

with contextlib.redirect_stdout(open(os.devnull, "w")):
    start()
</code></pre>
<p>Output:</p>
<pre><code>'hello'
'hello'
'hello'
</code></pre>
<p>Is there any way to suppress the parallelization output as well, without having to initialize it in the function itself?</p>
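<p><code>contextlib.redirect_stdout</code> only swaps the Python-level <code>sys.stdout</code> object in the current process, so worker processes never see it. A sketch of redirecting at the file-descriptor level instead — this silences anything writing to the parent's fd 1, including children that inherit it (whether a given joblib backend's workers actually inherit it is an assumption worth checking):</p>

```python
import contextlib
import os
import sys

@contextlib.contextmanager
def suppress_stdout_fd():
    """Point file descriptor 1 at os.devnull for the duration of the block."""
    sys.stdout.flush()
    saved = os.dup(1)                       # remember where fd 1 pointed
    devnull = os.open(os.devnull, os.O_WRONLY)
    os.dup2(devnull, 1)                     # fd 1 -> /dev/null
    os.close(devnull)
    try:
        yield
    finally:
        sys.stdout.flush()
        os.dup2(saved, 1)                   # restore fd 1
        os.close(saved)
```

Wrapping the <code>Parallel(...)</code> call in <code>with suppress_stdout_fd():</code> would then be the thing to try: forked workers inherit the redirected descriptor, while spawned workers may set up their own stdout and need the mute inside the worker function after all.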
|
<python><stdout>
|
2023-03-27 12:42:16
| 1
| 752
|
Noob Programmer
|
75,855,618
| 7,945,506
|
Selenium (python): How to access alerts?
|
<p>During an interaction with a website via Selenium (Python) I get this popup window:
<a href="https://i.sstatic.net/hTu2i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hTu2i.png" alt="enter image description here" /></a></p>
<p>I want to check the checkbox and then click on the left button.</p>
<p>I tried accessing the popup window (which I think is an alert) with</p>
<pre><code>driver.switch_to.alert
</code></pre>
<p>However, this raises an <code>NoAlertPresentException</code>. As I did this step by step in a jupyter notebook, the popup is definitely already there (no timing problem).</p>
<p>Is this not an alert? How do I solve this problem?</p>
<p>Thanks a lot!</p>
|
<python><selenium-webdriver><alert>
|
2023-03-27 12:14:54
| 1
| 613
|
Julian
|
75,855,547
| 10,542,284
|
subprocess.check_output doesn't recognize unix command
|
<pre><code>import subprocess

with open("file.txt", 'r') as fl:
    xs = fl.readlines()
    for x in xs:
        output = subprocess.check_output(f"command -L {x} -N", shell=True, stderr=subprocess.STDOUT)
        print(output)
</code></pre>
<p>Trying to run this python script on Linux, but <code>subprocess</code> exits with status 127 (which means "command not found", according to <a href="https://stackoverflow.com/a/19328244/10542284">this answer</a>), and a newline character is appended to the argument.</p>
<pre><code>Traceback (most recent call last):
  File "/home/user/Documents/the_test/script/pythonstuff/script.py", line 9, in <module>
    output = subprocess.check_output(f"command -L {x} -N", stderr=subprocess.STDOUT)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/subprocess.py", line 466, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/subprocess.py", line 548, in run
    with Popen(*popenargs, **kwargs) as process:
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/subprocess.py", line 1024, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "/usr/lib/python3.11/subprocess.py", line 1901, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'command -L x \n -N'
</code></pre>
<p>My path is correct and the command exists. What can I do?</p>
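<p>Two things are likely at play here: iterating the file keeps the trailing <code>\n</code> on each line (visible in the error as <code>'x \n'</code>), and <code>command</code> is a shell builtin rather than an executable. A sketch with <code>echo</code> standing in for the real program (the actual command name and its <code>-L</code>/<code>-N</code> flags are the asker's):</p>

```python
import subprocess

# Stand-in input file for the asker's file.txt
with open("file.txt", "w") as fh:
    fh.write("hello\nworld\n")

results = []
with open("file.txt") as fl:
    for line in fl:
        arg = line.strip()  # drop the trailing newline that readlines() keeps
        if not arg:
            continue
        # An argument list avoids the shell and its quoting pitfalls; `echo`
        # is a placeholder for the real executable, which must be on PATH.
        out = subprocess.check_output(["echo", arg], text=True)
        results.append(out.strip())
```

If the program really must run through the shell (e.g. it is an alias or builtin), keeping <code>shell=True</code> but stripping the newline first would be the minimal change instead.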
|
<python><subprocess>
|
2023-03-27 12:04:18
| 1
| 473
|
Jugert Mucoimaj
|
75,855,392
| 8,337,391
|
Convert DataFrameGroupBy object to Dict without Index column
|
<p>I want to do a <em>groupby</em> on my <em>dataframe</em> (like below):</p>
<pre><code>VendorID  ProductID   Qty  Expiring  Date
  123456    P789456  1195      True   xyz
  123456    P123456  1015      True   xyz
  987564    P168816   251     False   xyz
  123456    P900456  1222      True   xyz
</code></pre>
<p>I want a dictionary like below, i.e. column names as keys and the row values as values (<strong>NOTE</strong>: without the <strong>INDEX</strong> column of the <em>dataframe</em>):</p>
<pre><code>[
    {
        'vendorid': 123456,
        'products': [
            {'productid': 'P789456', 'qty': 1195},
            {'productid': 'P123456', 'qty': 1015},
            {'productid': 'P900456', 'qty': 1222}
        ],
        'expiring': True,
        'date': 'xyz'
    },
    {
        'vendorid': 987564,
        'products': [
            {'productid': 'P168816', 'qty': 251},
        ],
        'expiring': False,
        'date': 'xyz'
    },
    ...
]
</code></pre>
<p>I was trying</p>
<pre><code>df = df.groupby('vendorid')[['productid', 'qty', 'expiring', 'date']]
df = df.apply(lambda x: x.set_index('productid').to_dict(orient='index')).to_dict()
</code></pre>
<p>on <code>df</code> but getting errors.</p>
<p>Appreciate any clue/direction with a little bit of explanation.</p>
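<p>One way to sketch this (column names lower-cased here to match the desired output): iterate the groups and use <code>to_dict(orient='records')</code> for the per-vendor product rows, which already drops the index:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "vendorid": [123456, 123456, 987564, 123456],
    "productid": ["P789456", "P123456", "P168816", "P900456"],
    "qty": [1195, 1015, 251, 1222],
    "expiring": [True, True, False, True],
    "date": ["xyz"] * 4,
})

# One dict per vendor: the product rows become a list of records (no index),
# while expiring/date are taken from the group's first row.
records = [
    {
        "vendorid": vid,
        "products": g[["productid", "qty"]].to_dict(orient="records"),
        "expiring": g["expiring"].iloc[0],
        "date": g["date"].iloc[0],
    }
    for vid, g in df.groupby("vendorid", sort=False)
]
```

Taking <code>expiring</code> and <code>date</code> from the first row assumes they are constant per vendor, as in the example table; if not, they would belong inside the products list too.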
|
<python><python-3.x><pandas><dataframe>
|
2023-03-27 11:47:47
| 1
| 433
|
iPaul
|