| QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (stringdate, 2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars, nullable ⌀) |
|---|---|---|---|---|---|---|---|---|
75,548,703
| 11,462,274
|
Collect the selected value's text from a dropdown box that doesn't change anything on the element upon click
|
<p>I open <a href="https://int.soccerway.com/teams/brazil/ituano-futebol-clube/340/statistics/" rel="nofollow noreferrer">this page</a> with Selenium WebDriver (Firefox).</p>
<p>Here there is a dropdown box:</p>
<p><a href="https://i.sstatic.net/MMOg3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MMOg3.png" alt="PHOTOGRAPH" /></a></p>
<pre><code><table class="table compare section-dropdowns">
<thead>
<tr class="sub-head dropdown-head">
<th class="col1">&nbsp;</th>
<th><div data-active-dropdown-id="" class="active-dropdown">
<select name="competition_id">
<option value="89" selected="selected">Serie B - 2023</option>
<option value="239">Paulista A1 - 2023</option>
<option value="231">Copa do Brasil - 2023</option>
</select>
</div></th>
</tr>
</thead>
</table>
</code></pre>
<p>When I select another option, the visible text changes, but nothing in the element's HTML changes; it even keeps saying that the first option is selected:</p>
<p><a href="https://i.sstatic.net/OpmPX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OpmPX.png" alt="PHOTO 2" /></a></p>
<pre><code><table class="table compare section-dropdowns">
<thead>
<tr class="sub-head dropdown-head">
<th class="col1">&nbsp;</th>
<th><div data-active-dropdown-id="" class="active-dropdown">
<select name="competition_id">
<option value="89" selected="selected">Serie B - 2023</option>
<option value="239">Paulista A1 - 2023</option>
<option value="231">Copa do Brasil - 2023</option>
</select>
</div></th>
</tr>
</thead>
</table>
</code></pre>
<p>How can I get the new value of the box after selecting it while the element remains unchanged?</p>
<p>I tried several approaches with the webdriver and with BeautifulSoup, but none returned the new value correctly, including simply getting the text without any further handling:</p>
<pre><code>soup = BeautifulSoup(driver.page_source, 'html.parser')
season_opt = soup.select('table.section-dropdowns select[name="competition_id"]')[0]
season = season_opt.text.strip()
</code></pre>
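A sketch of why parsing <code>page_source</code> cannot work here: the <code>selected</code> attribute in the HTML never moves, so any static parse reports the first option. The snippet below trims the question's HTML and is only an illustration; in live Selenium the current choice lives in the DOM <em>property</em>, which would need <code>get_attribute("value")</code> or a JavaScript call (the element handle in the trailing comment is hypothetical).

```python
from bs4 import BeautifulSoup

# Trimmed HTML from the question: 'selected' stays on the first option forever.
html = '''
<table class="table compare section-dropdowns">
  <select name="competition_id">
    <option value="89" selected="selected">Serie B - 2023</option>
    <option value="239">Paulista A1 - 2023</option>
    <option value="231">Copa do Brasil - 2023</option>
  </select>
</table>
'''

soup = BeautifulSoup(html, 'html.parser')
select = soup.select_one('table.section-dropdowns select[name="competition_id"]')
stale = select.find('option', selected=True)
print(stale['value'])  # → 89, no matter what the user picked in the browser

# In live Selenium, read the DOM property instead (hypothetical element handle):
# driver.execute_script("return arguments[0].value;", select_element)
```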
|
<python><selenium-webdriver><web-scraping><beautifulsoup>
|
2023-02-23 18:07:42
| 1
| 2,222
|
Digital Farmer
|
75,548,633
| 1,870,969
|
Find all disjoint subsets of a binary matrix in Python
|
<p>I have a binary matrix, and I want to find all disjoint subsets that exist in this matrix. To clarify the problem, the matrix is a collection of image masks, masks of irregular shapes, and each disjoint subset of 1s representing a separate mask. In other words, if I have a collection of <code>N</code> disjoint subsets in a matrix of size <code>d1xd2</code>, I want to end up with <code>N</code> matrices of size <code>d1xd2</code>, each having only one of the disjoint subsets of 1s.
Any help on that is highly appreciated.</p>
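The standard tool for this is connected-component labelling (for example <code>scipy.ndimage.label</code>). As a dependency-light sketch of the idea, assuming 4-connectivity, a BFS flood fill can split the matrix into one mask per component:

```python
import numpy as np
from collections import deque

def split_masks(matrix):
    """Return one d1xd2 matrix per 4-connected component of 1s (a sketch)."""
    mat = np.asarray(matrix)
    seen = np.zeros_like(mat, dtype=bool)
    rows, cols = mat.shape
    out = []
    for r in range(rows):
        for c in range(cols):
            if mat[r, c] == 1 and not seen[r, c]:
                comp = np.zeros_like(mat)
                q = deque([(r, c)])
                seen[r, c] = True
                while q:
                    y, x = q.popleft()
                    comp[y, x] = 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mat[ny, nx] == 1 and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                out.append(comp)
    return out

m = np.array([[1, 1, 0],
              [0, 0, 0],
              [0, 0, 1]])
parts = split_masks(m)
print(len(parts))  # → 2
```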
|
<python><image-processing><matrix><disjoint-sets>
|
2023-02-23 18:02:17
| 1
| 997
|
Vahid S. Bokharaie
|
75,548,459
| 14,269,252
|
Streamlit app: if a checkbox is ticked, concat all the DataFrames the ticked checkboxes refer to
|
<p>I am building a Streamlit app. I put a checkbox for each DataFrame.
When checkboxes are ticked, I want to concat all the corresponding DataFrames.</p>
<p>For instance, if options 1 and 2 are ticked, I want to concat only dataframe 1 and 2.</p>
<p>I wrote some code but cannot get the final result; can anyone help me modify it?</p>
<pre><code>option_1 = st.sidebar.checkbox('dataframe1 ')
option_2 = st.sidebar.checkbox('dataframe2 ')
option_3 = st.sidebar.checkbox('dataframe3 ')
option_4 = st.sidebar.checkbox('dataframe4 ')

dic = {"option_1": "dataframe_1 ", "option_2": "dataframe_2 ",
       "option_3": "dataframe_3 ", "option_4": "dataframe_4 ",
       }
df = None
for key, val in dic.items():
    if option_1 or option_2 or option_3 or option_4:
        df = pd.concat([dataframe_1, dataframe_2, dataframe_3, dataframe_4])
    else:
        None
</code></pre>
<p>My another try:</p>
<pre><code>df = None
list2 = []
for key, val in dic.items():
    st.write(key)
    if option_1 or option_2 or option_3 or option_4:
        list2.append(val)
        st.write(list2)
        for i in list2:
            df = pd.concat(i)
    else:
        None
</code></pre>
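A sketch of the selection logic the question is after: pair each flag with its DataFrame, then concat only the ticked ones. The <code>dataframe_*</code> objects below are hypothetical stand-ins, and the Streamlit checkbox calls are shown as comments so the core logic stays runnable on its own.

```python
import pandas as pd

# Hypothetical stand-ins for the question's dataframe_1 ... dataframe_4.
dataframe_1 = pd.DataFrame({'x': [1]})
dataframe_2 = pd.DataFrame({'x': [2]})
dataframe_3 = pd.DataFrame({'x': [3]})
dataframe_4 = pd.DataFrame({'x': [4]})

# In the app each flag would come from a checkbox, e.g.:
# option_1 = st.sidebar.checkbox('dataframe1')
option_1, option_2, option_3, option_4 = True, True, False, False

# Pair each flag with its DataFrame, then concat only the ticked ones.
choices = [(option_1, dataframe_1), (option_2, dataframe_2),
           (option_3, dataframe_3), (option_4, dataframe_4)]
selected = [frame for flag, frame in choices if flag]
df = pd.concat(selected) if selected else None
print(list(df['x']))  # → [1, 2]
```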
|
<python><pandas><streamlit>
|
2023-02-23 17:44:23
| 2
| 450
|
user14269252
|
75,548,444
| 8,895,744
|
Polars dataframe drop nans
|
<p>I need to drop rows that have a NaN value in any column, just as <code>drop_nulls()</code> does for null values:</p>
<pre><code>df.drop_nulls()
</code></pre>
<p>but for NaNs. I have found that the method <code>drop_nans</code> exists for Series but not for DataFrames:</p>
<pre><code>df['A'].drop_nans()
</code></pre>
<p>Pandas code that I'm using:</p>
<pre><code>df = pd.DataFrame(
    {
        'A': [0, 0, 0, 1, None, 1],
        'B': [1, 2, 2, 1, 1, np.nan]
    }
)
df.dropna()
</code></pre>
|
<python><dataframe><python-polars>
|
2023-02-23 17:43:13
| 4
| 563
|
EnesZ
|
75,548,255
| 2,478,485
|
How to find a file location using a wildcard ("*") in the path?
|
<p>The following <code>cp</code> Linux command works fine to copy the file <code>"/home/temp/test-1.34.56/sample"</code> to the current location.</p>
<p><strong>Shell command:</strong> Working fine</p>
<pre><code>cp "/home/temp/test-*/sample" "./"
</code></pre>
<p><strong>Python code:</strong>
It is not working using <code>os.rename</code>:</p>
<pre><code>os.rename("/home/temp/test-*/sample", "./")
</code></pre>
<p>Any other options?</p>
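The underlying issue is that <code>os.rename</code> treats <code>*</code> literally, while the shell expands it before <code>cp</code> ever runs. In Python, <code>glob.glob</code> does that expansion, and <code>shutil</code> does the copy. A sketch on a throwaway tree standing in for the question's paths:

```python
import glob
import os
import shutil
import tempfile
from pathlib import Path

# Build a throwaway tree standing in for /home/temp/test-1.34.56/sample.
root = tempfile.mkdtemp()
src_dir = os.path.join(root, "temp", "test-1.34.56")
os.makedirs(src_dir)
Path(src_dir, "sample").write_text("data")
dest = os.path.join(root, "dest")
os.makedirs(dest)

# glob.glob() expands the wildcard the way the shell does for cp.
matches = glob.glob(os.path.join(root, "temp", "test-*", "sample"))
for path in matches:
    shutil.copy(path, dest)  # use shutil.move() to mimic os.rename

print(len(matches))  # → 1
```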
|
<python><python-os>
|
2023-02-23 17:24:33
| 1
| 3,355
|
Lava Sangeetham
|
75,548,138
| 10,095,440
|
Python changes the returned array's dimensions when the function is called expecting an extra return value
|
<p>I have this function that performs pooling on an image:</p>
<pre><code>def pool_forward(A_prev, hparameters, mode="max"):
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
    f = hparameters["f"]
    s = hparameters["stride"]
    n_H = int(1 + (n_H_prev - f) / s)
    n_W = int(1 + (n_W_prev - f) / s)
    n_C = n_C_prev
    A = np.zeros((m, n_H, n_W, n_C))
    for i in range(m):
        for n in range(n_H):
            vert_start = s * n
            vert_end = vert_start + f
            for k in range(n_W):
                horiz_start = s * k
                horiz_end = horiz_start + f
                for j in range(n_C):
                    a_slice_prev = A_prev[i]
                    if (mode == 'max'):
                        A[i, n, k, j] = np.max(a_slice_prev[vert_start:vert_end, horiz_start:horiz_end, j])
                    else:
                        A[i, n, k, j] = np.mean(a_slice_prev[vert_start:vert_end, horiz_start:horiz_end, j])
    cache = (A_prev, hparameters)
    assert(A.shape == (m, n_H, n_W, n_C))
    print("A.shape = " + str(A.shape))
    return A
</code></pre>
<p>Then, if I call this function expecting two return values, it surprisingly works, but the output size changes:</p>
<pre><code>np.random.seed(1)
A_prev = np.random.randn(2, 5, 5, 3)
hparameters = {"stride": 1, "f": 3}
A, cache = pool_forward(A_prev, hparameters)
print("A.shape = " + str(A.shape))
</code></pre>
<p>the results of print A before return and after return are different.</p>
<pre><code>A.shape = (2, 3, 3, 3)
A.shape = (3, 3, 3)
</code></pre>
<p>But if we call the function with one variable, consistent with its return statement, it works fine. This is strange behavior; I want to ask if anyone has an idea why this happens in Python 3.</p>
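The mechanism behind the symptom can be sketched without the pooling code: the function returns a single ndarray whose first axis has length 2, and tuple-unpacking iterates over that axis, so <code>A, cache = ...</code> silently splits the array instead of failing.

```python
import numpy as np

A = np.zeros((2, 3, 3, 3))  # same shape as the pooled output in the question

# Unpacking an ndarray iterates its first axis, exactly like `x, y = [a, b]`.
first, second = A
print(first.shape)  # → (3, 3, 3)
```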
|
<python><function><return>
|
2023-02-23 17:12:11
| 0
| 317
|
Ibrahim zawra
|
75,548,059
| 264,136
|
Post the value of the selected item in a dropdown
|
<p>app.py:</p>
<pre><code>from flask import Flask, flash, redirect, render_template, request, url_for
import requests
import json

app = Flask(__name__)

@app.route("/", methods=['GET'])
def fill_devices():
    devices = requests.request("GET", "http://10.64.127.94:5000/api/get_platforms", headers={}, data="").json()["final_result"]
    return render_template('devices.html', devices=devices)

@app.route('/submit_device', methods=['POST'])
def submit_device():
    # get the device name here
    labels = requests.request("GET", "http://10.64.127.94:5000/api/get_labels", headers={},
                              data=json.dumps({"device": ""})).json()["final_result"]
    return render_template('labels.html', labels=labels)

if __name__ == "__main__":
    app.run(host='127.0.0.1', debug=True, port=5000)
</code></pre>
<p>labels.html</p>
<pre><code>{% extends "index.html" %}
{% block devices %}
<select name="device_id">
<option>SELECT</option>
{% for device in devices %}
<option>{{ device }}</option>
{% endfor %}
</select>
{% endblock devices %}
</code></pre>
<p>When someone changes the value in the dropdown, I need to get it in app.py so that I can fill the other dropdown and render it. How can I achieve this?</p>
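A minimal sketch of the usual pattern (route names kept from the question, the rest hypothetical): wrap the <code>&lt;select&gt;</code> in a form that auto-submits on change, and read the chosen value from <code>request.form</code> in the POST handler. Flask's test client exercises it without a browser.

```python
from flask import Flask, request

app = Flask(__name__)

# A form that auto-submits when the dropdown changes: 'this.form.submit()'
# posts the selected value without needing a separate button.
PAGE = '''
<form action="/submit_device" method="post">
  <select name="device_id" onchange="this.form.submit()">
    <option>SELECT</option>
    <option>router</option>
  </select>
</form>
'''

@app.route("/", methods=["GET"])
def index():
    return PAGE

@app.route("/submit_device", methods=["POST"])
def submit_device():
    device = request.form["device_id"]  # the value chosen in the dropdown
    return f"got {device}"

# Exercise the POST handler without a browser:
client = app.test_client()
resp = client.post("/submit_device", data={"device_id": "router"})
print(resp.get_data(as_text=True))  # → got router
```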
|
<python><flask>
|
2023-02-23 17:04:47
| 1
| 5,538
|
Akshay J
|
75,548,009
| 2,218,321
|
Python doesn't know the class attributes, while Jupyter does
|
<p>I have this code from the StatQuest channel. It works in Jupyter; however, when I run it from a <code>.py</code> file, it reports the error</p>
<blockquote>
<p>AttributeError: 'BasicNNTrain' object has no attribute 'w00'</p>
</blockquote>
<p>This is the code:</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.optim import SGD
import matplotlib.pyplot as plt
import seaborn as sns
class BasicNNTrain(nn.Module):
    def __int__(self):
        super().__init__()
        self.w00 = nn.Parameter(torch.tensor(1.7), requires_grad=False)
        self.b00 = nn.Parameter(torch.tensor(-0.85), requires_grad=False)
        self.w01 = nn.Parameter(torch.tensor(-40.8), requires_grad=False)
        self.w10 = nn.Parameter(torch.tensor(12.6), requires_grad=False)
        self.b10 = nn.Parameter(torch.tensor(0.0), requires_grad=False)
        self.w11 = nn.Parameter(torch.tensor(2.7), requires_grad=False)
        self.final_bias = nn.Parameter(torch.tensor(0.0), requires_grad=True)

    def forward(self, input):
        input_to_top_relu = input * self.w00 + self.b00
        top_relu_output = F.relu(input_to_top_relu)
        scaled_top_relu_output = top_relu_output * self.w01
        input_to_bottom_relu = input * self.w10 + self.b10
        bottom_relu_output = F.relu(input_to_bottom_relu)
        scaled_bottom_relu_output = bottom_relu_output * self.w11
        input_to_final_relu = scaled_top_relu_output + scaled_bottom_relu_output + self.final_bias
        output = F.relu(input_to_final_relu)
        return output

model = BasicNNTrain()
for name, param in model.named_parameters():
    print(name, param.data)
input_doses = torch.linspace(start=0, end=1, steps=11)
print(input_doses)
model(input_doses)
</code></pre>
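One thing worth checking, sketched below in plain Python on the assumption it is the culprit: the constructor in the code is spelled <code>__int__</code> (missing an "i"), a method Python never calls on construction, so none of the attributes get created. A stdlib-only reproduction of the symptom:

```python
class Broken:
    def __int__(self):  # misspelled __init__: never called when constructing
        self.w00 = 1.7

class Fixed:
    def __init__(self):
        self.w00 = 1.7

b = Broken()
print(hasattr(b, "w00"))  # → False, the same AttributeError situation
f = Fixed()
print(hasattr(f, "w00"))  # → True
```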
|
<python><python-3.x><machine-learning><jupyter>
|
2023-02-23 17:01:43
| 1
| 2,189
|
M a m a D
|
75,547,914
| 6,320,794
|
ModuleNotFoundError: No module named 'openvino'
|
<p>I'd like to run some official OpenVINO samples, but I always get the following error:</p>
<pre><code>from openvino.inference_engine import IECore
ModuleNotFoundError: No module named 'openvino'
</code></pre>
<p>I created a simple script to test this behavior:</p>
<p>IECore_test.py</p>
<pre><code>import sys
from openvino.inference_engine import IECore
ie=IECore()
print("End of test")
</code></pre>
<p>I'm testing on Raspberry Pi 3B with Movidius <code>Neural Compute Stick 1 (NCS1)</code>.<br />
The OS is <code>Raspberry Pi OS 32-bit (Legacy) Buster</code> (because <code>Bullseye</code> doesn't support NCS1).<br />
OpenVINO Version is <code>l_openvino_toolkit_runtime_raspbian_p_2020.3.194.tgz</code>,<br />
which is the last version that can support NCS1.</p>
<p>Here's the procedure to set up OpenVINO:</p>
<pre><code>sudo mkdir -p /opt/intel/openvino
mkdir ~/download
cd ~/download
wget https://storage.openvinotoolkit.org/repositories/openvino/packages/2020.3/l_openvino_toolkit_runtime_raspbian_p_2020.3.194.tgz
sudo tar -xf l_openvino_toolkit_runtime_raspbian_p_2020.3.194.tgz --strip 1 -C /opt/intel/openvino
echo "source /opt/intel/openvino/bin/setupvars.sh" >> ~/.bashrc
source /opt/intel/openvino/bin/setupvars.sh
sudo usermod -a -G users "$(whoami)"
sh /opt/intel/openvino/install_dependencies/install_NCS_udev_rules.sh
</code></pre>
<p>I searched on the Internet, then I noticed that <code>ie_api.so</code> plays an important role.<br />
I found that <code>ie_api.so</code> is located here:</p>
<pre><code>/opt/intel/openvino/python/python3.5/openvino/inference_engine/ie_api.so
</code></pre>
<p>I checked <code>$PYTHONPATH</code>:</p>
<pre><code>(openvino_env) pi@raspberrypi:~ $ echo $PYTHONPATH
/opt/intel/openvino/python/python3.7:
/opt/intel/openvino/python/python3:
/opt/intel/openvino/deployment_tools/model_optimizer:
</code></pre>
<p>Somehow, <code>/opt/intel/openvino/python/python3.5</code> was missing.<br />
(And, there is no <code>python3.7</code> directory under <code>/opt/intel/openvino/python/</code>, but there is one under <code>/usr/lib/</code>.)</p>
<p>So, I ran these two lines:</p>
<pre><code>export PYTHONPATH="/opt/intel/openvino/python/python3.5:$PYTHONPATH"
export PYTHONPATH="/opt/intel/openvino/python/python3.5/openvino/inference_engine:$PYTHONPATH"
</code></pre>
<p>Now <code>$PYTHONPATH</code> is:</p>
<pre><code>(openvino_env) pi@raspberrypi:~ $ echo $PYTHONPATH
/opt/intel/openvino/python/python3.5/openvino/inference_engine:
/opt/intel/openvino/python/python3.5:
/opt/intel/openvino/python/python3.7:
/opt/intel/openvino/python/python3:
/opt/intel/openvino/deployment_tools/model_optimizer:
</code></pre>
<p>I thought it would work, but <code>python3 IECore_test.py</code> returns another error:</p>
<pre><code>Traceback (most recent call last):
File "IECore_test.py", line 2, in <module>
from openvino.inference_engine import IECore
File "/opt/intel/openvino/python/python3.5/openvino/inference_engine/__init__.py", line 1, in <module>
from .ie_api import *
ImportError: libpython3.5m.so.1.0: cannot open shared object file: No such file or directory
</code></pre>
<p>I can't find <code>libpython3.5m.so.1.0</code> anywhere.<br />
So, I'm stuck here.<br />
How can I resolve these errors?</p>
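A small stdlib sanity check that narrows down this class of import error: the bundled <code>ie_api.so</code> sits under a <code>python3.5</code> directory, so it can only be loaded by a CPython 3.5 interpreter, and the missing <code>libpython3.5m.so.1.0</code> points the same way. The path below is taken from the question; the parsing is only illustration.

```python
import re
import sys

so_path = "/opt/intel/openvino/python/python3.5/openvino/inference_engine/ie_api.so"

# The directory name encodes which CPython the extension module targets.
m = re.search(r"python(\d+)\.(\d+)", so_path)
target = (int(m.group(1)), int(m.group(2)))

print(target)  # → (3, 5)
# Whether the running interpreter can load that extension at all:
print(sys.version_info[:2] == target)
```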
|
<python><raspberry-pi3><pythonpath><openvino><raspbian-buster>
|
2023-02-23 16:53:26
| 1
| 581
|
IanHacker
|
75,547,787
| 11,740,625
|
AWS ECR describe_image_scan_findings does not return findings
|
<p>A strange issue, I think. I am trying to automate gathering findings from AWS ECR image scans using the Python Boto3 ECR client's <code>describe_image_scan_findings</code>. I am able to get <code>findingSeverityCounts</code> in the response, but the actual detailed <code>findings</code> are not returned, even though the documentation says they should be included.</p>
<p>So I try:</p>
<pre><code>scan_report = ecr_client.describe_image_scan_findings(
    repositoryName=registry,
    imageId={
        'imageTag': most_recent_image
    },
    maxResults=1000
)
scan_findings = scan_report['imageScanFindings']
pp.pprint(scan_report)
</code></pre>
<p>I get <code>'imageScanFindings': {'findingSeverityCounts': {'HIGH': x}, ...}</code> in the response, but <code>['imageScanFindings']['findings']</code> is not returned with the finding details, despite the boto3 docs specifying it as part of the response:
<a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ecr.html#ECR.Client.describe_image_scan_findings" rel="nofollow noreferrer">boto3 describe image scan findings</a></p>
<p>What am I missing? Thanks!</p>
|
<python><amazon-web-services><boto3><amazon-ecr>
|
2023-02-23 16:43:02
| 1
| 513
|
Trevor Griffiths
|
75,547,772
| 9,720,696
|
Recovering input IDs from input embeddings using GPT-2
|
<p>Suppose I have the following text</p>
<pre><code>aim = 'Hello world! you are a wonderful place to be in.'
</code></pre>
<p>I want to use GPT-2 to produce the input_ids, then produce the embeddings, and then recover the input_ids from the embeddings. To do this I do:</p>
<pre><code>from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
</code></pre>
<p>The input_ids can be defined as:</p>
<pre><code>input_ids = tokenizer(aim)['input_ids']
#output: [15496, 995, 0, 345, 389, 257, 7932, 1295, 284, 307, 287, 13]
</code></pre>
<p>I can decode this to make sure it reproduces the aim:</p>
<pre><code>tokenizer.decode(input_ids)
#output: 'Hello world! you are a wonderful place to be in.'
</code></pre>
<p>as expected! To produce the embeddings I convert the input_ids to a tensor:</p>
<pre><code>input_ids_tensor = torch.tensor([input_ids])
</code></pre>
<p>I can then produce my embeddings as:</p>
<pre><code># Generate the embeddings for input IDs
with torch.no_grad():
    model_output = model(input_ids_tensor)
    last_hidden_states = model_output.last_hidden_state

# Extract the embeddings for the input IDs from the last hidden layer
input_embeddings = last_hidden_states[0, 1:-1, :]
</code></pre>
<p>Now as mentioned earlier, the aim is to use input_embeddings and recover the input_ids, so I do:</p>
<pre><code>x = torch.unsqueeze(input_embeddings, 1)  # to make the dim acceptable
with torch.no_grad():
    text = model(x.long())
decoded_text = tokenizer.decode(text[0].argmax(dim=-1).tolist())
</code></pre>
<p>But doing this I get:</p>
<pre><code>IndexError: index out of range in self
</code></pre>
<p>at the level of <code>text = model(x.long())</code> I wonder what am I doing wrong? How can I recover the input_ids using the embedding I produced?</p>
|
<python><pytorch><huggingface-transformers><gpt-2>
|
2023-02-23 16:41:17
| 1
| 1,098
|
Wiliam
|
75,547,631
| 16,706,763
|
Overwrite single file in a Google Cloud Storage bucket, via Python code
|
<p>I have a <code>logs.txt</code> file at certain location, in a <a href="https://cloud.google.com/compute/docs/instances#:%7E:text=An%20instance%20is%20a%20virtual,or%20the%20Compute%20Engine%20API." rel="noreferrer">Compute Engine VM Instance</a>. I want to periodically backup (i.e. <strong>overwrite</strong>) <code>logs.txt</code> in a <a href="https://cloud.google.com/storage/docs/json_api/v1/buckets" rel="noreferrer">Google Cloud Storage bucket</a>. Since <code>logs.txt</code> is the result of some preprocessing made inside a Python script, I want to also use that script to upload / copy that file, into the Google Cloud Storage bucket (therefore, <strong>the use of <a href="https://cloud.google.com/storage/docs/gsutil/commands/cp" rel="noreferrer"><code>cp</code></a> cannot be considered an option</strong>). Both the Compute Engine VM instance, and the Cloud Storage bucket, stay at the same GCP project, so "they see each other". What I am attempting right now, based on <a href="https://cloud.google.com/storage/docs/uploading-objects#uploading-an-object" rel="noreferrer">this sample code</a>, looks like:</p>
<pre class="lang-py prettyprint-override"><code>from google.cloud import storage
bucket_name = "my-bucket"
destination_blob_name = "logs.txt"
source_file_name = "logs.txt" # accessible from this script
storage_client = storage.Client()
bucket = storage_client.bucket(bucket_name)
blob = bucket.blob(destination_blob_name)
generation_match_precondition = 0
blob.upload_from_filename(source_file_name, if_generation_match=generation_match_precondition)
print(f"File {source_file_name} uploaded to {destination_blob_name}.")
</code></pre>
<p>If <code>gs://my-bucket/logs.txt</code> does not exist, the script works correctly, but if I try to <strong>overwrite</strong>, I get the following error:</p>
<pre><code>Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/google/cloud/storage/blob.py", line 2571, in upload_from_file
    created_json = self._do_upload(
  File "/usr/local/lib/python3.8/dist-packages/google/cloud/storage/blob.py", line 2372, in _do_upload
    response = self._do_multipart_upload(
  File "/usr/local/lib/python3.8/dist-packages/google/cloud/storage/blob.py", line 1907, in _do_multipart_upload
    response = upload.transmit(
  File "/usr/local/lib/python3.8/dist-packages/google/resumable_media/requests/upload.py", line 153, in transmit
    return _request_helpers.wait_and_retry(
  File "/usr/local/lib/python3.8/dist-packages/google/resumable_media/requests/_request_helpers.py", line 147, in wait_and_retry
    response = func()
  File "/usr/local/lib/python3.8/dist-packages/google/resumable_media/requests/upload.py", line 149, in retriable_request
    self._process_response(result)
  File "/usr/local/lib/python3.8/dist-packages/google/resumable_media/_upload.py", line 114, in _process_response
    _helpers.require_status_code(response, (http.client.OK,), self._get_status_code)
  File "/usr/local/lib/python3.8/dist-packages/google/resumable_media/_helpers.py", line 105, in require_status_code
    raise common.InvalidResponse(
google.resumable_media.common.InvalidResponse: ('Request failed with status code', 412, 'Expected one of', <HTTPStatus.OK: 200>)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/my_folder/upload_to_gcs.py", line 76, in <module>
    blob.upload_from_filename(source_file_name, if_generation_match=generation_match_precondition)
  File "/usr/local/lib/python3.8/dist-packages/google/cloud/storage/blob.py", line 2712, in upload_from_filename
    self.upload_from_file(
  File "/usr/local/lib/python3.8/dist-packages/google/cloud/storage/blob.py", line 2588, in upload_from_file
    _raise_from_invalid_response(exc)
  File "/usr/local/lib/python3.8/dist-packages/google/cloud/storage/blob.py", line 4455, in _raise_from_invalid_response
    raise exceptions.from_http_status(response.status_code, message, response=response)
google.api_core.exceptions.PreconditionFailed: 412 POST https://storage.googleapis.com/upload/storage/v1/b/production-onementor-dt-data/o?uploadType=multipart&ifGenerationMatch=0: {
  "error": {
    "code": 412,
    "message": "At least one of the pre-conditions you specified did not hold.",
    "errors": [
      {
        "message": "At least one of the pre-conditions you specified did not hold.",
        "domain": "global",
        "reason": "conditionNotMet",
        "locationType": "header",
        "location": "If-Match"
      }
    ]
  }
}
: ('Request failed with status code', 412, 'Expected one of', <HTTPStatus.OK: 200>)
</code></pre>
<p>I have checked the documentation for <a href="https://cloud.google.com/python/docs/reference/storage/latest/google.cloud.storage.blob.Blob#google_cloud_storage_blob_Blob_upload_from_filename" rel="noreferrer"><code>upload_from_filename</code></a>, but it seems there is no flag to "enable overwritting".</p>
<p>How to properly overwrite a file existing in a Google Cloud Storage Bucket, using Python language?</p>
|
<python><google-cloud-platform><google-cloud-storage><google-compute-engine>
|
2023-02-23 16:28:27
| 1
| 879
|
David Espinosa
|
75,547,526
| 8,262,535
|
How to refactor different actions on files inside subfolders
|
<p>I am trying to functionalize and clean up file organization scripts. Each one is something like this:</p>
<pre><code>for dir_level in dirs:
    dates = glob.glob(dir_level)
    for date in dates:
        files = glob.glob(date)
        for file in files:
            # do_stuff
</code></pre>
<p>What is a clean way to have the same directory crawl logic (with variable number of depth levels) but arbitrary actions to be done on that folder level?</p>
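One way (a sketch, with a hypothetical <code>crawl</code> helper): keep the crawl in a single function parameterised by depth, and pass the per-file action in as a callable, so each script supplies only its own <code>do_stuff</code>.

```python
import glob
import os
import tempfile

def crawl(root, depth, action):
    """Apply `action` to every path `depth` glob levels below `root`."""
    pattern = os.path.join(root, *("*",) * depth)
    for path in glob.glob(pattern):
        action(path)

# Demo on a throwaway tree: root/<date>/file.txt at depth 2.
root = tempfile.mkdtemp()
for d in ("2023-01", "2023-02"):
    os.makedirs(os.path.join(root, d))
    open(os.path.join(root, d, "file.txt"), "w").close()

seen = []
crawl(root, 2, seen.append)  # the "action" here just collects paths
print(len(seen))  # → 2
```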
|
<python><file><directory><higher-order-functions>
|
2023-02-23 16:20:49
| 1
| 385
|
illan
|
75,547,519
| 2,601,293
|
export an environment variable from python
|
<p>How can I export an environment variable from Python?</p>
<pre><code># I want to export a variable in python the same way you would in bash
export my_var="foo"
</code></pre>
<pre><code># The variable can be set in Python but it won't stay outside of the python session since it's not exported.
os.environ['my_var'] = 'bar'
</code></pre>
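For context on why the second snippet "doesn't stay": a process can never modify its parent shell's environment; it can only set variables for its own children, or print an <code>export</code> line for the shell to <code>eval</code>. A sketch of both routes:

```python
import os
import subprocess
import sys

# Route 1: children of this process DO inherit os.environ changes.
os.environ["my_var"] = "bar"
out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['my_var'])"],
    capture_output=True, text=True,
).stdout.strip()
print(out)  # → bar

# Route 2: to affect the *parent* shell, print shell code and eval it there:
#   eval "$(python myscript.py)"
# with the script doing:
#   print('export my_var="foo"')
```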
|
<python><export>
|
2023-02-23 16:20:07
| 0
| 3,876
|
J'e
|
75,547,505
| 9,202,041
|
Function similar to xlookup in Python
|
<p>So I am looking for some Python code/function similar to XLOOKUP in Excel.</p>
<p>Basically I am trying to include one new column in a dataframe (df). The values within this new column will depend on two columns of df which are time and day of week. The df looks like:
<a href="https://i.sstatic.net/7cPSP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7cPSP.png" alt="enter image description here" /></a></p>
<p>The values of this third column will come from an Excel sheet saved as the dataframe <code>PSA</code>, shown here:</p>
<p><a href="https://i.sstatic.net/Jq3Bu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jq3Bu.png" alt="enter image description here" /></a></p>
<p>Now I am not sure how to read these data values based on time (hour) and day so that I can include them in a third column. I have tried <code>pd.merge(df, PSA, on=['time', 'day'], how='left')</code> but it's not working.</p>
<p>Need your help.</p>
<p>TIA!</p>
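A sketch of the likely fix, assuming from the screenshots that <code>PSA</code> is wide (one column per day, one row per hour): the merge fails because <code>PSA</code> has no <code>day</code> column until it is melted from wide to long. Column names below are hypothetical stand-ins.

```python
import pandas as pd

# Hypothetical stand-in for the PSA sheet: rows = hour, columns = day.
PSA = pd.DataFrame({"time": [0, 1],
                    "Monday": [10, 11],
                    "Tuesday": [20, 21]})

df = pd.DataFrame({"time": [0, 1, 1],
                   "day": ["Monday", "Tuesday", "Monday"]})

# Wide -> long, so ('time', 'day') becomes a proper merge key.
psa_long = PSA.melt(id_vars="time", var_name="day", value_name="value")
out = df.merge(psa_long, on=["time", "day"], how="left")
print(list(out["value"]))  # → [10, 21, 11]
```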
|
<python><pandas><merge><xlookup>
|
2023-02-23 16:19:15
| 0
| 305
|
Jawairia
|
75,547,503
| 1,200,914
|
How do you use ANY using psycopg2?
|
<p>Using psycopg2, I want to do an update statement using a list of elements:</p>
<pre><code>cursor.execute("UPDATE mytable SET mycol=2 WHERE name=ANY(%s) RETURNING id",
               tuple(keywords))
</code></pre>
<p>where keywords is a list of strings, since <code>name</code> is a varchar column. However, I get:</p>
<pre><code>TypeError: not all arguments converted during string formatting
</code></pre>
<p>How should I do the request?</p>
|
<python><sql><postgresql><psycopg2>
|
2023-02-23 16:19:09
| 1
| 3,052
|
Learning from masters
|
75,547,485
| 12,932,447
|
How to check if any async function has never been awaited?
|
<p>I'm testing my Python software using Pytest.
I have to call many <code>async</code> functions, and sometimes my tests pass even when I forget to write the <code>await</code> keyword.</p>
<p>I would like my test to automatically fail if I call an <code>async</code> function without <code>await</code>.</p>
<p>I was thinking about a decorator to add at the top of my tests, something like</p>
<pre class="lang-py prettyprint-override"><code>async def func():
    return 42

@checkawait
async def test_await():
    func() # <-- forgot to await
</code></pre>
<p>I would like this test to fail with the decorator, because <code>func</code> is an async function that was never awaited.
(I know this is not a proper test, since I'm not testing anything. It's just an example).</p>
<p>Without the decorator <code>test_await</code> passes.</p>
<p>I really don't know what to do.</p>
<p>I asked chatGPT which told me to use this</p>
<pre class="lang-py prettyprint-override"><code>def checkawait(test):
    @functools.wraps(test)
    async def wrapper(*args, **kwargs):
        coros = []
        res = test(*args, **kwargs)
        if asyncio.iscoroutine(res):
            coros.append(res)
        while coros:
            done, coros = await asyncio.wait(coros, timeout=0.1)
            if not done:
                raise Exception("Not all coroutines have completed")
        return res
    return wrapper
</code></pre>
<p>which, of course, is not working.</p>
<p>Does anyone have any ideas? Is it even possible to do so?
Thanks</p>
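One stdlib route worth knowing (a sketch, not pytest-specific): CPython already emits <code>RuntimeWarning: coroutine '...' was never awaited</code> when a forgotten coroutine is garbage-collected, so escalating that warning to an error makes such tests fail; with pytest the same idea is <code>filterwarnings = error::RuntimeWarning</code> in the config. The snippet below just demonstrates that the warning really fires:

```python
import gc
import warnings

async def func():
    return 42

def forgets_await():
    func()  # <-- forgot to await; the coroutine object is dropped

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    forgets_await()
    gc.collect()  # the warning fires when the coroutine is collected

messages = [str(w.message) for w in caught]
print(any("never awaited" in m for m in messages))  # → True
```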
|
<python><async-await><python-decorators>
|
2023-02-23 16:18:21
| 1
| 875
|
ychiucco
|
75,547,449
| 11,391,711
|
how to convert a string containing characters into date time using strptime in Python
|
<p>I'm trying to convert a string that has a character between the date components into a datetime. I was wondering if this is possible without using the <code>replace</code> function. Suppose my string is defined as <code>'20220117A1745'</code>, implying Jan. 17th, 2022 at 5:45pm.</p>
<p>If I use <code>strptime</code>, as expected, I receive an error.</p>
<pre><code>from datetime import datetime
datetime.strptime(x,"%Y%m%d%H%M")
ValueError: unconverted data remains: A1745
</code></pre>
<p>I was wondering if I can do this without using another method. The reason I don't want to use another method is that <code>replace</code> has a bad time complexity and will slow the execution badly (see this <a href="https://stackoverflow.com/questions/35583983/what-is-the-big-o-notation-for-the-str-replace-function-in-python">reference</a>). I'll do this operation for hundreds of thousands of strings in a for loop.</p>
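For reference, <code>strptime</code> format strings may contain literal characters, which must match the input exactly; putting the <code>A</code> directly into the format handles this without any preprocessing:

```python
from datetime import datetime

x = '20220117A1745'

# Literal characters in the format (here, the 'A') are matched verbatim.
dt = datetime.strptime(x, "%Y%m%dA%H%M")
print(dt)  # → 2022-01-17 17:45:00
```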
|
<python><regex><datetime><strptime>
|
2023-02-23 16:15:33
| 2
| 488
|
whitepanda
|
75,547,380
| 2,109,064
|
How can I configure the "Run" button next to the editor tabs in VS Code for Python files?
|
<p>I have the following configuration in my <code>launch.json</code>:</p>
<pre class="lang-json prettyprint-override"><code>{
"configurations":
[
{
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"justMyCode": true,
"cwd": "${fileDirname}",
"env": {"PYTHONPATH": "C:/repo/python"}
}
]
}
</code></pre>
<p>With this, I can easily run and debug the current script from the Run/Debug side panel:</p>
<p><a href="https://i.sstatic.net/sDfc9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sDfc9.png" alt="enter image description here" /></a></p>
<p>All good, so far. But now I'd like to also be able to directly run the script from the Run button on the editor tabs:</p>
<p><a href="https://i.sstatic.net/QHtgr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QHtgr.png" alt="enter image description here" /></a></p>
<p>Unfortunately, this does not work. It runs the file, but the configuration from above is not applied.</p>
<p><strong>How can I add a configuration with <code>PYTHONPATH</code> etc. also for this Run button in the editor tabs?</strong></p>
|
<python><visual-studio-code>
|
2023-02-23 16:08:24
| 2
| 7,879
|
Michael
|
75,547,323
| 5,568,409
|
In PyMC, the "thin" argument seems no more in use. What can be put instead?
|
<p>Trying to understand PyMC through examples, I made a small model referring to a set of birds weight (data observed: <code>y20</code>), supposedly coming from a normal population <code>N(mu, sigma)</code>.</p>
<pre><code>import pymc as pm
import arviz as az

with pm.Model() as model:
    # Priors for unknown model parameters
    mu = pm.Uniform('mu', lower=0, upper=2000)
    sigma = pm.Uniform('sigma', lower=0, upper=100)
    # Likelihood of observations
    lkd = pm.Normal('likelihood', mu=mu, sigma=sigma, observed=y20)
    # Expected value of outcome
    weights = pm.Normal('weights', mu=mu, sigma=sigma)
</code></pre>
<p>This <code>pm.Model()</code> runs without problem. Now about sampling...</p>
<p>Browsing docs, I found the following sentence in the <strong>"read the docs" tutorial</strong> (<a href="https://pymcmc.readthedocs.io/en/latest/tutorial.html" rel="nofollow noreferrer">here</a>, <code>paragraph 3.5.1</code>): <em>MCMC often results in strong autocorrelation among samples that can result in imprecise posterior inference. To circumvent this, it is useful to thin the sample by only retaining every k th sample, where k is an integer value. This thinning interval is passed to the sampler via the thin argument.</em></p>
<p>So, after my model, I used this new line:</p>
<pre><code>with model:
    trace = pm.sample(1000, tune=1000, thin=10)
</code></pre>
<p>and I got a strange consequence:</p>
<pre><code>ValueError: Unused step method arguments: {'thin'}
</code></pre>
<p>Has something changed in PyMC?</p>
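For context, a hedged sketch: that tutorial documents the old PyMC2 API, and modern PyMC dropped the <code>thin</code> argument from <code>pm.sample</code>; thinning is now done after sampling by slicing the draws (with an ArviZ <code>InferenceData</code>, roughly <code>trace.sel(draw=slice(None, None, k))</code>). The slicing principle, shown with plain NumPy so it runs anywhere:

```python
import numpy as np

draws = np.arange(1000)   # stand-in for one chain of posterior draws
thinned = draws[::10]     # keep every 10th draw

print(len(thinned))  # → 100

# With an ArviZ InferenceData object the equivalent would be roughly:
# thinned_idata = trace.sel(draw=slice(None, None, 10))
```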
|
<python><pymc>
|
2023-02-23 16:02:26
| 1
| 1,216
|
Andrew
|
75,547,286
| 1,866,775
|
What does "errorStatus code 9" mean coming from the Looker SDK for Python and how to fix it?
|
<p>I have</p>
<ul>
<li>some Looker Studio dashboards, accessible via: <a href="https://lookerstudio.google.com" rel="nofollow noreferrer">https://lookerstudio.google.com</a></li>
<li>enabled the Looker Studio API</li>
<li>created an OAuth client</li>
</ul>
<p>I'd like to manage the dashboards using <a href="https://pypi.org/project/looker-sdk/" rel="nofollow noreferrer">looker-sdk</a>, so I've created a</p>
<p><code>looker.ini</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[Looker]
base_url=https://lookerstudio.google.com
client_id=[MY_CLIENT_ID]
client_secret=[MY_CLIENT_SECRET]
verify_ssl=True
</code></pre>
<p>and a</p>
<p><code>main.py</code></p>
<pre class="lang-py prettyprint-override"><code>import looker_sdk
sdk = looker_sdk.init40()
sdk.all_dashboards()
</code></pre>
<p>Running it, I get the following error:</p>
<pre><code> File "/home/tobias/.local/lib/python3.10/site-packages/looker_sdk/sdk/api40/methods.py", line 4424, in all_dashboards
self.get(
File "/home/tobias/.local/lib/python3.10/site-packages/looker_sdk/rtl/api_methods.py", line 141, in get
response = self.transport.request(
File "/home/tobias/.local/lib/python3.10/site-packages/looker_sdk/rtl/requests_transport.py", line 66, in request
headers.update(authenticator(transport_options or {}))
File "/home/tobias/.local/lib/python3.10/site-packages/looker_sdk/rtl/auth_session.py", line 100, in authenticate
token = self._get_token(transport_options)
File "/home/tobias/.local/lib/python3.10/site-packages/looker_sdk/rtl/auth_session.py", line 87, in _get_token
self._login(transport_options)
File "/home/tobias/.local/lib/python3.10/site-packages/looker_sdk/rtl/auth_session.py", line 148, in _login
response = self._ok(
File "/home/tobias/.local/lib/python3.10/site-packages/looker_sdk/rtl/auth_session.py", line 240, in _ok
raise error.SDKError(response.value.decode(encoding="utf-8"))
looker_sdk.error.SDKError: )]}'
{"errorStatus":{"code":9}}
</code></pre>
<p>(same when using <code>looker_sdk.init31()</code> instead of <code>looker_sdk.init40()</code>)</p>
<p>While debugging, I found</p>
<pre><code>)]}'
{"errorStatus":{"code":9}}
</code></pre>
<p>is the response from the HTTP request (POST) going to <code>f"{self.settings.base_url}/api/{self.api_version}/login"</code>, which can be reproduced using <code>curl</code>:</p>
<pre><code>curl -X POST --url https://lookerstudio.google.com/api/4.0/login --header 'Content-Type: application/x-www-form-urlencoded' --data 'client_id=MY_CLIENT_ID_START.apps.googleusercontent.com&client_secret=SECRET_OF_COURSE'
</code></pre>
<p>Maybe <code>https://lookerstudio.google.com</code> is just the wrong <code>base_url</code>?</p>
|
<python><looker-studio>
|
2023-02-23 15:59:14
| 0
| 11,227
|
Tobias Hermann
|
75,547,257
| 1,196,358
|
Extract frequency of sine wave from clean and noisy numpy arrays
|
<p>Here is a graph that shows a "control" (blue) and "recorded" (orange) signals.</p>
<p><a href="https://i.sstatic.net/s25ta.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/s25ta.png" alt="enter image description here" /></a></p>
<p>Both are numpy arrays with 10,756 items, acquired from a <code>.wav</code> file with the code:</p>
<pre class="lang-py prettyprint-override"><code>from wave import open as open_wave
waveFile = open_wave(filename,'rb')
nframes = waveFile.getnframes()
wavFrames = waveFile.readframes(nframes)
ys = np.fromstring(wavFrames, dtype=np.int16)
</code></pre>
<p>I have 26 control signals (all letters of the alphabet), and I'd like to take a recorded wave and figure out which control signal it is most similar to.</p>
<p>My first approach was using <code>scipy.signal.find_peaks()</code> which works perfectly for control signals, and <em>sometimes</em> for recorded signals, but not good enough. I understand the shortcoming here to be a) possible clipping of signal at beginning/end, or b) noise in the recorded signal can create false peaks.</p>
<p>My second approach was <em>subtracting</em> the recorded array from all controls, hoping the most similar would result in the smallest diff. This didn't work well either (though still interested in this approach...).</p>
<p>What I'm hoping to do now is:</p>
<ul>
<li>continue to identify peaks with <code>scipy.signal.find_peaks()</code></li>
<li>get average distance between peaks across signal</li>
<li>look for a control signal where this average peak distance is similar</li>
</ul>
<p>Where, of course, the average "peak distance" is the period of the sine wave (the inverse of its frequency).</p>
<p>Any suggestions or streamlines appreciated! I realize I'm bumbling into a <em>very</em> rich world of signal processing, using this toy / fun example to dip my toes.</p>
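<p>As a possibly more noise-robust alternative to peak spacing, the dominant FFT bin of each signal could be compared instead; the strongest bin usually survives moderate noise. A sketch with a made-up sample rate and frequency (neither is from the question):</p>

```python
import numpy as np

fs = 8000.0                       # assumed sample rate (Hz)
n = 10756                         # same length as the arrays above
t = np.arange(n) / fs
freq_true = 440.0
clean = np.sin(2 * np.pi * freq_true * t)
noisy = clean + 0.5 * np.random.default_rng(0).normal(size=n)

# Dominant frequency = the rfft bin with the largest magnitude
spectrum = np.abs(np.fft.rfft(noisy))
freqs = np.fft.rfftfreq(n, d=1 / fs)
dominant = freqs[np.argmax(spectrum)]
print(dominant)  # close to freq_true, within one bin (fs/n ~ 0.74 Hz)
```

<p>Each control letter could be reduced to its dominant frequency once, and a recording matched to the nearest one.</p>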
|
<python><numpy><scipy><modulation>
|
2023-02-23 15:56:22
| 1
| 1,272
|
ghukill
|
75,547,158
| 11,192,771
|
Legend outside of plot is cut off when saving figure
|
<p>I made a scatterplot where I put the legend just outside (beneath) the plot. When saving my plot, the legend is cut off halfway.</p>
<p>What is the best method to fix this?</p>
<pre><code>scalebar = ScaleBar(1, location='lower right')
plt.style.use('seaborn')
plt.scatter(x['xcoord'], x['ycoord'], c='lightgrey', s=25)
plt.scatter(x[x['var1']==1]['xcoord'], x[x['var1']==1]['ycoord'], c='dimgrey', s=35)
plt.scatter(x[x['var2']==1]['xcoord'], x[x['var2']==1]['ycoord'], c='red', s=180, marker="+")
plt.gca().legend(('dwelling', 'var1', 'var2'), frameon= True, facecolor='white', loc='lower center',
bbox_to_anchor=(0, -0.22, 1, 0), fontsize=12,) #this places the legend outside the plot.
plt.gca().add_artist(scalebar)
plt.tick_params(axis='x', colors='white')
plt.tick_params(axis='y', colors='white')
plt.savefig('test.pdf')
plt.show()
</code></pre>
|
<python><matplotlib>
|
2023-02-23 15:47:35
| 1
| 425
|
TvCasteren
|
75,546,893
| 10,337,789
|
Get all parent topics in recursive query in flask with m2m relationship
|
<p>I have two tables with many to many relationship.</p>
<p>The Topic table is:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>name</th>
<th>parent</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Mathematics</td>
<td>0</td>
</tr>
<tr>
<td>2</td>
<td>Algebra</td>
<td>1</td>
</tr>
<tr>
<td>3</td>
<td>Progression</td>
<td>2</td>
</tr>
<tr>
<td>4</td>
<td>Number sequences</td>
<td>3</td>
</tr>
<tr>
<td>5</td>
<td>Arithmetics</td>
<td>1</td>
</tr>
<tr>
<td>6</td>
<td>sum values</td>
<td>5</td>
</tr>
</tbody>
</table>
</div>
<p>The task table is</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>task</th>
<th>topics_id</th>
</tr>
</thead>
<tbody>
<tr>
<td>100</td>
<td>1+2+3+4</td>
<td>4</td>
</tr>
</tbody>
</table>
</div>
<p>tasks_topics table is</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>task_id</th>
<th>topics_id</th>
</tr>
</thead>
<tbody>
<tr>
<td>100</td>
<td>3</td>
</tr>
<tr>
<td>100</td>
<td>6</td>
</tr>
</tbody>
</table>
</div>
<p>THe code of this database in flask sqlalchemy you can see</p>
<pre><code>tasks_topics= db.Table('tasks_topics',
db.Column('task_id', db.Integer, db.ForeignKey('tasks.id')),
db.Column('topics_id', db.Integer, db.ForeignKey('topics.id'))
)
class Tasks(db.Model):
__tablename__ = 'tasks'
id = db.Column(db.Integer, primary_key=True, autoincrement=True)
task = db.Column(db.Text)
    # relationships
topics = db.relationship('Topics',
secondary=tasks_topics,
back_populates="tasks")
class Topics(db.Model):
__tablename__ = 'topics'
id = db.Column(db.Integer, primary_key=True, autoincrement=True)
name = db.Column(db.String(140))
parent = db.Column(db.Integer, default=0)
    # relationships
tasks = db.relationship('Tasks',
secondary=tasks_topics,
back_populates="topics")
</code></pre>
<p>when I query the database,</p>
<pre><code>my_task = Tasks.query.options(subqueryload(Tasks.topics)).get(100)
print(my_task.topics)
</code></pre>
<p>I get the titles of only those topics that are explicitly stored in the database:</p>
<pre><code>[<Topics 3>, <Topics 6>]
</code></pre>
<p>But my task is to get all parent topics for this request</p>
<pre><code>[<Topics 1>, <Topics 2>,<Topics 3>, <Topics 5>, <Topics 6>]
</code></pre>
<p>Since the data is hierarchical, the query must be recursive. How should I fix my query to get the desired result?</p>
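<p>For reference, the walk I want is just following <code>parent</code> links until reaching 0. In plain Python over the sample data it would look like this (the SQL-side equivalent would presumably be a recursive CTE):</p>

```python
# id -> parent, taken from the Topics table above (0 means "no parent")
parents = {1: 0, 2: 1, 3: 2, 4: 3, 5: 1, 6: 5}

def with_ancestors(topic_ids):
    """Return the given topic ids plus all of their ancestors, sorted."""
    seen = set()
    for tid in topic_ids:
        while tid != 0 and tid not in seen:
            seen.add(tid)
            tid = parents[tid]
    return sorted(seen)

print(with_ancestors([3, 6]))  # [1, 2, 3, 5, 6]
```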
|
<python><sqlalchemy>
|
2023-02-23 15:27:20
| 0
| 622
|
Владимир Кузовкин
|
75,546,638
| 5,095,986
|
Why is pip on version 3 and python -V on python 2.7
|
<pre><code>~$ export PATH="$PATH:/usr/bin/python"
:~$ pip --version
pip 21.3.1 from /usr/local/lib/python3.6/dist-packages/pip (python 3.6)
~$ python -V
Python 2.7.17
</code></pre>
<p>Why is pip on python3? I want pip to be on python2.</p>
<p>I have tried adding the path of python2 to the PATH variable, but it didn't help.
Please help me understand why this happened and how I can switch pip to python2.7 for the system.</p>
|
<python><pip>
|
2023-02-23 15:07:33
| 0
| 1,171
|
Shivangi Singh
|
75,546,632
| 12,304,000
|
You're trying to access a column, but multiple columns have that name
|
<p>I am trying to join 2 dataframes that both contain the columns named below. What's the best way to do a LEFT OUTER join?</p>
<pre><code>df = df.join(df_forecast, ["D_ACCOUNTS_ID", "D_APPS_ID", "D_CONTENT_PAGE_ID"], 'left')
</code></pre>
<p>Currently, I get an error that:</p>
<pre><code>You're trying to access a column, but multiple columns have that name.
</code></pre>
<p>What am I missing?</p>
|
<python><pyspark><left-join><outer-join><foundry-code-repositories>
|
2023-02-23 15:06:59
| 1
| 3,522
|
x89
|
75,546,587
| 4,391,249
|
Subprocess Popen stdout is empty if the subprocess is a Python script
|
<p>Here is some code that should print numbers 0 through 4 to the terminal: (adapted from <a href="https://stackoverflow.com/a/59291466/4391249">https://stackoverflow.com/a/59291466/4391249</a>)</p>
<pre class="lang-py prettyprint-override"><code>import os
import time
import subprocess
cmd = 'python', '-c', 'import time; [(print(i), time.sleep(1)) for i in range(5)]'
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
os.set_blocking(p.stdout.fileno(), False)
start = time.time()
while True:
# first iteration always produces empty byte string in non-blocking mode
line = p.stdout.readline()
if len(line):
print(line)
if time.time() > start + 5:
break
p.terminate()
</code></pre>
<p>But it doesn't for me. For me, nothing is printed.</p>
<p>When I instead set <code>cmd = 'ls'</code>, it does produce the expected output (prints the contents of my working directory).</p>
<p>Why doesn't the Python one work?</p>
<p>I'm on Ubuntu20.04 with Python3.10.</p>
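<p>For comparison, <code>ls</code> exits immediately and flushes everything, while a Python child block-buffers its stdout when it is a pipe, so nothing arrives until the buffer fills or the process exits. Forcing unbuffered output with <code>-u</code> (or <code>PYTHONUNBUFFERED=1</code>) is one workaround; a minimal sketch:</p>

```python
import subprocess
import sys

# -u makes the child's stdout unbuffered, so lines are written as printed
cmd = [sys.executable, "-u", "-c", "print('hello'); print('world')"]
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
out, _ = p.communicate()
print(out.splitlines())  # ['hello', 'world']
```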
|
<python><subprocess>
|
2023-02-23 15:02:41
| 1
| 3,347
|
Alexander Soare
|
75,546,550
| 4,954,037
|
python function or operation that returns float("nan")
|
<p>the <a href="https://en.wikipedia.org/wiki/IEEE_754" rel="nofollow noreferrer">IEEE Standard for Floating-Point Arithmetic (IEEE 754)</a> requires the existence of a <code>float</code> (or two...) that is called <code>nan</code> (not a number).</p>
<p>there are two ways to get <code>nan</code> (that i know of)</p>
<pre><code>nan = float("nan")
# or
from math import nan
</code></pre>
<p>but is there a <strong>mathematical</strong> function i can perform on <code>floats</code> <strong>in the standard library</strong> that returns <code>nan</code>?</p>
<p>the obvious ideas like <code>math.sqrt(-1)</code> (and similar) do not return <code>nan</code> but raise <code>ValueError: math domain error</code>.</p>
<p>or are <code>nan</code>s only meant for data where values are missing and are never supposed to be returned by a function?</p>
<p>(is there also something that returns <code>math.inf</code>? again, the obvious <code>1/0</code> raises a <code>ZeroDivisionError</code>).</p>
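<p>for what it's worth, a few indeterminate forms on <code>float</code>s do return <code>nan</code> instead of raising, and plain float overflow in multiplication produces <code>inf</code> (straight IEEE-754 arithmetic, nothing exotic):</p>

```python
import math

inf = math.inf  # or float("inf")

# indeterminate forms yield nan rather than raising:
print(math.isnan(inf - inf))   # True
print(math.isnan(inf * 0.0))   # True
print(math.isnan(inf / inf))   # True

# float multiplication overflows to inf instead of raising:
print(math.isinf(1e308 * 10))  # True
```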
|
<python><python-3.x><floating-point>
|
2023-02-23 14:59:27
| 2
| 47,321
|
hiro protagonist
|
75,546,543
| 8,412,665
|
How to average based on data range in a difference table in Pandas
|
<p>With two tables, <code>Values</code> and <code>Dates</code>, I would like to get the average value between the date ranges.
<code>Values</code> looks like:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>2023-01-01 10:00</td>
<td>1</td>
</tr>
<tr>
<td>2023-01-01 11:00</td>
<td>2</td>
</tr>
<tr>
<td>2023-01-02 10:00</td>
<td>4</td>
</tr>
<tr>
<td>2023-01-04 10:00</td>
<td>4</td>
</tr>
<tr>
<td>2023-01-07 10:00</td>
<td>4</td>
</tr>
</tbody>
</table>
</div>
<p>and <code>Dates</code> looks like:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Group</th>
<th>StartDay</th>
<th>EndDay</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2023-01-01</td>
<td>2023-01-05</td>
</tr>
<tr>
<td>2</td>
<td>2023-01-03</td>
<td>2023-01-10</td>
</tr>
</tbody>
</table>
</div>
<p>As you can see, the date ranges can overlap.</p>
<p>I am trying to calculate the averages over these ranges, so in this example the output should be something along the lines of</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Group</th>
<th>StartDay</th>
<th>EndDay</th>
<th>Mean</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2023-01-01</td>
<td>2023-01-05</td>
<td>2.75</td>
</tr>
<tr>
<td>2</td>
<td>2023-01-03</td>
<td>2023-01-10</td>
<td>4</td>
</tr>
</tbody>
</table>
</div>
<p>Currently my code looks like (all one line):</p>
<p><code>Values.groupby(np.where(Values['Date'].between(Dates['StartDay'],Dates['EndDay']),'pre','post'))['value'].mean()</code></p>
<p>however this results in
<code>ValueError: Can only compare identically-labeled Series objects</code></p>
<p>This was based on <a href="https://stackoverflow.com/questions/65586029/get-the-average-for-specific-range-of-dates-in-pandas">other similar questions</a>, however does not appear to apply here due to it being over two tables / using ranges.</p>
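<p>Since each row of <code>Dates</code> defines its own (possibly overlapping) window, the comparison cannot be a single aligned <code>between</code>; one straightforward sketch builds a mask per window instead:</p>

```python
import pandas as pd

values = pd.DataFrame({
    "Date": pd.to_datetime(["2023-01-01 10:00", "2023-01-01 11:00",
                            "2023-01-02 10:00", "2023-01-04 10:00",
                            "2023-01-07 10:00"]),
    "Value": [1, 2, 4, 4, 4],
})
dates = pd.DataFrame({
    "Group": [1, 2],
    "StartDay": pd.to_datetime(["2023-01-01", "2023-01-03"]),
    "EndDay": pd.to_datetime(["2023-01-05", "2023-01-10"]),
})

# One boolean mask per window; EndDay is made inclusive for the whole day
dates["Mean"] = [
    values.loc[values["Date"].between(s, e + pd.Timedelta(days=1)), "Value"].mean()
    for s, e in zip(dates["StartDay"], dates["EndDay"])
]
print(dates["Mean"].tolist())  # [2.75, 4.0]
```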
|
<python><pandas><group-by>
|
2023-02-23 14:59:02
| 2
| 518
|
Beavis
|
75,546,385
| 10,792,871
|
Normalizing a Nested JSON in Python and Converting it to a Pandas Dataframe
|
<p>I have created a simpler version of some JSON data I've been working with here:</p>
<pre><code>[
{
"id": 1,
"city": "Philadelphia",
"Retaillocations": { "subLocation": [
{
"address": "1235 Passyunk Ave",
"district": "South"
},
{
"address": "900 Market St",
"district": "Center City"
},
{
"address": "2300 Roosevelt Blvd",
"district": "North"
}
]
},
"distributionLocations": {"subLocation": [{
"address": "3000 Broad St",
"district": "North"
},
{
"address": "3000 Essington Blvd",
"district": "Cargo City"
},
{
"address": "4300 City Ave",
"district": "West"
}
]
}
}
]
</code></pre>
<p>My goal is to normalize this into a data frame (yes, the above json will only create one row, but I am hoping to get the steps down and then generalize it to a larger set).</p>
<p>First, I loaded the file with <code>json_obj = json.loads(inputData)</code>, which turns this into a dictionary. The problem is that some of the dictionaries can contain lists and are nested oddly, as shown above. When I try <code>pd.json_normalize(json_obj, record_path = 'retailLocations')</code>, I get a type error saying that list indices must be integers or slices, not str. How can I handle the above JSON file and convert it into a single record in a pandas data frame?</p>
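<p>One observation worth checking: <code>record_path</code> has to spell out the path down to the actual list, and in the sample JSON the key is <code>Retaillocations</code> (capital R only), not <code>retailLocations</code>. A sketch for one branch of the data:</p>

```python
import pandas as pd

data = [{
    "id": 1,
    "city": "Philadelphia",
    "Retaillocations": {"subLocation": [
        {"address": "1235 Passyunk Ave", "district": "South"},
        {"address": "900 Market St", "district": "Center City"},
    ]},
}]

# record_path walks dict keys down to the list; meta pulls top-level fields in
df = pd.json_normalize(data,
                       record_path=["Retaillocations", "subLocation"],
                       meta=["id", "city"])
print(df.shape)  # (2, 4) -> address, district, id, city
```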
|
<python><json><python-3.x><pandas><json-normalize>
|
2023-02-23 14:46:13
| 1
| 724
|
324
|
75,546,371
| 8,318,946
|
Django taggit returns empty list of tags in search view
|
<p><strong>Explanation</strong></p>
<p>I would like to add to <code>SearchView</code> a filter that checks tags in my models and, if <code>raw_query</code> contains a tag, displays the list of relevant objects.</p>
<p>Each model contains title and tags fields. I am using django taggit with autosugest extension to handle tags in my application.</p>
<p>For instance, if a user enters tag1 into the search bar and <code>MyModel1</code> contains such a tag, then it should appear on the page. If not, then my <code>SearchView</code> checks whether <code>rank</code> returns anything and returns a list of relevant objects.</p>
<p><strong>Problem</strong></p>
<p><code>tag_query</code> always returns an empty list. I checked the database and the tags are added correctly. I can display them in Django templates properly, but my <code>tag_query</code> returns an empty list of tags.</p>
<p><strong>Question</strong></p>
<p>How can I implement the tag filter properly, so that based on the user's query it checks the list of tags and returns the objects that contain this tag?</p>
<p><strong>Code</strong></p>
<p>models.py</p>
<pre><code>from taggit_autosuggest.managers import TaggableManager
class MyModel1(models.Model):
title = models.CharField('Title', max_length=70, help_text='max 70 characters', unique=True)
tags = TaggableManager()
...
</code></pre>
<p>views.py</p>
<pre><code>class SearchView(LoginRequiredMixin, CategoryMixin, ListView, FormMixin, PostQuestionFormView):
model = MyModel1
template_name = 'search/SearchView.html'
context_object_name = 'results'
form_class = PostQuestionForm
def get_queryset(self):
models = [MyModel1, MyModel2, MyModel3]
vectors = [SearchVector(f'title', weight='A') + SearchVector(f'short_description', weight='B') for model in models]
queries = []
for model, vector in zip(models, vectors):
raw_query = self.request.GET.get("q")
query = SearchQuery(raw_query, config='english')
rank = SearchRank(vector, query)
headline = SearchHeadline('title', query, start_sel='<span class="text-decoration-underline sidebar-color">',stop_sel='</span>', highlight_all=True)
# Filter by tags using Django Q filter
tag_query = Q(tags__name__icontains=raw_query_split)
# First check if raw_query contains a tag and display list of objects with matching tag
if tag_query:
queryset = model.objects.annotate(rank=rank, headline=headline).filter(tag_query)
print("tag: ", queryset)
# If tag is not found then use SearchVector, SearchQuery and SearchRank to search for objects matching words in raw_query
if rank:
queryset = model.objects.annotate(rank=rank, headline=headline).filter(rank__gte=0.3)
print("rank: ", queryset)
queries.append(queryset)
results = list(chain(*queries))
results = sorted(results, key=lambda x: (-x.rank, x.headline))
result_count = len(results)
return results
</code></pre>
|
<python><django><django-taggit>
|
2023-02-23 14:45:08
| 0
| 917
|
Adrian
|
75,546,338
| 12,435,792
|
Use list comprehension to extract value from another dataframe based on a key from another dataframe
|
<p>I have a dataframe test which has a column <code>company</code>.
I have another dataframe bsr which has 2 columns, <code>market</code> and <code>region</code>; both contain the same data type.</p>
<p>I want to extract the region of each company.</p>
<p>For that I wrote the following code:</p>
<pre class="lang-py prettyprint-override"><code>test['region'] = [for company in test['company']]
</code></pre>
<p>I am not able to get the correct syntax for looking up that company in the market column of the bsr dataframe as a key and getting its region as the value.</p>
<p>Any help would be appreciated!</p>
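<p>Rather than a list comprehension, this key-to-value lookup is exactly what <code>Series.map</code> with an indexed Series does; a sketch with made-up data (only the column names <code>company</code>/<code>market</code>/<code>region</code> come from the question):</p>

```python
import pandas as pd

test = pd.DataFrame({"company": ["acme", "globex", "acme"]})
bsr = pd.DataFrame({"market": ["acme", "globex"],
                    "region": ["EMEA", "APAC"]})

# Turn bsr into a market -> region mapping, then look each company up
test["region"] = test["company"].map(bsr.set_index("market")["region"])
print(test["region"].tolist())  # ['EMEA', 'APAC', 'EMEA']
```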
|
<python><pandas>
|
2023-02-23 14:41:45
| 1
| 331
|
Soumya Pandey
|
75,546,202
| 2,005,415
|
How to create a limited size cache shared by multiple processes in Python
|
<p>I'm trying to use a cache shared by multiple processes, using <code>multiprocessing.Manager</code>'s <code>dict</code>. The following demo gives some context (adapted from <a href="https://stackoverflow.com/a/8533626/2005415">this answer</a>):</p>
<pre><code>import multiprocessing as mp
import time
def foo_pool(x, cache):
if x not in cache:
time.sleep(2)
cache[x] = x*x
else:
print('using cache for', x)
return cache[x]
result_list = []
def log_result(result):
result_list.append(result)
def apply_async_with_callback():
manager = mp.Manager()
cache = manager.dict()
pool = mp.Pool()
jobs = list(range(10)) + list(range(10))
for i in jobs:
pool.apply_async(foo_pool, args = (i, cache), callback = log_result)
pool.close()
pool.join()
print(result_list)
if __name__ == '__main__':
apply_async_with_callback()
</code></pre>
<p>Running the above code gives something like this:</p>
<pre><code>using cache for 0
using cache for 2
using cache for 4
using cache for 1
using cache for 3
using cache for 5
using cache for 7
using cache for 6
[25, 16, 4, 1, 9, 0, 36, 49, 0, 4, 16, 1, 9, 25, 49, 36, 64, 81, 81, 64]
</code></pre>
<p>So the cache is working as expected.</p>
<p>What I'd like to achieve is to give a size limit to this <code>manager.dict()</code>, like the <code>maxsize</code> argument for the <code>functools.lru_cache</code>. My current attempt is:</p>
<pre><code>class LimitedSizeDict:
def __init__(self, max_size):
self.max_size = max_size
self.manager = mp.Manager()
self.dict = self.manager.dict()
self.keys = self.manager.list()
def __getitem__(self, key):
return self.dict[key]
def __setitem__(self, key, value):
if len(self.keys) >= self.max_size:
oldest_key = self.keys.pop(0)
del self.dict[oldest_key]
self.keys.append(key)
self.dict[key] = value
def __contains__(self, key):
return key in self.dict
def __len__(self):
return len(self.dict)
def __iter__(self):
for key in self.keys:
yield key
</code></pre>
<p>Then use the following to launch the processes:</p>
<pre><code>def apply_async_with_callback():
cache = LimitedSizeDict(3)
pool = mp.Pool()
jobs = list(range(10)) + list(range(10))
for i in jobs:
pool.apply_async(foo_pool, args = (i, cache), callback = log_result)
pool.close()
pool.join()
print(result_list)
</code></pre>
<p>But this gives me an empty list: <code>[]</code>.</p>
<p>I thought I probably have to subclass the <code>multiprocessing.managers.DictProxy</code> class to achieve this, so I looked into the source code. But there doesn't seem to be a class definition of <code>DictProxy</code>.</p>
<p>How to give a size limit to this shared dict cache? Thanks in advance.</p>
|
<python><caching><multiprocessing>
|
2023-02-23 14:30:03
| 1
| 3,356
|
Jason
|
75,546,121
| 13,647,125
|
Fitz draw_rect coordinates
|
<p>I am struggling to find the right coordinates for a square that overlaps the name field in a pdf file.
This is my pdf file:</p>
<pre><code>https://docdro.id/wiAwsH8
</code></pre>
<p>I would like to get this</p>
<p><a href="https://i.sstatic.net/phZg7.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/phZg7.jpg" alt="enter image description here" /></a></p>
<p>This is my code</p>
<pre><code>import fitz
page = fitz.open('ABCZ01S0112_Canon iR-ADV C256_1388_0012.pdf')
for p in page:
# For every page, draw a rectangle on coordinates (1,1)(100,100)
p.draw_rect([100,0,5,500], color = (0, 0, 0), width = 100)
# Save pdf
page.save('name.pdf')
</code></pre>
<p>But I am still not able to find the right coordinates.</p>
|
<python><pdf>
|
2023-02-23 14:24:24
| 1
| 755
|
onhalu
|
75,546,062
| 19,079,397
|
How to append list of values into a desired string format?
|
<p>I have two lists like below. I want to convert the list of values into a desired string format like <code>'123,1345;2345,890;'</code>. I tried using a loop and appending to a list, but how do I convert that into a single string?</p>
<pre><code>l1 = [123,4567,80,3456,879]
l2=[98,789,5674,678,9087]
out=[]
for i,j in zip(l1,l2):
out.append(str(j)+','+str(i)+';')
print(out)
['98,123;', '789,4567;', '5674,80;', '678,3456;', '9087,879;']
</code></pre>
<p>Expected output:</p>
<pre><code>'98,123;789,4567;5674,80;678,3456;9087,879;'
</code></pre>
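<p>Joining the collected pieces with an empty separator (or skipping the intermediate list entirely) produces the single string:</p>

```python
l1 = [123, 4567, 80, 3456, 879]
l2 = [98, 789, 5674, 678, 9087]

# Build each "j,i;" piece and concatenate with an empty separator
out = "".join(f"{j},{i};" for i, j in zip(l1, l2))
print(out)  # 98,123;789,4567;5674,80;678,3456;9087,879;
```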
|
<python><string><list><append>
|
2023-02-23 14:19:31
| 2
| 615
|
data en
|
75,545,992
| 8,115,653
|
Custom calendar in Pandas make iterable
|
<p>I am trying to make the class iterable for the NYSE custom calendar below using <code>pandas</code>, say to run this check:</p>
<pre><code>nyse_cal = NYSECalendar()
for date in nyse_cal:
print(date)
</code></pre>
<p>but the statement <code>last_holiday = self.rules[-1]</code> produces <code>TypeError: 'method' is not subscriptable</code> for the 'Holiday' instance because of indexing <code>[-1]</code>. Is there a workaround?</p>
<pre><code>from pandas.tseries.offsets import CustomBusinessDay, MonthEnd
from pandas.tseries.holiday import AbstractHolidayCalendar, Holiday, \
USMemorialDay, USMartinLutherKingJr, USPresidentsDay, GoodFriday, \
USLaborDay, USThanksgivingDay, nearest_workday, next_monday
class NYSECalendar(AbstractHolidayCalendar):
''' NYSE holiday calendar via pandas '''
rules = [
Holiday('New Years Day', month=1, day=1, observance=nearest_workday),
USMartinLutherKingJr,
USPresidentsDay,
GoodFriday,
USMemorialDay,
Holiday('Juneteenth', month=6, day=19, observance=nearest_workday),
Holiday('USIndependenceDay', month=7, day=4, observance=nearest_workday),
USLaborDay,
USThanksgivingDay,
Holiday('Christmas', month=12, day=25, observance=nearest_workday),
]
def __iter__(self):
last_holiday = self.rules[-1]
last_holiday_date = last_holiday.dates[-1]
start_date = last_holiday_date + MonthEnd(12) + CustomBusinessDay(calendar=self)
end_date = start_date + MonthEnd(1) - CustomBusinessDay(calendar=self)
curr_date = start_date
while curr_date <= end_date:
yield curr_date
curr_date += CustomBusinessDay(calendar=self)
</code></pre>
|
<python><pandas>
|
2023-02-23 14:13:58
| 1
| 1,117
|
gregV
|
75,545,957
| 3,294,378
|
AWS S3 Pre Signed Post with custom domain
|
<p>I am looking for a way to use a custom domain with the S3 pre signed post functionality. Right now the URL returned is the default S3 bucket URL e.g. <code>https://mybucket.s3.amazonaws.com/</code>. Using python I generate the pre-signed post data as such:</p>
<pre class="lang-py prettyprint-override"><code>content_type = "text/csv"
data = s3.generate_presigned_post(
Bucket="my-bucket",
Key=path,
Fields={
"Content-Type": content_type
},
Conditions=[
{"Content-Type": content_type},
["content-length-range", 0, 10 * 1000000]
],
ExpiresIn=300,
)
</code></pre>
<p>The data returned by boto3 to perform a multi-part form upload is:</p>
<pre class="lang-json prettyprint-override"><code>{
"url": "https://my-bucket.s3.amazonaws.com/",
"fields": {
"Content-Type": "text/csv",
"key": "pri...",
"AWSAccessKeyId": "A....",
"policy": "e....",
"signature": "CJR..."
}
}
</code></pre>
<p>I would like to get a custom domain as the "url" part to upload to. How can I do this?</p>
<p>Edit: This question is about AWS S3 Pre-Signed POST data for multi-part form upload. Not Pre-Signed URLs.</p>
|
<python><amazon-web-services><amazon-s3><boto3>
|
2023-02-23 14:11:29
| 1
| 1,060
|
Jelle
|
75,545,944
| 12,439,683
|
Efficient algorithm for online Variance update over batched data / color channels
|
<p>I have a large amount of multi-dimensional data (images) and want to calculate the variance of an axis (color channel) across all of them. Memory-wise, I cannot create a large array to calculate the variance in one step. I therefore need to load the data in batches and somehow update the current variance in an online way after each batch.</p>
<p><strong>Toy example</strong></p>
<p>In the end, the batch-wise updated <code>online_var</code> should match <code>correct_var</code>.
However, I struggle to find an efficient algorithm for this.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
np.random.seed(0)
# Correct calculation of the variance
all_data = np.random.randint(0, 9, (9, 3)) # <-- does not fit into memory
correct_var = all_data.var(axis=0)
# Create batches
batches = all_data.reshape(-1, 3, 3)
online_var = 0
for batch in batches:
batch_var = batch.var(axis=0)
online_var = ? # <-- how to update the variance of the samples seen so far?
# Both need to be equal
assert np.allclose(correct_var, online_var)
</code></pre>
<hr />
<p>I found the <a href="https://en.wikipedia.org/wiki/Algorithms_for_calculating_variance#Welford%27s_online_algorithm" rel="nofollow noreferrer">Welford's online algorithm</a>, however it is very slow as it only updates the variance for a single new value, i.e. it cannot process a whole batch at once. As I am working with images an update is necessary for each pixel and each channel.</p>
<hr />
<p>How can I update the variance for multiple new observations in an efficient way that considers the whole batch at once?</p>
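<p>The batch-wise generalisation of Welford's update is the Chan/Golub/LeVeque combining formula: keep a running <code>(count, mean, M2)</code> per channel and merge one whole batch at a time. A numpy sketch against the toy example:</p>

```python
import numpy as np

def update(count, mean, m2, batch):
    """Merge a batch of shape (n, channels) into running (count, mean, M2)."""
    n_b = batch.shape[0]
    mean_b = batch.mean(axis=0)
    m2_b = ((batch - mean_b) ** 2).sum(axis=0)
    delta = mean_b - mean
    total = count + n_b
    # Chan/Golub/LeVeque pairwise combination of the two partitions
    mean = mean + delta * n_b / total
    m2 = m2 + m2_b + delta ** 2 * count * n_b / total
    return total, mean, m2

np.random.seed(0)
all_data = np.random.randint(0, 9, (9, 3))

count, mean, m2 = 0, np.zeros(3), np.zeros(3)
for batch in all_data.reshape(-1, 3, 3):
    count, mean, m2 = update(count, mean, m2, batch)

online_var = m2 / count  # population variance, like np.var(axis=0)
assert np.allclose(online_var, all_data.var(axis=0))
```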
|
<python><math><statistics><variance><online-algorithm>
|
2023-02-23 14:10:26
| 1
| 5,101
|
Daraan
|
75,545,793
| 352,319
|
Getting help on matching modules in Zeppelin/Python
|
<p>We're setting up a new Zeppelin 0.10.0 environment running Python 3.7. I'm new to Zeppelin and fairly new to Python and I want to see what modules are already loaded. I start off asking for help:</p>
<pre><code>%livy.pyspark
help()
</code></pre>
<p>I get a response:</p>
<blockquote>
<p>... less relevant text omitted...
To get a list of available modules, keywords, symbols, or topics,
type "modules", "keywords", "symbols", or "topics". Each module also
comes with a one-line summary of what it does; to list the modules
whose name or summary contain a given string such as "spam", type
"modules spam".</p>
<p>help> You are now leaving help and returning to the Python
interpreter. If you want to ask for help on a particular object
directly from the interpreter, you can type "help(object)". Executing
"help('string')" has the same effect as typing a particular string at
the help> prompt.</p>
</blockquote>
<p>Then I create a note with the following:</p>
<pre><code>%livy.pyspark
help('modules')
</code></pre>
<p>I get a long list of modules. Among them is one named "urllib".</p>
<blockquote>
<p>Please wait a moment while I gather a list of all available modules...
(many modules) Enter any module name to get more help. Or, type
"modules spam" to search for modules whose name or summary contain the
string "spam".</p>
</blockquote>
<p>At this point I decide I'd like to find all modules containing the string "url". I can't manage it. Below are the things I've tried, and the error messages I've gotten. What do I need to do to get this seemingly simple request to work?</p>
<pre><code>%livy.pyspark
modules url
invalid syntax (<stdin>, line 2)
File "<stdin>", line 2
modules url
^
SyntaxError: invalid syntax
</code></pre>
<p>I guess that was because I wasn't within the help system.</p>
<pre><code>%livy.pyspark
help('modules url')
'NoneType' object has no attribute 'loader'
Traceback (most recent call last):
File "/usr/lib64/python3.7/_sitebuiltins.py", line 103, in __call__
return pydoc.help(*args, **kwds)
File "/usr/lib64/python3.7/pydoc.py", line 1891, in __call__
self.help(request)
File "/usr/lib64/python3.7/pydoc.py", line 1940, in help
self.listmodules(request.split()[1])
File "/usr/lib64/python3.7/pydoc.py", line 2076, in listmodules
apropos(key)
File "/usr/lib64/python3.7/pydoc.py", line 2170, in apropos
ModuleScanner().run(callback, key, onerror=onerror)
File "/usr/lib64/python3.7/pydoc.py", line 2131, in run
loader = spec.loader
AttributeError: 'NoneType' object has no attribute 'loader'
</code></pre>
<p>That error message is cryptic to this noob.</p>
<pre><code>%livy.pyspark
help('url')
No Python documentation found for 'url'.
Use help() to get the interactive help utility.
Use help(str) for help on the str class.
</code></pre>
<p>Well, that makes sense. There is no <code>url</code> command, but I thought I would give it a try anyway.</p>
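<p>Sidestepping the (apparently broken) help browser, the module list is also reachable programmatically, which makes substring filtering trivial:</p>

```python
import pkgutil
import sys

# Top-level modules importable from sys.path, plus compiled-in ones
names = {m.name for m in pkgutil.iter_modules()} | set(sys.builtin_module_names)
matches = sorted(n for n in names if "url" in n)
print(matches)  # includes 'urllib' on a standard install
```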
|
<python><python-3.x><apache-zeppelin>
|
2023-02-23 13:58:54
| 0
| 5,249
|
kc2001
|
75,545,718
| 10,159,065
|
Convert string to dictionary in a dataframe
|
<p>I have a dataframe that looks like this</p>
<pre><code>df = pd.DataFrame({'col_1': ['1', '2', '3', '4'],
'col_2': ['a:b,c:d', ':v', 'w:,x:y', 'f:g,h:i,j:']
})
</code></pre>
<p>The datatype of col_2 is currently string. I want to extract the first key and first value from col_2 as col_3 and col_4 respectively. So the output should look like:</p>
<pre><code>pd.DataFrame({'col_1': ['a', 'b', 'c', 'd'],
'col_2': ['a:b,c:d', ':v', 'w:,x:y', 'f:g,h:i,j:'],
'col_3': ['a','','w','f'],
'col_4': ['b','v','','g']
})
</code></pre>
<p>Here is what I have done so far:</p>
<pre><code>df['col_3'] = df['col_2'].apply(lambda x: x.split(":")[0])
df['col_4'] = df['col_2'].apply(lambda x: x.split(":")[1])
</code></pre>
<p>But this obviously doesn't work because it's not a dictionary.</p>
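<p>Since only the first <code>key:value</code> pair matters, plain string processing is enough: take the chunk before the first comma, then split that on the colon:</p>

```python
import pandas as pd

df = pd.DataFrame({"col_1": ["1", "2", "3", "4"],
                   "col_2": ["a:b,c:d", ":v", "w:,x:y", "f:g,h:i,j:"]})

# First pair = text before the first comma; expand=True splits it into 2 columns
first = df["col_2"].str.split(",").str[0].str.split(":", expand=True)
df["col_3"] = first[0]
df["col_4"] = first[1]
print(df["col_3"].tolist(), df["col_4"].tolist())
# ['a', '', 'w', 'f'] ['b', 'v', '', 'g']
```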
|
<python><pandas><string><dataframe><numpy>
|
2023-02-23 13:51:29
| 2
| 448
|
Aayush Gupta
|
75,545,599
| 532,054
|
Django get duration between now and DateTimeField
|
<p>I have a Django model with the following field:</p>
<pre><code>date = models.DateTimeField('start date')
</code></pre>
<p>I want to create a duration function that returns the duration between now and the date in the format "hours:minutes"</p>
<p>How can we achieve this?</p>
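<p>The subtraction itself is plain datetime arithmetic; only the "hours:minutes" formatting needs doing by hand. A sketch with a fixed "now" so it is deterministic (in a real Django view, <code>django.utils.timezone.now()</code> would replace it):</p>

```python
import datetime

def duration_hhmm(start, now):
    """Format the gap between two datetimes as 'hours:minutes'."""
    seconds = int((now - start).total_seconds())
    hours, remainder = divmod(seconds, 3600)
    return f"{hours}:{remainder // 60:02d}"

start = datetime.datetime(2023, 2, 23, 10, 0)
now = datetime.datetime(2023, 2, 24, 13, 5)
print(duration_hhmm(start, now))  # 27:05
```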
|
<python><django><date>
|
2023-02-23 13:39:01
| 1
| 1,771
|
lorenzo
|
75,545,443
| 10,428,677
|
Extract all entries from a pandas df where the values are the same across all years
|
<p>I have a dataframe that looks like this (with many more other countries, this is a sample):</p>
<pre><code>df_dict = {'country': ['Japan','Japan','Japan','Japan','Japan','Japan','Japan', 'Greece','Greece','Greece','Greece','Greece','Greece','Greece'],
'year': [2016, 2017,2018,2019,2020,2021,2022,2016, 2017,2018,2019,2020,2021,2022],
'value': [320, 416, 172, 652, 390, 570, 803, 100, 100, 100, 100, 100, 100,100]}
df = pd.DataFrame(df_dict)
</code></pre>
<p>I want to extract all the entries where the <code>value</code> is the same across all years. Sometimes it could be <code>100</code>, sometimes it could be another value, but the example here is with <code>100</code>.</p>
<p>I'm not really sure how to go about this.</p>
<p>The output should look like this.</p>
<pre><code>df_dict2 = {'country': ['Greece','Greece','Greece','Greece','Greece','Greece','Greece'],
'year': [2016, 2017,2018,2019,2020,2021,2022],
'value': [100, 100, 100, 100, 100, 100,100]}
df2 = pd.DataFrame(df_dict2)
</code></pre>
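<p>One way to do this (a sketch) is to count the distinct values per country with <code>groupby().transform('nunique')</code> and keep only the groups where that count is 1:</p>

```python
import pandas as pd

df = pd.DataFrame({'country': ['Japan'] * 7 + ['Greece'] * 7,
                   'year': [2016, 2017, 2018, 2019, 2020, 2021, 2022] * 2,
                   'value': [320, 416, 172, 652, 390, 570, 803] + [100] * 7})

# keep rows belonging to countries whose value never changes across years
same_value = df.groupby('country')['value'].transform('nunique') == 1
df2 = df[same_value].reset_index(drop=True)
print(df2['country'].unique())  # → ['Greece']
```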
|
<python><pandas>
|
2023-02-23 13:23:05
| 1
| 590
|
A.N.
|
75,545,413
| 1,697,288
|
sqlalchemy MSSQL Cannot insert/update datetime2
|
<p>I need to bulk insert and update data in MSSQL using SQLAlchemy 2.0. It works, but it always ignores my two datetime fields without any error, and I end up with NULL in those fields.</p>
<pre><code>import datetime
from sqlalchemy import DateTime, ForeignKey, String, insert, update
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, relationship
from sqlalchemy.dialects.mssql import DATETIME2
from dateutil import parser

class Base(DeclarativeBase):
    pass

class issues(Base):
    __tablename__ = "issues"
    id = mapped_column(String(36), primary_key=True)
    created = mapped_column(DATETIME2())
    updated = mapped_column(DATETIME2())
    status = mapped_column(String(50))
    severity = mapped_column(String(10))
    control_id = mapped_column(String(36))
    entity_id = mapped_column(String(36))
</code></pre>
<p>I've tried creating my dict with the dates both as datetime objects and as strings; neither works, even though MSSQL accepts the data if I insert it manually:</p>
<pre><code>issueList.append({
    'id': issue['id'],
    # 'createdAt': parser.parse(issue['createdAt']).__str__(),
    'createdAt': parser.parse(issue['createdAt']),
    # 'updatedAt': parser.parse(issue['updatedAt']).__str__(),
    'updatedAt': parser.parse(issue['updatedAt']),
    'status': issue['status'],
    'severity': issue['severity'],
    'control_id': issue['control']['id'],
    'entity_id': issue['entity']['id']
})

session.execute(insert(issues), issueList)
session.execute(update(issues), issueListUpdates)
session.commit()
</code></pre>
|
<python><sql-server><sqlalchemy>
|
2023-02-23 13:20:37
| 1
| 463
|
trevrobwhite
|
75,545,410
| 7,380,417
|
OpenSSL.crypto.Error when trying to load certificate from Azure Key Vault
|
<p>I need to implement certificate-based authentication for a web API hosted in an app service on Azure. To do this, I first generated a <code>.crt</code> certificate file and a <code>.key</code> private key file like this:</p>
<pre class="lang-bash prettyprint-override"><code>openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes -keyout private_key.key -out cert.crt -subj '/CN=myapi.azurewebsites.net' -addext 'subjectAltName=DNS:myapi.azurewebsites.net'
</code></pre>
<p>Then I created a certificate in <code>.pfx</code> format using the following command:</p>
<pre class="lang-bash prettyprint-override"><code>openssl pkcs12 -inkey private_key.key -in cert.crt -export -out certificate.pfx
</code></pre>
<p>My code for fetching certificate data looks something like this:</p>
<pre class="lang-py prettyprint-override"><code>
from azure.identity import DefaultAzureCredential
from azure.keyvault.certificates import CertificateClient
from OpenSSL import crypto

key_client = CertificateClient(
    vault_url=f"https://myteskeyvault.vault.azure.net/",
    credential=DefaultAzureCredential(),
)

cert = key_client.get_certificate("certificate")
loaded_cert = crypto.load_pkcs12(
    bytes(cert.cer), None
)
</code></pre>
<p>I get an error on the line where I'm trying to load the certificate using the <code>crypto.load_pkcs12</code> function:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\uvicorn\protocols\http\httptools_impl.py", line 375, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 75, in __call__
return await self.app(scope, receive, send)
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\uvicorn\middleware\message_logger.py", line 82, in __call__
raise exc from None
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\uvicorn\middleware\message_logger.py", line 78, in __call__
await self.app(scope, inner_receive, inner_send)
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\fastapi\applications.py", line 261, in __call__
await super().__call__(scope, receive, send)
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\starlette\applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\starlette\middleware\errors.py", line 181, in __call__
raise exc
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\starlette\middleware\errors.py", line 159, in __call__
await self.app(scope, receive, _send)
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\starlette\middleware\cors.py", line 92, in __call__
await self.simple_response(scope, receive, send, request_headers=headers)
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\starlette\middleware\cors.py", line 147, in simple_response
await self.app(scope, receive, send)
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\starlette\exceptions.py", line 82, in __call__
raise exc
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\starlette\exceptions.py", line 71, in __call__
await self.app(scope, receive, sender)
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in __call__
raise e
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\starlette\routing.py", line 656, in __call__
await route.handle(scope, receive, send)
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\starlette\routing.py", line 259, in handle
await self.app(scope, receive, send)
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\starlette\routing.py", line 61, in app
response = await func(request)
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\fastapi\routing.py", line 217, in app
solved_result = await solve_dependencies(
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\fastapi\dependencies\utils.py", line 529, in solve_dependencies
solved = await run_in_threadpool(call, **sub_values)
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\starlette\concurrency.py", line 39, in run_in_threadpool
return await anyio.to_thread.run_sync(func, *args)
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread return await future
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\Users\devdz\Projects\myapp\.\myapp\dependencies.py", line 204, in __call__
api_token = self.__get_api_token(token, settings)
File "C:\Users\devdz\Projects\myapp\.\myapp\dependencies.py", line 149, in __get_api_token
msal_client = settings.aad.get_msal_client_using_certificate()
File "C:\Users\devdz\Projects\myapp\.\myapp\settings\aad.py", line 62, in get_msal_client_using_certificate
private_key = self.az_key_vault.get_private_key()
File "C:\Users\devdz\Projects\myapp\.\myapp\settings\aad.py", line 29, in get_private_key
loaded_cert = crypto.load_pkcs12(
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\OpenSSL\crypto.py", line 3295, in load_pkcs12
_raise_current_error()
File "C:\Users\devdz\Projects\myapp\.venv\lib\site-packages\OpenSSL\_util.py", line 57, in exception_from_error_queue
raise exception_type(errors)
OpenSSL.crypto.Error: [('asn1 encoding routines', '', 'wrong tag'), ('asn1 encoding routines', '', 'nested asn1 error'), ('asn1 encoding routines', '', 'nested asn1 error')]
</code></pre>
<p>What am I doing wrong that can cause such an error? I've also tried the approaches mentioned in <a href="https://stackoverflow.com/questions/58313018/how-to-get-private-key-from-certificate-in-an-azure-key-vault">this</a> thread, but they did not help either.</p>
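<p>One possible explanation (hedged, based on the ASN.1 "wrong tag" error): <code>cert.cer</code> returned by the certificates client is a DER-encoded X.509 certificate, not a PKCS#12 blob, so <code>load_pkcs12</code> cannot parse it; the certificate alone loads with <code>load_certificate(FILETYPE_ASN1, ...)</code>, while the private key has to be fetched separately through the Key Vault secrets client. A self-contained sketch of the DER round trip, using a throwaway self-signed certificate as a stand-in for the Key Vault response:</p>

```python
from OpenSSL import crypto

# throwaway self-signed certificate standing in for `cert.cer` from Key Vault
key = crypto.PKey()
key.generate_key(crypto.TYPE_RSA, 2048)
cert = crypto.X509()
cert.get_subject().CN = "myapi.azurewebsites.net"
cert.set_serial_number(1)
cert.set_issuer(cert.get_subject())
cert.set_pubkey(key)
cert.gmtime_adj_notBefore(0)
cert.gmtime_adj_notAfter(3600)
cert.sign(key, "sha256")

der_bytes = crypto.dump_certificate(crypto.FILETYPE_ASN1, cert)

# a DER-encoded X.509 certificate loads with load_certificate, not load_pkcs12
loaded = crypto.load_certificate(crypto.FILETYPE_ASN1, der_bytes)
print(loaded.get_subject().CN)
```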
|
<python><azure><openssl><azure-keyvault><pkcs>
|
2023-02-23 13:20:32
| 1
| 2,201
|
devaerial
|
75,545,379
| 9,403,950
|
What is the fastest way to shallow copy a (`__slots__`) object in python?
|
<p>‘Fastest’ as in we’re trying to squeeze out every nanosecond. If the number of attributes is relevant, then let’s say it has one attribute.</p>
<p>Assume we want to shallow copy <code>c</code>:</p>
<pre class="lang-py prettyprint-override"><code>class C:
    __slots__ = ('foo',)

    def __init__(self, foo):
        self.foo = foo

c = C(0)
</code></pre>
<p><code>copy.copy</code> is impressively slow, so let’s say the baseline is <code>x = C(c.foo)</code>.</p>
<p>A slightly faster way to do this (by 7% on my machine) where <code>__new__</code> is irrelevant is:</p>
<pre class="lang-py prettyprint-override"><code>x = object.__new__(C, None)
x.foo = c.foo
</code></pre>
<p>I was trying to find a method copying raw bytes or something similar, but can’t seem to find anything.</p>
|
<python><python-3.x>
|
2023-02-23 13:17:45
| 0
| 317
|
Liam
|
75,545,370
| 1,729,210
|
How to configure the entrypoint/cmd for docker-based python3 lambda functions?
|
<p>I switched from a zip-based deployment to a docker-based deployment of two lambda functions (which are used in an API Gateway). Both functions were in the same zip file, and I want to have both functions in the same docker-based container, meaning I can't use the <code>cmd</code> setting in my Dockerfile (or, to be precise, I need to overwrite it anyway). Previously, I used the handler attribute in the cloudformation template for specifying which handler function to call in which module, e.g.</p>
<pre><code>...
ConfigLambda:
Type: 'AWS::Serverless::Function'
Properties:
Handler: config.handler
...
...
LogLambda:
Type: 'AWS::Serverless::Function'
Properties:
Handler: logs.handler
...
</code></pre>
<p>but with a docker-based build one has to define an <code>ImageConfig</code>, i.e.</p>
<pre><code>...
LogLambda:
Type: 'AWS::Serverless::Function'
Properties:
PackageType: Image
ImageUri: !Ref EcrImageUri
FunctionName: !Sub "${AWS::StackName}-Logs"
ImageConfig:
WorkingDirectory: /var/task
Command: ['logs.py']
EntryPoint: ['/var/lang/bin/python3']
...
ConfigLambda:
Type: 'AWS::Serverless::Function'
Properties:
PackageType: Image
ImageUri: !Ref EcrImageUri
FunctionName: !Sub "${AWS::StackName}-Config"
ImageConfig:
WorkingDirectory: /var/task
Command: ['config.py']
EntryPoint: ['/var/lang/bin/python3']
</code></pre>
<p>I'm a bit stuck because this does not work, no matter what combination I pass to the command array. If I fire a test event in the AWS console, I get the following error</p>
<pre><code>RequestId: <uuid> Error: Runtime exited without providing a reason
Runtime.ExitError
</code></pre>
<p>Judging from the full output, the file is loaded and executed, but the handler function is not invoked (there is some output from a logging setup function which is called right after the module imports). The section in the AWS documentation on Python 3 based Lambdas states that handlers should be named file_name.function (e.g. function_lambda.lambda_handler), but this doesn't give any clues on how to do this for the Command array in an ImageConfig.</p>
<p>How do I set the Command section correctly for my lambda function in my cloudformation template?</p>
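<p>In the AWS-provided Python base images, the handler is passed to the image's built-in bootstrap as <code>module.function</code> — the same format as the zip-based <code>Handler</code> attribute — rather than as a script to run with the interpreter. A hedged sketch of what that would look like here, leaving <code>EntryPoint</code> at the image's default so the runtime interface client still wraps the handler:</p>

```yaml
LogLambda:
  Type: 'AWS::Serverless::Function'
  Properties:
    PackageType: Image
    ImageUri: !Ref EcrImageUri
    FunctionName: !Sub "${AWS::StackName}-Logs"
    ImageConfig:
      # module.function, as with the zip-based Handler attribute;
      # no EntryPoint override, so the base image's bootstrap is used
      Command: ['logs.handler']
```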
|
<python><amazon-web-services><docker><aws-lambda><aws-cloudformation>
|
2023-02-23 13:17:09
| 1
| 659
|
user1729210
|
75,545,320
| 8,895,744
|
Modify some rows of one column based on conditions from another column using Polars
|
<p>Given the data frame below, I need to modify column 'A' using conditions on 'B'. The Pandas expression for that is written with <code>.loc</code>:</p>
<pre><code>df = pl.DataFrame(
    {
        "A": [1, 1, 1, 1, 1],
        'B': [1, 2, 2, 3, 3]
    }
)

df.loc[df['B'] == 2, 'A'] = 100
</code></pre>
<p>I have a big data set and I need to do this a lot of times for small samples. I know that it is possible to solve this with the <code>apply</code> function by going through all rows, but I need a fast solution, O(1) if possible, instead of O(n).
I tried to use</p>
<pre><code>df[df['B'] == 2, 'A'] = 100
</code></pre>
<p>but it works only when exactly one row meets the condition.</p>
|
<python><python-polars>
|
2023-02-23 13:12:20
| 1
| 563
|
EnesZ
|
75,545,314
| 822,896
|
Python Pandas DataFrame Merge on Columns with Overwrite
|
<p>Is there a way to merge two Pandas DataFrames, by matching on (and retaining) supplied columns, but overwriting all the rest?</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df1 = pd.DataFrame(columns=["Name", "Gender", "Age", "LastLogin", "LastPurchase"])
df1.loc[0] = ["Bob", "Male", "21", "2023-01-01", "2023-01-01"]
df1.loc[1] = ["Frank", "Male", "22", "2023-02-01", "2023-02-01"]
df1.loc[2] = ["Steve", "Male", "23", "2023-03-01", "2023-03-01"]
df1.loc[3] = ["John", "Male", "24", "2023-04-01", "2023-04-01"]
df2 = pd.DataFrame(columns=["Name", "Gender", "Age", "LastLogin", "LastPurchase"])
df2.loc[0] = ["Steve", "Male", "23", "2022-11-01", "2022-11-02"]
df2.loc[1] = ["Simon", "Male", "23", "2023-03-01", "2023-03-02"]
df2.loc[2] = ["Gary", "Male", "24", "2023-04-01", "2023-04-02"]
df2.loc[3] = ["Bob", "Male", "21", "2022-12-01", "2022-12-01"]
>>> df1
Name Gender Age LastLogin LastPurchase
0 Bob Male 21 2023-01-01 2023-01-01
1 Frank Male 22 2023-02-01 2023-02-01
2 Steve Male 23 2023-03-01 2023-03-01
3 John Male 24 2023-04-01 2023-04-01
>>> df2
Name Gender Age LastLogin LastPurchase
0 Steve Male 23 2022-11-01 2022-11-02
1 Simon Male 23 2023-03-01 2023-03-02
2 Gary Male 24 2023-04-01 2023-04-02
3 Bob Male 21 2022-12-01 2022-12-01
</code></pre>
<p>What I'd like is to end up with is <code>df1</code> updated with values from <code>df2</code>, if the <code>"Name"</code>, <code>"Gender"</code> and <code>"Age"</code> columns match. But without caring what the other columns are, so I'd end up with this:</p>
<pre class="lang-py prettyprint-override"><code>>>> df1
Name Gender Age LastLogin LastPurchase
0 Bob Male 21 2022-12-01 2022-12-01 # Updated last two columns from df2
1 Frank Male 22 2023-02-01 2023-02-01
2 Steve Male 23 2022-11-01 2022-11-02 # Updated last two columns from df2
3 John Male 24 2023-04-01 2023-04-01
</code></pre>
<p>I can do a merge like this:</p>
<pre class="lang-py prettyprint-override"><code>>>> df3 = df1.merge(df2, on=["Name", "Gender", "Age"], how='left')
</code></pre>
<p>But then I have to manually extract data from and drop the new columns created from the merge, using their names:</p>
<pre class="lang-py prettyprint-override"><code>>>> df3['LastLogin'] = df3['LastLogin_y'].fillna(df3['LastLogin_x'])
>>> df3['LastPurchase'] = df3['LastPurchase_y'].fillna(df3['LastPurchase_x'])
>>> df3.drop(['LastLogin_x', 'LastLogin_y'], axis=1, inplace=True)
>>> df3.drop(['LastPurchase_x', 'LastPurchase_y'], axis=1, inplace=True)
>>>
>>> df3
Name Gender Age LastLogin LastPurchase
0 Bob Male 21 2022-12-01 2022-12-01
1 Frank Male 22 2023-02-01 2023-02-01
2 Steve Male 23 2022-11-01 2022-11-02
3 John Male 24 2023-04-01 2023-04-01
</code></pre>
<p>I'm trying to avoid this, as I need a generic way to update batches of data, and I don't know all their column names (just the ones I want to match on).</p>
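<p>One generic way to do this (a sketch) is to index both frames on the match columns and use <code>DataFrame.update</code>, which overwrites whatever non-key columns align by index — no other column names are needed:</p>

```python
import pandas as pd

df1 = pd.DataFrame([["Bob", "Male", "21", "2023-01-01", "2023-01-01"],
                    ["Frank", "Male", "22", "2023-02-01", "2023-02-01"],
                    ["Steve", "Male", "23", "2023-03-01", "2023-03-01"],
                    ["John", "Male", "24", "2023-04-01", "2023-04-01"]],
                   columns=["Name", "Gender", "Age", "LastLogin", "LastPurchase"])
df2 = pd.DataFrame([["Steve", "Male", "23", "2022-11-01", "2022-11-02"],
                    ["Bob", "Male", "21", "2022-12-01", "2022-12-01"]],
                   columns=["Name", "Gender", "Age", "LastLogin", "LastPurchase"])

keys = ["Name", "Gender", "Age"]
df1 = df1.set_index(keys)
df1.update(df2.set_index(keys))   # in-place overwrite of aligned cells
df1 = df1.reset_index()
print(df1.loc[df1["Name"] == "Bob", "LastLogin"].iloc[0])  # → 2022-12-01
```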
|
<python><pandas><dataframe>
|
2023-02-23 13:11:38
| 1
| 1,229
|
Jak
|
75,545,272
| 11,951,910
|
How can I use a custom sort order for dict keys with the standard library json module?
|
<p>I am trying to output a dictionary as JSON, using Python 2.7 (this cannot be upgraded).
The keys in the <code>data</code> are strings that contain numbers, like <code>'item_10'</code>, and have an arbitrary order. For example, this code generates some test data:</p>
<pre><code>import random

data = {}
numbers = list(range(1, 12))
random.shuffle(numbers)
for value in numbers:
    data['item_{}'.format(value)] = 'data{}'.format(value)
</code></pre>
<p>I tried using:</p>
<pre><code>print(json.dumps(data, sort_keys=True, indent=2))
</code></pre>
<p>However, I want the keys to be sorted <a href="https://stackoverflow.com/questions/4836710">naturally</a>, like:</p>
<pre><code>{
"item_1": "data1",
"item_2": "data2",
...
"item_10": "data10",
"item_11": "data11"
}
</code></pre>
<p>Instead, I get keys sorted by Python's default sort order:</p>
<pre><code>{
"item_1": "data1",
"item_10": "data10",
"item_11": "data11",
...
"item_2": "data2"
}
</code></pre>
<p>How can I get this result?</p>
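<p><code>sort_keys=True</code> cannot take a key function, but one workaround (a sketch, which also runs on Python 2.7) is to pre-sort the items into an <code>OrderedDict</code> using a natural-sort key, then dump with the default <code>sort_keys=False</code> so insertion order is preserved:</p>

```python
import json
import re
from collections import OrderedDict

def natural_key(s):
    # split 'item_10' into ['item_', 10, ''] so the numbers compare numerically
    return [int(part) if part.isdigit() else part
            for part in re.split(r'(\d+)', s)]

data = {'item_{}'.format(n): 'data{}'.format(n) for n in range(1, 12)}
ordered = OrderedDict(sorted(data.items(), key=lambda kv: natural_key(kv[0])))
print(json.dumps(ordered, indent=2))  # keys come out item_1 ... item_11
```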
|
<python><json><dictionary>
|
2023-02-23 13:08:27
| 2
| 718
|
newdeveloper
|
75,544,918
| 9,121,235
|
Python Fuzzywuzzy does not give a similarity of 100 for identical terms
|
<p>Why is the FuzzyWuzzy library not giving me back proper results for terms which are identical?
I would assume a similarity of 100, or at least 95 if 100 is not possible. Am I missing something?
Maybe there's a better fuzzy matching algorithm?</p>
<p>My code:</p>
<pre><code>from fuzzywuzzy import process
import pandas as pd

# creating a data frame
df1 = pd.read_csv('List1.csv', sep=';')
df2 = pd.read_csv('List2.csv', sep=';')

NAME_MATCH = []
similarity = []
for i in df2.NAME_LIST2:
    ratio = process.extract(i, df1.NAME_LIST1, limit=1)
    NAME_MATCH.append(ratio[0][0])
    similarity.append(ratio[0][2])

df2['NAME_MATCH'] = pd.Series(NAME_MATCH)
df2['similarity'] = pd.Series(similarity)
print(df2.head())
df2.to_csv('output.csv', sep='\t', index=False)
</code></pre>
<p>Output:</p>
<p><a href="https://i.sstatic.net/v7eCL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v7eCL.png" alt="enter image description here" /></a></p>
<p>The strange thing is that a different wording has a higher similarity than identical findings.</p>
|
<python><fuzzy-search><fuzzywuzzy>
|
2023-02-23 12:35:02
| 0
| 455
|
smartini
|
75,544,869
| 4,451,315
|
Propagate error in function which returns tuple of int
|
<p>Here's a file <code>t.pyx</code> I've written:</p>
<pre class="lang-py prettyprint-override"><code># cython: language_level=3
cdef int foo(val: int) except? -1:
    if val != 42:
        raise ValueError("foo")
    return 0

cpdef (int, int) bar(val: int):
    res = foo(val)
    return res, res+1
</code></pre>
<p>and here's my <code>setup.py</code> file:</p>
<pre class="lang-py prettyprint-override"><code>from setuptools import setup
from Cython.Build import cythonize

setup(
    name='t',
    ext_modules=cythonize('t.pyx'),
)
</code></pre>
<p>and here's my <code>main.py</code> file:</p>
<pre class="lang-py prettyprint-override"><code>from t import bar
res = bar(43)
print(res)
</code></pre>
<p>If I run</p>
<pre><code>python setup.py build_ext -i -f
python main.py
</code></pre>
<p>then I get</p>
<pre><code>main.py
Traceback (most recent call last):
File "t.pyx", line 5, in t.foo
raise ValueError("foo")
ValueError: foo
Exception ignored in: 't.bar'
Traceback (most recent call last):
File "t.pyx", line 5, in t.foo
raise ValueError("foo")
ValueError: foo
(2075612320, 13418336)
</code></pre>
<p>So, it didn't raise.</p>
<p>How can I get <code>bar</code> to raise if <code>foo</code> raises?</p>
<p>One "hack" I've come up with is to have <code>t.pyx</code> like this</p>
<pre><code># cython: language_level=3
cdef int foo(val: int) except? -1:
    if val != 42:
        raise ValueError("foo")
    return 0

cpdef int bar(val: int, ret: list[int]) except? -1:
    res = foo(val)
    ret.append(res)
    ret.append(res+1)
    return 0
</code></pre>
<p>and <code>main.py</code> like this:</p>
<pre class="lang-py prettyprint-override"><code>from t import bar
res = []
bar(43, res)
print(res)
</code></pre>
<p>Is there a better way? If <code>bar</code> only needed to be called from within Cython, then I could pass an <code>int</code> pointer and modify that - however, I need to call it from a Python script too. How can I do that?</p>
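<p>One alternative sketch (untested against your exact setup): a C-tuple return type like <code>(int, int)</code> has no slot for an exception sentinel, but declaring the return type as a Python <code>tuple</code> object lets Cython propagate the exception normally, at the cost of boxing the two ints on each call:</p>

```
# cython: language_level=3
cdef int foo(val: int) except? -1:
    if val != 42:
        raise ValueError("foo")
    return 0

cpdef tuple bar(val: int):
    # returning a Python object means exceptions propagate without a sentinel
    res = foo(val)
    return (res, res + 1)
```

<p>Calling <code>bar(43)</code> from Python would then raise the <code>ValueError</code> instead of returning garbage values.</p>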
|
<python><cython>
|
2023-02-23 12:31:09
| 2
| 11,062
|
ignoring_gravity
|
75,544,826
| 3,045,351
|
Itertools Group list of dicts of variable length by key/value pairs
|
<p>I have this input object:</p>
<pre><code>vv = [{'values': ['AirportEnclosed', 'Bus', 'MotorwayServiceStation']},{'values': ['All']}]
</code></pre>
<p>...there can be a variable number of dicts present, but every dict will always have the key 'values' with a populated value.</p>
<p>The type of value assigned to 'values' will always be string or list. I wish to group/zip so I get the following output (list of tuples or tuple of tuples is fine):</p>
<pre><code>(
('AirportEnclosed', 'All'),
('Bus', 'All'),
('MotorwayServiceStation', 'All')
)
</code></pre>
<p>...this is my code:</p>
<pre><code>import itertools
import operator

outputList = []
for i, g in itertools.groupby(vv, key=operator.itemgetter("values")):
    outputList.append(list(g))
print(outputList)
</code></pre>
<p>...and this is my output:</p>
<pre><code>[[{'values': ['AirportEnclosed', 'Bus', 'MotorwayServiceStation']}], [{'values': ['All']}]]
</code></pre>
<p>...what do I need to change?</p>
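<p><code>groupby</code> only groups consecutive equal items; what the expected output describes is the Cartesian product of the <code>'values'</code> lists. A sketch with <code>itertools.product</code>, normalising any bare strings to one-element lists first:</p>

```python
from itertools import product

vv = [{'values': ['AirportEnclosed', 'Bus', 'MotorwayServiceStation']},
      {'values': ['All']}]

# 'values' may hold a bare string instead of a list, so normalise first
pools = [d['values'] if isinstance(d['values'], list) else [d['values']]
         for d in vv]
result = tuple(product(*pools))
print(result)
# → (('AirportEnclosed', 'All'), ('Bus', 'All'), ('MotorwayServiceStation', 'All'))
```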
|
<python><python-3.x><python-itertools>
|
2023-02-23 12:27:00
| 1
| 4,190
|
gdogg371
|
75,544,822
| 14,594,208
|
How to access custom dataframe accessor's namespace?
|
<p>According to Pandas <a href="https://pandas.pydata.org/docs/development/extending.html" rel="nofollow noreferrer">docs</a> it is possible to register custom accessors like below:</p>
<pre class="lang-py prettyprint-override"><code>@pd.api.extensions.register_dataframe_accessor("geo")
class GeoAccessor:
    def __init__(self, pandas_obj):
        self._validate(pandas_obj)
        self._obj = pandas_obj

    @property
    def center(self):
        lat = self._obj.latitude
        lon = self._obj.longitude
        return (float(lon.mean()), float(lat.mean()))

    def method(self):
        # do something
        pass
</code></pre>
<p>Suppose that there are more accessors with different namespaces. For instance:</p>
<ul>
<li>geo2</li>
<li>geo3</li>
</ul>
<p>If we'd like to invoke a method from <code>geo</code>, for example, we'd do:</p>
<pre class="lang-py prettyprint-override"><code>df.geo.method() # here we use geo explicitly
</code></pre>
<p>How could I store/retrieve a namespace to/from a variable?</p>
<p>I am thinking something along the lines of:</p>
<pre class="lang-py prettyprint-override"><code>df.variable_namespace.method() # variable_namespace could be geo, geo2 etc..
</code></pre>
<p>What if we'd like to have dynamic behavior as far as namespaces are concerned?</p>
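<p>Since a registered accessor is just an attribute on the DataFrame, <code>getattr</code> can resolve a namespace held in a variable; a minimal sketch (with a trivial accessor body for illustration):</p>

```python
import pandas as pd

@pd.api.extensions.register_dataframe_accessor("geo")
class GeoAccessor:
    def __init__(self, pandas_obj):
        self._obj = pandas_obj

    def method(self):
        return len(self._obj)

df = pd.DataFrame({"latitude": [1.0, 3.0], "longitude": [2.0, 4.0]})

namespace = "geo"                       # could equally be "geo2", "geo3", ...
print(getattr(df, namespace).method())  # → 2
```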
|
<python><pandas><dataframe>
|
2023-02-23 12:26:25
| 1
| 1,066
|
theodosis
|
75,544,795
| 9,403,950
|
Is there a way to cache a global as a local const in cpython?
|
<p>Is there a way to define a const object from a pre-existing global variable inside a function (in cpython) such that it gets loaded with the <code>LOAD_CONST</code> instruction (short of modifying and recompiling the interpreter yourself)?</p>
<p>Assuming the value of the global doesn’t change and we only care about setting the value at the time the function is defined—it can be a copy of the value, not a reference to it.</p>
<p>In other words, how could:</p>
<pre class="lang-py prettyprint-override"><code>a = 7

def f():
    return a + 1
</code></pre>
<p>be made to be synonymous with:</p>
<pre class="lang-py prettyprint-override"><code>def f():
    a = 7
    return a + 1
</code></pre>
<p>so that <code>7</code> appears in <code>f.__code__.co_consts</code> of the former and <code>LOAD_CONST</code> is used to load it?</p>
<p>Using python 3.11.1 for anything that involves modifying function byte code.</p>
|
<python><python-3.x>
|
2023-02-23 12:24:12
| 1
| 317
|
Liam
|
75,544,762
| 5,269,959
|
How to encrypt pandas Dataframe with pyarrow and parquet
|
<p>I would like to encrypt a pandas dataframe as a parquet file using modular encryption. I thought the best way to do that is to transform the dataframe to the pyarrow format and then save it to parquet with a modular encryption option. Something like this:</p>
<pre><code>import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

d = {'col1': [1, 2], 'col2': [3, 4]}
df = pd.DataFrame(data=d)

table = pa.Table.from_pandas(df)
pq.write_table(table, "test.parquet", encryption_properties=enc_prop)
</code></pre>
<p>My problem is that I'm stuck on creating the encryption_properties.
Does anyone have an idea how to create them?</p>
|
<python><pandas><encryption><parquet><pyarrow>
|
2023-02-23 12:21:30
| 3
| 538
|
seb2704
|
75,544,728
| 1,424,395
|
multivariate Kernel Density Estimator with independent bandwidth for each dimension
|
<p>I found several KDE implementations in Python, but I still cannot find one that is flexible regarding the kernel used and that can use a different bandwidth per dimension. This is actually available in MATLAB's <code>mvksdensity()</code>.</p>
<pre><code>scipy.stats.multivariate_normal
scipy.stats.kde
scipy.stats.gaussian_kde
</code></pre>
<p>only support some types of kernel, and the bandwidth is the same in all dimensions.</p>
<pre><code>sklearn.neighbors.KernelDensity
</code></pre>
<p>this one seems like the best candidate, and I love how it works, but the bandwidth is also global.</p>
<pre><code>statsmodels.nonparametric.KDEMultivariate
</code></pre>
<p>bandwidths are different per dimension, but it seems a bit limited; for example, I cannot choose a kernel other than Gaussian, quadratic, or uniform AFAIK...</p>
<p>Does anyone know of another option, or can anyone correct the statements above?</p>
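<p>As a fallback, a Gaussian product-kernel KDE with one bandwidth per dimension is short enough to write directly with NumPy; a sketch (not a drop-in replacement for any of the libraries above, and Gaussian-only):</p>

```python
import numpy as np

def gaussian_kde_per_dim(data, points, bandwidths):
    """Product-kernel Gaussian KDE with a separate bandwidth per dimension.

    data: (n, d) samples; points: (m, d) evaluation points; bandwidths: (d,).
    """
    data = np.asarray(data, float)
    points = np.asarray(points, float)
    h = np.asarray(bandwidths, float)
    # (m, n, d) standardised distances between every point and every sample
    u = (points[:, None, :] - data[None, :, :]) / h
    kernel = np.exp(-0.5 * u ** 2) / (h * np.sqrt(2 * np.pi))
    # product over dimensions, mean over samples
    return kernel.prod(axis=2).mean(axis=1)

density = gaussian_kde_per_dim([[0.0, 0.0]], [[0.0, 0.0]], [1.0, 2.0])
print(density)  # peak height is 1 / (2π · h1 · h2)
```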
|
<python><scikit-learn><statsmodels><scipy-optimize><kernel-density>
|
2023-02-23 12:17:54
| 0
| 1,827
|
myradio
|
75,544,652
| 7,211,014
|
bash print multiline variable, but also use it as a command for python?
|
<p>I want to create a variable that is human-readable in bash, but then also be able to run the variable as a command (which invokes a Python script).</p>
<pre><code>run_cmd(){
    echo "[+] Creating stuff"
    run_command="$script 10.10.10.10 \\
--config $settings \\
--cid $cid \\
-v"
    echo -e "$run_command"
    $run_command
}
run_cmd
</code></pre>
<p>Running the above will print out the following</p>
<pre><code>[+] Creating stuff
pythonscript 10.10.10.10 \
--config $settings \
--cid $cid \
-v"
usage: pythonscript [-v] --cid CID --config CONFIG host
pythonscript: error: unrecognized arguments \ \ \
</code></pre>
<p>If I remove the <code>\\</code> and just have <code>\</code> like the following, the command runs but the output removes all of the new line chars.</p>
<pre><code>run_cmd(){
    echo "[+] Creating stuff"
    run_command="$script 10.10.10.10 \
--config $settings \
--cid $cid \
-v"
    echo -e "$run_command"
    $run_command
}
run_cmd
</code></pre>
<p>Output</p>
<pre><code>[+] Creating stuff
pythonscript 10.10.10.10 --config $settings --cid $cid -v"
[+] this output is from pythonscript, the script ran successfully.
</code></pre>
<p>I know that if I remove <code>\</code> entirely from the variable, it will print out new lines and run the command. However, I want the <code>\</code> so someone can copy the command that is output and run it directly from the command line, so I need the <code>\</code> in the output.</p>
<p>How can I have my cake and eat it too? That is, print out the command with new lines and also run it, without having to make separate variables for the echo statement and the run statement?</p>
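<p>One common pattern (a sketch, with placeholder values standing in for your script and settings) is to keep the command words in an array: the array is always safe to execute directly, and a <code>printf</code> loop renders the copy-pastable, line-continued version from the same data:</p>

```shell
#!/usr/bin/env bash
script="echo"        # hypothetical stand-in for the python script
settings="conf.ini"
cid="42"

run_cmd() {
    echo "[+] Creating stuff"
    # one array holds the words for both printing and execution
    local cmd=("$script" 10.10.10.10 --config "$settings" --cid "$cid" -v)

    # pretty-print with trailing backslashes for copy-paste
    printf '%s' "${cmd[0]}"
    printf ' \\\n    %s' "${cmd[@]:1}"
    printf '\n'

    # run it: the words executed are exactly the array elements
    "${cmd[@]}"
}
run_cmd
```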
|
<python><bash><arguments><echo><newline>
|
2023-02-23 12:10:39
| 1
| 1,338
|
Dave
|
75,544,534
| 1,773,702
|
ModuleNotFoundError: No module named 'google.ads.googleads.v10'
|
<p>We use a python script to connect to Google Ads API and consume its REST operations.</p>
<p>All of a sudden the working script has started giving error with below message:</p>
<pre><code>from google.ads.googleads.v10.enums.types import offline_user_data_job_status
ModuleNotFoundError: No module named 'google.ads.googleads.v10'
</code></pre>
<p>Please advise.</p>
|
<python><google-ads-api>
|
2023-02-23 12:00:00
| 0
| 949
|
azaveri7
|
75,544,224
| 2,612,592
|
Minimize class method with class attributes as parameters
|
<p>How do I make it so that a class like this:</p>
<pre><code>class Conic:
    def __init__(self, x: float, y: float):
        self.x = x
        self.y = y

    def get_value(self) -> float:
        return (self.x - 2) ** 2 + (self.y + 3) ** 2

    def calibrate(self):
        pass
</code></pre>
<p>calibrates its attributes <code>x</code> and <code>y</code> such that the return value of the <code>get_value()</code> method is minimized? Preferably inside the same class, as in the <code>calibrate()</code> method.</p>
<p>Expected behaviour:</p>
<pre><code>c = Conic(0,0)
print(c.get_value())
--> 13
c.calibrate()
print(c.get_value(), c.x, c.y)
--> 0 2 -3
</code></pre>
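<p>A sketch using <code>scipy.optimize.minimize</code> (assuming SciPy is acceptable): pack the attributes into a parameter vector, minimise the objective, then leave the attributes at the optimum:</p>

```python
from scipy.optimize import minimize

class Conic:
    def __init__(self, x: float, y: float):
        self.x = x
        self.y = y

    def get_value(self) -> float:
        return (self.x - 2) ** 2 + (self.y + 3) ** 2

    def calibrate(self):
        # objective re-expressed over a flat parameter vector
        def objective(params):
            self.x, self.y = params
            return self.get_value()
        result = minimize(objective, x0=[self.x, self.y])
        self.x, self.y = result.x  # leave attributes at the optimum

c = Conic(0, 0)
print(c.get_value())   # → 13
c.calibrate()
print(round(c.get_value(), 6), round(c.x, 3), round(c.y, 3))
```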
|
<python><class><oop><optimization><calibration>
|
2023-02-23 11:30:58
| 1
| 587
|
Oliver Mohr Bonometti
|
75,543,920
| 59,300
|
LDAP3 search when Active Directory accounts expire
|
<p>I want to specify an <a href="https://ldap3.readthedocs.io/en/latest/" rel="nofollow noreferrer">LDAP3</a> search against an Active Directory server which returns when the PW of an account expires.</p>
<pre><code>server = Server(server_name, port=636, use_ssl=True, get_info=ALL)
conn = Connection(server, user='{}\\{}'.format(domain_name, user_name), password=password, authentication=NTLM, auto_bind=True)
conn.search(
    search_base=f'OU={root_ou},OU={sub_ou},OU={org_ou},DC={domain_name},DC={domain_suffix}',
    # search_filter='(objectClass=person)',
    # https://learn.microsoft.com/en-us/windows/win32/adschema/a-accountexpires
    search_filter='(userAccountControl:1.2.840.113556.1.4.159)',
    # search_scope='SUBTREE',
    attributes=[ALL_ATTRIBUTES, ALL_OPERATIONAL_ATTRIBUTES]
)
</code></pre>
<p>Can I specify the search filter in a way <a href="https://learn.microsoft.com/en-us/windows/win32/adschema/a-accountexpires" rel="nofollow noreferrer">so that it returns</a>:</p>
<blockquote>
<p>The date when the account expires. This value represents the number of
100-nanosecond intervals since January 1, 1601 (UTC). A value of 0 or
0x7FFFFFFFFFFFFFFF (9223372036854775807) indicates that the account
never expires.</p>
</blockquote>
<p>I would like to see the actual value as a date.</p>
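<p>LDAP won't format the value server-side, so the usual approach is a small client-side helper once the raw attribute value is in hand (a sketch; note that ldap3's built-in formatters may already decode some AD timestamps, this covers the raw integer case):</p>

```python
from datetime import datetime, timedelta

NEVER_EXPIRES = (0, 0x7FFFFFFFFFFFFFFF)

def filetime_to_datetime(value):
    """accountExpires: 100-ns intervals since 1601-01-01 UTC -> datetime."""
    value = int(value)
    if value in NEVER_EXPIRES:
        return None  # account never expires
    return datetime(1601, 1, 1) + timedelta(microseconds=value // 10)

print(filetime_to_datetime(116444736000000000))  # → 1970-01-01 00:00:00
print(filetime_to_datetime(0))                   # → None
```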
|
<python><active-directory><ldap>
|
2023-02-23 11:03:39
| 1
| 7,437
|
wishi
|
75,543,832
| 7,760,910
|
Python unittest gives file not found exception
|
<p>I have a core class which is as below:</p>
<pre><code>class GenerateDag(object):
    def __init__(self):
        pass

    def generate_dag(self, manifest: dict):
        """
        :return: bytes of the file passed
        """
        with open('../../resources/dag.py', 'rb') as f:
            return f.read()
</code></pre>
<p><strong>TestCase</strong>:</p>
<pre><code>def test_generate_dag(self):
    manifest = Mock()
    result = GenerateDag().generate_dag(manifest)
    expected = b"some-byte-content"
    assert result == expected
</code></pre>
<p>The project structure is as follows:</p>
<p><a href="https://i.sstatic.net/jvh4M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jvh4M.png" alt="enter image description here" /></a></p>
<p>When I create an instance like <code>GenerateDag().generate_dag({})</code>, it gives me the proper content of the file as expected; however, when I run the test case it gives me the below error:</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: '/Users/../IdeaProjects/some-projct/provisioner/.tox/py38/lib/python3.8/resources/dag.py'
</code></pre>
<p>I also tried the below logic in the core class:</p>
<pre><code>dir_path = os.path.dirname(pathlib.Path(__file__).parent.parent)
conf_path = os.path.join(dir_path, 'resources/dag.py')
</code></pre>
<p>But even this didn't help. So what else am I missing here? I run the tests through <code>tox</code>.</p>
<p>P.S: My core class is in <code>src/services</code></p>
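The likely cause is that tox runs the tests from a different working directory, so the relative path <code>'../../resources/dag.py'</code> resolves against the test runner's CWD rather than the module's location. A hedged sketch that anchors the path to the module file instead (the <code>.parents</code> depth is an assumption about the <code>src/services</code> layout):

```python
from pathlib import Path

def read_dag(module_file: str) -> bytes:
    """Read resources/dag.py relative to the project root, independent of CWD."""
    # src/services/<module>.py -> parents[0]=services, [1]=src, [2]=project root
    project_root = Path(module_file).resolve().parents[2]
    return (project_root / "resources" / "dag.py").read_bytes()
```

Inside the class this would be called as `read_dag(__file__)`, which stays correct no matter where tox launches the tests from.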
|
<python><python-3.x><python-unittest>
|
2023-02-23 10:55:30
| 1
| 2,177
|
whatsinthename
|
75,543,822
| 12,304,000
|
grouping expressions sequence is empty
|
<p>This pyspark code with <strong>df.select</strong> works fine.</p>
<pre><code>def dev_prev_month(cleaned):
df = cleaned
df = df.select(
F.coalesce(
_sum(
F.when(
(F.col("ORDERS_VIA_ARTICLE") > 0) &
(
(F.col("ORDER_SUCCESS_URL") != "%16237890%") &
(F.col("ORDER_SUCCESS_URL") != "%30427132%") &
(F.col("ORDER_SUCCESS_URL") != "%242518801%") |
(F.col("ORDER_SUCCESS_URL").isNull())
),
F.col("ORDERS_VIA_ARTICLE")
).otherwise(F.lit(0))
),
F.lit(0)
).alias("report_sum_orders_via_article")
)
return df
</code></pre>
<p>Now, I wanted to use the same logic with df.withColumn() instead of df.select().</p>
<p>I tried this (removed the coalesce for now):</p>
<pre><code>def dev_prev_month(clean_joined_traffic_data):
df = clean_joined_traffic_data
df = df.withColumn(
"report_sum_orders_via_article",_sum(
F.when(
(F.col("ORDERS_VIA_ARTICLE") > 0) &
(
(F.col("ORDER_SUCCESS_URL") != "%16237890%") &
(F.col("ORDER_SUCCESS_URL") != "%30427132%") &
(F.col("ORDER_SUCCESS_URL") != "%242518801%") |
(F.col("ORDER_SUCCESS_URL").isNull())
),
F.col("ORDERS_VIA_ARTICLE")
).otherwise(F.lit(0)))
)
return df
</code></pre>
<p>However, here I get an error that:</p>
<pre><code>pyspark.sql.utils.AnalysisException: grouping expressions sequence is empty, and '`!ri.foundry.main.transaction.123-123:ri.foundry.main.transaction.xxxx:master`.ORDERS' is not an aggregate function.
</code></pre>
<p>What am I missing here?</p>
|
<python><pyspark><group-by><aggregate-functions><foundry-code-repositories>
|
2023-02-23 10:54:25
| 1
| 3,522
|
x89
|
75,543,492
| 10,232,932
|
Open a website in AWS SageMaker Notebooks
|
<p>I am using Amazon SageMaker and running a notebook instance in it. In my notebook instance I created a conda_python3 file and tried to run the following command (which works on my local machine):</p>
<pre><code>import os
for i in range(1):
os.system("start \"\" https://google.com")
os.system("taskkill /im msedge.exe /f")
</code></pre>
<p>This should open and close the Google website. What configurations or adjustments am I missing on AWS?</p>
|
<python><amazon-web-services><amazon-sagemaker>
|
2023-02-23 10:25:33
| 1
| 6,338
|
PV8
|
75,543,424
| 15,991,297
|
Counting Groups with Same Datetime and Comparing Count with Other Groups in Dataframe
|
<p>Below is an extract from a dataframe. For each MarketName there should be two Date/Times, as in "Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd" in the extract below. Sometimes data is missing from either the earliest or the latest Date/Time (as in "Ascot - 13:10 Ascot 22nd Jan 3m Hcap Chs"), so I want to count the rows for the earliest and latest Date/Times and, if the counts are not the same, delete all rows for that MarketName.</p>
<p>I am new to Pandas and am struggling to work out how to do it. Can anyone help?</p>
<p>Thanks</p>
<p>Extract:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date/Time</th>
<th>MarketName</th>
<th>SelectionName</th>
</tr>
</thead>
<tbody>
<tr>
<td>2022-01-22 12:20:03</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Girandole (NR)</td>
</tr>
<tr>
<td>2022-01-22 12:20:03</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Doctor Parnassus</td>
</tr>
<tr>
<td>2022-01-22 12:20:03</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Twilight Twist</td>
</tr>
<tr>
<td>2022-01-22 12:20:03</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Irish Hill</td>
</tr>
<tr>
<td>2022-01-22 12:20:03</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Hayedo (NR)</td>
</tr>
<tr>
<td>2022-01-22 12:20:03</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Restitution</td>
</tr>
<tr>
<td>2022-01-22 12:20:03</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Graystone (NR)</td>
</tr>
<tr>
<td>2022-01-22 12:35:49</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Graystone (NR)</td>
</tr>
<tr>
<td>2022-01-22 12:35:49</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Restitution</td>
</tr>
<tr>
<td>2022-01-22 12:35:49</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Doctor Parnassus</td>
</tr>
<tr>
<td>2022-01-22 12:35:49</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Twilight Twist</td>
</tr>
<tr>
<td>2022-01-22 12:35:49</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Irish Hill</td>
</tr>
<tr>
<td>2022-01-22 12:35:49</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Girandole (NR)</td>
</tr>
<tr>
<td>2022-01-22 12:35:49</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Hayedo (NR)</td>
</tr>
<tr>
<td>2022-01-22 12:55:03</td>
<td>Ascot - 13:10 Ascot 22nd Jan 3m Hcap Chs</td>
<td>Winds Of Fire</td>
</tr>
<tr>
<td>2022-01-22 12:55:03</td>
<td>Ascot - 13:10 Ascot 22nd Jan 3m Hcap Chs</td>
<td>Cat Tiger</td>
</tr>
<tr>
<td>2022-01-22 12:55:03</td>
<td>Ascot - 13:10 Ascot 22nd Jan 3m Hcap Chs</td>
<td>Sussex Ranger</td>
</tr>
<tr>
<td>2022-01-22 12:55:03</td>
<td>Ascot - 13:10 Ascot 22nd Jan 3m Hcap Chs</td>
<td>Didero Vallis</td>
</tr>
<tr>
<td>2022-01-22 12:55:03</td>
<td>Ascot - 13:10 Ascot 22nd Jan 3m Hcap Chs</td>
<td>Le Breuil</td>
</tr>
<tr>
<td>2022-01-22 12:55:03</td>
<td>Ascot - 13:10 Ascot 22nd Jan 3m Hcap Chs</td>
<td>Port Of Mars</td>
</tr>
<tr>
<td>2022-01-22 12:55:03</td>
<td>Ascot - 13:10 Ascot 22nd Jan 3m Hcap Chs</td>
<td>Mr Muldoon</td>
</tr>
<tr>
<td>2022-01-22 12:55:03</td>
<td>Ascot - 13:10 Ascot 22nd Jan 3m Hcap Chs</td>
<td>Downtown Getaway</td>
</tr>
<tr>
<td>2022-01-22 13:10:16</td>
<td>Ascot - 13:10 Ascot 22nd Jan 3m Hcap Chs</td>
<td>Sussex Ranger</td>
</tr>
<tr>
<td>2022-01-22 13:10:16</td>
<td>Ascot - 13:10 Ascot 22nd Jan 3m Hcap Chs</td>
<td>Mr Muldoon</td>
</tr>
<tr>
<td>2022-01-22 13:10:16</td>
<td>Ascot - 13:10 Ascot 22nd Jan 3m Hcap Chs</td>
<td>Port Of Mars</td>
</tr>
<tr>
<td>2022-01-22 13:10:16</td>
<td>Ascot - 13:10 Ascot 22nd Jan 3m Hcap Chs</td>
<td>Le Breuil</td>
</tr>
<tr>
<td>2022-01-22 13:10:16</td>
<td>Ascot - 13:10 Ascot 22nd Jan 3m Hcap Chs</td>
<td>Downtown Getaway</td>
</tr>
<tr>
<td>2022-01-22 13:10:16</td>
<td>Ascot - 13:10 Ascot 22nd Jan 3m Hcap Chs</td>
<td>Cobolobo</td>
</tr>
<tr>
<td>2022-01-22 13:10:16</td>
<td>Ascot - 13:10 Ascot 22nd Jan 3m Hcap Chs</td>
<td>Defi Sacre</td>
</tr>
<tr>
<td>2022-01-22 13:10:16</td>
<td>Ascot - 13:10 Ascot 22nd Jan 3m Hcap Chs</td>
<td>Winds Of Fire</td>
</tr>
<tr>
<td>2022-01-22 13:10:16</td>
<td>Ascot - 13:10 Ascot 22nd Jan 3m Hcap Chs</td>
<td>Didero Vallis</td>
</tr>
<tr>
<td>2022-01-22 13:10:16</td>
<td>Ascot - 13:10 Ascot 22nd Jan 3m Hcap Chs</td>
<td>Cat Tiger</td>
</tr>
</tbody>
</table>
</div>
<p>Expected output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date/Time</th>
<th>MarketName</th>
<th>SelectionName</th>
</tr>
</thead>
<tbody>
<tr>
<td>2022-01-22 12:20:03</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Girandole (NR)</td>
</tr>
<tr>
<td>2022-01-22 12:20:03</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Doctor Parnassus</td>
</tr>
<tr>
<td>2022-01-22 12:20:03</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Twilight Twist</td>
</tr>
<tr>
<td>2022-01-22 12:20:03</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Irish Hill</td>
</tr>
<tr>
<td>2022-01-22 12:20:03</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Hayedo (NR)</td>
</tr>
<tr>
<td>2022-01-22 12:20:03</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Restitution</td>
</tr>
<tr>
<td>2022-01-22 12:20:03</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Graystone (NR)</td>
</tr>
<tr>
<td>2022-01-22 12:35:49</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Graystone (NR)</td>
</tr>
<tr>
<td>2022-01-22 12:35:49</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Restitution</td>
</tr>
<tr>
<td>2022-01-22 12:35:49</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Doctor Parnassus</td>
</tr>
<tr>
<td>2022-01-22 12:35:49</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Twilight Twist</td>
</tr>
<tr>
<td>2022-01-22 12:35:49</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Irish Hill</td>
</tr>
<tr>
<td>2022-01-22 12:35:49</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Girandole (NR)</td>
</tr>
<tr>
<td>2022-01-22 12:35:49</td>
<td>Ascot - 12:35 Ascot 22nd Jan 1m7f Hrd</td>
<td>Hayedo (NR)</td>
</tr>
</tbody>
</table>
</div>
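One way to express the rule, assuming the column names from the extract: group by MarketName, compare the number of rows carrying the group's earliest and latest Date/Time, and keep the group only when the two counts match. A sketch (the function name is mine):

```python
import pandas as pd

def keep_balanced_markets(df: pd.DataFrame) -> pd.DataFrame:
    """Keep markets whose earliest and latest Date/Time have the same row count."""
    def balanced(g: pd.DataFrame) -> bool:
        counts = g["Date/Time"].value_counts()
        return counts[g["Date/Time"].min()] == counts[g["Date/Time"].max()]
    return df.groupby("MarketName").filter(balanced)
```

On the extract above this drops all rows of the 13:10 market (8 rows at 12:55:03 vs 10 at 13:10:16) and keeps the 12:35 market (7 and 7).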
|
<python><pandas>
|
2023-02-23 10:18:54
| 2
| 500
|
James
|
75,543,387
| 1,438,934
|
Need to write custom User model in DJango and return pre stored json file in response
|
<p>I just started learning Python/Django five days ago. I need to write an API that receives 'username', 'password', 'password2' in the request and stores the username in a User object. If the username already exists in User, it should just return a simple error message: "username is duplicate".</p>
<p>I am using the User class from django.contrib.auth.models, which returns a very large response (more than 1,000 lines) if the username is not a duplicate and is stored successfully.</p>
<p><strong>Question 1</strong>: <em><strong>Solved now. Looking for an answer to Question 2.</strong></em></p>
<p>I want to return simple one line message in response , if username is not duplicated and stored successfully.</p>
<pre><code>from rest_framework import serializers
from django.contrib.auth.models import User
from django.contrib.auth.password_validation import validate_password
class RegisterSerializer(serializers.ModelSerializer):
username = serializers.CharField(required=True)
password = serializers.CharField(write_only=True, required=True, validators=[validate_password])
password2 = serializers.CharField(write_only=True, required=True)
class Meta:
model = User
fields = ('username', 'password', 'password2', 'email', 'first_name', 'last_name')
def validate(self, attrs):
if attrs['password'] != attrs['password2']:
raise serializers.ValidationError({"password": "Password fields didn't match."})
return attrs
def create(self, validated_data):
user = User.objects.create(
username=validated_data['username']
)
def validate_username(self, value):
if User.objects.filter(username__iexact=value).exists():
raise serializers.ValidationError("A user with this username already exists.")
return value
user.set_password(validated_data['password'])
user.save()
return user
</code></pre>
<p><strong>Question 2</strong>:</p>
<p>I want to write an API which, on a GET call, returns a simple pre-stored JSON file. This file should be pre-stored in the file system. How do I do that?</p>
<p><code>register</code> is the app name, and <code>static</code> is a folder inside it where I keep the stations.json file: register/static/stations.json.</p>
<p><strong>settings.py</strong>:</p>
<pre><code>STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'register/static/')
]
STATIC_URL = 'static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
</code></pre>
<p><strong>views.py:</strong></p>
<pre><code>from django.shortcuts import render
# Create your views here.
from django.contrib.auth.models import User
from .serializers import RegisterSerializer
from rest_framework import generics
from django.http import JsonResponse
from django.conf import settings
import json
class RegisterView(generics.CreateAPIView):
queryset = User.objects.all()
serializer_class = RegisterSerializer
def get_stations(request):
with open(settings.STATICFILES_DIRS[0] + '/stations.json', 'r') as f:
data = json.load(f)
return JsonResponse(data)
</code></pre>
<p><strong>urls.py:</strong></p>
<pre><code>from django.urls import path
from register.views import RegisterView
from . import views
urlpatterns = [
path('register/', RegisterView.as_view(), name='auth_register'),
path('stations/', views.get_stations, name='get_stations'),
]
</code></pre>
<p>setup/urls.py:</p>
<pre><code>from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path('api/', include('register.urls')),
]
</code></pre>
<p>When I hit GET request from Postman: "http://127.0.0.1:8000/api/stations/",</p>
<p>I get a 500 Internal Server Error:</p>
<pre><code>TypeError
at /api/stations/
</code></pre>
|
<python><django>
|
2023-02-23 10:15:46
| 1
| 1,182
|
Anish Mittal
|
75,543,376
| 8,794,133
|
Generating a 3D mesh from a string in Python
|
<p>I am looking for a way to turn a text string into a 3D mesh that could then be further manipulated using relevant Python libraries, such as pyMesh or Open3D. I thought this would be a common feature, but it proves more difficult than imagined.</p>
|
<python><3d><mesh><open3d><pymesh>
|
2023-02-23 10:14:27
| 1
| 594
|
IamTheWalrus
|
75,543,343
| 14,269,252
|
Streamlit multiselect, if I don't select anything, doesn't show data frame
|
<p>I am building a Streamlit app. Part of my code includes <code>multiselect</code> as follows. When I don't select anything in <code>multiselect</code>, I want to show the whole data frame without any filtering, but it doesn't show any data. How should I modify the code?</p>
<pre class="lang-py prettyprint-override"><code>code_= df_temp.CODE.unique().tolist()
type_ = df_temp.TYPE.unique().tolist()
options, options2 = st.columns([0.1, 0.1])
options = options.multiselect('Select Code', code_ )
options2 = options2.multiselect('Select Type', type_ )
df_filtered = time_filtered.query('CODE in @options or TYPE in @options2')
</code></pre>
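A common fix is to treat an empty selection as "no filter" by falling back to all unique values. The sketch below shows the filtering logic in plain pandas (in the app, <code>codes</code> and <code>types</code> would come from the two <code>st.multiselect</code> calls); note it uses <code>and</code> between the two conditions, which together with the fallback gives the usual intuitive behaviour:

```python
import pandas as pd

def apply_filters(df: pd.DataFrame, codes: list, types: list) -> pd.DataFrame:
    """Filter by the selected codes/types; an empty selection means 'no filter'."""
    codes = codes or df["CODE"].unique().tolist()
    types = types or df["TYPE"].unique().tolist()
    return df.query("CODE in @codes and TYPE in @types")
```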
|
<python><dataframe><streamlit>
|
2023-02-23 10:11:46
| 1
| 450
|
user14269252
|
75,543,332
| 6,803,114
|
Databricks notebook ipywidgets not working as expected
|
<p>I am working in Azure Databricks. I want to create a button that takes a text value as input; on the click of the button a function should run that prints the value entered. For that I created this code:</p>
<pre><code>import ipywidgets as widgets
def my_function(param):
print(f"The parameter is: {param}")
text_input = widgets.Text(description="Enter text:")
button = widgets.Button(description="Click Me!")
display(text_input)
display(button)
def on_button_click(b):
my_function(text_input.value)
button.on_click(on_button_click)
</code></pre>
<p>But when I click the button, nothing happens. It should run the <code>my_function</code> and print the input text.</p>
<p>Strangely this exact code works fine when I run it in <strong>jupyter notebook</strong>.</p>
<p>I am not able to make it work in <strong>Azure Databricks</strong>.</p>
<p>Any insights would be helpful</p>
|
<python><databricks><azure-databricks>
|
2023-02-23 10:11:07
| 1
| 7,676
|
Shubham R
|
75,542,778
| 14,752,392
|
DRF - list of optionals field in serializer meta class
|
<p>I have a model with about 45 fields that takes information about a company</p>
<pre><code>class Company(models.Model):
    company_name = models.CharField(max_length=255)
.
.
.
    last_information = models.CharField(max_length=255)
</code></pre>
<p>I also have a serializer that looks like so,</p>
<pre><code>class CompanySerializer(serializers.ModelSerializer):
class Meta:
model = Company
fields = "__all__"
# some_optional_fields = ["field_1","field_2","field_3"]
</code></pre>
<p>However, some of the fields are not required (about 20 of them, to be precise). Is there a way I can add those optional fields as a list or iterable of some sort to the Meta class, for example <code>some_optional_fields = ["field_1","field_2","field_3"]</code>, so that I won't have to explicitly set each field's <code>required</code> argument to <code>False</code>, like so:</p>
<pre><code>class CompanySerializer(serializers.ModelSerializer):
    company_name = serializers.CharField(max_length=255, required=False)
.
.
.
    last_information = serializers.CharField(max_length=255, required=False)
class Meta:
model = Company
fields = ["field_1","field_2","field_3",...,"field_45"]
</code></pre>
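One hedged option: DRF's ModelSerializer accepts per-field options through <code>Meta.extra_kwargs</code>, so the list can be turned into that dict with a comprehension (the field names below are placeholders):

```python
# Placeholder list of the optional field names
SOME_OPTIONAL_FIELDS = ["field_1", "field_2", "field_3"]

# The shape ModelSerializer expects in Meta.extra_kwargs:
extra_kwargs = {name: {"required": False} for name in SOME_OPTIONAL_FIELDS}

# In the serializer this would sit inside Meta, roughly:
# class CompanySerializer(serializers.ModelSerializer):
#     class Meta:
#         model = Company
#         fields = "__all__"
#         extra_kwargs = {name: {"required": False} for name in SOME_OPTIONAL_FIELDS}
```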
|
<python><django><django-models><django-rest-framework><django-serializer>
|
2023-02-23 09:20:27
| 4
| 918
|
se7en
|
75,542,704
| 11,644,523
|
Extract year, month, day from a string with forward slashes
|
<p>Given a string like
<code>s3://bucket/year=2023/month=02/day=22/test.csv</code>,
I would like to return the <code>year</code>, <code>month</code>, <code>day</code> as separate variables.</p>
<p>Is there some regex that can search for this? Assuming the pattern is always the same. I tried with <code>datetime</code> module, but I think the forward slashes are interfering with it.</p>
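Assuming the <code>year=.../month=.../day=...</code> pattern is stable, a plain <code>re.search</code> sketch (the helper name is mine):

```python
import re

def extract_ymd(key: str):
    """Pull year/month/day out of an S3-style partitioned path."""
    m = re.search(r"year=(\d{4})/month=(\d{2})/day=(\d{2})", key)
    if m is None:
        raise ValueError(f"no date partition found in {key!r}")
    return m.group(1), m.group(2), m.group(3)

year, month, day = extract_ymd("s3://bucket/year=2023/month=02/day=22/test.csv")
# → ("2023", "02", "22")
```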
|
<python>
|
2023-02-23 09:13:37
| 1
| 735
|
Dametime
|
75,542,637
| 486,181
|
prettier throws error `Failed to resolve a parser`
|
<p>Prettier throws the error "failed to resolve a parser". Prettier is selected as the formatter in the Workspace, User and Python > Workspace settings, so I'm out of ideas as to why the error is thrown...</p>
<pre><code>["INFO" - 08:57:18] File Info:
{
"ignored": false,
"inferredParser": null
}
["WARN" - 08:57:18] Parser not inferred, trying VS Code language.
["ERROR" - 08:57:18] Failed to resolve a parser, skipping file. If you registered a custom file extension, be sure to configure the parser.
</code></pre>
|
<python><visual-studio-code><vscode-extensions><prettier>
|
2023-02-23 09:08:01
| 1
| 1,512
|
Ajax
|
75,542,507
| 9,640,238
|
NLP: Calculate text time to read
|
<p>I need to calculate estimated reading time for a few hundred Word documents. I couldn't identify a method to do so in common NLP libraries such as <code>spaCy</code> and <code>nltk</code>.</p>
<p>Any hint?</p>
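A common heuristic that needs no NLP library at all is word count divided by a reading speed of roughly 200-250 words per minute. A minimal sketch (for .docx files, something like python-docx could supply the text first; that library choice is an assumption):

```python
def reading_time_minutes(text: str, wpm: int = 200) -> float:
    """Estimate reading time with the common words-per-minute heuristic."""
    return len(text.split()) / wpm

# For Word documents, one could first extract the text, e.g. with python-docx:
# text = "\n".join(p.text for p in docx.Document("report.docx").paragraphs)
```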
|
<python><nlp>
|
2023-02-23 08:55:59
| 0
| 2,690
|
mrgou
|
75,542,501
| 14,667,788
|
How to turn pandas df into 2D form
|
<p>I have a following dataset:</p>
<pre class="lang-py prettyprint-override"><code>
import pandas as pd
input = {"Product" : ["Car", "", "", "House", "", "", ""], "Name" : ["Wheel", "Glass", "Seat", "Glass", "Roof", "Door", "Kitchen"],
"Price" : [5, 3, 4, 2, 6, 4, 12]}
df_input = pd.DataFrame(input)
</code></pre>
<p>I would like to turn this df into 2D form. How can I do this please?</p>
<p>Desired output is:</p>
<pre class="lang-py prettyprint-override"><code>output = {"Product" : [ "Car", "House"] , "Wheel" : [ 5, 0], "Glass" : [ 3, 2],"Seat" : [ 4, 0],"Roof" : [ 0, 6], "Door" : [ 0, 4], "Kitchen" : [ 0, 12]}
df_output = pd.DataFrame(output)
</code></pre>
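A sketch using the sample input: forward-fill the blank Product cells, then pivot Name into columns with <code>pivot_table</code> (note the resulting columns come out alphabetically sorted rather than in first-appearance order):

```python
import pandas as pd

data = {"Product": ["Car", "", "", "House", "", "", ""],
        "Name": ["Wheel", "Glass", "Seat", "Glass", "Roof", "Door", "Kitchen"],
        "Price": [5, 3, 4, 2, 6, 4, 12]}
df = pd.DataFrame(data)

# Turn the empty continuation cells into NaN, then forward-fill the Product.
df["Product"] = df["Product"].where(df["Product"] != "").ffill()

# Spread Name into columns, filling missing combinations with 0.
df_output = (df.pivot_table(index="Product", columns="Name", values="Price",
                            aggfunc="sum", fill_value=0)
               .reset_index())
```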
|
<python><pandas>
|
2023-02-23 08:55:31
| 1
| 1,265
|
vojtam
|
75,542,366
| 4,872,985
|
Pytorch - What module should I use to multiply the output of a layer using Sequential?
|
<p>While defining a neural network using <em>nn.Module</em>, in the forward function I can multiply the output of the final layer using:</p>
<pre><code>def forward(self, x):
...
x = torch.mul(x, self.max_action)
return x
</code></pre>
<p>I am trying to do the same but instead using <em>nn.Sequential</em> method to define the neural network</p>
<pre><code>model = nn.Sequential()
model.add_module(...
...
model.add_module(name='activation_output', module=?)
</code></pre>
<p>What should I use there to have the output of the previous layer multiplied by the scalar <em>self.max_action</em>? Or should I build the sequential model in a different way?</p>
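Since <code>nn.Sequential</code> only chains <code>nn.Module</code>s, one option is to wrap the multiplication in a tiny custom module. A hedged sketch (the <code>Scale</code> class and the layer sizes are mine):

```python
import torch
from torch import nn

class Scale(nn.Module):
    """Multiply the input by a fixed scalar, usable inside nn.Sequential."""
    def __init__(self, factor: float):
        super().__init__()
        self.factor = factor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.factor

max_action = 2.0  # stands in for self.max_action
model = nn.Sequential(nn.Linear(4, 4), nn.Tanh())
model.add_module(name="activation_output", module=Scale(max_action))
```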
|
<python><pytorch>
|
2023-02-23 08:40:34
| 1
| 616
|
Fabio Olivetto
|
75,542,224
| 13,612,961
|
RuntimeError: Failed to add edge detection - On Raspberrypi
|
<p>I'm working on a button example on a Raspberry Pi. I found <a href="https://raspberrypihq.com/use-a-push-button-with-raspberry-pi-gpio/" rel="noreferrer">this</a> tutorial on the internet, and I was trying to complete it 1 by 1. The code is exactly the same, and I'm sure I used the correct pins.</p>
<p>This is what the code looks like:</p>
<pre><code>import RPi.GPIO as GPIO # Import Raspberry Pi GPIO library
def button_callback(channel):
print("Button was pushed!")
GPIO.setwarnings(False) # Ignore warning for now
GPIO.setmode(GPIO.BOARD) # Use physical pin numbering
GPIO.setup(10, GPIO.IN, pull_up_down=GPIO.PUD_DOWN) # Set pin 10 to be an input pin and set initial value to be pulled low (off)
GPIO.add_event_detect(10,GPIO.RISING,callback=button_callback) # Setup event on pin 10 rising edge
message = input("Press enter to quit\n\n") # Run until someone presses enter
GPIO.cleanup() # Clean up
</code></pre>
<p>And this is how the Breadboard is wired:
<a href="https://i.sstatic.net/lsfVR.png" rel="noreferrer"><img src="https://i.sstatic.net/lsfVR.png" alt="Wiring schema" /></a></p>
<p>Somehow, if I try to run the python file, I get the following error:
<a href="https://i.sstatic.net/vcWXv.png" rel="noreferrer"><img src="https://i.sstatic.net/vcWXv.png" alt="error" /></a></p>
<p>I've done some research and found out that the python file probably isn't the problem. It's more likely that it's the user privileges. For other people, it worked to just run the file as sudo, but that didn't work for me.</p>
<p>Does anyone know how I can fix this?</p>
|
<python><raspberry-pi><gpio><edge-detection>
|
2023-02-23 08:24:43
| 3
| 569
|
Lumberjack
|
75,542,143
| 9,850,681
|
Sphinx errors with Pydantic settings, does not generate documentation
|
<p>I am trying to generate documentation with Sphinx (apidoc), but all the files that import the settings module are not documented.</p>
<p>When I run the <code>make html</code> command I get these error messages</p>
<pre><code>#...
File "pydantic/env_settings.py", line 39, in pydantic.env_settings.BaseSettings.__init__
File "pydantic/main.py", line 342, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 35 validation errors for Settings
app_name
field required (type=value_error.missing)
kafka_broker
field required (type=value_error.missing)
apicurio_uri
field required (type=value_error.missing)
jwt_secret
field required (type=value_error.missing)
#...
</code></pre>
<p>I create the settings with BaseSettings from Pydantic</p>
<p><code>settings.py</code></p>
<pre class="lang-py prettyprint-override"><code>class Settings(BaseSettings):
app_name: str
kafka_broker: str
healthcheck_topic: str = "_healthcheck"
# Apicurio
apicurio_uri: str
apicurio_journal_topic: str = "kafkasql-journal"
business_rules_topic: str = "_message"
jwt_secret: str
jwt_expiration_sec: int = 24 * 60 * 60 # A day
_jwt_algorithm: str = PrivateAttr(default_factory=lambda: "HS256")
admin_username: str
admin_password: str
azure_auth_url: str
class Config:
try:
env_file = '.env'
env_file_encoding = 'utf-8'
case_sensitive = False
except:
env_file = '.env_local'
env_file_encoding = 'utf-8'
case_sensitive = False
def __init__(self, **data):
super().__init__(**data)
self._postgresql_conn = make_url(self.postgresql_conn_url)
@property
def postgresql_conn(self) -> URL:
return self._postgresql_conn
@property
def jwt_algorithm(self) -> str:
return self._jwt_algorithm
@validator('redis_user', 'redis_password')
def field_is_not_empty_if_not_null(cls, v):
if v is not None and len(v) == 0:
raise ValueError("Field cannot be empty string")
return v
_fields_not_empty = validator(
'kafka_broker',
'app_name',
'apicurio_journal_topic',
'business_rules_topic',
'jwt_secret',
allow_reuse=True
)(field_not_empty)
_fields_are_positive = validator(
'cache_short_expiration_hours',
'cache_medium_expiration_hours',
'cache_long_expiration_hours',
'jwt_expiration_sec',
allow_reuse=True
)(field_non_zero_positive)
_fields_are_uri = validator(
'apicurio_uri',
'ticket_uri',
allow_reuse=True
)(field_is_uri)
def build_api_definition(self) -> TicketApiDefinition:
return TicketApiDefinition(
webservice_endpoint=self.ticket_webservice_path,
webservice_path=self.ticket_webservice_path,
create_session=self.ticket_webservice_create_session_endpoint,
)
@lru_cache
def get_settings():
return Settings()
</code></pre>
<p>How can I solve it? Do I need to change something in Sphinx, or is this about the settings class?</p>
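One hedged approach, since Sphinx imports the modules and <code>Settings()</code> raises without the environment: provide dummy values in <code>conf.py</code> before autodoc imports anything (case-insensitive lookup means the upper-case names below map onto the lower-case fields; the values are placeholders), or keep autodoc from importing the module at all via <code>autodoc_mock_imports</code>:

```python
# conf.py (Sphinx configuration) — sketch; values below are placeholders
import os

# Option 1: dummy values for the required settings, set before autodoc
# imports any module that instantiates Settings().
os.environ.setdefault("APP_NAME", "docs-build")
os.environ.setdefault("KAFKA_BROKER", "localhost:9092")
os.environ.setdefault("APICURIO_URI", "http://localhost")
os.environ.setdefault("JWT_SECRET", "docs-only")
# ...one entry per required field

# Option 2: keep autodoc from importing the settings module at all
# (module path is a placeholder for wherever settings.py lives):
# autodoc_mock_imports = ["myproject.settings"]
```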
|
<python><python-sphinx><pydantic>
|
2023-02-23 08:15:57
| 0
| 460
|
Plaoo
|
75,542,120
| 14,269,252
|
Insert image on the top of side bar in stream lit app
|
<p>Is there a way to insert an image at the top of the sidebar in a Streamlit app? I used the following code, but it shows the image below the menu in the sidebar.</p>
<pre class="lang-py prettyprint-override"><code>st.sidebar.image("st.png", width=70)
</code></pre>
|
<python><streamlit>
|
2023-02-23 08:13:56
| 1
| 450
|
user14269252
|
75,541,891
| 7,215,853
|
What is an efficient way to transfer an relation defining id between two types of (JSON)entities in Python3?
|
<p>I have two types of entities: contacts and clubs.
Both are represented as JSON objects. Contacts have unique ids and clubs also have unique ids. Contacts and clubs are stored in separate lists.</p>
<p>The relation between the two is that each club can have one or more contacts.
This relation is currently stored inside the club entity: a key called "contacts" lists one or more ids of contacts.</p>
<pre><code>{'id': '12345678', 'clubName': 'myclub', 'contacts': ['098765', '192837', '543210]}
</code></pre>
<p>However, I now need to import those datasets into another system.
In this new system, the relation is "reversed": the information is not stored in the club entity but in the contact entity. Contacts now need to hold the ids of their respective clubs and not the other way around.</p>
<p>I am looking for a way to transfer those id's from the clubs into the contacts.</p>
<p>The only way, I could think of so far is:</p>
<ul>
<li>Loop over all clubs and get each contact id, remember the clubs id</li>
<li>Loop over all contacts, get the contact id and check whether it matches with the given contact id</li>
<li>if the contact id matches, add the club's id to the contact entity</li>
</ul>
<p>As you probably noticed, this is a pretty inefficient double or even triple loop (since a club can have multiple contacts) and will probably be very inefficient in a larger dataset.</p>
<p>Is there a faster way to do this?</p>
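The double loop can be avoided by first building a contact-id → club-ids index in one pass over the clubs, then making a single pass over the contacts — O(contacts + total links) overall. A sketch (function and key names are mine):

```python
from collections import defaultdict

def invert_club_contacts(clubs: list) -> dict:
    """Build a contact_id -> [club_id, ...] index in one pass over the clubs."""
    contact_to_clubs = defaultdict(list)
    for club in clubs:
        for contact_id in club.get("contacts", []):
            contact_to_clubs[contact_id].append(club["id"])
    return contact_to_clubs

def attach_clubs(contacts: list, contact_to_clubs: dict) -> None:
    """Single pass over the contacts: attach each one's club ids in place."""
    for contact in contacts:
        contact["clubs"] = contact_to_clubs.get(contact["id"], [])
```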
|
<python><json>
|
2023-02-23 07:46:22
| 1
| 320
|
MrTony
|
75,541,888
| 1,852,526
|
Python subprocess open command prompt in new window as administrator and execute command
|
<p>I have the following code, where I am trying to open command prompt in a separate window as administrator and want to execute a command. I am trying to follow the <a href="https://www.xingyulei.com/post/py-admin/index.html" rel="nofollow noreferrer">tutorial</a> here, but it says:</p>
<blockquote>
<p>FileNotFoundError: [WinError 2] The system cannot find the file specified in the Terminal.</p>
</blockquote>
<p>Here is the code, I am trying. If I just run this command <code>cmd.exe /K "EchoServer.exe -c -s"</code> it runs fine but won't run as admin.</p>
<pre><code>import subprocess
from subprocess import Popen, CREATE_NEW_CONSOLE
def OpenServers():
print("Full path "+echoServerFullPath)
print(os.path.exists(echoServerFullPath))
os.chdir(echoServerFullPath)
command = ['cmd.exe /K "CoreServer.exe -c -s"', '/c', 'runas', '/user:administrator']
#cmd.exe /K "EchoServer.exe -c -s"
cmd1=subprocess.Popen(command,creationflags=CREATE_NEW_CONSOLE)
</code></pre>
<p>Here is the error I am getting:</p>
<p><a href="https://i.sstatic.net/JIVBw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JIVBw.png" alt="error" /></a></p>
<p>Just a quick edit, when I try</p>
<pre><code>subprocess.Popen(['runas', '/user:Administrator', '"CoreServer.exe -c -s"'],creationflags=CREATE_NEW_CONSOLE)
</code></pre>
<p>It opens command prompt and says enter password for administrator.</p>
|
<python><subprocess><command-prompt>
|
2023-02-23 07:46:05
| 0
| 1,774
|
nikhil
|
75,541,730
| 2,998,077
|
Month column in dataframe, plus number of months, to compare with current calendar month
|
<p>A column in the dataframe holds a year-month value. I want to add a number of months to it to get a 'future' month, then compare this 'future' month with the current calendar month.</p>
<pre><code>import pandas as pd
from io import StringIO
import numpy as np
from datetime import datetime
csvfile = StringIO(
"""Name Year - Month Score
Mike 2022-11 31
Mike 2022-09 136
""")
df = pd.read_csv(csvfile, sep = '\t', engine='python')
d_name_plus_month = {"Mike":2}
month_of_first_row = pd.to_datetime(df.iloc[[0]]['Year - Month']).values.astype("datetime64[M]")
plus_months = d_name_plus_month['Mike']
scheduled_month = month_of_first_row + int(plus_months)
# scheduled_month_in_string = scheduled_month.astype('str')
current_month = datetime.now().strftime("%Y") +'-' +datetime.now().strftime("%m") # it's string
current_month = np.array(current_month)
print (scheduled_month <= current_month)
# month_of_first_row: 2022-11
# scheduled_month: 2023-01
# current_month: 2023-02
# so "scheduled_month" is earlier than "current_month".
</code></pre>
<p>But it has error:</p>
<pre><code>TypeError: '<=' not supported between instances of 'numpy.ndarray' and 'numpy.ndarray'
</code></pre>
<p>I've tried altering the lines to convert both sides to strings for the comparison, but without success.</p>
<p>How can I correct the lines?</p>
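The comparison fails because a datetime64 array ends up being compared with a string array. One sketch that sidesteps the dtype juggling entirely is <code>pd.Period</code>, which supports month arithmetic and comparison directly (the helper name is mine):

```python
import pandas as pd
from datetime import datetime

def scheduled_is_due(year_month: str, plus_months: int) -> bool:
    """True if year_month + plus_months is not after the current calendar month."""
    scheduled = pd.Period(year_month, freq="M") + plus_months
    current = pd.Period(datetime.now(), freq="M")
    return scheduled <= current
```

For the example in the question, `pd.Period("2022-11", freq="M") + 2` gives the period 2023-01, which compares as earlier than the 2023-02 current month.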
|
<python><pandas><dataframe><numpy><datetime>
|
2023-02-23 07:27:07
| 4
| 9,496
|
Mark K
|
75,541,631
| 14,673,832
|
Printing binary tree after initialising gives None value
|
<p>I have a simple binary tree. I have initialized it using the TreeNode class, but while printing out the values from the tree, it prints None values as well. How can I mitigate this?</p>
<p>The code snippet is as follows:</p>
<pre><code>class TreeNode:
def __init__(self, root=None, left = None, right= None):
self.value = root
self.left = left
self.right = right
def tree(root) -> int:
# print(root)
if root is not None:
print(root.value)
print(tree(root.left))
print(tree(root.right))
return None
root = TreeNode(1)
# root.value = TreeNode(1)
root.left = TreeNode(2)
root.right = TreeNode(3)
tree(root)
</code></pre>
<p>The above code gives the following output:</p>
<pre><code>1
2
None
None
None
3
None
None
None
</code></pre>
<p>I want to print only <code>1 2 3</code>, without the None values. How can I achieve this?</p>
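The extra Nones come from <code>print(tree(root.left))</code>: <code>tree</code> returns <code>None</code>, and that return value gets printed on top of the values already printed inside the recursion. Either drop the outer prints, or collect the values and print once, as in this sketch:

```python
class TreeNode:
    def __init__(self, value=None, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def traverse(node) -> list:
    """Pre-order traversal that returns the values, so nothing prints None."""
    if node is None:
        return []
    return [node.value] + traverse(node.left) + traverse(node.right)

root = TreeNode(1, TreeNode(2), TreeNode(3))
print(*traverse(root))  # → 1 2 3
```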
|
<python><graph><binary-tree><nodes>
|
2023-02-23 07:15:17
| 2
| 1,074
|
Reactoo
|
75,541,607
| 17,973,259
|
Pygame animation not showing on screen
|
<p>In my game there is an animation of "exploding" when the ship hits something and loses 1 hp. This is the code that triggers <code>self.first_player_ship.explode()</code>:</p>
<pre><code> def _first_player_ship_hit(self):
"""Respond to the first player ship being hit by an alien."""
if self.first_player_ship.exploding:
return
if self.stats.ships_left > 0:
self.first_player_ship.explode()
self.first_player_ship.shield_on = False
self.settings.thunder_bullet_count = 1
if self.settings.first_player_bullets_allowed > 1:
self.settings.first_player_bullets_allowed -= 2
self.stats.ships_left -= 1
self.first_player_ship.center_ship()
self.sb.prep_hp()
else:
self.stats.player_one_active = False
if self.stats.player_two_active:
self.first_player_bullets.empty()
else:
self.stats.game_active = False
pygame.mouse.set_visible(True)
</code></pre>
<p>The problem is that the animation works well until the player loses his last HP. The ship just disappears without the animation playing.
I suspect that it might have something to do with the update method in my game: when the ship loses all its HP it becomes inactive, and when it's inactive it's no longer blitted to the screen.</p>
<pre><code>def _update_screen(self):
"""Update images on the screen, and flip to the new screen."""
if self.stats.player_one_active:
self.first_player_ship.blitme()
</code></pre>
<p>In <code>_first_player_ship_hit()</code>, I tried to put the <code>first_player_ship.explode()</code> call before setting the inactive flag, but it still didn't work.</p>
<p>Link to game repo: <a href="https://github.com/KhadaAke/Alien-Onslaught" rel="nofollow noreferrer">https://github.com/KhadaAke/Alien-Onslaught</a></p>
|
<python>
|
2023-02-23 07:12:39
| 1
| 878
|
Alex
|
75,541,586
| 4,907,187
|
How to access Oracle DB Link in Django models?
|
<p>I'm trying to access a table from a different database using a database link. I'm getting the error <code>database link not found</code>.</p>
<p>My model looks like this:</p>
<pre><code>from django.db import models

# Create your models here.
class Customer(models.Model):
    cust_num = models.CharField(max_length=20)
    customer_number = models.CharField(max_length=80)

    class Meta:
        db_table = '\"MY_SCHEMA\".\"TABLE_NAME\"@\"OTHER_DB\"'
        managed = False
</code></pre>
<p>And I'm getting this error when trying to access it through the Django shell:</p>
<pre><code>cx_Oracle.DatabaseError: ORA-04054: database link MY_SCHEMA.CUST_NUM does not exist
</code></pre>
<p>Any suggestions? Thanks</p>
|
<python><django>
|
2023-02-23 07:09:58
| 1
| 429
|
arisalsaila
|
75,541,272
| 10,962,766
|
Copying values from dataframe cell to other cell in the same row
|
<p>I have a problem with chained indexing in a Pandas dataframe, which I know has to be avoided.</p>
<p>I am checking a dataframe for identical start and end dates in two different columns. If they are identical, I want to copy that value into a third column in the same row, which is the exact-date column.</p>
<p>The dataframe looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>index</th>
<th>factoid_ID</th>
<th>pers_ID</th>
<th>pers_name</th>
<th>alternative_names</th>
<th>event_type</th>
<th>event_after-date</th>
<th>event_before-date</th>
<th>event_start</th>
<th>event_end</th>
<th>event_date</th>
<th>pers_title</th>
<th>pers_function</th>
<th>place_name</th>
<th>inst_name</th>
<th>rel_pers</th>
<th>source_quotations</th>
<th>additional_info</th>
<th>comment</th>
<th>info_dump</th>
<th>source</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>3342</td>
<td>API</td>
<td>Georg Christian Gottlieb Theophil Wedekind</td>
<td>n/a</td>
<td>Rezeption</td>
<td>n/a</td>
<td>n/a</td>
<td>1788-11-09</td>
<td>1788-11-09</td>
<td>n/a</td>
<td>Dr., med. / Dr., phil. h.c.</td>
<td>Mitglied</td>
<td>Mainz</td>
<td>Universität Mainz, Medizinische Fakultät</td>
<td>n/a</td>
<td>n/a</td>
<td>auf Mitteilung eines kurfürstlichen Beschlusses vom 05.07.1788 erfolgte seine Aufnahme ohne Prüfung.</td>
<td>n/a</td>
<td>n/a</td>
<td>ProfAPI</td>
</tr>
</tbody>
</table>
</div>
<p>So far, I am trying to use this:</p>
<pre><code># SAME START/AFTER AND END/BEFORE DATE = EXACT DATE
# be careful to avoid chained indexing in the dataframe
# use multi-axis indexing (df.loc['a', '1']) instead
if e_df['event_start'].equals(e_df['event_end']):
    new_date = e_df['event_start'].values[0]
    # print(new_date)
    f_unique.loc[x, 'event_date'] = new_date
if e_df['event_after-date'].equals(e_df['event_before-date']):
    new_date = e_df['event_after-date'].values[0]
    # print(new_date)
    f_unique.loc[x, 'event_date'] = new_date  # NOT WORKING!!
if len(e_df["event_date"].values[0]) >= 4:
    new_date = e_df["event_date"].values[0]
    f_unique.loc[x, 'event_start'] = new_date
    f_unique.loc[x, 'event_end'] = new_date
else:
    new_date = e_df['event_date'].values[0]
    f_unique.loc[x, 'event_date'] = new_date
</code></pre>
<p>In this code section, <code>f_unique</code> is the entire dataframe with over 9000 rows. <code>e_df</code> is the individual row I am analysing as I am going through the rows one by one. <code>x</code> is the index from "0" to <code>len(f_unique)</code>.</p>
<p>If I look at <code>f_unique</code> after the operation, the values have unfortunately not been updated. How can I fix this?</p>
<p>My expected output is this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>index</th>
<th>factoid_ID</th>
<th>pers_ID</th>
<th>pers_name</th>
<th>alternative_names</th>
<th>event_type</th>
<th>event_after-date</th>
<th>event_before-date</th>
<th>event_start</th>
<th>event_end</th>
<th>event_date</th>
<th>pers_title</th>
<th>pers_function</th>
<th>place_name</th>
<th>inst_name</th>
<th>rel_pers</th>
<th>source_quotations</th>
<th>additional_info</th>
<th>comment</th>
<th>info_dump</th>
<th>source</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>3342</td>
<td>API</td>
<td>Georg Christian Gottlieb Theophil Wedekind</td>
<td>n/a</td>
<td>Rezeption</td>
<td>n/a</td>
<td>n/a</td>
<td>1788-11-09</td>
<td>1788-11-09</td>
<td>1788-11-09</td>
<td>Dr., med. / Dr., phil. h.c.</td>
<td>Mitglied</td>
<td>Mainz</td>
<td>Universität Mainz, Medizinische Fakultät</td>
<td>n/a</td>
<td>n/a</td>
<td>auf Mitteilung eines kurfürstlichen Beschlusses vom 05.07.1788 erfolgte seine Aufnahme ohne Prüfung.</td>
<td>n/a</td>
<td>n/a</td>
<td>ProfAPI</td>
</tr>
</tbody>
</table>
</div>
<p>Maybe iteration through the rows is already the wrong approach in this case, but I am also doing other things later in the code, e.g. duplicating some rows if they contain certain trigger events.</p>
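For the start/end comparison specifically, the per-row loop can be replaced by a single boolean-mask assignment on the whole dataframe, which also sidesteps the chained-indexing problem. A minimal sketch on stand-in data (column subset and values are mine):

```python
import pandas as pd

df = pd.DataFrame({
    "event_start": ["1788-11-09", "1800-01-01"],
    "event_end":   ["1788-11-09", "1801-01-01"],
    "event_date":  ["n/a", "n/a"],
})

# Vectorized: wherever start and end agree, copy that value into event_date.
same = df["event_start"] == df["event_end"]
df.loc[same, "event_date"] = df.loc[same, "event_start"]
```

The same pattern works for the after/before-date pair; each condition becomes one mask and one `df.loc[mask, col] = ...` assignment.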
|
<python><pandas><indexing>
|
2023-02-23 06:25:23
| 1
| 498
|
OnceUponATime
|
75,541,013
| 9,861,647
|
Jupyter Notebook Print unnecessary Index and results
|
<p>I have this code in Geopandas in a jupyter notebook</p>
<pre><code>import geopandas as gpd
gdf = gpd.read_file("adm4.shp")
gdf['coordinates'] = gdf.geometry.apply(lambda x: (x.centroid.y, x.centroid.x))
df['coordinates'] = list(zip(df.latitude_geopy, df.longitude_geopy))
joined = gpd.sjoin(gdf, df, how="inner", op='contains', lsuffix='left', rsuffix='right')
</code></pre>
<p>It prints the entire index out:</p>
<p><a href="https://i.sstatic.net/hgXSs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hgXSs.png" alt="enter image description here" /></a></p>
<p>How can I avoid that?</p>
|
<python><pandas><jupyter-notebook><geopandas>
|
2023-02-23 05:47:39
| 0
| 1,065
|
Simon GIS
|
75,540,802
| 14,109,040
|
Convert dictionary values to floats based on key
|
<p>I have the following dictionary:</p>
<pre><code>{'Month': 'July', '# short': 8, '# cancelled': 6, '% TT delivered': '0.9978408389882788', '% ontime': '0.85284108487160981', '% cancelled': '0.0018507094386181369', '% short': '0.0024676125848241827', '# scheduled': 3242, '# scheduled': 9697, '# ontime': 8270, 'Route': '82', 'Year': 2005}
</code></pre>
<p>I want to convert all values where the key starts with a <code>%</code> to floats.</p>
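A dict-comprehension sketch of the transformation being described (variable names are mine, and the sample dict is abbreviated):

```python
d = {'Month': 'July', '# short': 8, '% ontime': '0.85284108487160981',
     '% cancelled': '0.0018507094386181369'}

# Convert only the values whose key starts with '%'; leave the rest as-is.
converted = {k: float(v) if k.startswith('%') else v for k, v in d.items()}
```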
|
<python><dictionary>
|
2023-02-23 05:14:22
| 1
| 712
|
z star
|
75,540,692
| 8,884,612
|
Run kmeans text clustering with pytorch in gpu to create more than 1000 clusters
|
<p>I am trying to implement k-means clustering using kmeans-pytorch, but I get a memory error when I try to create more than 10 clusters.</p>
<p>My dataset has 7,000 text records.</p>
<p>Here is my code snippet</p>
<pre><code>import torch
from sklearn.feature_extraction.text import TfidfVectorizer
from kmeans_pytorch import kmeans
text_data = #list of 7000 records
# Preprocess the data
vectorizer = TfidfVectorizer(stop_words='english')
X = vectorizer.fit_transform(text_data)
# Convert sparse matrix to PyTorch tensor
X = torch.Tensor(X.toarray())
# Move the data to the GPU
X = X.cuda()
# Run k-means clustering on the GPU
k = 1000
cluster_assignments, centroids = kmeans(X, k,device=torch.device('cuda'))
</code></pre>
<p>Error:</p>
<pre><code>RuntimeError: [enforce fail at alloc_cpu.cpp:73] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 41359010000 bytes. Error code 12 (Cannot allocate memory)
</code></pre>
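The dense tensor built from the full TF-IDF vocabulary is very wide, and the distance computations grow with both that width and the cluster count. One common mitigation (my suggestion, not part of the original code) is to shrink the feature dimension before densifying, e.g. with `TruncatedSVD`, which works directly on the sparse matrix:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

# Tiny stand-in corpus; the real one would be the 7,000 text records.
texts = ["red apple", "green apple", "blue sky", "cloudy sky"]

X_sparse = TfidfVectorizer(stop_words='english').fit_transform(texts)
# Keep the matrix sparse until SVD shrinks it to a few dimensions;
# only the reduced matrix is densified and handed to k-means.
X_small = TruncatedSVD(n_components=2, random_state=0).fit_transform(X_sparse)
```

On the real data, something like a few hundred components is a typical starting point before converting to a torch tensor and clustering.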
|
<python><pytorch><gpu><k-means>
|
2023-02-23 04:54:59
| 1
| 579
|
Tanmay Shrivastava
|
75,540,685
| 2,458,922
|
Tensor, How to Gather Values of a List of Index?
|
<pre><code>t2 = tf.constant([[0, 11, 2, 3, 4],
[5, 61, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19]])
valid_mask = t2 <= 10
validIndex = tf.where(valid_mask)
print('validIndex',validIndex) # Expectation = Reality
print()
print('Final Output',tf.gather(t2,indices=validIndex)) # Hmm.. What ?
</code></pre>
<p>My final output comes as</p>
<pre><code>tf.Tensor(
[[[ 0 11 2 3 4]
[ 0 11 2 3 4]]
[[ 0 11 2 3 4]
[10 11 12 13 14]]......
[[10 11 12 13 14]
[ 0 11 2 3 4]]], shape=(9, 2, 5), dtype=int32)
</code></pre>
<p>Expected</p>
<pre><code>[0, 2, 3, 4, 5, 7, 8, 9, 10]
</code></pre>
<p>Please help me debug and correct this, and explain what's happening.</p>
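A hedged sketch of the usual fix: `tf.gather` interprets each index as a whole-row selector, while `tf.where` on a 2-D mask yields `(row, col)` pairs, which is what `tf.gather_nd` consumes; `tf.boolean_mask` does the mask-and-flatten in one step:

```python
import tensorflow as tf

t2 = tf.constant([[0, 11, 2, 3, 4],
                  [5, 61, 7, 8, 9],
                  [10, 11, 12, 13, 14],
                  [15, 16, 17, 18, 19]])
mask = t2 <= 10

# One step: keep the masked values, flattened row-major.
kept = tf.boolean_mask(t2, mask)

# Equivalent via the index route: gather_nd consumes (row, col) pairs.
same = tf.gather_nd(t2, tf.where(mask))
```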
|
<python><tensorflow><tensor>
|
2023-02-23 04:53:40
| 1
| 1,731
|
user2458922
|
75,540,444
| 412,234
|
Python efficently convert string to numpy integer array, character by character
|
<p>NumPy can work with <a href="https://numpy.org/doc/stable/reference/generated/numpy.fromstring.html" rel="nofollow noreferrer">comma separated lists</a> but that is a different task.
I want to convert each <em>character</em> of a string into an entry of a np array:</p>
<pre><code>x = np.frombuffer('fooλ'.encode(), dtype=np.uint8) #x = [102 111 111 206 187]
</code></pre>
<p>But the <a href="https://en.wikipedia.org/wiki/UTF-8" rel="nofollow noreferrer">UTF-8 encoding</a> assigns a variable number of bytes to each char (ascii chars take one byte but unicode chars take up to four). In this example "λ" costs two bytes.</p>
<p>To get the correct answer "ord()" works well:</p>
<pre><code>x = np.asarray([ord(c) for c in 'fooλ']) #x = [102 111 111 955]
</code></pre>
<p>But this solution involves a list comprehension. Doing so is slow since it's not <a href="https://www.pythonlikeyoumeanit.com/Module3_IntroducingNumpy/VectorizedOperations.html" rel="nofollow noreferrer">vectorized</a>: the Python interpreter has to call ord() on each character instead of calling a function once on the whole string. Is there a faster way?</p>
<p><strong>Edit:</strong> <a href="https://stackoverflow.com/questions/54424433/converting-numpy-arrays-of-code-points-to-and-from-strings">this question</a> is very similar, although my answer is much more concise.</p>
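One vectorized alternative (my suggestion) is to encode to a fixed-width encoding, so that every code point occupies exactly one array element and `frombuffer` can split the bytes without a Python-level loop:

```python
import numpy as np

s = 'fooλ'
# UTF-32-LE is fixed-width: 4 bytes per code point, no BOM,
# so each uint32 element is exactly one character's ordinal.
x = np.frombuffer(s.encode('utf-32-le'), dtype=np.uint32)
```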
|
<python><numpy><performance><encoding><utf-8>
|
2023-02-23 03:59:46
| 1
| 3,589
|
Kevin Kostlan
|
75,540,423
| 14,109,040
|
Replacing the keys in a dictionary based on the values of another
|
<p>I have two dictionaries (Dict1 and Dict2):</p>
<pre><code>Dict1 = {'M0': 399, 'M1': 71, 'M2': '0.979827269', 'M3': '0.84576281', 'M4': '0.011849132', 'M5': '0.066588785'}
Dict2 = {'M0': 'KPI1', 'M1': 'KPI2', 'M2': 'KPI3', 'M3': 'KPI4', 'M4': 'KPI5', 'M5': 'KPI6'}
</code></pre>
<p>I want to update the keys of Dict1 based on the key-value pairs in Dict2.
So I want my output dictionary to look like this:</p>
<pre><code>Dict1 = {'KPI1': 399, 'KPI2': 71, 'KPI3': '0.979827269', 'KPI4': '0.84576281', 'KPI5': '0.011849132', 'M5': '0.066588785'}
</code></pre>
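A one-line sketch of the remapping (my code; `Dict2.get(k, k)` leaves any key without a mapping untouched):

```python
Dict1 = {'M0': 399, 'M1': 71, 'M5': '0.066588785'}
Dict2 = {'M0': 'KPI1', 'M1': 'KPI2', 'M5': 'KPI6'}

# Rename each key of Dict1 through Dict2, keeping the original values.
renamed = {Dict2.get(k, k): v for k, v in Dict1.items()}
```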
|
<python><dictionary>
|
2023-02-23 03:54:15
| 1
| 712
|
z star
|
75,540,286
| 19,316,811
|
Indexing all items excluding an index from the back
|
<p>I want to slice a numpy array such that an index, -7 for example, is excluded. What is the best way to do this?</p>
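A hedged sketch of one common approach: `np.delete` accepts negative indices and returns a copy with that position removed:

```python
import numpy as np

a = np.arange(10)
b = np.delete(a, -7)   # drops the element at index -7 (here the value 3)
```

`np.delete` copies the array; if a view is needed instead, concatenating `a[:-7]` with `a[-6:]` is the slicing-based alternative.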
|
<python><numpy>
|
2023-02-23 03:25:07
| 1
| 457
|
PeriodicParticle
|
75,540,267
| 18,148,705
|
Generate epoch time
|
<p>I want to convert my timestamp string into epoch time, but I am not good with datetime and related modules, so I am getting a little confused. I saw many solutions, but none of them had a date-time format like mine.</p>
<p>My date_time string is in the format below:
<code>date_time = "2023-01-1T17:35:19.818"</code></p>
<p>How can I convert this into epoch time using python?</p>
<p>Any help will be appreciated. Thank you.</p>
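A minimal sketch with `strptime` (which tolerates the non-zero-padded day). Treating the naive timestamp as UTC is an assumption; if it is local time, `dt.timestamp()` alone would do:

```python
from datetime import datetime, timezone

date_time = "2023-01-1T17:35:19.818"
dt = datetime.strptime(date_time, "%Y-%m-%dT%H:%M:%S.%f")

# Assumption: the string represents UTC; attach the zone before converting.
epoch = dt.replace(tzinfo=timezone.utc).timestamp()   # 1672594519.818
```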
|
<python><datetime><unix-timestamp>
|
2023-02-23 03:19:44
| 1
| 335
|
user18148705
|
75,540,264
| 1,424,739
|
Finding high points in a zigzag plot with only one segment on the right
|
<p>For a zigzag line graph like the following (adjacent points are guaranteed to alternate between lower and higher, depending on whether the center points are high or low), I want to find all the high points that have only one segment of the zigzag line crossing to their right.</p>
<p>In this example, those points are indicated by the left ends of the blue horizontal segments. Each blue segment crosses the zigzag line graph only once. How can those high points be computed efficiently?</p>
<p>The worst case is to check all high points against all segments in the line graph, but that is not efficient. What is the most efficient algorithm for this computation?</p>
<pre><code># R code to plot the figure for the example data
f=read.table(pipe('curl -s https://i.sstatic.net/tMB1y.gif | tail -c +43 | zcat'), header=T, sep='\t')
# example data can be retrieved by the curl command above.
with(f, plot(x, y, type='l'))
invisible(apply(subset(f, z=='z', select=-z), 1, function(v) { segments(v[['x']], v[['y']], 1e6, v[['y']], col='blue') }))
</code></pre>
<p>Some programming languages are tagged. But programs in any language are fine.</p>
<p><a href="https://i.sstatic.net/wTQHF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wTQHF.png" alt="enter image description here" /></a></p>
|
<python><r><algorithm><data-structures><geometry>
|
2023-02-23 03:19:33
| 2
| 14,083
|
user1424739
|
75,540,110
| 1,610,626
|
Pandas asof_locs Example
|
<p>I've searched for an example of how to use <code>pandas.Index.asof_locs</code> but couldn't find one. I don't quite understand what to pass in as the second <code>mask</code> argument.</p>
<p>I have a list of datetime values <code>dt</code> and a target dataframe with dates and price data <code>df</code>. For every datetime value in <code>dt</code>, I would like to find the closest datetime index from <code>df</code>. I know of <code>df.index.asof</code>, but it takes a single input at a time, and I'm trying to avoid loops, so I wanted to check out <code>asof_locs</code>, which I think can take a list.</p>
<p>Thanks</p>
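As far as I can tell from the pandas source, <code>mask</code> is a boolean array flagging which positions of the index are usable (typically the non-NaN positions of the value column). A sketch under that assumption, with made-up dates:

```python
import numpy as np
import pandas as pd

idx = pd.DatetimeIndex(["2023-01-01", "2023-01-05", "2023-01-10"])
where = pd.DatetimeIndex(["2023-01-06", "2023-01-11"])

# mask: True where the index position may be matched; all positions here.
locs = idx.asof_locs(where, np.ones(len(idx), dtype=bool))
```

`locs` holds integer positions of the last index label at or before each `where` value, so `idx[locs]` recovers the matched labels in one vectorized call.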
|
<python><pandas>
|
2023-02-23 02:42:15
| 1
| 23,747
|
user1234440
|
75,540,092
| 9,338,509
|
Missing 1 required positional argument in python tests
|
<pre><code>@patch("aioboto3.client")
@patch("my_handler.send_batch", new_callable=truthy_async_mock)
@patch("my_handler.ConfigLoader.get_config")
@patch.dict(os.environ, {"STAGE": "Prod"})
@pytest.mark.parametrize("expected_value", ["123", "456"])
def testThis(
self,
expected_value,
sqs_mock,
appconfig_mock,
_mock_boto_client,
):
</code></pre>
<p>I am getting <code>missing 1 required positional argument _mock_boto_client</code>. I tried changing the order</p>
<pre><code>@patch("aioboto3.client")
@patch("my_handler.send_batch", new_callable=truthy_async_mock)
@patch("my_handler.ConfigLoader.get_config")
@patch.dict(os.environ, {"STAGE": "Prod"})
@pytest.mark.parametrize("expected_value", ["123", "456"])
def testThis(
self,
sqs_mock,
appconfig_mock,
_mock_boto_client,
expected_value,
):
</code></pre>
<p>Then I get <code>missing 1 required positional argument: 'expected_value'</code>. If I remove the <code>expected_value</code> argument, everything works fine. I am not sure what I am missing here. Can anyone please help me?</p>
|
<python><testing><parameters><mocking>
|
2023-02-23 02:39:05
| 0
| 553
|
lakshmiravali rimmalapudi
|
75,539,894
| 10,500,424
|
Randomly sample numpy columns using Cython
|
<p>I would like to sample columns from a large NumPy matrix a million times. I am achieving satisfactory speed using Python; however, I aim to repeat this process tens of thousands of times. At this scale, speed becomes an issue, and I am looking for ways to improve performance. I am leaning towards Cython, but I am a novice in this area. I read the NumPy documentation and combed through online tutorials, but the suggestions do not seem straightforward.</p>
<p>I am wondering if there is a way, in Cython, to randomly sample N columns from a numpy matrix. The Python implementation of the logic I aim to optimize is written below:</p>
<pre class="lang-py prettyprint-override"><code>import random
import numpy as np

random.seed(888)
num_cols = 6000
range_num_cols = range(num_cols)
random_matrix = np.random.random((20, num_cols))

for _ in range(1_000_000):
    random_matrix[:, random.sample(range_num_cols, 5)]
</code></pre>
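Before reaching for Cython, the per-draw Python call can often be removed with NumPy alone (my suggested alternative, not code from the post): `argpartition` over a matrix of random keys yields a without-replacement sample of column indices for every draw at once, and one fancy-indexing operation gathers all the sampled columns:

```python
import numpy as np

rng = np.random.default_rng(888)
num_cols = 6000
random_matrix = rng.random((20, num_cols))

n_draws = 1000  # the real workload would use 1_000_000
# One random key per (draw, column); the 5 smallest keys in each row
# are a uniform without-replacement sample of 5 column indices.
keys = rng.random((n_draws, num_cols))
cols = np.argpartition(keys, 5, axis=1)[:, :5]    # shape (n_draws, 5)
sampled = random_matrix[:, cols]                  # shape (20, n_draws, 5)
```

Memory for `keys` scales as `n_draws * num_cols`, so for a million draws the loop would be chunked into batches; each batch is still fully vectorized.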
|
<python><numpy><random><cython><cpython>
|
2023-02-23 01:54:46
| 0
| 1,856
|
irahorecka
|
75,539,859
| 1,857,373
|
Linear Regression ValueError X shape Y shape with .values() numpy.array. OK, ValueError Expected 2D array, got 1D array
|
<p><strong>PROBLEM</strong></p>
<p>Preparing data for LinearRegression with a pre-encoded dataset, i.e., nulls, NaNs, and missing values already handled by imputation. The problem is a ValueError on the numpy array even after trying .reshape(). This attempt is trying to fix code that ran 5 years ago.</p>
<p>I assign the Y response (the SalePrice column) to a numpy array with .values, and X to the features (all other columns).</p>
<p>The shapes look good:</p>
<pre><code>X shape: (1460, 250)
Y shape: (1460,)
</code></pre>
<p>Even after I reshape, the ValueError is still raised:</p>
<pre><code>X shape: (365000, 1)
X shape: (1460, 1)
</code></pre>
<p><strong>ERROR</strong></p>
<pre><code>raise ValueError(
ValueError: Expected 2D array, got 1D array instead:
array=[0.24107763 0.20358284 0.26190807 ... 0.321622 0.14890293 0.15636717].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
</code></pre>
<p><strong>CODE</strong></p>
<pre><code>train_data = pd.read_csv("../data/train_data_encoded.csv")
test_data = pd.read_csv("../data/test_data_encoded.csv")
train_data.loc[train_data['LotFrontage'].isnull(), 'LotFrontage'] = 0.0
train_data.drop(['Id'], axis=1)
Y = train_data['SalePrice'].values
X = train_data.values
print('X shape:', X.shape)
print('Y shape:', Y.shape)
X_reshaped = X.reshape(-1, 1)
print('X shape:', X_reshaped.shape)
Y_reshaped = Y.reshape(-1, 1)
print('X shape:', Y_reshaped.shape)
lm = linear_model.LinearRegression()
lm.fit(X, Y)
res = lm.predict(Y)
</code></pre>
<p><strong>Environment Console</strong>
<a href="https://i.sstatic.net/DZS6U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DZS6U.png" alt="Python Environment Console Variables" /></a></p>
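For reference, a minimal sketch (random stand-in data and my own shapes, not the Kaggle dataset) of the shapes `LinearRegression` expects: a 2-D feature matrix with the target column excluded, and `predict` called on feature rows rather than on the target:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.random((1460, 250))   # features only; the target column is excluded
y = rng.random(1460)          # a 1-D target is fine for fit()

lm = LinearRegression().fit(X, y)
pred = lm.predict(X[:5])      # predict on feature rows, never on y
```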
|
<python><arrays><pandas><numpy><linear-regression>
|
2023-02-23 01:47:43
| 1
| 449
|
Data Science Analytics Manager
|
75,539,817
| 8,481,155
|
Apache Beam Map, DoFn and Composite Transform
|
<p>I want to understand the difference is use cases between a Map function, a DoFn called from Pardo and a Composite transform.</p>
<p>I could achieve the same results with the below code for a list of transformations that I need to do for my pipeline. I made a sample of what I mean by multiple stages.</p>
<pre><code>import apache_beam as beam

def myTransform(line):
    line = line * 10
    line = line + 5
    line = line - 2
    return line

class myPTransform(beam.PTransform):
    def expand(self, pcoll):
        # return pcoll | beam.Map(myTransform)
        pcol_output = (pcoll
                       | beam.Map(lambda line: line * 10)
                       | beam.Map(lambda line: line + 5)
                       | beam.Map(lambda line: line - 2)
                       )
        return pcol_output

class mydofunc(beam.DoFn):
    def process(self, element):
        element = element * 10
        element = element + 5
        element = element - 2
        yield element

with beam.Pipeline() as p:
    lines = p | beam.Create([1, 2, 3, 4, 5])

    ### Map function
    manual = (lines
              | "Map function" >> beam.Map(myTransform)
              | "Print map" >> beam.Map(print))

    ### Composite PTransform
    ptrans = (lines
              | "ptransform call" >> myPTransform()
              | "Print ptransform" >> beam.Map(print))

    ### DoFn
    dofnpcol = (lines
                | "Dofn call" >> beam.ParDo(mydofunc())
                | "Print dofnpcol" >> beam.Map(print))
</code></pre>
<p>In what scenarios should I use a DoFn versus a composite transform?
I might be missing the bigger picture regarding the difference between these 3 options.
Any insights would be really helpful.</p>
<p>I saw a question on <a href="https://stackoverflow.com/questions/47706600/apache-beam-dofn-vs-ptransform">Apache Beam: DoFn vs PTransform</a></p>
|
<python><python-3.x><google-cloud-dataflow><apache-beam>
|
2023-02-23 01:40:01
| 1
| 701
|
Ashok KS
|
75,539,785
| 2,571,607
|
Why sklearn's KFold can only be enumerated once (also on using it in xgboost.cv)?
|
<p>Trying to create a <code>KFold</code> object for my <code>xgboost.cv</code>, and I have</p>
<pre><code>import pandas as pd
from sklearn.model_selection import KFold
df = pd.DataFrame([[1,2,3,4,5],[6,7,8,9,10]])
KF = KFold(n_splits=2)
kf = KF.split(df)
</code></pre>
<p>But it seems I can only enumerate once:</p>
<pre><code>for i, (train_index, test_index) in enumerate(kf):
    print(f"Fold {i}")

for i, (train_index, test_index) in enumerate(kf):
    print(f"Again_Fold {i}")
</code></pre>
</code></pre>
<p>gives output of</p>
<pre><code>Fold 0
Fold 1
</code></pre>
<p>The second enumerate seems to be on an empty object.</p>
<p>I am probably fundamentally misunderstanding something, or completely messed up somewhere, but could someone explain this behavior?</p>
<p><strong>[Edit, adding follow-up question]</strong> This behavior seems to cause passing the KFold generator to <code>xgboost.cv</code> via <code>xgboost.cv(..., folds = KF.split(df))</code> to raise an index-out-of-range error. My fix is to recreate the list of tuples with:</p>
<pre><code>kf = []
for i, (train_index, test_index) in enumerate(KF.split(df)):
    this_split = (list(train_index), list(test_index))
    kf.append(this_split)

xgboost.cv(..., folds = kf)
</code></pre>
</code></pre>
<p>looking for smarter solutions.</p>
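A slightly smaller fix (my suggestion): `split()` returns a one-shot generator, so materializing it once with `list()` gives a reusable object that can be enumerated repeatedly and passed as `folds=` directly:

```python
import pandas as pd
from sklearn.model_selection import KFold

df = pd.DataFrame([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])
kf = list(KFold(n_splits=2).split(df))   # materialize the generator once

# The list can be iterated any number of times with identical results.
first = [(tr.tolist(), te.tolist()) for tr, te in kf]
second = [(tr.tolist(), te.tolist()) for tr, te in kf]
```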
|
<python><scikit-learn><xgboost><cross-validation><k-fold>
|
2023-02-23 01:32:45
| 1
| 593
|
Yue Y
|
75,539,748
| 3,099,733
|
reference undefined fields in dataclass will still pass static check (pylint, pylance)
|
<p>Given the following sample code:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass

@dataclass
class Demo:
    a: int = 1

d = Demo()
d.b  # b: Any
</code></pre>
<p>I don't know why the static checkers don't report the missing attribute, and instead treat it as Any. Is there any way to make them report an error?</p>
|
<python><pylint><python-typing><python-dataclasses><pylance>
|
2023-02-23 01:25:09
| 0
| 1,959
|
link89
|