QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName
int64 (74.8M–79.8M) | int64 (56–29.4M) | string (15–150 chars) | string (40–40.3k chars) | string (8–101 chars) | date string (2022-12-10 09:42:47 – 2025-11-01 19:08:18) | int64 (0–44) | int64 (301–888k) | string (3–30 chars)
|---|---|---|---|---|---|---|---|---|
76,453,071
| 9,877,412
|
Converting multiple images into tensor at once
|
<p>I have ~10 images. The images are different sizes. I am reading them in OpenCV and converting all of them to (54, 54, 3). I know that I can convert each one of them to a tensor. Is there any way of converting them to tensors all together? The final tensor would have a shape of (10, 54, 54, 3).</p>
<p>I am using the following code for single-image conversion.</p>
<pre><code>import cv2, glob
import tensorflow as tf
lst = glob.glob("images/*jpg")
im1 = cv2.imread(lst[1])
im1_tensor = tf.convert_to_tensor(im1, dtype=tf.int64)
</code></pre>
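One way to batch them, sketched here with NumPy stand-ins for the loaded images (the TensorFlow call is commented out so the snippet stands alone; `tf.stack` on a list of tensors is the equivalent op on the TensorFlow side):

```python
import numpy as np

# Stand-ins for the ~10 images after cv2.resize(cv2.imread(path), (54, 54));
# any list of equally-shaped HxWxC arrays works the same way.
images = [np.zeros((54, 54, 3), dtype=np.uint8) for _ in range(10)]

# One stack along a new leading axis gives the batched array of shape (10, 54, 54, 3)
batch = np.stack(images, axis=0)

# ...and a single conversion then yields the batched tensor:
# batch_tensor = tf.convert_to_tensor(batch)   # or tf.stack(list_of_tensors)
```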
|
<python><tensorflow>
|
2023-06-12 00:45:09
| 1
| 866
|
Sourav
|
76,453,068
| 9,547,278
|
Extracting text that is beneath a heading, to be stored in a dictionary
|
<p>I need help with extracting text that is beneath a heading, to be stored in a dictionary. Let me elaborate, I have a text file with information organized the following way:</p>
<p>HEADING 1 (the heading is always written in all caps; there may be an instruction-like sentence enclosed in brackets beside the heading, and the contents within the brackets should be excluded):</p>
<p><em>Subheading 1:</em></p>
<ol>
<li>Item 1</li>
<li>Item 2</li>
</ol>
<p><em>Subheading 2:</em></p>
<ol start="3">
<li>Item 3</li>
<li>Item 4</li>
</ol>
<p><em>Subheading n:</em></p>
<ol start="3">
<li>Item <em>n</em></li>
</ol>
<br/>
HEADING 2 (this is the second heading that will be extracted and stored in the dictionary):
<p><em>Subheading 1:</em></p>
<ol>
<li>Item 1
...</li>
</ol>
<p>I wrote the following function:</p>
<pre><code>def extract_info(corpus):
    # Initialize variables
    output = {}
    # Extract info
    for element in corpus.split("\n\n"):
        item = element.split("\n")
        key = item[0].split("(")[0].replace(":", "").strip()
        output.update({key: item[1:]})
    # Return output
    return output
</code></pre>
<p>Unfortunately, this only gets the first subheading (Subheading 1) and ignores the rest. What can I do?</p>
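One possible approach, sketched under the assumption that headings are the only fully upper-case lines: track the most recent heading while scanning line by line, instead of splitting on blank lines (which also separate the subheadings from their heading).

```python
def extract_info(corpus):
    # Group every line under the most recent ALL-CAPS heading instead of
    # relying on blank-line splits.
    output, key = {}, None
    for line in corpus.splitlines():
        stripped = line.strip()
        if not stripped:
            continue
        # A heading is fully upper-case, ignoring any "(...)" instruction.
        head = stripped.split("(")[0].replace(":", "").strip()
        if head and head == head.upper() and any(c.isalpha() for c in head):
            key = head
            output[key] = []
        elif key is not None:
            output[key].append(stripped)
    return output
```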
|
<python><string>
|
2023-06-12 00:44:15
| 2
| 474
|
Legion
|
76,452,998
| 2,687,601
|
In PyQt5, how to extend a QMediaPlaylist whilst it's playing
|
<h3>Context</h3>
<p>I'm building a text-to-speech (TTS) GUI with an offline real-time speech synthesizer (like espeak). Because the speech synthesizer is quite slow, I'd like to chunk up the text, sequentially get a wav file for each chunk, and play them in sequence as soon as the first one is completed. The assumption is that the TTS itself won't get any faster.</p>
<h3>Question</h3>
<p><strong>Does QMediaPlaylist allow extending the playlist, i.e. adding new media at the end, while the playlist is being played and without stopping it?</strong></p>
<p>What I'm hoping to achieve is to have a thread that generates wav files and appends each file to a playlist; the playlist starts when the list is not empty and stops when all files have been played.</p>
<p>I could do this with a thread that monitors the status of the QMediaPlayer and, as soon as it's finished, plays the next file in a list (populated by other threads). But that seems like reimplementing the playlist.</p>
<h3>Code</h3>
<p>The code I'm working on and would like to modify is <a href="https://github.com/laszukdawid/cracker/blob/master/cracker/speaker/frogger.py#L41-L47" rel="nofollow noreferrer">here</a>; a short cleaned-up snippet:</p>
<pre class="lang-py prettyprint-override"><code>import requests
from PyQt5.QtCore import QUrl
from PyQt5.QtMultimedia import QMediaContent, QMediaPlayer, QMediaPlaylist

URL = "http://localhost:5002"  # placeholder for the local synthesizer endpoint


def read_text(text: str, player: QMediaPlayer) -> None:
    filepaths = []
    for parted_text in text.split('. '):
        filename = ask(parted_text)
        filepaths.append(filename)
    play_files(filepaths, player)


def play_files(filepaths, player):
    playlist = QMediaPlaylist(player)
    for filepath in filepaths:
        url = QUrl.fromLocalFile(filepath)
        playlist.addMedia(QMediaContent(url))
    player.setPlaylist(playlist)
    player.play()


def ask(text: str):
    """Connect to the local server and extract the wav filename from the stream."""
    response = requests.get(URL, params={"text": text})
    if response.status_code != 200:
        raise Exception(f"Error: Unexpected response {response}")
    return response.json()["filename"]
</code></pre>
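As far as I can tell from the Qt documentation, `QMediaPlaylist.addMedia()` may be called while the playlist is playing (appending from another thread should go through a signal, since Qt objects aren't thread-safe). Qt specifics aside, the fallback described in the question is a plain producer/consumer pattern, which can be sketched without Qt; names and filenames here are made up for illustration:

```python
import queue
import threading

def producer(text, files):
    # Stand-in for the TTS thread: turn each sentence into a wav path.
    for i, sentence in enumerate(text.split('. ')):
        files.put(f"chunk_{i}.wav")   # hypothetical filenames
    files.put(None)                   # sentinel: no more chunks

def consumer(files, played):
    # Stand-in for the player: handle files as soon as they appear.
    while (path := files.get()) is not None:
        played.append(path)

files, played = queue.Queue(), []
t = threading.Thread(target=producer, args=("One. Two. Three", files))
t.start()
consumer(files, played)
t.join()
```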
|
<python><pyqt5><qt5><qmediaplayer>
|
2023-06-12 00:15:18
| 0
| 2,017
|
Dawid Laszuk
|
76,452,923
| 9,582,542
|
Merge multiple dataframe rows into 1 column
|
<p>When I collect time data, the day and time are on different rows. I'd like to combine the day and time to form a datetime column with both date and time.</p>
<p>This is a sample of the data I get</p>
<pre><code>import pandas as pd

# DataFrame.append was removed in pandas 2.0; build the sample frame directly
sampletest = pd.DataFrame({
    'daytime': ['Sunday, January 1', '01:00', '13:00', '17:30', '19:00',
                'Monday, January 2', '08:00', '09:00', '10:30', '11:30'],
    'datetime': [None] * 10,
})
</code></pre>
<p>This is a sample of what the end result should look like.</p>
<pre><code>import numpy as np
import pandas as pd

ConvertTime = pd.DataFrame({
    'daytime': ['Sunday, January 1', '01:00', '13:00', '17:30', '19:00',
                'Monday, January 2', '08:00', '09:00', '10:30', '11:30'],
    'datetime': [np.nan, '2023-01-01 01:00:00', '2023-01-01 13:00:00',
                 '2023-01-01 17:30:00', '2023-01-01 19:00:00', np.nan,
                 '2023-01-02 08:00:00', '2023-01-02 09:00:00',
                 '2023-01-02 10:30:00', '2023-01-02 11:30:00'],
})
</code></pre>
<p>How should I script this merge? Also, the data does not have a year, so we will assume all days are in 2023.</p>
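One possible sketch, assuming every day-header row precedes its time rows: parse the headers with an explicit format (time rows fail and become `NaT`), forward-fill the day onto the time rows, and re-parse the combined strings.

```python
import pandas as pd

def combine_day_time(df, year=2023):
    # Parse the day-header rows ("Sunday, January 1"); time rows fail the
    # format and coerce to NaT, then each day forward-fills onto its times.
    days = pd.to_datetime(df['daytime'] + f' {year}',
                          format='%A, %B %d %Y', errors='coerce').ffill()
    out = df.copy()
    # Re-parse "<date> <time>"; the day-header rows themselves coerce to NaT.
    out['datetime'] = pd.to_datetime(
        days.dt.strftime('%Y-%m-%d') + ' ' + out['daytime'],
        format='%Y-%m-%d %H:%M', errors='coerce')
    return out

demo = pd.DataFrame({'daytime': ['Sunday, January 1', '01:00', '13:00',
                                 'Monday, January 2', '08:00']})
result = combine_day_time(demo)
```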
|
<python>
|
2023-06-11 23:39:44
| 2
| 690
|
Leo Torres
|
76,452,821
| 11,922,765
|
Statsmodels ARIMA: how to get 0, 10, 20, ..., 90, 100 percentile forecasts?
|
<p>My goal is to compute various percentile forecasts for the same day.</p>
<p>My code:</p>
<pre><code># import the data
catfish_sales = pd.read_csv('/kaggle/input/time-series-toy-data-set/catfish.csv',
                            parse_dates=[0], index_col=0, date_parser=parse_date)
train_data, test_data = np.split(catfish_sales,
                                 [int(len(catfish_sales) * 0.95)])

forecast_data = pd.DataFrame()
for conf in range(0, 101, 10):
    # Perform ARIMA fit and forecast (`arima` is the model object built earlier)
    aux_df = arima.fit().get_forecast(steps=len(test_data)).conf_int(alpha=(1 - (conf / 100)))
    aux_df['Total_forecast'] = aux_df[['lower Total', 'upper Total']].mean(axis=1)
    conf_lbl = 'p' + str(conf) + '_'
    aux_df = aux_df.add_prefix(conf_lbl)
    print(aux_df.head())
    forecast_data = pd.concat([forecast_data, aux_df], axis=0)
</code></pre>
<p>Present output:</p>
<pre><code> p0_lower Total p0_upper Total p0_Total_forecast
2011-08-01 15273.493541 15273.493541 15273.493541
2011-09-01 14611.272886 14611.272886 14611.272886
2011-10-01 13929.948620 13929.948620 13929.948620
2011-11-01 11729.123604 11729.123604 11729.123604
2011-12-01 12358.778644 12358.778644 12358.778644
p10_lower Total p10_upper Total p10_Total_forecast
2011-08-01 15130.404131 15416.582950 15273.493541
2011-09-01 14442.952828 14779.592943 14611.272886
2011-10-01 13747.205922 14112.691318 13929.948620
2011-11-01 11535.026991 11923.220217 11729.123604
2011-12-01 12154.903296 12562.653992 12358.778644
.....................
p90_lower Total p90_upper Total p90_Total_forecast
2011-08-01 13400.513982 17146.473099 15273.493541
2011-09-01 12408.034870 16814.510902 14611.272886
2011-10-01 11537.924366 16321.972875 13929.948620
2011-11-01 9188.481426 14269.765781 11729.123604
2011-12-01 9690.136982 15027.420306 12358.778644
p100_lower Total p100_upper Total p100_Total_forecast
2011-08-01 -inf inf NaN
2011-09-01 -inf inf NaN
2011-10-01 -inf inf NaN
2011-11-01 -inf inf NaN
2011-12-01 -inf inf NaN
</code></pre>
<p>My question:</p>
<ol>
<li>Are confidence level and percentile the same thing? In the above I assumed they are: I treated the 90% confidence interval as the 90th-percentile forecast, etc.</li>
<li>I just need one value for each confidence level or percentile. In the above I computed the mean of the upper and lower boundaries, but I noticed an interesting thing: the mean values are the same at every percentile/confidence level. This suggests my assumption equating confidence intervals and percentiles is wrong. I appreciate your help.</li>
<li>I need to compute various percentile forecasts for the same day, as I am trying above. If ARIMA is not the right model for this, I would appreciate a suggestion for one that is.</li>
</ol>
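On questions 1 and 2: a confidence interval is symmetric about the point forecast, which is why averaging its bounds always returns the same value; it is not a percentile. If the forecast distribution is assumed Gaussian (as statsmodels' intervals are), percentiles can be recovered from any one interval. A stdlib-only sketch (the function name and signature are my own):

```python
from statistics import NormalDist

def percentile_forecasts(mean, lower, upper, conf=0.90,
                         percentiles=range(0, 101, 10)):
    # A (1 - alpha) confidence interval from a Gaussian forecast is
    # mean +/- z * sigma, so sigma can be recovered from its half-width.
    z = NormalDist().inv_cdf(0.5 + conf / 2)      # e.g. ~1.645 for 90%
    sigma = (upper - lower) / (2 * z)
    # p-th percentile of the forecast distribution (0 and 100 are +/- inf,
    # which matches the inf rows seen at alpha = 0 above).
    return {p: mean + NormalDist().inv_cdf(p / 100) * sigma
            for p in percentiles if 0 < p < 100}

forecast = percentile_forecasts(mean=100.0, lower=83.55, upper=116.45, conf=0.90)
```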
|
<python><statsmodels><forecasting><arima><percentile>
|
2023-06-11 22:55:50
| 1
| 4,702
|
Mainland
|
76,452,764
| 21,420,742
|
How to subtract using datetime in py
|
<p>I have a dataset and would like to create a new column that looks 90 days back from the <code>start_Date</code> column. I have tried two ways and both raise a <code>TypeError</code>.</p>
<p><code>df['prior_date'] = df['start_Date'] - 90</code></p>
<p>And I tried</p>
<p><code>df['prior'] = (df['start_Date'] - dt(days=90)).strftime('%Y-%m-%D')</code></p>
<p>Any suggestions on how to have the created column <code>prior</code> work? Thank you</p>
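A sketch of the usual fix: subtract a `pd.Timedelta` (a duration), not a bare integer, and format afterwards with `.dt.strftime`. The sample values here are made up for illustration.

```python
import pandas as pd

# Hypothetical sample data; the real df would come from the dataset
df = pd.DataFrame({'start_Date': ['2023-06-01', '2023-03-15']})
df['start_Date'] = pd.to_datetime(df['start_Date'])

# Subtract a Timedelta, not a bare integer
df['prior_date'] = df['start_Date'] - pd.Timedelta(days=90)

# If a string column is wanted, format afterwards
df['prior'] = df['prior_date'].dt.strftime('%Y-%m-%d')
```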
|
<python><python-3.x><datetime><timedelta>
|
2023-06-11 22:29:39
| 1
| 473
|
Coding_Nubie
|
76,452,575
| 16,466,037
|
How do I prevent the "Rich" Python library from printing "new_val" on startup?
|
<p>The first time an attribute of Rich is accessed, <code>new_val</code> gets printed to my console. This happens with quite a few attributes I have tried. For example, simply executing
<code>from rich.console import Console</code>
results in <code>new_val</code> being printed.</p>
<p>The most perplexing part of this is that when I dug into the source code of Rich, I could not find "new_val" anywhere in <code>__init__.py</code> or <code>console.py</code>.</p>
<p>Has anyone encountered this before and how can I stop this?</p>
<p>System info:</p>
<pre><code>Ubuntu 20.04.6 LTS
Python 3.11.4
Rich 13.4.1
</code></pre>
<h1>Minimal, reproducible examples</h1>
<p>Case 1:</p>
<pre><code>from rich.console import Console
</code></pre>
<p>Case 2:</p>
<pre><code>from rich import print
print('Hi')
</code></pre>
<h2>Expected output</h2>
<p>Case 1:
Nothing</p>
<p>Case 2:
<code>Hi</code></p>
<h2>Observed output</h2>
<p>Case 1:
<code>new_val</code></p>
<p>Case 2:</p>
<pre><code>new_val
Hi
</code></pre>
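Since the string isn't in Rich's own source, it is plausibly a leftover debug print in some other installed (or locally patched) package that Rich imports, or in a `sitecustomize`/`.pth` hook that runs at startup. A sketch for locating it by scanning installed sources:

```python
import pathlib
import sysconfig

def scan(root):
    # Find installed source files containing the stray string.
    hits = []
    for p in root.rglob('*.py'):
        try:
            if 'new_val' in p.read_text(encoding='utf-8', errors='ignore'):
                hits.append(p)
        except OSError:
            pass  # unreadable file; skip
    return hits

site = pathlib.Path(sysconfig.get_paths()['purelib'])
hits = scan(site)
print(hits)
```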
|
<python><python-3.x><rich>
|
2023-06-11 21:28:18
| 1
| 372
|
Squarish
|
76,452,551
| 3,821,009
|
Reference polars.DataFrame.height in with_columns
|
<p>Take this example:</p>
<pre><code>df = (polars
.DataFrame(dict(
j=polars.datetime_range(datetime.date(2023, 1, 1), datetime.date(2023, 1, 3), '8h', closed='left', eager=True),
))
.with_columns(
k=polars.lit(numpy.random.randint(10, 99, 6)),
)
)
j k
2023-01-01 00:00:00 47
2023-01-01 08:00:00 22
2023-01-01 16:00:00 82
2023-01-02 00:00:00 19
2023-01-02 08:00:00 85
2023-01-02 16:00:00 15
shape: (6, 2)
</code></pre>
<p>Here, <code>numpy.random.randint(10, 99, 6)</code> uses hard-coded <code>6</code> as the height of DataFrame, so it won't work if I changed e.g. the interval from <code>8h</code> to <code>4h</code> (which would require changing <code>6</code> to <code>12</code>).</p>
<p>I know I can do it by breaking the chain:</p>
<pre><code>df = polars.DataFrame(dict(
j=polars.datetime_range(datetime.date(2023, 1, 1), datetime.date(2023, 1, 3), '4h', closed='left', eager=True),
))
df = df.with_columns(
k=polars.lit(numpy.random.randint(10, 99, df.height)),
)
j k
2023-01-01 00:00:00 47
2023-01-01 04:00:00 22
2023-01-01 08:00:00 82
2023-01-01 12:00:00 19
2023-01-01 16:00:00 85
2023-01-01 20:00:00 15
2023-01-02 00:00:00 89
2023-01-02 04:00:00 74
2023-01-02 08:00:00 26
2023-01-02 12:00:00 11
2023-01-02 16:00:00 86
2023-01-02 20:00:00 81
shape: (12, 2)
</code></pre>
<p>Is there a way to do it (i.e. reference <code>df.height</code> or an equivalent) in one chained expression though?</p>
|
<python><python-polars>
|
2023-06-11 21:18:49
| 1
| 4,641
|
levant pied
|
76,452,501
| 7,128,827
|
Mapping recursively from a dataframe to python dictionary
|
<p>I am struggling to find a recursive mapping to get the end result. Here is the input df</p>
<pre><code>## mapping recursive
import pandas as pd
data = {
"group1": ["A", "A", "B", "B"],
"group2": ["grp1", "grp2", "grp1", "grp2"],
"hc": [50, 40, 45, 90],
"response": [12, 30, 43, 80]
}
#load data into a DataFrame object:
df = pd.DataFrame(data)
df
</code></pre>
<p>I would like to map recursively to convert the <code>df</code> into a Python dictionary. Each aggregate goes into a <code>details</code> list, nested level by level through the aggregated data frame. For example, level A's <code>total_hc</code> is the sum of <code>hc</code> over the rows where <code>group1</code> is <code>A</code>.</p>
<pre><code>## desired output
output = {
"rows":[
{
"details": [{
"level": "A",
"total_hc": 90,
"response_total": 42
}],
"rows":[
{
"details": [{
"level": "grp1",
"total_hc": 50,
"response_total": 12
}]
},
{
"details": [{
"level": "grp2",
"total_hc": 40,
"response_total": 30
}],
}
]
},
{
"details": [{
"level": "B",
"total_hc": 135,
"response_total": 123
}],
"rows":[
{
"details": [{
"level": "grp1",
"total_hc": 45,
"response_total": 43
}]
},
{
"details": [{
"level": "grp2",
"total_hc": 90,
"response_total": 80
}],
}
]
}
]
}
</code></pre>
<p>I tried to group the df</p>
<pre><code>## group by function
group_df = df.groupby(["group1", "group2"]).sum()
group_df.to_dict("index")
</code></pre>
<p>Then I am struggling to find a recursive mapping to get the end result. Appreciate anyone who can help out.</p>
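A sketch of one recursive approach (the function name and signature are my own; the level/metric names follow the desired output above):

```python
import pandas as pd

def df_to_nested(df, levels, metrics):
    # Group on the first level; recurse on the remaining levels.
    rows = []
    for key, sub in df.groupby(levels[0], sort=False):
        node = {"details": [{"level": key,
                             **{m: int(sub[c].sum()) for c, m in metrics.items()}}]}
        if len(levels) > 1:
            node["rows"] = df_to_nested(sub, levels[1:], metrics)["rows"]
        rows.append(node)
    return {"rows": rows}

df = pd.DataFrame({
    "group1": ["A", "A", "B", "B"],
    "group2": ["grp1", "grp2", "grp1", "grp2"],
    "hc": [50, 40, 45, 90],
    "response": [12, 30, 43, 80],
})
out = df_to_nested(df, ["group1", "group2"],
                   {"hc": "total_hc", "response": "response_total"})
```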
|
<python><pandas><dictionary><recursive-datastructures>
|
2023-06-11 21:00:52
| 1
| 673
|
Joanna
|
76,452,480
| 9,577,975
|
Python's zlib doesn't work on CommonCrawl file
|
<p>I was trying to unzip a file using Python's zlib and it doesn't seem to work. <a href="https://data.commoncrawl.org/crawl-data/CC-MAIN-2022-33/segments/1659882570651.49/wet/CC-MAIN-20220807150925-20220807180925-00000.warc.wet.gz" rel="nofollow noreferrer">The file</a> is 100MB from Common Crawl and I downloaded it as <code>wet.gz</code>. When I unzip it on the terminal with <code>gunzip</code>, everything works fine, and here are the first few lines of the output:</p>
<pre><code>WARC/1.0
WARC-Type: warcinfo
WARC-Date: 2022-08-20T09:26:35Z
WARC-Filename: CC-MAIN-20220807150925-20220807180925-00000.warc.wet.gz
WARC-Record-ID: <urn:uuid:3f9035e8-8038-4239-a566-c9410b93956d>
Content-Type: application/warc-fields
Content-Length: 371
Software-Info: ia-web-commons.1.1.10-SNAPSHOT-20220804021208
Extracted-Date: Sat, 20 Aug 2022 09:26:35 GMT
robots: checked via crawler-commons 1.4-SNAPSHOT (https://github.com/crawler-commons/crawler-commons)
isPartOf: CC-MAIN-2022-33
operator: Common Crawl Admin (info@commoncrawl.org)
description: Wide crawl of the web for August 2022
publisher: Common Crawl
WARC/1.0
WARC-Type: conversion
WARC-Target-URI: http://100bravert.main.jp/public_html/wiki/index.php?cmd=backup&action=nowdiff&page=Game_log%2F%EF%BC%A7%EF%BC%AD%E6%9F%98&age=53
WARC-Date: 2022-08-07T15:32:56Z
WARC-Record-ID: <urn:uuid:8dd329bf-6717-4d0c-ae05-93445c59fd50>
WARC-Refers-To: <urn:uuid:1e2e972b-4273-468a-953f-28b0e45fb117>
WARC-Block-Digest: sha1:GTEJAN2GXLWBXDRNUEI3LLEHDIPJDPTU
WARC-Identified-Content-Language: jpn
Content-Type: text/plain
Content-Length: 12482
Game_log/ＧＭ柘のバックアップの現在との差分(No.53) - PukiWiki
Game_log/ＧＭ柘のバックアップの現在との差分(No.53)
[ トップ ] [ 新規 | 一覧 | 単語検索 | 最終更新 | ヘルプ ]
バックアップ一覧
</code></pre>
<p>However, when I try to use Python's <code>gzip</code> or <code>zlib</code> library, using these code examples:</p>
<pre><code># using gzip
fh = gzip.open('wet.gz', 'rb')
data = fh.read(); fh.close()
# using zlib
o = zlib.decompressobj(zlib.MAX_WBITS|16)
result = []
result = [o.decompress(open("wet.gz", "rb").read()), o.flush()]
</code></pre>
<p>Both of them return this:</p>
<pre><code>WARC/1.0
WARC-Type: warcinfo
WARC-Date: 2022-08-20T09:26:35Z
WARC-Filename: CC-MAIN-20220807150925-20220807180925-00000.warc.wet.gz
WARC-Record-ID: <urn:uuid:3f9035e8-8038-4239-a566-c9410b93956d>
Content-Type: application/warc-fields
Content-Length: 371
Software-Info: ia-web-commons.1.1.10-SNAPSHOT-20220804021208
Extracted-Date: Sat, 20 Aug 2022 09:26:35 GMT
robots: checked via crawler-commons 1.4-SNAPSHOT (https://github.com/crawler-commons/crawler-commons)
isPartOf: CC-MAIN-2022-33
operator: Common Crawl Admin (info@commoncrawl.org)
description: Wide crawl of the web for August 2022
publisher: Common Crawl
</code></pre>
<p>So apparently, they can decompress the first few paragraphs just fine, but all other paragraphs below it are lost. Is this a bug in Python's zlib/gzip library?</p>
<p>Edit for future readers: I've integrated the accepted answer to my Python package if you don't want to mess around:</p>
<pre class="lang-bash prettyprint-override"><code>pip install k1lib
</code></pre>
<pre class="lang-py prettyprint-override"><code>from k1lib.imports import *
lines = cat("wet.gz", text=False, chunks=True) | unzip(text=True)
for line in lines:
print(line)
</code></pre>
<p>This reads the file in binary mode chunk by chunk, unzips the chunks incrementally, splits the result into lines, and converts them to strings.</p>
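For reference, the mechanism behind the zlib behaviour in the question: Common Crawl WARC files are many concatenated gzip members (one per record), and a single `zlib.decompressobj` stops at the end of the first member, leaving the remainder in `unused_data`. A minimal sketch that walks all members:

```python
import gzip
import zlib

def gunzip_all_members(data: bytes) -> bytes:
    # A gzip file may hold many concatenated members; one decompressobj
    # stops at the first member boundary and parks the rest in unused_data.
    out = []
    while data:
        o = zlib.decompressobj(zlib.MAX_WBITS | 16)
        out.append(o.decompress(data))
        out.append(o.flush())
        data = o.unused_data   # next gzip member, if any
    return b"".join(out)

# Two members glued together, like a (tiny) WARC file:
blob = gzip.compress(b"hello ") + gzip.compress(b"world")
```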
|
<python><gzip><zlib><common-crawl>
|
2023-06-11 20:54:07
| 1
| 379
|
157 239n
|
76,452,271
| 1,905,108
|
Trigger file upload from outside an upload component in plotly
|
<p>I have a dashboard in Plotly that needs two specific but separate files to be uploaded. I have two separate components today (very simplified, but this is to give you the gist):</p>
<pre class="lang-py prettyprint-override"><code>app.layout = html.Div([
dcc.Upload(html.Button('Upload File 1')),
dcc.Upload(html.Button('Upload File 2')),
])
</code></pre>
<p>This is all good if I force the user to click on one button at a time. But how can I get it so that the upload prompt for file 2 opens automatically when the upload of file 1 is completed, so the user doesn't have to click the button for file 2 explicitly?</p>
|
<python><plotly-dash>
|
2023-06-11 19:46:38
| 1
| 323
|
Ivar Stange
|
76,452,260
| 1,126,493
|
How to Define Object's __str__ in a Python C++ Extension
|
<p>I am implementing a <code>__str__</code> function in C. This is my function definition:</p>
<pre><code>static PyObject *
Foo_str(FooObject *self)
{
return PyUnicode_FromFormat("bar");
}
</code></pre>
<p>And the export:</p>
<pre><code>static PyMethodDef Foo_methods[] = {
{ "__str__", reinterpret_cast<PyCFunction>(Foo_str), METH_VARARGS, "Pretty print" },
{ NULL } /* sentinel */
};
</code></pre>
<p>Now, in Python, <code>print(str(myobj))</code> does not work while <code>print(myobj.__str__())</code> does. How do I implement my own type casting and Python function overriding in a C/C++ extension?</p>
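The cause is that `str()` does not use ordinary attribute lookup: it performs special-method lookup on the *type*, which for a C extension type means the `tp_str` slot of the `PyTypeObject` should be set to `Foo_str`, rather than exporting `__str__` through `PyMethodDef` (which only makes it reachable as an attribute). The lookup rule itself can be demonstrated in pure Python:

```python
class Foo:
    pass

def foo_str(self):
    return "bar"

# str() performs special-method lookup on the type (tp_str at the C level),
# so assigning on the class makes str() work:
Foo.__str__ = foo_str
f = Foo()
assert str(f) == "bar"

# An instance-level __str__ is ignored by str(), just like a PyMethodDef
# entry that is only reachable through attribute lookup:
f.__str__ = lambda: "baz"
assert f.__str__() == "baz"   # attribute lookup finds it
assert str(f) == "bar"        # str() still uses the type's slot
```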
|
<python><c++>
|
2023-06-11 19:44:35
| 1
| 442
|
major4x
|
76,452,253
| 153,053
|
How can I view error logs from a Python Azure Function App?
|
<p>I'm working on some Python Function Apps using Linux and the consumption plan. I'm trying to figure out where I can see error logs from my apps without a lot of success. In particular I want to see two things:</p>
<ul>
<li><p>If my app code fails to compile I get 404 errors when I try to access the API endpoint. Where do I see the compile error?</p>
</li>
<li><p>If my app throws an exception I get 500 errors from the endpoint. Where do I see the exception message and traceback?</p>
</li>
</ul>
<p>Neither of these appear in any of the logging or insights screens I've been able to find. I've resorted to wrapping all my code in try/except blocks and returning the errors in my response but that's not great. I must be missing something obvious here!</p>
<p>Thanks for the help.</p>
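Assuming the Function App has Application Insights connected (the usual place both host startup errors and unhandled Python exceptions land on the consumption plan), a Logs query along these lines surfaces recent traces and exceptions; the table names are the standard Application Insights ones:

```
traces
| union exceptions
| where timestamp > ago(1h)
| order by timestamp desc
```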
|
<python><azure><azure-functions>
|
2023-06-11 19:43:15
| 1
| 7,178
|
samtregar
|
76,452,150
| 14,369,072
|
How to send an email with Python without enabling 2-fa and allow less secure app options in the sender Gmail account?
|
<p>I am trying to develop a feature for sending an email from a Gmail account with its own domain. For example: person1@companyname.com</p>
<p>The requirement is that I can't enable two-factor authentication on the Google account, and I can't allow less secure apps either.</p>
<p>So, what would be the best approach for this? I have seen a tutorial on YouTube, but they use credential.json or client.json files for authentication without explaining the proper way to get these files or their contents.</p>
<p>This is the code I have so far:</p>
<pre><code># Sending email
try:
    password = 'appPasswordIsSupposedtoBePlacedHereButIAmNotAllowedToUseit'
    sender_email = 'person1@companyname.com'
    recipient_email = ['person2@companyname.com', 'testemail@gmail.com']
    subject = 'blablba'
    body = 'blabla'

    em = EmailMessage()
    em['From'] = sender_email
    em['To'] = ", ".join(recipient_email)
    em['Subject'] = subject
    em.set_content(body)

    context = ssl.create_default_context()
    with smtplib.SMTP_SSL('smtp.gmail.com', 465, context=context) as smtp:
        smtp.login(sender_email, password)
        smtp.sendmail(sender_email, recipient_email, em.as_string())
except Exception as e:
    errors = 'Error while sending the email: ' + str(e)
    logging.warning(errors)
</code></pre>
|
<python><oauth-2.0><gmail><gmail-api><smtplib>
|
2023-06-11 19:15:46
| 1
| 347
|
doubting
|
76,452,100
| 673,600
|
Trendlines in plotly - interquartile range possible?
|
<p>I'd love to plot the 25th-to-75th percentile range as some kind of trendline. Does anyone have suggestions for how I might be able to do that?</p>
<pre><code>fig = px.scatter(df, "Date", y="Number of Events", color="Technology", labels={
"Date": "Date",
"Number of Events": "Number of Events",
"markersize": 50,
"s": 50,
}, trendline="expanding", trendline_options=dict(function="mean"), trendline_scope="overall")
</code></pre>
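Plotly Express has no built-in quantile trendline, but the bands can be computed with pandas and then added as extra scatter traces. A sketch of the computation, mirroring `trendline="expanding"` with `function="mean"` (the column name is illustrative):

```python
import pandas as pd

def expanding_iqr(df, value_col):
    # Expanding 25th/75th percentile bands, analogous to the expanding-mean
    # trendline; each point uses all rows up to and including it.
    q25 = df[value_col].expanding().quantile(0.25)
    q75 = df[value_col].expanding().quantile(0.75)
    return q25, q75

# The two series can then be added to the figure, e.g.
# fig.add_scatter(x=df["Date"], y=q25, name="25th pct")
# fig.add_scatter(x=df["Date"], y=q75, name="75th pct", fill="tonexty")
```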
|
<python><plotly>
|
2023-06-11 19:07:05
| 0
| 6,026
|
disruptive
|
76,451,866
| 11,716,727
|
How to calculate corr from a dataframe with non-numeric columns
|
<p>I have these data set as shown below:</p>
<p><a href="https://i.sstatic.net/CLu78.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CLu78.png" alt="enter image description here" /></a></p>
<p>which belongs to the Pokemon dataset
<a href="https://elitedatascience.com/wp-content/uploads/2022/07/Pokemon.csv" rel="nofollow noreferrer">https://elitedatascience.com/wp-content/uploads/2022/07/Pokemon.csv</a></p>
<p>I want to plot the heatmap as shown below:</p>
<pre><code># Calculate correlations
corr = stats_df.corr()
# Heatmap
plt.figure(figsize=(9,8))
sns.heatmap(corr)
</code></pre>
<p>But I get this error below; how can I solve it?</p>
<p><a href="https://i.sstatic.net/Mub1b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Mub1b.png" alt="enter image description here" /></a></p>
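The error presumably comes from non-numeric columns (such as the name and type columns) reaching <code>.corr()</code>. A sketch with a stand-in frame (the column names here are illustrative):

```python
import pandas as pd

stats_df = pd.DataFrame({
    'Name': ['Bulbasaur', 'Charmander', 'Squirtle'],   # non-numeric column
    'Attack': [49, 52, 48],
    'Defense': [49, 43, 65],
})

# Either drop the non-numeric columns first...
corr = stats_df.select_dtypes(include='number').corr()
# ...or (pandas >= 1.5) let corr() do it:
corr = stats_df.corr(numeric_only=True)
```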
|
<python><pandas><correlation>
|
2023-06-11 18:07:24
| 1
| 709
|
SH_IQ
|
76,451,698
| 7,695,845
|
Check code compatibility for python 3x versions
|
<p>I am making a Python script to plot the evolution of a system we are studying in a Physics course. We only use Python 3.6+. I share my code with my classmates and the last time we did such a task, I ran into an annoying issue. I have Python 3.11.4 on my machine because I like having the most recent features of Python. I accidentally used a 3.10 feature which made my code crash for some of my friends who had 3.7-3.9. Is there a way to check which versions of Python my code will work on without installing these versions myself?</p>
<p>I want to make sure the code is compatible with Python 3.7 at least (because I don't give up <code>dataclasses</code>). Is there a library, program, or website which can tell me which versions of Python my code will work on?</p>
<p>Alternatively, I am using Visual Studio Code as an editor, so is there a VSCode extension that checks it (it would be nice for VSCode to highlight a compatibility error if it can). Of course, I prefer an editor-independent solution first.</p>
|
<python><compatibility>
|
2023-06-11 17:34:30
| 1
| 1,420
|
Shai Avr
|
76,451,682
| 11,586,490
|
Problems with Python installation and Pycharm virtual environments
|
<p>I've been using PyCharm for a while now, when I first started using it python 3.7 was the latest release so I used that, everything worked fine (~8 different projects).</p>
<p>However, the latest version of Django requires python 3.8 or later so I finally updated and installed 3.11.</p>
<p>I'm having some problems setting up my Pycharm projects with python 3.11 (my system can't import Django even though it's installed, I think it stems from the project not being in a venv but I've tried to add it into one).I've installed python 3.11 in the default location, the same place python 3.7 was installed but when I choose a python interpreter (screenshot below) many of them are invalid. I also notice that many of my other projects (<code>FaceComparison</code> as an example) are using a python 3.7 that itself is within a virtual environment, within the <code>Scripts</code> folder by the looks of it.</p>
<p>Should I have installed python 3.11 in a virtual environment? Whenever I try and set up a new project and use a python 3.11 interpreter, a) it's got <code>(DjangoTesting)</code> in it - not sure why? and it also isn't in a virtual environment.</p>
<p>I've read many questions about Pycharm, virtual environments and followed Pycharms guide on setting them up but I'm still having problems with my Django installation (<code>ImportError: Couldn't import Django ... Did you forget to activate a virtual environment?</code>), which leads me to believe I'm not creating a virtual environment properly, even though I've started a fresh project and followed PyCharms vitrual environment steps a few times!</p>
<p>If I delete python 3.11 and start again is there a better way/place to install it?</p>
<p>Finally, apologies for the poor question, I'm just not sure of the exact problem!</p>
<p><a href="https://i.sstatic.net/3rUkl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3rUkl.png" alt="enter image description here" /></a></p>
|
<python><django><pycharm>
|
2023-06-11 17:31:11
| 1
| 351
|
Callum
|
76,451,639
| 15,439,406
|
When I 'Run Selected' in Python on Visual Studio Code, it is suddenly opening up two Python terminals, one of which does not run my Python code
|
<p>I was developing some Python code for work, using Python extension v2023.10.0 on VS Code. It was working completely fine before.</p>
<p>Then all of a sudden when I ran some of my selected code as usual with <kbd>Shift</kbd>+<kbd>Enter</kbd> on my Python code, it opened up two terminals and won't allow me to run the code in a normal Python terminal. It only allows me to run in that second Python terminal there.</p>
<p>Why is this happening, and how can this be fixed?</p>
<p>I did not change any settings, so I'm surprised by the sudden change. I have tried:</p>
<ul>
<li>Reinstalling VS Code and Extensions</li>
<li>Deleting all User Settings on JSON</li>
<li>Making sure that I've selected a Python Interpreter</li>
</ul>
<p><a href="https://i.sstatic.net/P2w98.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/P2w98.jpg" alt="enter image description here" /></a></p>
<p>The first Python terminal seems okay, but I can't seem to run my code in <strong>that</strong> terminal, because the second one is in the way.</p>
|
<python><visual-studio-code><conda><pylance>
|
2023-06-11 17:17:26
| 1
| 484
|
kodikai
|
76,451,567
| 7,695,845
|
How to check which writers are available for saving an animation
|
<p>I am using matplotlib to make an animation of the evolution of a system over time. I want to be able to save the animation as a file. My default choice is to save as an <code>.mp4</code> file, which means I should use the <code>ffmpeg</code> writer like this:</p>
<pre class="lang-py prettyprint-override"><code>anim.save(filename="system_evolution.mp4", writer="ffmpeg", fps=30)
</code></pre>
<p>The problem is that I am sharing my code with my classmates who don't necessarily have <code>ffmpeg</code> installed on their systems. In this case, I would like to fall back to saving a <code>.gif</code> of the animation using <code>pillow</code> (most of them install Python using <code>Anaconda</code> so they probably have <code>Pillow</code> installed as well). How can I check which writers are available to use for saving the animation?</p>
<p>I would like to have something like this:</p>
<pre class="lang-py prettyprint-override"><code>if ffmpeg_available():
print("Saving system_evolution.mp4")
anim.save(filename="system_evolution.mp4", writer="ffmpeg", fps=30)
elif pillow_available():
print("Saving system_evolution.gif")
anim.save(filename="system_evolution.gif", writer="pillow", fps=30)
else:
print("Please install either ffmpeg to save a mp4 or pillow to save a gif.")
</code></pre>
<p>I couldn't figure out how to actually check if <code>ffmpeg</code> or <code>pillow</code> are available, which means the program crashes when I try to save an <code>.mp4</code> and <code>ffmpeg</code> isn't installed. How can this be checked?</p>
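matplotlib's writer classes expose an `isAvailable()` check (and `animation.writers.is_available(name)` does the same by registry name), which seems to fit here:

```python
from matplotlib import animation

# Pick the first available writer without attempting a save.
if animation.FFMpegWriter.isAvailable():
    writer, ext = "ffmpeg", "mp4"
elif animation.PillowWriter.isAvailable():
    writer, ext = "pillow", "gif"
else:
    writer, ext = None, None

if writer:
    print(f"Saving system_evolution.{ext}")
    # anim.save(f"system_evolution.{ext}", writer=writer, fps=30)  # as in the question
else:
    print("Please install either ffmpeg to save an mp4 or pillow to save a gif.")
```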
|
<python><matplotlib><matplotlib-animation>
|
2023-06-11 17:00:34
| 1
| 1,420
|
Shai Avr
|
76,451,500
| 3,152,686
|
Permission denied when creating a directory inside Docker container
|
<p>I am trying to create a directory 'data' inside a Docker container and extract the contents of a zip file (my model, stored in the cloud) into it. But I am getting the following error.</p>
<pre><code>File "/app/utils/data_utils.py", line 102, in unzip_file
zipf.extractall(dst_path)
File "/opt/conda/lib/python3.9/zipfile.py", line 1633, in extractall
self._extract_member(zipinfo, path, pwd)
File "/opt/conda/lib/python3.9/zipfile.py", line 1679, in _extract_member
os.makedirs(upperdirs)
File "/opt/conda/lib/python3.9/os.py", line 215, in makedirs
makedirs(head, exist_ok=exist_ok)
File "/opt/conda/lib/python3.9/os.py", line 215, in makedirs
makedirs(head, exist_ok=exist_ok)
File "/opt/conda/lib/python3.9/os.py", line 215, in makedirs
makedirs(head, exist_ok=exist_ok)
File "/opt/conda/lib/python3.9/os.py", line 225, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/app/data/models'
</code></pre>
<p>My dockerfile looks as below</p>
<pre><code>FROM sap.corp:65492/mlimages/infrastructure/multistage-base:0.4.0 AS base
WORKDIR /app
COPY config.py requirements.txt /app/
RUN pip install setuptools==57.5.0
RUN pip install -U --no-cache-dir -r requirements.txt
RUN groupadd --gid 1000 nonroot \
&& useradd --uid 1000 --gid 1000 -m nonroot
USER 1000
COPY --chown=nonroot:nonroot api /app/api
COPY --chown=nonroot:nonroot automation /app/automation
COPY --chown=nonroot:nonroot models /app/models
COPY --chown=nonroot:nonroot stress_test /app/stress_test
COPY --chown=nonroot:nonroot tests /app/tests/
COPY --chown=nonroot:nonroot tools /app/tools
COPY --chown=nonroot:nonroot utils /app/utils
COPY --chown=nonroot:nonroot engine /app/engine
ENV PYTHONPATH /app/
WORKDIR /app/api
CMD [ "python", "main.py"]
</code></pre>
<p>The code which basically does the extraction into a newly created folder looks as below</p>
<pre><code>def unzip_file(src_path: str, dst_path: str) -> None:
"""
Extract zip file in data dir
Args:
src_path (str):
dst_path (str):
Returns:
None:
"""
zipf = zipfile.ZipFile(src_path)
if zipfile.is_zipfile(src_path):
if not os.path.exists(dst_path):
os.makedirs(dst_path, exist_ok=True)
zipf.extractall(dst_path)
zipf.close()
else:
raise Exception('Not a zipfile')
data_path = data_utils.get_project_root() + '/data/'
data_utils.unzip_file(src_path=model_path, dst_path=data_path)
</code></pre>
<p>Even if I create a dummy 'data' folder inside the container, I still get the same error when I try to extract the zip into this created directory.</p>
<p>May I know if user 1000 has permission to create a directory inside docker? If not, how can I change the Dockerfile to resolve the error?</p>
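For reference, in this Dockerfile `/app` is created and populated while still running as root, and only the `COPY --chown` subdirectories belong to `nonroot`, so UID 1000 cannot create `/app/data`; the usual fix is something like `RUN mkdir -p /app/data && chown -R nonroot:nonroot /app/data` before `USER 1000`. A small sketch that surfaces the problem with a clearer message (the helper name is made up):

```python
import os

def safe_makedirs(path: str) -> None:
    """makedirs with a clearer error when the nearest existing ancestor
    is not writable, e.g. a root-owned /app while running as UID 1000."""
    if os.path.isdir(path):
        return
    parent = os.path.dirname(path.rstrip("/")) or "."
    # walk up to the deepest existing ancestor and check we may write there
    while not os.path.isdir(parent):
        parent = os.path.dirname(parent) or "."
    if not os.access(parent, os.W_OK):
        raise PermissionError(
            f"uid {os.getuid()} cannot create {path!r}: {parent!r} is not writable"
        )
    os.makedirs(path, exist_ok=True)
```

Calling this in place of the bare `os.makedirs` in `unzip_file` would turn the cryptic traceback into a message naming the unwritable directory.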
|
<python><docker><dockerfile><zip><user-permissions>
|
2023-06-11 16:44:06
| 1
| 564
|
Vishnukk
|
76,451,499
| 1,643,257
|
Can't build any Python version on Mac OS X 13.4 with pyenv
|
<p>I recently installed the latest macOS upgrade (currently running 13.4) and I can't build Python with pyenv anymore (I suspect it's related, although not 100% sure).</p>
<p><code>pyenv</code> was installed through <code>brew install</code> (I'm using <code>2.3.19</code>), and it was able to build Python 3.10.4 just fine. I wanted to reinstall the same Python version I was using (3.10.4) and got this error message:</p>
<pre><code>BUILD FAILED (OS X 13.4 using python-build 20180424)
Inspect or clean up the working tree at /var/folders/5j/xmpgv80d18s_r90jvtyftpph0000gn/T/python-build.20230611185357.74229
Results logged to /var/folders/5j/xmpgv80d18s_r90jvtyftpph0000gn/T/python-build.20230611185357.74229.log
Last 10 log lines:
File "/private/var/folders/5j/xmpgv80d18s_r90jvtyftpph0000gn/T/python-build.20230611185357.74229/Python-3.10.12/Lib/ensurepip/__init__.py", line 287, in _main
return _bootstrap(
File "/private/var/folders/5j/xmpgv80d18s_r90jvtyftpph0000gn/T/python-build.20230611185357.74229/Python-3.10.12/Lib/ensurepip/__init__.py", line 203, in _bootstrap
return _run_pip([*args, *_PACKAGE_NAMES], additional_paths)
File "/private/var/folders/5j/xmpgv80d18s_r90jvtyftpph0000gn/T/python-build.20230611185357.74229/Python-3.10.12/Lib/ensurepip/__init__.py", line 104, in _run_pip
return subprocess.run(cmd, check=True).returncode
File "/private/var/folders/5j/xmpgv80d18s_r90jvtyftpph0000gn/T/python-build.20230611185357.74229/Python-3.10.12/Lib/subprocess.py", line 526, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/private/var/folders/5j/xmpgv80d18s_r90jvtyftpph0000gn/T/python-build.20230611185357.74229/Python-3.10.12/python.exe', '-W', 'ignore::DeprecationWarning', '-c', '\nimport runpy\nimport sys\nsys.path = [\'/var/folders/5j/xmpgv80d18s_r90jvtyftpph0000gn/T/tmp18yfiiwu/setuptools-65.5.0-py3-none-any.whl\', \'/var/folders/5j/xmpgv80d18s_r90jvtyftpph0000gn/T/tmp18yfiiwu/pip-23.0.1-py3-none-any.whl\'] + sys.path\nsys.argv[1:] = [\'install\', \'--no-cache-dir\', \'--no-index\', \'--find-links\', \'/var/folders/5j/xmpgv80d18s_r90jvtyftpph0000gn/T/tmp18yfiiwu\', \'--root\', \'/\', \'--upgrade\', \'setuptools\', \'pip\']\nrunpy.run_module("pip", run_name="__main__", alter_sys=True)\n']' died with <Signals.SIGABRT: 6>.
make: *** [install] Error 1
</code></pre>
<p>(I don't really get why it tries to do anything with a <code>.exe</code> file...)</p>
<p>Since then, I tried almost every lead I could find, including reinstalling pyenv, reinstalling xcode-select, updating Homebrew and upgrading all packages. Nothing helped.</p>
<p>I've tried installing 3.9, 3.10 and 3.11 (latest with each). On 3.9 and 3.10 I get the same error with the exe file. When I try 3.11 I get something else:</p>
<pre><code>Undefined symbols for architecture x86_64:
"_libintl_bindtextdomain", referenced from:
__locale_bindtextdomain in _localemodule.o
"_libintl_dcgettext", referenced from:
__locale_dcgettext in _localemodule.o
"_libintl_dgettext", referenced from:
__locale_dgettext in _localemodule.o
"_libintl_gettext", referenced from:
__locale_gettext in _localemodule.o
"_libintl_setlocale", referenced from:
__locale_setlocale in _localemodule.o
__locale_localeconv in _localemodule.o
"_libintl_textdomain", referenced from:
__locale_textdomain in _localemodule.o
ld: symbol(s) not found for architecture x86_64
clang-16: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [Programs/_freeze_module] Error 1
BUILD FAILED (OS X 13.4 using python-build 20180424)
Inspect or clean up the working tree at /var/folders/5j/xmpgv80d18s_r90jvtyftpph0000gn/T/python-build.20230611184550.63292
Results logged to /var/folders/5j/xmpgv80d18s_r90jvtyftpph0000gn/T/python-build.20230611184550.63292.log
Last 10 log lines:
"_libintl_gettext", referenced from:
__locale_gettext in _localemodule.o
"_libintl_setlocale", referenced from:
__locale_setlocale in _localemodule.o
__locale_localeconv in _localemodule.o
"_libintl_textdomain", referenced from:
__locale_textdomain in _localemodule.o
ld: symbol(s) not found for architecture x86_64
clang-16: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [Programs/_freeze_module] Error 1
</code></pre>
<p>Does anyone have any lead? I feel like every time I'm upgrading my Mac OS, I completely break my whole dev environment...</p>
|
<python><python-3.x><macos><pyenv>
|
2023-06-11 16:44:03
| 2
| 3,000
|
Zach Moshe
|
76,451,455
| 6,286,900
|
Error: Failed to find Flask application or factory in module 'app'. Use 'app:name' to specify one
|
<p>I have a problem that is driving me crazy. I have a <a href="https://github.com/sasadangelo/calendar" rel="nofollow noreferrer">Flask and Python project</a> that manages a vacation planner calendar for teams and employees. So far I only have Teams managed as a CRUD application. The structure and code are similar to another project I have that works fine.</p>
<p>However, if I run the following steps:</p>
<pre><code>cd <workspace>
git clone https://github.com/sasadangelo/calendar
cd calendar
python3 -m venv env
source env/bin/activate
pip3 install -r requirements.txt
./run.sh
</code></pre>
<p>I got the following error:</p>
<pre><code>Error: Failed to find Flask application or factory in module 'app'. Use 'app:name' to specify one.
</code></pre>
<p>I checked all possible articles (also on stackoverflow.com) but I was unable to solve the issue. The <code>run.sh</code> script is like this:</p>
<pre><code>export FLASK_APP=app
flask run --debug
</code></pre>
<p>Can anyone help me to solve this problem?</p>
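For context, this error means the module flask imported (here, `app.py` or the `app/` package) exposes neither a module-level `app`/`application` object nor a `create_app`/`make_app` factory. A rough sketch of that discovery logic (simplified, not flask's actual code):

```python
import importlib
import types

def find_flask_app(module_name: str):
    """Mimic what `flask run` does with FLASK_QPP-style discovery:
    look for a module-level `app`/`application` object, else call a
    `create_app`/`make_app` factory."""
    mod = importlib.import_module(module_name)
    for attr in ("app", "application"):
        obj = getattr(mod, attr, None)
        if obj is not None and not isinstance(obj, types.ModuleType):
            return obj
    for factory_name in ("create_app", "make_app"):
        factory = getattr(mod, factory_name, None)
        if callable(factory):
            return factory()
    raise LookupError(f"no Flask application or factory in {module_name!r}")
```

If the Flask object lives under another name (say, `server`), `FLASK_APP=app:server` points flask at it explicitly, which is what the error message's `'app:name'` hint refers to.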
|
<python><python-3.x><flask><flask-sqlalchemy>
|
2023-06-11 16:32:28
| 1
| 1,179
|
Salvatore D'angelo
|
76,451,178
| 4,683,036
|
Evaluating if two ConditionSets are disjoint in sympy
|
<p>In sympy, it is trivial to evaluate whether two Intervals are disjoint</p>
<pre><code>In [3]: from sympy import Interval
In [4]: Interval(0, 2).is_disjoint(Interval(-2, -1))
Out[4]: True
In [5]: Interval(0, 2).is_disjoint(Interval(1, 2))
Out[5]: False
In [6]: Interval(0, 2).intersect(Interval(-2, -1))
Out[6]: EmptySet
</code></pre>
<p>However in the case when I have two ConditionSet objects, how can I perform the is_disjoint check ?</p>
<pre><code>In [12]: from sympy import ConditionSet, Symbol, S
In [13]: x = Symbol('x')
In [14]: ConditionSet(x, x >1, S.Reals).is_disjoint(ConditionSet(x, x < 1, S.Reals))
Out[14]: False
In [15]: ConditionSet(x, x >1, S.Reals).intersect(ConditionSet(x, x < 1, S.Reals))
Out[15]: Intersection(ConditionSet(x, x > 1, Reals), ConditionSet(x, x < 1, Reals))
</code></pre>
<p><code>is_disjoint</code> simply checks if <code>intersect</code> equates to <code>EmptySet</code> and in the case of the ConditionSet the <code>intersect</code> is still not evaluated.</p>
<p>What is the best way to resolve this?</p>
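One workaround sketch: rewrite each condition with `solveset`, which evaluates simple inequalities over the Reals to concrete `Interval` objects, where `is_disjoint` works as expected (this assumes the conditions are simple enough for `solveset` to resolve):

```python
from sympy import S, Symbol, solveset

x = Symbol("x", real=True)

# solveset turns each inequality into a concrete Interval over the Reals
a = solveset(x > 1, x, domain=S.Reals)  # Interval.open(1, oo)
b = solveset(x < 1, x, domain=S.Reals)  # Interval.open(-oo, 1)

print(a.is_disjoint(b))
```

Once both sides are Intervals, the intersection evaluates to `EmptySet` and `is_disjoint` returns the right answer.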
|
<python><python-3.x><sympy>
|
2023-06-11 15:28:44
| 1
| 494
|
marwan
|
76,451,171
| 8,547,986
|
File creation time on GitHub actions
|
<p>I am running a GitHub Action, and in the action I need to get the time when a file was created. The file is from the repo on which the action runs. I am using a Python script to get the file creation time in the GitHub Action.</p>
<p>I don't know if it's actually possible, and just wanted to confirm whether my reasoning about why it isn't is right. The action runs the code in a Docker container and the repository's files get copied into the container, which essentially means they get created in the container anew. So the file creation date wouldn't be the actual date but the date when they were copied into the container?</p>
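That reasoning about filesystem timestamps is sound: in CI they reflect checkout time. What git does keep is the date of the commit that first added the file, which a sketch like this can read (a hypothetical helper; in Actions it needs the full history, so `actions/checkout` would need `fetch-depth: 0`):

```python
import subprocess

def first_commit_iso_date(path: str, repo: str = ".") -> str:
    """ISO date of the commit that added `path` — the closest thing git
    records to a creation time."""
    out = subprocess.run(
        ["git", "log", "--follow", "--diff-filter=A", "--format=%aI", "--", path],
        cwd=repo, capture_output=True, text=True, check=True,
    ).stdout.split()
    return out[-1] if out else ""  # last entry = earliest commit that added it
```
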
|
<python><github-actions>
|
2023-06-11 15:26:16
| 0
| 1,923
|
monte
|
76,450,870
| 9,582,542
|
How to get span text only from a table?
|
<p>In this HTML I am trying to parse the text fields and the impact, but the impact is not text, it's an image.</p>
<pre><code><td class="fxs_c_item fxs_c_time"><span>01:00</span></td>,
<td class="fxs_c_item fxs_c_flag"><span class="fxs_flag fxs_us" title="United States"></span></td>,
<td class="fxs_c_item fxs_c_currency"><span>USD</span></td>,
<td class="fxs_c_item fxs_c_name"><span>New Year's Day</span><span> <span></span></span></td>,
<td class="fxs_c_item fxs_c_impact"><span class="fxs_c_impact-icon fxs_c_impact-none"></span></td>,
<td class="fxs_c_item fxs_c_type" colspan="4"><span class="fxs_c_label fxs_c_label_info">All Day</span></td>,
<td class="fxs_c_item fxs_c_notify"></td>,
<td class="fxs_c_item fxs_c_dashboard" data-gtmid="features-calendar-eventdetails-eventoptions-4d3300ad-c168-4a5f-a4ac-a60a338e63c4"><span><svg aria-hidden="true" class="fxs_icon svg-inline--fa fa-ellipsis-h fa-w-16" data-icon="ellipsis-h" data-prefix="fas" focusable="false" role="img" viewbox="0 0 512 512" xmlns="http://www.w3.org/2000/svg"><path d="M328 256c0 39.8-32.2 72-72 72s-72-32.2-72-72 32.2-72 72-72 72 32.2 72 72zm104-72c-39.8 0-72 32.2-72 72s32.2 72 72 72 72-32.2 72-72-32.2-72-72-72zm-352 0c-39.8 0-72 32.2-72 72s32.2 72 72 72 72-32.2 72-72-32.2-72-72-72z" fill="currentColor"></path></svg></span></td>]
</code></pre>
<p>I am able to get all the <code>table</code> text with this line</p>
<pre><code>cols = [ele.text.strip() for ele in cols]
</code></pre>
<p>but substituting <strong>span.text</strong> does not work. I need the <code>span</code> class value of</p>
<pre><code>fxs_c_impact-icon fxs_c_impact-none
</code></pre>
<p>for impact for each row of text</p>
<p>I am trying to extract all the <code>span</code> text from a <code>table</code></p>
<pre><code>data3 = []
table3 = soup.find('table', attrs={'class':'fxs_c_table'})
table_body3 = table3.find('tbody')
rows = table_body3.find_all('tr')
for row in rows:
cols = row.find_all('td')
cols = [ele.span.text for ele in cols]
data3.append([ele for ele in cols if ele])
</code></pre>
<p>The <code>span</code> item looks like this</p>
<pre><code><span class="fxs_c_impact-icon fxs_c_impact-medium"></span>
</code></pre>
<p>Error I get</p>
<pre><code>AttributeError: 'NoneType' object has no attribute 'text'
</code></pre>
<p>The script works if I want to extract text from the text fields in the table, but I can't seem to extract this span's value.</p>
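A sketch of one way around both issues: the <code>AttributeError</code> comes from cells with no <code>span</code> at all (the empty <code>fxs_c_notify</code> cell), and the impact span's text is empty — its information lives in the class attribute. So skip span-less cells and read the class list for the impact column (class names taken from the question's HTML):

```python
from bs4 import BeautifulSoup

# trimmed-down row using the markup from the question
html = """
<table class="fxs_c_table"><tbody><tr>
<td class="fxs_c_item fxs_c_time"><span>01:00</span></td>
<td class="fxs_c_item fxs_c_currency"><span>USD</span></td>
<td class="fxs_c_item fxs_c_impact"><span class="fxs_c_impact-icon fxs_c_impact-none"></span></td>
<td class="fxs_c_item fxs_c_notify"></td>
</tr></tbody></table>
"""

soup = BeautifulSoup(html, "html.parser")
data3 = []
for row in soup.select("table.fxs_c_table tbody tr"):
    cols = []
    for cell in row.find_all("td"):
        span = cell.find("span")
        if span is None:
            continue  # cells like fxs_c_notify have no span at all
        if "fxs_c_impact" in cell.get("class", []):
            # the impact is encoded in the span's class list, not its text
            cols.append(" ".join(span.get("class", [])))
        else:
            cols.append(span.get_text(strip=True))
    data3.append(cols)

print(data3)
```
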
|
<python><web-scraping><beautifulsoup>
|
2023-06-11 14:08:35
| 1
| 690
|
Leo Torres
|
76,450,720
| 1,211,959
|
Problem in SMOTE oversampling in a classification problem
|
<p>My dataset contains 130 features, all in z-score format, and 100 labels.
I am trying to train a classifier on this dataset.</p>
<pre><code>import pandas as pd
train_data = pd.read_csv('data_train.csv')
test_data = pd.read_csv('data_test.csv')
X_test = test_data.drop(['labels'], axis=1)
y_test = test_data['labels'].astype('string')
X_train = train_data.drop(['labels'], axis=1)
y_train = train_data['labels'].astype('string')
</code></pre>
<p>This is the distribution of the labels:</p>
<p><a href="https://i.sstatic.net/faiYS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/faiYS.png" alt="enter image description here" /></a></p>
<p>I used three classifiers (Naive Bayes, Random Forest, Neural network with Keras). I obtain about 60% accuracy with the three classifiers.</p>
<p>For example with this code I obtain 57% accuracy.</p>
<pre><code>from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
y_pred = gnb.fit(X_train, y_train).predict(X_test)
(y_test == y_pred).sum()/X_test.shape[0]
</code></pre>
<p>I used SMOTE to reduce the imbalance in the labels distribution.</p>
<pre><code>oversample = SMOTE()
X_train, y_train = oversample.fit_resample(X_train, y_train)
y_train.value_counts().plot(kind='bar')
</code></pre>
<p>This is the distribution of labels in the balanced dataset:</p>
<p><a href="https://i.sstatic.net/1D0FN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1D0FN.png" alt="enter image description here" /></a></p>
<p>After running the oversampling code, I rerun the same classifier on the oversampled data. The accuracy drops to 0:</p>
<pre><code>from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
y_pred = gnb.fit(X_train, y_train).predict(X_test)
(y_test == y_pred).sum()/X_test.shape[0]
</code></pre>
<p>Accuracy here is 0, and the same thing happens with all other classifiers.</p>
<p>Do you have any explanation?</p>
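A hedged diagnostic sketch, not an answer: accuracy dropping to exactly 0 (rather than merely getting worse) usually means the predicted label strings no longer match the test labels at all — for example a dtype or encoding change introduced by the resampling step — so a first check is whether the two label sets even overlap (the toy labels below are made up):

```python
import numpy as np

def label_overlap(y_test, y_pred):
    """Count how many distinct predicted labels also occur among the
    test labels; zero overlap points at an encoding/dtype mismatch
    rather than a genuinely bad model."""
    test_set = {str(v) for v in np.asarray(y_test, dtype=object)}
    pred_set = {str(v) for v in np.asarray(y_pred, dtype=object)}
    return len(test_set & pred_set), len(pred_set)

# toy illustration with made-up labels
overlap, total = label_overlap(["a", "b", "a"], ["a", "c", "b"])
```

If the overlap is zero only after SMOTE, comparing `y_train.dtype` before and after `fit_resample` would be the next thing to look at.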
|
<python><pandas><machine-learning><classification><smote>
|
2023-06-11 13:36:21
| 1
| 417
|
tammuz
|
76,450,676
| 12,961,237
|
Use argon2 to generate a key usable for Fernet
|
<p><strong>My goal:</strong> Take a plain-text password and generate a 32 byte token that can be used for Fernet encryption.</p>
<p><strong>What I've tried so far:</strong></p>
<pre><code>>>> from cryptography.fernet import Fernet
>>> import argon2
>>> import base64
>>> hash = argon2.hash_password('abc'.encode(), hash_len=32)
>>> hash
b'$argon2i$v=19$m=65536,t=3,p=4$ee2ZpEZ5Q58HwIT91xQxgw$QrzxXCLBOnuzLgVPoccIPcaeF4mS4uvmrRBFzNXZbxw'
>>> Fernet(hash)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/cryptography/fernet.py", line 34, in __init__
raise ValueError(
ValueError: Fernet key must be 32 url-safe base64-encoded bytes.
</code></pre>
<p><strong>I've then tried:</strong></p>
<pre><code>>>> b64_hash = base64.urlsafe_b64decode(hash)
>>> b64_hash
b'j\xb8(\x9fh\xaf\xd7\xd9'
>>> Fernet(b64_hash)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/cryptography/fernet.py", line 32, in __init__
key = base64.urlsafe_b64decode(key)
File "/usr/lib/python3.10/base64.py", line 133, in urlsafe_b64decode
return b64decode(s)
File "/usr/lib/python3.10/base64.py", line 87, in b64decode
return binascii.a2b_base64(s)
binascii.Error: Incorrect padding
</code></pre>
<p>So, I've fixed the first error, but now another (similar) one comes up.
What's the issue here?</p>
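For reference, a sketch of what I believe the intended flow is: `argon2.hash_password` returns the full encoded string (`$argon2i$v=19$...`), not 32 raw bytes, and Fernet needs the raw key base64-*encoded*, not decoded. argon2's low-level API can produce raw bytes (parameter names as in argon2-cffi; cost parameters here are illustrative):

```python
import base64
import os

from argon2.low_level import Type, hash_secret_raw

salt = os.urandom(16)  # persist this salt alongside the ciphertext for re-derivation
raw_key = hash_secret_raw(
    secret=b"abc",
    salt=salt,
    time_cost=3,
    memory_cost=65536,
    parallelism=4,
    hash_len=32,
    type=Type.ID,
)
# Fernet wants the 32 raw bytes *encoded* as url-safe base64 (encode, not decode)
fernet_key = base64.urlsafe_b64encode(raw_key)
```

`Fernet(fernet_key)` should then accept the key, since 32 raw bytes encode to the 44-character url-safe form it expects.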
|
<python><python-3.x><security><encryption><encoding>
|
2023-06-11 13:23:57
| 1
| 1,192
|
Sven
|
76,450,635
| 3,745,149
|
How to hash a ring buffer of integers?
|
<p>I am using Python. I have some lists containing integers, but they are actually ring buffers.</p>
<p>The following are rules by examples:</p>
<ul>
<li><p>We do not add new elements or modify any elements. These rings are immutable.</p>
</li>
<li><p>No repetitive elements in a ring.</p>
</li>
<li><p>If two lists have different lengths, they are not the same ring.</p>
</li>
<li><p>Between two lists of the same length, if one list, after arbitrary times of <strong>ring shift</strong> or <strong>reversion</strong>, can be identical to the other, the two rings are equal. For example, <code>[1, 7, 9, 2, 5]</code> and <code>[7, 9, 2, 5, 1]</code>(ring shifted) are equal, <code>[1, 7, 9, 2, 5]</code> and <code>[1, 5, 2, 9, 7]</code>(ring shifted and reversed) are equal, but <code>[1, 7, 9, 2, 5]</code> and <code>[7, 1, 9, 2, 5]</code> are not equal.</p>
</li>
</ul>
<p>I want to quickly identify whether two rings are equal.</p>
<p>One method is to compare their elements, another method is to find a good hashing method. I tried shifting both lists to their normal state and comparing whether their elements are identical (or identical after reversal), but it's too slow.</p>
<p>I think hashing is a better choice. So what hashing method is good for this kind of ring buffers?</p>
<p>The following is what I currently have:</p>
<pre><code>import random
from time import perf_counter
from typing import List, Tuple
class Ring:
def __init__(self, ids:List[int]) -> None:
self.ids = ids
def get_shifted(self, n:int) -> 'Ring':
result_list = self.ids.copy()
for i in range(n):
head = result_list[0]
result_list.remove(head)
result_list.append(head)
return Ring(result_list)
def get_normalized(self) -> 'Ring':
min_i = self.ids.index(min(self.ids))
shifted = self.get_shifted(min_i)
return shifted
def get_reversed(self) -> 'Ring':
result_list = self.ids.copy()
result_list.reverse()
return Ring(result_list)
def __eq__(self, other: 'Ring') -> bool:
if len(self.ids) != len(other.ids):
return False
normalized1 = tuple(self.get_normalized().ids)
normalized2 = tuple(self.get_reversed().get_normalized().ids)
normalized_other = tuple(other.get_normalized().ids)
return normalized1 == normalized_other or normalized2 == normalized_other
@staticmethod
def Random() -> 'Ring':
unduplicated = set()
while len(unduplicated) < ring_capacity:
unduplicated.add(random.randint(0, 20))
return Ring(list(unduplicated))
if __name__ == '__main__':
random.seed(1)
ring_capacity = 5
num_rings = 2000
ring_set = []
random_rings = [Ring.Random() for _ in range(num_rings)]
start = perf_counter()
for ring in random_rings:
if ring not in ring_set:
ring_set.append(ring)
end = perf_counter()
print(end - start)
print(f'{len(ring_set)} out of {num_rings} unduplicated rings')
</code></pre>
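A sketch of the usual approach: map every ring to a canonical tuple — the lexicographically smallest rotation of the ring or of its reverse — then hash and compare those:

```python
def canonical(ids):
    """Lexicographically smallest rotation of the ring or its reverse;
    equal rings map to the same tuple, so the tuple can back __eq__,
    __hash__ and plain set/dict membership."""
    best = None
    for seq in (list(ids), list(ids)[::-1]):
        for i in range(len(seq)):
            rot = tuple(seq[i:] + seq[:i])
            if best is None or rot < best:
                best = rot
    return best
```

`canonical()` is O(n²) per ring but only computed once per ring; storing the tuples in a real `set` then makes the dedup loop roughly O(1) per lookup instead of the O(k) linear scan over `ring_set`. Booth's algorithm finds the least rotation in O(n) if the rings get long.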
|
<python><algorithm><hash>
|
2023-06-11 13:15:01
| 3
| 770
|
landings
|
76,450,609
| 9,607,072
|
Firebase functions gen2 python init does not work
|
<p>I have only one Python installed on my system: <code>3.10.10</code>. It includes the latest pip: <code>23.1.2</code>, and I installed the latest <code>firebase_functions</code> module.</p>
<p>When I try to init Firebase Functions on my machine, I follow the instructions, and when it asks me to install dependencies I get this error:</p>
<pre><code>ERROR: To modify pip, please run the following command:
C:\Users\XXX\functions\venv\Scripts\python.exe -m pip install --upgrade pip
Error: An unexpected error has occurred.
</code></pre>
<p>I then ran the same process again, but this time I declined to install dependencies, and it worked:</p>
<pre><code> Firebase initialization complete!
</code></pre>
<p>Now this is the default code google provided:</p>
<pre><code># Welcome to Cloud Functions for Firebase for Python!
# To get started, simply uncomment the below code or create your own.
# Deploy with `firebase deploy`
from firebase_functions import https_fn
from firebase_admin import initialize_app
initialize_app()
@https_fn.on_request()
def on_request_example(req: https_fn.Request) -> https_fn.Response:
return https_fn.Response("Hello world!")
</code></pre>
<p>I have all dependencies installed. I have made sure a thousand times.
When I run</p>
<pre><code>firebase deploy
</code></pre>
<p>I get this error:</p>
<pre><code>i deploying functions
i functions: preparing codebase default for deployment
i functions: ensuring required API cloudfunctions.googleapis.com is enabled...
i functions: ensuring required API cloudbuild.googleapis.com is enabled...
i artifactregistry: ensuring required API artifactregistry.googleapis.com is enabled...
+ functions: required API cloudbuild.googleapis.com is enabled
+ artifactregistry: required API artifactregistry.googleapis.com is enabled
+ functions: required API cloudfunctions.googleapis.com is enabled
Error: An unexpected error has occurred.
</code></pre>
<p>And this is the log in the firebase-debug.log</p>
<pre><code>[debug] [2023-06-11T13:05:29.172Z] stderr: ModuleNotFoundError: No module named 'firebase_functions'
[debug] [2023-06-11T13:05:29.182Z] Error: spawn "C:\Users\XXX\functions\venv\Scripts\activate.bat" ENOENT
at notFoundError (C:\Users\XXX\AppData\Roaming\npm\node_modules\firebase-tools\node_modules\cross-spawn\lib\enoent.js:6:26)
at verifyENOENT (C:\Users\XXX\AppData\Roaming\npm\node_modules\firebase-tools\node_modules\cross-spawn\lib\enoent.js:40:16)
at cp.emit (C:\Users\XXX\AppData\Roaming\npm\node_modules\firebase-tools\node_modules\cross-spawn\lib\enoent.js:27:25)
at ChildProcess._handle.onexit (node:internal/child_process:291:12)
[error] Error: An unexpected error has occurred.
</code></pre>
|
<python><function><google-cloud-platform>
|
2023-06-11 13:09:00
| 2
| 1,243
|
Kevin
|
76,450,587
| 3,708,067
|
Python Wheel that includes shared library is built as pure-Python platform independent none-any
|
<p>I wish to use some C and CUDA code in my Python package (which I then call using ctypes). Because of the CUDA, it doesn't seem easy to use the traditional approach of a setuptools Extension, so I instead pre-compile the code to shared libraries and then wish to include them in a Wheel.</p>
<p>When I build the Wheel it includes the shared libraries, but it still gives the output Wheel a "none-any" tag, indicating that it is pure Python and platform independent, which is not correct. cibuildwheel then refuses to run auditwheel on it because of this. I have read posts about forcing Wheels to be labelled as platform dependent, but I imagine that I shouldn't have to force it and that I am instead doing something wrong.</p>
<p>Directory structure:</p>
<ul>
<li>mypackage
<ul>
<li>pyproject.toml</li>
<li>setup.cfg</li>
<li>src
<ul>
<li>mypackage
<ul>
<li>__init__.py</li>
<li>aaa.py</li>
<li>bbb.c</li>
<li>ccc.cu</li>
<li>libmypackage_cpu.so</li>
<li>libmypackage_cuda.so</li>
<li>before_all.sh (script to build the .so files)</li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
</ul>
<p>pyproject.toml:</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
requires = ["setuptools>=42"]
build-backend = "setuptools.build_meta"
[tool.cibuildwheel]
build = "cp36-*"
skip = ["pp*", "*686"]
manylinux-x86_64-image = "manylinux2014"
[tool.cibuildwheel.linux]
before-all = "bash {project}/src/mypackage/before_all.sh"
</code></pre>
<p>setup.cfg:</p>
<pre><code>[metadata]
name = mypackage
[options]
package_dir =
= src
include_package_data = True
zip_safe = False
packages = find:
python_requires = >=3.6
[options.packages.find]
where = src
[options.package_data]
mypackage =
*.so
</code></pre>
<p>Am I missing something that is causing the packager to not detect that the Wheel is not pure Python and platform independent, or is forcing the packager to label it otherwise (such as by creating an empty Extension) the normal approach?</p>
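A common workaround (not the only one) is to override `Distribution.has_ext_modules` so setuptools treats the package as non-pure even though the `.so` files are prebuilt rather than compiled from an `Extension`:

```python
from setuptools.dist import Distribution

class BinaryDistribution(Distribution):
    """Report that the distribution ships extension modules so that
    bdist_wheel emits a platform-specific tag instead of none-any."""
    def has_ext_modules(self):
        return True

# then, in a setup.py alongside the setup.cfg:
#   from setuptools import setup
#   setup(distclass=BinaryDistribution)
```

With the wheel tagged as platform-specific, auditwheel can then repair it into a manylinux wheel as usual.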
|
<python><setuptools><python-packaging><python-wheel><python-extensions>
|
2023-06-11 13:02:37
| 1
| 633
|
user3708067
|
76,450,546
| 1,019,455
|
How to use civitAI checkpoint with ControlNet
|
<p>I am in the early stage of learning <code>Stable Diffusion</code>.</p>
<p><strong>Motivation:</strong><br>
I would like to generate a realistic picture of an object from line art like <a href="https://web.facebook.com/groups/stablediffusionthailand/permalink/1279268646021590/" rel="nofollow noreferrer">this</a>.</p>
<p>I found that I need to use <code>ControlNet</code>. However, the <code>majicMIX realistic</code> checkpoint I downloaded does not support <code>ControlNet</code>, so I can't feed an input image to it.</p>
<p>Here is my attempt.</p>
<pre class="lang-py prettyprint-override"><code>from diffusers import StableDiffusionPipeline
import torch
torch.manual_seed(111)
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")
pipe = StableDiffusionPipeline.from_ckpt("majicmixRealistic_v5.safetensors", load_safety_checker=False).to(device)
prompt = "A photo of rough collie, best quality"
negative_prompt: str = "low quality"
guidance_scale = 1
eta = 0.0
result = pipe(
prompt, num_inference_steps=30, num_images_per_prompt=8,
guidance_scale=1, negative_prompt=negative_prompt)
for idx, image in enumerate(result.images):
image.save(f"character_{guidance_scale}_{eta}_{idx}.png")
</code></pre>
<p>But I can't use this <code>model</code> with <code>ControlNet</code>, because it is a <code>checkpoint</code>.</p>
<p>Next, I used <code>StableDiffusionImg2ImgPipeline</code>. With this one I can pass <code>text</code> and an <code>image</code> together, but I still can't use <code>ControlNet</code>.</p>
<pre class="lang-py prettyprint-override"><code>"""
https://huggingface.co/docs/diffusers/using-diffusers/img2img
"""
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image
device = "mps" if torch.backends.mps.is_available() else "cpu"
pipe = StableDiffusionImg2ImgPipeline.from_ckpt("majicmixRealistic_v5.safetensors").to(
device
)
url = "../try_image_to_image/c.jpeg"
init_image = load_image(url)
prompt = "A woman, realistic color photo, high quality"
generator = torch.Generator(device=device).manual_seed(1024)
strengths = [0.3, 0.35, 0.4, 0.45, 0.5]
guidance_scales = [1, 2, 3, 4, 5, 6, 7, 8]
num_inference_steps = 100
print(f"Total run: {len(strengths) * len(guidance_scales)}")
for strength in strengths:
for guidance_scale in guidance_scales:
# image= pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5, generator=generator).images[0]
image = pipe(
prompt=prompt, image=init_image, strength=strength, guidance_scale=guidance_scale,
generator=generator, num_inference_steps=num_inference_steps).images[0]
image.save(f"images/3rd_{strength}_{guidance_scale}.png")
</code></pre>
<p><strong>Question:</strong><br>
How to use <code>ContronNet</code> with <code>majicmix-realistic</code>?</p>
|
<python><stable-diffusion>
|
2023-06-11 12:51:32
| 2
| 9,623
|
joe
|
76,450,527
| 974,297
|
How to create an asyncpg pool that sets codecs (type converters) automatically?
|
<p>More specifically, I need to use <code>hstore</code>, <code>json</code> and <code>jsonb</code> values. There are built-in codecs for this, but they must be registered on the connection explicitly, as described in <a href="https://magicstack.github.io/asyncpg/current/usage.html#example-decoding-hstore-values" rel="nofollow noreferrer">https://magicstack.github.io/asyncpg/current/usage.html#example-decoding-hstore-values</a></p>
<pre class="lang-py prettyprint-override"><code>import asyncpg
import asyncio
async def run():
conn = await asyncpg.connect()
# Assuming the hstore extension exists in the public schema.
await conn.set_builtin_type_codec(
'hstore', codec_name='pg_contrib.hstore')
result = await conn.fetchval("SELECT 'a=>1,b=>2,c=>NULL'::hstore")
assert result == {'a': '1', 'b': '2', 'c': None}
asyncio.get_event_loop().run_until_complete(run())
</code></pre>
<p>Since I need these codecs everywhere in my application, I want to create a pool that automatically sets these codecs whenever a <strong>new</strong> connection is created.</p>
<p>But I'm not sure how to do this. I can see that <code>connection_class</code> can be given for <code>create_pool</code>. In other words, I can create a pool that uses a custom connection class, for example:</p>
<pre class="lang-py prettyprint-override"><code>async def create_my_pool(pool_config):
return await asyncpg.create_pool(connection_class=MyDbConnection, **pool_config)
</code></pre>
<p>But it does not solve the problem, because I cannot call <code>set_builtin_type_codec</code> from <code>MyDbConnection.__init__</code>: <code>set_builtin_type_codec</code> is an async method, and the constructor of my custom <code>MyDbConnection</code> is not async.</p>
<p>I might be able to write my own async context manager that wraps the one that is returned by <code>asyncpg.create_pool</code>, but then I need to keep track of each connection and set the codecs on them only if they have not been set previously. It looks very clumsy.</p>
<p>Since this problem (e.g. automatically setting codecs) seems to be a very common requirement to me, there must be a better way to do this, I just can't figure it out.</p>
<p>So this is my question: how do I create an asyncpg pool that sets various codecs automatically whenever it creates a <strong>new</strong> connection?</p>
|
<python><async-await><type-conversion><asyncpg>
|
2023-06-11 12:44:13
| 2
| 4,240
|
nagylzs
|
76,450,495
| 3,247,006
|
How to enable Pylint extension to show "E202" error on VSCode?
|
<p>I wrote the code below on <a href="https://code.visualstudio.com/" rel="nofollow noreferrer">VSCode</a>:</p>
<pre class="lang-py prettyprint-override"><code>"""Say Hello"""
print("Hello" )
</code></pre>
<p>Then, <a href="https://marketplace.visualstudio.com/items?itemName=ms-python.flake8" rel="nofollow noreferrer">Flake8 extension</a> can show <code>E202</code> error for <code>print("Hello" )</code> as shown below:</p>
<p><a href="https://i.sstatic.net/jLxFJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jLxFJ.png" alt="enter image description here" /></a></p>
<blockquote>
<p>whitespace before ')' Flake8(E202)</p>
</blockquote>
<p>But, <a href="https://marketplace.visualstudio.com/items?itemName=ms-python.pylint" rel="nofollow noreferrer">Pylint extension</a> cannot show <code>E202</code> error for <code>print("Hello" )</code> as shown below:</p>
<p><a href="https://i.sstatic.net/aJ2pd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aJ2pd.png" alt="enter image description here" /></a></p>
<p>So, how can I enable <strong>Pylint extension</strong> to show <code>E202</code> error on VSCode?</p>
|
<python><visual-studio-code><pylint><flake8>
|
2023-06-11 12:32:54
| 2
| 42,516
|
Super Kai - Kazuya Ito
|
76,450,285
| 275,002
|
Youtube: Exceed quota error for newly created API
|
<p>I enabled the YouTube Data API an hour ago and made a few requests, and I am already getting the following error:</p>
<pre><code>b'{\n "error": {\n "code": 403,\n "message": "The request cannot be completed because you have exceeded your \\u003ca href=\\"/youtube/v3/getting-started#quota\\"\\u003equota\\u003c/a\\u003e.",\n "errors": [\n {\n "message": "The request cannot be completed because you have exceeded your \\u003ca href=\\"/youtube/v3/getting-started#quota\\"\\u003equota\\u003c/a\\u003e.",\n "domain": "youtube.quota",\n "reason": "quotaExceeded"\n }\n ]\n }\n}\n'
</code></pre>
<p>When I go to dashboard I see the following:</p>
<p><a href="https://i.sstatic.net/js7FG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/js7FG.png" alt="enter image description here" /></a></p>
<p>and</p>
<p><a href="https://i.sstatic.net/fcct8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fcct8.png" alt="enter image description here" /></a></p>
<p>and
<a href="https://i.sstatic.net/OOIFW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OOIFW.png" alt="enter image description here" /></a></p>
<p>Below is the code I am trying:</p>
<pre><code>def upload_video(video_file, channel_id, youtube_object):
try:
# Define the video metadata
request_body = {
'snippet': {
'title': 'TESTING BY ME',
'description': 'Your video description',
'tags': ['tag1', 'tag2', 'tag3'],
'categoryId': '22', # Set the appropriate category ID
'channelId': channel_id # Set the channel ID
},
'status': {
'privacyStatus': 'private' # Set the privacy status of the video
}
}
# Create a media upload object for the video file
media = MediaFileUpload(video_file)
# Execute the video insert request
response = youtube_object.videos().insert(
part='snippet,status',
body=request_body,
media_body=media
).execute()
# Print the video ID of the uploaded video
print('Video uploaded. Video ID:', response['id'])
except HttpError as e:
print('An HTTP error occurred:')
print(e.content)
except Exception as e:
print('An error occurred:')
print(e)
</code></pre>
|
<python><youtube-api><youtube-data-api>
|
2023-06-11 11:31:08
| 1
| 15,089
|
Volatil3
|
76,450,127
| 733,002
|
python string decode displayed as byte array in list context
|
<p>Why does this string print as escaped bytes when it appears inside a list, but print normally in a print statement, even though its type is string, not bytearray?</p>
<pre><code>stringList = []
# Comparing a string, and a string decoded from a byte array
theString = "Hello World1"
value = byteArray.decode("utf-8") # byteArray is set externally, but prints correctly below
# Types are the same
print("theString type: " + str(type(theString)))
print("value type: " + str(type(value)))
# Value are displayed the same
print("theString: " + theString)
print("value: " + value)
# Add each to list
stringList.append(theString)
stringList.append(value)
# the value string prints as a byte array
print(stringList)
</code></pre>
<p>Output:</p>
<pre><code>theString type: <class 'str'>
value type: <class 'str'>
theString: Hello World1
value: Hello World0
['Hello World1', 'H\x00\x00\x00e\x00\x00\x00l\x00\x00\x00l\x00\x00\x00o\x00\x00\x00\x00\x00\x00W\x00\x00\x00o\x00\x00\x00r\x00\x00\x00l\x00\x00\x00d\x00\x00\x000\x00\x00\x00']
</code></pre>
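The repeating `\x00\x00\x00` pattern in the list output is the clue: the bytes look like UTF-32-LE data decoded as UTF-8 (an assumption, since `byteArray` is set externally). `print()` writes the NUL characters invisibly, while the list display uses `repr()`, which escapes them. A minimal sketch of that hypothesis:

```python
# Hypothetical reproduction: byteArray holds UTF-32-LE data, not UTF-8
byteArray = "Hello World0".encode("utf-32-le")

wrong = byteArray.decode("utf-8")      # the NUL bytes decode "successfully" to U+0000
right = byteArray.decode("utf-32-le")  # the intended text

print(wrong)         # looks like "Hello World0" because NULs are invisible
print([wrong])       # the list display uses repr(), exposing the \x00 escapes
print(right == "Hello World0")
```

If this matches, decoding with the correct codec (here `utf-32-le`) rather than stripping NULs afterwards is the cleaner fix.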
|
<python><python-bytearray>
|
2023-06-11 10:47:21
| 1
| 1,031
|
Joe
|
76,449,938
| 3,059,024
|
Problem with starting and stopping a "HTTPServer" multiple times in Python
|
<p>I have an HTTPServer in Python whose job it is to change a status variable between states. The essence of my problem is that the server starts and stops once or (sometimes) twice, but not more times. It appears that the server does not start on the offending subsequent attempt, although I'm not sure why. Please see my code:</p>
<pre><code>from typing import Tuple, Type, Optional
import requests
import logging
import threading
import time
from enum import Enum
from http.server import HTTPServer, BaseHTTPRequestHandler
from socketserver import BaseServer
logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
class Status(Enum):
START_STATE = 0
END_STATE = 1
class StatusTracker:
# pure abstract class. Has the "status" property. that is it. Implemented by UserOfHttpServer below.
def __init__(self):
self._status = Status.START_STATE
@property
def status(self):
return self._status
@status.setter
def status(self, newStatus):
self._status = newStatus
class HttpHandler(BaseHTTPRequestHandler):
"""Simple request handler -- churns the statusTracker.status variable. """
def __init__(self, request, client_address: Tuple[str, int], server: BaseServer, StatusTracker):
self.statusTracker: Optional[StatusTracker] = StatusTracker
super().__init__(request=request, client_address=client_address, server=server)
def do_GET(self):
logger.debug(f"handling GET request: {self.path}")
self.send_response(200)
if self.path.endswith('/'):
# mini landing page for browser
self.send_header("Content-type", "text/html")
self.end_headers()
landing = f"""<html>
<p>Request: {self.path}</p>
<body>
<p>Listening to Callback Notifications.</p>
</body>
</html>
"""
self.wfile.write(landing.strip().encode("utf-8"))
return
# otherwise we send plain text only
self.send_header("Content-type", "text/plain")
self.end_headers()
logger.info(f"HTTPServer received request: \"{self.path}\". ")
if self.path == '/favicon.ico':
# dont know what this is, but it seems I need to handle and ignore
# https://stackoverflow.com/a/65528815/3059024
return
if self.path.endswith("/test_connection"):
response_content = f"HttpServer is running and responsive"
self.wfile.write(response_content.encode())
return
elif self.path.endswith("/start_state"):
self.statusTracker.status = Status.START_STATE
elif self.path.endswith("/end_state"):
self.statusTracker.status = Status.END_STATE
else:
logger.warning(f"unhandled request: \"{self.path}\"")
class MyHttpServer(HTTPServer):
"""The server itself"""
def __init__(self, address: Tuple[str, int], handler: Type[HttpHandler],
StatusTracker: Optional[StatusTracker] = None):
self.allow_reuse_address = True
self.allow_reuse_port = True
self.statusTracker = StatusTracker
super(MyHttpServer, self).__init__(address, handler)
self.serverThread: threading.Thread
self.serverStartedEvent: threading.Event = threading.Event()
self.serverFinishedEvent: threading.Event = threading.Event()
def finish_request(self, request, client_address) -> None:
assert self.statusTracker is not None
self.RequestHandlerClass(request, client_address, self, self.statusTracker) # type: ignore
def serve_forever(self, poll_interval: float = -1) -> None:
try:
super().serve_forever(poll_interval)
except Exception as e:
logger.error(str(e))
self.shutdownFromNewThread()
def serveInNewThread(self) -> threading.Thread:
logger.info(f"Starting server on \"{self.server_address}")
self.serverThread = threading.Thread(target=self.serve_forever, name="HttpServerThread")
self.serverThread.start()
try:
response = requests.get(f"http://{self.server_address[0]}:{self.server_address[1]}/test_connection",
timeout=5)
response.raise_for_status() # Raise an exception for non-2xx status codes
logger.critical(response.text)
self.serverStartedEvent.set()
self.serverFinishedEvent.clear()
return self.serverThread
except requests.exceptions.RequestException as e:
if "timed out" in str(e):
raise ConnectionError(f"Could not start and connect to HttpServer. Error message: \"{e}\"")
else:
raise e
def shutdown(self) -> None:
logger.info("called shutdown")
super().shutdown()
logger.info(f"Finished shutdown")
# make sure server thread has finished
isServerThreadStillRunning = self.serverThread.is_alive()
while isServerThreadStillRunning:
logger.info(f"Is server thread still alive?: {self.serverThread.is_alive()}")
time.sleep(0.1)
isServerThreadStillRunning = self.serverThread.is_alive()
self.serverFinishedEvent.set()
self.serverStartedEvent.clear()
def shutdownFromNewThread(self):
thread = threading.Thread(target=self.shutdown, name="ShutdownHttpServerThread")
thread.start()
self.serverFinishedEvent.wait()
class UserOfMyHttpServer:
# implements the ServerStatus interface
def __init__(self, port: int):
self.port = port
self._status = Status.START_STATE
self.statusStartStateEvent: threading.Event = threading.Event()
self.statusStartStateEvent.set()
self.statusEndStateEvent: threading.Event = threading.Event()
# keep order
self.events = [self.statusStartStateEvent, self.statusEndStateEvent]
# FYI no http:// is needed with local host. Might be with a normal address...
self.httpServer: MyHttpServer = MyHttpServer(
("localhost", self.port), HttpHandler, self
)
self.httpServerThread = self.httpServer.serveInNewThread()
x = 4
def shutdown(self):
# but the server is not yet shutdown. We need to do so.
self.httpServer.shutdownFromNewThread()
# and wait for it to signal.
# note: Do not try to turn off http server from within the server
if self.httpServer.serverStartedEvent.is_set() and not self.httpServer.serverFinishedEvent.is_set():
self.httpServer.serverFinishedEvent.wait()
@property
def status(self):
return self._status
@status.setter
def status(self, value):
logger.debug(f"{self.__class__.__name__} status changed from {self._status} to {value}")
self.events[self._status.value].clear()
self._status = value
self.events[self._status.value].set()
</code></pre>
<p>And the code to test it and demonstrate the problem.</p>
<pre><code>
def createUserOfHttpServer(port) -> UserOfMyHttpServer:
"""Make the instance of UserOfMyHttpServer"""
user = UserOfMyHttpServer(port)
logger.info("waiting for server start event")
user.httpServer.serverStartedEvent.wait()
logger.info("server has started")
return user
def shutdownUserOfServer(userOfServer: UserOfMyHttpServer):
"""Shutdown the instance of UserOfMyHttpServer"""
userOfServer.shutdown()
logger.info("waiting for server ending event")
userOfServer.httpServer.serverFinishedEvent.wait()
logger.info("server has finished")
def startAndStopServer():
# start and stop the server
logger.info("Attempt start and stop")
user = createUserOfHttpServer(1234)
shutdownUserOfServer(user)
logger.info("Start and stop successful")
def test_startAndStopOnce():
startAndStopServer()
def test_startAndStopTwice():
startAndStopServer()
startAndStopServer()
def test_startAndStop_3():
for i in range(3):
startAndStopServer()
</code></pre>
<p>You'll see here that <code>startAndStopServer</code> works once, and sometimes twice. But (for me) 3 or more times raises the <code>ConnectionError</code> from the bottom of <code>MyHttpServer.serveInNewThread</code> due to time out.</p>
<pre><code> except requests.exceptions.RequestException as e:
if "timed out" in str(e):
> raise ConnectionError(f"Could not start and connect to HttpServer. Error message: \"{e}\"")
E ConnectionError: Could not start and connect to HttpServer. Error message: "HTTPConnectionPool(host='127.0.0.1', port=1234): Read timed out. (read timeout=5)"
</code></pre>
<p>Could anybody help me get this code to pass the "multiple start test"? More generally, do you have any suggestions for making this code more robust? Thanks in advance!</p>
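For comparison, a stripped-down start/stop cycle (a sketch, not a drop-in replacement for the classes above) survives repeated runs when `server_close()` is called after `shutdown()`; without it the listening socket stays open, which is one plausible cause of later connection timeouts on the same port.

```python
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler

class QuietHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, fmt, *args):  # silence per-request logging
        pass

def start_and_stop(port: int) -> None:
    server = HTTPServer(("localhost", port), QuietHandler)
    thread = threading.Thread(target=server.serve_forever)
    thread.start()
    server.shutdown()      # stops the serve_forever loop
    thread.join()          # wait for the server thread to exit
    server.server_close()  # releases the socket so the next cycle can bind

for _ in range(3):
    start_and_stop(12346)
print("three start/stop cycles completed")
```

In the code above, adding a `self.server_close()` call at the end of `MyHttpServer.shutdown` would be the analogous change.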
|
<python><http><server><tcp>
|
2023-06-11 09:56:22
| 0
| 7,759
|
CiaranWelsh
|
76,449,834
| 11,644,523
|
How can I select specific model in dbt?
|
<p>Based on the dbt <a href="https://docs.getdbt.com/reference/node-selection/syntax" rel="nofollow noreferrer">documentation</a>, we should be able to select specific models within a certain path.</p>
<p><code>dbt run --select myproject/models/abc/mymodel.py</code></p>
<p>However this does not work for me, I get this error:</p>
<p><code>The selection criterion 'myproject/models/abc/mymodel.py' does not match any nodes</code></p>
<p>I am using dbt python models in case that matters.</p>
<p>My dbt project tree sample:</p>
<pre><code>โโโ models
โ โโโ other.py
โ โโโ other2.py
โ โโโ config.yml
โ โโโ abc
โ โโโ config.yml
โ โโโ mymodel.py
</code></pre>
|
<python><dbt>
|
2023-06-11 09:29:31
| 2
| 735
|
Dametime
|
76,449,729
| 7,676,365
|
Generate functions in python from source tables
|
<p>I have an Excel file with many sheets. Each sheet contains this type of table for some category, and the category name is the sheet name. The tables all have the same structure; only the number of rows and the values differ.</p>
<pre><code>cluster left right keyval
1 0 3 kg
2 3 8
3 8 15
4 15 50
</code></pre>
<p>In Python, I need to generate this function automatically from the table:</p>
<pre><code>def convert_range(row: pd.Series) -> str:
if row["kg"] <= 3:
return "0-3"
elif 3 < row["kg"] <= 8:
return "4-8"
elif 8 < row["kg"] <= 15:
return "9-15"
elif 15 < row["kg"] <= 50:
return "16 plus"
</code></pre>
<p>I hope I have described my problem clearly. I absolutely don't know how to do that...</p>
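One way to avoid generating source code at all (a sketch, under the assumption that each sheet looks like the table above, with the column name in the first `keyval` cell) is to build a closure around `pd.cut`. Note the labels here are built mechanically as `left-right` strings, so they differ slightly from the hand-written `"4-8"` / `"16 plus"` style in the example.

```python
import pandas as pd

# One sheet's table, as read e.g. with pd.read_excel(path, sheet_name=...)
table = pd.DataFrame({
    "cluster": [1, 2, 3, 4],
    "left":    [0, 3, 8, 15],
    "right":   [3, 8, 15, 50],
    "keyval":  ["kg", None, None, None],
})

def make_converter(table: pd.DataFrame):
    key = table["keyval"].iloc[0]                      # column to bin, e.g. "kg"
    edges = [table["left"].iloc[0]] + list(table["right"])
    labels = [f"{lo}-{hi}" for lo, hi in zip(table["left"], table["right"])]

    def convert_range(row: pd.Series) -> str:
        cat = pd.cut([row[key]], bins=edges, labels=labels, include_lowest=True)
        return str(cat[0])

    return convert_range

convert_kg = make_converter(table)
print(convert_kg(pd.Series({"kg": 5})))   # falls in the (3, 8] bin
```

Calling `make_converter` once per sheet gives one converter function per category without any code generation.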
|
<python><python-3.x>
|
2023-06-11 09:00:49
| 0
| 403
|
314mip
|
76,449,593
| 3,943,868
|
Why does this tf-idf model give 0 similarity?
|
<p>I have two strings, which are different only slightly:</p>
<pre><code>str1 = 'abcdefgh'
str2 = 'abcdef-gh'
</code></pre>
<p>The only difference is that the second string contains a hyphen. But tf-idf gives 0 similarity:</p>
<p>Code to compute tf-idf similarity is below:</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
def compute_cosine_similarity(str1, str2):
# Create a TF-IDF vectorizer
vectorizer = TfidfVectorizer()
# Compute the TF-IDF matrix for the two strings
    tfidf_matrix = vectorizer.fit_transform([str1, str2])
# Compute the cosine similarity between the two TF-IDF vectors
similarity_matrix = cosine_similarity(tfidf_matrix[0], tfidf_matrix[1])
# Extract the similarity score from the matrix
similarity_score = similarity_matrix[0][0]
return similarity_score
similarity = compute_cosine_similarity(str1, str2)
</code></pre>
<p>But if I change to:</p>
<pre><code>str1 = 'abcdef-gh'
str2 = 'abcdef-gh'
</code></pre>
<p>The similarity is 1. It seems that tf-idf doesn't like certain special symbols, like '-', appearing on only one side of the comparison.</p>
<p>Why is that?</p>
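The behavior is easier to see by inspecting the analyzer directly: `TfidfVectorizer`'s default `token_pattern` keeps only runs of two or more word characters, so the hyphen splits one string into two tokens while the other stays whole, leaving the two documents with no shared terms and hence zero cosine similarity.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# build_analyzer() exposes exactly how each document will be tokenized
analyze = TfidfVectorizer().build_analyzer()
print(analyze("abcdefgh"))    # ['abcdefgh']
print(analyze("abcdef-gh"))   # ['abcdef', 'gh'] -- the hyphen acts as a separator
```

For character-level comparisons, passing something like `analyzer="char"` with an `ngram_range` is one way to make near-identical strings score highly.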
|
<python><tf-idf><cosine-similarity><tfidfvectorizer>
|
2023-06-11 08:19:56
| 1
| 7,909
|
marlon
|
76,449,568
| 20,740,043
|
Add a new (binary) column in a dataframe, based on certain condition, for every group/id
|
<p>I have the below dataframe:</p>
<pre><code>#Load the required libraries
import pandas as pd
#Create dataset
data = {'id': [1, 1, 1, 1, 1,1, 1, 1, 1, 1, 1,
2, 2,2,2,2,
3, 3, 3, 3, 3, 3,
4, 4,4,4,4,4,4,4,
5, 5, 5, 5, 5,5, 5, 5,5,],
'cycle': [1,2, 3, 4, 5,6,7,8,9,10,11,
1,2, 3,4,5,
1,2, 3, 4, 5,6,
1,2,3,4,5,6,7,8,
1,2, 3, 4, 5,6,7,8,9,],
'Salary': [5, 6, 7,8,9,6,4,12,5,14,15,
4, 5,6,7,8,
5,8,4,7,12,1,
8,1,2,7,4,5,8,1,
1, 4,9,10,11,7,13,4,15,],
'Children': ['No', 'Yes', 'Yes', 'Yes', 'Yes', 'No','No', 'Yes', 'Yes', 'Yes', 'No',
'Yes', 'No', 'Yes', 'No', 'Yes',
'No','Yes', 'Yes', 'No','No', 'Yes',
'Yes','Yes', 'Yes', 'No','No', 'Yes', 'Yes', 'Yes',
'No', 'Yes', 'No', 'No', 'Yes', 'Yes', 'Yes', 'Yes', 'No',],
'Days': [123, 128, 66, 66, 120, 141, 52,96, 120, 141, 52,
96, 120,128, 66, 120,
15,123, 128, 66, 120, 141,
141,128, 66, 123, 128, 66, 120,141,
123, 128, 66, 123, 128, 66, 120, 141, 52,],
}
#Convert to dataframe
df = pd.DataFrame(data)
print("df = \n", df)
</code></pre>
<p>The above dataframe looks as such:</p>
<p><a href="https://i.sstatic.net/FIzNq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FIzNq.png" alt="enter image description here" /></a></p>
<p>Now, I need to add a binary column in this dataframe such that, in every group/id, whenever Salary >= 7, Binary value should be 1, else 0.</p>
<p>For example, for id=1, the 'Salary' column is [5, 6, 7, 8, 9, 6, 4, 12, 5, 14, 15]. Hence, the Binary column should be [0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1].</p>
<p>The new dataframe looks as such:</p>
<p><a href="https://i.sstatic.net/qW2Pi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qW2Pi.png" alt="enter image description here" /></a></p>
<p>Can somebody please let me know how do I achieve this task in Python?</p>
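Since the `Salary >= 7` rule uses the same threshold for every row regardless of group, no groupby is actually needed; a vectorized comparison does it in one line (a sketch, assuming the column name `Binary` is acceptable):

```python
import pandas as pd

df = pd.DataFrame({"id": [1, 1, 1, 2, 2],
                   "Salary": [5, 9, 7, 4, 12]})

# Row-wise condition -> boolean Series -> 0/1 integers
df["Binary"] = (df["Salary"] >= 7).astype(int)
print(df["Binary"].tolist())   # [0, 1, 1, 0, 1]
```

A groupby would only become necessary if the threshold itself varied per id (e.g. each group's median).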
|
<python><pandas><dataframe><group-by>
|
2023-06-11 08:12:58
| 2
| 439
|
NN_Developer
|
76,449,532
| 827,927
|
Compact way to save a list of subsets
|
<p>I have an input list L containing about 20-30 elements. I need to keep some of the subsets of L, that satisfy a given condition, so that I can iterate over them later. I can create the list of subsets in the following way:</p>
<pre><code>good_subsets = list([s for s in powerset(L) if condition(s)])
</code></pre>
<p>This works, but might consume a lot of memory. For example, if L has 20 elements, and 50% of them satisfy the condition, then there are about 500,000 subsets.</p>
<p>Is there a more compact way to store the good_subsets in a list? Note that it should be a list and not a generator, since I need to iterate over this list many times later on, and I do not want to generate it each time anew.</p>
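One common trick (a sketch; the `condition` here is a placeholder) is to store each good subset as an integer bitmask over L rather than as a tuple of elements: one small int per subset instead of a tuple object holding many references, with the elements rebuilt on demand during each iteration pass.

```python
L = list(range(10))              # small example; the same idea works for 20-30 elements

def condition(subset):           # placeholder condition for illustration
    return sum(subset) % 7 == 0

def unpack(mask):
    """Rebuild the subset encoded by an integer bitmask."""
    return [x for i, x in enumerate(L) if mask >> i & 1]

# One small int per good subset instead of one tuple per subset
good_masks = [m for m in range(1 << len(L)) if condition(unpack(m))]

# Iterate as many times as needed, unpacking lazily each pass
for mask in good_masks[:3]:
    print(unpack(mask))
```

Wrapping `good_masks` in an `array('L', ...)` or a NumPy integer array compresses it further, since the masks for 20-30 elements all fit in a 32-bit word.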
|
<python><data-structures><subset><memory-efficient>
|
2023-06-11 08:00:05
| 0
| 37,410
|
Erel Segal-Halevi
|
76,449,458
| 20,357,303
|
Validation loss as nan in a validation function
|
<p>I have used this dataset to perform a simple linear regression using PyTorch</p>
<pre><code>DATASET_URL = "https://gist.github.com/BirajCoder/5f068dfe759c1ea6bdfce9535acdb72d/raw/c84d84e3c80f93be67f6c069cbdc0195ec36acbd/insurance.csv"
</code></pre>
<p>Converted the dataframe to numpy arrays</p>
<pre><code> # Make a copy of the original dataframe
dataframe1 = dataframe.copy(deep=True)
# Convert non-numeric categorical columns to numbers
for col in categorical_cols:
dataframe1[col] = dataframe1[col].astype('category').cat.codes
# Extract input & outupts as numpy arrays
inputs_array = dataframe1[input_cols].to_numpy()
targets_array = dataframe1[output_cols].to_numpy()
return inputs_array, targets_array
</code></pre>
<pre><code>inputs_array, targets_array = dataframe_to_arrays(dataframe)
inputs_array, targets_array
</code></pre>
<p>Then converted them to tensors</p>
<pre><code>inputs = torch.tensor(inputs_array, dtype=torch.float32)
targets = torch.tensor(targets_array, dtype=torch.float32)
</code></pre>
<p>Created a tensor dataset as
<code>dataset = TensorDataset(inputs, targets)</code></p>
<p>performed the split</p>
<pre><code>from torch.utils.data import random_split
val_percent = 0.2 # between 0.1 and 0.2
val_size = int(num_rows * val_percent)
train_size = num_rows - val_size
train_ds, val_ds = random_split(dataset,[train_size,val_size])
</code></pre>
<p>used DataLoader with <code>batch size of 128</code></p>
<pre><code>train_loader = DataLoader(train_ds, batch_size, shuffle=True)
val_loader = DataLoader(val_ds, batch_size)
</code></pre>
<p>Created as extended class</p>
<pre><code>class InsuranceModel(nn.Module):
def __init__(self):
super().__init__()
        self.linear = nn.Linear(input_size, output_size)  # hint: use input_size & output_size defined above
def forward(self, xb):
# xb = xb.view(xb.size(0), -1)
# xb = xb.reshape(-1, 5)
out = self.linear(xb)
return out
def training_step(self, batch):
inputs, targets = batch
# Generate predictions
out = self(inputs)
# Calcuate loss
loss = F.mse_loss(out,targets)
return loss
def validation_step(self, batch):
inputs, targets = batch
# Generate predictions
out = self(inputs)
# Calculate loss
loss = F.mse_loss(out,targets)
return {'val_loss': loss.detach()}
def validation_epoch_end(self, outputs):
batch_losses = [x['val_loss'] for x in outputs]
epoch_loss = torch.stack(batch_losses).mean() # Combine losses
return {'val_loss': epoch_loss.item()}
def epoch_end(self, epoch, result, num_epochs):
# Print result every 20th epoch
if (epoch+1) % 20 == 0 or epoch == num_epochs-1:
print("Epoch [{}], val_loss: {:.4f}".format(epoch+1, result['val_loss']))
</code></pre>
<p>Now with <code>model = InsuranceModel()</code> and</p>
<pre><code>def evaluate(model, val_loader):
outputs = [model.validation_step(batch) for batch in val_loader]
return model.validation_epoch_end(outputs)
def fit(epochs, lr, model, train_loader, val_loader, opt_func=torch.optim.SGD):
history = []
optimizer = opt_func(model.parameters(), lr)
for epoch in range(epochs):
# Training Phase
for batch in train_loader:
loss = model.training_step(batch)
loss.backward()
optimizer.step()
optimizer.zero_grad()
# Validation phase
result = evaluate(model, val_loader)
model.epoch_end(epoch, result, epochs)
history.append(result)
return history
</code></pre>
<p>this give me <code>val_loss : nan</code></p>
<pre><code>epochs = 30
lr = 1e-2
history1 = fit(epochs, lr, model, train_loader, val_loader)
</code></pre>
<p>The warning I get is</p>
<p>UserWarning: Using a target size (torch.Size([128, 5])) that is different to the input size (torch.Size([128, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
loss = F.mse_loss(out,targets)</p>
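That warning is likely the real problem: with `out` of shape `(128, 1)` and `targets` of shape `(128, 5)`, `mse_loss` broadcasts both to `(128, 5)` and compares each prediction against all five target columns, which makes the loss meaningless and can drive it to `nan` as training diverges. A small sketch of the shape issue, with made-up tensors since `output_cols` is defined elsewhere:

```python
import torch
import torch.nn.functional as F

out = torch.randn(8, 1)            # one prediction per sample
bad_targets = torch.randn(8, 5)    # five target columns selected by mistake
good_targets = torch.randn(8, 1)   # e.g. only the column being predicted

# (8, 1) vs (8, 5) broadcasts to (8, 5): 40 comparisons instead of 8
assert torch.broadcast_shapes(out.shape, bad_targets.shape) == (8, 5)

loss = F.mse_loss(out, good_targets)   # matching shapes: no warning
print(loss.shape)                      # a 0-dim scalar tensor
```

Checking that `output_cols` selects exactly one column (so `targets.shape` is `(batch, 1)`), or widening `output_size` to match, should remove the warning; a lower learning rate may also be needed if the raw targets are large.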
|
<python><pytorch>
|
2023-06-11 07:35:51
| 1
| 448
|
Shaik Naveed
|
76,449,432
| 1,793,229
|
gspread tests giving error "google.auth.exceptions.RefreshError"
|
<p>I downloaded gspread from github (<a href="https://github.com/burnash/gspread" rel="nofollow noreferrer">https://github.com/burnash/gspread</a>). Installed all the necessary packages. Tried running test using
<code>gspread$ pytest tests/spreadsheet_test.py</code></p>
<p>and all the tests failed with similar error</p>
<pre><code>========================================================== ERRORS ==========================================================
_________________________________ ERROR at setup of SpreadsheetTest.test_add_del_worksheet _________________________________
self = <tests.spreadsheet_test.SpreadsheetTest testMethod=test_add_del_worksheet>
client = <gspread.client.BackoffClient object at 0x7f07c002b350>
request = <SubRequest 'init' for <TestCaseFunction test_add_del_worksheet>>
@pytest.fixture(scope="function", autouse=True)
def init(self, client, request):
name = self.get_temporary_spreadsheet_title(request.node.name)
> SpreadsheetTest.spreadsheet = client.create(name)
tests/spreadsheet_test.py:17:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
gspread/client.py:247: in create
r = self.request("post", DRIVE_FILES_API_V3_URL, json=payload, params=params)
gspread/client.py:599: in request
return super().request(*args, **kwargs)
gspread/client.py:80: in request
response = getattr(self.session, method)(
/usr/lib/python3/dist-packages/requests/sessions.py:635: in post
return self.request("POST", url, data=data, json=json, **kwargs)
/usr/lib/python3/dist-packages/google/auth/transport/requests.py:218: in request
self.credentials.refresh(auth_request_with_timeout)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <tests.conftest.DummyCredentials object at 0x7f07bf275a50>
request = functools.partial(<google.auth.transport.requests.Request object at 0x7f07bf276210>, timeout=None)
@_helpers.copy_docstring(credentials.Credentials)
def refresh(self, request):
if (self._refresh_token is None or
self._token_uri is None or
self._client_id is None or
self._client_secret is None):
> raise exceptions.RefreshError(
'The credentials do not contain the necessary fields need to '
'refresh the access token. You must specify refresh_token, '
'token_uri, client_id, and client_secret.')
E google.auth.exceptions.RefreshError: The credentials do not contain the necessary fields need to refresh the access token. You must specify refresh_token, token_uri, client_id, and client_secret.
/usr/lib/python3/dist-packages/google/oauth2/credentials.py:128: RefreshError
</code></pre>
<p>How to make them pass?
I was hoping unittest will run on mint condition code.</p>
|
<python><google-sheets><pytest><gspread>
|
2023-06-11 07:27:55
| 2
| 553
|
Sharad
|
76,449,394
| 761,986
|
eventlet.monkey_patch cause: maximum recursion depth exceeded in ssl.py
|
<p>In a Python 3.10.9 Docker container, the app depends on a critical package that uses <code>eventlet.monkey_patch(socket=True)</code>. Unfortunately, this causes <code>RecursionError: maximum recursion depth exceeded</code> when <code>requests.Session().get()</code> is called.</p>
<p>Here is the code snippet</p>
<pre><code>import requests
from requests.adapters import HTTPAdapter
s = requests.Session()
s.mount('https://', HTTPAdapter(pool_connections=2))
s.get('https://stackoverflow.com/questions/34837026/whats-the-meaning-of-pool-connections-in-requests-adapters-httpadapter')
# <Response [200]>
import eventlet
eventlet.monkey_patch(socket=True)
s.get('https://stackoverflow.com/questions/34837026/whats-the-meaning-of-pool-connections-in-requests-adapters-httpadapter')
# <Response [200]>
s.get('https://www.google.com/')
# RecursionError: maximum recursion depth exceeded
s.get('https://stackoverflow.com/questions/34837026/whats-the-meaning-of-pool-connections-in-requests-adapters-httpadapter')
# <Response [200]>
</code></pre>
<p>Is there any possible way to make <code>urllib3.HTTPSConnection.connect(self)</code> cooperate with a patched <code>socket</code>?</p>
<p>So far the only trick I have found is to warm up the connection pools before monkey_patch is applied.</p>
<p>I must deal with <code>eventlet.monkey_patch(socket=True)</code>. It is not an option to replace it with <code>gevent</code> or to rewrite.</p>
|
<python><connection-pooling><monkeypatching>
|
2023-06-11 07:12:10
| 1
| 868
|
oliver nadj
|
76,449,278
| 3,720,668
|
How does python resolve the order when module fully qualified name is the same as object name
|
<p>Suppose I have the following dir:</p>
<pre><code>/foo
__init__.py
/logger
db_logger.py
stream_logger.py
</code></pre>
<p>inside <code>foo/__init__.py</code> I defined an object called <code>logger</code>, which has a method <code>info</code> on it</p>
<p>so when I do</p>
<pre><code>from foo import logger
logger.info(...)
</code></pre>
<p>In some places it works fine and correctly prints the messages I put in info(), while
in other places I get an error:</p>
<pre><code>AttributeError: module 'foo.logger' has no attribute 'info'
</code></pre>
<p>Apparently in the 2nd case, it's resolving it to the module and couldn't find the info attribute, but it's unclear to me how Python resolves the order when there is a conflict between module name and object name</p>
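The order-dependence can be reproduced: `from foo import logger` returns the object defined in `__init__.py` until something imports the `foo.logger` submodule, at which point the import machinery rebinds the `logger` attribute on the package to the module object. A sketch that builds a throwaway package on disk (an assumption about the exact layout, since `db_logger.py` presumably does `import foo.logger.db_logger`-style imports somewhere):

```python
import os
import sys
import tempfile

# Build foo/__init__.py with a `logger` object, plus a foo/logger/ directory
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "foo")
os.makedirs(os.path.join(pkg, "logger"))
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("class _Logger:\n"
            "    def info(self, msg):\n"
            "        return msg\n"
            "logger = _Logger()\n")
with open(os.path.join(pkg, "logger", "db_logger.py"), "w") as f:
    f.write("")

sys.path.insert(0, tmp)

from foo import logger
assert logger.info("hi") == "hi"        # the object wins so far

import foo.logger.db_logger             # any submodule import rebinds foo.logger
import foo
assert not hasattr(foo.logger, "info")  # foo.logger is now the module
```

So whichever import runs first in a given process decides what `from foo import logger` yields; renaming either the object or the directory removes the ambiguity.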
|
<python><import>
|
2023-06-11 06:34:33
| 0
| 399
|
strisunshine
|
76,449,059
| 2,215,888
|
How can I determine the lowest value (age) of the top quartile bin using qcut()?
|
<p>How can I determine the lowest value (age) of the top quartile bin using <code>qcut()</code>?</p>
<p>The following DataFrame contains ages:</p>
<pre><code>df = pd.DataFrame({
'age': [2, 4, 6, 8, 10, 12, 14, 16]
})
</code></pre>
<p>First, I would like to separate the ages into 4 different bins. More specifically, I would like to have the same number of ages in each bin. The first bin would have ages 2 and 4. The second bin would have 6 and 8. The third bin would have 10 and 12. And, finally, the fourth bin would have 14 and 16. I would like to determine the lowest age in the top quartile which would be 14 in this example.</p>
<p>I tried setting <code>retbins=True</code> but it only returns the bin edges, not the actual age values belonging to each bin.</p>
<pre><code>result, bins = pd.qcut(
df['age'],
4, # A single number value
retbins=True
)
</code></pre>
<p>returns the following:</p>
<pre><code>result
0 (1.999, 5.5]
1 (1.999, 5.5]
2 (5.5, 9.0]
3 (5.5, 9.0]
4 (9.0, 12.5]
5 (9.0, 12.5]
6 (12.5, 16.0]
7 (12.5, 16.0]
Name: age, dtype: category
Categories (4, interval[float64, right]): [(1.999, 5.5] < (5.5, 9.0] < (9.0, 12.5] < (12.5, 16.0]]
bins
[ 2. 5.5 9. 12.5 16. ]
</code></pre>
<p>Is there something that I can use so that my desired output would be <code>14</code> in this example?</p>
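One way to get the 14 (a sketch using integer bin codes instead of the interval categories) is to ask `qcut` for labels `0..3` and take the minimum age inside the last bin:

```python
import pandas as pd

df = pd.DataFrame({"age": [2, 4, 6, 8, 10, 12, 14, 16]})

codes = pd.qcut(df["age"], 4, labels=False)       # bin index 0..3 per row
lowest_in_top = df.loc[codes == 3, "age"].min()   # smallest age in top quartile
print(lowest_in_top)   # 14
```

With `labels=False`, `qcut` returns the bin index for every row, so the boolean mask selects exactly the rows in the top quartile.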
|
<python><pandas><dataframe>
|
2023-06-11 04:59:07
| 2
| 708
|
enter_display_name_here
|
76,448,991
| 15,542,245
|
Replacing [word] [space] [word] strings with dictionary
|
<p>I am substituting one- and two-word suburb names in lines of addresses, matching dictionary keys and replacing them with the corresponding dictionary values. For example:</p>
<pre><code>32 Konini Street Eastbourne Retired
6 Moana Road Days Bay Retailer
</code></pre>
<p>Expected output:</p>
<pre><code>32 Konini Street _Eastbourne Retired
6 Moana Road _Days_Bay Retailer
</code></pre>
<p>Code:</p>
<pre><code># Parsing words of line for single/double word dictionary key
# replacing found instances with underscore version
lines = ['6 Moana Road Days Bay Retailer','32 Konini Street Eastbourne Retired']
sampleDictionary = {"Days Bay":"_Days_Bay","Eastbourne":"_Eastbourne"}
newString = ''
doubleWordLists = []
for i in range(len(lines)):
line = lines[i]
splits = line.split(' ')
# Unused block
for j in range(len(splits)):
if j + 1 < len(splits):
doubleWord = splits[j] + ' ' + splits[j + 1]
doubleWordLists.append(doubleWord)
# Compare a token with dictionary. If no match compare that token and next.
# Add appropriate token(s) to new string
for j in range(len(splits)):
if splits[j] != '\d':
for key in sampleDictionary:
value = sampleDictionary[key]
if key == splits[j]:
newString += value + ' '
else:
#if key == doubleWordLists[j]:
# newString += value + ' '
newString += splits[j] + ' '
else:
newString += splits[j] + ' '
print(newString)
</code></pre>
<p>Output:</p>
<pre><code>6 Moana Road Days Bay Retailer 32 Konini Street _Eastbourne Retired
</code></pre>
<p>As the code shows I have tried to access <code>doubleWordLists</code> as a way to match and replace the two word suburb in the string. See commented out lines. But I was not able to do it while keeping within index of <code>splits[j]</code></p>
<p>The issues are: supplying a two word match to a dictionary and having the means to substitute the returned dictionary <code>value</code> back into the string at the correct index.</p>
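A regex alternation built from the dictionary keys sidesteps the token-by-token bookkeeping entirely: sorting keys longest-first ensures "Days Bay" is matched as a unit before any shorter key could claim part of it, and the substitution lands at the correct position automatically. A sketch:

```python
import re

lines = ['6 Moana Road Days Bay Retailer',
         '32 Konini Street Eastbourne Retired']
sampleDictionary = {"Days Bay": "_Days_Bay", "Eastbourne": "_Eastbourne"}

# Longer keys first so multi-word suburbs match before any one-word key
pattern = re.compile("|".join(
    re.escape(key) for key in sorted(sampleDictionary, key=len, reverse=True)))

newLines = [pattern.sub(lambda m: sampleDictionary[m.group(0)], line)
            for line in lines]
print(newLines[0])   # 6 Moana Road _Days_Bay Retailer
print(newLines[1])   # 32 Konini Street _Eastbourne Retired
```

If a suburb name could appear as part of a longer word, wrapping each alternative in `\b` word boundaries would tighten the match.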
|
<python><dictionary>
|
2023-06-11 04:19:46
| 1
| 903
|
Dave
|
76,448,979
| 4,386,557
|
python syntax error in attempt to create decorator
|
<p>I don't understand the error I'm getting in this attempted decorator
declaration and use. It's my first try so it's probably something obvious.</p>
<p>I define the decorator in <code>utils.py</code>, import it to <code>test.py</code> where I apply the decorator as a property of a <code>dataclass</code> and then run <code>test.py</code>.</p>
<p>I get this error:</p>
<pre><code>#
#Runtime error :
#
File "/mnt/sda/hd2/projects/test/dbus/test.py", line 11
flags: List[str]
^^^^^
SyntaxError: invalid syntax
</code></pre>
<p>Here is utils.py</p>
<pre><code># utils.py
#
from dataclasses import dataclass, field
from typing import List
class ImmutableFieldError(Exception):
pass
def staticlist(default_value: List[str]) -> field:
def setter(self, value):
raise ImmutableFieldError("Cannot modify the static constant")
return field(default_factory=lambda: default_value, init=False, repr=False, hash=False, compare=False, default=None, metadata=None, fset=setter)
</code></pre>
<p>And the test.py</p>
<pre><code>#
# test.py
#
#!/usr/bin/python
from utils import staticlist
from dataclasses import dataclass
from typing import List
@dataclass
class Test:
on : str
@staticlist(['yes','no'])
flags: List[str]
</code></pre>
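Two separate issues collide here: a decorator line must be followed by a `def` or `class` statement, never by an annotated field (hence the `SyntaxError` on `flags`), and `dataclasses.field()` accepts no `fset` keyword, nor `default` together with `default_factory`. If the goal is simply a default list, the standard pattern is shown below (a sketch that drops the immutability requirement; true read-only fields would need a `frozen=True` dataclass or a property on a regular class):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Test:
    on: str
    # field(default_factory=...) is how a mutable default is attached;
    # a decorator cannot be applied to a field
    flags: List[str] = field(default_factory=lambda: ["yes", "no"])

t = Test(on="up")
print(t.flags)   # ['yes', 'no']
```

Each instance gets its own fresh list, which is exactly what the `default_factory` indirection buys over a shared class-level default.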
|
<python><python-decorators><python-dataclasses>
|
2023-06-11 04:11:30
| 0
| 1,239
|
Stephen Boston
|
76,448,764
| 13,442,734
|
Unable to query pinecone vector index using filter?
|
<p>I am having issues trying to query my pinecone index with a filter. It works without the filter however. Any help or advice would be greatly appreciated!</p>
<p>I have upserted into pinecone in the following way:</p>
<pre><code>for doc in tqdm(docs_restaurant):
chunks = text_splitter.split_text(str(doc.metadata))
for i, chunk in enumerate(chunks):
documents.append({
'id': f'{doc.page_content}_{i}',
'text': chunk,
'metadata': {
'file': 'restaurants'
}
})
for i in tqdm(range(0, len(documents), batch_size)):
i_end = min(len(documents), i+batch_size)
res = s.post(
f"{endpoint_url}/upsert",
headers=headers,
json={
"documents": documents[i:i_end]
}
)
</code></pre>
<p>The data is in pinecone. I can query it with no filter successfully ala:</p>
<pre><code>query_response = self.index.query(
top_k=query.top_k,
vector=query.embedding,
filter=None,
include_metadata=True,
)
</code></pre>
<p>Which returns the following:</p>
<pre><code>{'matches': [{'id': 'Del Taco_0_1',
'metadata': {'document_id': 'Del Taco_0',
'file': 'restaurants',
'text': "pic Beyond Burritos', 'Desserts & Shakes', "
"'Tacos', 'Meals', '20 Under $2 Menu']}"},
'score': 0.786260903,
'values': []},
{'id': 'Minos Take Out_0_1',
'metadata': {'document_id': 'Minos Take Out_0',
'file': 'restaurants',
'text': "izers', 'Wraps & Pitas', 'Light Meals']}"},
'score': 0.7722193,
'values': []},
'namespace': ''
}
</code></pre>
<p>However, when I try this:</p>
<pre><code>self.index.query(
top_k=query.top_k,
vector=query.embedding,
filter={'file': {'$eq': 'restaurants'}},
include_metadata=True,)
</code></pre>
<p>It returns nothing:</p>
<pre><code>{'matches': [], 'namespace': ''}
</code></pre>
|
<python><python-3.x><word-embedding><pinecone>
|
2023-06-11 01:45:58
| 1
| 377
|
Anthony Arena
|
76,448,559
| 3,817,456
|
What's causing "RuntimeError: No built-in server implementation installed." when I try a demo reactpy script in python?
|
<p>I hit <code>RuntimeError: No built-in server implementation installed.</code> when I try</p>
<pre><code>python -c "import reactpy; reactpy.run(reactpy.sample.SampleApp)"
</code></pre>
<p>which is a getting started app mentioned <a href="https://www.kdnuggets.com/2023/06/getting-started-reactpy.html" rel="nofollow noreferrer">here</a>. <code>python -m http.server</code> runs fine.
Full stacktrace is</p>
<pre><code>python -c "import reactpy; reactpy.run(reactpy.sample.SampleApp)"
2023-06-11T02:37:17+0300 | WARNING | The `run()` function is only intended for testing during development! To run in production, consider selecting a supported backend and importing its associated `configure()` function from `reactpy.backend.<package>` where `<package>` is one of ['starlette', 'fastapi', 'sanic', 'tornado', 'flask']. For details refer to the docs on how to run each package.
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.11/site-packages/reactpy/backend/default.py", line 59, in _default_implementation
implementation = next(all_implementations())
^^^^^^^^^^^^^^^^^^^^^^^^^^^
StopIteration
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/homebrew/lib/python3.11/site-packages/reactpy/backend/utils.py", line 37, in run
app = implementation.create_development_app()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/reactpy/backend/default.py", line 28, in create_development_app
return _default_implementation().create_development_app()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/reactpy/backend/default.py", line 62, in _default_implementation
raise RuntimeError("No built-in server implementation installed.")
</code></pre>
<p>Any ideas what might be the problem? I've not seen reference to this in a quick google search.</p>
|
<python><reactjs>
|
2023-06-10 23:43:54
| 1
| 6,150
|
jeremy_rutman
|
76,448,517
| 4,155,428
|
Implementation of blending option in Python (Blend If)
|
<p>I am currently working on a conditional blending implementation like the one used in Photoshop -> Blending Options -> Blend If. Anyone who is a bit familiar with Photoshop should
be aware of such functionality. For simplicity, let's assume a <code>Blend If</code> implementation just for the underlying image; anyway, this one is the most important to me. Photoshop's feature works in two modes. The first, simpler mode takes two values, shadows and highlights. Blending of the layer on top (foreground) with the underlying layer (background) is executed where the gray-scale pixels of the background lie in the interval given by [shadow, highlight]; otherwise the original background pixels are taken. I implemented such behavior in the function below.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Tuple
import numpy as np
def expand_as_rgba(image: np.ndarray) -> np.ndarray:
# Add alpha-channels, if they are not provided
if image.shape[2] == 3:
return np.dstack((image, np.ones(image.shape[:2] + (1,)) * 255)).astype(np.uint8)
return image
def normal_blend_if(
this_layer: np.ndarray,
underlying_layer: np.ndarray,
underlying_layer_shadows_range: Tuple = (0, 0),
underlying_layer_highlights_range: Tuple = (255, 255),
) -> np.ndarray:
bg_shadows, bg_highlights = underlying_layer_shadows_range, underlying_layer_highlights_range
# Expand with alpha if missing
foreground_array = expand_as_rgba(this_layer) / 255.0
background_array = expand_as_rgba(underlying_layer) / 255.0
# Extract the individual channels (R, G, B, A)
foreground_r, foreground_g, foreground_b, foreground_a = np.rollaxis(foreground_array, axis=-1)
background_r, background_g, background_b, background_a = np.rollaxis(background_array, axis=-1)
# Calculate the luminosity of the background image
background_luminosity = (0.299 * background_r + 0.587 * background_g + 0.114 * background_b) * 255.0
# Create the blend if condition based on the luminosity range
blend_if = (background_luminosity >= bg_shadows[0]) & (background_luminosity <= bg_highlights[1])
blend_if_broadcast = np.expand_dims(blend_if, 2)
foreground_rgb, background_rgb = foreground_array[:, :, :3], background_array[:, :, :3]
fga_broadast, bga_broadcast = np.expand_dims(foreground_a, 2), np.expand_dims(background_a, 2)
# Conditional blending
blended_rgb = np.where(
blend_if_broadcast,
(foreground_rgb * fga_broadast + background_rgb * bga_broadcast * (1 - fga_broadast)) / (
fga_broadast + bga_broadcast * (1 - fga_broadast)),
background_rgb
)
blended_a = np.where(
blend_if_broadcast,
fga_broadast + bga_broadcast * (1 - fga_broadast),
bga_broadcast
)
# Combine the blended channels back into a single RGBA image
blended_rgba = np.dstack((blended_rgb, blended_a))
# Scale the RGBA values back to the range [0, 255]
blended_rgba = (blended_rgba * 255).astype(np.uint8)
return blended_rgba
</code></pre>
<p>However, what is crucial for me is the more complex implementation, where the shadow and highlight thresholds are each split into two values. It eventually leads to smooth blending, but I have no idea what is going on, even though I watched several tutorials about Photoshop. It seems like additional blending is executed based on a scale defined by the shadow/highlight range, but I have no idea how it works or how to add it to my algorithm. The closest implementation I was able to derive is in the second snippet,</p>
<pre class="lang-py prettyprint-override"><code>def normal_complex_blend_if(
this_layer: np.ndarray,
underlying_layer: np.ndarray,
underlying_layer_shadows_range: Tuple = (0, 0),
underlying_layer_highlights_range: Tuple = (255, 255),
) -> np.ndarray:
bg_shadows, bg_highlights = underlying_layer_shadows_range, underlying_layer_highlights_range
# Expand with alpha if missing
foreground_array = expand_as_rgba(this_layer) / 255.0
background_array = expand_as_rgba(underlying_layer) / 255.0
# Extract the individual channels (R, G, B, A)
foreground_r, foreground_g, foreground_b, foreground_a = np.rollaxis(foreground_array, axis=-1)
background_r, background_g, background_b, background_a = np.rollaxis(background_array, axis=-1)
# Calculate the luminosity of the background image
background_luminosity = (0.299 * background_r + 0.587 * background_g + 0.114 * background_b) * 255.0
# Create the blend if condition based on the luminosity range
blend_if = (background_luminosity >= bg_shadows[0]) & (background_luminosity <= bg_highlights[1])
blend_if_broadcast = np.expand_dims(blend_if, 2)
# Calculate the blending factors for the shadow and highlight ranges
shadow_factor = np.interp(background_luminosity, [bg_shadows[0], bg_shadows[1]], [0, 1])
highlight_factor = np.interp(background_luminosity, [bg_highlights[0], bg_highlights[1]], [0, 1])
# Expand dimensions of alpha for further use
fga_broadast, bga_broadcast = np.expand_dims(foreground_a, 2), np.expand_dims(background_a, 2)
foreground_rgb, background_rgb = foreground_array[:, :, :3], background_array[:, :, :3]
shadow_factor = np.expand_dims(shadow_factor, 2)
highlight_factor = np.expand_dims(highlight_factor, 2)
blended_rgb = np.where(
blend_if_broadcast,
foreground_rgb + (background_rgb - foreground_rgb) * (1 - shadow_factor),
background_rgb + (foreground_rgb - background_rgb) * highlight_factor
)
blended_a = np.where(
blend_if_broadcast,
fga_broadast + bga_broadcast * (1 - fga_broadast),
bga_broadcast
)
blended_rgba = np.dstack((blended_rgb, blended_a))
# Scale the RGBA values back to the range [0, 255]
blended_rgba = (blended_rgba * 255).astype(np.uint8)
return blended_rgba
</code></pre>
<p>but it doesn't work properly if the foreground image already has its own mask on the input.
Here are the images I blend, together with the expected result and my result, obtained with</p>
<pre class="lang-py prettyprint-override"><code>blended = normal_complex_blend_if(
filtered_image,
underlying_layer,
underlying_layer_shadows_range=(10, 55),
underlying_layer_highlights_range=(255, 255)
)
</code></pre>
<p><strong>Inputs [two images, one of them is quite white text]:</strong></p>
<p><a href="https://i.sstatic.net/nCRCZ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nCRCZ.jpg" alt="background" /></a>
<a href="https://i.sstatic.net/Ye1GB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ye1GB.png" alt="foreground" /></a></p>
<p><strong>Output:</strong></p>
<p><a href="https://i.sstatic.net/4jI97.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4jI97.png" alt="my output" /></a>
<a href="https://i.sstatic.net/JDkQk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JDkQk.png" alt="expected result" /></a></p>
<p>Both functions take input images in the form of 0 to 255 uint8 numpy arrays of shape (h, w, ch).</p>
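<p>For clarity, the kind of smooth weight I imagine the split thresholds produce (this is my assumption about Photoshop's behavior, not something I found documented, and the threshold values below are hypothetical) is a piecewise-linear ramp:</p>

```python
import numpy as np

def blend_weight(luminosity, shadows=(10, 55), highlights=(200, 255)):
    # Weight is 0 below shadows[0], ramps linearly to 1 across
    # [shadows[0], shadows[1]], stays at 1, then ramps back down to 0
    # across [highlights[0], highlights[1]]. np.interp clamps outside ranges.
    rise = np.interp(luminosity, [shadows[0], shadows[1]], [0.0, 1.0])
    fall = np.interp(luminosity, [highlights[0], highlights[1]], [1.0, 0.0])
    return rise * fall

lum = np.array([0.0, 32.5, 55.0, 227.5, 255.0])
w = blend_weight(lum)  # ramps up, plateaus at 1, ramps back down
```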
<p>Can anyone help with this issue?
Thank You</p>
|
<python><image-processing><photoshop><blending><alphablending>
|
2023-06-10 23:21:03
| 1
| 530
|
s.t.e.a.l.t.h
|
76,448,472
| 15,239,717
|
How can I access Property Owner Details in Django
|
<p>I have a Django view that all users (Landlord, Agent and Prospect) can access, and on this view I want to display the name of the property owner on my template, but I only get None displayed whenever a Prospect user is on the view. Note that the three user types each have their own model with a OneToOneField relationship to the User model, and I also have a Profile model with the same relationship to the User model.
See the view code below:</p>
<pre><code>def property_detail(request, property_id):
user = request.user
#Check if user is authenticated
if user.is_authenticated:
#Check if user is Landlord
if hasattr(user, 'landlord'):
properties = Property.objects.filter(landlord=user.landlord).prefetch_related('agent').order_by('-last_updated')[:4]
property_owner = user.landlord.profile if hasattr(user.landlord, 'profile') else None
#Check if user is Agent
elif hasattr(user, 'agent'):
properties = Property.objects.filter(agent=user.agent).prefetch_related('landlord').order_by('-last_updated')[:4]
property_owner = user.agent.profile if hasattr(user.agent, 'profile') else None
else:
properties = Property.objects.order_by('-last_updated')[:4]
property_owner = None
#Get the Property by its Product key from DB
property_instance = get_object_or_404(Property, pk=property_id)
#Send Notification to property owner
if request.method == 'POST':
form = MessageForm(request.POST)
if form.is_valid():
content = form.cleaned_data['content']
subject = form.cleaned_data['subject']
Message.objects.create(property=property_instance, sender=user, recipient=property_owner, subject=subject, content=content)
messages.success(request, 'Notification sent successfully with your Contact Details.')
else:
form = MessageForm()
context = {
'form':form,
'properties':properties,
'property_owner': property_owner,
'page_title': 'Property Detail',
'property': property_instance,
}
return render(request, 'realestate/landlord_agent_property.html', context)
# Handle the case when the user is not authenticated
logout(request)
messages.warning(request, 'Session expired. Please log in again.')
return redirect(reverse('account-login'))
</code></pre>
|
<python><django>
|
2023-06-10 23:02:21
| 2
| 323
|
apollos
|
76,448,351
| 272,023
|
How to generate schema for yaml that supports anchors and aliases?
|
<p>I need to define the schema of a YAML configuration file. The model should support <a href="https://www.linode.com/docs/guides/yaml-anchors-aliases-overrides-extensions/" rel="nofollow noreferrer">yaml aliases and anchors</a> so that a global configuration value could be overridden more specifically.</p>
<p>For example, maybe there's a list of items which should take an object value that is defined at the top level by default, and you want to override it for some objects by doing something like this:</p>
<pre><code>
globals:
foo: &default_foo
val: bar
my_config:
items:
- name: item1
config: *default_foo
- name: item2
config:
<<: *default_foo
val: override_bar
</code></pre>
<p>I want to be able to describe this schema to users (and obviously to check validity programmatically). I'd like to make it clear to users via aliases that they can override values without having to manually write this in an example file.</p>
<p>Is there a way to do this schema generation in any commonly used high level language (primarily interested in Python & Typescript but happy to consider Rust or GoLang)?</p>
<p>I specifically want to be able to easily make it known to users that they can override values using aliases. The file model is sufficiently complex that I really don't want to have to manually provide an example file with handwritten documentation notes if at all possible.</p>
<p>For Python, for instance, I could use Pydantic & pydantic-yaml to define a model programmatically, but whilst aliases are parsed correctly (deserializing yaml to the model), it doesn't appear that there's a way to cleanly document the schema like this.</p>
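<p>As a concrete illustration of the parsing side (using PyYAML's <code>safe_load</code>, which I believe resolves anchors and the <code>&lt;&lt;</code> merge key; other loaders may differ), the example above loads like this:</p>

```python
import yaml  # PyYAML; assumed available -- any YAML 1.1 loader with merge-key support should behave the same

doc = """
globals:
  foo: &default_foo
    val: bar

my_config:
  items:
    - name: item1
      config: *default_foo
    - name: item2
      config:
        <<: *default_foo
        val: override_bar
"""

data = yaml.safe_load(doc)
items = data["my_config"]["items"]
# item1 reuses the anchored mapping; item2 merges it and then overrides val
print(items[0]["config"])  # {'val': 'bar'}
print(items[1]["config"])  # {'val': 'override_bar'}
```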
|
<python><typescript><go><rust><yaml>
|
2023-06-10 22:16:10
| 0
| 12,131
|
John
|
76,448,326
| 2,177,538
|
A dataflow job fails and prints a strange error message
|
<p>I'm trying to build an Apache Beam pipeline that reads data from a PostgreSQL table and writes it in Avro format to a GCS bucket.</p>
<p>Here's what I've got so far:</p>
<pre><code>import apache_beam as beam
from apache_beam.io.avroio import WriteToAvro
from beam_nuggets.io import relational_db
from gcs.options import IngestionOptions
from gcs.schemas.providers import get_schema_by_table
def parse_record(record) -> dict:
record["registration_date"] = record["registration_date"].timestamp()
return record
def _get_resulting_path(ingestion_bucket, ingestion_date, table):
year = ingestion_date[0:4]
month = ingestion_date[4:6]
day = ingestion_date[6:8]
hour = ingestion_date[8:10]
minute = ingestion_date[10:12]
return "/".join([
f"gs://{ingestion_bucket}", table,
year, month, day, hour, minute,
f"result-{ingestion_date}-{table}.avro"
])
def _get_query(table: str) -> str:
return f"select * from {table}"
def execute_pipeline(options: IngestionOptions) -> None:
table = options.table
schema = get_schema_by_table(table)
query = _get_query(table)
result_path = _get_resulting_path(options.ingestion_bucket, options.ingestion_date, table)
source_config = relational_db.SourceConfiguration(
drivername="postgresql",
host=options.host,
port=options.port,
username=options.user,
password=options.password,
database=options.dbname,
)
with beam.Pipeline(options=options) as p:
records = p | "Reading example records from database" >> relational_db.ReadFromDB(
source_config=source_config,
table_name=table,
query=query
)
records | "Parsing records" >> beam.Map(parse_record) \
| "Write to GCS" >> WriteToAvro(f"{result_path}", schema, file_name_suffix=".arvo")
</code></pre>
<p>I'm using the <strong>beam_nuggets</strong> library to get records from the DB.</p>
<p>This code works perfectly fine when I run it locally with <strong>DirectRunner</strong>:</p>
<pre><code>python exec.py \
--host ... \
--user ... \
--password ... \
--dbname ... \
--table ... \
--ingestion_bucket ...
</code></pre>
<p>However, I can't run it with Dataflow Runner. The pipeline is executed with the following command:</p>
<pre><code>python exec.py \
--host ... \
--user ... \
--password ... \
--dbname ... \
--table ... \
--ingestion_bucket ... \
--runner DataflowRunner \
--project ... \
--region ... \
--temp_location ... \
--staging_location ...
</code></pre>
<p>Dataflow jobs started in GCP console and then failed after 5 minutes with the following error:</p>
<blockquote>
<p>Error message from worker: Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/apache_beam/internal/dill_pickler.py", line 418, in loads return dill.loads(s) File "/usr/local/lib/python3.9/site-packages/dill/_dill.py", line 275, in loads return load(file, ignore, **kwds) File "/usr/local/lib/python3.9/site-packages/dill/_dill.py", line 270, in load return Unpickler(file, ignore=ignore, **kwds).load() File "/usr/local/lib/python3.9/site-packages/dill/_dill.py", line 472, in load obj = StockUnpickler.load(self) File "/usr/local/lib/python3.9/site-packages/dill/_dill.py", line 462, in find_class return StockUnpickler.find_class(self, module, name) ModuleNotFoundError: No module named 'beam_nuggets' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/sdk_worker.py", line 295, in _execute response = task() File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/sdk_worker.py", line 370, in lambda: self.create_worker().do_instruction(request), request) File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/sdk_worker.py", line 629, in do_instruction return getattr(self, request_type)( File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/sdk_worker.py", line 660, in process_bundle bundle_processor = self.bundle_processor_cache.get( File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/sdk_worker.py", line 491, in get processor = bundle_processor.BundleProcessor( File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 904, in <strong>init</strong> self.ops = self.create_execution_tree(self.process_bundle_descriptor) File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 983, in create_execution_tree return collections.OrderedDict([( File 
"/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 986, in get_operation(transform_id))) for transform_id in sorted( File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 819, in wrapper result = cache[args] = func(*args) File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 965, in get_operation transform_consumers = { File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 966, in tag: [get_operation(op) for op in pcoll_consumers[pcoll_id]] File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 966, in tag: [get_operation(op) for op in pcoll_consumers[pcoll_id]] File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 819, in wrapper result = cache[args] = func(*args) File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 965, in get_operation transform_consumers = { File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 966, in tag: [get_operation(op) for op in pcoll_consumers[pcoll_id]] File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 966, in tag: [get_operation(op) for op in pcoll_consumers[pcoll_id]] File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 819, in wrapper result = cache[args] = func(*args) File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 965, in get_operation transform_consumers = { File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 966, in tag: [get_operation(op) for op in pcoll_consumers[pcoll_id]] File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 966, in tag: 
[get_operation(op) for op in pcoll_consumers[pcoll_id]] File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 819, in wrapper result = cache[args] = func(*args) File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 970, in get_operation return transform_factory.create_operation( File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 1267, in create_operation return creator(self, transform_id, transform_proto, payload, consumers) File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 1614, in create_par_do return _create_pardo_operation( File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 1657, in _create_pardo_operation dofn_data = pickler.loads(serialized_fn) File "/usr/local/lib/python3.9/site-packages/apache_beam/internal/pickler.py", line 51, in loads return desired_pickle_lib.loads( File "/usr/local/lib/python3.9/site-packages/apache_beam/internal/dill_pickler.py", line 422, in loads return dill.loads(s) File "/usr/local/lib/python3.9/site-packages/dill/_dill.py", line 275, in loads return load(file, ignore, **kwds) File "/usr/local/lib/python3.9/site-packages/dill/_dill.py", line 270, in load return Unpickler(file, ignore=ignore, **kwds).load() File "/usr/local/lib/python3.9/site-packages/dill/_dill.py", line 472, in load obj = StockUnpickler.load(self) File "/usr/local/lib/python3.9/site-packages/dill/_dill.py", line 462, in find_class return StockUnpickler.find_class(self, module, name) ModuleNotFoundError: No module named 'beam_nuggets'</p>
</blockquote>
<p>Although I don't have this issue when I use the direct runner, the error says that the beam_nuggets module is not found. Furthermore, the "missing" module can change from run to run: now it's beam_nuggets, but next time it may be something else.</p>
<p>Could somebody help me with that?</p>
|
<python><google-cloud-platform><google-cloud-dataflow><apache-beam>
|
2023-06-10 22:05:20
| 1
| 1,184
|
Roman Dryndik
|
76,448,319
| 3,892,866
|
Best way to switch threads in Python?
|
<p>I'm writing unit tests for a Python multithreaded program. Unit testing multithreaded programs is always difficult, but I have it working pretty well. One issue I have is that since the Python threads take turns, instead of running in parallel or time-sliced, I often find myself writing tests like:</p>
<pre><code># Act
object_being_tested.do_function_that_triggers_other_thread_to_run()
time.sleep(0.1) # Let the worker thread run
# Verify
self.assertEqual(value_that_worker_thread_should_have_set, object_being_tested.get_value())
# Cleanup
object_being_tested.worker_thread.join()
</code></pre>
<p>This works pretty well but the 'time.sleep(0.1)' seems awkward to me. Is there a clean way in python to indicate that I don't want to sleep, I just want to switch away from the current thread and let the other thread run? Or is a short sleep the best way?</p>
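<p>For comparison, here is the event-based shape I am considering as a replacement (a sketch with hypothetical names): the worker signals a <code>threading.Event</code> and the test blocks only until that signal arrives, instead of sleeping a fixed 0.1 s.</p>

```python
import threading

result = {}
done = threading.Event()

def worker():
    # Stand-in for the real worker thread's job.
    result["value"] = 42
    done.set()  # signal completion explicitly instead of hoping 0.1 s is enough

t = threading.Thread(target=worker)
t.start()
assert done.wait(timeout=5)  # returns True as soon as set() is called
t.join()
print(result["value"])  # 42
```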
|
<python><multithreading>
|
2023-06-10 22:01:40
| 0
| 568
|
Bill Shubert
|
76,448,259
| 16,728,369
|
ImproperlyConfigured( django.core.exceptions.ImproperlyConfigured: AUTH_USER_MODEL refers to model 'authen.CustomUser' that has not been installed
|
<p>(I've been through all the other questions.) I wanted to customize the default user model, so I created a fresh Django project and haven't run migrations; I just wrote the code and saved it.
settings.py</p>
<pre><code>INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'authen',
]
AUTH_USER_MODEL = 'authen.CustomUser'
</code></pre>
<p>manager.py</p>
<pre><code>from django.contrib.auth.base_user import BaseUserManager
class UserManager(BaseUserManager):
use_in_migrations = True
def create_user(self, email, password=None, **extra_fields):
if not email:
raise ValueError("Email is required")
email = self.normalize_email(email)
user = self.model(email=email,**extra_fields)
user.set_password(password)
user.save(using=self._db)
return user
def create_superuser(self, email, password, **extra_fields):
extra_fields.setdefault('is_staff', True)
extra_fields.setdefault('is_superuser',True)
extra_fields.setdefault('is_active',True)
if extra_fields.get('is_staff') is not True:
raise ValueError("Super user must have is_staff true")
return self.create_user(email, password, **extra_fields)
</code></pre>
<p>models.py of authen app</p>
<pre><code>from django.db import models
from django.contrib.auth.models import AbstractUser
from .manager import UserManager
class CustomUser(AbstractUser):
email = models.EmailField(unique=True)
phone = models.CharField(max_length=20)
nid = models.CharField(max_length=17,unique=True)
profile_picture = models.ImageField(upload_to='profile_pics', blank=True, null=True)
objects = UserManager()
REQUIRED_FIELDS=[]
USERNAME_FIELD = 'email'
class Meta:
app_label = 'CustomUser'
</code></pre>
<p>I haven't run any migrations and I'm using the latest Django and Python 3.1. I had to create several projects for this reason. Ask me if more info is needed.</p>
|
<python><django><django-models><django-migrations>
|
2023-06-10 21:40:51
| 1
| 469
|
Abu RayhaN
|
76,448,213
| 20,920,790
|
Why are there no changes for the second column?
|
<p>I'm trying to change 2 columns in a dataframe.</p>
<p>It works, but when I wrap this code in a def, it stops working.</p>
<pre><code>import pandas as pd
def new_func(folder, file, types, folder1_changes: tuple):
df = pd.read_csv(folder + '/' + file, encoding='1251').astype(types)
for (i, j) in folder1_changes:
try:
product_names_list = list(df[i].unique())
products_arr = np.array(range(1, len(product_names_list)+1))
product_names_map = {}
for a, p in zip(products_arr, product_names_list):
product_names_map[p] = f'{j}{a}'
df[i] = df[i].map(product_names_map)
except KeyError:
continue
return df
</code></pre>
<p>My result is:</p>
<pre><code>new_func(folder1_1, files1_1[1], folder1_types, folder1_changes)[['DR_NDrugs', 'DR_TDoc']].head(5)
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">DR_NDrugs</th>
<th style="text-align: left;">DR_TDoc</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">Product_1</td>
<td style="text-align: left;">Розничная реализация</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">Product_2</td>
<td style="text-align: left;">Розничная реализация</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: left;">Product_3</td>
<td style="text-align: left;">Розничная реализация</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: left;">Product_4</td>
<td style="text-align: left;">Розничная реализация</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: left;">Product_5</td>
<td style="text-align: left;">Розничная реализация</td>
</tr>
</tbody>
</table>
</div>
<p>Why is there no change for 'DR_TDoc'?
The code works fine without the def (I can literally copy/paste the code outside the function and it will work).</p>
|
<python><pandas>
|
2023-06-10 21:26:44
| 1
| 402
|
John Doe
|
76,448,210
| 10,181,236
|
How to feed a numpy array as audio for whisper model
|
<p>So I want to open an mp3 using AudioSegment, then convert the AudioSegment object to a numpy array and use that numpy array as input for the whisper model. I followed this question <a href="https://stackoverflow.com/questions/38015319/how-to-create-a-numpy-array-from-a-pydub-audiosegment">How to create a numpy array from a pydub AudioSegment?</a>, but none of the results were helpful, since I always get an error like</p>
<pre><code> Traceback (most recent call last):
File "E:\Programmi\PythonProjects\whisper_real_time\test\converting_test.py", line 19, in <module>
result = audio_model.transcribe(arr_copy, language="en", word_timestamps=True,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\Programmi\PythonProjects\whisper_real_time\venv\Lib\site-packages\whisper\transcribe.py", line 121, in transcribe
mel = log_mel_spectrogram(audio, padding=N_SAMPLES)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\Programmi\PythonProjects\whisper_real_time\venv\Lib\site-packages\whisper\audio.py", line 146, in log_mel_spectrogram
audio = F.pad(audio, (0, padding))
^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: [enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 86261939712 bytes.
</code></pre>
<p>This error is strange because if I provide the file directly, like below, I get no problems:</p>
<pre><code>result = audio_model.transcribe("../audio_test_files/1001_IEO_DIS_HI.mp3", language="en", word_timestamps=True,
fp16=torch.cuda.is_available())
</code></pre>
<p>This is the code I wrote</p>
<pre><code>from pydub import AudioSegment
import numpy as np
import whisper
import torch
audio = AudioSegment.from_mp3("../audio_test_files/1001_IEO_DIS_HI.mp3")
dtype = getattr(np, "int{:d}".format(
audio.sample_width * 8)) # Or could create a mapping: {1: np.int8, 2: np.int16, 4: np.int32, 8: np.int64}
arr = np.ndarray((int(audio.frame_count()), audio.channels), buffer=audio.raw_data, dtype=dtype)
arr_copy = arr.copy()
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Loading whisper...")
audio_model = whisper.load_model("small", download_root="../models",
device=device)
print(f"Transcribing...")
result = audio_model.transcribe(audio=arr_copy, language="en", word_timestamps=True,
fp16=torch.cuda.is_available()) # , initial_prompt=result.get('text', ""))
text = result['text'].strip()
print(text)
</code></pre>
<p>how can I do it?</p>
<p>--------EDIT--------
I edited the code and now use the version below. I no longer get the error from before, but the model doesn't seem to transcribe correctly. I checked what audio I was passing to the model by exporting it back to a wav file; when I played it there was a lot of noise and I couldn't understand what is being said, so that's why the model does not transcribe. Are the normalization steps I am doing OK?</p>
<pre><code>from pydub import AudioSegment
import numpy as np
import whisper
import torch
language = "en"
model = "medium"
model_path = "../models"
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Loading whisper {model} model {language}...")
audio_model = whisper.load_model(model, download_root=model_path, device=device)
# load wav file with pydub
audio_path = "20230611-004146_audio_chunk.wav"
audio_segment = AudioSegment.from_wav(audio_path)
#audio_segment = audio_segment.low_pass_filter(1000)
# get sample rate
sample_rate = audio_segment.frame_rate
arr = np.array(audio_segment.get_array_of_samples())
arr_copy = arr.copy()
arr_copy = torch.from_numpy(arr_copy)
arr_copy = arr_copy.to(torch.float32)
# normalize
arr_copy = arr_copy / 32768.0
# to device
arr_copy = arr_copy.to(device)
print(f"Transcribing...")
result = audio_model.transcribe(arr_copy, language=language, fp16=torch.cuda.is_available())
text = result['text'].strip()
print(text)
waveform = arr_copy.cpu().numpy()
audio_segment = AudioSegment(
waveform.tobytes(),
frame_rate=sample_rate,
sample_width=waveform.dtype.itemsize,
channels=1
)
audio_segment.export("test.wav", format="wav")
</code></pre>
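<p>For reference, my current understanding of the input Whisper wants (mono float32 in [-1.0, 1.0]; resampling to 16 kHz would happen separately, e.g. with pydub's <code>set_frame_rate(16000)</code>, and is an assumption on my part) looks like this sketch:</p>

```python
import numpy as np

def pcm16_to_whisper_input(samples: np.ndarray, channels: int) -> np.ndarray:
    # Convert interleaved signed 16-bit PCM to mono float32 in [-1.0, 1.0].
    # Resampling to 16 kHz is assumed to have happened before this step.
    arr = samples.astype(np.float32) / 32768.0
    if channels > 1:
        arr = arr.reshape(-1, channels).mean(axis=1)  # average interleaved channels
    return arr

stereo = np.array([0, 0, 16384, -16384, 32767, 32767], dtype=np.int16)
mono = pcm16_to_whisper_input(stereo, channels=2)  # three mono samples
```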
|
<python><numpy><openai-whisper><audiosegment>
|
2023-06-10 21:24:48
| 1
| 512
|
JayJona
|
76,448,082
| 11,652,655
|
How to change image into data.frame
|
<p>Can I have help turning this image into a dataframe with python code, please?
I'm sorry, I don't know where to start.</p>
<p><a href="https://i.sstatic.net/1nr0M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1nr0M.png" alt="enter image description here" /></a></p>
|
<python><dataframe><image>
|
2023-06-10 20:38:38
| 1
| 1,285
|
Seydou GORO
|
76,448,057
| 11,932,905
|
Pandas - find pairs of rows with same integer values in at least two columns for a large dataset
|
<p>I'm struggling to find an efficient way to select matching pairs of rows that have at least 2 (or 3) common columns.<br />
Sample df:</p>
<pre><code>df = pd.DataFrame({
'A': [1, 2, 3, 4, 5],
'B': [1, 1, 1, 1, 2],
'C': [1, 1, 2, 3, 3],
'D': [2, 7, 9, 8, 4]})
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>2</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>1</td>
<td>1</td>
<td>7</td>
</tr>
<tr>
<td>2</td>
<td>3</td>
<td>1</td>
<td>2</td>
<td>9</td>
</tr>
<tr>
<td>3</td>
<td>4</td>
<td>1</td>
<td>3</td>
<td>8</td>
</tr>
<tr>
<td>4</td>
<td>5</td>
<td>2</td>
<td>3</td>
<td>4</td>
</tr>
</tbody>
</table>
</div>
<p>Here only rows 0 and 1 have 2 common columns.</p>
<p>I have a working solution with numpy, but it fails when I get a huge dataset as input (5,000,000 records): a 5M x 5M comparison matrix obviously can't fit in memory.</p>
<pre><code># Convert the DataFrame to a NumPy array
np_array = df.to_numpy()
# Find pairs of rows with same integers in at least two columns
matching_rows = np.sum(np_array[:, None] == np_array, axis=2) >= 2
np.fill_diagonal(matching_rows, False)
# Get the indices of the matching pairs
matching_pairs = np.argwhere(matching_rows)
# Filter out bidirectional links
matching_pairs = matching_pairs[matching_pairs[:, 0] > matching_pairs[:, 1], :]
</code></pre>
<p>Appreciate any help regarding other solutions to select matching rows with any 2 common columns.</p>
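<p>A memory-friendlier sketch (assuming matches must occur in the same column for both rows) groups the rows by every pair of columns and collects row pairs within each group, which with the sample <code>df</code> above finds exactly rows 0 and 1:</p>

```python
import itertools

import pandas as pd

df = pd.DataFrame({
    'A': [1, 2, 3, 4, 5],
    'B': [1, 1, 1, 1, 2],
    'C': [1, 1, 2, 3, 3],
    'D': [2, 7, 9, 8, 4]})

pairs = set()
# Rows matching in >= 2 columns must agree on some specific pair of
# columns, so grouping by each column pair finds every match without
# materializing an n x n comparison matrix.
for c1, c2 in itertools.combinations(df.columns, 2):
    for idx in df.groupby([c1, c2]).indices.values():
        for i, j in itertools.combinations(sorted(idx), 2):
            pairs.add((int(i), int(j)))

print(sorted(pairs))  # -> [(0, 1)]
```

<p>Runtime scales with the number of column pairs times the group sizes, which stays tractable when duplicate value pairs are rare; the worst case (many rows sharing the same pair of values) is still quadratic within those groups.</p>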
|
<python><pandas>
|
2023-06-10 20:31:13
| 1
| 608
|
Alex_Y
|
76,447,934
| 11,724,911
|
Issue with running shell command in python 3 (Windows 11)
|
<p>I wanted to run a command from Python 3 (3.8.10 to be exact). Here is what I tried:</p>
<pre class="lang-py prettyprint-override"><code>import os
os.system("ls")
</code></pre>
<p>Output:</p>
<pre><code>'DOSKEY' is not recognized as an internal or external command,
operable program or batch file.
'ls' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>I was pretty frustrated, but I tried <code>echo</code> since that was a basic command:</p>
<pre><code>import os
os.system("echo hi!")
</code></pre>
<p>Output:</p>
<pre><code>'DOSKEY' is not recognized as an internal or external command,
operable program or batch file.
hi!
</code></pre>
<p>So that worked. I'm now stumped. Why did <code>echo</code> work and not <code>ls</code>? And why does it say that 'DOSKEY' is not recognized as an internal or external command, and how do I fix it?</p>
<p>Edit:
I now tried the <code>subprocess</code> method:</p>
<pre><code>import subprocess
subprocess.run(["echo", "hi"]) # If I understand correctly, the arguments have to be separated by array elements like so
</code></pre>
<p>Output:</p>
<pre><code>Traceback (most recent call last):
File C:\Program Files\Spyder\pkgs\spyder_kernels\py3compat.py:356 in compat_exec
exec(code, globals, locals)
File c:\users\lucas\.spyder-py3\automated_run.py:8
subprocess.run(["echo","hi"])
File subprocess.py:493 in run
File C:\Program Files\Spyder\pkgs\spyder_kernels\customize\spydercustomize.py:109 in __init__
super(SubprocessPopen, self).__init__(*args, **kwargs)
File subprocess.py:858 in __init__
File subprocess.py:1311 in _execute_child
FileNotFoundError: [WinError 2] The system cannot find the file specified
</code></pre>
<p>Any help would be greatly appreciated.</p>
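<p>For context, <code>ls</code> is not a Windows command, and <code>echo</code> is a <code>cmd.exe</code> built-in rather than a standalone executable, which would explain why <code>subprocess.run(["echo", "hi"])</code> cannot find a file to execute. A minimal sketch of running shell built-ins through the shell:</p>

```python
import subprocess

# "echo" (and "dir", the Windows counterpart of "ls") are shell built-ins,
# not executables on disk, so they must be run through the shell.
result = subprocess.run("echo hi", shell=True, capture_output=True, text=True)
print(result.stdout.strip())  # -> hi
```

<p>With <code>shell=True</code> the command is handed to <code>cmd.exe</code> on Windows (or <code>/bin/sh</code> elsewhere), so built-ins resolve; without it, <code>subprocess</code> looks for an executable file named <code>echo</code> and raises <code>FileNotFoundError</code>.</p>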
|
<python><python-3.x><windows>
|
2023-06-10 19:51:44
| 0
| 665
|
Lucas Urban
|
76,447,442
| 4,260,141
|
Fast range query over a million size nD space to determine statistics
|
<p>I have a numpy array with 10 million entries. The array has 5 columns, where the first 4 columns specify the co-ordinates of x, y, z and t. The last column specifies a scalar value at each of these points. Now for every point in this dataset I want to query for points within an n-D bounding box specified by some x_min, x_max, y_min, y_max, z_min, z_max, t_min and t_max. For the points within the bounding box, calculate the median and standard deviation of the values and store it.</p>
<p>Few things to note:</p>
<ol>
<li>the bounding box specification for each point will be different, for
some it might be a small box for some points it might be larger box.</li>
<li>note that the x, y, z, t; all 4 axis have different resolutions and
different scales.</li>
<li>The array is an ordered array along the first axis but not the other three axis.</li>
</ol>
<p>Right now, I have leveraged the info in point number 3 to reduce my search space, but I would like to create some sort of tree data structure that fetches the points within the bounding box at once, because I need to run this query 10 million times, once per row (think of a marching bounding box centered around each point).</p>
<p>With the above information, I have tried to implement the following pseudocode</p>
<pre class="lang-py prettyprint-override"><code>import time
from multiprocessing import Pool

import numpy as np

num_points = 10_000_000
df_us = np.random.rand(num_points, 5)
df = df_us[df_us[:, 0].argsort()]  # sort along the first axis to mimic the real data feature

# for this expt, ignoring the fact that each axis has different limits and a different sampling resolution
xmin = np.random.rand(num_points)
xmax = xmin + 0.2
ymin = np.random.rand(num_points)
ymax = ymin + 0.4
zmin = np.random.rand(num_points)
zmax = zmin + 0.4
tmin = np.random.rand(num_points)
tmax = tmin + 0.4

def bbox_stat(xr_df, xmin, xmax, ymin, ymax, zmin, zmax, tmin, tmax):
    for i in range(len(xmin)):
        # use the fact that the first axis is presorted in the dataset
        x_si = np.searchsorted(xr_df[:, 0], xmin[i], side='left')
        x_ei = np.searchsorted(xr_df[:, 0], xmax[i], side='right')

        y_conditions = (xr_df[x_si:x_ei, 1] >= ymin[i]) & (xr_df[x_si:x_ei, 1] <= ymax[i])
        z_conditions = (xr_df[x_si:x_ei, 2] >= zmin[i]) & (xr_df[x_si:x_ei, 2] <= zmax[i])
        t_conditions = (xr_df[x_si:x_ei, 3] >= tmin[i]) & (xr_df[x_si:x_ei, 3] <= tmax[i])
        conditions = y_conditions & z_conditions & t_conditions

        med_i = np.median(xr_df[x_si:x_ei, 4][conditions])  # use this to fill up an array of medians
        std_i = xr_df[x_si:x_ei, 4][conditions].std()       # use this to fill up an array of stds

arguments = []
num_cores = 50
orig_len = df.shape[0]
unit_len = orig_len // num_cores
for i in range(num_cores):
    arguments.append((df,
                      xmin[i*unit_len:(i+1)*unit_len], xmax[i*unit_len:(i+1)*unit_len],
                      ymin[i*unit_len:(i+1)*unit_len], ymax[i*unit_len:(i+1)*unit_len],
                      zmin[i*unit_len:(i+1)*unit_len], zmax[i*unit_len:(i+1)*unit_len],
                      tmin[i*unit_len:(i+1)*unit_len], tmax[i*unit_len:(i+1)*unit_len]))

start_time = time.time()
pool = Pool(processes=num_cores)
pool.starmap(bbox_stat, arguments)
pool.close()
pool.join()
end_time = time.time()
print(f"Time taken: {end_time - start_time} seconds")
</code></pre>
|
<python><arrays><numpy><data-structures><tree>
|
2023-06-10 17:43:21
| 1
| 537
|
datapanda
|
76,447,330
| 13,401,636
|
What is the wildcard equivalent for boto3 dynamodb ProjectionExpression
|
<p>I have a function that calls boto3's query operation. My data object for each record is rather large so I have created an optional parameter that can be passed to reduce the fields that are returned. Here is the function:</p>
<pre><code>def get_leads(type, leadTable, **kwargs):
    attributes = kwargs.get('attributes', None)
    client = boto3.resource('dynamodb')
    table = client.Table(leadTable)
    try:
        if attributes is not None:
            response = table.query(ProjectionExpression=attributes, KeyConditionExpression=Key('lead_type').eq(type))
        else:
            response = table.query(KeyConditionExpression=Key('lead_type').eq(type))
        return response
    except Exception as e:
        print(e)
        return 0
</code></pre>
<p>This way, I can optionally pass attributes as a parameter, e.g.:</p>
<p><code>get_leads("user","table1",attributes="first_name,last_name,title,image_link")</code></p>
<p>returns only <code>first_name,last_name,title,image_link</code> fields</p>
<p>OR</p>
<p><code>get_leads("user","table1")</code></p>
<p>returns all fields</p>
<p>Ideally, I would like to simplify this function and set attributes to a wildcard ProjectionExpression value when it is not defined.
e.g.,</p>
<pre><code>def get_leads(type, leadTable, **kwargs):
    attributes = kwargs.get('attributes', "*")
    client = boto3.resource('dynamodb')
    table = client.Table(leadTable)
    try:
        return table.query(ProjectionExpression=attributes, KeyConditionExpression=Key('lead_type').eq(type))
    except Exception as e:
        print(e)
        return 0
</code></pre>
<p>Sadly, I can't seem to find documentation anywhere stating what this wildcard value is or if it even exists.</p>
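<p>As far as I can tell, DynamoDB has no wildcard <code>ProjectionExpression</code>; omitting the parameter entirely is what returns all attributes. One sketch (a hypothetical helper, not part of boto3) builds the query arguments conditionally so the function keeps a single <code>query</code> call:</p>

```python
def build_query_args(lead_type, attributes=None):
    # Omitting ProjectionExpression makes DynamoDB return every attribute,
    # which is the closest thing to a "*" wildcard. The key condition is a
    # placeholder here; in real code it would be Key('lead_type').eq(lead_type).
    args = {"KeyConditionExpression": ("lead_type", lead_type)}
    if attributes:
        args["ProjectionExpression"] = attributes
    return args

full = build_query_args("user")
partial = build_query_args("user", "first_name,last_name,title,image_link")
```

<p>The table call then becomes <code>table.query(**build_query_args(...))</code>, replacing the if/else branch.</p>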
|
<python><amazon-dynamodb><boto3>
|
2023-06-10 17:10:35
| 1
| 315
|
Matthew D'vertola
|
76,447,153
| 4,075,155
|
How to use a Llama model with langchain? It gives an error: Pipeline cannot infer suitable model classes from: <model_name> - HuggingFace
|
<p>I fine-tuned a model (<a href="https://huggingface.co/decapoda-research/llama-7b-hf" rel="nofollow noreferrer">https://huggingface.co/decapoda-research/llama-7b-hf</a>) using PEFT and LoRA and saved it as <a href="https://huggingface.co/lucas0/empath-llama-7b" rel="nofollow noreferrer">https://huggingface.co/lucas0/empath-llama-7b</a>. Now I'm getting <code>Pipeline cannot infer suitable model classes from</code> when trying to use it with langchain and a Chroma vector DB:</p>
<pre><code>from langchain.embeddings import HuggingFaceHubEmbeddings
from langchain import PromptTemplate, HuggingFaceHub, LLMChain
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate
from langchain.vectorstores import Chroma
repo_id = "sentence-transformers/all-mpnet-base-v2"
embedder = HuggingFaceHubEmbeddings(
    repo_id=repo_id,
    task="feature-extraction",
    huggingfacehub_api_token="XXXXX",
)
comments = ["foo", "bar"]
embeddings = embedder.embed_documents(texts=comments)
docsearch = Chroma.from_texts(comments, embedder).as_retriever()
#docsearch = Chroma.from_documents(texts, embeddings)
llm = HuggingFaceHub(repo_id='lucas0/empath-llama-7b', huggingfacehub_api_token='XXXXX')
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=docsearch, return_source_documents=False)
q = input("input your query:")
result = qa.run(query=q)
print(result["result"])
</code></pre>
<p>Is anyone able to tell me how to fix this? Is it an issue with the model card? I was facing issues with the lack of a config.json file and ended up just placing the same config.json as the model I used as the base for the LoRA fine-tuning. Could that be the origin of the issue? If so, how can I generate the correct config.json without having to get the original llama weights?</p>
<p>Also, is there a way of loading several sentences into a custom HF model (not only OpenAi, as the <a href="https://python.langchain.com/en/latest/modules/indexes/vectorstores/examples/chroma.html" rel="nofollow noreferrer">tutorial</a> show) without using vector dbs?</p>
<p>Thanks!</p>
<hr />
<p>The same issue happens when trying to run the API on the model's HF page:</p>
<p><a href="https://i.sstatic.net/4vxAa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4vxAa.png" alt="enter image description here" /></a></p>
|
<python><huggingface-transformers><langchain><chromadb><large-language-model>
|
2023-06-10 16:38:45
| 1
| 2,380
|
Lucas Azevedo
|
76,447,024
| 2,236,794
|
Need help running different environments with Git or a CI/CD pipeline
|
<p>I have been a Python developer for a small startup for many years. Recently the company started growing, and now that I am not the only developer on the team we are standardizing certain things. One of the issues we are running into is how to use different environments with Git and/or with a CI/CD pipeline.</p>
<p>We develop locally on our laptops or in a local environment we built. Then we push to a Dev environment and then to Prod. When we do this, a script changes some of the files and adds or removes things such as ENV variables, DBs, and/or security settings. This worked fine until we ran into a bug in Prod. We created a hotfix branch, fixed it, merged it into Prod, and then merged those changes back into the other branches. Now we have all of our Prod ENV, config, and DBs in our local environment. Not only did it take us time to figure out what happened, but we accidentally deleted a table in Prod thinking we were in our local environment.</p>
<p>Since then, we have tried Git hooks, which are supposed to run a script upon checkout of a branch, but we can't seem to get it quite right. The other problem is that I would need to add the script to the .git directory in the repo, which no one does; the .git directory is ignored in most cases. I have looked at other options, but can't seem to find a good explanation or "best practice" for this. Any suggestions?</p>
|
<python><git><continuous-integration><devops><cicd>
|
2023-06-10 16:04:35
| 0
| 561
|
user2236794
|
76,446,985
| 8,128,190
|
Use class attributes, property or dataclass?
|
<p>I am working on connecting to an API and fetching information about employees.<br />
When creating the employee class, I got confused between:</p>
<ul>
<li>using a class <em><strong>with attributes only and no methods</strong></em> which is <a href="https://stackoverflow.com/questions/52679693/classes-with-attributes-but-no-methods">considered</a> anti-pattern.</li>
<li>using class with property decorators, which I am not sure if it is the best practice in my case.</li>
<li>using a dataclass, but include some processing (API authentication and creating some attributes) in the <code>__post_init__()</code> method.</li>
</ul>
<p>Here are the 3 alternatives:<br />
<strong>1- Using a class with attributes only (<a href="https://en.wikipedia.org/wiki/Anemic_domain_model" rel="nofollow noreferrer">Anemic domain model</a>):</strong></p>
<pre><code>import api_library

class Employee:
    def __init__(self, system_user: str, system_password: str, employee_mail: str):
        self.context = api_library.authentication(username=system_user, password=system_password)
        self.user = self.context.users.get_user_by_email(email=employee_mail)
        self.full_name = self.user.display_name
        self.team = self.user.primary_team
        self.manager = self.user.manager.display_name
        self.department = self.user.department.display_name
</code></pre>
<p><strong>2- Using <code>@property</code> decorator:</strong></p>
<pre><code>import api_library
from typing import Optional

class Employee:
    def __init__(self, system_user: str, system_password: str, employee_mail: str):
        self.context = api_library.authentication(username=system_user, password=system_password)
        self.user = self.context.users.get_user_by_email(email=employee_mail)

    @property
    def full_name(self) -> Optional[str]:
        return self.user.display_name

    @property
    def team(self) -> Optional[str]:
        return self.user.primary_team

    @property
    def manager(self) -> Optional[str]:
        return self.user.manager.display_name

    @property
    def department(self) -> Optional[str]:
        return self.user.department.display_name
</code></pre>
<p><strong>3- Using dataclass (but with some processing in <code>__post_init__()</code>):</strong></p>
<pre><code>import api_library
from dataclasses import dataclass, field
from config import system_user, system_password

@dataclass
class Employee:
    email: str
    full_name: str = field(init=False)
    team: str = field(init=False)
    manager: str = field(init=False)
    department: str = field(init=False)

    def __post_init__(self):
        context = api_library.authentication(username=system_user, password=system_password)
        user = context.users.get_user_by_email(email=self.email)
        self.full_name = user.display_name
        self.team = user.primary_team
        self.manager = user.manager.display_name
        self.department = user.department.display_name
</code></pre>
<ul>
<li>Which option is more pythonic?</li>
<li>Is there a better way to solve this?</li>
</ul>
<p>I am using <code>Python 3.10</code> or <code>Python 3.8</code></p>
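<p>A fourth option sometimes suggested, sketched here under the same assumed API, keeps the dataclass a plain data holder and moves the API access into a classmethod factory, so construction stays testable without any network call:</p>

```python
from dataclasses import dataclass

@dataclass
class Employee:
    email: str
    full_name: str
    team: str
    manager: str
    department: str

    @classmethod
    def from_user(cls, user, email):
        # 'user' stands in for the object the API client returns; the caller
        # performs authentication and lookup, keeping I/O out of the class.
        return cls(email=email,
                   full_name=user.display_name,
                   team=user.primary_team,
                   manager=user.manager.display_name,
                   department=user.department.display_name)
```

<p>The caller would do the lookup once (<code>user = context.users.get_user_by_email(...)</code>) and then build the value object with <code>Employee.from_user(user, email)</code>.</p>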
|
<python><class><properties><attributes><python-dataclasses>
|
2023-06-10 15:54:23
| 3
| 3,056
|
singrium
|
76,446,857
| 11,267,281
|
How to use joblib to load a model using its azureml:// path in the Azure Machine Learning workspace
|
<p>I registered the model <em><strong>iris_flat_model_from_cli</strong></em> in my Azure Machine Learning Workspace.</p>
<p>Before publishing it, for testing purposes I need to load that model from the workspace using the joblib library, on the same development VM.
I can associate the model with a Python object <em><strong>m</strong></em> using</p>
<pre><code>m = ml_client.models.get(name=m_name, version=m_version)
</code></pre>
<p>, which provides me with the path where it is registered within the Model Registry</p>
<pre><code>azureml://subscriptions/4*****c/resourceGroups/mauromi-ml-wrkgp01/workspaces/mmAmlsWksp02/datastores/workspaceblobstore/paths/azureml/9c98b03d-d53d-488d-80b3-543dfc9f09f0/model_flat_output_folder/
</code></pre>
<p>, which also allows me to build the WEB path within the Storage Account</p>
<pre><code>https://mm*****46.blob.core.windows.net/azureml-blobstore-c5*****8dc/azureml/e02c33b5-4beb-4250-9e03-9a13fbcc4a9c/model_flat_output_folder/model.pkl
</code></pre>
<p>, and I can also use the <em><strong>download</strong></em> method of the <em><strong>m</strong></em> object to download it locally and finally use it with joblib.load()</p>
<pre><code>ml_client.models.download(name=m_name, version=m_version, download_path=m_local_base_path)
</code></pre>
<p>, which allows me to successfully run the <em><strong>predict_proba()</strong></em> inference, as shown in the below picture.</p>
<p>QUESTION: how can I do the same in a cleaner way without downloading it locally, e.g. passing the model path in the workspace, something like</p>
<pre><code>model = joblib.load('azureml://subscriptions/4****c/resourceGroups/mauromi-ml-wrkgp01/workspaces/mmAmlsWksp02/datastores/workspaceblobstore/paths/azureml/9c98b03d-d53d-488d-80b3-543dfc9f09f0/model_flat_output_folder/model.pkl')
</code></pre>
<p>In fact, it seems that <em><strong>joblib.load()</strong></em> just accepts a local path.</p>
<p><a href="https://i.sstatic.net/15NRc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/15NRc.png" alt="enter image description here" /></a></p>
|
<python><azure-machine-learning-service><joblib>
|
2023-06-10 15:24:05
| 1
| 319
|
Mauro Minella
|
76,446,788
| 2,398,040
|
How to create a frequency table in polars from an iterator
|
<p>I am trying to create a polars dataframe which is a frequency table of words in a list of words. Something like this:</p>
<pre><code>from collections import defaultdict
word_freq= defaultdict(int)
for word in list_of_words:
word_freq[word] += 1
</code></pre>
<p>Except, instead of a dictionary I would like it to be a polars dataframe with two columns: word, count.</p>
<p>I would also like to know the best way to convert such a dict to a DataFrame (in cases where that may be needed).</p>
|
<python><dataframe><python-polars>
|
2023-06-10 15:07:53
| 1
| 1,057
|
ste_kwr
|
76,446,783
| 19,366,064
|
Question about FastAPI's dependency injection and its reusability
|
<pre><code>from fastapi import Depends, FastAPI

class MyDependency:
    def __init__(self):
        # Perform initialization logic here
        pass

    def some_method(self):
        # Perform some operation
        pass

def get_dependency():
    # Create and return an instance of the dependency
    return MyDependency()

app = FastAPI()

@app.get("/example")
def example(dependency: MyDependency = Depends(get_dependency)):
    dependency.some_method()
</code></pre>
<p>For the code snippet above, do subsequent visits to /example create a new instance of the MyDependency object each time? If so, how can I avoid that?</p>
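<p>From what I can tell, a plain function dependency is called anew on every request (FastAPI's <code>use_cache</code> only caches within a single request). One common workaround, sketched here without the framework, is to memoize the provider so every call hands back the same instance:</p>

```python
from functools import lru_cache

class MyDependency:
    def __init__(self):
        pass  # expensive initialization would go here

@lru_cache()
def get_dependency():
    # The first call creates the instance; later calls (i.e. later
    # requests) get the cached instance back.
    return MyDependency()

a = get_dependency()
b = get_dependency()
print(a is b)  # -> True
```

<p>The memoized <code>get_dependency</code> can still be passed to <code>Depends(...)</code> unchanged; alternatively, the instance can be created once at startup and stored on <code>app.state</code>.</p>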
|
<python><dependency-injection><fastapi>
|
2023-06-10 15:06:37
| 1
| 544
|
Michael Xia
|
76,446,466
| 13,060,649
|
drf custom authentication backend gets executed on the path that doesn't need authentication
|
<p>I am new to Django and I am trying to add permissions from DRF to my project. Ever since I set <code>DEFAULT_AUTHENTICATION_CLASSES</code> for <code>REST_FRAMEWORK</code> in Django's <code>settings.py</code>, all requests go through the <code>authenticate</code> method of my <code>DEFAULT_AUTHENTICATION_CLASSES</code>, irrespective of what permission I set on my view; only afterwards does the request reach the view. So here are the settings I have added:</p>
<pre><code>REST_FRAMEWORK = {
'DEFAULT_AUTHENTICATION_CLASSES': [
'authentication.customauth.CustomAuthBackend',
]
}
</code></pre>
<p>And here is my <code>authentication.customauth.CustomAuthBackend</code>:</p>
<pre><code>class CustomAuthBackend(BaseAuthentication):

    def authenticate(self, request):
        user = AuthUtils.get_user_from_token(request)
        if user is None:
            raise AuthenticationFailed('User not found')
        request.user = user
        return user, None

    @staticmethod
    def authenticate_with_password(request):
        email = request.data.get('email')
        role = "CONSUMER" if request.data.get('role') is None else request.data.get('role')
        password = request.data.get('password')
        user = User.objects.filter(email=email, role=role).first()
        if password is not None and user is not None and user.check_password(password):
            return user
</code></pre>
<p>The views that actually should be called without authentication have <code>@permission_classes([AllowAny])</code> permission. Say this <code>login</code> view:</p>
<pre><code>@api_view(['POST'])
@permission_classes([AllowAny])
def login(request):
    user = request.user
    if user and user.is_active:
        serializer = UserSerializer(user)
        tokens_map = AuthUtils.generate_token(request=request, user=user)
        return Response({'success': True, 'user': serializer.data, 'tokens': tokens_map})
    return Response(data={'success': False, 'message': 'User not found'}, status=status.HTTP_404_NOT_FOUND)
</code></pre>
<p>My understanding is that if the permission class is <code>rest_framework.permissions.AllowAny</code>, the <code>authenticate</code> method should not be called before my view.</p>
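<p>For what it's worth, DRF runs the configured authentication classes before permissions are evaluated, so <code>authenticate</code> fires even on <code>AllowAny</code> views. A sketch (with stand-ins for the DRF and project classes) of the usual convention, returning <code>None</code> instead of raising so unauthenticated requests continue as anonymous:</p>

```python
class BaseAuthentication:            # stand-in for rest_framework's base class
    pass

class AuthUtils:                     # stand-in for the project's token helper
    @staticmethod
    def get_user_from_token(request):
        return None                  # pretend no valid token was sent

class CustomAuthBackend(BaseAuthentication):
    def authenticate(self, request):
        # DRF invokes this for every request, even on AllowAny views.
        # Returning None (rather than raising AuthenticationFailed) marks
        # the request as anonymous and leaves the access decision to the
        # permission classes, so AllowAny endpoints keep working.
        user = AuthUtils.get_user_from_token(request)
        if user is None:
            return None
        return user, None

print(CustomAuthBackend().authenticate(request=None))  # -> None
```

<p>Views that do require a logged-in user then enforce it with <code>IsAuthenticated</code> rather than inside the authenticator.</p>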
|
<python><django><django-rest-framework><django-views><django-permissions>
|
2023-06-10 13:37:41
| 0
| 928
|
suvodipMondal
|
76,446,345
| 3,165,683
|
Axes3D.view_init() got an unexpected keyword argument 'roll'
|
<p>I am trying to change the elevation, azim and roll of my matplotlib 3d projection which should be straightforward following the standard view_init:</p>
<p><a href="https://matplotlib.org/stable/api/toolkits/mplot3d/view_angles.html" rel="nofollow noreferrer">https://matplotlib.org/stable/api/toolkits/mplot3d/view_angles.html</a></p>
<p>However I am getting an error when trying to use roll:</p>
<p><code>Axes3D.view_init() got an unexpected keyword argument 'roll'</code></p>
<p>Is this functionality still available?</p>
<p>Please see my code below:</p>
<pre><code>fig = plt.figure()
ax = fig.gca(projection='3d')

for ll in range(dataset[0][:, :, 0:64].shape[2]):
    xlist = [x for x in range(1, (len(dataset[0][frame, :, ll]) + 1))]
    plt.plot(xs=np.clip(dataset[150, 32, :, ll], 0, 1), ys=xlist, zs=ll, label='Raw ' + str(1))

plt.title(("All A-scans of frame: " + str(frame)))
ax.set_axis_off()
ax.view_init(elev=30, azim=45, roll=15)
plt.grid(True)
plt.show()
</code></pre>
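<p>The <code>roll</code> keyword was only added to <code>Axes3D.view_init</code> in Matplotlib 3.6, so older installations raise exactly this <code>TypeError</code>. A version-tolerant sketch:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this example
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(projection="3d")  # fig.gca(projection='3d') is deprecated
try:
    ax.view_init(elev=30, azim=45, roll=15)
except TypeError:
    # Matplotlib < 3.6: no 'roll' support, fall back to elev/azim only.
    ax.view_init(elev=30, azim=45)
```

<p>Upgrading with <code>pip install -U matplotlib</code> (to 3.6 or newer) should make the original call work as documented.</p>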
|
<python><matplotlib><matplotlib-3d>
|
2023-06-10 13:06:08
| 0
| 377
|
user3165683
|
76,446,124
| 6,394,092
|
Best line detector algorithm for a specific content bounding box measurement
|
<p>The purpose of the algorithm is to auto align-sheet music pages based on staves/systems content.</p>
<p>The algorithm need to detect the bounding box <a href="https://i.sstatic.net/wbGmx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wbGmx.png" alt="bounding box" /></a> to allow to easily compute the left/right and top/bottom margin for the wholes pages.</p>
<p>Current approaches like morphological operations in OpenCV (using <code>cv2.HoughLinesP</code>, for example) fail.</p>
<p>I need to find the most precise coordinates of at least the top (first) or bottom (last) staff line, and at least the left (system) lines, to compute the bounding box (red lines at the left of the picture).</p>
<p>What is the state-of-the-art algorithm for working on this kind of document? The full resolution is 600 dpi (5078 x 6566 pixels), if that can help (without downsizing).</p>
<p>Thank you very much.</p>
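<p>One technique often used in optical music recognition, sketched here on synthetic data, is a horizontal projection profile: staff lines are long horizontal runs of ink, so summing the binarized page along each row gives sharp peaks at staff-line rows, and the extreme peaks bound the content vertically (the same idea transposed gives the left/right bounds):</p>

```python
import numpy as np

# Synthetic binarized page: 1 = ink, 0 = background, with two "staff lines".
page = np.zeros((100, 200), dtype=np.uint8)
page[20, 10:190] = 1
page[80, 10:190] = 1

row_profile = page.sum(axis=1)                      # ink count per row
line_rows = np.where(row_profile > 0.5 * page.shape[1])[0]
top, bottom = line_rows.min(), line_rows.max()
print(top, bottom)  # -> 20 80
```

<p>On a real scan, a small deskew step first (the profile peaks flatten out when lines are tilted) and a threshold tuned to the staff-line length would be needed; the 600 dpi resolution only sharpens the peaks.</p>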
|
<python><opencv><image-processing><computer-vision>
|
2023-06-10 12:05:06
| 1
| 373
|
MaxC2
|
76,446,015
| 16,707,518
|
Ranking a 3d netcdf file at each (x,y) location along the time axis
|
<p>I'm guessing there are a number of ways of doing this - but any guidance appreciated. I have a netcdf file of temperature (x,y,t) and essentially want to return the time rank for each x,y position for each year.</p>
<p>So for example: for the x,y location (1,1) at time=1, I'm comparing the x,y=(1,1) values from time 0 to the end of the file and ranking where time=1 sits. Say it's the second-highest value at x,y=(1,1) of all the times; then my new array at (1,1,1) would hold a value of 2.</p>
<p>I've searched to see if cdo can do something like this but haven't been able to find anything: could this be done easily in pandas or xarray?</p>
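<p>A numpy-only sketch of per-cell ranking along the time axis (a double <code>argsort</code> yields ranks when there are no ties; rank 1 means highest value here):</p>

```python
import numpy as np

# temp has shape (time, y, x); rank each (y, x) cell's values over time.
temp = np.zeros((3, 1, 1))
temp[:, 0, 0] = [0.1, 0.9, 0.5]

# argsort of the descending order gives 0-based ranks; +1 makes rank 1
# correspond to the highest value at that location.
ranks = np.argsort(np.argsort(-temp, axis=0), axis=0) + 1
print(ranks[:, 0, 0])  # -> [3 1 2]
```

<p>With xarray the equivalent would be roughly <code>da.rank(dim='time')</code> (ascending ranks, and it requires the bottleneck package to be installed).</p>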
|
<python><netcdf><ranking><rank>
|
2023-06-10 11:28:45
| 1
| 341
|
Richard Dixon
|
76,445,994
| 2,370,920
|
Python Pyrogram download_media doesn't work
|
<p>I have a simple function to download images from Telegram messages by using Pyrogram, but the <code>download_media</code> method doesn't download anything for me, and the returned file_path is always None. The received <code>message</code> object and its <code>photo</code> attribute do have some data in them, but still, the download won't work for me. I tried passing <code>message</code>, <code>m_photo</code> and <code>file_id</code> to <code>download_media</code> to no avail.</p>
<pre><code>import asyncio

async def download_photo(message):
    m_photo = message.photo
    file_id = m_photo.file_id
    file_unique_id = m_photo.file_unique_id
    width = m_photo.width
    height = m_photo.height
    file_size = m_photo.file_size
    date = m_photo.date

    print(m_photo)
    print(f"Photo details - File ID: {file_id}, File Unique ID: {file_unique_id}, Width: {width}, Height: {height}, File Size: {file_size}, Date: {date}")

    file_path = await app.download_media(m_photo, file_name=f'{message.chat.id}_{file_unique_id}.jpg')
    return file_path

asyncio.run(download_photo(message))
</code></pre>
|
<python><telegram><pyrogram>
|
2023-06-10 11:20:56
| 2
| 788
|
abdus_salam
|
76,445,558
| 12,961,237
|
PyJWT not raising an ExpiredSignatureError
|
<p><strong>I'll come straight to the point:</strong></p>
<ol>
<li>I create the jwt:</li>
</ol>
<pre><code>jwt.encode({'sub':"abc", "iat":datetime.now(tz=timezone.utc), "exp":datetime.now()+timedelta(seconds=1)}, JWT_KEY, algorithm="HS256")
</code></pre>
<ol start="2">
<li>I wait</li>
</ol>
<pre><code>time.sleep(3)
</code></pre>
<ol start="3">
<li>I try to validate the <code>exp</code> flag:</li>
</ol>
<pre><code>try:
    return jwt.decode(token, JWT_KEY, algorithms=["HS256"])
except jwt.ExpiredSignatureError:
    raise Exception("JWT expired")
</code></pre>
<p><strong>But it won't raise the desired exception even though the current time is past the exp timestamp.</strong></p>
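<p>A likely cause worth checking: PyJWT converts datetime claims with <code>timegm(value.utctimetuple())</code>, which treats a naive <code>datetime.now()</code> as if it were already UTC. Anywhere east of Greenwich that pushes a naive local-time <code>exp</code> into the future by the UTC offset, so the token never looks expired. A stdlib-only sketch of the skew:</p>

```python
from calendar import timegm
from datetime import datetime, timedelta, timezone

naive_exp = datetime.now() + timedelta(seconds=1)            # as in the question
aware_exp = datetime.now(tz=timezone.utc) + timedelta(seconds=1)

# This mirrors PyJWT's datetime-to-timestamp conversion.
naive_ts = timegm(naive_exp.utctimetuple())
aware_ts = timegm(aware_exp.utctimetuple())

skew = naive_ts - aware_ts  # approximately the local UTC offset, in seconds
print(f"naive exp lands about {skew} seconds in the future")
```

<p>Using <code>datetime.now(tz=timezone.utc) + timedelta(seconds=1)</code> for <code>exp</code>, matching the <code>iat</code> claim, should make <code>ExpiredSignatureError</code> fire as expected.</p>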
|
<python><authentication><jwt>
|
2023-06-10 09:16:36
| 1
| 1,192
|
Sven
|
76,445,497
| 6,103,050
|
Decrypting a string encrypted with AES-256-CBC using Python results in a zero-length result
|
<p>I need to access some resources protected by passwords. A colleague of mine gave me access to a database table that contains these passwords, but they are encrypted. (and I think the result of the encryption is encoded in base64, since it's 44 characters long and it always ends with an equal "=" character).</p>
<p>My colleague gave me the name of the algorithm he used to encrypt the passwords: AES-256-CBC, and he also gave me the encryption key.</p>
<p>I thought that with all this information it would be easy to decipher the passwords, but it was not.</p>
<p>I used the functions suggested on this page: <a href="https://paperbun.org/encrypt-and-decrypt-using-pycrypto-aes-256-python/" rel="nofollow noreferrer">https://paperbun.org/encrypt-and-decrypt-using-pycrypto-aes-256-python/</a></p>
<p>that are the following:</p>
<pre><code># Define the encryption function
def encrypt_AES_CBC_256(key, message):
    key_bytes = key.encode('utf-8')
    message_bytes = message.encode('utf-8')
    iv = get_random_bytes(AES.block_size)
    cipher = AES.new(key_bytes, AES.MODE_CBC, iv)
    padded_message = pad(message_bytes, AES.block_size)
    ciphertext_bytes = cipher.encrypt(padded_message)
    ciphertext = b64encode(iv + ciphertext_bytes).decode('utf-8')
    return ciphertext

# Define the decryption function
def decrypt_AES_CBC_256(key, ciphertext):
    key_bytes = key.encode('utf-8')
    ciphertext_bytes = b64decode(ciphertext)
    iv = ciphertext_bytes[:AES.block_size]
    cipher = AES.new(key_bytes, AES.MODE_CBC, iv)
    ciphertext_bytes = ciphertext_bytes[AES.block_size:]
    decrypted_bytes = cipher.decrypt(ciphertext_bytes)
    plaintext_bytes = unpad(decrypted_bytes, AES.block_size)
    plaintext = plaintext_bytes.decode('utf-8')
    return plaintext
</code></pre>
<p>Then, I tried these functions by encrypting a message, decrypting it back and ensuring that the decrypted result was identical to the initial string. Here is my code:</p>
<pre><code>key = "XXXXXXXXXXXXXXXXXXXXXXXXXXX" # actual key is 32 characters long
print('-----------')
raw_input = "HERE IS MY TEST STRING"
print(f"Entrรฉe : {raw_input}")
test4 = encrypt_AES_CBC_256(key, raw_input)
print(f"Entrรฉe chiffrรฉe : {test4}")
test5 = decrypt_AES_CBC_256(key, test4)
print(f"Message dรฉchiffrรฉ : {test5}")
</code></pre>
<p>The output is:</p>
<pre><code>-----------
Entrรฉe : HERE IS MY TEST STRING
Entrรฉe chiffrรฉe : sZ1iZks9+qGVDHzt9WEO1hU3YrXKHvJj5sQzVd64HscPT8fiFBNbeihGFxpFyGC3
Message dรฉchiffrรฉ : HERE IS MY TEST STRING
</code></pre>
<p>We can see that everything seems to work fine, the decrypted message equals the input.</p>
<p>Now, I tried to decrypt (with the same key) an encrypted password given by my colleague:</p>
<pre><code>print('-----------')
raw = "e46mK0OmmYMBa2qkaALx5aUea4y/pT43OapQ6lmnDOA="
print(f"Mot de passe chiffrรฉ : {raw}")
FINAL_TEST = decrypt_AES_CBC_256(key, raw)
print(f"Mot de passe dรฉchiffrรฉ : {FINAL_TEST}")
print(f"Longueur du rรฉsultat : {len(FINAL_TEST)} caractรจres")
</code></pre>
<p>The output was:</p>
<pre><code>-----------
Mot de passe chiffrรฉ : e46mK0OmmYMBa2qkaALx5aUea4y/pT43OapQ6lmnDOA=
Mot de passe dรฉchiffrรฉ :
Longueur du rรฉsultat : 0 caractรจres
</code></pre>
<p>We can see that the result is 0 character long. And it was the same with every encrypted password he gave me. What am I missing?</p>
<p>And why is the encrypted password only 44 characters long and not 64? I know that this is somewhat related to 256/6 = 42.66, and then you pad the missing characters with = signs, and since you want the length to be a multiple of 4 you pad up to 44. But if my coworker indeed used the AES-CBC-256 algorithm, why would his encrypted passwords not end up being 64 characters long like my encrypted test message?</p>
|
<python><encryption><cryptography><aes>
|
2023-06-10 09:00:32
| 1
| 1,063
|
R. Bourgeon
|
76,445,462
| 1,911,091
|
how to update rows with data from another row in sqlite
|
<p>Lets imagine a table in sqlite like:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Key1</th>
<th>key2</th>
<th>val1</th>
<th>val2</th>
<th>...</th>
<th>val_n</th>
</tr>
</thead>
<tbody>
<tr>
<td>a</td>
<td>a</td>
<td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
</tr>
<tr>
<td>a</td>
<td>b</td>
<td>null</td>
<td>2</td>
<td>3</td>
<td>4</td>
</tr>
<tr>
<td>a</td>
<td>c</td>
<td>5</td>
<td>2</td>
<td>8</td>
<td>4</td>
</tr>
<tr>
<td>a</td>
<td>d</td>
<td>3</td>
<td>6</td>
<td>null</td>
<td>3</td>
</tr>
</tbody>
</table>
</div>
<p>Is it possible to update this table so that every row containing a null value in any column is replaced with the values from a row with valid data?</p>
<p>Let's say: is it possible to copy the row key1=a, key2=a
to every row containing a null value in any column?</p>
<p>Be aware that my table has about 300 columns, so an approach that does not require explicitly assigning the column values would be appreciated.
The only solution that came to my mind was explicitly assigning all values, but there must be a better one.
I just want to copy a row to another location.</p>
<p>If it is not possible in SQLite, pandas/Python could also be used.</p>
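<p>A sketch of one way to do this in plain <code>sqlite3</code>, generating the column list from <code>PRAGMA table_info</code> so the 300 columns never have to be typed out (the reference row key <code>key1='a', key2='a'</code> is assumed, as in the question):</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (key1 TEXT, key2 TEXT, val1 INT, val2 INT, val3 INT)")
con.executemany("INSERT INTO t VALUES (?,?,?,?,?)", [
    ("a", "a", 1, 2, 3),
    ("a", "b", None, 2, 3),
    ("a", "c", 5, 2, 8),
    ("a", "d", 3, 6, None),
])

# Build the value-column list from the schema instead of typing it out;
# the first two entries (the key columns) are skipped.
cols = [row[1] for row in con.execute("PRAGMA table_info(t)")][2:]
set_clause = ", ".join(
    f"{c} = (SELECT {c} FROM t WHERE key1='a' AND key2='a')" for c in cols)
null_check = " OR ".join(f"{c} IS NULL" for c in cols)

# Overwrite every row that has a NULL anywhere with the reference row's
# values, keeping each row's own keys.
con.execute(f"UPDATE t SET {set_clause} WHERE {null_check}")
```

<p>Note this replaces all value columns of an affected row, not just the NULL ones; filling only the NULLs would use <code>COALESCE({c}, (SELECT ...))</code> in the SET clause instead.</p>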
|
<python><sql><pandas><sqlite>
|
2023-06-10 08:48:10
| 1
| 1,442
|
user1911091
|
76,445,399
| 17,795,398
|
Is it possible to change pattern recognition of xgettext?
|
<p>I'm using python, <code>gettext</code> and <code>kivy</code> to make a multi-language app.</p>
<ol>
<li><p>I have a label with some text, <code>Label(text = _("hi"))</code>. I made the code able to switch language, but that requires resetting the text, <code>Label.text = _(Label.text)</code>. This is a problem: once the translation is done, the text used as <code>msgid</code> is the translated one, and therefore not found in the <code>.po</code> files the next time.</p>
</li>
<li><p>I created a workaround, something like <code>MyLabel(textKey = msgid)</code>. This solves all the problems, since changing language is done with <code>MyLabel.text = _(MyLabel.textKey)</code>.</p>
</li>
<li><p>But this approach has an issue, if I use <code>xgettext</code> to generate the <code>.pot</code> file, <code>msgid</code> is not recognized because it is not written as <code>_(msgid)</code> and it cannot be (point 1).</p>
</li>
</ol>
<p>My question is: is there a way to tell <code>xgettext</code> to recognise <code>textKey = msgid</code>?</p>
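xgettext only recognizes call syntax, so a plain assignment cannot be made extractable, but a common workaround (assuming it fits this setup) is to mark the key strings with a no-op function and register that name as an extra keyword, e.g. `xgettext --keyword=N_ *.py`:

```python
# Conventional gettext "no-op marker": the wrapped string is extracted by
# xgettext (when invoked with --keyword=N_) but not translated at
# definition time, so it stays usable as a msgid later.
def N_(message):
    return message

text_key = N_("hi")   # xgettext now sees "hi" as a msgid
# later, whenever the language changes:
# my_label.text = _(my_label.textKey)
```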
|
<python><kivy><xgettext>
|
2023-06-10 08:30:11
| 1
| 472
|
Abel Gutiรฉrrez
|
76,445,137
| 15,476,663
|
Unable to make axis logarithmic in 3D plot
|
<p>I am trying to make a 3D plot using <code>matplotlib</code> inside <code>jupyter-notebook</code>.
I am using a dataset from <a href="https://www.kaggle.com/competitions/house-prices-advanced-regression-techniques/data?select=test.csv" rel="nofollow noreferrer">kaggle</a>.</p>
<p>The schema is the following</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>LotArea</th>
<th>SalePrice</th>
<th>YrSold</th>
<th>PoolArea</th>
</tr>
</thead>
<tbody>
<tr>
<td>8450</td>
<td>208500</td>
<td>2008</td>
<td>0</td>
</tr>
<tr>
<td>9600</td>
<td>181500</td>
<td>2007</td>
<td>0</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
</div>
<p>When I plot with linear axes, everything is OK:</p>
<pre><code>import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10, 15))
ax = plt.axes(projection='3d')
area_data = dataset_chosen["LotArea"]
price_data = dataset_chosen["SalePrice"]
year_data = dataset_chosen["YrSold"]
cmhot = plt.get_cmap("hot")
ax.scatter3D(xs=area_data, ys=price_data, zs=year_data, c=dataset_chosen["PoolArea"])
#ax.set_xscale("log")
ax.set_xlabel("Area")
ax.set_ylabel("Price")
ax.set_zlabel("Year")
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/zvPQU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zvPQU.png" alt="Plot with all linear scales." /></a>
And when I try to make <code>x</code> scale logarithmic (uncomment <code>#ax.set_xscale("log")</code>), the plot does not look like a plot.
<a href="https://i.sstatic.net/LMkpK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LMkpK.png" alt="enter image description here" /></a></p>
<p>How to make X scale logarithmic?</p>
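Log scales are not properly supported on matplotlib's 3D axes, so a common workaround (sketched here with made-up stand-in data) is to plot `log10` of the values and label the axis accordingly:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

# Hypothetical stand-ins for the LotArea/SalePrice/YrSold columns.
area = np.array([8450, 9600, 120000])
price = np.array([208500, 181500, 500000])
year = np.array([2008, 2007, 2010])

fig = plt.figure()
ax = plt.axes(projection="3d")
ax.scatter3D(np.log10(area), price, year)
ax.set_xlabel("Area (log10)")
ax.set_ylabel("Price")
ax.set_zlabel("Year")

# Optionally show the original magnitudes as powers of ten:
ticks = ax.get_xticks()
ax.set_xticks(ticks)
ax.set_xticklabels([f"$10^{{{t:.1f}}}$" for t in ticks])
```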
|
<python><matplotlib><matplotlib-3d>
|
2023-06-10 07:14:43
| 1
| 505
|
Георгий Куминов
|
76,445,048
| 1,872,234
|
Drop consecutive duplicates for multiple columns
|
<p>I have some time-series data with multiple columns:</p>
<pre><code> date hardware location group rtype value
0 2021-03-01 desk NY opera type-s 0
1 2021-03-01 desk NJ opera type-s 200
2 2021-03-01 desk IL opera type-s 100
3 2022-08-01 desk NY opera type-s 275
4 2021-08-25 desk IL opera type-s 100
5 2022-09-16 desk IL opera type-s 30
6 2022-09-16 desk NY opera type-s 0
7 2022-11-01 desk NJ opera type-s 0
8 2022-11-01 desk IL opera type-s 0
</code></pre>
<p>I want to remove consecutive duplicates ignoring the date, e.g.</p>
<pre><code>...
2 2021-03-01 desk IL opera type-s 100
3 2022-08-01 desk NY opera type-s 275
4 2021-08-25 desk IL opera type-s 100
...
</code></pre>
<p>Rows with index 2, 4 are duplicates, and I want to only keep the first one.</p>
<p>I tried two approaches,</p>
<ol>
<li>using <code>drop_duplicates</code> which also drops non-consecutive duplicates,
and</li>
<li>using <code>shift() != </code> that I found <a href="https://stackoverflow.com/a/55360617/1872234">in this answer</a> which also seems to be doing the same thing.</li>
</ol>
<p>Here is my test code:</p>
<pre><code>import pandas as pd
from io import StringIO
data = """
date,hardware,location,group,rtype,value
2021-03-01,desk,NY,opera,type-s,0
2021-03-01,desk,NJ,opera,type-s,200
2021-03-01,desk,IL,opera,type-s,100
2022-08-01,desk,NY,opera,type-s,275
2021-08-25,desk,IL,opera,type-s,100
2022-09-16,desk,IL,opera,type-s,30
2022-09-16,desk,NY,opera,type-s,0
2022-11-01,desk,NJ,opera,type-s,0
2022-11-01,desk,IL,opera,type-s,0
"""
df = pd.read_csv(StringIO(data), parse_dates=['date'])
print('\nOriginal \n', df.to_string())
columns = ['hardware', 'location', 'group', 'rtype']
dedup_cols = columns + ['value']
df = df.drop_duplicates(
subset=dedup_cols, keep='first').reset_index(drop=True)
de_dup = df.loc[(df[dedup_cols].shift() != df[dedup_cols]).any(axis=1)]
print('\nDrop_duplicates: \n', df.to_string())
print('\nShift_filter: \n', de_dup.to_string())
expected = """
date,hardware,location,group,rtype,value
2021-03-01,desk,NY,opera,type-s,0
2021-03-01,desk,NJ,opera,type-s,200
2021-03-01,desk,IL,opera,type-s,100
2022-08-01,desk,NY,opera,type-s,275
2022-09-16,desk,IL,opera,type-s,30
2022-09-16,desk,NY,opera,type-s,0
2022-11-01,desk,NJ,opera,type-s,0
2022-11-01,desk,IL,opera,type-s,0
"""
expected_df = pd.read_csv(StringIO(expected), parse_dates=['date'])
print('Equal drop_duplicates ? ', expected_df.equals(df))
print('Equal shift_filter ? ', expected_df.equals(de_dup))
print('Same result from the two techniques? ', de_dup.equals(df))
print('\nExpected:\n', expected_df.to_string())
</code></pre>
<p>OUTPUT:</p>
<pre><code>Original
date hardware location group rtype value
0 2021-03-01 desk NY opera type-s 0
1 2021-03-01 desk NJ opera type-s 200
2 2021-03-01 desk IL opera type-s 100
3 2022-08-01 desk NY opera type-s 275
4 2021-08-25 desk IL opera type-s 100
5 2022-09-16 desk IL opera type-s 30
6 2022-09-16 desk NY opera type-s 0
7 2022-11-01 desk NJ opera type-s 0
8 2022-11-01 desk IL opera type-s 0
Drop_duplicates:
date hardware location group rtype value
0 2021-03-01 desk NY opera type-s 0
1 2021-03-01 desk NJ opera type-s 200
2 2021-03-01 desk IL opera type-s 100
3 2022-08-01 desk NY opera type-s 275
4 2022-09-16 desk IL opera type-s 30
5 2022-11-01 desk NJ opera type-s 0
6 2022-11-01 desk IL opera type-s 0
Shift_filter:
date hardware location group rtype value
0 2021-03-01 desk NY opera type-s 0
1 2021-03-01 desk NJ opera type-s 200
2 2021-03-01 desk IL opera type-s 100
3 2022-08-01 desk NY opera type-s 275
4 2022-09-16 desk IL opera type-s 30
5 2022-11-01 desk NJ opera type-s 0
6 2022-11-01 desk IL opera type-s 0
Equal drop_duplicates ? False
Equal shift_filter ? False
Same result from the two techniques? True
Expected:
date hardware location group rtype value
0 2021-03-01 desk NY opera type-s 0
1 2021-03-01 desk NJ opera type-s 200
2 2021-03-01 desk IL opera type-s 100
3 2022-08-01 desk NY opera type-s 275
4 2022-09-16 desk IL opera type-s 30
5 2022-09-16 desk NY opera type-s 0
6 2022-11-01 desk NJ opera type-s 0
7 2022-11-01 desk IL opera type-s 0
</code></pre>
<p>I reckon the <code>drop_duplicates</code> method is not suitable.</p>
<p>I was hoping the second method would work, but I don't completely understand why it doesn't.</p>
<p>I am thinking of some approach with <code>groupby</code> + <code>shift</code>: take a consecutive diff and then drop rows where the diff is 0. Any ideas?</p>
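The shift comparison compares against the previous row of the whole frame, not the previous row for the same keys, which is why rows 5 (NY) and 7 (NJ) get dropped. Comparing shifted values within each key group, as a sketch of the groupby + shift idea, produces the expected output:

```python
import pandas as pd
from io import StringIO

data = """date,hardware,location,group,rtype,value
2021-03-01,desk,NY,opera,type-s,0
2021-03-01,desk,NJ,opera,type-s,200
2021-03-01,desk,IL,opera,type-s,100
2022-08-01,desk,NY,opera,type-s,275
2021-08-25,desk,IL,opera,type-s,100
2022-09-16,desk,IL,opera,type-s,30
2022-09-16,desk,NY,opera,type-s,0
2022-11-01,desk,NJ,opera,type-s,0
2022-11-01,desk,IL,opera,type-s,0
"""
df = pd.read_csv(StringIO(data), parse_dates=["date"])

keys = ["hardware", "location", "group", "rtype"]
# A row is a consecutive duplicate only relative to the previous row with
# the *same* keys, so compare each group's values against its own shift.
keep = (df.groupby(keys)["value"]
          .transform(lambda s: s != s.shift())
          .astype(bool))
out = df[keep].reset_index(drop=True)
```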
|
<python><pandas><dataframe><duplicates>
|
2023-06-10 06:50:47
| 2
| 1,643
|
Wajahat
|
76,445,008
| 6,369,958
|
Embedded icon in single-executable package with PyInstaller 5.12
|
<p>I'm running Python 3.11 and PyInstaller 5.12 on Windows 10. This is my spec file:</p>
<pre><code>block_cipher = None
a = Analysis(
['myapp.py'],
pathex=[os.path.abspath(os.curdir)],
binaries=[],
datas=[],
hiddenimports=['encodings'],
hookspath=[],
runtime_hooks=[],
excludes=[],
win_no_prefer_redirects=False,
win_private_assemblies=False,
cipher=block_cipher,
noarchive=False
)
pyz = PYZ(
a.pure, a.zipped_data,
cipher=block_cipher
)
exe = EXE(
pyz,
a.scripts,
a.binaries,
a.zipfiles,
a.datas,
[],
name='myapp',
debug=False,
bootloader_ignore_signals=False,
strip=False,
upx=True,
upx_exclude=[],
runtime_tmpdir=None,
icon='assets/myapp.ico',
console=False
)
</code></pre>
<p>I get a nice <code>dist/myapp.exe</code> auto-unzipping executable, it works fine, but it has the PyInstaller default icon embedded in the .exe instead of the provided <code>assets/myapp.ico</code>.</p>
<p>I don't have this problem with the older version of my app, using Python 3.8 and PyInstaller 4.2 and <em>the same spec file</em>. I don't expect obviously that I don't have to modify the spec file while upgrading to a new major version of PyInstaller but I cannot find in the PyInstaller documentation what's the right why to embed the icon when building the single-exe package.</p>
<p>Where's my mistake? Or it's a bug in PyInstaller 5.12?</p>
<p>Thanks.</p>
|
<python><pyinstaller>
|
2023-06-10 06:32:07
| 0
| 367
|
user6369958
|
76,444,954
| 17,610,082
|
Enums Inheritance not Working in Python3.8?
|
<p>I've wrote below code, works fine in Python3.7</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum
class UserRoles(Enum):
ADMIN = 'ADMIN'
MANAGER = 'MANAGER'
MEMBER = "MEMBER"
class AuthorRoles(UserRoles, Enum):
pass
</code></pre>
<p>Getting below error in Python3.8</p>
<pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.8/enum.py", line 146, in __prepare__
metacls._check_for_existing_members(cls, bases)
File "/usr/lib/python3.8/enum.py", line 527, in _check_for_existing_members
raise TypeError(
TypeError: AuthorRoles: cannot extend enumeration 'UserRoles'
</code></pre>
<p><strong>Note:</strong> I'm aware that inheriting is not recommended according to python docs</p>
<p>just curious what caused the issue.</p>
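Later CPython releases moved the "cannot extend an Enum with members" check earlier (the <code>_check_for_existing_members</code> guard visible in the traceback runs in <code>__prepare__</code>), so even a member-less subclass is now rejected. A workaround sketch is to rebuild a new Enum with the same members via the functional API instead of subclassing:

```python
from enum import Enum

class UserRoles(Enum):
    ADMIN = "ADMIN"
    MANAGER = "MANAGER"
    MEMBER = "MEMBER"

# Rebuild rather than inherit: same names and values, distinct Enum type.
AuthorRoles = Enum("AuthorRoles", [(m.name, m.value) for m in UserRoles])
```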
|
<python><python-3.x><enums>
|
2023-06-10 06:09:05
| 1
| 1,253
|
DilLip_Chowdary
|
76,444,801
| 13,443,114
|
How to Update Stripe Subscription with Python API without Accumulating Price Objects
|
<p>I am using the <strong>Stripe</strong> API to build a Python app. I want to allow a user to upgrade their subscription at any point in time, with the new subscription charged on the next billing interval. I used the modify method of the Stripe Subscription object as shown in the documentation, but the issue is that after modifying the subscription it does not overwrite the previous price object; it simply adds to it, effectively <em>double-charging</em> the user on the next billing interval. Can anyone suggest a simple approach to update the <strong>price</strong> object and drop all references to the previous Stripe <strong>price</strong> object in the subscription?</p>
<pre class="lang-py prettyprint-override"><code>...
# retrieve stripe subscription object
subscription = stripe.Subscription.retrieve("<stripe_subscription_id>")
# modify the subscription
mod_subscription = stripe.Subscription.modify("<stripe_subscription_id>", items=[{"price":"<stripe_price_id>"}])
</code></pre>
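Passing <code>items</code> without an <code>id</code> adds a second subscription item alongside the old one, which is what double-charges. Referencing the existing item's id makes Stripe replace its price instead. A sketch (the helper function is made up; the dict keys follow the Stripe API, and the live calls are left commented because they need an API key):

```python
def build_replace_items(subscription, new_price_id):
    """Payload that swaps the price on the existing subscription item
    rather than appending a new item next to it."""
    current_item_id = subscription["items"]["data"][0]["id"]
    return [{"id": current_item_id, "price": new_price_id}]

# usage with a live key:
# sub = stripe.Subscription.retrieve("<stripe_subscription_id>")
# stripe.Subscription.modify(
#     "<stripe_subscription_id>",
#     items=build_replace_items(sub, "<new_stripe_price_id>"),
#     proration_behavior="none",  # bill the new price from the next interval
# )
```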
|
<python><python-3.x><stripe-payments><python-stripe-api>
|
2023-06-10 05:11:12
| 1
| 428
|
Brian Obot
|
76,444,731
| 3,155,240
|
Numpy where ValueError - operands could not be broadcast together with shapes (x, y, z)
|
<p>I am trying to make the alpha channel in a PNG a particular value, based upon its RGB values. For simplicity, assume I just want to average the first 3 values of each pixel and then change the alpha value based on that.</p>
<pre><code>import numpy as np
# both image and avg are a (10, 10, 4)
img = np.random.rand(10, 10, 4)
avg = np.average(img[:, :, :-1], axis=2)
# the mask is a (10, 10)
mask = avg > .5
# np.full((img.shape[0], img.shape[1], 1), 255) creates a (10, 10, 1) array full of the value 255
# np.append(..., axis=2) appends the above so that each pixel will have a 255 at the end
# np.where(mask, ..., img) should use the mask to find each pixel, update that pixel to either use the alpha channel, if true, or keep its alpha channel, if false.
np.where(mask, np.append(img[:, :, :-1], np.full((img.shape[0], img.shape[1], 1), 255), axis=2), img)
</code></pre>
<p>Currently (and correctly), it throws the error -> ValueError: operands could not be broadcast together with shapes (10,10) (10,10,4) (10,10,4).</p>
<p>If I had to try to explain it, it's because the mask is a 10 x 10, and the values to act upon are a 10x10x4. So is there a way to "get around this"; to have the true or false values return a pixel with the proper value?</p>
<p>I know I can do this with loops, but I wanted to use numpy natively where I could.</p>
<p>Thanks, all!</p>
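The mask just needs a trailing axis so it broadcasts against the (10, 10, 4) operands; `mask[:, :, None]` has shape (10, 10, 1) and stretches along the channel axis. A sketch:

```python
import numpy as np

img = np.random.rand(10, 10, 4)
avg = np.average(img[:, :, :-1], axis=2)
mask = avg > .5                                   # shape (10, 10)

# RGB channels plus a constant alpha channel of 255.
opaque = np.append(img[:, :, :-1],
                   np.full((img.shape[0], img.shape[1], 1), 255.0),
                   axis=2)

# mask[:, :, None] broadcasts (10, 10, 1) against (10, 10, 4).
result = np.where(mask[:, :, None], opaque, img)

# Equivalent in-place form without np.where:
# img[mask, 3] = 255
```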
|
<python><numpy>
|
2023-06-10 04:39:18
| 1
| 2,371
|
Shmack
|
76,444,626
| 1,192,885
|
How to export new data into new lines in csv using Python?
|
<p>I am trying to export data fetched from an API to a csv file, but when I get new data the code writes it to the first line of the csv, and it should go on a new line. What's the problem with the code below?</p>
<pre><code>def start(self):
check = []
latest_result = None
while True:
try:
self.date_now = str(datetime.datetime.now().strftime("%d/%m/%Y"))
results = []
time.sleep(1)
response = requests.get(self.url_API)
json_data = json.loads(response.text)
timestamp = datetime.datetime.now()
formatted_timestamp = timestamp.strftime("%d/%m/%Y, %H:%M:%S")
for i in json_data['results']:
results.append(i)
latest_result = i
if check != results:
check = results
print(f'''{results[0]}, {formatted_timestamp}''')
csv_file = "output.csv"
# Open the CSV file in write mode
with open(csv_file, mode="w", newline="") as file:
writer = csv.writer(file)
# Write each item as a separate row
writer.writerows([results[0]])
self.delete()
self.estrategy(results)
except:
print("ERROR - 404!")
continue
</code></pre>
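The file is reopened with <code>mode="w"</code> on every iteration, which truncates it, so only the latest write survives. Opening in append mode fixes it; a sketch as a small helper (the helper name is made up):

```python
import csv

def append_row(csv_file, row):
    # "a" appends instead of truncating; newline="" avoids blank lines on
    # Windows. writerow writes a single row (writerows expects many rows).
    with open(csv_file, mode="a", newline="") as f:
        csv.writer(f).writerow(row)
```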
|
<python><export-to-csv>
|
2023-06-10 03:44:59
| 1
| 1,303
|
Marco Almeida
|
76,444,617
| 6,301,394
|
Cast pl.Date to Unix epoch
|
<p>Trying to convert a pl.Date column to UNIX epoch as is, without any timezone offset:</p>
<pre class="lang-py prettyprint-override"><code>import datetime
import polars as pl
df = pl.DataFrame(
{'Date': [datetime.datetime.now().date()]}
)
</code></pre>
<p>Correct time (00:00:00) when converted to Datetime:</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(
pl.col("Date").cast(pl.Datetime)
)
</code></pre>
<pre><code>┌─────────────────────┐
│ Date                │
│ ---                 │
│ datetime[μs]        │
╞═════════════════════╡
│ 2023-06-10 00:00:00 │
└─────────────────────┘
</code></pre>
<p>Incorrect time when casting to timestamp:</p>
<pre class="lang-py prettyprint-override"><code>datetime.datetime.fromtimestamp(
df.with_columns(
pl.col("Date").cast(pl.Datetime).dt.timestamp("ms").truediv(1_000)
).item()
)
</code></pre>
<pre><code>datetime.datetime(2023, 6, 10, 8, 0) # (08:00:00)
</code></pre>
<p>As suggested, skipping the cast to Datetime also produces the incorrect time (08:00:00):</p>
<pre class="lang-py prettyprint-override"><code>pl.col("Date").dt.timestamp("ms").truediv(1_000)
</code></pre>
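The epoch value itself is fine; <code>datetime.fromtimestamp</code> interprets the UTC epoch in the machine's local timezone, hence the 08:00 shift on a UTC-8 machine. Reading it back as UTC keeps midnight, sketched here with a hard-coded epoch value:

```python
import datetime

ts = 1686355200.0  # epoch seconds for 2023-06-10 00:00:00 UTC

# Naive fromtimestamp applies the local UTC offset (e.g. 08:00 on UTC-8):
local = datetime.datetime.fromtimestamp(ts)

# Interpreting the value as UTC preserves 00:00:00:
utc = datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc)
```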
|
<python><datetime><python-polars>
|
2023-06-10 03:41:47
| 1
| 2,613
|
misantroop
|
76,444,501
| 785,349
|
TypeError: __init__() got multiple values for argument 'options'
|
<p>What could be the reason for this error being thrown:</p>
<pre><code>Traceback (most recent call last):
File "/Users/me/sc/sc.py", line 30, in <module>
driver = Chrome(ChromeDriverManager().install(), options=chrome_options)
TypeError: __init__() got multiple values for argument 'options'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/me/sc/sc.py", line 34, in <module>
driver = Chrome("./chromedriver", options=chrome_options)
TypeError: __init__() got multiple values for argument 'options'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/me/sc/sc.py", line 36, in <module>
driver = Chrome("chromedriver.exe", options=chrome_options)
TypeError: __init__() got multiple values for argument 'options'
List item
</code></pre>
<p>For this code:</p>
<pre><code>from time import time, sleep
from selenium.webdriver import Chrome
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.remote.webelement import WebElement
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.common.action_chains import ActionChains
from args_parser import ArgsParser
from downloader import download_file
args = ArgsParser()
def print_if_verbose(val):
if args.output_verbose:
print(val)
WAITING_TIMEOUT = 180
chrome_options = Options()
driver_user_agent = ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
'(KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36')
chrome_options.add_argument(f'user-agent={driver_user_agent}')
if not args.display_browser:
chrome_options.add_argument('--headless')
try:
driver = Chrome(ChromeDriverManager().install(), options=chrome_options)
except Exception as e:
print(e)
try:
driver = Chrome("./chromedriver", options=chrome_options)
except Exception:
driver = Chrome("chromedriver.exe", options=chrome_options)
</code></pre>
<p>BTW, I have Chrome 114 on macOS M2 silicon, and using Python 3.9</p>
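In Selenium 4 the <code>Chrome()</code> constructor no longer takes a driver path positionally; its first parameter is <code>options</code>, so the path string fills the options slot and the keyword then collides with it. A minimal reproduction of that mechanism (<code>chrome_like</code> is a stand-in, not the real class), with the Selenium 4 fix in comments:

```python
# Stand-in mirroring Selenium 4's Chrome(options=None, service=None, ...):
def chrome_like(options=None, service=None):
    return options, service

try:
    chrome_like("./chromedriver", options="opts")  # path lands in `options`
except TypeError as exc:
    error = str(exc)   # "... got multiple values for argument 'options'"

# Selenium 4 fix: pass the driver path through a Service object, or omit
# it entirely in 4.6+ where Selenium Manager fetches a matching driver:
# from selenium.webdriver.chrome.service import Service
# driver = Chrome(service=Service(), options=chrome_options)
```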
|
<python><python-3.x><selenium-webdriver>
|
2023-06-10 02:32:50
| 1
| 35,657
|
quarks
|
76,444,364
| 1,484,184
|
mysql : The term 'mysql' is not recognized as the name of a cmdlet Of Code with Mosh's Ultimate Django Series
|
<p>If you have been following
<a href="https://codewithmosh.com/p/the-ultimate-django-series" rel="nofollow noreferrer">The Ultimate Django Series</a> by <strong>Code with Mosh</strong> and you got to the video:</p>
<p><strong>4.Setting Up the Database</strong> -> <strong>9- Using MySQL in Django</strong></p>
<p>and you get the following error in the console on Windows:</p>
<blockquote>
<p>mysql : The term 'mysql' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the
name, or if a path was included, verify that the path is correct and
try again.</p>
</blockquote>
|
<python><django><mysql-connector>
|
2023-06-10 01:10:47
| 1
| 1,650
|
KADEM Mohammed
|
76,444,316
| 6,727,914
|
How to make Pycharm raise warning for setting non existing attributes
|
<p>In Pycharm, I get the following warning when I get a field that does not exists in a class.</p>
<p><a href="https://i.sstatic.net/PvtTY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PvtTY.png" alt="enter image description here" /></a></p>
<p>However, there is no warning when a nonexistent field <code>xyz</code> is set.</p>
<pre><code>class Foo:
def __init__(self, bar):
self.bar = bar
foo = Foo("bar")
print(foo.bar) # No warning
print(foo.baz) # Warning: Unresolved attribute reference 'baz' for class 'Foo'
foo.xyz = "xyz" # how to raise a warning for this saying that xyz does not exist in Foo?
</code></pre>
<p>This is potentially a bug; how can I make PyCharm warn about this?</p>
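PyCharm treats instance attributes as dynamic by default, so arbitrary assignment is legal Python and not flagged. Declaring <code>__slots__</code> closes the attribute set, which PyCharm can then check statically, and Python also enforces it at runtime. A sketch:

```python
class Foo:
    __slots__ = ("bar",)          # only 'bar' may ever be set

    def __init__(self, bar):
        self.bar = bar

foo = Foo("bar")
try:
    foo.xyz = "xyz"               # AttributeError; PyCharm flags it too
except AttributeError as exc:
    error = str(exc)
```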
|
<python><class><types><pycharm><static-analysis>
|
2023-06-10 00:54:03
| 0
| 21,427
|
TSR
|
76,444,294
| 4,434,941
|
scraping a webpage by using the algolia api
|
<p>I am trying to scrape the following webpage:</p>
<p>"https://www.peterpanbmw.com/used-vehicles/"</p>
<p>Because the data on the page loads via JavaScript, instead of writing a UI scraper with Scrapy I was trying to just use the page's underlying API.</p>
<p>When inspecting the Network tab in Chrome, it looks as though the underlying search query is handled by Algolia with the following parameters:</p>
<p>url was <a href="https://sewjn80htn-dsn.algolia.net/1/indexes/*/queries?x-algolia-agent=Algolia%20for%20JavaScript%20(4.9.1)%3B%20Browser%20(lite)%3B%20JS%20Helper%20(3.4.4)&x-algolia-api-key=179608f32563367799314290254e3e44&x-algolia-application-id=SEWJN80HTN" rel="nofollow noreferrer">https://sewjn80htn-dsn.algolia.net/1/indexes/*/queries?x-algolia-agent=Algolia%20for%20JavaScript%20(4.9.1)%3B%20Browser%20(lite)%3B%20JS%20Helper%20(3.4.4)&x-algolia-api-key=179608f32563367799314290254e3e44&x-algolia-application-id=SEWJN80HTN</a></p>
<pre><code>headers = {
'Accept-Encoding': "gzip, deflate, br",
'Accept-Language': "en-US,en;q=0.9",
'Connection': "keep-alive",
'Content-Length': 1702,
'Host': "sewjn80htn-dsn.algolia.net",
'Origin': "https://www.peterpanbmw.com",
'Referer': "https://www.peterpanbmw.com/",
'Sec-Fetch-Dest': "empty",
'Sec-Fetch-Mode': "cors",
'Sec-Fetch-Site': "cross-site",
'User-Agent': "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36",
'content-type': "application/x-www-form-urlencoded",
'sec-ch-ua': ' "Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114" ',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': "macOS",
}
</code></pre>
<p>Therefore, I tried the following approach in python:</p>
<pre><code>import requests
url = "https://sewjn80htn-dsn.algolia.net/1/indexes/*/queries?x-algolia-agent=Algolia%20for%20JavaScript%20(4.9.1)%3B%20Browser%20(lite)%3B%20JS%20Helper%20(3.4.4)&x-algolia-api-key=179608f32563367799314290254e3e44&x-algolia-application-id=SEWJN80HTN"
heads = {
'Accept-Encoding': "gzip, deflate, br",
'Accept-Language': "en-US,en;q=0.9",
'Connection': "keep-alive",
'Content-Length': "1702",
'Host': "sewjn80htn-dsn.algolia.net",
'Origin': "https://www.peterpanbmw.com",
'Referer': "https://www.peterpanbmw.com/",
'Sec-Fetch-Dest': "empty",
'Sec-Fetch-Mode': "cors",
'Sec-Fetch-Site': "cross-site",
'User-Agent': "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36",
'content-type': "application/x-www-form-urlencoded",
'sec-ch-ua': '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"',
'sec-ch-ua-mobile': '?0',
'sec-ch-ua-platform': "macOS",
}
response = requests.get(url, headers=heads)
</code></pre>
<p>This kept erroring out on me though.</p>
<p>Is it possible to call algolia APIs like this?... Any help would be much appreciated</p>
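The Algolia multi-queries endpoint is a POST, not a GET, and the hand-copied browser headers (<code>Content-Length</code>, <code>Host</code>, etc.) can be dropped. A sketch of the request body; the index name and params are assumptions to be copied from the browser's Network tab, and the actual request is left commented since it needs the network:

```python
import json

payload = {
    "requests": [
        {
            "indexName": "<index_name_from_network_tab>",
            "params": "query=&hitsPerPage=20&page=0",
        }
    ]
}
body = json.dumps(payload)

# then:
# response = requests.post(url, data=body)
# hits = response.json()["results"][0]["hits"]
```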
|
<python><web-scraping><scrapy>
|
2023-06-10 00:42:50
| 0
| 405
|
jay queue
|
76,444,274
| 1,084,875
|
Install namespace packages using pyproject.toml
|
<p>I'm trying to install two packages (see below) into the same namespace which is <code>calculator</code>. I installed each package using <code>pip install -e .</code> from the top level of each project. Then I tried to use <code>import calculator</code> and <code>from calculator.adder import add</code> but the module cannot be found when I import it. I tried using <a href="https://realpython.com/python-namespace-package/#what-does-a-namespace-package-look-like" rel="noreferrer">an approach</a> discussed on the Real Python website but it doesn't seem to work. How do I install these packages so they reside in the <code>calculator</code> namespace so I can do things like <code>from calculator.adder import add</code>?</p>
<h2>Package 1</h2>
<p>This is the structure of the <code>adder</code> package.</p>
<pre><code>calculator-add/
├── README.md
├── examples/
│   └── ex_addition.py
├── pyproject.toml
└── src/
    └── calculator/
        └── adder/
            ├── __init__.py
            └── add.py
</code></pre>
<p>Contents of the <code>pyproject.toml</code> is shown below.</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"
[project]
name = "adder"
version = "0.1"
[tool.setuptools.packages.find]
where = ["."]
include = ["calculator"]
namespaces = true
</code></pre>
<p>Contents of the <code>src/calculator/adder/__init__.py</code> is shown below.</p>
<pre class="lang-py prettyprint-override"><code>from .add import add
__all__ = ['add']
</code></pre>
<h2>Package 2</h2>
<p>This is the structure of the <code>divider</code> package.</p>
<pre><code>calculator-divide/
├── README.md
├── examples/
│   └── ex_division.py
├── pyproject.toml
└── src/
    └── calculator/
        └── divider/
            ├── __init__.py
            └── divide.py
</code></pre>
<p>Contents of the <code>pyproject.toml</code> is shown below.</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"
[project]
name = "divider"
version = "0.2"
[tool.setuptools.packages.find]
where = ["."]
include = ["calculator"]
namespaces = true
</code></pre>
<p>Contents of the <code>src/calculator/divider/__init__.py</code> is shown below.</p>
<pre class="lang-py prettyprint-override"><code>from .divide import divide
__all__ = ['divide']
</code></pre>
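With this src layout, `where = ["."]` makes setuptools search the project root, where no `calculator` directory exists, so nothing is installed. Pointing the search at `src` (a sketch; the same change applies to both packages' `pyproject.toml`) should let both halves land in the shared `calculator` namespace:

```toml
[tool.setuptools.packages.find]
where = ["src"]
include = ["calculator*"]
namespaces = true
```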
|
<python><setuptools><python-packaging><pyproject.toml>
|
2023-06-10 00:30:35
| 1
| 9,246
|
wigging
|
76,444,141
| 10,981,411
|
Has anyone used PyOxidizer to create an .exe file?
|
<p>I have created the .toml file called pyoxidizer.toml in the same folder where the scripts are placed.</p>
<p>But when I run the following command in the same folder where all the .py files (and the .toml file) are kept:</p>
<pre><code>pyoxidizer build
</code></pre>
<p>I get the error</p>
<pre><code>error: unable to find PyOxidizer config file at .
</code></pre>
<p>I have checked multiple times and it seems the .toml file is in the same folder yet I get this error.</p>
|
<python><pyinstaller><pyoxidizer>
|
2023-06-09 23:33:03
| 1
| 495
|
TRex
|
76,444,087
| 16,728,369
|
'cross-env' is not recognized as an internal or external command, operable program or batch file
|
<p>I'm working on a team in a Django project with Tailwind, following this setup tutorial: <a href="https://django-tailwind.readthedocs.io/en/latest/installation.html" rel="nofollow noreferrer">https://django-tailwind.readthedocs.io/en/latest/installation.html</a>. Everything works fine until I start Tailwind with
<code>py manage.py tailwind start</code>:</p>
<pre><code>theme@3.5.0 start
npm run dev
theme@3.5.0 dev
cross-env NODE_ENV=development tailwindcss --postcss -i ./src/styles.css -o ../static/css/dist/styles.css -w
'cross-env' is not recognized as an internal or external command,
operable program or batch file.
</code></pre>
<p>here is package.json</p>
<pre><code>{
"name": "theme",
"version": "3.5.0",
"description": "",
"scripts": {
"start": "npm run dev",
"build": "npm run build:clean && npm run build:tailwind",
"build:clean": "rimraf ../static/css/dist",
"build:tailwind": "cross-env NODE_ENV=production tailwindcss --postcss -i ./src/styles.css -o ../static/css/dist/styles.css --minify",
"dev": "cross-env NODE_ENV=development tailwindcss --postcss -i ./src/styles.css -o ../static/css/dist/styles.css -w",
"tailwindcss": "node ./node_modules/tailwindcss/lib/cli.js"
},
"keywords": [],
"author": "",
"license": "MIT",
"devDependencies": {
"@tailwindcss/aspect-ratio": "^0.4.2",
"@tailwindcss/forms": "^0.5.3",
"@tailwindcss/line-clamp": "^0.4.2",
"@tailwindcss/typography": "^0.5.2",
"cross-env": "^7.0.3",
"postcss": "^8.4.14",
"postcss-import": "^15.1.0",
"postcss-nested": "^6.0.0",
"postcss-simple-vars": "^7.0.1",
"rimraf": "^4.1.2",
"tailwindcss": "^3.2.7"
}
}
</code></pre>
<p>Do I need to install cross-env? Should I install it using npm or pip? In my venv or locally?</p>
|
<python><node.js><django><npm><tailwind-css>
|
2023-06-09 23:07:41
| 1
| 469
|
Abu RayhaN
|
76,444,046
| 9,092,669
|
Best way to programmatically edit a SQL query?
|
<p>I have a dictionary of sql queries, where the key is the name of a view, and the value is the query like so:</p>
<pre><code>SELECT
artists.first_name,
artists.last_name,
artist_sales.sales
FROM database.artists
JOIN (
SELECT artist_id, SUM(sales_price) AS sales
FROM database.sales
GROUP BY artist_id
) AS artist_sales
ON artists.id = artist_sales.artist_id;
</code></pre>
<p>What I would like to do is programmatically replace the table names, so that they become something like <code>catalog.database.artists</code> and <code>catalog.database.sales</code>. So whatever function I write would, I think, use regex to find what follows the <code>FROM</code> clause and edit the table name to include the <code>catalog</code> before the database name. Any thoughts on this?</p>
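A regex sketch that rewrites the references after <code>FROM</code>/<code>JOIN</code>. The <code>database.</code> prefix is taken from the example query; for anything more complex than this pattern, a real SQL parser would be more robust than regex:

```python
import re

def add_catalog(sql, catalog, database="database"):
    # Prefix `database.table` with the catalog wherever it directly
    # follows FROM or JOIN (case-insensitive, whitespace-tolerant).
    pattern = rf"\b(FROM|JOIN)\s+({re.escape(database)}\.\w+)"
    return re.sub(pattern,
                  lambda m: f"{m.group(1)} {catalog}.{m.group(2)}",
                  sql,
                  flags=re.IGNORECASE)
```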
|
<python><sql><regex>
|
2023-06-09 22:57:34
| 3
| 395
|
buttermilk
|
76,443,923
| 11,793,491
|
Create data frame with month start and end in Python
|
<p>I want to create a pandas dataframe from a given start and end date:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from pandas.tseries.offsets import MonthEnd
start_date = "2020-05-17"
end_date = "2020-07-23"
</code></pre>
<p>For each row in this dataframe, I should have the start day and end day of the month, so the expected output is:</p>
<pre class="lang-py prettyprint-override"><code>start end month year
2020-05-17 2020-05-31 May 2020
2020-06-01 2020-06-30 June 2020
2020-07-01 2020-07-23 July 2020
</code></pre>
<p>I know I have to loop over each month between the interval created by <code>start_date</code> and <code>end_date</code>. While I know how to extract the last day in a date:</p>
<pre><code>def last_day(date: str):
return pd.Timestamp(date) + MonthEnd(1)
</code></pre>
<p>I'm stuck over how to run this over the interval. Any suggestion will be appreciated.</p>
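A vectorized sketch without an explicit loop: generate the month starts with <code>date_range</code>, pair each with its month end, then clip the first and last rows to the requested bounds:

```python
import pandas as pd

start_date = "2020-05-17"
end_date = "2020-07-23"

# One row per month covering the interval (freq="MS" = month starts).
starts = pd.date_range(pd.Timestamp(start_date).replace(day=1),
                       end_date, freq="MS")
df = pd.DataFrame({
    "start": starts,
    "end": starts + pd.offsets.MonthEnd(1),
})

# Clip the edges to the requested interval.
df["start"] = df["start"].clip(lower=pd.Timestamp(start_date))
df["end"] = df["end"].clip(upper=pd.Timestamp(end_date))
df["month"] = df["start"].dt.month_name()
df["year"] = df["start"].dt.year
```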
|
<python><pandas>
|
2023-06-09 22:13:02
| 2
| 2,304
|
Alexis
|
76,443,900
| 17,152,942
|
SSLCertVerificationError Error when running LangChain ArxivRetriever
|
<p>I'm trying to run the ArxivRetriever in LangChain (specifically this example <a href="https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/arxiv.html" rel="nofollow noreferrer">https://python.langchain.com/en/latest/modules/indexes/retrievers/examples/arxiv.html</a>) but I'm getting a SSLCertVerificationError after running</p>
<pre><code>docs = retriever.get_relevant_documents(query='hello')
</code></pre>
<p>I have tried to install and uninstall certifi, but it's not working. Not sure if there is a way to set verify = False or do something with the certificate.</p>
<p>The specific error I'm getting is:</p>
<pre><code>URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1007)>
</code></pre>
<p>I'm running python 3.10 and on a Mac.</p>
<p>Any help is really appreciated!</p>
|
<python><ssl-certificate><python-3.10><langchain>
|
2023-06-09 22:05:25
| 0
| 361
|
Flo
|
76,443,878
| 1,056,563
|
Pass arguments to `python -c`
|
<p>How can arguments be passed to the command line invocation of <code>python</code>? Using <code>sys.argv</code> had this interesting [but not helpful] result:</p>
<pre><code>echo "a b c" | python -c "import sys;
print([[str(x),a] for x,a in enumerate(sys.argv)])"
[['0', '-c']]
</code></pre>
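Arguments placed after the `-c` program string land in `sys.argv` (with `argv[0]` set to `'-c'`); the `echo` pipeline above feeds stdin, not argv. A quick check of both channels:

```shell
# arguments after the -c string become sys.argv[1:]
python3 -c 'import sys; print(sys.argv[1:])' a b c
# -> ['a', 'b', 'c']

# piped data arrives on stdin instead
echo "a b c" | python3 -c 'import sys; print(sys.stdin.read().split())'
# -> ['a', 'b', 'c']
```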
|
<python>
|
2023-06-09 22:00:34
| 4
| 63,891
|
WestCoastProjects
|
76,443,834
| 9,990,469
|
AWS Lambda Python 3.10 - No module named '_cffi_backend'
|
<p>I am running a Lambda in AWS using Python 3.10 and trying to run some code that builds ssh key-pairs. I get the following error when running my function, can anyone identify how to resolve this?</p>
<p><strong>Error</strong></p>
<pre><code>strap.py", line 39, in _get_handler
m = importlib.import_module(modname.replace("/", "."))
File "/var/lang/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/var/task/index.py", line 4, in <module>
import libraries.ssh as ssh
File "/var/task/libraries/ssh.py", line 3, in <module>
from cryptography.hazmat.primitives import serialization as crypto_serialization
File "/opt/python/lib/python3.10/site-packages/cryptography/hazmat/primitives/serialization/__init__.py", line 7, in <module>
from cryptography.hazmat.primitives._serialization import (
File "/opt/python/lib/python3.10/site-packages/cryptography/hazmat/primitives/_serialization.py", line 11, in <module>
from cryptography.hazmat.primitives.hashes import HashAlgorithm
File "/opt/python/lib/python3.10/site-packages/cryptography/hazmat/primitives/hashes.py", line 10, in <module>
from cryptography.hazmat.bindings._rust import openssl as rust_openssl
pyo3_runtime.PanicException: Python API call failed
ModuleNotFoundError: No module named '_cffi_backend'
thread '<unnamed>' panicked at 'Python API call failed', /github/home/.cargo/registry/src/index.crates.io-6f17d22bba15001f/pyo3-0.18.3/src/err/mod.rs:790:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Traceback (most recent call last):
File "/var/runtime/bootstrap.py", line 60, in <module>
main()
File "/var/runtime/bootstrap.py", line 57, in main
awslambdaricmain.main([os.environ["LAMBDA_TASK_ROOT"], os.environ["_HANDLER"]])
File "/var/runtime/awslambdaric/__main__.py", line 21, in main
bootstrap.run(app_root, handler, lambda_runtime_api_addr)
File "/var/runtime/awslambdaric/bootstrap.py", line 389, in run
request_handler = _get_handler(handler)
File "/var/runtime/awslambdaric/bootstrap.py", line 39, in _get_handler
m = importlib.import_module(modname.replace("/", "."))
File "/var/lang/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/var/task/index.py", line 4, in <module>
import libraries.ssh as ssh
File "/var/task/libraries/ssh.py", line 3, in <module>
from cryptography.hazmat.primitives import serialization as crypto_serialization
File "/opt/python/lib/python3.10/site-packages/cryptography/hazmat/primitives/serialization/__init__.py", line 7, in <module>
from cryptography.hazmat.primitives._serialization import (
File "/opt/python/lib/python3.10/site-packages/cryptography/hazmat/primitives/_serialization.py", line 11, in <module>
from cryptography.hazmat.primitives.hashes import HashAlgorithm
File "/opt/python/lib/python3.10/site-packages/cryptography/hazmat/primitives/hashes.py", line 10, in <module>
from cryptography.hazmat.bindings._rust import openssl as rust_openssl
pyo3_runtime.PanicException: Python API call failed
START RequestId: 96dc8ff0-12e4-4ba3-8b7a-f87a1aff3419 Version: $LATEST
RequestId: 96dc8ff0-12e4-4ba3-8b7a-f87a1aff3419 Error: Runtime exited with error: exit status 1
Runtime.ExitError
</code></pre>
<p><strong>Python Code Snippet</strong></p>
<pre><code>from cryptography.hazmat.primitives import serialization as crypto_serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.backends import default_backend as crypto_default_backend
import paramiko
import io
import boto3
import json
</code></pre>
<p><strong>Pip command installing dependencies</strong></p>
<pre><code>"pip3 install --platform manylinux2014_x86_64 --only-binary=:all: -t ${path.module}/layer/python/lib/python3.10/site-packages -r ${path.module}/layer/requirements.txt"
</code></pre>
<p><strong>requirements.txt file</strong></p>
<pre><code>paramiko
</code></pre>
<p><strong>Supporting Info</strong></p>
<ul>
<li>I am running the code that <a href="https://github.com/aws-samples/aws-secrets-manager-ssh-key-rotation/blob/master/lambda/ssh.py#L1-L39" rel="nofollow noreferrer">AWS publishes</a>. That is the full piece of code that I am using.</li>
<li>I moved the dependencies to a lambda layer</li>
<li>I am building the zip via a GitHub Actions runner (Ubuntu)</li>
</ul>
<p>I've tried a few variations of adding additional libraries to requirements.txt, such as cffi, with no success.</p>
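<p>One way to narrow this down (a hypothetical diagnostic handler, not part of the AWS sample) is to deploy a minimal function that only checks whether the compiled cffi backend is importable in the Lambda runtime:</p>

```python
def handler(event, context):
    # Hypothetical diagnostic: cryptography's Rust bindings need the
    # compiled _cffi_backend extension; report whether it imports here.
    try:
        import _cffi_backend  # noqa: F401
        return {"cffi_backend": "ok"}
    except ModuleNotFoundError as exc:
        return {"cffi_backend": f"missing: {exc}"}

result = handler(None, None)
print(result)
```

<p>If this reports the module as missing, the layer build likely pulled a cffi wheel for the wrong platform or Python version — exactly what the <code>--platform</code>/<code>--only-binary</code> pip flags are meant to prevent.</p>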
|
<python><aws-lambda>
|
2023-06-09 21:50:29
| 2
| 706
|
Chris
|
76,443,611
| 2,987,878
|
Speeding up wall clock with Python asyncio for testing
|
<p>I'd like to test some long-running Python asyncio code (using pytest). The code is slow to run because it sleeps a lot, waiting for various timeouts. Is there anything that lets you run asyncio code as if the wall clock were running more quickly - so the test can be run in reasonable time?</p>
<p>Of course Python <em>code</em> can't be magically made to run faster, rather I'd like to make the wall clock run e.g. 2x or 10x faster, so that <code>asyncio.sleep()</code> returns sooner. I would also need Python date and time functions to respect the resulting time (<code>time.time()</code> and <code>datetime.datetime.today</code> / <code>datetime.date.today</code>).</p>
<p>E.g. imagine the following code:</p>
<pre class="lang-py prettyprint-override"><code>async def slow_fn():
result = []
for ii in range(10):
result.append((ii, datetime.datetime.now()))
await asyncio.sleep(3600)
return result
</code></pre>
<p>This clearly takes 10 hours when run in real time, but its output can easily be simulated over a very short time.</p>
<p>Some options I considered and I don't think they help:</p>
<ul>
<li>I could use something like faketime to <em>set</em> the time, but it's not clear to me that I can use it also to make time flow <em>faster</em>.</li>
<li>It is not practical to [re]-write the code in such a manner that e.g. the sleep time is parametric, then use a shorter sleep interval in tests. For example, parts of the code wait for a particular <em>time</em>, not for a particular length of time. It is essential to the logic that e.g. something happens once per hour.</li>
<li>There exists a really cool library Simpy: <a href="https://simpy.readthedocs.io/en/latest/" rel="nofollow noreferrer">https://simpy.readthedocs.io/en/latest/</a> which simulates such events not in real time (so e.g. the sample code would evaluate immediately), except it then doesn't work in real time at all. Whereas I want to write the code using asyncio, then test it quickly in Python.</li>
</ul>
<p>I wonder if there is some ground source of truth of time in Python, and whether that can be monkeypatched in a pytest test, to make it appear to run more quickly. And if so, would it mess up various Python internals? Other suggestions for working around this issue are also welcome.</p>
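<p>One crude sketch along these lines (a test-only monkeypatch, with the speed factor and names chosen for illustration) is to wrap <code>asyncio.sleep</code> so every requested delay is divided by a constant factor. It does not touch <code>time.time()</code> or <code>datetime</code>, so it only covers the pure-sleep part of the problem:</p>

```python
import asyncio
import time

SPEEDUP = 100  # illustrative factor; a pytest fixture could set this
_real_sleep = asyncio.sleep

async def _fast_sleep(delay, result=None):
    # Scale every requested sleep down so hour-long waits finish quickly
    return await _real_sleep(delay / SPEEDUP, result)

asyncio.sleep = _fast_sleep  # in pytest: monkeypatch.setattr(asyncio, "sleep", ...)

async def main():
    t0 = time.monotonic()
    await asyncio.sleep(50)  # nominally 50 s, runs in roughly 0.5 s
    return time.monotonic() - t0

elapsed = asyncio.run(main())
print(f"elapsed: {elapsed:.2f}s")
```

<p>Code that waits for a particular wall-clock <em>time</em> rather than a duration would still need its time source patched separately, which is where this sketch falls short of a full solution.</p>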
|
<python><time><pytest><python-asyncio>
|
2023-06-09 21:03:44
| 2
| 386
|
Bennet
|