QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 β |
|---|---|---|---|---|---|---|---|---|
77,231,899 | 11,112,772 | How to I install the latest versions of numpy which are available as ".whl" files in the repository of Chaquopy? | <p><strong>When did the question arise?</strong></p>
<p>For a purpose, I need to install the <code>numpy</code> version at least 1.20.3. Since the default installation of <code>numpy</code> by the command <code>install "numpy"</code> does not satisfy the requirement, initially, I tried to install it by <code>install "numpy==1.20.3"</code>. Later, I tried to install the <code>numpy</code> as a wheel file from the <a href="https://chaquo.com/pypi-extra/numpy/" rel="nofollow noreferrer">repository of Chaquopy</a> which is supposed to be Android compatible. However, in both of the cases, I get the same error</p>
<blockquote>
<p>error: CCompiler.compile: Chaquopy cannot compile native code</p>
</blockquote>
<p><strong>Questions</strong></p>
<p>I have 3 questions.</p>
<ol>
<li>Chaquopy mentions</li>
</ol>
<blockquote>
<p>In our most recent tests, Chaquopy could install over 90% of the top 1000 packages on PyPI. This includes almost all pure-Python packages, plus a constantly-growing selection of packages with <strong>native components</strong>. To see which <strong>native packages</strong> are currently available, you can browse the repository here.</p>
</blockquote>
<p>I am using the latest version 14.0.2 of Chaquopy. Also, as the <code>numpy</code> package is in that <a href="https://chaquo.com/pypi-extra/numpy/" rel="nofollow noreferrer">repository</a>, isn't it supposed to be able to compile the native code?</p>
<ol start="2">
<li><p>I understand that the available wheel files in PyPi are not compatible for Android OS. If the wheel files of <code>numpy</code> available in Chaquopy also do not work, what is the use of those wheel files?</p>
</li>
<li><p>Is there any way to install the latest versions of <code>numpy</code> which are available as wheel file in the repository of Chaquopy?</p>
</li>
</ol>
<p><strong>Note</strong>: I would like to mention I checked many similar questions asked in SO and in GitHub. But I did not get answers to these 3 questions. I think since <code>numpy</code> is one of the most widely used libraries in Python, the availability of the latest versions for installation is very important.</p>
<p>Thanks in advance :).</p>
| <python><android><numpy><chaquopy> | 2023-10-04 17:49:04 | 1 | 881 | Md Sabbir Ahmed |
77,231,828 | 1,761,521 | How do I structure an AWS CDK application to be able to import from a python package in the same root directory? | <p>I have a project directory that looks like,</p>
<pre><code>.
βββ infrastructure
β βββ app.py
β βββ stacks
β βββ batch
β βββ long_running
β βββ Dockerfile
β βββ app.py
β βββ functions
β βββ lambda1
β βββ Dockerfile
β βββ app.py
β βββ requirements.txt
βββ pyproject.toml
βββ src
βββ myprogram
β βββ __init__.py
β βββ core.py
</code></pre>
<p>Inside my <code>lambda1</code> function, I want to import code from <code>myprogram.core</code>; for example,</p>
<p><em>infrastructure/functions/lambda1/app.py</em></p>
<pre><code>from myprogram.core import SomeClass
def lambda_handler(event, context):
foo = SomeClass()
...
</code></pre>
<p>How do I make <code>myprogram</code> available so that I can import it into my infrastructure application?</p>
| <python><aws-cdk> | 2023-10-04 17:37:59 | 1 | 3,145 | spitfiredd |
77,231,648 | 11,278,588 | Matching data two inputs | <p>I have 2 inputs as follows:</p>
<pre><code>input1 = ["order", "name", "age", "soul", "test", "chill", "thym", "transport", "lim"]
input2 = ["order", "age", "soul", "chill", "thym", "exe", "test", "lim"]
</code></pre>
<p>I want to match these 2 lists and want the output to have 2 columns as follows:</p>
<pre><code>order order
name -
age age
soul soul
test -
chill chill
thym thym
transport -
- exe
- test
lim lim
</code></pre>
<p>Output is obtained by matching two inputs</p>
<ul>
<li>If text is present in input 1, but not in input 2, then the first column output is the text of input 1 and the character -</li>
<li>If text is present in input 2, but not in input 1, the first column outputs the character - and the text of input 2</li>
<li>If text is present in both input 1 and 2 but their positions are different like in the case of the word <code>test</code> in 2 inputs, the output will be <code>test -</code> and <code>- test</code>.</li>
</ul>
<p>In the case of text present in one input but not in another input, I can easily handle it with a normal search. But in case text is in both inputs like the word "test" in the example, I have not found a solution yet.</p>
<p>I want the output to be optimal by having the least number of <code>-</code> characters.</p>
| <python> | 2023-10-04 17:09:13 | 1 | 401 | Kepler452B |
77,231,562 | 13,653,794 | Difference in behaviour of identical lists | <p>When attempting <a href="https://leetcode.com/problems/n-queens/" rel="nofollow noreferrer">N-Queens</a>, the solution I was using would break when using the first way to create a list.</p>
<p>I thought both of these methods created the same list, why do they behave differently?</p>
<p>(Full Code Below)</p>
<p>Final Output (incorrect):</p>
<pre><code>board = [["."]*n]*n
# [["QQQQ","QQQQ","QQQQ","QQQQ"],["QQQQ","QQQQ","QQQQ","QQQQ"]]
</code></pre>
<p>Final output (correct):</p>
<pre><code>board = [["."]*n for i in range(n)]
# [[".Q..","...Q","Q...","..Q."],["..Q.","Q...","...Q",".Q.."]]
</code></pre>
<pre class="lang-py prettyprint-override"><code>
class Solution:
def solveNQueens(self, n: int) -> List[List[str]]:
out = []
cols = set()
pos_diag = set()
neg_diag = set()
# board = [["."]*n]*n
board = [["."]*n for i in range(n)]
def dfs(r):
if r == n:
copy = ["".join(row) for row in board]
out.append(copy)
return
for c in range(n):
if c in cols or (r+c) in pos_diag or (r-c) in neg_diag:
continue
cols.add(c)
pos_diag.add(r+c)
neg_diag.add(r-c)
board[r][c] = "Q"
dfs(r+1)
cols.remove(c)
pos_diag.remove(r+c)
neg_diag.remove(r-c)
board[r][c] = "."
dfs(0)
return out
</code></pre>
| <python><list><recursion> | 2023-10-04 16:56:03 | 1 | 374 | Fergus Johnson |
77,231,435 | 3,261,772 | Text Embeddings from Finetuned llama2 model | <p>I have finetuned my locally loaded llama2 model and saved the adapter weights locally. To load the fine-tuned model, I first load the base model and then load my peft model like below:</p>
<pre><code>model = PeftModel.from_pretrained(base_model, peft_model_id)
</code></pre>
<p>Now, I want to get the text embeddings from my finetuned llama model using LangChain but LlamaCppEmbeddings accepts model_path as an argument not the model. What is the best way to create text embeddings using a loaded model?</p>
<pre><code>embeddings = LlamaCppEmbeddings(model_path=llama_model_path, n_ctx=2048)
</code></pre>
<p>my questions are: 1- how to save my finetuned model (base+peft) not only the PeftModel locally?
2- How can I create text embeddings from my loaded model?
Thanks in advance</p>
| <python><langchain><llama><peft> | 2023-10-04 16:37:18 | 1 | 1,205 | Hamid K |
77,231,407 | 1,532,602 | MySQL datetime not converting GMT to appropriate timezone in Python | <p>I created A MySQL table with a timestamp field which is supposed to hold GMT time:</p>
<pre><code>CREATE TABLE `Notes` (
`RecordID` varchar(36) NOT NULL,
`DateAdded` timestamp NOT NULL,
`UserID` varchar(36) NOT NULL,
`Note` text NOT NULL,
PRIMARY KEY (`RecordID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;
SET FOREIGN_KEY_CHECKS = 1;
</code></pre>
<p>When I save a record, I use the following code to save the DateAdded:</p>
<pre><code>gmt = pytz.timezone("GMT")
return datetime.now(gmt)
</code></pre>
<p>It APPEARS to create the correct datetime in the Table; however, I am unable to convert the GMT datetime when I read the record back into Python:</p>
<pre><code>local_tz = pytz.timezone("US/Pacific")
new_date = date.astimezone(local_tz)
</code></pre>
<p><code>new_date</code> is the same datetime in the table regardless of the timezone I set. Anything I should be looking at?</p>
| <python><mysql> | 2023-10-04 16:32:41 | 1 | 1,665 | Elcid_91 |
77,231,391 | 21,540,734 | Can anyone explain to me how these constants in win32con are picked apart? | <p>What I'm trying to do is find out what is fullscreen or exclusive fullscreen. The code I have is to generate the styles of the open windows on my desktop. WS_MAXIMIZE has a different numbers then what is being printed by the code. Can someone explain to me how these numbers are being generated by windows?</p>
<p>SO you know Siper-Man is exclusive fullscreen, and pot player is fullscreen while playing a video.</p>
<pre class="lang-py prettyprint-override"><code>from win32gui import GetWindowLong, GetWindowText
from pyvda import AppView, get_apps_by_z_order # noqa
from datetime import datetime
import win32con
import inspect
# Spider-Man: Exclusive Fullscreen
tinmestamp = datetime.now().timestamp()
while datetime.now().timestamp() - tinmestamp < 10:
continue
with open('GWL.txt', 'w') as file:
for key, value in [member for member in inspect.getmembers(win32con) if 'GWL' in member[0]]:
file.write(f'{key + ": int":35} = {value}\n')
with open('256.txt', 'w') as file:
for key, value in [member for member in inspect.getmembers(win32con) if member[1] == 256]:
file.write(f'{key + ": int":35} = {value}\n')
with open('WS.txt', 'w') as file:
for key, value in [member for member in inspect.getmembers(win32con) if member[0].startswith('WS_')]:
file.write(f'{key + ": int":35} = {value}\n')
with open('win32con.txt', 'w') as file:
for key, value in [member for member in inspect.getmembers(win32con) if member[0].isupper()]:
file.write(f'{key + ": int":35} = {value}\n')
print(f'{"MAXIMIZED":>12}')
windows = [window for window in get_apps_by_z_order(current_desktop = False) if (GetWindowLong(window.hwnd, win32con.GWL_STYLE) & win32con.WS_MAXIMIZE)]
for window in windows:
print(f'{GetWindowText(window.hwnd)}')
print()
print(f'{"GWL_STYLE":>12}')
for window in get_apps_by_z_order(current_desktop = False):
print(f'{GetWindowLong(window.hwnd, win32con.GWL_STYLE):<15} {GetWindowText(window.hwnd)}')
print()
print(f'{"GWL_EXSTYLE":>14}')
for window in get_apps_by_z_order(current_desktop = False):
print(f'{GetWindowLong(window.hwnd, win32con.GWL_EXSTYLE):<15} {GetWindowText(window.hwnd)}')
</code></pre>
<pre class="lang-none prettyprint-override"><code>output:
MAXIMIZED
MonitorPowerUtility β topmost.py
Alienware Command Center
GWL_STYLE
369098752 Marvel's Spider-Man Remastered v2.616.0.0
399441920 MonitorPowerUtility β topmost.py
-1781596160 Alienware Command Center
382664704 Can anyone explain to me how these constants in win32con are picked apart? - Stack Overflow β Mozilla Firefox
-1764818944 FBI - S02E14 - Studio Gangster.mkv - PotPlayer
GWL_EXSTYLE
8 Marvel's Spider-Man Remastered v2.616.0.0
256 MonitorPowerUtility β topmost.py
2097408 Alienware Command Center
256 Can anyone explain to me how these constants in win32con are picked apart? - Stack Overflow β Mozilla Firefox
262416 FBI - S02E14 - Studio Gangster.mkv - PotPlayer
</code></pre>
<pre class="lang-none prettyprint-override"><code>Generated WS.txt file:
WS_ACTIVECAPTION: int = 1
WS_BORDER: int = 8388608
WS_CAPTION: int = 12582912
WS_CHILD: int = 1073741824
WS_CHILDWINDOW: int = 1073741824
WS_CLIPCHILDREN: int = 33554432
WS_CLIPSIBLINGS: int = 67108864
WS_DISABLED: int = 134217728
WS_DLGFRAME: int = 4194304
WS_EX_ACCEPTFILES: int = 16
WS_EX_APPWINDOW: int = 262144
WS_EX_CLIENTEDGE: int = 512
WS_EX_COMPOSITED: int = 33554432
WS_EX_CONTEXTHELP: int = 1024
WS_EX_CONTROLPARENT: int = 65536
WS_EX_DLGMODALFRAME: int = 1
WS_EX_LAYERED: int = 524288
WS_EX_LAYOUTRTL: int = 4194304
WS_EX_LEFT: int = 0
WS_EX_LEFTSCROLLBAR: int = 16384
WS_EX_LTRREADING: int = 0
WS_EX_MDICHILD: int = 64
WS_EX_NOACTIVATE: int = 134217728
WS_EX_NOINHERITLAYOUT: int = 1048576
WS_EX_NOPARENTNOTIFY: int = 4
WS_EX_OVERLAPPEDWINDOW: int = 768
WS_EX_PALETTEWINDOW: int = 392
WS_EX_RIGHT: int = 4096
WS_EX_RIGHTSCROLLBAR: int = 0
WS_EX_RTLREADING: int = 8192
WS_EX_STATICEDGE: int = 131072
WS_EX_TOOLWINDOW: int = 128
WS_EX_TOPMOST: int = 8
WS_EX_TRANSPARENT: int = 32
WS_EX_WINDOWEDGE: int = 256
WS_GROUP: int = 131072
WS_HSCROLL: int = 1048576
WS_ICONIC: int = 536870912
WS_MAXIMIZE: int = 16777216
WS_MAXIMIZEBOX: int = 65536
WS_MINIMIZE: int = 536870912
WS_MINIMIZEBOX: int = 131072
WS_OVERLAPPED: int = 0
WS_OVERLAPPEDWINDOW: int = 13565952
WS_POPUP: int = -2147483648
WS_POPUPWINDOW: int = -2138570752
WS_SIZEBOX: int = 262144
WS_SYSMENU: int = 524288
WS_TABSTOP: int = 65536
WS_THICKFRAME: int = 262144
WS_TILED: int = 0
WS_TILEDWINDOW: int = 13565952
WS_VISIBLE: int = 268435456
WS_VSCROLL: int = 2097152
</code></pre>
| <python><win32con> | 2023-10-04 16:30:24 | 0 | 425 | phpjunkie |
77,231,350 | 7,505,228 | Invoking a Google Cloud Function with non-http trigger via python | <p>I have deployed a cloud function (1st gen) with a trigger when a new object is created in a bucket</p>
<pre><code>gcloud functions deploy <function_name> \
--source <source_folder> \
--trigger-bucket=<trigger_bucket> \
--region europe-west3 \
--runtime python38 \
...
</code></pre>
<p>It is possible to manually test it by using <code>gcloud</code></p>
<pre><code>gcloud functions call <function_name> --data '{"bucket": "<trigger_bucket>", "name": "<file_name>"}'
</code></pre>
<p>However, I'd also like this cloud function on some past files (not all of them) in the bucket.</p>
<p>So I tried to write a python script</p>
<pre class="lang-py prettyprint-override"><code>from google.auth.request import Request
from google.oauth2
import requests
cloud_function_url = "https://europe-west3-<project_name>.cloudfunctions.net/<function_name>"
token = google.oauth2.id_token.fetch_id_token(Request(), cloud_function_url)
def invoke_cloud_function():
return requests.post(cloud_function_url, headers={"Content-Type": "application/json", "Authorization": f"Bearer {token}"}, json={"bucket": "<trigger_bucket>", "name": "<file_name>"})
</code></pre>
<p>I am however getting an error 401 <code>'\n<html><head>\n<meta http-equiv="content-type" content="text/html;charset=utf-8">\n<title>401 Unauthorized</title>\n</head>\n<body text=#000000 bgcolor=#ffffff>\n<h1>Error: Unauthorized</h1>\n<h2>Your client does not have permission to the requested URL <code>/dev_ingestion_collection</code>.</h2>\n<h2></h2>\n</body></html>\n'</code></p>
<p>I seems that the issue stems from the trigger not be an http trigger, because the same code but using the <code>trigger-http</code> seems to work.</p>
<p>However, the gcloud command seems to hint that there is the possibility to run an HTTP request to trigger a non-HTTP cloud fonction, even though I cannot find anything in the documentation.</p>
<p>Is there any way to do this other than using `subprocess.run("gcloud functions call ...") ?</p>
| <python><google-cloud-functions> | 2023-10-04 16:25:12 | 1 | 2,289 | LoicM |
77,231,334 | 15,209,268 | How to zoom in and out with Moviepy | <p>I have a video in 1080P resolution and need to perform various editing tasks, including rotation, cropping, trimming, and zooming. While I've successfully applied all these edits using MoviePy, I'm encountering an issue with the zoom effect.</p>
<p>Specifically, I'm looking for a way to zoom in and out by a specified factor without altering the video's resolution. I attempted to use the 'resize' function in MoviePy, but it didn't seem to have any effect, and I'm unsure if it's equivalent to achieving the zoom effect I want. My goal is to maintain the video's original resolution throughout.</p>
<p>Although this may seem like a straightforward task, I haven't been able to find a clear function in MoviePy that accomplishes this.</p>
<p>I'd appreciate any guidance on how to achieve the zoom effect with MoviePy. If MoviePy doesn't offer a straightforward solution, I'd also like to know if there are other libraries that provide this functionality out of the box. While I'm aware of the option to implement it with OpenCV, I'm hoping to explore alternatives.</p>
| <python><zooming><moviepy> | 2023-10-04 16:23:18 | 0 | 444 | Oded |
77,231,294 | 9,983,652 | why using resample().interpolate('spine') create all nan? | <p>I have a dataframe with date gap, for example, 2023 10-02 10:00, 2023 10-02 10:10, then jump to next day of 2023 10-03 10:00,2023 10-03 10:10. Now I am using <code>resample('10min')</code> to create time series every 10min from 2021 01-01 to 2021 01-02, then using <code>interpolate(method="slinear", fill_value="extrapolate", limit_direction="both",order=2)</code> to fill missing date gap, however, the result is all of data become NaN.</p>
<p>so I am trying to understand why <code>interpolate()</code> create all NaN while <code>mean()</code> doesn't?</p>
<p>Here's the code</p>
<pre><code>test_file='test_resample.csv'
df=pd.read_csv(test_file,sep='\t').set_index('Date')
df.tail(13)
100 102 104
Date
10/2/2023 9:43 202.7250 202.9100 203.3800
10/2/2023 9:52 203.3000 203.2250 203.5100
10/2/2023 10:01 203.2875 203.4000 203.7050
10/2/2023 10:10 202.7450 202.8000 203.7125
10/2/2023 10:19 203.0075 202.9925 202.8875
10/3/2023 7:49 205.7625 205.7000 206.1250
10/3/2023 7:59 205.4600 205.8675 205.8925
10/3/2023 8:08 205.7925 205.7325 205.9850
10/3/2023 8:17 205.5525 206.0225 205.8250
10/3/2023 8:26 205.7925 205.8000 206.2675
10/3/2023 8:35 205.8400 205.9250 205.9475
10/3/2023 8:44 204.1450 204.8400 205.0275
10/3/2023 8:53 204.2175 204.0425 204.3575
</code></pre>
<p>Now I am doing resampling and use interpolate to fill gap, you can see I got all NaN, why?</p>
<pre><code>rule_1='20min'
df.index=pd.to_datetime(df.index)
df_2=df.resample(rule=rule_1).interpolate(method="slinear", limit_direction="both",fill_value="extrapolate", order=2)
df_2.tail(13)
100 102 104
Date
2023-10-03 04:40:00 NaN NaN NaN
2023-10-03 05:00:00 NaN NaN NaN
2023-10-03 05:20:00 NaN NaN NaN
2023-10-03 05:40:00 NaN NaN NaN
2023-10-03 06:00:00 NaN NaN NaN
2023-10-03 06:20:00 NaN NaN NaN
2023-10-03 06:40:00 NaN NaN NaN
2023-10-03 07:00:00 NaN NaN NaN
2023-10-03 07:20:00 NaN NaN NaN
2023-10-03 07:40:00 NaN NaN NaN
2023-10-03 08:00:00 NaN NaN NaN
2023-10-03 08:20:00 NaN NaN NaN
2023-10-03 08:40:00 NaN NaN Na
</code></pre>
<p>if I use <code>mean()</code> instead of <code>interpolate()</code>, some of data are still kept like this</p>
<pre><code>rule_1='20min'
df.index=pd.to_datetime(df.index)
df_1=df.resample(rule=rule_1).mean()
df_1.tail(13)
100 102 104
Date
2023-10-03 04:40:00 NaN NaN NaN
2023-10-03 05:00:00 NaN NaN NaN
2023-10-03 05:20:00 NaN NaN NaN
2023-10-03 05:40:00 NaN NaN NaN
2023-10-03 06:00:00 NaN NaN NaN
2023-10-03 06:20:00 NaN NaN NaN
2023-10-03 06:40:00 NaN NaN NaN
2023-10-03 07:00:00 NaN NaN NaN
2023-10-03 07:20:00 NaN NaN NaN
2023-10-03 07:40:00 205.61125 205.78375 206.00875
2023-10-03 08:00:00 205.67250 205.87750 205.90500
2023-10-03 08:20:00 205.81625 205.86250 206.10750
2023-10-03 08:40:00 204.18125 204.44125 204.69250
</code></pre>
<p>So why does <code>interpolate()</code> create all NaN, while <code>mean()</code> is different?</p>
| <python><pandas> | 2023-10-04 16:18:37 | 0 | 4,338 | roudan |
77,231,187 | 15,189,432 | Extract date, day, time in single columns out of a dataframe column | <p>How can I convert the dataframe so that I have one column with "Day", one with "date", one with "time"?
I tried to slide bit can't get the correct output.</p>
<pre><code>df = pd.DataFrame({'Datum': ["Samstag, 12.08.2023AnstoΓΕΈ um 15:30 Uhr", "Samstag, 19.08.2023AnstoΓΕΈ um 18:30 Uhr", "Samstag, 26.08.2023AnstoΓΕΈ um 15:30 Uhr"],
'Begegnungen': ["TTSV SCHOTT Mainz - Borussia Dortmund", "VfL Bochum - Borussia Dortmund", "Freiburg - Borussia Dortmund"]})
def custom_slice(full_string, start_index, end_index):
return full_string[start_index:end_index]
# Define the start and end indices for the slicing
start_index = 5
end_index = 10
# Apply the custom slicing function to the 'TextColumn' and create a new column 'SlicedColumn'
df['SlicedColumn'] = df['Datum'].apply(lambda x: custom_slice(x, start_index, end_index))
# Display the DataFrame with the sliced column
print(df)
</code></pre>
| <python><pandas> | 2023-10-04 16:04:14 | 2 | 361 | Theresa_S |
77,231,133 | 7,252,531 | AWS Glue Spark Job Extract Database Table Data in Parallel | <p>Suppose I have this simple AWS Glue 4.0 PySpark job:</p>
<pre><code>import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args["JOB_NAME"], args)
tables = ['table1','table2','table3','table4','table5']
for table in tables:
dyf = glueContext.create_dynamic_frame.from_options(
connection_type="sqlserver",
connection_options={
"useConnectionProperties": "true",
"dbtable": table,
"connectionName": "my-glue-jdbc-connection",
}
)
glueContext.write_dynamic_frame.from_options(
frame=dyf,
connection_type="s3",
format="parquet",
connection_options={"path": f"s3://my-s3-bucket/{table}", "partitionKeys": []},
format_options={"compression": "gzip"}
)
</code></pre>
<p>This job completes with the expected output in S3, however, everything within the <code>for</code> loop is run sequentially and as a result is pretty slow.</p>
<p>Is there a way to rearrange the code such that each <code>write_dynamic_frame</code> operation runs in parallel?</p>
| <python><amazon-web-services><apache-spark><pyspark><aws-glue> | 2023-10-04 15:55:39 | 1 | 1,830 | gbeaven |
77,230,984 | 1,920,827 | Disable Warnings when executing python code from C# with process class | <p>I'm running a sample Python script through the command prompt:</p>
<pre class="lang-py prettyprint-override"><code>import warnings
print('Hello, world!')
warnings.warn('Geeks 4 Geeks !')
</code></pre>
<p>Executing script file with warnings:</p>
<pre><code>python first_script.py Hello World!
</code></pre>
<p>Executing script file without warnings:</p>
<pre><code>python -W "ignore" first_script.py Hello World!
</code></pre>
<p>The pythonΒ script file mentioned above runs as expected from the command prompt.</p>
<p>Now, I am executing python code from C# with process class</p>
<pre class="lang-cs prettyprint-override"><code>string python = @"C:\\Python27\python.exe";
string myPythonApp = @"C:\\Dropbox\scripts\loadbuild\sum.py";
int x = 2;
int y = 5;
ProcessStartInfo myProcessStartInfo = new ProcessStartInfo(python);
myProcessStartInfo.UseShellExecute = false;
myProcessStartInfo.RedirectStandardOutput = true;
myProcessStartInfo.Arguments = myPythonApp + " " + x + " " + y;
Process myProcess = new Process();
myProcess.StartInfo = myProcessStartInfo;
myProcess.Start();
StreamReader myStreamReader = myProcess.StandardOutput;
string myString = myStreamReader.ReadToEnd();
myProcess.ErrorDataReceived += Process_ErrorDataReceived;
myProcess.OutputDataReceived += Process_OutputDataReceived;
myProcess.WaitForExit();
myProcess.Close();
Console.WriteLine("Value received from script: " + myString);
Console.ReadLine();
</code></pre>
<p>How can we decide whether to allow or ignore warnings in this case?</p>
<p>The strategy I'm attempting is:</p>
<pre class="lang-cs prettyprint-override"><code>string python = @"C:\\Python27\python.exe";
string myPythonApp = @" - W 'ignore' C:\\Dropbox\scripts\loadbuild\sum.py";
</code></pre>
<p>Please share your thoughts on whether this is the right course of action.</p>
<p><strong>Update:</strong></p>
<p>When there is a warning on the script file with the following code, I am getting an exception:</p>
<pre><code>string python = @"C:\\Python27\python.exe";
string myPythonApp = @"C:\\Dropbox\scripts\loadbuild\sum.py";
</code></pre>
<p>Exception is: "warnings.warn('Geeks 4 Geeks !')"</p>
<p>The following code works very well to ignore the warnings:</p>
<pre><code>string python = @"C:\\Python27\python.exe";
string myPythonApp = @" - W 'ignore' C:\\Dropbox\scripts\loadbuild\sum.py";
</code></pre>
<p>Is there any way to display python warnings via csharp process?</p>
| <python><c#><.net><process> | 2023-10-04 15:37:03 | 0 | 9,563 | Pearl |
77,230,983 | 18,445,352 | Why does it take longer to execute a simple for loop in Python 3.12 than in Python 3.11? | <p>Python 3.12 was released two days ago with several new features and improvements. It claims that it's faster than ever, so I decided to give it a try. I ran a few of my scripts with the new version, but it was slower than before. I tried various approaches, simplifying my code each time in an attempt to identify the bottleneck that was causing it to run slowly. However, I have been unsuccessful so far. Finally, I decided to test a simple for loop like the following:</p>
<pre><code>import time
def calc():
for i in range(100_000_000):
x = i * 2
t = time.time()
calc()
print(time.time() - t)
</code></pre>
<p>On my machine, it took 4.7s on Python 3.11.5 and 5.7s on Python 3.12.0. Trying on other machines had similar results.</p>
<p>So why is it slower in the latest version of Python?</p>
| <python><performance> | 2023-10-04 15:36:51 | 1 | 346 | Babak |
77,230,902 | 2,148,773 | How to depend on opencv-python OR opencv-python-headless? | <p>In the setup.py file for my package I currently have the line</p>
<pre><code>install_requires=['opencv-python'],
</code></pre>
<p>because my package uses some OpenCV functions. When installing my package, <code>opencv-python</code> is automatically installed as well.</p>
<p>But I want to deploy my package sometimes on a development host where <code>opencv-python</code> is installed, and sometimes on a server where <code>opencv-python-headless</code> is installed. Both packages are fine for me. Currently, when I install my package on the server, it will always cause <code>opencv-python</code> to be installed in addition to the existing <code>opencv-python-headless</code> package. According to <a href="https://pypi.org/project/opencv-python/#installation-and-usage" rel="noreferrer">https://pypi.org/project/opencv-python/#installation-and-usage</a> installing both packages in parallel is not a good thing.</p>
<p>Also, in some environments <code>opencv-contrib-python-headless</code> is installed; so there is a third option that would satisfy the dependency for me.</p>
<p>Is there a way to specify these "alternative" dependencies for <code>opencv-python</code> in my setup.py file, so that <em>either</em> <code>opencv-python</code> <em>or</em> <code>opencv-python-headless</code> is installed? Would it help to move to some other packaging tool to get this feature?</p>
| <python><opencv><pip><python-packaging> | 2023-10-04 15:24:45 | 0 | 6,891 | oliver |
77,230,830 | 9,317,361 | How to disable password manager in chrome webdriver | <p>I hope you are all having a good day.</p>
<p>I have been trying to disable the password manager bubble for the last 3 days now without any luck. The reeason why I want to disable it is because its interfeering with my selenium automation even though it's not supposed to. But since I don't need password manager for my automation I would like to disable it completely.</p>
<p><a href="https://i.sstatic.net/VHWfh.png" rel="noreferrer"><img src="https://i.sstatic.net/VHWfh.png" alt="enter image description here" /></a></p>
<p><strong>What I have tried so far</strong></p>
<p>Below is a list with all the flags I could find related to password manager. Even after adding all of them it still shows the password manager.</p>
<pre><code>'--credentials_enable_service=false '
'--profile.password_manager_enabled=false '
'--disable-save-password-bubble '
'--disable-fill-on-account-select '
'--fill-on-account-select=false '
'--username-first-flow-with-intermediate-values=false '
'--enable-show-autofill-signatures=false '
'--skip-undecryptable-passwords=false '
'--force-password-initial-sync-when-decryption-fails=false '
'--fill-on-account-select=false '
'--filling-across-grouped-sites=false '
'--ios-promo-password-bubble=false '
'--password-generation-experiment=false '
'--revamped-password-management-bubble=false '
'--passwords-import-m2=false '
'--forgot-password-form-support=false '
'--password-store=basic'
</code></pre>
<p>I have also looked at other similar questions in stackoverflow but they seem to be outdated with the new Chrome updates since all the flags that used to work are not even available anymore in Chrome.</p>
<pre><code>chrome_opt = webdriver.ChromeOptions()
prefs = {
"credentials_enable_service": False,
"profile.password_manager_enabled": False
}
chrome_opt.add_experimental_option("prefs", prefs)
driver = webdriver.Chrome(chrome_options=chrome_opt, executable_path=r'C:\chrome_path.exe')
</code></pre>
<p>I have also found a solution that inlolves modifying some values Window registry of Chrome that disables password manager completely but that's not optimal because it will also disable the password manager from my normal Chrome build.</p>
<p>Thanks everyone for your time :)</p>
<p>PS: Don't give me the: "This have been answered before." I have tried everything and nothing works anymore. I am looking for an up to date solution.</p>
| <python><selenium-webdriver><webdriver><password-manager> | 2023-10-04 15:14:58 | 3 | 374 | Speci |
77,230,733 | 13,364,925 | Python selenium can't click on a checkbox | <p>I'm writing python selenium code to automatic fill the form for testing purpose. But my code always throw exception when I try to click a checkbox. This is the HTML checkbox code:</p>
<pre class="lang-html prettyprint-override"><code><div
id="isManuallyEnterYourAddress"
>
<div class="field-wrapper">
<div>
<input
type="checkbox"
name="myId"
id="myId"
value="true"
class="input-field__checkbox"
alias="isManuallyEnterYourAddress"
/>
<input
type="hidden"
name="myId"
value="false"
/>
<label
for="myId"
>
manually enter your address
</label>
</div>
</div>
</div>
</code></pre>
<p>This is my first try, it's work before but it's crash now, I don't know why</p>
<pre class="lang-py prettyprint-override"><code>manual_address_1 = driver.find_element(
By.ID, "myId"
)
manual_address_1.click()
</code></pre>
<p>I got this error: <code>selenium.common.exceptions.ElementClickInterceptedException: Message: element click intercepted: Element is not clickable at point (337, 1425)</code>
I also try the others way, like WebDriverWait, but it still exception</p>
<pre><code>manual_address_2 = WebDriverWait(driver, delay).until(
EC.element_to_be_clickable(
(By.XPATH, "//input[@id='myId']")
)
)
</code></pre>
<p>Or find by label:</p>
<pre class="lang-py prettyprint-override"><code>manual_address_1 = WebDriverWait(driver, delay).until(
EC.element_to_be_clickable(
(By.XPATH, "//label[@for='myId']")
)
)
</code></pre>
<p>Or try ActionChains:</p>
<pre class="lang-py prettyprint-override"><code>action.move_to_element(manual_address_1).click().perform()
</code></pre>
<p>but I got the error: <code>selenium.common.exceptions.MoveTargetOutOfBoundsException: Message: move the target out of bounds</code></p>
<p>What's wrong? This checkbox is clearly visible, I can click no problem by mouse</p>
| <python><selenium-webdriver> | 2023-10-04 15:00:05 | 1 | 554 | dungreact |
77,230,366 | 411,929 | SciPy fails to build from source on Mac under CPython 3.9/PyPy on Intel & M1 Macs | <p>I have virtualenvs on both an Intel Mac running Catalina and an M1 Mac running Ventura and a friend has tried on an Intel Mac running Monterey.</p>
<p>I am following the build from source instructions from the scipy docs:</p>
<p><a href="https://docs.scipy.org/doc/scipy/building/index.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy/building/index.html</a> which state:</p>
<p>xcode-select --install (make sure Xcode command line tools are installed)</p>
<p>brew install gfortran openblas pkg-config
pip install scipy --no-binary scipy</p>
<p><a href="https://handloomweaver.s3-eu-west-1.amazonaws.com/scipy_m1_cpy3.log" rel="nofollow noreferrer">https://handloomweaver.s3-eu-west-1.amazonaws.com/scipy_m1_cpy3.log</a> [CPython 3.11 Ventura M1 Mac]</p>
<p><a href="https://handloomweaver.s3-eu-west-1.amazonaws.com/scipy_m1_py3.log" rel="nofollow noreferrer">https://handloomweaver.s3-eu-west-1.amazonaws.com/scipy_m1_py3.log</a> [PyPy 3.10 Ventura M1 Mac]</p>
<p><a href="https://handloomweaver.s3-eu-west-1.amazonaws.com/intel.log" rel="nofollow noreferrer">https://handloomweaver.s3-eu-west-1.amazonaws.com/intel.log</a> [CPython 3.9 Intel Monterey]</p>
<p><a href="https://handloomweaver.s3-eu-west-1.amazonaws.com/scipy_intel_cpy3.log" rel="nofollow noreferrer">https://handloomweaver.s3-eu-west-1.amazonaws.com/scipy_intel_cpy3.log</a> [CPython 3.11 Intel Catalina]</p>
<p>You will see that in every case they fail to build and for different reasons even though I've followed the instructions.</p>
<p>Summary, they all say:</p>
<p>Either:</p>
<pre><code>../scipy/meson.build:159:9: ERROR: Dependency "OpenBLAS" not found, tried pkgconfig, framework and cmake
</code></pre>
<p>Or:</p>
<pre><code>../../numpy/meson.build:207:4: ERROR: Problem encountered: No BLAS library detected!
</code></pre>
<p>Or:</p>
<pre><code>../meson.build:82:0: ERROR: Unable to detect linker for compiler `gfortran -Wl,--version`
</code></pre>
<p>Or:</p>
<pre><code>Problem with the CMake installation, aborting build. CMake executable is cmake
</code></pre>
<p>(last one while building ninja .whl)</p>
<p>pip subprocess to install backend dependencies did not run successfully.</p>
<p>It's okay if there is an exact .whl but if there isn't and in my case I am trying to build on PyPy where there is no .whl available. But as evidenced, it isn't an M1 problem or a PyPy problem. You can't pip build from source SciPy currently on a variety of CPython or PyPy versions on two different backend architectures and three different OS versions.</p>
<p>Am I not getting something or should I file a bug?</p>
| <python><numpy><scipy><cpython><pypy> | 2023-10-04 14:14:08 | 0 | 5,041 | handloomweaver |
77,230,340 | 17,082,611 | SSIM is large but the two images are not similar at all | <p>I want to use the <a href="https://scikit-image.org/docs/0.21.x/api/skimage.metrics.html#skimage.metrics.structural_similarity" rel="nofollow noreferrer">structural similarity index measure</a> for computing the mean structural similarity index between two images: the original one and the reconstructed one.</p>
<p>This is the original one:</p>
<p><a href="https://i.sstatic.net/AnKmU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AnKmU.png" alt="original" /></a></p>
<p>While this is the reconstructed one:</p>
<p><a href="https://i.sstatic.net/huzrV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/huzrV.png" alt="reconstructed" /></a></p>
<p>Unfortunately if I run this script:</p>
<pre><code>import numpy as np
from skimage.metrics import structural_similarity as ssim
# Both original and reconstructed have shape (40, 40, 1)
# dtype is 'float32'
original = np.load("original.npy")
reconstructed = np.load("reconstructed.npy")
ssim(original, reconstructed, data_range=1, channel_axis=-1) # 0.9321383
</code></pre>
<p>You can notice that the value is very large, around <code>0.93</code>, but the reconstructed image is not similar at all to the original one!</p>
<p>What am I missing? Note that <code>scikit-image</code> version is <code>scikit-image~=0.21.0</code>.</p>
<p><strong>What I tried 1</strong>: I think there is an issue with <code>data_range=1</code> parameter since the <a href="https://scikit-image.org/docs/0.21.x/api/skimage.metrics.html#skimage.metrics.structural_similarity" rel="nofollow noreferrer">documentation</a> says that:</p>
<blockquote>
<p>The data range of the input image (distance between minimum and maximum possible values). By default, this is estimated from the image data type.
This estimate may be wrong for floating-point image data. Therefore it is recommended to always pass this value explicitly.
If data_range is not specified, the range is automatically guessed based on the image data type.
However for floating-point image data, this estimate yields a result double the value of the desired range, as the dtype_range in skimage.util.dtype.py has defined intervals from -1 to +1. This yields an estimate of 2, instead of 1, which is most often required when working with image data (as negative light intentsities are nonsensical). In case of working with YCbCr-like color data, note that these ranges are different per channel (Cb and Cr have double the range of Y), so one cannot calculate a channel-averaged SSIM with a single call to this function, as identical ranges are assumed for each channel.</p>
</blockquote>
<p>Does it make sense to set <code>data_range=original.min()-original.max()</code> instead of <code>data_range=1</code>? What do you think? In that way I would get <code>ssim = 0.059593666</code>.</p>
<p>If you need it there are some additional information:</p>
<pre><code>original.min(), original.max() # (0.7764345, 0.82100683)
reconstructed.min(), reconstructed.max() # (0.6095292, 0.65623206)
</code></pre>
<p><strong>What I tried 2</strong>: I normalized both original and reconstructed images this way:</p>
<pre><code>original = np.load("original.npy")
reconstructed = np.load("reconstructed.npy")
original = (original - np.min(original)) / (np.max(original) - np.min(original))
reconstructed = (reconstructed - np.min(reconstructed)) / (np.max(reconstructed) - np.min(reconstructed))
ssim(original, reconstructed, data_range=1, channel_axis=-1) # 0.056520723
</code></pre>
<p>What do you think is the proper solution?</p>
| <python><numpy><image-processing><scikit-image><ssim> | 2023-10-04 14:11:06 | 1 | 481 | tail |
77,230,239 | 382,912 | python regex breaks with new unicode characters in Mac OS Sonoma filenames | <p>After updating to Mac OS Sonoma, a <a href="https://github.com/kortina/dotfiles/blob/master/s3screenshots/s3screenshots.py#L97" rel="nofollow noreferrer">helper script</a> I had to upload screenshots to s3 has broken.</p>
<p>Specifically, this:</p>
<pre class="lang-py prettyprint-override"><code>r = r"(\d{4}-\d{2}-\d{2}) at (\d{1,2}).(\d{2}\.\d{2}) (AM|PM)\.png"
return re.search(r, filename)
</code></pre>
<p>Used to match filenames of the pattern:</p>
<p><code>/Users/kortina/Desktop/Screenshot 2023-10-03 at 6.57.55 PM.png</code></p>
<p>But in Mac OS Sonoma, Screenshots are now named with a unicode character instead of the final space:</p>
<p><code>/Users/kortina/Desktop/Screenshot 2023-10-03 at 6.57.55\u202fPM.png</code></p>
<p>And the match no longer works.</p>
<p>I don't want to modify the original string, just the regex to get this match to work, but nothing I have tried seems to have worked yet.</p>
<p>Anyone know how to get the regex to match <code>either</code> the space <code>or</code> the unicode character?</p>
<p>tx in advance...</p>
| <python><python-3.x><regex><macos-sonoma> | 2023-10-04 13:56:23 | 1 | 6,151 | kortina |
77,230,233 | 1,686,628 | pytest: validating __init__ args with mock | <p>I would like to write a test confirming <code>__init__</code> values</p>
<pre><code>def test_foo(mocker):
mock = Mock()
mocker.patch(
"Foo.__init__", return_value=mock
)
foo = Foo('bar')
mock.assert_called_with('bar')
</code></pre>
<p>But i get the following error <code>TypeError: __init__() should return None, not 'Mock'</code></p>
<p>I ignore the error via try/except around <code>foo = Foo('bar')</code> but now, pytest says mock was not called</p>
<pre><code>E AssertionError: expected call not found.
E Expected: mock('bar')
E Actual: not call
</code></pre>
<p>Any idea how to validate <code>__init__()</code> args?</p>
<p>Thanks</p>
<p>P.S.
I also would not like <code>__init__()</code> to actually run hence mock and not spy</p>
| <python><pytest><python-unittest.mock> | 2023-10-04 13:55:02 | 1 | 12,532 | ealeon |
77,230,153 | 2,562,058 | How to squeeze a list of numpy arrays into an existing array? | <p>Consider the following snippet:</p>
<pre><code>import numpy as np
pos = np.array([1, 3, 4])
arr = np.array([0, 1, 2, 3, 4, 5])
vals = np.array([[-10,10], [-20,20], [99,66]])
</code></pre>
<p>I want to squeeze the elements of <code>val</code> into <code>arr</code> in the indices specified by <code>pos</code>.
At the end, the desired result should be:</p>
<pre><code>result = array([0, -10, 10, 1, 2, -20, 20, 3, 99, 66, 4, 5])
</code></pre>
| <python><python-3.x><numpy> | 2023-10-04 13:46:43 | 1 | 1,866 | Barzi2001 |
77,229,992 | 12,133,068 | snakemake doesn't execute the right rules | <p>I'm using snakemake <code>7.25.0</code>. Sometimes, some of my rules are <strong>not</strong> re-executed after a parameter change. Sometimes it detects a parameter change, and sometimes it doesn't. More generally, I'm experiencing a lot of strange behavior of snakemake, and it's hard to understand when it decides to execute a rule or not.</p>
<p>I was able to reproduce <strong>two</strong> weird behaviors on this minimal example:</p>
<pre><code>rule all:
input:
"res.txt"
checkpoint save_number_files:
output:
"n.txt",
params:
x = 2,
t = 3,
shell:
"echo {params.x} > {output}"
rule create_file:
input:
"n.txt",
output:
"out/{index}.txt"
shell:
"touch {output}"
def get_n(wilcards):
with checkpoints.save_number_files.get(**wilcards).output[0].open() as f:
return [f"out/{i}.txt" for i in range(int(f.read()))]
rule aggregate:
input:
get_n
output:
"res.txt"
shell:
"""touch {output}"""
</code></pre>
<p>Step to reproduce the strange behaviors:</p>
<ol>
<li>Run the pipeline once (<code>snakemake -c1</code>). It should be fine.</li>
<li>Change the value of <code>t</code>, for instance <code>t = 4</code> and re-execute the pipeline. It executes <code>save_number_files</code>, but doesn't re-execute <code>create_file</code>, even though its input changed.</li>
<li>Re-execute the pipeline (without any change!). It now re-runs the rule <code>all</code> instead of saying <code>Nothing to be done</code>. This is not critical since it's actually doing nothing, but I'm surprise that it doesn't say <code>Nothing to be done</code> even though we didn't changed anything...</li>
</ol>
| <python><snakemake> | 2023-10-04 13:24:35 | 0 | 334 | Quentin BLAMPEY |
77,229,890 | 2,306,148 | train gpt-2 modal with json data | <p>I have a dataset in JSON format that contains questions, options, categories, and correct answers. I would like to train a GPT-2 model on this dataset, but I am getting an error.</p>
<p>i got following error: "ImportError: cannot import name 'Dataset' from 'transformers'"</p>
<p>I have written the following code:</p>
<pre><code>import json
import random
from transformers import GPT2LMHeadModel, Dataset
def convert_json_to_text(json_data):
text = ''
for question_and_answers in json_data:
random.shuffle(question_and_answers['answers'])
text += f"{question_and_answers['category']}: {question_and_answers['question']}\n"
for option in question_and_answers['answers']:
text += f"- {option}\n"
text += f"Correct Answer: {question_and_answers['correct_answer']}\n\n"
return text
with open("questions.json", "r") as f:
json_data = json.load(f)
text = convert_json_to_text(json_data)
train_dataset = Dataset.from_text(text)
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.train()
for epoch in range(10):
for batch in train_dataset:
loss = model(input_ids=batch['input_ids'], labels=batch['input_ids'])
loss.backward()
model.optimizer.step()
model.optimizer.zero_grad()
model.save_pretrained("gpt2_model.pt")
</code></pre>
<p>Here is sample dataset:</p>
<pre><code>[
{
"question": "Q1. Which operator returns true if the two compared values are not equal?",
"category": "javascript",
"answers": [" <>", " ~", " ==!", " !=="],
"correct_answer": " !=="
},
{
"question": "Q2. How is a forEach statement different from a for statement?",
"category": "javascript",
"answers": [
" Only a for statement uses a callback function.",
" A for statement is generic, but a forEach statement can be used only with an array.",
" Only a forEach statement lets you specify your own iterator.",
" A forEach statement is generic, but a for statement can be used only with an array."
],
"correct_answer": " A for statement is generic, but a forEach statement can be used only with an array."
}
]
</code></pre>
| <python><training-data><gpt-3><gpt-2> | 2023-10-04 13:10:46 | 1 | 596 | Gautam Menariya |
77,229,585 | 4,403,498 | Azure mltable throwing ImportError when it is imported in Azure Notebooks in AutoML | <p>I am writing code in the notebooks on Azure AutoML wtih Spark Version 3.2 which means that Python version is 3.8.</p>
<p>I have installed the mltable in Azure Notebooks using the command:</p>
<pre><code>%pip install -U mltable azureml-dataprep[pandas]
</code></pre>
<p>Before installing mltable I had already installed azure-ai-ml package as well. But still mltable is throwing import error when I try to import it using:</p>
<pre><code>import mltable
</code></pre>
<p>It throws the following error:</p>
<pre><code>---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
/tmp/ipykernel_23522/3630867241.py in <module>
16 from azure.ai.ml.entities import AmlCompute
17
---> 18 import mltable
19 import os
/nfs4/pyenv-da563906-c8c8-4c39-89f1-22def948bc6a/lib/python3.8/site-packages/mltable/__init__.py in <module>
9 any Python environment, including Jupyter Notebooks or your favorite Python IDE.
10 """
---> 11 from .mltable import MLTable, load, from_delimited_files, from_parquet_files, from_json_lines_files, from_paths, \
12 from_delta_lake, DataType, MLTableFileEncoding, MLTableHeaders
13
/nfs4/pyenv-da563906-c8c8-4c39-89f1-22def948bc6a/lib/python3.8/site-packages/mltable/mltable.py in <module>
20 from azureml.dataprep.api._constants import ACTIVITY_INFO_KEY, ERROR_CODE_KEY, \
21 COMPLIANT_MESSAGE_KEY, OUTER_ERROR_CODE_KEY
---> 22 from azureml.dataprep.api._dataframereader import get_dataframe_reader, to_pyrecords_with_preppy, \
23 _execute
24 from azureml.dataprep.api.mltable._mltable_helper import _download_mltable_yaml, _parse_path_format, _PathType, \
ImportError: cannot import name 'to_pyrecords_with_preppy' from 'azureml.dataprep.api._dataframereader' (/home/trusted-service-user/cluster-env/env/lib/python3.8/site-packages/azureml/dataprep/api/_dataframereader.py)
</code></pre>
| <python><azure> | 2023-10-04 12:32:40 | 1 | 719 | Syed Shaharyaar Hussain |
77,229,540 | 3,625,533 | extract values as columns into dataframe from same dataframe json string | <p>Let have a data frame with json string like below</p>
<pre><code>df = pd.DataFrame([{'sno':1, 'values':"[{'a':1, 'b':2,'c':3}]"},
{'sno':2, 'values':"[{'a':4, 'b':5,'c':6}]"}])
</code></pre>
<p>Need to get the json string column as new columns in the same dataframe. Tried the below code but, getting the error as "the JSON object must be str, bytes or bytearray, not Series"</p>
<pre><code>df = df.assign(a=lambda x: (json.loads(x['values'])[0]['a']))
</code></pre>
<p>When we try to use the lambda it should be getting the values for that row only but json.loads(x['values']) actually returning a series and this is causing the error.</p>
<p>Is there any other ways to get this achieved?</p>
<p>Expected dataframe like below</p>
<pre><code>sno a b c
------------------------
1 1 2 3
2 4 5 6
</code></pre>
| <python><pandas><dataframe> | 2023-10-04 12:23:55 | 1 | 435 | user3625533 |
77,229,506 | 7,776,781 | Adding class getattr behaviour to pydantic v2 BaseModel | <p>I have a library that is essentially a type of ORM. As part of this I have a my own 'Model' class that is to be subclassed by end users of the library to create models that are to be persisted in a nosql database.</p>
<p>My base model class inherits from <code>pydantic.BaseModel</code>. One of the features of this library is that you can create filters to be used for database queries with relatively simple syntax.</p>
<p>For example, say I have my own class <code>base.Model</code> and a user creates a model like this:</p>
<pre><code>class Person(base.Model):
first_name: str
last_name: str
age: int
</code></pre>
<p>It should then be possible to construct a filter object by doing something like this:</p>
<pre><code>filter_ = (Person.age > 10) & (Person.first_name == "Bill")
</code></pre>
<p>In pydantic V1, I did this by adding a metaclass like this:</p>
<pre><code>class Model(pydantic.BaseModel, metaclass=utils.MetaGetFilterPydantic):
...
</code></pre>
<p>where</p>
<pre><code>class MetaGetFilterPydantic(MetaGetFilter, type(pydantic.BaseModel)):
pass
</code></pre>
<p>and finally</p>
<pre><code>class MetaGetFilter(type):
def _get_full_annotations(cls) -> collections.ChainMap[str, Any]:
annotations_ = [cls.__annotations__]
for parent in cls.__mro__:
if parent is pydantic.BaseModel:
break
annotations_.append(parent.__annotations__)
return collections.ChainMap(*annotations_)
def __getattr__(cls, item: str) -> MyFilterObject:
if item not in cls._get_full_annotations():
raise AttributeError(f"type object '{cls.__name__}' has no attribute '{item}'")
return MyFilterObject(...)
</code></pre>
<p>This allowed the normal attribute lookup to function while also allowing me to return a filter object that can be used to construct the filter if the user tries to access an attribute on the class directly that is also set in the annotations.</p>
<p>Now, the issue is that in pydantic V2, this break completely. I seen that pydantic V2 uses <code>pydantic._internal._model_construction.ModelMetaclass</code> as a metaclass to create new models, and I've played around with letting this my metaclass be a subclass of this and overwriting its <code>__getattr__</code>.</p>
<p>If I do something like this on my metaclass (after subclassing the pydantic model metaclass above):</p>
<pre><code>def __getattr__(cls, item):
try:
return super().__getattr__(item)
except AttributeError:
if not cls.__pydantic_complete__ or cls is base.Model:
raise
attrs = (
cls._get_full_annotations()
)
if item not in attrs:
raise
return MyFilterObject(...)
</code></pre>
<p>it will work for one single level of subclassing of my own model, but will break for any further subclassing of my model since then my check will not work.</p>
<p>Long rant to basically ask, is there any safe way to add behaviour to pydantic class object access in V2? It seems like pydantic now relies on <code>__getattr__</code> raising an <code>AttributeError</code></p>
<p>I've thought about there potentially being a way to know when the object is actually initialized, but <code>__pydantic_complete__</code> did not seem to be enough</p>
| <python><python-3.x><orm><pydantic> | 2023-10-04 12:18:38 | 1 | 619 | Fredrik Nilsson |
77,229,321 | 619,754 | The default channel for miniforge is default? | <p>Windows 10</p>
<p>The installation <a href="https://github.com/conda-forge/miniforge" rel="nofollow noreferrer">instructions for miniforge on github</a> says "... the only difference between Miniforge and Mambaforge was the presence of the mamba Python package. To minimize surprises, we decided to add mamba to Miniforge too."</p>
<p>Surprises are minimized, perhaps. But not eliminated.</p>
<p>I read in <a href="https://doc.sagemath.org/html/en/installation/conda.html" rel="nofollow noreferrer">various places</a> that the default channel for mambaforge is conda-forge. I installed miniconda, but the default channel is <em>default</em>, not <em>conda-forge</em>.</p>
<p>This is probably not the right place to phrase my question like this, but: can someone here speak with authority and explain the relationship of mamba, miniforge, and mambaforge to conda-forge?</p>
<p>True or false: the default channel for miniforge is conda-forge.</p>
<p><strong>Edit #2</strong> ('top posted' above the previous edit)</p>
<p>I found <a href="https://github.com/mamba-org/mamba/issues/656" rel="nofollow noreferrer">this post</a> where somewhat asks a similar question. It's probable that I've been using the wrong commands to answer these questions: is mamba getting packages from conda-forge or isn't it? From what channel did a given channel come from?</p>
<p>One interesting thing from that post is the fact that there are two 'conda config's. If conda config is called from any env, it queries the base env. To see the config of the current env, one has to run <code>$CONDA_PREFIX/bin/conda config ...</code>. Furthermore, 'mamba config' does not have all the features of 'conda config'</p>
<p>I think I'll take this up on mamba git-hub</p>
<p><strong>Edit after comment</strong></p>
<p>I should have been more explicit. I have installed miniforge.<br />
I'll display the outputs from 'mamba install pytest', and 'mamba install -c conda-forge pytest'</p>
<p>It appears that 'conda install pytest' is not coming from conda-forge. Unless I am misinterpreting what I see.</p>
<p>(I don't know what I did to muck up the formatting.)</p>
<p>Consider the report for</p>
<pre><code>'mamba install pytest'
</code></pre>
<p>The following packages will be downloaded:</p>
<pre><code>package | build
---------------------------|-----------------
iniconfig-1.1.1 | pyhd3eb1b0_0 8 KB
pluggy-1.0.0 | py311haa95532_1 33 KB
pytest-7.4.0 | py311haa95532_0 729 KB
------------------------------------------------------------
Total: 769 KB
</code></pre>
<p>The following NEW packages will be INSTALLED:</p>
<pre><code>iniconfig pkgs/main/noarch::iniconfig-1.1.1-pyhd3eb1b0_0
pluggy pkgs/main/win-64::pluggy-1.0.0-py311haa95532_1
pytest pkgs/main/win-64::pytest-7.4.0-py311haa95532_0
</code></pre>
<p>Now consider the report for</p>
<pre><code>'mamba install -c conda-forge pytest'
Package Version Build Channel Size
-----------------------------------------------------------------------------
Install:
-----------------------------------------------------------------------------
+ ucrt 10.0.22621.0 h57928b3_0 conda-forge/win-64 1MB
+ vc14_runtime 14.36.32532 hdcecf7f_17 conda-forge/win-64 739kB
Upgrade:
-----------------------------------------------------------------------------
- openssl 3.0.11 h2bbff1b_2 pkgs/main
+ openssl 3.1.3 hcfcfb64_0 conda-forge/win-64 7MB
- vs2015_runtime 14.27.29016 h5e58377_2 pkgs/main
+ vs2015_runtime 14.36.32532 h05e6639_17 conda-forge/win-64 17kB
-----------------------------------------
</code></pre>
| <python><conda><mamba> | 2023-10-04 11:47:35 | 2 | 999 | garyp |
77,229,228 | 4,841,654 | Appending and Inserting with Dask Arrays giving mismatch between chunks and shape | <p>I have a Dask array with shape <code>(1001,256,1,256)</code> (data over 1001 timesteps with len(x)=len(z)=256 and len(y)=1).</p>
<p>I need to pad the x-dimension with arrays of shape <code>(1001,2,1,256)</code>, which I'm attempting with <code>dask.array.insert()</code> and <code>dask.array.append()</code>, which are equivalent to numpy functions of the same names.</p>
<p>This should result in an array with shape <code>(1001,260,1,256)</code>.</p>
<p>My arrays look like this:</p>
<pre><code>print(prepend_array)
print(append_array)
print(data)
</code></pre>
<blockquote>
<pre><code>dask.array<getitem, shape=(1001, 2, 1, 256), dtype=float64, chunksize=(1001, 2, 1, 256),chunktype=numpy.ndarray>
dask.array<getitem, shape=(1001, 2, 1, 256), dtype=float64, chunksize=(1001, 2, 1, 256),chunktype=numpy.ndarray>
dask.array<getitem, shape=(1001, 256, 1, 256), dtype=float64, chunksize=(1001, 8, 1, 256),chunktype=numpy.ndarray>
</code></pre>
</blockquote>
<p>I then do (with insert and append together - the error comes from insert):</p>
<pre><code>padded_data = da.append(
da.insert(
data,
0,
prepend_padding,
axis=1),
append_padding,
axis=1)
</code></pre>
<p>But see the following error (from dask.array.rechunk):</p>
<blockquote>
<pre><code>ValueError: Chunks and shape must be of the same length/dimension. Got chunks=((1001,), (1,), (1,), (256,)), shape=(1001, 1, 2, 1, 256)
</code></pre>
</blockquote>
<p>It seems I'm gaining a dimension, where I'm expecting to add to an existing dimension. The obvious answer to me is <code>axis=1</code> is the issue, but I've been unable to resolve it, as any valid value (0,1,2,3) gives a <code>len(shape)=5</code>, not the expected <code>4</code>.</p>
| <python><arrays><numpy><dask> | 2023-10-04 11:36:00 | 1 | 501 | Dave |
77,229,227 | 4,630,773 | Python Error TypeError: 'bytes' object cannot be interpreted as an integer when concatenating bytes | <p>Here's my code:</p>
<pre><code>with open("SomeFile", mode="rb") as file:
byte = file.read(1)
count = 0
previous = []
while byte != b"":
if len(previous) >= 2:
addr = bytearray(b'')
addr.append(previous[len(previous) - 2])
addr.append(previous[len(previous) - 1])
addr.append(byte)
addr.append(b'\x00')
addr_int = int.from_bytes(addr, 'little')
# 0x105df0 == 1072624d
addr_rel = 1072624 - count;
if (addr_int == addr_rel):
print(hex(count))
previous.append(byte)
if len(previous) > 2:
previous.pop(0)
byte = file.read(1)
count += 1
</code></pre>
<p>I followed the method in this question (<a href="https://stackoverflow.com/questions/28130722/python-bytes-concatenation">Python bytes concatenation</a>) but got the error <code>Python Error TypeError: 'bytes' object cannot be interpreted as an integer</code> at line <code>addr.append(previous[len(previous) - 2])</code></p>
<p>How should I concatenate the bytes from the "previous" array to the "addr" byte sequence?</p>
<p>EDIT:</p>
<p>I found the following method works, but it seems very inefficient as the constructor is called 5 times so I would still like to know the correct way to do it:</p>
<pre><code> addr = bytearray(b'')
addr.extend(bytearray(previous[len(previous) - 2]))
addr.extend(bytearray(previous[len(previous) - 1]))
addr.extend(bytearray(byte))
addr.extend(bytearray(b'\x00'))
</code></pre>
| <python><byte><python-bytearray> | 2023-10-04 11:35:55 | 1 | 703 | cr001 |
77,229,004 | 13,924,886 | Incorrect Byte Representation in eth_abi | <p>I am in the process of familiarizing myself with Uniswap v3, albeit with limited success thus far. I am attempting to obtain a quote by invoking the function: quoteExactInput(pathBytes, amountinUint256).call()</p>
<p>For v3, it is required to specify the path in bytes, which I am doing. However, it seems like there might be an issue with the eth_abi module.</p>
<p>Here is the code:</p>
<pre><code> def path_to_encoded_path(self, path):
fst = encode(["address"], [path[0]])
snd = encode(["address"], [path[1]])
thr = encode(["uint24"], [500])
bytePath = fst + snd + thr
print("path", path)
print("fst", fst)
print("snd", snd)
print("thr", thr)
print("bytePath", bytePath)
return bytePath
</code></pre>
<p>Here is the output:</p>
<pre><code>path ['0x2170ed0880ac9a755fd29b2688956bd959f933f8', '0x55d398326f99059ff775485246999027b3197955']
fst b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00!p\xed\x08\x80\xac\x9au_\xd2\x9b&\x88\x95k\xd9Y\xf93\xf8'
snd b"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00U\xd3\x982o\x99\x05\x9f\xf7uHRF\x99\x90'\xb3\x19yU"
thr b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\xf4'
bytePath b"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00!p\xed\x08\x80\xac\x9au_\xd2\x9b&\x88\x95k\xd9Y\xf93\xf8\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00U\xd3\x982o\x99\x05\x9f\xf7uHRF\x99\x90'\xb3\x19yU\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\xf4"
</code></pre>
<p>*I already tryed:</p>
<pre><code>path.append(500)
encode(["address", "address", "uint24"], path)
</code></pre>
<p>I have not worked extensively with bytes, but the bytePath looks incredibly incorrect with the "!p", "&", etc.</p>
<p>Does anyone have a solution to this issue?
Or is there any other module to encode the new uniswapv3 parameters? (since web3.eth.abi.encodeParameters is depricated)</p>
| <python><ethereum><web3py> | 2023-10-04 11:01:39 | 0 | 401 | MaTok |
77,228,915 | 9,363,181 | Window_all throwing an error with Pyflink Kafka Connector | <p>I am trying to print the <code>datastream</code> by applying the <code>tumbling process window</code> for every <code>5</code> seconds. Since I couldn't implement the custom deserializer for now, I created the process function which returns the result as tuple, and as per this <a href="https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/datastream/operators/windows/#consecutive-windowed-operations" rel="nofollow noreferrer">documentation link</a> I could link the process function with the windowing operation so I tried this out:</p>
<pre><code>def get_data(self):
source = self.__get_kafka_source()
ds = self.env.from_source(source, WatermarkStrategy.no_watermarks(), "Kafka Source").window_all(
TumblingProcessingTimeWindows.of(Time.seconds(5)))
ds.process(ExtractingRecordAttributes(),
output_type=Types.TUPLE(
[Types.STRING(), Types.STRING(),
Types.STRING()])).print()
self.env.execute("source")
def __get_kafka_source(self):
source = KafkaSource.builder() \
.set_bootstrap_servers("localhost:9092") \
.set_topics("test-topic1") \
.set_group_id("my-group") \
.set_starting_offsets(KafkaOffsetsInitializer.latest()) \
.set_value_only_deserializer(SimpleStringSchema()) \
.build()
return source
class ExtractingRecordAttributes(KeyedProcessFunction):
def __init__(self):
pass
def process_element(self, value: str, ctx: 'KeyedProcessFunction.Context'):
parts = UserData(*ast.literal_eval(value))
result = (parts.user, parts.rank, str(ctx.timestamp()))
yield result
def on_timer(self, timestamp, ctx: 'KeyedProcessFunction.OnTimerContext'):
yield "On timer timestamp: " + str(timestamp)
</code></pre>
<p>When I trigger the <code>get_data</code> method, it gives me the below error:</p>
<pre><code> return self._wrapped_function.process(self._internal_context, input_data)
AttributeError: 'ExtractingRecordAttributes' object has no attribute 'process'
at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999)
at org.apache.beam.sdk.util.MoreFutures.get(MoreFutures.java:61)
at org.apache.beam.runners.fnexecution.control.SdkHarnessClient$BundleProcessor$ActiveBundle.close(SdkHarnessClient.java:504)
at org.apache.beam.runners.fnexecution.control.DefaultJobBundleFactory$SimpleStageBundleFactory$1.close(DefaultJobBundleFactory.java:555)
at org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.finishBundle(BeamPythonFunctionRunner.java:421)
... 7 more
</code></pre>
<p>If I don't use window_all, everything works fine. But the moment I introduce it, it fails. What am I doing wrong here? Any hints would be helpful.</p>
<p>I am using <code>Pyflink 1.17.1</code></p>
<p>TIA.</p>
| <python><apache-flink><pyflink> | 2023-10-04 10:49:54 | 1 | 645 | RushHour |
77,228,650 | 5,758,423 | How to control Python's += operator to update dictionary values in-place? | <p>Consider the following:</p>
<pre class="lang-py prettyprint-override"><code>d = dict(a=(1,2))
d['a'] += (3,4)
assert d == {'a': (1, 2, 3, 4)}
# and yet
assert not hasattr(tuple, '__iadd__')
</code></pre>
<p>The way I parse this is that <code>(1, 2)</code> is first retrieved, then, since tuples are immutable, Python uses <code>__add__</code> to implement <code>__iadd__</code>, making <code>(1, 2) + (3, 4) == (1, 2, 3, 4)</code>. So far, nothing mysterious.</p>
<p>But then, somehow, this value is written "back" to <code>d['a']</code>!
The way I parse it, once <code>d['a']</code> gives forth the <code>(1, 2)</code>, both <code>d</code> and <code>'a'</code> are "forgotten". How does Python know "later", after computing the sum, that it should write back to <code>d['a']</code>?</p>
<p>I'd like to be able to "control" this behavior (see concrete use-case below). Since <code>d['a'] += (3, 4)</code> seems to resolve to <code>d['a'] = d['a'] + (3, 4)</code> my only hope seems to be do override the <code>__iadd__</code> or <code>__add__</code> of the <strong>values</strong> <code>d.__getitem__</code> outputs, which doesn't smell great.</p>
<h3>Concrete use-case</h3>
<p>My actual use case is I have <a href="https://i2mint.github.io/dol/module_docs/dol/filesys.html#dol.filesys.Files" rel="nofollow noreferrer">this</a> <code>MutableMapping</code> interface <code>Files</code> to a file system. For example:</p>
<pre class="lang-py prettyprint-override"><code>files = Files('~')
files['foo.txt'] = b'bar' # writes b'bar' to the `~/foo.txt`
files['foo.txt'] # retrieves b'bar'
</code></pre>
<p>Now, <code>Files</code> does nothing explicit for this behavior:</p>
<pre class="lang-py prettyprint-override"><code>files['foo.txt'] += b'man'
assert files['foo.txt'] == b'barman' # b'man' was appended to the contents!
</code></pre>
<p>The way it happens is that <code>b'bar'</code> was retrieved, <code>b'man'</code> was added, and the result was rewritten to the file.</p>
<p>But I'd like to have <code>files['foo.txt'] += b'man'</code> be <em>"open file, append, and close"</em>. How do I do that? One way is to have <code>files.__getitem__</code> return an object that will write back to the file, but then I don't have my "normal" <code>files['foo.txt']</code> as the contents of the file.</p>
| <python> | 2023-10-04 10:09:16 | 1 | 2,432 | thorwhalen |
77,228,579 | 16,155,080 | FastAPI Background Tasks execute tasks in LIFO order | <p>I'm using FastAPI BackgroundTasks and trying to get it to execute tasks in background in a given order.</p>
<p>For example, if I request my endpoint that creates Tasks with the following order <code>request1 - request2 - request3</code>, I am expecting them to be executed in the same order.
Unfortunately, the tasks are executed in the following order <code>request1 - request3 - request2</code>.</p>
<p>So I feel like it takes the first one first (because there is no task yet), and then queue request 2 & 3. But it'll then execute request 3 before 2 as in <strong>Last In First Out</strong>.
And if, in the meanwhile you send more requests, they'll take first place (execution order would be <code>request 5 - request4 - request 3 - request2</code>).</p>
<p>I'd like to make backgroundTasks FIFO execution order rather then LIFO. Any clue on that? Couldn,'t find anything on FastAPI nor Starlette documentation.</p>
<p>I created a simple bit of code to try it and reproduce the bug. As I tried it to make sure everything was okay, I realised the behavior is not the same on differents environments. The following is working normally (FIFO) on my Windows 10 (Python 3.8.2 and FastAPI 0.98) but doesn't work normally (LIFO) on CentOS (Python 3.9.12 and FastAPI 0.96).</p>
<p>App.py file :</p>
<pre><code>from uuid import UUID
from pydantic import BaseModel
from typing import List, Dict
from anyio.lowlevel import RunVar
from anyio import CapacityLimiter
from http import HTTPStatus
import time
from fastapi import FastAPI, HTTPException, BackgroundTasks
class Job(BaseModel):
uid: UUID
status: str = 'in_progress'
# API init
app = FastAPI()
jobs: Dict[UUID, Job] = {}
def mock_function(id_job) -> str:
jobs[id_job].status = 'in_progress'
time.sleep(5)
def process_request(job_id):
response = mock_function(job_id)
jobs[job_id].status = 'complete'
@app.on_event("startup")
def startup():
# Define the number of background tasks job executed simultaneously.
RunVar("_default_thread_limiter").set(CapacityLimiter(1))
@app.get("/status")
async def status_handler() :
return jobs
@app.post('/request/{uid}', status_code=HTTPStatus.ACCEPTED)
async def request_API(uid:UUID, background_tasks: BackgroundTasks):
new_task = Job(uid=uid)
new_task.status = 'in_queue'
jobs[new_task.uid] = new_task
background_tasks.add_task(process_request,
new_task.uid)
return new_task
</code></pre>
<p>Test file :</p>
<pre><code>import requests
import time
query_tasks = {}
jobs_id = ['c880cc1b-dc27-4175-b616-29a69322d156', '4f0ea1f1-a5a3-4a7c-a114-17e57a1c55db','15ba0d0a-3c94-4906-a331-054a3847171c']
for i in range(3):
r = requests.post(f'http://127.0.0.1:8000/request/{jobs_id[i]}')
id_ = r.json()['uid']
query_tasks[id_] = i+1
query_tasks
for t in range(20):
r = requests.get('http://127.0.0.1:8000/status')
tasks = r.json()
for k,v in tasks.items():
print(query_tasks[k])
print(v['status'])
time.sleep(1)
print('-----------------------------------------------')
</code></pre>
<p>Output :</p>
<pre><code>1
complete
2
in_queue
3
in_progress
-----------------------------------------------
1
complete
2
in_progress
3
complete
</code></pre>
<p>(Some outputs have been removed for clarity)</p>
<p>Any lead ?</p>
<p>Thank you</p>
| <python><fastapi><fifo> | 2023-10-04 09:58:42 | 1 | 641 | Jules Civel |
77,228,433 | 2,893,518 | Problem in converting a Python code to Delphi | <p>I need to convert a Python code to Delphi but I cant.</p>
<p>The Python code is:</p>
<pre><code>def crc32(data: bytes, initial):
crc = initial
for x in data:
for k in range(8):
if ((crc ^ x) & 0x01) == 1:
crc = crc >> 1
crc = crc ^ 0x04c11db7
else:
crc = crc >> 1
x = x >> 1
crc &= 0xffffffff
return crc
</code></pre>
<p>but when I translate to Delphi code I have a problem, the problem is the line <code>x = x >> 1</code></p>
<p>this is the Delphi code:</p>
<pre><code>function TForm1.CalculateCRC32(const data: TBytes; initial: Cardinal): Cardinal;
var
crc: Cardinal;
x, z: Integer;
begin
crc := initial;
for x in data do
begin
for z := 0 to 7 do
begin
if ((crc xor x) and $01) = 1 then
begin
crc := crc shr 1;
crc := crc xor $04c11db7;
end
else
begin
crc := crc shr 1;
end;
x := x shr 1; // here its the problem I have
end;
end;
crc := crc and $ffffffff;
Result := crc;
end;
</code></pre>
<p>How could I solve this problem?
Thanks in advance.</p>
<p>I am using Delphi XE11.3</p>
<p>to make a test, I do:</p>
<pre><code>data := '123456780000000077000000';
bytedata := HexToBytes(data); //TBytes type
initDataStr := '$FFFFFFFF';
initData := Cardinal(StrToInt64(initDataStr));
result := CalculateCRC32(bytedata, initData); //The result should be 7085D2 in hexadecimal.
</code></pre>
| <python><delphi><crc32> | 2023-10-04 09:39:10 | 1 | 529 | elcharlie |
77,228,389 | 10,353,865 | Broadcasting when setting on a boolean slice of a DataFrame gives weird results | <p>Consider the following code:</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame(
{"AAA": [4, 5, 6, 7], "BBB": [10, 20, 30, 40], "CCC": [100, 50, -30, -50]}
)
df[[True, False, False, True]] = np.array([2,5]).reshape(2,1)
df
AAA BBB CCC
0 2 2 2
1 5 20 50
2 6 30 -30
3 5 5 5
</code></pre>
<p>This applies broadcasting to the dataframe and results in the first row containing solely the value 2 and the fourth row containing only the value 5. As I expected. Now, I just add a single column to the df and apply a similar transformation:</p>
<pre><code>df["logic"] = 9
df[[True, False, False, True]] = np.array([3,7]).reshape(2,1)
</code></pre>
<p>Rather than delivering a first row containing only 3 and a last row containing only 7, I now get a</p>
<blockquote>
<p>ValueError: Must have equal len keys and value when setting with an ndarray</p>
</blockquote>
<p>Can someone explain, why this is happening?</p>
| <python><pandas><array-broadcasting> | 2023-10-04 09:33:56 | 1 | 702 | P.Jo |
77,228,378 | 1,307,905 | dataclasses use InitVar default to set instance attribute | <p>When you provide a default for an <code>InitVar</code> typed variable in a dataclass, the attribute
for that variable is set (actually, for variables that lack a default the attribute seems to be
actively deleted in <code>dataclasses:_process_class()</code>).</p>
<pre><code>from dataclasses import dataclass, InitVar, fields
@dataclass
class HasDefault:
abc: int = 18
xyz: InitVar[str] = 'oops'
def __post_init__(self, xyz):
self.abc += len(xyz)
@dataclass
class NoHasDefault:
abc: int
xyz: InitVar[str]
def __post_init__(self, xyz):
self.abc += len(xyz)
hd = HasDefault(abc=42, xyz='so long...')
print('field names', [field.name for field in fields(hd)])
print(hd, hd.abc, hd.xyz)
print()
hd2 = HasDefault(abc=42)
print('field names', [field.name for field in fields(hd2)])
print(hd2, hd2.abc, hd2.xyz)
print()
hd3 = HasDefault()
print('field names', [field.name for field in fields(hd3)])
print(hd3, hd3.abc, hd3.xyz)
print()
nhd = NoHasDefault(abc=42, xyz='so long...')
print('field names', [field.name for field in fields(nhd)])
print(nhd, getattr(nhd, 'xyz', 'no attribute xyz'))
</code></pre>
<p>which gives:</p>
<pre><code>field names ['abc']
HasDefault(abc=52) 52 oops
field names ['abc']
HasDefault(abc=46) 46 oops
field names ['abc']
HasDefault(abc=22) 22 oops
field names ['abc']
NoHasDefault(abc=52) no attribute xyz
</code></pre>
<p>Why is the attribute <code>xyz</code> set <strong>at all</strong> for <code>InitVar</code> typed variables (when there is a default)? I expected this attribute never to be set as it is only for initialisation. Is this a bug?</p>
<p><code>mypy</code> actually complains that those instances have no attribute <code>xyz</code>, so it seems to handle <code>InitVar</code> differently.</p>
<p>Because of this I needed to add some extra lines to get loading such a <code>dataclass</code>
from YAML to work in the same way, as normally the value from the YAML mapping
would be used to set the attribute, instead of the default. So what you can do (in `ruamel.yaml>0.17.34) is:</p>
<pre><code>from dataclasses import dataclass, InitVar
from typing import ClassVar
from ruamel.yaml import YAML
yaml = YAML()
@yaml.register_class
@dataclass
class HasDefault:
yaml_tag: ClassVar = '!has_default' # if not set the class name is taken ('!HasDefault')
abc: int
xyz: InitVar[str] = 'oops'
def __post_init__(self, xyz):
self.abc += len(xyz)
yaml_str = """\
!has_default
abc: 42
xyz: hello world
"""
data = yaml.load(yaml_str)
print(data, data.xyz == 'oops')
</code></pre>
<p>giving:</p>
<pre><code>HasDefault(abc=53) True
</code></pre>
<p>otherwise <code>data.xyz</code> would equal <code>hello world</code></p>
| <python><default-value><python-dataclasses> | 2023-10-04 09:32:39 | 0 | 78,248 | Anthon |
77,228,256 | 2,173,320 | python textual - run a loop besides the main loop | <p>I'm using <a href="https://github.com/Textualize/textual" rel="nofollow noreferrer">textual</a> to display a ui in my console window. I start my class extending textual's App class by myclass.run().
Since I'm analyzing a camera signal using opencv, I now want to introduce a second loop that runs as long as the app runs to get and process the current image from opencv.
But I'm not sure how to start such a parallel loop using textual's API. The first thing that comes to my mind is a subthread but I fear textual might offer a better way.</p>
| <python><multithreading><textual> | 2023-10-04 09:13:41 | 1 | 1,507 | padmalcom |
77,228,235 | 20,266,647 | Different batch limits for Cassandra and Scylla, Err - 2200, Batch too large | <p>I used standard 'cassandra-driver' for access to Cassandra and Scylla for insert/udpate data via BatchStatement. I updated data via batch (~500 rows) and I got only for Cassandra this error (everything work fine for Scylla):</p>
<pre><code>Error from server: code=2200 [Invalid query] message="Batch too large"
</code></pre>
<p>Python code, see:</p>
<pre><code>from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.cluster import ExecutionProfile
from cassandra.cluster import EXEC_PROFILE_DEFAULT
from cassandra.query import BatchStatement
...
cluster = Cluster(contact_points=["localhost"],
port=9042,
execution_profiles={EXEC_PROFILE_DEFAULT: profile},
control_connection_timeout=60,
idle_heartbeat_interval=60,
connect_timeout=60)
session = cluster.connect(keyspace="catalog")
...
insert_statement = session.prepare(f"INSERT INTO rn6.t01 ({columns}) VALUES ({items})")
batch = BatchStatement(consistency_level=ConsistencyLevel.ONE)
data_frm = pandas.DataFrame(generator.integers(999999,
size=(run_setup.bulk_row, run_setup.bulk_col)),
columns=[f"fn{i}" for i in range(run_setup.bulk_col)])
# prepare data
for row in data_frm.values:
batch.add(insert_statement, row)
session.execute(batch)
</code></pre>
<p>It seems that Cassandra and Scylla have different default limits for batch statement. Do you know these limits?</p>
| <python><cassandra><scylla> | 2023-10-04 09:10:00 | 2 | 1,390 | JIST |
77,227,962 | 2,520,186 | How to incorporate `pooch` with scipy.datasets? | <p>I am trying to use the <code>ascent</code> image data from <code>scipy.datasets</code>. My code on the Jupyter Notebook:</p>
<pre><code>import scipy.datasets
import pooch
!pip show pooch
ascent = scipy.datasets.ascent()
ascent.shape
</code></pre>
<p>When I run this, I find a version of has been successfully installed but at the same time, it throws an ImportError!</p>
<p>What is wrong? Have I installed the wrong version? Or is there something more that I need to do?</p>
<pre><code>Name: pooch
Version: 1.7.0
Summary: "Pooch manages your Python library's sample data files: it automatically downloads and stores them in a local directory, with support for versioning and corruption checks."
Home-page: https://github.com/fatiando/pooch
Author: The Pooch Developers
Author-email: fatiandoaterra@protonmail.com
License: BSD 3-Clause License
Location: /usr/local/lib/python3.11/site-packages
Requires: packaging, platformdirs, requests
Required-by:
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[17], line 4
2 import pooch
3 get_ipython().system('pip show pooch')
----> 4 ascent = scipy.datasets.ascent()
5 ascent.shape
File /usr/local/lib/python3.11/site-packages/scipy/datasets/_fetchers.py:71, in ascent()
66 import pickle
68 # The file will be downloaded automatically the first time this is run,
69 # returning the path to the downloaded file. Afterwards, Pooch finds
70 # it in the local cache and doesn't repeat the download.
---> 71 fname = fetch_data("ascent.dat")
72 # Now we just need to load it with our standard Python tools.
73 with open(fname, 'rb') as f:
File /usr/local/lib/python3.11/site-packages/scipy/datasets/_fetchers.py:27, in fetch_data(dataset_name, data_fetcher)
25 def fetch_data(dataset_name, data_fetcher=data_fetcher):
26 if data_fetcher is None:
---> 27 raise ImportError("Missing optional dependency 'pooch' required "
28 "for scipy.datasets module. Please use pip or "
29 "conda to install 'pooch'.")
30 # The "fetch" method returns the full path to the downloaded data file.
31 return data_fetcher.fetch(dataset_name)
ImportError: Missing optional dependency 'pooch' required for scipy.datasets module. Please use pip or conda to install 'pooch'.
</code></pre>
| <python><jupyter-notebook><scipy> | 2023-10-04 08:26:43 | 1 | 2,394 | hbaromega |
77,227,902 | 15,671,914 | How to make LangChain's chatbot answer only from the knowledge base I provided? It shouldn't have access to OpenAI's GPT-3.5 knowledge base | <p>I have the following chatbot which answers questions about the knowledge base <code>docs</code> and retrieves the source documents it got its answers from.</p>
<pre><code>import os
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import RetrievalQA
from langchain.memory import ConversationBufferMemory
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.agents import AgentExecutor, Tool,initialize_agent
from langchain.agents.types import AgentType
os.environ['OPENAI_API_KEY'] = ''
system_message = """
"You are the XYZ bot."
"This is conversation with a human. Answer the questions you get based on the knowledge you have."
"If you don't know the answer, just say that you don't, don't try to make up an answer."
"""
llm = ChatOpenAI(
model_name="gpt-3.5-turbo", # Name of the language model
temperature=0 # Parameter that controls the randomness of the generated responses
)
embeddings = OpenAIEmbeddings()
docs = [
"Buildings are made out of brick",
"Buildings are made out of wood",
"Buildings are made out of stone",
"Buildings are made out of atoms",
"Buildings are made out of building materials",
"Cars are made out of metal",
"Cars are made out of plastic",
]
vectorstore = FAISS.from_texts(docs, embeddings)
retriever = vectorstore.as_retriever()
memory = ConversationBufferMemory(memory_key="chat_history", input_key='input', return_messages=True, output_key='output')
qa = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=vectorstore.as_retriever(),
verbose=True,
return_source_documents=True
)
tools = [
Tool(
name="doc_search_tool",
func=qa,
description=(
"This tool is used to retrieve information from the knowledge base"
)
)
]
agent = initialize_agent(
agent = AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
tools=tools,
llm=llm,
memory=memory,
return_source_documents=True,
return_intermediate_steps=True,
agent_kwargs={"system_message": system_message}
)
</code></pre>
<p>currently, if the query is something like "who is the president of the USA?", it answers the question when this information wasn't provided in <code>docs</code>. It even retrieves source documents which does not make sense.</p>
<pre><code>query = "who is the president of the USA?"
result = agent(query)
</code></pre>
<p>I want the chatbot to answer something like "Sorry, I wasn't provided with information that answers your question" when the question's answer is not from the knowledge base.</p>
| <python><openai-api><langchain><py-langchain> | 2023-10-04 08:16:41 | 2 | 385 | Blue Cheese |
77,227,897 | 18,769,241 | How to adjust sensitivity of y-axis? | <p>I have these y-axis values that I want to plot each on a x-tick using a bar plot to show the difference (the first below value for the x-tick 1, the second value for x-tick 2 and the third for the x-tick 3)</p>
<pre><code>0.9999893294851471
0.9999997569910387
1.0
</code></pre>
<p>for now I am trying to get helped by <code>ylim</code>, but in vain:</p>
<pre><code>import matplotlib.pyplot as plt
x=[1,2,3]
y=[0.9999893294851471,0.9999997569910387,1.0]
plt.bar(x,y,align='center')
plt.plot(x,y, 'r--', label='Default')
plt.xlabel("tick")
plt.ylabel("values")
plt.ylim(0.9999800, 0.9999999)
plt.show()
</code></pre>
<p>Any clues?</p>
| <python><matplotlib> | 2023-10-04 08:15:55 | 2 | 571 | Sam |
77,227,895 | 1,188,594 | Polars SQL Context filter by date or datetime | <p>I'm trying to write a query against parquet using Polars SQL Context. It's working great if I pre-filter my arrow table by date. I cannot figure out how to use date in the SQL query.</p>
<p>works:</p>
<pre><code>filters = define_filters(event.get("filters", None))
table = pq.read_table(
f"s3://my_s3_path{partition_path}",
partitioning="hive",
filters=filters,
)
df = pl.from_arrow(table)
ctx = pl.SQLContext(stuff=df)
sql = "SELECT things FROM stuff"
new_df = ctx.execute(sql,eager=True)
</code></pre>
<p>doesn't work (<code>filters==None</code> in this case):</p>
<pre><code>filters = define_filters(event.get("filters", None))
table = pq.read_table(
f"s3://my_s3_path{partition_path}",
partitioning="hive",
filters=filters,
)
df = pl.from_arrow(table)
ctx = pl.SQLContext(stuff=df)
sql = """
SELECT things
FROM stuff
where START_DATE_KEY >= '2023-06-01' and START_DATE_KEY < '2023-06-17'
"""
new_df = ctx.execute(sql,eager=True)
</code></pre>
<p>I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/xaras/projects/arrow-lambda/loose.py", line 320, in <module>
test_runner(target=args.t, limit=args.limit, is_debug=args.debug)
File "/Users/xaras/projects/arrow-lambda/loose.py", line 294, in test_runner
rows, metadata = test_handler(target, limit, display=True, is_debug=is_debug)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xaras/projects/arrow-lambda/loose.py", line 110, in test_handler
response = json.loads(handler(event, None))
^^^^^^^^^^^^^^^^^^^^
File "/Users/xaras/projects/arrow-lambda/serverless/app.py", line 42, in handler
new_df = ctx.execute(
^^^^^^^^^^^^
File "/Users/xaras/.pyenv/versions/pyarrow/lib/python3.11/site-packages/polars/sql/context.py", line 275, in execute
return res.collect() if (eager or self._eager_execution) else res
^^^^^^^^^^^^^
File "/Users/xaras/.pyenv/versions/pyarrow/lib/python3.11/site-packages/polars/utils/deprecation.py", line 95, in wrapper
return function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xaras/.pyenv/versions/pyarrow/lib/python3.11/site-packages/polars/lazyframe/frame.py", line 1713, in collect
return wrap_df(ldf.collect())
^^^^^^^^^^^^^
exceptions.ComputeError: cannot compare 'date/datetime/time' to a string value (create native python { 'date', 'datetime', 'time' } or compare to a temporal column)
</code></pre>
<p>UPDATE for addl info:</p>
<p>I started using <code>cast('2023-06-01' as date)</code>. This runs, but does not return any records.</p>
<p>Here is a replicatable example:</p>
<pre><code>import polars as pl
df = pl.DataFrame(
{
"a": ["2023-06-01", "2023-06-01", "2023-06-02"],
"b": [None, None, None],
"c": [4, 5, 6],
"d": [None, None, None],
}
)
df = df.with_columns(pl.col("a").str.strptime(pl.Date, "%Y-%m-%d", strict=False))
print(df)
ctx = pl.SQLContext(stuff=df)
new_df = ctx.execute(
"select * from stuff where a = cast('2023-06-01' as date)",
eager=True,
)
print(new_df)
## Returns...
shape: (3, 4)
ββββββββββββββ¬βββββββ¬ββββββ¬βββββββ
β a β b β c β d β
β --- β --- β --- β --- β
β date β f32 β i64 β f32 β
ββββββββββββββͺβββββββͺββββββͺβββββββ‘
β 2023-06-01 β null β 4 β null β
β 2023-06-01 β null β 5 β null β
β 2023-06-02 β null β 6 β null β
ββββββββββββββ΄βββββββ΄ββββββ΄βββββββ
shape: (0, 4)
ββββββββ¬ββββββ¬ββββββ¬ββββββ
β a β b β c β d β
β --- β --- β --- β --- β
β date β f32 β i64 β f32 β
ββββββββͺββββββͺββββββͺββββββ‘
ββββββββ΄ββββββ΄ββββββ΄ββββββ
</code></pre>
| <python><python-polars><rust-polars> | 2023-10-04 08:15:22 | 2 | 813 | Stephen Lloyd |
77,227,671 | 8,253,860 | How to create a singleton design inside a function scope in python? | <p>I have a singleton class for tracking events. The design is something like this.</p>
<pre><code>class Events:
def __new__(cls):
# Ensure singleton
if not hasattr(cls, "instance"):
cls.instance = super(Events, cls).__new__(cls)
return cls.instance
def __init__(self):
self.events = [] # events list
self.max_events = 25 # max events to store in memory
self.rate_limit = 60.0 # rate limit (seconds)
def __call__(self, event):
print(len(self.events)
self.events.append(event)
...
# if rate_limit_reached:
# sen_events()
# self.events = []
...
EVENTS = Events()
# EVENTS("event_name") # called like this to register events across modules
</code></pre>
<p>This worked well but now I want to make it exception safe, something like this</p>
<pre><code>@TryExcept(verbose=True)
def register_event(name: str, **kwargs):
events = Events()
events(name, kwargs)
</code></pre>
<p>But when I use this function across modules, <code>self.events</code> is always flushed, i.e, starts from 0.
Why does this happen even though I've explicitly set the initialisation condition for singleton? What's the best approach to make this exception safe?</p>
| <python><python-3.x> | 2023-10-04 07:41:35 | 3 | 667 | Ayush Chaurasia |
77,227,291 | 1,401,472 | Building a Document-based Question Answering Assistant with OpenAI's LLM - Cost-effective Strategies for Testing | <p>I'm working on a project that involves building a question-answering assistant using OpenAI's large language model (LLM). My goal is to make this assistant capable of answering questions related to a collection of informative documents.</p>
<p>I have already built this assistant using python + langchain + openai and it working as expected.</p>
<p>I understand that utilizing the OPENAI_API_KEY is necessary to access the model, but it comes with associated costs. As I'm in the testing and development phase, I'd like to explore cost-effective strategies to minimize expenses during this stage.</p>
<p>Here are my specific concerns:</p>
<p><strong>Free Tier Limitations</strong>: I'm aware of the OpenAI free tier, but it has limitations in terms of usage and may not be sufficient for extensive testing. What are the best practices for staying within these limits while still getting meaningful results during development?</p>
<p><strong>Sample Data Usage</strong>: Are there ways to optimize the usage of sample data provided by OpenAI for testing? How can I make the most of this data without incurring additional charges?</p>
<p><strong>Alternative Development Environments</strong>: Are there alternative development environments or strategies that allow me to test and refine my question-answering assistant without relying heavily on the API and incurring costs?</p>
<p>I appreciate any insights or tips on how to manage costs effectively during the initial development and testing phase of my project. My ultimate aim is to create a valuable assistant without overspending in the early stages.</p>
<p>Thank you for your help!</p>
| <python><openai-api><langchain><large-language-model> | 2023-10-04 06:35:57 | 0 | 2,321 | user1401472 |
77,227,241 | 3,886,675 | Is there a lint rule for python that automatically detects list +operator concatenation and suggests using spread | <p>For me the following</p>
<pre class="lang-py prettyprint-override"><code>extras = ["extra0", "extra1"]
func_with_list_arg([
"base0",
"base1",
] + extras)
</code></pre>
<p>is nicer to read with a spread operator like the following</p>
<pre class="lang-py prettyprint-override"><code>extras = ["extra0", "extra1"]
func_with_list_arg([
"base0",
"base1",
*extras,
])
</code></pre>
<p>Is there a lint rule in ruff or pylint that would detect this situation?</p>
| <python><pylint><ruff> | 2023-10-04 06:27:43 | 1 | 448 | Tomi Kokkonen |
77,227,153 | 6,018,303 | Pyscript using classes in the Python source code | <p>I am trying to rewrite the <a href="https://docs.pyscript.net/2023.09.1.RC2/beginning-pyscript/" rel="nofollow noreferrer">Polyglot - Piratical PyScript</a> application using in <code>main.py</code> a <code>Pirate</code> class instead of the original <code>translate_english</code> function.</p>
<p><strong><code>pyscript.json</code></strong></p>
<pre><code>{
"packages": ["arrr"]
}
</code></pre>
<p><strong><code>index.html</code></strong></p>
<pre><code><!DOCTYPE html>
<html lang="en">
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width,initial-scale=1" />
<title>π¦ Polyglot - Piratical PyScript</title>
<script type="module" src="https://pyscript.net/snapshots/2023.09.1.RC2/core.js"></script>
</head>
<body>
<h1>Polyglot π¦ π¬ π¬π§ β‘οΈ π΄ββ οΈ</h1>
<p>Translate English into Pirate speak...</p>
<input type="text" id="english" placeholder="Type English here..." />
<button py-click="Pirate.translate_english">Translate</button>
<div id="output"></div>
<script type="py" src="./main.py" config="./pyscript.json"></script>
</body>
</html>
</code></pre>
<p>Where I have replaced the original code</p>
<pre><code><button py-click="translate_english">Translate</button>
</code></pre>
<p>with</p>
<pre><code><button py-click="Pirate.translate_english">Translate</button>
</code></pre>
<p><strong><code>main.py</code></strong></p>
<pre><code>import arrr
from pyscript import document
class Pirate:
def __init__(self):
self.input_text = document.querySelector("#english")
self.english = self.input_text.value
self.output_div = document.querySelector("#output")
self.translate_english()
def translate_english(self):
self.output_div.innerText = arrr.translate(self.english)
</code></pre>
<p>The error I get is the following:</p>
<p><a href="https://i.sstatic.net/VyB21.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VyB21.png" alt="enter image description here" /></a></p>
<p>Please help.</p>
<p><strong>EDITED 1</strong></p>
<p>But replacing in <strong><code>index.html</code></strong> the line</p>
<pre><code><button py-click="Pirate.translate_english">Translate</button>
</code></pre>
<p>with</p>
<pre><code><button py-click="Pirate2.__init__">Translate</button>
</code></pre>
<p>and in <strong><code>main.py</code></strong></p>
<pre><code>class Pirate2:
def __init__(self):
self.input_text = document.querySelector("#english")
self.english = self.input_text.value
self.output_div = document.querySelector("#output")
self.output_div.innerText = arrr.translate(self.english)
</code></pre>
<p>Then works:</p>
<p><a href="https://i.sstatic.net/gWTCu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gWTCu.png" alt="enter image description here" /></a></p>
<p><strong>EDITED 2</strong></p>
<p>Replacing in <strong><code>index.html</code></strong> the line</p>
<pre><code><button py-click="Pirate.translate_english">Translate</button>
</code></pre>
<p>with</p>
<pre><code><button py-click="translate">Translate</button>
</code></pre>
<p>and in <strong><code>main.py</code></strong></p>
<pre><code>import arrr
from pyscript import document
class Pirate:
def __init__(self):
self.input_text = document.querySelector("#english")
self.english = self.input_text.value
self.output_div = document.querySelector("#output")
self.translate_english()
def translate_english(self):
self.output_div.innerText = arrr.translate(self.english)
def translate(event):
pirate = Pirate()
pirate.translate_english()
</code></pre>
<p>Then works:</p>
<p><a href="https://i.sstatic.net/s16Qi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/s16Qi.png" alt="enter image description here" /></a></p>
| <python><pyscript> | 2023-10-04 06:12:10 | 0 | 509 | PedroBiel |
77,227,060 | 5,369,777 | Python39 and pymysql | <p>I've just tried upgrading up to python3.9.5 from 3.7 and I'm encountering a strange behavior that I can't quite nail down. It involves pandas1.3 and pymysql1.1.0. The strange part is that the dataframes lengths' are the same, but the data is just not right in the 3.9 executed version.</p>
<p>I'd really like to move forward to python39 but I can't quite figure what the issue is here...</p>
<p>Now when I run:</p>
<pre><code>import pandas as pd
import pymysql as mysql
conn = connection = mysql.connect(host='xxxxxxxxxxxxxxxx',
user='xxxxxxxxxxxxxx',
password='xxxxxxxxxxxxxxxx',
db='xxxxxxxxxxxx',
charset='utf8',
port=xxxxxxxxxxx,
cursorclass=mysql.cursors.DictCursor)
somesql = """SELECT * FROM SOMETABLE;"""
df = pd.read_sql(somesql, conn)
</code></pre>
<p>I used to get in python37, and which should be the correct format:</p>
<pre><code>foo bar baz
___________
a b c
x y z
</code></pre>
<p>Now with python39, I get data returned but the data for each column is the column name repeated over in each cell:</p>
<pre><code>foo bar baz
___________
foo bar baz
foo bar baz
</code></pre>
| <python><python-3.x><pandas><python-3.7><pymysql> | 2023-10-04 05:54:18 | 0 | 666 | Derek_P |
77,227,040 | 992,421 | Spark job and optimization | <p>I am trying to run a spark job with code as below</p>
<pre><code>def call_stripes(rdd,voc,bas):
vocabulary =sc.broadcast(voc)
basis=sc.broadcast(bas)
def create_stripes(words):
vocab=set()
basis=set()
basis_set = set(words).intersection(basis.value)
vocab_set = set(words).intersection(vocabulary.value)
for key in list(vocab_set):
basis_nokey = set(basis_set)
#if key and value are same, just discard
basis_nokey.discard(key)
if basis_nokey != set():
yield (key, basis_nokey)
result = rdd.map(lambda line: (line.lower().split('\t')[0].split())) \
.flatMap(create_stripes) \
.reduceByKey(lambda x, y: x|y)
return result
stripesrdd=call_stripes(rdd,voc,bas)
print(stripesrdd.top(5))
</code></pre>
<p>I get data in format "some space separated text"\t1\t1 initially. I am interested only in the text so the map will take care of sending text into flatMap(createStripes).</p>
<p>if you look at that function, there are two data sources vocabulary and basis which are broadcasted data. vocabulary contains full vocabulary and basis contains a subset of vocabulary. The idea is to create a spark job which creates stripes.</p>
<p>if the any of the given words exist in vocabulary, they become keys of a set. if the word exists in basis that becomes a value. I need to create a list of such items. ie [(word1, {et, boo}), (word2, {dex, mine, buy})]. Call to top(5) is taking 60+sec. From spark ui i see that reducebykey is the one where time is spent</p>
<p>I have trouble with this 1) its giving me different data than expected 2) its taking too long ie around 60+ sec but it should take around 1.5+sec!!! Appreciate any insights on where I am going wrong and what optimizations can i do will be very helpful</p>
| <python><apache-spark><pyspark> | 2023-10-04 05:49:50 | 1 | 850 | Ram |
77,226,827 | 1,041,958 | How can I have a minimum period greater than the rolling window in a Pandas dataframe? | <p>I have multiple dataframes that have a column called <code>SIZE</code>. I need to calculate the rolling average of the size column on each Dataframe with different rolling windows; <strong>HOWEVER</strong> I want them all to require <code>x</code> number of valid (non-zero) observations to have passed before generating a valid moving average, regardless of whether x is greater or less than the rolling window itself.</p>
<p>e.g.</p>
<pre><code>x = 400
df1.groupby('NAME')['SIZE'].rolling(100, x).mean()
df2.groupby('NAME')['SIZE'].rolling(300, x).mean()
df3.groupby('NAME')['SIZE'].rolling(700, x).mean()
df4.groupby('NAME')['SIZE'].rolling(1000, x).mean()
</code></pre>
<p>If I were to execute the above, I receive a <code>ValueError: min_periods 400 must be <= window 100</code></p>
<p>The dfs are quite large and i'm struggling to come up with a way to handle this without being incredibly slow or breaking everything. What can I do?</p>
| <python><pandas> | 2023-10-04 04:49:00 | 1 | 955 | StormsEdge |
77,226,687 | 8,516,987 | FlaskForm, JWT Extended | <p>Looking for some help, clarity and to maybe get pointed in the right direction regarding python flask wtforms and flask jwt extended.</p>
<p>I have a login route that works just fine. My protected GET routes work fine as well.
However, when I submit a post request, I get this error:
flask_jwt_extended.exceptions.CSRFError: Missing CSRF token</p>
<p>My session has the csrf token because of flaskform.
My cookie storage has csrf_access_token as well.</p>
<p>It works for unprotected routes like login and contact.</p>
<pre><code># Login
@app.route('/login', methods=['GET', 'POST'])
def login():
form=LoginForm()
print(session)
if form.validate_on_submit():
data = request.form
email = data['email']
password = data['password']
user = User.query.filter_by(email=email).first()
if user and check_password_hash(user.password, password):
# # Check if user verified their email
# if user.email_verified == False:
# # Allow user to request another email
# return redirect(url_for('verify_email'))
access_token = create_access_token(identity=user.id)
refresh_token = create_refresh_token(identity=user.id)
# Update last_login_datea
user.last_login = datetime.utcnow()
db.session.commit()
response = make_response(redirect(url_for('solarsearch')))
response.set_cookie('access_token_cookie', access_token) # set the token as a cookie
response.set_cookie('refresh_token_cookie', refresh_token) # set the refresh token as a cookie
session['logged_in'] = True
session['login_message'] = None
return response
</code></pre>
<p>However, for a route like this;</p>
<pre><code>@jwt_required_and_not_revoked
def post(self):
form = SolarSearchForm()
print(form)
if form.validate_on_submit():
</code></pre>
<p>It just fails and throws missing csrf token.</p>
<p>I'm starting to wonder if there's a mix-up somewhere but unsure how to begin debugging.</p>
| <python><flask><flask-wtforms><flask-jwt-extended> | 2023-10-04 04:08:49 | 1 | 642 | itsPav |
77,226,579 | 7,188,690 | Unable to get the dictionary based on the first value of the dictionary | <p>I have a dictionary is this format. This is a spark representation :</p>
<pre><code>[4401188189705 -> [4.0, 1.0], 44011125835787 -> [5.0, 1.0], 44011142622982 -> [12.0, 1.0], 4401192401665 -> [7.0, 1.0], 4401171339786 -> [5.0, 1.0]]
</code></pre>
<p>I am trying to sort it in the descending order of the first value of the dictionary. But unable to do so. My code is as follows.</p>
<pre><code>sorted_dict = dict(sorted(distribution.items(), key=lambda item: item[1][0], reverse=True))
</code></pre>
<p>Expected output :</p>
<pre><code> 44011142622982 -> [12.0, 1.0], 4401192401665 -> [7.0, 1.0], 4401171339786 -> [5.0, 1.0],44011125835787 -> [5.0, 1.0],[4401188189705 -> [4.0, 1.0]
</code></pre>
| <python><dictionary> | 2023-10-04 03:38:01 | 0 | 494 | sam |
77,226,549 | 13,238,846 | Limit Memory Access on LangChain Bot | <p>I'm creating a langchain chatbot (Conversationl-ReACT-description) with zep long term memory and that bot has access to real-time data. How do I prevent use the memory when same query asked again few minutes later?</p>
| <python><langchain> | 2023-10-04 03:26:23 | 1 | 427 | Axen_Rangs |
77,226,540 | 3,398,536 | Tensorflow Keras not fitting the model - Cannot add tensor to the batch: number of elements does not match. Shapes are: [tensor]: [13], [batch]: [5] | <p>i'm trying to use Tensorflow Keras for captcha solver per the example of the package example and using my own data set to train it.</p>
<p>I try a couple of suggestions in StackOverflow do not find solution.</p>
<p>The error keep showing :</p>
<pre><code>Cannot add tensor to the batch: number of elements does not match. Shapes are: [tensor]: [13], [batch]: [5]
[[{{node IteratorGetNext}}]] [Op:__inference_train_function_13370]
</code></pre>
<p>Bellow is the code as I try to use and the list of images can be accessed in this link :[ [captcha files][1] ][2]</p>
<pre><code>import os
import numpy as np
import matplotlib.pyplot as plt
from pathlib import Path
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
data_dir = Path("./formatted_captcha/")
# Get list of all the images
images = sorted(list(map(str, list(data_dir.glob("*.png")))))
labels = [img.split(os.path.sep)[-1].split(".png")[0] for img in images]
characters = set(char for label in labels for char in label)
characters = sorted(list(characters))
print("Number of images found: ", len(images))
print("Number of labels found: ", len(labels))
print("Number of unique characters: ", len(characters))
print("Characters present: ", characters)
# Batch size for training and validation
batch_size = 16
# Desired image dimensions
img_width = 150
img_height = 50
# Factor by which the image is going to be downsampled
# by the convolutional blocks. We will be using two
# convolution blocks and each block will have
# a pooling layer which downsample the features by a factor of 2.
# Hence total downsampling factor would be 4.
downsample_factor = 4
# Maximum length of any captcha in the dataset
max_length = max([len(label) for label in labels])
# Mapping characters to integers
char_to_num = layers.StringLookup(vocabulary=list(characters), mask_token=None)
# Mapping integers back to original characters
num_to_char = layers.StringLookup(vocabulary=char_to_num.get_vocabulary(), mask_token=None, invert=True)
def split_data(images, labels, train_size=0.9, shuffle=True):
# 1. Get the total size of the dataset
size = len(images)
# 2. Make an indices array and shuffle it, if required
indices = np.arange(size)
if shuffle:
np.random.shuffle(indices)
# 3. Get the size of training samples
train_samples = int(size * train_size)
# 4. Split data into training and validation sets
x_train, y_train = images[indices[:train_samples]], labels[indices[:train_samples]]
x_valid, y_valid = images[indices[train_samples:]], labels[indices[train_samples:]]
return x_train, x_valid, y_train, y_valid
# Splitting data into training and validation sets
x_train, x_valid, y_train, y_valid = split_data(np.array(images), np.array(labels))
def encode_single_sample(img_path, label):
# 1. Read image
img = tf.io.read_file(img_path)
# 2. Decode and convert to grayscale
img = tf.io.decode_png(img, channels=1)
# 3. Convert to float32 in [0, 1] range
img = tf.image.convert_image_dtype(img, tf.float32)
# 4. Resize to the desired size
img = tf.image.resize(img, [img_height, img_width], method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# 5. Transpose the image because we want the time
# dimension to correspond to the width of the image.
img = tf.transpose(img, perm=[1, 0, 2])
# 6. Map the characters in label to numbers
label = char_to_num(tf.strings.unicode_split(label, input_encoding="UTF-8"))
# 7. Return a dict as our model is expecting two inputs
return {"image": img, "label": label}
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = (
train_dataset.map(
encode_single_sample, num_parallel_calls=tf.data.AUTOTUNE
)
.batch(batch_size)
.prefetch(buffer_size=tf.data.AUTOTUNE)
)
validation_dataset = tf.data.Dataset.from_tensor_slices((x_valid, y_valid))
validation_dataset = (
validation_dataset.map(
encode_single_sample, num_parallel_calls=tf.data.AUTOTUNE
)
.batch(batch_size)
.prefetch(buffer_size=tf.data.AUTOTUNE)
)
class CTCLayer(layers.Layer):
def __init__(self, name=None):
super().__init__(name=name)
self.loss_fn = keras.backend.ctc_batch_cost
def call(self, y_true, y_pred):
# Compute the training-time loss value and add it
# to the layer using `self.add_loss()`.
batch_len = tf.cast(tf.shape(y_true)[0], dtype="int64")
input_length = tf.cast(tf.shape(y_pred)[2], dtype="int64")
label_length = tf.cast(tf.shape(y_true)[2], dtype="int64")
input_length = input_length * tf.ones(shape=(batch_len, 1), dtype="int64")
label_length = label_length * tf.ones(shape=(batch_len, 1), dtype="int64")
loss = self.loss_fn(y_true, y_pred, input_length, label_length)
self.add_loss(loss)
# At test time, just return the computed predictions
return y_pred
def build_model():
# Inputs to the model
input_img = layers.Input(
shape=(img_width, img_height, 1), name="image", dtype="float32"
)
labels = layers.Input(name="label", shape=(None,), dtype="float32")
# First conv block
x = layers.Conv2D(
32,
(3, 3),
activation="relu",
kernel_initializer="he_normal",
padding="same",
name="Conv1",
)(input_img)
x = layers.MaxPooling2D((2, 2), name="pool1")(x)
# Second conv block
x = layers.Conv2D(
64,
(3, 3),
activation="relu",
kernel_initializer="he_normal",
padding="same",
name="Conv2",
)(x)
x = layers.MaxPooling2D((2, 2), name="pool2")(x)
# We have used two max pool with pool size and strides 2.
# Hence, downsampled feature maps are 4x smaller. The number of
# filters in the last layer is 64. Reshape accordingly before
# passing the output to the RNN part of the model
new_shape = ((img_width // 4), (img_height // 4) * 64)
x = layers.Reshape(target_shape=new_shape, name="reshape")(x)
x = layers.Dense(64, activation="relu", name="dense1")(x)
x = layers.Dropout(0.2)(x)
# RNNs
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True, dropout=0.25))(x)
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True, dropout=0.25))(x)
# Output layer
x = layers.Dense(
len(char_to_num.get_vocabulary()) + 1, activation="softmax", name="dense2"
)(x)
# Add CTC layer for calculating CTC loss at each step
output = CTCLayer(name="ctc_loss")(labels, x)
# Define the model
model = keras.models.Model(
inputs=[input_img, labels], outputs=output, name="ocr_model_v1"
)
# Optimizer
opt = keras.optimizers.Adam()
# Compile the model and return
model.compile(optimizer=opt)
return model
# Get the model
model = build_model()
model.summary()
epochs = 100
early_stopping_patience = 10
# Add early stopping
early_stopping = keras.callbacks.EarlyStopping(
monitor="val_loss", patience=early_stopping_patience, restore_best_weights=True
)
# Train the model
history = model.fit(
train_dataset,
batch_size=batch_size,
validation_data=validation_dataset,
epochs=epochs,
callbacks=[early_stopping],
validation_steps=10,
)
# Get the prediction model by extracting layers till the output layer
prediction_model = keras.models.Model(
model.get_layer(name="image").input, model.get_layer(name="dense2").output
)
prediction_model.summary()
# A utility function to decode the output of the network
def decode_batch_predictions(pred):
input_len = np.ones(pred.shape[0]) * pred.shape[2]
# Use greedy search. For complex tasks, you can use beam search
results = keras.backend.ctc_decode(pred, input_length=input_len, greedy=True)[0][0][
:, :max_length
]
# Iterate over the results and get back the text
output_text = []
for res in results:
res = tf.strings.reduce_join(num_to_char(res)).numpy().decode("utf-8")
output_text.append(res)
return output_text
# Let's check results on some validation samples
for batch in validation_dataset.take(1):
batch_images = batch["image"]
batch_labels = batch["label"]
preds = prediction_model.predict(batch_images)
pred_texts = decode_batch_predictions(preds)
orig_texts = []
for label in batch_labels:
label = tf.strings.reduce_join(num_to_char(label)).numpy().decode("utf-8")
orig_texts.append(label)
_, ax = plt.subplots(4, 4, figsize=(15, 5))
for i in range(len(pred_texts)):
img = (batch_images[i, :, :, 0] * 255).numpy().astype(np.uint8)
img = img.T
title = f"Prediction: {pred_texts[i]}"
ax[i // 4, i % 4].imshow(img, cmap="gray")
ax[i // 4, i % 4].set_title(title)
ax[i // 4, i % 4].axis("off")
plt.show()``
[1]: https://file.io/yhnCd7gfu0Bb
</code></pre>
| <python><tensorflow><keras><deep-learning> | 2023-10-04 03:22:39 | 1 | 341 | Iron Banker Of Braavos |
77,226,390 | 8,387,921 | How to iterate through each url form txt file and download multiple file | <p>I have a txt file that contains more than 2000 the download links of various zip files. What is the best way to read text file and iterate through each line and download all files one by one. opening 2000 tabs will probably crash the pc, so i only want to download the file after first is finished.</p>
<p>The example of data_download.txt file is</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>https://data.unavco.org/archive/gnss/rinex/obs/2015/001/uh010010.15d.Z
https://data.unavco.org/archive/gnss/rinex/obs/2014/365/uh013650.14d.Z
https://data.unavco.org/archive/gnss/rinex/obs/2015/002/uh010020.15d.Z
https://data.unavco.org/archive/gnss/rinex/obs/2015/006/uh010060.15d.Z
https://data.unavco.org/archive/gnss/rinex/obs/2015/004/uh010040.15d.Z</code></pre>
</div>
</div>
</p>
<p>I use this code, but this only download the files when i click next and i have to click more then 2000 times to download each files.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>import tkinter as tk
from selenium import webdriver
driver = webdriver.Chrome()
with open('data_download.txt') as file:
urls = iter([line.strip() for line in file])
def open_next():
try:
driver.get(next(urls))
except StopIteration:
print('no more urls')
return
root = tk.Tk()
btn = tk.Button(root, text='Open next url', command=open_next)
btn.pack(padx=10, pady=10)
root.mainloop()</code></pre>
</div>
</div>
</p>
<p>I want to do it automatically. Thanks in advance.</p>
| <python><python-3.x><selenium-webdriver><tkinter><beautifulsoup> | 2023-10-04 02:30:25 | 1 | 399 | Sagar Rawal |
77,226,367 | 861,757 | What's the best way to calculate Tp,q,r = Sp,q Sp,r in NumPy? | <p>I'm trying to turn a series of m input sequences with n items each (s<sub>m,n</sub>) into a tensor where t<sub>p,q,r</sub> = s<sub>p,q</sub>s<sub>p,r</sub></p>
<p>While my code does work, I feel there must be a better solution. Here's what I got.</p>
<pre class="lang-py prettyprint-override"><code># nseqs is the number of sequences
# seq_length is the sequence length
# seq is a list of sequences
output = np.empty((nseqs, seq_length, seq_length))
for n in range(nseqs):
for i, j in enumerate(seq[n]):
output[n, i, :] = j
output[n, :, :] *= output[n, :, :].T
</code></pre>
<p>More to the point, is there a way to rejuggle the <code>output</code> tensor so that the multiplication phase is done in a single step and without those loops?</p>
| <python><numpy> | 2023-10-04 02:19:40 | 2 | 728 | Paulo Mendes |
77,226,161 | 1,302,018 | Airflow 2.7.1 - Two dags scheduled on same time but not both will run at same time | <p>I have 2 dags that are scheduled to run every 5 minutes. The issue i'm having is not both of them will simultaneously run. Please see below.</p>
<pre><code>*/5 * * * * scheds of both
dagA with task1, task2, task3
dabB with task1, task2
ex:
dagA.task1 (running)
dagB (not even have queues)
dagA.task1 (completed)
dagB.task2 (queued)
dagB.task1 (running)
</code></pre>
<p>In short, only 1 dag and 1 task will only run.<br />
I checked the config for <code>parallelism</code> and value is <code>32</code>. Dag code below with very same parameters and task too.</p>
<pre><code>@dag(
dag_id='dagA',
schedule_interval='*/5 * * * *', #..every 5 mins
start_date=datetime(2023, 9, 1, tz="UTC"),
catchup=False,
max_active_runs=1,
default_args=default_args,
)
--------------------------------------------------------
@dag(
dag_id='dagB',
schedule_interval='*/5 * * * *', #..every 5 mins
start_date=datetime(2023, 9, 1, tz="UTC"),
catchup=False,
max_active_runs=1,
default_args=default_args,
)
def process_gdelt_etl():
@task(
retries=1, retry_delay=timedelta(minutes=4)
) # they have same task structure too
def test():
pass
</code></pre>
<p>How do I configure or setup these dags to run at same time without waiting to each other dag's tasks to complete? Am I missing some important parameters?</p>
<p>New to Airflow, thanks!</p>
| <python><airflow> | 2023-10-04 00:54:21 | 1 | 669 | nickanor |
77,226,153 | 2,759,316 | Custom Thread names with ThreadPoolExecutor (NOT just the prefix)? | <p>I've found a couple threads with a pre-search, but they ALL (including online tutorials) state the solution is to use the thread_name_prefix parameter, which is NOT what I want.</p>
<p>I'm using the code below which works fine. As expected, all threads are named as "Access Rule Deletion_XXX". What I am trying to do is name each thread, not only with the thread prefix of my choice, but I'd like to append the "ar" value (a string) from the below code as well, basically so I can see what "access rule" it is currently processing with each thread.</p>
<p>Ideal Output for thread name would be: "Access Rule Deletion - SourceMatters Access_XXX" (where "SourceMatters Access" is some arbitrary access rule name)</p>
<pre><code>with ThreadPoolExecutor(max_workers=100, thread_name_prefix="Access Rule Deletion") as e:
future_list = []
for ar in existing_access_rules:
future = e.submit(access_rule_deletions, ar)
future_list.append(future)
</code></pre>
<p>Is this possible with the ThreadPoolExecutor?</p>
| <python><python-3.x><multithreading><threadpoolexecutor> | 2023-10-04 00:49:53 | 0 | 1,241 | Source Matters |
77,226,084 | 13,142,245 | Python asyncio for triggering event w/o waiting for response | <p>Suppose I have an event driven architecture and two functions, named outer and inner. Outer will call inner but outer does not require the response from inner. As soon as the request has been sent to inner, outer can terminate.</p>
<p>Can this be accomplished in Python using asyncio? Is it as simple as not using the <code>await</code> keyword?</p>
<p>Ex</p>
<pre class="lang-py prettyprint-override"><code>async def outer():
inner()
return
</code></pre>
<p>As opposed to</p>
<pre class="lang-py prettyprint-override"><code>async def outer():
await inner()
# not blocked by inner...but no remaining steps to perform
return
</code></pre>
<p>Edit, a real example</p>
<pre class="lang-py prettyprint-override"><code>def inner():
print('inner started')
time.sleep(5)
print('inner complete')
async def outer():
print('outer started')
inner()
print('outer complete')
def next_():
print('next started')
print('next complete')
</code></pre>
<p>Scenario one</p>
<pre class="lang-py prettyprint-override"><code>outer()
next_()
>>>
next started
next complete
/var/folders/p9/b46sgf393ys74l8dsh5fxyc80000gr/T/ipykernel_1113/3300557029.py:16:
RuntimeWarning: coroutine 'outer' was never awaited
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
</code></pre>
<p>Scenario two</p>
<pre class="lang-py prettyprint-override"><code>await outer()
next_()
>>>
outer started
inner started
inner complete
outer complete
next started
next complete
</code></pre>
<p>It would seem that there is not a way to terminate outer once inner has been invoked. Note: Using Jupyter notebook, which might perform async operations anomalously</p>
<p>For anyone wondering, "why does this matter? Just have outer await inner and call next" - consider two serverless functions A and B where if the result of B is not required by A, then A's computational costs are needlessly increased.</p>
| <python><async-await><python-asyncio> | 2023-10-04 00:16:16 | 1 | 1,238 | jbuddy_13 |
77,225,892 | 258,418 | Typing function object property | <p>I have to implement a protocol which sends a message, and gets a response in turn. Usually, the response is instant but in some cases the message sets the device into the correct state to process other messages, and only after processing the other messages will the device respond to the initial message setting the state.</p>
<p>Since the actual code of generating a message based on parameters, and constructing a response object is quite repetitive, I use a decorator to do so. While for 90% of the cases the user wants to <code>sendMessageX() -> ResponseToMessageX</code>, there are cases where the user has to <code>dispatchMessageX() -> Future<ResponseToMessageX></code>. Since both functions take the same parameters and essentially return the same thing (<code>ResponsToMessageX</code>, at least after resolving the future), it makes sense to generate both functions from the decorator.</p>
<p>The easiest way to do this for me, was to abuse pythons <code>everything is an object</code> and simply add the second function as a property to the function object, i.e.:</p>
<pre><code>def decorator(f):
def decorated_f():
return f()
def decorated_f_split():
return f'()
decorated_f.second_function = decorated_f_split
return decorated_f
</code></pre>
<p>(If I can help it I am trying to avoid metaclasses to keep things simple and avoid restrictions).</p>
<p>To make it easy on the user I would like typehinting to still work (so the editor can typehint which properties the response object has).</p>
<p>While I got this to work, pyright still complains, and I cant get rid off the warnings. Here is the MWE of what I came up with:</p>
<pre class="lang-python prettyprint-override"><code>from functools import wraps
from typing import (
Any,
Awaitable,
Callable,
Concatenate,
ParamSpec,
Protocol,
TypeVar,
cast,
)
T = TypeVar("T")
C = TypeVar("C", covariant=True)
P = ParamSpec("P") # , covariant=True)
class X(Protocol[P, C]):
def split(self, *args: Any, **kwds: Any) -> Awaitable[C]:
...
def __call__(self, *args: Any, **kwds: Any) -> C:
...
def wrap_returntype_awaitable(f: Callable[P, T]) -> Callable[P, Awaitable[T]]:
f.__annotations__["return"] = Awaitable[T]
g = cast(Callable[P, Awaitable[T]], f)
return g
def decorator(f: Callable[Concatenate[Any, P], C]) -> X[P, C]:
class Wrapper:
# @wrap_returntype_awaitable
@wraps(wrap_returntype_awaitable(f))
async def split(self, *args: P.args, **kwds: P.kwargs) -> Awaitable[C]:
await self.send_command()
return self.get_response()
@wraps(f)
async def __call__(self, *args: P.args, **kwds: P.kwargs) -> C:
response_receiver = await Wrapper.split(self, *args, **kwds)
return await response_receiver
Wrapper.split.__annotations__["return"] = Awaitable[C]
return Wrapper()
class A:
async def send_command(self):
pass
async def get_response(self):
pass
async def command(self, a: int) -> int:
await self.send_command()
return a
@decorator
async def x(self, a1: int) -> int:
...
async def demo():
a = A()
response = await a.x()
split_command = await a.x.split()
split_response = await split_command
</code></pre>
<p>I feel forced to use a class inside the decorator, since I can not tell pytright that I want to assign to a property of the function otherwise (I am not sure how to tell pyright that the object I am returning has a property <code>split</code> which I need to set (other than surpressing type warnings on that line)). The issue with this is matching the <code>self</code> type, in particular since I usually inherit the <code>send_command</code>/<code>get_response</code> message from a base type of <code>class A</code> (ommitted to keep the MWE simpler).</p>
<p>Pyright complains about the following:</p>
<pre><code>p2.py:38:24 - error: Cannot access member "send_command" for type "Wrapper"
Member "send_command" is unknown (reportGeneralTypeIssues)
p2.py:39:25 - error: Cannot access member "get_response" for type "Wrapper"
Member "get_response" is unknown (reportGeneralTypeIssues)
p2.py:48:12 - error: Expression of type "Wrapper" cannot be assigned to return type "X[P@decorator, C@decorator]"
"Wrapper" is incompatible with protocol "X[P@decorator, C@decorator]"
Type parameter "P@X" is invariant, but "P@X" is not the same as "P@decorator"
Type parameter "C@X" is covariant, but "Awaitable[C@decorator]" is not a subtype of "C@decorator"
Type "Awaitable[C@decorator]" cannot be assigned to type "C@decorator" (reportGeneralTypeIssues)
3 errors, 0 warnings, 0 informations
</code></pre>
<p>I already touched on the first to warnings (how should I type self here? coercing it to <code>Base_A</code> does not work, (<code>Base_A</code> has <code>send_command</code>/<code>get_response</code>, but my decorator calls these with <code>A._marker_property</code>). Furthermore, since I am currently using function and not class decorators, there is no way for me to get to <code>class A</code> inside the decorator.</p>
<p>I could try to split the decorator into a function decorator that marks my functions and a class decorator (which would have access to the underlying class), but then I would have pyright complaining about assignment of function properties again.</p>
<p>My bigger issue is with the last two errors:</p>
<pre><code> "Wrapper" is incompatible with protocol "X[P@decorator, C@decorator]"
Type parameter "P@X" is invariant, but "P@X" is not the same as "P@decorator"
Type parameter "C@X" is covariant, but "Awaitable[C@decorator]" is not a subtype of "C@decorator"
Type "Awaitable[C@decorator]" cannot be assigned to type "C@decorator"
</code></pre>
<p>Why is <code>P@X</code> not the same as <code>P@decorator</code> (The parameterset should be the same? And I cant have a covariant Parameterset...)
And I thought I had typed it correctly so that <code>Awaitable[C@decorator]</code> is expected in the right places, yet the typechecker clearly thinks differently.</p>
<hr />
<h2>(I do not have control over the protocol and cant change its design)</h2>
<p><strong>EDIT</strong>: I was so focused on getting completion/inference to work (i.e. for the example, the editor knows that the functions return <code>int</code>/<code>Awaitable[int]</code>) that I did not actually test the functionality. The access to self breaks, and methods of the parent object can not be called :/</p>
<p><a href="https://i.sstatic.net/SJHiT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SJHiT.png" alt="Inference works" /></a></p>
| <python><python-typing><pyright> | 2023-10-03 22:54:05 | 1 | 5,003 | ted |
77,225,888 | 3,387,716 | SAM alignment: Extract specific region in the query sequence and its enclosing parts in the CIGAR string | <p>I need to perform the local alignment of a given region of a DNA sequence for which a global alignment was made, and update the corresponding part of the global CIGAR string.</p>
<p>The steps would be as follow:</p>
<p><strong>1.</strong> Extract the region of interest from the "query" sequence according to the CIGAR string of the global alignment.
<br>
<strong>2.</strong> Perform the local alignment of the region and generate the local CIGAR string.
<br>
<strong>3.</strong> Extract from the global CIGAR string the operations "before" and "after" the region of interest.
<br>
<strong>4.</strong> Generate the new global CIGAR string</p>
<p>I would like to get help for <strong>#1</strong> and <strong>#3</strong></p>
<p>Here's a concrete example:</p>
<pre class="lang-none prettyprint-override"><code>Input:
------
Reference sequence: ACGGCATCAGCGATCATCGGCATATCGAC (against which the global alignment was made)
Query sequence: TCATCAAGCGTCGCCCTC
CIGAR string: 1S5M1I3M4D6M2D2M
CIGAR position: 5 (1-based, position of the leftmost mapped nucleotide in the reference)
Genomic region: 7-23 (1-based, inclusive)
</code></pre>
<p>And the interpretation of the above data:</p>
<pre class="lang-none prettyprint-override"><code>Reference: ACGGCA|TCA-GCGATCATCGGCAT|ATCGAC
Query: .CA|TCAAGCG----TCGCCC-|-TC
CIGAR*: SMM|MMMIMMMDDDDMMMMMMD|DMM
Position: β
</code></pre>
<p><sup><strong>note:</strong> the genomic region is in between the <code>|</code> characters</sup></p>
<p>So, in this example I would need to extract <code>TCAAGCGTCGCCC</code> from <code>Query</code> and <code>SMM</code> & <code>DMM</code> from <code>CIGAR*</code>.</p>
<hr />
<p>Here's my clumsy attempt:</p>
<pre class="lang-py prettyprint-override"><code>import re
import itertools
# Input:
reference = 'ACGGCATCAGCGATCATCGGCATATCGAC'
query = 'TCATCAAGCGTCGCCCTC'
cigar = '1S5M1I3M4D6M2D2M'
region = (7-1, 23)
position = 5-1
# Now trying the extract the region in the query and the outer parts of the CIGAR:
x0, x1 = region # range in reference
y0, y1 = (None, None) # range in query
c0, c1 = ([], []) # cigar operations, before and after the region
x = position # current position in reference
y = 0 # current position in query
for op in itertools.chain.from_iterable(itertools.repeat(m[1], int(m[0])) for m in re.findall(r'(\d+)([MIDS])', cigar)):
if x < x0:
c0.append(op)
elif x < x1:
if y0 == None:
y0 = y
else:
y1 = y
else:
c1.append(op)
if op == 'M':
x += 1
y += 1
elif op in ['S','I']:
y += 1
elif op == 'D':
x += 1
y1 += 1
print( (''.join(c0), query[y0:y1], ''.join(c1)) )
</code></pre>
<pre class="lang-none prettyprint-override"><code>('SMM', 'TCAAGCGTCGCCCT', 'DMM')
</code></pre>
<p>The problem is that I get a superfluous <code>T</code> at the end of the query region.</p>
| <python><bioinformatics><text-processing> | 2023-10-03 22:52:39 | 1 | 17,608 | Fravadona |
77,225,488 | 2,642,356 | VSCode: "fatal error: Python.h: No such file or directory" even with python-dev installed | <p>Using VSCode (which I'm quite new to) on Ubuntu 20.04, I wish to build a C-extension (actually C++ extension) to Python.</p>
<p>I'm trying to build the following MWE:</p>
<pre><code>#include <numpy/arrayobject.h>
int main() {return 0;}
</code></pre>
<p>However the build fails on <code>#include <numpy/arrayobject.h></code> with the error</p>
<blockquote>
<p>/usr/include/numpy/ndarrayobject.h:11:10: fatal error: Python.h: No such file or directory</p>
</blockquote>
<p>I do have <code>python3.9-dev</code> installed, and my <code>c_cpp_properties.json</code> is supposed to contain the relevant path to include:</p>
<pre class="lang-json prettyprint-override"><code>{
"configurations": [
{
"name": "Linux",
"includePath": [
"${workspaceFolder}/**",
"/usr/include/python3.9",
],
"defines": [],
"compilerPath": "/usr/bin/clang",
"cStandard": "c17",
"cppStandard": "c++14",
"intelliSenseMode": "linux-clang-x64"
}
],
"version": 4
}
</code></pre>
<p>This question was supposedly answered plenty of times in the past, and yet nothing works for me from any answer I found.</p>
<p>Thank you for your help.</p>
| <python><c++><visual-studio-code><python-extensions> | 2023-10-03 21:00:08 | 0 | 1,864 | EZLearner |
77,225,147 | 12,144,448 | Improving model when Coefficient of Determination (RΒ²)is inconsistent | <p>Many examples in the literature state that the value of the coefficient of determination (RΒ²) is a value between 0 and 1, but some cases in my regression model yielded negative results. While researching on the StackOverflow community and also reading the Sklearn library documentation, I found some information indicating that RΒ² can indeed be a negative number when the model performs arbitrarily worse. This is the case with this example where I obtained an RΒ² of <strong>-2.65</strong>:</p>
<p><a href="https://i.sstatic.net/9DNvs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9DNvs.png" alt="enter image description here" /></a></p>
<pre><code>def calculate_r2(y_true, y_pred):
y_mean = np.mean(y_true)
sse = np.sum((y_true - y_pred)**2)
sst = np.sum((y_true - y_mean)**2)
r2 = 1 - (sse / sst)
return r2
y_true = np.array([47, 40, 42, 43, 43, 34, 50, 47, 41, 34, 44, 48, 41, 35])
y_pred = np.array([42, 33, 51, 60, 45, 44, 51, 47, 34, 24, 53, 28, 35, 40])
r2 = calculate_r2(y_true, y_pred)
print("RΒ²:", r2)
</code></pre>
<p>However, I found another formula that can also be used to calculate RΒ² and decided to test it. Using this alternative formula, the negative results were eliminated, but there were some cases where RΒ² exceeded 1. In this particular case, it was <strong>4.02</strong>:</p>
<p><a href="https://i.sstatic.net/ontit.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ontit.png" alt="enter image description here" /></a></p>
<pre><code>def calculate_r2(y_true, y_pred):
y_mean = np.mean(y_true)
sse = np.sum((y_pred - y_mean)**2)
sst = np.sum((y_true - y_mean)**2)
r2 = sse / sst
return r2
y_true = np.array([47, 40, 42, 43, 43, 34, 50, 47, 41, 34, 44, 48, 41, 35])
y_pred = np.array([42, 33, 51, 60, 45, 44, 51, 47, 34, 24, 53, 28, 35, 40])
r2 = calculate_r2(y_true, y_pred)
print("RΒ²:", r2)
</code></pre>
<p>How is this possible? I believe that among the two formulas, the one that is widely accepted and used is the first one, which returned a negative value. From my understanding, negative values of RΒ² indicate that the model is not only inadequate but also predicting poorly. Is there anything I can do to improve this result?</p>
<p><strong># EDIT 1</strong></p>
<p>I have a database with 71 records. Several algorithms were employed in the implementation of regression models. The specific result I provided in the example was obtained using LinearRegression with the complete and normalized dataset, with 80% of the data for training and 20% for testing. In some cases (not the case in the example above), I performed data preprocessing using Pearson Correlation (0.98) and PCA (0.99), but I still encountered some instances of negative RΒ² values.</p>
| <python><machine-learning><statistics> | 2023-10-03 19:52:36 | 0 | 531 | Liam Park |
77,225,146 | 2,865,313 | Using `arcgisbinding` and `reticulate` to run R code and python within a Quarto document | <h1>Background</h1>
<p>I am a long-time R user but completely new to python. I have used the <code>arcgisbinding</code> package successfully for several years and I am happy with how it is working - I am connecting to my AGOL account there and exporting file geodatabases (fGDB) without issue.</p>
<p>Recently I found a script online to help me truncate and append to an AGOL feature service, written in python. I need to pull data from a web service using an API written for R, convert it to a fGDB, and use the python script to "overwrite" the data currently sitting in AGOL.</p>
<p>I tried out my scripts using test data that I had already created in another R script and everything worked. Then, I tried to put everything together in one script: pull the data from the webservice using an API, export to fGDB with <code>arcgisbinding</code>, and overwrite the older data in AGOL with the python code in one Quarto document.</p>
<h1>The issue</h1>
<p>However, it seems like <code>arcgisbinding</code> and <code>reticulate</code> are using different versions of python.</p>
<p>Please note: <strong>I know nothing about how to create an environment and have no experience with command line coding!</strong> So if you know what the issue is here, please be as detailed as possible in how to remedy it.</p>
<h1>Example code and error messages</h1>
<p>Running on Windows<br />
RStudio "Desert Sunflower" 2023.09.0 Build 463<br />
64 bit R V 4.2.3</p>
<p>This works - everything runs as expected (including the rest of the python code, which is not necessary for this minimal - hopefully reproducible - example).</p>
<pre><code>---
title: "Which python version?"
format: html
editor: visual
---
```{r libraries}
library(reticulate)
use_python('C:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\envs\\arcgispro-py3\\python.exe', required = T)
```
```{python}
import arcpy
```
</code></pre>
<p>But if I include <code>arcgisbinding</code> I get this error message after running my first line of python code:</p>
<pre><code>---
title: "Which python version?"
format: html
editor: visual
---
```{r libraries}
library(reticulate)
use_python('C:\\Program Files\\ArcGIS\\Pro\\bin\\Python\\envs\\arcgispro-py3\\python.exe', required = T)
library(arcgisbinding)
arc.check_portal()
arc.check_product()
```
```{python}
import arcpy
```
ImportError:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.
We have compiled some common reasons and troubleshooting tips at:
https://numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.9 from "C:/Program Files/ArcGIS/Pro/bin/Python/envs/arcgispro-py3/python.exe"
* The NumPy version is: "1.20.1"
and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.
Original error was: DLL load failed while importing _multiarray_umath: The specified module could not be found.
</code></pre>
<p>If I try not specifying the python location, this is the error I get:</p>
<pre><code>---
title: "Which python version?"
format: html
editor: visual
---
```{r libraries}
library(reticulate)
library(arcgisbinding)
arc.check_portal()
arc.check_product()
```
```{python}
import arcpy
```
> reticulate::repl_python()
Error: C:/Users/NewtonEr/AppData/Local/r-miniconda/envs/r-reticulate/python38.dll - The specified module could not be found.
</code></pre>
| <python><r><quarto><reticulate> | 2023-10-03 19:52:29 | 0 | 5,950 | Nova |
77,225,048 | 9,869,695 | sklearn requirements installation with pip: ensure that binary wheels are used | <p>In the sklearn <a href="https://scikit-learn.org/stable/install.html#installing-the-latest-release" rel="nofollow noreferrer">installation guide</a> for the latest version (1.3.1) it mentions that you can install dependencies with pip, but says</p>
<p><strong>"When using pip, please ensure that binary wheels are used, and NumPy and SciPy are not recompiled from source"</strong></p>
<p>What does this mean and how would I go about ensuring that?</p>
| <python><pip><sklearn-pandas> | 2023-10-03 19:32:51 | 1 | 1,514 | Arran Duff |
77,225,046 | 11,107,192 | Handling Large Payloads in CosmosDB with Azure Durable Functions | <p>We have encountered an issue with a function responsible for writing data chunks into CosmosDB. The problem arises when one of the items within these chunks exceeds 2MB in size, resulting in the following exception:</p>
<pre class="lang-bash prettyprint-override"><code>Microsoft.Azure.DocumentDB.Core: Message: {"Errors":["Request size is too large"]}
</code></pre>
<p>To mitigate this issue, we initially attempted to estimate the payload size using sys.getsizeof, but this approach proved ineffective. Consequently, we are exploring alternative solutions to address this specific payload size problem. One possible approach is to handle the oversized payload individually and take appropriate action, such as sending it to a queue or implementing a similar strategy.</p>
<h4>The problem</h4>
<p>The core issue we are facing is that our exception handling mechanism is not functioning as expected. Not even the logging within the exception handler nor the logging in the orchestrator's exception clause is being triggered. We are using retry policies for error handling; however, for this particular scenario, they appear to be ineffective since the payload size remains constant.</p>
<p>Here is a snippet of the relevant code:</p>
<pre class="lang-py prettyprint-override"><code> # activity
try:
entitiesCollection.set(entities_document_list)
classificationsCollection.set(classifications_document_list)
relatedAssetsCollection.set(related_assets_document_list)
lineageCollection.set(lineage_document_list)
except Exception as err:
logging.error(f"Saving to CosmosDB failed with error - {err}")
</code></pre>
| <python><azure-cosmosdb><azure-durable-functions> | 2023-10-03 19:32:45 | 0 | 717 | basquiatraphaeu |
77,224,942 | 10,430,394 | Installing OpenBabel for Python isn't working | <p>I have read all the related posts and they aren't helping. Like <a href="https://stackoverflow.com/questions/65537365/how-to-install-openbabel">this</a> one is just a defunct that leads nowhere. I have installed the binary for Windows form <a href="https://github.com/openbabel/openbabel/releases" rel="nofollow noreferrer">here</a> and as I understood, I had to then use pip to install openbabel using:</p>
<pre><code>pip install openbabel
</code></pre>
<p>But when I run that, I get the following error:</p>
<pre><code>C:\Users\Chris>pip install openbabel
Collecting openbabel
Using cached openbabel-3.1.1.1.tar.gz (82 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: openbabel
Building wheel for openbabel (pyproject.toml) ... error
error: subprocess-exited-with-error
Γ Building wheel for openbabel (pyproject.toml) did not run successfully.
β exit code: 1
β°β> [15 lines of output]
running bdist_wheel
running build
running build_ext
Warning: invalid version number '3.1.1.1'.
Guessing Open Babel location:
- include_dirs: ['C:\\Users\\Chris\\AppData\\Local\\Programs\\Python\\Python39\\include', 'C:\\Users\\Chris\\AppData\\Local\\Programs\\Python\\Python39\\Include', '/usr/local/include/openbabel3']
- library_dirs: ['C:\\Users\\Chris\\AppData\\Local\\Programs\\Python\\Python39\\libs', 'C:\\Users\\Chris\\AppData\\Local\\Programs\\Python\\Python39', 'C:\\Users\\Chris\\AppData\\Local\\Programs\\Python\\Python39\\PCbuild\\amd64', '/usr/local/lib']
building 'openbabel._openbabel' extension
swigging openbabel\openbabel-python.i to openbabel\openbabel-python_wrap.cpp
swig.exe -python -c++ -small -O -templatereduce -naturalvar -IC:\Users\Chris\AppData\Local\Programs\Python\Python39\include -IC:\Users\Chris\AppData\Local\Programs\Python\Python39\Include -I/usr/local/include/openbabel3 -o openbabel\openbabel-python_wrap.cpp openbabel\openbabel-python.i
Error: SWIG failed. Is Open Babel installed?
You may need to manually specify the location of Open Babel include and library directories. For example:
python setup.py build_ext -I/usr/local/include/openbabel3 -L/usr/local/lib
python setup.py install
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for openbabel
Failed to build openbabel
ERROR: Could not build wheels for openbabel, which is required to install pyproject.toml-based projects
</code></pre>
<p>So naturally, when I try to import <code>pybel</code> from openbabel in my script, it's not found.</p>
<p>It says that I need to "build wheels". I do not know what that means, or even what wheels are supposed to be.</p>
<p>All I saw in regards to that was that I was supposed to download some file from <a href="https://www.lfd.uci.edu/%7Egohlke/pythonlibs/#openbabel" rel="nofollow noreferrer">this</a> link according to <a href="https://github.com/openbabel/openbabel/issues/2018" rel="nofollow noreferrer">this</a> github issue.
But I am slightly out of my depth here. Which one of the 10 openbabel.whks am I supposed to get?
I just downloaded the first one since it seemed like a recent version and it said amd64 (I have a ryzen CPU in my system). I installed openbabel using the installer, ran the .whk and then tried to run <code>pip install openbabel</code>. Same nothing...</p>
<p>I have no idea how to fix this and it's been a long running issue. Openbabel would be very useful to me, were I able to utilize it in my scripts. So please, tell me how to get it working.</p>
| <python><pip><openbabel> | 2023-10-03 19:12:14 | 1 | 534 | J.Doe |
77,224,940 | 10,534,633 | How to smooth Venn diagram edges | <p>I have a Venn diagram created in Python but I don't know how to smooth all (also the internal ones) the edges.</p>
<p><a href="https://i.sstatic.net/UrcLw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UrcLw.png" alt="enter image description here" /></a></p>
<p>My code:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
from matplotlib_venn import venn3
from matplotlib.patches import PathPatch
from matplotlib.path import Path
set1 = {i for i in range(150)}
set2 = {i for i in range(150 - 75, 350 - 75)}
set3 = {i for i in range(150 - 75, 150 - 75 + 20)}
fig, ax = plt.subplots()
venn = venn3([set1, set2, set3], set_labels=('', '', ''))
circle1 = venn.get_patch_by_id('100')
circle2 = venn.get_patch_by_id('010')
circle3 = venn.get_patch_by_id('110')
circle4 = venn.get_patch_by_id('111')
circle1.set_edgecolor('none')
circle2.set_edgecolor('none')
circle3.set_edgecolor('none')
circle4.set_edgecolor('none')
path_data1 = circle1.get_path().vertices.tolist()
patch1 = PathPatch(Path(path_data1), facecolor='yellow', edgecolor='#d6d3d6', hatch='/', lw=0)
ax.add_patch(patch1)
path_data2 = circle2.get_path().vertices.tolist()
patch2 = PathPatch(Path(path_data2), facecolor='purple', edgecolor='#d6d3d6', hatch='\\', lw=0)
ax.add_patch(patch2)
path_data3 = circle3.get_path().vertices.tolist()
patch3 = PathPatch(Path(path_data3), facecolor='pink', edgecolor='#d6d3d6', hatch="\/", lw=0)
ax.add_patch(patch3)
path_data4 = circle4.get_path().vertices.tolist()
patch4 = PathPatch(Path(path_data4), facecolor='orange', edgecolor='#d6d3d6', hatch=".", lw=0)
ax.add_patch(patch4)
venn.get_label_by_id('100').set_text('')
venn.get_label_by_id('010').set_text('')
venn.get_label_by_id('110').set_text('')
venn.get_label_by_id('111').set_text('')
# Show the plot
plt.axis('off')
plt.show()
</code></pre>
<p>How to achieve that?<br />
What's the cause of the angular edges here?</p>
| <python><matplotlib><visualization><venn-diagram><matplotlib-venn> | 2023-10-03 19:11:52 | 1 | 1,230 | maciejwww |
77,224,749 | 11,203,574 | What is the difference between "mouse_event" and "SendMessage"? | <p>I have two functions that perform clicks using pywin32.</p>
<p>One that uses <code>win32api.mouse_event</code>:</p>
<pre><code>def click_1(hwnd, x, y):
win_activate(hwnd)
win32api.SetCursorPos((x, y))
time.sleep(.1)
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, 0, 0)
time.sleep(.1)
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, 0, 0)
</code></pre>
<p>And another that uses <code>win32gui.SendMessage</code>:</p>
<pre><code>def click_2(hwnd, x, y):
lparam = win32api.MAKELONG(x, y)
win32gui.SendMessage(hwnd, win32con.WM_LBUTTONDOWN, None, lparam)
time.sleep(.1)
win32gui.SendMessage(hwnd, win32con.WM_LBUTTONUP, None, lparam)
</code></pre>
<p>I've tested both functions on several applications, and they work as expected.</p>
<p>However, there is one particular application where only <strong>click_1</strong>, the function that uses <code>win32api.mouse_event</code>, works.</p>
<p>We can read from <a href="https://stackoverflow.com/a/74898459/11203574">this post</a> that</p>
<blockquote>
<p>A program that needs to process and respond to mouse input can decide to ignore mouse input messages. (And clients that are based on mouse message handling can easily identify fake input messages, too, and respond by, y'know, not responding altogether.)</p>
</blockquote>
<p>So maybe that the application just ignore the <em>click messages</em>, but as <code>win32api.mouse_event</code> works, I would tell no.</p>
<p>My guess was that by using <code>win32api.mouse_event</code>, Windows would receive the message and send a new message to the window the cursor was on by using <code>win32gui.SendMessage</code>, but obviously this is not the case.</p>
<p>My question is: how does <code>win32api.mouse_event</code> work? Is there any way to send the same message that is sent from <code>win32api.mouse_event</code> to the window we want?</p>
| <python><winapi><pywin32><win32gui> | 2023-10-03 18:36:09 | 0 | 377 | Aurelien |
77,224,603 | 2,289,030 | Is there a way to check what object a decorated classmethod binds to? | <p>I want to do some runtime type checking on a property that's decorated with <code>@classmethod</code> (making sure the classmethod decorates a property and not some other thing), but I don't see a way to do this through the "normal" ways (inspection with <code>vars()</code> or <code>dir()</code> or <code>.__dict__</code>):</p>
<pre class="lang-py prettyprint-override"><code>>>> class A:
... @classmethod
... @property
... def d(cls):
... return 'foo'
...
>>> A.d
<bound method ? of <class '__main__.A'>>
>>> A.__dict__["d"]
<classmethod object at 0x0000022ED4766130>
>>> A.__dict__["d"].__dict__
{}
</code></pre>
<p>Is there any way the check that the <code>classmethod</code> <code>A.d</code> binds a <code>property</code>?</p>
<p>I understand this might be peeking too far under the hood, and would be open to suggestions for how to accomplish this with type annotations as well!</p>
<p>This is not a straightforward application of <code>abc</code>, as this is a property whose presence depends on the presence of other properties, which I check at runtime for adherence to the data model.</p>
| <python><reflection><python-decorators> | 2023-10-03 18:07:41 | 1 | 968 | ijustlovemath |
77,224,564 | 5,256,563 | Napari doesn't display full resolution | <p>I want to visualize a large tiff (65Kx35K) files using <code>Napari 4.18</code> but Napari does not display the image at full resolution. Instead at best it display a lower and pixelized image.</p>
<p>See the attached image where I compare the display between Napari (left) and ImageJ (right). We can see that resolution is different.</p>
<p><a href="https://i.sstatic.net/6ZUa9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6ZUa9.png" alt="Resolution comparison" /></a></p>
<p>Is there are option to display images at full resolution?</p>
| <python><python-napari> | 2023-10-03 18:02:04 | 1 | 5,967 | FiReTiTi |
77,224,505 | 2,205,930 | Python3: c-type segfault from multithreaded C shared library | <p>I've looked at other StackOverflow related questions and answers and followed the information given. I am still having trouble.</p>
<p>Running Python3 on Linux which accesses a shared C library (.so). The C library will spin up threads as needed. I am having as issue with segmentation faults when accessing a C API while threads are running. The issue appears only if the C thread calls a callback function within Python. If the C thread runs without calling the callback, there are no issues. In the code below, the Python lines <code>print(GetData(128))</code> will get the string and print it to the console before causing a segfault. Again, I only get a segfault on <code>GetData</code> if the thread is running AND calls the event or callback. Other <code>C</code> APIs which return a <code>char*</code>, but take no parameters, work just fine with the thread running.</p>
<p>Python:</p>
<pre><code># Within a class, but not shown for simplicity.
EVENT = CFUNCTYPE(c_void_p, c_int)
lib = CDLL("library.so")
lib.GetData.restype = c_uint
lib.GetData.argtypes = [c_char_p, c_uint]
lib.SetEvent.argtypes = [EVENT]
def GetData(count):
buf = create_string_buffer(count)
lib.GetData(buf, count)
return buf.value.decode("utf-8")
def SetEvent(ev):
lib.SetEvent(ev)
# Code outside of class.
def Event(e):
print("Event " + str(e))
if e == 2:
print(GetData(128)) # Event works fine until I call this.
SetEvent(EVENT(Event))
cnt = 0
while cnt < 10:
print(GetData(128)) # Works if I do not call SetEvent.
cnt = cnt + 1
sleep(1)
</code></pre>
<p>C/C++ Code:</p>
<pre><code>typedef void(*Event)(int i);
Event efunc = nullptr;
static void ThreadFunc()
{
int i = 0;
while(true)
{
if (efunc != nullptr) { efunc(++i); } // Raise event.
if (i > 10) { i = 0; }
std::this_thread::sleep_for(std::chrono::seconds(1));
}
}
extern "C" unsigned int GetData(char* data, unsigned int count)
{
const char* str = "Random data for testing";
std::strcpy(data, str);
return std::strlen(str);
}
extern "C" void SetEvent(Event e)
{
efunc = e;
if (efunc != nullptr) { myThread = std::thread(ThreadFunc); }
}
</code></pre>
<p>I've put the simple versions of the code above to make the problem easier to understand. I am aware there is missing error checking and the thread runs forever. This isn't the case in the actual code, but this post would be too long if I added everything. Also, the <code>C</code> library needs to create threads, this can't change. The same <code>C</code> library works when dynamically loaded and run with <code>C++</code> and <code>C#</code>.</p>
<h2>Update</h2>
I've added the Python library to the C/C++ shared library and made the following code changes (see above for reference).
<p>C/C++:</p>
<pre><code>static void ThreadFunc()
{
int i = 0;
while(true)
{
if (efunc != nullptr)
{
PyGILState_STATE state = PyGILState_Ensure();
// Py_BEGIN_ALLOW_THREADS
efunc(++i); // Raise event.
// Py_END_ALLOW_THREADS
PyGILState_Release(state);
}
if (i > 10) { i = 0; }
std::this_thread::sleep_for(std::chrono::seconds(1));
}
}
extern "C" void SetEvent(Event e)
{
efunc = e;
if (efunc != nullptr)
{
if (!PyEval_ThreadsInitialized())
{
PyEval_InitThreads();
}
myThread = std::thread(ThreadFunc)
}
}
</code></pre>
<p>Things work until the Python callback runs <code>print(GetData(128))</code>, after this each callback to the Python code prints <code>SystemError: null argument to internal routine</code> to the console.</p>
| <python><c><multithreading><segmentation-fault><shared-libraries> | 2023-10-03 17:51:49 | 1 | 1,086 | user2205930 |
77,224,498 | 10,098,481 | Convert string to float when the exponent is also a float | <p>Converting a string to a float is easy when said string is 10E-3:</p>
<pre><code>input_str = "10E-3"
output_float = float(input_str)
</code></pre>
<p>However, this breaks when the exponent is a non-integer. For instance:</p>
<pre><code>input_str = "10E-3.5"
output_float = float(input_str)
</code></pre>
<p>returns:</p>
<pre><code>ValueError: could not convert string to float: '10E-3.5'
</code></pre>
<p>How can I submit non-integer exponents?</p>
| <python><python-3.x> | 2023-10-03 17:50:29 | 2 | 723 | Wychh |
77,224,476 | 7,563,690 | Why is my Python script is not dealing with indexes properly when I am trying to reformat the data | <pre><code>import pandas as pd
cleaned_df = pd.read_csv('filepath/test.csv',
index_col=1, parse_dates=True)
id_list = list(set(cleaned_df.loc[:, 'id'].tolist()))
df_list = [] # Create an empty list to store individual dataframes
for id in id_list:
id_df = cleaned_df[cleaned_df['id'] == id].loc[:, ['exportEnergykWh', 'importEnergykWh']]
id_df.rename(columns={'exportEnergykWh': id + '_export', 'importEnergykWh': id + '_import'}, inplace=True)
df_list.append(id_df) # Append each dataframe to the list
changed_df = pd.concat(df_list, axis=0)
</code></pre>
<p>I am using the above code to take something in the format</p>
<pre><code>timestamp, exportEnergykWh, importEnergykWh, id
2023-10-04 06:00, 0, 1.3, s14j575
2023-10-04 06:00, 1.2, 3.1, f91734
2023-10-04 06:00, 0.4, 2.1, t91u10
2023-10-04 06:30, 0, 1.4, s14j575
2023-10-04 06:30, 1.1, 2.1, f91734
2023-10-04 06:30, 0.3, 1.7, t91u10
</code></pre>
<p>into something that has many more columns</p>
<pre><code>timestamp, s14j575_export, s14j575_import, f91734_export, f91734_import, t91u10_export, t91u10_import
2023-10-04 06:00, 0, 1.3, 1.2, 3.1, 0.4, 2.1,
2023-10-04 06:30, 0, 1.4, 1.1, 2.1, 0.3, 1.7,
</code></pre>
<p>it should much fewer rows, but there is something wrong with how I am dealing with the indexes, so I get a repeated index with the same value and the columns have NaN values in most rows, rather than a few rows all with data. It has been a while since I have used Pandas, would appreciate any help</p>
| <python><pandas> | 2023-10-03 17:45:58 | 1 | 475 | Luka Vlaskalic |
77,224,410 | 22,407,544 | Why is my form calling the wrong Django view? | <p>The flow of my app is as follows: A file is submitted on the transcribe.html page first which then redirects to the transcibe-complete.html page where the user can click to beginning transcription.</p>
<p>Why is the 'transcribeSubmit' view being called instead of the 'initiate_transcription' view when the user clicks the 'click to transcribe' button on the 'initiate_transcription' page?</p>
<p>Each page has their own JS file to ensure that forms are submitted separately.</p>
<p>html:
(transcribe.html):</p>
<pre><code><form method="post" action="{% url 'transcribeSubmit' %}" enctype="multipart/form-data" >
{% csrf_token %}
<label for="transcribe-file" class="transcribe-file-label">Choose audio/video file</label>
<input id="transcribe-file" name="file" type="file" accept="audio/*, video/*" hidden>
<button class="upload" id="transcribe-submit" type="submit" >Submit</button>
</form>
</code></pre>
<p>(transcribe-complete):</p>
<pre><code><form method="post" action="{% url 'initiate_transcription' %}" enctype="multipart/form-data">
{% csrf_token %}
<label for="output-lang--select-summary"></label>
<select class="output-lang-select" id="output-lang--select-summary" name="audio_language">
<option value="detect">Detect language</option>
<option value="zh">Chinese</option>
<option value="en">English</option>
<option value="es">Spanish</option>
<option value="de">German</option>
<option value="jp">Japanese</option>
<option value="ko">Korean</option>
<option value="ru">Russian</option>
<option value="pt">Portugese</option>
</select>
</div>
<div class="transcribe-output">
<button id="transcribe-button" type="submit">Click to Transcribe</button>
</div>
</form>
</code></pre>
<p>JS:</p>
<pre><code>const form = document.querySelector('form');
form.addeventlistener('submit', function (event) {
event.preventdefault();
const formdata = new formdata(form);
const xhr = new xmlhttprequest();
xhr.open('post', form.getattribute('action'), true);
xhr.onreadystatechange = function () {
console.log("ready state: ", xhr.readystate);
console.log("status: ", xhr.status);
}
xhr.responsetype = 'blob';
xhr.send(formdata);
});
</code></pre>
<p>views.py:</p>
<pre><code>@csrf_protect
def transcribeSubmit(request):
if request.method == 'POST':
form = UploadFileForm(request.POST, request.FILES)
if form.is_valid():
uploaded_file = request.FILES['file']
fs = FileSystemStorage()
filename = fs.save(uploaded_file.name, uploaded_file)
request.session['uploaded_file_name'] = filename
request.session['uploaded_file_path'] = fs.path(filename)
#transcribe_file(uploaded_file)
#return redirect(reverse('proofreadFile'))
return render(request, 'transcribe/transcribe-complete.html', {"form": form})
else:
else:
form = UploadFileForm()
return render(request, 'transcribe/transcribe.html', {"form": form})
@csrf_protect
def initiate_transcription(request):
if request.method == 'POST':
# get the file's name and path from the session
file_name = request.session.get('uploaded_file_name')
file_path = request.session.get('uploaded_file_path')
if file_name and file_path:
with open(file_path, 'rb') as f:
transcribe_file(file_name, f)
return JsonResponse({'status': 'success'})
else:
return JsonResponse({'status': 'error', 'error': 'No file uploaded'})
return JsonResponse({'status': 'error', 'error': 'Invalid request'})
</code></pre>
<p>transcribe/urls.py:</p>
<pre><code>from django.urls import path
from . import views
urlpatterns = [
path("", views.transcribeSubmit, name = "transcribeSubmit"),
path("", views.initiate_transcription, name = "initiate_transcription"),
]
</code></pre>
<p>mysite/urls.py:</p>
<pre><code>from django.contrib import admin
from django.urls import include, path
from translator import views
urlpatterns = [
path('', include('homepage.urls')),#redirects to translator/
path('translator/', include('translator.urls')),
path('translator/translator_upload/', include('translator_upload.urls')),
path('proofreader/', include('proofreader.urls')),
path('proofreader/proofreader_upload/', include('proofreader_upload.urls')),
path('transcribe/', include('transcribe.urls')),
path('admin/', admin.site.urls),
]
</code></pre>
| <javascript><python><html><django> | 2023-10-03 17:34:37 | 1 | 359 | tthheemmaannii |
77,224,402 | 20,830,264 | Render the Django REST Framework uploading system with an HTML script | <p>I have developed a simple file-upload system using Django REST Framework using this tutorial <a href="https://www.section.io/engineering-education/how-to-upload-files-to-aws-s3-using-django-rest-framework/" rel="nofollow noreferrer">How to upload files to AWS S3 with Django</a>. Basically the system allows user to upload files to an AWS S3 bucket.
This is what the users currently see:</p>
<p><a href="https://i.sstatic.net/NbF3q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NbF3q.png" alt="Django REST Framework uploader" /></a></p>
<p>This is the following django project structure:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-css lang-css prettyprint-override"><code>DROPBOXER-MASTER
|
|--- dropboxer
|
|--- settings.py
|--- urls.py
|--- wsgi.py
|--- uploader
|
|--- views.py
|--- urls.py
|--- serializers.py
|--- models.py
|--- apps.py
|--- templates
|--- index.html
|--- manage.py
|--- requirements.txt</code></pre>
</div>
</div>
</p>
<p>This is the dropboxer/urls.py:</p>
<pre class="lang-py prettyprint-override"><code>from django.conf import settings
from django.conf.urls.static import static
from django.contrib import admin
from django.urls import path, include # new
urlpatterns = [
path('admin/', admin.site.urls),
path('api/', include('rest_framework.urls')), # new
path('', include('uploader.urls')), # new
]
if settings.DEBUG:
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
</code></pre>
<p>This is the dropboxer/settings.py:</p>
<pre class="lang-py prettyprint-override"><code>import os
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = '*****************************************'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = ["*"]
# Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
# 3rd party apps
'rest_framework',
# local apps
'uploader', # new
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'dropboxer.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [BASE_DIR, 'templates'],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
WSGI_APPLICATION = 'dropboxer.wsgi.application'
# Database
# https://docs.djangoproject.com/en/2.2/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Password validation
# https://docs.djangoproject.com/en/2.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
# Internationalization
# https://docs.djangoproject.com/en/2.2/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/2.2/howto/static-files/
STATIC_URL = '/static/'
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'static')
]
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
AWS_ACCESS_KEY_ID = '******************'
AWS_SECRET_ACCESS_KEY = '******************'
AWS_STORAGE_BUCKET_NAME = '******************'
AWS_S3_SIGNATURE_VERSION = 's3v4'
AWS_S3_REGION_NAME = '********'
AWS_S3_FILE_OVERWRITE = False
AWS_DEFAULT_ACL = None
# AWS_S3_VERIFY = True
AWS_S3_VERIFY = False
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
</code></pre>
<p>This is the uploader/urls.py:</p>
<pre class="lang-py prettyprint-override"><code>from rest_framework.routers import SimpleRouter, DefaultRouter
from .views import DropBoxViewset
router = SimpleRouter()
router.register('', DropBoxViewset)
urlpatterns = router.urls
</code></pre>
<p>This is the uploader/views.py file:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>from rest_framework import viewsets, parsers
from .models import DropBox
from .serializers import DropBoxSerializer
class DropBoxViewset(viewsets.ModelViewSet):
queryset = DropBox.objects.all()
serializer_class = DropBoxSerializer
parser_classes = [parsers.MultiPartParser, parsers.FormParser]
http_method_names = ['get', 'post', 'patch', 'delete']
from django.shortcuts import render
def upload_page(request):
# Render the bindex.html template
return render(request, "index.html")</code></pre>
</div>
</div>
</p>
<p>This is the uploader/models.py file:</p>
<pre class="lang-py prettyprint-override"><code>from django.db import models
class DropBox(models.Model):
'''A simple model for uploading files.'''
title = models.CharField(max_length=100)
document = models.FileField(max_length=100)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class Meta:
verbose_name_plural = 'Drop Boxes'
</code></pre>
<p>And finally this is the templates/indexer.html:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><form action="{% url 'dropbox-list' %}" method="post" enctype="multipart/form-data">
<input type="file" id="file" name="file" />
<button type="submit">Upload</button>
</form></code></pre>
</div>
</div>
</p>
<p>Now, when I run the project with the command <code>python manage.py runserver</code> what I see is the UI that I showed before and not the indexer.html that I expect, which should be this:</p>
<p><a href="https://i.sstatic.net/jji40.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jji40.png" alt="indexer.html" /></a></p>
<p>Do you know where is the mistake?</p>
| <python><html><django><amazon-s3><django-rest-framework> | 2023-10-03 17:33:40 | 0 | 315 | Gregory |
77,224,368 | 9,245,201 | `__init__` called twice? | <p>Here is a simplified Python code:</p>
<pre><code>
class Base:
__possible_types__ = {}
def __new__(cls, *args, **kwargs) -> type:
# pop the `__type__` argument because it
# should be passed to the class `__init__`
type_ = kwargs.pop("__type__", None)
if type_ is not None:
possible_type = cls.__possible_types__.get(type_)
if possible_type is not None:
return possible_type(*args, **kwargs)
return super().__new__(cls)
def __init__(self, *args, **kwargs) -> None:
print(f"{self.__class__.__name__}.__init__", args, kwargs)
def __init_subclass__(cls) -> None:
__possible_types__ = {}
for parent in cls.__mro__[1:]:
if issubclass(parent, Base):
parent.__possible_types__[cls.__name__] = cls
class Parent(Base):
pass
class Child(Parent):
pass
class Adult(Parent):
pass
should_be_child = Parent(
name="bob",
__type__="Child"
)
print(should_be_child)
</code></pre>
<p>and here is the output:</p>
<pre><code>Child.__init__ () {'name': 'bob'}
Child.__init__ () {'name': 'bob', '__type__': 'Child'}
<__main__.Child object at 0x...>
</code></pre>
<p>A few questions</p>
<ul>
<li>Why is <code>__init__</code> called twice?</li>
<li>How is the popped dict item (<code>__type__</code>) back in the second <code>__init__</code> call?</li>
<li>How do I solve this?</li>
</ul>
| <python> | 2023-10-03 17:27:12 | 3 | 720 | dsal3389 |
77,224,363 | 1,753,640 | Langchain streaming output using a Flask app | <p>I have managed to stream the output successfully to the console but i'm struggling to get it to display in a webpage.</p>
<p>This is the code to invoke RetrievalQA and get a response:</p>
<pre><code>
def get_openai_answer(query, chroma_folder, search_size):
handler = StreamingStdOutCallbackHandler()
embeddings = OpenAIEmbeddings(
model = 'text-embedding-ada-002',
openai_api_key=OPENAI_API_KEY
)
db = Chroma(
persist_directory=f"./chroma_db/{chroma_folder}",
embedding_function=embeddings
)
retriever = db.as_retriever(
search_type="similarity",
search_kwargs={"k":search_size}
)
llm = ChatOpenAI(
model_name="gpt-4",
temperature=0,
openai_api_key=OPENAI_API_KEY,
streaming=True,
callbacks=[handler]
)
# create a chain to answer questions
qa = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True,
verbose=False
)
for chunk in qa({"query": query}):
yield chunk
@app.route('/<int:link_id>', methods=('GET', 'POST'))
def control_response_qa(link_id):
if request.method == 'POST':
query = request.form['query']
print(query)
if not query:
flash('Enter Some Text')
else:
conn = get_db_connection()
data = conn.execute(
'SELECT * FROM summaries WHERE id = ?',
(link_id,)).fetchone()
conn.close()
chroma_file = str(data['title'])+'_db'
print(chroma_file)
return Response(get_openai_answer(query, chroma_file, 4), mimetype='text/event-stream')
else:
return Response(None, mimetype='text/event-stream')
</code></pre>
<p>And this is the HTML extract:</p>
<pre><code> <form method="post" id="input-form">
<input type="hidden" name="submit_value" value="qa_doc" />
<textarea class="form-control" id="query" name="query" rows="4" value="{{ request.form['query'] }}" placeholder="Ask Questions On This Document"></textarea>
<br />
<button type="submit" id="link_id" name="link_id" value="{{ summary['id'] }}" class="btn btn-primary float-right" >Submit</button>
</form>
<br><br><br>
<div class="card">
<div class="card-body">
<div class="image-cropper">
<nav>
<div id="result">
</div>
</nav>
</div>
</div>
</div>
<br />
<br />
</div>
</div>
<script>
var searchForm = document.getElementById('input-form');
searchForm.addEventListener('submit', async function(event) {
event.preventDefault();
var formData = new FormData(searchForm);
formData.append('query', searchForm.elements.query.value);
try {
const response = await fetch('/'+ new URLSearchParams({
searchForm.elements.link_id.value}),
{
method: 'POST',
body: formData
});
const reader = response.body.getReader();
document.getElementById("result").innerHTML = "";
while (true) {
const {done, value} = await reader.read();
if (done) break;
const text = new TextDecoder().decode(value);
document.getElementById("result").innerHTML += text;
}
} catch (error) {
console.error(error);
}
});
</script>
{% endblock %}
</code></pre>
<p>The output is just a blank page with this</p>
<blockquote>
<p>queryresultsource_documents</p>
</blockquote>
<p>The output streams correctly in the console, am I using the corrct callback handler? any help would be great</p>
| <python><flask><openai-api><langchain> | 2023-10-03 17:26:25 | 1 | 385 | user1753640 |
77,224,177 | 3,826,115 | How to specify map extent in GeoViews with map tiles in the background | <p>If I have a GeoDataFrame of point geometries, I can plot it up with GeoViews and specify the longitude extent of the display like this:</p>
<pre><code>import geopandas as gpd
import geoviews as gv
import numpy as np
gv.extension('bokeh')
gdf = gpd.GeoDataFrame(geometry = gpd.points_from_xy(np.linspace(-88, -82, 100), np.linspace(32,38, 100)), crs = 'WGS84')
points = gv.Points(data=gdf)
points = points.opts(xlim = (-86, -84))
points
</code></pre>
<p><a href="https://i.sstatic.net/WTGKF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WTGKF.png" alt="enter image description here" /></a></p>
<p>However, if I try to add some background map tiles using <a href="https://geoviews.org/gallery/matplotlib/tile_sources.html" rel="nofollow noreferrer"><code>gv.tile_sources</code></a>, the exent gets messed up:</p>
<pre><code>points_map = (gv.tile_sources.OSM*points)
points_map = points_map.opts(xlim = (-86, -84))
points_map
</code></pre>
<p><a href="https://i.sstatic.net/Ylyix.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ylyix.png" alt="enter image description here" /></a></p>
<p>I think it probably has something to do with projections...the incorrect map above is centered around 0 degrees Longitude, so I think the map extent is being set to -86 to -84 <em>meters</em> or something like that. But I can't quite figure out how to properly set the extent.</p>
<p>How can I plot a <code>gv.Points()</code> display with a mapped tile underneath, and set the exent of the map display using decimal degree coordinates?</p>
| <python><gis><holoviews><geoviews> | 2023-10-03 16:53:29 | 0 | 1,533 | hm8 |
77,223,972 | 6,347,469 | Flask App Accessible on Localhost but Not on Private IP Address | <p>I'm making a Flask app and trying to make it accessible on my local network to test it on my phone. I run the app with the following settings to make it externally visible on my local network:</p>
<pre><code>if __name__ == '__main__':
app.run(host="0.0.0.0", port=5000, debug=True, threaded=False)
</code></pre>
<p>When I do this the console outputs what you would expect:</p>
<pre><code> * Serving Flask app 'app'
* Debug mode: on
* Running on all addresses (0.0.0.0)
* Running on http://[localhost]:5000
* Running on http://[private_IP]:5000
</code></pre>
<p>However, the app is only accessible via localhost and not via the private IP, even on the same machine. Here's what I have checked and tried so far:</p>
<ul>
<li>The firewall (<a href="https://www.henrypp.org/product/simplewall" rel="nofollow noreferrer">simplewall</a>) allows connections on port 5000</li>
<li>I tried deactivating simplewall too, and deactivating Windows Defender Firewall as well, no luck</li>
<li>The IP address is correct, as listed with ipconfig</li>
<li>No other service is using port 5000</li>
<li>I attempted to connect to the app using both my home WiFi and my phone's mobile hotspot, encountering the same issue in both scenarios to rule out router-specific problems</li>
<li>My hosts file doesn't seem to be causing any issues</li>
</ul>
<p>In case this is relevant, I am running the Flask app through VS Code on Windows 11.</p>
| <python><windows><flask><network-programming><firewall> | 2023-10-03 16:23:03 | 0 | 493 | Paul Miller |
77,223,969 | 1,056,563 | Spark/pyspark on same version but "py4j.Py4JException: Constructor org.apache.spark.api.python.PythonFunction does not exist" | <p>I have a properly sync'ed <code>pyspark</code> client / <code>spark</code> installation: both versions are 3.3.1 [ shown below]. The full exception message is:</p>
<blockquote>
<p>py4j.Py4JException: Constructor org.apache.spark.api.python.PythonFunction([class [B, class java.util.HashMap, class java.util.ArrayList, class java.lang.String, class java.lang.String, class java.util.ArrayList, class org.apache.spark.api.python.PythonAccumulatorV2]) does not exist</p>
</blockquote>
<p>This has been identified in <a href="https://stackoverflow.com/questions/72873963/py4jexception-constructor-org-apache-spark-sql-sparksessionclass-org-apache-s">another SOF post</a> as most likely due to versioning mismatch between the <code>pyspark</code> invoker/caller and the <code>spark</code> backend. I agree that would seem the likely cause: but then I have verified carefully that both sides of the equation are equal:</p>
<p><code>pyspark</code> and <code>spark</code> are same versions:</p>
<pre><code>Python 3.10.13 (main, Aug 24 2023, 22:48:59) [Clang 14.0.3 (clang-1403.0.22.14.1)]
In [1]: import pyspark
In [2]: print(f"PySpark version: {pyspark.__version__}")
PySpark version: 3.3.1
</code></pre>
<p><code>Spark</code> was installed by downloading the version 3.3.1 .tgz directly from the <code>apache</code> site and unzip/tar-ring. The <code>SPARK_HOME</code> was pointed to that directory and the <code>$SPARK_HOME/bin</code> added to the path.</p>
<pre><code>$spark-shell --version
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 3.3.1
/_/
</code></pre>
<p>Inside the python script the version has been verified as well:</p>
<blockquote>
<p>pyspark version: 3.3.1</p>
</blockquote>
<p>But the script blows up with a pyspark / spark error</p>
<blockquote>
<p>An error occurred while calling <a href="http://None.org" rel="nofollow noreferrer">None.org</a>.apache.spark.api.python.PythonFunction</p>
</blockquote>
<blockquote>
<p>py4j.Py4JException: Constructor org.apache.spark.api.python.PythonFunction([class [B, class java.util.HashMap, class java.util.ArrayList, class java.lang.String, class java.lang.String, class java.util.ArrayList, class org.apache.spark.api.python.PythonAccumulatorV2]) does not exist
at py4j.reflection.ReflectionEngine.getConstructor(ReflectionEngine.java:180)</p>
</blockquote>
<p>So .. what else might be going on here? Is there some way I'm not seeing in which the versions of spark/pyspark might be out of sync?</p>
| <python><apache-spark><pyspark><pycharm> | 2023-10-03 16:22:41 | 1 | 63,891 | WestCoastProjects |
77,223,721 | 11,167,163 | Is it possible to split HTML view and Python back-end across different servers (IIS) in Django? | <p>I have a <code>Django</code> application where I want to separate the <code>HTML</code> views and the <code>Python</code> back-end logic into two different servers for architectural and security reasons. Specifically, I want one server to handle all the <code>HTML</code> rendering and another server to handle the Python back-end logic like database queries, API calls, etc.</p>
<p>Is it possible to do this while still leveraging Django's capabilities? If so, how can I achieve this split?
If it is not doable, may you explain why ?</p>
<p>Thank you in advance for your help.</p>
| <python><django><iis> | 2023-10-03 15:44:24 | 1 | 4,464 | TourEiffel |
77,223,620 | 10,755,032 | Python requests and Graphql API: How to use variables? | <p>I am working on creating users on my wiki.js using graphql API and python's request package. I am able to create a user normally(hard coded) but I want to use variables. Here is the code I tried:</p>
<pre><code># we have imported the requests module
import requests
# defined a URL variable that we will be
# using to send GET or POST requests to the API
url = "https://wiki.something.com/graphql/"
email = "uchiha.sasuke@example.com"
name = "Uchiha Sasuke"
body = """
mutation {
users {
create (
email: "{email}"
name: "{name}"
passwordRaw: "pass123$"
providerKey: "local"
groups: [1]
mustChangePassword: true
sendWelcomeEmail: false
) {
responseResult {
succeeded
slug
message
}
user {
id
}
}
}
}
"""
response = requests.post(url=url, json={"query": body}, headers={"Authorization": "Bearer ..."})
print("response status code: ", response.status_code)
if response.status_code == 200:
print("response : ", response.content)
</code></pre>
<p>Above code is not working. How should I deal with this? I have taken a look at other questions regarding api and I tried them but it was of no use.</p>
| <python><graphql> | 2023-10-03 15:29:11 | 0 | 1,753 | Karthik Bhandary |
77,223,538 | 226,342 | Can I make my dash app return static responses using curl | <p>I've defined a multi page Dash app:</p>
<p>app.py:</p>
<pre><code>import dash
import dash_bootstrap_components as dbc
from dash import html
app = dash.Dash(__name__, use_pages=True, suppress_callback_exceptions=True)
...
app.layout = dbc.Container(
[navbar, nav, dash.page_container]
)
if __name__ == "__main__":
app.run_server(debug=True, host='0.0.0.0')
</code></pre>
<p>page.py:</p>
<pre><code>import dash
import json
import yaml
from dash import html, dcc, Input, Output, State, ALL, callback
from dash.exceptions import PreventUpdate
import dash_bootstrap_components as dbc
from filelock import FileLock
dash.register_page(__name__, order=1, title='ποΈ My page')
...
def layout(**other_unknown_query_strings):
....
</code></pre>
<p>The page handles some resources which the user can reserve or release. However, except the dash GUI in the web browser I would like to be able to reserve and unreserve via <code>curl</code> and return a static response such as {"result": "ok"}.</p>
<p>Is this possible?</p>
| <python><curl><plotly-dash> | 2023-10-03 15:15:27 | 1 | 2,005 | Henrik |
77,222,971 | 3,092,490 | Extracting specific JSON property into CSV using Python | <p>I have a JSON which is in nested form. I would like to extract billingcode and rate from json and put into csv using pandas python.</p>
<pre><code>{
"TransactionId": "1c9cc4b9-a0e1-4228-b382-b244a7674593",
"Number": "1022",
"Version": 534,
"StartDateTime": "2023-04-01 14:12:50.999",
"EndDateTime": "2023-04-01 14:15:32.038",
"LastUpdatedOn": "2023-04-01",
"RateSchedules": [{
"BillingCodeType": "CPT",
"BillingCodeTypeVersion": "2023",
"BillingCode": "0107T",
"Rates": [{
"Rate": 14.11,
"PlaceOfServices": ["01", "02", "03", "04", "05", "06", "07", "08", "09", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "31", "32", "33", "34", "41", "42", "49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "60", "61", "62", "65", "71", "72", "81", "99"]
}
]
}, {
"BillingCodeType": "CPT",
"BillingCodeTypeVersion": "2023",
"BillingCode": "0108T",
"Rates": [{
"Rate": 14.11,
"PlaceOfServices": ["01", "02", "03", "04", "05", "06", "07", "08", "09", "10", "11", "12", "13", "14", "15", "16", "17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "31", "32", "33", "34", "41", "42", "49", "50", "51", "52", "53", "54", "55", "56", "57", "58", "60", "61", "62", "65", "71", "72", "81", "99"]
}
]
}
</code></pre>
<p>}</p>
<p>Above is my JSON and here is my python script. I am running it from Spyder (Python 3.11) from Anaconda that has Pandas</p>
<pre><code>import pandas as pd
df = pd.read_json(r"C:\Users\Jay.Ramachandra\Documents\20230401-
141250_1022_RI_MC01.json")
df = df.loc[["BillingCode", ["Rate"], "Rates"],"RateSchedules"].T
df.to_csv(r"C:\Users\Jay.Ramachandra\Documents\MrfJson.csv")
</code></pre>
<p>I am referencing this post here to write my python <a href="https://stackoverflow.com/questions/48568649/convert-json-to-csv-using-python">Convert Json to CSV using Python</a></p>
<p>I ran into this and this is now corrected using <code>r"</code></p>
<pre><code>runfile('C:/Users/.spyder-py3/temp.py', wdir='C:/Users/.spyder-py3')
File <unknown>:9
df = pd.read_json("C:\Users\MC01.json")
^
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape
</code></pre>
<p>What am i doing wrong? how to correct this?</p>
<p><strong>Addendum</strong></p>
<p>I edited my original question to add <code>r"</code> in my code and now run into</p>
<pre><code>TypeError: unhashable type: 'list'
</code></pre>
| <python><json><pandas> | 2023-10-03 14:00:33 | 2 | 381 | user176047 |
77,222,906 | 7,757,135 | How can swap columns into rows and maintain its values using Pandas? | <p>I have a problem when working with a tabular data series in Pandas. In particular, I have a DataFrame with a structure like the following:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>date</th>
<th>H01</th>
<th>H02</th>
<th>...</th>
<th>H24</th>
<th>Variable</th>
<th>station_code</th>
</tr>
</thead>
<tbody>
<tr>
<td>D_1</td>
<td>Value_101</td>
<td>Value_102</td>
<td>...</td>
<td>Value_124</td>
<td>Var_1</td>
<td>Code_1</td>
</tr>
<tr>
<td>D_1</td>
<td>Value_201</td>
<td>Value_202</td>
<td>...</td>
<td>Value_224</td>
<td>Var_2</td>
<td>Code_1</td>
</tr>
<tr>
<td>D_2</td>
<td>Value_301</td>
<td>Value_302</td>
<td>...</td>
<td>Value_324</td>
<td>Var_1</td>
<td>Code_2</td>
</tr>
<tr>
<td>D_2</td>
<td>Value_401</td>
<td>Value_402</td>
<td>...</td>
<td>Value_424</td>
<td>Var_2</td>
<td>Code_2</td>
</tr>
</tbody>
</table>
</div>
<p>Note: Date is format like: 2022-09-01 (YYYY-MM-DD) and H01, H02, ..., H24 represents the 24-hours in a day.</p>
<p>And I would like to be able to convert it into something like:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>datetime</th>
<th>Var_1</th>
<th>Var_2</th>
<th>...</th>
<th>Var_N</th>
<th>station_code</th>
</tr>
</thead>
<tbody>
<tr>
<td>D_1 + H01</td>
<td>Value_101</td>
<td>Value_201</td>
<td>...</td>
<td>Value_N01</td>
<td>Code_1</td>
</tr>
<tr>
<td>D_1 + H02</td>
<td>Value_102</td>
<td>Value_202</td>
<td>...</td>
<td>Value_N02</td>
<td>Code_1</td>
</tr>
<tr>
<td>D_1 + H03</td>
<td>Value_103</td>
<td>Value_203</td>
<td>...</td>
<td>Value_N03</td>
<td>Code_1</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>D_1 + H24</td>
<td>Value_124</td>
<td>Value_224</td>
<td>...</td>
<td>Value_N24</td>
<td>Code_1</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>D_2 + H01</td>
<td>Value_301</td>
<td>Value_401</td>
<td>...</td>
<td>Value_N01</td>
<td>Code_2</td>
</tr>
<tr>
<td>D_2 + H02</td>
<td>Value_302</td>
<td>Value_402</td>
<td>...</td>
<td>Value_N02</td>
<td>Code_2</td>
</tr>
<tr>
<td>D_2 + H03</td>
<td>Value_303</td>
<td>Value_403</td>
<td>...</td>
<td>Value_N03</td>
<td>Code_2</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>D_2 + H24</td>
<td>Value_324</td>
<td>Value_424</td>
<td>...</td>
<td>Value_N24</td>
<td>Code_2</td>
</tr>
</tbody>
</table>
</div>
<p>Note that can be multiple timestamp repeated (can exist one timestamp per station_code). That is, I would have to combine the column 'datetime' + all the columns that have 'H01', 'H02', ..., 'H24' in an index, and the rest of the remaining columns should be the values (in particular, var column should take the column header of each value).</p>
<p>Do you know how to achieve that?</p>
<p>Thank you very much in advance.</p>
<p>Best regards.</p>
| <python><pandas><dataframe><pivot> | 2023-10-03 13:52:24 | 1 | 1,760 | JuMoGar |
77,222,802 | 1,062,139 | Python static typing: How to tell the static type checker that my function has checked the type? | <p>I have the following code:</p>
<pre><code>ExprType: TypeAlias = int | str | datetime
def check_type(value: ExprType, expected_type: type[ExprType]):
if not isinstance(value, expected_type):
raise TypeError(f"Expected {expected_type}, but got {type(result)}.")
value: ExprType = 0
# ... more processing ...
if m := re.match("(\d+)", text_var):
check_type(value, int)
value += int(m.group(1))
</code></pre>
<p>Now, in the last line I'm positively sure that the old value of <code>value</code> is an integer, because it was checked by the <code>check_type</code> function. However, VSCode (Pylance) is not so sure about it and warns me that:</p>
<pre><code>Operator + is not supported for types ExprType and int.
</code></pre>
<p>To provide an analogy:</p>
<pre><code>my_var: int | None = 5 # type(my_var) = int | None
if my_var: # type(my_var) = int | None
print("I'm not None") # type(my_var) = int
else:
print("I'm None") # type(my_var) = NoneType
</code></pre>
<p>In this case, Pylance uderstands that the type of <code>my_var</code> is <code>int | None</code> on the <code>if</code> line, but it changes its type <em>inside</em> the <code>if</code> command. What I want to do is to instruct Pylance that my function <code>check_type</code> works as a type checker, too.</p>
| <python><python-typing> | 2023-10-03 13:39:35 | 3 | 2,118 | peter.slizik |
77,222,723 | 2,562,058 | How to use np.insert in iterations? | <p>I want to insert an array to some specific positions of another array.
For example, say that you have</p>
<pre><code>import numpy as np
pos = np.array([1, 3, 4])
arr = np.array([0, 1, 2, 3, 4, 5])
vals = np.array([-1,-1])
</code></pre>
<p>I want that <code>vals</code> is inserted to <code>arr</code> in the positions specified by <code>pos</code> so that at the end I have <code>result = [0,-1,-1,1,2, -1,-1, 3,-1,-1,4,5]</code>.</p>
<p>That is, <code>[result[i], result[i+1]]</code> should be equal to <code>val</code> for all <code>i</code> in <code>pos</code>.</p>
<p>I tried the following loop:</p>
<pre><code>for ii in pos:
arr = np.insert(arr, ii, vals)
</code></pre>
<p>but it does not work because the size of <code>arr</code> is increasing at every iteration.</p>
| <python><python-3.x><numpy> | 2023-10-03 13:31:08 | 2 | 1,866 | Barzi2001 |
77,222,716 | 17,082,611 | model.load_weights() does not load the weights I stored previously using model.save_weights() | <p>I aim to save and then load the weights of my model using <a href="https://www.tensorflow.org/tutorials/keras/save_and_load?hl=en#manually_save_weights" rel="nofollow noreferrer"><code>save_weights</code> and <code>load_weights</code> functions</a>.</p>
<p>In order to show you a minimal reproducible example, these are the dependencies you can use in my whole example:</p>
<pre><code>import numpy as np
import tensorflow as tf
from keras.initializers import he_uniform
from keras.layers import Conv2DTranspose, BatchNormalization, Reshape, Dense, Conv2D, Flatten
from keras.optimizers.legacy import Adam
from keras.src.datasets import mnist
from skimage.transform import resize
from sklearn.base import BaseEstimator
from tensorflow import keras
</code></pre>
<p>This is my model, a (variational) autoencoder:</p>
<pre><code>class VAE(keras.Model, BaseEstimator):
def __init__(self, encoder, decoder, epochs=None, l_rate=None, batch_size=None, patience=None, **kwargs):
super().__init__(**kwargs)
self.encoder = encoder
self.decoder = decoder
self.epochs = epochs
self.l_rate = l_rate
self.batch_size = batch_size
self.patience = patience
self.total_loss_tracker = keras.metrics.Mean(name="total_loss")
self.reconstruction_loss_tracker = keras.metrics.Mean(name="reconstruction_loss")
self.kl_loss_tracker = keras.metrics.Mean(name="kl_loss")
def call(self, inputs, training=None, mask=None):
_, _, z = self.encoder(inputs)
outputs = self.decoder(z)
return outputs
@property
def metrics(self):
return [
self.total_loss_tracker,
self.reconstruction_loss_tracker,
self.kl_loss_tracker,
]
def train_step(self, data):
data, labels = data
with tf.GradientTape() as tape:
# Forward pass
z_mean, z_log_var, z = self.encoder(data)
reconstruction = self.decoder(z)
# Compute losses
reconstruction_loss = tf.reduce_mean(
tf.reduce_sum(
keras.losses.binary_crossentropy(data, reconstruction), axis=(1, 2)
)
)
kl_loss = -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))
kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))
total_loss = reconstruction_loss + kl_loss
# Compute gradient
grads = tape.gradient(total_loss, self.trainable_weights)
# Update weights
self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
# Update metrics
self.total_loss_tracker.update_state(total_loss)
self.reconstruction_loss_tracker.update_state(reconstruction_loss)
self.kl_loss_tracker.update_state(kl_loss)
return {
"loss": self.total_loss_tracker.result(),
"reconstruction_loss": self.reconstruction_loss_tracker.result(),
"kl_loss": self.kl_loss_tracker.result(),
}
def test_step(self, data):
data, labels = data
# Forward pass
z_mean, z_log_var, z = self.encoder(data)
reconstruction = self.decoder(z)
# Compute losses
reconstruction_loss = tf.reduce_mean(
tf.reduce_sum(
keras.losses.binary_crossentropy(data, reconstruction), axis=(1, 2)
)
)
kl_loss = -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))
kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))
total_loss = reconstruction_loss + kl_loss
# Update metrics
self.total_loss_tracker.update_state(total_loss)
self.reconstruction_loss_tracker.update_state(reconstruction_loss)
self.kl_loss_tracker.update_state(kl_loss)
return {
"loss": self.total_loss_tracker.result(),
"reconstruction_loss": self.reconstruction_loss_tracker.result(),
"kl_loss": self.kl_loss_tracker.result(),
}
</code></pre>
<p>This is the Encoder:</p>
<pre><code>@keras.saving.register_keras_serializable()
class Encoder(keras.layers.Layer):
def __init__(self, latent_dimension):
super(Encoder, self).__init__()
self.latent_dim = latent_dimension
seed = 42
self.conv1 = Conv2D(filters=64, kernel_size=3, activation="relu", strides=2, padding="same",
kernel_initializer=he_uniform(seed))
self.bn1 = BatchNormalization()
self.conv2 = Conv2D(filters=128, kernel_size=3, activation="relu", strides=2, padding="same",
kernel_initializer=he_uniform(seed))
self.bn2 = BatchNormalization()
self.conv3 = Conv2D(filters=256, kernel_size=3, activation="relu", strides=2, padding="same",
kernel_initializer=he_uniform(seed))
self.bn3 = BatchNormalization()
self.flatten = Flatten()
self.dense = Dense(units=100, activation="relu")
self.z_mean = Dense(latent_dimension, name="z_mean")
self.z_log_var = Dense(latent_dimension, name="z_log_var")
self.sampling = sample
def call(self, inputs, training=None, mask=None):
x = self.conv1(inputs)
x = self.bn1(x)
x = self.conv2(x)
x = self.bn2(x)
x = self.conv3(x)
x = self.bn3(x)
x = self.flatten(x)
x = self.dense(x)
z_mean = self.z_mean(x)
z_log_var = self.z_log_var(x)
z = self.sampling(z_mean, z_log_var)
return z_mean, z_log_var, z
</code></pre>
<p>Where <code>sample</code> function is defined below:</p>
<pre><code>def sample(z_mean, z_log_var):
batch = tf.shape(z_mean)[0]
dim = tf.shape(z_mean)[1]
epsilon = tf.random.normal(shape=(batch, dim))
stddev = tf.exp(0.5 * z_log_var)
return z_mean + stddev * epsilon
</code></pre>
<p>And finally this is the Decoder:</p>
<pre><code>@keras.saving.register_keras_serializable()
class Decoder(keras.layers.Layer):
def __init__(self):
super(Decoder, self).__init__()
self.dense1 = Dense(units=4096, activation="relu")
self.bn1 = BatchNormalization()
self.dense2 = Dense(units=1024, activation="relu")
self.bn2 = BatchNormalization()
self.dense3 = Dense(units=4096, activation="relu")
self.bn3 = BatchNormalization()
seed = 42
self.reshape = Reshape((4, 4, 256))
self.deconv1 = Conv2DTranspose(filters=256, kernel_size=3, activation="relu", strides=2, padding="same",
kernel_initializer=he_uniform(seed))
self.bn4 = BatchNormalization()
self.deconv2 = Conv2DTranspose(filters=128, kernel_size=3, activation="relu", strides=1, padding="same",
kernel_initializer=he_uniform(seed))
self.bn5 = BatchNormalization()
self.deconv3 = Conv2DTranspose(filters=128, kernel_size=3, activation="relu", strides=2, padding="valid",
kernel_initializer=he_uniform(seed))
self.bn6 = BatchNormalization()
self.deconv4 = Conv2DTranspose(filters=64, kernel_size=3, activation="relu", strides=1, padding="valid",
kernel_initializer=he_uniform(seed))
self.bn7 = BatchNormalization()
self.deconv5 = Conv2DTranspose(filters=64, kernel_size=3, activation="relu", strides=2, padding="valid",
kernel_initializer=he_uniform(seed))
self.bn8 = BatchNormalization()
self.deconv6 = Conv2DTranspose(filters=1, kernel_size=2, activation="sigmoid", padding="valid",
kernel_initializer=he_uniform(seed))
def call(self, inputs, training=None, mask=None):
x = self.dense1(inputs)
x = self.bn1(x)
x = self.dense2(x)
x = self.bn2(x)
x = self.dense3(x)
x = self.bn3(x)
x = self.reshape(x)
x = self.deconv1(x)
x = self.bn4(x)
x = self.deconv2(x)
x = self.bn5(x)
x = self.deconv3(x)
x = self.bn6(x)
x = self.deconv4(x)
x = self.bn7(x)
x = self.deconv5(x)
x = self.bn8(x)
decoder_outputs = self.deconv6(x)
return decoder_outputs
</code></pre>
<p>This is the <code>main</code> code:</p>
<pre><code>def normalize(x):
return (x - np.min(x)) / (np.max(x) - np.min(x))
def create_vae():
latent_dimension = 25
best_epochs = 2500
best_l_rate = 10 ** -5
best_batch_size = 32
best_patience = 30
encoder = Encoder(latent_dimension)
decoder = Decoder()
vae = VAE(encoder, decoder, best_epochs, best_l_rate, best_batch_size, best_patience)
vae.compile(Adam(best_l_rate))
return vae
if __name__ == '__main__':
(x_train, y_train), (x_test, y_test) = mnist.load_data()
new_shape = (40, 40) # VAE deals with (None, 40, 40, 1) tensors
x_train = np.array([resize(img, new_shape) for img in x_train])
x_test = np.array([resize(img, new_shape) for img in x_test])
x_train = np.expand_dims(x_train, axis=-1).astype("float32")
x_test = np.expand_dims(x_test, axis=-1).astype("float32")
x_train = normalize(x_train)
x_test = normalize(x_test)
# Let's consider the first 100 items only for speed purposes
x_train = x_train[:100]
y_train = y_train[:100]
x_test = x_test[:100]
y_test = y_test[:100]
model = create_vae()
model.fit(x_train, y_train, batch_size=64, epochs=10)
weights_before_load = model.get_weights()
model.save_weights("test-checkpoints/my-vae")
del model
model = create_vae()
model.load_weights("test-checkpoints/my-vae")
weights_after_load = model.get_weights()
for layer_num, (w_before, w_after) in enumerate(zip(weights_before_load, weights_after_load), start=1):
print(f"Layer {layer_num}:")
print(f"Same weights? {w_before.all() == w_after.all()}")
</code></pre>
<p>And this is the output:</p>
<pre><code>Layer 1:
Same weights? True
Layer 2:
Same weights? False # WHY FALSE HERE?
Layer 3:
Same weights? True
Layer 4:
Same weights? True
Layer 5:
Same weights? True
Layer 6:
Same weights? True
</code></pre>
<p>But I expected the weights to be the same before and after the load! Why aren't the weights in the Layer 2 the same after I loaded them using <code>load_weights</code> method?</p>
<p>Furthermore this is a list of warnings I get:</p>
<pre><code>WARNING:tensorflow:Detecting that an object or model or tf.train.Checkpoint is being deleted with unrestored values. See the following logs for the specific values in question. To silence these warnings, use `status.expect_partial()`. See https://www.tensorflow.org/api_docs/python/tf/train/Checkpoint#restorefor details about the status object returned by the restore function.
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer.iter
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer.beta_1
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer.beta_2
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer.decay
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer.learning_rate
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).encoder.conv1.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).encoder.conv1.bias
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).encoder.bn1.gamma
...
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).encoder.conv3.bias
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).encoder.bn3.gamma
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).encoder.bn3.beta
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).encoder.bn3.moving_mean
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).encoder.bn3.moving_variance
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).encoder.dense.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).encoder.dense.bias
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).encoder.z_mean.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).encoder.z_mean.bias
...
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn2.moving_variance
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.dense3.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.dense3.bias
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn3.gamma
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn3.beta
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn3.moving_mean
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn3.moving_variance
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.deconv1.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.deconv1.bias
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn4.gamma
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn4.beta
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn4.moving_mean
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn4.moving_variance
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.deconv2.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.deconv2.bias
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn5.gamma
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn5.beta
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn5.moving_mean
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn5.moving_variance
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.deconv3.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.deconv3.bias
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn6.gamma
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn6.beta
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn6.moving_mean
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn6.moving_variance
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.deconv4.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.deconv4.bias
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn7.gamma
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn7.beta
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn7.moving_mean
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn7.moving_variance
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.deconv5.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.deconv5.bias
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn8.gamma
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn8.beta
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn8.moving_mean
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.bn8.moving_variance
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.deconv6.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).decoder.deconv6.bias
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).encoder.conv1.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).encoder.conv1.bias
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).encoder.bn1.gamma
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).encoder.bn1.beta
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).encoder.conv2.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).encoder.conv2.bias
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).encoder.bn2.gamma
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).encoder.bn2.beta
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).encoder.conv3.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).encoder.conv3.bias
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).encoder.bn3.gamma
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).encoder.bn3.beta
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).encoder.dense.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).encoder.dense.bias
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).encoder.z_mean.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).encoder.z_mean.bias
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).encoder.z_log_var.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).encoder.z_log_var.bias
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).decoder.dense1.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).decoder.dense1.bias
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).decoder.bn1.gamma
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).decoder.bn1.beta
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).decoder.dense2.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).decoder.dense2.bias
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).decoder.bn2.gamma
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).decoder.bn2.beta
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).decoder.dense3.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).decoder.dense3.bias
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).decoder.bn3.gamma
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).decoder.bn3.beta
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).decoder.deconv1.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).decoder.deconv1.bias
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).decoder.bn4.gamma
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).decoder.bn4.beta
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).decoder.deconv2.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'm' for (root).decoder.deconv2.bias
...
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'v' for (root).decoder.deconv6.kernel
WARNING:tensorflow:Value in checkpoint could not be found in the restored object: (root).optimizer's state 'v' for (root).decoder.deconv6.bias
Process finished with exit code 0
</code></pre>
<p>How do I fix the issue?</p>
<p>Note that I truncated the warnings output because of their size.</p>
<p>This is weird since if I re-run my example on another model as you can see here:</p>
<pre><code>def create_model():
model = tf.keras.Sequential([
keras.layers.Dense(512, activation='relu', input_shape=(784,)),
keras.layers.Dropout(0.2),
keras.layers.Dense(10)
])
model.compile(optimizer=Adam(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
return model
if __name__ == '__main__':
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
y_train = y_train[:100]
y_test = y_test[:100]
x_train = x_train[:100].reshape(-1, 28 * 28) / 255.0
x_test = x_test[:100].reshape(-1, 28 * 28) / 255.0
model = create_model()
model.fit(x_train, y_train, batch_size=32, epochs=10)
model.save_weights("test-checkpoints/my-model")
weights_before = model.get_weights()
del model
model = create_model()
model.load_weights("test-checkpoints/my-model")
weights_after = model.get_weights()
for layer_num, (w_before, w_after) in enumerate(zip(weights_before, weights_after), start=1):
print(f"Layer {layer_num}:")
print(f"Same weights? {w_before.all() == w_after.all()}")
</code></pre>
<p>then you can notice that <code>model.save_weights()</code> and <code>model.load_weights()</code> worked properly, since weights are all the same:</p>
<pre><code>Layer 1:
Same weights? True
Layer 2:
Same weights? True
Layer 3:
Same weights? True
Layer 4:
Same weights? True
</code></pre>
| <python><tensorflow><machine-learning><keras> | 2023-10-03 13:30:11 | 1 | 481 | tail |
77,222,538 | 2,050,158 | Need to downgrade to Werkzeug==2.3.7 from Werkzeug==3.0.0 to avoid werkzeug/http.py TypeError: cannot use a string pattern on a bytes-like object | <p>I have a docker image flask web application which is generating the error below after rebuilding the docker image and deploying the docker container from it.
Before today, the containers from the images built using the docker file and requirements.txt file as this image ran without any errors.
I suspect there may have been a recent python module update in the image I built today that is causing the error below.</p>
<p>The Python version is Python 3.12.0</p>
<p>The output of pip list is shown below.</p>
<pre><code>root@8015f43d01aa:/# pip list
Package Version
------------------ ---------
blinker 1.6.2
cachelib 0.10.2
certifi 2023.7.22
charset-normalizer 3.3.0
click 8.1.7
Flask 2.3.3
Flask-PyMongo 2.3.0
Flask-Session 0.5.0
gunicorn 21.2.0
idna 3.4
itsdangerous 2.1.2
Jinja2 3.1.2
MarkupSafe 2.1.3
packaging 23.2
pika 1.3.2
pip 23.2.1
psycopg2 2.9.8
pyasn1 0.5.0
pyasn1-modules 0.3.0
pymongo 3.12.3
python-dateutil 2.8.2
python-ldap 3.4.3
redis 5.0.1
requests 2.31.0
setuptools 68.2.2
six 1.16.0
urllib3 2.0.6
Werkzeug 3.0.0
wheel 0.41.2
</code></pre>
<p>The error message.</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/flask/app.py", line 2213, in __call__
return self.wsgi_app(environ, start_response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/flask/app.py", line 2193, in wsgi_app
response = self.handle_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/flask/app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/flask/app.py", line 1487, in full_dispatch_request
return self.finalize_request(rv)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/flask/app.py", line 1508, in finalize_request
response = self.process_response(response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/flask/app.py", line 2005, in process_response
self.session_interface.save_session(self, ctx.session, response)
File "/usr/local/lib/python3.12/site-packages/flask_session/sessions.py", line 454, in save_session
response.set_cookie(app.config["SESSION_COOKIE_NAME"], session_id,
File "/usr/local/lib/python3.12/site-packages/werkzeug/sansio/response.py", line 224, in set_cookie
dump_cookie(
File "/usr/local/lib/python3.12/site-packages/werkzeug/http.py", line 1303, in dump_cookie
if not _cookie_no_quote_re.fullmatch(value):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: cannot use a string pattern on a bytes-like object
</code></pre>
<p>Downgrading to Werkzeug==2.3.7 from Werkzeug==3.0.0 solved the above error.
Werkzeug==3.0.0 was released 2023-September-30.</p>
| <python><flask><flask-session> | 2023-10-03 13:06:05 | 3 | 503 | Allan K |
77,222,535 | 3,948,766 | Why is the bind method not working in python kivy | <p>I am at a complete loss, why is the <code>on_packet</code> method not being called in either of the classes. I have plenty of other bindings in my app but for some reason these do not seem to work??</p>
<pre class="lang-py prettyprint-override"><code>from kivy.uix.checkbox import CheckBox
from kivy.uix.switch import Switch
from kivy.app import runTouchApp
from kivy.uix.gridlayout import GridLayout
class FXSwitch(Switch):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.bind(on_active=self.on_packet)
def on_active(self, *args):
print('on_active')
def on_packet(self, *args):
print('on_packet')
print(args)
class FXCheckBox(CheckBox):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.bind(on_state=self.on_packet)
def on_state(self, *args):
print('on_state')
def on_packet(self, *args):
print('on_packet')
print(args)
grid = GridLayout(cols=1)
sw = FXSwitch()
sw.bind(on_active=sw.on_packet)
grid.add_widget(sw)
grid.add_widget(FXSwitch())
grid.add_widget(FXCheckBox())
runTouchApp(grid)
</code></pre>
| <python><kivy> | 2023-10-03 13:05:28 | 1 | 356 | Infinity Cliff |
77,222,463 | 1,418,090 | Is there a way to change Adam to legacy when using Mac M1/M2 in TensorFlow? | <p>I am running a Deep Learning model, but sometimes I use my work PC and sometimes my Mac. The problem is that whenever using the Mac (with an M2 chip), I get this warning message:</p>
<blockquote>
<p>WARNING:absl: At this time, the v2.11+ optimizer <code>tf.keras.optimizers.Adam</code> runs slowly on M1/M2 Macs, please use the legacy Keras optimizer instead, located at <code>tf.keras.optimizers.legacy.Adam</code>.</p>
</blockquote>
<p>I question whether there is a way to shift to <code>tf.keras.optimizers.legacy.Adam</code> in my Mac. As a side question, is it beneficial at all? I guess so because my training is taking way more than I expected, given the problem's simplicity.</p>
<p>As a minimum viable example, please check this example I adapted from the <a href="https://www.tensorflow.org/tutorials/quickstart/beginner" rel="nofollow noreferrer">TensorFlow website</a>:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10)
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# If-else should go here:
# If in a Mac with M1 / M2:
# opt = tf.keras.optimizers.legacy.Adam(learning_rate=0.0005)
# Else:
opt = tf.keras.optimizers.Adam(learning_rate=0.0005)
model.compile(optimizer=opt,
loss=loss_fn,
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
</code></pre>
<p>Thanks in advance, and apologies if this has been already answered. I could not find a question similar.</p>
| <python><macos><tensorflow><deep-learning><tensorflow2.0> | 2023-10-03 12:56:31 | 1 | 438 | Umberto Mignozzetti |
77,222,375 | 2,386,113 | QR decomposition results in Python (using numpy) and in C++ (using Eigen) are different? | <p><strong>Sample QR decomposition Code (Python):</strong></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
# Define the 2D array
data = np.array([
[12, -51, 4],
[6, 167, -68],
[-4, 24, -41],
[-1, 1, 0],
[2, 0, 3]
], dtype=float) # specify data type as 'float' to match C++ double
q, r = np.linalg.qr(data)
print(q)
print(r)
</code></pre>
<p><strong>Sample QR-Decomposition C++ code (Eigen):</strong></p>
<pre><code>#include <iostream>
#include <Eigen/Dense>
void qrDecomposition(const Eigen::MatrixXd& A, Eigen::MatrixXd& Q, Eigen::MatrixXd& R) {
int nRows = A.rows();
int nCols = A.cols();
Q = Eigen::MatrixXd::Zero(nRows, nCols);
R = Eigen::MatrixXd::Zero(nCols, nCols);
Eigen::MatrixXd v(nRows, nCols);
Eigen::MatrixXd u(nRows, nCols);
for (int j = 0; j < nCols; j++) {
v.col(j) = A.col(j);
for (int i = 0; i < j; i++) {
R(i, j) = Q.col(i).dot(A.col(j));
v.col(j) -= R(i, j) * Q.col(i);
}
R(j, j) = v.col(j).norm();
Q.col(j) = v.col(j) / R(j, j);
}
}
int main() {
int nRows = 5; // Number of rows
int nCols = 3; // Number of columns
// Sample flattened array (replace with your data)
Eigen::Map<Eigen::MatrixXd> A_data(new double[nRows * nCols], nRows, nCols);
A_data <<
12, -51, 4,
6, 167, -68,
-4, 24, -41,
-1, 1, 0,
2, 0, 3;
Eigen::MatrixXd Q, R;
qrDecomposition(A_data, Q, R);
std::cout << "Matrix Q:\n" << Q << "\n";
std::cout << "Matrix R:\n" << R << "\n";
return 0;
}
</code></pre>
<p><strong>RESULTS:</strong></p>
<p><strong>python results:</strong></p>
<pre><code>q
array([[-0.84641474, 0.39129081, -0.34312406],
[-0.42320737, -0.90408727, 0.02927016],
[ 0.28213825, -0.17042055, -0.93285599],
[ 0.07053456, -0.01404065, 0.00109937],
[-0.14106912, 0.01665551, 0.10577161]])
r
array([[ -14.17744688, -20.66662654, 13.4015667 ],
[ 0. , -175.04253925, 70.08030664],
[ 0. , 0. , 35.20154302]])
</code></pre>
<p><strong>c++ results:</strong></p>
<pre><code>Matrix Q:
0.846415 -0.391291 -0.343124
0.423207 0.904087 0.0292702
-0.282138 0.170421 -0.932856
-0.0705346 0.0140407 0.00109937
0.141069 -0.0166555 0.105772
Matrix R:
14.1774 20.6666 -13.4016
0 175.043 -70.0803
0 0 35.2015
</code></pre>
<p><strong>PROBLEM:</strong> As can be seen in the results above, the signs of the values are different (<strong>not always flipped</strong>). Why is it so?</p>
| <python><c++><math><eigen><qr-decomposition> | 2023-10-03 12:44:01 | 1 | 5,777 | skm |
77,222,210 | 10,533,225 | Assign column from one dataframe to another dataframe using pyspark | <p>I need to assign <code>item_name</code> from first daframe to second dataframe by creating a new column base on first dataframe.</p>
<p>This is the sample dataframe #1</p>
<p><a href="https://i.sstatic.net/kOKZc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kOKZc.png" alt="enter image description here" /></a></p>
<p>The second dataframe look like this:</p>
<p><a href="https://i.sstatic.net/BaDku.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BaDku.png" alt="enter image description here" /></a></p>
<p>I tried using this code, but it is not working. I guess, it is because of same data from <code>item_id</code> column?</p>
<pre><code>new_df1 = (
df1.select
.join(
df2
df1.item_id = df2.item_id,
"left"
)
)
</code></pre>
<p>Expected output is this:</p>
<p><a href="https://i.sstatic.net/cLRFx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cLRFx.png" alt="enter image description here" /></a></p>
| <python><pyspark> | 2023-10-03 12:20:17 | 2 | 583 | Tenserflu |
77,222,170 | 8,040,369 | Remove df rows if based on some condition in column values | <p>I have a data frame which has 2 columns datetime and value as shown below.</p>
<pre><code> Datetime Value
==============================================
10-03-2023 00:00:00.000 10
10-03-2023 01:00:00.000 20
10-03-2023 04:00:00.000 50
</code></pre>
<p>So to impute the missing entries, I am creating index value for each hour and imputing it with <strong>0</strong> with the below code,</p>
<pre><code>today = datetime.now(timezone("Asia/Kolkata")).replace(tzinfo=None).replace(microsecond=0, second=0, minute=0)
today = today.strftime("%m-%d-%Y %H:%M:%S")
start_date = pd.to_datetime("10-03-2023 00:00:00", format='%m-%d-%Y %H:%M:%S')
start_date = start_date.strftime("%m-%d-%Y %H:%M:%S")
idx = pd.date_range(start_date, today, freq='H')
# imputing missing data and assigning value as 0
df = df.set_index('Datetime')
df = df.drop_duplicates(keep=False)
df = df.reindex(idx, fill_value=0)
df = df.reset_index()
</code></pre>
<p>The above give the below data frame till <strong>10-03-2023 05:00:00.000</strong></p>
<pre><code> Datetime Value
==============================================
10-03-2023 00:00:00.000 10
10-03-2023 01:00:00.000 20
10-03-2023 02:00:00.000 0
10-03-2023 03:00:00.000 0
10-03-2023 04:00:00.000 50
10-03-2023 05:00:00.000 0
</code></pre>
<p>My question is, is there a way to restrict the last entry if it contains values as <strong>0</strong> not value when there is values apart from 0 and also it should not do anything to entries which has value as 0 but not as the last entry.</p>
<p>So if the latest entry in my df is 0, it should only get removed from df.</p>
<p><strong>Edit:</strong>
So basically my output should be like below It should not have the <strong>05</strong> entry as that is the last entry and its value is 0 and every hour values at which has <strong>0</strong> at the end.</p>
<p>If incase an entry which has a value greater than 0 at <strong>6</strong> then my DF should have all entries till 6th hour with 5th hr value as 0</p>
<pre><code> Datetime Value
==============================================
10-03-2023 00:00:00.000 10
10-03-2023 01:00:00.000 20
10-03-2023 02:00:00.000 0
10-03-2023 03:00:00.000 0
10-03-2023 04:00:00.000 50
</code></pre>
<p>Thanks,</p>
| <python><pandas><dataframe><datetime> | 2023-10-03 12:15:16 | 0 | 787 | SM079 |
77,222,164 | 13,508,045 | Data bar length isn't correct for 0% and 100% | <p>I have an excel sheet for which I want to add a <code>DataBarRule</code>. Technically it works but in excel it doesn't look correct. The type of the data bar is percent but for 0% the data bar doesn't disappear and for 100% it doesn't fill out the cell completely. For 50% it looks correct.</p>
<p><a href="https://i.sstatic.net/Enh1k.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Enh1k.png" alt="Databar" /></a></p>
<h4>Code</h4>
<pre class="lang-py prettyprint-override"><code>import openpyxl
from openpyxl.formatting.rule import DataBarRule
wb = openpyxl.load_workbook("./test.xlsm", keep_vba=True)
ws = wb["Overview"]
rule = DataBarRule(
start_type="percent",
start_value=0,
end_type="percent",
end_value=100,
color="000000",
showValue=None,
minLength=None,
maxLength=None,
)
ws.conditional_formatting.add("A1:A3", rule)
wb.save("./test.xlsm")
</code></pre>
<h4>Reproduction Steps</h4>
<ol>
<li>Create excel file called <code>test.xlsm</code></li>
<li>Create excel sheet called <code>Overview</code></li>
<li>Write following values into the cells <code>A1:A3</code>: <code>0</code>, <code>0.5</code>, <code>1</code> and set the number format to percent</li>
<li>Run code</li>
</ol>
| <python><excel><openpyxl> | 2023-10-03 12:13:59 | 1 | 1,508 | codeofandrin |
77,222,122 | 4,355,873 | Run python with nodemon in docker compose run/debug configuration | <p>Is there a way to set up a <strong>python run/debug configuration</strong> with docker compose in <strong>PyCharm</strong> that uses <code>nodemon</code> to start the script? I want to use <code>nodemon</code> for the hot reload functionality in my python code. Iβve managed to set up it working with docker compose/dockerfile and when I start the project with <code>docker compose up</code> everything works as intended with nodemon. But I would like to create a run/debug config that uses my docker compose file and nodemon.</p>
<p>So far Iβve created a new interpreter using docker compose following <a href="https://www.jetbrains.com/help/pycharm/using-docker-compose-as-a-remote-interpreter.html" rel="nofollow noreferrer">this guide</a>. This works (using the debug with breakpoints and run) but there is no hot reload from nodemon. Is there a way to configure my configuration to use nodemon or some other way to use hot reload with the mentioned setup?</p>
<p>Dockerfile</p>
<pre><code>FROM python:3.8
WORKDIR /app
COPY requirements.txt /app/
RUN pip install -r requirements.txt
# Install Node.js and npm
RUN apt-get update && apt-get upgrade -y && \
apt-get install -y nodejs \
npm
# Install nodemon globally
RUN npm install -g nodemon
RUN groupadd -g 999 python && \
useradd -r -u 999 -g python python
RUN chown python:python /app
COPY --chown=python:python . .
USER 999
ENV PYTHONUNBUFFERED=TRUE
RUN chmod +x src/queue_consumer.py
CMD ["nodemon", "--exec", "python", "src/queue_consumer.py"]
</code></pre>
<p>This is my run configuration in pycharm
<a href="https://i.sstatic.net/RnUgW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RnUgW.png" alt="enter image description here" /></a></p>
| <python><docker><docker-compose><pycharm> | 2023-10-03 12:06:48 | 1 | 379 | GameAtrix1 |
77,222,090 | 188,331 | HuggingFace Trainer is not using GPU | <p>I wrote a simple trainer code as follows:</p>
<pre><code>from typing import List
from tokenizers import (
decoders,
models,
trainers,
Tokenizer,
Regex,
NormalizedString,
PreTokenizedString
)
tokenizer = Tokenizer(models.WordPiece(unk_token="[UNK]"))
special_tokens = ["[UNK]", "[PAD]", "[CLS]", "[SEP]", "[MASK]"]
trainer = trainers.WordPieceTrainer(vocab_size=2500, special_tokens=special_tokens)
tokenizer.train_from_iterator(get_training_corpus(), trainer=trainer)
</code></pre>
<p>The training process is slow, therefore I checked the GPU usage by <code>nvidia-smi</code> and found that both GPU units (I got 2 units) are idle.</p>
<p>The <code>get_training_corpus()</code> is basically a function that reads from an Excel file:</p>
<pre><code>import pandas as pd
dataset = []
dataxls = pd.read_excel('data/dataset.xlsx')
for index, row in dataxls.iterrows():
dataset.append(row['data'].strip())
def get_training_corpus():
for i in range(0, len(dataset), 1000):
yield dataset[i : i + 1000]
</code></pre>
<p>The code runs in JupyterHub, and the other notebooks utilize GPU without a problem. What did I miss?</p>
<p>The output of the GPU detection function is as follows:</p>
<pre><code>import torch
device = "cuda:0" if torch.cuda.is_available() else "cpu"
print(device)
</code></pre>
<p>It prints <code>cuda:0</code>.</p>
| <python><huggingface-tokenizers> | 2023-10-03 12:01:44 | 0 | 54,395 | Raptor |
77,221,923 | 10,243,896 | Embedding Python libraries in C# project | <p>I need to use a specific Python library (SciPy) in a C# application. Application will be deployed to machines that do not have Python installed, and it should not be a prerequisite. I am trying to use <a href="https://github.com/pythonnet/pythonnet" rel="nofollow noreferrer">Pythonnet</a> to integrate Python into my C# application. I downloaded and installed Pythonnet NuGet package via Visual Studio. I then copied Python311.dll to my solution folder (so it would be shipped with my application when installed on other machines), and I am now trying to test it with the code sample from Pythonnet docs:</p>
<pre><code> private void TestPython()
{
Runtime.PythonDLL = @"D:\Projects\MyProject\dlls\python311.dll";
PythonEngine.Initialize();
using (Py.GIL())
{
dynamic np = Py.Import("numpy");
Console.WriteLine(np.cos(np.pi * 2));
dynamic sin = np.sin;
Console.WriteLine(sin(5));
double c = (double)(np.cos(5) + sin(5));
Console.WriteLine(c);
dynamic a = np.array(new List<float> { 1, 2, 3 });
Console.WriteLine(a.dtype);
dynamic b = np.array(new List<float> { 6, 5, 4 }, dtype: np.int32);
Console.WriteLine(b.dtype);
Console.WriteLine(a * b);
Console.ReadKey();
}
}
</code></pre>
<p>The python311.dll is recognized and accepted, but I get an error <code>'No module named 'numpy''</code> on line <code>dynamic np = Py.Import("numpy");</code>. This is, of course, because I don't have numpy library installed.</p>
<p>Problem is, if I install it via pip, the library DLLs will not be embedded into my application (they will be in C:\Users\Administrator\AppData\Local\Programs\Python\Python311\Lib). I can probably copy them over to solution folder like I did with python311.dll, but I don't know how to tell Py to use them, rather than search for them in regular Python installation directories.</p>
<p>Can anyone advise?</p>
| <python><c#><python.net> | 2023-10-03 11:37:51 | 1 | 428 | Justin8051 |
77,221,899 | 22,466,650 | How to scatter plot the intersecting values from arrays with different shapes | <p>I have two arrays and I need to use them in a scatter plot with the consideration of their membership. For example, the first row of <code>B</code> is located at the second row of <code>A</code>, from column 2 to 3.</p>
<pre class="lang-py prettyprint-override"><code>#A
array([[ 0, 1, 2, 3],
[ 4, 5, 6, 7],
[ 8, 9, 10, 11],
[12, 13, 14, 15],
[16, 17, 18, 19]])
#B
array([[ 5, 6],
[12, 13],
[16, 17]])
</code></pre>
<p>I made the code below :</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
A = np.arange(20).reshape(5, 4)
B = np.array([[5, 6], [12, 13], [16, 17]])
x, y = np.meshgrid(range(A.shape[0]), range(A.shape[1]))
fig, ax = plt.subplots()
ax.scatter(x, y, facecolor='none', edgecolor='k', s=70, marker='s')
for ix, iy, a in zip(x.ravel(), y.ravel(), A.ravel()):
plt.annotate(a, (ix,iy), textcoords='offset points', xytext=(0,7), ha='center', fontsize=14)
plt.axis("off")
ax.invert_yaxis()
plt.show()
</code></pre>
<p>Now, I can check if <code>B</code> is in <code>A</code> with <code>np.isin(A, B)</code> but I have two problems:</p>
<ol>
<li>The grid that doesn't reflect the shape of <code>A</code> (there is like an extra column at the right)</li>
<li>The True values must be a filled x <code>'X'</code> with black edge and same size and width as reds</li>
</ol>
<p><a href="https://i.sstatic.net/0t0R6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0t0R6.png" alt="enter image description here" /></a></p>
<p>Do you have any ideas on how to do that?</p>
| <python><numpy><matplotlib><scatter-plot><intersection> | 2023-10-03 11:34:51 | 1 | 1,085 | VERBOSE |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.