| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string date) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string, nullable) |
|---|---|---|---|---|---|---|---|---|
76,250,241
| 7,601,489
|
No Python at "...\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\python.exe'
|
<p>I installed Python 3.11 using the Windows Store and wanted to use Python 3.10 instead. So I uninstalled Python 3.11 and installed 3.10.</p>
<p>Running <code>python --version</code> gives the correct output <code>"Python 3.10.11"</code>, but when I'm trying to run a <code>.bat</code> file that uses Python, it spits out:</p>
<pre><code>No Python at "...\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\python.exe'
</code></pre>
<p>I figured it was possible that it was lingering in the PATH somewhere, but I didn't see it there either.</p>
<p>The <code>.bat</code> file is <code>webui.bat</code> from <a href="https://github.com/vladmandic/automatic" rel="nofollow noreferrer">https://github.com/vladmandic/automatic</a>.</p>
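A quick way to double-check what the shell actually resolves, and whether a stale Store alias directory is still on PATH (a stdlib-only sketch; `find_path_entries` is a helper of my own, not an existing API):

```python
import os
import shutil

def find_path_entries(path_str, needle, sep=os.pathsep):
    """Return the PATH entries that contain `needle` (e.g. 'WindowsApps')."""
    return [p for p in path_str.split(sep) if needle in p]

# Which interpreter does the shell resolve?
print(shutil.which("python"))

# Any Microsoft Store alias directories still on PATH?
print(find_path_entries(os.environ.get("PATH", ""), "WindowsApps"))
```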
|
<python><windows>
|
2023-05-15 00:39:04
| 2
| 824
|
Andrew Zaw
|
76,250,102
| 3,762,284
|
How to write hive partitioned table with pyspark, with skip equal data column?
|
<p>In my project, I use Hadoop Hive with PySpark.</p>
<p>My table was created with this query:</p>
<pre><code>CREATE TABLE target_db.target_table(
id string)
PARTITIONED BY (
user_name string,
category string)
</code></pre>
<p>I get data like this.</p>
<pre><code>user_name: john
data list: # id, category
[('1', 'warrior'), ('2', 'warrior'), ('3', 'knight'), ...]
</code></pre>
<p>I want to append this data to <code>target_db.target_table</code>.</p>
<p>First, I tried this:</p>
<pre><code>columns = ["account_id", "category"]
# creating a dataframe
df_user_list = spark_session.createDataFrame(id_list, columns)
file_path = f"{host}/target_db.db/target_table/user_name={user_name}"
df_user_list.write.partitionBy('category').mode("overwrite").parquet(file_path)

for category in category_list:
    spark_session.sql(f"ALTER TABLE target_db.target_table DROP IF EXISTS PARTITION (user_name='{user_name}', category='{category}')")
    spark_session.sql(f"ALTER TABLE target_db.target_table ADD PARTITION (user_name='{user_name}', category='{category}')")
</code></pre>
<p>The code above works.
But with real data I can't get <code>category_list</code>; also, I can't use 'overwrite', I need 'append' mode. So I can't use the code above.</p>
<p>now, i find one solution.</p>
<pre><code>columns = ['id', "user_name", "category"]
result = []
for item in id_list:
    result.append((item[0], 'test.user', item[1]))

# creating a dataframe
df_user_list = spark_session.createDataFrame(result, columns)
df_user_list.write.mode("append").insertInto('target_db.target_table')
</code></pre>
<p>The code above works perfectly.
But I think adding <code>user_name</code> is strange: it's the same value for every row, so I think there must be a more efficient solution.</p>
<p>I would like a solution like this:</p>
<pre><code>columns = ['id', "category"]
# creating a dataframe
df_user_list = spark_session.createDataFrame(id_list, columns)
df_user_list.write.mode("append")
    .selectCategory(['user_name', 'test_user'])  # how to do this?
    .insertInto('target_db.target_table')
</code></pre>
<ul>
<li>edit</li>
</ul>
<p>Sadly, @cruzlorite's solution doesn't work.
After I edited the code like this, it works:</p>
<pre><code> df_user_list = spark_session.createDataFrame(id_list, columns)
df_user_list = df_user_list.withColumn('user_name', lit('test.user')).select('id', "user_name", "category")
df_user_list.write.mode("append").insertInto(table_name)
</code></pre>
<p>It looks better than my original, but I have one question: how does it work internally?</p>
<p>Does it work the same as my original code? The <code>user_name</code> column is a partition column, so it's unnecessary inside the HDFS parquet files. Can Spark optimize it away?</p>
|
<python><dataframe><pyspark><hive><hdfs>
|
2023-05-14 23:36:54
| 1
| 556
|
Redwings
|
76,250,086
| 342,553
|
django viewflow how to mock viewflow handlers
|
<p>Say I have a flow class <code>app1/flows.py</code></p>
<pre class="lang-py prettyprint-override"><code>class MyFlow(Flow):
    start = ...
    do_stuff = flow.Handler(this.do_stuff_handler).Next(...)
    end = ...

    def do_stuff_handler(self, activation):
        ...
</code></pre>
<p>If I want to mock <code>do_stuff_handler</code> to assert if it has been called</p>
<pre class="lang-py prettyprint-override"><code>class MyFlowTest(TestCase):
    @mock.patch('app1.flows.MyFlow.do_stuff_handler')
    def test_do_stuff_handler_called(self, mock_do_stuff_handler):
        ...
</code></pre>
<p>It appears the <code>do_stuff_handler</code> did not get patched. I did notice that the flow class gets instantiated when Django starts up (see <a href="https://github.com/viewflow/viewflow/issues/290" rel="nofollow noreferrer">here</a>). I am struggling to find the correct path for patching the handler method. Any ideas?</p>
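As a general `unittest.mock` illustration (a plain-class sketch, not viewflow-specific; `MyFlow` here is a stand-in): patching the attribute on the class affects instances created before the patch too, but only while the patch is active:

```python
from unittest import mock

class MyFlow:
    def do_stuff_handler(self, activation):
        return "real"

flow = MyFlow()  # instantiated before patching, like viewflow does at startup

with mock.patch.object(MyFlow, "do_stuff_handler", return_value="mocked") as handler:
    # the pre-existing instance sees the patched class attribute
    assert flow.do_stuff_handler(None) == "mocked"
    handler.assert_called_once_with(None)

assert flow.do_stuff_handler(None) == "real"  # patch is gone outside the block
```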
|
<python><django-viewflow>
|
2023-05-14 23:28:40
| 1
| 26,828
|
James Lin
|
76,249,814
| 6,549,541
|
Authorizing Google API on Headless Machine in Python
|
<p>Running Linux (ubuntu and python in docker on Raspberry OS) on a headless Raspberry Pi 4.</p>
<p>Followed <a href="https://developers.google.com/drive/api/quickstart/python" rel="nofollow noreferrer">Google's Python Quickstart Guide</a> and got it to the point where it asks me to go to a URL like this:</p>
<pre><code>Please visit this URL to authorize this application: https://accounts.google.com/o/oauth2/auth?response_type=code&client_id=xxxxx&redirect_uri=http%3A%2F%2Flocalhost%3A59875%2F&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcalendar.readonly&state=xxxxx&access_type=offline
</code></pre>
<p>Because I am headless and on SSH, I copied the URL to my GUI computer, signed in, and authorized, but I get redirected back to <code>localhost:59875/?state=xxxxx&code=xxxxx&scope=xxxx</code>, which obviously does not open, as nothing on my local machine is listening. I see that redirect URI with <code>localhost</code> in the link above, but changing it is invalid per <a href="https://developers.google.com/identity/protocols/oauth2/web-server#uri-validation" rel="nofollow noreferrer">this</a>.</p>
<p><strong>How can I authorize Google API on a headless machine?</strong></p>
<p>Short of creating a webpage on my pi and using that to run the python (which is more than I want to do right now, especially not knowing if it will work), how can I do this?</p>
<p><strong>EDIT</strong>
I am creating a python program that will interact with Google Calendar. I have a developer account and credentials, and a personal account and credentials. I am trying to create this under the developer credentials and then enable (long term) users to authenticate themselves. Hence the test users.</p>
|
<python><google-oauth><google-calendar-api>
|
2023-05-14 21:47:24
| 1
| 1,186
|
atclaus
|
76,249,794
| 11,141,816
|
How could msgspec and ijson faster than json for large json file?
|
<p>I was told that ijson creates a pipe between the file on the hard drive and memory, so it's more memory-efficient than the json library. However, I was also told that ijson can be faster than json thanks to its incremental parsing, which did not make sense to me at all: the hard drive's I/O is usually much slower than RAM's, so it should not be able to keep up with the CPU clock speed.</p>
<p>I have a JSON file with about 2 million entries, around 3.5 GB, to be loaded, and the data requires very frequent searches through the file. Can msgspec or ijson be faster than json? Or should I just load the JSON file into RAM?</p>
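The memory argument is about never materializing the whole document at once; for example, with newline-delimited JSON the stdlib alone can stream records one at a time (a sketch of the idea, not ijson itself — ijson does the same incrementally inside a single large document):

```python
import io
import json

# Simulate a large newline-delimited JSON file
fake_file = io.StringIO("\n".join(json.dumps({"id": i, "v": i * i}) for i in range(5)))

def stream_records(fp):
    """Yield one parsed record at a time; only one line is ever in memory."""
    for line in fp:
        yield json.loads(line)

total = sum(rec["v"] for rec in stream_records(fake_file))
print(total)  # 0 + 1 + 4 + 9 + 16 = 30
```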
|
<python><json><io><ijson>
|
2023-05-14 21:37:55
| 0
| 593
|
ShoutOutAndCalculate
|
76,249,781
| 6,467,512
|
Feature extraction process using too much memory and causing a crash. What can I do?
|
<p><a href="https://i.sstatic.net/RqbBn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RqbBn.png" alt="enter image description here" /></a>I am using a Hugging Face transformer to do some image feature extraction, to use later for some similarity search functionality. This is not working currently because, after processing around 200 images, too much memory is being used and the system crashes. What am I doing wrong? What can I change to fix this?
Here is my feature extraction class:</p>
<pre><code>import numpy as np
from transformers import AutoProcessor, AutoModelForZeroShotImageClassification, AutoTokenizer, TFCLIPModel

def expand_greyscale_image_channels(grey_pil_image):
    grey_image_arr = np.array(grey_pil_image)
    grey_image_arr = np.expand_dims(grey_image_arr, -1)
    grey_image_arr_3_channel = grey_image_arr.repeat(3, axis=-1)
    return grey_image_arr_3_channel

def get_color_image(img):
    img = img.resize((224, 224))
    img = img.convert('RGB')
    return img

def get_greyscale_image(img):
    img = img.resize((224, 224))
    img = img.convert('L')
    img = expand_greyscale_image_channels(img)
    return img

class FeatureExtractor:
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        pass

    def __init__(self, processor=None, model=None, tokenizer=None, text_model=None):
        self.processor = processor
        self.model = model
        self.tokenizer = tokenizer
        self.text_model = text_model

    def model(self):
        return self.model

    def processor(self):
        return self.processor

    def extract_features(self, img, grey=False):
        """
        Extract a deep feature from an input image
        Args:
            img: from PIL.Image.open(path) or tensorflow.keras.preprocessing.image.load_img(path)
        Returns:
            feature (np.ndarray): deep feature with the shape=(4096, )
        """
        try:
            if grey:
                img = get_greyscale_image(img)
            else:
                img = get_color_image(img)
            inputs = self.processor(images=img, return_tensors="pt")
            image_features = self.model.get_image_features(**inputs)
            # Use tensor.detach().numpy() instead.
            image_features /= image_features.norm(dim=-1, keepdim=True)
            return image_features.detach().numpy()  # Normalize
        except Exception as e:
            print(e)

    def extract_text_features(self, text):
        try:
            inputs = self.tokenizer([text], padding=True, return_tensors="tf")
            text_features = self.text_model.get_text_features(**inputs)
            text_features = text_features / np.linalg.norm(text_features)
            return text_features.numpy()
        except Exception as e:
            print(e)
</code></pre>
<p>Here is the function that I run in a loop over each image url:</p>
<pre><code>fe = FeatureExtractor(processor, model, tokenizer, text_model)

def get_features_for_image(image_meta):
    id = image_meta["id"]
    image_url = image_meta["image_url"]
    # get features for image
    try:
        # open image from url
        image = get_pil_image_from_url(image_url)
        # resize image
        image = image.resize((224, 224))
        # if file not in features folder
        # extract features
        if not os.path.exists("features/" + id + ".npy"):
            # with FeatureExtractor(processor, model, tokenizer, text_model) as fe:
            image_features = fe.extract_features(image)
            np.save("features/" + id + ".npy", image_features)
            del image_features
        del image
        gc.collect()
        # write features to the json file
        # save features under file features/id.npy
        return True
    except Exception as e:
        print("Error extracting features for image ", id, " error: ", e)
</code></pre>
<p>Where is the memory leak? How can I fix it?</p>
<p>Here is an image of the CPU usage. It is doing fine per image, but as the total number of images processed grows, so does the usage. Even if the model uses a lot of memory, shouldn't it recover that memory after each image's features are done extracting?</p>
|
<python><deep-learning><memory-leaks><huggingface-transformers>
|
2023-05-14 21:34:16
| 1
| 323
|
AynonT
|
76,249,774
| 551,404
|
Unable to execute python script from php on web when mysql module is included in python
|
<p>This is my php code :</p>
<pre><code>$output = shell_exec("python3 test.py");
echo $output;
</code></pre>
<p>This is my python code</p>
<pre><code>#!/usr/bin/env python3
import requests
import sys
print("Hello from Python")
</code></pre>
<p>It shows the text <code>Hello from Python</code> when accessed from a browser, and also from the terminal using <code>php test.php</code>.</p>
<p>When I change my python code to include <code>import mysql.connector</code></p>
<pre><code>#!/usr/bin/env python3
import requests
import mysql.connector
import sys
print("Hello from Python")
</code></pre>
<p>It shows the text <code>Hello from Python</code> when accessed from the terminal using <code>php test.php</code>, but shows nothing when accessed from my browser. Why?</p>
|
<python><php><mysql>
|
2023-05-14 21:32:08
| 0
| 3,522
|
dramasea
|
76,249,738
| 7,318,120
|
what does sha256 actually do?
|
<p>I am trying to understand hashing in python and in particular sha256.</p>
<p>I have a standard python function that uses the <code>hashlib</code> for creating a sha256 hash like this:</p>
<pre class="lang-py prettyprint-override"><code>import hashlib

def hash_password(password):
    """Hashes a password using the SHA-256 algorithm."""
    hash_object = hashlib.sha256()
    hash_object.update(password.encode('utf-8'))
    return hash_object.hexdigest()

password = 'password123'
hashed_password = hash_password(password)
print(hashed_password)
</code></pre>
<p>I was expecting a function with a clear process.</p>
<p>So I navigated to the definition of <code>.sha256()</code> in the <code>hashlib.pyi</code> module, to find this:</p>
<pre class="lang-py prettyprint-override"><code>def sha256(string: ReadableBuffer = b"", *, usedforsecurity: bool = True) -> _Hash: ...
</code></pre>
<p>But I simply do not understand what this is doing.
It looks like a function that takes arguments and does nothing (<code>...</code>).</p>
<p>So what does this function do, please?</p>
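For reference, the runtime behaviour (which lives in the C implementation, independent of the stub file) can be checked directly: feeding data incrementally via `update()` yields the same digest as hashing everything in one call:

```python
import hashlib

# One-shot hash of the whole input
one_shot = hashlib.sha256(b"password123").hexdigest()

# Incremental: the same bytes fed in two update() calls
incremental = hashlib.sha256()
incremental.update(b"password")
incremental.update(b"123")

assert incremental.hexdigest() == one_shot
print(one_shot)  # 64 hex characters = 256 bits
```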
|
<python><sha256><hashlib>
|
2023-05-14 21:21:06
| 1
| 6,075
|
darren
|
76,249,666
| 15,537,469
|
Streamlit with Poetry is not found when run my docker container
|
<p><strong>Solved</strong></p>
<p>According to this: <a href="https://stackoverflow.com/a/57886655/15537469">https://stackoverflow.com/a/57886655/15537469</a></p>
<p>and to this: <a href="https://stackoverflow.com/a/74918400/15537469">https://stackoverflow.com/a/74918400/15537469</a></p>
<p>I make a Multi-stage Docker build with Poetry and venv</p>
<pre><code>FROM python:3.10-buster as py-build
RUN apt-get update && apt-get install -y \
build-essential \
curl \
software-properties-common \
&& rm -rf /var/lib/apt/lists/*
RUN curl -sSL https://install.python-poetry.org | POETRY_HOME=/opt/poetry python3 -
COPY . /app
WORKDIR /app
ENV PATH=/opt/poetry/bin:$PATH
RUN poetry config virtualenvs.in-project true && poetry install
FROM python:3.10-slim-buster
EXPOSE 8501
HEALTHCHECK CMD curl --fail http://localhost:8501/_stcore/health
COPY --from=py-build /app /app
WORKDIR /app
CMD ./.venv/bin/python
ENTRYPOINT ["streamlit", "run", "mtcc/app.py", "--server.port=8501", "--server.address=0.0.0.0"]
</code></pre>
<p>The build goes well, but when I run my Docker container I get this error: <code>docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "streamlit": executable file not found in $PATH: unknown.</code></p>
<p>I don't know if this is really useful but here is my file structure</p>
<pre><code>mtcc
├── mtcc
│ └── app.py
└── Dockerfile
</code></pre>
|
<python><python-3.x><dockerfile><streamlit><python-poetry>
|
2023-05-14 21:03:16
| 2
| 534
|
GuiEpi
|
76,249,636
| 16,436,774
|
Class properties in Python 3.11+
|
<p>In Python 3.9, we gained the ability to chain <code>@classmethod</code> and <code>@property</code> to sensibly create class properties.</p>
<pre class="lang-py prettyprint-override"><code>class Foo:
    @property
    def instance_property(self):
        return "A regular property"

    @classmethod
    @property
    def class_property(cls):
        return "A class property"
</code></pre>
<p>This was enabled by giving <code>@classmethod</code> proper interaction with the descriptor protocol, meaning one's prospects were not limited to <code>@property</code> but any descriptor under the sun. Everything was fine and dandy until it was discovered that the implementation led to <a href="https://docs.python.org/3/whatsnew/3.11.html#language-builtins" rel="noreferrer">"a number of downstream problems"</a>, with deprecation coming in Python 3.11.</p>
<p>I've read over <a href="https://github.com/python/cpython/issues/89519" rel="noreferrer">the GitHub discussions</a> concerning the deprecation a bit and will not gripe here about what I would call a hasty retraction to a hasty design. The fact of the matter is that class properties are a reasonable thing that people want and could use in Python 3.9/3.10, but now can't. The release notes suggest the following:</p>
<blockquote>
<p>To “pass-through” a classmethod, consider using the <code>__wrapped__</code> attribute that was added in Python 3.10.</p>
</blockquote>
<p>It would not be controversial to call such a sentence extremely unhelpful on its own. The descriptor protocol is not something your average user will ever need to or want to encounter, and thus chaining <code>@classmethod</code> with them via a custom implementation is surely something that those in the know could and would spend time figuring out how to properly do in 3.11+.</p>
<p>But for those who have no idea what <code>@property</code> is besides that thing that lets them drop parentheses, <strong>how do you define class properties in Python 3.11+</strong>, and, in particular, <strong>how do you do it <em>well</em>?</strong></p>
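One common pattern (a hand-rolled descriptor sketch, not an official replacement; read-only, with no setter support) is to implement `__get__` yourself:

```python
class classproperty:
    """Read-only class-level property via the descriptor protocol."""
    def __init__(self, func):
        self.func = func

    def __get__(self, obj, owner=None):
        # `owner` is the class, whether accessed from the class or an instance
        return self.func(owner)

class Foo:
    @classproperty
    def class_property(cls):
        return f"A class property of {cls.__name__}"

assert Foo.class_property == "A class property of Foo"
assert Foo().class_property == "A class property of Foo"
```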
|
<python><python-3.x><properties><python-decorators><python-descriptors>
|
2023-05-14 20:55:33
| 3
| 866
|
kg583
|
76,249,634
| 265,521
|
How to distribute binaries via pip
|
<p>I know that sounds a bit of a mad thing to do, but pip is probably the most likely-to-be-installed package manager in many cases, and also CMake does it (<code>pip install cmake</code> is by far the easiest way to get an up-to-date build of CMake on Linux and Mac).</p>
<p>But anyway how would I actually do it? As far as I can tell from <a href="https://peps.python.org/pep-0491/" rel="nofollow noreferrer">the wheel spec</a>, wheels can include "scripts" that don't have to be Python scripts (it could be a native executable). Also wheels can be platform-specific, so I can put my Linux x86 binary in one wheel, Mac ARM one in another, upload them all to PyPI and pip will figure it out.</p>
<p>But how do I actually do that? I got as far as creating a <code>pyproject.toml</code> with a <code>[project.scripts]</code> entry, but it appears that those entries <em>do</em> have to be Python scripts.</p>
<p>The questions are:</p>
<ol>
<li>Is this possible already (e.g. using <code>setuptools</code>, <code>python -m build</code>, etc.)?</li>
<li>If not, is this possible <em>in theory</em> (if I write my own builder)?</li>
</ol>
|
<python><pip><python-wheel>
|
2023-05-14 20:55:26
| 1
| 98,971
|
Timmmm
|
76,249,617
| 11,092,636
|
how to type hint ctypes.POINTER(ctypes.c_int)
|
<p>Here is an MRE you can run on <code>mypy Playground</code>:</p>
<pre class="lang-py prettyprint-override"><code>import ctypes
import numpy as np  # type: ignore # (no stubs for numpy)

def np_to_c(arr: np.ndarray) -> tuple[ctypes.POINTER(ctypes.c_int), ctypes.c_int]:
    return arr.ctypes.data_as(ctypes.POINTER(ctypes.c_int)), ctypes.c_int(len(arr))
</code></pre>
<p>This type hint is wrong; according to <code>mypy</code>:</p>
<pre><code>main.py:4: error: Invalid type comment or annotation [valid-type]
main.py:4: note: Suggestion: use ctypes.POINTER[...] instead of ctypes.POINTER(...)
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>but when I do use <code>ctypes.POINTER[...]</code> instead of <code>ctypes.POINTER(...)</code> (cf screenshot below), I get:</p>
<pre class="lang-py prettyprint-override"><code>main.py:4: error: Function "ctypes.POINTER" is not valid as a type [valid-type]
main.py:4: note: Perhaps you need "Callable[...]" or a callback protocol?
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p><a href="https://i.sstatic.net/fmTln.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fmTln.png" alt="enter image description here" /></a></p>
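One workaround I believe type checkers accept (a sketch; it relies on typeshed's private `ctypes._Pointer` generic, which may change between versions, and on postponed evaluation so the private name is never evaluated at runtime; `as_c_array` is an illustrative helper, not from the question):

```python
from __future__ import annotations  # annotations stay as strings at runtime

import ctypes

def as_c_array(values: list[int]) -> tuple[ctypes._Pointer[ctypes.c_int], ctypes.c_int]:
    # Build a C int array and view it as an int pointer plus a length
    arr = (ctypes.c_int * len(values))(*values)
    return ctypes.cast(arr, ctypes.POINTER(ctypes.c_int)), ctypes.c_int(len(values))

ptr, n = as_c_array([10, 20, 30])
assert ptr[1] == 20 and n.value == 3
```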
|
<python><ctypes><type-hinting>
|
2023-05-14 20:51:09
| 2
| 720
|
FluidMechanics Potential Flows
|
76,249,478
| 8,778,855
|
Dynamically change dcc.link text
|
<p>My question is similar to the one posted <a href="https://stackoverflow.com/questions/68446584/dash-change-words-in-dcc-link">here</a>.</p>
<p>I use a <code>dcc.link</code> as sign-in and sign-out button.</p>
<p>It means that I need to change the text dynamically according to the authentication state (I use flask-login, i.e. <code>current_user.is_authenticated</code>).</p>
<p>My approach is to use a callback such that every time the dcc.link is clicked, the authentication state is checked. If someone has logged in, the dcc link should read <code>sign out</code>; if no one has logged in, it should read <code>sign in</code>. This is the callback I wrote:</p>
<pre><code>@app.callback(
    Output(component_id='sign-in-out', component_property='children'),
    Input(component_id='sign-in-out', component_property='children'),
)
def show_hide_element(tmp):
    if current_user.is_authenticated:
        return "Sign out"
    else:
        return "Sign in"
</code></pre>
<p>Unfortunately this does not work as it should. When I click sign out it does not change the text accordingly. How could I achieve this?</p>
|
<python><plotly-dash>
|
2023-05-14 20:08:46
| 0
| 477
|
volfi
|
76,249,465
| 2,641,825
|
Why is `groupby` not returning a KeyError when a column is missing? How to prevent values from being used as-is to determine the groups
|
<pre><code>import pandas

df_trade = pandas.DataFrame(
    {
        "reporter": ["a", "a", "b", "b"],
        "reporter_code": [1, 1, 2, 2],
        "partner": ["x", "y", "x", "z"],
        "partner_code": [24, 25, 24, 26],
        "product": ["p", "p", "p", "p"],
        "value": [1, 2, 3, 4],
    }
)
index = ['reporter', 'product', 'year', 'reporter_code']
df_trade.groupby(index).agg(imp=("value", sum)).reset_index()
--
Out[1]:
           index  imp
0        product    2
1       reporter    1
2  reporter_code    4
3           year    3
</code></pre>
<p>The "year" column is missing from <code>df_trade</code>, why is <code>groupby</code> not returning a KeyError?</p>
<h1>Documentation</h1>
<p>help(df_trade.groupby):</p>
<blockquote>
<p>If a list or ndarray of length
equal to the selected axis is passed (see the <code>groupby user guide <https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#splitting-an-object-into-groups></code>_),
the values are used as-is to determine the groups. A label or list
of labels may be passed to group by the columns in <code>self</code>.</p>
</blockquote>
<p>Maybe this is because my sample data frame just happens to have 4 rows, the exact same number as the 4 items in the <code>index</code> list used for the groupby.</p>
<ul>
<li>How to prevent the values from being "used as-is to determine the groups"?</li>
</ul>
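A defensive option (a sketch; `safe_groupby` is a hypothetical helper, not a pandas API) is to validate the keys against `df.columns` before grouping, so a missing label fails loudly instead of being treated as grouping values:

```python
import pandas as pd

def safe_groupby(df, keys):
    """Group only by column labels; raise if any key is not a column."""
    missing = [k for k in keys if k not in df.columns]
    if missing:
        raise KeyError(f"not columns of the frame: {missing}")
    return df.groupby(keys)

df = pd.DataFrame({"reporter": ["a", "a"], "value": [1, 2]})

try:
    safe_groupby(df, ["reporter", "year"])
except KeyError as e:
    print(e)  # 'year' is not a column

print(safe_groupby(df, ["reporter"])["value"].sum())
```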
|
<python><pandas>
|
2023-05-14 20:05:25
| 1
| 11,539
|
Paul Rougieux
|
76,249,205
| 4,725,707
|
How to assess the progress of a vectorized task (in python with pandas)
|
<p>Vectorization of tasks speeds up the execution, but I cannot find how to measure the progress of the vectorized task (in case of tasks taking a long time to complete). I've seen that <a href="https://tqdm.github.io/" rel="nofollow noreferrer">tqdm</a> might do the job, but I wonder if it is possible to do it in a simpler way.</p>
<p>Example with pandas dataframe (assume the index is [0...n] and a printout message is outputted each 1000 rows):</p>
<pre><code>for idx in df.index:
    df.loc[idx, 'B'] = a_function(df.loc[idx, 'A'])
    if (idx % 1000) == 0:
        print(idx)
</code></pre>
<p>This will show the progress, but can be horribly slow if df has several million rows and a_function() is not trivial.</p>
<p>The alternative is to vectorize the operation:</p>
<pre><code>df['B'] = df['A'].apply(lambda x: a_function(x))
</code></pre>
<p>which will probably run much quicker, but it does not provide any hint about the progress. Any idea on how to get this information on the status of the vectorized task?</p>
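A middle ground (a sketch; `apply_with_progress` is a helper of my own, not a pandas feature) is to keep the vectorized apply but run it chunk by chunk, printing progress between chunks:

```python
import numpy as np
import pandas as pd

def apply_with_progress(series, func, n_chunks=4):
    """Apply `func` per chunk, with a progress printout between chunks."""
    bounds = np.linspace(0, len(series), n_chunks + 1, dtype=int)
    parts = []
    for i in range(n_chunks):
        parts.append(series.iloc[bounds[i]:bounds[i + 1]].apply(func))
        print(f"{i + 1}/{n_chunks} chunks done")
    return pd.concat(parts)

df = pd.DataFrame({"A": range(10)})
df["B"] = apply_with_progress(df["A"], lambda x: x * 2)
```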
|
<python><pandas><vectorization><progress>
|
2023-05-14 18:55:57
| 1
| 538
|
RiGonz
|
76,249,186
| 2,949,526
|
Probability of moving on a cartesian plane
|
<p>I am working on the coding problem below, which looks more like a probability question than a coding problem.</p>
<p>A platform consists of 5 vertices. The coordinates of the vertices are: (-1,0), (0,-1), (0,0), (0,1), (1,0).
You start at vertex (xs, ys) and keep moving randomly either left (i.e., x coordinate decreases by 1), right (i.e., x coordinate increases by 1), up, or down. The directions of subsequent moves are independent.
What is the probability that you reach vertex (xe, ye) before falling off the platform?
Constraints:
(xs, ys) in [(-1,0), (0,-1), (0,0), (0,1), (1,0)]
(xe, ye) in [(-1,0), (0,-1), (0,0), (0,1), (1,0)]
xs != xe or ys != ye</p>
<p>Below is what I implemented; it works for the case I shared but fails for all other cases.</p>
<pre><code>def calculate_probability(xs, ys, xe, ye):
    edges = [[-1, 0], [0, -1], [0, 1], [1, 0]]
    if [xs, ys] in edges:
        if xe == 0 and ye == 0:
            return 0.25
        elif xs == xe and ys == ye:
            return 1.0
        elif [xe, ye] in edges:
            return 0.075
    if xs == 0 and ys == 0:
        if [xe, ye] in edges:
            return 0.3
        elif xe == 0 and ye == 0:
            return 1
    return 0
</code></pre>
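One way to sanity-check hard-coded return values like 0.25 or 0.3 is a Monte Carlo simulation of the walk (a sketch; the 4/13 figure is my own back-of-envelope calculation, not from the problem statement):

```python
import random

PLATFORM = {(-1, 0), (0, -1), (0, 0), (0, 1), (1, 0)}
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def estimate_probability(start, target, trials=200_000, seed=0):
    """Fraction of random walks from `start` that hit `target` before leaving PLATFORM."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x, y = start
        while True:
            dx, dy = rng.choice(MOVES)
            x, y = x + dx, y + dy
            if (x, y) == target:
                hits += 1
                break
            if (x, y) not in PLATFORM:
                break  # fell off the platform
    return hits / trials

# Centre to an edge vertex: 1/4 hits directly; otherwise the walk must bounce
# back through the centre, giving p = 1/4 + (3/16)p, i.e. p = 4/13 ≈ 0.308
print(estimate_probability((0, 0), (1, 0)))
```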
|
<python><math><probability><combinatorics>
|
2023-05-14 18:53:02
| 1
| 1,090
|
Legendary_Hunter
|
76,249,013
| 10,251,146
|
Polars explode columns, ComputeError columns mismatch
|
<p>I have a list column and two other columns. I would like to explode the list column while keeping the other two columns, e.g.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>a</th>
<th>b</th>
<th>list</th>
</tr>
</thead>
<tbody>
<tr>
<td>"q"</td>
<td>"p"</td>
<td>["hello","bye"]</td>
</tr>
</tbody>
</table>
</div>
<p>to</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>a</th>
<th>b</th>
<th>list</th>
</tr>
</thead>
<tbody>
<tr>
<td>"q"</td>
<td>"p"</td>
<td>"hello"</td>
</tr>
<tr>
<td>"q"</td>
<td>"p"</td>
<td>"bye"</td>
</tr>
</tbody>
</table>
</div>
<p>but with</p>
<pre><code>pl.select(
    pl.exclude("list"),
    pl.col("list").explode())
</code></pre>
<p>I get a length mismatch</p>
<p>What is the best way to do this?</p>
|
<python><python-polars>
|
2023-05-14 18:13:04
| 0
| 459
|
linus heinz
|
76,248,925
| 13,080,966
|
How can I delete a thread running a blocking socket function in python?
|
<p>I have a problem where I have created a thread running a blocking socket function (e.g. <code>accept</code>), and I need to shut down the thread and close the socket without getting a bunch of errors, so that I can use the socket again. I have created the following example to illustrate the problem:</p>
<pre><code>import threading
import socket

def accept_clients(a_socket):
    print("Accepting clients!")
    while True:
        a_socket.accept()
        print("A client connected!")

my_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
my_socket.bind(("127.0.0.1", 6000))
my_socket.listen(10)
threading.Thread(target=accept_clients, args=(my_socket,)).start()
</code></pre>
<p>When running this code I get the output <code>Accepting clients!</code>, and the thread accepts clients as expected and prints out the correct messages.</p>
<p>However when it comes time to close the thread I run the following line of code:</p>
<pre><code>my_socket.close()
</code></pre>
<p>I get these errors:</p>
<pre><code>Exception in thread Thread-1 (accept_clients):
Traceback (most recent call last):
  File "C:\Users\aliam\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1038, in _bootstrap_inner
    self.run()
  File "C:\Users\aliam\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 975, in run
    self._target(*self._args, **self._kwargs)
  File "<pyshell#4>", line 4, in accept_clients
  File "C:\Users\aliam\AppData\Local\Programs\Python\Python311\Lib\socket.py", line 294, in accept
    fd, addr = self._accept()
               ^^^^^^^^^^^^^^
OSError: [WinError 10038] An operation was attempted on something that is not a socket
</code></pre>
<p>I'm assuming that these errors are being caused because the accept method is still running in the thread while I've tried to close the socket.</p>
<p>I was wondering if there was a way I could close the thread and socket without getting a bunch of errors.</p>
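One common pattern (a sketch, with a stop flag and a wake-up connection; not the only option) is to signal the thread first, then make one dummy connection so the blocked `accept()` returns, and only close the socket after the thread has exited:

```python
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
server.listen(1)
stop = threading.Event()

def accept_clients(srv):
    while not stop.is_set():
        conn, _ = srv.accept()
        conn.close()

worker = threading.Thread(target=accept_clients, args=(server,))
worker.start()

# Shutdown: set the flag first, then connect once so accept() returns
stop.set()
socket.create_connection(server.getsockname()).close()
worker.join(timeout=5)
server.close()  # safe now: nothing is blocked on it

assert not worker.is_alive()
```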
|
<python><multithreading><sockets><python-multithreading>
|
2023-05-14 17:49:19
| 0
| 362
|
Ali Awan
|
76,248,909
| 9,681,577
|
How to delete the DataFrame rows with the largest number of NaNs?
|
<p>Pandas and other questions/answers on this site provide solutions for the case when we know the number of non-NaN values to preserve. How can I efficiently delete just the worst row, or rows if more than one ties for the worst?
The examples below show how to remove columns (it could be rows by setting the axis); however, they require specifying how many non-NaN values to keep.</p>
<pre><code>>>> import numpy as np
>>> df = pd.DataFrame([[1,np.nan,1,np.nan], [1,1,1,1], [1,np.nan,1,1], [np.nan,1,1,1]], columns=list('ABCD'))
A B C D
0 1.0 NaN 1 NaN
1 1.0 1.0 1 1.0
2 1.0 NaN 1 1.0
3 NaN 1.0 1 1.0
>>> df.dropna(thresh=3, axis=1)
A C D
0 1.0 1 NaN
1 1.0 1 1.0
2 1.0 1 1.0
3 NaN 1 1.0
</code></pre>
<p>Or to delete them altogether:</p>
<pre><code>>>> df.dropna(axis=1)
C
0 1
1 1
2 1
3 1
</code></pre>
<p><strong>Notice</strong>
I give more context below. While a hint toward a specific solution for that is welcome, I prefer an answer for the general case as stated in the title of the post.</p>
<p><strong>Context</strong>
I am looking for an efficient way to remove the row with the largest number of NaNs (or the rows, if there are ties at the largest number), and after that remove the column(s) analogously, repeating these two steps until all NaNs are removed.
The goal is to remove NaNs while preserving the maximum possible amount of data and keeping the table consistent, i.e., only entire row/column removal is allowed. Please read the notice above.</p>
<p>Examples above extracted from this answer:
<a href="https://stackoverflow.com/a/68306367/9681577">https://stackoverflow.com/a/68306367/9681577</a></p>
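The "worst row(s)" step by itself can be written directly from the per-row NaN counts (a sketch of that one step only; `drop_worst_nan_rows` is a hypothetical helper, and the column step would be analogous with `axis=0`):

```python
import numpy as np
import pandas as pd

def drop_worst_nan_rows(df):
    """Drop the row(s) tied for the highest NaN count; no-op if the frame has no NaNs."""
    counts = df.isna().sum(axis=1)  # NaNs per row
    worst = counts.max()
    if worst == 0:
        return df
    return df[counts < worst]

df = pd.DataFrame([[1, np.nan, 1, np.nan],
                   [1, 1, 1, 1],
                   [1, np.nan, 1, 1],
                   [np.nan, 1, 1, 1]], columns=list('ABCD'))

print(drop_worst_nan_rows(df).index.tolist())  # row 0 has two NaNs -> [1, 2, 3]
```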
|
<python><pandas><dataframe><nan><missing-data>
|
2023-05-14 17:46:56
| 1
| 794
|
dawid
|
76,248,901
| 616,507
|
Jupyter rendering code even with "hide-input" tag
|
<p>I have a Jupyter notebook that I'm using for data analysis, everything is working exactly as I expect as far as the actual analysis part. I'm trying to get my output to a simple report on the analysis, without showing the code that I used to generate the results / maps. I've tried adding a "hide-input" tag to the code block per <a href="https://jupyterbook.org/en/stable/interactive/hiding.html" rel="nofollow noreferrer">https://jupyterbook.org/en/stable/interactive/hiding.html</a>, but the code block is still being rendered as if the tag isn't there.</p>
<p>The metadata on the code block is:</p>
<pre><code>{
"trusted": true,
"tags": [
"hide-input"
]
}
</code></pre>
<p>I don't care as much about the display inside the Jupyter notebook, but when I render a PDF, I don't want the code input block showing, since it will only serve to confuse my audience (said audience has no experience with or even knowledge of Python and uses computers only to check email).</p>
|
<python><jupyter-notebook><jupyter>
|
2023-05-14 17:45:09
| 0
| 727
|
John
|
76,248,899
| 177,779
|
Code Completion not working properly in DataSpell?
|
<p>I have included three images highlighting the issue I've encountered with code completion in Dataspell. The image below shows how an instance of Jupyter running in a browser deals with code completion.</p>
<p><a href="https://i.sstatic.net/xHNlW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xHNlW.png" alt="enter image description here" /></a></p>
<p>In the example below, you can see that for an Axes object labelled as "ax1" DataSpell fails to offer options relating to that object in Matplotlib. Some options it offers (axvline, for example) are only included because they have been used elsewhere in the code.</p>
<p><a href="https://i.sstatic.net/DIypz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DIypz.png" alt="enter image description here" /></a></p>
<p>The issue seems to be that DataSpell does not recognise the type of ax1 (as can be seen from the image below). DataSpell seems to think that ax1 is an "Any" object. Jupyter running in the browser does not have this issue.</p>
<p><a href="https://i.sstatic.net/4VPyM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4VPyM.png" alt="enter image description here" /></a></p>
<p>The code completion seems to suffer from lots of similar failures to identify the type of the object and so only offers limited generic completion suggestions for the "Any" object.</p>
<p>EDIT:</p>
<p>Another user has kindly sent me an answer that suggests a workaround from 7 years ago which cured the problem for PyCharm. My hope is that the requirement for type hinting, in order to get the code-completion behaviour available in other IDEs and in Jupyter in the browser, has now disappeared.</p>
<p>I like lots of DataSpell's features, but the requirement to add type hints throughout the code, when other IDEs don't require this, would not be ideal. I'm hoping that I've simply missed an option that enables this common behaviour.</p>
|
<python><jetbrains-ide><code-completion><dataspell>
|
2023-05-14 17:45:04
| 1
| 2,331
|
Urizen
|
76,248,839
| 8,990,846
|
Using Github Actions with Lektor CMS
|
<p>Basically I have a Lektor project, and I'm trying to use <strong>Github Actions</strong> for deployment to <strong>Github Pages</strong></p>
<p><strong>But</strong> the publish always fails with this message <code>fatal: could not read Username for 'https://github.com': No such device or address</code></p>
<p>Here is the complete log :</p>
<pre><code>Run lektor deploy ghpages-https
/opt/hostedtoolcache/Python/3.11.3/x64/lib/python3.11/site-packages/lektor/publisher.py:657: DeprecationWarning: 'werkzeug.urls.url_parse' is deprecated and will be removed in Werkzeug 3.0. Use 'urllib.parse.urlsplit' instead.
url = urls.url_parse(str(target))
/opt/hostedtoolcache/Python/3.11.3/x64/lib/python3.11/site-packages/werkzeug/urls.py:545: DeprecationWarning: 'werkzeug.urls.URL' is deprecated and will be removed in Werkzeug 3.0. Use the 'urllib.parse' library instead.
return result_type(scheme, netloc, url, query, fragment)
Deploying to ghpages-https
Build cache: /home/runner/.cache/lektor/builds/1e169503b08805b6925804c56b34ca69
Target: ghpages+https://dzc0d3r/codeblog
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint: git config --global init.defaultBranch <name>
hint:
hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
hint: 'development'. The just-created branch can be renamed via this command:
hint:
hint: git branch -m <name>
Initialized empty Git repository in /home/runner/work/codeblog/codeblog/temp/.deploytempbbbozlcm/scratch/.git/
From https://github.com/dzc0d3r/codeblog
* [new branch] gh-pages -> origin/gh-pages
/opt/hostedtoolcache/Python/3.11.3/x64/lib/python3.11/site-packages/werkzeug/urls.py:170: DeprecationWarning: 'werkzeug.urls.url_decode' is deprecated and will be removed in Werkzeug 2.4. Use 'urllib.parse.parse_qs' instead.
return url_decode(self.query, *args, **kwargs)
fatal: could not read Username for 'https://github.com': No such device or address
Done!
</code></pre>
<p>The <code>publish.yml</code> file :</p>
<pre><code>name: Publish
on:
push:
branches:
- main
jobs:
publish:
name: Publish site
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v1
with:
python-version: "3.X"
- uses: actions/cache@v2
with:
key: ${{ github.ref }}
path: .cache
- name: Install dependencies
run: |
pip install --upgrade pip
pip install --upgrade setuptools
pip install -r requirements.txt
- name: Build site
run: |
lektor plugins reinstall
lektor build --no-prune
- name: Publish site
env:
LEKTOR_DEPLOY_USERNAME: ${{ secrets.LEKTOR_DEPLOY_USERNAME }}
LEKTOR_DEPLOY_PASSWORD: ${{ secrets.LEKTOR_DEPLOY_PASSWORD }}
run: |
lektor deploy ghpages-https
</code></pre>
<p>And this is the <code>codeblog.lektorproject</code> :</p>
<pre><code>[project]
name = codeblog
url = https://dzc0d3r.github.io/codeblog/
[servers.ghpages]
target = ghpages://dzc0d3r/codeblog
[servers.ghpages-https]
target = ghpages+https://dzc0d3r/codeblog
[packages]
lektor-minify = 1.2
lektor-disqus-comments = 0.4.1
lektor-tags = 0.3
</code></pre>
<p>Any idea, help, or step-by-step guide would be appreciated</p>
|
<python><github-actions><github-pages><lektor>
|
2023-05-14 17:29:59
| 1
| 2,485
|
WaLid LamRaoui
|
76,248,723
| 1,421,907
|
What a function which produces a matplotlib plot is supposed to return?
|
<p>I have functions or methods in a class that produce plots with matplotlib. I would like to use those functions in various contexts, but I can never be sure what the function is supposed to return. I will use these functions mostly in Jupyter notebooks, but also in widgets or a Qt app, for example.</p>
<p>Here is a minimal example:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# some interesting data
df = pd.DataFrame({c: np.random.randn(10) for c in "ABC"})
</code></pre>
<p>Here is the function. I used to take the matplotlib axes as an optional argument and return it. But sometimes (as in the example) I have a twin axes or other objects, so returning only one axes feels strange and I return the whole figure object instead.</p>
<pre><code>def make_fake_plot(
df: pd.DataFrame,
col_x: str,
col_y: str,
show_mean: bool = True,
ax: plt.Axes = None
):
""" make a plot
Args:
df (pd.DataFrame): The data
col_x (str): name of col for x
col_y (str): name of col for y
show_mean (bool): an option
ax (plt.Axes): axes on which make the plot
"""
if ax is None:
fig, ax = plt.subplots()
# What is best here ?
# ax = plt.subplot(1, 1, 1)
ax = df.plot.scatter(x=col_x, y=col_y, marker="d", s=20, ax=ax)
if show_mean:
ax2 = ax.twiny()
ax2.set_xlim(ax.get_xlim())
mu = df[col_x].mean()
ax2.axvline(mu, color="C3")
ax2.set_xticks([mu])
ax2.set_xticklabels(["mean"])
# is something is supposed to be returned ??
# return ax
# return fig
# return fig, ax
# nothing ??
</code></pre>
<p>This function works quite well in a Jupyter notebook. I can even put the plots in subplots even though the function does not return anything (which is another point I do not understand).</p>
<p>And here is an example using that function in widgets. But here it does not work: the plot is not updated. The problem is that if I return something from the function, then in the notebook figures sometimes appear twice, and in the widget, instead of updating the figure, the plots stack again and again, each below the previous one.</p>
<pre><code>import ipywidgets as ipw
from IPython.display import display
def show_nice_app(dataset):
output = ipw.Output()
dropdown_xcol = ipw.Dropdown(
options=dataset.columns,
value=dataset.columns[0],
description='X col:')
dropdown_ycol = ipw.Dropdown(
options=dataset.columns,
value=dataset.columns[1],
description='Y col:')
def dropdown_xcol_eventhandler(change):
with output:
output.clear_output(wait=True)
make_fake_plot(dataset, col_x=change.new, col_y=dropdown_ycol.value)
# display(fig)
def dropdown_ycol_eventhandler(change):
with output:
output.clear_output(wait=True)
make_fake_plot(dataset, col_x=dropdown_xcol.value, col_y=change.new)
# display(fig)
dropdown_xcol.observe(dropdown_xcol_eventhandler, names='value')
dropdown_ycol.observe(dropdown_ycol_eventhandler, names='value')
input_widgets = ipw.HBox([dropdown_xcol, dropdown_ycol])
all_widgets = ipw.VBox([input_widgets, output])
display(all_widgets)
with output:
output.clear_output(wait=True)
make_fake_plot(dataset, col_x=dropdown_xcol.value, col_y=dropdown_ycol.value)
</code></pre>
<p>At the end, these functions (widget and plots functions) are not supposed to be in the notebook. They are supposed to be in a library (in classes or simply in module) which will be imported in the notebook.</p>
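A common convention (a sketch, not the only valid one) is: create a figure only when no axes is supplied, and always return the <code>Figure</code>; the caller decides whether and how to display it. In the widget case, having the figure returned lets you <code>display(fig)</code> inside the <code>Output</code> context and <code>plt.close(fig)</code> afterwards, which avoids the duplicated/stacked renders described above:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

def make_plot(xs, ys, ax=None):
    # Reuse the caller's axes when given; otherwise make a fresh figure.
    if ax is None:
        fig, ax = plt.subplots()
    else:
        fig = ax.figure
    ax.plot(xs, ys)
    return fig

fig = make_plot([1, 2, 3], [2, 4, 6])
print(len(fig.axes))  # 1

# The caller composes subplots by passing axes in:
outer_fig, (a1, a2) = plt.subplots(1, 2)
make_plot([1, 2], [3, 4], ax=a1)
make_plot([1, 2], [4, 3], ax=a2)
plt.close("all")      # in notebooks, close figures you don't want shown twice
```

Returning the figure (rather than nothing) is what makes the commented-out `display(fig)` calls in the widget handlers work: the handler can clear the output, display the returned figure, and close it, so exactly one render appears per update.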
|
<python><matplotlib><ipywidgets>
|
2023-05-14 17:05:15
| 1
| 9,870
|
Ger
|
76,248,719
| 11,974,163
|
Is it possible to access another keyword in called function parameter?
|
<p>I'm not sure how to completely articulate what I'm wondering here so forgive any confusion.</p>
<p>What I want to know is: once I import a module and call a function, is it possible to access one parameter's value from inside another parameter of the same call?</p>
<p>So for example, I'm playing around with <code>pandas</code> at the moment and I've created a series with a basic block of code:</p>
<pre><code>import pandas as pd
data = [40, 70, 90]
series = pd.Series(data=data, name='marks', index=[i for i in range(0, len(data))])
</code></pre>
<p>But I want to see if it's possible to do something like this:</p>
<pre><code>import pandas as pd
series = pd.Series(data=[40, 70, 90], name='marks', index=[i for i in range(0, len(__get_value_from_data_keyword__))])
</code></pre>
<p>where <code>__get_value_from_data_keyword__</code> returns the value of the <code>data</code> keyword in the same function. So <code>__get_value_from_data_keyword__</code> equals <code>[40, 70, 90]</code></p>
<p>Is this possible to do at all?</p>
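There is no built-in way for one argument expression to look up another argument of the same call by name, but because call arguments are evaluated left to right, an assignment expression (the <code>:=</code> walrus operator, Python 3.8+) in an earlier argument is visible to later ones. A sketch with a plain function standing in for <code>pd.Series</code> (the name <code>data_</code> is mine):

```python
def make_series(data, name, index):
    # Stand-in for pd.Series(...) so the sketch needs no third-party imports.
    return {"data": data, "name": name, "index": index}

# `data_ := [...]` binds data_ before the `index` argument is evaluated.
series = make_series(
    data=(data_ := [40, 70, 90]),
    name="marks",
    index=[i for i in range(len(data_))],
)
print(series)  # {'data': [40, 70, 90], 'name': 'marks', 'index': [0, 1, 2]}
```

That said, binding the list to a variable on its own line first (as in the first snippet) is usually clearer than the walrus version.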
|
<python><oop>
|
2023-05-14 17:04:07
| 2
| 457
|
pragmatic learner
|
76,248,637
| 8,477,566
|
Fields are missing when I `pip show` my Python package
|
<p>I recently uploaded my first Python package to PyPI. The relevant parts of <code>pyproject.toml</code> are defined as follows (full file available <a href="https://github.com/jakelevi1996/jutility/blob/main/pyproject.toml" rel="nofollow noreferrer">here</a>):</p>
<pre><code>[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"
[project]
name = "jutility"
version = "0.0.4"
license = {text = "MIT License"}
authors = [
{name = "Jake Levi", email = "jakelevi@hotmail.co.uk"},
]
# ...
[project.urls]
homepage = "https://github.com/jakelevi1996/jutility"
</code></pre>
<p>After installing this package with the command <code>python3 -m pip install -U jutility</code>, I run the command <code>python3 -m pip show jutility</code>, and get the following output:</p>
<pre><code>Name: jutility
Version: 0.0.4
Summary: Collection of Python utilities intended to be useful for machine learning research and experiments
Home-page:
Author:
Author-email: Jake Levi <jakelevi@hotmail.co.uk>
License: MIT License
Location: /usr/local/lib/python3.10/dist-packages
Requires: matplotlib, numpy, Pillow
Required-by:
</code></pre>
<p>Notably, the <code>Home-page</code> and <code>Author</code> fields are empty in the output from <code>pip show</code>, although they seem to be defined in <code>pyproject.toml</code>.</p>
<p>How should I change <code>pyproject.toml</code> to make these fields display properly in the <code>pip show</code> output?</p>
<p>Version-wise, I built and uploaded these packages to PyPI on my Windows 10 PC with Python 3.7.6, but I also tried downloading and installing this package and displaying the <code>pip show</code> output from a Google Colab notebook with Python 3.10.11. The package works completely as expected in the Colab notebook, but I get the same <code>pip show</code> output with empty <code>Home-page</code> and <code>Author</code> fields. I'd just like to know what I need to change in order to get these fields to display properly.</p>
|
<python><pip><setuptools><pypi><python-packaging>
|
2023-05-14 16:47:26
| 2
| 1,950
|
Jake Levi
|
76,248,287
| 1,955,215
|
Replacing the Print Command in a python script with a comment
|
<p>I have the command below in a .py file</p>
<pre><code>print(r"\n print series")
</code></pre>
<p>I want the above command/line to be replaced with a string:</p>
<p>"#series" ie convert the above print command into a comment prefixing the # symbol to the text following the "print" in the command above.</p>
<p>I am new to python. Please help.</p>
<p>This question was closed earlier as it was not clear what I was attempting to do. Hence, the explanation that follows:</p>
<p>The reason for doing this: I want to use print statements in a script while learning Python / using it as a reference. Later, I want to change the print statements into comments. Hence, I want to replace the beginning of each print statement with a # that turns it into a comment. As there would be many such conversions, I wanted to do it programmatically using a string.replace or something else. Any suggestions to achieve this in another way are also welcome. Thanks.</p>
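One way to approach this is a regular expression pass over the file's text; a minimal sketch using only the standard library (the exact comment format is an assumption, and this deliberately handles only single-line <code>print(...)</code> calls):

```python
import re

def comment_out_prints(source: str) -> str:
    # Turn each single-line `print(<args>)` statement into a `# <args>`
    # comment, preserving the original indentation.
    return re.sub(r"^(\s*)print\((.*)\)\s*$", r"\1# \2",
                  source, flags=re.MULTILINE)

code = 'print(r"\\n print series")'
print(comment_out_prints(code))  # -> # r"\n print series"
```

For real scripts you would read the <code>.py</code> file, run the whole text through this, and write it back; note that multi-line print calls would need a proper parser (e.g. the <code>ast</code>/<code>tokenize</code> modules) rather than a regex.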
|
<python><string><replace>
|
2023-05-14 15:31:10
| 2
| 763
|
user1955215
|
76,248,276
| 21,309,333
|
How to reinterpret bits of float as int in python
|
<p>I had to write code that printed out the bits of a floating-point number (in hexadecimal). For C++, the first thing that came to mind was
<code>std::cout << std::hex << *reinterpret_cast<uint32_t*>(&x);</code>
where x is a floating point variable that stores the original number.</p>
<p>However, when I tried the same problem in Python, I just couldn't come up with a Pythonic way to solve it (using non-built-in libraries is not allowed). My best attempts led to something like this:</p>
<pre><code>from ctypes import Union, c_uint32, c_float
class conversion(Union):
_fields_ = [("int", c_uint32),
("float", c_float)]
v = conversion()
v.float = float(input())
print(hex(v.int)[2:])
</code></pre>
<p>which is basically C code.</p>
<p>So, is there more "python" way to do this? Did recent python versions introduce something that could be useful for this case?</p>
|
<python><floating-point><integer><bit>
|
2023-05-14 15:28:48
| 1
| 365
|
God I Am Clown
|
76,248,183
| 4,720,957
|
Why calling `print(self)` in constructor of a parent class is causing RuntimeError?
|
<p>Here's the traceback:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "/home/neon/pyqt6/myqt/src/app.py", line 13, in <module>
window = AppWindow()
^^^^^^^^^^^
File "/home/neon/pyqt6/myqt/src/app.py", line 7, in __init__
super(AppWindow, self).__init__()
File "/home/neon/pyqt6/myqt/src/ui_mainwindow.py", line 5, in __init__
print(self)
RuntimeError: '__init__' method of object's base class (AppWindow) not called.
</code></pre>
<p>I'm trying to develop a personal Qt6-based app in Python 3.11. At some point I got stuck and decided to <code>print(self)</code> in one of the parents of a subclass, and that's what caused the above runtime error.</p>
<p>I've seen the <a href="https://stackoverflow.com/q/30114436/4720957">duplicate</a>. I have made attempts below to show why this runtime error should not happen in my opinion.</p>
<p>Here's how to reproduce the error:</p>
<ol>
<li>Install <code>PySide6</code> module in a virtual environment</li>
<li>Make file <code>ui_mainwindow.py</code> with following content:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>from PySide6.QtWidgets import QLineEdit
class Ui_MainWindow(object):
def __init__(self):
# This call causes error
print(self)
# A sample widget. Irrelevant for error
def makeui(self):
self.line_edit = QLineEdit()
</code></pre>
<ol start="3">
<li>Make file <code>app.py</code> with following content:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>from PySide6.QtWidgets import QMainWindow, QApplication
from ui_mainwindow import Ui_MainWindow
import sys
class AppWindow(QMainWindow, Ui_MainWindow):
def __init__(self):
super(AppWindow, self).__init__()
self.makeui()
self.setCentralWidget(self.line_edit)
if __name__ == "__main__":
app = QApplication(sys.argv)
window = AppWindow()
window.show()
app.exec()
</code></pre>
<ol start="4">
<li>Run app.py as <code>python app.py</code> and see the error reproduced.</li>
</ol>
<p>Here's something unexpected. If I replace the call <code>print(self)</code> with anything else, let's say <code>print(self.__class__.mro())</code> or <code>print("Hello World")</code>, then no error occurs and the print call works!</p>
<p>I'm unable to make sense of why <code>print(self)</code> is causing runtime error with that message.</p>
<hr />
<p>I also tried to replicate the problem without qt6 and with only python based objects. Here's what I also tried and no error occurred even when <code>print(self)</code> was called in one parent's constructor.</p>
<ol>
<li>Contents of main.py</li>
</ol>
<pre class="lang-py prettyprint-override"><code>from class_d import D
class A(object):
def __init__(self):
super().__init__()
class B(A):
def __init__(self):
super().__init__()
class C(B):
def __init__(self):
super().__init__()
class E(C, D):
def __init__(self):
super(E, self).__init__()
e = E()
</code></pre>
<ol start="2">
<li>Contents of class_d.py</li>
</ol>
<pre class="lang-py prettyprint-override"><code>class D(object):
def __init__(self):
print(self) # this call works as expected
</code></pre>
<p>Doing <code>python main.py</code> happily prints the repr of E object.</p>
<p>So why is that when qt objects are involved I am getting runtime error?</p>
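For context on why <code>print(self)</code> specifically is the trigger: it invokes <code>self.__repr__</code>, and PySide6's Shiboken-generated <code>__repr__</code> needs the underlying C++ object, which presumably has not been attached yet at that point in the <code>__init__</code> chain — hence the RuntimeError, while <code>print("Hello World")</code> or <code>print(self.__class__.mro())</code> never touch the instance's repr. A rough pure-Python analogue of that failure mode (this is an illustration, not PySide6's actual internals):

```python
class Native:
    """Stand-in for a class whose __repr__ needs base-class setup."""
    def __init__(self):
        self._ready = True
        super().__init__()

    def __repr__(self):
        # Mimic Shiboken: repr is only valid once the "native" init ran.
        if not getattr(self, "_ready", False):
            raise RuntimeError("'__init__' method of base class not called")
        return "<Native ready>"

class Ui:
    """Stand-in for Ui_MainWindow: printing self invokes __repr__."""
    def __init__(self):
        print(self)          # calls __repr__ on a possibly half-built object
        super().__init__()

class Safe(Native, Ui):      # Native.__init__ runs before Ui's print
    pass

class Broken(Ui, Native):    # Ui's print fires before Native.__init__
    pass

Safe()                       # prints <Native ready>
try:
    Broken()
except RuntimeError as e:
    print("error:", e)
```

The pure-Python A/B/C/D experiment doesn't reproduce the problem because plain <code>object</code>'s default <code>__repr__</code> has no such precondition; only the Qt-backed classes carry one.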
|
<python><python-3.x><runtime-error><pyside><pyside6>
|
2023-05-14 15:07:45
| 0
| 1,229
|
user47
|
76,247,903
| 7,210,908
|
how to make pytorch lstm model for two input parameters
|
<p>I have a project where the future number of passengers is predicted using an LSTM. <a href="https://github.com/tyt34/forecasting-pytorch-passengeres/blob/master/main_2.ipynb" rel="nofollow noreferrer">full program link</a>.
The input data is the passenger traffic for previous periods, of this kind:</p>
<pre><code>cd = [112, 118, 132, 129, 121, 135, ...]
</code></pre>
<p>The PyTorch model for training has the following code:</p>
<pre><code>class LSTM(nn.Module):
def __init__(
self,
num_classes,
input_size,
hidden_size,
num_layers
):
super(LSTM, self).__init__()
self.num_classes = num_classes
self.num_layers = num_layers
self.input_size = input_size
self.hidden_size = hidden_size
self.seq_length = seq_length
self.lstm = nn.LSTM(
input_size=input_size,
hidden_size=hidden_size,
num_layers=num_layers,
batch_first=True
)
self.fc = nn.Linear(hidden_size, num_classes)
def forward(self, x):
h_0 = Variable(torch.zeros(self.num_layers, x.size(0), self.hidden_size))
c_0 = Variable(torch.zeros(self.num_layers, x.size(0), self.hidden_size))
ula, (h_out, _) = self.lstm(x, (h_0, c_0))
h_out = h_out.view(-1, self.hidden_size)
out = self.fc(h_out)
return out
</code></pre>
<p>This model works, and I have a question: what needs to be changed in this model so that it can take two parameters as input? For example, an additional array counting the number of days off per month:</p>
<pre><code>an = [11, 9, 8, 9, 9, 8, ...]
</code></pre>
<p>In the files "main_3" and "chat_model" (in the repo on GitHub) I tried some approaches, but nothing worked.</p>
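With <code>nn.LSTM</code>, extra per-timestep signals are generally added as extra features rather than extra models: set <code>input_size=2</code> and feed tensors of shape <code>(batch, seq_len, 2)</code> (with <code>batch_first=True</code> as in the question's model). A minimal sketch of preparing such input from the two lists, using NumPy for illustration so it runs without PyTorch (the names <code>cd</code>/<code>an</code> are from the question):

```python
import numpy as np

cd = [112, 118, 132, 129, 121, 135]  # passenger counts
an = [11, 9, 8, 9, 9, 8]             # days off per month

# Stack feature-wise: one row per timestep, one column per feature.
x = np.stack([cd, an], axis=1).astype(np.float32)
print(x.shape)        # (6, 2)

# Add a batch dimension -> the layout nn.LSTM(batch_first=True) expects.
batch = x[np.newaxis, ...]
print(batch.shape)    # (1, 6, 2)
```

From there, <code>torch.from_numpy(batch)</code> can be fed to the existing model once its <code>input_size</code> is 2; the hidden size, layers, and the final linear head need no other change. Scaling each feature separately (e.g. min-max per column) is usually worthwhile since the two series have very different ranges.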
|
<python><pytorch><lstm>
|
2023-05-14 14:09:28
| 0
| 320
|
Roma N
|
76,247,864
| 12,783,363
|
Shortening of Youtube playlist URL
|
<p>I'm trying to automate a YouTube description with Python, and I'd like to shorten a YouTube playlist URL.</p>
<p>Normally a non-playlist URL goes <a href="https://www.youtube.com/watch?v=%7Bvideo_id%7D" rel="nofollow noreferrer">https://www.youtube.com/watch?v={video_id}</a> and can be shortened to youtu.be/{video_id}.</p>
<p>A playlist URL goes <a href="https://www.youtube.com/watch?v=%7Bvideo_id%7D&list=.." rel="nofollow noreferrer">https://www.youtube.com/watch?v={video_id}&list=..</a>. and shortening it to youtu.be/{video_id} responds with a non-playlist page.</p>
<p>An alternative I found is to use other websites and convert to URL 3rd-party-domain/{alias-name}. However, I've been informed this isn't ideal as youtube doesn't like other references and that it would hurt the video ranking (not too familiar with these).</p>
<p>So, as the title goes: how do we shorten a YouTube playlist URL, preferably with a YouTube domain?</p>
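One thing worth trying: <code>youtu.be</code> short links appear to accept the same <code>list</code> query parameter as the full watch URL, so <code>youtu.be/{video_id}?list={playlist_id}</code> may keep the playlist context while staying on a YouTube domain (verify in a browser for your case). A sketch of rebuilding such a link with only the standard library:

```python
from urllib.parse import urlparse, parse_qs

def shorten_watch_url(url: str) -> str:
    # Extract v= and (optionally) list= from a youtube.com/watch URL
    # and rebuild the link on the youtu.be domain.
    query = parse_qs(urlparse(url).query)
    short = f"https://youtu.be/{query['v'][0]}"
    if "list" in query:
        short += f"?list={query['list'][0]}"
    return short

print(shorten_watch_url(
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ&list=PL123"
))
# https://youtu.be/dQw4w9WgXcQ?list=PL123
```

If the description should open the playlist itself rather than one video, the <code>https://www.youtube.com/playlist?list={playlist_id}</code> form is another YouTube-domain option.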
|
<python><automation><youtube>
|
2023-05-14 14:00:43
| 1
| 916
|
Jobo Fernandez
|
76,247,759
| 7,577,786
|
Errors thrown when my python-telegram-bot is doing run_polling are not logged to console
|
<p>I'm trying to test my telegram bot made with <code>python-telegram-bot</code> using <code>unittest</code>. The problem is that after I do <code>updater.run_polling()</code>, any errors raised thereafter simply cause the console to freeze here:</p>
<pre><code>EWARNING:telegram.ext.Application:Fetching updates got a asyncio.CancelledError. Ignoring as this task may onlybe closed via `Application.stop`.
DEBUG:telegram.ext.Updater:Network loop retry getting Updates was cancelled
</code></pre>
<p>After I do Ctrl + C, it then shows this traceback, with no mention of the actual error:</p>
<pre><code>EWARNING:telegram.ext.Application:Fetching updates got a asyncio.CancelledError. Ignoring as this task may onlybe closed via `Application.stop`.
DEBUG:telegram.ext.Updater:Network loop retry getting Updates was cancelled
^CDEBUG:asyncio:Close <_UnixSelectorEventLoop running=False closed=False debug=True>
Traceback (most recent call last):
File "/Users/nathan/development/python/ptbtest/examples/test_my.py", line 32, in <module>
unittest.main()
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/main.py", line 102, in __init__
self.runTests()
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/main.py", line 274, in runTests
self.result = testRunner.run(self.test)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/runner.py", line 217, in run
test(result)
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/suite.py", line 84, in __call__
return self.run(*args, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/suite.py", line 122, in run
test(result)
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/suite.py", line 84, in __call__
return self.run(*args, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/suite.py", line 122, in run
test(result)
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/case.py", line 678, in __call__
return self.run(*args, **kwds)
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/async_case.py", line 133, in run
self._tearDownAsyncioRunner()
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/async_case.py", line 126, in _tearDownAsyncioRunner
runner.close()
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/asyncio/runners.py", line 71, in close
_cancel_all_tasks(loop)
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/asyncio/runners.py", line 201, in _cancel_all_tasks
loop.run_until_complete(tasks.gather(*to_cancel, return_exceptions=True))
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/asyncio/base_events.py", line 640, in run_until_complete
self.run_forever()
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/asyncio/base_events.py", line 607, in run_forever
self._run_once()
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/asyncio/base_events.py", line 1881, in _run_once
event_list = self._selector.select(timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/selectors.py", line 561, in select
kev_list = self._selector.control(None, max_ev, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyboardInterrupt
ERROR:asyncio:Task was destroyed but it is pending!
source_traceback: Object created at (most recent call last):
File "/Users/nathan/development/python/ptbtest/examples/test_my.py", line 32, in <module>
unittest.main()
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/main.py", line 102, in __init__
self.runTests()
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/main.py", line 274, in runTests
self.result = testRunner.run(self.test)
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/runner.py", line 217, in run
test(result)
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/suite.py", line 84, in __call__
return self.run(*args, **kwds)
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/suite.py", line 122, in run
test(result)
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/suite.py", line 84, in __call__
return self.run(*args, **kwds)
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/suite.py", line 122, in run
test(result)
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/case.py", line 678, in __call__
return self.run(*args, **kwds)
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/async_case.py", line 131, in run
return super().run(result)
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/case.py", line 623, in run
self._callTestMethod(testMethod)
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/async_case.py", line 90, in _callTestMethod
if self._callMaybeAsync(method) is not None:
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/unittest/async_case.py", line 112, in _callMaybeAsync
return self._asyncioRunner.run(
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/asyncio/base_events.py", line 640, in run_until_complete
self.run_forever()
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/asyncio/base_events.py", line 607, in run_forever
self._run_once()
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/asyncio/base_events.py", line 1911, in _run_once
handle._run()
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/asyncio/events.py", line 80, in _run
self._context.run(self._callback, *self._args)
File "/Users/nathan/development/python/ptbtest/examples/test_my.py", line 23, in test
await application.start()
File "/Users/nathan/.pyenv/versions/venvptbtest-3.11.1/lib/python3.11/site-packages/telegram/ext/_application.py", line 566, in start
self.__update_fetcher_task = asyncio.create_task(
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/asyncio/tasks.py", line 374, in create_task
task = loop.create_task(coro)
task: <Task cancelling name='Task-6' coro=<Application._update_fetcher() running at /Users/nathan/.pyenv/versions/venvptbtest-3.11.1/lib/python3.11/site-packages/telegram/ext/_application.py:1050> wait_for=<Future pending cb=[Task.task_wakeup()] created at /Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/asyncio/base_events.py:427> cb=[gather.<locals>._done_callback() at /Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/asyncio/tasks.py:754] created at /Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/asyncio/tasks.py:374>
Exception ignored in: <coroutine object Application._update_fetcher at 0x10e9b0c80>
Traceback (most recent call last):
File "/Users/nathan/.pyenv/versions/venvptbtest-3.11.1/lib/python3.11/site-packages/telegram/ext/_application.py", line 1050, in _update_fetcher
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/asyncio/queues.py", line 160, in get
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/asyncio/base_events.py", line 761, in call_soon
File "/Users/nathan/.pyenv/versions/3.11.1/lib/python3.11/asyncio/base_events.py", line 519, in _check_closed
RuntimeError: Event loop is closed
</code></pre>
<p>How do I make it show me the errors? I tried adding an error handler but to no avail (it doesn't print anything):</p>
<pre class="lang-py prettyprint-override"><code>def error_handler(update, context):
print('ERROR ERROR ERROR ERROR ERROR ERROR ERROR')
print(context)
...
application.add_error_handler(error_handler)
</code></pre>
<p>My full code:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import absolute_import
import unittest
import pytest
from telegram.ext import Application
import logging
def error_handler(update, context):
print('ERROR ERROR ERROR ERROR ERROR ERROR ERROR')
print(context)
class TestConversationbot2(unittest.IsolatedAsyncioTestCase):
@pytest.mark.asyncio
async def test(self):
logging.basicConfig(level=logging.DEBUG)
application = (
Application
.builder()
.token('MY_TELEGRAM_BOT_TOKEN')
.build()
)
application.add_error_handler(error_handler)
await application.initialize()
await application.start()
await application.updater.start_polling()
raise BaseException('my error')
await application.updater.stop()
await application.stop()
await application.shutdown()
return None
if __name__ == '__main__':
unittest.main()
</code></pre>
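Two observations, hedged as guesses since the library internals aren't visible in this trace: python-telegram-bot's error handlers are only invoked for exceptions raised while processing updates, not for exceptions raised in your own test coroutine (so the handler above is never called); and raising <code>BaseException</code> bypasses most <code>except Exception</code> blocks, including much of asyncio's cleanup logging. Wrapping the test body yourself guarantees the traceback is logged before teardown swallows it — a minimal sketch without the library:

```python
import asyncio
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("bot-test")

caught = None

async def run_bot():
    # Stand-in for the polling/test body; mimics the vanished exception.
    raise RuntimeError("my error")

async def main():
    try:
        await run_bot()
    except BaseException:
        # Log with full traceback *before* the event loop is torn down.
        log.exception("unhandled error during polling")
        raise

try:
    asyncio.run(main())
except RuntimeError as exc:
    caught = exc

print("caught:", caught)  # caught: my error
```

Applied to the test above, the <code>try/except BaseException: log.exception(...)</code> wrapper would go around everything between <code>application.initialize()</code> and <code>application.shutdown()</code>, with the stop/shutdown calls in a <code>finally</code> block so the updater is always torn down cleanly.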
|
<python><telegram><python-unittest><python-telegram-bot><long-polling>
|
2023-05-14 13:38:51
| 0
| 757
|
Nathan Tew
|
76,247,224
| 13,793,478
|
how do I make the link from database clickable and get the content of the url
|
<p>This is my <code>views.py</code>:</p>
<pre><code>from django.shortcuts import render, redirect
from base.models import Prescription
def orders(request):
orders = Prescription.objects.all()
return render(request, 'base/orders.html', {
'orders': orders,
})
</code></pre>
<p>The link in question is the last one (<code>order.presc.url</code>):</p>
<pre><code>{% for order in orders %}
<tr>
<td>{{order.id}}</td>
<td>{{order.phone}}</td>
<td>{{order.presc}}</td>
</tr>
{% endfor %}
</code></pre>
|
<python><django><django-models>
|
2023-05-14 11:27:24
| 1
| 514
|
Mt Khalifa
|
76,247,167
| 3,789,665
|
Efficient creation of a sequence of sets from values and thresholds
|
<p>Given a shortish sequence of thresholds in ascending order and numerous (unordered) values.</p>
<p>The wanted result is a sequence of <code>set</code>s: the first containing all distinct values below the lowest/first threshold; the next, values not below the lowest threshold but below the 2nd threshold, if any; and so on till the last threshold; finally, all values not below the highest threshold.</p>
<p>There are similar questions about <code>dict</code>s (pointers to <em>helpful</em> solutions there welcome, too),<br />
with suggestions amounting to
<pre><code>from itertools import pairwise
def partition(values, thresholds):
""" Partition values into a list of sets
with values in right-open intervals specified by thresholds.
"""
return [ { v for v in values if v < thresholds[0] }
] + [ { v for v in values if lo <= v < hi }
for lo, hi in tuple(pairwise(thresholds))
] + [ { v for v in values if thresholds[-1] <= v } ]
</code></pre>
<p>This "iterates" <code>values</code> <code>len(thresholds)+1</code> times.</p>
<p>How to efficiently create a sequence of <code>set</code>s partitioning <em>values</em> according to <em>thresholds</em>?</p>
<p>I failed to find anything helpful in SciPy/NumPy.<br />
Using <code>numpy.digitize()</code> to index an array of <code>add()</code> members was in the ballpark of <a href="/a/76248126/3789665"><code>partition_Kelly3b()</code></a> for non-trivial values <em>and</em> thresholds.</p>
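For a single pass over <em>values</em>, the standard library's <code>bisect</code> gives each value its bucket index in O(log len(thresholds)) — essentially what the <code>numpy.digitize()</code> attempt approximates, without the array round-trip:

```python
from bisect import bisect_right

def partition_bisect(values, thresholds):
    # One set per right-open interval:
    # (-inf, t0), [t0, t1), ..., [t_last, +inf)
    sets = [set() for _ in range(len(thresholds) + 1)]
    for v in values:
        # bisect_right sends v == t_i into the upper bucket,
        # matching the "not below threshold" semantics.
        sets[bisect_right(thresholds, v)].add(v)
    return sets

print(partition_bisect([1, 5, 7, 10, 15, 5], [5, 10]))
# [{1}, {5, 7}, {10, 15}]
```

Each value is examined once, versus <code>len(thresholds)+1</code> passes in the comprehension-based <code>partition()</code> above.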
|
<python><python-3.x><numpy><performance>
|
2023-05-14 11:12:40
| 2
| 2,642
|
greybeard
|
76,247,145
| 14,401,160
|
How to set Content-Type header with boto3 presigned url multipart upload
|
<p>Is there any way to allow <code>Content-Type</code> header with multipart uploads to presigned s3 url?</p>
<p>Let's begin with the following code:</p>
<pre class="lang-py prettyprint-override"><code>import boto3
import requests
BUCKET_NAME = "foo"
# No, it's global in this MRE only
client = boto3.client('s3')
def create(key):
response = client.create_multipart_upload(Bucket=BUCKET_NAME, Key=key)
return response['UploadId']
def get_url(key, upload_id, chunk_number):
signed_url = client.generate_presigned_url(
ClientMethod='upload_part',
Params={
"Bucket": BUCKET_NAME,
"Key": key,
"PartNumber": chunk_number,
"UploadId": upload_id,
# "ContentType": "application/x-www-form-urlencoded",
},
ExpiresIn=60 * 60, # seconds
)
return signed_url
def complete(key, upload_id, parts):
client.complete_multipart_upload(
Bucket=BUCKET_NAME,
Key=key,
UploadId=upload_id,
MultipartUpload={"Parts": parts}
)
def test_upload():
key = 'data/foo.bar'
upload_id = create(key)
url = get_url(key, upload_id, 1)
with open("/tmp/foo.bar", "rb") as src:
response = requests.put(url, data=src.read())
etag = response.headers["ETag"]
complete(key, upload_id, {"ETag": etag, "PartNumber": 1})
</code></pre>
<p>Hooray, this works. However, let's try to do the same from the frontend, replacing the <code>requests</code> call with</p>
<pre><code>fetch(uploadTo, {
method: 'PUT',
body: blob,
})
</code></pre>
<p>(no matter how blob is defined here, this is irrelevant to our problem).</p>
<p>And this fails, returning 403 <code>SignatureDoesNotMatch</code>. Why? Because the <code>Content-Type</code> header <em>is</em> set (and <code>fetch</code> <a href="https://stackoverflow.com/questions/71670213/tell-fetch-to-not-send-a-content-type-header-at-all">cannot do without it</a>), and this header is part of the S3-side signature verification. <code>Content-Type</code> is not part of the generated URL, so whatever content type <code>fetch</code> tries to set will not match. I know this is the case because here's what the response looks like (only the URL is different; ignore these inconsistencies - <code>uploadId=1</code> is just a fake, and the same thing happens with the real URL; pay attention to the <strong>StringToSign</strong> tag):</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8"?>
<Error><Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you
provided. Check your key and signing method.</Message>
<AWSAccessKeyId>...</AWSAccessKeyId>
<StringToSign>PUT
application/x-www-form-urlencoded
1684064749
x-amz-security-token:FwoGZXIvYXdzEOT//////////wEaDE+2gqVw4NTt1c2eOCKGAVTf4uDCD+GJ8P6lG2vBg8yQ2dqyU7/6aHg4hXMljyDFByT7hJ1/F/GPwBi84eAMDZqGzXpIySe8PhU80ak5C4vg7vcGOOSaB3cXk7TtQ2q0pWb8MB0AYb3LGAJ6sahySjHSdArFFADB60u6SskWhq9HHSijilW9hKIiUgdceZAPhLH1J59oKITngqMGMijigFpTERPtZLB+MOjIqJIpHvPJrrfRg4mzwAmZbk+rropyYha4rBNP
/sim-cal-bucket/temp/foo.mp4?partNumber=1&amp;uploadId=1</StringToSign>
<SignatureProvided>miwrnxtxoPdGnuAEqiP52ZMscBQ=</SignatureProvided>
<StringToSignBytes>50 55 54 0a 0a 61 70 70 6c 69 63 61 74 69 6f 6e 2f 78 2d
77 77 77 2d 66 6f 72 6d 2d 75 72 6c 65 6e 63 6f 64 65 64 0a 31 36 38 34 30
36 34 37 34 39 0a 78 2d 61 6d 7a 2d 73 65 63 75 72 69 74 79 2d 74 6f 6b 65
6e 3a 46 77 6f 47 5a 58 49 76 59 58 64 7a 45 4f 54 2f 2f 2f 2f 2f 2f 2f 2f
2f 2f 77 45 61 44 45 2b 32 67 71 56 77 34 4e 54 74 31 63 32 65 4f 43 4b 47
41 56 54 66 34 75 44 43 44 2b 47 4a 38 50 36 6c 47 32 76 42 67 38 79 51 32
64 71 79 55 37 2f 36 61 48 67 34 68 58 4d 6c 6a 79 44 46 42 79 54 37 68 4a
31 2f 46 2f 47 50 77 42 69 38 34 65 41 4d 44 5a 71 47 7a 58 70 49 79 53 65
38 50 68 55 38 30 61 6b 35 43 34 76 67 37 76 63 47 4f 4f 53 61 42 33 63 58
6b 37 54 74 51 32 71 30 70 57 62 38 4d 42 30 41 59 62 33 4c 47 41 4a 36 73
61 68 79 53 6a 48 53 64 41 72 46 46 41 44 42 36 30 75 36 53 73 6b 57 68 71
39 48 48 53 69 6a 69 6c 57 39 68 4b 49 69 55 67 64 63 65 5a 41 50 68 4c 48
31 4a 35 39 6f 4b 49 54 6e 67 71 4d 47 4d 69 6a 69 67 46 70 54 45 52 50 74
5a 4c 42 2b 4d 4f 6a 49 71 4a 49 70 48 76 50 4a 72 72 66 52 67 34 6d 7a 77
41 6d 5a 62 6b 2b 72 72 6f 70 79 59 68 61 34 72 42 4e 50 0a 2f 73 69 6d 2d
63 61 6c 2d 62 75 63 6b 65 74 2f 74 65 6d 70 2f 66 6f 6f 2e 6d 70 34 3f 70
61 72 74 4e 75 6d 62 65 72 3d 31 26 75 70 6c 6f 61 64 49 64 3d 31</StringToSignBytes>
<RequestId>073V6QJXMA0XAKWS</RequestId>
<HostId>1pR1Pz4RSnRilgjUbb0AVDcMWiqCq05dMrAVU+0t4a0HF5ytfXmNiIecxH80urVoiKtxtHhxS2o=</HostId>
</Error>
</code></pre>
<p>So, we need to pass a <code>Content-Type</code> to the signed URL somehow. Neither <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3/client/generate_presigned_url.html" rel="nofollow noreferrer"><code>generate_presigned_url</code></a> nor its <code>Params</code> (which must match the params of <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3/client/upload_part.html" rel="nofollow noreferrer"><code>upload_part</code></a>) accepts a <code>ContentType</code> option. This looks like a dead end... To double confirm, here's <a href="https://stackoverflow.com/questions/24684959/pre-signed-s3-url-signature-does-not-match">what works in JS</a> - <code>ContentType</code> is passed to the signer.</p>
<p>Well, for now I'm just monkeypatching botocore to allow passing <code>ContentType</code> parameter to <code>upload_part</code> (cloned <a href="https://github.com/boto/botocore/blob/d95d441ef9b81ac4d0163fa97e736edf63a8b4b3/botocore/data/s3/2006-03-01/service-2.json" rel="nofollow noreferrer"><code>botocore/data/s3/2006-03-01/service-2.json</code></a> and added this parameter to <code>UploadPartRequest</code> definition, patching this file in venv in Dockerfile), but it's certainly not what I want. However, this confirms that I <em>really</em> need to pass <code>ContentType</code>, and no other solution can allow setting this header. After uncommenting <code>ContentType</code> key in the sample above, everything is fine.</p>
<p>Just to compare, below are the URLs without and with the content type: the argument is included directly in the URL. The latter URL works flawlessly with frontend <code>fetch</code>.</p>
<pre><code>https://sim-cal-bucket.s3.amazonaws.com/temp/foo.mp4?partNumber=1&uploadId=1&AWSAccessKeyId=...&Signature=miwrnxtxoPdGnuAEqiP52ZMscBQ%3D&x-amz-security-token=FwoGZXIvYXdzEOT%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaDE%2B2gqVw4NTt1c2eOCKGAVTf4uDCD%2BGJ8P6lG2vBg8yQ2dqyU7%2F6aHg4hXMljyDFByT7hJ1%2FF%2FGPwBi84eAMDZqGzXpIySe8PhU80ak5C4vg7vcGOOSaB3cXk7TtQ2q0pWb8MB0AYb3LGAJ6sahySjHSdArFFADB60u6SskWhq9HHSijilW9hKIiUgdceZAPhLH1J59oKITngqMGMijigFpTERPtZLB%2BMOjIqJIpHvPJrrfRg4mzwAmZbk%2BrropyYha4rBNP&Expires=1684064749
https://sim-cal-bucket.s3.amazonaws.com/temp/foo.mp4?partNumber=1&uploadId=1&AWSAccessKeyId=...&Signature=1FeHhXi7QRtL0wCT7kJ%2BVEcBeso%3D&content-type=application%2Fx-www-form-urlencoded&x-amz-security-token=FwoGZXIvYXdzEOT%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaDE%2B2gqVw4NTt1c2eOCKGAVTf4uDCD%2BGJ8P6lG2vBg8yQ2dqyU7%2F6aHg4hXMljyDFByT7hJ1%2FF%2FGPwBi84eAMDZqGzXpIySe8PhU80ak5C4vg7vcGOOSaB3cXk7TtQ2q0pWb8MB0AYb3LGAJ6sahySjHSdArFFADB60u6SskWhq9HHSijilW9hKIiUgdceZAPhLH1J59oKITngqMGMijigFpTERPtZLB%2BMOjIqJIpHvPJrrfRg4mzwAmZbk%2BrropyYha4rBNP&Expires=1684065651
</code></pre>
<p>Solutions suggesting making the request without a <code>Content-Type</code> are unacceptable: this is part of a public API, and I do not want to make customers jump through hoops trying to send such a request.</p>
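<p>To illustrate why the header matters, here is my (simplified) reading of the legacy SigV2 string-to-sign that S3 echoes back in the error above - <code>Content-Type</code> is one of the signed lines, so any mismatch invalidates the signature. This is a sketch only: the <code>x-amz-*</code> header lines and the actual HMAC signing step are omitted.</p>

```python
def s3_sigv2_string_to_sign(method, content_md5, content_type, expires, resource):
    # Legacy S3 SigV2 signs: Verb \n Content-MD5 \n Content-Type \n Expires
    # \n CanonicalizedResource (x-amz-* header lines omitted in this sketch).
    return "\n".join([method, content_md5, content_type, str(expires), resource])

resource = "/sim-cal-bucket/temp/foo.mp4?partNumber=1&uploadId=1"
without_ct = s3_sigv2_string_to_sign("PUT", "", "", 1684064749, resource)
with_ct = s3_sigv2_string_to_sign(
    "PUT", "", "application/x-www-form-urlencoded", 1684064749, resource
)
print(without_ct == with_ct)  # False: different strings -> different signatures
```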
|
<python><amazon-s3><http-headers><boto3><botocore>
|
2023-05-14 11:05:56
| 2
| 8,871
|
STerliakov
|
76,247,134
| 8,113,138
|
Cannot install yolox
|
<p>I am trying to install Yolox using the following command:</p>
<pre><code>pip install yolox
</code></pre>
<p>I got this command from the following link:</p>
<p><a href="https://pypi.org/project/yolox/" rel="nofollow noreferrer">https://pypi.org/project/yolox/</a></p>
<p>I got the following error:</p>
<pre><code>Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [8 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-e8uz9eo5/yolox_dd607ece87a3411aabef0d9d5bff591a/setup.py", line 77, in <module>
install_requires=get_install_requirements(),
File "/tmp/pip-install-e8uz9eo5/yolox_dd607ece87a3411aabef0d9d5bff591a/setup.py", line 49, in get_install_requirements
with open("requirements.txt", "r", encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'requirements.txt'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p>How can I solve this problem?
I should mention that I am installing this package on a remote machine running Ubuntu.</p>
|
<python><ubuntu><yolo>
|
2023-05-14 11:03:22
| 1
| 858
|
Jalil Nourmohammadi Khiarak
|
76,246,993
| 21,896,093
|
IntelliSense not listing members with VSCode Interactive Python code files
|
<p>In VSCode I use Python Interactive mode. I have a Python script open in one pane, and I execute the code in an interactive terminal open in another pane. IntelliSense works properly in the interactive pane - when I type "." after an object, it lists all the object's members. But when I try the same thing in the script pane, it doesn't list the object's members.</p>
<p>Example below.</p>
<p>In the interactive pane, IntelliSense provides a full list of the object's members:
<a href="https://i.sstatic.net/C3em7.png" rel="nofollow noreferrer">Interactive pane autocomplete</a></p>
<p>But when I try the same thing in the Python script, it fails to list the object's members after the cell has been run:
<a href="https://i.sstatic.net/p9TqW.png" rel="nofollow noreferrer">Script pane autocomplete</a></p>
<p>I have tried the following things to fix this:</p>
<ul>
<li>Updated to the latest VScode, and uninstalled/reinstalled Microsoft's Python extension (no other extensions installed)</li>
<li>In VScode settings, made sure the language server was using Pylance</li>
<li>I am running inside a conda virtual environment, and I only have that one terminal running</li>
</ul>
<p>My work-around is that, when I want to inspect a variable, I switch to the interactive pane in order to use its autocomplete, and then I switch back to the script pane to continue coding. Ideally, I should be able to see the full autocomplete in the script pane.</p>
<p>Please advise if there is a way to remedy this.</p>
<p><strong>Edit</strong>: Minimal example below. IntelliSense problem seems specific to the Tensorflow/Keras object called <code>history</code> created below, hence I am importing Tensorflow to reproduce the problem:</p>
<pre><code>#%% Execute this code cell in the Python interactive pane
import tensorflow as tf
model = tf.keras.Sequential([tf.keras.Input(1), tf.keras.layers.Dense(1)])
model.compile(loss='mse', optimizer='sgd')
history = model.fit([0, 1], [0, 1])
</code></pre>
<p>After running this, when I type <code>history.</code> (<code>history</code> plus trigger character <code>.</code>), IntelliSense should pop up a small box listing all the members of the <code>history</code> object (e.g. <code>history.epoch</code>, <code>history.params</code>).</p>
<p>IntelliSense works in the Interactive pane: <a href="https://i.sstatic.net/QRQNd.png" rel="nofollow noreferrer">IntelliSense invoked from Interactive pane</a></p>
<p>But IntelliSense fails with "No suggestions." in the script pane: <a href="https://i.sstatic.net/EYUKb.png" rel="nofollow noreferrer">After running the cell, IntelliSense doesn't list the object's members when triggered in the script pane</a></p>
|
<python><visual-studio-code><intellisense><interactive><pylance>
|
2023-05-14 10:33:39
| 1
| 5,252
|
MuhammedYunus
|
76,246,908
| 8,044,204
|
Google Colab - redirect_url for Google Drive API - OAuth 2.0
|
<p>On Google Colab, I want to download a specific revision of a file from Google Drive to my workspace; however, I found it extremely difficult (compared to how easy it is to mount the drive).</p>
<p>I have created an OAuth 2.0 Client ID, added the 'expected' redirect URLs, and uploaded the client_secrets.json to my workspace. I am trying to download the revision with this script:</p>
<pre><code>import requests
from google_auth_oauthlib.flow import InstalledAppFlow
from google.colab import auth
auth.authenticate_user()
# Set up the OAuth flow
flow = InstalledAppFlow.from_client_secrets_file('client_secret.json', scopes=['https://www.googleapis.com/auth/drive'])
# Start the OAuth authorization process
credentials = flow.run_local_server(host='localhost',
port=8081,
authorization_prompt_message='Please visit this URL: {url}',
success_message='The auth flow is complete; you may close this window.',
open_browser=True)
# Save the credentials to a file
credentials.save_to_disk('credentials.json')
# Make an HTTP GET request to the selfLink URL
# I was able to fetch the FILE_ID and REVISION_ID both from download link given on Google Drive's UI
# ...and on developers.google.com/drive/api/reference/rest/v3
self_link = f'https://www.googleapis.com/drive/v2/files/{FILE_ID}/revisions/{REVISION_ID}'
response = requests.get(self_link, headers={'Authorization': 'Bearer ' + credentials.token})
# Save the response content as the downloaded file
with open('your_file_name', 'wb') as file:
file.write(response.content)
</code></pre>
<p>However, on <code>flow.run_local_server</code> step, when I visit the authentication link, I'm getting <code>Error 400: redirect_uri_mismatch</code>, where the request detail is:
<code>redirect_uri=http://localhost:8081/</code>.</p>
<p>I have no idea how to make this work. I have waited from 5 minutes to hours for the Google workspace to save my redirect URL, left out the ports, tried different ports, removed http, removed the trailing slash, and used <code>postmessage</code> in the code for the host (there is no way to register postmessage in the OAuth manager, as it expects a URI).</p>
<p>I have also tried to use the Google API like this, and in this case I am getting a <code>file not found</code> error:</p>
<pre><code>from googleapiclient.discovery import build
drive_service = build('drive', 'v3', credentials=credentials)
files = drive_service.revisions().get_media(fileID=FILE_ID, revisionID=REVISION_ID).execute()
</code></pre>
<p>I have followed this suggested material as well:
<a href="https://github.com/googleapis/google-api-python-client/blob/main/docs/oauth-installed.md" rel="nofollow noreferrer">https://github.com/googleapis/google-api-python-client/blob/main/docs/oauth-installed.md</a></p>
<p>Does anybody have an idea (or a workaround) for downloading a revision of a file from Drive into the Google Colab workspace?
P.S.: As a workaround, I tried deleting the revisions manually up to the revision I want, but I am not able to, probably due to storage constraints on the Drive side. That will be the topic of another question, I think.</p>
|
<python><oauth-2.0><google-drive-api><google-oauth><google-colaboratory>
|
2023-05-14 10:11:08
| 1
| 814
|
Melih
|
76,246,817
| 3,247,006
|
@register.filter vs @register.simple_tag vs @register.tag vs @register.inclusion_tag in Django Templates
|
<p><a href="https://docs.djangoproject.com/en/4.2/howto/custom-template-tags/#writing-custom-template-filters" rel="nofollow noreferrer">The doc</a> says about <a href="https://docs.djangoproject.com/en/4.2/howto/custom-template-tags/#django.template.Library.filter" rel="nofollow noreferrer">@register.filter</a> below:</p>
<blockquote>
<p>Custom filters are Python functions that take one or two arguments:</p>
<ul>
<li>The value of the variable (input) – not necessarily a string.</li>
<li>The value of the argument – this can have a default value, or be left out altogether.</li>
</ul>
</blockquote>
<p>And, <a href="https://docs.djangoproject.com/en/4.2/howto/custom-template-tags/#simple-tags" rel="nofollow noreferrer">the doc</a> says about <a href="https://docs.djangoproject.com/en/4.2/howto/custom-template-tags/#django.template.Library.simple_tag" rel="nofollow noreferrer">@register.simple_tag</a> below:</p>
<blockquote>
<p>This function, which is a method of django.template.Library, takes a
function that accepts any number of arguments, wraps it in a render
function and the other necessary bits mentioned above and registers it
with the template system.</p>
</blockquote>
<p>And, <a href="https://docs.djangoproject.com/en/4.2/howto/custom-template-tags/#registering-the-tag" rel="nofollow noreferrer">the doc</a> only says about <code>@register.tag</code> below:</p>
<blockquote>
<p>Finally, register the tag with your module’s Library instance, as
explained in writing custom template tags above.</p>
</blockquote>
<p>And, <a href="https://docs.djangoproject.com/en/4.2/howto/custom-template-tags/#inclusion-tags" rel="nofollow noreferrer">the doc</a> says about <a href="https://docs.djangoproject.com/en/4.2/howto/custom-template-tags/#django.template.Library.inclusion_tag" rel="nofollow noreferrer">@register.inclusion_tag</a> below:</p>
<blockquote>
<p>Another common type of template tag is the type that displays some
data by rendering another template. For example, Django’s admin
interface uses custom template tags to display the buttons along the
bottom of the “add/change” form pages. Those buttons always look the
same, but the link targets change depending on the object being edited
– so they’re a perfect case for using a small template that is filled
with details from the current object.</p>
</blockquote>
<p>But I still don't understand what each of them is for, so: what is the difference between <code>@register.filter</code>, <code>@register.simple_tag</code>, <code>@register.tag</code>, and <code>@register.inclusion_tag</code> in Django templates?</p>
|
<python><django><django-templates><python-decorators><templatetags>
|
2023-05-14 09:46:33
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
76,246,812
| 520,556
|
How to extract sentences from a pandas dataframe which are given in a column as word-per-row
|
<p>What is the most efficient way to loop over a large pandas dataframe in which sentences are given one word per row, with punctuation in a second column? For example:</p>
<pre><code>d = {'col1': ['This', 'is', 'a', 'simple', 'sentence',
'This', 'is', 'another', 'sentence',
'This', 'is', 'the', 'third', 'sentence',
'Is', 'this', 'a', 'sentence', 'too'],
'col2': ['', '', '', '', '!',
'', '', '', '.',
'', '', '', '', '...',
'', '', '', '', '?']}
df = pd.DataFrame(data=d)
df
col1 col2
0 This
1 is
2 a
3 simple
4 sentence !
5 This
6 is
7 another
8 sentence .
9 This
10 is
11 the
12 third
13 sentence ...
14 Is
15 this
16 a
17 sentence
18 too ?
</code></pre>
<p>Then, I would like to extract and work on individual sentences, like this:</p>
<pre><code>df[0:5]
col1 col2
0 This
1 is
2 a
3 simple
4 sentence !
</code></pre>
<p>or</p>
<pre><code>df[14:19]
col1 col2
14 Is
15 this
16 a
17 sentence
18 too ?
</code></pre>
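<p>One direction I tried (a sketch; I'm not sure it's the most efficient for a large frame) is to turn the punctuation column into a per-row sentence id and group on it:</p>

```python
import pandas as pd

d = {'col1': ['This', 'is', 'a', 'simple', 'sentence',
              'This', 'is', 'another', 'sentence',
              'This', 'is', 'the', 'third', 'sentence',
              'Is', 'this', 'a', 'sentence', 'too'],
     'col2': ['', '', '', '', '!',
              '', '', '', '.',
              '', '', '', '', '...',
              '', '', '', '', '?']}
df = pd.DataFrame(data=d)

# A non-empty col2 marks the end of a sentence; shifting that flag down one
# row and taking the cumulative sum gives each row a sentence id in one pass.
sent_id = df['col2'].ne('').shift(fill_value=False).cumsum()
sentences = [g for _, g in df.groupby(sent_id)]

print(len(sentences))  # 4
print(sentences[0])    # rows 0..4, i.e. "This is a simple sentence !"
```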
<p>Thanks!</p>
|
<python><pandas><dataframe>
|
2023-05-14 09:45:38
| 1
| 1,598
|
striatum
|
76,246,578
| 10,012,856
|
Module 'numpy' has no attribute 'warnings'
|
<p>I'm trying to reproduce <a href="https://datashader.org/user_guide/Polygons.html#geopandas-import" rel="noreferrer">this</a> tutorial with my own data. I have a simple square grid of polygons:</p>
<pre><code>from shapely import wkt
import pandas as pd
import geopandas as gpd
data_list = [
[0,51, wkt.loads("POLYGON ((-74816.7238 5017078.8988, -74716.7238 5017078.8988, -74716.7238 5016978.8988, -74816.7238 5016978.8988, -74816.7238 5017078.8988))")],
[1,91, wkt.loads("POLYGON ((-74816.7238 5016978.8988, -74716.7238 5016978.8988, -74716.7238 5016878.8988, -74816.7238 5016878.8988, -74816.7238 5016978.8988))")],
[2,88, wkt.loads("POLYGON ((-74816.7238 5016878.8988, -74716.7238 5016878.8988, -74716.7238 5016778.8988, -74816.7238 5016778.8988, -74816.7238 5016878.8988))")],
[3,54, wkt.loads("POLYGON ((-74816.7238 5016778.8988, -74716.7238 5016778.8988, -74716.7238 5016678.8988, -74816.7238 5016678.8988, -74816.7238 5016778.8988))")],
[4,51, wkt.loads("POLYGON ((-74816.7238 5016678.8988, -74716.7238 5016678.8988, -74716.7238 5016578.8988, -74816.7238 5016578.8988, -74816.7238 5016678.8988))")],
]
df = pd.DataFrame(data_list, columns=["id", "data", "geometry"])
gdf = gpd.GeoDataFrame(df, geometry="geometry", crs=32633)
</code></pre>
<p>I've translated the GeoPandas GeoDataFrame to a spatialpandas GeoDataFrame:</p>
<pre><code>from spatialpandas import GeoDataFrame
sp_gdf = GeoDataFrame(gdf)
</code></pre>
<p>At this point I try to create a choropleth map according to <a href="https://datashader.org/user_guide/Polygons.html#plotting-as-filled-polygons" rel="noreferrer">this</a> example:</p>
<pre><code>import datashader as ds
canvas = ds.Canvas(plot_width=1000, plot_height=1000)
agg = canvas.polygons(sp_gdf, 'geometry', agg=ds.mean('data'))
</code></pre>
<p>But I'm facing the error below:</p>
<pre><code>AttributeError Traceback (most recent call last)
Cell In[7], line 4
1 import datashader as ds
3 canvas = ds.Canvas(plot_width=1000, plot_height=1000)
----> 4 agg = canvas.polygons(sp_gdf, 'geometry', agg=ds.mean('data'))
6 agg
File ~/.cache/pypoetry/virtualenvs/drakonotebook-larABRfp-py3.10/lib/python3.10/site-packages/datashader/core.py:753, in Canvas.polygons(self, source, geometry, agg)
751 agg = any_rdn()
752 glyph = PolygonGeom(geometry)
--> 753 return bypixel(source, self, glyph, agg)
File ~/.cache/pypoetry/virtualenvs/drakonotebook-larABRfp-py3.10/lib/python3.10/site-packages/datashader/core.py:1258, in bypixel(source, canvas, glyph, agg, antialias)
1255 canvas.validate()
1257 # All-NaN objects (e.g. chunks of arrays with no data) are valid in Datashader
-> 1258 with np.warnings.catch_warnings():
1259 np.warnings.filterwarnings('ignore', r'All-NaN (slice|axis) encountered')
1260 return bypixel.pipeline(source, schema, canvas, glyph, agg, antialias=antialias)
File ~/.cache/pypoetry/virtualenvs/drakonotebook-larABRfp-py3.10/lib/python3.10/site-packages/numpy/__init__.py:320, in __getattr__(attr)
317 from .testing import Tester
318 return Tester
--> 320 raise AttributeError("module {!r} has no attribute "
321 "{!r}".format(__name__, attr))
AttributeError: module 'numpy' has no attribute 'warnings'
</code></pre>
<p>I'm on Ubuntu 22.04, with Python 3.10 and the code above runs in a Jupyter Notebook. Below the version of libraries in use:</p>
<ul>
<li>shapely: 2.0.1</li>
<li>pandas: 2.0.1</li>
<li>geopandas: 0.12.2</li>
<li>spatialpandas: 0.4.7</li>
<li>datashader: 0.14.4</li>
<li>numpy: 1.24.3</li>
</ul>
<p>Moreover, the Python environment is managed by Poetry 1.4.2.</p>
<p>NB: <a href="https://stackoverflow.com/questions/74863592/attributeerror-module-numpy-has-no-attribute-warnings">this</a> thread is completely unhelpful.</p>
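<p>For completeness, here is the hacky shim I'm using in the meantime. It just re-creates the alias that NumPy 1.24 removed (in older NumPy, <code>np.warnings</code> was simply the stdlib <code>warnings</code> module), so it is a workaround, not a real fix:</p>

```python
import warnings

import numpy as np

# NumPy 1.24 removed the `np.warnings` alias that datashader 0.14.4 still
# references; re-attaching the stdlib module restores the attribute so that
# `np.warnings.catch_warnings()` inside datashader works again.
if not hasattr(np, "warnings"):
    np.warnings = warnings

with np.warnings.catch_warnings():
    np.warnings.filterwarnings("ignore", r"All-NaN (slice|axis) encountered")
```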
|
<python><numpy><datashader>
|
2023-05-14 08:39:34
| 1
| 1,310
|
MaxDragonheart
|
76,246,504
| 8,849,755
|
Why cannot I unstack this pandas data frame?
|
<p>I have a data frame in pandas that looks like this:</p>
<pre><code>n_y 0 1 2 3 4 ... 53 54 55 56 57
variable n_channel n_pulse n_x ...
Total collected charge (V s) 1 1 0 0.000021 0.000039 0.000028 0.000029 0.000030 ... 0.000099 0.000085 0.000031 0.000017 0.000035
1 0.000028 0.000133 0.000046 0.000049 0.000063 ... 0.000237 0.000165 0.000037 0.000036 0.000067
2 0.000022 0.000027 0.000054 0.000075 0.000023 ... 0.000105 0.000045 0.000122 0.000044 0.000022
3 0.000042 0.000171 0.000016 0.000027 0.000046 ... 0.000030 0.000054 0.000055 0.000103 0.000044
4 0.000031 0.000020 0.000043 0.000017 0.000028 ... 0.000024 0.000033 0.000040 0.000069 0.000034
... ... ... ... ... ... ... ... ... ... ... ...
4 2 53 0.000491 0.000047 0.000014 0.000025 0.000027 ... 0.000048 0.000029 0.000089 0.000051 0.000026
54 0.000022 0.000081 0.000069 0.000050 0.000038 ... 0.000040 0.000077 0.000039 0.000033 0.000017
55 0.000081 0.000091 0.000015 0.000072 0.000016 ... 0.000042 0.000046 0.000024 0.000087 0.000026
56 0.000031 0.000049 0.000273 0.000022 0.000122 ... 0.000028 0.000020 0.000033 0.000053 0.000022
57 0.000027 0.000018 0.000069 0.000012 0.000048 ... 0.000117 0.000031 0.000084 0.000019 0.000018
[2320 rows x 58 columns]
</code></pre>
<p>I want to <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.unstack.html" rel="nofollow noreferrer"><code>unstack</code></a> the <code>variable</code> level. I keep getting <code>ValueError: Index contains duplicate entries, cannot reshape</code>.</p>
<p>I made a MWE to experiment with this:</p>
<pre class="lang-py prettyprint-override"><code>import pandas
import numpy
df = []
stuff = []
for variable in ['amplitude','time']:
for n_channel in ['CH1','CH2','CH3']:
for n_x in range(9):
for n_y in range(9):
for n_pulse in [1,2]:
stuff.append(
dict(
variable = variable,
n_channel = n_channel,
n_x = n_x,
n_y = n_y,
whatever = numpy.random.rand(),
n_pulse = n_pulse,
)
)
df = pandas.DataFrame.from_records(stuff)
df = pandas.pivot_table(
data = df,
values = df.columns,
index = ['variable','n_channel','n_pulse','n_x'],
columns = 'n_y',
)
df = df['whatever']
print(df)
df = df.unstack('variable')
print(df)
</code></pre>
<p>This is working fine... The first <code>print</code> shows:</p>
<pre><code>n_y 0 1 2 3 4 5 6 7 8
variable n_channel n_pulse n_x
amplitude CH1 1 0 0.318155 0.318907 0.819447 0.743255 0.478731 0.186771 0.801281 0.032142 0.183628
1 0.418701 0.404246 0.866815 0.907896 0.469923 0.395304 0.267957 0.259414 0.657171
2 0.344421 0.512340 0.227376 0.017938 0.947097 0.421796 0.024147 0.109382 0.998842
3 0.917094 0.454799 0.421441 0.846659 0.657387 0.914102 0.347941 0.784530 0.263389
4 0.746022 0.074256 0.275227 0.312784 0.945559 0.727171 0.097169 0.669294 0.124092
... ... ... ... ... ... ... ... ... ...
time CH3 2 4 0.413624 0.622738 0.062325 0.796511 0.765223 0.568922 0.521804 0.059481 0.698582
5 0.490076 0.053765 0.094804 0.122796 0.249527 0.915555 0.238314 0.282287 0.193821
6 0.728340 0.423650 0.295858 0.867221 0.987413 0.515603 0.615101 0.534728 0.302426
7 0.086354 0.697638 0.708708 0.464658 0.972898 0.592239 0.144225 0.029943 0.715472
8 0.983830 0.881102 0.855573 0.719386 0.795557 0.550499 0.457187 0.887028 0.077428
[108 rows x 9 columns]
</code></pre>
<p>which is similar to my original data frame. After the <code>unstack</code> the second <code>print</code> shows exactly what I want to obtain:</p>
<pre><code>n_y 0 1 2 ... 6 7 8
variable amplitude time amplitude time amplitude time ... amplitude time amplitude time amplitude time
n_channel n_pulse n_x ...
CH1 1 0 0.318155 0.377314 0.318907 0.124907 0.819447 0.033648 ... 0.801281 0.392816 0.032142 0.939812 0.183628 0.217295
1 0.418701 0.400172 0.404246 0.605267 0.866815 0.565185 ... 0.267957 0.918827 0.259414 0.925223 0.657171 0.293790
2 0.344421 0.544194 0.512340 0.159295 0.227376 0.748645 ... 0.024147 0.838900 0.109382 0.307935 0.998842 0.485162
3 0.917094 0.404233 0.454799 0.034856 0.421441 0.035746 ... 0.347941 0.044803 0.784530 0.948064 0.263389 0.413340
4 0.746022 0.451766 0.074256 0.401677 0.275227 0.427818 ... 0.097169 0.059010 0.669294 0.652602 0.124092 0.243418
5 0.655631 0.270657 0.369451 0.390933 0.433271 0.487986 ... 0.731998 0.152845 0.876073 0.326358 0.874010 0.973927
6 0.463634 0.727821 0.343269 0.387025 0.506886 0.742100 ... 0.402282 0.910941 0.629960 0.949041 0.258735 0.434187
7 0.273954 0.703639 0.339981 0.851011 0.598395 0.798185 ... 0.091755 0.269656 0.670143 0.573609 0.148111 0.147772
8 0.240539 0.557321 0.492881 0.078565 0.725186 0.865216 ... 0.911875 0.492582 0.689864 0.359497 0.535362 0.746846
2 0 0.422961 0.890133 0.535018 0.570279 0.571138 0.117695 ... 0.013214 0.968708 0.113321 0.224477 0.482554 0.405526
1 0.406993 0.026123 0.586861 0.125451 0.503849 0.976586 ... 0.033597 0.374277 0.688245 0.851391 0.726780 0.602009
2 0.754604 0.530039 0.092153 0.900987 0.570285 0.403137 ... 0.918097 0.756362 0.796046 0.258649 0.884949 0.881545
3 0.339633 0.517753 0.573029 0.157219 0.484687 0.347766 ... 0.110299 0.054327 0.951850 0.852574 0.196102 0.296092
4 0.872963 0.805149 0.181597 0.823101 0.025292 0.972622 ... 0.970624 0.002843 0.416082 0.133990 0.191165 0.023063
5 0.154898 0.739725 0.711921 0.257430 0.764489 0.542806 ... 0.076698 0.369945 0.687063 0.349555 0.794402 0.919290
6 0.773608 0.040250 0.180908 0.901588 0.638054 0.586157 ... 0.374235 0.160535 0.050930 0.745586 0.665319 0.783004
7 0.312764 0.702861 0.730731 0.052538 0.929150 0.958350 ... 0.498125 0.473075 0.904690 0.319732 0.104275 0.444401
8 0.041232 0.758952 0.735192 0.707305 0.612927 0.594899 ... 0.662133 0.736112 0.233739 0.849167 0.383954 0.244846
CH2 1 0 0.482685 0.030756 0.519471 0.099447 0.933879 0.269588 ... 0.222724 0.654689 0.676711 0.126967 0.267594 0.668274
1 0.549225 0.136437 0.178816 0.415080 0.416678 0.938479 ... 0.075785 0.506679 0.233368 0.959007 0.508629 0.916322
2 0.821333 0.705575 0.955700 0.151392 0.981931 0.177010 ... 0.250213 0.683854 0.904913 0.469773 0.133955 0.200588
3 0.338652 0.784300 0.735776 0.112112 0.527321 0.910160 ... 0.980183 0.282530 0.820129 0.950365 0.350072 0.941842
4 0.238885 0.866622 0.045756 0.409689 0.313099 0.774542 ... 0.802145 0.205618 0.609130 0.506542 0.943803 0.333078
5 0.108278 0.307909 0.352380 0.552131 0.307273 0.860126 ... 0.899399 0.778098 0.188246 0.729077 0.255168 0.279451
6 0.563208 0.115809 0.268294 0.882927 0.805017 0.664985 ... 0.543552 0.033564 0.916331 0.024751 0.208346 0.538154
7 0.165855 0.222313 0.213769 0.049349 0.209802 0.280979 ... 0.322356 0.401004 0.795929 0.919803 0.034130 0.088066
8 0.276778 0.514858 0.624288 0.716783 0.548286 0.145780 ... 0.097798 0.215031 0.713206 0.684878 0.572296 0.824280
2 0 0.821332 0.416202 0.212072 0.563054 0.844786 0.159210 ... 0.040958 0.142498 0.421627 0.078418 0.564603 0.800773
1 0.289747 0.963189 0.300213 0.289673 0.316227 0.794320 ... 0.958809 0.686297 0.095978 0.718890 0.404255 0.252729
2 0.195009 0.786497 0.284339 0.749703 0.324050 0.473740 ... 0.781819 0.461013 0.471294 0.053969 0.793857 0.339071
3 0.046489 0.090669 0.249579 0.771584 0.384625 0.159415 ... 0.557342 0.551205 0.566048 0.683298 0.182498 0.316421
4 0.525071 0.288015 0.126839 0.370341 0.692128 0.811195 ... 0.609041 0.509900 0.231344 0.679108 0.446438 0.983607
5 0.370951 0.651078 0.971907 0.343395 0.077181 0.395344 ... 0.696176 0.443765 0.506720 0.430668 0.425226 0.028401
6 0.103904 0.690262 0.013092 0.037656 0.478629 0.916823 ... 0.189386 0.971075 0.412990 0.011442 0.304753 0.806395
7 0.845763 0.723639 0.159862 0.514803 0.801334 0.548439 ... 0.610754 0.158426 0.282561 0.923168 0.085974 0.825729
8 0.602364 0.432593 0.570190 0.710235 0.594921 0.696220 ... 0.429891 0.068486 0.163413 0.225036 0.588790 0.393577
CH3 1 0 0.412815 0.397042 0.397197 0.233471 0.528925 0.976148 ... 0.704259 0.415330 0.180131 0.177507 0.938230 0.940145
1 0.203541 0.748581 0.779031 0.625414 0.720814 0.650374 ... 0.804879 0.211173 0.506444 0.536308 0.850145 0.203682
2 0.926183 0.549252 0.896651 0.500793 0.707658 0.740912 ... 0.293263 0.936843 0.556843 0.715463 0.472713 0.556433
3 0.571880 0.497707 0.487964 0.376266 0.956032 0.866367 ... 0.154190 0.394322 0.459835 0.102747 0.024374 0.759243
4 0.562065 0.086946 0.168839 0.178067 0.442287 0.515272 ... 0.780203 0.444624 0.542385 0.552299 0.639184 0.957356
5 0.288680 0.249769 0.255169 0.284534 0.544069 0.387061 ... 0.698532 0.862734 0.014134 0.850374 0.390506 0.102715
6 0.689871 0.358891 0.068330 0.039077 0.143581 0.658653 ... 0.953135 0.936750 0.953091 0.412520 0.000104 0.933817
7 0.446401 0.808949 0.331366 0.686982 0.647053 0.628440 ... 0.194656 0.130346 0.505650 0.056071 0.212809 0.498561
8 0.795113 0.452661 0.017817 0.151491 0.022560 0.451925 ... 0.417213 0.130917 0.507176 0.335206 0.724511 0.555618
2 0 0.728242 0.107426 0.627625 0.539927 0.340066 0.683591 ... 0.612183 0.266807 0.517194 0.471301 0.762811 0.719649
1 0.873102 0.652135 0.434765 0.631983 0.718914 0.509882 ... 0.849363 0.224798 0.305151 0.866502 0.382814 0.897405
2 0.947749 0.246105 0.993545 0.441134 0.565680 0.558149 ... 0.159999 0.667146 0.996090 0.836450 0.572647 0.077749
3 0.147513 0.717012 0.380867 0.305426 0.205768 0.524098 ... 0.506258 0.413851 0.200404 0.809909 0.908687 0.164276
4 0.949666 0.413624 0.339738 0.622738 0.920995 0.062325 ... 0.459207 0.521804 0.073823 0.059481 0.863865 0.698582
5 0.262064 0.490076 0.295991 0.053765 0.225085 0.094804 ... 0.994002 0.238314 0.045644 0.282287 0.289603 0.193821
6 0.541708 0.728340 0.236228 0.423650 0.225386 0.295858 ... 0.514739 0.615101 0.170154 0.534728 0.309843 0.302426
7 0.782316 0.086354 0.342009 0.697638 0.406948 0.708708 ... 0.402539 0.144225 0.760996 0.029943 0.999892 0.715472
8 0.384309 0.983830 0.436840 0.881102 0.843084 0.855573 ... 0.447355 0.457187 0.978081 0.887028 0.980453 0.077428
[54 rows x 18 columns]
</code></pre>
<p>Why could this be failing with my original data frame?</p>
|
<python><pandas><dataframe>
|
2023-05-14 08:19:34
| 1
| 3,245
|
user171780
|
76,248,516
| 2,280,111
|
simulate load-balancer variability
|
<p>The following program tries to show how a stateless load balancer creates variability
when spreading balls between bins (with default settings below, the <code>max/min</code> ratio is ~2).
Obviously, the more balls we throw, the smaller the spread.</p>
<p>Question: how do I convert this to a continuous example? I.e., instead of discrete balls, I would like to load-balance continuous "traffic", but I am failing to model this.
My goal is to simulate how load-balanced traffic is spread non-uniformly even with a perfectly uniform sharding function.</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import numpy as np
import matplotlib.pyplot as plt
import webbrowser
def uniform_bin_filling(balls, bin_len):
bins = np.zeros(bin_len, dtype=int)
for i in range(balls):
bins[np.random.randint(0, bin_len)] += 1
return bins
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("--balls", type=int, default=150)
parser.add_argument("--bins", type=int, default=10)
args = parser.parse_args()
bins = uniform_bin_filling(args.balls, args.bins)
print("Expected average is {:.2f}".format(args.balls / args.bins))
print("The number of balls in each bin is:")
print(bins)
r = bins.max() / bins.min()
print("\nMax/Min ratio: {:.2f}".format(r))
</code></pre>
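<p>One way to make it continuous (a sketch under an assumed traffic model, not the only possible one) is to replace unit balls with flows that each carry an exponentially distributed amount of traffic, sharded to a uniformly random bin the way a hash-based balancer would:</p>

```python
import random

def continuous_bin_filling(flows, bin_len, seed=None):
    """Spread continuous 'traffic' instead of unit balls.

    Each of `flows` flows carries an exponentially distributed amount
    of traffic (an assumed model; any positive distribution works) and
    is assigned to a uniformly random bin.
    """
    rng = random.Random(seed)
    bins = [0.0] * bin_len
    for _ in range(flows):
        bins[rng.randrange(bin_len)] += rng.expovariate(1.0)
    return bins

bins = continuous_bin_filling(150, 10, seed=1)
print(["{:.1f}".format(b) for b in bins])
print("Max/Min ratio: {:.2f}".format(max(bins) / min(bins)))
```

<p>As with the discrete version, the spread shrinks as the number of flows grows; drawing flow sizes from a heavier-tailed distribution makes the imbalance worse.</p>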
|
<python><simulation>
|
2023-05-14 07:54:00
| 0
| 1,513
|
Roman
|
76,246,392
| 7,589,661
|
Replacing multiprocessing with dask equivalent
|
<p>There is a script which performs a tedious task with the help of multiprocessing. It does a lot of log-file processing, and it takes a long time when hundreds or thousands of files are involved.</p>
<p>I am new to cluster computing, which is why I would like to know whether there is a way to replace the multiprocessing workers, which run on a single machine, with Dask workers spread over several machines. As far as I know, the cluster manager in the background is SGE.</p>
<p>If it is possible, then how can this be done?</p>
<p>Do I need to coordinate it with the IT?</p>
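<p>One low-friction path (a sketch, not a drop-in for the actual script): both <code>concurrent.futures</code> executors and dask.distributed's <code>Client</code> expose <code>submit()</code> returning futures with <code>.result()</code>, so refactoring the per-file work around a generic executor makes the multiprocessing-to-Dask swap a few lines. The file names and the per-file function below are placeholders:</p>

```python
def process_log(path):
    """Placeholder for the tedious per-file work (shape assumed)."""
    with open(path) as fh:
        return path, sum(1 for _ in fh)  # e.g. count lines in one log

def process_all(paths, executor):
    """Run process_log over many files with any executor exposing submit()."""
    futures = [executor.submit(process_log, p) for p in paths]
    return dict(f.result() for f in futures)

# Single-machine version (what multiprocessing gives you today):
#     from concurrent.futures import ProcessPoolExecutor
#     with ProcessPoolExecutor() as ex:
#         results = process_all(paths, ex)
#
# Cluster version -- same call, different executor (untested sketch; the
# SGECluster parameters and queue names need coordinating with IT):
#     from dask_jobqueue import SGECluster
#     from dask.distributed import Client
#     cluster = SGECluster(cores=4, memory="8GB")
#     cluster.scale(jobs=10)
#     with Client(cluster) as client:
#         results = process_all(paths, client)
```

<p>dask-jobqueue submits its workers as ordinary SGE jobs, so IT involvement is mostly about which queue/project to submit to, plus making sure your Python environment is visible on the compute nodes.</p>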
|
<python><cluster-computing>
|
2023-05-14 07:48:20
| 0
| 3,062
|
Moshe S.
|
76,245,993
| 755,934
|
How to capture logs from native python logging module in gunicorn?
|
<p>I have a python flask/gunicorn project which includes the standard flask logging code. However, some of my code may not run in an application context (it has its own unit tests, including some fairly complicated functions in other files). These files use the native python logging mechanism. How do I capture those logs and write them to the same log file as the gunicorn/flask logs?</p>
<pre><code>from flask import Flask, jsonify
app = Flask(__name__)
@app.route("/")
def index():
app.logger.info("index hit")
return jsonify("ok")
</code></pre>
<p>And I know the trick to capture the logging output and make it write to the gunicorn log:</p>
<pre><code>if __name__ != "__main__":
gunicorn_logger = logging.getLogger("gunicorn.error")
app.logger.handlers = gunicorn_logger.handlers
app.logger.setLevel(gunicorn_logger.level)
app.logger.info("Gunicorn logging enabled")
</code></pre>
<p>However, I have other code which <strong>may not run within an application context</strong>. For example, it's tested in a unit test.</p>
<pre><code>import logging
logger = logging.getLogger(__name__)
def my_external_function(*args):
logger.info("My function has been called")
# do something
</code></pre>
<p>When I invoke gunicorn in the usual way:</p>
<pre><code>gunicorn app:app -b 0.0.0.0:8080 \
--access-logfile /var/log/myapp/access.log \
--error-logfile /var/log/myapp/error.log \
--log-level INFO
</code></pre>
<p>Everything that starts with <code>app.logger.</code> will write to <code>error.log</code> while the code using the native python logging (<code>logger...</code> or <code>logging.</code>) will write to stdout.</p>
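<p>A common approach (a sketch, based on the snippet above) is to attach gunicorn's handlers to the <em>root</em> logger instead of only <code>app.logger</code>. Module-level loggers created with <code>logging.getLogger(__name__)</code> have no handlers of their own, so their records propagate up to the root logger and are then written wherever gunicorn writes:</p>

```python
import logging

def route_all_logs_to_gunicorn(app):
    """Give the root logger gunicorn's handlers so that plain module
    loggers (logging.getLogger(__name__)) end up in gunicorn's error log.

    Sketch only: assumes the process runs under gunicorn, where the
    'gunicorn.error' logger carries the configured handlers.
    """
    gunicorn_logger = logging.getLogger("gunicorn.error")
    root = logging.getLogger()
    root.handlers = gunicorn_logger.handlers
    root.setLevel(gunicorn_logger.level or logging.INFO)
    # app.logger propagates to root as well, so do NOT also copy the
    # handlers onto app.logger, or every record is written twice.
    app.logger.setLevel(root.level)
```

<p>Depending on the Flask version you may also want <code>app.logger.handlers.clear()</code> so Flask's default handler doesn't duplicate output to stderr.</p>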
|
<python><flask><gunicorn><python-logging>
|
2023-05-14 05:33:22
| 1
| 5,624
|
Daniel Kats
|
76,245,885
| 3,878,377
|
Can DeepAR handle multiple time series different length?
|
<p>Assume I have 100 time series with different start and end dates but the same frequency, so they mostly have different lengths. Each time series is stored as a dataframe. They all look like the following:</p>
<pre><code>date item_id target
2020-01-10 'A' 5
2020-01-11 'A' 6
2020-01-12 'A' 7
2020-01-13 'A' 8
2020-01-14 'A' 9
</code></pre>
<p>The second time series is:</p>
<pre><code>date item_id target
2019-01-10 'B' 1
2019-01-11 'B' 2
2019-01-12 'B' 3
2019-01-13 'B' 4
</code></pre>
<p>I concatenate the time series to a big data frame and use the from_long method to create a long data frame.</p>
<pre><code>dataset = PandasDataset.from_long_dataframe(long_df, time_col='date',
                                            target_col='target', item_col='item_id')
</code></pre>
<p>However, this gives me the following error while training:</p>
<pre><code>AssertionError: Dataframe index is not uniformly spaced. If your data frame contains data from multiple series in the same column ("long" format), consider constructing the dataset with PandasDataset.from_long_dataframe instead
</code></pre>
<p>Can someone explain whether DeepAR handles time series with different lengths (start/end dates)? If not, how can I make it work in such a situation, and if yes, can someone explain how to solve the above error?</p>
|
<python><time-series><forecasting><deepar><gluonts>
|
2023-05-14 04:58:40
| 0
| 1,013
|
user59419
|
76,245,884
| 1,031,219
|
Unknown C++ exception while drawing matches using cv2.line_descriptor.drawLineMatches() in OpenCV Python
|
<p>I am trying to match line descriptors extracted from two images using the <strong>binary descriptor module</strong> of OpenCV in Python. But, when drawing the matches using <code>cv2.line_descriptor.drawLineMatches()</code> function, I am getting the below error</p>
<pre><code>img3 = cv2.line_descriptor.drawLineMatches(img,keylines1,field,keylines2,matches[:10],None)
</code></pre>
<p>cv2.error: Unknown C++ exception from OpenCV code</p>
<p>What am I doing wrong? Any suggestions would help. Thanks.</p>
<pre><code>import cv2
import numpy as np
#Read gray image
img = cv2.imread("8.jpg",0)
field = cv2.imread('field1.png',0)
lsd = cv2.line_descriptor.LSDDetector.createLSDDetector()
#Detect lines in the image
keylines1 = lsd.detect(img, scale=2, numOctaves=2)
keylines2 = lsd.detect(field, scale=2, numOctaves=2)
# # # compute descriptors for lines
bd = cv2.line_descriptor.BinaryDescriptor.createBinaryDescriptor()
k1,descr1 = bd.compute(img, keylines1)
k2,descr2 = bd.compute(field, keylines2)
bdm = cv2.line_descriptor.BinaryDescriptorMatcher()
matches = bdm.match(descr1, descr2)
matches = sorted(matches, key = lambda x:x.distance)
#img3 = cv2.drawMatches(img,k1,field,k2,matches[10],None,1)
img3 = cv2.line_descriptor.drawLineMatches(img,keylines1,field,keylines2,matches[:10],None)
#Show image
cv2.imshow("LSD",img3 )
cv2.waitKey(0)
</code></pre>
|
<python><opencv><image-processing><computer-vision><feature-extraction>
|
2023-05-14 04:57:48
| 1
| 1,229
|
Saikat
|
76,245,817
| 1,850,272
|
Automation not iterating through list of links yet no errors are thrown
|
<p>I'm creating an automated test with selenium and python and am trying to make it click on each product on a page. The steps I want to follow are:</p>
<ol>
<li>go to amazon.com.</li>
<li>click on search bar.</li>
<li>type '3D Printers' into the search bar.</li>
<li>click the submit button.</li>
<li>click on the first search result.</li>
<li>wait 10 seconds.</li>
<li>go back to the search results.</li>
<li>click on the second search result.</li>
<li>etc. etc.</li>
</ol>
<p>The code I have so far for it goes as follows</p>
<pre class="lang-py prettyprint-override"><code># Navigate to the main product page
driver.get('https://www.amazon.com/')
# Find Search Bar and enter product to search for
driver.find_element(By.ID, 'twotabsearchtextbox').send_keys('3D Printers')
#Find and click Submit button
driver.find_element(By.ID, "nav-search-submit-button").click()
# Find all the product links on the page
product_links = driver.find_elements(By.XPATH, "div[@data-component-type='s-search-result']//a[@class='a-link-normal']")
# Iterate over each product link
for link in product_links:
print('link', link.text)
# Click on the product link to go to the product page
link.click()
driver.implicitly_wait(10)
# Go back to the main product page
driver.back()
# Wait for the page to load before finding the next link
driver.implicitly_wait(10)
driver.quit()
</code></pre>
<p>When I run the test none of the links are clicked yet I don't get any errors after the test completes. I tried adding a <code>print()</code> method to see if I can print the text inside the link but I don't see anything in the console either. Does anybody see what I'm doing wrong?</p>
|
<python><selenium-webdriver>
|
2023-05-14 04:32:06
| 3
| 3,354
|
Optiq
|
76,245,584
| 506,824
|
How to find which superclass contains a name?
|
<p>I'm working with some code that I didn't write (so don't blame me for the crazyness). This is Python 3.9. I've got:</p>
<pre><code> try:
del self.userinfo # force reload
if self.userinfo['name'] == self.user():
self._loginstatus = login.LoginStatus.AS_USER
return
</code></pre>
<p>My first thought was WTF? It deletes self.userinfo and then immediately references it? How can that be? Then I realized what must be going on: self.userinfo is also defined in one of its superclasses, of which there are a bunch:</p>
<pre><code>class APISite(
BaseSite,
EchoMixin,
FlowMixin,
GeneratorsMixin,
GeoDataMixin,
GlobalUsageMixin,
LinterMixin,
PageImagesMixin,
ProofreadPageMixin,
TextExtractsMixin,
ThanksFlowMixin,
ThanksMixin,
UrlShortenerMixin,
WikibaseClientMixin,
):
</code></pre>
<p>Is there some way to get Python to tell me in which of those superclasses it's finding self.userinfo?</p>
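<p>Yes: attribute lookup on an instance walks <code>type(self).__mro__</code>, and each class's own namespace is visible via <code>vars(cls)</code>, so you can list every defining class without triggering the lookup itself. A minimal sketch (the class names below are stand-ins, not pywikibot's):</p>

```python
def find_defining_classes(obj, name):
    """Return the classes in obj's MRO whose own namespace defines `name`.

    Inspecting each class's __dict__ (via vars) works for methods,
    properties and plain attributes alike, and doesn't execute any
    property getters.
    """
    return [cls for cls in type(obj).__mro__ if name in vars(cls)]

class Base:
    def userinfo(self):
        pass

class Mixin:
    userinfo = "cached"

class Site(Mixin, Base):
    pass

print([c.__name__ for c in find_defining_classes(Site(), "userinfo")])
# -> ['Mixin', 'Base']; the first entry is the one lookup actually uses
```

<p>As for the del-then-use pattern: it usually means <code>userinfo</code> is a cached property, where <code>del self.userinfo</code> drops the instance-level cache and the very next access recomputes it through the class-level property.</p>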
|
<python><python-3.x><method-resolution-order>
|
2023-05-14 02:35:38
| 2
| 2,177
|
Roy Smith
|
76,245,416
| 3,490,622
|
Rolling 12 month sum of customers who had at least one transaction in that time - pandas
|
<p>I have a transactions dataframe that looks like this:</p>
<pre><code>| customer_id | purchase_date | purchase_amt | ...
|-------------|---------------|--------------| ...
| 1 | 12-01-2023 | 150.00 | ...
| 2 | 11-24-2022 | 84.23 |..
</code></pre>
<p>I need to calculate a rolling 12-month count of customers who made at least one purchase during that 12-month period. I am stuck. My idea was to try something like this:</p>
<pre><code>df2 = df.groupby('customer_id').apply(foo).to_frame().rename(columns={0: 'active_customer'})
df = df.join(df2, on='customer_id')
df.groupby('active_customer').rolling(12).sum()
</code></pre>
<p>where function <code>foo</code> would flag if a customer had transactions in the previous 12 months or not. But I was unable to come up with such a function.</p>
<p>Could someone help?</p>
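<p>To make the definition concrete, here is the logic in plain Python (a sketch; the dates and IDs below are made up, and a pandas version would typically group on a monthly period and roll a 12-period window with a set union of customer IDs):</p>

```python
from collections import defaultdict
from datetime import date

def rolling_active_customers(transactions, window=12):
    """For each month present in the data, count customers with at least
    one transaction in the trailing `window` months (current inclusive).

    `transactions` is an iterable of (customer_id, purchase_date) pairs
    with datetime.date dates; returns {(year, month): active_count}.
    """
    by_month = defaultdict(set)                  # month index -> customer ids
    for cust, d in transactions:
        by_month[d.year * 12 + d.month - 1].add(cust)
    out = {}
    for m in sorted(by_month):
        active = set()
        for k in range(m - window + 1, m + 1):   # trailing window of months
            active |= by_month.get(k, set())
        out[(m // 12, m % 12 + 1)] = len(active)
    return out

tx = [(1, date(2023, 1, 12)), (2, date(2022, 11, 24)), (1, date(2022, 2, 1))]
print(rolling_active_customers(tx))
```

<p>The key point the <code>foo</code>-flag approach misses is that "active" is a property of a (customer, month-window) pair, not of the customer alone, so the per-window set union is what does the de-duplication.</p>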
|
<python><pandas><time-series><rolling-computation>
|
2023-05-14 00:58:33
| 2
| 1,011
|
user3490622
|
76,245,284
| 11,540,781
|
Importing torch makes matplotlib kill kernel
|
<p>This code kills the kernel when executing the lineplot:</p>
<pre><code>import pandas as pd
import numpy as np
import seaborn as sns
from matplotlib import pyplot as plt
import time
import torch
sns.lineplot(x=np.arange(1, 10), y=np.arange(1, 10))
plt.show()
</code></pre>
<p>But this one doesn't:</p>
<pre><code>import pandas as pd
import numpy as np
import seaborn as sns
from matplotlib import pyplot as plt
import time
sns.lineplot(x=np.arange(1, 10), y=np.arange(1, 10))
plt.show()
</code></pre>
<p>The same behavior is seen using plt.plot(x, y).
I'm using a conda environment (python=3.10.9) on Windows 10, running the code in a Jupyter notebook cell.
This is my <code>pip list</code> output:</p>
<pre><code>alabaster 0.7.12
anaconda-client 1.11.2
anaconda-navigator 2.4.0
anaconda-project 0.11.1
anyio 3.5.0
appdirs 1.4.4
argon2-cffi 21.3.0
argon2-cffi-bindings 21.2.0
arrow 1.2.3
astroid 2.14.2
astropy 5.1
asttokens 2.0.5
atomicwrites 1.4.0
attrs 22.1.0
Automat 20.2.0
autopep8 1.6.0
Babel 2.11.0
backcall 0.2.0
backports.functools-lru-cache 1.6.4
backports.tempfile 1.0
backports.weakref 1.0.post1
bcrypt 3.2.0
beautifulsoup4 4.11.1
binaryornot 0.4.4
black 22.6.0
bleach 4.1.0
bokeh 2.4.3
boltons 23.0.0
Bottleneck 1.3.5
brotlipy 0.7.0
certifi 2022.12.7
cffi 1.15.1
chardet 4.0.0
charset-normalizer 2.0.4
click 8.0.4
cloudpickle 2.0.0
clyent 1.2.2
colorama 0.4.6
colorcet 3.0.1
comm 0.1.2
conda 23.3.1
conda-build 3.24.0
conda-content-trust 0.1.3
conda-pack 0.6.0
conda-package-handling 2.0.2
conda_package_streaming 0.7.0
conda-repo-cli 1.0.41
conda-token 0.4.0
conda-verify 3.4.2
constantly 15.1.0
contourpy 1.0.5
cookiecutter 1.7.3
cryptography 39.0.1
cssselect 1.1.0
cycler 0.11.0
cytoolz 0.12.0
daal4py 2023.0.2
dask 2022.7.0
datashader 0.14.4
datashape 0.5.4
debugpy 1.5.1
decorator 5.1.1
defusedxml 0.7.1
diff-match-patch 20200713
dill 0.3.6
distributed 2022.7.0
docstring-to-markdown 0.11
docutils 0.18.1
entrypoints 0.4
et-xmlfile 1.1.0
executing 0.8.3
fastjsonschema 2.16.2
filelock 3.9.0
flake8 6.0.0
Flask 2.2.2
flit_core 3.6.0
fonttools 4.25.0
fsspec 2022.11.0
future 0.18.3
gensim 4.3.0
glob2 0.7
greenlet 2.0.1
h5py 3.7.0
HeapDict 1.0.1
holoviews 1.15.4
huggingface-hub 0.10.1
hvplot 0.8.2
hyperlink 21.0.0
idna 3.4
imagecodecs 2021.8.26
imageio 2.26.0
imagesize 1.4.1
imbalanced-learn 0.10.1
importlib-metadata 4.11.3
incremental 21.3.0
inflection 0.5.1
iniconfig 1.1.1
intake 0.6.7
intervaltree 3.1.0
ipykernel 6.19.2
ipython 8.10.0
ipython-genutils 0.2.0
ipywidgets 7.6.5
isort 5.9.3
itemadapter 0.3.0
itemloaders 1.0.4
itsdangerous 2.0.1
jedi 0.18.1
jellyfish 0.9.0
Jinja2 3.1.2
jinja2-time 0.2.0
jmespath 0.10.0
joblib 1.1.1
json5 0.9.6
jsonpatch 1.32
jsonpointer 2.1
jsonschema 4.17.3
jupyter 1.0.0
jupyter_client 7.3.4
jupyter-console 6.6.2
jupyter_core 5.2.0
jupyter-server 1.23.4
jupyterlab 3.5.3
jupyterlab-pygments 0.1.2
jupyterlab_server 2.19.0
jupyterlab-widgets 1.0.0
keyring 23.4.0
kiwisolver 1.4.4
lazy-object-proxy 1.6.0
libarchive-c 2.9
llvmlite 0.39.1
locket 1.0.0
lxml 4.9.1
lz4 3.1.3
Markdown 3.4.1
MarkupSafe 2.1.1
matplotlib 3.7.0
matplotlib-inline 0.1.6
mccabe 0.7.0
menuinst 1.4.19
mistune 0.8.4
mkl-fft 1.3.1
mkl-random 1.2.2
mkl-service 2.4.0
mock 4.0.3
mpmath 1.2.1
msgpack 1.0.3
multipledispatch 0.6.0
munkres 1.1.4
mypy-extensions 0.4.3
navigator-updater 0.3.0
nbclassic 0.5.2
nbclient 0.5.13
nbconvert 6.5.4
nbformat 5.7.0
nest-asyncio 1.5.6
networkx 2.8.4
nltk 3.7
notebook 6.5.2
notebook_shim 0.2.2
numba 0.56.4
numexpr 2.8.4
numpy 1.23.5
numpydoc 1.5.0
openpyxl 3.0.10
packaging 22.0
pandas 1.5.3
pandocfilters 1.5.0
panel 0.14.3
param 1.12.3
paramiko 2.8.1
parsel 1.6.0
parso 0.8.3
partd 1.2.0
pathlib 1.0.1
pathspec 0.10.3
patsy 0.5.3
pep8 1.7.1
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.4.0
pip 22.3.1
pkginfo 1.9.6
platformdirs 2.5.2
plotly 5.9.0
pluggy 1.0.0
ply 3.11
pooch 1.4.0
poyo 0.5.0
prometheus-client 0.14.1
prompt-toolkit 3.0.36
Protego 0.1.16
psutil 5.9.0
ptyprocess 0.7.0
pure-eval 0.2.2
py 1.11.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycodestyle 2.10.0
pycosat 0.6.4
pycparser 2.21
pyct 0.5.0
pycurl 7.45.1
PyDispatcher 2.0.5
pydocstyle 6.3.0
pyerfa 2.0.0
pyflakes 3.0.1
Pygments 2.11.2
PyHamcrest 2.0.2
PyJWT 2.4.0
pylint 2.16.2
pylint-venv 2.3.0
pyls-spyder 0.4.0
PyNaCl 1.5.0
pyodbc 4.0.34
pyOpenSSL 23.0.0
pyparsing 3.0.9
PyQt5 5.15.7
PyQt5-sip 12.11.0
PyQtWebEngine 5.15.4
pyrsistent 0.18.0
PySocks 1.7.1
pytest 7.1.2
python-dateutil 2.8.2
python-lsp-black 1.2.1
python-lsp-jsonrpc 1.0.0
python-lsp-server 1.7.1
python-slugify 5.0.2
python-snappy 0.6.1
pytoolconfig 1.2.5
pytz 2022.7
pyviz-comms 2.0.2
PyWavelets 1.4.1
pywin32 305.1
pywin32-ctypes 0.2.0
pywinpty 2.0.10
PyYAML 6.0
pyzmq 23.2.0
QDarkStyle 3.0.2
qstylizer 0.2.2
QtAwesome 1.2.2
qtconsole 5.4.0
QtPy 2.2.0
queuelib 1.5.0
regex 2022.7.9
requests 2.28.1
requests-file 1.5.1
requests-toolbelt 0.9.1
rope 1.7.0
Rtree 1.0.1
ruamel.yaml 0.17.21
ruamel.yaml.clib 0.2.6
ruamel-yaml-conda 0.17.21
scikit-image 0.19.3
scikit-learn 1.2.1
scikit-learn-intelex 20230228.214818
scipy 1.10.0
Scrapy 2.8.0
seaborn 0.12.2
Send2Trash 1.8.0
service-identity 18.1.0
setuptools 65.6.3
sip 6.6.2
six 1.16.0
sklearn 0.0.post5
smart-open 5.2.1
sniffio 1.2.0
snowballstemmer 2.2.0
sortedcontainers 2.4.0
soupsieve 2.3.2.post1
Sphinx 5.0.2
sphinxcontrib-applehelp 1.0.2
sphinxcontrib-devhelp 1.0.2
sphinxcontrib-htmlhelp 2.0.0
sphinxcontrib-jsmath 1.0.1
sphinxcontrib-qthelp 1.0.3
sphinxcontrib-serializinghtml 1.1.5
spyder 5.4.1
spyder-kernels 2.4.1
SQLAlchemy 1.4.39
stack-data 0.2.0
statsmodels 0.13.5
sympy 1.11.1
tables 3.7.0
tabulate 0.8.10
TBB 0.2
tblib 1.7.0
tenacity 8.0.1
terminado 0.17.1
text-unidecode 1.3
textdistance 4.2.1
threadpoolctl 2.2.0
three-merge 0.1.1
tifffile 2021.7.2
tinycss2 1.2.1
tldextract 3.2.0
tokenizers 0.11.4
toml 0.10.2
tomli 2.0.1
tomlkit 0.11.1
toolz 0.12.0
torch 2.0.1+cu118
torchaudio 2.0.2+cu118
torchvision 0.15.2+cu118
tornado 6.1
tqdm 4.64.1
traitlets 5.7.1
transformers 4.24.0
Twisted 22.2.0
twisted-iocpsupport 1.0.2
typing_extensions 4.4.0
ujson 5.4.0
Unidecode 1.2.0
urllib3 1.26.14
w3lib 1.21.0
watchdog 2.1.6
wcwidth 0.2.5
webencodings 0.5.1
websocket-client 0.58.0
Werkzeug 2.2.2
whatthepatch 1.0.2
wheel 0.38.4
widgetsnbextension 3.5.2
win-inet-pton 1.1.0
wincertstore 0.2
wrapt 1.14.1
xarray 2022.11.0
xlwings 0.29.1
yapf 0.31.0
zict 2.1.0
zipp 3.11.0
zope.interface 5.4.0
zstandard 0.19.0
</code></pre>
<p>This is the third environment I've tried. I built two very similar environments before (one python 3.8, one python 3.9) and had the same issue. I thought I might have broken my Anaconda installation, so I even hard-reinstalled it all, but I still have the same problem, which is very weird to me; I've never seen anything like this.</p>
<p>I need to use graphs AND torch haha, and I don't seem to be able to do it anymore.</p>
<p>I've tried every solution proposed here: <a href="https://stackoverflow.com/questions/69067429/matplotlib-kills-kernel-on-jupyter">Matplotlib kills kernel on Jupyter</a></p>
<p>But none of them worked.</p>
<p>My procedure to build my environment was exactly this one:</p>
<pre><code>conda create -n env_name
conda activate env_name
pip install numpy pandas sklearn jupyter matplotlib seaborn
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
</code></pre>
<p>The previous two envs were the same procedure, except for a python=version in the create command.</p>
<p>I really don't know what else I can do.</p>
<p>Apart from that, the environment works perfectly: I can use torch and CUDA normally, and separately I can also use matplotlib and seaborn normally. I just can't use them both in the same code for some reason. The kernel simply dies; no memory usage, no explanation, it just dies.</p>
|
<python><matplotlib><pytorch><anaconda><jupyter>
|
2023-05-13 23:43:52
| 0
| 343
|
Ramon Griffo
|
76,244,570
| 10,333,931
|
python cffi breaks when used from external file
|
<p>I'm writing a self-organising map implementation in C and I'm trying to bind it to Python using cffi.</p>
<p>My wrapper module looks like this:</p>
<pre class="lang-py prettyprint-override"><code>from ._SOM import ffi, lib
lib.init()
class SOM:
def __init__(self, x, y, input_length, sigma, learning_rate):
self.C_som = lib.createSOM(x, y, input_length, sigma, learning_rate)
def random_weights_init(self, data):
newData = []
for d in data:
temp = ffi.new(f"double[{len(data)}]")
for index, elem in enumerate(d):
temp[index] = elem
newData.append(temp)
lib.random_weights_init(self.C_som, newData, len(data))
def train(self, data, iterations):
newData = []
for d in data:
temp = ffi.new(f"double[{len(data)}]")
for index, elem in enumerate(d):
temp[index] = elem
newData.append(temp)
lib.train(self.C_som, newData, len(data), iterations)
def winner(self, data):
res = lib.winner(self.C_som, data)
x = res[0]
y = res[1]
lib.free(res)
return x, y
if __name__ == "__main__":
S = SOM(10, 10, 3, 0.67, .05)
S.random_weights_init([[.5, 6, 9], [7, 8, 7], [7, 7, 7]])
S.train([[.5, 6, 9], [7, 8, 7], [7, 7, 7]], 10)
r = S.winner([7, 8, 7])
print(r)
</code></pre>
<p>When I run this file directly, it works fine if I change line 1 to</p>
<pre class="lang-py prettyprint-override"><code>from _SOM import ffi, lib
</code></pre>
<p>When importing my wrapper class in another file like this:</p>
<pre class="lang-py prettyprint-override"><code>from .SOM.SOM_test import SOM as C_SOM
</code></pre>
<p>and try to use it, I get the following error:</p>
<pre class="lang-py prettyprint-override"><code>Process finished with exit code -1073740940 (0xC0000374)
</code></pre>
<p>Looking up that exit code, I found it indicates heap corruption.
Is there a fault in my C source code? Or is something wrong with my imports?</p>
<h1>EDIT</h1>
<p>There was an issue with the C code itself. There was a struct field of type double*** and I was mallocing float**, float* and float into it.</p>
|
<python><python-cffi>
|
2023-05-13 19:38:12
| 1
| 593
|
Seppukki
|
76,244,484
| 5,468,296
|
Stable-Diffusion webui-user.bat gets stuck after "Model loaded"
|
<p>My local Stable-Diffusion installation was working fine. One day after starting webui-user.bat the command window got stuck after this:</p>
<pre><code>venv "...\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Commit hash: 5ab7f213bec2f816f9c5644becb32eb72c8ffb89
Installing requirements
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
Loading weights [cc6cb27103] from ...\StableDiffusion\models\Stable-diffusion\v1-5-pruned-emaonly.ckpt
Creating model from config: ...\StableDiffusion\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(0):
Model loaded in 4.7s (load weights from disk: 2.2s, create model: 0.3s, apply weights to model: 0.4s, apply half(): 0.4s, move model to device: 0.4s, load textual inversion embeddings: 1.0s).
</code></pre>
<p>Nothing happens after this; it just doesn't continue (doesn't show the IP for the web interface).
I think I uninstalled/reinstalled Python before this happened, but I'm not sure anymore.
I didn't change webui-user.bat.</p>
<p>Any ideas?</p>
|
<python><stable-diffusion>
|
2023-05-13 19:11:48
| 2
| 1,015
|
sobo
|
76,244,447
| 2,803,777
|
Why is mixing ratio at LCL not the same as for starting condition?
|
<p>I'm very new to metpy. As a first example I tried to calculate the LCL of an air parcel with T=25°C, p=950hPa and relative humidity=70%.</p>
<p>First, the mixing ratio of this air is calculated to be</p>
<pre><code>mr: 15.016493919350289 gram / kilogram
</code></pre>
<p>Afterwards, the LCL is calculated as:</p>
<pre><code>pLCL: 871.9597894290378 hectopascal
TLCL: 17.786684840244334 degree_Celsius
</code></pre>
<p>Now, I tried to calculate saturation equivalent potential temperature and mixing_ratio for levels starting with pLCL. To my surprise, the mixing ratio, calculated with <code>mixing_ratio_from_relative_humidity</code> at the LCL differs from the mixing ratio of the original parcel. Instead of getting again mr=15.016493919350289 gram / kilogram I get 14.862703501798386 gram / kilogram. Shouldn't it be the same, because at LCL the relative humidity is 100% and during dry adiabatic rise to LCL mixing ratio cannot change?</p>
<p>There is a link to this question in: <a href="https://earthscience.stackexchange.com/questions/25188/why-is-mixing-ratio-at-lcl-not-the-same-as-for-starting-condition">https://earthscience.stackexchange.com/questions/25188/why-is-mixing-ratio-at-lcl-not-the-same-as-for-starting-condition</a></p>
<pre><code>from metpy.calc import dewpoint_from_relative_humidity
from metpy.calc import equivalent_potential_temperature
from metpy.calc import mixing_ratio_from_relative_humidity
from metpy.calc import equivalent_potential_temperature
from metpy.calc import lcl
from metpy.calc import moist_lapse
from metpy.calc import saturation_equivalent_potential_temperature
from metpy.units import units
p = 950 * units.hPa
T = 25 * units.degC
rh = 70 * units.percent
print('p:\t', p)
print('T:\t', T)
print('RF:\t', rh)
dp = dewpoint_from_relative_humidity(T, rh)
mr = mixing_ratio_from_relative_humidity(p, T, rh).to('g/kg')
print('mr:\t', mr)
ept = equivalent_potential_temperature(p, T, dp)
print('Θe :\t', ept.to(units.degC))
mylcl = lcl(p, T, dp)
plcl = mylcl[0]
tlcl = mylcl[1]
print('pLCL:\t', plcl)
print('TLCL:\t', tlcl)
plevs = [plcl.magnitude, 800, 700, 600, 500, 400, 300, 200, 100, 50, 25] * units.hPa
ml = moist_lapse(plevs, tlcl).to('degC')
print()
print('Moist adiabatic lift, starting from LCL')
print()
rh100 = 100 * units.percent
for idx in range(len(plevs)):
p = plevs[idx]
T = ml[idx]
print('-------------------------------------------------------------')
print('p:\t', p)
print('T(p):\t', T)
ept = saturation_equivalent_potential_temperature(plevs[idx], ml[idx])
print('Θe(p):\t', ept.to(units.degC))
mr = mixing_ratio_from_relative_humidity(p, T, rh100)
print('mr:\t', mr.to('g/kg'))
p: 950 hectopascal
T: 25 degree_Celsius
RF: 70 percent
mr: 15.016493919350289 gram / kilogram
Θe : 73.59933073452271 degree_Celsius
pLCL: 871.9597894290378 hectopascal
TLCL: 17.786684840244334 degree_Celsius
Moist adiabatic lift, starting from LCL
-------------------------------------------------------------
p: 871.9597894290378 hectopascal
T(p): 17.786684840244334 degree_Celsius
Θe(p): 73.56266438016314 degree_Celsius
mr: 14.862703501798386 gram / kilogram <--- !!!!!!!
-------------------------------------------------------------
p: 800.0 hectopascal
T(p): 14.680478970030833 degree_Celsius
Θe(p): 73.7567612561337 degree_Celsius
mr: 13.254531670351966 gram / kilogram
-------------------------------------------------------------
p: 700.0 hectopascal
T(p): 9.70674918909208 degree_Celsius
Θe(p): 73.97100713514288 degree_Celsius
mr: 10.878278101841651 gram / kilogram
-------------------------------------------------------------
p: 600.0 hectopascal
T(p): 3.6628633969115754 degree_Celsius
Θe(p): 74.0966050124087 degree_Celsius
mr: 8.34266379949833 gram / kilogram
-------------------------------------------------------------
p: 500.0 hectopascal
T(p): -4.036747412608861 degree_Celsius
Θe(p): 74.09741860835788 degree_Celsius
mr: 5.6959850509157075 gram / kilogram
-------------------------------------------------------------
p: 400.0 hectopascal
T(p): -14.539107430252784 degree_Celsius
Θe(p): 73.9329618099161 degree_Celsius
mr: 3.10991709553994 gram / kilogram
-------------------------------------------------------------
p: 300.0 hectopascal
T(p): -30.169336134705304 degree_Celsius
Θe(p): 73.61031144188621 degree_Celsius
mr: 1.043016676418755 gram / kilogram
-------------------------------------------------------------
p: 200.0 hectopascal
T(p): -54.703600518566134 degree_Celsius
Θe(p): 73.32450117793161 degree_Celsius
mr: 0.11362204759565221 gram / kilogram
-------------------------------------------------------------
p: 100.0 hectopascal
T(p): -93.72997677325506 degree_Celsius
Θe(p): 73.25906799180353 degree_Celsius
mr: 0.0005989154996261802 gram / kilogram
-------------------------------------------------------------
p: 50.0 hectopascal
T(p): -125.96434978295119 degree_Celsius
Θe(p): 73.2583328918671 degree_Celsius
mr: 4.5360156886986637e-07 gram / kilogram
-------------------------------------------------------------
p: 25.0 hectopascal
T(p): -152.4084163524267 degree_Celsius
Θe(p): 73.25830368606137 degree_Celsius
mr: 2.1998938295740055e-11 gram / kilogram
</code></pre>
|
<python><metpy>
|
2023-05-13 19:02:59
| 1
| 1,502
|
MichaelW
|
76,244,436
| 173,003
|
Regular expression matching either an empty string or a string in a given set
|
<p>I would like to match either "direction" (<code>">"</code> or <code>"<"</code>) from a string like <code>"->"</code>, <code>"<=="</code>, <code>"..."</code>. When the string contains no direction, I want to match <code>""</code>. More precisely, the equivalent Python expression would be:</p>
<pre class="lang-py prettyprint-override"><code>">" if ">" in s else ("<" if "<" in s else "")
</code></pre>
<p>I first came up with this simple regular expression:</p>
<pre class="lang-py prettyprint-override"><code>re.search(r"[<>]|", s)[0]
</code></pre>
<p>... but it evaluates to <code>""</code> as soon as the direction is not in the first position. How would you do that?</p>
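<p>One fix that keeps a single pattern: anchor the empty alternative to the end of the string, so it can only win once every position has been tried for a direction character. Note one difference from the Python expression: if both characters occur, this returns whichever comes first, not <code>">"</code> preferentially.</p>

```python
import re

def direction(s):
    # re.search scans left to right; at each position "[<>]" is tried
    # before "$", and "$" only matches at the end of the string, so ""
    # is returned only when no '<' or '>' occurs anywhere.
    return re.search(r"[<>]|$", s)[0]

print(direction("->"))   # '>'
print(direction("<=="))  # '<'
print(direction("..."))  # ''
```

<p>Use <code>r"[<>]|\Z"</code> instead if strings may end with a newline, since <code>$</code> also matches just before a trailing newline.</p>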
|
<python><regex>
|
2023-05-13 18:59:47
| 3
| 4,114
|
Aristide
|
76,244,137
| 13,555,387
|
Failed to parse the problem -- EOL while scanning string literal (, line 43)
|
<p>I am getting the error: /tmp/solver_planning_domains_tmp_2lmCSmQMYFyo5/problem.pddl: syntax error in line 14, '0':
'define' expected</p>
<p>my problem.pddl is</p>
<pre><code> (define (problem klotski)
(:domain klotski)
(:objects
piece2x2 - piece
piece2x1 - piece
piece1x2-1 piece1x2-2 piece1x2-3 piece1x2-4 - piece
piece1x1-1 piece1x1-2 piece1x1-3 piece1x1-4 - piece
exit - piece
)
(:init
(at piece2x2 0 0)
(at piece2x1 0 2)
(at piece1x2-1 0 4)
(at piece1x2-2 2 0)
(at piece1x2-3 2 2)
(at piece1x2-4 2 4)
(at piece1x1-1 3 0)
(at piece1x1-2 3 1)
(at piece1x1-3 3 3)
(at piece1x1-4 3 4)
(at exit 1 4)
(empty 1 0) (empty 1 1) (empty 1 3) (empty 1 4)
(empty 2 1) (empty 2 3) (empty 3 2)
)
(:goal
(and
(at piece2x2 1 3)
(empty 0 0) (empty 0 1) (empty 0 3) (empty 0 4)
(empty 2 0) (empty 2 1) (empty 2 3) (empty 2 4)
(empty 3 0) (empty 3 1) (empty 3 3) (empty 3 4)
(empty 1 1) (empty 1 3) (empty 2 1) (empty 2 3)
)
)
)
</code></pre>
<p>Where is the problem?
The assignment is: Klotski is a sliding puzzle. There is one 2×2 piece, one 2×1 piece, four 1×2 pieces, and four 1×1 pieces. Initially, the pieces are placed on a 4×5 board, as shown in the following figure. The goal of the game is to slide the 2×2 piece to the exit. No pieces can be removed from the board, and pieces can only slide into empty spaces horizontally or vertically.</p>
|
<python><dns><editor><pddl>
|
2023-05-13 17:45:16
| 1
| 353
|
Munna Khandakar
|
76,243,939
| 4,348,400
|
How should I interpret the output of tdda.rexpy.extract?
|
<p>I am interested in <a href="https://tdda.readthedocs.io/en/v2.0.09/rexpy.html#the-rexpy-command" rel="nofollow noreferrer">Rexpy</a> because I am looking for a tool which infers a regular expression that would match a string. Inspecting <code>rexpy.extract</code> with <code>help</code>, it looked like it 'might' be what I want.</p>
<pre class="lang-py prettyprint-override"><code>extract(examples, tag=False, encoding=None, as_object=False, extra_letters=None, full_escape=False, remove_empties=False, strip=False, variableLengthFrags=False, max_patterns=None, min_diff_strings_per_pattern=1, min_strings_per_pattern=1, size=None, seed=None, dialect='portable', verbose=0)
Extract regular expression(s) from examples and return them.
Normally, examples should be unicode (i.e. ``str`` in Python3,
and ``unicode`` in Python2). However, encoded strings can be
passed in provided the encoding is specified.
Results will always be unicode.
If as_object is set, the extractor object is returned,
with results in .results.rex; otherwise, a list of regular
expressions, as unicode strings is returned.
</code></pre>
<p>So I tried an example:</p>
<pre class="lang-py prettyprint-override"><code>>>> from tdda import rexpy
>>> s = 'andrew.gelman@statistics.com'
>>> rexpy.extract(s)
['^[.@]$', '^[a-z]$']
</code></pre>
<p>I expected something similar to <code>['^[a-z].[a-z]@[a-z].[a-z]$']</code> rather than <code>['^[.@]$', '^[a-z]$']</code>. Is the extractor just telling me that special symbols <code>'.'</code> and <code>'@'</code> are used 'somewhere' in the string?</p>
|
<python><regex>
|
2023-05-13 16:57:42
| 1
| 1,394
|
Galen
|
76,243,849
| 12,845,199
|
Regex that removes whitespaces between two specific characters
|
<p>In pyspark I have the following expression</p>
<pre><code>df.withColumn('new_descriptions',lower(regexp_replace('descriptions',r"\t+",'')))
</code></pre>
<p>Which basically removes tab characters and makes my descriptions columns become lower</p>
<p><strong>Here is a list samples of my descriptions columns</strong></p>
<pre><code>['banha frimesa 450 gr','manteiga com sal tourinho pote 200 g','acucar refinado caravelas pacote 1kg',
'acucar refinado light uniao fit pacote 500g','farinha de trigo especial 101 5kg']
</code></pre>
<p>What I want to do is remove the whitespace between a value and its unit.
For example, <strong>banha frimesa 450 gr</strong> should become <strong>banha frimesa 450gr</strong>.</p>
<p>But I also need to avoid removing the whitespace between a digit and another digit-plus-unit.</p>
<p>For example, <strong>farinha de trigo especial 101 5kg</strong> should stay the same.</p>
<p>What kind of regex should I use to remove only the whitespace between a kg, ml, l, or g unit and its value?</p>
<p><strong>Wanted Result:</strong></p>
<pre><code>['banha frimesa 450gr','manteiga com sal tourinho pote 200g','acucar refinado caravelas pacote 1kg',
'acucar refinado light uniao fit pacote 500g','farinha de trigo especial 101 5kg']
</code></pre>
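<p>For context, the closest I've come is a lookaround-based pattern; I'm not sure it covers every case, and the unit list here is just my guess:</p>

```python
import re

# Hypothetical pattern: drop whitespace only when a digit is immediately
# before it and a bare unit (letters only) immediately after it.
UNIT_GAP = re.compile(r"(?<=\d)\s+(?=(?:kg|ml|gr|g|l)\b)")

samples = [
    'banha frimesa 450 gr',
    'manteiga com sal tourinho pote 200 g',
    'acucar refinado caravelas pacote 1kg',
    'acucar refinado light uniao fit pacote 500g',
    'farinha de trigo especial 101 5kg',
]
cleaned = [UNIT_GAP.sub("", s) for s in samples]
# '450 gr' becomes '450gr', but '101 5kg' stays untouched because the
# lookahead sees a digit ('5') rather than a unit.
```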
|
<python><regex><pyspark>
|
2023-05-13 16:40:06
| 1
| 1,628
|
INGl0R1AM0R1
|
76,243,659
| 12,263,681
|
Flask app not able to find installed modules
|
<p>I have a very basic Flask app with some other Python modules installed, however upon trying to start the app I keep getting <code>ModuleNotFound</code> errors. I have attempted to run the app on two different machines, a laptop running macOS and a PC running Windows, and both give me <code>ModuleNotFound</code> errors, although they are for different packages.</p>
<p>The error is specifically happening on this line:</p>
<pre><code>db = SQLAlchemy(app)
</code></pre>
<p>and this is the error:</p>
<pre><code>Error: While importing 'app', an ImportError was raised:
Traceback (most recent call last):
File "/Users/path/to/project/venv/lib/python3.9/site-packages/flask/cli.py", line 218, in locate_app
__import__(module_name)
File "/Users/path/to/project/app.py", line 21, in <module>
db = SQLAlchemy(app)
File "/Users/path/to/project/venv/lib/python3.9/site-packages/flask_sqlalchemy/extension.py", line 219, in __init__
self.init_app(app)
File "/Users/path/to/project/venv/lib/python3.9/site-packages/flask_sqlalchemy/extension.py", line 326, in init_app
engines[key] = self._make_engine(key, options, app)
File "/Users/path/to/project/venv/lib/python3.9/site-packages/flask_sqlalchemy/extension.py", line 614, in _make_engine
return sa.engine_from_config(options, prefix="")
File "/Users/path/to/project/venv/lib/python3.9/site-packages/sqlalchemy/engine/create.py", line 804, in engine_from_config
return create_engine(url, **options)
File "<string>", line 2, in create_engine
File "/Users/path/to/project/venv/lib/python3.9/site-packages/sqlalchemy/util/deprecations.py", line 283, in warned
return fn(*args, **kwargs) # type: ignore[no-any-return]
File "/Users/path/to/project/venv/lib/python3.9/site-packages/sqlalchemy/engine/create.py", line 601, in create_engine
dbapi = dbapi_meth(**dbapi_args)
File "/Users/path/to/project/venv/lib/python3.9/site-packages/sqlalchemy/dialects/mysql/pymysql.py", line 75, in import_dbapi
return __import__("pymysql")
ModuleNotFoundError: No module named 'pymysql'
</code></pre>
<p>Here is my entire <code>app.py</code> file:</p>
<pre class="lang-py prettyprint-override"><code>import os

from flask import Flask, render_template
from flask_login import LoginManager
from flask_sqlalchemy import SQLAlchemy
from dotenv import load_dotenv

load_dotenv()

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = os.getenv('SQLALCHEMY_DATABASE_URI')
app.config['SECRET_KEY'] = os.getenv('SECRET_KEY')
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
db = SQLAlchemy(app)

login_manager = LoginManager()
login_manager.login_view = 'users.login'
login_manager.init_app(app)

from models import User

@login_manager.user_loader
def load_user(id):
    """Loads the user when given their ID."""
    return User.query.get(int(id))

# Home page
@app.route('/')
def index():
    """Renders the index page."""
    return render_template('main/index.html')

# Blueprint imports from each package
from users.views import users_blueprint

# Registering each blueprint so that it can be rendered and shown
app.register_blueprint(users_blueprint)

if __name__ == '__main__':
    app.run()
</code></pre>
<p>I am using PyCharm as an IDE and can confirm that all the packages have been successfully installed. This is my requirements.txt:</p>
<pre><code>Flask
flask_login
flask_wtf
flask_sqlalchemy
SQLAlchemy==2.0.12
python-dotenv==1.0.0
</code></pre>
<p>I do not know much about how Python and pip work with virtual environments, so I am not sure what information to provide.</p>
|
<python><flask>
|
2023-05-13 15:53:57
| 0
| 528
|
SirArchibald
|
76,243,447
| 4,311,316
|
PysimpleGui: pre-select an Item in a Combobox on Keyboard input
|
<p>I have a GUI with pysimplegui that uses a combobox.
There are alphabetically sorted strings in the list for the combobox.</p>
<p>The Combobox is readonly (to only input the strings available),
so the user gets the drop down list right away when clicking anywhere on the combobox element.</p>
<p>When the drop-down list is open, the user can highlight items by the "up" and "down" keys.
Also selection of the highlighted item by the enter key is possible.</p>
<p><em>It would increase the usability, if the highlight jumps to (or selects) the entry that starts with S when pushing the "s" key on the keyboard.</em></p>
<p><strong>What I tried</strong></p>
<ul>
<li><p>opening the window with <code>return_keyboard_events=True</code> but it would not change the behaviour.</p>
</li>
<li><p>Fiddling with the <code>enable_per_char_events=True</code> for the combo element just shows that this method is no longer available in version 4.60.4 (though it is still in the <a href="https://www.pysimplegui.org/en/latest/call%20reference/#combo-element" rel="nofollow noreferrer">docs</a>) and the code crashes.</p>
</li>
</ul>
<p><strong>Screenshot</strong></p>
<p><a href="https://i.sstatic.net/kxMsE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kxMsE.png" alt="enter image description here" /></a></p>
|
<python><pysimplegui>
|
2023-05-13 15:13:36
| 1
| 385
|
Red
|
76,243,166
| 16,383,578
|
Can I save memory by replacing list elements with None in Python?
|
<p>I have millions of rows of data like these:</p>
<pre><code> ["1.0.0.0/24", 16777216, 16777471, "1.0.0.0", "1.0.0.255", 256, 13335, "AU", false, true, false, false],
["1.0.1.0/23", 16777472, 16777983, "1.0.1.0", "1.0.2.255", 512, null, "CN", false, false, false, false],
["1.0.3.0/24", 16777984, 16778239, "1.0.3.0", "1.0.3.255", 256, null, "CN", false, false, false, false]
</code></pre>
<p>I saved them in JSON files and also in an SQLITE3 database. I am going to pull all the data from the database at the start of a script, to make the data querying happen entirely in memory, thus saving time by avoiding slow filesystem calls.</p>
<p>And this also means it will take a lot of memory; I measured the memory usage of the data to be about 500MiB. I will put them into a <code>list</code> and use binary search to find the index of the closest starting IP address that is less than or equal to any given IP address, then determine whether the IP is inside the network located at that index. (I will pull the starts and ends out of the <code>list</code>.)</p>
<p>If the IP is inside the network, the data will be put into a custom class to make the result strongly typed. The result will be cached, so the next time the query is called with the same argument the cached result will be retrieved to save processing time, and the element located at the index will be deleted before the result is returned. (The key will be the index as well as the argument.)</p>
<p>Because I use binary search, this naturally requires the indices to be invariant, but I want to remove the unnecessary elements from the <code>list</code> to save memory, and this would cause the indices to change.</p>
<p>A simple solution to this problem is to not delete the element at the index, but to set the <code>list</code> element at that index to <code>None</code>. Another solution would be to convert the <code>list</code> to a <code>dict</code> with the indices as keys, but of course this would use more memory than a <code>list</code>.</p>
<p>But I don't know if doing so would save memory, so I tried to create <code>list</code>s with the same length and containing the same element at all indices, and it seemed that <code>list</code>s of the same length always have the same size and the size of the elements doesn't matter:</p>
<pre><code>In [200]: ([None]*18).__sizeof__()
Out[200]: 184
In [201]: ([None]*180).__sizeof__()
Out[201]: 1480
In [202]: ([0]*180).__sizeof__()
Out[202]: 1480
In [203]: ([object]*180).__sizeof__()
Out[203]: 1480
In [204]: (['']*180).__sizeof__()
Out[204]: 1480
In [205]: (['abcs']*180).__sizeof__()
Out[205]: 1480
In [206]: (['abcdefg']*180).__sizeof__()
Out[206]: 1480
In [207]: ({i: e for i, e in enumerate(['abcdefg']*180)}).__sizeof__()
Out[207]: 9296
In [208]: 9296/1480
Out[208]: 6.281081081081081
In [209]: ([('abcdefg', 1, 2, 3, 4, 5)]*180).__sizeof__()
Out[209]: 1480
</code></pre>
<p>So can replacing <code>list</code> elements with <code>None</code> save memory? If not, then what is a better way to remove items while keeping indices?</p>
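<p>My working assumption is that a list only stores references, so its own footprint shouldn't change when I overwrite an element with <code>None</code> — only the replaced element object itself can be freed. A quick check of that assumption:</p>

```python
import sys

# A list of 1000 distinct row objects.
rows = [["1.0.0.0/24", 16777216, 16777471, "1.0.0.0"] for _ in range(1000)]
before = sys.getsizeof(rows)

rows[0] = None  # drop our reference; the row object becomes collectable
after = sys.getsizeof(rows)

# The list's own size is unchanged: it is just an array of pointers.
# Memory is saved only because the replaced row can now be garbage-collected.
```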
<hr />
<p>It seems that a <code>list</code> containing a row of the data repeatedly also has the same size at the same length:</p>
<pre><code>In [221]: import json
In [222]: l = json.loads('["1.0.0.0/24", 16777216, 16777471, "1.0.0.0", "1.0.0.255", 256, 13335, "AU", false, true, false, false]')
In [223]: l
Out[223]:
['1.0.0.0/24',
16777216,
16777471,
'1.0.0.0',
'1.0.0.255',
256,
13335,
'AU',
False,
True,
False,
False]
In [224]: ([l]*180).__sizeof__()
Out[224]: 1480
</code></pre>
<hr />
<p>I have made some other tests, but the result doesn't make sense:</p>
<pre><code>In [224]: ([l]*180).__sizeof__()
Out[224]: 1480
In [225]: l.__sizeof__()
Out[225]: 168
In [226]: l = [l]*180
In [227]: l.__sizeof__()
Out[227]: 1480
In [228]: l[0:12] = [None]*12
In [229]: l.__sizeof__()
Out[229]: 1480
In [230]: list(range(180)).__sizeof__()
Out[230]: 1480
</code></pre>
<p>It seems that the size of a <code>list</code> is only related to its length and not related to its contents whatsoever, but this simply can't be true.</p>
<hr />
<p>No the binary search won't be broken, since I will store the starting IP of networks as integers in a separate list, and ending IP of networks as integers in yet another list, and these two lists will not change.</p>
<p>It's like this:</p>
<pre class="lang-py prettyprint-override"><code>
from bisect import bisect

STARTS = [row[1] for row in data]
ENDS = [row[2] for row in data]

store = {}

def query(ip):
    if ip in store:
        return store[ip]
    index = bisect(STARTS, ip) - 1
    if index >= 0:
        if not STARTS[index] <= ip <= ENDS[index]:
            return
        if index in store:
            result = store[index]
            store[ip] = result
            return result
        row = data[index]
        data[index] = None
        result = Network(row)
        store[index] = result
        store[ip] = result
        return result
</code></pre>
<p>I didn't actually write working code, though writing it is trivial, I just don't know if this would end up saving memory.</p>
<hr />
<p>I have benchmarked the SQLite3 query and found it to take around 40 milliseconds to complete a single query:</p>
<pre><code>In [232]: import sqlite3
In [233]: conn = sqlite3.connect('D:/network_guard/IPFire_locations.db')
In [234]: cur = conn.cursor()
In [235]: cur.execute('select * from IPv4 where start_integer = 3113839616;')
Out[235]: <sqlite3.Cursor at 0x20b5d9c66c0>
In [236]: cur.fetchone()
Out[236]:
('185.153.108.0/22',
3113839616,
3113840639,
'185.153.108.0',
'185.153.111.255',
1024,
3242,
'IT',
0,
0,
0,
0)
In [237]: %timeit cur.execute('select * from IPv4 where start_integer = 3113839616;')
38.8 ms ± 805 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [238]: cur.execute('select * from IPv4 where start_integer = 3113839616;')
Out[238]: <sqlite3.Cursor at 0x20b5d9c66c0>
In [239]: %timeit cur.fetchone()
58.8 ns ± 0.672 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)
</code></pre>
<hr />
<p>Using <code>bisect</code> takes under 1 microsecond to complete the same query, and there are 567778 rows for IPv4 addresses and 446631 rows for IPv6 addresses, for a total of 1014409 rows. Just fetching all the rows and creating the <code>lists</code> takes about 500MiB of memory.</p>
<pre><code>In [246]: cur.execute('select * from IPv4;')
Out[246]: <sqlite3.Cursor at 0x20b5d9c66c0>
In [247]: data = cur.fetchall()
In [248]: STARTS = [row[1] for row in data]
In [249]: bisect(STARTS, 3113839616)
Out[249]: 366233
In [250]: %timeit bisect(STARTS, 3113839616)
341 ns ± 6.43 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
In [251]: len(data)
Out[251]: 567778
In [252]: cur.execute('select * from IPv6;')
Out[252]: <sqlite3.Cursor at 0x20b5d9c66c0>
In [253]: data6 = cur.fetchall()
In [254]: len(data6)
Out[254]: 446631
In [255]: 567778 + 446631
Out[255]: 1014409
</code></pre>
<p>I determined the memory usage by using Task Manager, just by checking the memory usage of the process right before fetching the rows and right after fetching the rows, to calculate the difference.</p>
<p>If I create all instances of custom classes upfront, I don't think I have enough RAM for all the objects even though I have 16GiB (I open multiple tabs in browsers and the browser take multiple GibiBytes of RAM, so I don't have much available RAM).</p>
<p>And I won't make any more edits to this post.</p>
|
<python><python-3.x>
|
2023-05-13 14:11:23
| 1
| 3,930
|
Ξένη Γήινος
|
76,243,065
| 353,337
|
Replace all superscript digits with one `re.sub`
|
<p>Given a string with exponents like <code>^2</code> in it, I'd like to replace them with their unicode equivalents using Python.</p>
<p>This works:</p>
<pre class="lang-py prettyprint-override"><code>import re
content = "m^2 s^3"
content = re.sub(r"\^0", "\N{Superscript Zero}", content)
content = re.sub(r"\^1", "\N{Superscript One}", content)
content = re.sub(r"\^2", "\N{Superscript Two}", content)
content = re.sub(r"\^3", "\N{Superscript Three}", content)
content = re.sub(r"\^4", "\N{Superscript Four}", content)
content = re.sub(r"\^5", "\N{Superscript Five}", content)
content = re.sub(r"\^6", "\N{Superscript Six}", content)
content = re.sub(r"\^7", "\N{Superscript Seven}", content)
content = re.sub(r"\^8", "\N{Superscript Eight}", content)
content = re.sub(r"\^9", "\N{Superscript Nine}", content)
print(content)
</code></pre>
<p>but I feel this repetition (or loop equivalent) can't be too efficient. Is there a way to solve the problem with one <code>re.sub</code>?</p>
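<p>The best I've come up with so far is a single <code>re.sub</code> with a callable replacement and a lookup table, though I'm not sure it's the idiomatic way:</p>

```python
import re

# Map each ASCII digit to its Unicode superscript equivalent.
SUPERSCRIPTS = {
    "0": "\N{SUPERSCRIPT ZERO}",  "1": "\N{SUPERSCRIPT ONE}",
    "2": "\N{SUPERSCRIPT TWO}",   "3": "\N{SUPERSCRIPT THREE}",
    "4": "\N{SUPERSCRIPT FOUR}",  "5": "\N{SUPERSCRIPT FIVE}",
    "6": "\N{SUPERSCRIPT SIX}",   "7": "\N{SUPERSCRIPT SEVEN}",
    "8": "\N{SUPERSCRIPT EIGHT}", "9": "\N{SUPERSCRIPT NINE}",
}

# One pass: match '^' followed by a digit and look the digit up.
content = re.sub(r"\^(\d)", lambda m: SUPERSCRIPTS[m.group(1)], "m^2 s^3")
```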
|
<python><regex>
|
2023-05-13 13:49:08
| 1
| 59,565
|
Nico Schlömer
|
76,243,038
| 14,860,526
|
authentication and authorization with azure AD and python
|
<p>I have two apps both registered on Azure AD:</p>
<p>-APP_1: a web app that runs in flask with SSO through Azure AD</p>
<pre><code>tenant_id = "tenant_id_of_APP_1_and_APP_2"
client_id_app_1 = "app_1"
client_secret_app_1 = "secret_app_1"
</code></pre>
<p>-APP_2: another app that makes requests to the first app:</p>
<pre><code>tenant_id = "tenant_id_of_APP_1_and_APP_2"
client_id_app_2 = "app_2"
client_secret_app_2 = "secret_app_2"
</code></pre>
<p>The endpoints on APP_1 are restricted. When the user reaches the web app through the browser he is automatically asked to log in, and after he is identified he can navigate to any endpoint he has access to; otherwise he gets a PermissionError.</p>
<p>Now I want to give APP_2 access to some endpoints. I can get an access_token from Azure in this way (I think I'm doing something wrong, because when I request a token I should specify that it is for APP_1, but I cannot find the argument that specifies it):</p>
<pre><code>import requests
# Authenticate and get an access token
auth_url = f'https://login.microsoftonline.com/{tenant_id_of_APP_1_and_APP_2}/oauth2/v2.0/token'
data = {
'grant_type': 'client_credentials',
'client_id': client_id_app_2,
'client_secret': client_secret_app_2,
'scope': 'https://graph.microsoft.com/.default'
}
response = requests.post(auth_url, data=data).json()
access_token = response['access_token']
</code></pre>
<p>and then I can make the request to the protected API:</p>
<pre><code>headers = {
'Authorization': f'Bearer {access_token}',
'Content-Type': 'application/octet-stream',
}
x = requests.post("https://localhost:5000/test_api", headers=headers)
response = requests.get("http://localhost:5000/test_api", headers=headers)
</code></pre>
<p>What should I do on the server (APP_1) when I get such a request? I should validate the access_token to verify the user, but I cannot decode it with jwt because I keep getting an invalid signature:</p>
<pre><code>token = request.headers.get("Authorization").removeprefix("Bearer ")
public_key = get_public_key(token)
decoded = jwt.decode(
token,
public_key,
verify=True,
algorithms=["RS256"],
audience=[client_id_app_1],
)
</code></pre>
<p>is there a better way to just send the access_token to azure for validation?</p>
<p>EDIT:</p>
<p>So I changed the code in this way. On APP_2:</p>
<pre><code>server_client_id = server_app_config.CLIENT_ID
app = msal.ConfidentialClientApplication(client_id=client_id_app_2, client_credential=client_secret_app_2, authority=f"https://login.microsoftonline.com/{tenant_id_of_APP_1_and_APP_2}")
result = app.acquire_token_for_client(scopes=[f"{client_id_app_1}/.default"])
access_token = result["access_token"]
headers = {
'Authorization': f'Bearer {access_token}',
'Content-Type': 'application/octet-stream',
}
response = requests.post(url, headers=headers)
</code></pre>
<p>on APP_1 I validate the token in this way:</p>
<pre><code>token = request.headers.get("Authorization").removeprefix("Bearer ")
public_key = get_public_key(token)
decoded = jwt.decode(
token,
public_key,
verify=True,
algorithms=['RS256'],
audience=[client_id_app_1],
)
</code></pre>
<p>Is this how the workflow should be?</p>
|
<python><azure-active-directory><jwt><azure-ad-msal>
|
2023-05-13 13:44:36
| 1
| 642
|
Alberto B
|
76,243,036
| 6,124,330
|
Multiple records on one row in pandas
|
<p>Suppose I have a pandas data frame that stores multiple records on the same row as below</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id1</th>
<th>id2</th>
<th>id3</th>
<th>valueA1</th>
<th>valueA2</th>
<th>valueA3</th>
<th>valueB1</th>
<th>valueB2</th>
<th>valueB3</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2</td>
<td>3</td>
<td>X</td>
<td>Y</td>
<td>Z</td>
<td>A</td>
<td>B</td>
<td>C</td>
</tr>
<tr>
<td>2</td>
<td>1</td>
<td>3</td>
<td>P</td>
<td>Q</td>
<td>U</td>
<td>S</td>
<td>V</td>
<td>M</td>
</tr>
</tbody>
</table>
</div>
<p>I am looking for a generic (an arbitrary number of IDs and and associated values) way to stack these records such that I have</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>valueA</th>
<th>valueB</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>X</td>
<td>A</td>
</tr>
<tr>
<td>2</td>
<td>Y</td>
<td>B</td>
</tr>
<tr>
<td>3</td>
<td>Z</td>
<td>C</td>
</tr>
<tr>
<td>2</td>
<td>P</td>
<td>S</td>
</tr>
<tr>
<td>1</td>
<td>Q</td>
<td>V</td>
</tr>
<tr>
<td>3</td>
<td>U</td>
<td>M</td>
</tr>
</tbody>
</table>
</div>
|
<python><pandas>
|
2023-05-13 13:44:10
| 3
| 695
|
Ivor Denham-Dyson
|
76,242,916
| 386,861
|
Not sure what is wrong with my Altair code for visualisation in pandas?
|
<p>I'm working through a tutorial on Altair in Python.</p>
<p><a href="https://www.youtube.com/watch?v=umTwkgQoo_E" rel="nofollow noreferrer">https://www.youtube.com/watch?v=umTwkgQoo_E</a></p>
<p>Loaded data fine.</p>
<pre><code>import pandas as pd

df = pd.read_csv("https://raw.githubusercontent.com/onlyphantom/miband/main/data/run_1km.csv", parse_dates=['startTime'])
df['day_of_week'] = df['startTime'].dt.day_name()
df.head()
</code></pre>
<p>Got my first plots such as:</p>
<pre><code>alt.Chart(df).mark_point().encode(
    x = "seconds_per_km",
    y = "day_of_week",
    color = "day_of_week"
).interactive()
</code></pre>
<p><a href="https://i.sstatic.net/Em2xm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Em2xm.png" alt="enter image description here" /></a></p>
<p>But when I try to use</p>
<pre><code>alt.Chart(df).mark_point().encode(
    alt.x("seconds_per_km"),
    alt.y("day_of_week"),
    alt.color("day_of_week")
).interactive()
</code></pre>
<p>I get an error:</p>
<pre><code>Cell In[37], line 2
      1 alt.Chart(df).mark_point().encode(
----> 2     alt.x("seconds_per_km"),
      3     alt.y("day_of_week"),
      4     alt.color("day_of_week")
      5 ).interactive()

AttributeError: module 'altair' has no attribute 'x'
</code></pre>
<p>Why? I'm running version 4.2.2</p>
|
<python><pandas><altair>
|
2023-05-13 13:13:27
| 1
| 7,882
|
elksie5000
|
76,242,900
| 9,146,682
|
Python + TypeError: MarketData.get_kline() missing 1 required positional argument: 'kline_type'
|
<p>I need to extract BTC data from Kucoin. Here is the python code:</p>
<pre class="lang-py prettyprint-override"><code>import kucoin.client as kc
import pandas as pd

api_key = '***6120**********'
api_secret = '****-0078**************'
client = kc.Market(url='https://api.kucoin.com', key=api_key, secret=api_secret)

symbol = 'BTC-USDT'
interval = '1day'
start_time = '2022-01-01'
end_time = '2022-04-29'
kline_type = 'spot'

klines = client.get_kline(symbol=symbol, interval=interval, startAt=start_time, endAt=end_time, klineType=kline_type)
</code></pre>
<p>I keep getting the following error:</p>
<pre><code>TypeError: MarketData.get_kline() missing 1 required positional argument: 'kline_type'
</code></pre>
|
<python>
|
2023-05-13 13:09:17
| 1
| 355
|
hdsouza
|
76,242,791
| 2,987,552
|
audio streaming not working in flask using generator / yield
|
<p>I am trying to stream two audio files one after the other in Flask using a generator / <code>yield</code>. However, it plays only the first file and not the other. The following is my code:</p>
<pre><code>from flask import Flask, send_file, Response
import random
import time
import wave

# Flask constructor takes the name of
# current module (__name__) as argument.
app = Flask(__name__)

# The route() function of the Flask class is a decorator,
# which tells the application which URL should call
# the associated function.
@app.route('/')
# '/' URL is bound with paadas() function.
def paadas():
    # Approach 2: not working
    def generate(files):
        for file in files:
            yield file.read()

    files = []
    number = random.randint(1, 10)
    f1 = open("../numbers/" + str(number) + ".wav", 'rb')
    files.append(f1)
    times = random.randint(1, 10)
    f2 = open("../times/" + str(times) + ".wav", 'rb')
    files.append(f2)
    return Response(generate(files), mimetype='audio/wav')

# main driver function
if __name__ == '__main__':
    # run() method of Flask class runs the application
    # on the local development server.
    app.run()
</code></pre>
<p>What am I missing here? You can see my attempts at three approaches at <a href="https://github.com/sameermahajan/PaadasMLFlaskApp" rel="nofollow noreferrer">https://github.com/sameermahajan/PaadasMLFlaskApp</a>. Only the first one works, but it is not very elegant. If you want to try out the program, you can get the prerecorded audio files of "numbers" and "times" from <a href="https://github.com/sameermahajan/Paadas" rel="nofollow noreferrer">https://github.com/sameermahajan/Paadas</a>. These are numbers in Marathi (an Indian language) in recited multiplication-table format.</p>
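<p>My current suspicion (unverified) is that each .wav file carries its own RIFF header, so yielding the raw bytes back to back does not produce one valid WAV stream — the player stops after the first header's declared length. A sketch of merging the frames with the stdlib <code>wave</code> module instead, assuming all clips share the same sample parameters:</p>

```python
import io
import wave

def merge_wavs(files):
    """Concatenate WAV sources (paths or binary file objects) that share
    the same channels/width/rate into a single in-memory WAV."""
    buf = io.BytesIO()
    writer = None
    for f in files:
        with wave.open(f, 'rb') as reader:
            if writer is None:
                writer = wave.open(buf, 'wb')
                writer.setparams(reader.getparams())
            writer.writeframes(reader.readframes(reader.getnframes()))
    writer.close()  # patches the header with the true total frame count
    return buf.getvalue()

# quick self-check with two synthetic mono clips of 10 and 5 frames
def _make_clip(n_frames):
    b = io.BytesIO()
    w = wave.open(b, 'wb')
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(8000)
    w.writeframes(b'\x00\x00' * n_frames)
    w.close()
    b.seek(0)
    return b

merged = merge_wavs([_make_clip(10), _make_clip(5)])
total_frames = wave.open(io.BytesIO(merged), 'rb').getnframes()
```

<p>In the route this would become something like <code>return Response(merge_wavs([f1, f2]), mimetype='audio/wav')</code>.</p>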
|
<python><flask><audio>
|
2023-05-13 12:41:49
| 1
| 598
|
Sameer Mahajan
|
76,242,711
| 3,225,420
|
Frequency plot using dots instead of bars?
|
<p>I'm trying to create the chart in this <a href="https://stackoverflow.com/questions/49703938/how-to-create-a-dot-plot-in-matplotlib-not-a-scatter-plot">question</a>, using this <a href="https://stackoverflow.com/a/49707027/3225420">answer</a>. I'm open to any solution that works.</p>
<p>Visual borrowed from original question:
<a href="https://i.sstatic.net/Ci3Jm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ci3Jm.png" alt="enter image description here" /></a></p>
<p>The difference from that question is that I've already calculated my bin and frequency values, so I don't use <code>numpy</code> or <code>matplotlib</code> to do so.</p>
<p>Here's my sample data, I refer to it as <code>df_fd</code> in my sample code below:</p>
<pre><code> low_bin high_bin frequency
0 13.142857 18.857143 3
1 18.857143 24.571429 5
2 24.571429 30.285714 8
3 30.285714 36.000000 8
4 36.000000 41.714286 7
5 41.714286 47.428571 7
6 47.428571 53.142857 1
7 53.142857 58.857143 1
</code></pre>
<p>Based off the cited question here's my code (<code>df_fd</code> is the <code>DataFrame</code> above):</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.bar(df_fd.low_bin, df_fd.frequency, width= df_fd.high_bin-df_fd.low_bin)
X,Y = np.meshgrid(bins, df_fd['frequency'])
Y = Y.astype(np.float)
Y[Y>df_fd['frequency']] = np.nan
plt.scatter(X,Y)
</code></pre>
<p>This <code>Y[Y>df_fd['frequency']] = np.nan</code> statement is what fails, and I don't know how to get around it. I understand what it's trying to do; my best guess is that somehow mapping the matrix index to the DataFrame index would help, but I'm not sure how to do that.</p>
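<p>One workaround I've been considering (not sure it's idiomatic) is to skip <code>meshgrid</code> entirely and build the dot coordinates with a plain loop over my already-binned rows — one stacked dot per unit of frequency, placed at the bin centre:</p>

```python
# A sample of the (low_bin, high_bin, frequency) rows from df_fd above.
rows = [
    (13.142857, 18.857143, 3),
    (18.857143, 24.571429, 5),
    (24.571429, 30.285714, 8),
]

xs, ys = [], []
for low, high, freq in rows:
    centre = (low + high) / 2          # dot column at the bin centre
    for level in range(1, freq + 1):   # stack dots: y = 1, 2, ..., freq
        xs.append(centre)
        ys.append(level)

# plt.scatter(xs, ys) would then draw the stacked-dot histogram.
```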
<p>Thank you for helping me!</p>
|
<python><matplotlib><dot-plot>
|
2023-05-13 12:24:38
| 1
| 1,689
|
Python_Learner
|
76,242,695
| 21,188,902
|
Script to generate Markdown files with embedded PlantUML diagrams for GitLab's PlantUML renderer
|
<p>I am setting up a repository to store software documentation consisting of several documents written in Markdown, and I want to be able to embed PlantUML diagrams in them. The repository is hosted in GitLab, which includes a PlantUML renderer but does not allow <a href="https://plantuml.com/preprocessing" rel="nofollow noreferrer">preprocessing</a>, and therefore the use of the <code>!include</code> clause to reference diagrams in other files.</p>
<p>I would like to have a bash or python script that:</p>
<ol>
<li>Searches all .md files and append their content one after the other in a new file "all-docs.md".</li>
<li>Searches in that file "all-docs.md" for all the <code>!include [FILEPATH]</code> clauses and replace the content which is between <code>@startuml</code> and <code>@enduml</code> from that file [FILEPATH] into "all-docs.md".</li>
</ol>
<p>For example:</p>
<p>"all-docs.md" contains in certain part:</p>
<pre class="lang-markdown prettyprint-override"><code>Here is the Profile class diagram:
``plantuml
@startuml
!include ./data-models/profile.puml
Profile o-- UidObject
@enduml
``
</code></pre>
<p>And profile.puml content is:</p>
<pre><code>@startuml
class Profile <UidObject> {
+ string name
+ string email
+ string phone
+ Date birthDate
}
@enduml
</code></pre>
<p>The <strong>result after the script</strong> will be to have in "all-docs.md":</p>
<pre class="lang-markdown prettyprint-override"><code>Here is the Profile class diagram:
``plantuml
@startuml
class Profile <UidObject> {
+ string name
+ string email
+ string phone
+ Date birthDate
}
Profile o-- UidObject
@enduml
``
</code></pre>
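<p>Here is how far I've gotten with a sketch of step 2 in Python (stdlib only; the helper name and regexes are my own, and it surely doesn't cover nested includes):</p>

```python
import re
from pathlib import Path

INCLUDE_RE = re.compile(r'^!include\s+(\S+)\s*$', re.MULTILINE)
BODY_RE = re.compile(r'@startuml\s*\n(.*?)@enduml', re.DOTALL)

def inline_includes(markdown, base):
    """Replace each '!include <path>' line with the content found between
    @startuml and @enduml of the referenced .puml file."""
    def repl(match):
        puml = (base / match.group(1)).read_text()
        body = BODY_RE.search(puml)
        return body.group(1).rstrip() if body else match.group(0)
    return INCLUDE_RE.sub(repl, markdown)

# quick check on a throwaway directory
import tempfile
base = Path(tempfile.mkdtemp())
(base / 'profile.puml').write_text('@startuml\nclass Profile {\n}\n@enduml\n')
doc = inline_includes('@startuml\n!include ./profile.puml\n@enduml\n', base)
```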
<hr />
<p>The repo has the following structure.</p>
<pre><code>/
├── assets/
├── docs/
├── uml/
</code></pre>
<ul>
<li>The <code>assets/</code> directory contains various assets such as images, icons, and other resources.</li>
<li>The <code>docs/</code> directory contains the documents (markdown files)</li>
<li>The <code>uml/</code> directory contains contains <a href="https://plantuml.com/" rel="nofollow noreferrer">PlantUML</a> source files that are used to generate diagrams for the software documentation.</li>
</ul>
|
<python><bash><markdown><plantuml>
|
2023-05-13 12:19:26
| 2
| 858
|
pcba-dev
|
76,242,589
| 12,297,666
|
Echo State Network - AttributeError: 'NoneType' object has no attribute 'lower'
|
<p>I am trying to use the code found <a href="https://github.com/FilippoMB/Time-series-classification-and-clustering-with-Reservoir-Computing" rel="nofollow noreferrer">here</a> in my scripts to use that Echo State Network (ESN) to solve a time series classification problem. First, I define my ESN classifier:</p>
<pre><code>classifier_ESN = RC_model(lots of parameters,
dimred_method=config['dimred_method'],
lots of parameters)
</code></pre>
<p>But when I start the training, I get this error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Muril\PycharmProjects\tf-gpu\venv\lib\site-packages\IPython\core\interactiveshell.py", line 3378, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-10-457472db72b9>", line 30, in <module>
train_time = classifier_ESN.train(x_train, y_train_one_hot)
File "C:\Users\Muril\PycharmProjects\tf-gpu\Irish\ESN\modules.py", line 174, in train
if self.dimred_method.lower() == 'pca':
AttributeError: 'NoneType' object has no attribute 'lower'
</code></pre>
<p>From that, I have traced to error to this part of the code in the <code>modules.py</code> file:</p>
<pre><code># ============ Dimensionality reduction of the reservoir states ============
if self.dimred_method.lower() == 'pca':
# matricize
N_samples = res_states.shape[0]
res_states = res_states.reshape(-1, res_states.shape[2])
# ..transform..
red_states = self._dim_red.fit_transform(res_states)
# ..and put back in tensor form
red_states = red_states.reshape(N_samples,-1,red_states.shape[1])
elif self.dimred_method.lower() == 'tenpca':
red_states = self._dim_red.fit_transform(res_states)
else: # Skip dimensionality reduction
red_states = res_states
</code></pre>
<p>So, <code>dimred_method</code> is a parameter passed to the ESN function <code>RC_model</code>. This parameter comes from a dictionary called <code>config</code>, which has this key/value pair:</p>
<pre><code>config['dimred_method'] = None # options: {None (no dimensionality reduction), 'pca', 'tenpca'}
</code></pre>
<p>As we can see, <code>None</code> is a valid option for <code>dimred_method</code>, which should fall into the <code>else</code> case of the code provided in <code>modules.py</code>, but it throws that error instead. I get that it is because of the <code>lower()</code> call, which is meant for the string options like <code>'pca'</code> or <code>'tenpca'</code>, but I am not sure how to fix it. Any ideas?</p>
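<p>The patch I'm tempted to apply locally is normalising the attribute before calling <code>lower()</code> — shown here on a toy class rather than the real module, since I'm not sure editing <code>modules.py</code> is the intended fix:</p>

```python
class ToyRC:
    """Toy stand-in for RC_model, just to show the None-safe guard."""

    def __init__(self, dimred_method=None):
        self.dimred_method = dimred_method

    def pick_reduction(self):
        # Treat None as the empty string so .lower() is always safe.
        method = (self.dimred_method or '').lower()
        if method == 'pca':
            return 'pca'
        elif method == 'tenpca':
            return 'tenpca'
        return 'no reduction'
```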
|
<python><dictionary>
|
2023-05-13 11:53:48
| 1
| 679
|
Murilo
|
76,242,524
| 3,130,747
|
How to determine the amount of free memory when running a google cloud function?
|
<p>When running a google cloud function the memory size can be set (<a href="https://cloud.google.com/functions/docs/configuring/memory" rel="nofollow noreferrer">https://cloud.google.com/functions/docs/configuring/memory</a>).</p>
<p>If I select <code>512MB</code>, how can I determine how much of that 512MB is free after loading my libraries? For example, if I need to load and process data that's 12MB in size and it takes 500MB to load my libraries, I won't have enough memory; whereas if it takes 250MB to load my libraries, I probably will have enough.</p>
<p>I'm using Python; if there's something that I can output to a log and then inspect in the GCP Logs Explorer, that would be useful.</p>
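<p>What I had in mind was something like the sketch below, assuming the stdlib <code>resource</code> module is available in the (Linux) Cloud Functions runtime — the printed line should then be searchable in the Logs Explorer:</p>

```python
import resource

def log_peak_memory(label):
    """Print peak RSS so far; on Linux, ru_maxrss is reported in KiB
    (on macOS it is bytes, so the scaling assumes Linux)."""
    peak_kib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print(f"{label}: peak RSS so far = {peak_kib / 1024:.1f} MiB")
    return peak_kib

# e.g. call once after imports, and again after loading the data:
peak = log_peak_memory("after imports")
```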
|
<python><google-cloud-platform><google-cloud-functions>
|
2023-05-13 11:38:06
| 1
| 4,944
|
baxx
|
76,242,392
| 10,927,050
|
How to detect if the xlsx file is password protected or not in python?
|
<p>I have a requirement where I need to detect whether an Excel file is password protected or not. If it is password protected, I want to ask the user for the password.
How do I do this in Python?
I looked around on the internet, but none of the answers were useful so far.</p>
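<p>One heuristic I came across (hedged): a plain .xlsx is a ZIP archive, while a password-protected one is wrapped in an OLE/CFB container, so checking the container type detects encryption; a library such as <code>msoffcrypto-tool</code> can then decrypt it once the user supplies the password.</p>

```python
import zipfile

def is_encrypted_xlsx(path):
    # Plain .xlsx files are ZIP archives; password-protected ones are
    # stored in an OLE/CFB container, which is not a valid ZIP.
    return not zipfile.is_zipfile(path)
```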
|
<python><python-3.x><openpyxl><xlsx><password-protection>
|
2023-05-13 11:07:35
| 2
| 668
|
Prajna
|
76,242,364
| 15,322,101
|
Colorize based on condition
|
<p>I have the following code to display a plot. I would like to conditionally format the values based on a threshold of 0.05.</p>
<pre><code>from matplotlib.colors import to_rgba
# Generate x-axis
x = np.linspace(0, len(data_formula), len(data_formula))
colors = np.where(data_formula <= 0.05, "blue", "green")
plt.plot(x, data_formula, c=colors)
# Add labels and title
plt.ylabel('Volume')
plt.xlabel('time')
plt.title('Energy')
# Display the plot
plt.show()
</code></pre>
<p>Unfortunately I do receive the error: <code>array(['blue', 'blue', 'blue', ..., 'blue', 'blue', 'blue'], dtype='<U5') is not a valid value for color</code>, which indicates that I have passed the wrong value to the color parameter. I have tried it with lists, etc. But it doesn't seem to work. What went wrong? Does the argument <code>color</code> just not accept any datastructure or is the format wrong?</p>
<p>For reference: <code>data_formula</code> is defined as follow:</p>
<pre><code>def energy(x):
return (x**2)
data_formula = np.apply_along_axis(energy, axis=0, arr=data_normalized)
</code></pre>
<p>It is of datatype: <code>numpy.ndarray</code>.</p>
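<p>As far as I can tell, <code>plt.plot</code> draws one line with a single colour, so it rejects an array of colours; per-point colours work with <code>scatter</code> (a <code>LineCollection</code> would be the option for coloured line segments). A sketch with stand-in data:</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

data_formula = np.array([0.01, 0.2, 0.04, 0.6, 0.03])  # stand-in values
x = np.linspace(0, len(data_formula), len(data_formula))
colors = np.where(data_formula <= 0.05, "blue", "green")

# scatter accepts one colour per point; plot does not
plt.scatter(x, data_formula, c=colors)
plt.ylabel('Volume')
plt.xlabel('time')
plt.title('Energy')
plt.savefig("energy.png")
```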
|
<python><matplotlib><plot>
|
2023-05-13 10:57:41
| 1
| 347
|
Infomagier
|
76,242,348
| 597,858
|
checking a checkbox has no effect in Tkinter
|
<p>Inside a frame, I have populated checkboxes, but checking them has no effect on <code>checkbox_vars</code>. What could be wrong with this piece of code?</p>
<pre><code>def set_insider_trades():
    def on_mousewheel(event):
        canvas.yview_scroll(int(-1 * (event.delta / 120)), "units")

    set_inside_trading = tk.Tk()
    set_inside_trading.title("Set Insider Trading")
    set_inside_trading.geometry("300x600")
    # Add a Scrollbar to the Canvas
    scrollbar = ttk.Scrollbar(set_inside_trading, orient=tk.VERTICAL)
    scrollbar.pack(side=tk.RIGHT, fill=tk.Y)
    # Create a Canvas
    canvas = tk.Canvas(set_inside_trading, borderwidth=0, yscrollcommand=scrollbar.set)
    canvas.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)
    # Configure the scrollbar to scroll the canvas
    scrollbar.config(command=canvas.yview)
    # Create a Frame within the Canvas
    frame = tk.Frame(canvas)
    # Add the Frame to the Canvas
    canvas.create_window((0, 0), window=frame, anchor=tk.NW)
    # Configure the Canvas Scroll Region
    frame.bind("<Configure>", lambda event: canvas.configure(scrollregion=canvas.bbox("all")))
    # Bind Mousewheel Event to Scroll
    frame.bind_all("<MouseWheel>", on_mousewheel)
    # Create checkboxes for each company
    checkbox_vars = {company: tk.BooleanVar(value=False) for company in df.index}
    for idx, (company, var) in enumerate(checkbox_vars.items()):
        checkbox = tk.Checkbutton(frame, text=company, variable=var)
        checkbox.grid(row=idx, column=0, sticky="w", padx=(10, 0), pady=5)

    def update_companies():
        for company, var in checkbox_vars.items():
            if var.get():
                df.loc[company, 'Insider & SAST Buys Last Quarter'] = 1
            else:
                df.loc[company, 'Insider & SAST Buys Last Quarter'] = 0

    ok_button = tk.Button(set_inside_trading, text="OK", command=lambda: [update_companies(), set_inside_trading.destroy()])
    ok_button.place(relx=0.5, rely=1.0, anchor=tk.S)
</code></pre>
|
<python><tkinter><checkbox>
|
2023-05-13 10:54:03
| 1
| 10,020
|
KawaiKx
|
76,242,318
| 13,921,399
|
Determine transitions and durations across rows in a data frame
|
<p>Consider the following data frame:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from numpy import nan
df = pd.DataFrame(data={
"x1": [3, 3, 3, 3, 3, 3],
"x2": [3, 3, 2, 2, 1, 3],
"x3": [3, 2, 2, 3, 2, nan],
"x4": [3, 2, 1, 2, 3, nan],
"x5": [3, 2, 3, 2, 2, nan],
"x6": [2, 1, nan, 2, 2, nan],
"x7": [2, 2, nan, 2, 2, nan]
})
</code></pre>
<p>Each row represents an id and each column a state for a given month. For the sake of simplicity, you can assume that each id starts in state 3 and can change its state each month, either to 1, 2 or nan (which means the id is deleted).</p>
<p>For each id, I want to determine which ids change their states directly from 3 to 2 and how long they remain in 2.</p>
<p>Expected result:</p>
<pre class="lang-py prettyprint-override"><code>out = pd.Series([2, 3, 2, 1, 0, 0])
</code></pre>
<p>I want to achieve this result in pure pandas and beat my solution in terms of code complexity and time.</p>
<p>My solution so far:</p>
<pre class="lang-py prettyprint-override"><code>import numba
@numba.njit
def _get_duration(l):
    counter = 0
    for i in range(1, len(l)):
        cond = l[i-1] == 3
        # If state remains in 3, just continue
        if cond and l[i] == 3:
            continue
        # If state changes from 3 to 2 set counter
        elif cond and l[i] == 2:
            counter = 1
        # If state remains in 2, increase counter
        elif l[i-1] == 2 and l[i] == 2:
            counter += 1
        else:
            break
    return counter

@numba.njit
def get_stage2_duration(stg):
    N = stg.shape[0]
    return [_get_duration(stg[i]) for i in range(N)]
</code></pre>
<p>Yields the following result:</p>
<pre class="lang-py prettyprint-override"><code>get_stage2_duration(df.values) # [2, 3, 2, 1, 0, 0]
</code></pre>
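<p>For reference, a straightforward row-wise pandas baseline (probably slower than the numba version, so it only serves as a readable starting point) that reproduces the expected output:</p>

```python
import pandas as pd
from numpy import nan

df = pd.DataFrame(data={
    "x1": [3, 3, 3, 3, 3, 3],
    "x2": [3, 3, 2, 2, 1, 3],
    "x3": [3, 2, 2, 3, 2, nan],
    "x4": [3, 2, 1, 2, 3, nan],
    "x5": [3, 2, 3, 2, 2, nan],
    "x6": [2, 1, nan, 2, 2, nan],
    "x7": [2, 2, nan, 2, 2, nan],
})

def duration(row):
    vals = row.to_numpy()
    i = 0
    # skip the leading run of 3s
    while i < len(vals) and vals[i] == 3:
        i += 1
    # count consecutive 2s immediately after the 3 -> 2 transition
    n = 0
    while i < len(vals) and vals[i] == 2:
        i, n = i + 1, n + 1
    return n

out = df.apply(duration, axis=1)
print(out.tolist())  # -> [2, 3, 2, 1, 0, 0]
```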
|
<python><pandas>
|
2023-05-13 10:47:13
| 2
| 1,811
|
ko3
|
76,242,081
| 10,651,655
|
sqlalchemy + async: Problem with inserting data
|
<p>I am trying to write to the DB in async mode, but for some reason the values are not stored. However, the tables are created.</p>
<p>My DB</p>
<pre><code>async def create_tables(self):
    async with self.async_engine.begin() as conn:
        await conn.run_sync(self.models.Base.metadata.drop_all)
        await conn.run_sync(self.models.Base.metadata.create_all)

async def connect(self):
    conn_str: str = self.__create_connection_str()
    # ASYNC
    self.async_engine = create_async_engine(conn_str)
    self.session_factory = sessionmaker(
        self.async_engine, class_=AsyncSession, expire_on_commit=False, autocommit=False, autoflush=False
    )

def create_async_session(self):
    session = self.session_factory()
    try:
        yield session
    finally:
        session.close()
</code></pre>
<p>create_async_session returns:</p>
<blockquote>
<p><generator object DatabaseHandler.create_async_session at 0x000001C148797840></p>
</blockquote>
<p>My insert function:</p>
<pre><code>async def create_artist(session: AsyncSession, artist: schemas.Artist):
    try:
        # map values
        db_artist = models.Artist(
            artist_id=artist.id,
            name=artist.name,
            href=artist.href,
            genres=artist.genres,
            popularity=artist.popularity,
            type=artist.type,
            uri=artist.uri,
            external_urls=artist.external_urls["spotify"],
            followers=artist.followers["total"]
        )
        print(db_artist)  # -> returns values ok
        print(session)  # -> returns value
        await session.add(db_artist)
        await session.commit()
        await session.refresh(db_artist)
        return artist
    except Exception as err:
        err
<p>My main func:</p>
<pre><code>
async def collect_artists_in_background():
    await db_client.connect()
    await db_client.create_tables()  # -> works
    session = db_client.create_async_session()  # -> works
    for _id in list_of_artists_ids:
        artist = await crud.get_artist_by_id(_id)  # -> works
        await CRUD.create_artist(session, schemas.Artist.parse_obj(artist))  # -> doesn't work
</code></pre>
<p>I think the problem is in create_async_session, but I can't figure out where.</p>
<p>I have tried to create a fixed session, not yield</p>
<pre><code>async with db_client.session_factory() as session:
    for _id in list_of_artists_ids:
        artist = await crud.get_artist_by_id(_id)
        await CRUD.create_artist(session, schemas.Artist.parse_obj(artist))
</code></pre>
<p>But it also doesn't work</p>
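<p>Two things I now suspect (the sketch below is a stdlib-only model, not SQLAlchemy itself): calling a generator function returns a generator rather than a session, and the bare <code>except Exception as err: err</code> swallows the resulting <code>AttributeError</code>. In addition, SQLAlchemy's <code>session.add()</code> is synchronous, so it should not be awaited — only <code>commit()</code> and <code>refresh()</code> are coroutines.</p>

```python
def create_async_session():
    session = object()   # stand-in for self.session_factory()
    try:
        yield session
    finally:
        pass

handle = create_async_session()
print(type(handle).__name__)       # -> 'generator', not a session object

# Any session method called on the generator raises AttributeError,
# which `except Exception as err: err` then silently discards.
try:
    handle.add("row")
except AttributeError as exc:
    print("swallowed error:", exc)
```

With the <code>async with db_client.session_factory() as session:</code> variant, dropping the try/except (or at least logging the exception) should surface the real error.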
|
<python><asynchronous><sqlalchemy>
|
2023-05-13 09:49:14
| 1
| 974
|
Anna
|
76,241,884
| 13,234,892
|
How to get value from tuple except known one?
|
<p>Suppose we have a tuple with two elements:</p>
<pre><code>a = (1, N)
</code></pre>
<p>or</p>
<pre><code>a = (N, 1)
</code></pre>
<p>I know that one of the elements is 1; its position is unknown, and I need to get the value of the other element.</p>
<p>Is there a way in Python to get it without iterating the tuple?</p>
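<p>Two iteration-free sketches, relying on exactly one element being 1 (and, for the arithmetic one, on the other element being numeric):</p>

```python
a = (1, 42)
b = (42, 1)

# bool indexes as 0/1: a[True] is a[1], so this picks the other element
print(a[a[0] == 1])     # -> 42
print(b[b[0] == 1])     # -> 42

# if the unknown element is numeric, arithmetic also avoids iteration
print(a[0] + a[1] - 1)  # -> 42
```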
|
<python><tuples>
|
2023-05-13 08:57:11
| 3
| 466
|
Andrey Ivanov
|
76,241,511
| 1,581,090
|
How to select element with selenium in python and get the text?
|
<p>In python I am trying to select this element</p>
<pre><code><button stid="FLIGHTS_DETAILS_AND_FARES-index-1-LDUWT-FlightsActionButton" data-test-id="select-link" data-stid="FLIGHTS_DETAILS_AND_FARES-index-1-LDUWT-FlightsActionButton" class="uitk-card-link" type="button"><span class="is-visually-hidden">Select and show fare information for Etihad Airways flight, departing at 10:50am from Geneva, arriving at 1:05pm in Tokyo, Priced at $2,224 Roundtrip per traveler. Arrives 1 day later. 19 hours 15 minutes total travel time, One stop, Layover for 3 hours 10 minutes in Abu Dhabi.</span></button>
</code></pre>
<p>and to extract the text. I tried to use the following expression</p>
<pre><code>element = browser.find_element(By.XPATH, "//button[@stid='FLIGHTS_DETAILS_AND_FARES-index-1-LDUWT-FlightsActionButton']")
print(element.text)
</code></pre>
<p>I also tried</p>
<pre><code>element = browser.find_element(By.XPATH, '//button[contains(text(), "Select and show fare information")]')
print(element.text)
</code></pre>
<p>which results in an "Unable to locate element" error.</p>
<p>Also the <code>stid</code> element might change between calls to the webpage.</p>
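<p>A quick check with the standard library suggests why the text-based XPath fails: the text node belongs to the inner <code>span</code>, not to the <code>button</code> itself (the snippet is parsed as XML here only to illustrate; the trimmed markup is copied from the question):</p>

```python
import xml.etree.ElementTree as ET

# A trimmed copy of the button markup from the page
html = ('<button stid="FLIGHTS_DETAILS_AND_FARES-index-1-LDUWT-FlightsActionButton" '
        'class="uitk-card-link" type="button">'
        '<span class="is-visually-hidden">Select and show fare information '
        'for Etihad Airways flight</span></button>')

btn = ET.fromstring(html)
print(btn.text)                # -> None: the button has no direct text node
print(btn.find("span").text)   # the text belongs to the child span
```

So an XPath like <code>//button[.//span[contains(text(), "Select and show fare information")]]</code> — ideally inside a <code>WebDriverWait</code>, since the <code>stid</code> changes — should be closer; I have not tested it against the live page.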
|
<python><selenium-webdriver>
|
2023-05-13 07:13:50
| 1
| 45,023
|
Alex
|
76,241,505
| 874,380
|
How to organize .ipynb and .py files into subfolders and allow imports?
|
<p>I want to create a repository to organize data, script files (py), and notebook files (ipynb). So the top level folder structure could look like this</p>
<pre><code>project
- data
- notebooks
- src
</code></pre>
<p>Since I have many notebooks, I would like to organize them further into subfolders; similar for <code>src</code> and <code>data</code>.</p>
<p>From what I understand, by default, a notebook thinks the root folder is the folder it's in. Thus it can only import anything located in that folder; it can't see <code>data</code>, <code>src</code>, or any of their subfolders. What is the best practice to address this?</p>
<p>My current idea is to use an environment variable that holds the path to <code>project</code> and set this value as the root path in each notebook. For example, let's say the path is <code>/home/alice/project</code> which is stored in the environment variable <code>MY_PROJECT_PATH</code>. Then, in a notebook, I can do something like</p>
<pre><code>import os
PATH = os.getenv('MY_PROJECT_PATH')
os.chdir(PATH)
from src.dummy import *
</code></pre>
<p>While this works just fine, I wonder if there are any downsides to this – apart from adding this code snippet to each notebook and ensuring the environment variable is indeed visible – or if there is a better or more convenient way to accomplish this.</p>
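<p>A variant I'm weighing (a sketch under the assumption the notebook can locate the project root, e.g. via <code>parents[1]</code>): extend <code>sys.path</code> instead of calling <code>os.chdir</code>, so relative paths elsewhere in the notebook keep their meaning:</p>

```python
import sys
from pathlib import Path

# Resolve the project root relative to the notebook instead of chdir-ing;
# the parents[...] depth depends on how deep the notebook sits.
PROJECT_ROOT = Path.cwd().resolve()   # e.g. Path.cwd().resolve().parents[1]
if str(PROJECT_ROOT) not in sys.path:
    sys.path.insert(0, str(PROJECT_ROOT))
# `from src.dummy import *` now resolves without changing the cwd.
```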
|
<python><jupyter-notebook>
|
2023-05-13 07:11:34
| 1
| 3,423
|
Christian
|
76,241,489
| 19,186,611
|
Read properties from common config in frontend and backend
|
<p>I have a Django server in the backend and React server in the frontend.</p>
<p>For example, I have this function in the backend:</p>
<pre class="lang-py prettyprint-override"><code>def create(request):
    fname = request.data['first_name']
    lname = request.data['last_name']
</code></pre>
<p>and this in my frontend:</p>
<pre class="lang-js prettyprint-override"><code>function() {
  axios.post({first_name: x, last_name: y}, url, headers)
}
</code></pre>
<p>I need to read <code>first_name</code> from a common config that is shared with the frontend and backend.
In other words, if I want to change <code>first_name</code> to <code>firstname</code>, I would only need to change the common config (not individual frontend or backend code)</p>
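<p>A sketch of what I mean by a common config (the file name and layout are made up for illustration): a JSON file checked into the repo, imported directly by the React build and loaded by Django:</p>

```python
import json

# Contents of a shared file, e.g. common/fields.json, which the React
# side could import with: import fields from '../common/fields.json'
FIELDS_JSON = '{"first_name": "first_name", "last_name": "last_name"}'
FIELDS = json.loads(FIELDS_JSON)

def create(request_data):
    # request_data stands in for request.data
    fname = request_data[FIELDS["first_name"]]
    lname = request_data[FIELDS["last_name"]]
    return fname, lname

print(create({"first_name": "x", "last_name": "y"}))  # -> ('x', 'y')
```

Renaming a field then only requires editing the JSON file (plus a migration of any stored data), not the Python or JS code.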
|
<javascript><python><reactjs><django><config>
|
2023-05-13 07:08:05
| 1
| 433
|
lornejad
|
76,241,446
| 11,082,866
|
Spread the list of multiple dictionaries into multiple columns in pandas
|
<p>I have a dataframe where one column looks like this:</p>
<pre><code>df['handling_unit'] = [{'id': 87, 'handling_unit': 'tyu', 'quantity'...
1 []
2 []
3 [{'id': 88, 'handling_unit': 'tyu', 'quantity'...
4 [{'id': 141, 'handling_unit': 'kakjfkls', 'qua...
5 [{'id': 146, 'handling_unit': 'tyu', 'quantity...
6 [{'id': 148, 'handling_unit': 'ff', 'quantity'...
7 [{'id': 155, 'handling_unit': 'kakjfkls', 'qua...
8 [{'id': 156, 'handling_unit': 'hellow', 'quant...
9 [{'id': 159, 'handling_unit': 'time pass', 'qu...
10 [{'id': 162, 'handling_unit': 'tyu', 'quantity...
11 [{'id': 195, 'handling_unit': 'undefined', 'qu...
12 []
</code></pre>
<p>There can be multiple dictionaries in the list, and the list can also be empty.
How do I expand all of them into multiple columns such as 'handling_unit1', 'handling_unit1_quantity', 'handling_unit2', 'handling_unit2_quantity', ...?</p>
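<p>A sketch with made-up sample rows: <code>explode</code> to one dict per row, <code>json_normalize</code> to columns, then number the dicts per original row and pivot them wide:</p>

```python
import pandas as pd

df = pd.DataFrame({"handling_unit": [
    [{"id": 87, "handling_unit": "tyu", "quantity": 2}],
    [],
    [{"id": 88, "handling_unit": "tyu", "quantity": 1},
     {"id": 141, "handling_unit": "kakjfkls", "quantity": 5}],
]})

# One dict per row; empty lists become NaN and are dropped
exploded = df["handling_unit"].explode().dropna()
wide = pd.json_normalize(exploded.tolist())
wide.index = exploded.index

# Number the dicts within each original row, then pivot them to columns
wide["n"] = wide.groupby(level=0).cumcount() + 1
out = wide.pivot(columns="n")
out.columns = [f"{col}{n}" for col, n in out.columns]
result = df.join(out)
```

Rows whose list was empty end up with NaN in every expanded column.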
|
<python><pandas><dataframe>
|
2023-05-13 06:52:56
| 1
| 2,506
|
Rahul Sharma
|
76,241,416
| 11,973,820
|
Adding (and) logic for a list of columns and generating feature
|
<p>I have a requirement to create a feature by checking the fields of two different dataframes.
df1 is a dataframe created from a MySQL database and will be static; the values of the column <code>required</code> in df1 can change between True and False.</p>
<p>below are more details.</p>
<pre><code>df1=
flag beta required
alpha A TRUE
alpha B TRUE
alpha C FALSE
alpha D TRUE
</code></pre>
<pre><code>df2=
name A B C D E F
roy 1 1 0 1 0 0
john 0 1 1 1 0 0
sam 1 1 1 1 1 1
</code></pre>
<p>I am trying to create the feature alpha_flag using the column names from "beta" in df1 where the required column is TRUE.</p>
<p>So the code for alpha_flag will be like below</p>
<pre><code>df2['alpha_flag'] = np.where(df2['A']==1 and df2['B']==1 and df2['D']==1 , 1 , 0)
</code></pre>
<p>The difficulty I am facing is choosing only the columns marked as required in df1 and dynamically updating my 'alpha_flag' creation condition.</p>
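<p>A sketch of the dynamic version (assuming <code>required</code> is boolean; if it holds the strings 'TRUE'/'FALSE', compare with <code>df1['required'].eq('TRUE')</code> instead). Note that chained <code>and</code> does not work element-wise on Series; <code>.all(axis=1)</code> replaces it:</p>

```python
import pandas as pd

df1 = pd.DataFrame({
    "flag": ["alpha"] * 4,
    "beta": list("ABCD"),
    "required": [True, True, False, True],
})
df2 = pd.DataFrame({
    "name": ["roy", "john", "sam"],
    "A": [1, 0, 1], "B": [1, 1, 1], "C": [0, 1, 1],
    "D": [1, 1, 1], "E": [0, 0, 1], "F": [0, 0, 1],
})

# Columns flagged as required in df1 drive the condition
required_cols = df1.loc[df1["required"], "beta"].tolist()  # ['A', 'B', 'D']
df2["alpha_flag"] = df2[required_cols].eq(1).all(axis=1).astype(int)
print(df2["alpha_flag"].tolist())  # -> [1, 0, 1]
```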
|
<python><pandas><dataframe>
|
2023-05-13 06:40:33
| 1
| 859
|
Jai
|
76,241,341
| 539,023
|
Write exception to log when django is unable to run properly
|
<p>Django Gurus,
I have the following error messages on my console and I have the following logging configuration.</p>
<pre><code>LOGGING = {
"version": 1,
"disable_existing_loggers": False,
"handlers": {
"console": {
"class": "logging.StreamHandler",
},
"file": {
'class':'logging.FileHandler',
"filename": str(BASE_DIR) + "/" + "logs"+ "/" + "app.log",
"level":"DEBUG",
},
},
"loggers": {
"django.utils.autoreload": {
"level": "CRITICAL"
},
"django": {
"handlers": ["file"],
"level": "DEBUG",
"propagate": True,
},
}
}
</code></pre>
<p>Stacktrace from my console:</p>
<pre><code>System check identified no issues (0 silenced).
Exception in thread django-main-thread:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
psycopg2.errors.InsufficientPrivilege: permission denied for table django_migrations
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/Cellar/python@3.8/3.8.11/Frameworks/Python.framework/Versions/3.8/lib/python3.8/threading.py", line 932, in _bootstrap_inner
self.run()
File "/usr/local/Cellar/python@3.8/3.8.11/Frameworks/Python.framework/Versions/3.8/lib/python3.8/threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.8/site-packages/django/utils/autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/core/management/commands/runserver.py", line 121, in inner_run
self.check_migrations()
File "/usr/local/lib/python3.8/site-packages/django/core/management/base.py", line 486, in check_migrations
executor = MigrationExecutor(connections[DEFAULT_DB_ALIAS])
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/executor.py", line 18, in __init__
self.loader = MigrationLoader(self.connection)
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/loader.py", line 53, in __init__
self.build_graph()
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/loader.py", line 220, in build_graph
self.applied_migrations = recorder.applied_migrations()
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/recorder.py", line 78, in applied_migrations
return {(migration.app, migration.name): migration for migration in self.migration_qs}
File "/usr/local/lib/python3.8/site-packages/django/db/models/query.py", line 280, in __iter__
self._fetch_all()
File "/usr/local/lib/python3.8/site-packages/django/db/models/query.py", line 1324, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "/usr/local/lib/python3.8/site-packages/django/db/models/query.py", line 51, in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
File "/usr/local/lib/python3.8/site-packages/django/db/models/sql/compiler.py", line 1169, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 98, in execute
return super().execute(sql, params)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 66, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 75, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.8/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
django.db.utils.ProgrammingError: permission denied for table django_migrations
</code></pre>
<p>From the above stack trace it's obvious what the error is, but I am not able to dump it to my log file.</p>
<p>I want the full stack trace written in my log. When I open app.log I am only able to see the following information:</p>
<pre><code>(0.004)
SELECT c.relname,
CASE WHEN c.relispartition THEN 'p' WHEN c.relkind IN ('m', 'v') THEN 'v' ELSE 't' END
FROM pg_catalog.pg_class c
LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind IN ('f', 'm', 'p', 'r', 'v')
AND n.nspname NOT IN ('pg_catalog', 'pg_toast')
AND pg_catalog.pg_table_is_visible(c.oid)
; args=None
(0.001) SELECT "django_migrations"."id", "django_migrations"."app", "django_migrations"."name", "django_migrations"."applied" FROM "django_migrations"; args=()
</code></pre>
<p>I have a middleware that can handle exceptions, but Django stops after the above exception and I believe it never reaches the middleware.</p>
<p>Any help would be much appreciated.</p>
|
<python><django><exception>
|
2023-05-13 06:10:48
| 1
| 20,230
|
kta
|
76,241,290
| 4,098,013
|
Crosstab across 4 columns and multi-index output
|
<p>Here is my data -</p>
<pre><code>import pandas as pd
a = [[1,0,1,1], [1,1,0,0], [1,1,1,1], [1,1,1,1], [0,0,1,0], [0,1,0,0], [0,0,0,0], [1,0,0,1], [1,0,0,1], [0,1,0,1]]
df = pd.DataFrame(a, columns=['A','B','C','D'])
A B C D
0 1 0 1 1
1 1 1 0 0
2 1 1 1 1
3 1 1 1 1
4 0 0 1 0
5 0 1 0 0
6 0 0 0 0
7 1 0 0 1
8 1 0 0 1
9 0 1 0 1
</code></pre>
<p>The desired output is a cross tab of counts between different combinations of the 4 columns and two values -</p>
<pre><code>iterables = [["A", "B", "C", "D"], [1, 0]]
index = pd.MultiIndex.from_product(iterables)
op = [[0,0,3,3,3,3,0,0], [0,0,2,2,1,3,0,0], [0,0,0,0,0,0,0,0], [0,0,0,0,0,0,3,1],
[0,0,0,0,0,0,3,3], [0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0], [0,0,0,0,0,0,0,0]]
print(pd.DataFrame(op, index=index, columns=index))
A B C D
1 0 1 0 1 0 1 0
A 1 0 0 3 3 3 3 0 0
0 0 0 2 2 1 3 0 0
B 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 3 1
C 1 0 0 0 0 0 0 3 3
0 0 0 0 0 0 0 0 0
D 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0
</code></pre>
<p>I have tried <code>pd.crosstab()</code>, but it only seems to support two columns. I also tried pivot tables without luck. Please help.</p>
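<p>A sketch of one direction (hedged): build one 0/1 indicator column per (column, value) pair and take a matrix product; this yields the complete symmetric table of pairwise counts, from which the blocks kept in the example output can be masked. Note the full counts differ from the hand-built <code>op</code> in a few cells, since this computes every pair.</p>

```python
import pandas as pd

a = [[1,0,1,1], [1,1,0,0], [1,1,1,1], [1,1,1,1], [0,0,1,0],
     [0,1,0,0], [0,0,0,0], [1,0,0,1], [1,0,0,1], [0,1,0,1]]
df = pd.DataFrame(a, columns=['A','B','C','D'])

# One indicator column per (column, value) combination, ordered 1 then 0
ind = pd.concat(
    {c: pd.get_dummies(df[c]).reindex(columns=[1, 0], fill_value=0)
     for c in df.columns}, axis=1).astype(int)

# Co-occurrence counts: entry ((X, i), (Y, j)) = #rows with X == i and Y == j
co = ind.T @ ind
print(co.loc[('A', 1), ('C', 1)])  # -> 3
```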
|
<python><pandas>
|
2023-05-13 05:52:07
| 2
| 9,101
|
Vivek Kalyanarangan
|
76,241,269
| 20,285,962
|
Installation issues for tensorflow-decision-forests on Linux
|
<p>I am getting the following error when trying to install <a href="https://www.tensorflow.org/decision_forests" rel="nofollow noreferrer">tensorflow-decision-forests</a> on Fedora 38 with <code>pip install tensorflow-decision-forests</code>:</p>
<pre><code>Defaulting to user installation because normal site-packages is not writeable
ERROR: Could not find a version that satisfies the requirement tensorflow-decision-forests (from versions: none)
ERROR: No matching distribution found for tensorflow-decision-forests
</code></pre>
<p>So I tried following <a href="https://github.com/tensorflow/decision-forests/blob/main/documentation/installation.md" rel="nofollow noreferrer">installation-build</a></p>
<p>Doing:</p>
<pre><code>git clone https://github.com/tensorflow/decision-forests.git
cd decision-forests/tools
</code></pre>
<p>then,</p>
<pre><code>./test_bazel.sh
</code></pre>
<p>I get the error:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement tensorflow== (from
versions: 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.13.0rc0) ERROR: No matching distribution
found for tensorflow==
</code></pre>
<p>So then I tried with docker since I have it installed:</p>
<pre><code>./tools/start_compile_docker.sh
</code></pre>
<p>inside docker:</p>
<pre><code>./tools/test_bazel.sh
</code></pre>
<p>I get the following error:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement tensorflow== (from
versions: 2.2.0, 2.2.1, 2.2.2, 2.2.3, 2.3.0, 2.3.1, 2.3.2, 2.3.3, 2.3.4, 2.4.0, 2.4.1,
2.4.2, 2.4.3, 2.4.4, 2.5.0, 2.5.1, 2.5.2, 2.5.3, 2.6.0rc0, 2.6.0rc1, 2.6.0rc2, 2.6.0,
2.6.1, 2.6.2, 2.6.3, 2.6.4, 2.6.5, 2.7.0rc0, 2.7.0rc1, 2.7.0, 2.7.1, 2.7.2, 2.7.3, 2.7.4,
2.8.0rc0, 2.8.0rc1, 2.8.0, 2.8.1, 2.8.2, 2.8.3, 2.8.4, 2.9.0rc0, 2.9.0rc1, 2.9.0rc2,
2.9.0, 2.9.1, 2.9.2, 2.9.3, 2.10.0rc0, 2.10.0rc1, 2.10.0rc2, 2.10.0rc3, 2.10.0, 2.10.1,
2.11.0rc0, 2.11.0rc1, 2.11.0rc2, 2.11.0, 2.11.1, 2.12.0rc0, 2.12.0rc1, 2.12.0, 2.13.0rc0)
ERROR: No matching distribution found for tensorflow== tf-docker
</code></pre>
<p>Any help?</p>
|
<python><linux><docker><tensorflow><machine-learning>
|
2023-05-13 05:44:45
| 1
| 319
|
Shane Gervais
|
76,241,137
| 5,672,673
|
mixup for gray image input and color image output
|
<p>Mixup is a technique to smoothen the training of a neural network by applying convex combination to training inputs:</p>
<p><a href="https://keras.io/examples/vision/mixup/" rel="nofollow noreferrer">https://keras.io/examples/vision/mixup/</a></p>
<p>In the example, the input is gray image and the output is a label. For example, if we create a new image = 0.3 * a shoe + 0.7 * a sweater then the label for that new image would be label = 0.3 * shoe + 0.7 * sweater. Lambda = 0.3.</p>
<p>However, I am thinking of applying this technique to a dataset that input gray image and output color image. I would like to ask whether the application of lambda is still technically the same for the gray image (1 channel) and the color image (3 channels):</p>
<pre><code>def mix_up(ds_one, ds_two, alpha=0.2):
    # Unpack two datasets
    images_one, outputs_one = ds_one
    images_two, outputs_two = ds_two
    batch_size = tf.shape(images_one)[0]

    # Sample lambda and reshape it to do the mixup
    # (note: "lambda" is a reserved word in Python, so use another name)
    lam = sample_beta_distribution(batch_size, alpha, alpha)
    x_l = tf.reshape(lam, (batch_size, 1, 1, 1))
    y_l = tf.reshape(lam, (batch_size, 1, 1, 1))

    # Perform mixup on both images and labels by combining a pair of images/labels
    # (one from each dataset) into one image/label
    images = images_one * x_l + images_two * (1 - x_l)
    outputs = outputs_one * y_l + outputs_two * (1 - y_l)
    return (images, outputs)
</code></pre>
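<p>From a quick numpy check (stand-in arrays, not the keras pipeline), a <code>(batch, 1, 1, 1)</code> lambda broadcasts over both 1-channel and 3-channel tensors, so the same lambda can mix gray inputs and color outputs:</p>

```python
import numpy as np

batch, h, w = 4, 8, 8
gray_one = np.random.rand(batch, h, w, 1)   # gray inputs
gray_two = np.random.rand(batch, h, w, 1)
rgb_one = np.random.rand(batch, h, w, 3)    # color outputs
rgb_two = np.random.rand(batch, h, w, 3)

lam = np.random.beta(0.2, 0.2, size=(batch, 1, 1, 1))

# The (batch, 1, 1, 1) lambda broadcasts over the channel axis either way,
# so 1-channel inputs and 3-channel outputs use the very same lambda.
images = gray_one * lam + gray_two * (1 - lam)
outputs = rgb_one * lam + rgb_two * (1 - lam)
print(images.shape, outputs.shape)  # -> (4, 8, 8, 1) (4, 8, 8, 3)
```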
|
<python><tensorflow><keras><image-processing><deep-learning>
|
2023-05-13 04:58:16
| 1
| 1,177
|
Linh Chi Nguyen
|
76,241,028
| 9,539,058
|
Interpolating values in xarray using non-indexed coordinates
|
<p>I'm trying to fetch time series from geographical coordinates (single points) from <a href="https://cloud.google.com/storage/docs/public-datasets/era5" rel="nofollow noreferrer">Google ERA5 Reanalysis data</a>. The dataset is following:</p>
<pre><code>import xarray
data = xarray.open_zarr(
'gs://gcp-public-data-arco-era5/co/single-level-reanalysis.zarr/',
chunks={'time': 48},
consolidated=True,
)
print("Model wind dataset size {:.1f} TiB".format(data.nbytes/(1024**4)))
print(data)
Model wind dataset size 28.0 TiB
<xarray.Dataset>
Dimensions: (time: 374016, values: 542080)
Coordinates:
depthBelowLandLayer float64 ...
entireAtmosphere float64 ...
latitude (values) float64 dask.array<chunksize=(542080,), meta=np.ndarray>
longitude (values) float64 dask.array<chunksize=(542080,), meta=np.ndarray>
number int64 ...
step timedelta64[ns] ...
surface float64 ...
* time (time) datetime64[ns] 1979-01-01 ... 2021-08-31T23:0...
valid_time (time) datetime64[ns] dask.array<chunksize=(48,), meta=np.ndarray>
Dimensions without coordinates: values
Data variables: (12/38)
cape (time, values) float32 dask.array<chunksize=(48, 542080), meta=np.ndarray>
d2m (time, values) float32 dask.array<chunksize=(48, 542080), meta=np.ndarray>
hcc (time, values) float32 dask.array<chunksize=(48, 542080), meta=np.ndarray>
istl1 (time, values) float32 dask.array<chunksize=(48, 542080), meta=np.ndarray>
istl2 (time, values) float32 dask.array<chunksize=(48, 542080), meta=np.ndarray>
istl3 (time, values) float32 dask.array<chunksize=(48, 542080), meta=np.ndarray>
... ...
tsn (time, values) float32 dask.array<chunksize=(48, 542080), meta=np.ndarray>
u10 (time, values) float32 dask.array<chunksize=(48, 542080), meta=np.ndarray>
u100 (time, values) float32 dask.array<chunksize=(48, 542080), meta=np.ndarray>
v10 (time, values) float32 dask.array<chunksize=(48, 542080), meta=np.ndarray>
v100 (time, values) float32 dask.array<chunksize=(48, 542080), meta=np.ndarray>
z (time, values) float32 dask.array<chunksize=(48, 542080), meta=np.ndarray>
Attributes:
Conventions: CF-1.7
GRIB_centre: ecmf
GRIB_centreDescription: European Centre for Medium-Range Weather Forec...
GRIB_edition: 1
GRIB_subCentre: 0
history: 2022-09-23T18:56 GRIB to CDM+CF via cfgrib-0.9...
institution: European Centre for Medium-Range Weather Forec...
pangeo-forge:inputs_hash: 5f4378143e9f42402424280b63472752da3aa79179b53b...
pangeo-forge:recipe_hash: 0c3415923e347ce9dac9dc5c6d209525f4d45d799bd25b...
pangeo-forge:version: 0.9.1
</code></pre>
<p>What is the best way to interpolate a time series from single geographical point?</p>
<p>The methods like <code>.sel</code> and <code>interp</code> don't work:</p>
<pre><code>data['cape'].interp(dict(latitude=60, longitude=20))
ValueError: Dimensions {'longitude', 'latitude'} do not exist. Expected one or more of Frozen({'values': 542080, 'time': 374016})
</code></pre>
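<p>The workaround I'm exploring, sketched with plain numpy stand-ins for the dask-backed coordinates: since latitude/longitude are coordinates on the unindexed <code>values</code> dimension, pick the nearest grid point manually with <code>argmin</code> and then <code>isel(values=idx)</code> (squared degree distance is a simplification; great-circle distance would be more appropriate near the poles):</p>

```python
import numpy as np

# Stand-ins for data.latitude.values / data.longitude.values
latitude = np.linspace(50.0, 70.0, 100)
longitude = np.linspace(0.0, 40.0, 100)
cape = np.random.rand(6, 100)  # (time, values)

# Nearest grid point to (60N, 20E)
idx = int(np.argmin((latitude - 60.0) ** 2 + (longitude - 20.0) ** 2))
series = cape[:, idx]          # equivalent to data['cape'].isel(values=idx)
print(idx, series.shape)
```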
|
<python><python-xarray><zarr><era5>
|
2023-05-13 04:10:42
| 1
| 888
|
Pörripeikko
|
76,240,677
| 9,937,874
|
Plotly express Choropleth highlight specific states
|
<p>I have generated a choropleth using plotly express. I am plotting the population of each state and I would like to highlight the state with the largest population and the smallest population. Is there a way to change the border color for a specific state?</p>
<p>current code:</p>
<pre><code>df = pd.read_csv("state_pop.csv")
fig = px.choropleth(df,
                    locations="state",
                    scope="usa",
                    color="population",
                    locationmode="USA-states",
                    color_continuous_scale="blues",
                    )
fig.show()
</code></pre>
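<p>A hedged sketch: find the two states with pandas (runnable below), then overlay a second, transparent choropleth trace whose only visible part is a thick border — the <code>go.Choropleth</code> overlay is untested and its parameters are my assumption:</p>

```python
import pandas as pd

# Stand-in for state_pop.csv
df = pd.DataFrame({"state": ["CA", "WY", "TX"],
                   "population": [39_000_000, 580_000, 29_000_000]})
largest = df.loc[df["population"].idxmax(), "state"]
smallest = df.loc[df["population"].idxmin(), "state"]
print(largest, smallest)  # -> CA WY

# Then overlay a transparent second trace with a thick border for just
# those two states (untested plotly sketch):
# import plotly.graph_objects as go
# fig.add_trace(go.Choropleth(
#     locations=[largest, smallest], z=[0, 0], locationmode="USA-states",
#     colorscale=[[0, "rgba(0,0,0,0)"], [1, "rgba(0,0,0,0)"]],
#     marker_line_color="red", marker_line_width=3, showscale=False))
```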
|
<python><plotly>
|
2023-05-13 01:09:19
| 1
| 644
|
magladde
|
76,240,535
| 5,019,169
|
How to control a global counter in behave?
|
<p>I am trying to use environment file functions in <code>behave</code> to maintain a counter variable. My implementation looks like this:</p>
<pre><code>def before_all(context):
    context.counter = -1

def before_scenario(context, scenario):
    print(context.counter)

def after_scenario(context, scenario):
    context.counter += 1
</code></pre>
<p>I have a <code>.feature</code> file with multiple <code>scenario</code> blocks, but I always get the same value for each <code>scenario</code>. How can I solve it?</p>
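<p>A minimal stdlib model of what I suspect is happening (this mimics behave's layered context; it is not behave itself): attribute assignment inside a scenario lands in a scenario-level frame that is discarded when the scenario ends, so <code>context.counter += 1</code> in <code>after_scenario</code> is lost, while mutating an object created in <code>before_all</code> survives:</p>

```python
class Context:
    """Minimal model of behave's layered context (not behave itself)."""
    def __init__(self):
        object.__setattr__(self, "_stack", [{}])

    def push(self):
        self._stack.append({})

    def pop(self):
        self._stack.pop()

    def __setattr__(self, name, value):
        self._stack[-1][name] = value   # assignment goes to the *top* frame

    def __getattr__(self, name):
        for frame in reversed(self._stack):
            if name in frame:
                return frame[name]
        raise AttributeError(name)


ctx = Context()
ctx.counter = -1      # before_all: stored in the base frame
ctx.push()            # scenario starts
ctx.counter += 1      # reads -1 from base, writes 0 into the scenario frame
ctx.pop()             # scenario ends: the scenario frame (and the 0) is discarded
print(ctx.counter)    # -> -1, which is why every scenario prints the same value

ctx.holder = {"n": -1}            # fix: keep a mutable object in the base frame
ctx.push(); ctx.holder["n"] += 1; ctx.pop()
print(ctx.holder["n"])            # -> 0: in-place mutation survives
```

If that model is right, the fix in <code>environment.py</code> is to create a mutable holder in <code>before_all</code> (e.g. <code>context.state = {'counter': -1}</code>) and mutate it in <code>after_scenario</code> instead of rebinding <code>context.counter</code>.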
|
<python><python-3.x><bdd><gherkin><python-behave>
|
2023-05-13 00:05:04
| 2
| 11,224
|
Ahasanul Haque
|
76,240,500
| 8,234,237
|
Copying file with CMake to build folder - single solution for VS Code and Visual Studio
|
<p>I have a small project (minimal example) with 3 files:</p>
<pre><code>foo.cpp
script.py
CMakeLists.txt
</code></pre>
<p>CMakeLists.txt:</p>
<pre><code>cmake_minimum_required( VERSION 3.20 )
project( PythonCppExtension )
find_package( Python COMPONENTS Interpreter Development )
add_executable(PyRun "foo.cpp")
target_include_directories(PyRun PUBLIC ${Python_INCLUDE_DIRS})
target_link_libraries(PyRun "${Python_LIBRARY_DIRS}/python3.lib") #I am using the Stable ABI
add_custom_command(
    TARGET PyRun POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E copy
        ${CMAKE_SOURCE_DIR}/script.py
        ${CMAKE_CURRENT_BINARY_DIR}/script.py)
</code></pre>
<p>foo.cpp:</p>
<pre><code>#define Py_LIMITED_API 0x03050000
#define PY_SSIZE_T_CLEAN
#ifdef _DEBUG
#undef _DEBUG
#include <Python.h>
#define _DEBUG
#else
#include <Python.h>
#endif
int main()
{
    const wchar_t* arg[] = { L"-i", L"script.py" };
    return Py_Main(2, const_cast<wchar_t**>(arg));
}
</code></pre>
<p>The code may have issues and is just an example, but works for me.</p>
<p>I want to use cmake for building extensions that I can compile on Windows with Visual Studio or Visual Code and on Linux.</p>
<p>It works here, but I still haven't figured out how to make a single solution for both Visual Studio and VS Code. The code above works for Visual Studio, copying script.py to the same folder as the PyRun executable. It does so because Visual Studio passes CMake the full path of the output folder in the variable <strong>CMAKE_INSTALL_PREFIX</strong>:</p>
<pre><code> ... -DCMAKE_INSTALL_PREFIX:PATH="C:\Users\ ... \PythonCpp\out\install\x64-Debug" ...
</code></pre>
<p>When I compile with Visual Code I would have to update the cmake script to:</p>
<pre><code> add_custom_command(
     TARGET PyRun POST_BUILD
     COMMAND ${CMAKE_COMMAND} -E copy
         ${CMAKE_SOURCE_DIR}/script.py
         ${CMAKE_CURRENT_BINARY_DIR}/$<CONFIG>/script.py)
</code></pre>
<p>to copy the script file to the same folder of the executable, because the executable is copied to a Debug or Release folder within <strong>CMAKE_CURRENT_BINARY_DIR</strong>.</p>
<p>How do I handle such an issue with one single CMake script?</p>
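<p>One way I found that might avoid the two variants (hedged, not tested across all generators): use the <code>$&lt;TARGET_FILE_DIR:PyRun&gt;</code> generator expression, which expands to the directory of the built executable for both single-config (VS Code/Ninja/Make) and multi-config (Visual Studio) generators:</p>

```cmake
add_custom_command(
    TARGET PyRun POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E copy
        ${CMAKE_SOURCE_DIR}/script.py
        # Expands to the executable's output directory under any generator
        $<TARGET_FILE_DIR:PyRun>/script.py)
```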
|
<python><c++><cmake>
|
2023-05-12 23:46:50
| 1
| 437
|
ChrCury78
|
76,240,408
| 5,012,832
|
Python VTKPlotLib how to remove existing mesh
|
<p>I am running into an issue with the code below. When the button is hit, the code should grab the next model and display it; however, the old mesh is still present in the image and doesn't go away.
In vtkplotlib, how do you erase the previous mesh without destroying the qt figure and disrupting the flow of the GUI application?</p>
<p><a href="https://i.sstatic.net/9by8K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9by8K.png" alt="Example of the problem" /></a></p>
<pre><code>from PyQt5 import QtWidgets
import vtkplotlib as vpl
from PyQt5.QtWidgets import QLineEdit, QMessageBox
from stl.mesh import Mesh
class Tagger(QtWidgets.QWidget):
    def __init__(self):
        super().__init__()

        # Go for a vertical stack layout.
        vbox = QtWidgets.QVBoxLayout()
        self.setLayout(vbox)
        self.setWindowTitle("Tagging Engine")

        # Create the figure
        self.figure = vpl.QtFigure2()
        self.figure.add_preset_views()

        # Create a button and attach a callback.
        self.button = QtWidgets.QPushButton("Load Next Model")
        self.button.released.connect(self.button_pressed_cb)

        # Create textbox
        self.textbox = QLineEdit(self)
        self.textbox.move(20, 20)
        self.textbox.resize(280, 40)

        # Create a button and attach a callback.
        self.button2 = QtWidgets.QPushButton("Save Current Data")
        self.button2.released.connect(self.button_pressed_save)

        # QtFigures are QWidgets and are added to layouts with `addWidget`
        vbox.addWidget(self.button)
        vbox.addWidget(self.textbox)
        vbox.addWidget(self.button2)
        vbox.addWidget(self.figure)

    def button_pressed_cb(self):
        self.mesh = Mesh.from_file(r"C:/examplefile.stl")
        vpl.mesh_plot(mesh_data=self.mesh, color="#94b1ff")
        # Reposition the camera to better fit to the model
        vpl.reset_camera(self.figure)
        self.figure.update()
</code></pre>
|
<python><python-3.x><pyqt5><stl-format>
|
2023-05-12 23:06:32
| 1
| 397
|
trinityalps
|
76,240,231
| 7,984,318
|
IBM DB2 Pyodbc String parameters issue: The invocation of routine "YEAR" is ambiguous. The argument in position "1" does not have a best fit
|
<p>I'm using IBM Db2 ,and pyodbc.</p>
<p>insert_sql:</p>
<pre><code>INSERT INTO TABLENAME
SELECT
((YEAR('1/1/9999')*12+MONTH('1/1/9999')) - DOUBLE(YEAR(CYCL)*12+MONTH(CYCL)))
...
</code></pre>
<p>I want to replace '1/1/9999' with a parameter in python code so I used '?' to replace it:</p>
<pre><code>INSERT INTO TABLENAME
SELECT
((YEAR(?)*12+MONTH(?)) - DOUBLE(YEAR(CYCL)*12+MONTH(CYCL)))
</code></pre>
<p>Python:</p>
<pre><code>cnxn = pyodbc.connect(database_string)
cursor = cnxn.cursor()
result = cursor.execute(insert_sql,'1/1/9999','1/1/9999')
cnxn.commit()
</code></pre>
<p>Error:</p>
<pre><code>The invocation of routine "YEAR" is ambiguous. The argument in position "1" does not have a best fit.
</code></pre>
<p>Can anyone help?</p>
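A likely cause (an assumption, based on how DB2 types untyped parameter markers) is that DB2 cannot infer a type for <code>?</code> inside <code>YEAR()</code>. Two fixes that are often combined: cast the marker in the SQL, and bind a real <code>datetime.date</code> instead of the string <code>'1/1/9999'</code>. A sketch, with the table and column names taken from the question:

```python
import datetime

# Cast the parameter markers so DB2 can infer their type.
insert_sql = """
INSERT INTO TABLENAME
SELECT
    ((YEAR(CAST(? AS DATE)) * 12 + MONTH(CAST(? AS DATE)))
     - DOUBLE(YEAR(CYCL) * 12 + MONTH(CYCL)))
    -- ... rest of the SELECT list as in the question
"""

# Bind a typed date rather than the string '1/1/9999' so the ODBC
# driver sends a DATE parameter.
param = datetime.date(9999, 1, 1)

# With a live DB2 connection (sketch only):
# cursor.execute(insert_sql, param, param)
print(param.isoformat())
```

The <code>cursor.execute</code> call is left commented because it needs a live connection; the key change is the <code>CAST</code> plus the typed parameter.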
|
<python><sql><pyodbc>
|
2023-05-12 22:12:17
| 0
| 4,094
|
William
|
76,240,158
| 617,845
|
Multi covariate time series forecast using Darts
|
<p>I am trying to use the tutorial <a href="https://unit8.com/resources/time-series-forecasting-using-past-and-future-external-data-with-darts/" rel="nofollow noreferrer">here</a>, where we have two covariates to predict the target.</p>
<p>The tutorial uses <code>.stack()</code> to add two covariates together. It is not clear to me how to use this function to add more than two covariates to the model.</p>
<p>I tried the following code:</p>
<pre><code>from darts.models import BlockRNNModel
my_covariate = (tg.sine_timeseries(length=LENGTH,
value_frequency=(1/14),
freq='D',
column_name='my_covariate')
+ 0.4 * tg.gaussian_timeseries(length=LENGTH, freq='D'))
brnn_melting_and_rain = BlockRNNModel(input_chunk_length=30,
output_chunk_length=10,
n_rnn_layers=2)
brnn_melting_and_rain.fit(flow_train,
# past_covariates=melting.stack(rainfalls).stack(rainfalls),
past_covariates=[melting,rainfalls,my_covariate],
epochs=10,
verbose=True)
eval_model(brnn_melting_and_rain,
past_covariates=melting.stack(rainfalls))
</code></pre>
<p>But I got the following error:</p>
<blockquote>
<p>ValueError: The provided sequence of target series must have the same
length as the provided sequence of covariate series.</p>
</blockquote>
<p>I tried reading the DARTS documentation, but there is no clear direction on how to use <code>past_covariates</code>; specifically, what data type I should pass here and whether there are any other requirements.</p>
|
<python><python-3.x><time-series><multivariate-time-series><u8darts>
|
2023-05-12 21:55:42
| 3
| 1,383
|
M.M
|
76,240,108
| 998,967
|
django tests - error in github action while running in parallel
|
<p>In a GitHub Actions YAML file, the following step is defined to run Django tests:</p>
<pre><code>python manage.py test --failfast --parallel 2
</code></pre>
<p>It looks like <code>--parallel 2</code> breaks it:</p>
<pre><code>multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/multiprocessing/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/site-packages/django/test/runner.py", line 449, in _run_subsuite
result = runner.run(subsuite)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/site-packages/django/test/runner.py", line 366, in run
test(result)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/unittest/suite.py", line 84, in __call__
return self.run(*args, **kwds)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/unittest/suite.py", line 122, in run
test(result)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/site-packages/django/test/testcases.py", line 381, in __call__
self._setup_and_call(result)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/site-packages/django/test/testcases.py", line 416, in _setup_and_call
super().__call__(result)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/unittest/case.py", line 650, in __call__
return self.run(*args, **kwds)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/unittest/case.py", line 599, in run
self._feedErrorsToResult(result, outcome.errors)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/unittest/case.py", line 516, in _feedErrorsToResult
result.addFailure(test, exc_info)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/site-packages/django/test/runner.py", line 296, in addFailure
self.check_picklable(test, err)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/site-packages/django/test/runner.py", line 216, in check_picklable
self._confirm_picklable(err)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/site-packages/django/test/runner.py", line 186, in _confirm_picklable
pickle.loads(pickle.dumps(obj))
TypeError: cannot pickle 'traceback' object
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/runner/work/fiscozen/fiscozen/fiscozen_django/manage.py", line 24, in <module>
execute_from_command_line(sys.argv)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/site-packages/django/core/management/__init__.py", line 442, in execute_from_command_line
utility.execute()
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/site-packages/django/core/management/__init__.py", line 436, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/site-packages/django/core/management/commands/test.py", line 24, in run_from_argv
super().run_from_argv(argv)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/site-packages/django/core/management/base.py", line 412, in run_from_argv
self.execute(*args, **cmd_options)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/site-packages/django/core/management/base.py", line 458, in execute
output = self.handle(*args, **options)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/site-packages/django/core/management/commands/test.py", line 68, in handle
failures = test_runner.run_tests(test_labels)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/site-packages/django/test/runner.py", line 1061, in run_tests
result = self.run_suite(suite)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/site-packages/django/test/runner.py", line 983, in run_suite
return runner.run(suite)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/unittest/runner.py", line 184, in run
test(result)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/unittest/suite.py", line 84, in __call__
return self.run(*args, **kwds)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/site-packages/django/test/runner.py", line 534, in run
subsuite_index, events = test_results.next(timeout=0.1)
File "/opt/hostedtoolcache/Python/3.10.6/x64/lib/python3.10/multiprocessing/pool.py", line 873, in next
raise value
TypeError: cannot pickle 'traceback' object
Error: Process completed with exit code 1.
</code></pre>
<p>Has anybody else experienced this?</p>
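The <code>TypeError: cannot pickle 'traceback' object</code> with <code>--parallel</code> is commonly caused by the <code>tblib</code> package being absent: Django's parallel test runner needs it to ship tracebacks between worker processes, as noted in Django's testing documentation. A sketch of a workflow fix (step names are illustrative):

```yaml
- name: Install test dependencies
  run: pip install tblib

- name: Run tests
  run: python manage.py test --failfast --parallel 2
```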
|
<python><django><multiprocessing><github-actions><django-testing>
|
2023-05-12 21:44:23
| 1
| 1,844
|
Luke
|
76,240,088
| 17,877,528
|
Cannot send http request from Flutter to Flask
|
<p>I'm trying to make a simple request to Flask, but I keep getting connection refused.</p>
<p>This is my <code>main.py</code></p>
<pre><code>import os
import sys

from konlpy.tag import Kkma, Hannanum, Okt
from flask import Flask, jsonify

sys.stdin.reconfigure(encoding="utf-8")
sys.stdout.reconfigure(encoding="utf-8")

app = Flask(__name__)
basedir = os.path.abspath(os.path.dirname(__file__))

@app.route('/', methods=["GET"])
def index():
    return jsonify({'message': 'Hello, world!'})

if __name__ == '__main__':
    app.run(debug=True)
</code></pre>
<p>I started and got this message</p>
<pre><code>* Serving Flask app 'main'
* Debug mode: on
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on http://127.0.0.1:5000
</code></pre>
<p>And on Flutter:</p>
<pre><code> final String baseUrl = 'http://127.0.0.1:5000';
void _search() async {
try {
final response = await http.get(Uri.parse(baseUrl));
final data = jsonDecode(response.body);
print(data);
} catch (e) {
print(e);
}
}
</code></pre>
<p>I was reading this:
<a href="https://stackoverflow.com/questions/55908089/why-is-flutter-refusing-to-connect-on-localhost8000-or-127-0-018000">why is flutter refusing to connect on localhost:8000 or 127.0.01:8000?</a></p>
<p>I'm using the Android emulator, and I also tried making the request to <code>https://10.0.2.2:5000</code>, but it still didn't work. I also tried on my real device and got the same result.</p>
<p>My computer is connected to the internet via cable, so maybe that's something to consider.</p>
<p>Thanks</p>
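For what it's worth, a common pair of fixes here (a sketch, not guaranteed to be the only issue): bind Flask to all interfaces so the emulator can reach the host machine, and call it from Flutter at <code>http://10.0.2.2:5000</code> with plain <code>http</code>, not <code>https</code> (the dev server does not speak TLS):

```python
import os

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/", methods=["GET"])
def index():
    return jsonify({"message": "Hello, world!"})

if __name__ == "__main__" and os.environ.get("RUN_SERVER"):
    # host="0.0.0.0" listens on all interfaces; the Android emulator
    # then reaches the host machine at http://10.0.2.2:5000.
    # (Guarded by an env var here so importing this sketch doesn't
    # block; drop the guard in a real app.)
    app.run(host="0.0.0.0", port=5000, debug=True)
```

The default <code>app.run()</code> binds only to 127.0.0.1 of the host, which the emulator's own loopback cannot see.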
|
<python><flutter><dart><flask>
|
2023-05-12 21:40:09
| 1
| 774
|
José Carlos
|
76,239,975
| 11,922,765
|
Python pandas print week day name of a dataframe
|
<p>Somehow I got the dates of all the Mondays, Tuesdays and Fridays starting from a specific date, for up to the next year. Next, I want to print the name of the weekday in the next column.</p>
<pre><code>df = pd.concat([pd.DataFrame(data={'Date': pd.date_range(start='11/07/2022',
                                                         freq=week_day,
                                                         end='11/07/2023')})
                for week_day in ['W-MON', 'W-TUE', 'W-FRI']])
df = df.sort_values('Date').reset_index(drop=True)

#         Date
# 0 2022-11-07
# 1 2022-11-08
# 2 2022-11-11

df['Weekday_name'] = df['Date'].strftime('%A')
</code></pre>
<p>Expected output:</p>
<pre><code> Date Weekday_name
0 2022-11-07 Monday
1 2022-11-08 Tuesday
2 2022-11-11 Friday
</code></pre>
<p>Present output:</p>
<pre><code>AttributeError: 'Series' object has no attribute 'strftime'
</code></pre>
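The error happens because <code>strftime</code> is a method of individual datetime objects, not of a <code>Series</code>; the <code>.dt</code> accessor applies it element-wise. A minimal sketch with the dates from the question:

```python
import pandas as pd

df = pd.DataFrame({"Date": pd.to_datetime(["2022-11-07", "2022-11-08", "2022-11-11"])})

# Use the .dt accessor on a datetime Series; df["Date"].dt.day_name()
# is an equivalent, slightly more direct spelling.
df["Weekday_name"] = df["Date"].dt.strftime("%A")
print(df)
```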
|
<python><pandas><dataframe><datetime>
|
2023-05-12 21:13:19
| 2
| 4,702
|
Mainland
|
76,239,956
| 15,907,013
|
Vall-e pytorch torch.load returns dict
|
<p>I am using <code>enhuiz/vall-e</code> but running into an error, which is
<code>File "vall-e/vall_e/main.py", line 30, in main ar = torch.load(args.ar_ckpt).to(args.device) AttributeError: 'dict' object has no attribute 'to'</code></p>
<p>The issue is mentioned but no one has solved it <code>https://github.com/enhuiz/vall-e/issues/64</code></p>
<p>I am using this google colab to reproduce it (NOTE: i added <code>!pip install deepspeed==0.8.3</code>)
<code>https://colab.research.google.com/drive/1wEze0kQ0gt9B3bQmmbtbSXCoCTpq5vg-?usp=sharing</code></p>
|
<python><pytorch><artificial-intelligence>
|
2023-05-12 21:10:12
| 1
| 539
|
Jonathan Coletti
|
76,239,866
| 396,014
|
How to combine 3d projections with 2d subplots and set the width
|
<p>I have working code to generate plots showing x,y,z values for three parameters from an accelerometer, with side-by-side line and 3D plots for each:</p>
<pre><code>from mpl_toolkits import mplot3d
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#code here loads data into a dataframe df
fig = plt.figure(figsize=(10,8))
fig.suptitle(filename, fontsize=12)
i = 0
for p in ('accel', 'angle', 'avelo'):
    i += 1
    ax = fig.add_subplot(3, 2, i)
    ax.plot(idx, df[p, 'x'], label="x")
    ax.plot(idx, df[p, 'y'], label="y")
    ax.plot(idx, df[p, 'z'], label="z")
    ax.set_ylabel(p)
    ax.legend(loc="best")

    i += 1
    ax = fig.add_subplot(3, 2, i, projection='3d')
    ax.plot3D(df[p, 'x'], df[p, 'y'], df[p, 'z'], 'black')
    ax.scatter(df[p]['x'][0], df[p]['y'][0], df[p]['z'][0], c='green', marker='o', s=50)
    ax.scatter(df[p]['x'].iloc[-1], df[p]['y'].iloc[-1], df[p]['z'].iloc[-1], c='red', marker='x', s=50)
    ax.set_xlabel('x')
    ax.set_ylabel('y')
    ax.set_zlabel('z')

plt.subplots_adjust(left=0.1,
                    bottom=0.1,
                    right=0.9,
                    top=0.9,
                    wspace=0.4,
                    hspace=0.1)
plt.show()
</code></pre>
<p>I want to make the line plots twice as wide as they are by default. Is there some way to do this with the existing add_subplot approach or do I have to rework the code to set up the plots with plt.subplots? All the examples I find assume the latter.</p>
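One way to get the wider line plots while keeping <code>add_subplot</code> (a sketch, not the only option) is to build the figure on a 3-column <code>GridSpec</code> and let each line plot span two columns:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch; a real app can drop it
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3d projection)

fig = plt.figure(figsize=(10, 8))
# 3 rows x 3 columns: each line plot spans two columns, each 3D plot gets one.
gs = fig.add_gridspec(3, 3)
ax_line = fig.add_subplot(gs[0, :2])                # twice as wide as ax_3d
ax_3d = fig.add_subplot(gs[0, 2], projection="3d")
```

In the loop from the question, row <code>r</code> would use <code>gs[r, :2]</code> for the line plot and <code>gs[r, 2]</code> for the 3D plot, so no rework to <code>plt.subplots</code> is needed.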
|
<python><matplotlib><subplot><matplotlib-3d>
|
2023-05-12 20:49:22
| 1
| 1,001
|
Steve
|
76,239,793
| 6,872,935
|
Python3: User defined version of Optional type, how to extend the Union type
|
<p>I have a class, let's call it <code>Foo</code>, and I would like to be able to use a <code>Fooable</code> type that signals that a value may be any union of a given list of types or an instance of the <code>Foo</code> class.</p>
<p>This is quite similar to the <code>Optional</code> type but with <code>Foo</code> instead of <code>None</code>.</p>
<p>Here's an example of the type</p>
<pre class="lang-py prettyprint-override"><code>class Foo:
    pass
# ... some sort of type definition ...
# Equivalent to Union[Foo, int, str]
x: Fooable[int, str] = 2
</code></pre>
<p>I know that for Python >= 3.10 you can solve this quite easily with the <code>|</code> typing operator, but let's assume that's not an option.</p>
<p>This also kind of works, but only for one type:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar, Union

class Foo:
    pass

G = TypeVar("G")
Fooable = Union[G, Foo]

# this works
x: Fooable[int] = 3

# this does not
y: Fooable[int, str] = 3
# TypeError: Too many parameters for typing.Union[~G, Foo]; actual 2, expected 1
</code></pre>
<p>I'm hoping there's a way to allow any number of types</p>
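For what it's worth, since <code>Union</code> flattens nested unions, the single-<code>TypeVar</code> alias already covers the multi-type case if you pass a <code>Union</code> as its one parameter; it is a workaround rather than the variadic <code>Fooable[int, str]</code> spelling:

```python
from typing import TypeVar, Union

class Foo:
    pass

G = TypeVar("G")
Fooable = Union[G, Foo]

# A Union fills the single slot, and nested Unions flatten automatically,
# so Fooable[Union[int, str]] is the same type as Union[int, str, Foo].
y: Fooable[Union[int, str]] = 3
print(Fooable[Union[int, str]])
```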
|
<python><python-3.x><types>
|
2023-05-12 20:34:48
| 0
| 562
|
Jack
|
76,239,758
| 21,420,742
|
How to get count for multiple columns in pandas
|
<p>I have a dataset I need to get a count of.</p>
<pre><code>Manager hire_request hire_approval hired transfer_request transfer_approval transfer
Adam 0 0 0 0 1 0
Blake 1 0 0 0 0 0
Blake 0 1 0 0 0 0
Blake 1 0 0 0 0 0
Chris 0 0 0 1 0 0
Chris 0 1 0 0 0 0
</code></pre>
<p>I need to just get a count by manager. I tried doing <code>df.groupby(['Manager'])['Manager','hire_request','hire_approval', 'hired', 'transfer_request','transfer_approval', 'transfer'].count()</code></p>
<p>Desired output:</p>
<pre><code>Manager hire_request hire_approval hired transfer_request transfer_approval transfer
Adam 0 0 0 0 1 0
Blake 2 1 0 0 0 0
Chris 0 1 0 1 0 0
</code></pre>
<p>Thank you</p>
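Since the columns are 0/1 flags, the desired output is the per-manager <em>sum</em>, not the count (<code>count()</code> tallies non-null cells, which is why it returns row counts). A sketch with the question's data:

```python
import pandas as pd

df = pd.DataFrame({
    "Manager": ["Adam", "Blake", "Blake", "Blake", "Chris", "Chris"],
    "hire_request":      [0, 1, 0, 1, 0, 0],
    "hire_approval":     [0, 0, 1, 0, 0, 1],
    "hired":             [0, 0, 0, 0, 0, 0],
    "transfer_request":  [0, 0, 0, 0, 1, 0],
    "transfer_approval": [1, 0, 0, 0, 0, 0],
    "transfer":          [0, 0, 0, 0, 0, 0],
})

# Summing the indicator columns per manager gives the desired totals.
out = df.groupby("Manager", as_index=False).sum()
print(out)
```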
|
<python><python-3.x><pandas><dataframe><numpy>
|
2023-05-12 20:30:30
| 1
| 473
|
Coding_Nubie
|
76,239,693
| 2,576,839
|
Python groupby function not recognising that there are keys that are identical that can be "grouped on"
|
<p>I have a list which I am trying to group by the first element of each nested list. The problem is that my code is not recognising that there are only 5 distinct values of the grouped-on element.</p>
<pre><code>a = [['2.25.151989603747108360484758994222924880510', 1],
['2.25.23907329898781253437777953862543062317', 1],
['2.25.151989603747108360484758994222924880510', 2],
['2.25.23907329898781253437777953862543062317', 1],
['2.25.159215431212584126451402597802236328925', 1],
['2.25.339018106044083012102817776589396922392', 1],
['2.25.159215431212584126451402597802236328925', 1],
['2.25.23907329898781253437777953862543062317', 1],
['2.25.151989603747108360484758994222924880510', 1],
['2.25.159215431212584126451402597802236328925', 1],
['2.25.151989603747108360484758994222924880510', 1],
['2.25.339018106044083012102817776589396922392', 1],
['2.25.159215431212584126451402597802236328925', 1],
['2.25.151989603747108360484758994222924880510', 1],
['2.25.159215431212584126451402597802236328925', 1],
['2.25.339018106044083012102817776589396922392', 1],
['2.25.159215431212584126451402597802236328925', 1]]
</code></pre>
<p>Just as a simple test, I have run (thinking there may be something odd about the variable I'm grouping on:</p>
<pre><code>for key, group in groupby(a, lambda x: str(x[0]).strip()):
    print(key)
</code></pre>
<p>I get this:</p>
<pre><code>2.25.151989603747108360484758994222924880510
2.25.23907329898781253437777953862543062317
2.25.151989603747108360484758994222924880510
2.25.23907329898781253437777953862543062317
2.25.159215431212584126451402597802236328925
2.25.339018106044083012102817776589396922392
2.25.159215431212584126451402597802236328925
2.25.23907329898781253437777953862543062317
2.25.151989603747108360484758994222924880510
2.25.159215431212584126451402597802236328925
2.25.151989603747108360484758994222924880510
2.25.339018106044083012102817776589396922392
2.25.159215431212584126451402597802236328925
2.25.151989603747108360484758994222924880510
2.25.159215431212584126451402597802236328925
</code></pre>
<p>If I do this:</p>
<pre><code>r = []
for m in a:
    for n in a:
        if m[0] == n[0]:
            r.append(m[0])

set(r)
</code></pre>
<p>I get this</p>
<pre><code>{'2.25.105201430514553352325644071061576888668',
'2.25.151989603747108360484758994222924880510',
'2.25.159215431212584126451402597802236328925',
'2.25.23907329898781253437777953862543062317',
'2.25.339018106044083012102817776589396922392'}
</code></pre>
<p>Which is correct. Why isn't the groupby function working?</p>
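<code>groupby</code> is working as documented: it only merges <em>consecutive</em> items with equal keys, so on unsorted input each run becomes its own group. Sorting by the key first gives the five groups. A minimal sketch (with shortened, made-up keys):

```python
from itertools import groupby

a = [["x", 1], ["y", 1], ["x", 2], ["y", 1], ["x", 1]]

# groupby only merges adjacent equal keys, so sort by the key first.
a_sorted = sorted(a, key=lambda r: r[0])
groups = {k: [row[1] for row in grp]
          for k, grp in groupby(a_sorted, key=lambda r: r[0])}
print(groups)
```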
|
<python><group-by>
|
2023-05-12 20:18:57
| 2
| 2,178
|
GhostRider
|