QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,362,961 | 2,763,030 | How to tell if a python package installation includes precompiled binaries? | <p>Is there a reliable way to determine, when installing a Python package with pip/conda, whether the installation includes precompiled binaries? I would like to exclude these packages, or at least know when this happens. Sometimes this can be determined by looking at the setup/project files in the source distribution or at the name of the whl file, but not always. Mostly it's important to have a reliable Yes/No/Unsure signal to help prioritize further investigation.</p>
| <python><pip><conda> | 2023-10-25 21:20:19 | 1 | 1,731 | John D. |
77,362,957 | 12,242,085 | How to drop duplicated values in one column for each id in Data Frame in Python Pandas? | <p>I have Data Frame in Python Pandas like below:</p>
<pre><code>data = {'id': [1, 1, 1, 1, 2, 2, 3, 3],
        'nps': [8, 8, 8, 8, 7, 7, 9, 9],
        'target': [True, True, True, True, False, False, True, True],
        'score': [0.56, 0.78, 0.56, 0.78, 0.6785, 0.42, 0.9, 0.63],
        'day': ['2023-02-15', '2023-02-15', '2023-02-22', '2023-02-22', '2023-06-10', '2023-06-10', '2023-07-01', '2023-07-01']}

df = pd.DataFrame(data)
</code></pre>
<p><a href="https://i.sstatic.net/GH6Pb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GH6Pb.png" alt="enter image description here" /></a></p>
<p>And as you can see I have duplicates for each id in column score. I need to have only one score per id.</p>
<p>So, as a result I need something like for example below:</p>
<pre><code>id | nps | target | score | day
---|-----|---------|--------|-----------
1 | 8 | True | 0.56 | 2023-02-15
1 | 8 | True | 0.56 | 2023-02-22
2 | 7 | False | 0.42 | 2023-06-10
3 | 9 | True | 0.90 | 2023-07-01
</code></pre>
<p>How can I do that in Python Pandas ?</p>
| <python><pandas><dataframe><duplicates><drop-duplicates> | 2023-10-25 21:19:08 | 1 | 2,350 | dingaro |
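A hedged sketch of one reading of this question: if "one score per id" means one row per (id, day) pair, pandas' `drop_duplicates` with a `subset` does it in one call. Note the sample target keeps 0.42 for id 2 but 0.90 for id 3, so no single keep/sort rule reproduces it exactly; which score survives below depends on row order.

```python
import pandas as pd

data = {'id': [1, 1, 1, 1, 2, 2, 3, 3],
        'nps': [8, 8, 8, 8, 7, 7, 9, 9],
        'target': [True, True, True, True, False, False, True, True],
        'score': [0.56, 0.78, 0.56, 0.78, 0.6785, 0.42, 0.9, 0.63],
        'day': ['2023-02-15', '2023-02-15', '2023-02-22', '2023-02-22',
                '2023-06-10', '2023-06-10', '2023-07-01', '2023-07-01']}
df = pd.DataFrame(data)

# One row per (id, day); sort by 'score' first if e.g. the lowest should win.
result = df.drop_duplicates(subset=['id', 'day'], keep='first').reset_index(drop=True)
print(result)
```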
77,362,703 | 1,422,096 | How to plot a histogram correctly with numpy, and match it with the density function? | <p>TL;DR: <strong>How to plot the result of <code>np.histogram(..., density=True)</code> correctly with Numpy?</strong></p>
<p>Using <code>density=True</code> should help to match the histogram of the sample, and the density function of the underlying random variable, but it doesn't:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import scipy.stats
import matplotlib.pyplot as plt
y = np.random.randn(10000)
h, bins = np.histogram(y, bins=1000, density=True)
plt.bar(bins[:-1], h)
x = np.linspace(-10, 10, 100)
f = scipy.stats.norm.pdf(x)
plt.plot(x, f, color="green")
plt.show()
</code></pre>
<p><strong>Why aren't the histogram and probability density functions scaled accordingly?</strong></p>
<p><a href="https://i.sstatic.net/YGgPS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YGgPS.png" alt="plt.bar output" /></a></p>
<p>In this case, an observation shows that a 1.6 scaling would be better:</p>
<pre class="lang-py prettyprint-override"><code>plt.plot(x, 1.6 * f, color="green")
</code></pre>
<p>Also, this works normally:</p>
<pre class="lang-py prettyprint-override"><code>plt.hist(y, bins=100, density=True)
</code></pre>
<p><a href="https://i.sstatic.net/qsnrY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qsnrY.png" alt="plt.hist output" /></a></p>
<p>Why?</p>
| <python><numpy><matplotlib><statistics><probability-density> | 2023-10-25 20:29:36 | 3 | 47,388 | Basj |
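For what it's worth, the histogram in the question is already correctly normalized; the mismatch comes from the plotting call. `plt.bar` draws bars 0.8 data-units wide by default, far wider than these ~0.008-wide bins, so adjacent bars overlap and the picture looks rescaled. A sketch of the check and the fix (the matplotlib call is left as a comment since no display is assumed here):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_normal(10000)
h, bins = np.histogram(y, bins=1000, density=True)

# density=True normalizes so that the *area* is 1: sum(h * bin_width) == 1.
area = np.sum(h * np.diff(bins))
print(area)  # ~1.0

# The histogram is fine; the plotting call is the problem. plt.bar's default
# bar width (0.8 data units) dwarfs these narrow bins, so bars overlap.
# Passing the true bin widths fixes it (assuming matplotlib is available):
# plt.bar(bins[:-1], h, width=np.diff(bins), align='edge')
```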
77,362,389 | 10,318,539 | How to Run a Dynamic Qiskit Circuit Using the Sampler Primitive? | <p>I aim to execute my Qiskit dynamic circuit using a sampler. The rationale behind this choice is that a sampler allows me to run multiple circuits concurrently, whereas using a backend restricts me to executing one circuit at a time. However, while I have developed the code, I am uncertain about where to specify the <strong>'dynamic=True'</strong> parameter.</p>
<p>Code:</p>
<pre><code>from qiskit_ibm_runtime import QiskitRuntimeService, Sampler
from qiskit import QuantumCircuit

service = QiskitRuntimeService(
    channel='ibm_quantum',
    instance='ibm-q/open/main',
)

bell = QuantumCircuit(2, 1)
bell.h(0)
bell.x(res[0]).c_if(cr, 0)  #
bell.cx(0, 1)
bell.x(res[1]).c_if(cr, 0)  #
bell.measure_all()

# executes three Bell circuits
with Sampler(
    circuits=[bell] * 3,
    service=service,
    options={'backend': 'ibmq_qasm_simulator'},
) as sampler:
    # alternatively you can also pass circuits as objects
    result = sampler(circuits=[bell] * 3)
    print(result)
</code></pre>
| <python><qiskit> | 2023-10-25 19:26:28 | 1 | 485 | Engr. Khuram Shahzad |
77,362,308 | 8,903,959 | What's the difference between 'pip3 install -e .' and 'python3 setup.py develop' | <p>When installing a Python package locally in editable mode, both 'pip3 install -e .' and 'python3 setup.py develop' will install the package locally in editable mode. I know that 'pip3 install -e .' is recommended over 'python3 setup.py develop', but I am unclear as to why and what the differences are between the two.</p>
<p>This answer for a separate question (<a href="https://stackoverflow.com/a/19048754/8903959">https://stackoverflow.com/a/19048754/8903959</a>) seems to indicate that it's because it's more difficult to uninstall when using python3 setup.py develop. Why is that the case?</p>
<p>Additionally, the answer (<a href="https://stackoverflow.com/a/19048754/8903959">https://stackoverflow.com/a/19048754/8903959</a>) says that dependencies are managed differently and incorrectly. What are the specific differences surrounding dependencies and why do they exist, especially if 'python3 setup.py develop' manages them incorrectly?</p>
<p>Apologies if this is a trivial question, but try as I might I could not find the answer or documentation for this reasoning elsewhere.</p>
| <python><pip><setuptools><setup.py><python-packaging> | 2023-10-25 19:11:52 | 1 | 1,097 | Justin Furuness |
77,362,216 | 3,419,103 | Add startup/shutdown handlers to FastAPI app with lifespan API | <p>Consider a FastAPI using the <code>lifespan</code> parameter like this:</p>
<pre class="lang-py prettyprint-override"><code>def lifespan(app):
    print('lifespan start')
    yield
    print('lifespan end')

app = FastAPI(lifespan=lifespan)
</code></pre>
<p>Now I want to register a sub app with its own lifecycle functions:</p>
<pre class="lang-py prettyprint-override"><code>app.mount(mount_path, sub_app)
</code></pre>
<p><strong>How can I register startup/shutdown handlers for the sub app?</strong></p>
<p>All solutions I could find either require control over the <code>lifespan</code> generator (which I don't have) or involve deprecated methods like <code>add_event_handler</code> (which doesn't work when <code>lifespan</code> is set).</p>
<hr />
<p><strong>Update</strong> Minimal reproducible example:</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI

# --- main app ---

def lifespan(_):
    print("startup")
    yield
    print("shutdown")

app = FastAPI(lifespan=lifespan)

@app.get("/")
async def root():
    return {"message": "Hello World"}

# --- sub app ---

sub_app = FastAPI()

@sub_app.get("/")
async def sub_root():
    return {"message": "Hello Sub World"}

app.mount("/sub", sub_app)

app.on_event("startup")(lambda: print("sub startup"))  # doesn't work
app.on_event("shutdown")(lambda: print("sub shutdown"))  # doesn't work
</code></pre>
<p>Run with: <code>uvicorn my_app:app --port 8000</code></p>
| <python><fastapi><lifecycle><starlette> | 2023-10-25 18:56:54 | 2 | 18,137 | Falko |
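One workaround, since FastAPI does not run a mounted sub-app's lifespan for you, is to compose the two lifespans into one context manager and pass that as `lifespan=` on the outer app. The composition itself needs nothing beyond the standard library; the FastAPI wiring (`app = FastAPI(lifespan=combined_lifespan)`) is assumed, not shown:

```python
import asyncio
from contextlib import AsyncExitStack, asynccontextmanager

events = []

@asynccontextmanager
async def main_lifespan(app):
    events.append("main startup")
    yield
    events.append("main shutdown")

@asynccontextmanager
async def sub_lifespan(app):
    events.append("sub startup")
    yield
    events.append("sub shutdown")

@asynccontextmanager
async def combined_lifespan(app):
    # Enter both lifespans; AsyncExitStack unwinds them in reverse order.
    async with AsyncExitStack() as stack:
        await stack.enter_async_context(main_lifespan(app))
        await stack.enter_async_context(sub_lifespan(app))
        yield

async def demo():
    async with combined_lifespan(None):
        events.append("serving")

asyncio.run(demo())
print(events)
```

This keeps each sub-app's startup/shutdown logic local to the sub-app; the outer app only needs to know which lifespans to stack.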
77,362,087 | 4,031,815 | Keep getting conflicts when installing python, flask and google cloud bigquery | <p>I have this env.yml</p>
<pre><code>name: env-app
channels:
- conda-forge
- defaults
dependencies:
- google-cloud-bigquery=3.12.0
- flask=3.0.*
- python=3.12.*
</code></pre>
<p>If I run <code>micromamba env create --file environment.yml</code> I get this output:</p>
<pre><code>conda-forge/osx-arm64 Using cache
conda-forge/noarch Using cache
pkgs/main/osx-arm64 No change
pkgs/main/noarch No change
pkgs/r/noarch No change
pkgs/r/osx-arm64 No change
error libmamba Could not solve for environment specs
The following packages are incompatible
├─ google-cloud-bigquery 3.12.0** is installable and it requires
│  ├─ pandas >=1.1.0 with the potential options
│  │  ├─ pandas [1.1.3|1.1.4|...|2.0.3] would require
│  │  │  └─ python >=3.8,<3.9.0a0 , which can be installed;
│  │  ├─ pandas [1.1.3|1.1.4|...|2.1.1] would require
│  │  │  └─ python >=3.9,<3.10.0a0 , which can be installed;
│  │  ├─ pandas [1.3.4|1.3.5|...|2.1.1] would require
│  │  │  └─ python >=3.10,<3.11.0a0 *_cpython, which can be installed;
│  │  ├─ pandas [1.3.4|1.3.5|...|2.0.3] would require
│  │  │  └─ python >=3.8,<3.9.0a0 *_cpython, which can be installed;
│  │  ├─ pandas [1.3.4|1.3.5|...|2.1.1] would require
│  │  │  └─ python >=3.9,<3.10.0a0 *_cpython, which can be installed;
│  │  ├─ pandas [1.5.1|1.5.2|...|2.1.1] would require
│  │  │  └─ python >=3.11,<3.12.0a0 *_cpython, which can be installed;
│  │  ├─ pandas 2.1.1 would require
│  │  │  └─ python >=3.12.0rc3,<3.13.0a0 *_cpython but there are no viable options
│  │  │     ├─ python 3.12.0 would require
│  │  │     │  └─ python_abi 3.12.* *_cp312, which can be installed;
│  │  │     └─ python 3.12.0rc3 would require
│  │  │        └─ _python_rc, which does not exist (perhaps a missing channel);
│  │  ├─ pandas [1.3.5|1.4.1|...|2.1.1] would require
│  │  │  └─ python >=3.10,<3.11.0a0 , which can be installed;
│  │  └─ pandas [1.5.2|1.5.3|2.0.3|2.1.1] would require
│  │     └─ python >=3.11,<3.12.0a0 , which can be installed;
│  └─ shapely >=1.8.4,<2.0dev with the potential options
│     ├─ shapely [1.8.4|1.8.5] would require
│     │  └─ python_abi 3.10.* *_cp310, which conflicts with any installable versions previously reported;
│     ├─ shapely [1.8.4|1.8.5] would require
│     │  └─ python_abi 3.8.* *_cp38, which conflicts with any installable versions previously reported;
│     ├─ shapely [1.8.4|1.8.5] would require
│     │  └─ python_abi 3.9.* *_cp39, which conflicts with any installable versions previously reported;
│     ├─ shapely 1.8.5 would require
│     │  └─ python_abi 3.11.* *_cp311, which conflicts with any installable versions previously reported;
│     ├─ shapely 1.8.4 would require
│     │  └─ python >=3.10,<3.11.0a0 , which can be installed;
│     ├─ shapely 1.8.4 would require
│     │  └─ python >=3.8,<3.9.0a0 , which can be installed;
│     └─ shapely 1.8.4 would require
│        └─ python >=3.9,<3.10.0a0 , which can be installed;
└─ python 3.12.* is not installable because there are no viable options
   ├─ python 3.12.0 conflicts with any installable versions previously reported;
   ├─ python 3.12.0, which cannot be installed (as previously explained);
   └─ python 3.12.0rc3, which cannot be installed (as previously explained).
critical libmamba Could not solve for environment specs
</code></pre>
| <python><flask><conda> | 2023-10-25 18:31:22 | 0 | 25,646 | CommonSenseCode |
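Reading the tree: the conda-forge build of google-cloud-bigquery 3.12.0 pins shapely <2.0, and no shapely 1.8.x build exists for Python 3.12, so the python=3.12.* pin can never be satisfied. A hedged fix (the exact versions that solve are an assumption, not verified here) is to relax the Python pin:

```yaml
name: env-app
channels:
  - conda-forge
  - defaults
dependencies:
  - google-cloud-bigquery=3.12.0
  - flask=3.0.*
  - python=3.11.*   # 3.12 has no compatible shapely <2.0 build
```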
77,361,959 | 1,656,122 | Python webbrowser module keep browser open | <p>I am trying to open a website with Python's webbrowser module, however, I want the browser to continue to stay open after the script is finished and not close out.</p>
<pre><code>import webbrowser
# Logic/code that programmatically gets a URL
print(webbrowser.open(url))
# while True:
# continue
print("script done")
</code></pre>
<p>If I run the code above I get:</p>
<pre><code>>True
>script done
</code></pre>
<p>But no web page opens, no new tabs, etc.
If I uncomment the infinite While loop, then the browser/tab appears, and when I manually keyboard interrupt the process they disappear.</p>
<p>So, it appears the browser only lives while the script is running.</p>
<p>I want to open the browser to that page, and then leave it open.
Is there any way to do this with Python's webbrowser module or any built-in python modules?</p>
| <python><browser><module> | 2023-10-25 18:06:15 | 1 | 415 | Dylan Holmes |
77,361,799 | 13,086,128 | AttributeError: 'DataFrame' object has no attribute 'group_by' | <p>I am trying to group a polars dataframe following the documentation:</p>
<p><a href="https://pola-rs.github.io/polars/py-polars/html/reference/dataframe/api/polars.DataFrame.group_by.html#polars.DataFrame.group_by" rel="noreferrer">https://pola-rs.github.io/polars/py-polars/html/reference/dataframe/api/polars.DataFrame.group_by.html#polars.DataFrame.group_by</a></p>
<pre><code>import polars as pl

df = pl.DataFrame(
    {
        "a": ["a", "b", "a", "b", "c"],
        "b": [1, 2, 1, 3, 3],
        "c": [5, 4, 3, 2, 1],
    }
)

df.group_by("a").agg(pl.col("b").sum())
</code></pre>
<p>However, I am getting this error:</p>
<pre><code>AttributeError: 'DataFrame' object has no attribute 'group_by'
</code></pre>
| <python><python-3.x><dataframe><python-polars> | 2023-10-25 17:43:34 | 1 | 30,560 | Talha Tayyab |
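The usual cause is a polars version older than 0.19, where the method was still spelled `groupby` (no underscore); the linked docs track the renamed API. A version-tolerant shim, sketched here against a stand-in object since the installed polars version is unknown:

```python
def group_by_compat(df, *keys):
    """Call group_by on polars >= 0.19, falling back to the old groupby name."""
    method = getattr(df, "group_by", None) or getattr(df, "groupby")
    return method(*keys)

# Stand-in object with only the old-style spelling, to show the fallback.
class OldPolarsFrame:
    def groupby(self, *keys):
        return f"grouped by {keys}"

print(group_by_compat(OldPolarsFrame(), "a"))
```

Upgrading polars (`pip install -U polars`) and using `group_by` directly is of course the simpler route when upgrading is an option.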
77,361,638 | 4,726,246 | Anaconda Nested Virtual environment bug | <p>Whenever I open a terminal in VS Code, it shows a nested conda environment. Currently there are two environments, base and myenv. When I start a terminal, I am greeted by this:</p>
<pre><code>(myenv) (base) username:directory$
</code></pre>
<p>Any tips on how to debug this or how to find out what's going on?</p>
| <python><bash><terminal><anaconda> | 2023-10-25 17:15:30 | 1 | 598 | infiNity9819 |
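The usual cause of the doubled prompt is conda auto-activating base in every new shell while VS Code's Python extension additionally activates myenv on top. Two hedged things to try (these are standard conda commands, but whether they match this particular setup is an assumption):

```shell
# Stop conda from auto-activating base in new shells; VS Code's own
# activation of myenv then becomes the only layer.
conda config --set auto_activate_base false

# In an already-open terminal, peel off the extra layer:
conda deactivate
```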
77,361,637 | 5,684,326 | Containerizing a flask application that uses PYODBC | <p>I am trying to create a container, and it's telling me that ODBC Driver 17 for SQL Server can't be opened. Here is the Dockerfile, but I have tried many different variants. This is the code that throws the error.</p>
<pre><code>class Database:
    @staticmethod
    def _get_connection_string_from_secrets(file_path):
        with open(file_path, 'r') as file:
            data = json.load(file)
        db = data['database']
        connection_string = (f"Driver={{{db['driver']}}};"
                             f"Server={db['server']};"
                             f"Database={db['database_name']};"
                             f"Uid={db['uid']};"
                             f"Pwd={db['pwd']};"
                             f"Encrypt={db['encrypt']};"
                             f"TrustServerCertificate={db['trust_server_certificate']};"
                             f"Timeout={db['connection_timeout']};")
        return connection_string
</code></pre>
<pre><code># Use an official Ubuntu as a base image
FROM ubuntu:latest
# Set environment variables to non-interactive (this avoids some prompts)
ENV DEBIAN_FRONTEND noninteractive
# Install Python and pip
RUN apt-get update && apt-get install -y \
    python3 \
    python3-pip \
    python3-dev \
    python3-tk \
    unixodbc-dev \
    g++
# Install Microsoft ODBC Driver for SQL Server
RUN apt-get install -y wget apt-transport-https
RUN wget https://packages.microsoft.com/keys/microsoft.asc -O- | apt-key add -
RUN wget https://packages.microsoft.com/config/ubuntu/$(. /etc/os-release; echo $VERSION_ID)/prod.list
RUN mv prod.list /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update
RUN ACCEPT_EULA=Y apt-get install -y msodbcsql17
# Set the working directory in the container
WORKDIR /usr/src/app
# Copy the current directory contents into the container at /usr/src/app
COPY . .
# Install any needed packages specified in requirements.txt
RUN pip3 install --no-cache-dir -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 5555
# Define environment variable
ENV NAME World
CMD ["python3", "runserver.py"]
</code></pre>
<p>It is in exactly the correct place though.</p>
<pre><code>root@f985ac7c4261:/usr/src/app# odbcinst -q -d
[ODBC Driver 17 for SQL Server]
root@f985ac7c4261:/usr/src/app# odbcinst -j
unixODBC 2.3.9
DRIVERS............: /etc/odbcinst.ini
SYSTEM DATA SOURCES: /etc/odbc.ini
FILE DATA SOURCES..: /etc/ODBCDataSources
USER DATA SOURCES..: /root/.odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8
root@f985ac7c4261:/usr/src/app# cat /etc/odbcinst.ini
[ODBC Driver 17 for SQL Server]
Description=Microsoft ODBC Driver 17 for SQL Server
Driver=/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.10.so.5.1
UsageCount=1
</code></pre>
<p>This is the stack trace</p>
<pre><code>  File "/usr/src/app/runserver.py", line 2, in <module>
    from ProteinViewer import app  # Import the 'app' object from your app module
  File "/usr/src/app/ProteinViewer/__init__.py", line 8, in <module>
    import views
  File "/usr/src/app/views.py", line 53, in <module>
    protein_name_map = Database.get_protein_name_map();
  File "/usr/src/app/Database.py", line 149, in get_protein_name_map
    temps = Database.execute_query(sql, False)
  File "/usr/src/app/Database.py", line 93, in execute_query
    connection = Database.get_connection()
  File "/usr/src/app/Database.py", line 39, in get_connection
    return pyodbc.connect(connection_string)
pyodbc.Error: ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'ODBC Driver 17 for SQL Server' : file not found (0) (SQLDriverConnect)")
root@8508e47d6825:/usr/src/app# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/microsoft/msodbcsql17/lib64/
root@8508e47d6825:/usr/src/app# python3 runserver.py
Traceback (most recent call last):
  File "/usr/src/app/runserver.py", line 2, in <module>
    from ProteinViewer import app  # Import the 'app' object from your app module
  File "/usr/src/app/ProteinViewer/__init__.py", line 8, in <module>
    import views
  File "/usr/src/app/views.py", line 53, in <module>
    protein_name_map = Database.get_protein_name_map();
  File "/usr/src/app/Database.py", line 149, in get_protein_name_map
    temps = Database.execute_query(sql, False)
  File "/usr/src/app/Database.py", line 93, in execute_query
    connection = Database.get_connection()
  File "/usr/src/app/Database.py", line 39, in get_connection
    return pyodbc.connect(connection_string)
pyodbc.Error: ('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'ODBC Driver 17 for SQL Server' : file not found (0) (SQLDriverConnect)")
</code></pre>
<p>I have also run a variety of other commands to make sure that the ODBC driver is fine. I have also switched Linux distros, I am not sure how to resolve this.</p>
| <python><sql-server><docker><flask><odbc> | 2023-10-25 17:15:19 | 0 | 484 | jdmneon |
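One hedged diagnostic: "file not found" from unixODBC usually means a shared library the driver *depends on* is missing, not the driver file itself (the odbcinst output above already proves the .so path exists). Checking its dynamic dependencies inside the container narrows it down; with ubuntu:latest now being 22.04+, an OpenSSL-related gap for msodbcsql17 is a common culprit, and pinning the base image (or moving to msodbcsql18 with "ODBC Driver 18 for SQL Server") is worth trying:

```shell
# Any "not found" line names the library that actually needs installing.
ldd /opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.10.so.5.1 | grep "not found"
```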
77,361,569 | 19,838,568 | KafkaConsumer with sasl_mechanism "OAUTHBEARER" not calling token() function | <p>I am trying to get a minimal <strong>kafka</strong> consumer working, using kafka-python. Authentication is via OAuth.</p>
<p>I created a TokenProvider class, which is working fine. Any time token() is called, it checks whether there is a current token and, if not, fetches one from the OAuth URL.</p>
<pre><code>>>> from myoauth import TokenProvider
>>> tp = TokenProvider(instance="test", verbose=True)
>>> print(tp.token()[:20]+"...")
Request to https://<hostname>/auth/realms/kafka-flow/protocol/openid-connect/token:
Response 200 OK
eyJhbGciOiJSUzI1NiIs...
</code></pre>
<p>Code for the minimal consumer that prints the available kafka topics is below:</p>
<pre><code>import ssl

from kafka import KafkaConsumer
from myoauth import TokenProvider

bootstrap_server = '<hostname>:443'

consumer = KafkaConsumer(bootstrap_servers=[bootstrap_server],
                         security_protocol='SSL',
                         ssl_context=ssl._create_unverified_context(),
                         sasl_mechanism="OAUTHBEARER",
                         sasl_oauth_token_provider=TokenProvider(instance="test", verbose=True),
                         client_id="kafka-client",
                         group_id="kafka-group",
                         )

print("Consumer created, {} topics found".format(len(consumer.topics())))
for topic in sorted(consumer.topics()):
    print(topic)
</code></pre>
<p>The output is, surprisingly:</p>
<pre><code>$ ./get_topics.py
Consumer created, 0 topics found
</code></pre>
<p>I assume the issue is with the OAuth flow, as the <code>token()</code> function of the TokenProvider class is never called (no <code>Request to https://...</code> output). And without a token, it is obvious that the consumer does not have access to any topics.</p>
<p>Any ideas how to solve this appreciated.</p>
<p><code>kafka-python</code> version is 2.0.2. <code>python</code> is 3.6.8.</p>
| <python><apache-kafka><oauth><kafka-consumer-api> | 2023-10-25 17:04:32 | 1 | 2,406 | treuss |
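One thing worth ruling out before the token logic: with security_protocol='SSL', kafka-python performs no SASL handshake at all, so sasl_mechanism and the token provider are silently ignored — which would explain token() never being called. SASL settings only apply under one of the SASL_* protocols. A hedged variant of the constructor (a fragment, not runnable here; the host and TokenProvider come from the question):

```python
consumer = KafkaConsumer(
    bootstrap_servers=[bootstrap_server],
    security_protocol='SASL_SSL',  # not 'SSL': SASL needs a SASL_* protocol
    ssl_context=ssl._create_unverified_context(),
    sasl_mechanism='OAUTHBEARER',
    sasl_oauth_token_provider=TokenProvider(instance="test", verbose=True),
    client_id='kafka-client',
    group_id='kafka-group',
)
```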
77,361,546 | 4,442,337 | Get a package directory path using importlib.resources with >=py37 compatibility? | <p>I'm trying to make a <code>constants.py</code> file which can be accessed to get globally available paths through the whole package. I'm using <code>importlib.resources</code>, which has been in the standard library since <code>python3.7</code> according to <a href="https://docs.python.org/3/library/importlib.resources.html" rel="nofollow noreferrer">https://docs.python.org/3/library/importlib.resources.html</a>.</p>
<p>This works, but for some reason on <code>python3.9</code> there's no support for directory resources with <code>resources.path</code>.</p>
<pre><code>E IsADirectoryError: [Errno 21] Is a directory: '.../my_package/data'
</code></pre>
<p>This is the source code inside <code>constants.py</code>.</p>
<pre class="lang-py prettyprint-override"><code>from importlib import resources
from contextlib import ExitStack

# Access package data
file_manager = ExitStack()
MODELS_PATH = file_manager.enter_context(resources.path("my_package", "data"))
CONFIGURATOR_DATA_PATH = file_manager.enter_context(
    resources.path("my_package.configurator", "data")
)
</code></pre>
<p>Is there a way to achieve this with <code>>=py37</code> compatibility? Or maybe even a different approach since I'm not <strong>100%</strong> sure this is the best practice in general.</p>
| <python><python-3.x> | 2023-10-25 17:00:51 | 1 | 2,191 | browser-bug |
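On Python 3.9+ the directory-capable replacement is `resources.files()`, which returns a Traversable that may point at a directory; on 3.7/3.8 the `importlib_resources` backport offers the same interface, so one import guard covers >=py37. A sketch against a stdlib package, since `my_package` from the question is not importable here:

```python
from importlib import resources  # use the importlib_resources backport on 3.7/3.8

# files() returns a Traversable that can represent a directory,
# which is what resources.path() chokes on under Python 3.9.
data_dir = resources.files("email")  # "email" stands in for "my_package"
print(data_dir.is_dir())             # True

# Entries can be joined and inspected like paths.
init_file = data_dir / "__init__.py"
print(init_file.is_file())           # True
```

When a concrete on-disk path is required, `resources.as_file()` materializes a Traversable inside an `ExitStack` much like `resources.path()` did, though directory support for `as_file` appears to have landed only in newer versions, so the backport is the safer choice on older interpreters.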
77,361,437 | 1,431,728 | How can I test a list of method calls in Python? | <p>Using this code I can print a list of the methods called in my test method:</p>
<pre class="lang-py prettyprint-override"><code>import unittest
from unittest.mock import patch, Mock, call

class MyTest(unittest.TestCase):
    @patch('mymodule.DepObj')  # patch target must be a string
    def test_my_calls(self, mock_obj):  # needs the test_ prefix to be discovered
        mock_obj_ret = mock_obj.return_value  # Object from constructor
        # Call test method
        callList = mock_obj_ret.method_calls
        self.assertEqual(5, len(callList))
        print(callList)
</code></pre>
<p>The output of <code>callList</code> is something like this:</p>
<pre><code>[call.mymethod0(), call.mymethod1(MYARG1, key=val), call.mymethod2(myarg1, myarg2), call.mymethod3(myarg3), ...]
</code></pre>
<p>I need to test calls in <code>callList</code> for the names of the methods called (in their proper order) and the arguments used. How can I access the method names and parameters passed? I can't find it in <a href="https://docs.python.org/3/library/unittest.mock.html#unittest.mock.Mock.method_calls" rel="nofollow noreferrer">the documentation</a>.</p>
<p>Maybe something like this:</p>
<pre><code>self.assertEqual("mymethod2", callList[1].name) # Doesn't work
</code></pre>
<p>I should note some the methods I want to test for are not mocks. For <code>print(vars(callList[1]))</code>, it gives:</p>
<pre><code>{'_mock_name': None, '_mock_parent': None, '_mock_from_kall': True}
</code></pre>
<p>It is also unhelpful.</p>
<p>I can get the args and kwargs from a single call like this:</p>
<pre><code>print(mock_obj_ret.method_calls[1].args)
print(mock_obj_ret.method_calls[1].kwargs)
</code></pre>
<p>But I still need to know how to get the <em>name</em> of the method called (as a string).</p>
| <python><python-unittest> | 2023-10-25 16:45:03 | 1 | 7,417 | JohnK |
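For reference, each entry in `method_calls` (and `mock_calls`) is a call object that behaves like a `(name, args, kwargs)` tuple, so the method name is reachable by unpacking or indexing, without touching private attributes. A self-contained sketch with invented method names:

```python
from unittest.mock import Mock

mock_obj = Mock()
mock_obj.mymethod0()
mock_obj.mymethod1("MYARG1", key="val")

calls = mock_obj.method_calls
for c in calls:
    name, args, kwargs = c   # every call object unpacks into its three parts
    print(name, args, kwargs)

print(calls[1][0])  # indexing also works: element 0 is the name
```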
77,361,396 | 3,091,161 | Why does id() of a method change after I call the hex() function? | <p>The <code>id()</code> function gives different values for the same method, and I don't know why.</p>
<p>Here's a minimal reproducible example:</p>
<pre><code>>>> class Test():
...     def test(self):
...         pass
...
>>> a = Test()
>>> id(a.test), id(a.test), id(a.test), hex(1), id(a.test)
(1662807685632, 1662807685632, 1662807685632, '0x1', 1662813074048)
                                                     ^^^^^^^^^^^^^ why was it changed?
</code></pre>
<p>This gives weird effects:</p>
<pre class="lang-py prettyprint-override"><code>>>> id(a.test) == id(a.test)
True
>>> hex(id(a.test)) == hex(id(a.test))
False
</code></pre>
<p>Is it intended? Tested with Python 3.11.4.</p>
| <python> | 2023-10-25 16:38:55 | 0 | 1,693 | enkryptor |
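What's happening: every attribute access `a.test` constructs a fresh bound-method object, and `id()` just reports its address. In `id(a.test) == id(a.test)` each temporary is freed before the next is built, so the allocator can reuse the address; calling `hex(1)` in between perturbs allocation enough that the next bound method lands elsewhere. Holding references makes the distinct identities visible:

```python
class Test:
    def test(self):
        pass

a = Test()

# Two live references: two distinct bound-method objects, two distinct ids.
m1 = a.test
m2 = a.test
print(m1 is m2)          # False
print(id(m1) == id(m2))  # False: both objects are alive, so no address reuse

# Bound methods still compare equal, since they wrap the same function and self.
print(m1 == m2)          # True
```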
77,361,341 | 101,647 | python pint : Create a "pace" metric from velocity metrics | <p>I am using the <code>pint</code> library in Python. It has support for unit conversions, which work great.</p>
<p>For example:</p>
<pre><code>units.Quantity(2, "mph").to("kph")
</code></pre>
<p>I would like to understand how I could define a unit <code>pacem</code> - which would be "minutes per mile", or <code>pacek</code> which would be "minutes per km"</p>
<p>This feels like it should be do-able in the units definitions, without having to write any python code at all, in the same way as I could write</p>
<pre><code>units.define("rps = revolution / second")
</code></pre>
| <python><pint> | 2023-10-25 16:29:35 | 1 | 2,197 | time4tea |
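Pace is the reciprocal of speed, so even before touching unit definitions the conversion is 60 / speed_mph minutes per mile; in pint the same reciprocal can presumably be taken directly, e.g. `(1 / units.Quantity(6, "mph")).to("minute / mile")`. The plain arithmetic, as a sanity check:

```python
def mph_to_min_per_mile(speed_mph: float) -> float:
    """Convert a speed in miles per hour to a pace in minutes per mile."""
    return 60.0 / speed_mph

def kph_to_min_per_km(speed_kph: float) -> float:
    """Convert a speed in km per hour to a pace in minutes per km."""
    return 60.0 / speed_kph

print(mph_to_min_per_mile(6))   # 10.0 -> a 10 min/mile pace
print(kph_to_min_per_km(12))    # 5.0  -> a 5 min/km pace
```

Defining `units.define("pacem = minute / mile")` should work the same way as the rps example, but converting a velocity *to* it still requires taking 1/velocity first, since pace and speed have reciprocal dimensions.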
77,361,304 | 3,787,836 | Fast direct pixel access in python, go or julia | <p>I wrote a small program that creates random noise and displays it full screen (5K resolution). I used pygame for it. However, the refresh rate is horribly slow. Both <code>surfarray.blit_array</code> and the random generation take a lot of time. Is there any way to speed this up? I am also open to using Julia or Go instead, or PsychoPy or Octave with Psychtoolbox (however, those do not seem to work under Linux/Wayland).</p>
<p>Here is what I wrote:</p>
<pre class="lang-py prettyprint-override"><code>import pygame
import numpy as N
import pygame.surfarray as surfarray
from numpy import int32, uint8, uint

def main():
    pygame.init()
    #flags = pygame.OPENGL | pygame.FULLSCREEN  # OpenGL does not want to work with surfarray
    flags = pygame.FULLSCREEN
    screen = pygame.display.set_mode((0, 0), flags=flags, vsync=1)
    w, h = screen.get_width(), screen.get_height()
    clock = pygame.time.Clock()
    font = pygame.font.SysFont("Arial", 18, bold=True)

    # define a variable to control the main loop
    running = True

    def fps_counter():
        fps = str(int(clock.get_fps()))
        fps_t = font.render(fps, 1, pygame.Color("RED"))
        screen.blit(fps_t, (0, 0))

    # main loop
    while running:
        # event handling, gets all event from the event queue
        for event in pygame.event.get():
            # only do something if the event is of type QUIT
            if event.type == pygame.QUIT:
                # change the value to False, to exit the main loop
                running = False
            elif event.type == pygame.KEYDOWN:
                if event.key == pygame.K_ESCAPE:
                    pygame.quit()
                    return

        array_img = N.random.randint(0, high=100, size=(w, h, 3), dtype=uint)
        surfarray.blit_array(screen, array_img)
        fps_counter()
        pygame.display.flip()
        clock.tick()
        #print(clock.get_fps())

# run the main function only if this module is executed as the main script
# (if you import this as a module then nothing is executed)
if __name__ == "__main__":
    # call the main function
    main()
</code></pre>
<p>I need a refresh rate of at least 30 fps for it to be useful</p>
| <python><numpy><pygame-surface><psychopy><psychtoolbox> | 2023-10-25 16:24:56 | 1 | 359 | Loreno Heer |
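Two cheap wins before reaching for another language: generate uint8 instead of uint (the platform `uint` here is 64-bit, so every frame carries 8x the bytes it needs), and use numpy's newer Generator API, which is typically faster than the legacy `np.random.randint`. A hedged sketch of the per-frame work (the screen size is an assumption):

```python
import numpy as np

w, h = 5120, 2880                  # assumed 5K-ish resolution
rng = np.random.default_rng()

# uint8 is all a surface needs per channel; 1/8th the bytes of uint64
# means far less work for both the RNG and surfarray.blit_array.
frame = rng.integers(0, 100, size=(w, h, 3), dtype=np.uint8)

print(frame.dtype, frame.nbytes)   # uint8, w*h*3 bytes
```

For grey noise, generating a (w, h) array once per frame and broadcasting it across the three channels cuts the random bytes by another factor of three; beyond that, drawing the noise on the GPU (the commented-out OPENGL flag, via a small shader) is the usual way to reach 5K at 30+ fps.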
77,361,097 | 12,492,675 | How to merge two data frames to show unique count of rows? | <p>I have 2 Dataframes in my project as shown below.</p>
<pre><code>Table 1

Order_id  Prod_id  Price
A         AA       5
B         BB       10
C         CC       15
D         AA       5
E         CC       20

Table 2

Prod_id  Price
AA       5
BB       10
CC       15
CC       20
</code></pre>
<p>My target table is as below:</p>
<pre><code>Table 3:

Prod_id  Price  Order_Count
AA       5      2
BB       10     1
CC       15     1
CC       20     1
</code></pre>
<p>This is what I've tried so far.</p>
<pre><code>merged_table = pd.merge(order_items[['product_id','price']].explode('price'),order_items[['product_id','price','order_id']], on=['product_id','price'])
merged_table = merged_table.drop_duplicates()
merged_table
order_counts = merged_table.groupby(['product_id', 'price']).size().reset_index()
order_counts.rename(columns={0:'order_count'},inplace=True)
order_counts.sort_values(by='order_count',ascending=False)
order_counts['No_of_orders'].sum()
</code></pre>
<p>My code is working but the order count is not matching up with the expected count from the dataset. The target table should be able to scan the Prod_id as well as the price of the specific product. If the Prod_id and the price is the same, then it should eliminate the row. But if the prod_id is the same but the price for the same product is different, then it should consider it to be a unique row.</p>
<p>Not sure where I'm going wrong.</p>
| <python><dataframe><merge><duplicates> | 2023-10-25 15:54:43 | 2 | 1,326 | Simran Aswani |
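The target table can be built from Table 1 alone: count orders per (Prod_id, Price) pair, so identical pairs collapse while same-product/different-price rows stay separate. In pandas that is a single `groupby(['Prod_id', 'Price']).size().reset_index(name='Order_Count')` with no merge needed; the counting rule itself, shown with the standard library:

```python
from collections import Counter

# (Order_id, Prod_id, Price) rows from Table 1
orders = [
    ("A", "AA", 5), ("B", "BB", 10), ("C", "CC", 15),
    ("D", "AA", 5), ("E", "CC", 20),
]

# Key on (Prod_id, Price): identical pairs merge, differing prices stay apart.
order_count = Counter((prod, price) for _, prod, price in orders)
for (prod, price), n in sorted(order_count.items()):
    print(prod, price, n)
```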
77,360,937 | 859,141 | Passing Parent PK to ModelForm in Class Based Create and Update View | <p>I'm updating function-based views to class-based views and having issues re-establishing the link between campaigns and books. My Book model has a foreign key link to Campaign.</p>
<pre><code>campaign = models.ForeignKey(Campaign, on_delete=models.DO_NOTHING)
</code></pre>
<p>I have a ModelForm where I set the campaign_id and would like to get this from the CreateView.</p>
<pre><code>class BookForm(forms.ModelForm):
    def __init__(self, *args, **kwargs):
        author = kwargs.pop('author', None)
        campaign_id = kwargs.pop('campaign_id', None)
        super(BookForm, self).__init__(*args, **kwargs)
        if campaign_id:
            self.fields['campaign_id'].initial = campaign_id

    campaign_id = forms.CharField(widget=forms.HiddenInput())
</code></pre>
<p>I followed <a href="https://stackoverflow.com/questions/65498350/initialize-modelform-fields-with-url-parameters-in-cbv">this</a> using dispatch and get_form_kwargs and my CreateView looks like</p>
<pre><code>class BookCreateView(generic.CreateView):
    model = Book
    template_name = 'PromoManager/book_form.html'
    form_class = BookForm
    success_url = '/profile'
    campaign_id = None

    # Retrieves the campaign_id from url
    def dispatch(self, request, *args, **kwargs):
        self.campaign_id = kwargs.get("pk")
        return super().dispatch(request, *args, **kwargs)

    ## Sends building id to the form
    def get_form_kwargs(self, *args, **kwargs):
        kwargs = super().get_form_kwargs(*args, **kwargs)
        kwargs["campaign_id"] = self.campaign_id
        return kwargs

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        context['campaign'] = Campaign.objects.get(pk=self.campaign_id)
        context['title_label'] = "Create"
        return context

    def form_valid(self, form):
        instance = form.save(commit=False)
        instance.author = self.request.user
        instance.campaign = Campaign.objects.get(id=form.cleaned_data['campaign_id'])
        instance.save()
        form.save_m2m()
        return super().form_valid(form)
</code></pre>
<p>But this breaks the UpdateView, which relies on the same form, since the PK passed in the update view URL is the book PK. My UpdateView looks like:</p>
<pre><code>class BookUpdateView(generic.UpdateView):
    model = Book
    template_name = 'PromoManager/book_form.html'
    form_class = BookForm
    success_url = '/profile'

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        context['title_label'] = "Update"
        return context

    def form_valid(self, form):
        print(form)
        form.save(commit=True)
        return super().form_valid(form)

    def form_invalid(self, form):
        print(form.errors)
<p>How can I pass a Campaign Id or instance to the form to populate it upon create and then maintain the value during any updates. This value should not be changeable. TY</p>
| <python><django><modelform> | 2023-10-25 15:33:07 | 1 | 1,184 | Byte Insight |
77,360,933 | 345,660 | Efficient way to convert 4 billion hex integers to decimal integer in R or python | <p>I have a csv file with about 4.4 billion rows. It has 25 categorical columns, all of which are hexadecimal encoded integers, such as <code>EE6B2800</code> and <code>10642AC00</code>.</p>
<p>I have a machine with 3TB of RAM and used <code>data.table::fread</code> to load the dataset. R reads these values as strings (I'd be happy to use pandas instead). I would like to convert these strings to integers.</p>
<p>One way to do this would be to decode the hex values to 64-bit integers, e.g.:</p>
<ul>
<li>"10642AC00" → 4400000000</li>
<li>"EE6B2800" → 4000000000</li>
</ul>
<p>One issue with this approach is that I can't quite do it with the raw strings. <code>bit64::as.integer64(c(0x10642AC00, 0xEE6B2800))</code> works, but I can't figure out how to convert the strings that I have. Another issue is that 64 bit ints aren't supported by everything in R, and I'd really rather have 32 bit ints.</p>
<p>Another way to do this is to convert each unique value to sequential integers. E.g.</p>
<ul>
<li>"10642AC00" → 1</li>
<li>"EE6B2800" → 2</li>
</ul>
<p>I can do this with <code>as.integer(factor(c("10642AC00", "EE6B2800")))</code>, which works but is very slow. I've been running it for a day and a half and it's still chugging through the first column. I have no idea if this will ever finish or if it will just blow up the RAM on my machine.</p>
<p>Is there a faster way to do something like <code>as.integer(factor(c("10642AC00", "EE6B2800")))</code>? I'd be happy for an approximate output, e.g. sometimes there's a few gaps between integers or sometimes different hex values end up as the same integer.</p>
<p>My main requirement is that I want 32 bit ints, that mostly (but not necessarily perfectly) map 1-to-1 to the original hex strings. Is there a faster and more efficient way to do something like this? I'm fine with an operation that takes many hours per column, but would prefer not to have something that takes many days per column.</p>
<p>Is there a good library in R for fast, efficient integer hashing of string values? Is there something clever I can do with the hex values?</p>
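<p>For what it's worth, both options have a direct Python sketch (the values below are just the examples from above; plain <code>int(x, 16)</code> covers the decode route, and a first-seen dictionary covers the sequential-code route):</p>

```python
# Decode hex strings, or map them to small sequential codes.
vals = ["10642AC00", "EE6B2800", "EE6B2800"]

# Option 1: straight hex -> int (values may exceed 32 bits)
ints = [int(v, 16) for v in vals]

# Option 2: first-seen sequential codes, guaranteed to fit in int32
codes = {}
seq = [codes.setdefault(v, len(codes) + 1) for v in vals]

print(ints)  # [4400000000, 4000000000, 4000000000]
print(seq)   # [1, 2, 2]
```

<p>In pandas the second option is essentially what <code>pd.factorize</code> does in vectorised form, which is typically much faster than R's <code>factor</code> on columns this size.</p>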
| <python><r><data.table><large-data><categorical-data> | 2023-10-25 15:32:45 | 1 | 30,431 | Zach |
77,360,874 | 9,185,312 | SQLAlchemy Python backend, after a few hours of no use will give a 500 error on any request, then it will work fine | <p>Every time I start my backend up and wait a few hours, then come back to it, the first request gives an error like this:</p>
<pre><code>sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2006, "MySQL server has gone away (ConnectionResetError(10054, 'An existing connection was forcibly closed by the remote host', None, 10054, None))")
</code></pre>
<p>After that everything goes back to normal.
How can I make it so the backend won't go away after a few hours of non use?</p>
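<p>For context, the usual mitigation is to let the pool test or recycle connections before handing them out; a minimal sketch (the SQLite URL here is only a stand-in for the real <code>mysql+pymysql://</code> URL):</p>

```python
from sqlalchemy import create_engine

engine = create_engine(
    "sqlite://",          # stand-in; use your mysql+pymysql:// URL here
    pool_pre_ping=True,   # ping (and transparently replace) stale connections
    pool_recycle=1800,    # retire connections older than 30 min; keep this
)                         # below MySQL's wait_timeout

with engine.connect() as conn:
    result = conn.exec_driver_sql("select 1").scalar()
print(result)  # 1
```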
| <python><mysql><sqlalchemy> | 2023-10-25 15:25:40 | 1 | 1,180 | Alex Stroescu |
77,360,860 | 13,146,029 | ImportError: cannot import name 'db' from partially initialized module 'application' | <p>I'm trying to set up a Flask app factory for the first time. I keep getting the following error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/grahammorby/Documents/GitHub/tagr-api/venv/lib/python3.12/site-packages/flask/cli.py", line 219, in locate_app
__import__(module_name)
File "/Users/grahammorby/Documents/GitHub/tagr-api/wsgi.py", line 1, in <module>
from application import init_app
File "/Users/grahammorby/Documents/GitHub/tagr-api/application/__init__.py", line 9, in <module>
from application.users import users_blueprint
File "/Users/grahammorby/Documents/GitHub/tagr-api/application/users.py", line 3, in <module>
from application.models import User
File "/Users/grahammorby/Documents/GitHub/tagr-api/application/models.py", line 2, in <module>
from . import db, ma
ImportError: cannot import name 'db' from partially initialized module 'application' (most likely due to a circular import) (/Users/grahammorby/Documents/GitHub/tagr-api/application/__init__.py)
</code></pre>
<p>So I have a file called wsgi.py</p>
<pre class="lang-py prettyprint-override"><code>from application import init_app
app = init_app()
if __name__ == "__main__":
app.run(host='0.0.0.0')
</code></pre>
<p>and then I have an application folder which has a <strong>__init__.py</strong> file in it</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask
from config import Config
from flask_sqlalchemy import SQLAlchemy
from flask_marshmallow import Marshmallow
from flask_bcrypt import Bcrypt
from flask_jwt_extended import JWTManager
from application.users import users_blueprint
from application.apps import apps_blueprint
# Globally accessible libraries
db = SQLAlchemy()
ma = Marshmallow()
bcrypt = Bcrypt()
JWTManager = JWTManager()
def init_app():
app = Flask(__name__, instance_relative_config=False)
app.config.from_object(Config)
# Setup plugins
db.init_app(app)
ma.init_app(app)
bcrypt.init_app(app)
JWTManager.init_app(app)
with app.app_context():
# Blueprints
return app
</code></pre>
<p>this is models.py:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import Column, Integer, String
from . import db, ma
class App(db.Model):
__tablename__ = "apps"
id = Column(Integer, primary_key=True)
app_name = Column(String(64))
icon = Column(String(255))
class AppSchema(ma.Schema):
class Meta:
fields = ("id", "app_name", "icon")
app_schema = AppSchema()
apps_schema = AppSchema(many=True)
</code></pre>
<p>So I have tried to remove the db altogether to see if it builds, but it just keeps giving me the same error. I'm loaded into my venv and everything seems like it should work.</p>
<pre class="lang-py prettyprint-override"><code>def init_app():
app = Flask(__name__, instance_relative_config=False)
app.config.from_object(Config)
# Setup plugins
db.init_app(app)
ma.init_app(app)
bcrypt.init_app(app)
JWTManager.init_app(app)
Migrate = init_app(app, db)
from models import User, App
with app.app_context():
# Blueprints
from application.users import users_blueprint
from application.apps import apps_blueprint
app.register_blueprint(users_blueprint)
app.register_blueprint(apps_blueprint)
return app
</code></pre>
| <python><flask> | 2023-10-25 15:24:14 | 1 | 317 | Graham Morby |
77,360,698 | 1,914,781 | print dataframe contains "\n" chars | <p>I have a dataframe which contains "\n" characters to format text; how can I get output as below:</p>
<pre><code> key val
0 A x
y
z
1 B apple
orange
2 C good
bad
best
</code></pre>
<p>Current demo code:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{'key': ['A','B','C'],
'val': ['x\ny\nz','apple\norange','good\nbad\nbest']})
print(df)
</code></pre>
<p>Output:</p>
<pre><code>  key              val
0   A          x\ny\nz
1   B    apple\norange
2   C  good\nbad\nbest
</code></pre>
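<p>A sketch of one workaround: pandas never renders embedded newlines inside a cell, so split each multi-line cell into extra rows and blank the key on the continuation rows:</p>

```python
import pandas as pd

df = pd.DataFrame(
    {'key': ['A', 'B', 'C'],
     'val': ['x\ny\nz', 'apple\norange', 'good\nbad\nbest']})

# one row per line of text in 'val'
flat = df.assign(val=df['val'].str.split('\n')).explode('val')
# blank out the key on continuation rows (same original index repeats)
flat.loc[flat.index.duplicated(), 'key'] = ''
print(flat.to_string(index=False))
```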
| <python><pandas> | 2023-10-25 15:02:21 | 2 | 9,011 | lucky1928 |
77,360,636 | 2,114,608 | Python JustPy QTable custom row color | <p>I am creating QTable:</p>
<pre><code>some_table = jp.QTable(
hide_bottom=True,
title=f'Some Table Title',
data=[],
columns=some_table_columns,
a=some_table_div,
dense=True,
style="margin-bottom:25px"
)
</code></pre>
<p>next, I have</p>
<pre><code>for some_item in some_item_str_list:
new_row = {"name": some_item}
some_table.data.append(new_row)
</code></pre>
<p>how can I make some rows have a different style, e.g. bold or red, depending on some condition, e.g.</p>
<pre><code>for some_item in some_item_str_list:
# if some_item == "value1":
# make it red
new_row = {"name": some_item}
some_table.data.append(new_row)
</code></pre>
| <python><quasar><justpy> | 2023-10-25 14:54:36 | 0 | 2,027 | PrzemysΕaw Kalita |
77,360,610 | 274,460 | Can Flask and Quart avoid the redirect when a default path parameter is used? | <p>Consider this web service:</p>
<pre><code>from flask import Flask, jsonify, Blueprint
app = Flask(__name__)
@app.route("/<int:d>")
@app.route("/", defaults={"d": 1})
def a(d):
return jsonify({"d": d})
</code></pre>
<p>If I call it with a parameter other than the default, it returns the correct result:</p>
<pre><code>$ curl http://localhost:5000/2
{
"d": 2
}
</code></pre>
<p>But if I call it with the default parameter value, it gives me a redirect to the version without the parameter:</p>
<pre><code>$ curl http://localhost:5000/1
<!doctype html>
<html lang=en>
<title>Redirecting...</title>
<h1>Redirecting...</h1>
<p>You should be redirected automatically to the target URL: <a href="http://localhost:5000/">http://localhost:5000/</a>. If not, click the link.
</code></pre>
<p>Is there a good way to avoid this behaviour?</p>
<p>The service is being accessed by an automated client that is too stupid to understand the redirect and which dies horribly on what should be a straightforward RPC.</p>
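<p>For reference, the redirect comes from Werkzeug's URL map, which canonicalises any URL whose path parameters match a rule's defaults. A sketch that switches that behaviour off (the <code>redirect_defaults</code> flag lives on <code>app.url_map</code>; worth verifying against your Werkzeug version):</p>

```python
from flask import Flask, jsonify

app = Flask(__name__)
# Werkzeug redirects /1 -> / because d=1 equals the rule's default;
# disabling redirect_defaults on the underlying Map stops that.
app.url_map.redirect_defaults = False

@app.route("/<int:d>")
@app.route("/", defaults={"d": 1})
def a(d):
    return jsonify({"d": d})

resp = app.test_client().get("/1")
print(resp.status_code, resp.get_json())
```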
| <python><flask> | 2023-10-25 14:52:08 | 1 | 8,161 | Tom |
77,360,592 | 22,538,132 | How to create an OrientedBoundingBox from Depth image and bounding box | <p>I want to crop a point cloud using the segmented mask (rectangle) and depth image (I can assume there is no rotation for now).</p>
<pre class="lang-py prettyprint-override"><code>import open3d as o3d
## dummy pointcloud
demo_icp_pcds = o3d.data.DemoICPPointClouds()
pcd = o3d.io.read_point_cloud(demo_icp_pcds.paths[0])
## dummy depth mask
depth_mask = np.ones((640, 480), dtype=np.uint8)
## Crop PCD
## mask boundaries: [(xmin, xmax), (y_min, y_max)]
rect = [(20, 50), (70, 75)]
o_box = o3d.geometry.OrientedBoundingBox()
o_box.center = [0.0, 0.0, 0.0]
o_box.extent = [1.0, 2.0, 3.0]
o_box.R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
cropped_pcd = pcd.crop(o_box)
</code></pre>
<p>I want to feed the info of the rect (x, y) and depth z to the oriented bounding box. Can anybody please tell me how I can do that? Thanks.</p>
| <python><geometry><crop><point-clouds><open3d> | 2023-10-25 14:50:02 | 1 | 304 | bhomaidan90 |
77,360,456 | 6,626,531 | Import order in python for specific packages | <p>I ran pylint on my codebase and it complained that <code>from config import ConfigParser</code> was defined before <code>from pathlib import Path</code>. Why is this the case?</p>
<p>When I ran isort of the file, it agreed with pylint. From what I understand it groups imports by type</p>
<ul>
<li>Native to python</li>
<li>General 3rd party</li>
<li>local packages</li>
</ul>
<p>And in that it groups them by whether or not you use <code>from</code> or <code>import</code> and after that it wants them alphabetical.</p>
<p>I would have thought that config would come before pathlib, but this is not the case. What is happening here?</p>
<h1>Before running isort</h1>
<pre class="lang-py prettyprint-override"><code>from config import ConfigParser
from pathlib import Path
</code></pre>
<h1>After running isort</h1>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path
from config import ConfigParser
</code></pre>
| <python><python-3.x><pylint><isort> | 2023-10-25 14:34:00 | 2 | 1,975 | Micah Pearce |
77,360,380 | 559,827 | How to read a single CLI keypress from Python 3 on Unix without requiring ENTER or root privileges? | <p>The problem described in the title is, apparently, incredibly hard. Every solution I have found for it (and there are <em><strong>dozens</strong></em> of them in SO alone) is inadequate for at least one of the following reasons:</p>
<ul>
<li>Python 2-specific;</li>
<li>unnecessarily portable/cross-platform (hence having unwarranted dependencies);</li>
<li>requiring a graphical UI;</li>
<li>requiring the user to press <kbd>Enter</kbd> before Python can see the input;</li>
<li>requiring root privileges to install or run.</li>
</ul>
<hr />
<p>Is there a way for me, <em>as a non-root programmer</em>, to implement a Python 3 script, <em>runnable by non-root users, on the terminal CLI, in a Unix environment</em>, that will</p>
<ol>
<li>print a prompt to stdout,</li>
<li>then read a <em><strong>single</strong></em> keypress from the user, and</li>
<li>then perform an action depending on the value of the key that the user pressed</li>
</ol>
<p>?</p>
<p>I stress that I cannot use the (non-standard) <code>keyboard</code> module for this, because in order to install this module so that it's usable by non-root users one must have root privileges, which I don't have. Also, I do not want to saddle the script with a dependency solely to make it runnable on Windows, or any other non-Unix OS. I.e. for my purposes here, minimizing dependencies is more important than cross-platform operability.</p>
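<p>For context, the standard-library route on Unix is <code>termios</code> + <code>tty</code>, which needs neither root nor third-party packages. A sketch (the <code>fd</code> parameter is an addition of mine so the function can be exercised against a pty instead of a real terminal):</p>

```python
import os
import sys
import termios
import tty

def read_key(fd=None):
    """Read one keypress from a terminal-like fd without waiting for Enter."""
    if fd is None:
        fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)          # remember current tty settings
    try:
        tty.setraw(fd)                   # raw mode: byte-at-a-time, no echo
        ch = os.read(fd, 1).decode(errors="replace")
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)  # always restore
    return ch
```

<p>A plain dispatch on the returned character (<code>if key == 'y': ...</code>) then covers step 3; note this approach is Unix-only by construction.</p>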
| <python><python-3.x><unix><keyboard><keyboard-events> | 2023-10-25 14:25:55 | 1 | 35,691 | kjo |
77,360,322 | 5,013,084 | Set shape of only some points in altair scatter plot | <p>I am trying to set the shape for one variable in this altair scatterplot (since I am dealing with live data, I do not know every possible domain value), therefore I am trying to set the value for only two of those values.</p>
<pre><code>import altair as alt
from vega_datasets import data
source = data.cars()
alt.Chart(source).mark_point(size=60).encode(
x='Horsepower',
y='Miles_per_Gallon',
shape=alt.Shape(
'Origin',
# scale=alt.Scale(
# domain=['USA', 'Europe'],
# range=['circle', 'square']
# )
)
)
</code></pre>
<p>This returns the following plot:</p>
<p><a href="https://i.sstatic.net/psaOE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/psaOE.png" alt="enter image description here" /></a></p>
<p>If however, I am uncommenting the scale parameter, the third element in the legend and the plot disappears:</p>
<p><a href="https://i.sstatic.net/fyMxj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fyMxj.png" alt="enter image description here" /></a></p>
<p>How can I set the shape for a subset of my variable (and not lose the remaining variables in the plot)?</p>
<p>Thank you!</p>
| <python><altair> | 2023-10-25 14:18:34 | 1 | 2,402 | Revan |
77,360,224 | 293,420 | Bokeh standalone embed serve folder for content (v 3.3.0 2023) | <p>I'm trying to run a server on a remote location to render a plot with circles so that the tooltip includes an image of the item hovered on.</p>
<p>I'm using Bokeh 3.3.0 and tornado. This is my code:</p>
<pre><code>from bokeh.layouts import column
from bokeh.models import ColumnDataSource,HoverTool
from bokeh.plotting import figure
from bokeh.server.server import Server
def bkapp(doc):
#get necessary data...
dfsource=ColumnDataSource(df)
hover = HoverTool(tooltips ="""
<div>
<div>
<img
src="@_id.png"
style="float: left; margin: 0px 5px 5px 0px;"
border="2"
></img>
</div>
<div>
<span style="font-size: 15px;">@_id</span>
<span style="font-size: 10px; color: #696;">(@ucx, @ucy)</span>
</div>
""")
# create a plot
p = figure(sizing_mode="stretch_width", max_width=2000, height=800,
tools=["pan", 'wheel_zoom',"tap","reset"])
p.add_tools(hover)
circle = p.circle(source=dfsource,x="ucx",y="ucy")
doc.add_root(p)
server = Server({'/': bkapp}, num_procs=4)
server.start()
if __name__ == '__main__':
print('Opening Bokeh application on http://localhost:5006/')
server.io_loop.add_callback(server.show, "/")
server.io_loop.start()
</code></pre>
<p>The plot works as intended but the images are not accessible.</p>
<p>The folder structure is simple:</p>
<pre><code>-cwd
--myapp.py
--imagefolder/
</code></pre>
<p>I have tried setting src as:</p>
<ul>
<li><code>src="imagefolder/@_id.png"</code></li>
<li><code>src="file://@_id.png"</code></li>
<li><code>src="http://localhost:5006/@_id.png"</code></li>
</ul>
<p>and other combinations, but I think the problem is that the folder is not being served. Can someone tell me how I can tell tornado, through the Bokeh API, to please serve this folder of images so that I can do something like http://localhost:5006/imagefolder/_id.png and be able to see the image?</p>
| <python><server><tooltip><bokeh><tornado> | 2023-10-25 14:05:52 | 1 | 3,654 | lesolorzanov |
77,360,174 | 2,727,655 | Map BERT token indices to Spacy token indices | <p>I'm trying to make BERT's (<code>bert-base-uncased</code>) tokenization token indices (not ids, token indices) map to spaCy's tokenization token indices. In the following example, my approach doesn't work because spaCy's tokenization is a bit more complex than I anticipated. Thoughts on solving this?</p>
<pre><code>import spacy
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
nlp = spacy.load("en_core_web_sm")
sent = nlp("BRITAIN'S railways cost £20.7bn during the 2020-21 financial year, with £2.5bn generated through fares and other income, £1.3bn through other sources and £16.9bn from government, figures released by the regulator the Office of Rail and Road (ORR) on November 30 revealed.")
# Get spacy word index to BERT token indice mapping
wd_to_tok_map = [wd.i for wd in sent for el in tokenizer.encode(wd.text, add_special_tokens=False)]
len(sent) # 55
len(wd_to_tok_map) # 67 <- Should be 65
input_ids = tokenizer.encode(sent.text, add_special_tokens=False)
len(input_ids) # 65
</code></pre>
<p>I can print both tokenizations and look for perfect text matches, but the problem I run into is what if a word repeats twice in the tokenization? Looking for a word match will return two indices at different sections of the sentence.</p>
<pre><code>[el.text for el in sent]
['BRITAIN', "'S", 'railways', 'cost', '£', '20.7bn', 'during', 'the', '2020', '-', '21', 'financial', 'year', ',', 'with', '£', '2.5bn', 'generated', 'through', 'fares', 'and', 'other', 'income', ',', '£', '1.3bn', 'through', 'other', 'sources', 'and', '£', '16.9bn', 'from', 'government', ',', 'figures', 'released', 'by', 'the', 'regulator', 'the', 'Office', 'of', 'Rail', 'and', 'Road', '(', 'ORR', ')', 'on', 'November', '30', 'revealed', '.']
[tokenizer.ids_to_tokens[el] for el in input_ids]
['britain', "'", 's', 'railways', 'cost', '£2', '##0', '.', '7', '##bn', 'during', 'the', '2020', '-', '21', 'financial', 'year', ',', 'with', '£2', '.', '5', '##bn', 'generated', 'through', 'fares', 'and', 'other', 'income', ',', '£1', '.', '3', '##bn', 'through', 'other', 'sources', 'and', '£1', '##6', '.', '9', '##bn', 'from', 'government', ',', 'figures', 'released', 'by', 'the', 'regulator', 'the', 'office', 'of', 'rail', 'and', 'road', '(', 'orr', ')', 'on', 'november', '30', 'revealed', '.']
</code></pre>
<p>decode() doesn't seem to give me what I want, as I'm after the indices.</p>
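<p>For what it's worth, the usual approach is to align through character offsets rather than token text, which makes repeated words a non-issue (a fast tokenizer's <code>return_offsets_mapping</code> gives the BERT-side spans, and spaCy's <code>token.idx</code> gives the spaCy-side starts). A self-contained sketch, with hand-written offsets standing in for both libraries:</p>

```python
def align_tokens(bert_offsets, spacy_spans):
    """Map each BERT token to the index of the spaCy token containing its start."""
    mapping = []
    for start, _end in bert_offsets:
        for i, (s, e) in enumerate(spacy_spans):
            if s <= start < e:
                mapping.append(i)
                break
    return mapping

# "playing chess": spaCy tokens ["playing", "chess"],
# BERT wordpieces ["play", "##ing", "chess"]
spacy_spans = [(0, 7), (8, 13)]
bert_offsets = [(0, 4), (4, 7), (8, 13)]
print(align_tokens(bert_offsets, spacy_spans))  # [0, 0, 1]
```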
| <python><mapping><spacy><tokenize><bert-language-model> | 2023-10-25 13:58:12 | 1 | 554 | lrthistlethwaite |
77,360,126 | 1,107,595 | Same frame multiple times when using select and vframes | <p>I'm using ffmpeg in Python via the ffmpeg-python wrapper</p>
<p>I run the following:</p>
<pre><code>filename = "something.mp4"
frame_number = 5 # It works fine if I ask to start at 0
out, err = (
ffmpeg.input(filename)
.filter_('fps', fps=10)
.filter_('select', 'gte(n,{})'.format(frame_number))
.output('pipe:', format='rawvideo', pix_fmt='rgb24', vframes=5)
.run(capture_stdout=True, capture_stderr=True)
)
print(out)
# Then I parse it into numpy array
</code></pre>
<p>My problem is that when I put any value other than 0 for my start frame_number, I get 5 identical frame which are the frame at my frame number.</p>
<p>It looks like <code>vframes=5</code> return 5 different frames as long as my select filter is not applied.</p>
<p>But as soon as we select frames >= N, then it returns that Nth frame 5 times</p>
<p>What have I done wrong?
How should I do what I'm trying to do?</p>
<p>EDIT:</p>
<p>To help visualize what I mean here are a few examples:</p>
<p>frame_number = 0, vframes=5: <code>[0,1,2,3,4]</code> GOOD</p>
<p>frame_number = 5, vframes=1: <code>[5]</code> GOOD</p>
<p>frame_number = 5, vframes=5: <code>[5,5,5,5,5]</code> BAD, expected: <code>[5,6,7,8,9]</code></p>
| <python><ffmpeg> | 2023-10-25 13:52:34 | 1 | 2,538 | BlueMagma |
77,360,066 | 4,704,065 | Print elements when the difference is greater/lesser than a condition in numpy array | <p>In a numpy array, I have to print out the elements where the difference from the previous element is greater than 1 or less than -1.
Basically, it should print an element when its absolute difference from the previous element exceeds 1.</p>
<pre><code>Input : arr=np.array([1, 3, 3,4,7,4,1,6,7,8,9,5,13])
Output: 3,7,4,1,6,5,13
</code></pre>
<p>This is what I tried , but then it does not give correct output . Any pointers would be helpful</p>
<pre><code>arr=np.array([1, 3, 3,4,7,4,1,6,7,8,9,5,13])
z=np.diff(arr)
list=[]
for a in arr:
for q in z:
if q > 1:
list1.append(arr1[a])
print(arr[q+1])
</code></pre>
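<p>For comparison, a vectorised sketch (interpreting the condition as an absolute difference from the previous element greater than 1, which matches the expected output above):</p>

```python
import numpy as np

arr = np.array([1, 3, 3, 4, 7, 4, 1, 6, 7, 8, 9, 5, 13])
# keep each element whose jump from the previous one exceeds 1 in magnitude
out = arr[1:][np.abs(np.diff(arr)) > 1]
print(out.tolist())  # [3, 7, 4, 1, 6, 5, 13]
```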
| <python><numpy><numpy-ndarray> | 2023-10-25 13:45:49 | 3 | 321 | Kapil |
77,360,061 | 7,055,769 | RecursionError: maximum recursion depth exceeded while calling a Python object when updating choice field | <p>model:</p>
<pre><code>StatusChoices = (
("TODO", "todo"),
("DOING", "doing"),
("DONE", "done"),
)
class Task(models.Model):
status = models.CharField(
choices=StatusChoices,
default=StatusChoices[0],
max_length=5,
)
</code></pre>
<p>request body:</p>
<pre><code>{
"id": 15,
"content": "Updated Task Content",
"creationDate": "2020-10-23",
"user": 2 ,
"status": "DONE"
}
</code></pre>
<p>serialiser:</p>
<pre><code>class TaskUpdateSerializer(serializers.ModelSerializer):
class Meta:
model = Task
fields = ("status")
</code></pre>
<p>view</p>
<pre><code>class TaskUpdateAPIView(APIView):
def patch(self, request, pk):
try:
task = Task.objects.get(pk=pk)
except Task.DoesNotExist:
return Response(status=status.HTTP_404_NOT_FOUND)
serializer = TaskUpdateSerializer(task, data=request.data, partial=True)
if serializer.is_valid():
task.status = request.data.get("status", task.status)
task.save()
return Response(serializer.data, status=status.HTTP_200_OK)
else:
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
</code></pre>
<p>url:</p>
<pre><code>urlpatterns = [
path("task/update/<int:pk>",
TaskUpdateAPIView.as_view(),
name='update-task'
),
]
</code></pre>
<p>New error:</p>
<blockquote>
<p>RecursionError: maximum recursion depth exceeded while calling a Python object</p>
</blockquote>
<p>What am I missing?</p>
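<p>One detail worth flagging independently of the recursion error: in the serializer above, <code>fields = ("status")</code> is a plain string, not a one-element tuple, and DRF expects a list or tuple, so the trailing comma matters:</p>

```python
# Parentheses alone don't make a tuple; the comma does.
fields_wrong = ("status")
fields_right = ("status",)

print(type(fields_wrong).__name__)  # str
print(type(fields_right).__name__)  # tuple
```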
| <python><django><django-rest-framework> | 2023-10-25 13:44:50 | 3 | 5,089 | Alex Ironside |
77,359,687 | 5,507,389 | Get unique values from sets in Pandas DataFrame columns | <p>I'm working with a Pandas DataFrame which has the following structure:</p>
<pre><code>import pandas as pd
data = {"col1": [{1}, {5}, {}, {1, 2}], "col2": [{1, 2}, {}, {}, {3, 4}]}
df = pd.DataFrame(data)
print(df)
col1 col2
0 {1} {1, 2}
1 {5} {}
2 {} {}
3 {1, 2} {3, 4}
</code></pre>
<p>My goal is to create a third column which will contain, for each row, the set of unique values from <code>col1</code> and <code>col2</code>. For the above example, the resulting DataFrame should be as follows:</p>
<pre><code> col1 col2 col3
0 {1} {1, 2} {1, 2}
1 {5} {} {5}
2 {} {} {}
3 {1, 2} {3, 4} {1, 2, 3, 4}
</code></pre>
<p>I think this problem might be a good candidate for <code>apply</code> + <code>lambda</code> but I couldn't make it work properly. Any help would be greatly appreciated.</p>
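<p>A minimal sketch along those lines. Note the bare <code>{}</code> literals in the data above are actually empty dicts, not empty sets, so everything is passed through <code>set()</code> first:</p>

```python
import pandas as pd

data = {"col1": [{1}, {5}, {}, {1, 2}], "col2": [{1, 2}, {}, {}, {3, 4}]}
df = pd.DataFrame(data)

# row-wise set union; set(...) also normalises the accidental empty dicts
df["col3"] = [set(a) | set(b) for a, b in zip(df["col1"], df["col2"])]
print(df["col3"].tolist())  # [{1, 2}, {5}, set(), {1, 2, 3, 4}]
```

<p>A plain list comprehension over <code>zip</code> is usually faster here than <code>df.apply(..., axis=1)</code>, though the <code>apply</code> + <code>lambda</code> version works the same way.</p>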
| <python><python-3.x><pandas><dataframe> | 2023-10-25 12:58:08 | 1 | 679 | glpsx |
77,359,672 | 12,242,085 | How to keep, for each id in a Data Frame, only rows for one random month from a column with dates in Python Pandas? | <p>I have a Data Frame in Python Pandas like below:</p>
<pre><code>df = pd.DataFrame({'id': [1, 1, 1, 2, 2, 2],
'day': ['2023-03-01', '2023-03-10', '2023-04-05', '2023-03-15', '2023-04-20', '2023-04-25'],
'col1' : [123, 66, 7, 890, 456, 100]})
</code></pre>
<p><a href="https://i.sstatic.net/k4VRc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/k4VRc.png" alt="enter image description here" /></a></p>
<p>And for each id I need to keep only the rows for one random month. For example, for id = 1 I need to keep only the rows for March (month 03) or only the rows for April (month 04).</p>
<p>So, as a result I need something like below:</p>
<pre><code>id | day | col1
-----|------------|-------
1 | 2023-03-01 | 123
1 | 2023-03-10 | 66
2 | 2023-04-20 | 456
2 | 2023-04-25 | 100
</code></pre>
<p>How can I do that in Python Pandas ?</p>
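<p>A sketch of one way to do it (<code>random_state</code> is fixed here only so the example is repeatable; drop it for a truly random month per id):</p>

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 1, 1, 2, 2, 2],
                   'day': ['2023-03-01', '2023-03-10', '2023-04-05',
                           '2023-03-15', '2023-04-20', '2023-04-25'],
                   'col1': [123, 66, 7, 890, 456, 100]})

month = pd.to_datetime(df['day']).dt.to_period('M')
# pick one month at random per id
chosen = month.groupby(df['id']).apply(
    lambda m: m.drop_duplicates().sample(1, random_state=0).iloc[0])
# keep only the rows whose month matches the one chosen for their id
out = df[month.eq(df['id'].map(chosen))]
print(out)
```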
| <python><pandas><dataframe><date> | 2023-10-25 12:55:47 | 3 | 2,350 | dingaro |
77,359,597 | 1,134,856 | SSLCertVerificationError EE certificate key too weak | <p>I'm trying to call a rest service using python like so:</p>
<pre><code>import requests
import certifi
...
requests.get(endpoint_url, data=query, auth=my_auth, verify=certifi.where(), timeout=1000)
...
</code></pre>
<p>This results in a <code>SSLCertVerificationError</code> with the message <code>EE certificate key too weak (_ssl.c:1006)</code></p>
<p>The reason for this seems to be the certificate on the server, according to this SO thread: <a href="https://stackoverflow.com/questions/69584265/ee-certificate-key-too-weak-ssl-c1131">EE certificate key too weak (_ssl.c:1131)</a></p>
<p>I cannot change the certificate on the server and I don't want to deactivate certificate validation using <code>verify=False</code>.</p>
<p>Is there a way to allow weak certificates? A weak certificate is better than no certificate, in my opinion.</p>
<p>What exactly is considered to be too weak? I tried to look that up in sources, but I couldn't find anything specific and the docs don't say that either.</p>
| <python><python-requests><request> | 2023-10-25 12:45:03 | 0 | 2,600 | treeno |
77,359,543 | 583,464 | Strip strings and date, time from mixed string with numbers | <p>I have this kind of dataset:</p>
<pre><code>import pandas as pd
import numpy as np
x = np.array([
'355395.7037',
'355369.6383',
'355367.881',
'355381.419',
'357394.9D7a82te7o6fm4o9n4t3h7 print: 06/10/202',
'357405.7897626596'])
y = np.array([
'4521429.292',
'4521430.0229',
' 4521430.1191',
'4521430.1256',
'3 13:36 4521735.552137422',
'4521512.725'])
df = pd.DataFrame({'X':x, 'Y':y})
</code></pre>
<p>So, sometimes, I may have strings mixed with numbers.</p>
<blockquote>
<p><strong>A Solution I thought.</strong></p>
</blockquote>
<p>If you note <code>357394.9D7a82te7o6fm4o9n4t3h7 print: 06/10/202</code> for example, there are the words <code>Date of month</code> inside the number <code>357394.97827649437</code>.</p>
<p>and at <code>y</code> column : <code>'3 13:36 4521735.552137422'</code> there is the <code>3</code> that came from <code>2023</code> from previous <code>print: 06/10/202</code> and the time <code>13:36</code>.</p>
<p>I want to get rid of them in order to have only the numbers.</p>
<p>I may have different situations like:</p>
<p><code>'357394.9D7a82te7o6fm4o9n4t3h7 print: 06/10/2023'</code></p>
<p><code>13:36 4521735.552137422</code> for example.</p>
<p>Or,</p>
<p><code>'357394.9 D7a82te7o6fm4o9n4t3h7 print: ',</code></p>
<p><code>'06/10/2023 13:36 4521735.552137422'</code></p>
<p>If you see the numbers, for <code>X</code> column for example, all numbers have <code>6</code> digits before the decimal point, so we could take for example, the first 6 digits from</p>
<p><code>'357394.9D7a82te7o6fm4o9n4t3h7 print: 06/10/202',</code> -> 357394 and apply decimal point and fill with the rest numbers until a white space or a word (print) exists. So, the number to take is <code>357394.97827649437</code></p>
<p>But the thing is that we have a string and we cannot apply for example <code>float</code> or <code>int</code> to process it.</p>
<p>For the second case, for <code>Y</code> column:</p>
<p><code>'3 13:36 4521735.552137422',</code></p>
<p>I think we must search from the end and when we see a decimal point, count <code>7</code> digits (the Y column has 7 digits before the decimal) and stop there.</p>
<p>** UPD **</p>
<p>If we use:</p>
<pre><code>x = np.array([
'355395.7037',
'355369.6383',
'355367.881',
'355381.419',
'357394.9D7a82te7o6fm4o9n4t3h7 p',
'357405.7897626596'])
y = np.array([
'4521429.292',
'4521430.0229',
'4521430.1191',
'4521430.1256',
'rint: 06/10/2023 13:36 4521735.552137422',
'4521512.725'])
</code></pre>
<p>then the solution gives:</p>
<pre><code> X Y
0 355395.7037 4521429.292
1 355369.6383 4521430.0229
2 355367.881 4521430.1191
3 355381.419 4521430.1256
4 357394.97827649437 06102023
5 357405.7897626596 4521512.725
</code></pre>
<p>where the <code>Y</code> value is <code> 06102023</code> instead of <code>4521735.552137422</code></p>
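<p>Building on the idea above, a sketch of the strip-letters-then-match approach (the <code>int_digits</code> parameter encodes the "6 digits for X, 7 for Y" observation; it is a heuristic, not a general parser):</p>

```python
import re

def clean_num(cell, int_digits):
    """Drop interleaved letters, then pull out the first
    <int_digits>.<fraction> number left in the string."""
    stripped = re.sub(r'[A-Za-z]', '', cell)
    m = re.search(rf'\d{{{int_digits}}}\.\d+', stripped)
    return m.group(0) if m else None

print(clean_num('357394.9D7a82te7o6fm4o9n4t3h7 print: 06/10/202', 6))
# 357394.97827649437
print(clean_num('3 13:36 4521735.552137422', 7))
# 4521735.552137422
```

<p>Applied column-wise (e.g. <code>df['X'].map(lambda s: clean_num(s, 6))</code>), this both removes the embedded "Date of month" text and ignores stray date/time tokens, since those never have the required run of integer digits before a decimal point.</p>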
| <python><arrays><pandas><data-cleaning> | 2023-10-25 12:37:33 | 2 | 5,751 | George |
77,359,323 | 18,771,355 | Simclr/Resnet18 cross entropy loss : 0D or 1D target tensor expected, multi-target not supported | <p>I am trying to implement a SimCLR/Resnet18 model with a custom dataset.</p>
<p>My training dataset used for the pretext task is composed of 7000 various unlabeled pictures, all aggregated in <code>train_X_v1.bin</code>, of shape <code>(7000, 3, 224, 224)</code>.
For the fine tuning I have two files <code>val_hiv_ni_X_v1.bin</code> which contains the pictures I want to tune my model on, of shape <code>(931, 3, 224, 224)</code>, and <code>val_hiv_ni_y_v1.bin</code> which contains the corresponding labels of shape: <code>(931,)</code>.</p>
<p>My pretext task is supposedly already "dummy-trained" (quick training of 10 epochs just to see if the code runs) and saved in a checkpoint.</p>
<p>Here is my code for fine tuning :</p>
<pre class="lang-py prettyprint-override"><code>def reproducibility(config):
SEED = int(config.seed)
torch.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(SEED)
if (config.cuda):
torch.cuda.manual_seed(SEED)
# From https://github.com/PyTorchLightning/pytorch-lightning/issues/924
def weights_update(model, checkpoint_path):
checkpoint = torch.load(checkpoint_path)
model_dict = model.state_dict()
pretrained_dict = {k: v for k, v in checkpoint['state_dict'].items() if k in model_dict}
model_dict.update(pretrained_dict)
model.load_state_dict(model_dict)
print(f'Checkpoint {checkpoint_path} was loaded')
return model
def get_idr_dataloader(batch_size, transform=None, split="unlabeled"):
# idr = STL10("./", split=split, transform=transform, download=True)
idr = ImageDataResourceDataset(root=SOURCE_PATH, transform=Augment(224), split=split)
print(idr.data.shape, idr.labels.shape)
return DataLoader(dataset=idr, batch_size=batch_size, num_workers=cpu_count() // 2, ) # cpu_count() // 2
# general stuff
available_gpus = len([torch.cuda.device(i) for i in range(torch.cuda.device_count())])
train_config = FtHparams()
save_model_path = os.path.join(os.getcwd(), "saved_models/")
print('available_gpus:', available_gpus)
filename = 'SimCLR_ResNet18_finetune_'
reproducibility(train_config)
save_name = filename + '_Final.ckpt'
# load resnet backbone
backbone = models.resnet18(pretrained=False)
backbone.fc = nn.Identity()
checkpoint = torch.load('resnet18_backbone_weights.ckpt')
backbone.load_state_dict(checkpoint['model_state_dict'])
model = SimCLR_eval(train_config.lr, model=backbone, linear_eval=False)
# preprocessing and data loaders
transform_preprocess = Augment(train_config.img_size).test_transform
data_loader = get_idr_dataloader(train_config.batch_size, transform=transform_preprocess, split='unlabeled')
data_loader_test = get_idr_dataloader(train_config.batch_size, transform=transform_preprocess, split='test')
# callbacks and trainer
accumulator = GradientAccumulationScheduler(scheduling={0: train_config.gradient_accumulation_steps})
checkpoint_callback = ModelCheckpoint(filename=filename, dirpath=save_model_path, save_last=True, save_top_k=2,
monitor='Val Accuracy_epoch', mode='max')
trainer = Trainer(callbacks=[checkpoint_callback, accumulator],
gpus=available_gpus,
max_epochs=train_config.epochs)
trainer.fit(model, data_loader, data_loader_test)
trainer.save_checkpoint(save_name)
"""# Finetune from ImageNet pretraining"""
# load model
resnet = models.resnet18(pretrained=False)
resnet.fc = nn.Identity()
print('imagenet weights, no pretraining')
model = SimCLR_eval(train_config.lr, model=resnet, linear_eval=False)
# preprocessing and data loaders
transform_preprocess = Augment(train_config.img_size).test_transform
data_loader = get_idr_dataloader(70, transform=transform_preprocess, split='unlabeled')
data_loader_test = get_idr_dataloader(70, transform=transform_preprocess, split='test')
checkpoint_callback = ModelCheckpoint(filename=filename, dirpath=save_model_path)
trainer = Trainer(callbacks=[checkpoint_callback],
gpus=available_gpus,
max_epochs=train_config.epochs)
trainer.fit(model, data_loader, data_loader_test)
trainer.save_checkpoint(save_name)
</code></pre>
<p>Here are my classes :</p>
<pre class="lang-py prettyprint-override"><code>class SimCLR_eval(pl.LightningModule):
def __init__(self, lr, model=None, linear_eval=False):
super().__init__()
self.lr = lr
self.linear_eval = linear_eval
if self.linear_eval:
model.eval()
self.mlp = torch.nn.Sequential(
torch.nn.Linear(512, 10),
# torch.nn.ReLU(),
# torch.nn.Dropout(0.1),
# torch.nn.Linear(128, 10)
)
self.model = torch.nn.Sequential(
model, self.mlp
)
self.loss = torch.nn.CrossEntropyLoss()
def forward(self, X):
return self.model(X)
def training_step(self, batch, batch_idx):
x, y = batch
z = self.forward(x)
loss = self.loss(z, y)
self.log('Cross Entropy loss', loss, on_step=True, on_epoch=True, prog_bar=True, logger=True)
predicted = z.argmax(1)
acc = (predicted == y).sum().item() / y.size(0)
self.log('Train Acc', acc, on_step=False, on_epoch=True, prog_bar=True, logger=True)
return loss
def validation_step(self, batch, batch_idx):
x, y = batch
z = self.forward(x)
loss = self.loss(z, y)
self.log('Val CE loss', loss, on_step=True, on_epoch=True, prog_bar=False, logger=True)
predicted = z.argmax(1)
acc = (predicted == y).sum().item() / y.size(0)
self.log('Val Accuracy', acc, on_step=True, on_epoch=True, prog_bar=True, logger=True)
return loss
def configure_optimizers(self):
if self.linear_eval:
print(f"\n\n Attention! Linear evaluation \n")
optimizer = SGD(self.mlp.parameters(), lr=self.lr, momentum=0.9)
else:
optimizer = SGD(self.model.parameters(), lr=self.lr, momentum=0.9)
return [optimizer]
class FtHparams:
def __init__(self):
self.epochs = 10 # number of training epochs
self.seed = 77777 # randomness seed
self.cuda = False # use nvidia gpu
self.img_size = 224 # image shape
self.save = "./saved_models/" # save checkpoint
self.gradient_accumulation_steps = 1 # gradient accumulation steps
self.batch_size = 70
self.lr = 1e-3
self.embedding_size = 128 # papers value is 128
self.temperature = 0.5 # 0.1 or 0.5
class ImageDataResourceDataset(VisionDataset):
train_list = ['train_X_v1.bin', ]
test_list = ['val_hiv_ni_X_v1.bin', 'val_hiv_ni_y_v1.bin', ]
def __init__(self, root: str, split: str = 'unlabeled', transform: Optional[Callable] = None, ):
super().__init__(root=root, transform=transform)
if split == 'unlabeled':
self.data, _ = self.__loadfile(self.train_list[0])
self.labels = np.asarray([-1] * self.data.shape[0])
elif split == 'test':
self.data, self.labels = self.__loadfile(self.test_list[0], self.test_list[1])
def __len__(self) -> int:
return self.data.shape[0]
def __getitem__(self, idx):
img = self.data[idx]
img = np.transpose(img, (1, 2, 0))
img = Image.fromarray(img)
img = self.transform(img)
return img
def __loadfile(self, data_file: str, labels_file: Optional[str] = None) -> Tuple[np.ndarray, Optional[np.ndarray]]:
labels = None
if labels_file:
path_to_labels = os.path.join(os.getcwd(), 'datasets', labels_file)
with open(path_to_labels, "rb") as f:
labels = np.fromfile(f, dtype=np.uint8) # 0-based
path_to_data = os.path.join(os.getcwd(), 'datasets', data_file)
everything = np.fromfile(path_to_data, dtype=np.uint8)
images = np.reshape(everything, (-1, 3, 224, 224))
images = np.transpose(images, (0, 1, 3, 2))
return images, labels
class ContrastiveLoss(nn.Module):
"""
Vanilla Contrastive loss, also called InfoNceLoss as in SimCLR paper
"""
def __init__(self, batch_size, temperature=0.5):
super().__init__()
self.batch_size = batch_size
self.temperature = temperature
self.mask = (~torch.eye(batch_size * 2, batch_size * 2, dtype=bool)).float()
def calc_similarity_batch(self, a, b):
representations = torch.cat([a, b], dim=0)
similarity_matrix = F.cosine_similarity(representations.unsqueeze(1), representations.unsqueeze(0), dim=2)
return similarity_matrix
def forward(self, proj_1, proj_2):
"""
proj_1 and proj_2 are batched embeddings [batch, embedding_dim]
where corresponding indices are pairs
z_i, z_j in the SimCLR paper
"""
batch_size = proj_1.shape[0]
z_i = F.normalize(proj_1, p=2, dim=1)
z_j = F.normalize(proj_2, p=2, dim=1)
similarity_matrix = self.calc_similarity_batch(z_i, z_j)
sim_ij = torch.diag(similarity_matrix, batch_size)
sim_ji = torch.diag(similarity_matrix, -batch_size)
positives = torch.cat([sim_ij, sim_ji], dim=0)
nominator = torch.exp(positives / self.temperature)
# print(" sim matrix ", similarity_matrix.shape)
# print(" device ", device_as(self.mask, similarity_matrix).shape, " torch exp ", torch.exp(similarity_matrix / self.temperature).shape)
denominator = device_as(self.mask, similarity_matrix) * torch.exp(similarity_matrix / self.temperature)
all_losses = -torch.log(nominator / torch.sum(denominator, dim=1))
loss = torch.sum(all_losses) / (2 * self.batch_size)
return loss
</code></pre>
<p>And here is my full stack trace :</p>
<pre class="lang-bash prettyprint-override"><code>/home/wlutz/PycharmProjects/hiv-image-analysis/venv/bin/python /home/wlutz/PycharmProjects/hiv-image-analysis/main.py
2023-10-25 13:59:41.831899: I tensorflow/core/util/port.cc:111] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-10-25 13:59:41.834073: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-10-25 13:59:41.861845: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-10-25 13:59:41.861869: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-10-25 13:59:41.861884: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-10-25 13:59:41.867193: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-10-25 13:59:42.564010: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pl_bolts/__init__.py:11: FutureWarning: In the future `np.object` will be defined as the corresponding NumPy scalar.
if not hasattr(numpy, tp_name):
/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pl_bolts/__init__.py:11: FutureWarning: In the future `np.bool` will be defined as the corresponding NumPy scalar.
if not hasattr(numpy, tp_name):
/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pl_bolts/models/self_supervised/amdim/amdim_module.py:34: UnderReviewWarning: The feature generate_power_seq is currently marked under review. The compatibility with other Lightning projects is not guaranteed and API may change at any time. The API and functionality may change without warning in future releases. More details: https://lightning-bolts.readthedocs.io/en/latest/stability.html
"lr_options": generate_power_seq(LEARNING_RATE_CIFAR, 11),
/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pl_bolts/models/self_supervised/amdim/amdim_module.py:92: UnderReviewWarning: The feature FeatureMapContrastiveTask is currently marked under review. The compatibility with other Lightning projects is not guaranteed and API may change at any time. The API and functionality may change without warning in future releases. More details: https://lightning-bolts.readthedocs.io/en/latest/stability.html
contrastive_task: Union[FeatureMapContrastiveTask] = FeatureMapContrastiveTask("01, 02, 11"),
/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pl_bolts/losses/self_supervised_learning.py:228: UnderReviewWarning: The feature AmdimNCELoss is currently marked under review. The compatibility with other Lightning projects is not guaranteed and API may change at any time. The API and functionality may change without warning in future releases. More details: https://lightning-bolts.readthedocs.io/en/latest/stability.html
self.nce_loss = AmdimNCELoss(tclip)
/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=None`.
warnings.warn(msg)
available_gpus: 0
/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:478: LightningDeprecationWarning: Setting `Trainer(gpus=0)` is deprecated in v1.7 and will be removed in v2.0. Please use `Trainer(accelerator='gpu', devices=0)` instead.
rank_zero_deprecation(
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/callbacks/model_checkpoint.py:613: UserWarning: Checkpoint directory /home/wlutz/PycharmProjects/hiv-image-analysis/saved_models exists and is not empty.
rank_zero_warn(f"Checkpoint directory {dirpath} exists and is not empty.")
| Name | Type | Params
-------------------------------------------
0 | mlp | Sequential | 5.1 K
1 | model | Sequential | 11.2 M
2 | loss | CrossEntropyLoss | 0
-------------------------------------------
11.2 M Trainable params
0 Non-trainable params
11.2 M Total params
44.727 Total estimated model params size (MB)
Sanity Checking DataLoader 0: 0%| | 0/2 [00:00<?, ?it/s]Traceback (most recent call last):
File "/home/wlutz/PycharmProjects/hiv-image-analysis/main.py", line 253, in <module>
finetuning()
File "/home/wlutz/PycharmProjects/hiv-image-analysis/main.py", line 226, in finetuning
trainer.fit(model, data_loader, data_loader_test)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 608, in fit
call._call_and_handle_interrupt(
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/call.py", line 38, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 650, in _fit_impl
self._run(model, ckpt_path=self.ckpt_path)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1112, in _run
results = self._run_stage()
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1191, in _run_stage
self._run_train()
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1204, in _run_train
self._run_sanity_check()
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1276, in _run_sanity_check
val_loop.run()
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
self.advance(*args, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 152, in advance
dl_outputs = self.epoch_loop.run(self._data_fetcher, dl_max_batches, kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
self.advance(*args, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 137, in advance
output = self._evaluation_step(**kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 234, in _evaluation_step
output = self.trainer._call_strategy_hook(hook_name, *kwargs.values())
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1494, in _call_strategy_hook
output = fn(*args, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/pytorch_lightning/strategies/strategy.py", line 390, in validation_step
return self.model.validation_step(*args, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/finetuning.py", line 65, in validation_step
loss = self.loss(z, y)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/torch/nn/modules/loss.py", line 1179, in forward
return F.cross_entropy(input, target, weight=self.weight,
File "/home/wlutz/PycharmProjects/hiv-image-analysis/venv/lib/python3.9/site-packages/torch/nn/functional.py", line 3053, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: 0D or 1D target tensor expected, multi-target not supported
Process finished with exit code 1
</code></pre>
<p>I saw <a href="https://discuss.pytorch.org/t/error-0d-or-1d-target-tensor-expected-multi-target-not-supported/153754/5" rel="nofollow noreferrer">in the PyTorch forum</a> that the model output is expected to be <code>(batch size, n classes)</code> and target <code>(batch size)</code> for the contrastive loss. On the last line of error, the parameter <code>input</code> has a shape of <code>torch.Size([70, 10])</code> and <code>target</code> has a shape of <code>torch.Size([70, 3, 224, 224])</code>. so it seems targets does not meet the expectations of <code>torch._C._nn.cross_entropy_loss</code> ??</p>
<p>I'm so lost, thank you for your help.</p>
<blockquote>
<p>EDIT: I forgot to specify that I have only two classes for my fine-tuning.</p>
</blockquote>
| <python><python-3.x><deep-learning><pytorch><resnet> | 2023-10-25 12:06:52 | 1 | 316 | Willy Lutz |
77,359,161 | 13,518,907 | BFloat16 is not supported on MPS (macOS) | <p>I accessed a Llama-based model on Hugging Face named "LeoLM/leo-hessianai-7b-chat".
I downloaded the model on my Mac with the device set to 'MPS'. The download worked; however, when I try to test the model I get the following error:</p>
<pre><code>TypeError: BFloat16 is not supported on MPS
</code></pre>
<p>Above I see the hint:</p>
<pre><code>FP4 quantization state not initialized. Please call .cuda() or .to(device) on the LinearFP4 layer first.
</code></pre>
<p>Here is my code:</p>
<pre><code>import torch
from torch import cuda, bfloat16
import transformers
device = torch.device("mps")
model_id = 'LeoLM/leo-hessianai-7b-chat'
#device = f'cuda:{cuda.current_device()}' if cuda.is_available() else 'cpu'
# set quantization configuration to load large model with less GPU memory
# this requires the `bitsandbytes` library
bnb_config = transformers.BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type='nf4',
bnb_4bit_use_double_quant=True,
bnb_4bit_compute_dtype=bfloat16
)
# begin initializing HF items, need auth token for these
hf_auth = 'HF_KEY'
model_config = transformers.AutoConfig.from_pretrained(
model_id,
use_auth_token=hf_auth
)
model = transformers.AutoModelForCausalLM.from_pretrained(
model_id,
trust_remote_code=False, # True for flash attention
config=model_config,
quantization_config=bnb_config,
device_map='auto',
use_auth_token=hf_auth
)
model.eval()
print(f"Model loaded on {device}")
tokenizer = transformers.AutoTokenizer.from_pretrained(
model_id,
use_auth_token=hf_auth
)
generate_text = transformers.pipeline(
model=model, tokenizer=tokenizer,
return_full_text=True, # langchain expects the full text
task='text-generation',
# we pass model parameters here too
temperature=0.0, # 'randomness' of outputs, 0.0 is the min and 1.0 the max
    max_new_tokens=512,  # max number of tokens to generate in the output
repetition_penalty=1.1 # without this output begins repeating
)
res = generate_text("Explain the difference between a country and a continent.")
print(res[0]["generated_text"])
</code></pre>
<p>What do I need to change to make it run?</p>
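<p>For context, one commonly suggested direction (an assumption about the setup, not verified against your exact library versions) is that bitsandbytes 4-bit quantization requires CUDA, so on Apple Silicon you would drop the <code>BitsAndBytesConfig</code> and load the un-quantized model in <code>float16</code>, which MPS supports. A tiny sketch of that dtype decision; the <code>pick_dtype</code> helper and the string names are hypothetical:</p>

```python
# Hypothetical helper: choose a compute dtype per backend.
# bitsandbytes FP4/NF4 quantization needs CUDA; MPS rejects bfloat16 here.
def pick_dtype(backend):
    if backend == 'cuda':
        return 'bfloat16'  # quantized path as in the question
    if backend == 'mps':
        return 'float16'   # e.g. torch_dtype=torch.float16, no quantization_config
    return 'float32'       # CPU fallback

pick_dtype('mps')
```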
| <python><macos><large-language-model><llama><bfloat16> | 2023-10-25 11:45:24 | 2 | 565 | Maxl Gemeinderat |
77,359,094 | 10,133,797 | `stmt` vs `functools.partial` for `timeit.Timer` | <p>I've read that <code>functools.partial(fn, *args, **kw)</code> saves overhead compared to <code>lambda: fn(*args, **kw)</code>. But is it also advantageous over <code>stmt</code> + <code>globals</code>?</p>
<pre class="lang-py prettyprint-override"><code>timeit.Timer(partial(fn, *args, **kw)).repeat(200, 5)
</code></pre>
<p>vs</p>
<pre class="lang-py prettyprint-override"><code>timeit.Timer(stmt='fn(*args, **kw)',
globals={'fn': fn, 'args': args, 'kw': kw})
</code></pre>
<p>or even, though <a href="https://docs.python.org/3/library/timeit.html" rel="nofollow noreferrer">docs</a> suggest they're interchangeable,</p>
<pre class="lang-py prettyprint-override"><code>timeit.Timer(stmt='fn(*args, **kw)',
globals={'args': args, 'kw': kw},
setup='from __main__ import fn')
</code></pre>
<p>I've not found much material on this. For what order of duration of <code>fn</code> (e.g. milliseconds) does it make a non-negligible difference? Are they all interchangeable past a certain order, including for parallel (<em>not</em> Python-multiprocessing, just multi-core math by e.g. <code>numpy</code>) and (optional question) <a href="https://pytorch.org/tutorials/recipes/recipes/benchmark.html" rel="nofollow noreferrer">GPU</a> benchmarking? Besides using the command line, is any one generally preferred?</p>
<hr>
<h3>A use case</h3>
<p>For reference: a costly <code>def setup</code>, then we bench inside <code>def main</code> over several configurations. It doesn't concern "micro-benchmarking", e.g. <code>x += 1</code>, but stuff that takes on order of milliseconds.</p>
<p>Here's a dummy self-contained example <a href="https://replit.com/@OverLordGoldDra/BarrenNavyFormula#main.py" rel="nofollow noreferrer">(try live)</a>; answers are free to invoke other cases. I've since edited it to put it on order of <code>us</code> (by changing <code>x</code>'s size) - here's for <code>partial</code> vs "long form":</p>
<ul>
<li><strong>Very fast CPU</strong>: <code>1.61us</code> vs <code>1.67us</code> (i7-13700HX)</li>
<li><strong>Very slow CPU</strong>: <code>3.93us</code> vs <code>3.92us</code> (replit's)</li>
</ul>
<pre><code># -*- coding: utf-8 -*-
import numpy as np
from timeit import Timer
from functools import partial
USE_PARTIAL = 1
#%% Timer, setup, target function --------------------------------------------
def setup():
k_bool = False
objs_all = []
for j_bool in (False, True):
x = [[[np.random.randn(1, 1) for _ in range(1)] for _ in range(1)]
for _ in range(1)]
fkw = dict(j_bool=j_bool)
objs_all.append((x, fkw))
return objs_all, k_bool
def my_func(x, i_bool, j_bool=False, k_bool=True):
# bools, floops, appends
x_notcopy = []
for i in range(len(x)):
if i + 1 % 2:
i_bool = not i_bool
x_notcopy.append([])
for j in range(len(x[0])):
if j + 1 % 2:
j_bool = not j_bool
x_notcopy[-1].append([])
for k in range(len(x[0][0])):
if k + 1 % 2:
k_bool = not k_bool
x_notcopy[-1][-1].extend(x[i][j][k])
# array ops
out = np.array(x_notcopy)
return out
#%% Bench funcs --------------------------------------------------------------
def main(objs_all, k_bool):
total = 0
n_iters = 100
n_repeats = 10000
for objs in objs_all:
x, fkw = objs
for i_bool in (False, True):
for negate_k_bool in (False, True):
if negate_k_bool:
k_bool = not k_bool
if USE_PARTIAL:
fn_partial = partial(my_func,
x, i_bool, k_bool=k_bool, **fkw)
total += min(Timer(fn_partial).repeat(n_repeats, n_iters)
) / n_iters
else:
total += min(Timer(
'my_func(x, i_bool, k_bool=k_bool, **fkw)',
globals={'x': x, 'i_bool': i_bool, 'k_bool': k_bool,
'fkw': fkw},
setup='from __main__ import my_func'
).repeat(n_repeats, n_iters)) / n_iters
print(total / 8) # 8 is total number of loops
#%% Execute ------------------------------------------------------------------
args = setup()
main(*args)
</code></pre>
| <python><performance><time><timeit> | 2023-10-25 11:36:10 | 0 | 19,954 | OverLordGoldDragon |
77,358,942 | 4,517,263 | Python how to add words from text file that contains letters from another word | <p>I have a text file that contains words of between 2 and 7 letters. Now I want to do the following:</p>
<ol>
<li>Make a new sub-list with all words with 7 letters.</li>
<li>For each 7 letter word <strong>W</strong>, list all the words from the original list that are anagrams of any subset of letters of <strong>W</strong> (including itself). Each such list should be ordered from shortest to longest.</li>
<li>Save the output from step 2 into a new file.</li>
</ol>
<p><em><strong>For example:</strong></em></p>
<p><strong>Original list in input file</strong></p>
<pre><code>peaty
meant
eat
payment
motion
emotion
no
novel
inlet
violet
violent
</code></pre>
<p><strong>What I want:</strong></p>
<p>The seven letter words in the list are <em>payment</em>, <em>emotion</em> & <em>violent</em>.</p>
<p><strong>Expected output file</strong></p>
<pre><code>eat|peaty|meant|payment
no|motion|emotion
no|novel|inlet|violet|violent
</code></pre>
<hr />
<p>I have started to create a new list with only the 7 letter words:</p>
<pre class="lang-py prettyprint-override"><code>with open('list1.txt', 'r') as f1, open('newlist.txt', 'w') as f2:
for line in f1:
        if len(line.strip()) == 7:  # strip the newline before counting letters
f2.write(line)
</code></pre>
<p>But now I don't know how to continue.</p>
<p>Does anyone know how I can do this?</p>
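<p>For reference, a minimal sketch of one possible approach (the helper name is made up; it assumes the whole word list fits in memory): a word matches a 7-letter word <strong>W</strong> if its letter counts fit inside <strong>W</strong>'s letter counts, which <code>collections.Counter</code> makes easy:</p>

```python
from collections import Counter

words = ['peaty', 'meant', 'eat', 'payment', 'motion', 'emotion',
         'no', 'novel', 'inlet', 'violet', 'violent']

def fits_inside(word, pool):
    # True if `word` can be spelled from the letters of `pool`
    need, have = Counter(word), Counter(pool)
    return all(have[c] >= n for c, n in need.items())

lines = []
for w in words:
    if len(w) == 7:
        hits = sorted((x for x in words if fits_inside(x, w)), key=len)
        lines.append('|'.join(hits))

# lines == ['eat|peaty|meant|payment', 'no|motion|emotion',
#           'no|novel|inlet|violet|violent']
```

<p>Writing each entry of <code>lines</code> to the output file would then match the expected format above.</p>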
| <python> | 2023-10-25 11:12:43 | 1 | 607 | Mat Koffman |
77,358,840 | 2,074,697 | pandas to_sql gives conversion errors even after specifying dtype | <p>I'm trying to make a program which imports data from Excel files into SQL tables. I need to do this for <em>several hundred</em> files, so I need to have a general approach.</p>
<p>I'm using SQLAlchemy to import my data frame to SQL Server, but the column ProductCode causes an issue. The first 1000 or so rows are integers, so SQL Alchemy identifies the data type as a integer. There are however some nvarchar values which causes an conversion error <code>Conversion failed when converting the nvarchar value 'AOE1' to data type int</code> when creating the table in SQL Server.</p>
<p><a href="https://i.sstatic.net/U2xQa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U2xQa.png" alt="enter image description here" /></a></p>
<p>What I have tried is to specify that the column ProductCode should be nvarchar, but the error still persists. I can't write a dtype argument for all columns as I don't know the structure of the Excel files before importing them into a data frame.</p>
<pre><code>import sqlalchemy as sa
import pandas as pd
sqlcon = sa.create_engine('mssql+pyodbc://@' + serverName + '/' + databaseName + '?trusted_connection=yes&driver=SQL+Server')
xl2 = pd.read_excel(fullPath, sheet_name=sheetName, header=None)
xl2.to_sql(tableName,schema='dbo',con=sqlcon, index=False, if_exists='replace', dtype={'ProductCode': sa.types.NVARCHAR})
</code></pre>
<p>I've tried to go into Excel, and copy the name of the column there in case there was blank spaces or something in the column name, but it wasn't.</p>
<p>I still get the same error <code>Conversion failed when converting the nvarchar value 'AOE1' to data type int</code> though. Can I specify the dtype of a single column (leaving the others to be identified by SqlAlchemy?) or can I force SqlAlchemy to base its data type identifier by many more values (so I can capture the nvarchar values)?</p>
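<p>One detail worth checking (an observation from the snippet, not a verified diagnosis): with <code>header=None</code> the DataFrame's columns are the integers 0..n, so a <code>dtype={'ProductCode': ...}</code> key may silently match no column at all. On the pandas side the column can also be forced to string before <code>to_sql</code>; a hedged sketch with a toy frame (the column name is assumed):</p>

```python
import pandas as pd

# toy frame standing in for the Excel import
df = pd.DataFrame({'ProductCode': [1001, 1002, 'AOE1']})

# cast to str so every value is text before SQLAlchemy infers a type
df['ProductCode'] = df['ProductCode'].astype(str)

df['ProductCode'].tolist()  # ['1001', '1002', 'AOE1']
```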
<p><strong>EDIT</strong></p>
<p>I've gone into the Excel file and sorted the table by the ProductCode column to have it start with the nvarchar values. It did not help.</p>
<p>I've tried to "restart variables" in Jupyter notebook in case the data type values has been cached. It did not work.</p>
| <python><pandas><sqlalchemy> | 2023-10-25 10:54:10 | 0 | 1,242 | Cenderze |
77,358,836 | 2,534,342 | How to make ruff to use f-string instead str.format? | <p>I'm trying <code>ruff</code> but when I do <code>ruff check .</code> on my example:</p>
<pre class="lang-py prettyprint-override"><code>print("%s" % "hello")
</code></pre>
<p>it shows:</p>
<pre><code>UP031 Use format specifiers instead of percent format
</code></pre>
<p>and if I do <code>ruff check . --fix</code> or <code>ruff format .</code> they don't change anything.</p>
<p>My <code>pyproject.toml</code> snipped:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.ruff]
target-version = "py39"
line-length = 120
extend-select = ["E501", "UP", "RUF"]
</code></pre>
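<p>One thing worth checking (hedged; it depends on your ruff version): ruff marks the UP031 rewrite as an unsafe fix, so a plain <code>--fix</code> skips it unless unsafe fixes are enabled on the command line:</p>

```bash
# apply fixes ruff marks "unsafe" (the UP031 rewrite can change behavior
# for some right-hand sides), then review the resulting diff:
ruff check . --fix --unsafe-fixes
```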
| <python><ruff> | 2023-10-25 10:53:51 | 3 | 612 | alanwilter |
77,358,702 | 937,019 | Coloring line segments in altair | <p>How do I change the color of line segments in an altair plot? In my example, I want to color all line segments that go above 1 red and those that stay below 1 green. My current attempt looks like this:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({
'x': [1, 2, 3, 4, 5],
'y': [0.8, 0.9, 1.1, 1.0, 0.9]
})
alt.Chart(df).mark_line(
point=True,
line=True,
).encode(
alt.X("x", type="quantitative"),
alt.Y("y", type="quantitative"),
color=alt.Color(
condition={
"test": "datum.y < 1",
"value": "green"
},
value="red",
),
).save("plot.html")
</code></pre>
<p>The result looks like this:</p>
<p><a href="https://i.sstatic.net/DRp8rm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DRp8rm.png" alt="result" /></a></p>
<p>Instead I want:</p>
<p><a href="https://i.sstatic.net/irnLom.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/irnLom.png" alt="enter image description here" /></a></p>
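<p>For background on why the attempt above colors whole lines (a sketch under assumptions, not a tested chart): a color encoding on <code>mark_line</code> groups the data into one line per color, so per-segment coloring needs each segment to be its own two-point group, e.g. via a <code>detail</code> encoding. Building those duplicated segment rows is plain Python:</p>

```python
xs = [1, 2, 3, 4, 5]
ys = [0.8, 0.9, 1.1, 1.0, 0.9]

# duplicate the shared endpoints so each segment is an independent group
rows = []
for i in range(len(xs) - 1):
    above = ys[i] > 1 or ys[i + 1] > 1   # segment touches the >1 region
    for j in (i, i + 1):
        rows.append({'x': xs[j], 'y': ys[j], 'seg': i, 'above': above})

# feed pd.DataFrame(rows) to mark_line() with detail='seg:N' and
# color=alt.condition('datum.above', alt.value('red'), alt.value('green'))
```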
| <python><altair> | 2023-10-25 10:30:24 | 0 | 626 | magum |
77,358,655 | 736,662 | Running two tasks in one Locust test and weigh the calls relative | <p>I have a non-working Locust test like this:</p>
<pre><code>from locust import HttpUser, between, task
from locust.contrib.fasthttp import FastHttpUser
import random
import json
import csv
from datetime import datetime, timedelta
server_name = "https://rknwvb.com:7072"
save_from_date = "2023-07-05T08:00:00.000Z"
save_to_date = "2023-07-05T09:00:00.000Z"
headers = {'X-API-KEY': 'AKjCg9hTcYQ=', 'Content-Type': 'application/json'}
LOAD_TSIDs = {
'15016',
'83904',
'87371'
}
SAVE_TS_IDs = {
'14881': 30,
'15016': 30,
'87371': 30
}
def set_value_save_values():
ts_value_random = random.randrange(10000)
print("random_value:", ts_value_random)
return ts_value_random
def get_data_save_values(ts_id, from_date, to_date, ts_value):
myjson = {
"id": int(ts_id),
"values": [
{
"from": from_date,
"to": to_date,
"value": ts_value
}
]
}
return myjson
def set_from_date_load_values():
min_tid = datetime.now()
nytid = min_tid.replace(year=2022, month=1, day=1, hour=random.randint(0, 23), minute=0,
second=0, microsecond=0)
from_date = nytid.strftime("%Y-%m-%dT%H:%M:00.000Z")
print("from_date:", from_date)
return from_date
def set_to_date_load_values():
min_tid = datetime.now()
nytid = min_tid.replace(year=2022, month=1, day=2, hour=random.randint(0, 23), minute=0,
second=0, microsecond=0)
to_date = nytid.strftime("%Y-%m-%dT%H:%M:00.000Z")
print("to_date:", to_date)
return to_date
class SaveAndLoadValues(FastHttpUser):
host = server_name
def _run_read_ts(self, series_list, resolution, start, end):
LOAD_TSIDs = ",".join(series_list)
resp = self.client.get(f'/api/loadValues?tsIds={LOAD_TSIDs}&resolution={resolution}'
                               f'&startUtc={set_from_date_load_values()}&endUtc={set_to_date_load_values()}',
headers={'X-API-KEY': 'AKjCg9hTcYQ='})
print("Response status code:", resp.status_code)
def save_values(self, json_data):
print(type(json_data))
print(json_data)
# Make the PUT request with authentication:
response = self.client.put("/api/SaveValues", data=json_data, headers=headers)
# Check the response:
if response.status_code == 200:
print("SaveValues successful!")
print("Response:", response.json())
else:
print("SaveValues failed.")
print("Response:", response.text)
@task(1)
def test_get_ts_1(self):
self._run_read_ts(random.sample(SAVE_TS_IDs, 1), 'PT15M', set_from_date_load_values(),
set_to_date_load_values())
@task(2)
def save_list_values(self):
data_list = []
# for i, (ts_id, ts_value) in enumerate(random.sample(TS_IDs.items(), random.randint(1, 25))):
for i, (ts_id, ts_value) in enumerate(random.sample(SAVE_TS_IDs.items(), 100)):
data = get_data_save_values(ts_id, save_from_date, save_to_date, set_value_save_values())
data_list.append(data)
json_data = json.dumps(data_list, indent=2)
self.save_values(json_data)
</code></pre>
<p>Running it results in this error:</p>
<p>N52820/ERROR/locust.user.task: Population must be a sequence. For dicts or sets, use sorted(d).</p>
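<p>For reference, the error is reproducible outside Locust: <code>random.sample</code> requires a sequence, so the dict and set need converting first, exactly as the message suggests. A minimal stdlib sketch:</p>

```python
import random

LOAD_TSIDs = {'15016', '83904', '87371'}               # a set
SAVE_TS_IDs = {'14881': 30, '15016': 30, '87371': 30}  # a dict

# sets/dicts are not sequences; sort or listify before sampling
one_id = random.sample(sorted(LOAD_TSIDs), 1)
pairs = random.sample(list(SAVE_TS_IDs.items()), 2)
```

<p>Note that the sample size must not exceed the population size, so sampling 100 items from a 3-entry dict would raise <code>ValueError</code> next.</p>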
<p>First, is this the correct way to run both reads and writes in one Locust test?
Second, is it possible to weight the two tasks relative to each other? I want to run a single-writer, multiple-reader pattern. Let's say 1 read for every 10 writes?</p>
| <python><locust> | 2023-10-25 10:21:55 | 1 | 1,003 | Magnus Jensen |
77,358,638 | 7,243,493 | dynamic numpy conditions based on values from array | <p>I am trying to find out how I can use <code>np.where</code> in a dynamic way, where I select some predefined values, pass them to a function and let them create the condition.
Ideally I want to create long conditions with several logical
operators.</p>
<p>In the code below I am stupidly trying to use a string:</p>
<p><code>cond_array[1]['cond']</code> as a logical operator, to illustrate what I want, because I can't find out how to proceed with this.</p>
<p>Is there an elegant (or just working) way to create these dynamic conditions?</p>
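<p>One common pattern (a sketch, not the only option): map the operator strings to functions from the stdlib <code>operator</code> module; the same calls work element-wise on pandas/NumPy objects, so the result can feed straight into <code>np.where</code>:</p>

```python
import operator

# the comparison strings from `conds` mapped to real functions
OPS = {'<': operator.lt, '>': operator.gt, '==': operator.eq}

def build_condition(lhs, op_name, rhs):
    # scalar here; on a pandas Series the same call is element-wise
    return OPS[op_name](lhs, rhs)

build_condition(5, '<', 10)  # True
```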
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import random
from datetime import datetime, timedelta
data = {
'open': [random.uniform(50, 100) for _ in range(30)],
'high': [random.uniform(100, 150) for _ in range(30)],
'low': [random.uniform(25, 50) for _ in range(30)],
'close': [random.uniform(50, 100) for _ in range(30)],
'volume': [random.randint(1000, 10000) for _ in range(30)],
'datetime': [datetime(2023, 10, 1, 0, 0) + timedelta(hours=i) for i in range(30)]
}
df = pd.DataFrame(data)
# The meat and potatoes
indicators = [{"ind": "open"},{"ind": "rsi"}, {"ind": "macd"}]
conds = [{"cond": "<"}, {"cond": ">"}, {"cond": "=="}, {"cond": "and"}, {"cond": "or"}]
values = [{"val": 10}, {"val": 100}, {"val": 15}, {"val": 17}, {"val": 18}, {"val": 7}]
def create_condition(cond_array):
print(f'{cond_array[0]["ind"]}') # Use double curly braces to escape
#df["signal"] = np.where(df["open"] > 10, 1, -1) <-- what i want to do below
df["signal"] = np.where(df[f'{cond_array[0]["ind"]}'] cond_arr[1]['cond'] df[f'{cond_array[2 ["val"]}'], 1, -1)
selected_conds = [indicators[0],conds[0],values[0]]
create_condition(selected_conds)
</code></pre>
| <python><pandas><numpy> | 2023-10-25 10:19:37 | 2 | 568 | Soma Juice |
77,358,599 | 2,745,609 | Python Convert Duration String to Number Format in Hours | <p>I have a Pandas dataframe with a Duration column which contains durations as text in the following format. Some strings have a day count ("1d") added at the beginning, while others have just the hour, minute and second information:</p>
<pre class="lang-py prettyprint-override"><code>df =
Duration
0 16h:48m:31s
1 0h:02m:49s
2 1d 3h:57m:27s
...
</code></pre>
<p>I want to convert this into a numeric format in the units of Hours. How would you approach this problem? Thanks in advance.</p>
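One possible approach (a sketch that assumes the only two shapes are <code>Hh:Mm:Ss</code> and <code>Nd Hh:Mm:Ss</code>, as in the sample) is a regular expression plus a little arithmetic:

```python
import re

import pandas as pd

def duration_to_hours(s):
    # Optional "Nd " prefix, then "Hh:Mm:Ss" (format assumed from the sample)
    m = re.fullmatch(r"(?:(\d+)d\s+)?(\d+)h:(\d+)m:(\d+)s", s.strip())
    d, h, mi, sec = (int(g) if g else 0 for g in m.groups())
    return d * 24 + h + mi / 60 + sec / 3600

df = pd.DataFrame({"Duration": ["16h:48m:31s", "0h:02m:49s", "1d 3h:57m:27s"]})
df["Hours"] = df["Duration"].map(duration_to_hours)
print(df["Hours"].round(4).tolist())
```

For `"1d 3h:57m:27s"` this gives 24 + 3 + 57/60 + 27/3600 = 27.9575 hours.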
| <python><pandas><datetime><timedelta> | 2023-10-25 10:15:27 | 2 | 474 | Solijoli |
77,358,512 | 2,650,325 | Google Sheets API: socket.timeout: The read operation timed out | <p>I am trying to upload about 40k rows (39345 rows x 60 columns) with Google Sheets API and I am getting the following error:</p>
<pre><code>Error, sleep for 101 seconds
Traceback (most recent call last):
File "C:\***lib\site-packages\pysuite\gsheets.py", line 38, in execute
return func.execute()
File "C:\***lib\site-packages\googleapiclient\_helpers.py", line 130, in positional_wrapper
return wrapped(*args, **kwargs)
File "C:\***lib\site-packages\googleapiclient\http.py", line 923, in execute
resp, content = _retry_request(
File "C:\***lib\site-packages\googleapiclient\http.py", line 222, in _retry_request
raise exception
File "C:\***lib\site-packages\googleapiclient\http.py", line 191, in _retry_request
resp, content = http.request(uri, method, *args, **kwargs)
File "C:\***lib\site-packages\google_auth_httplib2.py", line 218, in request
response, content = self.http.request(
File "C:\***lib\site-packages\httplib2\__init__.py", line 1724, in request
(response, content) = self._request(
File "C:\***lib\site-packages\httplib2\__init__.py", line 1444, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "C:\***lib\site-packages\httplib2\__init__.py", line 1396, in _conn_request
response = conn.getresponse()
File "C:\python\Python39\lib\http\client.py", line 1377, in getresponse
response.begin()
File "C:\python\Python39\lib\http\client.py", line 320, in begin
version, status, reason = self._read_status()
File "C:\python\Python39\lib\http\client.py", line 281, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
File "C:\python\Python39\lib\socket.py", line 704, in readinto
return self._sock.recv_into(b)
File "C:\python\Python39\lib\ssl.py", line 1241, in recv_into
return self.read(nbytes, buffer)
File "C:\python\Python39\lib\ssl.py", line 1099, in read
return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out
</code></pre>
<p>I am trying to circumvent the problem by uploading in batches of 1k rows (I lowered them to 50 rows!) but the problem still persists.</p>
<p>I have also tried:</p>
<pre><code>import socket
######### then:
socket.setdefaulttimeout(3600)
</code></pre>
<p>Now the error is:</p>
<pre><code><HttpError 503 when requesting https://sheets.googleapis.com/v4/spreadsheets/*******/values/%(SheetName)%27%21A1:append?valueInputOption=USER_ENTERED&insertDataOption=INSERT_ROWS&alt=json returned "The service is currently unavailable.". Details: "The service is currently unavailable.">
Process finished with exit code 1
</code></pre>
<p>Do you know if there is a limit to the number of rows one can append to a spreadsheet using this API? Any idea on how to solve this?</p>
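Transient 503s and read timeouts from the Sheets API are usually handled by retrying with exponential backoff (the pattern Google recommends for its APIs) on top of splitting the upload into smaller append calls. A generic sketch of such a wrapper (names and the simulated failure are illustrative, not part of the question's code):

```python
import random
import time

def with_backoff(func, max_tries=5, base=1.0):
    """Call func(); on failure, retry with exponential backoff plus jitter."""
    for attempt in range(max_tries):
        try:
            return func()
        except Exception:
            if attempt == max_tries - 1:
                raise  # out of retries: surface the error
            time.sleep(base * 2 ** attempt + random.random() * base)

# Usage sketch: each batched append goes through the wrapper, e.g.
#   with_backoff(lambda: request.execute())
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("simulated read timeout")
    return "ok"

print(with_backoff(flaky, base=0.0))  # succeeds after two simulated failures
```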
| <python><google-cloud-platform><google-sheets-api> | 2023-10-25 10:03:55 | 1 | 2,417 | Manuel Perez Heredia |
77,358,341 | 4,490,376 | Why are environment variables not being overridden in pytest? | <p>I'm writing some unit tests for a basic flask app. Currently I'm testing the config.py file (mainly to learn, but also to test an environment dependent database configuration).</p>
<p>The file looks like:</p>
<p>config.py</p>
<pre><code>from os import environ as env
from dotenv import load_dotenv
load_dotenv()
class Config:
ENVIRONMENT = env['CONFIG_MODE']
SQLALCHEMY_TRACK_MODIFICATIONS = True
SECRET_KEY = env['SECRET_KEY']
# app deployed on Heroku - heroku sets DATABASE_URL to postgres://, SQLALCHEMY needs postgresql://
if ENVIRONMENT == "development":
SQLALCHEMY_DATABASE_URI = env['DEVELOPMENT_DATABASE_URL']
else:
SQLALCHEMY_DATABASE_URI = env.get('DATABASE_URL').replace("://", "ql://", 1)
</code></pre>
<p>This pulls environment variables from the .env file</p>
<p>.env</p>
<pre><code>CONFIG_MODE = 'development'
DEVELOPMENT_DATABASE_URL = 'postgresql://usr:pwd@localhost:5432/db'
FLASK_APP=app
SECRET_KEY='XXX'
</code></pre>
<p>I've set up a basic pytest unit test to check that the SQLALCHEMY_DATABASE_URL is set correctly, depending on whether the CONFIG_MODE env variable is 'development' or 'not development'.</p>
<p>test_config.py</p>
<pre><code>import os
class TestConfigDev:
development_database_url = 'development_database_url'
os.environ['DEVELOPMENT_DATABASE_URL'] = development_database_url
os.environ['CONFIG_MODE'] = 'development'
from service_authentication.api.config import Config
config = Config
def test_sqlalchemy_track_modifications(self):
"""
GIVEN: SQLALCHEMY_TRACK_MODIFICATIONS variable exists
WHEN: it is queried
THEN: it is True
"""
self.sqlalchemy_track_modifications = self.config.SQLALCHEMY_TRACK_MODIFICATIONS
assert self.sqlalchemy_track_modifications
def test_sqlalchemy_database_uri_dev(self):
"""
GIVEN: CONFIG_MODE is development
AND GIVEN: DEVELOPMENT_DATABASE_URL is set
AND GIVEN: SQLALCHEMY_DATABASE_URI variable exists
WHEN: it is queried
THEN: it returns the DEVELOPMENT_DATABASE_URL
"""
self.sqlalchemy_database_uri = self.config.SQLALCHEMY_DATABASE_URI
assert self.sqlalchemy_database_uri == self.development_database_url
class TestConfigNotDev:
database_url = 'postgres://database_url'
os.environ['DATABASE_URL'] = database_url
os.environ['CONFIG_MODE'] = 'not_development'
from service_authentication.api.config import Config
config = Config
def test_sqlalchemy_database_uri_not_dev(self):
"""
GIVEN: CONFIG_MODE is not development
AND GIVEN: DATABASE_URL is set
AND GIVEN: SQLALCHEMY_DATABASE_URI variable exists
WHEN: it is queried
THEN: it returns the modified DATABASE_URL
"""
self.sqlalchemy_database_uri = self.config.SQLALCHEMY_DATABASE_URI
self.database_url = self.database_url.replace("://", "ql://", 1)
assert self.config.ENVIRONMENT == 'not_development'
assert self.sqlalchemy_database_uri == self.database_url
</code></pre>
<p>The TestConfigDev tests pass with no issues.</p>
<p>The TestConfigNotDev tests fail - the ENVIRONMENT variable is still set to 'development', despite being overridden to 'not_development' in the Class definition.</p>
<p>I've removed CONFIG_MODE as a variable set in .env - isolating its instantiation to the test_config.py file. The issue persisted.</p>
<p>I've tried separating these out into two test files (test_config_dev.py and test_config_not_dev.py). Whichever runs first sets the ENVIRONMENT variable.</p>
<p>Thinking that the issue is that the env variables are set when <code>import os</code> is run, I've tried using <code>importlib.reload(os)</code> in the TestConfigNotDev class, before and after overriding the env variables, but it's still failing because the ENVIRONMENT variable remains as 'development'.</p>
<p>Curiously, the DEVELOPMENT_DATABASE_URL variable is being overridden in TestConfigDev.</p>
<p>This means that something else is going on with the way the environment variables are being set that I'm overlooking. Given how fundamental env variables are, I'd like to understand what's going on here before I go too much further.</p>
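What is going on here is Python's module cache, not the environment variables themselves: <code>config.py</code> runs exactly once, on the first <code>from ... import Config</code>, and evaluates <code>env['CONFIG_MODE']</code> at that moment; the second test class's import just returns the cached module. A self-contained sketch of the effect and the <code>importlib.reload</code> fix (the throwaway <code>demo_config</code> module is invented for illustration):

```python
import importlib
import os
import sys
import tempfile

# A tiny stand-in for config.py: it reads an env var at import time.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "demo_config.py"), "w") as f:
    f.write("import os\nENVIRONMENT = os.environ['CONFIG_MODE']\n")
sys.path.insert(0, tmp)

os.environ["CONFIG_MODE"] = "development"
import demo_config
first = demo_config.ENVIRONMENT              # 'development'

os.environ["CONFIG_MODE"] = "not_development"
import demo_config                           # served from sys.modules cache
cached = demo_config.ENVIRONMENT             # still 'development'!

demo_config = importlib.reload(demo_config)  # re-executes the module body
reloaded = demo_config.ENVIRONMENT           # 'not_development'
print(first, cached, reloaded)
```

In the real tests, the equivalent fix would be to set the variables with pytest's `monkeypatch` fixture and `importlib.reload` the `service_authentication.api.config` module before grabbing `Config` (or to read the env vars lazily instead of at class-definition time).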
| <python><pytest><.env> | 2023-10-25 09:40:21 | 1 | 551 | 741852963 |
77,358,007 | 21,404,794 | Instantiating various crabnet models in same code | <p>I stumbled upon <a href="https://crabnet.readthedocs.io/en/latest/" rel="nofollow noreferrer">crabnet</a> recently, and I'm using it in a catalysis project. It's been a good addition to the project, but I've found a problem.</p>
<p>I wanted to train crabnet on different subsets of my dataset (filtered on reaction conditions) to see if that gave better results (I sometimes have the same composition under different reaction conditions, which gives different results). The problem seems to be that when you instantiate 2 instances of crabnet, some state seems to be shared between them.</p>
<p>Here's a Minimal Working example of how to replicate the problem:</p>
<pre class="lang-py prettyprint-override"><code>"""Basic usage of CrabNet regression on elasticity dataset."""
from crabnet.utils.data import get_data
from crabnet.data.materials_data import elasticity, example_materials_property
from crabnet.crabnet_ import CrabNet
train_df, val_df = get_data(elasticity, "train.csv", dummy=True)
train_df_2, val_df_2 = get_data(example_materials_property, "train.csv", dummy=True)
cb = CrabNet(mat_prop="elasticity")
cb.fit(train_df)
val_pred, val_sigma = cb.predict(val_df, return_uncertainty=True)
cbn = CrabNet(mat_prop="example_materials_property")
cbn.fit(train_df_2)
val_pred_2, val_sigma_2 = cbn.predict(val_df_2, return_uncertainty=True)
</code></pre>
<p>The example is just the <a href="https://crabnet.readthedocs.io/en/latest/examples.html" rel="nofollow noreferrer">Basic Usage example</a> in <a href="https://crabnet.readthedocs.io/en/latest/" rel="nofollow noreferrer">Crabnet's Docs</a> but with each step repeated (and changing the dataset just in case).</p>
<p>This returns this error: <code>File ".conda\lib\site-packages\torch\optim\optimizer.py", line 271, in wrapper for pre_hook in chain(_global_optimizer_pre_hooks.values(), self._optimizer_step_pre_hooks.values()): AttributeError: 'SWA' object has no attribute '_optimizer_step_pre_hooks'</code>. The whole error log refers to line 14 <code>cbn.fit(train_df_2)</code> as the trigger of the problem.</p>
<p>How could I instantiate more than 1 model?</p>
| <python><neural-network><artificial-intelligence> | 2023-10-25 08:55:16 | 1 | 530 | David Siret MarquΓ©s |
77,357,935 | 13,518,907 | Add custom prompts to Llama 2 for RAG | <p>I have downloaded Llama 2 locally and it works. Now I want to adjust my prompts/change the default prompt to force Llama 2 to answer in a different language, like German. Here is my code:</p>
<pre><code>from langchain.llms import LlamaCpp
from langchain.chains import LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.prompts import PromptTemplate
from langchain.document_loaders import PyPDFLoader
from langchain.vectorstores import Chroma
# embeddings are numerical representations of the question and answer text
from langchain.embeddings import HuggingFaceEmbeddings
# use a common text splitter to split text into chunks
from langchain.text_splitter import RecursiveCharacterTextSplitter
# for token-wise streaming so you'll see the answer gets generated token by token when Llama is answering your question
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = LlamaCpp(
model_path="MYPATH/llama.cpp/models/7B/ggml-model-q4_0.bin",
temperature=0.0,
top_p=1,
n_ctx=6000,
callback_manager=callback_manager,
verbose=True
)
# Load Pdf-File
loader = PyPDFLoader("myfile.pdf")
documents = loader.load()
# split the loaded documents into chunks
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=20)
all_splits = text_splitter.split_documents(documents)
# create the vector db to store all the split chunks as embeddings
embeddings = HuggingFaceEmbeddings()
vectordb = Chroma.from_documents(
documents=all_splits,
embedding=embeddings,
)
# use another LangChain's chain, RetrievalQA, to associate Llama with the loaded documents stored in the vector db
from langchain.chains import RetrievalQA
qa_chain = RetrievalQA.from_chain_type(
llm,
retriever=vectordb.as_retriever()
)
question = "What is the biggest city in Germany?"
result = qa_chain({"query": question})
</code></pre>
<p>At which part do I have to insert my own prompt? As far as I understand, the default Llama 2 prompt is currently being used. I tried to insert a prompt at the following point, but the model kept answering in English:</p>
<pre><code>qa_chain = RetrievalQA.from_chain_type(
llm(prompt="Please answer only in the German language!"),
retriever=vectordb.as_retriever()
)
</code></pre>
<p>I saw that the prompt template for Llama 2 looks as follows:</p>
<pre><code><s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
There's a llama in my garden π± What should I do? [/INST]
</code></pre>
<p>Thanks in advance!</p>
| <python><nlp><prompt><langchain><large-language-model> | 2023-10-25 08:44:32 | 1 | 565 | Maxl Gemeinderat |
77,357,723 | 3,685,918 | How can I create xticks with varying intervals? | <p>I am drawing a line chart using matplotlib as shown below.</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import io
temp = u"""
tenor,yield
1M,5.381
3M,5.451
6M,5.505
1Y,5.393
5Y,4.255
10Y,4.109
"""
data = pd.read_csv(io.StringIO(temp), sep=",")
plt.plot(data['tenor'], data['yield'])
</code></pre>
<p>Output: The tick intervals on the x-axis are all the same.</p>
<p><a href="https://i.sstatic.net/LdQwW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LdQwW.png" alt="enter image description here" /></a></p>
<p>What I want : Set the tick interval of the x-axis differently as shown in the screen below</p>
<p>Is there any way to set the tick intervals differently?</p>
<p><a href="https://i.sstatic.net/ZTetd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZTetd.png" alt="enter image description here" /></a></p>
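Because the tenors are plotted as categorical strings, matplotlib spaces them evenly. Converting each tenor to a numeric position (here, years to maturity via a small hypothetical helper) and keeping the original strings as tick labels gives unevenly spaced ticks:

```python
import io

import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
import pandas as pd

temp = u"""
tenor,yield
1M,5.381
3M,5.451
6M,5.505
1Y,5.393
5Y,4.255
10Y,4.109
"""
data = pd.read_csv(io.StringIO(temp), sep=",")

def tenor_to_years(t):
    # "3M" -> 0.25, "5Y" -> 5.0 (assumes only M/Y suffixes, as in the data)
    n, unit = int(t[:-1]), t[-1]
    return n / 12 if unit == "M" else float(n)

x = data["tenor"].map(tenor_to_years)
fig, ax = plt.subplots()
ax.plot(x, data["yield"])
ax.set_xticks(x)
ax.set_xticklabels(data["tenor"])  # original labels at uneven positions
```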
| <python><pandas><matplotlib><xticks> | 2023-10-25 08:17:17 | 1 | 427 | user3685918 |
77,357,714 | 13,942,929 | Git & Python: How to remove a cache/pycache of a deleted folder? | <p>Let's say my project structure looks like this:</p>
<pre><code>Main Folder
- SubCode Folder
- SubTest Folder
</code></pre>
<p>I want to delete every cache entry for that SubTest folder, including the folder itself.
How can I do that?</p>
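For the on-disk bytecode caches, a small sketch with pathlib removes every <code>__pycache__</code> under the project (the demo tree names are invented). If the <code>.pyc</code> files were also committed, something like <code>git rm -r --cached SubTest</code> plus a <code>__pycache__/</code> entry in <code>.gitignore</code> would handle the git side:

```python
import shutil
import tempfile
from pathlib import Path

def purge_pycache(root):
    """Delete every __pycache__ directory (and the .pyc files inside) under root."""
    for cache_dir in Path(root).rglob("__pycache__"):
        shutil.rmtree(cache_dir)

# Demo on a throwaway tree mimicking the question's layout
base = Path(tempfile.mkdtemp()) / "MainFolder"
cache = base / "SubTest" / "__pycache__"
cache.mkdir(parents=True)
(cache / "mod.cpython-39.pyc").write_bytes(b"")
purge_pycache(base)
print(cache.exists())  # False
```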
| <python><git><caching> | 2023-10-25 08:15:43 | 0 | 3,779 | Punreach Rany |
77,357,678 | 4,944,986 | SciPy optimization doesn't return optimal solution | <p>I'm attempting to maximize a sectional area function related to my master thesis. I've come up with a function <code>double_T_Ixx</code>, and I wish to optimize it using SciPy's <code>minimize</code> function. While I understand the analytical solution, the optimization output is not aligning with my expectations.</p>
<p>Without too much background, I came up with this function:</p>
<pre class="lang-py prettyprint-override"><code>
def double_T_Ixx(lengths, thicknesses):
A_top_l = lengths[0] * thicknesses[0]
A_top_r = lengths[1] * thicknesses[1]
A_mid = lengths[2] * thicknesses[2]
A_bot_l = lengths[3] * thicknesses[3]
A_bot_r = lengths[4] * thicknesses[4]
y_top = lengths[2]
y_mid = lengths[2] / 2
y_bot = 0
Ixx_top_l = (lengths[0] * thicknesses[0]**3) / 12
Ixx_top_r = (lengths[1] * thicknesses[1]**3) / 12
Ixx_mid = (thicknesses[2] * lengths[2]**3) / 12
Ixx_bot_l = (lengths[3] * thicknesses[3]**3) / 12
Ixx_bot_r = (lengths[4] * thicknesses[4]**3) / 12
A_total = A_top_l + A_top_r + A_mid + A_bot_l + A_bot_r
y_composite = (A_top_l*y_top + A_top_r*y_top + A_mid*y_mid + A_bot_l*y_bot + A_bot_r*y_bot) / A_total
Ixx = (Ixx_top_l +
Ixx_top_r +
Ixx_mid +
Ixx_bot_l +
Ixx_bot_r +
A_top_l*(y_top - y_composite)**2 +
A_top_r*(y_top - y_composite)**2 +
A_mid *(y_mid - y_composite)**2 +
A_bot_l*(y_bot - y_composite)**2 +
A_bot_r*(y_bot - y_composite)**2)
return Ixx
</code></pre>
<p>wrapping this up into an objective and constraints yields:</p>
<pre class="lang-py prettyprint-override"><code>
MIN_LENGTHS = 1
MAX_LENGTH = 10
MIN_THICKNESS = 0.5
MAX_THICKNESS = 2.5
MAX_AREA = 10
def objective(x):
num_segments = len(x) // 2
lengths = x[:num_segments]
thicknesses = x[num_segments:]
return -double_T_Ixx(lengths, thicknesses)
def constraint_area(x):
lengths = x[:len(x)//2]
thicknesses = x[len(x)//2:]
A_top_l = lengths[0] * thicknesses[0]
A_top_r = lengths[1] * thicknesses[1]
A_mid = lengths[2] * thicknesses[2]
A_bot_l = lengths[3] * thicknesses[3]
A_bot_r = lengths[4] * thicknesses[4]
return MAX_AREA - (A_top_l + A_top_r + A_mid + A_bot_l + A_bot_r)
</code></pre>
<h1>Expected Solution</h1>
<p>From domain knowledge, I know there are multiple solutions. All of them have the same objective value, but they differ in parameter values. One representative solution is:</p>
<pre><code>c = [l1 = 1,
l2 = 1,
l3 = 10,
l4 = 1,
l5 = 1,
t1 = 1.25,
t2 = 1.25,
t3 = 0.5,
t4 = 1.25,
t5 = 1.25]
</code></pre>
<p>Generally l3 should be maximal while t3 should be minimal, l1, l2, l4, l5 should all be minimized, and t1, t2, t4, t5 should be maximized, given that we don't violate the area constraint.</p>
<h1>Attempt with SciPy's Minimize:</h1>
<pre class="lang-py prettyprint-override"><code>constraints = [{'type':'eq', 'fun':lambda x: constraint_area(x)}]
initial_guess = [(b[0] + b[1] / 2) for b in bounds]
result = minimize(
fun=objective,
x0=initial_guess,
method='SLSQP',
bounds=bounds,
constraints=constraints,
tol=1e-12,
options={'disp': True}
)
</code></pre>
<h1>Problem:</h1>
<p>The solution I get from SciPy is close to the analytical solution in terms of the objective function, but not in terms of the actual values:</p>
<p>[2.265, 2.265, 10.000, 2.321, 2.270, 0.532, 0.567, 0.500, 0.562, 0.531]</p>
<p>However, as mentioned earlier, it would be optimal if l1, l2, l4, l5 equaled 1 while t1, t2, t4, t5 were maximized.</p>
<h1>Question:</h1>
<p>How can I modify my optimization setup to achieve the expected optimal solution? Is my problem ill-posed for SciPy's optimizer, or am I missing something in its formulation or setup?</p>
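One thing worth checking is whether the problem is genuinely degenerate for a local solver: when a whole set of designs is equally optimal, SLSQP legitimately stops at whichever optimal point it reaches, and which one that is depends on the initial guess. A toy illustration of that effect (invented objective, not the Ixx function):

```python
import numpy as np
from scipy.optimize import minimize

# Degenerate optimum: every point on the line x + y = 2 minimizes this,
# so different starting points give different, equally valid minimizers.
f = lambda v: (v[0] + v[1] - 2.0) ** 2

sols = [minimize(f, x0, method="SLSQP").x for x0 in ([0.0, 0.0], [3.0, -2.0])]
for s in sols:
    print(np.round(s, 3), "objective:", round(float(f(s)), 6))
```

To steer toward a particular representative solution, a common trick is to add a tiny regularization term (e.g. a small penalty on the flange lengths) or to run a multi-start and pick the preferred tie-breaker afterwards.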
| <python><optimization><scipy> | 2023-10-25 08:11:44 | 1 | 945 | Finn Eggers |
77,357,606 | 812,912 | Pylint recursive bug when directory is prefix of another directory | <p>I am observing weird behavior from pylint when one directory name is a prefix of another directory's name. Here is a minimal setup to reproduce:</p>
<pre><code>mkdir pylint_test
cd pylint_test
mkdir dataset
touch dataset/__init__.py
mkdir dataset_123
echo "from collections import Counter" > dataset_123/a.py
echo "[master]\nenable= all" > pylintrc
pylint --recursive=y .
</code></pre>
<p>I would expect to get an error for unused import in a.py, but this does not happen. What is more weird - if I remove the file <code>__init__.py</code> in dataset (or rename it) I get the output as expected:</p>
<pre><code>pylint --recursive=y .
************* Module a
eval_123/a.py:1:0: C0114: Missing module docstring (missing-module-docstring)
eval_123/a.py:1:0: W0611: Unused Counter imported from collections (unused-import)
</code></pre>
<p>And what is more weird - if I do not remove <code>__init__.py</code> but rename the two directories to <code>eval</code> and <code>eval_123</code> respectively, everything works again.</p>
<p>Another experiment is renaming dataset_123 to dataset_b123. In that case pylint reports the issues as expected (even when <code>__init__.py</code> is present).</p>
<p>I am on Mac Ventura 13.4.1 and here are my package versions:</p>
<pre><code>pylint --version
pylint 2.15.2
astroid 2.13.5
Python 3.8.13 (default, Oct 19 2022, 17:52:09)
[Clang 12.0.0 ]
</code></pre>
<p>However this also reproduces with other python versions.</p>
<p>Does anyone have a clue what is going on here?</p>
| <python><pylint> | 2023-10-25 07:59:59 | 1 | 71,364 | Ivaylo Strandjev |
77,357,375 | 4,047,444 | Perform Calculation on One DataFrame Based on the Common Columns of a Second DataFrame | <p>Assume the two <code>DataFrames</code> have the same columns but different rows (even though the examples here are equal in length). I was able to find the commonalities of the two DataFrames. I want to use that overlap to perform calculations on one of the DataFrames and put the result in another DataFrame. In summary, I want to update the selling price of the Nike products to be 10% higher than the Adidas products, with the constraint that both are in the same category and size.</p>
<pre class="lang-py prettyprint-override"><code>percent = 1.10
data1 = {
'item_id': [1001, 1002, 1003, 1004],
'brand': ['Adidas', 'Adidas', 'Adidas', 'Adidas'],
'category_id': [241, 241, 238, 717],
'size': [8, 7.5, 9, 10],
'cost_price': [8.02, 4.94, 1.49, 18.44],
'unit_price': [12.89, 7.98, 2.44, 29.53]
}
data2 = {
'item_id': [1005, 1006, 1007, 1008],
'brand': ['Nike', 'Nike', 'Nike', 'Nike'],
'category_id': [512, 241, 604, 717],
'size': [7.5, 8, 9, 10],
'cost_price': [35.90, 11.62, 15.03, 20.53],
'unit_price': [48.14, 16.29, 21.09, 28.20]
}
adidas = pd.DataFrame(data1)
nike = pd.DataFrame(data2)
columns = ['category_id', 'size']
common = adidas [columns].merge(nike[columns])
common
category_id size
0 241 8.0
1 717 10.0
df = pd.concat([adidas, nike]).merge(common)
df
item_id brand category_id size cost_price unit_price
0 1001 Adidas 241 8.0 8.02 12.89
1 1006 Nike 241 8.0 11.62 16.29
2 1004 Adidas 717 10.0 18.44 29.53
3 1008 Nike 717 10.0 20.53 28.20
</code></pre>
<p>If they were a dictionary, I would do:</p>
<pre class="lang-py prettyprint-override"><code>cat_match = []
for catid in data2['category_id']:
try:
index = data1['category_id'].index(catid)
cat_match.append(index)
except ValueError:
pass
size_match = []
for size in data2['size']:
try:
index = data1['size'].index(size)
        size_match.append(index)
except ValueError:
pass
print(cat_match)
[0, 3]
print(size_match)
[1, 0, 2, 3]
</code></pre>
<p>I would then calculate the "unit_cost" (aka selling price) of <code>data1</code> by making the markup = unit_cost * percent at index 0 and 3. Next, I would search for the same index in <code>data2</code> and create a new row called "new_sell" with the markup calculated which will be added to data2 dictionary.</p>
<p>I also tried the below code from another SO answer but I received an error.</p>
<pre class="lang-py prettyprint-override"><code>df2['new_sell'] = df2.apply(lambda row: row['unit_price'] * 1.10 if row['category_id'] == df1['category_id'] and row['size'] == df1['size'] else None, axis=1)
...
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>Both attempts are quite messy and convoluted. Is there an easier way to achieve the following using pandas?</p>
<pre class="lang-py prettyprint-override"><code>df2
item_id brand category_id size cost_price unit_price sell_price
0 1006 Nike 241 8.0 11.62 16.29 14.18 # (12.89*1.1)
1 1008 Nike 717 10.0 20.53 28.20 32.48 # (29.53*1.1)
</code></pre>
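The loop-and-index machinery can collapse into a single inner merge on the two key columns. A sketch that keeps only the columns needed for the output (trimmed from the question's data1/data2):

```python
import pandas as pd

adidas = pd.DataFrame({
    'item_id': [1001, 1002, 1003, 1004],
    'category_id': [241, 241, 238, 717],
    'size': [8, 7.5, 9, 10],
    'unit_price': [12.89, 7.98, 2.44, 29.53],
})
nike = pd.DataFrame({
    'item_id': [1005, 1006, 1007, 1008],
    'category_id': [512, 241, 604, 717],
    'size': [7.5, 8, 9, 10],
    'unit_price': [48.14, 16.29, 21.09, 28.20],
})

# Inner merge keeps only Nike rows with an Adidas counterpart in the
# same category and size, and brings along the Adidas price to mark up.
df2 = nike.merge(
    adidas[['category_id', 'size', 'unit_price']].rename(
        columns={'unit_price': 'adidas_price'}),
    on=['category_id', 'size'],
).sort_values('item_id', ignore_index=True)
df2['sell_price'] = (df2['adidas_price'] * 1.10).round(2)
df2 = df2.drop(columns='adidas_price')
print(df2)
```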
| <python><pandas><dataframe> | 2023-10-25 07:25:07 | 2 | 861 | dreamzboy |
77,357,094 | 22,466,650 | How to index a df with a 2D array and look up values from another one? | <p>My inputs are <code>df</code> and two arrays:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'id': ['id1', 'id2', 'id3', 'id4', 'id5']})
indexes = np.array(
[[1, 2],
[4, 0],
[0, 1],
[2, 0],
[1, 0]])
values = np.array(
[[0.012, 0.019],
[0.009, 0.012],
[0.019, 0.028],
[0.042, 0.061],
[0.009, 0.021]])
</code></pre>
<p>I'm trying to get the corresponding ids based on the <code>indexes</code> array and also at the same time pull up the values.</p>
<p>My code below gives the expected output, but it not only raises a warning, it is also very slow on my dataset.</p>
<pre><code>wanted = df.copy()
for i, j in enumerate(indexes):
wanted.at[i, 'list_ids'] = ', '.join(df.iloc[j].squeeze().tolist())
for i, j in enumerate(values):
wanted.at[i, 'list_values'] = np.array(j, dtype='object')
print(wanted)
id list_ids list_values
0 id1 id2, id3 [0.012, 0.019]
1 id2 id5, id1 [0.009, 0.012]
2 id3 id1, id2 [0.019, 0.028]
3 id4 id3, id1 [0.042, 0.061]
4 id5 id2, id1 [0.009, 0.021]
</code></pre>
<p>Do you guys know how to improve it or do you have any other suggestions ?</p>
| <python><pandas> | 2023-10-25 06:37:22 | 1 | 1,085 | VERBOSE |
77,357,034 | 7,495,742 | Frame Loader over a sleeping function in Python | <p>I wrote this snippet of code but I cannot get the expected behaviour; I guess I must use <em><strong>threading</strong></em> but I haven't figured out how to use it correctly :(</p>
<p>What I would like is for the second Frame to continue to show the <em><strong>loading</strong></em> animation while the main frame runs a <em><strong>sleeping</strong></em> function.</p>
<p>Here's the code :</p>
<pre><code>import threading
import time
import wx
class MainFrame(wx.Frame):
def __init__(self, parent, id, title):
wx.Frame.__init__(self, None, id, "My main frame", wx.DefaultPosition, wx.Size(500, 300),style=wx.MINIMIZE_BOX|wx.SYSTEM_MENU | wx.CAPTION | wx.CLOSE_BOX|wx.STAY_ON_TOP)
sizer = wx.BoxSizer(wx.VERTICAL)
self.panel = wx.Panel(self,-1)
self.panel.Fit()
self.panel.Show()
#This will freeze while sleeping...
#self.loader = Loader(self,-1,title)
#self.loader.Show()
self.button = wx.Button(self.panel,-1,"Go To Sleep")
self.Bind(wx.EVT_BUTTON, self.sleepy_func, self.button)
self.another_button = wx.Button(self.panel,-1,"Go To Another Sleep")
self.Bind(wx.EVT_BUTTON, self.another_sleepy_func, self.another_button)
sizer.AddStretchSpacer(1)
sizer.Add(self.button, 0, wx.ALIGN_CENTER)
sizer.AddSpacer(20)
sizer.Add(self.another_button, 0, wx.ALIGN_CENTER)
sizer.AddStretchSpacer(1)
self.panel.SetSizerAndFit(sizer)
def sleepy_func(self,evt):
time.sleep(10)
def another_sleepy_func(self,evt):
time.sleep(10)
class Loader(wx.Frame):
def __init__(self, parent, id, title):
wx.Frame.__init__(self, None, id, "Please Wait...", wx.DefaultPosition, wx.Size(500, 300),style=wx.MINIMIZE_BOX|wx.SYSTEM_MENU | wx.CAPTION | wx.CLOSE_BOX|wx.STAY_ON_TOP)
sizer = wx.BoxSizer(wx.VERTICAL)
self.panel = wx.Panel(self,-1)
self.panel.Fit()
self.panel.Show()
self.txt = wx.StaticText(self.panel,-1,"Please Wait...")
self.spinner = wx.ActivityIndicator(self.panel, size=(30, 30))
sizer.AddStretchSpacer(1)
sizer.Add(self.txt, 0, wx.ALIGN_CENTER)
sizer.Add(self.spinner, 1, wx.ALIGN_CENTER)
sizer.AddStretchSpacer(1)
self.panel.SetSizerAndFit(sizer)
self.spinner.Start()
#usage : put in wx.Frame class, then call Show/Hide or Destroy
if __name__ == "__main__":
app = wx.App()
frame = MainFrame(None,-1,None)
frame.Show(True)
frame.Centre()
app.MainLoop()
</code></pre>
<p>Thanks everyone for helping! I've never used <em><strong>threads</strong></em> before...</p>
<p>I didn't leave my <em><strong>threading tests</strong></em> in; they were too much of a mess and not working!</p>
<p>Already tried solutions: <em><strong>Threading</strong></em>, <em><strong>Multiprocessing</strong></em></p>
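The generic pattern is to keep the GUI (main) thread free and push the sleepy function onto a worker thread, signaling completion through a callback; in wxPython that callback should go through <code>wx.CallAfter</code> so widgets are only touched from the GUI thread. A GUI-free sketch of the pattern (the helper name is invented):

```python
import threading
import time

def run_in_background(work, on_done):
    """Run `work` in a daemon thread so the caller (e.g. a GUI event loop)
    stays responsive; call `on_done(result)` when it finishes. In wxPython,
    on_done should hop back to the GUI thread via wx.CallAfter."""
    def target():
        on_done(work())
    t = threading.Thread(target=target, daemon=True)
    t.start()
    return t

done = threading.Event()
run_in_background(lambda: time.sleep(0.1) or "slept", lambda r: done.set())
print(done.is_set())   # typically False: main thread is free while work runs
done.wait(5)
print(done.is_set())   # True
```

In the frames above, `sleepy_func` would show the `Loader`, then call something like `run_in_background(lambda: time.sleep(10), lambda _: wx.CallAfter(self.loader.Destroy))` instead of sleeping on the GUI thread.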
| <python><multithreading><wxpython><sleep><loader> | 2023-10-25 06:24:46 | 1 | 357 | Garbez FranΓ§ois |
77,357,003 | 17,519,895 | Docker image giving Timeout after 300 seconds | <p>I am testing an AWS Lambda Docker image on my local computer; if the process finishes in under 5 min / 300 sec it runs smoothly, but when the runtime approaches 300 sec it gives a timeout error. I haven't set any time limit anywhere.
Here's my Dockerfile:</p>
<pre><code>FROM public.ecr.aws/lambda/python:3.9
ENV GOOGLE_APPLICATION_CREDENTIALS cloud_vision_api/vision-api.json
RUN yum install git -y
RUN yum update -y && yum install amazon-linux-extras -y && PYTHON=python2 amazon-linux-extras install epel -y && yum update -y && yum install git-lfs -y
RUN git lfs install
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY . .
CMD [ "handler.handle_response"]
</code></pre>
<p>And here's the timeout error:</p>
<pre><code>25 Oct 2023 05:56:18,129 [WARNING] (rapid) Reset initiated: Timeout
25 Oct 2023 05:56:18,130 [INFO] (rapid) Sending SIGKILL to runtime-1(14).
25 Oct 2023 05:56:18,155 [INFO] (rapid) Stopping runtime domain
25 Oct 2023 05:56:18,155 [INFO] (rapid) Waiting for runtime domain processes termination
25 Oct 2023 05:56:18,155 [INFO] (rapid) Stopping operator domain
25 Oct 2023 05:56:18,155 [INFO] (rapid) Starting runtime domain
END RequestId: 9501e4ef-f121-4385-9345-30e27825b70b
REPORT RequestId: 9501e4ef-f121-4385-9345-30e27825b70b Duration: 300000.00 ms Billed Duration: 300000 ms Memory Size: 3008 MB Max Memory Used: 3008 MB
</code></pre>
| <python><amazon-web-services><docker><aws-lambda> | 2023-10-25 06:19:16 | 1 | 421 | Aleef |
77,356,809 | 1,903,629 | In dagster sensors is there a way to re-run a RunRequest for a run key | <p>I have a sensor: whenever there is a new file in my file system, it creates a run request with the file path as the run_key, and an asset job is invoked. That job fetches the file, does some operations, and writes the result to an external database.</p>
<p>But it is possible that the external database write could fail, so I need to retry the file at a later point in time. I don't want to add a retry mechanism in the asset materialization. So is there a way to make Dagster invalidate the run request and allow me to invoke it again with the same file path?</p>
| <python><dagster> | 2023-10-25 05:26:15 | 0 | 4,193 | ted |
77,356,779 | 13,443,954 | How to use chained page object locators? | <p>I have a class that contains the page object locators:</p>
<pre><code>class TemplateListPage:
    def __init__(self, page):
        self.page = page
        self.list_first_row = self.page.locator(".grid-row").first
        self.use_btn = self.page.locator(".useTemplate")
</code></pre>
<p>And I would like to chain the button locator with the first-row locator in an assertion in the spec test.
Like</p>
<pre><code>expect(TemplateListPage().list_first_row.use_btn).to_have_count(0)
</code></pre>
<p>or</p>
<pre><code>TemplateListPage().list_first_row.use_btn.click()
</code></pre>
<p>But got an error:</p>
<blockquote>
<p>AttributeError: 'Locator' object has no attribute 'use_btn'</p>
</blockquote>
<p>Is there any way that I can chain the page object locators in the test?</p>
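The error happens because <code>.first</code> returns a plain <code>Locator</code>, which knows nothing about the page object's other attributes; chaining has to go through <code>Locator.locator()</code>. A dependency-free sketch of that idea (FakePage/FakeLocator are invented stand-ins; Playwright's real <code>Locator.locator</code> chains the same way, so the page object would define <code>self.first_row_use_btn = self.list_first_row.locator(".useTemplate")</code>):

```python
class FakeLocator:
    """Tiny stand-in for a Playwright Locator, just enough to show chaining."""
    def __init__(self, selector):
        self.selector = selector

    @property
    def first(self):
        return FakeLocator(self.selector + " >> nth=0")

    def locator(self, selector):  # same chaining API shape as Playwright
        return FakeLocator(self.selector + " >> " + selector)

class FakePage:
    def locator(self, selector):
        return FakeLocator(selector)

class TemplateListPage:
    def __init__(self, page):
        self.page = page
        self.list_first_row = page.locator(".grid-row").first
        # Chain on the Locator object, not via attribute lookup on it:
        self.first_row_use_btn = self.list_first_row.locator(".useTemplate")

pg = TemplateListPage(FakePage())
print(pg.first_row_use_btn.selector)  # .grid-row >> nth=0 >> .useTemplate
```

The spec test can then assert against `pg.first_row_use_btn` directly (with the real Playwright `expect`).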
| <python><playwright><playwright-python> | 2023-10-25 05:17:31 | 1 | 333 | M AndrΓ‘s |
77,356,692 | 1,718,989 | Python Operator Precedence with Shortcut Operator? | <p>I understand that Python follows operator precedence rules summarized by the acronym PEMDAS, or P-E-MD-AS</p>
<p>Now Python happens to use shortcut operators so for example if I were to write</p>
<pre><code>x=5
x=x+1
</code></pre>
<p>This could be re-written as</p>
<pre><code>x+=1
</code></pre>
<p>Now I noticed something a bit odd when I took this a step further and tried to have multiple operations so for example</p>
<pre><code>x=6
y=2
x=x/2*3
</code></pre>
<p>Going left to right x then becomes 9.0</p>
<p>If I try to re-write the above with shortcut syntax I get the following</p>
<pre><code>x/=2*3
</code></pre>
<p>But this results in 1.0</p>
<p>It seems that the multiplication on the right-hand side takes place before the division shortcut operator. I thought we would be working from left to right, so I am confused about how this works.</p>
<p>Is this always the case? If so why does it work this way?</p>
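Yes, this is always the case, and it is not a precedence interaction at all: the language reference defines an augmented assignment <code>x op= expr</code> as evaluating the entire right-hand expression first and then applying the operation, i.e. <code>x /= 2 * 3</code> means <code>x = x / (2 * 3)</code>:

```python
x = 6
x = x / 2 * 3        # same-precedence operators, left to right: (6 / 2) * 3
left_to_right = x    # 9.0

x = 6
x /= 2 * 3           # whole right-hand side is one expression: x = x / (2 * 3)
augmented = x        # 1.0
print(left_to_right, augmented)
```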
| <python><shortcut><operator-precedence> | 2023-10-25 04:52:51 | 2 | 311 | chilly8063 |
77,356,663 | 1,854,821 | tox-conda: multiple Python interpreters for tests | <p>All afternoon I've been reading and rereading documentation for <a href="https://tox.wiki/en/latest/config.html" rel="nofollow noreferrer">tox</a> and <a href="https://github.com/tox-dev/tox-conda" rel="nofollow noreferrer">tox-conda</a>, <a href="https://cloudcity.io/blog/2018/03/12/multiversion-testing-with-tox/" rel="nofollow noreferrer">lots</a> of <a href="https://www.seanh.cc/2018/09/01/tox-tutorial/#envlist" rel="nofollow noreferrer">blog</a> and <a href="https://stackoverflow.com/questions/77018312/pytest-specify-different-environments-for-different-tests">Stack Overflow</a> <a href="https://stackoverflow.com/a/57654165/1854821">posts</a>, and I still haven't a clue how to make tox use <a href="https://en.wikipedia.org/wiki/Conda_(package_manager)" rel="nofollow noreferrer">Conda</a> to test my Python package against multiple Python interpreter versions.</p>
<p>I'm an Earth scientist; I use <a href="https://github.com/conda-forge/miniforge" rel="nofollow noreferrer">Miniforge</a> to manage my Python environments because it handles non-Python dependencies (and, for that matter, <a href="https://en.wikipedia.org/wiki/R_%28programming_language%29" rel="nofollow noreferrer">R</a> projects) nicely. So, I start off by creating a virtual environment to contain the tox/tox-conda setup:</p>
<pre class="lang-none prettyprint-override"><code>mamba create -n sandbox python=3.10 tox-conda
mamba activate sandbox
</code></pre>
<p>Then I create Conda environments to give tox access to Python 3.10 and 3.11:</p>
<pre class="lang-none prettyprint-override"><code>mamba create -n py311 python=3.11
mamba create -n py310 python=3.10
</code></pre>
<p>Then I try to test my package against 3.10 and 3.11 by running tox in my Python package directory with this <em>tox.ini</em> file:</p>
<pre class="lang-none prettyprint-override"><code>[tox]
requires = tox-conda
envlist = py310, py311
isolated_build = True
[testenv]
deps = pytest
commands =
pytest
</code></pre>
<p>It works like a charm for 3.10, I think because my base environment for tox uses 3.10. However, tox can't find the interpreter for 3.11, and I can't work out what to put in tox.ini to point it there. The error I get is <code>ERROR: cowardly refusing to delete 'envdir' (it does not look like a virtualenv): /home/timh/Code/pgongrid_sandbox/.tox/py311</code></p>
<p>I've tried adding <code>[testenv:py310]</code> and <code>[testenv:py311]</code> sections to tox.ini that use the absolute path to the interpreters (e.g., <code>$HOME/mambaforge/envs/py311/bin/python3</code>) or specify python=3.10 or python=3.11 in the <code>deps</code> or <code>conda-deps</code> settings, but I've not managed to steer tox away from the Python available in the Conda environment I run tox from.</p>
<p>Is there an example <em>tox.ini</em> anywhere on the web that does this? None of the settings in the <a href="https://github.com/tox-dev/tox-conda#usage" rel="nofollow noreferrer">tox-conda Usage documentation</a> seem to fit the bill.</p>
<p>This is on <a href="https://en.wikipedia.org/wiki/Ubuntu_version_history#Ubuntu_20.04_LTS_(Focal_Fossa)" rel="nofollow noreferrer">Ubuntu 20.04</a> (Focal Fossa), tox 3.28.0, tox-conda 0.10.2, mamba 1.5.2, and Conda 23.9.0.</p>
| <python><conda><tox> | 2023-10-25 04:42:04 | 1 | 473 | Timothy W. Hilton |
77,356,497 | 342,553 | MySQL is it possible to group by duration intersection? | <p>I have a set of phone call data which has <code>start</code> and <code>end</code> timestamps. I need to work out the maximum number of concurrent calls within a month, so the call "duration"s need to overlap; unfortunately I don't think I can use a "sliding window" as it only works with a single field.</p>
<p>eg.</p>
<pre><code>start, end
...
2023-09-04 11:14:12, 2023-09-04 11:24:27
2023-09-04 11:20:17, 2023-09-04 11:34:37
...
</code></pre>
<p>Instead of fetching the whole set into code and looping through each time block (every 30 seconds) to count overlaps as in the dumbed-down version of the code below (<em>yes, there is room to optimize by further partitioning the data into days or even hours, so time blocks won't loop through unnecessary data</em>)</p>
<pre class="lang-py prettyprint-override"><code> all_time_blocks = {}
time_block = date_from
while time_block <= date_to:
counter = all_time_blocks.setdefault(time_block, 0)
for call in calls:
if call[0] <= time_block <= call[1]:
counter += 1
all_time_blocks[time_block] = counter
if counter > high_watermark:
high_watermark = counter
# advance the time block
time_block = time_block + relativedelta.relativedelta(seconds=30)
</code></pre>
<p>is it possible to generate those 30 second blocks in query and group by <strong>generated period blocks intersecting the call durations</strong>?</p>
<p>like</p>
<pre><code>group_by, count
2023-09-01 00:00:00 2023-09-01 00:00:30, 1
2023-09-01 00:00:30 2023-09-01 00:01:00, 3
2023-09-01 00:01:00 2023-09-01 00:01:30, 7
...
2023-09-30 23:59:30 2023-10-01 00:00:00, 0
</code></pre>
<p>The closest I could find is this answer, but the grouping logic is a bit different: there it is grouped by the generated period key, but I need it grouped by the periods that intersect the actual start and end.</p>
<p><a href="https://stackoverflow.com/questions/22283244/group-by-half-hour-interval">Group by half hour interval</a></p>
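As an aside, on the Python side the exact maximum concurrency can be found without fixed 30-second blocks by sweeping over start/end events; a sketch with two sample calls (the data here is illustrative only):

```python
# Sweep line: +1 at each call start, -1 at each call end; the running
# sum over the time-sorted events is the number of concurrent calls.
from datetime import datetime

calls = [  # (start, end) pairs -- sample data
    (datetime(2023, 9, 4, 11, 14, 12), datetime(2023, 9, 4, 11, 24, 27)),
    (datetime(2023, 9, 4, 11, 20, 17), datetime(2023, 9, 4, 11, 34, 37)),
]

events = []
for start, end in calls:
    events.append((start, 1))    # call begins
    events.append((end, -1))     # call ends

# Sort by time; at equal instants, ends (-1) come before starts (+1),
# so a call ending exactly when another begins is not double counted.
events.sort(key=lambda e: (e[0], e[1]))

running = high_watermark = 0
for _, delta in events:
    running += delta
    high_watermark = max(high_watermark, running)

assert high_watermark == 2
```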
| <python><mysql> | 2023-10-25 03:46:43 | 1 | 26,828 | James Lin |
77,356,419 | 6,739,732 | Why is Python's BZ2 Decompressor shrinking the block? | <p>The BZ2 file I'm using is a partial dump of Wikipedia [<a href="https://dumps.wikimedia.org/enwiki/20231020/enwiki-20231020-pages-articles-multistream1.xml-p1p41242.bz2" rel="nofollow noreferrer">here</a>]</p>
<p>Here's some Python code I wrote to test the length of a 10000-byte block before and after decompression:</p>
<pre class="lang-py prettyprint-override"><code>import bz2
with open('enwiki-20231020-pages-articles-multistream1.xml-p1p41242.bz2', 'rb') as f:
block = f.read(10000)
print(len(block))
block = bz2.BZ2Decompressor().decompress(block)
print(len(block))
</code></pre>
<p>It outputs:</p>
<pre><code>10000
2560
</code></pre>
<p>This indicates that the decompressor is somehow <em>shrinking</em> the block. How is this possible? Everything I've found says this shouldn't be happening.</p>
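For reference, this is the expected behaviour for a <em>multistream</em> dump: the file is several independent bz2 streams concatenated, and <code>BZ2Decompressor</code> stops at the end of the first (small) stream, leaving the rest of the input in <code>unused_data</code>. A minimal sketch with synthetic streams:

```python
# A "multistream" .bz2 file is several independent bz2 streams glued
# together. BZ2Decompressor decodes only the FIRST stream; the rest of
# the input ends up in .unused_data, not in the decompressed output.
import bz2

stream1 = bz2.compress(b"header " * 10)      # small first stream
stream2 = bz2.compress(b"articles " * 1000)  # large second stream
blob = stream1 + stream2

d = bz2.BZ2Decompressor()
out = d.decompress(blob)

assert out == b"header " * 10    # only the first stream came out
assert d.eof                     # decompressor hit end-of-stream
assert len(d.unused_data) > 0    # second stream left untouched
```

Feeding <code>unused_data</code> into a fresh <code>BZ2Decompressor</code> (or using <code>bz2.open</code>, which handles multiple streams) recovers the rest.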
| <python><bz2> | 2023-10-25 03:13:45 | 1 | 313 | Rapid Readers |
77,356,352 | 10,844,937 | Python check if a string is a standard US dollar or not? | <p>I have some strings which only contain [<code>0-9</code>][<code>,</code>][<code>.</code>]. I need to check if they are in standard US dollar format. The number of decimals in the string may be <code>0</code>, <code>1</code> or <code>2</code>. Here are some examples of whether they are standard or not.</p>
<pre><code>14964,022.97 no
110506785.37 no
67,186,673.9697,764,643.17 no
-263,513.52 yes
15,331,312.0 yes
15,331,312 yes
</code></pre>
<p>Here is how I do it.</p>
<pre><code>import re
re.match(r'^-?\d{1,3}[,.](\d{3}[,.])*.?\d{0,2}?$', '15,331,312.0')
</code></pre>
<ul>
<li><code>-?</code> to check if there is a negative sign.</li>
<li><code>\d{1,3}[,.]</code> to match the start of the numerical part, since we may have cases like <code>26.32</code>.</li>
<li><code>(\d{3}[,.])*</code> to match the middle of the numerical.</li>
<li><code>.?</code> to match cases without decimal.</li>
<li><code>\d{0,2}?</code> to match <code>1</code> or <code>2</code> decimals cases.</li>
</ul>
<p>Here the problem is the 3-decimal case <code>15,331,312.102</code>, which should be <code>False</code> but returns <code>True</code> in my code, because it is matched by <code>(\d{3}[,.])*</code>.</p>
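One way to rule that out is to allow only commas as thousands separators and cap the decimal part at 1–2 digits; a sketch checked against the examples above (assuming bare forms like <code>312.</code> need not pass):

```python
# Anchored pattern: optional sign, 1-3 leading digits, comma-separated
# groups of exactly 3 digits, then an optional "." with 1-2 decimals.
import re

pattern = re.compile(r'^-?\d{1,3}(,\d{3})*(\.\d{1,2})?$')

assert pattern.match('-263,513.52')
assert pattern.match('15,331,312.0')
assert pattern.match('15,331,312')
assert not pattern.match('14964,022.97')    # 5-digit leading group
assert not pattern.match('110506785.37')    # missing separators
assert not pattern.match('15,331,312.102')  # 3 decimals
```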
| <python><python-3.x> | 2023-10-25 02:47:38 | 1 | 783 | haojie |
77,356,332 | 10,853,071 | Dask client causing memory expection? | <p>IΒ΄ve been migrating some of my codes from pandas do dask. Many of them are already working. Studing a little bit more about dask, IΒ΄ve faced it "Client()" feature for monitoring usage. Nice function! But it does break my code!</p>
<p>Dask without client</p>
<pre><code>import dask.dataframe as dd
from dask.distributed import Client
from datetime import date, timedelta, datetime
def carregar ():
df = dd.read_parquet(path=f'{pastafonte}\\Unificado', parse_dates=['data'])
df['parceiro'] = df.parceiro.cat.add_categories('Not_available').fillna('Not_available')
df['mci'] = df['mci'].fillna(0)
df['sku'] = df['sku'].fillna(df['marca'].astype(str))
df['cod_transacao'] = df['cod_transacao'].fillna('Not_available')
return df
dftotal = carregar()
gerado = dftotal.loc[(dftotal.parceiro != 'Amazon') | ((dftotal.parceiro == 'Amazon') & (dftotal.status == 'indefinido'))]
df = dftotal.groupby(['produto','parceiro', 'marca',dftotal.data.dt.to_period("M"), 'status'], dropna=False, observed=True).aggregate({'gmv': 'sum', 'receita': 'sum', 'cashback': 'sum'}).reset_index()
dfafiliados = dftotal.loc[(dftotal.produto == 'Afiliados')]
dfafiliados = dfafiliados.groupby(['produto','parceiro', 'marca',dfafiliados.data.dt.to_period("M"), 'status'], dropna=False, observed=True)['cod_transacao'].nunique().reset_index()
dfafiliados = dfafiliados.rename(columns={ 'cod_transacao' : 'qtde_vendas'})
dfafiliados = dfafiliados.loc[dfafiliados.qtde_vendas != 0]  # To replace "observed=True", which is not working
dfoutros = dftotal.loc[(dftotal.produto != 'Afiliados')]
dfoutros.cod_transacao = dfoutros.cod_transacao.fillna('Não disponível')
dfoutros = dfoutros.groupby(['produto','parceiro', 'marca',dfoutros.data.dt.to_period("M"), 'status'], dropna=False, observed=True).aggregate({'cod_transacao' :'count'}).reset_index()
dfoutros = dfoutros.rename(columns={ 'cod_transacao' : 'qtde_vendas'})
dfnunique = dd.concat([dfafiliados,dfoutros], axis=0)#, ignore_order = True
df = df.merge(dfnunique,how='left', on=['produto','parceiro', 'marca','data', 'status'])
df['data'] = df['data'].astype({'data' : 'datetime64[ns]'}) # type: ignore
df = df[['produto','parceiro','marca','data','status','qtde_vendas','gmv','receita','cashback']]
df = df.compute()
df = df.sort_values( by = ['produto','parceiro', 'marca','data','status'])
</code></pre>
<p>And it all runs ok in about 30 seconds.</p>
<p>But if I run the same code, except for adding the client, it runs out of memory:</p>
<pre><code>Client = Client()
Client
2023-10-24 23:22:14,645 - distributed.nanny.memory - WARNING - Worker tcp://127.0.0.1:52395 (pid=17668) exceeded 95% memory budget. Restarting...
2023-10-24 23:22:15,090 - distributed.nanny - WARNING - Restarting worker
2023-10-24 23:22:15,363 - distributed.nanny.memory - WARNING - Worker tcp://127.0.0.1:52401 (pid=4388) exceeded 95% memory budget. Restarting...
2023-10-24 23:22:15,584 - distributed.nanny - WARNING - Restarting worker
2023-10-24 23:22:19,561 - distributed.nanny.memory - WARNING - Worker tcp://127.0.0.1:52402 (pid=26016) exceeded 95% memory budget. Restarting...
2023-10-24 23:22:19,778 - distributed.nanny - WARNING - Restarting worker
2023-10-24 23:22:21,731 - distributed.nanny.memory - WARNING - Worker tcp://127.0.0.1:52398 (pid=18488) exceeded 95% memory budget. Restarting...
2023-10-24 23:22:22,074 - distributed.nanny - WARNING - Restarting worker
</code></pre>
<p>I've tried to create a sample dataframe to reproduce the error, but I couldn't (maybe I would need to test with a much larger sample). Even with a smaller dataframe, though, I can see a warning when using the client that does not happen without it.</p>
<p>Without client</p>
<pre><code>from datetime import datetime
import pandas as pd
import dask.dataframe as dd
from dask.distributed import Client
import numpy as np
num_variables = 1_000_000
rng = np.random.default_rng()
data = pd.DataFrame({
'id' : np.random.randint(1,99999,num_variables),
'date' : [np.random.choice(pd.date_range(datetime(2021,1,1),datetime(2022,12,31))) for i in range(num_variables)],
'product' : [np.random.choice(['giftcards', 'afiliates']) for i in range(num_variables)],
'brand' : [np.random.choice(['brand_1', 'brand_2', 'brand_4', 'brand_6', np.nan]) for i in range(num_variables)],
'gmv' : rng.random(num_variables) * 100,
'revenue' : rng.random(num_variables) * 100})
data = data.astype({'product': 'category', 'brand':'category'})
ddf = dd.from_pandas(data, npartitions=5)
df = ddf.groupby([ddf.date.dt.to_period('M'), 'product','brand'], dropna=False, observed=True).aggregate({'id' : 'count'}).reset_index()
df = df.compute()
</code></pre>
<p>Runs in 0.0 seconds!</p>
<p>Now with client</p>
<pre><code>from datetime import datetime
import pandas as pd
import dask.dataframe as dd
from dask.distributed import Client
import numpy as np
Client = Client()
Client
num_variables = 1_000_000
rng = np.random.default_rng()
data = pd.DataFrame({
'id' : np.random.randint(1,99999,num_variables),
'date' : [np.random.choice(pd.date_range(datetime(2021,1,1),datetime(2022,12,31))) for i in range(num_variables)],
'product' : [np.random.choice(['giftcards', 'afiliates']) for i in range(num_variables)],
'brand' : [np.random.choice(['brand_1', 'brand_2', 'brand_4', 'brand_6', np.nan]) for i in range(num_variables)],
'gmv' : rng.random(num_variables) * 100,
'revenue' : rng.random(num_variables) * 100})
data = data.astype({'product': 'category', 'brand':'category'})
ddf = dd.from_pandas(data, npartitions=5)
df = ddf.groupby([ddf.date.dt.to_period('M'), 'product','brand'], dropna=False, observed=True).aggregate({'id' : 'count'}).reset_index()
df = df.compute()
UserWarning: Sending large graph of size 28.62 MiB.
This may cause some slowdown.
Consider scattering data ahead of time and using futures.
warnings.warn(
</code></pre>
<p>Thanks in advance!</p>
| <python><dask><dask-distributed><dask-dataframe> | 2023-10-25 02:39:25 | 0 | 457 | FΓ‘bioRB |
77,356,258 | 12,810,409 | `np.gradient` has high variance for non-uniform spacing | <p>I encountered high variance in <code>np.gradient</code> for relatively smooth data.</p>
<p>Suppose we want to calculate <code>dx/dt</code>:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.array([13.11149679, 13.2141427 , 13.37743691, 13.3934357 , 13.56163066,
13.60207566, 13.69304133])
t = np.array([0.73065159, 0.74012055, 0.75911018, 0.7607452 , 0.77811468,
0.78031837, 0.79046324])
x_grad = np.gradient(x, t)
plt.plot(t[1:], np.diff(x) / np.diff(t), 'xb', label="findiff")
plt.plot(t, x_grad, 'or', label = 'np.gradient')
plt.plot(t, x, '+g', label = "x")
plt.xlabel('t')
plt.ylabel('x')
plt.legend()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/0wBId.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0wBId.png" alt="" /></a></p>
<p>The noise in the data seems to be amplified by a lot. How should we deal with this?</p>
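One common remedy is to smooth or fit the data before differentiating, so division by small, noisy <code>dt</code> values stops amplifying the noise. A sketch with a quadratic fit (an arbitrary choice purely for illustration; <code>scipy.signal.savgol_filter</code> is another option):

```python
# Differentiating noisy samples amplifies the noise; fitting a smooth
# model first and differentiating the fit avoids that.
import numpy as np

x = np.array([13.11149679, 13.2141427, 13.37743691, 13.3934357,
              13.56163066, 13.60207566, 13.69304133])
t = np.array([0.73065159, 0.74012055, 0.75911018, 0.7607452,
              0.77811468, 0.78031837, 0.79046324])

coeffs = np.polyfit(t, x, deg=2)           # smooth model x(t)
dx_dt = np.polyval(np.polyder(coeffs), t)  # derivative of the fit

assert dx_dt.shape == t.shape
assert np.all(dx_dt > 0)                   # x rises monotonically in t
```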
| <python><numpy><numerical-methods> | 2023-10-25 02:09:23 | 2 | 378 | Toon Tran |
77,356,229 | 14,109,040 | Splitting Pandas dataframe by columns and concatenate to create single dataframe | <p>I have an excel file in the following format:</p>
<p><a href="https://i.sstatic.net/gIr5J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gIr5J.png" alt="enter image description here" /></a></p>
<p>I want to read it using Python and concatenate the tables(the number of tables could change) into a single one, and add a column with the road name next to each table</p>
<p>So it would look like:</p>
<p><a href="https://i.sstatic.net/edWCb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/edWCb.png" alt="enter image description here" /></a></p>
<p>I read in the excel file</p>
<pre><code>import pandas as pd
df = pd.read_excel(input_fp, dtype='str').dropna(how='all')
</code></pre>
<p>And the dataframe looks like:</p>
<p><a href="https://i.sstatic.net/lCKyo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lCKyo.png" alt="enter image description here" /></a></p>
<p>I'm thinking that splitting the dataframe by columns with all nan values, or columns with a header should work. But unsure how to do this.</p>
<p>Any suggestions would be appreciated</p>
<p>Test data:</p>
<pre><code>{'Unnamed: 0': {0: 'Start Time', 1: '06:01:00', 2: '06:31:00', 3: '07:31:00', 4: '08:31:00'}, 'Unnamed: 1': {0: 'End Time', 1: '06:30:00', 2: '07:30:00', 3: '08:30:00', 4: '09:30:00'}, 'Unnamed: 2': {0: 'Number of Cars', 1: '5343', 2: '2545', 3: '2434', 4: '3424'}, 'Unnamed: 3': {0: nan, 1: nan, 2: nan, 3: nan, 4: nan}, 'Unnamed: 4': {0: 'Start Time', 1: '06:01:00', 2: '06:31:00', 3: '07:31:00', 4: '08:31:00'}, 'Unnamed: 5': {0: 'End Time', 1: '06:30:00', 2: '07:30:00', 3: '08:30:00', 4: '09:30:00'}, 'Unnamed: 6': {0: 'Number of Cars', 1: '5343', 2: '2545', 3: '2434', 4: '3424'}, 'Unnamed: 7': {0: nan, 1: nan, 2: nan, 3: nan, 4: nan}, 'Unnamed: 8': {0: 'Start Time', 1: '06:01:00', 2: '06:31:00', 3: '07:31:00', 4: '08:31:00'}, 'Unnamed: 9': {0: 'End Time', 1: '06:30:00', 2: '07:30:00', 3: '08:30:00', 4: '09:30:00'}, 'Unnamed: 10': {0: 'Number of Cars', 1: '5343', 2: '2545', 3: '2434', 4: '3424'}}
</code></pre>
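One possible approach, sketched on a cut-down version of the test data: treat all-NaN columns as separators, split the column positions on them, relabel each piece using the header row, and stack the pieces (the road names here are hypothetical placeholders):

```python
# All-NaN columns act as separators between the side-by-side tables:
# split the column positions on them, take the first row of each piece
# as its header, tag it with its road name, and concatenate vertically.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Unnamed: 0': ['Start Time', '06:01:00', '06:31:00'],
    'Unnamed: 1': ['End Time', '06:30:00', '07:30:00'],
    'Unnamed: 2': [np.nan, np.nan, np.nan],   # separator column
    'Unnamed: 3': ['Start Time', '06:01:00', '06:31:00'],
    'Unnamed: 4': ['End Time', '06:30:00', '07:30:00'],
})
road_names = ['Road A', 'Road B']             # hypothetical labels

sep_positions = [i for i, c in enumerate(df.columns)
                 if df[c].isna().all()]
groups = np.split(np.arange(df.shape[1]), sep_positions)

pieces = []
for name, idx in zip(road_names, groups):
    idx = [i for i in idx if i not in sep_positions]
    part = df.iloc[1:, idx].copy()            # drop the header row
    part.columns = list(df.iloc[0, idx])      # first row holds headers
    part['Road'] = name
    pieces.append(part)

out = pd.concat(pieces, ignore_index=True)
assert out.shape == (4, 3)
assert list(out.columns) == ['Start Time', 'End Time', 'Road']
```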
| <python><pandas> | 2023-10-25 01:58:45 | 2 | 712 | z star |
77,356,173 | 433,570 | Is there a way to add a path to sys.path automatically when activating a Python environment? | <p>I'm trying to add project_root to sys.path when activating the project environment</p>
<p>(currently conda, but it can be pyenv, poetry..)</p>
<p>The project_root might be different for people depending on where they created the project_root in their local file system.</p>
<p>So is there a way for me to set things up so that when other people download (git clone) and set up the project, they will have the project root added to their <code>sys.path</code> when they activate the environment?</p>
<p>I was thinking of using a hook script for conda (or similar), but I'm not sure how I'll let it know the project root (which can be different for different people).</p>
<ul>
<li>edit</li>
</ul>
<p>Why do I want this?</p>
<p>Suppose you have a new script which wants to use some of the other files in the project; you need to import the other file.</p>
<p>Then you need to specify the path. If the project root is in <code>sys.path</code>, any script can import files relative to the project root, not relative to its current location.</p>
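One environment-level mechanism worth knowing: a <code>.pth</code> file dropped into the environment's site-packages is read at interpreter startup, and each line in it is appended to <code>sys.path</code>. A sketch (the project root here is a throwaway temp directory, purely for illustration):

```python
# .pth files in site-packages add their lines to sys.path at startup;
# site.addsitedir() applies the same mechanism at runtime, which makes
# it easy to demonstrate without writing into the environment.
import site
import sys
import sysconfig
import tempfile

project_root = tempfile.mkdtemp()        # hypothetical project root

# Where a "myproject.pth" file would go for the active environment:
site_packages = sysconfig.get_paths()['purelib']
assert site_packages                     # e.g. .../env/site-packages

site.addsitedir(project_root)            # same effect, at runtime
assert project_root in sys.path
```

A conda <code>activate.d</code> hook (or a small one-time setup script run from the clone) could write such a <code>.pth</code> file containing each user's own project root, since it runs from inside the cloned repo.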
| <python><virtualenv> | 2023-10-25 01:37:24 | 1 | 42,105 | eugene |
77,356,097 | 5,560,837 | `sorted(my_list, key = lambda x, y: ... )` possible? | <p>If I want to sort a list of non-empty strings by this criteria -</p>
<p><code>"x + y > y + x"</code> (for any <code>x = lst[i]</code>, <code>y = lst[j]</code> that <code>0 <= i < j < len(lst)</code>)</p>
<p>I know one way is to define a <code>my_key</code> class and put the logic in this class, then do <code>sorted(key=my_key)</code>. Another way is to use the <code>cmp_to_key</code> functool.</p>
<p>I wonder if it's also possible to use lambda expression instead here (without <code>cmp_to_key</code>)? Something like -</p>
<pre><code>sorted(lst, key = lambda x, y: ((x+y > y+x) - (x+y < y+x))
</code></pre>
<p>(The above code doesn't work though because of missing positional argument.)</p>
<p>(Edit: The reason of not simply using <code>x > y</code> is for example <code>"21" > "2"</code> but <code>"21" + "2" < "2" + "21"</code>)</p>
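For comparison, the two-argument comparison does work once wrapped in <code>functools.cmp_to_key</code>, since <code>key</code> itself only ever receives a single element:

```python
# key= receives ONE element at a time, so a two-argument comparison
# cannot be a plain key lambda; cmp_to_key adapts it instead.
from functools import cmp_to_key

lst = ['2', '21', '23']
ordered = sorted(lst, key=cmp_to_key(
    lambda x, y: (x + y > y + x) - (x + y < y + x)))

# '21'+'2' = '212' <= '221' = '2'+'21', so '21' sorts before '2'
assert ordered == ['21', '2', '23']
```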
| <python><string><sorting><lambda><key> | 2023-10-25 00:55:15 | 1 | 417 | Egret |
77,356,073 | 159,072 | data frame is not stacked vertically when saved | <p>data1.dat</p>
<pre><code> 1 ASN C 7.042 9.118 0.000 1 1 1 1 1 0
2 LEU H 5.781 5.488 7.470 0 0 0 0 1 0
3 THR H 5.399 5.166 6.452 0 0 0 0 0 0
4 GLU H 5.373 4.852 6.069 0 0 0 0 1 0
5 LEU H 5.423 5.164 6.197 0 0 0 0 2 0
6 LYS H 5.247 4.943 6.434 0 0 0 0 1 0
7 ASN C 5.485 8.103 8.264 0 0 0 0 1 0
8 THR C 6.675 9.152 9.047 0 0 0 0 1 0
9 PRO C 6.372 8.536 11.954 0 0 0 0 0 0
10 VAL H 5.669 5.433 6.703 0 0 0 0 0 0
11 SER H 5.304 4.924 6.407 0 0 0 0 0 0
12 GLU H 5.461 5.007 6.088 0 0 0 0 1 0
13 LEU H 5.265 5.057 6.410 0 0 0 0 3 0
14 ILE H 5.379 5.026 6.206 0 0 0 0 1 0
15 THR H 5.525 5.154 6.000 0 0 0 0 1 0
16 LEU H 5.403 5.173 6.102 0 0 0 0 1 0
17 GLY H 5.588 5.279 6.195 0 0 0 0 1 0
18 GLU H 5.381 5.238 6.675 0 0 0 0 1 0
19 ASN H 5.298 5.287 6.668 0 0 0 0 1 0
20 MSE H 5.704 7.411 4.926 0 0 0 0 1 0
</code></pre>
<p>data2.dat</p>
<pre><code> 21 GLY C 5.978 9.254 9.454 0 0 0 0 1 0
22 LEU C 6.778 10.534 12.640 0 0 1 2 2 0
23 GLU C 7.187 7.217 10.728 0 0 0 0 2 0
24 ASN C 5.392 8.296 10.702 0 0 0 0 0 0
25 LEU C 5.657 6.064 9.609 0 0 0 1 3 0
26 ALA C 5.446 5.528 7.503 0 0 0 0 2 0
27 ARG C 5.656 8.071 8.419 0 0 0 0 0 0
28 MSE C 6.890 9.157 8.728 0 0 0 0 1 0
29 ARG C 6.330 7.993 11.562 0 0 0 0 0 0
30 LYS H 5.428 5.207 5.897 0 0 0 0 1 0
31 GLN H 5.402 5.046 6.349 0 0 0 0 1 0
32 ASP H 5.426 5.093 6.226 0 0 0 1 1 0
33 ILE H 5.361 5.004 6.194 0 0 0 0 6 0
34 ILE H 5.443 5.150 6.190 0 0 0 0 5 0
35 PHE H 5.403 5.181 6.293 0 0 0 0 1 0
36 ALA H 5.533 5.357 6.193 0 0 0 0 3 0
37 ILE H 5.634 5.167 6.025 0 0 0 1 5 0
38 LEU H 5.402 5.121 6.104 0 0 0 0 3 0
39 LYS H 5.470 5.092 6.101 0 0 0 0 1 0
40 GLN H 5.491 5.210 6.054 0 0 0 0 2 0
</code></pre>
<pre><code>import os
import pandas as pd
from src.utils.get_root_dir import get_root_directory
def save_dataframe_to_ascii(df, filepath):
df.to_csv(filepath, sep=',', index=False)
def getDataFrame(dataDirectoryPathString: str) -> pd.DataFrame:
dataframes = []
for filename in os.listdir(dataDirectoryPathString):
if filename.endswith('.dat'):
filepath = os.path.join(dataDirectoryPathString, filename)
df = pd.read_csv(filepath, sep='\t')
dataframes.append(df)
concatenated_df = pd.concat(dataframes, ignore_index=True)
return concatenated_df
if __name__ == "__main__":
dataFrame = getDataFrame(get_root_directory() + "/data/")
save_dataframe_to_ascii(dataFrame, get_root_directory() + "/save/save.txt")
</code></pre>
<p>Output:</p>
<pre><code> 1 ASN C 7.042 9.118 0.000 1 1 1 1 1 0, 21 GLY C 5.978 9.254 9.454 0 0 0 0 1 0
2 LEU H 5.781 5.488 7.470 0 0 0 0 1 0,
3 THR H 5.399 5.166 6.452 0 0 0 0 0 0,
4 GLU H 5.373 4.852 6.069 0 0 0 0 1 0,
5 LEU H 5.423 5.164 6.197 0 0 0 0 2 0,
6 LYS H 5.247 4.943 6.434 0 0 0 0 1 0,
7 ASN C 5.485 8.103 8.264 0 0 0 0 1 0,
8 THR C 6.675 9.152 9.047 0 0 0 0 1 0,
9 PRO C 6.372 8.536 11.954 0 0 0 0 0 0,
10 VAL H 5.669 5.433 6.703 0 0 0 0 0 0,
11 SER H 5.304 4.924 6.407 0 0 0 0 0 0,
12 GLU H 5.461 5.007 6.088 0 0 0 0 1 0,
13 LEU H 5.265 5.057 6.410 0 0 0 0 3 0,
14 ILE H 5.379 5.026 6.206 0 0 0 0 1 0,
15 THR H 5.525 5.154 6.000 0 0 0 0 1 0,
16 LEU H 5.403 5.173 6.102 0 0 0 0 1 0,
17 GLY H 5.588 5.279 6.195 0 0 0 0 1 0,
18 GLU H 5.381 5.238 6.675 0 0 0 0 1 0,
19 ASN H 5.298 5.287 6.668 0 0 0 0 1 0,
20 MSE H 5.704 7.411 4.926 0 0 0 0 1 0,
, 22 LEU C 6.778 10.534 12.640 0 0 1 2 2 0
, 23 GLU C 7.187 7.217 10.728 0 0 0 0 2 0
, 24 ASN C 5.392 8.296 10.702 0 0 0 0 0 0
, 25 LEU C 5.657 6.064 9.609 0 0 0 1 3 0
, 26 ALA C 5.446 5.528 7.503 0 0 0 0 2 0
, 27 ARG C 5.656 8.071 8.419 0 0 0 0 0 0
, 28 MSE C 6.890 9.157 8.728 0 0 0 0 1 0
, 29 ARG C 6.330 7.993 11.562 0 0 0 0 0 0
, 30 LYS H 5.428 5.207 5.897 0 0 0 0 1 0
, 31 GLN H 5.402 5.046 6.349 0 0 0 0 1 0
, 32 ASP H 5.426 5.093 6.226 0 0 0 1 1 0
, 33 ILE H 5.361 5.004 6.194 0 0 0 0 6 0
, 34 ILE H 5.443 5.150 6.190 0 0 0 0 5 0
, 35 PHE H 5.403 5.181 6.293 0 0 0 0 1 0
, 36 ALA H 5.533 5.357 6.193 0 0 0 0 3 0
, 37 ILE H 5.634 5.167 6.025 0 0 0 1 5 0
, 38 LEU H 5.402 5.121 6.104 0 0 0 0 3 0
, 39 LYS H 5.470 5.092 6.101 0 0 0 0 1 0
, 40 GLN H 5.491 5.210 6.054 0 0 0 0 2 0
</code></pre>
<p>The rows should have been stacked vertically.</p>
<p>Why is the output broken?</p>
<p>How can I fix it?</p>
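For what it's worth, the .dat rows shown are separated by runs of spaces rather than tabs, so <code>sep='\t'</code> reads each whole line as a single field; a sketch of reading them whitespace-separated instead:

```python
# The .dat rows are separated by runs of spaces, not tabs, and have no
# header row: read them with sep=r'\s+' and header=None so each field
# becomes its own column before concatenating.
import io
import pandas as pd

sample = """ 1 ASN C 7.042 9.118 0.000 1 1 1 1 1 0
 2 LEU H 5.781 5.488 7.470 0 0 0 0 1 0
"""

df = pd.read_csv(io.StringIO(sample), sep=r'\s+', header=None)
assert df.shape == (2, 12)
assert df.iloc[0, 1] == 'ASN'
```

With that, the existing <code>pd.concat(..., ignore_index=True)</code> stacks the pieces vertically as intended.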
| <python><pandas><csv> | 2023-10-25 00:48:05 | 1 | 17,446 | user366312 |
77,356,035 | 361,100 | PyCharm wrongly recognizes pydevd.py path when debugging on Windows | <p>When I click the Debug button in PyCharm, it wrongly recognizes where the <code>pydevd.py</code> file resides.</p>
<p>The console message is as below;</p>
<pre><code>C:\Users\young\AppData\Local\Microsoft\WindowsApps\python3.10.exe "C:/Users/young/AppData/Local/Programs/PyCharm Professional/plugins/python/helpers/pydev/pydevd.py" --multiprocess --qt-support=auto --client 127.0.0.1 --port 61823 --file C:\Users\young\Documents\GitHub\some-project\app.py
Connected to pydev debugger (build 232.10072.31)
* Serving Flask app 'app'
* Debug mode: on
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on all addresses (0.0.0.0)
* Running on http://127.0.0.1:5000
Press CTRL+C to quit
* Restarting with stat
C:\Users\young\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\python.exe: can't open file 'C:\\Users\\young\\AppData\\Local\\Programs\\PyCharm': [Errno 2] No such file or directory
Process finished with exit code 2
</code></pre>
<p>The reason seems clear: the path <code>/PyCharm Professional/</code> is trimmed because of the whitespace in the path. But I don't know how to fix this in the IDE settings.</p>
<p>Note: PyCharm v2023.2.3 (232.10072.31).</p>
| <python><debugging><pycharm><console> | 2023-10-25 00:32:47 | 1 | 25,138 | Youngjae |
77,356,026 | 12,242,085 | How to expand Data Frame to artificially create dates up to the 31st day in each month no matter how many days that month has in Python Pandas? | <p>I have Data Frame in Python Pandas like below:</p>
<pre><code>data = {
'id': [1, 1, 2, 2, 3],
'nps': [8, 8, 7, 7, 5],
'target': [True, True, False, False, True],
'nps_score': [0.56, 0.56, 0.6785, 0.6785, 0.44],
'event_day': ['2023-02-15', '2023-02-22', '2023-06-10', '2023-06-20', '2022-02-11']
}
df = pd.DataFrame(data)
df
</code></pre>
<ul>
<li>id - id of client</li>
<li>event_day - dates of events</li>
<li>for each id in a given month values in columns: nps, target, nps_score are the same</li>
</ul>
<p>I need to expand this data frame so as to have dates in <code>event_day</code> for each id from the 1st to the 31st of a given month, regardless of how many days that month actually has.</p>
<p>In short, I need to artificially create rows for each id with event_day range from 2022-07-01 to 2023-10-31 so that in each month there are days from 1 to 31 days.</p>
<p>So, as a result I need something like below:</p>
<pre><code>id | nps | target | event_day
------|------|---------|----------
1 | 8 | True | 2023-02-01
1 | 8 | True | 2023-02-02
1 | 8 | True | 2023-02-03
... | ... | ...
1 | 8 | True | 2023-02-30
1 | 8 | True | 2023-02-31
...
</code></pre>
<p>And the same for the rest of the ids.</p>
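One way to sketch this: cross-join each unique (id, month) row with artificial days 01–31, building the date strings directly so that impossible dates like <code>2023-02-31</code> survive (real <code>datetime</code> values cannot represent them):

```python
# Cross-join each unique (id, month) row with days 01..31, keeping the
# dates as strings so artificial days like ...-02-31 can exist.
import pandas as pd

df = pd.DataFrame({
    'id': [1, 1, 2],
    'nps': [8, 8, 7],
    'target': [True, True, False],
    'event_day': ['2023-02-15', '2023-02-22', '2023-06-10'],
})

df['month'] = df['event_day'].str[:7]            # 'YYYY-MM'
base = df.drop(columns='event_day').drop_duplicates()

days = pd.DataFrame({'day': [f'{d:02d}' for d in range(1, 32)]})
out = base.merge(days, how='cross')              # pandas >= 1.2
out['event_day'] = out['month'] + '-' + out['day']
out = out.drop(columns=['month', 'day'])

assert len(out) == 2 * 31          # 2 unique (id, month) pairs x 31 days
assert '2023-02-31' in set(out['event_day'])
```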
| <python><pandas><dataframe><date> | 2023-10-25 00:31:43 | 2 | 2,350 | dingaro |
77,356,025 | 20,258,214 | CTkToplevel is brought to the front and then goes back | <p>I am encountering a really weird problem in <code>customtkinter</code>.</p>
<p>I created a <code>CTkToplevel</code> window and brought it to the front using the <code>lift()</code> method.
It came to the front, and immediately afterwards was sent back behind my root window.</p>
<h3>Simplified Code Using <code>customtkinter</code>:</h3>
<pre><code>import customtkinter as ctk
class Win(ctk.CTkToplevel):
def __init__(self, parent):
super().__init__(parent)
ctk.CTkLabel(self, text="Top window").pack()
def show_win():
global top_win
if not top_win or not top_win.winfo_exists():
top_win = Win(root)
top_win.lift()
top_win.focus_force()
root = ctk.CTk()
root.geometry('500x500')
top_win:Win = None
ctk.CTkButton(root, text="Show Top Window", command=show_win).pack()
root.mainloop()
</code></pre>
<hr />
<p>I converted the code to <code>tkinter</code> and it worked perfectly.</p>
<h3>Using Standard <code>tkinter</code></h3>
<pre><code>import tkinter as tk
class Win(tk.Toplevel):
def __init__(self, parent):
super().__init__(parent)
tk.Label(self, text="Top window").pack()
def show_win():
global top_win
if not top_win or not top_win.winfo_exists():
top_win = Win(root)
top_win.lift()
top_win.focus_force()
root = tk.Tk()
root.geometry('500x500')
top_win:Win = None
tk.Button(root, text="Show Top Window", command=show_win).pack()
root.mainloop()
</code></pre>
<hr />
<p>The code worked well in <code>customtkinter</code> once I disabled the root window:</p>
<h3>Using <code>-disabled</code></h3>
<pre><code>import customtkinter as ctk
class Win(ctk.CTkToplevel):
def __init__(self, parent):
super().__init__(parent)
self.master.attributes('-disabled', True)
ctk.CTkLabel(self, text="Top window").pack()
def destroy(self):
self.master.attributes('-disabled', False)
return super().destroy()
def show_win():
global top_win
if not top_win or not top_win.winfo_exists():
top_win = Win(root)
top_win.lift()
top_win.focus_force()
root = ctk.CTk()
root.geometry('500x500')
top_win:Win = None
ctk.CTkButton(root, text="Show Top Window", command=show_win).pack()
root.mainloop()
</code></pre>
<p>The problem is that I need the root window enabled.</p>
<hr />
<p>I also tried to use the <code>topmost</code> attribute, but it didn't work as well:</p>
<h3>Using <code>-topmost</code></h3>
<pre><code>class Win(ctk.CTkToplevel):
def __init__(self, parent):
super().__init__(parent)
self.attributes('-topmost', True)
self.attributes('-topmost', False)
ctk.CTkLabel(self, text="Top window").pack()
</code></pre>
<hr />
<p>How can I ensure that my <code>CTkToplevel</code> window stays on top <em>without disabling</em> the root window?</p>
<p>(I want it to still go back when the root window is focused)</p>
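For context, <code>CTkToplevel</code> runs some internal withdraw/deiconify work shortly after creation, which can undo an immediate <code>lift()</code>; a commonly suggested workaround is to delay the lift. A sketch of the idea in plain tkinter (the 250 ms delay is an assumption to tune, and with customtkinter the <code>after</code> call would go on the <code>CTkToplevel</code>):

```python
# Delay lift()/focus_force() until after any internal post-creation
# redraws have run, so the toplevel stays in front but the root window
# remains enabled and can still come forward when focused later.
import tkinter as tk

def show_win(root: tk.Misc) -> tk.Toplevel:
    top = tk.Toplevel(root)
    # 250 ms is an assumed delay; with customtkinter this would be
    # self.after(250, ...) inside the CTkToplevel subclass instead.
    top.after(250, lambda: (top.lift(), top.focus_force()))
    return top

try:
    root = tk.Tk()          # guard: may fail on headless machines
    show_win(root)
    root.update()
    root.destroy()
except tk.TclError:
    pass
```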
| <python><tkinter><customtkinter> | 2023-10-25 00:30:48 | 0 | 1,598 | nokla |
77,355,767 | 3,143,269 | In Python is it possible to have a Stream based socket and a Pipe used asynchronously? | <p>I have been searching high and low for 10 days now trying to find an example where there is an asynchronous streams socket (I need to use receiveuntil()) and a multiprocessing Pipe in use in the same script.</p>
<p>I like asyncio as I can await data arrival for the socket and the pipe. At least it seems like I should be able to.</p>
<p>The problem statement is:
Create 4 classes: myapp, task1, task2, and task3.
myapp creates two multiprocessing Pipes, Pipe1 and Pipe2. It starts a process for Task1, passing in Pipe1; starts Task2, passing in Pipe1 and Pipe2; and starts a process for Task3, passing in Pipe2.</p>
<p>Task1 uses Pipe1 to communicate with Task2. Task2 uses Pipe1 to talk to Task1 and Pipe2 to talk to Task3. And Task3 uses Pipe2 to talk to Task2.</p>
<p>Task1 also has a streams-type socket so receiveuntil() can be used. Task3 needs to read/write a USB port.</p>
<p>Because of the other functions these tasks must perform, some CPU-intensive, all communications need to be asynchronous. I thought this would be easy: just register callbacks and use await to prevent blocking. So far, I cannot figure out how to do this. Starting processes seems pretty easy, but the communication is anything but.</p>
<p>So, can this be done? Anyone have an example of such a configuration for say Task1, I can extrapolate from there, I think.</p>
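One workable pattern: use asyncio streams for the socket side (asyncio's <code>StreamReader</code> provides <code>readuntil()</code> for the delimiter-based reads mentioned), and make the blocking <code>Pipe.recv()</code> awaitable via <code>run_in_executor</code>. A minimal sketch of the pipe side only:

```python
# The blocking Pipe.recv() is pushed onto the default thread pool with
# run_in_executor, so it can be awaited alongside asyncio stream I/O
# (e.g. reader.readuntil(b'\n') on the socket side) in the same task.
import asyncio
import multiprocessing as mp

async def pipe_recv(conn):
    loop = asyncio.get_running_loop()
    # conn.recv() blocks, so run it in a worker thread and await it
    return await loop.run_in_executor(None, conn.recv)

async def main() -> str:
    parent, child = mp.Pipe()
    child.send('hello from task2')   # stand-in for the other process
    msg = await pipe_recv(parent)
    parent.close()
    child.close()
    return msg

assert asyncio.run(main()) == 'hello from task2'
```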
| <python><python-3.x> | 2023-10-24 22:46:00 | 1 | 1,124 | AeroClassics |
77,355,743 | 7,318,120 | In Python 3.12 I see type ListOrSet[T] = list[T] | set[T], but how do I use this? | <p>In the new Python 3.12 docs I see this:</p>
<pre class="lang-py prettyprint-override"><code>type ListOrSet[T] = list[T] | set[T]
</code></pre>
<p>But I don't know how or where this is used.
Is this for type hinting, and where can it be used?</p>
<p>The docs are here:
<a href="https://peps.python.org/pep-0695/" rel="nofollow noreferrer">https://peps.python.org/pep-0695/</a></p>
<p>But to me it is not clear, perhaps because I never really used the typing module.</p>
<p>Can I use this for type hinting?</p>
| <python><types> | 2023-10-24 22:39:24 | 1 | 6,075 | darren |
77,355,707 | 1,722,361 | Python3 import module without relative path | <p>I have a folder test with the following py files</p>
<pre><code>mytest/
- test.py
- utility.py
- pb2.py
- pb2_grpc.py
</code></pre>
<p>pb2 and pb2_grpc are generated by python grpc tool.</p>
<p>in test.py, there are</p>
<pre><code>import utility
import pb2
import pb2_grpc
</code></pre>
<p>in pb2_grpc.py, there is</p>
<pre><code>import pb2
</code></pre>
<p>When I cd to mytest directory and execute</p>
<pre><code>python -m unittest test.py
</code></pre>
<p>everything is fine. But now I need to execute the test from the parent directory, and I encounter a "no module named" error when I do</p>
<pre><code>python -m unittest mytest/test.py
</code></pre>
<p>Then I changed <code>import utility</code> to <code>from . import utility</code>, it works. But for those auto generated files, I could not edit them.</p>
<p>The only way I can make it work now is to add <code>sys.path.append(os.path.dirname(__file__))</code> in my test.py; then when pb2_grpc is invoked, it knows where to find pb2. But is there a better way to do that?</p>
| <python><python-3.x><python-module> | 2023-10-24 22:26:19 | 2 | 427 | user1722361 |
77,355,420 | 9,343,043 | How to merge dictionary of dataframes on shared column | <p>I have a dictionary of 7 dataframes of t-statistic values (one for each clinical site), where each dataframe has two columns: about 100 brain ROIs and the t-statistic value for each ROI. My goal is to create a dataframe of each ROI with all the t-statistic values.</p>
<p>For this post, though, for the sake of brevity and to avoid health information, I will use a sample dictionary of dataframes on something a bit different: <code>ROI = character</code> and <code>t-statistic = color_{}</code>.</p>
<p>I have tried passing the dictionary values into <code>pd.concat</code> with <code>join="inner"</code> and I am getting duplicate columns for the shared column I want to merge on.</p>
<p><strong>Before / Dictionary</strong></p>
<pre><code>In[1]: sega_df_dict
Out[1]:
{'generic': character color_generic
0 Sonic Blue
1 Tails Yellow
2 Knuckles Red
3 Amy Pink
4 Shadow Black
[5 rows x 2 columns],
'precise': character color_precise
0 Sonic Cobalt
1 Tails Gold
2 Knuckles Crimson
3 Amy Rose Gold
4 Shadow Ebony
[5 rows x 2 columns]
</code></pre>
<p><strong>Current After / Result</strong></p>
<pre><code>In[2]: sega_final_df = pd.concat(sega_df_dict.values(), axis=1, join="inner")
Out[2]:
character color_generic character color_precise
0 Sonic Blue Sonic Cobalt
1 Tails Yellow Tails Gold
2 Knuckles Red Knuckles Crimson
3 Amy Pink Amy Rose Gold
4 Shadow Black Shadow Ebony
</code></pre>
<p><strong>Desired Result</strong></p>
<pre><code>In[3]: ...
Out[3]:
character color_generic color_precise
0 Sonic Blue Cobalt
1 Tails Yellow Gold
2 Knuckles Red Crimson
3 Amy Pink Rose Gold
4 Shadow Black Ebony
</code></pre>
<p>Do I have to reset my index or join along a specific axis?</p>
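<p>To make the goal fully reproducible, here is a self-contained sketch that produces the desired shape by merging pairwise on the shared column with <code>functools.reduce</code>. This is one possible approach I am considering, not something I know to be the idiomatic answer:</p>

```python
import pandas as pd
from functools import reduce

sega_df_dict = {
    "generic": pd.DataFrame({"character": ["Sonic", "Tails"],
                             "color_generic": ["Blue", "Yellow"]}),
    "precise": pd.DataFrame({"character": ["Sonic", "Tails"],
                             "color_precise": ["Cobalt", "Gold"]}),
}

# Merge every frame on the shared key instead of concatenating by position,
# so "character" appears only once in the result.
merged = reduce(lambda l, r: l.merge(r, on="character", how="inner"),
                sega_df_dict.values())
print(merged.columns.tolist())
```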
| <python><pandas><dictionary><merge><concatenation> | 2023-10-24 21:14:12 | 1 | 871 | florence-y |
77,355,368 | 11,618,586 | Calculating duration from a timestamp column by groups | <p>I have a pandas dataframe like so:</p>
<pre><code> dttm runid
9/7/2023 00:10 1694020173
9/7/2023 00:10 1694020173
9/7/2023 00:10 1694020173
9/7/2023 00:10 1694020173
9/7/2023 00:10 1694020173
9/29/2023 20:31 1695908276
9/29/2023 20:31 1695908276
9/29/2023 20:31 1695908276
9/29/2023 20:31 1695908276
9/29/2023 20:31 1695908276
</code></pre>
<p>The <code>dttm</code> column is of <code>datetime64[ns]</code> datatype.<br />
I want to add a column that shows the duration of subsequent rows in the <code>dttm</code> column subtracting from the very first row in the same column grouped by <code>runid</code></p>
<p>I resorted to using a helper column defining the start time for each <code>runid</code> like so:</p>
<pre><code>df['_Start_Time']=df.groupby('runid')['dttm'].transform('min')
df['Duration']=((df['dttm']-df['_Start_Time']).dt.total_seconds())
</code></pre>
<p>Is there a more elegant way of doing this without using an extra column?
Perhaps using the <code>apply()</code> method?</p>
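<p>For reference, here is a self-contained version of what I have, with the two lines collapsed into one expression on hypothetical sample data; this is the kind of helper-column-free form I am hoping exists:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "dttm": pd.to_datetime(["2023-09-07 00:10", "2023-09-07 00:12",
                            "2023-09-29 20:31", "2023-09-29 20:33"]),
    "runid": [1694020173, 1694020173, 1695908276, 1695908276],
})

# Subtract each group's minimum timestamp directly, no intermediate column.
df["Duration"] = (
    df["dttm"] - df.groupby("runid")["dttm"].transform("min")
).dt.total_seconds()
print(df["Duration"].tolist())  # [0.0, 120.0, 0.0, 120.0]
```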
| <python><pandas> | 2023-10-24 21:00:28 | 2 | 1,264 | thentangler |
77,355,318 | 10,990,958 | I have some buttons in customtkinter and they are not moving when I change position. How would I achieve my end result? | <p>So I've been working on this random selector. I am trying to move the buttons to the positions I want, but for some reason they are not moving. This is the code:</p>
<pre><code>import random
import customtkinter
import PIL
from PIL import Image
customtkinter.set_appearance_mode("light")
# Create a list to track the names that have already been chosen
chosen_names = []
def button_callback():
# Create a list of names
names = ["Alvaro jesus mota reto","hernado abada treiton","Jualian gabriel rendon maraya"] # Randomly select a name from the list
name = random.choice(names)
# Check if the name has already been chosen
while name in chosen_names:
name = random.choice(names)
# Add the name to the list of chosen names
chosen_names.append(name)
# Get the label
# label = app.winfo_children()[0]
# Update the label text
label.configure(text=name,font=("Roboto",30))
#label.grid_remove()
# Check if all the values in the list have been selected
if len(chosen_names) == len(names):
chosen_names.clear()
label.configure(text="")
def reset_callback():
chosen_names.clear()
label.configure(text="")
app = customtkinter.CTk()
image = PIL.Image.open("imagen.png")
background_image = customtkinter.CTkImage(image, size=(1280, 800))
app.title("app")
app.iconbitmap('isologo.ico')
app.geometry("1280x800")
def bg_resizer(e):
if e.widget is app:
i = customtkinter.CTkImage(image, size=(e.width, e.height))
bg_lbl.configure(text="", image=i)
# Create a bg label
bg_lbl = customtkinter.CTkLabel(app, text="", image=background_image)
bg_lbl.place(x=0, y=0)
# Create a label
label = customtkinter.CTkLabel(app, text="")
label.grid(row=0, column=0, padx=20, pady=20)
button = customtkinter.CTkButton(app, text=" ", command=button_callback, hover_color='#8DBCBB',fg_color = '#F8E5BE')
button.grid(row=1,column=0,ipadx=40, ipady=30, padx=20, pady=20)
reset_button = customtkinter.CTkButton(app, text="Reiniciar", command=reset_callback,hover_color='#8DBCBB',fg_color = '#17223B')
reset_button.configure(width=5)
reset_button.grid(row=0, column=9,padx=0,pady=0)
app.bind("<Configure>", bg_resizer)
app.mainloop()
</code></pre>
<p>I get a window that looks like this, but the buttons and label shouldn't be there:
<a href="https://i.sstatic.net/qxorV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qxorV.png" alt="imagen" /></a></p>
<p>I would like the interface to look like this but I haven't been able to make it happen
<a href="https://i.sstatic.net/A5aLs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A5aLs.png" alt="reference image" /></a></p>
<p>The top text is the name generated at random; the "Reiniciar" button is the dark blue one, and the button with no text is the yellow one. If anybody knows how to do this, please help, because I haven't been able to. My buttons also jump all over the place depending on the size of the text displayed at the top.</p>
| <python><button><customtkinter> | 2023-10-24 20:49:56 | 1 | 349 | Pandas INC |
77,355,232 | 901,449 | Django model relations - "Owning" a related model enforced at the model level | <p>I'm not really sure how to title this question, so forgive my title's ambiguity.</p>
<p>I have two models, <code>Customer</code> and <code>Address</code>. Each has typical fields you would infer from their model names. <code>Address</code> is also set as an inline to <code>Customer</code> in admin.py:</p>
<pre><code>class Customer(models.Model):
first_name = models.CharField(max_length=30)
last_name = models.CharField(max_length=30)
email = models.EmailField(unique=True)
phone = models.CharField(max_length=15, blank=True)
def __str__(self):
return f"{self.first_name} {self.last_name}"
class Address(models.Model):
customer = models.ForeignKey(Customer, on_delete=models.CASCADE)
street = models.CharField(max_length=100)
street_additional = models.CharField(max_length=100, blank=True)
city = models.CharField(max_length=50)
state = models.CharField(choices=US_STATES)
zip = models.CharField(max_length=5)
def __str__(self):
return self.street
</code></pre>
<p>I want to add a <code>billing_address</code> and <code>shipping_address</code> to the <code>Customer</code> model to allow a customer to choose one of several saved addresses to be used as those types of addresses accordingly, but I don't want any customer to be able to choose any address. My initial plan was to model the <code>Customer</code> with a foreign key relation back to <code>Address</code> while using <code>limit_choices_to</code> such as:</p>
<pre><code>class Customer(models.Model):
first_name = models.CharField(max_length=30)
last_name = models.CharField(max_length=30)
email = models.EmailField(unique=True)
phone = models.CharField(max_length=15, blank=True)
billing_address = models.ForeignKey("Address", on_delete=models.SET_NULL, related_name='billing_address', blank=True, null=True, limit_choices_to={'customer_id':some_callable())
shipping_address = models.ForeignKey("Address", on_delete=models.SET_NULL, related_name='shipping_address', blank=True, null=True, limit_choices_to={'customer_id':some_callable())
def __str__(self):
return f"{self.first_name} {self.last_name}"
</code></pre>
<p>I realized this is not how <code>limit_choices_to</code> is intended to work as there's no way to get a reference to <code>id</code> before the model is instantiated, and there's also no way to reference a property of <code>Address</code> in the value for <code>limit_choices_to</code> (this seems like a missing feature to me). I've seen other related questions that modify the admin form to enforce this, but I would much prefer to have this enforced at the model level.</p>
<p><strong>How should I model both <code>Customer</code> and <code>Address</code> so that a customer can have many addresses accessible only to them, and be able to choose one for each of <code>billing_address</code> and <code>shipping_address</code>?</strong></p>
| <python><django><django-models> | 2023-10-24 20:30:02 | 2 | 2,780 | pspahn |
77,355,192 | 19,299,757 | How do I refer IAM user name in lambda event? | <p>I've an AWS Lambda function which just prints the event</p>
<pre><code>import json
import logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
def respond(err, res=None):
return {
'statusCode': '400' if err else '200',
'body': err.message if err else json.dumps(res),
'headers': {
'Content-Type': 'application/json',
},
}
def lambda_handler(event, context):
logger.info('Received event: %s' + json.dumps(event, indent=2))
param1 = event['queryStringParameters']['name']
param2 = event['queryStringParameters']['age']
    return respond(None, res='Hello World! event-----> ' + str(event) + "\n" + str(param1) + "\n" + str(param2))
</code></pre>
<p>I've an API endpoint that calls this function and I can invoke the API like this.</p>
<pre><code>https://abvdsa9gpa.execute-api.us-east-1.amazonaws.com/api/?name=my-name&age=30
</code></pre>
<p>Now I want to access the IAM user name (possibly the user's email) and then do some operations (sending an email, etc.) for the user who accessed this endpoint from within my company.
So if some other user within my team accesses this endpoint, they should receive a test e-mail (the SES part is yet to be done), but first: is there an easy way to identify the user who accessed this endpoint?</p>
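<p>To illustrate what I mean, here is a hedged sketch. My understanding (an assumption on my part) is that for REST-API endpoints with IAM (<code>AWS_IAM</code>) authorization, API Gateway puts the caller's details in <code>requestContext.identity</code>; extracting that is plain Python, shown here against a hypothetical sample event:</p>

```python
def caller_identity(event):
    # For REST APIs with IAM authorization, API Gateway is expected to fill
    # requestContext.identity with the signer's details (assumption).
    identity = event.get("requestContext", {}).get("identity", {})
    return identity.get("userArn")

# Hypothetical event shape for illustration only.
sample_event = {
    "requestContext": {
        "identity": {"userArn": "arn:aws:iam::123456789012:user/alice"}
    }
}
print(caller_identity(sample_event))
```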
<p>Thanks</p>
| <python><amazon-web-services><aws-lambda><aws-api-gateway> | 2023-10-24 20:20:32 | 0 | 433 | Ram |
77,354,976 | 3,543,200 | debian:12-slim breaking apt update for mysql | <p>In switching from <code>python:3.8.16-slim</code> (which is based on <code>debian:11-slim</code>) to <code>python:3.8.18-slim</code> (which is based on <code>debian:12-slim</code>), the following Dockerfile now fails to build with the following error. The change in python version is the only difference.</p>
<pre><code>FROM python:3.8.18-slim
RUN pip install awscli==1.19.29
# wget, lsb-release, gnupg required by mysql-apt-config
RUN apt-get update && apt-get install -y --no-install-recommends \
wget \
lsb-release \
gnupg \
&& rm -rf /var/lib/apt/lists/*
# if public key for mysql client is expired, check https://dev.mysql.com/downloads/repo/apt/ for latest version
RUN aws s3 cp s3://3rdparty-packages/mysql-apt-config_0.8.22-1_all.deb . && \
DEBIAN_FRONTEND=noninteractive dpkg -i mysql-apt-config_0.8.22-1_all.deb
RUN apt-get update && apt-get install -y --no-install-recommends \
g++ \
pkg-config \
default-libmysqlclient-dev \
libunistring-dev \
libboost-all-dev \
libboost-program-options-dev \
libffi-dev \
gcc automake autoconf libtool make \
&& rm -rf /var/lib/apt/lists/*
[...]
</code></pre>
<p>error:</p>
<pre><code> > [ 5/40] RUN apt-get update && apt-get install -y --no-install-recommends g++ pkg-config default-libmysqlclient-dev libunistring-dev libboost-all-dev libboost-program-options-dev libffi-dev gcc automake autoconf libtool make && rm -rf /var/lib/apt/lists/*:
14:11:34 #9 1.375 Get:2 http://deb.debian.org/debian bookworm-updates InRelease [52.1 kB]
14:11:34 #9 1.375 Get:3 http://deb.debian.org/debian-security bookworm-security InRelease [48.0 kB]
14:11:34 #9 1.574 Get:4 http://deb.debian.org/debian bookworm/main amd64 Packages [8780 kB]
14:11:34 #9 1.698 Get:5 http://deb.debian.org/debian bookworm-updates/main amd64 Packages [6408 B]
14:11:34 #9 1.802 Ign:6 http://repo.mysql.com/apt/debian bookworm InRelease
14:11:34 #9 1.834 Get:7 http://deb.debian.org/debian-security bookworm-security/main amd64 Packages [87.2 kB]
14:11:34 #9 2.085 Err:8 http://repo.mysql.com/apt/debian bookworm Release
14:11:34 #9 2.085 404 Not Found [IP: 23.78.1.18 80]
</code></pre>
| <python><docker><dockerfile><debian><debian-bookworm> | 2023-10-24 19:38:53 | 1 | 997 | gmoss |
77,354,731 | 5,960,363 | Define common Pydantic fields, use subsets of them in models | <h2>Setup</h2>
<p>I have a long list of variables to be used in email templates.</p>
<h5>Example</h5>
<pre class="lang-py prettyprint-override"><code>cust_name: str
bill_address: str
ship_address: str
ref_num: int
# ...list continues
</code></pre>
<p>Overlapping combinations of these appear in the templates, which I've modeled using Pydantic.</p>
<h5>Example</h5>
<pre class="lang-py prettyprint-override"><code>class TemplateA(BaseModel):
cust_name: str
bill_address: str
class TemplateB(BaseModel):
cust_name: str
ref_num: int
class TemplateC(BaseModel):
bill_address: str
ship_address: str
# ...and so on
</code></pre>
<h2>Question</h2>
<p>I'd like to know if there's a recommended way to centrally define my <code>variable: type</code> combos so I'm not repeating myself, and if I change a variable name or a type, those changes are reflected everywhere.</p>
<p>Pydantic model inheritance isn't working for me because so many combinations of fields are mixed and matched across template models. I'm open to the idea of changing my approach entirely if there's a better way. Thanks!</p>
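<p>To show the kind of centralisation I have in mind, here is a stdlib-only sketch using <code>typing.Annotated</code> aliases as the single source of truth. (Pydantic v2 reads <code>Annotated</code> metadata, so my assumption is the same aliases could be reused in <code>BaseModel</code> subclasses; the class below is a plain class only so the sketch stands alone.)</p>

```python
from typing import Annotated

# Central, single-source field definitions; models then reference the
# aliases instead of repeating "name: type" pairs everywhere.
CustName = Annotated[str, "customer name"]
BillAddress = Annotated[str, "billing address"]
RefNum = Annotated[int, "reference number"]

class TemplateB:  # with Pydantic installed this would subclass BaseModel
    cust_name: CustName
    ref_num: RefNum

print(TemplateB.__annotations__["ref_num"].__metadata__)
```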
| <python><design-patterns><types><fastapi><pydantic> | 2023-10-24 18:57:45 | 2 | 852 | FlightPlan |
77,354,502 | 19,980,284 | AttributeError: module 'transformers' has no attribute 'BertTokenizerFast' | <p>I have installed spacy onto a Jupyter Notebook in Jupyter Lab that I access through the Anaconda Navigator, all on a remote desktop.</p>
<p>I was able to install spacy using</p>
<pre><code>!pip install spacy
</code></pre>
<p>But when I tried to run</p>
<pre><code>import spacy
nlp = spacy.load('en_core_web_sm')
</code></pre>
<p>I got a common OSError 50 that 'en_core_web_sm' was not a Python package. So I went in circles for a while and finally downloaded en_core_web_sm 3.7.0 to my Downloads folder from <a href="https://github.com/explosion/spacy-models/releases/tag/en_core_web_sm-3.7.0" rel="nofollow noreferrer">here</a> and then used</p>
<pre><code>!pip install Downloads/en_core_web_sm-3.7.0.tar.gz
</code></pre>
<p>to install the English language model. I used <code>!pip list</code> and confirmed that I am using <code>en-core-web-sm 3.7.0</code> and <code>spacy 3.7.2</code>, and when I tried to load that model, I got this weird attribute error:</p>
<pre><code>nlp = spacy.load('en_core_web_sm')
AttributeError Traceback (most recent call last)
Cell In[2], line 1
----> 1 nlp = spacy.load('en_core_web_sm')
File ~\AppData\Roaming\Python\Python311\site-packages\spacy\__init__.py:51, in load(name, vocab, disable, enable, exclude, config)
27 def load(
28 name: Union[str, Path],
29 *,
(...)
34 config: Union[Dict[str, Any], Config] = util.SimpleFrozenDict(),
35 ) -> Language:
36 """Load a spaCy model from an installed package or a local path.
37
38 name (str): Package name or model path.
(...)
49 RETURNS (Language): The loaded nlp object.
50 """
---> 51 return util.load_model(
52 name,
53 vocab=vocab,
54 disable=disable,
55 enable=enable,
56 exclude=exclude,
57 config=config,
58 )
File ~\AppData\Roaming\Python\Python311\site-packages\spacy\util.py:465, in load_model(name, vocab, disable, enable, exclude, config)
463 return get_lang_class(name.replace("blank:", ""))()
464 if is_package(name): # installed as package
--> 465 return load_model_from_package(name, **kwargs) # type: ignore[arg-type]
466 if Path(name).exists(): # path to model data directory
467 return load_model_from_path(Path(name), **kwargs) # type: ignore[arg-type]
File ~\AppData\Roaming\Python\Python311\site-packages\spacy\util.py:501, in load_model_from_package(name, vocab, disable, enable, exclude, config)
484 """Load a model from an installed package.
485
486 name (str): The package name.
(...)
498 RETURNS (Language): The loaded nlp object.
499 """
500 cls = importlib.import_module(name)
--> 501 return cls.load(vocab=vocab, disable=disable, enable=enable, exclude=exclude, config=config)
File ~\AppData\Roaming\Python\Python311\site-packages\en_core_web_sm\__init__.py:10, in load(**overrides)
9 def load(**overrides):
---> 10 return load_model_from_init_py(__file__, **overrides)
File ~\AppData\Roaming\Python\Python311\site-packages\spacy\util.py:682, in load_model_from_init_py(init_file, vocab, disable, enable, exclude, config)
680 if not model_path.exists():
681 raise IOError(Errors.E052.format(path=data_path))
--> 682 return load_model_from_path(
683 data_path,
684 vocab=vocab,
685 meta=meta,
686 disable=disable,
687 enable=enable,
688 exclude=exclude,
689 config=config,
690 )
File ~\AppData\Roaming\Python\Python311\site-packages\spacy\util.py:539, in load_model_from_path(model_path, meta, vocab, disable, enable, exclude, config)
537 overrides = dict_to_dot(config, for_overrides=True)
538 config = load_config(config_path, overrides=overrides)
--> 539 nlp = load_model_from_config(
540 config,
541 vocab=vocab,
542 disable=disable,
543 enable=enable,
544 exclude=exclude,
545 meta=meta,
546 )
547 return nlp.from_disk(model_path, exclude=exclude, overrides=overrides)
File ~\AppData\Roaming\Python\Python311\site-packages\spacy\util.py:587, in load_model_from_config(config, meta, vocab, disable, enable, exclude, auto_fill, validate)
584 # This will automatically handle all codes registered via the languages
585 # registry, including custom subclasses provided via entry points
586 lang_cls = get_lang_class(nlp_config["lang"])
--> 587 nlp = lang_cls.from_config(
588 config,
589 vocab=vocab,
590 disable=disable,
591 enable=enable,
592 exclude=exclude,
593 auto_fill=auto_fill,
594 validate=validate,
595 meta=meta,
596 )
597 return nlp
File ~\AppData\Roaming\Python\Python311\site-packages\spacy\language.py:1830, in Language.from_config(cls, config, vocab, disable, enable, exclude, meta, auto_fill, validate)
1824 warn_if_jupyter_cupy()
1826 # Note that we don't load vectors here, instead they get loaded explicitly
1827 # inside stuff like the spacy train function. If we loaded them here,
1828 # then we would load them twice at runtime: once when we make from config,
1829 # and then again when we load from disk.
-> 1830 nlp = lang_cls(
1831 vocab=vocab,
1832 create_tokenizer=create_tokenizer,
1833 create_vectors=create_vectors,
1834 meta=meta,
1835 )
1836 if after_creation is not None:
1837 nlp = after_creation(nlp)
File ~\AppData\Roaming\Python\Python311\site-packages\spacy\language.py:188, in Language.__init__(self, vocab, max_length, meta, create_tokenizer, create_vectors, batch_size, **kwargs)
166 """Initialise a Language object.
167
168 vocab (Vocab): A `Vocab` object. If `True`, a vocab is created.
(...)
183 DOCS: https://spacy.io/api/language#init
184 """
185 # We're only calling this to import all factories provided via entry
186 # points. The factory decorator applied to these functions takes care
187 # of the rest.
--> 188 util.registry._entry_point_factories.get_all()
190 self._config = DEFAULT_CONFIG.merge(self.default_config)
191 self._meta = dict(meta)
File ~\AppData\Roaming\Python\Python311\site-packages\catalogue\__init__.py:110, in Registry.get_all(self)
108 result = {}
109 if self.entry_points:
--> 110 result.update(self.get_entry_points())
111 for keys, value in REGISTRY.copy().items():
112 if len(self.namespace) == len(keys) - 1 and all(
113 self.namespace[i] == keys[i] for i in range(len(self.namespace))
114 ):
File ~\AppData\Roaming\Python\Python311\site-packages\catalogue\__init__.py:125, in Registry.get_entry_points(self)
123 result = {}
124 for entry_point in self._get_entry_points():
--> 125 result[entry_point.name] = entry_point.load()
126 return result
File C:\ProgramData\anaconda3\Lib\importlib\metadata\__init__.py:202, in EntryPoint.load(self)
197 """Load the entry point from its definition. If only a module
198 is indicated by the value, return that module. Otherwise,
199 return the named object.
200 """
201 match = self.pattern.match(self.value)
--> 202 module = import_module(match.group('module'))
203 attrs = filter(None, (match.group('attr') or '').split('.'))
204 return functools.reduce(getattr, attrs, module)
File C:\ProgramData\anaconda3\Lib\importlib\__init__.py:126, in import_module(name, package)
124 break
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1204, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1176, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:1126, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)
File <frozen importlib._bootstrap>:1204, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1176, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:1147, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:690, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:940, in exec_module(self, module)
File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)
File ~\AppData\Roaming\Python\Python311\site-packages\spacy_curated_transformers\pipeline\__init__.py:1
----> 1 from .transformer import CuratedTransformer
File ~\AppData\Roaming\Python\Python311\site-packages\spacy_curated_transformers\pipeline\transformer.py:26
23 from thinc.types import Ragged
25 from ..errors import Errors
---> 26 from ..models.listeners import ListenerStateUtils
27 from ..models.output import DocTransformerOutput, TransformerModelOutput
28 from ..models.types import TransformerListenerModelT
File ~\AppData\Roaming\Python\Python311\site-packages\spacy_curated_transformers\models\__init__.py:1
----> 1 from .architectures import (
2 build_albert_transformer_model_v1,
3 build_bert_transformer_model_v1,
4 build_camembert_transformer_model_v1,
5 build_roberta_transformer_model_v1,
6 build_xlmr_transformer_model_v1,
7 build_pytorch_checkpoint_loader_v1,
8 )
9 from .hf_loader import build_hf_transformer_encoder_loader_v1
10 from .scalar_weight import build_scalar_weight_v1
File ~\AppData\Roaming\Python\Python311\site-packages\spacy_curated_transformers\models\architectures.py:32
29 from thinc.types import ArgsKwargs, Floats2d, Ints1d
31 from ..errors import Errors
---> 32 from ..tokenization.types import Tok2PiecesModelT
33 from .listeners import (
34 WrappedTransformerAndListener,
35 replace_listener_callback,
36 replace_listener_cfg_callback,
37 )
38 from .output import TransformerModelOutput
File ~\AppData\Roaming\Python\Python311\site-packages\spacy_curated_transformers\tokenization\__init__.py:3
1 from .bbpe_encoder import build_byte_bpe_encoder_loader_v1, build_byte_bpe_encoder_v1
2 from .char_encoder import build_char_encoder_loader_v1, build_char_encoder_v1
----> 3 from .hf_loader import build_hf_piece_encoder_loader_v1
4 from .sentencepiece_encoder import (
5 build_camembert_sentencepiece_encoder_v1,
6 build_sentencepiece_encoder_loader_v1,
7 build_sentencepiece_encoder_v1,
8 build_xlmr_sentencepiece_encoder_v1,
9 )
10 from .wordpiece_encoder import (
11 build_bert_wordpiece_encoder_v1,
12 build_wordpiece_encoder_loader_v1,
13 build_wordpiece_encoder_v1,
14 )
File ~\AppData\Roaming\Python\Python311\site-packages\spacy_curated_transformers\tokenization\hf_loader.py:17
13 from ..errors import Errors
15 if has_hf_transformers:
16 SUPPORTED_TOKENIZERS = (
---> 17 transformers.BertTokenizerFast,
18 transformers.RobertaTokenizerFast,
19 transformers.XLMRobertaTokenizerFast,
20 transformers.CamembertTokenizerFast,
21 transformers.BertJapaneseTokenizer,
22 )
23 else:
24 SUPPORTED_TOKENIZERS = () # type: ignore
AttributeError: module 'transformers' has no attribute 'BertTokenizerFast'
</code></pre>
| <python><pip><spacy> | 2023-10-24 18:14:42 | 1 | 671 | hulio_entredas |
77,354,401 | 9,983,652 | What is the final model to use when using stratified k-fold CV? | <p>When using stratified k-fold CV, we fit the model once per fold (5 folds, for example), so we end up with 5 models, one per fold. My question is: what is the final model to use for prediction? For example, the code below gets the accuracy for each of 10 folds; which fold's model should be used after training and fitting the data? Do we just use the model from the fold with the highest accuracy?</p>
<p><a href="https://www.geeksforgeeks.org/stratified-k-fold-cross-validation/" rel="nofollow noreferrer">https://www.geeksforgeeks.org/stratified-k-fold-cross-validation/</a></p>
<pre><code># Import Required Modules.
from statistics import mean, stdev
from sklearn import preprocessing
from sklearn.model_selection import StratifiedKFold
from sklearn import linear_model
from sklearn import datasets
# FETCHING FEATURES AND TARGET VARIABLES IN ARRAY FORMAT.
cancer = datasets.load_breast_cancer()
# Input_x_Features.
x = cancer.data
# Input_ y_Target_Variable.
y = cancer.target
# Feature Scaling for input features.
scaler = preprocessing.MinMaxScaler()
x_scaled = scaler.fit_transform(x)
# Create classifier object.
lr = linear_model.LogisticRegression()
# Create StratifiedKFold object.
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)
lst_accu_stratified = []
for train_index, test_index in skf.split(x, y):
x_train_fold, x_test_fold = x_scaled[train_index], x_scaled[test_index]
y_train_fold, y_test_fold = y[train_index], y[test_index]
lr.fit(x_train_fold, y_train_fold)
lst_accu_stratified.append(lr.score(x_test_fold, y_test_fold))
# Print the output.
print('List of possible accuracy:', lst_accu_stratified)
print('\nMaximum Accuracy That can be obtained from this model is:',
max(lst_accu_stratified)*100, '%')
print('\nMinimum Accuracy:',
min(lst_accu_stratified)*100, '%')
print('\nOverall Accuracy:',
mean(lst_accu_stratified)*100, '%')
print('\nStandard Deviation is:', stdev(lst_accu_stratified))
</code></pre>
| <python><scikit-learn> | 2023-10-24 17:59:36 | 1 | 4,338 | roudan |
77,354,334 | 17,274,113 | Openpyxl: loading excel "Table" as variable. `ValueError: Table with name Membership already exists` | <p>I have been working with excel documents lately, and having difficulty grasping "tables". I think some of the confusion stems from the fact that some online resources refer to an array of values in a worksheet as a table, whereas I am interested in dealing with a table object ("insert table" in excel).</p>
<p>From a <a href="https://stackoverflow.com/questions/77108380/extracting-multiple-sub-tables-within-a-single-excel-sheet-to-pandas-dataframe/77115792?noredirect=1#comment136248237_77115792">previous question</a>, I got the sense that "tables" are recognizable by openpyxl which appeared to be able to read one in and manipulate it as one would a csv etc. I am however encountering an error upon loading the worksheet as a variable in my script:</p>
<pre><code>VBS_Metrics_Report_dir = "path to .xlsx file"
VBS_Metrics_Report = openpyxl.load_workbook(os.path.join(VBS_Metrics_Report_dir, "VBS_Metrics_Report.xlsx"))
VBS_Membership_ws = VBS_Metrics_Report['Membership']
VBS_Membership_table = VBS_Membership_ws.tables['Membership']
</code></pre>
<p><code>---> 192 VBS_Metrics_Report = openpyxl.load_workbook(os.path.join(VBS_Metrics_Report_dir, "VBS_Metrics_Report.xlsx"))</code></p>
<p><code>ValueError: Table with name Membership already exists</code></p>
<p>First, I am confused because the line where the error is reported does not even attempt to access the table or the sheet named "Membership"; that only happens in the following lines.</p>
<p>Second, where does this error claim the table already exists? Considering that the error occurs even after restarting the Python kernel, it can't be that it already exists as a variable in my workspace. Even if it did, I should be able to overwrite a variable, no?</p>
<p>If instead the problem is that the table already exists in the workbook I am trying to access, then yes, of course it already exists, that is why I am trying to access it. In order to make use of the data that I know it contains. In this case I must be misunderstanding the usage of the <code>load_workbook</code> function.</p>
<p>I realize this question has essentially been asked <a href="https://stackoverflow.com/questions/63615515/openpyxl-load-workbook-valueerror-table-with-name-table1-already-exists">here</a>, but it has not really been answered, and I thought I would add some thoughts.</p>
<p>If anyone can help me solve this problem OR has insight as to how handling excel "Tables" works to give me a better understanding, please let me know. It would be hugely appreciated!</p>
<p>Thanks.</p>
| <python><excel><openpyxl><xlsx> | 2023-10-24 17:48:21 | 1 | 429 | Max Duso |
77,354,288 | 755,934 | No module named 'pydantic_core._pydantic_core' server for python serverless on AWS lambda | <p>I read these StackOverflow questions: <a href="https://stackoverflow.com/questions/77305721/aws-lambda-error-on-fast-api-no-module-named-pydantic-core-pydantic-core">1</a> <a href="https://stackoverflow.com/questions/76650856/no-module-named-pydantic-core-pydantic-core-in-aws-lambda-though-library-is-i">2</a> that cover the same issue, as well as <a href="https://github.com/pydantic/pydantic/issues/6557" rel="nofollow noreferrer">this Github thread</a>. I still feel like I am missing something.</p>
<p><strong>My Issue</strong></p>
<p>I have a function that is deployed to AWS Lambda/API gateway using the serverless framework (python 3.10 runtime). I am also using the <code>serverless-python-requirements</code> plugin. My function uses the <code>pydantic</code> module. I have the following in my requirements.txt (excerpt):</p>
<pre><code>pydantic==2.4.2
pydantic_core==2.10.1
</code></pre>
<p>I am not using Flask or FastAPI. My function works just fine when invoked locally (<code>serverless invoke local -f my_function</code>).</p>
<p>After deploying and invoking the deployed function with the same command (other than removing local), I get this error:</p>
<pre><code>Running "serverless" from node_modules
{
"errorMessage": "No module named 'pydantic_core._pydantic_core'",
"errorType": "ModuleNotFoundError",
"requestId": "fd4eb321-5f81-42a2-9880-ea6a76a626d5",
"stackTrace": [
" File \"/opt/python/serverless_aws_lambda_sdk/instrument/__init__.py\", line 598, in stub\n return self._handler(user_handler, event, context)\n",
" File \"/opt/python/serverless_aws_lambda_sdk/instrument/__init__.py\", line 580, in _handler\n result = user_handler(event, context)\n",
" File \"/var/task/serverless_sdk/__init__.py\", line 144, in wrapped_handler\n return user_handler(event, context)\n",
" File \"/var/task/s_post_rental_app_from_airtable.py\", line 25, in error_handler\n raise e\n",
" File \"/var/task/s_post_rental_app_from_airtable.py\", line 20, in <module>\n user_handler = serverless_sdk.get_user_handler('functions.post_rental_app_from_airtable.handler.lambda_handler')\n",
" File \"/var/task/serverless_sdk/__init__.py\", line 56, in get_user_handler\n user_module = import_module(user_module_name)\n",
" File \"/var/lang/lib/python3.10/importlib/__init__.py\", line 126, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n",
" File \"<frozen importlib._bootstrap>\", line 1050, in _gcd_import\n",
" File \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\n",
" File \"<frozen importlib._bootstrap>\", line 1006, in _find_and_load_unlocked\n",
" File \"<frozen importlib._bootstrap>\", line 688, in _load_unlocked\n",
" File \"<frozen importlib._bootstrap_external>\", line 883, in exec_module\n",
" File \"<frozen importlib._bootstrap>\", line 241, in _call_with_frames_removed\n",
" File \"/var/task/functions/post_rental_app_from_airtable/handler.py\", line 9, in <module>\n from pydantic import BaseModel, ValidationError\n",
" File \"/var/task/pydantic/__init__.py\", line 3, in <module>\n import pydantic_core\n",
" File \"/var/task/pydantic_core/__init__.py\", line 6, in <module>\n from ._pydantic_core import (\n"
]
}
Environment: darwin, node 20.2.0, framework 3.36.0 (local) 3.35.2v (global), plugin 7.1.0, SDK 4.4.0
Credentials: Local, "default" profile
Docs: docs.serverless.com
Support: forum.serverless.com
Bugs: github.com/serverless/serverless/issues
</code></pre>
<p><strong>What I Tried</strong></p>
<p>I read in the threads above that this problem could come about as a result of ISA mismatches in pydantic_core. So I tried the following:</p>
<p>Specifying the ISA for all functions in serverless.yml (provider -> architecture -> 'arm64' and 'x86_64')</p>
<pre><code>provider:
name: aws
# latest supported python by lambda + serverless as of 2023-10-23
runtime: python3.10
# NOTE: arm64 may offer better price/performance
architecture: 'arm64'
</code></pre>
<p>Specifying that pip should build in a docker in serverless.yml, that it should use an arm64 image, and pass the platform information to pip via the <code>dockerRunCmdExtraArgs</code>:</p>
<pre><code>custom:
pythonRequirements:
# this is necessary to avoid cross-platform build issues
dockerizePip: true
# explicitly pass the arm64 platform to the docker build
dockerImage: public.ecr.aws/sam/build-python3.10:latest-arm64
# explicitly tell pip to fetch the arm64 version of the package
dockerRunCmdExtraArgs: [ '--platform', 'linux/arm64/v8' ]
</code></pre>
<p>I'm not sure what else to try.</p>
| <python><aws-lambda><serverless-framework><pydantic> | 2023-10-24 17:40:42 | 1 | 5,624 | Daniel Kats |
77,354,262 | 616,460 | Python package not being found after install | <p>I have a project directory structure like this:</p>
<pre><code> root\
.git\
server\
setup.py
company\
__init__.py
zed\
__init__.py
server\
__init__.py
server.py
util.py
</code></pre>
<p>My <em>__init__.py</em> files are all empty. My <em>root\server\setup.py</em> looks like this:</p>
<pre><code>from setuptools import setup, find_packages
setup(
name="zed-server",
version="1.0.0",
packages=find_packages(),
install_requires=[
# ...
]
)
</code></pre>
<p>I confirmed that <code>find_packages()</code> is returning:</p>
<pre><code>['company', 'company.zed', 'company.zed.server']
</code></pre>
<p>I install the package (successfully) by doing:</p>
<pre><code>cd root\server
pip install -e .
</code></pre>
<p>I confirm the package is installed as editable:</p>
<pre><code>$ pip list
...
zed-server 1.0.0 C:\path\to\root\server
...
</code></pre>
<hr />
<p>The problem:</p>
<p>In <em>server.py</em> I have an import statement like this:</p>
<pre><code>import company.zed.server.util
</code></pre>
<p>I run that file like:</p>
<pre><code>cd root\server\company\zed\server
python server.py
</code></pre>
<p>But it fails with:</p>
<pre><code>ModuleNotFoundError: No module named 'company.zed.server'
</code></pre>
<p>And I have no idea what's going on. Why can't it find the package and how do I resolve this?</p>
<p>System:</p>
<ul>
<li>Windows 10</li>
<li>Python 3.11.1</li>
<li>Pip 23.3.1</li>
<li>Setuptools 68.2.2</li>
</ul>
| <python><setuptools><python-packaging> | 2023-10-24 17:37:29 | 1 | 40,602 | Jason C |
77,354,233 | 235,671 | MyPy complains about isinstance(obj, Callable) being incompatible with _ClassInfo | <p>I've got a couple of <code>if</code>s that all use <code>isinstance</code>. One of them is checking <code>obj</code> against <code>Callable</code> with <code>isinstance(obj, Callable)</code> as suggested in <a href="https://stackoverflow.com/a/65505019/235671">this</a> answer. But when I run <code>mypy</code> on that code it reports the following error:</p>
<blockquote>
<pre><code>error: Argument 2 to "isinstance" has incompatible type "<typing special form>"; expected "_ClassInfo"
</code></pre>
</blockquote>
<p>Is there any way to fix that without using the builtin <code>callable</code> function or disabling type checking for this line?</p>
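<p>A minimal sketch of one way around this (hedged — the <code>describe</code> helper below is hypothetical, purely for illustration): <code>collections.abc.Callable</code> is a real runtime class, so <code>isinstance()</code> accepts it and mypy stops complaining, unlike the <code>typing.Callable</code> special form.</p>

```python
from collections.abc import Callable

# collections.abc.Callable is an actual class at runtime, so it is a
# valid second argument to isinstance() and satisfies mypy's _ClassInfo.
def describe(obj: object) -> str:
    if isinstance(obj, Callable):
        return "callable"
    return "not callable"

print(describe(len))  # callable
print(describe(42))   # not callable
```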
| <python><python-3.x><mypy><python-typing> | 2023-10-24 17:32:13 | 0 | 19,283 | t3chb0t |
77,354,114 | 13,217,286 | Generate combinations of items in a Polars List using expressions? | <p>Is there a way to generate combinations of items within a list inside a Polars column without resorting to <code>.map_elements()</code> + <strong>itertools</strong> for each row?</p>
<p>This is my current solution:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import itertools
pl.Config(fmt_table_cell_list_len=8, fmt_str_lengths=100)
df = pl.DataFrame({'col': [['a', 'b', 'c'], ['d', 'e']]})
df.with_columns(pl.col('col')
.map_elements(lambda list_o_things: [sorted([thing_1, thing_2])
for thing_1, thing_2
in itertools.combinations(list_o_things, 2)],
return_dtype=pl.List(pl.List(pl.String)))
)
</code></pre>
<p>which returns this:</p>
<pre><code>shape: (2, 1)
┌──────────────────────────────────────┐
│ col                                  │
│ ---                                  │
│ list[list[str]]                      │
╞══════════════════════════════════════╡
│ [["a", "b"], ["a", "c"], ["b", "c"]] │
│ [["d", "e"]]                         │
└──────────────────────────────────────┘
</code></pre>
| <python><dataframe><list><python-itertools><python-polars> | 2023-10-24 17:14:15 | 2 | 320 | Thomas |
77,354,058 | 4,031,815 | Micromamba and Dockerfile error: /bin/bash: activate: No such file or directory | <p>I used to have a Dockerfile with <code>conda</code> and <code>flask</code> that worked locally, but after installing <code>pytorch</code> I had to switch to <code>micromamba</code>. The old <code>CMD</code> no longer works with the image <code>mambaorg/micromamba:1-focal-cuda-11.7.1</code>. This is how my Dockerfile looks right now:</p>
<pre><code>FROM mambaorg/micromamba:1-focal-cuda-11.7.1
WORKDIR /app
COPY environment.yml .
RUN micromamba env create --file environment.yml
EXPOSE 5001
COPY app.py .
CMD ["/bin/bash", "-c", "source activate env-app && flask run --host=0.0.0.0 --port=5001"]
</code></pre>
<p>Now I get error:</p>
<p><code>/bin/bash: activate: No such file or directory</code></p>
<p>but before when I was using <code>conda</code> it was working, I think the CMD command needs to be updated.</p>
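<p>A hedged sketch of what the <code>CMD</code> could look like (assuming the env in <code>environment.yml</code> is named <code>env-app</code>): the micromamba images don't ship conda's <code>activate</code> script, but <code>micromamba run -n NAME</code> executes a command inside the named environment.</p>

```dockerfile
# Hedged sketch — micromamba images have no `source activate`; the
# documented way to run a command inside an env is `micromamba run`.
# The env name must match the `name:` field in environment.yml.
CMD ["micromamba", "run", "-n", "env-app", "flask", "run", "--host=0.0.0.0", "--port=5001"]
```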
| <python><docker><flask><dockerfile><conda> | 2023-10-24 17:03:58 | 1 | 25,646 | CommonSenseCode |
77,353,746 | 7,188,690 | Subtraction from nested dictionaries in a pandas dataframe | <p>I have a dataframe where I want to find the difference in the unique_users and total_sessions between the latest day (2023-09-07) and the previous day (2023-09-06), for hours 2 and 13 separately, for each key ('jio' and 'nokia'), and for a specific Exception. I need to consider the Exception, Hour and Date to calculate the difference. At the end I want to get the top two <code>unique_users</code>.</p>
<pre><code>Date Hour Exception cell
0 2023/09/06 2 S1AP {'design': {'jio': {'total_sessions': 39273, 'unique_users': 30837}, 'nokia': {'total_sessions': 9523, 'unique_users': 7690}}}
1 2023/09/06 13 S1AP {'design': {'jio': {'total_sessions': 46870, 'unique_users': 39330}, 'nokia': {'total_sessions': 11745, 'unique_users': 10059}}}
2 2023/09/07 13 S1AP {'design': {'jio': {'total_sessions': 35688, 'unique_users': 29628}, 'nokia': {'total_sessions': 8759, 'unique_users': 7537}}}
3 2023/09/07 2 S1AP {'design': {'jio': {'total_sessions': 37804, 'unique_users': 29654}, 'nokia': {'total_sessions': 8738, 'unique_users': 7272}}}
</code></pre>
<h1></h1>
<pre><code>result_df['Date'] = pd.to_datetime(result_df['Date'])
# Filter for the latest day
latest_day = result_df[result_df['Date'] == result_df['Date'].max()]
# Filter for the previous day
previous_day = result_df[result_df['Date'] == (result_df['Date'].max() - pd.DateOffset(days=1))]
result = {}
for key in latest_day['cell'].values[0].keys():
latest_unique_users = latest_day['cell'].values[0][key]['unique_users']
previous_unique_users = previous_day['cell'].values[0][key]['unique_users']
result[key] = {
'unique_users_diff': latest_unique_users - previous_unique_users,
}
</code></pre>
<p>But I would like to get the difference in the original dataframe itself. The previous date (2023-09-06) rows are not needed in the final dataframe. My current code is not considering the specific hour and specific exception for the calculation.</p>
<p>Expected output:</p>
<pre><code>Date Hour Exception Cell design_jio_total_sessions_diff design_jio_unique_users_diff design_nokia_total_sessions_diff design_nokia_unique_users_diff
1 2023/09/07 2 S1AP {'design': {'jio': {'total_sessions': 37804, 'unique_users': 29654}, 'nokia': {'total_sessions': 8738, 'unique_users': 7272}}} -1469 -1183 -785 -418
2 2023/09/07 13 S1AP {'design': {'jio': {'total_sessions': 35688, 'unique_users': 29628}, 'nokia': {'total_sessions': 8759, 'unique_users': 7537}}} -11182 -9702 -2986 -2522
</code></pre>
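<p>A sketch of one way to get those diff columns (hedged assumptions: the dict layout is uniform, the frame is small enough to flatten, and the column names such as <code>design_jio_unique_users</code> come from <code>json_normalize</code>'s separator; only the 'jio' key is shown to keep it short): flatten <code>cell</code>, self-merge on (Date, Hour, Exception) against a copy shifted forward one day, then subtract.</p>

```python
import pandas as pd

# Miniature of the frame above, with only the 'jio' key to keep it short.
df = pd.DataFrame({
    "Date": ["2023/09/06", "2023/09/06", "2023/09/07", "2023/09/07"],
    "Hour": [2, 13, 13, 2],
    "Exception": ["S1AP"] * 4,
    "cell": [
        {"design": {"jio": {"total_sessions": 39273, "unique_users": 30837}}},
        {"design": {"jio": {"total_sessions": 46870, "unique_users": 39330}}},
        {"design": {"jio": {"total_sessions": 35688, "unique_users": 29628}}},
        {"design": {"jio": {"total_sessions": 37804, "unique_users": 29654}}},
    ],
})
df["Date"] = pd.to_datetime(df["Date"])

# Flatten the nested dicts into columns such as design_jio_unique_users.
flat = pd.json_normalize(df["cell"].tolist(), sep="_")
wide = pd.concat([df.drop(columns="cell"), flat], axis=1)

# Shift a copy forward one day so each row lines up with the previous
# day's row that has the same Hour and Exception.
prev = wide.copy()
prev["Date"] = prev["Date"] + pd.Timedelta(days=1)
merged = wide.merge(prev, on=["Date", "Hour", "Exception"], suffixes=("", "_prev"))

for col in flat.columns:
    merged[f"{col}_diff"] = merged[col] - merged[f"{col}_prev"]
```

<p>Because the shifted copy only matches 2023-09-07 rows, <code>merged</code> already contains just the latest day; for hour 2 this gives <code>design_jio_total_sessions_diff = 37804 - 39273 = -1469</code>.</p>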
| <python><pandas> | 2023-10-24 16:09:18 | 1 | 494 | sam |
77,353,402 | 1,182,794 | aiohttp posting list of jsons | <p>I saw in somebody's code that they are posting a list of requests in one aiohttp post call, and expecting to receive list of responses back. I have tried that myself and it worked (if you are curious what I'm requesting, those are blocks of Ethereum, I'm talking to a live node):</p>
<pre><code>def aiohttp_test():
import aiohttp
import asyncio
async def post_test():
req = list()
for ind, i in enumerate(range(15544190, 15544192)):
req.append({"jsonrpc": "2.0", "id": ind + 1, "method": "eth_getBlockByNumber", "params": [hex(i), False]})
async with aiohttp.ClientSession() as session:
async with session.post('http://ubuntu18-1:8545', json=req) as resp:
print(resp.status)
print(await resp.text())
loop = asyncio.get_event_loop()
loop.run_until_complete(post_test())
</code></pre>
<p>and I actually got a list with two responses; however, the status is a single number, 200. How is this possible? The aiohttp documentation never mentions the possibility of posting a list.</p>
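<p>A hedged note on why this works: the batching is a JSON-RPC 2.0 feature, not an aiohttp one — a batch is simply a JSON array of request objects, and the server replies with an array of response objects inside one HTTP exchange, which is why there is exactly one status code. A minimal sketch of the payload (no network involved):</p>

```python
import json

# A JSON-RPC 2.0 batch is a plain JSON array of request objects; it is
# serialised into the body of a single HTTP POST, so the transport layer
# still reports one status code for the whole batch.
batch = [
    {"jsonrpc": "2.0", "id": i + 1,
     "method": "eth_getBlockByNumber", "params": [hex(n), False]}
    for i, n in enumerate(range(15544190, 15544192))
]
body = json.dumps(batch)
print(len(batch))                        # 2 requests, one POST body
print(json.loads(body)[0]["params"][0])  # 0xed2f7e
```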
| <python><aiohttp> | 2023-10-24 15:25:33 | 2 | 622 | DimaA6_ABC |
77,353,375 | 20,770,190 | Why do I get No module named 'requests' within the GitHub Actions runner? | <p>I have a <code>github-actions</code> file with a part to run <code>pytest</code> via <code>poetry</code>:</p>
<pre><code> build:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ['3.10',]
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install poetry
pip install requests
if [ -f pyproject.toml ]; then poetry install; fi
- name: Test with pytest
run: |
poetry run pytest -s
</code></pre>
<p>However, despite having installed the dependencies via <code>poetry</code> and <code>requests</code> separately via <code>pip</code>, it still throws the following error in the <code>github-actions</code> runner:</p>
<pre><code>Run poetry run pytest -s
ImportError while loading conftest '/home/runner/work/backend-holoplot/backend-holoplot/tests/conftest.py'.
tests/conftest.py:4: in <module>
from .client import override_get_db, engine
tests/client.py:3: in <module>
from fastapi.testclient import TestClient
../../../.cache/pypoetry/virtualenvs/production-api-4u0DyUfz-py3.10/lib/python3.10/site-packages/fastapi/testclient.py:1: in <module>
from starlette.testclient import TestClient as TestClient # noqa
../../../.cache/pypoetry/virtualenvs/production-api-4u0DyUfz-py3.10/lib/python3.10/site-packages/starlette/testclient.py:16: in <module>
import requests
E ModuleNotFoundError: No module named 'requests'
Error: Process completed with exit code 4.
</code></pre>
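<p>One hedged explanation: on the runner, plain <code>pip install requests</code> installs into the system interpreter, while <code>poetry run pytest</code> executes inside poetry's own virtualenv (visible in the traceback path under <code>~/.cache/pypoetry/virtualenvs/...</code>), which never received that package. A sketch of the step with the install redirected into poetry's env:</p>

```yaml
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install poetry
          if [ -f pyproject.toml ]; then poetry install; fi
          # hedged: install into poetry's virtualenv rather than the
          # runner's system interpreter, so `poetry run pytest` sees it
          poetry run pip install requests
```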
| <python><continuous-integration><pytest><github-actions><python-poetry> | 2023-10-24 15:22:26 | 3 | 301 | Benjamin Geoffrey |
77,353,210 | 8,261,345 | Conjugate symmetry: 3D Fourier transform dimension | <p>I have a real-valued input 3D array with the shape of <code>(H,W,D)=[8,8,20]</code>, where H, W, and D represent height, width, and depth (z dimension), respectively. When computing the DFT, what will be the dimension of the DFT complex array (3D Fourier transform)?</p>
<p>I read in one article that 2D DFT becomes like the following due to the conjugate symmetry:</p>
<p><a href="https://i.sstatic.net/Voo8E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Voo8E.png" alt="enter image description here" /></a></p>
<p>For 2D arrays of real value with <code>(H,W): [8,8]</code>, the dimension of DFT becomes <code>[8, 8//2+1] = [8, 5]</code>. In the case of 3D input real array, what will be the DFT array size?</p>
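<p>The 2D rule generalises directly: for real input, conjugate symmetry lets the transform store only the non-redundant half of the last transformed axis, so <code>(H, W, D)</code> becomes <code>(H, W, D//2 + 1)</code> — here <code>[8, 8, 20//2 + 1] = [8, 8, 11]</code>. A quick sketch with numpy's real-input FFT:</p>

```python
import numpy as np

# For real input, rfftn keeps only the non-redundant half of the LAST
# transformed axis: (H, W, D) -> (H, W, D//2 + 1).
x = np.random.default_rng(0).normal(size=(8, 8, 20))
X = np.fft.rfftn(x)
print(X.shape)  # (8, 8, 11)
```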
| <python><signal-processing><fft><dft> | 2023-10-24 15:19:04 | 1 | 2,236 | S.EB |
77,353,338 | 633,439 | Applying binary map gives NaN in some columns of the dataset | <p>I have a dataset named Housing having some string values in some columns as shown below:</p>
<p><a href="https://i.sstatic.net/0mdxu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0mdxu.png" alt="enter image description here" /></a></p>
<p>I need to convert those 'yes' or 'no' values to 0 or 1. To do that I am using the following code snippet:</p>
<pre><code>varlist = ['mainroad', 'guestroom', 'basement', 'hotwaterheating', 'airconditioning', 'prefarea']
# Defining the map function
def binary_map(x):
return x.map({"yes": 1, "no": 0})
# Applying the function to the housing list
housing[varlist] = housing[varlist].apply(binary_map)
housing.head()
</code></pre>
<p>However instead of 1 or 0 as entries I am getting NaN in those columns:
<a href="https://i.sstatic.net/UDk17.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UDk17.png" alt="enter image description here" /></a></p>
<p>Can someone tell what is the mistake here?</p>
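<p>A hedged reproduction of the two usual causes (the sample values here are hypothetical): <code>Series.map</code> returns NaN for any value that is not an exact key match, so either stray whitespace/casing in the strings, or re-running the cell after the column already holds 0/1, produces all-NaN columns. Normalising the strings first makes the lookup robust:</p>

```python
import pandas as pd

# Any value that is not an exact dict key becomes NaN under .map().
s = pd.Series(["yes ", "No", "yes"])  # note the stray space and the capital
print(s.map({"yes": 1, "no": 0}).tolist())

# Normalise before mapping so every variant hits a key.
cleaned = s.str.strip().str.lower().map({"yes": 1, "no": 0})
print(cleaned.tolist())  # [1, 0, 1]
```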
| <python><pandas><list><dataset> | 2023-10-24 15:18:22 | 1 | 1,107 | kzs |
77,353,210 | 8,261,345 | How to dampen a series of values in Pandas, with more dampening if further from the average? | <p>I have a series of values that oscillate about the average, 1.0, with a minimum of 0 and no maximum, e.g.</p>
<pre><code>s = pd.Series([1.0, 1.03, 1.05, 1.2, 0.8, 0.97, 0.99])
</code></pre>
<p>I wish to 'damp' this series about the average of 1, i.e. values close to 1, such as 1.01 and 0.99 should be changed very little, but values further away, such as 0.5 and 1.2, should be brought much closer to 1. My ideal result would look something like:</p>
<pre><code>[1.0, 1.029, 1.047, 1.15, 0.87, 0.975, 0.991]
</code></pre>
<p>Applying a standard damping function, we have a reasonable result:</p>
<pre class="lang-py prettyprint-override"><code>s.apply(lambda x: (1 + (x - 1) / x) if x >= 1 else (1 + (x - 1) * abs(x)))
</code></pre>
<pre><code>0 1.000000
1 1.029126
2 1.047619
3 1.166667
4 0.840000
5 0.970900
6 0.990100
</code></pre>
<p>This is good, but I want to damp extreme values more, and values close to 1 less. <a href="https://stackoverflow.com/questions/8862983/function-to-dampen-a-value">This question</a> suggests use of an exponential function <code>-e^kx</code>, and I can see that <code>k</code> applies a horizontal stretch, with smaller values (less than 1) widening the drop off. I attempted to implement this:</p>
<pre class="lang-py prettyprint-override"><code>s.apply(lambda x: 1 + (x - 1) * exp(-0.5 * x))
</code></pre>
<pre><code>0 1.000000
1 1.017925
2 1.029578
3 1.109762
4 0.865936
5 0.981529
6 0.993904
</code></pre>
<p>Unfortunately this also damps values close to 1 too much. I also tried using a standard <code>x^n</code>:</p>
<pre class="lang-py prettyprint-override"><code>s.apply(lambda x: (1 + (x - 1) / x**10) if x >= 1 else (1 + (x - 1) * abs(x**10)))
</code></pre>
<pre><code>0 1.000000
1 1.022323
2 1.030696
3 1.032301
4 0.978525
5 0.977877
6 0.990956
</code></pre>
<p>This is closer to what I want, but now values very far from 1.0 can end up closer to 1 than values that started near it, e.g. here 0.8 -> 0.978525, which is greater than 0.97 -> 0.977877.</p>
<p>How can I create a damping that reins in more extreme values without this problem?</p>
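<p>One hedged candidate (the constant <code>k</code> is a free parameter, not from any standard library): the rational map <code>1 + (x - 1) / (1 + k*|x - 1|)</code>. It is strictly increasing in <code>x</code>, so ordering is preserved — extreme values are pulled in proportionally harder, but a value that started further from 1 can never end up closer to 1 than one that started nearer, which is exactly the crossover seen with <code>x**10</code> above.</p>

```python
import pandas as pd

# Rational damping: strictly increasing in x, so no crossovers; larger
# k pulls extreme values in harder while barely moving values near 1.
k = 2.0
s = pd.Series([1.0, 1.03, 1.05, 1.2, 0.8, 0.97, 0.99])
damped = 1 + (s - 1) / (1 + k * (s - 1).abs())
print(damped.round(4).tolist())
```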
| <python><pandas><math> | 2023-10-24 14:59:55 | 1 | 694 | Student |