QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
76,658,724 | 365,531 | Pandas: Change subset of rows that contain duplicate values for a particular column based on values across all duplicates | <p>I'm new to Pandas and trying to understand how to modify a subset of rows that have duplicate values for a particular column, with the decision of which rows to change being made based on a conditional check across those duplicates.</p>
<p>Say I have a (contrived) dataframe like so:</p>
<pre><code> Class Length Head Teacher Premium Course
0 Maths Medium Mr. Bloggs Yes
1 English Short Mr. Plum Yes
2 English Long Mrs. Green Yes
3 English Medium Mr. Top Yes
4 Science Long Mrs. Blue Yes
5 Science Long Mr. Red Yes
6 ...
</code></pre>
<p>Wherever there are duplicate classes I want to replace the Teacher across all the duplicates with the Head Teacher from the longest class, and remove the Premium Course value for all the duplicates that are not the longest class. If the duplicate classes are all the same length, then simply take the teacher from the first duplicate and keep the Premium Course value only on that first row, i.e.:</p>
<pre><code> Class Length Head Teacher Premium Course
0 Maths Medium Mr. Bloggs Yes
1 English Short Mrs. Green
2 English Long Mrs. Green Yes
3 English Medium Mrs. Green
4 Science Long Mrs. Blue Yes
5 Science Long Mrs. Blue
6 ...
</code></pre>
<p>In Python I would typically use loops, conditional statements, etc., and build a new list in memory. But I'm trying to determine the best approach in pandas.</p>
<p>I've been looking at the <em>duplicated</em> and <em>groupby</em> functions but have been unable to land on a solution. Any advice would be appreciated; I'm trying to make the shift into thinking in a "vectorized" way.</p>
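For reference, a runnable sketch of one groupby-based approach (the Short/Medium/Long ranking dict is my own assumption about how "longest" is ordered; everything else mirrors the contrived frame above):

```python
import pandas as pd

df = pd.DataFrame({
    'Class': ['Maths', 'English', 'English', 'English', 'Science', 'Science'],
    'Length': ['Medium', 'Short', 'Long', 'Medium', 'Long', 'Long'],
    'Head Teacher': ['Mr. Bloggs', 'Mr. Plum', 'Mrs. Green', 'Mr. Top', 'Mrs. Blue', 'Mr. Red'],
    'Premium Course': ['Yes'] * 6,
})

# Give each Length an orderable rank so "longest" is computable per group
rank = df['Length'].map({'Short': 0, 'Medium': 1, 'Long': 2})

# Row label of the longest class within each group (idxmax keeps the first on ties)
idx = rank.groupby(df['Class']).transform(lambda s: s.idxmax())

# Broadcast that row's teacher across the whole group...
df['Head Teacher'] = df['Head Teacher'].loc[idx].to_numpy()
# ...and blank Premium Course on every row that is not the longest one
df.loc[df.index.to_numpy() != idx.to_numpy(), 'Premium Course'] = ''
```

This produces the desired output frame above without any explicit Python loop.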
| <python><pandas><dataframe><numpy><duplicates> | 2023-07-11 04:30:15 | 2 | 889 | Steven |
76,658,700 | 1,796,854 | How to share Robot Framework __init__.robot library initialization state with test cases? | <p>Suppose I have a Robot library implemented in python that does some state initialization like this:</p>
<pre class="lang-py prettyprint-override"><code>import uuid

class DemoLibrary:
    ROBOT_LIBRARY_SCOPE = 'SUITE'

    _my_uuid: str

    def __init__(self) -> None:
        self._my_uuid = str(uuid.uuid4())

    def get_library_uuid(self) -> str:
        return self._my_uuid
</code></pre>
<p>And within an <code>__init__.robot</code> file I initialize that library:</p>
<pre class="lang-none prettyprint-override"><code>*** Settings ***
Library         DemoLibrary
Suite Setup     Set Up Suite

*** Keywords ***
Set Up Suite
    ${lib_uuid}    Get Library Uuid
    Set Global Variable    ${GLOBAL_UUID}    ${lib_uuid}
</code></pre>
<p>If I then try to use that library in a test case in the suite defined by the <code>__init__.robot</code> file the library is re-initialized:</p>
<p><code>a_test.robot</code>:</p>
<pre class="lang-none prettyprint-override"><code>*** Settings ***
Library    DemoLibrary

*** Test Cases ***
Setup Proof Of Concept
    ${lib_uuid}    Get Library UUID
    Should Be Equal    ${GLOBAL_UUID}    ${lib_uuid}
</code></pre>
<p>^ This fails with <code>[FAIL] 9854fe14-47e5-4484-92c6-930a5cc5224c != d81635d0-0716-487f-9a22-1a450499652e</code> (obviously, with unique uuids each time)</p>
<p>If I log the object ids of self from the python library I can also see that they change between the keyword invocations.</p>
<p>My directory structure looks like this:</p>
<pre class="lang-none prettyprint-override"><code>demo
├── a_test/
│   ├── __init__.robot
│   └── a_test.robot
└── libraries/
    └── DemoLibrary.py
</code></pre>
<p>I'm invoking this from the <code>demo</code> directory with <code>robot --pythonpath=libraries a_test/</code>.</p>
<p>I've tried various combinations of <code>WITH NAME</code>, <code>Import Library</code> and <code>Get Library Instance</code> but none of them seemed to work.</p>
<p>Is there a way to reuse or share the instance of the library created in <code>__init__.robot</code>?</p>
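One thing that may be worth trying (an assumption on my part, not verified against this exact setup): with <code>ROBOT_LIBRARY_SCOPE = 'GLOBAL'</code>, Robot Framework keeps a single library instance for the whole run, so the suite-level and test-level lookups would hit the same object. A variant of <code>DemoLibrary.py</code>:

```python
import uuid

class DemoLibrary:
    # With GLOBAL scope Robot Framework creates one instance for the whole
    # test run, shared by __init__.robot and every test suite in it.
    ROBOT_LIBRARY_SCOPE = 'GLOBAL'

    def __init__(self) -> None:
        self._my_uuid = str(uuid.uuid4())

    def get_library_uuid(self) -> str:
        return self._my_uuid
```

With SUITE scope, each suite file that imports the library gets its own instance, which matches the changing object ids observed above.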
| <python><robotframework> | 2023-07-11 04:22:13 | 1 | 795 | nine9ths |
76,658,612 | 14,291,703 | How to compare two lists of pandas dataframe? | <pre><code>import pandas as pd
a = [pd.DataFrame([1,2,3])]
b = [pd.DataFrame([])]
</code></pre>
<p>How can I compare <code>a</code> and <code>b</code> so that the comparison returns <code>False</code> instead of raising?</p>
<p>I have tried a==b but it returns <code>ValueError: Can only compare identically-labeled (both index and columns) DataFrame objects</code>.</p>
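A sketch of the comparison using <code>DataFrame.equals</code>, which (unlike <code>==</code>) checks shape, index, columns and values without raising on differently-labeled frames:

```python
import pandas as pd

a = [pd.DataFrame([1, 2, 3])]
b = [pd.DataFrame([])]

# Compare the lists element-wise; equals() never raises, it just returns a bool
result = len(a) == len(b) and all(x.equals(y) for x, y in zip(a, b))
print(result)  # False
```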
| <python><pandas><dataframe> | 2023-07-11 03:53:29 | 2 | 512 | royalewithcheese |
76,658,489 | 11,028,689 | How to use precision or f1-score metrics in TensorFlow for multiclass classification | <p>I have tried to reproduce the following code for a multiclass classification problem (3 classes) from here:
<a href="https://saturncloud.io/blog/multiclass-logistic-regression-with-tensorflow-20-a-comprehensive-guide/" rel="nofollow noreferrer">https://saturncloud.io/blog/multiclass-logistic-regression-with-tensorflow-20-a-comprehensive-guide/</a></p>
<pre><code>import tensorflow as tf
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

# Load the Iris dataset
iris = load_iris()

# Split the dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42)

# Define the model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(3, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
history = model.fit(X_train, y_train, epochs=50, validation_split=0.2)
</code></pre>
<p>I noticed that when I change the line in <code>model.compile</code> with</p>
<pre><code>metrics=['accuracy']
</code></pre>
<p>to</p>
<pre><code>metrics=[tf.keras.metrics.Precision()]
</code></pre>
<p>to get a precision metric instead of accuracy, the code gives an error about shapes:</p>
<pre><code>ValueError: Shapes (32, 3) and (32, 1) are incompatible
</code></pre>
<p>Error also happens if I try to add precision as a metric after accuracy like so:</p>
<pre><code>metrics=['accuracy', tf.keras.metrics.Precision()]
</code></pre>
<p>I have also tried tensorflow addons and again get an error:</p>
<pre><code>import tensorflow_addons as tfa
metrics= [tfa.metrics.F1Score(average="macro",num_classes = 3,threshold=None,name='f1_score', dtype=None)]
ValueError: Dimension 0 in both shapes must be equal, but are 3 and 1. Shapes are [3] and [1].
</code></pre>
<p>How can I optimize on precision or f1-score from TensorFlow metrics (<a href="https://www.tensorflow.org/api_docs/python/tf/keras/metrics/" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/metrics/</a>)?</p>
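For completeness, one lighter-weight route I'm considering (my own idea, not from the linked guide): keep training with sparse labels and <code>'accuracy'</code>, then compute precision/F1 after prediction with scikit-learn, which handles multiclass averaging directly. The probabilities below are stand-ins for <code>model.predict(X_test)</code>:

```python
import numpy as np
from sklearn.metrics import f1_score, precision_score

# y_pred_probs stands in for model.predict(X_test): shape (n_samples, 3)
y_true = np.array([0, 1, 2, 2, 1])
y_pred_probs = np.array([
    [0.8, 0.1, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.2, 0.7],
    [0.6, 0.2, 0.2],
    [0.1, 0.8, 0.1],
])
y_pred = y_pred_probs.argmax(axis=1)   # collapse probabilities to class ids

print(precision_score(y_true, y_pred, average='macro'))
print(f1_score(y_true, y_pred, average='macro'))
```

This sidesteps the shape mismatch entirely, at the cost of not being able to optimize on the metric during training.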
| <python><tensorflow><machine-learning><keras><deep-learning> | 2023-07-11 03:10:01 | 1 | 1,299 | Bluetail |
76,658,311 | 2,515,265 | dash.exceptions.DependencyException thrown after increasing the number of processes in Dash application | <p>I have a Dash 2.7 application that used to run with 5 threads and 1 (gunicorn) process. To introduce true parallelisation I have increased the number of processes to 7, but since then from time to time I get this error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/dade/.conda/envs/viz-davide/lib/python3.8/site-packages/flask/app.py", line 2073, in wsgi_app
response = self.full_dispatch_request()
File "/Users/dade/.conda/envs/viz-davide/lib/python3.8/site-packages/flask/app.py", line 1518, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/Users/dade/.conda/envs/viz-davide/lib/python3.8/site-packages/flask/app.py", line 1516, in full_dispatch_request
rv = self.dispatch_request()
File "/Users/dade/.conda/envs/viz-davide/lib/python3.8/site-packages/flask/app.py", line 1502, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "/Users/dade/.conda/envs/viz-davide/lib/python3.8/site-packages/dash/dash.py", line 914, in serve_component_suites
_validate.validate_js_path(self.registered_paths, package_name, path_in_pkg)
File "/Users/dade/.conda/envs/viz-davide/lib/python3.8/site-packages/dash/_validate.py", line 357, in validate_js_path
raise exceptions.DependencyException(
dash.exceptions.DependencyException: Error loading dependency. "dash_tabulator" is not a registered library.
Registered libraries are:
[]
</code></pre>
<p>This error appears in the logs 7 times, so it looks like each process tried to do the same thing and failed.</p>
<p>The application uses a Flask filesystem cache which, from what I read, is thread safe. The application does also have some functions that are annotated with <code>@lru_cache</code> and I wonder whether this could be the source of the problem. If this is the issue, what is the best way to make the LRU cached functions thread safe?</p>
<p>My attempt:</p>
<pre><code>import os
import threading
from functools import lru_cache

def foo():
    return _cached_foo(process_id=os.getpid(), thread_id=threading.get_native_id())

@lru_cache
def _cached_foo(process_id: int, thread_id: int):
    return something_not_thread_safe()
</code></pre>
| <python><flask><plotly-dash><gunicorn> | 2023-07-11 02:07:05 | 0 | 2,657 | Javide |
76,658,168 | 4,732,175 | Will python mysql connection pool keep connections? | <p>If you use <code>mysql.connector.connect</code> to connect to a MySQL database, the connection is automatically disconnected after 8 hours. My question is: if I use a MySQL connection pool instead, does that mean I can always get a usable connection and no longer need to worry about the 8-hour limit?</p>
| <python><mysql> | 2023-07-11 01:10:41 | 1 | 11,212 | Zhang Buzz |
76,658,075 | 242,042 | How do I mock the pymysqlpool.ConnectionPool constructor? | <p>Similar to <a href="https://stackoverflow.com/questions/43354242/how-do-i-mock-part-of-a-python-constructor-just-for-testing">How do I mock part of a python constructor just for testing?</a> but explicitly trying to get <code>pymysqlpool.ConnectionPool</code> to work.</p>
<pre><code>import os
from unittest import TestCase, mock

from pymysqlpool import ConnectionPool


class DbTests(TestCase):
    @mock.patch('pymysqlpool.ConnectionPool', autospec=True)
    @mock.patch.dict(
        os.environ,
        {
            "DATASOURCES_0_SERVERID": "server1",
            "DATASOURCES_0_HOST": "non-existent",
            "DATASOURCES_0_PORT": "3307",
            "DATASOURCES_0_DATABASE": "lj_ca1",
            "DATASOURCES_0_USERNAME": "sampleuser",
            "DATASOURCES_0_PASSWORD": "password1",
            "DATASOURCES_0_TIMEZONE": "Americas/Toronto",
        },
    )
    def test_load(self, connection_pool_mock: mock.Mock):
        ConnectionPool(
            size=2, maxsize=3, pre_create_num=2, host=os.environ["DATASOURCES_0_HOST"]
        )
</code></pre>
<p>I'm expecting the code to simply work, but I am getting</p>
<blockquote>
<p>pymysql.err.OperationalError: (2003, "Can't connect to MySQL server on 'non-existent' ([Errno 11001] getaddrinfo failed)")</p>
</blockquote>
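One pattern that can produce exactly this symptom (a guess, since the import in the test module isn't shown): <code>mock.patch</code> must target the name where it is looked up, not where it is defined. If a module did <code>from pymysqlpool import ConnectionPool</code> before patching, the already-bound name still points at the real class. A self-contained demonstration with a stand-in module (no real MySQL involved):

```python
import sys
import types
from unittest import mock

# Build a stand-in module whose ConnectionPool "connects" eagerly, to mimic
# what pymysqlpool appears to do (hypothetical; the real library isn't used).
fake_lib = types.ModuleType("fakepool")

class _EagerPool:
    def __init__(self, **kwargs):
        raise OSError("would try to reach MySQL here")

fake_lib.ConnectionPool = _EagerPool
sys.modules["fakepool"] = fake_lib

import fakepool
pool_factory = fakepool.ConnectionPool   # name bound *before* patching

caught = None
with mock.patch("fakepool.ConnectionPool") as pool_mock:
    fakepool.ConnectionPool(size=2)      # looked up on the module: mocked, no error
    try:
        pool_factory(size=2)             # pre-patch reference: still "connects"
    except OSError as exc:
        caught = exc
```

So patching <code>'pymysqlpool.ConnectionPool'</code> only helps if the code under test also looks it up through the <code>pymysqlpool</code> module at call time.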
| <python><testing><mocking> | 2023-07-11 00:39:56 | 1 | 43,097 | Archimedes Trajano |
76,658,073 | 8,507,303 | Streaming the output of a subprocess to 2 or more clients | <p>I have a basic example here</p>
<pre><code>import subprocess

from flask import (
    Flask,
    Response,
)

app = Flask(__name__)
stream = None


@app.route("/play", methods=["GET"])
def channel():
    def createStream():
        global stream
        print("create stream")
        stream = subprocess.Popen(
            ffmpegcmd,
            stdin=subprocess.DEVNULL,
            stdout=subprocess.PIPE,
            stderr=subprocess.DEVNULL,
        )

    def streamData():
        print("stream data")
        try:
            while True:
                chunk = stream.stdout.read(1024)
                if len(chunk) == 0:
                    break
                yield chunk
        except:
            pass

    if not stream:
        link = "https://cph-p2p-msl.akamaized.net/hls/live/2000341/test/master.m3u8"
        ffmpegcmd = [
            "ffmpeg",
            "-re",
            "-i",
            link,
            "-map",
            "0",
            "-codec",
            "copy",
            "-f",
            "mpegts",
            "pipe:"
        ]
        createStream()
        return Response(streamData(), mimetype="application/octet-stream")
    else:
        return Response(streamData(), mimetype="application/octet-stream")


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8001, debug=True)
</code></pre>
<p>If 2 or more clients try to stream at the same time, all streams freeze. Closing all streams and re-requesting <code>/play</code> picks up the existing subprocess and it plays OK.</p>
<p>Does anyone understand what is happening and why it doesn't work?
Is this a bug or limitation of subprocesses?</p>
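If the cause is what it looks like (two generators reading the same <code>stdout</code> pipe, so each client receives only an interleaved subset of the chunks), one sketch of a fix is to fan the producer's output out to a per-client queue. Simplified, with no Flask or ffmpeg involved:

```python
import queue
import threading

clients = []                 # one Queue per connected client
clients_lock = threading.Lock()

def broadcaster(read_chunk):
    """Read from the single source and copy every chunk to every client."""
    while True:
        chunk = read_chunk(1024)
        if not chunk:
            break
        with clients_lock:
            for q in clients:
                q.put(chunk)
    with clients_lock:
        for q in clients:
            q.put(None)      # sentinel: tell each client the stream ended

def client_stream(q):
    """Per-client generator; each client drains only its own queue."""
    while True:
        chunk = q.get()
        if chunk is None:
            break
        yield chunk
```

In the Flask app, <code>broadcaster</code> would run once in a background thread reading <code>stream.stdout.read</code>, and each <code>/play</code> request would register a queue and return <code>Response(client_stream(q), ...)</code>.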
| <python><flask><ffmpeg><subprocess> | 2023-07-11 00:38:26 | 1 | 1,319 | Chris |
76,657,985 | 8,705,745 | Pandas apply function read in list horizontally as an input | <p>Is there a way for the <code>apply</code> function to read a list of values row-wise from the 3 columns ['A','B','C'] into a list 'x', along with the number from 'Val' as 'y', so the function can create a new column 'Result'?</p>
<p>I've presented the function <code>f</code> simplistically; I just need to know how to read a list/series into the function <code>f</code>.</p>
<pre><code>import pandas as pd

cols = ['Name', 'A', 'B', 'C', 'Type', 'Val']
data = [['Front', 1, 2, 3, 'Up', 11],
        ['Front', 4, 5, 6, 'Dw', 22]]
df = pd.DataFrame(data, columns=cols)

def f(x, y):
    return sum(x) * y
</code></pre>
<h2>not sure this is correct</h2>
<pre><code>df['Result'] = df.apply(lambda row: f(row[['A','B','C']],row['Val'],axis=1))
</code></pre>
<p>Initial Data:</p>
<pre><code> Name A B C Type Val
0 Front 1 2 3 Up 11
1 Front 4 5 6 Dw 22
</code></pre>
<p>Desired Result:</p>
<pre><code> Name A B C Type Val Result
0 Front 1 2 3 Up 11 66
1 Front 4 5 6 Dw 22 330
</code></pre>
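For reference, a corrected version of the attempt above: the closing parenthesis was misplaced, so <code>axis=1</code> was being passed to <code>f</code> instead of to <code>DataFrame.apply</code>:

```python
import pandas as pd

cols = ['Name', 'A', 'B', 'C', 'Type', 'Val']
data = [['Front', 1, 2, 3, 'Up', 11],
        ['Front', 4, 5, 6, 'Dw', 22]]
df = pd.DataFrame(data, columns=cols)

def f(x, y):
    return sum(x) * y

# axis=1 belongs to DataFrame.apply, outside the call to f
df['Result'] = df.apply(lambda row: f(row[['A', 'B', 'C']], row['Val']), axis=1)

# A fully vectorized alternative that avoids apply altogether:
df['Result2'] = df[['A', 'B', 'C']].sum(axis=1) * df['Val']
```

Both columns come out as [66, 330], matching the desired result.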
| <python><pandas><dataframe><apply> | 2023-07-11 00:03:23 | 2 | 323 | dingo |
76,657,887 | 11,028,689 | Error when training a logistic regression for multiclass classification in Pytorch | <p>I'm using this kaggle dataset of news articles (<a href="https://www.kaggle.com/datasets/rmisra/news-category-dataset" rel="nofollow noreferrer">https://www.kaggle.com/datasets/rmisra/news-category-dataset</a>), and have 7 classes:</p>
<pre><code>def news_data():
    # load embeddings
    with open('embeddings_v1.pkl', "rb") as fIn:
        stored_data = pickle.load(fIn)
        stored_sentences = stored_data['sentences']
        stored_embeddings = stored_data['embeddings']

    x = stored_embeddings
    x = torch.tensor(x).float()

    # load labels
    with open('.../News_Category_Dataset_v3.json', 'r') as f:
        jdata = f.read()
    jdata2 = [json.loads(line) for line in jdata.split('\n') if line]
    df = pd.DataFrame.from_records(jdata2)
    label_dict = {'CRIME':0, 'BUSINESS':1, 'SPORTS':2, 'WEDDINGS':3, 'DIVORCE':4, 'PARENTING':5}
    df['label'] = df['category'].map(label_dict).fillna(6).astype(int)
    y = df['label']
    y = torch.tensor(y).float().unsqueeze(1)

    return split_train_test(x, y)
############# Data summary #############
x_train has shape: torch.Size([167622, 384])
y_train has shape: torch.Size([167622, 1])
x_test has shape: torch.Size([41905, 384])
y_test has shape: torch.Size([41905, 1])
#######################################
</code></pre>
<p>I am trying to implement a logistic regression model for multiclass classification in pytorch:</p>
<pre><code>class LR(torch.nn.Module):
    def __init__(self, n_features, n_outputs):
        super(LR, self).__init__()
        self.lr = torch.nn.Linear(n_features, n_outputs)

    def forward(self, x):
        out = torch.sigmoid(self.lr(x))
        return out

model = LR(n_features, n_outputs)

# use gradient descent with a learning_rate=0.01
optim = torch.optim.SGD(model.parameters(), lr=0.01)
# use Cross Entropy Loss
criterion = torch.nn.CrossEntropyLoss()

# instantiate the model
n_features = 384
n_outputs = 7

# train the model
EPOCHS = 6

def train(model, optim, criterion, x, y, epochs=EPOCHS):
    for e in range(1, epochs + 1):
        optim.zero_grad()
        out = model(x)
        loss = criterion(out, y)
        loss.backward()
        optim.step()
        print(f"Loss at epoch {e}: {loss.data}")
    return model

model = train(model, optim, criterion, x_train, y_train)
</code></pre>
<p>I run into this error,</p>
<pre><code>---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[15], line 15
12 print(f"Loss at epoch {e}: {loss.data}")
13 return model
---> 15 model = train(model, optim, criterion, x_train, y_train)
Cell In[15], line 9, in train(model, optim, criterion, x, y, epochs)
7 optim.zero_grad()
8 out = model(x)
----> 9 loss = criterion(out, y)
10 loss.backward()
11 optim.step()
File ~\anaconda3\lib\site-packages\torch\nn\modules\module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
File ~\anaconda3\lib\site-packages\torch\nn\modules\loss.py:1174, in CrossEntropyLoss.forward(self, input, target)
1173 def forward(self, input: Tensor, target: Tensor) -> Tensor:
-> 1174 return F.cross_entropy(input, target, weight=self.weight,
1175 ignore_index=self.ignore_index, reduction=self.reduction,
1176 label_smoothing=self.label_smoothing)
File ~\anaconda3\lib\site-packages\torch\nn\functional.py:3029, in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)
3027 if size_average is not None or reduce is not None:
3028 reduction = _Reduction.legacy_get_string(size_average, reduce)
-> 3029 return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: 0D or 1D target tensor expected, multi-target not supported
</code></pre>
<p>What am I doing wrong, and what do I need to correct in my code?</p>
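A minimal sketch of the shape/dtype fix the traceback points at (my reading of the error, not a verified answer): <code>CrossEntropyLoss</code> expects raw logits of shape <code>(N, C)</code> and integer class targets of shape <code>(N,)</code> with dtype <code>long</code>, not a float <code>(N, 1)</code> column, and no sigmoid on the outputs:

```python
import torch

n_features, n_outputs = 384, 7

model = torch.nn.Linear(n_features, n_outputs)   # outputs raw logits, no sigmoid
criterion = torch.nn.CrossEntropyLoss()

x = torch.randn(8, n_features)
y = torch.randint(0, n_outputs, (8, 1)).float()  # shaped like the question's y_train

targets = y.squeeze(1).long()                    # (8, 1) float -> (8,) long
loss = criterion(model(x), targets)              # no shape error now
loss.backward()
```

Applied to the code above, that would mean dropping the <code>unsqueeze(1)</code> and <code>.float()</code> on <code>y</code> and removing the <code>torch.sigmoid</code> from <code>forward</code>, since <code>CrossEntropyLoss</code> applies log-softmax internally.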
| <python><machine-learning><pytorch><neural-network> | 2023-07-10 23:28:34 | 0 | 1,299 | Bluetail |
76,657,822 | 4,463,825 | Issues with np.log | <p>I have a NumPy array that was derived from pandas.
I am trying to apply np.log (natural logarithm) to each of its elements and it is giving me the error.</p>
<p><code>AttributeError: 'float' object has no attribute 'log'</code></p>
<p>The array looks something like this.</p>
<pre><code>[5.810785984999995 5.666261181666755 5.577470475833309 7.967268425833254
8.298006562222156 8.974100307777746 8.553072009444406 9.059574381388813
9.055145143654158 8.770924936944482 8.52566836194444 8.21766430611109]
</code></pre>
<p>The array came from a pandas dataframe using the following code: (just for reference as per requested in comments)</p>
<pre><code>flag = df.iloc[0:12,7].to_numpy()
</code></pre>
<p>The error is happening when I try</p>
<pre><code>print (np.log(flag))
</code></pre>
<p>However when I try something like</p>
<pre><code>a = np.array([1.35,2.49,3.687])
print (np.log(a))
</code></pre>
<p>It works fine. These are still float datatypes? So I am unable to figure out what the issue is, and how I can remedy it.</p>
<p>At the end of the day I am looking to get the natural logarithm of my array.</p>
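A reproduction of the symptom together with the usual fix (my assumption: the DataFrame column has <code>object</code> dtype, so <code>.to_numpy()</code> yields an object array and <code>np.log</code> falls back to looking for a <code>.log()</code> method on each element):

```python
import numpy as np
import pandas as pd

s = pd.Series([5.810785984999995, 5.666261181666755], dtype=object)
flag = s.to_numpy()             # object-dtype array of Python floats

try:
    np.log(flag)                # numpy looks for a .log() method per element
except (AttributeError, TypeError) as exc:
    print(exc)                  # the AttributeError from the question (or a
                                # TypeError wrapping it, on newer NumPy)

flag = flag.astype(np.float64)  # cast to a real float dtype first
print(np.log(flag))             # now works
```

So <code>df.iloc[0:12, 7].to_numpy(dtype=np.float64)</code> (or <code>.astype(float)</code> on the slice) should make <code>np.log</code> behave like it does on the hand-built array.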
| <python><numpy><natural-logarithm> | 2023-07-10 23:09:20 | 1 | 993 | Jesh Kundem |
76,657,779 | 9,582,542 | Finding a specific span element with a class using Selenium | <p>The html below is stored in a selenium WebDriver variable</p>
<pre><code><td colspan="2" class="pi_mailing_address"><strong>Mailing Address</strong>
<div>
<span class="ng-binding">1236 NW 167 ST </span>
<span ng-show="property.mailingAddress.address2" class="ng-binding ng-hide">STE #</span>
<span ng-show="property.mailingAddress.address3" class="ng-binding ng-hide">789</span>
<span ng-show="property.mailingAddress.city" ng-class="{'inline':property.mailingAddress.city}" class="ng-binding inline">OPA LOCKA,</span>
<span class="inline ng-binding">FL</span>
<span class="inline ng-binding">33055-4314</span>
<span ng-hide="isCountryUSA(property.mailingAddress.country)" class="ng-binding"></span>
</div>
</td>
</code></pre>
<p>When I run this</p>
<pre><code>for elem in driver.find_elements_by_xpath('.//span[@class = "ng-binding"]'):
print(elem.text)
</code></pre>
<p>I get 7 values.</p>
<p>I want the 4th value, which is: <strong>1236 NW 167 ST</strong></p>
<p>How would I use the DOM hierarchy to extract only the address items so I can assign them to variables?</p>
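Outside the browser, the selection logic can be sketched with the standard library on a trimmed, well-formed copy of the markup above: keep only the spans whose class list lacks <code>ng-hide</code>, which leaves the four visible address parts. In Selenium, the equivalent XPath would be <code>.//span[contains(@class, "ng-binding") and not(contains(@class, "ng-hide"))]</code>, scoped under the <code>td.pi_mailing_address</code> element.

```python
import xml.etree.ElementTree as ET

# A trimmed copy of the markup above (ng-show/ng-class attributes removed)
html = """<td colspan="2" class="pi_mailing_address"><strong>Mailing Address</strong>
<div>
<span class="ng-binding">1236 NW 167 ST </span>
<span class="ng-binding ng-hide">STE #</span>
<span class="ng-binding ng-hide">789</span>
<span class="ng-binding inline">OPA LOCKA,</span>
<span class="inline ng-binding">FL</span>
<span class="inline ng-binding">33055-4314</span>
</div>
</td>"""

root = ET.fromstring(html)
# Keep only spans that are not marked hidden
visible = [s.text for s in root.iter("span")
           if "ng-hide" not in s.get("class", "")]
print(visible)   # street, city, state, zip
```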
| <python><selenium-webdriver><xpath><css-selectors><webdriver> | 2023-07-10 22:55:14 | 1 | 690 | Leo Torres |
76,657,416 | 1,171,746 | Is it possible to get a property's fully qualified name when only the property is passed into a function? | <p>I am trying to get the fully qualified name from a property.</p>
<p>Example class</p>
<pre class="lang-py prettyprint-override"><code>class Foo:
    def __init__(self, val: int):
        self._val = val

    @property
    def myval(self) -> int:
        return self._val
</code></pre>
<p>I am attempting to get the fully qualified name of different objects.
I am aiming to get a result like <code>ooolib.foo.Foo.myval</code> (module, class, attribute).</p>
<p>The idea is that I want to create a script that can do something like the following.</p>
<pre class="lang-py prettyprint-override"><code>>>> from ooolib.helper import hlp
>>> from ooolib.foo import Foo
>>>
>>> hlp(Foo.myval)
Launching help for "ooolib.foo.Foo.myval" at "https://ooolib.help.example.com/src/ooolib/foo/Foo#myval"
</code></pre>
<p>I have a backend that contains all the fully qualified names and their respective help page links. I want to make it simple for user to look up help in a python interactive console.</p>
<p>My goal is to have users be able to get help for any part of the library by typing the actual objects.
In the interactive console, tab completion is enabled, so it makes sense to me to do it this way.</p>
<pre class="lang-py prettyprint-override"><code>>>> hlp(ooolib.bar.Bar.mymethod)
>>> hlp(ooolib.bar.Bar.some_property)
>>> hlp(ooolib.bar.Bar.some_attr)
>>> hlp(ooolib.foo.Foo.__init__)
>>> hlp(ooolib.foo)
</code></pre>
<p>Is this possible, or just a lofty goal on my part?</p>
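A sketch of one approach (the <code>fqname</code> helper is hypothetical): a <code>property</code> object's getter carries <code>__module__</code> and <code>__qualname__</code>, which together give the <code>module.Class.attr</code> form. This covers properties and functions/methods, though not plain data attributes, which carry no such metadata:

```python
class Foo:
    def __init__(self, val: int):
        self._val = val

    @property
    def myval(self) -> int:
        return self._val


def fqname(obj) -> str:
    # Accessing Foo.myval on the *class* yields the property object itself
    if isinstance(obj, property):
        obj = obj.fget                    # unwrap the getter function
    return f"{obj.__module__}.{obj.__qualname__}"


print(fqname(Foo.myval))                  # e.g. '__main__.Foo.myval'
print(fqname(Foo.__init__))               # e.g. '__main__.Foo.__init__'
```

Note the call must be <code>hlp(Foo.myval)</code> on the class; <code>hlp(instance.myval)</code> would evaluate the property and pass in a plain <code>int</code>.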
| <python><python-inspect> | 2023-07-10 21:20:36 | 1 | 327 | Amour Spirit |
76,657,355 | 2,177,312 | sqlalchemy `A transaction is already begun on this Session` error when beginning transaction in fastapi app | <p>In my fastapi app I'm adding <code>db_session</code> to <code>request.state</code> within middleware like below:</p>
<pre class="lang-py prettyprint-override"><code>import logging

from fastapi import FastAPI, Request, Response
from sqlalchemy.exc import SQLAlchemyError
from sqlalchemy.ext.asyncio import async_sessionmaker, AsyncSession, create_async_engine
from starlette.middleware import Middleware
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.middleware.cors import CORSMiddleware
from starlette.middleware.trustedhost import TrustedHostMiddleware

# get_settings and MeasureRequestProcessingTime are defined elsewhere in the app
logger = logging.getLogger(__name__)

# https://docs.sqlalchemy.org/en/20/orm/extensions/asyncio.html#sqlalchemy.ext.asyncio.create_async_engine
engine = create_async_engine("<my postgres url>", future=True, pool_pre_ping=True, echo=True)

# https://docs.sqlalchemy.org/en/20/orm/session_api.html#sqlalchemy.orm.sessionmaker.__init__
AsyncSessionFactory = async_sessionmaker(bind=engine, autoflush=False, expire_on_commit=False, future=True, class_=AsyncSession)


class AddDBSessionToRequest(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        async with AsyncSessionFactory() as db_session:
            try:
                request.state.db_session = db_session
                print(f"Middleware transaction status: {db_session.in_transaction()}")  # prints False
                response = await call_next(request)
            except SQLAlchemyError:
                logger.exception("SQLAlchemy error when processing '%s %s'", request.method, request.url)
                response = Response("Internal server error", status_code=500)
            return response


middleware = [
    Middleware(
        CORSMiddleware,
        allow_origins=get_settings().cors_origins,
        allow_credentials=True,
        allow_methods=["*"],
        allow_headers=["*"],
    ),
    Middleware(TrustedHostMiddleware, allowed_hosts=get_settings().allowed_hosts),
    Middleware(MeasureRequestProcessingTime),
    Middleware(AddDBSessionToRequest),
]

app = FastAPI(middleware=middleware)


@app.post("/test")
async def test_view(request: Request):
    db_session = request.state.db_session
    print(f"view transaction status: {db_session.in_transaction()}")  # prints True
    async with db_session.begin():
        # do something
        pass
</code></pre>
<p>But above fails with error message: <code>sqlalchemy.exc.InvalidRequestError: A transaction is already begun on this Session.</code></p>
<p>I'm trying to understand where the transaction is started, or what is starting it...</p>
<p>Full stack trace:</p>
<pre><code>Traceback (most recent call last):
File "/opt/venv/lib/python3.11/site-packages/anyio/streams/memory.py", line 94, in receive
return self.receive_nowait()
^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.11/site-packages/anyio/streams/memory.py", line 89, in receive_nowait
raise WouldBlock
anyio.WouldBlock
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/venv/lib/python3.11/site-packages/starlette/middleware/base.py", line 78, in call_next
message = await recv_stream.receive()
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.11/site-packages/anyio/streams/memory.py", line 114, in receive
raise EndOfStream
anyio.EndOfStream
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/test/app.py", in dispatch
response = await call_next(request)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.11/site-packages/starlette/middleware/base.py", line 84, in call_next
raise app_exc
File "/opt/venv/lib/python3.11/site-packages/starlette/middleware/base.py", line 70, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "/opt/venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/opt/venv/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/opt/venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/opt/venv/lib/python3.11/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/opt/venv/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/opt/venv/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/opt/venv/lib/python3.11/site-packages/starlette/routing.py", line 66, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.11/site-packages/fastapi/routing.py", line 237, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.11/site-packages/fastapi/routing.py", line 163, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/test/app.py", in test_view
async with db_session.begin():
File "/opt/venv/lib/python3.11/site-packages/sqlalchemy/ext/asyncio/base.py", line 127, in __aenter__
return await self.start(is_ctxmanager=True) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.11/site-packages/sqlalchemy/ext/asyncio/session.py", line 1769, in start
await greenlet_spawn(
File "/opt/venv/lib/python3.11/site-packages/sqlalchemy/util/_concurrency_py3k.py", line 179, in greenlet_spawn
result = context.switch(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 1811, in begin
raise sa_exc.InvalidRequestError(
sqlalchemy.exc.InvalidRequestError: A transaction is already begun on this Session.
</code></pre>
<p>I tested with plain async code like below, which is not showing that error:</p>
<pre class="lang-py prettyprint-override"><code># continuation of first example
import asyncio

async def another_test():
    async with AsyncSessionFactory() as db_session:
        async with db_session.begin():
            # do something
            pass

loop = asyncio.get_event_loop()
loop.run_until_complete(another_test())
</code></pre>
<p>I only get that sqlalchemy error when running in fastapi, but not when running in a plain async function. Please help me understand why sqlalchemy is throwing the error in one case but not the other.</p>
<p>I found <a href="https://stackoverflow.com/questions/39277841/how-do-i-prevent-sqlalchemy-from-creating-a-transaction-on-select">this question</a> which suggests using <code>begin_nested</code> instead of <code>begin</code>... and this is probably what I will do.</p>
<p>I also found <a href="https://github.com/sqlalchemy/sqlalchemy/discussions/6921" rel="nofollow noreferrer">this github thread</a> where <code>autobegin</code> arg is mentioned. Though I will stick with <code>begin_nested</code>.</p>
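For anyone else hitting this, a minimal sync/SQLite sketch of the "autobegin" behaviour mentioned in that discussion (my illustration, with the async/FastAPI layers stripped away): any statement executed on the session starts an implicit transaction, after which an explicit <code>session.begin()</code> raises exactly this error:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.exc import InvalidRequestError
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")

with Session(engine) as session:
    before = session.in_transaction()      # False: nothing has run yet
    session.execute(text("SELECT 1"))      # any statement "autobegins"
    after = session.in_transaction()       # True

    error = None
    try:
        with session.begin():              # same failure mode as the traceback
            pass
    except InvalidRequestError as exc:
        error = exc
    print(before, after, error)
```

So the question reduces to finding which code path touches the session between the middleware check and the view, which is why <code>in_transaction()</code> flips from False to True.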
| <python><sqlalchemy><python-asyncio><fastapi> | 2023-07-10 21:09:24 | 0 | 1,069 | Greg0ry |
76,657,345 | 4,873,380 | Python Versions -- Python3 vs Python and pyenv | <p>When I type</p>
<pre><code>python3 --version
</code></pre>
<p>I get</p>
<pre><code>Python 3.11.4
</code></pre>
<p>When I type</p>
<pre><code>pyenv versions
</code></pre>
<p>I get</p>
<pre><code> system
3.7.17
3.9.17
3.10.12
</code></pre>
<p>Why the discrepancy? I want to use 3.9.0.</p>
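If I understand pyenv's model correctly (an assumption): the 3.11.4 reported by <code>python3</code> is the <code>system</code> interpreter that pyenv falls back to, and 3.9.0 simply isn't installed under pyenv yet, so something like the following would be needed:

```shell
# Hypothetical commands; assumes pyenv's shims are already on PATH.
pyenv install 3.9.0     # 3.9.0 is not in `pyenv versions`, so install it first
pyenv global 3.9.0      # or `pyenv local 3.9.0` inside a single project
python3 --version       # should now resolve through the shim to Python 3.9.0
```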
| <python><python-3.x><pyenv> | 2023-07-10 21:07:09 | 2 | 14,309 | Asool |
76,657,318 | 2,531,569 | Cast String field to datetime64[ns] in parquet file using pandas-on-spark | <p>My input is a parquet file that I need to recast as below:</p>
<pre><code>df=spark.read.parquet("input.parquet")
psdf=df.to_pandas_on_spark()
psdf['reCasted'] = psdf['col1'].astype('float64')
psdf['reCasted'] = psdf['col2'].astype('int32')
psdf['reCasted'] = psdf['col3'].astype('datetime64[ns]')
</code></pre>
<p>In the above code I am able to convert <code>col1</code> into <code>float64</code> and <code>col2</code> into <code>int32</code>. But when I try to convert <code>col3</code> into <code>datetime64[ns]</code>, I get <code>NaT</code> as the recasted value. Note that <code>col3</code> is originally a String which I am trying to convert to <code>datetime64[ns]</code>.</p>
<p>I can do this recasting using Pandas as below:</p>
<pre><code>psdf['reCasted'] = pd.to_datetime(psdf['col3'], format='%Y-%m-%d')
</code></pre>
<p>But I don't want to use Pandas as the process is taking time. I want to use pandas_on_spark only. What can I try next?</p>
| <python><pandas><pyspark-pandas> | 2023-07-10 21:01:36 | 1 | 629 | user2531569 |
76,657,254 | 4,236,951 | Wrap legend text in altair | <p>I have the following example where I'm trying to plot 4 points. When I wrap the text for the point legend labels the result is only 1 point for each category (2 points total).</p>
<p>Any help figuring out how to wrap legend text without losing data would be greatly appreciated.</p>
<pre><code>import pandas as pd
import altair as alt
import textwrap

# Example DataFrame
df = pd.DataFrame({'point': ['a', 'b', 'c', 'd'],
                   'label': ['My super long label that is too long',
                             'Short label',
                             'My super long label that is too long',
                             'Short label'],
                   'x': [1, 2, 3, 4],
                   'y': [1, 2, 3, 4]})

# Wrap text in the 'label' column
df['label_wrapped'] = df['label'].apply(textwrap.wrap, args=[15])

chart = alt.Chart(df).mark_circle().encode(
    x='x',
    y='y',
    color=alt.Color('label_wrapped:N', title='Label'),
)
chart
</code></pre>
<p><a href="https://i.sstatic.net/ZP4b5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZP4b5.png" alt="enter image description here" /></a></p>
| <python><altair> | 2023-07-10 20:50:27 | 1 | 631 | Stephen Williams |
76,657,206 | 18,739,908 | How to work with a pdf form in Langchain? | <p>I have a pdf file that is questionnaire. There is text that cannot be changed which are the questions and then text boxes with the answers. When I run this simple code:</p>
<pre><code>from langchain.document_loaders import PyPDFLoader
loader = PyPDFLoader("files/blo-file.pdf")
pages = loader.load_and_split()
pages
</code></pre>
<p>It doesn't include any of the information that was filled out. It only has the questions. How can I also get the answers to the questions? Thanks.</p>
| <python><openai-api><langchain><chatgpt-api><py-langchain> | 2023-07-10 20:42:08 | 0 | 494 | Cole |
76,657,176 | 11,370,582 | Add string Column B where string exists in Column A + np.where() + pandas | <p>I need to add a secondary ID in a dataset with unique entries for multiple individuals. To do so I am trying to use <code>np.where()</code>, but after implementing it I realized that each call overwrites the whole column, so only the last condition's result survives. This is an example of the original approach:</p>
<pre><code>df = pd.DataFrame({'Example':['1','2','3','4']})
df['Add'] = ''
df['Add'] = np.where(df['Example']== '1', 'One','')
df['Add'] = np.where(df['Example']== '2', 'Two','')
df['Add'] = np.where(df['Example']== '3', 'Three','')
df['Add'] = np.where(df['Example']== '4', 'Four','')
df.head()
</code></pre>
<p>As a workaround I tried adding <code>str.contains('')</code>, thinking that would evaluate <code>True</code> when the string is empty and only insert the new string in that case. As below:</p>
<pre><code>df = pd.DataFrame({'Example':['1','2','3','4']})
df['Add'] = ''
df['Add'] = np.where(df['Example'].str.contains('')== '1', 'One','')
df['Add'] = np.where(df['Example'].str.contains('')== '2', 'Two','')
df['Add'] = np.where(df['Example'].str.contains('')== '3', 'Three','')
df['Add'] = np.where(df['Example'].str.contains('')== '4', 'Four','')
df.head()
</code></pre>
<p>In that instance everything is being filled with an empty string...</p>
<p>Is there a simple method to check if a cell is empty before writing with <code>np.where()</code>?</p>
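<p>For reference, a single-pass sketch: each <code>np.where</code> call rebuilds the whole column (wiping earlier results), whereas <code>np.select</code> or <code>Series.map</code> evaluates all the cases at once:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Example': ['1', '2', '3', '4']})
mapping = {'1': 'One', '2': 'Two', '3': 'Three', '4': 'Four'}

# np.select checks every condition in one pass, so nothing is overwritten.
conditions = [df['Example'] == k for k in mapping]
df['Add'] = np.select(conditions, list(mapping.values()), default='')
print(df['Add'].tolist())  # ['One', 'Two', 'Three', 'Four']
```

<p>Simpler still for a plain lookup table: <code>df['Add'] = df['Example'].map(mapping).fillna('')</code>.</p>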
| <python><pandas><string><dataframe><numpy> | 2023-07-10 20:37:14 | 1 | 904 | John Conor |
76,657,118 | 14,380,704 | Pandas Multi-Index with multiple conditions | <p>I applied the .loc methodology discussed in <a href="https://stackoverflow.com/questions/53927460/select-rows-in-pandas-multiindex-dataframe">Select rows in pandas MultiIndex DataFrame</a> because I was receiving a KeyError: 'class' even though 'class' exists in my dataframe as what I thought was a column name. It turned out that 'class' is one of two levels in a MultiIndex ('group' being the second). While .loc lets me select the rows for 'First', 'Second' and 'Third', I'm struggling to apply the additional condition that excludes rows where the second index level ('group') is blank.</p>
<p>Current dataframe looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">class</th>
<th style="text-align: center;">group</th>
<th style="text-align: right;">Column1</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">First</td>
<td style="text-align: center;">A</td>
<td style="text-align: right;">123</td>
</tr>
<tr>
<td style="text-align: left;">First</td>
<td style="text-align: center;"></td>
<td style="text-align: right;"></td>
</tr>
<tr>
<td style="text-align: left;">Second</td>
<td style="text-align: center;">B</td>
<td style="text-align: right;">123</td>
</tr>
<tr>
<td style="text-align: left;">Third</td>
<td style="text-align: center;">C</td>
<td style="text-align: right;">123</td>
</tr>
<tr>
<td style="text-align: left;">Forth</td>
<td style="text-align: center;">D</td>
<td style="text-align: right;">123</td>
</tr>
</tbody>
</table>
</div>
<p>Current code looks like this:</p>
<pre><code>keep_rows = df.loc[['First','Second','Third']]
</code></pre>
<p>My original code looked like this (it was throwing the KeyError because the referenced names are index levels, not column names):</p>
<pre><code>keep_rows = df[(df['class'].isin(['First','Second','Third'])) & (df['group'].isna())]
</code></pre>
<p>Desired dataframe:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">class</th>
<th style="text-align: center;">group</th>
<th style="text-align: right;">Column1</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">First</td>
<td style="text-align: center;">A</td>
<td style="text-align: right;">123</td>
</tr>
<tr>
<td style="text-align: left;">Second</td>
<td style="text-align: center;">B</td>
<td style="text-align: right;">123</td>
</tr>
<tr>
<td style="text-align: left;">Third</td>
<td style="text-align: center;">C</td>
<td style="text-align: right;">123</td>
</tr>
</tbody>
</table>
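<p>A sketch of one way to express both conditions against the MultiIndex (assuming the blank <code>group</code> cells are NaN; the data below mirrors the tables above):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'class': ['First', 'First', 'Second', 'Third', 'Forth'],
    'group': ['A', np.nan, 'B', 'C', 'D'],
    'Column1': [123, np.nan, 123, 123, 123],
}).set_index(['class', 'group'])

# Index levels are not columns, so filter them via get_level_values.
mask = (df.index.get_level_values('class').isin(['First', 'Second', 'Third'])
        & df.index.get_level_values('group').notna())
keep_rows = df[mask]
print(keep_rows.index.get_level_values('group').tolist())  # ['A', 'B', 'C']
```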
</div> | <python><pandas><dataframe> | 2023-07-10 20:26:51 | 1 | 307 | 2020db9 |
76,657,099 | 3,215,940 | Apply minmax_scale to all columns in polars data frame | <p>I am trying to follow advice <a href="https://stackoverflow.com/questions/67834912/apply-function-to-all-columns-of-a-polars-dataframe">from this question</a></p>
<pre><code>df = pl.DataFrame({'a':[1, 2, 3], 'b':[4,5,6]})
df.select([pl.all().map(np.log2)])
shape: (3, 2)
┌──────────┬──────────┐
│ a        ┆ b        │
│ ---      ┆ ---      │
│ f64      ┆ f64      │
╞══════════╪══════════╡
│ 0.0      ┆ 2.0      │
│ 1.0      ┆ 2.321928 │
│ 1.584963 ┆ 2.584963 │
└──────────┴──────────┘
</code></pre>
<p>So far, so good. But:</p>
<pre><code>from sklearn.preprocessing import minmax_scale
>>> df.select(pl.all().map(minmax_scale))
shape: (1, 2)
┌─────────────────┬─────────────────┐
│ a               ┆ b               │
│ ---             ┆ ---             │
│ list[f64]       ┆ list[f64]       │
╞═════════════════╪═════════════════╡
│ [0.0, 0.5, 1.0] ┆ [0.0, 0.5, 1.0] │
└─────────────────┴─────────────────┘
</code></pre>
<p>I found a way of converting the <code>pl.List</code> back, but it seems strange that this step is needed.</p>
<pre><code>df.select(pl.all().map(minmax_scale)).explode(pl.all())
shape: (3, 2)
┌─────┬─────┐
│ a   ┆ b   │
│ --- ┆ --- │
│ f64 ┆ f64 │
╞═════╪═════╡
│ 0.0 ┆ 0.0 │
│ 0.5 ┆ 0.5 │
│ 1.0 ┆ 1.0 │
└─────┴─────┘
</code></pre>
<p>Both <code>minmax_scale</code> and <code>np.log2</code> return arrays, so I would expect the behavior to be the same. What is the proper way of doing this?</p>
| <python><python-polars> | 2023-07-10 20:24:28 | 2 | 4,270 | Matias Andina |
76,657,081 | 5,790,653 | python How to iterate over two files and grep only one line before the match | <p>This is <code>names.txt</code>:</p>
<pre><code>David
Mary
Rose
Saeed
</code></pre>
<p>This is <code>emails.txt</code>:</p>
<pre><code> - address1@gmail.com
- Mary
- address2@gmail.com
- Rose
- address3@hotmail.com
- David
- address4@yahoo.com
- Saeed
- address5@gmail.com
- Jones
</code></pre>
<p>In <code>emails.txt</code> there are more emails than <code>names.txt</code>.</p>
<p>For each name in <code>names.txt</code>, I want to <code>grep</code> <code>emails.txt</code> for that name and print only the one line before the match.</p>
<p>For example, to <code>grep</code> the name <code>David</code> and find its email <code>address3@hotmail.com</code>.</p>
<p>This is my python code up to now (it only finds a match for the first name in <code>names.txt</code>):</p>
<pre><code>import re

with open('names.txt', 'r') as file:
    with open('emails.txt', 'r') as emails:
        for name in file:
            for email in emails:
                if re.search(name, email):
                    print(email)
</code></pre>
<p>I'll update the question when I find something new to get closer to the answer.</p>
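<p>A sketch of one way to avoid grep-style rescanning: read <code>emails.txt</code> once, pair each name line with the line directly above it, then look the names up in a dict. (Shown on inline strings so it is self-contained; with the real files you would read them with <code>open()</code>. Note that the original loop stalls because the inner file iterator is exhausted after the first name.)</p>

```python
names_text = """David
Mary
Rose
Saeed"""

emails_text = """ - address1@gmail.com
 - Mary
 - address2@gmail.com
 - Rose
 - address3@hotmail.com
 - David
 - address4@yahoo.com
 - Saeed
 - address5@gmail.com
 - Jones"""

# Strip the "- " bullets, then map each name to the line directly above it.
lines = [line.strip().lstrip('- ') for line in emails_text.splitlines()]
email_for = {lines[i]: lines[i - 1] for i in range(1, len(lines), 2)}

for name in names_text.splitlines():
    print(name, '->', email_for.get(name, 'not found'))
```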
| <python> | 2023-07-10 20:21:46 | 1 | 4,175 | Saeed |
76,656,999 | 3,551,443 | Error when installing package.json after Python | <p>I'm trying to run <code>npm install</code> as <em>root</em> for my project. The first time, it said that the Python path was incorrect.</p>
<p>I reinstalled Python to a newer version, but when I run <code>npm install</code>, it doesn't work. I'm getting errors and I don't know what to do.</p>
<pre class="lang-none prettyprint-override"><code>6540 verbose npm v8.1.2
6541 error code 1
6542 error path C:\wamp\www\xxxxxx\node_modules\node-sass
6543 error command failed
6544 error command C:\Windows\system32\cmd.exe /d /s /c node scripts/build.js
6545 error Building: C:\Program Files\nodejs\node.exe C:\wamp\www\xxxxxx\node_modules\node-gyp\bin\node-gyp.js rebuild --verbose --libsass_ext= --libsass_cflags= --libsass_ldflags= --libsass_library=
6546 error gyp info it worked if it ends with ok
6546 error gyp verb cli [
6546 error gyp verb cli 'C:\\Program Files\\nodejs\\node.exe',
6546 error gyp verb cli 'C:\\wamp\\www\\xxxxxx\\node_modules\\node-gyp\\bin\\node-gyp.js',
6546 error gyp verb cli 'rebuild',
6546 error gyp verb cli '--verbose',
6546 error gyp verb cli '--libsass_ext=',
6546 error gyp verb cli '--libsass_cflags=',
6546 error gyp verb cli '--libsass_ldflags=',
6546 error gyp verb cli '--libsass_library='
6546 error gyp verb cli ]
6546 error gyp info using node-gyp@3.8.0
6546 error gyp info using node@16.13.1 | win32 | x64
6546 error gyp verb command rebuild []
6546 error gyp verb command clean []
6546 error gyp verb clean removing "build" directory
6546 error gyp verb command configure []
6546 error gyp verb check python checking for Python executable "C:\Python311\python.exe" in the PATH
6546 error gyp verb `which` succeeded C:\Python311\python.exe C:\Python311\python.exe
6546 error gyp ERR! configure error
6546 error gyp ERR! stack Error: Command failed: C:\Python311\python.exe -c import sys; print "%s.%s.%s" % sys.version_info[:3];
6546 error gyp ERR! stack File "<string>", line 1
6546 error gyp ERR! stack import sys; print "%s.%s.%s" % sys.version_info[:3];
6546 error gyp ERR! stack ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
6546 error gyp ERR! stack SyntaxError: Missing parentheses in call to 'print'. Did you mean print(...)?
6546 error gyp ERR! stack
6546 error gyp ERR! stack at ChildProcess.exithandler (node:child_process:397:12)
6546 error gyp ERR! stack at ChildProcess.emit (node:events:390:28)
6546 error gyp ERR! stack at maybeClose (node:internal/child_process:1064:16)
6546 error gyp ERR! stack at Process.ChildProcess._handle.onexit (node:internal/child_process:301:5)
6546 error gyp ERR! System Windows_NT 10.0.19045
6546 error gyp ERR! command "C:\\Program Files\\nodejs\\node.exe" "C:\\wamp\\www\\xxxxxx\\node_modules\\node-gyp\\bin\\node-gyp.js" "rebuild" "--verbose" "--libsass_ext=" "--libsass_cflags=" "--libsass_ldflags=" "--libsass_library="
6546 error gyp ERR! cwd C:\wamp\www\xxxxxx\node_modules\node-sass
6546 error gyp ERR! node -v v16.13.1
6546 error gyp ERR! node-gyp -v v3.8.0
6546 error gyp ERR! not ok
6546 error Build failed with error code: 1
6547 verbose exit 1
</code></pre>
<p><em><strong>Update</strong></em>:</p>
<p>I followed <a href="https://stackoverflow.com/questions/76656999/error-when-installing-package-json-after-python/76657244#76657244">the answer from user staocube</a>, but I still got an error when I ran <code>npm install</code>.</p>
<p>I don’t understand what’s wrong.</p>
<pre class="lang-none prettyprint-override"><code> npm ERR! code 1
npm ERR! path C:\wamp\www\xxxxx\node_modules\node-sass
npm ERR! command failed
npm ERR! command C:\Windows\system32\cmd.exe /d /s /c node scripts/build.js
npm ERR! Building: C:\Program Files\nodejs\node.exe C:\wamp\www\xxxxx\node_modules\node-gyp\bin\node-gyp.js rebuild --verbose --libsass_ext= --libsass_cflags= --libsass_ldflags= --libsass_library=
npm ERR! gyp info it worked if it ends with ok
npm ERR! gyp verb cli [
npm ERR! gyp verb cli 'C:\\Program Files\\nodejs\\node.exe',
npm ERR! gyp verb cli 'C:\\wamp\\www\\xxxxx\\node_modules\\node-gyp\\bin\\node-gyp.js',
npm ERR! gyp verb cli 'rebuild',
npm ERR! gyp verb cli '--verbose',
npm ERR! gyp verb cli '--libsass_ext=',
npm ERR! gyp verb cli '--libsass_cflags=',
npm ERR! gyp verb cli '--libsass_ldflags=',
npm ERR! gyp verb cli '--libsass_library='
npm ERR! gyp verb cli ]
npm ERR! gyp info using node-gyp@3.8.0
npm ERR! gyp info using node@16.13.1 | win32 | x64
npm ERR! gyp verb command rebuild []
npm ERR! gyp verb command clean []
npm ERR! gyp verb clean removing "build" directory
npm ERR! gyp verb command configure []
npm ERR! gyp verb check python checking for Python executable "C:\Python27\python.exe" in the PATH
npm ERR! gyp verb `which` succeeded C:\Python27\python.exe C:\Python27\python.exe
npm ERR! gyp verb check python version `C:\Python27\python.exe -c "import sys; print "2.7.2
npm ERR! gyp verb check python version .%s.%s" % sys.version_info[:3];"` returned: %j
npm ERR! gyp verb get node dir no --target version specified, falling back to host node version: 16.13.1
npm ERR! gyp verb command install [ '16.13.1' ]
npm ERR! gyp verb install input version string "16.13.1"
npm ERR! gyp verb install installing version: 16.13.1
npm ERR! gyp verb install --ensure was passed, so won't reinstall if already installed
npm ERR! gyp verb install version is already installed, need to check "installVersion"
npm ERR! gyp verb got "installVersion" 9
npm ERR! gyp verb needs "installVersion" 9
npm ERR! gyp verb install version is good
npm ERR! gyp verb get node dir target node version installed: 16.13.1
npm ERR! gyp verb build dir attempting to create "build" dir: C:\wamp\www\xxxxx\node_modules\node-sass\build
npm ERR! gyp verb build dir "build" dir needed to be created? C:\wamp\www\xxxxx\node_modules\node-sass\build
npm ERR! gyp verb find vs2017 Found installation at: C:\Program Files (x86)\Microsoft Visual Studio\2019\Community
npm ERR! gyp verb find vs2017 - Found Microsoft.VisualStudio.Component.Windows10SDK.19041
npm ERR! gyp verb find vs2017 - Found Microsoft.VisualStudio.Component.VC.Tools.x86.x64
npm ERR! gyp verb find vs2017 - Found Microsoft.VisualStudio.VC.MSBuild.Base
npm ERR! gyp verb find vs2017 - Using this installation with Windows 10 SDK
npm ERR! gyp verb find vs2017 using installation: C:\Program Files (x86)\Microsoft Visual Studio\2019\Community
npm ERR! gyp verb build/config.gypi creating config file
npm ERR! gyp verb build/config.gypi writing out config file: C:\wamp\www\xxxxx\node_modules\node-sass\build\config.gypi
npm ERR! (node:15060) [DEP0150] DeprecationWarning: Setting process.config is deprecated. In the future the property will be read-only.
npm ERR! (Use `node --trace-deprecation ...` to show where the warning was created)
npm ERR! gyp verb config.gypi checking for gypi file: C:\wamp\www\xxxxx\node_modules\node-sass\config.gypi
npm ERR! gyp verb common.gypi checking for gypi file: C:\wamp\www\xxxxx\node_modules\node-sass\common.gypi
npm ERR! gyp verb gyp gyp format was not specified; forcing "msvs"
npm ERR! gyp info spawn C:\Python27\python.exe
npm ERR! gyp info spawn args [
npm ERR! gyp info spawn args 'C:\\wamp\\www\\xxxxx\\node_modules\\node-gyp\\gyp\\gyp_main.py',
npm ERR! gyp info spawn args 'binding.gyp',
npm ERR! gyp info spawn args '-f',
npm ERR! gyp info spawn args 'msvs',
npm ERR! gyp info spawn args '-G',
npm ERR! gyp info spawn args 'msvs_version=2015',
npm ERR! gyp info spawn args '-I',
npm ERR! gyp info spawn args 'C:\\wamp\\www\\xxxxx\\node_modules\\node-sass\\build\\config.gypi',
npm ERR! gyp info spawn args '-I',
npm ERR! gyp info spawn args 'C:\\wamp\\www\\xxxxx\\node_modules\\node-gyp\\addon.gypi',
npm ERR! gyp info spawn args '-I',
npm ERR! gyp info spawn args 'C:\\Users\\nikla\\.node-gyp\\16.13.1\\include\\node\\common.gypi',
npm ERR! gyp info spawn args '-Dlibrary=shared_library',
npm ERR! gyp info spawn args '-Dvisibility=default',
npm ERR! gyp info spawn args '-Dnode_root_dir=C:\\Users\\nikla\\.node-gyp\\16.13.1',
npm ERR! gyp info spawn args '-Dnode_gyp_dir=C:\\wamp\\www\\xxxxx\\node_modules\\node-gyp',
npm ERR! gyp info spawn args '-Dnode_lib_file=C:\\Users\\nikla\\.node-gyp\\16.13.1\\<(target_arch)\\node.lib',
npm ERR! gyp info spawn args '-Dmodule_root_dir=C:\\wamp\\www\\xxxxx\\node_modules\\node-sass',
npm ERR! gyp info spawn args '-Dnode_engine=v8',
npm ERR! gyp info spawn args '--depth=.',
npm ERR! gyp info spawn args '--no-parallel',
npm ERR! gyp info spawn args '--generator-output',
npm ERR! gyp info spawn args 'C:\\wamp\\www\\xxxxx\\node_modules\\node-sass\\build',
npm ERR! gyp info spawn args '-Goutput_dir=.'
npm ERR! gyp info spawn args ]
npm ERR! gyp verb command build []
npm ERR! gyp verb build type Release
npm ERR! gyp verb architecture x64
npm ERR! gyp verb node dev dir C:\Users\nikla\.node-gyp\16.13.1
npm ERR! gyp verb found first Solution file build/binding.sln
npm ERR! gyp verb using MSBuild: C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\15.0\Bin\MSBuild.exe
npm ERR! gyp info spawn C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\15.0\Bin\MSBuild.exe
npm ERR! gyp info spawn args [
npm ERR! gyp info spawn args 'build/binding.sln',
npm ERR! gyp info spawn args '/nologo',
npm ERR! gyp info spawn args '/p:Configuration=Release;Platform=x64'
npm ERR! gyp info spawn args ]
npm ERR! gyp ERR! UNCAUGHT EXCEPTION
npm ERR! gyp ERR! stack Error: spawn C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\15.0\Bin\MSBuild.exe ENOENT
npm ERR! gyp ERR! stack at Process.ChildProcess._handle.onexit (node:internal/child_process:282:19)
npm ERR! gyp ERR! stack at onErrorNT (node:internal/child_process:477:16)
npm ERR! gyp ERR! stack at processTicksAndRejections (node:internal/process/task_queues:83:21)
npm ERR! gyp ERR! System Windows_NT 10.0.19045
npm ERR! gyp ERR! command "C:\\Program Files\\nodejs\\node.exe" "C:\\wamp\\www\\xxxxx\\node_modules\\node-gyp\\bin\\node-gyp.js" "rebuild" "--verbose" "--libsass_ext=" "--libsass_cflags=" "--libsass_ldflags=" "--libsass_library="
npm ERR! gyp ERR! cwd C:\wamp\www\xxxxx\node_modules\node-sass
npm ERR! gyp ERR! node -v v16.13.1
npm ERR! gyp ERR! node-gyp -v v3.8.0
npm ERR! gyp ERR! This is a bug in `node-gyp`.
npm ERR! gyp ERR! Try to update node-gyp and file an Issue if it does not help:
npm ERR! gyp ERR! <https://github.com/nodejs/node-gyp/issues>
npm ERR! Build failed with error code: 7
</code></pre>
| <python><npm> | 2023-07-10 20:07:01 | 0 | 589 | Niklas |
76,656,973 | 13,342,062 | Using a cached property on a named tuple | <pre><code>from typing import NamedTuple
from functools import cached_property
class Rectangle(NamedTuple):
x: int
y: int
@cached_property
def area(self):
return self.x * self.y
</code></pre>
<p>I thought this class definition would complain something about the <code>__slots__</code> on Rectangle, but apparently the class definition is valid. It doesn't fail until too late, if/when the getter is actually accessed:</p>
<pre><code>>>> rect = Rectangle(2, 3)
>>> rect.area
...
TypeError: Cannot use cached_property instance without calling __set_name__ on it.
>>> Rectangle.
</code></pre>
<p>Well, that's weird, but okay..</p>
<pre><code>>>> Rectangle.area.__set_name__(Rectangle, "area")
>>> rect.area
...
TypeError: No '__dict__' attribute on 'Rectangle' instance to cache 'area' property.
</code></pre>
<p>Is there a better recipe for cached properties on named tuples? Requirements:</p>
<ul>
<li>It should not appear to be a real field (<code>x, y, area = rect</code> should not be possible)</li>
<li>It should be lazy (not eagerly computed) and cached (not recomputed every time accessed)</li>
<li>Wherever the storage is should not leak memory (it should be deleted when the tuple instance itself is deleted)</li>
</ul>
| <python><caching><properties><namedtuple><python-descriptors> | 2023-07-10 20:02:00 | 2 | 323 | COVFEFE-19 |
76,656,788 | 54,873 | How do I get the last record in a groupby() in pandas? | <p>I have a dataframe <code>df</code> which has a number of records for each student. Frequently I want to get the one with the last timestamp.</p>
<p>What is the best way to do this? Previously I had been using <code>last()</code> but this gives the last <em>non null</em> value when really I just want the last value, null or otherwise.</p>
<p>Using <code>apply(lambda r: r.iloc[-1])</code> works, but the code feels ugly (I hate using an <code>apply</code> and anecdotally it feels slow and inefficient, likely because of the apply).</p>
<p>What is the right way to do this?</p>
<pre><code>(Pdb) df = pd.DataFrame([["A",2,3],["B",5,6],["A",np.NaN,4]], columns=["student", "value_a", "timestamp"]).sort_values("timestamp")
(Pdb) df
  student  value_a  timestamp
0       A      2.0          3
2       A      NaN          4
1       B      5.0          6
(Pdb) df.groupby("student").last()
# This gives the wrong answer
         value_a  timestamp
student
A            2.0          4
B            5.0          6
(Pdb) df.groupby("student").apply(lambda r: r.iloc[-1])
# This gives the right answer but feels inefficient
        student  value_a  timestamp
student
A             A      NaN          4
B             B      5.0          6
</code></pre>
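<p>A sketch of one vectorized alternative: since the frame is already sorted by timestamp, keeping the last row per student with <code>drop_duplicates</code> preserves NaNs, unlike <code>last()</code>, which skips nulls column by column:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([["A", 2, 3], ["B", 5, 6], ["A", np.nan, 4]],
                  columns=["student", "value_a", "timestamp"]).sort_values("timestamp")

# keep="last" retains the positionally last row per student, NaN and all.
last_rows = df.drop_duplicates("student", keep="last")
print(last_rows)
```

<p><code>df.groupby("student").nth(-1)</code> is another option, though its return shape has shifted between pandas versions.</p>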
| <python><pandas> | 2023-07-10 19:34:10 | 3 | 10,076 | YGA |
76,656,618 | 2,281,766 | Tkinter global event handler | <p>I would like to create a complex dynamic GUI with tkinter. The ideal option for my use case would be a global event handler where I can decide which <code>widget</code> and <code>event</code> type I want to process. I tried the following code, but obviously it does not work. How can I achieve that?</p>
<pre><code>import tkinter

root = tkinter.Tk()

def global_handler(widget, event):
    pass

root.bind_all('any', global_handler)  # This does not work
root.mainloop()
</code></pre>
| <python><tkinter> | 2023-07-10 19:04:37 | 3 | 984 | VoidStar |
76,656,460 | 7,347,925 | How to count occurrences for each unique element by column? | <p>I have a 2d array and want to get the occurrences of all unique numbers by column.</p>
<p>Here's an example:</p>
<pre><code>import numpy as np

a = np.array([[2,2,3,3],
              [2,3,3,3],
              [3,3,4,4]])
</code></pre>
<p>The result should be</p>
<pre><code>[[2,1,0,0],
 [1,2,2,2],
 [0,0,1,1]]
</code></pre>
<p>For example, the first row is the occurrence of number <code>2</code> in each column, 0 means <code>2</code> isn't in the third and fourth columns. The second row is the occurrence of the number <code>3</code> while the last row is for the number <code>4</code>.
Briefly, I want to get the per-column count of each sorted unique value.</p>
<p>I have tried <code>np.unique(a, return_counts=True, axis=0)</code>, but got this wrong result:</p>
<pre><code>(array([[2, 2, 3, 3],
[2, 3, 3, 3],
[3, 3, 4, 4]]),
array([1, 1, 1]))
</code></pre>
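<p>A broadcasting sketch that produces exactly the desired table, with rows ordered by sorted unique value:</p>

```python
import numpy as np

a = np.array([[2, 2, 3, 3],
              [2, 3, 3, 3],
              [3, 3, 4, 4]])

vals = np.unique(a)  # sorted unique values: [2, 3, 4]

# counts[i, j] = number of times vals[i] appears in column j.
counts = (a[None, :, :] == vals[:, None, None]).sum(axis=1)
print(counts)
```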
| <python><numpy><numpy-ndarray> | 2023-07-10 18:38:57 | 1 | 1,039 | zxdawn |
76,656,395 | 46,503 | SQLAlchemy: filtering on value of dict stored in JSONB array field | <p>My Postgres table has a field called data whose type is JSONB. The format is an array, for example:</p>
<pre><code>entity.data = [{"a": 1, "b": "name"}, {"a": 2, "b": "name1"}]
</code></pre>
<p>I need to find the records whose array contains an element with b == "name1".</p>
<p>I'm trying to use this filter, but it doesn't work (I think that's because it targets a top-level dictionary, not an array):</p>
<pre><code>record = TableName.query.filter(TableName.data['b'].astext == 'name1').first()
</code></pre>
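<p>A sketch of one approach (class and table names are stand-ins): PostgreSQL's JSONB containment operator <code>@&gt;</code> matches a dict anywhere inside a top-level array, and SQLAlchemy's <code>JSONB</code> comparator exposes it as <code>.contains()</code>:</p>

```python
from sqlalchemy import Column, Integer, select
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Entity(Base):  # hypothetical stand-in for TableName
    __tablename__ = 'entity'
    id = Column(Integer, primary_key=True)
    data = Column(JSONB)

# data @> '[{"b": "name1"}]' is true when some array element has b == "name1".
stmt = select(Entity).where(Entity.data.contains([{"b": "name1"}]))
compiled = str(stmt.compile(dialect=postgresql.dialect()))
print(compiled)
```

<p>With Flask-SQLAlchemy that would be roughly <code>TableName.query.filter(TableName.data.contains([{"b": "name1"}])).first()</code>.</p>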
| <python><postgresql><sqlalchemy> | 2023-07-10 18:29:47 | 1 | 5,287 | mimic |
76,656,339 | 718,529 | multiprocessing: Can a dict be shared between two Python shells? | <p>I come from this post <a href="https://stackoverflow.com/questions/6832554/multiprocessing-how-do-i-share-a-dict-among-multiple-processes">multiprocessing: How do I share a dict among multiple processes?</a>, but I want something slightly different. In that post, a dict is shared between a parent process and a child it creates with the <code>Process</code> constructor. What I want is to share a dict between two separate Python shells.</p>
| <python><multiprocessing><shared-memory> | 2023-07-10 18:19:36 | 1 | 687 | chanp |
76,656,259 | 13,752,965 | How do I route subpages correctly with FastAPI? | <p>I'm trying to serve a small web app using Sveltekit and FastAPI.</p>
<p>FastAPI provides an api and various functions and the frontend was built with Sveltekit using <code>adapter-static</code>.</p>
<p>The basic framework is:</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, Request
from fastapi.staticfiles import StaticFiles

app = FastAPI()
api = FastAPI(root_path="/api")

# mount the api
app.mount("/api", api)

# mount the sveltekit static files at root
app.mount('/', StaticFiles(directory="./webapp/build/", html=True), name="webapp")

# define the api endpoints
@api.websocket("/something")
async def do_something(request: Request):
    body = await request.json()
    result = do_something_fancy(body)
    return {"result": result}
...
</code></pre>
<p>What I'm having trouble with is that the frontend app defines multiple sub pages:</p>
<ul>
<li>a main page/dashboard at <code>localhost:8888</code></li>
<li>a settings page at <code>localhost:8888/settings</code></li>
<li>an about page at <code>localhost:8888/about</code></li>
</ul>
<p>Now if I navigate to any of these pages using my svelte app navbar after starting at the root url "/", everything works fine, but if I navigate directly to <code>http://localhost:8888/settings</code> by entering it in the browser address bar, I get a 404 error and see <code>{"detail":"Not Found"}</code>.</p>
<p>I also get the <code>{"detail":"Not Found"}</code> when I hit "Refresh"/<code>Ctrl-r</code> in my browser when on one of the subpages (<code>settings</code> or <code>about</code>), but it works fine at the root url.</p>
<p>I'm pretty sure I'm just missing something simple like adding routing to the static app mount point, but the FastAPI docs don't specify how this should work (OR, it seems to indicate that it should "just work").</p>
| <python><routes><fastapi><sveltekit> | 2023-07-10 18:06:13 | 2 | 703 | tdpu |
76,656,090 | 16,591,513 | Shell cannot find copied files in Docker Container | <p>I have the following Dockerfile for my Python web project. The image builds fine; however, the entrypoint.sh script cannot find some of the files that Docker was supposed to copy (internal files of the project).</p>
<p>My Dockerfile</p>
<pre><code>FROM --platform=arm64 python:3.8.13-buster
LABEL maintainer=kirklimushin@gmail.com
WORKDIR /project/dir/
ENV PYTHONUNBUFFERED=1
COPY ./src ./
COPY ./__init__.py ./
COPY ./unittests ./
COPY ./environment.env ./
COPY ./module_requirements.txt ./
COPY ./module_constraints.txt ./
COPY ./entrypoint.sh ./
COPY ./rest_controllers.py ./
COPY ./settings.py ./
# creating new virtual environment
RUN python -m venv fn_env
# activating python virtual environment via shell
RUN . ./fn_env/bin/activate
# upgrading pip packet manager
RUN pip install --upgrade pip
RUN pip install --upgrade setuptools wheel
# installing dependencies inside virtual environment
RUN pip install -r module_requirements.txt -c module_constraints.txt
RUN chmod +x entrypoint.sh
ENTRYPOINT ./entrypoint.sh
</code></pre>
<p>My shell script <code>entrypoint.sh</code></p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/bash
echo "Starting Entrypoint pipeline...."
echo "Activating Virtual Environment"
source ./project/dir/fn_env/bin/activate
pip list
echo "Running Unittests..."
pytest ./project/dir/unittests/ml/test_models.py
pytest ./project/dir/unittests/web/test_rest_controllers.py
echo "Starting ASGI Server..."
python ./project/dir/settings.py
</code></pre>
<p>As aforementioned Dockerfile seems to work well, however, when it runs <code>entrypoint.sh</code>, it throws following error:</p>
<pre><code>/project/dir
Starting Entrypoint pipeline....
Activating Virtual Environment
Running Unittests...
./entrypoint.sh: line 9: ./project/dir/fn_env/bin/activate: No such file or directory
ERROR: file or directory not found: ./project/dir/unittests/ml/test_models.py
ERROR: file or directory not found: ./project/dir/unittests/web/test_rest_controllers.py
</code></pre>
<p>It cannot find files.</p>
<p>When I run <code>pwd</code> it shows, the current directory (aka. <code>/project/dir/</code>, which I specified inside my Dockerfile), so paths are supposed to be correct</p>
<p>Is Docker having trouble copying the files?</p>
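<p>One thing worth checking, judging only from the paths above: every path in <code>entrypoint.sh</code> starts with <code>./project/dir/...</code>, but <code>.</code> is the current directory, which is already <code>/project/dir</code>, so the script effectively looks for <code>/project/dir/project/dir/...</code>. A quick demonstration of the doubling:</p>

```shell
# Recreate the situation: a file in $workdir, then look it up both ways.
workdir="$(mktemp -d)/project/dir"
mkdir -p "$workdir"
cd "$workdir"
touch entrypoint.sh

[ -e ./project/dir/entrypoint.sh ] && doubled=yes || doubled=no  # the script's style
[ -e ./entrypoint.sh ] && direct=yes || direct=no                # plain relative path
echo "doubled=$doubled direct=$direct"
```

<p>Absolute paths (<code>source /project/dir/fn_env/bin/activate</code>, etc.) avoid this. Separately, note that <code>COPY ./unittests ./</code> copies the directory's <em>contents</em> into the WORKDIR, not the directory itself, so <code>unittests/...</code> may not exist in the image either.</p>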
| <python><bash><docker><shell> | 2023-07-10 17:35:55 | 0 | 449 | CraZyCoDer |
76,656,038 | 3,654,588 | Python Global random seed vs Numpy Generator | <p>I am currently using randomness in functions and unit tests. Moreover, sometimes the functions should support <code>joblib</code> parallelism. I am wondering what the issues are with using <code>numpy.random.seed</code> vs a <code>Generator</code>.</p>
<p>For example say we have the Generator pattern:</p>
<pre><code># pseudocode
def do_something_with_seed(rng):
    rng = np.random.default_rng(rng)

# you can just call as is
do_something_with_seed(12345)

# I need to generate a seed sequence as I understand, when using parallelism
Parallel(do_something_with_seed(_rng) for _rng in rng.spawn(n_jobs))
</code></pre>
<p>Next, say we use the <code>np.random.seed</code> pattern</p>
<pre><code># pseudocode
def do_something_with_global_seed(seed):
    np.random.seed(seed)
    ...

# this requires you to always set the global seed before running this function
do_something_with_global_seed(12345)

# when using parallelism
random_seeds = np.random.randint(np.iinfo(np.int32).max, size=n_jobs)
Parallel(do_something_with_global_seed(seed) for seed in random_seeds)
</code></pre>
<p>As far as I can see, the performance and functionality are the same as long as you remember to do things properly. Are there any differences, or reasons we should definitely prefer the <code>Generator</code> pattern? What about for unit testing?</p>
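<p>A small sketch of why the <code>Generator</code> pattern tends to be preferred for tests and parallelism: there is no hidden global state, and <code>SeedSequence.spawn</code> gives workers independent streams by construction:</p>

```python
import numpy as np

def draw(rng=None):
    # Accepts a seed, an existing Generator, or None: handy for unit tests.
    rng = np.random.default_rng(rng)
    return rng.integers(0, 100, size=3)

# Same seed, same stream, and no other code's randomness is disturbed.
assert np.array_equal(draw(12345), draw(12345))

# Independent child streams for workers, without drawing raw ints as seeds
# (which risks accidental duplicates).
children = np.random.SeedSequence(12345).spawn(4)
gens = [np.random.default_rng(s) for s in children]
print([int(g.integers(0, 100)) for g in gens])
```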
| <python><numpy><random> | 2023-07-10 17:29:27 | 1 | 1,302 | ajl123 |
76,655,971 | 3,970,853 | Find closest point on HoughLine | <p>Given a hough line as <code>{rho: number, theta: number}</code> and a coordinate as <code>{x: number, y: number}</code>, how would I find the closest point on that hough line to the given coordinate?</p>
<p>So in the following sample graphic, I have the blue line and the red dot, and I'm looking for the white cross.</p>
<p><a href="https://i.sstatic.net/TIqbj.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TIqbj.jpg" alt="Graphic" /></a></p>
<p>I'm using Typescript, but I'd appreciate answers in any language or format, since this is technically more of a math problem.</p>
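<p>Writing the Hough line in normal form, <code>x*cos(theta) + y*sin(theta) = rho</code>, its unit normal is <code>(cos(theta), sin(theta))</code>, so the closest point is the given coordinate minus its signed distance along that normal. A Python sketch (the math is language-agnostic):</p>

```python
import math

def closest_point_on_hough_line(rho, theta, x, y):
    c, s = math.cos(theta), math.sin(theta)
    # Signed distance from (x, y) to the line x*c + y*s = rho.
    d = x * c + y * s - rho
    # Step back along the unit normal to land on the line.
    return (x - d * c, y - d * s)

# theta = 0 is the vertical line x = rho, so only x changes:
print(closest_point_on_hough_line(5.0, 0.0, 8.0, 3.0))  # (5.0, 3.0)
```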
| <python><typescript><geometry><hough-transform> | 2023-07-10 17:19:49 | 2 | 1,503 | d0n.key |
76,655,962 | 9,338,509 | How to mock JClass call in python unit tests | <p>I am new to python unit test mocking. I am using jpype library in the code like below:</p>
<pre><code>def my_func():
    # some code
    instance = JClass("myapp.myclass")()
    # some code
</code></pre>
<p>This is the test code:</p>
<pre><code>def test_my_func():
    my_func()
</code></pre>
<p>Here how to mock JClass() call?</p>
<p>Thanks in advance.</p>
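<p>A sketch using <code>unittest.mock.patch</code>: patch <code>JClass</code> where your module looks it up (e.g. <code>mymodule.JClass</code>), not inside <code>jpype</code>, and remember the double call, since <code>JClass(...)</code> returns a class that is then instantiated. Everything below is a self-contained stand-in (<code>mymodule</code> is hypothetical):</p>

```python
import sys
import types
from unittest.mock import patch

# Throwaway module that mimics the structure of the real code.
mymodule = types.ModuleType('mymodule')
exec(
    "def my_func():\n"
    "    instance = JClass('myapp.myclass')()\n"
    "    return instance\n",
    mymodule.__dict__,
)
sys.modules['mymodule'] = mymodule

# create=True because the stand-in never imported JClass itself.
with patch('mymodule.JClass', create=True) as mock_jclass:
    result = mymodule.my_func()

mock_jclass.assert_called_once_with('myapp.myclass')
print(result is mock_jclass.return_value.return_value)  # True
```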
| <python><mocking><python-unittest><jpype> | 2023-07-10 17:19:03 | 0 | 553 | lakshmiravali rimmalapudi |
76,655,959 | 19,675,781 | How to fix seaborn heatmap color mapping when values are in wide range | <p>I have a dataframe with 6 unique values in the range 0-9. I want to assign a specific color to each value, but the mapping is not working for me.</p>
<p>This is how my dataframe looks like:</p>
<p><a href="https://i.sstatic.net/oRpQE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oRpQE.png" alt="Data Frame" /></a></p>
<pre><code>cmap_new = {0: '#faf5f5', 1: '#ff0303', 6: '#1f78b4', 7: '#b2df8a', 8: '#33a02c', 9: '#fb9a99'}
cmap = ListedColormap([cmap_new[i] for i in cmap_new.keys()])
ax = sns.heatmap(data=tmp_df, cmap=cmap, yticklabels=True, xticklabels=False,linewidths=1,square=True,annot=True)
</code></pre>
<p>My plot looks like this:</p>
<p><a href="https://i.sstatic.net/Fizyy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fizyy.png" alt="enter image description here" /></a></p>
<p>Even though my data has no values in [2-5], those values are still assigned colors. I want to fix this and assign colors only to the keys in the cmap_new dictionary.</p>
<p>Can anyone help me with this?</p>
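<p>A sketch of one fix: a <code>ListedColormap</code> on its own just stretches its six colors across the data range, so the unused values 2-5 borrow colors. Pairing it with a <code>BoundaryNorm</code> pins each actual value to its own color (seaborn forwards <code>norm</code> to matplotlib):</p>

```python
from matplotlib.colors import BoundaryNorm, ListedColormap

cmap_new = {0: '#faf5f5', 1: '#ff0303', 6: '#1f78b4',
            7: '#b2df8a', 8: '#33a02c', 9: '#fb9a99'}

values = sorted(cmap_new)  # [0, 1, 6, 7, 8, 9]
cmap = ListedColormap([cmap_new[v] for v in values])

# One bin per listed value: boundaries halfway between consecutive values.
bounds = ([values[0] - 0.5]
          + [(lo + hi) / 2 for lo, hi in zip(values, values[1:])]
          + [values[-1] + 0.5])
norm = BoundaryNorm(bounds, cmap.N)
print([int(norm(v)) for v in values])  # [0, 1, 2, 3, 4, 5]
```

<p>Then call <code>sns.heatmap(data=tmp_df, cmap=cmap, norm=norm, ...)</code> as before.</p>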
| <python><seaborn><heatmap><colormap> | 2023-07-10 17:18:28 | 1 | 357 | Yash |
76,655,793 | 14,498,998 | Django Can't Find a static file that's already there | <p>I'm facing a peculiar problem with Django. I have a CSS static file inside the <static/css> directory, yet the terminal logs this strange message: "GET /home/.../django/mysite/static/css/style.css HTTP/1.1" 404 1932, while the file is actually there!</p>
<p>settings file:</p>
<pre><code>DEBUG = True
ALLOWED_HOSTS = ['*']
STATIC_URL = str(BASE_DIR) + '/static/'
STATIC_ROOT = str(BASE_DIR.joinpath('static'))
STATICFILES_DIR = (str(BASE_DIR.joinpath('static')),)
</code></pre>
<p>I have used the command "python manage.py collectstatic", but it didn't work; I also checked out this page: <a href="https://docs.djangoproject.com/en/4.2/howto/static-files/" rel="nofollow noreferrer">https://docs.djangoproject.com/en/4.2/howto/static-files/</a> but there was no solution for me... Please tell me what's wrong!</p>
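<p>For comparison, a sketch of the conventional split, which may well be the issue here: <code>STATIC_URL</code> must be a URL prefix rather than a filesystem path, the source-directories setting is <code>STATICFILES_DIRS</code> (with an S), and <code>STATIC_ROOT</code> should differ from the source directory:</p>

```python
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent  # as in a stock settings.py

STATIC_URL = '/static/'                   # what the browser requests
STATICFILES_DIRS = [BASE_DIR / 'static']  # where your own css/ lives
STATIC_ROOT = BASE_DIR / 'staticfiles'    # collectstatic target only
```

<p>With <code>DEBUG=True</code> and <code>django.contrib.staticfiles</code> installed, <code>runserver</code> then serves <code>/static/css/style.css</code> straight from <code>STATICFILES_DIRS</code>; <code>collectstatic</code> is only needed for deployment.</p>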
| <python><django><django-staticfiles> | 2023-07-10 16:50:00 | 1 | 313 | Alin |
76,655,633 | 13,197,161 | CloudKit Console: JSON Web Token Validator shows Unrecognizable claims found (Token Generated with Python) | <p>Hello, I am getting the error <code>Unrecognizable claims found</code> when trying to validate my JWT token for push notifications.</p>
<p><a href="https://i.sstatic.net/JAqSm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JAqSm.png" alt="enter image description here" /></a></p>
<p>I don't understand what it means. Can someone tell me how to resolve this issue?</p>
<p>I am using Python, with the <code>time</code> module to generate the epoch timestamp. I suspect this is where the issue comes from, but I am not sure.</p>
<pre><code>import time
epoch = time.time()
</code></pre>
<p>Thanks in advance.</p>
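One plausible culprit (an assumption, not confirmed by the screenshot): <code>time.time()</code> returns a float, while JWT validators generally expect the <code>iat</code>/<code>exp</code> numeric-date claims as integer seconds. A sketch of building integer claims (the <code>iss</code> value is a placeholder):

```python
import time

# time.time() returns a float (e.g. 1689000000.123456); JWT numeric-date
# claims are conventionally integer seconds since the epoch
iat = int(time.time())
exp = iat + 3600  # e.g. valid for one hour

claims = {"iss": "YOUR_TEAM_ID", "iat": iat, "exp": exp}  # iss is a placeholder
assert isinstance(claims["iat"], int) and isinstance(claims["exp"], int)
```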
| <python><ios><jwt><apple-push-notifications><cloudkit> | 2023-07-10 16:24:33 | 1 | 347 | Danny |
76,655,351 | 8,195,151 | rename returns NoneType object | <p>I have tried several different ways to rename columns in my dataframe and none are working.
I want to change the column name "Species" to read "Common Name".</p>
<pre><code>> general = pd.read_csv('july_4/general.csv', usecols=['Species', 'Count'])
> general
Species Count
0 Downy Woodpecker 1
1 Northern Flicker 1
2 Eastern Kingbird 2
</code></pre>
<p>Using rename() does not change the columns:</p>
<pre><code>> general.rename({'Species': 'Common Name'}, inplace=True)
> general
Species Count
0 Downy Woodpecker 1
1 Northern Flicker 1
2 Eastern Kingbird 2
</code></pre>
<p>Reassigning the variable returns a NoneType object:</p>
<pre><code>> general = pd.read_csv('july_4/general.csv', usecols=['Species', 'Count'])
> general = general.rename({'Species': 'Common Name'}, inplace=True)
> general[0]
Traceback (most recent call last):
Cell In[85], line 1
general[0]
TypeError: 'NoneType' object is not subscriptable
</code></pre>
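For context, two things are going on here (a sketch with a tiny stand-in frame): <code>rename</code> targets the index by default, so the mapping must be passed as <code>columns=</code>, and <code>inplace=True</code> makes the method return <code>None</code>, so it must not be combined with reassignment:

```python
import pandas as pd

# tiny stand-in for the CSV data
general = pd.DataFrame({'Species': ['Downy Woodpecker', 'Northern Flicker', 'Eastern Kingbird'],
                        'Count': [1, 1, 2]})

# name the columns mapping explicitly, and either reassign *or* use
# inplace=True -- combining both stores rename's None return value
general = general.rename(columns={'Species': 'Common Name'})

assert list(general.columns) == ['Common Name', 'Count']
```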
| <python><pandas> | 2023-07-10 15:45:59 | 3 | 355 | digitalwaterfall |
76,655,144 | 13,180,235 | 'Connection aborted.', RemoteDisconnected('Remote end closed connection without response',) | <p>I'm using a third-party API service for a text-sending job.</p>
<p>When I send around 5,000 numbers as the payload to the API, it works fine. But I have noticed that when the payload count reaches 7,000 or more, I receive the following error in response from the API.</p>
<pre><code>'Connection aborted.', RemoteDisconnected('Remote end closed connection without response',)
</code></pre>
<pre><code>msg_dict['test'] = test
msg_dict_json = json.dumps(msg_dict)
data = {
'apikey': apikey,
'data': msg_dict_json,
}
res = requests.post('https://api.txtlocal.com/bulk_json/', data=data)
return res
</code></pre>
<p>sample data obj:</p>
<pre><code>data={
"api_key": api_key,
"data": '{"sender": "abc", "messages": [{"number": "+0000000000", "text": "some text"}], "test": true}'
}
</code></pre>
<p>data['data'] can be over 7000 objects (which is causing the issue)</p>
<p>I know there's a limit of 10000 users per api call for this api:
<a href="https://api.txtlocal.com/bulk_json/" rel="noreferrer">https://api.txtlocal.com/bulk_json/</a>
so my payload count always stays less than 10000.</p>
<p>PS: The request doesn't roll back; the SMS are still sent to the users even when I get the response above. I just don't receive a positive response, and an exception is thrown.</p>
<p>I also want to mention that I have previously sent 7,000-8,000 SMS successfully, with a successful response, but now I am hitting this issue.</p>
<p>Any help would be appreciated, thanks.</p>
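One mitigation worth trying (a sketch; the batch size of 2,500 is a guess to tune, not a documented limit): split the message list into smaller batches and POST each one separately, so no single request approaches the size that makes the remote end drop the connection:

```python
def chunked(items, size):
    """Yield successive slices of at most `size` items."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

# stand-in for the real message list
messages = [{"number": f"+0000000{i:04d}", "text": "some text"} for i in range(7500)]

batches = list(chunked(messages, 2500))
assert sum(len(b) for b in batches) == len(messages)
assert all(len(b) <= 2500 for b in batches)
# each batch would then be json.dumps()-ed into data['data'] and POSTed
# separately, exactly like the single request in the question
```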
| <python><django><sms><remote-connection><textlocal> | 2023-07-10 15:18:19 | 1 | 335 | Fahad Hussain |
76,655,025 | 8,236,050 | Chromium binary not being found using Selenium | <p>I am using Selenium on a macOS server. I have installed both Chrome and chromedriver, but when I run my code, I get an error saying the binary was not found.</p>
<p>This is the code:</p>
<pre><code>options = webdriver.ChromeOptions()
if(env['HEADLESS']):
options.add_argument('--headless')
options.add_argument('--auto-show-cursor')
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
try:
driver = webdriver.Chrome('chromedriver',options=options)
except WebDriverException:
# Exception occurred, try using the specified executable_path
chrome_binary_path = '/usr/bin/chromedriver'
options.binary_location = chrome_binary_path
driver = webdriver.Chrome('chromedriver', options=options)# I have previously tried driver = webdriver.Chrome(executable_path = chrome_binary_path, options=options)
</code></pre>
<p>And this is the error:</p>
<pre><code>Traceback (most recent call last):
File "/home/docker/web-testing-wtai/WTAI-project/src/BL/browsers.py", line 21, in chrome
driver = webdriver.Chrome('chromedriver',options=options)
File "/home/docker/web-testing-wtai/WTAI-project/env/lib/python3.10/site-packages/selenium/webdriver/chrome/webdriver.py", line 80, in __init__
super().__init__(
File "/home/docker/web-testing-wtai/WTAI-project/env/lib/python3.10/site-packages/selenium/webdriver/chromium/webdriver.py", line 104, in __init__
super().__init__(
File "/home/docker/web-testing-wtai/WTAI-project/env/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 286, in __init__
self.start_session(capabilities, browser_profile)
File "/home/docker/web-testing-wtai/WTAI-project/env/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 378, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/home/docker/web-testing-wtai/WTAI-project/env/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 440, in execute
self.error_handler.check_response(response)
File "/home/docker/web-testing-wtai/WTAI-project/env/lib/python3.10/site-packages/selenium/webdriver/remote/errorhandler.py", line 245, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: Chrome failed to start: exited abnormally.
(unknown error: DevToolsActivePort file doesn't exist)
(The process started from chrome location /snap/chromium/2529/usr/lib/chromium-browser/chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
Stacktrace:
#0 0x560c936c76a3 <unknown>
#1 0x560c9340cad6 <unknown>
#2 0x560c934354c6 <unknown>
#3 0x560c934318a0 <unknown>
#4 0x560c9346f78d <unknown>
#5 0x560c9346ef6f <unknown>
#6 0x560c93466993 <unknown>
#7 0x560c9343c414 <unknown>
#8 0x560c9343d47e <unknown>
#9 0x560c9368aacd <unknown>
#10 0x560c9368f505 <unknown>
#11 0x560c93698a0e <unknown>
#12 0x560c9368ff8c <unknown>
#13 0x560c93660a62 <unknown>
#14 0x560c936b1538 <unknown>
#15 0x560c936b16dc <unknown>
#16 0x560c936c0b35 <unknown>
#17 0x7f8232ae6b43 <unknown>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/docker/web-testing-wtai/WTAI-project/src/BL/utils.py", line 418, in initializeBrowser
driver = browsers.chrome(env)
File "/home/docker/web-testing-wtai/WTAI-project/src/BL/browsers.py", line 26, in chrome
driver = webdriver.Chrome('chromedriver', options=options)
File "/home/docker/web-testing-wtai/WTAI-project/env/lib/python3.10/site-packages/selenium/webdriver/chrome/webdriver.py", line 80, in __init__
super().__init__(
File "/home/docker/web-testing-wtai/WTAI-project/env/lib/python3.10/site-packages/selenium/webdriver/chromium/webdriver.py", line 104, in __init__
super().__init__(
File "/home/docker/web-testing-wtai/WTAI-project/env/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 286, in __init__
self.start_session(capabilities, browser_profile)
File "/home/docker/web-testing-wtai/WTAI-project/env/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 378, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/home/docker/web-testing-wtai/WTAI-project/env/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 440, in execute
self.error_handler.check_response(response)
File "/home/docker/web-testing-wtai/WTAI-project/env/lib/python3.10/site-packages/selenium/webdriver/remote/errorhandler.py", line 245, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: no chrome binary at /usr/bin/chromedriver
</code></pre>
<p>I have verified the file exists at that path by running <code>nano /usr/bin/chromedriver</code>, and I see this:</p>
<pre><code>#!/bin/sh
if ! [ -x /snap/bin/chromium.chromedriver ]; then
echo "" >&2
echo "Command '$0' requires the chromium snap to be installed." >&2
echo "Please install it with:" >&2
echo "" >&2
echo "snap install chromium" >&2
echo "" >&2
exit 1
fi
exec /snap/bin/chromium.chromedriver "$@"
</code></pre>
<p>I really do not understand why Selenium is unable to find the chromedriver executable. What am I doing wrong? Alternatively, how could I avoid specifying the path, as I did before I got the exception? I started getting this exception after moving my script to a new computer with the same Selenium version (4.8.2) but a higher Chromium version (previously 114.0.5735.90, now 114.0.5735.198). If a chromedriver version mismatch is the problem, how can I change the version using only the command line?</p>
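Reading the traceback's last line, one likely explanation (an assumption based on the error text, not verified on this machine) is that <code>binary_location</code> got pointed at the driver: <code>binary_location</code> must name the browser executable, while the path handed to <code>webdriver.Chrome</code> names chromedriver. A sketch of keeping the two apart, with the Selenium calls left as comments since they need a live browser:

```python
import shutil

# "no chrome binary at /usr/bin/chromedriver" suggests the *driver* path was
# assigned to options.binary_location, which must point at the *browser*.
browser_path = (shutil.which('chromium')
                or shutil.which('chromium-browser')
                or shutil.which('google-chrome'))
driver_path = shutil.which('chromedriver')

# options = webdriver.ChromeOptions()
# if browser_path:
#     options.binary_location = browser_path   # the browser, never the driver
# driver = webdriver.Chrome(driver_path, options=options)
```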
| <python><google-chrome><selenium-webdriver><selenium-chromedriver><google-chrome-headless> | 2023-07-10 15:01:39 | 2 | 513 | pepito |
76,654,976 | 13,321,451 | How to prevent pyplot.errorbar from shifting x-axis of seaborn barplot | <p>I want to plot data using a Seaborn barplot; I only have the mean and standard deviation. I use pyplot.errorbar to add error bars to my plot; however, it shifts my x-axis slightly (see the red star in the plot below). How do I prevent this from happening?</p>
<p>Plots:
<a href="https://i.sstatic.net/YsVpA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YsVpA.png" alt="enter image description here" /></a></p>
<p>Code to reproduce:</p>
<pre><code>import seaborn as sns
import matplotlib.pyplot as plt
### loading example data ###
health = sns.load_dataset('healthexp')
health_summary = health.groupby(['Country']).Life_Expectancy.agg({'mean','std'}).reset_index()
### barplot without errorbars ###
p = sns.barplot(health_summary, x = 'Country', y = 'mean', errorbar=None)
plt.show()
### barplot with errorbars ###
p = sns.barplot(health_summary, x = 'Country', y = 'mean', errorbar=None)
p.errorbar(x=health_summary['Country'], y=health_summary['mean'], yerr=health_summary['std'], fmt="none", c="k")
plt.show()
</code></pre>
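One way to sidestep the shift (a sketch on plain matplotlib with made-up numbers, assuming the bars sit at integer positions 0..n-1 as categorical bars do): pass integer positions to <code>errorbar</code> instead of the country strings, so the categorical axis is never asked to re-map the labels:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend, nothing is displayed
import matplotlib.pyplot as plt
import numpy as np

countries = ['Canada', 'France', 'Germany']   # made-up stand-in summary
means = [78.0, 79.5, 78.8]
stds = [1.2, 0.9, 1.1]

fig, ax = plt.subplots()
ax.bar(countries, means)              # stand-in for the seaborn barplot
xpos = np.arange(len(countries))      # integer positions of the bars
ax.errorbar(x=xpos, y=means, yerr=stds, fmt='none', c='k')

# the categorical ticks are still exactly the three original bar positions
assert [int(t) for t in ax.get_xticks()] == [0, 1, 2]
plt.close(fig)
```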
| <python><matplotlib><seaborn> | 2023-07-10 14:55:55 | 1 | 342 | Oll |
76,654,896 | 8,014 | Why does git fail with "getaddrinfo() thread failed to start" when in a subprocess? | <p>My Python code calls <code>subprocess.Popen()</code> with a command:</p>
<pre><code> git clone -b master --single-branch https://github.com/kivy/python-for-android.git python-for-android
</code></pre>
<p>I call my Python code from the command line Terminal within PyCharm, on Windows 11.</p>
<p>It fails to run getaddrinfo():</p>
<pre><code>Cloning into 'python-for-android'...
fatal: unable to access 'https://github.com/kivy/python-for-android.git/': getaddrinfo() thread failed to start
</code></pre>
<p>I run the exact same command directly in the same terminal, and it runs perfectly.</p>
<p>Hmmm... That sounds like it can't get network access... and the answers to this <a href="https://stackoverflow.com/questions/59911649/fatal-unable-to-access-link-getaddrinfo-thread-failed-to-start">related question</a> all agree to check the firewall.</p>
<p>I only have Microsoft Defender Firewall, and I tried turning off all of the Domain network, Private Network and Public Network firewalls. (My router is still protecting me.) It makes no difference.</p>
<p>I would have thought the firewall, even if it were on, would be paying attention to <code>git.exe</code> (I have confirmed they are finding the same exe.) and wouldn't trigger on one and not the other.</p>
<p>Where should I look next? Any clues what might be blocking git from accessing the network when running in a Python subprocess?</p>
| <python><windows><subprocess> | 2023-07-10 14:47:44 | 1 | 25,529 | Oddthinking |
76,654,793 | 13,874,745 | How to activate jupyterlab-vim for jupyter-lab-4.0.2? | <p>I ran the install command <code>pip install jupyterlab-vim</code>, which I found in <a href="https://github.com/jupyterlab-contrib/jupyterlab-vim#install" rel="nofollow noreferrer">https://github.com/jupyterlab-contrib/jupyterlab-vim#install</a></p>
<p>Result of executing <code>jupyter labextension list</code>:</p>
<ul>
<li><p>messages:</p>
<pre><code>JupyterLab v4.0.2
/opt/conda/share/jupyter/labextensions
jupyterlab_pygments v0.2.2 enabled X (python, jupyterlab_pygments)
@axlair/jupyterlab_vim v0.16.0 enabled X (python, jupyterlab_vim)
The following extensions are outdated:
jupyterlab_pygments
@axlair/jupyterlab_vim
Consider running "jupyter labextension update --all" to check for updates.
</code></pre>
</li>
<li><p>picture of messages:</p>
<p><a href="https://i.sstatic.net/VfhiZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VfhiZ.png" alt="enter image description here" /></a></p>
</li>
</ul>
<p>From the <code>X</code> marks in the output, it seems something went wrong, but I can't find a more detailed error message.</p>
<p>What I've tried:</p>
<ul>
<li>The most relative source I can find is : <a href="https://discourse.jupyter.org/t/how-to-get-vim-editor-extension-working/19272" rel="nofollow noreferrer">How to get Vim editor extension working?</a>, but I don't think the solution fit for me, because I don't use any virtual environment.</li>
<li>Once I downgraded jupyterlab to 3.2.4, the command <code>pip install jupyterlab-vim</code> worked again.</li>
</ul>
<p><strong>What should I do to repair this issue?</strong></p>
<p>My jupyter lab environment:</p>
<pre><code>jupyter_client 8.3.0
jupyter_core 5.3.1
jupyter-events 0.6.3
jupyter-lsp 2.2.0
jupyter_server 2.7.0
jupyter_server_terminals 0.4.4
jupyterlab 4.0.2
jupyterlab-pygments 0.2.2
jupyterlab_server 2.23.0
jupyterlab-vim 0.16.0
</code></pre>
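For what it's worth, a plausible reading of the listing (an assumption): the installed <code>@axlair/jupyterlab_vim v0.16.0</code> is a JupyterLab 3-era build, and the <code>X</code> marks it as incompatible with JupyterLab 4. Commands of the kind that usually resolve such a mismatch, to adapt as needed:

```shell
# remove the stale JupyterLab 3-era build, then install the latest release,
# which targets current JupyterLab versions
pip uninstall -y jupyterlab-vim
pip install --upgrade jupyterlab-vim
jupyter labextension list   # the vim extension should now appear without an X
```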
| <python><jupyter-notebook><jupyter><jupyter-lab> | 2023-07-10 14:34:33 | 1 | 451 | theabc50111 |
76,654,672 | 11,028,689 | How to add labels in panda dataframe columns with else condition? | <p>I have a dataframe with a column like this:</p>
<pre><code>POLITICS
BUSINESS
TRAVEL
SPORTS
....
DIVORCE
ARTS
WELLNESS
CRIME
</code></pre>
<p>e.g</p>
<pre><code>import pandas as pd
data = [['CRIME', 10], ['BUSINESS', 15], ['SPORTS', 12], ['TRAVEL', 2], ['WELLNESS', 3], ['ARTS', 25]]
df = pd.DataFrame(data, columns=['category', 'no'])
df
</code></pre>
<p>I want to add a column 'label' and map four categories to labels like so</p>
<pre><code>label_dict = {'CRIME':1, 'BUSINESS':2, 'SPORTS':3, 'ARTS':4}
</code></pre>
<p>and then all of the remaining categories should be labeled as 5.
I have tried this and am getting a KeyError: 'label'.</p>
<pre><code>df['label'] = df['category'].apply( lambda x : label_dict[x] if x in label_dict.keys() else 5)
</code></pre>
<p>How can I achieve this?</p>
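A common vectorized alternative (a sketch on the example frame): <code>Series.map</code> leaves unmapped categories as NaN, and <code>fillna</code> then supplies the default label:

```python
import pandas as pd

data = [['CRIME', 10], ['BUSINESS', 15], ['SPORTS', 12],
        ['TRAVEL', 2], ['WELLNESS', 3], ['ARTS', 25]]
df = pd.DataFrame(data, columns=['category', 'no'])

label_dict = {'CRIME': 1, 'BUSINESS': 2, 'SPORTS': 3, 'ARTS': 4}

# map() gives NaN for categories missing from the dict; fillna fills the rest
df['label'] = df['category'].map(label_dict).fillna(5).astype(int)

assert df['label'].tolist() == [1, 2, 3, 5, 5, 4]
```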
| <python><pandas><dataframe><list-comprehension> | 2023-07-10 14:20:59 | 1 | 1,299 | Bluetail |
76,654,631 | 3,590,940 | How to represent a data type string as PolarsDataType | <p>According to the <a href="https://pola-rs.github.io/polars/py-polars/html/reference/api/polars.read_csv.html" rel="nofollow noreferrer">documentation</a> and examples when using <code>read_csv()</code>, we can only use <code>PolarsDataTypes</code> as the values in the map for dtypes:</p>
<pre><code>dtypes: Mapping[str, PolarsDataType] | Sequence[PolarsDataType] | None = None,
</code></pre>
<p>I have a JSON config where I have a map of the columns and their datatypes but as strings like so:</p>
<pre><code> "columns_dtypes_polars": {
"pcd": "pl.Utf8",
"streg": "pl.Int64",
"oac11": "pl.Utf8",
"lat": "pl.Float64",
"long": "pl.Float64",
"imd": "pl.Int64"
}
</code></pre>
<p>When I try to use this map after reading it into Python, the values are still strings rather than PolarsDataTypes, and Polars throws an error. I can't put the raw type objects in the JSON, as that would be invalid. I have a ton of fields, so I do need to use the <code>dtypes</code> parameter.</p>
<p>So my main question is how do I convert the string representation <code>"pl.Int64"</code> to raw PolarsDataType representation <code>pl.Int64</code> so I can use it in the <code>read_csv()</code> <code>dtype</code> parameter?</p>
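One safe way to resolve such strings without <code>eval()</code> (a sketch; the helper name is made up): split off the attribute part after the dot and look it up on the module object with <code>getattr</code>. The demonstration below exercises the same mechanism on a stdlib module so it runs without polars installed:

```python
def resolve_dtype(name, module):
    """Turn a string like 'pl.Int64' into the attribute module.Int64.

    Assumes every entry has the form '<alias>.<attr>'.
    """
    attr = name.split('.', 1)[1]
    return getattr(module, attr)

# With polars installed this would become:
#   import polars as pl
#   dtypes = {col: resolve_dtype(s, pl)
#             for col, s in config["columns_dtypes_polars"].items()}
#   pl.read_csv(path, dtypes=dtypes)

# same mechanism, demonstrated on a module that is always available
import math
assert resolve_dtype('m.pi', math) == math.pi
```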
| <python><python-polars> | 2023-07-10 14:15:26 | 1 | 339 | Bish |
76,654,416 | 17,530,552 | How to perform a pairwise correlation in a Pandas dataframe containing lists? | <p>I created a Pandas dataframe, called <code>df2</code>, which looks as follows when I print it:</p>
<pre><code> 728
100610 [0.8872128569054152, 0.6275935748500376, 0.105...
102311 [-0.9484644612008593, -1.7934280570087853, -2....
104416 [0.1664251633793124, 0.1116268791242702, 0.050...
105923 [-0.2307886056759264, -0.5762187864896702, -0....
</code></pre>
<p>The column on the very left (<code>100610 102311 104416 105923</code>) are four subjects from which the data stems. Every subject has a time-series of 150 sampling points. For example, subject <code>100610</code> has the time-series <code>[0.8872128569054152, 0.6275935748500376, 0.105...</code> and so on.</p>
<p>I am running a sliding window pairwise correlation approach (728 sliding windows). The number <code>728</code> denotes the last sliding window in a for loop. The output above is a paradigmatic example of the very last sliding window of <code>df2</code>.</p>
<p><strong>Aim:</strong> I would like to run a pairwise correlation between the four subjects (between the subjects’ time-series) as follows:</p>
<pre><code>pairwise_cor = df2.corr(method="pearson")
</code></pre>
<p>However, this results in the following and empty output for <code>pairwise_cor</code>:</p>
<pre><code>Empty DataFrame
Columns: []
Index: []
</code></pre>
<p><strong>Question:</strong>
How do I have to modify the code for <code>pairwise_cor = df2.corr(method="pearson")</code> so that the code does not produce an empty output?</p>
<p>As far as my understanding goes, the problem is based on the fact that every row contains a list or array of values. <code>pairwise_cor = df2.corr(method="pearson")</code> would probably work if I could transpose the dataframe so that every column corresponds to one subject, and every row to one value of the list. Is that correct? How could I modify the dataframe so change it accordingly?</p>
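That understanding seems right: <code>corr</code> skips object-dtype columns, so a column of lists yields an empty frame. A sketch on synthetic stand-in data (four fabricated 150-point series, not the real subjects): expand each row's list into its own column, one column per subject, then correlate:

```python
import numpy as np
import pandas as pd

# fabricated stand-in for df2: one list-valued column, one row per subject
t = np.linspace(0, 3, 150)
df2 = pd.DataFrame({728: [list(np.sin(t)), list(np.cos(t)),
                          list(np.linspace(0, 1, 150)),
                          list(np.linspace(1, 0, 150))]},
                   index=[100610, 102311, 104416, 105923])

# corr() ignores object columns, hence the empty result; spread each
# subject's list into its own numeric column first
wide = pd.DataFrame({subj: series for subj, series in df2[728].items()})
pairwise_cor = wide.corr(method='pearson')

assert pairwise_cor.shape == (4, 4)
assert abs(pairwise_cor.loc[104416, 105923] + 1.0) < 1e-9  # perfectly anti-correlated
```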
| <python><pandas><dataframe> | 2023-07-10 13:50:04 | 1 | 415 | Philipp |
76,654,376 | 2,812,625 | Python Convert column values of 1-50 to 1-10 | <p>I have values in a column ranging from 1 to 50. What is the easiest way to map each consecutive block of 5 values down to a 1-10 scale? (e.g. [1,2,3,4,5] = 1, [6,7,8,9,10] = 2 )</p>
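Integer arithmetic covers this directly: <code>(x - 1) // 5 + 1</code> sends each block of five consecutive values to one label. A quick sketch:

```python
vals = list(range(1, 51))
scaled = [(x - 1) // 5 + 1 for x in vals]   # 1-5 -> 1, 6-10 -> 2, ..., 46-50 -> 10

assert scaled[:5] == [1] * 5
assert scaled[5:10] == [2] * 5
assert scaled[-1] == 10
# with pandas this vectorizes as: df['col'] = (df['col'] - 1) // 5 + 1
```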
| <python><mapping> | 2023-07-10 13:45:48 | 1 | 446 | Tinkinc |
76,654,129 | 2,925,716 | Why do I get a runtime error with this Python YouTube downloader code? | <p>How can I download a video from YouTube? Is there a minimal working example (MWE)?
I want to download a playlist from the YouTube site;
specifically, I want to download this video:
<code>https://www.youtube.com/watch?v=_oHTzIsLMGc</code></p>
<p>With this code</p>
<pre><code>from pytube import YouTube
#ask for the link from user
link = input("Enter the link of YouTube video you want to download: ")
yt = YouTube(link)
#Showing details
print("Title: ",yt.title)
print("Number of views: ",yt.views)
print("Length of video: ",yt.length)
print("Rating of video: ",yt.rating)
#Getting the highest resolution possible
ys = yt.streams.get_highest_resolution()
#Starting download
print("Downloading...")
ys.download()
print("Download completed!!")
</code></pre>
<p>I'm getting this
<strong>ERROR</strong></p>
<pre><code>$ python dl.py
Enter the link of YouTube video you want to download: https://www.youtube.com/watch?v=_oHTzIsLMGc
Title: CAER EN EL SUEÑO PROFUNDO,Sanación del Estrés,Ansiedad y Estados Depresivos,Restauración Corporal#16
Number of views: 34789
Length of video: 367200
Rating of video: None
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/pytube/__main__.py", line 181, in fmt_streams
extract.apply_signature(stream_manifest, self.vid_info, self.js)
File "/usr/local/lib/python3.9/site-packages/pytube/extract.py", line 409, in apply_signature
cipher = Cipher(js=js)
File "/usr/local/lib/python3.9/site-packages/pytube/cipher.py", line 43, in __init__
self.throttling_plan = get_throttling_plan(js)
File "/usr/local/lib/python3.9/site-packages/pytube/cipher.py", line 405, in get_throttling_plan
raw_code = get_throttling_function_code(js)
File "/usr/local/lib/python3.9/site-packages/pytube/cipher.py", line 311, in get_throttling_function_code
name = re.escape(get_throttling_function_name(js))
File "/usr/local/lib/python3.9/site-packages/pytube/cipher.py", line 296, in get_throttling_function_name
raise RegexMatchError(
pytube.exceptions.RegexMatchError: get_throttling_function_name: could not find match for multiple
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/cygdrive/c/Users/hynek0/Desktop/FU/VILLA/dl.py", line 13, in <module>
ys = yt.streams.get_highest_resolution()
File "/usr/local/lib/python3.9/site-packages/pytube/__main__.py", line 296, in streams
return StreamQuery(self.fmt_streams)
File "/usr/local/lib/python3.9/site-packages/pytube/__main__.py", line 188, in fmt_streams
extract.apply_signature(stream_manifest, self.vid_info, self.js)
File "/usr/local/lib/python3.9/site-packages/pytube/extract.py", line 409, in apply_signature
cipher = Cipher(js=js)
File "/usr/local/lib/python3.9/site-packages/pytube/cipher.py", line 43, in __init__
self.throttling_plan = get_throttling_plan(js)
File "/usr/local/lib/python3.9/site-packages/pytube/cipher.py", line 405, in get_throttling_plan
raw_code = get_throttling_function_code(js)
File "/usr/local/lib/python3.9/site-packages/pytube/cipher.py", line 311, in get_throttling_function_code
name = re.escape(get_throttling_function_name(js))
File "/usr/local/lib/python3.9/site-packages/pytube/cipher.py", line 296, in get_throttling_function_name
raise RegexMatchError(
pytube.exceptions.RegexMatchError: get_throttling_function_name: could not find match for multiple
</code></pre>
<p><strong>EDIT line 264</strong></p>
<pre><code>$ pytube https://www.youtube.com/watch?v=_oHTzIsLMGc
Loading video...
Traceback (most recent call last):
File "/usr/local/bin/pytube", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.9/site-packages/pytube/cli.py", line 53, in main
_perform_args_on_youtube(youtube, args)
File "/usr/local/lib/python3.9/site-packages/pytube/cli.py", line 60, in _perform_args_on_youtube
download_highest_resolution_progressive(
File "/usr/local/lib/python3.9/site-packages/pytube/cli.py", line 479, in download_highest_resolution_progressive
_download(stream, target=target)
File "/usr/local/lib/python3.9/site-packages/pytube/cli.py", line 256, in _download
filesize_megabytes = stream.filesize // 1048576
File "/usr/local/lib/python3.9/site-packages/pytube/streams.py", line 157, in filesize
self._filesize = request.filesize(self.url)
File "/usr/local/lib/python3.9/site-packages/pytube/request.py", line 204, in filesize
return int(head(url)["content-length"])
File "/usr/local/lib/python3.9/site-packages/pytube/request.py", line 268, in head
response_headers = _execute_request(url, method="HEAD").info()
File "/usr/local/lib/python3.9/site-packages/pytube/request.py", line 37, in _execute_request
return urlopen(request, timeout=timeout) # nosec
File "/usr/lib/python3.9/urllib/request.py", line 214, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python3.9/urllib/request.py", line 523, in open
response = meth(req, response)
File "/usr/lib/python3.9/urllib/request.py", line 632, in http_response
response = self.parent.error(
File "/usr/lib/python3.9/urllib/request.py", line 561, in error
return self._call_chain(*args)
File "/usr/lib/python3.9/urllib/request.py", line 494, in _call_chain
result = func(*args)
File "/usr/lib/python3.9/urllib/request.py", line 641, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 500: Internal Server Error
</code></pre>
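Note that this <code>RegexMatchError</code> is a known pytube failure mode rather than a bug in the script: pytube's cipher-parsing regexes break whenever YouTube changes its player JavaScript. Two commonly suggested remedies, shown as shell commands to adapt (upgrading may or may not have a fixed release available yet; yt-dlp is a more actively maintained alternative):

```shell
# try the newest pytube first, in case a fix has already been released
pip install --upgrade pytube
# otherwise fall back to yt-dlp, which tracks YouTube changes more closely
pip install yt-dlp
yt-dlp "https://www.youtube.com/watch?v=_oHTzIsLMGc"
```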
| <python><error-handling><runtime-error> | 2023-07-10 13:15:38 | 3 | 1,019 | user2925716 |
76,654,117 | 1,845,408 | How to deal with "This model's maximum context length is 4097 tokens." issue in Scikit-LLM | <p>I am trying the Scikit-LLM on a StackOverflow question dataset comprising around 7k rows. Below is the code where I train and test a Zero Shot Classifier.</p>
<pre><code>X_train, X_test, y_train, y_test = train_test_split(
    _soQuestions['Body'], _soQuestions['isClosed'],
    test_size=0.33, random_state=42, stratify=_soQuestions['isClosed'])
#%%
from skllm import ZeroShotGPTClassifier
clf = ZeroShotGPTClassifier(openai_model="gpt-3.5-turbo")
clf.fit(X_train, y_train)
labels = clf.predict(X_test)
</code></pre>
<p>After half an hour, I received the following error. However, I have no idea how to divide the dataset into chunks of proper sizes.</p>
<blockquote>
<p>Could not obtain the completion after 3 retries: <code>InvalidRequestError :: This model's maximum context length is 4097 tokens. However, your messages resulted in 4438 tokens. Please reduce the length of the messages.</code></p>
</blockquote>
<p>I appreciate any advice.</p>
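A crude pre-truncation sketch (the 4-characters-per-token ratio is a rough heuristic and the limits below are guesses; <code>tiktoken</code> would give exact counts): since the 4097-token budget covers both prompt and completion, clipping long question bodies before <code>fit</code>/<code>predict</code> keeps every request under the ceiling:

```python
MAX_CHARS = 4000 * 4 // 2   # ~half the token budget, at ~4 chars/token (a guess)

def truncate(text, limit=MAX_CHARS):
    """Clip overlong documents so the prompt stays inside the context window."""
    return text if len(text) <= limit else text[:limit]

docs = ['short question', 'x' * 50_000]   # stand-in for the Body column
clipped = [truncate(d) for d in docs]

assert clipped[0] == 'short question'
assert len(clipped[1]) == MAX_CHARS
# with pandas: X_train = X_train.apply(truncate) before clf.fit(X_train, y_train)
```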
| <python><scikit-learn><large-language-model> | 2023-07-10 13:14:04 | 0 | 8,321 | renakre |
76,653,974 | 16,383,578 | Repeat elements in nested lists each a different number of times, why smarter methods are slower? | <p>I saw this <a href="https://stackoverflow.com/questions/76651767/multiply-list-in-list-string-items-with-list-in-list-integers">question</a> today, and clearly the asker didn't show any research effort at all. But someone posted an answer whose code was very straightforward and verbose, so I wanted to post a more concise and elegant solution, and I wanted the smarter method to be faster.</p>
<p>To save you a click, the problem is: given two nested lists of the same shape, where the second contains only integers, repeat each bottom-level element of the first nested list as many times as the corresponding integer in the second list indicates.</p>
<p>Example:</p>
<pre><code>data = ([2, 0, 2, 2],
[3, 3, 1, 2],
[1, 0, 3, 3],
[1, 1, 1, 2],
[0, 0, 2, 1],
[0, 1, 3, 3],
[3, 1, 3, 2],
[1, 0, 1, 2])
mult = ([3, 0, 0, 3],
[2, 2, 1, 1],
[0, 2, 2, 1],
[3, 3, 3, 2],
[0, 2, 3, 2],
[1, 1, 3, 2],
[3, 1, 2, 3],
[3, 2, 0, 0])
output = deque([[2, 2, 2, 2, 2, 2],
[3, 3, 3, 3, 1, 2],
[0, 0, 3, 3, 3],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2],
[0, 0, 2, 2, 2, 1, 1],
[0, 1, 3, 3, 3, 3, 3],
[3, 3, 3, 1, 3, 3, 2, 2, 2],
[1, 1, 1, 0, 0]])
</code></pre>
<p>I quickly came up with a list comprehension solution:</p>
<pre><code>def repeat_element_listcomp(data, mult):
return [[i for i, j in zip(a, b) for _ in range(j)] for a, b in zip(data, mult)]
</code></pre>
<p>But I was surprised to find it slower than the simple solution of the <a href="https://stackoverflow.com/a/76651822/16383578">first answer</a>:</p>
<pre><code>def repeat_element_zero(data, mult):
combined = []
for sublist1, sublist2 in zip(data, mult):
sublist = []
for elem1, elem2 in zip(sublist1, sublist2):
sublist.extend([elem1]* elem2)
combined.append(sublist)
return combined
</code></pre>
<pre><code>In [229]: %timeit repeat_element_zero(data, mult)
9.86 µs ± 129 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
In [230]: %timeit repeat_element(data, mult)
14.1 µs ± 156 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
</code></pre>
<p>I wasted many minutes trying to come up with a more efficient solution; I tried several more clever methods, and all of them were somehow slower. I then posted an <a href="https://stackoverflow.com/a/76651985/16383578">answer</a> there.</p>
<p>Setup:</p>
<pre><code>import random
from collections import deque
from functools import reduce
from itertools import chain
from operator import iconcat
def random_numbers(n):
return random.choices(range(n), k=n)
def make_data(n):
return random_numbers(n), random_numbers(n)
def make_sample(n, limit=300):
return list(zip(*[make_data(limit) for _ in range(n)]))
def repeat_element_zero(data, mult):
combined = []
for sublist1, sublist2 in zip(data, mult):
sublist = []
for elem1, elem2 in zip(sublist1, sublist2):
sublist.extend([elem1]* elem2)
combined.append(sublist)
return combined
def repeat_element_listcomp(data, mult):
return [[i for i, j in zip(a, b) for _ in range(j)] for a, b in zip(data, mult)]
def repeat_element_chain(data, mult):
return [list(chain(*([i]*j for i, j in zip(a, b)))) for a, b in zip(data, mult)]
def repeat_element_helper(data, mult):
return reduce(iconcat, ([i]*j for i, j in zip(data, mult)), [])
def repeat_element(data, mult):
return deque(map(repeat_element_helper, data, mult))
approaches=[
repeat_element_listcomp,
repeat_element_chain,
repeat_element
]
run_performance_comparison(approaches,[1000,2000,3000],setup=make_sample)
</code></pre>
<p>Performance:</p>
<p><a href="https://i.sstatic.net/p5gyp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p5gyp.png" alt="enter image description here" /></a></p>
<pre><code>In [188]: data, mult = make_sample(32, 10)
In [189]: %timeit repeat_element_zero(data, mult)
102 µs ± 3.36 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [190]: %timeit repeat_element_listcomp(data, mult)
145 µs ± 3.55 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [191]: %timeit repeat_element_chain(data, mult)
141 µs ± 4.74 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [192]: %timeit repeat_element(data, mult)
127 µs ± 1.4 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [193]: data, mult = make_sample(32, 32)
In [194]: %timeit repeat_element(data, mult)
576 µs ± 10.8 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [195]: %timeit repeat_element_chain(data, mult)
647 µs ± 16.1 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [196]: %timeit repeat_element_listcomp(data, mult)
837 µs ± 12.6 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [197]: %timeit repeat_element_zero(data, mult)
465 µs ± 15.8 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [198]: data, mult = make_sample(256, 32)
In [199]: %timeit repeat_element_zero(data, mult)
3.69 ms ± 64.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [200]: %timeit repeat_element(data, mult)
4.47 ms ± 88.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [201]: %timeit repeat_element_listcomp(data, mult)
7.01 ms ± 688 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
<p>Profiling Code:</p>
<pre><code>import timeit
import matplotlib.pyplot as plt
from typing import List, Dict, Callable
from contextlib import contextmanager


@contextmanager
def data_provider(data_size, setup=lambda N: N, teardown=lambda: None):
    data = setup(data_size)
    yield data
    teardown()


def run_performance_comparison(approaches: List[Callable],
                               data_size: List[int],
                               setup=lambda N: N,
                               teardown=lambda: None,
                               number_of_repetitions=5, title='N'):
    approach_times: Dict[Callable, List[float]] = {approach: [] for approach in approaches}

    for N in data_size:
        with data_provider(N, setup, teardown) as data:
            for approach in approaches:
                approach_time = timeit.timeit(lambda: approach(*data), number=number_of_repetitions)
                approach_times[approach].append(approach_time)

    for approach in approaches:
        plt.plot(data_size, approach_times[approach], label=approach.__name__)

    plt.xlabel(title)
    plt.ylabel('Execution Time (seconds)')
    plt.title('Performance Comparison')
    plt.legend()
    plt.show()
</code></pre>
<p>I want to know why all of my smart methods are slower. What is going on here? Why are these methods, which normally increase performance, making the code slower in this case? I guess this must be an implementation detail of CPython; in case the version matters, I am using <code>Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]</code>. I don't know C yet, so I don't know the low-level details, but I am really curious and would like an explanation.</p>
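<p>As a side note for anyone reducing this further: a quick, hedged way to probe the per-call-overhead hypothesis is to time the same element-wise work once inline and once routed through an extra function call per element (the function names here are illustrative, not from the code above):</p>

```python
import timeit

data = list(range(32))

def double_inline():
    # all work happens inside one code object, no extra calls per element
    return [x * 2 for x in data]

def double_via_calls():
    # same result, but one lambda call per element
    return list(map(lambda x: x * 2, data))

# identical results, different amounts of per-element call machinery
assert double_inline() == double_via_calls()

t_inline = timeit.timeit(double_inline, number=10_000)
t_calls = timeit.timeit(double_via_calls, number=10_000)
print(f"inline: {t_inline:.4f}s  via calls: {t_calls:.4f}s")
```

<p>If the call-routed variant is consistently slower for the same result, that supports the idea that function-call and generator overhead, not the looping itself, dominates at these sizes.</p>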
<hr />
<p>Disassembly of the functions:</p>
<pre><code>In [232]: import dis
In [233]: dis.dis(repeat_element_zero)
11 0 BUILD_LIST 0
2 STORE_FAST 2 (combined)
12 4 LOAD_GLOBAL 0 (zip)
6 LOAD_FAST 0 (data)
8 LOAD_FAST 1 (mult)
10 CALL_FUNCTION 2
12 GET_ITER
>> 14 FOR_ITER 29 (to 74)
16 UNPACK_SEQUENCE 2
18 STORE_FAST 3 (sublist1)
20 STORE_FAST 4 (sublist2)
13 22 BUILD_LIST 0
24 STORE_FAST 5 (sublist)
14 26 LOAD_GLOBAL 0 (zip)
28 LOAD_FAST 3 (sublist1)
30 LOAD_FAST 4 (sublist2)
32 CALL_FUNCTION 2
34 GET_ITER
>> 36 FOR_ITER 12 (to 62)
38 UNPACK_SEQUENCE 2
40 STORE_FAST 6 (elem1)
42 STORE_FAST 7 (elem2)
15 44 LOAD_FAST 5 (sublist)
46 LOAD_METHOD 1 (extend)
48 LOAD_FAST 6 (elem1)
50 BUILD_LIST 1
52 LOAD_FAST 7 (elem2)
54 BINARY_MULTIPLY
56 CALL_METHOD 1
58 POP_TOP
60 JUMP_ABSOLUTE 18 (to 36)
17 >> 62 LOAD_FAST 2 (combined)
64 LOAD_METHOD 2 (append)
66 LOAD_FAST 5 (sublist)
68 CALL_METHOD 1
70 POP_TOP
72 JUMP_ABSOLUTE 7 (to 14)
19 >> 74 LOAD_FAST 2 (combined)
76 RETURN_VALUE
In [234]: dis.dis(repeat_element)
31 0 LOAD_GLOBAL 0 (deque)
2 LOAD_GLOBAL 1 (map)
4 LOAD_GLOBAL 2 (repeat_element_helper)
6 LOAD_FAST 0 (data)
8 LOAD_FAST 1 (mult)
10 CALL_FUNCTION 3
12 CALL_FUNCTION 1
14 RETURN_VALUE
In [235]: dis.dis(repeat_element_helper)
28 0 LOAD_GLOBAL 0 (reduce)
2 LOAD_GLOBAL 1 (iconcat)
4 LOAD_CONST 1 (<code object <genexpr> at 0x000001C86CC20240, file "<ipython-input-225-c385a750c738>", line 28>)
6 LOAD_CONST 2 ('repeat_element_helper.<locals>.<genexpr>')
8 MAKE_FUNCTION 0
10 LOAD_GLOBAL 2 (zip)
12 LOAD_FAST 0 (data)
14 LOAD_FAST 1 (mult)
16 CALL_FUNCTION 2
18 GET_ITER
20 CALL_FUNCTION 1
22 BUILD_LIST 0
24 CALL_FUNCTION 3
26 RETURN_VALUE
Disassembly of <code object <genexpr> at 0x000001C86CC20240, file "<ipython-input-225-c385a750c738>", line 28>:
0 GEN_START 0
28 2 LOAD_FAST 0 (.0)
>> 4 FOR_ITER 10 (to 26)
6 UNPACK_SEQUENCE 2
8 STORE_FAST 1 (i)
10 STORE_FAST 2 (j)
12 LOAD_FAST 1 (i)
14 BUILD_LIST 1
16 LOAD_FAST 2 (j)
18 BINARY_MULTIPLY
20 YIELD_VALUE
22 POP_TOP
24 JUMP_ABSOLUTE 2 (to 4)
>> 26 LOAD_CONST 0 (None)
28 RETURN_VALUE
In [236]: dis.dis(repeat_element_listcomp)
22 0 LOAD_CONST 1 (<code object <listcomp> at 0x000001C86CE6B7E0, file "<ipython-input-225-c385a750c738>", line 22>)
2 LOAD_CONST 2 ('repeat_element_listcomp.<locals>.<listcomp>')
4 MAKE_FUNCTION 0
6 LOAD_GLOBAL 0 (zip)
8 LOAD_FAST 0 (data)
10 LOAD_FAST 1 (mult)
12 CALL_FUNCTION 2
14 GET_ITER
16 CALL_FUNCTION 1
18 RETURN_VALUE
Disassembly of <code object <listcomp> at 0x000001C86CE6B7E0, file "<ipython-input-225-c385a750c738>", line 22>:
22 0 BUILD_LIST 0
2 LOAD_FAST 0 (.0)
>> 4 FOR_ITER 14 (to 34)
6 UNPACK_SEQUENCE 2
8 STORE_FAST 1 (a)
10 STORE_FAST 2 (b)
12 LOAD_CONST 0 (<code object <listcomp> at 0x000001C86489E550, file "<ipython-input-225-c385a750c738>", line 22>)
14 LOAD_CONST 1 ('repeat_element_listcomp.<locals>.<listcomp>.<listcomp>')
16 MAKE_FUNCTION 0
18 LOAD_GLOBAL 0 (zip)
20 LOAD_FAST 1 (a)
22 LOAD_FAST 2 (b)
24 CALL_FUNCTION 2
26 GET_ITER
28 CALL_FUNCTION 1
30 LIST_APPEND 2
32 JUMP_ABSOLUTE 2 (to 4)
>> 34 RETURN_VALUE
Disassembly of <code object <listcomp> at 0x000001C86489E550, file "<ipython-input-225-c385a750c738>", line 22>:
22 0 BUILD_LIST 0
2 LOAD_FAST 0 (.0)
>> 4 FOR_ITER 13 (to 32)
6 UNPACK_SEQUENCE 2
8 STORE_FAST 1 (i)
10 STORE_FAST 2 (j)
12 LOAD_GLOBAL 0 (range)
14 LOAD_FAST 2 (j)
16 CALL_FUNCTION 1
18 GET_ITER
>> 20 FOR_ITER 4 (to 30)
22 STORE_FAST 3 (_)
24 LOAD_FAST 1 (i)
26 LIST_APPEND 3
28 JUMP_ABSOLUTE 10 (to 20)
>> 30 JUMP_ABSOLUTE 2 (to 4)
>> 32 RETURN_VALUE
In [237]: dis.dis(repeat_element_chain)
25 0 LOAD_CONST 1 (<code object <listcomp> at 0x000001C86CC22600, file "<ipython-input-225-c385a750c738>", line 25>)
2 LOAD_CONST 2 ('repeat_element_chain.<locals>.<listcomp>')
4 MAKE_FUNCTION 0
6 LOAD_GLOBAL 0 (zip)
8 LOAD_FAST 0 (data)
10 LOAD_FAST 1 (mult)
12 CALL_FUNCTION 2
14 GET_ITER
16 CALL_FUNCTION 1
18 RETURN_VALUE
Disassembly of <code object <listcomp> at 0x000001C86CC22600, file "<ipython-input-225-c385a750c738>", line 25>:
25 0 BUILD_LIST 0
2 LOAD_FAST 0 (.0)
>> 4 FOR_ITER 18 (to 42)
6 UNPACK_SEQUENCE 2
8 STORE_FAST 1 (a)
10 STORE_FAST 2 (b)
12 LOAD_GLOBAL 0 (list)
14 LOAD_GLOBAL 1 (chain)
16 LOAD_CONST 0 (<code object <genexpr> at 0x000001C86BF07260, file "<ipython-input-225-c385a750c738>", line 25>)
18 LOAD_CONST 1 ('repeat_element_chain.<locals>.<listcomp>.<genexpr>')
20 MAKE_FUNCTION 0
22 LOAD_GLOBAL 2 (zip)
24 LOAD_FAST 1 (a)
26 LOAD_FAST 2 (b)
28 CALL_FUNCTION 2
30 GET_ITER
32 CALL_FUNCTION 1
34 CALL_FUNCTION_EX 0
36 CALL_FUNCTION 1
38 LIST_APPEND 2
40 JUMP_ABSOLUTE 2 (to 4)
>> 42 RETURN_VALUE
Disassembly of <code object <genexpr> at 0x000001C86BF07260, file "<ipython-input-225-c385a750c738>", line 25>:
0 GEN_START 0
25 2 LOAD_FAST 0 (.0)
>> 4 FOR_ITER 10 (to 26)
6 UNPACK_SEQUENCE 2
8 STORE_FAST 1 (i)
10 STORE_FAST 2 (j)
12 LOAD_FAST 1 (i)
14 BUILD_LIST 1
16 LOAD_FAST 2 (j)
18 BINARY_MULTIPLY
20 YIELD_VALUE
22 POP_TOP
24 JUMP_ABSOLUTE 2 (to 4)
>> 26 LOAD_CONST 0 (None)
28 RETURN_VALUE
</code></pre>
<p>I barely understand any of these opcodes.</p>
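<p>For readers decoding the dumps above: each row is source line, byte offset, opcode name, argument, and a human-readable argument. You can also walk the instructions programmatically with <code>dis.Bytecode</code> (a small illustrative function, not one from the question):</p>

```python
import dis

def double(x):
    return x * 2

# each Instruction carries the opcode name (e.g. LOAD_FAST loads a local,
# BINARY_* applies an operator) plus a readable argument
for ins in dis.Bytecode(double):
    print(ins.offset, ins.opname, ins.argrepr)
```

<p>Exact opcode names vary between CPython versions (e.g. 3.11 collapsed the <code>BINARY_*</code> family into <code>BINARY_OP</code>), so compare against the docs for your interpreter.</p>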
| <python><python-3.x><performance> | 2023-07-10 12:56:52 | 1 | 3,930 | Ξένη Γήινος |
76,653,956 | 20,266,647 | MLRun, Issue with slow response times | <p>I see high throughput and a long average response delay (requests wait 20-50 seconds for a worker); see the output from Grafana:</p>
<p><a href="https://i.sstatic.net/agbi5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/agbi5.png" alt="enter image description here" /></a></p>
<p>I know, that part of optimization can be:</p>
<ul>
<li>use more workers (for each pod/replica)</li>
<li>increase resources for each pod/replica</li>
<li>use more pods/replicas in k8s</li>
</ul>
<p>I tuned performance by increasing resources and pods/replicas; see:</p>
<pre><code># increase resources (for faster execution)
fn.with_requests(mem="500Mi", cpu=0.5)  # default resources
fn.with_limits(mem="2Gi", cpu=1)        # maximum resources

# increase parallel execution by scaling pods/replicas
fn.spec.replicas = 2 # default replicas
fn.spec.min_replicas = 2 # min replicas
fn.spec.max_replicas = 5 # max replicas
</code></pre>
<p>Do you know how I can increase the number of workers, and what the expected impact on CPU/memory would be?</p>
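<p>For reference (and as an assumption about your setup, since I can't see it): in Nuclio, per-replica concurrency is a property of the function's HTTP trigger rather than of the pod resources, so increasing workers usually means raising the trigger's worker count. A hedged sketch of the Nuclio function spec — check your MLRun/Nuclio version's docs for the exact helper that sets this:</p>

```yaml
spec:
  triggers:
    http:
      kind: http
      maxWorkers: 8   # workers per replica
```

<p>Each worker holds one in-flight request, so CPU/memory demand scales roughly with workers times the per-request footprint; the requests/limits you already set bound that per pod.</p>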
| <python><mlrun><nuclio> | 2023-07-10 12:54:39 | 1 | 1,390 | JIST |
76,653,864 | 15,913,281 | Finding List Index of Values in Dataframe Column | <p>Given the following dataframe, how do I create a new column called "MemWeight" containing the index position in "mem_list" of each value in the Weighting column?</p>
<pre><code>data = {'MemRef': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 'MemName': ['a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a'], 'Weighting': [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 1.97, 2, 2, 2, 2, 2, 2, 2, 2]}
df = pd.DataFrame.from_dict(data)
mem_list = [1.96, 1.97, 1.98, 1.99, 2]
</code></pre>
<p>The following does not work and returns the error below:</p>
<pre><code>df["MemWeight"] = mem_list.index(df["Weighting"])

Traceback (most recent call last):
  File "E:/Documents/PycharmProjects/test.py", line 270, in <module>
    df["MemWeight"] = mem_list.index(df["Weighting"])
  File "C:\Users\xxxx\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\core\generic.py", line 1538, in __nonzero__
    f"The truth value of a {type(self).__name__} is ambiguous. "
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>None of the suggestions in the error message work; they give a myriad of other errors.</p>
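<p>For what it's worth, one vectorised way to express this (a sketch; it assumes every value in <code>Weighting</code> actually occurs in <code>mem_list</code>) is to build the value-to-index mapping once and apply it element-wise, rather than calling <code>list.index</code> on a whole Series:</p>

```python
mem_list = [1.96, 1.97, 1.98, 1.99, 2]
weights = [2, 2, 1.97, 2]  # stand-in for df["Weighting"]

# value -> position, built once instead of one list.index call per row
mapping = {v: i for i, v in enumerate(mem_list)}
mem_weight = [mapping[w] for w in weights]
print(mem_weight)  # -> [4, 4, 1, 4]
```

<p>With the dataframe itself this becomes <code>df["MemWeight"] = df["Weighting"].map(mapping)</code>, which sidesteps the ambiguous-truth-value error because the lookup is applied per element.</p>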
| <python><pandas> | 2023-07-10 12:43:46 | 2 | 471 | Robsmith |
76,653,765 | 14,336,726 | How to activate a venv in VSC? | <p>I have a Python script open in VSC. I have a venv (Python 3.10.0) selected as the kernel. In the terminal I see the following prompt:</p>
<pre><code>PS C:\Users\person123\Desktop\Project\venv>
</code></pre>
<p>I think my venv is not activated. Am I correct, and how do I activate it? The instructions I've seen (for example <code>venv\Scripts\activate</code>) don't activate this venv. Thanks for your assistance.</p>
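<p>Two things often get conflated here: the interpreter/kernel VS Code selects for running the file, and activation of the integrated terminal. On Windows PowerShell the activation script is typically <code>venv\Scripts\Activate.ps1</code> (the extension-less <code>activate</code> is for cmd/bash), and PowerShell's execution policy can silently block it. To verify which interpreter your code actually runs under, a quick stdlib check (illustrative):</p>

```python
import sys

def in_virtualenv() -> bool:
    # inside a venv, sys.prefix points at the venv while
    # sys.base_prefix points at the base Python installation
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

print(sys.executable)
print(in_virtualenv())
```

<p>If this prints the venv's python path and <code>True</code> when run from VS Code, the selected interpreter is the venv one even if the terminal prompt suggests otherwise.</p>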
| <python><visual-studio-code> | 2023-07-10 12:29:46 | 2 | 480 | Espejito |
76,653,751 | 3,607,738 | Flask unable to connect to mongodb on same server | <p>I am using Flask 1.1.2 with MongoDB 4.2</p>
<p>The MongoDB server can be accessed without entering an username or a password, so I assumed I would not need those in the configuration files</p>
<p>But I am now meeting this error when I'm trying an endpoint that uses the database</p>
<pre><code>[Mon Jul 10 12:00:26.064569 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] [2023-07-10 12:00:26,063] ERROR in app: Exception on /stade/list [GET]
[Mon Jul 10 12:00:26.064596 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] Traceback (most recent call last):
[Mon Jul 10 12:00:26.064599 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 2447, in wsgi_app
[Mon Jul 10 12:00:26.064602 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] response = self.full_dispatch_request()
[Mon Jul 10 12:00:26.064605 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1952, in full_dispatch_request
[Mon Jul 10 12:00:26.064608 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] rv = self.handle_user_exception(e)
[Mon Jul 10 12:00:26.064610 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1821, in handle_user_exception
[Mon Jul 10 12:00:26.064613 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] reraise(exc_type, exc_value, tb)
[Mon Jul 10 12:00:26.064615 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] File "/usr/local/lib/python3.6/site-packages/flask/_compat.py", line 39, in reraise
[Mon Jul 10 12:00:26.064617 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] raise value
[Mon Jul 10 12:00:26.064619 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1950, in full_dispatch_request
[Mon Jul 10 12:00:26.064622 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] rv = self.dispatch_request()
[Mon Jul 10 12:00:26.064624 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1936, in dispatch_request
[Mon Jul 10 12:00:26.064626 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] return self.view_functions[rule.endpoint](**req.view_args)
[Mon Jul 10 12:00:26.064629 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] File "/var/www/oneplay/oneplay_app/controllers/stade.py", line 23, in stade_list
[Mon Jul 10 12:00:26.064631 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] return stade_service.stade_list()
[Mon Jul 10 12:00:26.064633 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] File "/var/www/oneplay/oneplay_app/services/stade.py", line 80, in stade_list
[Mon Jul 10 12:00:26.064636 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] stades = stade_repo.read_all()
[Mon Jul 10 12:00:26.064638 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] File "/var/www/oneplay/oneplay_app/repository/stade.py", line 33, in read_all
[Mon Jul 10 12:00:26.064640 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] stades = Stade.objects()
[Mon Jul 10 12:00:26.064642 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] File "/usr/local/lib/python3.6/site-packages/mongoengine/queryset/manager.py", line 38, in __get__
[Mon Jul 10 12:00:26.064645 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] queryset = queryset_class(owner, owner._get_collection())
[Mon Jul 10 12:00:26.064647 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] File "/usr/local/lib/python3.6/site-packages/mongoengine/document.py", line 232, in _get_collection
[Mon Jul 10 12:00:26.064649 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] if cls._meta.get("auto_create_index", True) and db.client.is_primary:
[Mon Jul 10 12:00:26.064652 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] File "/usr/local/lib/python3.6/site-packages/pymongo/mongo_client.py", line 1006, in is_primary
[Mon Jul 10 12:00:26.064681 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] return self._server_property('is_writable')
[Mon Jul 10 12:00:26.064684 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] File "/usr/local/lib/python3.6/site-packages/pymongo/mongo_client.py", line 831, in _server_property
[Mon Jul 10 12:00:26.064686 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] writable_server_selector)
[Mon Jul 10 12:00:26.064688 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] File "/usr/local/lib/python3.6/site-packages/pymongo/topology.py", line 231, in select_server
[Mon Jul 10 12:00:26.064703 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] address))
[Mon Jul 10 12:00:26.064705 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] File "/usr/local/lib/python3.6/site-packages/pymongo/topology.py", line 189, in select_servers
[Mon Jul 10 12:00:26.064707 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] selector, server_timeout, address)
[Mon Jul 10 12:00:26.064710 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] File "/usr/local/lib/python3.6/site-packages/pymongo/topology.py", line 205, in _select_servers_loop
[Mon Jul 10 12:00:26.064712 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] self._error_message(selector))
[Mon Jul 10 12:00:26.064716 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594] pymongo.errors.ServerSelectionTimeoutError: 127.0.0.1:27017: [Errno 13] Permission denied
[Mon Jul 10 12:00:26.064736 2023] [wsgi:error] [pid 19007] [client 104.28.249.102:41594]
</code></pre>
<p>My config is as follows:</p>
<pre><code>MONGODB_SETTINGS = {
    "db": "db_name",
    "port": 27017,
    "host": "127.0.0.1",
}
</code></pre>
<p>I have already imported some data in the used collection, I am not sure whether I should configure something with MongoDB or with Flask</p>
<p>Server info:</p>
<ul>
<li>OS: CentOS 7</li>
<li>Python: python 3.6.8</li>
<li>Flask: flask 1.1.2</li>
<li>MongoDB: 4.2.24</li>
<li>Flask Mongoengine: flask-mongoengine 1.0.0</li>
<li>Mongoengine: mongoengine 0.20</li>
<li>PyMongo: pymongo 3.9.0</li>
</ul>
<p>I have tried to access it from the Python console and it connects correctly</p>
<pre><code>Python 3.6.8 (default, Jun 20 2023, 11:53:23)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pymongo
>>> client = pymongo.MongoClient('127.0.0.1', 27017)
>>> db = client['db_name']
>>> db.collection_names()
['init', 'stade']
</code></pre>
<p>Do I have to set something somewhere? I don't remember having to do so locally.</p>
| <python><mongodb><flask><pymongo-3.x> | 2023-07-10 12:28:08 | 1 | 367 | prout |
76,653,505 | 633,001 | IntelOMP and LLVM OMP colliding | <p>We have a conda environment that apparently causes issues when running:</p>
<pre><code>/opt/miniconda/lib/python3.10/site-packages/threadpoolctl.py:546: RuntimeWarning:
Found Intel OpenMP ('libiomp') and LLVM OpenMP ('libomp') loaded at
the same time. Both libraries are known to be incompatible and this
can cause random crashes or deadlocks on Linux when loaded in the
same Python program.
</code></pre>
<p>I tried to figure out which package requires LLVM OpenMP:</p>
<pre><code>grep llvm ~/anaconda3/pkgs/*/info/index.json
/home/user/anaconda3/pkgs/libclang-10.0.1-default_hb85057a_2/info/index.json: "libllvm10 >=10.0.1,<10.1.0a0",
/home/user/anaconda3/pkgs/libllvm10-10.0.1-hbcb73fb_5/info/index.json: "name": "libllvm10",
/home/user/anaconda3/pkgs/libllvm11-11.1.0-h9e868ea_6/info/index.json: "name": "libllvm11",
/home/user/anaconda3/pkgs/libllvm14-14.0.6-hdb19cb5_3/info/index.json: "name": "libllvm14",
/home/user/anaconda3/pkgs/llvmlite-0.39.1-py310he621ea3_0/info/index.json: "libllvm11 >=11.1.0,<11.2.0a0",
/home/user/anaconda3/pkgs/llvmlite-0.39.1-py310he621ea3_0/info/index.json: "name": "llvmlite",
/home/user/anaconda3/pkgs/llvmlite-0.40.0-py310he621ea3_0/info/index.json: "libllvm14 >=14.0.6,<14.1.0a0",
/home/user/anaconda3/pkgs/llvmlite-0.40.0-py310he621ea3_0/info/index.json: "name": "llvmlite",
/home/user/anaconda3/pkgs/numba-0.56.4-py310h1128e8f_0/info/index.json: "llvmlite >=0.39.*,<0.40",
/home/user/anaconda3/pkgs/numba-0.57.0-py310h1128e8f_0/info/index.json: "libllvm14 >=14.0.6,<14.1.0a0",
/home/user/anaconda3/pkgs/numba-0.57.0-py310h1128e8f_0/info/index.json: "llvmlite >=0.40.0,<0.41.0a0",
/home/user/anaconda3/pkgs/pynndescent-0.5.10-py310h06a4308_0/info/index.json: "llvmlite >=0.34",
</code></pre>
<p>Other parts of the packages explicitly use Intel OpenMP:</p>
<pre><code>grep intel ~/anaconda3/pkgs/*/info/index.json
/home/user/anaconda3/pkgs/intel-openmp-2021.4.0-h06a4308_3561/info/index.json: "name": "intel-openmp",
/home/user/anaconda3/pkgs/intel-openmp-2023.1.0-hdb19cb5_46305/info/index.json: "name": "intel-openmp",
/home/user/anaconda3/pkgs/mkl-2021.4.0-h06a4308_640/info/index.json: "intel-openmp 2021.*"
/home/user/anaconda3/pkgs/mkl-2023.1.0-h6d00ec8_46342/info/index.json: "intel-openmp 2023.*",
/home/user/anaconda3/pkgs/pytorch-1.12.1-cpu_py310hb1f1ab4_1/info/index.json: "intel-openmp >=2021.4.0,<2022.0a0",
/home/user/anaconda3/pkgs/scikit-learn-intelex-2023.0.2-py310h06a4308_0/info/index.json: "name": "scikit-learn-intelex",
/home/user/anaconda3/pkgs/scipy-1.10.0-py310hd5efca6_1/info/index.json: "intel-openmp >=2021.4.0,<2022.0a0",
/home/user/anaconda3/pkgs/scipy-1.10.1-py310h5f9d8c6_1/info/index.json: "intel-openmp >=2023.1.0,<2024.0a0",
</code></pre>
<p>How can I avoid this issue of two different OpenMP runtimes being loaded?</p>
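<p>One commonly suggested mitigation — an assumption about the stack, since it depends on which package drags in which runtime — is to tell MKL to use the GNU OpenMP layer so that <code>libiomp</code> is never loaded alongside <code>libomp</code>. The environment variable must be set before NumPy/PyTorch/scikit-learn are imported:</p>

```python
import os

# must run before importing numpy / torch / sklearn;
# MKL_THREADING_LAYER selects which OpenMP runtime MKL binds to
os.environ["MKL_THREADING_LAYER"] = "GNU"
print(os.environ["MKL_THREADING_LAYER"])
```

<p>Alternatively, on conda, replacing the MKL-linked builds (e.g. via the <code>nomkl</code> metapackage) removes the Intel runtime entirely; whether that is acceptable depends on your performance needs.</p>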
| <python><ubuntu><conda><openmp> | 2023-07-10 11:57:47 | 1 | 3,519 | SinisterMJ |
76,653,412 | 2,587,931 | Add DataAugmentation and rescaling in CNN with Keras sequential API | <p>How do you add data augmentation and rescaling layer in a Convolution Network in Keras?</p>
<p>This is how I have defined it with the functional API:</p>
<pre><code>image_size = (32, 32)

data_augmentation = keras.Sequential(
    [
        layers.experimental.preprocessing.RandomFlip(),
        layers.experimental.preprocessing.RandomRotation(0.25),
        layers.experimental.preprocessing.RandomZoom(0.25),
    ]
)

inputs = tf.keras.Input(shape=image_size + (3, ), name='input')
data_aug = data_augmentation(inputs)
rescaling = tf.keras.layers.Rescaling(1. / 255)(data_aug)
conv_1 = layers.Conv2D(
    32, 3, padding='valid', name='conv_1')(rescaling)
</code></pre>
<p>Without the data_augmentation and rescaling, the input shape is just defined in the first convolution layer:</p>
<pre><code>model_seq = tf.keras.models.Sequential()
model_seq.add(layers.Conv2D(
    32, 3, padding='valid', input_shape=image_size + (3, ), name='conv_1'))
</code></pre>
<p>But I'm not sure what I should do. Do I just add the data augmentation and rescaling before the convolution, without an input layer? Or do I add an input layer and remove the shape definition from the first convolution?</p>
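<p>In case it helps frame the question, one shape this can take (a sketch — the layer names assume TF ≥ 2.6, where the preprocessing layers moved out of <code>experimental</code>) is to pass an <code>Input</code> as the first element of the <code>Sequential</code> list and drop <code>input_shape</code> from the convolution:</p>

```python
import tensorflow as tf
from tensorflow.keras import layers

data_augmentation = tf.keras.Sequential([
    layers.RandomFlip(),
    layers.RandomRotation(0.25),
    layers.RandomZoom(0.25),
])

model_seq = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),  # replaces input_shape= on the first conv
    data_augmentation,                  # a Sequential can nest another Sequential
    layers.Rescaling(1. / 255),
    layers.Conv2D(32, 3, padding='valid', name='conv_1'),
])
print(model_seq.output_shape)
```

<p>Whether the augmentation should live inside the model or in the input pipeline is a separate design question; inside the model it is active only during training.</p>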
| <python><keras><conv-neural-network> | 2023-07-10 11:45:33 | 0 | 1,105 | Kaikus |
76,653,057 | 7,115,354 | Pulp Matching algorithm to replace greedy algo | <p>I am trying to create a matching algorithm using PuLP, but the results I'm getting for the sample data are wrong; I think the function is flawed.</p>
<p>Sample data:</p>
<pre><code>users = {
    1: (5.0, 4.0, 1.0, 2, 1, 1),
    2: (8.0, 6.0, 2.0, 3, 2, 1)
}

dataset = pd.DataFrame([
    {'id': 1, 'group': 'A', 'weight': 1},
    {'id': 2, 'group': 'A', 'weight': 2},
    {'id': 3, 'group': 'A', 'weight': 3},
    {'id': 4, 'group': 'A', 'weight': 3},
    {'id': 5, 'group': 'A', 'weight': 4},
    {'id': 6, 'group': 'A', 'weight': 6},
    {'id': 7, 'group': 'A', 'weight': 7},
    {'id': 8, 'group': 'A', 'weight': 8},
    {'id': 9, 'group': 'B', 'weight': 2},
    {'id': 10, 'group': 'B', 'weight': 1}
])

mem_list = [1.96, 1.97, 1.98, 1.99, 2]
</code></pre>
<p>I would like to match different ids to users (without repetition). For each user I have a total weight, group A weight, group B weight, unique id count, group A unique id count, group B unique id count.</p>
<p>For the sample above the correct answer should be:</p>
<pre><code>{'id': 5, 'group': 'A', 'weight': 4, 'user_id': 1}
{'id': 10, 'group': 'B', 'weight': 1, 'user_id': 1}
{'id': 3, 'group': 'A', 'weight': 3, 'user_id': 2}
{'id': 4, 'group': 'A', 'weight': 3, 'user_id': 2}
{'id': 9, 'group': 'B', 'weight': 2, 'user_id': 2}
</code></pre>
<p>My first attempt:</p>
<pre><code>from pulp import *
import pandas as pd
from itertools import product


def match_weights(users, dataset):
    matched_rows = []
    variables = LpVariable.dicts("Item", range(len(dataset)), lowBound=0, cat='Binary')

    user_vars = {}
    for user_id, (total_weight, group_a_weight, group_b_weight, total_unique_users, group_a_unique_users, group_b_unique_users) in users.items():
        user_vars[user_id] = {}
        user_vars[user_id]['total_weight'] = LpVariable("TotalWeight_{}".format(user_id), lowBound=0, upBound=total_weight)
        user_vars[user_id]['group_a_weight'] = LpVariable("GroupAWeight_{}".format(user_id), lowBound=0, upBound=group_a_weight)
        user_vars[user_id]['group_b_weight'] = LpVariable("GroupBWeight_{}".format(user_id), lowBound=0, upBound=group_b_weight)
        user_vars[user_id]['total_unique_users'] = LpVariable("TotalUniqueUsers_{}".format(user_id), lowBound=0, upBound=total_unique_users, cat='Integer')
        user_vars[user_id]['group_a_unique_users'] = LpVariable("GroupAUniqueUsers_{}".format(user_id), lowBound=0, upBound=group_a_unique_users, cat='Integer')
        user_vars[user_id]['group_b_unique_users'] = LpVariable("GroupBUniqueUsers_{}".format(user_id), lowBound=0, upBound=group_b_unique_users, cat='Integer')

    prob = LpProblem("MatchingProblem", LpMaximize)
    prob += lpSum(variables[i] for i in range(len(dataset)))

    for user_id, (total_weight, group_a_weight, group_b_weight, total_unique_users, group_a_unique_users, group_b_unique_users) in users.items():
        group_a_items = dataset[dataset['group'] == 'A'].index.tolist()
        group_b_items = dataset[dataset['group'] == 'B'].index.tolist()

        # Total weight constraint
        prob += lpSum(variables[i] * dataset.loc[i, 'weight'] for i in range(len(dataset))) <= user_vars[user_id]['total_weight']
        # Group A weight constraint
        prob += lpSum(variables[i] * dataset.loc[i, 'weight'] for i in group_a_items) <= user_vars[user_id]['group_a_weight']
        # Group B weight constraint
        prob += lpSum(variables[i] * dataset.loc[i, 'weight'] for i in group_b_items) <= user_vars[user_id]['group_b_weight']

        # Total unique user constraint
        unique_users = set()
        for i in range(len(dataset)):
            if variables[i].value() == 1:
                unique_users.add(dataset.loc[i, 'id'])
        prob += lpSum(1 for u in unique_users) <= user_vars[user_id]['total_unique_users']

        # Group A unique user constraint
        unique_users_a = set()
        for i in group_a_items:
            if variables[i].value() == 1:
                unique_users_a.add(dataset.loc[i, 'id'])
        prob += lpSum(1 for u in unique_users_a) <= user_vars[user_id]['group_a_unique_users']

        # Group B unique user constraint
        unique_users_b = set()
        for i in group_b_items:
            if variables[i].value() == 1:
                unique_users_b.add(dataset.loc[i, 'id'])
        prob += lpSum(1 for u in unique_users_b) <= user_vars[user_id]['group_b_unique_users']

    prob.solve()

    for user_id, (total_weight, group_a_weight, group_b_weight, total_unique_users, group_a_unique_users, group_b_unique_users) in users.items():
        matched_user_rows = []
        for i in range(len(dataset)):
            if variables[i].value() == 1:
                matched_row = dataset.loc[i].copy()
                matched_row['user_id'] = user_id
                matched_user_rows.append(matched_row)
        matched_rows.extend(matched_user_rows)

    return matched_rows
</code></pre>
<p>However the results are:</p>
<pre><code>{1: {'group_a': [2], 'group_b': [10]}, 2: {'group_a': [2], 'group_b': [10]}}
</code></pre>
<p>It looks like my results might be overwriting each other, and they also look wrong.</p>
<p>I tried to rewrite it and got similarly incorrect results:</p>
<pre><code>def match_weights(users, dataset):
    model = LpProblem("MatchingProblem", LpMaximize)
    variables = LpVariable.dicts("Item", dataset.index, lowBound=0, cat='Binary')
    model += lpSum(variables[i] for i in dataset.index)

    # Add constraints for each user
    for user_id, (total_weight, group_a_weight, group_b_weight, _, _, _) in users.items():
        # Filter dataset based on user group
        group_a_indices = dataset[dataset['group'] == 'A'].index
        group_b_indices = dataset[dataset['group'] == 'B'].index

        # Total weight constraint
        model += lpSum(variables[i] * dataset.loc[i, 'weight'] for i in dataset.index) <= total_weight
        # Group A weight constraint
        model += lpSum(variables[i] * dataset.loc[i, 'weight'] for i in group_a_indices) <= group_a_weight
        # Group B weight constraint
        model += lpSum(variables[i] * dataset.loc[i, 'weight'] for i in group_b_indices) <= group_b_weight

    unique_user_set = set(dataset['respondent_id'])
    for user_id, (total_weight, _, _, total_unique_users, group_a_unique_users, group_b_unique_users) in users.items():
        group_a_indices = dataset[dataset['group'] == 'A'].index
        group_b_indices = dataset[dataset['group'] == 'B'].index

        # Total unique users constraint
        model += lpSum(variables[i] for i in dataset.index if dataset.loc[i, 'respondent_id'] in unique_user_set) \
            <= total_unique_users
        # Group A unique users constraint
        model += lpSum(variables[i] for i in group_a_indices if dataset.loc[i, 'respondent_id'] in unique_user_set) \
            <= group_a_unique_users
        # Group B unique users constraint
        model += lpSum(variables[i] for i in group_b_indices if dataset.loc[i, 'respondent_id'] in unique_user_set) \
            <= group_b_unique_users

    model.solve()

    results = {}
    for user_id, (_, _, _, _, _, _) in users.items():
        group_a_indices = dataset[dataset['group'] == 'A'].index
        group_b_indices = dataset[dataset['group'] == 'B'].index
        matched_a = [dataset.loc[i, 'respondent_id'] for i in group_a_indices if variables[i].value() == 1]
        matched_b = [dataset.loc[i, 'respondent_id'] for i in group_b_indices if variables[i].value() == 1]
        results[user_id] = {'group_a': matched_a, 'group_b': matched_b}

    return results
</code></pre>
<p>Where am I going wrong?</p>
| <python><matching><linear-programming><pulp><integer-programming> | 2023-07-10 11:00:07 | 2 | 814 | Olivia |
76,652,939 | 364,088 | Dockerfile - why does removing build tools increase the size of the resulting image? | <p>I have a Dockerfile which looks like this ...</p>
<pre><code># Pull base image
FROM python:3.9.17-slim-bullseye
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Binaries needed for the pip install stage
RUN apt-get update && \
apt-get install --no-install-recommends -y gcc python3-dev default-libmysqlclient-dev &&\
apt-get clean
# Create and set work directory called `code`
RUN mkdir -p /code
WORKDIR /code
# Install dependencies
COPY requirements.txt /tmp/requirements.txt
# Pip install
RUN set -ex && \
pip install --upgrade pip && \
pip install -r /tmp/requirements.txt && \
rm -rf /root/.cache/
#
#Once pip is finished we don't need this stuff
RUN apt-get remove python3-dev default-libmysqlclient-dev gcc -y && \
apt-get autoremove -y
# Copy local project
COPY . /code/
# Expose port 8000
EXPOSE 8000
# Use gunicorn on port 8000
CMD ["gunicorn", "--bind", ":8000", "--workers", "2", "django_project.wsgi"]
</code></pre>
<p>and it produces a 576MB image. If I remove these lines ...</p>
<pre><code>RUN apt-get remove python3-dev default-libmysqlclient-dev gcc -y && \
apt-get autoremove -y
</code></pre>
<p>... it produces a 577MB image.</p>
<p>I was hoping for a significant reduction in image size by removing the build tools but instead got a tiny increase. Is there something about what I'm doing in the Dockerfile which is obviously wrong ?</p>
<p>I invoke the image build by executing</p>
<pre><code>$ docker-compose build
</code></pre>
<p>... using a docker-compose.yml which looks like this ...</p>
<pre><code>version: '3.3'

services:
  fooweb:
    build: .
    network_mode: "host"
    container_name: foo-web
    command: gunicorn --bind 0.0.0.0:8000 config.wsgi --workers=4
    volumes:
      - .:/code
    ports:
      - 8000:8000
</code></pre>
| <python><docker><docker-compose><dockerfile><gunicorn> | 2023-07-10 10:42:01 | 1 | 8,432 | shearichard |
76,652,748 | 7,972,989 | Equivalent of R geosphere::distGeo in Python | <p>I am translating R code to Python. I can't find a function to match the output of R function <code>geosphere::distGeo</code> in Python.
I have looked at a lot of answers here and it seems the Python equivalent is <code>geopy.distance.geodesic</code>, but the results don't match: the R code gives 440 km and the Python code gives 392 km.</p>
<p>I am looking for a Python function (or maybe just the right parameters?) to match the 440 km given by R.</p>
<p>I have tried this:</p>
<p>R code</p>
<pre><code>lyon = c(45.7597, 4.8422) # (latitude, longitude)
paris = c(48.8567, 2.3508)
geosphere::distGeo(lyon, paris) / 1000 # default is WGS84 and meters
# 440.7626 km
</code></pre>
<p>Python code</p>
<pre><code>from geopy.distance import geodesic
lyon = (45.7597, 4.8422) # (latitude, longitude)
paris = (48.8567, 2.3508)
geodesic(lyon, paris, ellipsoid="WGS-84").km
# 392.4315 km
</code></pre>
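<p>One thing worth checking before hunting for a different Python function: <code>geosphere::distGeo</code> documents its points as <code>(longitude, latitude)</code>, while <code>geopy</code> expects <code>(latitude, longitude)</code>. A stdlib haversine sketch (spherical, so within roughly 0.5% of the geodesic values) reproduces both numbers from the two orderings, which suggests the 440 km comes from the argument order rather than from a missing Python equivalent:</p>

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(p1, p2):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*p1, *p2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

lyon, paris = (45.7597, 4.8422), (48.8567, 2.3508)  # (lat, lon)

print(haversine_km(lyon, paris))              # ~392 km, like geopy
print(haversine_km(lyon[::-1], paris[::-1]))  # ~441 km, like the R call above
```

<p>If that matches, the fix would be in R — pass <code>c(lon, lat)</code> to <code>distGeo</code> — rather than in Python.</p>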
| <python><r><spatial><geopy><geosphere> | 2023-07-10 10:14:08 | 1 | 2,505 | gdevaux |
76,652,731 | 11,087,259 | How to manage a shared single session for an external website in FastAPI? | <p>I'm building a FastAPI application that needs to obtain data from an external website. To do this, I need to manage a session with the website. However, the session will time out after some time, so I need to call <code>login()</code> again to refresh it. I want to have only one active session at a time for the whole FastAPI application.</p>
<p>I've managed to create a solution that seems to work, but I'm not sure if it's the best approach. Here's what I have so far:</p>
<pre class="lang-py prettyprint-override"><code>class OldAppSession:
    def __init__(self, email, password):
        self.email = email
        self.password = password
        self.session = requests.Session()
        self.login_lock = threading.Lock()
        self.login_in_progress = False
        self.login()

    def login(self) -> bool:
        """Obtain a session cookie for the old app

        Returns:
            bool: True if the login was successful, False if the login is already in progress
        """
        with self.login_lock:
            if self.login_in_progress:
                return False
            self.login_in_progress = True
            # Here I refresh my session.....

    def _request(self, method, url, data=None):
        response = getattr(self.session, method)(url, data=data)
        if response.status_code in [401, 302]:  # Unauthorized
            not_in_progress = self.login()
            if not_in_progress:
                return getattr(self.session, method)(url, data=data)
            else:
                # Sleep for a bit to give the other thread a chance to finish logging in
                for _ in range(2):
                    time.sleep(2)
                    response = getattr(self.session, method)(url, data=data)
                    if response.status_code not in [401, 302]:
                        return response
                logger.error("Failed to log in to the old app - Failed to refresh session")
                raise HTTPException(
                    status_code=status.HTTP_503_SERVICE_UNAVAILABLE,
                    detail="Failed to log in to the old app",
                )
        return response

    def get(self, url):
        return self._request("get", url)

    def post(self, url, data):
        return self._request("post", url, data=data)


old_app_session = OldAppSession("test@example.com", "12345")
</code></pre>
<p>I'm using a threading.Lock() to ensure that only one thread is logging in at a time. However, I'm not sure if this will work if I use multiple gunicorn workers. Is there a better way to manage the session that will work with multiple workers? Is there anything else I should be doing to ensure that my solution is correct?</p>
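<p>For the single-process part, the lock-plus-flag pattern can be reduced to a "single-flight" refresh keyed on a generation counter: a thread that hits a 401 only re-logs-in if nobody else has done so since it last looked. This is a sketch only — with several gunicorn workers each process gets its own instance, so a truly app-wide session would need external coordination (for example a shared token store in Redis) or a single dedicated worker:</p>

```python
import threading

class SessionRefresher:
    """Single-flight login: concurrent 401 handlers trigger at most one refresh."""

    def __init__(self, do_login):
        self._do_login = do_login   # callable that performs the real login
        self._lock = threading.Lock()
        self.generation = 0         # incremented on every completed login

    def refresh(self, seen_generation):
        """Log in again unless another thread already did since we last looked."""
        with self._lock:
            if self.generation == seen_generation:
                self._do_login()
                self.generation += 1
            return self.generation

# Usage sketch: remember `refresher.generation` before a request; on a 401,
# call `refresher.refresh(remembered)` and retry once.
calls = []
refresher = SessionRefresher(lambda: calls.append("login"))
gen = refresher.generation
refresher.refresh(gen)  # performs the login
refresher.refresh(gen)  # stale generation: a refresh already happened, skipped
print(calls)  # ['login']
```

<p>This removes the sleep-and-poll loop: late arrivals block on the lock, then see the bumped generation and return immediately with a fresh session.</p>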
| <python><multithreading><fastapi> | 2023-07-10 10:10:47 | 1 | 373 | Ruuza |
76,652,641 | 6,195,489 | query-exporter in Docker container not working | <p>I am trying to get <a href="https://github.com/albertodonato/query-exporter" rel="nofollow noreferrer">query-exporter</a> to run in a Docker container. With advice from the developer I have enabled IPv6 in docker by putting:</p>
<pre><code>{
"experimental": true,
"ip6tables": true
}
</code></pre>
<p>in my docker daemon.json and restarted.</p>
<p>I am using the following docker-compose file:</p>
<pre><code>version: "3.3"
services:
prometheus:
container_name: prometheus
image: prom/prometheus
restart: always
volumes:
- ./prometheus:/etc/prometheus/
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--web.console.libraries=/usr/share/prometheus/console_libraries'
- '--web.console.templates=/usr/share/prometheus/consoles'
ports:
- 9090:9090
networks:
- prom_app_net
grafana:
container_name: grafana
image: grafana/grafana
user: '472'
restart: always
environment:
GF_INSTALL_PLUGINS: 'grafana-clock-panel,grafana-simple-json-datasource'
volumes:
- grafana_data:/var/lib/grafana
- ./grafana/provisioning/:/etc/grafana/provisioning/
- './grafana/grafana.ini:/etc/grafana/grafana.ini'
env_file:
- ./grafana/.env_grafana
ports:
- 3000:3000
depends_on:
- prometheus
networks:
- prom_app_net
mysql:
image: mariadb:10.10
hostname: mysql
container_name: mysql
environment:
MYSQL_RANDOM_ROOT_PASSWORD: "yes"
MYSQL_DATABASE: slurm_acct_db
MYSQL_USER: slurm
MYSQL_PASSWORD: password
volumes:
- var_lib_mysql:/var/lib/mysql
networks:
- slurm
# network_mode: host
slurmdbd:
image: prom-slurm-cluster:${IMAGE_TAG:-21.08.6}
build:
context: .
args:
SLURM_TAG: ${SLURM_TAG:-slurm-21-08-6-1}
command: ["slurmdbd"]
container_name: slurmdbd
hostname: slurmdbd
volumes:
- etc_munge:/etc/munge
- etc_slurm:/etc/slurm
- var_log_slurm:/var/log/slurm
- cgroups:/sys/fs/cgroup:ro
expose:
- "6819"
ports:
- "6819:6819"
depends_on:
- mysql
privileged: true
cgroup: host
networks:
- slurm
#network_mode: host
slurmctld:
image: prom-slurm-cluster:${IMAGE_TAG:-21.08.6}
command: ["slurmctld"]
container_name: slurmctld
hostname: slurmctld
volumes:
- etc_munge:/etc/munge
- etc_slurm:/etc/slurm
- slurm_jobdir:/data
- var_log_slurm:/var/log/slurm
- etc_prometheus:/etc/prometheus
- /sys/fs/cgroup:/sys/fs/cgroup:rw
expose:
- "6817"
- "8080"
- "8081"
- "8082/tcp"
ports:
- 8080:8080
- 8081:8081
- 8082:8082/tcp
depends_on:
- "slurmdbd"
privileged: true
cgroup: host
#network_mode: host
networks:
- slurm
c1:
image: prom-slurm-cluster:${IMAGE_TAG:-21.08.6}
command: ["slurmd"]
hostname: c1
container_name: c1
volumes:
- etc_munge:/etc/munge
- etc_slurm:/etc/slurm
- slurm_jobdir:/data
- var_log_slurm:/var/log/slurm
- cgroups:/sys/fs/cgroup:ro
expose:
- "6818"
depends_on:
- "slurmctld"
privileged: true
cgroup: host
#network_mode: host
networks:
- slurm
c2:
image: prom-slurm-cluster:${IMAGE_TAG:-21.08.6}
command: ["slurmd"]
hostname: c2
container_name: c2
volumes:
- etc_munge:/etc/munge
- etc_slurm:/etc/slurm
- slurm_jobdir:/data
- var_log_slurm:/var/log/slurm
- cgroups:/sys/fs/cgroup:ro
expose:
- "6818"
- "22"
depends_on:
- "slurmctld"
privileged: true
cgroup: host
networks:
- slurm
#network_mode: host
volumes:
etc_munge:
etc_slurm:
slurm_jobdir:
var_lib_mysql:
var_log_slurm:
grafana_data:
prometheus_data:
cgroups:
etc_prometheus:
networks:
prom_app_net:
slurm:
enable_ipv6: true
ipam:
config:
- subnet: 2001:0DB8::/112
</code></pre>
<p>Then installed query-exporter on the slurmctld container and run it with the following config.yaml:</p>
<pre><code>databases:
db1:
dsn: sqlite:////test.db
connect-sql:
- PRAGMA application_id = 123
- PRAGMA auto_vacuum = 1
labels:
region: us1
app: app1
metrics:
metric1:
type: gauge
description: A sample gauge
queries:
query1:
interval: 5
databases: [db1]
metrics: [metric1]
sql: SELECT random() / 1000000000000000 AS metric1
</code></pre>
<p>But it is not working - prometheus lists the target as being down:</p>
<p><a href="https://i.sstatic.net/6VyeH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6VyeH.png" alt="prometheus window down" /></a></p>
<p>But the container set-up seems to be fine as if I run the following test exporter:</p>
<pre><code>from prometheus_client import start_http_server, Summary
import random
import time
# Create a metric to track time spent and requests made.
REQUEST_TIME = Summary('request_processing_seconds', 'Time spent processing request')
# Decorate function with metric.
@REQUEST_TIME.time()
def process_request(t):
"""A dummy function that takes some time."""
time.sleep(t)
if __name__ == '__main__':
# Start up the server to expose the metrics.
start_http_server(8082)
# Generate some requests.
while True:
process_request(random.random())
</code></pre>
<p>Prometheus can connect to the target fine:</p>
<p><a href="https://i.sstatic.net/bZZzn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZZzn.png" alt="target up" /></a></p>
<p>Can anyone see what the problem could be?</p>
<p>Thanks!</p>
<p><strong>Update</strong></p>
<p>I ran query-exporter by hand on the slurmctld container. There isn't anything in the container logs about query-exporter:</p>
<pre><code>2023-07-10 10:11:37 ---> Starting the MUNGE Authentication service (munged) ...
2023-07-10 10:11:37 ---> Waiting for slurmdbd to become active before starting slurmctld ...
2023-07-10 10:11:37 -- slurmdbd is not available. Sleeping ...
2023-07-10 10:11:39 -- slurmdbd is now active ...
2023-07-10 10:11:39 ---> starting systemd ...
</code></pre>
<p>I think the test_query.py that works is using IPv4 on port 8082, while query-exporter is trying to bind IPv6.</p>
<p><code>docker port slurmctld</code> gives:</p>
<pre><code>8080/tcp -> 0.0.0.0:8080
8080/tcp -> [::]:8080
8081/tcp -> 0.0.0.0:8081
8081/tcp -> [::]:8081
8082/tcp -> 0.0.0.0:8082
8082/tcp -> [::]:8082
</code></pre>
<p>I guess I need to point Prometheus at <code>8082/tcp -> [::]:8082</code> when query-exporter runs, but I'm not sure how to do it.</p>
| <python><docker><sqlite><docker-compose><prometheus> | 2023-07-10 09:57:43 | 1 | 849 | abinitio |
76,652,463 | 11,894,831 | Matplotlib : Two treemaps in the same figure | <p>I'm new to matplotlib and squarify, and I want to display two distinct treemaps in the same figure.</p>
<p>I use the code below, which displays both treemaps in the same axes, and I don't get why.</p>
<pre><code>import matplotlib.pyplot as plt
import squarify

http_return_status_label_1 = ['200','300','500']
http_return_status_count_1 = [4,8,12]
http_return_status_label_2 = ['2000','3000','5000']
http_return_status_count_2 = [40,88,102]
fig, (ax1, ax2) = plt.subplots(1, 2, subplot_kw={'aspect': 'equal'})
ax1.subplot = squarify.plot(sizes=http_return_status_count_1, label=http_return_status_label_1, alpha=.8)
ax2.subplot = squarify.plot(sizes=http_return_status_count_2, label=http_return_status_label_2, alpha=.8)
plt.axis('off')
plt.show()
</code></pre>
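<p>The usual cause of this symptom is that both <code>squarify.plot</code> calls draw on pyplot's <em>current</em> axes rather than the subplot you intend. Recent squarify versions accept an <code>ax</code> keyword (an assumption worth checking against your installed release), so each call can be pointed at its own subplot. A sketch of the ax-targeting pattern with plain matplotlib, since the idea is identical:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, (ax1, ax2) = plt.subplots(1, 2, subplot_kw={'aspect': 'equal'})

# Draw on each Axes explicitly instead of relying on the pyplot state machine.
# With squarify this would be e.g.:
#   squarify.plot(sizes=http_return_status_count_1,
#                 label=http_return_status_label_1, alpha=.8, ax=ax1)
ax1.bar(['200', '300', '500'], [4, 8, 12])
ax2.bar(['2000', '3000', '5000'], [40, 88, 102])

# plt.axis('off') only affects the current axes; switch both off explicitly.
ax1.axis('off')
ax2.axis('off')
```

<p>Note also that <code>ax1.subplot = squarify.plot(...)</code> just attaches an unused attribute; the return value doesn't need to be assigned at all.</p>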
| <python><matplotlib> | 2023-07-10 09:33:34 | 1 | 475 | 8oris |
76,652,445 | 3,251,645 | How to use pointer to an array of custom objects in ctypes | <p>I have a <code>ctypes</code> structure defined like this:</p>
<pre><code>class CNode(Structure):
pass
CNode._fields_ = [
("type", c_int32),
("children", POINTER(CNode)),
]
</code></pre>
<p>Here, <code>CNode</code> is used to represent a tree where the <code>children</code> field is supposed to point to a list of other <code>CNode</code> types. I have another function which generates a <code>CNode</code> tree from a tree which uses a different Python class. I'm populating <code>children</code> like this:</p>
<pre><code>def gen_c_tree(tree):
cnode = gen_c_node(tree)
if len(tree.children) > 0:
tmp = [gen_c_node(child) for child in tree.children]
cnode.children = POINTER(CNode * len(tree.children))(*tmp)
return cnode
</code></pre>
<p>But, I'm getting this error:</p>
<pre><code>Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "path/file", line 32, in <module>
genctree.gen_c_tree(parser.tree)
File "path/file", line 29, in gen_c_tree
cnode.children = POINTER(CNode * len(tree.children))(*tmp)
^^^^^^^^^^^^^^
TypeError: incompatible types, LP_CNode_Array_1 instance instead of LP_CNode instance
</code></pre>
<p>How do I fix this?</p>
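<p><code>POINTER(CNode * n)(*tmp)</code> creates a pointer type to an <em>array</em> of <code>CNode</code>, but the field expects <code>POINTER(CNode)</code> — a pointer to the first element. Building the array and <code>cast</code>-ing it works; keeping a Python reference to the array matters too, since ctypes will otherwise free the memory. A self-contained sketch with simplified fields:</p>

```python
from ctypes import POINTER, Structure, c_int32, cast

class CNode(Structure):
    pass

CNode._fields_ = [
    ("type", c_int32),
    ("children", POINTER(CNode)),
]

kids = (CNode * 2)(CNode(type=1), CNode(type=2))  # contiguous CNode array
root = CNode(type=0)
root.children = cast(kids, POINTER(CNode))        # pointer to the first element

print(root.children[0].type, root.children[1].type)  # 1 2
# NB: keep `kids` referenced for as long as `root` is used, or the memory
# backing `root.children` may be garbage-collected.
```

<p>Applied to the question's function: <code>arr = (CNode * len(tree.children))(*tmp)</code> followed by <code>cnode.children = cast(arr, POINTER(CNode))</code>, while also storing <code>arr</code> somewhere that outlives <code>cnode</code>.</p>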
| <python><c++><ctypes> | 2023-07-10 09:31:22 | 0 | 2,649 | Amol Borkar |
76,652,331 | 13,023,224 | Sort column strings without numbers (and keep order when doing graphs) | <p>I have this df code</p>
<pre><code>df = pd.DataFrame({'A': ['0-5', '18-23', '12-17', '6-11'], 'qty':[7,15,8,34]})
</code></pre>
<p>yielding</p>
<pre><code> A qty
0 0-5 7
1 18-23 15
2 12-17 8
3 6-11 34
</code></pre>
<p>I would like to order the df by col 'A' without having to number the A column, so that later when I do graphs I don't have the numbers.</p>
<p>This is the desired output after sorting the df by column A:</p>
<pre><code> A qty
0 0-5 7
3 6-11 34
2 12-17 8
1 18-23 15
</code></pre>
<p>To achieve a similar result I would:</p>
<pre><code># add a category code
df['A'] = df['A'].astype('category').cat.codes + 1
# convert format
df['A'] = df['A'].astype('string')
# use a dictionary to rename (based on former output)
dic = {
'1':'1_0-5',
'3':'3_18-23',
'2':'2_12-17',
'4':'4_6-11',
}
df['A'] = df['A'].replace(dic, regex=True)
## use a dictionary to rename again
dic = {
'1_0-5':'1_0-5',
'3_18-23':'4_18-23',
'2_12-17':'3_12-17',
'4_6-11':'2_6-11',
}
df['A'] = df['A'].replace(dic, regex=True)
</code></pre>
<p>by doing this, I can achieve this:</p>
<pre><code> A qty
0 1_0-5 7
1 2_6-11 15
2 3_12-17 8
3 4_18-23 34
</code></pre>
<p>Groupby does not work for me: while it would order column A as desired, the order would not be kept when I do graphs.</p>
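<p>One vectorized route is an ordered <code>Categorical</code> whose category order is derived from the numeric lower bound of each bin — the labels stay exactly as they are, and sorting, groupby, and most plotting respect the declared order. A sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({'A': ['0-5', '18-23', '12-17', '6-11'], 'qty': [7, 15, 8, 34]})

# Order categories by the numeric lower bound of each bin; labels are unchanged.
order = sorted(df['A'].unique(), key=lambda s: int(s.split('-')[0]))
df['A'] = pd.Categorical(df['A'], categories=order, ordered=True)

out = df.sort_values('A').reset_index(drop=True)
print(out)
#        A  qty
# 0    0-5    7
# 1   6-11   34
# 2  12-17    8
# 3  18-23   15
```

<p>Because the order lives on the dtype rather than on renamed labels, later plots pick it up without any <code>1_</code>/<code>2_</code> prefixes.</p>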
| <python><pandas><sorting> | 2023-07-10 09:16:02 | 3 | 571 | josepmaria |
76,652,288 | 19,354,807 | Conditionals with possible None values in List comprehension | <p>I have a <code>xml</code> file that lists speakers:</p>
<pre><code><speakerlist>
<speaker>
<title>Dr.</title>
<firstname>Bernd</firstname>
<lastname>Baumann</lastname>
</speaker>
<speaker id="11003218">
<firstname>Karsten</firstname>
<lastname>Schneider</lastname>
<info>(Erfurt)</info>
</speaker>
...
</speakerlist>
</code></pre>
<p>Some of the speaker attributes are always given (<code>firstname</code>, <code>lastname</code>) while others are optional (<code>title</code>, <code>info</code>). I want to extract the names with the additional info in a straightforward way.</p>
<p>Just the name is easy, using BeautifulSoup:</p>
<pre class="lang-py prettyprint-override"><code>[speaker.find("firstname").text + " " + speaker.find("lastname").text for speaker in speakerlist.find_all("speaker")]
</code></pre>
<p>But how can I prepend the <code>title</code> if existing? I tried</p>
<pre class="lang-py prettyprint-override"><code>[
speaker.find("title").text + " " + speaker.find("firstname").text + " " + speaker.find("lastname").text
if speaker.find("title").text is not None
else speaker.find("firstname").text + " " + speaker.find("lastname").text
for speaker in speakerlist.find_all("speaker")
]
</code></pre>
<p>but this throws</p>
<pre class="lang-py prettyprint-override"><code>'NoneType' object has no attribute 'text'
</code></pre>
<p>when the <code>title</code> attribute does not exist. I understand why this happens, but I don't see a workaround.</p>
<p>Is there a nice and cohesive way for a one-liner to extract the information I want?</p>
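<p>The find result itself is <code>None</code> when the tag is missing, so <code>.text</code> blows up before the <code>is not None</code> check ever runs — test the element, not its <code>.text</code>. One compact pattern joins only the parts that exist, shown here with the stdlib <code>xml.etree</code> (the same idea applies to BeautifulSoup, whose <code>find</code> also returns <code>None</code> for absent tags):</p>

```python
import xml.etree.ElementTree as ET

xml = """<speakerlist>
  <speaker><title>Dr.</title><firstname>Bernd</firstname><lastname>Baumann</lastname></speaker>
  <speaker><firstname>Karsten</firstname><lastname>Schneider</lastname></speaker>
</speakerlist>"""

root = ET.fromstring(xml)
names = [
    # Try each tag in order; keep only the ones that actually exist.
    " ".join(el.text for tag in ("title", "firstname", "lastname")
             if (el := speaker.find(tag)) is not None)
    for speaker in root.findall("speaker")
]
print(names)  # ['Dr. Bernd Baumann', 'Karsten Schneider']
```

<p>The walrus operator (Python 3.8+) calls <code>find</code> once per tag and filters out the misses, so optional fields like <code>info</code> can be added to the tuple without extra branches.</p>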
| <python><list-comprehension><conditional-operator><nonetype> | 2023-07-10 09:09:25 | 1 | 548 | Quantum |
76,652,066 | 1,068,980 | importlib.metadata.PackageNotFoundError when running Python in Docker | <p>I've got the following Docker folder in my project structure</p>
<pre><code>.../
docker/
src/
my_app/
__init.py__
# rest of py files
my_app_entrypoint.py
Pipfile
Pipfile.lock
pyproject.toml
setup.cfg
setup.py
test/
build.yaml
Dockerfile
...
</code></pre>
<p>My Docker is defined like this:</p>
<pre><code>...
COPY src/Pipfile* /home/workspace/
RUN pip install --upgrade pip \
&& pip install pipenv \
&& pipenv requirements > requirements.txt \
&& pip install -r requirements.txt
...
COPY src/pyproject.toml ./
COPY src/setup.cfg ./
COPY src/setup.py ./
COPY src/my_app_entrypoint.py ./
COPY src/my_app ./my_app
RUN chmod a+x my_app_entrypoint.py
...
</code></pre>
<p>and, when running the python code, I am getting the following error:</p>
<pre><code>Traceback (most recent call last):
File "/home/workspace/my_app_entrypoint.py", line 2, in <module>
import my_app
File "/home/workspace/my_app/__init__.py", line 20, in <module>
__version__ = metadata.version("my_app")
File "/usr/local/lib/python3.9/importlib/metadata.py", line 569, in version
return distribution(distribution_name).version
File "/usr/local/lib/python3.9/importlib/metadata.py", line 542, in distribution
return Distribution.from_name(distribution_name)
File "/usr/local/lib/python3.9/importlib/metadata.py", line 196, in from_name
raise PackageNotFoundError(name)
importlib.metadata.PackageNotFoundError: my_app
</code></pre>
<p>The line <code>/home/workspace/my_app/__init__.py</code> that is causing the error is:</p>
<pre><code>__version__ = metadata.version("my_app")
</code></pre>
<p>and my setup.cfg looks like:</p>
<pre><code>[metadata]
name = my_app
version = 1.0.0
...
[options]
packages =
my_app
</code></pre>
<p>I've tried multiple things but I always get the same error after building the image. It is like it is not recognising the package or the setuptools is not being loaded properly.</p>
<p>Am I missing something or doing something wrong? Maybe should I include the <code>RUN pip install .</code> in dockerfile after copying the app files in order to install the package?</p>
<p>Thank you very much in advance!</p>
| <python><docker><python-import><setuptools><python-packaging> | 2023-07-10 08:37:47 | 1 | 369 | P. Solar |
76,651,878 | 2,699,574 | SQL Alchemy relationship issue | <p>I have the schema definition for postgres:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy.orm import declarative_base
from services.db import DB
from sqlalchemy import Column, String, text, DateTime, ForeignKey
from sqlalchemy.orm import relationship
import uuid
Base = declarative_base()
class UserStatus(Base):
__tablename__ = 'user_statuses'
__table_args__ = {'schema': 'public'} # Specify the schema name
code = Column(String, primary_key=True)
label = Column(String, nullable=False)
users = relationship('User')
class User(Base):
__tablename__ = 'users'
__table_args__ = {'schema': 'public'} # Specify the schema name
id = Column(
String,
primary_key=True,
default=uuid.uuid4()
)
first_name = Column(String, nullable=False)
last_name = Column(String, nullable=False)
email = Column(String, nullable=False)
password = Column(String, nullable=False)
reset_token = Column(String, nullable=True)
status_code = Column(String, ForeignKey(
"user_statuses.code"))
created_at = Column(DateTime, nullable=False)
updated_at = Column(DateTime, nullable=True)
</code></pre>
<p>The problem is I keep getting an error about the relation, like this:</p>
<pre><code>sqlalchemy.exc.NoReferencedTableError: Foreign key associated with column 'users.status_code' could not find table 'user_statuses' with which to generate a foreign key to target column 'code'
</code></pre>
<p>I am using Alembic for migration autogeneration, and even if I create the tables from the DB and try to insert, I get a failure for the relationship.</p>
<p><strong>What am I doing wrong here?</strong></p>
<p>the code can be found here: <a href="https://github.com/rode093/fastapi-strawberry-graphql-template" rel="nofollow noreferrer">https://github.com/rode093/fastapi-strawberry-graphql-template</a></p>
<p>in the branch defining-db-relationship</p>
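<p>When <code>__table_args__</code> places a table in an explicit schema, the string passed to <code>ForeignKey</code> must be schema-qualified as <code>schema.table.column</code> — <code>"user_statuses.code"</code> is looked up without the schema and fails. A minimal sketch of just the pieces involved (separately, note that <code>default=uuid.uuid4()</code> in the original calls the function once at class-definition time, giving every row the same id; <code>default=lambda: str(uuid.uuid4())</code> generates a fresh one per row):</p>

```python
from sqlalchemy import Column, ForeignKey, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class UserStatus(Base):
    __tablename__ = 'user_statuses'
    __table_args__ = {'schema': 'public'}
    code = Column(String, primary_key=True)
    users = relationship('User')

class User(Base):
    __tablename__ = 'users'
    __table_args__ = {'schema': 'public'}
    id = Column(String, primary_key=True)
    # Schema-qualified target: schema.table.column
    status_code = Column(String, ForeignKey('public.user_statuses.code'))

# The foreign key now resolves without NoReferencedTableError:
fk = list(User.__table__.c.status_code.foreign_keys)[0]
print(fk.column is UserStatus.__table__.c.code)  # True
```

<p>Alternatively, pass the column object itself — <code>ForeignKey(UserStatus.code)</code> — which sidesteps the string lookup entirely.</p>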
| <python><python-3.x><sqlalchemy><alembic> | 2023-07-10 08:12:36 | 0 | 406 | Rode093 |
76,651,708 | 6,649,591 | post requests in python | <p>I want to do a POST request in Python with basic authentication AND a token. I have already tested the POST in Postman and it worked, but with Python I always get status code 403.</p>
<p>For this API I have to do...</p>
<ul>
<li>first fetch a token from the header with a GET request</li>
<li>use this token in header for POST request</li>
</ul>
<pre><code>auth = HTTPBasicAuth('User', 'Password')
token = requests.get(URL_token, auth=auth, headers={"x-csrf-token":"FETCH"}).headers['x-csrf-token']
requests.post(POST_url, data=json.dumps(test_data), headers={"x-csrf-token":token}, auth=auth)
</code></pre>
<p><code>test_data</code> is a list and, again, it works in Postman.</p>
<p>Is there something which I am doing wrong in the POST?</p>
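<p>One likely difference from Postman: this style of CSRF token is typically bound to the session cookie set by the GET, and two independent top-level <code>requests</code> calls don't share cookies. Reusing a single <code>requests.Session</code> for both calls (and sending an explicit JSON <code>Content-Type</code>, which <code>data=json.dumps(...)</code> does not set on its own) often resolves the 403 — a hedged sketch, with the URL and data names taken from the question:</p>

```python
import json
import requests
from requests.auth import HTTPBasicAuth

def post_with_csrf(session, token_url, post_url, payload):
    # The cookies the server sets here stay on `session` for the POST.
    token = session.get(token_url,
                        headers={"x-csrf-token": "FETCH"}).headers["x-csrf-token"]
    return session.post(
        post_url,
        data=json.dumps(payload),
        headers={"x-csrf-token": token, "Content-Type": "application/json"},
    )

# Usage sketch (not executed here):
# session = requests.Session()
# session.auth = HTTPBasicAuth('User', 'Password')
# resp = post_with_csrf(session, URL_token, POST_url, test_data)
```

<p>Setting <code>session.auth</code> once also keeps the basic-auth header consistent across both requests, mirroring what Postman does in a single tab.</p>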
| <python><python-requests><request> | 2023-07-10 07:50:04 | 2 | 487 | Christian |
76,651,622 | 9,757,174 | Save firestore SERVER_TIMESTAMP without nanoseconds | <p>I am saving the createTime as a <code>firestore.SERVER_TIMESTAMP</code>, which stores the timestamp with nanosecond precision. That is more precision than required, and I would like to instead store it in a format that can be serialized by FastAPI in Python - preferably a plain datetime. Right now, I get the following error: <code>TypeError: Type is not JSON serializable: DatetimeWithNanoseconds</code>.</p>
<p>I can fix the above error using a custom JSON encoder, but I would like to achieve this without one.</p>
<p>Is there a way I can go about it?</p>
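<p><code>DatetimeWithNanoseconds</code> is a subclass of <code>datetime.datetime</code>, so one option is to down-convert it to a plain, microsecond-precision <code>datetime</code> before handing it to the response model. A sketch using a stand-in subclass, since the real Firestore class behaves the same way for this purpose:</p>

```python
from datetime import datetime, timezone

def to_plain_datetime(ts: datetime) -> datetime:
    """Drop subclass (and sub-microsecond) detail, keep an aware datetime."""
    return datetime.fromtimestamp(ts.timestamp(), tz=timezone.utc)

# Illustration with a hypothetical stand-in for DatetimeWithNanoseconds:
class DatetimeSubclass(datetime):
    pass

raw = DatetimeSubclass(2023, 7, 10, 7, 37, 41, tzinfo=timezone.utc)
plain = to_plain_datetime(raw)
print(type(plain) is datetime, plain.isoformat())
```

<p>FastAPI serializes plain <code>datetime</code> values out of the box, so converting at the read boundary avoids touching the encoder configuration.</p>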
| <python><google-cloud-firestore><fastapi> | 2023-07-10 07:37:41 | 1 | 1,086 | Prakhar Rathi |
76,651,385 | 8,537,993 | Use own __str__() method instead of model's through which object was accessed in Django | <p>I have such models:</p>
<pre><code>class Location(models.Model):
pass
class Place(Location):
name = models.CharField(...)
def __str__(self):
return str(self.name)
class Coordinates(Location):
x = models.DecimalField(...)
y = models.DecimalField(...)
def __str__(self):
        return f"({self.x}, {self.y})"
</code></pre>
<p>If I try to query <code>Location.objects.get(pk=1).__str__()</code>, I get <code>Location object (1)</code> instead of the expected <code>(123, 456)</code>.</p>
<p>How can I make objects to use their own <code>__str__()</code> method, instead of model's through which I've accessed the object?</p>
| <python><django><class><django-models> | 2023-07-10 07:04:28 | 1 | 303 | Simonas |
76,651,246 | 3,099,733 | Is there a smart way in Python to check if a input value match one of the class variables? | <p>Here is a naive implementation.</p>
<pre class="lang-py prettyprint-override"><code>class DataFormat:
CP2K_OUTPUT_DIR = 'cp2k/output_dir'
VASP_OUTPUT_DIR = 'vasp/output_dir'
LAMMPS_OUTPUT_DIR = 'lammps/output_dir'
CP2K_OUTPUT = 'cp2k/output'
VASP_OUTPUT = 'vasp/xml'
EXTXYZ = 'extxyz'
@classmethod
def is_valid(cls, format: str):
return format in [
cls.CP2K_OUTPUT_DIR,
cls.VASP_OUTPUT_DIR,
cls.LAMMPS_OUTPUT_DIR,
cls.CP2K_OUTPUT,
cls.VASP_OUTPUT,
cls.EXTXYZ,
]
</code></pre>
<p>Is there a smart way to implement <code>is_valid</code> so that I don't need to update it when a new data format is supported?</p>
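<p>One option is to keep the formats as a real <code>Enum</code> and derive validity from the members themselves, so adding a format needs no change to <code>is_valid</code> — a sketch:</p>

```python
from enum import Enum

class DataFormat(Enum):
    CP2K_OUTPUT_DIR = 'cp2k/output_dir'
    VASP_OUTPUT_DIR = 'vasp/output_dir'
    LAMMPS_OUTPUT_DIR = 'lammps/output_dir'
    CP2K_OUTPUT = 'cp2k/output'
    VASP_OUTPUT = 'vasp/xml'
    EXTXYZ = 'extxyz'

    @classmethod
    def is_valid(cls, fmt: str) -> bool:
        # Iterating the enum always reflects the current member list.
        return any(fmt == member.value for member in cls)

print(DataFormat.is_valid('extxyz'), DataFormat.is_valid('nope'))  # True False
```

<p>On Python 3.12+, value containment (<code>'extxyz' in DataFormat</code>) also works directly; the explicit <code>any(...)</code> is version-agnostic.</p>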
| <python><dry> | 2023-07-10 06:43:15 | 0 | 1,959 | link89 |
76,651,201 | 10,483,893 | numpy with a list of Dict: Syntax to filter elements? | <p>Say I have a NumPy array where each element is a dict:</p>
<pre><code>data = [
{
'Account' : '111',
'RIC' : 'AAPL.OQ',
'Position' : 100,
'isActive' : True,
'Rating' : math.nan
},
{
'Account' : '111',
'RIC' : 'MSFT.OQ',
'Position' : 200,
'isActive' : False,
'Rating' : 73
},
{
'Account' : '111',
'RIC' : 'IBM.N',
'Position' : 300,
'isActive' : True,
'Rating' : math.inf
},
{
'Account' : '222',
'RIC' : 'AAPL.OQ',
'Position' : 1000,
'isActive' : False,
'Rating' : 89
},
{
'Account' : '222',
'RIC' : 'MSFT.OQ',
'Position' : 2000,
'isActive' : True,
'Rating' : np.nan
},
{
'Account' : '222',
'RIC' : 'IBM.N',
'Position' : 3000,
'isActive' : True,
'Rating' : 59
}
]
data = np.array(data)
</code></pre>
<p>How do I filter, for example, to return only elements where <code>isActive==True</code>?</p>
<p>Unlike pandas, numpy doesn't support syntax like <code>data[data.isActive==True]</code>.</p>
<p>I am looking for numpy syntax, and am <strong>not</strong> looking for a solution that converts the above <code>data</code> to a plain Python list (then uses a list comprehension) or to pandas.</p>
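<p>Staying in NumPy, <code>np.vectorize</code> can build the boolean mask, which then indexes the object array the usual way. Caveat: an object array still runs Python code per element under the hood; a structured array would give truly vectorized field access (<code>arr[arr['isActive']]</code>). A sketch with a trimmed-down version of the data:</p>

```python
import numpy as np

data = np.array([
    {'RIC': 'AAPL.OQ', 'Position': 100, 'isActive': True},
    {'RIC': 'MSFT.OQ', 'Position': 200, 'isActive': False},
    {'RIC': 'IBM.N',   'Position': 300, 'isActive': True},
])

# Build a boolean mask NumPy-style, then index with it as usual.
mask = np.vectorize(lambda d: d['isActive'])(data)
active = data[mask]
print([d['RIC'] for d in active])  # ['AAPL.OQ', 'IBM.N']
```

<p>The same pattern works for any per-element predicate (e.g. <code>d['Position'] > 150</code>), and masks can be combined with <code>&</code>/<code>|</code> as with ordinary NumPy boolean arrays.</p>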
<p>Thanks</p>
| <python><numpy> | 2023-07-10 06:36:48 | 2 | 1,404 | user3761555 |
76,650,966 | 1,668,622 | With poetry and a pyproject.toml file, how do I distribute/install files outside site-packages, e.g. .desktop files or icons? | <p><a href="https://stackoverflow.com/questions/501597/how-to-distribute-desktop-files-and-icons-for-a-python-package-in-gnome-with">This question</a> also asks, what I'm looking for but it's quite old and there hasn't been Poetry yet.</p>
<p>In the <a href="https://python-poetry.org/docs/pyproject/" rel="nofollow noreferrer">documentation</a> I didn't find a way to tell Poetry to create packages which upon installation distribute/create files outside the <code>site-packages</code> folder (except <code>tool.poetry.scripts</code>, which creates executables in <code>~/.local/bin</code>).</p>
<p>How do I make <code>pip install .. <my-package></code> create e.g. <code>~/.local/share/applications/my-package.desktop</code> with Poetry?</p>
| <python><desktop-application><gnome><python-poetry><pyproject.toml> | 2023-07-10 05:55:27 | 0 | 9,958 | frans |
76,650,856 | 7,070,863 | No module named 'pydantic_core._pydantic_core' in AWS Lambda though library is installed for FastAPI based code | <p>AWS lambda deployment of FastAPI gives the following error:</p>
<pre><code>[ERROR] Runtime.ImportModuleError: Unable to import module 'users_crud': No module named 'pydantic_core._pydantic_core'
Traceback (most recent call last):
</code></pre>
<p>Though the pydantic library is already installed. I am using Python 3.10, which is now supported by AWS.</p>
| <python><aws-lambda><fastapi><pydantic> | 2023-07-10 05:28:35 | 16 | 2,979 | Suhail Abdul Rehman Chougule |
76,650,659 | 1,492,229 | How to join 2 dataframes and pivot them | <p>how to join 2 dataframes and pivot them</p>
<p>I have this <strong>dfICU</strong> dataframe that has a list of <em>ICU</em> units in a hospital:</p>
<pre><code>ICU
A1
A2
A3
B1
B2Closed
B2Covid
B7
C1West
C2South
C3
.
.
.
P53Child
</code></pre>
<p>The other dataframe, <strong>dfPts</strong>, has patient info:</p>
<pre><code>PtsID VisitID ICU Frequency
934 15 A3 4
934 15 C2South 2
934 62 B2Covid 5
934 62 A2 6
882 35 C2South 7
882 35 C3 2
882 35 A2 9
882 91 P53Child 5
105 44 C2South 2
105 80 B7 8
</code></pre>
<p>I am trying to put them both in a single pivoted dataframe, so that if an ICU unit does not exist in <strong>dfPts</strong> it shows 0 Frequency.</p>
<p>Something like this</p>
<pre><code>PtsID VisitID A1 A2 A3 B1 B2Closed B2Covid B7 C1West C2South C3 .... P53Child
934 15 0 0 4 0 0 0 0 0 2 0 0
934 62 0 6 0 0 0 5 0 0 0 0 0
882 35 0 0 0 0 0 0 0 0 7 2 0
882 91 0 0 0 0 0 0 0 0 0 0 5
105 44 0 0 0 0 0 0 0 0 2 0 0
105 80 0 0 0 0 0 0 8 0 0 0 0
</code></pre>
<p>I started by pivoting <strong>dfPts</strong>, but that did not add all the ICU units in <strong>dfICU</strong>, because some <em>ICUs</em> are <em>0</em> for all patients.</p>
<p>Here is what I have done so far; I did not know what to do next:</p>
<pre><code>df = dfPts.set_index(['PtsID','VisitID']).pivot(columns='ICU')['Frequency']
df[np.isnan(df)] = 0
</code></pre>
<p>How to do that?</p>
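<p><code>pivot_table</code> with <code>fill_value=0</code>, followed by a <code>reindex</code> over the full ICU list, adds the all-zero columns in one pass — a sketch with a trimmed-down version of the data from the question:</p>

```python
import pandas as pd

dfICU = pd.DataFrame({'ICU': ['A1', 'A2', 'A3', 'B7', 'C2South', 'C3', 'P53Child']})
dfPts = pd.DataFrame({
    'PtsID':     [934, 934, 882, 105],
    'VisitID':   [15, 15, 35, 80],
    'ICU':       ['A3', 'C2South', 'C3', 'B7'],
    'Frequency': [4, 2, 2, 8],
})

out = (dfPts.pivot_table(index=['PtsID', 'VisitID'], columns='ICU',
                         values='Frequency', aggfunc='sum', fill_value=0)
            .reindex(columns=dfICU['ICU'], fill_value=0)  # adds the all-zero ICUs
            .reset_index())
print(out)
```

<p><code>reindex</code> also fixes the column order to match <strong>dfICU</strong>, so ICUs no patient visited still appear as zero-filled columns.</p>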
| <python><pandas><dataframe> | 2023-07-10 04:18:07 | 3 | 8,150 | asmgx |
76,650,653 | 3,247,006 | How to pass JavaScript variables to Django template tags and filters? | <p>I could pass <code>Hello</code> to <a href="https://docs.djangoproject.com/en/4.2/ref/templates/builtins/#with" rel="nofollow noreferrer">with</a> tag's <code>dj_val</code> and <a href="https://docs.djangoproject.com/en/4.2/ref/templates/builtins/#upper" rel="nofollow noreferrer">upper</a> filter in <code><script></script></code> in <code>index.html</code>, then <code>Hello</code> and <code>HELLO</code> was displayed on console as shown below:</p>
<pre><code>{# index.html #}
<script>
{% with dj_val="Hello" %}
console.log("{{ dj_val }}") // Hello
{% endwith %}
console.log("{{ "Hello"|upper }}") // HELLO
</script>
</code></pre>
<p>But, I could not pass JavaScript's <code>js_val</code> set <code>Hello</code> to <code>with</code> tag's <code>dj_val</code> and <code>upper</code> filter in <code><script></script></code> in <code>index.html</code>, then nothing was displayed on console as shown below:</p>
<pre><code>{# index.html #}
<script>
let js_val = "Hello"
{% with dj_val=js_val %}
console.log("{{ dj_val }}") // Nothing
{% endwith %}
console.log("{{ js_val|upper }}") // Nothing
</script>
</code></pre>
<p>So, how can I pass JavaScript's <code>js_val</code> set <code>Hello</code> to <code>with</code> tag's <code>dj_val</code> and <code>upper</code> filter to display <code>Hello</code> and <code>HELLO</code> on console?</p>
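<p>It can't be done in that direction: the template is rendered on the server before the browser ever executes any JavaScript, so tags and filters can never see <code>js_val</code>. Data only flows server → client (for structured data, Django's <code>json_script</code> filter is the standard route), and string transformations on JS values belong in JS — a sketch:</p>

```javascript
// Rendered template output reaches the browser first; only then does JS run.
// So do the transformation client-side instead of in a template filter.
const jsVal = "Hello";            // in a template: const jsVal = "{{ dj_val }}";
console.log(jsVal);               // Hello
console.log(jsVal.toUpperCase()); // HELLO
```

<p>If the value originates in Django, render it into the script (as in the comment above) or emit it with <code>{{ value|json_script:"my-data" }}</code> and read it back with <code>JSON.parse(document.getElementById("my-data").textContent)</code>.</p>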
| <javascript><python><django><django-templates><templatetags> | 2023-07-10 04:16:41 | 0 | 42,516 | Super Kai - Kazuya Ito |
76,650,424 | 4,688,190 | Undetected Chromedriver not applying any options | <p>Undetected Chromedriver is not applying my options. The window size is not altered and the extension is not loaded. Same problem on Linux and Windows.</p>
<pre><code>from webdriver_manager.chrome import ChromeDriverManager
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
import undetected_chromedriver as uc
options = uc.ChromeOptions()
options.add_extension(proxies_extension)
options.add_argument("--window-size=500,500")
driver = uc.Chrome(service=Service(ChromeDriverManager().install()), options=options)
</code></pre>
<p>It works perfectly when I substitute the above for <code>options = webdriver.ChromeOptions()</code> and <code>driver = webdriver.Chrome(...</code></p>
<p>What could I possibly be doing wrong? Thanks in advance.</p>
| <python><selenium-webdriver><undetected-chromedriver> | 2023-07-10 02:55:20 | 1 | 678 | Ned Hulton |
76,650,340 | 21,107,707 | How to detect leaves from an image using cv2.Canny edges? | <p>I have an image, which you can view here: <a href="https://i.sstatic.net/U9ySl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U9ySl.png" alt="image with edges detected" /></a></p>
<p>What I want to do is extract all the leaves from the image. However, I am stuck on how to get these shapes. I've tried using contours, and finding the contours with the largest areas, but it turns out none of these contours are connected, making that impossible. Would I be able to detect which parts of this image have the biggest splotches of black, essentially showing me the leaves (I could use a color filter on the original image to remove the hand and label)?</p>
| <python><opencv><image-segmentation><canny-operator> | 2023-07-10 02:23:37 | 2 | 801 | vs07 |
76,650,291 | 1,643,537 | Grouping lat and long into smaller areas/radius to get total within the radius in Python | <p>I have a long list of longitudes and latitudes of places. I would like to group the lat/long into smaller areas, let's say a 100 m radius, and get the total count of points within each area/radius.</p>
<pre><code>Example table:
id lat long total
1 1.5021033 103.6241121 1
2 1.502434 103.6239708 1
3 3.1319197 101.6840589 1
Desired result:
lat long total
1.502 103.624 2
3.132 101.684 1
</code></pre>
<p>Based on what I read from GIS wiki (<a href="http://wiki.gis.com/wiki/index.php/Decimal_degrees" rel="nofollow noreferrer">http://wiki.gis.com/wiki/index.php/Decimal_degrees</a>), the radius can change based on the decimal point. So my question would be is it enough to round the long and lat to 3 decimal if I wanted to group the location by 100m radius?</p>
<pre class="lang-py prettyprint-override"><code># Example of code snippet
import pandas as pd
header = ['id', 'lat', 'long', 'total']
content = [[1,1.5021033,103.6241121,1], [2,1.502434,103.6239708,1], [3,3.1319197,101.6840589,1]]
df = pd.DataFrame(content, columns=header)
df['lat'] = df['lat'].round(decimals = 3)
df['long'] = df['long'].round(decimals = 3)
display(df.drop(columns='id').groupby(by=["lat", "long"]).count())
</code></pre>
<p>Is the solution as simple as this without losing too much of the accuracy (within the 100m radius)?</p>
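<p>Roughly yes for latitude: rounding to 3 decimals snaps points onto a grid of ~111 m cells. Two caveats: it is a grid, not a radius (two points a metre apart can still fall into different cells, so a true 100 m-radius grouping needs clustering, e.g. DBSCAN with the haversine metric), and longitude cells shrink by cos(latitude) — negligible at your data's ~1.5°–3°N, significant at high latitudes. The arithmetic:</p>

```python
import math

def cell_size_m(lat_deg, decimals=3):
    """Approximate size of a lat/long rounding 'cell' at a given latitude."""
    step = 10 ** -decimals                 # 0.001 degrees
    m_per_deg_lat = 111_320                # ~constant everywhere
    m_per_deg_lon = 111_320 * math.cos(math.radians(lat_deg))
    return step * m_per_deg_lat, step * m_per_deg_lon

print(cell_size_m(1.5))   # ~ (111.3, 111.3) near the equator
print(cell_size_m(60.0))  # longitude cell shrinks to ~ 55.7 m
```

<p>So for equatorial data and an approximate 100 m grouping, the round-then-groupby approach in the snippet is a reasonable shortcut.</p>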
| <python><pandas><group-by><geocoding> | 2023-07-10 02:01:57 | 1 | 3,205 | Cryssie |
76,650,205 | 839,733 | Why does sort ignore the total ordering methods defined in my class? | <p>Given the following class:</p>
<pre><code>@functools.total_ordering
class Entry:
def __init__(self, nr: list[int, int] = None, p: int = 0) -> None:
self.nr = nr if nr is not None else [0, 0]
self.p = p
def __repr__(self) -> str:
return f"Entry(nr={self.nr}, p={self.p})"
def __eq__(self, other: Entry) -> bool:
return (self.nr[0] == other.nr[0] and self.nr[1] >= other.nr[1]) or (self.nr[0] > other.nr[0])
def __gt__(self, other: Entry) -> bool:
return (self.nr[0] == other.nr[0] and self.nr[1] < other.nr[1]) or (self.nr[0] < other.nr[0])
</code></pre>
<p>And a list of entries:</p>
<pre><code>L = [
Entry(nr=[98, 111], p=0),
Entry(nr=[111, 98], p=1),
Entry(nr=[98, 111], p=2),
Entry(nr=[111, 99], p=3),
Entry(nr=[99, 101], p=4),
Entry(nr=[101, 108], p=5),
Entry(nr=[108, -1], p=6)
]
</code></pre>
<p>Calling <code>L.sort()</code> is expected to produce the following ordering (only <code>p</code> values shown for brevity): <code>[0, 2, 4, 5, 6, 1, 3]</code>.</p>
<p><strong>But nothing happens! Why not?</strong></p>
<p>I have also experimented with making the class a <code>dataclass</code> by replacing the <code>__init__</code> with the following (and adding a <code>dataclass</code> annotation to the class, of course), but that didn't change anything. I'd prefer it to be a <code>dataclass</code>, so, that I don't have to provide an implementation of <code>__repr__</code>.</p>
<pre><code>nr: list[int, int] = dataclasses.field(default_factory=lambda: [0, 0])
p: int = 0
</code></pre>
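<p>Two things worth checking: <code>list.sort()</code> sorts in place and returns <code>None</code> (so <code>print(L.sort())</code> prints <code>None</code> even though <code>L</code> changed), and <code>sort</code> only ever calls <code>__lt__</code>, which <code>total_ordering</code> derives from the supplied <code>__eq__</code>/<code>__gt__</code> — any inconsistency between those two silently produces odd orderings. Since the expected order is plain ascending on the pair, a key function sidesteps the comparison methods entirely (and a <code>dataclass</code> supplies the <code>__repr__</code> for free) — a sketch:</p>

```python
from dataclasses import dataclass, field

@dataclass
class Entry:
    nr: list = field(default_factory=lambda: [0, 0])
    p: int = 0

L = [Entry([98, 111], 0), Entry([111, 98], 1), Entry([98, 111], 2),
     Entry([111, 99], 3), Entry([99, 101], 4), Entry([101, 108], 5),
     Entry([108, -1], 6)]

L.sort(key=lambda e: (e.nr[0], e.nr[1]))  # in place; returns None
print([e.p for e in L])  # [0, 2, 4, 5, 6, 1, 3]
```

<p>Python's sort is stable, so the two equal <code>[98, 111]</code> entries keep their original relative order (p=0 before p=2), matching the expected result exactly.</p>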
| <python><sorting><python-dataclasses> | 2023-07-10 01:30:41 | 1 | 25,239 | Abhijit Sarkar |
76,650,157 | 9,947,159 | Assign class variables empty string and list them in order of definition | <p>I am looking for a way to initialize a bunch of variables such that I can also print an array of those variables in the order they were defined. I thought using an <code>enum class</code> might help with that... so I took this working code that I cobbled together...</p>
<pre><code>from enum import Enum
class all_vars(Enum):
var1='1'
var2='2'
var3='3'
var4='4'
var_array=[]
for items in all_vars:
var_array.append(items.name)
print(var_array) --outputs ['var1', 'var2', 'var3', 'var4']
</code></pre>
<p>And modified it to my use case--<strong>initialize with empty string instead</strong></p>
<pre><code>from enum import Enum
class all_vars(Enum):
var1=''
var2=''
var3=''
var4=''
var_array=[]
for items in all_vars:
var_array.append(items.name)
print(var_array) --outputs ['var1'] --need all the variables here
</code></pre>
<p>How do I make it include all the variables in the array in the second case. If <code>Enum</code> isn't the way to go about this, I'm open to other alternatives. Thanks!</p>
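<p><code>Enum</code> treats members with equal values as <em>aliases</em> of the first one, which is exactly why only <code>var1</code> survives when every value is <code>''</code>. If the goal is just named empty-string attributes listed in definition order, a plain class already provides that, since class namespaces preserve definition order — a sketch:</p>

```python
class AllVars:
    var1 = ''
    var2 = ''
    var3 = ''
    var4 = ''

# Class namespaces preserve definition order (guaranteed since Python 3.7),
# so vars() yields the attributes in the order they were written.
var_array = [name for name in vars(AllVars) if not name.startswith('_')]
print(var_array)  # ['var1', 'var2', 'var3', 'var4']
```

<p>If distinct <code>Enum</code> members with equal values are genuinely needed, <code>enum.auto()</code> for the values or (on Python 3.12+, as I understand it) decorating with <code>@enum.verify</code>-free unique values are the usual alternatives; the plain-class route avoids the aliasing rule entirely.</p>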
| <python><python-3.x> | 2023-07-10 01:11:46 | 1 | 5,863 | Rajat |
76,650,031 | 19,366,064 | Python: How to import modules that depends on other modules from another directory | <p>This is my folder structure:</p>
<pre><code>src
file1.py
file2.py
tests
test1.py
</code></pre>
<pre><code>#file1.py
from file2 import module2
modules = 'module1' + module2
</code></pre>
<pre><code>#file2.py
module2 = 'module2'
</code></pre>
<pre><code>#test1.py
import sys
sys.path.append('..')
from src.file1 import modules
print(modules)
</code></pre>
<p>I cannot run <code>from src.file1 import modules</code> because it raises <code>ModuleNotFoundError: No module named 'file2'</code>.</p>
<p>How can I import modules from another folder where the module that I am importing also imports modules from other files?</p>
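<p>One hedged sketch of a fix: put the <code>src</code> directory itself on <code>sys.path</code> (not just its parent), so that <code>file1</code>'s bare <code>from file2 import module2</code> resolves as a top-level import. The snippet recreates the layout in a temporary directory so it runs anywhere:</p>

```python
import os
import sys
import tempfile

# Recreate the question's src/ layout in a temp dir so the sketch is self-contained.
root = tempfile.mkdtemp()
src = os.path.join(root, 'src')
os.makedirs(src)
with open(os.path.join(src, 'file2.py'), 'w') as f:
    f.write("module2 = 'module2'\n")
with open(os.path.join(src, 'file1.py'), 'w') as f:
    f.write("from file2 import module2\nmodules = 'module1' + module2\n")

# The fix: add src/ itself to sys.path, not just its parent directory.
sys.path.insert(0, src)
from file1 import modules

print(modules)  # module1module2
```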
| <python><visual-studio-code> | 2023-07-10 00:15:46 | 2 | 544 | Michael Xia |
76,649,888 | 4,850,343 | How do I copy an image from the output in Jupyter Notebook 7+? | <p>I've been working with Jupyter Notebooks for quite a while. When working with visualisations, I like to copy the output image from a cell by right clicking the image and selecting "Copy Image" from the context menu:</p>
<p><a href="https://i.sstatic.net/BANkdm.png" rel="noreferrer"><img src="https://i.sstatic.net/BANkdm.png" alt="enter image description here" /></a></p>
<p>I like working with the direct copy from the notebook, especially for answering questions on Stack Overflow, so I'd rather not store them to disk. That would be a real reason to revert to legacy notebooks for me.</p>
<p>However with the <a href="https://jupyter-notebook.readthedocs.io/en/latest/notebook_7_features.html" rel="noreferrer">Notebook 7 migration</a> coming, I gave the beta a try by running <code>pip install notebook --pre --upgrade</code> and to my surprise I can't right-click and copy the output image, because the new Jupyter context menu pops up instead.</p>
<p><a href="https://i.sstatic.net/D3IXAm.png" rel="noreferrer"><img src="https://i.sstatic.net/D3IXAm.png" alt="enter image description here" /></a></p>
<p>This really breaks my workflow. How can I copy an image from the output of a cell in notebook 7+?</p>
| <python><jupyter-notebook><copy-paste> | 2023-07-09 23:06:27 | 1 | 17,634 | Sebastian Wozny |
76,649,721 | 6,213,939 | No adding of token in swagger for flask | <p>This is my code:</p>
<pre><code>authorizations = {
    'Basic Auth': {
        'type': 'basic',
        'in': 'header',
        'name': 'Authorization'
    },
}

task_namespace = Namespace('task', security='Authorization', authorizations=authorizations, description='A namespace for tasks')

@task_namespace.route('/')
class TaskGetResource(Resource):
    @jwt_required(refresh=True)
    def get(self):
        user_id = get_jwt_identity()
        return Task.query.filter_by(
            user_id=user_id
        )
</code></pre>
<p>When I run the flask app and go to the Swagger URL, I authorize with email and password and then run the <code>api/task</code> endpoint from Swagger, but the header token does not get added.</p>
<p>Complete code is <a href="https://github.com/eadaradhiraj/flask-tasks-jwt" rel="nofollow noreferrer">https://github.com/eadaradhiraj/flask-tasks-jwt</a></p>
<p><a href="https://i.sstatic.net/LPDiX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LPDiX.png" alt="enter image description here" /></a></p>
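<p>A hedged sketch of the usual flask-restx setup for JWTs: the scheme should be an <code>apiKey</code>-type header rather than <code>type: 'basic'</code>, and each protected method has to reference it via <code>security</code> for Swagger UI to attach the header (the <code>'Bearer Auth'</code> name below is an arbitrary label):</p>

```python
# Sketch for flask-restx; 'Bearer Auth' is an arbitrary scheme name.
authorizations = {
    'Bearer Auth': {
        'type': 'apiKey',        # a JWT is sent as an opaque header value
        'in': 'header',
        'name': 'Authorization',
        'description': "Paste: Bearer <your JWT>",
    },
}

# The namespace only advertises the schemes; each method still needs
# @task_namespace.doc(security='Bearer Auth') (or security=... on the Api)
# for Swagger UI to actually send the header with requests.
print(sorted(authorizations))  # ['Bearer Auth']
```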
| <python><flask><swagger> | 2023-07-09 22:06:09 | 2 | 943 | Echchama Nayak |
76,649,671 | 9,008,162 | How do I round the numbers in a df column correctly in Python? | <p>The following is my df: <a href="https://www.dropbox.com/s/nbez3esbo8fedmf/aapl.csv?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/nbez3esbo8fedmf/aapl.csv?dl=0</a></p>
<pre><code>date ticker open high low close adjClose
2019-07-08 AAPL 50.2025 50.35 49.6025 50.005 48.516
2019-07-09 AAPL 49.8 50.3775 49.7025 50.31 48.8119
2019-07-10 AAPL 50.4625 50.9325 50.39 50.8075 49.2946
2019-07-11 AAPL 50.8275 51.0975 50.4275 50.4375 48.9356
2019-07-12 AAPL 50.6125 51.0 50.55 50.825 49.3116
2019-07-15 AAPL 51.0225 51.4675 51.0 51.3025 49.7748
2019-07-16 AAPL 51.1475 51.5275 50.875 51.125 49.6026
2019-07-17 AAPL 51.0125 51.2725 50.8175 50.8375 49.3237
2019-07-18 AAPL 51.0 51.47 50.925 51.415 49.884
</code></pre>
<p>I'd like to round the close column to 2 decimal places. I tried the following:</p>
<pre><code>df['close'] = round(df['close'], 2)
df.loc[:, 'close'] = df.loc[:, 'close'].round(2)
df.loc[:, 'close'] = df.loc[:, 'close'].apply(lambda x: Decimal(x).quantize(Decimal('0.01'), rounding=ROUND_HALF_UP))
df.loc[:, 'close'] = df.loc[:, 'close'].apply(lambda x: Decimal(x).quantize(Decimal('0.01')))
df.loc[:, 'close'] = np.round(df.loc[:, 'close'], 2)
</code></pre>
<p>But the best I can do is this:</p>
<pre><code>date ticker open high low close adjClose
2019-07-08 AAPL 50.2025 50.35 49.6025 50.01 48.516
2019-07-09 AAPL 49.8 50.3775 49.7025 50.31 48.8119
2019-07-10 AAPL 50.4625 50.9325 50.39 50.81 49.2946
2019-07-11 AAPL 50.8275 51.0975 50.4275 50.44 48.9356
2019-07-12 AAPL 50.6125 51.0 50.55 50.83 49.3116
2019-07-15 AAPL 51.0225 51.4675 51.0 51.30 49.7748
2019-07-16 AAPL 51.1475 51.5275 50.875 51.13 49.6026
2019-07-17 AAPL 51.0125 51.2725 50.8175 50.84 49.3237
2019-07-18 AAPL 51.0 51.47 50.925 51.41 49.884
</code></pre>
<p>The date <strong>2019-07-18</strong> should be <code>51.42</code>, but I got <code>51.41</code>. And depending on which of the five ways I used, some can't even round <strong>2019-07-08</strong> <code>50.005</code> & <strong>2019-07-12</strong> <code>50.825</code> appropriately because I got <code>50</code> and <code>50.82</code> instead of <code>50.01</code> and <code>50.83</code>.</p>
<p>So how can I round it properly?</p>
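<p>A hedged explanation: <code>51.415</code> has no exact binary-float representation, so <code>round</code> sees a stored value slightly below <code>51.415</code> and correctly rounds it down. Converting each value through <code>str</code> first recovers the shortest decimal form, after which half-up rounding with <code>Decimal</code> behaves as expected:</p>

```python
from decimal import ROUND_HALF_UP, Decimal

# Decimal(float) exposes the exact stored double, which is a hair below 51.415:
print(Decimal(51.415))

# Going through str() gives Decimal the shortest decimal literal instead:
rounded = Decimal(str(51.415)).quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
print(rounded)  # 51.42

# In the dataframe, a sketch of the same idea:
# df['close'] = df['close'].map(
#     lambda v: float(Decimal(str(v)).quantize(Decimal('0.01'),
#                                              rounding=ROUND_HALF_UP)))
```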
| <python><pandas><dataframe><rounding><rounding-error> | 2023-07-09 21:48:21 | 2 | 775 | saga |
76,649,509 | 11,277,108 | Find element by XPath sometimes works and sometimes doesn't | <p>Given the following slightly pseudo code:</p>
<pre><code>from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions
from selenium.webdriver.support.ui import WebDriverWait
from selenium.common.exceptions import TimeoutException
from selenium.webdriver import ChromeOptions, Chrome

options = ChromeOptions()
driver = Chrome(options=options)
waiter = WebDriverWait(driver, 10)

list_of_urls = [<list_of_urls>]
for url in list_of_urls:
    locator = (By.XPATH, "xpath_element_A")
    element_A_condition = expected_conditions.presence_of_element_located(locator)
    element_A = waiter.until(element_A_condition)
    try:
        locator = (By.XPATH, "xpath_sub_element_A")
        sub_element_A_condition = expected_conditions.presence_of_element_located(locator)
        sub_element_A = waiter.until(sub_element_A_condition)
    except TimeoutException as e:
        raise e
</code></pre>
<p>I'm finding that about 2-3% of the URLs I try to scrape are raising the <code>TimeoutException</code>.</p>
<p>I've tried extending the wait time and I've even tried refreshing the page multiple times and attempting the entire page-scrape again - all to no avail.</p>
<p>To try and get to the bottom of this I put a breakpoint on the final line and ran the code in debugging mode. When the exception was raised and the break point hit I ran <code>waiter.until(sub_element_A_condition)</code> again in the debug terminal and it immediately returned <code>sub_element_A</code>.</p>
<p>I've now repeated this debugging process multiple times and the result is always the same - the <code>TimeoutException</code> is raised and the break point hit but I'm able to immediately run <code>waiter.until(sub_element_A_condition)</code> and it always returns the element.</p>
<p>This is most perplexing. The only thing I think I've done differently when the exceptions were raised was that I switched to the window (I run non-headless) to manually eyeball that the element was on the page. Could that be doing something that causes the element to become visible?</p>
| <python><selenium-webdriver><web-scraping><selenium-chromedriver> | 2023-07-09 21:01:33 | 2 | 1,121 | Jossy |
76,649,501 | 2,173,773 | GitPython: error: Module "git" does not explicitly export attribute "Repo" [attr-defined] | <p>I am using Python 3.10.4, <a href="https://gitpython.readthedocs.io/en/stable/" rel="nofollow noreferrer">GitPython</a> version 3.1.31, <a href="https://mypy.readthedocs.io/en/stable/" rel="nofollow noreferrer">mypy</a> version 1.4.1:</p>
<pre><code>$ pip show GitPython
Name: GitPython
Version: 3.1.31
Location: /home/hakon/.pyenv/versions/3.10.4/lib/python3.10/site-packages
Requires: gitdb
$ python --version
Python 3.10.4
$ mypy --version
mypy 1.4.1 (compiled: yes)
</code></pre>
<p>If I run <code>mypy</code> on this minimal example (<code>git-python-types.py</code>):</p>
<pre><code>import git
repo = git.Repo('some_dir')
</code></pre>
<p>I get the following error:</p>
<pre><code>$ mypy --strict git-python-types.py
git-python-types.py:3: error: Module "git" does not explicitly export attribute "Repo" [attr-defined]
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>Any ideas on why this error occurs and how to fix it?</p>
<h2>Some clues</h2>
<p>I can see the following line in the GitPython <a href="https://github.com/gitpython-developers/GitPython/blob/c09a71e2caefd5c25195b0b2decc8177d658216a/git/__init__.py#L49" rel="nofollow noreferrer">source code</a> :</p>
<pre><code>from git.repo import Repo # @NoMove @IgnorePep8
</code></pre>
<p>but I am not sure if <code>mypy</code> is reading this line or not.</p>
| <python><mypy><python-typing><gitpython> | 2023-07-09 21:00:28 | 2 | 40,918 | Håkon Hægland |
76,649,409 | 2,725,810 | Access denied in OpenSearch Serverless | <p>I am trying to create a minimal working example for working with AWS OpenSearch Serverless. With the help of <a href="https://youtu.be/SUVjrOKYVVk" rel="nofollow noreferrer">this</a> tutorial, this is the code:</p>
<pre class="lang-py prettyprint-override"><code>import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection, AWSV4SignerAuth

host = 'onb565zzbfkjr3spn8v5.us-east-1.aoss.amazonaws.com'
region = 'us-east-1'
credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, region)

client = OpenSearch(
    hosts = [{
        'host': host,
        'port': 443
    }],
    http_auth = auth,
    use_ssl = True,
    verify_certs=True,
    connection_class = RequestsHttpConnection
)

def create_index(index_name):
    index_body = {
        'settings': {
            'index': {
                'number_of_shards': 1
            }
        }
    }
    response = client.indices.create(index_name, body=index_body)
    print('\nCreating index:')
    print(response)

create_index('myindex')
</code></pre>
<p>I have performed the following steps:</p>
<ol>
<li>Created an IAM user that has the policies <code>AmazonOpenSearchServiceFullAccess</code> and <code>AmazonESFullAccess</code> (just in case). I also added two inline policies:</li>
</ol>
<pre><code>{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "aoss:APIAccessAll",
            "Resource": "*"
        }
    ]
}
</code></pre>
<p>and</p>
<pre><code>{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "aoss:DashboardsAccessAll",
            "Resource": "*"
        }
    ]
}
</code></pre>
<p>(for some reason, the latter two permissions are not shown when I create a collection)</p>
<ol start="2">
<li><p>Executed <code>aws configure</code> to provide the keys and the region.</p>
</li>
<li><p>Created a collection with the rule for <code>Public</code> access, the IAM user as the selected principal, and all accesses enabled.</p>
</li>
</ol>
<p>Despite all this, I get 403 (Access denied) when trying to create an index. What could I be missing?</p>
<p><strong>UPDATE</strong> I have now asked the same question in the <a href="https://repost.aws/questions/QU0Khakx3TR_mjnlh5STDmzA/access-denied-403-when-creating-an-index-in-opensearch-serverless" rel="nofollow noreferrer">AWS community</a>.</p>
| <python><amazon-web-services><boto3><aws-sdk><amazon-opensearch> | 2023-07-09 20:36:07 | 2 | 8,211 | AlwaysLearning |
76,649,320 | 5,640,517 | Celery signature/delay AttributeError: 'UUID' object has no attribute 'game_id' | <p>First I chain tasks like this</p>
<pre class="lang-py prettyprint-override"><code>tasks_chain = chain(create_game_folder.si(instance.id))
tasks_chain |= download_game.si(instance.id).set(
    task_id=str(task_id)
)
</code></pre>
<p>But I get this error</p>
<pre><code> logger.debug(f"{game_id=}")
AttributeError: 'UUID' object has no attribute 'game_id'
</code></pre>
<p>This is how the create folder function starts</p>
<pre class="lang-py prettyprint-override"><code>
@shared_task()
def create_game_folder(game_id):
    logger.info("create_game_folder")
    logger.debug(f"{game_id=}")
    game = Game.objects.get(id=game_id)
</code></pre>
<p>Why is it trying to get game_id from UUID? I know that instance.id returns a UUID object but if I do str(instance.id) I get <code>AttributeError: 'str' object has no attribute 'game_id'</code></p>
<p>Is this something .si() or .delay() do in the background and for some reason the stacktrace is showing me the logger line?</p>
<p>Update:</p>
<p>/etc/systemd/system/celery.service</p>
<pre><code>[Unit]
Description=Celery Service
Requires=django-app.service
After=django-app.service
[Service]
Type=forking
User=david
Group=vboxsf
RuntimeDirectory=celery
WorkingDirectory=/var/www/html/django-app.com
ExecStart=poetry run celery -A game_manager multi start worker --loglevel="INFO" --concurrency=5
ExecStop=poetry run celery multi stopwait worker --loglevel="INFO"
ExecReload=poetry run celery -A game_manager multi restart worker --loglevel="INFO" --concurrency=5
Restart=always
[Install]
WantedBy=multi-user.target
</code></pre>
<p>celery config</p>
<pre><code># Celery configuration
CELERY_BROKER_URL = "redis://localhost:6379"
CELERY_RESULT_BACKEND = "redis://localhost:6379"
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TIMEZONE = 'UTC'
CELERY_ENABLE_UTC = True
</code></pre>
<p>/etc/systemd/system/django-app.service</p>
<pre><code>[Unit]
Description=Start Django app
After=network.target
[Service]
User=david
Group=david
Environment="PYTHONUNBUFFERED=TRUE"
WorkingDirectory=/var/www/html/django-app.com
ExecStart=poetry run gunicorn game_manager.wsgi:application --config game_manager/gunicorn_conf.py --reload
[Install]
WantedBy=multi-user.target
</code></pre>
| <python><celery><django-celery> | 2023-07-09 20:10:47 | 0 | 1,601 | Daviid |
76,649,259 | 15,160,601 | Why is performing matrix multiplication on a pre-transposed matrix faster than on a non-transposed matrix? | <p>Consider the following code in Python, where multiplying a pre-transposed matrix yields faster execution time compared to multiplying a non-transposed matrix:</p>
<pre><code>import numpy as np
import time
# Generate random matrix
matrix_size = 1000
matrix = np.random.rand(matrix_size, matrix_size)
# Transpose the matrix
transposed_matrix = np.transpose(matrix)
# Multiply non-transposed matrix
start = time.time()
result1 = np.matmul(matrix, matrix)
end = time.time()
execution_time1 = end - start
# Multiply pre-transposed matrix
start = time.time()
result2 = np.matmul(transposed_matrix, transposed_matrix)
end = time.time()
execution_time2 = end - start
print("Execution time (non-transposed):", execution_time1)
print("Execution time (pre-transposed):", execution_time2)
</code></pre>
<p>Surprisingly, multiplying the pre-transposed matrix is faster. One might assume that the order of multiplication should not affect the performance significantly, but there seems to be a difference.</p>
<p>Why does processing a pre-transposed matrix result in faster execution time compared to a non-transposed matrix? Is there any underlying reason or optimization that explains this behavior?</p>
<h2>UPDATE</h2>
<p>I've taken the comments about the <code>cache</code> into consideration and I'm generating new matrices on each loop:</p>
<pre><code>import numpy as np
import time
import matplotlib.pyplot as plt
# Generate random matrices
matrix_size = 3000
# Variables to store execution times
execution_times1 = []
execution_times2 = []
# Perform matrix multiplication A @ A and measure execution time for 50 iterations
num_iterations = 50
for _ in range(num_iterations):
    matrix_a = np.random.rand(matrix_size, matrix_size)
    start = time.time()
    result1 = np.matmul(matrix_a, matrix_a)
    end = time.time()
    execution_times1.append(end - start)
# Perform matrix multiplication B @ B.T and measure execution time for 50 iterations
for _ in range(num_iterations):
    matrix_b = np.random.rand(matrix_size, matrix_size)
    start = time.time()
    result2 = np.matmul(matrix_b, matrix_b.T)
    end = time.time()
    execution_times2.append(end - start)
# Print average execution times
avg_execution_time1 = np.mean(execution_times1)
avg_execution_time2 = np.mean(execution_times2)
#print("Average execution time (A @ B^T):", avg_execution_time1)
#print("Average execution time (A @ B):", avg_execution_time2)
# Plot the execution times
plt.plot(range(num_iterations), execution_times1, label='A @ A')
plt.plot(range(num_iterations), execution_times2, label='B @ B.T')
plt.xlabel('Iteration')
plt.ylabel('Execution Time')
plt.title('Matrix Multiplication Execution Time Comparison')
plt.legend()
plt.show()
# Display BLAS configuration
np.show_config()
</code></pre>
<p>Results:</p>
<p><a href="https://i.sstatic.net/gfpbX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gfpbX.png" alt="Result" /></a></p>
<pre><code>blas_mkl_info:
    libraries = ['mkl_rt']
    library_dirs = ['C:/Users/User/anaconda3\\Library\\lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['C:/Users/User/anaconda3\\Library\\include']
blas_opt_info:
    libraries = ['mkl_rt']
    library_dirs = ['C:/Users/User/anaconda3\\Library\\lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['C:/Users/User/anaconda3\\Library\\include']
lapack_mkl_info:
    libraries = ['mkl_rt']
    library_dirs = ['C:/Users/User/anaconda3\\Library\\lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['C:/Users/User/anaconda3\\Library\\include']
lapack_opt_info:
    libraries = ['mkl_rt']
    library_dirs = ['C:/Users/User/anaconda3\\Library\\lib']
    define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
    include_dirs = ['C:/Users/User/anaconda3\\Library\\include']
Supported SIMD extensions in this NumPy install:
    baseline = SSE,SSE2,SSE3
    found = SSSE3,SSE41,POPCNT,SSE42,AVX,F16C,FMA3,AVX2
    not found = AVX512F,AVX512CD,AVX512_SKX,AVX512_CLX,AVX512_CNL
</code></pre>
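<p>One hedged observation worth checking: the product <code>A @ A.T</code> is symmetric, and a BLAS backend such as MKL can dispatch that pattern to the specialised <code>?syrk</code> routine, which only computes half the output matrix, while <code>A @ A</code> must go through the general <code>?gemm</code> path. The symmetry itself is easy to verify:</p>

```python
import numpy as np

# Hedged check: A @ A.T is symmetric, which is the property that lets a BLAS
# backend use the cheaper ?syrk routine instead of the general ?gemm.
a = np.random.rand(200, 200)
sym = a @ a.T
asym = a @ a

assert np.allclose(sym, sym.T)     # symmetric: only half the output is independent
print(np.allclose(asym, asym.T))   # almost surely False for a random matrix
```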
| <python><numpy><matrix><transpose> | 2023-07-09 19:55:51 | 1 | 2,052 | zoldxk |
76,649,120 | 3,286,743 | Split a parquet file by groups | <p>I have a large-ish dataframe in a Parquet file and I want to split it into multiple files to leverage Hive partitioning with pyarrow.
Preferably without loading all data into memory.</p>
<p>(This question has been asked before, but I have not found a solution that is both fast and memory-efficient.)</p>
<p>As a small example consider the following dataframe:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
from random import choice, randint
from string import ascii_letters
N = 10_000_000
pl.DataFrame({
    'id': [choice(ascii_letters) for _ in range(N)],
    'a': [randint(0, 100) for _ in range(N)],
}).write_parquet('stackoverflow.parquet')
</code></pre>
<p>I know that pyarrow can help out, but it's super slow for big files.</p>
<pre class="lang-py prettyprint-override"><code>import pyarrow.dataset as ds
ds_df = ds.dataset('stackoverflow.parquet')
ds.write_dataset(ds_df, 'stackoverflow_data', format='parquet', partitioning=['id'])
</code></pre>
<p>Polars can also help out, but the fastest solution I have made only works if I have the dataframe in memory:</p>
<pre class="lang-py prettyprint-override"><code>import os
import polars as pl
df = pl.read_parquet('stackoverflow.parquet')
split_df = df.partition_by('id', as_dict=True)
for id in split_df:
    save_path = os.path.join('stackoverflow_data', f'id={id}')
    os.makedirs(save_path, exist_ok=True)
    split_df[id].write_parquet(os.path.join(save_path, 'data.parquet'))
</code></pre>
<p>However, for large files I prefer to work with <code>LazyFrame</code>s.
This can be done by repeatedly filtering a <code>LazyFrame</code> and writing the result to disk:</p>
<pre class="lang-py prettyprint-override"><code>df_query = pl.scan_parquet('stackoverflow.parquet')
ids = df_query.select(pl.col('id').unique()).collect().get_column('id').to_list()
for id in ids:
    save_path = os.path.join('stackoverflow_data', f'id={id}')
    os.makedirs(save_path, exist_ok=True)
    df = df_query.filter(pl.col('id') == id).collect()
    df.write_parquet(os.path.join(save_path, 'data.parquet'))
</code></pre>
<p>Unfortunately, this is much slower due to the repeated filtering.</p>
<p>Any suggestions for a better tradeoff between speed and memory usage?</p>
| <python><python-polars><pyarrow> | 2023-07-09 19:13:59 | 3 | 1,177 | robertdj |
76,649,096 | 3,130,747 | How to parse multiple date formats using pandera schema | <p>How can I process a column containing datetimes in two formats, <code>"%Y-%m-%dT%H:%M"</code> and <code>"%Y-%m-%dT%H:%M:%S"</code>?</p>
<p>MWE showing what I'm trying to do:</p>
<pre class="lang-py prettyprint-override"><code>from pandera.engines import pandas_engine
from pathlib import Path
import io
import pandas as pd
import pandera as pa
# this doesn't work
data = 'date_column\n2020-11-26T02:06:30\n2020-11-22T01:49\n'
df = pd.read_csv(io.StringIO(data))
schema = pa.DataFrameSchema(
    {
        "date_column": pa.Column(
            pandas_engine.DateTime(
                to_datetime_kwargs = {
                    "format":"%Y-%m-%dT%H:%M:%S"},
                tz = "Europe/London")
        ),
    },
    coerce=True
)

new_df = schema.validate(df)
</code></pre>
<p>Which gives the following errors for different format strings:</p>
<pre><code># using format: "%Y-%m-%dT%H:%M"
# pandera.errors.SchemaError:
# Error while coercing 'date_column' to type datetime64[ns, Europe/London]:
# Could not coerce <class 'pandas.core.series.Series'> data_container into type datetime64[ns, Europe/London]:
# index failure_case
# 0 0 2020-11-26T02:06:30
</code></pre>
<p>And:</p>
<pre><code># using format: "%Y-%m-%dT%H:%M:%S"
# pandera.errors.SchemaError:
# Error while coercing 'date_column' to type datetime64[ns, Europe/London]:
# Could not coerce <class 'pandas.core.series.Series'> data_container into type datetime64[ns, Europe/London]:
# index failure_case
# 0 1 2020-11-22T01:49
</code></pre>
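<p>A hedged sketch of one option on pandas &gt;= 2.0: the special <code>format="ISO8601"</code> accepts ISO-8601 strings of varying precision, and since pandera forwards <code>to_datetime_kwargs</code> to <code>pd.to_datetime</code>, the same kwarg should work inside the schema (assumption based on that forwarding):</p>

```python
import io

import pandas as pd

data = 'date_column\n2020-11-26T02:06:30\n2020-11-22T01:49\n'
df = pd.read_csv(io.StringIO(data))

# pandas >= 2.0: "ISO8601" matches ISO strings with or without seconds.
parsed = pd.to_datetime(df['date_column'], format='ISO8601')
print(parsed.dt.second.tolist())  # [30, 0]
```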
| <python><pandas><pandera> | 2023-07-09 19:08:34 | 1 | 4,944 | baxx |
76,649,070 | 2,417,922 | How do I implement a binary morphological image operation that counts for each pixel the number of non-zero 4-neighbors | <p>I have a binary image in the form of a Numpy 2D integer array. I want to create another image with values 0-4 denoting how many of each pixel's 4-neighbors are 1-valued. I was hoping for something in Numpy or SciPy, but anything in Python will be welcomed.</p>
| <python><image-processing><mathematical-morphology> | 2023-07-09 19:00:58 | 0 | 1,252 | Mark Lavin |
76,649,029 | 850,781 | matplotlib legend: remove handles, keep labels | <p>I plot many points in different colors and I want the legend to contain only the names in different colors, like this:</p>
<pre><code>for x,y,c in my_data:
    axes.plot(x,y,color=c)
legend = axes.legend(loc="best", labels=my_labels)
for text in legend.get_texts():
    text.set_color(label2color[text.get_text()])
</code></pre>
<p>this results in a legend where the labels are in the correct colors but the handles show lines with points the colors of the first few points:</p>
<p><a href="https://i.sstatic.net/PKizt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PKizt.png" alt="enter image description here" /></a></p>
<p>I want to remove the handles completely so that the only text in the legend is the labels.</p>
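<p>A hedged sketch: shrink the handle area to zero with <code>handlelength=0</code> and <code>handletextpad=0</code>, then hide the handle artists, leaving only the coloured label text (<code>legend_handles</code> is the Matplotlib &gt;= 3.7 spelling; older versions use <code>legendHandles</code>):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, axes = plt.subplots()
axes.plot([0, 1], [0, 1], color="red")
axes.plot([0, 1], [1, 0], color="blue")

# No space is reserved for handles, and the handle artists are hidden.
legend = axes.legend(labels=["first", "second"],
                     loc="best", handlelength=0, handletextpad=0)
for text, color in zip(legend.get_texts(), ["red", "blue"]):
    text.set_color(color)
for handle in legend.legend_handles:  # legendHandles before Matplotlib 3.7
    handle.set_visible(False)
```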
| <python><matplotlib><legend> | 2023-07-09 18:50:52 | 1 | 60,468 | sds |
76,649,019 | 7,555,022 | Use Etrade API's query parameters returns unauthorized (401) | <p>I cannot understand how to properly use query parameters with Etrade's production API. Regardless of the query parameter used, the response is always unauthorized. Here's the <a href="https://apisb.etrade.com/docs/api/account/api-portfolio-v1.html" rel="nofollow noreferrer">documentation</a> relevant to this example. Most of the code is taken from the Python example on this <a href="https://developer.etrade.com/home" rel="nofollow noreferrer">page</a>.</p>
<p>I receive a 200 response without the parameter and 401 when adding the parameter <code>sortOrder=ASC</code>.</p>
<pre><code>import configparser
from rauth import OAuth1Service
import webbrowser
# loading configuration file
config = configparser.ConfigParser()
config.read("etrade_python_client/config.ini")
etrade = OAuth1Service(
    name="etrade",
    consumer_key=config["DEFAULT"]["PROD_KEY"],
    consumer_secret=config["DEFAULT"]["PROD_SECRET"],
    request_token_url="https://api.etrade.com/oauth/request_token",
    access_token_url="https://api.etrade.com/oauth/access_token",
    authorize_url="https://us.etrade.com/e/t/etws/authorize?key={}&token={}",
    base_url="https://api.etrade.com")
request_token, request_token_secret = etrade.get_request_token(params={"oauth_callback": "oob", "format": "json"})
authorize_url = etrade.authorize_url.format(etrade.consumer_key, request_token)
webbrowser.open(authorize_url)
</code></pre>
<p>The previous line opens a web browser which I navigate to and copy the code and store it as <code>text_code</code>.</p>
<pre><code>text_code = "<copy-code-from-web-browser>"
session = etrade.get_auth_session(request_token, request_token_secret, params={"oauth_verifier": text_code})
# get account info
url_account = "https://api.etrade.com/v1/accounts/list.json"
data_account = session.get(url_account, header_auth=True).json()
# store account id key
account_id_key = data_account["AccountListResponse"]["Accounts"]["Account"][0]["accountIdKey"]
# get portfolio and specify sortOrder
url_portfolio = "https://api.etrade.com/v1/accounts/{}/portfolio?sortOrder=ASC".format(account_id_key)
data_portfolio = session.get(url_portfolio, header_auth=True)
print(data_portfolio)
</code></pre>
<pre><code>>>> <Response [401]>
</code></pre>
<pre><code># get portfolio and do not specify sort order
url_portfolio = "https://api.etrade.com/v1/accounts/{}/portfolio".format(account_id_key)
data_portfolio = session.get(url_portfolio, header_auth=True)
print(data_portfolio)
</code></pre>
<pre><code>>>> <Response [200]>
</code></pre>
<p>Has anyone else run into this issue? I'm thinking I must not be passing the query parameters correctly.</p>
| <python><https><etrade-api> | 2023-07-09 18:48:23 | 1 | 958 | Riley Finn |
76,648,992 | 13,771,657 | How to animate a line graph in python where each step of the animation draws an entirely new line based on data in dataframe, and export as gif | <p>I would like to animate a line graph in python based on data in my df.</p>
<ul>
<li>For each step in the animation, a line graph would be displayed using one row of the df.</li>
<li>A fraction of a second later, a new line graph would be displayed using the next row of data in the df.</li>
<li>This process would continue until each row of data had been separately displayed.</li>
<li>After this process had been done for all rows, the graph would show the last line of data.</li>
<li>I would also have code that converts the animation to a gif and then exports it.</li>
</ul>
<p>I'm lost as to how to do this and was hoping someone could point me in the right direction.</p>
<p>Here is the code and df I have so far:</p>
<pre><code># Import dependencies
import pandas as pd
from datetime import datetime
# Create lists to be converted to df
data = [['01/01/2016', 4.17, 4.42, 4.53, 4.71, 4.77, 4.72],
        ['02/05/2017', 4.59, 4.64, 4.70, 4.74, 4.80, 4.68],
        ['04/17/2018', 4.67, 4.82, 4.90, 5.02, 5.20, 5.06],
        ['03/03/2019', 4.70, 4.79, 4.90, 4.80, 4.50, 3.84],
        ['08/21/2021', 6.02, 5.47, 5.34, 5.55, 5.44, 5.25],
        ['09/14/2022', 5.18, 5.25, 5.36, 5.37, 5.27, 4.74],
        ['05/05/2023', 5.32, 5.47, 5.46, 5.52, 5.53, 4.64]
       ]
# Create the pandas df
df = pd.DataFrame(data, columns=['date', 'Month 1', 'Month 2', 'Month 3',
                                 'Month 4', 'Month 5', 'Month 6'])
# Convert 'date' to datetime.
df['date'] = pd.to_datetime(df['date'], format='%m/%d/%Y')
# Display df
display(df)
# Create animated line graph as described in my question
</code></pre>
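<p>A hedged sketch using <code>matplotlib.animation.FuncAnimation</code>: each frame simply resets the line's data to one row of the df, and the commented <code>save</code> call exports a gif via the Pillow writer (an abbreviated stand-in df is used so the sketch is self-contained):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import pandas as pd
from matplotlib.animation import FuncAnimation

# Abbreviated stand-in for the df built above.
df = pd.DataFrame({'Month 1': [4.17, 4.59], 'Month 2': [4.42, 4.64],
                   'Month 3': [4.53, 4.70]})
months = list(df.columns)

fig, ax = plt.subplots()
(line,) = ax.plot([], [])
ax.set_xlim(0, len(months) - 1)
ax.set_ylim(4.0, 5.0)

def update(i):
    # Each frame replaces the whole line with row i of the df.
    line.set_data(range(len(months)), df[months].iloc[i])
    return (line,)

anim = FuncAnimation(fig, update, frames=len(df), interval=500)
# anim.save('lines.gif', writer='pillow')  # gif export; needs Pillow installed
```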
| <python><pandas><matplotlib><seaborn> | 2023-07-09 18:39:49 | 1 | 528 | BGG16 |
76,648,965 | 9,588,300 | VSC debugger python not stopping in breakpoint of class method | <p>I have the following code in Python, based on AWS documentation for consuming from a Kinesis stream. However, my problem is entirely with VSC, not with anything specific to Kinesis.</p>
<p>My code is simple</p>
<ol>
<li>It imports some libraries</li>
<li>It defines a class, with an init and some method</li>
<li>Outside the class definition, it creates an instance with the <code>__init__</code> parameters</li>
<li>It calls a method from that instance</li>
</ol>
<p>I want to enter debug mode in VSC, and specifically to step into the method of the class that is being invoked in step 4. I would expect that when I put a breakpoint on the <code>instance_object.method()</code> line and hit <code>Step Into</code>, it would enter the method's lines of code so I can go through them one by one. But instead, it just terminates as if there were nothing to step into.</p>
<p>Here is my specific code:</p>
<pre class="lang-py prettyprint-override"><code>import boto3
import json
import time
import datetime
from botocore.exceptions import ClientError

class Consume():
    def __init__(self,kinesis_client,stream_name):
        self.kinesis_client=kinesis_client
        self.stream_name=stream_name
        self.details=kinesis_client.describe_stream(StreamName='input_stream')

    def consuming(self,max_records):
        try:
            response = self.kinesis_client.get_shard_iterator(
                StreamName=self.stream_name, ShardId=self.details['Shards'][0]['ShardId'],
                ShardIteratorType='LATEST')
            shard_iter = response['ShardIterator']
            record_count = 0
            while record_count < max_records:
                response = self.kinesis_client.get_records(
                    ShardIterator=shard_iter, Limit=10)
                shard_iter = response['NextShardIterator']
                records = response['Records']
                print("Got {} records.".format(len(records)))
                record_count += len(records)
                yield records
        except ClientError:
            print("Couldn't get records from stream {}.".format(self.stream_name))
            raise

kinesis_client=boto3.client('kinesis',
                            aws_access_key_id=<some_hidden_string>,
                            aws_secret_access_key=<some_hidden_string>)

consumer=Consume(kinesis_client,stream_name='input_stream')
consumer.consuming(5)
</code></pre>
<p>So, I've set a breakpoint at the last line, <code>consumer.consuming(5)</code>, and when the debugger hits that, I click <code>Step Into</code> but it just terminates directly.</p>
<p>What am I doing wrong? Here is a picture showing my breakpoints</p>
<p><a href="https://i.sstatic.net/WruTx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WruTx.png" alt="Breakpoints VSC debugging" /></a></p>
<p>And this is my launch.json. I am also running this locally on a Windows 11 machine in plain VSC on the desktop, with no Docker container or anything like that.</p>
<pre class="lang-json prettyprint-override"><code>{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Python: Current File",
"type": "python",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal",
"justMyCode": true
}
]
}
</code></pre>
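<p>One hedged observation that may explain the behaviour independently of VS Code: <code>consuming</code> contains a <code>yield</code>, so it is a generator function. Calling it only builds a generator object; none of its body runs (and no breakpoint inside it fires) until the generator is iterated, so stepping into the bare call finds nothing to execute:</p>

```python
# Minimal stand-in for the Consume.consuming method above.
def consuming(max_records):
    print("body started")   # not printed by the bare call below
    for i in range(max_records):
        yield i

gen = consuming(5)          # nothing in the body executes yet
records = list(gen)         # iterating is what actually runs the body
print(records)              # [0, 1, 2, 3, 4]
```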
| <python><visual-studio-code><vscode-debugger> | 2023-07-09 18:31:37 | 1 | 462 | Eugenio.Gastelum96 |
76,648,956 | 317,460 | How to delete all rows matching a condition using SqlModel | <p>The SqlModel documentation about the DELETE action: <a href="https://sqlmodel.tiangolo.com/tutorial/delete/" rel="nofollow noreferrer">https://sqlmodel.tiangolo.com/tutorial/delete/</a></p>
<p>shows how to delete a single line using the functions</p>
<ul>
<li>.one() - If only a single record is expected</li>
</ul>
<p>or</p>
<ul>
<li>.first() - to get the 1st line from multiple lines found.</li>
</ul>
<pre><code>def delete_heroes():
    with Session(engine) as session:
        statement = select(Hero).where(Hero.name == "Spider-Youngster")
        results = session.exec(statement)
        hero = results.one()
        print("Hero: ", hero)
        session.delete(hero)
        session.commit()
</code></pre>
<p>But how can I delete all the rows matching the condition? I tried using the .all() function:</p>
<pre><code>def delete_heroes():
    with Session(engine) as session:
        statement = select(Hero).where(Hero.name == "Spider-Youngster")
        results = session.exec(statement)
        hero = results.all()
        print("Hero: ", hero)
        session.delete(hero)
        session.commit()
</code></pre>
<p>But all I get is an error:
<code>sqlalchemy.orm.exc.UnmappedInstanceError: Class 'builtins.list' is not mapped</code></p>
<hr />
<p>My solution uses a loop to iterate over the matching rows, delete each one, and commit at the end.</p>
<pre><code>def delete_heroes():
    with Session(engine) as session:
        statement = select(Hero).where(Hero.name == "Spider-Youngster")
        results = session.exec(statement)
        heroes = results.all()
        print("Heroes: ", heroes)
        # Iterate over the materialised list; the `results` iterator is
        # already exhausted after calling .all() on it.
        for hero in heroes:
            session.delete(hero)
        session.commit()
</code></pre>
<p><strong>Is there a way to do this without the loop?</strong></p>
| <python><sqlmodel> | 2023-07-09 18:29:25 | 2 | 3,627 | RaamEE |
76,648,786 | 3,251,645 | Self referencing Struct type using python Ctypes | <p>I have a node class like this:</p>
<pre><code>@dataclass
class TreeNode:
    type: NodeType
    tok: Token = None
    children: list = field(default_factory=list)
</code></pre>
<p>Here, <code>children</code> is a list containing other <code>TreeNode</code>s that are children of the parent node. I'm trying to create a <code>ctypes</code> structure that replicates the class above so I can pass a <code>TreeNode</code> object to a C++ function from Python. It looks like this:</p>
<pre><code>class CTreeNode(Structure):
    _fields_ = [("type", c_int32), ("tok", CToken), ("children", POINTER('CTreeNode') * 100)]
</code></pre>
<p>I'm getting this error:</p>
<pre><code>SystemError: <class '_ctypes.PyCArrayType'> returned NULL without setting an exception
</code></pre>
<p>I've looked at the documentation which says arrays can be defined like so</p>
<pre><code>("point_array", POINT * 4)
</code></pre>
<p>But how do I do this when <code>CTreeNode</code> has to reference itself using ctypes?</p>
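The ctypes documentation covers this under "incomplete types": declare the <code>Structure</code> subclass first, then assign <code>_fields_</code> after the class statement, at which point the class itself (not a string) can appear inside <code>POINTER(...)</code>. A minimal sketch, omitting the <code>CToken</code> field so it stays self-contained:

```python
from ctypes import POINTER, Structure, c_int32


class CTreeNode(Structure):
    pass  # declare first so the name exists for the self-reference below


# Assigning _fields_ after the class statement is ctypes' equivalent of a
# C forward declaration; POINTER() takes the class object, not a string.
CTreeNode._fields_ = [
    ("type", c_int32),
    ("children", POINTER(CTreeNode) * 100),
]
```

This mirrors how you would write `struct TreeNode *children[100];` in C: the struct tag is usable inside its own definition because pointers to an incomplete type have a known size.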
| <python><c++><ctypes> | 2023-07-09 17:49:48 | 1 | 2,649 | Amol Borkar |
76,648,639 | 1,565,758 | PIL paste image dead center at position | <p>I'm having a bit of a problem getting an image pasted at the exact position I want. The biggest problem is that I'm rotating the image I'm pasting.</p>
<pre><code>import math
import sys

from PIL import Image, ImageDraw, ImageFont


def main(argv):
    img = Image.new("RGB", (300, 300), (255, 255, 255))
    dr = ImageDraw.Draw(img)
    dr.ellipse((0, 0, 300, 300), fill='white', outline='blue', width=2)
    dr.ellipse((25, 25, 275, 275), fill='white', outline='blue', width=2)
    dr.point((150, 150), fill="red")
    for x in range(6):
        print(x)
        value = (2 * x * math.pi) / 6
        value_x = math.floor(138 * math.cos(value) + 150)
        value_y = math.floor(138 * math.sin(value) + 150)
        print(value_x)
        print(value_y)
        dr.line([150, 150, value_x, value_y], fill='red', width=1, joint=None)
        rotate(str(x), math.ceil(value * (180 / math.pi)), img, value_x, value_y)
    #img.save("output/image.png", "PNG")
    img.show()


def rotate(text: str, degrees, img, x, y):
    ft = ImageFont.truetype('font/Roboto-Regular.ttf', 12)
    tim = Image.new('RGBA', (7 * len(str(text)), 12), (100, 100, 100, 100))
    dr = ImageDraw.Draw(tim)
    dr.text([0, 0], text, font=ft, fill='red')
    tim = tim.rotate(360 - degrees - 90, expand=1)
    #tim.save("output/tim" + str(degrees) + ".png", "PNG")
    img.paste(tim, (x, y), tim)


if __name__ == "__main__":
    main(sys.argv[1:])
</code></pre>
<p>or</p>
<p><a href="https://programiz.pro/learn/python/online-ide/EZNBPNRQP1?utm_source=programiz_dot_com_python_compiler_save_button" rel="nofollow noreferrer">https://programiz.pro/learn/python/online-ide/EZNBPNRQP1?utm_source=programiz_dot_com_python_compiler_save_button</a></p>
<p>I can't get this running in the online editor; it keeps saying the Roboto-Regular.ttf file is unknown. However, the output if you run it in an IDE is the following:</p>
<p>As you can see in the image, the numbers should be placed dead center at the end of each radial line. I just can't get my offsets right; I believe I'm missing some math to calculate the correct position.</p>
<p><a href="https://i.sstatic.net/xMcYw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xMcYw.png" alt="enter image description here" /></a></p>
<p>I could use any help getting the offsets right. It should have something to do with the size of the temporary image, but even if I take its width and height and divide them by two it still goes wrong :( I'm missing something.</p>
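Since <code>Image.paste()</code> anchors at the top-left corner while the goal here is to anchor at the centre, and <code>rotate(..., expand=1)</code> changes the image's size, the offset has to be computed from the <em>rotated</em> size. A sketch of the missing offset math using only the standard library (the helper names are made up for illustration):

```python
import math


def centred_paste_box(x, y, w, h):
    """Top-left corner that puts the centre of a w-by-h image on (x, y)."""
    return (x - w // 2, y - h // 2)


def expanded_size(w, h, degrees):
    """Bounding-box size of a w-by-h image after rotate(degrees, expand=1)."""
    a = math.radians(degrees)
    return (round(abs(w * math.cos(a)) + abs(h * math.sin(a))),
            round(abs(w * math.sin(a)) + abs(h * math.cos(a))))


# In rotate() above, the paste would then become something like:
#   w, h = tim.size  # size AFTER tim.rotate(..., expand=1)
#   img.paste(tim, centred_paste_box(x, y, w, h), tim)
```

The key point is reading <code>tim.size</code> after the rotation: with <code>expand=1</code> the canvas grows to the rotated bounding box, so halving the pre-rotation width and height places the text off-centre.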
| <python><math><python-imaging-library> | 2023-07-09 17:11:21 | 1 | 1,266 | kenny |