QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,301,404 | 181,783 | SimpleXMLRPCServer.shutdown() failure post request | <p>I recently added an RPC backend to an existing single-threaded application, as shown below:</p>
<pre class="lang-py prettyprint-override"><code>from threading import Lock, Thread
from xmlrpc.server import SimpleXMLRPCServer

state = {}
lock = Lock()

def update_state(address, value):
    with lock:
        state[address] = value

class ExistingClass:
    def __init__(self, *args, **kwargs):
        thread = Thread(target=self.start_listener)
        thread.start()

    def start_listener(self):
        self.server = SimpleXMLRPCServer(("localhost", 8002), allow_none=True)
        self.server.register_function(update_state, "update_state")
        self.server.serve_forever()

    def read_state(self, address):
        with lock:
            return state[address]

    def write_state(self, address, value):
        with lock:
            state[address] = value

ec = ExistingClass()
ec.server.shutdown()
</code></pre>
<p>The RPC backend is run in a separate thread to allow the application to keep running.</p>
<p>The problem is that shutting down the application requires explicitly stopping the server, and <code>SimpleXMLRPCServer.shutdown()</code> does not work after a client has called the RPC <code>update_state</code>, i.e. the command <code>ps</code> shows that the process is still alive post shutdown.</p>
<p>Initially I thought the problem could be due to the absence of a lock but that didn't resolve the problem.</p>
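<p>For what it's worth, a few things may be at play here (this is a sketch, not a diagnosis): <code>shutdown()</code> only stops the <code>serve_forever()</code> loop and must be called from a different thread than the one running it; the listening socket stays open until <code>server_close()</code> is called; and a non-daemon listener thread keeps the process alive until it exits. A minimal stdlib sketch of a clean shutdown from the main thread (port 0 picks a free port for the demo; the registered function is a placeholder):</p>

```python
from threading import Thread
from xmlrpc.server import SimpleXMLRPCServer

# port 0 asks the OS for a free port (the question uses 8002)
server = SimpleXMLRPCServer(("localhost", 0), allow_none=True, logRequests=False)
server.register_function(lambda address, value: None, "update_state")

# serve_forever() blocks, so it runs in a worker thread
thread = Thread(target=server.serve_forever)
thread.start()

# shutdown() must be called from a *different* thread than serve_forever()
server.shutdown()
server.server_close()  # release the listening socket
thread.join(timeout=5)
print("listener stopped:", not thread.is_alive())
```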
| <python><xml-rpc><simplexmlrpcserver> | 2023-10-16 10:59:57 | 2 | 5,905 | Olumide |
77,301,377 | 12,468,438 | Pandas apply casts None dtype to object or float depending on other outputs | <p>I would like to control the output dtypes for <em>apply</em> on a row. <em>foo</em> and <em>bar</em> below have multiple outputs.</p>
<pre><code>import pandas as pd

def foo(x):
    return x['a'] * x['b'], None, x['a'] > x['b']

def bar(x):
    return x['a'] * x['b'], None

df = pd.DataFrame([{'a': 10, 'b': 2}, {'a': 10, 'b': 20}])
df2 = df.copy()
df[['product', 'dummy', 'greater']] = df.apply(foo, axis=1, result_type='expand')
df2[['product', 'dummy']] = df2.apply(bar, axis=1, result_type='expand')
</code></pre>
<p>The output dtypes are:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>col</th>
<th>df</th>
<th>df2</th>
</tr>
</thead>
<tbody>
<tr>
<td>a</td>
<td>int64</td>
<td>int64</td>
</tr>
<tr>
<td>b</td>
<td>int64</td>
<td>int64</td>
</tr>
<tr>
<td>product</td>
<td><strong>int64</strong></td>
<td><strong>float64</strong></td>
</tr>
<tr>
<td>dummy</td>
<td><strong>object</strong></td>
<td><strong>float64</strong></td>
</tr>
<tr>
<td>greater</td>
<td>bool</td>
<td>-</td>
</tr>
</tbody>
</table>
</div>
<p>A comment to this question <a href="https://stackoverflow.com/questions/52078594/pandas-apply-changing-dtype">pandas apply changing dtype</a>, suggests that apply returns a series with a single dtype. That may be the case with <em>bar</em> since the outputs can be cast to float. But it doesn't seem to be the case for <em>foo</em>, because then the outputs would need to be object.</p>
<p>Is it possible to control the output dtypes of apply? I.e. get/specify the output dtypes (int, object) for bar, or do I need to cast the dtype at the end?</p>
<p>Background:
I have a dataframe where the <em>dummy</em> column has values True, False and None and dtype 'object'. The apply function runs on some corner cases, and introduces NaN instead of None. I'm replacing the NaN with None after apply, but it seems overly complicated.</p>
<p>pandas version 1.5.2</p>
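<p>As far as I know there is no way to specify per-column output dtypes for <code>apply</code> itself; a common workaround (a sketch reusing the question's <code>bar</code>) is to cast back after the fact, restoring <code>None</code> with <code>where</code>:</p>

```python
import pandas as pd

def bar(x):
    return x['a'] * x['b'], None

df2 = pd.DataFrame([{'a': 10, 'b': 2}, {'a': 10, 'b': 20}])
df2[['product', 'dummy']] = df2.apply(bar, axis=1, result_type='expand')

# cast back after apply: 'product' to int64, and turn the NaNs in 'dummy'
# back into None on an object column
df2['product'] = df2['product'].astype('int64')
df2['dummy'] = df2['dummy'].astype('object').where(df2['dummy'].notna(), None)
```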
| <python><pandas><dataframe><apply><dtype> | 2023-10-16 10:55:38 | 1 | 314 | Frank_Coumans |
77,301,376 | 9,021,547 | XML parsing: how to return None if element was not found | <p>I need to parse an XML file where the same tag might have a different number of children present, such as the one shown below:</p>
<pre><code>xml_file = '''
<clients>
<client>
<name>John</name>
</client>
<client>
<name>Mike</name>
<age>25</age>
</client>
</clients>
'''
</code></pre>
<p>I want the code to return <code>None</code> if any tag was not found. My current solution looks like this:</p>
<pre><code>from xml.etree import ElementTree as et

tree = et.fromstring(xml_file)
client_attr = []
for client in tree.iter('client'):
    results = {'name': None, 'age': None}
    name = client.find('name')
    age = client.find('age')
    results.update({
        'name': name.text if name is not None else None,
        'age': age.text if age is not None else None,
    })
    client_attr.append(results)
</code></pre>
<p>However, this seems clunky and takes up a lot of space and time when applied to my real data. It seems like there should be a way to combine the steps, but I could not figure out how.
Thanks in advance.</p>
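<p>One way the steps can collapse: <code>Element.findtext()</code> already returns its <code>default</code> (which is <code>None</code>) when the child tag is absent, so the per-tag <code>find</code>/<code>None</code> checks can become a comprehension. A sketch on the question's data:</p>

```python
from xml.etree import ElementTree as et

xml_file = '''
<clients>
  <client><name>John</name></client>
  <client><name>Mike</name><age>25</age></client>
</clients>
'''

tree = et.fromstring(xml_file)
# findtext() returns its `default` (None) when the child tag is absent
client_attr = [
    {tag: client.findtext(tag) for tag in ('name', 'age')}
    for client in tree.iter('client')
]
print(client_attr)  # [{'name': 'John', 'age': None}, {'name': 'Mike', 'age': '25'}]
```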
| <python><xml><parsing> | 2023-10-16 10:55:36 | 3 | 421 | Serge Kashlik |
77,301,148 | 10,133,797 | Python colormap in MATLAB | <p>I wish to use matplotlib's <code>cmap='bwr'</code> in MATLAB (e.g. with <code>imagesc</code>), but that's not supported. Can it be "exported" from matplotlib into MATLAB?</p>
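<p>One possible direction (a sketch, not the only way): <code>bwr</code> is a simple linear blue → white → red map, so its RGB table can be rebuilt with plain NumPy and handed to MATLAB, e.g. via <code>scipy.io.savemat</code> (the export lines are shown as comments and assume SciPy is available):</p>

```python
import numpy as np

# 'bwr' interpolates linearly: blue (0,0,1) -> white (1,1,1) -> red (1,0,0)
x = np.linspace(0.0, 1.0, 256)
r = np.interp(x, [0.0, 0.5, 1.0], [0.0, 1.0, 1.0])
g = np.interp(x, [0.0, 0.5, 1.0], [0.0, 1.0, 0.0])
b = np.interp(x, [0.0, 0.5, 1.0], [1.0, 1.0, 0.0])
bwr = np.column_stack([r, g, b])  # shape (256, 3), rows usable as a MATLAB colormap

# hypothetical export, if scipy is installed:
#   from scipy.io import savemat
#   savemat('bwr.mat', {'bwr': bwr})   # MATLAB: load('bwr.mat'); colormap(bwr)
```

Alternatively, matplotlib itself can sample the real colormap (e.g. `plt.get_cmap('bwr')(x)[:, :3]`) if exact agreement matters.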
| <python><matlab><matplotlib> | 2023-10-16 10:20:34 | 1 | 19,954 | OverLordGoldDragon |
77,301,054 | 10,721,627 | How to load CSV file from an Azure blob storage using DuckDB? | <p>I have CSV files stored in an Azure Blob Storage container. I want to use DuckDB to load this data for analysis.</p>
<p>Can someone guide me through the process of loading a CSV file from an Azure Blob Storage container into DuckDB? I am looking for a Python-based solution or any relevant code examples.</p>
| <python><azure-blob-storage><duckdb> | 2023-10-16 10:06:22 | 1 | 2,482 | Péter Szilvási |
77,301,031 | 7,968,764 | Compressing image with python-opencv actually results in more size | <p>I want to compress an image in-memory using Python.</p>
<p>Using this code from the answer <a href="https://stackoverflow.com/a/40768705">https://stackoverflow.com/a/40768705</a>, I expect that by changing the "quality param" (90) I will get a smaller resulting image.</p>
<pre><code>encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 90]
result, enc_img = cv2.imencode('.jpg', img_np, encode_param)
</code></pre>
<p>My original plan was to decrease this "quality param" until I get the desired image size.
Something like this:</p>
<pre><code>def compress_img(img: bytes, max_size: int = 102400):
    """
    @param max_size: maximum allowed size in bytes
    """
    quality = 90
    while len(img) > max_size:
        img = _compress_img(img, quality=quality)
        quality -= 5
        if quality < 0:
            raise ValueError(f"Too low quality: {quality}")
    return img
</code></pre>
<p>But after running some tests, I actually get a bigger resulting image than the original. I don't understand how the original size can be less than the compressed one. What is wrong with this logic?</p>
<pre><code>original size: 125.07 kb
load to cv2 size: 4060.55 kb
compressed size: 186.14 kb
</code></pre>
<p>Here is full code:</p>
<pre><code>import cv2
import requests
import numpy as np

def request_img(img_url: str) -> bytes:
    r = requests.get(img_url, stream=True)
    img = r.raw.read()
    return img

if __name__ == '__main__':
    url = "https://m.media-amazon.com/images/I/71bUROETvoL._AC_SL1500_.jpg"
    img_bytes = request_img(url)

    # the original size of image is 128073 bytes `len(img_bytes)`
    with open("__img_orig.jpg", "wb") as f:
        f.write(img_bytes)
    print(f"original size: {round(len(img_bytes) / 1024, 2)} kb")

    # after I load image to cv2 - it becomes `4158000` bytes
    image_np = np.frombuffer(img_bytes, np.uint8)
    img_np = cv2.imdecode(image_np, cv2.IMREAD_COLOR)
    print(f"load to cv2 size: {round(len(img_np.tobytes()) / 1024, 2)} kb")

    # resulting "compressed" size is `190610` bytes, which is 48% more than the original size. How can it be?
    encode_param = [int(cv2.IMWRITE_JPEG_QUALITY), 90]
    result, enc_img = cv2.imencode('.jpg', img_np, encode_param)
    res_img_bytes = enc_img.tobytes()
    print(f"compressed size: {round(len(res_img_bytes) / 1024, 2)} kb")

    with open("__img_compress.jpg", "wb") as f:
        f.write(res_img_bytes)
</code></pre>
| <python><opencv><python-imaging-library><compression><image-compression> | 2023-10-16 10:01:30 | 1 | 527 | PATAPOsha |
77,300,935 | 17,082,611 | SSIM is high, MSE is low but the two images are not similar | <p>I want to calculate the similarity between these two images:</p>
<p><a href="https://i.sstatic.net/TXevJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TXevJ.png" alt="original" /></a></p>
<p>and</p>
<p><a href="https://i.sstatic.net/cmKmK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cmKmK.png" alt="reconstructed" /></a></p>
<p>These are brain topography maps and colors inside the circle represent the area being activated while watching TV. Thus <strong>I am looking for a metric which computes the similarity between original and reconstructed image taking into account how much they are similar with respect the "activated" areas</strong>. As you can see the two images are not very similar since, in the original image, the activated area is left-side while in the reconstructed one the activated area is right-side.</p>
<p>For doing that, I am combining both <a href="https://scikit-image.org/docs/stable/api/skimage.metrics.html#skimage.metrics.structural_similarity" rel="nofollow noreferrer">SSIM</a> (scaled between 0 and 1) and <a href="https://scikit-image.org/docs/stable/api/skimage.metrics.html#skimage.metrics.mean_squared_error" rel="nofollow noreferrer">MSE</a> using this formula:</p>
<pre><code>ssim = scaled_ssim(original_image, reconstructed_image) # between 0 and 1
mse = mean_squared_error(original_image, reconstructed_image)
score = ssim / (mse + 1)
</code></pre>
<p>where <code>scaled_ssim</code> is defined as follows:</p>
<pre><code>def scaled_ssim(original, reconstructed):
    original = normalize(original)
    reconstructed = normalize(reconstructed)
    score = ssim(original, reconstructed, data_range=1, channel_axis=-1)
    return (score + 1) / 2
</code></pre>
<p>And I am getting:</p>
<pre><code>ssim on random test sample: 0.5712
mse on random test sample: 0.0004
score on random test sample: 0.5710
</code></pre>
<p>As you can notice, the <code>score</code> is <code>0.5710</code>. Can you suggest a metric that performs better, i.e. one that takes into account the activation areas of the "brain"?</p>
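<p>A note on why this may happen: SSIM and MSE are global pixel metrics and do not care <em>where</em> activation sits. One alternative direction (a sketch with a made-up threshold, not a definitive recipe) is to threshold "activated" pixels into binary masks and compare the masks with a Dice-style overlap score, which drops to 0 when the activation moves to the other side:</p>

```python
import numpy as np

def dice_score(a, b, threshold=0.5):
    """Dice overlap of the 'activated' regions of two images, in [0, 1]."""
    mask_a = a > threshold
    mask_b = b > threshold
    denom = mask_a.sum() + mask_b.sum()
    if denom == 0:
        return 1.0  # no activation in either image: treat as identical
    return 2.0 * np.logical_and(mask_a, mask_b).sum() / denom

# toy example: activation on the left half vs. on the right half
left = np.zeros((4, 4)); left[:, :2] = 1.0
right = np.zeros((4, 4)); right[:, 2:] = 1.0
print(dice_score(left, left))   # 1.0 (identical activation)
print(dice_score(left, right))  # 0.0 (disjoint activation)
```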
| <python><machine-learning><scikit-image><mse><ssim> | 2023-10-16 09:46:25 | 0 | 481 | tail |
77,300,643 | 3,297,613 | vscode complains about python's classmethod typing | <p>My vscode editor always complains about the classmethod typing irrespective of the class.</p>
<pre><code>from __future__ import annotations

class A:
    @classmethod
    def foo(cls: type[A]) -> str:
        return "bar"
</code></pre>
<p>Error/warning about the <code>classmethod</code> (raised by Pylance),</p>
<pre><code>Argument of type "(cls: type[A]) -> str" cannot be assigned to parameter "__f" of type "() -> _R_co@classmethod" in function "__init__"
Type "(cls: type[A]) -> str" cannot be assigned to type "() -> _R_co@classmethod"PylancereportGeneralTypeIssues
</code></pre>
<p>VSCode Version : 1.83.0
Pylance Version: v2023.10.20</p>
<p><code>settings.json</code></p>
<pre><code>{
    "editor.formatOnSave": true,
    "editor.rulers": [140],
    "flake8.args": ["--config=setup.cfg"],
    "flake8.path": ["${workspaceFolder}/venv/bin/python", "-m", "flake8"],
    "[python]": {
        "editor.defaultFormatter": "ms-python.black-formatter",
        "editor.formatOnSave": true
    },
    "black-formatter.args": ["--line-length", "120"],
    "mypy-type-checker.args": ["--config-file=setup.cfg"],
    "mypy-type-checker.path": [
        "${workspaceFolder}/venv/bin/python",
        "-m",
        "mypy"
    ],
    "python.languageServer": "Pylance",
    "python.analysis.typeCheckingMode": "basic",
    "files.exclude": {
        "**/.git": true,
        "**/.svn": true,
        "**/.hg": true,
        "**/CVS": true,
        "**/.DS_Store": true,
        "**/Thumbs.db": true,
        "**/__pycache__": true,
        "**/.pytest_cache": true,
        "**/.mypy_cache": true,
        "**/venv": true
    },
    "python.defaultInterpreterPath": "${workspaceFolder}/venv/bin/python",
    "autoDocstring.docstringFormat": "google-notypes"
}
</code></pre>
<p><code>setup.cfg</code></p>
<pre><code>[flake8]
max-line-length = 120
extend-ignore =
    # no whitespace before colon on list slice
    E203,
    # line break occurred before a binary operator
    W503,
    # comparison to True should be is not == (not true for pandas)
    E712,
    # line too long (handled by formatter automatically whenever possible)
    E501,
    # missing type annotation for self in method
    ANN101,
    # missing type annotation for *args, **kwargs
    ANN002,
    ANN003
per-file-ignores =
    # imported but unused
    __init__.py: F401
exclude = .vscode, .git, __pycache__, venv, tests
require-plugins = flake8-annotations, flake8-pep585
# infer no return as a None return type
suppress-none-returning = true
# for mypy
mypy-init-return = true
# allow importing from typing for convenience
pep585-whitelisted-symbols =
    Iterator
    Iterable
    Callable

[mypy]
exclude = .vscode, .git, __pycache__, venv, tests
strict_optional = False
</code></pre>
| <python><vscode-debugger><python-3.10> | 2023-10-16 08:59:31 | 1 | 175,204 | Avinash Raj |
77,300,536 | 21,404,794 | Iterating through dict of dicts | <p>I have a dictionary of dictionaries, in this fashion:</p>
<pre class="lang-py prettyprint-override"><code>scores = {'default': {'MSE': [[1.0,2.0], [3.0,4.0]], 'R2': [[2.0,3.0],[4.0,5.0]]},
'RBF': {'MSE': [[3.0,4.0],[5.0,6.0]], 'R2': [[4.0,5.0],[6.0,7.0]]}
}
</code></pre>
<p>I need the lists of numbers inside the second-level dictionaries to be <code>np.array</code>s, because I want to plot each column in a different graph, and list indexing cannot separate the columns correctly.</p>
<p>To access each list, I'd do <code>scores['default']['MSE']</code> and that returns <code>[[1.0, 2.0], [3.0, 4.0]]</code>. I need the values by column (so for the first graph I'd need the values <code>[2.0, 4.0]</code>) and list indexing is not versatile enough.</p>
<p>I want to apply the <code>np.array</code> to those lists so I can index them easier, and I've tried two methods:</p>
<p><strong>Method 1</strong>: Using <code>map()</code></p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
scores = {'default': {'MSE': [[1.0,2.0], [2.0,3.0]], 'R2': [[2.0,3.0],[3.0,4.0]]},
'RBF': {'MSE': [[3.0,4.0],[4.0,5.0]], 'R2': [[4.0,5.0],[5.0,6.0]]}
}
scores = map(np.array, scores)
</code></pre>
<p>This method does not work, because I need to apply the function to the internal list, not the dictionaries.</p>
<p>There's also a variation of this method that gets me closer but does not work, because it destroys the dictionary keys in the process (just change the map line to this):</p>
<pre class="lang-py prettyprint-override"><code>scores = [list(map(np.array, scores[k][k2])) for k in scores.keys() for k2 in scores[k].keys()]
</code></pre>
<p><strong>Method 2</strong>: Using for loops</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
scores = {'default': {'MSE': [[1.0,2.0], [2.0,3.0]], 'R2': [[2.0,3.0],[3.0,4.0]]},
'RBF': {'MSE': [[3.0,4.0],[4.0,5.0]], 'R2': [[4.0,5.0],[5.0,6.0]]}
}
for k in scores.keys():
    for k2 in scores[k].keys():
        scores[k][k2] = np.array(scores[k][k2])

print(scores)
</code></pre>
<p>While this method works, it feels rudimentary, as it requires a bunch of for loops and accessing and rewriting each dict entry.</p>
<p><strong>Is there a more pythonic way of applying a function to the innermost part of a dict of dicts while keeping everything else the same?</strong></p>
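<p>For reference, a comprehension-based sketch that rebuilds the structure while keeping every key (arguably the "pythonic" middle ground between <code>map</code> and explicit loops):</p>

```python
import numpy as np

scores = {'default': {'MSE': [[1.0, 2.0], [2.0, 3.0]], 'R2': [[2.0, 3.0], [3.0, 4.0]]},
          'RBF': {'MSE': [[3.0, 4.0], [4.0, 5.0]], 'R2': [[4.0, 5.0], [5.0, 6.0]]}}

# rebuild the nested dict, converting only the innermost lists
scores = {k: {k2: np.array(v2) for k2, v2 in inner.items()}
          for k, inner in scores.items()}

print(scores['default']['MSE'][:, 1])  # second column: [2. 3.]
```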
| <python><arrays><list><dictionary> | 2023-10-16 08:41:38 | 2 | 530 | David Siret Marqués |
77,300,475 | 9,488,023 | How can I replace NaN values with empty cells in Pandas in Python? | <p>I have a large Pandas dataset in Python with several columns of different data types that all contain some cells that are NaN. My problem is that I want these cells to be empty rather than say 'nan', but I'm not sure how to do it. I can replace the nan-value with an empty string, but then I can't keep the columns the correct data types, as in the example below:</p>
<pre><code>df_test = pd.DataFrame(data = {'A': ['a', 'b', np.nan], 'B': [1, np.nan, 3], 'C': [np.nan, 2.2, 3.3]})
df_test = df_test.replace(np.nan, '')
df_test = df_test.astype({'A': 'str', 'B': 'Int64', 'C': 'float64'})
</code></pre>
<p>This just gives</p>
<pre><code>TypeError: object cannot be converted to an IntegerDtype
</code></pre>
<p>Since it cannot convert the empty string to an integer. Is there another way of converting nan-values to actual empty cells instead of empty strings? Thanks for the help!</p>
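<p>One direction worth noting (a sketch, assuming "empty cells" means empty fields in exported output such as CSV): pandas' nullable dtypes keep missing values as <code>&lt;NA&gt;</code> rather than strings, and <code>to_csv</code> writes them as empty fields by default, so no string replacement is needed:</p>

```python
import numpy as np
import pandas as pd

df_test = pd.DataFrame(data={'A': ['a', 'b', np.nan],
                             'B': [1, np.nan, 3],
                             'C': [np.nan, 2.2, 3.3]})

# nullable dtypes ('string', 'Int64') keep <NA> instead of forcing empty strings
df_test = df_test.astype({'A': 'string', 'B': 'Int64', 'C': 'float64'})

print(df_test.to_csv(index=False))  # missing values appear as empty cells
```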
| <python><pandas><string><replace><nan> | 2023-10-16 08:32:36 | 0 | 423 | Marcus K. |
77,300,444 | 988,279 | Replace if/else with something else (e.g. pattern matching) | <p>I compare two sources (a CSV file and the database). I want to get rid of this if/else construct. How can that be achieved with Python 3.10 pattern matching?</p>
<pre><code>for row in csv_file:
    entry = find_entry_in_database(row.get('id'))

    # check for email changes
    if entry[0].email != row.get('email'):
        do_something_with_the_email(row.get('email'))

    # check for name changes
    if entry[0].name != row.get('name'):
        do_something_with_the_name(row.get('name'))

    # check for firstname changes
    if entry[0].first_name != row.get('first_name'):
        do_something_with_the_first_name(row.get('first_name'))
    ...
</code></pre>
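<p>As an aside: <code>match</code>/<code>case</code> is not an obvious fit for "compare N pairs of attributes" — a data-driven table of (field, handler) pairs removes the repetition instead. A sketch with stand-in objects (the handler names and <code>Entry</code> class are placeholders for the question's functions and database record):</p>

```python
# record which handlers fired, instead of printing, so the behavior is visible
changed = []

def do_something_with_the_email(value):
    changed.append(('email', value))

def do_something_with_the_name(value):
    changed.append(('name', value))

HANDLERS = {
    'email': do_something_with_the_email,
    'name': do_something_with_the_name,
}

class Entry:  # stand-in for a database record
    email = 'old@example.com'
    name = 'Alice'

row = {'id': 1, 'email': 'new@example.com', 'name': 'Alice'}
entry = Entry()

# one loop replaces the whole if/else ladder
for field, handler in HANDLERS.items():
    if getattr(entry, field) != row.get(field):
        handler(row.get(field))

print(changed)  # only the email differs here
```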
| <python> | 2023-10-16 08:27:32 | 1 | 522 | saromba |
77,300,395 | 16,399,497 | SqlAlchemy: hybrid attribute as sum of columns with null values | <p>I've got a simple problem I can't figure out.</p>
<p>I've got a simple data model with three integer columns (which can be null). I want to create a hybrid attribute for the sum of the three columns, so that I can get the maximal total or filter according to this total.</p>
<p>I've made a minimal example.</p>
<pre class="lang-py prettyprint-override"><code>from contextlib import contextmanager

from sqlalchemy import create_engine, func, select
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, sessionmaker

engine = create_engine("sqlite://")
"""The database engine."""

DbSession = sessionmaker(autocommit=False, autoflush=False, bind=engine)
"""The application session class."""


class Base(DeclarativeBase):
    """The SQLalchemy declarative base."""


class Data(Base):
    __tablename__ = "data"

    id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
    x: Mapped[int | None]
    y: Mapped[int | None]
    z: Mapped[int | None]

    @hybrid_property
    def total(self):
        return self.x + self.y + self.z

    @total.expression
    @classmethod
    def total(cls):
        return select(cls.x + cls.y + cls.z).label("total")


@contextmanager
def get_db() -> DbSession:
    """Return a database session."""
    with DbSession() as db_is:
        with db_is.begin():
            yield db_is
            # inner context calls db.commit(), if there were no exceptions
        # outer context calls db.close()


Base.metadata.create_all(engine)

with get_db() as db:
    # Add data
    db.add(Data(x=1, y=1, z=None))
    db.add(Data(x=None, y=2, z=2))
    db.add(Data(x=3, y=3, z=3))
    db.flush()

    # Try to get maximal total
    print(db.scalar(select(func.max(Data.total))))

    # Try to get entries with total less than 5
    print(db.scalars(select(Data).filter(Data.total < 5)).all())
</code></pre>
<p>My problems are:</p>
<ul>
<li>It seems something goes wrong in the hybrid attribute definition in case of None value in x, y and z columns. Should I use a case?</li>
<li>Even my two final request instructions seem buggy.</li>
</ul>
<p>The above code fails. The first statement <code>db.scalar(select(func.max(Data.total)))</code> fails with the following message.</p>
<pre><code>sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) misuse of aggregate: max()
[SQL: SELECT max((SELECT data.x + data.y + data.z AS anon_1
FROM data)) AS max_1]
</code></pre>
<p>The second statement <code>db.scalars(select(Data).filter(Data.total < 5)).all()</code> returns <code>[]</code>. Running <code>db.scalars(select(Data.total)).all()</code> returns <code>[None]</code>.</p>
<p>Can someone help me?</p>
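<p>Two separate issues seem to be at play (a sketch of the reasoning, not a full answer): in SQL, <code>x + y + z</code> is <code>NULL</code> whenever any column is <code>NULL</code>, so the columns need <code>COALESCE</code> to 0; and the expression part should be a plain column expression rather than a nested <code>select(...)</code> (which produces the <code>misuse of aggregate</code> subquery). The NULL half can be demonstrated with the stdlib <code>sqlite3</code> alone:</p>

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE data (x INTEGER, y INTEGER, z INTEGER)')
con.executemany('INSERT INTO data VALUES (?, ?, ?)',
                [(1, 1, None), (None, 2, 2), (3, 3, 3)])

# x + y + z is NULL whenever any column is NULL
print(con.execute('SELECT x + y + z FROM data').fetchall())
# -> [(None,), (None,), (9,)]

# COALESCE each column to 0 first, then aggregate
print(con.execute(
    'SELECT MAX(COALESCE(x, 0) + COALESCE(y, 0) + COALESCE(z, 0)) FROM data'
).fetchall())
# -> [(9,)]
```

In SQLAlchemy terms the expression would presumably become `func.coalesce(cls.x, 0) + func.coalesce(cls.y, 0) + func.coalesce(cls.z, 0)` without the `select(...)` wrapper, but I have not verified that against this exact model.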
| <python><database><sqlalchemy> | 2023-10-16 08:19:30 | 1 | 723 | emonier |
77,300,252 | 15,093,600 | Type allowed keys of TypedDict | <p>I wonder how we should type a key that corresponds to a <code>TypedDict</code> instance.</p>
<p>Consider a class that accepts a dictionary as input.
The dictionary is typed via <code>TypedDict</code>.
Say there is a method that accesses the values of the dictionary by key.
In this case the key must be a <code>Literal</code> containing the allowed keys of the dictionary,
as shown in the snippet below.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Literal, TypedDict

class Data(TypedDict):
    one: str
    two: str

class MyClass:
    def __init__(self, data: Data) -> None:
        self.data = data

    def value(self, key: Literal['one', 'two']) -> str:
        return self.data[key]
</code></pre>
<p>But this solution is not convenient.
For example, if we extend <code>Data</code> by adding one more key, we will also have to add it to the <code>key</code> type annotation.</p>
<p>Is there a programmable way to type <code>key</code>?</p>
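<p>As far as I know, <code>typing</code> offers no way to derive a <code>Literal</code> from a <code>TypedDict</code>'s keys automatically; the usual mitigation is to define the <code>Literal</code> alias once, next to the <code>TypedDict</code>, so there is a single place to update when a key is added. A sketch:</p>

```python
from typing import Literal, TypedDict

# single place that lists the allowed keys; extend both together
DataKey = Literal['one', 'two']

class Data(TypedDict):
    one: str
    two: str

class MyClass:
    def __init__(self, data: Data) -> None:
        self.data = data

    def value(self, key: DataKey) -> str:
        return self.data[key]

print(MyClass({'one': '1', 'two': '2'}).value('one'))
```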
| <python><typeddict> | 2023-10-16 07:54:29 | 1 | 460 | Maxim Ivanov |
77,300,153 | 3,420,197 | IPython.display doesn't show long wav files | <p>I use IPython.display to listen to audio files in a Jupyter notebook.
I have a long file (30 min).</p>
<pre><code>import IPython.display as dis
import numpy

y = numpy.zeros(30*60*48000)
</code></pre>
<p>This code works fine for 5 seconds:</p>
<p><code>dis.display(dis.Audio(y[:5*48000], rate=48000))</code></p>
<p>but for 30 minutes the code hangs and there are no errors.</p>
<p><code>dis.display(dis.Audio(y, rate=48000))</code></p>
<p>Is this method unsuitable or limited for displaying long audio?
Is there another way to display long files?</p>
| <python><audio><jupyter-notebook> | 2023-10-16 07:42:02 | 1 | 1,712 | Anna Andreeva Rogotulka |
77,300,113 | 726,730 | get cpu temperature without elevated privileges | <p>On Windows 7 or Windows 11 I want to display the CPU temperature using a Python script.</p>
<p>The code I run successfully is:</p>
<pre class="lang-py prettyprint-override"><code>import wmi
import pyuac

def main():
    w = wmi.WMI(namespace='root\\wmi')
    temperature = w.MSAcpi_ThermalZoneTemperature()[0]
    temperature = int(temperature.CurrentTemperature / 10.0 - 273.15)
    print(str(temperature) + "°C")
    input("Press any key to exit...")

if __name__ == "__main__":
    if not pyuac.isUserAdmin():
        print("Re-launching as admin!")
        pyuac.runAsAdmin()
    else:
        main()  # Already an admin here.
</code></pre>
<p>The main problem with the above script is that it must be run as an admin user.
Is there a similar script I can use without admin privileges?</p>
| <python><windows><cpu><temperature> | 2023-10-16 07:35:14 | 0 | 2,427 | Chris P |
77,300,078 | 22,466,650 | How does .transform handle the split groups? | <p>I have this dataframe:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'subject': ['a', 'a', 'b', 'b', 'c', 'd'],
                   'level': ['hard', None, None, 'easy', None, 'medium']})
print(df)
subject level
0 a hard
1 a None
2 b None
3 b easy
4 c None
5 d medium
</code></pre>
<p>When using the code :</p>
<pre><code>df.groupby('subject').transform(lambda group: print(group))
</code></pre>
<p>I got four printed groups. That's OK because we have four subjects: <code>a</code>, <code>b</code>, <code>c</code> and <code>d</code>.<br />
But I don't understand group 2; I feel like transform has accumulated the values of the first two groups. Also, there is a weird indentation that seems to separate the first group from the second one.</p>
<pre><code># ------------------------ group1
0 hard
1 None
Name: level, dtype: object
# ------------------------ group2
level
0 hard
1 None
2 None
3 easy
Name: level, dtype: object
# ------------------------ group3
4 None
Name: level, dtype: object
# ------------------------ group4
5 medium
Name: level, dtype: object
</code></pre>
<p>Can someone please explain the logic to me ?</p>
| <python><pandas> | 2023-10-16 07:28:56 | 1 | 1,085 | VERBOSE |
77,300,059 | 2,006,844 | App Store limitations on publishing an iOS app embedded with a Python script | <p>Will there be any restrictions/limitations on publishing iOS applications that internally run Python scripts on the App Store during the Apple review process?</p>
| <python><ios><xcode><app-store> | 2023-10-16 07:25:28 | 1 | 479 | lreddy |
77,300,038 | 13,605,694 | pip error: ModuleNotFoundError: No module named 'distutils' | <p>I updated to Python 3.12 on Windows and am trying to <code>pip install -r requirements.txt</code>, but it terminates with an error when trying to build <code>numpy</code>: <code>ModuleNotFoundError: No module named 'distutils'</code></p>
<p>requirements.txt</p>
<pre><code>altgraph==0.17.4
blinker==1.6.2
certifi==2023.7.22
cffi==1.15.1
charset-normalizer==3.2.0
click==8.1.7
contourpy==1.1.0
cycler==0.11.0
Flask==2.3.3
Flask-Cors==4.0.0
fonttools==4.42.1
idna==3.4
iniconfig==2.0.0
itsdangerous==2.1.2
Jinja2==3.1.2
kiwisolver==1.4.5
MarkupSafe==2.1.3
matplotlib==3.7.2
numpy==1.25.2
packaging==23.1
Pillow==10.0.0
pluggy==1.3.0
ply==3.11
pycparser==2.21
pyinstaller==6.1.0
pyinstaller-hooks-contrib==2023.10
pyparsing==3.0.9
PySpice==1.5
pytest==7.4.2
python-dateutil==2.8.2
PyYAML==6.0.1
requests==2.31.0
scipy==1.11.2
six==1.16.0
urllib3==2.0.4
Werkzeug==2.3.7
</code></pre>
<p>EDIT:added requirements file</p>
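<p>Background that may explain the error: <code>distutils</code> was removed from the standard library in Python 3.12 (PEP 632), and numpy 1.25.2 predates 3.12 support, so pip falls back to a source build that still imports <code>distutils</code>. A hypothetical fix, assuming nothing else in the project pins numpy lower, is to relax the pin to the first 3.12-compatible release:</p>

```text
# requirements.txt (changed line only)
numpy>=1.26.0
```

Other pinned packages without 3.12 wheels at their pinned versions (e.g. scipy, matplotlib) may need similar bumps.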
| <python><python-3.x><numpy><pip> | 2023-10-16 07:21:08 | 0 | 392 | ayitinya |
77,299,863 | 5,667,265 | MATLAB interp2 equivalent in SciPy (Python) not working | <p>The original MATLAB statement is:</p>
<pre><code>Output = (interp2(X,Y,Tbl',Z,ThrtlPrcnt,'linear'));
</code></pre>
<p>Where parameter values (in Python) are</p>
<pre><code>X = array([0, 550, 600, 700, 800, 874, 900, 950, 1000, 1100, 1200, 1300, 1400,
1500, 1600, 1700, 1730, 2000, 2100, 2200, 2500, 3000], dtype=object)
Y =array([0, 1, 10, 20, 30, 40, 50, 70, 80, 100], dtype=object)
Tbl =
array([[-75, -226, -239, -271, -375, -453, -701, -759, -818, -997],
[-1269, -1716, -1967, -2028, -2056, -2104, -2106, -2124, -2130,
-2137],
[-2156, -2680, 463, -114, -166, -271, -375, -453, -701, -759],
[-818, -997, -1269, -1716, -1967, -2028, -2056, -2104, -2106,
-2124],
[-2130, -2137, -2156, -2680, 954, 285, 285, 285, 285, 167],
[125, 44, -32, -161, -290, -402, -521, -660, -800, -925],
[-970, -1318, -1426, -1534, -1857, -2679, 1106, 535, 535, 535],
[535, 417, 375, 294, 218, 89, -32, -152, -271, -391],
[-510, -637, -670, -969, -1080, -1191, -1524, -2331, 1258, 860],
[860, 860, 860, 809, 792, 759, 726, 648, 570, 500],
[400, 300, 185, 81, 46, -271, -389, -506, -858, -1634],
[1410, 1285, 1285, 1285, 1285, 1222, 1200, 1155, 1112, 1037],
[965, 890, 800, 700, 600, 500, 470, 150, 0, -150],
[-525, -1286, 1592, 1592, 1592, 1592, 1592, 1576, 1568, 1563],
[1554, 1518, 1483, 1432, 1381, 1329, 1261, 1157, 1119, 776],
[649, 522, 140, -590, 1867, 1944, 1937, 1924, 1910, 1901],
[1897, 1895, 1893, 1887, 1849, 1815, 1781, 1747, 1620, 1516],
[1477, 1125, 995, 864, 473, -242, 2019, 2144, 2137, 2124],
[2130, 2145, 2150, 2150, 2150, 2150, 2150, 2150, 2150, 2120],
[2009, 1880, 1850, 1550, 1420, 1290, 900, 200, 2383, 2536],
[2550, 2578, 2606, 2626, 2634, 2648, 2648, 2648, 2648, 2648],
[2620, 2520, 2369, 2234, 2193, 1823, 1686, 1549, 1139, 455]],
dtype=object)
Z =
array([600, 700, 800, 874, 900, 950, 1000, 1100, 1200, 1300, 1400, 1500,
1600, 1700, 1730], dtype=object)
ThrtlPrcnt = 100
</code></pre>
<p>My expected output is:</p>
<pre><code> np.array([[2550],
[2578],
[2606],
[2626],
[2634],
[2648],
[2648],
[2648],
[2648],
[2648],
[2620],
[2520],
[2369],
[2234],
[2193]])
</code></pre>
<p>I have tried SciPy's RectBivariateSpline, bisplrep and bisplev, but no luck.
I've been trying since last week. For 1D interpolation, i.e. <code>np.interp</code>, the last MATLAB parameter moves to the first position in Python; I am not sure how it goes in 2D. What am I missing?</p>
<p>For RectBivariateSpline I get the error: <code>ValueError: x dimension of z must have same number of elements as x</code>.</p>
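<p>One possible mapping (a sketch on a small synthetic grid, not the question's data, which would also need <code>astype(float)</code> since the arrays are <code>dtype=object</code>): MATLAB's <code>interp2(X, Y, Tbl', xi, yi, 'linear')</code> corresponds roughly to <code>RegularGridInterpolator((X, Y), Tbl)</code> — note the table is <em>not</em> transposed on the Python side, because SciPy expects shape <code>(len(X), len(Y))</code>:</p>

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# small synthetic grid standing in for the question's X, Y, Tbl
X = np.array([0.0, 1.0, 2.0])
Y = np.array([0.0, 10.0])
Tbl = np.array([[ 0.0, 100.0],
                [10.0, 110.0],
                [20.0, 120.0]])  # shape (len(X), len(Y)); no transpose needed

f = RegularGridInterpolator((X, Y), Tbl, method='linear')

# evaluate at all Z values with a fixed second coordinate (like ThrtlPrcnt)
Z = np.array([0.5, 1.5])
ThrtlPrcnt = 10.0
out = f(np.column_stack([Z, np.full_like(Z, ThrtlPrcnt)]))
print(out)  # [105. 115.]
```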
| <python><matlab><linear-interpolation> | 2023-10-16 06:51:20 | 1 | 1,259 | Dipak |
77,299,819 | 65,889 | Can I use VS Code's launch config to just run a specific Python file and not debug it? | <p>In my Python project I have a module <code>my_stuff.py</code> which is active in my editor, but I want to execute the file <code>main.py</code> to run my project. Can I use <code>launch.json</code> for this?</p>
<p>I tried a simple config like this:</p>
<pre class="lang-json prettyprint-override"><code>{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Python: Main File",
"type": "python",
"request": "launch",
"program": "main.py",
"console": "integratedTerminal",
"justMyCode": true
},
]
}
</code></pre>
<p>But this always wants to start the <code>debugpy</code> launcher. I would just like to launch the main file as</p>
<pre class="lang-bash prettyprint-override"><code>/my/venv/python /my/project/dir/main.py
</code></pre>
<p>How can I do this with <code>launch.json</code>?</p>
| <python><visual-studio-code> | 2023-10-16 06:41:27 | 1 | 10,804 | halloleo |
77,299,359 | 5,624,602 | Confluence REST API content-type issue for attachment with Python | <p>I am trying to upload an attachment to Confluence using the REST API from Python. I get a 415 error: unsupported media type.</p>
<p>I understand the Content-Type is not correct, but I tried several content types and got the same error:</p>
<pre><code>import requests

print("start....................")
url = 'https://confluence.XXX.com/confluence/rest/api/content/' + str(108458915) + '/child/attachment/'
TOKEN = 'XXXXXXXX'
headers = {
    'X-Atlassian-Token': 'no-check',
    "Authorization": f"Bearer {TOKEN}",
}  # no Content-Type here!
file = 'my_file.jpeg'
content_type = 'multipart/form-data'
files = {'file': (file, open(file, 'rb'), content_type)}
print("request....................")
r = requests.post(url, headers=headers, files=files, verify='confluence.XXX.crt')
r.raise_for_status()
</code></pre>
<p>I tried the following content-types: <code>'image/jpeg'</code> <code>'application/json'</code> <code>'application/xml'</code> <code>'application/octet-stream'</code> <code>'multipart/form-data'</code> but get the same 415 error:</p>
<blockquote>
<p>requests.exceptions.HTTPError: 415 Client Error: for url:
<a href="https://confluence.XXX.com/rest/api/content/108458915/child/attachment/" rel="nofollow noreferrer">https://confluence.XXX.com/rest/api/content/108458915/child/attachment/</a></p>
</blockquote>
| <python><python-requests><confluence-rest-api><http-status-code-415> | 2023-10-16 04:09:08 | 1 | 1,513 | STF |
77,299,088 | 489,088 | How to replace a value by another value only in certain columns of a 3d Numpy array? | <p>I have a 3d numpy array and a list of columns (axis = 1) in which I would like to replace all zeroes by a constant value:</p>
<p>My sample data is like so:</p>
<pre><code>data = np.array([
[[10, 10, 10], [0, 10, 10], [0, 10, 0]],
[[20, 0, 20], [20, 20, 0], [0, 20, 20]],
[[0, 30, 30], [30, 0, 30], [0, 30, 30]],
])
</code></pre>
<p>My attempt was to:</p>
<pre><code>columns_to_replace = [0, 1]
data[data[:, columns_to_replace, :] == 0.0] = 1000 # value to replace
</code></pre>
<p>But this results in:</p>
<pre><code>IndexError: boolean index did not match indexed array along dimension 1; dimension is 3 but corresponding boolean dimension is 2
</code></pre>
<p>I would like the output to be:</p>
<pre><code>[[[10 10 10]
[ 1000 10 10]
[ 0 10 0]]
[[20 1000 20]
[20 20 1000]
[ 0 20 20]]
[[ 1000 30 30]
[30 1000 30]
[ 0 30 30]]]
</code></pre>
<p>Where indexes 0 and 1 of the second dimension have all zeroes replaced by 1000.</p>
<p>I'm thinking I might have to compose this inner boolean array so that the shape is the same as the <code>data</code> array and the columns not included in the <code>columns_to_replace</code> array are all set to <code>False</code>, but not sure how to accomplish this.</p>
<p>Am I on the right track or is there another better approach?</p>
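<p>A minimal sketch of the mask-building idea described above, using the sample data: build a full-shape boolean mask that stays <code>False</code> outside the chosen columns, then index with it.</p>

```python
import numpy as np

data = np.array([
    [[10, 10, 10], [0, 10, 10], [0, 10, 0]],
    [[20, 0, 20], [20, 20, 0], [0, 20, 20]],
    [[0, 30, 30], [30, 0, 30], [0, 30, 30]],
])
columns_to_replace = [0, 1]

# Full-shape mask: True only where the selected columns contain zeros.
mask = np.zeros(data.shape, dtype=bool)
mask[:, columns_to_replace, :] = data[:, columns_to_replace, :] == 0
data[mask] = 1000
print(data)
```

An equivalent alternative is to copy the slice out, replace in the copy, and assign it back: <code>sub = data[:, columns_to_replace, :]; sub[sub == 0] = 1000; data[:, columns_to_replace, :] = sub</code> (fancy indexing returns a copy, so the write-back is required).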
| <python><arrays><numpy><array-broadcasting> | 2023-10-16 02:14:25 | 1 | 6,306 | Edy Bourne |
77,299,004 | 6,115,999 | Nested schema for an association table | <p>I'm having trouble serializing my <code>Joke</code> data with a schema when I've added an association table.</p>
<p>Here is my model it allows users to enter a joke and also a tag for categorizing the joke:</p>
<pre><code>class Joke(db.Model):
id = db.Column(db.Integer, primary_key=True)
joke_name = db.Column(db.String(40), nullable=False)
joke = db.Column(db.String(5000), nullable=False)
joke_owner = db.Column(db.Integer, db.ForeignKey('user.id'), nullable=False)
joke_category = db.Column(db.Integer, db.ForeignKey('category.id'), nullable=True)
joke_created = db.Column(db.DateTime, nullable=True)
joke_edited = db.Column(db.DateTime, nullable=True)
tags = db.relationship('JokeTag', secondary=tag_joke, backref=db.backref('jokes', lazy='dynamic'), lazy='dynamic',
primaryjoin=tag_joke.c.joke_id==id,
secondaryjoin=tag_joke.c.joketag_id==JokeTag.id)
class JokeTag(db.Model):
id = db.Column(db.Integer, primary_key=True)
tag = db.Column(db.String(25), nullable=False)
tag_owner = db.Column(db.Integer, db.ForeignKey('user.id'), nullable=False)
</code></pre>
<p>Here is my association table that connects the two on a many-to-many basis:</p>
<pre><code>
tag_joke = db.Table('tag_joke',
db.Column('joke_id', db.Integer, db.ForeignKey('joke.id')),
db.Column('joketag_id', db.Integer, db.ForeignKey('joke_tag.id'))
)
</code></pre>
<p>I am trying to serialize the data by using the following schema:</p>
<pre><code>class TagJokeSchema(ma.Schema):
class Meta:
fields = ('joke_id', 'joketag_id')
class JokeSchema(ma.Schema):
class Meta:
fields = ('id', 'joke_name', 'joke', 'joke_owner', 'joke_category', 'joke_created', 'joke_edited', 'tags')
tags=ma.Nested(TagJokeSchema, many=True)
</code></pre>
<p>I am doing something wrong with the schema because when I try to serialize the data with this:</p>
<pre><code>allnotes = Joke.query.filter_by(joke_owner=1).all()
result = jokes_schema.dump(allnotes)
</code></pre>
<p>I get this TypeError:</p>
<p><code>TypeError: Object of type AppenderBaseQuery is not JSON serializable</code></p>
<p>When I try to access the jokes and tags from the python command line I am able to access the Joke and corresponding tags. Although when I try and print a joke it does look a little crazy:</p>
<pre><code>Joke('35','No reason','Finance people get buff for no reason. Just to work on Excel spreadsheets.','1','55', None, None, SELECT joke_tag.id AS joke_tag_id, joke_tag.tag AS joke_tag_tag, joke_tag.tag_owner AS joke_tag_tag_owner
FROM joke_tag, tag_joke
WHERE tag_joke.joke_id = %s AND tag_joke.joketag_id = joke_tag.id)
</code></pre>
<p>I was hoping to get a cleaner print with tags nested in there if they have them.</p>
<p>edit for @Greg0ry:
The printout of allnotes looks like this:</p>
<pre><code>[Joke('25','Test Joke','Knock, Knock, Whos There, test joke','1','55', None, None, SELECT joke_tag.id AS joke_tag_id, joke_tag.tag AS joke_tag_tag, joke_tag.tag_owner AS joke_tag_tag_owner FROM joke_tag, tag_joke WHERE tag_joke.joke_id = %s AND tag_joke.joketag_id = joke_tag.id), Joke('35','No reason','Finance people get buff for no reason. Just to work on Excel spreadsheets.','1','55', None, None, SELECT joke_tag.id AS joke_tag_id, joke_tag.tag AS joke_tag_tag, joke_tag.tag_owner AS joke_tag_tag_owner FROM joke_tag, tag_joke WHERE tag_joke.joke_id = %s AND tag_joke.joketag_id = joke_tag.id)]
</code></pre>
<p>I also have this within my <code>Joke</code> class, which explains the printout; I believe I got it from a YouTube tutorial (Corey Schafer):</p>
<pre><code> def __repr__(self):
return f"Joke('{self.id}','{self.joke_name}','{self.joke}','{self.joke_owner}','{self.joke_category}', {self.joke_created}, {self.joke_edited}, {self.tags})"
</code></pre>
<p>as far as db.Model, I may not understand what you're looking for but I have this:</p>
<pre><code>app = Flask(__name__)
db = SQLAlchemy(app)
</code></pre>
<p>Hopefully there's a way to guide me a little more on what you're looking for?</p>
<p>Also if I look for a particular joke by doing</p>
<pre><code>jokequery = Joke.query.filter_by(joke_owner=1, id=595).first()
</code></pre>
<p>I get</p>
<pre><code>Joke('595','test joke','knock knock whos there test joke','1','475', None, None, SELECT joke_tag.id AS joke_tag_id, joke_tag.tag AS joke_tag_tag, joke_tag.tag_owner AS joke_tag_tag_owner
FROM joke_tag, tag_joke
WHERE tag_joke.joke_id = %s AND tag_joke.joketag_id = joke_tag.id)
</code></pre>
<p>obviously the last part of that looks pretty bad. This is because <code>tags</code> is an <code>AppenderBaseQuery</code></p>
<p>but if I run this:</p>
<pre><code>jokequery.tags.all()
</code></pre>
<p>the output looks great and is a list which is JSON serializable:</p>
<pre><code>[JokeTag('45', 'stackoverflow', '1')]
</code></pre>
<p>But how do I set this up so that I can send in that list to be serialized and not the query?</p>
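<p>The core of the problem is that <code>lazy='dynamic'</code> makes <code>tags</code> an <code>AppenderBaseQuery</code>, which is not JSON serializable until materialized with <code>.all()</code>. One hedged approach is a marshmallow <code>Method</code> field that returns <code>obj.tags.all()</code>; the underlying idea — materialize first, then build plain serializable structures — can be sketched without any dependencies (the <code>Fake*</code> classes below are hypothetical stand-ins for the SQLAlchemy objects):</p>

```python
class FakeQuery:
    """Hypothetical stand-in for SQLAlchemy's AppenderBaseQuery."""
    def __init__(self, items):
        self._items = items

    def all(self):
        return self._items


class FakeTag:
    """Hypothetical stand-in for the JokeTag model."""
    def __init__(self, id, tag):
        self.id, self.tag = id, tag


def serialize_tags(joke_tags):
    # Materialize the lazy query first, then build plain dicts.
    return [{"id": t.id, "tag": t.tag} for t in joke_tags.all()]


tags = FakeQuery([FakeTag(45, "stackoverflow")])
print(serialize_tags(tags))  # [{'id': 45, 'tag': 'stackoverflow'}]
```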
| <python><flask><sqlalchemy><flask-sqlalchemy><marshmallow> | 2023-10-16 01:37:33 | 1 | 877 | filifunk |
77,298,993 | 13,916,049 | Match index of first dataframe to column of a second dataframe and replace the index with corresponding values in the second dataframe | <p>'ref' is an AnnData object and one of its components is a <code>ref.var</code> Pandas dataframe.
<code>gene_list</code> is a separate Pandas dataframe with two columns: <code>ensembl_gene_id</code> and <code>hgnc_symbol</code>.
If <code>ref.var.index</code> matches <code>gene_list[hgnc_symbol]</code>, replace <code>ref.var.index</code> with <code>gene_list[ensembl_gene_id]</code>.
Ignore if no matches are found.
<code>ref.var_names</code> refer to the index of <code>ref.var</code>.</p>
<pre><code>ref_column_name = 'features'
for index, row in gene_list.iterrows():
# Check if the 'hgnc_symbol' is present in the index
if row['hgnc_symbol'] in ref.var.index:
# Replace the value in the index with 'ensembl_gene_id'
ref.var.index = ref.var.index.set_value(row['hgnc_symbol'], row['ensembl_gene_id'])
</code></pre>
<p>Traceback:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [25], in <cell line: 2>()
2 for index, row in gene_list.iterrows():
3 # Check if the 'hgnc_symbol' is present in the index
4 if row['hgnc_symbol'] in ref.var.index:
5 # Replace the value in the index with 'ensembl_gene_id'
----> 6 ref.var.index = ref.var.index.set_value(row['hgnc_symbol'], row['ensembl_gene_id'])
TypeError: set_value() missing 1 required positional argument: 'value'
</code></pre>
<p>Input:</p>
<p><code>ref.var</code></p>
<pre><code>pd.DataFrame({'features': {'A1BG': 'A1BG',
'A1BG-AS1': 'A1BG-AS1',
'A1CF': 'A1CF',
'A2M': 'A2M',
'A2M-AS1': 'A2M-AS1'}})
</code></pre>
<p><code>gene_list</code></p>
<pre><code>pd.DataFrame({'ensembl_gene_id': {1: 'ENSG00000148584',
2: 'ENSG00000175899',
3: 'ENSG00000183044',
4: 'ENSG00000165029',
5: 'ENSG00000154263'},
'hgnc_symbol': {1: 'A1CF', 2: 'A2M', 3: 'ABAT', 4: 'ABCA1', 5: 'ABCA10'}})
</code></pre>
<p><code>ref.var_names</code></p>
<pre><code>Index(['A1BG-AS1', 'A1CF', 'A2M', 'A2M-AS1'], dtype='object')
</code></pre>
<p>Expected output:</p>
<p><code>ref.var</code></p>
<pre><code>pd.DataFrame({'features': {'ENSG00000148584': 'A1CF',
'ENSG00000175899': 'A2M'}})
</code></pre>
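<p>A hedged sketch of one way to get there (assuming <code>ref.var</code> behaves like a plain pandas DataFrame, which AnnData's <code>.var</code> generally does): build a symbol-to-Ensembl mapping, rename the index in one vectorized call, and optionally keep only the matched rows as in the expected output above.</p>

```python
import pandas as pd

# Sample data from the question.
ref_var = pd.DataFrame({'features': {'A1BG': 'A1BG', 'A1BG-AS1': 'A1BG-AS1',
                                     'A1CF': 'A1CF', 'A2M': 'A2M', 'A2M-AS1': 'A2M-AS1'}})
gene_list = pd.DataFrame({'ensembl_gene_id': {1: 'ENSG00000148584', 2: 'ENSG00000175899'},
                          'hgnc_symbol': {1: 'A1CF', 2: 'A2M'}})

# hgnc_symbol -> ensembl_gene_id lookup; rename leaves unmatched labels untouched.
mapping = dict(zip(gene_list['hgnc_symbol'], gene_list['ensembl_gene_id']))
renamed = ref_var.rename(index=mapping)
# Optionally keep only rows whose index was actually replaced.
matched = renamed.loc[renamed.index.isin(mapping.values())]
print(matched)
```

On the real AnnData object, the in-place equivalent would be something like <code>ref.var.index = ref.var.index.map(lambda s: mapping.get(s, s))</code>.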
| <python><pandas> | 2023-10-16 01:29:51 | 1 | 1,545 | Anon |
77,298,753 | 6,591,677 | Django - Heroku postgres connection opens with each page refresh and does not close ever | <p>I am using Django, heroku, websocket, AJAX, celery, postgres, consumer, asgi (uvicorn; tried Daphne also). <strong>My problem is that after just a page refresh, a new postgres connection is created, and it never goes away, even after long inactivity.</strong> Soon, with just refreshes, my connections go over the limit (20/20 on heroku). On my local environment there is no problem - connections do not add up no matter what I do. This happens only on heroku using heroku postgres (I'm using the default version).</p>
<p>What I've tried:</p>
<ul>
<li>I've checked pg_stat_activity. The connections are related to the corresponding view code that queries certain data.</li>
</ul>
<pre><code>for post in posts:
temp_dict = defaultdict(list)
recommendations_ordered = post.recommendations.all().order_by('created_at')
...and so on, including request.user.
</code></pre>
<p>If I have 15 connections, most of them are from this -- same query. One from request.user. All idle. These don't exist when I run pg_stat_activity on local postgres.</p>
<ul>
<li>I've set my database settings as:</li>
</ul>
<pre><code>if ENVIRONMENT == 'development':
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'redacted',
'USER': 'redacted',
'PASSWORD': 'redacted',
'HOST': 'localhost',
'PORT': '5432',
}
}
else:
DATABASES = {
'default': dj_database_url.config(default=os.environ.get('DATABASE_URL'))
}
DATABASES['default']['CONN_MAX_AGE'] = 0
DATABASES['default']['ATOMIC_REQUESTS'] = True
</code></pre>
<ul>
<li>I've added <code>connection.close()</code> to the aforementioned view as well as the tasks.</li>
<li>Here's my procfile.</li>
</ul>
<pre><code>web: gunicorn -w 1 -k uvicorn.workers.UvicornWorker decide.asgi:application
worker: celery -A decide worker -l info --pool=threads
</code></pre>
<p>What could be going on? I'd really appreciate help.</p>
<p>In case relevant:</p>
<pre><code>
CACHES = {
"default": {
"BACKEND": "django_redis.cache.RedisCache",
"LOCATION": os.environ.get('REDIS_URL', 'redis://localhost:6379/0'),
"OPTIONS": {
"CLIENT_CLASS": "django_redis.client.DefaultClient",
}
}
}
if ENVIRONMENT == 'production':
CACHES['default']['OPTIONS']['CONNECTION_POOL_KWARGS'] = {"ssl": True}
# Cache settings
CACHE_TTL = 300
# Use Django's cache API
CACHE_MIDDLEWARE_ALIAS = "default"
CACHE_MIDDLEWARE_SECONDS = CACHE_TTL
CACHE_MIDDLEWARE_KEY_PREFIX = ""
# Celery Configuration
if ENVIRONMENT == 'development':
CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
else: # production
CELERY_BROKER_URL = os.getenv('REDIS_URL', 'redis://localhost:6379/0')
CELERY_RESULT_BACKEND = os.getenv('REDIS_URL', 'redis://localhost:6379/0')
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'UTC'
CELERY_TASK_DEFAULT_QUEUE = 'default'
CELERY_QUEUES = (
Queue('default', Exchange('default'), routing_key='default'),
)
django_heroku.settings(locals())
</code></pre>
| <python><django><postgresql><heroku> | 2023-10-15 23:05:44 | 1 | 479 | superbot |
77,298,704 | 9,072,753 | How to .format expressions like f string? | <p>Consider the following program:</p>
<pre><code>$ python -c 'import sys; print(sys.argv[1].format(**dict(val = 1)))' "val={val}"
val=1
</code></pre>
<p>However, the f-string functionality is not available in .format:</p>
<pre><code>$ python -c 'import sys; print(sys.argv[1].format(**dict(val = 1)))' "val={'1' if val else '2'}"
Traceback (most recent call last):
File "<string>", line 1, in <module>
KeyError: "'1' if val else '2'"
</code></pre>
<p>How can I use f-string formatting capabilities with format string and a dictionary both set at runtime?</p>
<p>I tried the following, however I do not know how to "trigger" f-string evaluation on a string:</p>
<pre><code>python -c 'import sys; locals().update(dict(val=1)); print(sys.argv[1])' "val={'1' if val else '2'}"
</code></pre>
<p>I could do this and it works:</p>
<pre><code>$ python -c 'import sys; locals().update(dict(val=1)); print(eval(f"f{sys.argv[1]!r}"))' "val={'1' if val else '2'}"
val=1
</code></pre>
<p>However, it feels very much like a hack. Is there a better way?</p>
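<p>For what it's worth, the same <code>eval</code> trick can at least be wrapped into a small helper, which makes the call sites cleaner even if the underlying mechanism is unchanged. Note this still executes arbitrary expressions, so it is only safe for trusted templates:</p>

```python
def fstring_eval(template, mapping):
    """Evaluate `template` as if it were an f-string, with `mapping` as locals.

    Only use with trusted templates: any expression inside the braces
    will be executed.
    """
    return eval("f" + repr(template), {"__builtins__": {}}, dict(mapping))


print(fstring_eval("val={'1' if val else '2'}", {"val": 1}))  # val=1
```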
| <python><python-3.x><f-string> | 2023-10-15 22:38:34 | 0 | 145,478 | KamilCuk |
77,298,543 | 8,507,034 | Why is seaborn superimposing plots? | <p>The following function is called from a fastapi post request:</p>
<pre><code>import seaborn as sns
import os
from random import randint
def make_histogram(data, filepath, destructive = True):
sns.set()
sns.set_style("whitegrid")
sns.set_palette("husl")
sns.set_context("paper")
p = sns.histplot(data, )
f = p.get_figure()
if destructive and os.path.exists(filepath):
os.remove(filepath)
assert not os.path.exists(filepath)
f.figure.savefig(filepath)
</code></pre>
<p>I was expecting it to trash the plot and recreate it whenever the function is called. Instead it superimposes the plot onto the old plot:<a href="https://i.sstatic.net/ykKPw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ykKPw.png" alt="enter image description here" /></a></p>
<p>I thought it was modifying the file, so I replaced the filename with a randomly generated id. It still has the same behavior.</p>
<p>That tells me that it's updating the original figure each time the function is called.</p>
<p>I have questions. First, how do I fix it? Second, if it is modifying the figure... how? Why doesn't the figure fall out of scope and get garbage collected?</p>
<p>I have some experience with Jupyter Notebooks, but this is my first time doing anything with web development or traditional IDE.</p>
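<p>The behavior comes from pyplot's implicit state machine: <code>sns.histplot</code> draws onto the "current" axes, which persist between requests in a long-running server process, so nothing falls out of scope. A hedged fix (sketched with plain matplotlib; with seaborn you would pass the fresh axes via <code>sns.histplot(data, ax=ax)</code>) is to create an explicit figure per call and close it afterwards:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, appropriate for a server
import matplotlib.pyplot as plt


def make_histogram(data, filepath):
    fig, ax = plt.subplots()   # fresh figure every call
    ax.hist(data)              # seaborn equivalent: sns.histplot(data, ax=ax)
    fig.savefig(filepath)
    plt.close(fig)             # release the figure so nothing accumulates
```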
| <python><seaborn><fastapi> | 2023-10-15 21:30:56 | 0 | 315 | Jred |
77,298,383 | 284,932 | Stack size errors when fine-tuning t5 with xsum using pytorch | <p>I am trying to fine-tune t5-small on the xsum dataset with PyTorch on Windows 10 (CUDA 12.1).</p>
<p>Unfortunately the <strong>Trainer</strong> (or <code>Seq2SeqTrainer</code>) class is not usable on Windows because its <code>bitsandbytes</code> dependency is not available there, so it was necessary to write an explicit epoch loop:</p>
<pre><code>from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, get_scheduler
from torch.utils.data import DataLoader
from torch.optim import AdamW
import torch
from tqdm.auto import tqdm
dataset = load_dataset("xsum")
MODEL_NAME = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
prefix = "summarize: "
max_input_length = 1024
max_target_length = 128
def tokenize_function(examples):
inputs = [prefix + doc for doc in examples["document"]]
model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True)
# Setup the tokenizer for targets
labels = tokenizer(text_target=examples["summary"], max_length=max_target_length, truncation=True)
model_inputs["labels"] = labels["input_ids"]
return model_inputs
tokenized_datasets = dataset.map(tokenize_function, batched=True)
tokenized_datasets = tokenized_datasets.remove_columns(['document', 'summary', 'id'])
tokenized_datasets.set_format("torch")
small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
train_dataloader = DataLoader(small_train_dataset, shuffle=True, batch_size=8)
eval_dataloader = DataLoader(small_eval_dataset, batch_size=8)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
optimizer = AdamW(model.parameters(), lr=5e-5)
num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
lr_scheduler = get_scheduler(
name="linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps
)
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
model.to(device)
progress_bar = tqdm(range(num_training_steps))
model.train()
for epoch in range(num_epochs):
for batch in train_dataloader:
batch = {k: v.to(device) for k, v in batch.items()}
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad()
progress_bar.update(1)
model.save_pretrained("outputs/trained")
</code></pre>
<p>I got this error:</p>
<pre><code>RuntimeError: stack expects each tensor to be equal size, but got [352] at entry 0 and [930] at entry 1
</code></pre>
<p>How can I fix that?</p>
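<p>The error comes from the default collate step: each tokenized example has a different length (<code>[352]</code> vs <code>[930]</code>, since truncation only caps the length), and ragged tensors cannot be stacked into a batch. The usual fix is to pad per batch — for example passing <code>transformers.DataCollatorForSeq2Seq(tokenizer, model=model)</code> as the <code>collate_fn</code> of each <code>DataLoader</code>, which also pads labels with <code>-100</code> so they are ignored by the loss. The padding logic itself is simple; a dependency-free sketch of what the collator does to each field:</p>

```python
def pad_batch(sequences, pad_id=0):
    """Right-pad variable-length token lists to a common width."""
    width = max(len(seq) for seq in sequences)
    return [seq + [pad_id] * (width - len(seq)) for seq in sequences]


batch = [[7, 8, 9], [1, 2]]
print(pad_batch(batch))  # [[7, 8, 9], [1, 2, 0]]
```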
| <python><pytorch><huggingface-transformers><summarization> | 2023-10-15 20:27:10 | 0 | 474 | celsowm |
77,298,313 | 219,153 | How to select the native GUI backend for standard Tkinter input dialogs on Python | <p>This Python 3.11 script on Ubuntu 22.04:</p>
<pre><code>from tkinter import filedialog
fileName = filedialog.askopenfilename(title='Open file', filetypes=(("BMP files", "*.bmp"), ("All files", "*")))
</code></pre>
<p>opens this dialog box:</p>
<p><a href="https://i.sstatic.net/Mfcbh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Mfcbh.png" alt="enter image description here" /></a></p>
<p>It looks different from the native file dialog, for example the one from GIMP:</p>
<p><a href="https://i.sstatic.net/OkRC1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OkRC1.png" alt="enter image description here" /></a></p>
<p>Is there a way to select the latter to be used with <code>askopenfilename()</code>?</p>
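<p>Tk draws its own dialogs and has no GTK backend, so one hedged workaround is to shell out to the desktop's native picker when it is available — for example <code>zenity</code>, which ships with most GNOME/Ubuntu desktops — and fall back to the Tk dialog otherwise. The filter syntax below is zenity's, not Tk's, and the availability check is an assumption about the target machine:</p>

```python
import shutil
import subprocess


def native_open_filename(title="Open file"):
    """Open a file picker, preferring the desktop's native GTK dialog."""
    if shutil.which("zenity"):
        result = subprocess.run(
            ["zenity", "--file-selection", f"--title={title}",
             "--file-filter=BMP files | *.bmp", "--file-filter=All files | *"],
            capture_output=True, text=True)
        # zenity returns non-zero when the user cancels
        return result.stdout.strip() if result.returncode == 0 else ""
    from tkinter import filedialog
    return filedialog.askopenfilename(
        title=title, filetypes=(("BMP files", "*.bmp"), ("All files", "*")))
```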
| <python><tkinter><dialog><backend> | 2023-10-15 20:06:14 | 0 | 8,585 | Paul Jurczak |
77,298,182 | 7,724,007 | Snapcraft python builds - how to get it to pack - path doesn’t exist error? | <p>So I am new to Snapcraft; I have been having this issue despite following examples and tutorials, so I figured I would ask here.</p>
<p><strong>Machine setup</strong>: So I am on a windows machine, running ubuntu in a VM to compile the snap. Sadly work has not given me a ubuntu dev machine. Our deployments are on linux machines (hence the need for a snap), but they gave me a windows machine.</p>
<p><strong>Snap Problem</strong>: Have a file (main.py) that is normally launched as “python3 -m main” and we want to make it into a snap.</p>
<p>So far I have this as my snap:</p>
<pre><code>name: asdf
version: '4.0.0'
summary: asdf
description: |
qweradsf - blah-etc...
grade: devel
confinement: devmode
base: core22
parts:
py-dep:
plugin: python
source: .
python-packages:
- pyserial
- ftputil
apps:
asdf:
command: usr/bin/python -m main
environment:
PYTHONPATH: $SNAP/usr/lib/python3/dist-packages:${SNAP}/usr/local/lib
</code></pre>
<p>For the command part I have tried all the combinations below</p>
<pre><code>command: usr/bin/python -m main
command: /usr/bin/python -m main
command: /bin/python -m main
(also tried the above with the /usr/lib/python path)
command: python -m main
command: python3 -m main
</code></pre>
<p>When Snapcraft goes to pack the snap, I get an error that the path of the first argument (usr, bin, python, or python3, depending on the variant) doesn’t exist. I also tried adding PYTHONPATH. Nothing seems to work.</p>
<p>How do I call this python code? Do I need a source? Examples I saw didn’t use a source for the plugin (just had the . ), so I have been leaving that part mostly alone.</p>
| <python><ubuntu-22.04><snapcraft> | 2023-10-15 19:23:52 | 1 | 334 | meh93 |
77,297,981 | 145,682 | matplotlib plotting incorrectly when csv files are concatted | <p>There are two csv files mentioned in the following gist:</p>
<p><a href="https://gist.github.com/deostroll/7d1e95391855245fed8a4c746be4a901" rel="nofollow noreferrer">https://gist.github.com/deostroll/7d1e95391855245fed8a4c746be4a901</a></p>
<p>They are simple historical NAV charts of a fund. Each has only two fields: <code>NAV</code>, the price, and <code>NAV DATE</code>, a date in <code>YYYY-MM-DD</code> format.</p>
<p>I can run the plot program (shared below) against each file separately and observe that they appear as expected.</p>
<p>Here is the code I use for plotting:</p>
<pre><code>import csv
import os
from datetime import datetime

import matplotlib.pyplot as plt
import matplotlib.dates as mdates
def csv_to_jsonarray(file):
with open(file) as f:
rdr = csv.reader(f, delimiter=',', lineterminator=os.linesep)
headers = False
res = []
for row in rdr:
if not headers:
hrow = row
headers = True
continue
d = dict(zip(hrow, row))
res.append(d)
return res
def run(file):
arr = csv_to_jsonarray(file)
# print(len(arr))
xvals = []
yvals = []
for item in arr:
xvals.append(datetime.strptime(item['NAV DATE'], '%Y-%m-%d'))
yvals.append(float(item['NAV']))
plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%Y-%m-%d'))
plt.gca().xaxis.set_major_locator(mdates.MonthLocator(interval=6))
# plt.scatter(xvals, yvals)
plt.plot(xvals, yvals)
plt.show()
</code></pre>
<p>The <code>run()</code> is the one of interest.</p>
<p>But when I try to plot the csv files (manually concatenated) it plots incorrectly, as shown below:</p>
<p><a href="https://i.sstatic.net/AoSf8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AoSf8.png" alt="enter image description here" /></a></p>
<p>It seems to form a loop, which is clearly wrong. Is there an issue with my code? The data itself is fine: I have verified it in a spreadsheet program and can chart it correctly there.</p>
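<p>A looping line like this usually means the concatenated rows are not in chronological order: <code>plt.plot</code> connects points in the order they are given, so any out-of-order date makes the line double back on itself. A hedged sketch of the fix — sort the (date, value) pairs before plotting:</p>

```python
from datetime import datetime

# Dates deliberately out of order, as after a naive concatenation.
xvals = [datetime(2023, 6, 1), datetime(2022, 1, 1), datetime(2023, 1, 1)]
yvals = [12.0, 10.0, 11.0]

pairs = sorted(zip(xvals, yvals))                  # sort by date
xvals, yvals = (list(t) for t in zip(*pairs))
print(yvals)  # [10.0, 11.0, 12.0]
```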
| <python><matplotlib> | 2023-10-15 18:27:36 | 0 | 11,985 | deostroll |
77,297,946 | 3,050,730 | Type Hinting Error with Union Types and Type Variables in Python 3.11 | <p>I've encountered a type hinting issue in Python 3.11 using Pylance in Visual Studio Code, and I'm looking for insights into why this error is occurring and how to resolve it. Here's my code:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeAlias, TypeVar
Data: TypeAlias = int | str
DataVar = TypeVar("DataVar", int, str)
class A:
def __init__(self):
pass
def do_something(self, X: Data) -> Data:
return self._foo(X) # <---- Pylance raises an error here!!
def _foo(self, X: DataVar) -> DataVar:
return X
</code></pre>
<p>However, Pylance raises the following error:</p>
<pre class="lang-bash prettyprint-override"><code>Argument of type "Data" cannot be assigned to parameter "X" of type "DataVar@_foo" in function "_foo"
Type "Data" is incompatible with constrained type variable "DataVar"
</code></pre>
<p>I'm struggling to understand why this error is occurring. As far as I can tell, <code>Data</code> is a flexible type that can be either an <code>int</code> or a <code>str</code>, and <code>_foo</code> should be able to accept it as an argument. If I had provided the types in the reverse order, i.e. <code>do_something</code> expects a <code>DataVar</code> and <code>_foo</code> gets <code>Data</code>, I would expect an error (which is indeed raised)</p>
<ol>
<li>Why is Pylance raising this error?</li>
<li>Is there a correct way to annotate the types to avoid this error?</li>
<li>Is this a limitation or false positive with the type checker in Python 3.11?</li>
</ol>
<p>Any insights or suggestions on how to address this issue would be greatly appreciated.</p>
| <python><python-typing><pyright> | 2023-10-15 18:19:25 | 1 | 523 | nicrie |
77,297,681 | 12,214,867 | trying to run resnet on 128 by 128 imagenet but got 'the input dimensions must be equal' error, how do i fix it? | <p>I'm trying to build an image classifier following a <a href="https://www.mindspore.cn/tutorials/application/en/r2.1/cv/resnet50.html" rel="nofollow noreferrer">tutorial</a></p>
<p>I managed to go through the tutorial and got good accuracy on the CIFAR-10 dataset.</p>
<p>I'm trying to run the classifier on the ImageNet dataset.</p>
<p>I made some modifications to the original tutorial code so the model can handle the ImageNet images, including resizing all images to the shape <code>(128, 128, 3)</code> via the line <code>vision.Resize((128, 128))</code>. If I change it back to <code>vision.Resize((32, 32))</code>, the error goes away, but so does the accuracy, so I have to stick with <code>(128, 128)</code>. Which brings the question: which parts of the rest of the code should I modify to correspond to <code>vision.Resize((128, 128))</code>?</p>
<p>All the code below and the traceback are also in this <a href="https://github.com/liyi93319/mindspore/blob/main/ms_resnet_imagenet.ipynb" rel="nofollow noreferrer">notebook</a>.</p>
<h1>Dataset</h1>
<pre><code>batch_size = 32 # Batch size
image_size = 128 # Image size of training data
workers = 2 # Number of parallel workers
num_classes = 1000 # Number of classes
def create_dataset(data_set, usage, resize, batch_size, workers):
trans = []
trans += [
vision.Resize((128, 128)),
vision.Rescale(1.0 / 255.0, 0.0),
vision.HWC2CHW()
]
target_trans = transforms.TypeCast(mstype.int32)
data_set = data_set.map(operations=trans,
input_columns='image',
num_parallel_workers=workers)
data_set = data_set.map(operations=target_trans,
input_columns='label',
num_parallel_workers=workers)
data_set = data_set.batch(batch_size)
return data_set
import mindspore
from mindspore.dataset import vision, transforms
import mindspore.dataset as ds
trainset = ds.ImageFolderDataset("./imagenet2012/train", decode=True)
testset = ds.ImageFolderDataset("./imagenet2012/val", decode=True)
dataset_train = create_dataset(trainset,
usage="train",
resize=image_size,
batch_size=batch_size,
workers=workers)
dataset_val = create_dataset(testset,
usage="test",
resize=image_size,
batch_size=batch_size,
workers=workers)
step_size_val = dataset_val.get_dataset_size()
step_size_train = dataset_train.get_dataset_size()
</code></pre>
<p>I'm aware I should probably use padding or similar preprocessing; the code above is not perfect, but it works. I tested the datasets by sampling some images, rendering them, and checking their shapes - all good.</p>
<p>The code above successfully yields two datasets, i.e. the training set and the test set.</p>
<h1>ResidualBlockBase</h1>
<p>The following code defines the ResidualBlockBase class to implement the building block structure. It is the same as in the tutorial, and I don't think it needs modification to apply to the ImageNet dataset.</p>
<pre><code>from typing import Type, Union, List, Optional
import mindspore.nn as nn
from mindspore.common.initializer import Normal
# Initialize the parameters of the convolutional layer and BatchNorm layer
weight_init = Normal(mean=0, sigma=0.02)
gamma_init = Normal(mean=1, sigma=0.02)
class ResidualBlockBase(nn.Cell):
expansion: int = 1 # The number of convolution kernels at the last layer is the same as that of convolution kernels at the first layer.
def __init__(self, in_channel: int, out_channel: int,
stride: int = 1, norm: Optional[nn.Cell] = None,
down_sample: Optional[nn.Cell] = None) -> None:
super(ResidualBlockBase, self).__init__()
if not norm:
self.norm = nn.BatchNorm2d(out_channel)
else:
self.norm = norm
self.conv1 = nn.Conv2d(in_channel, out_channel,
kernel_size=3, stride=stride,
weight_init=weight_init)
self.conv2 = nn.Conv2d(in_channel, out_channel,
kernel_size=3, weight_init=weight_init)
self.relu = nn.ReLU()
self.down_sample = down_sample
def construct(self, x):
"""ResidualBlockBase construct."""
identity = x # shortcut
out = self.conv1(x) # First layer of the main body: 3 x 3 convolutional layer
out = self.norm(out)
out = self.relu(out)
out = self.conv2(out) # Second layer of the main body: 3 x 3 convolutional layer
out = self.norm(out)
if self.down_sample is not None:
identity = self.down_sample(x)
out += identity # output the sum of the main body and the shortcuts
out = self.relu(out)
return out
</code></pre>
<h1>ResidualBlock</h1>
<p>The following code defines the ResidualBlock class to implement the bottleneck structure. It is likewise the same as in the tutorial, and I don't think it needs modification to apply to the ImageNet dataset either.</p>
<pre><code>class ResidualBlock(nn.Cell):
expansion = 4 # The number of convolution kernels at the last layer is four times that of convolution kernels at the first layer.
def __init__(self, in_channel: int, out_channel: int,
stride: int = 1, down_sample: Optional[nn.Cell] = None) -> None:
super(ResidualBlock, self).__init__()
self.conv1 = nn.Conv2d(in_channel, out_channel,
kernel_size=1, weight_init=weight_init)
self.norm1 = nn.BatchNorm2d(out_channel)
self.conv2 = nn.Conv2d(out_channel, out_channel,
kernel_size=3, stride=stride,
weight_init=weight_init)
self.norm2 = nn.BatchNorm2d(out_channel)
self.conv3 = nn.Conv2d(out_channel, out_channel * self.expansion,
kernel_size=1, weight_init=weight_init)
self.norm3 = nn.BatchNorm2d(out_channel * self.expansion)
self.relu = nn.ReLU()
self.down_sample = down_sample
def construct(self, x):
identity = x # shortcut
out = self.conv1(x) # First layer of the main body: 1 x 1 convolutional layer
out = self.norm1(out)
out = self.relu(out)
out = self.conv2(out) # Second layer of the main body: 3 x 3 convolutional layer
out = self.norm2(out)
out = self.relu(out)
out = self.conv3(out) # Third layer of the main body: 1 x 1 convolutional layer
out = self.norm3(out)
if self.down_sample is not None:
identity = self.down_sample(x)
out += identity # The output is the sum of the main body and the shortcut.
out = self.relu(out)
return out
</code></pre>
<h1>make_layer</h1>
<p>The following example defines make_layer to build residual blocks. The parameters are as follows:</p>
<p>last_out_channel: number of output channels of the previous residual network</p>
<p>block: residual network type. The value can be ResidualBlockBase or ResidualBlock.</p>
<p>channel: number of input channels of the residual network</p>
<p>block_nums: number of stacked residual network blocks</p>
<p>stride: stride of the convolution movement</p>
<pre><code>def make_layer(last_out_channel, block: Type[Union[ResidualBlockBase, ResidualBlock]],
channel: int, block_nums: int, stride: int = 1):
down_sample = None # shortcuts
if stride != 1 or last_out_channel != channel * block.expansion:
down_sample = nn.SequentialCell([
nn.Conv2d(last_out_channel, channel * block.expansion,
kernel_size=1, stride=stride, weight_init=weight_init),
nn.BatchNorm2d(channel * block.expansion, gamma_init=gamma_init)
])
layers = []
layers.append(block(last_out_channel, channel, stride=stride, down_sample=down_sample))
in_channel = channel * block.expansion
# Stack residual networks.
for _ in range(1, block_nums):
layers.append(block(in_channel, channel))
return nn.SequentialCell(layers)
</code></pre>
<h1>class ResNet</h1>
<p>The following sample code is used to build a ResNet-50 model. You can call the resnet50 function to build a ResNet-50 model. The parameters of the resnet50 function are as follows:</p>
<p>num_classes: number of classes. The default value is 1000.</p>
<p>pretrained: download the corresponding training model and load the parameters in the pre-trained model to the network.</p>
<pre><code>from mindspore import load_checkpoint, load_param_into_net
class ResNet(nn.Cell):
def __init__(self, block: Type[Union[ResidualBlockBase, ResidualBlock]],
layer_nums: List[int], num_classes: int, input_channel: int) -> None:
super(ResNet, self).__init__()
self.relu = nn.ReLU()
# At the first convolutional layer, the number of the input channels is 3 (color image) and that of the output channels is 64.
self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, weight_init=weight_init)
self.norm = nn.BatchNorm2d(64)
# Maximum pooling layer, reducing the image size
self.max_pool = nn.MaxPool2d(kernel_size=3, stride=2, pad_mode='same')
# Define each residual network structure block
self.layer1 = make_layer(64, block, 64, layer_nums[0])
self.layer2 = make_layer(64 * block.expansion, block, 128, layer_nums[1], stride=2)
self.layer3 = make_layer(128 * block.expansion, block, 256, layer_nums[2], stride=2)
self.layer4 = make_layer(256 * block.expansion, block, 512, layer_nums[3], stride=2)
# average pooling layer
self.avg_pool = nn.AvgPool2d()
        # flatten layer
self.flatten = nn.Flatten()
# fully-connected layer
self.fc = nn.Dense(in_channels=input_channel, out_channels=num_classes)
def construct(self, x):
x = self.conv1(x)
x = self.norm(x)
x = self.relu(x)
x = self.max_pool(x)
x = self.layer1(x)
x = self.layer2(x)
x = self.layer3(x)
x = self.layer4(x)
x = self.avg_pool(x)
x = self.flatten(x)
x = self.fc(x)
return x
</code></pre>
<pre><code>def _resnet(model_url: str, block: Type[Union[ResidualBlockBase, ResidualBlock]],
            layers: List[int], num_classes: int, pretrained: bool, pretrained_ckpt: str,
            input_channel: int):
    model = ResNet(block, layers, num_classes, input_channel)
    if pretrained:
# load pre-trained models
download(url=model_url, path=pretrained_ckpt, replace=True)
param_dict = load_checkpoint(pretrained_ckpt)
load_param_into_net(model, param_dict)
return model
def resnet50(num_classes: int = 1000, pretrained: bool = False):
"ResNet50 model"
resnet50_url = "https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/models/application/resnet50_224_new.ckpt"
resnet50_ckpt = "./LoadPretrainedModel/resnet50_224_new.ckpt"
return _resnet(resnet50_url, ResidualBlock, [3, 4, 6, 3], num_classes,
pretrained, resnet50_ckpt, 2048)
</code></pre>
<p>I'm not sure if the last line in the code above needs modification for my new data, since it seems related to the error I encounter.</p>
<h1>Train the Model</h1>
<p>I set <code>out_channels</code> to 1000 to match the ImageNet dataset's 1000 classes.</p>
<pre><code># Define the ResNet50 network.
network = resnet50(pretrained=True)
# Size of the input layer of the fully-connected layer
in_channel = network.fc.in_channels
fc = nn.Dense(in_channels=in_channel, out_channels=1000)
# Reset the fully-connected layer.
network.fc = fc
</code></pre>
<h1>hyperparameters</h1>
<p>I don't think the following hyperparameters need modification, so I keep them unchanged.</p>
<pre><code># Set the learning rate
num_epochs = 5
lr = nn.cosine_decay_lr(min_lr=0.00001, max_lr=0.001, total_step=step_size_train * num_epochs,
step_per_epoch=step_size_train, decay_epoch=num_epochs)
# Define optimizer and loss function
opt = nn.Momentum(params=network.trainable_params(), learning_rate=lr, momentum=0.9)
loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
def forward_fn(inputs, targets):
logits = network(inputs)
loss = loss_fn(logits, targets)
return loss
grad_fn = ms.value_and_grad(forward_fn, None, opt.parameters)
def train_step(inputs, targets):
loss, grads = grad_fn(inputs, targets)
opt(grads)
return loss
</code></pre>
<h1>train and evaluate functions</h1>
<p>I don't think the following code needs modification, so I keep it unchanged.</p>
<pre><code>import os
# Creating Iterators
data_loader_train = dataset_train.create_tuple_iterator(num_epochs=num_epochs)
data_loader_val = dataset_val.create_tuple_iterator(num_epochs=num_epochs)
# Optimal model storage path
best_acc = 0
best_ckpt_dir = "./BestCheckpoint"
best_ckpt_path = "./BestCheckpoint/resnet50-best.ckpt"
if not os.path.exists(best_ckpt_dir):
os.mkdir(best_ckpt_dir)
import mindspore.ops as ops
def train(data_loader, epoch):
"""Model taining"""
losses = []
network.set_train(True)
for i, (images, labels) in enumerate(data_loader):
loss = train_step(images, labels)
if i % 100 == 0 or i == step_size_train - 1:
print('Epoch: [%3d/%3d], Steps: [%3d/%3d], Train Loss: [%5.3f]' %
(epoch + 1, num_epochs, i + 1, step_size_train, loss))
losses.append(loss)
return sum(losses) / len(losses)
def evaluate(data_loader):
"""Model Evaluation"""
network.set_train(False)
correct_num = 0.0 # Number of correct predictions
total_num = 0.0 # Total number of predictions
for images, labels in data_loader:
logits = network(images)
pred = logits.argmax(axis=1) # Prediction results
correct = ops.equal(pred, labels).reshape((-1, ))
correct_num += correct.sum().asnumpy()
total_num += correct.shape[0]
acc = correct_num / total_num # Accuracy
return acc
</code></pre>
<h1>error in question</h1>
<p>The following code throws a <code>ValueError</code>:</p>
<blockquote>
<p>For 'MatMul' the input dimensions must be equal, but got 'x1_col': 32768 and 'x2_row': 2048.</p>
</blockquote>
<pre><code>print("Start Training Loop ...")
for epoch in range(num_epochs):
curr_loss = train(data_loader_train, epoch)
curr_acc = evaluate(data_loader_val)
print("-" * 50)
print("Epoch: [%3d/%3d], Average Train Loss: [%5.3f], Accuracy: [%5.3f]" % (
epoch+1, num_epochs, curr_loss, curr_acc
))
print("-" * 50)
# Save the model that has achieved the highest prediction accuracy
if curr_acc > best_acc:
best_acc = curr_acc
ms.save_checkpoint(network, best_ckpt_path)
print("=" * 80)
print(f"End of validation the best Accuracy is: {best_acc: 5.3f}, "
f"save the best ckpt file in {best_ckpt_path}", flush=True)
</code></pre>
<p>I don't even know where to start examining my code, since I didn't change it much from the original tutorial.</p>
<p>Any suggestion, hint, or solution would be appreciated.</p>
<p>Here is the traceback:</p>
<h1>Traceback</h1>
<hr />
<pre><code>ValueError Traceback (most recent call last)
Cell In[18], line 5
2 print("Start Training Loop ...")
4 for epoch in range(num_epochs):
----> 5 curr_loss = train(data_loader_train, epoch)
6 curr_acc = evaluate(data_loader_val)
8 print("-" * 50)
Cell In[17], line 10, in train(data_loader, epoch)
7 network.set_train(True)
9 for i, (images, labels) in enumerate(data_loader):
---> 10 loss = train_step(images, labels)
11 if i % 100 == 0 or i == step_size_train - 1:
12 print('Epoch: [%3d/%3d], Steps: [%3d/%3d], Train Loss: [%5.3f]' %
13 (epoch + 1, num_epochs, i + 1, step_size_train, loss))
Cell In[15], line 20, in train_step(inputs, targets)
19 def train_step(inputs, targets):
---> 20 loss, grads = grad_fn(inputs, targets)
21 opt(grads)
22 return loss
File ~/miniconda3/lib/python3.9/site-packages/mindspore/ops/composite/base.py:620, in _Grad.__call__.<locals>.after_grad(*args, **kwargs)
619 def after_grad(*args, **kwargs):
--> 620 return grad_(fn_, weights)(*args, **kwargs)
File ~/miniconda3/lib/python3.9/site-packages/mindspore/common/api.py:106, in _wrap_func.<locals>.wrapper(*arg, **kwargs)
104 @wraps(fn)
105 def wrapper(*arg, **kwargs):
--> 106 results = fn(*arg, **kwargs)
107 return _convert_python_data(results)
File ~/miniconda3/lib/python3.9/site-packages/mindspore/ops/composite/base.py:595, in _Grad.__call__.<locals>.after_grad(*args, **kwargs)
593 @_wrap_func
594 def after_grad(*args, **kwargs):
--> 595 res = self._pynative_forward_run(fn, grad_, weights, args, kwargs)
596 _pynative_executor.grad(fn, grad_, weights, grad_position, *args, **kwargs)
597 out = _pynative_executor()
File ~/miniconda3/lib/python3.9/site-packages/mindspore/ops/composite/base.py:645, in _Grad._pynative_forward_run(self, fn, grad, weights, args, kwargs)
643 _pynative_executor.set_grad_flag(True)
644 _pynative_executor.new_graph(fn, *args, **new_kwargs)
--> 645 outputs = fn(*args, **new_kwargs)
646 _pynative_executor.end_graph(fn, outputs, *args, **new_kwargs)
647 return outputs
Cell In[15], line 11, in forward_fn(inputs, targets)
10 def forward_fn(inputs, targets):
---> 11 logits = network(inputs)
12 loss = loss_fn(logits, targets)
13 return loss
File ~/miniconda3/lib/python3.9/site-packages/mindspore/nn/cell.py:662, in Cell.__call__(self, *args, **kwargs)
660 except Exception as err:
661 _pynative_executor.clear_res()
--> 662 raise err
664 if isinstance(output, Parameter):
665 output = output.data
File ~/miniconda3/lib/python3.9/site-packages/mindspore/nn/cell.py:659, in Cell.__call__(self, *args, **kwargs)
657 _pynative_executor.new_graph(self, *args, **kwargs)
658 output = self._run_construct(args, kwargs)
--> 659 _pynative_executor.end_graph(self, output, *args, **kwargs)
660 except Exception as err:
661 _pynative_executor.clear_res()
File ~/miniconda3/lib/python3.9/site-packages/mindspore/common/api.py:1304, in _PyNativeExecutor.end_graph(self, obj, output, *args, **kwargs)
1291 def end_graph(self, obj, output, *args, **kwargs):
1292 """
1293 Clean resources after building forward and backward graph.
1294
(...)
1302 None.
1303 """
-> 1304 self._executor.end_graph(obj, output, *args, *(kwargs.values()))
ValueError: For 'MatMul' the input dimensions must be equal, but got 'x1_col': 32768 and 'x2_row': 2048.
----------------------------------------------------
- C++ Call Stack: (For framework developers)
----------------------------------------------------
mindspore/core/ops/mat_mul.cc:101 InferShape
</code></pre>
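<p>Regarding the 32768-vs-2048 mismatch in the final traceback line: the tutorial's <code>nn.AvgPool2d()</code> has kernel size 1, so it leaves the spatial dimensions untouched, and <code>Flatten</code> then sees <code>2048 * (H/32) * (W/32)</code> features (the ResNet-50 backbone downsamples by a total stride of 32). Since 32768 / 2048 = 16 = 4 * 4, this suggests the images reach the head as 4x4 feature maps (e.g. 128x128 inputs), while the pretrained <code>fc</code> was built for a 1x1 map (32x32 inputs). A plain-Python shape-arithmetic sketch, no MindSpore required:</p>

```python
def flatten_features(height, width, channels=2048, total_stride=32):
    """Features seen by Flatten after a stride-32 backbone followed by a
    kernel-size-1 average pool (which leaves H and W unchanged)."""
    return channels * (height // total_stride) * (width // total_stride)

# The pretrained Dense layer was built for 2048 inputs, i.e. a 1x1 map:
dense_in = flatten_features(32, 32)
# 128x128 inputs instead produce the 32768 reported by MatMul:
mismatch = flatten_features(128, 128)
```

<p>If that matches your data, options include resizing inputs back to the resolution the head expects, pooling over the whole feature map before <code>Flatten</code>, or rebuilding <code>fc</code> with the larger <code>in_channels</code>; this is only a guess from the shapes alone.</p>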
| <python><deep-learning> | 2023-10-15 17:03:27 | 1 | 1,097 | JJJohn |
77,297,662 | 19,130,803 | Display dash html.Div inside dash ag-grid | <p>I am working on a <code>dash</code> app. I am using the <code>dash Ag-grid</code> component to load custom data. For this I am using the <code>cellRenderer</code> property, which is defined in <code>assets/dashAgGridComponentFunctions.js</code></p>
<pre><code>import dash
from dash import html
# sample content data
data = pd.DataFrame(
{
"First Name": ["Arthur", "Ford", "Zaphod", "Trillian"],
"Last Name": ["Dent", "Prefect", "Beeblebrox", "Astra"],
}
)
# sample content title and table
content_title = "some_title"
content_table = dbc.Table.from_dataframe(data, striped=True, bordered=True, hover=True)
# A single content div
content: html.Div = html.Div(
children=[
dbc.Row(children=content_title),
dbc.Row(children=content_table),
],
)
# contents is a list of html.Div
contents: list[html.Div] = [content_1, content_2, ...]
# sample dataframe
df: pd.DataFrame = pd.DataFrame({"contents": contents})
# sample ag-grid
grid: dag.AgGrid = dag.AgGrid(
id="contents_grid",
columnDefs=[
{
"field": "contents",
"headerName": "Contents",
"cellRenderer": "SimpleDivCellRenderer",
},
],
columnSize="responsiveSizeToFit",
style={"height": 500, "width": "100%"},
rowData=df.to_dict("records"),
)
</code></pre>
<p>I have tried different ways and each time got a different error while creating this custom React component for the dash <code>html.Div</code>. (I have very limited knowledge of JavaScript and React.)</p>
<pre><code># dashAgGridComponentFunctions.js
# way-1
dagcomponentfuncs.SimpleDivCellRenderer = function (props) {
// const {setData} = props;
return React.createElement('div', {
children: props.value,
// setProps,
style: {height: '100%', width: '100%'},
// className: props.className,
});
};
# way-2
dagcomponentfuncs.SimpleDivCellRenderer = function (props) {
// const {setData} = props;
return React.createElement(window.dash.html.Div, {
children: props.value,
// setProps,
style: {height: '100%', width: '100%'},
// className: props.className,
});
};
# way-3
dagcomponentfuncs.SimpleDivCellRenderer = function (props) {
// const {setData} = props;
return React.createElement(window.dash.html.Div, {
// setProps,
style: {height: '100%', width: '100%'},
// className: props.className,
}, props.value);
};
</code></pre>
<p>Errors like</p>
<pre><code>--> Objects are not valid as a React child (found: [missing argument]). If you meant to render a collection of children, use an array instead.
--> TypeError: window.html is undefined
--> TypeError: window.dash is undefined
</code></pre>
<p>What am I missing?</p>
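<p>One thing worth checking, independent of the JS side: cell values reach the browser as plain JSON data, so a Dash component in <code>rowData</code> either fails to serialize or arrives as an inert dict rather than a React element, which is consistent with the "Objects are not valid as a React child" error. A minimal check of what plain JSON can carry (the names here are illustrative):</p>

```python
import json

def to_row_data(records):
    # Round-trip through JSON, roughly what happens when rowData is
    # shipped to the ag-grid component in the browser.
    return json.loads(json.dumps(records))

# Plain strings survive and can be turned into markup by a JS cellRenderer:
rows = to_row_data([{"contents": "<b>some_title</b>"}])

# Arbitrary Python objects (stand-ins for Dash components) do not:
try:
    to_row_data([{"contents": object()}])
    serializable = True
except TypeError:
    serializable = False
```

<p>A common workaround is therefore to store an HTML or markdown string per cell and let the custom <code>cellRenderer</code> build the element in the browser, rather than putting <code>html.Div</code> objects into the dataframe.</p>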
| <python><reactjs><plotly-dash> | 2023-10-15 16:56:40 | 0 | 962 | winter |
77,297,624 | 5,284,829 | writing gzip csv file introduces random chars in the first row | <p>I am writing some csv data to a gzipped csv file in Python as follows:</p>
<pre><code>import csv
import gzip
import io
csv_rows=[["cara","vera","tara"],["rar","mar","bar"],["jump","lump","dump"]]
mem_file = io.BytesIO()
with gzip.GzipFile(fileobj=mem_file,mode="wb") as gz:
with io.TextIOWrapper(gz, encoding='utf-8') as wrapper:
writer = csv.writer(wrapper)
writer.writerows(csv_rows)
gz.write(mem_file.getvalue())
gz.close()
mem_file.seek(0)
</code></pre>
<p>When I gunzip the file, the first column in the first row is a strange set of characters, which causes
the first row to actually have 4 columns.
The 2nd and 3rd rows are OK.</p>
<p>I have tried different data and always see this behavior in the first column of the first row.
What is wrong with the code?</p>
<p>For reference here it is what I see in the gunzipped file</p>
<pre><code>?‹? #,eÿcara,vera,tara
rar,mar,bar
jump,lump,dump
</code></pre>
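<p>For comparison, here is a sketch of the round trip in which only the csv writer feeds the gzip stream. The extra <code>gz.write(mem_file.getvalue())</code> in the code above writes the buffer's compressed bytes (starting with the gzip magic <code>1f 8b</code>, which renders as <code>?‹</code>) back into the stream as data, which would explain the junk at the start of the first row:</p>

```python
import csv
import gzip
import io

csv_rows = [["cara", "vera", "tara"], ["rar", "mar", "bar"], ["jump", "lump", "dump"]]

mem_file = io.BytesIO()
with gzip.GzipFile(fileobj=mem_file, mode="wb") as gz:
    with io.TextIOWrapper(gz, encoding="utf-8", newline="") as wrapper:
        csv.writer(wrapper).writerows(csv_rows)
    # no gz.write(mem_file.getvalue()) here: the csv writer is the
    # only producer of uncompressed data

mem_file.seek(0)
with gzip.GzipFile(fileobj=mem_file, mode="rb") as gz:
    decoded = gz.read().decode("utf-8")
```

<p>Closing the <code>TextIOWrapper</code> flushes and finalizes the gzip stream, so no explicit <code>gz.close()</code> is needed either.</p>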
| <python><csv><gzip> | 2023-10-15 16:46:11 | 2 | 658 | JennyToy |
77,297,445 | 11,108,930 | Pandas: Non overlapping datetimes algorithm | <p>I have a pandas dataframe (see input below) with multiple columns; the important columns are start_datetime and end_datetime. The algorithm I want to apply is the following: I compare every line to its next line. If the end_datetime (current line) exceeds the start_datetime (next line), I move that next row to the end, and then compare the current line to the new next line (which previously was two rows ahead). I repeat this process until I have blocks of rows whose times are non-overlapping. Note that because problematic rows are moved to the end, a jump in the datetimes appears, which starts a new non-overlapping block. I have an example data set with input, step 1, step 2, and output.</p>
<p>I tried to solve it with this for loop but I am not able to adapt it in such way that it completes the task. The indices are confusing me. Maybe there is a built in pandas solution to this?</p>
<pre><code>for i in range(0, df_temp.shape[0] - 1):
if df_temp.iloc[i+1].start_datetime < df_temp.iloc[i].end_datetime:
df_temp = df_temp.reindex([index for index in df_temp.index if index != i + 1] + [i + 1], axis=0)
df_temp.to_csv("output.csv", header=True, index=False)
</code></pre>
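<p>Disentangled from the index bookkeeping, the repeated move-to-end passes are equivalent to a first-fit greedy: walk the rows in order (they are already sorted by start) and append each interval to the first already-built sequence whose last end it does not overlap, starting a new sequence otherwise. A plain-Python sketch on (start, end) tuples:</p>

```python
def partition_non_overlapping(intervals):
    """Split start-sorted intervals into sequences with no internal overlap.

    Each interval goes to the first existing sequence whose last end is
    <= its start; otherwise it starts a new sequence. Concatenating the
    sequences in order gives the reordered output.
    """
    sequences = []
    for start, end in intervals:
        for seq in sequences:
            if start >= seq[-1][1]:
                seq.append((start, end))
                break
        else:
            sequences.append([(start, end)])
    return sequences
```

<p>On the dataframe this could be driven by iterating <code>df[['start_datetime', 'end_datetime']].itertuples()</code>, collecting row indices per sequence, and concatenating with <code>df.loc</code>. Whether first-fit reproduces the expected ordering in every edge case is worth verifying against the sample data above.</p>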
<h1>input</h1>
<pre><code>start_datetime end_datetime
2023-01-01 00:55:13.000 2023-01-01 01:17:56.000
2023-01-01 01:29:57.000 2023-01-01 01:30:24.000
2023-01-01 01:30:55.000 2023-01-01 01:31:33.000
2023-01-01 01:32:05.000 2023-01-01 01:32:44.000
2023-01-01 01:33:59.000 2023-01-01 02:13:24.000
2023-01-01 04:59:59.000 2023-01-01 05:15:48.000
2023-01-01 05:54:04.000 2023-01-01 06:15:53.000
2023-01-01 06:05:56.000 2023-01-01 06:06:02.000 <-- will be shifted to end
2023-01-01 06:06:58.000 2023-01-01 06:42:34.000
2023-01-01 06:08:39.000 2023-01-01 06:31:53.000
2023-01-01 06:16:16.000 2023-01-01 06:39:13.000
2023-01-01 06:30:36.000 2023-01-01 06:38:15.000
2023-01-01 06:32:48.000 2023-01-01 06:49:34.000
2023-01-01 06:35:17.000 2023-01-01 07:02:44.000
2023-01-01 06:59:56.000 2023-01-01 07:47:47.000
2023-01-01 07:17:07.000 2023-01-01 07:20:11.000
2023-01-01 07:22:04.000 2023-01-01 07:57:34.000
2023-01-01 07:24:07.000 2023-01-01 07:54:18.000
2023-01-01 07:36:41.000 2023-01-01 08:07:18.000
2023-01-01 09:24:12.000 2023-01-01 09:46:57.000
2023-01-01 09:26:30.000 2023-01-01 09:58:42.000
2023-01-01 09:31:27.000 2023-01-01 10:00:19.000
</code></pre>
<h1>step 1</h1>
<pre><code>start_datetime end_datetime
2023-01-01 00:55:13.000 2023-01-01 01:17:56.000
2023-01-01 01:29:57.000 2023-01-01 01:30:24.000
2023-01-01 01:30:55.000 2023-01-01 01:31:33.000
2023-01-01 01:32:05.000 2023-01-01 01:32:44.000
2023-01-01 01:33:59.000 2023-01-01 02:13:24.000
2023-01-01 04:59:59.000 2023-01-01 05:15:48.000
2023-01-01 05:54:04.000 2023-01-01 06:15:53.000
2023-01-01 06:06:58.000 2023-01-01 06:42:34.000 <-- will be shifted to end
2023-01-01 06:08:39.000 2023-01-01 06:31:53.000
2023-01-01 06:16:16.000 2023-01-01 06:39:13.000
2023-01-01 06:30:36.000 2023-01-01 06:38:15.000
2023-01-01 06:32:48.000 2023-01-01 06:49:34.000
2023-01-01 06:35:17.000 2023-01-01 07:02:44.000
2023-01-01 06:59:56.000 2023-01-01 07:47:47.000
2023-01-01 07:17:07.000 2023-01-01 07:20:11.000
2023-01-01 07:22:04.000 2023-01-01 07:57:34.000
2023-01-01 07:24:07.000 2023-01-01 07:54:18.000
2023-01-01 07:36:41.000 2023-01-01 08:07:18.000
2023-01-01 09:24:12.000 2023-01-01 09:46:57.000
2023-01-01 09:26:30.000 2023-01-01 09:58:42.000
2023-01-01 09:31:27.000 2023-01-01 10:00:19.000
2023-01-01 06:05:56.000 2023-01-01 06:06:02.000
</code></pre>
<h1>step 2</h1>
<pre><code>start_datetime end_datetime
2023-01-01 00:55:13.000 2023-01-01 01:17:56.000
2023-01-01 01:29:57.000 2023-01-01 01:30:24.000
2023-01-01 01:30:55.000 2023-01-01 01:31:33.000
2023-01-01 01:32:05.000 2023-01-01 01:32:44.000
2023-01-01 01:33:59.000 2023-01-01 02:13:24.000
2023-01-01 04:59:59.000 2023-01-01 05:15:48.000
2023-01-01 05:54:04.000 2023-01-01 06:15:53.000
2023-01-01 06:08:39.000 2023-01-01 06:31:53.000 <-- will be shifted to end
2023-01-01 06:16:16.000 2023-01-01 06:39:13.000
2023-01-01 06:30:36.000 2023-01-01 06:38:15.000
2023-01-01 06:32:48.000 2023-01-01 06:49:34.000
2023-01-01 06:35:17.000 2023-01-01 07:02:44.000
2023-01-01 06:59:56.000 2023-01-01 07:47:47.000
2023-01-01 07:17:07.000 2023-01-01 07:20:11.000
2023-01-01 07:22:04.000 2023-01-01 07:57:34.000
2023-01-01 07:24:07.000 2023-01-01 07:54:18.000
2023-01-01 07:36:41.000 2023-01-01 08:07:18.000
2023-01-01 09:24:12.000 2023-01-01 09:46:57.000
2023-01-01 09:26:30.000 2023-01-01 09:58:42.000
2023-01-01 09:31:27.000 2023-01-01 10:00:19.000
2023-01-01 06:05:56.000 2023-01-01 06:06:02.000
2023-01-01 06:06:58.000 2023-01-01 06:42:34.000
</code></pre>
<h1>output</h1>
<pre><code>start_datetime end_datetime
2023-01-01 00:55:13.000 2023-01-01 01:17:56.000 <-- Start of first sequence
2023-01-01 01:29:57.000 2023-01-01 01:30:24.000
2023-01-01 01:30:55.000 2023-01-01 01:31:33.000
2023-01-01 01:32:05.000 2023-01-01 01:32:44.000
2023-01-01 01:33:59.000 2023-01-01 02:13:24.000
2023-01-01 04:59:59.000 2023-01-01 05:15:48.000
2023-01-01 05:54:04.000 2023-01-01 06:15:53.000
2023-01-01 06:16:16.000 2023-01-01 06:39:13.000
2023-01-01 06:59:56.000 2023-01-01 07:47:47.000
2023-01-01 09:24:12.000 2023-01-01 09:46:57.000 <-- End of first sequence without overlap
2023-01-01 06:05:56.000 2023-01-01 06:06:02.000 <-- Start of second sequence
2023-01-01 06:08:39.000 2023-01-01 06:31:53.000
2023-01-01 06:32:48.000 2023-01-01 06:49:34.000
2023-01-01 07:17:07.000 2023-01-01 07:20:11.000
2023-01-01 07:22:04.000 2023-01-01 07:57:34.000
2023-01-01 09:26:30.000 2023-01-01 09:58:42.000 <-- End of second sequence
2023-01-01 06:06:58.000 2023-01-01 06:42:34.000 <-- Start of third sequence
2023-01-01 07:24:07.000 2023-01-01 07:54:18.000
2023-01-01 09:31:27.000 2023-01-01 10:00:19.000 <-- End of third sequence
2023-01-01 06:30:36.000 2023-01-01 06:38:15.000 <-- Start of fourth sequence
2023-01-01 06:35:17.000 2023-01-01 07:02:44.000
2023-01-01 07:36:41.000 2023-01-01 08:07:18.000 <-- End of fourth sequence
</code></pre>
| <python><pandas><algorithm> | 2023-10-15 16:02:25 | 2 | 977 | MachineLearner |
77,297,356 | 14,468,588 | how to provide constraint for convex problem in python | <p>I want to solve a convex problem in python. But when I want to define constraints for my problem, it produces an error. here is my code:</p>
<pre><code>delta = 10;
A = pandas.read_excel(r"C:\Users\mohammad\Desktop\feko1\feko\A.xlsx")
Y = pandas.read_excel(r"C:\Users\mohammad\Desktop\feko1\feko\Y.xlsx")
A = numpy.array(A)
Y = numpy.array(Y)
s_L1 = cvxpy.Variable(6561)
constraints = [cvxpy.norm(A*s_L1 - Y,2) <= delta]
</code></pre>
<p>A and Y are 2322-by-6561 and 2322-by-1 matrices.</p>
<p>The following error is shown after running the above code (I have only included the relevant part of the code; if you need to see the earlier lines, please let me know):</p>
<pre><code> ---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_7148\2562544661.py in <module>
----> 1 constraints = [cp.norm(A*s_L1 - Y,2) <= delta]
2
~\AppData\Roaming\Python\Python39\site-packages\cvxpy\expressions\expression.py in cast_op(self, other)
48 """
49 other = self.cast_to_const(other)
---> 50 return binary_op(self, other)
51 return cast_op
52
~\AppData\Roaming\Python\Python39\site-packages\cvxpy\expressions\expression.py in __sub__(self, other)
582 """Expression : The difference of two expressions.
583 """
--> 584 return self + -other
585
586 @_cast_other
~\AppData\Roaming\Python\Python39\site-packages\cvxpy\expressions\expression.py in cast_op(self, other)
48 """
49 other = self.cast_to_const(other)
---> 50 return binary_op(self, other)
51 return cast_op
52
~\AppData\Roaming\Python\Python39\site-packages\cvxpy\expressions\expression.py in __add__(self, other)
568 return self
569 self, other = self.broadcast(self, other)
--> 570 return cvxtypes.add_expr()([self, other])
571
572 @_cast_other
~\AppData\Roaming\Python\Python39\site-packages\cvxpy\atoms\affine\add_expr.py in __init__(self, arg_groups)
32 # For efficiency group args as sums.
33 self._arg_groups = arg_groups
---> 34 super(AddExpression, self).__init__(*arg_groups)
35 self.args = []
36 for group in arg_groups:
~\AppData\Roaming\Python\Python39\site-packages\cvxpy\atoms\atom.py in __init__(self, *args)
49 self.args = [Atom.cast_to_const(arg) for arg in args]
50 self.validate_arguments()
---> 51 self._shape = self.shape_from_args()
52 if len(self._shape) > 2:
53 raise ValueError("Atoms must be at most 2D.")
~\AppData\Roaming\Python\Python39\site-packages\cvxpy\atoms\affine\add_expr.py in shape_from_args(self)
40 """Returns the (row, col) shape of the expression.
41 """
---> 42 return u.shape.sum_shapes([arg.shape for arg in self.args])
43
44 def expand_args(self, expr):
~\AppData\Roaming\Python\Python39\site-packages\cvxpy\utilities\shape.py in sum_shapes(shapes)
48 # Only allow broadcasting for 0D arrays or summation of scalars.
49 if shape != t and len(squeezed(shape)) != 0 and len(squeezed(t)) != 0:
---> 50 raise ValueError(
51 "Cannot broadcast dimensions " +
52 len(shapes)*" %s" % tuple(shapes))
ValueError: Cannot broadcast dimensions (2322,) (2322, 1)
</code></pre>
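<p>The shapes in the last traceback line are the giveaway: <code>A*s_L1</code> is one-dimensional with shape <code>(2322,)</code>, while <code>Y</code> read from Excel is a two-dimensional <code>(2322, 1)</code> array, and cvxpy's addition is stricter than NumPy broadcasting. A plain-Python stand-in for the <code>sum_shapes</code> rule that appears in the traceback (shapes as tuples):</p>

```python
def cvxpy_addable(shape_a, shape_b):
    # Mirrors the check in cvxpy's sum_shapes: shapes must be identical,
    # unless one operand is effectively a scalar (all dimensions == 1).
    def squeezed(shape):
        return tuple(d for d in shape if d != 1)
    return shape_a == shape_b or len(squeezed(shape_a)) == 0 or len(squeezed(shape_b)) == 0

# The combination from the question is rejected:
ok = cvxpy_addable((2322,), (2322, 1))
```

<p>So flattening the target first, e.g. <code>Y = Y.ravel()</code> (or reshaping <code>s_L1</code> into a column), should let the constraint build; with recent cvxpy versions also prefer <code>A @ s_L1</code> over <code>A * s_L1</code> for matrix-vector multiplication.</p>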
<p>Can anyone point out the problem I'm facing?</p>
| <python><dimensions><convex-optimization><convex> | 2023-10-15 15:39:02 | 1 | 352 | mohammad rezza |
77,297,284 | 10,786,288 | Plotly Express: How can I adjust the size of marginal distribution plots? | <p>I have several datasets I am plotting with plotly express's scatterplot. I am using the option of the marginal distribution plots to show a histogram of the datasets but the problem is that the histograms on the margins tend to be way too big, eclipsing the actual data. An example:<a href="https://i.sstatic.net/vU9eW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vU9eW.png" alt="enter image description here" /></a></p>
<p>In these plots it's important to see the individual points so I would like to scale the histograms down, but I don't find any documentation on how to adjust the size of the marginal distribution plots separately.</p>
<p>How could I do this?</p>
| <python><plotly> | 2023-10-15 15:22:55 | 1 | 471 | lpnorm |
77,297,217 | 4,126,843 | "Package does not exist" error because Conda cannot find recent package versions | <p>I have an existing Conda 23.1.0 environment managed using Mamba 1.5.1, which was created with the <code>channels: conda-forge</code> option.
I want to install the latest version of the <code>holidays</code> package, <a href="https://anaconda.org/conda-forge/holidays" rel="nofollow noreferrer">which is 0.34</a>.</p>
<p>However, <code>mamba search holidays</code> gives me</p>
<pre><code>Loading channels: done
# Name Version Build Channel
holidays 0.9.9 py_0 conda-forge
holidays 0.9.10 py_0 conda-forge
...
holidays 0.15 pyhd8ed1ab_0 conda-forge
holidays 0.16 pyhd8ed1ab_0 conda-forge
holidays 0.17 pyhd8ed1ab_0 conda-forge
</code></pre>
<p>If I try <code>mamba install holidays=0.34</code> I get</p>
<pre><code>Could not solve for environment specs
The following package could not be installed
└─ holidays 0.34** does not exist (perhaps a typo or a missing channel).
</code></pre>
<p>How do I update the list of available packages so I can install the latest version?</p>
<p>[Edit]
The first time I tried this, an error popped up about zstandard missing. When I installed zstandard, the error disappeared, but it didn't solve the problem, hence this question:</p>
<pre><code>/opt/mambaforge/lib/python3.10/site-packages/conda_package_streaming/package_streaming.py:19: UserWarning: zstandard could not be imported. Running without .conda support.
warnings.warn("zstandard could not be imported. Running without .conda support.")
/opt/mambaforge/lib/python3.10/site-packages/conda_package_handling/api.py:29: UserWarning: Install zstandard Python bindings for .conda support
_warnings.warn("Install zstandard Python bindings for .conda support")
</code></pre>
<p>[Edit 2] As it turned out, this was the actual error that was hidden:</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/mambaforge/lib/python3.10/site-packages/zstandard/__init__.py", line 39, in <module>
from .backend_c import * # type: ignore
ImportError: zstd C API versions mismatch; Python bindings were not compiled/linked against expected zstd version (10505 returned by the lib, 10502 hardcoded in zstd headers, 10502 hardcoded in the cext)
</code></pre>
| <python><conda><mamba> | 2023-10-15 15:04:59 | 1 | 7,661 | PHPirate |
77,297,092 | 20,000,775 | "Inconsistent Audio Classification Results between Flutter App and Postman - Flask API" | <p>I've developed a Flutter app that records audio and sends it to a Python Flask API for classification. The API uses Librosa for feature extraction and a pre-trained ML model for audio classification. However, I'm experiencing inconsistencies in the predictions.</p>
<h3>Problem:</h3>
<ul>
<li>When I send audio files from the Flutter app to the Flask API, I get incorrect classifications, but when I use Postman to send an audio file that was recorded with the same app I used for collecting training data, the classification is correct.</li>
<li>Both recordings are in <code>.wav</code> format with a sample rate of 16 kHz.</li>
</ul>
<p><strong>code</strong></p>
<pre><code>Future<void> _startRecording() async {
try {
Record record = Record();
if (await record.hasPermission()) {
print("startRecording() hasPermission ");
Directory tempDir = await getTemporaryDirectory();
String tempPath = tempDir.path + '/audio.wav';
await record.start(path: tempPath);
setState(() {
_isRecording = true;
_audioPath = tempPath;
print("tempPath $tempPath");
});
print("Start Recording - _audioPath: $_audioPath");
}
} catch (e) {
print("startRecording() has no Permission");
print(e);
}
}
Future<void> _stopRecording() async {
try {
Record record = Record();
String? path = await record.stop();
if (path != null) {
setState(() {
_isRecording = false;
_audioPath = path;
}); // Call the upload method here
print("Stop Recording - _audioPath: $_audioPath");
}
} catch (e) {
print(e);
}
Timer(Duration(seconds: 1), () {
uploadAudio(File(_audioPath!), 'balla');
});
}
Future<void> uploadAudio(File audioFile, String inputWord) async {
var request = http.MultipartRequest('POST', Uri.parse('http://192.168.8.181:5000/predict'));
request.fields['input_word'] = inputWord;
request.files.add(http.MultipartFile.fromBytes('audio_file', await audioFile.readAsBytes(), filename: 'audio.wav'));
var response = await request.send();
if (response.statusCode == 200) {
var result = await http.Response.fromStream(response);
print('Result: ${result.body}');
var parsedJson = json.decode(result.body);
if (parsedJson['result'] == "Correct Answer") {
audioPlayer.dispose(); audioPlayer.pause();
Navigator.push(
context,
MaterialPageRoute(
builder: (context) => Correct(),
),
);
}
if (parsedJson['result'] == "Wrong Answer") {
audioPlayer.dispose(); audioPlayer.pause();
Navigator.push(
context,
MaterialPageRoute(
builder: (context) => InCorrect(),
),
);
}
} else {
print('Failed to upload audio');
}
}
</code></pre>
<p><strong>Flask API</strong></p>
<pre><code>from flask import Flask, request, jsonify
from pydub import AudioSegment
import os
import librosa
import numpy as np
import joblib
import subprocess
import logging
app = Flask(__name__)
# Initialize logging
logging.basicConfig(filename='app.log', level=logging.INFO)
# Define max_length
max_length = 100
# Function to convert audio file bit rate
def convert_audio_bit_rate(audio_file_path, target_bit_rate=256000):
output_file_path = "converted_audio.wav"
try:
# Delete the existing converted file if it exists
if os.path.exists(output_file_path):
os.remove(output_file_path)
subprocess.run([
"ffmpeg",
"-i", audio_file_path,
"-ab", str(target_bit_rate),
output_file_path
])
except Exception as e:
logging.error(f"Error in converting audio: {e}")
return None
return output_file_path
def predict_class(audio_file_path):
try:
# Load the scaler, label encoder, and the model
scaler = joblib.load('scaler.pkl')
le = joblib.load('label_encoder.pkl')
model = joblib.load('Student_audio_model.pkl')
# Load the audio file
waveform, sample_rate = librosa.load(audio_file_path, sr=None)
# Feature extraction
mfcc = librosa.feature.mfcc(y=waveform, sr=sample_rate)
# Padding feature array to a fixed length
if mfcc.shape[1] < max_length:
pad_width = max_length - mfcc.shape[1]
mfcc = np.pad(mfcc, pad_width=((0, 0), (0, pad_width)), mode='constant')
else:
mfcc = mfcc[:, :max_length]
# Reshaping and scaling
features = mfcc.reshape(1, -1)
features = scaler.transform(features)
# Prediction
predicted_class = model.predict(features)
# Convert integer label to original class label
predicted_label = le.inverse_transform(predicted_class)[0]
except Exception as e:
logging.error(f"Error in prediction: {e}")
return None
return predicted_label
@app.route('/predict', methods=['POST'])
def predict():
try:
# Get the audio file
audio_file = request.files["audio_file"]
# Save the audio file
audio_file_path = "uploaded_audio.wav"
audio_file.save(audio_file_path)
# Predict the class
predicted_class = predict_class(audio_file_path)
if predicted_class is None:
return jsonify({"result": "Error in prediction"}), 500
# Get the input_word
input_word = request.form["input_word"]
# Clean-up
# if os.path.exists(audio_file_path):
# os.remove(audio_file_path)
#if os.path.exists(converted_audio_file_path):
# os.remove(converted_audio_file_path)
if input_word == predicted_class:
return jsonify({"result": "Correct Answer"})
else:
return jsonify({"result": "Wrong Answer"})
except Exception as e:
logging.error(f"General error: {e}")
return jsonify({"result": "An error occurred"}), 500
@app.errorhandler(404)
def not_found(error):
return jsonify({"error": "Not Found"}), 404
@app.errorhandler(500)
def internal_error(error):
return jsonify({"error": "Internal Server Error"}), 500
if __name__ == '__main__':
app.run(host="0.0.0.0", port=5000)
</code></pre>
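<p>One concrete thing to compare between the two clients is clip length and sample rate: <code>librosa.load(..., sr=None)</code> keeps whatever rate the file was saved at, and the MFCC matrix is then padded or cut to 100 frames, so a clip that is shorter, longer, or resampled differently than the training recordings yields a very different feature vector even for the same word. The fixed-length logic, isolated as plain Python (frames as nested lists instead of a NumPy array):</p>

```python
def pad_or_truncate(mfcc, max_length, fill=0.0):
    """Force every coefficient row to exactly max_length frames,
    mirroring the np.pad / slice branch in predict_class above."""
    fixed = []
    for row in mfcc:
        row = list(row)
        if len(row) < max_length:
            fixed.append(row + [fill] * (max_length - len(row)))
        else:
            fixed.append(row[:max_length])
    return fixed
```

<p>Logging <code>sample_rate</code> and <code>mfcc.shape</code> for an app upload versus a Postman upload of a known-good file would quickly show whether the two paths produce comparable features; if not, resampling to the training rate before feature extraction (e.g. <code>librosa.load(path, sr=16000)</code>) is a likely fix.</p>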
<p>How do I solve this issue?</p>
| <python><flutter><flask><librosa> | 2023-10-15 14:32:18 | 1 | 642 | Dasun Dola |
77,297,031 | 2,612,235 | Format currency with babel without Unicode? | <p>Using Babel I get this:</p>
<pre><code>>>> babel.numbers.format_currency(1099.98, 'CHF', locale='fr_CH')
'1\u202f099,98\xa0CHF'
</code></pre>
<p>I would like this format:</p>
<pre><code>"CHF 1'099.98"
</code></pre>
<p>Is it possible to specify a custom format with babel?</p>
<p>My current ugly workaround is:</p>
<pre><code>>>> num = 1099.998
>>> s = f"CHF {num:,.2f}".replace(',',"'")
>>> s
"CHF 1'100.00"
</code></pre>
<p>But it also rounds my number :(</p>
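<p>If the goal is just this one fixed pattern, the standard library can produce it without Babel. The sketch below truncates instead of rounding (swap <code>ROUND_DOWN</code> for <code>ROUND_HALF_UP</code> if rounding is actually wanted) and then rewrites the comma grouping to an apostrophe:</p>

```python
from decimal import Decimal, ROUND_DOWN

def format_chf(amount, places=2):
    # Quantize via Decimal so 1099.998 becomes 1099.99 instead of 1100.00,
    # then replace the comma grouping of the f-string with apostrophes.
    quantized = Decimal(str(amount)).quantize(Decimal(10) ** -places, rounding=ROUND_DOWN)
    return "CHF " + f"{quantized:,.{places}f}".replace(",", "'")
```

<p>If you prefer to stay locale-aware, <code>babel.numbers.format_currency</code> also accepts an explicit <code>format</code> pattern string, though the grouping character it emits then still comes from the locale's symbols.</p>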
| <python><format><numbers><python-babel> | 2023-10-15 14:14:11 | 2 | 29,646 | nowox |
77,296,633 | 4,379,151 | How to combine objects in a full outer join with Django? | <p>Let's use Django's standard users and groups. I have a couple of users and groups and each user can be assigned to several groups via a M2M relation.</p>
<p>Now, I want to construct a single query which cross joins users x groups. The purpose is a view that shows all groups with it's members, when users are members of several groups then I want them to be shown in each group.</p>
<p>I currently have this query, which appears to give me a cross join:</p>
<pre class="lang-py prettyprint-override"><code>groups = list(Group.objects.all())
queryset = User.objects.filter(groups__in=groups)
</code></pre>
<p>However, the query only contains data about the users. How can I include the data for each group into the queryset?</p>
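<p>To make the target shape explicit: the view needs every (group, user) membership pair, with a user appearing once per group they belong to. A plain-Python sketch of that grouping, independent of the ORM:</p>

```python
def members_by_group(memberships):
    """memberships: iterable of (group, user) pairs, i.e. the M2M rows.
    Returns {group: [users, ...]}, with multi-group users repeated."""
    grouped = {}
    for group, user in memberships:
        grouped.setdefault(group, []).append(user)
    return grouped
```

<p>On the Django side this shape usually falls out of iterating groups rather than users, e.g. <code>Group.objects.prefetch_related('user_set')</code> and then <code>group.user_set.all()</code> per group, which keeps each group's member list attached to the group object using two queries in total.</p>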
| <python><django> | 2023-10-15 12:28:02 | 1 | 33,253 | Erik Kalkoken |
77,296,454 | 1,987,726 | How to dedent a block in Python3 under MacOS? | <p>I am learning Python3. It is executed on MacOS/Terminal by running <code>python3</code>.<br />
I was able to execute some examples from the book "Deep Learning with Python, Second Edition", but I am stuck now at a simple editor problem:<br />
If I enter the following statements</p>
<pre><code>>>> with tf.GradientTape as tape:
... tape.watch(input_const)
... result = tf.square(input_const)
...
</code></pre>
<p>I am not able to terminate the indention block.<br />
If I simply enter <code>return</code>, I get the error</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: __enter__
>>>
</code></pre>
<p>I have tried to terminate indentation using <code>Shift [</code>, <code>Shift tab</code>, and <code>Control [</code>, but this does not work.<br />
How do I dedent a block?</p>
<p>I found a similar question <a href="https://stackoverflow.com/q/71595082/1987726">here</a>, but without a solution.</p>
| <python><read-eval-print-loop> | 2023-10-15 11:35:47 | 2 | 15,407 | Reinhard Männer |
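In the interactive REPL a block is terminated by submitting an empty line: just press Return on the <code>...</code> prompt with nothing typed. The traceback shown is a separate problem — <code>with tf.GradientTape as tape:</code> passes the class itself (not an instance) to <code>with</code>, so no usable <code>__enter__</code> is found; <code>tf.GradientTape()</code> is intended. A TensorFlow-free sketch of the same mistake (the <code>Tape</code> class is a stand-in, not the real API):

```python
class Tape:
    """Minimal stand-in for tf.GradientTape as a context manager."""
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        return False

def use(cm):
    try:
        with cm as t:
            return "ok"
    except (AttributeError, TypeError):
        # a class object instead of an instance -> context manager lookup fails
        return "raised"

print(use(Tape))    # raised, like `with tf.GradientTape as tape:`
print(use(Tape()))  # ok, like `with tf.GradientTape() as tape:`
```

(Older Pythons raise <code>AttributeError: __enter__</code> for this mistake; 3.11+ raises a <code>TypeError</code> instead, hence both are caught above.)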
77,296,419 | 195,787 | Optimizing a Python Function of SmoothStep with Multiple Conditionals for Numba Vectorization | <p>I implemented a function which creates a smooth rectangle function using <a href="https://en.wikipedia.org/wiki/Smoothstep" rel="nofollow noreferrer">SmoothStep</a>:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from numba import jit, njit
import matplotlib.pyplot as plt
@njit
def GenSmoothStep( vX: np.ndarray, lowVal: float, highVal: float, vY: np.ndarray, rollOffWidth: float = 0.1 ):
lowClip = max(lowVal - rollOffWidth, 0)
highClip = min(highVal + rollOffWidth, 1)
for ii in range(vX.size):
valX = vX.flat[ii]
if valX < lowClip:
vY.flat[ii] = 0.0
elif valX < lowVal:
# Smoothstep [lowClip, lowVal]
valXN = (lowVal - valX) / (lowVal - lowClip)
vY.flat[ii] = 1 - (valXN * valXN * (3 - (2 * valXN)))
elif valX > highClip:
vY.flat[ii] = 0.0
elif valX > highVal:
# Smoothstep [highVal, highClip]
valXN = (valX - highVal) / (highClip - highVal)
vY.flat[ii] = 1 - (valXN * valXN * (3 - (2 * valXN)))
else:
vY.flat[ii] = 1.0
numGridPts = 1000
lowVal = 0.15
highVal = 0.75
rollOffWidth = 0.3
vX = np.linspace(0, 1, numGridPts)
vY = np.empty_like(vX)
GenSmoothStep(vX, lowVal, highVal, vY, rollOffWidth = rollOffWidth)
plt.plot(vX, vY)
</code></pre>
<p>The function includes several conditionals which implies a vectorization unfriendliness.<br />
I was wondering if there are some simple steps to make the function more Numba friendly.</p>
<h3>Update</h3>
<p>I took the code by @AndrejKesely and updated it to handle the edge cases in my code (<code>lowVal = 0.0</code> and / or <code>highVal = 1.0</code>).<br />
I also add a variant to clip without branches.
This is the current state:</p>
<pre class="lang-py prettyprint-override"><code># %%
import numpy as np
from numba import jit, njit
import matplotlib.pyplot as plt
from timeit import timeit
@njit
def GenSmoothStep000( vX: np.ndarray, lowVal: float, highVal: float, vY: np.ndarray, rollOffWidth: float = 0.1 ):
lowClip = max(lowVal - rollOffWidth, 0)
highClip = min(highVal + rollOffWidth, 1)
for ii in range(vX.size):
valX = vX.flat[ii]
if valX < lowClip:
vY.flat[ii] = 0.0
elif valX < lowVal:
# Smoothstep [lowClip, lowVal]
valXN = (lowVal - valX) / (lowVal - lowClip)
vY.flat[ii] = 1 - (valXN * valXN * (3 - (2 * valXN)))
elif valX > highClip:
vY.flat[ii] = 0.0
elif valX > highVal:
# Smoothstep [highVal, highClip]
valXN = (valX - highVal) / (highClip - highVal)
vY.flat[ii] = 1 - (valXN * valXN * (3 - (2 * valXN)))
else:
vY.flat[ii] = 1.0
@njit
def Clamp001( x: float, lowBound: float = 0.0, highBound: float = 1.0 ):
if x < lowBound:
return lowBound
if x > highBound:
return highBound
return x
@njit
def Clamp002( x: float, lowBound: float = 0.0, highBound: float = 1.0 ):
return max(min(x, highBound), lowBound)
@njit
def SmoothStep001( x: float, lowBound: float = 0.0, highBound: float = 1.0 ):
x = Clamp001((x - lowBound) / (highBound - lowBound), 0.0, 1.0)
return x * x * (3.0 - 2.0 * x)
@njit
def SmoothStep002( x: float, lowBound: float = 0.0, highBound: float = 1.0 ):
x = Clamp002((x - lowBound) / (highBound - lowBound), 0.0, 1.0)
return x * x * (3.0 - 2.0 * x)
@njit
def GenSmoothStep001( vX: np.ndarray, lowVal: float, highVal: float, vY: np.ndarray, rollOffWidth: float = 0.1 ):
lowClip = max(lowVal - rollOffWidth, 0)
highClip = min(highVal + rollOffWidth, 1)
if (highVal == 1.0) and (lowVal == 0.0):
for ii in range(vX.size):
vY[ii] = 1.0
elif (highVal == 1.0):
for ii in range(vX.size):
vY[ii] = SmoothStep001(vX[ii], lowClip, lowVal)
elif (lowVal == 0.0):
for ii in range(vX.size):
vY[ii] = 1 - SmoothStep001(vX[ii], highVal, highClip)
else:
for ii in range(vX.size):
vY[ii] = SmoothStep001(vX[ii], lowClip, lowVal) * (1 - SmoothStep001(vX[ii], highVal, highClip))
@njit
def GenSmoothStep002( vX: np.ndarray, lowVal: float, highVal: float, vY: np.ndarray, rollOffWidth: float = 0.1 ):
lowClip = max(lowVal - rollOffWidth, 0)
highClip = min(highVal + rollOffWidth, 1)
if (highVal == 1.0) and (lowVal == 0.0):
for ii in range(vX.size):
vY[ii] = 1.0
elif (highVal == 1.0):
for ii in range(vX.size):
vY[ii] = SmoothStep002(vX[ii], lowClip, lowVal)
elif (lowVal == 0.0):
for ii in range(vX.size):
vY[ii] = 1 - SmoothStep002(vX[ii], highVal, highClip)
else:
for ii in range(vX.size):
vY[ii] = SmoothStep002(vX[ii], lowClip, lowVal) * (1 - SmoothStep002(vX[ii], highVal, highClip))
# %%
# Validation + JIT Compilation
numGridPts = 10_000
lowVal = 0.35
highVal = 0.55
rollOffWidth = 0.3
vX = np.linspace(0, 1, numGridPts)
hF, vHa = plt.subplots(nrows = 1, ncols = 3, figsize = (16, 5))
vY = np.empty_like(vX)
GenSmoothStep000(vX, lowVal, highVal, vY, rollOffWidth = rollOffWidth)
vHa[0].plot(vX, vY)
vY = np.empty_like(vX)
GenSmoothStep001(vX, lowVal, highVal, vY, rollOffWidth = rollOffWidth)
vHa[1].plot(vX, vY)
vY = np.empty_like(vX)
GenSmoothStep002(vX, lowVal, highVal, vY, rollOffWidth = rollOffWidth)
vHa[2].plot(vX, vY);
# %%
# Check Performance
time000 = timeit("GenSmoothStep000(vX, lowVal, highVal, vY, rollOffWidth = rollOffWidth)", number = 10_000, globals = globals())
time001 = timeit("GenSmoothStep001(vX, lowVal, highVal, vY, rollOffWidth = rollOffWidth)", number = 10_000, globals = globals())
time002 = timeit("GenSmoothStep002(vX, lowVal, highVal, vY, rollOffWidth = rollOffWidth)", number = 10_000, globals = globals())
print(time000)
print(time001)
print(time002)
</code></pre>
<p>The output is (On my computer, <code>Intel Core i7-6800K</code>):</p>
<pre class="lang-py prettyprint-override"><code>0.23776450000877958
0.23713289998704568
0.23025239999697078
</code></pre>
<p>So it seems to be still very close.</p>
| <python><numpy><performance><vectorization><numba> | 2023-10-15 11:23:32 | 1 | 5,123 | Royi |
77,296,360 | 15,844,296 | Capturing key and mouse events application wide in Beeware/Toga | <p>I recently discovered <code>Beeware/Toga</code> and I'm considering switching to it: it's so much nicer than <code>Tkinter</code> and it seems to be more straightforward than <code>wxPython</code>, and faster both at dev and run time. So I tried to build a few toy apps, one of them being a look-alike of <a href="https://angusj.com/sudoku/" rel="nofollow noreferrer">SimpleSudoku</a>.</p>
<p>The UI is simple: it shows the value in a solved cell, or the remaining "candidates" in a cell that's still unsolved. Then you select a cell by clicking it or moving to it with the arrow keys, and type the value you want to insert, or <code>Alt-<value></code> to remove a candidate value. There are a few more possibilities but let's ignore them for the moment.</p>
<p>The following code is a first attempt to build a preview (tested on Windows):</p>
<pre><code>import toga
from toga.style import Pack
from toga.style.pack import COLUMN, ROW
class SSClone(toga.App):
def startup(self):
main_box = toga.Box(style=Pack(direction=COLUMN, padding=5))
self.tiles = []
self.values = '123456789'
self.size = len(self.values)
label_text = ' '.join(self.values)
tile_size = 40
vbox = toga.Box(style=Pack(direction=COLUMN, padding=0, background_color='grey'))
for i,r in enumerate(range(self.size)):
hbox = toga.Box(style=Pack(direction=ROW, padding=(2,0,0,0), background_color='lightgrey'))
for c in range(self.size-1):
tile = toga.Label(label_text, style=Pack(padding=2, width=tile_size, height=tile_size, font_family='monospace', font_size=7))
hbox.add(tile)
self.tiles.append(tile)
tile = toga.Label(f' {self.values[i]}', style=Pack(padding=2, width=tile_size, height=tile_size, font_family='monospace', font_size=15, alignment='center', font_weight='bold', background_color='aqua'))
hbox.add(tile)
self.tiles.append(tile)
vbox.add(hbox)
main_box.add(vbox)
self.tiles[0].style.background_color='yellow'
self.main_window = toga.MainWindow(title=self.formal_name)
self.main_window.content = main_box
self.main_window.show()
</code></pre>
<p>I have used <code>Label</code> here, but one may also think of <code>TextInput</code> with <code>readonly=True</code>, <code>Canvas</code> (ouch!), or even a <code>Button</code> for each candidate (ouch!). All of these alternatives have disadvantages, but do at least solve some issues. What I would need, however, is a way to capture all keystrokes and mouse clicks at the application level. In <code>Tkinter</code> or <code>wxPython</code> I would simply bind the relevant <code>events</code>, respectively to the <code>app</code> or to a <code>panel</code>, but I can't find anything similar in <code>Toga</code>. Can someone help?</p>
<p>Thanks</p>
| <python><user-interface><beeware> | 2023-10-15 11:06:54 | 1 | 3,843 | gimix |
77,296,183 | 10,902,946 | Plot a semantic segmentation using plotly | <p>Given an image as a <code>uint8</code> <code>numpy.ndarray</code> of shape <code>(H, W, C)</code> and a set of binary masks (<code>uint8</code>) of shape <code>(H, W)</code>, where each mask is related to an instance of a class <code>c</code> (you could think of a dictionary having the names of the classes as keys and a list of associated masks for that class as values), how can I use Plotly to show a semantic segmentation of the image that shows information about the class of a mask when I hover with the pointer?</p>
<p>This is what I have achieved so far, using OpenCV to create an image to show after applying the masks to the original image (as you can see if I hover I don't get any information about the class since I was not able to add them):</p>
<p><a href="https://i.sstatic.net/0FKOF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0FKOF.png" alt="segmentation example, but hovering over a class doesn't show the name of the class" /></a></p>
<p>(in this example <strong>my goal is to add the information <code>class: keyboard/mouse/monitor</code> when I hover over one of the colored mask I've added to the image using the binary numpy arrays</strong>)</p>
<p>which I did like this:</p>
<pre class="lang-py prettyprint-override"><code>def draw_masks(img, masks):
# here masks is a dict having category names as keys
# associated to a list of binary masks
masked_image = img.copy()
colors = {}
for cat in masks:
colors[cat] = np.random.randint(0, 255, size=(3,)).tolist()
for cat in masks:
for mask in masks[cat]:
masked_image = np.where(np.repeat(mask[:, :, np.newaxis], 3, axis=2),
np.asarray(colors[cat], dtype="uint8"),
masked_image)
masked_image = masked_image.astype(np.uint8)
return cv2.addWeighted(image, 0.3, masked_image, 0.7, 0)
import plotly.express as px
segmented_image = draw_masks(image, masks)
fig = px.imshow(segmented_image)
fig.show()
</code></pre>
<p>If I keep this code I would only have to add the class names for each specific <code>(x,y)</code> location (I've tried, but haven't been able to do it).</p>
<p>However, if there is a more elegant solution that uses Plotly to directly add the masks to the image (that could manage automatically the colors for example), it would be better. So far, I've been able to plot the masks (and reach a result very similar to the one in the example) using <code>add_heatmap</code>, with the problem that the last trace takes up the entire image (since the heatmap considers also the "invisible" locations, signed with a zero).</p>
| <python><plotly><image-segmentation> | 2023-10-15 10:14:24 | 0 | 552 | Nicola Fanelli |
77,295,962 | 2,782,188 | Prevent my flask app using html unicode for curly braces | <p>I am making an API call in my flask app:</p>
<pre><code>count = len(bankid)
bankid = "044"
response = requests.get(f"{base_api_url}/banks/" + "{bankid}/branches".format(bankid=f"{{{bankid}:0{count}d}}", headers=headers))
</code></pre>
<p>My target is to use this URL: <code>https://api.flutterwave.com/v3/banks/044/branches</code>. Passing in the "044" string on its own loses the leading <code>zero</code>, so I am trying to use Python string formatting to pass in <code>{044:03d}</code>.</p>
<p>This is what I get:</p>
<pre><code>GET /v3/banks/%7B044:03d%7D/branches HTTP/1.1" 401 65]
</code></pre>
<p>I want:</p>
<pre><code>GET /v3/banks/044/branches HTTP/1.1" 401 65]
</code></pre>
<p>I want Python to use "{}" rather than the percent-encoded representation.</p>
| <python><flask><python-requests><flutterwave> | 2023-10-15 09:10:19 | 2 | 956 | George Udosen |
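The <code>%7B</code>/<code>%7D</code> in the log are percent-encoded braces: the nested f-string <code>f"{{{bankid}:0{count}d}}"</code> produces the literal text <code>{044:03d}</code> instead of applying a format spec, and <code>requests</code> then URL-encodes the braces. Zero-padding wants the value converted to <code>int</code> first — a sketch with made-up variable values:

```python
bankid = "044"
count = 3  # desired total width

padded = f"{int(bankid):0{count}d}"  # the format spec is actually applied here
url = f"https://api.flutterwave.com/v3/banks/{padded}/branches"

print(padded)               # 044
print(bankid.zfill(count))  # 044 -- equivalent when the input is a string
```

Note that a string like <code>"044"</code> never loses its leading zero by itself; that only happens if it is converted to an integer somewhere upstream, so passing it through unchanged also works.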
77,295,731 | 4,776,689 | PostgreSQL Values list (table constructor) with SQLAlchemy. How to for native SQL query? | <p>I am trying to use PostgreSQL Values list <a href="https://www.postgresql.org/docs/current/queries-values.html" rel="nofollow noreferrer">https://www.postgresql.org/docs/current/queries-values.html</a> (also known as table constructor) in Python with SQLAlchemy.</p>
<p>Here is the SQL, which works in PostgreSQL:</p>
<pre><code>SELECT input_values.ticker
FROM (VALUES ('A'), ('B'), ('C')) as input_values(ticker)
</code></pre>
<p>I built a list of tickers and pass it as an argument. It is converted by SQLAlchemy to</p>
<pre><code>SELECT input_values.ticker
FROM (VALUES (%(input_values_1)s, %(input_values_2)s, %(input_values_3)s)) as input_values(ticker)
</code></pre>
<p>This would look right if the list were used as a parameter in an IN clause, but in my case it does not work. How do I provide the parameter list correctly?</p>
<p>Here is the code I have:</p>
<pre><code>import logging
from injector import inject
from sqlalchemy import bindparam
from sqlalchemy.sql import text
from database.database_connection import DatabaseConnection
class CompaniesDAO:
FIND_NEW_QUERY = '''
SELECT input_values.ticker
FROM (VALUES :input_values) as input_values(ticker)
'''
@inject
def __init__(self, database_connection: DatabaseConnection):
self.__database_connection = database_connection
def save_new(self, companies):
tickers = ['A', 'B', 'C']
input_values = {'input_values': tickers}
database_engine = self.__database_connection.get_engine()
with database_engine.connect() as connection:
query = text(CompaniesDAO.FIND_NEW_QUERY)
query = query.bindparams(bindparam('input_values', expanding=True))
result = connection.execute(query, input_values)
new_tickers = [row[0] for row in result]
logging.info(new_tickers)
</code></pre>
<p>I saw a few related discussions like <a href="https://stackoverflow.com/questions/18858291/values-clause-in-sqlalchemy">VALUES clause in SQLAlchemy</a> and checked existing solutions like <a href="https://github.com/sqlalchemy/sqlalchemy/wiki/PGValues" rel="nofollow noreferrer">https://github.com/sqlalchemy/sqlalchemy/wiki/PGValues</a>. However, I do not see a solution for a native SQL query. Is there one?</p>
| <python><postgresql><sqlalchemy> | 2023-10-15 07:55:24 | 1 | 2,990 | P_M |
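The problem with <code>expanding=True</code> is that it renders one parenthesised group <code>(?, ?, ?)</code>, which <code>VALUES</code> reads as a single three-column row rather than three one-column rows. For a native-SQL query, one sketch is to generate one named placeholder per row and bind each individually (the <code>t0</code>/<code>t1</code> parameter names are made up):

```python
tickers = ["A", "B", "C"]

# One "(:tN)" group per row, so VALUES sees three single-column rows
placeholders = ", ".join(f"(:t{i})" for i in range(len(tickers)))
params = {f"t{i}": t for i, t in enumerate(tickers)}

sql = (
    "SELECT input_values.ticker "
    f"FROM (VALUES {placeholders}) AS input_values(ticker)"
)
print(sql)
# SELECT input_values.ticker FROM (VALUES (:t0), (:t1), (:t2)) AS input_values(ticker)
```

The resulting string can then be executed as <code>connection.execute(text(sql), params)</code>; since only placeholder names are generated, the values themselves still go through bound parameters.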
77,295,712 | 1,815,710 | Cannot find module in Jupyter Notebook but other python files can | <p>I'm trying to use JupyterLab/Notebook within my API Service as a playground as I think it's a bit nicer than simply using the terminal. I've installed <code>jupyterlab</code> in my virtual environment and I believe it's activated according to the top right corner of the image uploaded. However, when I try to import a class I wrote, I get the error</p>
<p><code>ModuleNotFoundError: No module named 'api'</code></p>
<p>My current directory structure looks like this</p>
<pre><code>/zen_api
/api
/playground
test.ipynb <--- File getting the error
/tests
</code></pre>
<p>I have other python files within my <code>tests</code> directory that <code>import api.*</code> which work fine so I'm wondering why my Notebook cannot find the module?</p>
<p><a href="https://i.sstatic.net/9uqow.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9uqow.png" alt="enter image description here" /></a></p>
| <python><jupyter-notebook><jupyter-lab> | 2023-10-15 07:47:10 | 0 | 16,539 | Liondancer |
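A likely explanation: the files under <code>tests</code> resolve <code>api</code> because they are run from the project root (or via a test runner that adds it to <code>sys.path</code>), while a notebook's working directory is the folder the notebook lives in (<code>playground</code>). A common first cell adds the project root to the path — the <code>".."</code> here assumes the layout shown, with the notebook one level below <code>zen_api</code>:

```python
import os
import sys

# Walk up from the notebook's folder to the project root that contains "api/"
project_root = os.path.abspath(os.path.join(os.getcwd(), ".."))
if project_root not in sys.path:
    sys.path.insert(0, project_root)

print(project_root in sys.path)  # True
```

Alternatively, installing the project into the virtual environment in editable mode (<code>pip install -e .</code>, if the project has packaging metadata) makes <code>api</code> importable from anywhere, including the notebook kernel.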
77,295,709 | 12,859,833 | Greenlet installation fails with `OSError` | <p>I tried to install the <code>optuna</code> package into a TensorFlow <code>tensorflow:2.11.1-gpu</code> image.
However, it failed with</p>
<pre><code>ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: '/usr/local/include/python3.8/greenlet'
</code></pre>
<p>Installing other versions of <code>optuna</code>/<code>greenlet</code> does not help.</p>
| <python><tensorflow><hyperparameters><optuna><greenlets> | 2023-10-15 07:46:47 | 1 | 343 | emil |
77,295,658 | 7,818,560 | Airflow DAG breaks with JSONDecodeError | <p>So my DAG starts like given below:</p>
<pre><code>import airflow
from datetime import timedelta, datetime
from airflow import DAG
from airflow.models import Variable
from airflow.contrib.operators.bigquery_operator import BigQueryOperator
default_args = {
"owner": "scheduled_queries",
"depends_on_past": False,
"start_date": airflow.utils.dates.days_ago(1),
"email": Variable.get("email"),
"email_on_failure": True,
"email_on_retry": True,
"retries": 0,
"retry_delay": timedelta(minutes=1),
}
dag = DAG('scheduled_queries',
schedule_interval=None, #'0 0 * * *',
#template_searchpath=tmpl_search_path,
default_args=default_args
)
</code></pre>
<p>The error I'm getting is:</p>
<pre><code>"
Broken DAG (dags/testdag.py):
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/opt/python3.8/lib/python3.8/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 1 (char 263)
"
</code></pre>
<p>I have no idea why this is happening, because the same DAG works well in other environments like UAT.
Kindly guide me with the issue and solution. Thanks in advance!</p>
| <python><json><google-cloud-platform><airflow><directed-acyclic-graphs> | 2023-10-15 07:29:25 | 1 | 509 | user12345 |
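The traceback points at <code>json.decoder</code>, which suggests something read at parse time — most plausibly an Airflow Variable — is not valid JSON in this environment while the UAT copy is (single quotes instead of double quotes is the classic cause of this exact message). Comparing the raw value of the <code>email</code> Variable between environments, and any <code>deserialize_json</code> usage, is a good first check. A stdlib demonstration of the same error — the variable content shown is a guess for illustration:

```python
import json

bad = "{'email': 'team@example.com'}"  # single quotes: not valid JSON
try:
    json.loads(bad)
except json.JSONDecodeError as e:
    print(e.msg)  # Expecting property name enclosed in double quotes

good = json.loads('{"email": "team@example.com"}')
print(good["email"])
```

If the Variable can contain arbitrary text, wrapping the lookup in a try/except (or supplying a safe default) keeps a bad value from breaking DAG parsing for the whole file.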
77,295,565 | 14,498,998 | Static and Media files are not shown in django website - nginx & gunicorn | <p>I'm trying to deploy a django website on a vps and it's now up and running (after a lot of trouble!), but my media and static files are not shown on my website, and I have tried a lot of approaches but none of them worked.</p>
<p>My nginx configuration:</p>
<pre><code>server {
listen 80;
server_name domain_name;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /var/www/real_estate/static/;
}
location / {
include proxy_params;
proxy_pass http://unix:/run/gunicorn.sock;
}
}
</code></pre>
<p>gunicorn.service:</p>
<pre><code>[Unit]
Description=gunicorn daemon
Requires=gunicorn.socket
After=network.target
[Service]
User=shahriar
Group=www-data
WorkingDirectory=/home/shahriar/Amlak/real_estate
ExecStart=/home/shahriar/Amlak/env/bin/gunicorn \
--access-logfile - \
--workers 3 \
--bind unix:/run/gunicorn.sock \
real_estate.wsgi:application
[Install]
WantedBy=multi-user.target
</code></pre>
<p>settings.py:</p>
<pre><code># Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/4.2/howto/static-files/
STATIC_URL = '/var/www/real_estate/static/'
STATIC_ROOT = '/var/www/real_estate/static/assets'
STATICFILES_DIRS = [ 'static/', BASE_DIR/'static/', '/var/www/real_estate/static/'
]
# Default primary key field type
# https://docs.djangoproject.com/en/4.2/ref/settings/#default-auto-field
DEFAULT_AUTO_FIELD = 'django.db.models.BigAutoField'
# Media Settings
MEDIA_URL = '/media/'
MEDIA_ROOT = BASE_DIR.joinpath('media')
</code></pre>
<p>And eventually I restarted every service possible but not a single static file was shown.</p>
<p>I tried changing settings.py from this:</p>
<pre><code>STATIC_URL = 'static/'
STATIC_ROOT = 'assets/'
STATICFILES_DIRS = [ 'static/', BASE_DIR/'static/'
]
</code></pre>
<p>to this:</p>
<pre><code>STATIC_URL = '/var/www/real_estate/static/'
STATIC_ROOT = '/var/www/real_estate/static/assets'
STATICFILES_DIRS = [ 'static/', BASE_DIR/'static/', '/var/www/real_estate/static/'
]
</code></pre>
<p>I tried changing from this :</p>
<pre><code>server {
listen 80;
server_name domain_name;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /var/www/real_estate/static/;
}
location /assets/ {
root /var/www/real_estate/assets/;
}
location / {
include proxy_params;
proxy_pass http://unix:/run/gunicorn.sock;
}
}
</code></pre>
<p>to this:</p>
<pre><code>server {
listen 80;
server_name domain_name;
location = /favicon.ico { access_log off; log_not_found off; }
location /static/ {
root /var/www/real_estate/static/;
}
location / {
include proxy_params;
proxy_pass http://unix:/run/gunicorn.sock;
}
}
</code></pre>
| <python><django><nginx><gunicorn><wsgi> | 2023-10-15 06:46:20 | 1 | 313 | Alin |
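One thing worth checking in the config above: with <code>location /static/ { root /var/www/real_estate/static/; }</code>, nginx appends the full URI to <code>root</code>, so a request for <code>/static/css/x.css</code> is looked up at <code>/var/www/real_estate/static/static/css/x.css</code> — note the doubled <code>static</code>. The <code>alias</code> directive substitutes the location prefix instead of appending it. A sketch, assuming collected files live under the <code>assets</code> directory used as <code>STATIC_ROOT</code> (the media path is a guess based on the gunicorn <code>WorkingDirectory</code>):

```nginx
location /static/ {
    # alias replaces the "/static/" prefix instead of appending the URI to root
    alias /var/www/real_estate/static/assets/;
}

location /media/ {
    alias /home/shahriar/Amlak/real_estate/media/;
}
```

Separately, <code>STATIC_URL</code> should remain a URL path such as <code>'/static/'</code> (not a filesystem path), and <code>python manage.py collectstatic</code> must be re-run after any change to <code>STATIC_ROOT</code> so the files actually exist where nginx looks.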
77,295,311 | 2,013,747 | how to define an "empty" async generator in Python? | <p>I need to define an "empty" async generator i.e. one that does not generate any values. Since the definition of an async generator is an async function containing a yield statement, this is what I came up with:</p>
<pre><code>async def _empty_async_generator() -> AsyncGenerator[str, None]:
if False:
yield ""
</code></pre>
<p>Is that the best that can be done in python 3.10? Is there a library function for creating empty async generators?</p>
| <python><python-3.x> | 2023-10-15 04:37:11 | 1 | 4,240 | Ross Bencina |
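As far as I know there is no dedicated stdlib helper for this, and the <code>if False: yield</code> trick does work. An equivalent and arguably clearer spelling puts an unreachable <code>yield</code> after <code>return</code>; the mere presence of <code>yield</code> in the body is what makes the function an async generator. A self-contained sketch:

```python
import asyncio
from typing import AsyncGenerator


async def empty_async_generator() -> AsyncGenerator[str, None]:
    return
    yield  # never reached; its presence makes this an async generator


async def main() -> list:
    # Iterating the generator completes immediately with no items
    return [item async for item in empty_async_generator()]


print(asyncio.run(main()))  # []
```

Both spellings behave identically; the <code>return</code>-then-<code>yield</code> form simply makes the "no values on purpose" intent explicit at the top of the body.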
77,295,278 | 10,856,988 | Displaying 'FILTERED' Dataframe in HoverToolTip (Bokeh) | <p>In Bokeh (3.2.2), <strong>while trying to display a filtered pandas dataframe in a hover tooltip via a callback, the cb_data.source/cb_data.x/cb_data.y = undefined.</strong><br />
How to retrieve current x,y properties in Javascript?</p>
<p><a href="https://i.sstatic.net/FuN0E.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FuN0E.jpg" alt="enter image description here" /></a></p>
<p><strong>ERROR</strong></p>
<pre><code>cb_data.source = undefined
cb_data.value = undefined
cb_data.x/cb_data.y = undefined/undefined
</code></pre>
<p><strong>CODE:</strong></p>
<pre><code>import pandas as pd
from bokeh.models import HoverTool, ColumnDataSource, DatetimeTickFormatter, CustomJS
from bokeh.plotting import figure, show
data = {
"Time": ["12:03", "12:03", "12:03", "12:03", "12:12", "12:12", "12:12", "12:12", "12:17", "12:17", "12:17", "12:17"],
"Spot": [19653, 19653, 19653, 19653, 19629, 19629, 19629, 19629, 19640, 19640, 19640, 19640],
"Strike": [19500, 19600, 19700, 19800, 19500, 19600, 19700, 19800, 19500, 19600, 19700, 19800],
"cLtp": [99, 174, 167, 167, 91, 165, 161, 93, 95, 16, 12, 31],
"pLtp": [49, 24, 42, 25, 51, 74, 36, 53, 21, 29, 75, 32],
"cSellOi": [851500, 267500, 379050, 382350, 1324350, 389700, 438850, 1318250, 1365900, 449000, 433200, 433200],
"pSellOi": [2530350, 1615600, 1656000, 1606500, 2542100, 1598450, 1646950, 2519050, 2544450, 1488200, 1460400, 1460400],
}
df = pd.DataFrame(data)
# Convert the 'Time' column to datetime
df['Time'] = pd.to_datetime(df['Time'], format='%H:%M')
df.sort_values(by='Time', inplace=True)
source = ColumnDataSource(df)
plot = figure(
x_axis_label="Time",
y_axis_label="Spot",
x_axis_type="datetime",
title="Dataframe in Tooltip",
)
plot.line(x='Time', y='Spot', source=source, line_width=3, line_color="blue")
plot.xaxis.formatter = DatetimeTickFormatter(
days="%Y-%m-%d",
hours="%H:%M:%S",
minutes="%H:%M"
)
# Create a custom JavaScript callback for the hover tool
jsHover = """
var data = source.data;
var hover = hover;
hover.tooltips = [];
var tooltipString = ""
console.log( "cb_data.source = " + cb_data['source'] );
console.log( "cb_data.value = " + cb_data['value']);
console.log( "cb_data.x/cb_data.y = " + cb_data['x'] +"/" + cb_data['y']);
for (var i = 0; i < data['Strike'].length; i++) {
tooltipString += data['Strike'][i] + ": (" + data['cLtp'][i] + ", " + data['pLtp'][i] + ")[";
tooltipString += data['cSellOi'][i] + ", " + data['pSellOi'][i] + "]\\n";
}
hover.tooltips = tooltipString;
hover.change.emit();
console.log('jsHover Callback Finished');
console.log(hover.tooltips);
"""

hoverTool = HoverTool(tooltips=None)
cb = CustomJS(args={'source': source,'hover': hoverTool}, code=jsHover)
hoverTool.callback = cb
plot.add_tools(hoverTool)
show(plot)
</code></pre>
| <python><bokeh> | 2023-10-15 04:13:54 | 0 | 1,641 | srt111 |
77,295,253 | 5,161,197 | Efficient Queue with batching | <p>I want to implement an internal Queue for batch processing in my service. The client of my service would be making sequential calls with 1 element at a time and I would like to process multiple of them at once internally if I have them available. This will help me scale for the bulk operation case where requests are submitted in parallel. However, I would like to not slow down the sequential case by adding some artificial delays waiting for potential new requests and batching them. This artificial delay value is hard to come up with and also requires a lot of fine-tuning to make sure we don't do a busy poll and that performance for both sequential and parallel cases is reasonable.</p>
<p>In Python, I would do something like</p>
<pre><code>BATCHS_SIZE = 100
def drain_queue():
ev = ev_queue.get()
evs = []
evs.append(ev)
while True:
if len(evs) == BATCH_SIZE:
break
try:
ev = ev_queue.get(block=False)
except Empty:
break
evs.append(ev)
process_evs(evs)
</code></pre>
<p>I am looking for a similar solution for Golang, possibly using channels. This is the current solution I came up with:</p>
<pre><code>dur := time.Duration(1000) * time.Millisecond // 1 second wait.
for msg := range evQueue {
// Start a new batch with this message.
batch := []BatchArg{msg}
timedOut := false
// Loop until the batch is full or a time out.
for timeout := time.After(dur); len(batch) < 200 && !timedOut; {
select {
case other := <-evQueue:
batch = append(batch, other)
case <-timeout:
// The batching channel is empty, and the timer ticked.
timedOut = true
}
}
processBatch(batch)
}
</code></pre>
<p>But this adds an artificial delay of 1 second in the sequential case. The caller side adds to the queue in Python and the channel in Golang.</p>
| <python><go><queue><channel><job-batching> | 2023-10-15 03:53:02 | 0 | 363 | likecs |
77,295,084 | 1,048,171 | Convert YML object/dict to paths of directories using Ansible/Jinja/Python? | <p>For an Ansible role that needs to create multiple folders I want to work with a simple representation of the list of paths to create: a human-readable YAML "object".</p>
<p>The following pseudo Ansible code should create (if missing) the following folders:</p>
<ul>
<li>/home/some-user/dir1</li>
<li>/home/some-user/dir1/dir1-subdir1</li>
<li>/home/some-user/dir1/dir1-subdir2</li>
<li>/home/some-user/dir1/dir1-subdir2/dir1-subdir2-foo</li>
<li>/home/some-user/dir2</li>
<li>/home/some-user/dir2/dir2-subdir2</li>
<li>/home/some-user/dir3</li>
</ul>
<p>But I fail to find a way to fully traverse the <code>directory_tree</code> variable.</p>
<pre class="lang-yaml prettyprint-override"><code>---
- name: Create directories
hosts: all
vars:
directory_tree:
base: /home/some-user
tree:
dir1:
dir1-subdir1: null
dir1-subdir2: null
dir1-subdir2-foo: null
dir2:
dir2-subdir2: null
dir3: null
tasks:
- name: Ensure folders exists
ansible.builtin.file:
path: '{{directory_tree.base }}/{{ item }}'
state: directory
loop: '{{ directory_tree.tree | …some magic here… }}'
</code></pre>
<p>Can this be done using Ansible? Or should I code something in Python for this?</p>
<p>Note: In a more advanced version I would have ownership and permissions (and other things) in the variable (in a special <code>.</code> key) and <code>ansible.builtin.file</code> would exploit them.</p>
<pre class="lang-yaml prettyprint-override"><code> vars:
directory_tree:
base: /home/some-user
tree:
dir1:
.:
user: root
group: root
mode: 0755
dir1-subdir1:
.:
user: root
group: root
dir1-subdir2:
.:
user: root
group: www-data
mode: 0750
dir1-subdir2-foo:
.:
user: root
group: root
mode: 0700
dir3:
.:
# Delete "dir3" folder
state: absent
</code></pre>
| <python><ansible><jinja2> | 2023-10-15 01:53:55 | 1 | 2,300 | CDuv |
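One option is a small custom Jinja filter (filter plugins can be dropped into a role's <code>filter_plugins/</code> directory) that flattens the tree into a path list for <code>loop:</code>. A sketch of the flattening logic in plain Python — it skips the reserved <code>"."</code> metadata key from the advanced variant, and nests <code>dir1-subdir2-foo</code> under <code>dir1-subdir2</code> to match the desired path list:

```python
def tree_paths(tree, prefix=""):
    """Flatten a nested dict of directory names into relative paths."""
    paths = []
    for name, subtree in (tree or {}).items():
        if name == ".":  # reserved key for ownership/permission metadata
            continue
        path = f"{prefix}/{name}" if prefix else name
        paths.append(path)
        if isinstance(subtree, dict):
            paths.extend(tree_paths(subtree, path))
    return paths

tree = {
    "dir1": {
        "dir1-subdir1": None,
        "dir1-subdir2": {"dir1-subdir2-foo": None},
    },
    "dir2": {"dir2-subdir2": None},
    "dir3": None,
}
print(tree_paths(tree))
```

Exposed as a filter, the task could then use something like <code>loop: "{{ directory_tree.tree | tree_paths }}"</code> with <code>path: "{{ directory_tree.base }}/{{ item }}"</code> (the filter name <code>tree_paths</code> is made up here). Because parents are emitted before children, <code>ansible.builtin.file</code> creates directories in a safe order.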
77,295,081 | 16,565,444 | Interpolate data from dictionary given a float index position | <p>I have a json file with data tuples of torque figures for given temperatures. Say I pull the data figures from the 46 degree portion:</p>
<pre><code>j = {"46": [[88.0, 95.3], [86.1, 93.4], [84.2, 91.3], [82.8, 89.7], [80.8, 87.8]]}
</code></pre>
<p>If I am then given a floating point number as a reference index position, for example:</p>
<pre><code>index = 1.85
</code></pre>
<p>Is it possible to interpolate from the data using just the float index location?
Obviously, if I try a straight float as an index position, it gives me:</p>
<pre><code>TypeError: list indices must be integers or slices, not float
</code></pre>
<p>At the moment I am finding the nearest integers above and below the float and am then interpolating manually, but it is tedious. I am needing the interpolation for both of the numbers given in tuple.. for example, at index <code>1.85</code> I am doing:</p>
<pre><code>lower_first_num = j["46"][1][0]
upper_first_num = j["46"][2][0]
difference = lower_first_num - upper_first_num
result = lower_first_num - (difference * .85)
</code></pre>
<p>Where .85 is the percent between the two index points. I would then have to do this again for the second number in the tuple.</p>
<p>In regard to the index floats given, they could be many various numbers as this program will run hundreds of times with different figures.</p>
<p>I'm just wondering if there is a pre-built method that I can use please...</p>
| <python><json><dictionary><floating-point><interpolation> | 2023-10-15 01:51:22 | 1 | 489 | PW1990 |
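There is no built-in that indexes a list with a float, but the manual two-point blend generalises to a short helper that interpolates every element of the pair at once (the helper name is made up; <code>numpy.interp</code> offers a similar per-column facility if numpy is already in use):

```python
import math

def interp_row(rows, index):
    """Linearly interpolate between rows[floor(index)] and the next row."""
    lo = math.floor(index)
    hi = min(lo + 1, len(rows) - 1)  # clamp at the last row
    frac = index - lo
    return tuple(a + (b - a) * frac for a, b in zip(rows[lo], rows[hi]))

rows = [[88.0, 95.3], [86.1, 93.4], [84.2, 91.3], [82.8, 89.7], [80.8, 87.8]]
first, second = interp_row(rows, 1.85)
print(round(first, 3), round(second, 3))  # 84.485 91.615
```

Writing the blend as <code>a + (b - a) * frac</code> means <code>frac</code> is always "the fraction of the way from the lower row to the upper row", which sidesteps the sign bookkeeping in the manual version.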
77,295,005 | 1,758,023 | autogluon: Detected time series with length <= 2 in data | <p>I am using autogluon, and I have created my train_data from my dataframe.</p>
<pre><code>train_data = TimeSeriesDataFrame.from_data_frame(
df,
id_column="index",
timestamp_column="timestamp"
)
train_data.head()
value
item_id timestamp
0 2021-10-13 1409083.0
1 2021-10-14 1416055.0
2 2021-10-15 1223615.0
3 2021-10-16 1333072.0
4 2021-10-17 1284866.0
</code></pre>
<p>When I run</p>
<pre><code>predictor = TimeSeriesPredictor(
prediction_length=48,
path="autogluon-m4-hourly",
target="target",
eval_metric="MASE",
ignore_time_index=True
)
</code></pre>
<p>I get</p>
<pre><code>ValueError: Detected time series with length <= 2 in data.
Please remove them from the dataset.
</code></pre>
<p>Any idea how to fix this?</p>
<h2>Edit</h2>
<p>My original DF looked like</p>
<pre><code>df.head()
date total
0 2021-10-13 1409083.0
1 2021-10-14 1416055.0
2 2021-10-15 1223615.0
3 2021-10-16 1333072.0
4 2021-10-17 1284866.0
</code></pre>
<p>which was read from a csv file into the dataframe.</p>
<p>I have one value (total) for each day. The totals are what I want to forecast into the future. I have 730 rows in my data (2 years worth).</p>
<p>Is it possible to use autogluon to forecast this data?</p>
<p>I created the <code>item_id</code> column and added it to the df so that I could pass this to the <code>TimeSeriesPredictor</code></p>
| <python><autogluon> | 2023-10-15 01:03:11 | 2 | 2,198 | Mark |
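This error usually means nearly every series in the frame is only one or two rows long — and using <code>"index"</code> as <code>id_column</code> does exactly that: each of the 730 rows gets its own <code>item_id</code>, producing 730 series of length 1. For a single daily series, every row should share one constant id. A pandas-only sketch (the id value <code>"daily_total"</code> is arbitrary):

```python
import pandas as pd

df = pd.DataFrame({
    "date": pd.date_range("2021-10-13", periods=5, freq="D"),
    "total": [1409083.0, 1416055.0, 1223615.0, 1333072.0, 1284866.0],
})

df["item_id"] = "daily_total"  # one constant id -> one series covering all rows

# All rows now belong to a single series whose length equals len(df)
print(df.groupby("item_id").size().to_dict())  # {'daily_total': 5}
```

The frame can then be loaded with <code>TimeSeriesDataFrame.from_data_frame(df, id_column="item_id", timestamp_column="date")</code>, and the predictor's <code>target</code> should name the actual value column (<code>"total"</code> here, not <code>"target"</code>).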
77,294,679 | 10,431,629 | Pandas Column Split a row with Conditional and create a separate Column | <p>It seems the problem is not difficult, but somehow I am not able to make it work. My problem is as follows. I have a dataframe say as follows:</p>
<pre><code> dfin
A B C
a 1 198q24
a 2 128q6
a 6 1456
b 7 67q22
b 1 56
c 3 451q2
d 11 1q789
</code></pre>
<p>So now what I want to do is as follows: whenever the script encounters a 'q', it should split the value and create a separate column holding the part starting from 'q'. The part before 'q' will remain in the original column (or can maybe go into a new column). So my desired output should be as follows:</p>
<pre><code> dfout
A B C D
a 1 198 q24
a 2 128 q6
a 6 1456
b 7 67 q22
b 1 56
c 3 451 q2
d 11 1 q789
</code></pre>
<p>So what I have tried till now is as follows:</p>
<pre><code> dfout = dfin.replace('\q\d*', '', regex=True)
</code></pre>
<p>It's creating one column without the q part, but I am not able to create column D, and it's not working as expected.</p>
<p>Any help/ideas will help and be appreciated.</p>
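One possible approach — a sketch, not from the original post — is `Series.str.extract` with two capture groups: everything before the first 'q', and 'q' plus the rest:

```python
import pandas as pd

dfin = pd.DataFrame({
    "A": ["a", "a", "a", "b", "b", "c", "d"],
    "B": [1, 2, 6, 7, 1, 3, 11],
    "C": ["198q24", "128q6", "1456", "67q22", "56", "451q2", "1q789"],
})

# Group 1: everything before the first 'q'; group 2 (optional): 'q' plus the rest.
dfout = dfin.copy()
dfout[["C", "D"]] = dfin["C"].str.extract(r"^([^q]*)(q.*)?$")
dfout["D"] = dfout["D"].fillna("")   # rows without a 'q' get an empty D
print(dfout)
```

The original `dfin` is left untouched because the extraction is assigned onto a copy.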
| <python><pandas><regex><group-by><split> | 2023-10-14 22:18:43 | 3 | 884 | Stan |
77,294,444 | 7,714,681 | Getting lists and dictionaries from my function call in the openai api | <p>While working with the ChatGPT api, I want to make function calls. Below is one of my functions:</p>
<pre><code>{
"name": "set_absolute_parameter_values_for_component",
"example": "Change the margin to ‘comfortable’.",
"description": "Set specific values for parameters of a single component in the specified card(s). "+
"This function applies absolute changes. Note: the 'Content' component cannot contain text, but 'Info' can hold text."+
"Call this function if both a relative and an absolute term are used.",
"parameters": {
"type": "object",
"properties": {
"to_card_list": {
"type": "string",
"description": "A list with card names that the user wants to update. If ",
"enum" : CARD_INFO['cards'],
},
"component_list": {
"type": "string",
"description": "A list with the component that the user wants to update. Note: the 'Content' component can NOT contain text.",
"enum" : CARD_INFO['component_list'],
},
"parameter_dict": {
"type": "string",
"description": "A dictionary mapping parameters to their desired values.",
"enum" : list(CARD_INFO['all_parameters']),
},
}
}
},
</code></pre>
<p>As the names for the parameters suggest, I want to get two lists and a dictionary. Sometimes ChatGPT (both 3.5 turbo and 4) returns the correct format, whereas it returns the wrong format, too, at times. I get an error when I change the types to ‘list’ and ‘dict’. Based on suggestions, I tried changing it to ‘array’ and ‘object’, which also resulted in errors.</p>
<p>Is it possible to set the type to lists and dicts?</p>
| <python><openai-api><chatgpt-api> | 2023-10-14 20:43:26 | 0 | 1,752 | Emil |
77,294,405 | 2,600,531 | handling multiprocessing.Manager() server process termination | <p>I'm writing my first multiprocessing application.</p>
<p>When the OS kills the server process spawned by multiprocessing.Manager(), whether it's due to OOM or if I manually SIGKILL it, some but not all of my child processes also terminate. My application is left in a bad state and systemctl does not restart it as I would like to have happen.</p>
<p>I tried to implement a watchdog child process to watch over its siblings and the manager's server process however it terminates with the manager process.</p>
<p>It seems my options are either:</p>
<ol>
<li>Perform the watchdog function in the parent process, or:</li>
<li>Have my child watchdog process send a keepalive to systemd via the sd_notify() API and configure systemd to restart the service absent this signal.</li>
</ol>
<p>Are there other options worthy of considering? Which is the best and why?</p>
| <python><multiprocessing><systemd><watchdog> | 2023-10-14 20:30:59 | 1 | 944 | davegravy |
77,294,353 | 2,458,922 | Multiple Gaussian curves to fit with scipy optimize curve_fit | <p>Refer to <a href="https://github.com/shuyu-wang/DSC_analysis_peak_separation/tree/main/DSC-Automatic%20multimodal%20decomposition" rel="nofollow noreferrer">https://github.com/shuyu-wang/DSC_analysis_peak_separation/tree/main/DSC-Automatic%20multimodal%20decomposition</a></p>
<p>Consider a signal that is a sum of multiple Gaussian functions which we have to curve fit. I saw a solution for two Gaussian functions:</p>
<pre><code>def func2(x, a1, a2, m1, m2, s1, s2):
return a1*np.exp(-((x-m1)/s1)**2) + a2*np.exp(-((x-m2)/s2)**2)
</code></pre>
<p>This func2 was used to with</p>
<pre><code># set parameters
AmpMin = 0.01
AmpMax = 1000
CenMin = min(x)
CenMax = max(x)
WidMin = 0.1
WidMax = 100
popt, pcov = curve_fit(func2, x, y,
bounds=([AmpMin,AmpMin, CenMin,CenMin, WidMin,WidMin],
[AmpMax,AmpMax, CenMax,CenMax, WidMax,WidMax]))
</code></pre>
<p>I would like to have an integer parameter <code>Cluster</code>. For <code>Cluster = 2</code>, the above solution works,</p>
<p>but <strong>generalizing</strong> <code>def func2</code> to <code>def funcn</code>, I would like to write:</p>
<pre><code>def fun1(x, a, m, s):
    return a*np.exp(-((x-m)/s)**2)

def funcn(x, amps, cens, wids):
    accum = fun1(x, amps[0], cens[0], wids[0])
    for i in range(1, len(amps)):
        accum = accum + fun1(x, amps[i], cens[i], wids[i])
    return accum
</code></pre>
<p>Now, how to pass <code>funcn</code> to <code>curve_fit(funcn, x, y, bounds=# ? )</code></p>
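One way — a sketch, in which the synthetic signal and initial guesses are assumptions for illustration — is to give <code>funcn</code> a flat <code>*params</code> signature and pass <code>p0</code> so <code>curve_fit</code> knows how many parameters there are. Note the flat ordering used here is interleaved per component, <code>(a1, m1, s1, a2, m2, s2, ...)</code>, unlike the grouped ordering in the bounds above:

```python
import numpy as np
from scipy.optimize import curve_fit

def fun1(x, a, m, s):
    return a * np.exp(-((x - m) / s) ** 2)

def funcn(x, *params):
    # params is flat: (a1, m1, s1, a2, m2, s2, ...)
    accum = np.zeros_like(x, dtype=float)
    for a, m, s in zip(params[0::3], params[1::3], params[2::3]):
        accum += fun1(x, a, m, s)
    return accum

Cluster = 2
x = np.linspace(-5, 10, 400)
y = fun1(x, 3, 0, 1) + fun1(x, 5, 5, 2)       # synthetic two-Gaussian signal

AmpMin, AmpMax = 0.01, 1000
CenMin, CenMax = x.min(), x.max()
WidMin, WidMax = 0.1, 100
p0    = [2, -1, 1, 4, 6, 1]                   # one rough (a, m, s) guess per component
lower = [AmpMin, CenMin, WidMin] * Cluster
upper = [AmpMax, CenMax, WidMax] * Cluster
popt, pcov = curve_fit(funcn, x, y, p0=p0, bounds=(lower, upper))
```

With a variadic model function, <code>curve_fit</code> cannot introspect the parameter count, so <code>p0</code> (of length <code>3 * Cluster</code>) is required.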
| <python><curve-fitting><gaussian><scipy-optimize><gaussian-mixture-model> | 2023-10-14 20:16:29 | 1 | 1,731 | user2458922 |
77,294,331 | 22,466,650 | How to relay the sum of each nested list to the next one? | <p>My input is this list :</p>
<pre><code>my_list = [[3, 4, -1], [0, 1], [-2], [7, 5, 8]]
</code></pre>
<p>I need to sum the nested list <code>i</code> and append that sum to the right of list <code>i+1</code>, like a relay race.</p>
<p>I have to mention that the original list should be untouched and I'm not allowed to copy it.</p>
<pre><code>wanted = [[3, 4, -1], [0, 1, 6], [-2, 7], [7, 5, 8, 5]]
</code></pre>
<p>I tried the code below but I got a list with more elements than my original one :</p>
<pre><code>from itertools import pairwise
wanted = []
for left, right in pairwise(my_list):
wanted.extend([left, right + [sum(left)]])
print(wanted)
[[3, 4, -1], [0, 1, 6], [0, 1], [-2, 1], [-2], [7, 5, 8, -2]]
</code></pre>
<p>Can you guys explain what's happening, please, or suggest another solution?</p>
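The duplicates come from `pairwise` plus `extend`: each iteration appends both members of the overlapping pair, so the interior sublists show up twice — and `sum(left)` uses the original left sublist rather than the already-augmented row (the expected output relays the augmented sums: 7 = sum([0, 1, 6])). A sketch of one alternative, building new lists so `my_list` stays untouched:

```python
my_list = [[3, 4, -1], [0, 1], [-2], [7, 5, 8]]

wanted = []
prev = None
for sub in my_list:
    # sub + [...] builds a brand-new list, so my_list itself is never modified
    row = sub + ([sum(prev)] if prev is not None else [])
    wanted.append(row)
    prev = row          # the *augmented* row carries the relay forward

print(wanted)
# -> [[3, 4, -1], [0, 1, 6], [-2, 7], [7, 5, 8, 5]]
```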
| <python><list> | 2023-10-14 20:10:34 | 6 | 1,085 | VERBOSE |
77,294,212 | 4,348,534 | How do I get the URL of a quoted retweet? | <p>Take this tweet (you will have to be logged into twitter to see it): <a href="https://twitter.com/JoeBiden/status/1712487592740978801" rel="nofollow noreferrer">https://twitter.com/JoeBiden/status/1712487592740978801</a></p>
<p>It quotes this tweet, <a href="https://twitter.com/BidenHQ/status/1712485207121465481" rel="nofollow noreferrer">https://twitter.com/BidenHQ/status/1712485207121465481</a>, which is the URL I'm trying to retrieve.</p>
<p>But when I look into the HTML, there's no <code>href</code>, and the URL doesn't appear anywhere. But it must be somewhere? How can I find it? (Other than by clicking on the quoted retweet, of course)</p>
<p>In case it's relevant, I'm using <code>selenium</code>, here's my code:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
#launching browser
try:
chrome_options = webdriver.chrome.options.Options()
driver = webdriver.Chrome(service=Service(), options=chrome_options)
except Exception as e:
print(e)
# connecting to twitter
driver.get("https://twitter.com")
login = "......"
pw = "......"
xpath = '//*[@id="react-root"]/div/div/div[2]/main/div/div/div[1]/div[1]/div/div[3]/div[5]/a'
driver.find_element(By.XPATH, xpath).click()
xpath = '//*[@id="layers"]/div[2]/div/div/div/div/div/div[2]/div[2]/div/div/div[2]/div[2]/div/div/div/div[5]/label'
driver.find_element(By.XPATH, xpath).send_keys(login)
xpath = '//*[@id="layers"]/div[2]/div/div/div/div/div/div[2]/div[2]/div/div/div[2]/div[2]/div/div/div/div[6]'
driver.find_element(By.XPATH, xpath).click()
xpath = '//*[@id="layers"]/div[2]/div/div/div/div/div/div[2]/div[2]/div/div/div[2]/div[2]/div[1]/div/div/div[3]/div/label'
driver.find_element(By.XPATH, xpath).send_keys(pw)
xpath = '//*[@id="layers"]/div[2]/div/div/div/div/div/div[2]/div[2]/div/div/div[2]/div[2]/div[2]/div/div[1]/div/div/div'
driver.find_element(By.XPATH, xpath).click()
# loading the tweet
driver.get("https://twitter.com/JoeBiden/status/1712487592740978801")
# trying to retrieve the URL of the quoted retweet
xpath = '/html/body/div[1]/div/div/div[2]/main/div/div/div/div[1]/div/section/div/div/div[1]/div/div/article/div/div/div[3]/div[2]/div/div/div[2]'
driver.find_element(By.XPATH, xpath).no_idea_what_should_go_here_to_get_the_url_of_the_quoted_tweet
</code></pre>
| <python><html><selenium-webdriver> | 2023-10-14 19:35:08 | 1 | 4,297 | François M. |
77,294,119 | 11,101,156 | How to add combination memory to an Agent with tools? | <p>I want to create an Agent that will have 2 tools and a combined memory.</p>
<ul>
<li>Tools should be a <code>python_reply_tool</code> and my custom tool
<code>search_documents</code>.</li>
<li>I do not have problem adding the tools to the agent via <code>initialize_agent()</code> function</li>
<li>But I cannot figure out how to add memory.</li>
</ul>
<p>I have successfully created an example of combined memory:</p>
<pre><code>llm_code = ChatOpenAI(temperature=0, model_name="gpt-4-0613")  # gpt-3.5-turbo-16k-0613
llm_context = ChatOpenAI(temperature=0.5, model_name="gpt-4")  # gpt-3.5-turbo

chat_history_buffer = ConversationBufferWindowMemory(
k=5,
memory_key="chat_history_buffer",
input_key="input"
)
chat_history_summary = ConversationSummaryMemory(
llm=llm_context,
memory_key="chat_history_summary",
input_key="input"
)
chat_history_KG = ConversationKGMemory(
llm=llm_context,
memory_key="chat_history_KG",
input_key="input",
)
memory = CombinedMemory(memories=[chat_history_buffer, chat_history_summary, chat_history_KG])
</code></pre>
<p>How do I now pass this memory to the agent?
I tried passing it via <code>memory=memory</code>, but without success.</p>
<p>Do I need to change the template somehow?</p>
| <python><langchain><large-language-model><py-langchain> | 2023-10-14 18:58:30 | 1 | 2,152 | Jakub Szlaur |
77,293,938 | 391,918 | No module named 'fuzzywuzzy' | <p>I have created a function <code>fuzzy_match</code> in a Postgres database. I have also installed fuzzywuzzy on my machine using the <code>pip3 install fuzzywuzzy</code> command; I can see fuzzywuzzy installed in my <code>python3.10\site_packages\</code> folder, and I can also see that I am using Python 3.10 when I run <code>python3 --version</code>.</p>
<p>However when I run the below query in dbeaver enterprise I get a <code>No module named 'fuzzywuzzy'</code> error.</p>
<p>What could be the issue?</p>
<p>running</p>
<pre><code>SELECT * FROM pg_extension WHERE extname = 'plpython3u';
</code></pre>
<p>sql shows plpython3u is installed and configured in postgres.</p>
<p><strong>Update 1:</strong></p>
<p>running following command shows <code>plpython3u version 1</code> is installed.</p>
<pre><code>SELECT * FROM pg_available_extensions WHERE name LIKE 'plpython3u%';
</code></pre>
<p><strong>function creation:</strong></p>
<pre><code>CREATE FUNCTION fuzzy_match(text, text, int) RETURNS boolean AS $$
import fuzzywuzzy.process
return fuzzy_match(text1, text2, threshold)
$$ LANGUAGE plpython3u;
</code></pre>
<p><strong>sql command:</strong></p>
<pre><code>SELECT t1.name, t2.name
FROM table1 t1, table2 t2
WHERE fuzzy_match(t1.name, t2.name, 85);
</code></pre>
| <python><postgresql><dbeaver> | 2023-10-14 17:58:20 | 0 | 4,412 | Ajit Goel |
77,293,884 | 8,152,837 | Trigger st.forms when download button is clicked in streamlit, python | <p>I've created a Streamlit app that I intend to use to gather users' email IDs. I want to show a form input whenever someone clicks on the download button for the output they created with my app.
I'm not sure how to use the <code>on_click</code> keyword for this. Let me know if there's any other solution.
This is what I have written:</p>
<h2>Helpers</h2>
<pre><code>import io
import pandas as pd
import streamlit as sl
from PIL import Image
from src import create_qr_code, create_wifi_qr
from streamlit_gsheets import GSheetsConnection
import config
def submitted():
sl.session_state.submitted = True
def reset():
sl.session_state.submitted = False
</code></pre>
<h2>Calling the source function here that generates the image. Replace that with any function you want that returns something</h2>
<pre><code>gen_btn = tab_gen.button("Generate QR Code")
download_flag = False
if gen_btn == True:
byte_im, img = create_qr_code(content=gen_content,
border=border,
scale=scale,
fg_color=fg_color,
bg_color=bg_color,
data_fg=data_fg,
data_bg=data_bg,
logo_flag=logo_flag,
logo=logo_image,
bg_flag=bg_flag,
bg_image=bg_image,
bg_url=None,
art_kind=art_kind)
tab_gen.image(img)
filename = "xyz.jpg"
</code></pre>
<h2>Next creating the form here within the tab.</h2>
<h3>When I tried it without session state, the submit button was not returning anything.</h3>
<pre><code> with tab_gen.form("email_form"):
sl.write("Please share your professional email to download the file")
email = sl.text_input("Professional email", key='email')
name = sl.text_input("Full Name", key='name')
# Every form must have a submit button.
sl.form_submit_button(label="Submit", on_click=submitted)
df = pd.DataFrame(data=None, columns=['email', 'name'])
if 'submitted' in sl.session_state:
if sl.session_state.submitted == True:
print("Printing inputs :", sl.session_state.email, sl.session_state.name)
print("Submitted Form")
data = {
'email': [sl.session_state.email],
'name': [sl.session_state.name]
}
original = conn.read(spreadsheet=config.EMAIL_SHEET)
df = pd.concat([original, pd.DataFrame(data)], ignore_index=True)
conn.update(spreadsheet=config.EMAIL_SHEET, data=df)
reset()
download_flag = True
if download_flag == True:
tab_gen.download_button("Download", data=byte_im, file_name=filename)
</code></pre>
<p><a href="https://i.sstatic.net/zBZBh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zBZBh.png" alt="screenshot of the download button in app" /></a></p>
| <python><streamlit> | 2023-10-14 17:43:06 | 0 | 469 | Sushant |
77,293,877 | 14,309,684 | How to wait for the response in Python requests? | <p>Let's say I have a code like this:</p>
<pre><code>import threading

import requests

def send_request():
response = requests.get(url="https://www.google.com/")
# wait for the response
threading.Thread(target=send_request, daemon=False).start()
</code></pre>
<p>I am sending the request on the background thread. How do I wait until I get the response? I don't want to call the <code>get()</code> on the main thread.</p>
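One common pattern — a sketch in which `fetch` is a stand-in for the real `requests.get` call — is `concurrent.futures`: `submit` returns a `Future` immediately, the main thread keeps running, and `.result()` blocks only at the point where you actually need the response:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # stand-in for requests.get(url); swap in the real call in your code
    return f"response for {url}"

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(fetch, "https://www.google.com/")
    # ... the main thread is free to do other work here ...
    response = future.result(timeout=10)   # blocks until the worker finishes

print(response)
```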
| <python><python-requests> | 2023-10-14 17:40:14 | 3 | 1,695 | Kumar |
77,293,657 | 14,829,523 | Sort pandas df based on balances | <p>I have the following dummy df:</p>
<pre><code>import pandas as pd
# Create the initial DataFrame
df = pd.DataFrame({'car': ['Pickup', 'Racer', 'Lorry', 'Luxury', 'Cabrio', 'Bicycle', 'Truck'],
'old_owner': ['1', '1', '1', '2', '2', '3', '1'],
'new_owner': ['2', '3', '2', '1', '1', '1', '1']})
print(df)
>>>
car old_owner new_owner
0 Pickup 1 2
1 Racer 1 3
2 Lorry 1 2
3 Luxury 2 1
4 Cabrio 2 1
5 Bicycle 3 1
6 Truck 1 1
</code></pre>
<p>I want to sort it such that no one's car balance ever drops more than 1 below their original count. So, whenever I take a car from a person, the next row must directly assign that person a car again.
Also, I always have to start by "removing" a car from someone.
So, I cannot do</p>
<pre><code>0 Pickup 2 1
1 Racer 1 2
</code></pre>
<p>but rather</p>
<pre><code>0 Pickup 1 2
1 Racer 2 1
</code></pre>
<p>Dummy df should look like this:</p>
<pre><code> car old_owner new_owner
0 Racer 1 3
1 Bicycle 3 1
2 Luxury 2 1
3 Pickup 1 2
4 Cabrio 2 1
5 Lorry 1 2
6 Truck 1 1
</code></pre>
<p>As follows my code:</p>
<pre><code>car_counts = df['old_owner'].append(df['new_owner']).value_counts().to_dict()
# Create a function to sort the DataFrame based on the car counts
def custom_sort(row):
if row['old_owner'] == '1':
return car_counts[row['new_owner']], car_counts[row['old_owner']]
return car_counts[row['old_owner']], -car_counts[row['new_owner']]
# Sort the DataFrame using the custom sort function
df = df.assign(sort_key=df.apply(custom_sort, axis=1))
df = df.sort_values(by='sort_key')
# Reset the index and drop the 'sort_key' column
df = df.reset_index(drop=True).drop(columns=['sort_key'])
print(df)
</code></pre>
<p>It outputs this:</p>
<pre><code>>>>
0 Bicycle 3 1
1 Racer 1 3
2 Luxury 2 1
3 Cabrio 2 1
4 Pickup 1 2
5 Lorry 1 2
6 Truck 1 1
</code></pre>
<p>As you see, row 3 & 4 should be switched as "2" would have -2 cars but can only have -1.</p>
<p>I am using 1, 2, 3 as owner numbers to make it easier, but in reality these would be higher-digit numbers like 12344 or 674849 (like an account number).</p>
| <python><pandas> | 2023-10-14 16:31:12 | 1 | 468 | Exa |
77,293,646 | 9,811,964 | How to cluster people who live close (but not too close) to each other? | <p><strong>What I have:</strong></p>
<p>I have a pandas dataframe with columns <code>latitude</code> and <code>longitude</code> which represent the spatial coordinates of the home of people.</p>
<p>This could be an example:</p>
<pre><code>import pandas as pd
data = {
"latitude": [49.5659508, 49.568089, 49.5686342, 49.5687609, 49.5695834, 49.5706579, 49.5711228, 49.5716422, 49.5717749, 49.5619579, 49.5619579, 49.5628938, 49.5628938, 49.5630028, 49.5633175, 49.56397639999999, 49.566359, 49.56643220000001, 49.56643220000001, 49.5672061, 49.567729, 49.5677449, 49.5679685, 49.5679685, 49.5688543, 49.5690616, 49.5713705],
"longitude": [10.9873409, 10.9894035, 10.9896749, 10.9887881, 10.9851579, 10.9853273, 10.9912959, 10.9910182, 10.9867083, 10.9995758, 10.9995758, 11.000319, 11.000319, 10.9990996, 10.9993819, 11.004145, 11.0003023, 10.9999593, 10.9999593, 10.9935709, 11.0011213, 10.9954016, 10.9982288, 10.9982288, 10.9975928, 10.9931367, 10.9939141],
}
df = pd.DataFrame(data)
df.head(11)
latitude longitude
0 49.565951 10.987341
1 49.568089 10.989403
2 49.568634 10.989675
3 49.568761 10.988788
4 49.569583 10.985158
5 49.570658 10.985327
6 49.571123 10.991296
7 49.571642 10.991018
8 49.571775 10.986708
9 49.561958 10.999576
10 49.561958 10.999576
</code></pre>
<p><strong>What I need:</strong></p>
<p>I need to group the people into clusters of cluster size equal to 9. This way I get clusters of neighbors. However, I do not want people with the <em>exact same spatial coordinates</em> to be in the same cluster. Since I have more than 3000 people in my dataset, there are many people (several hundred) with the exact same spatial coordinates.</p>
<p><strong>How to cluster the people?:</strong>
A great algorithm to do the clustering job is k-means-constrained. As explained in <a href="https://towardsdatascience.com/advanced-k-means-controlling-groups-sizes-and-selecting-features-a998df7e6745" rel="nofollow noreferrer">this article</a>, the algorithm allows to set the cluster size to 9. It took me a couple of lines to cluster the people.</p>
<p><strong>Problem:</strong></p>
<p>People who live in the same building (with same spatial coordinates) always get clustered into the same cluster since the goal is to cluster people who live close to each other. Therefore I have to find an automatic way, to put these people into a different cluster. But not just any different cluster, but a cluster which contains people who still live relatively close (see figure below).</p>
<p>This figure summarizes my problem:
<a href="https://i.sstatic.net/QMc5O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QMc5O.png" alt="enter image description here" /></a></p>
<p><strong>Background infos:</strong></p>
<p>This is how I cluster the people:</p>
<pre><code>from k_means_constrained import KMeansConstrained
coordinates = np.column_stack((df["latitude"], df["longitude"]))
# Define the number of clusters and the number of points per cluster
n_clusters = len(df) // 9
n_points_per_cluster = 9
# Perform k-means-constrained clustering
kmc = KMeansConstrained(n_clusters=n_clusters, size_min=n_points_per_cluster, size_max=n_points_per_cluster, random_state=0)
kmc.fit(coordinates)
# Get cluster assignments
df["cluster"] = kmc.labels_
# Print the clusters
for cluster_num in range(n_clusters):
    cluster_data = df[df["cluster"] == cluster_num][["latitude", "longitude"]]
print(f"Cluster {cluster_num + 1}:")
print(cluster_data)
</code></pre>
| <python><pandas><cluster-analysis><k-means><nearest-neighbor> | 2023-10-14 16:29:13 | 1 | 1,519 | PParker |
77,293,582 | 19,392,385 | A better way to store image with discord py? | <p>I am developing a Discord bot, and some commands require the user to input a file. The problem comes when fetching the files, since for now I am using the URL. But with Discord's new update, any Discord link older than a certain number of minutes won't display the image anymore.</p>
<p>For context I simply use the image in an embed I send and I use sqlite3 to store all the information the user input.</p>
<p>The question is how to store the image with two constraints:</p>
<ol>
<li>It has to be efficient</li>
<li>It has to be safe (avoid any malwares)</li>
</ol>
| <python><discord.py> | 2023-10-14 16:11:51 | 0 | 359 | Chris Ze Third |
77,293,479 | 2,996,578 | numpy: FutureWarning: future versions will not create a writeable array from broadcast_array. Set the writable flag explicitly to avoid this warning | <p>The following snippet raises a warning in numpy v1.24:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
x = np.array(0)
y = np.array([1, 2, 3])
x_, y_ = np.broadcast_arrays(x, y)
x_.flags.writeable
# FutureWarning: future versions will not create a writeable array from broadcast_array. Set the writable flag explicitly to avoid this warning.
</code></pre>
<p>How to mitigate?</p>
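The arrays returned by `broadcast_arrays` are views that can share (stride-0) memory, so writing through them is unsafe; the warning asks you to state your intent explicitly. A sketch of both mitigations:

```python
import numpy as np

x = np.array(0)
y = np.array([1, 2, 3])
x_, y_ = np.broadcast_arrays(x, y)

# Read-only use: mark the views read-only explicitly, which is what the warning asks for.
x_.flags.writeable = False
y_.flags.writeable = False

# Need to write? Take a real copy instead of writing through a broadcast view.
x_w = x_.copy()
x_w[0] = 99      # fine: x_w owns its own data
```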
| <python><numpy> | 2023-10-14 15:42:19 | 2 | 1,188 | ndou |
77,293,194 | 10,963,057 | Content of Page is not found in beautiful soup soup | <p>I am trying to get access to the "historical quotes". I want to save the date and the close prices in a dataframe. I use bs4 to scrape the page, but I can't find the data in the page_soup.</p>
<pre><code>import requests
from bs4 import BeautifulSoup
url = "https://www.marketscreener.com/quote/index/MSCI-WORLD-SIZE-TILT-NETR-121861272/quotes/"
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.1234.567 Safari/537.36"
}
response = requests.get(url, headers=headers)
page_content = response.text
page_soup = BeautifulSoup(page_content, "html.parser")
</code></pre>
| <python><beautifulsoup> | 2023-10-14 14:22:59 | 1 | 1,151 | Alex |
77,293,193 | 3,099,733 | NameError when using `eval` in closure | <p>Consider the following code</p>
<pre class="lang-py prettyprint-override"><code>a = 1
fn = eval('lambda: a')
fn()
</code></pre>
<p>it will work well.
But if I move the above code into a closure</p>
<pre class="lang-py prettyprint-override"><code>def x():
a = 1
fn = eval('lambda: a')
fn()
x()
</code></pre>
<p>Then I will get:</p>
<pre><code>NameError: name 'a' is not defined
</code></pre>
<p>It looks like variables define in closure are treated differently. Then I try to apply <code>global</code> to <code>a</code> and it works well again.</p>
<pre class="lang-py prettyprint-override"><code>def x():
global a
a = 1
fn = eval('lambda: a')
fn()
x()
</code></pre>
<p>Though it works now, it will introduce other issues in my code, so <code>global</code> is not acceptable. Are there any other solutions to this issue? An explanation of the reason would also be appreciated.</p>
<h2>Update</h2>
<pre class="lang-py prettyprint-override"><code>def x():
a = 1
fn = eval('lambda: a', locals())
fn()
x()
</code></pre>
<p>That works. But I'm still wondering why it is different in a closure.</p>
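An explanation sketch of why the closure case differs: the lambda built by `eval` is compiled as an independent code object, so it cannot capture `x`'s local variables the way a literal lambda would — when called, it looks `a` up in whatever globals mapping `eval` was given, and a function's fast locals are not in that mapping by default. Passing `locals()` (or an explicit dict) supplies the name, but note it is a snapshot, not a live closure:

```python
def x():
    a = 1
    # give the eval'd lambda an explicit globals mapping that contains `a`
    fn = eval('lambda: a', {'a': a})
    a = 2        # NOT seen by fn: the mapping above was a snapshot
    return fn()

print(x())   # -> 1
```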
| <python> | 2023-10-14 14:22:53 | 0 | 1,959 | link89 |
77,292,972 | 2,727,028 | How to loop over subarrays until the end of a numpy array | <p>Suppose that we have a numpy array:</p>
<pre><code>a = np.array([0,1,2,3,4,5,6,7,8,9])
</code></pre>
<p>What I'm looking for is a way to loop over subranges in the array starting from a position from the end. For example like this:</p>
<pre><code>[4 5]
[6 7]
[8 9]
</code></pre>
<p>The following code does not work:</p>
<pre><code>n = 6
m = 2
for i in range(0, n, m):
print(a[-n+i:-n+i+m])
</code></pre>
<p>It prints:</p>
<pre><code>[4 5]
[6 7]
[]
</code></pre>
<p>The problem is that in the last iteration the value to the right of : becomes 0. I can handle this with a condition like:</p>
<pre><code>if -n+i+m == 0:
print(a[-n+i:])
</code></pre>
<p>But I'm looking for a more elegant solution if possible.</p>
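The trouble is that `-n+i+m` reaches 0 on the last step, and an end index of 0 means "up to position 0", not "to the end". Two possible sketches: use positive start indices, or map a 0 end to `None`:

```python
import numpy as np

a = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
n, m = 6, 2

# Option 1: positive start indices sidestep the "-0" problem entirely.
for i in range(len(a) - n, len(a), m):
    print(a[i:i + m])

# Option 2: keep negative indices, mapping an end of 0 to None ("to the end").
for i in range(0, n, m):
    end = -n + i + m
    print(a[-n + i: end or None])
```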
| <python><numpy-slicing> | 2023-10-14 13:22:23 | 1 | 441 | Sanyo Mn |
77,292,760 | 5,119,359 | Calculate overlapping track segments | <p>My goal is to achieve something similiar to the Strava Global Heatmap Feature. So basically calculating heatmaps for tracks coming from GPX sources.</p>
<p>Lets say i have two GPX Files. I managed to import them via ogr2ogr as MultiLine Strings into a PostGIS database. Now i have two rows in my tracks table with the following geometry:</p>
<pre><code>{"type":"MultiLineString","coordinates":[[[11.363516,47.655794],[11.364301,47.655176],[11.364413,47.655137],[11.364871,47.654996],[11.365184,47.654925],[11.365453,47.655203],[11.365938,47.655518],[11.36609,47.655665],[11.366221,47.655918]]]}
{"type":"MultiLineString","coordinates":[[[11.364048,47.654866],[11.364423,47.655141],[11.364881,47.655],[11.365189,47.654965],[11.365542,47.654859],[11.365846,47.654845],[11.365935,47.654849]]]}
</code></pre>
<p>Those two tracks share (at least if you allow some tolerance) a portion of their segments:
<a href="https://i.sstatic.net/xYDvw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xYDvw.png" alt="tracks visualised" /></a></p>
<p>If i would analyse something like 100-200 tracks, some of them would share some segments, others would not.</p>
<p>My goal would be to find those shared segments, (throw away one of the segments) and to increase the "heat"/"count" in a different column.</p>
<p>I assume Strava is doing this <a href="https://medium.com/strava-engineering/the-global-heatmap-now-6x-hotter-23fc01d301de" rel="nofollow noreferrer">on a point level</a>. This works when you have billions of data points, but for my personal project it doesn't look good to just take the normal track_points and do a "normal" heatmap calculation there.</p>
<p>Some of the information i found for calculating this with spatial operations are over 5-8 years old, so i hope there is any slightly comfortable solution doing this either in some framework or via Postgis ST calculations or some python coding.</p>
<p>Any ideas on that?</p>
| <python><geospatial><postgis><spatial> | 2023-10-14 12:10:49 | 1 | 557 | fighter-ii |
77,292,752 | 235,671 | Custom formatter properties not initialized via "." when "format" specified | <p>I have a custom <code>logging.Formatter</code> that supports some additional properties, but they don't get initialized when I also specify the <code>format</code> property. If I comment the <code>format</code> line out, the <code>.</code> property properly initializes its custom properties.</p>
<pre class="lang-py prettyprint-override"><code> "custom_formatter": {
"()": MyFormatter,
"style": "{",
"datefmt": "%Y-%m-%d %H:%M:%S",
"format": "<custom-format>", # <-- when removed, then the '.' works
".": {
"custom_property": "value"
}
}
</code></pre>
<p>Why doesn't the <code>.</code> work when <code>format</code> is used?</p>
<p>The formatter is implemented like this:</p>
<pre class="lang-py prettyprint-override"><code>class MyFormatter(logging.Formatter):
custom_property: str = "."
def format(self, record: logging.LogRecord) -> str:
# ...
return super().format(record)
</code></pre>
| <python><python-logging> | 2023-10-14 12:08:40 | 1 | 19,283 | t3chb0t |
77,292,726 | 494,826 | Feeding my classifier one document at a time | <p>I want my <code>ModelBuilder</code> class to feed a <code>self._classifier = MultinomialNB()</code> with the content of some webpages I scraped. The documents are many and pretty big, so I can't load the whole set to memory. I'm reading them file by file. Here's the relevant portion of code:</p>
<pre><code>X = []
y = []
# loop over all files in my docs folder and for each file:
X.append(self._vectorize_text(file.read()))
y.append(category['label'])
# end of the loop
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
self._classifier.fit(X_train, y_train)
# ...
</code></pre>
<p>Text pre-processing and vectorization functions:</p>
<pre><code>def _vectorize_text(self, text):
preprocessed_content = preprocess_text(text)
tfidf_vector = self._vectorizer.fit_transform([preprocessed_content])
return tfidf_vector.toarray()
</code></pre>
<pre><code>def preprocess_text(text):
words = word_tokenize(text)
words = [word.lower() for word in words if word.isalnum() and word.lower() not in stopwords.words('english')]
cleaned_text = ' '.join(words)
return cleaned_text
</code></pre>
<p>I get an error at <code>classifier.fit</code> when I start training my model:</p>
<blockquote>
<p>ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (121, 1) + inhomogeneous part.</p>
</blockquote>
<p>How can I resolve this?</p>
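The inhomogeneous-shape error comes from vectorizing each document separately: every `fit_transform([one_doc])` call learns its own single-document vocabulary, so the appended arrays have different widths. A sketch of the usual fix — the tiny corpus here is made up for illustration — is to collect the preprocessed texts and fit one vectorizer over the whole corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

# in the real code: texts.append(preprocess_text(file.read())); y.append(category['label'])
texts = ["cats purr softly", "dogs bark loudly", "cats chase mice", "dogs fetch balls"]
y = ["cat", "dog", "cat", "dog"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)   # ONE shared vocabulary for every document

clf = MultinomialNB()
clf.fit(X, y)                         # the sparse matrix is fine; no .toarray() needed
```

At prediction time, new documents must go through `vectorizer.transform` (not `fit_transform`) so they land in the same feature space.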
| <python><machine-learning><scikit-learn><nlp><tfidfvectorizer> | 2023-10-14 11:59:02 | 1 | 9,430 | Fabio B. |
77,292,565 | 1,215,175 | VS Code Flask configuration seemingly causing "unable to import module" | <p>I have a Flask app that works on both macOS and Windows, but there are some "module not found" messages showing in VS Code terminal on Windows (even though the app still works fine in spite of the issues). Seems like something wrong with VS Code configuration?</p>
<p>Note that the /Website is a subfolder of the VS Code project and the VS Code root is one level up, so there is a cwd command in the launch file which perhaps is related to the issue on Windows (but works on macOS).</p>
<p>Here is the terminal output for Windows. The first run is command line entered and works fine.</p>
<pre><code>(.venv) PS C:\Users\Brett\Documents\Development\Project\Website> flask run
Running in production mode
* Debug mode: off
2023-10-14 12:37:17,733 INFO WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on http://127.0.0.1:5000 [_internal.py:_internal:_log:96]
2023-10-14 12:37:17,734 INFO Press CTRL+C to quit [_internal.py:_internal:_log:96]
(.venv) PS C:\Users\Brett\Documents\Development\Project\Website>
</code></pre>
<p>Here is the output when run from launcher:</p>
<pre><code>(.venv) PS C:\Users\Brett\Documents\Development\Project\Website> C:; cd 'C:\Users\Brett\Documents\Development\Project/Website'; & 'c:\Users\Brett\Documents\Development\Project\Website\.venv\bin\python3.exe' 'c:\Users\Brett\.vscode\extensions\ms-python.python-2023.18.0\pythonFiles\lib\python\debugpy\adapter/../..\debugpy\launcher' '58689' '--' '-m' 'flask' 'run' '--no-debugger' '--no-reload'
Unable to find python module.
Unable to find python module.
Unable to find python module.
Unable to find python module.
Unable to find python module.
Unable to find python module.
Unable to find python module.
Running in debug mode
2023-10-14 12:37:35,973 INFO Logging setup complete [__init__.py:__init__:init_app:62]
* Serving Flask app 'wsgi.py'
* Debug mode: on
2023-10-14 12:37:36,002 INFO WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on http://127.0.0.1:5000 [_internal.py:_internal:_log:96]
2023-10-14 12:37:36,002 INFO Press CTRL+C to quit [_internal.py:_internal:_log:96]
(.venv) PS C:\Users\Brett\Documents\Development\Project\Website>
</code></pre>
<p>Here is my VS Code launch.json:</p>
<pre><code>{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Flask",
            "type": "python",
            "cwd": "${workspaceFolder}/Website",
            "request": "launch",
            "module": "flask",
            "env": {
                "FLASK_APP": "wsgi.py",
                "FLASK_DEBUG": "1"
            },
            "args": [
                "run",
                "--no-debugger",
                "--no-reload"
            ],
            "jinja": true,
            "justMyCode": true
        },
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "justMyCode": true
        }
    ]
}
</code></pre>
<p>Final piece of info is that I put a breakpoint on entry on "app = init_app()" line in wsgi.py and the messages appear in terminal before the breakpoint is hit. Here is wsgi.py:</p>
<pre><code>from application import init_app
app = init_app()
if __name__ == "__main__":
    app.run(host='0.0.0.0')
</code></pre>
| <python><visual-studio-code><flask> | 2023-10-14 11:01:48 | 1 | 995 | mr.b |
77,292,564 | 1,084,174 | Getting error "ValueError: The truth value of a Series is ambiguous" when I use lambda in Python | <p>Testing pandas <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer">apply</a> API:</p>
<pre><code>import numpy as np
import pandas as pd
z = np.random.randint(1, 5, size=(5, 5))
z.sort()
df = pd.DataFrame(z)
def mkp(s):
    return s.pow(s)

mkpow = lambda s: 10 if s < 50 else s.pow(s)

df.apply(mkp)    # works fine
df.apply(mkpow)  # this line gives an error
</code></pre>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>What's the cause and how to resolve it?</p>
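<p>The error arises because <code>apply</code> passes a whole column (a Series) to the function, and <code>s &lt; 50</code> then asks for the truth value of that Series. A minimal sketch of an element-wise alternative using <code>numpy.where</code> (with a small fixed frame rather than the random one above, so both branches are exercised):</p>

```python
import numpy as np
import pandas as pd

# small illustrative frame: 2 is below the threshold, 3 is above
df = pd.DataFrame([[2, 3]])

# np.where evaluates the condition element-wise, so the ambiguous
# truth value of a whole Series is never requested
out = pd.DataFrame(np.where(df < 2.5, 10, df.pow(df)))
print(out.values.tolist())  # [[10, 27]]
```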
| <python><pandas> | 2023-10-14 11:01:35 | 1 | 40,671 | Sazzad Hissain Khan |
77,292,463 | 13,086,128 | ValueError: operands could not be broadcast together with shapes (10,) (9,) | <p>I am trying to compare 2 numpy arrays</p>
<pre><code>np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) == np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])
</code></pre>
<p>I am getting this error:</p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-3-b1eabe8aba9f> in <cell line: 1>()
----> 1 np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) == np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])
ValueError: operands could not be broadcast together with shapes (10,) (9,)
</code></pre>
<p>I am using:</p>
<pre><code>Windows 11
python 10
numpy 1.25
</code></pre>
<p><strong>Edit: This comparison was working before numpy 1.25</strong></p>
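<p>For reference, element-wise <code>==</code> requires matching (or broadcastable) shapes; a sketch of two shape-safe alternatives:</p>

```python
import numpy as np

a = np.array([0, 1, 2, 3])
b = np.array([1, 2, 3])

# np.array_equal reports False on a shape mismatch instead of raising
whole = np.array_equal(a, b)

# for an element-wise comparison, trim both arrays to the common length first
n = min(len(a), len(b))
eq = a[:n] == b[:n]
print(whole, eq.tolist())  # False [False, False, False]
```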
| <python><python-3.x><numpy> | 2023-10-14 10:22:59 | 1 | 30,560 | Talha Tayyab |
77,292,377 | 700,070 | Pass a value from a function to array value in another function | <p>I have a function:</p>
<pre><code>def latest_file_to_backup():
    # returns latest file from a directory
    # called variable $latest_file
</code></pre>
<p>I have another function which I'm attempting to make which performs an API call to upload a file:</p>
<pre><code>def auth_to_pcloud(latest_file):
    from pcloud import PyCloud
    # auth details here - properly working
    # pc.uploadfile(files=['/home/pi/<<USE:$latest_file here>>'], path='/a/remote/path/here')
</code></pre>
<p>I cannot figure out an easy way to pass the value of <code>$latest_file</code> to the <code>auth_to_pcloud()</code>.</p>
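<p>For reference, the usual pattern is to <code>return</code> the value from the first function and pass it as an argument to the second; a minimal sketch (the path and upload call are hypothetical stand-ins):</p>

```python
def latest_file_to_backup():
    # stand-in for the real directory scan; hypothetical path
    return "/home/pi/backup.tar.gz"

def auth_to_pcloud(latest_file):
    # stand-in for pc.uploadfile(files=[latest_file], path=...)
    return f"would upload {latest_file}"

latest = latest_file_to_backup()   # capture the return value
result = auth_to_pcloud(latest)    # pass it in as the argument
print(result)                      # would upload /home/pi/backup.tar.gz
```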
| <python><linux><raspberry-pi> | 2023-10-14 09:57:22 | 2 | 2,680 | Jshee |
77,292,075 | 2,540,336 | How do I extract files embedded in docx files in Python? | <p>I am trying to extract files that are embedded in docx files in Python. I have created a simple docx that contains an embedded pdf, an embedded zip and an embedded docx to begin with.</p>
<p>I have been using the code below.</p>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path
from zipfile import ZipFile
import pandas as pd
docx_path = Path('junk/test.docx')
docx_unzip_folder = Path('junk')
unzipped_files = []
with ZipFile(docx_path, mode='r') as zip:
    # locate all zip components
    for entry in zip.infolist():
        print(entry.filename)
        # unzip the zip components
        with zip.open(entry.filename, mode='r') as fzip_part:
            # replace the folders separator with underscore for convenience
            docx_unzip_file = docx_unzip_folder/(entry.filename.replace('/','_'))
            with open(docx_unzip_file, 'wb') as output_file:
                output_file.write(fzip_part.read())
            unzipped_file = {'originating docx': docx_path.name,
                             'unzipped file name': docx_unzip_file.name,
                             'unzipped file path': docx_unzip_file,
                             }
            unzipped_files.append(unzipped_file)

unzipped_files = pd.DataFrame(unzipped_files)
</code></pre>
<p>That works well in the sense that it outputs all the embedded files as</p>
<ul>
<li>word_embeddings_oleObject1.bin</li>
<li>word_embeddings_oleObject2.bin</li>
<li>...</li>
</ul>
<p><a href="https://i.sstatic.net/HMIWf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HMIWf.png" alt="enter image description here" /></a></p>
<p>By processing the information in the various xmls generated I can deduce the mime type of the embedded files and their location in the containing docx document. However, when I try to open the pdf and zip in acrobat/gzip they do not open even if I change the extension. The embedded word file opens fine.</p>
<p>Any clues?</p>
<p>PS: Please note that embedded word files behave much better because the approach above saves them with the correct extension, and they open fine. The issue is the pdfs and zip files.</p>
<p>Many thanks.</p>
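<p>One possible explanation: non-Office embeddings get wrapped in an OLE compound container, so the raw <code>.bin</code> bytes are not the pdf/zip itself (the third-party <code>olefile</code> package is the usual tool for parsing these properly). A crude heuristic sketch — an assumption, not the official route — that locates the payload by its magic bytes:</p>

```python
from typing import Optional

def extract_payload(bin_data: bytes) -> Optional[bytes]:
    # Scan the OLE wrapper for a known payload signature and return
    # everything from that point on; crude, but often good enough.
    for magic in (b"%PDF", b"PK\x03\x04"):
        idx = bin_data.find(magic)
        if idx != -1:
            return bin_data[idx:]
    return None

# synthetic demo: fake OLE header bytes followed by a pdf payload
fake_bin = b"\xd0\xcf\x11\xe0 ole junk %PDF-1.7 rest of pdf"
payload = extract_payload(fake_bin)
print(payload)  # b'%PDF-1.7 rest of pdf'
```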
| <python><ms-word><ms-office><openxml><openxml-sdk> | 2023-10-14 08:08:41 | 1 | 597 | karpan |
77,291,538 | 5,632,400 | Find kth node from the end | <p>I wrote the function below to <code>find the kth node from the end</code>. However, one hidden test case is failing.
Please let me know what is wrong with the code below.</p>
<pre><code>def find_kth_from_end(l, k):
    slow = l.head
    fast = l.head
    for _ in range(k):
        fast = fast.next
    if fast is None:
        return l.head
    while fast.next:
        slow = slow.next
        fast = fast.next
    return slow.next
</code></pre>
<br>
<pre><code>Error details
'NoneType' object has no attribute 'next'
</code></pre>
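<p>For comparison, here is a self-contained sketch of the two-pointer approach that guards against <code>k</code> exceeding the list length — the likely hidden case, since advancing <code>fast</code> past the end raises exactly this <code>NoneType</code> error. The node/list classes are minimal stand-ins for the ones in the exercise:</p>

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    def __init__(self, values):
        self.head = None
        for v in reversed(values):  # build by prepending
            node = Node(v)
            node.next = self.head
            self.head = node

def find_kth_from_end(l, k):
    fast = l.head
    for _ in range(k):
        if fast is None:        # k is larger than the list: nothing to return
            return None
        fast = fast.next
    slow = l.head
    while fast is not None:     # advance both until fast runs off the end
        slow = slow.next
        fast = fast.next
    return slow                 # slow is now k nodes from the end

lst = LinkedList([1, 2, 3, 4, 5])
print(find_kth_from_end(lst, 2).value)  # 4
print(find_kth_from_end(lst, 7))        # None
```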
| <python><python-3.x><data-structures><linked-list><singly-linked-list> | 2023-10-14 03:42:26 | 1 | 15,885 | meallhour |
77,291,444 | 17,835,656 | How can I send a file from S3 to the user in Django? | <p>I have a website that can be used to upload a file.</p>
<p>After that, users will be able to download what has been uploaded to the website.</p>
<p>I upload a file, and this file gets saved inside a bucket in AWS S3.</p>
<p>Now, how can I read that file from the bucket and send it to the user?</p>
<p>This is my function for downloading:</p>
<pre class="lang-py prettyprint-override"><code>
class Download(View):
    def get(self, request):
        try:
            if request.session["language"] == "en":
                the_path_of_templates = "website/en/"
            else:
                the_path_of_templates = "website/ar/"
        except:
            the_path_of_templates = "website/en/"
        #########################################################
        # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
        #########################################################
        try:
            the_file = KarafetaProduct.objects.all()[0]
            with open(os.path.join(BASE_DIR, str(the_file.file)), "rb") as file:
                file_data = file.read()
            response = HttpResponse(file_data, content_type='application/vnd.microsoft.portable-executable')
            response['Content-Disposition'] = 'attachment; filename="{}"'.format(str(the_file.name))
            return response
        except:
            return render(request=request, template_name=f"{the_path_of_templates}home.html")
</code></pre>
<p>This is the link to the file in S3:</p>
<blockquote>
<p>karafeta-bucket.s3.amazonaws.com\Python-3.11.5_3_q9ZN0gC.exe</p>
</blockquote>
| <python><python-3.x><django><amazon-web-services><amazon-s3> | 2023-10-14 02:50:53 | 1 | 721 | Mohammed almalki |
77,291,430 | 3,731,622 | How to use GTK with cv2.namedWindow and cv2.imshow? | <p>How can I set the backend to GTK so that when I use <code>cv2.namedWindow</code> and <code>cv2.imshow</code> to display an image the GTK backend is used?</p>
<pre><code>cv2.namedWindow('some_name_for_the_window', cv2.WINDOW_NORMAL)
cv2.imshow('some_name_for_the_window', the_image_variable)
</code></pre>
| <python><opencv><backend><gtk> | 2023-10-14 02:44:30 | 1 | 5,161 | user3731622 |
77,291,235 | 547,198 | Python: Object cannot access __bases__ attribute | <p>A class instance should have access to the class's attributes.<br />
<code>__bases__</code>, as per my understanding, is a class attribute.<br />
If I have a class called <code>C1</code>, I can call <code>C1.__bases__</code>
but if I define an instance of <code>C1</code>, <code>obj1 = C1()</code>, then <code>obj1.__bases__</code> does not work. I am surprised because, using attribute notation, the lookup should start a search in <code>C1</code>'s namespace and find <code>__bases__</code>.</p>
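<p>A short demonstration of what happens: <code>__bases__</code> is provided by the metaclass <code>type</code>, so it is found when the attribute search starts from a class (a class is an instance of <code>type</code>) but not when it starts from an ordinary instance — instance lookup only covers the instance's <code>__dict__</code> and its class's MRO, not the metaclass:</p>

```python
class C1:
    pass

obj1 = C1()

print(C1.__bases__)                # (<class 'object'>,) -- found via type
print(hasattr(obj1, '__bases__'))  # False -- metaclass attributes aren't
                                   # reachable from an instance
print(type(obj1).__bases__)        # go through the class instead
```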
| <python> | 2023-10-14 00:46:20 | 3 | 7,362 | TimeToCodeTheRoad |
77,291,177 | 22,674,380 | Errors in loading a Tensorflow .pb model: Exception encountered when calling layer "lambda_2" (type Lambda) | <p>I'm trying to load a pre-trained <code>saved_model.pb</code> model from <a href="https://github.com/NJU-Jet/SR_Mobile_Quantization" rel="nofollow noreferrer">this</a> repo (from the root of that repo, so the error is reproducible). I want to just pass an image and visualize the resulting output image. According to some <a href="https://github.com/NJU-Jet/SR_Mobile_Quantization/issues/11" rel="nofollow noreferrer">discussions</a> in the issues, this code should generally work. But I get some errors.</p>
<p>What is the problem? How can I fix the error?</p>
<pre><code>import tensorflow as tf
import cv2
import numpy as np
model_dir = './experiment/base7_D4C28_bs16ps64_lr1e-3/best_status'
image_path = 'sample.png'
image = cv2.imread(image_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image = cv2.resize(image, (256, 256))
input_data = image[np.newaxis].astype(np.float32)
model = tf.keras.models.load_model(model_dir)
output = model.predict(input_data)
output = cv2.cvtColor(output[0], cv2.COLOR_RGB2BGR)
cv2.imshow('hr', output)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>And error:</p>
<pre><code>....AppData\Local\anaconda3\Lib\site-packages\keras\src\layers\core\lambda_layer.py:327: UserWarning: solvers.networks.base7 is not loaded, but a Lambda layer uses it. It may cause errors.
  function = cls._parse_function_from_config(
Traceback (most recent call last):
  File "C:\Users\Downloads\SR_Mobile_Quantization-master\test.py", line 13, in <module>
    model = tf.keras.models.load_model(model_dir)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\AppData\Local\anaconda3\Lib\site-packages\keras\src\saving\saving_api.py", line 238, in load_model
    return legacy_sm_saving_lib.load_model(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\AppData\Local\anaconda3\Lib\site-packages\keras\src\utils\traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "C:/Users/Downloads/SR_Mobile_Quantization-master/solvers/networks/base7.py", line 27, in <lambda>
    clip_func = Lambda(lambda x: K.clip(x, 0., 255.))
                                 ^
NameError: Exception encountered when calling layer "lambda_2" (type Lambda).
name 'K' is not defined
Call arguments received by layer "lambda_2" (type Lambda):
  • inputs=tf.Tensor(shape=(None, None, None, 3), dtype=float32)
  • mask=None
  • training=None
</code></pre>
| <python><keras><deep-learning><neural-network><tensorflow2.0> | 2023-10-14 00:09:00 | 0 | 5,687 | angel_30 |
77,291,148 | 1,066,445 | Running stable diffusion gives No module named 'ldm.util' error | <p>I am trying to run Stable Diffusion with the following command</p>
<p><code>python3 scripts/txt2img.py --prompt "red velvet cake" --plms</code></p>
<p>I get the following error:</p>
<p><code>No module named 'ldm.util'; 'ldm' is not a package</code></p>
<p>I tried multiple solutions, such as running the command <code>pip install -e .</code>, but it didn't work.</p>
| <python><pip><stable-diffusion> | 2023-10-13 23:55:16 | 1 | 4,232 | Art F |
77,291,084 | 7,212,809 | MultiProcessing Signal handling: Why isn't parent task killed when child task is terminated? | <p>As per my understanding, when using "fork" context, a signal delivered to a subprocess triggers parent's handler.</p>
<p><a href="https://github.com/python/cpython/issues/75670" rel="nofollow noreferrer">https://github.com/python/cpython/issues/75670</a></p>
<p>However that doesn't seem to be the case for this example:</p>
<pre><code>import time
import asyncio
from pebble import ProcessPool
import multiprocessing
SLEEP = 10
TIMEOUT = 2
import signal
def function(seconds):
    print(f"Going to sleep {seconds}s..")
    time.sleep(seconds)
    print(f"Slept {seconds}s.")

def stop(reaons: str = "caught signals"):
    print("stopping")

async def main():
    loop = asyncio.get_running_loop()
    pool = ProcessPool(2, context=multiprocessing.get_context("fork"))
    loop.add_signal_handler(signal.SIGINT, stop)
    loop.add_signal_handler(signal.SIGTERM, stop)
    try:
        fut = pool.schedule(function, (SLEEP,), timeout=TIMEOUT)
        await asyncio.wrap_future(fut, loop=loop)
    except asyncio.exceptions.TimeoutError:
        print("caught exception")

if __name__ == '__main__':
    asyncio.run(main())
</code></pre>
<p>prints:</p>
<pre><code>Going to sleep 10s..
caught exception
</code></pre>
<p>I would have expected it to trigger the signal handler on the parent task (<code>stop</code>) as well. Why didn't that happen?</p>
| <python><multiprocessing> | 2023-10-13 23:27:01 | 1 | 7,771 | nz_21 |
77,290,835 | 4,249,338 | Negative list index in a format string | <p>The following string formatting involving list indexing works fine with positive indexes but fails with a type error on negative indexes.
I find it surprising as negative indexing is very pythonic.<br />
I am looking for a way to make that work or at least an explanation for why that's not possible.</p>
<pre class="lang-py prettyprint-override"><code> format_str0 = 'some_list[0] is {some_list[0]}'
format_str1 = 'some_list[1] is {some_list[1]}'
format_str_1 = 'some_list[-1] is {some_list[-1]}'
print(format_str0.format(some_list=[0, 1, 2])) # OK - prints some_list[0] is 0
print(format_str1.format(some_list=[0, 1, 2])) # OK - prints some_list[1] is 1
print(format_str_1.format(some_list=[0, 1, 2])) # TypeError: list indices must be integers or slices, not str
</code></pre>
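<p>A plausible explanation: <code>str.format</code>'s mini-language only treats a bracketed key as an integer index when it consists purely of digits; anything else — including <code>-1</code> — is passed to <code>__getitem__</code> as a string, which a list rejects. f-strings evaluate real expressions, so they accept negative indices:</p>

```python
some_list = [0, 1, 2]

# '0' is all digits -> used as an int index
print('{some_list[0]}'.format(some_list=some_list))  # 0

# '-1' is not all digits -> passed as the string key '-1' (hence the
# TypeError for lists); a dict with that key shows what is really looked up
print('{d[-1]}'.format(d={'-1': 'string key!'}))     # string key!

# f-strings evaluate the expression, so negative indexing works
print(f'{some_list[-1]}')                            # 2
```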
| <python> | 2023-10-13 21:56:33 | 0 | 656 | gg99 |
77,290,832 | 4,180,276 | Subprocess.run timeout not killing subprocess | <p>I'm currently on <code>python-3.8.10</code> on <code>Ubuntu 20</code> with the following code, and despite every attempt the timeout does not seem to work. I'm unsure if I am inputting it wrong.</p>
<pre><code>cmd = 'ebook-convert "%s" "%s"' % (filename, outfile)
process = subprocess.run(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, timeout=77400)
try:
    ConversionFile.set_to_finished(options.get('uuid'))
    if process.stderr and "error" in process.stderr.decode("utf-8"):
        return {'error': process.stderr.decode("utf-8")}
    return [outfile]
except Exception as e:
    print(str(e))
</code></pre>
<p>Yet the program runs longer than the timeout and never gets killed</p>
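<p>Two things worth noting: <code>timeout=77400</code> is about 21.5 hours, so it may simply never fire during testing; and with <code>shell=True</code>, the timeout kills the shell but not necessarily the children the shell spawned. A POSIX-only sketch (an assumption about your setup) that starts the command in its own session so the whole process group can be killed on timeout:</p>

```python
import os
import signal
import subprocess

def run_with_timeout(cmd, timeout):
    # start_new_session puts the shell and everything it spawns into a
    # fresh process group, so the whole tree can be killed at once
    proc = subprocess.Popen(cmd, shell=True, start_new_session=True,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    try:
        return proc.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        os.killpg(os.getpgid(proc.pid), signal.SIGKILL)  # kill the group
        proc.communicate()  # reap the dead shell
        raise

# demo: a command that would run 5s is killed after 0.5s
try:
    run_with_timeout("sleep 5", 0.5)
    timed_out = False
except subprocess.TimeoutExpired:
    timed_out = True
print(timed_out)  # True
```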
| <python><subprocess><ubuntu-20.04> | 2023-10-13 21:55:26 | 1 | 2,954 | nadermx |
77,290,740 | 901,426 | SQLite3 fetchall not actually fetching all | <p>I created a simple table with 4 rows and populated it with data. When I examine the table in the interpreter, all the data is present and correct. When I run <code>fetchall()</code> through the Python code, I only get 2 rows. What's going on?</p>
<p>Here's the table (portSettings):</p>
<pre class="lang-sql prettyprint-override"><code>portName portPrefix portOption portDeadband
-------- ---------- ---------- ------------
mInput1 min1_ 1 0.1
mInput2 min2_ 1 0.1
mInput3 min3_ 1 0.1
mInput4 min4_ 1 0.1
</code></pre>
<p>Here's the [updated] code:</p>
<pre class="lang-py prettyprint-override"><code>import sqlite3

db = sqlite3.connect(dbLocation)
ptr = db.cursor()
query = 'SELECT * FROM portSettings;'
ptr.execute(query)
print(f'whatcha got: {ptr.fetchone()}')
if ptr.fetchone():
    results = ptr.fetchall()
    print(f'--------> {ptr.rowcount} results. voila! ↓ --------')
    print(results)
</code></pre>
<p>And here are the results:</p>
<pre><code>--------> -1 results. voila! ↓ --------
[('mInput3', 'min3_', 1, 0.1), ('mInput4', 'min4_', 1, 0.1)]
</code></pre>
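<p>A self-contained reproduction with an in-memory database, showing that each <code>fetchone()</code> consumes a row from the same cursor — so by the time <code>fetchall()</code> runs, only two rows remain (and <code>rowcount</code> is -1 for SELECT statements in sqlite3):</p>

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('CREATE TABLE portSettings (portName TEXT)')
db.executemany('INSERT INTO portSettings VALUES (?)',
               [('mInput1',), ('mInput2',), ('mInput3',), ('mInput4',)])

ptr = db.execute('SELECT * FROM portSettings')
first = ptr.fetchone()   # consumes mInput1 (the debug print)
guard = ptr.fetchone()   # the `if ptr.fetchone():` consumes mInput2
rest = ptr.fetchall()    # only mInput3 and mInput4 are left
print(first, guard, len(rest), ptr.rowcount)
```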
| <python><sqlite> | 2023-10-13 21:31:15 | 1 | 867 | WhiteRau |
77,290,667 | 11,748,924 | python requests stream iterate header only instead of body | <p>This code I got from this <a href="https://medium.com/edureka/python-requests-tutorial-30edabfa6a1c" rel="nofollow noreferrer">site</a></p>
<pre><code>import requests
req = requests.get('path/to/forest.jpg', stream=True)
req.raise_for_status()
with open('Forest.jpg', 'wb') as fd:
    for chunk in req.iter_content(chunk_size=50000):
        print('Received a Chunk')
        fd.write(chunk)
</code></pre>
<p>Basically, that code shows how to fetch the body content as a stream. My case is similar, except that instead of iterating over the body content, I just want to read the <strong>HTTP header</strong>. Specifically, I want to get the <code>Set-Cookie</code> header and then terminate the connection. That way I can save download bandwidth when the body content is large and I only need the header.</p>
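<p>With <code>requests</code>, passing <code>stream=True</code> should defer the body download, so reading <code>r.headers['Set-Cookie']</code> and then calling <code>r.close()</code> ought to avoid pulling the body. The same idea with only the standard library — a local test server is used here so the sketch is self-contained, and the cookie value is made up:</p>

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Set-Cookie', 'session=abc123')  # made-up cookie
        self.send_header('Content-Length', '1000000')     # pretend large body
        self.end_headers()                                # body never sent here

    def log_message(self, *args):                         # silence logging
        pass

server = HTTPServer(('127.0.0.1', 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection('127.0.0.1', server.server_port)
conn.request('GET', '/')
resp = conn.getresponse()             # returns once the headers are parsed
cookie = resp.getheader('Set-Cookie')
conn.close()                          # drop the connection, skipping the body
server.shutdown()
print(cookie)  # session=abc123
```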
| <python><request><stream> | 2023-10-13 21:12:43 | 1 | 1,252 | Muhammad Ikhwan Perwira |
77,290,655 | 2,471,211 | Trino Python SDK - Server Side Cursor | <p>I am looking for a way to use a server side cursor with the <a href="https://github.com/trinodb/trino-python-client" rel="nofollow noreferrer">Trino Python SDK</a>.
I have a query that returns a large dataset and I want to keep memory usage in check.</p>
<p>If I use SQLAlchemy and Pandas, I am able to process it as a stream, without any issue like so:</p>
<pre><code>import pandas as pd
from trino.sqlalchemy import URL
from sqlalchemy import create_engine
engine = create_engine(
    URL(
        user=...)
)
conn = engine.connect().execution_options(stream_results=True)
query = 'SELECT * FROM "big"."dataset".table'
for chunk_dataframe in pd.read_sql(query, conn, chunksize=1000000):
    <process the chunks>
</code></pre>
<p>But I cannot find a way to do the same with the pure SDK. If I do:</p>
<pre><code>from trino.dbapi import connect
conn = connect(
host=...)
cursor = conn.cursor()
cursor.execute('SELECT * FROM "big"."dataset".table')
cursor.arraysize = 1000000
while True:
    rows = cursor.fetchmany()
    if not rows:
        break
</code></pre>
<p>It seems the entire dataset is pulled before I reach the fetchmany.</p>
<p>How can I keep a low memory profile without using SQLAlchemy?</p>
| <python><sqlalchemy><bigdata><trino> | 2023-10-13 21:08:52 | 0 | 485 | Flo |
77,290,646 | 5,924,264 | How to restrict instances of a class from accessing certain methods? | <p>I currently have a class that has several attributes and methods. I want to implement a pseudo alternative constructor for a special case initialization. I plan to use the method described in <a href="https://stackoverflow.com/a/682545">https://stackoverflow.com/a/682545</a>.</p>
<p>Here's a skeleton of what I want to do:</p>
<pre><code>class MyClass:
    def __init__(self, input1, input2, ...):
        # init the class

    @classmethod
    def alt_ctor(cls, input):
        return cls...

    def method1(self):
        # do something

    def method2(self):
        # do something

    def method3(self):
        # do something
</code></pre>
<p>For instances created using <code>alt_ctor</code>, I only want to give them access to <code>method2</code> and no access to the other methods.</p>
<p>Is there a simple way to do this in Python? The only way I know is to have some flag that's read in by <code>alt_ctor</code> and stored as an instance attribute, and then assert against that flag for each of the methods.</p>
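<p>A sketch of the flag-based approach described above, with the check centralized in a decorator so each restricted method needs only one line (class and method names are illustrative):</p>

```python
from functools import wraps

def full_access_only(method):
    # guard that blocks the call on instances flagged as restricted
    @wraps(method)
    def wrapper(self, *args, **kwargs):
        if getattr(self, "_restricted", False):
            raise AttributeError(
                f"{method.__name__} is unavailable on restricted instances")
        return method(self, *args, **kwargs)
    return wrapper

class MyClass:
    def __init__(self, value):
        self.value = value

    @classmethod
    def alt_ctor(cls, value):
        obj = cls(value)
        obj._restricted = True   # special-case flag set only here
        return obj

    @full_access_only
    def method1(self):
        return "method1"

    def method2(self):           # not guarded: available to everyone
        return "method2"

print(MyClass(1).method1())           # method1
print(MyClass.alt_ctor(1).method2())  # method2
```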
| <python><class> | 2023-10-13 21:07:06 | 1 | 2,502 | roulette01 |
77,290,510 | 1,445,660 | access fields by column name in SqlAlchemy raw select | <pre><code>games = session.execute(statement=text('select game.id as id, * from game')).all()
for game in games:
    print(game['id'])
</code></pre>
<p>I get <code>tuple indices must be integers or slices, not str</code>.
How can I access the fields by column names and not by index?</p>
| <python><sqlalchemy> | 2023-10-13 20:36:29 | 2 | 1,396 | Rony Tesler |
77,290,496 | 10,101,321 | Pathlib joining absolute path together | <p>I'm trying to join two paths together; the second starts with a leading slash.</p>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path
path = Path('/first', '/second')
# outputs WindowsPath('/second')
# expecting WindowsPath('/first/second')
print(path)
</code></pre>
<p>From the example above, you can see it removes the first path.</p>
<p>While it's not a big deal to use something like <code>removeprefix('/')</code>, I'm curious to know why it behaves this way. Is there a builtin way to change this behaviour?</p>
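<p>As far as I can tell, this is documented behavior: when a later segment is absolute, it resets the path and everything before it is dropped, mirroring <code>os.path.join</code>. There is no built-in switch, so stripping the slash is the usual fix; <code>PurePosixPath</code> is used below so the result is OS-independent:</p>

```python
from pathlib import PurePosixPath

# a later absolute segment wins and discards earlier segments
assert str(PurePosixPath('/first', '/second')) == '/second'

# make the second segment relative to nest it
assert str(PurePosixPath('/first', 'second')) == '/first/second'
assert str(PurePosixPath('/first') / '/second'.removeprefix('/')) == '/first/second'
```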
| <python><python-3.x><path><pathlib> | 2023-10-13 20:33:35 | 1 | 850 | Apollo-Roboto |
77,290,438 | 7,452,343 | Issue with multiple inheritance Python parent class | <p>I have a class <code>SeriesIDTestLoader</code> that inherits from both <code>CSVSeriesIDLoader</code> and <code>CSVTestLoader</code>.</p>
<pre><code>
class SeriesIDTestLoader(CSVSeriesIDLoader, CSVTestLoader):
    def __init__(self, series_id_col: str, main_params: dict, return_method: str, return_all=True, forecast_total=336):
        """_summary_

        :param series_id_col: _description_
        :type series_id_col: str
        :param main_params: _description_
        :type main_params: dict
        :param return_method: _description_
        :type return_method: str
        :param return_all: _description_, defaults to True
        :type return_all: bool, optional
        :param forecast_total: _description_, defaults to 336
        :type forecast_total: int, optional
        """
        CSVSeriesIDLoader.__init__(self, series_id_col, main_params, return_method, return_all)
        self.forecast_total = forecast_total

    def get_from_start_date_all(self, forecast_start: datetime, series_id: int = None):
        for df in self.listed_vals:
            dt_row = df[
                df["datetime"] == forecast_start
            ]
            revised_index = dt_row.index[0]
            return self.__getitem__(revised_index - self.forecast_history)
        return self.__getitem__(forecast_start)

    def __getitem__(self, idx: int) -> Tuple[Dict, Dict]:
        return CSVTestLoader.__getitem__(self, idx)
</code></pre>
<p>I don't understand why this results in this traceback as CSVTestLoader.init is never called and I never call super(). I just want to call init on CSVSeriesIDLoader not on CSVTestLoader (the two have a common parent and just need to be inited once but I need methods from CSVTestLoader that are quite complex)</p>
<pre><code>Traceback (most recent call last):
  File "/home/circleci/repo/tests/test_series_id.py", line 53, in test_series_test_loader
    loader_ds1 = SeriesIDTestLoader("PLANT_ID", self.dataset_params, "shit")
  File "/home/circleci/repo/flood_forecast/preprocessing/pytorch_loaders.py", line 664, in __init__
    CSVSeriesIDLoader.__init__(self, series_id_col, main_params, return_method, return_all)
  File "/home/circleci/repo/flood_forecast/preprocessing/pytorch_loaders.py", line 180, in __init__
    super().__init__(**main_params1)
TypeError: CSVTestLoader.__init__() missing 2 required positional arguments: 'df_path' and 'forecast_total'
</code></pre>
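<p>A stripped-down reproduction of the mechanics (illustrative class names, not the real loaders): even when <code>CSVSeriesIDLoader.__init__</code> is called explicitly, any <code>super().__init__()</code> inside it is resolved against <code>type(self).__mro__</code> — and for a <code>SeriesIDTestLoader</code> instance, the class after <code>CSVSeriesIDLoader</code> in the MRO is <code>CSVTestLoader</code>, so that is the <code>__init__</code> which actually gets invoked:</p>

```python
class Base:
    def __init__(self):
        self.tag = 'base'

class A(Base):                      # plays the role of CSVSeriesIDLoader
    def __init__(self):
        super().__init__()          # resolved via type(self).__mro__!

class B(Base):                      # plays the role of CSVTestLoader
    def __init__(self, df_path, forecast_total):
        self.tag = 'b'

class Child(A, B):                  # plays the role of SeriesIDTestLoader
    def __init__(self):
        A.__init__(self)            # explicit call, no super() here

print(Child.__mro__)                # Child, A, B, Base, object
try:
    Child()
except TypeError as e:
    print(e)                        # B.__init__ is missing its arguments
```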
| <python><pytorch> | 2023-10-13 20:19:45 | 1 | 918 | igodfried |
77,290,205 | 2,232,418 | Python in-memory GZIP on existing file | <p>I have a situation where I have an existing file. I want to compress this file using gzip and get the base64 encoding of this file and use this string for latter operations including sending as part of data in an API call.</p>
<p>I have the following code which works fine:</p>
<pre><code>import base64
import gzip
base64_string_to_use_later = None
with open('C:\\test.json', 'rb') as orig_file:
    with gzip.open('C:\\test.json.gz', 'wb') as zipped_file:
        zipped_file.writelines(orig_file)

with gzip.open('C:\\test.json.gz', 'rb') as zipped_file:
    base64_string_to_use_later = base64.b64encode(zipped_file.read())
</code></pre>
<p>This code will take the existing file, create a compressed version and write this back to the file system. The second block takes the compressed file, opens it and fetches the base 64 encoded version.</p>
<p>Is there a way to make this more elegant to compress the file in memory and retrieve the base64 encoded string in memory?</p>
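<p>A sketch of the in-memory version using <code>gzip.compress</code>, so no <code>.gz</code> file ever touches the file system (a throwaway temporary file stands in for <code>test.json</code>):</p>

```python
import base64
import gzip
import tempfile

def file_to_gzip_b64(path):
    # compress in memory and return the base64 text of the gzip stream
    with open(path, 'rb') as f:
        return base64.b64encode(gzip.compress(f.read())).decode('ascii')

# demo with a throwaway file standing in for C:\test.json
with tempfile.NamedTemporaryFile(suffix='.json', delete=False) as tmp:
    tmp.write(b'{"key": "value"}')

encoded = file_to_gzip_b64(tmp.name)
# round-trip check: decoding and decompressing recovers the original bytes
original = gzip.decompress(base64.b64decode(encoded))
print(original)  # b'{"key": "value"}'
```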
| <python><file><encoding><io><gzip> | 2023-10-13 19:23:27 | 1 | 2,787 | Ben |