QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,410,955 | 2,263,683 | Apache beam pipeline function does not run in parallel | <p>I have a <code>DoFn</code> function in my pipeline which runs in GCP Dataflow and is supposed to do some processing per product in parallel.</p>
<pre><code>class Step1(DoFn):
    def process(self, element):
        # Get a list of products
        for idx, item in enumerate(product_list):
            yield item, idx


class Step2(DoFn):
    def process(self, element):
        # Get index and product
        product, index = element
        logger.info(f"::: Processing product number {index} STARTED at {datetime.now()}:::::")
        # Do some process ....
        logger.info(f"::: FINISHED product number {index} at {datetime.now()}:::::")


with Pipeline(options=pipeline_options) as pipeline:
    results = (
        pipeline
        | "Read from PubSub" >> io.ReadFromPubSub()
        | "Product list" >> ParDo(Step1())
        | "Process Product" >> ParDo(Step2())
        | "Group data" >> GroupBy()
        ...
    )
</code></pre>
<p>So <code>Step2</code> is supposed to run per product in parallel. But what I actually get in the logs is:</p>
<pre><code>::: Processing product number 0 STARTED at <some_time> :::::
::: FINISHED product number 0 at <some_time>:::::
::: Processing product number 1 STARTED at <some_time> :::::
::: FINISHED product number 1 at <some_time>:::::
::: Processing product number 2 STARTED at <some_time> :::::
::: FINISHED product number 2 at <some_time>:::::
::: Processing product number 3 STARTED at <some_time> :::::
::: FINISHED product number 3 at <some_time>:::::
...
</code></pre>
<p>This shows that instead of running <code>Step2</code> in parallel, everything runs sequentially, which takes a long time to finish for a huge number of products.</p>
<p>Is there something I'm missing here? Aren't <code>ParDo</code> functions supposed to run in parallel?</p>
<p><strong>Update</strong></p>
<p>As the <a href="https://beam.apache.org/documentation/runners/direct/#execution-mode" rel="nofollow noreferrer">Apache Beam documentation</a> suggests, I tried the following options in PipelineOptions, and I double-checked that they were actually set on the job in GCP, but the result was the same:</p>
<ul>
<li><code>direct_num_workers=0</code></li>
<li><code>direct_running_mode='multi_threading'</code></li>
<li><code>direct_running_mode='multi_processing'</code></li>
</ul>
| <python><google-cloud-platform><google-cloud-dataflow><apache-beam> | 2023-11-02 15:38:31 | 1 | 15,775 | Ghasem |
77,410,933 | 7,936,386 | ModuleNotFoundError: No module named 'geocube.api'; 'geocube' is not a package | <p>I am unable to import <code>geocube</code> at all in my virtual environment, although I have installed it and my venv is activated.</p>
<pre><code>>>> from geocube.api.core import make_geocube
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\andrei.nita\Documents\PG\batch\pgtopy\geocube.py", line 10, in <module>
    from geocube.api.core import make_geocube
ModuleNotFoundError: No module named 'geocube.api'; 'geocube' is not a package
</code></pre>
<p>What is strange is that if I start a Python console from my <code>env\Scripts</code> folder and import <code>geocube</code> from there... it works. Is there a way to fix the path to the module itself? Is it something related to the Python interpreter? I get this error in conda environments as well. I am using Python 3.10.9.
My venv has the following packages:</p>
<pre><code>(env) C:\Users\MyName\Documents\MyFolder\pgtopy>pip list
Package Version
----------------- ------------
affine 2.4.0
appdirs 1.4.4
attrs 23.1.0
cachetools 5.3.2
certifi 2023.7.22
click 8.1.7
click-plugins 1.1.1
cligj 0.7.2
colorama 0.4.6
fiona 1.9.5
GDAL 3.4.3
geocube 0.4.2
geopandas 0.14.0
numpy 1.26.1
odc-geo 0.4.1
packaging 23.2
pandas 2.1.2
pip 22.3.1
psutil 5.9.6
psycopg 3.1.12
pyparsing 3.1.1
pyproj 3.6.1
python-dateutil 2.8.2
pytz 2023.3.post1
rasterio 1.3.9
rioxarray 0.15.0
scipy 1.11.3
setuptools 65.5.0
shapely 2.0.2
six 1.16.0
snuggs 1.4.7
typing_extensions 4.8.0
tzdata 2023.3
xarray 2023.10.1
</code></pre>
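The traceback itself hints at the cause: the import resolves to the local file pgtopy\geocube.py, which shadows the installed package. A hedged diagnostic sketch for this kind of shadowing, using the stdlib module json as a stand-in since geocube may not be installed here (the helper name `locate` is illustrative):

```python
import importlib.util

def locate(module_name: str) -> str:
    """Return the file Python will actually import for this module name."""
    spec = importlib.util.find_spec(module_name)
    return spec.origin if spec and spec.origin else "not found"

# Stdlib example; in the questioner's environment, locate("geocube") would
# reveal whether the name resolves to the local geocube.py instead of the
# installed package under site-packages.
print(locate("json"))
```

If `locate("geocube")` printed a path like `...\pgtopy\geocube.py`, renaming that local file (and removing its cached `.pyc`) should let the installed package import normally.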
| <python><conda><python-venv><modulenotfounderror> | 2023-11-02 15:34:58 | 1 | 619 | Andrei Niță |
77,410,905 | 5,344,240 | Visual Studio Code terminal shows multiple Conda envs | <p>I have VSCode on Windows 11. I have WSL (Ubuntu 22.04) and launch VSCode with <code>code .</code> from the project folder in the terminal. When I open the built-in terminal, it shows two conda (Anaconda) environments in parentheses, so I have no idea which one is active, if any. On subsequent <code>conda deactivate</code> you can see in the attached screenshot that the prompt and the active env change, but something is surely messed up here.</p>
<p>Also, in VSCode, when I set Python Interpreter to a conda env, in a few seconds the built-in terminal prompt picks up the change and the env name in the first parens changes to the new value.</p>
<p>Any idea how to fix it?</p>
<p>(The prompt should obviously show just one (the active) conda env and that one should change whenever the Python interpreter is updated in the command palette.)</p>
<p><a href="https://i.sstatic.net/Mw88T.png" rel="noreferrer"><img src="https://i.sstatic.net/Mw88T.png" alt="enter image description here" /></a></p>
<p>I looked into my <code>~/.bashrc</code> file, but there is just the seemingly normal <code>>>> conda initialize</code> block at the bottom that was added when installing Anaconda.</p>
| <python><visual-studio-code><environment><anaconda3> | 2023-11-02 15:31:16 | 3 | 455 | Andras Vanyolos |
77,410,704 | 10,120,952 | Pylance not working autocomplete for dynamically instantiated classes | <pre><code>from typing import Literal, overload, TypeVar, Generic, Type
import enum
import abc
import typing
class Version(enum.Enum):
Version1 = 1
Version2 = 2
Version3 = 3
import abc
from typing import Type
class Machine1BaseConfig:
@abc.abstractmethod
def __init__(self, *args, **kwargs) -> None:
pass
class Machine1Config_1(Machine1BaseConfig):
def __init__(self, fueltype, speed) -> None:
self.fueltype = fueltype
self.speed = speed
class Machine1Config_2(Machine1BaseConfig):
def __init__(self, speed, weight) -> None:
self.speed = speed
self.weight = weight
class Machine1FacadeConfig:
@classmethod
def get_version(cls, version: Version) -> Type[typing.Union[Machine1Config_1, Machine1Config_2]]:
config_map = {
Version.Version1: Machine1Config_1,
Version.Version2: Machine1Config_2,
Version.Version3: Machine1Config_2,
}
return config_map[version]
class Machine2BaseConfig:
@abc.abstractmethod
def __init__(self, *args, **kwargs) -> None:
pass
class Machine2Config_1(Machine2BaseConfig):
def __init__(self, gridsize) -> None:
self.gridsize = gridsize
class Machine2Config_2(Machine2BaseConfig):
def __init__(self, loadtype, duration) -> None:
self.loadtype = loadtype
self.duration = duration
class Machine2FacadeConfig:
@classmethod
def get_version(cls, version: Version) -> Type[typing.Union[Machine2Config_1, Machine2Config_2]]:
config_map = {
Version.Version1: Machine2Config_1,
Version.Version2: Machine2Config_1,
Version.Version3: Machine2Config_2,
}
return config_map[version]
class Factory:
def __init__(self, version: Version) -> None:
self.version = version
@property
def Machine1Config(self):
return Machine1FacadeConfig.get_version(self.version)
@property
def Machine2Config(self):
return Machine2FacadeConfig.get_version(self.version)
factory_instance = Factory(Version.Version1)
machine1_config_instance = factory_instance.Machine1Config()
machine2_config_instance = factory_instance.Machine2Config()
</code></pre>
<p>In the provided Python code, the Factory class is used to instantiate configuration objects for two different types of machines (Machine1 and Machine2) based on a specified version.
The problem is that when using Pylance/Pyright with Visual Studio Code, autocomplete does not correctly suggest parameters for the dynamically instantiated classes (Machine1Config and Machine2Config) in this factory design pattern.
How can I improve my code to enable more accurate and helpful autocompletion suggestions by Pylance for these dynamically determined types?</p>
<p>I have thought that this should somehow work with the @overload decorator, but I can't wrap my head around how to implement it.</p>
<p>Furthermore, currently with the type hint <code>Type[typing.Union[Machine1Config_1, Machine1Config_2]]</code>, Pylance suggests all keyword arguments of Machine1Config_1 and Machine1Config_2, so fueltype, speed, weight. If I leave this type hint out, there is no autocompletion at all.</p>
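A hedged sketch of the @overload route mentioned in the question, reduced to Machine1FacadeConfig only: each overload maps a Literal enum member to one concrete class, so a type checker can narrow the returned class (and hence its constructor signature) per version. This only guides the checker; runtime behaviour is unchanged:

```python
import enum
from typing import Literal, Type, Union, overload


class Version(enum.Enum):
    Version1 = 1
    Version2 = 2
    Version3 = 3


class Machine1Config_1:
    def __init__(self, fueltype, speed) -> None:
        self.fueltype, self.speed = fueltype, speed


class Machine1Config_2:
    def __init__(self, speed, weight) -> None:
        self.speed, self.weight = speed, weight


class Machine1FacadeConfig:
    # Overload signatures are seen only by the type checker.
    @overload
    @classmethod
    def get_version(cls, version: Literal[Version.Version1]) -> Type[Machine1Config_1]: ...
    @overload
    @classmethod
    def get_version(cls, version: Literal[Version.Version2, Version.Version3]) -> Type[Machine1Config_2]: ...

    @classmethod
    def get_version(cls, version: Version) -> Type[Union[Machine1Config_1, Machine1Config_2]]:
        config_map = {
            Version.Version1: Machine1Config_1,
            Version.Version2: Machine1Config_2,
            Version.Version3: Machine1Config_2,
        }
        return config_map[version]


# Pyright should now narrow this to Type[Machine1Config_1],
# so autocomplete offers only fueltype and speed.
cfg_cls = Machine1FacadeConfig.get_version(Version.Version1)
```

For the narrowing to survive through `Factory`, the `Machine1Config` property would need the same overloaded treatment (e.g. by making `Factory` generic over a `Literal` version), which is a further assumption beyond the original code.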
| <python><python-typing><pylance><pyright> | 2023-11-02 15:03:40 | 3 | 1,104 | alexp |
77,410,272 | 726,730 | Problems installing Python av in Windows 11 | <pre class="lang-py prettyprint-override"><code>C:\Windows.old\Users\chris>pip install av
Defaulting to user installation because normal site-packages is not writeable
Collecting av
Using cached av-10.0.0.tar.gz (2.4 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [74 lines of output]
Compiling av\buffer.pyx because it changed.
[1/1] Cythonizing av\buffer.pyx
Compiling av\bytesource.pyx because it changed.
[1/1] Cythonizing av\bytesource.pyx
Compiling av\descriptor.pyx because it changed.
[1/1] Cythonizing av\descriptor.pyx
Compiling av\dictionary.pyx because it changed.
[1/1] Cythonizing av\dictionary.pyx
Compiling av\enum.pyx because it changed.
[1/1] Cythonizing av\enum.pyx
Compiling av\error.pyx because it changed.
[1/1] Cythonizing av\error.pyx
Compiling av\format.pyx because it changed.
[1/1] Cythonizing av\format.pyx
Compiling av\frame.pyx because it changed.
[1/1] Cythonizing av\frame.pyx
performance hint: av\logging.pyx:232:5: Exception check on 'log_callback' will always require the GIL to be acquired.
Possible solutions:
1. Declare the function as 'noexcept' if you control the definition and you're sure you don't want the function to raise exceptions.
2. Use an 'int' return type on the function to allow an error code to be returned.
Error compiling Cython file:
------------------------------------------------------------
...
cdef const char *log_context_name(void *ptr) nogil:
cdef log_context *obj = <log_context*>ptr
return obj.name
cdef lib.AVClass log_class
log_class.item_name = log_context_name
^
------------------------------------------------------------
av\logging.pyx:216:22: Cannot assign type 'const char *(void *) except? NULL nogil' to 'const char *(*)(void *) noexcept nogil'. Exception values are incompatible. Suggest adding 'noexcept' to type 'const char *(void *) except? NULL nogil'.
Error compiling Cython file:
------------------------------------------------------------
...
# Start the magic!
# We allow the user to fully disable the logging system as it will not play
# nicely with subinterpreters due to FFmpeg-created threads.
if os.environ.get('PYAV_LOGGING') != 'off':
lib.av_log_set_callback(log_callback)
^
------------------------------------------------------------
av\logging.pyx:351:28: Cannot assign type 'void (void *, int, const char *, va_list) except * nogil' to 'av_log_callback'. Exception values are incompatible. Suggest adding 'noexcept' to type 'void (void *, int, const char *, va_list) except * nogil'.
Compiling av\logging.pyx because it changed.
[1/1] Cythonizing av\logging.pyx
Traceback (most recent call last):
File "C:\Program Files\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Program Files\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python312\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\chris\AppData\Local\Temp\pip-build-env-4nz2e7u1\overlay\Lib\site-packages\setuptools\build_meta.py", line 355, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\chris\AppData\Local\Temp\pip-build-env-4nz2e7u1\overlay\Lib\site-packages\setuptools\build_meta.py", line 325, in _get_build_requires
self.run_setup()
File "C:\Users\chris\AppData\Local\Temp\pip-build-env-4nz2e7u1\overlay\Lib\site-packages\setuptools\build_meta.py", line 507, in run_setup
super(_BuildMetaLegacyBackend, self).run_setup(setup_script=setup_script)
File "C:\Users\chris\AppData\Local\Temp\pip-build-env-4nz2e7u1\overlay\Lib\site-packages\setuptools\build_meta.py", line 341, in run_setup
exec(code, locals())
File "<string>", line 157, in <module>
File "C:\Users\chris\AppData\Local\Temp\pip-build-env-4nz2e7u1\overlay\Lib\site-packages\Cython\Build\Dependencies.py", line 1154, in cythonize
cythonize_one(*args)
File "C:\Users\chris\AppData\Local\Temp\pip-build-env-4nz2e7u1\overlay\Lib\site-packages\Cython\Build\Dependencies.py", line 1321, in cythonize_one
raise CompileError(None, pyx_file)
Cython.Compiler.Errors.CompileError: av\logging.pyx
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
[notice] A new release of pip is available: 23.2.1 -> 23.3.1
[notice] To update, run: python.exe -m pip install --upgrade pip
</code></pre>
<p>I have no idea what this error means. What can I try next?</p>
| <python><windows><pip><pyav> | 2023-11-02 14:06:31 | 1 | 2,427 | Chris P |
77,410,240 | 10,750,541 | How to obtain a list of [index, column, value] lists where the value is neither 0 nor missing? | <p>Assuming that the starting point is a dataframe like <em><strong>df</strong></em>, I am trying to achieve the list <em><strong>from_to_value</strong></em> in a pythonic way, meaning without iterating through the dataframe:</p>
<pre class="lang-py prettyprint-override"><code>>>> df
    A    B  C
r1  5  NaN  0
r2  0    6  0
r3  1    4  0
>>> from_to_value
[['r1', 'A', 5],
 ['r2', 'B', 6],
 ['r3', 'A', 1],
 ['r3', 'B', 4]]
</code></pre>
<p>I have been looking at <a href="https://stackoverflow.com/questions/72215476/get-index-column-pair-where-the-value-is-true">this post</a> too, but I am not really able to understand how it works in order to utilise it.</p>
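One loop-free way to get the desired output, sketched here under the assumption that zeros should be treated exactly like missing values: replace 0 with NaN so that stacking plus dropna() removes both, then read the (index, column) pairs off the stacked MultiIndex:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {"A": [5, 0, 1], "B": [np.nan, 6, 4], "C": [0, 0, 0]},
    index=["r1", "r2", "r3"],
)

# 0 -> NaN, then stack rows into a Series keyed by (index, column);
# dropna() discards both the original NaNs and the former zeros
stacked = df.replace(0, np.nan).stack().dropna()

# each key is an (index, column) tuple; values became floats when NaN was
# introduced, so cast back to int for the final triples
from_to_value = [[idx, col, int(v)] for (idx, col), v in stacked.items()]
print(from_to_value)
```

`stack()` walks the rows in order and the columns in their original order within each row, which happens to match the ordering shown in the question.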
| <python><pandas> | 2023-11-02 14:02:26 | 2 | 532 | Newbielp |
77,410,211 | 409,461 | Pythonic way to process a synchronous event (callback) in an async environment | <p>I'm looking for a solution with which I could use the Python socketIO <code>disconnect</code> event and forward this into a fully asynchronous host class.</p>
<pre><code>@self._sio.event
def disconnect():
    ''' Platform disconnected '''
    logging.info("disconnected")
    self._delegate.signaling_client_left()
</code></pre>
<p><code>self._delegate.signaling_client_left()</code> points to a non-async function. How could I make this an async function? I tried with <code>asyncio.run()</code>, but this leads to <code>asyncio.run() cannot be called from a running event loop</code></p>
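A common pattern for this situation is to capture the already-running loop when the object is set up and have the synchronous callback hand the coroutine to it with asyncio.run_coroutine_threadsafe, instead of calling asyncio.run(). A minimal sketch, with the Host class standing in for the async delegate (names are illustrative, not from python-socketio):

```python
import asyncio


class Host:
    """Stands in for the fully asynchronous delegate."""
    def __init__(self):
        self.left = False

    async def signaling_client_left(self):
        self.left = True


def make_disconnect_handler(loop: asyncio.AbstractEventLoop, delegate: Host):
    # A plain synchronous function, suitable for a socketIO event callback.
    # It schedules the coroutine on the running loop rather than starting
    # a new one, which is what asyncio.run() would (illegally) try to do.
    def disconnect():
        asyncio.run_coroutine_threadsafe(delegate.signaling_client_left(), loop)
    return disconnect


async def main():
    loop = asyncio.get_running_loop()
    host = Host()
    disconnect = make_disconnect_handler(loop, host)
    disconnect()                # as the socketIO client would call it
    await asyncio.sleep(0.01)   # give the scheduled coroutine a chance to run
    return host.left


result = asyncio.run(main())
print(result)
```

If the callback fires on the loop's own thread, `loop.create_task(...)` would also work; `run_coroutine_threadsafe` has the advantage of being safe from other threads too.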
| <python><async-await><python-socketio> | 2023-11-02 13:59:09 | 1 | 837 | decades |
77,410,154 | 471,136 | How to format currency column in streamlit for dataframes? | <p>I have the following piece of code:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import streamlit as st

cashflow = pd.DataFrame(data=[
    (2023, 10000),
    (2022, 95000),
    (2021, 90000),
    (2020, 80000),
], columns=['year', 'cashflow'])

st.dataframe(
    data=cashflow,
    column_config={
        'year': st.column_config.NumberColumn(format='%d'),
        'cashflow': st.column_config.NumberColumn(label='cashflow', format='$%.0f'),
    }
)
</code></pre>
<p>But this outputs the <code>cashflow</code> column as <code>$95000</code>, while I want it to output <code>$95,000</code>.</p>
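For reference, the target rendering matches Python's own format spec with a thousands separator (`{:,.0f}`). Whether streamlit's printf-style `format` string supports grouping may depend on the streamlit version, so one hedged fallback is to pre-format a display column as strings (at the cost of numeric sorting in the widget):

```python
import pandas as pd

cashflow = pd.DataFrame(
    data=[(2023, 10000), (2022, 95000), (2021, 90000), (2020, 80000)],
    columns=["year", "cashflow"],
)

# Python's format mini-language handles the grouping directly: "," inserts
# thousands separators, ".0f" drops the decimals.
cashflow["cashflow_display"] = cashflow["cashflow"].map(lambda v: f"${v:,.0f}")
print(cashflow["cashflow_display"].tolist())
```

The string column could then be shown via a plain `TextColumn`; keeping the original numeric column hidden preserves the data for any downstream computation.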
| <python><pandas><streamlit> | 2023-11-02 13:52:09 | 2 | 33,697 | pathikrit |
77,410,125 | 2,112,406 | pybind11 change elements in list in a class from the python side | <p>Suppose I have a class <code>Smol</code>:</p>
<pre class="lang-c++ prettyprint-override"><code>class Smol {
    std::string name;
    int value;

public:
    Smol(std::string name_, int val){
        name = name_;
        value = val;
    }
    void set_name(std::string new_name){
        name = new_name;
    }
    std::string get_name(){
        return name;
    }
};
</code></pre>
<p>and <code>Big</code> that will contain a vector of <code>Smol</code>s:</p>
<pre class="lang-c++ prettyprint-override"><code>class Big {
    std::vector<Smol> contents;
    std::string name;

public:
    Big(){};
    void set_contents(std::vector<Smol> contents_){
        contents = contents_;
    }
    std::vector<Smol> get_contents(){
        return contents;
    }
    void add_element(Smol element){
        contents.push_back(element);
    }
};
</code></pre>
<p>I expose them to python as follows:</p>
<pre class="lang-c++ prettyprint-override"><code>PYBIND11_MODULE(big_smol_test, m){
    m.doc() = "testing one two";

    py::class_<Smol>(m, "Smol")
        .def("get_name", &Smol::get_name)
        .def("set_name", &Smol::set_name)
        .def(py::init<std::string, int>())
        .def_property("name", &Smol::get_name, &Smol::set_name)
        ;

    py::class_<Big>(m, "Big")
        .def("set_contents", &Big::set_contents)
        .def("get_contents", &Big::get_contents)
        .def(py::init<>())
        .def_property("contents", &Big::get_contents, &Big::set_contents)
        ;
}
</code></pre>
<p>I want to be able to edit the names of <code>Smol</code>s contained in an instance of <code>Big</code>:</p>
<pre class="lang-python prettyprint-override"><code>from build.big_smol_test import Big, Smol
s1 = Smol("foo", 1)
s2 = Smol("bar", 5)
b = Big()
b.set_contents([s1, s2])
b.contents[0].name = "lorem"
</code></pre>
<p>But the name remains as <code>"foo"</code>. How would I achieve this? Not sure if I want to make <code>contents</code> a vector of pointers, as I'd like to be able to change <code>s1</code> without affecting <code>b</code>, for instance.</p>
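What the binding does here can be mimicked in plain Python: get_contents returns the std::vector by value, so b.contents is a brand-new list of element copies, and the assignment mutates a temporary that is discarded immediately. A hedged analogy of that copy semantics (not the real pybind11 machinery):

```python
import copy


class Smol:
    def __init__(self, name, value):
        self.name = name
        self.value = value


class Big:
    def __init__(self):
        self._contents = []

    def set_contents(self, contents):
        # std::vector<Smol> copies its elements on assignment
        self._contents = [copy.copy(s) for s in contents]

    @property
    def contents(self):
        # returning std::vector<Smol> by value hands Python a NEW list
        # of copies on every access
        return [copy.copy(s) for s in self._contents]


s1 = Smol("foo", 1)
b = Big()
b.set_contents([s1])

b.contents[0].name = "lorem"   # mutates a copy inside a temporary list
print(b.contents[0].name)      # still 'foo'
```

If reference semantics are wanted, common pybind11 approaches are storing `std::shared_ptr<Smol>` in the vector or binding the container opaquely with `PYBIND11_MAKE_OPAQUE`, though both change the copy behaviour the question says it relies on elsewhere.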
| <python><c++><pybind11> | 2023-11-02 13:47:17 | 0 | 3,203 | sodiumnitrate |
77,410,108 | 11,725,056 | How to efficiently get all the valid re-ranked permutations from a list when it has duplicate ranks inside it? | <p>Let's suppose two people got the similar marks or scores. Theoretically you can assign them the same rank but what if there is a constraint that there can't be duplicate ranks.</p>
<p>Examples:</p>
<ul>
<li>Input: [1,1,2] can be written as Output: [1,2,3], [2,1,3]</li>
<li>Input: [1,1,1] can be written as ANY permutation, because any position can take any rank. Output: [1,2,3], [1,3,2], [2,1,3], ..., [3,2,1]</li>
<li>Input: [2,1,2] can be written as Output: [2,1,3], [3,1,2]</li>
<li>Input: [1,2,3] has no other permutations so Output = None</li>
<li>Input: [1,2,2] can be Output: [1,2,3], [1,3,2]</li>
</ul>
<p>There is <a href="https://stackoverflow.com/questions/14671013/ranking-of-numpy-array-with-possible-duplicates">another solution</a>, but it gives repeated ranks, which I can't use.</p>
<p>I have come up with this code so far, which generates the sampling pool from the existing list:</p>
<pre><code>from collections import Counter
from itertools import permutations
import random

ranks = [1,2,2,3,3,1,1,2,4,2]
counter = Counter(ranks)
sorted_counter = dict(sorted(counter.items(), key = lambda x: x[0]))
print(sorted_counter)

range_to_from = {1:(1,counter[1])}
sample_from = {}  # wherever you find any rank "i", you can replace it with any number from the range

for key,val in sorted_counter.items():
    if key == 1:
        sample_from[1] = list(range(val+1))
        continue
    else:
        start, end = (range_to_from[key-1][-1]+1, range_to_from[key-1][-1]+val)
        range_to_from[key] = (start,end)
        sample_from[key] = list(range(start,end+1))
</code></pre>
<p>If you do the below in a <code>while i != all_possible_perms</code> loop, you'll get all possible values, but it'll be too costly to compare every generated list against the ones already produced:</p>
<pre><code># Get single permutation from the list
new_ranks = []
for item in ranks:
    sub = random.choice(sample_from[item])
    sample_from[item].remove(sub)
    new_ranks.append(sub)
new_ranks
</code></pre>
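Assuming the intended rule (consistent with the examples) is that tied positions share a consecutive block of new ranks and may permute only among themselves, the valid re-rankings can be enumerated directly instead of generating and filtering all N! permutations; a sketch:

```python
from collections import defaultdict
from itertools import permutations, product


def rerank(ranks):
    # group positions by their original rank
    groups = defaultdict(list)
    for pos, r in enumerate(ranks):
        groups[r].append(pos)

    # each rank group owns a consecutive block of new ranks, in rank order
    blocks, start = [], 1
    for r in sorted(groups):
        positions = groups[r]
        block = range(start, start + len(positions))
        blocks.append((positions, list(permutations(block))))
        start += len(positions)

    # cartesian product of the per-group permutations yields every valid output
    results = []
    for combo in product(*(perms for _, perms in blocks)):
        out = [0] * len(ranks)
        for (positions, _), perm in zip(blocks, combo):
            for pos, new_rank in zip(positions, perm):
                out[pos] = new_rank
        results.append(out)
    return results


print(rerank([1, 1, 2]))  # [[1, 2, 3], [2, 1, 3]]
print(rerank([2, 1, 2]))  # [[2, 1, 3], [3, 1, 2]]
```

For [1,1,1] this yields all 3! permutations, and for [1,2,3] it yields only the input itself (which the question maps to None). The cost is the product of the factorials of the tie-group sizes, far below N! when ties are small.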
| <python><python-3.x><algorithm><permutation><ranking> | 2023-11-02 13:44:44 | 1 | 4,292 | Deshwal |
77,409,954 | 5,084,560 | Polars and Pandas DataFrame consume almost the same memory. Where is the advantage of Polars? | <p>I wanted to compare memory consumption for the same dataset. I read the same SQL query with pandas and Polars from an Oracle DB. The memory usage results are almost the same, and pandas was about two times faster than Polars. I expected Polars to be more memory efficient.</p>
<p>Can anyone explain this? And is there any way to reduce the memory usage for the same dataset?</p>
<p>Polars Read SQL:
<a href="https://i.sstatic.net/vf2HN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vf2HN.png" alt="enter image description here" /></a></p>
<p>Pandas Read SQL:
<a href="https://i.sstatic.net/ru6Wz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ru6Wz.png" alt="enter image description here" /></a></p>
<p>result(polars) and data(pandas) shapes:</p>
<p><a href="https://i.sstatic.net/zUlJh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zUlJh.png" alt="enter image description here" /></a></p>
<p>and lastly memory usages:</p>
<p><a href="https://i.sstatic.net/mAhLq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mAhLq.png" alt="enter image description here" /></a></p>
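Whichever library is used, the biggest memory lever for a freshly loaded SQL result is usually the dtypes rather than the dataframe engine itself. A hedged pandas-side sketch of two standard reductions, downcasting integers and converting low-cardinality strings to categoricals (the same idea applies in Polars via smaller integer types and the Categorical dtype); column names and sizes here are made up:

```python
import numpy as np
import pandas as pd

# stand-in for a table read from the database
df = pd.DataFrame({
    "id": np.arange(100_000, dtype=np.int64),
    "status": np.random.choice(["open", "closed"], size=100_000),
})
before = df.memory_usage(deep=True).sum()

df["id"] = pd.to_numeric(df["id"], downcast="integer")  # int64 -> smallest fitting int
df["status"] = df["status"].astype("category")          # repeated strings -> small codes

after = df.memory_usage(deep=True).sum()
print(before, after)
```

Note also that peak memory during the read can exceed the final dataframe size in both libraries, since the driver materialises intermediate row buffers.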
| <python><pandas><dataframe><python-polars> | 2023-11-02 13:23:09 | 2 | 305 | Atacan |
77,409,924 | 9,301,805 | Is there a "Highlight selected word" feature for Jupyter Notebook 7? | <p>I recently upgraded my Notebook to Notebook 7 (on a Windows machine with Python 3.11). I would like to have a "Highlight selected word" feature like <a href="https://jupyter-contrib-nbextensions.readthedocs.io/en/latest/install.html" rel="noreferrer">the one in</a> <code>jupyter-contrib-nbextensions</code>. I am not able to use <code>jupyter-contrib-nbextensions</code> even though it is installed, so my question is: is there a "Highlight selected word" feature for Jupyter Notebook 7?</p>
<p>(I don't want to downgrade my Notebook version)</p>
| <python><jupyter-notebook><jupyter-extensions> | 2023-11-02 13:19:40 | 0 | 1,575 | user88484 |
77,409,918 | 427,083 | In Django admin, how do I manage complex user privileges? | <p>Suppose you have this entity structure in Django with to-many relationships:</p>
<pre><code>- Company
- Division
- Department
- Unit
</code></pre>
<p>Every user is part of a <code>Unit</code>, so I managed the rights to create, edit and delete entities in <em>Django Admin</em> by assigning <em>groups</em> for users that had <code>Department</code>, <code>Division</code> or <code>Company</code> privileges. E.g. "DepartmentAdmin", "DivisionAdmin", etc. To determine the correct privilege, I just followed the <code>Unit</code> up the chain. Easy enough.</p>
<p>Now I have to refactor: users can now be members of more than one <code>Unit</code> (perhaps keeping their "main" <code>Unit</code>). They can have arbitrary rights to administer any other <code>Department</code>, <code>Division</code>, etc..</p>
<p>How would I best model that?</p>
<p>I have tried with a new entity <code>Membership</code>, a kind of join table with additional information about the role / privileges. Is this a good idea? I found that this creates a lot of dependencies elsewhere and determining the rights is quite complicated.</p>
<p>e.g.</p>
<pre class="lang-py prettyprint-override"><code>class Membership(models.Model):
    user = models.ForeignKey(User,
        on_delete=models.CASCADE, related_name="other_departments")
    department = models.ForeignKey(Department,
        on_delete=models.CASCADE, related_name="other_members")
    position = models.CharField(_("Position"), max_length=32, null=True, blank=True)
</code></pre>
<p>Or is there a better way by leveraging <strong>the existing admin rights management</strong> features of Django? How would you go about this?</p>
<p>Thanks!</p>
| <python><django><django-models><data-structures><django-admin> | 2023-11-02 13:18:45 | 1 | 80,257 | Mundi |
77,409,895 | 11,049,863 | Is it possible to configure my celery periodic tasks outside of the Django settings file? I can't import my tasks into settings | <p>Here's how my project is organized.<br/></p>
<pre><code>electron/
    ...
    electron/              <----- sources root (pycharm)
        ...
        electron/
            ...
            celery.py
            settings.py
        tasks/
            tasks.py       <----- Here are my tasks
</code></pre>
<p>celery beat configuration in settings.py</p>
<pre><code>from celery.schedules import crontab
from tasks import tasks

CELERY_BEAT_SCHEDULE = {
    "toutes-les-minutes": {
        "task": "tasks.tache_journaliere",
        "schedule": crontab(minute="*/1"),
    },
}
</code></pre>
</code></pre>
<p>When I import my tasks like this: <code>from ..tasks import tasks</code>, I get the following error:</p>
<pre><code>ImportError: attempted relative import beyond top-level package
</code></pre>
<p>When I import tasks like this: <code>from tasks import tasks</code>, the following error occurs:</p>
<pre><code>django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
</code></pre>
<p>I've been trying to fix these errors for a while, but nothing works. I even looked for solutions here, and I still can't resolve it.</p>
<p>Is it not possible to configure periodic tasks outside of the settings.py file? (in the celery.py file for example)</p>
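One point worth noting: Celery resolves beat entries by the task's registered name, which is a plain string, so the schedule in settings.py does not need to import the tasks module at all. A hedged config sketch, assuming the task ends up registered as `tasks.tasks.tache_journaliere` (the exact name is shown in the worker's startup banner):

```python
# settings.py -- no "from tasks import tasks" needed
from celery.schedules import crontab

CELERY_BEAT_SCHEDULE = {
    "toutes-les-minutes": {
        # the task is referenced by its registered dotted name as a string,
        # so Django's app registry does not have to be loaded yet
        "task": "tasks.tasks.tache_journaliere",
        "schedule": crontab(minute="*/1"),
    },
}
```

The same dictionary can equally live in celery.py as `app.conf.beat_schedule = {...}` after the Celery app object is created, which answers the question of moving it out of settings.py entirely.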
| <python><django><django-celery-beat> | 2023-11-02 13:14:53 | 0 | 385 | leauradmin |
77,409,881 | 2,581,506 | How to write python INFO logs to the output and DEBUG logs to a file? | <p>I searched multiple similar questions, but I couldn't find an answer.</p>
<p>Here is an example:</p>
<pre><code>logging.info("This should show both in a file and the output")
logging.debug("This should show only in a file")
</code></pre>
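The standard-library answer is one logger with two handlers at different levels: the logger passes everything through, the file handler keeps DEBUG and above, the console handler keeps only INFO and above. A minimal sketch (the log file path is arbitrary); note that bare `logging.info(...)` goes through the root logger, so either attach the handlers to `logging.getLogger()` or use a named logger as below:

```python
import logging
import os
import sys
import tempfile

log_path = os.path.join(tempfile.gettempdir(), "app_demo.log")

logger = logging.getLogger("app")
logger.setLevel(logging.DEBUG)            # the logger lets everything through

file_handler = logging.FileHandler(log_path, mode="w")
file_handler.setLevel(logging.DEBUG)      # the file receives DEBUG and above

console_handler = logging.StreamHandler(sys.stdout)
console_handler.setLevel(logging.INFO)    # the console receives INFO and above

formatter = logging.Formatter("%(levelname)s %(message)s")
for handler in (file_handler, console_handler):
    handler.setFormatter(formatter)
    logger.addHandler(handler)

logger.info("This should show both in a file and the output")
logger.debug("This should show only in a file")

file_handler.flush()
with open(log_path) as f:
    log_text = f.read()
print(log_text)
```

Each record is offered to every handler, and each handler applies its own level filter, which is exactly the split the question asks for.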
| <python><logging> | 2023-11-02 13:13:24 | 1 | 1,249 | Eran H. |
77,409,748 | 7,534,658 | How to infer a Python class generic based on a method input? | <p>I want to write a pipeline using Python like this:</p>
<pre class="lang-py prettyprint-override"><code>from collections.abc import Callable
from typing import Generic, TypeVar

T = TypeVar("T")
U = TypeVar("U")
V = TypeVar("V")


class Pipeline(Generic[T, U]):
    def pipe(self, _cb: Callable[[U], V]) -> "Pipeline[T, V]":
        ...


def int_to_str(x: int) -> str:
    return str(x)


a = Pipeline()
a = a.pipe(int_to_str)
# the inferred type is Pipeline[Unknown, str]
# I would like it to be Pipeline[int, str]
</code></pre>
<p>Is there any way for Python to understand that, because the first function passed takes an int as input, the pipeline's first generic type should be int?</p>
<p>I tried to play with @overload etc., but I can't make it work.</p>
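One workaround that makes T inferrable is to let the first callable enter through a dedicated constructor function, so the checker can read T and U off that callable's signature instead of off a bare `Pipeline()` call. A hedged sketch (the `start` helper and the runtime `run` method are additions for illustration):

```python
from collections.abc import Callable
from typing import Generic, TypeVar

T = TypeVar("T")
U = TypeVar("U")
V = TypeVar("V")


class Pipeline(Generic[T, U]):
    def __init__(self, funcs=None) -> None:
        self._funcs = list(funcs or [])

    def pipe(self, cb: "Callable[[U], V]") -> "Pipeline[T, V]":
        return Pipeline(self._funcs + [cb])

    def run(self, value: T) -> U:
        for func in self._funcs:
            value = func(value)
        return value


def start(first: "Callable[[T], U]") -> "Pipeline[T, U]":
    # the checker binds T and U from the first callable's signature here
    return Pipeline([first])


def int_to_str(x: int) -> str:
    return str(x)


p = start(int_to_str)   # a checker should infer Pipeline[int, str]
print(p.run(42))
```

The underlying limitation is that generics are bound at construction, so `Pipeline()` with no arguments gives the checker nothing to infer T from; routing the first step through `start` (or an equivalent classmethod) supplies that information.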
| <python><python-typing> | 2023-11-02 12:56:25 | 0 | 631 | p9f |
77,409,729 | 1,757,224 | Unknown character for Turkish character | <p>I have a dataframe consisting of two columns: (1) Turkish cities, (2) corresponding values.</p>
<pre><code>dict_ = {'City': {0: 'ADANA',
1: 'ANKARA',
2: 'ANTALYA',
3: 'AYDIN',
4: 'BALIKESİR',
5: 'BURSA',
6: 'DENİZLİ',
7: 'DÜZCE',
8: 'DİYARBAKIR',
9: 'ELAZIĞ',
10: 'GAZİANTEP',
11: 'GİRESUN',
12: 'HATAY',
13: 'KAHRAMANMARAŞ',
14: 'KARABÜK',
15: 'KARS',
16: 'KAYSERİ',
17: 'KIRIKKALE',
18: 'KIRKLARELİ',
19: 'KIRŞEHİR',
20: 'KOCAELİ',
21: 'KONYA',
22: 'KÜTAHYA',
23: 'MANİSA',
24: 'MARDİN',
25: 'MERSİN',
26: 'MUĞLA',
27: 'ORDU',
28: 'OSMANİYE',
29: 'SAKARYA',
30: 'SAMSUN',
31: 'TRABZON',
32: 'UŞAK',
33: 'YALOVA',
34: 'ZONGULDAK',
35: 'ÇORUM',
36: 'İSTANBUL',
37: 'İZMİR'},
'Value': {0: 15,
1: 25,
2: 19,
3: 2,
4: 6,
5: 5,
6: 3,
7: 1,
8: 1,
9: 1,
10: 7,
11: 2,
12: 31,
13: 5,
14: 1,
15: 1,
16: 4,
17: 5,
18: 1,
19: 1,
20: 6,
21: 4,
22: 2,
23: 1,
24: 1,
25: 5,
26: 5,
27: 4,
28: 3,
29: 2,
30: 3,
31: 2,
32: 2,
33: 1,
34: 2,
35: 2,
36: 221,
37: 6}}
data = pd.DataFrame(dict_)
</code></pre>
<p>When I try to capitalize the <code>City</code> column (where the first letter is uppercase and the rest is lowercase), I am having a weird character issue.</p>
<pre><code>data['City'].apply(str.capitalize)
</code></pre>
<p>The lowercase version of "İ" changes to a character that I cannot identify, for example:</p>
<p><a href="https://i.sstatic.net/EPho9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EPho9.png" alt="enter image description here" /></a></p>
<p>or</p>
<p><a href="https://i.sstatic.net/nqgzF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nqgzF.png" alt="enter image description here" /></a></p>
<pre><code>import unicodedata
unicodedata.name("i̇")
# TypeError: name() argument 1 must be a unicode character, not str
</code></pre>
<p>I tried many solutions but to no avail!</p>
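The root cause is that Unicode lowercasing maps 'İ' to 'i' followed by U+0307 COMBINING DOT ABOVE, a two-character string, which is also why `unicodedata.name` rejects it. One hedged workaround is to apply the Turkish case pairs by hand before lowercasing (the helper name is illustrative, and it assumes the input is fully uppercase, as in this dataset):

```python
def turkish_capitalize(s: str) -> str:
    # Map the Turkish dotted/dotless I pairs explicitly, then lowercase
    # the remainder; the first character is kept as-is (already uppercase).
    head, tail = s[0], s[1:]
    tail = tail.replace("İ", "i").replace("I", "ı").lower()
    return head + tail


print(turkish_capitalize("İSTANBUL"))     # İstanbul
print(turkish_capitalize("DİYARBAKIR"))   # Diyarbakır
```

This could be applied with `data['City'].apply(turkish_capitalize)`. Locale-aware libraries (e.g. PyICU with the Turkish locale) are the more general alternative when the data is not guaranteed to be uppercase.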
| <python><pandas><string><capitalization><capitalize> | 2023-11-02 12:54:20 | 3 | 973 | ARAT |
77,409,726 | 2,876,079 | How to generate unit tests for Python using src and test folder structure (preferable with PyCharm integration)? | <p>In my python project I have following folder structure:</p>
<pre><code>src
    foo
        foo.py
    baa
        baa.py
test
    foo
    baa
</code></pre>
<p>and would like to <strong>generate a unit test file</strong> <code>test/foo/test_foo.py</code> that tests <code>src/foo/foo.py</code>.</p>
<p>In PyCharm I can</p>
<ul>
<li>go somewhere in foo.py and</li>
<li>use the key combination <code>Ctrl+Shift+T</code> and</li>
<li>select the option "Create new test".</li>
</ul>
<p>That generates a new test file and initializes it with some test method. Also see <a href="https://www.jetbrains.com/help/pycharm/creating-tests.html" rel="nofollow noreferrer">https://www.jetbrains.com/help/pycharm/creating-tests.html</a></p>
<p>However:</p>
<ul>
<li><p>PyCharm does not automatically detect the desired target directory <code>test/foo</code> but suggests using the source directory <code>src/foo</code> instead. That would result in a test file located in the same folder as the tested file.</p>
</li>
<li><p>The key combination does not work for files/modules that do not contain a class or function but just provide a property.</p>
</li>
<li><p>PyCharm generates unwanted <code>__init__.py</code> files.</p>
</li>
</ul>
<p>Therefore the built-in unit test generation of PyCharm does not work for me.</p>
<p><strong>=> What is the recommended way to generate unit tests for Python?</strong></p>
<p>I also tried to use <a href="https://github.com/UnitTestBot" rel="nofollow noreferrer">UnitTestBot</a> or <a href="https://github.com/se2p/pynguin" rel="nofollow noreferrer">pynguin</a> but was not able to generate corresponding test files. Creating all the folders and files manually would be a tedious task and I hoped that a modern IDE would do (parts of) the job for me.</p>
<p>You might argue that tests should be written first. Therefore, if there is an option to do it the other way around and generate the tested files from existing tests... that would also be helpful.</p>
<p>Another option might be to go for <a href="https://dev.to/this-is-learning/copilot-chat-writes-unit-tests-for-you-1c82" rel="nofollow noreferrer">GitHub Copilot</a>. However, I am not allowed to use it at work and do not want to send our code to a remote server. Therefore, I am still looking for a more conservative approach.</p>
<p>A different approach would be a custom script that loops through the existing src folder structure and at least creates corresponding folders and files in the test folder. I was hoping for an existing solution but could not find one, yet.</p>
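The "custom script" variant is small enough to sketch: mirror src/ under test/ and create empty test_*.py stubs for every module. A hedged sketch (the layout matches the tree in the question; the demo tree at the bottom is throwaway):

```python
from pathlib import Path
import tempfile


def scaffold_tests(src: Path, test: Path) -> None:
    """Create test/<pkg>/test_<mod>.py for every src/<pkg>/<mod>.py."""
    for module in src.rglob("*.py"):
        if module.name == "__init__.py":
            continue
        target_dir = test / module.parent.relative_to(src)
        target_dir.mkdir(parents=True, exist_ok=True)
        stub = target_dir / f"test_{module.name}"
        if not stub.exists():  # never overwrite tests that already exist
            stub.write_text(f"# TODO: tests for {module.relative_to(src)}\n")


# demo on a throwaway tree: src/foo/foo.py -> test/foo/test_foo.py
root = Path(tempfile.mkdtemp())
(root / "src" / "foo").mkdir(parents=True)
(root / "src" / "foo" / "foo.py").write_text("")
scaffold_tests(root / "src", root / "test")
print((root / "test" / "foo" / "test_foo.py").exists())
```

Writing an initial `unittest.TestCase` or pytest skeleton into the stub instead of a comment would be a straightforward extension.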
<p><strong>Related:</strong></p>
<p><a href="https://github.com/UnitTestBot/UTBotJava/issues/2670" rel="nofollow noreferrer">https://github.com/UnitTestBot/UTBotJava/issues/2670</a></p>
<p><a href="https://github.com/se2p/pynguin/issues/52" rel="nofollow noreferrer">https://github.com/se2p/pynguin/issues/52</a></p>
<p><a href="https://stackoverflow.com/questions/71273333/can-pytest-or-any-testing-tool-in-python-generate-the-unit-test-automatically">Can pytest or any testing tool in python generate the unit test automatically?</a></p>
<p><a href="https://www.strictmode.io/articles/using-github-copilot-for-testing" rel="nofollow noreferrer">https://www.strictmode.io/articles/using-github-copilot-for-testing</a></p>
<p><a href="https://dev.to/this-is-learning/copilot-chat-writes-unit-tests-for-you-1c82" rel="nofollow noreferrer">https://dev.to/this-is-learning/copilot-chat-writes-unit-tests-for-you-1c82</a></p>
| <python><unit-testing><pycharm><pytest><code-generation> | 2023-11-02 12:54:08 | 1 | 12,756 | Stefan |
77,409,551 | 3,238,679 | How to implement a symmetric autocorrelation function? | <p>I am trying to compute these values:</p>
<p><img src="https://latex.codecogs.com/gif.image?%5Cdpi%7B110%7D%20R_z%5Bn,t%5D=z%5Bn+t%5Dz%5E*%5Bn-t%5D" alt="enter image description here" /></p>
<p><img src="https://latex.codecogs.com/gif.image?%5Cdpi%7B110%7D%20R_z%5Bn,t%27%5D=z%5Bn+t%27%5Dz%5E*%5Bn-t%27%5D" alt="enter image description here" /></p>
<p>where <code>*</code> is the complex conjugate, <code>z[n] = exp(1i * 2 * pi * cumsum(wav_file_signal[n]) / wav_fs)</code>, <code>t' = t + int(wav_fs/4)</code>, <code>N</code> is the signal length, and <code>t</code> takes values in the range <code>[-tau_lim, tau_lim]</code> (the <code>taus</code> array in the code below).</p>
<p>So far I have this</p>
<pre><code>import numpy as np
# Example usage
fs = 100 # Sample rate
Ts = 1 / fs # Time step
N = 1000 # Length of the signal
t = np.arange(0, N) * Ts # Time vector
f0 = 5 # Fundamental frequency of the signal
# Generate a noisy signal for testing
z = np.sin(2 * np.pi * f0 * t) + 0.5 * np.random.randn(N)
tau_lim = 30
taus = np.arange(-tau_lim, tau_lim + 1)
alphas = round(fs / (4 * np.max(np.abs(z))))
tau_delta = round(fs / (4 * f0))
Rx = np.zeros((len(z), len(taus)))
R_1_result = np.zeros(len(z))
R_2_result = np.zeros(len(z))
z = np.exp(1j * 2 * np.pi * alphas * np.cumsum(z) / fs)
# Circular shift function
def circular_shift(signal, shift_amount):
return np.roll(signal, shift_amount)
for i, tau in enumerate(taus):
# R_1_backward_shift = z[:len(z) - abs(tau)]
# R_1_forward_shift = z[abs(tau):]
R_1_backward_shift = circular_shift(z, - abs(tau))
R_1_forward_shift = circular_shift(z, abs(tau))
R_1_length = len(R_1_forward_shift) # R_1 and R_2 have the same length
R_1_result = R_1_forward_shift * np.conj(R_1_backward_shift)
# R_2_backward_shift = z[:len(z) - abs(tau) - tau_delta]
# R_2_forward_shift = z[abs(tau) + tau_delta:]
R_2_backward_shift = circular_shift(z, - abs(tau) - tau_delta)
R_2_forward_shift = circular_shift(z, abs(tau) + tau_delta)
R_2_length = len(R_2_forward_shift)
R_2_result = R_2_forward_shift * np.conj(R_2_backward_shift)
</code></pre>
<p>Could someone please evaluate the correctness of the code?</p>
<p><code>R_z</code> is actually part of the kernel in equation (8) <a href="https://www.sciencedirect.com/science/article/pii/S0165168414003053" rel="nofollow noreferrer">here</a>.</p>
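<p>As a cross-check for the <code>np.roll</code> version, a direct, non-circular reference implementation of <code>R_z[n, t] = z[n+t] z*[n-t]</code> can help; note that leaving out-of-range positions at zero is an assumption here, not part of the original formula:</p>

```python
import numpy as np

def instantaneous_autocorr(z: np.ndarray, tau: int) -> np.ndarray:
    """R_z[n, tau] = z[n + tau] * conj(z[n - tau]) for every n where both
    shifted indices are in range; out-of-range positions are left at 0."""
    N = len(z)
    R = np.zeros(N, dtype=complex)
    n = np.arange(abs(tau), N - abs(tau))  # indices where n +/- tau both exist
    R[n] = z[n + tau] * np.conj(z[n - tau])
    return R
```

<p>Comparing this against the circular-shift version makes the wrap-around contributions at the signal edges visible, which is usually where the two disagree.</p>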
| <python><signal-processing> | 2023-11-02 12:25:38 | 0 | 1,041 | Thoth |
77,409,496 | 8,588,743 | Flipping the x & y axis in diagram resulting from plot_simultaneous() in a Tukey HSD test | <p>I have a data set and I need to check significant differences between 5 groups. The resulting diagram shows test groups on the y-axis and the response on the x-axis. However I want the figure to show the reverse, that is response on the y-axis and test groups on the x-axis. So the confidence interval bars should be vertical as opposed to the horizontal shown below. How can this be achieved?</p>
<p>Here is a fully reproducible sample of my code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.stats.multicomp import MultiComparison
from statsmodels.stats.multicomp import pairwise_tukeyhsd
response = [4,5, 7,6, 2,3, 12,9, 7,7]
groups = ["test1","test1","test2","test2","test3","test3","test4","test4","test5","test5"]
result = pairwise_tukeyhsd(response, groups, alpha=0.05)
# Plot the results
fig, ax = plt.subplots(figsize=(1, 1))
# Customize the plot appearance
result.plot_simultaneous(
comparison_name='test3',
ax = ax,
figsize = (8,5),
xlabel = "Response", # X-axis label
ylabel = 'Test Group', # Y-axis label
)
# Reverse the order of labels on the y-axis
ax.invert_yaxis()
# Add a title to the plot
ax.set_title(f'Tukey\'s HSD Test for Response')
# Show the plot
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/PpXCl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PpXCl.png" alt="enter image description here" /></a></p>
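<p>In case <code>plot_simultaneous</code> offers no orientation switch, one workaround is to draw the intervals manually with vertical error bars. A sketch follows; the means and half-widths below are illustrative placeholders (in practice they would be pulled from <code>result</code>, e.g. via <code>result.confint</code>):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

groups = ["test1", "test2", "test3", "test4", "test5"]
means = np.array([4.5, 6.5, 2.5, 10.5, 7.0])      # illustrative group means
halfwidths = np.array([2.0, 1.8, 2.1, 2.4, 1.5])  # illustrative CI half-widths

fig, ax = plt.subplots(figsize=(8, 5))
# passing yerr (rather than xerr) makes the interval bars vertical
ax.errorbar(np.arange(len(groups)), means, yerr=halfwidths, fmt="o", capsize=5)
ax.set_xticks(np.arange(len(groups)))
ax.set_xticklabels(groups)
ax.set_xlabel("Test Group")
ax.set_ylabel("Response")
ax.set_title("Tukey's HSD Test for Response")
```
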
| <python><statsmodels><tukey> | 2023-11-02 12:18:52 | 0 | 903 | Parseval |
77,409,424 | 386,861 | Layout of plot in Altair is not appearing to show conditional colours in time series plot | <p>I've got some long-format data on obesity-related hospital admissions - you can see the full dataset here: <a href="https://www.dropbox.com/scl/fi/6hu7g556pzcyfkhhp1s5l/sorted_data.csv?rlkey=n1tz0mzddkdinkt14rofjxlj1&dl=0" rel="nofollow noreferrer">https://www.dropbox.com/scl/fi/6hu7g556pzcyfkhhp1s5l/sorted_data.csv?rlkey=n1tz0mzddkdinkt14rofjxlj1&dl=0</a></p>
<p>It looks like this..</p>
<pre><code> Region Major region Year variable quantity
0 County Durham North East 2014 Total admissions 2240.0
1 Darlington North East 2014 Total admissions 283.0
2 Gateshead North East 2014 Total admissions 1229.0
3 Hartlepool North East 2014 Total admissions 484.0
4 Middlesbrough North East 2014 Total admissions 422.0
</code></pre>
<p>I've used altair to create a plot with two dropdowns</p>
<pre><code>import altair as alt
import pandas as pd

# Create dropdown for Major region
major_region_dropdown = alt.binding_select(options=melted_df['Major region'].unique().tolist(), name="Major region: ")
major_region_selection = alt.selection_point(fields=['Major region'], bind=major_region_dropdown)
# Create dropdown for variable
variable_dropdown = alt.binding_select(options=melted_df['variable'].unique().tolist(), name="Category: ")
variable_selection = alt.selection_point(fields=['variable'], bind=variable_dropdown)
color_encoding = alt.condition(
major_region_selection | variable_selection,
alt.value('red'),
alt.value('lightgray')
)
# Sort the data
sorted_data = melted_df.sort_values(by='Year')
# Main chart
main_chart = alt.Chart(sorted_data).mark_line(point=True).encode(
x='Year:O',
y='mean(quantity):Q',
color=color_encoding,
tooltip=["Major region", "Region", "mean(quantity):Q"]
).add_selection(
major_region_selection
).transform_filter(
major_region_selection
).add_selection(
variable_selection
).transform_filter(
variable_selection
).properties(
width=450,
height=600,
title="Region"
)
main_chart
</code></pre>
<p>It's supposed to be a time series chart which plots individual Regions over time, with the encoding switching from lightgray to red when points on the line are selected. But it comes out like this. What's wrong?</p>
<p><a href="https://i.sstatic.net/T7aun.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/T7aun.png" alt="enter image description here" /></a></p>
| <python><pandas><altair> | 2023-11-02 12:07:38 | 0 | 7,882 | elksie5000 |
77,409,338 | 5,800,969 | How to scrape a page generated using flutter javascript code dynamically | <p>I am trying to scrape a page to download a file by clicking on the download button image in the website below. I couldn't inspect that element to find its id for clicking with Selenium. After looking at the source code, I found that it is a JavaScript Flutter app and it can't be inspected. Sharing a screenshot of the page and the page source code below.</p>
<pre><code><html><head>
<!--
If you are serving your web app in a path other than the root, change the
href value below to reflect the base path you are serving from.
The path provided below has to start and end with a slash "/" in order for
it to work correctly.
For more details:
* https://developer.mozilla.org/en-US/docs/Web/HTML/Element/base
This is a placeholder for base href that will be replaced by the value of
the `--base-href` argument provided to `flutter build`.
-->
<base href="/">
<meta charset="UTF-8">
<meta content="IE=Edge" http-equiv="X-UA-Compatible">
<meta name="description" content="A new Flutter project.">
<!-- iOS meta tags & icons -->
<meta name="apple-mobile-web-app-capable" content="yes">
<meta name="apple-mobile-web-app-status-bar-style" content="black">
<meta name="apple-mobile-web-app-title" content="sms">
<link rel="apple-touch-icon" href="icons/Icon-192.png">
<!-- Favicon -->
<link rel="icon" type="image/png" href="favicon.png">
<title>SMS</title>
<link rel="manifest" href="manifest.json">
<script src="https://unpkg.com/canvaskit-wasm@0.37.1/bin/canvaskit.js"></script><style></style><meta flt-viewport="" name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no"><meta id="flutterweb-theme" name="theme-color" content="#ffffff"></head>
<body flt-renderer="canvaskit (auto-selected)" flt-build-mode="release" spellcheck="false" style="position: fixed; inset: 0px; overflow: hidden; padding: 0px; margin: 0px; user-select: none; touch-action: none; font: 14px sans-serif; color: red;">
<!-- This script installs service_worker.js to provide PWA functionality to
application. For more information, see:
https://developers.google.com/web/fundamentals/primers/service-workers -->
<script>
var serviceWorkerVersion = '3887127261';
var scriptLoaded = false;
function loadMainDartJs() {
if (scriptLoaded) {
return;
}
scriptLoaded = true;
var scriptTag = document.createElement('script');
scriptTag.src = 'main.dart.js';
scriptTag.type = 'application/javascript';
document.body.append(scriptTag);
}
if ('serviceWorker' in navigator) {
// Service workers are supported. Use them.
window.addEventListener('load', function () {
// Wait for registration to finish before dropping the <script> tag.
// Otherwise, the browser will load the script multiple times,
// potentially different versions.
var serviceWorkerUrl = 'flutter_service_worker.js?v=' + serviceWorkerVersion;
navigator.serviceWorker.register(serviceWorkerUrl)
.then((reg) => {
function waitForActivation(serviceWorker) {
serviceWorker.addEventListener('statechange', () => {
if (serviceWorker.state == 'activated') {
console.log('Installed new service worker.');
loadMainDartJs();
}
});
}
if (!reg.active && (reg.installing || reg.waiting)) {
// No active web worker and we have installed or are installing
// one for the first time. Simply wait for it to activate.
waitForActivation(reg.installing || reg.waiting);
} else if (!reg.active.scriptURL.endsWith(serviceWorkerVersion)) {
// When the app updates the serviceWorkerVersion changes, so we
// need to ask the service worker to update.
console.log('New service worker available.');
reg.update();
waitForActivation(reg.installing);
} else {
// Existing service worker is still good.
console.log('Loading app from service worker.');
loadMainDartJs();
}
});
// If service worker doesn't succeed in a reasonable amount of time,
// fallback to plain <script> tag.
setTimeout(() => {
if (!scriptLoaded) {
console.warn(
'Failed to load app from service worker. Falling back to plain <script> tag.',
);
loadMainDartJs();
}
}, 4000);
});
} else {
// Service workers not supported. Just drop the <script> tag.
loadMainDartJs();
}
</script>
<script src="main.dart.js" type="application/javascript"></script><flt-file-picker-inputs id="__file_picker_web-file-input"></flt-file-picker-inputs><script src="assets/packages/flutter_dropzone_web/assets/flutter_dropzone.js" type="application/javascript" defer=""></script><flt-glass-pane style="position: absolute; inset: 0px;"></flt-glass-pane></body></html>
</code></pre>
<p>Download button from the page:
<a href="https://i.sstatic.net/4Zjfs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4Zjfs.png" alt="enter image description here" /></a></p>
<p>I tried the following code to find the HTML generated by the above Flutter JavaScript code, with no luck. Here is what I have tried so far.</p>
<p><code>driver.execute_script("return document.getElementsByTagName('body')[0].innerHTML")</code></p>
<p>and
<code>driver.page_source</code></p>
<p>Both give the same original page source shown above. What I actually need is the HTML generated at runtime, including the download image button.</p>
<p>How can I find some id/class/img tag of the download button to click with Selenium for the file download? Any help is highly appreciated. Thanks in advance.</p>
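<p>Since the Flutter web renderer paints into a canvas (<code>flt-glass-pane</code>), there is usually no per-button DOM node to select, and a common fallback is clicking at coordinates inside the pane. Below is a sketch of the offset arithmetic; the pane selector and the button's fractional position are assumptions to calibrate against a screenshot, and the offset origin follows Selenium 4's centre-based convention:</p>

```python
def offset_from_center(rect: dict, fx: float, fy: float) -> tuple[int, int]:
    """Map a fractional position (fx, fy in 0..1, measured from the top-left
    of an element's bounding rect) to x/y offsets from the element's centre,
    the origin Selenium 4 uses for move_to_element_with_offset."""
    return (round(rect["width"] * (fx - 0.5)),
            round(rect["height"] * (fy - 0.5)))

# Hypothetical usage (selector and fractions are assumptions to calibrate):
#   pane = driver.find_element(By.TAG_NAME, "flt-glass-pane")
#   x, y = offset_from_center(pane.rect, 0.5, 0.9)  # e.g. a bottom-centre button
#   ActionChains(driver).move_to_element_with_offset(pane, x, y).click().perform()
```
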
| <python><selenium-webdriver><selenium-chromedriver> | 2023-11-02 11:52:27 | 1 | 2,071 | iamabhaykmr |
77,409,302 | 17,795,398 | Python logging: is there any way to keep last n log files | <p>I'm new to Python logging. I was wondering if there is a way to keep the last n log files. This is my config:</p>
<pre><code>import logging
logging.basicConfig(
filename = "logs/log.log",
filemode = "w",
encoding = "utf-8",
force = True,
level = logging.INFO,
format = "%(asctime)s %(levelname) -8s %(message)s",
datefmt = "%Y-%m-%d %H:%M:%S"
)
</code></pre>
<p>I want to keep, let's say, 10 log files. When the 11th file is about to be generated, the 1st is deleted automatically.</p>
<p>Is there any way to do this directly or do I have to implement my own function?</p>
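<p>The standard library already covers this kind of retention. Here is a sketch using <code>logging.handlers.RotatingFileHandler</code> with <code>backupCount</code>; rotation here is size-based and the thresholds are illustrative (<code>TimedRotatingFileHandler</code> is the time-based sibling):</p>

```python
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)

# Keep the active file plus up to 9 rotated backups (log.log.1 .. log.log.9);
# once the limit is hit, the oldest backup is deleted automatically.
handler = RotatingFileHandler(
    "log.log", maxBytes=1_000_000, backupCount=9, encoding="utf-8"
)
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)-8s %(message)s", datefmt="%Y-%m-%d %H:%M:%S"
))
logger.addHandler(handler)
logger.info("hello")
```

<p>To start a fresh file on every run, as the <code>filemode="w"</code> config does, one could additionally call <code>handler.doRollover()</code> once at startup.</p>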
| <python><logging> | 2023-11-02 11:46:03 | 2 | 472 | Abel Gutiérrez |
77,409,240 | 7,599,215 | python ftp can't delete file, while lftp can | <p>I'm trying to delete a file from an FTP server using Python's <code>ftplib</code> (I do have R/W permission):</p>
<pre class="lang-py prettyprint-override"><code>from ftplib import FTP
ftp = FTP('192.168.1.2')
ftp.login(user='admin', passwd=None)
ftp.login()
ftp.cwd('audio/user_rec/575B62E2')
ftp.dir()
</code></pre>
<p>Output:</p>
<pre><code>d--------- 1 owner nogroup 0 Oct 27 12:23 .
d--------- 1 owner nogroup 0 Oct 27 12:23 ..
---------- 1 owner nogroup 16120 Oct 27 12:23 2023-10-27_12-23-04.wav
---------- 1 owner nogroup 10680 Oct 27 12:23 2023-10-27_12-23-05.wav
</code></pre>
<p>When I try to <code>ftp.delete('2023-10-27_12-23-04.wav')</code>, I get: <code>error_perm: 550 Request action not taken.</code></p>
<p>But when I use <code>lftp</code> with the same password and the same FTP commands (DELE, sniffed with Wireshark), the file is deleted successfully.</p>
<p>What am I doing wrong?</p>
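<p>As a side note, the snippet logs in twice; the second, argument-less <code>login()</code> attempts an anonymous login, which on its own may downgrade the session's permissions. Beyond that, enabling ftplib's protocol trace makes the DELE exchange directly comparable with the Wireshark capture. Below is a diagnostic sketch; the absolute-path retry is an assumption about the server, not a known fix:</p>

```python
from ftplib import FTP, error_perm

def traced_delete(ftp, path: str, filename: str) -> str:
    """Change directory and delete with protocol tracing on, so the raw
    DELE exchange can be diffed against the Wireshark/lftp capture."""
    ftp.set_debuglevel(2)  # echo each command and server reply to stdout
    ftp.cwd(path)
    try:
        return ftp.delete(filename)
    except error_perm:
        # some servers only accept an absolute path (an assumption to test)
        return ftp.sendcmd(f"DELE /{path}/{filename}")
```
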
| <python><ftp> | 2023-11-02 11:35:32 | 0 | 2,563 | banderlog013 |
77,409,013 | 4,494,781 | Clearing (zeroing) all bytes of a sensitive bytearray in memory | <p>After reading the <a href="https://stackoverflow.com/a/51936887/4494781">following answer</a>, I want to make sure a <code>bytearray</code> with sensitive information (password) is correctly cleared in memory prior to garbage collection. I'm assuming garbage collection in python only removes the pointer but does not replace the actual data with zeros in memory.</p>
<p>What is the correct way of doing this?</p>
<p>This is what I'm trying to do, but I don't know how to verify that it works as intended:</p>
<pre><code>class CachedPasswordWidget(QtWidgets.QWidget):
"""A base class for widgets that may prompt the user for a password and
remember that password for the lifetime of that widget.
"""
def __init__(
self,
parent: Optional[QtWidgets.QWidget] = None,
):
super().__init__(parent)
# store the password as a mutable type so the memory can be zeroed after it is no longer needed
self._pwd: Optional[bytearray] = None
@property
def pwd(self) -> Optional[str]:
"""Return password.
Open a dialog to ask for the wallet password if necessary, and cache it.
If the password dialog is cancelled, return None.
"""
if self._pwd is not None:
return self._pwd.decode("utf-8")
# password = PasswordDialog(parent=self).run()
password = getpass.getpass()
if password is None:
# dialog cancelled
return
self._pwd = bytearray(password.encode("utf-8"))
return self._pwd.decode("utf-8")
def __del__(self):
if self._pwd is not None:
self._pwd[:] = b"\0" * len(self._pwd)
</code></pre>
<p>I guess what I'm asking is whether I can be sure that overwriting a slice of a bytearray will overwrite exactly the same memory as was used before, and whether I can be sure <code>__del__</code> will be called in a timely manner by the garbage collector.</p>
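<p>Relying on <code>__del__</code> is fragile: CPython may delay it under reference cycles and may skip it entirely at interpreter shutdown. A deterministic alternative is to scope the secret explicitly. Here is a sketch of a context manager that zeroes the buffer on exit; note that any <code>.decode("utf-8")</code> still creates an immutable <code>str</code> copy that cannot be wiped this way:</p>

```python
from contextlib import contextmanager

@contextmanager
def wipe_on_exit(secret: bytearray):
    """Yield the bytearray and overwrite every byte with 0 on exit, even on
    exceptions. bytearray is mutable, so the zeroing happens in place in the
    same buffer rather than producing a new object."""
    try:
        yield secret
    finally:
        for i in range(len(secret)):
            secret[i] = 0
```
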
| <python><python-bytearray> | 2023-11-02 10:56:48 | 0 | 1,105 | PiRK |
77,408,930 | 12,040,751 | Passing an exception type as argument, how to type hint? | <p>I have a function which takes an exception class as an argument; this is the simplified version:</p>
<pre class="lang-py prettyprint-override"><code>def catch_exception(exception):
try:
1/0
except exception:
print("lol")
</code></pre>
<pre><code>>>> catch_exception(exception=ZeroDivisionError)
lol
</code></pre>
<p>Type hinting this should be easy:</p>
<pre class="lang-py prettyprint-override"><code>def catch_exception(exception: Exception):
...
</code></pre>
<p>but when I then use this function, I get a problem from the IDE (PyCharm)</p>
<pre class="lang-py prettyprint-override"><code>catch_exception(exception=ZeroDivisionError)
# Expected type 'Exception', got 'Type[Exception]' instead
</code></pre>
<p>I understand the problem: I am passing an exception CLASS, not an exception INSTANCE.
At the same time, I am not sure how to type hint it in a way that is both formally correct and has documentation value for the user.</p>
<p>The following seems to work but there must be a better way:</p>
<pre class="lang-py prettyprint-override"><code>def catch_exception(exception: Type[Exception]):
...
</code></pre>
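<p><code>Type[Exception]</code> is indeed the standard spelling for "a class object, not an instance". A sketch showing it end to end follows; using <code>BaseException</code> below is a judgment call that also admits classes like <code>KeyboardInterrupt</code>:</p>

```python
from typing import Type

def catch_exception(exception: Type[BaseException]) -> bool:
    """Return True when dividing by zero raises the given exception class."""
    try:
        1 / 0
    except exception:
        return True
    return False
```

<p>On Python 3.9+ the builtin-generic form <code>exception: type[BaseException]</code> works without importing <code>Type</code>.</p>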
| <python><exception><python-typing> | 2023-11-02 10:43:20 | 1 | 1,569 | edd313 |
77,408,785 | 13,946,204 | How to make uWSGI server stop if an application raised exception on load? | <h2>Simple reproducible example:</h2>
<ul>
<li>project structure:</li>
</ul>
<pre><code>.
├── app.ini
├── app.py
├── uwsgi.ini
└── wsgi_run.py
</code></pre>
<ul>
<li><code>app.ini</code></li>
</ul>
<pre><code>[app:main]
use = call:app:main
pyramid.reload_templates = true
pyramid.debug_authorization = false
pyramid.debug_notfound = false
pyramid.debug_routematch = false
pyramid.includes =
</code></pre>
<ul>
<li><code>app.py</code></li>
</ul>
<pre class="lang-py prettyprint-override"><code>from pyramid.config import Configurator
from pyramid.httpexceptions import HTTPOk
from pyramid.view import view_config
@view_config(route_name='test')
def http_redirect(context, request):
return HTTPOk()
def main(_, **settings):
config = Configurator(settings=settings)
config.add_route('test', '/test')
# Error
1/0
return config.make_wsgi_app()
</code></pre>
<ul>
<li><code>uwsgi.ini</code></li>
</ul>
<pre><code>[uwsgi]
http = 0.0.0.0:9090
master = true
processes = 1
gevent = 8
listen = 4096
buffer-size = 65535
uid = %u
gid = %g
; emperor = 1 does not work too
lazy = true
lazy-app = false
need-app = true
module = wsgi_run
die-on-term = true
sharedarea = 1
ignore-signpipe = true
harakiri-verbose = true
memory-report = true
log-master = true
</code></pre>
<ul>
<li><code>wsgi_run.py</code></li>
</ul>
<pre class="lang-py prettyprint-override"><code>from pyramid.paster import get_app
application = get_app("app.ini", 'main')
</code></pre>
<h2>Command to start server:</h2>
<pre class="lang-bash prettyprint-override"><code>uwsgi uwsgi.ini
</code></pre>
<h2>Problem:</h2>
<p>When I start the server, it keeps retrying to load the app in an endless loop:</p>
<pre><code>[uWSGI] getting INI configuration from uwsgi.ini
*** Starting uWSGI 2.0.15 (64bit) on [Thu Nov 2 19:17:55 2023] ***
compiled with version: Apple LLVM 15.0.0 (clang-1500.0.40.1) on 16 October 2023 10:59:16
os: Darwin-23.0.0 Darwin Kernel Version 23.0.0: Fri Sep 15 14:42:42 PDT 2023; root:xnu-10002.1.13~1/RELEASE_X86_64
...
your processes number limit is 2784
your memory page size is 4096 bytes
detected max file descriptor number: 256
- async cores set to 8 - fd table size: 256
lock engine: OSX spinlocks
thunder lock: disabled (you can enable it with --thunder-lock)
sharedarea 0 created at 0x10850b000 (1 pages, area at 0x10850c000)
uWSGI http bound on 0.0.0.0:9090 fd 7
uwsgi socket 0 bound to TCP address 127.0.0.1:56279 (port auto-assigned) fd 6
...
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 42563)
spawned uWSGI worker 1 (pid: 42586, cores: 8)
spawned uWSGI http 1 (pid: 42587)
Traceback (most recent call last):
...
File "./app.py", line 15, in main
1/0
ZeroDivisionError: integer division or modulo by zero
unable to load app 0 (mountpoint='') (callable not found or import error)
OOPS ! failed loading app in worker 1 (pid 42586) :( trying again...
DAMN ! worker 1 (pid: 42586) died :( trying respawn ...
Respawned uWSGI worker 1 (new pid: 42590)
Traceback (most recent call last):
...
File "./app.py", line 15, in main
1/0
ZeroDivisionError: integer division or modulo by zero
unable to load app 0 (mountpoint='') (callable not found or import error)
OOPS ! failed loading app in worker 1 (pid 42590) :( trying again...
DAMN ! worker 1 (pid: 42590) died :( trying respawn ...
Respawned uWSGI worker 1 (new pid: 42594)
...
</code></pre>
<p>How do I make the uWSGI server stop if the application raised an exception on load?</p>
| <python><uwsgi><pyramid> | 2023-11-02 10:23:30 | 0 | 9,834 | rzlvmp |
77,408,717 | 3,584,765 | Using a python ssh session which asks for a password to the private key | <p>I am using an SSH connection to a remote server via username and password, which works fine. I intended to automate some file transfers from my machine to this remote server and attempted to write a Python script which would use SSH connections.</p>
<pre><code>import os

from paramiko import SSHClient
from scp import SCPClient
ssh = SSHClient()
ssh.load_system_host_keys()
ip = ... # id of the remote server
username = ... # my username
password = ... # my password
ssh.connect(ip, username=username, password=password) # <- this line causes a dialog box to appear
scp = SCPClient(ssh.get_transport())
scp.get(local_path=local_target, remote_path=os.path.join(remote_target, local_target))
</code></pre>
<p>The actual dialog box asks for:</p>
<blockquote>
<p>Enter password to unlock the private key</p>
</blockquote>
<p>I honestly do not remember the passphrase for my private key, so I cannot provide it to the dialog box. My questions are:</p>
<p>a) why python asks for private key whereas in cli there is no need for it? Is cli using ssh-agent implicitly to perform the initial handshake while python does not?<br />
b) Is there a way to imitate the cli behavior and get rid of the necessity for password in python to unlock my private key?<br />
c) Is there a way to retrieve my private key from the username/password I already know? I found a relevant answer <a href="https://askubuntu.com/a/887605/546065">here</a>, but I couldn't apply it in my case.
In the <em>Passwords and Keys</em> program there isn't any entry under Login for "Unlock Password for user@host". Instead I found an entry under <em>OpenSSH keys</em> where, under properties, I get a dialog box with some options:</p>
<ol>
<li>Export,</li>
<li>Change Passphrase, and</li>
<li>Delete SSH key.</li>
</ol>
<p>Would changing the <em>Passphrase</em> be a solution, for example? <a href="https://stackoverflow.com/questions/74311141/enter-password-to-unlock-the-private-key-connecting-to-digitalocean-droplet-lo">This relevant question</a> states so. I am only concerned that this change might affect my working CLI SSH connection to the server, though.</p>
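<p>Regarding (b): paramiko tries discovered key files and the ssh-agent before falling back to the password, which is what triggers the unlock prompt; disabling both forces pure password authentication. Here is a sketch, exercised with a stub in place of a live server:</p>

```python
def connect_password_only(client, host: str, username: str, password: str) -> None:
    """Authenticate with the password alone: skip ~/.ssh key discovery and
    the ssh-agent, so no private-key passphrase is ever requested."""
    client.connect(
        host,
        username=username,
        password=password,
        look_for_keys=False,  # do not probe ~/.ssh/id_* key files
        allow_agent=False,    # do not ask a running ssh-agent
    )
```

<p>In practice <code>client</code> would be the <code>SSHClient</code> from the snippet above; the stub below only checks which keyword arguments reach <code>connect</code>.</p>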
| <python><ssh> | 2023-11-02 10:13:28 | 0 | 5,743 | Eypros |
77,408,654 | 17,473,587 | Encountering ValueError: not enough values to unpack (expected 3, got 0) with Celery in Django project on Windows 7 | <p>I have been working on a Django project on my Windows 7 machine, using VS-Code as my development environment. Recently, I decided to incorporate Celery for handling asynchronous tasks. However, I have been encountering a ValueError: not enough values to unpack (expected 3, got 0) whenever I try to retrieve the result of a task.</p>
<p>Here's a snippet of how I am creating and calling the task:</p>
<pre><code># common/test_tasks.py
from celery import shared_task
@shared_task
def add(x, y):
return x + y
# In Django shell
from common.test_tasks import add
result = add.delay(4, 6)
print(result.ready()) # Outputs: True
print(result.get()) # Raises ValueError
</code></pre>
<p>I have tried different Celery configurations and also different backends and brokers. Initially, I was using Redis as both the broker and backend, but I encountered the same error. I switched the broker to RabbitMQ (pyamqp://guest:guest@localhost//) while keeping Redis for the result backend. Still, the error persists. I also attempted to downgrade Celery to version 3.1.24 but faced installation issues on Windows.</p>
<p>Here are my relevant settings in the settings.py file:</p>
<pre><code># settings.py
CELERY_BROKER_URL = 'pyamqp://guest:guest@localhost//'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
</code></pre>
<p>I've also checked the logs, but nothing stood out. The tasks seem to be executing successfully, but the error occurs when trying to fetch the result.</p>
<p>Additionally, I've scoured through multiple forums and documentation but haven't found a solution that works for my specific setup. Any insights or suggestions on how I might resolve this issue would be greatly appreciated.</p>
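<p>For context, this exact unpacking error is widely reported when Celery's default prefork pool runs on Windows, where billiard's fork emulation breaks result retrieval. Assuming the broker/backend settings are otherwise fine, two commonly cited workarounds are shown below (the project name is a placeholder):</p>

```shell
# Workaround 1: a Windows-friendly single-process pool (fine for development)
celery -A yourproject worker --pool=solo -l info

# Workaround 2: keep the prefork pool but enable the multiprocessing shim
set FORKED_BY_MULTIPROCESSING=1
celery -A yourproject worker -l info
```
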
| <python><django><redis><rabbitmq><celery> | 2023-11-02 10:04:31 | 1 | 360 | parmer_110 |
77,408,636 | 769,449 | Inherited Scrapy spider does not store any output from super basic tutorial site | <p>See the full Scrapy project filenames and code below.
It creates 2 output files, <code>myspecificspider.json</code> AND <code>myspecificspider_items.json</code> (which is weird on its own, as I'd expect only the latter), but more importantly neither of the files contains ANY content.
I can't figure out what's wrong here. The debug log shows no errors and the CSS selector (tested in a separate non-inherited spider) works. Does it have to do with my inheritance logic, perhaps?</p>
<p><strong>pipelines.py</strong></p>
<pre><code>import json
class MyCustomPipeline(object):
def open_spider(self, spider):
self.items = []
def process_item(self, item, spider):
self.items.append(dict(item))
return item
class JsonPipeline:
def open_spider(self, spider):
filename = f'{spider.name}_items.json'
self.file = open(filename, 'w')
def close_spider(self, spider):
self.file.close()
def process_item(self, item, spider):
line = json.dumps(dict(item)) + "\n"
self.file.write(line)
return item
</code></pre>
<p><strong>basespider.py</strong></p>
<pre><code>import scrapy
from ..myitems import MyItem
from my_spiders.globalfunctions import *
class BaseSpider(scrapy.Spider):
custom_settings = {
'ITEM_PIPELINES': {'my_spiders.pipelines.JsonPipeline': 300,}
}
allowed_domains = ['test.com']
start_urls = ['']
def parse(self, response):
pass
</code></pre>
<p><strong>myitems.py</strong></p>
<pre><code>import scrapy
class MyItem(scrapy.Item):
url = scrapy.Field()
city = scrapy.Field()
status = scrapy.Field()
pass
</code></pre>
<p><strong>myspecificspider.py</strong></p>
<pre><code>import scrapy
import json
import re
import os
import time
from scrapy.selector import Selector
from scrapy.http import HtmlResponse
from lxml import html
from typing import TypedDict
from scrapy import signals
from my_spiders.globalfunctions import *
from ..myconfig import *
from ..myitems import MyItem
from my_spiders.spiders.basespider import BaseSpider
class MySpecificSpider(BaseSpider):
name = 'myspecificspider'
start_urls = ['https://quotes.toscrape.com']
allowed_domains = ['quotes.toscrape.com']
debug = False
def parse(self, response):
total_found = 0
for listing in response.xpath("//div[@class='quote']/span/a[starts-with(@href, '/author/')]"):
listing_url = listing.xpath("./@href").get()
listing_url = "https://quotes.toscrape.com" + listing_url #create a fully qualified URL
print(listing_url)
yield scrapy.Request(
url=response.urljoin(listing_url),
callback=self.parse_object,
)
def parse_object(self, response):
item = MyItem()
item['url'] = response.url # get url
item['city'] = 'test'
item['status'] = response.xpath('//span[contains(@class, "author-born-date")]/text()').get()
print("Extracted Item:", item)
yield item
</code></pre>
| <python><web-scraping><scrapy> | 2023-11-02 10:02:03 | 0 | 6,241 | Adam |
77,408,589 | 10,218,291 | Python dateparser error, AttributeError: "safe_load()" has been removed, use | <p>Today, while trying to build my application (with no changes to the old code other than added logging), I was surprised by the error below.</p>
<pre><code> File "/app/search/parsers.py", line 9, in <module>
import dateparser
File "/home/python/.local/lib/python3.8/site-packages/dateparser/__init__.py", line 7, in <module>
_default_parser = DateDataParser(allow_redetect_language=True)
File "/home/python/.local/lib/python3.8/site-packages/dateparser/conf.py", line 84, in wrapper
return f(*args, **kwargs)
File "/home/python/.local/lib/python3.8/site-packages/dateparser/date.py", line 290, in __init__
available_language_map = self._get_language_loader().get_language_map()
File "/home/python/.local/lib/python3.8/site-packages/dateparser/languages/loader.py", line 21, in get_language_map
self._load_data()
File "/home/python/.local/lib/python3.8/site-packages/dateparser/languages/loader.py", line 39, in _load_data
data = SafeLoader(data).get_data()
File "/home/python/.local/lib/python3.8/site-packages/ruamel/yaml/constructor.py", line 108, in get_data
return self.construct_document(self.composer.get_node())
File "/home/python/.local/lib/python3.8/site-packages/ruamel/yaml/constructor.py", line 123, in construct_document
for _dummy in generator:
File "/home/python/.local/lib/python3.8/site-packages/ruamel/yaml/constructor.py", line 629, in construct_yaml_map
value = self.construct_mapping(node)
File "/home/python/.local/lib/python3.8/site-packages/ruamel/yaml/constructor.py", line 427, in construct_mapping
return BaseConstructor.construct_mapping(self, node, deep=deep)
File "/home/python/.local/lib/python3.8/site-packages/ruamel/yaml/constructor.py", line 242, in construct_mapping
value = self.construct_object(value_node, deep=deep)
File "/home/python/.local/lib/python3.8/site-packages/ruamel/yaml/constructor.py", line 145, in construct_object
data = self.construct_non_recursive_object(node)
File "/home/python/.local/lib/python3.8/site-packages/ruamel/yaml/constructor.py", line 179, in construct_non_recursive_object
data = constructor(self, node)
File "/home/python/.local/lib/python3.8/site-packages/dateparser/utils/__init__.py", line 192, in construct_yaml_include
return yaml.safe_load(get_data('data', node.value))
File "/home/python/.local/lib/python3.8/site-packages/ruamel/yaml/main.py", line 1105, in safe_load
error_deprecation('safe_load', 'load', arg="typ='safe', pure=True")
File "/home/python/.local/lib/python3.8/site-packages/ruamel/yaml/main.py", line 1037, in error_deprecation
raise AttributeError(s)
AttributeError:
"safe_load()" has been removed, use
yaml = YAML(typ='safe', pure=True)
yaml.load(...)
instead of file "/home/python/.local/lib/python3.8/site-packages/dateparser/utils/__init__.py", line 192
return yaml.safe_load(get_data('data', node.value))
script returned exit code 1
</code></pre>
<p>Package versions:</p>
<pre><code>dateparser==0.6.0
PyYAML==5.4.1
</code></pre>
<p>Can someone help me figure out what exactly changed all of a sudden to cause this error?</p>
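<p>The traceback points at ruamel.yaml rather than the application code: ruamel.yaml 0.17 removed the old <code>safe_load()</code> API, while dateparser 0.6.0 still loads its language data through it, so an unpinned transitive upgrade would surface exactly this error. Assuming nothing else in the environment needs the newer ruamel.yaml, two plausible fixes are sketched below (dateparser 1.x reportedly no longer loads YAML through ruamel at runtime):</p>

```shell
# Option 1: pin the transitive dependency that removed safe_load()
pip install "ruamel.yaml<0.17"

# Option 2: upgrade dateparser itself (0.6.0 is a very old release)
pip install --upgrade "dateparser>=1.1"
```
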
| <python><python-3.x><django> | 2023-11-02 09:56:46 | 0 | 1,617 | Avi |
77,408,582 | 7,812,273 | Can we run IDL(Interactive Data Language) in Databricks Notebook (python code) | <p>I am working on a project where we use Azure stack for data engineering and analysis.
The main component for computation is Azure Databricks in which most of the code is written in python code.</p>
<p>Recently I got a requirement to work on a project where we have to process mf4 (Measurement Data Files) files.
To process the mf4 files, we settled on the asammdf library.</p>
<p>The next phase is to migrate a few formulas which are written in IDL (Interactive Data Language).
These formulas are stored in an Oracle database, and we are able to connect to the database in order to retrieve them.</p>
<p>The question that arises here is: how can we run these IDL formulas on the data available in Azure Storage? Can we run these formulas from a Python notebook file and use the data for analysis?</p>
<p>I have gone through some documentation but didn't get any idea how to implement a solution for this.</p>
<ol>
<li>Is it possible to import any IDL library in python notebook files and execute IDL formulas ?</li>
<li>Is this IDL an open source or licensed product?</li>
</ol>
<p>Any leads appreciated! Thanks in advance.</p>
| <python><databricks><azure-databricks><idl><idl-programming-language> | 2023-11-02 09:55:30 | 1 | 1,128 | Antony |
77,408,581 | 1,422,096 | Pandas weirdly reading dates with read_excel | <p>I have a problem a little bit similar to <a href="https://stackoverflow.com/questions/34156830/leave-dates-as-strings-using-read-excel-function-from-pandas-in-python">Leave dates as strings using read_excel function from pandas in python</a>.</p>
<p>When doing:</p>
<pre><code>x = pd.read_excel("example.xlsx")
x = x[x["date"] > "2023-01-01"]
</code></pre>
<p>I get an error:</p>
<blockquote>
<p>TypeError: '>' not supported between instances of 'datetime.datetime' and 'str'</p>
</blockquote>
<p>But when doing:</p>
<pre><code>x = x[x["date"] > datetime.datetime(2023, 1, 1)]
</code></pre>
<p>I get the error:</p>
<blockquote>
<p>TypeError: '>' not supported between instances of 'str' and 'datetime.datetime'</p>
</blockquote>
<p>So it seems that the column is neither 100% <code>str</code>, nor 100% <code>datetime</code> objects...</p>
<p>Indeed, some of them are strings like <code>18/12/2022</code> (arghh, unsortable with this format!), some of them are <code>datetime</code> objects.</p>
<p>TL;DR <strong>How to make Pandas parse dates in a coherent way when opening a XLSX with <code>read_excel</code>?</strong></p>
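<p>To make the mixed-type column concrete, here is a self-contained sketch of the element-wise normalisation I currently fall back on; the hand-built column stands in for what <code>read_excel</code> returns, and I am assuming the string dates are always dd/mm/yyyy:</p>

```python
import datetime
import pandas as pd

# Stand-in for the column read_excel gives me: strings mixed with datetimes
x = pd.DataFrame({"date": ["18/12/2022", datetime.datetime(2023, 3, 1)]})

# Normalise element-wise so every value becomes a datetime
x["date"] = x["date"].apply(
    lambda v: v if isinstance(v, datetime.datetime)
    else datetime.datetime.strptime(v, "%d/%m/%Y")
)

print(x[x["date"] > datetime.datetime(2023, 1, 1)])  # comparison now works
```

<p>(Ideally <code>read_excel</code> itself would parse the dates coherently, hence the question.)</p>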
| <python><pandas> | 2023-11-02 09:55:29 | 2 | 47,388 | Basj |
77,408,505 | 9,640,238 | Properly unwrap long text with Python | <p>I'm trying to find a way to <em>properly</em> unwrap long text in Python. What I mean by "properly" is that only single occurrences of <code>\r\n</code> should be replaced by a space, and any sequence of two or more should be retained. For instance:</p>
<pre class="lang-py prettyprint-override"><code>text = "The 44th Chess Olympiad was an international\r\nteam chess event organised by the International Chess Federation (FIDE)\r\nin Chennai, India from 28 July to 10 August 2022.\r\n\r\nIt consisted of Open and Women's\r\ntournaments, as well as\r\nseveral events to promote chess.\r\n\r\nThe Olympiad was initially supposed to take place\r\nin Khanty-Mansiysk, Russia,\r\nthe host of the Chess World Cup 2019,\r\nin August 2020, but it was later moved to Moscow.\r\n\r\nHowever, it was postponed due to the COVID-19 pandemic\r\nand then relocated to Chennai."
</code></pre>
<p><code>\r\n\r\n</code> should be retained, but <code>\r\n</code> should be replaced by a space.</p>
<p>I'm really bad with regular expressions, which is often the option with best performance, so any hint would be much appreciated!</p>
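<p>To make the expected behaviour concrete, here is a lookaround-based sketch; I am not confident it covers every edge case, so treat it as an illustration of the intent rather than a vetted solution:</p>

```python
import re

def unwrap(text: str) -> str:
    # Replace a lone \r\n with a space; any \r\n directly preceded or
    # followed by another \r\n (i.e. part of a run of 2+) is kept as-is.
    return re.sub(r"(?<!\r\n)\r\n(?!\r\n)", " ", text)

print(unwrap("The 44th Chess Olympiad was an international\r\nteam chess event."))
```

<p>With the sample above, <code>unwrap(text)</code> should join the wrapped lines of each paragraph while leaving the <code>\r\n\r\n</code> paragraph breaks intact.</p>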
| <python><text> | 2023-11-02 09:45:19 | 1 | 2,690 | mrgou |
77,408,418 | 9,614,610 | QSizeGrip doesn't resize window at all how do i fix that? | <p>I am attempting to create a frameless window to allow for a customizable title bar. However, I've encountered an issue with the <code>QSizeGrip</code> not resizing my window as intended.</p>
<p>I've experimented with placing it in different sections of the code, but unfortunately, it hasn't proven effective. I also came across a suggestion that I need to create a resizeEvent for widgets. I tried implementing this, but it didn't seem to resolve the problem either. Could someone please assist me? Thank you.</p>
<p>Here is the code:</p>
<pre><code>import sys
from PyQt5.QtCore import QPoint, Qt
from PyQt5.QtWidgets import QPushButton, QHBoxLayout, QVBoxLayout, \
QWidget, QLabel, QDialog, QSizeGrip, QMenuBar, QApplication
class TitleBar(QWidget):
height = 35
def __init__(self, parent):
super(TitleBar, self).__init__()
self.parent = parent
self.layout = QHBoxLayout()
self.layout.setContentsMargins(0, 0, 0, 0)
self.menu_bar = QMenuBar()
self.menu_bar.setStyleSheet("""
color: #fff;
background-color: #23272A;
font-size: 14px;
padding: 4px;
""")
self.menu_file = self.menu_bar.addMenu('File')
self.menu_file_quit = self.menu_file.addAction('Exit')
self.menu_file_quit.triggered.connect(self.parent.close)
self.menu_help = self.menu_bar.addMenu('Help')
self.layout.addWidget(self.menu_bar)
self.title = QLabel("Test")
self.title.setFixedHeight(self.height)
self.layout.addWidget(self.title, alignment=Qt.AlignCenter)
title_style = """
QLabel {
color: #ffffff; /* Text color (white) */
background-color: #23272a; /* Background color (dark grey) */
font-size: 18px; /* Font size */
font-weight: bold; /* Bold font */
padding: 8px; /* Padding around the text */
text-align: center; /* Horizontally center the text */
}
"""
self.title.setStyleSheet(title_style)
self.closeButton = QPushButton(' ')
self.closeButton.clicked.connect(self.on_click_close)
self.closeButton.setStyleSheet("""
background-color: #DC143C;
border-radius: 10px;
height: {};
width: {};
margin-right: 3px;
font-weight: bold;
color: #000;
font-family: "Webdings";
qproperty-text: "r";
""".format(self.height / 1.8, self.height / 1.8))
self.maxButton = QPushButton(' ')
self.maxButton.clicked.connect(self.on_click_maximize)
self.maxButton.setStyleSheet("""
background-color: #32CD32;
border-radius: 10px;
height: {};
width: {};
margin-right: 3px;
font-weight: bold;
color: #000;
font-family: "Webdings";
qproperty-text: "1";
""".format(self.height / 1.8, self.height / 1.8))
self.hideButton = QPushButton(' ')
self.hideButton.clicked.connect(self.on_click_hide)
self.hideButton.setStyleSheet("""
background-color: #FFFF00;
border-radius: 10px;
height: {};
width: {};
margin-right: 3px;
font-weight: bold;
color: #000;
font-family: "Webdings";
qproperty-text: "0";
""".format(self.height / 1.8, self.height / 1.8))
self.layout.addWidget(self.hideButton)
self.layout.addWidget(self.maxButton)
self.layout.addWidget(self.closeButton)
self.setLayout(self.layout)
self.start = QPoint(0, 0)
self.pressing = False
self.maximaze = False
def resizeEvent(self, QResizeEvent):
super(TitleBar, self).resizeEvent(QResizeEvent)
self.title.setFixedWidth(self.parent.width())
def mousePressEvent(self, event):
self.start = self.mapToGlobal(event.pos())
self.pressing = True
def mouseMoveEvent(self, event):
if self.pressing:
self.end = self.mapToGlobal(event.pos())
self.movement = self.end - self.start
self.parent.move(self.mapToGlobal(self.movement))
self.start = self.end
def mouseReleaseEvent(self, QMouseEvent):
self.pressing = False
def on_click_close(self):
self.parent.close()
def on_click_maximize(self):
self.maximaze = not self.maximaze
if self.maximaze: self.parent.setWindowState(Qt.WindowNoState)
if not self.maximaze:
self.parent.setWindowState(Qt.WindowMaximized)
def on_click_hide(self):
self.parent.showMinimized()
class StatusBar(QWidget):
def __init__(self, parent):
super(StatusBar, self).__init__()
self.initUI()
self.showMessage("***Test***")
def initUI(self):
self.label = QLabel("Status bar...")
self.label.setFixedHeight(24)
self.label.setAlignment(Qt.AlignLeft | Qt.AlignVCenter)
self.label.setStyleSheet("""
background-color: #23272a;
font-size: 12px;
padding-left: 5px;
color: white;
""")
self.layout = QHBoxLayout()
self.layout.setContentsMargins(0, 0, 0, 0)
self.layout.addWidget(self.label)
self.setLayout(self.layout)
def showMessage(self, text):
self.label.setText(text)
class MainWindow(QDialog):
def __init__(self):
QDialog.__init__(self)
self.setFixedSize(400, 200)
self.setWindowFlags(self.windowFlags() | Qt.FramelessWindowHint)
self.setStyleSheet("background-color: #2c2f33;")
self.setWindowTitle('Test')
self.title_bar = TitleBar(self)
self.status_bar = StatusBar(self)
self.grip = QSizeGrip(self)
self.layout = QVBoxLayout()
self.layout.setContentsMargins(0, 0, 0, 0)
self.layout.addWidget(self.title_bar)
self.layout.addStretch(1)
self.layout.addWidget(self.grip, 0, Qt.AlignBottom | Qt.AlignRight)
self.layout.addWidget(self.status_bar)
self.layout.setSpacing(0)
self.setLayout(self.layout)
if __name__=='__main__':
app = QApplication(sys.argv)
myWindow = MainWindow()
myWindow.show()
sys.exit(app.exec_())
</code></pre>
| <python><python-3.x><pyqt><pyqt5><qsizegrip> | 2023-11-02 09:33:53 | 1 | 417 | Vlad |
77,408,336 | 842,693 | Is there a way to specify per-feature project.scripts in pyproject.toml using hatch? | <p>I have a small python project "<code>tsdb</code>" using <em>hatch</em> for packaging.
The project provides a Python package and includes a demo using Streamlit. Since not everyone needs the demo and since Streamlit has a lot of dependencies, I make it an <em>optional dependency</em> (or a <em>feature</em>, as it is called in the documentation):</p>
<pre class="lang-ini prettyprint-override"><code>[project.optional-dependencies]
gui = [ "streamlit" ]
</code></pre>
<p>To make the demo easier to use, I add the following <em>entry point</em> (<em>script</em>):</p>
<pre class="lang-ini prettyprint-override"><code>[project.scripts]
tsdb-demo = "tsdb.scripts.run_streamlit:run"
</code></pre>
<p>This works, except that <code>tsdb-demo</code> throws an error when the package is installed without <code>[gui]</code> (since it is missing Streamlit).
Is there a way to specify that the entry point should get installed only with <code>[gui]</code>?</p>
<p>And if not, what is the recommended/standard approach in these cases: should I put the demo into a separate package? And if so, can/should this be done within the same Git repo, or should I have two separate repos?</p>
| <python><entry-point><hatch> | 2023-11-02 09:22:30 | 0 | 1,623 | Michal Kaut |
77,408,220 | 7,334,572 | How to format a dataframe column-wise in pandas? | <p>I format the dataframe and output a file with:</p>
<pre class="lang-py prettyprint-override"><code>tta = pandas.DataFrame(
{'name': ['xx', 'y y', 'z'], 'alter': ['AA', 'BB', 'CCC']})
str1 = tta.to_string(formatters={'name': '{:05s}'.format,
'alter': '{:08s}'.format
}, header=False, index=False, justify='right', col_space=0)
</code></pre>
<p>However, the <code>to_string</code> function adds an extra space to my output string like the <code>str1</code> in the script:</p>
<pre><code>'xx000 AA000000\ny y00 BB000000\nz0000 CCC00000'
</code></pre>
<p>It seems it is hard to remove the extra white space.
(See: <a href="https://stackoverflow.com/questions/52030631/remove-the-automatic-two-spaces-between-columns-that-pandas-dataframe-to-string">Remove the automatic two spaces between columns that Pandas DataFrame.to_string inserts</a>)</p>
<p>Thus, is there any elegant way to format the dataframe column-wise and get results like this:</p>
<pre><code>'xx000AA000000\ny y00BB000000\nz0000CCC00000'
</code></pre>
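<p>For reference, this manual column-by-column workaround produces exactly the string I want (the explicit <code>0<</code> fill/align spec stands in for my real formats); I am hoping pandas has something more elegant built in:</p>

```python
import pandas as pd

tta = pd.DataFrame(
    {'name': ['xx', 'y y', 'z'], 'alter': ['AA', 'BB', 'CCC']})

# Format each column independently, then concatenate with no separator.
cols = tta['name'].map('{:0<5s}'.format) + tta['alter'].map('{:0<8s}'.format)
str1 = '\n'.join(cols)
print(str1)
```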
| <python><pandas> | 2023-11-02 09:06:45 | 3 | 442 | cwind |
77,408,201 | 7,745,011 | Python custom logging handler tries to log itself during emit() | <p>I am trying to implement a custom logging handler, which should post to a rabbitmq server<sup>1</sup>.</p>
<p>A basic setup of this would look something like this:</p>
<pre><code>class RabbitMQHandler(logging.Handler):
def __init__(self, host: str, port: int, queue: str, level: int = 0) -> None:
self.host = host
self.port = port
self.queue = queue
super().__init__(level)
def emit(self, record: logging.LogRecord) -> None:
try:
msg = self.format(record)
with pika.BlockingConnection(pika.ConnectionParameters(
host=self.host,
port=self.port
)) as connection:
channel = connection.channel()
channel.queue_declare(queue=self.queue)
channel.basic_publish(
exchange='',
routing_key=self.queue,
body=msg
)
self.flush()
except (KeyboardInterrupt, SystemExit):
raise
except Exception:
self.handleError(record)
</code></pre>
<p>The issue here seems to be that the <code>pika</code> module tries to log as soon as a connection is established, thus calling the <code>emit()</code> function recursively forever.</p>
<p>My question is whether I should simply disable logging (how?) during the emit function, or whether this is the wrong approach altogether.</p>
<p><strong>EDIT:</strong> Putting the above handler into a separate logger would be one way to go; however, I am wondering about a general solution to this question.</p>
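<p>For completeness, the most general mechanism I have found so far is a re-entrancy guard inside the handler itself, sketched here on a stripped-down handler with a dummy publish step standing in for the pika calls (not thread-safe as written; a <code>threading.local</code> flag would be needed for that):</p>

```python
import logging

class GuardedHandler(logging.Handler):
    """Drops records generated while emit() is already running."""

    def __init__(self, level=logging.NOTSET):
        super().__init__(level)
        self._emitting = False
        self.published = 0  # just for demonstration

    def emit(self, record):
        if self._emitting:
            return  # re-entrant record triggered from inside emit(): drop it
        self._emitting = True
        try:
            self.published += 1
            # Stand-in for the publish step; a real library (like pika)
            # may log internally here, re-entering this handler.
            logging.getLogger("noisy.library").info("internal chatter")
        finally:
            self._emitting = False

handler = GuardedHandler()
lib_logger = logging.getLogger("noisy.library")
lib_logger.addHandler(handler)
lib_logger.setLevel(logging.INFO)
lib_logger.info("outer record")  # does not recurse forever
```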
<hr />
<p><sup>1</sup> I know there are <a href="https://pypi.org/project/python-logging-rabbitmq/" rel="nofollow noreferrer">pip packages</a> for this, but for this project I have to avoid dependencies as much as possible.</p>
| <python><logging><rabbitmq><pika> | 2023-11-02 09:02:58 | 1 | 2,980 | Roland Deschain |
77,408,116 | 4,994,781 | pytest indirect parametrization fails when mixing it with non-fixture parameters | <p>First of all, sorry for the title, I couldn't think of something better.</p>
<p>The code below is fictitious because the real code where this is happening is much longer and this little example shows the same problem.</p>
<p>I have this code in my <code>conftest.py</code>:</p>
<pre><code>@pytest.fixture()
def my_fixture(argument):
return f'expected{argument[-1]}'
</code></pre>
<p>It works as expected when parametrizing <strong>only</strong> the fixture in a test.</p>
<p>But if I have this test function:</p>
<pre><code>@pytest.mark.parametrize('my_fixture, expected', [
('fixture_param1', 'expected1'),
('fixture_param2', 'expected2')
], indirect=['my_fixture'])
def test_indirect(my_fixture, expected):
value = my_fixture
assert value == expected
</code></pre>
<p>This fails with the following error:</p>
<pre><code> @pytest.fixture()
def my_fixture(argument):
E fixture 'argument' not found
> available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, cov, doctest_namespace, log_paths, monkeypatch, my_fixture, no_cover, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, unreadable_file
> use 'pytest --fixtures [testpath]' for help on them.
</code></pre>
<p>Obviously this works if the test function is called like this:</p>
<pre><code>@pytest.mark.parametrize('argument, expected', [
('fixture_param1', 'expected1'),
('fixture_param2', 'expected2')
], indirect=['my_fixture'])
def test_indirect(my_fixture, expected):
value = my_fixture
assert value == expected
</code></pre>
<p>but I find this syntax confusing because I would like to have <code>my_fixture</code> explicitly in the parametrization rather than the argument name.</p>
<p>What am I doing wrong? Apparently, what I'm trying to do is correct: one can use both a fixture and a normal parameter in a <code>parametrize</code> call as long as the indirect parametrizations are declared. I'm at a loss here…</p>
<p>Thanks in advance!</p>
| <python><pytest><parametrized-testing> | 2023-11-02 08:50:47 | 1 | 580 | Raúl Núñez de Arenas Coronado |
77,407,990 | 268,581 | Investigating JSON data | <h1>PowerShell</h1>
<p>Here's an API call to sec.gov in Windows PowerShell:</p>
<pre><code>$result = Invoke-RestMethod "https://data.sec.gov/submissions/CIK0001318605.json" -UserAgent 'company user@abc.com'
</code></pre>
<p>If you want to try the call, replace <code>company user@abc.com</code> as appropriate.</p>
<p>If we look at <code>$result</code>, by default, it shows the property names as well as a brief representation of the values:</p>
<pre><code>PS C:\Users\dharm> $result
cik : 1318605
entityType : operating
sic : 3711
sicDescription : Motor Vehicles & Passenger Car Bodies
insiderTransactionForOwnerExists : 0
insiderTransactionForIssuerExists : 1
name : Tesla, Inc.
tickers : {TSLA}
exchanges : {Nasdaq}
ein : 912197729
description :
website :
investorWebsite :
category : Large accelerated filer
fiscalYearEnd : 1231
stateOfIncorporation : DE
stateOfIncorporationDescription : DE
addresses : @{mailing=; business=}
phone : 650-681-5000
flags :
formerNames : {@{name=TESLA MOTORS INC; from=2005-02-17T00:00:00.000Z; to=2017-01-27T00:00:00.000Z}}
filings : @{recent=; files=System.Object[]}
</code></pre>
<p>So, I can drill down into the <code>filings</code> and pipe it into <code>fl</code> to see what's in there:</p>
<pre><code>PS C:\Users\dharm> $result.filings | fl
recent : @{accessionNumber=System.Object[]; filingDate=System.Object[]; reportDate=System.Object[]; acceptanceDateTime=System.Object[]; act=System.Object[]; form=System.Object[];
fileNumber=System.Object[]; filmNumber=System.Object[]; items=System.Object[]; size=System.Object[]; isXBRL=System.Object[]; isInlineXBRL=System.Object[]; primaryDocument=System.Object[];
primaryDocDescription=System.Object[]}
files : {@{name=CIK0001318605-submissions-001.json; filingCount=440; filingFrom=2005-02-17; filingTo=2015-06-15}}
</code></pre>
<p>And so on.</p>
<h1>Python</h1>
<p>Let's make the equivalent call in Python:</p>
<pre><code>import requests
url = "https://data.sec.gov/submissions/CIK0001318605.json"
headers = { 'User-Agent': 'company user@abc.com' }
response = requests.request("GET", url, headers=headers)
</code></pre>
<p>If we use <code>response.json()</code>, all the data is displayed:</p>
<pre><code>response.json()
</code></pre>
<p>If I use <code>pprint</code>, it also displays all the data:</p>
<pre><code>import pprint
data = response.json()
pprint.pprint(data)
</code></pre>
<h1>Question</h1>
<p>Is there a way in Python to display only the top level properties and values in a table similar to the default PowerShell output?</p>
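<p>To show the kind of output I mean, here is a rough stdlib-only sketch that previews just the top-level keys (the 35-character key column and 60-character truncation are arbitrary choices):</p>

```python
def summarize(d: dict, width: int = 60) -> list[str]:
    # One truncated line per top-level key, similar to PowerShell's view
    lines = []
    for key, value in d.items():
        preview = repr(value)
        if len(preview) > width:
            preview = preview[: width - 3] + "..."
        lines.append(f"{key:35}: {preview}")
    return lines

for line in summarize({"cik": 1318605, "tickers": ["TSLA"], "name": "Tesla, Inc."}):
    print(line)
```

<p>But perhaps there is an existing library that does this better.</p>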
| <python><json> | 2023-11-02 08:32:56 | 1 | 9,709 | dharmatech |
77,407,930 | 1,020,139 | How to use Python Poetry as a "lightweight" virtual environment like `python -m venv <venv>`? | <p>I often prefer <code>python -m venv <venv></code> due to its simplicity and light weight (no notion of a Poetry "package"); it only sets environment variables to direct Python/pip to use local packages.</p>
<p>How can I use Python Poetry like virtual environments created by <code>python -m venv <venv></code>?</p>
| <python><python-venv><python-poetry> | 2023-11-02 08:19:35 | 0 | 14,560 | Shuzheng |
77,407,811 | 9,304,534 | How to implement RBAC in azure and flask application | <p>I want to implement Azure RBAC in a flask application. How should I do it?</p>
<p>Inside the web app, Users can assign and remove roles to other users.</p>
| <python><azure><flask><azure-rbac> | 2023-11-02 08:01:46 | 1 | 1,885 | Nitin Raturi |
77,407,621 | 6,807,106 | python regular expression greedy capture | <p>I need to replace the entire text between AAA and BBB (not including those) in the following multi-line text with a blank.
I tried this pattern => 'This form.*[^Accuracy.]Accuracy.'
But it only captures up to the first 'Accuracy.'
I tried to make the capture greedy but cannot get the right pattern.
Please help.</p>
<pre><code>import re
haystack='''Other needs and/or restrictions:
Telephone visit on 11/16 @ 3:50 PM
AAA This form has been electronically signed.
Check for Accuracy. XXX authorized by Dr Zhivago (M.D.)
This form contains your private health information that
you may choose to release to another party;
please review for Accuracy.
BBB
'''
pattern = 'This form.*[^Accuracy\.]Accuracy\.'
p = re.compile(pattern, re.MULTILINE)
match = p.search(haystack)
if match != None:
haystack = re.sub(pattern,'',haystack,re.MULTILINE)
print(haystack)
</code></pre>
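<p>To illustrate the effect I am after on a toy string: my current guess is that I need <code>re.DOTALL</code> so that <code>.</code> can cross line breaks, and that flags must be passed as <code>flags=...</code>, since the fourth positional argument of <code>re.sub</code> is <code>count</code>, not flags. I am not sure this is the right overall approach:</p>

```python
import re

# Toy version of the problem: blank out everything between AAA and BBB
# (exclusive), across line breaks.
haystack = "keep AAA wipe this\nand this BBB keep too"

# DOTALL lets '.' match newlines; the lookarounds keep AAA/BBB themselves.
cleaned = re.sub(r"(?<=AAA).*?(?=BBB)", " ", haystack, flags=re.DOTALL)
print(cleaned)  # keep AAA BBB keep too
```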
| <python><regex> | 2023-11-02 07:24:31 | 1 | 352 | Soundar Rajan |
77,407,078 | 696,264 | Unable to delete a record in Milvus collection | <p>I executed the code below to delete a record from a Milvus collection. There is no error, but the record is not deleted either.</p>
<p>Please recommend a solution for this.</p>
<pre><code>from pymilvus import connections,Collection,db,utility,FieldSchema, CollectionSchema
def connectVectorDB(**conStrings):
try:
milvusDB = connections.connect(
#db_name=conStrings['dbName'],
host=conStrings['hostname'],
port=conStrings['port']
)
db.using_database(conStrings['dbName'])
return milvusDB
except Exception as err:
raise err
dbConnections = connectVectorDB(dbName='default',hostname="localhost",port='19530')
myCol = Collection('DocStrings')
expression = "docID == 445341931317291845"
myCol.delete(expr=expression)
myCol.flush()
</code></pre>
| <python><milvus> | 2023-11-02 05:11:01 | 1 | 1,321 | Madhan |
77,406,962 | 435,563 | How to type function returning a class decorator? (python3.11) | <p>I know I can type a class decorator as follows:</p>
<pre class="lang-py prettyprint-override"><code>T = TypeVar('T', bound=Type[Any])
def decorator(cls: T) -> T:
...
</code></pre>
<p>Which can be used:</p>
<pre><code>@decorator
class Foo:
foo: int
</code></pre>
<p>And a reference to <code>foo</code> on instance of <code>Foo</code> will check properly in mypy.</p>
<p>But how do I type a function that returns a decorator, so that the decorated class types correctly? E.g. how can I fix the code below, if the arguments to <code>decorator</code>
have types not associated with the type of the wrapped class?</p>
<pre class="lang-py prettyprint-override"><code>T = TypeVar('T', bound=Type[Any])
Decorator: TypeAlias = Callable[[T], T]
def decorator(...) -> Decorator[Any]:
def wrapper(cls: T) -> T:
# some code modifying cls depending on args to "decorator"
...
return cls
return wrapper
# now instances don't type-check
@decorator(...)
class Foo:
foo: int
</code></pre>
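<p>For reference, here is a runnable toy of what I am trying to achieve. From what I have read, annotating the return type directly as <code>Callable[[T], T]</code> with <code>T</code> left unsolved should make the returned callable itself generic, but I am not sure mypy accepts this, which is essentially my question (the <code>decorated</code> attribute is purely illustrative and mypy may flag it):</p>

```python
from typing import Callable, TypeVar

T = TypeVar("T", bound=type)

def decorator(flag: bool) -> Callable[[T], T]:
    # The returned callable should stay generic over the decorated class.
    def wrapper(cls: T) -> T:
        if flag:
            cls.decorated = True  # purely illustrative side effect
        return cls
    return wrapper

@decorator(True)
class Foo:
    foo: int = 0

print(Foo().foo)  # Foo should still be typed as Foo, not Any
```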
| <python><generics><python-decorators><mypy> | 2023-11-02 04:34:09 | 1 | 5,661 | shaunc |
77,406,961 | 1,085,068 | Merge a SQL table with a dataframe via pyodbc | <p>I'm trying to merge a SQL table to a dataframe object using pyodbc.</p>
<p>The merge will delete rows if there are no matches in the destination (SQL) table. The problem is that pyodbc seems to apply the merge row by row, so when it reaches the last row, all the rows updated or inserted before are deleted, because they don't match the key of the current row.</p>
<p>This doesn't work using <code>executemany</code>, even with the <code>fast_executemany</code> flag set to True (I was hoping it would, as I thought my dataframe would be merged as a whole, in bulk).</p>
<p>I tried to do the merge with a simple <code>execute</code>, creating as many placeholders as there are cells in my dataframe. This works, but it's very ugly, and I'm pretty sure there is a limit on the number of placeholders we can have in one statement.</p>
<p>I've made a simple code to illustrate that :</p>
<pre><code>import pandas as pd
import pyodbc
connection = pyodbc.connect(xxxxxxx, autocommit=True)
cursor = connection.cursor()
cursor.fast_executemany = True
cursor.execute("DELETE FROM [dbo].[MyTable]")
merge = """MERGE [dbo].[MyTable] TARGET USING (VALUES (?,?,?,?)) as SOURCE (Country,Region,Volume,NbClients)
ON TARGET.Country = SOURCE.Country AND TARGET.Region = SOURCE.Region
WHEN MATCHED THEN UPDATE SET TARGET.Volume = SOURCE.Volume, TARGET.NbClients = SOURCE.NbClients
WHEN NOT MATCHED BY TARGET THEN INSERT (Country,Region,Volume,NbClients) VALUES (SOURCE.Country, SOURCE.Region, SOURCE.Volume, SOURCE.NbClients)
WHEN NOT MATCHED BY SOURCE THEN DELETE;
"""
res = pd.DataFrame([['France', 'Finistere', 19488788.334505435, 13], ['France', 'Savoie', 11282506.25, 75], ['Maroc', 'Casablanca', 12454559.801253136, 15], ['Perou', 'Quito', 13059125.212926773, 49]])
# Won't work, only the last row of the dataframe will be in the table
cursor.executemany(merge, res.values.tolist())
res = pd.DataFrame([['France', 'Vendee', 1488788.3254, 56], ['Maroc', 'Casablanca', 42454559.801253136, 36]])
# Won't work, only the last row of the dataframe will be in the table
cursor.executemany(merge, res.values.tolist())
merge_flat = """MERGE [dbo].[MyTable] TARGET USING (VALUES (?,?,?,?), (?,?,?,?)) as SOURCE (Country,Region,Volume,NbClients)
ON TARGET.Country = SOURCE.Country AND TARGET.Region = SOURCE.Region
WHEN MATCHED THEN UPDATE SET TARGET.Volume = SOURCE.Volume, TARGET.NbClients = SOURCE.NbClients
WHEN NOT MATCHED BY TARGET THEN INSERT (Country,Region,Volume,NbClients) VALUES (SOURCE.Country, SOURCE.Region, SOURCE.Volume, SOURCE.NbClients)
WHEN NOT MATCHED BY SOURCE THEN DELETE;"""
params = [item for sublist in res.values.tolist() for item in sublist]
# Will work but need to have a placeholder for each cell
cursor.execute(merge_flat, *params)
</code></pre>
<p>The table is simply something like :</p>
<pre><code>CREATE TABLE [dbo].[MyTable](
[Country] [varchar](30) NOT NULL,
[Region] [varchar](50) NOT NULL,
[Volume] [numeric](24, 6) NULL,
[NbClients] [numeric](24, 10) NULL,
CONSTRAINT [PK_MyTable] PRIMARY KEY CLUSTERED
(
[Country] ASC,
[Region] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
</code></pre>
<p>So far the best solution seems to be a delete/insert, or maybe using a temporary table: insert my data there and run the merge against it. But I can't believe something that should be so simple doesn't work.</p>
<p>Am I missing something ?</p>
<p>Thanks for your help.</p>
| <python><pandas><pyodbc> | 2023-11-02 04:34:04 | 1 | 1,921 | Guillaume |
77,406,773 | 1,546,990 | Django: getting the value of a serializer's field within the serializer itself | <p>In my Django serializer, I have a validate() method.</p>
<p>In it, I want to access the value of one of the serializer's fields. How would I do this?</p>
<p>I'm using Python 3.10.2 and Django 3.2.23.</p>
<p>Previous answers here have suggested using <code>self.initial_data.get("field_name"))</code> but that doesn't work for my version of Django (message: AttributeError: 'MySerializer' object has no attribute 'initial_data').</p>
<pre><code>class MySerializer(serializers.ModelSerializer):
    class Meta:
        model = models.MyModel
        fields = ["my_field"]

    def validate(self, data):
        # Want to get the value of my_field here
</code></pre>
<p>Many thanks in advance for any assistance.</p>
| <python><django><django-rest-framework> | 2023-11-02 03:32:25 | 1 | 2,069 | GarlicBread |
77,406,710 | 3,446,927 | .python_packages and .venv directories prevent func azure functionapp publish | <p>I am trying to deploy a Python Azure Function using Azure Function Core Tools. When I run the publish command, I receive the following error:</p>
<pre><code>PS C:\[..]\func> func azure functionapp publish [function name] --build remote
Getting site publishing info...
Creating archive for current directory...
Performing remote build for functions project.
Deleting the old .python_packages directory
The file cannot be accessed by the system. : 'C:\[..]\func\.venv\lib64'
</code></pre>
<p>If I manually delete the <code>.python_packages</code> directory, I get:</p>
<pre><code>PS C:\[..]\func> func azure functionapp publish [function name] --build remote
Getting site publishing info...
Creating archive for current directory...
Performing remote build for functions project.
The file cannot be accessed by the system. : 'C:\[..]\func\.venv\lib64'
</code></pre>
<p>Then if I manually delete my <code>.venv</code> directory, the publish action is successful.</p>
<p>Is it possible to deploy <em>without</em> deleting these directories? If not, why is <code>func</code> unable to handle these actions as part of the publish process?</p>
| <python><azure><azure-functions> | 2023-11-02 03:08:04 | 1 | 539 | Joe Plumb |
77,406,653 | 4,582,026 | Tesseract not recognising text in image | <p>I am trying to use tesseract in python 3.11 to convert some images to text on Windows 11. I have tried preprocessing the images including morphing, enlarging, greyscaling and thresholding but nothing seems to work.</p>
<p>I have been chopping and changing my code but the below is where I have got to (path is the local path for each image) -</p>
<pre><code>import cv2
import pytesseract as tess
tess.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
img = cv2.imread(path)
greyscale = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
threshold = cv2.threshold(greyscale, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
data = tess.image_to_string(threshold, config='--psm 6')
print(data)
</code></pre>
<p>I'm open to using other libraries.</p>
| <python><opencv><computer-vision><ocr><tesseract> | 2023-11-02 02:48:43 | 2 | 549 | Vik |
77,406,413 | 2,975,438 | How can I speed up an analytic chatbot that's based on Langchain (with agents and tools) and Streamlit and disable its intermediate steps? | <p>I created an analytic chatbot using Langchain (with tools and agents) for the backend and Streamlit for the frontend. It works, but for some users' questions, it takes too much time to output anything. If I look at the output of intermediate steps, I can see that the chatbot tries to print out all relevant rows in the output. For example, below, the chatbot found 40 relevant comments and printed them out in one of its intermediate steps one by one (it takes up to one minute).</p>
<p><a href="https://i.sstatic.net/xyaVZl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xyaVZl.png" alt="enter image description here" /></a></p>
<p>My questions are:</p>
<ol>
<li>Is there any way to speed up this process?</li>
<li>How can I disable the intermediate output of the chatbot? (I already put <code>return_intermediate_steps=False</code>, <code>verbose=False</code>, and <code>expand_new_thoughts=False</code>, but the chatbot still shows intermediate steps.)</li>
</ol>
<p>Code for chatbot:</p>
<pre><code>
def load_data(path):
return pd.read_csv(path)
if st.sidebar.button('Use Data'):
# If button is clicked, load the EDW.csv file
st.session_state["df"] = load_data('./data/EDW.csv')
uploaded_file = st.sidebar.file_uploader("Choose a CSV file", type="csv")
if "df" in st.session_state:
msgs = StreamlitChatMessageHistory()
memory = ConversationBufferWindowMemory(chat_memory=msgs,
return_messages=True,
k=5,
memory_key="chat_history",
output_key="output")
if len(msgs.messages) == 0 or st.sidebar.button("Reset chat history"):
msgs.clear()
msgs.add_ai_message("How can I help you?")
st.session_state.steps = {}
avatars = {"human": "user", "ai": "assistant"}
# Display a chat input widget
if prompt := st.chat_input(placeholder=""):
st.chat_message("user").write(prompt)
llm = AzureChatOpenAI(
deployment_name = "gpt-4",
model_name = "gpt-4",
openai_api_key = os.environ["OPENAI_API_KEY"],
openai_api_version = os.environ["OPENAI_API_VERSION"],
openai_api_base = os.environ["OPENAI_API_BASE"],
temperature = 0,
streaming=True
)
max_number_of_rows = 40
agent_analytics_node = create_pandas_dataframe_agent(
llm,
st.session_state["df"],
verbose=False,
agent_type=AgentType.OPENAI_FUNCTIONS,
reduce_k_below_max_tokens=True, # to not exceed token limit
max_execution_time = 20,
early_stopping_method="generate", # will generate a final answer after the max_execution_time has been surpassed
# max_iterations=2, # to cap an agent at taking a certain number of steps
)
tool_analytics_node = Tool(
return_intermediate_steps=False,
name='Analytics Node',
func=agent_analytics_node.run,
description=f'''
This tool is useful when you need to answer questions about data stored in a pandas dataframe, referred to as 'df'.
'df' comprises the following columns: {st.session_state["df"].columns.to_list()}.
Here is a sample of the data: {st.session_state["df"].head(5)}.
When working with df, ensure not to output more than {max_number_of_rows} rows at once, either in intermediate steps or in the final answer. This is because df could contain too many rows, which could potentially overload memory, for example instead of `df[df['survey_comment'].str.contains('wet', na=False, case=False)]['survey_comment'].tolist()` use `df[df['survey_comment'].str.contains('wet', na=False, case=False)]['survey_comment'].head({max_number_of_rows}).tolist()`.
'''
)
tools = [tool_analytics_node]
chat_agent = ConversationalChatAgent.from_llm_and_tools(llm=llm, tools=tools, return_intermediate_steps=False)
executor = AgentExecutor.from_agent_and_tools(
agent=chat_agent,
tools=tools,
memory=memory,
return_intermediate_steps=False,
handle_parsing_errors=True,
verbose=False,
)
with st.chat_message("assistant"):
st_cb = StreamlitCallbackHandler(st.container(), expand_new_thoughts=False)
response = executor(prompt, callbacks=[st_cb])
st.write(response["output"])
</code></pre>
| <python><chatbot><streamlit><langchain><large-language-model> | 2023-11-02 01:10:19 | 1 | 1,298 | illuminato |
77,406,411 | 11,911,694 | TypeError: object AsyncMock can't be used in 'await' expression | <p>I created a pytest for a function but I get the following error when I try to run it:</p>
<pre><code>> return await cntrl.get_function_by_topic_by_method(url, method, *args, query_params=query_params, action=action, **kwargs)
E TypeError: object AsyncMock can't be used in 'await' expression
</code></pre>
<p>I am confused as to why this is happening: only awaitable objects can be awaited, and since I used <code>AsyncMock()</code> I expected it to work.</p>
<p>The function is mentioned below and it sits in <code>app/folder_market/controller/DQR.py</code>:</p>
<pre><code>import traceback
import logging
import httpx
from .utils.ApiHelper import ApiHelper
logger = logging.getLogger(__name__)
class DQR(object):
def __init__(self, config, *args, **kwargs):
self.methods = ['GET', 'PUT']
self.config = config
self.args = args
self.kwargs = kwargs
self.cai_usage = config['cai_usage']
self.topic = "DQR"
async def retrieve_something(self, *args, query_params=None, **kwargs):
try:
action = 'retrievSomething'
cntrl = ApiHelper(self.config)
url, method = cntrl.get_url_and_method(self.topic, action=action)
return await cntrl.get_function_by_topic_by_method(url, method, *args, query_params=query_params, action=action, **kwargs)
except (httpx.RequestError, httpx.TimeoutException, httpx.ConnectError, httpx.NetworkError, httpx.TransportError) as error:
# Log the full exception traceback for debugging
traceback.print_exc()
msg = "Error in " + __name__ + ", retrieve_data_collection_requests: " + str(error)
print(msg)
logger.error(msg)
</code></pre>
<p>and the test for the function sit in <code>app/tests/controller/test_DQR.py</code>:</p>
<pre><code>from unittest.mock import AsyncMock
import os
import sys
import pytest
sys.path.insert(0, os.path.abspath("."))
from folder_market.controller.DQR import DQR
from folder_market.utils.ConfigHelper import load_config
from folder_market.controller.utils.ConfigHelper import config_validator
from pytest_mock import MockerFixture
config = load_config('marketplace/config.yaml')
config_validator(config)
instance_data_collection = DQR(config=config)
# Helper function to assert response data
def assert_response_data(response_data):
assert len(response_data['objects']) > 1
assert all(item.get('id') for item in response_data['objects'])
@pytest.mark.asyncio
async def test_retrieve_something_param_id_success(mocker: MockerFixture):
# Create a mock of the 'ApiHelper' instance
mock_api_helper = mocker.patch(
"folder_market.controller.DQR.ApiHelper"
)
api_helper_instance = mock_api_helper.return_value
api_helper_instance.get_url_and_method.return_value = ("https://mocked-api-url", "GET")
expected_response_data = {
"processingTime": 486,
"offset": 0,
"limit": 50,
"totalCount": 1,
"objects": [
{
"id": "407225ad-7870-4678-8169-43829d80c4bc",
"refId": "d90574a4-417d-4c34-b9f1-4f2da5fc997a"
}]}
mock_http_client = AsyncMock()
mock_http_client.get.return_value.status_code = 200
mock_http_client.get.return_value.json.return_value = expected_response_data
api_helper_instance.get_function_by_topic_by_method.return_value = mock_http_client
requestId = "407225ad-7870-4678-8169-43829d80c4bc"
get_data_collection_request = await instance_data_collection.retrieve_data_collection_requests(query_params={"requestIds": requestId})
assert get_data_collection_request.status_code == 200
response_data = get_data_collection_request.json()
assert_response_data(response_data)
assert response_data['objects'][0]['id'] == requestId
</code></pre>
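<p>For illustration (my sketch, not from the original post): an <code>AsyncMock</code> must be <em>called</em> to produce an awaitable; awaiting the mock object itself raises exactly this <code>TypeError</code>. The names <code>helper</code> and <code>fetch</code> here are placeholders:</p>

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

async def demo():
    helper = MagicMock()
    # calling an AsyncMock returns a coroutine, so the *call* can be awaited
    helper.fetch = AsyncMock(return_value={"status": 200})
    return await helper.fetch()

print(asyncio.run(demo()))  # → {'status': 200}
```

<p>By contrast, configuring a sync mock to <em>return</em> an <code>AsyncMock</code> instance and then awaiting that instance fails with this error; whether that is what happens in the test above is for the asker to verify.</p>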
| <python><asynchronous><async-await><python-asyncio> | 2023-11-02 01:09:22 | 1 | 303 | Aastha Jha |
77,406,316 | 11,659,424 | How do you safely pass values to SQLite PRAGMA statements in Python? | <p>I'm currently writing an application in Python that stores its data in a SQLite database. I want the database file to be stored encrypted on disk, and I found the most common solution for doing this to be <a href="https://www.zetetic.net/sqlcipher/" rel="nofollow noreferrer">SQLCipher</a>. I added <a href="https://github.com/coleifer/sqlcipher3" rel="nofollow noreferrer">sqlcipher3</a> to my project to provide the DB-API, and got started. With SQLCipher, the database encryption key is provided in the form of a PRAGMA statement which must be provided before the first operation on the database is executed.</p>
<pre class="lang-sql prettyprint-override"><code>PRAGMA key='hunter2'; -- like this
</code></pre>
<p>When my program runs, it prompts the user for the database password. My concern is that since this is a source of user input, it's potentially vulnerable to SQL injection. For example, a naive way to provide the key might look something like this:</p>
<pre class="lang-py prettyprint-override"><code>from getpass import getpass
import sqlcipher3
con = sqlcipher3.connect(':memory:')
cur = con.cursor()
password = getpass('Password: ')
cur.execute(f"PRAGMA key='{password}';")
### do stuff with the unencrypted database here
</code></pre>
<p>If someone was to enter something like "<code>hunter2'; DROP TABLE secrets;--</code>" into the password prompt, the resulting SQL statement would look like this after substitution:</p>
<pre class="lang-sql prettyprint-override"><code>PRAGMA key='hunter2'; DROP TABLE secrets;--';
</code></pre>
<p>Typically, the solution to this problem is to use the DB-API's parameter substitution. From the <a href="https://docs.python.org/3/library/sqlite3.html#how-to-use-placeholders-to-bind-values-in-sql-queries" rel="nofollow noreferrer">sqlite3 documentation</a>:</p>
<blockquote>
<p>An SQL statement may use one of two kinds of placeholders: question marks (qmark style) or named placeholders (named style). For the qmark style, <em>parameters</em> must be a sequence whose length must match the number of placeholders, or a <code>ProgrammingError</code> is raised. For the named style, <em>parameters</em> must be an instance of a <code>dict</code> (or a subclass), which must contain keys for all named parameters; any extra items are ignored. Here’s an example of both styles:</p>
<pre class="lang-py prettyprint-override"><code>con = sqlite3.connect(":memory:")
cur = con.execute("CREATE TABLE lang(name, first_appeared)")
# This is the named style used with executemany():
data = (
{"name": "C", "year": 1972},
{"name": "Fortran", "year": 1957},
{"name": "Python", "year": 1991},
{"name": "Go", "year": 2009},
)
cur.executemany("INSERT INTO lang VALUES(:name, :year)", data)
# This is the qmark style used in a SELECT query:
params = (1972,)
cur.execute("SELECT * FROM lang WHERE first_appeared = ?", params)
print(cur.fetchall())
</code></pre>
</blockquote>
<p>This works as expected in the sample code from the docs, but when using placeholders in a PRAGMA statement, we get an <code>OperationalError</code> telling us there's a syntax error. This is the case for both types of parameter substitution.</p>
<pre class="lang-py prettyprint-override"><code># these will both fail
cur.execute('PRAGMA key=?;', (password,))
cur.execute('PRAGMA key=:pass;', {'pass': password})
</code></pre>
<p>I'm not sure where to go from here. If we actually enter our malicious string at the password prompt, it won't work, producing the following error:</p>
<pre>
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
sqlcipher3.ProgrammingError: You can only execute one statement at a time.
</pre>
<p>So is the "naive" code from earlier safe? I'm not confident saying the answer is "yes" just because the <em>one</em> malicious string I could come up with didn't work, but there doesn't seem to be a better way of doing this. The only other question on here I could find asking this received answers suggesting equivalent solutions (<a href="https://stackoverflow.com/q/51559620/11659424">python + sqlite insert variable into PRAGMA statement</a>). I'd also rather not use an ORM, especially if it's just for this one case. Any suggestions would be appreciated, thanks.</p>
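<p>One commonly used workaround (my sketch, not from the post) is to quote the value yourself using SQLite's own string-literal rule: a single quote is escaped by doubling it, and there are no other escape sequences inside <code>'...'</code> literals, so the doubled-quote form cannot terminate the literal early:</p>

```python
def quote_sqlite_string(value: str) -> str:
    # SQLite escapes a single quote inside a string literal by doubling it
    return "'" + value.replace("'", "''") + "'"

malicious = "hunter2'; DROP TABLE secrets;--"
print("PRAGMA key=%s;" % quote_sqlite_string(malicious))
# → PRAGMA key='hunter2''; DROP TABLE secrets;--';
```

<p>If I recall the SQLCipher docs correctly, it also accepts a raw key as hex (e.g. <code>PRAGMA key = "x'2DD2...'"</code> with 64 hex digits), which sidesteps quoting entirely since hex digits can't contain quotes.</p>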
| <python><sqlite><sql-injection><sqlcipher> | 2023-11-02 00:26:37 | 2 | 409 | Sam Zuk |
77,406,185 | 3,311,276 | How can I get both a rendered template html page and swagger documentation of same flask app with differnt endpoint? | <p>I have written this code:</p>
<pre><code>from flask import Flask, render_template, request
from flask_restx import Api, Resource
app = Flask(__name__)
api = Api(app, version='1.0', title='Sample API', description='A sample API', doc='/api')
# Sample data
config = {
'nodes': ['Node1', 'Node2', 'Node3'],
'files': {
'Node1': ['File1', 'File2'],
'Node2': ['File3', 'File4'],
'Node3': ['File5', 'File6']
}
}
@app.route('/')
def index():
return render_template('index.html', nodes=config['nodes'], files=[])
@api.route('/get_files')
class Files(Resource):
def post(self):
selected_node = request.form['selected_node']
files = config['files'].get(selected_node, [])
return {'files': files}
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>and I am trying to use this same Flask app to render the index.html at the root,
127.0.0.1:5000/ (doesn't work),</p>
<p>while
127.0.0.1:5000/api gives me the Swagger documentation (works).</p>
| <python><flask><flask-restx> | 2023-11-01 23:38:46 | 1 | 8,357 | Ciasto piekarz |
77,406,157 | 5,274,291 | What is the total memory overhead of creating a single byte in python? | <p>When I run the following code in Python:</p>
<pre class="lang-py prettyprint-override"><code>>>> import sys
>>> mybyte = bytes(1)
>>> sys.getsizeof(mybyte)
34
</code></pre>
<p>I'd ideally expect a size of around 1, because I created a variable to store a single byte that I don't necessarily want to be a string. I'd like to understand why it's returning a size of 34 bytes.</p>
<p>What data structures are adding overhead to this single-byte definition?</p>
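<p>To make the overhead visible, here is a CPython-specific sketch I'm adding for illustration: the constant part is the per-object header (reference count, type pointer, length, cached hash, plus a trailing NUL byte), and only the payload grows with the data:</p>

```python
import sys

for n in (0, 1, 10, 100):
    print(n, sys.getsizeof(bytes(n)))

# the reported size grows by exactly 1 per payload byte,
# so everything above that is fixed per-object overhead
overhead = sys.getsizeof(bytes(0))
print(sys.getsizeof(bytes(100)) - overhead)  # → 100
```

<p>The exact overhead figure varies by Python version and platform, but it is paid once per object, not once per byte.</p>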
| <python> | 2023-11-01 23:28:11 | 0 | 1,578 | João Pedro Schmitt |
77,406,003 | 832,490 | Expose interface to pybind11 | <p>I am trying to expose an interface (i.e., a class with no implementation) to Python using pybind11.</p>
<p>I am not sure whether pybind11 was built for this, because everything I saw in the documentation is about generating modules for Python use, not about embedding.</p>
<p>I tried this:</p>
<pre><code>py::object main = py::module::import("__main__");
py::object global = main.attr("__dict__");
py::class_<Person>(global, "Person", py::module_local()).def("say", &Person::say); // Crashes when I added this line
py::exec(R"(
class Rodrigo(Person):
def say(self):
print("Hello world from Python!")
rodrigo = Rodrigo()
rodrigo.say()
)");
</code></pre>
<p>But it crashes; the backtrace points to something in the GIL.</p>
<p>Is it possible to do what I am trying?</p>
| <python><c++><pybind11> | 2023-11-01 22:35:34 | 1 | 1,009 | Rodrigo |
77,405,678 | 4,338,776 | How to use optional parameters from decorated method in decorator method | <p>I have a function with an optional argument. How can I pass this argument into the decorator function?</p>
<pre><code> def decorator(self, func: Callable[[Any], Any]) -> Callable[[Any, Any], Any]:
def wrapper(*args: Any, **kwargs: Any) -> Any:
print(args) # does not contain the "optional_arg"
print(kwargs) # does not contain the "optional_arg" either
# How can I do something with the optional_arg in here?
return func(*args, **kwargs)
return wrapper
@decorator
def do_something(self, arg1: str, optional_arg="Hi") -> None:
# do something
pass
do_something("arg1")
</code></pre>
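<p>A sketch of one common approach (mine, not from the post): bind the call against the wrapped function's signature with <code>inspect</code>, then apply the declared defaults, so the wrapper sees <code>optional_arg</code> whether or not the caller passed it. Shown for a plain function; for a method, <code>self</code> simply arrives in <code>args</code>:</p>

```python
import functools
import inspect

def decorator(func):
    sig = inspect.signature(func)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        bound.apply_defaults()  # fills in optional_arg's default when omitted
        # stash it somewhere observable for this sketch
        wrapper.last_optional = bound.arguments.get("optional_arg")
        return func(*args, **kwargs)

    return wrapper

@decorator
def do_something(arg1, optional_arg="Hi"):
    return arg1

do_something("arg1")
print(do_something.last_optional)  # → Hi
do_something("arg1", optional_arg="Hello")
print(do_something.last_optional)  # → Hello
```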
| <python><python-3.x> | 2023-11-01 21:21:42 | 2 | 847 | Zigzagoon |
77,405,547 | 4,439,019 | Filtering JSON records - Python | <p>I am trying to filter out the records and show only the JSON-formatted records that match a specific column value, in this case <code>displayName</code>.</p>
<pre><code>data = {
"count": 10408,
"value": [
{
"id": 151,
"name": "Test",
"url": "https://url",
"build": {
"id": "649"
},
"isAutomated": true,
"owner": {
"displayName": "Test Username",
"id": "1234"
},
"project": {
"id": "123",
"name": "Research"
},
"startedDate": "2018-06-15T20:57:52.267Z",
"completedDate": "2018-06-15T20:57:57.813Z",
"state": "Completed",
"totalTests": 1,
"incompleteTests": 0,
"notApplicableTests": 0,
"passedTests": 1,
"unanalyzedTests": 0,
"revision": 4,
"webAccessUrl": "https://url",
"pipelineReference": {
"pipelineId": 649,
"stageReference": {},
"phaseReference": {},
"jobReference": {}
}
},
{
"id": 151,
"name": "Test2",
"url": "https://url",
"build": {
"id": "649"
},
"isAutomated": true,
"owner": {
"displayName": "Some other User",
"id": "1234"
},
"project": {
"id": "123",
"name": "Research"
},
"startedDate": "2018-06-15T20:57:52.267Z",
"completedDate": "2018-06-15T20:57:57.813Z",
"state": "Completed",
"totalTests": 1,
"incompleteTests": 0,
"notApplicableTests": 0,
"passedTests": 1,
"unanalyzedTests": 0,
"revision": 4,
"webAccessUrl": "https://url",
"pipelineReference": {
"pipelineId": 649,
"stageReference": {},
"phaseReference": {},
"jobReference": {}
}
}
]
}
def filterRecords():
filtered = [x for x in data if x["value"]["owner"]["displayName"] == "Test Username"]
print(filtered)
filterRecords()
</code></pre>
<p>But this is throwing: <code>TypeError: string indices must be integers</code></p>
| <python><json> | 2023-11-01 20:52:11 | 1 | 3,831 | JD2775 |
77,405,314 | 4,515,838 | multi-boundary API call not working in Python | <p>I have a working API call in Postman and I'm trying to replicate the call in Python. I started with the boilerplate code Postman provided and made some changes, but the API call is not working. I pulled a Wireshark log with a working call via Postman and compared it against my API call, and I think I may have found the issue, but I don't know how to fix it.</p>
<p>How can I update my Python code based on these findings below?</p>
<p>Here is a picture of the Wireshark call. The content-type of the first boundary is <code>application/json</code>.</p>
<p><a href="https://i.sstatic.net/cy7sm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cy7sm.png" alt="enter image description here" /></a></p>
<p>However, the actual call being made doesn't include the content-type <code>application/json</code> and I'm not sure if the JSON string is being passed correctly.</p>
<pre><code>>>> response.request.body
b'--affc9f680a99e3d8509e8c9f24b422b3\r\nContent-Disposition: form-data; name="documentAttributes"\r\n\r\n{"batchInfo":{"batchName":"YMTESTALIAS","createDateTime":"2023-05-22T14:00:00.000Z"},"batchDocumentInfo":{"documentTypeName":"Photo Identification","subject":"Alias Test","metadata":[{"fieldName":"MRN","fieldValue":"9305940"},{"fieldName":"ENCTR_NBR","fieldValue":"65013813"}]},"processQueue":"BATCH","indexImmediately":"true"}\r\n--affc9f680a99e3d8509e8c9f24b422b3\r\nContent-Disposition: form-data; name="documentContent"; filename="hello.tif"\r\nContent-Type: image/tiff\r\n\r\nII*\x002\x19\x00\x00\x80?\xe0@\x0
</code></pre>
<p>Here is the code</p>
<pre><code>import requests
import json
domain = 'cert'
client_mnemonic = 'xxxx_fl'
auth_token = 'Basic .....=='
file_path = 'C:/Users/system/Desktop/hello.tif'
url = f'http://1.1.1.1/docimaging-access-1.0/{domain}.{client_mnemonic}.cernerasp.com/service/documents'
batch_name = 'TESTALIAS'
document_type = 'Photo Identification'
subject = 'Alias Test'
mrn = '9305940'
fin = '65013813'
# build payload
payload = {
'documentAttributes': {
'batchInfo': {
'batchName': f'{batch_name}',
'createDateTime': '2023-10-31T14:00:00.000Z'
},
'batchDocumentInfo': {
'documentTypeName': f'{document_type}',
'subject': f'{subject}',
'metadata': [
{
'fieldName': 'MRN',
'fieldValue': f'{mrn}'
},
{
'fieldName': 'ENCTR_NBR',
'fieldValue': f'{fin}'
}
]
},
'processQueue':'BATCH',
'indexImmediately':'true'
}
}
headers = {
'Authorization': auth_token
}
files = {
'documentContent': ('hello.tif', open(file_path,'rb'), 'image/tiff')
}
response = requests.post(url, headers=headers, data=payload, files=files)
</code></pre>
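<p>For comparison, here is my sketch (placeholder URL and abbreviated payload, not the original call): <code>requests</code> lets each multipart part carry its own content type when the field is given as a <code>(filename, data, content_type)</code> tuple in <code>files=</code>, which is how the JSON part can be labeled <code>application/json</code> the way Postman's was:</p>

```python
import json
import requests

payload = {"batchInfo": {"batchName": "TESTALIAS"}}  # abbreviated
files = {
    # filename None → a plain form field, but with an explicit Content-Type header
    "documentAttributes": (None, json.dumps(payload), "application/json"),
    "documentContent": ("hello.tif", b"fake-tiff-bytes", "image/tiff"),
}
req = requests.Request("POST", "http://example.invalid/docs", files=files).prepare()
print(b"Content-Type: application/json" in req.body)  # → True
```

<p>Preparing the request locally like this lets you inspect <code>req.body</code> without hitting the network; passing the JSON through <code>data=</code> instead would flatten it into ordinary form fields.</p>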
| <python><http> | 2023-11-01 20:05:16 | 1 | 363 | Yitzhak |
77,405,230 | 7,045,589 | How can I test if a directory exists, and create a Path object with one line of code? | <p>I'm new to using the Python <code>pathlib</code> library, so please edit this question if I'm using incorrect terminology here and there...</p>
<p>I'm using the <code>Path</code> class to both create a directory if it doesn't exist, and then to instantiate a <code>Path</code> object. I currently have this as two lines of code like this because the <code>mkdir</code> method returns <code>NoneType</code>:</p>
<pre class="lang-py prettyprint-override"><code>my_path = Path("my_dir/my_subdir").mkdir(parents=True, exists_ok=True)
my_path = Path("my_dir/my_subdir")
</code></pre>
<p>Is there a way to do this with one line of code? Should I be taking a completely different approach?</p>
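<p>A sketch of a one-statement version (my suggestion, not from the post), using the walrus operator from Python 3.8+: <code>mkdir()</code> still returns <code>None</code>, but the assignment happens before the call, so the <code>Path</code> object is kept. The <code>tempfile</code> scratch directory is only there to make the sketch self-contained:</p>

```python
import tempfile
from pathlib import Path

base = Path(tempfile.mkdtemp())  # scratch location for the example
(my_path := base / "my_dir" / "my_subdir").mkdir(parents=True, exist_ok=True)
print(my_path.is_dir())  # → True
```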
| <python><pathlib> | 2023-11-01 19:49:11 | 2 | 309 | the_meter413 |
77,405,100 | 13,086,128 | Immortal Objects in Python: how do they work? | <p>Recently I came across</p>
<p><strong>PEP 683 – Immortal Objects, Using a Fixed Refcount</strong></p>
<p><a href="https://peps.python.org/pep-0683/" rel="nofollow noreferrer">https://peps.python.org/pep-0683/</a></p>
<p>What I figured out is that objects like <code>None</code>, <code>True</code>, and <code>False</code> will be shared across the interpreter instead of copies being created.</p>
<p>These were already immutable, weren't they?</p>
<p>Can someone please explain in easy terms. Thanks.</p>
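<p>One observable effect, in a version-dependent sketch I'm adding for illustration: on Python 3.12+ an immortal object reports a fixed, very large reference count that never changes, whereas on earlier versions the count fluctuates as references are created and dropped:</p>

```python
import sys

before = sys.getrefcount(None)
xs = [None] * 1000  # a thousand new references to None
after = sys.getrefcount(None)
# On Python 3.12+ (immortal None): before == after, both a huge sentinel value.
# On earlier versions: after is roughly before + 1000.
print(before, after)
```

<p>Immutability and immortality are different properties: immutable means the value can't change, immortal means the refcount bookkeeping is skipped so the object is never deallocated.</p>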
| <python><python-3.x><pep><python-3.12> | 2023-11-01 19:23:22 | 3 | 30,560 | Talha Tayyab |
77,405,058 | 10,250,223 | Decrypt a string in Python that was encrypted by AES-CBC using crypto.subtle in JavaScript | <p>I am dealing with a case where I need to encrypt the payload at client side using JavaScript via AES-CBC algorithm, but later decrypt it in backend using Python. The module used for encryption in JavaScript is crypto.subtle and works with most of the params being in bytes.</p>
<p>In Python 2 I'm using a module called PyCrypto, where the params are mostly strings. This discrepancy is causing issues with decryption, and the output I'm getting is not the same as what was encrypted. Please help me find what I'm doing wrong. Here's the code I'm using:</p>
<p>Encryption (JavaScript)</p>
<pre class="lang-js prettyprint-override"><code>async function encrypt(content, key) {
var byteKey = new TextEncoder().encode(key);
var byteContent = new TextEncoder().encode(content);
// Generate an AES key using the Crypto API
var AESKey = await crypto.subtle.importKey("raw", byteKey, "AES-CBC", false, ["encrypt"]);
// Encrypt using AES algorithm
var encryptedBytes = await crypto.subtle.encrypt(
{ name: "AES-CBC", iv: new Uint8Array(16) }, AESKey, byteContent);
// Convert the encrypted byte array to a Base64 string
var encryptedBase64 = btoa(String.fromCharCode.apply(null, new Uint8Array(encryptedBytes)));
// Return the encrypted string
return encryptedBase64;
}
</code></pre>
<p>Example:</p>
<pre class="lang-js prettyprint-override"><code>encrypt("A sample string", "samplekey1234567").then(res => console.log(res))
OUTPUT: YZGjFFgnngBBBTSm8euZEw==
</code></pre>
<p>Decryption (Python)</p>
<pre class="lang-py prettyprint-override"><code>from Crypto.Cipher import AES
def decrypt(content, key):
cipher = AES.new(key, AES.MODE_CBC, buffer(bytearray(16), 0,16))
return cipher.decrypt(content)
</code></pre>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>cipher = AES.new("samplekey1234567", AES.MODE_CBC, buffer(bytearray(16), 0,16))
cipher.decrypt("YZGjFFgnngBBBTSm8euZEw==".decode("ascii")) # Gives ValueError: Input strings must be a multiple of 16 in length (Why is this not an issue in JavaScript equivalent?)
cipher.decrypt("YZGjFFgnngBBBTSm8euZEw== ".decode("ascii"))
OUTPUT: 'H\x15\xeeMZ\xbd\xdaX\xe6U\xe27F\x9a\xb4e' # Not the original string
</code></pre>
| <javascript><python><python-2.7><encryption><cryptography> | 2023-11-01 19:14:31 | 1 | 533 | Neeraj Kumar |
77,404,986 | 10,431,629 | Combining rows in Pandas based on a single column when column numbers can change | <p>My problem is something like this:</p>
<p>I have a dataframe say: (column names can change and columns can get added too)</p>
<pre><code> df-in
A B C D E
U1 1 24 35 a
V1 9 d 65 j
V1 85 tv 107 k
T2 12 7 37 l
X 8 n 12 uk
Y 7 iu 120 pq
Y 13 k1 77 rp
</code></pre>
<p>Based on column A, I want to do the following:</p>
<p>If column A values for two rows are the same, I want to merge them into a single row, with values from other columns merged by a comma, else the row remains as is. So, my output data frame should look like this:</p>
<pre><code>df-out
A B C D E
U1 1 24 35 a
V1 9, 85 d,tv 65,107 j,k
T2 12 7 37 l
X 8 n 12 uk
Y 7,13 iu,kl 120,77 pq,rp
</code></pre>
<p>I am still struggling to achieve this. I know if there are fixed columns to the dataframe, we can do something like:</p>
<pre><code>df['B'] = df[['A', 'B', 'C', 'D', 'E']].groupby(['A'])['B'].transform(lambda x: ','.join(x))
</code></pre>
<p>And go on for other columns, but in my case I can have 100+ columns and many columns can be new, so it is not possible with this route.</p>
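<p>A sketch of the column-agnostic version the post is reaching for (my code, with the sample values abbreviated): group on <code>A</code> and apply one join-aggregation to every remaining column, however many there are and whatever they're named:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "A": ["U1", "V1", "V1", "Y", "Y"],
    "B": ["1", "9", "85", "7", "13"],
    "C": ["24", "d", "tv", "iu", "k1"],
})
# one aggregation applied to every non-key column
out = (
    df.groupby("A", as_index=False, sort=False)
      .agg(lambda s: ",".join(s.astype(str)))
)
print(out)
```

<p>Here <code>out</code> has one row per distinct <code>A</code> in first-appearance order, with <code>B == "9,85"</code> for <code>V1</code> and so on; <code>sort=False</code> keeps the original row order.</p>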
| <python><pandas><row><grouping><aggregation> | 2023-11-01 18:58:05 | 0 | 884 | Stan |
77,404,797 | 13,100,938 | Is there a way to group subplots onto a single row in order to stack them vertically? | <p>I have 3 subplots I need to display like so:
<a href="https://i.sstatic.net/wfTRG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wfTRG.png" alt="enter image description here" /></a>
I have been successful in creating 2 subplots side by side but I am struggling to find (or describe) a way to get the above structure.</p>
<p>I am using a <code>FuncAnimation</code> to animate the data as it is plotted.</p>
<pre class="lang-py prettyprint-override"><code>self.figure, (self.plot1, self.plot2) = plt.subplots(
1, 2, figsize=(16, 9), width_ratios=[3, 1])
</code></pre>
<p>I need to keep the width ratios the same, does anyone have a way I can achieve this?</p>
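<p>Assuming the target layout is one wide plot on the left with two stacked on the right (that's how I read the sketch), <code>plt.subplot_mosaic</code> can express it directly while keeping the 3:1 width ratio. This is my example with placeholder axes names; <code>FuncAnimation</code> can then target the returned figure and named axes:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, axs = plt.subplot_mosaic(
    [["main", "top"],
     ["main", "bottom"]],          # "main" spans both rows on the left
    figsize=(16, 9),
    gridspec_kw={"width_ratios": [3, 1]},
)
print(sorted(axs))  # → ['bottom', 'main', 'top']
```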
| <python><matplotlib><subplot> | 2023-11-01 18:21:12 | 0 | 2,023 | Joe Moore |
77,404,793 | 5,604,827 | Python loop through list, enumerate, and start and end at specific index, but do not modify the index | <p>I have a text file as a list, each line is an item in the list. This list contains multiple start tag and end tag (but otherwise not structured) and I must iterate through the file processing data between the start and end tags. Due to potential errors in the file, I must ignore that section of data if some data between the start tag and end tag is missing.</p>
<p>To do this, I first gather a list of valid start indexes and valid end indexes, ensuring there's same number of start and end indexes. Then I must iterate over those slices and check if there is missing data in between them and discard the start and end index if so. Problem is that due to later processing, I need to retain the actual index of the line, so I can't easily use slices, and I've thus far not discovered a good way to set a start and end location in a for loop that is enumerated.</p>
<p>So assume my indexes of the lines in list are :
start = [1,32,60,90]
end = [29,59,65,125]</p>
<p>So I now need to process filelist[1:29] and filelist[32:59] etc. but doing this won't work because inside the for loop, it has altered the indexes of the actual data such that line 32 would become line 0. I cannot have that because I need to store additional indexes found while processing that data for another part of my program. Yes I could account for that, but it's annoying and complicates readability and there's got to be a way to do this in Python - it would be super simple to do in C:</p>
<pre><code>saved_index=[]
for index in range(len(start)):
for i,l in enumerate(filelist[start[index]:end[index]]):
if "blah" in l:
saved_index.append(i) #this won't work i is index of subset not original list
</code></pre>
<p>So how can I iterate over only lines 1 to 29 and then 32 to 59, have the line index of filelist, and not have it altered by using a subset?</p>
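<p>A sketch of the usual fix (mine, with toy data): <code>enumerate</code> takes a <code>start=</code> argument, so the loop variable stays an index into the <em>original</em> list even while iterating over a slice:</p>

```python
filelist = ["x", "blah here", "c", "d", "blah again", "f"]
start = [1, 4]
end = [3, 6]

saved_index = []
for s, e in zip(start, end):
    # start=s keeps i as the index into filelist, not into the slice
    for i, line in enumerate(filelist[s:e], start=s):
        if "blah" in line:
            saved_index.append(i)

print(saved_index)  # → [1, 4]
```

<p>Alternatively, <code>for i in range(s, e): line = filelist[i]</code> avoids copying the slice altogether.</p>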
| <python><python-3.x><list><slice><enumerate> | 2023-11-01 18:20:09 | 2 | 407 | KyferEz |
77,404,706 | 5,052,482 | Is there an Apache Beam function to gather a fixed number of elements? | <p>I am fairly new to Apache Beam and Dataflow, and I want to gather or "batch together" <code>k</code> number of elements from a <code>PCollection</code> that has <code>n</code> elements. In this case <code>n</code> is not a fixed number and <code>n > k</code>. For example, suppose my <code>PCollection</code> has 1000 elements in it, I want to create batches of up to 50 random elements as the API that I am hitting has a limit of 50 elements per call. I understand that the batching process would be in parallel, and I can make let's say easily 20 parallel calls to my API as well.</p>
<p>Is there a built-in component/function that allows me to do so or can you suggest me a custom <code>DoFn</code> that allows me to do so?</p>
<p>I believe this can be solved by a combination of <code>Combine</code>, <code>Partition</code> or <code>GroupByKey</code> but I don't know how to put it all together. I am looking for a solution using Python.
</p>
<p>A little background on what I want to achieve by using Dataflow by creating a component for each step (if it helps):</p>
<ol>
<li>Read a CSV file from a GCS bucket. The CSV file contains paths to numerous text files that are also on GCS.</li>
<li>Read each text file from the GCS bucket</li>
<li>Break down the raw text file into chunks of 100 characters</li>
<li>Collect 50 chunks together and call an API using the collected 50 chunks as a single request (this is the step where I need help)</li>
<li>Save the result of the API onto a database.</li>
</ol>
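<p>For what the batching step does conceptually, here is a plain-Python sketch (not Beam code; I'm adding it for illustration). In the Beam Python SDK itself, <code>beam.BatchElements</code> (for unkeyed elements) and <code>beam.GroupIntoBatches</code> (for keyed ones) provide this buffering as ready-made transforms, though checking the docs for your SDK version is advised:</p>

```python
from typing import Iterable, Iterator, List

def batch(items: Iterable, k: int) -> Iterator[List]:
    """Yield lists of up to k items - the buffering a batching DoFn performs."""
    buf: List = []
    for item in items:
        buf.append(item)
        if len(buf) == k:
            yield buf
            buf = []
    if buf:  # flush the partial batch at the end
        yield buf

print(list(batch(range(7), 3)))  # → [[0, 1, 2], [3, 4, 5], [6]]
```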
| <python><google-cloud-dataflow><apache-beam><apache-beam-io> | 2023-11-01 18:06:06 | 1 | 2,174 | Jash Shah |
77,404,689 | 11,001,783 | How to reformat spreadsheet tables by adding an extra column using the cell value | <p>I have a system file with format like this:</p>
<pre><code> Col 1 Col 2 Col 3
Row1: Employee Standard Report
Row2: Effective Date 11/01/2023
Row3: Employee_ID Name Gender
Row4: 0001 A F
Row5: 0002 B M
Row6: 0003 C F
</code></pre>
<p>The first two rows reflect the name of the report and the date the report run. So I'd like to remove the headers, and insert the date as one extra column to reformat it like:</p>
<pre><code> Employee_ID Effective_Date Name Gender
0001 11/01/2023 A F
0002 11/01/2023 B M
0003 11/01/2023 C F
</code></pre>
<p>Is there a way to extract the value for effective date and insert the date as a column?</p>
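<p>A sketch of one way to do this (mine; I'm assuming a tab-separated export, so adjust <code>sep</code> to match the real file): read the first rows once to grab the date, then re-read with the third row as the header and insert the date as a new column:</p>

```python
import io
import pandas as pd

raw = io.StringIO(            # stand-in for the real system file
    "Employee Standard Report\t\t\n"
    "Effective Date\t11/01/2023\t\n"
    "Employee_ID\tName\tGender\n"
    "0001\tA\tF\n"
    "0002\tB\tM\n"
)
# row index 1, column index 1 holds the date in this layout
date = pd.read_csv(raw, sep="\t", header=None, nrows=2).iloc[1, 1]
raw.seek(0)
# header=2 makes the third physical row the column names
df = pd.read_csv(raw, sep="\t", header=2, dtype={"Employee_ID": str})
df.insert(1, "Effective_Date", date)
print(df)
```

<p><code>dtype={"Employee_ID": str}</code> keeps the leading zeros that would otherwise be lost to integer parsing.</p>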
| <python> | 2023-11-01 18:02:13 | 0 | 359 | Sandy |
77,404,609 | 11,159,734 | Postgres/VectorPG expects string for some reason | <p>Basically I want to adapt this Colab Notebook from Google for my own project: <a href="https://colab.research.google.com/github/GoogleCloudPlatform/python-docs-samples/blob/main/cloud-sql/postgres/pgvector/notebooks/pgvector_gen_ai_demo.ipynb#scrollTo=hZKe_9qeRdMH" rel="nofollow noreferrer">https://colab.research.google.com/github/GoogleCloudPlatform/python-docs-samples/blob/main/cloud-sql/postgres/pgvector/notebooks/pgvector_gen_ai_demo.ipynb#scrollTo=hZKe_9qeRdMH</a></p>
<p>I created a PostgreSQL database (Cloud SQL) on GCP and created a table using the following code:</p>
<pre><code>import os
import asyncio
import asyncpg
from google.cloud.sql.connector import Connector
from pgvector.asyncpg import register_vector
async def main():
loop = asyncio.get_running_loop()
async with Connector(loop=loop) as connector:
# Create connection to Cloud SQL database
conn: asyncpg.Connection = await connector.connect_async(
# Cloud SQL instance connection name
f"{project_id}:{region}:{instance_name}",
"asyncpg",
user=f"{database_user}",
password=f"{database_password}",
db=f"{database_name}"
)
await conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
await register_vector(conn)
try:
await conn.execute("DROP TABLE my_table")
print("Table dropped successfully!")
except:
print("Table does not exist")
# Create the `my_table` table.
await conn.execute("""CREATE TABLE my_table(
chunk_id TEXT PRIMARY KEY,
file_name TEXT,
content TEXT,
contentVector vector(1536))""")
print("Table created successfully!")
await conn.close()
</code></pre>
<p>This code works fine. The table is successfully created. Note that I <strong>specified</strong> the contentVector to be of type <strong>"vector"</strong>. And I also enable the vector extension as shown in the colab tutorial.
If I retrieve the schema of my table I get back the following:</p>
<pre><code><Record column_name='contentvector' data_type='USER-DEFINED'>
<Record column_name='chunk_id' data_type='text'>
<Record column_name='file_name' data_type='text'>
<Record column_name='content' data_type='text'>
</code></pre>
<p>I think it's a bit weird that it now states <strong>data_type='USER-DEFINED'</strong> instead of 'vector', but maybe it's supposed to work that way; I don't know.
Anyway, I tried to insert some data from a JSON file with the following code:</p>
<pre><code># Same connection stuff as shown above
insert_statement = """
INSERT INTO my_table(chunk_id, file_name, content, contentVector)
VALUES($1, $2, $3, $4)
ON CONFLICT (chunk_id)
DO NOTHING
"""
for record in data:
# Execute the INSERT statement
await conn.execute(insert_statement, record['chunk_id'], record['file_name'], record['content'], np.array(record['contentVector']))
</code></pre>
<p>I tried with and without the np.array part (np.array part was done in the colab tutorial as well). But I get the following error:</p>
<pre><code>File "asyncpg/protocol/prepared_stmt.pyx", line 197, in asyncpg.protocol.protocol.PreparedStatementState._encode_bind_msg
asyncpg.exceptions.DataError: invalid input for query argument $4: array([ 0.02754428, 0.0151995 , 0.0026... (expected str, got ndarray)
</code></pre>
<p>If I don't transform my data using np.array it says "got list" instead of "got ndarray".
But the biggest question to me is <strong>why</strong> it is expecting a <strong>string</strong>. I have no idea how this vector DB magic works, but I'm pretty sure that a str can't be the right data type for this.</p>
| <python><postgresql><google-cloud-platform> | 2023-11-01 17:47:51 | 0 | 1,025 | Daniel |
77,404,571 | 11,233,365 | How to load only chosen frames from TIFF image stack into memory, without having to load everything | <p>I am working with large TIFF image stacks that are equivalent to NumPy arrays on the order of <code>(300, 2048, 2048)</code>. I am interested in writing a function that allows me to quickly pick <code>n</code> chosen slices from the array for plotting and comparison, to see how the image has evolved. However, most of the methods I am aware of involve having to load the entire thing to memory first, which my desktop cannot handle.</p>
<p>Could you advise on how I could optimise my code in order to quickly identify slices of the array within the image stack to load and plot, <em>without having to load the whole stack to memory first</em>?</p>
<p><strong>Working Attempt:</strong><br />
The advice provided by @Malcolm below worked perfectly, allowing me to load the image stack metadata and individual slices without having to load the whole stack into memory. The <code>key=</code> argument in <code>io.imread()</code> allows for individual slices of the image stack to be loaded, and the <code>TiffFile()</code> class(?) allows for the file metadata to be loaded without loading the actual array.</p>
<pre><code># Import relevant modules
import matplotlib.pyplot as plt
import numpy as np
from tifffile import TiffFile
from skimage import io
# Load file
file = "example.tiff" # Array of dimensions (300, 2048, 2048)
stk = TiffFile(file) # Load file metadata
# Check number of figures to plot in subplot
frm_sep = 10 # Frames between images
num_frm = 6
# Create figure and subplots
fig, axs = plt.subplots(nrows=1, ncols=num_frm, sharey=True)
for i in range(num_frm):
# Load image reference image to adjust intensities
if i == 0:
img_ref = io.imread(fft, key=0)
v_max=np.percentile(np.ravel(img_ref), 99)
else: pass
# Plot subplot
ax = axs[i]
ax.imshow(
io.imread(fft, key=(i * frm_sep)),
vmin=0,
vmax=v_max
)
ax.set_title(f"Frame {i * frm_sep}")
</code></pre>
<p><strong>My Original Attempt:</strong></p>
<pre><code># Import relevant modules
from skimage import io
import dask.array as da
import matplotlib.pyplot as plt
import numpy as np

# Load file
file = "example.tiff"  # Array of dimensions (300, 2048, 2048)
img_stk = da.from_array(io.imread(file), chunks=(1, 512, 512))  # Tried using a dask array for lazy loading

# Load image stack to array
frm_sep = 10  # Frames separating images
num_frm = 6  # Number of frames to show

# Create figure to plot these chosen frames
fig, axs = plt.subplots(nrows=1, ncols=num_frm, sharey=True)
for i in range(num_frm):
    # Display on subplot
    axs[i].imshow(img_stk[i * frm_sep],
                  vmin=0,
                  vmax=np.percentile(np.ravel(img_stk[0]), 99)
                  )
    # Set subplot title
    axs[i].set_title(f"Frame {i * frm_sep}")
</code></pre>
<p>The code works, but the loading of the image stack takes a few minutes each time, which to me indicates that I am loading the whole stack instead of my chosen slices.</p>
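<p>The lazy-slicing idea the working attempt relies on can be illustrated without TIFF at all: a NumPy memmap also reads only the frames you index (a generic sketch with a raw binary file, not TIFF-specific):</p>

```python
import numpy as np

# Write a tiny (2, 4, 4) "stack" to disk, then map it without loading it all.
arr = np.arange(2 * 4 * 4, dtype=np.uint16).reshape(2, 4, 4)
arr.tofile("stack.raw")

mm = np.memmap("stack.raw", dtype=np.uint16, mode="r", shape=(2, 4, 4))
frame = np.array(mm[1])  # only frame 1 is materialised in memory
print(frame[0, 0])  # 16
```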
| <python><matplotlib><multidimensional-array><lazy-loading><scikit-image> | 2023-11-01 17:41:40 | 0 | 301 | TheEponymousProgrammer |
77,404,543 | 945,511 | Find whether a float value is within categorical value using polars dataframe | <p>I'm trying to find whether a float value "real_value" falls within the interval "categorical_value" using a polars dataframe. First, I tried to cast the "categorical_value" column to the categorical datatype and then look for "real_value" within "categorical_cast".<br />
The result should be <strong>is_in_cat=[True, False, True]</strong>, but I always get <strong>[False, False, False]</strong></p>
<pre><code>import polars as pl

df = pl.DataFrame({
    "real_value": [0.5, 2.7, 3.9],
    "categorical_value": ["(0,1.1]", "(1.1,2.6]", "(2.6,4.5]"],
})  # the expected response: [True, False, True]

df = df.with_columns(
    pl.col("categorical_value").cast(pl.Categorical).alias("categorical_cast")
)

df = df.with_columns(
    pl.col("real_value").is_in(pl.col("categorical_cast")).alias("is_in_cat")
)

print(df)
</code></pre>
<p><a href="https://i.sstatic.net/HlCe5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HlCe5.png" alt="enter image description here" /></a></p>
| <python><python-polars> | 2023-11-01 17:35:52 | 2 | 693 | Carlitos Overflow |
77,404,459 | 3,574,603 | Python3 + Digital Ocean: No module named 'torch' | <p>I am trying to install torch on a Digital Ocean Ubuntu droplet. I am following <a href="https://www.digitalocean.com/community/tutorials/how-to-install-and-use-pytorch" rel="nofollow noreferrer">this</a>:</p>
<pre><code>mkdir ~/pytorch
mkdir ~/pytorch/assets
cd ~/pytorch
python3 -m venv pytorch
source pytorch/bin/activate
</code></pre>
<p>Running the command <code>pip install torch==1.7.1+cpu torchvision==0.8.2+cpu -f https://download.pytorch.org/whl/torch_stable.html</code> results in the error:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement torch==1.7.1+cpu (from versions: 1.13.0, 1.13.0+cpu, 1.13.0+cu116, 1.13.0+cu117, 1.13.0+cu117.with.pypi.cudnn, 1.13.1, 1.13.1+cpu, 1.13.1+cu116, 1.13.1+cu117, 1.13.1+cu117.with.pypi.cudnn, 2.0.0, 2.0.0+cpu, 2.0.0+cpu.cxx11.abi, 2.0.0+cu117, 2.0.0+cu117.with.pypi.cudnn, 2.0.0+cu118, 2.0.1, 2.0.1+cpu, 2.0.1+cpu.cxx11.abi, 2.0.1+cu117, 2.0.1+cu117.with.pypi.cudnn, 2.0.1+cu118, 2.0.1+rocm5.3, 2.0.1+rocm5.4.2, 2.1.0, 2.1.0+cpu, 2.1.0+cpu.cxx11.abi, 2.1.0+cu118, 2.1.0+cu121, 2.1.0+cu121.with.pypi.cudnn, 2.1.0+rocm5.5, 2.1.0+rocm5.6)
ERROR: No matching distribution found for torch==1.7.1+cpu
</code></pre>
<p>So I have tried both the following instead (which seem to run successfully):</p>
<pre><code>pip3 install torch torchvision
pip install torch torchvision
</code></pre>
<p>However when I try importing torch in python, I get an error:</p>
<pre><code>>>> import torch
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'torch'
</code></pre>
<p>Why can't Python find torch? What do I need to do to ensure I can use the module?</p>
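<p>A quick, generic diagnostic (not specific to Digital Ocean) is to confirm that the interpreter you run is the one pip installed into; running pip as <code>python -m pip install ...</code> guarantees they match:</p>

```python
import sys

# Inside an activated venv, sys.prefix points at the venv, while
# sys.base_prefix points at the system Python; pip installs into sys.prefix.
in_venv = sys.prefix != sys.base_prefix
print(sys.executable, in_venv)
```

<p>If this prints <code>False</code> after <code>source pytorch/bin/activate</code>, the install and the import are using different interpreters.</p>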
| <python><pip><torch> | 2023-11-01 17:22:05 | 1 | 3,678 | user3574603 |
77,404,367 | 2,100,783 | Adding leading zeros to extracted month from date on SQLAlchemy | <p>I'm currently have this code working without issues:</p>
<p><code>cast(func.extract('month', cast(metric.table.c[date_field], Date)), String(2))</code></p>
<p>What I want to achieve is to have leading zeros on months that are composed of single digits.</p>
<p>I tried <code>.zfill(2)</code>, which gave me the error:
"zfill cannot be applied to CASE or Comparable object."</p>
<p>I also tried an <code>if len(...)</code> check to prepend <code>'0'</code>, and got:
"Boolean value of this clause is not defined"</p>
<p>What would be the correct way to pad that string with zeros?</p>
<p>Thanks!
Edit: some more context, the DB is DuckDB (Postgres).</p>
<pre><code> elif agg == DATE_AGG.MONTHLY.value:
select_field = (cast(func.extract('year', cast(metric.table.c[date_field], Date)), String(5))
+ ' '
+ cast(func.extract('month', cast(metric.table.c[date_field], Date)), String(3))
).label("datefield")
order_field = [func.extract('year', cast(metric.table.c[date_field], Date)),
func.extract('month', cast(metric.table.c[date_field], Date))]
return dict(select_val=select_field, order_by=order_field)
</code></pre>
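<p>One candidate is to do the padding in SQL rather than in Python: <code>lpad</code> exists in both PostgreSQL and DuckDB. A sketch using a stand-in column (<code>metric.table.c[date_field]</code> from the snippet above would take its place):</p>

```python
from sqlalchemy import Date, String, cast, column, func

date_col = column("created_at")  # stand-in for metric.table.c[date_field]
month_str = func.lpad(
    cast(func.extract("month", cast(date_col, Date)), String),
    2,
    "0",
)
print(month_str)  # renders an lpad(CAST(extract(...) ...)) expression
```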
| <python><python-3.x><sqlalchemy> | 2023-11-01 17:05:48 | 1 | 582 | Alejandro |
77,404,317 | 653,311 | Correct way to use multiple inheritance with Qt (pyside6) | <p>I'm trying to use multiple inheritance with my own and Qt-derived classes. But I am faced with the fact that the <code>__init__()</code> method from my class is not called when I use <code>super()</code> to call <code>__init__()</code> for the base classes. I suppose that's because Qt classes do not use <code>super()</code> in their constructors.</p>
<p>The example that does not work properly ("BASE INIT" was never printed):</p>
<pre class="lang-py prettyprint-override"><code>class OptionBase(QWidget):
    def __init__(self, *args, **kwargs):
        print("BASE INIT", self)
        super().__init__(*args, **kwargs)

#----------------------------------------------------------------------
class OptionLineEdit(QLineEdit, OptionBase):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
</code></pre>
<p>I've tried several variants (changing the ordering of the base classes, using a metaclass, etc.). But I only got either a core dump or some strange Python errors like <code>"RuntimeError: '__init__' method of object's base class (OptionLineEdit) not called"</code> for the line where the <code>print()</code> statement is located.</p>
<p>The only way I managed to make it to work is the following:</p>
<pre class="lang-py prettyprint-override"><code>class OptionBase(QWidget):
    def __init__(self, *args, **kwargs):
        print("BASE INIT", self)

#----------------------------------------------------------------------
class OptionLineEdit(QLineEdit, OptionBase):
    def __init__(self, *args, **kwargs):
        QLineEdit.__init__(self, *args, **kwargs)
        OptionBase.__init__(self, *args, **kwargs)
</code></pre>
<p>So, I don't call any further <code>__init__</code> in my base class (to avoid "double initialization" error). And I have to call all <code>__init__</code> manually in the child class.</p>
<p>Is this the correct way in such a situation, or am I missing something?</p>
<p>P.S. The MRO looks correct for me:</p>
<pre><code><class 'ui.widgets.option_lineedit.OptionLineEdit'>
<class 'PySide6.QtWidgets.QLineEdit'>
<class 'ui.widgets.option_base.OptionBase'>
<class 'PySide6.QtWidgets.QWidget'>
<class 'PySide6.QtCore.QObject'>
<class 'PySide6.QtGui.QPaintDevice'>
<class 'Shiboken.Object'>
<class 'object'>
</code></pre>
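<p>For reference, the symptom can be reproduced without Qt at all: in cooperative multiple inheritance, any class that does not call <code>super().__init__()</code> stops the chain, so classes later in the MRO never get initialised (a minimal stdlib sketch):</p>

```python
class NonCooperative:            # plays the role of QLineEdit
    def __init__(self):
        self.inits = getattr(self, "inits", []) + ["NonCooperative"]
        # no super().__init__() -> the MRO chain stops here

class MyBase:                    # plays the role of OptionBase
    def __init__(self):
        self.inits = getattr(self, "inits", []) + ["MyBase"]
        super().__init__()

class Child(NonCooperative, MyBase):
    pass

c = Child()
print(c.inits)  # ['NonCooperative'] because MyBase.__init__ never ran
```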
| <python><python-3.x><pyside><multiple-inheritance><pyside6> | 2023-11-01 16:59:51 | 0 | 10,221 | GrAnd |
77,404,283 | 8,271,728 | How to debug a "Connection reset by peer" issue in urllib3 when upgrading from v1 to v2 | <p>Does anyone have any pointers or advice on how to get to the bottom of a "Connection reset by peer" issue in Python with urllib3?</p>
<p>I have been using urllib3 version 1 for years, but I have a project where, when I swap from urllib3==1.26.18 to urllib3>=2.0.0, requests that were working now fail.</p>
<p>In the real example the requests use the requests package with HTTPBasicAuth but even if I strip that out and use something as basic as the below I hit the issue.</p>
<pre><code>import urllib3
c = urllib3.PoolManager()
url = "HTTPS-endpoint-here"
r = c.request('GET', url)
</code></pre>
<p>The problem appears to be with one site, but having run it through SSL Labs I've confirmed that the certificates on the site are installed okay and that it supports TLS 1.2 as well as TLS 1.1. It doesn't support TLS 1.3, but that's okay as urllib3 v2 supports 1.2 by default.</p>
<p>I've tested using Python 3.11.1 and 3.11.2 on Ubuntu 20.04 and 22.04.</p>
<p>I didn't have pyOpenSSL installed, but I've since installed it with no luck.</p>
<p>If I run <code>openssl version</code> I can see I'm using <code>OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)</code></p>
<p><code>python -m requests.help</code></p>
<p>currently returns</p>
<pre><code>{
"chardet": {
"version": null
},
"charset_normalizer": {
"version": "2.0.10"
},
"cryptography": {
"version": ""
},
"idna": {
"version": "3.3"
},
"implementation": {
"name": "CPython",
"version": "3.11.1"
},
"platform": {
"release": "5.15.0-88-generic",
"system": "Linux"
},
"pyOpenSSL": {
"openssl_version": "",
"version": null
},
"requests": {
"version": "2.31.0"
},
"system_ssl": {
"version": "1010106f"
},
"urllib3": {
"version": "2.0.7"
},
"using_charset_normalizer": true,
"using_pyopenssl": false
}
</code></pre>
<p>As I can make requests to other HTTPS endpoints, including HTTPBin with basic auth, the issue must be with the endpoint I'm requesting (a client's), but I'm at a bit of a loss as to what to look at next to try to work out what's wrong.</p>
<p>If anyone has any suggestions I would appreciate it.</p>
<p><strong>Update</strong></p>
<p>Here is an example that can connect with urllib3 version 1.26.18, but if you then upgrade to 2.0.7 you immediately receive the "Connection reset by peer" exception</p>
<pre><code>import urllib3
c = urllib3.PoolManager()
url = "https://api.hicargo.com/BoxDocs/UK/JobDocs/HE630819/99054-77 4500187758.pdf"
r = c.request("GET", url)
print(r.status)
</code></pre>
<p>That end-point requires authentication so a "HTTP 401" is the expected response.</p>
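<p>Since urllib3 v2 raised the default TLS floor and cipher security level, one commonly suggested experiment is to hand the pool a custom SSL context (a sketch; whether <code>SECLEVEL=1</code> is actually what this endpoint needs is an assumption to verify):</p>

```python
import ssl

import urllib3

ctx = ssl.create_default_context()
ctx.set_ciphers("DEFAULT@SECLEVEL=1")   # re-enable legacy ciphers OpenSSL 3 disables
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

http = urllib3.PoolManager(ssl_context=ctx)
# r = http.request("GET", "https://the-problem-endpoint/")
```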
| <python><ssl><https><urllib3> | 2023-11-01 16:52:58 | 0 | 931 | Steve Mapes |
77,404,219 | 13,142,245 | Pydantic: custom field constraints | <p>Using Pydantic, how can I enforce custom constraints? For example, suppose the below function determined if an input was valid</p>
<pre><code>def valid(x):
    if not isinstance(x, str):
        return False
    return x.isnumeric() and len(x) == 3
</code></pre>
<p>Essentially I want strings that only contain numbers and the length must be exactly 3 characters.</p>
<p>Is this possible directly? If not can I instead chain other pydantic methods?</p>
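<p>For reference, a sketch of one way this could look in Pydantic v2 with <code>field_validator</code> (in v1 the decorator is <code>validator</code>; a pattern constraint such as <code>Field(pattern=r'^\d{3}$')</code> would also cover this case):</p>

```python
from pydantic import BaseModel, field_validator

class Item(BaseModel):
    code: str

    @field_validator("code")
    @classmethod
    def exactly_three_digits(cls, v: str) -> str:
        if not (v.isnumeric() and len(v) == 3):
            raise ValueError("must be a string of exactly 3 digits")
        return v

print(Item(code="123"))
```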
| <python><pydantic> | 2023-11-01 16:41:10 | 1 | 1,238 | jbuddy_13 |
77,404,194 | 5,662,005 | Databricks parallelize file system operations | <p>This question applies broadly but the specific use case is concatenating all fragmented files from a number of directories.</p>
<p>The crux of the question is optimizing/inspecting parallelism in how Databricks performs file system operations.</p>
<p>Notes:</p>
<ul>
<li><p>The cluster has 128 cores for the driver and 1 worker with 8. The rationale is that file system operations don't run on executors, so those can be throttled.</p>
</li>
<li><p>All files in an external s3 bucket, not dbfs.</p>
</li>
</ul>
<pre><code> fragmented_files/
entity_1/
fragment_1.csv
...
fragment_n.csv
...
entity_n/
fragment_1.csv
...
fragment_n.csv
merged_files/
entity_1/
merged.csv
...
entity_n/
merged.csv
</code></pre>
<p>I have working code, the gist of which is</p>
<pre><code>def concat_files(fragged_dir, new):
    with open(new, 'w') as nout:
        for orig_frag_file in fragged_dir:
            with open(orig_frag_file) as o_file:
                nout.write(o_file.read())

with concurrent.futures.ThreadPoolExecutor() as executor:
    # each item of the iterable is a (fragged_dir, new) pair
    results = executor.map(lambda pair: concat_files(*pair), all_directories_with_fragmented_files)
</code></pre>
<p>Questions:</p>
<ul>
<li><p>For file system operations, or anything that does not show up in the Spark UI, how can I verify that I'm actually using all the driver cores, rather than just queueing everything up to run on one?</p>
</li>
<li><p>How would ThreadPoolExecutor vs. ProcessPoolExecutor vary here?</p>
</li>
<li><p>Is there an advantage to using the dbutils api vs. regular python?</p>
</li>
</ul>
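<p>On the first bullet, one generic way to check concurrency outside the Spark UI is to record which worker handled each task (a stdlib sketch, not Databricks-specific):</p>

```python
import concurrent.futures
import threading
import time

def work(i):
    time.sleep(0.01)  # stand-in for file I/O
    return threading.current_thread().name

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as ex:
    workers = set(ex.map(work, range(32)))

print(len(workers))  # more than 1 means tasks really ran on multiple threads
```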
| <python><databricks><aws-databricks><dbutils> | 2023-11-01 16:36:24 | 0 | 3,899 | Error_2646 |
77,403,801 | 2,386,605 | Make a httpx GET request with data body | <p>When I do a curl request for the GET endpoint of a REST API, I would do it like this</p>
<pre><code>curl -X 'GET' \
'http://localhost:8000/user/?limit=10' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '[
{
"item_type": "user",
"skip": 0
}
]'
</code></pre>
<p>Directly translated into <code>requests</code>, I would get</p>
<pre><code>import requests

headers = {
    'accept': 'application/json',
    'Content-Type': 'application/json',
}

params = {
    'limit': '10'
}

json_data = [
    {
        'item_type': 'user',
        'skip': 0,
    },
]

response = requests.get('http://localhost:8000/user/', params=params, headers=headers, json=json_data)

# Note: json_data will not be serialized by requests
# exactly as it was in the original request.
#data = '[\n {\n "item_type": "document",\n "skip": 0\n }\n]'
#response = requests.get('http://localhost:8000/user/', params=params, headers=headers, data=data)
</code></pre>
<p>Now for my test environment I am going to use <code>httpx</code>'s <code>AsyncClient</code>, but I cannot find a way to use the data part, i.e., <code>[{"item_type": "user", "skip": 0}]</code>, in a get request.</p>
<p>It would be terrific if you know how to address this.</p>
| <python><curl><python-requests><httpx> | 2023-11-01 15:34:48 | 1 | 879 | tobias |
77,403,588 | 3,783,002 | Can't read and query graphml file with lxml | <p>I have an XML (GraphML) file which is loosely defined as follows:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="utf-8"?>
<graphml xmlns:x="http://www.yworks.com/xml/yfiles-common/markup/3.0" xmlns:y="http://www.yworks.com/xml/yfiles-common/3.0" xmlns:sys="http://www.yworks.com/xml/yfiles-common/markup/primitives/2.0" xsi:schemaLocation="http://graphml.graphdrawing.org/xmlns http://www.yworks.com/xml/schema/graphml.wpf/3.0/ygraphml.xsd " xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://graphml.graphdrawing.org/xmlns">
<key .... />
<key .... />
<key .... />
<key .... />
<data .. />
<data .. />
<graph id="G">
<node id=".."></node>
<node id=".."></node>
<node id=".."></node>
<node id=".."></node>
<edge id=".."></edge>
<edge id=".."></edge>
<edge id=".."></edge>
</graph>
</graphml>
</code></pre>
<p>I am interested in retrieving all the <code>node</code> elements in a python list for further processing. Here's what I have tried so far:</p>
<pre class="lang-py prettyprint-override"><code>from lxml import etree
tree = etree.parse("test1.graphml")
nodes = tree.findall('//graphml/node')
print("done")
</code></pre>
<p>However, this didn't work and I'm not sure why. What am I doing wrong here?</p>
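<p>A likely culprit is the default namespace declared on the <code>graphml</code> root: un-prefixed names in <code>findall()</code> do not match namespaced elements, so a prefix mapping is needed. A minimal sketch on an inline document:</p>

```python
from lxml import etree

xml = b"""<graphml xmlns="http://graphml.graphdrawing.org/xmlns">
  <graph id="G">
    <node id="n0"/>
    <node id="n1"/>
  </graph>
</graphml>"""

root = etree.fromstring(xml)
ns = {"g": "http://graphml.graphdrawing.org/xmlns"}
nodes = root.findall(".//g:graph/g:node", namespaces=ns)
print([n.get("id") for n in nodes])
```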
| <python><xml><graph><lxml><graphml> | 2023-11-01 15:01:23 | 1 | 6,067 | user32882 |
77,403,583 | 4,115,031 | How do I update the path of a venv in PyCharm? | <p>I moved a project from one directory into another, and now my venv won't work. Is there a way to update where PyCharm looks for the venv? I tried the 'Add existing local interpreter' dialog and it didn't work (after clicking "Ok" the interpreter I selected doesn't show up in the list of available interpreters):</p>
<p><a href="https://i.sstatic.net/vmyFA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vmyFA.png" alt="enter image description here" /></a></p>
| <python><pycharm><jetbrains-ide> | 2023-11-01 15:00:31 | 0 | 12,570 | Nathan Wailes |
77,403,321 | 22,466,650 | How to make an alias of a DataFrame attribute during the instantiation? | <p>To simplify my problem, let's consider this simple class.</p>
<pre><code>import pandas as pd

class Report:
    def __init__(self, t=pd.DataFrame({'col1': [1, 2]}), n=None):
        self.name = n
        self.table_as_df = t

        _df = self.table_as_df
        _df = _df.rename(columns={'col1': 'col2'}) \
                 .assign(col3=['A', 'B']) \
                 .transpose()

report = Report().table_as_df
</code></pre>
<p>I made an alias for <code>self.table_as_df</code> to avoid repeating it.</p>
<p>Now the problem is that <code>print(report)</code> gives me only one column :</p>
<pre><code> col1
0 1
1 2
</code></pre>
<p>I feel like we need to use <code>inplace=True</code>, but not all functions have it, and I would like to keep using method chaining.</p>
<p>Here is my expected output :</p>
<pre><code> 0 1
col2 1 2
col3 A B
</code></pre>
<p>Do you guys have a solution for my problem? Also, can you suggest a better design for my class?</p>
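<p>As context, the behaviour reduces to plain name rebinding: <code>_df = _df.rename(...)</code> makes the local name point at a new DataFrame and never touches <code>self.table_as_df</code> (a minimal sketch):</p>

```python
import pandas as pd

df = pd.DataFrame({"col1": [1, 2]})
alias = df                                      # alias and df are the same object
alias = alias.rename(columns={"col1": "col2"})  # rebinds alias to a new object
print(df.columns.tolist())     # ['col1'], the original is unchanged
print(alias.columns.tolist())  # ['col2']
```

<p>Assigning the chained result back to the attribute, e.g. <code>self.table_as_df = _df.rename(...).assign(...).transpose()</code>, keeps the chain while actually updating the instance.</p>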
| <python><pandas> | 2023-11-01 14:21:10 | 1 | 1,085 | VERBOSE |
77,403,139 | 2,232,418 | How to import from one python file to another | <p>I know this question has been asked a million times but for the life of me I just cannot get this.</p>
<p>I have two scripts, <code>setup.py</code> and <code>utility_functions.py</code>, within a folder called <code>utils</code>.</p>
<p>utility_functions.py:</p>
<pre><code>codeql_language_identifiers = {
    'c': 'cpp',
    'c++': 'cpp',
    'c#': 'csharp',
    'go': 'go',
    'java': 'java',
    'kotlin': 'java',
    'javascript': 'javascript',
    'typescript': 'javascript',
    'python': 'python',
    'ruby': 'ruby',
    'swift': 'swift',
}

traced_languages = [
    codeql_language_identifiers['c'],
    codeql_language_identifiers['c++'],
    codeql_language_identifiers['c#'],
    codeql_language_identifiers['go'],
    codeql_language_identifiers['java'],
    codeql_language_identifiers['kotlin'],
    codeql_language_identifiers['swift'],
]

def is_traced_language(lang):
    return lang in traced_languages
</code></pre>
<p>setup.py:</p>
<pre><code>from utility_functions import codeql_language_identifiers, is_traced_language

def run():
    # Here be code
    filter_for_codeql_supported_languages(['test', 'test2'])

def filter_for_codeql_supported_languages(unfiltered_languages_list):
    filtered_languages_list = []
    for lang in unfiltered_languages_list:
        if lang in codeql_language_identifiers.keys():
            filtered_languages_list.append(codeql_language_identifiers[lang])
    return filtered_languages_list

if __name__ == "__main__":
    run()
</code></pre>
<p>I am executing my code on AzureDevOps similar to:</p>
<pre><code>E:\vsts\agent1\_work\_tool\Python\3.10.5\x64\python.exe E:\vsts\agent1\_work\1268\appsec-ado\utils\codeql_setup.py
</code></pre>
<p>On a Linux agent this works fine but on a Windows agent I get:</p>
<pre><code>from utility_functions import codeql_language_identifiers, is_traced_language
ModuleNotFoundError: No module named 'utility_functions'
</code></pre>
<p>Can someone please help? I'm pulling my hair out over this.</p>
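<p>For context, a generic illustration of why a sibling import can fail and a common workaround: making the script's own directory importable before importing the sibling (a sketch with hypothetical module names):</p>

```python
import sys
import tempfile
from pathlib import Path

# Simulate a folder containing a sibling module, then make it importable.
folder = Path(tempfile.mkdtemp())
(folder / "utility_functions_demo.py").write_text("VALUE = 42\n")

sys.path.insert(0, str(folder))  # in a real script: str(Path(__file__).resolve().parent)
import utility_functions_demo

print(utility_functions_demo.VALUE)  # 42
```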
| <python><windows> | 2023-11-01 13:52:29 | 0 | 2,787 | Ben |
77,402,704 | 583,464 | search two lists and fill with None for each pair in correct order | <p>This is a continuation from <a href="https://stackoverflow.com/questions/77402021/map-by-searching-in-two-lists/77402100?noredirect=1#77402100">this</a>.</p>
<p>Right now, the result is:</p>
<pre><code>[{'type': 'POT'}, {'type': 'BOOK'}, {'type': 'GLASS'}]
</code></pre>
<p>I want, if a key is not found, to insert a <code>None</code> in the results.</p>
<p>So, I want:</p>
<pre><code>[{'type': 'POT'}, {'type': 'BOOK'}, None, None, {'type': 'GLASS'}]
</code></pre>
<p>If I try:</p>
<pre><code>stuff_types = {
    "spana": {
        "type": "BOOK",
    },
    "geom": {
        "type": "GLASS",
    },
    "hlian": {
        "type": "POT",
    }
}
a = ['kal', 'khp', 'khp', 'khp', 'geom']
b = ['hlian', 'spana', 'piper', 'meli', 'phin']
the_list = []
for tup in zip(a, b):
    for x in tup:
        if x in stuff_types:
            the_list.append(stuff_types[x])
        else:
            the_list.append(None)

print(the_list)
</code></pre>
<p>just to append <code>None</code>, we receive <code>None</code> for each element of the pairs:</p>
<pre><code>('kal', 'hlian')
('khp', 'spana')
('khp', 'piper')
('khp', 'meli')
('geom', 'phin')
</code></pre>
<p>I want to receive <code>None</code> in the correct position for each pair, not for each element in the pair.</p>
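<p>One way the desired per-pair result could be produced is to check both members of each pair and append a single value, the first match or <code>None</code> (a sketch using the data above):</p>

```python
stuff_types = {
    "spana": {"type": "BOOK"},
    "geom": {"type": "GLASS"},
    "hlian": {"type": "POT"},
}

a = ['kal', 'khp', 'khp', 'khp', 'geom']
b = ['hlian', 'spana', 'piper', 'meli', 'phin']

the_list = []
for first, second in zip(a, b):
    if first in stuff_types:
        the_list.append(stuff_types[first])
    elif second in stuff_types:
        the_list.append(stuff_types[second])
    else:
        the_list.append(None)

print(the_list)
```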
| <python><python-3.x><list><dictionary> | 2023-11-01 12:41:06 | 3 | 5,751 | George |
77,402,633 | 3,207,874 | PDB won't stop inside pytest code, will stop in first function call | <p>If I define a pytest test like so:</p>
<pre><code>from my_app.main import some_function

def test_example():
    breakpoint()
    foo = 'bar'
    some_function()
</code></pre>
<p>Then pdb will stop at the beginning of <code>some_function</code> instead of at <code>foo = 'bar'</code>. How can I get it to stop inside the test code?</p>
<p>Here is how I am running pytest:</p>
<pre><code>pytest /app/tests/test_example.py::test_example --cov=/app/src/
</code></pre>
| <python><pytest><pdb> | 2023-11-01 12:30:42 | 1 | 3,847 | user3207874 |
77,402,608 | 22,674,380 | Error in running the pytorch model: Missing key(s) in state_dict | <p>I am trying to run a script from <a href="https://github.com/cxzhou95/XLSR" rel="nofollow noreferrer">this repo</a> which tests a PyTorch model. I simply run it through <code>python test.py</code> using its defaults values (with the correct path to <code>best.pt</code>). But when running, it gives the following error.</p>
<p>What is the problem? How to fix it?</p>
<p>This is the snippet in <code>test.py</code>:</p>
<pre><code> device = 'cuda:' + str(opt.device) if opt.device != 'cpu' else 'cpu'
model = XLSR(opt.SR_rate)
# load pretrained model
if opt.model.endswith('.pt') and os.path.exists(opt.model):
model.load_state_dict(torch.load(opt.model, map_location=device))
else:
model.load_state_dict(torch.load(os.path.join(opt.save_dir, 'best.pt'), map_location=device))
</code></pre>
<p>And the error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\XLSR\test.py", line 110, in <module>
model.load_state_dict(torch.load(os.path.join(opt.save_dir, 'best.pt'), map_location=device))
File "C:\Users\AppData\Local\anaconda3\Lib\site-packages\torch\nn\modules\module.py", line 2152, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for XLSR:
Missing key(s) in state_dict: "Gblocks.0.conv0.conv2d_block.0.weight", "Gblocks.0.conv0.conv2d_block.0.bias", "Gblocks.0.conv0.conv2d_block.1.weight", "Gblocks.0.conv0.conv2d_block.1.bias", "Gblocks.0.conv0.conv2d_block.2.weight", "Gblocks.0.conv0.conv2d_block.2.bias", "Gblocks.0.conv0.conv2d_block.3.weight", "Gblocks.0.conv0.conv2d_block.3.bias", "Gblocks.1.conv0.conv2d_block.0.weight", "Gblocks.1.conv0.conv2d_block.0.bias", "Gblocks.1.conv0.conv2d_block.1.weight", "Gblocks.1.conv0.conv2d_block.1.bias", "Gblocks.1.conv0.conv2d_block.2.weight", "Gblocks.1.conv0.conv2d_block.2.bias", "Gblocks.1.conv0.conv2d_block.3.weight", "Gblocks.1.conv0.conv2d_block.3.bias", "Gblocks.2.conv0.conv2d_block.0.weight", "Gblocks.2.conv0.conv2d_block.0.bias", "Gblocks.2.conv0.conv2d_block.1.weight", "Gblocks.2.conv0.conv2d_block.1.bias", "Gblocks.2.conv0.conv2d_block.2.weight", "Gblocks.2.conv0.conv2d_block.2.bias", "Gblocks.2.conv0.conv2d_block.3.weight", "Gblocks.2.conv0.conv2d_block.3.bias".
Unexpected key(s) in state_dict: "Gblocks.0.conv0.weight", "Gblocks.0.conv0.bias", "Gblocks.1.conv0.weight", "Gblocks.1.conv0.bias", "Gblocks.2.conv0.weight", "Gblocks.2.conv0.bias".
</code></pre>
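<p>The key lists suggest the checkpoint was saved from a model whose <code>conv0</code> was a plain layer, while the current <code>XLSR</code> definition wraps it in a <code>conv2d_block</code>, i.e. the class changed between training and testing. The mismatch mechanics can be reproduced on toy modules (a sketch; <code>strict=False</code> merely reports instead of raising):</p>

```python
import torch.nn as nn

src = nn.Sequential(nn.Linear(2, 2))                 # saves keys "0.weight", "0.bias"
dst = nn.Sequential(nn.Sequential(nn.Linear(2, 2)))  # expects "0.0.weight", "0.0.bias"

result = dst.load_state_dict(src.state_dict(), strict=False)
print(result.missing_keys)
print(result.unexpected_keys)
```

<p>The usual fix is to load the checkpoint with the exact class definition (repo revision) it was trained with, or to remap the key names before calling <code>load_state_dict</code>.</p>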
| <python><python-3.x><deep-learning><pytorch> | 2023-11-01 12:26:30 | 1 | 5,687 | angel_30 |
77,402,460 | 6,320,774 | download csv file via a website in python | <p>I want to automatically download, via Python, the csv file provided on <a href="https://www.bankofengland.co.uk/boeapps/database/Bank-Rate.asp" rel="nofollow noreferrer">this</a> website.</p>
<p><a href="https://www.bankofengland.co.uk/boeapps/database/Bank-Rate.asp" rel="nofollow noreferrer">https://www.bankofengland.co.uk/boeapps/database/Bank-Rate.asp</a></p>
<p>I am trying the usual methods suggested <a href="https://stackoverflow.com/questions/10556048/how-to-extract-tables-from-websites-in-python">here</a> as well as this script:</p>
<pre><code>base_url = 'https://www.bankofengland.co.uk/boeapps/database/Bank-Rate.asp#'
# 1. Download data
orig_m = (pd.read_csv(f'{base_url}.csv').dropna(how='all'))
</code></pre>
<p>Nothing has really worked so far. Can someone please help?</p>
| <python><csv><download> | 2023-11-01 12:01:57 | 2 | 1,581 | msh855 |
77,402,221 | 11,113,397 | Improving Deployment Time for Django Project on Azure Web App | <p>I have a Django project deployed on an Azure Web App (Linux, P2v2, Python 3.10, Django 4.2), which is functioning as expected. However, the <strong>deployment time is extremely slow</strong>, taking between 15 to 20 minutes to complete even a small change in the code. This is significantly longer than the deployment times I have experienced with other services.</p>
<p>Here is a snippet of the deployment log:</p>
<pre><code>2023-10-31T12:50:59.953Z - Updating branch 'master'.
2023-10-31T12:51:04.983Z - Updating submodules.
2023-10-31T12:51:05.086Z - Preparing deployment for commit id 'XXXXXXXXX'.
2023-10-31T12:51:05.338Z - PreDeployment: context.CleanOutputPath False
2023-10-31T12:51:05.432Z - PreDeployment: context.OutputPath /home/site/wwwroot
2023-10-31T12:51:05.535Z - Repository path is /home/site/repository
2023-10-31T12:51:05.638Z - Running oryx build...
2023-10-31T12:51:05.644Z - Command: oryx build /home/site/repository -o /home/site/wwwroot --platform python --platform-version 3.10 -p virtualenv_name=antenv --log-file /tmp/build-debug.log -i /tmp/8dbda100b9921ec --compress-destination-dir | tee /tmp/oryx-build.log
2023-10-31T12:51:06.599Z - Operation performed by Microsoft Oryx, https://github.com/Microsoft/Oryx
2023-10-31T12:51:06.614Z - You can report issues at https://github.com/Microsoft/Oryx/issues
2023-10-31T12:51:06.643Z - Oryx Version: 0.2.20230508.1, Commit: 7fe2bf39b357dd68572b438a85ca50b5ecfb4592, ReleaseTagName: 20230508.1
2023-10-31T12:51:06.658Z - Build Operation ID: 10aafab84d551a74
2023-10-31T12:51:06.664Z - Repository Commit : XXXXXXXXXXXXXX
2023-10-31T12:51:06.670Z - OS Type : bullseye
2023-10-31T12:51:06.677Z - Image Type : githubactions
2023-10-31T12:51:06.695Z - Detecting platforms...
2023-10-31T12:51:15.575Z - Detected following platforms:
2023-10-31T12:51:15.608Z - nodejs: 16.20.2
2023-10-31T12:51:15.613Z - python: 3.10.8
2023-10-31T12:51:15.618Z - Version '16.20.2' of platform 'nodejs' is not installed. Generating script to install it...
2023-10-31T12:51:15.623Z - Version '3.10.8' of platform 'python' is not installed. Generating script to install it...
2023-10-31T12:51:15.811Z - Using intermediate directory '/tmp/8dbda100b9921ec'.
2023-10-31T12:51:15.837Z - Copying files to the intermediate directory...
2023-10-31T12:51:17.447Z - Done in 2 sec(s).
2023-10-31T12:51:17.465Z - Source directory : /tmp/8dbda100b9921ec
2023-10-31T12:51:17.470Z - Destination directory: /home/site/wwwroot
2023-10-31T12:51:17.487Z - Downloading and extracting 'nodejs' version '16.20.2' to '/tmp/oryx/platforms/nodejs/16.20.2'...
2023-10-31T12:51:17.496Z - Detected image debian flavor: bullseye.
2023-10-31T12:51:18.418Z - Downloaded in 1 sec(s).
2023-10-31T12:51:18.431Z - Verifying checksum...
2023-10-31T12:51:18.465Z - Extracting contents...
2023-10-31T12:51:19.688Z - performing sha512 checksum for: nodejs...
2023-10-31T12:51:19.876Z - Done in 2 sec(s).
2023-10-31T12:51:19.907Z - Downloading and extracting 'python' version '3.10.8' to '/tmp/oryx/platforms/python/3.10.8'...
2023-10-31T12:51:19.913Z - Detected image debian flavor: bullseye.
2023-10-31T12:51:21.326Z - Downloaded in 2 sec(s).
2023-10-31T12:51:21.343Z - Verifying checksum...
2023-10-31T12:51:21.349Z - Extracting contents...
2023-10-31T12:51:23.829Z - performing sha512 checksum for: python...
2023-10-31T12:51:24.147Z - Done in 5 sec(s).
2023-10-31T12:51:24.180Z - image detector file exists, platform is python..
2023-10-31T12:51:24.193Z - OS detector file exists, OS is bullseye..
2023-10-31T12:51:24.298Z - Python Version: /tmp/oryx/platforms/python/3.10.8/bin/python3.10
2023-10-31T12:51:24.305Z - Creating directory for command manifest file if it does not exist
2023-10-31T12:51:24.311Z - Removing existing manifest file
2023-10-31T12:51:24.335Z - Python Virtual Environment: antenv
2023-10-31T12:51:24.341Z - Creating virtual environment...
2023-10-31T12:51:30.102Z - Activating virtual environment...
2023-10-31T12:51:30.114Z - Running pip install...
(like 4 minutes installing python libraries)
2023-10-31T12:55:37.571Z - Not a vso image, so not writing build commands
2023-10-31T12:55:37.576Z - Preparing output...
2023-10-31T12:55:37.586Z - Copying files to destination directory '/tmp/_preCompressedDestinationDir'...
2023-10-31T12:57:07.095Z - Done in 93 sec(s).
2023-10-31T12:57:07.109Z - Compressing content of directory '/tmp/_preCompressedDestinationDir'...
2023-10-31T13:05:28.569Z - Copied the compressed output to '/home/site/wwwroot'
2023-10-31T13:05:28.603Z - Removing existing manifest file
2023-10-31T13:05:28.641Z - Creating a manifest file...
2023-10-31T13:05:28.675Z - Manifest file created.
2023-10-31T13:05:28.684Z - Copying .ostype to manifest output directory.
2023-10-31T13:05:28.695Z - Done in 853 sec(s).
2023-10-31T13:05:29.132Z - Running post deployment command(s)...
2023-10-31T13:05:29.459Z - Generating summary of Oryx build
2023-10-31T13:05:29.588Z - Parsing the build logs
2023-10-31T13:05:29.703Z - Found 0 issue(s)
2023-10-31T13:05:29.884Z - Build Summary :
2023-10-31T13:05:29.973Z - ===============
2023-10-31T13:05:30.067Z - Errors (0)
2023-10-31T13:05:30.163Z - Warnings (0)
2023-10-31T13:05:30.400Z - Triggering recycle (preview mode disabled).
2023-10-31T13:05:30.534Z - Deployment successful. deployer = deploymentPath =
</code></pre>
<p>As you can see, most of the time is spent in the "Compressing content of directory '/tmp/_preCompressedDestinationDir'" step (853 seconds).</p>
<p>How can I expedite the deployment process? Even if it's not a full deployment, I need a more efficient way to make minor changes to the code if needed.</p>
<p><strong>Investigations:</strong></p>
<p>I have been exploring the App Service's build system, Oryx, and attempted various combinations of the <a href="https://github.com/microsoft/Oryx/blob/main/doc/configuration.md" rel="nofollow noreferrer">Oryx configurations</a> to optimize the deployment time, but I haven't seen any significant improvements.</p>
<p>I am curious about the necessity of compressing the virtual environment into 'output.tar.gz' only to have it unpacked into a temporary folder later. Is there a way to bypass this compression step?</p>
<p>Additionally, is there a way to modify the variables in the oryx-manifest.toml file? Adding an application variable and adding a custom oryx-manifest to my code both had no effect.</p>
<p>Any insights or suggestions would be greatly appreciated. Thank you!</p>
| <python><django><azure-web-app-service> | 2023-11-01 11:19:20 | 1 | 353 | btavares |
77,402,021 | 583,464 | map by searching in two lists | <p>I want to do a mapping.</p>
<p>I have two lists from which to extract the info and do the mapping.</p>
<p>If we don't find the available entries from the first list, we check the second list.</p>
<p>The first list is <code>a</code> and the second is <code>b</code>.</p>
<pre><code>stuff_types = {
"spana": {
"type": "BOOK",
},
"geom": {
"type": "GLASS",
}
}
def the_mapping(the_type):
for key in stuff_types.keys():
if key in the_type:
the_type = key
break
return stuff_types[the_type]
a = ['kal', 'khp', 'khp', 'khp', 'geom']
b = ['hlian', 'spana', 'piper', 'meli', 'phin']
the_list = []
for the_type in a:
try:
crop_mapping = the_mapping(the_type)
the_list.append(crop_mapping)
except:
try:
for the_type in b:
crop_mapping = the_mapping(the_type)
the_list.append(crop_mapping)
except KeyError:
print("\nCouldn't find it!")
print(the_list)
</code></pre>
<p>The result now is:</p>
<p><code>[{'type': 'GLASS'}]</code></p>
<p>but I want:</p>
<p><code>[{'type': 'GLASS'}, {'type': 'BOOK'}]</code></p>
<p>since the word <code>spana</code> is in list <code>b</code>.</p>
<p><strong>UPD</strong></p>
<p>If we have:</p>
<pre><code>stuff_types = {
"spana": {
"type": "BOOK",
},
"geom": {
"type": "GLASS",
},
"hlian": {
"type": "POT",
}
}
</code></pre>
<p>and we run:</p>
<pre><code>the_list = [v for k, v in stuff_types.items() if k in set(a + b)]
</code></pre>
<p>it gives:</p>
<pre><code>[{'type': 'BOOK'}, {'type': 'GLASS'}, {'type': 'POT'}]
</code></pre>
<p>Can we also get the right order?</p>
<pre><code>[{'type': 'POT'}, {'type': 'BOOK'}, {'type': 'GLASS'}]
</code></pre>
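For the UPD part, a minimal sketch that keeps encounter order: since the expected output lists the matches from <code>b</code> first, this scans <code>b</code> before <code>a</code> (an assumption inferred from the desired output) and deduplicates with <code>dict.fromkeys</code>, which is insertion-ordered on Python 3.7+.

```python
stuff_types = {
    "spana": {"type": "BOOK"},
    "geom": {"type": "GLASS"},
    "hlian": {"type": "POT"},
}
a = ['kal', 'khp', 'khp', 'khp', 'geom']
b = ['hlian', 'spana', 'piper', 'meli', 'phin']

# dict.fromkeys deduplicates while preserving first-seen order
the_list = [stuff_types[k] for k in dict.fromkeys(b + a) if k in stuff_types]
print(the_list)  # [{'type': 'POT'}, {'type': 'BOOK'}, {'type': 'GLASS'}]
```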
| <python><dictionary><mapping> | 2023-11-01 10:39:52 | 1 | 5,751 | George |
77,402,006 | 15,051,310 | Smooth clamping a tensor | <ol>
<li>Is there a simpler and more performant way to do this?</li>
<li>Can it be done for each channel individually instead of having a global max/min, in a single operation?</li>
</ol>
<p>Essentially "soft clamping" values above/below threshold/-threshold to boundary/-boundary</p>
<pre><code>import torch
input_tensor = torch.tensor([[
[[-19, -7, -5, -4, -3, -1, 0, 1, 2, 3, 4, 7, 8, 10, 12, 13, 14, 15, 19],
[-17, -7, -5, -4, -3, -1, 0, 1, 2, 3, 4, 6, 7, 9, 12, 13, 14, 15, 16],
[-12, -7, -5, -4, -3, -1, 0, 1, 2, 3, 4, 7, 8, 9, 11, 13, 13, 15, 17],
[-11, -7, -5, -4, -3, -1, 0, 1, 2, 3, 4, 7, 8, 9, 12, 13, 14, 15, 19]]
]], dtype=torch.float16)
# Define the threshold and boundary
threshold = 3
boundary = 4
# Apply the smooth clamping operation to each channel individually
soft_clamped = torch.where(
input_tensor > threshold, # Above threshold
((input_tensor - threshold) / (input_tensor.max() - threshold)) * (boundary - threshold) + threshold,
torch.where(
input_tensor < -threshold, # Below -threshold
((input_tensor + threshold) / (input_tensor.min() + threshold)) * (-boundary + threshold) - threshold,
input_tensor
)
)
print(soft_clamped)
</code></pre>
<pre><code>tensor([[[[-4.0000, -3.2500, -3.1250, -3.0625, -3.0000, -1.0000, 0.0000,
1.0000, 2.0000, 3.0000, 3.0625, 3.2500, 3.3125, 3.4375,
3.5625, 3.6250, 3.6875, 3.7500, 4.0000],
[-3.8750, -3.2500, -3.1250, -3.0625, -3.0000, -1.0000, 0.0000,
1.0000, 2.0000, 3.0000, 3.0625, 3.1875, 3.2500, 3.3750,
3.5625, 3.6250, 3.6875, 3.7500, 3.8125],
[-3.5625, -3.2500, -3.1250, -3.0625, -3.0000, -1.0000, 0.0000,
1.0000, 2.0000, 3.0000, 3.0625, 3.2500, 3.3125, 3.3750,
3.5000, 3.6250, 3.6250, 3.7500, 3.8750],
[-3.5000, -3.2500, -3.1250, -3.0625, -3.0000, -1.0000, 0.0000,
1.0000, 2.0000, 3.0000, 3.0625, 3.2500, 3.3125, 3.3750,
3.5625, 3.6250, 3.6875, 3.7500, 4.0000]]]],
dtype=torch.float16)
</code></pre>
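For question 2, per-channel extrema can be computed with <code>amax</code>/<code>amin</code> over the spatial dims with <code>keepdim=True</code>, so they broadcast back against the input and each channel is scaled by its own max/min. A sketch using the same formula as the question, on my own toy tensor (not the original data):

```python
import torch

x = torch.tensor([[[[-19., -3., 0., 3., 19.]],
                   [[-10., -3., 0., 3., 10.]]]])  # shape (1, 2, 1, 5): two channels
threshold, boundary = 3.0, 4.0

# Per-channel extrema over the spatial dims; keepdim=True so they broadcast
ch_max = x.amax(dim=(-2, -1), keepdim=True)
ch_min = x.amin(dim=(-2, -1), keepdim=True)

above = threshold + (x - threshold) / (ch_max - threshold) * (boundary - threshold)
below = -threshold + (x + threshold) / (ch_min + threshold) * (-boundary + threshold)
# Guard the denominators (e.g. clamp ch_max to threshold + eps) if a channel
# might never exceed the threshold
soft = torch.where(x > threshold, above, torch.where(x < -threshold, below, x))
```

With this, each channel's 19 and 10 both land exactly on the boundary 4.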
| <python><pytorch><tensor> | 2023-11-01 10:37:15 | 1 | 2,757 | Timothy Alexis Vass |
77,401,990 | 11,716,727 | How can I solve this error (object of type 'float' has no len()), please? | <p>I am working on the dataset <a href="https://www.kaggle.com/sid321axn/amazon-alexa-reviews" rel="nofollow noreferrer">AMAZON ALEXA REVIEW RATINGS</a></p>
<p>When I loaded my dataset as below:</p>
<pre><code>df_alexa = pd.read_csv('amazon_alexa.tsv', sep='\t')
</code></pre>
<p><a href="https://i.sstatic.net/8ELCs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8ELCs.png" alt="enter image description here" /></a></p>
<p>I wanted to add a new feature, which is called <strong>length</strong>, as shown below</p>
<pre><code>df_alexa['length'] = df_alexa['verified_reviews'].apply(len)
</code></pre>
<p>But, I get the below, error:</p>
<pre><code>TypeError: object of type 'float' has no len()
</code></pre>
<p><strong>Any assistance, please?</strong></p>
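For reference, the error means at least one review is missing (pandas stores missing values as the float <code>NaN</code>, which has no <code>len()</code>). A sketch on a toy frame standing in for the real dataset:

```python
import pandas as pd

# Toy frame: one review is missing (NaN, a float), which is what breaks len()
df = pd.DataFrame({"verified_reviews": ["Love it!", None, "Great speaker"]})

# Option 1: .str.len() returns NaN for missing values instead of raising
lengths = df["verified_reviews"].str.len()

# Option 2: treat missing reviews as empty strings first
df["length"] = df["verified_reviews"].fillna("").str.len()
print(df["length"].tolist())  # [8, 0, 13]
```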
| <python><pandas><jupyter-notebook> | 2023-11-01 10:34:18 | 1 | 709 | SH_IQ |
77,401,947 | 6,836,950 | Fill null values with the closest non-null value from other columns | <p>I have the following polars dataframe</p>
<pre><code>import polars as pl
data = {
"product_id": ["1", "2", "3", "4", "5", "6", "7", "8", "9"],
"col1": ["a", "a", "a", "a", "a", "a", "a", "a", "a",],
"col2": ["b", None, "b", None, "b", None, "b", None, "b"],
"col3": ["c", None, "c", None, None, None, None, None, None],
"col4": [None, None, None, None, None, "d", None, None, "d"]
}
df = pl.DataFrame(data)
</code></pre>
<pre><code>┌────────────┬──────┬──────┬──────┬──────┐
│ product_id ┆ col1 ┆ col2 ┆ col3 ┆ col4 │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ str ┆ str ┆ str ┆ str │
╞════════════╪══════╪══════╪══════╪══════╡
│ 1 ┆ a ┆ b ┆ c ┆ null │
│ 2 ┆ a ┆ null ┆ null ┆ null │
│ 3 ┆ a ┆ b ┆ c ┆ null │
│ 4 ┆ a ┆ null ┆ null ┆ null │
│ 5 ┆ a ┆ b ┆ null ┆ null │
│ 6 ┆ a ┆ null ┆ null ┆ d │
│ 7 ┆ a ┆ b ┆ null ┆ null │
│ 8 ┆ a ┆ null ┆ null ┆ null │
│ 9 ┆ a ┆ b ┆ null ┆ d │
└────────────┴──────┴──────┴──────┴──────┘
</code></pre>
<p>I need to fill null values in column <code>col4</code> with the closest non-null value from the columns to the left (not considering <code>product_id</code> column) and create a new column based on those values.</p>
<p>The desired output should be the following</p>
<pre><code>data = {
"product_id": ["1", "2", "3", "4", "5", "6", "7", "8", "9"],
"col1": ["a", "a", "a", "a", "a", "a", "a", "a", "a",],
"col2": ["b", None, "b", None, "b", None, "b", None, "b"],
"col3": ["c", None, "c", None, None, None, None, None, None],
"col4": [None, None, None, None, None, "d", None, None, "d"],
"desired_column": ["c", "a", "c", "a", "b", "d", "b", "a", "d"]
}
df = pl.DataFrame(data)
</code></pre>
<pre><code>shape: (9, 6)
┌────────────┬──────┬──────┬──────┬──────┬────────────────┐
│ product_id ┆ col1 ┆ col2 ┆ col3 ┆ col4 ┆ desired_column │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ str ┆ str ┆ str ┆ str ┆ str │
╞════════════╪══════╪══════╪══════╪══════╪════════════════╡
│ 1 ┆ a ┆ b ┆ c ┆ null ┆ c │
│ 2 ┆ a ┆ null ┆ null ┆ null ┆ a │
│ 3 ┆ a ┆ b ┆ c ┆ null ┆ c │
│ 4 ┆ a ┆ null ┆ null ┆ null ┆ a │
│ 5 ┆ a ┆ b ┆ null ┆ null ┆ b │
│ 6 ┆ a ┆ null ┆ null ┆ d ┆ d │
│ 7 ┆ a ┆ b ┆ null ┆ null ┆ b │
│ 8 ┆ a ┆ null ┆ null ┆ null ┆ a │
│ 9 ┆ a ┆ b ┆ null ┆ d ┆ d │
└────────────┴──────┴──────┴──────┴──────┴────────────────┘
</code></pre>
<p>I tried different approaches but didn't have any success. Is there a way to achieve this behaviour within polars?</p>
| <python><python-polars> | 2023-11-01 10:27:53 | 1 | 4,179 | Okroshiashvili |
77,401,881 | 7,991,624 | Cannot install pytorch3d using conda or pip on windows 11 | <p>I'm trying to install pytorch3d, but neither conda nor pip can find the pytorch3d package.</p>
<p>I followed the pytorch instructions: <a href="https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md" rel="nofollow noreferrer">https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md</a></p>
<p>The runtime dependencies can be installed by running:</p>
<pre><code>conda create -n pytorch3d python=3.9
conda activate pytorch3d
conda install pytorch=1.13.0 torchvision pytorch-cuda=11.8 -c pytorch -c nvidia
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
</code></pre>
<p>For the CUB build time dependency, which you only need if you have CUDA older than 11.7 (I have CUDA 12), if you are using conda, you can continue with</p>
<pre><code>conda install -c bottler nvidiacub
</code></pre>
<p>Anaconda Cloud</p>
<pre><code>conda install pytorch3d -c pytorch3d
</code></pre>
<p>I get the error:</p>
<pre><code>PackagesNotFoundError: The following packages are not available from current channels:
- pytorch3d
</code></pre>
<p>When I run:</p>
<pre><code>pip install pytorch3d
</code></pre>
<p>I get this error:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement pytorch3d (from versions: none)
ERROR: No matching distribution found for pytorch3d
</code></pre>
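For what it's worth, the <code>pytorch3d</code> Anaconda channel appears to publish prebuilt packages for Linux only, which would explain the <code>PackagesNotFoundError</code> on Windows. The fallback documented in INSTALL.md for such platforms is building from source (this needs a matching MSVC/CUDA toolchain on Windows):

```shell
# Builds pytorch3d from source instead of looking for a prebuilt wheel
pip install "git+https://github.com/facebookresearch/pytorch3d.git"
```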
| <python><pip><pytorch><conda><pytorch3d> | 2023-11-01 10:19:58 | 1 | 437 | sos.cott |
77,401,727 | 1,172,907 | How to add brightness to a colorwheel | <p>Using the code from this excellent <a href="https://stackoverflow.com/questions/77396626/how-to-mark-rgb-colors-on-a-colorwheel-in-python/77396970#77396970">answer</a> from <a href="https://stackoverflow.com/users/19090048/mandy8055">mandy8055</a>, I'm able to mark a color on a colorwheel.<br />
But <strong>the information missing from the marker's position</strong> is the colour's brightness.</p>
<p>For example:<br />
<em>dark red</em> <code>(92, 4, 12)</code>:<br />
<a href="https://i.sstatic.net/Ti48f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ti48f.png" alt="first screenshot" /></a><br />
<em>red</em> <code>(255,0,0)</code><br />
<a href="https://i.sstatic.net/8CL0g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8CL0g.png" alt="second screenshot" /></a></p>
<p>My understanding is that, in addition to the colour's angle, another factor (distance?) must come into the calculation of the marker's position.</p>
<p>How can I add brightness to the wheel so that a colour's lightness or saturation is also taken into account, e.g. (I presume the figure below is based on colorsys):
<a href="https://i.sstatic.net/fnbey.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fnbey.png" alt="enter image description here" /></a></p>
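For reference, HSV already decomposes colours this way: on an HSV wheel, hue is the angle and saturation is the radius, while value (brightness) is not a position on the wheel at all — it is usually shown as a separate slider or by darkening the whole disc. A stdlib sketch of the mapping (the function name is my own):

```python
import colorsys

def wheel_position(r, g, b):
    # Hue -> angle (degrees), saturation -> radius in [0, 1];
    # value (brightness) does not move the marker on an HSV wheel
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return h * 360, s, v

print(wheel_position(255, 0, 0))  # pure red: angle 0, full saturation and value
print(wheel_position(92, 4, 12))  # dark red: nearly the same angle, much lower value
```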
| <python><matplotlib><colors> | 2023-11-01 09:56:44 | 0 | 605 | jjk |
77,401,594 | 583,464 | remove mixed new line characters and digits | <p>Let's say I have this string list:</p>
<pre><code>a = ['6306\nHLIAN\nVARIOUS',
'10215\nSPINA',
'10279\nPIPERI-\nΜYTER',
'38003\nCORN\nSWEET',
'10234ROKA',
'10232\nANTH',
'8682PIPER\nYPAITH',
'8676\nMAROYL',
'10211\nΚAROT\nROOT',
'8685AGG\nYPAU']
</code></pre>
<p>I want to remove the digits and keep the first word of each entry. So, I want this result:</p>
<pre><code>['HLIAN',
'SPINA',
'PIPERI',
'CORN',
'ROKA',
'ANTH',
'PIPER',
'MAROYL',
'ΚAROT',
'AGG']
</code></pre>
<p>I tried something like this:</p>
<pre><code>from string import digits
def clean_list(data):
remove_digits = str.maketrans('', '', digits)
no_digs = [s.translate(remove_digits) for s in data]
results = []
for x in no_digs:
if '\n' in x:
if x.count('\n') == 2:
results.append(x.split('\n')[-2])
elif x.count('\n') == 1:
results.append(x.split('\n')[1])
else:
results.append(x)
return results
</code></pre>
<p>and I am receiving:</p>
<pre><code>['HLIAN',
'SPINA',
'PIPERI-',
'CORN',
'ROKA',
'ANTH',
'YPAITH',
'MAROYL',
'ΚAROT',
'YPAU']
</code></pre>
<p>I can't handle <code>'8682PIPER\nYPAITH'</code> and <code>'8685AGG\nYPAU'</code>, because they contain only one <code>\n</code> and the digits are fused with the first word, so my split picks the wrong piece.</p>
<p>Also, it would be nice if the <code>'PIPERI-'</code> would come without the <code>-</code> symbol (it can be done in a next step though).</p>
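A regex sketch that covers both shapes at once (digits glued to the first word or on their own line) and drops the trailing hyphen as a bonus, shown on a subset of the list:

```python
import re

a = ['6306\nHLIAN\nVARIOUS', '10279\nPIPERI-\nΜYTER', '10234ROKA', '8682PIPER\nYPAITH']

def first_word(s):
    # Drop digits and hyphens, then keep the first non-empty newline-separated piece
    cleaned = re.sub(r'[\d-]', '', s)
    return next(p for p in cleaned.split('\n') if p)

print([first_word(s) for s in a])  # ['HLIAN', 'PIPERI', 'ROKA', 'PIPER']
```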
| <python><string><list> | 2023-11-01 09:35:20 | 2 | 5,751 | George |
77,401,247 | 3,744,043 | How to type-annotate an optionally parametrized decorator which uses a third-party decorator inside | <p>I ran into a problem when using <code>mypy</code> on my project. At first I used the <code>backoff</code> package to add retries to some functions/methods. Then I realised that most of the options were simply repeated, so I created a per-project decorator with all the common values pre-filled for the <code>backoff</code> decorator. But I don't know how to annotate such a "decorator in a decorator". Moreover, this should work across the whole sync/async function/method matrix. Here is code to reproduce my pain:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from collections.abc import Awaitable, Callable
from functools import wraps
from typing import Any, Literal, ParamSpec, TypeVar
T = TypeVar("T")
P = ParamSpec("P")
AnyCallable = Callable[P, T | Awaitable[T]]
Decorator = Callable[[AnyCallable[P, T]], AnyCallable[P, T]]
def third_party_decorator(
a: int | None = None,
b: str | None = None,
c: Literal[None] = None,
d: bool | None = None,
e: str | None = None,
) -> Decorator[P, T]:
def decorator(f: AnyCallable[P, T]) -> AnyCallable[P, T]:
@wraps(f)
def wrapper(*args: P.args, **kwargs: P.kwargs) -> T | Awaitable[T]:
print(f"third_party_decorator {f = } {a = } {b = } {c = } {d = } {e = }")
return f(*args, **kwargs)
return wrapper
return decorator
def parametrized_decorator(f: AnyCallable[P, T] | None = None, **kwargs: Any) -> Decorator[P, T] | AnyCallable[P, T]:
def decorator(f: AnyCallable[P, T]) -> AnyCallable[P, T]:
defaults = {"a": 1, "b": "b", "c": None, "d": True}
defaults.update(kwargs)
print(f"parametrized_decorator {f = } {defaults = }")
decorator: Decorator[P, T] = third_party_decorator(**defaults)
wrapped = decorator(f)
return wrapped
if f is None:
return decorator
else:
return decorator(f)
@parametrized_decorator
def sync_straight_function(x: int = 0) -> None:
print(f"sync_straight_function {x = }")
@parametrized_decorator(b="B", e="e")
def sync_parametrized_function(x: str = "abc", y: bool = False) -> None:
print(f"sync_parametrized_function {x = } {y = }")
@parametrized_decorator
async def async_straight_function(x: int = 0) -> None:
print(f"sync_straight_function {x = }")
@parametrized_decorator(b="B", e="e")
async def async_parametrized_function(x: str = "abc", y: bool = False) -> None:
print(f"sync_parametrized_function {x = } {y = }")
class Foo:
@parametrized_decorator
def sync_straight_method(self, x: int = 0) -> None:
print(f"sync_straight_function {x = }")
@parametrized_decorator(b="B", e="e")
def sync_parametrized_method(self, x: str = "abc", y: bool = False) -> None:
print("sync_parametrized_method", x, y)
@parametrized_decorator
async def async_straight_method(self, x: int = 0) -> None:
print(f"sync_straight_function {x = }")
@parametrized_decorator(b="B", e="e")
async def async_parametrized_method(self, x: str = "abc", y: bool = False) -> None:
print(f"sync_parametrized_function {x = } {y = }")
def main_sync_functions() -> None:
sync_straight_function()
sync_straight_function(1)
sync_parametrized_function()
sync_parametrized_function("xyz", True)
async def main_async_functions() -> None:
await async_straight_function()
await async_straight_function(1)
await async_parametrized_function()
await async_parametrized_function("xyz", True)
def main_sync_methods() -> None:
foo = Foo()
foo.sync_straight_method()
foo.sync_straight_method(1)
foo.sync_parametrized_method()
foo.sync_parametrized_method("xyz", True)
async def main_async_methods() -> None:
foo = Foo()
await foo.async_straight_method()
await foo.async_straight_method(1)
await foo.async_parametrized_method()
await foo.async_parametrized_method("xyz", True)
if __name__ == "__main__":
print("\nSYNC FUNCTIONS:")
main_sync_functions()
print("\nASYNC FUNCTIONS:")
asyncio.run(main_async_functions())
print("\nSYNC METHODS:")
main_sync_methods()
print("\nASYNC METHODS:")
asyncio.run(main_async_methods())
</code></pre>
<p>The output of mypy has 44 errors:</p>
<pre><code>parametrized-decorator-typing.py:33: error: Argument 1 to "third_party_decorator" has incompatible type "**dict[str, object]"; expected "int | None" [arg-type]
decorator: Decorator[P, T] = third_party_decorator(**defaults)
^~~~~~~~
parametrized-decorator-typing.py:33: error: Argument 1 to "third_party_decorator" has incompatible type "**dict[str, object]"; expected "str | None" [arg-type]
decorator: Decorator[P, T] = third_party_decorator(**defaults)
^~~~~~~~
parametrized-decorator-typing.py:33: error: Argument 1 to "third_party_decorator" has incompatible type "**dict[str, object]"; expected "None" [arg-type]
decorator: Decorator[P, T] = third_party_decorator(**defaults)
^~~~~~~~
parametrized-decorator-typing.py:33: error: Argument 1 to "third_party_decorator" has incompatible type "**dict[str, object]"; expected "bool | None" [arg-type]
decorator: Decorator[P, T] = third_party_decorator(**defaults)
^~~~~~~~
parametrized-decorator-typing.py:48: error: Argument 1 has incompatible type "Callable[[str, bool], None]"; expected
"Callable[[VarArg(<nothing>), KwArg(<nothing>)], <nothing> | Awaitable[<nothing>]]" [arg-type]
@parametrized_decorator(b="B", e="e")
^
parametrized-decorator-typing.py:48: error: Argument 1 has incompatible type "Callable[[str, bool], None]"; expected <nothing> [arg-type]
@parametrized_decorator(b="B", e="e")
^
parametrized-decorator-typing.py:53: error: Argument 1 to "parametrized_decorator" has incompatible type "Callable[[int], Coroutine[Any, Any, None]]"; expected
"Callable[[int], <nothing> | Awaitable[<nothing>]] | None" [arg-type]
@parametrized_decorator
^
parametrized-decorator-typing.py:58: error: Argument 1 has incompatible type "Callable[[str, bool], Coroutine[Any, Any, None]]"; expected
"Callable[[VarArg(<nothing>), KwArg(<nothing>)], <nothing> | Awaitable[<nothing>]]" [arg-type]
@parametrized_decorator(b="B", e="e")
^
parametrized-decorator-typing.py:58: error: Argument 1 has incompatible type "Callable[[str, bool], Coroutine[Any, Any, None]]"; expected <nothing> [arg-type]
@parametrized_decorator(b="B", e="e")
^
parametrized-decorator-typing.py:68: error: Argument 1 has incompatible type "Callable[[Foo, str, bool], None]"; expected
"Callable[[VarArg(<nothing>), KwArg(<nothing>)], <nothing> | Awaitable[<nothing>]]" [arg-type]
@parametrized_decorator(b="B", e="e")
^
parametrized-decorator-typing.py:68: error: Argument 1 has incompatible type "Callable[[Foo, str, bool], None]"; expected <nothing> [arg-type]
@parametrized_decorator(b="B", e="e")
^
parametrized-decorator-typing.py:72: error: Argument 1 to "parametrized_decorator" has incompatible type "Callable[[Foo, int], Coroutine[Any, Any, None]]"; expected
"Callable[[Foo, int], <nothing> | Awaitable[<nothing>]] | None" [arg-type]
@parametrized_decorator
^
parametrized-decorator-typing.py:76: error: Argument 1 has incompatible type "Callable[[Foo, str, bool], Coroutine[Any, Any, None]]"; expected
"Callable[[VarArg(<nothing>), KwArg(<nothing>)], <nothing> | Awaitable[<nothing>]]" [arg-type]
@parametrized_decorator(b="B", e="e")
^
parametrized-decorator-typing.py:76: error: Argument 1 has incompatible type "Callable[[Foo, str, bool], Coroutine[Any, Any, None]]"; expected <nothing> [arg-type]
@parametrized_decorator(b="B", e="e")
^
parametrized-decorator-typing.py:82: error: Too few arguments [call-arg]
sync_straight_function()
^~~~~~~~~~~~~~~~~~~~~~~~
parametrized-decorator-typing.py:83: error: Argument 1 has incompatible type "int"; expected "Callable[[int], Awaitable[None] | None]" [arg-type]
sync_straight_function(1)
^
parametrized-decorator-typing.py:85: error: "Awaitable[<nothing>]" not callable [operator]
sync_parametrized_function()
^~~~~~~~~~~~~~~~~~~~~~~~~~~~
parametrized-decorator-typing.py:86: error: "Awaitable[<nothing>]" not callable [operator]
sync_parametrized_function("xyz", True)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
parametrized-decorator-typing.py:86: error: Argument 1 has incompatible type "str"; expected <nothing> [arg-type]
sync_parametrized_function("xyz", True)
^~~~~
parametrized-decorator-typing.py:86: error: Argument 2 has incompatible type "bool"; expected <nothing> [arg-type]
sync_parametrized_function("xyz", True)
^~~~
parametrized-decorator-typing.py:90: error: Incompatible types in "await" (actual type "Callable[[int], <nothing> | Awaitable[<nothing>]] | Awaitable[<nothing>]",
expected type "Awaitable[Any]") [misc]
await async_straight_function()
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
parametrized-decorator-typing.py:90: error: Too few arguments [call-arg]
await async_straight_function()
^~~~~~~~~~~~~~~~~~~~~~~~~
parametrized-decorator-typing.py:91: error: Incompatible types in "await" (actual type "Callable[[int], <nothing> | Awaitable[<nothing>]] | Awaitable[<nothing>]",
expected type "Awaitable[Any]") [misc]
await async_straight_function(1)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
parametrized-decorator-typing.py:91: error: Argument 1 has incompatible type "int"; expected "Callable[[int], <nothing> | Awaitable[<nothing>]]" [arg-type]
await async_straight_function(1)
^
parametrized-decorator-typing.py:93: error: "Awaitable[<nothing>]" not callable [operator]
await async_parametrized_function()
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
parametrized-decorator-typing.py:94: error: "Awaitable[<nothing>]" not callable [operator]
await async_parametrized_function("xyz", True)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
parametrized-decorator-typing.py:94: error: Argument 1 has incompatible type "str"; expected <nothing> [arg-type]
await async_parametrized_function("xyz", True)
^~~~~
parametrized-decorator-typing.py:94: error: Argument 2 has incompatible type "bool"; expected <nothing> [arg-type]
await async_parametrized_function("xyz", True)
^~~~
parametrized-decorator-typing.py:99: error: Too few arguments [call-arg]
foo.sync_straight_method()
^~~~~~~~~~~~~~~~~~~~~~~~~~
parametrized-decorator-typing.py:100: error: Argument 1 has incompatible type "int"; expected "Callable[[Foo, int], Awaitable[None] | None]" [arg-type]
foo.sync_straight_method(1)
^
parametrized-decorator-typing.py:100: error: Argument 1 has incompatible type "int"; expected "Foo" [arg-type]
foo.sync_straight_method(1)
^
parametrized-decorator-typing.py:102: error: "Awaitable[<nothing>]" not callable [operator]
foo.sync_parametrized_method()
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
parametrized-decorator-typing.py:103: error: "Awaitable[<nothing>]" not callable [operator]
foo.sync_parametrized_method("xyz", True)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
parametrized-decorator-typing.py:103: error: Argument 1 has incompatible type "str"; expected <nothing> [arg-type]
foo.sync_parametrized_method("xyz", True)
^~~~~
parametrized-decorator-typing.py:103: error: Argument 2 has incompatible type "bool"; expected <nothing> [arg-type]
foo.sync_parametrized_method("xyz", True)
^~~~
parametrized-decorator-typing.py:108: error: Incompatible types in "await" (actual type "Callable[[Foo, int], <nothing> | Awaitable[<nothing>]] | Awaitable[<nothing>]",
expected type "Awaitable[Any]") [misc]
await foo.async_straight_method()
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
parametrized-decorator-typing.py:108: error: Too few arguments [call-arg]
await foo.async_straight_method()
^~~~~~~~~~~~~~~~~~~~~~~~~~~
parametrized-decorator-typing.py:109: error: Incompatible types in "await" (actual type "Callable[[Foo, int], <nothing> | Awaitable[<nothing>]] | Awaitable[<nothing>]",
expected type "Awaitable[Any]") [misc]
await foo.async_straight_method(1)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
parametrized-decorator-typing.py:109: error: Argument 1 has incompatible type "int"; expected "Callable[[Foo, int], <nothing> | Awaitable[<nothing>]]" [arg-type]
await foo.async_straight_method(1)
^
parametrized-decorator-typing.py:109: error: Argument 1 has incompatible type "int"; expected "Foo" [arg-type]
await foo.async_straight_method(1)
^
parametrized-decorator-typing.py:111: error: "Awaitable[<nothing>]" not callable [operator]
await foo.async_parametrized_method()
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
parametrized-decorator-typing.py:112: error: "Awaitable[<nothing>]" not callable [operator]
await foo.async_parametrized_method("xyz", True)
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
parametrized-decorator-typing.py:112: error: Argument 1 has incompatible type "str"; expected <nothing> [arg-type]
await foo.async_parametrized_method("xyz", True)
^~~~~
parametrized-decorator-typing.py:112: error: Argument 2 has incompatible type "bool"; expected <nothing> [arg-type]
await foo.async_parametrized_method("xyz", True)
^~~~
Found 44 errors in 1 file (checked 1 source file)
</code></pre>
| <python><decorator><mypy><python-typing> | 2023-11-01 08:25:57 | 2 | 656 | broomrider |
77,401,182 | 9,363,181 | how to get the relative path of a jar file from outside in python | <p>I have a project structure like the one below:</p>
<pre><code>└── core
├── lib
│ └── some.jar
└── src
└── main
└── main.py
</code></pre>
<p>Now I am trying to access the <code>some.jar</code> inside the main.py file. It works fine with the absolute path but when I try to do something like below, it fails.</p>
<pre><code>file:///../../lib/some.jar
</code></pre>
<p>I also tried the below code:</p>
<pre><code>os.path.join(os.path.dirname(os.path.realpath(__file__)), "lib")
</code></pre>
<p>but it gives the path as below:</p>
<pre><code>/Users/IdeaProjects/getmesomeproject/core/src/main/lib
</code></pre>
<p>Is there any way to build the relative path of the <code>JAR</code> file?</p>
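A pathlib sketch: resolve <code>main.py</code>'s own location and walk up to the project root, rather than relying on the process's working directory (which is what a bare relative path does). Note that <code>dirname(...)</code> in the question only strips the file name, which is why <code>lib</code> ended up under <code>src/main</code>.

```python
from pathlib import Path

# In main.py you would write:
#   jar_path = Path(__file__).resolve().parents[2] / "lib" / "some.jar"
# parents[0] is .../main, parents[1] is .../src, parents[2] is .../core

# Demonstration with a literal path (the project layout from the question):
main_py = Path("/Users/IdeaProjects/getmesomeproject/core/src/main/main.py")
jar_path = main_py.parents[2] / "lib" / "some.jar"
print(jar_path)  # /Users/IdeaProjects/getmesomeproject/core/lib/some.jar
```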
| <python><python-3.x> | 2023-11-01 08:14:25 | 2 | 645 | RushHour |
77,401,175 | 3,486,675 | How to make Flake8 ignore syntax within strings? | <p>I'm suddenly getting flake8 errors for syntax within strings.</p>
<p>For example, for the following line of code:</p>
<pre><code> tags.append(f'error_type:{error.get("name")}')
</code></pre>
<p>I'm getting this error: <code>E231 missing whitespace after ':'</code>.</p>
<p>I don't want to ignore all <code>E231</code> errors, because I care about them when they don't refer to text within strings.</p>
<p>I also don't want to have to go and add <code># noqa</code> comments to each of my strings.</p>
<p>I've tried to pin my flake8 version to <code>6.0.0</code> (which is the version that previously didn't raise these errors before).</p>
<p>I'm running flake8 with pre-commit (if that is relevant).</p>
<p>Why am I suddenly getting these errors for strings and how can I turn them off?</p>
<p>I should also mention that this is happening in Github Actions, specifically.</p>
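One plausible culprit (hedged — I can't verify your CI image): Python 3.12's new f-string tokenization (PEP 701) exposes the tokens inside f-strings to pycodestyle, so E231 starts firing there even with flake8 pinned to 6.0.0. If the GitHub Actions runner recently moved to Python 3.12, pinning the interpreter the pre-commit hook runs under may restore the old behaviour (pre-commit builds its own isolated environment, so pins in your project requirements don't apply to it):

```yaml
repos:
  - repo: https://github.com/pycqa/flake8
    rev: 6.0.0
    hooks:
      - id: flake8
        # run the hook under a pre-3.12 interpreter available in CI
        language_version: python3.11
```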
| <python><pre-commit><flake8> | 2023-11-01 08:13:00 | 1 | 11,605 | D Malan |
77,401,164 | 17,672,187 | OpenCV: Detection of clearly delineated object boundary failing | <p>I have the following image:</p>
<p><a href="https://i.sstatic.net/YEoCk.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YEoCk.jpg" alt="enter image description here" /></a></p>
<p>I want to detect the edge of the document placed on the table in the image. I tried the following code (the earlier approaches are commented out).</p>
<pre><code>def detectDocEdge(imagePath):
image = loadImage(imagePath)
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Approach 1
# equalized_image = cv2.equalizeHist(gray_image)
# blurred = cv2.GaussianBlur(equalized_image, (15, 15), 0) # TODO: May not be required
# edges = cv2.Canny(blurred, 5, 150, apertureSize=3)
# kernel = np.ones((15, 15), np.uint8)
# dilated_edges = cv2.dilate(edges, kernel, iterations=1)
# Approach 2
# blurred = cv2.GaussianBlur(gray_image, (5, 5), 0)
# thresh = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 11, 2)
# kernel = np.ones((5, 5), np.uint8)
# dilated_edges = cv2.dilate(thresh, kernel, iterations=1)
# Approach 3
# Apply morphological operations
# kernel = np.ones((3, 3), np.uint8)
# closed_image = cv2.morphologyEx(thresholded_image, cv2.MORPH_CLOSE, kernel)
# Approach 4
blurred = cv2.GaussianBlur(gray_image, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)
dilated = cv2.dilate(edges, None, iterations=5)
eroded = cv2.erode(dilated, None, iterations=5)
contours, hierarchy = cv2.findContours(eroded, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contour_img = image.copy()
cv2.drawContours(contour_img, contours, -1, (0, 255, 0), 2)
showImage(contour_img)
max_contour = None
max_measure = 0 # This could be area or combination of area and perimeter
max_area = 0
for contour in contours:
area = cv2.contourArea(contour)
perimeter = cv2.arcLength(contour, True)
# Here we use a combination of area and perimeter to select the contour
measure = area + perimeter # You can also try different combinations
# if measure > max_measure:
# max_measure = measure
# max_contour = contour
if area > max_area:
max_area = area
max_contour = contour
contour_img = image.copy()
cv2.drawContours(contour_img, [max_contour], -1, (0, 255, 0), 2)
showImage(contour_img)
</code></pre>
<p>None of the approaches seem to work and I always get the following edge:
<a href="https://i.sstatic.net/MIoL2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MIoL2.png" alt="enter image description here" /></a></p>
<p>Could you suggest a fix? I need the code to be as generic as possible so that it can detect the document edge under as many conditions as possible.</p>
| <python><opencv><computer-vision><edge-detection><canny-operator> | 2023-11-01 08:10:00 | 1 | 691 | Loma Harshana |
77,401,162 | 11,082,866 | Add text on image in python | <p>I am creating barcodes in python using the <code>barcode</code> library, like the following:</p>
<pre><code> barcodes = CompanyBarcodes.objects.filter(asn_id__asn_number=asn_id,
company=company,
product__asset_category__in=cat_ids
).prefetch_related('company', 'product', 'grn_id', 'asn_id', 'itemGroup')
items = []
for i in barcodes:
items.append({'bar': i.barcode, 'name': str(i.product.name)
+ " - " + str(i.product.sku)})
alist = []
for item in items:
print("item", item)
EAN = barcode.get_barcode_class('code128')
d = {'module_width': 0.15,
'module_height': 2.0,
'quiet_zone': 0.5,
'text_distance': 1.5,
'font_size': 3,
'text_line_distance': 3,
'dpi': 900}
w = ImageWriter(d)
ean = EAN(item['bar'], writer=ImageWriter())
a = ean.save(item['bar'], options=d)
</code></pre>
<p>Now the items list holds dictionaries consisting of two parts, <code>bar</code> and <code>name</code>. How can I add the name to the image created by ImageWriter?</p>
<p>I tried:</p>
<pre><code> with open(a, 'rb') as image:
img = Image.open(image)
# Create a drawing context on the image
draw = ImageDraw.Draw(img)
# Use the same font as the original barcode image
font = ImageFont.load_default()
# Set the font size
font_size = 10 # You can adjust the font size as needed
# Calculate the position to center the text
text = item['name']
text_width, text_height = draw.textsize(text, font=font)
image_width, image_height = img.size
text_x = (image_width - text_width) / 2
text_y = image_height - text_height - 5 # Adjust the vertical position as needed
# Draw the text on the image
draw.text((text_x, text_y), text, fill=(0, 0, 0), font=font) # You can adjust the fill color
# Save the modified image
img.save(a) # Overwrite the original image with the changes
encoded_image = base64.b64encode(open(a, 'rb').read()).decode('utf-8')
alist.append({'image': encoded_image, **item})
</code></pre>
<p>But the added text is way too small and has a totally different font from the one in the barcode.</p>
<p>Something like this<a href="https://i.sstatic.net/4ORyP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4ORyP.png" alt="enter image description here" /></a></p>
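For reference, <code>ImageFont.load_default()</code> is a tiny bitmap font that ignores any size, which is why the caption comes out small. A hedged sketch using a TrueType font scaled to the image width — <code>DejaVuSans.ttf</code> is an assumption; substitute the <code>.ttf</code> your barcode writer uses (the writer itself may also expose a font path/size option you can reuse so both texts match):

```python
from PIL import Image, ImageDraw, ImageFont

def add_caption(img, text, font_path="DejaVuSans.ttf"):
    # Scale the font to roughly 1/20 of the image width
    size = max(10, img.width // 20)
    try:
        font = ImageFont.truetype(font_path, size)  # assumed font file
    except OSError:
        font = ImageFont.load_default()  # fallback if the .ttf isn't found
    draw = ImageDraw.Draw(img)
    # textbbox replaces the deprecated draw.textsize
    left, top, right, bottom = draw.textbbox((0, 0), text, font=font)
    x = (img.width - (right - left)) // 2
    y = img.height - (bottom - top) - 5
    draw.text((x, y), text, fill=(0, 0, 0), font=font)
    return img
```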
| <python><python-imaging-library> | 2023-11-01 08:09:44 | 0 | 2,506 | Rahul Sharma |
77,401,153 | 15,233,792 | How to read a jpeg from disk and instantiate it as 'werkzeug.datastructures.FileStorage'? | <pre><code>python 3.10.11
werkzeug 2.3.7
</code></pre>
<p>I have a jpeg in <code>/mnt/dragon.jpeg</code></p>
<p>I want to write a unit test for a method that takes a <code>werkzeug.datastructures.FileStorage</code> as input; its behavior is to save the file to a path.</p>
<p>I try to instantiate it with the code below:</p>
<pre class="lang-py prettyprint-override"><code>import os
from werkzeug.datastructures import FileStorage
PATH = "/mnt/dragon.jpeg"
def get_file(filepath: str) -> FileStorage:
filename = os.path.basename(filepath)
with open(filepath, 'rb') as file:
return FileStorage(file, name=filename)
file = get_file(PATH)
</code></pre>
<p>The ipython session output is:</p>
<pre class="lang-py prettyprint-override"><code>In [8]: file
Out[8]: <FileStorage: '/mnt/dragon.jpeg' (None)>
In [9]: file.save("/mnt/saved.jpeg")
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[9], line 1
----> 1 file.save("/mnt/saved.jpeg")
File ~/.local/lib/python3.10/site-packages/werkzeug/datastructures/file_storage.py:129, in FileStorage.save(self, dst, buffer_size)
126 close_dst = True
128 try:
--> 129 copyfileobj(self.stream, dst, buffer_size)
130 finally:
131 if close_dst:
File ~/miniconda3/lib/python3.10/shutil.py:195, in copyfileobj(fsrc, fdst, length)
193 fdst_write = fdst.write
194 while True:
--> 195 buf = fsrc_read(length)
196 if not buf:
197 break
ValueError: read of closed file
</code></pre>
<p>How can I make the <code>werkzeug.datastructures.FileStorage</code> object saveable outside the scope of <code>open</code>?</p>
| <python><file><werkzeug> | 2023-11-01 08:08:18 | 2 | 2,713 | stevezkw |
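For the `FileStorage` question above: the stream handed to `FileStorage` is closed the moment the `with` block exits, so any later `save()` reads a closed file. One hedged sketch of a fix is to copy the bytes into an in-memory `BytesIO`, which stays open for the object's lifetime (note also that the keyword for the file name is `filename=`; `name=` is the form-field name):

```python
import io
import os

from werkzeug.datastructures import FileStorage

def get_file(filepath: str) -> FileStorage:
    # Read the bytes eagerly so the stream outlives the open() context.
    with open(filepath, "rb") as f:
        data = f.read()
    return FileStorage(
        stream=io.BytesIO(data),
        filename=os.path.basename(filepath),  # filename=, not name=
        content_type="image/jpeg",
    )
```

For small test fixtures like a single JPEG, reading the whole file into memory is harmless, and the returned object can be passed around and saved at any point.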
77,401,110 | 7,334,203 | Remove duplicate variable declarations in XSLT using Python | <p>Suppose I have this XSLT 2.0 code, which was generated using Altova MapForce:</p>
<pre><code><xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template name="SDEV:SDEV_NCTS_down_fillUCRC547"/>
<xsl:for-each select="($var292_CD___C)]">
<AL586B xmlns="AL586B" >
<xsl:variable name="var288_cur" as="node()" select="."/>
<xsl:variable name="var287___as_double" as="xs:double" select="xs:double('6')"/>
</AL586B>
</xsl:for-each>
<xsl:for-each select="(./ns0:AL586B)">
<AL506C xmlns="AL506C">
<xsl:attribute name="xsi:schemaLocation" namespace="http://www.w3.org/2001/XMLSchema-instance"/>
<xsl:variable name="var287___as_double" as="xs:double" select="xs:double('6')"/>
<xsl:variable name="var1_preparationDateAndTi_as_string" as="xs:string" select="xs:string(xs:dateTime(fn:string($var287___as_double)))"/>
</AL506C>
</xsl:for-each>
</xsl:stylesheet>
</code></pre>
<p>As you can see, the variable <code>var287___as_double</code> has been declared twice, and this is causing issues. How can I use Python to read this XSLT and rename each duplicate declaration it finds to something like <code>var287___as_double2</code>?</p>
<p>And of course, wherever the duplicate variable was referenced, the reference should use the newly declared name.</p>
<p>For example the new xslt will be something like:</p>
<pre><code><xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template name="SDEV:SDEV_NCTS_down_fillUCRC547"/>
<xsl:for-each select="($var292_CD___C)]">
<AL586B xmlns="AL586B" >
<xsl:variable name="var288_cur" as="node()" select="."/>
<xsl:variable name="var287___as_double" as="xs:double" select="xs:double('6')"/>
</AL586B>
</xsl:for-each>
<xsl:for-each select="(./ns0:AL586B)">
<AL506C xmlns="AL506C">
<xsl:attribute name="xsi:schemaLocation" namespace="http://www.w3.org/2001/XMLSchema-instance"/>
<xsl:variable name="var287___as_double2" as="xs:double" select="xs:double('6')"/>
<xsl:variable name="var1_preparationDateAndTi_as_string" as="xs:string" select="xs:string(xs:dateTime(fn:string($var287___as_double2)))"/>
</AL506C>
</xsl:for-each>
</xsl:stylesheet>
</code></pre>
<p>Another solution would be to look inside the <code>AL506C</code> tag and rename every variable under it.</p>
| <python><xml><xslt-2.0> | 2023-11-01 07:59:26 | 1 | 7,486 | RamAlx |
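For the XSLT question above: since MapForce output like this is often not well-formed standalone XML (undeclared prefixes such as <code>SDEV:</code> and <code>fn:</code>, stray brackets), a text-level pass can be more practical than an XML parser. A rough sketch of that idea follows — it assumes one declaration per line and approximates scoping by document order, so treat it as a starting point rather than a robust XSLT renamer:

```python
import re

DECL = re.compile(r'(<xsl:variable\s+name=")([^"]+)(")')

def dedupe_xslt_variables(xslt_text: str) -> str:
    """Rename repeated <xsl:variable name="..."> declarations (var -> var2,
    var -> var3, ...) and rewrite later $var references to the most
    recently declared name."""
    seen = {}      # original name -> number of declarations so far
    current = {}   # original name -> name that later $refs should use
    out = []
    for line in xslt_text.splitlines(keepends=True):
        m = DECL.search(line)
        if m:
            name = m.group(2)
            seen[name] = seen.get(name, 0) + 1
            if seen[name] > 1:
                new = f"{name}{seen[name]}"
                line = DECL.sub(lambda mm: mm.group(1) + new + mm.group(3),
                                line, count=1)
                current[name] = new
            else:
                current[name] = name
        # Rewrite $name references to whichever declaration is in effect;
        # \b keeps shorter names from matching inside longer ones.
        for orig, new in current.items():
            if new != orig:
                line = re.sub(r"\$" + re.escape(orig) + r"\b",
                              "$" + new, line)
        out.append(line)
    return "".join(out)
```

Run over the sample in the question, this leaves the first <code>var287___as_double</code> alone, renames the second declaration to <code>var287___as_double2</code>, and updates the <code>fn:string($var287___as_double)</code> reference that follows it.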
77,400,918 | 9,516,820 | How to check if multiple variables are of a certain type | <p>I have a program that creates four CSV files with pandas. The DataFrame creation is wrapped in a try/except block so that if any one of them fails, none of the CSV files should be created. To achieve this I declare the variables before the try/except block as follows:</p>
<pre><code>classes, names, extracts, cats = None, None, None, None
try:
# create the data frames below
...
...
except Exception as e:
# some error occurred
print(e)
if not all(isinstance(i, pd.DataFrame) for i in [classes, names, extracts, cats]):
# classes.to_csv(f'{output_path}classes.csv')
# names.to_csv(f'{output_path}names.csv')
# extracts.to_csv(f'{output_path}extracts.csv')
# cats.to_csv(f'{output_path}cats.csv')
print("yes")
else:
print("no")
</code></pre>
<p>In the above code, suppose the generation of the <code>extracts</code> DataFrame fails; then none of the CSVs should be saved. But the code still runs the <code>if</code> branch and not the <code>else</code> part.</p>
<p>Please help</p>
| <python> | 2023-11-01 07:13:37 | 1 | 972 | Sashaank |
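For the type-check question above: the condition is inverted. <code>if not all(...)</code> is true exactly when at least one variable is still <code>None</code>, so the save branch runs on failure and the <code>else</code> on success. A minimal sketch of the intended check, simply dropping the <code>not</code> (the DataFrame contents here are placeholders):

```python
import pandas as pd

def all_dataframes(*objs) -> bool:
    """True only when every object is an actual DataFrame (None fails)."""
    return all(isinstance(o, pd.DataFrame) for o in objs)

classes = names = extracts = cats = None
try:
    # Placeholder frames standing in for the real construction logic.
    classes = pd.DataFrame({"a": [1]})
    names = pd.DataFrame({"b": [2]})
    extracts = pd.DataFrame({"c": [3]})
    cats = pd.DataFrame({"d": [4]})
except Exception as e:
    print(e)

if all_dataframes(classes, names, extracts, cats):
    # Safe to write all four CSVs here.
    print("yes")
else:
    print("no")
```

An alternative with the same effect is to put the <code>to_csv</code> calls in the <code>try</code> block's <code>else:</code> clause, which runs only when no exception occurred, making the <code>None</code> sentinels unnecessary.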