QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,128,332 | 4,734,217 | How to remove dictionary keys before a specific one (in insertion order)? | <p>There is a <strong>dictionary</strong></p>
<pre><code>dic = {
'aa': 10,
'bb': 11,
'cc': 12,
'dd': 13,
'ee': 14,
'ff': 15,
}
</code></pre>
<p>How can I remove a specified key, let's say <strong>'cc': 12</strong>, and all the keys before it (in insertion order)?</p>
<p>I need to get the following in my dictionary:</p>
<pre><code>'dd': 13
'ee': 14
'ff': 15
</code></pre>
<p><strong>PS.</strong> <em>I didn’t think that such a simple task required any additional clarification. But now (based on the responses) I see that there are some nuances.</em></p>
<p>Since answers have already been received, I cannot edit the original question without breaking its context. However, the user <a href="https://stackoverflow.com/questions/77128332/how-to-remove-dictionary-keys-before-a-specific-one-in-insertion-order/77128381?noredirect=1#comment135989444_77128381">@mozway</a> suggested considering this question with the following example dictionary:</p>
<pre><code>dic = {'zz': 10, 'yy': 11, 'cc': 12, 'dd': 13, 'aa': 14, 'ff': 15}
</code></pre>
<p>with the expected result:</p>
<pre><code>'dd': 13, 'aa': 14, 'ff': 15
</code></pre>
<p>See <strong>answers</strong> and <strong>comments</strong> for more details</p>
| <python><dictionary> | 2023-09-18 15:06:51 | 3 | 460 | Michael |
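One possible approach for the question above, as a sketch: it relies on the insertion-ordered dicts of Python 3.7+, and the helper name `drop_until_after` is made up for illustration.

```python
def drop_until_after(d, key):
    """Return a new dict with `key` and everything before it (insertion order) removed."""
    items = iter(d.items())
    for k, _ in items:          # advance past everything up to and including `key`
        if k == key:
            break
    return dict(items)          # the iterator now holds only the remaining pairs

dic = {'zz': 10, 'yy': 11, 'cc': 12, 'dd': 13, 'aa': 14, 'ff': 15}
print(drop_until_after(dic, 'cc'))  # {'dd': 13, 'aa': 14, 'ff': 15}
```

Because the iterator is consumed up to the matching key, duplicate values elsewhere in the dict (as in @mozway's example) do not matter; only key identity does.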
77,128,325 | 9,518,890 | apsw.SQLError: SQLError: no such vfs: zipvfs when trying to read encrypted SQLite files on Windows using python & apsw | <p>I have code that works under Linux and I am trying to port it to Windows. It uses <code>apsw</code> to work with encrypted SQLite files.</p>
<pre><code>flags = apsw.SQLITE_OPEN_READWRITE | apsw.SQLITE_OPEN_CREATE | apsw.SQLITE_OPEN_URI
connection_string = (
    "file:{filepath}?zv=zlib&level=9&vfs=zipvfs&password256={password}".format(
        filepath=filepath, password=password
    )
)
connection = apsw.Connection(connection_string, flags=flags)
</code></pre>
<p>When I try to run it under Windows, it throws this error:</p>
<pre><code>apsw.SQLError: SQLError: no such vfs: zipvfs
</code></pre>
<p>I have downloaded <code>zlibwapi.dll</code> and put it under System32, and I have also tried building <code>apsw</code> from source:</p>
<pre><code>python setup.py fetch --all build --enable-all-extensions install
</code></pre>
<p>but I am still getting the error (tried with Python 3.10 and 3.11).</p>
| <python><sqlite><zlib><apsw> | 2023-09-18 15:05:48 | 1 | 14,592 | Matus Dubrava |
77,128,200 | 512,480 | Windows pip3 and python3 disagree | <p>I've been having a problem where <code>python3</code> from the command line fails to find and import packages installed by <code>pip3</code> from the command line. I finally got a clear view of the situation:</p>
<pre><code>C:\Users\Ken>where python3
C:\msys64\mingw64\bin\python3.exe
C:\Users\Ken\AppData\Local\Microsoft\WindowsApps\python3.exe
C:\Users\Ken>python3 --version
Python 3.10.10
C:\Users\Ken>pip3 --version
pip 23.2.1 from C:\Users\Ken\AppData\Roaming\Python\Python39\site-packages\pip (python 3.9)
</code></pre>
<p>How would <code>python3</code> and <code>pip3</code> get out of agreement about which Python version they are talking to? What's the most graceful way to get them back in order? I'd rather not break my MinGW installation if I don't have to.</p>
| <python><pip> | 2023-09-18 14:48:52 | 1 | 1,624 | Joymaker |
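One general way to keep the two in agreement (a sketch, not a diagnosis of this particular setup): invoke pip through the exact interpreter you intend to run, so packages necessarily land in that interpreter's site-packages.

```shell
# Run pip as a module of the interpreter you intend to use, bypassing the pip3 shim.
python3 -m pip --version
# Then install with the same interpreter, e.g.:
# python3 -m pip install <package>
```

The `pip 23.2.1 ... (python 3.9)` output above shows the shim belongs to a different interpreter than `python3 --version` reports; `python3 -m pip` sidesteps that mismatch entirely.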
77,128,122 | 75,103 | How do I create collapsible sections in the gitlab output from inside python script? | <p>According to the docs it should be simple:</p>
<pre><code>@contextmanager
def gitlab_section(name, header):
    print(f'\033[0Ksection_start:{int(time.time())}:{name}[collapsed=true]\r\033[0K{header}')
    yield
    print(f'\033[0Ksection_end:{int(time.time())}:{name}:\r\033[0K')
</code></pre>
<p>however, using it like this:</p>
<pre><code>with gitlab_section('first', 'First'):
    print('first section')
print("After first section")
with gitlab_section('second', 'Second'):
    print('second section')
</code></pre>
<p>makes the "After.." line fold into the first section.</p>
<p>I've made it work with a horrible hack that starts a dummy section after each <code>section_end</code>:</p>
<pre><code>def unique():
    return int(random.random() * 1000000000)

@contextmanager
def gitlab_section(name, header):
    sys.stdout.flush()
    sectionid = unique()  # in case someone uses the same name twice...
    print(f'\033[0Ksection_start:{int(time.time())}:{name}_{sectionid}[collapsed=true]\r\033[0K{header}')
    yield
    sys.stdout.flush()
    print(f'\nsection_end:{int(time.time())}:{name}_{sectionid}:\r\033[0K')
    # start a dummy section without a header
    print(f'\033[0Ksection_start:{int(time.time())}:dummy_{unique()}\r\033[0K')
    sys.stdout.flush()
</code></pre>
<p>The <code>\e[0K</code> prefix doesn't seem to be needed on <code>section_end</code> (and the GitLab-generated output does not use it).</p>
<p>While this works, it seems horribly hacky. I've read the docs thoroughly, but perhaps I missed something in the first example?</p>
<p>What about nested collapsible sections, is that possible?</p>
| <python><gitlab><gitlab-ci> | 2023-09-18 14:34:58 | 1 | 27,572 | thebjorn |
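One detail worth checking: the question's `section_end` marker carries a trailing colon after `{name}`, which GitLab's documented format does not have, and that alone can confuse the log parser. Below is a self-contained variant without that colon that also flushes stdout before emitting `section_end` (a guess at a buffering cause, not a confirmed fix):

```python
import sys
import time
from contextlib import contextmanager

@contextmanager
def gitlab_section(name, header, collapsed=True):
    opts = '[collapsed=true]' if collapsed else ''
    # \033[0K clears to end of line, as in GitLab's documented markers.
    print(f'\033[0Ksection_start:{int(time.time())}:{name}{opts}\r\033[0K{header}', flush=True)
    try:
        yield
    finally:
        sys.stdout.flush()  # push any buffered section output out before the end marker
        print(f'\033[0Ksection_end:{int(time.time())}:{name}\r\033[0K', flush=True)

with gitlab_section('first', 'First'):
    print('first section')
print('After first section')
```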
77,128,073 | 7,383,799 | Python Numpy 'Invert' Boolean Mask Operation | <p>When I have an array <code>a</code> and a boolean mask <code>b</code>, I can find the 'masked' vector <code>c</code>.</p>
<pre><code>a = np.array([1, 2, 4, 7, 9])
b = np.array([True, False, True, True, False])
c = a[b]
</code></pre>
<p>Now suppose it's the other way around: I have <code>c</code> and <code>b</code> and would like to arrive at <code>d</code> (below). What is the easiest way to do this?</p>
<pre><code>c = np.array([1, 4, 7])
b = np.array([True, False, True, True, False])
d = np.array([1, 0, 4, 7, 0])
</code></pre>
| <python><numpy><boolean> | 2023-09-18 14:27:40 | 1 | 375 | eigenvector |
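A sketch of the usual scatter idiom for this: allocate a full-length output, then assign `c` back into the `True` positions (the fill value 0 matches the expected output above).

```python
import numpy as np

c = np.array([1, 4, 7])
b = np.array([True, False, True, True, False])

d = np.zeros(len(b), dtype=c.dtype)  # fill value 0, per the expected output
d[b] = c                             # scatter c into the True slots
print(d)  # [1 0 4 7 0]
```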
77,127,986 | 5,128,087 | nektos/act - local github runner, how to reference local Platform (-P) image without needing docker.io | <p>How can I run a local or non-Docker image using nektos/act?</p>
<p>By default, nektos/act attempts to log in to docker.io to pull a remote image; is it possible to use a locally cached image?</p>
| <python><docker><nektos-act> | 2023-09-18 14:14:31 | 1 | 725 | Brian Horakh |
77,127,719 | 1,420,429 | filtering pandas dataframe rows by applying a function | <p>I have a pandas dataframe that has about 5 columns. One column's values are invalid strings.</p>
<blockquote>
<p>e.g. abspodfahoidf</p>
</blockquote>
<p>I have written a function to validate if this is a valid string (some characters are not allowed in the string).</p>
<p>Is there any way that I can apply the function I created to all of the column's values and drop all rows that are invalid? The dtype for this column is shown as <code>object</code> when I do <code>pd.info()</code> on the dataframe.</p>
| <python><pandas> | 2023-09-18 13:37:25 | 3 | 2,503 | harish |
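A minimal sketch of the boolean-mask pattern that does this in one pass; the column name `col` and the character rule inside `is_valid` are placeholders, not taken from the question:

```python
import pandas as pd

ALLOWED = set('abcdefghijklmnopqrstuvwxyz')  # hypothetical set of allowed characters

def is_valid(value):
    """Placeholder validator: a string is valid if all characters are allowed."""
    return isinstance(value, str) and set(value) <= ALLOWED

df = pd.DataFrame({'col': ['abc', 'ab#d', 'xyz'], 'n': [1, 2, 3]})
clean = df[df['col'].apply(is_valid)]   # keep only rows whose 'col' passes
print(clean)
```

`Series.apply` returns a boolean Series here, so indexing with it drops the invalid rows without an explicit loop.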
77,127,666 | 4,542,117 | Python Iteratively Fill Missing Values with Non-missing Values | <p>Let's say that I have 10 arrays with shape MxN. I want to loop through each array and fill in the cells of an initialized array that are still missing.</p>
<p>For an example, let's take a 3x3 grid:</p>
<pre><code>initial_array = np.full((3,3), -999)
array([[-999, -999, -999],
       [-999, -999, -999],
       [-999, -999, -999]])

final = np.full((3,3), -999)
</code></pre>
<p>I then have a few arrays:</p>
<pre><code>b = array([[-999, -999,    1],
           [   1, -999, -999],
           [-999, -999, -999]])

c = array([[-999, -999,    2],
           [   2, -999, -999],
           [   2, -999, -999]])

d = array([[   3, -999, -999],
           [   3, -999, -999],
           [   3, -999,    3]])
</code></pre>
<p>I would want the final array to be:</p>
<pre><code>final = array([[   3, -999,    1],
               [   1, -999, -999],
               [   2, -999,    3]])
</code></pre>
<p>The logic is as follows:</p>
<p>For <code>final[0,0]</code>, there is no non-missing value (i.e. not -999) until the third array, <code>d</code>, so the final value is 3.</p>
<p>For <code>final[1,0]</code>, there is a 1 in the first array, <code>b</code>, so it does not matter what the values at [1,0] are in the rest of the arrays.</p>
<p>Since there were no values in <code>final[:,1]</code>, these are all still missing.</p>
<p>Currently, the best approach I know of is to loop through each cell with nested 'if' checks:</p>
<pre><code>for i in range(0, initial_array.shape[0]):
    for j in range(0, initial_array.shape[1]):
        if initial_array[i,j] == -999:
            if next_array[i,j] > -999:
                final_array[i,j] = next_array[i,j]
</code></pre>
<p>However, as we get more / larger arrays to check, this becomes very slow. Is there a better approach to this?</p>
| <python><arrays> | 2023-09-18 13:30:34 | 2 | 374 | Miss_Orchid |
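One vectorized sketch of the logic described above: for each candidate array, build a boolean mask of cells that are still missing in `final` but present in the candidate, and fill only those, so earlier arrays win.

```python
import numpy as np

MISSING = -999
final = np.full((3, 3), MISSING)

b = np.array([[MISSING, MISSING, 1], [1, MISSING, MISSING], [MISSING, MISSING, MISSING]])
c = np.array([[MISSING, MISSING, 2], [2, MISSING, MISSING], [2, MISSING, MISSING]])
d = np.array([[3, MISSING, MISSING], [3, MISSING, MISSING], [3, MISSING, 3]])

for arr in (b, c, d):
    # Cells still missing in `final` that this array can fill.
    mask = (final == MISSING) & (arr != MISSING)
    final[mask] = arr[mask]

print(final)
```

Each array is visited once with whole-array operations, so the cost scales with the number of arrays rather than with per-cell Python loops.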
77,127,629 | 20,088,885 | Why can't I see my custom app I created in Odoo? | <p>So I've been trying to learn Odoo 16 development, and I'm following the documentation, but the problem is that after I enable the <code>access rights</code> I can't see my custom app in the general settings.</p>
<p>This is my directory.</p>
<p><a href="https://i.sstatic.net/FtsQe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FtsQe.png" alt="enter image description here" /></a></p>
<p>models.py</p>
<pre><code>from odoo import fields, models


class RealEstate(models.Model):
    _name = 'estate_property'
    _description = 'Real Estate Property'

    name = fields.Char(string="Name", required=True)
    description = fields.Text(string="Description")
    postcode = fields.Char(string="Postcode")
    date_availability = fields.Date(string="Date")
    expected_price = fields.Float(string="Expected Price", required=True)
    selling_price = fields.Float(string="Selling Price", required=True)
    bedrooms = fields.Integer(string="Bedrooms")
    living_area = fields.Integer(string="Living Area")
    facades = fields.Integer(string="Facades")
    garage = fields.Boolean(string="Garage")
    garden = fields.Boolean(string="Garden")
    garden_area = fields.Integer(string="Garden Area")
    garden_orientation = fields.Selection(
        string="Garden Orientation",
        selection=[('north', 'North'), ('east', 'East'), ('south', 'South'), ('west', 'West')]
    )
</code></pre>
<p><strong>__manifest__.py</strong></p>
<pre><code>{
    "name": "Estate",  # The name that will appear in the App list
    "version": "16.0",  # Version
    "application": True,  # This line says the module is an App, and not a module
    "depends": ["base"],  # dependencies
    "data": [
        'security/ir.model.access.csv',
        'views/estate_property_views.xml',
        'views/estate_menus.xml'
    ],
    "installable": True,
    'license': 'LGPL-3',
}
</code></pre>
<p><strong>ir.model.access.csv</strong></p>
<pre><code>id,name,model_id:id,group_id:id,perm_read,perm_write,perm_create,perm_unlink
estate.access_estate_property,access_estate_property,estate.model_estate_property,base.group_user,1,1,1,1
</code></pre>
<p>Upon further searching, I read that I should comment/uncomment XML files, but it still doesn't work.</p>
<p>estate_menus.xml</p>
<pre><code><?xml version="1.0" encoding="utf-8"?>
<odoo>
    <data>
        <menuitem id="estate_menu_root" name="Estate Property">
            <menuitem id="estate_first_level_menu" name="First Level">
                <menuitem id="estate_property_menu_action" action="estate_property_action"/>
            </menuitem>
        </menuitem>
    </data>
</odoo>
</code></pre>
<p>estate_property_views.xml</p>
<pre><code><?xml version="1.0" encoding="utf-8"?>
<odoo>
    <data>
        <record id="estate_property_action" model="ir.actions.act_window">
            <field name="name">Properties</field>
            <field name="res_model">estate_property</field>
            <field name="view_mode">tree,form</field>
        </record>
    </data>
</odoo>
</code></pre>
<p>I'm stuck on this problem right now. I tried switching to <code>superuser</code>, and it works once, but when I refresh my browser the custom app disappears.</p>
<p><strong>Addition</strong></p>
<p>This is my odoo.conf:</p>
<pre><code>[options]
admin_passwd = $pbkdf2-sha512$25000$7N07x/hf670XwvjfG2OMsQ$4bzF6iN4y0uxX3o915LeEAznINV1e8bZMf1c6rkyOX4Q5UfZE5uMYsvQwiY89IIZ2b61izNr3uVqEnbV3b6kxQ
db_host = localhost
db_port = 5432
db_user = admin
db_password = admin
addons_path = addons , customaddons
http_port = 8015
</code></pre>
| <python><xml><odoo> | 2023-09-18 13:25:50 | 1 | 785 | Stykgwar |
77,127,588 | 10,952,047 | error loading package "scvelo" on Rstudio using python environment | <p>I'm using a Python environment in <code>RStudio</code> to load some packages, but when I try to load "scvelo" I get this message. I don't know where it comes from:</p>
<pre><code>>>> import scvelo as scv
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 119, in _find_and_load_hook
return _run_hook(name, _hook)
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 93, in _run_hook
module = hook()
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 117, in _hook
return _find_and_load(name, import_)
File "/home/olga/.local/share/r-miniconda/envs/r-reticulate/lib/python3.9/site-packages/scvelo/__init__.py", line 3, in <module>
from scanpy import read, read_loom
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 119, in _find_and_load_hook
return _run_hook(name, _hook)
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 93, in _run_hook
module = hook()
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 117, in _hook
return _find_and_load(name, import_)
File "/home/olga/.local/share/r-miniconda/envs/r-reticulate/lib/python3.9/site-packages/scanpy/__init__.py", line 6, in <module>
from ._utils import check_versions
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 119, in _find_and_load_hook
return _run_hook(name, _hook)
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 93, in _run_hook
module = hook()
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 117, in _hook
return _find_and_load(name, import_)
File "/home/olga/.local/share/r-miniconda/envs/r-reticulate/lib/python3.9/site-packages/scanpy/_utils/__init__.py", line 28, in <module>
from .compute.is_constant import is_constant
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 119, in _find_and_load_hook
return _run_hook(name, _hook)
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 93, in _run_hook
module = hook()
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 117, in _hook
return _find_and_load(name, import_)
File "/home/olga/.local/share/r-miniconda/envs/r-reticulate/lib/python3.9/site-packages/scanpy/_utils/compute/is_constant.py", line 5, in <module>
from numba import njit
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 119, in _find_and_load_hook
return _run_hook(name, _hook)
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 93, in _run_hook
module = hook()
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 117, in _hook
return _find_and_load(name, import_)
File "/home/olga/.local/share/r-miniconda/envs/r-reticulate/lib/python3.9/site-packages/numba/__init__.py", line 43, in <module>
from numba.np.ufunc import (vectorize, guvectorize, threading_layer,
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 119, in _find_and_load_hook
return _run_hook(name, _hook)
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 93, in _run_hook
module = hook()
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 117, in _hook
return _find_and_load(name, import_)
File "/home/olga/.local/share/r-miniconda/envs/r-reticulate/lib/python3.9/site-packages/numba/np/ufunc/__init__.py", line 3, in <module>
from numba.np.ufunc.decorators import Vectorize, GUVectorize, vectorize, guvectorize
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 119, in _find_and_load_hook
return _run_hook(name, _hook)
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 93, in _run_hook
module = hook()
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 117, in _hook
return _find_and_load(name, import_)
File "/home/olga/.local/share/r-miniconda/envs/r-reticulate/lib/python3.9/site-packages/numba/np/ufunc/decorators.py", line 3, in <module>
from numba.np.ufunc import _internal
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 119, in _find_and_load_hook
return _run_hook(name, _hook)
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 93, in _run_hook
module = hook()
File "/home/olga/R/x86_64-pc-linux-gnu-library/4.3/reticulate/python/rpytools/loader.py", line 117, in _hook
return _find_and_load(name, import_)
SystemError: initialization of _internal failed without raising an exception
</code></pre>
<p>With other packages, like pandas, anndata, and matplotlib, it works fine. Where is the error? I've tried removing and reinstalling numba and numpy, but that did not fix it.</p>
<p>Info:</p>
<pre><code>reticulate::repl_python()
Python 3.9.18 (/home/olga/.local/share/r-miniconda/envs/r-reticulate/bin/python)
Reticulate 1.32.0 REPL -- A Python interpreter in R.
Enter 'exit' or 'quit' to exit the REPL and return to R.
</code></pre>
| <python><r><import> | 2023-09-18 13:20:41 | 0 | 417 | jonny jeep |
77,127,551 | 6,557,593 | How to display Start-Stop bands across horizontal time axis | <p>We want to display productive time on the phone, by employee, in a simple 1-dimensional time-axis chart.</p>
<p>Green in the chart represents time on the phone (between 'start' and 'stop'):</p>
<p><a href="https://i.sstatic.net/XD7TD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XD7TD.png" alt="enter image description here" /></a></p>
<p>Ideally we would like to do this natively in PowerBI, but if there's a simpler way in Python, we can embed it in our BI solution.</p>
<p>The data is simple:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Agent</th>
<th>Start</th>
<th>Stop</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2023-09-16 11:45:48.900</td>
<td>2023-09-16 11:51:03.900</td>
</tr>
<tr>
<td>1</td>
<td>2023-09-16 11:56:06.720</td>
<td>2023-09-16 12:02:56.720</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
</div>
<p>I'm sure this is possible, we just haven't been able to get it done.</p>
<p>Thanks</p>
| <python><powerbi> | 2023-09-18 13:16:30 | 0 | 640 | ColinMac |
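If the Python route is acceptable, matplotlib's `broken_barh` draws exactly this kind of one-row band chart, one green band per Start/Stop interval. A sketch with illustrative sample data in the question's Agent/Start/Stop shape (the resulting figure could then be embedded via Power BI's Python visual):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import pandas as pd

# Sample data in the question's shape; values are illustrative only.
df = pd.DataFrame({
    'Agent': [1, 1],
    'Start': pd.to_datetime(['2023-09-16 11:45:48.900', '2023-09-16 11:56:06.720']),
    'Stop':  pd.to_datetime(['2023-09-16 11:51:03.900', '2023-09-16 12:02:56.720']),
})

fig, ax = plt.subplots(figsize=(8, 2))
for agent, grp in df.groupby('Agent'):
    start = mdates.date2num(grp['Start'].to_numpy())
    stop = mdates.date2num(grp['Stop'].to_numpy())
    # broken_barh takes (left, width) pairs: one green band per call interval.
    ax.broken_barh(list(zip(start, stop - start)), (agent - 0.4, 0.8), facecolors='green')
ax.xaxis_date()                       # format the x axis as times
ax.set_yticks(df['Agent'].unique())   # one row per agent
fig.savefig('phone_time.png')
```

Adding more agents just adds more rows, since each agent's bands are drawn at its own y position.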
77,127,470 | 4,004,541 | Identify different cameras using OpenCV VideoCapture in Python | <p>I connected two USB cameras to my system (one RGB and another NIR).</p>
<p>One camera works in RGB and the other in greyscale, so they have different channels.</p>
<p>I see that some people have suggested searching for the camera IDs in the folder /dev/v4l/by-id/* and then opening the cameras as:</p>
<pre><code>cap = cv.VideoCapture('/dev/v4l/by-id/usb-e-con_systems_See3CAM_CU81_3233D508-video-index1')
</code></pre>
<p>I have seen this example suggested for C++ code, but it's not working in Python.</p>
<p>Any suggestions on how I can identify the cameras and open them using VideoCapture in Python?</p>
<p>My camera list is:</p>
<pre><code>ls /dev/v4l/by-id
usb-e-con_systems_See3CAM_CU27_192C8902-video-index0
usb-e-con_systems_See3CAM_CU27_192C8902-video-index1
usb-e-con_systems_See3CAM_CU81_3233D508-video-index0
usb-e-con_systems_See3CAM_CU81_3233D508-video-index1
</code></pre>
| <python><opencv><camera> | 2023-09-18 13:05:02 | 0 | 360 | Patrick Vibild |
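One workaround sketch on Linux: resolve the stable `by-id` symlink to its `/dev/videoN` target and hand the numeric index to `VideoCapture`. The helper name below is made up for illustration; passing the device path directly with an explicit backend, `cv2.VideoCapture('/dev/video2', cv2.CAP_V4L2)`, may also be worth trying.

```python
import os
import re

def camera_index(by_id_path):
    """Resolve a /dev/v4l/by-id symlink to the numeric index in /dev/videoN."""
    real = os.path.realpath(by_id_path)      # e.g. '/dev/video2'
    m = re.search(r'video(\d+)$', real)
    return int(m.group(1)) if m else None

# Usage (requires `import cv2` and the camera attached):
# idx = camera_index('/dev/v4l/by-id/usb-e-con_systems_See3CAM_CU81_3233D508-video-index0')
# cap = cv2.VideoCapture(idx)
```

Because the `by-id` name encodes the model and serial number, this keeps each physical camera mapped to the right `VideoCapture` even when the kernel reorders `/dev/video*` across reboots.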
77,127,220 | 2,791,346 | UPS tracking API activity status options | <h3>UPS Tracking API</h3>
<p>I am trying to use a UPS tracking API in Python.</p>
<pre><code>import requests

inquiry_number = "1ZXXXXXXXXXXXXXXXX"
url = "https://wwwcie.ups.com/api/track/v1/details/" + inquiry_number

query = {
    "locale": "en_US",
    "returnSignature": "true"
}
headers = {
    "Content-Type": "application/json",
    "transId": "1234",
    "transactionSrc": "testing",
    "Authorization": "Bearer ..."
}

response = requests.get(url, headers=headers, params=query)
data = response.json()
print(data)
</code></pre>
<p>This outputs the following statuses:</p>
<pre><code>{
    'trackResponse': {
        'shipment': [
            {
                'inquiryNumber': '1Z1442YY7229014688',
                'package': [
                    {
                        'trackingNumber': '1Z1442YY7229014688',
                        'deliveryDate': [
                            {
                                'type': 'DEL',
                                'date': '20220126'
                            }
                        ],
                        'deliveryTime': {
                            'type': 'DEL',
                            'endTime': '163000'
                        },
                        'activity': [
                            {
                                'location': {
                                    'address': {
                                        'city': 'PARAMUS',
                                        'stateProvince': 'NJ',
                                        'countryCode': 'US',
                                        'country': 'US'
                                    },
                                    'slic': '0761'
                                },
                                'status': {
                                    'type': 'D',
                                    'description': 'DELIVERED ',
                                    'code': 'F4',
                                    'statusCode': '011'
                                },
                                'date': '20220126',
                                'time': '163000'
                            },
                            {
                                'location': {
                                    'address': {
                                        'countryCode': 'US',
                                        'country': 'US'
                                    }
                                },
                                'status': {
                                    'type': 'M',
                                    'description': 'Shipper created a label, UPS has not received the package yet. ',
                                    'code': 'MP',
                                    'statusCode': '003'
                                },
                                'date': '20220126',
                                'time': '151641'
                            }
                        ],
                        'packageCount': 1
                    }
                ]
            }
        ]
    }
}
</code></pre>
<p>Now I can't find a table of the possible status types anywhere... I can see that 'D' and 'M' exist, but what else?</p>
<h3>Also:</h3>
<p>In the documentation <a href="https://developer.ups.com/api/reference?loc=en_US#operation/getSingleTrackResponseUsingGET" rel="nofollow noreferrer">here</a>
I can see that the response should contain a <code>currentStatus</code>, but I don't see it.</p>
<h3>P.S.</h3>
<p>How do you test different scenarios with this API? It is obvious to me that this test API is not connected to a real database (which is expected). How can I test different tracking possibilities: delivered, not delivered, ...?</p>
| <python><ups> | 2023-09-18 12:28:44 | 1 | 8,760 | Marko Zadravec |
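For what it's worth, the status fields can at least be pulled out of the response shape shown above; the snippet below assumes the first `activity` entry is the most recent event, which matches the timestamps in this sample but is not guaranteed by the question.

```python
def latest_status(data):
    """Return (type, description) of the newest activity in a track response."""
    package = data['trackResponse']['shipment'][0]['package'][0]
    status = package['activity'][0]['status']
    return status['type'], status['description'].strip()

# Trimmed-down version of the response shown in the question.
sample = {'trackResponse': {'shipment': [{'package': [{'activity': [
    {'status': {'type': 'D', 'description': 'DELIVERED ', 'code': 'F4', 'statusCode': '011'}},
    {'status': {'type': 'M', 'description': 'Shipper created a label...', 'code': 'MP', 'statusCode': '003'}},
]}]}]}}
print(latest_status(sample))  # ('D', 'DELIVERED')
```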
77,127,192 | 1,417,075 | Cannot Decode RFID Tag | <p>Thanks for taking a look.</p>
<p>I'll include a full description here as I'm not sure what might be relevant to solving my question.</p>
<p><strong>Background Information:</strong></p>
<p>My current project involves reading <a href="https://en.wikipedia.org/wiki/Radio-frequency_identification" rel="nofollow noreferrer">RFID (Radio Frequency ID) tags</a>. To achieve this I'm using a Raspberry Pi with the <a href="https://es.aliexpress.com/item/1005003838583428.html?spm=a2g0o.detail.0.0.288a6bbR6bbR4e&gps-id=pcDetailTopMoreOtherSeller&scm=1007.40000.327270.0&scm_id=1007.40000.327270.0&scm-url=1007.40000.327270.0&pvid=f6ac03ca-91d9-44db-a09b-47a6408182cc&_t=gps-id:pcDetailTopMoreOtherSeller,scm-url:1007.40000.327270.0,pvid:f6ac03ca-91d9-44db-a09b-47a6408182cc,tpp_buckets:668%232846%238114%231999&pdp_npi=4%40dis%21EUR%2190.90%2165.45%21%21%2194.67%21%21%40211b442116950359978126278e11b1%2112000027313711271%21rec%21ES%21851497238%21S" rel="nofollow noreferrer">Fonkan FM 503 RFID Reader</a>. So far I've got it to work using the code below and can correctly read and decode the test RFID tag that came with it. See the manufacturer's instructions and my code below:</p>
<p><a href="https://i.sstatic.net/cFdnk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cFdnk.png" alt="rfid reader overview" /></a></p>
<p><a href="https://i.sstatic.net/snOBj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/snOBj.png" alt="rfid reader commands" /></a></p>
<pre><code>import serial
import time
from subprocess import Popen

ser = serial.Serial(port='/dev/ttyUSB0', baudrate=38400, parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE, bytesize=serial.EIGHTBITS, timeout=1)

while 1:
    ser.write(b"\nU\r")
    epcs = ser.read(ser.inWaiting())
    epcs = epcs.decode("ascii").replace("\r", "").replace("\n", "")
    time.sleep(1)
    if not epcs == "Q" and not epcs == "U" and not epcs == "":
        epcs = epcs.split("U")
        for epc in epcs:
            print(epc)
</code></pre>
<p>This renders the following tag value in hexadecimal:</p>
<pre><code>3000E280699500005011153664F378DF
</code></pre>
<p>With this EPC I was able to go to the <a href="https://www.gs1.org/services/epc-encoderdecoder" rel="nofollow noreferrer">GS1 decoder page</a> and decode the tag to get the information contained within.</p>
<p><a href="https://i.sstatic.net/30koO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/30koO.png" alt="gs1 decoder working" /></a></p>
<p><strong>The Question:</strong></p>
<p>Encouraged by this, I decided to try to find some more tags to read, so I went to <a href="https://maps.app.goo.gl/ivEvfYoqp4U1aFMH8" rel="nofollow noreferrer">Zara Home</a>, a home furnishing store in Spain where RFID is used to track inventory, and bought some bowls like the one below:</p>
<p><a href="https://i.sstatic.net/LTcwJ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LTcwJ.jpg" alt="zara home bowl" /></a></p>
<p>These came with the following RFID tags stuck to them:</p>
<p><a href="https://i.sstatic.net/ZIczF.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZIczF.jpg" alt="rfids_top" /></a></p>
<p><a href="https://i.sstatic.net/V34Uk.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V34Uk.jpg" alt="rfid bottom" /></a></p>
<p>When reading these with my code above, I get the following output EPC values:</p>
<pre><code>400009CA3D5A706573802B1D46418C4004235E73 (Returns Total value is out of range)
440009CA2662748C77002CFDDABC286004239E2D (Returns Header 68 is not a known EPC header)
400009CA3D5A706573802B1D467D8C400423D9B6 (Returns Total value is out of range)
400009CA3D5A6D58338026D0DEF60F400423DC64 (Can be decoded)
</code></pre>
<p>The problem is that I can't decode these using the <a href="https://www.gs1.org/services/epc-encoderdecoder" rel="nofollow noreferrer">GS1 decoder page</a> or <a href="https://pypi.org/project/pyepc/" rel="nofollow noreferrer">pyepc</a>, a decoding library. I also tried other online tools, but being very new to this field I couldn't really understand the documentation about the different types of tags and the data within them.</p>
<p><a href="https://i.sstatic.net/IQkdp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IQkdp.png" alt="enter image description here" /></a></p>
<p>I'd like to ask:</p>
<ul>
<li>Are these EPC codes and am I reading the tags correctly?</li>
<li>How can I decode these EPC codes? (I'd like to get the company Id)</li>
</ul>
| <python><rfid><epc> | 2023-09-18 12:24:19 | 1 | 1,082 | James Scott |
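A small check that is easy to run on the question's data: the first 8 bits of an EPC are the header that selects the encoding scheme in the GS1 Tag Data Standard (0x30 is SGTIN-96, which is why the test tag decodes). The Zara Home tags start with 0x40/0x44, headers the GS1 decoder reports as unknown (68 = 0x44), which suggests a retailer-specific rather than GS1 encoding; that is an inference from the error messages above, not a confirmed fact.

```python
def epc_header(epc_hex):
    """Return the 8-bit EPC header byte that identifies the encoding scheme."""
    return int(epc_hex[:2], 16)

print(hex(epc_header('3000E280699500005011153664F378DF')))          # 0x30 (SGTIN-96)
print(hex(epc_header('440009CA2662748C77002CFDDABC286004239E2D')))  # 0x44 (68, unknown to GS1)
```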
77,126,852 | 3,121,975 | Generic XML message parsing | <p>I have a number of XML message types with an elaborate header and sequence structure, but separate business message types. I'm trying to convert these to objects in Python, and so far my bottom-level structure looks like this:</p>
<pre><code>T = TypeVar("T", bound=XmlRecord)


def _extract_fixed_gen_data(rec_type: Type[T], xml: dict) -> Dict[TimeCode, List[T]]:
    results = {}
    for data in xml["JPMR00010"]:
        key = TimeCode(int(data["JP06219"]))
        records = [
            rec_type(record)  # pylint: disable=too-many-function-args
            for record in data["JPM00011"]["JPMR00011"]
        ]
        results[key] = records
    return results


class BusinessMessage(Generic[T], XmlRecord):
    """Contains the business data associated with the XML payload."""

    # Some other fields
    data = XmlProperty[list]("JPM00010", list, converter=lambda raw: _extract_fixed_gen_data(T, raw))
</code></pre>
<p>And <code>XmlProperty</code> and <code>XmlRecord</code> are defined like this:</p>
<pre><code>X = TypeVar("X")


class XmlProperty(Generic[X]):
    def __init__(  # pylint: disable=too-many-arguments
        self,
        xml_key: str,
        dtype,
        allow_empty: bool = False,
        alternates: Optional[List[str]] = None,
        converter: Optional[Callable[[Any], Any]] = None,
    ):
        # Set the base fields on the property
        self._xml_key = xml_key
        self._alternate_keys = alternates if alternates else []
        self._dtype = dtype
        self._allow_empty = allow_empty
        self._value = None
        self._converter = None
        self._expects_dict = False
        if converter is not None:
            self._converter = converter

    def parse(self, obj: object, data: dict):
        raw_value = None
        if self._xml_key in data:
            raw_value = data[self._xml_key]
        else:
            alt_value = next(
                filter(lambda alt: alt in data, self._alternate_keys), None
            )
            if alt_value is not None:
                raw_value = data[alt_value]
            elif not self._allow_empty:
                raise KeyError(f"XML data is missing property {self._xml_key}")
        if self._converter is not None:
            raw_value = (
                self._converter(raw_value, data)
                if self._expects_dict
                else self._converter(raw_value)
            )
        value = None
        if raw_value is not None:
            value = (
                self._dtype(raw_value) if type(raw_value) != self._dtype else raw_value
            )
        self.__set__(obj, value)

    def __set_name__(self, owner: object, name: str):
        self.public_name = name
        self.private_name = "_" + name

    def __get__(self, obj: object, objtype: Optional[type] = None):
        if obj is None:
            return self
        return getattr(obj, self.private_name)

    def __set__(self, obj: object, value: Optional[X]):
        setattr(obj, self.private_name, value)


class XmlRecord:
    def __init__(self, data: dict):
        self._xml_properties = {
            name: prop
            for name, prop in self.__class__.__dict__.items()
            if isinstance(prop, XmlProperty)
        }
        for name, prop in self._xml_properties.items():
            prop.parse(self, data)
</code></pre>
<p>The issue comes in trying to inject a generic argument into <code>_extract_fixed_gen_data</code>. Obviously, I can't inject <code>T</code> directly into the call because it's a type variable and not a type. I could add a generic context to <code>XmlProperty</code>, which would allow me to get around the issue, but that could get messy very quickly. Does anyone have any recommendations on how to proceed here?</p>
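<p>For what it's worth, one pattern that can recover the concrete type argument (a sketch independent of the <code>XmlProperty</code> machinery above, assuming instances are created through a subscripted alias like <code>BusinessMessage[Foo]</code>): CPython records that alias on the instance as <code>__orig_class__</code>, from which <code>typing.get_args</code> can extract the type. One caveat is that <code>__orig_class__</code> is only set after <code>__init__</code> returns, so it cannot be read inside the constructor itself.</p>

```python
from typing import Generic, TypeVar, get_args

T = TypeVar("T")


class Box(Generic[T]):
    def type_argument(self) -> type:
        # __orig_class__ is the subscripted alias the instance was created
        # through, e.g. Box[int]; get_args pulls out its type arguments.
        return get_args(self.__orig_class__)[0]


box = Box[int]()
print(box.type_argument())  # <class 'int'>
```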
| <python><generics><python-descriptors> | 2023-09-18 11:36:12 | 0 | 8,192 | Woody1193 |
77,126,835 | 22,595 | How to enable diagnostics for python-kafka consumer? | <p>I have test cases that have to read from Kafka topics. On some machines that works, but on the machine dedicated to tests it simply ends with neither a result nor an error.</p>
<p>My code to read all messages from a Kafka topic:</p>
<pre><code>def show_messageges_from_topic(kafka_server, topic, date_filter, msg_id_list):
    i = 0
    consumer = kafka.KafkaConsumer(bootstrap_servers=kafka_server, group_id=None, auto_offset_reset='earliest')
    try:
        consumer.subscribe([topic])
        while True:
            records = consumer.poll(50)  # timeout in millis
            if not records:
                break
            for _, consumer_records in records.items():
                for consumer_record in consumer_records:
                    i += 1
                    msg_process(topic, i, consumer_record, date_filter, msg_id_list)
    finally:
        consumer.close()
</code></pre>
<p>Using Kafka GUI I see that there are messages on topic I read.</p>
<p>On the machine where it does not read messages, I can see that there is a connection to Kafka:</p>
<pre><code>[root@kafka-client ~]# netstat -anp | grep "\.77"
tcp 0 0 169.0.1.223:46282 169.0.1.77:9092 ESTABLISHED 680492/python3
tcp 0 0 169.0.1.223:39088 169.0.1.77:9092 TIME_WAIT -
</code></pre>
<p>Is there something wrong with my code? How can I enable consumer diagnostics?</p>
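<p>On the diagnostics part of the question: kafka-python logs through the standard <code>logging</code> module under the <code>"kafka"</code> logger hierarchy, so (as a minimal sketch) raising that logger to DEBUG should surface connection, metadata and fetch activity on the test machine:</p>

```python
import logging

# Route all log records to stderr with timestamps, then turn the
# kafka-python logger hierarchy up to DEBUG for detailed diagnostics.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)
logging.getLogger("kafka").setLevel(logging.DEBUG)
```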
<p>PS I tried to use <code>group_id</code> but then my program does not read any messages even on machines where reading works.</p>
<p>PS2: I had a similar question:
<a href="https://stackoverflow.com/questions/77058106/why-python-kafkaconsumer-do-not-read-all-messages-from-topic">Why Python KafkaConsumer do not read all messages from topic?</a>
where I tried to read from all partitions.</p>
| <python><apache-kafka><kafka-consumer-api> | 2023-09-18 11:33:31 | 0 | 54,502 | Michał Niklas |
77,126,759 | 7,327,257 | LightGBM Classifier faster on CPU than on GPU | <p>I've followed the instructions in the <a href="https://lightgbm.readthedocs.io/en/latest/GPU-Tutorial.html" rel="nofollow noreferrer">official documentation</a> of LightGBM. I followed each step, and when I ran the speed test on both GPU and CPU I noticed that <strong>the CPU computation is faster than the GPU</strong>. However, the computer recognizes the GPU and complains about nothing. <strong>Any clue why this is happening?</strong> According to the tutorial, the GPU should be at least three times faster.</p>
<p>I tried with my own dataset and the sklearn API (<code>LGBMClassifier</code>) and the same thing happens.
I'm working on Debian 11 with an Nvidia Tesla T4 GPU. I installed driver 470.182 and <code>nvidia-smi</code> works properly. It's a Google Cloud virtual machine.</p>
<p>I already checked similar questions (like this <a href="https://stackoverflow.com/questions/66742614/lightgbm-benchmark-shows-no-speedup-on-rtx-2080-gpu-over-cpu">one</a>), but found no answers.</p>
<p>Thanks in advance.</p>
| <python><gpu><nvidia><lightgbm> | 2023-09-18 11:21:42 | 0 | 357 | M. Merida-Floriano |
77,126,706 | 5,722,359 | How to get the previous 10 and next 10 distinct group_ids from a SQLITE3 table after some data have been removed | <p>From a sqlite3 table column, I would like to extract the previous 10 distinct values at a given offset after some data have been removed.</p>
<p>Below is a sample script.</p>
<p><strong>Sample Script:</strong></p>
<pre><code>import sqlite3
import random
from itertools import count
from typing import Literal


class Data:
    def __init__(self):
        self.con = sqlite3.connect("data.db",
                                   detect_types=sqlite3.PARSE_DECLTYPES,
                                   )
        self.cur = self.con.cursor()
        self.create_table()

    def create_table(self):
        table = """CREATE TABLE IF NOT EXISTS
                   datatable(
                       sn INTEGER,
                       item_id TEXT PRIMARY KEY,
                       group_id TEXT)
                """
        self.cur.execute("""DROP TABLE IF EXISTS datatable""")
        self.cur.execute(table)
        self.con.commit()

    def insert_data_row(self, data):
        sql = """INSERT OR IGNORE INTO datatable VALUES (?,?,?)"""
        self.cur.execute(sql, data)
        self.con.commit()

    def close(self):
        # 5. Close cursor & connection
        self.cur.close()
        self.con.close()

    def delete_group_id(self, group_id: str):
        sql = """DELETE FROM datatable WHERE group_id in (?)"""
        self.cur.execute(sql, (group_id,))
        self.con.commit()

    def get_previous_page_of_group_ids(self, group_id: str, span: int):
        sql1 = """SELECT MIN(sn) FROM
                  (SELECT sn from datatable WHERE group_id == (?))"""
        sql2 = """SELECT DISTINCT group_id FROM datatable
                  WHERE sn < (?) ORDER BY group_id DESC LIMIT (?)
                  OFFSET (?)"""
        self.cur.execute(sql1, (group_id,))
        start_sn = self.cur.fetchone()[0]
        print(f"{start_sn=}")
        self.cur.execute(sql2, (start_sn, span, 10))
        return self.cur.fetchall()

    def get_next_page_of_group_ids(self, group_id: str, span: int):
        sql1 = """SELECT MAX(sn) FROM
                  (SELECT sn from datatable WHERE group_id == (?))"""
        sql2 = """SELECT DISTINCT group_id FROM datatable
                  WHERE sn > (?) LIMIT (?)"""
        self.cur.execute(sql1, (group_id,))
        start_sn = self.cur.fetchone()[0]
        print(f"{start_sn=}")
        self.cur.execute(sql2, (start_sn, span,))
        return self.cur.fetchall()


if __name__ == "__main__":
    db = Data()
    db.create_table()
    counter = count()
    for i in range(50):
        repeats = random.randint(0, 5)
        for r in range(repeats):
            sn = next(counter)
            gid = f"G{i}"
            id = f"{gid}_F{r}"
            print(f"{sn=} {gid=} {id}")
            db.insert_data_row((sn, id, gid))

    start_gid = "G30"
    db.delete_group_id("G0")
    db.delete_group_id("G1")
    db.delete_group_id("G29")
    db.delete_group_id("G28")
    print(f"{start_gid=}")
    pb = db.get_previous_page_of_group_ids(start_gid, 10)
    print(f"{pb=}")
    nb = db.get_next_page_of_group_ids(start_gid, 10)
    print(f"{nb=}")
    db.close()
</code></pre>
<p>The return answers are:</p>
<pre><code>start_gid='G30'
start_sn=68
pb=[('G21',), ('G2',), ('G17',), ('G16',), ('G14',), ('G12',), ('G11',), ('G10',)]
start_sn=68
nb=[('G31',), ('G33',), ('G34',), ('G35',), ('G36',), ('G40',), ('G41',), ('G42',), ('G43',), ('G45',)]
</code></pre>
<p><code>pb</code> and <code>nb</code> appear to be wrong. They should run from <code>('G27',), ..., ('G18',)</code> and <code>('G31',), ..., ('G40',)</code> respectively. I can't figure out the mistake in the <code>.get_previous_page_of_group_ids()</code> and <code>.get_next_page_of_group_ids()</code> methods. I would appreciate your help in understanding the mistake. Thanks.</p>
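<p>One detail worth noting when comparing <code>pb</code> with the expectation: <code>group_id</code> is a TEXT column, so <code>ORDER BY group_id DESC</code> sorts lexicographically, not numerically, which scrambles multi-digit ids. The same effect is easy to reproduce in plain Python:</p>

```python
# Text sorting compares character by character, so "G9" > "G27" > "G2" > "G10".
ids = ["G2", "G9", "G10", "G27"]
print(sorted(ids, reverse=True))  # ['G9', 'G27', 'G2', 'G10']
```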
<p><strong>Update:</strong></p>
<p>I have rewritten <code>.get_previous_page_of_group_ids()</code> using a combination of SQLITE3 and Python commands. May I know a pure SQLITE3 approach to write this method?</p>
<pre><code>    def get_previous_page_of_group_ids(self, group_id: str, span: int):
        sql1 = """SELECT MIN(sn) FROM
                  (SELECT sn from datatable WHERE group_id == (?))"""
        sql2 = """SELECT group_id from datatable
                  WHERE sn < (?)"""
        self.cur.execute(sql1, (group_id,))
        start_sn = self.cur.fetchone()[0]
        print(f"{start_sn=}")
        self.cur.execute(sql2, (start_sn,))
        indexes = {int(i[0][1:]) for i in self.cur.fetchall()}
        print(f"{indexes=}")
        gids = [f"G{i}" for i in sorted(indexes, reverse=True)[:span]]
        gids.reverse()
        print(f"{gids=}")
        return gids
</code></pre>
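<p>For the pure-SQL part, here is a sketch (the table contents below are made up, not the random data from the script) that orders the distinct groups by the <code>sn</code> of their first row, i.e. by insertion order rather than by the text of <code>group_id</code>, takes the last <code>span</code> of them before the starting group, and flips the page back to ascending order:</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE datatable(sn INTEGER, item_id TEXT PRIMARY KEY, group_id TEXT)")
cur.executemany(
    "INSERT INTO datatable VALUES (?,?,?)",
    [(0, "G27_F0", "G27"), (1, "G27_F1", "G27"), (2, "G9_F0", "G9"),
     (3, "G10_F0", "G10"), (4, "G30_F0", "G30")],
)

# Last 2 distinct groups inserted before the first row of G30, in insertion order.
sql = """
SELECT group_id FROM (
    SELECT group_id, MIN(sn) AS first_sn
    FROM datatable
    WHERE sn < (SELECT MIN(sn) FROM datatable WHERE group_id = ?)
    GROUP BY group_id
    ORDER BY first_sn DESC
    LIMIT ?
) ORDER BY first_sn ASC
"""
prev_page = cur.execute(sql, ("G30", 2)).fetchall()
print(prev_page)  # [('G9',), ('G10',)]
```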
<p>I also discovered the issue in <code>__main__</code> which sometimes caused mistakes in the results of methods <code>.get_next_page_of_group_ids()</code> and <code>.get_previous_page_of_group_ids()</code>. This statement</p>
<pre><code>repeats = random.randint(0, 5)
</code></pre>
<p>should instead be</p>
<pre><code>repeats = random.randint(1, 5)
</code></pre>
| <python><sqlite><sqlite3-python> | 2023-09-18 11:12:03 | 0 | 8,499 | Sun Bear |
77,126,603 | 16,306,516 | Adding link to a mobile message string in python | <p>I have a URL in a variable named <code>url_sale</code>, and I want to embed <code>url_sale</code> in a message string as a URL, but I am not able to achieve it. Here is my code:</p>
<pre><code>base_url = self.env['ir.config_parameter'].get_param('web.base.url')
url_sale = base_url + '/web#id=%s&view_type=form&model=%s' % (record.id, record._name)
vals = {'mobile': empoyee.mobile_phone,
        'message_type': 'message',
        'message': f"""
Quotation approval request
Hello Approvers,
Quotation approval {record.name} has been raised by {record.user_id.name}.
Please review and approve or reject (with reason given) this quotation.
{url_sale}
"""
        }
</code></pre>
<p>Inside the <code>vals</code> dict, for the key <code>message</code>, when I pass a static URL like <code>https://www.goindigo.in/bookings/flight-select.html</code> it works, but when I pass the URL through a variable it is rendered as plain text.</p>
<p>actual output:</p>
<pre><code>Quotation approval request
Hello Approvers,
Quotation approval S00029 has been raised by Mitchell Admin.
Please review and approve or reject (with reason given) this quotation.
https://www.goindigo.in/bookings/flight-select.html
</code></pre>
<p>Expected Output:</p>
<p><a href="https://i.sstatic.net/zkKUN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zkKUN.png" alt="ExPected Output Image" /></a></p>
<p>The difference between the expected and actual output is the link: I need it to be clickable.</p>
| <python><python-3.x><odoo> | 2023-09-18 10:53:23 | 1 | 726 | Sidharth Panda |
77,126,571 | 12,787,236 | How to share similar code between Cloud Functions? | <p>I have multiple Cloud Functions in Google Cloud Platform that are very similar, changing only a few parameters and variable values between them.</p>
<p>Is there a way to tackle this problem, for example by organizing them like Python modules in order to reduce code duplication, perhaps sharing the same variables and functions between them?</p>
<p>I've thought about using <a href="https://cloud.google.com/functions/docs/configuring/env-var" rel="nofollow noreferrer">environment variables</a> but they do not offer the purpose that I want.</p>
<p>And none of this other two questions <a href="https://stackoverflow.com/questions/66136369/how-to-share-code-between-client-and-cloud-functions">here </a> and <a href="https://stackoverflow.com/questions/67202735/sharing-code-between-two-gcp-python-cloud-functions">here </a> have any guidance on that.</p>
| <python><google-cloud-platform><google-cloud-functions><serverless><code-duplication> | 2023-09-18 10:49:16 | 1 | 1,948 | Henrique Branco |
77,126,558 | 9,877,065 | Don't understand how Python calls this C function | <p>Please be patient, I am just trying to learn Python while figuring out how some software works (<a href="https://github.com/LBC-LNBio/pyKVFinder/tree/master" rel="nofollow noreferrer">https://github.com/LBC-LNBio/pyKVFinder/tree/master</a>).</p>
<p>Here in <a href="https://github.com/LBC-LNBio/pyKVFinder/blob/master/pyKVFinder/grid.py#L960" rel="nofollow noreferrer">https://github.com/LBC-LNBio/pyKVFinder/blob/master/pyKVFinder/grid.py#L960</a> , <code>grid.py</code>I have :</p>
<pre><code>ncav, cavities = _detect(
    nvoxels,
    nx,
    ny,
    nz,
    xyzr,
    P1,
    sincos,
    step,
    probe_in,
    probe_out,
    removal_distance,
    volume_cutoff,
    box_adjustment,
    P2,
    surface,
    nthreads,
    verbose,
)
</code></pre>
<p>but <code>_detect</code> comes from the compiled C library (<code>from _pyKVFinder import _detect, _detect_ladj</code>, I believe), and in its source code: <a href="https://github.com/LBC-LNBio/pyKVFinder/blob/master/C/pyKVFinder.c#L989C1-L994C2" rel="nofollow noreferrer">https://github.com/LBC-LNBio/pyKVFinder/blob/master/C/pyKVFinder.c#L989C1-L994C2</a>, <code>pyKVFinder.c</code></p>
<pre><code>/* Cavity detection */

/*
 * Function: _detect
 * -----------------
 *
 * Detect and cluster cavities
 *
 * PI: 3D grid
 * size: number of voxels in 3D grid
 * nx: x grid units
 * ny: y grid units
 * nz: z grid units
 * atoms: xyz coordinates and radii of input pdb
 * natoms: number of atoms
 * xyzr: number of data per atom (4: xyzr)
 * reference: xyz coordinates of 3D grid origin
 * ndims: number of coordinates (3: xyz)
 * sincos: sin and cos of 3D grid angles
 * nvalues: number of sin and cos (sina, cosa, sinb, cosb)
 * step: 3D grid spacing (A)
 * probe_in: Probe In size (A)
 * probe_out: Probe Out size (A)
 * removal_distance: Length to be removed from the cavity-bulk frontier (A)
 * volume_cutoff: Cavities volume filter (A3)
 * box_adjustment: Box adjustment mode
 * P2: xyz coordinates of x-axis vertice
 * nndims: number of coordinates (3: xyz)
 * is_ses: surface mode (1: SES or 0: SAS)
 * nthreads: number of threads for OpenMP
 * verbose: print extra information to standard output
 *
 * returns: PI[size] (cavities 3D grid) and ncav (number of cavities)
 */
int _detect(int *PI, int size, int nx, int ny, int nz, double *atoms,
            int natoms, int xyzr, double *reference, int ndims, double *sincos,
            int nvalues, double step, double probe_in, double probe_out,
            double removal_distance, double volume_cutoff, int box_adjustment,
            double *P2, int nndims, int is_ses, int nthreads, int verbose)
{
</code></pre>
<p>What am I missing about the different number of arguments and their different order? How can Python get the correct results despite these differences? I think the Python C bindings are provided by SWIG. I should stress that I don't know C at all, but was just trying to figure out what was going on under the hood of this really nice tool.</p>
| <python><c><function><arguments><swig> | 2023-09-18 10:47:41 | 0 | 3,346 | pippo1980 |
77,126,224 | 1,769,197 | Display irregular timestamp on x-axis | <p>I have the following dataframe, and I would like to plot x against y but use the endTime column for the x-axis tick labels. However, I just cannot get this right.</p>
<pre><code>                  endTime    x         y
0 2021-06-29 09:15:44.097  0.0       inf
1 2021-06-29 09:17:50.434  1.0  2.718282
2 2021-06-29 09:21:32.209  2.0  0.824361
3 2021-06-29 09:23:48.657  3.0  0.465204
4 2021-06-29 09:29:55.713  4.0  0.321006
</code></pre>
<pre><code>fig, ax = plt.subplots(nrows=1, ncols=1, sharex=False, dpi=400)
ax.plot(abc['x'], abc['y'], linewidth=1, color='black')
ax.tick_params(axis='both', which='major', labelsize=6)
ax.set_xticklabels(abc['endTime'])
ax.xaxis.set_major_formatter(mdates.DateFormatter('%H:%M:%S'))
ax.xaxis.set_major_locator(plt.MaxNLocator(10))
for label in ax.get_xticklabels():
    label.set_ha('right')
    label.set_rotation(45)
    label.set_fontsize(5)
</code></pre>
<p><a href="https://i.sstatic.net/6AnYZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6AnYZ.png" alt="enter image description here" /></a></p>
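<p>For what it's worth, a common alternative to <code>set_xticklabels</code> (which only relabels whatever ticks happen to exist, so labels and data can drift apart) is to pass the timestamps themselves as the x data and let matplotlib place date ticks; a minimal sketch with a few made-up rows standing in for <code>abc</code>:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.dates as mdates
import matplotlib.pyplot as plt
import pandas as pd

# Made-up stand-ins for abc['endTime'] and abc['y']
ts = pd.to_datetime([
    "2021-06-29 09:15:44.097",
    "2021-06-29 09:17:50.434",
    "2021-06-29 09:21:32.209",
])
y = [2.718282, 0.824361, 0.465204]

fig, ax = plt.subplots(dpi=100)
ax.plot(ts, y, linewidth=1, color="black")  # datetimes on x: ticks track real time
ax.xaxis.set_major_formatter(mdates.DateFormatter("%H:%M:%S"))
fig.autofmt_xdate(rotation=45)
```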
<p>The actual dataframe is as follows:</p>
<pre><code>{'endTime': {0: Timestamp('2021-06-29 09:15:44.097000'),
1: Timestamp('2021-06-29 09:17:50.434000'),
2: Timestamp('2021-06-29 09:21:32.209000'),
3: Timestamp('2021-06-29 09:23:48.657000'),
4: Timestamp('2021-06-29 09:29:55.713000'),
5: Timestamp('2021-06-29 09:30:39.039000'),
6: Timestamp('2021-06-29 09:32:02.137000'),
7: Timestamp('2021-06-29 09:32:53.199000'),
8: Timestamp('2021-06-29 09:34:26.447000'),
9: Timestamp('2021-06-29 09:35:32.147000'),
10: Timestamp('2021-06-29 09:36:58.669000'),
11: Timestamp('2021-06-29 09:39:01.664000'),
12: Timestamp('2021-06-29 09:39:54.868000'),
13: Timestamp('2021-06-29 09:41:12.514000'),
14: Timestamp('2021-06-29 09:42:46.969000'),
15: Timestamp('2021-06-29 09:44:17.882000'),
16: Timestamp('2021-06-29 09:45:34.855000'),
17: Timestamp('2021-06-29 09:47:46.924000'),
18: Timestamp('2021-06-29 09:49:42.807000'),
19: Timestamp('2021-06-29 09:50:17.589000'),
20: Timestamp('2021-06-29 09:51:27.592000'),
21: Timestamp('2021-06-29 09:53:06.344000'),
22: Timestamp('2021-06-29 09:54:19.235000'),
23: Timestamp('2021-06-29 09:56:42.898000'),
24: Timestamp('2021-06-29 10:00:12.240000'),
25: Timestamp('2021-06-29 10:02:11.541000'),
26: Timestamp('2021-06-29 10:06:13.520000'),
27: Timestamp('2021-06-29 10:09:40.064000'),
28: Timestamp('2021-06-29 10:12:38.347000'),
29: Timestamp('2021-06-29 10:15:18.936000'),
30: Timestamp('2021-06-29 10:18:12.151000'),
31: Timestamp('2021-06-29 10:21:29.921000'),
32: Timestamp('2021-06-29 10:25:07.076000'),
33: Timestamp('2021-06-29 10:28:33.882000'),
34: Timestamp('2021-06-29 10:33:32.048000'),
35: Timestamp('2021-06-29 10:37:04.144000'),
36: Timestamp('2021-06-29 10:40:15.729000'),
37: Timestamp('2021-06-29 10:44:06.881000'),
38: Timestamp('2021-06-29 10:48:02.841000'),
39: Timestamp('2021-06-29 10:53:14.078000'),
40: Timestamp('2021-06-29 10:55:24.998000'),
41: Timestamp('2021-06-29 10:58:54.023000'),
42: Timestamp('2021-06-29 11:04:57.751000'),
43: Timestamp('2021-06-29 11:08:27.457000'),
44: Timestamp('2021-06-29 11:11:59.912000'),
45: Timestamp('2021-06-29 11:17:53.474000'),
46: Timestamp('2021-06-29 11:22:38.469000'),
47: Timestamp('2021-06-29 11:26:35.577000'),
48: Timestamp('2021-06-29 11:30:00.279000'),
49: Timestamp('2021-06-29 11:35:41.830000'),
50: Timestamp('2021-06-29 11:40:05.413000'),
51: Timestamp('2021-06-29 11:45:28.679000'),
52: Timestamp('2021-06-29 11:50:43.851000'),
53: Timestamp('2021-06-29 11:57:31.521000'),
54: Timestamp('2021-06-29 13:00:39.244000'),
55: Timestamp('2021-06-29 13:04:57.323000'),
56: Timestamp('2021-06-29 13:08:34.488000'),
57: Timestamp('2021-06-29 13:13:02.859000'),
58: Timestamp('2021-06-29 13:19:39.836000'),
59: Timestamp('2021-06-29 13:24:29.931000'),
60: Timestamp('2021-06-29 13:29:02.992000'),
61: Timestamp('2021-06-29 13:32:15.414000'),
62: Timestamp('2021-06-29 13:36:01.976000'),
63: Timestamp('2021-06-29 13:42:32.769000'),
64: Timestamp('2021-06-29 13:47:46.070000'),
65: Timestamp('2021-06-29 13:52:59.930000'),
66: Timestamp('2021-06-29 13:59:45.501000'),
67: Timestamp('2021-06-29 14:04:46.406000'),
68: Timestamp('2021-06-29 14:07:28.696000'),
69: Timestamp('2021-06-29 14:08:44.476000'),
70: Timestamp('2021-06-29 14:14:47.081000'),
71: Timestamp('2021-06-29 14:22:41.006000'),
72: Timestamp('2021-06-29 14:30:08.780000'),
73: Timestamp('2021-06-29 14:37:50.977000'),
74: Timestamp('2021-06-29 14:41:11.529000'),
75: Timestamp('2021-06-29 14:45:01.870000'),
76: Timestamp('2021-06-29 14:49:46.199000'),
77: Timestamp('2021-06-29 14:51:22.406000'),
78: Timestamp('2021-06-29 14:53:23.328000'),
79: Timestamp('2021-06-29 14:55:45.348000'),
80: Timestamp('2021-06-29 14:59:15.975000'),
81: Timestamp('2021-06-29 15:02:59.465000'),
82: Timestamp('2021-06-29 15:07:11.711000'),
83: Timestamp('2021-06-29 15:10:45.915000'),
84: Timestamp('2021-06-29 15:15:10.744000'),
85: Timestamp('2021-06-29 15:19:07.360000'),
86: Timestamp('2021-06-29 15:23:14.322000'),
87: Timestamp('2021-06-29 15:26:24.208000'),
88: Timestamp('2021-06-29 15:30:24.773000'),
89: Timestamp('2021-06-29 15:36:15.559000'),
90: Timestamp('2021-06-29 15:41:07.963000'),
91: Timestamp('2021-06-29 15:47:30.031000'),
92: Timestamp('2021-06-29 15:52:36.487000'),
93: Timestamp('2021-06-29 15:56:23.028000'),
94: Timestamp('2021-06-29 15:59:01.159000'),
95: Timestamp('2021-06-29 16:01:14.504000'),
96: Timestamp('2021-06-29 16:06:21.536000'),
97: Timestamp('2021-06-29 16:09:06.893000'),
98: Timestamp('2021-06-29 16:14:42.689000'),
99: Timestamp('2021-06-29 16:25:03.275000')},
'x': {0: 0.0,
1: 1.0,
2: 2.0,
3: 3.0,
4: 4.0,
5: 5.0,
6: 6.0,
7: 7.0,
8: 8.0,
9: 9.0,
10: 10.0,
11: 11.0,
12: 12.0,
13: 13.0,
14: 14.0,
15: 15.0,
16: 16.0,
17: 17.0,
18: 18.0,
19: 19.0,
20: 20.0,
21: 21.0,
22: 22.0,
23: 23.0,
24: 24.0,
25: 25.0,
26: 26.0,
27: 27.0,
28: 28.0,
29: 29.0,
30: 30.0,
31: 31.0,
32: 32.0,
33: 33.0,
34: 34.0,
35: 35.0,
36: 36.0,
37: 37.0,
38: 38.0,
39: 39.0,
40: 40.0,
41: 41.0,
42: 42.0,
43: 43.0,
44: 44.0,
45: 45.0,
46: 46.0,
47: 47.0,
48: 48.0,
49: 49.0,
50: 50.0,
51: 51.0,
52: 52.0,
53: 53.0,
54: 54.0,
55: 55.0,
56: 56.0,
57: 57.0,
58: 58.0,
59: 59.0,
60: 60.0,
61: 61.0,
62: 62.0,
63: 63.0,
64: 64.0,
65: 65.0,
66: 66.0,
67: 67.0,
68: 68.0,
69: 69.0,
70: 70.0,
71: 71.0,
72: 72.0,
73: 73.0,
74: 74.0,
75: 75.0,
76: 76.0,
77: 77.0,
78: 78.0,
79: 79.0,
80: 80.0,
81: 81.0,
82: 82.0,
83: 83.0,
84: 84.0,
85: 85.0,
86: 86.0,
87: 87.0,
88: 88.0,
89: 89.0,
90: 90.0,
91: 91.0,
92: 92.0,
93: 93.0,
94: 94.0,
95: 95.0,
96: 96.0,
97: 97.0,
98: 98.0,
99: 99.0},
'y': {0: inf,
1: 2.718281828459045,
2: 0.8243606353500641,
3: 0.46520414169536317,
4: 0.32100635417193535,
5: 0.24428055163203397,
6: 0.1968934021442743,
7: 0.16479499927072966,
8: 0.1416435566333533,
9: 0.12416878541576264,
10: 0.11051709180756478,
11: 0.09956085817042402,
12: 0.09057533746010242,
13: 0.08307376918678358,
14: 0.07671724505116398,
15: 0.07126260704981642,
16: 0.06653090368236621,
17: 0.062387533036807656,
18: 0.058729319153346476,
19: 0.05547585487325276,
20: 0.052563554818801206,
21: 0.04994147844694904,
22: 0.0475683379634032,
23: 0.04541031690899021,
24: 0.043439454382916305,
25: 0.041632430967695526,
26: 0.03996964453633811,
27: 0.03843449831943464,
28: 0.03701284647909522,
29: 0.035692557998043696,
30: 0.03446317045045247,
31: 0.03331561276838431,
32: 0.03224198148434696,
33: 0.031235358794543675,
34: 0.030289663602118867,
35: 0.029399528772330267,
36: 0.028560199373269844,
37: 0.027767447833407847,
38: 0.027017502824254296,
39: 0.0263069893464087,
40: 0.025632878013110722,
41: 0.024992441925521802,
42: 0.024383219846499493,
43: 0.02380298462536526,
44: 0.023249716020602763,
45: 0.022721577222185183,
46: 0.02221689449911207,
47: 0.021734139497433816,
48: 0.021271913794689738,
49: 0.020828935382242928,
50: 0.020404026800535116,
51: 0.019996104696205556,
52: 0.01960417060620258,
53: 0.019227302803948985,
54: 0.018864649067480945,
55: 0.018515420250202103,
56: 0.0181788845522312,
57: 0.01785436240487598,
58: 0.01754122189302606,
59: 0.01723887465061758,
60: 0.01694677217310435,
61: 0.01666440249833668,
62: 0.016391287213615297,
63: 0.016126978752130948,
64: 0.015871057946666964,
65: 0.015623131812453053,
66: 0.015382831534515036,
67: 0.015149810637850779,
68: 0.014923743321347438,
69: 0.014704322938598422,
70: 0.014491260610729258,
71: 0.014284283958041898,
72: 0.014083135938771927,
73: 0.013887573784552722,
74: 0.01369736802332,
75: 0.013512301581391233,
76: 0.013332168957335343,
77: 0.01315677546102486,
78: 0.01298593651194874,
79: 0.012819476991470978,
80: 0.01265723064425793,
81: 0.012499039524574767,
82: 0.012344753483575723,
83: 0.01219422969409067,
84: 0.01204733220974755,
85: 0.011903931555570808,
86: 0.011763904347465158,
87: 0.011627132938234756,
88: 0.011493505088003843,
89: 0.011362913657098819,
90: 0.011235256319625976,
91: 0.01111043529613601,
92: 0.01098835710390777,
93: 0.010868932323511311,
94: 0.010752075380425403,
95: 0.010637704340589017,
96: 0.010525740718860394,
97: 0.010416109299443053,
98: 0.010308737967415375,
99: 0.010203557550571063}}
</code></pre>
| <python><matplotlib> | 2023-09-18 09:57:02 | 0 | 2,253 | user1769197 |
77,126,112 | 9,373,320 | Creating dynamic pysimplegui tab | <p>I am trying to create a dynamic PySimpleGUI window where I can add input boxes to the 2nd tab based on the user's entry in the first tab. That is, if the user enters 3 in the first tab, then the 2nd tab will contain 3 input boxes. Below is my sample code, but it has an error. Please help me correct it.</p>
<p>The error occurs when updating the tab in the line below:</p>
<pre><code>tab2.update(tab2_layout=tab2_layout)
</code></pre>
<p>Below is the full code:</p>
<pre><code>import PySimpleGUI as sg

# Define the layout for the first tab
tab1_layout = [
    [sg.Text("Enter the number of input boxes you want to add:")],
    [sg.InputText(key='-NUM_INPUT_BOXES-')],
    [sg.Button("Submit", key='-SUBMIT-')],
]

# Create the first tab
tab1 = sg.Tab("Input", tab1_layout)

# Initialize the second tab layout as an empty list
tab2_layout = []

# Create the second tab but keep it disabled until needed
tab2 = sg.Tab("Dynamic Input Boxes", tab2_layout)

# Create the tab group
tab_group_layout = [[tab1, tab2]]

# Define the main layout with the tab group
layout = [
    [sg.TabGroup(tab_group_layout)],
]

window = sg.Window("Dynamic Input Boxes", layout, resizable=True)

while True:
    event, values = window.read()

    if event == sg.WIN_CLOSED:
        break

    if event == '-SUBMIT-':
        try:
            num_input_boxes = int(values['-NUM_INPUT_BOXES-'])

            # Define a layout for the dynamic input boxes
            tab2_layout = [[sg.Text(f"Input Box {i + 1}:"),
                            sg.InputText(key=f'-INPUT_BOX_{i}-')] for i in range(num_input_boxes)]

            # Add a "Submit" button to the dynamic layout
            tab2_layout.append([sg.Button("Submit", key='-SUBMIT_DYNAMIC-')])

            # Update the second tab with the dynamic layout
            tab2.update(tab2_layout=tab2_layout)

            # Create a new window with the updated layout
            updated_layout = [[tab1, tab2]]
            window.close()
            window = sg.Window("Dynamic Input Boxes", updated_layout, resizable=True)
        except ValueError:
            sg.popup_error("Please enter a valid number for input boxes.")

    if event == '-SUBMIT_DYNAMIC-':
        # Retrieve values from the dynamic input boxes
        input_values = [values[f'-INPUT_BOX_{i}-'] for i in range(num_input_boxes)]
        sg.popup("Input Values", input_values)

window.close()
</code></pre>
| <python><python-3.x><pysimplegui> | 2023-09-18 09:42:43 | 0 | 330 | sumanta das |
77,125,922 | 15,222,211 | FastAPI pydantic model property in a JSON response | <p>I want to use a property to automatically generate key-value pairs and include them in a FastAPI JSON Response.
In my example, I'm attempting to use a property within a Pydantic model, but it's not working as expected.</p>
<p>I requested: <a href="http://127.0.0.1:8000/" rel="nofollow noreferrer">http://127.0.0.1:8000/</a><br />
and expected to receive: <code>{"id": "my_name", "name": "My Name"}</code><br />
but I received: <code>{"id": "my_name"}</code></p>
<p>Please help me find an elegant solution to properly document property in <a href="http://127.0.0.1:8000/docs" rel="nofollow noreferrer">http://127.0.0.1:8000/docs</a> and include the value in the response.</p>
<pre class="lang-py prettyprint-override"><code>import uvicorn
from fastapi import FastAPI
from pydantic import Field, BaseModel

app = FastAPI()


class Item(BaseModel):
    id: str = Field(description="Item ID")
    # name: str = Field(description="Item name")  # NEED USE THIS ARGUMENT AS PROPERTY

    model_config = {
        "json_schema_extra": {
            "examples": [{"id": "my_name", "name": "My Name"}]
        }
    }

    @property
    def name(self) -> str:
        return self.id.replace("_", " ").upper()


@app.get("/")
async def main() -> Item:
    return Item(id="my_name")


if __name__ == "__main__":
    uvicorn.run(app)
</code></pre>
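<p>If Pydantic v2 is an option, its <code>computed_field</code> decorator marks a property so that it is included in serialization and in the generated schema; a sketch of just the model side, independent of FastAPI (note that <code>.title()</code> is used here to get "My Name", whereas the <code>.upper()</code> above would yield "MY NAME"):</p>

```python
from pydantic import BaseModel, computed_field


class Item(BaseModel):
    id: str

    @computed_field  # included in model_dump(), JSON output and the schema
    @property
    def name(self) -> str:
        return self.id.replace("_", " ").title()


item = Item(id="my_name")
print(item.model_dump())  # {'id': 'my_name', 'name': 'My Name'}
```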
| <python><fastapi><pydantic> | 2023-09-18 09:18:03 | 1 | 814 | pyjedy |
77,125,906 | 19,108,785 | Install python packages in docker Alpine base image | <p>I have a requirement to install Python packages from a requirements.txt file in Docker.
I am using "python:3.10.10-alpine" as my base image and installing the packages from the requirements.txt file.
Here is my Dockerfile for reference:</p>
<pre><code>FROM python:3.10.10-alpine
RUN apk update && \
apk add --no-cache wget && \
apk add --no-cache build-base libffi-dev openssl-dev && \
apk add python3 py3-pip gcc musl-dev
RUN pip install --no-cache-dir --upgrade pip
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
</code></pre>
<p>In my requirements.txt a bunch of Python modules are listed, including scipy, numpy and others.</p>
<p>While installing the packages, it returns the following error:</p>
<pre><code>#10 1305.7 ERROR: Could not find a version that satisfies the requirement tensorflow==2.11.0 (from versions: none)
#10 1305.7 ERROR: No matching distribution found for tensorflow==2.11.0
#10 ERROR: process "/bin/sh -c pip install --no-cache-dir -r requirements.txt" did not complete successfully: exit code: 1
------
> [5/5] RUN pip install --no-cache-dir -r requirements.txt:
1157.3 Installing build dependencies: still running...
1220.5 Installing build dependencies: still running...
1293.2 Installing build dependencies: still running...
1304.1 Installing build dependencies: finished with status 'done'
1304.1 Getting requirements to build wheel: started
1304.8 Getting requirements to build wheel: finished with status 'done'
1304.8 Preparing metadata (pyproject.toml): started
1305.5 Preparing metadata (pyproject.toml): finished with status 'done'
1305.7 ERROR: Could not find a version that satisfies the requirement tensorflow==2.11.0 (from versions: none)
1305.7 ERROR: No matching distribution found for tensorflow==2.11.0
------
Dockerfile:11
--------------------
9 |
10 | COPY requirements.txt .
11 | >>> RUN pip install --no-cache-dir -r requirements.txt
--------------------
ERROR: failed to solve: process "/bin/sh -c pip install --no-cache-dir -r requirements.txt" did not complete successfully: exit code: 1
</code></pre>
<p>I tried running requirements.txt without any version pins, but it didn't work.</p>
| <python><docker><alpine-linux> | 2023-09-18 09:15:09 | 2 | 570 | Roronoa Zoro |
77,125,624 | 14,282,714 | SSLError: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443) | <p>I would like to use <code>SentenceTransformers</code> as shown on this <a href="https://www.sbert.net/" rel="nofollow noreferrer">page</a>. When running the following code I get an error which I'm not able to solve:</p>
<pre><code>import torch
from sentence_transformers import SentenceTransformer
sentences = ['This framework generates embeddings for each input sentence',
'Sentences are passed as a list of string.',
'The quick brown fox jumps over the lazy dog.']
model = SentenceTransformer("paraphrase-multilingual-mpnet-base-v2")
embeddings = model.encode(sentences)
</code></pre>
<p>This returns the following error:</p>
<pre><code>SSLError: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /api/models/sentence-transformers/paraphrase-multilingual-mpnet-base-v2 (Caused by SSLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)')))"), '(Request ID: b875fc51-82ec-4309-94ab-0d08f9ab067d)')
</code></pre>
<p>I'm using <code>Python 3.10.12</code> with the following packages version:</p>
<pre><code>torch 2.0.1
sentence-transformers 2.2.2
requests 2.31.0
</code></pre>
<p>I tried to add the following code like from this question (<a href="https://stackoverflow.com/questions/70481851/how-to-fix-exception-has-occurred-sslerror-httpsconnectionpool-in-vs-code-env">how to fix "Exception has occurred: SSLError HTTPSConnectionPool" in VS Code environment</a>):</p>
<pre><code>import requests
r = requests.get('https://huggingface.com', verify=False)
</code></pre>
<p>Unfortunately, this also returns an error:</p>
<pre><code>SSLError: HTTPSConnectionPool(host='huggingface.com', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)')))
</code></pre>
<p>I also tried to use <code>verify=ssl.CERT_NONE</code> instead of <code>verify=False</code> like described <a href="https://stackoverflow.com/questions/55680224/how-to-fix-requests-exceptions-sslerror">here</a>, but this also doesn't work.</p>
<p>Finally, I added this code and downgraded <code>requests</code> to <code>2.27.1</code> like described <a href="https://stackoverflow.com/questions/75110981/sslerror-httpsconnectionpoolhost-huggingface-co-port-443-max-retries-exce">here</a>:</p>
<pre><code>import os
os.environ['CURL_CA_BUNDLE'] = ''
</code></pre>
<p>Also this doesn't work. So I was wondering if anyone knows why this error happens and how to fix this?</p>
| <python><huggingface-transformers><huggingface><sentence-transformers> | 2023-09-18 08:35:36 | 1 | 42,724 | Quinten |
77,125,620 | 11,406,071 | Smoothing function with fixed end points (decreasing window size near boundary) | <p>I would like either</p>
<ul>
<li>a link to a smoothing function from existing library or</li>
<li>a 'reasonably performant' python function</li>
</ul>
<p>that performs simple boxcar smoothing but with the catch that it accepts a boundary condition like this:</p>
<p>Suppose an array of length 9, with a rolling window of 5</p>
<p>target source indices</p>
<hr />
<pre><code> 1 1
2 1 2 3
3 1 2 3 4 5
4 2 3 4 5 6
5 3 4 5 6 7
6 4 5 6 7 8
7 5 6 7 8 9
8 7 8 9
9 9
</code></pre>
<p>So the rolling window gets smaller near the boundaries, thereby preserving the end point values.</p>
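For reference, here is a minimal pure-NumPy sketch of the shrinking-window rule described above (it loops per point, so it is only "reasonably performant"; the function name is illustrative and `window` is assumed odd):

```python
import numpy as np

def boxcar_shrinking(y, window):
    # The half-width shrinks near the edges so the end point values
    # are preserved exactly, matching the index table above.
    y = np.asarray(y, dtype=float)
    n = len(y)
    half = window // 2
    out = np.empty(n)
    for i in range(n):
        h = min(i, n - 1 - i, half)  # largest symmetric window that fits
        out[i] = y[i - h : i + h + 1].mean()
    return out

print(boxcar_shrinking([1, 2, 3, 4, 5, 6, 7, 8, 9], 5))
# -> [1. 2. 3. 4. 5. 6. 7. 8. 9.]  (linear data is left unchanged)
```

Each target index i averages sources i−h … i+h, reproducing the 1 / 1 2 3 / 1 2 3 4 5 / … pattern for length 9 with window 5.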
<p>I have looked at:
<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.uniform_filter1d.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.uniform_filter1d.html</a>
<a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.rolling.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.rolling.html</a></p>
| <python><smoothing> | 2023-09-18 08:35:12 | 2 | 407 | Victor Savenije |
77,125,386 | 1,238,934 | Python global var in multiprocessing | <p>For the following code, I declare 'GlobalCount', which is intended to be a global variable.
Then I start the process() method with multiprocessing; it increments GlobalCount each second. If I set a breakpoint there, the value is incremented correctly.
Then, in parallel, I request 'GETSTATUS', which should return the value of GlobalCount. However, it is always 0! What am I doing wrong? Thank you.</p>
<p>Updated code:</p>
<pre><code>import multiprocessing
import socket
import time
from multiprocessing import Value
#globals
GlobalCount = Value('i', 0) # 'i' stands for integer
def main():
server_ip = "127.0.0.1"
server_port = 2222
# Create a UDP socket
server_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server_address = (server_ip, server_port)
server_socket.bind(server_address)
response = ""
print("UDP server is listening on {}:{}".format(*server_address))
while True:
# Receive data from the client
data, client_address = server_socket.recvfrom(256)
if data:
data_str = data.decode('utf-8')
arrayParams = data_str.split(';')
if arrayParams[0] == "PROCESS":
server_socket.sendto(response.encode(), client_address)
# Start the process in parallel
training_process = multiprocessing.Process(target=process)
training_process.start()
elif arrayParams[0] == "GETSTATUS":
current_value = GlobalCount.value
response = str(current_value) #here GlobalCount is always 0
server_socket.sendto(response.encode(), client_address)
else:
print("")
def process():
for i in range(100):
with GlobalCount.get_lock(): # Ensure thread-safety when updating the value
GlobalCount.value += 1
time.sleep(1)
#Execute at start
if __name__ == '__main__':
main()
</code></pre>
| <python><multiprocessing><global-variables><python-multiprocessing> | 2023-09-18 07:48:40 | 2 | 3,830 | Jaume |
77,125,327 | 3,833,632 | Can the parent process find out the PID of the child process | <p>I have some python code that forks</p>
<pre><code>try:
pid = os.fork()
if pid > 0:
# parent process, return and keep running
return
except:
print("EXCEPTION")
sys.exit(1)
</code></pre>
<p>And I would find it useful to be able to know the PID of the child process within the parent process. I am fine with using locks/waits/synchronization methods. I am just trying to be cautious here as I want to make sure I do something safe.</p>
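For reference, `os.fork()` already returns the child's PID to the parent (and 0 to the child), so a sketch like the following can record it; the cleanup choice (`waitpid`) is one option among several:

```python
import os

pid = os.fork()
if pid > 0:
    # Parent: the return value of os.fork() *is* the child's PID.
    print(f"child pid: {pid}")
    os.waitpid(pid, 0)  # reap the child so it doesn't linger as a zombie
else:
    # Child: fork() returned 0 here; do the child's work, then exit.
    os._exit(0)
```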
| <python><fork> | 2023-09-18 07:37:34 | 2 | 715 | CalebK |
77,125,079 | 4,417,586 | Can't build a Python Docker container on Raspberry Pi | <p>I have a backend Python project which I've been dockerizing successfully on my local machine. After several tries, it seems some Python packages (either <a href="https://dbus.freedesktop.org/doc/dbus-python/" rel="nofollow noreferrer"><code>dbus-python</code></a> or <a href="https://pygobject.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer"><code>PyGObject</code></a>) require some system packages to be built, so I use a Debian Docker image to be able to install those dependencies.</p>
<p>When building and running the image on my local machine (MacOS) using <code>docker compose up -d</code>, it works fine, installing the system packages, the Python packages, and building and running the docker containers successfully.</p>
<p>All in all the Dockerfile looks like this:</p>
<pre><code># We use Debian to be able to install pygobject
FROM python:3.11-slim-bookworm
# Download latest listing of available packages:
RUN apt-get -y update
# Upgrade already installed packages:
RUN apt-get -y upgrade
# Install the system packages which seem to be necessary for installing pygobject:
RUN apt-get -y install libdbus-glib-1-dev libgirepository1.0-dev libcairo2-dev
WORKDIR /backend
COPY ./requirements.txt /backend/requirements.txt
# Install Python dependencies
RUN pip3 install --no-cache-dir --upgrade -r /backend/requirements.txt
# Add backend folder to container
COPY ./ /backend
# Run the backend server on port 5000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "5000"]
</code></pre>
<p>However, when I do the same on a Raspberry Pi (<code>docker compose --build</code>), I get an error on the line <code>RUN pip3 install --no-cache-dir --upgrade -r /backend/requirements.txt</code>:</p>
<pre><code>Building wheel for ninja (pyproject.toml): finished with status 'error'
...
× Building wheel for ninja (pyproject.toml) did not run successfully.
...
ERROR: Could not build wheels for ninja, which is required to install pyproject.toml-based projects
...
failed to solve: process "/bin/sh -c pip3 install --no-cache-dir --upgrade -r /backend/requirements.txt" did not complete successfully: exit code: 1
</code></pre>
<p>This is the error I also used to have on my local machine (<code>ninja</code> probably being a subdependency of either <a href="https://dbus.freedesktop.org/doc/dbus-python/" rel="nofollow noreferrer"><code>dbus-python</code></a> or <a href="https://pygobject.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer"><code>PyGObject</code></a>), before solving it by using a Debian image and installing the system dependencies. So I'm a bit confused:</p>
<p><strong>How come a Docker build can produce a build error on one host machine and not on another?</strong></p>
| <python><docker><bluez><pygobject><dbus-python> | 2023-09-18 06:50:15 | 0 | 1,152 | bolino |
77,125,023 | 8,471,995 | Combining Inheritance and dataclass | <p>I would like a decorator for a class that will</p>
<ul>
<li>Add <code>dataclass</code>-ness</li>
<li>Inherits a superclass</li>
<li><code>mypy</code> identifies</li>
</ul>
<p>So far, below was my solution.</p>
<pre class="lang-py prettyprint-override"><code>import dataclasses as dc
# An example superclass
class Base:
def hello(self) -> None:
print("hi")
def my_func(base):
dataclass = dc.dataclass(base)
class new_class(dataclass, Base):
pass
return new_class
@my_func
class HasHelloAndDataclass:
a: int
</code></pre>
<p>But <code>mypy</code> doesn't recognize the <code>dataclass</code> part: it can't infer <code>HasHelloAndDataclass</code>'s <code>__init__</code> or the attribute <code>a</code>. The interpreter itself doesn't complain, though.</p>
<p>How can I make this possible?</p>
| <python><python-dataclasses> | 2023-09-18 06:40:53 | 0 | 1,617 | Inyoung Kim 김인영 |
77,124,910 | 3,833,632 | What is the proper cross-platform way to run a background thread after execution? | <p>I am trying to write code that will allow me to keep Selenium drivers alive in background processes, or spawn background processes that periodically check for status changes.</p>
<p>The only requirement is that these background processes hold the Selenium drivers alive in a way I can connect to with a remote session (using their URL and session ID), and that they can later be killed.</p>
<p>The design is that I want to have a registry file. Whenever I launch one of these processes in the background I want the script to exit but a background processes started in a way I can stop it later.</p>
<p>Ideally I am hoping to make it as simple as a PID. So if the registry file just had a PID and I was able to just kill it that would be slick.
I might put other things in this registry like status etc. But PID is the most important.</p>
<p>Where I am hung up is exactly how I should start this process that I intend to keep running past the life of my script. Some people use fork(), some use Thread(daemon=True), and I am not sure how either of them would give me a PID I can use to close the process afterwards.</p>
<p>I am aiming for this code to be able to run on my Mac as well as linux.</p>
<p>Which direction should I take?</p>
| <python><multithreading> | 2023-09-18 06:19:36 | 0 | 715 | CalebK |
77,124,879 | 3,358,488 | pip: 'extras_require' must be a dictionary whose values are strings or lists of strings containing valid project/version requirement specifiers | <p>I tried running <code>pip install gym==0.21.0</code></p>
<p>but got the cryptic error:</p>
<pre><code>Collecting gym==0.21.0
Using cached gym-0.21.0.tar.gz (1.5 MB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [1 lines of output]
error in gym setup command: 'extras_require' must be a dictionary whose values are strings or lists of strings containing valid project/version requirement specifiers.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p>What may be causing this error?</p>
| <python><pip><openai-gym> | 2023-09-18 06:12:27 | 1 | 5,872 | user118967 |
77,124,854 | 562,769 | Can I print the cumulative time per pytest fixture? | <p>I'm currently struggling with a test suite that is slower than I would like it to be. I want to figure out which part is the most promising to work on.</p>
<p>One technique I know is <a href="https://stackoverflow.com/a/27899853/562769"><code>--durations</code></a>, but that is not super helpful in my case.</p>
<p>I have tons of fixtures and I would like to improve the most critical ones.</p>
<p>Is there a way to measure the time spent per fixture (cumulative) over the whole test suite and print that?</p>
| <python><pytest><pytest-fixtures> | 2023-09-18 06:04:42 | 1 | 138,373 | Martin Thoma |
77,124,504 | 5,084,432 | Write large dataframe in Excel Pandas python | <p>I have a dataframe which contains between 550,000 and 900,000 rows and 10 columns.
I read the data from PostgreSQL and converted it into a dataframe, which took a few seconds.
However, when trying to write the data to Excel, it took more than 1 hour.</p>
<pre><code>writer = pd.ExcelWriter('filepath/file.xlsx', engine='xlsxwriter')

df.to_excel(writer, sheet_name='My Report', startrow=8, index=False, header=False)
</code></pre>
<p>Is there any way to write a huge dataframe to an Excel file in a few seconds?</p>
| <python><pandas><excel><xlsxwriter> | 2023-09-18 04:07:07 | 1 | 349 | shashank verma |
77,124,399 | 11,672,868 | how to get exact position of music playback in pygame? | <p>I know we can use <code>.get_raw()</code> on a Sound object to get a raw array of audio data, and I would like to compute indexes into that array from the current playback position (for example, <code>raw_array[pos:pos+offset]</code> would represent audio from the current position plus an offset value). However, Sound objects do not have a <code>.get_pos()</code> method (and it does not seem exact anyway). How can I accomplish this?</p>
| <python><audio><pygame> | 2023-09-18 03:21:07 | 1 | 308 | K-FLOW |
77,124,279 | 7,518,091 | TypeError: WebDriver.__init__() in selenium | <p>I have the following python code in selenium:</p>
<pre><code>from selenium.webdriver import Firefox, FirefoxOptions
from selenium.webdriver.firefox.service import Service
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.common.by import By
options = Options()
options.set_preference('browser.download.folderList', 2)
options.set_preference('browser.download.manager.showWhenStarting', False)
options.set_preference('browser.download.dir', 'C:\\Users\\Own\\data')
options.set_preference('browser.helperApps.neverAsk.saveToDisk', 'text/plain')
options.set_preference('browser.helperApps.neverAsk.openFile', 'text/plain')
service = Service(executable_path='C:\\Users\\Own\\geckodriver-v0.33.0-win64\\geckodriver.exe')
browser = Firefox(service=service, options=options) # Encounter TypeError: WebDriver.__init__() got an unexpected keyword argument 'service'
</code></pre>
<p>In the last line, I encounter the error <code>TypeError: WebDriver.__init__() got an unexpected keyword argument 'service'</code></p>
<p>I am using python v3 and selenium v3.141</p>
| <python><selenium-webdriver> | 2023-09-18 02:28:52 | 1 | 3,699 | user1315789 |
77,124,208 | 5,162,426 | Optimizing NumPy finite differences via chain rule | <p>Consider the following code:</p>
<pre><code>x = np.array([1, 5, 6, 10]) # an unstructured coordinate
f = x**2 # function value on the points x
grad1 = np.gradient(f, x) # df/dx
grad2 = np.gradient(f) / np.gradient(x) # df/di * di/dx = df/dx
</code></pre>
<p>I would have expected that, by the chain rule, <code>grad1=grad2</code>. The <code>i</code> in the comment above is simply a uniform "index". After testing, this equality holds for simple linear functions, but not e.g. <code>x**2</code> as shown above. I'm now wondering if there is a theoretical reason why the chain rule shouldn't hold in general for derivatives estimated by finite differences.</p>
<p>I think the problem lies in the following observation:
<code>np.gradient</code> does not, in general, assume the input coordinates <code>x</code> to be uniform. But I think this expression of the chain rule does, which I suspect is implicit in the call <code>np.gradient(x)</code>. When we call <code>np.gradient(f, x)</code> with nonuniform <code>x</code>, we are really performing an interpolation for each interior point, rather than a true centered-difference...</p>
| <python><numpy><gradient><derivative><finite-difference> | 2023-09-18 01:53:00 | 1 | 3,032 | pretzlstyle |
77,124,086 | 5,162,426 | Is there a way to compute gradients on unstructured coordinates with Numpy? | <p>I have a 3D array of data <code>A</code>, with shape <code>(NX, NY, NZ)</code> in the <code>x</code>, <code>y</code>, and <code>z</code> dimensions, respectively.</p>
<p>I want to find the gradient of <code>A</code> in the <code>y</code> dimension. This can be done easily with NumPy:</p>
<p><code>dAdy = np.gradient(A, Y, axis=1)</code></p>
<p>where <code>Y</code> is a 1D vector of coordinates in the <code>y</code> dimension.</p>
<p>However, this becomes nontrivial if <code>Y</code> is unstructured. That is, every "column" of data at fixed positions <code>(x, z) = (Xi, Zi)</code> has a unique set of <code>y</code> coordinates. For example:</p>
<pre><code>A = np.random.random((10, 10, 10))
X = np.arange(10)
Y = np.sort(np.random.random((10, 10, 10)), axis=1)
Z = np.arange(10)
</code></pre>
<p>The result above is a 3D dataset <code>A</code>, defined on a structured set of <code>X</code> and <code>Z</code> coordinates, while the value of the <code>Y</code> coordinate is unique for every data point (but is of course monotonic in the <code>y</code> dimension). I want to estimate <code>dA/dy</code> via finite differences.</p>
<p>Essentially, I'm trying to take the gradient of many independent columns. Is there a way to vectorize this with NumPy? I tried the following iterative approach, but it's very slow:</p>
<pre><code># A is the 3D dataset
# Y is the 3D dataset with shape matching that of A; gives the y-position of each datapoint in A
NX, NY, NZ = A.shape[0], A.shape[1], A.shape[2]
dA_dy = np.zeros((NX, NY, NZ))
for i in range(NX):
for k in range(NZ):
dA_dy[i, :, k] = np.gradient(A[i,:,k], Y[i,:,k])
</code></pre>
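For what it's worth, the per-column loop can be vectorized by applying the unequal-spacing stencil that the `np.gradient` documentation describes, sliced along `axis=1`. This is a sketch assuming `NY >= 3`, and it mirrors NumPy's default first-order one-sided edges (`edge_order=1`):

```python
import numpy as np

def gradient_axis1(A, Y):
    # Second-order interior stencil for unevenly spaced samples,
    # vectorized over all (x, z) columns at once.
    dA = np.empty_like(A, dtype=float)
    hs = Y[:, 2:, :] - Y[:, 1:-1, :]   # forward spacings
    hd = Y[:, 1:-1, :] - Y[:, :-2, :]  # backward spacings
    dA[:, 1:-1, :] = (hd**2 * A[:, 2:, :]
                      + (hs**2 - hd**2) * A[:, 1:-1, :]
                      - hs**2 * A[:, :-2, :]) / (hs * hd * (hs + hd))
    # First-order one-sided differences at the boundaries.
    dA[:, 0, :] = (A[:, 1, :] - A[:, 0, :]) / (Y[:, 1, :] - Y[:, 0, :])
    dA[:, -1, :] = (A[:, -1, :] - A[:, -2, :]) / (Y[:, -1, :] - Y[:, -2, :])
    return dA
```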
<p>I also thought that I could get smart by implementing the chain rule:</p>
<pre><code>dA_dy = np.gradient(A, axis=1) / np.gradient(Y, axis=1)
</code></pre>
<p>But for the following simple test, this approach does not work:</p>
<pre><code>g = np.array([1, 5, 6, 10]) # an unstructured coordinate
f = g**2 # function value on the points x
grad1 = np.gradient(f, g) # df/dg
grad2 = np.gradient(f) / np.gradient(g) # df/dg?
</code></pre>
<p>I only get <code>grad1=grad2</code> for a few simple linear functions, but not the function represented above. I'm now wondering if there is a theoretical reason why the chain rule shouldn't hold in general for derivatives estimated by finite differences.</p>
| <python><numpy><derivative><differentiation><finite-difference> | 2023-09-18 00:47:31 | 1 | 3,032 | pretzlstyle |
77,123,898 | 3,605,608 | How to provide placeholders for multiple rows in EXCEPT clause? | <p>My goal is to, given a list of id's in Python, find id's not mapped to a row in an SQLite table. I'm trying to achieve this using the <code>EXCEPT</code> operator:</p>
<pre class="lang-sql prettyprint-override"><code>-- if the table currently stores id1 and id3 would only return id2
WITH cte(id) as VALUES ('id1'), ('id2'), ('id3')
SELECT * from cte EXCEPT SELECT id FROM some_table
</code></pre>
<p>I want to specify id's dynamically from a list. I'm able to format strings, hardcoding values:</p>
<pre class="lang-py prettyprint-override"><code># note: nesting same-quote f-strings only parses on Python 3.12+, so join first
values = ",".join(f"('{id}')" for id in ids)
query = (
    "with cte(id) as " +
    f"(values {values}) " +
    "select * from cte except select id from some_table"
)
print(query)
res = cursor.execute(query)
</code></pre>
<p>This is vulnerable to SQL injection. Instead, placeholder syntax is preferred. The <a href="https://docs.python.org/3/library/sqlite3.html#how-to-use-placeholders-to-bind-values-in-sql-queries" rel="nofollow noreferrer">Python sqlite3 documentation</a> shows examples with <code>executemany</code> for <code>INSERT</code> operations, but how do I apply that to a single SELECT+EXCEPT query (which must use <code>execute</code> and not <code>executemany</code>)? Alternatively, is there a better way to filter a list of inputs by those which aren't present in a table? Sample of my problem:</p>
<pre class="lang-py prettyprint-override"><code>import sqlite3
db = sqlite3.connect(":memory:")
cursor = db.cursor()
#
# First create a table of video-id,video-title pairs
#
cursor.execute("CREATE TABLE IF NOT EXISTS videos(id TEXT PRIMARY KEY, title TEXT)")
dummy_data = [
("vid1", "Video 1"),
("vid2", "Video 2"),
("vid3", "Video 3"),
]
# use executemany to insert multiple rows via placeholder VALUES
cursor.executemany("INSERT INTO videos VALUES(?, ?)", dummy_data)
db.commit()
# sanity check that we see the expected videos
res = cursor.execute("SELECT * FROM videos")
print(f"select* result: {res.fetchall()}")
#
# Next, given a set of video ids, find all of the ids not already stored in the DB
#
new_video_ids = ["vid1", "vid2", "vid5"] # vid1 and vid2 already exist in db. only vid5 should be returned
new_video_ids_str = ",".join(f"('{id}')" for id in new_video_ids)
print(new_video_ids_str)
# The following query uses python string formatting and is therefore vulnerable to SQL injection attacks
query = (
"with cte(id) as " +
f"(values {new_video_ids_str}) " +
"select * from cte except select id from videos"
)
print(query)
res = cursor.execute(query)
print(f"filter result: {res.fetchall()}")
# I'd like to use SQLite3 placeholder values but can't figure out the syntax. The following doesn't work.
# it fails since it's trying to bind all of the `new_video_ids` values as a single row rather than multiple rows.
#
# query = (
# "with cte(id) as " +
# "(values (?)) " +
# "select * from cte except select id from videos"
# )
# res = cursor.execute(query, new_video_ids)
# print(f"filter result: {res.fetchall()}")
db.close()
</code></pre>
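One sketch of the placeholder-based variant (hedged: only the *number* of `?` placeholders is built dynamically, never the values themselves, so the binding stays injection-safe; the helper name is made up here):

```python
import sqlite3

def find_missing_ids(cursor, ids):
    # One "(?)" row constructor per id; only the count is interpolated.
    placeholders = ",".join("(?)" for _ in ids)
    query = (
        f"WITH cte(id) AS (VALUES {placeholders}) "
        "SELECT id FROM cte EXCEPT SELECT id FROM videos"
    )
    return [row[0] for row in cursor.execute(query, ids)]

db = sqlite3.connect(":memory:")
cursor = db.cursor()
cursor.execute("CREATE TABLE videos(id TEXT PRIMARY KEY, title TEXT)")
cursor.executemany("INSERT INTO videos VALUES(?, ?)",
                   [("vid1", "Video 1"), ("vid2", "Video 2")])
print(find_missing_ids(cursor, ["vid1", "vid2", "vid5"]))  # -> ['vid5']
db.close()
```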
| <python><sqlite><sqlite3-python> | 2023-09-17 22:59:47 | 1 | 585 | Scott M |
77,123,707 | 13,460,543 | Identify in a series the column with the maximum value for each row of a dataframe | <p>Suppose we have the following dataframe :</p>
<p><strong>Source Dataframe</strong></p>
<pre><code> A B C
0 55 9 96
1 69 86 5
2 30 63 12
3 79 52 31
</code></pre>
<p>I would like to identify in a series the column with the maximum value for each row of a dataframe.
So the expected result would be:</p>
<p><strong>Target Dataframe</strong></p>
<pre><code> MAX
0 C
1 B
2 B
3 A
</code></pre>
<p><strong>Dataframe to start with</strong></p>
<pre><code>import pandas as pd
data = {'A': [55, 69, 30, 79],
'B': [9, 86, 63, 52],
'C': [96, 5, 12, 31]}
df = pd.DataFrame(data)
</code></pre>
<p>Due to many bugs, I do not have time to solve this lovely problem, so I would really appreciate if someone provides me with an efficient way of doing the job.</p>
<p>Thanks</p>
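For what it's worth, one idiomatic approach is `DataFrame.idxmax` along `axis=1` (assuming ties, if any, may resolve to the first column):

```python
import pandas as pd

data = {'A': [55, 69, 30, 79],
        'B': [9, 86, 63, 52],
        'C': [96, 5, 12, 31]}
df = pd.DataFrame(data)

# idxmax(axis=1) returns, per row, the column label of the maximum value.
result = df.idxmax(axis=1).to_frame('MAX')
print(result)
#   MAX
# 0   C
# 1   B
# 2   B
# 3   A
```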
| <python><pandas><dataframe><max> | 2023-09-17 21:39:37 | 0 | 2,303 | Laurent B. |
77,123,653 | 769,449 | Scrapy Spider with dynamically called Spider does not save any output to desired folder | <p>I want to run "___SPIDER_RUNNER.py" by hitting F5 in Visual Studio code. All seems to crawl ok, logging shows that items are being retrieved, but the output JSON file is not saved to folder C:\scrapy\JSON_output.
That folder exists and I have write permissions.</p>
<p>I'm completely stuck as no errors are logged.</p>
<p>I tried different paths in file _singlepage_nonAJAX.py:</p>
<pre><code> 'FEED_URI': 'C:/scrapy/JSON_output/test.json'
'FEED_URI': r'C:\scrapy\JSON_output\test.json'
'FEED_URI': f'C:\\scrapy\\JSON_output\\{self.name}.json'
</code></pre>
<p>I tried removing ITEM_PIPELINES and FEED_EXPORT_FIELDS settings from settings.py</p>
<p>My folder structure is as follows:</p>
<pre><code>- C:\scrapy\my_spiders\___SPIDER_RUNNER.py
- C:\scrapy\my_spiders\__init__.py
- C:\scrapy\my_spiders\spiders\__init__.py
- C:\scrapy\my_spiders\spiders\_singlepage_nonAJAX.py
</code></pre>
<p>All <code>__init__.py</code> files contain no code.</p>
<p><strong>___SPIDER_RUNNER.py</strong></p>
<pre><code>import sys
sys.path.append('C:\\scrapy')
from scrapy.crawler import CrawlerProcess
from my_spiders.spiders._singlepage_nonAJAX import SinglePageNonAJAXSpider
import logging
logging.basicConfig(level=logging.DEBUG)
def run_spider(myname, start_urls, SERP_item, url, itemstatus, okstatus, title):
process = CrawlerProcess({
'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})
process.crawl(SinglePageNonAJAXSpider,
myname=myname,
start_urls=start_urls,
SERP_item=SERP_item,
url=url,
itemstatus=itemstatus,
okstatus=okstatus,
title=title)
process.start()
run_spider("toscrape",
"https://quotes.toscrape.com",
"//div[@class='quote']/span/a[starts-with(@href, '/author/')]",
"./@href",
""''"",
"",
'//span[contains(@class, "author-born-date")]/text()')
</code></pre>
<p><strong>_singlepage_nonAJAX.py</strong></p>
<pre><code>import json
import re
import os
import scrapy
import time
from scrapy.selector import Selector
from scrapy.http import HtmlResponse
from lxml import html
class RentalItem(scrapy.Item):
city = scrapy.Field()
url = scrapy.Field()
class SinglePageNonAJAXSpider(scrapy.Spider):
name = 'whatever'
def __init__(self, myname=None, start_urls=None, SERP_item=None, url=None, itemstatus=None, okstatus=None, title=None, *args, **kwargs):
super(SinglePageNonAJAXSpider, self).__init__(*args, **kwargs)
if myname:
self.name = myname
if start_urls:
self.start_urls = [start_urls] # Assuming only one URL
self.SERP_item = SERP_item
self.url = url
self.itemstatus = itemstatus
self.okstatus = okstatus
self.title = title
# Update 2: update the FEEDS value with the modified 'name'
self.custom_settings['FEEDS'] = {
f'\\scrapy\\JSON_output\\{self.name}.json': {
'format': 'json',
'encoding': 'utf8',
'fields': None,
'indent': 4,
'item_export_kwargs': {
'export_empty_fields': True,
},
},
}
def parse(self, response):
for listing in response.xpath(self.SERP_item):
listing_url = listing.xpath(self.url).get()
yield scrapy.Request(
url=response.urljoin(listing_url),
callback=self.parse_object,
)
def parse_object(self, response):
item = RentalItem()
item['url'] = response.url # get url
item['city'] = 'mycity'
yield item
</code></pre>
<p><strong>pipelines.py</strong></p>
<pre><code>import json
class MyCustomPipeline(object):
def open_spider(self, spider):
self.items = []
def process_item(self, item, spider):
self.items.append(dict(item))
return item
</code></pre>
<p><strong>middlewares.py</strong></p>
<pre><code>from scrapy import signals
from itemadapter import is_item, ItemAdapter
class MySpiderMiddleware:
@classmethod
def from_crawler(cls, crawler):
s = cls()
crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
return s
def process_spider_input(self, response, spider):
return None
def process_spider_output(self, response, result, spider):
for i in result:
yield i
def process_spider_exception(self, response, exception, spider):
pass
def process_start_requests(self, start_requests, spider):
for r in start_requests:
yield r
def spider_opened(self, spider):
spider.logger.info('Spider opened: %s' % spider.name)
class MyDownloaderMiddleware:
@classmethod
def from_crawler(cls, crawler):
s = cls()
crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
return s
def process_request(self, request, spider):
return None
def process_response(self, request, response, spider):
return response
def process_exception(self, request, exception, spider):
pass
def spider_opened(self, spider):
spider.logger.info('Spider opened: %s' % spider.name)
</code></pre>
<p><strong>settings.py</strong></p>
<pre><code>BOT_NAME = 'my_spiders'
SPIDER_MODULES = ['my_spiders.spiders']
NEWSPIDER_MODULE = 'my_spiders.spiders'
ROBOTSTXT_OBEY = False
SPIDER_MIDDLEWARES = {
'scrapy_splash.SplashDeduplicateArgsMiddleware': 100
}
DOWNLOADER_MIDDLEWARES = {
'scrapy_splash.SplashCookiesMiddleware': 723,
'scrapy_splash.SplashMiddleware': 725,
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
'scrapy_useragents.downloadermiddlewares.useragents.UserAgentsMiddleware': 500,
'scrapy_selenium.SeleniumMiddleware': 800
}
from shutil import which
SELENIUM_DRIVER_NAME = 'chrome'
SELENIUM_DRIVER_EXECUTABLE_PATH = which('chromedriver')
SELENIUM_DRIVER_ARGUMENTS=['--headless']
#Configure item pipelines. See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
'my_spiders.pipelines.MyCustomPipeline': 300,
}
FEED_EXPORT_FIELDS = [
'id', 'url', 'city', 'title'
]
SPLASH_URL = 'http://localhost:8050/'
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
</code></pre>
<p><strong>UPDATE 2</strong></p>
<p>I tried both settings.py and custom_settings to apply the suggested setting.
But in both cases it still does not use my desired output file name.</p>
<p>When I set this in <code>settings.py</code> and execute <code>___SPIDER_RUNNER.py</code>:</p>
<pre><code>FEEDS = {
'items.json': {
'format': 'json',
'encoding': 'utf8',
'fields': None,
'indent': 4,
'item_export_kwargs': {
'export_empty_fields': True,
},
},
}
</code></pre>
<p>no output file is stored at all.</p>
<p>When I remove "FEEDS" from settings and add to class <code>SinglePageNonAJAXSpider</code> instead:</p>
<pre><code>custom_settings = {
'FEEDS': {
f'{format}.json': { # using spider name
'format': 'json',
'encoding': 'utf8',
'fields': None,
'indent': 4,
'item_export_kwargs': {
'export_empty_fields': True,
},
},
},
}
</code></pre>
<p>It always stores filename <code>whatever.json</code>, even though from my <code>___SPIDER_RUNNER.py</code> I pass the desired filename "my_filename_variable":</p>
<pre><code>run_spider("my_filename_variable",
"https://quotes.toscrape.com",
"//div[@class='quote']/span/a[starts-with(@href, '/author/')]",
"./@href",
""''"",
"",
'//span[contains(@class, "author-born-date")]/text()')
</code></pre>
<p>I checked your reference page on feeds, but I can't figure out what I need to change.</p>
| <python><scrapy> | 2023-09-17 21:19:28 | 1 | 6,241 | Adam |
77,123,583 | 275,002 | How to create a tweet using V2 API without Tweepy? | <p>I am trying the following code, but it gives a 401 error:</p>
<pre><code>headers = {
"Authorization": f"OAuth oauth_consumer_key={consumer_key},oauth_token={access_token},oauth_signature_method=HMAC-SHA1"
}
# Create data payload for the tweet with attached media
data = {
"text": text,
"media_ids": [media_id],
"tweet_mode": "extended"
}
tweet_response = requests.post(tweet_post_url, headers=headers, data=data)
print(tweet_response.json())
</code></pre>
| <python><twitter><python-3.5> | 2023-09-17 20:56:31 | 1 | 15,089 | Volatil3 |
77,123,435 | 33,404 | Why does Chroma DB crash with "illegal instruction" in this python:3.10-slim container? | <p>I am using <a href="https://docs.trychroma.com/" rel="nofollow noreferrer">Chroma DB</a> (0.4.8) in a Python 3.10 Flask REST API application. The application runs well on local developer machines (including Windows and OS X machines).</p>
<p>I am using the multi-stage <code>Dockerfile</code> below to package the application in an image based on <code>python:3.10-slim</code> (Debian 12 Bookworm). Images are built on <strong>Github Actions</strong> using the <a href="https://github.com/google-github-actions/deploy-cloudrun" rel="nofollow noreferrer"><code>google-github-actions/deploy-cloudrun@v1</code></a> action:</p>
<pre><code>FROM python:3.10-slim as base
ENV PYTHONFAULTHANDLER=1 \
PYTHONHASHSEED=random \
PYTHONUNBUFFERED=1
WORKDIR /app
# -------------------------------------
FROM base as builder
ENV PIP_DEFAULT_TIMEOUT=100 \
PIP_DISABLE_PIP_VERSION_CHECK=1 \
PIP_NO_CACHE_DIR=1 \
POETRY_VERSION=1.6
RUN apt-get update --fix-missing && apt-get install -y --fix-missing build-essential
RUN pip install "poetry==$POETRY_VERSION"
COPY pyproject.toml ./
COPY chat_api ./chat_api
RUN poetry config virtualenvs.in-project true && \
poetry install --only=main --no-root && \
poetry build
# -------------------------------------
FROM base as final
COPY --from=builder /app/.venv ./.venv
COPY --from=builder /app/dist .
COPY docker-entrypoint.sh .
RUN ./.venv/bin/pip install *.whl
RUN ["chmod", "+x", "docker-entrypoint.sh"]
CMD ["./docker-entrypoint.sh"]
</code></pre>
<p>As I am using <a href="https://python-poetry.org/" rel="nofollow noreferrer">Poetry</a> 1.6 to install the Python packages, here are the dependency specifications from my <code>pyproject.toml</code> file:</p>
<pre><code>[tool.poetry.dependencies]
python = "^3.10"
flask = "^2.3.3"
langchain = "^0.0.279"
flask-api = "^3.1"
openai = "0.27.8"
chromadb = "0.4.8"
tiktoken = "^0.4.0"
flask-sqlalchemy = "^3.0.5"
sqlalchemy = "^2.0.20"
pymysql = "^1.1.0"
google-cloud-logging = "^3.6.0"
flask-httpauth = "^4.8.0"
flask-cors = "^4.0.0"
gunicorn = "^21.2.0"
flask-migrate = "^4.0.4"
cryptography = "^41.0.3"
</code></pre>
<p>When I run the image in Google Cloud Run or on a dev machine, the application loads successfully. However, as soon as a call is made to an endpoint that imports <code>chromadb</code>, the process crashes with this traceback:</p>
<pre><code>[ERROR] Worker (pid:3) was sent SIGILL!
Uncaught signal: 4, pid=3, tid=3, fault_addr=3.
Extension modules: google._upb._message, grpc._cython.cygrpc, charset_normalizer.md, _cffi_backend, markupsafe._speedups, sqlalchemy.cyextension.collections, sqlalchemy.cyextension.immutabledict, sqlalchemy.cyextension.processors, sqlalchemy.cyextension.resultproxy, sqlalchemy.cyextension.util, greenlet._greenlet, yaml._yaml, pydantic.typing, pydantic.errors, pydantic.version, pydantic.utils, pydantic.class_validators, pydantic.config, pydantic.color, pydantic.datetime_parse, pydantic.validators, pydantic.networks, pydantic.types, pydantic.json, pydantic.error_wrappers, pydantic.fields, pydantic.parse, pydantic.schema, pydantic.main, pydantic.dataclasses, pydantic.annotated_types, pydantic.decorator, pydantic.env_settings, pydantic.tools, pydantic, numpy.core._multiarray_umath, numpy.core._multiarray_tests, numpy.linalg._umath_linalg, numpy.fft._pocketfft_internal, numpy.random._common, numpy.random.bit_generator, numpy.random._bounded_integers, numpy.random._mt19937, numpy.random.mtrand, numpy.random._philox, numpy.random._pcg64, numpy.random._sfc64, numpy.random._generator, multidict._multidict, yarl._quoting_c, aiohttp._helpers, aiohttp._http_writer, aiohttp._http_parser, aiohttp._websocket, frozenlist._frozenlist, numexpr.interpreter (total: 56)
File "/app/.venv/bin/gunicorn", line 8 in <module>
File "/app/.venv/lib/python3.10/site-packages/gunicorn/app/wsgiapp.py", line 67 in run
File "/app/.venv/lib/python3.10/site-packages/gunicorn/app/base.py", line 236 in run
File "/app/.venv/lib/python3.10/site-packages/gunicorn/app/base.py", line 72 in run
File "/app/.venv/lib/python3.10/site-packages/gunicorn/arbiter.py", line 202 in run
File "/app/.venv/lib/python3.10/site-packages/gunicorn/arbiter.py", line 571 in manage_workers
File "/app/.venv/lib/python3.10/site-packages/gunicorn/arbiter.py", line 642 in spawn_workers
File "/app/.venv/lib/python3.10/site-packages/gunicorn/arbiter.py", line 609 in spawn_worker
File "/app/.venv/lib/python3.10/site-packages/gunicorn/workers/base.py", line 142 in init_process
File "/app/.venv/lib/python3.10/site-packages/gunicorn/workers/sync.py", line 126 in run
File "/app/.venv/lib/python3.10/site-packages/gunicorn/workers/sync.py", line 70 in run_for_one
File "/app/.venv/lib/python3.10/site-packages/gunicorn/workers/sync.py", line 32 in accept
File "/app/.venv/lib/python3.10/site-packages/gunicorn/workers/sync.py", line 135 in handle
File "/app/.venv/lib/python3.10/site-packages/gunicorn/workers/sync.py", line 178 in handle_request
File "/app/.venv/lib/python3.10/site-packages/flask/app.py", line 2213 in __call__
File "/app/.venv/lib/python3.10/site-packages/flask/app.py", line 2190 in wsgi_app
File "/app/.venv/lib/python3.10/site-packages/flask/app.py", line 1484 in full_dispatch_request
File "/app/.venv/lib/python3.10/site-packages/flask/app.py", line 1469 in dispatch_request
File "/app/.venv/lib/python3.10/site-packages/flask_httpauth.py", line 174 in decorated
File "/app/.venv/lib/python3.10/site-packages/redacted/routes.py", line 39 in messages_post
File "/app/.venv/lib/python3.10/site-packages/redacted/logic.py", line 25 in __init__
File "/app/.venv/lib/python3.10/site-packages/redacted/logic.py", line 40 in _load_vector_store
File "/app/.venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 119 in __init__
File "/app/.venv/lib/python3.10/site-packages/chromadb/__init__.py", line 143 in Client
File "/app/.venv/lib/python3.10/site-packages/chromadb/config.py", line 247 in instance
File "/app/.venv/lib/python3.10/site-packages/chromadb/api/segment.py", line 82 in __init__
File "/app/.venv/lib/python3.10/site-packages/chromadb/config.py", line 188 in require
File "/app/.venv/lib/python3.10/site-packages/chromadb/config.py", line 244 in instance
File "/app/.venv/lib/python3.10/site-packages/chromadb/config.py", line 293 in get_class
File "/usr/local/lib/python3.10/importlib/__init__.py", line 126 in import_module
File "<frozen importlib._bootstrap>", line 1050 in _gcd_import
File "<frozen importlib._bootstrap>", line 1027 in _find_and_load
File "<frozen importlib._bootstrap>", line 1006 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688 in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883 in exec_module
File "<frozen importlib._bootstrap>", line 241 in _call_with_frames_removed
File "/app/.venv/lib/python3.10/site-packages/chromadb/segment/impl/manager/local.py", line 13 in <module>
File "<frozen importlib._bootstrap>", line 1027 in _find_and_load
File "<frozen importlib._bootstrap>", line 1006 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688 in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883 in exec_module
File "<frozen importlib._bootstrap>", line 241 in _call_with_frames_removed
File "/app/.venv/lib/python3.10/site-packages/chromadb/segment/impl/vector/local_persistent_hnsw.py", line 9 in <module>
File "<frozen importlib._bootstrap>", line 1027 in _find_and_load
File "<frozen importlib._bootstrap>", line 1006 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688 in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883 in exec_module
File "<frozen importlib._bootstrap>", line 241 in _call_with_frames_removed
File "/app/.venv/lib/python3.10/site-packages/chromadb/segment/impl/vector/local_hnsw.py", line 21 in <module>
File "<frozen importlib._bootstrap>", line 1027 in _find_and_load
File "<frozen importlib._bootstrap>", line 1006 in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 674 in _load_unlocked
File "<frozen importlib._bootstrap>", line 571 in module_from_spec
File "<frozen importlib._bootstrap_external>", line 1176 in create_module
File "<frozen importlib._bootstrap>", line 241 in _call_with_frames_removed
Current thread 0x00003ef4e1198b80 (most recent call first):
Fatal Python error: Illegal instruction
</code></pre>
<p>The last coherent (to me) line in the traceback points to line 21 in <code>chromadb/segment/impl/vector/local_hnsw.py</code> which only contains <code>import hnswlib</code>. I deduce that this is a failure in the installation of the <a href="https://pypi.org/project/chroma-hnswlib/" rel="nofollow noreferrer"><code>chroma-hnswlib</code></a> package.</p>
<p>In the image's virtual environment <code>.venv/lib/python3.10/site-packages</code> folder, I see the package as the folder <code>chroma_hnswlib-0.7.2.dist-info</code> and an adjacent file called <code>hnswlib.cpython-310-x86_64-linux-gnu.so</code>.</p>
<p><strong>My question is - Why is my image failing to correctly install <code>chroma-hnswlib</code> and how can I fix this?</strong></p>
<p><strong>UPDATE:</strong> I have modified my <code>Dockerfile</code> so that it now uses a single stage. This means the <code>build-essential</code> packages are now present in the resulting image. When I run the new image on my Windows machine (AMD Ryzen 7), the crash is no longer present. When I run the image in Google Cloud Run, the crash is reproduced.</p>
<p><strong>UPDATE 2:</strong> Up until now the images I've used were built in Github Actions. I've made the experiment of building an image on my dev machine and deploying directly to Cloud Run - It works. I'm now investigating which type of CPU GH Actions is running the build on.</p>
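<p>A plausible explanation for the build-machine dependence noted in the updates: <code>chroma-hnswlib</code> compiles its native extension with <code>-march=native</code> by default, so a wheel built on a runner whose CPU has newer SIMD extensions (e.g. AVX2) dies with SIGILL on a host CPU that lacks them. Setting <code>HNSWLIB_NO_NATIVE=1</code> before the install is the commonly suggested workaround; treat this as an assumption to verify against the chroma-hnswlib docs:</p>

```dockerfile
# builder stage: disable -march=native so the extension runs on any x86-64 host
ENV HNSWLIB_NO_NATIVE=1
RUN poetry config virtualenvs.in-project true && \
    poetry install --only=main --no-root && \
    poetry build
```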
| <python><docker><dockerfile><python-poetry><chromadb> | 2023-09-17 20:09:49 | 2 | 16,911 | urig |
77,123,420 | 1,608,765 | Unequally spaced subplots | <p>I have 3 pairs of images that I would like to display paired together (see the first figure, created with a simple <code>plt.subplots(1, 6)</code> grid, with axes turned off for the even-numbered subplots).</p>
<p>Since the figure contains 3 pairs of images, I would like the members of each pair to sit together, with smaller spacing between them (see the second figure, which I edited in Photoshop).</p>
<p>What I have:
<a href="https://i.sstatic.net/efzdQ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/efzdQ.jpg" alt="enter image description here" /></a>
What I want it to be:
<a href="https://i.sstatic.net/MreUc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MreUc.png" alt="enter image description here" /></a></p>
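<p>One way to get the paired layout without Photoshop is a <code>GridSpec</code> with narrow spacer columns between the pairs; the <code>width_ratios</code> and figure size below are illustrative:</p>

```python
import matplotlib
matplotlib.use("Agg")            # headless backend; drop this when showing on screen
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure(figsize=(9, 2))
# 8 columns: image, image, spacer, image, image, spacer, image, image
gs = fig.add_gridspec(1, 8, width_ratios=[1, 1, 0.3, 1, 1, 0.3, 1, 1],
                      wspace=0.05)
img = np.random.rand(10, 10)
for col in (0, 1, 3, 4, 6, 7):   # skip the spacer columns
    ax = fig.add_subplot(gs[0, col])
    ax.imshow(img)
    ax.axis("off")
fig.savefig("pairs.png")
```

<p>The spacer ratio (<code>0.3</code>) controls the gap between pairs, while <code>wspace</code> controls the gap within a pair.</p>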
| <python><matplotlib><subplot> | 2023-09-17 20:05:58 | 0 | 2,723 | Coolcrab |
77,123,249 | 17,157,890 | Converting Keras h5 to tfjs | <p>I am a total noob in Python and, with the help of YouTube tutorials, made a Keras h5 model with 97% accuracy.</p>
<p>Now I want to convert it to a tfjs model to use it in my web app.
For my first attempt:</p>
<pre><code>!tensorflowjs_converter \
    --input_format=keras \
    h5jsmodelpath outputpath
</code></pre>
<p>The following error popped up after converting it into tfjs and loading it via JS:</p>
<ol>
<li>The layer is defined in Python, in which case it needs to be ported to TensorFlow.js or your JavaScript code.</li>
<li>The custom layer is defined in JavaScript, but is not registered properly with <code>tf.serialization.registerClass()</code>.</li>
</ol>
<p>I thought the way I used the tfjs converter was wrong, so I ran <code>tensorflowjs_converter</code> again with the following code</p>
<pre><code>!tensorflowjs_converter \
--input_format=keras \
--output_format=tfjs_graph_model \
--output_node_names=dense_1/Identity \
/content/WASTE8200X2359736.h5 \
.
</code></pre>
<p>to obtain a new tfjs model, but this time loading it simply returns "Model loading failed:" and then the entire model.json.</p>
<p>my js loading model code</p>
<pre><code>const tf = require("@tensorflow/tfjs-node");
const handler = tf.io.fileSystem("tfjs_model\\content\\tfjs_model\\model.json");
const loadMyModel = async () => {
try {
model = await tf.loadLayersModel(handler);
console.log("Loaded model");
// Get information about the layers
model.layers.forEach((layer) => {
console.log(`Layer Name: ${layer.name}`);
console.log(`Input Shape: ${JSON.stringify(layer.inputShape)}`);
console.log(`Output Shape: ${JSON.stringify(layer.outputShape)}`);
console.log(`Number of Trainable Parameters: ${layer.countParams()}`);
console.log("---------------------------");
});
} catch (error) {
console.log("Model loading failed:", error);
}
};
loadMyModel();
</code></pre>
<p>Google Drive links to my Keras model and both tfjs attempts: <a href="https://drive.google.com/drive/folders/1O_u3GI5uWDGorpvo1PgYMwcexFKerCzz?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/drive/folders/1O_u3GI5uWDGorpvo1PgYMwcexFKerCzz?usp=sharing</a></p>
<p>Python notebook of the original h5 model: <a href="https://github.com/Megahedron69/wasteSegregationmodel" rel="nofollow noreferrer">https://github.com/Megahedron69/wasteSegregationmodel</a>.
Can someone help me convert this?</p>
| <python><tensorflow><keras><tensorflow.js> | 2023-09-17 19:08:34 | 0 | 391 | Kartic Joshi |
77,122,892 | 13,174,189 | How to replace specific quotation marks with apostrophes? | <p>My AI model generates this string:</p>
<pre><code>'{"name": "hammer 1.5"", "brand": "Eurosteel"}'
</code></pre>
<p>I want to turn it into a dict using <code>json.loads</code>. But that is not possible because of the unescaped <code>"</code> in <code>"hammer 1.5""</code>, the value of the <code>name</code> key. So I want to turn my string into this string:</p>
<pre><code>'{'name': 'hammer 1.5"', 'brand': 'Eurosteel'}'
</code></pre>
<p>How could I do that? I want to do it with a function, to be able to apply it to all similar cases.</p>
<p>P.S.
Another option is to add \ before ":</p>
<pre><code>'{"name": "hammer 1.5\"", "brand": "Eurosteel"}'
</code></pre>
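<p>The "P.S." route (escaping the stray quote so the string becomes valid JSON) is the more robust of the two, since single-quoted keys are not valid JSON at all. A regex heuristic for that, under the assumption that literal quotes never sit directly next to JSON delimiter characters:</p>

```python
import json
import re

def escape_inner_quotes(s: str) -> str:
    """Backslash-escape quotes that look literal rather than structural.

    Heuristic: a quote preceded by { [ , : whitespace or another quote,
    or followed (after optional whitespace) by , : ] or }, is treated as
    a JSON delimiter; anything else gets escaped. Not bulletproof: a
    literal quote adjacent to those characters will fool it.
    """
    return re.sub(r'(?<![{\[,:\s"])"(?!\s*[,:\]}])', r'\\"', s)

raw = '{"name": "hammer 1.5"", "brand": "Eurosteel"}'
print(json.loads(escape_inner_quotes(raw)))
# → {'name': 'hammer 1.5"', 'brand': 'Eurosteel'}
```

<p>If the strings can contain quotes in arbitrary positions (e.g. <code>say "hi"</code>), a character-by-character parser that tracks whether it is inside a string would be safer than a single regex.</p>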
| <python><json><python-3.x><string> | 2023-09-17 17:16:58 | 2 | 1,199 | french_fries |
77,122,799 | 11,665,178 | gcloud crashed (AttributeError): 'str' object has no attribute 'get' cleanup policy | <p>I am trying to run the example command <code>gcloud artifacts repositories set-cleanup-policies my-repo --policy=policy.json</code> mentioned in the <a href="https://cloud.google.com/sdk/gcloud/reference/artifacts/repositories/set-cleanup-policies" rel="nofollow noreferrer">documentation</a>, but I am getting the following output:</p>
<pre><code>ERROR: gcloud crashed (AttributeError): 'str' object has no attribute 'get'
If you would like to report this issue, please run the following command:
gcloud feedback
To check gcloud for common problems, please run the following command:
gcloud info --run-diagnostics
</code></pre>
<p>I am using the latest version of the CLI as i have downloaded it from <a href="https://cloud.google.com/sdk/docs/install-sdk#mac" rel="nofollow noreferrer">here</a> just now.</p>
<p>Thanks in advance</p>
<p>EDIT:</p>
<p>My policy is:</p>
<pre><code>{
"name": "keep-latest-version",
"action": {"type": "Keep"},
"mostRecentVersions": {
"packageNamePrefixes": ["users"],
"keepCount": 1
}
}
</code></pre>
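<p>This particular crash (<code>'str' object has no attribute 'get'</code>) is commonly reported when the policy file contains a single JSON object rather than a JSON array of policy objects: gcloud iterates the top level and calls <code>.get()</code> on each element, and iterating a dict yields its string keys. If that is the cause here, wrapping the policy in a list may help (an assumption to verify against the cleanup-policies docs):</p>

```json
[
  {
    "name": "keep-latest-version",
    "action": {"type": "Keep"},
    "mostRecentVersions": {
      "packageNamePrefixes": ["users"],
      "keepCount": 1
    }
  }
]
```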
| <python><google-cloud-platform><gcloud> | 2023-09-17 16:48:04 | 1 | 2,975 | Tom3652 |
77,122,703 | 16,611,809 | How to delete/unload all modules mentioned in a list? | <p>I want to unload all submodules that belong to a specific module, because it conflicts with another module. Here's what I do:</p>
<pre><code>mainmodule_submodules = [key for key in sys.modules if key.startswith('mainmodule')]
for mainmodule_submodule in mainmodule_submodules:
del mainmodule_submodule # deletes the variable mainmodule_submodule instead of the submodule that's stored in it
del sys.modules[mainmodule_submodule]
</code></pre>
<p>The problem is that this deletes the variable <code>mainmodule_submodule</code> that's created by the for loop instead of the module whose name is stored as the value of <code>mainmodule_submodule</code>. How do I do this?</p>
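<p>A sketch of the fix: only the <code>del sys.modules[name]</code> line is needed, since <code>del name</code> merely unbinds the loop variable. Demonstrated here on the stdlib <code>json</code> package:</p>

```python
import sys

def unload_package(prefix: str) -> None:
    # Snapshot the names first: mutating sys.modules while iterating it is
    # unsafe, and `del name` would only unbind the loop variable.
    doomed = [m for m in sys.modules if m == prefix or m.startswith(prefix + ".")]
    for name in doomed:
        del sys.modules[name]  # drop the cached module object itself

import json.decoder  # demo: load a package that has submodules
unload_package("json")
print("json" in sys.modules, "json.decoder" in sys.modules)  # → False False
```

<p>Note that any names already bound elsewhere (e.g. <code>from mainmodule import thing</code>) keep pointing at the old module objects; a subsequent <code>import</code> re-executes the package from scratch.</p>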
| <python><del> | 2023-09-17 16:19:16 | 1 | 627 | gernophil |
77,122,604 | 18,904,265 | Use type hints from pydantic model for if/else logic | <p>I defined a pydantic model something like this:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Literal
from pydantic import BaseModel
class Model(BaseModel):
title: str | None
number: int | None
items: list[int] | None
choice: Literal["one", "two"] | None
</code></pre>
<p>Now I want to do something using the type hints of this model. How would I do this? I tried the following, which doesn't throw an error, but seemingly the <code>isinstance</code> checks always return False.</p>
<pre class="lang-py prettyprint-override"><code>from typing import get_type_hints, get_origin, Literal
for k, v in get_type_hints(Model).items():
if isinstance(v, str):
...
if isinstance(v, int):
...
if get_origin(v) == Literal:
...
</code></pre>
<p><code>get_type_hints</code> correctly returns e.g. <code>str | None</code>. @MisterMiyagi pointed out that <code>isinstance</code> is not the right comparison here, but for me <code>if v is str</code> also didn't work - what am I missing here?</p>
| <python><python-typing><pydantic> | 2023-09-17 15:55:09 | 1 | 465 | Jan |
77,122,568 | 5,222,301 | How to split one JSON file into two? | <p>I have the following fixture from a Django project:</p>
<pre><code>[
{
"model": "music",
"pk": 1,
"fields": {
"attributed_to": false,
"creator": null,
"name": "Piano Trio No. 1",
"name_en": "Piano Trio No. 1",
"name_de": "Trios für Pianoforte, Violine und Violoncello, Nr. 1",
"dedicated_to": "Prince Karl Lichnowsky",
"piece_type": "Trio",
"category": "Chamber Music",
"date_start": "1792-01-01",
"date_completed": "1794-01-01",
"key": "e-flat major"
}
},
{
"model": "music",
"pk": 2,
"fields": {
"attributed_to": false,
"creator": null,
"name": "Piano Trio No. 2",
"name_en": "Piano Trio No. 2",
"name_de": "Trios für Pianoforte, Violine und Violoncello, Nr. 2",
"dedicated_to": "Prince Karl Lichnowsky",
"piece_type": "Trio",
"category": "Chamber Music",
"date_start": "1792-01-01",
"date_completed": "1794-01-01",
"key": "G major"
}
},
{
"model": "music",
"pk": 3,
"fields": {
"attributed_to": false,
"creator": null,
"name": "Piano Trio No. 3",
"name_en": "Piano Trio No. 3",
"name_de": "Trios für Pianoforte, Violine und Violoncello, Nr. 3",
"dedicated_to": "Prince Karl Lichnowsky",
"piece_type": "Trio",
"category": "Chamber Music",
"date_start": "1792-01-01",
"date_completed": "1794-01-01",
"key": "c minor"
}
}
]
</code></pre>
<p>Due to a restructure of the app, I need to spilt/sort various columns of this fixture/json into two separate fixture/jsons. The two new fixture/json files should look like this:</p>
<p>Fixture #1</p>
<pre><code>[
{
"model": "music",
"pk": 1,
"fields": {
"creator": null,
"name": "Piano Trio No. 1",
"name_en": "Piano Trio No. 1",
"name_de": "Trios für Pianoforte, Violine und Violoncello, Nr. 1",
"key": "e-flat major"
}
},
{
"model": "music",
"pk": 2,
"fields": {
"creator": null,
"name": "Piano Trio No. 2",
"name_en": "Piano Trio No. 2",
"name_de": "Trios für Pianoforte, Violine und Violoncello, Nr. 2",
"key": "G major"
}
},
{
"model": "music",
"pk": 3,
"fields": {
"creator": null,
"name": "Piano Trio No. 3",
"name_en": "Piano Trio No. 3",
"name_de": "Trios für Pianoforte, Violine und Violoncello, Nr. 3",
"key": "c minor"
}
}
]
</code></pre>
<p>Fixture #2</p>
<pre><code>[
{
"model": "meta",
"pk": 1,
"fields": {
"attributed_to": false,
"dedicated_to": "Prince Karl Lichnowsky",
"piece_type": "Trio",
"category": "Chamber Music",
"date_start": "1792-01-01",
"date_completed": "1794-01-01"
}
},
{
"model": "meta",
"pk": 2,
"fields": {
"attributed_to": false,
"dedicated_to": "Prince Karl Lichnowsky",
"piece_type": "Trio",
"category": "Chamber Music",
"date_start": "1792-01-01",
"date_completed": "1794-01-01"
}
},
{
"model": "meta",
"pk": 3,
"fields": {
"attributed_to": false,
"dedicated_to": "Prince Karl Lichnowsky",
"piece_type": "Trio",
"category": "Chamber Music",
"date_start": "1792-01-01",
"date_completed": "1794-01-01"
}
}
]
</code></pre>
<p>I was wondering if there is an easy way with Python, either through string manipulation or the <code>json</code> library, to take the source file and generate these two new fixtures.</p>
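<p>A minimal sketch of that split using the <code>json</code> library; the key grouping below mirrors the two target fixtures, so adjust <code>MUSIC_KEYS</code> as needed:</p>

```python
import json

MUSIC_KEYS = {"creator", "name", "name_en", "name_de", "key"}

def split_fixture(records):
    music, meta = [], []
    for rec in records:
        fields = rec["fields"]
        music.append({"model": "music", "pk": rec["pk"],
                      "fields": {k: v for k, v in fields.items() if k in MUSIC_KEYS}})
        meta.append({"model": "meta", "pk": rec["pk"],
                     "fields": {k: v for k, v in fields.items() if k not in MUSIC_KEYS}})
    return music, meta

# In practice: with open("source.json") as f: records = json.load(f)
records = [{"model": "music", "pk": 1,
            "fields": {"creator": None, "name": "Piano Trio No. 1",
                       "attributed_to": False, "key": "e-flat major"}}]
music, meta = split_fixture(records)
print(json.dumps(meta, indent=2))
```

<p>Write each list out with <code>json.dump(music, f, indent=4, ensure_ascii=False)</code> so the German titles keep their umlauts.</p>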
<p>Cheers.</p>
| <python><json><django><django-fixtures> | 2023-09-17 15:49:12 | 1 | 1,200 | H C |
77,122,565 | 6,399,645 | Run terminal command from native terminal rather than stripped down subprocess.run | <p>I have an alias <code>test</code> in my .bashrc file. It works if I call it from my standard terminal.</p>
<p>When I run <code>subprocess.run("test",shell=True)</code> it gives error: <code>/bin/sh: 1: test: not found</code>.</p>
<p>How do I run a shell command <strong>exactly</strong> as if I just opened a terminal and executed it in there, and get the output of the command that it would have printed in my terminal?</p>
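<p>One approach, assuming the alias is defined in <code>~/.bashrc</code>: aliases only exist in interactive shells, so launch bash with <code>-i</code> so it reads that file before running the command string:</p>

```python
import subprocess

def run_in_interactive_shell(cmd: str) -> str:
    # bash -i makes the shell interactive, so it sources ~/.bashrc
    # (where the alias lives) before running the -c command string.
    proc = subprocess.run(["bash", "-i", "-c", cmd],
                          capture_output=True, text=True)
    return proc.stdout

print(run_in_interactive_shell("echo hello"), end="")  # → hello
```

<p>With this, <code>run_in_interactive_shell("test")</code> resolves the <code>.bashrc</code> alias instead of falling back to <code>/bin/sh</code>. Expect some harmless job-control warnings on stderr when no terminal is attached.</p>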
| <python><linux><shell> | 2023-09-17 15:48:48 | 2 | 434 | user56834 |
77,122,116 | 952,708 | Extracting attributes from an XLIFF file using Python | <p>I am using Python to read an XML-based file, specifically the SDLXLIFF variant of an XLIFF file generated by computer-aided translation software. Such files typically contain a copy of the source file, followed by the body, which contains translation units, which usually contain "source" and "target" text. Pairs of source and target text are generally referred to as "segments". (Sample SDLXLIFF document below. This has only 3 segments, but there could be many thousands.)</p>
<p>The expected output is a dict of segments like <code>{1: ["人口は江戸末期まで概ね3000万人台で安定していたが。","At the end of the Edo period the population was stable at roughly 30 million people.","true"]}</code>.</p>
<p>For each member of the dict the key is the segment <code>id</code> attribute from the <code>segs-def</code> part of the file.</p>
<p>The value is a three-element list containing the source text from <code><seg-source></code> that has a <code>mid</code> value matching the segment id, and the target text from <code><target></code> that has a <code>mid</code> value matching the segment id, and the <code>locked</code> attribute from the <code>segs-def</code> part of the file.</p>
<p>It seems to me that it should be possible to:</p>
<ol>
<li>Iterate through the segments in <code>segs-def</code></li>
<li>Get the <code>id</code> attribute and <code>locked</code> attribute</li>
<li>Search for the source in <code><seg-source></code> with an <code>mid</code> that matches <code>id</code> and get the source text</li>
<li>Search for the target in <code><target></code> with an <code>mid</code> that matches <code>id</code> and get the target text</li>
<li>Store in the dict a list containing source text, target text and locked status as a value, using <code>id</code> as the key</li>
</ol>
<p><a href="https://i.sstatic.net/OI9eR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OI9eR.png" alt="What to extract" /></a></p>
<p>My problems are:
a) I have not succeeded in iterating through each element in <code>segs-def</code> and extracting the <code>id</code> and <code>locked</code> attributes
b) Once I have the <code>id</code>, I do not know how to search/filter the element to find the one with the matching <code>mid</code> (for a segment id of 1, that would be <code><mrk mtype="seg" mid="1"></code>)</p>
<p>So far all my code does is extract the source and target text as follows:</p>
<pre><code>from lxml import etree
my_file = "example.sdlxliff"
f_xliff = open(my_file, encoding='utf-8', mode='r')
xliff_input = ''.join(f_xliff.readlines())
tree = etree.fromstring(xliff_input)
ns_map = dict()
ns_map['x'] = tree.nsmap[None]
for source, target in zip(tree.xpath('//x:seg-source//x:mrk', namespaces=ns_map), tree.xpath('//x:target//x:mrk', namespaces=ns_map)):
print(source.text + " --- " + target.text + "\n")
</code></pre>
<p>The <code>seg id</code> and <code>locked</code> status are stored in a separate part of the file that looks like this:</p>
<pre><code><sdl:seg-defs>
<sdl:seg id="1" locked="true" conf="Translated" origin="interactive">
</code></pre>
<p>What are effective and preferably pythonic ways of extracting the segment <code>id</code> and <code>locked</code> attributes from this document so that I can build the dict described above, with the <code>id</code> as the key for each segment and <code>locked</code> stored in a list with the corresponding source and target text as the value?</p>
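<p>As a dependency-light sketch of steps 1-5 on a trimmed-down document (stdlib <code>ElementTree</code> supports the <code>[@attr='value']</code> predicates needed here; the same XPath expressions work with lxml):</p>

```python
import xml.etree.ElementTree as ET

SAMPLE = """<xliff xmlns="urn:oasis:names:tc:xliff:document:1.2"
  xmlns:sdl="http://sdl.com/FileTypes/SdlXliff/1.0">
 <file><body><trans-unit>
  <seg-source><mrk mtype="seg" mid="1">source text</mrk></seg-source>
  <target><mrk mtype="seg" mid="1">target text</mrk></target>
  <sdl:seg-defs><sdl:seg id="1" locked="true"/></sdl:seg-defs>
 </trans-unit></body></file></xliff>"""

NS = {"x": "urn:oasis:names:tc:xliff:document:1.2",
      "sdl": "http://sdl.com/FileTypes/SdlXliff/1.0"}

tree = ET.fromstring(SAMPLE)
segments = {}
for seg in tree.iterfind(".//sdl:seg", NS):
    seg_id = seg.get("id")
    locked = seg.get("locked", "false")   # attribute is absent when not locked
    src = tree.find(f".//x:seg-source/x:mrk[@mid='{seg_id}']", NS)
    tgt = tree.find(f".//x:target/x:mrk[@mid='{seg_id}']", NS)
    segments[seg_id] = [src.text, tgt.text, locked]

print(segments)  # → {'1': ['source text', 'target text', 'true']}
```

<p>The keys come back as strings; use <code>int(seg_id)</code> if integer keys are wanted. In a full file, scope the <code>find</code> calls to each <code>trans-unit</code> rather than the whole tree so that <code>mid</code> values are matched within the right unit.</p>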
<p>Sample SDLXLIFF file:</p>
<pre><code><?xml version="1.0" encoding="utf-8"?>
<xliff xmlns:sdl="http://sdl.com/FileTypes/SdlXliff/1.0"
xmlns="urn:oasis:names:tc:xliff:document:1.2" version="1.2" sdl:version="1.0">
<file original="C:\Users\abc\Documents\Studio 2019\Projects\DropFiles\japan.txt" datatype="x-sdlfilterframework2" source-language="ja-JP" target-language="en-US">
<header>
<file-info xmlns="http://sdl.com/FileTypes/SdlXliff/1.0">
<value key="SDL:FileId">02621408-34d4-4154-9dd7-7b6998ebe368</value>
<value key="SDL:CreationDate">09/16/2023 20:30:44</value>
<value key="SDL:OriginalFilePath">C:\Users\abc\Documents\Studio 2019\Projects\DropFiles\japan.txt</value>
<value key="SDL:OriginalEncoding">utf-8</value>
<value key="SDL:AutoClonedFlagSupported">True</value>
<value key="HasUtf8Bom">False</value>
<value key="LineBreakType">
</value>
<value key="ParagraphTextDirections"></value>
<sniff-info>
<detected-encoding detection-level="Likely" encoding="utf-8"/>
<detected-source-lang detection-level="Guess" lang="ja-JP"/>
<props>
<value key="HasUtf8Bom">False</value>
<value key="LineBreakType">
</value>
</props>
</sniff-info>
</file-info>
<sdl:filetype-info>
<sdl:filetype-id>Plain Text v 1.0.0.0</sdl:filetype-id>
</sdl:filetype-info>
<tag-defs xmlns="http://sdl.com/FileTypes/SdlXliff/1.0">
<tag id="0">
<st name="^">^</st>
</tag>
<tag id="1">
<st name="$">$</st>
</tag>
</tag-defs>
</header>
<body>
<trans-unit translate="no" id="a8a4c497-6cd0-4b42-b87d-9f5bc8cd545e">
<source>
<x id="0"/>
</source>
</trans-unit>
<trans-unit id="ab72d223-8a2a-43b0-b503-af65b7d27de2">
<source>人口は江戸末期まで概ね3000万人台で安定していたが。明治以降は人口急増期に入り、1967年に初めて1億人を突破した。その後出生率の低下に伴い2008年にピークを迎え、人口減少期が始まった。</source>
<seg-source>
<mrk mtype="seg" mid="1">人口は江戸末期まで概ね3000万人台で安定していたが。</mrk>
<mrk mtype="seg" mid="2">明治以降は人口急増期に入り、1967年に初めて1億人を突破した。</mrk>
<mrk mtype="seg" mid="3">その後出生率の低下に伴い2008年にピークを迎え、人口減少期が始まった。</mrk>
</seg-source>
<target>
<mrk mtype="seg" mid="1">At the end of the Edo period the population was stable at roughly 30 million people.</mrk>
<mrk mtype="seg" mid="2">The population began growing rapidly in the Meiji Era and thereafter, exceeding 100 million people for the first time in 1967.</mrk>
<mrk mtype="seg" mid="3">Subsequently the birthrate began to fall, and after peaking in 2008 the population began an era decline.</mrk>
</target>
<sdl:seg-defs>
<sdl:seg id="1" locked="true" conf="Translated" origin="interactive">
<sdl:prev-origin origin="interactive">
<sdl:value key="SegmentIdentityHash">zb5f5d0tJBp6ZfAxFmVvh26SM4E=</sdl:value>
<sdl:value key="created_by">STONEPC\abc</sdl:value>
<sdl:value key="created_on">09/16/2023 19:31:48</sdl:value>
<sdl:value key="last_modified_by">STONEPC\abc</sdl:value>
<sdl:value key="modified_on">09/16/2023 19:31:48</sdl:value>
<sdl:value key="SDL:OriginalTranslationHash">1069896568</sdl:value>
</sdl:prev-origin>
<sdl:value key="SegmentIdentityHash">zb5f5d0tJBp6ZfAxFmVvh26SM4E=</sdl:value>
<sdl:value key="created_by">STONEPC\abc</sdl:value>
<sdl:value key="created_on">09/16/2023 19:31:48</sdl:value>
<sdl:value key="last_modified_by">STONEPC\abc</sdl:value>
<sdl:value key="modified_on">09/16/2023 19:31:48</sdl:value>
<sdl:value key="SDL:OriginalTranslationHash">1069896568</sdl:value>
</sdl:seg>
<sdl:seg id="2" conf="Translated" origin="interactive">
<sdl:value key="SegmentIdentityHash">j8MTFYhJndu21g6nUiW8N28QU/k=</sdl:value>
<sdl:value key="created_by">STONEPC\abc</sdl:value>
<sdl:value key="created_on">09/16/2023 19:31:56</sdl:value>
<sdl:value key="last_modified_by">STONEPC\abc</sdl:value>
<sdl:value key="modified_on">09/16/2023 19:31:56</sdl:value>
<sdl:value key="SDL:OriginalTranslationHash">1432236465</sdl:value>
</sdl:seg>
<sdl:seg id="3" conf="Draft" origin="interactive">
<sdl:value key="SegmentIdentityHash">US1BN1eE/zdK+R9JVk9NSg+LmyU=</sdl:value>
<sdl:value key="created_by">STONEPC\abc</sdl:value>
<sdl:value key="created_on">09/16/2023 19:32:02</sdl:value>
<sdl:value key="last_modified_by">STONEPC\abc</sdl:value>
<sdl:value key="modified_on">09/16/2023 19:32:02</sdl:value>
</sdl:seg>
</sdl:seg-defs>
</trans-unit>
<trans-unit translate="no" id="acaff8f7-6e91-4012-b909-2dbe76238709">
<source>
<x id="1"/>
</source>
</trans-unit>
</body>
</file>
</xliff>
</code></pre>
| <python><xml><xpath><xliff> | 2023-09-17 13:57:20 | 2 | 8,037 | SlowLearner |
77,122,068 | 13,835,759 | SqlAlchemy Stored Procedure Returns without executing entire proc | <p>I'm writing stored procedure for performing an update operation using SQL Server.</p>
<p>The Stored Procedure performs the following high level operations</p>
<pre><code>--Print the values before update
SELECT 'Before' AS [When], x, y, z FROM eod.fn_get_data(@pricingdate)
BEGIN TRAN;
--Logic to update the data
--Print the values after update
SELECT 'After' AS [When], x, y, z FROM eod.fn_get_data(@pricingdate)
--Based on whether it was a test run or not, commit/rollback the transaction.
IF @Debug = 1
ROLLBACK TRAN;
ELSE
COMMIT TRAN;
</code></pre>
<p>When I execute this stored procedure in SSMS - EXEC my_stored_proc @pricingdate = '08/17/2023', it updates the data as expected and runs fine.</p>
<p>When I execute the same using SQLAlchemy, it doesn't seem to update the data. In fact, it returns the data that I print before the update. (SQLAlchemy version: 1.2.18)</p>
<p>So, my logic to updata the data in the stored procedure contains CTEs in the following format</p>
<pre><code>WITH cte1 AS
(--LOGIC),
cte2 AS (--LOGIC)
UPDATE t1
SET x = a, y=b
FROM
cte2 JOIN table1 t1
</code></pre>
<p>Another thing to note is, if I somehow write a single-line UPDATE statement with multiple subqueries after the 'Before Update' SELECT statement, the UPDATE statement gets executed while still returning the 'Before Update' result. But my team follows a strict guideline to minimize subquery usage, hence I can't go ahead with this approach.</p>
<p>I even tried replacing the CTE with multiple SELECT statements and populating data in temp tables & finally performing update, but that gives similar issue as CTE.</p>
<p>I stumbled upon this SIMILAR issue - <a href="https://github.com/sqlalchemy/sqlalchemy/issues/5492" rel="nofollow noreferrer">https://github.com/sqlalchemy/sqlalchemy/issues/5492</a> which points out auto commit related stuff for update CTE, but that's ruled out in my case as I'm already using engine.begin().</p>
<p>The only difference is that OP is directly executing the update CTE as text query while I'm calling it as a part of a larger stored procedure.</p>
<p>Calling code-</p>
<pre><code>query = """ EXEC my_stored_proc @pricingdate = ?"""
params = (pricingdate,)
with self._engine.begin() as conn:
conn.execute(query,params)
</code></pre>
<p>I tried the following so far -</p>
<ol>
<li>SET NOCOUNT ON; at the top of my stored procedure.</li>
<li>I checked and saw a lot of people using callproc() on raw connection cursor object, but unfortunately, I can't use the same. It gives me error saying cursor does not have callproc method.</li>
<li>I found this post <a href="https://stackoverflow.com/questions/73558140/sqlalchemy-executing-stored-procedure-with-output">SQLAlchemy Executing Stored Procedure with Output</a> and tried setting autocommit to true as well.</li>
</ol>
<p>I'm not sure what I can try and how I can fix the issue here. Help would be highly appreciated.</p>
| <python><sql-server><stored-procedures><sqlalchemy> | 2023-09-17 13:44:06 | 0 | 364 | Vedita Kamat |
77,121,957 | 1,881,329 | Psycopg3 Async Connection pool leakage/timeout problem | <p>I have a leakage problem in my Psycopg3 Async Connection pool and I cannot find the source of the problem.</p>
<p>I extended my Flask app such that once the server starts up it creates an Async Connection pool.</p>
<p>Then, whenever a request is made, the app gets the connections needed to retrieve the data from the database, execute the queries, process the results and return the results for the user.</p>
<p>My implementation works for a while, but after a few uses it stops working. The exception thrown is <code>psycopg_pool.PoolTimeout</code>.</p>
<p>When I get the pool stats, I see that <code>pool_available = 0</code>, while <code>requests_queued</code> and <code>requests_errors</code> keep increasing on each new try, indicating that the available connections have been exhausted.</p>
<p>Can anyone help me find the source of the problem?</p>
<p>Here's the implementation:</p>
<ol>
<li>The reusable connection pool:</li>
</ol>
<pre class="lang-py prettyprint-override"><code># db.py
import os
from psycopg_pool import AsyncConnectionPool
class DB:
def __init__(self):
self.async_pool = None
def connect_async(self):
conninfo = (f"dbname={os.environ['DBNAME']} "
f"user={os.environ['POSTGRES_USER']} "
f"password={os.environ['POSTGRES_PASSWORD']} "
f"host={os.environ['POSTGRES_HOST']} "
f"port={os.environ['POSTGRES_PORT']}")
self.async_pool = AsyncConnectionPool(
conninfo, min_size=4, max_size=10, timeout=15)
def get_async_pool(self):
if self.async_pool is None:
self.connect_async()
return self.async_pool
# extensions.py
from db import DB
my_db = DB()
</code></pre>
<ol start="2">
<li>The query execution:</li>
</ol>
<pre class="lang-py prettyprint-override"><code># search.py
import asyncio

from extensions import my_db
async def keyword_search(query):
async with my_db.get_async_pool().connection() as aconn:
async with aconn.cursor() as cur:
# query = ... keyword search logic
await cur.execute(query)
documents = await cur.fetchall()
column_names = [desc[0] for desc in cur.description]
return [dict(zip(column_names, doc)) for doc in documents]
async def semantic_search(query):
async with my_db.get_async_pool().connection() as aconn:
async with aconn.cursor() as cur:
# query = ... semantic search logic
await cur.execute(query)
documents = await cur.fetchall()
column_names = [desc[0] for desc in cur.description]
return [dict(zip(column_names, doc)) for doc in documents]
async def full_search(query):
ks, ss = await asyncio.gather(keyword_search(query), semantic_search(query))
return ks + ss
</code></pre>
<ol start="3">
<li>The Flask app:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import psycopg_pool
from flask import request, jsonify

from extensions import my_db
from search import full_search
@bp.route('/search', methods=['GET'])
async def search_fn():
query = request.args.get('query')
for _ in range(2):
try:
results = await full_search(query)
return jsonify(results)
except psycopg_pool.PoolTimeout as e:
stats = my_db.get_async_pool().get_stats()
app.logger.error({'error': str(e), 'pool_stats': stats})
return jsonify({'error': 'Database connection timeout'}), 500
</code></pre>
<p>Other info that might be relevant:</p>
<ul>
<li>The only async route in the application is the <code>search_fn</code> above; all others are sync.</li>
</ul>
<p>Thanks in advance!</p>
| <python><postgresql><flask><psycopg3> | 2023-09-17 13:15:00 | 0 | 436 | Carlos Souza |
77,121,890 | 1,390,993 | Regex pattern to exclude timestamps | <p>I have the following text:</p>
<pre><code>Master of the universe\n\n(Jul 26, 2023 - 1:00pm)\n\n(Interviewee: Marina)\n\n\n\n(00:00:05 - 00:00:09)\n\n\t Alice: This project. Uh my job is to ask lots of questions.\n\n\n\n(00:00:10 - 00:00:11)\n\n\t Marina: What is it?\n\n\n\n(00:00:11 - 00:00:14)\n\n\t Alice: Uh uh impartially.\n\n\n\n(00:00:15 - 00:00:18)\n\n\t Alice: Uh so suddenly I don't work for a particular brand.\n\n\n\n(00:00:19 - 00:00:21)\n\n\t Alice: Uh I'm self-employed,\n\n\n\n(00:00:21 - 00:00:21)\n\n\t Marina: M M.\n\n\n\n(00:00:21 - 00:00:32)\n\n\t Alice: I do group interviews with lots of brands, from toothpaste to the product we're going to talk about today.\n\n\n\n(00:00:32 - 00:00:32)\n\n\t Marina: Okay.\n\n\n\n(00:00:33 - 00:00:37)\n\n\t Alice: Uh today we're gonna talk for an hour uh.\n\n\n\n(00:00:36 - 00:00:36)\n\n\t Marina: Okay.\n\n\n\n(00:00:37 - 00:00:39)\n\n\t
</code></pre>
<p>From above text, I want to extract the <code>name: text</code>. For e.g.:</p>
<pre><code>Alice: This project. Uh my job is to ask lots of questions.
Marina: What is it?
Alice: Uh uh impartially.
Alice: Uh so suddenly I don't work for a particular brand.
Alice: Uh I'm self-employed,
Marina: M M.
Alice: I do group interviews with lots of brands, from toothpaste to the product we're going to talk about today.
Marina: Okay.
Alice: Uh today we're gonna talk for an hour uh.
Marina: Okay.
</code></pre>
<p>I am able to identify the timestamps with this regex, but not to exclude them:</p>
<pre><code>(?:[\\n]+\(\d{2}:\d{2}:\d{2} - \d{2}:\d{2}:\d{2}\)[\\n\\t\\s]+|$)
</code></pre>
<p>I need a regex pattern that can exclude all the timestamps and other text, only keeping the <code>name: text</code> as shown above.</p>
<p><strong>EDIT:</strong>
<strong>I forgot to mention: exclude the lines that match the interviewee's name.</strong></p>
<p><strong>P.S.: I do not want Python code to do a regex replace using the above pattern. I just need a complete pattern to find matches for <code>name: text</code>.</strong></p>
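For illustration only (the pattern is the point here; Python is used just to exercise a candidate match, not to perform a replace), a sketch under the assumption that the transcript contains real newline/tab characters:

```python
import re

# Hypothetical sample mirroring the transcript structure above.
sample = ("(00:00:05 - 00:00:09)\n\n\t Alice: This project.\n\n\n\n"
          "(00:00:10 - 00:00:11)\n\n\t Marina: What is it?\n\n\n\n")

# Anchor on the tab that precedes each speaker tag and capture "name: text".
# A negative lookahead such as (?!Marina) after "\t " would additionally
# drop the interviewee's lines, per the edit above.
pattern = re.compile(r"\t (\w+: [^\n]+)")
print(pattern.findall(sample))
# → ['Alice: This project.', 'Marina: What is it?']
```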
| <python><regex><ms-word><docx><regex-group> | 2023-09-17 12:54:30 | 2 | 1,158 | sifar |
77,121,817 | 988,549 | What kind of Text object are axis labels? | <p>Axis labels (<code>ax.xaxis.label</code>) appear to be <code>matplotlib.text.Text</code> objects, but when extracting the position and attempting to manually create text at the same position, there appears to be a large offset. In the example below, I expect that the red "x" would exactly overlap the black one, but it does not.</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.set_xlabel('x')
label = ax.xaxis.label
pos = label.get_position()
pos_display = label.get_transform().transform(pos)
label_red = plt.text(pos_display[0], pos_display[1], 'x', transform=None, fontdict={'color': 'r'})
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/cvBPZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cvBPZ.png" alt="Script output" /></a></p>
<p>Furthermore, I wasn't able to deep-copy the label <code>Text</code> object. In the example below, I expect that a blue "x" would overlap the black one, but it just doesn't show in the figure. If instead of <code>label</code> I attempt to replicate <code>label_red</code>, then indeed a blue "x" overlaps the red one.</p>
<pre class="lang-py prettyprint-override"><code>from copy import copy
label_blue = copy(label)
label_blue.set_color('b')
ax.add_artist(label_blue)
plt.show()
</code></pre>
<p>So, what's so special about the axis label text objects? My end goal is to have a secondary label on the same axis.</p>
| <python><matplotlib> | 2023-09-17 12:34:46 | 1 | 385 | fheshwfq |
77,121,770 | 561,243 | rich.Prompt to retrieve two numbers or a letter | <p>I have a coding issue I would like to share with you.</p>
<p>In my Python code I need some user input; in particular, I need the user to type in either a pair of float values or a specific letter from a list to execute a command.</p>
<p>Something like:</p>
<blockquote>
<p>Enter the required values OR Q to Exit:</p>
</blockquote>
<p>I have coded everything using a while loop and the basic <code>input</code> followed by a series of checks on the input, but instead of my dodgy, hard-to-maintain code I would like to use <code>rich.prompt</code> to get an easier and nicer result.</p>
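For reference, the kind of manual validation loop described above might be sketched like this (hypothetical `parse_input` helper, shown only to illustrate what a `rich.Prompt`-based solution would replace):

```python
def parse_input(raw: str):
    """Return 'quit' for Q/q, a (float, float) pair, or None if invalid."""
    raw = raw.strip()
    if raw.upper() == "Q":
        return "quit"
    parts = raw.split()
    if len(parts) == 2:
        try:
            return float(parts[0]), float(parts[1])
        except ValueError:
            pass
    return None  # invalid — the caller would re-prompt

print(parse_input("Q"))        # → quit
print(parse_input("1.5 2.5"))  # → (1.5, 2.5)
```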
<p>I have seen that with rich.Prompt you can get a string (also among a choices list), an integer or a float. But I cannot see anyway how to mix these things.</p>
<p>Do you have any suggestion?</p>
<p>Thanks in advance,
toto</p>
| <python><input><rich> | 2023-09-17 12:21:39 | 1 | 367 | toto |
77,121,698 | 15,341,457 | Scrapy XPath - @href returning unexpected value | <p>I'm currently web-scraping restaurant reviews from Tripadvisor and I'm trying to retrieve restaurant links from this <a href="https://www.tripadvisor.it/Restaurants-g187791-Rome_Lazio.html" rel="nofollow noreferrer">page</a>.</p>
<p>I want the links of the 30 restaurant pages in the bottom part but I'm making some tests with just one of them. Retrieving the first one in the list can be done with this expression:</p>
<pre><code>//div[@data-test='1_list_item']/div/div[2]/div[1]/div//a/@href
</code></pre>
<p>Scrapy shows some unexpected behaviour: the following CSS expression should be enough to retrieve all the links, but instead an empty array is returned:</p>
<pre><code>response.css('.b::attr(href)').extract()
</code></pre>
<p>The same goes for many XPath expressions, and by using the one above like this:</p>
<pre><code>response.xpath("//div[@data-test='1_list_item']/div/div[2]/div[1]/div//a/@href").get()
</code></pre>
<p>I get the following link in return:</p>
<p><strong>/ShowUserReviews-g187791-d25107357-r916086825-ADESSO_Vineria_Bistrot-Rome_Lazio.html</strong></p>
<p>I don't know where this comes from; the link I can see in the Chrome inspector, and that I expected to get in return, is:</p>
<p><strong>/Restaurant_Review-g187791-d25107357-Reviews-ADESSO_Vineria_Bistrot-Rome_Lazio.html</strong></p>
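One common way to debug such mismatches (a sketch; the assumption is that the HTML Scrapy receives differs from the browser DOM, e.g. due to JavaScript rendering or redirects) is to dump the raw response body and inspect it:

```python
def dump_response(body: bytes, path: str = "received.html") -> None:
    """Save the HTML actually received (e.g. Scrapy's response.body) so it
    can be opened in a browser and compared with what DevTools shows."""
    with open(path, "wb") as f:
        f.write(body)

# Inside a Scrapy callback this would be: dump_response(response.body)
```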
| <python><http><web-scraping><xpath><scrapy> | 2023-09-17 12:04:25 | 2 | 332 | Rodolfo |
77,121,172 | 6,129,375 | How do I transpose a dataframe after groupby? | <p>I have this table</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>port</th>
<th>valueA</th>
<th>valueB</th>
<th>valueC</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>3.1</td>
<td>58.2</td>
<td>0.09</td>
</tr>
<tr>
<td>2</td>
<td>3.09</td>
<td>58.3</td>
<td>0.1</td>
</tr>
<tr>
<td>3</td>
<td>3.09</td>
<td>58.15</td>
<td>0.09</td>
</tr>
<tr>
<td>4</td>
<td>3.11</td>
<td>58.2</td>
<td>0.1</td>
</tr>
<tr>
<td>1</td>
<td>3.1</td>
<td>58.25</td>
<td>0.09</td>
</tr>
<tr>
<td>2</td>
<td>3.1</td>
<td>58.25</td>
<td>0.09</td>
</tr>
<tr>
<td>3</td>
<td>3.08</td>
<td>58.15</td>
<td>0.09</td>
</tr>
<tr>
<td>4</td>
<td>3.09</td>
<td>58.1</td>
<td>0.09</td>
</tr>
</tbody>
</table>
</div>
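For a reproducible setup, the input table above can be written as a frame (values transcribed from the table):

```python
import pandas as pd

# The input table from the question, as a reproducible DataFrame.
df = pd.DataFrame({
    'port':   [1, 2, 3, 4, 1, 2, 3, 4],
    'valueA': [3.1, 3.09, 3.09, 3.11, 3.1, 3.1, 3.08, 3.09],
    'valueB': [58.2, 58.3, 58.15, 58.2, 58.25, 58.25, 58.15, 58.1],
    'valueC': [0.09, 0.1, 0.09, 0.1, 0.09, 0.09, 0.09, 0.09],
})
```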
<p>I would like to group based on 'port' and then have them as columns in a new dataframe that has valueA as rows, like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
</tr>
</thead>
<tbody>
<tr>
<td>3.1</td>
<td>3.09</td>
<td>3.09</td>
<td>3.11</td>
</tr>
<tr>
<td>3.1</td>
<td>3.1</td>
<td>3.08</td>
<td>3.09</td>
</tr>
</tbody>
</table>
</div>
<p>How do I do this?</p>
| <python><pandas><dataframe> | 2023-09-17 09:12:37 | 3 | 380 | RedSmolf |
77,121,119 | 17,082,611 | GPU ran out of memory. How to invoke garbage collector for cleaning the GPU memory at each combination of hyperparameters using GridSearchCV? | <p>I am training my model on a remote server using <code>GridSearchCV</code> API for tuning some hyper parameters such as <code>epochs</code>, <code>l_rate</code>, <code>batch_size</code> and <code>patience</code>. Unfortunately while tuning them, after few iterations, I get the following error:</p>
<pre><code>Failed copying input tensor from /job:localhost/replica:0/task:0/device:CPU:0
to /job:localhost/replica:0/task:0/device:GPU:0
in order to run _EagerConst: Dst tensor is not initialized.
</code></pre>
<p>It seems that <a href="https://stackoverflow.com/a/40389498/17082611">the GPU memory of the server is not enough</a> and <a href="https://github.com/aymericdamien/TensorFlow-Examples/issues/38#issuecomment-223793214" rel="nofollow noreferrer">this error is raised when GPU memory is full</a>, and they recommend reducing the data set size and/or the <code>batch_size</code>.</p>
<p>Firstly I reduced the <code>batch_size</code> to <code>2</code>, <code>4</code>, <code>8</code> and <code>16</code> but the error persists since I get:</p>
<pre><code>W tensorflow/tsl/framework/bfc_allocator.cc:485] Allocator (GPU_0_bfc) ran
out of memory trying to allocate 1.17GiB (rounded to 1258291200) requested
by op _EagerConst
If the cause is memory fragmentation maybe the environment variable
'TF_GPU_ALLOCATOR=cuda_malloc_async' will improve the situation
</code></pre>
<p>Then, I set <code>os.environ['TF_GPU_ALLOCATOR'] = 'cuda_malloc_async'</code> as suggested, but the problem persists.</p>
<p>Nevertheless the issue seems to be solved if I reduce the data set size, but <em>I have to use the whole data set</em> (I cannot waste data).</p>
<p>In order to handle this problem my key ideas are:</p>
<ol>
<li><p>Prevent a new model and related objects for loss and training management from being recreated. This would be the optimal solution, as it would always use the same model (obviously ensuring that it is "reset" with each new combination of hyperparameters), with relative loss and training. This solution is perhaps the most complicated, as I don't know if the libraries I chose to use allow it.</p>
</li>
<li><p>Verify that the same problem is not caused by the data instead of the model (i.e. I would not want the same data to be reallocated for each combination of hyperparameters, leaving the old ones in memory). This could also be a cause and the solution to which I believe is simpler than the previous or similar one, but I see it as less probable as a cause. In any case, check that this does not happen.</p>
</li>
<li><p>Reset the memory at each combination of hyperparameters by invoking the garbage collector (I don't know if it works on the GPU too). This is the easiest solution and perhaps the first thing I would try, but it doesn't necessarily work, because if the libraries it uses maintain references to the objects in memory (even if they are no longer used) these are not eliminated by the garbage collector.</p>
</li>
</ol>
<p>Also, <a href="https://stackoverflow.com/a/42047606/17082611">with the tensorflow backend the current model is not destroyed</a>, so I need to clear the session.</p>
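A minimal sketch combining idea 3 with that session reset (assumption: this helper would be invoked between hyperparameter combinations, e.g. from the wrapper's `fit`; the import is lazy only so the sketch stands alone):

```python
import gc

def reset_backend():
    """Clear Keras/TensorFlow graph state, then collect Python garbage."""
    from keras import backend as K  # lazy import: keeps the sketch standalone
    K.clear_session()  # drop the current TF graph/session state
    gc.collect()       # then free Python-side references
```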
<p>If you have any additional thoughts or ideas, please feel free to share them with me. These are the involved functions:</p>
<pre><code>def grid_search_vae(x_train, latent_dimension):
param_grid = {
'epochs': [2500],
'l_rate': [10 ** -4, 10 ** -5, 10 ** -6, 10 ** -7],
'batch_size': [32, 64], # [2, 4, 8, 16] won't fix the issue
'patience': [30]
}
ssim_scorer = make_scorer(my_ssim, greater_is_better=True)
grid = GridSearchCV(
VAEWrapper(encoder=Encoder(latent_dimension), decoder=Decoder()),
param_grid, scoring=ssim_scorer, cv=5, refit=False
)
grid.fit(x_train, x_train)
return grid
def refit(fitted_grid, x_train, y_train, latent_dimension):
best_epochs = fitted_grid.best_params_["epochs"]
best_l_rate = fitted_grid.best_params_["l_rate"]
best_batch_size = fitted_grid.best_params_["batch_size"]
best_patience = fitted_grid.best_params_["patience"]
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=0.2)
encoder = Encoder(latent_dimension)
decoder = Decoder()
vae = VAE(encoder, decoder, best_epochs, best_l_rate, best_batch_size)
vae.compile(Adam(best_l_rate))
early_stopping = EarlyStopping("val_loss", patience=best_patience)
history = vae.fit(x_train, x_train, best_batch_size, best_epochs,
validation_data=(x_val, x_val), callbacks=[early_stopping])
return history, vae
</code></pre>
<p>While this is the <code>main</code> code:</p>
<pre><code>if __name__ == '__main__':
x_train, x_test, y_train, y_test = load_data("data", "labels")
# Reducing data set size will fix the issue
# new_size = 200
# x_train, y_train = reduce_size(x_train, y_train, new_size)
# x_test, y_test = reduce_size(x_test, y_test, new_size)
latent_dimension = 25
grid = grid_search_vae(x_train, latent_dimension)
history, vae = refit(grid, x_train, y_train, latent_dimension)
</code></pre>
<p>Can you help me?</p>
<p>If you need this information, these are the GPUs:</p>
<pre><code>2023-09-18 11:21:25.628286: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 7347 MB memory: -> device: 0, name: NVIDIA GeForce GTX 1080, pci bus id: 0000:02:00.0, compute capability: 6.1
2023-09-18 11:21:25.629120: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1639] Created device /job:localhost/replica:0/task:0/device:GPU:1 with 7371 MB memory: -> device: 1, name: NVIDIA GeForce GTX 1080, pci bus id: 0000:03:00.0, compute capability: 6.1
2023-09-18 11:21:31.911969: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:432] Loaded cuDNN version 8600
</code></pre>
<p>and I am using tensorflow as keras backend, that is:</p>
<pre><code>from keras import backend as K
K.backend() # 'tensorflow'
</code></pre>
<p>I also tried to add:</p>
<pre><code>gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
</code></pre>
<p>in the <code>main</code> code (as first instructions) but this didn't help.</p>
<p>If you need the code for models, here it is:</p>
<pre><code>import numpy as np
import tensorflow as tf
from keras.initializers import he_uniform
from keras.layers import Conv2DTranspose, BatchNormalization, Reshape, Dense, Conv2D, Flatten
from keras.optimizers.legacy import Adam
from keras.src.callbacks import EarlyStopping
from skimage.metrics import structural_similarity as ssim
from sklearn.base import BaseEstimator
from sklearn.metrics import mean_absolute_error, make_scorer
from sklearn.model_selection import train_test_split, GridSearchCV
from tensorflow import keras
class VAEWrapper:
def __init__(self, **kwargs):
self.vae = VAE(**kwargs)
self.vae.compile(Adam())
def fit(self, x, y, **kwargs):
self.vae.fit(x, y, **kwargs)
def get_config(self):
return self.vae.get_config()
def get_params(self, deep):
return self.vae.get_params(deep)
def set_params(self, **params):
return self.vae.set_params(**params)
class VAE(keras.Model, BaseEstimator):
def __init__(self, encoder, decoder, epochs=None, l_rate=None, batch_size=None, patience=None, **kwargs):
super().__init__(**kwargs)
self.encoder = encoder
self.decoder = decoder
self.epochs = epochs # For grid search
self.l_rate = l_rate # For grid search
self.batch_size = batch_size # For grid search
self.patience = patience # For grid search
self.total_loss_tracker = keras.metrics.Mean(name="total_loss")
self.reconstruction_loss_tracker = keras.metrics.Mean(name="reconstruction_loss")
self.kl_loss_tracker = keras.metrics.Mean(name="kl_loss")
def call(self, inputs, training=None, mask=None):
_, _, z = self.encoder(inputs)
outputs = self.decoder(z)
return outputs
@property
def metrics(self):
return [
self.total_loss_tracker,
self.reconstruction_loss_tracker,
self.kl_loss_tracker,
]
def train_step(self, data):
data, labels = data
with tf.GradientTape() as tape:
# Forward pass
z_mean, z_log_var, z = self.encoder(data)
reconstruction = self.decoder(z)
# Compute losses
reconstruction_loss = tf.reduce_mean(
tf.reduce_sum(
keras.losses.binary_crossentropy(data, reconstruction), axis=(1, 2)
)
)
kl_loss = -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))
kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))
total_loss = reconstruction_loss + kl_loss
# Compute gradient
grads = tape.gradient(total_loss, self.trainable_weights)
# Update weights
self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
# Update metrics
self.total_loss_tracker.update_state(total_loss)
self.reconstruction_loss_tracker.update_state(reconstruction_loss)
self.kl_loss_tracker.update_state(kl_loss)
return {
"loss": self.total_loss_tracker.result(),
"reconstruction_loss": self.reconstruction_loss_tracker.result(),
"kl_loss": self.kl_loss_tracker.result(),
}
def test_step(self, data):
data, labels = data
# Forward pass
z_mean, z_log_var, z = self.encoder(data)
reconstruction = self.decoder(z)
# Compute losses
reconstruction_loss = tf.reduce_mean(
tf.reduce_sum(
keras.losses.binary_crossentropy(data, reconstruction), axis=(1, 2)
)
)
kl_loss = -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))
kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))
total_loss = reconstruction_loss + kl_loss
# Update metrics
self.total_loss_tracker.update_state(total_loss)
self.reconstruction_loss_tracker.update_state(reconstruction_loss)
self.kl_loss_tracker.update_state(kl_loss)
return {
"loss": self.total_loss_tracker.result(),
"reconstruction_loss": self.reconstruction_loss_tracker.result(),
"kl_loss": self.kl_loss_tracker.result(),
}
@keras.saving.register_keras_serializable()
class Encoder(keras.layers.Layer):
def __init__(self, latent_dimension):
super(Encoder, self).__init__()
self.latent_dim = latent_dimension
seed = 42
self.conv1 = Conv2D(filters=64, kernel_size=3, activation="relu", strides=2, padding="same",
kernel_initializer=he_uniform(seed))
self.bn1 = BatchNormalization()
self.conv2 = Conv2D(filters=128, kernel_size=3, activation="relu", strides=2, padding="same",
kernel_initializer=he_uniform(seed))
self.bn2 = BatchNormalization()
self.conv3 = Conv2D(filters=256, kernel_size=3, activation="relu", strides=2, padding="same",
kernel_initializer=he_uniform(seed))
self.bn3 = BatchNormalization()
self.flatten = Flatten()
self.dense = Dense(units=100, activation="relu")
self.z_mean = Dense(latent_dimension, name="z_mean")
self.z_log_var = Dense(latent_dimension, name="z_log_var")
self.sampling = sample
def call(self, inputs, training=None, mask=None):
x = self.conv1(inputs)
x = self.bn1(x)
x = self.conv2(x)
x = self.bn2(x)
x = self.conv3(x)
x = self.bn3(x)
x = self.flatten(x)
x = self.dense(x)
z_mean = self.z_mean(x)
z_log_var = self.z_log_var(x)
z = self.sampling(z_mean, z_log_var)
return z_mean, z_log_var, z
@keras.saving.register_keras_serializable()
class Decoder(keras.layers.Layer):
def __init__(self):
super(Decoder, self).__init__()
self.dense1 = Dense(units=4096, activation="relu")
self.bn1 = BatchNormalization()
self.dense2 = Dense(units=1024, activation="relu")
self.bn2 = BatchNormalization()
self.dense3 = Dense(units=4096, activation="relu")
self.bn3 = BatchNormalization()
seed = 42
self.reshape = Reshape((4, 4, 256))
self.deconv1 = Conv2DTranspose(filters=256, kernel_size=3, activation="relu", strides=2, padding="same",
kernel_initializer=he_uniform(seed))
self.bn4 = BatchNormalization()
self.deconv2 = Conv2DTranspose(filters=128, kernel_size=3, activation="relu", strides=1, padding="same",
kernel_initializer=he_uniform(seed))
self.bn5 = BatchNormalization()
self.deconv3 = Conv2DTranspose(filters=128, kernel_size=3, activation="relu", strides=2, padding="valid",
kernel_initializer=he_uniform(seed))
self.bn6 = BatchNormalization()
self.deconv4 = Conv2DTranspose(filters=64, kernel_size=3, activation="relu", strides=1, padding="valid",
kernel_initializer=he_uniform(seed))
self.bn7 = BatchNormalization()
self.deconv5 = Conv2DTranspose(filters=64, kernel_size=3, activation="relu", strides=2, padding="valid",
kernel_initializer=he_uniform(seed))
self.bn8 = BatchNormalization()
self.deconv6 = Conv2DTranspose(filters=1, kernel_size=2, activation="sigmoid", padding="valid",
kernel_initializer=he_uniform(seed))
def call(self, inputs, training=None, mask=None):
x = self.dense1(inputs)
x = self.bn1(x)
x = self.dense2(x)
x = self.bn2(x)
x = self.dense3(x)
x = self.bn3(x)
x = self.reshape(x)
x = self.deconv1(x)
x = self.bn4(x)
x = self.deconv2(x)
x = self.bn5(x)
x = self.deconv3(x)
x = self.bn6(x)
x = self.deconv4(x)
x = self.bn7(x)
x = self.deconv5(x)
x = self.bn8(x)
decoder_outputs = self.deconv6(x)
return decoder_outputs
</code></pre>
| <python><tensorflow><keras><memory-management><gpu> | 2023-09-17 08:52:30 | 1 | 481 | tail |
77,121,056 | 3,187,106 | Extract text from a PDF generated from AutoCAD using Python | <p>I have a PDF generated from AutoCAD. I have used several Python libraries to extract the text from it (PyPDF2, PDFMiner, ...) and all of them return empty text.</p>
<p>I have noticed that these PDFs do not contain selectable text, e.g. I can NOT select any text in them; it's like a picture or something.</p>
<p>How can I extract the text from it? Is this even possible, or is there a library to do so?</p>
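If the pages really are raster images, the usual route is OCR. A sketch (assumptions: the third-party `pdf2image` and `pytesseract` packages plus the Tesseract binary, none of which are part of the question):

```python
def ocr_pdf(path: str) -> str:
    """Rasterise each PDF page and run OCR over it."""
    from pdf2image import convert_from_path  # third-party, assumed installed
    import pytesseract                        # third-party, assumed installed
    pages = convert_from_path(path)
    return "\n".join(pytesseract.image_to_string(page) for page in pages)
```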
<p>Thanks</p>
| <python><python-3.x><pdf> | 2023-09-17 08:33:12 | 1 | 426 | Saeed isa |
77,121,040 | 1,261,930 | missing files at django-admin startproject | <p>I am learning Django and, when following along, I can create the starter project; however, it only creates one file, so I cannot continue on to start the Django development server. These are the instructions I followed so far:</p>
<p><a href="https://i.sstatic.net/s07pt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/s07pt.png" alt="enter image description here" /></a></p>
<p>How do I recreate the missing files depicted in the instructions' tree? Please assist.</p>
| <python><django><python-venv> | 2023-09-17 08:26:52 | 1 | 793 | Chagbert |
77,120,899 | 8,052,616 | How can I initialize an object from a package path string in Python? | <p>For example, my package path is <code>component.sub_fold.package.class_name</code>. The <code>component.sub_fold</code> part is the path of my package, <code>package</code> is a file name, and <code>class_name</code> is the name of a class that I want to initialize. I can initialize an object through</p>
<pre><code>from component.sub_fold.package import class_name
obj = class_name()
</code></pre>
<p>How can I initialize an object through the string <code>"component.sub_fold.package.class_name"</code>?</p>
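For illustration, the standard-library route is `importlib` (demonstrated here with a stdlib class, since the question's package is not importable in this sketch):

```python
import importlib

def instantiate(dotted_path: str, *args, **kwargs):
    """Split 'pkg.module.ClassName', import the module, and call the class."""
    module_path, class_name = dotted_path.rsplit(".", 1)
    cls = getattr(importlib.import_module(module_path), class_name)
    return cls(*args, **kwargs)

# In the question this would be instantiate("component.sub_fold.package.class_name");
# stdlib demo of the same mechanism:
d = instantiate("collections.OrderedDict")
print(type(d).__name__)  # → OrderedDict
```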
| <python><python-import> | 2023-09-17 07:36:52 | 1 | 1,111 | taichi_tiger |
77,120,897 | 19,356,117 | How to solve "MemoryError" when downloading a dataset from Kaggle? | <p>I want to download a dataset from Kaggle; however, when I run it on my local machine, it crashes. This is my code:</p>
<pre><code>api = kaggle.KaggleApi(json_str)
api.authenticate()
api.datasets_download(owner_slug='headwater', dataset_slug='Camels')
</code></pre>
<p>This is the crash report:</p>
<pre><code>test_dload_archive.py:8:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
..\venv\lib\site-packages\kaggle\api\kaggle_api.py:1494: in datasets_download
(data) = self.datasets_download_with_http_info(owner_slug, dataset_slug, **kwargs) # noqa: E501
..\venv\lib\site-packages\kaggle\api\kaggle_api.py:1563: in datasets_download_with_http_info
return self.api_client.call_api(
..\venv\lib\site-packages\kaggle\api_client.py:329: in call_api
return self.__call_api(resource_path, method,
..\venv\lib\site-packages\kaggle\api_client.py:161: in __call_api
response_data = self.request(
..\venv\lib\site-packages\kaggle\api_client.py:351: in request
return self.rest_client.GET(url,
..\venv\lib\site-packages\kaggle\rest.py:247: in GET
return self.request("GET", url,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <kaggle.rest.RESTClientObject object at 0x000001B1FAE01D80>
method = 'GET'
url = 'https://www.kaggle.com/api/v1/datasets/download/headwater/Camels'
query_params = []
headers = {'Accept': 'file', 'User-Agent': 'Swagger-Codegen/1/python'}
body = None, post_params = {}, _preload_content = True, _request_timeout = None
……
if six.PY3:
> r.data = r.data.decode('utf8')
E MemoryError
..\venv\lib\site-packages\kaggle\rest.py:235: MemoryError
</code></pre>
<p>I think it's because of the memory cost of unzipping a big file, but how can I solve it?</p>
<p>Update:
when I run it on Linux, the crash looks like this:</p>
<pre><code> if six.PY3:
> r.data = r.data.decode('utf8')
E UnicodeDecodeError: 'utf-8' codec can't decode byte 0xcb in position 14: invalid continuation byte
</code></pre>
| <python><io><request><out-of-memory><kaggle> | 2023-09-17 07:36:48 | 1 | 1,115 | forestbat |
77,120,632 | 2,153,235 | What does ContainerType[str, ...] mean in Python? | <p>I realize that Python doesn't actually overload functions, but rather
hints at multiple acceptable types for method input/output arguments.
By itself, this doesn't allow specifying which input types yield
which return types. Hence the use of the <code>@overload</code> decorator to
designate multiple acceptable type-hinted prototypes. This is my
synthesis of reading multiple web pages, so if it's not entirely
correct, thanks for correcting me.</p>
<p>The PySpark package has a <code>rdd.py</code> module containing the following
method prototype:</p>
<pre><code>@overload
def toDF(
self: "RDD[RowLike]",
schema: Optional[Union[List[str], Tuple[str, ...]]] = None,
sampleRatio: Optional[float] = None,
) -> "DataFrame":
...
</code></pre>
<p>I've tried to find information on how to interpret <code>Tuple[str, ...]</code>.</p>
<p><a href="https://fastapi.tiangolo.com/id/python-types" rel="nofollow noreferrer">This</a> page talks about
type hinting for container arguments in general, but not what an ellipsis
means following a concrete type inside the square brackets that suffix a
container type.</p>
<p>The ellipsis doesn't look like it's being used in the context of slicing, which is another
use that I've seen mentioned online.</p>
<p>The role of the ellipsis here also differs from representing a no-op body, such as
with <code>pass</code>.</p>
<p><em><strong>How do I interpret <code>Tuple[str, ...]</code>?</strong></em></p>
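A short runnable illustration of the interpretation: `Tuple[str, ...]` is typing's homogeneous, variable-length tuple — any number of `str` items — as opposed to `Tuple[str, str]`, which means exactly two strings:

```python
from typing import Tuple, get_args

Schema = Tuple[str, ...]
print(get_args(Schema))  # → (<class 'str'>, Ellipsis)

# All of these satisfy Tuple[str, ...] for a type checker:
a: Schema = ("a",)
b: Schema = ("a", "b", "c")
c: Schema = ()  # even the empty tuple
```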
| <python><python-typing> | 2023-09-17 05:49:25 | 1 | 1,265 | user2153235 |
77,120,569 | 11,445,140 | In docker can't use tree command, get error: /bin/sh: 52: tree: not found | <p>I'm using Docker and I want to use the <code>tree</code> command, but I get the errors below:</p>
<pre><code># tree -L 1
/bin/sh: 52: tree: not found
# whereis tree
tree:
# apt install tree
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package tree
# pip install tree
Requirement already satisfied: tree in /usr/local/lib/python3.8/site-packages (0.2.4)
Requirement already satisfied: Pillow in /usr/local/lib/python3.8/site-packages (from tree) (10.0.1)
Requirement already satisfied: svgwrite in /usr/local/lib/python3.8/site-packages (from tree) (1.4.3)
Requirement already satisfied: setuptools in /usr/local/lib/python3.8/site-packages (from tree) (57.0.0)
Requirement already satisfied: click in /usr/local/lib/python3.8/site-packages (from tree) (8.1.7)
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
# tree -L 1
/bin/sh: 56: tree: not found
#
</code></pre>
<p>What should I do?</p>
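One detail worth checking from Python itself: the PyPI package named `tree` is an unrelated library, not the GNU `tree` binary, so `pip install tree` cannot satisfy the shell command. `shutil.which` shows what the shell would actually find:

```python
import shutil

# None means no `tree` executable is on PATH. The usual Debian-image fix is
# to run `apt-get update` before `apt install tree` inside the container,
# since slim images ship without package lists.
print(shutil.which("tree"))
```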
| <python><docker><tree><sh><bin> | 2023-09-17 05:17:18 | 1 | 519 | Tom |
77,120,536 | 3,899,975 | how to change the axis limits based on original data after transforming the data | <p>I have the following code. The problem is that I want the x axis to keep the limits of the original data (<code>failure</code>) and the y axis to run between 0 and 1 (or 0% and 99.9999%). I want to do this without using a <code>twinx</code> axis:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
#data
failure = np.array([168, 202, 190, 197, 169, 214, 201, 219, 206, 198, 190, 183, 206, 218, 214, 206, 213, 202, 209, 206])
# get plotting positions for probability distribution plots
def plottingPositionOriginal(failure, a=0.3):
x = np.sort(failure)
n = len(x)
F = []
for i in range(1, len(failure)+1):
F.append((i - a) / (n + 1 - 2 * a))
y = np.array(F)
return x, y
# transform the y axis so the plot appears linear
def plottingPositionFit(failure):
t, F = plottingPositionOriginal(failure)
x = np.log(t)
y = np.log(-np.log(1 - F))
beta, c = np.polyfit(x, y, 1)
y_fit = beta * x + c
return x, y, y_fit
X, Y, y_hat = plottingPositionFit(failure)
fig, ax = plt.subplots()
ax.scatter(X, Y)
ax.plot(X, y_hat)
plt.show()
</code></pre>
<p>This is my current plot:</p>
<p><a href="https://i.sstatic.net/uhMU4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uhMU4.png" alt="enter image description here" /></a></p>
<p>This is how I want the plot to look (minus the red lines):</p>
<p><a href="https://i.sstatic.net/7sRoK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7sRoK.png" alt="enter image description here" /></a></p>
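One possible direction toward that target (a sketch: keep plotting the transformed values from `plottingPositionFit`, and relabel the ticks by inverting the transforms, so the x axis reads in original `failure` units and the y axis as probabilities):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, for the sketch only
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter

fig, ax = plt.subplots()
# x was log(t), so invert with exp(x); y was log(-log(1-F)), so invert
# with F = 1 - exp(-exp(y)) and show it as a percentage.
ax.xaxis.set_major_formatter(FuncFormatter(lambda v, _: f"{np.exp(v):.0f}"))
ax.yaxis.set_major_formatter(
    FuncFormatter(lambda v, _: f"{1 - np.exp(-np.exp(v)):.2%}"))
```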
| <python><matplotlib> | 2023-09-17 04:58:57 | 1 | 1,021 | A.E |
77,120,476 | 736,087 | System.IO.FileNotFoundException when calling dll from Pythonnet | <p>I am working with Python.NET in PyCharm and calling a C# DLL from Python code, but I get the error below when calling a method.</p>
<p>Below is the code:</p>
<pre><code>from pythonnet import load
load("coreclr")
import platform
print(platform.architecture())
import clr
from System import *
from Microsoft import *
clr.AddReference("dll/dllname")
from dllname import classname
var1 = String("test")
var2 = String("test")
obj: object = classname()
msg = obj.MethodName(String(""), String(""))
</code></pre>
<p>Exception</p>
<pre><code>System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.Extensions.Configuration, Version=7.0.0.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'. The system cannot find the file specified.
File name: 'Microsoft.Extensions.Configuration, Version=7.0.0.0, Culture=neutral, PublicKeyToken=adb9793829ddae60'
</code></pre>
<p>Please help. I am stuck.</p>
| <python><dll><python.net> | 2023-09-17 04:29:46 | 1 | 5,727 | Saurabh |
77,120,413 | 13,771,657 | Converting single index pandas df to multi-index df and then grouping all rows with the same index together on separate lines | <p>I have a df that looks like this:</p>
<pre><code>import pandas as pd
# Create df
df = pd.DataFrame({'Name': ['Alice', 'Bob', 'Carol', 'Alice', 'Carol', 'Alice', 'Carol', 'Matt'],
'Address': ['123 A St', '123 B St', '123 C St', '123 A St', '123 C St', '456 X St', '123 C St', '123 M St'],
'State': ['AZ', 'TX', 'CA', 'AZ', 'CA', 'AZ', 'CA', 'MA'],
'Car': ['GMC', 'Mazda', 'Tesla', 'Honda', 'Nissan', 'Subaru', 'Mazda', 'Buick'],
'Miles': [1111, 2222, 3333, 4444, 5555, 6666, 7777, 8888]})
# Display df
display(df)
</code></pre>
<p><strong>Goal</strong></p>
<p>I would like for the output to be a multi-index df using 'Name', 'Address', and 'State' that would look as follows:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>Address</th>
<th>State</th>
<th>Car</th>
<th>Miles</th>
</tr>
</thead>
<tbody>
<tr>
<td>Alice</td>
<td>123 A St</td>
<td>AZ</td>
<td>GMC</td>
<td>1111</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>Honda</td>
<td>4444</td>
</tr>
<tr>
<td>Alice</td>
<td>456 X St</td>
<td>AZ</td>
<td>Subaru</td>
<td>6666</td>
</tr>
<tr>
<td>Bob</td>
<td>123 B St</td>
<td>TX</td>
<td>Mazda</td>
<td>2222</td>
</tr>
<tr>
<td>Carol</td>
<td>123 C St</td>
<td>CA</td>
<td>Tesla</td>
<td>3333</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>Nissan</td>
<td>5555</td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td>Mazda</td>
<td>7777</td>
</tr>
<tr>
<td>Matt</td>
<td>123 M St</td>
<td>MA</td>
<td>Buick</td>
<td>8888</td>
</tr>
</tbody>
</table>
</div>
<p><strong>Code attempted</strong></p>
<p>I tried the following code, but it does not group all rows of data with the same multi-index values:</p>
<pre><code>df = df.set_index(keys=['Name', 'Address', 'State'])
</code></pre>
<p>Thanks for any help you can provide.</p>
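<p>For what it's worth, <code>set_index</code> keeps rows in their original order, so identical keys aren't adjacent and the display can't collapse them; sorting the index appears to be all that's missing:</p>

```python
import pandas as pd

df = pd.DataFrame({'Name': ['Alice', 'Bob', 'Carol', 'Alice', 'Carol', 'Alice', 'Carol', 'Matt'],
                   'Address': ['123 A St', '123 B St', '123 C St', '123 A St', '123 C St', '456 X St', '123 C St', '123 M St'],
                   'State': ['AZ', 'TX', 'CA', 'AZ', 'CA', 'AZ', 'CA', 'MA'],
                   'Car': ['GMC', 'Mazda', 'Tesla', 'Honda', 'Nissan', 'Subaru', 'Mazda', 'Buick'],
                   'Miles': [1111, 2222, 3333, 4444, 5555, 6666, 7777, 8888]})

# sort_index puts equal (Name, Address, State) keys next to each other,
# so the sparsified display blanks the repeated index labels
out = df.set_index(['Name', 'Address', 'State']).sort_index()
print(out.to_string(sparsify=True))
```

<p><code>sparsify=True</code> (the default) is what produces the blank cells under repeated index values in the rendered table.</p>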
| <python><pandas><dataframe><multi-index> | 2023-09-17 03:50:38 | 1 | 528 | BGG16 |
77,120,347 | 1,492,229 | extract alphanumeric from text in dataframe in Python | <p>I have a data frame that is called df and looks like this</p>
<pre><code>Text No
c0404079=0.00 34
c1444716<=0.00 45
1.0<c00226311 <= 0.00 36
c0001208 <= 0.00 32
0.00<c0243026<=2.00 85
c0036983 <= 0.00 55
c0036974=0.00 39
</code></pre>
<p>I want to create a new column in that df that is called "Code"</p>
<p>This code is the token in the first column that starts with the letter c and runs until the first non-alphanumeric character or the end of the line.</p>
<p>so the dataframe will be</p>
<pre><code>c0404079=0.00 34 c0404079
c1444716<=0.00 45 c1444716
1.0<c00226311 <= 0.00 36 c00226311
c0001208 <= 0.00 32 c0001208
0.00<c0243026<=2.00 85 c0243026
c0036983 <= 0.00 55 c0036983
c0036974=0.00 39 c0036974
</code></pre>
<p>Any idea how to do that?</p>
<p>I tried this but I did not get the right results</p>
<pre><code>df['Code'] = df['Text'].str.extract(r'c^(\d[^\W_]{5,})')
</code></pre>
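<p>For comparison, a much simpler pattern — assuming every code is a literal <code>c</code> followed only by digits — seems to produce the expected column (the <code>^</code> anchor in the attempted pattern pins the match to the start of the string, which fails for rows like <code>0.00<c0243026<=2.00</code>):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Text': ['c0404079=0.00', 'c1444716<=0.00', '1.0<c00226311 <= 0.00',
             'c0001208 <= 0.00', '0.00<c0243026<=2.00'],
    'No': [34, 45, 36, 32, 85],
})
# capture 'c' plus the following run of digits, anywhere in the string;
# the match stops automatically at the first non-digit character
df['Code'] = df['Text'].str.extract(r'(c\d+)')
```

<p>If codes may also contain letters after the leading <code>c</code>, <code>r'(c[0-9a-zA-Z]+)'</code> would be the alphanumeric variant.</p>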
| <python><dataframe><text> | 2023-09-17 03:11:58 | 1 | 8,150 | asmgx |
77,120,024 | 2,587,816 | How to force a python float to not use exponents | <p>I have some code that reads numbers into a floating-point Numpy array (from a text format), multiplies the values by 1000.0, and later prints the array.</p>
<p>I format the array for printing like so, where <code>k</code> is a string identifying the array and <code>v</code> is the array itself:</p>
<pre><code>out = f'{k.strip()} = ' + \
np.array2string(v,
prefix=f'{k} = ',
precision=5,
max_line_width=200,
floatmode='maxprec'
) + '\n'
</code></pre>
<p>I would like to see results that look like:</p>
<pre><code>New.vals = -80. -75. -70. -65. -60. -55. -50.
</code></pre>
<p>But usually I get a result using scientific notation:</p>
<pre><code>Typical.vals = -8.00E+01 -7.50E+01 -7.00E+01 -6.50E+01 -6.00E+01 -5.50E+01 -5.00E+01
</code></pre>
<p>How can I enforce formatting without scientific notation?</p>
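<p>A sketch of one option: passing a per-element <code>formatter</code> to <code>array2string</code> bypasses NumPy's scientific-notation heuristics entirely (the <code>%g</code>-style format here is just an example choice):</p>

```python
import numpy as np

v = np.array([-80.0, -75.0, -70.0, -65.0, -60.0, -55.0, -50.0])

# the callback decides the text for every float, so no exponent form appears
out = np.array2string(v,
                      formatter={'float_kind': lambda x: f'{x:.5g}'},
                      max_line_width=200)
print(out)
```

<p>For plain <code>print(arr)</code> there is also the global switch <code>np.set_printoptions(suppress=True)</code>, which forces fixed-point output.</p>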
| <python><numpy><floating-point><exponentiation> | 2023-09-17 00:04:55 | 1 | 5,170 | Jiminion |
77,119,982 | 11,091,255 | Difficulty getting filtered dmesg output using subprocess with Python | <p>I've been trying to get the log of unplugged USB devices from dmesg from within a Python script. At the command line, I would type:
<code>dmesg | tail -100 | grep "USB disconnect"</code> and see the desired output. Within the script I have been using the subprocess.run command but it seems to have an issue with the pipe redirection, since if I simply use "dmesg" as the subprocess.run argument I do indeed get the entire giant log. Here is an example attempt and its output, along with an attempt to do the same with subprocess.call :</p>
<pre><code>import subprocess
args = ["dmesg | tail -100 | grep \042USB disconnect\042"]
grep_rpt = str(subprocess.call(args[0x00], shell = True))
print(grep_rpt + ", " + str(len(grep_rpt)))
args = ["dmesg", "|", "tail", "-100", "|", "grep", "\042USB disconnect\042"]
shellCmd = subprocess.run(args, capture_output = True, text = True)
grep_rpt = str(shellCmd.stdout)
print(grep_rpt + ", " + str(len(grep_rpt)))
</code></pre>
<p>And the output :</p>
<pre><code>[ 4197.327275] usb 1-1.3: USB disconnect, device number 19
[ 4329.665928] usb 1-1.3: USB disconnect, device number 20
[ 4645.282024] usb 1-1.3: USB disconnect, device number 21
[ 4729.497491] usb 1-1.3: USB disconnect, device number 22
[ 4961.154086] usb 1-1.3: USB disconnect, device number 23
[ 5090.165051] usb 1-1.3: USB disconnect, device number 24
[ 5181.803769] usb 1-1.3: USB disconnect, device number 25
[ 5620.543437] usb 1-1.3: USB disconnect, device number 26
[ 5831.722098] usb 1-1.3: USB disconnect, device number 27
[ 7350.416575] usb 1-1.3: USB disconnect, device number 28
[ 7507.328715] usb 1-1.3: USB disconnect, device number 29
0, 1
, 0
</code></pre>
<p>This output is a bit deceiving, since the list of devices is the result of the actions of subprocess.call, this is always going out to the screen but is not being sent back as an object, since as you can see I get "0" back as the string. I tried doing <code>capture_output = True</code> to stop this but it appears to not be one of the call() options even though it is for run(). And you can see that run() is returning None. Again, if I just do "dmesg" without the pipes everything is fine. I did try <code>shlex</code> on the arguments for run() but it didn't help. I have also successfully used run() to get data from the i2c bus so I know I am handling the format for multiple arguments correctly. What am I missing here?</p>
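<p>The core issue is that <code>run()</code> with a list of arguments treats <code>"|"</code> as a literal argument to <code>dmesg</code>; pipes only exist when a shell interprets the command string. A sketch combining <code>shell=True</code> with <code>capture_output</code> (using <code>printf</code> as a stand-in for <code>dmesg</code>, which may require privileges):</p>

```python
import subprocess

# shell=True hands the whole string to /bin/sh, which implements the pipes;
# capture_output=True is supported by run() (it is not an option of call())
cmd = "printf 'line one\\nUSB disconnect\\n' | grep 'USB disconnect'"
result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
print(result.stdout)
```

<p>So <code>subprocess.run("dmesg | tail -100 | grep 'USB disconnect'", shell=True, capture_output=True, text=True).stdout</code> should return the filtered text; alternatively the filtering can be done in Python on the full <code>dmesg</code> output, avoiding the shell entirely.</p>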
| <python><linux><subprocess><dmesg> | 2023-09-16 23:35:11 | 1 | 305 | Christopher Theriault |
77,119,956 | 997,406 | Improve map plot resolution | <p>I'm trying to plot an image of a map, using matplotlib. On top of it I'm looking to plot a hiking route using a dataframe of Longitude/Latitude/Elevation. The image resolution is not good and I'm trying to improve it. However, when I try to add more map tiles it doesn't get any better. I would like it to be of a resolution that looks good when printing a photo. I'd like the words on the map to be clear/sharp. Any help would be appreciated.</p>
<p>Here is the code that creates the image:</p>
<pre><code>import cartopy.crs as ccrs
import matplotlib.pyplot as plt
import cartopy.io.img_tiles as cimgt
resolution = 0.03
minLong = -119.5665
maxLong = -119.489072
minLat = 37.725664
maxLat = 37.767836
fig = plt.figure(figsize=(20, 20))
ax = fig.add_subplot(1, 1, 1, projection=ccrs.PlateCarree())
osm_tiles = cimgt.OSM()
ax.set_extent([minLong- resolution,
maxLong + resolution,
minLat - resolution,
maxLat + resolution])
ax.add_image(osm_tiles, 13) # 13 is the zoom level, adjust as needed for more detail
xLat = 37.730172
xLong = -119.557938
ax.scatter(xLong, xLat, color='black',
marker='x', s=500)
fig.savefig('output_photos/%s_StackExchange_mapPhoto.jpg'%(imgName),dpi=400)  # imgName is defined elsewhere in my script
</code></pre>
<p><a href="https://i.sstatic.net/cwlxw.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cwlxw.jpg" alt="matplotlibImage" /></a></p>
| <python><plot> | 2023-09-16 23:22:47 | 1 | 407 | gus |
77,119,827 | 1,691,278 | Speed up an O(n) algorithm: should I use pointers? | <p>I stumbled upon the following problem:</p>
<blockquote>
<p>Given an ordered list of cards with integers on them (e.g. <code>[1,24,3,4,5]</code>), two
players take turns to take cards. Player 1 goes first. Whenever either
player takes a card with an even number, they reverse the order of the
remaining cards, and the game continues. When all cards are taken,
they compute their sums. The player with the higher sum wins. If
there's a tie, player 1 wins. Write a program, given the sequence of
cards, to output the winner.</p>
</blockquote>
<p>I use pointers to finish this problem. Create two pointers <code>i = 0</code> and <code>j = n-1</code>, where <code>n</code> is the total number of cards. Create another pointer <code>k</code> to keep track of where you are. Whenever I encounter an even card, I just switch between <code>i</code> and <code>j</code>.</p>
<p>I wrote a function that works, but it takes O(n), because I do need to traverse the entire list. Is there a quicker way to do this?</p>
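<p>For reference, a sketch of the two-pointer simulation described (the function name and return convention are illustrative):</p>

```python
def winner(cards):
    """Simulate the game; return 1 or 2 for the winning player."""
    i, j = 0, len(cards) - 1
    direction = 1            # 1: take from the front (i), -1: from the back (j)
    sums = [0, 0]
    turn = 0                 # 0 = player 1, 1 = player 2
    for _ in range(len(cards)):
        if direction == 1:
            card = cards[i]; i += 1
        else:
            card = cards[j]; j -= 1
        sums[turn] += card
        if card % 2 == 0:    # even card: reversing the rest == swapping ends
            direction = -direction
        turn ^= 1
    return 1 if sums[0] >= sums[1] else 2
```

<p>As for doing better than O(n): since every card's value contributes to one of the two sums, any correct algorithm has to read all n values at least once, so O(n) appears to be a lower bound here. The pointer trick's real saving is avoiding the O(n) cost of physically reversing the list on every even card, which would make a naive simulation O(n²).</p>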
| <python><performance> | 2023-09-16 22:22:34 | 1 | 1,905 | user1691278 |
77,119,680 | 3,259,222 | Split array to multiple arrays using a collection of boolean masks | <p>Given a data array of shape (m,n) and array of bool masks of the same shape. Is there a way to apply the following routine without a loop:</p>
<ol>
<li>Apply each column of the masks array to the data array obtaining a sub-array</li>
<li>Store all sub-arrays</li>
</ol>
<p>Here is a loop solution</p>
<pre><code>import numpy as np
arr = np.array([
[1,2,3],
[np.nan,5,6]
])
masks = np.array([
[True, True, True],
[False, True, True]
])
arrays = []
for i in range(masks.shape[1]):
mask_i = masks[:, i]
arr_i = arr[mask_i, :]
arrays.append(arr_i)
</code></pre>
<p>Explored potential solutions:</p>
<ol>
<li><code>np.take</code> Doesn't work because the <code>indices</code> param requires homogeneous shape, while in some cases we can take fewer and in other more rows.</li>
<li>Fancy indexing <code>arr[masks,:]</code> doesn't work with higher dimensional masks</li>
<li><code>np.split</code> Doesn't work because it can split only to complete non-overlapping sub-arrays</li>
</ol>
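<p>One observation, sketched below: because each mask column can keep a different number of rows, the result is inherently ragged, and NumPy has no single call that returns a ragged collection — so a Python-level comprehension over the columns is essentially the loop, just terser. The per-column boolean indexing itself still runs at C speed; only the outer iteration is Python.</p>

```python
import numpy as np

arr = np.array([[1, 2, 3],
                [np.nan, 5, 6]])
masks = np.array([[True, True, True],
                  [False, True, True]])

# iterating over masks.T yields one boolean column at a time
arrays = [arr[col] for col in masks.T]
```

<p>If the masks happened to all select the same number of rows, <code>np.stack(arrays)</code> could then turn the list into a single 3-D array.</p>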
| <python><numpy> | 2023-09-16 21:32:03 | 1 | 431 | Konstantin |
77,119,678 | 12,086,248 | Groupby 3 column values and find the 3rd column count in descending order and sum up the count values by ignoring the first count value in python | <p>I am trying to do a groupby on 3 column values and want to find the following from a Python dataframe:</p>
<ol>
<li>3rd column value counts in descending order</li>
<li>the sum of the count values, ignoring the first count value</li>
</ol>
<p>I have tried with the below 2 lines of code, but not getting the desired results, can anyone help me out in this! Thanks in advance!</p>
<pre><code>df.groupby(['Gender', 'Area Code', 'Population'])["Population"].count()
df.groupby(['Gender', 'Area Code', 'Population'])["Population"].count().reset_index(name='count').sort_values(['count'], ascending=False)
</code></pre>
<p><strong>df:</strong></p>
<pre><code> Gender Area Code Population
Male RJ12 650
Male RJ12 650
Male RJ12 200
Male DL25 230
Male DL25 230
Male MH02 550
Male MH02 230
Male MH02 550
Male MH02 550
Male MH02 740
Female DL25 230
Female DL25 430
Female RJ07 850
Female RJ07 950
Female RJ07 950
Female RJ07 450
Female RJ07 950
Female RJ07 450
</code></pre>
<p><strong>Expected Result 1:</strong></p>
<p><a href="https://i.sstatic.net/GdHjH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GdHjH.png" alt="enter image description here" /></a></p>
<p><strong>Expected Result 2:</strong> <em><strong>(Ignoring the first value of count highlighted in brown color in the above image)</strong></em></p>
<p><a href="https://i.sstatic.net/81sAz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/81sAz.png" alt="enter image description here" /></a></p>
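<p>One sketch of both results — note that grouping by <code>Population</code> while also counting <code>Population</code> double-counts the key, which is why <code>value_counts</code> on the grouped column seems closer to the screenshots (exact column names below are guesses, since the expected results are images):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Gender': ['Male'] * 10 + ['Female'] * 8,
    'Area Code': ['RJ12'] * 3 + ['DL25'] * 2 + ['MH02'] * 5
                 + ['DL25'] * 2 + ['RJ07'] * 6,
    'Population': [650, 650, 200, 230, 230, 550, 230, 550, 550, 740,
                   230, 430, 850, 950, 950, 450, 950, 450],
})

# result 1: per-(Gender, Area Code) value counts, largest count first
counts = (df.groupby(['Gender', 'Area Code'])['Population']
            .value_counts()
            .rename('count')
            .reset_index()
            .sort_values(['Gender', 'Area Code', 'count'],
                         ascending=[True, True, False]))

# result 2: within each (Gender, Area Code), sum the counts after
# dropping the first (largest) one
rest = (counts.groupby(['Gender', 'Area Code'])['count']
              .apply(lambda s: s.iloc[1:].sum()))
```

<p>The explicit <code>sort_values</code> guarantees the largest count is first within every group, so <code>iloc[1:]</code> reliably skips it.</p>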
| <python><pandas><dataframe> | 2023-09-16 21:31:01 | 1 | 433 | Goutam |
77,119,653 | 1,862,861 | Replace $ signs bounding equations within opening and closing parentheses | <p>I have a load of inline LaTeX-style maths strings in Python, e.g., <code>eq = "$E = mc^2$"</code> or <code>eq = "$F = ma$ (N)"</code> that I am writing out into a webpage that uses MathJAX. I therefore want to convert the opening and closing <code>$</code> symbols into the default MathJAX style of opening with <code>\(</code> and closing with <code>\)</code>, so that my two above examples become <code>eqmathjax = "\(E = mc^2\)</code> and <code>eqmathjax = "\(F = ma\) (N)"</code>. Note that a string may actually contain more than one equation, each bounded by its own pair of <code>$</code> signs.</p>
<p>I could easily iterate over replacements as required, e.g,:</p>
<pre class="lang-py prettyprint-override"><code>twoeqns = "$E = mc^2$ and $F = ma$"
i = 0
repl = ["\(", "\)"]
while "$" in twoeqns:
twoeqns = twoeqns.replace("$", repl[i % 2], 1)
i += 1
</code></pre>
<p>but I'm sure there must be a simpler one-liner way to do this, e.g., with regular expressions (I know I could have the above solution and a function and use it as a one-liner). Does anyone have that simpler method?</p>
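<p>A one-liner along these lines seems to do it — match a non-greedy <code>$…$</code> pair and re-wrap the captured body:</p>

```python
import re

s = "$E = mc^2$ and $F = ma$ (N)"
# each $...$ pair (no $ inside) becomes \(...\)
out = re.sub(r'\$([^$]*)\$', r'\\(\1\\)', s)
print(out)  # \(E = mc^2\) and \(F = ma\) (N)
```

<p>Unlike the alternating replace, this leaves a lone unpaired <code>$</code> (e.g. a literal price) untouched, since the pattern requires both delimiters — which may or may not be the behaviour you want.</p>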
| <python><regex> | 2023-09-16 21:21:38 | 2 | 7,300 | Matt Pitkin |
77,119,489 | 13,620,323 | ML Model: ValueError: Incompatible dimension for X and Y matrices: X.shape[1] == 150528 while Y.shape[1] == 512 | <p>I am trying to train a model on some images, and the model should then detect whether a new image is similar to the trained images or not. The training part seems to work and generates a .npy file. The similarity-calculation script then loads that file and tries to compute the similarity. Here is the code for that:</p>
<pre><code>import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from PIL import Image
import torchvision.transforms as transforms
import torch
import os
# Load the pre-trained features
features = np.load('features.npy')
# Define a function to calculate cosine similarity between an image and features
def calculate_similarity(image, features, transformer):
# Apply the same transformations used for training data
image = transformer(image)
# Extract features from the input image
image = image.unsqueeze(0) # Add batch dimension
# Flatten both the image and features
image_flat = image.view(1, -1)
features_flat = features.reshape(features.shape[0], -1)
# Now calculate cosine similarity
similarity = cosine_similarity(image_flat, features_flat)
return similarity
# Define a function to check if an image is similar to the training set
def is_similar(image, features, transformer, threshold=0.7):
similarity = calculate_similarity(image, features, transformer)
return similarity.max() >= threshold
# Example usage for processing images in a directory
if __name__ == '__main__':
# Define the transformation pipeline for both training and test data
transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
# Directory containing the images
image_dir = 'data/train'
# List all files in the directory
image_files = [os.path.join(image_dir, f) for f in os.listdir(image_dir) if f.endswith('.jpeg')]
for image_path in image_files:
image = Image.open(image_path)
is_similar_image = is_similar(image, features, transform)
if is_similar_image:
print(f"The image {image_path} is similar to the training set.")
else:
print(f"The image {image_path} is not similar to the training set.")
</code></pre>
<p>So the above code is for the calculating similarity when I run this it gives the following error</p>
<p><strong>ValueError: Incompatible dimension for X and Y matrices: X.shape[1] == 150528 while Y.shape[1] == 512</strong></p>
<p>In case it's needed, here's the file which trains the ML model:</p>
<pre><code>import torch
import torch.nn as nn
import torchvision.transforms as transforms
from torchvision import models
from torch.utils.data import DataLoader, Dataset
import os
from PIL import Image
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
# Define the custom dataset for image loading
class ImageDataset(Dataset):
def __init__(self, root_dir, transform=None):
self.root_dir = root_dir
self.transform = transform
self.image_filenames = os.listdir(root_dir)
def __len__(self):
return len(self.image_filenames)
def __getitem__(self, idx):
img_name = os.path.join(self.root_dir, self.image_filenames[idx])
image = Image.open(img_name)
if self.transform:
image = self.transform(image)
return image
# Set the data directory containing your training images
data_dir = 'data/train'
# Define data transformations for the model
transform = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
# Create a custom dataset and data loader
dataset = ImageDataset(data_dir, transform=transform)
dataloader = DataLoader(dataset, batch_size=64, shuffle=True)
# Load a pre-trained ResNet model
model = models.resnet18(pretrained=True)
model = nn.Sequential(*list(model.children())[:-1]) # Remove the final classification layer
# Extract features from images using the pre-trained model
model.eval()
features = []
with torch.no_grad():
for inputs in dataloader:
inputs = inputs.to('cuda') if torch.cuda.is_available() else inputs
outputs = model(inputs)
features.extend(outputs.cpu().numpy())
# Save the extracted features to a file
import numpy as np
features = np.array(features)
np.save('features.npy', features)
</code></pre>
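<p>For context on the shapes in the error: 150528 = 224·224·3 is the flattened <em>raw image</em>, while 512 is the ResNet-18 feature width stored in <code>features.npy</code> — so the query image apparently needs to pass through the same truncated model before the comparison, e.g. something like <code>model(image).view(1, -1)</code> under <code>torch.no_grad()</code> instead of <code>image.view(1, -1)</code> (not run here). A torch-free sketch of the shape contract that <code>cosine_similarity</code> enforces:</p>

```python
import numpy as np

def cosine_sim(query, bank):
    # query: (1, d), bank: (n, d) -- the trailing dimension d must match
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    b = bank / np.linalg.norm(bank, axis=1, keepdims=True)
    return q @ b.T

rng = np.random.default_rng(0)
bank = rng.normal(size=(10, 512))   # stands in for the saved features.npy
query = bank[:1].copy()             # a 512-d feature vector, not raw pixels
sims = cosine_sim(query, bank)
```

<p>Comparing a 150528-d vector against 512-d vectors can never work, regardless of the similarity metric — both sides have to live in the same feature space.</p>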
| <python><machine-learning><pytorch> | 2023-09-16 20:26:21 | 0 | 538 | Mohammed Abid Nafi |
77,119,311 | 3,810,748 | Why does timezone unaware datetime object print out in local timezone? | <p>When I instantiate a datetime object like this:</p>
<pre><code>>>> print(datetime.datetime.now())
2023-09-16 10:30:15.50
</code></pre>
<p>I get the correct local time, which is currently around <code>10:30 AM</code>.</p>
<p>Why is it like this? Shouldn't it default to UTC time or something?</p>
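<p>In short, the naive object just records the local wall clock with <code>tzinfo=None</code> — there is no zone to print. If UTC (or any aware value) is wanted, it has to be requested explicitly:</p>

```python
from datetime import datetime, timezone

naive = datetime.now()               # local wall-clock time; tzinfo is None
aware = datetime.now(timezone.utc)   # timezone-aware UTC, asked for explicitly
print(naive.tzinfo, aware.tzinfo)
```

<p>Printing <code>aware</code> includes the <code>+00:00</code> offset, which is how you can tell an aware datetime from a naive one at a glance.</p>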
| <python><datetime><timezone><utc> | 2023-09-16 19:31:45 | 1 | 6,155 | AlanSTACK |
77,119,159 | 3,090,114 | Execute an excel macro from python in unix environment | <p>Can anyone share a code snippet to execute an Excel macro using Python code in a Unix environment?</p>
<p>I tried using xlwings, but that isn't supported in a Unix environment.</p>
<p>Use case: I have a pivot-table report in Excel with a filter, and using a macro we want to change the filter value.</p>
| <python><excel><unix> | 2023-09-16 18:53:05 | 1 | 1,491 | Koushik Chandra |
77,119,100 | 8,831,742 | Label structure for a keras model with multiple outputs | <p>I have a keras model that inputs a simple array and outputs two values (x and y) that belong to 5 possible categories (encoded as one-hot), with a custom loss function. I know that you have to set the loss function for each desired output value, which I did in my script.</p>
<p>I initialized the model like this:</p>
<pre class="lang-py prettyprint-override"><code>inputs = keras.Input(shape=(14))
middle = layers.Dense(10,activation="relu")(inputs)
out_x = layers.Dense(5,activation="sigmoid")(middle)
out_y = layers.Dense(5,activation="sigmoid")(middle)
model = keras.Model(inputs=inputs,outputs={"x":out_x,"y":out_y})
model.compile(optimizer="adam",loss={"x":custom_loss,"y":custom_loss},metrics=["accuracy"])
</code></pre>
<p>I then tried to make an array of input data and labels. The labels were laid out as such:</p>
<pre><code>[
{"x":[0,0,1,0,0],"y":[1,0,0,0,0]},
...
]
</code></pre>
<p>but when I tried to use <code>model.fit(training_data,labels)</code> it gave me an error that was several hundreds of repetitions of the number 5 and then <code>Make sure all arrays contain the same number of samples.</code></p>
<p>What should my labels look like if I want my model to have multiple outputs?</p>
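<p>For a model compiled with dict outputs, Keras expects <code>y</code> to mirror that dict: <em>one</em> dict mapping each output name to an array whose first axis is the sample axis — not a list of per-sample dicts (which is why every sample reported length 5). A sketch of the expected layout (shapes and random labels are illustrative):</p>

```python
import numpy as np

n_samples = 8
training_data = np.random.rand(n_samples, 14).astype('float32')

# one dict of arrays keyed by the output names -- not a list of dicts
labels = {
    'x': np.eye(5)[np.random.randint(0, 5, n_samples)],  # (n_samples, 5) one-hot
    'y': np.eye(5)[np.random.randint(0, 5, n_samples)],
}
```

<p>With the model above, <code>model.fit(training_data, labels)</code> should then see matching sample counts on every output (not run here).</p>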
| <python><tensorflow><machine-learning><keras> | 2023-09-16 18:38:07 | 1 | 353 | none none |
77,119,048 | 386,861 | How is it possible to turn an jupyter notebook that uses altair, pandas and ipywidget into workable .py that i can throw into an exist page? | <p>I learned python through IPython Notebooks and scrambled together but I want to know if it is possible to take the notebook and sort of paste it it with its own route? Or can I upload the notebook into pythonanywhere and create a separate app that way?</p>
| <python><pandas><altair><pythonanywhere> | 2023-09-16 18:23:12 | 0 | 7,882 | elksie5000 |
77,119,031 | 540,665 | numpy behaves differently on a normal Jupiter notebook and one from a visual studio venv | <p>I am running a web version of a Jupyter notebook from a course material, and this code cell runs through successfully:</p>
<p><a href="https://i.sstatic.net/vN2kl.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vN2kl.jpg" alt="enter image description here" /></a></p>
<p>But when I ran it from a VS CODE virtual environment, I get the below error. Please help with the error.</p>
<p><a href="https://i.sstatic.net/JZF9c.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JZF9c.png" alt="enter image description here" /></a></p>
| <python><numpy><jupyter-notebook><jupyter-console> | 2023-09-16 18:18:14 | 1 | 2,398 | dig_123 |
77,118,839 | 758,174 | tqdm.notebook horizontal alignment of multiple bars | <p>I'd like to know if there is a way to align multiple progress bars in a Jupyter notebook, like they look <a href="https://github.com/tqdm/tqdm#ipythonjupyter-integration" rel="nofollow noreferrer">in the docs</a>.</p>
<p>By contrast, my experience is that the horizontal position of the bar varies through the iterations (as the progress percentage changes) and based on the width of the description.</p>
<p>For example, just trying the snippet from the docs:</p>
<pre class="lang-py prettyprint-override"><code>from tqdm.notebook import trange, tqdm
from time import sleep
for i in trange(3, desc='1st loop'):
for j in tqdm(range(100), desc='2nd loop'):
sleep(0.01)
</code></pre>
<p>What I get is unaligned bars:</p>
<img src="https://i.sstatic.net/LmXTs.png" width="600">
<p>Instead, in the docs all the bars are neatly aligned, no matter the description width (in pixels) or the current percentage through the iteration:</p>
<img src="https://i.sstatic.net/8VXi0.png" width="600">
<p>(However, note the slight differences of import between the image in the docs and the code snippet they give; for example, <code>tqdm.tnrange</code> is deprecated).</p>
<p>I tried using <code>bar_format</code> e.g. with <code>'{desc:<20}{percentage:3.0f}%|{bar}{r_bar}'</code> or using <code>{l_bar:>20}...</code>, but no luck so far.</p>
<p>Versions:</p>
<pre><code>ipython=8.15.0
ipywidgets=8.0.4
jupyter_client=8.1.0
jupyterlab=4.0.6
jupyterlab_widgets=3.0.9
python=3.11.5
tqdm=4.65.0
</code></pre>
| <python><jupyter-notebook><tqdm> | 2023-09-16 17:24:09 | 0 | 26,554 | Pierre D |
77,118,831 | 722,209 | Table created with alembic migration using sqlite memory cannot be found | <p>I'm creating a table with an alembic migration using an in-memory sqlite connection. The migration is successful, but when I try to load the table with sqlalchemy, it isn't found.</p>
<p>This does not happen with I use a file for sqlite. I'm wondering if the alembic migration is really a separate process call that uses different memory from the app.</p>
<pre><code>import sqlalchemy as db
DB_CONNECTION_URL = 'sqlite:///'
engine = db.create_engine(DB_CONNECTION_URL)
connection = engine.connect()
metadata = db.MetaData()
from alembic.config import Config
import alembic.command
config = Config('alembic.ini')
config.set_main_option('sqlalchemy.url', DB_CONNECTION_URL)
config.attributes['connection'] = connection
alembic.command.upgrade(config, 'head')
print('upgrade completed')
print(metadata.tables.keys())
tasks = db.Table('tasks', metadata, autoload=True, autoload_with=engine)
</code></pre>
<p>The print statement for the tables will be an empty array, and the attempt to load the table will result in a table-not-found error.</p>
<p>Any ideas?</p>
<p>Thanks!</p>
| <python><sqlite><sqlalchemy><alembic> | 2023-09-16 17:21:49 | 0 | 2,976 | Tyrick |
77,118,564 | 4,575,197 | How to pass a Dataframe as train dataframe and another dataframe as Validation to GridSearchCV | <p>I'm a programmer trying to find his way into the ML world, so the question might be basic.</p>
<p>I have data from the years 2010–2019. I'm trying to test different parameters for gradient boosting regression, and I want to use 60% for training, 20% for validation and 20% for testing. Due to the nature of the question I'm trying to answer, I have already split the data into <code>train_df</code> (2010–2014), <code>evaluate_df</code> (2015–2017) and <code>test_df</code> (2018–2019).</p>
<p>The model should be trained on <code>train_df</code> and evaluated on <code>evaluate_df</code>; finally I use the best model on the test dataframe <code>test_df</code>.</p>
<p>This is my code:</p>
<pre><code>p_test3 = {'learning_rate':[0.1,0.05,0.01,0.005], 'n_estimators':[500,750,1000,1250,1500]}
tuning = GridSearchCV(estimator =GradientBoostingRegressor( min_samples_split=2, min_samples_leaf=1, subsample=1,max_features='sqrt', random_state=10),
param_grid = p_test3, scoring='r2',n_jobs=-1, cv=evaluate_df)
tuning.fit(train_df[[col1]],train_df['col2'])
tuning.cv_results_, tuning.best_params_, tuning.best_score_
</code></pre>
<p>but i got this error:</p>
<blockquote>
<p>ValueError: too many values to unpack (expected 2)</p>
</blockquote>
<p>How can I test the GridSearchCV model on a separate validation dataframe?</p>
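<p>The error comes from <code>cv=evaluate_df</code>: <code>cv</code> expects an integer, a splitter object, or an iterable of <code>(train_idx, test_idx)</code> pairs, not a dataframe. One sketch of the fixed-validation-set approach is to concatenate the train and validation frames and mark each row with <code>PredefinedSplit</code> (the synthetic data below just stands in for the real frames):</p>

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, PredefinedSplit

# stand-in data: first 60 rows play the role of train_df, last 40 of evaluate_df
rng = np.random.RandomState(0)
X = rng.rand(100, 3)
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(scale=0.01, size=100)

# -1 => row is always in the training fold, 0 => row is in validation fold 0
test_fold = np.r_[np.full(60, -1), np.full(40, 0)]
cv = PredefinedSplit(test_fold)

search = GridSearchCV(GradientBoostingRegressor(random_state=10),
                      param_grid={'n_estimators': [50, 100]},
                      scoring='r2', cv=cv)
search.fit(X, y)
```

<p>With the real data, <code>X</code> would be <code>pd.concat([train_df, evaluate_df])</code> and <code>test_fold</code> built from the two row counts; equivalently, <code>cv=[(train_idx, val_idx)]</code> with explicit index arrays also works. The best estimator can then be refit and applied to <code>test_df</code>.</p>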
| <python><pandas><machine-learning><evaluation><gridsearchcv> | 2023-09-16 16:07:41 | 1 | 10,490 | Mostafa Bouzari |
77,118,099 | 12,275,675 | TypeError: Updater.__init__() got an unexpected keyword argument 'token' | <p>I have this code, which is supposed to look for the /help command in Telegram. Once you type /help in the Telegram channel, it will give you options. The code is as follows:</p>
<pre><code>from telegram import Update
from telegram.ext import Updater, CommandHandler, MessageHandler, CallbackContext
from telegram.ext import filters
# Define your bot token here
TOKEN = "YOUR_BOT_TOKEN"
def start(update, context):
update.message.reply_text("Welcome to your Telegram bot!")
def help_command(update, context):
update.message.reply_text("You requested help. Here are some available commands:\n"
"/help - Show this help message\n"
"/start - Start the bot")
def handle_message(update, context):
text = update.message.text
if text == '/start':
start(update, context)
elif text == '/help':
help_command(update, context)
def main():
# Initialize the Updater with your bot token
updater = Updater(token=TOKEN, use_context=True)
dispatcher = updater.dispatcher
# Define the command handlers
dispatcher.add_handler(CommandHandler("start", start))
dispatcher.add_handler(CommandHandler("help", help_command))
# Handle non-command messages using a filter
dispatcher.add_handler(MessageHandler(Filters.text & ~Filters.command, handle_message))
# Start the bot
updater.start_polling()
updater.idle()
if __name__ == '__main__':
main()
</code></pre>
<p>However I am getting this error</p>
<pre><code>TypeError: Updater.__init__() got an unexpected keyword argument 'token'
</code></pre>
<p>Could you please advise how I can resolve this error.</p>
| <python><bots><telegram> | 2023-09-16 14:05:57 | 3 | 1,220 | Slartibartfast |
77,118,088 | 9,940,188 | How to build a self-referential associative data structure? | <p>I'm trying to build a hierarchical structure consisting of Nodes where each Node is linked both to several "up" nodes as well as "down" nodes. However, when I run the code below, I get the error:</p>
<pre><code>sqlalchemy.exc.AmbiguousForeignKeysError: Could not determine join condition
between parent/child tables on relationship Node.links_up - there are
multiple foreign key paths linking the tables. Specify the 'foreign_keys' argument,
providing a list of those columns which should be counted as containing a foreign key
reference to the parent table.
</code></pre>
<p>"Specify the 'foreign_keys' argument" ... but that's what I'm doing in the definition of Link, why isn't it honored? I can't do it in Node, because Node doesn't have any foreign keys.</p>
<p>This is already the watered-down version, I tried and completely failed to build something with "secondary joins" so that I could have relationships Node.nodes_up and Node.nodes_down, but that seems even more hopeless. Any hints on how to pull this off?</p>
<p>The documentation has examples of using association objects as well as adjacency relationships, but not both together.</p>
<pre><code>from sqlalchemy import Column, Integer, String, ForeignKey, create_engine
from sqlalchemy.orm import sessionmaker, relationship
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class Node(Base):
__tablename__ = 'node'
id = Column(Integer, primary_key=True)
name = Column(String(100))
links_up = relationship('Link', back_populates='node_down')
links_down = relationship('Link', back_populates='node_up')
class Link(Base):
__tablename__ = 'link'
up_id = Column(ForeignKey('node.id'), primary_key=True)
down_id = Column(ForeignKey('node.id'), primary_key=True)
node_up = relationship('Node',
foreign_keys=[up_id],
back_populates='links_down')
node_down = relationship('Node',
foreign_keys=[down_id],
back_populates='links_up')
engine = create_engine('sqlite:///test.db')
Base.metadata.create_all(engine)
Session = sessionmaker(engine)
db = Session()
r = Node(name='Parent')
db.add(r)
db.commit()
</code></pre>
<p>I saw <a href="https://stackoverflow.com/questions/25958963/self-referential-association-relationship-sqlalchemy">this</a> but it uses backref() which I (and sqlalchemy as of late) don't like. It should be possible with back_populates as well.</p>
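<p>The <code>foreign_keys</code> on <code>Link</code> configures only the <code>Link</code>-side relationships; <code>Node.links_up</code> and <code>Node.links_down</code> each also see two foreign-key paths into <code>link</code> and need their own disambiguation. Since the columns live on <code>Link</code>, string references appear to work — one configuration that seems to resolve cleanly, with <code>back_populates</code> throughout:</p>

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class Node(Base):
    __tablename__ = 'node'
    id = Column(Integer, primary_key=True)
    name = Column(String(100))
    # Node has no FK columns itself, but it can still name which of
    # Link's two FKs each relationship should follow:
    links_up = relationship('Link', foreign_keys='Link.down_id',
                            back_populates='node_down')
    links_down = relationship('Link', foreign_keys='Link.up_id',
                              back_populates='node_up')

class Link(Base):
    __tablename__ = 'link'
    up_id = Column(ForeignKey('node.id'), primary_key=True)
    down_id = Column(ForeignKey('node.id'), primary_key=True)
    node_up = relationship('Node', foreign_keys=[up_id],
                           back_populates='links_down')
    node_down = relationship('Node', foreign_keys=[down_id],
                             back_populates='links_up')

engine = create_engine('sqlite://')
Base.metadata.create_all(engine)
db = sessionmaker(engine)()
parent, child = Node(name='Parent'), Node(name='Child')
db.add_all([parent, child])
db.flush()                      # assigns primary keys
db.add(Link(up_id=parent.id, down_id=child.id))
db.commit()
```

<p>With this in place, a secondary-join shortcut like <code>Node.nodes_down</code> can optionally be layered on via <code>relationship('Node', secondary='link', primaryjoin=..., secondaryjoin=...)</code>, but the association-object form above is already fully navigable.</p>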
| <python><sqlalchemy> | 2023-09-16 14:04:10 | 0 | 679 | musbur |
77,117,924 | 11,830,394 | Error while trying to install mmengine (ImportError: cannot import name 'six' from 'pkg_resources.extern') | <p>Error found with Python 3.8 and pip 20.0.2.</p>
<p>Following mmsegmentation install instructions, when running the command</p>
<pre><code>mim install mmengine
</code></pre>
<p>I was getting the following error:</p>
<pre><code>Traceback (most recent call last):
File "<frozen importlib._bootstrap>", line 602, in _exec
File "<frozen zipimport>", line 259, in load_module
File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 57, in <module>
ImportError: cannot import name 'six' from 'pkg_resources.extern' (/usr/local/lib/python3.8/dist-packages/pkg_resources/extern/__init__.py)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/mim", line 8, in <module>
sys.exit(cli())
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.8/dist-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/mim/commands/install.py", line 72, in cli
exit_code = install(list(args), index_url=index_url, is_yes=is_yes)
File "/usr/local/lib/python3.8/dist-packages/mim/commands/install.py", line 108, in install
importlib.reload(pip._vendor.pkg_resources)
File "/usr/lib/python3.8/importlib/__init__.py", line 169, in reload
_bootstrap._exec(spec, module)
File "<frozen importlib._bootstrap>", line 608, in _exec
KeyError: 'pkg_resources'
</code></pre>
| <python> | 2023-09-16 13:15:29 | 1 | 331 | ASRodrigo |
77,117,575 | 9,391,359 | docker compose volumes empty folder | <p>inside my vvchatbot folder are</p>
<ul>
<li>app</li>
<li>configs</li>
<li>docker-compose.yaml</li>
</ul>
<p>docker compose</p>
<pre><code>version: '3'
services:
app:
container_name: app
restart: always
build:
context: ./app
args:
- LLAMA_2_ACCESS=${LLAMA_2_ACCESS}
- CHAT_MODEL=${CHAT_MODEL}
env_file: .env
ports:
- "8501:8501"
command: streamlit run app.py
volumes:
- app-configs:/app/configs # Mount the app-configs volume
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
....
# Define named volumes
volumes:
app-configs:
</code></pre>
<p>and this is example of my app Docker file</p>
<pre><code># Stage 1: Install Python 3.8 and essential dependencies
FROM nvidia/cuda:11.8.0-cudnn8-devel-ubuntu20.04 as builder
...
FROM builder as app_builder
WORKDIR /usr/src/app
COPY requirements.txt .
COPY . .
....
</code></pre>
<p>I simplified the structure, but the problem is that after I build the app using <code>docker-compose up --build -d</code>, the <code>configs</code> folder inside the app container exists but is empty.</p>
| <python><docker><docker-compose> | 2023-09-16 11:31:58 | 0 | 941 | Alex Nikitin |
77,117,483 | 839,733 | Iterator for k-combinations | <p><a href="https://leetcode.com/problems/combinations" rel="nofollow noreferrer">LeetCode 77. Combinations</a>:</p>
<blockquote>
<p>Given two integers n and k, return all possible combinations of k numbers chosen from the range [1, n].
You may return the answer in any order.</p>
</blockquote>
<p>My code using backtracking is given below.</p>
<pre><code>def combine(n: int, k: int) -> list[list[int]]:
def backtrack(i: int, comb: list[int]) -> None:
if len(comb) == k:
ans.append([*comb])
return
# Number of elements still needed to make a k-combination.
need = k - len(comb)
# The range of numbers is [i, n], therefore, size=n - i + 1
remaining = n - i + 1
# This is the last offset from which we can still make a k-combination.
offset = remaining - need
for j in range(i, i + offset + 1):
comb.append(j)
backtrack(j + 1, comb)
comb.pop()
ans: list[list[int]] = []
backtrack(1, [])
return ans
</code></pre>
<p>It works as expected.</p>
<p>Now, I'm looking at <a href="https://leetcode.com/problems/iterator-for-combination/" rel="nofollow noreferrer">LeetCode 1286. Iterator for Combination</a>, which basically asks for an <code>Iterator[list[int]]</code> or <code>Generator[list[int]]</code> instead of returning all the combinations at once.</p>
<blockquote>
<p>Technically, LeetCode 1286 works with strings, but for the sake of similarity to LeetCode 77, let's talk about integers only, since it makes no difference to the algorithm.</p>
</blockquote>
<p>Can the above code be converted to return an iterator? The reason I'd prefer starting with the above code and not something completely different is because of its simplicity, and to keep the two solutions similar as much as possible.</p>
<p><strong>My research efforts:</strong></p>
<p>I've studied the sample code for <a href="https://docs.python.org/3/library/itertools.html#itertools.combinations" rel="nofollow noreferrer">itertools.combinations</a>, but it works differently from my code above, and, IMO, is pretty convoluted since it uses some obscure/non-intuitive features such as <code>for-else</code> and loop variables referenced later (Python loop variables remain in scope even after the loop exits).</p>
<p>I've also looked at <a href="https://stackoverflow.com/q/127704/839733">Algorithm to return all combinations of k elements from n</a>, but due to the highly generic nature of the question (it doesn't specify a programming language or whether the combinations should be returned all at once or as an iterator), the answers are all over the place.</p>
<p>Finally, I've looked at <a href="https://stackoverflow.com/q/12991758/839733">Creating all possible k combinations of n items in C++</a>, which specifies C++, but not an iterator, and thus, doesn't fit my purpose, since I already know how to generate all combinations.</p>
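<p>For what it's worth, the backtracking function above seems to translate almost mechanically into a generator: replace <code>ans.append</code> with <code>yield</code> and the recursive call with <code>yield from</code>, keeping the same offset arithmetic. A sketch:</p>

```python
from collections.abc import Iterator
from itertools import combinations  # only used to cross-check below

def combine(n: int, k: int) -> Iterator[list[int]]:
    def backtrack(i: int, comb: list[int]) -> Iterator[list[int]]:
        if len(comb) == k:
            yield [*comb]
            return
        # Same pruning arithmetic as the eager version.
        need = k - len(comb)
        remaining = n - i + 1
        offset = remaining - need
        for j in range(i, i + offset + 1):
            comb.append(j)
            yield from backtrack(j + 1, comb)
            comb.pop()
    return backtrack(1, [])

# Lazy: nothing runs until the iterator is consumed.
result = list(combine(4, 2))
print(result)  # [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]]
assert result == [list(c) for c in combinations(range(1, 5), 2)]
```

<p>Because <code>comb</code> is mutated in place, the <code>[*comb]</code> copy at yield time is still required, exactly as in the eager version.</p>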
| <python><algorithm><iterator><combinations><backtracking> | 2023-09-16 11:05:39 | 3 | 25,239 | Abhijit Sarkar |
77,117,451 | 1,959,753 | Disable the "nanny" when running Dask SSHCluster | <p>Consider an SSHCluster with multiple hosts.</p>
<pre><code>cluster = SSHCluster(["localhost", "hostname"],
connect_options={"known_hosts": None},
worker_options={"n_workers": params["n_workers"], },
scheduler_options={"port": 0, "dashboard_address": ":8797"},)
client = Client(cluster)
</code></pre>
<p>I have tried starting multiple workers on the same host (params["n_workers"] > 1) and found this to be rather wasteful in memory. In fact, I could not even get a successful run without crashing, very likely due to running out of memory. I don't have these problems with multiprocessing, even when using multiple processes.</p>
<p>I believe a better strategy would be to re-design the worker method to be more fine-grained, and to require smaller input parameters and return smaller output results. This will take me a bit longer to achieve, and in the meantime, I am trying to start 1 worker on each host (params["n_workers"] = 1), and utilise the multiprocessing "pool" in the worker, to parallelise across the available cores in each host. (I would have my own config that decides how many processes to use, etc)</p>
<pre><code>manager = mp.Manager()
pool = mp.Pool()
</code></pre>
<p>I tried it, but got the error:</p>
<blockquote>
<p>AssertionError: daemonic processes are not allowed to have children</p>
</blockquote>
<p>Then I tried to create a Dask Distributed scheduler with scheduler = "processes", which seems to utilise a similar approach to the pool method.</p>
<pre><code>delayed_values = [delayed(worker)(param) for param in params]
futures = compute(delayed_values, scheduler="processes", num_processes=n_processes)
</code></pre>
<p>The above code returns the same error. This is likely happening because the "nanny" is creating the workers in multiprocessing processes, and then I am trying to create more processes from within each worker process.</p>
<p>I have found various links suggesting passing the --no-nanny param. See <a href="https://github.com/dask/dask/discussions/7157" rel="nofollow noreferrer">link1</a> which suggests passing --no-nanny to the worker directly, see <a href="https://distributed.dask.org/en/stable/killed.html" rel="nofollow noreferrer">link2</a> which mentions that it is possible to do it via the CLI. But I have not found an example of how to do this programmatically. I am not sure whether this is possible, and if it is possible, how to achieve it (via which object etc). I have tried looking into the Dask code, but have not figured it out.</p>
| <python><dask><dask-distributed><dask-delayed> | 2023-09-16 10:55:53 | 0 | 736 | Jurgen Cuschieri |
77,117,449 | 8,817,161 | How to reload config data in running Flask instance? | <p>Suppose we have a service that uses a filesystem config file to get some job done:</p>
<pre><code>class Service:
def _load():
# (re)loads config file
pass
def do_some_work():
# uses config file
pass
</code></pre>
<p>... and a Flask app that provides the service to its clients.</p>
<pre><code>app = Flask(__name__)
service = Service()
@app.route('/do-some-work')
def do_some_work():
return service.do_some_work()
if __name__ == '__main__':
app.run()
</code></pre>
<p>The question is how to <strong>manually reload</strong> config file <strong>without restarting Flask/gunicorn</strong> instance.</p>
<p>I'm sorry if the question sounds a bit weird. I'm well aware about classic approaches:</p>
<ul>
<li>monitoring config file (dir) for changes via <a href="https://pypi.org/project/watchdog/" rel="nofollow noreferrer">watchdog</a> or its alternative</li>
<li>setting up some timer to check file modification date at fixed intervals, e.g. <a href="https://github.com/viniciuschiele/flask-apscheduler" rel="nofollow noreferrer">Flask-APScheduler</a></li>
<li>adding extra endpoint like <code>@app.route('/reload')</code></li>
</ul>
<p>In my case they all have various drawbacks. I just hope that is possible to build some cli that can interact with running Python process without using HTTP. But since I'm not a Python expert I'm not well informed about WSGI process isolation.</p>
<p><strong>UPDATE:</strong>
Ok, the answer is <a href="https://github.com/tomerfiliba-org/rpyc" rel="nofollow noreferrer">RPyC</a>, but either gunicorn should be started with a single worker only, or the app has to use some kind of cache to store config data.</p>
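<p>For completeness, one more process-local option beyond the three listed above: compare the file's mtime lazily on each access, so a manual edit (or a <code>touch</code>) takes effect on the next request without a watcher thread or an extra endpoint. A sketch, assuming a JSON config file (the format is an assumption):</p>

```python
import json
import os
import tempfile
from typing import Optional

class Service:
    """Reloads its config lazily: every access compares the file's mtime."""

    def __init__(self, path: str):
        self.path = path
        self._mtime: Optional[float] = None
        self._config: dict = {}

    @property
    def config(self) -> dict:
        mtime = os.stat(self.path).st_mtime
        if mtime != self._mtime:  # changed on disk (or first access)
            with open(self.path) as f:
                self._config = json.load(f)
            self._mtime = mtime
        return self._config

# Demo: write a config, read it, change the file, read again.
path = os.path.join(tempfile.mkdtemp(), "config.json")
with open(path, "w") as f:
    json.dump({"limit": 1}, f)
service = Service(path)
first = service.config["limit"]
with open(path, "w") as f:
    json.dump({"limit": 2}, f)
os.utime(path, (0, 12345))  # force a distinct mtime for the demo
second = service.config["limit"]
print(first, second)  # 1 2
```

<p>The cost is one <code>stat()</code> per access, and since every gunicorn worker re-checks independently, it stays consistent with more than one worker.</p>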
| <python><flask><resources> | 2023-09-16 10:54:20 | 1 | 398 | Maxim |
77,117,081 | 17,157,890 | Model training failing after 1 epoch | <p>I am new to Python and need to train a model on a dataset. I found the notebook and the dataset in the same place and made the appropriate changes to the notebook to load the data from storage. The code fails at the training stage with a 'graph execution error' after the first epoch completes.</p>
<p>Here is my jupyter notebook:<a href="https://github.com/Megahedron69/wasteSegregationmodel" rel="nofollow noreferrer">https://github.com/Megahedron69/wasteSegregationmodel</a></p>
<p>Here is the dataset:<a href="https://www.kaggle.com/datasets/aashidutt3/waste-segregation-image-dataset" rel="nofollow noreferrer">https://www.kaggle.com/datasets/aashidutt3/waste-segregation-image-dataset</a></p>
<p>Here is the original notebook:<a href="https://www.kaggle.com/code/gpiosenka/waste-f1-score-97" rel="nofollow noreferrer">https://www.kaggle.com/code/gpiosenka/waste-f1-score-97</a></p>
<p><a href="https://i.sstatic.net/K4Axo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K4Axo.png" alt="enter image description here" /></a></p>
<p>exact error location in notebook:
<a href="https://i.sstatic.net/b4H3b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b4H3b.png" alt="enter image description here" /></a></p>
<p>Entire error:
<a href="https://i.sstatic.net/ITYvb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ITYvb.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/gIqRL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gIqRL.png" alt="enter image description here" /></a></p>
| <python><tensorflow><keras><kaggle> | 2023-09-16 09:12:58 | 1 | 391 | Kartic Joshi |
77,117,067 | 775,155 | How do I solve a nonlinear Least Squares Error problem using Tensorflow 2 in Python? | <p>From known r, YC and ZC values I generate N (theta,YL) pairs using the equation:</p>
<pre><code>YL = YC + ZC*tan(theta + r) + noise
</code></pre>
<p>Having N such (theta,YL) pairs I want to recover r, YC and ZC by minimizing an error function, SE, using TensorFlow 2.</p>
<p>The following Python code shows how I generate N such pairs and the error function:</p>
<pre><code>import math
import random
YC = -0.25
ZC = 1.0
r = 4
N = 10
theta = []
YL = []
for theta_i in range(-10,11):
YL_i = math.tan(math.radians(r + theta_i)) + YC
noise = random.uniform(-0.001,0.001)
YL_i = YL_i + noise
print('theta: ' + str(theta_i) + ', YL: ' + str(YL_i))
theta.append(theta_i)
YL.append(YL_i)
def SE(r, YC, ZC, theta, YL):
N = len(theta)
E = 0
for i in range(0,N):
theta_i = theta[i]
YL_i = YL[i]
Ei = YC + ZC * math.tan(math.radians(r + theta_i)) - YL_i
E = E + Ei**2
return E
SE_0 = SE(4, YC, ZC, theta, YL)
print(SE_0)
</code></pre>
<p>How do I minimize SE using TensorFlow 2?</p>
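<p>A common TF2 pattern for this kind of problem is a GradientTape loop over tf.Variables. A rough sketch, using plain gradient descent for simplicity (the learning rate, iteration count, and starting values are guesses; a tf.keras optimizer such as Adam would likely converge faster):</p>

```python
import math

import tensorflow as tf

# Regenerate the data from the question (noise omitted for brevity).
r_true, YC_true, ZC_true = 4.0, -0.25, 1.0
DEG = math.pi / 180.0
theta = tf.constant([float(t) for t in range(-10, 11)])
YL = YC_true + ZC_true * tf.tan((r_true + theta) * DEG)

# Trainable parameters, started away from the truth.
r = tf.Variable(0.0)
YC = tf.Variable(0.0)
ZC = tf.Variable(0.5)

def se() -> tf.Tensor:
    # Same error function as SE() above, in TF ops so it is differentiable.
    pred = YC + ZC * tf.tan((r + theta) * DEG)
    return tf.reduce_sum((pred - YL) ** 2)

loss_start = float(se())
lr = 0.02
for _ in range(500):
    with tf.GradientTape() as tape:
        loss = se()
    grads = tape.gradient(loss, [r, YC, ZC])
    for var, g in zip([r, YC, ZC], grads):
        var.assign_sub(lr * g)  # plain gradient-descent step
loss_end = float(se())
print(loss_start, loss_end)
```

<p>The YC/ZC directions converge much faster than r here, so a curvature-aware optimizer (or rescaling r) is worth trying if convergence is slow.</p>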
| <python><tensorflow><tensorflow2.0> | 2023-09-16 09:08:06 | 0 | 4,009 | Andy |
77,116,715 | 12,635,140 | How to group data by heading in Pandas | <p>How can I group this data with pandas by the heading rows in the <code>sl. No</code> column?
I have data like the following in CSV or Excel.</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>sl. No</th>
<th>v1</th>
<th>v2</th>
<th>v3</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>Heading1</code></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>1</td>
<td>243</td>
<td>45</td>
<td>3244</td>
</tr>
<tr>
<td>2</td>
<td>3</td>
<td>25</td>
<td>33</td>
</tr>
<tr>
<td>3</td>
<td>43</td>
<td>324</td>
<td>54</td>
</tr>
<tr>
<td><code>Head2</code></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>1</td>
<td>3</td>
<td>45</td>
<td>54</td>
</tr>
<tr>
<td>2</td>
<td>24</td>
<td>4</td>
<td>42</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table></div>
<p>Here is the dummy csv file</p>
<pre><code>sl. No,v1,v2,v3,v4
Heading1,,,,
1,243,45,3244,
2,3,25,33,
3,43,324,54,
Head2,,,,
1,3,45,54,
2,24,4,42,
</code></pre>
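<p>One way to sketch this with pandas: the heading rows are the ones whose value columns are all empty, so mark them, forward-fill the heading text as a group label, then drop the heading rows. Column names below are taken from the dummy CSV:</p>

```python
import io

import pandas as pd

csv_text = """\
sl. No,v1,v2,v3,v4
Heading1,,,,
1,243,45,3244,
2,3,25,33,
3,43,324,54,
Head2,,,,
1,3,45,54,
2,24,4,42,
"""

df = pd.read_csv(io.StringIO(csv_text))

# Heading rows carry no data in v1; use them as group labels.
is_heading = df["v1"].isna()
df["group"] = df["sl. No"].where(is_heading).ffill()
df = df[~is_heading]

for name, block in df.groupby("group", sort=False):
    print(name, len(block))
```

<p><code>sort=False</code> keeps the groups in file order rather than alphabetical order.</p>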
| <python><pandas><group-by> | 2023-09-16 07:06:04 | 1 | 319 | Hillal Kumar Roy |
77,116,236 | 678,572 | Find longest interval between overlapping intervals in Python? | <p>I have a list of intervals in the form of tuples such as this:</p>
<pre class="lang-py prettyprint-override"><code>data = [(5,10), (3,10), (13,15), (12,18), (20,29), (25,30)]
</code></pre>
<p>Each tuple holds 2 values (i.e., start and end), and there may or may not be overlaps between different intervals. If there are overlapping intervals, I want to keep only the longest ones. The output in this test would be the following:</p>
<pre><code>output = [(3,10), (12,18), (20,29)]
</code></pre>
<p><strong>How can I do this in Python using either the standard library, <code>numpy</code>, or <code>pandas</code>?</strong></p>
<p>I started doing something naive like this but I don't think this will scale well...I also would rather not use <code>NetworkX</code></p>
<pre class="lang-py prettyprint-override"><code>import networkx as nx
data = [(5,10), (3,10), (13,15), (12,18), (20,29), (25,30)]
graph = nx.Graph()
n = len(data)
for i, a in enumerate(data):
a_seq = set(range(a[0], a[1] + 1))
for j in range(i+1, n):
b = data[j]
b_seq = set(range(b[0], b[1] + 1))
n_overlap = len(a_seq & b_seq)
if n_overlap:
graph.add_edge(a, b, weight=n_overlap)
output = list()
for nodes in nx.connected_components(graph):
lengths = dict()
for node in nodes:
start, end = node
lengths[node] = end - start
longest_interval, length_of_interval = sorted(lengths.items(), key=lambda x:x[1], reverse=True)[0]
output.append(longest_interval)
</code></pre>
<p>I'm assuming there is a better approach but right now it's escaping me.</p>
<p>Edit: There might be some confusion in the task but I can't mix and match intervals (e.g., (20,30) is invalid because it's not one of the starting intervals).</p>
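<p>For comparison, a sort-and-sweep pass needs no integer sets and no graph: sort by start, merge runs of intervals whose spans touch, and keep the longest interval seen in each run. This sketch treats intervals that share even a single endpoint as overlapping, matching the inclusive-range set intersection above:</p>

```python
def longest_per_group(intervals: list[tuple[int, int]]) -> list[tuple[int, int]]:
    result = []
    best = None       # longest interval in the current overlap group
    group_end = None  # right edge of the current overlap group
    for start, end in sorted(intervals):
        if best is None or start > group_end:  # a new group begins
            if best is not None:
                result.append(best)
            best, group_end = (start, end), end
        else:  # still inside the current group
            if end - start > best[1] - best[0]:
                best = (start, end)
            group_end = max(group_end, end)
    if best is not None:
        result.append(best)
    return result

data = [(5, 10), (3, 10), (13, 15), (12, 18), (20, 29), (25, 30)]
print(longest_per_group(data))  # [(3, 10), (12, 18), (20, 29)]
```

<p>Only original intervals are ever returned, and sorting dominates at O(n log n), versus the quadratic pairwise set intersections in the snippet above.</p>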
| <python><list><tuples><intervals><overlap> | 2023-09-16 03:23:56 | 3 | 30,977 | O.rka |
77,116,082 | 10,262,805 | "addmm_impl_cpu_" not implemented for 'Half' | <p>This is the code for authentication:</p>
<pre><code>from huggingface_hub import notebook_login
#I passed the correct token and got `Token is valid (permission: read)`
notebook_login()
</code></pre>
<p>this is the code to create a model</p>
<pre><code>tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf",
use_auth_token=True,
)
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf",
device_map='auto',
torch_dtype=torch.float16,
use_auth_token=True,
)
</code></pre>
<p>this is the <code>model.config</code></p>
<pre><code>LlamaConfig {
"_name_or_path": "meta-llama/Llama-2-7b-chat-hf",
"architectures": [
"LlamaForCausalLM"
],
"bos_token_id": 1,
"eos_token_id": 2,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 11008,
"max_position_embeddings": 4096,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 32,
"pretraining_tp": 1,
"rms_norm_eps": 1e-06,
"rope_scaling": null,
"tie_word_embeddings": false,
"torch_dtype": "float16", ----------------
"transformers_version": "4.32.0",
"use_cache": true,
"vocab_size": 32000
}
</code></pre>
<p>'Half': refers to the half-precision floating-point format, which is also known as <code>float16</code> or <code>torch.float16</code>. It's a lower-precision data type compared to the standard 32-bit float32.</p>
<p>"addmm_impl_cpu_": I think this indicates that there is an issue with a specific operation or computation related to matrix multiplication (addmm) on the CPU.</p>
<p>I used the same <code>dtype</code> as in <code>model.config</code>. I also tried different dtypes (<code>torch.float32</code>, <code>torch.bfloat16</code>, <code>torch.bfloat32</code>), but the error persists.</p>
<h1>Creating HuggingFace Pipeline:</h1>
<pre><code># text generation pipeline. we are creating text using pre-trained language model
pipe=pipeline("text-generation",
model=model,
# tokenization is the process of splitting text into smaller unit
tokenizer=tokenizer,
# data type for model inference. torch.bfloat16 is the lower precision floating point format.
# lower precision data types can help reduce memory usage and speed up inference
torch_dtype=torch.float16,
# determines the device (cpu or gpu) on which model will run
device_map='auto',
# if the generated text exceeds this limit, it will truncated or will be split into multiple segments to fit within the limit
max_new_tokens=512,
# setting it to 1 means there is no strict minimum requirement
min_new_tokens=-1,
# sampling strategy for generating text.
# it limits the choices of next token during generation to the top k most likely tokens according to the model's probabilities.
top_k=30)
llm=HuggingFacePipeline(pipeline=pipe,model_kwargs={'temperature':0.7})
</code></pre>
<h1>Creating a Conversation Retrieval QA Chain with memory:</h1>
<pre><code>memory=ConversationBufferMemory(memory_key='chat_history',return_messages=True)
pdf_qa=ConversationalRetrievalChain.from_llm(llm=llm,
retriever=vectordb.as_retriever(search_kwargs={'k':6}),
verbose=False, memory=memory)
</code></pre>
<h1>Error throws here:</h1>
<pre><code> result=pdf_qa({"question":"question here"})
</code></pre>
| <python><huggingface-transformers><large-language-model><llama> | 2023-09-16 01:55:39 | 1 | 50,924 | Yilmaz |
77,115,964 | 265,932 | Is there a way to access the root dictionary used by Jinja2 from within a template? | <p>Note: The lead-up to the question is a bit long, since I want to share the various ways of dealing with this issue, to help others struggling with it.</p>
<p>I'm trying to use Jinja2 in Python to format AWS SNS events (which are nested dictionaries). For example:</p>
<pre><code>event = {
"id": "c3a1d2c9-3364-e31b-2ef0-2c6d38933ca2",
"detail-type": "CloudWatch Alarm State Change",
"source": "aws.cloudwatch",
"time": "2019-10-02T17:04:40Z",
"detail": {
"alarmName": "ServerCpuTooHigh",
"configuration": {
...
</code></pre>
<p>and I can make a template that looks like</p>
<pre><code>{{ source }} had a problem
</code></pre>
<p>and then use <code>template.render(event)</code> to produce "aws.cloudwatch had a problem".</p>
<p>Where I'm having difficulty is with data fields with hyphens (or any other character not allowed in a Python identifier). Trying to use <code>{{ detail-type }}</code>, for example, throws an exception.</p>
<p>There are only two ways I can think of to get around this:</p>
<p>First, I can sanitize all of the keys in the dictionary to remove offending characters (which is what I'm doing now, but it bears the slight risk of collision with other keys)</p>
<p>The second way is to use Jinja2's support for calling <code>get()</code> or bracket notation to reference fields in the dictionary (like <code>{{ event.get('detail-type') }}</code> or <code>{{ event['detail-type'] }}</code>) but this requires that I have the an identifier ('event' in the examples I just used) that maps to the root of my dictionary, and <em>that requires that my entire dictionary be passed within a containing **kwargs dictionary</em>, like <code>template.render(event=event)</code>. The problem with <em>that</em> is that this now requires that <em>all</em> of my field references now have this prefix (so every field reference like <code>{{ source }}</code> now has to be changed to <code>{{ event.source }}</code>. This is a pretty high syntactic cost just to be able to access fields with hyphens. I can only think of two ways to address this issue (and be able to refer to most fields directly while still gaining access to problem keys):</p>
<ol>
<li>Move the problem keys in the root dictionary into a sub-dictionary (say, 'problemkeys' for illustrating the idea), so I could do something like <code>{{ source }} had a problem relating to {{ problemkeys.get['detail-type'] }}</code>.</li>
<li>If Jinja2 were to have some rarely-needed identifier (say, '_top') for getting to the **kwargs, then, when needed, we'd be able to use <code>{{ _top['detail-type'] }}</code>.</li>
</ol>
<p><strong>So, the question is:</strong> <em>Is</em> there an identifier we can use to access the top of the **kwargs we passed in to <code>render()</code>?</p>
<p>Edit: I think another reason for such a reference to the root dictionary would be to enable iterating over all of the root keys, like <code>{% for key,val in _top.items() %}</code>.</p>
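<p>A '_top'-style name can at least be simulated from the calling side, because render() accepts a dict that is merged with kwargs into one context: merge a self-reference in before rendering. Here <code>_top</code> is a name chosen for illustration, not a Jinja2 built-in, and it would collide if the event ever contained a key of the same name:</p>

```python
from jinja2 import Template

event = {
    "source": "aws.cloudwatch",
    "detail-type": "CloudWatch Alarm State Change",
}

# Bare names keep working; problem keys go through the self-reference.
context = {**event, "_top": event}
template = Template("{{ source }} had a problem ({{ _top['detail-type'] }})")
print(template.render(context))
# aws.cloudwatch had a problem (CloudWatch Alarm State Change)
```

<p>Iterating over all root keys also works through the same name, e.g. <code>{% for key, val in _top.items() %}</code>.</p>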
| <python><jinja2> | 2023-09-16 00:56:53 | 1 | 2,272 | Jemenake |