QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,442,052 | 4,398,966 | shallow copy problem with values and references | <p>In python I have:</p>
<pre><code>x = [1,2,3,[4,5,6]]
y = x[:]
x[3][0] = 9
</code></pre>
<p>so now:</p>
<pre><code>x = [1,2,3,[9,5,6]]
y = [1,2,3,[9,5,6]]
</code></pre>
<p>but:
x[0] = 99</p>
<p>then:</p>
<pre><code>x = [99,2,3,[9,5,6]]
</code></pre>
<p>and:</p>
<pre><code>y = [1,2,3,[9,5,6]]
</code></pre>
<p>how come when I change an element that's in a list of a list it's reflected in both x an y but when I change an element in a list they are not?</p>
| <python><shallow-copy> | 2023-11-07 23:33:11 | 0 | 15,782 | DCR |
77,442,036 | 1,874,170 | Renaming/re-declaring functions for CFFI? | <p>I have a C library for which I'm trying to create (out-of-line, API mode) <a href="https://github.com/python-cffi/cffi" rel="nofollow noreferrer">CFFI</a> bindings. The C library provides various implementations of each function, but all of them have this giant, obnoxious prefix added onto them.</p>
<p>For example, the AVX-optimized implementation of the <em>foo</em> function from the <em>GreenSpam</em> submodule is named <code>THELIBRARY_GREENSPAM_OPTIMIZED_AVX_foo()</code>; I'll be exposing this as <code>TheLibrary.Spam.GreenSpam.foo</code>, with CPU optimizations chosen <em>transparently</em> at <code>import</code>-time.</p>
<p>I <em>am</em> capable of determining this prefix for each module at <code>ffibuilder.compile()</code>-time, so I'd like to try to lean into <a href="https://en.wikipedia.org/wiki/Don%27t_repeat_yourself" rel="nofollow noreferrer">DRY</a> to the highest degree possible. My code currently looks like this, which as you can see is about as horrible as it gets as regards DRY <strong>and</strong> doesn't provide any easy or obvious avenue for moving forward at the run-time algorithm choice:</p>
<pre class="lang-py prettyprint-override"><code># TheLibrary/Spam/GreenSpam.py
from . import _greenspam_generic
from . import _greenspam_optimized_avx # TODO
__all__ = ['foo', 'oof']
def foo(bar):
"""GreenSpam foo"""
_out = _greenspam_generic.ffi.new(f"uint8_t[{len(bar):d}]")
_greenspam_generic.lib.THELIBRARY_GREENSPAM_GENERIC_foo(bar, len(bar), _out)
# void function never errors
return bytes(_out)
def oof(baz):
"""GreenSpam oof"""
_out = _greenspam_generic.ffi.new(f"uint8_t[{len(baz):d}]")
err = _greenspam_generic.lib.THELIBRARY_GREENSPAM_GENERIC_oof(baz, len(baz), _out)
if err:
raise RuntimeError("oof")
return bytes(_out)
</code></pre>
<pre class="lang-py prettyprint-override"><code># TheLibrary/Spam/PurpleSpam.py
from . import _purplespam_generic
from . import _purplespam_optimized_avx # TODO
__all__ = ['foo', 'oof']
def foo(bar):
"""PurpleSpam foo"""
_out = _purplespam_generic.ffi.new(f"uint8_t[{len(bar):d}]")
_purplespam_generic.lib.THELIBRARY_PURPLESPAM_GENERIC_foo(bar, len(bar), _out)
# void function never errors
return bytes(_out)
def oof(baz):
"""PurpleSpam oof"""
_out = _purplespam_generic.ffi.new(f"uint8_t[{len(baz):d}]")
err = _purplespam_generic.lib.THELIBRARY_PURPLESPAM_GENERIC_oof(baz, len(baz), _out)
if err:
raise RuntimeError("oof")
return bytes(_out)
</code></pre>
<p>I would really like to flatten out these C library function names when I'm running <code>ffi.compile()</code>, ideally exposing aliases something like <code>(fr'\b{re.escape(prefix)}_(\w+)\b', r'\1')</code> to make my <em>bindings'</em> code cleaner and better, and in particular to enable me to add in those import-time CPU choices without having to multiplicatively duplicate my code <em>again</em>.</p>
<p>I'd like to track the upstream library as closely as possible, so doing transformations on the C source to remove these prefixes would be painful and I'd like to ask in particular about <em>alternatives</em> to doing that.</p>
<p>However, I would be fine to add, for example during <code>ffibuilder.set_source()</code>, normalized <strong>aliases</strong> to the mangled names, especially if that could be done in a DRY-focused way. (I did try <a href="https://stackoverflow.com/q/3053561/1874170">function references</a>, but apparently those are C++ only?)</p>
<hr />
<p>The specific library I'm currently looking at is <a href="https://github.com/PQClean/PQClean" rel="nofollow noreferrer">PQClean</a>, but I expect I'll run up against this in future bindings as well, so that's another point against drilling down into transforming the C source too much, because I'll have to re-duplicate that effort each time I run across a new library, and possibly again each time that library refactors their header spaghetti.</p>
| <python><namespaces><function-declaration><python-cffi> | 2023-11-07 23:28:37 | 1 | 1,117 | JamesTheAwesomeDude |
77,441,848 | 6,462,301 | How to have plotly subplot subtitles with a multi-line main title? | <p>Consider the following python code:</p>
<pre><code>from plotly.subplots import make_subplots
import plotly.graph_objects as go
fig = make_subplots(
rows=2, cols=1,
subplot_titles=("Plot 1", "Plot 2"))
fig.add_trace(go.Scatter(x=[1, 2, 3], y=[4, 5, 6]),
row=1, col=1)
fig.add_trace(go.Scatter(x=[300, 400, 500], y=[600, 700, 800]),
row=2, col=1)
title_text = "Multiple Subplots with Titles<br>Multiple Subplots with Titles<br>Multiple Subplots with Titles<br>Multiple Subplots with Titles<br>Multiple Subplots with Titles"
fig.update_layout(title_text=title_text, title_x=0.5)
fig.show()
</code></pre>
<p>which generates:
<a href="https://i.sstatic.net/IQ5YN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IQ5YN.png" alt="enter image description here" /></a></p>
<p>How do I push all the subplots down to make room for the multi-line main title as well as the subtitle of the first subplot?</p>
| <python><plotly> | 2023-11-07 22:31:04 | 0 | 1,162 | rhz |
77,441,803 | 1,255,016 | Parallel computing in python within recursion | <p>I am interested in parallelizing a <code>func()</code> of a python program, which frequently calls a <code>compute_heavy()</code> function. Noticeably, this parallelization should not be an implementation detail invisible from the outside. One way to parallelize is based on <code>multiprocessing</code> pools:</p>
<pre class="lang-py prettyprint-override"><code>import time
from multiprocessing import Pool
def compute_heavy(x):
time.sleep(1)
return x*x
def func():
with Pool(5) as p:
result = p.map(compute_heavy, [1, 2, 3])
return sum(result)
</code></pre>
<p>This essentially comes down to fork-join type parallelism.</p>
<p>However, my function is more complicated to evaluate, as illustrated by the example below:</p>
<pre><code>def func(x):
if x == 0:
return 0
my_result = sum([compute_heavy(i) for i in range(x)])
other_result = func(x - 1) # recursion!
return my_result + other_result
</code></pre>
<p>First, it would probably be sensible to use a common pool throughout the program, which is easily remedied.</p>
<p>However, if I call <code>func(10)</code>, then its body has 10 parallelizable tasks, whereas the body of <code>func(1)</code> has just 1. Unfortunately, the <code>Pool.map</code> function blocks until all tasks are finished. So even with an unlimited number of processors, the call to <code>func(10)</code> will take 10 seconds, rather than 1, which I would prefer.</p>
<p>One approach would be to somehow include the recursive call to <code>func(x - 1)</code> into the argument to the <code>pool.map</code> function in order to wait only once. This could (?) however possibly cause a deadlock, since all processes in the pool are waiting for results (unless they return control to the pool during waiting?)</p>
<p>My question is: How can I achieve the optimal running time for the toy example without causing any type of resource exhaustion problems?</p>
<p>To this end I briefly looked into the <code>async</code> / <code>await</code> framework, which seems to be built around these principles, though it is apparently mostly used for I/O rather than computation, but maybe it works for computation as well?</p>
| <python><parallel-processing> | 2023-11-07 22:19:55 | 2 | 4,452 | hfhc2 |
77,441,623 | 1,925,303 | deptry reports dependencies as unused, although I need them | <p>In my python project, I use poetry version 1.7.0 and have a <code>pyproject.toml</code> containing</p>
<pre><code>[tool.poetry.dependencies]
python = "^3.11"
dogpile-cache = {extras = ["redis"], version = "^1.2.2"}
redis = "^4.5.5"
...
[tool.poetry.group.dev.dependencies]
deptry = "^0.12.0"
....
</code></pre>
<p>The rest of the content should be irrelevant, but note there is no <code>[tool.poetry.extras]</code> section.</p>
<p>I install dependencies via <code>poetry install</code>, respectively <code>poetry install --withou dev</code> for production.</p>
<p>When I execute <code>deptry .</code>, it reports <code>redis</code> dependency as unused. There is no direct import of redis in my project code, but it is still required by <code>dogpile-cache</code>. If I remove the explicit dependency, my application stops working.</p>
<p>I did read up on poetry extras, but to my understanding they should not be used here, because they should describe optional extended functionality of my project and not extras of dependencies I want to use.</p>
<p>How to solve the situation?
Do I need to change some deptry config to have the tool stop reporting those repositories? Or do I need to adjust my pyproject.toml and the way I install dependencies with poetry?</p>
| <python><python-poetry> | 2023-11-07 21:36:39 | 1 | 1,958 | SBH |
77,441,574 | 6,447,399 | Adding a 3rd chain to langchain to output multiple prompts | <p>I have been following some tutorials on langchain and I have got a little stuck regarding generating an output from some inputs. Basically what I am trying to do is:</p>
<ol>
<li>Input only the "cuisine"</li>
<li>From this, it generates a restaurante name,</li>
<li>The restaurante name gets passed to another sequentialchain to give me menu items</li>
<li><strong>Stuck here</strong> - I want to pass each menu item to a new chain in order to get the ingredients needed for this recipe.</li>
<li>If possible, I would also like to see how I can take steps 4 results for each menu item and generate a recipe using each of the ingredients</li>
</ol>
<p>Code:</p>
<pre><code>from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.chains import SimpleSequentialChain
from langchain.chains import SequentialChain
openai_key = ""
# Sequential chain
llm = OpenAI(temperature=0.6, openai_api_key = openai_key)
########## Chain 1 - Restaurant Name
prompt_template_name = PromptTemplate(
input_variables = ['cuisine'],
template = "I want to open a restaurant for {cuisine} food. Suggest a fancy name for this."
)
name_chain = LLMChain(llm=llm, prompt=prompt_template_name, output_key="restaurante_name")
######### Chain 2 - Menu Items
prompt_template_items = PromptTemplate(
input_variables = ['restaurante_name'],
template = """Suggest some menu items for {restaurante_name}. Return it as a comma separated list"""
)
food_items_chain = LLMChain(llm=llm, prompt=prompt_template_items, output_key="menu_items")
chain = SequentialChain(
chains = [name_chain, food_items_chain],
input_variables = ['cuisine'],
output_variables = ['restaurante_name', 'menu_items']
)
print(
chain({'cuisine': 'Spanish'})
)
######## Chain 3 - Menu recipies
prompt_template_ingredients = PromptTemplate(
input_variables = ['recipe_ingredients'],
template = """List the ingrediants for {recipe_ingredients}. Renturn it as a comma separated list"""
)
ingredients_chain = LLMChain(llm=llm, prompt=prompt_template_ingredients, output_key="ingredient_items")
chain = SequentialChain(
chains = [name_chain, food_items_chain, ingredients_chain],
input_variables = ['cuisine', 'restaurante_name'],
output_variables = ['restaurante_name', 'menu_items', 'ingredient_items']
)
print(
chain({'cuisine': 'American'})
)
</code></pre>
| <python><langchain> | 2023-11-07 21:26:42 | 0 | 7,189 | user113156 |
77,441,496 | 3,574,603 | Python: How do I correctly decrypt the text sent from my Ruby script? | <p>I am trying to encrypt text locally in Ruby and decrypt it on the server with Python.</p>
<p>I post the encrypted phrase like so:</p>
<pre class="lang-rb prettyprint-override"><code>require "base64"
require "openssl"
public_key = OpenSSL::PKey::RSA.new File.read './publickey.pem'
encrypted = public_key.public_encrypt "Hello everyone. This is a secret phrase."
uri = URI('#...')
params = { "passage" => Base64.encode64(encrypted) }
response = HTTParty.post(
uri,
:body => params.to_json,
:headers => {'Content-Type' => 'application/json'}
)
</code></pre>
<p>My Python looks like this:</p>
<pre class="lang-py prettyprint-override"><code> f = open('./privatekey.pem', 'rb') # generated with ruby (OpenSSL::PKey::RSA.new 2048)
private_key = RSA.importKey(f.read())
f.close()
return private_key.decrypt(encrypted)
</code></pre>
<p>This results in the error:</p>
<pre><code>raise NotImplementedError("Use module Crypto.Cipher.PKCS1_OAEP instead")
NotImplementedError: Use module Crypto.Cipher.PKCS1_OAEP instead
</code></pre>
<p>So, I use <code>PKCS1_OAEP</code>:</p>
<pre class="lang-py prettyprint-override"><code> #...
cipher_priv = PKCS1_OAEP.new(private_key)
decrypted = cipher_priv.decrypt(encrypted)
</code></pre>
<p>This results in the error:</p>
<pre><code>raise ValueError("Incorrect decryption.")
ValueError: Incorrect decryption.
</code></pre>
<p><strike>…and I'm sent round in circles.</strike> If not <code>PKCS1_OAEP</code> what is the correct decryption?</p>
<p>How do I decrypt the text sent from Ruby to the Python server?</p>
| <python><rsa><pycryptodome> | 2023-11-07 21:11:25 | 0 | 3,678 | user3574603 |
77,441,429 | 754,136 | Load YAML files in structured folders with OmegaConf | <pre><code>configs
|_________ experiment
| |--------- test.yaml
|--------- default.yaml
</code></pre>
<p><code>default.yaml</code> has some configuration parameters and then</p>
<pre><code>experiment: test.yaml
</code></pre>
<p><code>test.yaml</code> defines more parameters, such as <code>learning_rate: 0.1</code>.<br />
I would like to load them recursively with <code>OmegaConf</code>. If I do</p>
<pre class="lang-py prettyprint-override"><code>from omegaconf import DictConfig, OmegaConf
conf = OmegaConf.load("configs/default.yaml")
</code></pre>
<p>It doesn't work because <code>conf</code> has key <code>experiment</code> with value <code>test</code>, rather than key <code>learning_rate</code> with value <code>0.1</code>.<br />
It is possible to read YAML files recursively?</p>
| <python><yaml><omegaconf> | 2023-11-07 20:57:32 | 1 | 5,474 | Simon |
77,441,417 | 1,492,613 | How can I let pip install dependencies of my current source package without actually installing the package? | <p>According to many suggestions, we are deprecating the <em><a href="https://stackoverflow.com/questions/74506852/what-is-the-requirements-txt-what-should-be-in-it">requirements.txt</a></em> file. So now developers are purely based on dependencies described in <em>setup.py</em> and <em>pyproject.toml</em> (if one wants to build it).</p>
<p>Previously, one could just do <code>pip install -r requirements.txt</code>.</p>
<p>However, since the <em>requirements.txt</em> file will be removed, how would one do this step easily?</p>
<p>Many developers do not actually want to install with <code>-e .</code>, because they may have a production version in the system. One would just want to quickly spin up with proper dependencies and start to edit in their IDE.</p>
| <python><pip> | 2023-11-07 20:55:00 | 3 | 8,402 | Wang |
77,441,373 | 2,080,960 | When running my pip package I get ModuleNotFoundError | <p>I have a Python program which works great when running it from the src directory E.g.:</p>
<pre><code>> python start.py
</code></pre>
<p>I've published the package to pip, and after I've installed it and running it I get:</p>
<pre><code>> microservicebus-py
Traceback (most recent call last):
File "/usr/bin/microservicebus-py", line 5, in <module>
from src.start import main
File "/usr/lib/python3.9/site-packages/src/start.py", line 6, in <module>
import utils
ModuleNotFoundError: No module named 'utils'
</code></pre>
<p>The <code>utils.py</code> is in the same directory as the <code>start.py</code> file, but I must be missing some setting perhaps in my setup.py used for creating the pip package?</p>
<p>Does anyone have any idea?</p>
| <python><python-3.x> | 2023-11-07 20:48:29 | 0 | 963 | wmmhihaa |
77,441,269 | 379,572 | How to convert a Python generator to async generator? | <p>I have a generator function in Python which is IO bound. I'd like to convert it to an async generator, where the generator loop is running in a separate process or thread. For example, loading chunks of data from a socket, where we'd like to load the next chunk while we process the previous. I'm imagining a Queue would be used, where the IO thread is buffering chunks (the yield outputs) for the async generator to pickup. Preferably, I'd like to use the concurrent.futures module to allow flexibility in picking between separate process vs thread. What is the best way to go about this?</p>
<pre class="lang-py prettyprint-override"><code>import time
def blocking():
""" plain generator with blocking i/o """
i = 0
while True:
time.sleep(1)
yield i # fake result
i += 1
def consumer():
for chunk in blocking():
print(chunk)
</code></pre>
| <python><multithreading><concurrency><python-asyncio><generator> | 2023-11-07 20:27:49 | 2 | 7,358 | Azmisov |
77,441,245 | 12,436,050 | Group by column and assign value to a new column based on other column values in python | <p>I have following dataframe</p>
<pre><code>col1 col2 col3
HP:0002616 ['HP:0001679'] Abnormal aortic morphology
HP:0002616 ['HP:0002597'] Abnormality of the vasculature
HP:0002616 ['HP:0001626'] Abnormality of the cardiovascular system
HP:0002616 ['HP:0000118'] Phenotypic abnormality
HP:0002616 ['HP:0000118'] disease
HP:0002616 ['HP:0000118'] quality
HP:0002616 ['HP:0000118'] material property
HP:0002616 ['HP:0000118'] experimental factor
HP:0002616 ['HP:0000118'] Thing
HP:0002616 ['HP:0030680'] Abnormality of cardiovascular system morphology
HP:0002616 ['HP:0002617'] Vascular dilatation
HP:0010689 ['HP:0011297'] Abnormal digit morphology
HP:0010689 ['HP:0011842'] Abnormal skeletal morphology
HP:0010689 ['HP:0000924'] Abnormality of the skeletal system
HP:0010689 ['HP:0000118'] Phenotypic abnormality
HP:0010689 ['HP:0000118'] phenotype
</code></pre>
<p>The expected output is:</p>
<pre><code>col1 col4
HP:0002616 disease
HP:0010689 phenotype
</code></pre>
<p>I would like to create a new column based on values in col3 group by col1. For a col1 value, if col3 has 'disease', a new column col4 will have value 'disease', if col3 has 'phenotpye', a new column col4 will have value 'phenotype'. If both value exists, then col4 will have disease, phenotype. How can I achieve this?</p>
<p>Any help is highly appreciated.</p>
| <python><group-by> | 2023-11-07 20:23:24 | 1 | 1,495 | rshar |
77,441,226 | 7,479,675 | Resizing SVG Images to Specific Dimensions without Visual Alteration using Python | <p>I’m currently working on resizing SVG images using Python and I need some assistance. Specifically, I want to resize any given SVG image to the following dimensions:</p>
<pre><code>width = '512'
height = '512'
viewBox = '0 0 512 512'
</code></pre>
<p>Additionally, I need to redraw the vectors in such a way that the visual representation of the image remains unchanged after resizing. <strong>Could anyone provide guidance on how this can be achieved using Python?</strong></p>
<p>For example: I have an SVG with the following parameters: width="800px", height="800px", and viewBox="0 0 1024 1024". I want to change these parameters to width="512px", height="512px", and viewBox="0 0 512 512", and then correct all the vectors so that the image looks as it did before.</p>
<p><em>Here are the methods I’ve already tried:</em>
<em>Using svgutils: I was able to modify the width and height, but I couldn’t figure out how to change the viewBox while updating the new dimensions of the image vector.</em></p>
<pre><code>import os
import svgutils.transform as sg
directory = os.path.dirname(os.path.realpath(__file__))
for filename in os.listdir(directory):
if filename.endswith('.svg'):
fig = sg.fromfile(os.path.join(directory, filename))
fig.set_size(('200', '200'))
fig.save(os.path.join(directory, 'resized_' + filename))
</code></pre>
<p><em>Using Inkscape: The ‘–export-width=512’ and ‘–export-height=512’ commands didn’t work as expected.</em></p>
<pre><code>import subprocess
import glob
import os
# Get the directory of the current script
directory = os.path.dirname(os.path.realpath(__file__))
# Get a list of all SVG files in the directory
svg_files = glob.glob(os.path.join(directory, '*.svg'))
inkscape_path = 'D:\\Apps\\Inkscape\\bin\\inkscape.exe'
for file in svg_files:
# Export the file as a new SVG file with a width and height of 512
output_file = f"{file.rsplit('.', 1)[0]}_resized.svg"
result = subprocess.run([inkscape_path, file, '--export-area-drawing', '--export-filename=' + output_file, '--export-width=512', '--export-height=512'], capture_output=True, text=True, errors='ignore')
print(result.stdout)
print(result.stderr)
</code></pre>
<p><em>Using Cairo and rsvg: This method didn’t work on Windows. I encountered an error: “AttributeError: module ‘rsvg’ has no attribute ‘Handle’”.</em></p>
<pre><code>import os
import cairo
import rsvg
WIDTH, HEIGHT = 512, 512# desired width and height
# Get the directory of the current script
directory = os.path.dirname(os.path.realpath(__file__))
# Use os.walk to find all SVG files
for root, dirs, files in os.walk(directory):
for file in files:
if file.endswith('.svg'):
svg_path = os.path.join(root, file)
png_path = os.path.splitext(svg_path)[0] + '.png'
# Load the SVG data
with open(svg_path, 'r') as svg_xml:
svg = rsvg.Handle()
svg.write(svg_xml.read())
svg.close()
# Prepare the Cairo context
img = cairo.ImageSurface(cairo.FORMAT_ARGB32, WIDTH, HEIGHT)
ctx = cairo.Context(img)
# Scale whatever is written into this context
ctx.scale(2, 2) # scale in x and y directions
svg.render_cairo(ctx)
# Write out into a PNG file
with open(png_path, 'wb') as fout:
fout.write(img.write_to_png())
</code></pre>
| <python><svg> | 2023-11-07 20:20:34 | 0 | 392 | Oleksandr Myronchuk |
77,441,171 | 12,468,387 | Failed building wheel for twisted-iocpsupport | <p>I'm trying to dockerize my django project. When i run "docker build -t django_project" i get following error:</p>
<pre><code>18.82 Building wheel for twisted-iocpsupport (pyproject.toml): started
19.23 Building wheel for twisted-iocpsupport (pyproject.toml): finished with status 'error'
19.24 error: subprocess-exited-with-error
19.24
19.24 × Building wheel for twisted-iocpsupport (pyproject.toml) did not run successfully.
19.24 │ exit code: 1
19.24 ╰─> [13 lines of output]
19.24 running bdist_wheel
19.24 running build
19.24 running build_ext
19.24 building 'twisted_iocpsupport.iocpsupport' extension
19.24 creating build
19.24 creating build/temp.linux-x86_64-cpython-39
19.24 creating build/temp.linux-x86_64-cpython-39/twisted_iocpsupport
19.24 gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -Itwisted_iocpsupport -I/usr/local/include/python3.9 -c twisted_iocpsupport/iocpsupport.c -o build/temp.linux-x86_64-cpython-39/twisted_iocpsupport/iocpsupport.o
19.24 twisted_iocpsupport/iocpsupport.c:1210:10: fatal error: io.h: No such file or directory
19.24 1210 | #include "io.h"
19.24 | ^~~~~~
19.24 compilation terminated.
19.24 error: command '/usr/bin/gcc' failed with exit code 1
19.24 [end of output]
19.24
19.24 note: This error originates from a subprocess, and is likely not a problem with pip.
19.24 ERROR: Failed building wheel for twisted-iocpsupport
19.24 Successfully built autobahn
19.24 Failed to build twisted-iocpsupport
19.24 ERROR: Could not build wheels for twisted-iocpsupport, which is required to install pyproject.toml-based projects
19.70
19.70 [notice] A new release of pip is available: 23.0.1 -> 23.3.1
19.70 [notice] To update, run: pip install --upgrade pip
------
Dockerfile:13
--------------------
11 | # Copy the requirements file into the container and install the Python dependencies
12 | COPY requirements.txt /app/
13 | >>> RUN pip install --no-cache-dir -r requirements.txt --no-dependencies
14 |
15 | # Copy the entire project directory into the container
--------------------
ERROR: failed to solve: process "/bin/sh -c pip install --no-cache-dir -r requirements.txt --no-dependencies" did not complete successfully: exit code: 1
</code></pre>
<p>My requirements.txt:</p>
<pre><code>asgiref==3.7.2
attrs==23.1.0
autobahn==23.6.2
Automat==22.10.0
cffi==1.16.0
channels==4.0.0
constantly==23.10.4
cryptography==41.0.5
daphne==4.0.0
Django==4.2.7
hyperlink==21.0.0
idna==3.4
incremental==22.10.0
pyasn1==0.5.0
pyasn1-modules==0.3.0
pycparser==2.21
pyOpenSSL==23.3.0
python-decouple==3.8
service-identity==23.1.0
six==1.16.0
sqlparse==0.4.4
Twisted==23.10.0
twisted-iocpsupport==1.0.4
txaio==23.1.1
typing_extensions==4.8.0
tzdata==2023.3
zope.interface==6.1
</code></pre>
<p>Dockerfile:</p>
<pre><code>FROM python:3.9.18
ENV PYTHONUNBUFFERED 1
ENV DJANGO_SETTINGS_MODULE django_project.settings
WORKDIR /app
COPY requirements.txt /app/
RUN pip install --no-cache-dir -r requirements.txt
COPY . /app/
EXPOSE 8000
CMD ["gunicorn", "django_project.wsgi:application", "--bind", "0.0.0.0:8000"]
</code></pre>
<p>Python version 3.9.18.</p>
<p>The problem is that daphne is trying to install the twisted-iocpsupport dependency. How to fix it and dockerize django project?</p>
<p>UPD:
added Dockerfile</p>
| <python><django><docker><twisted><daphne> | 2023-11-07 20:10:17 | 1 | 449 | Denzel |
77,441,136 | 6,594,089 | Multiple instances of inner classes in Python | <p>I'm trying to create a Child class within a Person class, but am having trouble accessing the child object after creation, when more than one instance is created dynamically.</p>
<p>For example, I ask the user how many kids they have, and I then want to create an instance of the inner child class for each child they have, and be able to access that information later.</p>
<p>Here is some code that I have tried so far, which works for one child. But I can't figure out how to access the information if they have more than 1 child.</p>
<pre><code>class Person:
def __init__(self, firstName, lastName, age):
self.firstName = firstName
self.lastName = lastName
self.age = age
self.numberOfChildren = 0
class Child:
def __init__(self, firstName, age):
self.firstName = firstName
self.age = age
def show(self):
print(self.firstName, self.age)
client = Person("Jake", "Blah", 31)
numberOfChildren = input("How many kids do you have?")
client.numberOfChildren = 2
for i in range(numberOfChildren):
firstName = input("What is the name of your child?")
age = input("how old is your child?")
child = client.Child(firstName, age)
child.show()
</code></pre>
<p>This correctly prints out the child, and seems to create the object within the Person class, but I can not figure out how to access this information through the Person class (client).</p>
<p>If I use something like child.firstName, it obviously only shows the last one entered, as the for loop is over-writing the child object every time. If I knew in advance how many children they would have, I could use something like child1, child2, etc, but since it is dynamic I don't know in advance.</p>
<p>Thanks in advance!</p>
| <python><class><inner-classes> | 2023-11-07 20:02:20 | 1 | 459 | LBJ33 |
77,441,030 | 1,471,980 | how do you get job id for splunk saved search in python | <p>I am trying to retrieve job id from splunk using python.</p>
<p>when I do this in curl, it works. It prints out the sid number.</p>
<pre><code>curl -u <username>:<password> -k https://example.com:8089/services/search/jobs -d search="search interface*"
</code></pre>
<p>I get:</p>
<pre><code><response>
<sid>1899999967</sid>
</response>
</code></pre>
<p>I need to convert this curl code to python request, I tried this:</p>
<pre><code>res=requests.get('https://example.com:8089/services/search/jobs', params= ('-d search='search interface*'"), auth=(<username>:<password>), verify=False)
</code></pre>
<p>I get a huge list of search results that incude other saved search results. I only need saved search interface*</p>
<p>Any ideas what I am doing here wrong?</p>
| <python><python-requests><splunk> | 2023-11-07 19:42:21 | 1 | 10,714 | user1471980 |
77,440,909 | 19,130,803 | dash dbc Modal: background more blur or dark | <p>I am working on a dash app.I am using <code>dbc Modal</code> with <code>backdrop=False</code> and getting totally visible background when modal is open. Also I tried <code>backdrop='static'</code> and getting atleast faint background.</p>
<pre><code>import dash_bootstrap_components as dbc
from dash import Input, Output, State, html
modal = html.Div(
[
dbc.Button("Open modal", id="open", n_clicks=0),
dbc.Modal(
[
dbc.ModalHeader(dbc.ModalTitle("Header")),
dbc.ModalBody("This is the content of the modal"),
dbc.ModalFooter(
dbc.Button(
"Close", id="close", className="ms-auto", n_clicks=0
)
),
],
id="modal",
is_open=False,
backdrop=False, # or 'static'
),
]
)
@app.callback(
Output("modal", "is_open"),
[Input("open", "n_clicks"), Input("close", "n_clicks")],
[State("modal", "is_open")],
)
def toggle_modal(n1, n2, is_open):
if n1 or n2:
return not is_open
return is_open
</code></pre>
<p>I tried</p>
<pre><code>style={'opacity': '0.5 !important'}
</code></pre>
<p>But it has no effect.</p>
<p>Is it possible to make the background more blur or dark?</p>
| <python><plotly><plotly-dash> | 2023-11-07 19:20:23 | 1 | 962 | winter |
77,440,833 | 2,658,278 | Removing non-numeric rows in a Pandas data frame mid-chain | <p>I have a Pandas dataframe with a column called EID. It is mostly integers but there are a few non-numeric values in the column. I'm trying to remove them in the middle of a function chain.</p>
<p>Here are some of the errors I get in the middle of my debugging session:</p>
<pre><code>(Pdb) df.dropna(subset=['EID']).query('EID.str.isnumeric()')
*** ValueError: Cannot mask with non-boolean array containing NA / NaN values
(Pdb) df.dropna(subset=['EID']).query('EID.str.isdigit()')
*** ValueError: Cannot mask with non-boolean array containing NA / NaN values
</code></pre>
<p>I even tried creating a new column:</p>
<pre><code>(Pdb) df.dropna(subset=['EID']).assign(isnum = lambda x: x.EID.str.isdigit())
</code></pre>
<p>but this new column is nothing but <code>NaN</code>.</p>
<p>How can I remove the rows where this column is non-numeric in the middle of a chain?</p>
<p>Edit: Sample dataset</p>
<p>input:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>EID</th>
<th>Name</th>
</tr>
</thead>
<tbody>
<tr>
<td>123</td>
<td>Madsen,Gunnar</td>
</tr>
<tr>
<td>ret</td>
<td>Greene,Richard</td>
</tr>
<tr>
<td>465</td>
<td>Stull,Matthew</td>
</tr>
</tbody>
</table>
</div>
<p>Desired output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>EID</th>
<th>Name</th>
</tr>
</thead>
<tbody>
<tr>
<td>123</td>
<td>Madsen,Gunnar</td>
</tr>
<tr>
<td>465</td>
<td>Stull,Matthew</td>
</tr>
</tbody>
</table>
</div> | <python><pandas> | 2023-11-07 19:08:18 | 2 | 552 | Josh English |
77,440,686 | 10,634,362 | POST method cannot redirect to HTML page(Backend Flask, Frontend Vanilla JS) | <p>I am trying to use value of dropdown from frontend to backend where on behalf of the dropdown value calculation will be happened and that endpoint will return a HTML page.
Would like to add that, I don't need to get any response from the POST method (eg: <code>return jsonify</code>). So far, I am using <code>Flask</code> for backend and for frontend <code>Vanilla JS</code>.</p>
<p>I have successfully pass and fetched dropdown value using POST method in backend. I have also checked from server and browser console and no error found.</p>
<p>The approach I have taken do far:</p>
<h3>Frontend</h3>
<h4>HTML file, written in index.html</h4>
<pre class="lang-html prettyprint-override"><code><label for="mode">Select Mode:</label>
<select id="mode" name="mode">
<option value="autoencoder">Autoencoder</option>
<option value="decoder">Decoder</option>
</select>
<button id="Module_submitBtn">Chose Module</button>
<script src="static/js/call_module.js "></script>
</code></pre>
<h4>JS file</h4>
<pre class="lang-js prettyprint-override"><code>function submitMode() {
var selectedMode = document.getElementById("mode").value;
fetch('/update_mode', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify({ mode: selectedMode })
})
.catch(error => console.error('Error:', error));
}
const submitBtn = document.getElementById('Module_submitBtn');
submitBtn.addEventListener('click', submitMode);
</code></pre>
<h3>Backend</h3>
<h4>python file</h4>
<pre class="lang-py prettyprint-override"><code># This is the desired endpiunt from where HTML file is rendered
@app.route("/decoder_graphical_view", methods=['GET'])
def decoder_graphical_view(name=None):
config_val = Decoder_Config.config_val
return render_template("decoder_graph.html",
N_value=config_val["Decoder_live_plot"]["N_"],
ebno_value=config_val["Decoder_live_plot"]["ebno_"],
phi_value=config_val["Decoder_live_plot"]["phi_"],
learningRate_value=config_val["Decoder_live_plot"]["learning_rate_"],
name=name
)
# This is the endpoint where POST method will work, dropdown value will come and it will call `decoder_graphical_view` function
@app.route('/update_mode', methods=['POST'])
def update_mode(name=None):
selected_mode = request.json.get('mode')
if selected_mode == 'autoencoder':
return redirect(url_for('decoder_graphical_view'))
elif selected_mode == 'decoder':
return redirect(url_for('decoder_graphical_view'))
</code></pre>
<p>I have also tried the <code>update_mode function</code> in the following way but same result and failed.</p>
<pre class="lang-py prettyprint-override"><code>@app.route('/update_mode', methods=['POST'])
def update_mode(name=None):
selected_mode = request.json.get('mode')
# set default value for the config data
Decoder_Config.config_val["Decoder_live_plot"]["N_"] = 100
Decoder_Config.config_val["Decoder_live_plot"]["ebno_"] = 2.0
Decoder_Config.config_val["Decoder_live_plot"]["phi_"] = 5
Decoder_Config.config_val["Decoder_live_plot"]["train_"] = "True"
train_var = str(Decoder_Config.config_val["Decoder_live_plot"]["train_"]).lower()
if train_var == "true":
train_var = True
else:
train_var = False
Decoder_Config.config_val["Decoder_live_plot"]["learning_rate_"] = 0.005
if selected_mode == 'autoencoder':
print("selected_mode: ", selected_mode)
return render_template("decoder_graph.html",
N_value=config_val["Decoder_live_plot"]["N_"],
ebno_value=config_val["Decoder_live_plot"]["ebno_"],
phi_value=config_val["Decoder_live_plot"]["phi_"],
learningRate_value=config_val["Decoder_live_plot"]["learning_rate_"],
name=name
)
elif selected_mode == 'decoder':
return render_template("decoder_graph.html",
N_value=config_val["Decoder_live_plot"]["N_"],
ebno_value=config_val["Decoder_live_plot"]["ebno_"],
phi_value=config_val["Decoder_live_plot"]["phi_"],
learningRate_value=config_val["Decoder_live_plot"]["learning_rate_"],
name=name
)
</code></pre>
<h3>Trial</h3>
<p>So far what I have tried are listed below: <a href="https://github.com/braintree/braintree_flask_example/blob/main/app.py#L65" rel="nofollow noreferrer">1</a>, <a href="https://www.reddit.com/r/flask/comments/oscqgt/how_to_redirect_page_after_post/" rel="nofollow noreferrer">2</a>, <a href="https://stackoverflow.com/questions/69846684/unable-to-redirect-to-another-route-in-flask">3</a>, though could not get any valid result.</p>
<p>In server console I have seen the following message while I have pressed <code>Submit button</code></p>
<pre><code>"POST /update_mode HTTP/1.1" 302 -
"GET /decoder_graphical_view HTTP/1.1" 200
</code></pre>
<p>What could be the possible way to get a rendered HTML after calling the POST method?</p>
| <javascript><python><html><flask> | 2023-11-07 18:40:20 | 2 | 701 | karim |
77,440,355 | 4,602,359 | Regular expression with negative lookahead and negative lookbehind to check if my match IS NOT between [[ and ]] | <p>I'm trying to write a Python script which replaces occurrences of given keywords, in given md file, by themself between [[ and ]].</p>
<p>It will be used several times on the same files, so I don't want to end with, for instance, FOO becoming [[FOO]], then [[[[FOO]]]] etc.</p>
<p>So I don't want FOO to be circle with [[ and ]].</p>
<p>The closest version I came up with is this:
<code>(?<!\[\[)\b(FOO)\b(?!\]\])</code></p>
<p>The status of my test list is:</p>
<pre><code>Should match : lorem ipsum FOO dolor ==> OK
Should NOT match : lorem ipsum [[FOO]] dolor ==> OK
Should NOT match : lorem [[ipsum FOO dolor]] sit amet ==> Not OK
Should NOT match : lorem [[ipsumFOOsolor]] sit amet ==> OK
Should NOT match : [[lorem]] [[ipsum-FOO&dolor-sit.pdf#page=130]] ==> Not OK
</code></pre>
<p>for reference, I would like to use this regexp in this python snippet:</p>
<pre class="lang-py prettyprint-override"><code> for term in term_list:
pattern = r'(?<!\[\[)\b(' + re.escape(term) + r')\b(?!\]\])'
file_content = re.sub(pattern, r'[[\1]]', file_content)
</code></pre>
<p>What could be the regexp I need?
What is wrong with this approach?</p>
<p>Thanks!</p>
| <python><regex><regex-negation><regex-replace> | 2023-11-07 17:42:04 | 2 | 348 | A. Ocannaille |
77,439,657 | 3,104,974 | Refer to Column by Content of Another Column | <p>I have a Dataframe of the following pattern</p>
<pre><code>+---+---+---+---+
|ref| A| B| C|
+---+---+---+---+
| A| 2| 3| 4|
| C| 9| 8| 7|
| B| 5| 6| 7|
+---+---+---+---+
</code></pre>
<p>I want to have a new column that contains the value of the column referenced in <code>ref</code>, so</p>
<pre><code>+---+
|new|
+---+
| 2|
| 7|
| 6|
+---+
</code></pre>
<p>An explicit solution would be <code>df.withColumn("new", F.when(df.ref == "A", df.A).when(...))</code>, but since I don't know in advance what values <code>ref</code> can contain, I would need a way to do this dynamically.</p>
| <python><pyspark> | 2023-11-07 15:52:55 | 2 | 6,315 | ascripter |
77,439,652 | 2,123,706 | convert column of lists of dictionaries to separate columns | <p>I have:</p>
<pre><code>grp = ["A","B","C","A","C","C","B"]
dictl = ["[{'TypeID': 0, 'Description': 'blah', 'DateCreated': '2018-08-09T14:00:30.957'}]",
"[{'TypeID': 0, 'Description': 'blah', 'DateCreated': '2018-08-09T14:00:30.957'}]",
"[]","[{'TypeID': 0, 'Description': 'blah', 'DateCreated': '2018-08-09T14:00:31.504'}]",
"[{'TypeID': 0, 'Description': 'blah', 'DateCreated': '2018-08-09T14:00:31.504'}]",
"[]","[{'TypeID': 0, 'Description': 'blah', 'DateCreated': '2018-08-09T14:00:31.504'}]"]
df = pd.DataFrame({'grp':grp,'dictl':dictl})
</code></pre>
<p>I would like to convert it to:</p>
<pre><code>pd.DataFrame({'grp':["A","B","C","A","C","C","B"],
'TypeID':["0","0","","0","0","","0"],
'Description':["blah","blah","","blah","blah","","blah"],
'DateCreated':["2018-08-09T14:00:30.957","2018-08-09T14:00:30.957","","2018-08-09T14:00:31.504","2018-08-09T14:00:31.504","","2018-08-09T14:00:31.504"]})
</code></pre>
<p>I tried suggestions from <a href="https://stackoverflow.com/questions/70488100/change-a-column-containing-list-of-dict-to-columns-in-a-dataframe">Change a column containing list of dict to columns in a DataFrame</a>, and had the following issues:</p>
<pre><code>for grp, dictl in df:
rec = {'Name': grp}
rec.update(x for d in dictl for x in d.items())
records.append(rec)
</code></pre>
<p>error: <code>ValueError: too many values to unpack (expected 2)</code></p>
<p>and</p>
<pre><code>df['dictl'].apply(lambda c:
pd.Series({next(iter(x.keys())).strip(':'):
next(iter(x.values())) for x in c})
)
</code></pre>
<p>gave error: <code>AttributeError: 'str' object has no attribute 'keys'</code></p>
<p>I have > 2m rows, so would like this method to be quick if possible</p>
| <python><pandas> | 2023-11-07 15:52:09 | 2 | 3,810 | frank |
77,439,647 | 2,725,810 | Unaccounted for time with ThreadPoolExecutor | <p>I have an AWS Lambda function:</p>
<pre class="lang-py prettyprint-override"><code>import boto3
import json
import concurrent.futures
from stats import Timer
client = boto3.client('lambda')
def do_lambda():
timer = Timer()
response = client.invoke(
FunctionName = 'arn:aws:lambda:us-east-1:497857710590:function:wherewasit-search',
InvocationType = 'RequestResponse',
Payload = json.dumps({'keys': 1}))
payload = json.loads(json.load(response['Payload'])['body'])
print(f"Response after {timer.stop()}ms")
def do_work(n_threads):
timer = Timer()
with concurrent.futures.ThreadPoolExecutor() as executor:
futures = []
for i in range(n_threads):
futures.append(executor.submit(do_lambda))
for future in concurrent.futures.as_completed(futures):
pass
print(f"Joined all: {timer.stop()}ms")
def lambda_handler(event, context):
do_work(9)
return {
'statusCode': 200,
'body': json.dumps('Hello from Lambda!')
}
</code></pre>
<p>The output:</p>
<pre class="lang-none prettyprint-override"><code>Response after 1236ms
Response after 1242ms
Response after 1251ms
Response after 1272ms
Response after 1322ms
Response after 1359ms
Response after 1126ms
Response after 1170ms
Response after 1246ms
Joined all: 2496ms
</code></pre>
<p>Given that all the threads are done in at most 1.4 seconds, what is another second spent on before all the threads are joined?</p>
| <python><multithreading><performance><aws-lambda><threadpool> | 2023-11-07 15:51:23 | 1 | 8,211 | AlwaysLearning |
77,439,388 | 4,019,775 | Locust - Group multiple api calls into one | <p>Is there a way to group multiple api calls into 1 in the locust report/UI ? I have this specific requirement because I want to acheive 40 transactions/second and in my case 3 api calls in sequential fashion make up for 1 transaction. I have already wrapped all my api calls into 1 task. Here is the code.</p>
<pre><code> host = config_utils.get_from_test_config('platform.initiate.baseUri')
def on_start(self):
self.account_host = config_utils.get_from_test_config('platform.initiate.baseUri')
self.api_host = config_utils.get_from_test_config('platform.api.baseUri')
self.username = config_utils.get_from_merchant_config('customer.go.apiToken')
self.password = config_utils.get_from_merchant_config('customer.go.apiSecret')
@task
def complete_workflow(self):
self.client.locust_name = "complete_workflow"
access_token = oauth.get_oauth_token(self.username, self.password)
initiate_headers = {'Content-Type': 'application/json', "Authorization": f"Bearer {access_token}"}
payload = {"customerInternalReference": "QEC PENN testing customer", "workflowDefinition": {"key": 3}}
initiate_response = self.client.post(f"{self.account_host}/api/v1/accounts", json=payload,
headers=initiate_headers, name="v3")
response = json.loads(initiate_response.text)
workflow_credentials = response.get("workflowExecution", {}).get("credentials", [])
id_credentials = [credential for credential in workflow_credentials if credential.get("category") == "ID"]
selfie_credentials = [credential for credential in workflow_credentials if
credential.get("category") == "SELFIE"]
self.workflow_id = response.get("workflowExecution", {}).get("id")
self.account_id = response.get("account", {}).get("id")
self.api_token = id_credentials[0].get("api", {}).get("token")
self.id_credential = id_credentials[0].get("id")
self.selfie_credential = selfie_credentials[0].get("id")
front_image = (
'USA_DL_front.jpg',
open('images/USA_DL_front.jpg', 'rb'),
"image/jpeg"
)
data = {'file': front_image}
encoder = MultipartEncoder(fields=data)
headers = {'Accept': '*/*', 'Content-Type': encoder.content_type, "Authorization": f"Bearer {self.api_token}"}
self.client.post(
f"{self.api_host}/api/v1/accounts/{self.account_id}/workflow-executions/{self.workflow_id}/credentials/{self.id_credential}/parts/FRONT",
data=encoder, headers=headers, name="v3")
finalise_header = {'Accept': '*/*', "Authorization": f"Bearer {self.api_token}"}
self.client.put(
f"{self.api_host}/api/v1/accounts/{self.account_id}/workflow-executions/{self.workflow_id}",
headers=finalise_header, name="finalise")
</code></pre>
| <python><performance-testing><load-testing><locust> | 2023-11-07 15:15:02 | 1 | 884 | Nilamber Singh |
77,439,357 | 5,565,481 | Chrome browser closes immediately after being launched with selenium | <p>Hello im trying to run the following code in python.</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service as ChromeService
from webdriver_manager.chrome import ChromeDriverManager
driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()))
def verify_title():
# Navigate to the website
driver.get("https://google.com")
# Get the title of the page
title = driver.title
# Verify the title
expected_title = "Google"
if title == expected_title:
print("Title verification successful!")
else:
print(f"Title verification failed. Expected '{expected_title}, but got '{title}'.")
# Close the browser
driver.quit()
if __name__ == '__main__':
verify_title()
</code></pre>
<p>It does run successfully which outputs "Title verification successful!" however the chrome browser closes immediately.</p>
<p>Ive even removed the code
driver.quit() to see if that made any difference but still same problem.</p>
<p>Thank you.</p>
| <python><google-chrome><selenium-webdriver><testing><automation> | 2023-11-07 15:10:40 | 1 | 861 | Mr_Shoryuken |
77,439,324 | 482,439 | Caddy: How to serve static files from Django | <p>It's my first experience with Caddy and I'm having difficulties configuring it properly concerning static files.</p>
<p>I'm using an Ubuntu server and I'm running Caddy and Django+Gunicorn as Docker containers.</p>
<p>It works perfectly well except that it gives a 404 on static files such as CSS and JS files.</p>
<p>I've collected all static files to their corresponding directories in /home/myusername/static and have the following Caddyfile:</p>
<pre><code>mydomain.com {
encode gzip
handle_path /static/* {
root * "/home/myusername/static/"
file_server
}
handle_path /media/* {
root * "/home/myusername/media/"
file_server
}
reverse_proxy django-gunicorn-container-name:8000
}
</code></pre>
<p>What should I do to make Caddy serve static files correctly?</p>
<p>Any suggestions will be much appreciated!
Thanks!</p>
<p>EDIT: I'm using the following Dockerfile and Docker-compose.yml</p>
<p>Dockerfile:</p>
<pre><code>FROM python:latest
EXPOSE 8000
WORKDIR /pairs_trade_front_end_docker
COPY . .
RUN apt-get update
RUN pip install --upgrade pip
RUN pip3 install -r requirements.txt
CMD ["/bin/bash", "-c", "nohup python3 manage.py collectstatic --noinput & nohup python3 manage.py migrate & gunicorn -b 0.0.0.0:8000 setup.wsgi:application"]
</code></pre>
<p>docker-compose.yml:</p>
<pre><code>version: '3.9'
services:
database:
image: 'postgres:latest'
container_name: postgres
ports:
- 5432:5432
volumes:
- ~/postgres-data/:/var/lib/postgresql/data/
- ./logs:/logs
- ./postgresql.conf:/etc/postgresql.conf
env_file:
- .env
networks:
stats-trade-network:
aliases:
- postgresForStatsTrade
healthcheck:
test: ["CMD-SHELL", "pg_isready"]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
webserver:
image: 'antoniorcampos/pairs-trade:0.9'
container_name: stats-trade-web-server
ports:
- 8000:8000
volumes:
- ~/static:/static
env_file:
- .env
networks:
stats-trade-network:
aliases:
- webserverForStatsTrade
depends_on:
database:
condition: service_healthy
restart: unless-stopped
caddy:
image: caddy:latest
ports:
- "80:80"
- "443:443"
volumes:
- ./Caddyfile:/etc/caddy/Caddyfile
networks:
stats-trade-network:
aliases:
- caddyForStatsTrade
depends_on:
webserver:
condition: service_started
database:
condition: service_healthy
restart: unless-stopped
networks:
stats-trade-network:
driver: bridge
</code></pre>
<p>And then I copy the static files to /home/myusername/static using a basic linux command line <code>cp -r ...</code>. I do it using myusername.</p>
| <python><django><caddy><caddyfile> | 2023-11-07 15:06:25 | 1 | 625 | AntonioR |
77,439,308 | 2,414,957 | fixing the repetitive wave structure in depth map when writing to disk | <p>My depth map has no problem when I visualize it using <code>rviz2</code> however when I save it using <code>astype(np.uint16)</code> after multiplying the depth values by <code>1000</code> as mentioned by the author of BundleSDF git repo here, it saves depth images in problematic shape.</p>
<p>If I use this script to save depth images:</p>
<pre><code>from pathlib import Path
from rosbags.highlevel import AnyReader
import numpy as np
from PIL import Image
from datetime import datetime
from matplotlib import image
import cv2
import matplotlib.pyplot as plt
with AnyReader([Path('/home/mona/rosbag2_2023_11_06-15_44_24')]) as reader:
connections = [x for x in reader.connections if x.topic == '/camera/camera/aligned_depth_to_color/image_raw']
for connection, timestamp, rawdata in reader.messages(connections=connections):
msg = reader.deserialize(rawdata, connection.msgtype)
timestamp_dt = datetime.fromtimestamp(msg.header.stamp.sec + msg.header.stamp.nanosec * 1e-9)
timestamp_str = timestamp_dt.strftime("%Y-%m-%d %H:%M:%S.%f")
timestamp_ns = msg.header.stamp.sec * 1e9 + msg.header.stamp.nanosec
numeric_timestamp = int(timestamp_ns / 1e-9)
image_data = msg.data.reshape(480, 640,-1)*1000
# Take only the first channel (grayscale)
grayscale_image = image_data[:, :, 0]
depth_image_name = 'depth/' + str(numeric_timestamp)[:20] + '.png'
cv2.imwrite(depth_image_name, grayscale_image.astype(np.uint16))
</code></pre>
<p>I get this:</p>
<p><a href="https://i.sstatic.net/Nlec4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Nlec4.png" alt="enter image description here" /></a></p>
<p>and if I use this slightly modified script to save images</p>
<pre><code>from pathlib import Path
from rosbags.highlevel import AnyReader
import numpy as np
from PIL import Image
from datetime import datetime
from matplotlib import image
import cv2
import matplotlib.pyplot as plt
with AnyReader([Path('/home/mona/rosbag2_2023_11_06-15_44_24')]) as reader:
connections = [x for x in reader.connections if x.topic == '/camera/camera/aligned_depth_to_color/image_raw']
for connection, timestamp, rawdata in reader.messages(connections=connections):
msg = reader.deserialize(rawdata, connection.msgtype)
timestamp_dt = datetime.fromtimestamp(msg.header.stamp.sec + msg.header.stamp.nanosec * 1e-9)
timestamp_str = timestamp_dt.strftime("%Y-%m-%d %H:%M:%S.%f")
timestamp_ns = msg.header.stamp.sec * 1e9 + msg.header.stamp.nanosec
numeric_timestamp = int(timestamp_ns / 1e-9)
w, h = msg.width, msg.height
image_data = msg.data.reshape(h, w,-1)*1000
# Take only the first channel (grayscale)
grayscale_image = image_data[:, :, 0]
depth_image_name = 'depth/' + str(numeric_timestamp)[:20] + '.png'
depth_data = np.array(grayscale_image, dtype=np.uint16)
image16 = cv2.UMat(depth_data)
cv2.imwrite(depth_image_name, image16.get())
</code></pre>
<p>I get this:</p>
<p><a href="https://i.sstatic.net/3jhkw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3jhkw.png" alt="enter image description here" /></a></p>
<p>My ros2 bag is healthy and has no problem. Also, since this was a depth aligned capture, the rgb images are healthy and saved in correct format.</p>
<p>How can I fix this problem and save the depth images correctly?</p>
<p>Related links:</p>
<ol>
<li><p><a href="https://github.com/NVlabs/BundleSDF/issues/82#issuecomment-1698248648" rel="nofollow noreferrer">https://github.com/NVlabs/BundleSDF/issues/82#issuecomment-1698248648</a></p>
</li>
<li><p><a href="https://github.com/NVlabs/BundleSDF/issues/82#issuecomment-1699580570" rel="nofollow noreferrer">https://github.com/NVlabs/BundleSDF/issues/82#issuecomment-1699580570</a></p>
</li>
<li><p><a href="https://github.com/NVlabs/BundleSDF/issues/74#issuecomment-1681430819" rel="nofollow noreferrer">https://github.com/NVlabs/BundleSDF/issues/74#issuecomment-1681430819</a></p>
</li>
</ol>
<p>My cup data as shown by <code>identify</code> command is following the same format as <code>milk</code> data captured by author of BundleSDF git repo.</p>
<pre><code>(base) mona@ada:~/BundleSDF/milk/2022-11-18-15-10-24_milk/depth$ identify ../../../cup/depth/16993034935581901703.png
../../../cup/depth/16993034935581901703.png PNG 640x480 640x480+0+0 16-bit Grayscale Gray 115375B 0.000u 0:00.000
(base) mona@ada:~/BundleSDF/milk/2022-11-18-15-10-24_milk/depth$ identify 1668813100171987935.png
1668813100171987935.png PNG 640x480 640x480+0+0 16-bit Grayscale Gray 49234B 0.000u 0:00.000
</code></pre>
<p>Some system info:</p>
<pre><code>(base) mona@ada:~$ ros2 wtf
/opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: joy has been updated to a new version. local: 3.1.0 < latest: 3.3.0
/opt/ros/humble/lib/python3.10/site-packages/ros2doctor/api/package.py: 112: UserWarning: sdl2_vendor has been updated to a new version. local: 3.1.0 < latest: 3.3.0
All 5 checks passed
</code></pre>
<pre><code>(base) mona@ada:~$ /usr/bin/python3.10
Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] on linux
</code></pre>
<pre><code>(base) mona@ada:~$ uname -a
Linux ada 6.2.0-36-generic #37~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Mon Oct 9 15:34:04 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
(base) mona@ada:~$ lsb_release -a
LSB Version: core-11.1.0ubuntu4-noarch:security-11.1.0ubuntu4-noarch
Distributor ID: Ubuntu
Description: Ubuntu 22.04.3 LTS
Release: 22.04
Codename: jammy
</code></pre>
<p>The RGB image saved looks like this:</p>
<p><a href="https://i.sstatic.net/IjwlS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IjwlS.png" alt="enter image description here" /></a></p>
<p>The process for capture is:</p>
<ol>
<li><p>open a terminal and type this command: <code>ros2 launch realsense2_camera rs_launch.py align_depth.enable:=true</code></p>
</li>
<li><p>open a new terminal and type this command: <code>ros2 bag record -a</code></p>
</li>
<li><p>press <code>CTL+C</code> to stop recording</p>
</li>
<li><p>run the script</p>
</li>
</ol>
<p>checking the depth aligned capture in <code>rviz2</code> I see my <code>depth aligned depth raw image</code> and <code>raw color image</code> like the following and it doesn't seem if there is a problem:
<a href="https://i.sstatic.net/Em2hv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Em2hv.png" alt="enter image description here" /></a></p>
<p>These are the topics I am showing in <code>rviz2</code>:</p>
<p><a href="https://i.sstatic.net/pzldO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pzldO.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/fKYJD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fKYJD.png" alt="enter image description here" /></a></p>
| <python><numpy><opencv><computer-vision><ros2> | 2023-11-07 15:05:04 | 0 | 38,867 | Mona Jalal |
77,439,294 | 21,404,794 | plotting line between values ignoring NaN values | <p>I'm trying to do a plot with a bunch of data, and I want some of the data, which has NaNs, to be plotted too. The problem I face is that I want a line through the NaN values connecting the other values. If we look at the image, the blue dots should be connected like the orange ones. If there were no NaNs I could draw it, but then I wouldnt be able to draw the orange points, and I need both.</p>
<p><a href="https://i.sstatic.net/LZ28A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LZ28A.png" alt="example of wanted vs obtained" /></a></p>
<p>I'll leave the code for the MWE that gives the figure here:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
x = [1.0, 3.0, 5.0, 7.0, 10.0, 15.0, 20.0]
y= [1.0, 3.0, 5.0, 7.0, 10.0, 5.0, 20.0]
Arealval = [30.1, np.nan, 33.4, np.nan, 22.4, np.nan, 35.8]
plt.plot(x, Arealval, marker= 'X', ls=':')
plt.plot(x, y, marker='o', ls=':')
plt.show()
</code></pre>
<p>I thought of duplicating my x values having 2 lists and keeping only the ones without NaN on one of them for the plot, and it works (example and image below) but that would be completely impractical to do with a large dataset.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
x = [1.0, 3.0, 5.0, 7.0, 10.0, 15.0, 20.0]
y= [1.0, 3.0, 5.0, 7.0, 10.0, 5.0, 20.0]
x2 = [1.0, 5.0, 10.0, 20.0]
Arealval = [30.1, 33.4, 22.4, 35.8]
plt.plot(x2, Arealval, marker= 'X', ls=':')
plt.plot(x, y, marker='o', ls=':')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/G5YUl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/G5YUl.png" alt="without nans" /></a></p>
<p>To be clear, I don't want to interpolate or anything, there are just a bunch of observations (the ones with NaNs), and a bunch of predictions (the ones without NaNs), and I want to plot both, but giving more importance to the observations by having the line to locate them easier.</p>
<p>Is there any way to programatically draw a line between points ignoring the NaNs?</p>
| <python><matplotlib> | 2023-11-07 15:02:45 | 1 | 530 | David Siret Marqués |
77,439,288 | 22,128,188 | Encountered error while generating package metadata | <p>Collecting dotenv</p>
<pre><code>Using cached dotenv-0.0.5.tar.gz (2.4 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'error'
error: subprocess-exited-with-error
python setup.py egg_info did not run successfully.
exit code: 1
[77 lines of output]
C:\Users\naval\OneDrive\Desktop\Learning Hub\Python-BootCamp\section 64\day-64-starting-files-top-movies\venv\Lib\site-packages\setuptools\installer.py:27: SetuptoolsDeprecationWarning: setuptools.installer is deprecated. Requirements should be satisfied by a PEP 517 installer.
warnings.warn(
error: subprocess-exited-with-error
python setup.py egg_info did not run successfully.
exit code: 1
[22 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 14, in <module>
File "C:\Users\naval\AppData\Local\Temp\pip-wheel-n1gnwdbh\distribute_5385a5f0f3b7409680c07cd450777e03\setuptools\__init__.py", line 2, in <module>
from setuptools.extension import Extension, Library
File "C:\Users\naval\AppData\Local\Temp\pip-wheel-n1gnwdbh\distribute_5385a5f0f3b7409680c07cd450777e03\setuptools\extension.py", line 5, in <module>
from setuptools.dist import _get_unpatched
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "C:\Users\naval\OneDrive\Desktop\Learning Hub\Python-BootCamp\section 64\day-64-starting-files-top-movies\venv\Lib\site-packages\_virtualenv.py", line 89, in exec_module
old(module)
File "C:\Users\naval\AppData\Local\Temp\pip-wheel-n1gnwdbh\distribute_5385a5f0f3b7409680c07cd450777e03\setuptools\dist.py", line 7, in <module>
from setuptools.command.install import install
File "C:\Users\naval\AppData\Local\Temp\pip-wheel-n1gnwdbh\distribute_5385a5f0f3b7409680c07cd450777e03\setuptools\command\__init__.py", line 8, in <module>
from setuptools.command import install_scripts
File "C:\Users\naval\AppData\Local\Temp\pip-wheel-n1gnwdbh\distribute_5385a5f0f3b7409680c07cd450777e03\setuptools\command\install_scripts.py", line 3, in <module>
from pkg_resources import Distribution, PathMetadata, ensure_directory
File "C:\Users\naval\AppData\Local\Temp\pip-wheel-n1gnwdbh\distribute_5385a5f0f3b7409680c07cd450777e03\pkg_resources.py", line 1518, in <module>
register_loader_type(importlib_bootstrap.SourceFileLoader, DefaultProvider)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'importlib._bootstrap' has no attribute 'SourceFileLoader'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
Encountered error while generating package metadata.
See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
Traceback (most recent call last):
File "C:\Users\naval\OneDrive\Desktop\Learning Hub\Python-BootCamp\section 64\day-64-starting-files-top-movies\venv\Lib\site-packages\setuptools\installer.py", line 82, in fetch_build_egg
subprocess.check_call(cmd)
File "C:\Users\naval\AppData\Local\Programs\Python\Python311\Lib\subprocess.py", line 413, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['C:\\Users\\naval\\OneDrive\\Desktop\\Learning Hub\\Python-BootCamp\\section 64\\day-64-starting-files-top-movies\\venv\\Scripts\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\Users\\naval\\AppData\\Local\\Temp\\tmp1fswpi90', '--quiet', 'distribute']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "C:\Users\naval\AppData\Local\Temp\pip-install-2udsov2_\dotenv_a5dadefcebc54b9f95af1edad47b6fce\setup.py", line 13, in <module>
setup(name='dotenv',
File "C:\Users\naval\OneDrive\Desktop\Learning Hub\Python-BootCamp\section 64\day-64-starting-files-top-movies\venv\Lib\site-packages\setuptools\__init__.py", line 86, in setup
_install_setup_requires(attrs)
File "C:\Users\naval\OneDrive\Desktop\Learning Hub\Python-BootCamp\section 64\day-64-starting-files-top-movies\venv\Lib\site-packages\setuptools\__init__.py", line 80, in _install_setup_requires
dist.fetch_build_eggs(dist.setup_requires)
File "C:\Users\naval\OneDrive\Desktop\Learning Hub\Python-BootCamp\section 64\day-64-starting-files-top-movies\venv\Lib\site-packages\setuptools\dist.py", line 875, in fetch_build_eggs
resolved_dists = pkg_resources.working_set.resolve(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\naval\OneDrive\Desktop\Learning Hub\Python-BootCamp\section 64\day-64-starting-files-top-movies\venv\Lib\site-packages\pkg_resources\__init__.py", line 789, in resolve
dist = best[req.key] = env.best_match(
^^^^^^^^^^^^^^^
File "C:\Users\naval\OneDrive\Desktop\Learning Hub\Python-BootCamp\section 64\day-64-starting-files-top-movies\venv\Lib\site-packages\pkg_resources\__init__.py", line 1075, in best_match
return self.obtain(req, installer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\naval\OneDrive\Desktop\Learning Hub\Python-BootCamp\section 64\day-64-starting-files-top-movies\venv\Lib\site-packages\pkg_resources\__init__.py", line 1087, in obtain
return installer(requirement)
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\naval\OneDrive\Desktop\Learning Hub\Python-BootCamp\section 64\day-64-starting-files-top-movies\venv\Lib\site-packages\setuptools\dist.py", line 945, in fetch_build_egg
return fetch_build_egg(self, req)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\naval\OneDrive\Desktop\Learning Hub\Python-BootCamp\section 64\day-64-starting-files-top-movies\venv\Lib\site-packages\setuptools\installer.py", line 84, in fetch_build_egg
raise DistutilsError(str(e)) from e
distutils.errors.DistutilsError: Command '['C:\\Users\\naval\\OneDrive\\Desktop\\Learning Hub\\Python-BootCamp\\section 64\\day-64-starting-files-top-movies\\venv\\Scripts\\python.exe', '-m', 'pip', '--disable-pip-version-check', 'wheel', '--no-deps', '-w', 'C:\\Users\\naval\\AppData\\Local\\Temp\\tmp1fswpi90', '--quiet', 'distribute']' returned non-zero exit status 1.
[end of output]
</code></pre>
<p>note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed</p>
<p>Encountered error while generating package metadata.</p>
<p>See above for output.</p>
<p>note: This is an issue with the package mentioned above, not pip.
hint: See above for details.</p>
| <python><pycharm><python-venv><dotenv> | 2023-11-07 15:02:21 | 1 | 396 | navalega0109 |
77,439,273 | 7,243,493 | order of python dict changes when send as a json response (python 3.11.3) | <p>I am trying to send a python dict to my frontend so can create some html based on the values, i therefore need the order of the dict to be reliable but when i send my dict to the frontend it changes the order.
I am using python 3.11.3</p>
<p>this is python dict i send as a response in add_indicator()</p>
<pre><code>corect
{'kind': 'ao', 'fast': 'int','slow': 'int', 'offset': 'int'}
</code></pre>
<p>but for some reason the response i get from postJsonString is in a diffrent order. I am using Pythin 3.11.3 so it does have ordered dicts.
I have just no clue what is going on, it seems extremely strange. Anyone can enlighten me?</p>
<pre><code>messed up order
{fast: 'int', kind: 'ao', offset: 'int', slow: 'int'}
</code></pre>
<p>PYTHON ROUTE</p>
<pre><code>
@bp.route('/add_indicator', methods=('POST', 'GET'))
def add_indicator():
if request.method == 'POST':
indicator = {'kind': 'ao', 'fast': 'int',
'slow': 'int', 'offset': 'int'}
indicator = jsonify(indicator)
print(indicator)
return indicator
</code></pre>
<p>JAVASCRIPT</p>
<pre><code>async function postIt() {
let ind_props = await postJsonString(data, "/add_indicator");
console.log("after postJsonString", ind_props);
}
async function postJsonString(data, endpoint) {
let response = await fetch(endpoint, {
method: "POST",
headers: {
"Content-Type": "application/json",
},
body: JSON.stringify(data),
});
if (!response.ok) {
throw new Error("Request failed");
}
const responseData = await response.json();
return responseData;
}
</code></pre>
| <javascript><python><json><dictionary><request> | 2023-11-07 14:59:48 | 1 | 568 | Soma Juice |
77,439,174 | 1,142,881 | How to create a dataclass child instance from a parent's instance? | <p>I have the following <code>dataclass</code> use-case:</p>
<pre><code>from dataclasses import dataclass
@dataclass
class A:
x: int = 0
@dataclass
class B(A):
y: int = 0
a = A(x=5)
# what would a flatten implementation be?
b = B(flatten(a)?, y=6)
</code></pre>
<p>how to accomplish this use-case?</p>
| <python> | 2023-11-07 14:45:21 | 2 | 14,469 | SkyWalker |
77,439,172 | 8,844,500 | 'str' object has no attribute 'base_dtype' using keras_core and tensorflow | <p>I scratch my head on the following (simplistic) problem using keras_core on tensorflow</p>
<pre class="lang-py prettyprint-override"><code>import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import numpy as np
import keras_core as keras
import tensorflow as tf
import torch
print("Keras version: ", keras.__version__)
print("numpy version: ", np.__version__)
print("tensorflow version: ", tf.__version__)
print("torch version: ", torch.__version__)
d_input = 12
d_output = 5
inputs = keras.Input(shape=(int(d_input),))
outputs = keras.layers.Dense(
int(d_output), use_bias=False, activation='sigmoid')(inputs)
model = keras.Model(inputs=inputs, outputs=outputs)
loss_function = keras.losses.BinaryCrossentropy(from_logits=False)
optimizer = keras.optimizers.Adam()
metrics = [keras.metrics.Accuracy()]
model.compile(optimizer=optimizer, loss=loss_function, metrics=metrics)
batch_size = 24
X = np.random.randint(0, 2, size=(2*batch_size, d_input))
y = np.random.randint(0, 2, size=(2*batch_size, d_output))
X = keras.ops.convert_to_tensor(X, dtype="int8")
y = keras.ops.convert_to_tensor(y, dtype="int8")
model.fit(X, y, batch_size=batch_size)
</code></pre>
<p>that returns</p>
<pre class="lang-py prettyprint-override"><code>Using TensorFlow backend
2023-11-07 15:32:41.037477: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-11-07 15:32:42.714479: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE4.1 SSE4.2 AVX AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-11-07 15:32:42.716272: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
Keras version: 0.1.7
numpy version: 1.23.5
tensorflow version: 2.10.0
torch version: 2.0.1
Traceback (most recent call last):
File "~/script.py", line 34, in <module>
model.fit(X, y, batch_size=batch_size)
File "~miniconda3/envs/nn/lib/python3.10/site-packages/keras_core/src/utils/traceback_utils.py", line 123, in error_handler
raise e.with_traceback(filtered_tb) from None
File "~miniconda3/envs/nn/lib/python3.10/site-packages/keras_core/src/backend/tensorflow/trainer.py", line 69, in train_step
gradients = tape.gradient(loss, trainable_weights)
AttributeError: 'str' object has no attribute 'base_dtype'
</code></pre>
<p>that I do not understand. Note I'm using the new <a href="https://keras.io/keras_core/" rel="nofollow noreferrer">keras_core</a> package. I did not try with jax.</p>
<p>Replacing tensorflow by torch do not raise any error: <code>os.environ["KERAS_BACKEND"] = "torch"</code>. Versions are on the output above.</p>
<p>Any help is warm welcome, either to improve the question or help finding an anwser :-)</p>
| <python><tensorflow><keras><pytorch> | 2023-11-07 14:45:08 | 1 | 329 | FraSchelle |
77,439,119 | 13,132,728 | How to turn multiple rows of the same key with one differing column value in to one row consisting of the count of that variable column for each key | <p>I have a dataframe that is structured like so:</p>
<pre><code>pd.DataFrame(
{'col1':['foo','foo','foo','foo','foo','foo'],'col2':['bar','bar','bar','bar','bar','bar'],'col3':['baz','baz','baz','baz','baz','baz'],'varying_column':['x','y','z','d','e','f']},index=['a','b','c','a','a','b']
).reset_index()
my_key col1 col2 col3 varying_column
0 a foo bar baz x
1 b foo bar baz y
2 c foo bar baz z
3 a foo bar baz d
4 a foo bar baz e
5 b foo bar baz f
</code></pre>
<p>Where each row has an index value and there is one column that varies. What I would like to do is just have one row for each index by creating a new column that is the count of <code>varying_column</code>, like so:</p>
<pre><code> my_key col1 col2 col3 count_varying_column
0 a foo bar baz 3
1 b foo bar baz 2
2 c foo bar baz 1
</code></pre>
<p>I assume this can be accomplished by doing some sort of <code>groupby</code> <code>index</code>, count <code>varying_column</code> and maybe <code>unstack</code>?</p>
<p>In my real data, the other columns also have varying values, but that is beside the point to this question, so I kept them all the same for simplicity.</p>
<p>Note: this is not just a simple group by aggregate as discussed in <a href="https://stackoverflow.com/questions/19384532/get-statistics-for-each-group-such-as-count-mean-etc-using-pandas-groupby">this stack overflow thread</a>. Yes, I want the count for each group, but this question is different in that I want to drop duplicates for <code>varying_column</code> by key and replace it with a new column that has one row per key value and a new column <code>count_varying_column</code> that is a count of each keys <code>varying_column</code>.</p>
| <python><pandas> | 2023-11-07 14:38:46 | 2 | 1,645 | bismo |
77,439,089 | 3,044,825 | Popup parameter of ipyleaflet CircleMarker triggers TraitError | <p>I'm trying to make event callback work in <code>ipyleaflet.CircleMarker</code>. I also need to add <code>CircleMarker.popup</code> to see info of marker that I have clicked:</p>
<pre><code>from ipywidgets import HTML
from ipyleaflet import Map, basemaps, CircleMarker#, Popup
address_collection =\
{0: {'address': 'Liepyno g. 9, Vilnius', 'lon': 25.256049572760737, 'lat': 54.6979514},
1: {'address': 'Paribio g. 25b, Vilnius', 'lon': 25.251277349999988, 'lat': 54.70223025},
2: {'address': 'Linkmenų g. 34, Vilnius', 'lon': 25.272923689938533, 'lat': 54.7098726},
4: {'address': 'Giedraičių g. 60B, Vilnius', 'lon': 25.279606048256667, 'lat': 54.70612675},
7: {'address': 'Pulko g. 2, Vilnius', 'lon': 25.287723473310106, 'lat': 54.7193275},
8: {'address': 'Kareivių g. 7, Vilnius', 'lon': 25.296566713988096, 'lat': 54.71738295},
9: {'address': 'Trinapolio g. 9e, Vilnius', 'lon': 25.28816380095093, 'lat': 54.7250439},
10: {'address': 'Kazio Ulvydo g. 11, Vilnius', 'lon': 25.276568280969475, 'lat': 54.7186202},
11: {'address': 'Apkasų g. 21, Vilnius', 'lon': 25.288784589489467, 'lat': 54.70532835},
12: {'address': 'Kazliškių g. 19, Vilnius', 'lon': 25.29201085, 'lat': 54.6980964}}
def marker_callback(marker, i):
def callback(*args, **kwargs):
#kwargs: {'event': 'interaction', 'type': 'click', 'coordinates': [..., ...]}
#i: index of marker
if kwargs['type'] == 'click':
if marker.fill_color in ('blue', 'red'): marker.fill_color = 'green'
elif marker.fill_color == 'green': marker.fill_color = 'red'
elif kwargs['type'] == 'dblclick':
if marker.color != 'blue': marker.fill_color = 'blue'
return callback
route_map = Map(center=(54.71,25.276843), zoom=13, basemap=basemaps.OpenStreetMap.Mapnik)
for i, address_record in address_collection.items():
location = (address_record['lat'], address_record['lon'])
marker = CircleMarker(radius=8, location=location,
weight=2, color='black', fill_color='blue', fill_opacity=0.3)
marker.popup=HTML(f'{address_record["address"]}<br>index: {i}')
marker.on_click(marker_callback(marker, i))
marker.on_dblclick(marker_callback(marker, i))
route_map.add_layer(marker)
route_map
</code></pre>
<p><a href="https://i.sstatic.net/uDy4M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uDy4M.png" alt="enter image description here" /></a></p>
<p>If I don't use <code>marker.popup</code> my color change event works as expected. Otherwise it works too, only <code>TraitError</code> is raised in a background:</p>
<pre><code>TraitError Traceback (most recent call last)
File ~\PycharmProjects\courier_colab\venv\lib\site-packages\ipywidgets\widgets\widget.py:766, in Widget._handle_msg(self, msg)
764 if 'buffer_paths' in data:
765 _put_buffers(state, data['buffer_paths'], msg['buffers'])
--> 766 self.set_state(state)
768 # Handle a state request.
769 elif method == 'request_state':
File ~\PycharmProjects\courier_colab\venv\lib\site-packages\ipywidgets\widgets\widget.py:648, in Widget.set_state(self, sync_data)
645 if name in self.keys:
646 from_json = self.trait_metadata(name, 'from_json',
647 self._trait_from_json)
--> 648 self.set_trait(name, from_json(sync_data[name], self))
File ~\PycharmProjects\courier_colab\venv\lib\site-packages\traitlets\traitlets.py:1738, in HasTraits.set_trait(self, name, value)
1736 raise TraitError(f"Class {cls.__name__} does not have a trait named {name}")
1737 else:
-> 1738 getattr(cls, name).set(self, value)
File ~\PycharmProjects\courier_colab\venv\lib\site-packages\traitlets\traitlets.py:703, in TraitType.set(self, obj, value)
702 def set(self, obj, value):
--> 703 new_value = self._validate(obj, value)
704 try:
705 old_value = obj._trait_values[self.name]
File ~\PycharmProjects\courier_colab\venv\lib\site-packages\traitlets\traitlets.py:735, in TraitType._validate(self, obj, value)
733 return value
734 if hasattr(self, "validate"):
--> 735 value = self.validate(obj, value)
736 if obj._cross_validation_lock is False:
737 value = self._cross_validate(obj, value)
File ~\PycharmProjects\courier_colab\venv\lib\site-packages\traitlets\traitlets.py:2417, in Float.validate(self, obj, value)
2415 value = float(value)
2416 if not isinstance(value, float):
-> 2417 self.error(obj, value)
2418 return _validate_bounds(self, obj, value)
File ~\PycharmProjects\courier_colab\venv\lib\site-packages\traitlets\traitlets.py:841, in TraitType.error(self, obj, value, error, info)
835 else:
836 e = "The '{}' trait expected {}, not {}.".format(
837 self.name,
838 self.info(),
839 describe("the", value),
840 )
--> 841 raise TraitError(e)
TraitError: The 'east' trait of a Map instance expected a float, not the NoneType None.
---------------------------------------------------------------------------
TraitError Traceback (most recent call last)
File ~\PycharmProjects\courier_colab\venv\lib\site-packages\ipywidgets\widgets\widget.py:766, in Widget._handle_msg(self, msg)
764 if 'buffer_paths' in data:
765 _put_buffers(state, data['buffer_paths'], msg['buffers'])
--> 766 self.set_state(state)
768 # Handle a state request.
769 elif method == 'request_state':
File ~\PycharmProjects\courier_colab\venv\lib\site-packages\ipywidgets\widgets\widget.py:648, in Widget.set_state(self, sync_data)
645 if name in self.keys:
646 from_json = self.trait_metadata(name, 'from_json',
647 self._trait_from_json)
--> 648 self.set_trait(name, from_json(sync_data[name], self))
File ~\PycharmProjects\courier_colab\venv\lib\site-packages\traitlets\traitlets.py:1738, in HasTraits.set_trait(self, name, value)
1736 raise TraitError(f"Class {cls.__name__} does not have a trait named {name}")
1737 else:
-> 1738 getattr(cls, name).set(self, value)
File ~\PycharmProjects\courier_colab\venv\lib\site-packages\traitlets\traitlets.py:703, in TraitType.set(self, obj, value)
702 def set(self, obj, value):
--> 703 new_value = self._validate(obj, value)
704 try:
705 old_value = obj._trait_values[self.name]
File ~\PycharmProjects\courier_colab\venv\lib\site-packages\traitlets\traitlets.py:735, in TraitType._validate(self, obj, value)
733 return value
734 if hasattr(self, "validate"):
--> 735 value = self.validate(obj, value)
736 if obj._cross_validation_lock is False:
737 value = self._cross_validate(obj, value)
File ~\PycharmProjects\courier_colab\venv\lib\site-packages\traitlets\traitlets.py:2417, in Float.validate(self, obj, value)
2415 value = float(value)
2416 if not isinstance(value, float):
-> 2417 self.error(obj, value)
2418 return _validate_bounds(self, obj, value)
File ~\PycharmProjects\courier_colab\venv\lib\site-packages\traitlets\traitlets.py:841, in TraitType.error(self, obj, value, error, info)
835 else:
836 e = "The '{}' trait expected {}, not {}.".format(
837 self.name,
838 self.info(),
839 describe("the", value),
840 )
--> 841 raise TraitError(e)
TraitError: The 'east' trait of a Map instance expected a float, not the NoneType None.
</code></pre>
<p>How to suppress this kind of errors? Is it a bug in <code>ipyleaflet</code> or do I need to implement <code>Popup</code> in a different manner?</p>
| <python><jupyter-notebook><leaflet><ipyleaflet> | 2023-11-07 14:35:15 | 0 | 5,979 | mathfux |
77,438,916 | 11,328,614 | Python pandas, read_fwf | <p>I'm trying to read a table from a text file (or <code>StringIO</code>) into <code>pandas</code>. To accomplish this, I use <code>pandas.read_fwf</code>.</p>
<p>However, I'm facing problems with the automatic column width detection.
In my case it works properly for columns 1-3 but not for column 4, which contains informal text of undefined width.</p>
<p>The detection works good for the first three columns, because their width can properly determined from the headers. The 4th column start can also be determined properly, as it is aligned with the corresponding header.</p>
<p>However, pandas refuses to put all remaining text into the 4th column.
It either creates several <code>Unnamed: X</code> columns with each word of the informal text in one column or it creates one named column which contains only the first word of the informal text.</p>
<p>Here is the column format:</p>
<pre><code>CL NAME STATE INFO
some category some_name some_state some informal info text
...
</code></pre>
<p>I'd like to achieve that all categories are put in column 1, all names in column two, all states in column three and all infos in column 4.</p>
<p>The two options I tried were:</p>
<ul>
<li>
<pre><code>x1 = pandas.read_fwf(infile, infer_nrows=1)
</code></pre>
<p>-> Results in a <code>INFO</code> column containing only the first word of the info text.</p>
<pre><code> CL NAME ... Unnamed: 5 Unnamed: 6
0 some category some_name ... NaN NaN
</code></pre>
</li>
<li>
<pre><code>x2 = pandas.read_fwf(infile)
</code></pre>
<p>-> Results in several unnamed columns each containing one word of the info text.</p>
<pre><code> CL NAME STATE INFO
0 some some_name some_state some
</code></pre>
</li>
</ul>
| <python><python-3.x><pandas><datatable><fixed-width> | 2023-11-07 14:13:06 | 0 | 1,132 | Wör Du Schnaffzig |
77,438,791 | 179,581 | Can I make a macOS app made of Python script behave in the Dock like normal apps do? | <p>The executable inside <code>my.app/Contents/MacOS</code> directory is a Python script (<code>app.py</code>) containing:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/python3
import os
from wsgiref.simple_server import make_server
with make_server(host=(host := '127.0.0.1'), port=(port := 1337),
app=lambda _, c: [c('200 OK', []), b'ok'][1:]) as s:
os.system(f'open http://{host}:{port}')
s.serve_forever()
</code></pre>
<p>When the app is started its icon begins to jump in the Dock for a long time and then jumping stops without getting the running indicator. So it looks like a stopped app that is pinned to the Dock while it's actually still running. It's impossible to move the icon in the Dock until it stops jumping. One can see the activity indicator if the app is pinned to the Dock.</p>
<p><a href="https://i.sstatic.net/QtRQE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QtRQE.png" alt="enter image description here" /></a></p>
<p>Can I somehow communicate with macOS from the Python script to make it behave as normal apps do? E.g. using macOS Python with tkinter and the other pre-installed libs.</p>
<p>If you want to create the app yourself, you will also need the following <code>my.app/Contents/Info.plist</code>:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
"http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>CFBundleExecutable</key><string>app.py</string>
</dict>
</plist>
</code></pre>
| <python><macos><dock> | 2023-11-07 13:53:52 | 0 | 11,136 | Andy |
77,438,731 | 7,423,100 | Fixing datatype error in oracledb from Python ORA-01790 | <p>I have the following code:</p>
<pre class="lang-py prettyprint-override"><code>data = [1,2,None,4,5,6]
sql = """
SELECT :1 AS ID, :2 AS pkey, :3 AS issuenum FROM dual UNION ALL
SELECT :4 AS ID, :5 AS pkey, :6 AS issuenum FROM dual
"""
cur = con.cursor()
cur.execute(sql, data)
data = cur.fetchall()
</code></pre>
<p>I gen an error <code>DatabaseError: ORA-01790: expression must have same datatype as corresponding expression</code></p>
<p>If i run it with <code>data = [1,2,3,4,5,6]</code>, it all passes.</p>
<p>From the error it seems, it doesn't like the missing value. My question is, how do I bid the statement so that I can select NULL value, where the None is in the list?</p>
| <python><python-3.x><python-oracledb> | 2023-11-07 13:45:51 | 1 | 1,303 | Finrod Felagund |
77,438,628 | 11,203,574 | How to analyze PDF using ChatGPT / Vision python API? | <p>I have a list of pdf files and I want to analyze the first page of each document to extract information. I've tried a lot of free and paid OCR, but in my case, the results aren't good enough.</p>
<p>So I want to try using the ChatGPT API in python. How do I go about it?</p>
<p>Also, I saw in <a href="https://platform.openai.com/docs/guides/vision" rel="nofollow noreferrer">openAI Vision documentation</a> that there is a <code>detail</code> parameter but there is no example provided, how do I use this parameter?</p>
| <python><pdf><openai-api><chatgpt-api> | 2023-11-07 13:32:03 | 1 | 377 | Aurelien |
77,438,611 | 4,490,376 | How can I test the error message returned by a flask app using render_template() | <p>I'm writing some unittests for a basic flask app. This app takes some data in as a post request and then fires it off to another service, before displaying the response to user on a web-page.</p>
<p>item_blueprint.py</p>
<pre><code>from flask import Blueprint, render_template, request, redirect, session
import requests
from urllib.parse import urljoin
import os
item_blueprint = Blueprint('item_blueprint', __name__)
base_item_url = os.getenv('ITEM_SERVICE_URL')
@item_blueprint.route('/create-item', methods=['GET', 'POST'])
def create_item():
error = None
if session.get('item'):
session.pop('item')
if request.method == 'POST':
item_name = request.form['item_name']
payload = {"authentication": session['authentication'],
"item_data": {"name": item_name},
"sender": "frontend_service",
}
url = urljoin(base_item_url, 'item/create-item')
r = requests.post(url, json=payload)
response = r.json()
if response['code'] == 200:
session['item'] = response['payload']
return redirect('/update-item', code=302)
else:
error = response['payload']
return render_template('/item/create_item.html', error=error)
</code></pre>
<p>This works as expected, with no issues. However, when I try to test it, I cannot seem to access the error message in 'render template'.</p>
<p>I'm using pytest, so have a flask app and client fixture over in conftest.py</p>
<p>conftest.py</p>
<pre><code>import pytest
from myapp.web.app import app as flask_app
@pytest.fixture()
def app():
yield flask_app
@pytest.fixture()
def client(app):
return app.test_client()
</code></pre>
<p>To avoid integration headaches the test itself mocks the <code>requests.post</code> function and uses the flask session context.</p>
<p>test_item.py</p>
<pre><code>from unittest import mock
mock_post_request = 'my_app.web.blueprints.item.requests.post'
authentication = {'auth_token': 'auth_token',
'user_id': 'user.id'}
class TestCreateItem:
new_item = {'item_name': 'test_item'}
def test_create_item_valid(self, app, client):
"""
GIVEN: An item_name has been sent to the create-item endpoint
WHEN: The endpoint returns a 200 code
THEN: The user is redirected to the update-item page
"""
with mock.patch(mock_post_request) as mock_post:
mock_post.return_value.status_code = 200
mock_post.return_value.json.return_value = {'payload': {'item': 'test_item'},
'code': 200,
}
with client.session_transaction() as self.session:
self.session['authentication'] = authentication
self.response = client.post('/create-item', data=self.new_item, follow_redirects=True)
assert self.response.status_code == 200
mock_post.assert_called_once()
assert len(self.response.history) == 1 #just one redirect
assert self.response.request.path == "/update-item"
def test_create_item_invalid(self, app, client):
"""
GIVEN: An item_name has been sent to the create-item endpoint
WHEN: The endpoint returns a non-200 code
THEN: The user remains on the create-item page
AND THEN: An error message is displayed
"""
with mock.patch(mock_post_request) as mock_post:
mock_post.return_value.status_code = 200
mock_post.return_value.json.return_value = {'payload': "error",
'code': 401,
}
with client.session_transaction() as self.session:
self.session['authentication'] = authentication
self.response = client.post('/create-item', data=self.new_item, follow_redirects=True)
assert self.response.status_code == 200
mock_post.assert_called_once()
assert len(self.response.history) == 0 # just one redirect
assert self.response.request.path == "/create-item"
assert self.response.request.args.get('error') == "error"
</code></pre>
<p>The first test case passes with no issues. The second fails because the final test (<code>assert self.response.request.args.get('error')</code>) returns None.</p>
<p>I have looked through the docs and tried using self.response.request. +args, +form, +get_json, +data as a way of accessing the 'error' parameter, but all either return 'None' or a type error.</p>
<p>I had wondered if the issue was with the mock_post item, but essentially skipping it by hardcoding the error value in the render_template() method doesn't change the behaviour.</p>
<p>Any suggestions gratefully received.</p>
| <python><flask><pytest><werkzeug> | 2023-11-07 13:29:34 | 1 | 551 | 741852963 |
77,438,515 | 10,590,609 | Python type hint pickable argument | <p>I want to communicate to users of my code that some function requires a pickable object. Here is an example that abstracts from any project details:</p>
<pre class="lang-py prettyprint-override"><code>import pickle
from typing import Callable
def pickable():
print("I am pickable!")
def get_non_pickable():
# local functions are not pickable
def not_pickable():
print("I am NOT pickable!")
return not_pickable
def pickle_func(func: Callable[[], None]):
with open("data.pickle", "wb") as f:
pickle.dump(func, f, pickle.HIGHEST_PROTOCOL)
# fine:
pickle_func(pickable)
# not fine: should show in my editor that I am passing a non-pickable item
pickle_func(get_non_pickable())
</code></pre>
<p>Is it possible to type hint that the function argument <code>func</code> in the function <code>pickle_func</code> must be pickable? Ideally the existing type hint for the argument should be kept.</p>
| <python><pickle><python-typing> | 2023-11-07 13:16:29 | 1 | 332 | Izaak Cornelis |
77,438,455 | 9,318,323 | Pandas.pivot TypeError: super(type, obj): obj must be an instance or subtype of type | <p>I have some code that stopped working after updating pandas from 1.3.2 to 2.1.1.
I do not understand why and how to solve it, as the error is unclear to me. Help me figure it.</p>
<p>Snippet of my code:</p>
<pre><code>import pandas as pd
d = {'value': [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
'x_axis_date': ['2020-01-01', '2020-01-02', '2020-01-03', '2020-01-04', '2020-01-05', '2020-01-06', '2020-01-07',
'2020-01-08', '2020-01-09', '2020-01-10'],
'curve_year': [2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019]}
df = pd.DataFrame(data=d)
df = df.assign(x_axis_date=pd.to_datetime(df['x_axis_date']))
# dtypes: float64, datetime64[ns], int64
pv_aggregate_agg = pd.pivot_table(df, index=df.x_axis_date, values=['value'], aggfunc=pd.DataFrame.mean) # error here
print(pv_aggregate_agg)
</code></pre>
<p>Stack trace:</p>
<pre><code>Traceback (most recent call last):
File "E:\repos\plot_graphs\venv\lib\site-packages\pandas\core\groupby\generic.py", line 1495, in aggregate
result = gba.agg()
File "E:\repos\plot_graphs\venv\lib\site-packages\pandas\core\apply.py", line 178, in agg
return self.agg_list_like()
File "E:\repos\plot_graphs\venv\lib\site-packages\pandas\core\apply.py", line 311, in agg_list_like
return self.agg_or_apply_list_like(op_name="agg")
File "E:\repos\plot_graphs\venv\lib\site-packages\pandas\core\apply.py", line 1351, in agg_or_apply_list_like
keys, results = self.compute_list_like(op_name, selected_obj, kwargs)
File "E:\repos\plot_graphs\venv\lib\site-packages\pandas\core\apply.py", line 370, in compute_list_like
new_res = getattr(colg, op_name)(func, *args, **kwargs)
File "E:\repos\plot_graphs\venv\lib\site-packages\pandas\core\groupby\generic.py", line 255, in aggregate
ret = self._aggregate_multiple_funcs(func, *args, **kwargs)
File "E:\repos\plot_graphs\venv\lib\site-packages\pandas\core\groupby\generic.py", line 360, in _aggregate_multiple_funcs
results[key] = self.aggregate(func, *args, **kwargs)
File "E:\repos\plot_graphs\venv\lib\site-packages\pandas\core\groupby\generic.py", line 292, in aggregate
return self._python_agg_general(func, *args, **kwargs)
File "E:\repos\plot_graphs\venv\lib\site-packages\pandas\core\groupby\generic.py", line 325, in _python_agg_general
result = self.grouper.agg_series(obj, f)
File "E:\repos\plot_graphs\venv\lib\site-packages\pandas\core\groupby\ops.py", line 850, in agg_series
result = self._aggregate_series_pure_python(obj, func)
File "E:\repos\plot_graphs\venv\lib\site-packages\pandas\core\groupby\ops.py", line 871, in _aggregate_series_pure_python
res = func(group)
File "E:\repos\plot_graphs\venv\lib\site-packages\pandas\core\groupby\generic.py", line 322, in <lambda>
f = lambda x: func(x, *args, **kwargs)
File "E:\repos\plot_graphs\venv\lib\site-packages\pandas\core\frame.py", line 11335, in mean
result = super().mean(axis, skipna, numeric_only, **kwargs)
TypeError: super(type, obj): obj must be an instance or subtype of type
</code></pre>
| <python><pandas><dataframe><pivot-table> | 2023-11-07 13:07:57 | 1 | 354 | Vitamin C |
77,438,431 | 3,203,158 | How to create a dummy onnx matching the I/O name, datatype, and shape of a given onnx model? | <p>I possess an ONNX model stored at a specific path, which is integrated into an established software pipeline where modifications are restricted. My objective is to develop a deep learning model and evaluate its compatibility with the existing pipeline. Currently, I am trying to create a dummy ONNX model that mimics the input and output names, data types, and shapes of the current ONNX model that will produce a constant-valued output (for the sake of testing). Kindly inform me if there are additional aspects that I need to consider.</p>
<p><em>I am looking for a Python code</em>, preferably based on PyTorch if it is feasible or necessary.</p>
| <python><pytorch><onnx><onnxruntime> | 2023-11-07 13:04:30 | 0 | 922 | aroyc |
77,438,275 | 12,390,973 | how to add emission related global constraint in pypsa? | <p>I have created a very small program to understand the workings of Pypsa. It has an hourly time interval load profile with random values between 2000MW-3000MW with 1-month total values. To serve this load profile I have added 11 generators:</p>
<pre><code>3 Coal
2 Gas
3 Nuclear
2 Solar
1 Backup
</code></pre>
<p>There are 5 basic properties of generators that I have added to the model:</p>
<pre><code>Capacity (MW)
Variable cost (Rupee/MWh)
Ramp-up rate (%)
Ramp-down rate (%)
CO2 Emission rate (Ton CO2/MWh)
</code></pre>
<p>I was able to add the manual constraint into the model where I can restrict the overall CO2 emission from the coal and gas plants. This is the code that I have written:</p>
<pre><code>import pypsa
import numpy as np
import pandas as pd
from pyomo.environ import Constraint
from pyomo.environ import value
start_mt = 1
start_yr = 2022
end_mt = 1
end_yr = 2022
end_day = 31
frequency = 60
snapshots = pd.date_range("{}-{}-01".format(start_yr, start_mt), "{}-{}-{} 23:59".format(end_yr, end_mt, end_day),
freq=str(frequency) + "min")
np.random.seed(len(snapshots))
# Create a PyPSA network
network = pypsa.Network()
# Add a load bus
network.add("Bus", "Bus")
network.set_snapshots(snapshots)
load_profile = np.random.randint(2000, 3000, len(snapshots))
# Add the load to the network
network.add("Load", "Load profile", bus="Bus", p_set=load_profile)
# Define the generator data dictionary
generator_data = {
'coal1': {'capacity': 800, 'ramp up': 0.6, 'ramp down': 0.6, 'variable cost': 10, 'co2_emission_factor': 0.95},
'coal2': {'capacity': 600, 'ramp up': 0.6, 'ramp down': 0.6, 'variable cost': 11, 'co2_emission_factor': 0.95},
'coal3': {'capacity': 500, 'ramp up': 0.6, 'ramp down': 0.6, 'variable cost': 11, 'co2_emission_factor': 0.95},
'gas1': {'capacity': 600, 'ramp up': 0.45, 'ramp down': 0.45, 'variable cost': 12, 'co2_emission_factor': 0.45},
'gas2': {'capacity': 600, 'ramp up': 0.45, 'ramp down': 0.45, 'variable cost': 13, 'co2_emission_factor': 0.45},
'nuclear1': {'capacity': 300, 'ramp up': 0.01, 'ramp down': 0.01, 'variable cost': 4, 'co2_emission_factor': 0.03},
'nuclear2': {'capacity': 400, 'ramp up': 0.01, 'ramp down': 0.01, 'variable cost': 3, 'co2_emission_factor': 0.03},
'nuclear3': {'capacity': 250, 'ramp up': 0.01, 'ramp down': 0.01, 'variable cost': 3, 'co2_emission_factor': 0.03},
'solar1': {'capacity': 150, 'ramp up': 1, 'ramp down': 1, 'variable cost': 1, 'co2_emission_factor': 0.0},
'solar2': {'capacity': 200, 'ramp up': 1, 'ramp down': 1, 'variable cost': 2, 'co2_emission_factor': 0.0},
'backup': {'capacity': 2000, 'ramp up': 1, 'ramp down': 1, 'variable cost': 20, 'co2_emission_factor': 0.0},
}
# Add generators to the network
for name, data in generator_data.items():
network.add("Generator", name,
bus="Bus",
p_nom=data['capacity'],
marginal_cost=data['variable cost'],
ramp_limit_up=data['ramp up'],
ramp_limit_down=data['ramp down'],
)
def extra_functionality(network, snaphots):
def co2_limiter(model):
coalGen = sum([model.generator_p[name, i] for i in list(network.snapshots) for name in ['coal1', 'coal2', 'coal3']])
gasGen = sum([model.generator_p[name, i] for i in list(network.snapshots) for name in ['gas1', 'gas2']])
nuclearGen = sum([model.generator_p[name, i] for i in list(network.snapshots) for name in ['nuclear1', 'nuclear2', 'nuclear3']])
coal_co2_emissions = coalGen * 0.95
gas_co2_emissions = gasGen * 0.45
nuclear_co2_emissions = nuclearGen * 0.03
total_co2 = (coal_co2_emissions + gas_co2_emissions + nuclear_co2_emissions) / 1000000
expr = total_co2 <= 0.04
return expr
network.model.co2_limiter = Constraint(rule=co2_limiter)
solver_name = "gurobi"
network.lopf(network.snapshots, solver_name=solver_name, extra_functionality=extra_functionality)
dispatch = network.generators_t.p
total_gen = dispatch.sum()
co2 = sum([total_gen[gen] * data['co2_emission_factor'] for gen, data in generator_data.items()])
print('CO2 Emission = ',co2)
dispatch['load profile'] = load_profile
dispatch.to_excel('fuel wise dispatch.xlsx')
</code></pre>
<p>When you run this program with and without the <code>co2_limiter</code> constraint you ll notice that without this constraint the total CO2 Emission ll be around <strong>860287 Ton CO2/ MWh</strong> and when you activate the co2_limiter constraint then the total emission ll be around <strong>399999 Ton CO2/MWh</strong> ( because we have set the limit to <strong><= 0.04 Million Ton CO2/MWh</strong>) and in dispatch you call also see although <strong>backup generator</strong> is the costliest one but since it has no co2 emission value it ll use that generator. My question is in Pypsa we do have some internal Global constraints to control this CO2 emission. I have gone through the documentation but still, I was not able to get how to use this in my case. Can anyone please help?</p>
| <python><pandas><pypsa> | 2023-11-07 12:38:38 | 1 | 845 | Vesper |
77,438,251 | 13,518,907 | Langchain ParentDocumetRetriever: Save and load | <p>I am using the PartentDocumentRetriever from Langchain.
Now I first want to build my vector database and then want to retrieve stuff.</p>
<p>Here is my file that builds the database:</p>
<pre><code># =========================
# Module: Vector DB Build
# =========================
import box
import yaml
from langchain.vectorstores import FAISS
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders import PyPDFLoader, DirectoryLoader
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.storage import InMemoryStore
from langchain.retrievers import ParentDocumentRetriever
from langchain.vectorstores import Chroma
# Import config vars
with open('config/config.yml', 'r', encoding='utf8') as ymlfile:
cfg = box.Box(yaml.safe_load(ymlfile))
# Build vector database
def run_db_build():
loader = DirectoryLoader(cfg.DATA_PATH,
glob='*.pdf',
loader_cls=PyPDFLoader)
documents = loader.load()
embeddings = HuggingFaceEmbeddings(model_name=cfg.EMBEDDING_MODEL_NAME,
model_kwargs={'device': 'mps'}, encode_kwargs={'device': 'mps', 'batch_size': 32})
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200)
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
store = InMemoryStore()
vectorstore = Chroma(collection_name="split_parents", embedding_function=embeddings,
persist_directory="chroma_db/")
big_chunks_retriever = ParentDocumentRetriever(
vectorstore=vectorstore,
docstore=store,
child_splitter=child_splitter,
parent_splitter=parent_splitter,
)
big_chunks_retriever.add_documents(documents)
if __name__ == "__main__":
run_db_build()
</code></pre>
<p>So I am saving the Chroma Database in the folder "chroma_db". However I want to save PartentDocumentRetriever (big_chunk_objects) with the added documents to use it later when building a RetrievalQa chain. So how do I load "big_chunk_objects" in the following code?</p>
<pre><code>def build_retrieval_qa(llm, prompt, vectordb):
chain_type_kwargs={
#"verbose": True,
"prompt": prompt,
"memory": ConversationBufferMemory(
memory_key="history",
input_key="question")}
dbqa = RetrievalQA.from_chain_type(llm=llm,
chain_type='stuff',
retriever="HOW TO SET PARENTDOCUMENTRETRIEVER HERE?",
return_source_documents=cfg.RETURN_SOURCE_DOCUMENTS,
chain_type_kwargs=chain_type_kwargs,
)
return dbqa
</code></pre>
| <python><nlp><langchain> | 2023-11-07 12:36:09 | 3 | 565 | Maxl Gemeinderat |
77,438,162 | 687,739 | How to mutate dataclass argument upon instantiation? | <p>I have a <code>dataclass</code> that looks like this:</p>
<pre><code>@dataclass
class AllLastTick:
tick_type_mapping = {
0: "bid_size",
1: "bid_price",
2: "ask_price",
3: "ask_size",
4: "last_trade_price",
5: "last_trade_size"
}
time: int
tick_type: int
price: float
size: Decimal
</code></pre>
<p>And I instantiate it like this:</p>
<pre><code>tick_data = (
time,
tick_type,
price,
size
)
tick = AllLastTick(*tick_data)
</code></pre>
<p>The result is something that looks like this:</p>
<pre><code>AllLastTick(time=1699358716, tick_type=2, price=178.49, size=Decimal('200'))
</code></pre>
<p>What I'd like to do is convert the <code>tick_type</code> argument to the value in <code>AllLastTick .tick_type_mapping</code> when the class is instantiated.</p>
<p>The ideal result would look like this:</p>
<pre><code>AllLastTick(time=1699358716, tick_type="ask_price", price=178.49, size=Decimal('200'))
</code></pre>
<p>I've tried using something like this:</p>
<pre><code>...
tick_type: int = field(default_factory=lambda x: tick_type_mapping[x])
...
</code></pre>
<p>Which I understand does not work because the fields cannot be mutable (question mark on this point).</p>
<p>How can I accomplish this?</p>
| <python><python-dataclasses> | 2023-11-07 12:19:28 | 1 | 15,646 | Jason Strimpel |
77,438,149 | 219,976 | Insert large amount of data in table with index in Postgres | <p>I have a table <code>tbl</code> in Postgres with 50 million rows. The <code>tbl</code> has an index on <code>column_1</code> and there are a lot of queries to this table like</p>
<pre><code>select * from tbl
where column_1 = 'value'
</code></pre>
<p>Each query returns 0-30 rows, 10 on avarage.</p>
<p>Once a day I completely update data in table. The query is like</p>
<pre><code>delete from tbl;
insert into tbl
select * from tbl_2;
commit;
</code></pre>
<p>The challenge I face is the query runs too long: about 2-3 hours. It's probably because of index. Is there a way to speed up the data update and allow user to query <code>tbl</code> while it's being updated.
If this is important - the update process is run in python Airflow and the queries come from python web app.</p>
| <python><postgresql><indexing><airflow> | 2023-11-07 12:17:08 | 2 | 6,657 | StuffHappens |
77,438,103 | 4,399,016 | Selecting Rows that only match the column values in another data frame in Python | <p>I have a data frame:</p>
<pre><code>>df_idbank.dtypes
DATASET object
IDBANK object
KEY object
FREQ object
INDICATEUR object
CORRECTION_label_en object
</code></pre>
<p>Etc.</p>
<p>I have another data frame:</p>
<pre><code>>df_INDICATORS
Label IDBANK
0 Summary indicator of overall economic situatio... 1586891
1 Business climate summary indicator - SA series 1586890
2 Trend of expected activity - Overall - SA series 1586916
3 Trend of expected activity - Building structur... 1586885
4 Trend of expected activity - Finishings - SA s... 1586886
</code></pre>
<p>This would give a set of values pertaining to only values matching <code>IDBANK =="001586891"</code> from <code>df_idbank</code> data frame.</p>
<pre><code>df_idbank = df_idbank.loc[(df_idbank.FREQ == "M") & (df_idbank.CORRECTION_label_en == "Seasonal adjusted") & (df_idbank.IDBANK =="001586891") ]
</code></pre>
<p>How do I select all the values matching <code>IDBANK</code> column in <code>df_INDICATORS</code>?</p>
| <python><pandas><dataframe><pandas-loc> | 2023-11-07 12:07:59 | 1 | 680 | prashanth manohar |
77,438,010 | 15,638,204 | Is there a way to call and focus any popup window outside of the main window app? | <p>I am trying to open a popup window when the user types something.</p>
<p>I read somewhere that Windows only allows you to focus a popup window if it has been called form the main window.</p>
<p>I search in the web and found this "work around", not very robust, because the methods to force focus were not working:</p>
<pre><code>def bring_window_to_foreground(self):
self.top_window.update_idletasks()
hwnd = win32gui.FindWindow(None, "Select Expansion")
shell = win32com.client.Dispatch("WScript.Shell")
shell.SendKeys("%")
time.sleep(0.1)
shell.SendKeys("%")
win32gui.SetForegroundWindow(hwnd)
# Send another Alt key to nullify the activation
self.top_window.focus_force()
</code></pre>
<p>But sometimes this wont work and apps the use "alt" key to show menus like notepad will have problemas sometimes.</p>
<p>Is there any library or windows api I can use to send my tkinter window to foreground, always on top and focused, withou calling it from the main window app, that will be minimized?</p>
<p>And one more question? How do I show this popup exactly on the same place or closer to the text cursor? I am using the code below, but is not working:</p>
<pre><code>def get_caret_position(self):
class GUITHREADINFO(ctypes.Structure):
_fields_ = [("cbSize", ctypes.c_ulong),
("flags", ctypes.c_ulong),
("hwndActive", ctypes.wintypes.HWND),
("hwndFocus", ctypes.wintypes.HWND),
("hwndCapture", ctypes.wintypes.HWND),
("hwndMenuOwner", ctypes.wintypes.HWND),
("hwndMoveSize", ctypes.wintypes.HWND),
("hwndCaret", ctypes.wintypes.HWND),
("rcCaret", ctypes.wintypes.RECT)]
guiThreadInfo = GUITHREADINFO(cbSize=ctypes.sizeof(GUITHREADINFO))
hwnd = win32gui.GetForegroundWindow()
processID = ctypes.c_ulong()
threadID = ctypes.windll.user32.GetWindowThreadProcessId(hwnd, ctypes.byref(processID)) # <-- Corrected line
if ctypes.windll.user32.GetGUIThreadInfo(threadID, ctypes.byref(guiThreadInfo)):
caret_x = guiThreadInfo.rcCaret.left
caret_y = guiThreadInfo.rcCaret.top
return caret_x, caret_y
else:
return None, None
</code></pre>
| <python> | 2023-11-07 11:52:26 | 1 | 956 | avocadoLambda |
77,437,981 | 15,638,204 | Mypy unable to analyse types of tuple elements in a list comprehension | <p>All presented code samples were tested with Python 3.11, and mypy 1.6.0</p>
<p>And the problem is: mypy sees no problems in the following code:</p>
<pre><code>MyTupleType = tuple[int, str, bool]
Tuples = list[MyTupleType]
def findAllByInt(values : Tuples, ref : int) -> Tuples:
return [v for v in values if v[1] == ref] # here is the bug!
vals : Tuples = [ (1, 'a', True), (2, 'b', False),
(3, 'c', True), (1, 'd', False) ]
print( findAllByInt(vals, 1) )
</code></pre>
<p>Problem is the tuple element index, which is compared to the referenced value. The element index should be 0, not 1. v[1] is a str, while v[0] is an int.</p>
<p>The only solution I was able to achieve is to use a separate function to extract the tuple element:</p>
<pre><code>MyTupleType = tuple[int, str, bool]
Tuples = list[MyTupleType]
def intOfMyTuple(t : MyTupleType) -> int:
return t[0]
def findAllByInt(values : Tuples, ref : int) -> Tuples:
return [v for v in values if intOfMyTuple(v) == ref]
vals : Tuples = [ (1, 'a', True), (2, 'b', False),
(3, 'c', True), (1, 'd', False) ]
print( findAllByInt(vals, 1) )
</code></pre>
<p>Now, when we change the element index in intOfMyTuple, mypy will complain about the type mismatch, which is exactly what mypy should do.</p>
<p>Is it really neccessary to write separate functions for each tuple and element index I use in my code?</p>
<p>I'm doing something wrong, and mypy can be used to properly check the types in the first code sample?</p>
| <python> | 2023-11-07 11:48:22 | 1 | 956 | avocadoLambda |
77,437,716 | 4,435,175 | How to use .assign with with .where() for "condition, true, false" like in numpy.where()? | <p>I have a dataframe df:</p>
<pre><code># %%
import pandas as pd
# %%
values = [("a", 1), ("b", 2), ("c", 3), ("d", 4), ("a", 4), ("b", 6), ("c", 7), ("d", 8)]
df = pd.DataFrame(values, columns=["name", "value"])
# %%
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>name</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td>a</td>
<td>1</td>
</tr>
<tr>
<td>b</td>
<td>2</td>
</tr>
<tr>
<td>c</td>
<td>3</td>
</tr>
<tr>
<td>d</td>
<td>4</td>
</tr>
<tr>
<td>a</td>
<td>4</td>
</tr>
<tr>
<td>b</td>
<td>6</td>
</tr>
<tr>
<td>c</td>
<td>7</td>
</tr>
<tr>
<td>d</td>
<td>8</td>
</tr>
</tbody>
</table>
</div>
<p>I want to:</p>
<ol>
<li>add a new column named "kpi"</li>
<li>in a function chaining pipeline with <code>.assign</code></li>
<li>without <code>numpy</code></li>
<li>without <code>.apply</code> (= loops = super slow = anti pattern)</li>
<li>without <code>.query</code> (= no auto complete, no auto formatting, no linting, no syntax highlighting = anti pattern for me)</li>
<li>with the following condition:</li>
</ol>
<blockquote>
<p>if (name.in("a", "b")) & (value % 2 == 0)) then (value / 5 * 100) else (value / 2)</p>
</blockquote>
<p>So the end result should look like:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>name</th>
<th>value</th>
<th>kpi</th>
</tr>
</thead>
<tbody>
<tr>
<td>a</td>
<td>1</td>
<td>0.5</td>
</tr>
<tr>
<td>b</td>
<td>2</td>
<td>40</td>
</tr>
<tr>
<td>c</td>
<td>3</td>
<td>1.5</td>
</tr>
<tr>
<td>d</td>
<td>4</td>
<td>2</td>
</tr>
<tr>
<td>a</td>
<td>4</td>
<td>80</td>
</tr>
<tr>
<td>b</td>
<td>6</td>
<td>120</td>
</tr>
<tr>
<td>c</td>
<td>7</td>
<td>3.5</td>
</tr>
<tr>
<td>d</td>
<td>8</td>
<td>4</td>
</tr>
</tbody>
</table>
</div>
<p>So I need a condition and then a way to set the new value depending on the if/else = a ternary operator.</p>
<p>But pandas.where() has no else branch I can set?</p>
| <python><pandas><conditional-statements><conditional-operator> | 2023-11-07 11:10:00 | 2 | 2,980 | Vega |
77,437,694 | 12,242,085 | How to drop duplicates in Data Frame based on combination of values in 2 columns in Python Pandas? | <p>I have Data Frame in Python Pandas like below:</p>
<pre><code>df = pd.DataFrame({
'id' : [999, 999, 999, 185, 185, 185, 999, 999, 999],
'target' : [1, 1, 1, 0, 0, 0, 1, 1, 1],
'event': ['2023-01-01', '2023-01-02', '2023-02-03', '2023-01-01', '2023-01-02', '2023-01-03', '2023-01-01', '2023-01-02', '2023-01-03'],
'survey': ['2023-02-02', '2023-02-02', '2023-02-02', '2023-03-10', '2023-03-10', '2023-03-10', '2023-04-22', '2023-04-22', '2023-04-22'],
'event1': [1, 6, 11, 16, np.nan, 22, 74, 109, 52],
'event2': [2, 7, np.nan, 17, 22, np.nan, np.nan, 10, 5],
'event3': [3, 8, 13, 18, 23, np.nan, 2, np.nan, 99],
'event4': [4, 9, np.nan, np.nan, np.nan, 11, 8, np.nan, np.nan],
'event5': [5, np.nan, 15, 20, 25, 1, 1, 3, np.nan]
})
df = df.fillna(0)
df
</code></pre>
<p><a href="https://i.sstatic.net/JxoM4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JxoM4.png" alt="enter image description here" /></a></p>
<p><strong>Requirements:</strong></p>
<p>And I need to remain in my Data Frame not duplicated values in column "id" remaining also all rows with not duplicated values from column "survey" for that id.</p>
<p>So, for example as you can see in example for id = 999 we have in column "survey" value 2023-02-02 or 2023-04-22, so I need to stay all rows for id = 999 with 2023-02-02 or with 2023-04-022.</p>
<p><strong>Example of needed result:</strong></p>
<p>So, as a result I need something like below:</p>
<pre><code>df = pd.DataFrame({
'id' : [999, 999, 999, 185, 185, 185],
'target' : [1, 1, 1, 0, 0, 0],
'event': ['2023-01-01', '2023-01-02', '2023-02-03', '2023-01-01', '2023-01-02', '2023-01-03'],
'survey': ['2023-02-02', '2023-02-02', '2023-02-02', '2023-03-10', '2023-03-10', '2023-03-10'],
'event1': [1, 6, 11, 16, np.nan, 22],
'event2': [2, 7, np.nan, 17, 22, np.nan],
'event3': [3, 8, 13, 18, 23, np.nan],
'event4': [4, 9, np.nan, np.nan, np.nan, 11],
'event5': [5, np.nan, 15, 20, 25, 1]
})
df = df.fillna(0)
df
</code></pre>
<p><a href="https://i.sstatic.net/qwybb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qwybb.png" alt="enter image description here" /></a></p>
<p>How can I do that in Python Pandas ?</p>
| <python><pandas><dataframe><group-by> | 2023-11-07 11:05:44 | 3 | 2,350 | dingaro |
77,437,451 | 14,643,315 | In Selenium, how can I disable images or go headless after a certain event? | <p>My script logs into a site and clicks on various buttons.
From the start and until login is complete, there is a chance that a captcha would pop up, which I need to solve manually.</p>
<p>My question is: <strong>after</strong> solviong the captcha, is there a way to tell my script to go headless, or at least disable images? Because of this captcha I'm wasting resources that makes my script slower, because I don't really need to load all the extra stuff in the webpage. I just need it for the captcha.</p>
<p>Edit: my question is a bit different than the one in this thread:
<a href="https://stackoverflow.com/questions/71622167/how-can-i-switch-from-headless-mode-to-normal-mode-using-google-chrome-and-selen">How can I switch from headless mode to normal mode using Google Chrome and Selenium?</a>
Since I'm also asking about disabling images, <strong>or really any suggestion about how can I save resources after the login</strong>, considering I don't need the webpage to visually load in full (I only need to click several buttons).</p>
| <python><selenium-webdriver> | 2023-11-07 10:29:30 | 0 | 453 | sadcat_1 |
77,437,353 | 828,279 | Polars qcut return quantile break points | <p>With Polars (0.19) is it possible to make qcut return the break-points? These break-points are the numerical value at which the distribution is cut. For a normal distribution, quantile 0.5 will return the break-point ~0. We can return also the labels but these have to be strings. Is there anyway to also return the <em>quantile</em> break-point?</p>
<p>It's possible to work around this by making the label just be the break-point as a string then parsing this back out but it feels rather round-about (and hacky).</p>
<p><strong>Example:</strong></p>
<pre class="lang-py prettyprint-override"><code>X = pl.DataFrame({"x": np.random.normal(size=100)})
X.select(pl.col("x").qcut([ 0,0.5, 1], include_breaks=True).struct.rename_fields(["break", "label"])).unnest("x")
</code></pre>
<p>outputs</p>
<pre><code>shape: (100, 2)
┌──────────┬───────────────────────────────────┐
│ break ┆ label │
│ --- ┆ --- │
│ f64 ┆ cat │
╞══════════╪═══════════════════════════════════╡
│ 2.113793 ┆ (0.07471832756341387, 2.11379302… │
│ 0.074718 ┆ (-2.338473351991288, 0.074718327… │
│ 0.074718 ┆ (-2.338473351991288, 0.074718327… │
│ 2.113793 ┆ (0.07471832756341387, 2.11379302… │
│ … ┆ … │
│ 0.074718 ┆ (-2.338473351991288, 0.074718327… │
│ 2.113793 ┆ (0.07471832756341387, 2.11379302… │
</code></pre>
<p>Notice that the <code>break</code> column contains the <em>values</em> at which the data were split, not the <em>quantile</em>, which is what I'm after.</p>
<p>It's possible to emulate what I'm after with the following</p>
<pre class="lang-py prettyprint-override"><code>y = X.select(
pl.col("x")
.qcut(
[0., 0.5, 1],
labels=["0", "0.5", "1", "inf"],
include_breaks=True,
)
.struct.rename_fields(["break", "label"])
).unnest("x").with_columns(
pl.col("label")
.cast(pl.Utf8) # labels are categorical so make them strings first
.cast(pl.Float64)
.alias("quantile")
)
</code></pre>
<p>which outputs</p>
<pre><code>shape: (100, 3)
┌───────────┬───────┬──────────┐
│ break ┆ label ┆ quantile │
│ --- ┆ --- ┆ --- │
│ f64 ┆ cat ┆ f64 │
╞═══════════╪═══════╪══════════╡
│ -0.118428 ┆ 0.5 ┆ 0.5 │
│ 2.90702 ┆ 1 ┆ 1.0 │
│ -0.118428 ┆ 0.5 ┆ 0.5 │
│ -0.118428 ┆ 0.5 ┆ 0.5 │
│ … ┆ … ┆ … │
│ 2.90702 ┆ 1 ┆ 1.0 │
│ 2.90702 ┆ 1 ┆ 1.0 │
│ 2.90702 ┆ 1 ┆ 1.0 │
│ 2.90702 ┆ 1 ┆ 1.0 │
└───────────┴───────┴──────────┘
</code></pre>
| <python><python-polars> | 2023-11-07 10:16:01 | 0 | 13,502 | Dan |
77,436,994 | 4,451,521 | what is the effect of loc in a dataframe? | <p>If I have this minimal reproducible example</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"A":[12, 4, 5, None, 1],
"B":[7, 2, 54, 3, None],
"C":[20, 16, 11, 3, 8],
"D":[14, 3, None, 2, 6]})
index_ = ['Row_1', 'Row_2', 'Row_3', 'Row_4', 'Row_5']
df.index = index_
print(df)
# Option 1
result = df[['A', 'D']]
print(result)
# Option 2
result = df.loc[:, ['A', 'D']]
print(result)
</code></pre>
<p>What is the effect on using <code>loc</code> or not. The results are quite similar.
I ask this in preparation for a more complex question in which I have been instructed to use loc.</p>
| <python><pandas> | 2023-11-07 09:25:15 | 4 | 10,576 | KansaiRobot |
77,436,907 | 5,087,283 | Keep Python code safe from execution by Pydoc | <p>This question is related to the question <a href="https://stackoverflow.com/questions/33190444/stop-pydoc-from-running-my-python-program">Stop pydoc from running my Python program</a>, except that here I'm not actually trying to document my own code - I'm just trying to use <code>pydoc</code> to look up unrelated documentation without any side-effects.</p>
<p>I have some experience with other programming languages, and I'm accustomed to being able to put scraps of code in files, and then to choose when that code gets executed. Sometimes I source the code fragments from other files, and sometimes I execute them from the command line. Sometimes I never execute a piece of code directly, but I keep it in a file where I can copy and paste it into other files or into an interpreter.</p>
<p>With Python, I assumed that I should put my Python code in files with suffix <code>.py</code>, but I ran into a snag. The <code>pydoc</code> tool sometimes executes all files named <code>.py</code> in <code>PYTHONPATH</code>, for example when I do a keyword search:</p>
<pre><code>$ export PYTHONPATH=.
$ cat test.py
print("Hello world\n")
$ pydoc -k blah
Hello world
</code></pre>
<p>This makes Python unlike other programming languages I have encountered, in that even when using the standard tool-chain for the language (i.e. <code>pydoc</code>), I had difficulty predicting when pieces of code that I had stored in files on my file system would be executed.</p>
<p>I've read that a workaround is to put all my code in an <code>if</code> block like this:</p>
<pre><code>if __name__ == "__main__":
</code></pre>
<p>However, that is cumbersome because not only does it add another line and another level of indentation to every file, but it prevents me from using <code>import</code> to run the code.</p>
<p>If I decide to do away with <code>import</code> and rather execute all my code with <code>exec</code>:</p>
<pre><code>exec(open('filename').read())
</code></pre>
<p>then although I have to specify the full path to my code files, I can use a different file extension than <code>.py</code>, and then presumably <code>pydoc</code> will stop executing them. Is there a standard file extension to use for this purpose?</p>
<p>Incidentally, I was surprised that <code>pydoc</code> has this potentially disruptive behavior because it is one of the first tools that a beginner to the language could be expected to use. I would not expect someone to read the full documentation on the tool which displays documentation before using the tool which displays documentation. Even if they did, although it is mentioned in the "module reference" <a href="https://docs.python.org:0/3.9/library/pydoc.html" rel="nofollow noreferrer">https://docs.python.org:0/3.9/library/pydoc.html</a>, there appears to be no warning in the online documentation (<code>pydoc -h</code> or <code>pydoc pydoc</code>) that <code>pydoc</code> might execute all files named <code>.py</code>.</p>
<p>How do experienced Python users store Python code fragments? Is it preferred to use a different extension than <code>.py</code>, or to keep such code out of <code>PYTHONPATH</code>, or to avoid the use of <code>pydoc</code>; or is the <code>if __name__</code> method really the most popular?</p>
| <python><path><file-extension><pythonpath><pydoc> | 2023-11-07 09:14:49 | 0 | 811 | Metamorphic |
77,436,740 | 2,101,765 | Statforecast - AutoArima: How to run different models for each unique ids | <p>I am using <code>statsforecast</code> package and fitting AutoARIMA model. Code looks like the following.</p>
<pre><code>forecast = {}
models = {}
for item in Ids:
sf = StatsForecast(
models=[
AutoARIMA()
],
freq='M'
)
d = train_data[train_data['unique_id']==item
sf.fit(d)
f = sf.predict(h=3)
forecast[item] = f
models[item] = sf.fitted_[0][0].model_
</code></pre>
<p>I would expect different models with different parameters to be fitted for different Ids. Because, there are variability in the data. But it is fitting the same model for all the cases.</p>
<p>When I run in two different <code>Jupyter</code> Notebooks, I get two different models for two different sources of data. But I run model for each id, I am getting only one model for all the ids. I explored about setting different seeds, but I couldn't find any. When I run auto ARIMA, i would expect to see atleast more than one model\parameters. How do I get this?</p>
| <python><parameters><time-series><arima><statsforecast> | 2023-11-07 08:53:15 | 1 | 1,806 | Manoj G |
77,436,736 | 11,124,121 | Is there anyway to merge two datasets that with two similar key column but not the same key column? | <p>I have two datasets as follows:</p>
<p>a:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>have lunch</th>
<th>have dinner</th>
</tr>
</thead>
<tbody>
<tr>
<td>set a</td>
<td>set 1</td>
</tr>
<tr>
<td>set b</td>
<td>set 2</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
</div>
<p>b:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>variable</th>
<th>variable_value</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td>have lunch</td>
<td>set a</td>
<td>0.2</td>
</tr>
<tr>
<td>have lunch</td>
<td>set b</td>
<td>0.5</td>
</tr>
<tr>
<td>have dinner</td>
<td>set 1</td>
<td>1.5</td>
</tr>
<tr>
<td>have dinner</td>
<td>set 2</td>
<td>1.7</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
</div>
<p>My expected outcome:</p>
<p>c:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>have lunch</th>
<th>have lunch value</th>
<th>have dinner</th>
<th>have dinner value</th>
</tr>
</thead>
<tbody>
<tr>
<td>set a</td>
<td>0.2</td>
<td>set 1</td>
<td>1.5</td>
</tr>
<tr>
<td>set b</td>
<td>0.5</td>
<td>set 2</td>
<td>1.7</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
</div>
<p>Is there anyway to merge these two datasets in python/r?</p>
<p>I can think of one solution in R:</p>
<pre><code>library(tidyverse)
library(magrittr)
b %<>% pivot_wider(names_from = variable, values_from = value)
left_join(a,b)
</code></pre>
| <python><r><pandas><tidyverse> | 2023-11-07 08:52:59 | 2 | 853 | doraemon |
77,436,636 | 5,912,544 | Generate one dataframe per element in a column and merge into current dataframe | <p>Given the following (example) dataframe :</p>
<pre><code>import pandas as pd
import pathlib
from pathlib import Path
cwd = Path('Path/to/somewhere')
df = pd.DataFrame(
{
'var1': [0, 5, 10, 15, 20, 25],
'var2': ['A', 'B']*3,
'var3': ['A', 'B']*3,
'path_col': [cwd / 'a.dat', cwd / 'b.dat', cwd / 'c.dat', cwd / 'd.dat', cwd / 'e.dat', cwd / 'f.dat'],
}
)
</code></pre>
<p>Each path in <code>path_col</code> points to a datafile, which I have a function to convert into a dataframe, e.g. :</p>
<pre><code>def open_and_convert_to_df(filepath: pathlib.Path):
# do things
return pd.Dataframe(...)
</code></pre>
<pre><code>data_df = pd.DataFrame(
{
'var4': [10, 20, 30],
'var5': [100, 200, 300],
'obs': [1000, 2000, 3000],
}
)
</code></pre>
<p>I'd like to generate a data_df from each path in <code>path_col</code> and merge into <code>df</code> such that the final df looks like :</p>
<pre><code> var1 var2 var3 var4 var5 obs
0 0 A 1 10 100 1000
1 0 A 1 10 100 2000
2 0 A 1 10 100 3000
3 0 A 1 10 200 1000
4 0 A 1 10 200 2000
5 0 A 1 10 200 3000
6 0 A 1 10 300 1000
...
n-3 25 B 2 30 200 3000
n-2 25 B 2 30 300 1000
n-1 25 B 2 30 300 2000
n 25 B 2 30 300 3000
</code></pre>
<p>In other words, variables 1 to 3 of the first df are indexes of the data contained in <code>path_col</code>.
Inside this data, var 4 and 5 are indexes of <code>obs</code>. I'm trying to index <code>obs</code> with all variables from 1 to 5.</p>
<p>The best I've come up with so far is using the <code>.map()</code> method like so :</p>
<pre><code>df['path_col'] = df['path_col'].map(open_and_convert_to_df)
</code></pre>
<p>I end up with the right df's in each <code>path_col</code> element but I'm lacking the next steps in order to "un-nest" those and obtain the desired df.</p>
| <python><pandas> | 2023-11-07 08:36:15 | 1 | 624 | MaximGi |
77,435,735 | 496,136 | Why does this code fail to move a sharepoint file to a different directory using Office365-REST-Python-Client | <ol>
<li>List item</li>
</ol>
<p>I am using python 3.10
using Office365-REST-Python-Client==2.4.1</p>
<p>my code attempts to move a file from sharepoint</p>
<pre><code>def get_file(ctx: ClientContext, name: str) -> File:
result = None
try:
file: File = ctx.web.get_file_by_server_relative_url(name)
ctx.load(file)
ctx.execute_query()
result = file
except ClientRequestException as e:
print(e)
finally:
return result
file_from: File = get_file(ctx, file_to_move)
file_from.moveto(folder_to, True)
ctx.execute_query()
</code></pre>
<p>I have checked the validity of the file and have gotten its unique id</p>
<p>the File.moveto call signature takes as first parameter:</p>
<pre><code>{"newurl": new_relative_url,"flags": flag}
</code></pre>
<p>i pass to move the file to a directory called Documents/.../inventory/error. The directory exist. I get an error.</p>
<p>i pass to move the file to a directory called Documents/.../inventory/error/name. I want the file to be named 'name'. I get an error.</p>
<p>i pass to move the file to a directory called sites/...Documents/.../inventory/error/name. I want the file to be named 'name'. I get an error.</p>
<pre><code>('-1, Microsoft.SharePoint.Client.InvalidClientQueryException', 'Input string was not in a correct format.', "400 Client Error:
</code></pre>
<p>what should the format of the first parameter and second parameter to File.moveto be?</p>
| <python><shared-libraries><office365-rest-client> | 2023-11-07 05:25:28 | 0 | 6,448 | reza |
77,435,467 | 7,554,103 | Check if object is an instance of a group of classes | <p>I have an object, and it is either</p>
<ul>
<li><code>statsmodels.genmod.generalized_linear_model.GLMResultsWrapper</code></li>
<li><code>statsmodels.regression.linear_model.RegressionResultsWrapper</code></li>
<li><code>statsmodels.base.elastic_net.RegularizedResultsWrapper</code></li>
</ul>
<p>or</p>
<ul>
<li><code>sklearn.ensemble._gb.GradientBoostingClassifier</code></li>
<li><code>sklearn.ensemble._forest.RandomForestClassifier</code></li>
</ul>
<p>I am trying to use <code>isinstance()</code> to like below:</p>
<pre><code>if isinstance(my_obj, statsmodel_type_class):
do sth
elif isinstance(my_obj, sklearn_type_class):
do sth
</code></pre>
<p>I can hard code <code>statsmodel_type_class</code> and <code>sklearn_type_class</code>. But are there any "Base" class for each of these two type of classes?</p>
| <python><class><scikit-learn><instance><statsmodels> | 2023-11-07 03:54:01 | 1 | 375 | wwj123 |
77,435,415 | 721,810 | Can't create text and annotations outside Matplotlib graph | <p>The following Python code generates a figure using Matplotlib in Jupyter Notebook which is ideal in appearance for an application that I am working on.</p>
<pre><code>import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
fig, ax = plt.subplots(facecolor='lightgrey')
# Define some graph offsets
start_x = 18
start_y = 18
end_x = 24
end_y = 24
# Draw an arrow
ax.annotate('Target',
xy = (18, 20),
xytext = (14, 20),
arrowprops = dict(facecolor ='red', shrink = 0.05, width = 2.5),
ha = 'center')
# Set the limits of the plot
ax.set_xlim(start_x, end_x)
ax.set_ylim(start_y, end_y)
# Customize the grid
plt.gca().set_aspect('equal', adjustable='box') # Make the grid equal in both dimensions
ax.set_ylim(ax.get_ylim()[::-1]) # Invert the y axis
ax.xaxis.tick_top() # Move the X-Axis to the top
# Title
middle_x = 14+(end_x-14)/2
plt.text(middle_x, 18, "THE TITLE" + '\nSubtitle\n 1\n 2\n 3\n 4\n\n', color = 'k', fontsize = 18, ha = 'center')
plt.grid()
plt.show()
</code></pre>
<p>The code generates the following figure.
<a href="https://i.sstatic.net/7mE3q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7mE3q.png" alt="enter image description here" /></a></p>
<p>Running the same code from a console application (with %matplotlib inline commented out) gives a less than desirable figure.
<a href="https://i.sstatic.net/eWkV6.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eWkV6.jpg" alt="enter image description here" /></a></p>
<p>Maximizing that figure gives a little more detail.
<a href="https://i.sstatic.net/HQ2LG.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HQ2LG.jpg" alt="enter image description here" /></a></p>
<p>Matplotlib has an 'add_axes' function which allows modifications to the canvas.
Modifying the code with the following
<code>ax = fig.add_axes([0.5, 0, 0.5, 0.5])# [left, bottom, width, height]</code>
generated the following:
<a href="https://i.sstatic.net/ZhyLF.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZhyLF.jpg" alt="enter image description here" /></a></p>
<p>How can the code be modified so that generating the figure from a console application will look like the figure generated in the Jupyter Notebook?</p>
| <python><matplotlib><jupyter-notebook> | 2023-11-07 03:36:32 | 3 | 1,889 | Brian O'Donnell |
77,435,365 | 10,200,497 | within each group-by, use formula for difference of values on the bottom row, picking column depending on whether one column is NaN | <p>This is my dataframe:</p>
<pre><code>df = pd.DataFrame(
{
'sym': ['a', 'a', 'b', 'c', 'c', 'c'],
'x': [100, 120, 80, 8, 8, 40],
'y': [20, 20, np.nan, 20, np.nan, 20],
'z': [2, 3, 4, 5, 6, 7],
}
)
</code></pre>
<p>And this is the output that I want. I want to create column <code>out</code>:</p>
<pre><code> sym x y z out
0 a 100 20.0 2 17.0
1 a 120 20.0 3 17.0
2 b 80 NaN 4 76.0
3 c 8 20.0 5 13.0
4 c 8 NaN 6 13.0
5 c 40 20.0 7 13.0
</code></pre>
<p>Column <code>out</code> is created with this code:</p>
<pre><code>def create_outcome(df):
df['out'] = df.y.iloc[-1] - df.z.iloc[-1]
return df
df = df.groupby('sym').apply(create_outcome)
</code></pre>
<p>And if the group has only one row and <code>y</code> is <code>nan</code>, <code>out</code> is <code>df['out'] = df.x.iloc[-1] - df.z.iloc[-1]</code>. For example <code>b</code> in <code>sym</code> column is calculated like that.</p>
<p>When I change <code>create_outcome</code> to calculate <code>out</code> for <code>b</code> in <code>sym</code> column, It does not give me the above output. This is how I changed it:</p>
<pre><code>def create_outcome(df):
df['out'] = df.y.iloc[-1] - df.z.iloc[-1]
mask = (
(len(df) == 1) &
(df.y.isna().all())
)
df.loc[mask, 'out'] = df.x.iloc[-1] - df.z.iloc[-1]
return df
</code></pre>
<p>I also tried this:</p>
<pre><code>df.loc[df.where(mask), 'out'] = df.x.iloc[-1] - df.z.iloc[-1]
</code></pre>
<p>I know that if I change my function with a conditional like:</p>
<pre><code>if len(df) == 1 and df.y.isna().all():
df['out'] = df.x.iloc[-1] - df.z.iloc[-1]
</code></pre>
<p>It works. But I wonder why it does not work when I use a mask.</p>
| <python><pandas> | 2023-11-07 03:16:47 | 2 | 2,679 | AmirX |
77,435,356 | 59,470 | OpenAI API: New version (v1) of the OpenAI Python package appears to contain breaking changes | <p>I have been playing with openai==0.28.1 and created a few projects by following examples from <em>deeplearning.ai</em> short courses that use ChatGPT and other OpenAI models (courses for ChatGPT, <a href="https://en.wikipedia.org/wiki/LangChain" rel="nofollow noreferrer">LangChain</a>, and more). Until today, everything worked without any problems.</p>
<p>Today I created another project with corresponding a <a href="http://pypi.python.org/pypi/virtualenv" rel="nofollow noreferrer">virtualenv</a> and installed the OpenAI library. This time it was openai==1.1.0 (PyPI already has it <a href="https://pypi.org/project/openai/#history" rel="nofollow noreferrer">updated to 1.1.1</a> since then).</p>
<p>As a result, all OpenAI API code stopped working as API objects appear to be renamed and functions/parameters changed between versions 0.28.1 and 1.1.0.</p>
<p>I tried to google any breaking changes and any release notes to no avail. I did find indirect confirmation that APIs changed while browsing <a href="https://github.com/openai/openai-python/commit/08b8179a6b3e46ca8eb117f819cc6563ae74e27d" rel="nofollow noreferrer">version history for README.md</a> that changed its examples between two versions as part of the <a href="https://github.com/openai/openai-python/pull/677" rel="nofollow noreferrer">massive V1 PR</a>.</p>
<p>Are there any documented breaking changes or documentation on how to maintain code for previous versions besides the obvious option of installing older package version <em>openai=0.28.1</em>?</p>
| <python><openai-api><chatgpt-api> | 2023-11-07 03:12:56 | 1 | 19,919 | topchef |
77,435,161 | 1,653,225 | How do you tell setuptools to build an extension for --debug when using pyproject.toml / PEP 518 | <p>I have created <a href="https://github.com/OddSource/ifaddrs4u/tree/main/ifaddrs4py" rel="nofollow noreferrer">a Python project with a C++ extension</a> using <code>pyproject.toml</code> (PEP 518, PEP 517, PEP 621, PEP 660, etc.). I'm sure I could be doing some things better, but generally speaking it's working great.</p>
<p>In typical cases, this project will be built <strong>without</strong> C++ debug symbols, which is the default for Setuptools and works as I'd expect. However, for troubleshooting purposes, I want to document a process for building the project <strong>with</strong> debug symbols. Even if not specifically attaching a debugger, debug symbols can be helpful for diagnosing crashes, if/when they occur.</p>
<p>I have tried various incantations to <code>pip install</code>:</p>
<ul>
<li><code>--debug</code>: silently ignored, library still built with <code>-DNDEBUG -g -O3</code></li>
<li><code>--global-option --debug</code> ("WARNING: Ignoring --global-option when building ifaddrs4py using PEP 517" and "DEPRECATION: --build-option and --global-option are deprecated")</li>
<li><code>--config-setting="--global-option=--debug"</code>: fails with "error: option --debug not recognized" after first emitting "SetuptoolsDeprecationWarning: Incompatible `config_settings` passed to build backend."</li>
<li><code>--config-setting="--build-option=--debug"</code>: no warning, but still fails with "error: option --debug not recognized"</li>
<li>Adding <code>[build_ext] \n debug = 1</code> to <code>pyproject.toml</code> like you would normally add to <code>setup.cfg</code> (<a href="https://stackoverflow.com/questions/61692952/how-to-pass-debug-to-build-ext-when-invoking-setup-py-install">see here</a>)</li>
</ul>
<p>The <a href="https://setuptools.pypa.io/en/latest/userguide/ext_modules.html" rel="nofollow noreferrer">Setuptools documentation</a> doesn't provide instructions on this is far as I can find.</p>
<p>I've seen several suggestions (like <a href="https://stackoverflow.com/questions/15253586/python-extension-debugging">this</a>) to use <code>python setup.py build -g</code> or <code>python setup.py build --debug</code>, but this is <a href="https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html" rel="nofollow noreferrer">a deprecated way</a> to use <code>setup.py</code> under the above PEPs.</p>
<p>Is it not possible under this new standard to build Python extensions with debug symbols? If it is, how to do this?</p>
| <python><c++><setuptools><debug-symbols><python-extensions> | 2023-11-07 02:05:54 | 0 | 3,248 | Nick Williams |
77,435,116 | 4,764,604 | RuntimeError: CUDA error: CUDA-capable device(s) is/are busy or unavailable while GPU utilization is 0% according to nvidia-smi | <p>I'm trying to launch a gradio backend that uses the LLM calls from Facebook. But it tells me that GPUs are not available:</p>
<pre><code>(.venv) reply@reply-GP66-Leopard-11UH:~/dev/chatbot-rag$ gradio gradio-chatbot.py
Warning: Cannot statically find a gradio demo called demo. Reload work may fail.
Watching: '/home/reply/.local/lib/python3.10/site-packages/gradio', '/home/reply/dev/chatbot-rag', '/home/reply/dev/chatbot-rag'
/home/reply/.local/lib/python3.10/site-packages/langchain/__init__.py:34: UserWarning: Importing PromptTemplate from langchain root module is no longer supported. Please use langchain.prompts.PromptTemplate instead.
warnings.warn(
Initializing backend for chatbot
/home/reply/.local/lib/python3.10/site-packages/transformers/pipelines/__init__.py:694: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers.
warnings.warn(
Traceback (most recent call last):
File "/home/reply/dev/chatbot-rag/gradio-chatbot.py", line 10, in <module>
backend.load_embeddings_and_llm_models()
File "/home/reply/dev/chatbot-rag/backend.py", line 50, in load_embeddings_and_llm_models
self.llm = self.load_llm(self.params)
File "/home/reply/dev/chatbot-rag/backend.py", line 66, in load_llm
pipe = pipeline("text-generation", model=self.llm_name_or_path, model_kwargs=model_kwargs)
File "/home/reply/.local/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 834, in pipeline
framework, model = infer_framework_load_model(
File "/home/reply/.local/lib/python3.10/site-packages/transformers/pipelines/base.py", line 269, in infer_framework_load_model
model = model_class.from_pretrained(model, **kwargs)
File "/home/reply/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 565, in from_pretrained
return model_class.from_pretrained(
File "/home/reply/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3222, in from_pretrained
max_memory = get_balanced_memory(
File "/home/reply/.local/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 771, in get_balanced_memory
max_memory = get_max_memory(max_memory)
File "/home/reply/.local/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 643, in get_max_memory
_ = torch.tensor([0], device=i)
RuntimeError: CUDA error: CUDA-capable device(s) is/are busy or unavailable
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
(.venv) reply@reply-GP66-Leopard-11UH:~/dev/chatbot-rag$ nvidia-smi
Tue Nov 7 02:17:55 2023
</code></pre>
<p>It does not appear that the GPU is being over-utilized.</p>
<pre><code>+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.129.03 Driver Version: 535.129.03 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 3080 ... Off | 00000000:01:00.0 Off | N/A |
| N/A 46C P8 12W / 125W | 173MiB / 8192MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
| 0 N/A N/A 2910 G /usr/lib/xorg/Xorg 4MiB |
| 0 N/A N/A 5008 C python3 158MiB |
+---------------------------------------------------------------------------------------+
</code></pre>
<p>Side note, I'm surprised that python3 takes up so much memory.</p>
<p>I have the appropriate driver</p>
<p><a href="https://i.sstatic.net/nxAeH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nxAeH.png" alt="introducir la descripción de la imagen aquí" /></a></p>
<p>and I can load the model:</p>
<pre><code>(.venv) reply@reply-GP66-Leopard-11UH:~/dev/Llama-2-7b-chat-hf$ python3
Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> model_directory = "/home/reply/dev/Llama-2-7b-chat-hf"
>>> model = AutoModelForCausalLM.from_pretrained(model_directory)
/home/reply/.local/lib/python3.10/site-packages/torch/cuda/__init__.py:138: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)
return torch._C._cuda_getDeviceCount() > 0
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:09<00:00, 4.69s/it]
>>>
</code></pre>
<p>I don't know if it's related, but I have to say that installing LLamaV2 was no picnic:</p>
<pre><code>(.venv) reply@reply-GP66-Leopard-11UH:~/dev$ git clone https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
Cloning into 'Llama-2-7b-chat-hf'...
Username for 'https://huggingface.co': Mine
Password for 'https://Mine@huggingface.co':
remote: Enumerating objects: 85, done.
remote: Counting objects: 100% (70/70), done.
remote: Compressing objects: 100% (70/70), done.
remote: Total 85 (delta 36), reused 0 (delta 0), pack-reused 15
Unpacking objects: 100% (85/85), 978.94 KiB | 2.11 MiB/s, done.
Username for 'https://huggingface.co': Mine
Password for 'https://Mine@huggingface.co':
^Cwarning: Clone succeeded, but checkout failed.
You can inspect what was checked out with 'git status'
and retry with 'git restore --source=HEAD :/'
(.venv) reply@reply-GP66-Leopard-11UH:~/dev$
Exiting because of "interrupt" signal.
ls
chatbot-rag codellama faradai llama Llama-2-7b-chat-hf
(.venv) reply@reply-GP66-Leopard-11UH:~/dev$ cd Llama-2-7b-chat-hf/
eply@reply-GP66-Leopard-11UH:~/dev/Llama-2-7b-chat-hf$ git lfs pull
Username for 'https://huggingface.co': Mine
Password for 'https://Mine@huggingface.co':
Error updating the Git index: (4/4), 31 GB | 3.1 MB/s
error: pytorch_model-00002-of-00002.bin: cannot add to the index - missing --add option?
fatal: Unable to process path pytorch_model-00002-of-00002.bin
Errors logged to '/home/reply/dev/Llama-2-7b-chat-hf/.git/lfs/logs/20231107T015518.128081616.log'.
Use `git lfs logs last` to view the log.
reply@reply-GP66-Leopard-11UH:~/dev/Llama-2-7b-chat-hf$ git lfs logs last
git-lfs/3.4.0 (GitHub; linux amd64; go 1.20.6)
git version 2.34.1
$ git-lfs pull
Error updating the Git index:
error: pytorch_model-00002-of-00002.bin: cannot add to the index - missing --add option?
fatal: Unable to process path pytorch_model-00002-of-00002.bin
exit status 128
Current time in UTC:
2023-11-07 00:55:18
Environment:
LocalWorkingDir=/home/reply/dev/Llama-2-7b-chat-hf
LocalGitDir=/home/reply/dev/Llama-2-7b-chat-hf/.git
LocalGitStorageDir=/home/reply/dev/Llama-2-7b-chat-hf/.git
LocalMediaDir=/home/reply/dev/Llama-2-7b-chat-hf/.git/lfs/objects
LocalReferenceDirs=
TempDir=/home/reply/dev/Llama-2-7b-chat-hf/.git/lfs/tmp
ConcurrentTransfers=8
TusTransfers=false
BasicTransfersOnly=false
SkipDownloadErrors=false
FetchRecentAlways=false
FetchRecentRefsDays=7
FetchRecentCommitsDays=0
FetchRecentRefsIncludeRemotes=true
PruneOffsetDays=3
PruneVerifyRemoteAlways=false
PruneRemoteName=origin
LfsStorageDir=/home/reply/dev/Llama-2-7b-chat-hf/.git/lfs
AccessDownload=basic
AccessUpload=basic
DownloadTransfers=basic,lfs-standalone-file,ssh
UploadTransfers=basic,lfs-standalone-file,ssh
GIT_EXEC_PATH=/usr/lib/git-core
Client IP addresses:
192.168.0.15 2a01:e0a:2c1:a2b0:659c:aaa3:90e1:2ca4 2a01:e0a:2c1:a2b0:656d:2d3b:f648:31bd fe80::5a0d:fb4e:2f89:11b9
172.17.0.1
172.18.0.1
172.19.0.1
reply@reply-GP66-Leopard-11UH:~/dev/Llama-2-7b-chat-hf$ git hash-object pytorch_model-00002-of-00002.bin
fbbb6037dd5ef242b0501ae05db2710c350b325c
reply@reply-GP66-Leopard-11UH:~/dev/Llama-2-7b-chat-hf$ git update-index --add --cacheinfo 100644,fbbb6037dd5ef242b0501ae05db2710c350b325c,pytorch_model-00002-of-00002.bin
reply@reply-GP66-Leopard-11UH:~/dev/Llama-2-7b-chat-hf$ git lfs pull
(.venv) reply@reply-GP66-Leopard-11UH:~/dev/Llama-2-7b-chat-hf$ git restore --source=HEAD :/
Username for 'https://huggingface.co': fatal: could not read Username for 'https://huggingface.co': Success
Downloading model-00001-of-00002.safetensors (10 GB)
Username for 'https://huggingface.co': fatal: could not read Username for 'https://huggingface.co': Success
Error downloading object: model-00001-of-00002.safetensors (66dec18): Smudge error: Error downloading model-00001-of-00002.safetensors (66dec18c9f1705b9387d62f8485f4e7d871ca388718786737ed3c72dbfaac9fb): batch response: Git credentials for https://huggingface.co/meta-llama/Llama-2-7b-chat-hf not found.
Errors logged to '/home/reply/dev/Llama-2-7b-chat-hf/.git/lfs/logs/20231106T232024.743763803.log'.
Use `git lfs logs last` to view the log.
error: external filter 'git-lfs filter-process' failed
fatal: model-00001-of-00002.safetensors: smudge filter lfs failed
</code></pre>
<h1>Update</h1>
<p>Without any reason but reboots I now have two different errors, after rebooting, sometime I get:</p>
<pre><code>(.venv) reply@reply-GP66-Leopard-11UH:~/dev/chatbot-rag$ gradio gradio-chatbot.py
Warning: Cannot statically find a gradio demo called demo. Reload work may fail.
Watching: '/home/reply/.local/lib/python3.10/site-packages/gradio', '/home/reply/dev/chatbot-rag', '/home/reply/dev/chatbot-rag'
/home/reply/.local/lib/python3.10/site-packages/langchain/__init__.py:34: UserWarning: Importing PromptTemplate from langchain root module is no longer supported. Please use langchain.prompts.PromptTemplate instead.
warnings.warn(
Initializing backend for chatbot
/home/reply/.local/lib/python3.10/site-packages/torch/cuda/__init__.py:138: UserWarning: CUDA initialization: CUDA unknown error - this may be due to an incorrectly set up environment, e.g. changing env variable CUDA_VISIBLE_DEVICES after program start. Setting the available devices to be zero. (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:108.)
return torch._C._cuda_getDeviceCount() > 0
/home/reply/.local/lib/python3.10/site-packages/transformers/pipelines/__init__.py:694: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers.
warnings.warn(
Traceback (most recent call last):
File "/home/reply/dev/chatbot-rag/gradio-chatbot.py", line 10, in <module>
backend.load_embeddings_and_llm_models()
File "/home/reply/dev/chatbot-rag/backend.py", line 50, in load_embeddings_and_llm_models
self.llm = self.load_llm(self.params)
File "/home/reply/dev/chatbot-rag/backend.py", line 66, in load_llm
pipe = pipeline("text-generation", model=self.llm_name_or_path, model_kwargs=model_kwargs)
File "/home/reply/.local/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 834, in pipeline
framework, model = infer_framework_load_model(
File "/home/reply/.local/lib/python3.10/site-packages/transformers/pipelines/base.py", line 269, in infer_framework_load_model
model = model_class.from_pretrained(model, **kwargs)
File "/home/reply/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 565, in from_pretrained
return model_class.from_pretrained(
File "/home/reply/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2614, in from_pretrained
raise ImportError(
ImportError: Using `load_in_8bit=True` requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes `pip install -i https://test.pypi.org/simple/ bitsandbytes` or pip install bitsandbytes`
</code></pre>
<p>Like if the driver wasn't activated at all.</p>
<p>And some other time I have:</p>
<pre><code>(.venv) reply@reply-GP66-Leopard-11UH:~/dev/chatbot-rag$ gradio gradio-chatbot.py
Warning: Cannot statically find a gradio demo called demo. Reload work may fail.
Watching: '/home/reply/.local/lib/python3.10/site-packages/gradio', '/home/reply/dev/chatbot-rag', '/home/reply/dev/chatbot-rag'
/home/reply/.local/lib/python3.10/site-packages/langchain/__init__.py:34: UserWarning: Importing PromptTemplate from langchain root module is no longer supported. Please use langchain.prompts.PromptTemplate instead.
warnings.warn(
Initializing backend for chatbot
/home/reply/.local/lib/python3.10/site-packages/transformers/pipelines/__init__.py:694: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers.
warnings.warn(
Traceback (most recent call last):
File "/home/reply/dev/chatbot-rag/gradio-chatbot.py", line 10, in <module>
backend.load_embeddings_and_llm_models()
File "/home/reply/dev/chatbot-rag/backend.py", line 50, in load_embeddings_and_llm_models
self.llm = self.load_llm(self.params)
File "/home/reply/dev/chatbot-rag/backend.py", line 66, in load_llm
pipe = pipeline("text-generation", model=self.llm_name_or_path, model_kwargs=model_kwargs)
File "/home/reply/.local/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 834, in pipeline
framework, model = infer_framework_load_model(
File "/home/reply/.local/lib/python3.10/site-packages/transformers/pipelines/base.py", line 282, in infer_framework_load_model
raise ValueError(
ValueError: Could not load model /home/reply/dev/Llama-2-7b-chat-hf with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForCausalLM'>, <class 'transformers.models.llama.modeling_llama.LlamaForCausalLM'>). See the original errors:
while loading with AutoModelForCausalLM, an error is thrown:
Traceback (most recent call last):
File "/home/reply/.local/lib/python3.10/site-packages/transformers/pipelines/base.py", line 269, in infer_framework_load_model
model = model_class.from_pretrained(model, **kwargs)
File "/home/reply/.local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 565, in from_pretrained
return model_class.from_pretrained(
File "/home/reply/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3246, in from_pretrained
raise ValueError(
ValueError:
Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit
the quantized model. If you want to dispatch the model on the CPU or the disk while keeping
these modules in 32-bit, you need to set `load_in_8bit_fp32_cpu_offload=True` and pass a custom
`device_map` to `from_pretrained`. Check
https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu
for more details.
while loading with LlamaForCausalLM, an error is thrown:
Traceback (most recent call last):
File "/home/reply/.local/lib/python3.10/site-packages/transformers/pipelines/base.py", line 269, in infer_framework_load_model
model = model_class.from_pretrained(model, **kwargs)
File "/home/reply/.local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3246, in from_pretrained
raise ValueError(
ValueError:
Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit
the quantized model. If you want to dispatch the model on the CPU or the disk while keeping
these modules in 32-bit, you need to set `load_in_8bit_fp32_cpu_offload=True` and pass a custom
`device_map` to `from_pretrained`. Check
https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu
for more details.
</code></pre>
| <python><gpu><nvidia><large-language-model><gradio> | 2023-11-07 01:46:31 | 0 | 3,396 | Revolucion for Monica |
77,435,102 | 8,951,295 | overwrite a txt file every hour instead of append using python | <p>I have a monitoring code from shodan <a href="https://help.shodan.io/guides/how-to-monitor-network" rel="nofollow noreferrer">https://help.shodan.io/guides/how-to-monitor-network</a></p>
<p>the script is continuously running and print the finding in CLI.
I extracted the data that I needed in this format <code>data=(f"{ip}:{port}")</code>
After appending the new finding to a txt file called ips.txt, I want every duration of time e.x: 2 hours, to overwrite the whole ips in the txt and start again write to the txt file.</p>
<p>As I wants to make some bash operations on the result in this duration and then another operations on the newly added data</p>
<p><strong>To Sum up:</strong>
I want to delete all the data in the txt file after 2 hours and start appending again.</p>
<p>I wrote this time code but it did not work.</p>
<pre><code>import subprocess
import time
import shlex
import json
from shodan import Shodan
from shodan.helpers import get_ip
from shodan.cli.helpers import get_api_key
from bs4 import BeautifulSoup
file_path = 'ips.txt'
duration = 900
start_time = time.time()
# Setup the Shodan API connection
api = Shodan(get_api_key())
for banner in api.stream.alert():
if 'port' in banner:
port=(banner['port'])
ip = get_ip(banner)
data=(f"{ip}:{port}")
while time.time() - start_time < duration:
with open(file_path, 'a') as file:
file.write(data+'\n')
time.sleep(5)
open(file_path, 'w').close()
with open(file_path, 'w') as file:
file.write('')
</code></pre>
| <python><python-3.x><time> | 2023-11-07 01:41:46 | 1 | 643 | ELMO |
77,435,098 | 2,990,266 | Why does patching a classmethod on a base class effect subclasses? | <p>I ran into a problem with patching a <code>classmethod</code> on a base-class. The problem is that patching the base-class effects calling the classmethod on a subclass.</p>
<p>I expect the output of this to produce <code>["foo", "bar"]</code> however with the <code>mocker.spy(Foo, "wrapper")</code> called before calling the classmethod the result is <code>["foo", "foo"]</code>.</p>
<p>What is happening here?</p>
<pre class="lang-py prettyprint-override"><code>def test_patching_parent_class(mocker):
"""
pip install pytest pytest-mock
"""
calls = []
class Foo:
def func2(self):
calls.append("foo")
@classmethod
def wrapper(cls):
cls().func2()
class Bar(Foo):
def func2(self):
calls.append("bar")
# Remove this line it will produce the expected result
spy = mocker.spy(Foo, "wrapper")
Foo.wrapper()
Bar.wrapper()
assert calls == ["foo", "bar"]
</code></pre>
| <python><mocking><pytest><patch> | 2023-11-07 01:40:26 | 1 | 2,025 | Aage Torleif |
77,434,872 | 5,927,701 | pythonic way to convert pptx to pdf | <p>I am trying to convert <code>PPTX</code> file to <code>PDF</code> file in core <code>Python</code></p>
<p>Method 1:</p>
<pre><code>import comtypes.client
def PPTtoPDF(inputFileName, outputFileName, formatType = 32):
powerpoint = comtypes.client.CreateObject("Powerpoint.Application")
powerpoint.Visible = 1
if outputFileName[-3:] != 'pdf':
outputFileName = outputFileName + ".pdf"
deck = powerpoint.Presentations.Open(inputFileName)
deck.SaveAs(outputFileName, formatType) # formatType = 32 for ppt to pdf
deck.Close()
powerpoint.Quit()
</code></pre>
<p>This method works on <code>local</code>, but very difficult to get it working in a <code>docker</code> container or on a <code>non-windows PROD</code></p>
<p>Method 2:</p>
<pre><code>import os
from pptx import Presentation
from io import BytesIO
def pptx_to_pdf(input_path, output_path):
presentation = Presentation(input_path)
output_stream = BytesIO()
presentation.save(output_stream)
output_stream.seek(0)
with open(output_path, "wb") as f:
f.write(output_stream.read())
</code></pre>
<p>This method works in code, but not able to open the converted file</p>
<p>Error:</p>
<pre><code>"Failed to load PDF document. Not decoded properly"
</code></pre>
<p>Is there a pure pythonic way of converting <code>PPTX</code> to <code>PDF</code> file without using <code>win32</code>?</p>
<p>Any suggestions will be useful</p>
| <python><pdf><reportlab><python-pptx> | 2023-11-07 00:09:52 | 0 | 4,590 | data_person |
77,434,808 | 6,761,992 | OpenAI API: How do I enable JSON mode using the gpt-4-vision-preview model? | <p>Update: It seems like they made a mistake in the API docs, and fixed it now.</p>
<p>Earlier, it said "when calling <code>gpt-4-vision-preview</code> or <code>gpt-3.5-turbo</code>," but now reads "when calling <code>gpt-4-1106-preview</code> or <code>gpt-3.5-turbo-1106</code>."</p>
<hr />
<p>According to <a href="https://platform.openai.com/docs/guides/text-generation/json-mode" rel="noreferrer">Text generation - OpenAI API</a>, "when calling <code>gpt-4-vision-preview</code> or <code>gpt-3.5-turbo</code>, you can set response_format to <code>{ type: "json_object" }</code> to enable JSON mode."</p>
<p>However, the following code throws an error:</p>
<pre><code> {'error': {'message': '1 validation error for Request\nbody -> response_format\n extra fields not permitted (type=value_error.extra)', 'type': 'invalid_request_error', 'param': None, 'code': None}}
</code></pre>
<p>If I comment <code>"response_format": {"type": "json_object"}</code>, it works fine.</p>
<pre><code> headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {api_key}"
}
payload = {
"model": "gpt-4-vision-preview",
"response_format": {"type": "json_object"},
"messages": [
{
"role": "system",
"content": "You are a helpful assistant. Your response should be in JSON format."
},
{
"role": "user",
"content": [
{
"type": "text",
"text": prompt
},
{
"type": "image_url",
"image_url": {
"url": f"data:image/jpeg;base64,{base64_image}"
}
}
]
}
],
"max_tokens": 1000,
}
response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
print(response.json())
</code></pre>
| <python><openai-api><gpt-4> | 2023-11-06 23:46:14 | 5 | 309 | chibop |
77,434,637 | 3,574,603 | Unable to reference env variable when script is run as a service | <p>I have an environment variable <code>KEY</code>. It is defined in <code>.bashrc</code></p>
<pre><code>export KEY=...
</code></pre>
<p>I can confirm that the variable is set with <code>echo $KEY</code>.</p>
<p>I have a Python script that references the variable with:</p>
<pre><code>os.environ.get('KEY')
</code></pre>
<p>When I run the script directly (<code>python3 main.py</code>), I able to confirm that the script is able to reference the key.</p>
<p>However, I am running the script as a service.</p>
<pre><code>[Unit]
Description=Tool
[Service]
Restart=always
User=root
WorkingDirectory=/var/lib/tool
ExecStart=/var/lib/tool/bin/python3 /var/lib/tool/main.py
[Install]
WantedBy=default.target
</code></pre>
<p>When run as a service (running on Ubuntu), <code>os.environ.get('KEY')</code> returns <code>null</code>.</p>
<p>How can I ensure the script, when running as a service, has access to the environment variable?</p>
| <python><linux><environment-variables> | 2023-11-06 22:50:22 | 0 | 3,678 | user3574603 |
77,434,581 | 16,808,528 | Output data does not match headers, off by one | <p>I have vaccination text data like this:</p>
<pre><code>vaccinations_text = """
|Entry #|Vaccine|Type of vaccine|Lot #|Manufacturer|VIS edition date|Date given|Vaccinator (initials)|
|1|Influenza|H1N1|2F03930|NOV|8/24/09|12/7/09|PLE|
|2|Tetanus, Diphtheria, Pertussis|Td|U049393AA|ASP|4/2/95|10/7/95|ODK|
|3|Tetanus, Diphtheria, Pertussis|Td|U049393AA|ASP|4/2/95|11/7/95|WMK|
|4|Tetanus, Diphtheria, Pertussis|Td|U049393AA|ASP|4/2/95|4/7/96|CPK|
|5|Tetanus, Diphtheria, Pertussis|Tdap|AC9349239AA|GSK|2/12/07|3/1/07|CPK|
|6|Measles, Mumps, Rubella|MMR|02302L|DVX|9/7/15|11/4/15|WNK|
|7|Hepatitis B|Heplisav-B|TD02302|DVX|9/9/16|10/4/18|WNK|
"""
</code></pre>
<p>Now, what I want to do is parsing the fictitious vaccination information from the vaccinations_text string to create a list of lists describing the vaccination information of each entry from the table.</p>
<p>Specifically, I want to create a list named vaccination_records where each element of the list is a vaccine record corresponding to a line of text from the original string. Each vaccine record should be modeled as a list of tuples, where each tuple is comprised of the column header and the value in that column.</p>
<p>For example, Entry 1 should be parsed into: [('Entry #', 1), ('Vaccine', 'Influenza'), ('Type of vaccine', 'H1N1'), ...], and so on. Represent all dates using the dt.date class.</p>
<p>I wrote the code like this:</p>
<pre><code>vaccination_records = []
lines = vaccinations_text.strip().split('\n')
headers = lines[0].split('|')[1:-1]
date_pattern = re.compile(r'(\d{1,2}/\d{1,2}/\d{2})')
for line in lines[1:]:
values = line.split('|')[1:-1]
record = [('Entry #', int(values[0]))]
for header, value in zip(headers, values[1:]):
if header == 'Date given':
date = date_pattern.search(value)
if date:
date = dt.datetime.strptime(date.group(0), '%m/%d/%y').date()
record.append((header, date))
else:
record.append((header, value))
vaccination_records.append(record)
</code></pre>
<p>But when I test whether I parse right or not, I use this code:</p>
<pre><code>for record in vaccination_records:
for item in record:
if item[0] == 'Entry #':
if item[1] == 1:
print("Entry 1 has been successfully parsed:")
print(record)
break
</code></pre>
<p>and then it looks like the output is not exactly what I expected. the output looks like this:</p>
<pre><code>Entry 1 has been successfully parsed:
[('Entry #', 1), ('Entry #', 'Influenza'), ('Vaccine', 'H1N1'), ('Type of vaccine', '2F03930'), ('Lot #', 'NOV'), ('Manufacturer', '8/24/09'), ('VIS edition date', '12/7/09')]
</code></pre>
<p>So my question is that how can I revise the code to get the desired output like:</p>
<pre><code>[('Entry #', 1), ('Vaccine', 'Influenza'), ('Type of vaccine', 'H1N1'), ...]
</code></pre>
| <python> | 2023-11-06 22:33:57 | 2 | 477 | Rstudyer |
77,434,333 | 2,391,144 | How can I use python-click to make a multi-command CLI interface | <p>I'd like to create a CLI for my application that's flexible and extensible. Click seems like the right solution but I'm new to it and have not seen an example that fits what I'm looking for. The application has a variety of different inputs (file, TCP socket, ZMQ etc); a variety of different tasks it can perform (task1, task2, etc); and a variety of outputs (file, socket, etc). I'd like to have a CLI which allows the defining of these 3 stages.</p>
<p>Desired CLI example:</p>
<pre><code>myapp.py \
source TCP --read-size=4096 127.0.0.1 8888 \
task taskABC --optional-arg example example 99 \
sink file myoutput.bin
</code></pre>
<p>The following code seems to provide most of what I want but only allows the specification of 1 of the 3 pipelines. Using the <code>chain=True</code> option on the top level group yields the error <code>RuntimeError: It is not possible to add multi commands as children to another multi command</code> and doesn't feel like the right answer as I want to require the specification of all 3 stages, not allow an arbitrary number of stage definitons.</p>
<p>Is my CLI vision achievable with Click?</p>
<pre class="lang-py prettyprint-override"><code>import click
@click.group()
def cli():
pass
@cli.group()
def source():
pass
@source.command()
@click.argument("host")
@click.argument("port", type=click.INT)
def tcp(host, port):
"""Command for TCP based source"""
@source.command()
@click.argument("input_file", type=click.File('rb'))
def file(input_file):
"""Command for file based source"""
@cli.group()
def task():
pass
@task.command()
def example1():
"""Example 1"""
@task.command()
def example2():
"""Example 2"""
@cli.group()
def sink():
pass
@sink.command()
@click.argument("host", type=click.STRING)
@click.argument("port", type=click.INT)
def tcp(host, port):
"""Command for TCP based sink"""
@sink.command()
@click.argument("output_file", type=click.File('wb'))
def file(output_file):
"""Command for file based source"""
if __name__ == '__main__':
cli()```
</code></pre>
| <python><python-click> | 2023-11-06 21:25:02 | 0 | 1,247 | Youssef Bagoulla |
77,434,196 | 226,473 | Is there a way to return all rows where only one column is not null? | <p>I have a dataframe that I'd like to break up into logical sub-dataframes.</p>
<p>The most logical way to do this, given how the data is, is to select rows from the original dataframe where only one of the columns is not null (i.e. <code>df.column.notnull()</code> is <code>True</code>).</p>
<p>Is there a shorthand for this or do I need to check each <em>other</em> column for every column that is not null?</p>
| <python><pandas><dataframe> | 2023-11-06 20:54:30 | 1 | 21,308 | Ramy |
77,434,156 | 1,753,525 | Pycharm FastAPI configuration works on Run, stalls on Debug when adding a breakpoint | <p>I just created a new Python project using Poetry. It's pretty simple, this is my pyproject.toml:</p>
<pre><code>[tool.poetry]
name = "my-app"
version = "0.1.0"
description = ""
authors = ["me"]
readme = "README.md"
[tool.poetry.dependencies]
python = "^3.12"
fastapi = "^0.104.1"
supabase = "^2.0.3"
uvicorn = "^0.24.0.post1"
pydantic = {extras = ["dotenv"], version = "^1.10.2"}
[tool.poetry.group.dev.dependencies]
mypy = "^1.5.1"
pre-commit = "^3.5.0"
pytest = "^7.4.3"
pytest-cov = "^4.1.0"
</code></pre>
<p>This is my project structure: The entry point is <code>web_server.py</code>.</p>
<p><a href="https://i.sstatic.net/tJXhr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tJXhr.png" alt="project structure" /></a></p>
<p>web_server.py</p>
<pre><code>import uvicorn
from app.config.rest.api import create_fastapi_app
app = create_fastapi_app()
if __name__ == "__main__":
uvicorn.run(app)
</code></pre>
<p><code>api.py</code></p>
<pre><code>from fastapi import FastAPI
from app.config.rest.endpoints.router import api_router
def create_fastapi_app() -> FastAPI:
app = FastAPI(title="My API")
app.include_router(api_router, prefix="/api")
return app
</code></pre>
<p>And <code>router.py</code>:</p>
<pre><code>from fastapi import Header, APIRouter, Depends
from app.config.rest.endpoints.meta import router as meta_router
def custom_headers(
user_id: str = Header(examples=["postman"], convert_underscores=False),
):
return
no_headers_router = APIRouter()
headers_router = APIRouter(dependencies=[Depends(custom_headers)])
no_headers_router.include_router(meta_router, tags=["meta"])
api_router = APIRouter()
api_router.include_router(no_headers_router)
api_router.include_router(headers_router)
</code></pre>
<p>I can start my web server by running <code>poetry run uvicorn --app-dir ./src app.web_server:app --reload</code> in the console. I can also start the web server from PyCharm using the following config:</p>
<p><a href="https://i.sstatic.net/1YCrb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1YCrb.png" alt="Run config" /></a></p>
<p>This works as expected:</p>
<p><a href="https://i.sstatic.net/5vdU3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5vdU3.png" alt="API running" /></a></p>
<p>It also works if I debug instead of run this same configuration. However, if I add a breakpoint (doesn't matter where), PyCharm doesn't start the API and hangs in there forever. The endpoints are not accessible either as I guess the startup process has not finished. For example, adding a breakpoint in <code>uvicorn.run(app)</code> results in the app hanging forever:</p>
<p><a href="https://i.sstatic.net/4RtLB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4RtLB.png" alt="Debug configuration" /></a></p>
<p>I would assume that there's an issue with the Debug configuration if it would not start the server, but this only happens if I set any breakpoints in code. If there are no breakpoints, the configuration launches the app as expected.</p>
<p>What could be happening here?</p>
| <python><debugging><pycharm><fastapi><python-poetry> | 2023-11-06 20:46:56 | 1 | 687 | Samer |
77,434,087 | 4,882,545 | Execute GCP Cloud Run job with environment variable override using Python client | <p>I am trying to trigger a GCP Cloud Run job from a python script following the run_job documentation (<a href="https://cloud.google.com/python/docs/reference/run/latest/google.cloud.run_v2.services.jobs.JobsClient#google_cloud_run_v2_services_jobs_JobsClient_run_job" rel="nofollow noreferrer">https://cloud.google.com/python/docs/reference/run/latest/google.cloud.run_v2.services.jobs.JobsClient#google_cloud_run_v2_services_jobs_JobsClient_run_job</a>). However, I'm getting errors that I haven't been able to debug.</p>
<p>This is a job that already exists, but I need to overwrite an environment variable.</p>
<p>Here's is my code:</p>
<pre><code>env_vars = [
run_v2.EnvVar(name="VAR_1", value="var_1_value"),
run_v2.EnvVar(name="VAR_2", value="var_2_value"),
]
# Set the env vars as container overrides
container_overrides = run_v2.RunJobRequest.Overrides.ContainerOverride(
name="myjobname",
env=env_vars
)
request_override = run_v2.RunJobRequest.Overrides(
container_overrides=container_overrides
)
# Initialize the request
job_name = f"projects/myproject/locations/mylocation/jobs/myjob"
request = run_v2.RunJobRequest(
name=job_name,
overrides=request_override
)
# Make the request
operation = client.run_job(request=request)
logging.info("Waiting for operation to complete...")
response = operation.result()
logging.info(f"Operation result: {response}")
</code></pre>
<p>And this is the error I'm getting:</p>
<pre><code>Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/decorators/base.py", line 220, in execute
return_value = super().execute(context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/operators/python.py", line 181, in execute
return_value = self.execute_callable()
File "/opt/python3.8/lib/python3.8/site-packages/airflow/operators/python.py", line 198, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/home/airflow/gcs/dags/etl.py", line 128, in run_acolite
request = run_v2.RunJobRequest(
File "/opt/python3.8/lib/python3.8/site-packages/proto/message.py", line 604, in __init__
super().__setattr__("_pb", self._meta.pb(**params))
TypeError: Message must be initialized with a dict: google.cloud.run.v2.RunJobRequest
</code></pre>
<p>Thank you!</p>
| <python><google-cloud-platform><google-cloud-functions><cloud><google-cloud-run> | 2023-11-06 20:35:09 | 1 | 533 | Raimundo Manterola |
77,434,044 | 2,638,049 | manage.py migrate on Amazon Linux 2023 | <p>I am trying to deploy my django environment on aws linux 2023 and I have a file with command for migration:</p>
<pre><code>container_commands:
01_migrate:
command: "source /var/app/venv/*/bin/activate && python3 manage.py migrate"
leader_only: true
</code></pre>
<p>Then I get an error in cfn-init.log</p>
<pre><code>2023-11-06 19:01:44,570 [ERROR] Command 01_migrate (source /var/app/venv/*/bin/activate && python3 manage.py migrate) failed
2023-11-06 19:01:44,570 [ERROR] Error encountered during build of postbuild_0_hq_app: Command 01_migrate failed
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/cfnbootstrap/construction.py", line 579, in run_config
CloudFormationCarpenter(config, self._auth_config, self.strict_mode).build(worklog)
File "/usr/lib/python3.9/site-packages/cfnbootstrap/construction.py", line 277, in build
changes['commands'] = CommandTool().apply(
File "/usr/lib/python3.9/site-packages/cfnbootstrap/command_tool.py", line 127, in apply
raise ToolError(u"Command %s failed" % name)
cfnbootstrap.construction_errors.ToolError: Command 01_migrate failed
2023-11-06 19:01:44,571 [ERROR] -----------------------BUILD FAILED!------------------------
2023-11-06 19:01:44,571 [ERROR] Unhandled exception during build: Command 01_migrate failed
Traceback (most recent call last):
File "/opt/aws/bin/cfn-init", line 181, in <module>
worklog.build(metadata, configSets, strict_mode)
File "/usr/lib/python3.9/site-packages/cfnbootstrap/construction.py", line 137, in build
Contractor(metadata, strict_mode).build(configSets, self)
File "/usr/lib/python3.9/site-packages/cfnbootstrap/construction.py", line 567, in build
self.run_config(config, worklog)
File "/usr/lib/python3.9/site-packages/cfnbootstrap/construction.py", line 579, in run_config
CloudFormationCarpenter(config, self._auth_config, self.strict_mode).build(worklog)
File "/usr/lib/python3.9/site-packages/cfnbootstrap/construction.py", line 277, in build
changes['commands'] = CommandTool().apply(
File "/usr/lib/python3.9/site-packages/cfnbootstrap/command_tool.py", line 127, in apply
raise ToolError(u"Command %s failed" % name)
cfnbootstrap.construction_errors.ToolError: Command 01_migrate failed
</code></pre>
<p>Is command changed from linux2 to linux 2023?</p>
<p>Thank you for your help</p>
| <python><django><amazon-web-services> | 2023-11-06 20:25:53 | 1 | 1,639 | serge |
77,433,576 | 8,116,305 | How to apply rolling_map() in Polars for a function that uses multiple input columns | <p>I have a function using Polars Expressions to calculate the standard deviation of the residuals from a linear regression (courtesy of <a href="https://stackoverflow.com/questions/74895640/how-to-do-regression-simple-linear-for-example-in-polars-select-or-groupby-con">this post</a>).</p>
<p>Now I would like to apply this function using a <strong>rolling window</strong> over a dataframe. My approaches below fail because I don't know how to pass two columns as arguments to the function, since rolling_map() applies to an Expr.</p>
<p>Is there a way to do this directly in Polars, or do I need to use a workaround with Pandas? Thank you for your support! (feels like I'm missing something obvious here...)</p>
<pre><code>import polars as pl
def ols_residuals_std(x: pl.Expr, y: pl.Expr) -> pl.Expr:
# Calculate linear regression residuals and return the standard deviation thereof
x_center = x - x.mean()
y_center = y - y.mean()
beta = x_center.dot(y_center) / x_center.pow(2).sum()
e = y_center - beta * x_center
return e.std()
df = pl.DataFrame({'a': [45, 76, 4, 88, 66, 5, 24, 72, 93, 87, 23, 40],
'b': [77, 11, 56, 43, 61, 25, 63, 7, 66, 17, 64, 75]})
# Applying the function over the full length - works
df = df.with_columns(ols_residuals_std(pl.col('a'), pl.col('b')).alias('e_std'))
df.with_columns(pl.col('a').rolling_map(ols_residuals_std(pl.col('a'), pl.col('b')), window_size=4, min_periods=1).alias('e_std_win'))
# PanicException: python function failed: PyErr { type: <class 'TypeError'>, value: TypeError("'Expr' object is not callable"), traceback: None }
df.with_columns(pl.col('a', 'b').rolling_map(ols_residuals_std(), window_size=4, min_periods=1).alias('e_std_win'))
# TypeError: ols_residuals_std() missing 2 required positional arguments: 'x' and 'y'
</code></pre>
| <python><dataframe><python-polars><rolling-computation> | 2023-11-06 18:58:17 | 1 | 343 | usdn |
77,433,449 | 4,426,091 | psycopg2: WHERE =ANY with multiple columns | <p>PostgreSQL v14
Psycopg2 v2.9.3</p>
<p>I can do this in straight PostgreSQL / pgsql, however I can't seem to get it in Psycopg2.</p>
<p>Given an example table:</p>
<pre><code> CREATE TABLE accounts (
id BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
account_name TEXT,
account_number TEXT NOT NULL,
other_account_info TEXT
)
-- Enforces uniqueness for the pair, both non-null
CREATE UNIQUE INDEX accounts_name_number_unique_idx ON accounts (account_name, account_number);
-- Enforces uniqueness for the account_number for a non-null account_name
CREATE UNIQUE INDEX accounts_number_unique_idx ON accounts (account_number) WHERE account_name IS NULL;
</code></pre>
<p>I want to match on an accounts record using 2 fields by doing something like the following:</p>
<pre><code>SELECT *
FROM accounts
WHERE (account_name, account_number) =ANY(VALUES('name', number'), ('name1', 'number2'))
</code></pre>
<p>This of course works just fine when I run it directly in psql, however I can't get psycopg2 to format the SQL properly.</p>
<p><strong>NOTE:</strong> I don't want to simply do <code>WHERE account_name = %(account_name)s AND account_number = %(account_number)s</code> because I will have an unknown number of account_name / account_number pairs to query on and don't want to dynamically generate the SQL.</p>
<p>I've tried the following:</p>
<pre><code>template: str = f"""
SELECT *
FROM accounts
WHERE (account_name, account_number) = ANY(VALUES%(account_list)s)
"""
inserts: dict = {'account_list': ('account_name1', 'account_number1')}
execute(cur=_cursor, sql=template, argslist=inserts)
</code></pre>
<p>And that works! But, as soon as I add a second account_name and account_number to the parameters and make it a tuple of tuples it breaks with: <code>UndefinedFunction: operator does not exist: text = record</code></p>
<pre><code>template: str = f"""
SELECT *
FROM accounts
WHERE (account_name, account_number) = ANY(VALUES%(account_list)s)
"""
inserts: dict = { 'account_list': (('account_name1', 'account_number1',),('account_name2', 'account_number2',)) }
execute(cur=_cursor, sql=template, argslist=inserts)
</code></pre>
<p>I've also tried make the parameter a list and removing "VALUES" from the SQL template, but it breaks and I get <code>DatatypeMismatch: cannot compare dissimilar column types text and unknown at record column 1</code></p>
<pre><code>template: str = f"""
SELECT *
FROM accounts
WHERE (account_name, account_number) = ANY(%(account_list)s)
"""
inserts: dict = { 'account_list': [('account_name1', 'account_number1'),('account_name2', 'account_number2')] }
execute(cur=_cursor, sql=template, argslist=inserts)
</code></pre>
<p>I recognize that in some situations, I need to cast the individual parameters, but I can't figure out how to do that in psycopg2.</p>
<p>Any help would be really appreciated!</p>
| <python><postgresql><psycopg2> | 2023-11-06 18:36:26 | 1 | 7,430 | Brooks |
77,433,420 | 4,859,780 | How do I tell matplotlib.pyplot to draw the figure on a different display? | <p>I am using a laptop (mac os) with two external displays. When I use <code>matplotlib.pyplot</code> to plot anything, the new figure always shows up on the laptop screen.</p>
<p>Is there a programmatic way to have <code>matplotlib.pyplot</code> create the figure on a different display (rather than having to manually drag the figure window over to the extended display from the laptop display)?</p>
| <python><macos><matplotlib> | 2023-11-06 18:29:27 | 1 | 1,157 | bphi |
77,433,354 | 2,458,922 | Google Colab Roll Back | <p>Some Keras Package of my old code , Of Sept 2023 stopped working in Nov 6 2023.
I am not remembering what all were those package by default by then.</p>
<p>Is there any way I can have the Google Colab roll back to a previous veriosn.</p>
<p>Below is the Error Message from my current code while complied a couple of months before.</p>
<pre><code>ImportError Traceback (most recent call last)
<ipython-input-4-1c3947b2993d> in <cell line: 3>()
1 import numbers
2 from datetime import datetime
----> 3 from keras.layers import Dense, Dropout, Flatten,GaussianDropout
4 from keras.layers import Conv2D, MaxPooling2D
5 #from keras.optimizers import SGD
4 frames
/usr/local/lib/python3.10/dist-packages/keras/callbacks.py in <module>
13 from collections import deque
14 from collections import OrderedDict
---> 15 from collections import Iterable
16 from .utils.generic_utils import Progbar
17 from . import backend as K
ImportError: cannot import name 'Iterable' from 'collections' (/usr/lib/python3.10/collections/__init__.py)
</code></pre>
| <python><google-cloud-platform><google-colaboratory> | 2023-11-06 18:17:58 | 1 | 1,731 | user2458922 |
77,433,314 | 7,422,748 | How to write a default parameter callback function that does nothing? [Python] | <p>I sometimes want to pass in a callback function as a parameter to <code>my_function</code>. However, I want the default behavior to simply do nothing.</p>
<p>I find myself wanting to write the following <strong>invalid</strong> lines of code:</p>
<pre class="lang-py prettyprint-override"><code>def my_function(args, callback_function=pass):
# do something
callback_function()
</code></pre>
<p>But that doesn't work, so the best <strong>valid</strong> solution I have come up with is:</p>
<pre class="lang-py prettyprint-override"><code>def do_nothing():
pass
def my_function(args, callback_function=do_nothing):
# do something
callback_function()
</code></pre>
<p>Is there a cleaner way to do this, more like the first example above?</p>
| <python><callback><default-parameters> | 2023-11-06 18:11:17 | 2 | 316 | Pedro Contipelli |
77,433,205 | 1,554,386 | How to install mysqlclient in a python:3-slim Docker image without bloating the image? | <p>I'm using <code>python:3-slim</code> Docker image and want to use the mysqlclient package from Pypi but getting the following error from <code>RUN pip install mysqlclient</code> command:</p>
<pre><code>...
Collecting mysqlclient
Downloading mysqlclient-2.2.0.tar.gz (89 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 89.5/89.5 kB 2.5 MB/s eta 0:00:00
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'error'
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [27 lines of output]
/bin/sh: 1: pkg-config: not found
/bin/sh: 1: pkg-config: not found
Trying pkg-config --exists mysqlclient
Command 'pkg-config --exists mysqlclient' returned non-zero exit status 127.
Trying pkg-config --exists mariadb
Command 'pkg-config --exists mariadb' returned non-zero exit status 127.
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/usr/local/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-f_fea8lo/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 355, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-f_fea8lo/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 325, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-f_fea8lo/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 341, in run_setup
exec(code, locals())
File "<string>", line 154, in <module>
File "<string>", line 48, in get_config_posix
File "<string>", line 27, in find_package_name
Exception: Can not find valid pkg-config name.
Specify MYSQLCLIENT_CFLAGS and MYSQLCLIENT_LDFLAGS env vars manually
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
</code></pre>
<p>My Dockerfile looks like:</p>
<pre><code>FROM python:3.12-slim
RUN pip install mysqlclient
</code></pre>
<p>I tried installing the build dependencies by adding the following to the Dockerfile above <code>RUN pip install mysqlclient</code>:</p>
<pre><code>RUN apt-get install python3-dev default-libmysqlclient-dev build-essential
</code></pre>
<p>but this took the Docker image to over 700MB. I also tried replacing the line above with the following to remove the build dependencies after install <code>mysqlclient</code>:</p>
<pre><code>RUN apt-get update && \
apt-get dist-upgrade && \
apt-get install -y pkg-config default-libmysqlclient-dev \
build-essential libmariadb-dev && \
pip install mysqlclient && \
apt-get purge -y pkg-config default-libmysqlclient-dev build-essential
</code></pre>
<p>but the resulting image was still over 500MB in size and took a lot of precious CI/CD build time as it installs and then uninstalls the build tools.</p>
<p><strong>How do I install <code>mysqlclient</code> without bloating my docker image with build dependencies?</strong></p>
| <python><docker><mysql-python> | 2023-11-06 17:48:35 | 1 | 27,985 | Alastair McCormack |
77,433,148 | 4,001,769 | Regex to find header position getting all the text between first and last header | <p>I have a string that represents the text in a file, and I would like to find the header for the text. It is well structured, so I was gessing it would be fairly simple to do it, but the line breaks are breaking me (pun intentended)!
The header looks like this:</p>
<pre><code>Start of the header
Something inside the header
Random date
Pg. 2
</code></pre>
<p>And I can have one or many headers in the same string (pretty obvious, I know) where the only thing that changes is the page number.
My problem is that I can't find the whole header if I don't use the SingleLine modifier, and if I use it, it goes from the first header all the way to the last one, causing me to remove everything in between!</p>
<p>Here is an example text:</p>
<pre><code>Start of the header
Something inside the header
Random date
Pg. 1
Mussum Ipsum, cacilds vidis litro abertis. Sapien in monti palavris qui num significa nadis i pareci latim. Admodum accumsan disputationi eu sit. Vide electram sadipscing et per. Mé faiz elementum girarzis, nisi eros vermeio. Posuere libero varius. Nullam a nisl ut ante blandit hendrerit. Aenean sit amet nisi.
Mussum Ipsum, cacilds vidis litro abertis. Per aumento de cachacis, eu reclamis. Quem manda na minha terra sou euzis! Manduma pindureta quium dia nois paga. Paisis, filhis, espiritis santis.
Start of the header
Something inside the header
Random date
Pg. 2
More things
</code></pre>
<p>This makes my regex to match everything between the 1st <code>Start of the Header</code> to Pg. 2, making me miss all the funny Mussum Loren Ipsum in between!</p>
<p>I tried using some variations of <code>Start of the header.*Pg.*\d\n</code> with the SingleLine modifier.
I also tried without the SingleLine modifier, but then I can't find a way to match everything in between the <code>Start of the header</code> and the <code>Pg. 1</code>. I tried using different sequences of <code>.*\n*</code> and failed miserably!
Any tips on how to solve it?
I'm using Python 3.11 with <code>re.finditer</code> like this:
<code>headers = re.finditer("Start of the header.*Pg.*\d\n", self.text, flags=re.DOTALL)</code></p>
| <python><regex> | 2023-11-06 17:38:31 | 2 | 428 | Andre Guilhon |
77,433,096 | 6,759,459 | NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported | <p>I try to load a dataset using the <code>datasets</code> python module in my local Python Notebook. I am running a Python 3.10.13 kernel as I do for my virtual environment.</p>
<p>I cannot load the datasets I am following from a tutorial. Here's the error:</p>
<pre><code>---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
/Users/ari/Downloads/00-fine-tuning.ipynb Celda 2 line 3
1 from datasets import load_dataset
----> 3 data = load_dataset(
4 "jamescalam/agent-conversations-retrieval-tool",
5 split="train"
6 )
7 data
File ~/Documents/fastapi_language_tutor/env/lib/python3.10/site-packages/datasets/load.py:2149, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2145 # Build dataset for splits
2146 keep_in_memory = (
2147 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2148 )
-> 2149 ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
2150 # Rename and cast features to match task schema
2151 if task is not None:
2152 # To avoid issuing the same warning twice
File ~/Documents/fastapi_language_tutor/env/lib/python3.10/site-packages/datasets/builder.py:1173, in DatasetBuilder.as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory)
1171 is_local = not is_remote_filesystem(self._fs)
1172 if not is_local:
-> 1173 raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.")
1174 if not os.path.exists(self._output_dir):
1175 raise FileNotFoundError(
1176 f"Dataset {self.dataset_name}: could not find data in {self._output_dir}. Please make sure to call "
1177 "builder.download_and_prepare(), or use "
1178 "datasets.load_dataset() before trying to access the Dataset object."
1179 )
NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported.
</code></pre>
<p>How do I resolve this? I don't understand how this error is applicable, given that the dataset is something I am fetching and thus <em>cannot</em> be cached in my LocalFileSystem in the first place.</p>
| <python><python-3.x><pip><openai-api><huggingface-datasets> | 2023-11-06 17:29:38 | 2 | 926 | Ari |
77,433,017 | 521,347 | AlloyDB- CREATE DATABASE can not be run in a transaction block | <p>Since AlloyDB does not support creating a database using terraform yet, I am trying to create it using Alembic and sqlAlchemy. I am using <code>postgresql+psycopg2</code> as the driver.My code looks similar to the only answer on <a href="https://stackoverflow.com/questions/53847829/postgres-pg8000-create-database">here</a>.</p>
<pre><code>from sqlalchemy import create_engine
dburl = "postgresql+psycopg2://user:pswd@myip:5432/postgres/"
engine = create_engine(dburl)
conn = engine.connect()
con.rollback() # Make sure we're not in a transaction
con.autocommit = True # Turn on autocommit
conn.execute("CREATE DATABASE qux")
con.autocommit = False # Turn autocommit back off again
</code></pre>
<p>However, I am getting an error <code>sqlalchemy.exc.InternalError: (psycopg2.errors.ActiveSqlTransaction) CREATE DATABASE cannot run inside a transaction block</code>. Can someone suggest how can this be fixed?</p>
| <python><postgresql><sqlalchemy><alembic><google-alloydb> | 2023-11-06 17:18:12 | 0 | 1,780 | Sumit Desai |
77,432,967 | 8,030,746 | Why am I getting TimeoutException in Selenium with Python? | <p>Just started learning Selenium with Python. And no matter how much I change the WebDriverWait, it's still giving me the TimeoutException.</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
url = 'https://bbrauncareers-bbraun.icims.com/jobs/search?ss=1&searchRelation=keyword_all'
driver = webdriver.Chrome()
driver.implicitly_wait(30)
driver.get(url)
wait = WebDriverWait(driver, 30)
title = wait.until(EC.visibility_of_element_located((By.CSS_SELECTOR, 'div.description')))
print(title)
</code></pre>
<p>The page loads fully on driver.get(url), but it doesn't seem to want to find the title. I tried changing the css selector to anything else on the page, and it's still the same. What am I doing wrong here? Thank you!</p>
| <python><selenium-webdriver> | 2023-11-06 17:06:41 | 1 | 851 | hemoglobin |
77,432,950 | 3,170,116 | remove duplicates from a DataFrame, and keep all columns from that row | <p>I am attempting to use pyspark to de-duplicate a DataFrame. Here is the DataFrame:</p>
<pre><code>employee_name department start_date favorite_color
---------------------------------------------------------------------------
James sales 2023-11-03T12:38:02.371076Z green
James sales 2020-01-03T12:38:02.371076Z red
Fred operations 2023-09-27T16:28:01.920063Z red
Fred leadership 2023-07-04T16:28:01.920063Z green
Fred operations 2023-10-27T16:28:01.920063Z green
Wilma administrative 2021-06-17T16:28:01.920063Z green
</code></pre>
<p>If <code>employee_name</code> and <code>department</code> are identical, it is considered a duplicate and only the row with the most recent <code>start_date</code> should be kept.</p>
<p>This gets me close:</p>
<pre><code>deduped_df = employee_df.groupBy(col("employee_name"), col("department")).agg(max(col("start_date")))
employee_name department start_date
-----------------------------------------------------------
Fred leadership 2023-07-04T16:28:01.920063Z
Fred operations 2023-10-27T16:28:01.920063Z
James sales 2023-11-03T12:38:02.371076Z
Wilma administrative 2021-06-17T16:28:01.920063Z
</code></pre>
<p>...however, it doesn't preserve the <code>favorite_color</code> column, and I can't figure out how to preserve it because the <code>group_by</code> clause eliminates it from the DataFrame.</p>
<p>Is there a way I can use <code>start_date</code> as the criteria for which row to keep, and keep all columns from the row that is kept?</p>
| <python><pyspark> | 2023-11-06 17:04:15 | 1 | 370 | ChadSC |
77,432,905 | 8,076,979 | Why does dataclass favour __repr__ over __str__? | <p>Given the following code, I think output <code>(A)</code> makes sense since <code>__str__</code> takes precedence over <code>__repr__</code> however I am a bit confused about output <code>(B)</code> why does it favour <code>__repr__</code> over <code>__str__</code> and is there a way to make the class use the <code>__str__</code> rather than the <code>__repr__</code> of <code>Foo</code> without defining <code>__str__</code> for <code>Bar</code>?</p>
<pre class="lang-py prettyprint-override"><code>@dataclass()
class Foo():
def __repr__(self) -> str:
return "Foo::__repr__"
def __str__(self) -> str:
return "Foo::__str__"
@dataclass()
class Bar():
foo: Foo
f = Foo()
b = Bar(f)
print(f) # (A) Outputs: Foo::__str__
print(b) # (B) Outputs: Bar(foo=Foo::__repr__)
</code></pre>
<p>I stumbled upon this because I saw that my custom <code>__str__</code> of <code>Foo</code> was not being used by <code>Bar</code> in fact even if I remove the <code>__repr__</code> output <code>(B)</code> will still be <code>Bar(foo=Foo())</code> instead of <code>Bar(foo=Foo::__str__)</code>.</p>
| <python><python-dataclasses> | 2023-11-06 16:59:16 | 2 | 969 | Tobias |
77,432,881 | 571,950 | How to use dynamic types as return types in Python/mypy | <p>I am trying to use a dynamically created type to use as return type.</p>
<p>The code works, but mypy throws an error.</p>
<p>The code snippet shows the problem in a very simplified way, but where I am using dynamic return types in my project, they have meaning (FastAPI for capturing the response structure in OpenAPI specs)</p>
<p>The problem I am having is getting mypy to accept the dynamic type as a valid type. It breaks with the <code>[valid-type]</code> error:</p>
<pre><code>cpl/features/aup/api/mypy_error.py: note: In function "list_products":
cpl/features/aup/api/mypy_error.py:79: error: Variable "cpl.features.aup.api.mypy_error.PydanticProduct" is not valid as a type [valid-type]
cpl/features/aup/api/mypy_error.py:79: note: See https://mypy.readthedocs.io/en/stable/common_issues.html#variables-vs-type-aliases
</code></pre>
<p>I have condensed the problem I am having into the snippet below:</p>
<pre class="lang-py prettyprint-override"><code># creating class dynamically
Product = type("Product", (object, ), {})
# Use dynamic class as return type
def list_products() -> list[Product]: # This gives [valid-type] error
"""
List all products.
:return: A list of products.
"""
return [Product(), Product(), Product()]
</code></pre>
<p>Can anyone help me solve this, there must be a way to address mypy to accept this sort of typing.</p>
<p><strong>Note:</strong> Where I am actually using this is in a FastAPI endpoint, so the actual return type structure has meaning for the generated OpenAPI spec.</p>
| <python><fastapi><mypy><pydantic> | 2023-11-06 16:55:48 | 1 | 747 | Jakob Simon-Gaarde |
77,432,777 | 4,399,016 | Pivoting a Pandas data frame from long to wide | <p>I have this code from <a href="https://stackoverflow.com/a/47152692/4399016">here</a>:</p>
<pre><code>np.random.seed([3, 1415])
df2 = pd.DataFrame({'A': list('aaaabbbc'), 'B': np.random.choice(15, 8)})
df2
A B
0 a 0
1 a 11
2 a 2
3 a 11
4 b 10
5 b 10
6 b 14
7 c 7
</code></pre>
<p>The expected should look something like</p>
<pre><code> a b c
0 0.0 10.0 7.0
1 11.0 10.0 NaN
2 2.0 14.0 NaN
3 11.0 NaN NaN
</code></pre>
<p>The code solution provided online is as follows:
STEP 1:</p>
<pre><code>df2.insert(0, 'count', df2.groupby('A').cumcount())
df2
count A B
0 0 a 0
1 1 a 11
2 2 a 2
3 3 a 11
4 0 b 10
5 1 b 10
6 2 b 14
7 0 c 7
</code></pre>
<p>Step 2:</p>
<pre><code>df2.pivot(*df2)
# df2.pivot(index='count', columns='A', values='B')
A a b c
count
0 0.0 10.0 7.0
1 11.0 10.0 NaN
2 2.0 14.0 NaN
3 11.0 NaN NaN
</code></pre>
<p>In my case I have a DATE column which has duplicate values. In count column, even though the cumcount method generates duplicates, this pivot works. But in my case, the Date Index column does not allow Pivot. Is it possible to perform Pivot with DATE column in spite of duplicate values?</p>
<pre><code> count A B
0 2023-01-01 a 0
1 2023-02-01 a 11
2 2023-03-01 a 2
3 2023-04-01 a 11
4 2023-01-01 b 10
5 2023-02-01 b 10
6 2023-03-01 b 14
7 2023-01-01 c 7
</code></pre>
<p>This Long form has to be made Wide with Count[DATES] as the first column and the values of Column A be made different Columns themselves. The Values of Column B populating the Pivot Table. Similar to this table below.</p>
<pre><code> A a b c
count
2023-01-01 0.0 10.0 7.0
2023-02-01 11.0 10.0 NaN
2023-03-01 2.0 14.0 NaN
2023-04-01 11.0 NaN NaN
</code></pre>
| <python><pandas><dataframe><group-by><pivot-table> | 2023-11-06 16:40:17 | 0 | 680 | prashanth manohar |
77,432,734 | 7,456,317 | Install a Python package and debug it | <p>I have the following use case:
I want to write some code that uses a package from GitHub, while debugging that package. I cloned that package next to the directory with my code, so the directory structure looks like this:</p>
<pre><code>/code
/github-package
/my-code
</code></pre>
<p>The package comes with a <code>setup.py</code> file that users <code>setuptools.setup</code> method, and I install it using:</p>
<pre class="lang-bash prettyprint-override"><code>$ python setup.py install
</code></pre>
<p>the package is installed and I can import it.
However, when I run my code, I can't debug the package (I can't step into it while debugging).
What am I doing wrong?</p>
| <python><setuptools><python-packaging> | 2023-11-06 16:35:20 | 0 | 913 | Gino |
77,432,589 | 2,725,810 | What resource is my Lambda function exhausting? | <p>The following AWS Lambda function uses multi-threading to read 10,000 small objects of about 40KB each from AWS DynamoDB.</p>
<pre class="lang-py prettyprint-override"><code>import boto3
import threading
import math
import json
class DynamoDB:
def __init__(self, table_name):
self.client = boto3.client('dynamodb', region_name="us-east-1")
self.table_name = table_name
# Batch-read
def read(self, keys):
batch_keys = {
self.table_name: {'Keys': [{'content_id': {'S': id}} for id in keys]}}
return self.client.batch_get_item(RequestItems=batch_keys)
dynamodb = DynamoDB("mytable")
def read_vectors(keys):
batch_size = 100
n_batches = math.ceil(len(keys)/batch_size)
for i in range(n_batches):
my_keys = keys[i * batch_size : (i + 1) * batch_size]
dynamodb.read(my_keys)
def concurrent_reads(keys, n_threads):
threads = []
keys_per_thread = math.ceil(len(keys)/n_threads)
for i in range(n_threads):
my_keys = keys[i * keys_per_thread : (i + 1) * keys_per_thread]
thread = threading.Thread(target=read_vectors, args=(my_keys,))
thread.start()
threads.append(thread)
for thread in threads:
thread.join()
def lambda_handler(event, context):
keys = [f"vectors/{content_id}" for content_id in range(10000)]
concurrent_reads(keys, 5) # 5 threads
return {
'statusCode': 200,
'body': json.dumps('Hello from Lambda!')
}
</code></pre>
<p>Here are the time measurements:</p>
<pre><code>Threads Time, sec
1 10.5
2 7.1
3 5.5
4 5.2
5 5.4
</code></pre>
<p>I get similar performance with 2048 MB of RAM and with 10240 MB of RAM. Given that all objects add up to 400 MB and Lambda runs in the same region as DynamoDB, I am definitely not saturating the network bandwidth (especially with 10240 MB, which should give ~25 GiB/s).</p>
<p>What resource is my Lambda exhausting that prevents it from scaling?</p>
<p>P.S. The question at re:Post: <a href="https://repost.aws/questions/QUoFkFRrJ-TaSjU2wbS38XqQ/what-resource-is-my-lambda-function-exhausting" rel="nofollow noreferrer">https://repost.aws/questions/QUoFkFRrJ-TaSjU2wbS38XqQ/what-resource-is-my-lambda-function-exhausting</a></p>
| <python><multithreading><aws-lambda><amazon-dynamodb><boto3> | 2023-11-06 16:12:42 | 0 | 8,211 | AlwaysLearning |
77,432,481 | 2,647,747 | what does [x, y, λ] = xyλ mean in python | <p>I came across the code snippet recently.</p>
<pre class="lang-py prettyprint-override"><code>def DL (xyλ) :
[x, y, λ] = xyλ
return np.array([
dfdx(x, y) - λ * dgdx(x, y),
dfdy(x, y) - λ * dgdy(x, y),
- g(x, y)
])
</code></pre>
<p>However I don't know what <code>[x, y, λ] = xyλ</code> means.</p>
| <python> | 2023-11-06 15:58:04 | 1 | 373 | Zwy |
77,432,381 | 2,386,113 | How to set the plotting area size in Matplotlib? | <p>I am able to set the figure size but I need to have the same size for the plots. I am saving the plots as SVG and then loading them in Latex to create a figure. I have verified that the size of all the figures is the same but as shown by the <strong>red horizontal line</strong> in the screenshot below, the height of the plots is not the same.</p>
<p><a href="https://i.sstatic.net/VVlXq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VVlXq.png" alt="enter image description here" /></a></p>
<p><strong>My function to create and save plots:</strong></p>
<pre><code>def plot_and_save_figure(title, X, Y, xy_min, xy_max, save_image_path):
f = plt.figure(constrained_layout=True, figsize=(10, 10), dpi=100)
plt.scatter(X, Y, alpha=0.5, label='Data Points')
plt.plot((xy_min,xy_max), (xy_min,xy_max), 'r')
plt.xlim(xy_min,xy_max)
plt.ylim(xy_min,xy_max)
plt.gca().set_aspect('equal', 'box')
plt.savefig(save_image_path, bbox_inches='tight')
plt.grid()
plt.show()
return f
</code></pre>
<p><strong>Question:</strong> How can I enforce that the drawing Width-Height of my square plots are the same?</p>
| <python><matplotlib> | 2023-11-06 15:42:25 | 0 | 5,777 | skm |
77,432,378 | 9,150,928 | What is the result/effect of layers.__init__ in side a class without declaring them as variables? | <p>First to declear, I am not good at Python yet.</p>
<p>Below is a snippet of the original microsoft LoRA implementation.</p>
<pre><code>import torch
import torch.nn as nn
import torch.nn.functional as F
import math
from typing import Optional, List
class LoRALayer():
def __init__(
self,
r: int,
lora_alpha: int,
lora_dropout: float,
merge_weights: bool,
):
self.r = r
self.lora_alpha = lora_alpha
# Optional dropout
if lora_dropout > 0.:
self.lora_dropout = nn.Dropout(p=lora_dropout)
else:
self.lora_dropout = lambda x: x
# Mark the weight as unmerged
self.merged = False
self.merge_weights = merge_weights
class Embedding(nn.Embedding, LoRALayer):
# LoRA implemented in a dense layer
def __init__(
self,
num_embeddings: int,
embedding_dim: int,
r: int = 0,
lora_alpha: int = 1,
merge_weights: bool = True,
**kwargs
):
nn.Embedding.__init__(self, num_embeddings, embedding_dim, **kwargs)
LoRALayer.__init__(self, r=r, lora_alpha=lora_alpha, lora_dropout=0,
merge_weights=merge_weights)
# Actual trainable parameters
if r > 0:
self.lora_A = nn.Parameter(self.weight.new_zeros((r, num_embeddings)))
self.lora_B = nn.Parameter(self.weight.new_zeros((embedding_dim, r)))
self.scaling = self.lora_alpha / self.r
# Freezing the pre-trained weight matrix
self.weight.requires_grad = False
self.reset_parameters()
def reset_parameters(self):
nn.Embedding.reset_parameters(self)
if hasattr(self, 'lora_A'):
# initialize A the same way as the default for nn.Linear and B to zero
nn.init.zeros_(self.lora_A)
nn.init.normal_(self.lora_B)
def train(self, mode: bool = True):
nn.Embedding.train(self, mode)
if mode:
if self.merge_weights and self.merged:
# Make sure that the weights are not merged
if self.r > 0:
self.weight.data -= (self.lora_B @ self.lora_A).transpose(0, 1) * self.scaling
self.merged = False
else:
if self.merge_weights and not self.merged:
# Merge the weights and mark it
if self.r > 0:
self.weight.data += (self.lora_B @ self.lora_A).transpose(0, 1) * self.scaling
self.merged = True
def forward(self, x: torch.Tensor):
if self.r > 0 and not self.merged:
result = nn.Embedding.forward(self, x)
after_A = F.embedding(
x, self.lora_A.transpose(0, 1), self.padding_idx, self.max_norm,
self.norm_type, self.scale_grad_by_freq, self.sparse
)
result += (after_A @ self.lora_B.transpose(0, 1)) * self.scaling
return result
else:
return nn.Embedding.forward(self, x)
</code></pre>
<p>I have 2 python questions here.</p>
<ol>
<li>What is effect/result of nn.Embedding.<strong>init</strong> and LoRALayer.<strong>init</strong> in Embedding class without making them as variables? They are like this below.</li>
</ol>
<pre><code> nn.Embedding.__init__(self, num_embeddings, embedding_dim, **kwargs)
LoRALayer.__init__(self, r=r, lora_alpha=lora_alpha, lora_dropout=0,
merge_weights=merge_weights)
</code></pre>
<p>They are not like <code>embedding = nn.Embedding()</code> or <code>lora = LoRALayer()</code>. What do they do inside the Embedding class when they are not decleared as variables?</p>
<ol start="2">
<li>The LoRALayer class was never been used or called in Embedding class except <code>LoRALayer.__init__(self, r=r, lora_alpha=lora_alpha, lora_dropout=0, merge_weights=merge_weights)</code></li>
</ol>
<p>Then how does it work and why is it there?</p>
<p>Thank you for your kind explanations in advance.</p>
| <python><pytorch> | 2023-11-06 15:41:52 | 1 | 327 | S. Jay |
77,432,303 | 5,032,387 | How to make configuration settings part of Strategy in python-hypothesis | <p>I want to write a test for a method using the hypothesis library.</p>
<p>Within the method, I import configuration settings that differ according to the job that I'm running the codebase for (each unique run of the code gets different settings).</p>
<p>The methods look something like this</p>
<p>Module <code>user_settings</code>:</p>
<pre><code>setting1 = 5
setting2 = 10
setting3 = -0.085
</code></pre>
<p><code>test_class</code></p>
<pre><code>import user_settings
import numpy as np
class TestClass():
def __init__(self):
self.arr = np.random.normal(size=(5,1))
def dummy_method(self):
setting1 = user_settings.setting1
setting2 = user_settings.setting2
setting3 = user_settings.setting3
return (self.arr * setting1 + setting2) / setting3
</code></pre>
<p>I can define the object to feed to this function as a fixture in <code>conftest.py</code>:</p>
<pre><code>import pytest
from test_class import TestClass
@pytest.fixture(scope='session')
def test_obj():
return TestClass()
</code></pre>
<p>And then incorporate that into the test:</p>
<pre><code>def test_dummy_method(test_obj):
res = test_obj.dummy_method()
assert np.all(~np.isnan(res))
</code></pre>
<p>I want to leverage the hypothesis framework to test the output with different settings. But since they're set within the function, what can I do to make them part of a hypothesis strategy? I understand that one option is modifying the method to pass those settings as arguments. However, the current architecture was a deliberate design decision to keep the code succinct and clear - there are hundreds of these settings and there was no need up to now to have them as arguments, with the associated clutter (more lines in functions definition plus many more lines from docstrings).</p>
<h3>Update after comment by Zac Hatfield-Dodds</h3>
<p>I can use a context manager to temporarily set different values for the user settings:</p>
<pre><code>class SettingContextManager:
def __enter__(self):
# Save original settings
self.original_setting1 = user_settings.setting1
self.original_setting2 = user_settings.setting2
self.original_setting3 = user_settings.setting3
# Patch settings with test values
user_settings.setting1 = 5
user_settings.setting2 = -2
user_settings.setting3 = 99.8
def __exit__(self, exc_type, exc_value, exc_traceback):
# Restore original settings
user_settings.setting1 = self.original_setting1
user_settings.setting2 = self.original_setting2
user_settings.setting3 = self.original_setting3
</code></pre>
<p>And then use this ContextManager within the dummy method:</p>
<pre><code>import numpy as np
def test_dummy_method(test_obj):
with SettingContextManager():
res = test_obj.dummy_method()
assert np.all(~np.isnan(res))
</code></pre>
<p>Would I then pass the Hypothesis values with a decorator when defining <code>SettingContextManager</code> such as like this:</p>
<pre><code>from hypothesis import given
from hypothesis.extra.numpy import floating_dtypes
@given(setting1 = floating_dtypes())
class SettingContextManager:
def __enter__(self):
self.original_setting1 = user_settings.setting1
self.original_setting2 = user_settings.setting2
self.original_setting3 = user_settings.setting3
# Patch settings with test values
user_settings.setting1 = setting1
</code></pre>
<h3>Update #2 after follow-up comment by Zac Hatfield-Dodds</h3>
<p>I created a reproducible example following Zac's second comment. It's not erroring when I run <code>pytest</code>, but also not sure about the yield component and whether I need an <code>as</code> statement where I open the context manager.</p>
<p>Here is the hierarchy of files</p>
<pre><code>
└───src
│ dummy_class.py
│ user_settings.py
│
tests
│ conftest.py
│ context_manager.py
│ test_dummy_class.py
</code></pre>
<p><strong>Files</strong></p>
<p><code>dummy_class</code>:</p>
<pre class="lang-py prettyprint-override"><code>import user_settings
import numpy as np
class TestClass():
def __init__(self):
self.arr = np.random.normal(size=(5,1))
def dummy_method(self):
setting1 = user_settings.setting1
setting2 = user_settings.setting2
setting3 = user_settings.setting3
return (self.arr * setting1 + setting2) / setting3
</code></pre>
<p><code>user_settings</code></p>
<pre class="lang-py prettyprint-override"><code>setting1 = 5
setting2 = 10
setting3 = -0.085
</code></pre>
<p><code>conftest</code></p>
<pre><code>import pytest
import sys
sys.path.append('./src')
from dummy_class import TestClass
@pytest.fixture(scope='session')
def test_obj():
return TestClass()
</code></pre>
<p><code>context_manager</code></p>
<pre class="lang-py prettyprint-override"><code>from contextlib import contextmanager
import user_settings
@contextmanager
def temp_user_setting(new_value):
# Get the current value of the user_setting
original_value = user_settings.setting1
# Set the user_setting to the new value
user_settings.setting1 = new_value
try:
# Yield to allow the test code to execute with the modified user_setting
yield
finally:
# Revert the user_setting back to its original value
user_settings.setting1 = original_value
</code></pre>
<p><code>test_dummy_class</code></p>
<pre><code>import numpy as np
from hypothesis import given
import hypothesis.strategies as st
from context_manager import temp_user_setting
@given(st.floats(min_value=0.0, max_value=100.0))
def test_dummy_method(test_obj, setting1):
with temp_user_setting(setting1):
res = test_obj.dummy_method()
assert np.all(~np.isnan(res))
</code></pre>
| <python><testing> | 2023-11-06 15:30:34 | 1 | 3,080 | matsuo_basho |
77,432,232 | 5,640,517 | Python in Jupyter notebook, integers and is operator | <p>Reading these <a href="https://stackoverflow.com/questions/77432108/how-does-memory-management-in-python-work-for-integers">How does memory management in Python work for integers?</a> and <a href="https://stackoverflow.com/questions/306313/is-operator-behaves-unexpectedly-with-integers">"is" operator behaves unexpectedly with integers</a></p>
<p>I was testing</p>
<pre><code>a = 845
b = int("8"+"4"+"5")
print(a == b) # True
print(a is b) # False
print(id(a))
print(id(b))
a = 845
b = 840+5
print(a == b) # True
print(a is b) # True
print(id(a))
print(id(b))
</code></pre>
<p>If you run this code in python by copypasting or, as I was doing, shift+enter (notebook.cell.executeAndSelectBelow) in vscode to run it in Jupyter Notebook you get all different memory addresses.</p>
<pre><code>True
False
2832156355920
2832156352912
True
False
2832156355888
2832156355696
</code></pre>
<p>If you save it as a file and run it by <code>python main.py</code> you get the first a and the second a and b share memory address.</p>
<pre><code>True
False
1612814393360
1612814397168
True
True
1612814393360
1612814393360
</code></pre>
<p>Why? What's different int the interpreter when copypasting line by line and using Jupyter Notebook? I assume the <code>line by line</code> part but why?</p>
| <python><jupyter-notebook> | 2023-11-06 15:21:23 | 0 | 1,601 | Daviid |
77,432,229 | 4,810,328 | Python ctypes dtype multiply | <p>I have the following line in python:</p>
<pre><code>a = (ctypes.c_float*10)()
</code></pre>
<p>Here a is initialized as an object with _<em>repr</em>_ result:</p>
<pre><code><__main__.c_float_Array_10 object at 0x7fbb3e6cf340>
</code></pre>
<p>My understanding is that this object is some sort of a pointer that can interoperate between python and C functions. How does this work and how are we able to multiply <code>ctypes.c_float</code> class with and integer?</p>
| <python><python-3.x><ctypes> | 2023-11-06 15:21:11 | 1 | 711 | Tarique |
77,432,123 | 20,591,261 | Line plot doesn't align with corresponding bars | <p>I'm trying to make a bar chart with a line plot, but the line plot is starting from 2 instead or 1.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
url = 'https://raw.githubusercontent.com/Arancium98/datasets/main/ventas_pizzas2.csv'
df = pd.read_csv(url)
VentaS = df.groupby('week')['total_price'].sum()
fig, ax = plt.subplots(figsize=(15, 5))
VentaS.plot(kind='bar', ax=ax, title='Ventas por semana')
VentaS.plot(kind='line', color='tab:orange', ax=ax)
plt.show()
</code></pre>
<p>The 'VentaS' start from the 1 and finish on 53, dtype: float64, so I'm not sure why the line chart doesn't start from the 1.</p>
<p><a href="https://i.sstatic.net/rdLK5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rdLK5.png" alt="enter image description here" /></a></p>
<h3><code>df[['week', 'total_price']]</code></h3>
<pre class="lang-none prettyprint-override"><code> week total_price
0 1 14.00
1 1 16.00
2 1 18.50
3 1 20.75
4 1 16.00
...
48615 53 16.75
48616 53 17.95
48617 53 12.00
48618 53 20.25
48619 53 12.75
</code></pre>
<h3><code>VentaS</code></h3>
<pre class="lang-none prettyprint-override"><code>week
1 9867.35
2 16002.80
3 15118.20
4 15655.80
5 16359.35
6 16183.95
7 16091.55
8 15354.90
9 15967.80
10 16577.95
11 15827.30
12 15719.75
13 15696.75
14 16961.40
15 16167.60
16 16093.00
17 16136.25
18 15231.75
19 16258.45
20 17143.45
21 15912.90
22 15306.80
23 16656.20
24 15893.65
25 15402.25
26 16228.05
27 17517.00
28 15898.45
29 16261.25
30 16362.15
31 14991.80
32 15620.95
33 16287.35
34 15686.60
35 14283.55
36 15548.75
37 16725.25
38 15985.85
39 11030.90
40 16821.55
41 13029.95
42 15488.70
43 14431.75
44 13264.70
45 16124.95
46 15458.05
47 14867.30
48 19762.15
49 16624.30
50 15677.25
51 15976.25
52 11433.85
53 7246.50
Name: total_price, dtype: float64
</code></pre>
| <python><pandas><bar-chart><linechart> | 2023-11-06 15:03:46 | 1 | 1,195 | Simon |
77,432,043 | 1,014,217 | How to keep session state variables across a multipage app in Streamlit | <p>My app has a few pages, and the first time it loads, it will detect if there is an access token in the session state, if not the user with click on the login to go to Azure Id, and then a code will be returned in the query parameters and then an access token will be retrieved, thenI can use to render some buttons depending on the user groups, etc.</p>
<p>This works perfectly fine on my homepage.</p>
<p>Security.py</p>
<pre><code># Initialize the MSAL ConfidentialClientApplication
app = msal.ConfidentialClientApplication(CLIENT_ID, authority=AUTHORITY, client_credential=CLIENT_SECRET)
# Function to get the authorization URL
def get_auth_url():
"""
Generate the authorization URL for user sign-in.
"""
auth_url = app.get_authorization_request_url(SCOPE, redirect_uri=REDIRECT_URI)
return auth_url
# Function to exchange authorization code for an access token
def get_token_from_code(auth_code):
"""
Exchange an authorization code for an access token.
"""
result = app.acquire_token_by_authorization_code(auth_code, scopes=SCOPE, redirect_uri=REDIRECT_URI)
return result['access_token']
# Function to get user information
def get_user_info(access_token):
"""
Get user information using the provided access token.
"""
headers = {'Authorization': f'Bearer {access_token}'}
response = requests.get('https://graph.microsoft.com/v1.0/me', headers=headers)
return response.json()
# Function to get user's groups with display names
def get_user_groups(access_token):
"""
Get the user's groups with display names using the provided access token.
"""
headers = {'Authorization': f'Bearer {access_token}'}
params = {
'$select': 'displayName, id', # Specify the fields you want to retrieve
}
response = requests.get('https://graph.microsoft.com/v1.0/me/memberOf', headers=headers, params=params)
return response.json()
# Function to handle the OAuth2 redirect flow
def handle_redirect():
"""
Handle the OAuth2 redirect flow, retrieve the access token, and store it in the session.
"""
if not st.session_state.get('access_token'):
code = st.experimental_get_query_params().get('code')
if code:
access_token = get_token_from_code(code)
st.session_state['access_token'] = access_token
st.experimental_set_query_params()
</code></pre>
<p>then utils.py have this:</p>
<p>def setup_page():
if st.experimental_get_query_params().get('code'):
# Handle the OAuth2 redirect if an authorization code is present
handle_redirect()
access_token = st.session_state.get('access_token')</p>
<pre><code>if access_token:
# If an access token is available, retrieve and display user information and groups
user_info = get_user_info(access_token)
st.session_state['user_info'] = user_info
# Get user's group information
user_groups = get_user_groups(access_token)
st.session_state['user_groups'] = user_groups
# Display user's group information, handling cases where 'displayName' is not available
# if 'value' in user_groups:
# for group in user_groups['value']:
# group_display_name = group.get('displayName', group.get('mailNickname', group["id"]))
# group_id = group["id"]
# st.write(group_id)
# return True
else:
# If there's no access token, prompt the user to sign in
st.write("Please sign-in to use this app.")
auth_url = get_auth_url()
st.markdown(f"<a href='{auth_url}' target='_self'>Sign In</a>", unsafe_allow_html=True)
st.stop()
</code></pre>
<p>and then on both my homepage and page1.py, page2.py I just call setup_page.</p>
<p>However when I to from homepage to page1, session_state is lost, the access token is not there anymore.</p>
<p>Whats causing this?</p>
| <python><streamlit> | 2023-11-06 14:53:34 | 0 | 34,314 | Luis Valencia |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.