QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
76,904,666 | 7,483,509 | Find all decompositions into two factors of a number | <p>For a given <code>N</code>, I am trying to find every pair of positive integers <code>a</code> and <code>b</code> such that <code>N = a*b</code>.</p>
<p>I start by decomposing the number into prime factors using <a href="https://docs.sympy.org/latest/modules/ntheory.html" rel="nofollow noreferrer"><code>sympy.ntheory.factorint</code></a>, which gives me a dict mapping factor -> exponent.</p>
<p>I have this code already, but I don't want to get duplicates (<code>a</code> and <code>b</code> play the same role):</p>
<pre class="lang-py prettyprint-override"><code>import itertools
import numpy as np
from sympy.ntheory import factorint

def find_decompositions(n):
    prime_factors = factorint(n)
    cut_points = {f: [i for i in range(1 + e)] for f, e in prime_factors.items()}
    cuts = itertools.product(*cut_points.values())
    decompositions = [((a := np.prod([f**e for f, e in zip(prime_factors, cut)])), n//a) for cut in cuts]
    return decompositions
</code></pre>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>In [235]: find_decompositions(12)
Out[235]: [(1, 12), (3, 4), (2, 6), (6, 2), (4, 3), (12, 1)]
</code></pre>
<p>What I would like to get instead:</p>
<pre class="lang-py prettyprint-override"><code>Out[235]: [(1, 12), (3, 4), (2, 6)]
</code></pre>
<p>I tried to halve the range in <code>cut_points</code> with upper bounds such as <code>e//2</code>, <code>1 + e//2</code>, <code>(1+e)//2</code>, <code>1 + (1+e)//2</code>. None of them worked.</p>
<p>A simple solution is obviously to compute the same list and return only the first half:</p>
<pre class="lang-py prettyprint-override"><code>decompositions[:(len(decompositions)+1)//2]
</code></pre>
<p>but I am looking for a solution that reduces the number of computations instead.</p>
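For reference, a sketch of one way to avoid the duplicates entirely, using plain trial division up to √N instead of the factorint-based cut points (an alternative strategy, not a fix to the code above; the pair ordering also differs from the example):

```python
import math

def find_decompositions(n):
    # a runs only up to sqrt(n); its cofactor b = n // a is always >= a,
    # so each unordered pair (a, b) appears exactly once.
    return [(a, n // a) for a in range(1, math.isqrt(n) + 1) if n % a == 0]

print(find_decompositions(12))  # [(1, 12), (2, 6), (3, 4)]
```

This does roughly √N modulo checks, but gives up the factorization-based enumeration, so it is only a win when N is not astronomically large.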
| <python><algorithm><combinations> | 2023-08-15 09:12:27 | 3 | 1,109 | Nick Skywalker |
76,904,538 | 3,905,546 | Python openpyxl _images get data? | <p>My python version is 3.11
openpyxl 3.1.2</p>
<pre><code>import openpyxl as op

path = r"D:\input"
wb = op.load_workbook(path + "/test.xlsx")
ws = wb.active
data1 = ws["B2"].value
img_obj = ws._images[0]
img_data = img_obj.image # error
</code></pre>
<p>I'm trying to get the image data but I'm getting this error:</p>
<pre><code>AttributeError: 'Image' object has no attribute 'image'
</code></pre>
<p>So I tried the <code>image_data</code> property of <code>img_obj</code> instead, but the result was the same:</p>
<pre><code>img_data = img_obj.image_data # error
</code></pre>
<p>How can I access my image data?
I'd like to insert the image into my cell like this:</p>
<pre><code>ws.add_image(img_obj, "D6")
</code></pre>
<p>Loading image files from outside the Excel file works fine;<br>
the problem only occurs when I try to copy an image that is already inside the Excel file.</p>
| <python><image><copy><openpyxl> | 2023-08-15 08:47:37 | 1 | 351 | Richard |
76,904,457 | 2,124,252 | Positive lookbehind, followed by comma separated list, followed by positive lookahead | <p>Need to parse the output of the pytest log, specifically this line of it:</p>
<pre><code>=========== 1 passed, 24 skipped, 75 deselected in 251.82s (0:04:11) ===========
</code></pre>
<p>What I have so far:</p>
<pre><code>r"(?<== ).+(?= in .+ =)"
</code></pre>
<p>Which gives the following output:</p>
<pre><code>['1 passed, 24 skipped, 75 deselected']
</code></pre>
<p>Desired output is:</p>
<pre><code>[('1', 'passed'), ('24', 'skipped'), ('75', 'deselected')]
</code></pre>
<p>I understand I can take my existing output and modify it with string modifications using methods like <code>str.split</code>, but I know it's doable in one regex. I just can't figure it out.</p>
<p>EDIT: To clear up some confusion. The real log file is very large, which is why I used the look-ahead and look-behind in my original solution: I wanted to make sure that only the line in the example I provided is captured.
Furthermore, the line won't always contain <code>passed, skipped, and deselected</code>. There are a lot more categories, and the line can have all of them or just one of them (e.g. <code>2 passed in 25s</code> or <code>1 passed, 2 skipped, 3 failed, 4 deselected in 500s</code>, etc.)</p>
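Not literally a single regex, but a hedged two-step sketch: reuse the original lookaround pattern to isolate the summary line, then pull out every "count category" pair with one `findall`:

```python
import re

log_line = "=========== 1 passed, 24 skipped, 75 deselected in 251.82s (0:04:11) ==========="

# Step 1: the original lookbehind/lookahead isolates the comma-separated summary.
summary = re.search(r"(?<== ).+(?= in .+ =)", log_line).group()

# Step 2: each pair is digits, a space, then a word; "251.82s" and "(0:04:11)"
# don't match because the digits there aren't followed by a space.
pairs = re.findall(r"(\d+) (\w+)", summary)
print(pairs)  # [('1', 'passed'), ('24', 'skipped'), ('75', 'deselected')]
```

Since the categories vary, `\w+` deliberately matches any category name rather than an explicit alternation.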
<p>Snippet from log file:</p>
<pre><code>10:50:55 ============================= test session starts ==============================
10:50:55 platform linux -- Python 3.8.17, pytest-5.4.3, py-1.11.0, pluggy-0.13.1 -- /usr/local/bin/python
10:50:55 cachedir: .pytest_cache
10:50:55 plugins: lazy-fixture-0.6.3, mock-3.6.1, order-1.1.0, timeout-2.1.0, timeouts-1.2.1
10:50:55 timeout: 7200.0s
10:50:55 timeout method: thread
10:50:55 timeout func_only: False
10:50:55 setup timeout: 0.0s, execution timeout: 0.0s, teardown timeout: 0.0s
10:50:55 collecting ... collected 25 items
10:50:55
10:50:55 tests/e2e/test_0_testbed_create.py::test SKIPPED
10:50:55 =========================== short test summary info ============================
10:50:55 PASSED tests/e2e/test_0_testbed_create.py::test
10:50:55 ======================== 1 passed, 24 skipped in 27.02s ========================
</code></pre>
| <python><regex> | 2023-08-15 08:34:14 | 1 | 1,143 | Mo2 |
76,904,368 | 3,104,974 | pymysql connection: read_timeout has no effect | <p>When creating a connection to a MySQL database via sqlalchemy and <a href="https://pymysql.readthedocs.io/en/latest/modules/connections.html" rel="nofollow noreferrer"><code>pymysql</code></a> connector, I want to set a timeout for queries that take too long:</p>
<pre><code>from sqlalchemy import create_engine
eng = create_engine(
"mysql+pymysql://username:password@host:port/database",
connect_args={'connect_timeout': 1.0, 'read_timeout': 1.0}
)
</code></pre>
<p>With this code I expect any query to raise an exception after 2 seconds at the latest (1 for the connection itself, 1 for the read operation). However, a complex SELECT query can still run for minutes.</p>
<p>What could be the reasons that the <code>read_timeout</code> connection parameter is effectively ignored?</p>
| <python><sqlalchemy><timeout><pymysql> | 2023-08-15 08:18:39 | 0 | 6,315 | ascripter |
76,904,262 | 6,851,715 | Get the value of a column based on min max values of another column of a pandas dataframe in a grouped aggregate function | <p>The pandas dataframe:</p>
<pre><code>data = pd.DataFrame ({
'group': ['A', 'A', 'B', 'B', 'C', 'C'],
'date': ['2023-01-15', '2023-02-20', '2023-01-10', '2023-03-05', '2023-02-01', '2023-04-10'],
'value': [10, 15, 5, 25, 8, 12]} )
</code></pre>
<p>Trying to get the values of the 'value' column based on the min and max values of 'date' column for each 'group' in an aggregate function:</p>
<pre><code>## the following doesn't work
output = (
    data
    .groupby(['group'], as_index=False).agg(
        ## there are some other additional aggregate functions happening here too.
        value_at_min = ('value', lambda x: x.loc[x['date'].idxmin()])
        , value_at_max = ('value', lambda x: x.loc[x['date'].idxmax()])
    ))
</code></pre>
<p>This doesn't work, even with converting date to datetime (in fact, my original date column is in datetime format).</p>
<p>Desired output should be:</p>
<pre><code> group min_date max_date value_at_min value_at_max
0 A 2023-01-15 2023-02-20 10 15
1 B 2023-01-10 2023-03-05 5 25
2 C 2023-02-01 2023-04-10 8 12
</code></pre>
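One possible approach (a sketch, not the only way): sort by date first, so that plain `'first'`/`'last'` aggregations within each group line up with the min and max dates:

```python
import pandas as pd

data = pd.DataFrame({
    'group': ['A', 'A', 'B', 'B', 'C', 'C'],
    'date': ['2023-01-15', '2023-02-20', '2023-01-10', '2023-03-05', '2023-02-01', '2023-04-10'],
    'value': [10, 15, 5, 25, 8, 12]})

data['date'] = pd.to_datetime(data['date'])

# After sorting by date, 'first'/'last' inside each group are the values
# at the earliest/latest date, so no lambda over two columns is needed.
out = (data.sort_values('date')
           .groupby('group', as_index=False)
           .agg(min_date=('date', 'min'),
                max_date=('date', 'max'),
                value_at_min=('value', 'first'),
                value_at_max=('value', 'last')))
print(out)
```

The named-aggregation lambdas in the question fail because each lambda receives a single Series, so `x['date']` inside it has no column to index.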
| <python><pandas><indexing><group-by><aggregate> | 2023-08-15 07:59:26 | 5 | 1,430 | Ankhnesmerira |
76,904,250 | 1,301,871 | How can I address my CORS issue from my hosted website which works on localhost? | <p>I have the following code which is working well from my localhost but, when I try to call the API from my website bowtieteacher.co.uk, I get an error:</p>
<p>Access to XMLHttpRequest at 'https://mathsapplication-mnfu5bhqyq-nw.a.run.app/predictions' from origin 'https://bowtieteacher.co.uk' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
polyfills.bb7759b4fc9d057e.js:1 POST <a href="https://mathsapplication-mnfu5bhqyq-nw.a.run.app/predictions" rel="nofollow noreferrer">https://mathsapplication-mnfu5bhqyq-nw.a.run.app/predictions</a> net::ERR_FAILED</p>
<hr />
<pre><code>from fastapi import FastAPI
from typing import List
from pydantic import BaseModel
from app.model.model import predict_grade
from starlette.middleware import Middleware
from starlette.middleware.cors import CORSMiddleware
class DataIn(BaseModel):
    percentages: List[float]

class PredictionGrade(BaseModel):
    grade: int
    accuracy: float

origins = [
    'http://localhost:4200',
    'https://www.bowtieteacher.co.uk',
    'https://maths-7b24d.web.app',
]

middleware = [
    Middleware(
        CORSMiddleware,
        allow_origins=origins,
        allow_credentials=True,
        allow_methods=['*'],
        allow_headers=['*']
    )
]

app = FastAPI(middleware=middleware)

@app.get("/")
def home():
    return {"health_check": "Health check"}

@app.post("/predictions", response_model=PredictionGrade)
def predict(payload: DataIn):
    result = predict_grade(payload.percentages)
    return {"grade": result[0], "accuracy": result[1]}
</code></pre>
<p>Could anybody please explain why it is working for my localhost and not my website? And if so, perhaps a suggestion as to what I could do for next steps? It works in Postman and I have researched answers but I don't understand what is wrong. I build a Docker image from this code for my machine learning model, which is hosted on Google Cloud, if that helps.
Many thanks in advance</p>
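For what it's worth, one observation from the error message above (a sketch of something to check, not a confirmed fix): the browser sends the exact Origin header, and the error reports origin `https://bowtieteacher.co.uk` without `www`, which is not in the configured list. Listing both variants is one thing to verify:

```python
# CORS middleware compares the Origin header string exactly, so the apex
# domain and the www subdomain are treated as different origins.
origins = [
    'http://localhost:4200',
    'https://bowtieteacher.co.uk',
    'https://www.bowtieteacher.co.uk',
    'https://maths-7b24d.web.app',
]
```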
| <python><angular> | 2023-08-15 07:57:33 | 1 | 621 | Rob W |
76,904,194 | 7,438,048 | Generic Keyword Arguments Type Annotations | <p>I would like to add type annotations for <code>fn1, fn2</code> and <code>kwargs</code>:</p>
<pre class="lang-py prettyprint-override"><code>class Foo:
    def __init__(self, fn1, fn2, **kwargs):
        self.res1 = fn1(**kwargs)
        self.res2 = fn2(**kwargs)
</code></pre>
<p>I want to bind <code>kwargs</code> to the <code>fn1</code> & <code>fn2</code> input arguments to avoid cases like:</p>
<pre class="lang-py prettyprint-override"><code># fails on fn2 as kwargs doesn't have a key y
Foo(fn1=lambda x: x + 1, fn2=lambda x, y: x + y, x=1)
</code></pre>
<p>Is it even possible to annotate it correctly in current Python (3.11 or 3.12)?</p>
<p>For example, if I had only a single parameter to the functions, I'd use <code>typing.TypeVar</code> as follows:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable, TypeVar
T = TypeVar('T')
class Foo:
    def __init__(self, fn1: Callable[[T], T], fn2: Callable[[T], T], x: T):
        self.res1 = fn1(x)
        self.res2 = fn2(x)
# mypy typecheck fail when running mypy file.py
Foo(lambda x, y: x+1, lambda y: y*2, x=5)
# mypy typecheck pass when running mypy file.py
Foo(lambda x: x+x, lambda y: y*2, x='a')
Foo(lambda x: x+x, lambda y: y*2, x=3.14)
</code></pre>
<p><code>mypy file.py</code> output:</p>
<pre class="lang-bash prettyprint-override"><code>file.py:13: error: Cannot infer type of lambda [misc]
file.py:13: error: Argument 1 to "Foo" has incompatible type "Callable[[Any, Any], Any]"; expected "Callable[[int], int]" [arg-type]
Found 2 errors in 1 file (checked 1 source file)
</code></pre>
| <python><mypy><python-typing> | 2023-08-15 07:47:37 | 1 | 3,764 | ShmulikA |
76,903,956 | 3,212,973 | Force find strings just with digits inside and omit others | <p>What I have:</p>
<pre><code>(1 lorem 2 ipsum: dolor. sit-amet:)
(consectetur adipiscing elit)
(JUST UPPER LETTERS)
(and lower letters)
(And Propper letters)
</code></pre>
<p>What I did:</p>
<p><code>\([\d{1,} A-Za-z\.\-\:\;\,]{1,40}\)\b</code></p>
<p>What I need:</p>
<p>A regex to match just the first line (1 lorem 2 ipsum: dolor. sit-amet:) and not the others. The special symbols and digits can appear in any order and in any quantity. I'm using Python.</p>
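A hedged sketch of one candidate pattern: a lookahead that requires at least one digit inside the parentheses, combined with a character class similar to the attempt above (note that `\d{1,}` inside a character class is taken literally, which is likely why the original pattern misbehaves):

```python
import re

text = """(1 lorem 2 ipsum: dolor. sit-amet:)
(consectetur adipiscing elit)
(JUST UPPER LETTERS)
(and lower letters)
(And Propper letters)"""

# (?=[^)]*\d) demands a digit somewhere before the closing parenthesis;
# the class then allows letters, digits, space, and the listed punctuation.
pattern = r"\((?=[^)]*\d)[A-Za-z\d .:;,-]{1,40}\)"
print(re.findall(pattern, text))  # ['(1 lorem 2 ipsum: dolor. sit-amet:)']
```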
| <python><regex> | 2023-08-15 07:02:56 | 1 | 1,726 | Elbert Villarreal |
76,903,859 | 13,903,839 | A quick way to find the first matching submatrix from the matrix | <p>My matrix is simple, like:</p>
<pre><code># python3 numpy
>>> A
array([[0., 0., 1., 1., 1.],
[0., 0., 1., 1., 1.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0.]])
>>> P
array([[0., 0., 0., 0.]])
</code></pre>
<p>I need to find an all-zero region (one is enough) in A with the same size as P (1x4).
So the right answers include:</p>
<pre><code>(2, 0) # The vertex coordinates of the all-zero rectangular regions that P can be matched to
(2, 1)
(3, 0)
(3, 1)
(4, 0)
(4, 1)
# Just get any 1 answer
</code></pre>
<p>Actually my A matrix will reach a size of 30,000*30,000. I'm worried that it will be slow if written as a loop. Is there any quick way?</p>
<p>The size of P is uncertain, ranging from 10*30 to 4000*80. At the same time, the A matrix lacks regularity, and a scan starting from any point may have to traverse the entire matrix before finding a match.</p>
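A sketch of one loop-free option using NumPy's `sliding_window_view` (note the window sums allocate one entry per window position, so for a 30,000x30,000 A a cumulative-sum/integral-image variant may be needed to keep memory in check):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

A = np.array([[0., 0., 1., 1., 1.],
              [0., 0., 1., 1., 1.],
              [0., 0., 0., 0., 0.],
              [0., 0., 0., 0., 0.],
              [0., 0., 0., 0., 0.]])

def first_all_zero(A, shape):
    # View every shape-sized window without copying, then take the first
    # window whose sum is zero (valid here because A is non-negative).
    windows = sliding_window_view(A, shape)
    hits = np.argwhere(windows.sum(axis=(2, 3)) == 0)
    return tuple(hits[0]) if len(hits) else None

print(first_all_zero(A, (1, 4)))  # (2, 0)
```

`sliding_window_view` needs NumPy 1.20+; an early-exit scan in a compiled helper (e.g. numba) would be another direction if only the first match matters.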
| <python><numpy><matrix><comparison> | 2023-08-15 06:42:42 | 3 | 301 | ojipadeson |
76,903,679 | 9,565,958 | How to change python version of google colab | <p>I know there are many similar questions already but it seems the situation of google colab is still changing.</p>
<p>I tried this <a href="https://saturncloud.io/blog/how-to-change-python-version-from-default-35-to-38-of-google-colab/" rel="nofollow noreferrer">method</a>, but it's not working.</p>
<pre><code>!apt-get install python3.8
!python3.8 --version
!ln -sf /usr/bin/python3.8 /usr/bin/python
</code></pre>
<p>I tried this method,</p>
<pre><code>!wget https://www.python.org/ftp/python/3.8.17/Python-3.8.17.tgz
!tar xvfz Python-3.8.17.tgz
!Python-3.8.17/configure
!make
!sudo make install
</code></pre>
<p>When I type <code>!python --version</code> it shows me python 3.8.17,<br />
but when I type</p>
<pre><code>import sys
sys.version
</code></pre>
<p>it still shows me python 3.10.12, which is the current version of colab.<br />
I tried the way that chatGPT told me but it's not working either.</p>
<pre><code>!sudo apt-get update
!sudo apt-get install python3.8
!sudo apt-get install python3.8-distutils
!wget https://bootstrap.pypa.io/get-pip.py
!sudo python3.8 get-pip.py
!sudo update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1
!python --version
</code></pre>
<p>Can someone tell me what I should do?</p>
| <python><google-colaboratory> | 2023-08-15 05:53:31 | 0 | 512 | June Yoon |
76,903,454 | 11,630,148 | Extra field in custom registration in DRF won't save | <p>I created a custom serializer that extends dj_rest_auth's <code>RegisterSerializer</code>, added the extra <code>user_type</code> field, overrode the <code>get_cleaned_data</code> method to include <code>user_type</code> in the cleaned data, then configured my settings to use the custom serializer for registration.</p>
<p>Here's my code:</p>
<pre class="lang-py prettyprint-override"><code>from dj_rest_auth.registration.serializers import RegisterSerializer
from rest_framework import serializers
from django.contrib.auth.models import User

class CustomRegisterSerializer(RegisterSerializer):
    user_type = serializers.ChoiceField(choices=[('seeker', 'Seeker'), ('recruiter', 'Recruiter')])

    class Meta:
        model = User
        fields = "__all__"

    def get_cleaned_data(self):
        data = super().get_cleaned_data()
        data['user_type'] = self.validated_data.get('user_type', '')
        return data
</code></pre>
<pre class="lang-py prettyprint-override"><code>from django.contrib.auth.models import User
from django.db import models

class Profile(models.Model):
    class Type(models.TextChoices):
        seeker = "Seeker"
        recruiter = "Recruiter"

    base_type = Type.seeker
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    user_type = models.CharField(choices=Type.choices, default=base_type, max_length=20)

    class Meta:
        verbose_name = "Profile"
        verbose_name_plural = "Profiles"

    def __str__(self):
        return self.user.username
</code></pre>
<pre class="lang-py prettyprint-override"><code>from dj_rest_auth.registration.views import RegisterView

class CustomRegisterView(RegisterView):
    serializer_class = CustomRegisterSerializer
</code></pre>
| <python><django><django-rest-framework> | 2023-08-15 04:28:58 | 0 | 664 | Vicente Antonio G. Reyes |
76,903,307 | 2,647,447 | How do you update a loaded image from a menu selection using Python tkinter? | <p>Using Python's tkinter, I am trying to write a drop-down menu where I can choose between 3 options. For each option it will display a different image. What I have so far:</p>
<pre class="lang-py prettyprint-override"><code>import tkinter as tk
from PIL import Image, ImageTk
def change_image(event):
    selected_option = var.get()
    if selected_option == "Option 1":
        image_path = "image1.png"
    elif selected_option == "Option 2":
        image_path = "image2.png"
    elif selected_option == "Option 3":
        image_path = "image3.png"

    new_image = Image.open(image_path)
    new_image = new_image.resize((300, 300), Image.ANTIALIAS)
    photo = ImageTk.PhotoImage(new_image)
    img_label.configure(image=photo)
    img_label.image = photo

#Create the main application window
root = tk.Tk()
root.title("Dropdown Menu Example")
#Create a dropdown menu
options = ["Option 1", "Option 2", "Option 3"]
var = tk.StringVar(root)
var.set(options[0])
# Set the default
optiondropdown = tk.OptionMenu(root, var, *options, command=change_image)
optiondropdown.pack(pady=10)
#Load and display the initial image
initial_image = Image.open("image1.png")
initial_image = initial_image.resize((300, 300), Image.ANTIALIAS)
initial_photo = ImageTk.PhotoImage(initial_image)
img_label = tk.Label(root, image=initial_photo)
img_label.pack()
root.mainloop()
</code></pre>
<p>The result is: it will display image1.png, but choosing Option 2 or Option 3 from the drop-down does not update the image to either image2.png or image3.png.</p>
<p>the output is:</p>
<p><a href="https://i.sstatic.net/70EeP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/70EeP.png" alt="enter image description here" /></a></p>
<p>I have 3 image files in my directory. When I click Option 2 in the drop-down menu, image2.png should be displayed; likewise, if I choose Option 3, image3.png should be displayed.</p>
| <python><tkinter> | 2023-08-15 03:41:37 | 0 | 449 | PChao |
76,903,305 | 21,575,627 | How is the range function defined? | <p>Typing <code>help(range)</code> gives:</p>
<pre><code>class range(object)
| range(stop) -> range object
| range(start, stop[, step]) -> range object
</code></pre>
<p>Basically, I'd like to define a similar function, where the required parameter (<code>stop</code>) comes after the optional one (<code>start</code>) unless you specify multiple arguments (as if overloaded). How can this be done?</p>
<p>For example, how when you call:</p>
<pre><code># first (and only) param is stop
range(5)
# first param is start, second is stop
range(1, 5)
</code></pre>
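Python-level functions can't truly overload the way the C-implemented <code>range</code> type does, but a common sketch is to shift the arguments based on how many were passed:

```python
# A sketch, not how CPython actually implements range (which is a C type):
# with one argument, treat it as stop; with two or three, as start/stop/step.
def my_range(start, stop=None, step=1):
    if stop is None:           # called as my_range(stop)
        start, stop = 0, start
    return list(range(start, stop, step))

print(my_range(5))      # [0, 1, 2, 3, 4]
print(my_range(1, 5))   # [1, 2, 3, 4]
```

The parameter named <code>start</code> therefore holds the stop value in the single-argument call, which matches how <code>help(range)</code> documents its two signatures.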
| <python><python-3.x> | 2023-08-15 03:40:28 | 1 | 1,279 | user129393192 |
76,903,292 | 65,889 | Using Flet in Pythonista on iOS | <p>I'm playing around with <a href="https://flet.dev/" rel="nofollow noreferrer">Flet</a> on macOS. It's pretty cool when you want to develop a UI in Python -- on macOS. 😄</p>
<p>Is there a way to use Flet code in <a href="http://omz-software.com/pythonista/" rel="nofollow noreferrer">Pythonista</a> on iOS?</p>
| <python><pythonista><flet> | 2023-08-15 03:34:45 | 1 | 10,804 | halloleo |
76,903,263 | 2,195,440 | How do I download a package from PyPI and install it with all its dependencies source code in an automated fashion? | <p>For a project, I need to automatically download the top 50 Python packages from PyPI. After that, I have to set up a new Python environment using venv and install these packages.</p>
<p>Here are the steps I plan to follow:</p>
<ul>
<li>Create a new Python virtual environment using conda.</li>
<li>Use pip to install the specific package: <code>pip install <new-package></code>.</li>
<li>In addition to installing the package, I also need to download the package and all its dependencies, ensuring the source code is available, including the source code of the third-party dependencies.</li>
</ul>
<p>Has anyone attempted something similar or have any insights to share?</p>
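A sketch of one possible workflow (assuming a reasonably recent pip): <code>pip download</code> with <code>--no-binary :all:</code> fetches a package plus all of its dependencies as source distributions (sdists) into a directory, from which an offline install is then possible:

```python
import subprocess
import sys

# Hypothetical helper: wraps "pip download <pkg> --no-binary :all: -d <dest>",
# which saves the package and every dependency as an sdist into dest.
def download_sources(package, dest="./sources"):
    subprocess.run(
        [sys.executable, "-m", "pip", "download", package,
         "--no-binary", ":all:", "-d", dest],
        check=True,
    )

# Afterwards, "pip install --no-index --find-links ./sources <pkg>" installs
# from the downloaded sdists only, without touching PyPI again.
```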
| <python> | 2023-08-15 03:25:16 | 0 | 3,657 | Exploring |
76,903,248 | 3,247,006 | Pytest cannot test multiple cases with @pytest.mark.parametrize | <p>I defined <code>UserFactory</code> class in <code>tests/factories.py</code> as shown below following <a href="https://docs.pytest.org/en/7.3.x/how-to/parametrize.html#pytest-mark-parametrize-parametrizing-test-functions" rel="nofollow noreferrer">the doc</a>. *I use <a href="https://github.com/pytest-dev/pytest-django" rel="nofollow noreferrer">pytest-django</a> and <a href="https://github.com/pytest-dev/pytest-factoryboy" rel="nofollow noreferrer">pytest-factoryboy</a> in Django:</p>
<pre class="lang-py prettyprint-override"><code># "tests/factories.py"
import factory
from django.contrib.auth.models import User
class UserFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = User
</code></pre>
<p>And, I defined <code>test_user_instance()</code> with <a href="https://docs.pytest.org/en/7.3.x/how-to/parametrize.html#pytest-mark-parametrize-parametrizing-test-functions" rel="nofollow noreferrer">@pytest.mark.parametrize()</a> to test 4 users as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "tests/test_ex1.py"
import pytest
from django.contrib.auth.models import User
@pytest.mark.parametrize(
    "username, email",
    {
        ("John", "test@test.com"), # Here
        ("John", "test@test.com"), # Here
        ("John", "test@test.com"), # Here
        ("John", "test@test.com")  # Here
    }
)
def test_user_instance(
    db, user_factory, username, email
):
    user_factory(
        username=username,
        email=email
    )
    item = User.objects.all().count()
    assert item == True
</code></pre>
<p>But, only one user was tested as shown below:</p>
<pre class="lang-none prettyprint-override"><code>$ pytest -q
. [100%]
1 passed in 0.65s
</code></pre>
<p>So, how can I get all 4 cases to run?</p>
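A side observation that may explain the single test: the braces in the parametrize call build a set literal, and identical tuples collapse to one element before pytest ever sees them, whereas a list literal preserves every case:

```python
# Four identical (username, email) tuples in a set dedupe to one;
# the same tuples in a list stay as four separate parameter sets.
as_set = {
    ("John", "test@test.com"),
    ("John", "test@test.com"),
}
as_list = [
    ("John", "test@test.com"),
    ("John", "test@test.com"),
]
print(len(as_set), len(as_list))  # 1 2
```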
| <python><django><pytest-django><parametrized-testing><pytest-factoryboy> | 2023-08-15 03:20:26 | 1 | 42,516 | Super Kai - Kazuya Ito |
76,903,137 | 14,291,703 | How to convert nested json to pandas dataframe? | <p>I have the following JSON structure,</p>
<pre><code>{
'total_numbers': 2,
'data':[
{
'col3':'2',
'col4':[
{
'col5':'P',
'col6':'H'
},
{
'col5':'P1',
'col6':'H1'
},
],
'col7':'2023-06-19T09:29:28.786Z',
'col9':{
'col10':'TEST',
'col11':'test@email.com',
'col12':'True',
'col13':'999',
'col14':'9999'
},
'col15':'2023-07-10T04:46:43.003Z',
'col16':False,
'col17':[
{
'col18':'S',
'col19':'H'
}
],
'col20':True,
'col21':{
'col22':'sss',
'col23':'0.0.0.0',
'col24':'lll'
},
'col25':0,
'col26':{
'col27':{
'col28':'Other'
},
'col29':'Other',
'col30':'cccc'
},
'col31':{
'col32':[
{
'col33':'123456789',
'col34':'2023-07-14T02:52:20.166Z',
'col36':True,
'col38':{
'col40':[
{
'col41':'99999999999',
},
{
'col41':'34534543535',
}
]
},
'col55':'878787878'
},
{
'col47':'112233445566',
'col48':'2023-07-24T09:26:03.425Z',
'col50':True,
'col52':{
'col53':[
{
'col54':'99999999999',
}
]
},
'col55':'878787878'
}
]
}
},
{
'col3':'3',
'col4':[
{
'col5':'P',
'col6':'H'
}
],
'col7':'2023-06-19T09:29:28.786Z',
'col9':{
'col10':'TEST',
'col11':'test@email.com',
'col12':'True',
'col13':'999',
'col14':'9999'
},
'col15':'2023-07-10T04:46:43.003Z',
'col16':False,
'col17':[
{
'col18':'S',
'col19':'H'
}
],
'col20':True,
'col21':{
'col22':'sss',
'col23':'0.0.0.0',
'col24':'lll'
},
'col25':0,
'col26':{
'col27':{
'col28':'Other'
},
'col29':'Other',
'col30':'cccc'
},
'col31':{
'col32':[
{
'col33':'123456789',
'col34':'2023-07-14T02:52:20.166Z',
'col36':True,
'col38':{
'col40':[
{
'col41':'99999999999',
},
{
'col41':'34534543535',
}
]
},
'col55':'878787878'
},
{
'col47':'112233445566',
'col48':'2023-07-24T09:26:03.425Z',
'col50':True,
'col52':{
'col53':[
{
'col54':'99999999999',
}
]
},
'col55':'878787878'
}
]
}
}
]
}
</code></pre>
<p>I want to convert this to a pandas dataframe. Through the resulting dataframe I want to be able to uniquely identify each row using col3.</p>
<p>I have used a few approaches before.</p>
<ol>
<li><a href="https://stackoverflow.com/a/72549493/14291703">https://stackoverflow.com/a/72549493/14291703</a></li>
</ol>
<p>Problem: The names of the columns are not correct.</p>
<ol start="2">
<li><a href="https://stackoverflow.com/a/76898193/14291703">https://stackoverflow.com/a/76898193/14291703</a></li>
</ol>
<p>Problem: Cannot uniquely identify each row with col3</p>
<p>What I expect is the following,</p>
<p>Each column name should represent its level in the JSON:<br>
data_col3, data_col4_col5, data_col4_col6...</p>
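A sketch of how <code>pd.json_normalize</code> can produce such level-based names for one nested list (shown on a reduced version of the payload; the deeper lists such as col31/col32 would need additional normalize or merge passes):

```python
import pandas as pd

# Reduced payload for illustration (an assumption, not the full structure).
payload = {
    'total_numbers': 2,
    'data': [
        {'col3': '2', 'col4': [{'col5': 'P', 'col6': 'H'}, {'col5': 'P1', 'col6': 'H1'}]},
        {'col3': '3', 'col4': [{'col5': 'P', 'col6': 'H'}]},
    ],
}

# record_path explodes the col4 list into rows, meta carries col3 onto every
# row (keeping rows identifiable), and record_prefix builds the level names.
df = pd.json_normalize(payload['data'], record_path='col4', meta='col3',
                       record_prefix='data_col4_', sep='_')
print(df)
```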
| <python><json><pandas><dataframe> | 2023-08-15 02:37:01 | 0 | 512 | royalewithcheese |
76,903,114 | 11,630,148 | Profile not creating when creating a User with Django | <p>I'm using DRF for my project and I need help in automatically creating a Profile instance when a User is created. My signals module is:
<pre class="lang-py prettyprint-override"><code>from django.db.models.signals import post_save
from django.conf import settings
from django.contrib.auth import get_user_model
from .models import Profile
User = get_user_model()
def create_user_profile(sender, instance, created, **kwargs):
    if created:
        Profile.objects.create(user=instance)

post_save.connect(create_user_profile, sender=User)
</code></pre>
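One common cause (an assumption, since the rest of the project isn't shown): the signals module is never imported, so <code>post_save.connect</code> never runs. The usual wiring is in <code>AppConfig.ready()</code>; <code>myapp</code> below is a hypothetical app name to adjust:

```python
# apps.py: signals only connect if the module is imported, and ready()
# is the standard place to trigger that import at startup.
from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = 'myapp'

    def ready(self):
        from . import signals  # noqa: F401
```

The app must also reference this config (via `default_app_config` on older Django versions, or by listing it in `INSTALLED_APPS`).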
| <python><django><django-rest-framework> | 2023-08-15 02:25:26 | 1 | 664 | Vicente Antonio G. Reyes |
76,902,966 | 17,194,418 | python dash persistence and callback issue | <p>I'm trying to link a <code>RangeSlider</code> with 2 inputs in a Dash app. I would also like to have persistence in those components. When I add persistence the components work well, but the values are not saved. Here's a minimal example:</p>
<pre><code>from dash import (Dash, html, dcc, Input, callback, ctx,
Output,State,no_update,callback_context)
import dash_bootstrap_components as dbc
import numpy as np
app = Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP])
vals = np.arange(35.7, 42.2, 0.1)

slider = dcc.RangeSlider(min=0, max=100,
                         marks={vals[3*i]: f"{i:.2f}" for i in range(5)},
                         value=[36, 40], id='slider',
                         persistence='Session')
inp1 = dbc.Input(id='inp1', value=36, persistence='Session')
inp2 = dbc.Input(id='inp2', value=40, persistence='Session')

app.layout = dbc.Container([
    html.Div("", id='text'),
    dbc.Row([
        dbc.Col([slider, inp1, inp2]),
    ])
])

@callback(Output('inp1', 'value'),
          Output('inp2', 'value'),
          Output('slider', 'value'),
          [Input('inp1', 'value'),
           Input('inp2', 'value'),
           Input('slider', 'value'),
          ],
          prevent_initial_call=True
          )
def update_vals(in1_val, in2_val, sl_val):
    trigger = ctx.triggered_id
    if trigger == 'slider':
        datadic = {'inp1': sl_val[0], 'inp2': sl_val[1], 'slider': sl_val}
        return sl_val[0], sl_val[1], sl_val
    return no_update, no_update, no_update

if __name__ == '__main__':
    app.run_server(port=3030, debug=True)
</code></pre>
<p>I can't understand why the callback and the persistence are colliding, and I'm wondering if there's a solution for that.</p>
<p>Thanks!</p>
<h2>Edit:</h2>
<hr />
<p>the version of libraries used:</p>
<p>-Python 3.10.11</p>
<p>-Dash 2.11.1</p>
<p>-Numpy 1.25.0 (note that numpy is only used to create an array that is otherwise unused)</p>
<p>-Dash_bootstrap_components 1.4.1</p>
| <python><plotly-dash> | 2023-08-15 01:29:32 | 1 | 1,735 | Ulises Bussi |
76,902,873 | 1,797,307 | Github runner not installing requirements.txt file in Lambda function | <p>The GitHub runner shows that the jobs complete correctly; however, when I invoke the Lambda I get an error saying that fastapi is not installed, even though it is in the requirements.txt file.</p>
<p>here is the yaml file I am using</p>
<pre><code> name: API CI/CD
on:
  # Trigger the workflow on push
  push:
    branches:
      # Push events on main branch
      - main

# The Job defines a series of steps that execute on the same runner.
jobs:
  CI:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: Install dependencies
        run: pip3 install -r ./app/requirements.txt
      - name: Create application ZIP archive
        run: cd ./app && zip -r app.zip .
      - name: Upload app.zip artifact
        uses: actions/upload-artifact@v2
        with:
          name: app
          path: ./app/app.zip
      - name: Upload swagger.json artifact
        uses: actions/upload-artifact@v2
        with:
          name: swagger
          path: ./app/swagger.json

  CD:
    runs-on: ubuntu-latest
    needs: [ CI ]
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    steps:
      - name: Install AWS CLI
        uses: unfor19/install-aws-cli-action@v1
        with:
          version: 1
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}
      - name: Download Lambda app.zip
        uses: actions/download-artifact@v2
        with:
          name: app
      - name: Download swagger.json artifact
        uses: actions/download-artifact@v2
        with:
          name: swagger
      - name: Upload app.zip to S3
        run: aws s3 cp app.zip s3://telidexapigithubdeploy/app.zip
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}
      - name: Upload swagger.json to S3
        run: aws s3 cp swagger.json s3://telidexapiswaggerjson/swagger.json
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}
      - name: Deploy new Lambda
        run: aws lambda update-function-code --function-name telidexapi --s3-bucket telidexapigithubdeploy --s3-key app.zip
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ secrets.AWS_DEFAULT_REGION }}
</code></pre>
<p>here is the aws errors:</p>
<pre><code> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
| timestamp |
message
|
|---------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1692052249122 | INIT_START Runtime Version: python:3.9.v26 Runtime Version ARN: arn:aws:lambda:us-east-1::runtime:130681a0855afedf31b2b3fbcc2fbf1ca62875e0500edb56fd16cad65045b05b |
| 1692052249226 | START RequestId: 9a247ffa-5364-4eb5-9596-e1015c45bd84 Version: $LATEST
|
| 1692052249226 | [ERROR] Runtime.ImportModuleError: Unable to import module 'main': No module named 'fastapi' Traceback (most recent call last):
|
| 1692052249228 | END RequestId: 9a247ffa-5364-4eb5-9596-e1015c45bd84
|
| 1692052249228 | REPORT RequestId: 9a247ffa-5364-4eb5-9596-e1015c45bd84 Duration: 2.36 ms Billed Duration: 3 ms Memory Size: 128 MB Max Memory Used: 36 MB Init Duration: 103.49 ms |
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
</code></pre>
<p>It's got to be in the deploy step of the CD job, but I am not sure what I am doing wrong. I've tried a bunch of different tutorials with the same result.</p>
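One thing worth checking (a sketch, not a confirmed diagnosis): the workflow installs the requirements onto the runner only, while Lambda sees just the contents of app.zip, so the packages would need to be installed into the app directory before zipping, e.g. with pip's --target flag:

```shell
# Hypothetical replacement for the "Install dependencies" / zip steps:
# -t installs the packages into ./app, so they ship inside app.zip.
pip3 install -r ./app/requirements.txt -t ./app
cd ./app && zip -r app.zip .
```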
| <python><aws-lambda><github-actions><fastapi><requirements.txt> | 2023-08-15 00:38:47 | 2 | 735 | Kyle Sponable |
76,902,808 | 8,068,825 | Pandas - Set selected rows' columns to other DataFrame's rows' column values | <p>I have the code below. <code>unique_allocations</code> is a DataFrame of 4 rows with different numbers, let's say [1,2,3,4]. Then, as you can see, for every row of <code>filtered_processed_full</code> I duplicate the row 4 times and then try to set <code>allocation_columns</code> to <code>unique_allocations</code>. For example, if curr_df =
[[taco, allocation], [0, 1], [0, 1], [0, 1], [0, 1]] (pretend the first row holds column names and subsequent rows are values), I'd like to transform that into curr_df =
[[taco, allocation], [0, 1], [0, 2], [0, 3], [0, 4]], where in this example <code>allocation_columns</code> is <code>allocation</code>. How do I set it this way? Currently the print() just shows <code>curr_df</code> with the columns I wanted unchanged.</p>
<pre><code>import pandas as pd
for idx, row in filtered_processed_full.iterrows():
curr_df = pd.DataFrame()
for i in range (4):
curr_df = curr_df.append(row)
curr_df[allocation_columns] = unique_allocations
with pd.option_context('display.max_rows', None,
'display.max_columns', None,
'display.precision', 3,
):
print(curr_df[allocation_columns])
</code></pre>
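<p>A minimal sketch of one way to do this, with stand-in data since the original frames aren't shown: repeat the row with <code>pd.concat</code> (<code>DataFrame.append</code> is deprecated) and assign through <code>.values</code> so pandas doesn't align on the index and silently leave the column unchanged:</p>

```python
import pandas as pd

# Stand-ins for the question's data (assumptions, not the real frames):
unique_allocations = pd.DataFrame({"allocation": [1, 2, 3, 4]})
allocation_columns = ["allocation"]
row = pd.Series({"taco": 0, "allocation": 1})

# Repeat the row once per allocation; ignore_index gives positions 0..3.
curr_df = pd.concat([row.to_frame().T] * len(unique_allocations),
                    ignore_index=True)

# Assign via .values so index alignment can't leave the values unchanged.
curr_df[allocation_columns] = unique_allocations[allocation_columns].values
print(curr_df)
```

<p>Index alignment is the usual reason an assignment like this appears to do nothing: when the two frames' indexes don't match, pandas fills with NaN or keeps old values instead of assigning positionally.</p>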
| <python><pandas><dataframe> | 2023-08-15 00:13:24 | 2 | 733 | Gooby |
76,902,805 | 16,912,844 | Run With Timeout Function With Capture Output (with proper threading and queue) | <p>I am trying to optimize a function <code>run_with_timeout</code> that takes a command (such as <code>ping google.com</code>) with a timeout (<code>10</code> seconds) and captures as well as displays the console output of the process. Under normal circumstances this works fine, but according to a talk on how to properly do concurrency by one of the core developers of Python (<a href="https://youtu.be/9zinZmE3Ogk?t=3332" rel="nofollow noreferrer">https://youtu.be/9zinZmE3Ogk?t=3332</a>), there should be a dedicated thread and queue for this. So the <code>print_manager</code> function is exclusively for printing, and <code>queue_line</code> is the worker thread. (Continued below)</p>
<p><strong>Code:</strong></p>
<pre class="lang-py prettyprint-override"><code>def queue_line(data, print_queue):
for line in iter(data.readline, ''):
print_queue.put(line)
data.close()
def print_manager(print_queue):
while True:
line = print_queue.get()
print(line, end='')
# Inform `print_queue` that the job is done
print_queue.task_done()
def run_with_timeout_v2(command, timeout):
# Create the process
process = subprocess.Popen(
command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
universal_newlines=True,
text=True,
)
process.timed_out = False
process.output_buffer = ''
print_queue = Queue()
print_thread = threading.Thread(target=print_manager, args=(print_queue,))
print_thread.daemon = True
print_thread.start()
del print_thread
worker_thread = threading.Thread(target=queue_line, args=(process.stdout, print_queue))
worker_thread.start()
try:
print_queue.put('In Try')
process.wait(timeout=timeout)
except subprocess.TimeoutExpired:
print_queue.put('In Except')
process.timed_out = True
process.kill()
finally:
print_queue.put('In Finally')
worker_thread.join()
print_queue.join()
</code></pre>
<p><strong>Expected Output</strong></p>
<pre><code>In Try
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=117 time=9.70 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=117 time=11.0 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=117 time=9.46 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=117 time=12.7 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=117 time=10.6 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=117 time=10.1 ms
In Except
In Finally
</code></pre>
<p>This mostly works fine until you try to simulate a high-workload environment with fuzzing, which amplifies some of the race conditions. What's a proper way to correct this?</p>
<p><strong>Example Fuzz Code:</strong></p>
<pre class="lang-py prettyprint-override"><code>def queue_line(data, print_queue):
for line in iter(data.readline, ''):
utils.fuzz(is_fuzz=True)
print_queue.put(line)
utils.fuzz(is_fuzz=True)
data.close()
def print_manager(print_queue):
while True:
line = print_queue.get()
utils.fuzz(is_fuzz=True)
print(line)
utils.fuzz(is_fuzz=True)
# Inform `print_queue` that the job is done
print_queue.task_done()
...
</code></pre>
<p><strong>Fuzz Output:</strong></p>
<pre><code>PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
In Try
64 bytes from 8.8.8.8: icmp_seq=1 ttl=117 time=9.65 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=117 time=9.53 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=117 time=15.4 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=117 time=9.97 ms
64 bytes from 8.8.8.8: icmp_seq=5 ttl=117 time=11.5 ms
64 bytes from 8.8.8.8: icmp_seq=6 ttl=117 time=12.1 ms
64 bytes from 8.8.8.8: icmp_seq=7 ttl=117 time=10.7 ms
64 bytes from 8.8.8.8: icmp_seq=8 ttl=117 time=11.3 ms
64 bytes from 8.8.8.8: icmp_seq=9 ttl=117 time=14.3 ms
64 bytes from 8.8.8.8: icmp_seq=10 ttl=117 time=10.1 ms
In Except
64 bytes from 8.8.8.8: icmp_seq=11 ttl=117 time=10.5 ms
In Finally
64 bytes from 8.8.8.8: icmp_seq=12 ttl=117 time=12.1 ms
</code></pre>
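<p>One common pattern that removes this class of race (a hedged sketch, not a drop-in patch for the code above): push a sentinel onto the queue after the producer is done and <code>join()</code> the consumer thread instead of making it a daemon, so shutdown is deterministic rather than racing interpreter exit:</p>

```python
import threading
from queue import Queue

SENTINEL = object()
seen = []  # recorded only so the behaviour is easy to verify

def print_manager(q):
    while True:
        line = q.get()
        if line is SENTINEL:       # explicit shutdown signal
            q.task_done()
            return
        seen.append(line)
        print(line, end="")
        q.task_done()

q = Queue()
consumer = threading.Thread(target=print_manager, args=(q,))
consumer.start()

for line in ("line 1\n", "line 2\n"):
    q.put(line)

q.put(SENTINEL)   # tell the consumer to stop once the queue drains
consumer.join()   # deterministic: nothing prints after this point
```

<p>Because the queue is FIFO, the sentinel is only seen after every previously queued line has been handled, which is what the daemon-thread version cannot guarantee.</p>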
| <python><concurrency> | 2023-08-15 00:11:50 | 0 | 317 | YTKme |
76,902,780 | 1,229,736 | defaultdict with a lambda that uses the key as a parameter | <p>I was trying to create a defaultdict where the key is used as a parameter to construct a class instance, but I get a TypeError when I try. I've put a minimal example below.</p>
<pre><code>from collections import defaultdict
class Intersection:
def __init__(self,xy):
self.position = xy
self.stuff = []
def add(self,stuff):
self.stuff.append(stuff)
intersections: defaultdict[tuple[int,int],Intersection] = defaultdict(lambda xy: Intersection(xy))
intersections[(1,2)].add('stuff')
>>> TypeError: <lambda>() missing 1 required positional argument: 'xy'
</code></pre>
<p>I googled the error and it came up with <a href="https://stackoverflow.com/questions/57720100/defaultdict-with-a-default-value-that-is-a-lambda-that-takes-a-parameter-produce">this post</a>, which is close to what I'm trying to do, but just different enough that it doesn't answer how to do this.</p>
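<p>The root cause is that <code>defaultdict</code> calls its factory with no arguments. One workaround (a sketch): a small <code>dict</code> subclass whose <code>__missing__</code> receives the missing key and passes it to the factory:</p>

```python
class KeyedDefaultDict(dict):
    """defaultdict variant whose factory receives the missing key."""
    def __init__(self, factory):
        super().__init__()
        self.factory = factory

    def __missing__(self, key):
        # Called by dict's [] lookup when the key is absent.
        value = self[key] = self.factory(key)
        return value

class Intersection:
    def __init__(self, xy):
        self.position = xy
        self.stuff = []
    def add(self, stuff):
        self.stuff.append(stuff)

intersections = KeyedDefaultDict(Intersection)
intersections[(1, 2)].add('stuff')
print(intersections[(1, 2)].position)  # (1, 2)
```

<p>Passing <code>Intersection</code> directly as the factory also avoids the redundant <code>lambda xy: Intersection(xy)</code> wrapper.</p>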
| <python><dictionary><lambda><parameters> | 2023-08-15 00:04:24 | 1 | 829 | Chris Rudd |
76,902,670 | 19,157,137 | How can I retrieve a list of available versions for a specific library compatible with a certain Python version? | <p>I'm currently working on a project in Python and need to ensure compatibility between a specific library and a particular Python version. To achieve this, I want to create a Python script that retrieves and displays a list of all available versions of a library that can be installed for a given Python version.</p>
<p>For instance, let's say my project is using Python 3.8, and I want to check the available versions of the <code>numpy</code> library that are compatible with this Python version. How can I write a script that leverages the Python Package Index (PyPI) API or any relevant method to fetch and showcase a list of <code>numpy</code> versions that I can consider for installation using <code>pip</code>?</p>
<p>I believe having such a script will greatly aid in managing version compatibility and making informed decisions about library installations. If anyone could provide me with a sample script that accomplishes this task that would be great.</p>
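<p>A hedged sketch of the approach: the PyPI JSON API (<code>https://pypi.org/pypi/&lt;package&gt;/json</code>) lists every release together with a <code>requires_python</code> specifier that can be filtered. The snippet below runs against a hard-coded stand-in response (so it works offline) and only understands simple <code>&gt;=X.Y</code> specifiers; a real script would fetch the URL and use <code>packaging.specifiers.SpecifierSet</code> for full specifier support:</p>

```python
# Real fetch would be:
#   import json; from urllib.request import urlopen
#   data = json.load(urlopen("https://pypi.org/pypi/numpy/json"))

# Offline stand-in mimicking the (assumed) shape of the PyPI response:
data = {"releases": {
    "1.21.0": [{"requires_python": ">=3.7"}],
    "1.24.4": [{"requires_python": ">=3.8"}],
    "1.26.0": [{"requires_python": ">=3.9"}],
}}

def compatible_versions(data, python_version=(3, 8)):
    """Naive filter: handles only '>=X.Y' specifiers.

    Use packaging.specifiers.SpecifierSet in production code.
    """
    ok = []
    for version, files in data["releases"].items():
        spec = files[0].get("requires_python") if files else None
        if not spec:
            ok.append(version)        # no constraint declared
        elif spec.startswith(">="):
            needed = tuple(int(p) for p in spec[2:].split("."))
            if python_version >= needed:
                ok.append(version)
    return ok

print(compatible_versions(data))
```

<p>With recent pip you can also simply run <code>pip index versions numpy</code> from the target interpreter, which reports the versions installable for that interpreter.</p>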
| <python><installation><pip><package><version> | 2023-08-14 23:22:32 | 1 | 363 | Bosser445 |
76,902,665 | 9,536,233 | How to return all google image results for specific query with get requests library? | <p>I have the following search query in Google:
<code>https://www.google.com/search?q=bonpland&tbm=isch&hl=en-US&tbs=qdr:w</code></p>
<p>This search returns all images found in the last week for the search term <code>bonpland</code>. Now I want all of this HTML, or the image links and image redirections, returned to my Python console using the requests library's get method. If I run this URL in my browser, it initially shows some ~108 images. If I click one of the images, more load, and if I scroll down, more and more are loaded until ~450 images appear and a <code>Show more results</code> button is shown. Once clicked, another ~480 images load, so let's say roughly a thousand images are found with this query.</p>
<p>However, when I run the get command in Python as shown below, only 49 original images are returned:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
headers = {'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8',
'Connection': 'keep-alive',
'DNT': '1',
'Accept-Language': 'en-US,en;q=0.5',
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36 OPR/55.0.2994.37',
'Upgrade-Insecure-Requests': '1',
'Sec-Fetch-Dest': 'document',
'Sec-Fetch-Mode': 'navigate',
'Sec-Fetch-Site': 'none',
'Sec-Fetch-User': '?1',}
response = requests.get(
'https://www.google.com/search?q=bonpland&tbm=isch&hl=en-US&tbs=qdr:w',
headers=headers,
)
soup = BeautifulSoup(response.text, 'html.parser')
soup
</code></pre>
<p>Is there any way we can modify the URL to return all links, or modify the code such that we can retrieve all results with this library? I have tried to modify the URL in several ways without success.</p>
<p>I tried to scroll down and watch what happens in the network tab: it seems to be a POST request which returns JSON, and I can recreate this in Python, but I seem unable to decode this JSON response, nor able to come up with the logic to generate these requests myself:</p>
<pre><code>import requests
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36 OPR/55.0.2994.37',
'Accept': '*/*',
'Accept-Language': 'en-US,en;q=0.5',
'Referer': 'https://www.google.com/',
'X-Same-Domain': '1',
'x-goog-ext-190139975-jspb': '["NL","ZZ","KgKka8TAAAFqmWCfx71ZfQ=="]',
'Content-Type': 'application/x-www-form-urlencoded;charset=utf-8',
'Origin': 'https://www.google.com',
'DNT': '1',
'Connection': 'keep-alive',
'Sec-Fetch-Dest': 'empty',
'Sec-Fetch-Mode': 'cors',
'Sec-Fetch-Site': 'same-origin'
}
params = {
'rpcids': 'HoAMBc',
'source-path': '/search',
'f.sid': '2747221314367002709',
'bl': 'boq_visualfrontendserver_20230813.07_p1',
'hl': 'en-US',
'authuser': '0',
'soc-app': '162',
'soc-platform': '1',
'soc-device': '1',
'_reqid': '303769',
'rt': 'c',
}
data = 'f.req=%5B%5B%5B%22HoAMBc%22%2C%22%5Bnull%2Cnull%2C%5B3%2Cnull%2C4294967246%2C1%2C3766%2C%5B%5B%5C%22CYxr5OmPOtywOM%5C%22%2C259%2C194%2C536870912%5D%2C%5B%5C%222BSR5sBuDzSHqM%5C%22%2C306%2C165%2C0%5D%2C%5B%5C%22ZA_122FexBY1nM%5C%22%2C268%2C188%2C34340864%5D%2C%5B%5C%22-3b9ovO7KQ_dYM%5C%22%2C275%2C183%2C10485760%5D%2C%5B%5C%220iXGzZD6KO-t_M%5C%22%2C275%2C183%2C444596224%5D%2C%5B%5C%22hO_2vHlaM5M5mM%5C%22%2C277%2C182%2C0%5D%2C%5B%5C%22eLrlE2L34a8f8M%5C%22%2C323%2C156%2C0%5D%2C%5B%5C%22Ahf1fxWknMx_AM%5C%22%2C259%2C195%2C956301312%5D%2C%5B%5C%22Nv1VenvVudaghM%5C%22%2C261%2C193%2C134217728%5D%2C%5B%5C%22Q9QbJWUxHV4hnM%5C%22%2C171%2C295%2C179568640%5D%2C%5B%5C%22FOerVX6mz_YP4M%5C%22%2C225%2C225%2C-2147483648%5D%2C%5B%5C%22kCWjgzlqhj6N8M%5C%22%2C225%2C225%2C-1257766912%5D%5D%2C%5B%5D%2C%5B%5D%2Cnull%2Cnull%2Cnull%2C0%5D%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2C%5B%5C%22bonpland%5C%22%2C%5C%22en-US%5C%22%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2C%5C%22qdr%3Aw%5C%22%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2C%5B%5D%5D%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2Cnull%2C%5Bnull%2C%5C%22CAM%3D%5C%22%2C%5C%22GKwCIAA%3D%5C%22%5D%5D%22%2Cnull%2C%22generic%22%5D%5D%5D&at=AAuQa1qdstatNh2yQw-sJIcvETC_%3A1692054165315&'
response = requests.post(
'https://www.google.com/_/VisualFrontendUi/data/batchexecute',
params=params,
headers=headers,
data=data,
)
response.content
</code></pre>
<p>Returns:</p>
<pre><code>b')]}\'\n\n128460\n[["wrb.fr","HoAMBc","[null,[],null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,[],null,null,null,null,false,null,null,null,null,null,null,null,null,null,null,null,null,null,[null,[[\\"/search?q\\\\u003dbonpland\\\\u0026source\\\\u003dlmns\\",null,null,\\"All\\",false,null,null,null,null,\\"WEB\\",[0],null,null,0],[\\"/search?q\\\\u003dbonpland\\\\u0026source\\\\u003dlmns\\\\u0026tbm\\\\u003disch\\",null,null,\\"Images\\",true,null,null,null,null,\\"IMAGES\\",[6],null,null,6]],[[\\"//maps.google.com/maps?q\\\\u003dbonpland\\\\u0026source\\\\u003dlmns\\\\u0026entry\\\\u003dmt\\",null,null,\\"Maps\\",false,null,null,null,null,\\"MAPS\\",[8],null,null,8],[\\"/search?q\\\\u003dbonpland\\\\u0026source\\\\u003dlmns\\\\u0026tbm\\\\u003dvid\\",null,null,\\"Videos\\",false,null,null,null,null,\\"VIDEOS\\",[13],null,null,13],[\\"/search?q\\\\u003dbonpland\\\\u0026source\\\\u003dlmns\\\\u0026tbm\\\\u003dnws\\",null,null,\\"News\\",false,null,null,null,null,\\"NEWS\\",[10],null,null,10],[\\"/search?q\\\\u003dbonpland\\\\u0026source\\\\u003dlmns\\\\u0026tbm\\\\u003dbks\\",null,null,\\"Books\\",false,null,null,null,null,\\"BOOKS\\",[2],null,null,2],[\\"/travel/flights?q\\\\u003dbonpland\\\\u0026source\\\\u003dlmns\\\\u0026tbm\\\\u003dflm\\",null,null,\\"Flights\\",false,null,null,null,null,\\"FLIGHTS\\",[20],null,null,20],[\\"/search?q\\\\u003dbonpland\\\\u0026source\\\\u003dlmns\\\\u0026tbm\\\\u003dfin\\",null,null,\\"Finance\\",false,null,null,null,null,\\"FINANCE\\",[22],null,null,22]]],0,null,null,null,null,null,null,null,null,null,true,null,null,null,[[false],false,null,null,null,null,[true,false],true,null,0.564668],false,[[{\\"444381080\\":[]}],[[[[{\\"444383007\\":[7,null,null,null,null,null,null,\\"b-GRID_STATE0\\",-1,null,null,null,[\\"GRID_STATE0\\",null,null,null,null,null,1,[],null,null,null,[4,null,4294966996,1,3766,[[\\"mroB5K80ptCTuM\\",259,194,16777216],[\\"c1GanxYo-04FqM\\",299,168,117440512],
[\\"CwPZkiIyxII1IM\\",259,194,524288],[\\"K-EOlTyweDVg8M\\",275,183,16777216],[\\"7aHfaSFG7gX8iM\\",225,225,-1874853888],[\\"cgME6WrQ91TqbM\\",264,191,262144],[\\"zYwz1EEHHPaiqM\\",225,225,-1090519040],[\\"eTmT6kDpI3uIQM\\",223,226,0],[\\"Q0rGwJ3za5hy4M\\",276,183,50593792],[\\"eiNmj70lbdjNUM\\",260,194,17563648],[\\"7g0dolqpZILMhM\\",300,168,1040187392],[\\"UfVRJEEcMgRyyM\\",183,275,-1074003968],[\\"7v0HRR8xWKSvLM\\",215,234,2097152],[\\"wpRPgCuU5zJAsM\\",300,168,-1788084224],[\\"ttZT8wyA9AIcpM\\",193,261,17039360]],null,null,null,null,null,0],null,null,null,null,[true,null,null,\\"CAQ\\\\u003d\\",\\"GJADIAA\\\\u003d\\"],null,null,null,null,null,null,null,null,null,20],[[1692054866047747,117621638,1745567696],null,null,null,null,[[1]]]]}],[[[[{\\"444383007\\":[1,[0,\\"cjxY8tC9TPK3gM\\",[\\"https://encrypted-tbn0.gstatic.com/images?q\\\\u003dtbn:ANd9GcQkdUVoe0SeMM_uE_oUKymwnw4XFeg5IQ_a0xmxkByykYSCPGI1icA-E1WsxqOzfqSEvb8\\\\u0026usqp\\\\u003dCAU\\",159,316],[\\"https://www.rematadores.com/rematadores/remates/2023/27986_5.jpg\\",576,1140],null,0,\\"rgb(240,240,221)\\",null,false,null,null,null,null,null,null,null,null,null,null,null,null,false,false,null,false,{\\"2001\\":[null,null,null,0,0,0,0,true,false],\\"2003\\":[\\"6 days ago\\",\\"N-SEEMfLWqwvkM
</code></pre>
| <python><web-scraping><post><python-requests><get> | 2023-08-14 23:20:26 | 1 | 799 | Rivered |
76,902,544 | 117,030 | How to get email.message_from_bytes to work with unicode input | <p>When <code>email.message_from_bytes()</code> is given input with unicode/emoji in the headers, the resulting message raises unexpected TypeErrors when its headers are used. Is it possible to process the input (encoding, decoding, etc.) before passing it to <code>message_from_bytes()</code> to prevent these TypeErrors?</p>
<p>The overall goal is to get <a href="https://github.com/GAM-team/got-your-back" rel="nofollow noreferrer">gyb.py</a> to successfully clean + restore backups from gyb-generated .eml files, some of which contain unicode/emoji in the email headers. Also the unicode/emoji should be preserved without mangling them (like in the sample output.)</p>
<p>Minimal reproduction:</p>
<pre><code>import email
f = open('./sample.eml', 'rb')
bytes = f.read()
message = email.message_from_bytes(bytes)
# No unicode/emoji: works as expected:
print(message['to'])
print(len(message['to']))
# With unicode/emoji: unexpected TypeError:
print(message['from'])
print(len(message['from']))
</code></pre>
<p>sample.eml</p>
<pre><code>To: recipient <to@example.com>
From:🔥sender🔥 <from@example.com>
</code></pre>
<p>output:</p>
<pre><code>$ python check-message.py
recipient <to@example.com>
26
����sender���� <from@example.com>
Traceback (most recent call last):
File "V:\gyb\jkm\check-message.py", line 10, in <module>
print(len(message['from']))
^^^^^^^^^^^^^^^^^^^^
TypeError: object of type 'Header' has no len()
</code></pre>
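<p>One avenue worth noting: parsing with the modern <code>email.policy.default</code> instead of the legacy compat32 default makes headers come back as <code>str</code> subclasses, so <code>len()</code> works, and RFC 2047 encoded words are decoded transparently. A sketch with an inline sample (the From header below uses the encoded-word form; raw unencoded UTF-8 bytes in headers, as in the original sample.eml, are malformed input and may still come through mangled):</p>

```python
import email
from email import policy

# Inline sample; the From header carries "🔥sender🔥" as an RFC 2047 word.
raw = (b"To: recipient <to@example.com>\n"
       b"From: =?utf-8?b?8J+UpXNlbmRlcvCflKU=?= <from@example.com>\n"
       b"\nbody\n")

message = email.message_from_bytes(raw, policy=policy.default)
print(message["from"])        # headers are str subclasses under this policy
print(len(message["from"]))   # no TypeError
```

<p>Under compat32, an undecodable header is returned as a <code>Header</code> object, which is exactly why <code>len()</code> raised the TypeError shown above.</p>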
<hr />
<p>Github issues related to larger gyb restore problem</p>
<ul>
<li><a href="https://github.com/GAM-team/got-your-back/issues/433" rel="nofollow noreferrer">https://github.com/GAM-team/got-your-back/issues/433</a></li>
<li><a href="https://github.com/GAM-team/got-your-back/issues/432" rel="nofollow noreferrer">https://github.com/GAM-team/got-your-back/issues/432</a></li>
</ul>
| <python><email><unicode><emoji> | 2023-08-14 22:42:52 | 1 | 18,033 | Leftium |
76,902,445 | 276,220 | Poor performance (low requests per second and high response time) using Gunicorn + FastAPI + Machine learning model on AWS Fargate | <p>I'm deploying an ML model that takes as input a string and provides a set of numeric values determining the toxicity of the string. The model I'm using is <a href="https://github.com/unitaryai/detoxify" rel="nofollow noreferrer">Detoxify</a> running on CPU as Fargate does not yet support GPU instances.</p>
<p>Despite using gunicorn with 3 workers + FastAPI + preloading the model into a Docker image and then deploying to an instance on AWS Fargate (4 x vCPU / 8Gb RAM) behind an Application Load Balancer I'm experiencing poor requests per second performance (<strong>max ~25 req/second / 8sec response time</strong>). By comparison, load testing the <strong><code>/healthcheck</code> endpoint achieves around 3.8k req per/sec</strong>.</p>
<p>I'm load testing using <a href="https://locust.io/" rel="nofollow noreferrer">Locust</a> to send requests to the api deployed on AWS.</p>
<p>I have the following FastApi server:</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from detoxify import Detoxify
app = FastAPI()
model = Detoxify("unbiased-small", device="cpu")
@app.get("/healthcheck")
def healthcheck():
return { "status": "OK" }
@app.get("/healthcheck/loadbalancer")
def healthcheck():
return { "status": "OK" }
@app.get("/toxicity")
def predict(q: str):
try:
return model.predict([q])
except Exception as e:
raise HTTPException(status_code=400, detail=str(e))
</code></pre>
<p>I'm running gunicorn using the following dockerfile command:</p>
<pre class="lang-bash prettyprint-override"><code>CMD gunicorn app.server:app --workers 3 --preload --worker-class uvicorn.workers.UvicornWorker --bind 0.0.0.0:8080 --keep-alive 10
</code></pre>
<p>There seems to be an issue when I scale up the number of connected clients attempting to make requests (i.e. 100 clients cause a higher response time of 8 secs while achieving 20 req/sec, vs 10 clients with a 700 ms response time but still achieving 20 req/sec).</p>
| <python><amazon-web-services><fastapi><gunicorn><aws-fargate> | 2023-08-14 22:15:28 | 0 | 6,094 | Garbit |
76,902,421 | 44,060 | Implementing strftime and strptime for custom date class | <p>I've written a custom date class for a non-Gregorian calendar system.</p>
<p>Is there an easy way to implement <code>strftime</code> and <code>strptime</code> without having to implement all the logic myself?</p>
<p>My class has a day, month and year field, so in principle all I would need to provide the existing <code>strftime</code> and <code>strptime</code> implementations are the names of months and days of the week as strings.</p>
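<p>As far as I know, the stdlib's <code>strftime</code>/<code>strptime</code> machinery is tied to the proleptic Gregorian calendar and doesn't accept pluggable month/day names, so some hand-rolling is hard to avoid. A small directive table goes a long way, though. A hedged sketch with invented month names (all names below are assumptions for illustration):</p>

```python
# Hypothetical month names for the custom calendar:
MONTHS = ["Firstmonth", "Secondmonth"]

class MyDate:
    """Toy non-Gregorian date supporting a few strftime directives."""
    def __init__(self, year, month, day):
        self.year, self.month, self.day = year, month, day

    def strftime(self, fmt):
        # Map each supported directive to its rendered value.
        subs = {
            "%Y": f"{self.year:04d}",
            "%m": f"{self.month:02d}",
            "%d": f"{self.day:02d}",
            "%B": MONTHS[self.month - 1],
        }
        out = fmt
        for directive, value in subs.items():
            out = out.replace(directive, value)
        return out

d = MyDate(1423, 2, 7)
print(d.strftime("%d %B %Y"))  # → 07 Secondmonth 1423
```

<p>A <code>strptime</code> counterpart can be built the same way by translating each directive into a regex group and parsing the match back into fields.</p>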
| <python><strptime><strftime> | 2023-08-14 22:08:48 | 0 | 931 | AlfaZulu |
76,902,360 | 15,542,245 | Regex capture group returning wrong value | <p>This comes from a script that underscores suburbs in address lists. All suburbs must be underscored; however, when a street has the same name as a suburb, the street name gets underscored too, which is not wanted. The lines of code shown, copied from the script, are used to correct this situation.</p>
<p>This is a small sample of what is contained within a larger data sample. The address lists being processed come from text files. Here is a small chunk where Howard Road has been underscored in error.</p>
<pre><code>import re
addressList = ['118 Nile Street _Lower_Hutt Widow','88 _Howard Road _Point_Howard Student','168 Wellington Road _Wainuiomata Driver']
pattern1 = r'(\d?\/?)(\d+[A-z]?\s)_([A-z]\S+)(?=\s)(\s.+)|(\d+[A-z]?)(\d+[A-z]?\s)_([A-z]\S+)(?=\s)(\s.+)'
for i in range(len(addressList)):
replacedLine = addressList[i]
result = re.search(pattern1,replacedLine,flags=0)
if result:
print("There was a result!")
if result.group(1) is None or result.group(1) == '':
print("There was no group 1")
replacedLine = re.sub(pattern1, r'\2\3\4', replacedLine)
else:
print("All groups present")
replacedLine = re.sub(pattern1, r'\1\2\3\4', replacedLine)
print("New replaced line:",replacedLine)
</code></pre>
<p>The breakdown of the pattern is <a href="https://regex101.com/r/NXiLPA/1" rel="nofollow noreferrer">this demo</a>. This pattern must be able to match various street numbering like 2/22 or 2A, which is why there is an OR alternative included. The logic is to return all capture groups (minus the underscore of the street name). If the street number is a single digit, then return all capture groups other than capture group 1.</p>
<p>This script returns the correct result. The only change to <code>addressList</code> is the removal of the street underscore:</p>
<pre><code>There was a result!
All groups present
New replaced line: 88 Howard Road _Point_Howard Student
</code></pre>
<p>The issue is that the results from my files have errors. In all cases the leftmost digit in examples like this one comes back as 1, so the result shown here would be: <code>18 Howard Road _Point_Howard Student</code></p>
<p>Having carefully checked the output right before and right after the code shown here in my full version, I am sure the problem is here, but unfortunately I can't reproduce it in this example. This behavior only occurs when underscores are being removed from street numbers greater than 9, so single-digit addresses remain correct. This means the problem only occurs in the <code>All groups present</code> branch.</p>
| <python><regex-group> | 2023-08-14 21:55:53 | 0 | 903 | Dave |
76,902,113 | 3,777,717 | Why are there no arrays of objects in Python? | <p>The Python module <a href="https://docs.python.org/3/library/array.html" rel="nofollow noreferrer">array</a> supports arrays, which are, unlike lists, stored in a contiguous manner, giving performance characteristics which are often more desirable. But the types of elements are limited to those listed in the documentation.</p>
<p>I can see why the types have to be ones with constant (or at least bounded from above) representation size. But don't pointers fall into that category? (The main implementation is written in C which, admittedly, allows pointers to different types to have different sizes, but they're all (perhaps except pointers to C functions, but that's not an issue for this question) convertible to <code>intptr_t</code>.) So, given boxing, arrays of arbitrary Python objects could be easily implemented, right? So why aren't they?</p>
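<p>As a point of comparison, NumPy exposes exactly the boxed-pointer design described here: a <code>dtype=object</code> array stores references to arbitrary Python objects in a contiguous buffer of pointers:</p>

```python
import numpy as np

a = np.empty(3, dtype=object)   # contiguous buffer of object pointers
a[0] = {"any": "object"}
a[1] = [1, 2, 3]
a[2] = "text"

print(a.dtype)     # object
print(a.itemsize)  # size of one pointer (e.g. 8 on a 64-bit build)
```

<p>So the design is clearly feasible; the stdlib <code>array</code> module simply chose to support only unboxed numeric/char types.</p>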
| <python><arrays><memory><types><typed-arrays> | 2023-08-14 20:57:10 | 1 | 1,201 | ByteEater |
76,902,081 | 310,370 | How Can I Convert This yolov5 python script into yolov8 | <p>It is working great with YOLOv5, but I want to upgrade to YOLOv8.</p>
<p>How can I do that?</p>
<pre><code>import cv2
import torch
import os
from PIL import Image
# Load the pre-trained YOLOv5 model
model = torch.hub.load('ultralytics/yolov5', 'yolov5x')
# Aspect ratios to consider
aspect_ratios = [(1024, 1280)]
def auto_zoom(input_dir, output_base_dir):
# Loop over all files in the input directory
for filename in os.listdir(input_dir):
# Create the full input path and read the file
input_path = os.path.join(input_dir, filename)
img = cv2.imread(input_path)
if img is None:
continue
# Run the image through the model
results = model(img)
# Find the first human detected in the image
human = next((x for x in results.xyxy[0] if int(x[5]) == 0), None)
if human is None:
print(f"No human detected in the image {input_path}.")
os.remove(input_path)
continue
# Crop the image to the bounding box of the human
x1, y1, x2, y2 = map(int, human[:4])
</code></pre>
| <python><yolo><yolov5><yolov8> | 2023-08-14 20:51:02 | 1 | 23,982 | Furkan Gözükara |
76,901,667 | 5,556,711 | Why cast return value of len to int in Python? | <p>In the source code of <a href="https://github.com/DLR-RM/stable-baselines3" rel="nofollow noreferrer">Stable Baselines3</a>, in <code>common/preprocessing.py</code>, on line 158, there is: <code>return (int(len(observation_space.nvec)),)</code>.</p>
<p>As far as I know, in Python, <code>len</code> can only return an int, and even if that is not true in general, it would return an int here (I might be wrong on both counts). If that is the case, the cast to int would not make sense to me.</p>
<p>Am I missing something here?</p>
| <python> | 2023-08-14 19:26:59 | 1 | 706 | David Cian |
76,901,528 | 5,352,674 | App TemplateDoesNotExist in Django Project | <p>I am struggling to figure out why I am receiving the <code>TemplateDoesNotExist</code> error in my project.</p>
<p>This is my initial project layout. The project name is localassets, the initial app name is assetlist.</p>
<p><a href="https://i.sstatic.net/QXyLx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QXyLx.png" alt="enter image description here" /></a></p>
<p>I have registered the app in my <code>settings.py</code> file.</p>
<pre><code>INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'assetlist.apps.AssetlistConfig',
'django_tables2',
'django_filters',
'widget_tweaks',
'django_bootstrap_icons',
'django_extensions',
'crispy_forms',
'crispy_bootstrap4',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'localassets.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
</code></pre>
<p>I have successfully created views and templates within this app and it displays all templates without issues.</p>
<p>I now want to add another app to this project. This app is named <code>workorders</code>. I created the app the folder structure is shown below:</p>
<p><a href="https://i.sstatic.net/ZlX9O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZlX9O.png" alt="enter image description here" /></a></p>
<p>I have also registered this app in the project <code>settings.py</code> file.</p>
<pre><code>INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'assetlist.apps.AssetlistConfig',
'django_tables2',
'django_filters',
'widget_tweaks',
'django_bootstrap_icons',
'django_extensions',
'crispy_forms',
'crispy_bootstrap4',
'workorders.apps.WorkordersConfig',
]
MIDDLEWARE = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
]
ROOT_URLCONF = 'localassets.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
</code></pre>
<p>My issue is that I have created a template <code>tenant_list.html</code> and a generic list view:</p>
<pre><code>class TenantListView(generic.ListView):
model = Tenant
</code></pre>
<p><code>workorders\views.py</code></p>
<pre><code>from django.urls import path
from . import views
urlpatterns = [
path("", views.index, name="hello"),
path('tenants', views.TenantListView.as_view(), name='tenants'),
]
</code></pre>
<p>I receive a <code>TemplateDoesNotExist</code> error when trying to access the url <code>workorders\tenants\</code> and I'm not entirely sure why the project can't access or find the template within the workorders app, am I missing something from within the <code>settings.py</code> file?</p>
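<p>For reference, a <code>generic.ListView</code> with no explicit <code>template_name</code> looks for <code>&lt;app_label&gt;/&lt;model_name&gt;_list.html</code> inside each app's <code>templates</code> directory, so with <code>APP_DIRS: True</code> the expected layout would be (sketch):</p>

```
workorders/
    templates/
        workorders/
            tenant_list.html    # what Django searches for by default
```

<p>If the template instead sits directly under <code>templates/</code>, either move it into a <code>workorders/</code> subfolder, or point the view at it explicitly with <code>template_name = "tenant_list.html"</code> on <code>TenantListView</code>.</p>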
| <python><django><django-templates> | 2023-08-14 19:03:15 | 3 | 319 | Declan Morgan |
76,901,337 | 6,619,692 | Why do f-strings require parentheses around assignment expressions? | <p>In Python (3.11) why does the use of an assignment expression (the "walrus operator") require wrapping in parentheses when used inside an f-string?</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python
from pathlib import Path
import torch
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
ckpt_dir = Path("/home/path/to/checkpoints")
_ckpt = next(ckpt_dir.iterdir())
print(_ckpt)
sdict = torch.load(_ckpt, map_location=DEVICE)
model_dict = sdict["state_dict"]
for k, v in model_dict.items():
print(k)
print(type(v))
print(_shape := v.size())
print(f"{(_numel := v.numel())}")
print(_numel == torch.prod(torch.tensor(_shape)))
</code></pre>
<p>The code block above with <code>print(f"{_numel := v.numel()}")</code> instead does not parse.</p>
<p>What about the parsing / AST creation mandates this?</p>
| <python><f-string><python-assignment-expression> | 2023-08-14 18:24:18 | 2 | 1,459 | Anil |
76,901,277 | 13,689,939 | Jupyter nbconvert function with multiple metadata masks | <p><strong>Problem</strong></p>
<p>I'm trying to clear <em>all</em> output and <em>almost all</em> metadata from a jupyter notebook. When I run the following command,</p>
<pre><code>jupyter nbconvert --ClearOutputPreprocessor.enabled=True \
--ClearMetadataPreprocessor.enabled=True \
--ClearMetadataPreprocessor.preserve_nb_metadata_mask="{('language_info', 'name'), 'kernelspec'}" \
--to=notebook --log-level=ERROR my_notebook.ipynb
</code></pre>
<p>I get the output I want,</p>
<pre><code> "metadata": {
"kernelspec": {
"display_name": "my-kernel",
"language": "python",
"name": "my-kernel"
},
"language_info": {
"name": "python"
}
}
</code></pre>
<p>but also get this warning:</p>
<pre><code>/usr/local/miniconda/lib/python3.7/site-packages/traitlets/traitlets.py:2935: FutureWarning: --ClearMetadataPreprocessor.preserve_nb_metadata_mask={('language_info', 'name'), 'kernelspec'} for containers is deprecated in traitlets 5.0. You can pass `--ClearMetadataPreprocessor.preserve_nb_metadata_mask item` ... multiple times to add items to a list.
FutureWarning,
</code></pre>
<p>Given the suggestion from the FutureWarning and the <a href="https://nbconvert.readthedocs.io/en/latest/config_options.html" rel="nofollow noreferrer">docs</a>, I ran</p>
<pre><code>jupyter nbconvert --ClearOutputPreprocessor.enabled=True \
--ClearMetadataPreprocessor.enabled=True \
--ClearMetadataPreprocessor.preserve_nb_metadata_mask="{('language_info', 'name')}" \
--ClearMetadataPreprocessor.preserve_nb_metadata_mask="{'kernelspec'}" \
--to=notebook --log-level=ERROR my_notebook.ipynb
</code></pre>
<p>However, instead of getting my expected output, the notebook has <em>no</em> metadata:</p>
<pre><code>"metadata": {}
</code></pre>
<p><strong>Question</strong></p>
<ol>
<li>What's going on here? Is it a dependency version issue?</li>
<li>How can I get the metadata mask to work without a FutureWarning?</li>
</ol>
| <python><jupyter-notebook><nbconvert> | 2023-08-14 18:12:08 | 1 | 986 | whoopscheckmate |
76,901,169 | 19,588,737 | Vectorized method to insert arrays of different length at different spots in an ndarray | <p>I'm trying to "expand" a 2D numpy array <code>nadir</code> of points (x,y,z), and fill gaps in space with interpolated points. Where there exist spatial gaps bigger than some tolerance <code>dist</code>, I want to use <code>np.insert</code> to insert the required number of <code>nan</code> rows to fill that gap, adding <code>nan</code> points to be interpolated after.</p>
<p>First, I locate the gaps, and see how many points (rows) I need to insert in each gap to achieve the desired spatial point density:</p>
<pre><code>import numpy as np
# find and measure gaps
nadir = nadir[~np.isnan(nadir).any(axis = 1)]
dist = np.mean(np.linalg.norm(np.diff(nadir[:,0:2], axis = 0), axis = 1), axis = 0) # mean distance for gap definition
gaps = np.where(np.linalg.norm(np.diff(nadir[:,0:2], axis = 0), axis = 1) > dist)[0] # gap locations
sizes = (np.linalg.norm(np.diff(nadir[:,0:2], axis = 0), axis = 1)[gaps] // dist).astype(int) # how many points to insert in each gap
</code></pre>
<p>What I wish I could do is pass a list of ndarrays to <code>np.insert()</code> for the <code>values</code> argument, like this:</p>
<pre><code>nadir = np.insert(nadir, gaps, [np.full((size, 3), np.nan) for size in sizes], axis = 0)
</code></pre>
<p>such that at every index in <code>gaps</code>, the corresponding <code>nan</code> array of shape <code>(size,3)</code> gets inserted. But the above code doesn't work:</p>
<pre><code>"ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (616,) + inhomogeneous part."
</code></pre>
<p>I could achieve this in a loop, but a nice vectorized approach would be ideal. Any ideas? Again, my final goal is to spatially fill gaps in this 3D data with interpolated values, without gridding, so any other clever approaches would also work!</p>
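<p>For what it's worth, here is a vectorized sketch of the kind of thing I am after (toy data, not my real <code>nadir</code>): <code>np.insert</code> accepts repeated indices, so repeating each gap index by its size inserts that many rows in one call:</p>

```python
import numpy as np

# toy stand-in for the point array: 4 points with 3 coordinates each
points = np.arange(12.0).reshape(4, 3)
gaps = np.array([0, 2])    # gaps detected after rows 0 and 2
sizes = np.array([2, 1])   # number of NaN rows to insert into each gap

# indices refer to positions in the ORIGINAL array, and duplicated
# indices insert multiple rows at the same spot, so repeating each
# insertion point `size` times fills the gap with NaN rows
filled = np.insert(points, np.repeat(gaps + 1, sizes), np.nan, axis=0)
```

<p>The NaN rows could then be interpolated afterwards; I have not benchmarked this against a loop.</p>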
| <python><arrays><numpy><point><spatial-interpolation> | 2023-08-14 17:54:29 | 2 | 307 | bt3 |
76,901,120 | 10,331,351 | Why is my miniconda not usable as pycharm interpreter? | <p>I just created a (mini)conda env with a simple command</p>
<p><code>conda create MyNewEnv</code></p>
<p>MyNewEnv is created and I can activate and use it. So far so good.</p>
<p>Then I tried to use it as a PyCharm interpreter, but I realised that there were very few files/folders created under <code>C:\users\eric\.conda\envs\MyNewEnv</code></p>
<p>I just have one folder "conda-meta" while my other environments have far more.</p>
<p><a href="https://i.sstatic.net/7MveW.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7MveW.jpg" alt="enter image description here" /></a></p>
<p>I believe I also created these other environments with the same command (conda create whatever)</p>
<p>Am I missing a step somewhere?</p>
| <python><miniconda> | 2023-08-14 17:45:38 | 0 | 3,795 | Eric Mamet |
76,901,073 | 298,622 | pyspark: how to improve python type hints for DataFrames? | <p>I'm using vscode with the help of dbx to run pyspark code on a Databricks cluster. Even though I'm using GitHub Copilot, code completion and navigation are rather poor due to the lack of type hints. I thought of having dataclasses for each DataFrame and annotating them everywhere.</p>
| <python><pyspark><python-typing> | 2023-08-14 17:36:58 | 0 | 4,938 | Igor Gatis |
76,900,755 | 5,790,653 | python pyautogui doesn't release enter | <p>This is my code:</p>
<pre><code>from subprocess import call
from pyHM import mouse
from time import sleep
import pyautogui
import pyperclip
url = 'https://google.com'
call(['chrome', '--incognito'])
mouse.move(600, 100, multiplier=1)
mouse.click(600, 100)
pyautogui.typewrite(f'{url}\n')
</code></pre>
<p>Also other attempts of mine (I only add the lines change comparing the first code):</p>
<pre><code>url = 'https://google.com\n'
pyautogui.typewrite(f'{url}')
</code></pre>
<p>Other one (whole code):</p>
<pre><code>from subprocess import call
from pyHM import mouse
from time import sleep
import pyautogui
import pyperclip
url = 'https://google.com'
call(['chrome', '--incognito'])
mouse.move(600, 100, multiplier=1)
mouse.click(600, 100)
pyperclip.copy(url)
with pyautogui.hold('ctrl'):
    pyautogui.press('v')

pyautogui.keyDown("enter")
pyautogui.keyUp('enter')
</code></pre>
<p>But the issue is that it seems the key is not released. When I manually press <code>enter</code> in the address bar, the website loads. But if I don't do this, the spinning circle in the browser window, which indicates the site is still loading, just keeps spinning.</p>
<p>I also tried this:</p>
<pre><code>pyautogui.press('enter')
pyautogui.keyUp('enter')
</code></pre>
<p>But it still doesn't work.</p>
<p><strong>Update 1</strong></p>
<p>I found a workaround for myself, though maybe not a permanent solution:</p>
<pre><code>call(['chrome', '--incognito'])
sleep(1)
pyautogui.press('f12')
sleep(1)
mouse.move(600, 100, multiplier=1)
mouse.click(600, 100)
pyperclip.copy(url)
with pyautogui.hold('ctrl'):
    pyautogui.press('v')

pyautogui.press('enter')
pyautogui.press('f12')
</code></pre>
| <python><pyautogui> | 2023-08-14 16:47:43 | 1 | 4,175 | Saeed |
76,900,650 | 5,586,573 | Goal seek function in python to find scaling factor | <p>I am trying to implement a goal-seek function to find a scaling factor for an array. Currently attempting to use <code>scipy.optimize.fsolve</code>. I am not sure if this is the right approach. Basically I have an array and I want to find the scaling factor to multiply each value in the array until the sum of the array matches a target value.</p>
<pre><code>Target_value = 1169308000
def func(x):
return sum(array*x) - Target_value
scaling_factor = fsolve(func, 0.1)
</code></pre>
<p>The above returns 0.1, which was the initial value I gave <code>fsolve</code>. I must not be using this function correctly or this isn't what I need. Any guidance here?</p>
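<p>For what it's worth, since <code>sum(array * x)</code> is linear in <code>x</code>, the root is available in closed form without any solver (sketch with toy data; note that <code>array</code> must be a NumPy array — with a plain Python list, <code>array * x</code> means repetition, not elementwise scaling):</p>

```python
import numpy as np

Target_value = 1_169_308_000.0
array = np.asarray([10.0, 20.0, 30.0])  # toy data

# sum(array * x) == x * array.sum(), so the root is simply:
scaling_factor = Target_value / array.sum()
```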
| <python><scipy-optimize> | 2023-08-14 16:28:35 | 2 | 709 | HM14 |
76,900,560 | 1,200,914 | Is there any way to define a set element with an OR? | <p>I need to check if subset A is a subset of B. Therefore, the easiest way would be:</p>
<pre><code>A.issubset(B)
</code></pre>
<p>For A, I can have different definitions, where each definition is a set of strings, e.g. <code>A={"yellow", "red", "green", "blue"}</code>. However, there's one definition where I would like one of the elements to be either one value or another, e.g. <code>A={"yellow" or "orange", "red", "green", "blue"}</code>. Therefore, when doing <code>issubset</code>, I would like to check whether, for that first element, either of the two possible values is present, and if so, continue with the rest of the elements of A.</p>
<p>Is there any pythonic way that does not require declaring a second set?</p>
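<p>To illustrate the behaviour I am after, with each requirement modelled as a group of acceptable alternatives (the list-of-sets layout here is just my own convention for the example):</p>

```python
B = {"orange", "red", "green", "blue", "black"}

# each element of "A" becomes a group of acceptable alternatives
A_groups = [{"yellow", "orange"}, {"red"}, {"green"}, {"blue"}]

# A matches B if every group has at least one member present in B
matches = all(not B.isdisjoint(group) for group in A_groups)
```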
| <python><set> | 2023-08-14 16:13:33 | 1 | 3,052 | Learning from masters |
76,900,494 | 2,163,864 | Getting Pandas read_sql working with SQLAlchemy session ContextManager and ORM | <p>I'm using SQLAlchemy's ORM to construct queries with a ContextManager to manage the session. It's been <a href="https://stackoverflow.com/questions/29525808/sqlalchemy-orm-conversion-to-pandas-dataframe?rq=2">suggested</a> that you can get Pandas' <code>read_sql</code> working with a <code>Query</code> object by <code>read_sql(sql=q.statement, con=q.session.bind)</code> however this doesn't work within a <code>with</code> block.</p>
<p>For example, the following setup results in an <code>UnboundExecutionError</code>. I think the <code>session</code> is somehow not bound.</p>
<pre><code>@contextmanager
def db_session():
    session = Session()
    try:
        yield session
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()


class MyTable(Base):
    __tablename__ = 'MyTables'
    __table_args__ = {'schema': 'MyDatabase.dbo'}

    A = Column(Integer)
    B = Column(String(100, 'SQL_Latin1_General_CP1_CI_AS'))


with db_session() as s:
    q = s.query(MyTable.A, MyTable.B).filter(MyTable.A >= 10)
    q = q.yield_per(1000).enable_eagerloads(False)
    results = pd.read_sql(sql=q.statement, con=q.session.bind)
    pd.DataFrame(results, columns=['A', 'B'])
sqlalchemy.exc.UnboundExecutionError: This session is not bound to a single Engine or Connection, and no context was provided to locate a binding.
</code></pre>
| <python><pandas><sqlalchemy> | 2023-08-14 16:03:56 | 1 | 9,527 | siki |
76,900,405 | 895,148 | Django/MSSQL: Issues counting objects through foreignkey relations | <p><em>environment:</em></p>
<ul>
<li>Database: MSSQL</li>
<li>Django 3.2.20</li>
<li>mssql-django 1.3</li>
</ul>
<p><strong>models</strong></p>
<pre class="lang-py prettyprint-override"><code>class ChangeOrder(models.Model):
    # ...fields...


class GroupChange(models.Model):
    order = models.ForeignKey(
        ChangeOrder, related_name='groupchanges', on_delete=models.CASCADE
    )
    action = models.CharField(
        max_length=10,
        choices=[('assignment', 'assignment'), ('removal', 'removal')],
    )
    # ...other fields...


class UserChange(models.Model):
    order = models.ForeignKey(
        ChangeOrder, related_name='userchanges', on_delete=models.CASCADE
    )
    action = models.CharField(
        max_length=10,
        choices=[('assignment', 'assignment'), ('removal', 'removal')],
    )
    # ...other fields...
</code></pre>
<p><strong>objective</strong></p>
<p>For each ChangeOrder, I want to annotate/calculate:</p>
<ul>
<li><em>ant_assignment_count</em>: Count of '<strong>assignment</strong>' actions in both GroupChange and UserChange.</li>
<li><em>ant_removal_count</em>: Count of '<strong>removal</strong>' actions in both GroupChange and UserChange.</li>
</ul>
<p><strong>query</strong></p>
<pre class="lang-py prettyprint-override"><code>ChangeOrder.objects.annotate(
    ant_assignment_count=Sum(
        Case(
            When(userchanges__action='assignment', then=1),
            When(groupchanges__action='assignment', then=1),
            default=0, output_field=IntegerField()
        )
    ),
    ant_removal_count=Sum(
        Case(
            When(userchanges__action='removal', then=1),
            When(groupchanges__action='removal', then=1),
            default=0, output_field=IntegerField()
        )
    )
)
</code></pre>
<p><strong>objects</strong></p>
<pre class="lang-py prettyprint-override"><code>co = ChangeOrder.objects.create()
GroupChange.objects.create(order=co, action='removal', ..)
GroupChange.objects.create(order=co, action='removal', ..)
UserChange.objects.create(order=co, action='assignment', ..)
</code></pre>
<p>If I run the query with those created objects I receive <code>ant_assignment_count=2</code> and <code>ant_removal_count=2</code>. <br/>But it should be <code>ant_assignment_count=1</code> and <code>ant_removal_count=2</code>.</p>
<p>I've attempted various methods including annotations, subqueries, and Count with Case statements. However, I'm encountering issues and getting incorrect results. It seems to be a problem with the <code>LEFT OUTER JOIN</code> on <code>GroupChange</code> and <code>UserChange</code>.</p>
<p>I'd appreciate any help!</p>
| <python><django><mssql-django> | 2023-08-14 15:49:53 | 1 | 1,680 | chsymann |
76,900,349 | 6,184,683 | Repo id error when using hugging face transformers | <p>I keep getting this error when I try to use hugging face transformers library.</p>
<pre><code>huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': 'C:/Users/FZH91R/PycharmProjects/DudeWheresMyHealing/Pytorch/tf_model.h5'. Use `repo_type` argument if needed.
</code></pre>
<p>Here is my code:</p>
<pre><code>import os

from transformers import AutoTokenizer, AutoModel

CODE_DIR = os.path.dirname(__file__)
ROOT_DIR = os.path.dirname(CODE_DIR)
MODEL_DIR = os.path.join(ROOT_DIR, "Pytorch")
CONFIG_DIR = os.path.join(ROOT_DIR, "Pytorch")

# Load the model to be used.
path_model = MODEL_DIR + "\tf_model.h5"
print(path_model)
path_config = CONFIG_DIR + "\config.json"
print(path_config)

# Load model from a local source.
tokenizer = AutoTokenizer.from_pretrained(path_model, local_files_only=True)
model = AutoModel.from_pretrained(path_model, config=path_config, local_files_only=True)
</code></pre>
<p>My versions:</p>
<p>transformers-4.31.0</p>
<p>How can I fix this error?</p>
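<p>Separately from the repo-id validation message: note that <code>"\tf_model.h5"</code> contains a tab character, because <code>\t</code> is an escape sequence. That is one reason to build paths with <code>os.path.join</code> instead of string concatenation — a small self-contained illustration:</p>

```python
import os

MODEL_DIR = r"C:\Users\me\project\Pytorch"  # raw string keeps backslashes literal

bad = MODEL_DIR + "\tf_model.h5"               # "\t" becomes a TAB character
good = os.path.join(MODEL_DIR, "tf_model.h5")  # no escape-sequence surprises
```

<p>(Also, as far as I know, <code>from_pretrained</code> is usually pointed at the model <em>directory</em> rather than at a single weights file, but I cannot verify that for this exact setup.)</p>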
| <python><machine-learning><pytorch><huggingface-transformers><huggingface-tokenizers> | 2023-08-14 15:40:34 | 1 | 701 | Aeryes |
76,900,310 | 2,175,534 | Flask: Utilize User Input | <p>I'm trying to allow the user to pick between a few different videos in a folder and play whichever one they want by utilizing their input.</p>
<p>Current Code:</p>
<pre><code>@app.route('/AAR/')
def aar():
    path = Path().absolute()
    print(path)
    path = str(path) + r"\static\trainings"
    dirs = os.listdir(path)
    temp = []
    for dir in dirs:
        temp.append({'name': dir})
    return render_template('aar.html', data=temp, selection="hi")


@app.route("/AAR/", methods=['GET', 'POST'])
def test():
    select = request.form.get('comp_select')
    print(select)
    return render_template('aar.html', selection=select)
</code></pre>
<p>HTML:</p>
<pre><code>{% extends "header.html" %}
{% block body %}
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href='/static/style.css' />
</head>
<body>
<center>
<h1>Trainings</h1>
<hr>
<form class="form-inline" method="POST">
<div class="form-group">
<div class="input-group">
<span class="input-group-addon">Please select</span>
<select name="comp_select" class="selectpicker form-control">
{% for o in data %}
<option value="{{ o.name }}">{{ o.name }}</option>
{% endfor %}
</select>
</div>
<button type="submit" class="btn btn-default">Go</button>
</div>
</form>
<hr>
</center>
</body>
<div id="load3" class="load3">
<title>Video Example</title>
<h2>Video Replay Test</h2>
<h2>{{ selection }}</h2>
<video width=500 height="500" controls>
<source src="{{ url_for('static', filename='trainings/{{ selection }}')}}" type="video/mp4">
Your browser does not support the video tag.
</video>
</div>
{% endblock %}
</code></pre>
<p>My thought process here was that I could retrieve the user selection of a video name (works correctly) and then simply re-render the page and use the name of the user selected video in the <code>src</code> to load up the video, but so far when I click "Go" nothing happens. It is successfully printing to console the user selection, but nothing else seems to happen.</p>
| <javascript><python><html><ajax><flask> | 2023-08-14 15:35:08 | 1 | 1,406 | Bob |
76,900,083 | 6,000,739 | Python convert text symbolic (string) matrix to sympy Matrix for determinant | <p>Suppose we want to compute a 4*4 matrix (say <code>A</code>) for determinant. If A is a <strong>numerical matrix</strong>, which reads</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
A = np.array([[6,5,10,1],[7,10,7,6],[9,8,12,2],[4,9,11,3]])
np.linalg.det(A)
# 279.9999999999999
</code></pre>
<p>then we can easily get above result. For convenience, we adopt another method to get the same result as follows</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
a = '''
6 5 10 1
7 10 7 6
9 8 12 2
4 9 11 3
'''
A = np.matrix(a).reshape(4,4)
np.linalg.det(A)
# 279.9999999999999
</code></pre>
<p>i.e., first convert the string matrix to <em>np.matrix</em> and then compute its determinant.</p>
<p>Next, we assume the <code>4*4</code> matrix A is a <strong>symbolic matrix</strong>, e.g., A has some elements symbols w.r.t <code>x</code> . Then we can resort to the <code>sympy</code> package, which reads</p>
<pre class="lang-py prettyprint-override"><code>import sympy as sy
x = sy.symbols('x')
A = sy.Matrix((
[1, x, x ** 2, 0],
[0, 1, x, x ** 2],
[x ** 2, 0, 1, x],
[x, x ** 2, 0, 1]
))
sy.det(A)
## x**8 + x**4 + 1
</code></pre>
<p>Now my question is: how can we convert a symbolic (string) matrix to a sympy Matrix for the determinant, as in the <strong>numerical case</strong>? I have tried</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import sympy as sy
x = sy.symbols('x')
a = '''
1 x x**2 0
0 1 x x**2
x**2 0 1 x
x x**2 0 1
'''
# A = sy.Matrix(np.matrix(a).reshape(4,4)) #Error!!!
# A = sy.Matrix(sy.MatrixSymbol(a,4,4)) #Unexpected!!!
# sy.det(A)
</code></pre>
<p>and both <code>A = sy.Matrix(np.matrix(a).reshape(4,4))</code> and <code>A = sy.Matrix(sy.MatrixSymbol(a,4,4))</code> are not working! How can I remedy this?</p>
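<p>One remedy I have been experimenting with (a sketch): tokenise the string manually and let <code>sympy.sympify</code> parse each token, since <code>np.matrix</code> only understands numbers:</p>

```python
import sympy as sy

a = '''
1    x    x**2 0
0    1    x    x**2
x**2 0    1    x
x    x**2 0    1
'''

# split into rows and tokens, then parse each token into a SymPy expression
rows = [[sy.sympify(tok) for tok in line.split()] for line in a.strip().splitlines()]
A = sy.Matrix(rows)
d = sy.det(A)  # same result as with the hand-built Matrix
```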
| <python><numpy><matrix><sympy> | 2023-08-14 15:03:00 | 2 | 715 | John Stone |
76,900,080 | 2,353,911 | FastAPI: how to test exception handler logic | <p>I am implementing exception handler logic to intercept <code>RetryError</code>s occurring at service level.</p>
<p>For context, in that specific case, the second argument of the handler's signature: <code>async def my_handler(request: Request, call_next)</code> is, in fact, not a callable, but the actual <code>RetryError</code>.</p>
<p>My manual test with the application runtime seems to indicate my fix resolves the issue.</p>
<p>However, I am having considerable trouble adding non-E2E coverage for the change in the handler.</p>
<p><strong>TL;DR</strong>
My exact problem is that I cannot seem to make my test execute the handler function in test, whereas it does work at runtime.</p>
<p>Here's my test code:</p>
<pre><code>@fixture(scope="session", name="exception_handler_test_client")
def get_test_client() -> TestClient:
    app = FastAPI()
    router = APIRouter()
    # see below for the "service" function that raises RetryError
    router.add_api_route("/test_exception_handler/", retryable_failure, methods=["GET"])
    app.include_router(router)
    # the real test would replace foo with the real exception handler I want to test
    app.add_exception_handler(Exception, foo)
    test_client = TestClient(router)
    return test_client


async def foo(request: Request, call_next):
    # here for convenience, the real one is somewhere else in the code base
    # and neither the real one nor this one execute on exception raised
    pass


# this does raise RetryError as expected
@retry(wait_fixed=100, stop_max_delay=1000, stop_max_attempt_number=2, retry_on_result=lambda Any: True)
def retryable_failure():
    """
    Programmatically fails a retryable function for testing purposes.
    """
    pass


# the actual test
@pytest.mark.asyncio
def test_retry_error_handled_without_hard_failure(exception_handler_test_client: TestClient) -> None:
    # TODO once the handler works I would assert on the JSON response and HTTP status
    exception_handler_test_client.get("/test_exception_handler/")
</code></pre>
<p>The test fails because it raises <code>RetryError</code> as expected, but the handler is never executed.</p>
<p>I am aware of <a href="https://stackoverflow.com/questions/73371263/how-to-test-fastapi-exception-handler">this</a> post, describing a similar issue.</p>
<p>The only answer mentions using <code>pytest.raises</code>, but that doesn't solve my problem at all - it just passes the test without executing the handler.</p>
<p><strong>Extra note</strong>:</p>
<p>The exception handler associated with the app in test doesn't execute even if the programmatically failed retryable function is replaced with a garden-variety, non-retryable function just raising Exception.</p>
| <python><pytest><fastapi><exceptionhandler> | 2023-08-14 15:02:39 | 1 | 48,632 | Mena |
76,900,067 | 5,547,553 | Convert python list to polars dataframe | <p>I have a list and a variable, eg:</p>
<pre><code>myvar = 'KEY_1'
mylist = ['apple', 'banana', 'peach']
</code></pre>
<p>I'd like to convert them to a Polars dataframe of something like:</p>
<pre><code>PROD PARAM1 PARAM2 PARAM3
KEY_1 apple banana peach
</code></pre>
<p>How can I do that?</p>
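<p>To illustrate the shape I mean, here is the column mapping built in plain Python (the final <code>pl.DataFrame</code> call is my assumption of how it would then be consumed):</p>

```python
myvar = 'KEY_1'
mylist = ['apple', 'banana', 'peach']

# one row: PROD plus one PARAM<i> column per list element
data = {'PROD': [myvar],
        **{f'PARAM{i}': [item] for i, item in enumerate(mylist, start=1)}}

# presumably then: import polars as pl; df = pl.DataFrame(data)
```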
| <python><python-polars> | 2023-08-14 15:00:50 | 1 | 1,174 | lmocsi |
76,900,021 | 11,366,870 | How to get the access token in Python from a URL where the OAuth2 flow is the Resource Owner Password Credentials grant | <p>I need to request a token to fetch the data from an API.
I have the following information:</p>
<pre><code>Oath 2 flow: "Resource owner password credential grant"
Resource Owner Name: "username"
Resource Owner Password: "password"
Client Identification: "clientID"
Client Password: "clientPassword"
Access token URL: "URL"
</code></pre>
<p>I have been trying multiple ways, but all of them seem to fail.
Can anybody suggest what I am missing?</p>
<p>Below is one of the code I am trying, but it fails.</p>
<pre><code>import json

import requests

data = {"username": "somename", "password": "somepassword"}
headers = {"Client_id": "SomeID", "Client_secret": "SomePassword", "Content-Type": "application/json"}
response = requests.get("https://URL/as/token.oauth2", data=data, headers=headers)

if response.status_code in [200]:
    tok_dict = json.loads(response.text)
    print(tok_dict)
    issued_at = tok_dict["issued_at"]
    expires_in = tok_dict["expires_in"]
    token_type = tok_dict["token_type"]
    access_token = tok_dict["access_token"]
else:
    print("error")
    print(response.status_code)
</code></pre>
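<p>For reference, in the ROPC grant the token request is normally a <code>POST</code> with a form-encoded body containing <code>grant_type=password</code>, and the client credentials usually go in an HTTP Basic <code>Authorization</code> header rather than in custom headers. A sketch built with the standard library only (the endpoint and credentials are the placeholders from above):</p>

```python
import base64
import urllib.parse

token_url = "https://URL/as/token.oauth2"  # placeholder

# form-encoded body, per RFC 6749 section 4.3 (ROPC grant)
body = urllib.parse.urlencode({
    "grant_type": "password",
    "username": "somename",
    "password": "somepassword",
})

# client authentication via HTTP Basic (client_id:client_secret)
basic = base64.b64encode(b"SomeID:SomePassword").decode("ascii")
headers = {
    "Authorization": f"Basic {basic}",
    "Content-Type": "application/x-www-form-urlencoded",
}
# with requests this would be sent as:
#   requests.post(token_url, data=body, headers=headers)
```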
| <python><oauth-2.0> | 2023-08-14 14:53:53 | 1 | 786 | danD |
76,899,935 | 17,160,160 | Define conditional bounds to Pyomo Var | <p>Given the following trivial model:</p>
<pre><code>model.WEEKS = Set(initialize = [1,2,3])
model.PRODS = Set(initialize = ['Q24','J24','F24'])
model.volume = Var(model.WEEKS,model.PRODS, within = NonNegativeIntegers)
</code></pre>
<p>I'd like to set varying bounds for each index combination in <code>model.volume</code>, depending on the initial character of the second index. Currently, I have achieved this by applying the following constraints to specified subsets:</p>
<pre><code># create subsets
Q_PRODS = Set(within=model.WEEKS * model.PRODS, initialize=[x for x in model.volume if x[1][0] == 'Q'])
M_PRODS = Set(within=model.WEEKS * model.PRODS, initialize=[x for x in model.volume if x[1][0] != 'Q'])

# define functions
def c1(model, i, j):
    return (25, model.volume[i, j], 60)

model.c1 = Constraint(Q_PRODS, rule=c1)

def c2(model, i, j):
    return (40, model.volume[i, j], 75)

model.c2 = Constraint(M_PRODS, rule=c2)
</code></pre>
<p>Which correctly outputs the following:</p>
<pre><code>2 Constraint Declarations
c1 : Size=3, Index=c1_index, Active=True
Key : Lower : Body : Upper : Active
(1, 'Q24') : 25.0 : volume[1,Q24] : 60.0 : True
(2, 'Q24') : 25.0 : volume[2,Q24] : 60.0 : True
(3, 'Q24') : 25.0 : volume[3,Q24] : 60.0 : True
c2 : Size=6, Index=c2_index, Active=True
Key : Lower : Body : Upper : Active
(1, 'F24') : 40.0 : volume[1,F24] : 75.0 : True
(1, 'J24') : 40.0 : volume[1,J24] : 75.0 : True
(2, 'F24') : 40.0 : volume[2,F24] : 75.0 : True
(2, 'J24') : 40.0 : volume[2,J24] : 75.0 : True
(3, 'F24') : 40.0 : volume[3,F24] : 75.0 : True
(3, 'J24') : 40.0 : volume[3,J24] : 75.0 : True
</code></pre>
<p>However, this seems a little clumsy, and I wondered if there is a more efficient method that would achieve the same ends? For example, by defining a rule to pass during the creation of <code>model.volume</code>?</p>
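<p>What I had in mind is something like Pyomo's per-index <code>bounds</code> rule (the rule itself is plain Python and shown standalone; the commented lines are my assumption of how it would be attached):</p>

```python
def vol_bounds(model, week, prod):
    # (lower, upper) bounds chosen from the first character of the product code
    return (25, 60) if prod.startswith('Q') else (40, 75)

# assumed wiring, following Pyomo's bounds-rule convention:
# model.volume = Var(model.WEEKS, model.PRODS,
#                    within=NonNegativeIntegers, bounds=vol_bounds)
```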
| <python><pyomo> | 2023-08-14 14:42:13 | 1 | 609 | r0bt |
76,899,917 | 19,053,778 | Merge a list of dataframes without creating suffixes | <p>I have a list of dataframes I want to merge together with the following requirements:</p>
<ul>
<li>If the dataframes have the same columns (i.e. if it's a data refresh of some sort), then they should be "concatenated" (no creation of suffixes such as _x or _y)</li>
<li>The merging should be done on a list of indices so that no duplicate rows will be found</li>
</ul>
<p>Example:</p>
<pre><code>df1 = pd.DataFrame({'WeekCom': ['2020-01-01', '2020-01-02', '2020-01-02', '2020-01-03'], 'Y': [2020, 2020, 2020, 2020], 'QT': ['Q1', 'Q1', 'Q1', 'Q1'], 'M': ['Jan', 'Jan', 'Jan', 'Jan'], 'W': ['W1', 'W1', 'W1', 'W2'], 'Col_X': [0, 1, 1, 2], 'Col_Y': [3, 0, 0, 1]})
df2 = pd.DataFrame({'WeekCom': ['2020-01-02', '2020-01-04'], 'Y': [2020, 2020], 'QT': ['Q1', 'Q1'], 'M': ['Jan', 'Jan'], 'W': ['W1', 'W2'], 'Col_X': [1, 3], 'Col_Z': [3, 3]})
from functools import reduce

dataframe_list = [df1, df2]
list_of_indices = ['WeekCom', 'Y', 'QT', 'M', 'W']  # the shared key columns
current_file_type_merged_dataframe = reduce(lambda merged, df: pd.merge(merged, df, on=list_of_indices, how='outer'), dataframe_list)
</code></pre>
<p>This kinda works on the list of indices ('WeekCom','Y','H','QT',...) since I get a unique set of indices on the dataframe, but the columns that have the same name in both dataframes are "separated" by the suffixes "_x" and "_y". I've already checked the SO thread <a href="https://stackoverflow.com/questions/53645882/pandas-merging-101">Pandas Merging 101</a>, but with no success.</p>
<p>Thanks!</p>
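<p>For reference, one shape of solution I have been considering (a sketch on toy data): index each frame on the key columns, concatenate, and collapse duplicate keys with <code>groupby(...).first()</code>, which keeps the first non-null value per column and avoids merge suffixes entirely:</p>

```python
import pandas as pd

df1 = pd.DataFrame({'key': ['a', 'b'], 'X': [0, 1], 'Y': [3, 0]})
df2 = pd.DataFrame({'key': ['b', 'c'], 'X': [1, 3], 'Z': [3, 3]})
list_of_indices = ['key']

# concat aligns columns by name; groupby(...).first() keeps the first
# non-null value for each key, so shared columns are never suffixed
combined = (
    pd.concat([df.set_index(list_of_indices) for df in [df1, df2]])
      .groupby(level=list_of_indices)
      .first()
      .reset_index()
)
```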
| <python><pandas> | 2023-08-14 14:39:28 | 1 | 496 | Chronicles |
76,899,779 | 7,403,431 | Minimum package requirements to run a jupyter notebook in vscode | <p>Recently I keep running into problems with my python notebooks in vscode where vscode doesn't see the installed <code>ipykernel</code>. There are several posts on this issue with suggestions to update certain packages (<a href="https://stackoverflow.com/questions/60100344/vscode-not-picking-up-ipykernel">VSCode not picking up ipykernel</a>, <a href="https://stackoverflow.com/questions/64997553/python-requires-ipykernel-to-be-installed">Python requires ipykernel to be installed</a>, <a href="https://stackoverflow.com/questions/62806475/vscode-not-detecting-ipykernel-verified-it-is-actually-installed">vscode not detecting ipykernel, verified it is actually installed</a>, <a href="https://stackoverflow.com/questions/72628465/install-ipykernel-in-vscode-ipynb-jupyter">Install ipykernel in vscode - ipynb (Jupyter)</a>, ...)</p>
<p>This makes me wonder what the actual minimum requirements are. Different "official" channels mention different dependencies:</p>
<ul>
<li><a href="https://code.visualstudio.com/docs/datascience/jupyter-notebooks#_setting-up-your-environment" rel="nofollow noreferrer">https://code.visualstudio.com/docs/datascience/jupyter-notebooks#_setting-up-your-environment</a> mentions <code>jupyter</code></li>
<li><a href="https://github.com/microsoft/vscode-jupyter/wiki/Jupyter-Kernels-and-the-Jupyter-Extension#python-environments" rel="nofollow noreferrer">https://github.com/microsoft/vscode-jupyter/wiki/Jupyter-Kernels-and-the-Jupyter-Extension#python-environments</a> mentions <code>ipython</code> and <code>ipykernel</code></li>
<li><a href="https://github.com/microsoft/vscode-jupyter/wiki/Installing-Jupyter" rel="nofollow noreferrer">https://github.com/microsoft/vscode-jupyter/wiki/Installing-Jupyter</a> mentions <code>jupyter</code> may be required.</li>
</ul>
<p>Previously I only had</p>
<pre class="lang-bash prettyprint-override"><code>ipykernel
notebook
</code></pre>
<p>installed in the conda environment which worked just fine.</p>
<p>So what are the actual requirements to run jupyter notebooks in vscode? What are the needed packages with versions?</p>
| <python><visual-studio-code><jupyter> | 2023-08-14 14:22:42 | 3 | 1,962 | Stefan |
76,899,717 | 3,042,018 | Assertion fails for collections.deque | <p>I can't see why I get an assertion error for the following code. It looks to me like the output from the print statement should be equivalent to the return value of <code>queue_challenge()</code> but not so. Can someone please explain why, and what the correct assertion should be?</p>
<pre><code>from collections import deque


class Queue:
    def __init__(self):
        self.items = deque()

    def is_empty(self):
        return not self.items
        # return len(self.items) == 0

    def enqueue(self, item):
        self.items.append(item)

    def dequeue(self):
        return self.items.popleft()

    def size(self):
        return len(self.items)

    def peek(self):
        return self.items[0]

    def __str__(self):
        return str(self.items)


def queue_challenge():
    q = Queue()
    q.enqueue("Learning")
    q.enqueue("is")
    q.dequeue()
    q.enqueue("great")
    q.enqueue("fun")
    q.dequeue()
    return q


print(queue_challenge())  # deque(['great', 'fun'])
assert queue_challenge() == deque(['great', 'fun'])
</code></pre>
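<p>For context on why the comparison fails: <code>queue_challenge()</code> returns a <code>Queue</code> instance, and without an <code>__eq__</code> method a <code>Queue</code> is never equal to a <code>deque</code> — <code>__str__</code> only affects printing. A sketch of two ways an assertion of this kind can pass (comparing the inner deque, or defining equality):</p>

```python
from collections import deque

class Queue:
    def __init__(self):
        self.items = deque()

    def enqueue(self, item):
        self.items.append(item)

    def dequeue(self):
        return self.items.popleft()

    def __eq__(self, other):
        # allow comparison against another Queue or any iterable of items
        if isinstance(other, Queue):
            return self.items == other.items
        return self.items == deque(other)

q = Queue()
q.enqueue("great")
q.enqueue("fun")
```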
| <python><assertion><deque> | 2023-08-14 14:15:00 | 2 | 3,842 | Robin Andrews |
76,899,630 | 10,133,797 | Sums of random number reorderings combine to recurring values | <pre class="lang-matlab prettyprint-override"><code>g0 = randn(1, 100);
g1 = g0;
g1(2:end) = flip(g1(2:end));
sprintf("%.15e", sum(g0) - sum(g1))
</code></pre>
<pre class="lang-py prettyprint-override"><code>g0 = np.random.randn(100)
g1 = g0.copy()
g1[1:] = g1[1:][::-1]
print(sum(g0) - sum(g1))
</code></pre>
<p>In both Python and MATLAB, rerunning these commands enough times will repeat the following values (or their negatives; incomplete list):</p>
<pre><code>8.881784197001252e-15
3.552713678800501e-15
2.6645352591003757e-15
4.440892098500626e-16
1.7763568394002505e-15
</code></pre>
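<p>(One observation: every one of these recurring values is a small, even integer multiple of <code>2**-52</code> — double-precision machine epsilon — which is consistent with them being accumulated rounding differences between the two summation orders rather than anything RNG-related:)</p>

```python
eps = 2.0 ** -52  # spacing of IEEE-754 doubles in [1, 2)

recurring = [8.881784197001252e-15, 3.552713678800501e-15,
             2.6645352591003757e-15, 4.440892098500626e-16,
             1.7763568394002505e-15]

# each value divides exactly by eps into a small even integer
multiples = [v / eps for v in recurring]
```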
<p>In fact, the first and second time I ran them - <code>mat -> py -> mat -> py</code> - they were <em>exactly the same</em>, making me think they share the RNG at the system level with a delay... (but set this aside for the purposes of the question).</p>
<p>I'll sooner fall through the floor than this coinciding, plus across different languages.</p>
<p>What's happening?</p>
<hr>
<p>Windows <code>11</code>, Python <code>3.11.4</code>, numpy <code>1.24.4</code>, MATLAB <code>9.14.0.2286388 (R2023a) Update 3</code>,</p>
| <python><numpy><matlab><random><precision> | 2023-08-14 14:01:56 | 1 | 19,954 | OverLordGoldDragon |
76,899,273 | 17,136,258 | When was the last change of a certain state | <p>I have a problem. I want to know when the last change of a certain state was. (My real data has even more columns.) I want to know on which date the state last changed. E.g. the last change of state for id 2 was <code>2023-04-12 11:00:40</code>. How could I extract this date?</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
data = {'ID': [1, 1, 1, 2, 2, 2, 2, 2, 2],
'state': ['Recording', 'Recording', 'Testing', 'Testing', 'Testing', 'Recording', 'Recording', 'Recording', 'Testing'],
'change_date': ['2023-08-12T11:00:40', '2023-07-12T11:00:40', '2023-06-12T11:00:40', '2023-10-12T11:00:40', '2023-09-12T11:00:40', '2023-04-12T11:00:40', '2023-03-12T11:00:40', '2023-03-12T11:00:40', '2023-01-12T11:00:40']}
df = pd.DataFrame(data)
df['change_date'] = pd.to_datetime(df['change_date'])
print(df)
</code></pre>
<pre class="lang-py prettyprint-override"><code> ID state change_date
0 1 Recording 2023-08-12 11:00:40
1 1 Recording 2023-07-12 11:00:40
2 1 Testing 2023-06-12 11:00:40
3 2 Testing 2023-10-12 11:00:40
4 2 Testing 2023-09-12 11:00:40
5 2 Recording 2023-04-12 11:00:40
6 2 Recording 2023-03-12 11:00:40
7 2 Recording 2023-03-12 11:00:40
8 2 Testing 2023-01-12 11:00:40
</code></pre>
<p>What I want</p>
<pre class="lang-py prettyprint-override"><code>ID last_change state
1 2023-07-12 Recording
2 2023-04-12 Testing
</code></pre>
<p>What I tried</p>
<pre class="lang-py prettyprint-override"><code>result = df.groupby('ID')['change_date'].max().reset_index()
print(result)
[OUT]
ID change_date
0 1 2023-08-12 11:00:40
1 2 2023-10-12 11:00:40
</code></pre>
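<p>For completeness, here is a sketch of one interpretation I tried (sort by date, flag rows whose state differs from the chronologically previous row of the same <code>ID</code>, take the last flagged row per <code>ID</code>); I am not certain it captures the intended meaning of "last change" for id 2:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'ID': [1, 1, 1, 2, 2, 2],
    'state': ['Recording', 'Recording', 'Testing',
              'Testing', 'Recording', 'Testing'],
    'change_date': pd.to_datetime([
        '2023-08-12', '2023-07-12', '2023-06-12',
        '2023-10-12', '2023-04-12', '2023-01-12']),
})

ordered = df.sort_values(['ID', 'change_date'])
# True where the state differs from the previous row of the same ID
changed = ordered['state'] != ordered.groupby('ID')['state'].shift()
last_change = ordered[changed].groupby('ID').tail(1)[['ID', 'change_date', 'state']]
```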
| <python><pandas><dataframe> | 2023-08-14 13:16:57 | 3 | 560 | Test |
76,899,175 | 7,484,093 | Removing outliers and calculating a trimmed mean in Python for multiple columns with different numbers of actual values | <p>I have a dataset. Let's say, 10010 rows and 100 columns; column values might include NaN, and the number of NaNs can differ for each column.</p>
<p>I want</p>
<ul>
<li>to pick n columns from this dataset (let's say 20, in no particular order, e.g. Column1, Column2, etc.).</li>
<li>trim outliers (2.5% of the highest and 2.5% of the lowest values for each of the selected columns), excluding NaN values (so if 10 values among 10010 are NaN in Column1, I need to trim the 250 actual highest values from the top and the 250 actual lowest values from the bottom of the 10000 values)</li>
<li>But if Column2 has 110 NaNs initially, I want to trim 2.5% from each side of the actual number of values (in this case 9900, not 10000 as in Column1)</li>
<li>Calculate the trimmed mean for each of the selected columns</li>
<li>Have a new dataset after trimming where all trimmed outliers were converted to NaN</li>
</ul>
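<p>To make the trimming requirement concrete, a sketch of what I mean for a single column (count-based trimming that ignores NaNs, with the 2.5% counts rounded to whole rows):</p>

```python
import numpy as np
import pandas as pd

def trim_column(s, frac=0.025):
    """Set the top/bottom `frac` share of non-NaN values to NaN; keep NaNs as-is."""
    valid = s.dropna()
    k = int(round(frac * len(valid)))  # rows to drop on EACH side
    if k == 0:
        return s.copy()
    keep_idx = valid.sort_values().iloc[k:len(valid) - k].index
    out = s.copy()
    out[s.notna() & ~s.index.isin(keep_idx)] = np.nan
    return out

# toy column: 40 values, so 2.5% of each tail is exactly 1 value
s = pd.Series(range(40), dtype=float)
trimmed = trim_column(s)
trimmed_mean = trimmed.mean()  # mean over the remaining 38 values
```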
| <python><pandas><trim><outliers> | 2023-08-14 13:02:41 | 1 | 2,520 | Anakin Skywalker |
76,899,099 | 10,232,932 | Groupby / Summarize dataframe for every column except one column in pandas | <p>I have the following dataframe (columnA - columnM):</p>
<pre><code>columnA columnB .... columnK columnL columnM
A AA .... 10 AAA AAAA
A AA .... 20 AAA AAAA
B BB .... 5 BBB BBBB
A BB .... 40 AAA AAAA
...
</code></pre>
<p>How can I group the dataframe so that it calculates the <code>sum</code> of columnK for every group defined by all other columns, generating the output:</p>
<pre><code>columnA columnB .... columnK columnL columnM
A AA .... 30 AAA AAAA
B BB .... 5 BBB BBBB
A BB .... 40 AAA AAAA
...
</code></pre>
<p>For sure I can run the manual command:</p>
<pre><code>df.groupby(['columnA', 'columnB',...,'columnL', 'columnM'])['columnK'].sum().reset_index()
</code></pre>
<p>so I exclude <code>columnK</code> in the first parentheses, but is there a more pythonic way to do it? I want to keep all the other columns in the <code>groupby</code> function and apply the sum over <code>columnK</code>.</p>
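<p>One option is to build the grouping list from the column set itself, e.g. with <code>columns.difference</code> (note it sorts the names alphabetically, which is harmless for grouping):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "columnA": ["A", "A", "B", "A"],
    "columnB": ["AA", "AA", "BB", "BB"],
    "columnK": [10, 20, 5, 40],
    "columnL": ["AAA", "AAA", "BBB", "AAA"],
})

# Every column except columnK becomes a grouping key.
group_cols = df.columns.difference(["columnK"]).tolist()
out = df.groupby(group_cols, as_index=False, sort=False)["columnK"].sum()
```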
| <python><pandas><dataframe> | 2023-08-14 12:53:54 | 1 | 6,338 | PV8 |
76,899,048 | 453,673 | How to obtain specific principal components from PCA using sklearn or matplotlib, for EigenFaces? | <p><strong>Background:</strong><br />
I'm doing research using EigenFaces with Python. I need to extract any principal component of multiple images, and use those selected principal components to do feature reduction and face identification with a training dataset of images.</p>
<p><strong>Problem:</strong><br />
In <code>sklearn</code>, the <a href="https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html" rel="nofollow noreferrer">PCA function</a> allows specifying only <code>n_components</code>, which would take the first <code>n</code> number of principal components. But I need to be able to select any principal component individually, because I need to try using a random combination of multiple principal components, to do the feature reduction and Eigen faces computation. That's part of the research requirement.</p>
<p>I noticed some bespoke implementations <a href="https://stackoverflow.com/questions/18299523/basic-example-for-pca-with-matplotlib">here</a> and <a href="https://medium.com/@nahmed3536/a-python-implementation-of-pca-with-numpy-1bbd3b21de2e" rel="nofollow noreferrer">here</a>, but I'd prefer a more standard library, to avoid errors in the results. Also noticed <a href="https://erdogant.github.io/pca/pages/html/Outlier%20detection.html#detect-new-unseen-outliers" rel="nofollow noreferrer">another PCA library</a> which does not seem to offer low-level functions to obtain more details that I need.</p>
<p>Is there any reliable way to get the individual principal components using Python?</p>
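<p>If sklearn's interface feels too restrictive, the same components can be obtained directly with NumPy's SVD, after which any subset can be indexed; sklearn's <code>PCA.components_</code> (fit with <code>n_components=None</code>) can be indexed the same way. A toy-data sketch:</p>

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(50, 6))            # stand-in for flattened face images
X_centered = X - X.mean(axis=0)

# Rows of Vt are ALL the principal axes at once, ordered by explained
# variance -- the same vectors sklearn stores in pca.components_.
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

chosen = [0, 2, 5]                      # any arbitrary combination of PCs
components = Vt[chosen]
X_reduced = X_centered @ components.T   # project onto just those components
```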
| <python><scikit-learn><pca> | 2023-08-14 12:47:09 | 1 | 20,826 | Nav |
76,898,849 | 4,465,454 | What is the equivalent to Flutter's pubspec.yaml in python | <p><strong>SITUATION</strong>: I started my coding journey in Flutter, where the pubspec.yaml file at the root of my project directory allows me to easily manage the external libraries / dependencies that my project uses.</p>
<p><strong>COMPLICATION</strong>: I've recently started using python as well and am having a lot more trouble managing external packages. From what I can see, people seem to mainly do either of the following within a venv (both of which I consider way less easy & intuitive):</p>
<ol>
<li>Install packages manually with pip</li>
<li>Use a requirements.txt file and then group-install packages manually with that</li>
</ol>
<p><strong>QUESTION</strong>: Is there an easier way to manage my external packages in python that is similar to Flutter's pubspec.yaml?</p>
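<p>The closest analogue today is a <code>pyproject.toml</code> at the project root, understood by pip and by tools like Poetry or PDM; the names and versions below are only illustrative:</p>

```toml
[project]
name = "my-app"            # illustrative values, not a real project
version = "0.1.0"
dependencies = [
    "requests>=2.31",
    "pandas>=2.0",
]
```

<p>With this file in place, tools like Poetry or PDM add/remove dependencies declaratively, much like <code>pubspec.yaml</code> does for Flutter.</p>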
| <python><flutter> | 2023-08-14 12:13:33 | 1 | 1,642 | Martin Reindl |
76,898,818 | 2,393,472 | Creating a сustom side panel using the LibreOffice SDK | <p>Good afternoon.</p>
<p>I had the following question:</p>
<p>How to use the LibreOffice SDK and Python to create a custom side panel in LibreOffice Calc?</p>
| <python><sdk><libreoffice-calc> | 2023-08-14 12:07:01 | 0 | 333 | Anton |
76,898,602 | 13,242,312 | Mypy stub files and VS Code | <p>I have problems with stub files in VS Code. I use Pylance, but I don't get IntelliSense when the stub files are not in the same folder as their implementation, even if the subfolder corresponds to the "python.analysis.stubPath" setting.
I don't like putting all the stub files in the same folder as their implementation because I find it kind of ugly. Reading the Pylance docs, it should work if I put them in a subfolder named typings (the default stubPath), but it doesn't. Here is a minimal example with the current file structure:</p>
<pre><code>test/
├── foo/
│   ├── typings/
│   │   └── foo.pyi
│   └── foo.py
└── test.py
</code></pre>
<p><strong>foo/foo.py</strong></p>
<pre class="lang-py prettyprint-override"><code>Foo = type("Foo", (), {"x": 1, "y": 2})
</code></pre>
<p><strong>foo/typings/foo.pyi</strong></p>
<pre class="lang-py prettyprint-override"><code>class Foo:
x: int
y: int
</code></pre>
<p><strong>test.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from foo.foo import Foo # Here IntelliSense gives me def Foo() -> object, but it should give me class Foo
</code></pre>
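<p>A possible explanation (an assumption based on how Pylance resolves <code>stubPath</code>, as I understand it): the stub path is taken relative to the workspace root, and the folders inside it must mirror the package layout. The stub would then live at <code>typings/foo/foo.pyi</code> rather than <code>foo/typings/foo.pyi</code>:</p>

```
test/
├── typings/          <- "python.analysis.stubPath": "typings" (workspace-relative)
│   └── foo/
│       └── foo.pyi   <- mirrors the import path foo.foo
├── foo/
│   └── foo.py
└── test.py
```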
| <python><visual-studio-code><mypy> | 2023-08-14 11:36:03 | 1 | 1,463 | Fayeure |
76,898,404 | 1,668,622 | In Python async is it possible to provide an async generator with the context of its caller? | <p>I have a 'chain' of async tasks, i.e. tasks which <code>gather</code>/<code>await</code> other tasks, to an arbitrary depth.
Now I want an 'upper' task (i.e. one that <code>await</code>s 'lower' tasks, which might then <code>await</code> yet lower tasks) to provide one of the 'lower' tasks with some contextual information, e.g. some logging object which might not yet exist when the lower task gets created.</p>
<pre class="lang-py prettyprint-override"><code>async def compute(args):
    #
    # Here we don't know (yet) when or by whom this function gets awaited
    #
    logger = global_loggers[find_context(asyncio.current_task())]
    .. do stuff ..
    logger.write("something that goes somewhere")
    yield stuff
    logger.write("something that goes somewhere")
    yield stuff
    logger.write("something that goes somewhere")
    ..

async def some_weired_scheduler():
    #
    # this function doesn't make sense, it's just here to make the `await` queue deeper
    ..
    tasks_for_later.append(asyncio.gather(compute(<args1>), compute(<args2>)))
    ..

async def logging_executor():
    #
    # here we invoke the compute-generators and we want to 'inject' some runtime
    # environment, in this case it's just a logger
    #
    logger = MyLogger()
    global_loggers[find_context(asyncio.current_task())] = logger
    for t in tasks_for_later:
        await t
</code></pre>
<p>Is there some built-in way to accomplish something like this? Is there something like a 'task stack' where I can see which task <code>await</code>s what another <code>yield</code>s or <code>return</code>s?
Doesn't <code>Quart</code> do something like this with its global <code>request</code> object?</p>
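<p>The standard-library answer to exactly this is <code>contextvars</code> (it is also the mechanism behind framework-level globals like Quart's <code>request</code>): a <code>ContextVar</code> set in an upper task is inherited by every task created beneath it, because tasks copy the current context when created. A simplified sketch:</p>

```python
import asyncio
import contextvars

# A ContextVar set by an upper task is visible in tasks created beneath it.
current_logger = contextvars.ContextVar("current_logger")

class ListLogger:
    def __init__(self):
        self.lines = []
    def write(self, msg):
        self.lines.append(msg)

async def compute(arg):
    logger = current_logger.get()        # the executor's logger, no globals
    logger.write(f"something about {arg}")
    return arg * 2

async def logging_executor():
    logger = ListLogger()
    current_logger.set(logger)           # 'injected' runtime environment
    results = await asyncio.gather(compute(1), compute(2))
    return logger.lines, results

lines, results = asyncio.run(logging_executor())
```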
| <python><asynchronous><async-await><python-asyncio> | 2023-08-14 11:05:59 | 1 | 9,958 | frans |
76,898,179 | 2,195,440 | How to programmatically list Python packages on PyPI with certain criteria for example that also have code on GitHub? | <p>I'm interested in analyzing Python packages that are both published on PyPI and have their source code available on GitHub. Is there a programmatic way to get a list of such packages?</p>
<p>I need to search and filter based on the following:</p>
<ol>
<li>Python packages that are published after a certain date</li>
<li>has source code available on GitHub</li>
<li>that could be downloaded with pip</li>
</ol>
<p>I have already attempted to find an API or python package and to the best of my knowledge there seems to be none.</p>
<p>I am wondering if there is a way to achieve this programmatically.</p>
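<p>There is no single official search API, but a workable sketch combines the full package list from the simple index (<code>https://pypi.org/simple/</code>) with each project's JSON metadata (<code>https://pypi.org/pypi/&lt;name&gt;/json</code>), whose <code>project_urls</code> and upload times cover criteria 1-2; anything on PyPI satisfies 3. The parsing step, shown against a mocked response so it runs offline:</p>

```python
# The fields below mimic the relevant parts of a real response from
# https://pypi.org/pypi/<name>/json (project_urls and upload times).
sample = {
    "info": {
        "name": "example-pkg",
        "project_urls": {"Homepage": "https://github.com/example/example-pkg"},
    },
    "urls": [{"upload_time": "2023-05-01T12:00:00"}],
}

def github_url(pypi_json):
    """Return the first project URL pointing at GitHub, else None."""
    urls = (pypi_json.get("info") or {}).get("project_urls") or {}
    for url in urls.values():
        if url and "github.com" in url:
            return url
    return None
```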
| <python><pypi> | 2023-08-14 10:32:42 | 0 | 3,657 | Exploring |
76,898,133 | 13,123,667 | Skimage random_noise : can't use "var" parameter | <p><a href="https://scikit-image.org/docs/stable/api/skimage.util.html#skimage.util.random_noise" rel="nofollow noreferrer">Skimage documentation</a> includes a "var" parameter, but when I try to use it <code>noise_image = random_noise(image, mode=chosen_noise_mode, var=0.1)</code> I got the following error:</p>
<pre><code>var keyword not in allowed keywords []
</code></pre>
<p>The error seems to make sense but it's on the documentation so i'm a bit confused. Can anyone help ?</p>
<p>reproducible example:</p>
<pre><code>import random
import cv2
from skimage.util import random_noise, img_as_float
image = cv2.imread("my_image.jpg", flags=cv2.IMREAD_UNCHANGED)
# as skimage and cv2 have different encoding formats:
image = img_as_float(image)
noise_types=["pepper","poisson","speckle"]
chosen_noise_mode=noise_types[random.randint(0,2)]
noise_image = random_noise(image, mode=chosen_noise_mode, var=0.1, mean = 0)
</code></pre>
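<p>The message appears because <code>var</code> is only accepted by some modes (per the docs as I read them, <code>gaussian</code> and <code>speckle</code> take <code>mean</code>/<code>var</code>; the salt/pepper family takes <code>amount</code>; <code>poisson</code> takes none), and the mode here is chosen randomly. One sketch is to filter the keyword arguments per mode before the call:</p>

```python
# Allowed keywords per mode -- treat this table as an assumption and
# double-check it against your installed skimage version.
MODE_KWARGS = {
    "gaussian": {"mean", "var"},
    "speckle": {"mean", "var"},
    "salt": {"amount"},
    "pepper": {"amount"},
    "s&p": {"amount", "salt_vs_pepper"},
    "poisson": set(),
}

def noise_kwargs(mode, **requested):
    """Keep only the keyword arguments valid for the chosen mode."""
    allowed = MODE_KWARGS.get(mode, set())
    return {k: v for k, v in requested.items() if k in allowed}

# random_noise(image, mode=mode, **noise_kwargs(mode, var=0.1, mean=0))
```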
| <python><scikit-image> | 2023-08-14 10:26:23 | 2 | 896 | Timothee W |
76,898,070 | 9,560,245 | How may one generate IPFS PeerID in Python without involving a working IPFS node? | <p>I need to understand all the steps of generating the PeerID used to identify an IPFS node, and then to generate a third-party PeerID using a Python script. Does someone have a working example or a link to a good explanation? I am interested in generating the PeerID without involving a running IPFS node.</p>
| <python><ipfs><libp2p> | 2023-08-14 10:17:38 | 1 | 596 | Andrei Vukolov |
76,898,041 | 7,201,333 | Calculate remaining percentage in pandas | <p>I have a dataframe that looks like:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'source': ['A', 'A', 'A', 'A', 'Z', 'Z'],
                   'target': ['B', 'C', 'D', 'E', 'W', 'X'],
                   'revenue_pct': [0.1, 0.15, 0.12, 0.05, 0.4, 0.2],
                   'year': [2010, 2010, 2010, 2011, 2020, 2021]})
</code></pre>
<p>I would like to add rows to this dataframe showing the remaining value to reach 100% in the <code>revenue_pct</code> column. This remaining value should be calculated according to both the <code>source</code> and the <code>year</code>. In short, the outcome should look like:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'source': ['A', 'A', 'A', 'A', 'A', 'A', 'Z', 'Z', 'Z', 'Z'],
                   'target': ['B', 'C', 'D', 'unknown', 'E', 'unknown', 'W', 'unknown', 'X', 'unknown'],
                   'revenue_pct': [0.1, 0.15, 0.12, 0.63, 0.05, 0.95, 0.4, 0.6, 0.2, 0.8],
                   'year': [2010, 2010, 2010, 2010, 2011, 2011, 2020, 2020, 2021, 2021]})
</code></pre>
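<p>A sketch: aggregate per <code>(source, year)</code>, turn the sums into 'unknown' remainder rows, and concatenate them back:</p>

```python
import pandas as pd

df = pd.DataFrame({'source': ['A', 'A', 'A', 'A', 'Z', 'Z'],
                   'target': ['B', 'C', 'D', 'E', 'W', 'X'],
                   'revenue_pct': [0.1, 0.15, 0.12, 0.05, 0.4, 0.2],
                   'year': [2010, 2010, 2010, 2011, 2020, 2021]})

# One 'unknown' row per (source, year) carrying the remainder to 100%.
rest = (df.groupby(['source', 'year'], as_index=False)['revenue_pct'].sum()
          .assign(revenue_pct=lambda d: 1 - d['revenue_pct'], target='unknown'))
out = (pd.concat([df, rest], ignore_index=True)
         .sort_values(['source', 'year'], kind='stable', ignore_index=True))
```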
| <python><pandas> | 2023-08-14 10:12:39 | 1 | 313 | scrps93 |
76,898,029 | 2,700,041 | Ultralytics YOLOv8 `probs` attribute returning `None` for object detection | <p>I'm using the Ultralytics YOLOv8 implementation to perform object detection on an image. However, when I try to retrieve the classification probabilities using the <code>probs</code> attribute from the results object, it returns <code>None</code>. Here's my code:</p>
<pre><code>from ultralytics import YOLO
# Load a model
model = YOLO('yolov8n.pt') # pretrained YOLOv8n model
# Run batched inference on a list of images
results = model('00000.png') # return a list of Results objects
# Process results list
for result in results:
    boxes = result.boxes          # Boxes object for bbox outputs
    masks = result.masks          # Masks object for segmentation masks outputs
    keypoints = result.keypoints  # Keypoints object for pose outputs
    probs = result.probs          # Probs object for classification outputs
    print(probs)
</code></pre>
<p>When I run the above code, the output for <code>print(probs)</code> is <code>None</code>. The remaining output is</p>
<pre><code>image 1/1 00000.png: 640x640 1 person, 1 zebra, 7.8ms
Speed: 2.6ms preprocess, 7.8ms inference, 1.3ms postprocess per image at shape (1, 3, 640, 640)
</code></pre>
<p>Why is the probs attribute returning <code>None</code>, and how can I retrieve the classification probabilities for each detected object? Is there a specific design reason behind this behavior in the Ultralytics YOLOv8 implementation?</p>
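<p>As far as I can tell, <code>probs</code> is only populated by classification models (e.g. <code>yolov8n-cls.pt</code>); for a detection model the per-box confidences are in <code>result.boxes.conf</code> and the class ids in <code>result.boxes.cls</code>, mapped to labels via <code>model.names</code>. A sketch with a stand-in object, since running the real model needs the weights:</p>

```python
from types import SimpleNamespace

# Stand-in mimicking the fields of a detection result's Boxes object
# (real code: boxes = result.boxes, names = model.names).
boxes = SimpleNamespace(conf=[0.91, 0.78], cls=[0.0, 22.0])
names = {0: "person", 22: "zebra"}

detections = [(names[int(c)], float(p)) for c, p in zip(boxes.cls, boxes.conf)]
```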
| <python><deep-learning><object-detection><yolov8> | 2023-08-14 10:11:04 | 3 | 1,427 | hanugm |
76,898,018 | 14,291,703 | How to convert a nested JSON consisting of lists, ints, dicts, strs and None to pandas dataframe? | <p>I have the following JSON structure,</p>
<pre><code>{
'total_numbers': 2,
'data':[
{
'col3':'2',
'col4':[
{
'col5':'P',
'col6':'H'
}
],
'col7':'2023-06-19T09:29:28.786Z',
'col9':{
'col10':'TEST',
'col11':'test@email.com',
'col12':'True',
'col13':'999',
'col14':'9999'
},
'col15':'2023-07-10T04:46:43.003Z',
'col16':False,
'col17':[
{
'col18':'S',
'col19':'H'
}
],
'col20':True,
'col21':{
'col22':'sss',
'col23':'0.0.0.0',
'col24':'lll'
},
'col25':0,
'col26':{
'col27':{
'col28':'Other'
},
'col29':'Other',
'col30':'cccc'
},
'col31':{
'col32':[
{
'col33':'123456789',
'col34':'2023-07-14T02:52:20.166Z',
'col36':True,
'col38':{
'col40':[
{
'col41':'99999999999',
},
{
'col41':'34534543535',
}
]
},
'col55':'878787878'
},
{
'col47':'112233445566',
'col48':'2023-07-24T09:26:03.425Z',
'col50':True,
'col52':{
'col53':[
{
'col54':'99999999999',
}
]
},
'col55':'878787878'
}
]
}
},
{
'col3':'2',
'col4':[
{
'col5':'P',
'col6':'H'
}
],
'col7':'2023-06-19T09:29:28.786Z',
'col9':{
'col10':'TEST',
'col11':'test@email.com',
'col12':'True',
'col13':'999',
'col14':'9999'
},
'col15':'2023-07-10T04:46:43.003Z',
'col16':False,
'col17':[
{
'col18':'S',
'col19':'H'
}
],
'col20':True,
'col21':{
'col22':'sss',
'col23':'0.0.0.0',
'col24':'lll'
},
'col25':0,
'col26':{
'col27':{
'col28':'Other'
},
'col29':'Other',
'col30':'cccc'
},
'col31':{
'col32':[
{
'col33':'123456789',
'col34':'2023-07-14T02:52:20.166Z',
'col36':True,
'col38':{
'col40':[
{
'col41':'99999999999',
},
{
'col41':'34534543535',
}
]
},
'col55':'878787878'
},
{
'col47':'112233445566',
'col48':'2023-07-24T09:26:03.425Z',
'col50':True,
'col52':{
'col53':[
{
'col54':'99999999999',
}
]
},
'col55':'878787878'
}
]
}
}
]
}
</code></pre>
<p>Is there a dynamic function I could use to convert this structure to tabular form based on traversing through the JSON?</p>
<p><strong>I came across this function</strong> (marked as <a href="https://stackoverflow.com/a/72549493/14291703">solution</a>), which I think could be helpful for my case. In the example, I understand there will be 6 rows based on the number of levels in the JSON, though I don't understand the naming of some columns. The data is there, but the column names are not correct according to their level. Is there any update I could make to the function to achieve this?</p>
<p>For example, the name of the columns should be actually root_data_col4_col5 and root_data_col4_col6</p>
<p><a href="https://i.sstatic.net/aEGFo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aEGFo.png" alt="enter image description here" /></a></p>
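<p>For one level of nested lists, <code>pd.json_normalize</code> with <code>record_path</code>, <code>meta</code>, and <code>sep='_'</code> already produces the root-prefixed style of column names; deeper lists (like <code>col31.col32</code>) need a second pass or a recursive wrapper. A reduced sketch:</p>

```python
import pandas as pd

doc = {
    "total_numbers": 1,
    "data": [
        {"col3": "2",
         "col4": [{"col5": "P", "col6": "H"}],
         "col9": {"col10": "TEST"}},
    ],
}

flat = pd.json_normalize(
    doc["data"],
    record_path="col4",                 # explode the nested list into rows
    meta=["col3", ["col9", "col10"]],   # carry parent fields along
    record_prefix="col4_",
    sep="_",
)
```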
| <python><json><pandas><dataframe> | 2023-08-14 10:08:50 | 2 | 512 | royalewithcheese |
76,897,947 | 6,224,975 | use pytest.fixture to set class-variable in a class test | <p>Is there a way to use a fixture in a non-test function, e.g. for setting a variable in a test class during <code>setup_class</code>?</p>
<pre class="lang-py prettyprint-override"><code>#conftest.py
import pytest
import random
import string
@pytest.fixture(scope = "session")
def run_id():
    random_id = ''.join(random.choices(string.ascii_letters + string.digits, k=16))
    _run_id = "fail_fast_" + random_id
    return _run_id
</code></pre>
<p>and then a test-file</p>
<pre class="lang-py prettyprint-override"><code>#test.py
import pytest
import os
def get_run_id(run_id):
    return run_id

class TestTrainingAndValidationPipeline:
    @classmethod
    def setup_class(cls):
        cls._run_id = get_run_id()  # fails "TypeError: get_run_id() missing 1 required
                                    # positional argument: 'run_id'"
        print(f"Setting up {cls._run_id}")

    @classmethod
    def teardown_class(cls):
        print(f"Tearing down {cls._run_id}")

    def test_attribute_change(self):
        print("hello")
        self._run_id = "world"
        print(self._run_id)
</code></pre>
<p>in case we cannot set a class-attribute like this, how could I then retrieve the <code>run_id</code> using <code>get_run_id</code> (since that could solve the issue too)</p>
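<p>One pattern (a sketch; <code>make_run_id</code> and <code>attach_run_id</code> are names introduced here) is to keep the id logic in a plain helper, and attach it to the class from an autouse, class-scoped fixture via <code>request.cls</code>, which replaces <code>setup_class</code> entirely:</p>

```python
import random
import string

def make_run_id():
    """Plain helper, callable from anywhere (including non-test code)."""
    random_id = ''.join(random.choices(string.ascii_letters + string.digits, k=16))
    return "fail_fast_" + random_id

# In conftest.py, an autouse class-scoped fixture can assign onto the test
# class through `request.cls`:
#
# @pytest.fixture(scope="class", autouse=True)
# def attach_run_id(request):
#     request.cls._run_id = make_run_id()
#     yield
#     print(f"Tearing down {request.cls._run_id}")

rid = make_run_id()
```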
| <python><pytest> | 2023-08-14 09:56:27 | 1 | 5,544 | CutePoison |
76,897,820 | 2,878,290 | How to DataBricks read Delta tables based on incremental data | <p>We read data from Delta tables, join all the tables according to our requirements, and then call our internal APIs, passing each row of data. That is our goal. There is one more requirement: the first run should read the bulk data, and subsequent runs should read only the changed or updated data from the source Delta tables. Kindly help us with how to implement this scenario.</p>
<p>bulk load code, as below format.</p>
<pre><code>a = spark.table("tablename1")
b = spark.table("tablename2")
c = spark.table("tablename3")
final_df = spark.sql(" joining 3 dataframes as per our requirement ")
</code></pre>
<p>Calling APIs for each row in the above dataframe "final_df".</p>
<p>Now, we cannot enable the change data feed properties on the source tables. Is it possible to read incremental data in some timestamp-based manner, or with a custom implementation? Kindly share.</p>
<p>Thanks</p>
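<p>Without the change data feed, a common fallback is a watermark on a modification-timestamp column, assuming the source tables carry one (<code>update_ts</code> and the control table below are hypothetical names, a sketch rather than runnable code):</p>

```python
# Sketch -- assumes each source table has a modification-timestamp column
# (called `update_ts` here) and the last processed watermark is persisted
# somewhere, e.g. a small control table:
last_ts = spark.sql("SELECT max(processed_ts) FROM control.watermarks").first()[0]

a = spark.table("tablename1").filter(f"update_ts > '{last_ts}'")
b = spark.table("tablename2").filter(f"update_ts > '{last_ts}'")
c = spark.table("tablename3").filter(f"update_ts > '{last_ts}'")
# First run: no watermark yet, so read everything (bulk load); afterwards,
# store max(update_ts) back into the control table for the next run.
```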
| <python><pyspark><azure-databricks><databricks-sql> | 2023-08-14 09:37:43 | 1 | 382 | Developer Rajinikanth |
76,897,808 | 17,136,258 | Check if the last values were zero except it was a weekend | <p>I have a problem. I want to filter the rows in which the last three date columns (see the column headers, e.g. <code>2023-08-11</code>) all have the value 0.0.
The special feature is that weekend days (Saturday or Sunday) should not be taken into account.</p>
<pre class="lang-py prettyprint-override"><code>
data = {
    'ID': [1, 2, 3],
    'holder': ['Max', 'Max', 'Max'],
    'state': ['Test', 'Test', 'Test'],
    '2023-08-06': [0.0, 0.0, 0.0],
    '2023-08-07': [5.0, 0.0, 0.0],
    '2023-08-08': [0.0, 3.0, 6.0],
    '2023-08-09': [0.0, 5.0, 0.0],
    '2023-08-10': [0.0, 5.0, 0.0],
    '2023-08-11': [0.0, 0.0, 0.0],
    '2023-08-12': [0.0, 0.0, 0.0],
    '2023-08-13': [0.0, 0.0, 0.0]
}
df = pd.DataFrame(data)
print(df)
</code></pre>
<pre class="lang-py prettyprint-override"><code># (Counting starts from 2023-08-11, because 2023-08-12 and 2023-08-13 were a weekend)
ID holder state 2023-08-06 2023-08-07 2023-08-08 2023-08-09 2023-08-10 2023-08-11 2023-08-12 2023-08-13
0 1 Max Test 0.0 5.0 0.0 0.0 0.0 0.0 0.0 0.0
1 2 Max Test 0.0 0.0 3.0 5.0 5.0 0.0 0.0 0.0
2 3 Max Test 0.0 0.0 6.0 0.0 0.0 0.0 0.0 0.0
</code></pre>
<p>What I want
Only the IDs with the numbers 1 and 3 should be displayed.</p>
<pre class="lang-py prettyprint-override"><code> ID holder state 2023-08-06 2023-08-07 2023-08-08 2023-08-09 2023-08-10 2023-08-11 2023-08-12 2023-08-13
0 1 Max Test 0.0 5.0 0.0 0.0 0.0 0.0 0.0 0.0
1 3 Max Test 0.0 0.0 6.0 0.0 0.0 0.0 0.0 0.0
</code></pre>
<hr />
<p>My first solution was this, but the problem is that it is not checking if it is a weekend or not.</p>
<pre class="lang-py prettyprint-override"><code>filtered_rows = pivot_table[pivot_table.iloc[:, -3:].eq(0.0).all(axis=1)]
</code></pre>
<p>What I tried</p>
<pre class="lang-py prettyprint-override"><code>def is_weekend(date):
    print(date)
    return date.weekday() >= 5

filtered_rows = df[df.iloc[:, -3:].eq(0.0).all(axis=1) & ~df.iloc[:, -1].apply(is_weekend)]
print(filtered_rows)
[OUT] AttributeError: 'float' object has no attribute 'weekday'
</code></pre>
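<p>The <code>AttributeError</code> happens because <code>is_weekend</code> is applied to the float cell values; the weekday has to be derived from the column <em>names</em> instead. A sketch:</p>

```python
import pandas as pd

data = {
    'ID': [1, 2, 3],
    'holder': ['Max', 'Max', 'Max'],
    'state': ['Test', 'Test', 'Test'],
    '2023-08-06': [0.0, 0.0, 0.0],
    '2023-08-07': [5.0, 0.0, 0.0],
    '2023-08-08': [0.0, 3.0, 6.0],
    '2023-08-09': [0.0, 5.0, 0.0],
    '2023-08-10': [0.0, 5.0, 0.0],
    '2023-08-11': [0.0, 0.0, 0.0],
    '2023-08-12': [0.0, 0.0, 0.0],
    '2023-08-13': [0.0, 0.0, 0.0],
}
df = pd.DataFrame(data)

# Parse the column names as dates and drop Saturdays/Sundays.
date_cols = [c for c in df.columns if c.startswith('20')]
weekdays = [c for c in date_cols if pd.Timestamp(c).weekday() < 5]
last3 = weekdays[-3:]          # the last three non-weekend dates
filtered_rows = df[df[last3].eq(0.0).all(axis=1)]
```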
| <python><pandas><dataframe> | 2023-08-14 09:35:59 | 1 | 560 | Test |
76,897,751 | 5,722,359 | Correct syntax for python-sqlite3 script? How to select all rows that have a column with a specific value? | <p>I wrote the following python-sqlite3 code. It has an issue:</p>
<ol>
<li><p>I want to understand how to write a SQLite3 statement to extract all rows with "Cola". What is the correct syntax for the <code>.get_fav_drink_folks(drink)</code> method?</p>
<p>Error:</p>
<pre><code>line 25, in get_fav_drink_folks self.cur.execute(sql, drink)
sqlite3.ProgrammingError: Incorrect number of bindings supplied. The current statement uses 1, and there are 4 supplied.
</code></pre>
</li>
</ol>
<p>Script:</p>
<pre><code>import sqlite3
class DB:
    def __init__(self):
        self.con = sqlite3.connect("database.db")
        self.cur = self.con.cursor()
        self.create_table()

    def create_table(self):
        table = """CREATE TABLE IF NOT EXISTS
                   database (name TEXT PRIMARY KEY,
                             fav_food TEXT,
                             fav_drink TEXT
                   )"""
        self.cur.execute(table)
        self.con.commit()

    def insert_data_row(self, items):
        sql = """INSERT OR IGNORE INTO database VALUES(?,?,?)"""
        self.cur.execute(sql, items)
        self.con.commit()

    def get_fav_drink_folks(self, drink):
        sql = """SELECT * FROM database WHERE fav_drink in (?)"""
        self.cur.execute(sql, drink)
        return self.cur.fetchmany()

if __name__ == "__main__":
    db = DB()
    data = [
        ("Mike", "Steak", "Cola",),
        ("Dora", "McDonalds", "Sprite",),
        ("Sally", "Salad", "Pepsi",),
        ("Eve", "Pizza", "Cola",),
    ]
    for row_items in data:
        print(f"{row_items=}")
        db.insert_data_row(row_items)

    print(f"\ncoke_drinkers are {db.get_fav_drink_folks('Cola')}")
</code></pre>
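<p>The binding error occurs because a bare string is itself a sequence, so <code>'Cola'</code> supplies its four characters as four parameters (hence "4 supplied"); wrapping it in a one-element tuple fixes it. A self-contained sketch using an in-memory database:</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE database (name TEXT PRIMARY KEY, fav_food TEXT, fav_drink TEXT)")
cur.executemany("INSERT INTO database VALUES (?,?,?)",
                [("Mike", "Steak", "Cola"), ("Dora", "McDonalds", "Sprite"),
                 ("Sally", "Salad", "Pepsi"), ("Eve", "Pizza", "Cola")])

# Wrap the value in a one-element tuple so it binds as a single parameter.
cur.execute("SELECT * FROM database WHERE fav_drink = ?", ("Cola",))
rows = cur.fetchall()           # fetchall(); fetchmany() defaults to one row
```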
| <python><sqlite><sqlite3-python> | 2023-08-14 09:24:41 | 1 | 8,499 | Sun Bear |
76,897,614 | 7,713,770 | cannot import name 'load_dotenv' from 'dotenv' with django docker | <p>I am trying to dockerize Django.</p>
<p>So I have this Docker file:</p>
<pre><code># pull official base image
FROM python:3.9-alpine3.13
# set work directory
WORKDIR /usr/src/app
EXPOSE 8000
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add linux-headers postgresql-dev gcc python3-dev musl-dev
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
COPY ./requirements.dev.txt .
RUN pip install -r requirements.txt
# copy entrypoint.sh
COPY ./entrypoint.sh .
RUN sed -i 's/\r$//g' /usr/src/app/entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
# copy project
COPY . .
# run entrypoint.sh
CMD ["python3", "manage.py", "runserver"]
</code></pre>
<p>and Docker compose:</p>
<pre><code>version: "3.9"
services:
  app:
    build:
      context: .
      args:
        - DEV=true
    ports:
      - "8000:8000"
    volumes:
      - .:/app
    command: >
      sh -c "python ./manage.py migrate &&
             python ./manage.py runserver 0:8000"
    env_file:
      - ./.env

volumes:
  dev-static-data:
</code></pre>
<p>Part of settings.py:</p>
<pre><code>import os
from pathlib import Path
from os import environ
from dotenv import load_dotenv
load_dotenv()
</code></pre>
<p>This works fine when I run:</p>
<pre><code>python manage.py runserver 8080
</code></pre>
<p>The app runs.</p>
<p>requirements.txt:</p>
<pre><code>Django>=4.0.4
djangorestframework>=3.13.1
psycopg2>=2.9.3
drf-spectacular>=0.22.1
Pillow>=9.1.0
drf-yasg==1.20.0
django-cors-headers==3.10.1
django-dotenv
http://github.com/unbit/uwsgi/archive/uwsgi-2.0.zip
</code></pre>
<p>I can build the docker container. But I can't run it.</p>
<p>If I do a <code>docker-compose up</code>, I get this error:</p>
<pre><code>Traceback (most recent call last):
dwl_backend-app-1 | File "/usr/src/app/./manage.py", line 5, in <module>
dwl_backend-app-1 | from DierenWelzijn.settings import base
dwl_backend-app-1 | File "/usr/src/app/DierenWelzijn/settings/base.py", line 15, in <module>
dwl_backend-app-1 | from dotenv import load_dotenv
dwl_backend-app-1 | ImportError: cannot import name 'load_dotenv' from 'dotenv' (/usr/local/lib/python3.9/site-packages/dotenv.py)
</code></pre>
<p>if I do a pip freeze I see the package django-dotenv installed:</p>
<pre><code>Django==4.1.6
django-allauth==0.52.0
django-cors-headers==3.13.0
django-dotenv==1.4.2
</code></pre>
<p>Question: how to run the docker django container?</p>
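<p>The traceback path <code>site-packages/dotenv.py</code> (a single module, not a package) suggests the installed distribution is <code>django-dotenv</code>, which exposes <code>read_dotenv</code> rather than <code>load_dotenv</code>; <code>load_dotenv</code> comes from the separate <code>python-dotenv</code> project. A likely fix is swapping the requirement and rebuilding the image:</p>

```diff
# requirements.txt
-django-dotenv
+python-dotenv>=1.0
```

<p>It worked locally presumably because <code>python-dotenv</code> was already installed in that environment, while the container only gets what requirements.txt lists.</p>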
| <python><django><docker><docker-compose> | 2023-08-14 09:03:46 | 0 | 3,991 | mightycode Newton |
76,897,481 | 18,221,164 | Multiple files inside Azure function(python app) | <p>I have an azure function python app deployed which makes use of multiple python files from the codebase.
Currently the structure is as follows :</p>
<pre><code>├── HttpExample
│ ├── __init__.py
│ └── function.json
├── getting_started.md
├── host.json
├── local.settings.json
├── requirements.txt
└── shared_code
├── __init__.py
├── prep_backup.py
├── final_execution.py
├── final_helper.py
├── file_info.py
└── requirements.txt
</code></pre>
<p>In my HttpExample, inside the <code>__init__.py</code>, I call my function to do the work as follows:</p>
<pre><code>import logging
import azure.functions as func
from shared_code import final_execution
def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    final_execution.run_function() # fails here
    name = req.params.get('name')  # ('name' was undefined in the original)
    if name:
        return func.HttpResponse(f"Hello, {name}. This HTTP triggered function executed successfully.")
    else:
        return func.HttpResponse(
            "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.",
            status_code=200
        )
</code></pre>
<p>But the <code>final_execution</code> makes use of import statements pointing to</p>
<pre><code>from prep_backup import func1,func2
</code></pre>
<p>But the above block fails as :</p>
<pre><code>Result: Failure Exception: ModuleNotFoundError: No module named 'prep_backup'. Please check the requirements.txt file for the missing module.
</code></pre>
<p>All the necessary public packages used are listed in the requeirement.txt.</p>
<p>Any suggestions to work around this?</p>
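<p>Because the function host imports <code>shared_code</code> as a package, sibling modules should be imported package-qualified (<code>from shared_code.prep_backup import ...</code>) or relatively (<code>from .prep_backup import ...</code>), not as bare <code>prep_backup</code>. A tiny self-contained reproduction of the fix:</p>

```python
import sys
import tempfile
import textwrap
from pathlib import Path

# Sibling modules inside a package, imported the way the function app
# should do it (package-qualified, not bare `prep_backup`).
root = Path(tempfile.mkdtemp())
pkg = root / "shared_code"
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "prep_backup.py").write_text("def func1():\n    return 'backup'\n")
(pkg / "final_execution.py").write_text(textwrap.dedent("""
    from shared_code.prep_backup import func1   # package-qualified import

    def run_function():
        return func1()
"""))

sys.path.insert(0, str(root))
from shared_code import final_execution
```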
| <python><azure><azure-functions> | 2023-08-14 08:43:09 | 1 | 511 | RCB |
76,897,225 | 180,174 | Is there a way to make a decorator or metaclass for a lazily initialized attribute / property | <p>In our current codebase, we have a recurring pattern of:</p>
<pre class="lang-py prettyprint-override"><code>class Something:
    _client: SomeClientClass = None

    @property
    def client(self):
        if self._client is None:
            self._client = SomeClientClass(**with_some_parameters_from_self)
        return self._client
</code></pre>
<p>this is required, because</p>
<pre class="lang-py prettyprint-override"><code>class Something:
    client = SomeClientClass(**with_some_parameters_from_self)
</code></pre>
<p>leads to problems with pretty much every single widely used client library, as it instantiates the clients at import time.</p>
<p>Is there either an <em>easy and comprehensible</em> or <em>ANY</em> way to implement this in Python so that the code would look something like</p>
<pre class="lang-py prettyprint-override"><code>class Something:
    @lazy_property
    def client(self):
        return SomeClientClass(**with_some_parameters_from_self)
</code></pre>
<p>This is not the end of the world, and the original code is not exceedingly verbose, but I'm curious if something like this is possible</p>
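<p>The standard library has exactly this decorator since Python 3.8: <code>functools.cached_property</code> runs the method once on first attribute access and caches the result on the instance, so nothing is instantiated at import time:</p>

```python
from functools import cached_property

class SomeClientClass:
    instantiated = 0
    def __init__(self, **kwargs):
        SomeClientClass.instantiated += 1
        self.kwargs = kwargs

class Something:
    @cached_property
    def client(self):
        # Runs only on first access; the result is stored on the instance.
        return SomeClientClass(param="from_self")

s = Something()
before = SomeClientClass.instantiated   # still 0: nothing built at import time
c1 = s.client
c2 = s.client
```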
| <python> | 2023-08-14 07:58:45 | 0 | 39,908 | Kimvais |
76,897,057 | 173,003 | Catching a failure in IPython image display | <p>In my Jupyter Notebook, I need to delegate the rendering of a Graphviz source file to an external service, for instance <a href="https://quickchart.io/documentation/graphviz-api/" rel="nofollow noreferrer">QuickChart</a>:</p>
<pre class="lang-py prettyprint-override"><code>from IPython.display import Image
url = "https://quickchart.io/graphviz?graph=graph{a--b}"
Image(url=url)
</code></pre>
<p>However, if this fails (e.g., the server is down, or the rate limit is reached, etc.), I would like to be able to fall back to another service. The screenshot below shows the result of a working example and a failure case:</p>
<p><a href="https://i.sstatic.net/3mc5j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3mc5j.png" alt="enter image description here" /></a></p>
<p>How could I catch the error which produces this broken image icon? I would be interested in a general answer, independent from a particular service or image format. Reading the <a href="https://github.com/ipython/ipython/blob/main/IPython/core/display.py" rel="nofollow noreferrer">source code</a> gave me no clue, and the <code>__dict__</code> object only contains the parameters:</p>
<pre class="lang-py prettyprint-override"><code>>>> Image(url=url).__dict__
{'format': 'https://quikchat.io/graphviz?graph=graph{a--b}',
'embed': False,
'width': None,
'height': None,
'retina': False,
'unconfined': False,
'alt': None,
'url': 'https://quikchat.io/graphviz?graph=graph{a--b}',
'filename': None,
'data': None,
'metadata': {}}
</code></pre>
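<p>The broken-image icon is rendered by the browser when the <code>&lt;img&gt;</code> fetch fails, so no Python exception ever exists to catch; a general workaround is to probe the URL server-side first and only hand a known-good URL to <code>Image</code>. Sketch (the <code>fetch</code> hook is injectable, here only so the logic can be tested offline):</p>

```python
import urllib.request

def first_working_url(urls, fetch=None):
    """Return the first URL whose fetch succeeds; probe before displaying.

    The default fetch does a GET and raises on HTTP errors or
    unreachable hosts (urllib raises in both cases).
    """
    if fetch is None:
        fetch = lambda u: urllib.request.urlopen(u, timeout=10).read()
    for url in urls:
        try:
            fetch(url)
            return url
        except Exception:
            continue
    raise RuntimeError("all rendering services failed")

# In the notebook (sketch):
# Image(url=first_working_url([quickchart_url, fallback_service_url]))
```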
| <python><web-services><exception><jupyter-notebook><ipython> | 2023-08-14 07:30:39 | 1 | 4,114 | Aristide |
76,897,023 | 206,253 | Using a dictionary to update a row in a pandas dataframe? | <p>I am using a dict to update the values in a row of pandas dataframe like this:</p>
<pre><code>dict = {'id': 1, 'col1': 1, 'col2': 2} # supplied by another method
upd_dict = {key: [val] for key, val in dict.items()}
upd_row = pd.DataFrame.from_records(upd_dict, index='id')
df.update(upd_row)
</code></pre>
<p>This works well - if the updated values are not None. I would like to extend this so that the values in df can be updated to NA/NaN.</p>
<p>How can I elegantly update several values in a row to NA/NaN if these NA/NaN values are supplied in a dictionary (like in the code snippet)?</p>
<p>EDIT: I am aware that <code>update</code> cannot set cells to NA. The reason I am asking is that currently the only solution which comes to mind is to loop over all NA values in the dict and set the corresponding cells in the dataframe row to NA. But I thought there is a better way. Similar to what you can do in SQL: an UPDATE statement can update several columns. Essentially, my question is how can I update several cells in a dataframe row <strong>at once</strong>. <code>update</code> ticks the boxes - but only if the updated values are not NA.</p>
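<p><code>update</code> deliberately ignores NaN in the incoming frame, but a single <code>.loc</code> assignment on the row updates several cells at once, NaN included:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"col1": [10.0, 20.0], "col2": [30.0, 40.0]},
                  index=pd.Index([1, 2], name="id"))

upd = {"id": 1, "col1": 1.0, "col2": np.nan}    # NaN should overwrite

row_id = upd.pop("id")
df.loc[row_id, list(upd)] = list(upd.values())  # one assignment, NaN included
```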
| <python><pandas><dataframe><dictionary> | 2023-08-14 07:25:46 | 1 | 3,144 | Nick |
76,896,806 | 1,536,050 | How do I scrobble to Last.fm's API in Python without getting Invalid method signature supplied? | <p>I have exchanged a token for a session key that looks like this "UserName 94ag345Sd3452C2ffa3aT 0"</p>
<p>I have the following code:</p>
<pre><code>session_key = 'UserName 94ag345Sd3452C2ffa3aT 0'
method = "track.scrobble"
artist = "Muse"
track = "Time Is Running Out"
timestamp = str(time.time())
api_sig = hashlib.md5(
    f"api_key{API_KEY}artist{artist}method{method}sk{session_key}timestamp{timestamp}track{track}{API_SECRET}".encode('utf-8')).hexdigest()

header = {"user-agent": "mediascrobbler/0.1",
          "Content-type": "application/x-www-form-urlencoded"}

# API URL
url = "https://ws.audioscrobbler.com/2.0/"

# Parameters for the POST request
params = {
    "api_key": API_KEY,
    "artist[0]": artist,
    "method": method,
    "sk": session_key,
    "timestamp[0]": timestamp,
    "track[0]": track,
    "api_sig": api_sig,
}
params['api_sig'] = api_sig

# Sending the POST request
try:
    response = requests.post(url, params=params, headers=header)
except requests.exceptions.RequestException as e:
    print(e)
    sys.exit(1)
return response.text
</code></pre>
<p>Why does it keep returning Invalid Method Signature? I have utf encoded my params. All required params are there. I've exchanged a token for a session key.</p>
<p>Update: If I set the api sig to look like this:</p>
<pre><code>api_sig = hashlib.md5(
    f"api_key{API_KEY}artist[0]{artist}method{method}sk{session_key}timestamp[0]{timestamp}track[0]{track}{API_SECRET}".encode('utf-8')
).hexdigest()
</code></pre>
<p>but this is returning "Invalid session key - Please re-authenticate", even with a brand newly generated session key I know to be valid. If I'm not mistaken, those sessions don't expire.</p>
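<p>A hedged sketch of the documented signing rule: sort the parameters alphabetically by the exact names sent (so <code>artist[0]</code>, <code>timestamp[0]</code>, ...), concatenate name+value, append the secret, then md5 (<code>format</code>/<code>callback</code> are excluded from the signature). Two other things worth checking: the timestamp should be integer seconds rather than <code>str(time.time())</code>, and <code>sk</code> should be only the key token itself; the quoted "UserName ... 0" string looks like a whole session response (name, key, subscriber flag) rather than the key alone:</p>

```python
import hashlib

def api_sig(params, secret):
    """Sort by parameter name, concatenate name+value, append secret, md5."""
    ordered = "".join(f"{k}{params[k]}" for k in sorted(params))
    return hashlib.md5((ordered + secret).encode("utf-8")).hexdigest()

params = {
    "method": "track.scrobble",
    "api_key": "KEY",                 # placeholders, not real credentials
    "sk": "SESSIONKEY",
    "artist[0]": "Muse",
    "track[0]": "Time Is Running Out",
    "timestamp[0]": "1692000000",     # integer seconds, not str(time.time())
}
sig = api_sig(params, "SECRET")
```

<p>Then send everything (including <code>api_sig</code>) in the POST body via <code>data=params</code> rather than in the query string.</p>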
| <python><rest><last.fm> | 2023-08-14 06:43:21 | 4 | 1,550 | KinsDotNet |
76,896,686 | 13,849,446 | Questions regarding react in django or react as standalone | <p>I am creating a website using React and Django. To combine Django and React there are 2 ways:</p>
<blockquote>
<ol>
<li>React in its own “frontend” Django app: load a single HTML template
and let React manage the frontend (difficulty: medium)</li>
<li>Django REST as a standalone API + React as a standalone SPA (difficulty: hard, it
involves JWT for authentication)</li>
</ol>
</blockquote>
<p>Now my questions are</p>
<ol>
<li>What are security concerns related to both and which of them is more secure way to do it.</li>
<li>In Option-1 we can use both session- and token-based authentication, while in Option-2 we must use token-based authentication and an XSRF header is needed. Is session-based auth better, and in what ways does the choice matter?</li>
<li>If in future I plan to move to React Native for a mobile app, which would be more suitable? (I could not find any example using Option-1.)</li>
<li>Apparently, Instagram uses session-based auth, which suggests they went with Option-1. Are Option-1 sites SPA-based, and is Instagram an SPA?
An extra, unrelated question: which is better, JWT token auth or Django Knox auth? If JWT is better, please point me to a guide on integrating it with Django.</li>
</ol>
<p>If you don't have a complete answer but know any one of them, please share.</p>
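<p>For context on the JWT part: a minimal setup with <code>djangorestframework-simplejwt</code> (the library usually recommended for this) looks roughly like the following. Treat it as a sketch rather than a complete guide:</p>

```python
# settings.py (sketch; assumes djangorestframework-simplejwt is installed)
REST_FRAMEWORK = {
    "DEFAULT_AUTHENTICATION_CLASSES": (
        "rest_framework_simplejwt.authentication.JWTAuthentication",
    ),
}

# urls.py
from django.urls import path
from rest_framework_simplejwt.views import TokenObtainPairView, TokenRefreshView

urlpatterns = [
    path("api/token/", TokenObtainPairView.as_view()),          # login: returns access + refresh
    path("api/token/refresh/", TokenRefreshView.as_view()),     # exchange refresh for new access
]
```

<p>The same token endpoints work unchanged from a React Native client, which is one argument for Option-2 when mobile is on the roadmap.</p>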
| <python><reactjs><django><react-native><session-cookies> | 2023-08-14 06:21:26 | 0 | 1,146 | farhan jatt |
76,896,653 | 11,319,137 | OpenGL not running on WSL2 | <p>I am trying to get PyBullet to work on my WSL2 installation of Ubuntu 22.04 with an Nvidia GTX 1050Ti. When I run <code>p.connect(p.GUI)</code>, the execution fails with the following output:</p>
<pre class="lang-none prettyprint-override"><code>startThreads creating 1 threads.
starting thread 0
started thread 0
argc=2
argv[0] = --unused
argv[1] = --start_demo_name=Physics Server
ExampleBrowserThreadFunc started
X11 functions dynamically loaded using dlopen/dlsym OK!
X11 functions dynamically loaded using dlopen/dlsym OK!
libGL error: MESA-LOADER: failed to open swrast: /home/dkapur17/anaconda3/envs/drl/lib/libstdc++.so.6: version `GLIBCXX_3.4.30' not found (required by /lib/x86_64-linux-gnu/libLLVM-15.so.1) (search paths /usr/lib/x86_64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
libGL error: failed to load driver: swrast
Creating context
Failed to create GL 3.3 context ... using old-style GLX context
Failed to create an OpenGL context
</code></pre>
<p>Following is the output of <code>glxinfo -B</code>:</p>
<pre class="lang-none prettyprint-override"><code>name of display: :0
display: :0 screen: 0
direct rendering: Yes
Extended renderer info (GLX_MESA_query_renderer):
Vendor: Microsoft Corporation (0xffffffff)
Device: D3D12 (NVIDIA GeForce GTX 1050 Ti) (0xffffffff)
Version: 23.0.4
Accelerated: yes
Video memory: 12132MB
Unified memory: no
Preferred profile: core (0x1)
Max core profile version: 4.2
Max compat profile version: 4.2
Max GLES1 profile version: 1.1
Max GLES[23] profile version: 3.1
OpenGL vendor string: Microsoft Corporation
OpenGL renderer string: D3D12 (NVIDIA GeForce GTX 1050 Ti)
OpenGL core profile version string: 4.2 (Core Profile) Mesa 23.0.4-0ubuntu1~22.04.1
OpenGL core profile shading language version string: 4.20
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL version string: 4.2 (Compatibility Profile) Mesa 23.0.4-0ubuntu1~22.04.1
OpenGL shading language version string: 4.20
OpenGL context flags: (none)
OpenGL profile mask: compatibility profile
OpenGL ES profile version string: OpenGL ES 3.1 Mesa 23.0.4-0ubuntu1~22.04.1
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.10
</code></pre>
<p>Programs like <code>glxgears</code>, <code>gedit</code>, and <code>gimp</code> all work.</p>
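<p>The <code>GLIBCXX_3.4.30' not found</code> line points at the conda environment's bundled <code>libstdc++</code> being older than what Mesa's <code>libLLVM</code> was built against. A rough way to confirm and work around this from the shell (the conda path is taken from the error message; adjust it for your env):</p>

```shell
system_lib="/usr/lib/x86_64-linux-gnu/libstdc++.so.6"
conda_lib="$HOME/anaconda3/envs/drl/lib/libstdc++.so.6"  # path from the error message
# Check whether the conda copy lacks the symbol version Mesa needs:
if command -v strings >/dev/null && [ -f "$conda_lib" ]; then
    strings "$conda_lib" | grep GLIBCXX_3.4.30 || echo "conda libstdc++ lacks GLIBCXX_3.4.30"
fi
# Workaround: preload the newer system libstdc++ before starting Python
export LD_PRELOAD="$system_lib"
```

<p>Installing a newer <code>libstdcxx-ng</code> from conda-forge into the env is a less intrusive alternative to the preload.</p>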
| <python><opengl><windows-subsystem-for-linux><pybullet> | 2023-08-14 06:14:54 | 1 | 516 | dkapur17 |
76,896,576 | 17,267,064 | How do I locate the Google Maps URL present inside the iframe element via Selenium? | <p>I wish to retrieve the Google Maps URL present inside the iframe element of a webpage <a href="https://villageinfo.in/andaman-&-nicobar-islands/nicobars/car-nicobar/arong.html" rel="nofollow noreferrer">https://villageinfo.in/andaman-&-nicobar-islands/nicobars/car-nicobar/arong.html</a>.</p>
<p>Below is the sample url which I need from the iframe element.
<a href="https://maps.google.com/maps?ll=9.161616,92.751558&z=18&t=h&hl=en-GB&gl=US&mapclient=embed&q=Arong%20Andaman%20and%20Nicobar%20Islands%20744301" rel="nofollow noreferrer">https://maps.google.com/maps?ll=9.161616,92.751558&z=18&t=h&hl=en-GB&gl=US&mapclient=embed&q=Arong%20Andaman%20and%20Nicobar%20Islands%20744301</a></p>
<p>I am unable to retrieve the URL after performing the steps below. The page has only one iframe element.</p>
<pre><code>iframe = driver.find_element(By.TAG_NAME, 'iframe')
driver.switch_to.frame(iframe)
</code></pre>
<p>I even tried to retrieve all URLs inside the frame, but it only returned Google Ads URLs, not the one I require.</p>
<pre><code>urls = driver.find_elements(By.TAG_NAME, 'a')
for url in urls:
    print(url.get_attribute('href'))
</code></pre>
<p>What approach should I try here?</p>
<p>Thanks in advance.</p>
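<p>One approach worth trying is to read the iframe's <code>src</code> attribute directly (no frame switch needed) and then pull the coordinates out of its query string. The parsing half is sketched below, with a hypothetical <code>src</code> value standing in for <code>iframe.get_attribute('src')</code>:</p>

```python
from urllib.parse import urlparse, parse_qs

def embed_params(src):
    # Flatten the query string of a Google Maps embed URL into a plain dict.
    return {k: v[0] for k, v in parse_qs(urlparse(src).query).items()}

# Hypothetical value, standing in for iframe.get_attribute('src'):
src = "https://maps.google.com/maps?ll=9.161616,92.751558&z=18&t=h&q=Arong"
params = embed_params(src)
coords = params["ll"]
```

<p>If ad iframes get in the way, the frame's <code>src</code> can be matched against <code>maps.google.com</code> before parsing.</p>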
| <python><selenium-webdriver><selenium-chromedriver><seleniumwire> | 2023-08-14 05:58:31 | 1 | 346 | Mohit Aswani |
76,896,490 | 6,025,866 | Getting an error when trying to use ChromaDB | <p>I am new to LangChain and I was trying to implement a simple Q & A system based on an example tutorial online.</p>
<p>The code is as follows:</p>
<pre><code>from langchain.llms import LlamaCpp
from langchain.llms import gpt4all
from langchain.embeddings import LlamaCppEmbeddings
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
def write_text_file(content, file_path):
    try:
        with open(file_path, 'w') as file:
            file.write(content)
        return True
    except Exception as e:
        print(f"Error occurred while writing the file: {e}")
        return False
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
{context}
Question: {question}
Answer:"""
prompt = PromptTemplate(template=prompt_template, input_variables=["context", "question"])
llm = LlamaCpp(model_path="airoboros-l2-13b-gpt4-1.4.1.ggmlv3.q2_K.bin")
embeddings = LlamaCppEmbeddings(model_path="airoboros-l2-13b-gpt4-1.4.1.ggmlv3.q2_K.bin")
llm_chain = LLMChain(llm=llm, prompt=prompt)
file_path = "corpus_v1.txt"
loader = TextLoader(file_path)
docs = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=0)
texts = text_splitter.split_documents(docs)
db = Chroma.from_documents(texts, embeddings)
question = "What is ant–fungus mutualism?"
similar_doc = db.similarity_search(question, k=1)
context = similar_doc[0].page_content
query_llm = LLMChain(llm=llm, prompt=prompt)
response = query_llm.run({"context": context, "question": question})
print(response)
</code></pre>
<p>The data can be found <a href="https://github.com/adhok/data_sources_new/blob/main/corpus_v1.txt" rel="nofollow noreferrer">here</a>. The model used here can be found in this <a href="https://huggingface.co/TheBloke/airoboros-l2-13B-gpt4-1.4.1-GGML/blob/main/airoboros-l2-13b-gpt4-1.4.1.ggmlv3.q2_K.bin" rel="nofollow noreferrer">link</a>.</p>
<p>I am getting the following error</p>
<pre><code>llama_tokenize_with_model: too many tokens
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[10], line 6
4 text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=0)
5 texts = text_splitter.split_documents(docs)
----> 6 db = Chroma.from_documents(texts, embeddings)
File ~/miniconda3/envs/tensorflow/lib/python3.10/site-packages/langchain/vectorstores/chroma.py:603, in Chroma.from_documents(cls, documents, embedding, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs)
601 texts = [doc.page_content for doc in documents]
602 metadatas = [doc.metadata for doc in documents]
--> 603 return cls.from_texts(
604 texts=texts,
605 embedding=embedding,
606 metadatas=metadatas,
607 ids=ids,
608 collection_name=collection_name,
609 persist_directory=persist_directory,
610 client_settings=client_settings,
611 client=client,
612 collection_metadata=collection_metadata,
613 **kwargs,
614 )
File ~/miniconda3/envs/tensorflow/lib/python3.10/site-packages/langchain/vectorstores/chroma.py:567, in Chroma.from_texts(cls, texts, embedding, metadatas, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs)
539 """Create a Chroma vectorstore from a raw documents.
540
541 If a persist_directory is specified, the collection will be persisted there.
(...)
556 Chroma: Chroma vectorstore.
557 """
558 chroma_collection = cls(
559 collection_name=collection_name,
560 embedding_function=embedding,
(...)
565 **kwargs,
566 )
--> 567 chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
568 return chroma_collection
File ~/miniconda3/envs/tensorflow/lib/python3.10/site-packages/langchain/vectorstores/chroma.py:187, in Chroma.add_texts(self, texts, metadatas, ids, **kwargs)
185 texts = list(texts)
186 if self._embedding_function is not None:
--> 187 embeddings = self._embedding_function.embed_documents(texts)
188 if metadatas:
189 # fill metadatas with empty dicts if somebody
190 # did not specify metadata for all texts
191 length_diff = len(texts) - len(metadatas)
File ~/miniconda3/envs/tensorflow/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py:110, in LlamaCppEmbeddings.embed_documents(self, texts)
101 def embed_documents(self, texts: List[str]) -> List[List[float]]:
102 """Embed a list of documents using the Llama model.
103
104 Args:
(...)
108 List of embeddings, one for each text.
109 """
--> 110 embeddings = [self.client.embed(text) for text in texts]
111 return [list(map(float, e)) for e in embeddings]
File ~/miniconda3/envs/tensorflow/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py:110, in <listcomp>(.0)
101 def embed_documents(self, texts: List[str]) -> List[List[float]]:
102 """Embed a list of documents using the Llama model.
103
104 Args:
(...)
108 List of embeddings, one for each text.
109 """
--> 110 embeddings = [self.client.embed(text) for text in texts]
111 return [list(map(float, e)) for e in embeddings]
File ~/miniconda3/envs/tensorflow/lib/python3.10/site-packages/llama_cpp/llama.py:812, in Llama.embed(self, input)
803 def embed(self, input: str) -> List[float]:
804 """Embed a string.
805
806 Args:
(...)
810 A list of embeddings
811 """
--> 812 return list(map(float, self.create_embedding(input)["data"][0]["embedding"]))
File ~/miniconda3/envs/tensorflow/lib/python3.10/site-packages/llama_cpp/llama.py:776, in Llama.create_embedding(self, input, model)
774 tokens = self.tokenize(input.encode("utf-8"))
775 self.reset()
--> 776 self.eval(tokens)
777 n_tokens = len(tokens)
778 total_tokens += n_tokens
File ~/miniconda3/envs/tensorflow/lib/python3.10/site-packages/llama_cpp/llama.py:471, in Llama.eval(self, tokens)
469 raise RuntimeError(f"llama_eval returned {return_code}")
470 # Save tokens
--> 471 self.input_ids[self.n_tokens : self.n_tokens + n_tokens] = batch
472 # Save logits
473 rows = n_tokens if self.params.logits_all else 1
ValueError: could not broadcast input array from shape (8,) into shape (0,)
</code></pre>
<p>This error did not occur when the text length in the corpus was shorter. Is there a parameter that we need to change?</p>
<p>These are the libraries and their versions</p>
<pre><code>langchain -> '0.0.252'
numpy -> '1.25.0'
</code></pre>
<p>Thanks in advance!</p>
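<p>The traceback suggests one chunk exceeded the llama.cpp context window (<code>n_ctx</code>, 512 by default); <code>CharacterTextSplitter</code> only splits on separators, so a chunk can end up far longer than <code>chunk_size</code>. Raising <code>n_ctx</code> when constructing <code>LlamaCppEmbeddings</code> is one knob to try. As a rough pre-flight check (the four-characters-per-token ratio below is only an approximation), oversized chunks can be flagged before embedding:</p>

```python
def oversized_chunks(texts, n_ctx=512, chars_per_token=4):
    # Flag chunks whose rough token estimate will not fit the context window.
    budget = n_ctx * chars_per_token
    return [t for t in texts if len(t) > budget]

chunks = ["a short paragraph", "x" * 5000]
too_big = oversized_chunks(chunks)
```
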
| <python><nlp-question-answering><large-language-model><chromadb> | 2023-08-14 05:39:24 | 3 | 441 | adhok |
76,896,221 | 4,145,280 | Generate teams of similar strengths with contiguous areas | <h3>Idea</h3>
<p>It's quite sad that so many great countries (e.g. India) and players (e.g. Mo Salah) may never play at the FIFA (football/soccer) World Cup (the same argument could apply as well to other sports events dominated by a handful of dominant teams, such international cricket and basketball tournaments).</p>
<p><a href="https://i.sstatic.net/KY0xC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KY0xC.png" alt="enter image description here" /></a></p>
<p>It would be neat to try and create a more balanced event, while still keeping the location-based element, where each team is of a (roughly) similar strength, with all of the players from a contiguous area (preferably of the same landmass, but obviously not possible in all circumstances, and beyond the scope of this question).</p>
<p>For example, in the case of a weaker region, maybe their team would span many countries (e.g. maybe a South Asian team), whereas for a strong country, maybe there would be multiple teams (e.g. a team of just the players from the suburbs of Paris).</p>
<p>I could then afterwards plot the areas, maybe using Voronois.</p>
<h3>Background</h3>
<p>I managed to gather the relevant information, through scraping (of Football Manager and Transfermarkt), but I'm stuck on how to design the algorithm to select the teams.</p>
<h3>Problem</h3>
<p>There is a large number of coordinates, which correspond to places of birth of players. These places all have players, and the players all have ratings (from 1 - 100) and positions.</p>
<p>The task is, given a certain team size (11), and a certain number of teams (in the example below, 10), and a certain number of required players in each position (1, though substitutes would be handy), divide the area up into contiguous areas where the best team you can form from the players of the given area has roughly equal skill to the teams of other areas.</p>
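<p>For a fixed candidate area, the inner objective is straightforward: the best team you can form is the top-rated player in each required position. A minimal sketch of that building block (positions as letters, matching the sample data below):</p>

```python
def best_team_strength(players, positions):
    # players: iterable of (position, skill) pairs from one area.
    # Strength of the best XI: top available skill per required position
    # (0 if a position cannot be filled from this area).
    best = dict.fromkeys(positions, 0)
    for position, skill in players:
        if position in best and skill > best[position]:
            best[position] = skill
    return sum(best.values())

players = [("a", 60), ("a", 80), ("b", 50), ("c", 70)]
strength = best_team_strength(players, ["a", "b", "c"])
```

<p>The hard part of the question is then the outer search over area partitions, with this function as the per-area score.</p>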
<h3>Question</h3>
<p>I've been reading a bit about graph theory things, but I'm unsure where to begin creating a program which solves this problem.</p>
<p>I was looking for some guidance on this, and (if it's possible), if you could create something with the toy example, that would be amazing!!</p>
<p>Also, if tackling the overall problem is too difficult, if you can find a way to narrow the problem, and can create something which addresses that smaller problem, which can then be generalised to the larger problem, that would be great too.</p>
<h4>Sample code (in R, but I've included Python and Julia equivalent code further below- an answer using one of these (or similar) would be great)</h4>
<pre><code>set.seed(0)
library(tidyverse)
df <- tibble(
# generate random latitudes and longitudes, and the number of players for that location
lat = runif(100, 0, 90),
lon = runif(100, 0, 180),
players = rnorm(100, 0, 5) |> abs() |> ceiling() |> as.integer()
)
num_positions <- 11
position <- letters[1:num_positions]
df <- df |>
# generate position and skill data, and unnest
mutate(position = map(players, ~sample(position, .x, replace = TRUE)),
skill = map(players, ~sample(1:100, .x, replace = TRUE)))|>
unnest_longer(c(position, skill))
# plot the data
df |>
summarise(skill = mean(skill), players = first(players), .by = c(lat, lon)) |>
ggplot(aes(x = lon, y = lat)) +
geom_point(aes(size = players, color = skill)) +
scale_size_continuous(range = c(1, 10)) +
scale_color_viridis_c(option = "magma") + # similar to the Colombian shirt colours
theme_minimal() +
theme(legend.position = "none",
panel.grid.major = element_blank(),
panel.grid.minor = element_blank())
n_teams <- 10
</code></pre>
<p>Scatter plot of the sample data (circle size is number of players, colour is average skill)
<a href="https://i.sstatic.net/M2aTP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M2aTP.png" alt="scatter plot of sample data" /></a></p>
<h3>Notes</h3>
<ol>
<li><p>In the thing I want to do, it would involve 64 teams, so at least 704 players, and probably around 3x the size of the sample dataset. The real dataset has a lot of rows, but by filtering it, I should be able to get it down to a few thousand.</p>
</li>
<li><p>In real life, some players can play well in more than one position, but in the example code I gave, each player only has one position. Adding multiple positions per player would likely make this a lot more difficult to solve, so it's outside the scope of this question.</p>
</li>
<li><p>If you can do it across a sphere (like the globe), that would be amazing, but a rectangle would be okay too.</p>
</li>
<li><p><a href="https://stackoverflow.com/a/76949264/4145280">Reinderien has helpfully pointed out</a> that using contiguousness without compactness could lead to some Amigara Fault-like gerrymandered monstrosities, <a href="https://techcrunch.com/2019/08/01/this-free-ugly-font-is-made-from-hideously-gerrymandered-districts/" rel="nofollow noreferrer">similar to what we see in US electoral maps</a>. So although I'm still trying to figure out a better title, I would say prioritise compactness over contiguity if it works better for you (though I'm guessing this would throw up the problem of exclaves?)</p>
</li>
</ol>
<h4>Update:</h4>
<h5>Python code:</h5>
<pre class="lang-py prettyprint-override"><code>import random, math, pandas
random.seed(0)
df = pandas.DataFrame({'lat': [random.uniform(0, 90) for i in range(100)],
'lon': [random.uniform(0, 180) for i in range(100)],
'players':[math.ceil(abs(random.normalvariate(0, 5))) for i in range(100)]})
num_positions = 11
positions = list(map(chr, range(97, 97 + num_positions)))
df['position'] = df['players'].apply(lambda x: random.choices(positions, k=x))
df['skill'] = df['players'].apply(lambda x: random.choices(range(1, 101), k=x))
df = df.explode(['position', 'skill'], ignore_index=True)  # counterpart of R's unnest_longer
</code></pre>
<h5>Julia code:</h5>
<pre><code>using Random, DataFrames
Random.seed!(0)
df = DataFrame(lat = rand(100) * 90,
lon = rand(100) * 180,
players = Int64.(ceil.(abs.(5randn(100)))))
num_positions = 11
position = ['a'+ i for i in 0:num_positions-1]
df[!, :position] = [rand(position, players) for players in df.players]
df[!, :skill] = [rand(1:100, players) for players in df.players]
df = flatten(df, [:position, :skill])
</code></pre>
| <python><r><algorithm><julia><mathematical-optimization> | 2023-08-14 04:08:20 | 2 | 12,688 | Mark |
76,896,195 | 6,534,818 | Python: Unpack list of dicts containing individual np.arrays | <p>Is there a pure numpy way that I can use to get to this expected outcome?
Right now I have to use Pandas and I would like to skip it.</p>
<pre><code>import pandas as pd
import numpy as np
listOfDicts = [{'key1': np.array(10), 'key2': np.array(10), 'key3': np.array(44)},
{'key1': np.array(2), 'key2': np.array(15), 'key3': np.array(22)},
{'key1': np.array(25), 'key2': np.array(25), 'key3': np.array(11)},
{'key1': np.array(35), 'key2': np.array(55), 'key3': np.array(22)}]
</code></pre>
<p>Use pandas to parse:</p>
<pre><code># pandas can unpack simply
df = pd.DataFrame(listOfDicts)
# get all values under the same key
xd = df.to_dict('list')
# ultimate goal
np.stack([v for k, v in xd.items() if k not in ['key1']], axis=1)
array([[10, 44],
[15, 22],
[25, 11],
[55, 22]])
</code></pre>
<pre><code># I would like listOfDicts to transform temporarily into this with pure numpy,
# from which I could do basically anything to it:
{'key1': [np.array([10, 2, 25, 35])],
'key2': [np.array([10, 15, 25, 55])],
'key3': [np.array([44, 22, 11, 22])]
}
</code></pre>
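<p>For what it's worth, the intermediate per-key structure can be built without pandas with a single dict comprehension (assuming every dict carries the same keys):</p>

```python
import numpy as np

listOfDicts = [{'key1': np.array(10), 'key2': np.array(10), 'key3': np.array(44)},
               {'key1': np.array(2),  'key2': np.array(15), 'key3': np.array(22)},
               {'key1': np.array(25), 'key2': np.array(25), 'key3': np.array(11)},
               {'key1': np.array(35), 'key2': np.array(55), 'key3': np.array(22)}]

# One 1-D array per key, collected straight from the dicts:
xd = {k: np.array([d[k] for d in listOfDicts]) for k in listOfDicts[0]}
result = np.stack([v for k, v in xd.items() if k != 'key1'], axis=1)
```
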
| <python><arrays><numpy> | 2023-08-14 04:00:34 | 2 | 1,859 | John Stud |
76,896,094 | 549,226 | add_layer in the ArcGIS Python API neither completes the action nor returns an error | <p>I have a python script where I'm trying to add a feature layer to a webmap. The script completes without any errors, but the feature layer doesn't get added.</p>
<p>Here's the code in question:</p>
<pre><code>from arcgis.gis import GIS
from arcgis.mapping import WebMap
print("Logging in...")
gis = GIS("https://fotnf.maps.arcgis.com/", "XXXXXXX", "YYYYYYYY")
print(f"Connected to {gis.properties.portalHostname} as {gis.users.me.username}")
webmap_item = gis.content.get('e1405425e52d43689cfdaecd43e0239d')
feature_layer = gis.content.get('cc5eb6737f5441c48f2ea1c5ab42935e')
webmap = WebMap(webmap_item)
print("Adding layer")
webmap.add_layer(feature_layer)
print("Done")
</code></pre>
<p>Here's a screenshot of the Content pane in ArcGIS Online. The code is attempting to add the Join_2_12_23 feature layer to the Test Mesa map.
<a href="https://i.sstatic.net/HaI7v.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HaI7v.png" alt="enter image description here" /></a></p>
<p>The version of the arcgis python package is 2.1.0.3</p>
| <python><arcgis><esri><esri-maps><arcgis-online> | 2023-08-14 03:18:50 | 2 | 7,625 | opike |
76,896,051 | 10,138,470 | Any better swapping technique when implementing beam search? | <p>I have a list of 20 IDs that I want to subject to beam search. The IDs are unique, i.e. none is repeated. For each ordering of the IDs, a value can be computed, i.e. the objective function; basically this can just be the sum of the individual values associated with each ID. With brute force I would have to evaluate every permutation, i.e. 20 factorial orderings, to find, say, the one with the highest sum. I'm keen on utilising beam search here instead, and this is my approach.</p>
<pre><code>import random

beam_width = 2
max_iterations = 4
IDs = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]
Values = [10,11,23,33,23,233,22,13,78,90,9,8,10,11,45,34,45,18,19,20]
# Objective calculation function
def objective_function(ID_order):
    total_performance = sum(Values[IDs.index(ID)] for ID in ID_order)  # sum of the values for the IDs in this order
    return total_performance

# Beam search function
def beam_search(IDs, beam_width, max_iterations):
    candidate_solutions = [(random.sample(IDs, len(IDs)), 0.0)]  # initialize with a random order and 0.0 as the objective
    best_ID_order = None
    best_objective = float('-inf')
    for iteration in range(max_iterations):
        new_candidates = []
        # Generate new candidate solutions from the current ones
        for ID_order, _ in candidate_solutions:
            for i in range(len(IDs)):
                new_order = ID_order[:i] + [IDs[i]] + ID_order[i+1:]  # overwrite position i with IDs[i] (this can duplicate IDs)
                objective = objective_function(new_order)
                new_candidates.append((new_order, objective))
        new_candidates.sort(key=lambda x: x[1], reverse=True)  # sort candidates in descending order of objective value
        candidate_solutions = new_candidates[:beam_width]  # keep the top candidates based on the beam width
        # Update the best ID order and objective if a better solution is found
        if candidate_solutions[0][1] > best_objective:
            best_ID_order = candidate_solutions[0][0]
            best_objective = candidate_solutions[0][1]
    return best_ID_order, best_objective

best_ID_order, best_objective = beam_search(IDs, beam_width, max_iterations)
print("Overall Best ID Order :", best_ID_order)
print("Overall Best Objective:", best_objective)
</code></pre>
<p>My question is specifically about this line: <code>new_order = ID_order[:i] + [IDs[i]] + ID_order[i+1:]</code>, as it duplicates some IDs after the first iteration. Any recommendation for a better swapping technique that I can use in the search? The order of the IDs matters, as it will affect the objective function in the larger task I'll use this code for, and I don't want any ID duplicated.</p>
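<p>For the record, a neighbourhood that can never duplicate an ID is pairwise swapping: every neighbour is a transposition of the current order, so it remains a permutation by construction. A sketch of such a move generator, which could replace the slicing line above:</p>

```python
def swap_neighbors(order):
    # All orders reachable by swapping one pair of positions. Each result is
    # a permutation of `order`, so no ID can ever be duplicated or lost.
    neighbors = []
    for i in range(len(order)):
        for j in range(i + 1, len(order)):
            new_order = order[:]
            new_order[i], new_order[j] = new_order[j], new_order[i]
            neighbors.append(new_order)
    return neighbors

neighbors = swap_neighbors([1, 2, 3])
```
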
| <python><optimization> | 2023-08-14 03:03:10 | 0 | 445 | Hummer |
76,895,981 | 11,901,732 | to_csv returns special characters when writing pandas dataframe into csv | <p>When trying to write a pandas dataframe into a csv file, I noticed that special characters get mangled.</p>
<p>E.g., the input data from Excel shows "Department - Reconciliation", but after writing to csv it shows "Department – Reconciliation" instead. When I read the csv file back into Python, it shows "Department - Reconciliation" again.</p>
<p>I used <code>df.to_csv('df.csv', index = False )</code></p>
<p>Why is this happening? How do I fix it?</p>
<hr />
<p>other attempts:</p>
<p>tried 1:</p>
<pre><code>df.to_csv('df.csv', index = False, encoding="utf-8")
</code></pre>
<p>and 2:</p>
<pre><code>df = df.astype(str)
df = df.applymap(str)
</code></pre>
<p>in the hope of converting the data type to string, but the results are the same.</p>
<hr />
<p>Edit:</p>
<p>The source data was extracted from a business system; I just noticed that it shows "Department - Reconciliation" in the <code>Excel</code> file but "Department – Reconciliation" in the <code>csv</code> file.</p>
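<p>That symptom ("â€“" in place of a dash) is the classic sign of a UTF-8 file being decoded with a legacy code page. Excel relies on a byte-order mark to pick UTF-8, so writing with <code>utf-8-sig</code> is a common fix. A self-contained round trip (using an en dash) to illustrate:</p>

```python
import pandas as pd

df = pd.DataFrame({"dept": ["Department – Reconciliation"]})  # en dash, U+2013
# utf-8-sig prepends a BOM, which Excel uses to detect UTF-8 instead of
# falling back to the local ANSI code page.
df.to_csv("df.csv", index=False, encoding="utf-8-sig")
back = pd.read_csv("df.csv", encoding="utf-8-sig")
```
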
| <python><pandas><export-to-csv> | 2023-08-14 02:35:00 | 1 | 5,315 | nilsinelabore |
76,895,879 | 12,345,144 | Grabbing the largest 20 numbers' line numbers in a CSV list | <p>I'm working on a small project that I've gotten stuck on. I've used Python to take a list of timestamps and heatmap data and separate them by line (always 1-100). I am aware of the <code>max()</code> option, but despite searching Google and Stack Overflow, I haven't found how to include the line number or how to return multiple maxima in descending order.</p>
<p>Here is a sample of the csv I am working with:</p>
<pre><code>0.00088006474088529383
0.00015301444453664169
0.0001578056486084342
4.8472783963609083e-05
0.00018440120509040085
7.766234473424159e-05
</code></pre>
<p>What I would ideally need is a list of the 20 biggest numbers' lines in the csv, for example:</p>
<pre><code>6
4
1
5
3
2
</code></pre>
<p>I'm unsure how to start this, but I have experimented with:</p>
<pre><code>with open('heatmap.csv', 'r') as heatnum:
    for line in heatnum:
        print(max(heatnum))
</code></pre>
<p>This unfortunately only gives me the single maximum value. I'm unsure how to get the 20 largest values in descending order, or how to output their line numbers.</p>
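<p>One way to sketch this with the standard library is <code>heapq.nlargest</code> over (value, line number) pairs. It is shown here on the six sample values rather than the file, so the expected order is checkable by eye:</p>

```python
import heapq

def top_line_numbers(values, k=20):
    # 1-based line numbers of the k largest values, largest first.
    indexed = [(v, lineno) for lineno, v in enumerate(values, start=1)]
    return [lineno for _, lineno in heapq.nlargest(k, indexed)]

sample = [0.00088006474088529383, 0.00015301444453664169, 0.0001578056486084342,
          4.8472783963609083e-05, 0.00018440120509040085, 7.766234473424159e-05]
top = top_line_numbers(sample)
```

<p>For the real file, <code>values</code> can be built with <code>[float(line) for line in open('heatmap.csv')]</code>.</p>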
| <python><python-3.x><max> | 2023-08-14 01:49:05 | 1 | 317 | theboy |
76,895,797 | 15,632,586 | How to extract labels from a GML file with networkx? | <p>I am looking to extract all nodes with label as <code>super_technique</code> from my GML file, which looks like this:</p>
<pre><code> node [
id 17
label "T1592"
name "Gather Victim Host Information"
types "super_technique"
]
node [
id 18
label "T1592/001"
name "Hardware"
types "sub_technique"
]
node [
id 19
label "T1592/002"
name "Software"
types "sub_technique"
]
</code></pre>
<p>Here is my current code on Python:</p>
<pre><code>import csv
import networkx as nx
# Load the GML file
tree = nx.read_gml('Tactic_Technique_Reference_Example.gml')
technique_dict = {}
for node, data in tree.nodes(data=True):
    types = data['types']
    if types == "super_technique":
        label = data['label']
        id = data['id']
        technique_dict[label] = id
        print(technique_dict[label])
</code></pre>
<p>However, when I try to run the code, I get a <code>KeyError: 'label'</code>. Is there a way to extract the <code>super_technique</code> labels from my GML file?</p>
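<p>A likely cause: by default <code>read_gml</code> renames each node to its <code>label</code> attribute and removes that attribute from the data dict, so the label is the node itself and <code>data['label']</code> raises <code>KeyError</code>. A self-contained sketch using <code>parse_gml</code> on a snippet of the file:</p>

```python
import networkx as nx

gml = """graph [
  node [ id 17 label "T1592" name "Gather Victim Host Information" types "super_technique" ]
  node [ id 18 label "T1592/001" name "Hardware" types "sub_technique" ]
]"""
tree = nx.parse_gml(gml)

# The node *is* the label; only the remaining attributes live in the data dict.
supers = [node for node, data in tree.nodes(data=True)
          if data.get('types') == 'super_technique']
```

<p>If you want the numeric ids as node names instead, <code>nx.read_gml(path, label='id')</code> should leave <code>label</code> as an ordinary attribute in the data dict.</p>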
| <python><networkx><gml> | 2023-08-14 01:14:48 | 0 | 451 | Hoang Cuong Nguyen |
76,895,792 | 8,471,995 | Make numpy consider a subclass of list as not array-like | <p>How can I make numpy consider a subclass of <code>list</code> as not array-like?</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
class A(list):
    pass
values = [A((1,2)), A((3,4))]
array = np.array(values, dtype=np.object_)
</code></pre>
<p>This creates a 2-D array of <code>int</code>, not an array of <code>A</code>, even though <code>dtype=object_</code> is set.</p>
<p>Also, I need an array of objects because the shape must be kept. For example, an array of <code>Car</code> with shape <code>(1,3,1)</code> would otherwise become a tensor of shape <code>(1,3,1,5,3)</code>, since a single car is represented as a tensor of shape <code>(5,3)</code>.</p>
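<p>One workaround is to allocate the object array first and fill it element-wise, so NumPy never gets the chance to descend into the lists. A sketch for the 1-D case; the same idea extends to higher dimensions with <code>np.empty(shape, dtype=object)</code> and <code>np.ndindex</code>:</p>

```python
import numpy as np

class A(list):
    pass

values = [A((1, 2)), A((3, 4))]

# Pre-allocate, then assign element-wise: each slot keeps the A instance.
array = np.empty(len(values), dtype=object)
for i, v in enumerate(values):
    array[i] = v
```
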
| <python><arrays><numpy> | 2023-08-14 01:13:17 | 1 | 1,617 | Inyoung Kim 김인영 |
76,895,581 | 11,210,476 | Subclassing dataclass fails to override default arguments | <p>I'm failing to override a default kwarg.</p>
<pre><code>In [13]: from dataclasses import dataclass

In [14]: @dataclass
    ...: class D1:
    ...:     name: str = "foo"
    ...: class D2(D1):
    ...:     name: str = "bar"
    ...: D2().name == "foo"  # WTF
Out[14]: True
</code></pre>
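<p>The likely explanation: <code>D2</code> is missing its own <code>@dataclass</code> decorator, so it inherits <code>D1</code>'s generated <code>__init__</code>, which assigns <code>D1</code>'s default over the new class attribute. Decorating the subclass regenerates <code>__init__</code> with the overridden default:</p>

```python
from dataclasses import dataclass

@dataclass
class D1:
    name: str = "foo"

@dataclass  # without this, D2 reuses D1.__init__, which sets name="foo"
class D2(D1):
    name: str = "bar"
```
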
| <python><oop><inheritance><python-dataclasses> | 2023-08-13 23:28:05 | 0 | 636 | Alex |
76,895,467 | 8,492,678 | Creating two dimensional list out of other lists that are not the same size | <p>I have four lists</p>
<pre class="lang-py prettyprint-override"><code>apps = ["App 1", "App 2", "App 3", "App 4"]
devices = ["device1"]
groups = ["group1", "group2", "group3", "group4"]
rules_name = ["rule1", "rule2", "rule3", "rule4", "rule5"]
</code></pre>
<p>and I need to iterate over each list, adding the item at each index of each list to its correct position in the new list, something like this...</p>
<p>The <code>None</code> would indicate that there are no more items in that list (in this case, the <code>devices</code> list).</p>
<pre class="lang-py prettyprint-override"><code>rows = [
[apps[0], devices[0], groups[0], group_rules[0]],
[apps[1], None ,groups[1], group_rules[1]],
[apps[2], None ,groups[2], group_rules[2]],
[apps[3], None ,groups[3], group_rules[3]],
[apps[4], None ,groups[4], group_rules[4]],
# The rows list can have any number of rows based on the size of the lists above.
]
</code></pre>
<p>I've tried looping over each list which just created a pyramid of loops and did not give the correct results. I also tried list comprehension but I'm not super well versed in those so maybe I did it wrong.</p>
<p>For what it's worth, I'm using the <a href="https://github.com/Textualize/rich" rel="nofollow noreferrer">Rich</a> library which requires either a list of rows or adding a bunch of</p>
<pre class="lang-py prettyprint-override"><code>table.add_row("App1", "Device1", "group1", "rule1")
</code></pre>
<p>However, I'm not sure how to make that dynamic as the original four lists will vary in size each time I run this code.</p>
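<p>One tool that seems to fit is <code>itertools.zip_longest</code>, which pads the shorter lists with <code>None</code> and produces one row per index regardless of the list sizes:</p>

```python
from itertools import zip_longest

apps = ["App 1", "App 2", "App 3", "App 4"]
devices = ["device1"]
groups = ["group1", "group2", "group3", "group4"]
rules_name = ["rule1", "rule2", "rule3", "rule4", "rule5"]

# One row per index; missing entries become None automatically.
rows = [list(row) for row in zip_longest(apps, devices, groups, rules_name)]
```

<p>Since Rich's <code>add_row</code> expects strings, each row can be passed as <code>table.add_row(*("" if v is None else v for v in row))</code>.</p>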
| <python><rich> | 2023-08-13 22:36:07 | 1 | 320 | glob |
76,895,364 | 13,438,431 | SQLAlchemy 2: manually unexpire a row? | <p>Prerequisites:</p>
<ul>
<li>PostgreSQL 14</li>
<li>SQLAlchemy 2.0.19</li>
<li>Python 3.10</li>
<li>Asyncpg 0.28.0</li>
</ul>
<p>Please, imagine this script:</p>
<pre><code>async with sessionmaker() as session, session.begin():
    person: Person = await session.get(Person, (person_id, ))
    print(person.age)

    try:
        async with session.begin_nested():
            person.age = -1000
            await session.flush()
    except Exception as ex:
        pass  # ignore the exception

    print(person.age)
</code></pre>
<p>So, in essence we're trying to:</p>
<ol>
<li>Get a person with some known id.</li>
<li>Knowingly set their age to an erroneous value <strong>inside a subtransaction</strong> (A.K.A. "checkpoint").</li>
<li>The subtransaction fails with a check constraint violation.</li>
<li>We catch the exception, so the transaction rolls back to the moment before the checkpoint began and continues to execute.</li>
</ol>
<p><em>Expected behavior:</em></p>
<ol start="5">
<li>The person is unmodified, because all the modifications were done inside the erroneous checkpoint block, so we safely read back the <code>person.age</code>.</li>
</ol>
<p><em>What actually happens:</em></p>
<ol start="5">
<li>The person object becomes <a href="https://docs.sqlalchemy.org/en/20/orm/session_api.html#sqlalchemy.orm.Session.expire" rel="nofollow noreferrer">expired</a>, suggesting the state is inconsistent between the database and the python object.</li>
</ol>
<p>I guess SQLAlchemy doesn't support the "modifications happened inside the checkpoint" part of my reasoning, or I just understand something inherently incorrectly.</p>
<p>Either way, a quick fix, as I see it, would be to forcibly <code>unexpire</code> the object. Is it, at all, possible? Or maybe there is a better way?</p>
<p><strong>EDIT:</strong></p>
<p>I've found a way to <code>unexpire</code> the object:</p>
<pre><code>state = attributes.instance_state(person)
state.expired = False
state.expired_attributes.clear()
</code></pre>
<p>But then all of the object's fields are set to <code>None</code> by SQLAlchemy for some reason.</p>
<p><strong>Is there a way to make it NOT DO THAT?</strong></p>
<p><em>P.S.</em> Please, do not suggest to <code>refresh()</code>, as this is exactly the thing I'm trying to avoid. I use app-level locks to ensure that nobody else accesses this precise piece of data - <code>refresh</code>ing it would defeat the whole purpose.</p>
| <python><postgresql><sqlalchemy><orm><asyncpg> | 2023-08-13 21:53:34 | 0 | 2,104 | winwin |
76,895,090 | 5,134,817 | Python caching in for loop | <p>I recently came across the following behaviour, which surprised me and which I first thought was due to Python's small integer caching.</p>
<h3>Integer caching</h3>
<pre class="lang-py prettyprint-override"><code>id(0)
id(0) # Same as above
</code></pre>
<p>These two match, as expected from small integer caching.</p>
<h3>No caching for large integers</h3>
<pre class="lang-py prettyprint-override"><code>id(123456789)
id(123456789) # These differ
</code></pre>
<p>These differ, as I would expect.</p>
<h3>What is going on here?</h3>
<pre class="lang-py prettyprint-override"><code>list(id(123456789) for i in range(2)) # These are the same!
[id(123456789) for i in range(2)] # These are the same, but different from the previous.
for i in range(2): print(id(123456789)) # These are the same, but different from the previous.
for i in range(2): print(id(123456789)) # These are the same, but different from the previous.
</code></pre>
<p>It seems some caching is going on when the calls are inside a for loop. I wasn't expecting this, and would be keen to be pointed in the right direction for an explanation. I am battling an issue where I want the <code>id()</code>s of elements to be different, but this caching is getting in my way.</p>
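One thing I tried while narrowing this down: inspecting the compiled code object suggests (this is my reading, happy to be corrected) that a literal appearing inside a single code object is stored once in its constants, so every loop iteration would refer to the same object:

```python
# Compile the loop as its own code object and inspect its constants.
code = compile("for i in range(2): print(id(123456789))", "<snippet>", "exec")

# The literal appears once in co_consts, shared by all iterations.
print(123456789 in code.co_consts)  # True
```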
| <python><caching> | 2023-08-13 20:27:33 | 1 | 1,987 | oliversm |
76,895,063 | 1,319,998 | Calling curl from Python - stderr is different from when running on the command line. Why? | <p>When running curl on the command line against a domain that doesn't exist, I get an error message as expected</p>
<pre><code>$ curl https://doesnoexist.test/
curl: (6) Could not resolve host: doesnoexist.test
</code></pre>
<p>But if I do the same thing from Python, printing the standard error</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
proc = subprocess.Popen(['curl', 'https://doesnoexist.test/'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
outs, errs = proc.communicate()
print(errs)
</code></pre>
<p>then I get the start of what seems to be a download progress indicator, and then the same error message right after</p>
<pre><code>$ python curl.py
b' % Total % Received % Xferd Average Speed Time Time Time
Current\n Dload Upload Total Spent
Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:--
--:--:-- --:--:-- 0curl: (6) Could not resolve host: doesnoexist.test\n'
</code></pre>
<p>Why? How can I only get the error message in Python?</p>
<p>(Ideally answers won't just be curl-specific, but will also cover the general case where similar things happen when running other processes.)</p>
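My working hypothesis (unverified, so treat the mechanism as an assumption on my part) is that curl behaves differently because its streams are pipes rather than a terminal when captured via <code>subprocess</code>. A child process can observe that difference through <code>isatty()</code>:

```python
import subprocess
import sys

# Spawn a Python child that reports whether its own streams are terminals.
child = "import sys; print(sys.stdout.isatty(), sys.stderr.isatty())"
result = subprocess.run([sys.executable, "-c", child],
                        capture_output=True, text=True)
print(result.stdout.strip())  # "False False" when both streams are pipes
```

Run interactively with no capturing, the same child would print "True True", which is the kind of difference a tool like curl could key its progress meter on.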
| <python><curl><subprocess> | 2023-08-13 20:21:18 | 1 | 27,302 | Michal Charemza |
76,895,027 | 21,575,627 | Pythonic way to find index of certain char in a circular manner | <p>Let's say I have a string like:</p>
<pre><code>'abcdefgha'
</code></pre>
<p>I'd like to find the index of the next character <code>a</code> from index 2 onwards (wrapping around circularly), meaning it should find index 8 in this case (via <code>mystr.index('a', 2)</code>); however, in this case:</p>
<pre><code>'abcdefgh'
</code></pre>
<p>it should return index 0. Is there any such built-in function?</p>
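For context, this is the workaround I'd fall back on if there is no built-in (the helper name is my own): search the doubled string so the scan can wrap past the end, then map the hit back into the original index range.

```python
def circular_index(s, ch, start):
    # Doubling the string lets str.index wrap past the end; the modulo
    # maps a hit in the second copy back to an index in the original.
    return (s + s).index(ch, start) % len(s)

print(circular_index('abcdefgha', 'a', 2))  # 8
print(circular_index('abcdefgh', 'a', 2))   # 0
```

Like `str.index`, this raises `ValueError` when the character does not occur at all.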
| <python> | 2023-08-13 20:09:15 | 3 | 1,279 | user129393192 |
76,894,983 | 1,807,557 | Pandas read_csv dtype and chunk is inconsistent | <p>When reading a somewhat large CSV (around 300,000 rows by 160 columns), declaring dtypes via a dictionary gives inconsistent results. The data are localized to European conventions, so I've declared the decimal and thousands separators as well.</p>
<p>When setting chunksize to 25000, pandas reads the first 49,999 records and datatypes the columns as expected. On the 50,000th record, however, pandas throws a <code>ValueError</code> stating it cannot convert the string '0,00' to float. Interestingly, after the chunk that contains the 50,000th record, subsequent chunks are properly datatyped.</p>
<p>Not sure if it matters, but this is also an AWS ETL in Glue.</p>
<pre><code>import pandas as pd

for i, chunk in enumerate(pd.read_csv('file.csv'
, chunksize=25000
, on_bad_lines='skip'
, sep=';'
, decimal=','
, thousands='.'
, header=None
, low_memory=False
, names=fieldNames
, dtype=dictdatatypes)):
</code></pre>
<p>I have tried removing the low_memory=False argument and there is no difference in output. I would expect that all chunks would have all columns datatyped in a consistent manner.</p>
| <python><pandas><aws-glue> | 2023-08-13 19:58:34 | 1 | 691 | Daniel |
76,894,982 | 1,033,422 | How to create a histogram with points rather than bars | <p>I would like to plot a histplot but using points rather than bars.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import seaborn as sns
from scipy.stats import binom

x_n10_p0_6 = binom.rvs(n=10, p=0.6, size=10000, random_state=0)
x_n10_p0_8 = binom.rvs(n=10, p=0.8, size=10000, random_state=0)
x_n20_p0_6 = binom.rvs(n=20, p=0.6, size=10000, random_state=0)
df = pd.DataFrame({
    'x_n10_p0_6': x_n10_p0_6,
    'x_n10_p0_8': x_n10_p0_8,
    'x_n20_p0_6': x_n20_p0_6
})
sns.histplot(df)
</code></pre>
<p>This is what I'm getting:</p>
<p><a href="https://i.sstatic.net/lN4Gh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lN4Gh.png" alt="enter image description here" /></a></p>
<p>I would like to see something like this:</p>
<p><a href="https://i.sstatic.net/e0s1Y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e0s1Y.png" alt="enter image description here" /></a></p>
<p><em>Source: <a href="https://en.wikipedia.org/wiki/Binomial_distribution#/media/File:Binomial_distribution_pmf.svg" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Binomial_distribution#/media/File:Binomial_distribution_pmf.svg</a></em></p>
<p>There is an <code>element</code> parameter on <code>histplot</code>, but it only accepts the values <code>{“bars”, “step”, “poly”}</code>.</p>
| <python><pandas><numpy><seaborn><histplot> | 2023-08-13 19:57:38 | 1 | 24,908 | Chris Snow |
76,894,889 | 6,220,759 | Using integers for "quoting" parameter in csv module | <p>The <a href="https://docs.python.org/3/library/csv.html" rel="nofollow noreferrer"><code>csv</code></a> module defines the constants <code>QUOTE_ALL</code>, <code>QUOTE_MINIMAL</code>, <code>QUOTE_NONNUMERIC</code>, <code>QUOTE_NONE</code>, which are used in the <code>quoting</code> keyword argument, for instance:</p>
<pre><code>with open('eggs.csv', 'w', newline='') as csvfile:
spamwriter = csv.writer(csvfile, quoting=csv.QUOTE_ALL)
spamwriter.writerow(['Spam'] * 5 + ['Baked Beans'])
</code></pre>
<p>But it appears that this keyword argument also accepts the integers 0-3, which map to <code>QUOTE_MINIMAL</code> (0), <code>QUOTE_ALL</code> (1), <code>QUOTE_NONNUMERIC</code> (2) and <code>QUOTE_NONE</code> (3). (I found this in <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html" rel="nofollow noreferrer">the documentation of the Pandas library</a>):</p>
<pre><code>with open('eggs.csv', 'w', newline='') as csvfile:
spamwriter = csv.writer(csvfile, quoting=2)
spamwriter.writerow(['Spam'] * 5 + ['Baked Beans'])
</code></pre>
<p>Using a number greater than 3 yields a <code>TypeError: bad "quoting" value</code>.</p>
<p>But I can't find this behaviour documented anywhere (the Python <code>csv</code> documentation just says that the module defines these constants), nor can I find it in the source code for the <code>csv</code> library.</p>
<p>Is there some implicit mapping for constants I don't know about? How does this work, and is it documented?</p>
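For what it's worth, poking at the module interactively shows the constants really are plain integers at the Python level, which would explain why bare integers are accepted:

```python
import csv

# The quoting "constants" are module-level ints, so passing the
# corresponding integer literal is indistinguishable to the writer.
print(csv.QUOTE_MINIMAL, csv.QUOTE_ALL, csv.QUOTE_NONNUMERIC, csv.QUOTE_NONE)
# 0 1 2 3
print(isinstance(csv.QUOTE_NONNUMERIC, int))  # True
```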
| <python><csv> | 2023-08-13 19:29:25 | 2 | 11,757 | Josh Friedlander |
76,894,736 | 3,656,916 | 2D Coordinates of point defined as fractions of rectangular sides. Rectangular is not parallel to X, Y axes | <p>I have a rectangular ABCD: (x1,y1)... (x4,y4).
AB is not necessarily parallel to any of the axis.</p>
<p>I need to find the coordinates of a point inside it, defined as fractions of its sides:
alpha*AB, alpha*CD,
beta*BC, beta*DA,
where alpha and beta are between 0 and 1.</p>
<p>Is there a simple solution to this?</p>
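To make the question concrete, here is the kind of helper I have in mind (name and signature are purely illustrative), assuming the corners are given in order A, B, C, D. For a true rectangle, walking alpha of the way along AB and beta of the way along BC should reach the point:

```python
def point_at_fractions(a, b, c, d, alpha, beta):
    # Corners in order A, B, C, D. B - A and C - B give the two side
    # directions, so the point is A + alpha*(AB) + beta*(BC).
    # (d is accepted for symmetry; a true rectangle doesn't need it.)
    ax, ay = a
    bx, by = b
    cx, cy = c
    x = ax + alpha * (bx - ax) + beta * (cx - bx)
    y = ay + alpha * (by - ay) + beta * (cy - by)
    return x, y

# Axis-aligned sanity check: centre of a 4 x 2 rectangle.
print(point_at_fractions((0, 0), (4, 0), (4, 2), (0, 2), 0.5, 0.5))  # (2.0, 1.0)
```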
| <python><geometry><coordinates> | 2023-08-13 18:43:01 | 0 | 507 | DDR |