QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,924,709 | 13,014,864 | Histogram of grouped data in PySpark | <p>I have data consisting of a date-time, IDs, and velocity, and I'm hoping to get histogram data (start/end points and counts) of velocity for each ID using PySpark. Sample data:</p>
<pre class="lang-py prettyprint-override"><code>df = spark.createDataFrame(
[
("2023-06-01 07:09:17", "abc", 4.5),
("2023-06-01 07:09:18", "abc", 9.1),
("2023-06-01 07:09:19", "abc", 3.2),
("2023-06-01 07:10:06", "ddc", 5.1),
("2023-06-01 07:09:07", "ddc", 3.6),
("2023-06-01 07:09:08", "ddc", 2.6)
],
["date_time", "id", "velocity"]
)
</code></pre>
<p>I'm not too picky about how the output is formatted. Initially I was computing histograms using Spark's <code>rdd.histogram(bins)</code> function, but this was over all the velocity values (with no grouping). That code was:</p>
<pre class="lang-py prettyprint-override"><code>df.filter(col("velocity").isNotNull()).rdd.histogram(list(range(0, 100, 1)))
</code></pre>
<p>However, I cannot figure out how to do this for grouped data. I've tried these two things:</p>
<pre class="lang-py prettyprint-override"><code># This throws an error: 'GroupedData' object has no attribute 'rdd'
df.filter(col("velocity").isNotNull()).groupBy("id").rdd.histogram(list(range(0, 100, 1)))
# This throws a much longer error, but ends with: TypeError: 'str' object is not callable
# I think this has to do with the rdd.groupBy method
df.filter(col("velocity").isNotNull()).rdd.groupBy("id").histogram(list(range(0, 100, 1)))
# This throws a long error, with this TypeError: TypeError: '>' not supported between instances of 'tuple' and 'int'
df.filter(col("velocity").isNotNull()).select("id", "velocity").rdd.groupByKey().histogram(list(range(0, 100, 1)))
</code></pre>
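One way to get grouped histograms without <code>rdd.histogram</code> is to bucket the velocities yourself and count per (id, bucket). The bucketing logic is sketched below in plain Python (it runs without Spark); in PySpark the equivalent would be roughly <code>F.floor(col("velocity") / bin_width)</code> followed by <code>groupBy("id", "bucket").count()</code> (names here are just the sample's, so treat this as an illustrative sketch, not the only way):

```python
from collections import Counter
import math

def grouped_histogram(rows, bin_width=1):
    """Count velocities per (id, bin left edge); mirrors groupBy(id, bucket).count()."""
    counts = Counter()
    for id_, velocity in rows:
        if velocity is None:  # same effect as filtering out null velocities
            continue
        bucket = math.floor(velocity / bin_width) * bin_width  # left edge of the bin
        counts[(id_, bucket)] += 1
    return dict(counts)

rows = [("abc", 4.5), ("abc", 9.1), ("abc", 3.2),
        ("ddc", 5.1), ("ddc", 3.6), ("ddc", 2.6)]
print(grouped_histogram(rows))
```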
| <python><apache-spark><pyspark><histogram><rdd> | 2023-08-17 19:53:33 | 1 | 931 | CopyOfA |
76,924,596 | 880,874 | How can I use SQL and Python together to fill a template file with SQL data? | <p>I have a text file that is a template like this:</p>
<pre><code>Question: {QuestionStem}
{answers}
Correct answer: {correctAnswer}
Date: {examDate}
Class: {examClass}
</code></pre>
<p>I need to populate it and save a copy of it with every question in an exam.</p>
<p>The exam data comes from a T-SQL Query (MSSQL) that looks like this:</p>
<pre><code>QuestionID QuestionStem QuestionOrder AnswerId AnswerText AnswerOrder isCorrect Class Date
100 What color is the sky? 0 1 Red 0 0 Astronomy 10/1/2024
100 What color is the sky? 0 2 Blue 1 1 Astronomy 10/1/2024
100 What color is the sky? 0 3 Orange 2 0 Astronomy 10/1/2024
100 What color is the sky? 0 4 Green 3 0 Astronomy 10/1/2024
200 The Sun is bright(T/F) 1 11 True 0 1 Astronomy 5/1/2024
200 The Sun is bright(T/F) 1 12 False 1 0 Astronomy 5/1/2024
</code></pre>
<p>I need the data for each exam to fill the template file like this:</p>
<pre><code>Question: What color is the sky?
a. Red
b. Blue
c. Orange
d. Green
Correct answer: b
Date: 10/1/2024
Class: Astronomy
Question: The Sun is bright(T/F)
a. True
b. False
Correct answer: a
Date: 5/1/2024
Class: Astronomy
</code></pre>
<p>Does Python have a way of taking the SQL data, populating the template with the data, and then saving it as a text file?</p>
<p>For Fynn :)</p>
<p>Pastebin: <a href="https://pastebin.com/kUEf6hZT" rel="nofollow noreferrer">https://pastebin.com/kUEf6hZT</a></p>
<p>The output of your code is wonderful as you can see below...but I can't get the <code>isCorrect</code> part to work:</p>
<pre><code>Question: What color is the sky?
-[ ] Red
-[ ] Green
-[ ] Gold
-[ ] Blue
Correct answer:
Date: 2023-08-23
Class: Astro 101
</code></pre>
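Yes: fetch the rows (e.g. via <code>pyodbc</code> or <code>pandas.read_sql</code>), group them by <code>QuestionID</code>, and fill the template with <code>str.format</code>. A sketch using hard-coded rows in place of the query result (column names are taken from the sample output; the letter-for-correct-answer logic assumes answers arrive sorted by <code>AnswerOrder</code>):

```python
from itertools import groupby
from string import ascii_lowercase

# Hypothetical stand-in for the query result, using the sample's column names.
rows = [
    {"QuestionID": 100, "QuestionStem": "What color is the sky?",
     "AnswerText": "Red", "isCorrect": 0, "Class": "Astronomy", "Date": "10/1/2024"},
    {"QuestionID": 100, "QuestionStem": "What color is the sky?",
     "AnswerText": "Blue", "isCorrect": 1, "Class": "Astronomy", "Date": "10/1/2024"},
]

TEMPLATE = ("Question: {QuestionStem}\n{answers}\n"
            "Correct answer: {correctAnswer}\nDate: {examDate}\nClass: {examClass}\n")

def render(rows):
    """Rows must arrive sorted by QuestionOrder, then AnswerOrder."""
    out = []
    for _, grp in groupby(rows, key=lambda r: r["QuestionID"]):
        grp = list(grp)
        answers = "\n".join(f"{ascii_lowercase[i]}. {r['AnswerText']}"
                            for i, r in enumerate(grp))
        # letter of the first answer flagged isCorrect
        correct = next(ascii_lowercase[i] for i, r in enumerate(grp) if r["isCorrect"])
        out.append(TEMPLATE.format(QuestionStem=grp[0]["QuestionStem"], answers=answers,
                                   correctAnswer=correct, examDate=grp[0]["Date"],
                                   examClass=grp[0]["Class"]))
    return "".join(out)

print(render(rows))
```

Writing each question to its own file is then just <code>open(name, "w").write(...)</code> per group.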
| <python><python-3.x><t-sql><sql-server-2012> | 2023-08-17 19:34:04 | 1 | 7,206 | SkyeBoniwell |
76,924,582 | 2,279,829 | Can't get mlb stats data from sportsreference.com | <p>When I enter the command below in the terminal</p>
<pre><code>scrapy shell "https://www.baseball-reference.com/teams/BOS/2023.shtml"
</code></pre>
<p>I have no problem getting any of the data & I can find everything on the page. But when I scrape for it (via scrapy crawl mlb_players) I get no data. I see the following in the terminal</p>
<pre><code>2023-08-17 15:26:03 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.baseball-reference.com/teams/BOS/2023.shtml> (referer: None)
</code></pre>
<p>I don't know why I'm having this issue. The code I use for the stats table is</p>
<pre><code>for stats in response.xpath('//table[@id="team_batting"]/tbody/tr'):
</code></pre>
<p>but scrapy doesn't find the table or any of the divs (all_team_batting/div_team_batting) that contain the table. The only data I'm still able to get from the page is the year & team using the code below</p>
<pre><code>item_team = response.xpath('//h1/span[2]/text()').get()
item_year = response.xpath('//h1/span[1]/text()').get()
</code></pre>
<p>I don't know what's going on; I had no problems before and I don't know what changed. All feedback and/or input is welcome.</p>
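One frequently reported cause with sports-reference sites (an assumption worth verifying against your crawled HTML) is that many tables are shipped inside HTML comments and only un-commented by in-browser JavaScript, so XPath on the raw response cannot see them. A sketch of stripping the comment markers before selecting:

```python
import re

def uncomment_tables(html: str) -> str:
    """Strip HTML comment markers so commented-out tables become visible to XPath."""
    return re.sub(r"<!--|-->", "", html)

# miniature stand-in for the crawled page
sample = '<div class="all_team_batting"><!--<table id="team_batting"><tr><td>x</td></tr></table>--></div>'
cleaned = uncomment_tables(sample)
print('<table id="team_batting">' in cleaned)  # True
```

In the spider you would apply this to <code>response.text</code> and build a fresh <code>scrapy.Selector</code> from the cleaned HTML; that Selector usage is the standard pattern, but first confirm the commented table is actually present in your crawled response.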
| <python><html><web-scraping><scrapy> | 2023-08-17 19:31:28 | 0 | 1,328 | JC23 |
76,924,492 | 12,380,096 | Python Mocking / Patching multiple nested functions / variables | <p>I am new to Python and GCP but I am trying to create some tests for my GCF function that moves a file from one bucket to another.</p>
<h2>Simplified Python code:</h2>
<pre class="lang-python prettyprint-override"><code>import functions_framework
from google.cloud import storage
storageClient = storage.Client()
@functions_framework.cloud_event
def storage_trigger(cloud_event):
data = cloud_event.data
return move_file_to_folder(data)
def move_file_to_folder(data):
file_name = data['name']
folder_name = 'test-folder/'
source_bucket = storageClient.get_bucket(data['bucket'])
source_blob = source_bucket.get_blob(file_name)
dest_file_name = file_name
try:
if source_bucket.get_blob(folder_name + file_name):
#This is what I need to mock
dest_file_name = increment_file_name(source_bucket, folder_name, file_name)
source_bucket.copy_blob(source_blob, source_bucket, folder_name + dest_file_name)
return f'{dest_file_name} successfully sent and moved!'
except Exception as e:
print(e)
return Exception
def increment_file_name(source_bucket, folder_name, file_name):
new_name = file_name
split_name = file_name.split('.')
first_part = '.'.join(split_name[:-1])
ext = split_name[-1]
num = 1
while source_bucket.get_blob(folder_name + new_name):
new_name = f'{first_part}-{num}.{ext}'
print(new_name)
num += 1
return new_name
</code></pre>
<h2>Test</h2>
<pre class="lang-python prettyprint-override"><code>import unittest
from unittest.mock import patch, Mock
from cloudevents.http import CloudEvent
import main
attributes = {
"id": "5e9f24a",
"type": "google.cloud.storage.object.v1.finalized",
"source": "sourceUrlHere",
}
data = {
'bucket': 'test_bucket',
'contentType': 'audio/mpeg',
'id': 'test.mp3',
'kind': 'storage#object',
'name': 'test.mp3',
'size': '184162',
'storageClass': 'STANDARD',
'timeCreated': '2023-07-25T13:42:25.541Z',
'updated': '2023-07-25T13:42:25.541Z'
}
event = CloudEvent(attributes, data)
class SuccessTests(unittest.TestCase):
@classmethod
def setUpClass(cls) -> None:
return super().setUpClass()
@patch('main.storageClient')
@patch('main.increment_file_name')
def test_storage_trigger(self, mock_storage, func):
mock_storage_client = mock_storage.Client.return_value
mock_bucket = Mock()
mock_storage_client.get_bucket.return_value = mock_bucket
func.return_value = 'new-test-file-name.mp3'
expected = 'Data successfully inserted into BigQuery table.', 200
print(main.storage_trigger(event))
self.assertEqual(main.storage_trigger(event), expected)
if __name__ == "__main__":
unittest.main()
</code></pre>
<h2>Issues</h2>
<p>If I don't mock the <code>increment_file_name</code> function, it just increments forever (I assume because the mock is designed to say the blob always exists). So I am trying to patch that function.</p>
<p>Several resources indicate that I need to patch where the function is called, not the function itself, but the function call is being assigned to a variable. I tried <code>@patch('main.move_file_to_folder.increment_file_name')</code>, but then it can't find the attribute <code>increment_file_name</code>.</p>
<h2>Question</h2>
<p>So how do I tell the test to make the reassignment of <code>dest_file_name</code> come out as 'new-test-file-name.mp3'?</p>
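Separate from the main question, note that stacked <code>@patch</code> decorators are applied bottom-up, so with the decorators as written <code>mock_storage</code> actually receives the <code>increment_file_name</code> mock and <code>func</code> receives the <code>storageClient</code> mock. A minimal stdlib demonstration (patching two <code>os</code> functions purely for illustration):

```python
import os
from unittest.mock import patch

@patch("os.getcwd")     # outer decorator: its mock arrives *second*
@patch("os.cpu_count")  # inner decorator: its mock arrives *first*
def demo(mock_cpu_count, mock_getcwd):
    # each parameter really is the mock installed by "its" decorator
    assert os.cpu_count is mock_cpu_count
    assert os.getcwd is mock_getcwd
    return True

print(demo())
```

Renaming the test parameters to match the bottom-up order should make <code>func.return_value</code> land on the intended mock.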
| <python><google-cloud-platform><python-unittest> | 2023-08-17 19:14:26 | 1 | 1,481 | AylaWinters |
76,924,449 | 10,326,759 | How can inheritance change the class signature | <p>I'm finding that inheriting from a base class can change the derived class signature according to <code>inspect.signature</code>, and I would like to understand how that happens. Specifically, the base class in question is <code>tensorflow.keras.layers.Layer</code>:</p>
<pre><code>import sys
import inspect
import tensorflow as tf
class Class1(tf.keras.layers.Layer):
def __init__(self, my_arg: int):
pass
class Class2:
def __init__(self, my_arg: int):
pass
print("Python version: ", sys.version)
print("Tensorflow version: ", tf.__version__)
print("Class1 signature: ", inspect.signature(Class1))
print("Class2 signature: ", inspect.signature(Class2))
</code></pre>
<p>Outputs</p>
<pre><code>Python version: 3.8.10 (default, Mar 23 2023, 13:10:07)
[GCC 9.3.0]
Tensorflow version: 2.12.0
Class1 signature: (*args, **kwargs)
Class2 signature: (my_arg: int)
</code></pre>
<p>I tried running the code above and I expected it to print the same signature for both classes.</p>
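A base class can do this by wrapping or replacing <code>__init__</code> at subclass-creation time; if the wrapper does not carry over the original's metadata (via <code>functools.wraps</code> or <code>__signature__</code>), <code>inspect.signature</code> falls back to the wrapper's generic <code>(*args, **kwargs)</code>. Keras layers intercept constructor arguments in a similar way (that characterization of Keras internals is an assumption). A pure-Python reproduction:

```python
import inspect

class Tracking:
    """Base class that wraps every subclass's __init__ with a generic wrapper."""
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        original = cls.__init__

        def wrapped(self, *args, **kw):  # generic signature hides the real one
            original(self, *args, **kw)

        cls.__init__ = wrapped

class WithBase(Tracking):
    def __init__(self, my_arg: int):
        self.my_arg = my_arg

class Plain:
    def __init__(self, my_arg: int):
        self.my_arg = my_arg

print(inspect.signature(WithBase))  # (*args, **kw)
print(inspect.signature(Plain))     # (my_arg: int)
```

Applying <code>functools.wraps(original)</code> to the wrapper, or setting <code>__signature__</code> explicitly, restores the reported signature.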
| <python><tensorflow> | 2023-08-17 19:05:28 | 1 | 497 | Andrea Allais |
76,924,401 | 10,729,196 | ValueError when running autopep8 | <p>I'm trying to run autopep8 recursively in a directory named "tests" with following command:</p>
<pre><code>autopep8 --in-place --recursive tests
</code></pre>
<p>But I'm facing an odd error when running the command:</p>
<pre><code>.pyenv/versions/3.11.4/lib/python3.11/configparser.py", line 819, in _get
return conv(self.get(section, option, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: invalid literal for int() with base 10: 'True'
</code></pre>
<p>I already tried other valid commands to try and run <code>autopep8</code> but none worked. How can I fix this issue?</p>
<p>[EDIT] Whole stack:</p>
<pre><code>autopep8 tests --in-place --recursive --verbose
read config path: /home/acabista/Documents/automations/rainmaker/api-automation/tox.ini
Traceback (most recent call last):
File "/home/acabista/.pyenv/versions/rainmaker-api/bin/autopep8", line 8, in <module>
sys.exit(main())
^^^^^^
File "/home/acabista/.pyenv/versions/rainmaker-api/lib/python3.11/site-packages/autopep8.py", line 4499, in main
args = parse_args(argv[1:], apply_config=apply_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/acabista/.pyenv/versions/rainmaker-api/lib/python3.11/site-packages/autopep8.py", line 3835, in parse_args
parser = read_config(args, parser)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/acabista/.pyenv/versions/rainmaker-api/lib/python3.11/site-packages/autopep8.py", line 4013, in read_config
for norm_opt, k, value in _get_normalize_options(
File "/home/acabista/.pyenv/versions/rainmaker-api/lib/python3.11/site-packages/autopep8.py", line 3970, in _get_normalize_options
value = config.getint(section, k)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/acabista/.pyenv/versions/3.11.4/lib/python3.11/configparser.py", line 834, in getint
return self._get_conv(section, option, int, raw=raw, vars=vars,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/acabista/.pyenv/versions/3.11.4/lib/python3.11/configparser.py", line 824, in _get_conv
return self._get(section, conv, option, raw=raw, vars=vars,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/acabista/.pyenv/versions/3.11.4/lib/python3.11/configparser.py", line 819, in _get
return conv(self.get(section, option, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: invalid literal for int() with base 10: 'True'
</code></pre>
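The traceback shows autopep8 reading your project's <code>tox.ini</code> and forcing an option value through <code>configparser.getint</code>, so a boolean-valued entry such as <code>aggressive = True</code> (instead of <code>aggressive = 1</code>) in a section autopep8 reads would reproduce this exactly. The specific offending key in your <code>tox.ini</code> is an assumption, but the failure mode is easy to confirm:

```python
import configparser

# Minimal reproduction: a boolean where autopep8 expects an integer
cfg = configparser.ConfigParser()
cfg.read_string("[pep8]\naggressive = True\n")

try:
    # autopep8's config reader effectively does this for integer-typed options
    cfg.getint("pep8", "aggressive")
except ValueError as exc:
    print(exc)  # invalid literal for int() with base 10: 'True'
```

The fix is to change the offending value in the <code>[pep8]</code>/<code>[autopep8]</code> section of <code>tox.ini</code> to a number (e.g. <code>aggressive = 1</code>) or remove it.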
| <python><python-3.x><linter><autopep8> | 2023-08-17 18:57:39 | 1 | 493 | Andressa Cabistani |
76,924,321 | 10,628,853 | exclude negative words from nltk stopwords | <p>I want to remove the nltk stopwords from my sentences except the ones that have negative meaning such as: no, not, couldn't etc. In other words, I want to exclude negative words from the stopwords' list. How can I do that?</p>
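A straightforward way is to subtract a negation set from the stopword list before filtering. The sketch below uses a small hand-written stand-in for <code>nltk.corpus.stopwords.words("english")</code> so it runs without the NLTK data download; swap in the real list in practice:

```python
# Illustrative stand-in for nltk.corpus.stopwords.words("english")
stop_words = {"the", "a", "is", "no", "not", "nor", "couldn't", "don't", "isn't"}

# keep explicit negators plus every contracted "...n't" form
negations = {"no", "not", "nor", "never"} | {w for w in stop_words if w.endswith("n't")}
keep_negation_stops = stop_words - negations

def remove_stopwords(tokens):
    return [t for t in tokens if t.lower() not in keep_negation_stops]

print(remove_stopwords(["This", "is", "not", "a", "good", "movie"]))
```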
| <python><machine-learning><nltk><stop-words><data-preprocessing> | 2023-08-17 18:45:59 | 1 | 747 | Shadi Farzankia |
76,923,989 | 4,019,495 | Pydantic 1.9.1: is there a way to avoid the "if field in values" pattern? | <p>Let's compare two Model objects and the following dictionary.</p>
<pre><code>class A(BaseModel):
field1: StrictStr
field2: StrictStr
@validator('field2', always=True)
def check_field2_not_dupliated(cls, field2, values):
if field2 == values['field1'] + 'x':
raise ValueError(f'{field2=} is the same as field1!')
return field2
class B(BaseModel):
field1: StrictStr
field2: StrictStr
@validator('field2', always=True)
def check_field2_not_dupliated(cls, field2, values):
if 'field1' in values:
if field2 == values['field1'] + 'x':
raise ValueError(f'{field2=} is the same as field1!')
return field2
d = {
'field1': 1,
'field2': 'hello'
}
</code></pre>
<p>If you run <code>parse_obj</code>, you get</p>
<pre><code>> A.parse_obj(d)
5 @validator('field2', always=True)
6 def check_field2_not_dupliated(cls, field2, values):
----> 7 if field2 == values['field1']:
8 raise ValueError(f'{field2=} is the same as field1!')
9 return field2
KeyError: 'field1'
</code></pre>
<pre><code>> B.parse_obj(d)
ValidationError: 1 validation error for B
field1
str type expected (type=type_error.str)
</code></pre>
<p><strong>I would like the error message of B without needing to have to wrap the validator for field2 in "if 'field1' in values", that is, with code looking like A</strong>.</p>
<p>One possible approach (I'm not sure whether it's feasible) is to raise the error as soon as pydantic realizes <code>field1</code> has invalid input.</p>
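In pydantic 1.x, <code>values</code> only contains fields that already passed validation, so <code>field1</code> is genuinely absent when it failed. Short of a pydantic feature, the guard can at least be collapsed with <code>values.get</code> and a sentinel; here is the comparison logic alone, sketched as a plain function (the <code>@validator</code> wiring stays as in class B, and <code>_MISSING</code> is an illustrative name):

```python
_MISSING = object()  # unique sentinel: can never equal a real field value

def check_field2(field2, values):
    """Body of B's validator, flattened to a single guard."""
    field1 = values.get("field1", _MISSING)
    if field1 is not _MISSING and field2 == field1 + "x":
        raise ValueError(f"{field2=} is the same as field1!")
    return field2

print(check_field2("hello", {}))  # field1 failed earlier -> check skipped -> 'hello'
```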
| <python><pydantic> | 2023-08-17 17:49:50 | 1 | 835 | extremeaxe5 |
76,923,945 | 1,835,727 | Rotate all tick labels perpendicularly in polar plot with uneven tick distribution | <p>I am making a polar plot where the ticks are not uniformly distributed around the circle. There are a few very good Q&A pairs that deal with uniformly distributed answers, and they all use a <em>divide-up-the-circle</em> approach. <a href="https://stackoverflow.com/questions/46719340/how-to-rotate-tick-labels-in-polar-matplotlib-plot">E.g. this</a>.</p>
<p>I'd like to know if it's possible to use the transform that's baked into the label to rotate the text the way I'd like to put it.</p>
<p>I can sort of do this, but I can't work out how to anchor it properly. The code that is doing it is here:</p>
<pre><code>for tick in plt.xticks()[1]:
tick._transform = tick._transform + mpl.transforms.Affine2D().rotate_deg_around(0, 0, 10)
</code></pre>
<p>which gives an output like this:</p>
<p><a href="https://i.sstatic.net/EWqd4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EWqd4.png" alt="enter image description here" /></a></p>
<p>Whereas I'd like an output like this:</p>
<p><a href="https://i.sstatic.net/DULBV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DULBV.png" alt="enter image description here" /></a></p>
<p>(from the above-linked question)</p>
<p>Obviously, I'd need a 90° rotation, not a 10° one, but 90° rotates it off the canvas.</p>
<p>Is this approach possible, or do I need to reassess my strategy?</p>
<p>The full code block is here:</p>
<pre><code>import random
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
one_person = {
"Human": {
"Collaboration": 4,
"Growth Mindset": 3,
"Inclusion": 5,
"Project and Studio Life": 2,
},
"Tectonics": {
"Office Manual and Procedures": 3,
"Documentation Standards": 3,
"Site Stage Services": 2,
"External and Public Domain Works": 2,
"Structure": 3,
"Enclosure": 2,
"Waterproofing (int. and ext.)": 3,
"Interiors": 1,
"Structure and Services": 2,
},
"Technology": {
"Bluebeam": 2,
"Confluence": 3,
"Drawing on screens": 0,
"dRofus": 0,
"Excel": 2,
"Grasshopper": 1,
"InDesign": 2,
"Outlook": 2,
"Python": 5,
"Rhino": 1,
"Teams": 2,
"Timesheets and expenses": 3,
},
"Regenerative": {
"REgenerative Design": 3,
"Materials and Embodied Carbon practice": 1,
"Materials and Embodied Carbon analysis": 2,
"Energy": 3,
"Resilience": 1,
"Rating Systems": 2,
},
"Design": {
"Predesign - Briefing, Stakeholder Engagement & Establishing Project Values": 2,
"Predesign - Feasibility Studies And Strategic Organisational Planning": 3,
"Initiating Design": 2,
"Conserving Design": 3,
"Design Communication - Written": 2,
"Design Communication - Visual": 4,
"Design Communication - Verbal": 3,
},
"Connecting with country": {"Connecting with Country": 2},
}
colours = [
"b", # blue.
"g", # green.
"r", # red.
"c", # cyan.
"m", # magenta.
"y", # yellow.
"k", # black.
# "w", # white.
]
def draw_radar(data, colour_letters, person_name=""):
"""Draw the graph.
Based substantially on this SO thread:
https://stackoverflow.com/questions/60563106/complex-polar-plot-in-matplotlib
"""
# not really sure why -1, but if you don't you get an empty segment
num_areas = len(data) - 1
running_total = 0
thetas = {}
for key, value in data.items():
this_area_num_points = len(value)
this_area_theta = ((2 * np.pi) / num_areas) / (this_area_num_points)
thetas[key] = []
for i in range(len(value)):
thetas[key].append((i * this_area_theta) + running_total)
running_total += (2 * np.pi) / num_areas
labels = []
for key, value in data.items():
for area, score in value.items():
labels.append(f"{score} {key}: {area}")
for name, theta_list in thetas.items():
individual_scores = list(data[name].values())
colour = random.choice(colour_letters)
if len(theta_list) > 1:
plt.polar(theta_list, individual_scores, c=colour, label=name)
elif len(theta_list) == 1:
plt.scatter(theta_list, individual_scores, c=colour, label=name)
plt.yticks(np.arange(-5, 5), [""] * 5 + list(range(5)))
plt.xticks(
np.concatenate(tuple(list(thetas.values()))),
labels,
transform_rotates_text=True,
)
for tick in plt.xticks()[1]:
tick._transform = tick._transform + mpl.transforms.Affine2D().rotate_deg_around(
0, 0, 10
)
if person_name:
plt.title = f"Competency for {person_name}"
plt.savefig("radar.png")
draw_radar(one_person, colours)
</code></pre>
| <python><matplotlib> | 2023-08-17 17:41:46 | 1 | 13,530 | Ben |
76,923,844 | 5,252,389 | How does a global variable behave when using Python joblib parallel? | <p>How do global variables behave when they are accessed and modified by a function parallelized with joblib's <code>Parallel</code>?</p>
<p>Suppose this is the code I've written</p>
<pre><code> # Global variable
client = CustomClass()
# Thread level function
def runThread():
# modify client variables
# call client methods etc
# Call the runThread() function using joblib Parallel, 5 threads
Parallel(n_jobs=5, backend="multiprocessing")(delayed(runThread) for _ in range(5))
</code></pre>
<p>So how does the global <code>client</code> object I've created behave? Does it mean that:</p>
<ol>
<li>Each instance of the <code>runThread()</code> function will have access to its own copy of the global variable, and the different parallel instances can independently modify/access the <code>client</code>, OR</li>
<li>All the processes that we spawn will share the same instance of the <code>client</code> and the different processes access and modify the global <code>client</code> and I need to control access with a mutex ?</li>
</ol>
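With <code>backend="multiprocessing"</code>, each worker is a separate process, so behavior 1 applies: every worker operates on its own copy of the global, and the parent's object is never touched (which also means a mutex cannot help you share state; you would need an explicit shared object or return values). The mechanism can be seen with the stdlib alone, no joblib required:

```python
import multiprocessing as mp

counter = {"value": 0}  # module-level "global" state, like the client object

def bump(n):
    counter["value"] += n  # mutates this worker process's own copy
    return counter["value"]

if __name__ == "__main__":
    with mp.Pool(2) as pool:
        results = pool.map(bump, [1, 1, 1])
    print(results)            # values seen inside the workers
    print(counter["value"])   # parent's copy is still 0: nothing was shared
```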
| <python><python-3.x><python-multiprocessing><python-multithreading><joblib> | 2023-08-17 17:25:27 | 0 | 396 | Sukrit Kumar |
76,923,788 | 2,195,440 | How to understand from where a function call is made? | <p>I am using TreeSitter to parse Python code.</p>
<p>I need to understand that <code>check_files_in_directory</code> is invoked from <code>GPT4Readability.utils</code>. I have already captured all the function calls, and I have to do this programmatically.</p>
<p>But now I have to find out from which file <code>check_files_in_directory</code> is called. I am struggling to work out what the logic to do this would be. Can anyone please suggest an approach?</p>
<p>I have to implement this with a Tree sitter AST parser from scratch. I would appreciate links to proper resources, high-level logic. I am not looking for actual code implementation and instead high-level reasoning.</p>
<pre><code>import os
from getpass import getpass
from GPT4Readability.utils import *
import importlib.resources as pkg_resources
def generate_readme(root_dir, output_name, model):
"""Generates a README.md file based on the python files in the provided directory
Args:
root_dir (str): The root directory of the python package to parse and generate a readme for
"""
# prompt_folder_name = os.path.join(os.path.dirname(__file__), "prompts")
# prompt_path = os.path.join(prompt_folder_name, "readme_prompt.txt")
with pkg_resources.open_text('GPT4Readability.prompts','readme_prompt.txt') as f:
inb_msg = f.read()
# with open(prompt_path) as f:
# lines = f.readlines()
# inb_msg = "".join(lines)
file_check_result = check_files_in_directory(root_dir)
</code></pre>
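One way to prototype the logic before porting it to tree-sitter is with Python's stdlib <code>ast</code>, since the reasoning is the same on any AST: build a per-file import map, collect call names, and treat a bare call that matches no local definition or explicit import as a candidate for each <code>from X import *</code> module. A sketch (the sample source is abridged from the question):

```python
import ast

source = """
from GPT4Readability.utils import *
import os

def generate_readme(root_dir):
    result = check_files_in_directory(root_dir)
    return os.path.join(root_dir, result)
"""

tree = ast.parse(source)
# modules pulled in via star-imports
star_modules = [n.module for n in ast.walk(tree)
                if isinstance(n, ast.ImportFrom) and any(a.name == "*" for a in n.names)]
# names defined locally in this file
local_defs = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}
# bare (non-attribute) call names
calls = [n.func.id for n in ast.walk(tree)
         if isinstance(n, ast.Call) and isinstance(n.func, ast.Name)]

unresolved = [c for c in calls if c not in local_defs]
print(unresolved, "may come from", star_modules)
```

Resolving a star-import precisely still requires parsing the target module (or its <code>__all__</code>) to confirm it defines the name; with tree-sitter you would query for <code>import_from_statement</code> and <code>call</code> nodes instead of <code>ast</code> node types.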
| <python><python-3.x><abstract-syntax-tree><treesitter> | 2023-08-17 17:17:29 | 2 | 3,657 | Exploring |
76,923,698 | 8,102,500 | What's wrong with my implementation of OpenCV's warpAffine? | <p>For some reason, I need to re-implement OpenCV's <code>warpAffine</code> using GPU and I <strong>only need to translate the input image without rotation</strong>. That is, my 2×3 transformation matrix has the form:</p>
<pre><code>[[ 1, 0, x_shift],
[ 0, 1, y_shift]
</code></pre>
<p>which would make the implementation simpler.</p>
<p>Then I implemented the prototype using Python to verify the algorithm:</p>
<pre class="lang-py prettyprint-override"><code># Bilinear Interpolation:
# for a source image, sample a value indexed by (y, x) even if x, y are fractionals.
def sample(img, x: float, y: float):
h, w = img.shape
left = math.floor(x)
top = math.floor(y)
right = left + 1
bottom = top + 1
# top left corner
if (0 <= left < w) and (0 <= top < h):
a = img[top, left]
else:
a = 0
# top right corner
if (0 <= right < w) and (0 <= top < h):
b = img[top, right]
else:
b = 0
# bottom left corner
if (0 <= left < w) and (0 <= bottom < h):
c = img[bottom, left]
else:
c = 0
# bottom right corner
if (0 <= right < w) and (0 <= bottom < h):
d = img[bottom, right]
else:
d = 0
# linear interpolation of top two points
top_interleaved = (right - x) * a + (x - left) * b
# linear interpolation of bottom two points
bottom_interleaved = (right - x) * c + (x - left) * d
# linear interpolation of top and bottom points
return (bottom - y) * top_interleaved + (y - top) * bottom_interleaved
def warpAffine(img, shift_x: float, shift_y: float):
output = np.empty_like(img)
h, w = img.shape
for y in range(h):
for x in range(w):
output[y, x] = sample(img, x - shift_x, y - shift_y)
return output
</code></pre>
<p>And test it with:</p>
<pre class="lang-py prettyprint-override"><code>a = np.arange(1, 5, dtype=np.float32).reshape(2, 2)
# a = [[1, 2],
# [3, 4]]
shift_x = 0.1
shift_y = 0.1
warpAffine(a, shift_x, shift_y)
</code></pre>
<p>It outputs:</p>
<pre class="lang-py prettyprint-override"><code>array([[0.81, 1.71],
[2.52, 3.7 ]], dtype=float32)
</code></pre>
<p>It looks good but is a little different from OpenCV's <code>warpAffine</code>:</p>
<pre class="lang-py prettyprint-override"><code>affine_arr = np.array([[1, 0, shift_x], [0, 1, shift_y]], dtype=np.float32)
affine_img = cv.warpAffine(
a,
affine_arr,
(a.shape[1], a.shape[0]),
flags=cv.INTER_LINEAR,
borderMode=cv.BORDER_CONSTANT,
borderValue=0,
)
</code></pre>
<p>The <code>affined_img</code> is:</p>
<pre class="lang-py prettyprint-override"><code>array([[0.82, 1.73],
[2.55, 3.72]], dtype=float32)
</code></pre>
<p>I don't know what went wrong. I tried to read <a href="https://github.com/opencv/opencv/blob/abda7630737fa8307efc88169d3525d3d2005898/modules/imgproc/src/imgwarp.cpp#L2622" rel="nofollow noreferrer">OpenCV's source code of <code>warpAffine</code></a> but it's too hard to read. I hope someone who's familiar with OpenCV's <code>warpAffine</code> can tell me what's my implementation's issue.</p>
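The small mismatch is consistent with OpenCV's fixed-point interpolation: with <code>INTER_LINEAR</code>, fractional source coordinates are quantized to 1/32 (<code>INTER_BITS = 5</code>) before the bilinear weights are formed, while the Python prototype uses exact floats. A sketch of that quantization for the top-left output pixel of the 2×2 example (attributing the whole difference to this quantization is an inference from the numbers, not from stepping through OpenCV's code):

```python
import numpy as np

INTER_BITS = 5            # OpenCV's fixed-point fraction bits
SCALE = 1 << INTER_BITS   # fractions are snapped to multiples of 1/32

def quantize(frac):
    return np.round(frac * SCALE) / SCALE

# Output pixel (0, 0) with shift (0.1, 0.1): only img[0, 0] = 1 contributes,
# weighted by (1 - fx) * (1 - fy).
fx = fy = quantize(0.1)              # 3/32 = 0.09375
exact = (1 - 0.1) * (1 - 0.1) * 1.0  # float bilinear  -> 0.81
fixed = (1 - fx) * (1 - fy) * 1.0    # quantized       -> ~0.8213
print(round(exact, 2), round(fixed, 2))
```

0.82 matches OpenCV's value for that pixel, and the same quantization reproduces the other three outputs (e.g. 3.72 for the bottom-right pixel), so the prototype is correct, just higher-precision than the default <code>warpAffine</code> path.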
| <python><opencv> | 2023-08-17 17:04:23 | 1 | 1,203 | Shuai |
76,923,677 | 5,838,180 | In python search-and-replace (with fileinput) erases all my file content | <p>I have a .py-file <code>my_file.py</code> that contains parameters and looks something like that:</p>
<pre><code>input_folder = 'bla/bla/'
output_folder = 'out/put/folder/'
iterations = 20
</code></pre>
<p>I am writing a script that reads in this .py-file and when it encounters a user-defined line it changes the parameter in this line. So I managed so far to create this script</p>
<pre><code>import fileinput
for line in fileinput.input("my_file.py", inplace=True):
if line.startswith('output_folder ='):
print("output_folder = '/new/file/path/'", end='') # for Python 3
</code></pre>
<p>What I would expect this script to produce is:</p>
<pre><code>input_folder = 'bla/bla/'
output_folder = '/new/file/path/'
iterations = 20
</code></pre>
<p>Instead, it deletes all the lines in <code>my_file.py</code> that I haven't mentioned and leaves me with just this content:</p>
<pre><code>output_folder = '/new/file/path/'
</code></pre>
<p>How do I achieve the desired result? Tnx</p>
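<code>fileinput</code> with <code>inplace=True</code> redirects <code>print</code> into the file, so every line has to be re-printed, including the ones you leave unchanged; lines you never print are dropped, which is exactly the behavior observed. A self-contained sketch against a temp copy of the file:

```python
import fileinput
import pathlib
import tempfile

# work on a temp copy of my_file.py for the sketch
path = pathlib.Path(tempfile.mkdtemp()) / "my_file.py"
path.write_text(
    "input_folder = 'bla/bla/'\n"
    "output_folder = 'out/put/folder/'\n"
    "iterations = 20\n"
)

for line in fileinput.input(str(path), inplace=True):
    if line.startswith("output_folder ="):
        print("output_folder = '/new/file/path/'")  # replacement line
    else:
        print(line, end="")  # unchanged lines must be re-printed too

print(path.read_text())
```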
| <python><file><file-io> | 2023-08-17 17:01:23 | 2 | 2,072 | NeStack |
76,923,540 | 6,139,162 | Why does AWS Lambda experience a consistent 10-second delay for an unresolvable domain using requests.get? | <p>I'm experiencing a peculiar behavior when trying to make an HTTP request to unresolved domain names from an AWS Lambda function using the <code>requests</code> library in Python.</p>
<p>When I attempt to make a request using:</p>
<pre><code>response = requests.get('https://benandjerry.com', timeout=(1,1))
</code></pre>
<p>In AWS Lambda, it consistently takes around 10 seconds before it throws an error. However, when I run the same code on my local environment, it's instant. I've verified this using logs and isolated tests.</p>
<p>I've considered potential issues like Lambda's cold starts, Lambda runtime differences, and even VPC configurations, but none seem to be the root cause.</p>
<p>I also tried using curl to access the domain, and it instantly returned with Could not resolve host: benandjerry.com.</p>
<p>Last point, this is happening on specific unresolved domain names, not all of them.</p>
<p>Here's a sample:</p>
<ul>
<li><a href="http://seen.ma/" rel="nofollow noreferrer">http://seen.ma/</a></li>
<li>benandjerry.com</li>
<li>clenyabeauty.com</li>
<li>gong.com</li>
</ul>
<p>FYI, you can easily replicate the issue by creating a python3.9 Lambda on AWS & adding the following code:</p>
<pre><code>import json
from botocore.vendored import requests
import urllib.request
import os
def lambda_handler(event, context):
# TODO implement
url = 'http://benandjerry.com'
try:
response = requests.get(url, proxies=None,verify=False)
except Exception as e:
print(e)
return {
'statusCode': 200,
'body': json.dumps('Hello from Lambda!')
}
</code></pre>
<p>Questions:</p>
<ol>
<li>What could be causing this consistent 10-second delay in AWS Lambda
for an unresolvable domain using requests?</li>
<li>How can I get AWS Lambda to instantly recognize that the domain is unresolvable, similar to the behavior on my local machine?</li>
</ol>
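One workaround, independent of why Lambda's resolver takes ~10 seconds (resolver retry/timeout behavior inside the Lambda VPC is an assumption): requests' <code>timeout</code> parameter does not bound DNS resolution, only connect and read, so you can pre-check the hostname under your own deadline. A stdlib-only sketch; <code>resolves</code> is an illustrative helper name:

```python
import socket
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def resolves(host: str, timeout: float = 1.0) -> bool:
    """Pre-check DNS under our own deadline before calling requests.get."""
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        future = pool.submit(socket.getaddrinfo, host, 80)
        future.result(timeout=timeout)
        return True
    except (FutureTimeout, socket.gaierror):
        return False
    finally:
        pool.shutdown(wait=False)  # don't block waiting on a stuck lookup

print(resolves("localhost"))
```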
| <python><aws-lambda><python-requests> | 2023-08-17 16:41:50 | 3 | 446 | jjyoh |
76,923,406 | 8,519,380 | How to send a message to every consumer who is idle in Kafka? | <p>I have three consumers and one producer in Kafka.
When the producer sends all the messages (there are 100 messages in my simple code), they are divided among the three consumers, and this division is my main problem.
Some messages take long to process, so one consumer may fall behind, while another consumer that finishes its messages quickly becomes idle with nothing to do.<br></p>
<p>How can I keep all the messages in a queue so that whenever a consumer finishes its current work, it receives the next message? (I don't know whether consumers receive messages from producers or from topics; I am a beginner in this field.)
Thank you for your guidance.<br></p>
<p>I took a video about the working process, please watch it. According to the video, one consumer has finished its work and is idle, but the other two consumers are running.
<br><a href="https://i.imgur.com/2Kg5vvx.mp4" rel="nofollow noreferrer">Movie link</a>.
<br>
My codes:
Topic:</p>
<pre><code>kafka-topics --bootstrap-server localhost:9092 --create --topic numbers --partitions 4 --replication-factor 1
</code></pre>
<p>producer.py:</p>
<pre><code>from time import sleep
from json import dumps
from kafka import KafkaProducer
producer = KafkaProducer(bootstrap_servers=['localhost:9092'], value_serializer=lambda x: dumps(x).encode('utf-8'))
for e in range(100):
data = {'number' : e}
producer.send('numbers', value=data)
print(f"Sending data : {data}")
</code></pre>
<p>consumer0.py:</p>
<pre><code>import json
from kafka import KafkaConsumer
print("Connecting to consumer ...")
consumer = KafkaConsumer(
'numbers',
bootstrap_servers=['localhost:9092'],
auto_offset_reset='earliest',
enable_auto_commit=True,
group_id='my-group',
value_deserializer=lambda x: json.loads(x.decode('utf-8')))
for message in consumer:
print(f"{message.value}")
</code></pre>
<p>consumer1.py:</p>
<pre><code>import json
from kafka import KafkaConsumer
import time
print("Connecting to consumer ...")
consumer = KafkaConsumer(
'numbers',
bootstrap_servers=['localhost:9092'],
auto_offset_reset='earliest',
enable_auto_commit=True,
group_id='my-group',
value_deserializer=lambda x: json.loads(x.decode('utf-8')))
for message in consumer:
time.sleep(1)
print(f"{message.value}")
</code></pre>
<p>consumer2.py:</p>
<pre><code>import json
from kafka import KafkaConsumer
import time
print("Connecting to consumer ...")
consumer = KafkaConsumer(
'numbers',
bootstrap_servers=['localhost:9092'],
auto_offset_reset='earliest',
enable_auto_commit=True,
group_id='my-group',
value_deserializer=lambda x: json.loads(x.decode('utf-8')))
for message in consumer:
time.sleep(2)
print(f"{message.value}")
</code></pre>
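For context: Kafka consumers pull from the partitions assigned to them; messages are not handed to whichever consumer happens to be idle, so a slow consumer only delays its own partitions. The behavior you are describing is a shared work queue. The sketch below shows those target semantics with stdlib threads (not Kafka); with Kafka itself you approximate it by using more partitions than consumers and keeping fetch batches small (e.g. <code>max_poll_records</code> in kafka-python), so idle consumers pick up new work sooner:

```python
import queue
import threading
import time

tasks = queue.Queue()
done = []

def worker(name, delay):
    """Pull the next message as soon as this worker is free."""
    while True:
        try:
            item = tasks.get(timeout=0.2)
        except queue.Empty:
            return  # queue drained: stop
        time.sleep(delay)  # simulate fast vs slow consumers
        done.append((name, item))
        tasks.task_done()

for n in range(20):  # the "producer"
    tasks.put(n)

workers = [threading.Thread(target=worker, args=(f"c{i}", 0.005 * i)) for i in range(3)]
for t in workers:
    t.start()
for t in workers:
    t.join()

print(len(done))  # every message processed; faster workers took more of them
```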
| <python><apache-kafka><producer-consumer><kafka-python> | 2023-08-17 16:26:35 | 1 | 778 | Sardar |
76,923,331 | 22,307,474 | Why can't I see the mesh? | <pre><code>
import pygame as pg
import numpy as np
import pyassimp
import glm
from OpenGL.GL import *

vertex_shader = """
#version 330 core
layout (location = 0) in vec3 aPos;

uniform mat4 model;
uniform mat4 projection;
uniform mat4 view;

void main(){
    gl_Position = projection * view * model * vec4(aPos, 1.0);
}
"""

fragment_shader = """
#version 330 core
out vec4 fragColor;

void main(){
    fragColor = vec4(1.0, 0.0, 1.0, 1.0);
}
"""

class Camera3d():
    def __init__(self, position=glm.vec3(0.0), rotation=glm.vec3(0.0)):
        self.position = position
        self.rotation = rotation
        self.projection_matrix = glm.perspective(glm.radians(75), 800 / 600, 0.1, 1024)

    @property
    def forward(self):
        forward = glm.vec3()
        forward.x = glm.cos(glm.radians(self.rotation.y)) * glm.cos(glm.radians(self.rotation.x))
        forward.y = glm.sin(glm.radians(self.rotation.x))
        forward.z = glm.sin(glm.radians(self.rotation.y)) * glm.cos(glm.radians(self.rotation.x))
        return glm.normalize(forward)

    @property
    def right(self):
        return glm.normalize(glm.cross(self.forward, glm.vec3(0, 1, 0)))

    @property
    def up(self):
        return glm.normalize(glm.cross(self.right, self.forward))

    def update(self):
        keys = pg.key.get_pressed()
        if keys[pg.K_w]:
            self.position += glm.vec3(0, 0, 1)
        elif keys[pg.K_s]:
            self.position += glm.vec3(0, 0, -1)
        if keys[pg.K_a]:
            self.position += glm.vec3(-1, 0, 0)
        elif keys[pg.K_d]:
            self.position += glm.vec3(1, 0, 0)

    def get_projection_matrix(self):
        return self.projection_matrix

    def get_view_matrix(self):
        return glm.lookAt(self.position, self.position + self.forward, self.up)

class Mesh3d():
    def __init__(self):
        self.position = glm.vec3(0.0)
        self.rotation = glm.vec3(0.0)
        self.vertices = np.array([], dtype=np.float32)
        self.texcoords = np.array([], dtype=np.uint32)
        self.normals = np.array([], dtype=np.float32)
        self.faces = np.array([], dtype=np.uint32)
        self.vao, self.vbo, self.ebo = 0, 0, 0
        self.shader = self.create_shader_program(vertex_shader, fragment_shader)

    def compile_shader(self, source: str, shader_type: GL_SHADER_TYPE):
        shader = glCreateShader(shader_type)
        glShaderSource(shader, source)
        glCompileShader(shader)
        return shader

    def create_shader_program(self, vertex_source: str, fragment_source: str):
        vertex_shader = self.compile_shader(vertex_source, GL_VERTEX_SHADER)
        fragment_shader = self.compile_shader(fragment_source, GL_FRAGMENT_SHADER)
        shader_program = glCreateProgram()
        glAttachShader(shader_program, vertex_shader)
        glAttachShader(shader_program, fragment_shader)
        glLinkProgram(shader_program)
        glDeleteShader(vertex_shader)
        glDeleteShader(fragment_shader)
        return shader_program

    def get_model_matrix(self):
        model = glm.mat4(1.0)
        model = glm.translate(model, self.position)
        return model

    def setup_buffers(self):
        self.vao = glGenVertexArrays(1)
        glBindVertexArray(self.vao)

        self.vbo = glGenBuffers(1)
        glBindBuffer(GL_ARRAY_BUFFER, self.vbo)
        glBufferData(GL_ARRAY_BUFFER, self.vertices.nbytes,
                     self.vertices, GL_STATIC_DRAW)

        self.ebo = glGenBuffers(1)
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, self.ebo)
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, self.faces.nbytes,
                     self.faces, GL_STATIC_DRAW)

        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE,
                              3 * self.vertices.dtype.itemsize, None)
        glEnableVertexAttribArray(0)
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE,
                              2 * self.faces.dtype.itemsize, None)
        glEnableVertexAttribArray(1)
        glBindVertexArray(0)

    def render(self, camera: Camera3d):
        if not self.vao:
            return
        glUseProgram(self.shader)

        model_loc = glGetUniformLocation(self.shader, "model")
        glUniformMatrix4fv(model_loc, 1, GL_FALSE,
                           glm.value_ptr(self.get_model_matrix()))
        projection_loc = glGetUniformLocation(self.shader, "projection")
        glUniformMatrix4fv(projection_loc, 1, GL_FALSE,
                           glm.value_ptr(camera.get_projection_matrix()))
        view_loc = glGetUniformLocation(self.shader, "view")
        glUniformMatrix4fv(view_loc, 1, GL_FALSE,
                           glm.value_ptr(camera.get_view_matrix()))

        glBindVertexArray(self.vao)
        glDrawElements(GL_TRIANGLES, len(self.faces) * 3, GL_UNSIGNED_INT, None)
        glBindVertexArray(0)
        glUseProgram(0)

    def load_from(self, filename: str):
        with pyassimp.load(filename) as scene:
            mesh = scene.meshes[0]
            self.vertices = mesh.vertices
            self.texcoords = mesh.texturecoords
            self.normals = mesh.normals
            self.faces = mesh.faces
        self.setup_buffers()
        return self

class App():
    def __init__(self):
        self.win = pg.display.set_mode((800, 600), pg.OPENGL | pg.DOUBLEBUF, vsync=True)
        self.clock = pg.time.Clock()
        self.mesh = Mesh3d().load_from('res/teapot.obj')
        self.camera = Camera3d(glm.vec3(0, 0, -3))

    def __events(self):
        for e in pg.event.get():
            if e.type == pg.QUIT:
                self.__is_running = False

    def __update(self):
        self.camera.update()

    def __render(self):
        glClearColor(0.05, 0.05, 0.05, 1)
        glClear(GL_COLOR_BUFFER_BIT)
        self.mesh.render(self.camera)
        pg.display.flip()

    def run(self):
        self.__is_running = True
        while self.__is_running:
            self.__events()
            self.__update()
            self.__render()
            self.clock.tick(0)
        pg.quit()

if __name__ == '__main__':
    App().run()
</code></pre>
<p>This code should just draw the mesh loaded from the file, but when I run it, I see only emptiness.</p>
| <python><python-3.x><opengl><pygame><pyopengl> | 2023-08-17 16:14:58 | 1 | 510 | bin4ry |
76,923,210 | 6,260,154 | Python: need help removing leading and trailing whitespace from strings inside brackets in XPath | <p>I have some XPath strings, and I need help removing one or more leading and trailing whitespace characters from the quoted strings inside the brackets. These are some sample XPaths I have:</p>
<pre><code>'/a/b[b1=" a12s "]/c[c1="1a3 "]/d'
'/a/b[b1=" 12a6a"]/c[c1=" s23 "]/d'
'/a/b[b1="s9d "]/c[c1=" 1 2 x "]/d'
</code></pre>
<p>And this is the output I am looking for:</p>
<pre><code>'/a/b[b1="a12s"]/c[c1="1a3"]/d'
'/a/b[b1="12a6a"]/c[c1="s23"]/d'
'/a/b[b1="s9d"]/c[c1="1 2 x"]/d'
</code></pre>
<p>I need help in removing these one or more leading and trailing whitespaces from xpaths. Kindly guide me on this.</p>
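<p>For illustration, one regex-substitution sketch (assuming values are always double-quoted and contain no escaped quotes) that trims whitespace just inside each quoted value while keeping interior spaces:</p>

```python
import re

samples = [
    '/a/b[b1=" a12s "]/c[c1="1a3 "]/d',
    '/a/b[b1=" 12a6a"]/c[c1=" s23 "]/d',
    '/a/b[b1="s9d "]/c[c1=" 1 2 x "]/d',
]

# Trim whitespace just inside each pair of double quotes;
# the lazy group keeps interior spaces like "1 2 x" intact
pattern = re.compile(r'"\s*(.*?)\s*"')
cleaned = [pattern.sub(lambda m: '"' + m.group(1) + '"', s) for s in samples]
```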
| <python><regex> | 2023-08-17 15:59:38 | 1 | 1,016 | Tony Montana |
76,923,151 | 11,748,924 | firebase realtime database offline mode to prevent R/W limitation | <p>I heard that the Firebase Realtime Database has some limitations in its free tier:</p>
<pre><code>
Simultaneous connections: 100
GB stored: 1 GB
GB downloaded: 10 GB/month
Multiple databases per project: No
</code></pre>
<p>I also heard that Firebase works in offline mode. I want to avoid hitting the 10 GB/month download limit, so I expect I can set up the Firebase Realtime Database to store data in a local database.</p>
<p>Here's my basic code:</p>
<pre><code>from dotenv import load_dotenv
import os

import firebase_admin
from firebase_admin import credentials
from firebase_admin import db

load_dotenv()

# Fetch the service account key JSON file contents
cred = credentials.Certificate(os.getenv('FIREBASE_CREDENTIAL_PATH'))

# Initialize the app with a service account, granting admin privileges
firebase_admin.initialize_app(cred, {
    'databaseURL': os.getenv('FIREBASE_DATABASE_URL')
})

# As an admin, the app has access to read and write all data, regardless of Security Rules
ref = db.reference('/a_reference')
ref.set({
    'data': 'test'
})
print(ref.get())

# ref.push()  # I thought that commenting this out would make it store in the local database.
</code></pre>
<p>After I run the code above, I get the expected output. However, it also updates the online database at <code>databaseURL</code>, which counts toward the <code>GB downloaded</code> limit.</p>
<p>I expected the data to be stored locally first, because I want to develop my entire app against the DB, which means many R/W operations while I'm debugging.</p>
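<p>One direction that might avoid this entirely (I have not verified it yet) is developing against the local Realtime Database emulator instead of the hosted database. From what I have read, the Admin SDK checks an environment variable for this:</p>

```python
import os

# Assumption based on docs I found: when this variable is set, firebase-admin
# talks to a locally running Realtime Database emulator instead of the
# production instance, so debug reads/writes are not billed.
os.environ["FIREBASE_DATABASE_EMULATOR_HOST"] = "localhost:9000"
```

<p>The emulator itself would be started separately with the Firebase CLI; <code>localhost:9000</code> is just the default port I have seen mentioned.</p>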
| <python><database><firebase><firebase-realtime-database> | 2023-08-17 15:50:50 | 1 | 1,252 | Muhammad Ikhwan Perwira |
76,923,144 | 394,957 | Saving Masking layer with functional API to .keras file | <p>I'm using a custom mask in my Keras model. When I try to load the model with <code>model=tf.saved_model.load('model.keras')</code> from a <code>.keras</code> file, I get the following error:</p>
<pre><code>TypeError: <keras.src.layers.core.masking.Masking object at 0x7e7735fa1460> could not be deserialized properly. Please ensure that components that are Python object instances (layers, models, etc.) returned by `get_config()` are explicitly deserialized in the model's `from_config()` method.
</code></pre>
<p>Here is my model:</p>
<pre><code>import numpy as np
from tensorflow.keras.layers import (Input, Masking, LSTM, Dropout,
                                     Dense, Concatenate)
from tensorflow.keras.models import Model

N_FEATURES = 6
MASK_VALUE = np.asarray([0.0 for i in range(N_FEATURES)])

def get_clean_model():
    # Input layer
    input_layer = Input(shape=(None, N_FEATURES))
    masked_input = Masking(mask_value=MASK_VALUE)(input_layer)

    # LSTM layer with regularization
    lstm_layer = LSTM(units=N_FEATURES, activation='tanh', return_sequences=True,
                      recurrent_regularizer='l2', kernel_regularizer='l2')(masked_input)

    # Dropout layer
    dropout_layer = Dropout(0.05)(lstm_layer)

    # Dense layers with regularization
    dense_layer1 = Dense(N_FEATURES, activation='sigmoid', kernel_regularizer='l2')(dropout_layer)

    # Skip connection: Concatenate masked input with dense_layer1
    concatenated_layer = Concatenate()([masked_input, dense_layer1])

    dense_layer2 = Dense(N_FEATURES*2, activation='sigmoid', kernel_regularizer='l2')(dense_layer1)

    # Dropout layer
    dropout_layer2 = Dropout(0.05)(dense_layer2)

    # Output layer
    output_layer = Dense(1, activation='sigmoid')(dropout_layer2)

    # Create the model
    model = Model(inputs=input_layer, outputs=output_layer)
    return model
</code></pre>
<p><a href="https://stackoverflow.com/questions/48391265/keras-serializing-a-masking-layer-for-save-load">This question</a> shows how to do it for custom layers, but I don't have any custom layers. How can I serialize and retrieve this model? Thank you!</p>
| <python><tensorflow><keras> | 2023-08-17 15:50:10 | 1 | 1,955 | Mark C. |
76,923,117 | 5,838,180 | In python how to pass a variable to the command line? | <p>I have a function that I use something like this:</p>
<pre><code>def func(file_path):
!python some_script.py --par file_path
func('/some/file/path/file.csv')
</code></pre>
<p>This obviously results in an error message, because Python assumes that <code>file_path</code> in the line <code>!python some_script.py --par file_path</code> is a literal file path, while I want to use it as a variable representing a file path.</p>
<p>So, how do I pass a variable into the command line <code>!python some_script.py --par file_path</code>? How do I tell the command line that <code>file_path</code> is not the path, but represents it? Tnx</p>
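<p>For context, outside of the notebook <code>!</code> syntax the usual tool would be <code>subprocess</code>, where the variable's <em>value</em> is passed as a real argument (a sketch, with <code>some_script.py</code> as above):</p>

```python
import subprocess
import sys

def func(file_path):
    # Each list element becomes one argv entry, so file_path's *value*
    # reaches some_script.py -- no shell parsing involved
    subprocess.run([sys.executable, "some_script.py", "--par", file_path],
                   check=True)
```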
| <python><variables><command-line><sys> | 2023-08-17 15:46:32 | 0 | 2,072 | NeStack |
76,922,960 | 2,532,408 | Is ''tuple[A, B|C]'' or ''tuple[A,B]|tuple[A,C]'' preferred for mypy? | <p>Consider the following two annotations:</p>
<pre class="lang-py prettyprint-override"><code>def foo1(arg: tuple[datetime, int] | tuple[datetime, None]) -> datetime:
    ...

def foo2(arg: tuple[datetime, int | None]) -> datetime:
    ...
</code></pre>
<p><strong>Is there a reason to use one over the other?</strong> <em>(besides preference)</em></p>
<p>As far as I can tell they are logically equivalent; are they?</p>
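<p>To make the comparison concrete, here is a runnable sketch of the second spelling. At runtime both annotations accept exactly the same tuples, so any difference would only show up in how the type checker narrows them:</p>

```python
from datetime import datetime

def foo2(arg: "tuple[datetime, int | None]") -> datetime:
    dt, count = arg
    if count is not None:
        # a type checker narrows count to int in this branch
        assert count + 1 > count
    return dt

result = foo2((datetime(2023, 8, 17), 3))
```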
| <python><python-typing><mypy> | 2023-08-17 15:25:57 | 2 | 4,628 | Marcel Wilson |
76,922,917 | 5,722,359 | How to toggle sometimes one and sometimes more than one item in Sqlite3? | <pre><code>def toggle_status_of_item(self, item_id: str):
    sql = """UPDATE table
             SET status = CASE status
                 WHEN 0 THEN 1
                 ELSE 0 END
             WHERE item_id = ?"""
    self.cur.execute(sql, (item_id,))
    self.con.commit()
</code></pre>
<p>The above method toggles the boolean value in column <code>status</code> of a given <code>item_id</code>. However, <code>item_id</code> can sometimes be plural, i.e. it may contain more than one value, which I have no control of.</p>
<p>How should I rewrite the SQL statement to toggle sometimes one and sometimes more than one <code>item_id</code>? Other than changing <code>item_id: str</code> to <code>item_id: list</code>, how do I write the SQLite commands to apply the <code>CASE</code> statement to a list of item ids? Thank you in advance.</p>
<p>The method below does what I want, but it is not a pure SQLite approach. I would like to know the SQL commands that achieve the same thing.</p>
<pre><code>def toggle_status_of_item(self, item_ids: list):
    sql = """UPDATE table
             SET status = CASE status
                 WHEN 0 THEN 1
                 ELSE 0 END
             WHERE item_id = ?"""
    for id in item_ids:
        self.cur.execute(sql, (id,))
    self.con.commit()
</code></pre>
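<p>For reference, the usual pure-SQL approach to a variable number of ids is an <code>IN</code> clause built with one placeholder per id (a sketch; <code>items</code> stands in for the real table name, since <code>table</code> itself is a reserved word):</p>

```python
import sqlite3

def toggle_status_of_items(con, item_ids):
    # One "?" per id keeps every value parameter-bound: IN (?, ?, ...)
    placeholders = ", ".join("?" for _ in item_ids)
    sql = f"""UPDATE items
              SET status = CASE status WHEN 0 THEN 1 ELSE 0 END
              WHERE item_id IN ({placeholders})"""
    con.execute(sql, list(item_ids))
    con.commit()
```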
| <python><sqlite><sqlite3-python> | 2023-08-17 15:20:59 | 2 | 8,499 | Sun Bear |
76,922,668 | 1,946,418 | Using decorators from base class | <p>I have a class that inherits from another <code>base</code> class, and would like to use decorators</p>
<pre class="lang-py prettyprint-override"><code>class Base:
    def Checkpoint(func: Any) -> Any:
        def inner(self, *args, **kwargs):
            return f"calling {func} from a decorator"
        return inner
</code></pre>
<pre class="lang-py prettyprint-override"><code>class RegularClass(Base):
    @Checkpoint
    def DoSomething1(self):
        print("Doing something 1")

    @Checkpoint
    def DoSomething2(self):
        print("Doing something 1")
</code></pre>
<p>It works if I move <code>Checkpoint</code> inside <code>RegularClass</code>, but I am hoping to keep <code>Checkpoint</code> inside the <code>Base</code> class so I can reuse it across several classes.</p>
<p>I am getting <code>Undefined name 'Checkpoint'</code> error when I try to instantiate <code>RegularClass</code> and use it. Was following <a href="https://www.geeksforgeeks.org/creating-decorator-inside-a-class-in-python/" rel="nofollow noreferrer">this article</a>, not sure what I am missing.</p>
<p>I have also tried <code>@Base.Checkpoint</code>, but that doesn't seem to work either. Here is the error I am getting</p>
<pre><code>Base.Checkpoint() missing 1 required positional argument: 'func'
</code></pre>
<p>Not sure why it's adding the <code>()</code> after <code>Base.Checkpoint</code>? I don't need <code>()</code> with decorators, right?</p>
<p>Any ideas anyone? Python version: 3.11.4.</p>
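<p>For comparison, one workaround I have seen (not necessarily the intended design) is hoisting the decorator to module level, which sidesteps the class-attribute lookup entirely while staying reusable across classes:</p>

```python
from typing import Any, Callable

def checkpoint(func: Callable[..., Any]) -> Callable[..., Any]:
    # Module level: the decorator is resolved by normal name lookup,
    # with no class-body scoping rules involved
    def inner(self, *args, **kwargs):
        return f"calling {func} from a decorator"
    return inner

class Base:
    pass

class RegularClass(Base):
    @checkpoint
    def do_something(self):
        print("Doing something")
```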
| <python><python-decorators> | 2023-08-17 14:48:59 | 2 | 1,120 | scorpion35 |
76,922,615 | 5,741,205 | Create and call a function that "asynchronously" updates a file in a loop until the second function that is started in parallel is done | <p>I'm new to multiprocessing / threading and asyncio in Python and I'd like to parallelise two function calls so that the first function updates a healthcheck text file in an endless loop with 5 min. interval until the second function is done. After that the loop for the first function call should be stopped.</p>
<p>Here is an attempt, which is not working:</p>
<pre><code>import multiprocessing
import time

done_event = multiprocessing.Event()

# Function to update the healthcheck text file
def update_healthcheck():
    interval = 5 * 60  # 5 minutes interval in seconds
    while not done_event.is_set():
        with open("healthcheck.txt", "w") as f:
            f.write("Health is okay.")
        time.sleep(interval)

# Function that simulates the second task
def second_task():
    time.sleep(20)  # Simulating some work
    done_event.set()  # Set the event to signal the first function to stop

if __name__ == "__main__":
    # Start the first function in a separate process
    healthcheck_process = multiprocessing.Process(target=update_healthcheck)
    healthcheck_process.start()

    # Start the second function in the main process
    second_task()

    # Wait for the healthcheck process to finish
    healthcheck_process.join()

    print("Both tasks completed.")
</code></pre>
<p>What would be a correct and better implementation of that snippet?</p>
<p>Thank you!</p>
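<p>A sketch of what I believe the fix looks like (untested in my real environment): pass the <code>Event</code> into the worker explicitly so it is shared even under the <code>spawn</code> start method, and use <code>Event.wait</code> instead of <code>time.sleep</code> so the loop can stop mid-interval:</p>

```python
import multiprocessing

def update_healthcheck(done_event, interval):
    while True:
        # Always write at least one healthcheck before checking the event
        with open("healthcheck.txt", "w") as f:
            f.write("Health is okay.")
        # wait() returns True as soon as the event is set, so the loop
        # exits promptly instead of sleeping out the full interval
        if done_event.wait(interval):
            break

def second_task(done_event):
    # ... real work here ...
    done_event.set()
```

<p>The main block would then create the event, start <code>Process(target=update_healthcheck, args=(done_event, 300))</code>, run the second task, and join.</p>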
| <python><loops><python-asyncio><python-multiprocessing><python-multithreading> | 2023-08-17 14:43:41 | 1 | 211,730 | MaxU - stand with Ukraine |
76,922,573 | 2,350,150 | Python "or" executes both left and right expressions | <p>I have the following code:</p>
<pre><code>def func(i):
    a = i + 1
    (a % 15 == 0 and print(f"{a} AB")) or print(a)
    i < 15 and func(a)

func(0)
</code></pre>
<p>The output is:</p>
<pre><code>1
2
...
14
15 AB
15
16
</code></pre>
<p>I don't get why both <code>(a % 15 == 0 and print(f"{a} AB"))</code> and <code>print(a)</code> statements are executed when I am using <code>or</code> between them.</p>
<p>Any ideas?</p>
<p><strong>P.S.:
Just a small clarification, because of the comments: this is just an exercise and not real code that will ever be used in production.</strong></p>
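<p>A minimal illustration of the mechanism: <code>print()</code> returns <code>None</code>, and <code>None</code> is falsy, so the left side of the <code>or</code> can never be truthy, which is why the right side also runs:</p>

```python
value = print("hello")   # print() always returns None
assert value is None

# None is falsy, so `or` moves on to its right operand;
# a truthy left operand would short-circuit instead
assert (None or "right") == "right"
assert ("left" or "right") == "left"
```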
| <python> | 2023-08-17 14:39:10 | 1 | 1,633 | Georgi Georgiev |
76,922,432 | 6,026,181 | Field ordering on .model_dump() | <p>When I call <code>my_model.model_dump()</code> I need the fields to be ordered in a specific way.</p>
<p>Pydantic has <a href="https://docs.pydantic.dev/dev-v1/usage/models/#field-ordering" rel="nofollow noreferrer">rules</a> for how fields are ordered. However my issue is I have a <code>computed_field</code> that I need to be dumped before other non-computed fields. Pydantic seems to place this computed field last no matter what I do.</p>
<p>In the example below I need the <code>computed_field</code> <code>foobar</code> to come before <code>buzz</code>:</p>
<pre><code>from pydantic import BaseModel, computed_field

class MyModel(BaseModel):
    foo: str
    bar: str
    buzz: str

    @computed_field
    @property
    def foobar(self) -> str:
        return self.foo + self.bar

if __name__ == "__main__":
    my_model = MyModel(foo="foo", bar="bar", buzz="buzz")
    print(my_model.model_dump())
</code></pre>
<p>Expected result:</p>
<pre><code>{'foo': 'foo', 'bar': 'bar', 'buzz': 'buzz', 'foobar': 'foobar'}
</code></pre>
<p>How can I get <code>foobar</code> to appear before <code>buzz</code> in the dumped model?</p>
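<p>One workaround I have been considering is post-processing rather than a pydantic feature: rebuild the dumped dict in the desired key order (shown on a plain dict standing in for <code>model_dump()</code>'s result):</p>

```python
def reorder(data, order):
    # Plain dicts preserve insertion order, so rebuilding in the
    # desired order fixes the serialized layout
    return {key: data[key] for key in order}

dumped = {"foo": "foo", "bar": "bar", "buzz": "buzz", "foobar": "foobar"}
fixed = reorder(dumped, ["foo", "bar", "foobar", "buzz"])
```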
| <python><pydantic> | 2023-08-17 14:22:54 | 2 | 378 | gavinest |
76,922,384 | 17,160,160 | Cumulative constraint expression over indexed set in Pyomo | <p>Using the following simplified model:</p>
<pre><code>model = ConcreteModel()
model.WEEKS = Set(initialize = [1,2,3])
model.PRODS = Set(initialize = ['Q24','J24','F24'])
model.MONTHS = Set(initialize = ['J24','F24'])
model.volume = Var(model.WEEKS,model.PRODS, within = NonNegativeIntegers)
model.auction = Var(model.WEEKS,model.PRODS, within = Binary)
model.HedgeMin = Param(model.WEEKS,model.MONTHS, within = NonNegativeIntegers, initialize = {(1,'J24'):45,(1,'F24'):45,
(2,'J24'):90,(2,'F24'):90,
(3,'J24'):135,(3,'F24'):135})
model.HedgeMax = Param(model.WEEKS,model.MONTHS, within = NonNegativeIntegers, initialize = {(1,'J24'):60,(1,'F24'):60,
(2,'J24'):120,(2,'F24'):120,
(3,'J24'):180,(3,'F24'):180})
</code></pre>
<p>I am attempting to construct a constraint that applies cumulative bounds to the product of <code>auction</code> * <code>volume</code> for each week in <code>model.WEEKS</code> and each month in <code>model.MONTHS</code>. using helpful guidance from @AirSquid's response to this <a href="https://stackoverflow.com/questions/74025003/expression-with-cumulative-sum-in-pyomo">post</a>, I have constructed the following:</p>
<pre><code>def hedge_rule(model, i, j):
    subset = {x for x in model.WEEKS if x <= i}
    return (model.HedgeMin[i,j],
            sum(model.auction[i,j] * model.volume[i,j] for i in subset for j in model.MONTHS),
            model.HedgeMax[i,j])

model.hedge_const = Constraint(model.WEEKS, model.MONTHS, rule = hedge_rule)
</code></pre>
<p>Which produces the following constraint:</p>
<pre><code>hedge_const : Size=6, Index=hedge_const_index, Active=True
Key : Lower : Body : Upper : Active
(1, 'F24') : 45.0 :auction[1,J24]*volume[1,J24] + auction[1,F24]*volume[1,F24] : 60.0 :True
(1, 'J24') : 45.0 :auction[1,J24]*volume[1,J24] + auction[1,F24]*volume[1,F24] : 60.0 :True
(2, 'F24') : 90.0 :auction[1,J24]*volume[1,J24] + auction[1,F24]*volume[1,F24] + auction[2,J24]*volume[2,J24] + auction[2,F24]*volume[2,F24] : 120.0 : True
(2, 'J24') : 90.0 :auction[1,J24]*volume[1,J24] + auction[1,F24]*volume[1,F24] + auction[2,J24]*volume[2,J24] + auction[2,F24]*volume[2,F24] : 120.0 : True
(3, 'F24') : 135.0 : auction[1,J24]*volume[1,J24] + auction[1,F24]*volume[1,F24] + auction[2,J24]*volume[2,J24] + auction[2,F24]*volume[2,F24] + auction[3,J24]*volume[3,J24] + auction[3,F24]*volume[3,F24] : 180.0 : True
(3, 'J24') : 135.0 : auction[1,J24]*volume[1,J24] + auction[1,F24]*volume[1,F24] + auction[2,J24]*volume[2,J24] + auction[2,F24]*volume[2,F24] + auction[3,J24]*volume[3,J24] + auction[3,F24]*volume[3,F24] : 180.0 : True
</code></pre>
<p>I believe that, because I am summing <code>auction[i,j]*volume[i,j]</code> over the set <code>model.MONTHS</code>, both <code>J24</code> and <code>F24</code> are included in each set declaration. So, for example:<br />
<code>(1, 'F24')</code> = <code>45.0 :auction[1,J24]*volume[1,J24] + auction[1,F24]*volume[1,F24] : 60.0</code>.</p>
<p>However, I would like to make it so that only the relevant months are included with each key. i.e.<br />
<code>(1, 'F24')</code> = <code>45.0 :auction[1,F24]*volume[1,F24] : 60.0</code><br />
<code>(1, 'J24')</code> = <code>45.0 :auction[1,J24]*volume[1,J24] : 60.0</code><br />
<code>(2, 'F24')</code> = <code>90.0 :auction[1,F24]*volume[1,F24] + auction[2,F24]*volume[2,F24] : 120.0</code><br />
<code>(2, 'J24')</code> = <code>90.0 :auction[1,J24]*volume[1,J24] + auction[2,J24]*volume[2,J24] : 120.0</code>
etc.</p>
<p>My question is, is this possible with a single definition of <code>model.hedge_rule</code>? I imagine the solution will require the creation of separate sets for each element in <code>MONTHS</code> but I'm not quite sure how to implement it.</p>
<p>As always, help and guidance much appreciated!</p>
<hr />
<p><strong>Additional Details</strong>
In response to @AirSquid's reply below:</p>
<p>In my simple formulation above, all elements of <code>PRODS</code> are currently associated with all elements of <code>WEEKS</code>. In this way @AirSquid's formulation of <code>WEEK_PROD</code> would actually output:</p>
<pre><code>WEEK_PROD : Size=3, Index=WEEKS, Ordered=Insertion
Key : Dimen : Domain : Size : Members
1 : 1 : Any : 3 : {'Q24', 'J24', 'F24'}
2 : 1 : Any : 3 : {'Q24', 'J24', 'F24'}
3 : 1 : Any : 3 : {'Q24', 'J24', 'F24'}
</code></pre>
<p>For this reason, the formulation of <code>C1</code> wont quite produce the desired results as all products are summed across each week:</p>
<pre><code>C1 : Size=3, Index=WEEKS, Active=True
Key : Lower : Body : Upper : Active
1 : 5.0 : auction[1,Q24]*volume[1,Q24] + auction[1,J24]*volume[1,J24] + auction[1,F24]*volume[1,F24] : 10.0 : True
2 : 5.0 : auction[2,Q24]*volume[2,Q24] + auction[2,J24]*volume[2,J24] + auction[2,F24]*volume[2,F24] : 10.0 : True
3 : 5.0 : auction[3,Q24]*volume[3,Q24] + auction[3,J24]*volume[3,J24] + auction[3,F24]*volume[3,F24] : 10.0 : True
</code></pre>
<p>I am trying to end up with output indexed using keys <code>(week, product)</code> in which the <code>Q24</code> (or any product beginning 'Q') is excluded to produce the output examples in my original post.</p>
<p>Ultimately though, I'd like the relevant <code>(week,'Q24')</code> entries to be summed along with each entry for i.e. <code>J24</code> and<code>F24</code>. Note that this aspect of the problem was originally excluded in an attempt to keep things simpler in my post.</p>
<p>For clarity sake, this is the exact output I attempting to achieve:<br />
<strong>Output</strong></p>
<pre><code>Key : Lower : Body : Upper : Active
(1, 'F24') : 45.0 :auction[1,F24]*volume[1,F24] + auction[1,Q24]*volume[1,Q24] : 60.0 : True
(1, 'J24') : 45.0 :auction[1,J24]*volume[1,J24] + auction[1,Q24]*volume[1,Q24]: 60.0 : True
(2, 'F24') : 90.0 :auction[1,F24]*volume[1,F24] + auction[2,F24]*volume[2,F24] + auction[1,Q24]*volume[1,Q24] + auction[2,Q24]*volume[2,Q24] : 120.0 : True
(2, 'J24') : 90.0 :auction[1,J24]*volume[1,J24] + auction[2,J24]*volume[2,J24] + auction[1,Q24]*volume[1,Q24] + auction[2,Q24]*volume[2,Q24]: 120.0 : True
(3, 'F24') : 135.0 :auction[1,F24]*volume[1,F24] + auction[2,F24]*volume[2,F24] + auction[3,F24]*volume[3,F24] + auction[1,Q24]*volume[1,Q24] + auction[2,Q24]*volume[2,Q24] + auction[3,Q24]*volume[3,Q24]: 180.0 : True
(3, 'J24') : 135.0 :auction[1,J24]*volume[1,J24] + auction[2,J24]*volume[2,J24] + auction[3,J24]*volume[3,J24] + auction[1,Q24]*volume[1,Q24] + auction[2,Q24]*volume[2,Q24] + auction[3,Q24]*volume[3,Q24]: 180.0 : True
</code></pre>
| <python><pyomo> | 2023-08-17 14:15:39 | 1 | 609 | r0bt |
76,922,296 | 8,284,452 | How to pipe python subprocess to batch command in linux? (subprocess runs bash command that calls a python script) | <p>I am trying to run a python script by piping it to the <code>batch</code> command of Linux to basically form a task queue. This requires that my python subprocess be spawned by bash instead of spawning the python script directly in the subprocess, so the subprocess is sort of nested, I guess. The python script runs and succeeds, in that it outputs a text file and the process gives me a <code>returncode</code> of <code>0</code>, but it doesn't clear from the queue if I check <code>atq</code> in the terminal, and the <code>stderr</code> of the process prints the job number, indicating there is some sort of error. I don't know how to get a more detailed error message from the shell, so if anyone has information on that, I'm all ears. I think the <code>batch</code> command is supposed to send an email if there's an error, but I've never gotten one.</p>
<p>How I am calling the subprocess:</p>
<pre><code>process = subprocess.Popen(["/bin/sh", "-c", f"python ./atqTest/hello.py --stuff 1980 | batch"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = process.communicate()
print(f'out: {out.decode()}')
print(f'err: {err.decode()}')
print(process.returncode)
</code></pre>
<p>What <code>hello.py</code> looks like:</p>
<pre><code>import sys, argparse

def parse_args(args):
    """Parses command line arguments."""
    parser = argparse.ArgumentParser(
        description="""do stuff""",
        usage="""--stuff <random arg>\n"""
    )
    parser.add_argument(
        "--stuff",
        required=True,
        help="""put something in"""
    )
    return parser.parse_args(args)

def main():
    parser = parse_args(sys.argv[1:])
    with open(f'/Documents/test/test.txt', 'w') as f:
        for item in parser.stuff:
            f.write(item + '\n')

if __name__ == "__main__":
    main()
</code></pre>
<p>The output I get:</p>
<pre><code>out:
err: job 94 at Thu Aug 17 10:01:23 2023
0
</code></pre>
<p>After typing <code>atq</code> in the terminal to check the subprocess status (you can see it's not running):</p>
<pre><code>94 Thu Aug 17 10:01:00 2023
</code></pre>
<p>I have also tried using this to call the process but it doesn't make a difference:</p>
<pre><code>process = subprocess.Popen(f"python ./atqTest/hello.py --stuff 1980 | batch", stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
</code></pre>
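<p>One detail that may be relevant: in <code>cmd | batch</code>, it is <em>cmd's stdout</em> that becomes the job script, because <code>batch</code> reads the commands to queue from stdin. So to queue the python invocation itself, the command text is what should go down the pipe. A generic sketch (using <code>cat</code> as a stand-in for <code>batch</code> so it can run anywhere):</p>

```python
import subprocess

def pipe_text_to(program, command_text):
    # `batch` reads the commands to queue from its stdin, so the command
    # *text* must be piped in -- `cmd | batch` instead pipes cmd's output
    return subprocess.run(program, input=command_text, text=True,
                          capture_output=True)
```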
| <python><python-3.x><linux><bash><subprocess> | 2023-08-17 14:04:47 | 1 | 686 | MKF |
76,922,098 | 1,474,073 | How to use generated code from Protobuf from a Pip dependency? | <p>Let's say I have one Python package where the following is generated from a <code>foo.proto</code>:</p>
<pre class="lang-protobuf prettyprint-override"><code>syntax = "proto3";
package foo;
message Foo {
string bar = 1;
}
</code></pre>
<p>Let's say the package is called <code>foo_lib</code>.</p>
<p>Now, I want to create another package <code>bar</code> that references this. I already pushed the <code>foo_lib</code> to my private PyPi repository, and I successfully <code>pip install foo_lib</code> in my <code>bar</code> project.</p>
<p>Now, I have the following:</p>
<pre class="lang-protobuf prettyprint-override"><code>syntax = "proto3";
import "foo.proto";
package bar;
message Bar {
.foo.Foo foo = 1;
}
</code></pre>
<p>I generate the code like this:</p>
<pre class="lang-bash prettyprint-override"><code>python -m grpc_tools.protoc -I path/to/directory/where/bar.proto/is -I path/to/directory/where/foo.proto/is --python_out src/bar path/to/directory/where/bar.proto/is/bar.proto
</code></pre>
<p>This generates the code correctly, but the generated Python code chokes on</p>
<pre class="lang-py prettyprint-override"><code>import foo_pb2 as foo__pb2
</code></pre>
<p>because with <code>foo</code> referenced as a PyPi dependency, it is located at <code>foo_lib.foo_pb2</code>.</p>
<p>Is it possible to somehow tell the Protobuf compiler that it should prefix <code>foo_lib</code> or do I really have to generate all dependent Python code again and again?</p>
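<p>As a stopgap (I am not sure it is the sanctioned approach), the generated file could be post-processed to rewrite the offending import, e.g.:</p>

```python
import re

def rewrite_import(generated_source):
    # Post-process the generated *_pb2.py so the sibling import points
    # at the installed foo_lib package instead of a local module
    return re.sub(r"^import foo_pb2 as foo__pb2$",
                  "from foo_lib import foo_pb2 as foo__pb2",
                  generated_source, flags=re.MULTILINE)
```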
| <python><pip><protocol-buffers> | 2023-08-17 13:43:11 | 1 | 8,242 | rabejens |
76,922,092 | 10,771,559 | How to upload multiple datasets and output a table of each dataframe in Dash? | <p>I have a simple dash app where a user can upload a dataset and the app outputs the dataset in a table.</p>
<p>I want the user to be able to upload multiple datasets and the table of each of these datasets to be shown.</p>
<p>So for example, if the user was to upload three different datasets than the dash app would show a table of the first dataset to be uploaded, followed by a table of the second dataset to be uploaded, followed by a table of the third dataset to be uploaded.</p>
<p>How can I do this? Here is my current code:</p>
<pre><code>import base64
import datetime
import io

import plotly.graph_objs as go
import plotly.express as px
import dash
from dash.dependencies import Input, Output, State
from dash import dcc, html, dash_table
import numpy as np
import pandas as pd

external_stylesheets = ["https://codepen.io/chriddyp/pen/bWLwgP.css"]

app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
server = app.server

colors = {"graphBackground": "#F5F5F5", "background": "#ffffff", "text": "#000000"}

app.layout = html.Div(
    [
        dcc.Upload(
            id="upload-data",
            children=html.Div(["Drag and Drop or ", html.A("Select Files")]),
            multiple=True,
        ),
        html.Div(id="output-data-upload"),
    ]
)


def parse_data(contents, filename):
    content_type, content_string = contents.split(",")
    decoded = base64.b64decode(content_string)
    try:
        if "csv" in filename:
            # Assume that the user uploaded a CSV or TXT file
            df = pd.read_csv(io.StringIO(decoded.decode("utf-8")))
        elif "xls" in filename:
            # Assume that the user uploaded an Excel file
            df = pd.read_excel(io.BytesIO(decoded))
        elif "txt" in filename or "tsv" in filename:
            # Assume that the user uploaded a whitespace-delimited text file
            df = pd.read_csv(io.StringIO(decoded.decode("utf-8")), delimiter=r"\s+")
    except Exception as e:
        print(e)
        return html.Div(["There was an error processing this file."])
    return df


@app.callback(
    Output("output-data-upload", "children"),
    [Input("upload-data", "contents"), Input("upload-data", "filename")],
)
def update_table(contents, filename):
    table = html.Div()
    if contents:
        contents = contents[0]
        filename = filename[0]
        df = parse_data(contents, filename)
        table = html.Div(
            [
                html.H5(filename),
                dash_table.DataTable(
                    data=df.to_dict("rows"),
                    columns=[{"name": i, "id": i} for i in df.columns],
                ),
                html.Hr(),
                html.Div("Raw Content"),
                html.Pre(
                    contents[0:200] + "...",
                    style={"whiteSpace": "pre-wrap", "wordBreak": "break-all"},
                ),
            ]
        )
    return table


if __name__ == "__main__":
    app.run_server(debug=True)
</code></pre>
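<p>The change I believe is needed, shown framework-free (a sketch of just the pairing logic; in the real callback each pair would become its own <code>html.Div</code> holding a <code>DataTable</code>):</p>

```python
def build_outputs(contents_list, filename_list, parse):
    # One output entry per uploaded file, preserving upload order,
    # instead of taking only contents[0] / filename[0]
    outputs = []
    for contents, filename in zip(contents_list or [], filename_list or []):
        outputs.append((filename, parse(contents, filename)))
    return outputs
```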
| <python><plotly-dash> | 2023-08-17 13:42:16 | 1 | 578 | Niam45 |
76,922,055 | 4,930,914 | Match against List and pil | <p>I am new to the PIL package. I wish to merge only those images whose names are in the list.</p>
<p>w.txt file contains these filename</p>
<pre><code>1
2
</code></pre>
<p>Folder contains 4 images 1.png, 2.png, 3.png and 4.png</p>
<pre><code>from PIL import Image
with open('w.txt') as f:
    my_list = list(f)

print(my_list)
img_01 = Image.open("1.png")
img_02 = Image.open("2.png")
img_03 = Image.open("3.png")
img_04 = Image.open("4.png")
img_01_size = img_01.size
img_02_size = img_02.size
img_03_size = img_02.size
img_02_size = img_02.size
new_im = Image.new('RGB', (2*img_01_size[0],2*img_01_size[1]), (250,250,250))
new_im.paste(img_01, (0,0))
new_im.paste(img_02, (img_01_size[0],0))
new_im.paste(img_03, (0,img_01_size[1]))
new_im.paste(img_04, (img_01_size[0],img_01_size[1]))
new_im.save("merged_images.png", "PNG")
</code></pre>
<p>I am unsure of where to put the loop with PIL. Kindly help.</p>
<p><a href="https://i.sstatic.net/F0yfF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F0yfF.png" alt="enter image description here" /></a></p>
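<p>A sketch of the looping I think is needed (assuming, as in my code, that each name in w.txt maps to <code>&lt;name&gt;.png</code> and the grid is two columns wide):</p>

```python
from PIL import Image

def read_names(path):
    # list(f) keeps trailing newlines, so "1\n" would never match "1.png";
    # strip each line first
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def merge_images(images, cols=2, background=(250, 250, 250)):
    # Paste each image onto a grid that is `cols` wide, row by row
    w, h = images[0].size
    rows = (len(images) + cols - 1) // cols
    merged = Image.new('RGB', (cols * w, rows * h), background)
    for idx, img in enumerate(images):
        merged.paste(img, ((idx % cols) * w, (idx // cols) * h))
    return merged
```

<p>The merge would then be <code>merge_images([Image.open(f"{n}.png") for n in read_names('w.txt')])</code>.</p>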
| <python><image><python-imaging-library> | 2023-08-17 13:36:33 | 1 | 915 | Programmer_nltk |
76,922,011 | 5,583,772 | How does polars handle modulus of negative numbers | <p>I see that polars has a modulus operator, but it seems to work differently than I expect. In this example code:</p>
<pre><code>import polars as pl
import numpy as np
x = 2.1
df = pl.DataFrame({'x':x})
print(x % 2) # Prints 0.1
print(np.mod(x,2)) # Prints 0.1
print(df.select(pl.col('x').mod(2))) # Prints 0.1
x = -.1
df = pl.DataFrame({'x':x})
print(x % 2) # Prints 1.9
print(np.mod(x,2)) # Prints 1.9
print(df.select(pl.col('x').mod(2))) #Prints -0.1
</code></pre>
<p>You can see that I get the same answer from %, numpy and polars when I start from a positive number, but a different answer when starting from a negative. Is there a way to force polars to return the positive version matched to numpy?</p>
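<p>The standard identity for turning a truncated remainder into a floored (non-negative for a positive modulus) one is <code>((x mod m) + m) mod m</code>; in polars that would presumably translate to something like <code>pl.col('x').mod(2).add(2).mod(2)</code>, though I have not confirmed it. A plain-Python check of the identity, using <code>math.fmod</code>, which truncates the way polars appears to:</p>

```python
import math

def positive_mod(x, m):
    # ((x mod m) + m) mod m converts a truncated remainder (which keeps
    # the sign of x, matching polars' result here) into a floored one
    return math.fmod(math.fmod(x, m) + m, m)
```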
| <python><python-polars> | 2023-08-17 13:30:17 | 1 | 556 | Paul Fleming |
76,921,985 | 10,227,815 | How to use jinja template in dataframe | <p>I've pyspark df like below:</p>
<pre><code>FirstName LastName Score
Hello World [('Math', 90), ('Eng', 80)]
ABC XYZ [('Math', 90)]
</code></pre>
<p>The column <code>Score</code> is nothing but <code>struct</code> type in Spark which looks like below:</p>
<pre><code>[Row(sub='Math', score=90), Row(sub='Eng', score=80)]
</code></pre>
<p>I want to turn this <code>Score</code> column into a <code>Score_HTML</code> column.
The expected output is:</p>
<pre><code>FirstName LastName Score_HTML
Hello World "<b>FullName:</b>Hello World <br><br> <table border="1"><tr><td>Sub</td><td>Score</td></tr><tr><td>Math</td><td>90</td></tr><tr><td>Eng</td><td>80</td></tr></table>"
ABC XYZ "<b>FullName:</b>ABC XYZ <br><br> <table border="1"><tr><td>Sub</td><td>Score</td></tr><tr><td>Math</td><td>90</td></tr></table>"
</code></pre>
<p>How can I achieve this using a <code>Jinja</code> template?</p>
<p>I even tried to convert to Pandas DF from Spark, and then apply Jinja template like below:</p>
<pre><code>import jinja2
environment = jinja2.Environment()
template = environment.from_string(
"""
<b>FullName:</b>{{ FirstName }} {{ LastName }} </br></br>
<table border="1"><tr><td>Sub</td><td>Score</td></tr><tr>
{% for value in df['Score'] %}
<td>"{{ value['sub'] }}"</td><td>"{{ value['score'] }}"</td>
{% endfor %}
</tr></table>
"""
)
df['Score_HTML'] = template.render(FirstName=df['FirstName'], LastName=df['LastName']) ???
</code></pre>
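For one row, the kind of rendering I'm after would be roughly the following (jinja2, with plain tuples standing in for the Row objects); I assume the template then has to be applied per row, e.g. with a pandas apply or a Spark UDF:

```python
from jinja2 import Environment

template = Environment().from_string(
    '<b>FullName:</b>{{ first }} {{ last }} <br><br>'
    '<table border="1"><tr><td>Sub</td><td>Score</td></tr>'
    '{% for sub, score in scores %}'
    '<tr><td>{{ sub }}</td><td>{{ score }}</td></tr>'
    '{% endfor %}</table>'
)

html = template.render(first="Hello", last="World",
                       scores=[("Math", 90), ("Eng", 80)])
print(html)
```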
<p>I need help defining the Jinja template and using it in a DF [either Spark DF or pandas DF] to achieve this.<br>
Thanks in advance.</p>
| <python><pandas><pyspark><jinja2> | 2023-08-17 13:26:11 | 2 | 303 | SHM |
76,921,840 | 10,866,873 | Regex match strings and not docstrings or comment strings | <p>I am trying to match valid strings in a bunch of code e.g.:<br>
<code>myvar = "abc"</code>->match 'abc'.<br>
<code>dict['key'] = 'value'</code> -> match 'key' and 'value' separately.<br>
<code>default_val = ''</code> -> match '' (empty string).</p>
<p>However I don't want it to match docstrings or strings inside comments, e.g.<br></p>
<pre><code>"""
some doc string
with some random 'string'
"""
</code></pre>
<p><code>#this comment contains "a string"</code><br>
both of these should not match anything.</p>
<p>I use tab indentation as well, so that may need to be included in the regex, and I only want the actual string to be returned as a match.</p>
<p>Currently I have managed to get to this: <code>^\t*(?!#).+(?:[^\"\']{2,})[\"\'].*[\"\']$</code>, which doesn't work and probably never will the way I want it to.</p>
<p>An outline of the method I have in mind:</p>
<ul>
<li>find the first occurrence of <code>"</code> or <code>'</code> in the line;</li>
<li>if another <code>""</code> or <code>''</code> follows immediately (i.e. a triple quote), skip checking to after the next instance of <code>"""</code> or <code>'''</code>;</li>
<li>otherwise match up to the next <code>"</code> or <code>'</code>, provided looking back on the same line finds no <code>#</code>.</li>
</ul>
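A rough line-based sketch of this method (it deliberately ignores multi-line docstrings and naively assumes <code>#</code> never occurs inside a string, so it is only a starting point):

```python
import re

# a double- or single-quoted literal, with backslash escapes allowed inside
STRING = re.compile(r"""("(?:[^"\\]|\\.)*"|'(?:[^'\\]|\\.)*')""")

def find_strings(line):
    if line.lstrip("\t ").startswith("#"):      # full-line comment
        return []
    code = line.split("#")[0]                   # naive: assumes no '#' inside strings
    return [m.group(0)[1:-1] for m in STRING.finditer(code)]

print(find_strings("dict['key'] = 'value'"))                  # ['key', 'value']
print(find_strings('# this comment contains "a string"'))     # []
```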
| <python><regex><sublimetext> | 2023-08-17 13:09:24 | 2 | 426 | Scott Paterson |
76,921,805 | 1,111,088 | File cannot be parsed and uploaded to Azure in the same function | <p>I have this code snippet in Python/Flask where the user can upload multiple files and I want to parse the XML file first and then upload the file to Azure:</p>
<pre><code>import xmltodict

def function1():
    files = request.files.getlist("file")
    for file in files:
        doc = xmltodict.parse(file)
        blobname = azure.blob_upload(file)
</code></pre>
<p>The <code>file</code> object becomes messed up somehow after either the parsing or the upload happens. When I parse it first (like in the code sample), the upload doesn't stop even for a very small file. When I upload it first, a parsing error occurs: "No element found: line 1, column 0".</p>
<p>The parsing line works perfectly if I run it on its own. The file is also uploaded properly and quickly to Azure if it's run on its own.</p>
<p>Here's the code snippet for blob_upload:</p>
<pre><code>from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(account_url="url", credential="key")
container = service.get_container_client("container")

def blob_upload(data):
    blob = container.get_blob_client("example")
    blob.upload_blob(data)
    return blob.blob_name
</code></pre>
<p>I could probably upload it first then pull the file from Azure after, but that sounds awfully inefficient. Besides, I would want to parse the XML file first anyway to determine if it's properly formatted before I upload to Azure.</p>
<p>Is there a way to make this work?</p>
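One thing I noticed while debugging: reading a stream twice without rewinding returns nothing the second time, which would explain both symptoms. A minimal stand-in with <code>io.BytesIO</code> (so a <code>file.seek(0)</code> between the parse and the upload might be the fix):

```python
import io

# a file upload is a stream with a single read position; the XML parser
# consumes it, so the uploader then sees an empty stream unless it is rewound
stream = io.BytesIO(b"<root><a>1</a></root>")

first = stream.read()    # what the parser consumes
second = stream.read()   # what the uploader would then get: b""
stream.seek(0)           # rewind between the two consumers
third = stream.read()

print(second == b"", first == third)  # True True
```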
| <python><flask><azure-web-app-service><xmltodict> | 2023-08-17 13:02:55 | 1 | 2,480 | rikitikitik |
76,921,586 | 8,461,786 | Type hint for an argument that is a value of a class attribute | <p>In the below code, how can I make the Pylance type error go away? Disclaimer - I understand why the error is there, I just don't know how to make the code work for both the function and the dict at the same time.</p>
<p>Preferably, I would like to achieve this without migrating the class to <code>Enum</code> or its attributes to <code>Literal</code> (I have tried both, but with limited success - there is always an error in either the dict or the function).</p>
<pre><code>class Constants:
    a = 'aaa'
    b = 'bbb'

d = {
    Constants.a: 'a value',
    Constants.b: 'b value',
}

def print_constant(constant: Constants.a):  # here Pylance produces the error cited below
    print(constant)

print_constant(Constants.a)
print(d[Constants.a])
</code></pre>
<blockquote>
<p>Expected type expression but received
"str"PylancereportGeneralTypeIssues (class) Constants</p>
</blockquote>
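For reference, the <code>Literal</code> variant I experimented with looked roughly like this (the alias name is mine); it runs and satisfies the function annotation, but I couldn't make everything line up in my real code:

```python
from typing import Literal

ConstantA = Literal['aaa']   # hypothetical alias mirroring Constants.a

class Constants:
    a: ConstantA = 'aaa'
    b = 'bbb'

d = {Constants.a: 'a value', Constants.b: 'b value'}

def print_constant(constant: ConstantA) -> None:
    print(constant)

print_constant(Constants.a)
print(d[Constants.a])
```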
| <python><python-typing><pyright> | 2023-08-17 12:34:41 | 1 | 3,843 | barciewicz |
76,921,455 | 2,435,911 | Mix flow and block style in YAML dump | <p>I am using <code>Python 3.8.10</code> and <code>ruamel 0.17.31</code>. I have this python dict</p>
<pre><code>d = {'a': {'b': {'c': {'x': 1, 'y': 1}, 'd': [1, 2, 3], 'e': {'f': 1, 'g': 1} }}}
</code></pre>
<p>Using <code>ruamel.yaml</code>, I want it to be printed like:</p>
<pre><code>a:
b:
c:
x: 1
y: 1
d: [1, 2, 3]
e:
f: 1
g: 1
</code></pre>
<p>I think I am mixing the flow and block styles. By using <code>default_flow_style</code> I can get either of these two outputs:</p>
<pre><code>a:
b:
c:
x: 1
y: 1
d:
- 1
- 2
- 3
e:
f: 1
g: 1
</code></pre>
<p>or</p>
<pre><code>a:
b:
c: {x: 1, y: 1}
d: [1, 2, 3]
e: {f: 1, g: 1}
</code></pre>
<p>Do I need to write a custom representer?</p>
| <python><yaml><ruamel.yaml> | 2023-08-17 12:17:35 | 1 | 979 | harsh |
76,921,442 | 7,999,749 | Draw a 3d drape over the non zero non null pixels in my 3d tif image | <p>I have a 3D TIFF pixel image and want to draw a 3D drape over the non-zero, non-null pixels.</p>
<p>At first I worked in a 2D manner: I passed over every slice, tried to draw a shape around the pixels, and then plotted the shapes together to form a 3D surface, but it didn't work out.</p>
<p>I also tried the 3D suite in Fiji using 3D ROI drawing, but it only drew a fixed 2D circle on every slice.</p>
| <python><r><3d><imagej><fiji> | 2023-08-17 12:16:03 | 1 | 968 | Ali_Nass |
76,921,435 | 1,479,670 | Why can't I repeat the training of a model with identical results? | <p>I am trying to replicate the results of my model, but each time I run it the results are different (even after restarting the Google Colab runtime).
Here's the model:</p>
<pre><code># set random seed
tf.random.set_seed(42)

# 1. Create the model
model_1 = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1, input_shape=[1])
])

# 2. Compile the model
model_1.compile(loss=tf.keras.losses.mae,
                optimizer=tf.keras.optimizers.SGD(),
                metrics=["mae"])

# 3. Fit the model
model_1.fit(X_train, y_train, epochs=100)
</code></pre>
<p><code>X_train</code> and <code>y_train</code> are two tensors each with <code>shape=(40,)</code>, and <code>dtype=int32</code></p>
<pre><code># 4. make a prediction
y_pred = model_1.predict(X_test)

# 5. mean absolute error
mae = tf.metrics.mean_absolute_error(y_true=y_test,
                                     y_pred=tf.squeeze(tf.constant(y_pred)))
</code></pre>
<p><code>X_test</code> and <code>y_test</code> are two tensors each with <code>shape=(10,)</code>, and <code>dtype=int32</code></p>
<p>Now every time I run the above code I get a different value for <code>mae</code>, for example:
8.63, 21.26, 14.96, 14.93, 14.84, ...</p>
<p>I had expected to get identical runs, because I set the random seed before building and training the model.</p>
<p>How can I exactly reproduce my model's performance?</p>
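Since asking, I've read that <code>tf.random.set_seed</code> alone doesn't pin Python's and NumPy's generators; <code>tf.keras.utils.set_random_seed(42)</code> reportedly seeds all three at once, and <code>tf.config.experimental.enable_op_determinism()</code> makes ops deterministic (I haven't verified this end to end). The general principle is that the seed must be set immediately before every run, not once per session, which is easy to demonstrate with the stdlib:

```python
import random

def run(seed):
    random.seed(seed)                 # re-seed before *every* run
    return [random.random() for _ in range(3)]

print(run(42) == run(42))   # True: same seed, identical sequence
```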
| <python><tensorflow><keras> | 2023-08-17 12:15:11 | 1 | 1,355 | user1479670 |
76,921,229 | 1,509,695 | Make plotly 3D chart have its camera position on the depth line going through the origin | <p>I'd like my 3D Scatter chart to launch with its camera looking straight towards the origin of the data's coordinate system, on the ray coming out from the depth vector <code>(0, 0, 1)</code>.</p>
<p>ChatGPT explains the <code>camera</code> argument of plotly really well, but it seems that the following code, my best shot at it so far, still launches the chart with a tilt compared to the camera really being on the depth line going to the origin.</p>
<pre><code>from numpy import array
import plotly.graph_objects as go

vec1 = array([-0.20306842, 0.90820287, 0.36596552])
vec2 = array([-0.91857355, 0.15817927, -0.36221809])
vec3 = array([ 0.38370564, 0.13700477, -0.91323583])

fig = go.Figure(data=[
    go.Scatter3d(
        x=[0, vec1[0]],
        y=[0, vec1[1]],
        z=[0, vec1[2]],
        mode='lines',
        line=dict(width=10, color='blue'),
    ), go.Scatter3d(
        x=[0, vec2[0]],
        y=[0, vec2[1]],
        z=[0, vec2[2]],
        mode='lines',
        line=dict(width=10, color='green'),
    ), go.Scatter3d(
        x=[0, vec3[0]],
        y=[0, vec3[1]],
        z=[0, vec3[2]],
        mode='lines',
        line=dict(width=10, color='black'),
    ), go.Scatter3d(
        x=[0, 0],
        y=[0, 0],
        z=[0, 1],
        mode='lines',
        line=dict(width=10, color='red'),
        name='viewport depth direction'
    )])

fig.update_scenes(
    camera=dict(
        # transform such that the depth axis of the data coordinate system, which is the standard 3D coordinate system, maps to the z-axis of the window, contrary to plotly's default window projection.
        up=dict(x=0, y=0, z=1),
        center=dict(x=0, y=0, z=0),
        eye=dict(x=0, y=0, z=3)
    ))
fig.show()
</code></pre>
<p>As seen here:</p>
<p><a href="https://i.sstatic.net/Hzo9E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Hzo9E.png" alt="enter image description here" /></a></p>
<p>What should I be doing differently to accomplish that camera position?</p>
<p>The point where all vectors intersect in the image is indeed the origin of the coordinate system. If the camera were indeed on the desired line, the red vector would have appeared as a point, not a line.</p>
| <python><3d><plotly><perspectivecamera> | 2023-08-17 11:49:31 | 0 | 13,863 | matanox |
76,921,225 | 13,370,214 | Getting the oldest review in google maps | <p>I have a dataframe <code>df</code> which has a column <code>name_address</code> that will act as a search prompt. I want to retrieve the oldest review and the count of ratings using the Google API.</p>
<p>I used the following query but it is not accurate:</p>
<pre><code> if search_results:
# Get the place ID of the first result
place_id = search_results[0].get("place_id")
# Make API request to fetch place details using Place Details
details_url = f"https://maps.googleapis.com/maps/api/place/details/json?placeid={place_id}&key={api_key}"
details_response = requests.get(details_url)
place_details = details_response.json().get("result", {})
# Get the total count of ratings (user_ratings_total)
rating_count = place_details.get("user_ratings_total", "N/A")
rating_counts.append(rating_count)
# Get the ratings list and find the oldest review
ratings = place_details.get("reviews", [])
oldest_review_date = "N/A"
if ratings:
oldest_rating = min(ratings, key=lambda rating: rating.get("time"))
oldest_review_unix_time = oldest_rating.get("time")
oldest_review_date = datetime.datetime.fromtimestamp(oldest_review_unix_time).strftime("%Y-%m-%d")
oldest_review_dates.append(oldest_review_date)
else:
rating_counts.append("N/A")
oldest_review_dates.append("N/A")
</code></pre>
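I suspect part of the inaccuracy is that the Place Details endpoint only returns a handful of reviews (at most five, as far as I can tell), so the true oldest review may simply not be in the response. The min-by-time logic itself seems fine; a standalone check with hypothetical data:

```python
import datetime

# hypothetical sample of the few reviews Place Details returns
reviews = [
    {"author": "A", "time": 1_600_000_000},
    {"author": "B", "time": 1_500_000_000},
    {"author": "C", "time": 1_650_000_000},
]

oldest = min(reviews, key=lambda r: r["time"])
print(oldest["author"],
      datetime.datetime.utcfromtimestamp(oldest["time"]).strftime("%Y-%m-%d"))
```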
| <python><pandas><dataframe><google-maps> | 2023-08-17 11:49:04 | 1 | 431 | Harish reddy |
76,921,117 | 206,253 | Binning based on percentage missing data | <p>I have monthly customer records for a particular period (say from 2000-10 till 2005-03 inclusive). Most customers have one record per month for each of the months in this period. This would be, in total, 54 records for this period.</p>
<p>However, some have some of this data missing.</p>
<p>I would like to produce stats showing how many customers have</p>
<ul>
<li>0%</li>
<li>less than 1%</li>
<li>less than 5%</li>
<li>less than 10%</li>
<li>more than 10%</li>
</ul>
<p>missing records for this period. How can I do that?</p>
<p>I am including a tiny subset of the data. It shows that customer 2 doesn't have a record for 2001-02.</p>
<pre><code>df = pd.DataFrame({'cust_id': [1,1,1,1,1,1,2,2,2,2,2],
                   'period': [200010,200011,200012,200101,200102,200103,200010,200011,200012,200101,200103],
                   'volume': [1,2,3,4,5,6,7,8,9,10,12],
                   'num_transactions': [3,4,5,6,7,8,9,10,11,12,13]})
</code></pre>
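What I've sketched so far (using the toy data above with an expected total of 6 records per customer instead of 54; the bin edges encode "exactly 0%", then right-closed intervals):

```python
import pandas as pd

df = pd.DataFrame({'cust_id': [1]*6 + [2]*5,
                   'period': [200010,200011,200012,200101,200102,200103,
                              200010,200011,200012,200101,200103]})

TOTAL = 6  # 54 in the real data
pct_missing = 100 * (1 - df.groupby('cust_id').size() / TOTAL)

# bins: exactly 0%, (0,1%], (1,5%], (5,10%], >10%
labels = pd.cut(pct_missing, bins=[-0.001, 0, 1, 5, 10, 100],
                labels=['0%', '<1%', '<5%', '<10%', '>10%'])
print(labels.value_counts())
```

Customer 1 has all 6 records (0% missing), customer 2 is missing one (16.7%, so ">10%").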
| <python><pandas><group-by> | 2023-08-17 11:33:38 | 3 | 3,144 | Nick |
76,920,993 | 1,942,868 | How can I call the SOAP function such as checkUser(param: ns1:UserAuthParam) | <p>I am new to SOAP API.</p>
<p>I am still finding my way around.</p>
<p>I am trying to call the SOAP function.</p>
<p>I have WSDL file which has this code:</p>
<pre><code> <element name="checkUser">
<complexType>
<sequence>
<element name="param" nillable="true" type="impl:UserAuthParam"/>
</sequence>
</complexType>
</element>
</code></pre>
<p>then it has <code>UserAuthParam</code> like this,</p>
<pre><code> <complexType name="UserAuthParam">
<sequence>
<element name="userId" nillable="true" type="xsd:string"/>
<element name="password" nillable="true" type="xsd:string"/>
</sequence>
</complexType>
</code></pre>
<p>I converted the WSDL like this:</p>
<pre><code>$python -mzeep UserCheck.wsdl > wsdl.txt
</code></pre>
<p>in <code>wsdl.txt</code></p>
<pre><code>Service: UserAuthService
Port: UserAuth (Soap11Binding: {http://userauth.user.service.okm.com}UserAuthSoapBinding)
Operations:
checkUser(param: ns1:UserAuthParam) -> checkExecuteReturn: ns1:UserAuthParamOut
</code></pre>
<p>I wrote this code to test it:</p>
<pre><code>import zeep
client = zeep.Client(wsdl="UserCheck.wsdl")
res = client.service.checkUser(userId=12,password="kkkxxx")
</code></pre>
<p>However, this shows error</p>
<blockquote>
<p>TypeError: {http://userauth.user.service.okm.com}checkUser() got an unexpected keyword argument 'userId'. Signature: `param: {http://userauth.user.service.okm.com}UserAuthParam`</p>
</blockquote>
<p>Maybe my calling method is wrong. How can I call this correctly?</p>
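From the zeep signature <code>checkUser(param: ns1:UserAuthParam)</code>, it looks like the operation takes a single argument named <code>param</code>, whose value is the complex type; as I understand zeep's docs, a plain dict whose keys match the element names should map onto it (untested against this service):

```python
# shape of the single argument checkUser expects; element names taken
# from the UserAuthParam complexType in the WSDL
param = {"userId": "12", "password": "kkkxxx"}

# hypothetical call (needs the real zeep client):
# res = client.service.checkUser(param=param)
print(sorted(param))
```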
| <python><soap><soap-client><zeep> | 2023-08-17 11:17:47 | 1 | 12,599 | whitebear |
76,920,946 | 2,009,558 | why does the output of numpy KDE not map easily to the input? | <p>I want to use KDE to estimate the cluster density across a list of XY points I have detected in my microscopy images (that's a completely different process). I'm trying to adapt the code in this answer: <a href="https://stackoverflow.com/a/64499779/2009558">https://stackoverflow.com/a/64499779/2009558</a></p>
<p>Why doesn't the output of the KDE map to the input dimensions? I don't get why the KDE output needs to be mapped to a grid, nor why the dimensions of the grid don't match the input data. What is the value of "128j" in this line?</p>
<pre><code>gx, gy = np.mgrid[x.min():x.max():128j, y.min():y.max():128j]
</code></pre>
<p>What sort of Python object is that? It has both numbers and letters, but it's not a string. I tried googling this but couldn't find an answer. NumPy is so unpythonic sometimes, it drives me nuts.</p>
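For what it's worth, I've since found that <code>128j</code> is just a complex-number literal: <code>np.mgrid</code> treats an imaginary "step" as a point count with inclusive endpoints, rather than as a step size. A quick check:

```python
import numpy as np

# a complex step in np.mgrid means "this many points, endpoints inclusive"
g = np.mgrid[0.0:1.0:5j]
print(g)                    # [0.   0.25 0.5  0.75 1.  ]

gx, gy = np.mgrid[0.0:1.0:3j, 0.0:2.0:3j]
print(gx.shape, gy.shape)   # (3, 3) (3, 3)
```

So the 128 x 128 grid is just an arbitrary evaluation resolution for the KDE, independent of the input's extent.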
<p>Here's where I'm at so far. The data's just a pandas df with X and Y coordinates as floats.</p>
<pre><code>import numpy as np
import plotly.express as px
import plotly.graph_objects as go
import plotly.offline as offline
import pandas as pd
from scipy.stats import gaussian_kde
xx = df['X']
yy = df['Y']
xy = np.vstack((xx, yy))
kde = gaussian_kde(xy)
gx, gy = np.mgrid[xx.min():xx.max():128j, yy.min():yy.max():128j]
gxy = np.dstack((gx, gy))
# print(gxy[0])
z = np.apply_along_axis(kde, 2, gxy)
z = z.reshape(128, 128)
fig = px.imshow(z)
fig.add_trace(go.Scatter(x = xx, y = yy, mode='markers', marker = dict(color='green', size=1)))
fig.show()
</code></pre>
<p>This produces most of the plot I want: The density plot with the points overlaid on it, but the dimensions of the density data are 128 x 128, instead of the dimensions of the limits of the input.
<a href="https://i.sstatic.net/TLzVA.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TLzVA.jpg" alt="KDE fail" /></a></p>
<p>When I try substituting the real dimensions in the reshaping like this</p>
<pre><code>z = z.reshape(ceil(xx.max()-xx.min()), ceil(yy.max()-yy.min()))
</code></pre>
<p>I just get errors.</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_19840/2556669395.py in <module>
12 z = np.apply_along_axis(kde, 2, gxy)
13 # z = z.reshape(128, 128)
---> 14 z = z.reshape(ceil(xx.max()-xx.min()), ceil(yy.max()-yy.min()))
15
16 fig = px.imshow(z)
ValueError: cannot reshape array of size 16384 into shape (393,464)
</code></pre>
| <python><numpy><kernel-density> | 2023-08-17 11:11:02 | 1 | 341 | Ninja Chris |
76,920,944 | 8,564,860 | Pyspark Structured Streaming - error related to allowAutoTopicCreation | <p>I'm trying to do some very basic stream processing using PySpark (3.2.4) Structured Streaming, using Kafka as my data source. Just to get up and running, I'm attempting the really basic task of parsing a field <code>changeType</code> from my source messages and appending it out to the console. However, when I run my script I get an <code>pyspark.errors.exceptions.captured.StreamingQueryException</code>. See below for script and traceback:</p>
<h5>pyspark_structured_streaming.py</h5>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json
from pyspark.sql.types import StructType, StringType

spark = SparkSession \
    .builder \
    .appName("PysparkTesting") \
    .getOrCreate()

spark.sparkContext.setLogLevel('WARN')

schema = StructType().add("changeType", StringType())

stream_data = spark \
    .readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", "host1:port1,host2:port2") \
    .option("subscribe", "topic") \
    .load()

parsed_data = stream_data.selectExpr("CAST(value as STRING)") \
    .select(from_json("value", schema).alias("data")) \
    .select("data.changeType")

query = parsed_data.writeStream \
    .outputMode("append") \
    .format("console") \
    .start()

query.awaitTermination()
</code></pre>
<p>I run the script using the command</p>
<pre><code>spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.4.1 pyspark_structured_streaming.py
</code></pre>
<p>which should run fine but gives the error:</p>
<h5>Traceback</h5>
<pre><code>Traceback (most recent call last):
File "/Users/johnf/my_project/pyspark/pyspark_structured_streaming.py", line 31, in <module>
query.awaitTermination()
File "/Users/johnf/.conda/envs/my_project/lib/python3.9/site-packages/pyspark/python/lib/pyspark.zip/pyspark/sql/streaming/query.py", line 201, in awaitTermination
File "/Users/johnf/.conda/envs/my_project/lib/python3.9/site-packages/pyspark/python/lib/py4j-0.10.9.7-src.zip/py4j/java_gateway.py", line 1322, in __call__
File "/Users/johnf/.conda/envs/my_project/lib/python3.9/site-packages/pyspark/python/lib/pyspark.zip/pyspark/errors/exceptions/captured.py", line 175, in deco
pyspark.errors.exceptions.captured.StreamingQueryException: [STREAM_FAILED] Query [id = 290ef0e0-dca4-4a2a-b767-bd171210c2e4, runId = 61f00034-c12e-42a9-b55e-0ee133f8211a] terminated with exception: org.apache.kafka.common.errors.UnsupportedVersionException: MetadataRequest versions older than 4 don't support the allowAutoTopicCreation field
</code></pre>
<p>My first thought would be that this suggests the topic doesn't exist, hence an error about <code>allowAutoTopicCreation</code>. However, the topic definitely does exist and I can consume messages from it using <code>KafkaConsumer</code> from <code>kafka-python</code>.</p>
<p>In the same environment as Spark I also have <a href="https://github.com/dpkp/kafka-python" rel="nofollow noreferrer">kafka-python</a> version 2.0.2 installed. The Kafka brokers I am trying to access are on remote servers in my company, Kafka version 2.6.0.3-8.</p>
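One mismatch I notice in my own command: I'm running Spark 3.2.4 but pulling the 3.4.1 connector. As far as I understand, the <code>spark-sql-kafka</code> artifact version should match the installed Spark version, so the command would become:

```shell
# connector version pinned to the installed Spark version (3.2.4 here)
spark-submit \
  --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.2.4 \
  pyspark_structured_streaming.py
```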
| <python><apache-spark><pyspark><apache-kafka> | 2023-08-17 11:10:47 | 2 | 1,102 | John F |
76,920,889 | 13,187,876 | Configuring a Conditional Breakpoint in Spyder to Catch When an Error is Raised | <p>I've been using the debugger in spyder and experimenting with the 'Set/Edit Conditional Breakpoint' functionality. See the code snippet below; where I set the breakpoint condition to <code>i==2</code> on line 4 the IPdb will pause the code nicely when <code>i</code> is equal to 2 and I can inspect my variables in the variable explorer.</p>
<p>What I'd really like to do is set a conditional breakpoint to only trigger when a certain type of error is raised, therefore allowing me to inspect the variables that caused the error. When I set the conditional breakpoint to simply be: <code>ValueError</code>, it does not trigger when there's an error. <strong>Is setting a conditional breakpoint to only trigger when an error is raised possible, and if so, what is the correct syntax?</strong></p>
<pre><code>1 my_list = [1.4, 1.8, 'some text', 4.1]
2
3 for i, value in enumerate(my_list):
4 value_int = int(value)
</code></pre>
<p>I've been running this with the following: <code>Spyder==5 & Python==3.10</code></p>
| <python><spyder><conditional-breakpoint> | 2023-08-17 11:03:32 | 0 | 773 | Matt_Haythornthwaite |
76,920,802 | 5,447,434 | How to extract header, paragraph, table structure from pdf using azure form recognizer in python | <p>I would like to extract data such as headers, paragraphs, tables, page numbers and page footers from a PDF into a DataFrame using Azure Form Recognizer in Python.</p>
<p>PFB expected output.</p>
<p><a href="https://i.sstatic.net/A3iHR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A3iHR.png" alt="enter image description here" /></a></p>
<p>I have tried using the layout model, but from the response I am not able to identify the headers, paragraphs or tables.</p>
<p><a href="https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/quickstarts/get-started-sdks-rest-api?view=doc-intel-3.1.0&viewFallbackFrom=form-recog-3.0.0&preserve-view=true&pivots=programming-language-python" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/ai-services/document-intelligence/quickstarts/get-started-sdks-rest-api?view=doc-intel-3.1.0&viewFallbackFrom=form-recog-3.0.0&preserve-view=true&pivots=programming-language-python</a></p>
<pre><code>import os
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

endpoint = "<your-endpoint>"
key = "<your-key>"

def format_polygon(polygon):
    if not polygon:
        return "N/A"
    return ", ".join(["[{}, {}]".format(p.x, p.y) for p in polygon])

def analyze_layout():
    # sample form document
    formUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf"

    document_analysis_client = DocumentAnalysisClient(
        endpoint=endpoint, credential=AzureKeyCredential(key)
    )

    poller = document_analysis_client.begin_analyze_document_from_url(
        "prebuilt-layout", formUrl)
    result = poller.result()

    for idx, style in enumerate(result.styles):
        print(
            "Document contains {} content".format(
                "handwritten" if style.is_handwritten else "no handwritten"
            )
        )

    for page in result.pages:
        print("----Analyzing layout from page #{}----".format(page.page_number))
        print(
            "Page has width: {} and height: {}, measured with unit: {}".format(
                page.width, page.height, page.unit
            )
        )

        for line_idx, line in enumerate(page.lines):
            words = line.get_words()
            print(
                "...Line # {} has word count {} and text '{}' within bounding box '{}'".format(
                    line_idx,
                    len(words),
                    line.content,
                    format_polygon(line.polygon),
                )
            )

            for word in words:
                print(
                    "......Word '{}' has a confidence of {}".format(
                        word.content, word.confidence
                    )
                )

        for selection_mark in page.selection_marks:
            print(
                "...Selection mark is '{}' within bounding box '{}' and has a confidence of {}".format(
                    selection_mark.state,
                    format_polygon(selection_mark.polygon),
                    selection_mark.confidence,
                )
            )

    for table_idx, table in enumerate(result.tables):
        print(
            "Table # {} has {} rows and {} columns".format(
                table_idx, table.row_count, table.column_count
            )
        )
        for region in table.bounding_regions:
            print(
                "Table # {} location on page: {} is {}".format(
                    table_idx,
                    region.page_number,
                    format_polygon(region.polygon),
                )
            )
        for cell in table.cells:
            print(
                "...Cell[{}][{}] has content '{}'".format(
                    cell.row_index,
                    cell.column_index,
                    cell.content,
                )
            )
            for region in cell.bounding_regions:
                print(
                    "...content on page {} is within bounding box '{}'".format(
                        region.page_number,
                        format_polygon(region.polygon),
                    )
                )

    print("----------------------------------------")

if __name__ == "__main__":
    analyze_layout()
</code></pre>
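In the meantime I found that with API version 3.x the layout result also exposes <code>result.paragraphs</code>, and each paragraph carries a <code>role</code> (values such as 'title', 'sectionHeading', 'pageHeader', 'pageFooter', 'pageNumber'; <code>None</code> means plain body text). That's my reading of the SDK docs; I haven't verified every role name. A sketch of the post-processing, with dict stand-ins so it runs offline:

```python
# hypothetical post-processing of result.paragraphs: map each paragraph's
# role (or lack of one) to the row type I want in my dataframe
def classify(paragraphs):
    return [{"type": p.get("role") or "paragraph", "content": p["content"]}
            for p in paragraphs]

sample = [
    {"role": "sectionHeading", "content": "1. Introduction"},
    {"role": None, "content": "Some body text."},
    {"role": "pageNumber", "content": "3"},
]
print(classify(sample))
```

With real SDK objects, `p.role` and `p.content` would replace the dict lookups.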
| <python><azure-form-recognizer><pdf-extraction> | 2023-08-17 10:51:27 | 1 | 323 | Niranjanp |
76,920,706 | 15,406,243 | Need to balance Kafka consumer tasks | <p>I need a Kafka producer and 4 consumers in Python that balance the queue.</p>
<p>My Topic bash code:</p>
<pre><code>kafka-topics --bootstrap-server localhost:9092 --create --topic numbers --partitions 4 --replication-factor 1
</code></pre>
<p>For example, when the producer sends messages, Kafka divides them equally among the consumers,
but I need a new message to be assigned to a consumer only once that consumer's previous work is done.</p>
<p>This would help me balance the load and increase processing speed.</p>
<p>my consumer code:</p>
<pre><code>import json, time
from kafka import KafkaConsumer

print("Connecting to consumer ...")
consumer = KafkaConsumer(
    'numbers',
    bootstrap_servers=['localhost:9092'],
    auto_offset_reset='earliest',
    enable_auto_commit=True,
    group_id='my-group',
    value_deserializer=lambda x: json.loads(x.decode('utf-8')))

for message in consumer:
    print(f"{message.value}")
    time.sleep(1)
</code></pre>
<p>My producer code:</p>
<pre><code>from time import sleep
from json import dumps
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers=['localhost:9092'],
                         value_serializer=lambda x: dumps(x).encode('utf-8'))

for e in range(100):
    data = {'number': e}
    producer.send('numbers', value=data)
    print(f"Sending data : {data}")
    sleep(5)
</code></pre>
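What I'm considering on the consumer side (kafka-python option names as I understand them): disable auto-commit, commit only after processing finishes, and cap each poll at one record. This is just a sketch; as far as I can tell Kafka still assigns messages per partition within a group, so this limits batching rather than giving true work stealing:

```python
# consumer settings I'm considering (no broker needed to show the shape)
consumer_conf = dict(
    bootstrap_servers=['localhost:9092'],
    group_id='my-group',
    enable_auto_commit=False,   # commit explicitly once processing is done
    max_poll_records=1,         # hand out one message per poll
)
print(consumer_conf["max_poll_records"])
```

These would be passed as keyword arguments to `KafkaConsumer`, with `consumer.commit()` called after each processed message.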
| <python><apache-kafka><kafka-python> | 2023-08-17 10:37:16 | 1 | 312 | Ali Esmaeili |
76,920,401 | 10,722,752 | How to check if all float columns in a Pandas DataFrame are approximately equal or close | <p>I have a dataframe with 12 columns. Many of them are float columns, and I need to verify that their values are approximately equal or close enough.</p>
<p>Sample Data:</p>
<pre><code>df = pd.DataFrame({'id' : ['abc', 'pqr', 'xyz', 'cbz'],
'col1' : [0.0234, 0.001852, 4.123, 0.0012],
'col2' : [0.0235, 0.001851, 0.0123, 0.0013],
'col3' : [0.0233, 0.001849, 0.124, 0.0011]})
df
id col1 col2 col3
0 abc 0.0234 0.0235 0.0233
1 pqr 0.001852 0.001851 0.001849
2 xyz 4.123 0.0123 0.124
</code></pre>
<p>I can use <code>np.isclose</code> and set a threshold that is applicable in my case, which would be 0.062. But can someone please let me know how to check that col1 is approximately equal to col2, which is approximately equal to col3? If even one column doesn't satisfy the condition, the result should be <code>False</code>, as in the case of <code>id</code> <code>xyz</code>.</p>
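Here is the row-wise check I have in mind on the sample data, comparing every column against the first one (by the triangle inequality, any two columns then differ by at most twice the tolerance, which is acceptable for me):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'id'  : ['abc', 'pqr', 'xyz', 'cbz'],
                   'col1': [0.0234, 0.001852, 4.123, 0.0012],
                   'col2': [0.0235, 0.001851, 0.0123, 0.0013],
                   'col3': [0.0233, 0.001849, 0.124, 0.0011]})

vals = df[['col1', 'col2', 'col3']].to_numpy()
# every column close to col1 (within atol) => all columns mutually close
df['all_close'] = np.isclose(vals, vals[:, [0]], atol=0.062).all(axis=1)
print(df['all_close'].tolist())  # [True, True, False, True]
```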
| <python><pandas> | 2023-08-17 09:54:51 | 3 | 11,560 | Karthik S |
76,920,265 | 10,372,480 | pathlib Path - create path instance and mkdir in one line | <p>With pathlib's Path, it looks like I cannot create a path instance and <code>mkdir</code> it in one line. Am I doing something wrong, or is this not possible?</p>
<p>This returns None for region_path. Why?</p>
<pre><code>script_path = Path(__file__).parent.resolve()
region_path = Path(script_path/"device_details"/"region").mkdir(parents=True, exist_ok=True)
</code></pre>
<p>This returns a valid path for region_path, but I'm bothered by the extra step/line.</p>
<pre><code>script_path = Path(__file__).parent.resolve()
region_path = Path(script_path/"device_details"/"region")
region_path.mkdir(parents=True, exist_ok=True)
</code></pre>
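Since asking I realised <code>mkdir()</code> always returns <code>None</code>, which explains the first snippet. If a single statement really matters, an assignment expression seems to do it (a temp directory stands in for <code>Path(__file__).parent</code> here):

```python
import tempfile
from pathlib import Path

script_path = Path(tempfile.mkdtemp())   # stand-in for Path(__file__).parent.resolve()

# mkdir() returns None, so keep the Path itself; the walrus operator
# squeezes "build path" and "mkdir" into one statement
(region_path := script_path / "device_details" / "region").mkdir(parents=True, exist_ok=True)
print(region_path.is_dir())   # True
```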
| <python><path><pathlib> | 2023-08-17 09:38:17 | 1 | 326 | Mo Fatty |
76,919,772 | 6,479,506 | converting cppyy returned objects to python objects | <p>I have a C++ method that returns a tuple of two-dimensional vectors of double values (<code>std::tuple&lt;std::vector&lt;std::vector&lt;double&gt;&gt;, ...&gt;</code>). When calling the method through cppyy, I get a wrapper object representing the tuple and the nested vectors.
These vectors eventually need to be turned into PyTorch tensors, and for that to happen I need them first converted to Python lists (or numpy arrays). How does one convert cppyy objects to pure Python objects?</p>
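What I've been trying, for context: since the cppyy proxies for <code>std::vector</code> are iterable, I assume a plain nested-list comprehension should convert them (nested lists stand in for the proxies below, so this runs without cppyy):

```python
# stand-in for the cppyy return value; any iterable of iterables
# converts the same way as the std::vector proxies should
cpp_result = ([[1.0, 2.0], [3.0, 4.0]], "other")   # tuple<vector<vector<double>>, ...>

vec2d = cpp_result[0]
as_lists = [list(row) for row in vec2d]
print(as_lists)   # [[1.0, 2.0], [3.0, 4.0]]
# torch.tensor(as_lists) or np.asarray(as_lists) would then follow
```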
| <python><c++><cppyy> | 2023-08-17 08:34:08 | 0 | 628 | Carpet4 |
76,919,611 | 12,871,978 | Docker Python client cannot connect to Docker Desktop | <p>I'm using Docker Desktop on Ubuntu 18.04 and can run some containers successfully with it.
I tried to use the Docker Python client as <code>client = docker.from_env()</code>, but it failed as follows:</p>
<pre><code>Traceback (most recent call last):
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/urllib3/connectionpool.py", line 714, in urlopen
httplib_response = self._make_request(
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/urllib3/connectionpool.py", line 415, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/http/client.py", line 1283, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/http/client.py", line 1329, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/http/client.py", line 1278, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/http/client.py", line 1038, in _send_output
self.send(msg)
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/http/client.py", line 976, in send
self.connect()
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/docker/transport/unixconn.py", line 43, in connect
sock.connect(self.unix_socket)
FileNotFoundError: [Errno 2] No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/urllib3/connectionpool.py", line 798, in urlopen
retries = retries.increment(
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/urllib3/util/retry.py", line 550, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/urllib3/packages/six.py", line 769, in reraise
raise value.with_traceback(tb)
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/urllib3/connectionpool.py", line 714, in urlopen
httplib_response = self._make_request(
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/urllib3/connectionpool.py", line 415, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/http/client.py", line 1283, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/http/client.py", line 1329, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/http/client.py", line 1278, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/http/client.py", line 1038, in _send_output
self.send(msg)
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/http/client.py", line 976, in send
self.connect()
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/docker/transport/unixconn.py", line 43, in connect
sock.connect(self.unix_socket)
urllib3.exceptions.ProtocolError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/docker/api/client.py", line 214, in _retrieve_server_version
return self.version(api_version=False)["ApiVersion"]
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/docker/api/daemon.py", line 181, in version
return self._result(self._get(url), json=True)
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/docker/utils/decorators.py", line 46, in inner
return f(self, *args, **kwargs)
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/docker/api/client.py", line 237, in _get
return self.get(url, **self._set_request_timeout(kwargs))
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/requests/adapters.py", line 501, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/caohch1/Desktop/monitor/log_extractor.py", line 2, in <module>
client = docker.from_env()
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/docker/client.py", line 96, in from_env
return cls(
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/docker/client.py", line 45, in __init__
self.api = APIClient(*args, **kwargs)
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/docker/api/client.py", line 197, in __init__
self._version = self._retrieve_server_version()
File "/home/caohch1/anaconda3/envs/mointor/lib/python3.10/site-packages/docker/api/client.py", line 221, in _retrieve_server_version
raise DockerException(
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
</code></pre>
<p>I browsed related solutions and found that my <code>docker.service</code> unit does not seem to be running. I also failed to start it.</p>
<pre><code>caohch1@caohch1-OptiPlex-7090:~/Desktop/monitor$ systemctl status docker
Unit docker.service could not be found.
caohch1@caohch1-OptiPlex-7090:~/Desktop/monitor$ systemctl start docker
Failed to start docker.service: Unit docker.service not found.
caohch1@caohch1-OptiPlex-7090:~/Desktop/monitor$ sudo snap status docker
error: unknown command "status", see 'snap help'.
caohch1@caohch1-OptiPlex-7090:~/Desktop/monitor$ sudo snap start docker
error: snap "docker" not found
</code></pre>
<p>However, I'm sure that Docker Desktop itself works well and all containers are running normally.</p>
<p>I'm not sure whether this is caused by Docker Desktop. How can I make the Python Docker client connect to my Docker daemon?</p>
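<p>For illustration, one thing I have been meaning to try (the socket path is an assumption based on Docker Desktop's user-level defaults on Linux, not something I have confirmed): <code>docker.from_env()</code> honors the <code>DOCKER_HOST</code> environment variable, so pointing it at Docker Desktop's socket may let the client connect:</p>

```python
import os

# Assumption: Docker Desktop on Linux exposes a user-level socket instead of
# /var/run/docker.sock; the exact path below is a guess from its defaults.
home = os.path.expanduser("~")
os.environ["DOCKER_HOST"] = f"unix://{home}/.docker/desktop/docker.sock"

# With DOCKER_HOST set, docker.from_env() would use this socket:
# import docker
# client = docker.from_env()
```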
| <python><docker><containers><docker-desktop><dockerpy> | 2023-08-17 08:14:48 | 1 | 407 | MissSirius |
76,919,560 | 1,113,579 | Marshmallow: Schema.load() should not validate the data again if it is previously validated | <p>In my web application's backend code (written in Python using Flask and Marshmallow), I first call Marshmallow's Schema.validate() and if I get a dictionary of errors, I send the errors to the Frontend.</p>
<p>If there are no validation errors, I proceed to call Schema.load(). But all the custom validation functions get called once again by Marshmallow, perhaps because it tries to validate the Schema again as part of the load process.</p>
<p>Is there a way I can tell Schema.load() that the data is already validated and it need not trigger Schema.validate() again?</p>
<p>And if there is no way to skip the validation during Schema.load(), then, given that I intend to load the schema eventually, is there any benefit to calling Schema.validate() explicitly? I may as well call Schema.load() directly and catch any ValidationError.</p>
| <python><flask><marshmallow><flask-marshmallow> | 2023-08-17 08:06:17 | 1 | 1,276 | AllSolutions |
76,919,445 | 10,694,589 | python3 opencv not defined | <p>I installed opencv on my Ubuntu 20.04:</p>
<pre><code>python3 -m pip install opencv-python
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: opencv-python in /usr/local/lib/python3.8/dist-packages (4.6.0.66)
Requirement already satisfied: numpy>=1.14.5 in /home/alexandre/.local/lib/python3.8/site-packages (from opencv-python) (1.22.1)
</code></pre>
<p>And when I run my code with <code>python3 schli.py</code>, I get:</p>
<p><code>NameError: name 'cv2' is not defined</code></p>
<p>my code :</p>
<pre><code>import cv2 as cv
bruitelect = cv.imread("bruit-electro.jpg",cv2.IMREAD_GRAYSCALE)
brut = cv.imread("17-14-05.jpg",cv2.IMREAD_GRAYSCALE)
fond = cv.imread("bruit-fond.jpg",cv2.IMREAD_GRAYSCALE)
cv.imshow('grayscale image', brut)
"""
"image brut - bruit elec"
num = cv.absdiff(brut,bruitelec)
"image fond - bruit elec"
deno = cv.absdiff(fond,bruitelec)
schlie = num/deno
"""
</code></pre>
<p>I don't understand why. Thanks!</p>
| <python><python-3.x><opencv> | 2023-08-17 07:48:54 | 1 | 351 | Suntory |
76,919,422 | 6,851,715 | get the number of unique rows in the last/next dynamically generated n rows for each row in a group | <p>I've got a dataset with group id, dates and locations.
I'd like to count the number of unique locations per row per group for the <strong>last n1</strong> and the <strong>next n2 days</strong>. In the example below, <strong>n1=2 and n2=3</strong>, and I'd like a solution that works for dynamic n1 and n2. It's a rather large dataset, so performance is key. <strong>n1 and n2 include the row's date as 1 day.</strong></p>
<pre><code>import pandas as pd

data = {
'GROUP': ['A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B'],
'DATE': ['1/01/2023', '1/01/2023', '1/01/2023', '2/01/2023', '2/01/2023', '3/01/2023', '3/01/2023', '3/01/2023', '3/01/2023', '1/06/2023', '2/06/2023', '3/06/2023', '4/06/2023', '5/06/2023', '5/06/2023', '5/06/2023', '6/06/2023'],
'LOCATION': ['A1', 'A1', 'A2', 'A1', 'A2', 'A2', 'A3', 'A3', 'A4', 'B1', 'B2', 'B2', 'B3', 'B4', 'B2', 'B1', 'B1']
}
df = pd.DataFrame(data)
</code></pre>
<pre><code>desired output: # n1=2 # n2=3
GROUP DATE LOCATION LOCS_LAST_n1_DAYS LOCS_NEXT_n2_DAYS
0 A 1/01/2023 A1 1 4
1 A 1/01/2023 A1 1 4
2 A 1/01/2023 A2 2 4
3 A 2/01/2023 A1 2 4
4 A 2/01/2023 A2 2 3
5 A 3/01/2023 A2 2 3
6 A 3/01/2023 A3 3 2
7 A 3/01/2023 A3 3 2
8 A 3/01/2023 A4 4 1
9 B 1/06/2023 B1 1 2
10 B 2/06/2023 B2 2 2
11 B 3/06/2023 B2 1 3
12 B 4/06/2023 B3 2 4
13 B 5/06/2023 B4 2 3
14 B 5/06/2023 B2 3 2
15 B 5/06/2023 B1 4 1
16 B 6/06/2023 B1 3 1
</code></pre>
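<p>To pin down the definition, here is a brute-force reference implementation of the counting I described (far too slow for my real data; the same-date tie-breaking by row position is my reading of the desired output — note that with this rule I get 4, not 3, for row 11's next-count, so that cell in my table above may be off by one):</p>

```python
import pandas as pd

data = {
    'GROUP': ['A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A',
              'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B'],
    'DATE': ['1/01/2023', '1/01/2023', '1/01/2023', '2/01/2023', '2/01/2023',
             '3/01/2023', '3/01/2023', '3/01/2023', '3/01/2023', '1/06/2023',
             '2/06/2023', '3/06/2023', '4/06/2023', '5/06/2023', '5/06/2023',
             '5/06/2023', '6/06/2023'],
    'LOCATION': ['A1', 'A1', 'A2', 'A1', 'A2', 'A2', 'A3', 'A3', 'A4',
                 'B1', 'B2', 'B2', 'B3', 'B4', 'B2', 'B1', 'B1'],
}
df = pd.DataFrame(data)

def loc_counts(df, n1=2, n2=3):
    """Brute force: one pass per row; same-date rows are ordered by position."""
    out = df.reset_index(drop=True)
    d = pd.to_datetime(out["DATE"], dayfirst=True)
    pos = out.index  # RangeIndex: positional order within the frame
    last, nxt = [], []
    for i in range(len(out)):
        same_grp = out["GROUP"] == out.loc[i, "GROUP"]
        di = d[i]
        # earlier dates inside the window, plus same-date rows up to this one
        back = same_grp & (
            ((d > di - pd.Timedelta(days=n1)) & (d < di))
            | ((d == di) & (pos <= i))
        )
        # later dates inside the window, plus same-date rows from this one on
        fwd = same_grp & (
            ((d < di + pd.Timedelta(days=n2)) & (d > di))
            | ((d == di) & (pos >= i))
        )
        last.append(out.loc[back, "LOCATION"].nunique())
        nxt.append(out.loc[fwd, "LOCATION"].nunique())
    out["LOCS_LAST_n1_DAYS"] = last
    out["LOCS_NEXT_n2_DAYS"] = nxt
    return out

res = loc_counts(df)
```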
| <python><pandas><group-by><dynamic><unique> | 2023-08-17 07:44:23 | 1 | 1,430 | Ankhnesmerira |
76,919,297 | 8,648,276 | Convert position and rotation from Blender to Unreal engine | <p>I am exporting keyframe coordinates and rotations from Blender(latest) as CSV via python, and using them to animate an object through code, in Unreal Engine 5.0.4. The positions are matching, but not the rotations. What I have tried:</p>
<ul>
<li>converted eulers from right hand z-up to left-hand z-up</li>
<li>rotation quaternions</li>
<li>world matrix transformation</li>
</ul>
<p>I have strictly followed these:</p>
<ul>
<li>Imported the same model into both, rotated the model in blender, towards the same "forward"</li>
<li>Applied all transformations from object>>apply>>apply all transformations.</li>
<li>Added basic rotations at different keyframe.</li>
<li>Exported keyframe rotations as Euler, or Quaternions or world matrix.</li>
<li>The Unreal script loads the angles as it is, without change.</li>
</ul>
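<p>To make one of the attempts concrete, here is a sketch of the quaternion handedness flip. It assumes the position fix was a plain <code>y -&gt; -y</code> mirror (an assumption on my part), and it only covers the mirror math; how Unreal then interprets the quaternion's Euler decomposition is a separate issue:</p>

```python
import math

def mirror_quat_y(w, x, y, z):
    """Conjugate a rotation quaternion by the mirror M = diag(1, -1, 1).

    Mirroring flips handedness: the rotation axis is mirrored and the angle
    is negated, which for this mirror works out to (w, -x, y, -z).
    """
    return (w, -x, y, -z)

# Example: a 90-degree yaw about +Z in right-handed coordinates becomes
# a -90-degree yaw after the mirror.
c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
mirrored = mirror_quat_y(c, 0.0, 0.0, s)
```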
| <python><blender><unreal-engine5><fbx><unreal> | 2023-08-17 07:25:47 | 1 | 367 | Subham |
76,919,224 | 6,930,340 | Sort pandas multi-index dataframe while controlling a specific level | <p>I have a multi-index <code>pd.DataFrame</code>. I want the index to be sorted in ascending order. However, the outermost <code>level_1</code> is an exception. This level should be sorted according to <code>["B", "A", "C"]</code>.</p>
<pre><code>import pandas as pd
import numpy as np
# Create multi-index
index = pd.MultiIndex.from_tuples([
('B', 'Y', 'II', 'D'),
('A', 'X', 'I', 'A'),
('C', 'Z', 'II', 'B'),
('A', 'Y', 'I', 'B'),
('C', 'X', 'I', 'A')
], names=['level_1', 'level_2', 'level_3', 'level_4'])
# Create DataFrame with random values
data = np.random.randint(0, 10, (5, 2))
df = pd.DataFrame(data, index=index, columns=['column1', 'column2'])
print(df)
column1 column2
level_1 level_2 level_3 level_4
B Y II D 1 7
A X I B 9 4
C Z I B 5 3
A Y I A 2 4
C X II A 9 2
</code></pre>
<p>I am looking for the following result:</p>
<pre><code> column1 column2
level_1 level_2 level_3 level_4
B Y II D 1 7
A X I B 9 4
A Y I A 2 4
C X II A 9 2
C Z I B 5 3
</code></pre>
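<p>For reference, this is the direction I have been considering (a sketch, not a confirmed solution): turning <code>level_1</code> into an ordered categorical so that a plain <code>sort_index()</code> respects the custom order. It uses the tuples from the construction code above; note the printed frames in my question don't quite match those tuples.</p>

```python
import numpy as np
import pandas as pd

index = pd.MultiIndex.from_tuples([
    ('B', 'Y', 'II', 'D'),
    ('A', 'X', 'I', 'A'),
    ('C', 'Z', 'II', 'B'),
    ('A', 'Y', 'I', 'B'),
    ('C', 'X', 'I', 'A'),
], names=['level_1', 'level_2', 'level_3', 'level_4'])
df = pd.DataFrame(np.arange(10).reshape(5, 2), index=index,
                  columns=['column1', 'column2'])

# Replace level_1 with an ordered categorical so sort_index() uses B < A < C
# while the remaining levels still sort ascending.
order = ['B', 'A', 'C']
df.index = df.index.set_levels(
    pd.CategoricalIndex(df.index.levels[0], categories=order, ordered=True),
    level='level_1',
)
result = df.sort_index()
```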
| <python><pandas><sorting><multi-index> | 2023-08-17 07:15:47 | 0 | 5,167 | Andi |
76,919,054 | 3,032,376 | Numpy complex128 nan in division | <p>I am trying to understand why I get a warning using numpy's (version 1.24.2) <code>complex128</code>, which I do not get using the built-in <code>complex</code> type:</p>
<pre class="lang-py prettyprint-override"><code>import math, numpy
(math.nan)/(1j*math.nan)
# (nan+nanj)
(math.nan)/numpy.complex128(1j*math.nan)
# <stdin>:1: RuntimeWarning: invalid value encountered in scalar divide
# (nan+nanj)
</code></pre>
<p>Note how the second line gives a warning for some reason, while the first does not. As far as I know, nan/nan should just be nan, without any warning?</p>
<p>Note also that the same does not happen with <code>float64</code>:</p>
<pre class="lang-py prettyprint-override"><code>(math.nan)/numpy.float64(math.nan)
# nan
</code></pre>
<p>Why does numpy treat the <code>complex128</code> version as invalid, while the <code>float64</code> version is not? (@Homer512 already pointed out in the comments how to disable the warnings; however, I'd like to understand why it's happening in the first place.)</p>
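<p>For completeness, the suppression mentioned in the comments presumably looks something like this sketch:</p>

```python
import math
import numpy as np

# Silence the "invalid value" floating point warning locally;
# the result is still nan+nanj.
with np.errstate(invalid="ignore"):
    result = math.nan / np.complex128(1j * math.nan)
```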
| <python><numpy><warnings><nan><complex-numbers> | 2023-08-17 06:43:10 | 0 | 450 | Lukas Lang |
76,918,529 | 14,459,677 | Matching two dataframes and counting the times a matched row appeared in the first dataframe | <p>I have two dataframes (<code>df1</code> and <code>df2</code>).</p>
<p><code>df1</code> looks like this:</p>
<pre><code>A B C
Girl 25 APPLE
Boy 10 SAMSUNG
Girl 10 LG
Boy 5 Ap
Boy 68 SAM
</code></pre>
<p><code>df2</code> looks like this:</p>
<pre><code>D E
APPLE Ap
SAMSUNG Sam
LG lg
GOOGLE Go
</code></pre>
<p>I want to do index-match between these two so I can produce a new dataframe called <code>df3</code>.</p>
<p>If a value from either column D or E (in <code>df2</code>) can be found in <code>df1</code>, it has to be counted, and the counts reflected in
the newly produced dataframe, <code>df3</code>.</p>
<p><code>df3</code> should look like this:</p>
<pre><code>A Count
Girl 2
Boy 3
</code></pre>
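<p>Here is a sketch of what I mean. Note that the match has to be case-insensitive for the counts to come out this way, since e.g. <code>SAM</code> only corresponds to <code>Sam</code> that way; that is my reading of the desired output:</p>

```python
import pandas as pd

df1 = pd.DataFrame({
    "A": ["Girl", "Boy", "Girl", "Boy", "Boy"],
    "B": [25, 10, 10, 5, 68],
    "C": ["APPLE", "SAMSUNG", "LG", "Ap", "SAM"],
})
df2 = pd.DataFrame({
    "D": ["APPLE", "SAMSUNG", "LG", "GOOGLE"],
    "E": ["Ap", "Sam", "lg", "Go"],
})

# Pool every value from D and E, lower-cased for case-insensitive matching
values = {v.lower() for v in pd.concat([df2["D"], df2["E"]])}
matched = df1[df1["C"].str.lower().isin(values)]
df3 = matched.groupby("A").size().reset_index(name="Count")
```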
| <python><pandas><indexing><match><vlookup> | 2023-08-17 04:44:58 | 1 | 433 | kiwi_kimchi |
76,918,456 | 9,983,652 | calculate mean of columns which only include column value larger than zero | <p>I'd like to calculate the mean across columns, but only considering values larger than zero.</p>
<p>For example</p>
<pre><code>import pandas as pd

df_dict={'A':[0,2,10,0],'B':[10,0,30,40],'C':[0,5,10,10]}
df=pd.DataFrame(df_dict)
df
A B C
0 0 10 0
1 2 0 5
2 10 30 10
3 0 40 10
</code></pre>
<p>Normally, if we just use <code>df.mean(axis=1)</code>, it would produce:</p>
<pre><code>df.mean(axis=1)
0 3.333333
1 2.333333
2 16.666667
3 16.666667
dtype: float64
</code></pre>
<p>What I want is to ignore any value less than or equal to zero when calculating the mean. The result I'd like to have is
10/1, (2+5)/2, (10+30+10)/3, (40+10)/2 for each row.</p>
<p>How to do it? Thanks</p>
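<p>For what it's worth, one direction I have been experimenting with (not sure it is idiomatic): mask out the non-positive values so they become NaN, which <code>mean</code> then skips by default:</p>

```python
import pandas as pd

df_dict = {'A': [0, 2, 10, 0], 'B': [10, 0, 30, 40], 'C': [0, 5, 10, 10]}
df = pd.DataFrame(df_dict)

# where() turns values <= 0 into NaN; mean() skips NaN by default
result = df.where(df > 0).mean(axis=1)
```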
| <python><pandas> | 2023-08-17 04:18:39 | 2 | 4,338 | roudan |
76,918,323 | 4,750,852 | Pip cannot install some Python packages due to missing platform tag in M1 Mac | <p>Lately I have been facing issues installing packages through pip.</p>
<p>For e.g. when I try to install the <code>torch</code> package, it fails with the below error</p>
<pre><code>> arch
arm64
> python --version
3.10.12
❯ pip --version
pip 23.2.1 from /Users/vinayvaddiparthi/repos/makemore/.venv/lib/python3.10/site-packages/pip (python 3.10)
> python -m venv .venv
> source .venv/bin/activate
> pip install torch
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
</code></pre>
<p>Running it in verbose mode, <code>pip install -vv torch</code>, shows that pip skips all available wheels since none of them have system-compatible tags. For example:</p>
<pre><code> Skipping link: none of the wheel's tags (cp310-none-macosx_10_9_x86_64) are compatible (run pip debug --verbose to show compatible tags): https://files.pythonhosted.org/packages/2e/27/5c912ccc490ec78585cd463198e80be27b53db77f02e7398b41305606399/torch-2.0.1-cp310-none-macosx_10_9_x86_64.whl (from https://pypi.org/simple/torch/) (requires-python:>=3.8.0)
Skipping link: none of the wheel's tags (cp310-none-macosx_11_0_arm64) are compatible (run pip debug --verbose to show compatible tags): https://files.pythonhosted.org/packages/5a/77/778954c0aad4f7901a1ba02a129bca7467c64a19079108e6b1d6ce8ae575/torch-2.0.1-cp310-none-macosx_11_0_arm64.whl (from https://pypi.org/simple/torch/) (requires-python:>=3.8.0)
</code></pre>
<p>Similar logs appear for all versions of torch.</p>
<p>Checking <code>pip debug --verbose</code> shows the following</p>
<pre><code>❯ pip debug --verbose | grep -i -E "macosx|cp310"
WARNING: This command is only meant for debugging. Do not use this with automation for parsing and getting these details, since the output and options of this command may change without notice.
cp310-cp310-macosx_10_16_arm64
cp310-cp310-macosx_10_16_universal2
cp310-cp310-macosx_10_15_arm64
cp310-cp310-macosx_10_15_universal2
cp310-cp310-macosx_10_14_arm64
cp310-cp310-macosx_10_14_universal2
cp310-cp310-macosx_10_13_arm64
cp310-cp310-macosx_10_13_universal2
cp310-cp310-macosx_10_12_arm64
cp310-cp310-macosx_10_12_universal2
</code></pre>
<p>I can see that the <code>macosx_11_0_arm64</code> tag is missing here, which I believe is why pip is not able to install the wheel for that tag even though it is available. And I have no idea what determines which tags appear in that list.</p>
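<p>While digging, I put together a small diagnostic (just a sketch): as far as I understand, pip derives the <code>macosx_*</code> tags from what the interpreter reports about the OS, so a Python built against an old SDK that reports <code>10.16</code> would explain why the tags stop at <code>10_16</code> and the <code>11_0</code> wheels get skipped. This is my assumption, not something I have confirmed. The snippet prints what the interpreter thinks it is running on:</p>

```python
import platform
import sysconfig

# What the interpreter believes about the OS; pip builds its platform tags
# from values like these (a "10.16" here would explain the 10_16 cap).
print(platform.mac_ver()[0])     # empty on non-macOS
print(platform.machine())        # e.g. arm64 / x86_64
print(sysconfig.get_platform())  # e.g. macosx-10.16-arm64
```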
<p>I use pyenv for managing the Python environments and I'm not sure if that is causing any issue here.</p>
<p>MacOS details:</p>
<pre><code>❯ sw_vers
ProductName: macOS
ProductVersion: 13.5
BuildVersion: 22G74
</code></pre>
<p>Please let me know if you need any more details and thanks for checking.</p>
| <python><macos><pip><pyenv> | 2023-08-17 03:33:56 | 1 | 1,691 | Vinay |
76,918,260 | 4,701,426 | Running multiple instances of a Python script concurrently | <p>I have a script that I run from the command line. I need to run many instances of the script concurrently to save time. Imagine each instance scrapes one Amazon product page that is passed to it as an argument in the command line and there are millions of these pages.</p>
<p>Right now, my Core i7 CPU manages to run only 10-12 of these scripts without getting too hot (I have plenty of RAM and GPU capacity left). Is there anything along the lines of multi-threading or parallel processing (I have just heard the names, btw) that can let me get more out of the CPU and save time? As things stand now, it's going to take me a few straight months to complete the task... if my CPU survives.</p>
<p>Edit: each script opens a Chrome browser (Selenium), navigates a webpage, scrapes some data (Beautiful Soup), and writes it to CSV. But the script also spends a lot of time waiting/sleeping for web elements to become clickable/visible and for the IP not to get banned.</p>
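<p>In case it helps frame the question, this is the shape of what I imagine (names and URLs are placeholders; <code>scrape</code> stands in for the real Selenium/BeautifulSoup work, and since that work is mostly waiting on I/O, a thread pool seems like a candidate):</p>

```python
from concurrent.futures import ThreadPoolExecutor

def scrape(url):
    # Placeholder: the real version would drive a browser and parse the page;
    # because it mostly waits on the network, threads can overlap the waiting.
    return f"scraped {url}"

urls = [f"https://example.com/product/{i}" for i in range(20)]  # hypothetical

with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(scrape, urls))
```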
| <python><multithreading><parallel-processing> | 2023-08-17 03:16:52 | 0 | 2,151 | Saeed |
76,918,224 | 43,343 | How can I make the inner scope of an async python function match the outer scope when defined in a loop? | <p>I'm dynamically creating endpoints in a FastAPI project based on models loaded from another file. When I loop over these, I expected to be able to get the value of the loop variable inside the async function. However, I get only the last value the variable held. This will probably be clearer with code:</p>
<pre><code>from fastapi import FastAPI, Request

my_models = [
("FooModel", FooModel),
("BarModel", BarModel),
]
app = FastAPI()
for name, model in my_models:
snake_name = utils.camel_to_snake(name)
title_name = utils.camel_to_title(name)
@app.post(
f"/{snake_name}",
response_model=model,
name=f"{title_name} Create",
)
async def validate_item(item: model, request: Request):
logger.info(
f"[Valid] {request.method} {title_name} ID: {item.identifier()}"
)
return item
</code></pre>
<p>The FastAPI endpoints get created correctly, but whether I hit the <code>/foo_model</code> endpoint or the <code>/bar_model</code> endpoint, the logger will print "Bar Model" for <code>title_name</code>. I expected the async def to act like a closure with a temporary local scope which holds what was in the outer variable when defined. It seems like it is actually waiting to evaluate it until the time it is called, and it holds whatever was last in the for loop.</p>
<p>The values are what I expect inside the <code>@app.post</code> decorator but not inside the <code>validate_item</code> itself.</p>
<p>How can I get <code>title_name</code> to behave like how I want and have it hold the proper value when defining it? Also does <code>model</code> in the signature behave the same way?</p>
<p>I'm using:</p>
<ul>
<li>Python 3.10.12</li>
<li>fastapi 0.95.1</li>
<li>and running it using a UvicornWorker</li>
<li>uvicorn 0.21.1</li>
</ul>
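<p>A plain-Python reduction of what I am seeing (no FastAPI involved), plus the default-argument trick I have read about but have not yet verified against my endpoint code:</p>

```python
# Free variables in a function body are looked up when the function is
# called, so every function sees the loop variable's final value.
funcs = []
for name in ["foo", "bar"]:
    def handler():
        return name
    funcs.append(handler)
late = [f() for f in funcs]   # ['bar', 'bar']

# Binding the value as a default argument captures it at definition time.
funcs = []
for name in ["foo", "bar"]:
    def handler(name=name):
        return name
    funcs.append(handler)
bound = [f() for f in funcs]  # ['foo', 'bar']
```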
| <python><asynchronous><fastapi><pydantic><uvicorn> | 2023-08-17 03:06:21 | 0 | 902 | Robbie |
76,918,044 | 1,290,485 | Cannot import mediapipe - TypeError: 'numpy._DTypeMeta' object is not subscriptable | <p>The install is successful. I receive this error when trying to import.</p>
<pre><code>TypeError: 'numpy._DTypeMeta' object is not subscriptable
</code></pre>
<p><a href="https://i.sstatic.net/hOAK4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hOAK4.png" alt="enter image description here" /></a></p>
<p>I have tried higher and lower versions of numpy (1.22.0,1.23.0,1.24.0,1.25.0,1.25.2). I installed mediapipe via pypi as well as a downloaded whl (<code> mediapipe-0.10.3-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl</code>) from <a href="https://pypi.org/project/mediapipe/0.10.3/#files" rel="nofollow noreferrer">pypi</a></p>
<p>Versions</p>
<pre><code>numpy 1.21.5
mediapipe 0.10.3
Python 3.10.6
</code></pre>
<p>These questions are similar, but do not answer mine.</p>
<p><a href="https://stackoverflow.com/questions/71596440/importing-xarray-raises-not-subscriptable-issue">Importing xarray raises not subscriptable issue</a></p>
<p><a href="https://stackoverflow.com/questions/76252065/cannot-import-mediapipe-in-jupyter-notebook">Cannot import mediapipe in Jupyter notebook</a></p>
<p><a href="https://stackoverflow.com/questions/76879942/how-to-prevent-error-message-when-importing-import-cv2">How to prevent error message when importing import cv2?</a></p>
| <python><python-3.x><numpy><typeerror><mediapipe> | 2023-08-17 02:05:37 | 4 | 6,832 | Climbs_lika_Spyder |
76,917,993 | 3,017,906 | Sympy - reducing the order of cos()**n | <p><strong>UPDATE 08/20/2023</strong></p>
<p>By chance I stumbled on the fact that <code>TR8</code>, by expanding products of cos into sums, actually does something very similar to what I want for the "complex" expressions I am trying to develop:</p>
<p><a href="https://i.sstatic.net/zAuli.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zAuli.png" alt="enter image description here" /></a></p>
<p>So <code>TR8</code> seems to recognize a product for <code>cos(x)**n</code> and it therefore unfolds it into sums of <code>cos</code> of sum/difference angles. However, on its own it leaves the squared expressions untouched. If I <code>expand</code> the intermediate result and re-apply <code>TR8</code>, I get where I want to be.</p>
<p>Since my expressions are a little more involved, I end up nesting <code>TR8(expand(</code> expressions until I get my desired result, so it is worth looking into the many hints contained in the answers to this question.</p>
<hr />
<p>I am looking for a way to expand powers of sums of cosine functions into sums of cosines of sums/differences of multiple angles. In the simplest form, starting from a square of a sum, one would get a sum of cosines of double angles plus (this is another simplification step) the expansion of a product of two cosines into the sum of cosines of sum and difference angles:</p>
<p><a href="https://i.sstatic.net/WVQY0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WVQY0.png" alt="enter image description here" /></a></p>
<p><code>sympy</code> has a set of functions to apply simplifications/transformations to trigonometric expressions, as detailed <a href="https://docs.sympy.org/dev/modules/simplify/fu.htm" rel="nofollow noreferrer">here</a>.</p>
<p>In particular, <code>TR7</code> is interesting in my case in that it "reduce cos power (increase angle)", as the description says.</p>
<p>This appears to work - however - for <code>cos()**2</code> only:</p>
<p><a href="https://i.sstatic.net/qYGgS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qYGgS.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/ZXFkT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZXFkT.png" alt="enter image description here" /></a></p>
<p>... And so on for other higher powers of <code>cos</code>.</p>
<p>Looking at the <a href="https://github.com/sympy/sympy/blob/master/sympy/simplify/fu.py" rel="nofollow noreferrer">code</a>, it appears clear why this is the case:</p>
<p><a href="https://i.sstatic.net/yD9Ds.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yD9Ds.png" alt="enter image description here" /></a></p>
<p>So it looks like the description in the reference I reported above, is a little misleading.</p>
<p>But now my question is: what is the right way to split something like <code>cos(x)**(2*n)</code> into a product of <code>n</code> <code>cos(x)**2</code> terms?</p>
<p>I started to look into the <a href="https://docs.sympy.org/latest/tutorials/intro-tutorial/manipulation.html" rel="nofollow noreferrer">expression tree</a> topic, thinking maybe I will have to parse the expression and break it into <code>cos(x)**2</code> atoms and separately apply the <code>TR7</code> simplification to each of them, then apply some form of expansion and successively probably <code>TR8</code> to complete the last step of multiplication of <code>cos(n*x)*cos(x)</code>, or something similar.</p>
<p>I would like to hear if you have any suggestion on how to tackle this.</p>
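<p>To make the target transformation concrete, here is the two-step route for the n = 2 case from the opening example, using <code>TR7</code> for the squares (which, per the source quoted above, only handles exponent 2) and <code>TR8</code> for the cross product. My question is about generalizing this to higher powers:</p>

```python
from sympy import cos, expand, symbols
from sympy.simplify.fu import TR7, TR8

x = symbols('x')
expr = (cos(x) + cos(2*x))**2

step = expand(expr)  # cos(x)**2 + 2*cos(x)*cos(2*x) + cos(2*x)**2
step = TR7(step)     # reduce squares: cos(t)**2 -> (1 + cos(2*t))/2
result = TR8(step)   # product -> sum: 2*cos(x)*cos(2*x) -> cos(x) + cos(3*x)
```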
| <python><sympy><trigonometry> | 2023-08-17 01:49:15 | 2 | 1,343 | Michele Ancis |
76,917,967 | 22,674,380 | Python pip is messed up- How to fix it? | <p>I recently installed anaconda and apparently it messed up my pip. When I install a package with it, like <code>pip install virtualenv</code>, it says:</p>
<pre><code>Requirement already satisfied: virtualenv in /usr/local/lib/python3.10/dist-packages (20.24.3)
</code></pre>
<p>But when I want to use it with <code>virtualenv venv</code>, it says:</p>
<pre><code>Command 'virtualenv' not found.
</code></pre>
<p>And when I do <code>python -m site</code> It shows:</p>
<pre><code>sys.path = [
'/home/user/.local',
'/home/user/anaconda3/lib/python311.zip',
'/home/user/anaconda3/lib/python3.11',
'/home/user/anaconda3/lib/python3.11/lib-dynload',
'/home/user/anaconda3/lib/python3.11/site-packages',
]
USER_BASE: '/home/user/.local' (exists)
USER_SITE: '/home/user/.local/lib/python3.11/site-packages' (doesn't exist)
ENABLE_USER_SITE: True
</code></pre>
<p>Whereas my pip is in <code>~/.local/lib/python3.10/site-packages</code>! So apparently pip packages are no longer visible to my Python.</p>
<p>What is the problem? How to fix this?</p>
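<p>In the meantime, a read-only diagnostic I have been using (just a sketch) to see whether <code>pip</code> and <code>python</code> agree:</p>

```shell
# Read-only diagnostic: tie pip to a specific interpreter instead of
# whatever `pip` happens to resolve to on PATH.
python3 -m pip --version || true   # the pip owned by this python3, if any
python3 -m site                    # the sys.path this interpreter really uses
```

<p>Installing and running through the same interpreter, e.g. <code>python3 -m pip install virtualenv</code> followed by <code>python3 -m virtualenv venv</code>, should at least avoid the PATH mismatch.</p>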
| <python><python-3.x><ubuntu><pip><anaconda> | 2023-08-17 01:36:20 | 1 | 5,687 | angel_30 |
76,917,779 | 2,805,482 | Create pyspark dataframe from numpy to 2d arrays | <p>I have multiple numpy arrays with the shapes and formats below.</p>
<pre><code>print(user_feature.shape)
print(service_id.shape)
print(target_id.shape)
print(target_label.shape)
print(service_label.shape)
Output:
(4621998, 620)
(4621998, 7)
(4621998,)
(4621998,)
(4621998, 7)
print(user_feature)
print(service_id)
print(target_id)
print(target_label)
print(service_label)
[[ 1.33677050e-02 -1.45685431e-02 -2.30765194e-02 ... 0.00000000e+00
0.00000000e+00 1.16669689e-04]
[ 1.33677050e-02 -1.45685431e-02 -2.30765194e-02 ... 0.00000000e+00
0.00000000e+00 1.16669689e-04]
[ 1.33677050e-02 -1.45685431e-02 -2.30765194e-02 ... 0.00000000e+00
0.00000000e+00 1.16669689e-04]
...
[-5.55971265e-03 6.94929948e-03 -2.85931975e-02 ... 1.36206508e-01
4.67081647e-03 7.43526791e-04]
[-5.55971265e-03 6.94929948e-03 -2.85931975e-02 ... 1.36206508e-01
4.67081647e-03 7.43526791e-04]
[-5.55971265e-03 6.94929948e-03 -2.85931975e-02 ... 1.36206508e-01
4.67081647e-03 7.43526791e-04]]
[[215. 215. 215. ... 554. 215. 215.]
[215. 215. 215. ... 215. 215. 215.]
[215. 215. 554. ... 215. 215. 215.]
...
[116. 116. 149. ... 149. 44. 44.]
[116. 149. 116. ... 44. 44. 297.]
[149. 116. 149. ... 44. 297. 297.]]
[215. 215. 554. ... 297. 297. 176.]
[1. 1. 1. ... 1. 1. 1.]
[[1. 1. 1. ... 1. 1. 1.]
[1. 1. 1. ... 1. 1. 1.]
[1. 1. 1. ... 1. 1. 1.]
...
[2. 5. 1. ... 1. 1. 1.]
[5. 1. 1. ... 1. 1. 1.]
[1. 1. 1. ... 1. 1. 1.]]
</code></pre>
<p>So I am trying to combine these arrays row by row into a PySpark dataframe. Below is what I tried: I defined a schema and used it to create the dataframe, but I am getting the error below. I also tried converting each numpy array into its own dataframe (to merge them all later), but I get the same schema error. What am I missing here?</p>
<pre><code>schema_fc = StructType( [ StructField("user_feature", ArrayType( ArrayType(FloatType()) ), False),
StructField("sequence_service_id_list", ArrayType( ArrayType(FloatType()) ), False),
StructField("target_service_id", ArrayType(FloatType() ) , False),
StructField("target_label", ArrayType(FloatType() ) , False),
StructField("sequence_label_list", ArrayType( ArrayType(FloatType() ) ), False)] )
df = spark.createDataFrame(zip(train_user_feature, train_sequence_service_id_list, train_target_service_id, train_target_label, train_sequence_label_list), schema_fc)
</code></pre>
<blockquote>
<p>Fail to execute line 2: df = spark.createDataFrame(zip(train_user_feature, train_sequence_service_id_list, train_target_service_id, train_target_label, train_sequence_label_list), schema_fc)
Traceback (most recent call last):
File "/tmp/1692220704218-0/zeppelin_python.py", line 158, in
exec(code, _zcUserQueryNameSpace)
File "", line 2, in
File "/usr/lib/spark/python/pyspark/sql/session.py", line 675, in createDataFrame
return self._create_dataframe(data, schema, samplingRatio, verifySchema)
File "/usr/lib/spark/python/pyspark/sql/session.py", line 700, in _create_dataframe
rdd, schema = self._createFromLocal(map(prepare, data), schema)
File "/usr/lib/spark/python/pyspark/sql/session.py", line 509, in _createFromLocal
data = list(data)
File "/usr/lib/spark/python/pyspark/sql/session.py", line 682, in prepare
verify_func(obj)
File "/usr/lib/spark/python/pyspark/sql/types.py", line 1409, in verify
verify_value(obj)
File "/usr/lib/spark/python/pyspark/sql/types.py", line 1390, in verify_struct
verifier(v)
File "/usr/lib/spark/python/pyspark/sql/types.py", line 1409, in verify
verify_value(obj)
File "/usr/lib/spark/python/pyspark/sql/types.py", line 1352, in verify_array
verify_acceptable_types(obj)
File "/usr/lib/spark/python/pyspark/sql/types.py", line 1292, in verify_acceptable_types
% (dataType, obj, type(obj))))
TypeError: field user_feature: ArrayType(ArrayType(FloatType,true),true) can not accept object array([ 1.33677050e-02, -1.45685431e-02, -2.30765194e-02, -5.82330208e-03,
-3.32775936e-02, -2.07596943e-02, -2.37267348e-04, -4.20683017e-03,
-4.34294827e-02, 7.81743042e-03, 1.68688912e-02, -1.24656968e-02,
4.02029790e-03, 3.12208608e-02, -4.03045630e-03, 2.28498932e-02,
1.13157155e-02, 1.06274318e-02, -8.11346527e-03, -6.98270462e-03,
-3.91414156e-03, -8.35975632e-03, 1.49020925e-05, -7.33336899e-03,
8.67067836e-03, -2.28234846e-03, 5.22214249e-02, 1.39330365e-02,
3.98872644e-02, 6.10537268e-03, 2.20239125e-02, -1.11836568e-03,
-1.80659704e-02, -1.99369527e-02, 2.31389888e-02, -4.65059280e-02,
2.19357051e-02, -2.52697505e-02, 1.59132443e-02, 2.98593007e-02,
3.99250053e-02, 4.35081795e-02, 1.41635444e-02, -1.70806684e-02,
1.30436081e-03, -1.31941019e-02, 1.14841061e-02, 5.76580316e-02,
2.16435902e-02, -4.92058229e-03, -3.88808325e-02, 2.62990035e-02,
-1.39478920e-02, 8.46350566e-04, 3.43511552e-02, -1.80143528e-02,
-4.80983080e-03, 1.50774773e-02, 2.53398716e-02, 2.06974316e-02,
7.26055726e-03, -4.74966839e-02, 2.62531005e-02, 1.31620280e-02,
-8.68222490e-03, -1.43548492e-02, 9.61813331e-03, -2.14950927e-02,
1.20869037e-02, 7.72804953e-04, -2.06626896e-02, 1.97541751e-02,
2.28899159e-02, -1.63509734e-02, 6.25781249e-04, 2.29061060e-02,
2.55851950e-02, 6.25669882e-02, 2.07121912e-02, 1.52262878e-02,
-4.50072298e-03, -2.88834609e-03, -7.85173662e-03, -3.19551006e-02,
-1.96053647e-02, 2.16488615e-02, -1.82419382e-02, -8.35052226e-03,
8.67485628e-03, -6.06879368e-02, 1.43139064e-02, 1.74148679e-02,
7.30755646e-03, 2.66504586e-02, 3.40831396e-03, 2.75030751e-02,
1.13305971e-02, -4.58726063e-02, 1.36343203e-02, 3.14601064e-02,
-1.86643917e-02, -2.10148469e-03, -4.82135266e-02, 2.11269110e-02,
1.60128847e-02, 3.05281598e-02, 1.86553858e-02, -3.48868184e-02,
-3.12457476e-02, -8.56714416e-03, 5.18892147e-02, 3.10956426e-02,
-1.29655236e-03, 1.91727206e-02, 7.09275994e-03, -2.15799417e-02,
3.56485695e-02, 1.97729263e-02, -2.52993172e-03, 4.34201807e-02,
2.76413155e+00, 0.00000000e+00, 0.00000000e+00, 1.16669689e-04]) in type <class 'numpy.ndarray'></p>
</blockquote>
<p>Expected output</p>
<pre><code>user_feature service_id target_id target_label service_label
[ 1.33677050e-02 -1.45685431e-02 -2.30765194e-02 ... 0.00000000e+00 0.00000000e+00 1.16669689e-04] [215. 215. 215. ... 554. 215. 215.] [1. 1. 1. ... 1. 1. 1.] [215. 215. 554. ... 297. 297. 176.] [5. 1. 1. ... 1. 1. 1.]
[ 1.33677050e-02 -1.45685431e-02 -2.30765194e-02 ... 0.00000000e+00 0.00000000e+00 7.43526791e-04] [215. 215. 554. ... 215. 215. 215.] [1. 1. 1. ... 1. 1. 1.] [215. 215. 554. ... 297. 297. 176.] [5. 1. 1. ... 1. 1. 1.]
[ 1.33677050e-02 -1.45685431e-02 -2.30765194e-02 ... 0.00000000e+00 0.00000000e+00 2.16669689e-04] [215. 215. 554. ... 215. 215. 215.] [1. 1. 1. ... 1. 1. 1.] [215. 215. 554. ... 297. 297. 176.] [5. 1. 1. ... 1. 1. 1.]
[ 1.33677050e-02 -1.45685431e-02 -2.30765194e-02 ... 0.00000000e+00 0.00000000e+00 1.16669689e-04] [215. 215. 554. ... 215. 215. 215.] [1. 1. 0. ... 1. 1. 1.] [215. 215. 554. ... 297. 297. 176.] [6. 1. 1. ... 1. 1. 1.]
</code></pre>
| <python><pandas><numpy><pyspark><numpy-ndarray> | 2023-08-17 00:24:50 | 2 | 1,677 | Explorer |
76,917,629 | 8,151,881 | Copy file from ephemeral docker container to host | <p>I want to copy a file that is generated by a docker container, and stored inside it, to my local host. What is a good way of doing that? The docker container is ephemeral (i.e. it runs for a very short time and then stops.)</p>
<p>I am working with the below mentioned scripts:</p>
<p>Python (<code>script.py</code>) which generates and saves a file titled <code>read.txt</code>.</p>
<pre><code>with open('read.txt', 'w') as file:
    lst = ['Some\n',
           'random\n',
           'sentence\n']
    file.writelines("% s\n" % words for words in lst)
</code></pre>
<p>I use the below <code>Dockerfile</code>:</p>
<pre><code>FROM python:3.9
WORKDIR /app
RUN pip install --upgrade pip
COPY . /app/
RUN pip install --requirement /app/requirements.txt
CMD ["python", "/app/script.py"]
</code></pre>
<p>Below is my folder structure:</p>
<pre><code>- local
  - folder1
    - script.py
    - requirements.txt
    - Dockerfile
  - folder2
</code></pre>
<p>Till now, I have managed to successfully build a docker container using:</p>
<pre><code>docker build --no-cache -t test:v1 .
</code></pre>
<p>When I run this docker container inside <code>/local/folder1/</code> using the below command, I get the desired file, i.e. <code>read.txt</code> inside <code>/local/folder1/</code></p>
<pre><code>docker run -v /local/folder1/:/app/ test:v1
</code></pre>
<p>But, when I run <code>docker run -v /local/folder2/:/app/ test:v1</code> inside <code>/local/folder2/</code>, I do not see <code>read.txt</code> inside <code>/local/folder2/</code> and I get the below message.</p>
<pre><code>python: can't open file '/app/script.py': [Errno 2] No such file or directory
</code></pre>
<p>I want to be able to get <code>read.txt</code> inside <code>/local/folder2/</code> when I run the docker container <code>test:v1</code> inside <code>/local/folder2/</code>. How can I do it? I want to be able to do it without copying the contents of <code>/local/folder1/</code> inside <code>/local/folder2/</code>.</p>
<p>The docker container is ephemeral (i.e. it runs for a very short time and then stops.) Hence answers given in <a href="https://stackoverflow.com/questions/22049212/copying-files-from-docker-container-to-host/22050116#22050116">this</a> and <a href="https://stackoverflow.com/questions/25292198/docker-how-can-i-copy-a-file-from-an-image-to-a-host/31316636#31316636">this</a> Stackoverflow posts, which focus on <code>docker cp</code> have not worked for me.</p>
<p>The running time of the abovementioned container is not essential. Even if a workable solution increases the container's running time, that is okay.</p>
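<p>One way to avoid mounting over <code>/app</code> at all (which is what hides <code>script.py</code> when the mounted folder does not contain it) is to have the script write to a directory taken from an environment variable, and mount only that output directory. This is a sketch, not taken from the question; the <code>OUTPUT_DIR</code> name is an assumption:</p>

```python
import os

# Write the file to a directory chosen at run time; default to the
# current directory so the script still works outside Docker.
out_dir = os.environ.get("OUTPUT_DIR", ".")
os.makedirs(out_dir, exist_ok=True)

lines = ["Some\n", "random\n", "sentence\n"]
with open(os.path.join(out_dir, "read.txt"), "w") as file:
    file.writelines(lines)
```

<p>The container would then be run as <code>docker run -e OUTPUT_DIR=/out -v /local/folder2/:/out/ test:v1</code>, so the image's own <code>/app</code> (with <code>script.py</code> baked in by <code>COPY</code>) stays untouched.</p>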
| <python><docker> | 2023-08-16 23:31:32 | 2 | 592 | Ling Guo |
76,917,610 | 6,332,554 | Python chained exceptions: how to access local variables in "root" exception? | <p>I have some utility code to dump the local variables for each frame during Python exception handling. In simplified form, it looks like this:</p>
<pre><code>import sys
import traceback
type_, value, tb = sys.exc_info()
for frame, lineno in traceback.walk_tb(tb):
    print(frame.f_locals)
</code></pre>
<p>This works fine when I have an unchained exception, i.e. something like</p>
<pre><code>Traceback (most recent call last):
File "...", line 115, in get_context_data
...code...
...multiple levels of stack trace...
TypeError: ...
</code></pre>
<p>prints a set of local variables printed out for each level in the traceback. However, when exception chaining is in effect, i.e.:</p>
<pre><code>Traceback (most recent call last):
File "a.py", line 115, in get_context_data
...code...
...multiple levels of stack trace...
TypeError: some msg
The above exception was the direct cause of the following exception:
File "x.py", ...
...
File "y.py", ...
...
RuntimeError: some other msg
</code></pre>
<p>Now, to cope with the chained exception, the code has to loop:</p>
<pre><code>type_, value, tb = sys.exc_info()
while value:
    for frame, lineno in traceback.walk_tb(tb):
        print(frame.f_locals)
    #
    # Loop for chained exceptions.
    #
    value = value.__cause__ or value.__context__
    type_ = value.__class__ if value else None
    tb = value.__traceback__ if value else None
</code></pre>
<p>The first time around the loop (i.e. the top most code in a.py) correctly prints the frame local variable. But as we follow <code>value.__cause__</code> down to the lower frames (x.py and y.py), <code>frame.f_locals</code> always seems to be empty. Is there some way to access those variables?</p>
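<p>For what it's worth, in a minimal self-contained reproduction of the same loop (all names below are invented for the demo), the frames of the chained exception do still carry their locals, which suggests the walking logic itself is sound and the empty <code>f_locals</code> in the real application comes from something clearing or collecting those frames:</p>

```python
import sys
import traceback

def inner():
    secret = 42  # local variable we hope to recover later
    raise TypeError("some msg")

def outer():
    try:
        inner()
    except TypeError as exc:
        raise RuntimeError("some other msg") from exc

collected = []  # one dict of locals per frame, across the whole chain
try:
    outer()
except RuntimeError:
    _, value, tb = sys.exc_info()
    while value is not None:
        for frame, lineno in traceback.walk_tb(tb):
            collected.append(dict(frame.f_locals))
        value = value.__cause__ or value.__context__
        tb = value.__traceback__ if value is not None else None

# The inner frame's 'secret' is present in the collected locals.
print(any("secret" in d for d in collected))
```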
| <python><exception> | 2023-08-16 23:25:03 | 0 | 723 | Shaheed Haque |
76,917,508 | 678,572 | Calculating (partial) correlation from a (shrunken) covariance matrix (Help porting R code to Python) | <p>There's a paper that I found interesting and would like to use some of the methods in Python. <a href="https://www.sciencedirect.com/science/article/pii/S2590197420300082" rel="nofollow noreferrer">Erb et al. 2020</a> implements partial correlation on compositional data and <a href="https://arxiv.org/pdf/2212.00496.pdf" rel="nofollow noreferrer">Jin et al. 2022</a> implements it in an R package called <a href="https://github.com/tpq/propr" rel="nofollow noreferrer">Propr</a>.</p>
<p>I found the function <a href="https://github.com/tpq/propr/blob/12553b3bcd159649f25d9a0e480250c1eee1d965/R/1-propr.R#L326" rel="nofollow noreferrer"><code>bShrink</code></a> that I'm simplifying below:</p>
<pre><code>library(corpcor)
# Load iris dataset (not compositional but will work for this case)
X = read.table("https://pastebin.com/raw/e3BSEZiK", sep = "\t", row.names = 1, header = TRUE, check.names = FALSE)
bShrink <- function(M){
# transform counts to log proportions
P <- M / rowSums(M)
B <- log(P)
# covariance shrinkage
D <- ncol(M)
Cb <- cov.shrink(B,verbose=FALSE)
G <- diag(rep(1,D))-matrix(1/D,D,D)
Cov <- G%*%Cb%*%G
# partial correlation
PC <- cor2pcor(Cov)
return(PC)
}
> bShrink(X)
[,1] [,2] [,3] [,4]
[1,] 1.0000000 0.96409509 0.6647093 -0.23827651
[2,] 0.9640951 1.00000000 -0.4585507 0.02735205
[3,] 0.6647093 -0.45855072 1.0000000 0.85903005
[4,] -0.2382765 0.02735205 0.8590301 1.00000000
</code></pre>
<p>Now I'm trying to port this in Python. Getting some small differences between <code>Cb</code> which is expected but major differences in <code>PC</code> which is the partial correlation (<a href="https://www.rdocumentation.org/packages/corpcor/versions/1.6.10/topics/cor2pcor" rel="nofollow noreferrer">cor2pcor function</a>).</p>
<p>I tried using the answers from here but couldn't get it to work: <a href="https://stackoverflow.com/questions/52229220/partial-correlation-in-python">Partial Correlation in Python</a></p>
<p>Here's my Python code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
from sklearn.covariance import LedoitWolf
def bShrink(M:pd.DataFrame):
    components = M.columns
    M = M.values
    P = M/M.sum(axis=1).reshape(-1,1)
    B = np.log(P)
    D = M.shape[1]

    lw_model = LedoitWolf()
    lw_model.fit(B)
    Cb = lw_model.covariance_

    G = np.eye(D) - np.ones((D, D)) / D
    Cov = G @ Cb @ G

    precision = np.linalg.inv(Cov)
    diag = np.diag(precision)
    Z = np.outer(diag, diag)
    partial = -precision / Z

    return pd.DataFrame(partial, index=components, columns=components)
X = pd.read_csv("https://pastebin.com/raw/e3BSEZiK", sep="\t", index_col=0)
bShrink(X)
# sepal_length sepal_width petal_length petal_width
# sepal_length -5.551115e-17 -5.551115e-17 -5.551115e-17 -5.551115e-17
# sepal_width -5.551115e-17 -5.551115e-17 -5.551115e-17 -5.551115e-17
# petal_length -5.551115e-17 -5.551115e-17 -5.551115e-17 -5.551115e-17
# petal_width -5.551115e-17 -5.551115e-17 -5.551115e-17 -5.551115e-17
</code></pre>
<p>I'm trying to avoid using Pingouin or any other packages than numpy, pandas, and sklearn.</p>
<p><strong>How can I create a partial correlation matrix from a shrunken covariance matrix?</strong></p>
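<p>One possible culprit is the last step: the standard conversion from a precision matrix to partial correlations divides by the <em>square roots</em> of the diagonal and sets the diagonal to 1, and the projected covariance <code>G %*% Cb %*% G</code> is singular, so <code>np.linalg.inv</code> is unreliable and a pseudoinverse is safer. A hedged sketch of that conversion (not a verified port of <code>corpcor</code>):</p>

```python
import numpy as np

def cor2pcor(cov: np.ndarray) -> np.ndarray:
    """Partial correlations from a covariance matrix (sketch of cor2pcor's rule)."""
    precision = np.linalg.pinv(cov)   # pseudoinverse handles singular matrices
    d = np.sqrt(np.diag(precision))   # note the square root of the diagonal
    pc = -precision / np.outer(d, d)  # pcor_ij = -p_ij / sqrt(p_ii * p_jj)
    np.fill_diagonal(pc, 1.0)
    return pc
```

<p>Swapping this in for the <code>precision</code>/<code>Z</code> block above should at least remove the all-<code>-5.55e-17</code> artifact; any remaining differences would come from the shrinkage step itself, since <code>LedoitWolf</code> is not the same estimator as <code>corpcor::cov.shrink</code>.</p>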
| <python><r><statistics><correlation><covariance> | 2023-08-16 22:49:00 | 1 | 30,977 | O.rka |
76,917,149 | 4,737,944 | Unknown user-based global PIP configuration sets wrong PyPi registry | <p>I used to need to install all PIP packages from a private repository (that mirrored pypi.org but also contained some of our own packages).</p>
<p>To ensure that <code>pip install &lt;package&gt;</code> would always resolve all packages from this repository, I created a file named <code>~/.pypirc</code> with this content:</p>
<pre><code>[distutils]
index-servers =
    nexus
    nexus-own
[nexus]
repository = https://nexus.REDACTED.com/repository/pypi/
username = REDACTED
password = REDACTED
[nexus-own]
repository = https://nexus.REDACTED.com/repository/our-pypi/
username = REDACTED
password = REDACTED
</code></pre>
<p>However, the process has now been changed and the specified repository doesn't exist any more.</p>
<p>I deleted the <code>.pypirc</code> file but still, pip will try to resolve packages from the private repository.</p>
<p>For my own projects I can circumvent this by explicitly stating pypi.org as repository in the <code>Pipfile</code>, but when I want to install a package globally, it will fail because it will try to access the now-defunct repository:</p>
<pre><code>Looking in indexes: https://REDACTED:****@nexus.REDACTED.com/repository/pypi/simple
Collecting PACKAGE_NAME_REDACTED
Downloading PACKAGE_NAME_REDACTED
- 193.2 kB 2.7 MB/s 0:00:00
ERROR: Cannot unpack file /private/var/folders/yg/c7k3cx_s105249_xfk0rz8vr0000gp/T/pip-unpack-pyy2dbxl/REDACTED.git (downloaded from /private/var/folders/yg/c7k3cx_s105249_xfk0rz8vr0000gp/T/pip-req-build-sqsbu7rx, content-type: text/html; charset=utf-8); cannot detect archive format
</code></pre>
<p>How can this be now that the <code>.pypirc</code> file has been removed?</p>
<p>Does pip cache this information anywhere? I've searched my complete user's home directory recursively but could not find any trace of this configuration.</p>
<p>Where could pip be getting it from?</p>
<p>How can I make pip revert back to the default settings with pypi.org as default repository?</p>
<p>This only affects one user; if I log in as a different user, I can install packages from pypi.org without problems.</p>
<p>My local system is macOS 13.2.1 Ventura.
Pip version is 22.3.1.
Python version is 3.11.4.</p>
| <python><pip> | 2023-08-16 21:22:07 | 1 | 433 | ronin667 |
76,917,054 | 2,039,471 | Where are the docs about dataclass fields that have no type hints? | <p>Consider this example of dataclass:</p>
<pre><code>from dataclasses import dataclass, asdict
@dataclass
class Example:
    typed: str = "a"
    nontyped = "nt_value"
# Now let's make some tests and see 'nontyped' behaviour
ex = Example()
ex2 = Example()
print(ex)
# Example(typed='a')
print(ex.nontyped)
# nt_value
print(asdict(ex))
# {'typed': 'a'}
ex.nontyped = 'new_value'
print(ex.nontyped)
# new_value
print(ex2.nontyped)
# nt_value
print(asdict(ex))
# {'typed': 'a'}
print(asdict(ex2))
# {'typed': 'a'}
</code></pre>
<p>I see that <code>nontyped</code> is a class attribute and it doesn't appear in the representation of the dataclass object.</p>
<p>Where can I read about this in the docs?</p>
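<p>The behaviour follows from the field-discovery rule in the <code>dataclasses</code> module documentation: the decorator only turns class variables <em>with a type annotation</em> into fields, so <code>nontyped</code> stays an ordinary class attribute. <code>fields()</code> makes this visible:</p>

```python
from dataclasses import dataclass, fields

@dataclass
class Example:
    typed: str = "a"
    nontyped = "nt_value"  # no annotation, so not a field: a plain class attribute

# fields() reflects what the decorator saw: only the annotated name.
print([f.name for f in fields(Example)])  # ['typed']
```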
| <python><python-dataclasses> | 2023-08-16 21:01:40 | 1 | 4,075 | Alexander C |
76,917,039 | 1,319,998 | Python subprocess and KeyboardInterrupt - can it cause a subprocess without a variable? | <p>If starting a subprocess from Python:</p>
<pre class="lang-py prettyprint-override"><code>from subprocess import Popen
with Popen(['cat']) as p:
    pass
</code></pre>
<p>is it possible that the process starts, but because of a KeyboardInterrupt caused by CTRL+C from the user, the <code>as p</code> bit never runs, but the process has indeed started? So there will be no variable in Python to do anything with the process, which means terminating it with Python code is impossible, so it will run until the end of the program?</p>
<p>Looking around the Python source, I see what I think is where the process starts in the <code>__init__</code> call at <a href="https://github.com/python/cpython/blob/3.11/Lib/subprocess.py#L807" rel="nofollow noreferrer">https://github.com/python/cpython/blob/3.11/Lib/subprocess.py#L807</a> then on POSIX systems ends up at <a href="https://github.com/python/cpython/blob/3.11/Lib/subprocess.py#L1782C1-L1783C39" rel="nofollow noreferrer">https://github.com/python/cpython/blob/3.11/Lib/subprocess.py#L1782C1-L1783C39</a> calling <code>os.posix_spawn</code>. What happens if there is a KeyboardInterrupt just after <code>os.posix_spawn</code> has completed, but before its return value has even been assigned to a variable?</p>
<p>Simulating this:</p>
<pre><code>class FakeProcess():
    def __init__(self):
        # We create the process here at the OS-level,
        # but just after, the user presses CTRL+C
        raise KeyboardInterrupt()

    def __enter__(self):
        return self

    def __exit__(self, _, __, ___):
        # Never gets here
        print("Got to exit")


p = None
try:
    with FakeProcess() as p:
        pass
finally:
    print('p:', p)
</code></pre>
<p>This prints <code>p: None</code>, and does <em>not</em> print <code>Got to exit</code>.</p>
<p>This does suggest to me that a KeyboardInterrupt can prevent the cleanup of a process?</p>
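<p>If it helps, one hedged mitigation (a sketch, not a complete fix for every race) is to ignore <code>SIGINT</code> while the child is being spawned and bound to a name, so a CTRL+C in that window is dropped instead of being raised mid-spawn:</p>

```python
import signal
import subprocess
import sys

def popen_shielded(*args, **kwargs):
    # Ignore SIGINT while the child is spawned and bound to a variable;
    # with SIG_IGN the interrupt is discarded (not deferred) in this window.
    previous = signal.signal(signal.SIGINT, signal.SIG_IGN)
    try:
        return subprocess.Popen(*args, **kwargs)
    finally:
        signal.signal(signal.SIGINT, previous)

p = popen_shielded([sys.executable, "-c", "print('child ran')"])
p.wait()
```

<p>Note that <code>signal.signal</code> must be called from the main thread, and discarding the interrupt changes behaviour slightly; on POSIX, <code>signal.pthread_sigmask</code> could defer it instead.</p>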
| <python><subprocess><signals><sigint> | 2023-08-16 20:57:49 | 2 | 27,302 | Michal Charemza |
76,917,004 | 1,816,077 | Support for an intermediate table with composite primary keys in Django 3.2 | <p>I have a set of Django 3.2 models defined below. When I use the admin UI to add a new challenge then it fails with <em>"column challenge_countries.id does not exist"</em>. How can I let Django know that the <code>ChallengeCountry</code> model does not contain an <code>id</code> column but instead has a composite primary key made of the columns <code>challenge_id</code> and <code>country_id</code>?</p>
<pre><code>import uuid

from django.db import models


class Country(models.Model):
    id = models.UUIDField(
        primary_key=True,
        default=uuid.uuid4,
        editable=False,
    )
    country_name = models.TextField(blank=False, null=True)
    country_iso = models.TextField(blank=False, null=True)

    class Meta:
        managed = False
        db_table = "countries"


class ChallengeCountry(models.Model):
    challenge = models.ForeignKey(
        "Challenge",
        on_delete=models.CASCADE,
        related_name="challenge",
        db_column="challenge_id"
    )
    country = models.ForeignKey(
        Country,
        on_delete=models.DO_NOTHING,
        related_name="country",
        db_column="country_id"
    )

    class Meta:
        managed = False
        db_table = "challenge_countries"
        unique_together = ["challenge", "country"]


class Challenge(models.Model):
    id = models.UUIDField(
        primary_key=True,
        default=uuid.uuid4,
        editable=False,
    )
    name = models.TextField(blank=False, null=True)
    countries = models.ManyToManyField(
        Country,
        related_name="challenges",
        verbose_name="country",
        blank=True,
        through=ChallengeCountry,
        through_fields=("challenge", "country")
    )

    class Meta:
        managed = False
        db_table = "challenges"
</code></pre>
<p><strong>NOTE</strong>: I do not have the freedom to add an <code>id</code> column to the <code>challenge_countries</code> table</p>
<p>I have also tried removing <code>unique_together</code> from <code>ChallengeCountry</code> and instead added <code>primary_key=True</code> to both the <code>challenge</code> and <code>country</code> fields in that model but that resulted in this error during migration:</p>
<pre><code>pipenv run python manage.py migrate
SystemCheckError: System check identified some issues:
ERRORS:
<class 'challenges.admin.ChallengesAdmin'>: (admin.E013) The value of 'fieldsets[1][1]["fields"]' cannot include the ManyToManyField 'countries', because that field manually specifies a relationship model.
challenges.ChallengeCountry: (models.E026) The model cannot have more than one field with 'primary_key=True'.
</code></pre>
<p>The <code>ChallengesAdmin</code> that this error refers to has the following code snippet:</p>
<pre><code>
class ChallengesAdmin(admin.ModelAdmin):
    form = ChallengeForm
    ...
    filter_horizontal = [
        "countries",
    ]
    fieldsets = [
        (
            None,
            {
                "fields": [
                    "id",
                    "name",
                    ...
                ]
            },
        ),
        (
            "Countries",
            {
                "fields": ["countries"],
                "description": """
                If no countries are selected,
                the challenge will be visible to all users
                """,
            },
        ),
    ]
    ...
</code></pre>
| <python><django-models> | 2023-08-16 20:50:50 | 0 | 387 | Mufasa |
76,916,997 | 4,620,387 | How to self-update a single instance executable in Python? | <p>I'm working on a Python application that is required to self-update. The unique challenge I'm facing is that I only have a single executable on my machine (App.exe). Any other "executables" are actually just shortcuts pointing back to this one. My updater is part of the same executable that it's trying to update. I thought of creating a backup of the current running executable, then launching the update process from the backup. However, I'm concerned about unpredictable behaviors and would like to avoid renaming or changing the original filename.</p>
<p>Here's a pseudocode of my current approach:</p>
<pre><code>def installUpdates():
    # ... [Preliminary code: Check for updates, download, etc.]

    # Backup current executable without renaming
    current_exec_path = sys.executable
    backup_exec_path = create_backup_without_renaming(current_exec_path)

    # Run the installer from the backup
    proc = subprocess.Popen([backup_exec_path, args])
    proc.communicate()

    # Check installer's return code
    if proc.returncode == 0:
        # Success - assuming installer updated the original executable
        cleanup_backup(backup_exec_path)
    else:
        # Error - handle error situation
        handle_error_scenario()

    # Kill current process
    kill_current_process()
</code></pre>
| <python> | 2023-08-16 20:50:00 | 0 | 1,805 | Sam12 |
76,916,992 | 594,323 | Serial device only returns position data every other time | <p>I have python code for reading the current position data from a serial pan-tilt-zoom (PTZ) device in response to a command I send it. This code works fine on a Raspberry Pi, but on a Jetson Orin Nano, it gives empty results on every other write + read command sequence. To work around this, I am sending the "get position" command again after each read.</p>
<p>Is there a better solution to ensure reliable reading from this device?</p>
<pre><code>import serial
import time

COM_PORT = "/dev/ttyUSB0"
BAUD_RATE = 9600
CMD_GET_POS = b'Rllllllllll'


def get_position():
    ser = serial.Serial(COM_PORT, BAUD_RATE, timeout=1)
    time.sleep(0.5)
    ser.write(CMD_GET_POS)
    time.sleep(0.1)
    read_val = ser.readline()
    time.sleep(0.1)
    # The workaround required for next read to work on Jetson Orin Nano for some reason.
    ser.write(CMD_GET_POS)
    current_pos = {'h': 0, 'v': 0}
    if isinstance(read_val, bytes):
        current_pos_data = read_val.decode()
        if current_pos_data:
            current_h = int(current_pos_data[1:4])
            current_v = int(current_pos_data[4:7])
            current_pos = {'h': current_h, 'v': current_v}
    return current_pos


def test_get_position():
    for i in range(3):
        print(f"get_position {i} {get_position()}")
        time.sleep(0.5)


test_get_position()
</code></pre>
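<p>As a side note, separating the byte-level parsing from the I/O makes it easier to wrap the write/read pair in a retry loop instead of the blind resend. A sketch based on the slicing above; the reply layout (<code>R</code> followed by three pan digits and three tilt digits) is inferred from the question, not from a device datasheet:</p>

```python
def parse_position(raw: bytes) -> dict:
    """Decode an 'Rhhhvvv...' reply into pan/tilt, or zeros if empty/short."""
    text = raw.decode(errors="ignore")
    if len(text) >= 7 and text[1:7].isdigit():
        return {"h": int(text[1:4]), "v": int(text[4:7])}
    return {"h": 0, "v": 0}

print(parse_position(b"R123456llll"))  # {'h': 123, 'v': 456}
```

<p>With parsing isolated, the read side can retry (write, read, parse, and repeat on an empty reply) rather than always issuing a second write.</p>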
| <python><serial-port><nvidia-jetson> | 2023-08-16 20:49:05 | 0 | 1,050 | ramiwi |
76,916,990 | 913,494 | Finding a file in a Google Shared Drive | <p>I am using a service account to access my shared drive via some python scripting.
I have a larger application where I am trying to create a new Google Sheet in a shared drive using the FolderID. But before I create it I use the Drive API to see if the sheet already exists. I am able to write to the folder successfully. But for some reason I cannot get drive_service.files().list to work properly.<br />
I have broken the actions into simple individual scripts for testing. Again, I have confirmed that I can write to the folder.</p>
<p>But if I try searching by name, or I tried to make it even more simple just by listing all the files in a folder by ID, I get no files found:</p>
<pre><code>credentials = service_account.Credentials.from_service_account_file(SERVICE_ACCOUNT_KEY_PATH, scopes=['https://www.googleapis.com/auth/drive.readonly'])

# Create a Google Drive API client
drive_service = build('drive', 'v3', credentials=credentials)


def list_files(folder_id=None):
    query = f"'{folder_id}' in parents" if folder_id else "'root' in parents"
    results = drive_service.files().list(q=query, supportsAllDrives=True).execute()
    files = results.get('files', [])
    if not files:
        print("No files found.")
    else:
        print("Files:")
        for file in files:
            print(f"{file['name']} ({file['id']})")


if __name__ == '__main__':
    folder_id = input("Enter the folder ID (Press Enter to list the root directory): ")
    list_files(folder_id)
</code></pre>
<p>It will however list files in the root of the service account's drive. But it can't for the folder I am specifying on the shared drive, EVEN won't list a file that the service account created in the shared drive.
The service account has "content</p>
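<p>One detail worth checking: for content that lives on a shared drive, the Drive v3 <code>files.list</code> call needs <code>includeItemsFromAllDrives=True</code> in addition to <code>supportsAllDrives=True</code> (and optionally <code>corpora='drive'</code> plus a <code>driveId</code>); otherwise shared-drive items are filtered out of the results. A sketch of the parameters (the helper name is made up):</p>

```python
def shared_drive_list_params(folder_id, drive_id=None):
    # Parameters for drive_service.files().list(...) that include
    # shared-drive content in the results.
    params = {
        "q": f"'{folder_id}' in parents and trashed = false",
        "supportsAllDrives": True,
        "includeItemsFromAllDrives": True,  # required to see shared-drive items
        "fields": "files(id, name)",
    }
    if drive_id:
        params["corpora"] = "drive"
        params["driveId"] = drive_id
    return params

# results = drive_service.files().list(**shared_drive_list_params(folder_id)).execute()
```

<p>(The service account also has to be a member of the shared drive for any of this to return results.)</p>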
| <python><google-drive-api> | 2023-08-16 20:48:41 | 1 | 535 | Matt Winer |
76,916,982 | 247,542 | How to initialize Django database without using existing migrations? | <p>I have a Django project, which contains dozens of migrations, and I want to initialize it on a new database. How do I do this without running all the migrations, which can take a lot of time to run?</p>
<p>In earlier versions of Django, it used to have a "syncdb" command that you could run to create all tables in a blank database, bypassing all migrations.</p>
<p>Current Django's <code>migrate</code> command has a <code>--run-syncdb</code> option, but the docs say:</p>
<pre><code>Creates tables for apps without migrations.
</code></pre>
<p>Since all my apps have migrations, I interpret this to mean it does nothing in my case.</p>
<p>Does modern Django no longer have anyway to initialize a database without tediously running through every single migration?</p>
<p>Note, I'm not asking how to initialize a Django database. I'm asking how to do it more efficiently than by running migrations, which are designed to modify an existing database, not initialize one from scratch.</p>
<p>This has to be possible, since the unittest framework has the option to initialize the test database schema without running migrations.</p>
| <python><django><django-migrations> | 2023-08-16 20:47:01 | 1 | 65,489 | Cerin |
76,916,966 | 2,132,930 | Syntax error while trying out NRQL in NewRelic | <p>I am trying to fetch some data via NewRelic's NerdGraph GraphQL API.</p>
<pre><code>url = "https://api.newrelic.com/graphql"
headers = {"API-Key": api_key}

nrql = """{
  actor {
    account(id: 1234) {
      name
    }
  }
}
"""

response = requests.get(url=url, headers=headers, json={'query': nrql})
</code></pre>
<p>However, I am getting an error.</p>
<p><code>{'errors': [{'locations': [{'column': 2, 'line': 1}], 'message': 'syntax error before: "\\"query\\""'}]}</code></p>
<p>The same query works in NerdGraph API Explorer application in New Relic. What could be wrong here?</p>
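<p>For comparison, GraphQL endpoints such as NerdGraph expect the query as the JSON body of a <code>POST</code>; the <code>syntax error before: "query"</code> message is consistent with the server trying to parse the serialized body of a non-POST call as raw GraphQL. With <code>requests</code> the equivalent would be <code>requests.post(url=url, headers=headers, json={'query': nrql})</code>; a stdlib-only sketch of the request shape:</p>

```python
import json
import urllib.request

def build_nerdgraph_request(api_key: str, query: str) -> urllib.request.Request:
    # The query travels as JSON in the body of a POST request.
    return urllib.request.Request(
        "https://api.newrelic.com/graphql",
        data=json.dumps({"query": query}).encode(),
        headers={"API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

req = build_nerdgraph_request("NRAK-...", "{ actor { user { name } } }")
print(req.get_method())  # POST
```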
| <python><graphql><newrelic> | 2023-08-16 20:43:30 | 1 | 917 | Prabodh Mhalgi |
76,916,781 | 8,074,805 | loop through multiple subdirectories, return specific .json file and add content of json file to pandas dataframe | <p>I have a folder with multiple subdirectories that contain a number of .json files.</p>
<p>I am only interested in getting the content of .json files that are titled "all_content.json" in each subdirectory (this file has the same name in each directory).</p>
<p>Then I want to take the content from each file and add it to one pandas dataframe, where the column title is the key (e.g.: column 1 = content and column 2 = date)</p>
<pre><code>{
    "content": "The flowers are so pretty here",
    "date": "1999-10-22"
}
</code></pre>
<p>This is what I have tried so far, but I am not sure how to select the right file, open it and then save the content:</p>
<pre><code>path = './folders'

for root, dirs, files in os.walk(path):
    print(files)  # returns list of all files in the folder
    for file in files:
        if file.endswith("all_content.json"):
            print(file)
            with open(file) as fp:
                data = json.load(fp)
</code></pre>
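<p>The missing piece in the attempt above is joining <code>root</code> with the filename: <code>files</code> holds bare names, so <code>open(file)</code> only works if the working directory happens to match. A sketch of the full loop, building a list of dicts that <code>pd.DataFrame</code> can turn into one column per key (the demo tree is created in a temp directory):</p>

```python
import json
import os
import tempfile

def collect_all_content(path):
    """Return one dict per subdirectory's all_content.json."""
    records = []
    for root, _dirs, files in os.walk(path):
        if "all_content.json" in files:
            with open(os.path.join(root, "all_content.json")) as fp:  # join with root!
                records.append(json.load(fp))
    return records

# Tiny self-contained demo tree with two subdirectories.
base = tempfile.mkdtemp()
for sub in ("folder_a", "folder_b"):
    os.makedirs(os.path.join(base, sub))
    with open(os.path.join(base, sub, "all_content.json"), "w") as fp:
        json.dump({"content": f"text from {sub}", "date": "1999-10-22"}, fp)

records = collect_all_content(base)
# df = pd.DataFrame(records)  # columns 'content' and 'date', one row per file
```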
| <python><json><pandas> | 2023-08-16 20:13:19 | 1 | 735 | msa |
76,916,636 | 6,290,283 | Langchain chains not using Pinecone vectorstore | <p>I am trying to use a perfectly valid and populated Pinecone index as a vectorstore in my langchain implementation. However, the chains don't load or use the vectorstore in any way.</p>
<p>for example, this code:</p>
<pre><code>question = "What is your experience?"

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.1)

pc_index = pinecone.Index(index_name)
print(pc_index.describe_index_stats())

pc_interface = Pinecone.from_existing_index(
    index_name,
    embedding=OpenAIEmbeddings(),
    namespace="SessionIndex"
)

qa_chain = RetrievalQA.from_chain_type(
    llm,
    retriever=pc_interface.as_retriever(),
)

print(qa_chain.run(question))
</code></pre>
<p>returns:</p>
<pre><code>{'dimension': 1536,
'index_fullness': 0.0,
'namespaces': {'SessionIndex': {'vector_count': 40}},
'total_vector_count': 40}
As an AI language model, I don't have personal experiences like humans do. However, I have been trained on a wide range of data sources, including books, articles, and websites, to provide information and assist with various topics. Is there something specific you would like to know or discuss?
</code></pre>
<p>The index contains a number of entries related to personal experience of a person. If I use RetrievalQAWithSourcesChain and get the len() of sources, it prints 0.</p>
<p>How do I make Pinecone indexes work with Langchain?</p>
| <python><langchain><vector-database><pinecone> | 2023-08-16 19:48:49 | 2 | 539 | Kristian Vybiral |
76,916,322 | 3,696,153 | removing from a list in a for loop does not seem to work | <p>Example code:</p>
<pre class="lang-py prettyprint-override"><code>found_list = [1, 2, 3, 4, 5]
remove_list = [1, 2, 3, 4, 5]

for candidate in found_list:
    if candidate in remove_list:
        found_list.remove(candidate)

# Expecting this to be an EMPTY list,
# because everything in found is in remove.
print("NEW FOUND LIST: %s" % found_list)
</code></pre>
<p>In my actual case, I am using os.walk() and need to prune/ignore some subdirectories. The example uses a list of subdirectory names (e.g. ".svn" and ".git") that I want to ignore.</p>
<p>The examples I have seen use for() over the directory or file list and call list.remove() to remove the specific item directly.</p>
<p>However, things are not deleted the way I expect. I think this is a limitation (bug?) in the way the for() loop iterates over a list: if you delete the current item, the next item is skipped and never considered.</p>
<p>Is this documented anywhere?</p>
<p>My workaround solution is to create a new list and then assign that to the list given by os.walk()</p>
<p>Thanks.</p>
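<p>Yes, this is documented: the Python tutorial's note on the <code>for</code> statement warns against mutating a sequence while iterating over it. <code>remove()</code> shifts the remaining items left while the iterator's internal index keeps advancing, so the element that slid into the removed slot is skipped. Iterating over a copy avoids the problem; for <code>os.walk()</code> pruning specifically, the directory list must be mutated in place (slice assignment) for the pruning to take effect:</p>

```python
found_list = [1, 2, 3, 4, 5]
remove_list = [1, 2, 3, 4, 5]

for candidate in list(found_list):  # iterate over a copy
    if candidate in remove_list:
        found_list.remove(candidate)
print(found_list)  # []

# os.walk pruning: replace the *contents* of dirs so walk sees the change.
ignored = {".svn", ".git"}
# for root, dirs, files in os.walk(path):
#     dirs[:] = [d for d in dirs if d not in ignored]
```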
| <python><list> | 2023-08-16 18:50:57 | 1 | 798 | user3696153 |
76,916,197 | 13,323,289 | Can't save trained transformer model | <p>I have trained a Transformer model but can't save the best model. What is wrong here? The code runs fine and trains.</p>
<pre><code>if not os.path.exists("asr-checkpoint"):
    os.makedirs("asr-checkpoint")

checkpoint_path = '/content/asr-checkpoint'

cp_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_path,
    monitor='val_loss',
    mode='min',
    save_best_only=True,
    verbose=1
)

print("tf.executing_eagerly():", tf.executing_eagerly())

history = model.fit(
    ds,
    validation_data=val_ds,
    callbacks=[display_cb],
    initial_epoch=0,
    epochs=1
)
</code></pre>
| <python><tensorflow><keras><nlp><transformer-model> | 2023-08-16 18:32:43 | 1 | 330 | XO56 |
76,916,173 | 20,652,094 | Leetcode 424. Longest Repeating Character Replacement: Right Pointer Incrementation | <p>I'm trying to solve <a href="https://leetcode.com/problems/longest-repeating-character-replacement/" rel="nofollow noreferrer">Leetcode 424. Longest Repeating Character Replacement</a>.
Why is this code not working? I cannot get my head around it.</p>
<pre><code>class Solution:
    def characterReplacement(self, s: str, k: int) -> int:
        l, r = 0, 0
        res = 0
        char_count = {}

        while r < len(s):
            char_count[s[r]] = char_count.get(s[r], 0) + 1
            substr_len = r - l + 1

            if substr_len - max(char_count.values()) <= k:
                res = max(res, substr_len)
                r += 1
            else:
                char_count[s[l]] -= 1
                l += 1

        return res
</code></pre>
<p>Test case:</p>
<pre><code>Input:    s = "AABABBA", k = 1
Output:   5
Expected: 4
<p>While this works:</p>
<pre><code>class Solution:
    def characterReplacement(self, s: str, k: int) -> int:
        l, r = 0, 0
        res = 0
        char_count = {}

        while r < len(s):
            char_count[s[r]] = char_count.get(s[r], 0) + 1
            substr_len = r - l + 1
            r += 1

            if substr_len - max(char_count.values()) <= k:
                res = max(res, substr_len)
            else:
                char_count[s[l]] -= 1
                l += 1

        return res
</code></pre>
<p>The difference between the two codes is where the right pointer is incremented. Shouldn't the right pointer only be incremented if the <code>substr_len - max(char_count.values()) <= k</code>? Isn't this what the first code is doing?, while the second code always increments the right pointer, even if <code>substr_len > k</code>?</p>
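<p>Exactly: in the first version the <code>else</code> branch leaves <code>r</code> unchanged, so the next iteration runs <code>char_count[s[r]] += 1</code> for the <em>same</em> character a second time. The counts then no longer describe the window <code>[l, r]</code>, and the inflated <code>max(char_count.values())</code> lets an oversized window pass the check. A parametrized version makes the two behaviours directly comparable:</p>

```python
def character_replacement(s, k, advance_r_every_iteration):
    l = r = 0
    res = 0
    char_count = {}
    while r < len(s):
        char_count[s[r]] = char_count.get(s[r], 0) + 1
        substr_len = r - l + 1
        if advance_r_every_iteration:
            r += 1                    # second (working) version
        if substr_len - max(char_count.values()) <= k:
            res = max(res, substr_len)
            if not advance_r_every_iteration:
                r += 1                # first (broken) version
        else:
            char_count[s[l]] -= 1
            l += 1                    # r stays put: s[r] gets recounted
    return res

print(character_replacement("AABABBA", 1, True))   # 4
print(character_replacement("AABABBA", 1, False))  # 5
```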
| <python><sliding-window> | 2023-08-16 18:27:43 | 1 | 307 | user123 |
76,916,070 | 1,501,191 | pyspark f string not ending | <p>As shown in the picture, the highlighting for this f-string does not end in a Jupyter notebook, but the same line works in a Python file. Why is that?</p>
<pre><code>spark_session.sql(f"show tables in {schemaName} like '{tableName}'").count() == 1;
</code></pre>
<p><a href="https://i.sstatic.net/buLG0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/buLG0.png" alt="enter image description here" /></a></p>
| <python><pyspark><syntax-highlighting> | 2023-08-16 18:10:23 | 0 | 8,362 | Blue Clouds |
76,915,986 | 7,713,770 | Docker container Django debug mode true, still production mode | <p>I have a dockerized Django app and an .env file with debug=1, but when I run the docker container it is apparently in production mode: debug=false.</p>
<p>This is my docker-compose file:</p>
<pre><code>version: "3.9"

services:
  app:
    build:
      context: .
      args:
        - DEV=true
    ports:
      - "8000:8000"
    env_file:
      - ./.env
    volumes:
      - ./zijn:/app
    command: >
      sh -c " python manage.py wait_for_db &&
              python ./manage.py migrate &&
              python ./manage.py runserver 0:8000"
    environment:
      - DB_HOST=db
      - DB_NAME=zijn
      - DB_USER=zijn
      - DB_PASS=235711
      - DEBUG=1
    depends_on:
      - db

  db:
    image: postgres:13-alpine
    volumes:
      - dev-db-data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=dzijn
      - POSTGRES_USER=zijn
      - POSTGRES_PASSWORD=235711

volumes:
  dev-db-data:
  dev-static-data:
</code></pre>
<p>and the .env file:</p>
<pre><code>DEBUG=1
SECRET_KEY="django-insecure-kwuz7%@967xvpdnf7go%r#d%lgl^c9ah%!_08l@%x=s4e4&+(u"
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
DB_NAME="zijn"
DB_USER="zijn"
DB_PASS="235711"
DB_HOST=db
DB-PORT=54326
</code></pre>
<p>And also my settings.py is debug mode=True:</p>
<pre><code>SECRET_KEY = os.environ.get('SECRET_KEY')

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = os.environ.get('DEBUG') == "True"

ALLOWED_HOSTS = []
ALLOWED_HOSTS_ENV = os.environ.get('ALLOWED_HOSTS')
if ALLOWED_HOSTS_ENV:
    ALLOWED_HOSTS.extend(ALLOWED_HOSTS_ENV.split(','))
</code></pre>
<p>Because it returns the message:</p>
<pre><code> dotenv.read_dotenv()
dwl_backend-app-1 | CommandError: You must set settings.ALLOWED_HOSTS if DEBUG is False.
</code></pre>
<p>And also the templates are not loaded.</p>
<p>Question: how to start docker container with debug=True?</p>
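<p>One thing to note while debugging: with <code>DEBUG=1</code> in the environment, the settings expression <code>os.environ.get('DEBUG') == "True"</code> evaluates to <code>False</code>, because the string <code>"1"</code> is compared literally against <code>"True"</code>. A tolerant parser (a sketch, not Django-specific) accepts the usual truthy spellings:</p>

```python
import os

def env_bool(name, default=False):
    # Treat "1", "true", "yes", "on" (any case) as True; everything else as False.
    value = os.environ.get(name, str(default))
    return value.strip().lower() in {"1", "true", "yes", "on"}

os.environ["DEBUG"] = "1"
print(env_bool("DEBUG"))  # True
```

<p>(Note also that settings.py reads <code>ALLOWED_HOSTS</code> and splits on commas, while the <code>.env</code> file defines <code>DJANGO_ALLOWED_HOSTS</code> with space-separated values.)</p>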
| <python><django><docker><docker-compose> | 2023-08-16 17:55:30 | 1 | 3,991 | mightycode Newton |
76,915,819 | 20,830,264 | Add an acroform to a pdf file with Python | <p>With this Python script I'm able to create a new pdf file called <code>"my_file.pdf"</code> and to add an acroForm editable text box:</p>
<pre class="lang-py prettyprint-override"><code>from reportlab.pdfgen import canvas
from reportlab.lib.units import cm
from reportlab.lib import colors
from reportlab.lib.pagesizes import A4
pdf = canvas.Canvas("my_file.pdf", bottomup=0)
pdf.drawString(100, 100, "blablabla")
x = pdf.acroForm
x.textfield(value = "hello world!", fillColor = colors.yellow, borderColor = colors.black, textColor = colors.red, borderWidth = 2, borderStyle = 'solid', width = 500, height = 50, x = 50, y = 40, tooltip = None, name = None, fontSize = 20)
pdf.save()
</code></pre>
<p>When I open the <code>"my_file.pdf"</code> file with Adobe reader I see this:
<a href="https://i.sstatic.net/Fhi8l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Fhi8l.png" alt="my_file.pdf" /></a></p>
<p>But what I want, is to add the text box in an already existing pdf file called <code>"input.pdf"</code> (see next figure), instead of adding this box in a new pdf file <code>"my_file.pdf"</code>.</p>
<p><a href="https://i.sstatic.net/96FG0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/96FG0.png" alt="input.pdf file" /></a></p>
<p>To give you a hint, I'm already able to add a draw string (a not-editable text) to the existing pdf file called "input.pdf", and I obtain the edited file called "out.pdf" (see next figure):</p>
<pre class="lang-py prettyprint-override"><code>from io import BytesIO
import pikepdf
from reportlab.pdfgen import canvas
import os
from reportlab.lib.units import cm
from reportlab.lib import colors
from reportlab.lib.pagesizes import A4
from PyPDF2 import PdfFileReader, PdfFileWriter
from pypdf import PdfReader
text = "input.pdf"
def generate_stamp(msg, xy):
x, y = xy
buf = BytesIO() # This creates a BytesIO buffer for temporarily storing the generated PDF content.
c = canvas.Canvas(buf, bottomup=0) # This creates a canvas object using the BytesIO buffer. The bottomup=0 argument indicates that the coordinates increase from bottom to top (typical for PDFs).
c.setFontSize(16)
c.setFillColorCMYK(0, 0, 0, 0, alpha=0.7)
# c.rect(194, 5, 117, 17, stroke=1, fill=1)
c.setFillColorCMYK(0, 0, 0, 100, alpha=0.7)
c.drawString(x, y, msg)
c.save()
buf.seek(0)
return buf
stamp = generate_stamp('SOME TEXT STAMP', (300, 100))
# Add the comment to the first page of the pdf file
pdf_orig = pikepdf.open(text)
pdf_text = pikepdf.open(stamp)
formx_text = pdf_orig.copy_foreign(pdf_text.pages[0].as_form_xobject())
formx_page = pdf_orig.pages[0]
formx_name = formx_page.add_resource(formx_text, pikepdf.Name.XObject)
stamp_text = pdf_orig.make_stream(b'q 1 0 0 1 0 0 cm %s Do Q' % formx_name)
pdf_orig.pages[0].contents_add(stamp_text)
pdf_orig.save('./out.pdf')
</code></pre>
<p><a href="https://i.sstatic.net/wQ24a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wQ24a.png" alt="out.pdf file" /></a></p>
<p>I would like to have the same thing for the editable text box.</p>
| <python><pdf><reportlab><adobe-reader> | 2023-08-16 17:28:16 | 2 | 315 | Gregory |
76,915,716 | 11,141,816 | Is sympy evalf() using binary or decimal arithmetic? | <p>I'm working on a numerical evaluation using sympy's evalf(). For background on precision and accuracy, see the mpmath library's introduction at <a href="https://mpmath.org/doc/current/technical.html" rel="nofollow noreferrer">https://mpmath.org/doc/current/technical.html</a>. The sympy documentation at
<a href="https://docs.sympy.org/latest/modules/evalf.html" rel="nofollow noreferrer">https://docs.sympy.org/latest/modules/evalf.html</a>
mentions that</p>
<blockquote>
<p>Exact SymPy expressions can be converted to floating-point approximations (decimal numbers) using either the .evalf() method or the N() function. ...</p>
</blockquote>
<p>It also mentions that</p>
<blockquote>
<p>By default, numerical evaluation is performed to an accuracy of 15 decimal digits.</p>
</blockquote>
<p>"Floating-point approximation" makes it sound like sympy's internal arithmetic is binary. However, for an input value such as <code>value=1.2</code>, I found that I had to wrap it as <code>Float(value)</code> for <code>f_sympy(Float(value))</code> to match the results of mpmath's <code>f_mpmath(value)</code> and <code>f_mpmath(mp.mpf(value))</code>, while <code>f_sympy(value)</code> and <code>f_mpmath(mp.mpf(str(value)))</code> produced somewhat different decimal values (at 200-decimal precision, the deviation showed up around the 20th decimal place).</p>
<p>Is sympy evalf() using binary or decimal arithmetic? Why was <code>Float(value)</code> required for sympy, and why did <code>mp.mpf(str(value))</code> also change the evaluation in mpmath?</p>
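<p>To illustrate the binary-representation point (a generic Python sketch of my own, not tied to sympy's or mpmath's internals): by the time any library receives the Python literal <code>1.2</code>, it is already the nearest binary double, not the exact decimal 1.2, whereas a string input can be parsed exactly.</p>

```python
from decimal import Decimal

# Decimal shows the exact value of its argument:
d_from_float = Decimal(1.2)    # the float literal is already a binary double
d_from_str = Decimal("1.2")    # the string is parsed as an exact decimal
print(d_from_float)  # 1.19999999999999995559... (binary rounding is visible)
print(d_from_str)    # 1.2
```

<p>This is consistent with the deviation appearing far past the 15th digit: the binary double closest to 1.2 differs from exact 1.2 around the 17th significant decimal place.</p>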
| <python><binary><decimal><sympy><mpmath> | 2023-08-16 17:12:49 | 0 | 593 | ShoutOutAndCalculate |
76,915,707 | 13,578,682 | Is it possible to glob for *.yaml and *.yml files in one pattern? | <p>How to find all .yml and .yaml files using a <em>single</em> glob pattern? Desired output:</p>
<pre><code>>>> import os, glob
>>> os.listdir('.')
['f1.yaml', 'f2.yml', 'f0.txt']
>>> glob.glob(pat)
['f1.yaml', 'f2.yml']
</code></pre>
<p>Attempts that don't work:</p>
<pre><code>>>> glob.glob("*.ya?ml")
[]
>>> glob.glob("*.y[a]ml")
['f1.yaml']
</code></pre>
<p>Current workaround is globbing twice, but I want to know if this is possible with a single pattern.</p>
<pre><code>>>> glob.glob('*.yaml') + glob.glob('*.yml')
['f1.yaml', 'f2.yml']
</code></pre>
<p>Not looking for more workarounds, if this is not possible with a single pattern then I'd like to see an answer like "<em>glob can not find <code>.yaml</code> and <code>.yml</code> files with a single pattern because of these reasons...</em>"</p>
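<p>The failing attempts can be reproduced with <code>fnmatch</code>, which implements the matching rules glob uses (my own illustration): <code>?</code> consumes exactly one character, <code>[...]</code> consumes exactly one character from the set, and glob has no alternation operator, so no single pattern can make the <code>a</code> optional.</p>

```python
import fnmatch

# '?' must match exactly one character, so '*.ya?ml' would need a character
# between the 'a' and the 'ml' -- '.yaml' has none:
a = fnmatch.fnmatch("f1.yaml", "*.ya?ml")   # False
# '[a]' matches exactly one 'a'; a character class cannot be made optional:
b = fnmatch.fnmatch("f1.yaml", "*.y[a]ml")  # True
c = fnmatch.fnmatch("f2.yml",  "*.y[a]ml")  # False
print(a, b, c)
```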
| <python><filenames><glob><pathlib><fnmatch> | 2023-08-16 17:10:52 | 3 | 665 | no step on snek |
76,915,629 | 16,299,715 | Converting JSON to XML with namespaces | <p>My JSON data:</p>
<pre><code> {
"Type":"Baggage",
"TotalPrice":"INR10080",
"SupplierCode":"AI",
"CreateDate":"2023-08-16T06:29:51.961+00:00",
"ServiceStatus":"Offered",
"SequenceNumber":"1204",
"ServiceSubCode":"0C2",
"SSRCode":"XBAG",
"IssuanceReason":"C",
"Key":"PNmYlnTqWDKA0ie2FAAAAA==",
"AssessIndicator":"MileageOrCurrency",
"InclusiveOfTax":"true",
"InterlineSettlementAllowed":"false",
"GeographySpecification":"Sector",
"Source":"MCE",
"ViewableOnly":"false",
"TotalWeight":"20KG",
"ProviderCode":"1G",
"Quantity":"1",
"BasePrice":"INR9600",
"ApproximateTotalPrice":"INR10080",
"ApproximateBasePrice":"INR9600",
"Taxes":"INR480",
"IsRepriceRequired":"false",
"common_v52_0:ServiceData":{
"BookingTravelerRef":"PNmYlnTqWDKAvie2FAAAAA==",
"AirSegmentRef":"dnVZlnUqWDKAXC/qFAAAAA==",
"TravelerType":"ADT",
"common_v52_0:CabinClass":{
"Type":"Economy"
}
},
"common_v52_0:ServiceInfo":{
"common_v52_0:Description":"UPTO44LB_20KG_BAGGAGE"
},
"air:TaxInfo":{
"Category":"K3",
"Amount":"INR480",
"Key":"PNmYlnTqWDKA3ie2FAAAAA=="
},
"air:EMD":{
"FulfillmentType":"2",
"AssociatedItem":"Flight",
"RefundReissueIndicator":"NonRefundable",
"Commissionable":"false",
"Booking":"SSR",
"FulfillmentTypeDescription":"Associated_to_a_flight_coupon_of_a_ticket"
},
"air:FeeApplication":{
"Code":"4",
"#text":"Per_travel"
}
}
</code></pre>
<p>And I want XML like this:</p>
<pre><code> <air:OptionalService Type="Baggage" TotalPrice="INR10080" SupplierCode="AI"
CreateDate="2023-08-16T06:29:51.961+00:00" ServiceStatus="Offered"
SequenceNumber="1204" ServiceSubCode="0C2" SSRCode="XBAG" IssuanceReason="C"
Key="PNmYlnTqWDKA0ie2FAAAAA==" AssessIndicator="MileageOrCurrency"
InclusiveOfTax="true" InterlineSettlementAllowed="false"
GeographySpecification="Sector" Source="MCE" ViewableOnly="false"
TotalWeight="20KG" ProviderCode="1G" Quantity="1" BasePrice="INR9600"
ApproximateTotalPrice="INR10080" ApproximateBasePrice="INR9600" Taxes="INR480"
IsRepriceRequired="false">
<common_v52_0:ServiceData BookingTravelerRef="PNmYlnTqWDKAvie2FAAAAA=="
AirSegmentRef="dnVZlnUqWDKAXC/qFAAAAA==" TravelerType="ADT">
<common_v52_0:CabinClass Type="Economy" />
</common_v52_0:ServiceData>
<common_v52_0:ServiceInfo>
<common_v52_0:Description>UPTO44LB 20KG BAGGAGE</common_v52_0:Description>
</common_v52_0:ServiceInfo>
<air:TaxInfo Category="K3" Amount="INR480" Key="PNmYlnTqWDKA3ie2FAAAAA==" />
<air:EMD FulfillmentType="2" AssociatedItem="Flight"
RefundReissueIndicator="NonRefundable" Commissionable="false" Booking="SSR"
FulfillmentTypeDescription="Associated to a flight coupon of a ticket" />
<air:FeeApplication Code="4">Per travel</air:FeeApplication>
</air:OptionalService>
</code></pre>
<p>My code:</p>
<pre><code>import xml.etree.ElementTree as ET

# json_data is the JSON dictionary shown above

def json_to_xml(element, data):
for key, value in data.items():
if isinstance(value, dict):
sub_element = ET.SubElement(element, key)
json_to_xml(sub_element, value)
else:
if ":" in key:
ns_prefix, local_name = key.split(":")
print(ns_prefix, local_name)
sub_element = ET.SubElement(element, "{" + ns_prefix + "}" + local_name)
sub_element.text = value
if key == "#text":
element.text = value
else:
element.set(key, value)
root = ET.Element("air:OptionalService")
json_to_xml(root, json_data)
tree = ET.ElementTree(root)
tree.write("test.xml", encoding="utf-8", xml_declaration=True)
</code></pre>
<p>And here is the output:</p>
<pre><code><?xml version='1.0' encoding='utf-8'?>
<air:OptionalService xmlns:ns0="common_v52_0" Type="Baggage" TotalPrice="INR10080" SupplierCode="AI"
CreateDate="2023-08-16T06:29:51.961+00:00" ServiceStatus="Offered" SequenceNumber="1204"
ServiceSubCode="0C2" SSRCode="XBAG" IssuanceReason="C" Key="PNmYlnTqWDKA0ie2FAAAAA=="
AssessIndicator="MileageOrCurrency" InclusiveOfTax="true" InterlineSettlementAllowed="false"
GeographySpecification="Sector" Source="MCE" ViewableOnly="false" TotalWeight="20KG"
ProviderCode="1G" Quantity="1" BasePrice="INR9600" ApproximateTotalPrice="INR10080"
ApproximateBasePrice="INR9600" Taxes="INR480" IsRepriceRequired="false">
<common_v52_0:ServiceData BookingTravelerRef="PNmYlnTqWDKAvie2FAAAAA=="
AirSegmentRef="dnVZlnUqWDKAXC/qFAAAAA==" TravelerType="ADT">
<common_v52_0:CabinClass Type="Economy" />
</common_v52_0:ServiceData>
<common_v52_0:ServiceInfo common_v52_0:Description="UPTO44LB_20KG_BAGGAGE">
<ns0:Description>UPTO44LB_20KG_BAGGAGE</ns0:Description>
</common_v52_0:ServiceInfo>
<air:TaxInfo Category="K3" Amount="INR480" Key="PNmYlnTqWDKA3ie2FAAAAA==" />
<air:EMD FulfillmentType="2" AssociatedItem="Flight" RefundReissueIndicator="NonRefundable"
Commissionable="false" Booking="SSR"
FulfillmentTypeDescription="Associated_to_a_flight_coupon_of_a_ticket" />
<air:FeeApplication Code="4">Per_travel</air:FeeApplication>
</air:OptionalService>
</code></pre>
<p>All is good, but there are two problems in my output. First, look at the 2nd line: <code>xmlns:ns0="common_v52_0"</code> is not in my JSON. Second, my output contains <code><ns0:Description></code>, which is not right; in my JSON it is <code>"common_v52_0:Description"</code>, not <code>"ns0:Description"</code>.</p>
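<p>A minimal reproduction of where the <code>ns0</code> prefix comes from (my own sketch, not from the question): ElementTree interprets the text inside <code>{...}</code> as a namespace <em>URI</em>, not as a prefix, and invents <code>ns0</code>-style prefixes at serialization time unless a prefix has been registered with <code>ET.register_namespace</code>.</p>

```python
import xml.etree.ElementTree as ET

root = ET.Element("root")
# The braced part is taken to be a namespace URI, not a prefix:
ET.SubElement(root, "{common_v52_0}Description")
out = ET.tostring(root)
# The serialized output contains an invented ns0 prefix bound to the
# "URI" common_v52_0, e.g. ns0:Description with xmlns:ns0="common_v52_0".
print(out.decode())
```

<p>Registering the prefix first, e.g. <code>ET.register_namespace('common_v52_0', 'common_v52_0')</code>, changes which prefix is emitted, but the braced part is still treated as a URI rather than a prefix.</p>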
| <python><json><xml><elementtree><xml-namespaces> | 2023-08-16 16:54:55 | 1 | 641 | BiswajitPaloi |
76,915,447 | 1,200,914 | Use lambda in pandas.replace does not return value, but a lambda function | <p>I want to do a replace in pandas using a regex, and then modify the regex match itself. For example, I want to look for dates in any column of the dataframe that contain a gap and remove it, e.g. <code>02/08/ 2023</code> would become <code>02/08/2023</code> (they always have a gap after the second /). In principle, I don't know the names of the columns which contain these dates, and they are always strings.</p>
<p>The code I am using to replace is the following:</p>
<pre><code>import pandas as pd
import numpy as np
import re
technologies = {
'Courses':["Spark","PySpark","Hadoop",pd.NaT],
'Fee' :[20000,25000,26000,22000],
'Duration':['30day','40days',np.nan, None],
'Discount':[1000,2300,1500,1200],
'Date':["23/3/ 2022","99/5/ 34","7/7/ 2122","6/12/ 2024"],
}
indexes=['r1','r2','r3','r4']
df = pd.DataFrame(technologies,index=indexes)
print(df.replace(r'(\d+/\d+/ \d+)', lambda x: x.group(1).replace(" ", ""), regex=True))
# Courses Fee Duration Discount Date
#r1 Spark 20000 30day 1000 <function <lambda> at 0x148adb9c71f0>
#r2 PySpark 25000 40days 2300 <function <lambda> at 0x148adb9c71f0>
#r3 Hadoop 26000 NaN 1500 <function <lambda> at 0x148adb9c71f0>
#r4 NaT 22000 None 1200 <function <lambda> at 0x148adb9c71f0>
</code></pre>
<p>But this code shows me a lambda function when printing the dataframe. However, if I use a re.sub:</p>
<pre><code>import re
txt = "That will be 59/3/ 3432 dollars"
#Find all digit characters:
x = re.sub(r"(\d+/\d+/ \d+)", lambda x: x.group(1).replace(" ", ""), txt)
print(x)
# That will be 59/3/3432 dollars
</code></pre>
<p>This code shows the correct answer. Why does pandas show me a lambda function instead of the real value? How can I fix it?</p>
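<p>For comparison (my own sketch, not from the question): <code>DataFrame.replace</code> treats the replacement argument as a literal value, so the lambda object itself lands in the matching cells, whereas <code>Series.str.replace</code> forwards a callable replacement to <code>re.sub</code>:</p>

```python
import pandas as pd

s = pd.Series(["23/3/ 2022", "no date here"])
# str.replace passes a callable replacement through to re.sub, so the
# match object is available, unlike with DataFrame.replace:
out = s.str.replace(r"(\d+/\d+/ \d+)", lambda m: m.group(1).replace(" ", ""), regex=True)
print(out.tolist())  # ['23/3/2022', 'no date here']
```

<p>To cover unknown columns, this could be combined with something like <code>select_dtypes(include="object")</code> and applied per column, though that part is beyond this sketch.</p>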
| <python><pandas> | 2023-08-16 16:26:28 | 2 | 3,052 | Learning from masters |
76,915,430 | 22,400,527 | Detecting line inside a rectangle using OpenCV and Python | <p>I have this image: <a href="https://i.sstatic.net/U7Z0e.png" rel="nofollow noreferrer">Rectangles with lines</a>.
I have to assign a number (1 to 4) below each rectangle according to the length of the line
inside it. The shorter the line, the lower the number. I am using OpenCV and Jupyter Notebook. How can I accomplish this?</p>
<p>I tried using Hough line transform and contour detection using OpenCV. But Hough line transform detects way too many lines even in places where there are no lines. This is my first time experimenting with images and OpenCV.</p>
<p>Also, I would be grateful if you could also tell me how to make the rectangles straight along with the straight lines inside them. Although I haven't tried doing this myself yet, it will be helpful.</p>
<p><strong>Edit</strong>
Based on the suggestion by @fmw42, I ended up doing the following.</p>
<pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np
import matplotlib.pyplot as plt
# Load the image
image = cv2.imread('rects.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
# Find contours in the edge-detected image with hierarchical retrieval
contours, hierarchy = cv2.findContours(edges, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_SIMPLE)
# Create a copy of the original image to draw the contours on
contour_image = image.copy()
# Lists to store the lengths of inner contours (lines)
inner_contour_lengths = []
# Loop through the contours and hierarchy
for i, contour in enumerate(contours):
# Get the parent index from the hierarchy
parent_idx = hierarchy[0][i][3]
if parent_idx != -1:
# Inner contour, line
perimeter = cv2.arcLength(contour, True)
inner_contour_lengths.append((perimeter, i)) # Store length and index
# Sort the inner contour lengths in ascending order
inner_contour_lengths.sort()
# Assign numbers to the lines based on their lengths
line_numbers = {length_index[1]: i + 1 for i, length_index in enumerate(inner_contour_lengths)}
# Draw and label the lines for the four contours with lowest lengths
for length, index in inner_contour_lengths[:4]: # Only the first four contours
contour = contours[index]
cv2.drawContours(contour_image, [contour], -1, (0, 255, 0), 2) # Green color
number = line_numbers[index]
cv2.putText(contour_image, str(number), tuple(contour[0][0]), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2) # Red color
plt.imshow(cv2.cvtColor(contour_image, cv2.COLOR_BGR2RGB))
plt.show()
</code></pre>
<p>The result is here: <a href="https://i.sstatic.net/typeg.png" rel="nofollow noreferrer">result image</a>. If there is a better way to do it, please tell me.</p>
| <python><opencv><jupyter-notebook><straight-line-detection> | 2023-08-16 16:24:45 | 0 | 329 | Ashutosh Chapagain |
76,915,413 | 3,482,266 | Type hinting in Python, for the case where there's no single superclass | <p>In sklearn, there are several different scaler classes, such as <code>MinMaxScaler</code> or <code>StandardScaler</code>, which inherit from <code>OneToOneFeatureMixin</code>, <code>TransformerMixin</code>, and <code>BaseEstimator</code>.</p>
<p>My FeatureEngineering class receives a scaler. However, I would like to type hint it, instead of having to specify the scaler class, as is below.</p>
<pre><code>class FeatureEngineering(ABC):
def __init__(self, dataframe:pd.DataFrame, scaler:MinMaxScaler | None = None):
self.dataframe = dataframe
self.scaler = scaler
</code></pre>
<p>I thought of creating a new class, called <code>GeneralScaler</code>, and making it inherit from all the sklearn superclasses, or of using a <code>Union</code> (typing) operator. However, I have a feeling this shouldn't be done...</p>
<ol>
<li>Is there a way to properly type hint for this?</li>
<li>Does Python allow the creation of specific type for this case? How?</li>
</ol>
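<p>One structural-typing option (my own sketch, not from the question; the two-method interface is an assumption — whatever methods FeatureEngineering actually calls on the scaler belong in it) is a <code>typing.Protocol</code>, which matches any object with the right methods rather than requiring a shared superclass:</p>

```python
from typing import Optional, Protocol, runtime_checkable

@runtime_checkable
class ScalerLike(Protocol):
    """Anything with fit/transform qualifies, regardless of its base classes."""
    def fit(self, X): ...
    def transform(self, X): ...

class FeatureEngineering:
    def __init__(self, scaler: Optional[ScalerLike] = None):
        self.scaler = scaler

# A class that merely implements the methods satisfies the protocol:
class DummyScaler:
    def fit(self, X):
        return self
    def transform(self, X):
        return X

ok = isinstance(DummyScaler(), ScalerLike)  # runtime check only sees method names
print(ok)  # True
```

<p>Static checkers such as mypy verify the method signatures; the <code>runtime_checkable</code> isinstance check only verifies that the method names exist.</p>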
| <python><python-typing> | 2023-08-16 16:20:43 | 0 | 1,608 | An old man in the sea. |
76,915,334 | 3,247,006 | Which version of chrome driver does `webdriver.Chrome()` get in Selenium with Python? | <p>I know that <code>webdriver.Chrome()</code> below can get the chrome driver but I do not know which version of chrome driver it gets because <a href="https://selenium-python.readthedocs.io/" rel="nofollow noreferrer">the doc</a> doesn't have any such explanation. *I use <strong>Selenium 4.11.2</strong>:</p>
<pre class="lang-py prettyprint-override"><code>from selenium import webdriver
driver = webdriver.Chrome()
</code></pre>
<p>So, which version of chrome driver does <code>webdriver.Chrome()</code> get? The latest one?</p>
| <python><selenium-webdriver><browser><selenium-chromedriver><version> | 2023-08-16 16:08:15 | 2 | 42,516 | Super Kai - Kazuya Ito |
76,915,007 | 18,221,164 | Incorrect response in Azure Functions Python | <p>I have an Azure function which connects to the Key Vault to access a secret and prints it out. It's on an HTTP trigger, and is as follows:</p>
<pre><code>import logging
import azure.functions as func
from azure.keyvault.secrets import SecretClient
from azure.identity import DefaultAzureCredential
def main(req: func.HttpRequest) -> func.HttpResponse:
logging.info('Python HTTP trigger function processed a request.')
try:
KVUri = f"https://XXXXX.azure.net/"
credential = DefaultAzureCredential()
logging.info('credential info is %s',credential)
#client = SecretClient(vault_url=KVUri, credential=credential)
client = SecretClient(vault_url=KVUri, credential=credential)
password = client.get_secret('TestSecret').value
logging.info('Secret value is %s', password)
return func.HttpResponse("Function executed successfully", status_code=200)
except Exception as e:
return func.HttpResponse(f"An error occurred: {str(e)}", status_code=500)
</code></pre>
<p>When I test this function, I get the following error:</p>
<pre><code>Message: The user, group or application 'appid=XX;oid=XX;iss=XXX' does not have secrets get permission on key vault 'XXXXXX;location=westeurope'. For help resolving this issue, please see https://go.microsoft.com/fwlink/?linkid=2125287
Inner error: {
"code": "AccessDenied"
}
</code></pre>
<p>The secret is present in the Key Vault. However, since this is a failure, I expect the log to show the failure. But the log ends with: <a href="https://i.sstatic.net/ITATi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ITATi.png" alt="Response" /></a></p>
<p>Should it not be a failure here? <a href="https://i.sstatic.net/JjGZw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JjGZw.png" alt="Response2" /></a></p>
<p>Please suggest what am I doing wrong here?</p>
| <python><azure><azure-functions> | 2023-08-16 15:22:44 | 1 | 511 | RCB |
76,914,922 | 534,298 | Cannot pass C array as memoryview in Cython | <p>Here is the example pyx file</p>
<pre><code># yy.pyx
def foo(double[::1] args):
cdef double[3] v = [args[0], args[1], 0]
bar(v)
def bar(double[::1] args):
pass
</code></pre>
<p>and the main (I run in ipython for simplicity)</p>
<pre><code>import numpy as np
import pyximport
pyximport.install()
import yy
yy.foo(np.arange(2.0))
</code></pre>
<p>Then I got the error</p>
<pre><code>In [3]: run main.py
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
File ~/src/cython-numpy/main.py:8
4 pyximport.install()
6 import yy
----> 8 yy.foo(np.arange(2.0))
File ~/src/cython-numpy/yy.pyx:3, in yy.foo()
1 def foo(double[::1] args):
2 cdef double[3] v = [args[0], args[1], 0]
----> 3 bar(v)
4
5 def bar(double[::1] args):
File ~/src/cython-numpy/yy.pyx:5, in yy.bar()
3 bar(v)
4
----> 5 def bar(double[::1] args):
6 pass
7
File stringsource:660, in View.MemoryView.memoryview_cwrapper()
File stringsource:350, in View.MemoryView.memoryview.__cinit__()
TypeError: a bytes-like object is required, not 'list'
</code></pre>
<p>If I use <code>array.array</code> for <code>v</code>, then there is no error. But I don't know why a C array cannot be used here.</p>
| <python><arrays><cython> | 2023-08-16 15:12:34 | 1 | 21,060 | nos |
76,914,910 | 17,160,160 | Dictionary from data frame. Values assigned to tuple keys derived from column, row products | <p>Given a data frame structured as follows:</p>
<pre><code>df = pd.DataFrame({
'DATE' : [1,2,3,4,5],
'Q24' : [23.28, 28.81, 29.32, 29.8, 30.25],
'J24' : [24.22, 24.89, 25.54, 26.15, 26.73],
'F24' : [22.34, 32.73, 33.1, 33.45, 33.77]
})
</code></pre>
<p>I would like to create a dictionary in which all keys are tuples from the Cartesian product of the values in <code>df['DATE']</code> and the elements of <code>df.columns[1:]</code>. I would then like to assign the relevant values from the data frame to those keys.</p>
<p>So far, I have achieved this by creating an empty dictionary of the requisite keys:</p>
<pre><code>import itertools
keys = list(itertools.product(df['DATE'],df.columns[1:]))
dict1 = dict.fromkeys(keys)
</code></pre>
<p>Then creating a list containing a dictionary for each relevant column:</p>
<pre><code>dict2 = df.iloc[:,1:].to_dict('records')
</code></pre>
<p>I've then used a for loop to assign values to keys in <code>dict1</code>:</p>
<pre><code>for x in df['DATE']:
for y in df.columns[1:]:
dict1[x,y] = dict2[x-1][y]
</code></pre>
<p>Which correctly produces the desired output:</p>
<pre><code>
{(1, 'Q24'): 23.28,(1, 'J24'): 24.22,(1, 'F24'): 22.34,
(2, 'Q24'): 28.81,(2, 'J24'): 24.89,(2, 'F24'): 32.73,
(3, 'Q24'): 29.32,(3, 'J24'): 25.54,(3, 'F24'): 33.1,
(4, 'Q24'): 29.8,(4, 'J24'): 26.15,(4, 'F24'): 33.45,
(5, 'Q24'): 30.25,(5, 'J24'): 26.73,(5, 'F24'): 33.77}
</code></pre>
<p>However, this feels like something of an ugly monstrosity of code and I wondered if there was a more elegant means to achieve the same output?</p>
<p>Help and guidance much appreciated!</p>
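<p>For what it's worth, one compact alternative (my own sketch, not from the question) relies on <code>stack()</code>: indexing by DATE and stacking yields a Series whose MultiIndex is exactly the (DATE, column) pairs, and <code>to_dict()</code> turns that MultiIndex into tuple keys directly.</p>

```python
import pandas as pd

df = pd.DataFrame({
    'DATE': [1, 2, 3, 4, 5],
    'Q24': [23.28, 28.81, 29.32, 29.8, 30.25],
    'J24': [24.22, 24.89, 25.54, 26.15, 26.73],
    'F24': [22.34, 32.73, 33.1, 33.45, 33.77]
})

# set_index + stack yields a Series with a (DATE, column) MultiIndex;
# to_dict() then produces tuple keys without any explicit loop.
result = df.set_index('DATE').stack().to_dict()
print(result[(1, 'Q24')])  # 23.28
```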
| <python><pandas> | 2023-08-16 15:10:54 | 1 | 609 | r0bt |
76,914,902 | 7,446,003 | Wagtail custom permissions | <p>I have a wagtail site. I have certain pages that I only want certain users to be able to access. I have used the standard django permissions setup to do this:</p>
<p>models.py</p>
<pre><code>class Case(TimeStampedModel):
case_userdefined_id = models.CharField(
max_length=17, null=False, blank=False, unique=True, default='default'
)
user = models.ForeignKey(
get_user_model(), blank=False, null=True, on_delete=models.SET_NULL)
class Meta:
permissions = (("can_access_case", "Can Access case"),)
</code></pre>
<p>views.py</p>
<pre><code>class CreateCaseView(PermissionRequiredMixin, CreateView):
permission_required = 'Case.can_access_case'
print('test2')
template_name = 'cases/create_case.html'
form_class = CaseForm
success_url = reverse_lazy('patients')
def get_context_data(self, **kwargs):
context = super().get_context_data(**kwargs)
form = CaseForm()
context['form'] = form
return context
</code></pre>
<p>wagtail_hooks.py</p>
<pre><code>from django.contrib.auth.models import Permission
from wagtail import hooks
@hooks.register('register_permissions')
def view_restricted_page():
return Permission.objects.filter(codename="can_access_case")
</code></pre>
<p>Then in the wagtail admin I have created a group 'case_access' and ticked the custom permissions to allow access. I have then made various users members of this group.</p>
<p>However these users still get a '403 forbidden' screen, it is only superusers that can access the relevant pages.</p>
<p>What else do I need to do?</p>
| <python><django><wagtail> | 2023-08-16 15:10:12 | 1 | 422 | RobMcC |
76,914,812 | 10,232,932 | Fill only one previous NaN column value with following value | <p>Let us assume I have the following dataframe df:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([[1, 2, None], [4, None, None], [None, 1, 9]])
df
0 1 2
0 1 2 NaN
1 4 NaN NaN
2 NaN 1 9
</code></pre>
<p>How can I fill row 1 of column 2 with the value from the following row (in this case row 2), and leave the one in row 0 empty, so that only the first preceding NaN gets filled (for column 2)? This should generate the output:</p>
<pre><code> 0 1 2
0 1 2 NaN
1 4 NaN 9
2 NaN 1 9
</code></pre>
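<p>One way to get exactly this (my own sketch, not from the question) is a backward fill limited to a single value per gap: with <code>limit=1</code>, only the NaN closest to the next valid value is filled, so row 1 receives the 9 while row 0 stays NaN.</p>

```python
import pandas as pd

df = pd.DataFrame([[1, 2, None], [4, None, None], [None, 1, 9]])

# Backward-fill only column 2; limit=1 fills at most one NaN per gap,
# starting from the NaN adjacent to the next valid observation.
df[2] = df[2].bfill(limit=1)
print(df)
```

<p>Note that filling only column 2 (rather than <code>df.bfill(limit=1)</code>) is what leaves the NaN in column 1 untouched, matching the desired output.</p>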
| <python><pandas><dataframe> | 2023-08-16 15:00:36 | 2 | 6,338 | PV8 |
76,914,737 | 11,568,176 | Split string with dates into three substrings | <p>I have strings of the following format: "Description of stuffI\n1 31 2019\nPlace of business"</p>
<p>For this string, the code below works perfectly. The problem comes in when there is a digit in the first group, such as</p>
<p>"Description of 2nd stuffI\n1 31 2019\nPlace of business"</p>
<p>or</p>
<p>"1)Description of stuffI\n1 31 2019\nPlace of business"</p>
<p>How do I allow a digit in the first group without messing with the date capture that comes later?</p>
<pre><code>import re

pattern = r"(\D*)(\d.*\d.*\d)\D(.*)$"
example_string = "Description of stuffI\n1 31 2019\nPlace of business"
match = re.match(pattern, example_string)
if match:
print("Group 1:", match.group(1).strip())
print("Group 2:", match.group(2).strip())
print("Group 3:", match.group(3).strip())
else:
print("No match found!")
</code></pre>
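<p>One possibility (my own sketch, assuming the date always sits alone on its own line, as in all three examples): anchor on the newlines instead of the digit/non-digit boundary. Since <code>.</code> does not match <code>\n</code> by default, each group is confined to its own line and digits in the first line no longer interfere with the date capture.</p>

```python
import re

# Each group is bounded by the literal newlines around the date line,
# so a digit anywhere in the description is harmless.
pattern = r"(.*)\n(\d+ \d+ \d+)\n(.*)$"

samples = ["Description of stuffI\n1 31 2019\nPlace of business",
           "1)Description of stuffI\n1 31 2019\nPlace of business"]
parts = [re.match(pattern, s).groups() for s in samples]
print(parts[1])  # ('1)Description of stuffI', '1 31 2019', 'Place of business')
```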
| <python><regex> | 2023-08-16 14:51:35 | 1 | 1,386 | Lars Skaug |