| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,849,095
| 8,372,455
|
How to fix "Please use 'min' instead of 'T'"
|
<p>How can I fix the warning:</p>
<pre><code>FutureWarning: 'T' is deprecated and will be removed in a future version. Please use 'min' instead of 'T'.
if median_diff > pd.Timedelta(freq):
</code></pre>
<p>It comes up every time I run the method below, but I am not using a <code>T</code> anywhere in my code that I can see:</p>
<pre><code>@staticmethod
def apply_rolling_average_if_needed(df, freq="1min", rolling_window="5min"):
""" Apply rolling average if time difference between consecutive
timestamps is not greater than the specified frequency.
"""
print("Warning: If data has a one minute or less sampling frequency a rolling average will be automatically applied")
sys.stdout.flush()
time_diff = df.index.to_series().diff().iloc[1:]
# Calculate median time difference to avoid being affected by outliers
median_diff = time_diff.median()
print(f"Warning: Median time difference between consecutive timestamps is {median_diff}.")
sys.stdout.flush()
if median_diff > pd.Timedelta(freq):
print(f"Warning: Skipping any rolling averaging...")
sys.stdout.flush()
else:
df = df.rolling(rolling_window).mean()
print(f"Warning: A {rolling_window} rolling average has been applied to the data.")
sys.stdout.flush()
return df
</code></pre>
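Since the method itself only passes <code>"1min"</code>, the deprecated <code>'T'</code> most likely comes from some other freq string elsewhere in the codebase (or from the index's stored frequency). One way to pin down the source, without assuming anything about the rest of the code, is to promote <code>FutureWarning</code> to an error so the traceback points at the offending line. The <code>legacy_freq</code> stand-in below is hypothetical, used only to make the sketch self-contained:

```python
import warnings

def run_with_warnings_as_errors(func, *args, **kwargs):
    """Run func with FutureWarning promoted to an error, so the traceback
    points at the exact line that triggers the deprecation message."""
    with warnings.catch_warnings():
        warnings.simplefilter("error", FutureWarning)
        return func(*args, **kwargs)

def legacy_freq():
    # hypothetical stand-in for code that still uses the deprecated 'T'
    warnings.warn("'T' is deprecated ... use 'min' instead of 'T'.", FutureWarning)
    return "1min"

try:
    run_with_warnings_as_errors(legacy_freq)
except FutureWarning as exc:
    print(f"deprecation raised here: {exc}")
```

Running the real `apply_rolling_average_if_needed` under the same filter would raise at the exact line that still uses `'T'`.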
|
<python><pandas><dataframe><timedelta><deprecation-warning>
|
2024-08-08 14:52:05
| 0
| 3,564
|
bbartling
|
78,848,723
| 162,684
|
Non-nullable field in schema does not prevent null values in corresponding table column
|
<p>I have read the <a href="https://arrow.apache.org/docs/python/generated/pyarrow.field.html" rel="nofollow noreferrer">documentation</a> of the <code>field()</code> method, and as far as I understand I can say that a field does not allow NULL values by specifying <code>nullable=False</code>. So, I have tried this example:</p>
<pre class="lang-py prettyprint-override"><code>schema = pyarrow.schema(fields=[
pyarrow.field(name='unique_id', type=pyarrow.uint64(), nullable=False),
pyarrow.field(name='age', type=pyarrow.uint8(), nullable=True),
pyarrow.field(name='favourite_colours', type=pyarrow.list_(pyarrow.string()), nullable=True)
])
data = [
[1, None, 3],
[10, None, 30],
[None, ['red', 'blue'], ['green']]
]
table = pyarrow.table(data=data, schema=schema)
print(table)
</code></pre>
<p>I declare the field <code>unique_id</code> as <code>nullable=False</code>, and therefore I was expecting some sort of error when building the <code>pyarrow.table</code>, where I pass <code>data</code> with a NULL value for <code>unique_id</code> (<code>[1, None, 3]</code> in the code example).</p>
<p>Am I doing something wrong? I'm using pyarrow 17.0.0 with Python 3.8.16</p>
|
<python><pyarrow><apache-arrow>
|
2024-08-08 13:29:18
| 0
| 13,583
|
MarcoS
|
78,848,610
| 4,494,505
|
Why is the LangChain placeholder not being applied?
|
<p>My goal is to insert the <code>language</code> placeholder into the invoke method, but currently the response is always in English. I followed this tutorial: <a href="https://python.langchain.com/v0.2/docs/tutorials/chatbot/" rel="nofollow noreferrer">https://python.langchain.com/v0.2/docs/tutorials/chatbot/</a></p>
<p>Any reason why?</p>
<pre><code># Securely input the OpenAI API key
api_key = os.getenv("OPENAI_API_KEY")
# Initialize the ChatOpenAI model with the API key
model = ChatOpenAI(model="gpt-3.5-turbo", api_key=api_key, verbose=True)
def invoke_model(session_id: str, message: str):
# Define the prompt with ChatPromptTemplate
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful assistant. Answer all questions to the best of your ability in {language}."),
MessagesPlaceholder(variable_name="messages"),
]
)
# Create the chain using the prompt and the model
chain = prompt | model
# Use with_message_history to handle messages
with_message_history = RunnableWithMessageHistory(chain, get_session_history, input_messages_key="messages")
human_message = HumanMessage(content=message)
session_history = get_session_history(session_id)
session_history.add_message(human_message) # Add the human message to the history
config = {"configurable": {"session_id": session_id}}
response = with_message_history.invoke({"messages": [human_message], "language": "Spanish"}, config=config)
assistant_message = AIMessage(content=response.content)
session_history.add_message(assistant_message) # Add the assistant's response to the history
return response.content
</code></pre>
|
<python><py-langchain>
|
2024-08-08 13:09:42
| 1
| 1,628
|
airsoftFreak
|
78,848,572
| 7,920,004
|
Python doesn't see a module added to the same dir
|
<p>I added to my <code>utils</code> dir a new file that contains a <code>dict</code> variable which I want to import into another file.</p>
<p><code>mapping.py</code></p>
<pre><code>config = [
{
"filter_column": "SHOW_NAME",
"horizons": [
3,
4,
5,
6
],
"windows": [
52
],
"metrics": [
"mean",
"std",
"min",
"max",
"quantile_25",
"quantile_75",
"trend"
]
},.....
]
</code></pre>
<p>On deploy, my airflow <code>stage2.py</code> dag throws an error:</p>
<pre><code>ModuleNotFoundError: No module named 'utils'
</code></pre>
<p>The odd thing is that <code>env_config</code> is recognized.</p>
<p>Folder structure</p>
<pre><code>README.rst
LICENSE
...
dags/utils/env_config.py
dags/utils/mapping.py
dags/utils/__init__.py
dags/stage2.py
...
</code></pre>
<p><code>stage2.py</code></p>
<pre><code>import os
import sys
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator
from airflow.utils.task_group import TaskGroup
sys.path.insert(0, os.path.abspath(os.path.dirname(__file__)))
from utils.env_config import EnvironmentConfig # noqa: E402
from utils.mapping import * # returns error
config = EnvironmentConfig(__file__)
</code></pre>
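One way to narrow this down, without Airflow in the loop, is to check what <code>importlib</code> can resolve after the <code>sys.path</code> tweak. The sketch below rebuilds a throwaway <code>dags/utils</code>-style layout in a temp dir (the file names mirror the question, but the layout is a stand-in):

```python
import importlib.util
import os
import sys
import tempfile

# rebuild a throwaway dags/utils layout in a temp dir
root = tempfile.mkdtemp()
pkg = os.path.join(root, "utils")
os.makedirs(pkg)
for name, body in [("__init__.py", ""),
                   ("env_config.py", "X = 1"),
                   ("mapping.py", "config = []")]:
    with open(os.path.join(pkg, name), "w") as fh:
        fh.write(body)

# same idea as sys.path.insert(0, os.path.dirname(__file__)) in stage2.py
sys.path.insert(0, root)

# if either of these is None, the interpreter cannot see the module at all
print(importlib.util.find_spec("utils"))
print(importlib.util.find_spec("utils.mapping"))
```

Running the equivalent check inside the DAG (before the failing import) would show whether MWAA's worker actually has `mapping.py` deployed next to `env_config.py`.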
|
<python><airflow><mwaa>
|
2024-08-08 13:02:20
| 1
| 1,509
|
marcin2x4
|
78,848,438
| 9,047,420
|
Python operator overloading - chaining '>' does not work as expected
|
<p>I'm trying to make a class that can keep track of basic join operations between instances of that class. See the minimal reproducible example below:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations
from typing import Literal
class Block():
jointype_lookup = {'both': '>', 'right': '>>', 'left': '<<'}
def __init__(self, name):
self.name = name
self.join_stack = [self]
self.join_stack_types = []
@classmethod
def _from_join_opertation(cls, block1: Block, block2: Block, join_type: Literal['left', 'right', 'both']):
self = cls.__new__(cls)
self.join_stack = block1.join_stack + block2.join_stack
self.join_stack_types = block1.join_stack_types + [join_type] + block2.join_stack_types
return self
def __gt__(self, other: Block):
# join both
return_graph_obj = Block._from_join_opertation(self, other, 'both')
return return_graph_obj
def __rshift__(self, other: Block):
# join right
return_graph_obj = Block._from_join_opertation(self, other, 'right')
return return_graph_obj
def __lshift__(self, other: Block):
# join left
return_graph_obj = Block._from_join_opertation(self, other, 'left')
return return_graph_obj
def __str__(self):
command = f"{self.join_stack[0].name}"
for i, type in enumerate(self.join_stack_types):
command += f" {self.jointype_lookup[type]} {self.join_stack[i+1].name}"
return command
command = Block('oxygen') >> Block('hydrogen') >> Block('helium') >> Block('carbon')
print(command)
# output: oxygen >> hydrogen >> helium >> carbon
command = Block('oxygen') << Block('hydrogen') << Block('helium') << Block('carbon')
print(command)
# output: oxygen << hydrogen << helium << carbon
command = Block('oxygen') > Block('hydrogen') > Block('helium') > Block('carbon')
print(command)
# output: helium > carbon
</code></pre>
<p>The last example is the problem. I can fix it by surrounding each successive operation in brackets (which is not ideal for my use case), but I'm keen to understand why the comparison operators seem to work fundamentally differently from the other operators, and whether there is a way to change their behaviour.</p>
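For reference, Python expands a chained comparison <code>a &gt; b &gt; c</code> into <code>(a &gt; b) and (b &gt; c)</code>: the <code>(a &gt; b)</code> result is passed to <code>bool()</code> and then discarded, which is why only the last pair survives, while <code>&gt;&gt;</code> and <code>&lt;&lt;</code> are ordinary left-associative binary operators. A small tracing class (hypothetical, not part of the question's code) makes the grouping visible:

```python
class Tracer:
    """Records how Python groups the operators applied to it."""
    def __init__(self, label):
        self.label = label
    def __gt__(self, other):
        return Tracer(f"({self.label} > {other.label})")
    def __rshift__(self, other):
        return Tracer(f"({self.label} >> {other.label})")
    def __bool__(self):
        # Chained comparisons call bool() on each intermediate result:
        # a > b > c  is evaluated as  (a > b) and (b > c)
        print(f"bool() called on {self.label}")
        return True

a, b, c = Tracer("a"), Tracer("b"), Tracer("c")
print((a >> b >> c).label)  # ((a >> b) >> c): left-associative, bool() never called
print((a > b > c).label)    # (b > c): the (a > b) result was consumed by 'and'
```

This chaining is baked into the grammar, so there is no dunder method that changes it; the bracketed form (or a different operator) is the way out.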
|
<python><operator-overloading>
|
2024-08-08 12:31:32
| 2
| 510
|
arneyjfs
|
78,848,271
| 13,000,229
|
How can I get each page of "Page Break Preview" in Excel using Python?
|
<h2>Problem</h2>
<p>I'm trying to get each page of "Page Break Preview". Precisely speaking, I need to know which cells Page 1 contains, which cells Page 2 contains and so on.</p>
<p>Could you teach me how to realize this functionality in Python 3.12? I tried <code>openpyxl</code>, but other libraries are welcome too.</p>
<h2>Example (What I Tried)</h2>
<p>I found that <code>openpyxl</code> has a <code>print_area</code> attribute, but it only returns the whole print area, which contains all pages, so I can't tell which page each cell belongs to.</p>
<p>As you can see from the code and the screenshot, the result is <code>A1:R50</code>, which contains both Page 1 and Page 2.</p>
<p>In this example, the expected result is something like <code>{'Page1': 'A1:I50', 'Page2':'J1:R50'}</code>.</p>
<pre><code>from openpyxl import load_workbook
workbook = load_workbook(filename="./sample_loader_data/sample.xlsx",
read_only=True,
data_only=True,
)
workbook.worksheets[0].print_area # Result: "'Test worksheet'!$A$1:$R$50"
</code></pre>
<p><a href="https://i.sstatic.net/KnMveJsG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KnMveJsG.png" alt="enter image description here" /></a></p>
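As far as I can tell, the .xlsx file only stores <em>manual</em> page breaks (openpyxl exposes them via attributes like <code>ws.row_breaks</code> / <code>ws.col_breaks</code>); the automatic breaks shown in Page Break Preview are computed by Excel at render time and are not in the file. Assuming the break columns are known, splitting the print area into per-page ranges is plain string work; the helper below is a hypothetical sketch:

```python
import re
from string import ascii_uppercase

def col_letter(n):
    """1-based column index -> Excel letters (1 -> 'A', 27 -> 'AA')."""
    letters = ""
    while n:
        n, rem = divmod(n - 1, 26)
        letters = ascii_uppercase[rem] + letters
    return letters

def col_index(letters):
    """Excel letters -> 1-based column index ('A' -> 1, 'R' -> 18)."""
    n = 0
    for ch in letters:
        n = n * 26 + (ord(ch) - ord("A") + 1)
    return n

def split_print_area(area, break_cols):
    """Split e.g. 'A1:R50' at vertical breaks (1-based index of the last
    column on each page) into {'Page1': 'A1:I50', 'Page2': 'J1:R50', ...}."""
    m = re.fullmatch(r"([A-Z]+)(\d+):([A-Z]+)(\d+)", area)
    c0, r0, c1, r1 = col_index(m.group(1)), m.group(2), col_index(m.group(3)), m.group(4)
    edges = [c0 - 1] + sorted(break_cols) + [c1]
    return {
        f"Page{i + 1}": f"{col_letter(edges[i] + 1)}{r0}:{col_letter(edges[i + 1])}{r1}"
        for i in range(len(edges) - 1)
    }

print(split_print_area("A1:R50", [9]))  # {'Page1': 'A1:I50', 'Page2': 'J1:R50'}
```

For automatically paginated sheets, the break positions would have to be reproduced from page setup (paper size, margins, column widths), which openpyxl does not compute.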
|
<python><excel><openpyxl>
|
2024-08-08 11:53:29
| 2
| 1,883
|
dmjy
|
78,848,034
| 1,172,907
|
Call a fixture with arguments
|
<p>I've read the pytest fixture parametrization <a href="https://docs.pytest.org/en/6.2.x/fixture.html#parametrizing-fixtures" rel="nofollow noreferrer">documentation</a> but struggle on the most simple task now.</p>
<p>How can I have <code>f</code> and <code>f1</code> returned by <code>as_file</code>?</p>
<pre class="lang-py prettyprint-override"><code>import pytest
@pytest.fixture
def as_file(arg):
with open(arg) as f:
return f
@pytest.fixture
def f(as_file):
return as_file("f.txt")
@pytest.fixture
def f1(as_file):
return as_file("f1.txt")
def test(f, f1):
assert f.name == "f.txt"
assert f1.name == "f1.txt"
</code></pre>
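A fixture cannot be called with arguments the way the code above attempts; the usual answer is a <em>factory fixture</em>: <code>as_file</code> returns a function, and <code>f</code> / <code>f1</code> call it. The sketch below shows the mechanics as a plain closure so it runs standalone; under pytest the factory body would sit in a <code>@pytest.fixture</code> using <code>tmp_path</code>, with the close loop as teardown after a <code>yield</code>:

```python
import os
import tempfile

def as_file_factory(base_dir):
    """Factory-fixture pattern: returns an opener plus a teardown callable.
    Under pytest, _open would be the fixture's return value and the close
    loop would run after a `yield`."""
    opened = []
    def _open(name):
        path = os.path.join(base_dir, name)
        with open(path, "w") as fh:     # create the file so open() succeeds
            fh.write("")
        f = open(path)
        opened.append(f)
        return f
    def _close_all():
        for f in opened:
            f.close()
    return _open, _close_all

as_file, cleanup = as_file_factory(tempfile.mkdtemp())
f = as_file("f.txt")
f1 = as_file("f1.txt")
print(os.path.basename(f.name), os.path.basename(f1.name))
cleanup()
```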
|
<python><pytest><pytest-fixtures>
|
2024-08-08 11:03:37
| 2
| 605
|
jjk
|
78,848,020
| 143,556
|
Invalid content type. image_url is only supported by certain models
|
<p>That's my Python request:</p>
<pre><code>def analyze_screenshot_base64(encoded_image):
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {openai.api_key}"
}
payload = {
"model": "gpt-4o-mini",
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "Develop a trading setup (LONG or SHORT) with a CRV of at least 5 based on this chart. Include entry price, stop loss, and take profit levels."
},
{
"type": "image_url",
"image": {
"url": f"data:image/png;base64,{encoded_image}"
}
}
]
}
],
"max_tokens": 300
}
response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
return response.json()
</code></pre>
<p>But the API returns me:</p>
<blockquote>
<p>{'error': {'message': 'Invalid content type. image_url is only
supported by certain models.', 'type': 'invalid_request_error',
'param': 'messages.[0].content.[1].type', 'code': None}}</p>
</blockquote>
<p>What could be the problem?</p>
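The likely culprit is the shape of the image content part: the part's <code>type</code> is <code>"image_url"</code>, and the nested key must also be <code>"image_url"</code>, whereas the payload above uses <code>"image"</code>. A sketch of the corrected payload, reusing the question's model and structure:

```python
def build_image_payload(encoded_image, prompt):
    """Chat Completions payload with a correctly shaped image part:
    the nested key must be 'image_url', matching the part's 'type'."""
    return {
        "model": "gpt-4o-mini",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{encoded_image}"},
                },
            ],
        }],
        "max_tokens": 300,
    }

payload = build_image_payload("iVBORw0...", "Describe this chart.")
print(sorted(payload["messages"][0]["content"][1]))  # ['image_url', 'type']
```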
|
<python><openai-api><chatgpt-api>
|
2024-08-08 11:00:34
| 2
| 1,331
|
Ploetzeneder
|
78,847,998
| 4,809,603
|
List to DataFrame with row and column headers
|
<p>I need to convert a list (including headers) to a Dataframe.</p>
<p>If I do it directly using <code>pl.DataFrame(list)</code>, the headers are created and everything is kept as a string. Moreover, the table is transposed, such that the first element in the list becomes the first column in the dataframe.</p>
<p><strong>Input list.</strong></p>
<pre class="lang-py prettyprint-override"><code>[
['Earnings estimate', 'Current qtr. (Jun 2024)', 'Next qtr. (Sep 2024)', 'Current year (2024)', 'Next year (2025)'],
['No. of analysts', '13', '11', '26', '26'],
['Avg. Estimate', '1.52', '1.62', '6.27', '7.23'],
['Low estimate', '1.36', '1.3', '5.02', '5.88'],
['High estimate', '1.61', '1.74', '6.66', '8.56'],
['Year ago EPS', '1.76', '1.36', '5.74', '6.27'],
]
</code></pre>
<p><strong>Expected output.</strong></p>
<p><a href="https://i.sstatic.net/O9iU90H1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O9iU90H1.png" alt="Expected Output" /></a></p>
|
<python><python-polars>
|
2024-08-08 10:54:18
| 3
| 415
|
Rhys
|
78,847,944
| 3,220,554
|
Using XlsxWriter in Python: how to insert a 27-digit number in a cell and display it as text, not as a scientific number
|
<p>I'm writing an Excel file using the XlsxWriter package in Python. Formats are applied to cells and this all works, except for one cell where the assigned value is a 27-digit text string (extracted from some source).</p>
<p>I've read <a href="https://stackoverflow.com/questions/29727612/how-to-apply-format-as-text-and-accounting-using-xlsxwriter">How to apply format as 'Text' and 'Accounting' using xlsxwriter</a>. It suggests to set the number format to '@', but when I try:</p>
<pre><code>WSFMT_TX_REFINFO = wb.add_format({'num_format': '@'
, 'align': 'right'
, 'border_color': WS_TX_BORDERCOLOR
, 'right': WS_TX_BORDERSYTLE
})
</code></pre>
<p>and write a cell with:</p>
<pre><code>refdata = '001022002024080400002400105'
ws.write(wsRow, WS_COL_REF_INFO, refdata, WSFMT_TX_REFINFO)
</code></pre>
<p>The cell is shown as</p>
<pre><code>1.041E+24
</code></pre>
<p>and in the editor field as</p>
<pre><code>1.041002024073E+24
</code></pre>
<p>If I change the format specification from <code>'@'</code> to <code>0</code>, i.e. change</p>
<pre><code>WSFMT_TX_REFINFO = wb.add_format({'num_format': '@'
</code></pre>
<p>to</p>
<pre><code>WSFMT_TX_REFINFO = wb.add_format({'num_format': 0
</code></pre>
<p>the cell is shown as</p>
<pre><code>1022002024080400000000000
</code></pre>
<p>Note that the digits after the 14th are replaced by zeros. In the editor field it shows as</p>
<pre><code>1.0220020240804E+24
</code></pre>
<p>What I need: the number shall be shown as a 27-digit string, exactly as found in <code>refdata</code>.
Note: in some cases <code>refdata</code> may contain alphanumeric strings besides pure 27-digit strings.</p>
<p>Any hint?</p>
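One hedged workaround: <code>write_string()</code> forces the value into the cell as text regardless of any numeric conversion elsewhere (and it also covers the alphanumeric cases). A minimal sketch with placeholder formatting:

```python
import os
import tempfile
import xlsxwriter

path = os.path.join(tempfile.mkdtemp(), "refinfo.xlsx")
wb = xlsxwriter.Workbook(path)
ws = wb.add_worksheet()
fmt = wb.add_format({'num_format': '@', 'align': 'right'})  # placeholder format

refdata = '001022002024080400002400105'
# write_string() bypasses numeric conversion, so all 27 digits survive
# verbatim (Excel may still show a "number stored as text" hint)
ws.write_string(0, 0, refdata, fmt)
wb.close()
print(os.path.getsize(path) > 0)
```

If the value were being converted upstream (e.g. a workbook opened with `strings_to_numbers`, or `refdata` arriving as an actual number), `write_string` would still pin it as text.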
|
<python><xlsxwriter>
|
2024-08-08 10:39:58
| 2
| 2,744
|
phunsoft
|
78,847,819
| 607,846
|
Add two numpy arrays with an offset
|
<p>Let's say I have two arrays:</p>
<pre><code>a = numpy.array([1,2,3,4,5])
b = numpy.array([10,11,12])
</code></pre>
<p>I wish to add these arrays together, but I wish to start at index 3 in the first array, to produce:</p>
<pre><code>numpy.array([1,2,3,14,16,12]).
</code></pre>
<p>So effectively I'm padding <code>a[3:]</code> with an extra 0 to make it the same length as <code>b</code>, adding it to <code>b</code>, and then appending the result to <code>a[:3]</code>.</p>
<p>Can this be done in one step in numpy or do I just implement each of these steps?</p>
<pre><code>def insert(data1, data2, offset):
c = numpy.zeros(len(data2) + offset )
c[:len(data1)] = data1
c[offset:] += data2
return c
</code></pre>
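For reference, the helper can be written as a single allocate-then-add-in-place function that also handles <code>b</code> overhanging the end of <code>a</code>; a sketch:

```python
import numpy as np

def offset_add(a, b, offset):
    """Add b into a starting at `offset`, growing the result if b overhangs."""
    n = max(len(a), offset + len(b))
    c = np.zeros(n, dtype=np.result_type(a, b))
    c[:len(a)] += a
    c[offset:offset + len(b)] += b
    return c

a = np.array([1, 2, 3, 4, 5])
b = np.array([10, 11, 12])
print(offset_add(a, b, 3))  # [ 1  2  3 14 16 12]
```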
|
<python><numpy>
|
2024-08-08 10:14:45
| 3
| 13,283
|
Baz
|
78,847,702
| 10,780,974
|
Setting the legend on top of the lines connecting mark_inset plot
|
<p>I'm using <code>InsetPosition</code> and <code>mark_inset</code> to make a subplot so that I have the lines connecting them. However, I can't get the lines to be on top of the legend in the first plot. Any thoughts on how I can fix this?</p>
<p>I'd also like to get the tick markers on top of the line plots if I can.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from mpl_toolkits.axes_grid1.inset_locator import InsetPosition, mark_inset
plind = 1 # don't want the first plot
### make the main plot
fig, ax = plt.subplots(1, 1, figsize=(3.35,2.5), dpi=150)
### make the first plot
### three bits of data
ax.plot(f[plind:]/1e3, s[plind:], zorder=4, c='C0', label='BES')
ax.plot(ff[plind:]/1e3, ss[plind:], zorder=6, c='C2', label='BES welch, N=2$^3$')
ax.plot(fbeam[plind:]/1e3, sbeam[plind:], lw=0.8, zorder=5, c='C1', label='Beam')
### want this fill between on the right
ax.fill_between([5e2,4e3], [ylim[0],ylim[0]], [ylim[-1],ylim[-1]], color='C3', alpha=0.3, zorder=3)
### set the x and y limits
ax.set_xlim([4e-3, 2e3])
ax.set_ylim([1e-19, 1e-1])
### want it in log scale
ax.set_xscale('log')
ax.set_yscale('log')
### set the ylabel
ax.set_ylabel('Spectra (V$^2$/Hz)', fontsize=9)
### formatting the ticks
ax.tick_params(axis="both", which='both', labelsize=9,
direction='in', left=True, bottom=True, right=True, top=True)
ax.minorticks_on()
### make the legend
leg = ax.legend(handlelength=0, handletextpad=0, fontsize=8, fancybox=1,
framealpha=1, labelcolor='linecolor', loc='lower center').set_zorder(15)
### changing the positions so it fits within my box
X0 = ax.get_position().x0 + 0.07
X1 = ax.get_position().x1 - 0.32
Y0 = ax.get_position().y0 + 0.06
Y1 = ax.get_position().y1 + 0.09
ax.set_position([X0, Y0, X1-X0, Y1-Y0])
### making the inset
ax2 = plt.axes([0, 0, 1, 1])
ip = InsetPosition(ax, [1.05, 0., 1, 1])
ax2.set_axes_locator(ip)
mark_inset(ax, ax2, loc1=2, loc2=3, fc='none', ec='k', lw=0.8, zorder=7)
### plot the data again
ax2.plot(f[plind:]/1e3, s[plind:], zorder=4, c='C0')
ax2.plot(ff[plind:]/1e3, ss[plind:], zorder=6, c='C2')
ax2.plot(fbeam[plind:]/1e3, sbeam[plind:], lw=0.8, zorder=5, c='C1')
### limit it and set the log scale for y only
ax2.set_xlim([2, 7])
ax2.set_ylim([5e-12,2e-5])
ax2.set_yscale('log')
### format the ticks
ax2.yaxis.tick_right()
ax2.tick_params(axis="both", which='both', labelsize=9,
direction='in', left=True, bottom=True, right=True, top=True)
ax2.minorticks_on()
ax2.set_xticks(np.arange(2,7.1,1))
### make a joint axis for the xlabel
ax3=fig.add_subplot(111, frameon=False)
ax3.set_position([ax.get_position().x0, ax.get_position().y0,
ax2.get_position().x1-ax.get_position().x0,
ax.get_position().y1-ax.get_position().y0])
ax3.tick_params(labelcolor='none', which='both', top=False, bottom=False, left=False, right=False)
ax3.set_xlabel('Frequency (kHz)')
</code></pre>
<p><a href="https://i.sstatic.net/e8lSa5Cv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e8lSa5Cv.png" alt="Image of the plot. Notice how the lines connecting the left subplot with the right subplots are above the legend, this is what I want to change. Any thoughts on how to move the markers above the lines would be helpful too." /></a></p>
|
<python><matplotlib><subplot><insets>
|
2024-08-08 09:51:15
| 1
| 374
|
Steven Thomas
|
78,847,208
| 22,400,527
|
What is the better way to create multiple user types in Django? abstract classes or proxy models?
|
<p>I want to create multiple user types in Django. The user types are 'Admin', 'Company' and 'Individual'. Should I use abstract models or proxy models for this requirement?</p>
<p>I have already done this using proxy models. How would it be done using abstract models instead? Is any one of them better than the other? If so, how?</p>
<p>Here is how I have implemented it using proxy models.</p>
<p><code>models.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>class User(AbstractBaseUser, PermissionsMixin):
id = models.AutoField(primary_key=True)
email = models.EmailField(max_length=150, unique=True, null=False, blank=False)
role = models.CharField(max_length=50, choices=Role.choices, null=False, blank=True)
is_staff = models.BooleanField(null=False, default=False)
is_active = models.BooleanField(null=False, default=True)
is_superuser = models.BooleanField(null=False, default=False)
objects = AccountManager()
USERNAME_FIELD = "email"
def __str__(self):
return self.email
def has_perm(self, perm, obj=None):
return self.is_staff
def has_module_perms(self, app_label):
return self.is_superuser
class Admin(User):
objects = AdminManager()
class Meta:
proxy = True
def custom_method_for_admin_only(self):
return something
class AdminProfile(models.Model):
admin = models.OneToOneField(
Admin, on_delete=models.CASCADE, related_name="admin_profile"
)
first_name = models.CharField(max_length=50, null=False, blank=False)
middle_name = models.CharField(max_length=50, null=True, blank=True)
last_name = models.CharField(max_length=50, null=False, blank=False)
def __str__(self):
return f"{self.first_name} {self.last_name}"
class Company(User):
objects = CompanyManager()
class Meta:
proxy = True
def custom_method_for_company_only(self):
return something
class CompanyProfile(models.Model):
company = models.OneToOneField(
Company, on_delete=models.CASCADE, related_name="company_profile"
)
name = models.CharField(max_length=50, null=False, blank=False)
is_verified = models.BooleanField(default=False, null=False, blank=True)
logo = models.ImageField(upload_to="images/", null=True, blank=True)
def __str__(self):
return self.name
class Individual(User):
objects = IndividualManager()
class Meta:
proxy = True
class IndividualProfile(models.Model):
individual = models.OneToOneField(
Individual, on_delete=models.CASCADE, related_name="individual_profile"
)
first_name = models.CharField(max_length=50, null=False, blank=False)
middle_name = models.CharField(max_length=50, null=True, blank=True)
last_name = models.CharField(max_length=50, null=False, blank=False)
def __str__(self):
return f"{self.first_name} {self.last_name}"
</code></pre>
<p>And the managers for the models would be something like this:</p>
<p>managers.py:</p>
<pre class="lang-py prettyprint-override"><code>class Role(models.TextChoices):
ADMIN = "ADMIN", "Admin"
COMPANY = "COMPANY", "Company"
INDIVIDUAL = "INDIVIDUAL", "Individual"
class AccountManager(BaseUserManager):
def create_superuser(
self,
email,
password,
first_name=None,
middle_name=None,
last_name=None,
role=Role.ADMIN,
**other_fields
):
with transaction.atomic():
other_fields.setdefault("is_staff", True)
other_fields.setdefault("is_superuser", True)
other_fields.setdefault("is_active", True)
if not role == Role.ADMIN:
raise ValueError("Superuser must have a role of ADMIN.")
if not email:
raise ValueError("Superuser must have an email.")
if not first_name:
raise ValueError("Superuser must have a first name.")
if not last_name:
raise ValueError("Superuser must have a last name.")
if other_fields.get("is_staff") is not True:
raise ValueError("Superuser must be assigned to is_staff=True")
if other_fields.get("is_superuser") is not True:
raise ValueError("Superuser must be assigned to is_superuser=True")
email = self.normalize_email(email)
superuser = self.model(email=email, role=role, **other_fields)
superuser.set_password(password)
superuser.save()
admin_profile_model = apps.get_model("admin_app", "AdminProfile")
admin_profile_model.objects.create(
user=superuser,
first_name=first_name,
middle_name=middle_name,
last_name=last_name,
)
return superuser
def create(self, email, password, role=None, **other_fields):
if not role:
raise ValueError("A user must have a role.")
if not email:
raise ValueError("A user must have an email.")
email = self.normalize_email(email)
user = self.model(email=email, role=role, **other_fields)
user.set_password(password)
user.save()
return user
class AdminManager(AccountManager):
def create(
self, email, password, first_name, last_name, middle_name=None, **other_fields
):
admin_profile_model = apps.get_model("admin_app", "AdminProfile")
with transaction.atomic():
admin = super().create(
email=email, role=Role.ADMIN, password=password, **other_fields
)
admin_profile_model.objects.create(
admin=admin,
first_name=first_name,
middle_name=middle_name,
last_name=last_name,
)
return admin
def get_queryset(self, *args, **kwargs):
return super().get_queryset(*args, **kwargs).filter(role=Role.ADMIN)
class CompanyManager(AccountManager):
def create(self, email, password, name, **other_fields):
with transaction.atomic():
company_profile_model = apps.get_model("company", "CompanyProfile")
company = super().create(
email=email, role=Role.COMPANY, password=password, **other_fields
)
company_profile_model.objects.create(company=company, name=name)
return company
def get_queryset(self, *args, **kwargs):
return super().get_queryset(*args, **kwargs).filter(role=Role.COMPANY)
class IndividualManager(AccountManager):
def create(
self, email, password, first_name, last_name, middle_name=None, **other_fields
):
individual_profile_model = apps.get_model("account", "IndividualProfile")
with transaction.atomic():
user = super().create(
email=email, role=Role.INDIVIDUAL, password=password, **other_fields
)
individual_profile_model.objects.create(
user=user,
first_name=first_name,
middle_name=middle_name,
last_name=last_name,
)
return user
def get_queryset(self, *args, **kwargs):
return super().get_queryset(*args, **kwargs).filter(role=Role.INDIVIDUAL)
</code></pre>
|
<python><django><django-models>
|
2024-08-08 08:04:20
| 2
| 329
|
Ashutosh Chapagain
|
78,847,160
| 2,545,523
|
Combine several affine transformations get from OpenCV estimateAffinePartial2D
|
<p>I use OpenCV on Android. I use the <code>goodFeaturesToTrack</code> method to find features in a video frame. Then I take the next video frame and find those features in it using <code>calcOpticalFlowPyrLK</code> and estimate the needed transformation using <code>estimateAffinePartial2D</code>. Up to here everything works as I expect.</p>
<p>What I want to achieve is this: given several transformation matrices from <code>estimateAffinePartial2D</code>, I want one matrix I can apply to the original frame to get the last frame, i.e. having m1 (frame_0 -&gt; frame_1), m2 ... mn, I want to calculate Mn for a transformation from frame_0 to frame_n.</p>
<p>The matrix returned by <code>estimateAffinePartial2D</code> is 2x3</p>
<pre><code>[[cos(theta)*s, -sin(theta)*s, tx];
[sin(theta)*s, cos(theta)*s, ty]]
</code></pre>
<p>I tried adding a row of ones and multiplying two matrices but the result matrix doesn't seem to correspond to a transformation combining the two transformations (the translation getting multiplied seems wrong).</p>
<pre><code>>>> arr1 = np.array([[cos(pi / 4.), -sin(pi / 4.), 100], [sin(pi / 4.), cos(pi / 4.), 100], [1, 1, 1]])
>>> arr2 = np.array([[cos(pi / 2.), -sin(pi / 2.), 50], [sin(pi / 2.), cos(pi / 2.), 50], [1, 1, 1]])
>>> arr_result = np.multiply(arr1, arr2)
>>> print(arr_result)
[[4.32978028e-17 7.07106781e-01 5.00000000e+03]
[7.07106781e-01 4.32978028e-17 5.00000000e+03]
[1.00000000e+00 1.00000000e+00 1.00000000e+00]]
</code></pre>
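Two things go wrong above: the appended row must be <code>[0, 0, 1]</code> (not ones), and <code>np.multiply</code> is element-wise, whereas composing affine transforms is the matrix <em>product</em> of their 3x3 homogeneous forms, with the first-applied transform on the right. A sketch:

```python
import numpy as np
from math import cos, sin, pi

def to_homogeneous(m2x3):
    """Lift a 2x3 affine matrix to 3x3 by appending the row [0, 0, 1]."""
    return np.vstack([m2x3, [0.0, 0.0, 1.0]])

m1 = np.array([[cos(pi/4), -sin(pi/4), 100.0],
               [sin(pi/4),  cos(pi/4), 100.0]])  # frame_0 -> frame_1
m2 = np.array([[cos(pi/2), -sin(pi/2),  50.0],
               [sin(pi/2),  cos(pi/2),  50.0]])  # frame_1 -> frame_2

# frame_0 -> frame_2: m1 is applied first, then m2, so compose with the
# matrix product m2 @ m1 (np.multiply is element-wise and mangles the
# translation column, which is exactly the symptom observed above)
M = (to_homogeneous(m2) @ to_homogeneous(m1))[:2]  # back to 2x3 for warpAffine

# sanity check: one combined transform == two sequential transforms
p = np.array([1.0, 2.0, 1.0])                      # homogeneous point
sequential = m2 @ np.append(m1 @ p, 1.0)
print(np.allclose(M @ p, sequential))
```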
|
<python><numpy><opencv><computer-vision><linear-algebra>
|
2024-08-08 07:55:49
| 1
| 1,691
|
cliffroot
|
78,847,145
| 7,164,358
|
PySerial - Accept (write) confirmation prompt from network sensor
|
<p>I'm working on scripting a few common configuration steps for jumping onto network sensors via the console port.
Most of what I've tried is working great; however, I have noticed that if I send a command that requires user confirmation, my write statement isn't working as intended.</p>
<p>As you can see from this putty example</p>
<pre><code># Login sequence cut
hostname# show uptime
uptime "51 days, 20 hours, 36 mins and 2 seconds"
hostname# config terminal
Entering configuration mode terminal
hostname(config)# system contact "John Doe"
hostname(config)# copy running-config startup-config
Are you sure? [no,yes] yes
OK. Saved running-config to startup-config.
hostname(config)# exit
hostname#
</code></pre>
<p>My script works fine until we reach the <code>Are you sure?</code></p>
<pre><code>hostname(config)# copy running-config startup-config
Are you sure?
Aborted: by user
hostname(config)# yes
-----------------------^
syntax error: unknown command
</code></pre>
<p>It almost appears as if a halt command (<code>\r</code> or Ctrl+C) is sent before the <code>yes</code> write is initiated. Does anyone know if prompts like this coming back from a serial device are handled differently?</p>
<pre class="lang-py prettyprint-override"><code>import serial
import time
def read_until_prompt(prompt, timeout=10, user_input=None):
buffer = ""
end_time = time.time() + timeout
while time.time() < end_time:
if ser.in_waiting:
buffer += ser.read(ser.in_waiting).decode("utf-8")
if prompt in buffer:
if user_input is not None:
command = user_input.encode("utf-8") + b"\r\n"
ser.write(command)
print("\n--Current buffer--\n" + buffer)
return buffer
time.sleep(0.1)
raise TimeoutError(f"Did not find prompt: {prompt}")
ser = serial.Serial(
port=com_interface,
baudrate=115200,
bytesize=serial.EIGHTBITS,
parity=serial.PARITY_NONE,
stopbits=serial.STOPBITS_ONE,
timeout=1,
xonxoff=False, # No flow control
rtscts=False, # No flow control
dsrdtr=False, # No flow control
)
try:
STANDARD_PROMT = "#"
CONFIG_PROMT = "(config)#"
# Initial connection sequence
time.sleep(2) # Initial delay, to allow for initialization
ser.write(b"\x03") # reset any activity
read_until_prompt("login:")
ser.write(b"john\r\n") # Username
read_until_prompt("Password:")
ser.write(f"{password}\r\n".encode("utf-8"))
# Workes just fine
read_until_prompt(STANDARD_PROMT, user_input="show uptime")
read_until_prompt(STANDARD_PROMT, user_input="config terminal")
read_until_prompt(CONFIG_PROMT, user_input=f'system contact "John Doe"')
# Save config to startup
read_until_prompt(CONFIG_PROMT, user_input=f'copy running-config startup-config')
# a confirmation promped is shown 'Are you sure? [yes/no]', buffer shows 'Are you sure?'
read_until_prompt("Are you sure?", user_input='yes')
# Additional attempt
ser.write(b'copy running-config startup-config\r\n')
time.sleep(0.2)
ser.write(b'yes\r\n')
time.sleep(15)
read_until_prompt(CONFIG_PROMT, user_input="exit")
finally:
ser.close()
</code></pre>
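To see exactly which bytes go out once the prompt appears, the read loop can be exercised off-hardware against a fake port. The <code>FakeSerial</code> class below is a hypothetical stand-in, and the bare <code>"\r"</code> terminator is an assumption worth testing on the device: some CLIs treat the trailing <code>"\n"</code> of <code>"\r\n"</code> as a second, empty command that aborts the pending confirmation:

```python
import time

class FakeSerial:
    """Hypothetical stand-in for serial.Serial: replays scripted device
    output and records every write for inspection."""
    def __init__(self, script):
        self._rx = bytearray(script)
        self.writes = []
    @property
    def in_waiting(self):
        return len(self._rx)
    def read(self, n):
        chunk, self._rx = bytes(self._rx[:n]), self._rx[n:]
        return chunk
    def write(self, data):
        self.writes.append(bytes(data))

def read_until_prompt(ser, prompt, timeout=2, user_input=None):
    buffer = ""
    end_time = time.time() + timeout
    while time.time() < end_time:
        if ser.in_waiting:
            buffer += ser.read(ser.in_waiting).decode("utf-8")
        if prompt in buffer:
            if user_input is not None:
                # bare "\r": the "\n" of "\r\n" may register as an extra,
                # empty command on some devices, aborting the prompt
                ser.write(user_input.encode("utf-8") + b"\r")
            return buffer
        time.sleep(0.01)
    raise TimeoutError(f"Did not find prompt: {prompt}")

ser = FakeSerial(b"Are you sure? [no,yes] ")
out = read_until_prompt(ser, "Are you sure?", user_input="yes")
print(ser.writes)  # [b'yes\r']
```

Swapping the fake for the real port then isolates whether the problem is the line terminator or the device's prompt handling.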
|
<python><serial-port><pyserial>
|
2024-08-08 07:52:11
| 0
| 1,013
|
Petter Östergren
|
78,846,882
| 10,426,490
|
Gemini Status 429 no matter what
|
<p>I'm unsure of why on earth my Google Gemini requests are returning status 429 resource exhausted. It definitely isn't because my resource is exhausted! This exact code worked a couple weeks ago...ugh.</p>
<p><strong>Specs</strong>:</p>
<ul>
<li>I have a paid account
<ul>
<li><a href="https://i.sstatic.net/82vE7YAT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82vE7YAT.png" alt="enter image description here" /></a></li>
</ul>
</li>
<li>Sending ~1M tokens</li>
<li>Via Python SDK using</li>
</ul>
<pre><code>def start_gemini_chat(api_key, system_message, user_message):
genai.configure(api_key=api_key)
safety_settings = [
{"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
{"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
{"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
{"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"}]
generation_config = {
"temperature": 0,
"max_output_tokens": 8192
}
model = genai.GenerativeModel(
model_name="gemini-1.5-pro-latest",
generation_config=generation_config,
system_instruction=system_message,
safety_settings=safety_settings
)
chat_session = model.start_chat(history=[])
response = chat_session.send_message(user_message)
return response.text
</code></pre>
<ul>
<li><p>The last time I received a 429 was during the initial setup of Gemini.</p>
<ul>
<li>I created a Google Workspace, and thought that was enough to send paid requests to Gemini.</li>
<li>But no. I had to Connect a GCP Billing Account. So I did that.</li>
</ul>
</li>
<li><p>My two week Google Workspace trial has ended, so I activated my account</p>
<ul>
<li><a href="https://i.sstatic.net/AWrMqg8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AWrMqg8J.png" alt="enter image description here" /></a></li>
</ul>
</li>
<li><p>I even locked my Billing Account to my GCP Project. Didn't make a difference.</p>
<ul>
<li><a href="https://i.sstatic.net/A2gNCwb8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A2gNCwb8.png" alt="enter image description here" /></a></li>
</ul>
</li>
<li><p>I recreated the API key and sent it along with the request. No difference...</p>
<ul>
<li>I did notice, when I create a new API key, it says <code>Free of charge</code>...
<ul>
<li><a href="https://i.sstatic.net/bm7lDATU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bm7lDATU.png" alt="enter image description here" /></a></li>
</ul>
</li>
<li>But after the key is created, I refresh the browser now the key says <code>Paid</code>...
<ul>
<li><a href="https://i.sstatic.net/zKL6Hg5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zKL6Hg5n.png" alt="enter image description here" /></a></li>
</ul>
</li>
</ul>
</li>
</ul>
<p>Does anyone have experience overcoming this status code with Gemini?</p>
<hr />
<p><strong>EDIT 1</strong>:</p>
<p>Added some retry logic for <em>typical</em> status 429 codes:</p>
<pre><code>import google.generativeai as genai
from google.generativeai.types import RequestOptions
from google.api_core import retry
#--------------------------------------------------------
def submit_gemini_query(api_key, system_message, user_message):
genai.configure(api_key=api_key)
safety_settings = [
{"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
{"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
{"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
{"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"}]
generation_config = {
"temperature": 0,
"max_output_tokens": 8192
}
model = genai.GenerativeModel(
model_name="gemini-1.5-pro-latest",
generation_config=generation_config,
system_instruction=system_message,
safety_settings=safety_settings
)
chat_session = model.start_chat(history=[])
response = chat_session.send_message(user_message,
request_options=RequestOptions(
retry=retry.Retry(
initial=10,
multiplier=2,
maximum=60,
timeout=300
)
)
)
return response.text
</code></pre>
<ul>
<li>With only this change, the code now consistently returns Status 503 Model Overloaded messages!!
<ul>
<li><a href="https://i.sstatic.net/gw3vizGI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gw3vizGI.png" alt="enter image description here" /></a></li>
</ul>
</li>
<li>Tested it this morning, same response
<ul>
<li><a href="https://i.sstatic.net/V06qNOgt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V06qNOgt.png" alt="enter image description here" /></a></li>
</ul>
</li>
<li>Very frustrating. It's like trying to troubleshoot something, but the clues point you to non-issues.</li>
</ul>
|
<python><google-gemini><http-status-code-429>
|
2024-08-08 06:48:08
| 0
| 2,046
|
ericOnline
|
78,846,694
| 4,277,485
|
Generate single dataframe using 2 dataframe with different number of rows, combining the columns
|
<p>We have 2 dataframes with different numbers of columns and rows, and want to combine both into a single dataframe in <strong>Python 3.7</strong>.</p>
<p>Df1:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;">c1</th>
<th style="text-align: center;">c2</th>
<th style="text-align: right;">c3</th>
<th>c4</th>
<th style="text-align: center;">c5</th>
<th style="text-align: right;">c6</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">abc</td>
<td style="text-align: center;">2024-05-02</td>
<td style="text-align: right;">EOC</td>
<td>12.7</td>
<td style="text-align: center;">xyz</td>
<td style="text-align: right;">3</td>
</tr>
</tbody>
</table></div>
<p>DF2:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;">s1</th>
<th style="text-align: center;">s2</th>
<th style="text-align: right;">s3</th>
<th>s4</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">34.7</td>
<td style="text-align: center;">83.9</td>
<td style="text-align: right;">0</td>
<td>67</td>
</tr>
<tr>
<td style="text-align: left;">35.7</td>
<td style="text-align: center;">73.5</td>
<td style="text-align: right;">0</td>
<td>67</td>
</tr>
<tr>
<td style="text-align: left;">44.7</td>
<td style="text-align: center;">21.4</td>
<td style="text-align: right;">0</td>
<td>67</td>
</tr>
<tr>
<td style="text-align: left;">67.7</td>
<td style="text-align: center;">56.1</td>
<td style="text-align: right;">0</td>
<td>67</td>
</tr>
<tr>
<td style="text-align: left;">90.7</td>
<td style="text-align: center;">79</td>
<td style="text-align: right;">0</td>
<td>67</td>
</tr>
<tr>
<td style="text-align: left;">7</td>
<td style="text-align: center;">21.4</td>
<td style="text-align: right;">0</td>
<td>67</td>
</tr>
<tr>
<td style="text-align: left;">12</td>
<td style="text-align: center;">83.9</td>
<td style="text-align: right;">0</td>
<td>67</td>
</tr>
</tbody>
</table></div>
<p>Result expected:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;">c1</th>
<th style="text-align: center;">c2</th>
<th style="text-align: right;">c3</th>
<th>c4</th>
<th style="text-align: center;">c5</th>
<th style="text-align: right;">c6</th>
<th style="text-align: left;">s1</th>
<th style="text-align: center;">s2</th>
<th style="text-align: right;">s3</th>
<th>s4</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">abc</td>
<td style="text-align: center;">2024-05-02</td>
<td style="text-align: right;">EOC</td>
<td>12.7</td>
<td style="text-align: center;">xyz</td>
<td style="text-align: right;">3</td>
<td style="text-align: left;">34.7</td>
<td style="text-align: center;">83.9</td>
<td style="text-align: right;">0</td>
<td>67</td>
</tr>
<tr>
<td style="text-align: left;">abc</td>
<td style="text-align: center;">2024-05-02</td>
<td style="text-align: right;">EOC</td>
<td>12.7</td>
<td style="text-align: center;">xyz</td>
<td style="text-align: right;">3</td>
<td style="text-align: left;">35.7</td>
<td style="text-align: center;">73.5</td>
<td style="text-align: right;">0</td>
<td>67</td>
</tr>
<tr>
<td style="text-align: left;">abc</td>
<td style="text-align: center;">2024-05-02</td>
<td style="text-align: right;">EOC</td>
<td>12.7</td>
<td style="text-align: center;">xyz</td>
<td style="text-align: right;">3</td>
<td style="text-align: left;">44.7</td>
<td style="text-align: center;">21.4</td>
<td style="text-align: right;">0</td>
<td>67</td>
</tr>
<tr>
<td style="text-align: left;">abc</td>
<td style="text-align: center;">2024-05-02</td>
<td style="text-align: right;">EOC</td>
<td>12.7</td>
<td style="text-align: center;">xyz</td>
<td style="text-align: right;">3</td>
<td style="text-align: left;">67.7</td>
<td style="text-align: center;">56.1</td>
<td style="text-align: right;">0</td>
<td>67</td>
</tr>
<tr>
<td style="text-align: left;">abc</td>
<td style="text-align: center;">2024-05-02</td>
<td style="text-align: right;">EOC</td>
<td>12.7</td>
<td style="text-align: center;">xyz</td>
<td style="text-align: right;">3</td>
<td style="text-align: left;">90.7</td>
<td style="text-align: center;">79</td>
<td style="text-align: right;">0</td>
<td>67</td>
</tr>
<tr>
<td style="text-align: left;">abc</td>
<td style="text-align: center;">2024-05-02</td>
<td style="text-align: right;">EOC</td>
<td>12.7</td>
<td style="text-align: center;">xyz</td>
<td style="text-align: right;">3</td>
<td style="text-align: left;">7</td>
<td style="text-align: center;">21.4</td>
<td style="text-align: right;">0</td>
<td>67</td>
</tr>
<tr>
<td style="text-align: left;">abc</td>
<td style="text-align: center;">2024-05-02</td>
<td style="text-align: right;">EOC</td>
<td>12.7</td>
<td style="text-align: center;">xyz</td>
<td style="text-align: right;">3</td>
<td style="text-align: left;">12</td>
<td style="text-align: center;">83.9</td>
<td style="text-align: right;">0</td>
<td>67</td>
</tr>
</tbody>
</table></div>
<p>I have tried using concat with axis=1</p>
<pre><code>df3 = pd.concat([df1, df2], axis=1)
df3.loc[:, 'c1':'c6'] = df3.loc[:, 'c1':'c6'].ffill()
</code></pre>
<p>This works with Python 3.9+, but how can this be handled in Python 3.7?</p>
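<p>Since df1 has a single row, the expected result above is effectively a cross join. A hedged sketch (column names shortened from the tables above) that avoids version-dependent behavior is to merge on a temporary constant key, which works on the older pandas releases available for Python 3.7:</p>

```python
import pandas as pd

# Cross join via a temporary constant key; this predates
# merge(how='cross') (pandas 1.2+), so it should also work on
# pandas versions installable under Python 3.7.
df1 = pd.DataFrame({'c1': ['abc'], 'c4': [12.7]})
df2 = pd.DataFrame({'s1': [34.7, 35.7, 44.7], 's4': [67, 67, 67]})

df3 = (df1.assign(_key=1)
          .merge(df2.assign(_key=1), on='_key')
          .drop(columns='_key'))
print(df3.shape)  # (3, 4)
```

<p>Every row of df2 pairs with the single df1 row, so no forward-filling step is needed.</p>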
|
<python><pandas><dataframe><concatenation>
|
2024-08-08 05:53:30
| 0
| 438
|
Kavya shree
|
78,846,556
| 2,728,148
|
Algorithm for a substring prefix matching a regular expression (in python)
|
<p>I am writing a parser in Python which operates on a stream. The stream may be unbounded, so I cannot keep it all in memory. As a convenience, I would like to offer a regular expression matcher which matches part of the stream. I believe that means that I need to come up with a regular expression that matches any <em>prefix</em> of a full regular expression match. For purposes of this question, let us assume that all regular expression matches will only match a finite set of characters.* My intent is to simply buffer up the stream in chunks until the regular expression match is complete.</p>
<p>I'm trying to figure out an algorithm to determine whether I need to fetch another chunk or not. If the match failed to find a match because it needs more characters to match or could match more characters, then I need to get another chunk. If it failed because the characters it already has could never match the string, then I should not buffer more data (because I might end up trying to buffer an unbounded amount of data). As an example, consider the regular expression for a pair of identifiers separated by a colon: <code>([0-9A-Za-z]+):([0-9A-Za-z]+)</code> This does not match "Hello", but there's the potential that the next chunk may give it a matching string such as "Hello:World". However, the string "^Hello" cannot possibly match that regular expression because ^ is not part of <code>[0-9A-Za-z]</code>. No additional characters could create a match (thus my parser needs to use a different rule to match the "^" first)</p>
<p>I believe this can be done with the careful application of <code>?</code> to make later parts of the regular expression optional. I'm looking for an algorithm that can transform a regular expression matching a string into a regular expression matching a prefix of any possible matching string.</p>
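<p>As a hedged illustration of that idea (a hand-derived transform of one concrete pattern, not a general algorithm), nesting the later parts in optional groups makes every prefix of a full match itself match:</p>

```python
import re

# Full pattern: ([0-9A-Za-z]+):([0-9A-Za-z]+)
# Hand-derived prefix pattern: each later part wrapped in (...)? so any
# prefix of a complete match also matches from the start of the string.
prefix_re = re.compile(r'[0-9A-Za-z]+(:([0-9A-Za-z]+)?)?')

m1 = prefix_re.fullmatch('Hello')    # could still grow into a full match
m2 = prefix_re.fullmatch('Hello:')   # likewise: buffer another chunk
m3 = prefix_re.match('^Hello')       # fails at '^': no amount of data helps
print(bool(m1), bool(m2), m3)  # True True None
```

<p>The general algorithm would have to apply this "make the tail optional" rewrite recursively through alternations and groups.</p>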
<p>My goal is to do this using only regular expression functionality available in Python. I do not want to code my own regex engine because the built-in one will be far faster than anything I can write in python.</p>
<p>*. In practice, I'll end up setting a limit on how large a regular expression match can get, but I'd rather exclude that detail from this question.</p>
|
<python><regex><algorithm>
|
2024-08-08 04:44:26
| 1
| 11,113
|
Cort Ammon
|
78,846,426
| 5,091,720
|
Pandas parquet file pyarrow.lib.ArrowMemoryError: malloc of size 106255424 failed
|
<p>I am trying to run a Python script in a cPanel terminal. I am getting an error when the script tries to open a 46.65 MB parquet file. This worked on my home computer.</p>
<pre><code>df = pd.read_parquet(file_path + 'CA_s.parquet')
</code></pre>
<p>The error message</p>
<pre><code>Traceback (most recent call last):
File "/home/.../my_script.py", line 60, in <module>
df = pd.read_parquet(file_path + 'CA_s.parquet')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/.../virtualenv/.../python3.11/.../pandas/io/parquet.py", line 667, in read_parquet
return impl.read(
^^^^^^^^^^
File "/home/.../virtualenv/.../python3.11/.../pandas/io/parquet.py", line 281, in read
result = pa_table.to_pandas(**to_pandas_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/array.pxi", line 885, in pyarrow.lib._PandasConvertible.to_pandas
File "pyarrow/table.pxi", line 5002, in pyarrow.lib.Table._to_pandas
File "/home/.../virtualenv/.../python3.11/.../pyarrow/pandas_compat.py", line 784, in table_to_dataframe
result = pa.lib.table_to_blocks(options, table, categories,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/table.pxi", line 3941, in pyarrow.lib.table_to_blocks
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowMemoryError: malloc of size 106255424 failed
</code></pre>
<p>I typed <code>free -m</code> in terminal to get the memory data:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th></th>
<th>total</th>
<th>used</th>
<th>free</th>
<th>shared</th>
<th>buff/cache</th>
<th>available</th>
</tr>
</thead>
<tbody>
<tr>
<td>Mem:</td>
<td>15507</td>
<td>5497</td>
<td>2561</td>
<td>1209</td>
<td>7447</td>
<td>7588</td>
</tr>
<tr>
<td>Swap:</td>
<td>1023</td>
<td>811</td>
<td>212</td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table></div>
<p>Please share any ideas or answers. Thanks.</p>
|
<python><pandas><cpanel><parquet><pyarrow>
|
2024-08-08 03:33:36
| 0
| 2,363
|
Shane S
|
78,846,318
| 5,567,893
|
How can I change the values in the tensor using dictionary?
|
<p>When I have a tensor and a dictionary as below, how can I map the dictionary onto the tensor?<br />
For example,</p>
<pre class="lang-py prettyprint-override"><code>Dict = {1: 'A', 2: 'B', 3: 'C'}
ex = torch.tensor([[1,2,3],[3,2,1]])
# Expected result
#tensor([[A, B, C],
# [C, B, A]])
</code></pre>
<p>I tried this <a href="https://discuss.pytorch.org/t/mapping-values-in-a-tensor/117731" rel="nofollow noreferrer">code</a> and <code>torch.where</code>, but it didn't work well.</p>
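<p>Note that a torch tensor cannot hold Python strings, so the expected <code>tensor([[A, B, C], ...])</code> is not representable as a tensor; a hedged sketch of the mapping therefore lands in a plain nested list:</p>

```python
import torch

d = {1: 'A', 2: 'B', 3: 'C'}
ex = torch.tensor([[1, 2, 3], [3, 2, 1]])

# Map each integer through the dict; strings can't live in a tensor,
# so the result is a nested list (a numpy object array also works).
mapped = [[d[v] for v in row] for row in ex.tolist()]
print(mapped)  # [['A', 'B', 'C'], ['C', 'B', 'A']]
```

<p>If the dictionary values were numeric instead, indexing a lookup tensor by <code>ex</code> would keep everything on-device.</p>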
|
<python><dictionary><pytorch>
|
2024-08-08 02:30:15
| 2
| 466
|
Ssong
|
78,846,281
| 395,857
|
How can I write a POST request for Azure OpenAI that uses structured output?
|
<p>I want to write a POST request for Azure OpenAI that uses structured output.</p>
<p><a href="https://azure.microsoft.com/en-us/blog/announcing-a-new-openai-feature-for-developers-on-azure/" rel="nofollow noreferrer">https://azure.microsoft.com/en-us/blog/announcing-a-new-openai-feature-for-developers-on-azure/</a> says:</p>
<blockquote>
<p>Here’s an example API call to illustrate how to use Structured Outputs:</p>
<pre><code>{
"model": "gpt-4o-2024-08-06",
"prompt": "Generate a customer support response",
"structured_output": {
"schema": {
"type": "object",
"properties": {
"responseText": { "type": "string" },
"intent": { "type": "string" },
"confidenceScore": { "type": "number" },
"timestamp": { "type": "string", "format": "date-time" }
},
"required": ["responseText", "intent", "confidenceScore", "timestamp"]
}
}
}
</code></pre>
</blockquote>
<p>Regrettably, they don't say how to use that API call. How can I write a POST request for Azure OpenAI that uses structured output?</p>
<hr />
<p>I tried to use the <a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#completions" rel="nofollow noreferrer">Completion REST API</a>:</p>
<pre><code>import requests
import json
from datetime import datetime
# Set your Azure OpenAI endpoint and key
endpoint = "https://your-azure-openai-endpoint.com"
api_key = "your-azure-openai-key"
deployment = 'engine-name'
endpoint = f'https://{endpoint}/openai/deployments/{deployment}/completions?api-version=2024-06-01'
# Define the request payload
payload = {
"model": "gpt-4omini-2024-07-18name",
"prompt": "Generate a customer support response",
"structured_output": {
"schema": {
"type": "object",
"properties": {
"responseText": { "type": "string" },
"intent": { "type": "string" },
"confidenceScore": { "type": "number" },
"timestamp": { "type": "string", "format": "date-time" }
},
"required": ["responseText", "intent", "confidenceScore", "timestamp"]
}
}
}
# Send the request
headers = {
"Content-Type": "application/json",
"api-key": f"{api_key}"
}
response = requests.post(endpoint, headers=headers, data=json.dumps(payload))
# Handle the response
if response.status_code == 200:
response_data = response.json()
print(json.dumps(response_data, indent=4))
else:
print(f"Error: {response.status_code}")
print(response.text)
</code></pre>
<p>However, I get the error <code>completion operation does not work with the specified model</code>:</p>
<pre><code>{"error":{"code":"OperationNotSupported","message":"The completion operation does not work with the specified model, gpt-4o-mini. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993."}}
</code></pre>
<p>Replacing the endpoint to the <a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/reference#chat-completions" rel="nofollow noreferrer">chat completions</a> REST API endpoint:</p>
<pre><code>endpoint = f'https://{endpoint}/openai/deployments/{deployment}/chat/completions?api-version=2024-06-01'
</code></pre>
<p>yields the error</p>
<pre><code>Error: 400
Unsupported data type
</code></pre>
<p>GPT-4o-2024-08-06 is not in API yet. Only available in playground according to <a href="https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#early-access-playground-preview" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#early-access-playground-preview</a>:</p>
<blockquote>
<p>Azure customers can test out GPT-4o 2024-08-06 today in the new AI Studio early access playground (preview).</p>
</blockquote>
<p>So I guess I should use another REST API with gpt-4o-mini-20240718, but which one?</p>
|
<python><azure><http-post><azure-openai>
|
2024-08-08 02:04:46
| 2
| 84,585
|
Franck Dernoncourt
|
78,846,128
| 7,290,845
|
DLT - how to get pipeline_id and update_id?
|
<p>I need to insert pipeline_id and update_id in my Delta Live Table (DLT), the point being to know which pipeline created which row. How can I obtain this information?</p>
<p>I know you can get job_id and run_id from widgets but I don't know if these are the same things as pipeline_id and update_id.</p>
<p>I have a solution with event_hooks using Python, but it looks wrong; I do not think it makes sense to listen for events in my DLT pipeline for this purpose.</p>
|
<python><apache-spark><databricks><azure-databricks><delta-live-tables>
|
2024-08-08 00:27:26
| 1
| 1,689
|
Zeruno
|
78,846,004
| 395,857
|
How can I use structured_output with Azure OpenAI with the openai Python library?
|
<p>I want to use structured output with Azure OpenAI.</p>
<p>I tried the following code, based on the code given in <a href="https://openai.com/index/introducing-structured-outputs-in-the-api/" rel="nofollow noreferrer">https://openai.com/index/introducing-structured-outputs-in-the-api/</a>:</p>
<pre><code>from pydantic import BaseModel
from openai import AzureOpenAI
class Step(BaseModel):
explanation: str
output: str
class MathResponse(BaseModel):
steps: list[Step]
final_answer: str
client = AzureOpenAI(api_key='[redacted]',
api_version='2024-05-01-preview',
azure_endpoint='[redacted]')
completion = client.beta.chat.completions.parse(
model="gpt-4omini-2024-07-18-name",
messages=[
{"role": "system", "content": "You are a helpful math tutor."},
{"role": "user", "content": "solve 8x + 31 = 2"},
],
response_format=MathResponse,
)
message = completion.choices[0].message
if message.parsed:
print(message.parsed.steps)
print(message.parsed.final_answer)
else:
print(message.refusal)
</code></pre>
<p>I get the error:</p>
<pre><code>openai.BadRequestError: Error code: 400:
{
"error": {
"message": "Invalid parameter: response_format must be one of json_object, text.",
"type": "invalid_request_error",
"param": "response_format",
"code": "None"
}
}
</code></pre>
<p>How to fix it?</p>
<p>I ran <code>pip install -U openai</code>: I use <code>openai==1.40.1</code> and Python 3.11.</p>
<hr />
<p>I also tried <a href="https://cookbook.openai.com/examples/structured_outputs_intro" rel="nofollow noreferrer">https://cookbook.openai.com/examples/structured_outputs_intro</a> using using Azure+ GPT-4o mini (2024-07-18), it didn't work either, same error message:</p>
<pre><code>from openai import AzureOpenAI
# Replace these variables with your Azure OpenAI endpoint and API key
endpoint = "https://<your-resource-name>.openai.azure.com"
api_key = "<your-api-key>"
deployment_name = "<your-deployment-name>" # Replace with your deployment name
MODEL = deployment_name
# API endpoint for the completion request
api_url = f"{endpoint}/openai/deployments/{deployment_name}/chat/completions?api-version=2024-06-01"
client = AzureOpenAI(api_key='[redacted]',
api_version='2024-07-01-preview',
azure_endpoint='https://[redacted].openai.azure.com/')
math_tutor_prompt = '''
You are a helpful math tutor. You will be provided with a math problem,
and your goal will be to output a step by step solution, along with a final answer.
For each step, just provide the output as an equation use the explanation field to detail the reasoning.
'''
def get_math_solution(question):
response = client.chat.completions.create(
model=MODEL,
messages=[
{
"role": "system",
"content": math_tutor_prompt
},
{
"role": "user",
"content": question
}
],
response_format={
"type": "json_schema",
"json_schema": {
"name": "math_reasoning",
"schema": {
"type": "object",
"properties": {
"steps": {
"type": "array",
"items": {
"type": "object",
"properties": {
"explanation": {"type": "string"},
"output": {"type": "string"}
},
"required": ["explanation", "output"],
"additionalProperties": False
}
},
"final_answer": {"type": "string"}
},
"required": ["steps", "final_answer"],
"additionalProperties": False
},
"strict": True
}
}
)
return response.choices[0].message
# Testing with an example question
question = "how can I solve 8x + 7 = -23"
result = get_math_solution(question)
print(result.content)
</code></pre>
|
<python><nlp><azure-openai><gpt-4>
|
2024-08-07 23:14:19
| 2
| 84,585
|
Franck Dernoncourt
|
78,845,983
| 315,734
|
From Python code, how can I simulate a storage event to a Google Eventarc topic?
|
<p>Setup:</p>
<ol>
<li>Deploy a google cloud run service that simply logs exactly its POST payload</li>
<li>Create an eventarc that triggers when an object is finalized in a storage bucket that calls that cloud run service endpoint</li>
</ol>
<p>That all works normally. Now, I know of two options to simulate an event (other than rewriting files which is not "simulating" and not acceptable for this case):</p>
<ol>
<li>PubSub API</li>
</ol>
<p>It's straightforward to publish a message to the Eventarc trigger's topic. However, whenever I do that, the payload that arrives at the cloud run service is an envelope wrapping a base64 encoded payload. Eventarc is somehow avoiding that envelope because when it triggers, the payload is just the CloudEvents JSON. Is there some setting in the PubSub API to get rid of this envelope or something?</p>
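<p>For reference, a sketch of the envelope described above (field values are illustrative): a direct Pub/Sub publish arrives wrapped, with the event base64-encoded under <code>message.data</code>, which the receiving service has to unwrap itself:</p>

```python
import base64
import json

# Shape of a Pub/Sub push delivery (illustrative values): the actual
# event JSON is base64-encoded inside message.data.
event = {"bucket": "my-bucket", "name": "object.txt"}
envelope = {
    "message": {
        "data": base64.b64encode(json.dumps(event).encode()).decode(),
        "messageId": "123",
    },
    "subscription": "projects/p/subscriptions/s",
}

decoded = json.loads(base64.b64decode(envelope["message"]["data"]))
print(decoded["bucket"])  # my-bucket
```

<p>An Eventarc trigger delivers the bare CloudEvents JSON instead, which is why the two payloads differ.</p>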
<ol start="2">
<li>EventArc Publishing API</li>
</ol>
<p>The problem here is that it is a Google-sourced event, so there is no channel, but the API requires a channel to be specified. I tried various formats for the channel but could find nothing that worked. The one possibility: if I set the channel to "google", I get a <code>403 permission denied on resource project google.</code> If that is supposed to work, exactly what permissions are required? The docs mention <code>eventarc.publisher</code>, which I granted, but perhaps I missed giving it to the correct service account?</p>
|
<python><google-cloud-run><event-arc>
|
2024-08-07 23:03:20
| 1
| 7,079
|
mentics
|
78,845,958
| 1,631,414
|
How to prompt for multiple exclusive groups of arguments from python command line
|
<p>So I'm trying to parse the command line for 2 groups of mutually exclusive arguments.</p>
<ol>
<li>The user can specify a file</li>
<li>The user can specify a server along with a view</li>
</ol>
<p>I'm looking at the argparse.add_mutually_exclusive_group function. However, it doesn't seem like it will do what I need. I want to let someone specify a file or specify a jenkins server with a view. I don't know how to manipulate add_mutually_exclusive_group to either accept a 1 argument group or a 2 argument group. I will save everyone the pain of looking at my other failed trials but I'm getting the feeling either</p>
<ol>
<li>I'm using add_mutually_exclusive_group wrong or</li>
<li>I'm using the wrong function to accomplish what I want.</li>
</ol>
<p>Here's my code, although, I know it's wrong because I'm not using it correctly to make jenkins and view be a group but I don't know what to do to accomplish that.</p>
<pre><code>parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('--file', dest='file', help='Get jobs from file')
group.add_argument('--jenkins', dest='jenkins', help='Get jobs from Jenkins view and separate by section')
group.add_argument('--view', dest='view')
</code></pre>
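<p>One hedged workaround (argparse has no built-in "group vs. group" exclusivity) is to accept all three options and enforce the file-XOR-(jenkins+view) rule manually after parsing:</p>

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--file', help='Get jobs from file')
parser.add_argument('--jenkins', help='Get jobs from a Jenkins server')
parser.add_argument('--view', help='Jenkins view name')
args = parser.parse_args(['--jenkins', 'http://ci.example', '--view', 'main'])

# Enforce: either --file alone, or --jenkins together with --view.
if args.file and (args.jenkins or args.view):
    parser.error('--file cannot be combined with --jenkins/--view')
if bool(args.jenkins) != bool(args.view):
    parser.error('--jenkins and --view must be given together')
if not (args.file or args.jenkins):
    parser.error('specify --file, or --jenkins with --view')
print(args.jenkins, args.view)  # http://ci.example main
```

<p>parser.error() prints the usage line and exits with status 2, matching what argparse itself does for its built-in exclusivity violations.</p>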
|
<python><argparse>
|
2024-08-07 22:52:31
| 1
| 6,100
|
Classified
|
78,845,949
| 1,329,652
|
How to define a typing special form for use with static type checking?
|
<p>I would like to create a typing special form that takes a non-type, say an integer, as an argument, similar to the 2nd argument of <code>typing.Annotated</code>.</p>
<p>For example, I'd like <code>foo[5]</code> to be the same type annotation as <code>Annotated[int, 5]</code>.</p>
<p>There are many ways that don't fail at runtime, but are rejected by the type checker. I'm using mypy 1.11.1 - latest version at this time.</p>
<pre class="lang-python prettyprint-override"><code>import typing
class foo:
def __class_getitem__(cls, param: int):
return typing.Annotated[int, param]
foo5 : foo[5]
</code></pre>
<p>mypy doesn't like it:</p>
<pre><code>"foo" expects no type arguments, but 1 given [type-arg]
Invalid type: try using Literal[5] instead? [valid-type]
</code></pre>
<p>Using <code>_SpecialForm</code>:</p>
<pre class="lang-python prettyprint-override"><code>@typing._SpecialForm
def bar(_self, param : int):
return typing.Annotated[int, param]
bar5 : bar[5]
</code></pre>
<p>Nope:</p>
<pre><code>Too many arguments for "_SpecialForm" [call-arg]
Function "ex_typing_1.bar" is not valid as a type [valid-type]
Perhaps you need "Callable[...]" or a callback protocol?
Invalid type: try using Literal[5] instead? [valid-type]
</code></pre>
<p>Even the parenthesized syntax I don't care much for is no-go:</p>
<pre><code>def baz(param : int):
return typing.Annotated[int, param]
baz5 : baz(5)
</code></pre>
<p>Sez mypy:</p>
<pre><code>Invalid type comment or annotation [valid-type]
Suggestion: use baz[...] instead of baz(...)
</code></pre>
|
<python><python-typing>
|
2024-08-07 22:48:10
| 1
| 99,011
|
Kuba hasn't forgotten Monica
|
78,845,943
| 1,404,208
|
Why is graph in Networkx pointing arrows in the wrong direction?
|
<p>I have a pandas data frame with two columns: source and sink. In a simplified form, there are just 5 users (source column) that can owe money to any other user (sink column).
I thought the following code would show the plot of user 1 owing to user 3 (yes arrow is correct, good), user 2 to 4 (good), 3 to 5 (good), 4 to 1 (wrong), 5 to 2 (wrong). What should I do to get the last two arrows to point in the right direction?</p>
<p><a href="https://i.sstatic.net/oTQUlLrA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTQUlLrA.png" alt="output of the sample code" /></a></p>
<pre><code>df = pd.DataFrame({'source': [1, 2, 3, 4, 5], 'sink': [3, 4, 5, 1, 2]})
G = nx.Graph()
for row in df.iterrows():
print(row[1]['source'], row[1]['sink'])
G.add_edge(row[1]['source'], row[1]['sink'])
pos = nx.spring_layout(G)
nodes = nx.draw_networkx_nodes(G, pos, node_color="orange")
nx.draw_networkx_labels(G, pos)
edges = nx.draw_networkx_edges(
G,
pos,
arrows=True,
arrowstyle="->",
arrowsize=10,
width=2,
)
</code></pre>
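<p>A hedged aside on how edge orientation is stored (assuming directed edges are the intent here): <code>nx.Graph</code> treats <code>(4, 1)</code> and <code>(1, 4)</code> as the same undirected edge, while <code>nx.DiGraph</code> preserves the (source, sink) orientation that <code>draw_networkx_edges</code> renders:</p>

```python
import networkx as nx

# Undirected: the edge has no stored direction.
H = nx.Graph()
H.add_edge(4, 1)
print(H.has_edge(1, 4))  # True — same edge either way

# Directed: (4, 1) is distinct from (1, 4).
G = nx.DiGraph()
G.add_edge(4, 1)
print(G.has_edge(4, 1), G.has_edge(1, 4))  # True False
```
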
|
<python><graph><networkx><permutation>
|
2024-08-07 22:46:05
| 1
| 455
|
alexey
|
78,845,851
| 1,394,336
|
Quickly training multiple small neural networks
|
<p>I want to train multiple models on the same training data (just different initializations). Each model has exactly the same architecture. Crucially, the models are extremely small, so having them all in memory at the same time won't be a problem. How can I make this as efficient as possible?</p>
<p>I read that CUDA is supposed to be non-blocking, so computations should be parallelized automatically. However, when I write the code in a naive way, I can see that the training time scales linearly with the number of models.</p>
<pre><code>import time
import numpy as np
import torch
import torch.nn as nn
class MLP(nn.Module):
def __init__(self, network_size):
super(MLP, self).__init__()
self.fc1 = nn.Linear(2, network_size)
self.fc2 = nn.Linear(network_size, 1)
def forward(self, x):
return torch.sigmoid(self.fc2(self.fc1(x)))
def train(num_networks, network_size, num_iterations):
criterion = torch.nn.BCELoss()
data = torch.zeros((5, 2), device='cuda')
targets = torch.ones((5, 1), device='cuda')
models = []
for _ in range(num_networks):
models.append(MLP(network_size).cuda())
for model in models:
optimizer = torch.optim.Adam(model.parameters())
for _ in range(num_iterations):
output = model(data)
loss = criterion(output, targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
training_start = time.perf_counter()
train(1, 20, 1000)
print(f"Training 1 model took {time.perf_counter() - training_start:.2f}s")
training_start = time.perf_counter()
train(5, 20, 1000)
print(f"Training 5 models took {time.perf_counter() - training_start:.2f}s")
training_start = time.perf_counter()
train(15, 20, 1000)
print(f"Training 15 models took {time.perf_counter() - training_start:.2f}s")
</code></pre>
<p>Output:</p>
<pre><code>Training 1 model took 0.68s
Training 5 models took 3.36s
Training 15 models took 10.18s
</code></pre>
<p>I've been able to optimize this by merging the models into one larger network architecture:</p>
<pre><code>class MergedMLP(nn.Module):
def __init__(self, num_networks, network_size):
super().__init__()
self.fc1 = nn.Linear(2, num_networks * network_size, device='cuda')
self.fc2 = nn.Linear(num_networks * network_size, num_networks, device='cuda')
self.fc2_weight_mask = torch.zeros_like(self.fc2.weight.data, device='cuda', requires_grad=False)
for i in range(num_networks):
self.fc2_weight_mask[i,i*network_size:(i+1)*network_size] = 1
self.fc2.weight.data *= self.fc2_weight_mask
def forward(self, x):
return torch.sigmoid(self.fc2(self.fc1(x)))
def train_merged(num_networks, network_size, num_iterations):
criterion = torch.nn.BCELoss()
data = torch.zeros((5, 2), device='cuda')
targets = torch.ones((5, num_networks), device='cuda')
model = MergedMLP(num_networks, network_size).cuda()
optimizer = torch.optim.Adam(model.parameters())
for _ in range(num_iterations):
output = model(data)
loss = criterion(output, targets)
optimizer.zero_grad()
loss.backward()
model.fc2.weight.grad *= model.fc2_weight_mask
optimizer.step()
training_start = time.perf_counter()
train_merged(15, 20, 1000)
print(f"Training merged models took {time.perf_counter() - training_start:.2f}s")
</code></pre>
<p>Output:</p>
<pre><code>Training merged models took 0.70s
</code></pre>
<p>Is it possible to achieve the runtime of the merged implementation with code more similar to the naive one? The merged implementation seems to be more error prone, e.g. when I increase the network size or want to extract individual networks from the trained merged model.</p>
|
<python><performance><pytorch><neural-network>
|
2024-08-07 22:09:07
| 2
| 2,057
|
Christopher
|
78,845,737
| 14,684,366
|
Convert XML file into dictionary, with ElementTree
|
<p>I have an XML configuration file used by legacy software, which I cannot change or format. The goal is to use Python 3.9 and transform the XML file into a dictionary, using only the <code>xml.etree.ElementTree</code> library.</p>
<p>I was originally looking at this <a href="https://stackoverflow.com/a/68082847/14684366">reply</a>, which produces almost the expected results.</p>
<p><code>Scenario.xml</code> file contents:</p>
<pre class="lang-xml prettyprint-override"><code><Scenario Name="{{ env_name }}">
<Gateways>
<Alpha Host="{{ host.alpha_name }}" Order="1">
<Config>{{ CONFIG_DIR }}/alpha.xml</Config>
<Arguments>-t1 -t2</Arguments>
</Alpha>
<Beta Host="{{ host.beta_name }}" Order="2">
<Config>{{ CONFIG_DIR }}/beta.xml</Config>
<Arguments>-t1</Arguments>
</Beta>
<Gamma Host="{{ host.gamma_name }}" Order="3">
<Config>{{ CONFIG_DIR }}/gamma.xml</Config>
<Arguments>-t2</Arguments>
<!--<Data Count="58" />-->
</Gamma>
</Gateways>
</Scenario>
</code></pre>
<p>Python code to convert XML file to dictionary:</p>
<pre><code>from pprint import pprint
from xml.etree import ElementTree
def format_xml_to_dictionary(element: ElementTree.Element):
'''
Format xml to dictionary
:param element: Tree element
:return: Dictionary formatted result
'''
try:
return {
**element.attrib,
'#text': element.text.strip(),
**{i.tag: format_xml_to_dictionary(i) for i in element}
}
except ElementTree.ParseError as e:
raise e
if __name__ == '__main__':
tree = ElementTree.parse('Scenario.xml').getroot()
scenario = format_xml_to_dictionary(tree)
pprint(scenario)
</code></pre>
<p>Functional code output with <code><!--<Data Count="58" />--></code> commented:</p>
<pre><code>$ python test.py
{'#text': '',
'Gateways': {'#text': '',
'Alpha': {'#text': '',
'Arguments': {'#text': '-t1 -t2'},
'Config': {'#text': '{{ CONFIG_DIR }}/alpha.xml'},
'Host': '{{ host.alpha_name }}',
'Order': '1'},
'Beta': {'#text': '',
'Arguments': {'#text': '-t1'},
'Config': {'#text': '{{ CONFIG_DIR }}/beta.xml'},
'Host': '{{ host.beta_name }}',
'Order': '2'},
'Gamma': {'#text': '',
'Arguments': {'#text': '-t2'},
'Config': {'#text': '{{ CONFIG_DIR }}/gamma.xml'},
'Host': '{{ host.gamma_name }}',
'Order': '3'}},
'Name': '{{ env_name }}'}
</code></pre>
<p>I'm trying to address two issues:</p>
<ol>
<li><code>Scenario</code> is missing from the dictionary keys, because the root node is already the <code>Scenario</code> tag; I'm not sure what I need to do in order to make it part of the dictionary</li>
<li>If I uncomment <code><Data Count="58" /></code>, I get the following error:</li>
</ol>
<pre><code>AttributeError: 'NoneType' object has no attribute 'strip'
</code></pre>
<p>I'm not sure what type of if/else condition I need to implement. I tried the following, but it sets all <code>#text</code> values to <code>''</code> instead of stripping them:</p>
<pre><code>'#text': element.text.strip() if isinstance(
element.text, ElementTree.Element
) else '',
</code></pre>
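For reference, both issues can be addressed with a <code>None</code> check on <code>element.text</code> (it is <code>None</code> for self-closing tags such as <code><Data Count="58" /></code>) and by wrapping the result under the root tag. A minimal self-contained sketch using only <code>xml.etree.ElementTree</code>, with a shortened version of the Scenario XML inlined:

```python
from xml.etree import ElementTree

def format_xml_to_dictionary(element: ElementTree.Element) -> dict:
    # element.text is None for self-closing tags, so guard before stripping.
    text = element.text.strip() if element.text else ''
    return {
        **element.attrib,
        '#text': text,
        **{child.tag: format_xml_to_dictionary(child) for child in element},
    }

xml_source = '''<Scenario Name="{{ env_name }}">
  <Gateways>
    <Gamma Host="{{ host.gamma_name }}" Order="3">
      <Config>{{ CONFIG_DIR }}/gamma.xml</Config>
      <Data Count="58" />
    </Gamma>
  </Gateways>
</Scenario>'''

root = ElementTree.fromstring(xml_source)
# Wrap under the root tag so "Scenario" appears as a key.
scenario = {root.tag: format_xml_to_dictionary(root)}
print(scenario['Scenario']['Gateways']['Gamma']['Data'])  # {'Count': '58', '#text': ''}
```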
|
<python><xml><dictionary>
|
2024-08-07 21:27:05
| 1
| 591
|
Floren
|
78,845,456
| 8,382,067
|
Trying to update column in GridDB but can't figure out why it is not updating
|
<p>I'm trying to update data in a GridDB container using the Python client. I've set up my environment and written the following code to perform the update, but the update operation does not seem to be working: the data stays the same.</p>
<p>Any ideas on what I may have done wrong here? Here is the documentation I was reading: <a href="https://docs.griddb.net/gettingstarted/python/" rel="nofollow noreferrer">https://docs.griddb.net/gettingstarted/python/</a></p>
<pre><code>import griddb_python as griddb
factory = griddb.StoreFactory.get_instance()
try:
store = factory.get_store(
notification_member="239.0.0.1",
notification_port=31999,
cluster_name="myCluster",
username="admin",
password="admin"
)
container = store.get_container("myContainer")
query = container.query("SELECT * FROM myContainer WHERE id = 1")
rs = query.fetch()
if rs.has_next():
data = rs.next()
data[2] = 35 # Update the age field
# Put the updated row back into the container
container.put(data)
store.commit()
except griddb.GSException as e:
for i in range(e.get_error_stack_size()):
print("[", i, "]")
print(e.get_error_code(i))
print(e.get_message(i))
</code></pre>
|
<python><sql><griddb>
|
2024-08-07 19:54:16
| 0
| 2,099
|
Josh Adams
|
78,845,450
| 9,935,280
|
ImportError: cannot import name 'RSAAlgorithm' from 'jwt.algorithms'
|
<p>The PyJWT <code>RSAAlgorithm</code> class fails to import, even though I do have <code>PyJWT</code> installed.</p>
<p>Error:</p>
<pre><code>ImportError: cannot import name 'RSAAlgorithm' from 'jwt.algorithms'
</code></pre>
<p>I checked if the package is available by running this command:</p>
<pre class="lang-bash prettyprint-override"><code>poetry show|grep -i pyjwt
</code></pre>
<pre><code>pyjwt 2.9.0 JSON Web Token implementation in...
</code></pre>
|
<python><django><algorithm><jwt>
|
2024-08-07 19:53:10
| 1
| 524
|
Jonny
|
78,845,357
| 3,486,684
|
Python + Polars: how can I convert the values of a column with type `enum` into their integer ("physical") representation?
|
<p>The polars <a href="https://docs.pola.rs/user-guide/concepts/data-types/categoricals/#enum-type" rel="nofollow noreferrer">user guide</a> suggests that enums have a physical, integer representation.</p>
<p>Is it possible to access the integers associated with an enum value? For example, is there a nicer way to get the integer representation in the following example?</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import polars as pl
np.random.seed(556)
enum_vals = [
"".join([chr(c_code) for c_code in np.random.randint(97, 123, 3)])
for n in range(10)
]
enum_dtype = pl.Enum(pl.Series(enum_vals))
(
pl.Series(
"enum_vals",
[enum_vals[x] for x in np.random.randint(0, len(enum_vals), 5)],
dtype=enum_dtype,
)
.to_frame()
.with_columns(
enum_repr=pl.col("enum_vals").map_elements(
lambda x: enum_vals.index(x), return_dtype=pl.Int64()
)
)
)
</code></pre>
<pre class="lang-py prettyprint-override"><code>shape: (5, 2)
┌───────────┬───────────┐
│ enum_vals ┆ enum_repr │
│ --- ┆ --- │
│ enum ┆ i64 │
╞═══════════╪═══════════╡
│ loo ┆ 8 │
│ sby ┆ 5 │
│ cqm ┆ 3 │
│ cbn ┆ 2 │
│ vtk ┆ 9 │
└───────────┴───────────┘
</code></pre>
|
<python><python-polars>
|
2024-08-07 19:21:14
| 2
| 4,654
|
bzm3r
|
78,845,316
| 19,838,445
|
How to configure Python ssl server ciphers to match client's
|
<p>I'm trying to run this simple socket ssl server in python</p>
<pre class="lang-py prettyprint-override"><code>import socket, ssl
context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
bindsocket = socket.socket()
bindsocket.bind(("", 443))
bindsocket.listen(5)
while True:
newsocket, fromaddr = bindsocket.accept()
connstream = context.wrap_socket(newsocket, server_side=True)
try:
data = connstream.recv(1024)
if not data:
break
finally:
connstream.shutdown(socket.SHUT_RDWR)
connstream.close()
</code></pre>
<p>but when connecting with client</p>
<pre class="lang-bash prettyprint-override"><code>curl -v https://localhost:443/
</code></pre>
<p>I'm getting this error</p>
<pre><code>Traceback (most recent call last):
File "/Users/example/server_ssl.py", line 15, in <module>
connstream = context.wrap_socket(newsocket, server_side=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/example/.pyenv/versions/3.11.9/lib/python3.11/ssl.py", line 517, in wrap_socket
return self.sslsocket_class._create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/example/.pyenv/versions/3.11.9/lib/python3.11/ssl.py", line 1104, in _create
self.do_handshake()
File "/Users/example/.pyenv/versions/3.11.9/lib/python3.11/ssl.py", line 1382, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: NO_SHARED_CIPHER] no shared cipher (_ssl.c:1006)
</code></pre>
<p>curl itself shows this error</p>
<pre><code>* Trying 127.0.0.1:443...
* Connected to localhost (127.0.0.1) port 443
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CAfile: /etc/ssl/cert.pem
* CApath: none
* TLSv1.3 (IN), TLS alert, handshake failure (552):
* OpenSSL/3.3.1: error:0A000410:SSL routines::ssl/tls alert handshake failure
* closing connection #0
curl: (35) OpenSSL/3.3.1: error:0A000410:SSL routines::ssl/tls alert handshake failure
</code></pre>
<p>How can I make sure both client and server use the same ciphers?
I've tried setting different options on the context, but I still get the same error.</p>
<pre><code># none of these works
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE
context.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20:DHE+AESGCM:DHE+CHACHA20")
</code></pre>
<p>At this point I don't want to provide any certificates as <a href="https://stackoverflow.com/a/5935252/19838445">suggested here</a>, how can I make this code work?</p>
|
<python><ssl><curl><https><ssl-certificate>
|
2024-08-07 19:04:32
| 0
| 720
|
GopherM
|
78,845,297
| 3,486,684
|
How do define custom auto-imports for Pylance/Visual Studio Code?
|
<p>When I type out something like <code>np.</code> I think this triggers Visual Studio Code + Pylance's (not sure) auto-import completion by suggesting that <code>import numpy as np</code> might be relevant.</p>
<p>I would like to create similar custom auto-import/complete associations. For example: between <code>pl</code> and <code>polars</code>, so that if I type something like <code>pl.</code> then <code>import polars as pl</code> is given as an auto-import suggestion.</p>
<p>How can I do this? Is this specific to the Pylance extension I am using, or something about Visual Studio Code?</p>
<p>Please note that auto-import/import-completion is <em>very different</em> from custom code snippets, as covered in <a href="https://stackoverflow.com/questions/29995863/how-to-add-custom-code-snippets-in-vscode">How to add custom code snippets in VSCode?</a> The reason for this being:</p>
<ul>
<li>VS Code adds a new import statement (and therefore has to figure out if it is possible to resolve that import) at the top of the file. It does <em>not</em> add code where the cursor is, which is what a snippet would do.</li>
<li>This functionality relies on a language server of some sort (hence my suspicion it is Pylance that is providing this functionality) both to resolve the import, and to insert the import statement at the appropriate location in the file.</li>
</ul>
|
<python><visual-studio-code><pylance>
|
2024-08-07 18:58:21
| 2
| 4,654
|
bzm3r
|
78,845,218
| 2,006,674
|
FastAPI TestClient not starting lifetime in test
|
<p>Example code:</p>
<pre><code>import os
import asyncio
from contextlib import asynccontextmanager
from fastapi import FastAPI, Request
@asynccontextmanager
async def lifespan(app: FastAPI):
print(f'Lifetime ON {os.getpid()=}')
app.state.global_rw = 0
_ = asyncio.create_task(infinite_1(app.state), name='my_task')
yield
app = FastAPI(lifespan=lifespan)
@app.get("/state/")
async def inc(request: Request):
return {'rw': request.app.state.global_rw}
async def infinite_1(app_rw_state):
print('infinite_1 ON')
while True:
app_rw_state.global_rw += 1
print(f'infinite_1 {app_rw_state.global_rw=}')
await asyncio.sleep(10)
</code></pre>
<p>This is all working fine, every 10 seconds <code>app.state.global_rw</code> is increased by one.</p>
<p>Test code:</p>
<pre><code>from fastapi.testclient import TestClient
def test_all():
from a_10_code import app
client = TestClient(app)
response = client.get("/state/")
assert response.status_code == 200
assert response.json() == {'rw': 0}
</code></pre>
<p>The problem I have found is that <code>TestClient(app)</code> will not start <code>async def lifespan(app: FastAPI):</code>.<br />
Started with <code>pytest -s a_10_test.py</code></p>
<p>So, how to start lifespan in FastAPI TestClient ?</p>
<p>P.S. my real code is more complex, this is just simple example for demonstration purposes.</p>
|
<python><asynchronous><pytest><python-asyncio><fastapi>
|
2024-08-07 18:36:28
| 2
| 7,392
|
WebOrCode
|
78,845,156
| 1,028,270
|
Ruamel thinks this list is a multi-document file
|
<p>This is my valid YAML file:</p>
<pre class="lang-yaml prettyprint-override"><code># my_yaml.yaml
- aaa: zzzzz
bbbb: 'lksdjflksjdflksdj'
xxxx:
qqqq:
- sldflsdkjflks
fffff: []
- aaa: zzzzz
bbbb: 'lksdjflksjdflksdj'
xxxx:
qqqq:
- sldflsdkjflks
fffff: []
- aaa: zzzzz
bbbb: 'lksdjflksjdflksdj'
xxxx:
qqqq:
- sldflsdkjflks
fffff: []
</code></pre>
<p>But this is what <code>ruamel.yaml</code> generates:</p>
<pre class="lang-python prettyprint-override"><code>from ruamel.yaml import YAML
yaml = YAML()
yaml.preserve_quotes = True
yaml.default_flow_style = None
yaml.explicit_start = False
formatted_config = yaml.load(Path("my_yaml.yaml").open().read())
yaml.dump_all(formatted_config, Path("/tmp/derps.yaml"))
</code></pre>
<p><code>/tmp/derps.yaml</code> looks like:</p>
<pre class="lang-yaml prettyprint-override"><code>aaa: zzzzz
bbbb: 'lksdjflksjdflksdj'
xxxx:
qqqq:
- sldflsdkjflks
fffff: []
---
aaa: zzzzz
bbbb: 'lksdjflksjdflksdj'
xxxx:
qqqq:
- sldflsdkjflks
fffff: []
---
aaa: zzzzz
bbbb: 'lksdjflksjdflksdj'
xxxx:
qqqq:
- sldflsdkjflks
fffff: []
</code></pre>
<p>I don't understand why it's inserting <code>---</code> for each list item. All valid YAML parsers just see this as a list.</p>
<p>How can I make ruamel process this correctly?</p>
|
<python><ruamel.yaml>
|
2024-08-07 18:16:02
| 1
| 32,280
|
red888
|
78,845,128
| 7,084,129
|
pytest-randomly gives the same "random" result for every test
|
<p>I'm trying to figure out how to use <code>pytest-randomly</code> properly, but the problem is I always get the same set of random numbers for every test. All I did was install <code>pytest-randomly</code> (3.15.0) and run tests normally.</p>
<p>Here's an example code:</p>
<pre><code>import random
def generate_random_numbers(size):
return [random.randint(1, 100) for _ in range(size)]
def test_random_numbers_size_1():
numbers = generate_random_numbers(10)
print(f"test_random_numbers_size 1.1: Generated numbers = {numbers}")
numbers = generate_random_numbers(10)
print(f"test_random_numbers_size 1.2: Generated numbers = {numbers}")
assert len(numbers) == 10
def test_random_numbers_size_2():
numbers = generate_random_numbers(10)
print(f"test_random_numbers_size 2.1: Generated numbers = {numbers}")
numbers = generate_random_numbers(10)
print(f"test_random_numbers_size 2.2: Generated numbers = {numbers}")
assert len(numbers) == 10
</code></pre>
<p>The result:</p>
<pre><code>...
Using --randomly-seed=2523483435
test_example.py::test_random_numbers_size_2 PASSED [ 50%]
test_random_numbers_size 2.1: Generated numbers = [26, 52, 25, 95, 2, 69, 11, 94, 74, 48]
test_random_numbers_size 2.2: Generated numbers = [72, 41, 6, 27, 60, 28, 54, 48, 100, 76]
test_example.py::test_random_numbers_size_1 PASSED [100%]
test_random_numbers_size 1.1: Generated numbers = [26, 52, 25, 95, 2, 69, 11, 94, 74, 48]
test_random_numbers_size 1.2: Generated numbers = [72, 41, 6, 27, 60, 28, 54, 48, 100, 76]
</code></pre>
<p>As you can see, the numbers are repeated for each test method. Shouldn't they be different? Is it possible to change the <code>pytest-randomly</code> seed for each test method?
I've tried some fixtures but nothing worked.</p>
<p>The <a href="https://pypi.org/project/pytest-randomly/#description" rel="nofollow noreferrer">docs</a> say:</p>
<blockquote>
<ul>
<li>Resets the global <code>random.seed()</code> at the start of every test case and test to a fixed number - this defaults to <code>time.time()</code> from the
start of your test run, but you can pass in <code>--randomly-seed</code> to
repeat a randomness-induced failure.</li>
</ul>
</blockquote>
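That quoted behavior explains the output: the seed is reset to the <em>same</em> run-wide value at the start of every test, so identical draws across tests are expected (this is what makes failures reproducible). If genuinely different streams per test are wanted, one option is to derive a per-test seed from the run seed plus the test's name. A stdlib sketch of the idea (in a real suite this would live in an autouse fixture reading <code>request.node.nodeid</code>; <code>RUN_SEED</code> here is just the number pytest-randomly printed):

```python
import hashlib
import random

RUN_SEED = 2523483435  # the run-wide seed pytest-randomly reports

def seed_for_test(test_name: str) -> int:
    # Stable per-test seed: mix the run seed with the test's name.
    digest = hashlib.sha256(f"{RUN_SEED}:{test_name}".encode()).hexdigest()
    return int(digest, 16) % (2**32)

def draws(test_name: str) -> list:
    # Reseed per "test", then draw as the test functions above do.
    random.seed(seed_for_test(test_name))
    return [random.randint(1, 100) for _ in range(5)]

a = draws("test_random_numbers_size_1")
b = draws("test_random_numbers_size_2")
print(a, b)
```

Different test names give different streams, while re-running the same test name reproduces its draws.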
|
<python><pytest>
|
2024-08-07 18:06:14
| 1
| 401
|
Duje
|
78,844,952
| 3,412,205
|
How to type hint the Apache Beam default DoFn.TimestampParam
|
<p>I'm struggling to annotate an extra parameter of a DoFn <code>process</code> method, specifically <a href="https://beam.apache.org/releases/pydoc/current/apache_beam.transforms.core.html#apache_beam.transforms.core.DoFn.process" rel="nofollow noreferrer">a timestamp parameter</a>.</p>
<p>Minimal example:</p>
<pre><code>import apache_beam as beam
from apache_beam.transforms.window import TimestampedValue
from apache_beam.utils.timestamp import TimestampTypes
class Do(beam.DoFn):
def process(
self,
element: int,
timestamp: TimestampTypes = beam.DoFn.TimestampParam,
) -> Iterable[TimestampedValue[int]]:
yield TimestampedValue(element, timestamp)
</code></pre>
<p>Note: <code>TimestampTypes</code> has a type of <code>Union[int, float, Timestamp]</code></p>
<p>This results in mypy stating that the parameter type is incorrect:</p>
<blockquote>
<p>Incompatible default for argument "timestamp" (default has type "_DoFnParam", argument has type "Union[int, float, Timestamp]")</p>
</blockquote>
<p>However, if I annotate the parameter as indicated, the resulting <code>timestamp</code> type is then incorrect:</p>
<pre><code>import apache_beam as beam
from apache_beam.transforms.core import _DoFnParam
from apache_beam.transforms.window import TimestampedValue
class Do(beam.DoFn):
def process(
self,
element: int,
timestamp: _DoFnParam = beam.DoFn.TimestampParam,
) -> Iterable[TimestampedValue[int]]:
yield TimestampedValue(element, timestamp)
</code></pre>
<blockquote>
<p>Argument 2 to "TimestampedValue" has incompatible type "_DoFnParam"; expected "Union[int, float, Timestamp]"</p>
</blockquote>
<p>Has anyone resolved this discrepancy successfully, or is this a limitation of type hinting in Beam that I should ignore checking for now?</p>
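One common workaround for this sentinel-default pattern (independent of Beam's own typing) is to <code>cast</code> the sentinel to the declared parameter type, which keeps both the signature and the call sites type-correct. A generic stdlib sketch of the idea with hypothetical names; applied to Beam it would mean casting <code>beam.DoFn.TimestampParam</code> to <code>TimestampTypes</code>:

```python
from typing import Union, cast

class _Param:
    """Sentinel the framework replaces with a real value at call time."""

_TIMESTAMP_PARAM = _Param()

TimestampTypes = Union[int, float]

def process(
    element: int,
    # The default is really a _Param sentinel, but the annotation promises
    # TimestampTypes; cast() reconciles the two for the type checker.
    timestamp: TimestampTypes = cast(TimestampTypes, _TIMESTAMP_PARAM),
) -> TimestampTypes:
    if isinstance(timestamp, _Param):
        timestamp = 0.0  # the framework would substitute the real timestamp
    return timestamp

print(process(1, 12.5))  # 12.5
```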
|
<python><apache-beam><python-typing><mypy>
|
2024-08-07 17:12:56
| 1
| 1,465
|
user3412205
|
78,844,871
| 17,220,672
|
Locking resource in FastAPI - using a multiprocessing Worker
|
<p>I would like to make a <code>FastAPI</code> service with one <code>/get</code> endpoint which will return an ML-model inference result. It is pretty easy to implement, but the catch is that I periodically need to update the model with a newer version (through a request to another server with models, but that is beside the point), and here I see a problem!</p>
<p>What will happen if one request calls the old model while the old model is being replaced by a newer one? How can I implement this kind of locking mechanism with <code>asyncio</code>?</p>
<p>Here is the code:</p>
<pre><code>import asyncio
import time
from concurrent.futures import ProcessPoolExecutor
from fastapi import FastAPI, Request
from sentence_transformers import SentenceTransformer
app = FastAPI()
sbertmodel = None
def create_model():
global sbertmodel
sbertmodel = SentenceTransformer('multi-qa-MiniLM-L6-cos-v1')
# if you try to run all predicts concurrently, it will result in CPU thrashing.
pool = ProcessPoolExecutor(max_workers=1, initializer=create_model)
def model_predict():
ts = time.time()
vector = sbertmodel.encode('How big is London')
return vector
async def vector_search(vector):
# simulate I/O call (e.g. Vector Similarity Search using a VectorDB)
await asyncio.sleep(0.005)
@app.get("/")
async def entrypoint(request: Request):
loop = asyncio.get_event_loop()
ts = time.time()
# worker should be initialized outside endpoint to avoid cold start
vector = await loop.run_in_executor(pool, model_predict)
print(f"Model : {int((time.time() - ts) * 1000)}ms")
ts = time.time()
await vector_search(vector)
print(f"io task: {int((time.time() - ts) * 1000)}ms")
return "ok"
</code></pre>
<p>My model update would be implemented through repeated tasks (but that is not important now): <a href="https://fastapi-utils.davidmontague.xyz/user-guide/repeated-tasks/" rel="nofollow noreferrer">https://fastapi-utils.davidmontague.xyz/user-guide/repeated-tasks/</a></p>
<p>This is the idea of a model serving : <a href="https://luis-sena.medium.com/how-to-optimize-fastapi-for-ml-model-serving-6f75fb9e040d" rel="nofollow noreferrer">https://luis-sena.medium.com/how-to-optimize-fastapi-for-ml-model-serving-6f75fb9e040d</a></p>
<p>EDIT: what is important is to run multiple requests concurrently, and while the model is updating, acquire a lock so that requests wouldn't fail; they should just wait a little bit longer because it is a small model.</p>
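A minimal sketch of the locking idea with <code>asyncio</code> (class and method names are hypothetical): the updater holds an <code>asyncio.Lock</code> only while swapping the model, and requests acquire it briefly to read a consistent reference, so during an update they wait instead of failing:

```python
import asyncio

class ModelHolder:
    """Hypothetical wrapper: serializes model reads against model swaps."""

    def __init__(self, model):
        self._model = model
        self._lock = asyncio.Lock()

    async def get(self):
        # Requests take the lock only long enough to read a consistent reference.
        async with self._lock:
            return self._model

    async def swap(self, new_model):
        # The periodic updater holds the lock just for the swap, so concurrent
        # requests wait briefly instead of failing mid-update.
        async with self._lock:
            self._model = new_model

async def demo():
    holder = ModelHolder("model-v1")
    first = await holder.get()
    await holder.swap("model-v2")
    second = await holder.get()
    return first, second

first, second = asyncio.run(demo())
print(first, second)
```

Note that with a <code>ProcessPoolExecutor</code> the model object lives in the worker process, so the actual reload would also have to happen there (e.g. by submitting the re-initialization to the pool) while the lock guards the handoff.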
|
<python><python-asyncio><fastapi><python-multiprocessing>
|
2024-08-07 16:51:14
| 1
| 419
|
mehekek
|
78,844,612
| 7,447,542
|
Outlook using python win32com reading only email till a specific date and nothing after that
|
<p>Below is my Python code using win32com to read emails from a particular folder.</p>
<pre><code>
from logging import root
import os
import win32com.client
from datetime import datetime, timedelta
import zipfile
date_format = "%m/%d/%Y %H:%M"
def recursively_find_folder(folder, target_name):
if folder.Name == target_name:
return folder
for subfolder in folder.Folders:
found_folder = recursively_find_folder(subfolder, target_name)
if found_folder:
return found_folder
#function to check the emails in the outlook folder and download the attachments based on email subject
def download_attachments(folder_name, output_folder, start_time, end_time, target_subject):
outlook_app = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
#root_folder = outlook_app.Folders.Item(3) # Assume the first folder is the mailbox
root_folder = outlook_app.GetDefaultFolder(6) # Assume the first folder is the mailbox
target_folder = recursively_find_folder(root_folder, folder_name)
if target_folder:
print(f"Found folder: {target_folder.Name}")
# Iterate through items in the folder
items = target_folder.Items
items.sort("[ReceivedTime]", True)
for item in items:
print("Item:: ", item)
print(" Subject:: ", item.Subject.lower())
print(" Recevied Time: ", item.ReceivedTime)
# Check if the email matches the criteria
for subject in target_subject:
print(subject)
print("Email Received Time: ", datetime.strptime(item.ReceivedTime.strftime('%m/%d/%Y %H:%M'), date_format))
if (
start_time <= datetime.strptime(item.ReceivedTime.strftime('%m/%d/%Y %H:%M'), date_format) <= end_time
and subject.lower().strip() in item.Subject.lower().strip()
and item.Attachments.Count > 0
):
print(f"Processing email: {item.Subject}")
for attachment in item.Attachments:
# Save attachments to the output folder
attachment.SaveAsFile(os.path.join(output_folder, attachment.FileName))
print(f"Downloaded attachment: {attachment.FileName}")
else:
print("Nothing Happened!!!")
else:
print(f"Folder '{folder_name}' not found.")
#--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
#function to find zip folder and unzip it
def find_and_unzip_report_file(folder_path, extraction_path):
# Check if the folder exists
if not os.path.exists(folder_path):
print(f"Error: Folder '{folder_path}' not found.")
return
# Get a list of all files in the folder
files = os.listdir(folder_path)
# Find the report file based on the name pattern
report_file = next((file for file in files if file.lower().startswith('report') and file.lower().endswith('.zip')), None)
if report_file:
# Construct the full path to the zip file
zip_file_path = os.path.join(folder_path, report_file)
# Create the extraction path if it doesn't exist
os.makedirs(extraction_path, exist_ok=True)
# Unzip the contents of the zip file
with zipfile.ZipFile(zip_file_path, 'r') as zip_ref:
zip_ref.extractall(extraction_path)
os.rename(folder_path + 'CC CM Provisioning - INC SLA - ALL.csv', folder_path + 'CC CM Provisioning - INC SLA - ALL' + '-' + report_file[7:24] + '.csv')
os.remove(zip_file_path)
print(f"Successfully unzipped '{zip_file_path}' to '{extraction_path}'.")
else:
print("Error: Report file not found in the specified folder.")
if __name__ == "__main__":
folder_to_download = "service_tickets"
output_directory = "//prod_drive/meta/downloads/"
# Get the first day of the current month
start_date_time = (datetime.today().replace(day=1, hour=23, minute=0, second=0, microsecond=0) - timedelta(days=1)).strftime('%m/%d/%Y %H:%M')
end_date_time = (datetime.today().replace(day=1, hour=23, minute=10, second=0, microsecond=0) - timedelta(days=1)).strftime('%m/%d/%Y %H:%M')
date_format = "%m/%d/%Y %H:%M"
start_time = datetime.strptime(start_date_time, date_format)
print("Start Time:", start_time)
end_time = datetime.strptime(end_date_time, date_format)
print("End Time: ", end_time)
target_subject = ['CC CM Provisioning - INC SLA - ALL','CC CM Provisioning - SCTASK SLA - All','CC CM Provisioning - SCTASK - All']
download_attachments(folder_to_download, output_directory, start_time, end_time, target_subject)
find_and_unzip_report_file(output_directory, output_directory)
</code></pre>
<p>But the above code only reads emails up to 05/31/2024 and nothing after that. I tried running the code for other folders as well, but it is the same: emails are scanned/processed only until 05/31.</p>
<p>Could someone please advise what exactly the issue is in my code? I couldn't find any similar posts about this.</p>
|
<python><python-3.x><outlook><win32com>
|
2024-08-07 15:47:21
| 2
| 424
|
Nikhil Ravindran
|
78,844,458
| 4,321,525
|
How to Optimize Grid Lines and Support Points for Piecewise Linear Interpolation?
|
<p>I am working on a piecewise linear interpolation problem in Python where I optimize the placement of support points (in x- and y-direction) to fit some data. The 1D version of my code works as expected, distributing the support points nicely to fit the data, especially placing more support points in areas with smaller bending radii.</p>
<p><a href="https://i.sstatic.net/3KPjD4ol.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3KPjD4ol.png" alt="This is how the optimization is supposed to work" /></a></p>
<pre><code>import numpy as np
from scipy.interpolate import interpn
from scipy.optimize import minimize
import matplotlib.pyplot as plt
# Create test data: parabola around the origin
x_data = np.linspace(-3.15/2, 3.15/2, 800)
y_data = np.sin(x_data)
N = 8 # number of support points
def piecewise_linear_interp_error(support_points, x_data, y_data):
x_support = support_points[:N]
y_support = support_points[N:]
idx = np.argsort(x_support)
x_support = x_support[idx]
y_support = y_support[idx]
grid = (x_support,)
points = x_data[:, None]
    # interpolate at the support points
y_interp = interpn(grid, y_support, points, method='linear', bounds_error=False, fill_value=None).flatten()
    # compute the squared (RMS) error
error = np.sqrt(np.mean((y_data - y_interp) ** 2))
return error
# initial values for the optimization (evenly spaced points)
initial_x_support = np.linspace(np.min(x_data), np.max(x_data), N)
initial_y_support = np.interp(initial_x_support, x_data, y_data)
initial_support_points = np.concatenate([initial_x_support, initial_y_support])
result = minimize(piecewise_linear_interp_error, initial_support_points, args=(x_data, y_data), method='SLSQP')
optimized_x_support = result.x[:N]
optimized_y_support = result.x[N:]
plt.plot(x_data, y_data, label='Original Data')
plt.plot(optimized_x_support, optimized_y_support, 'ro-', label='Optimized Support Points')
plt.legend()
plt.show()
print("Optimized Support Points:")
for x, y in zip(optimized_x_support, optimized_y_support):
print(f"({x:.2f}, {y:.2f})")
</code></pre>
<p>I modeled the 1-D and n-D versions to be similar. Most of the extra code is for piecing together and separating the optimization vector, and for handling the grid in the multidimensional case.</p>
<pre><code>import numpy as np
from scipy.interpolate import interpn
from scipy.optimize import minimize
import matplotlib.pyplot as plt
def display_support_points(src_grid, target_vec):
print("Optimized Support Points:")
for i, points in enumerate(src_grid):
formatted_points = ', '.join(f'{p:.2f}' for p in points)
print(f"({formatted_points}, {target_vec[i]})")
class PiecewiseLinearInterpolation:
def __init__(self, source_values, target_values, source_resolution):
self.n_dimensions = source_values.shape[1]
self.src_vals = source_values
self.target_values = target_values
self.src_resolution = source_resolution # List of support points for each dimension
self.src_vec_shape = None
self.initial_support_vector = self.generate_inital_support_vector()
def generate_inital_support_vector(self):
initial_src_support_vec = []
for i, x in enumerate(self.src_vals.T):
initial_src_support_vec.append(np.linspace(np.min(x), np.max(x), self.src_resolution[i]))
initial_src_support_vec = np.concatenate(initial_src_support_vec)
# Create a grid for each dimension based on the resolutions
src_grids = [np.linspace(np.min(x), np.max(x), res) for x, res in
zip(self.src_vals.T, self.src_resolution)]
# Create a meshgrid for interpolation
src_grid = np.array(np.meshgrid(*src_grids, indexing='ij')).T.reshape(-1, self.n_dimensions)
orig_source_idx = []
for dim in range(self.n_dimensions):
orig_source_idx.append(np.unique(self.src_vals[:, dim], ))
# reshape original target_values to the shape of src_vals
self.src_vec_shape = [len(np.unique(x)) for x in orig_source_idx]
target_vec = self.target_values.reshape(self.src_vec_shape)
initial_target_support = interpn(
orig_source_idx, # The grid for each dimension
target_vec, # The data to interpolate
src_grid, # The points where interpolation is needed
method='linear', # Interpolation method
bounds_error=False, # Do not raise an error if points are out of bounds
fill_value=None # Use NaN for out-of-bounds points
).flatten() # Flatten the result to get the initial support points for y
return np.concatenate([initial_src_support_vec, initial_target_support])
def calc_interpolation_error(self, support_vector):
src_vec, target_vec = self.split_support_vector(support_vector)
src_vec_sorted, target_vec_sorted = self.reorder_support_vectors(src_vec, target_vec)
points = np.array(self.src_vals).reshape(-1, self.n_dimensions)
target_interp = interpn(src_vec_sorted, target_vec_sorted, points, method='linear', bounds_error=False,
fill_value=None)
interp_err = np.sqrt(np.sum((self.target_values - target_interp) ** 2))
return interp_err
def reorder_support_vectors(self, src_vec, target_vec):
for i in range(self.n_dimensions):
idx = np.argsort(src_vec[i])
src_vec[i] = src_vec[i][idx]
target_vec = np.take(target_vec, idx, axis=i)
return src_vec, target_vec
def split_support_vector(self, support_vec):
src_vec = []
start = 0
for res in self.src_resolution:
end = start + res
src_vec.append(support_vec[start:end])
start = end
target_vec = support_vec[start:]
target_vec = target_vec.reshape(tuple(self.src_resolution))
return src_vec, target_vec
def optimize_support_points(self):
result = minimize(self.calc_interpolation_error, self.initial_support_vector, method='SLSQP')
result_src_vec, result_target_vec = self.split_support_vector(result.x)
res_src_vec_sorted, res_target_vec_sorted = self.reorder_support_vectors(result_src_vec, result_target_vec)
result_grid = np.array(np.meshgrid(*res_src_vec_sorted, indexing='ij')).T.reshape(-1, self.n_dimensions)
return result_grid, res_target_vec_sorted
def display_results(self, src_grid, target_vec):
plt.figure(figsize=(10, 8))
if src_grid.shape[1] == 1:
plt.plot(src_grid[:, 0], target_vec, 'ro-', label='Optimized Support Points')
elif src_grid.shape[1] == 2:
ax = plt.axes(projection='3d')
ax.scatter(self.src_vals[:, 0], self.src_vals[:, 1], self.target_values, label='Original Data')
ax.scatter(src_grid[:, 0], src_grid[:, 1], target_vec,
color='red', label='Optimized Support Points')
else:
raise ValueError("Plotting for dimensions higher than 2 is not supported.")
plt.legend()
plt.show()
# This is how its supposed to be used:
def main():
# Create a dataset with two dimensions for src_vals
x = np.linspace(-3.15 / 2, 3.15 / 2, 20)
y = np.linspace(-3.15 / 2, 3.15 / 2, 20)
X, Y = np.meshgrid(x, y)
Z = np.sin(X) + np.sin(Y)
raw_src_vec = np.array([X.flatten(), Y.flatten()]).T
raw_target_vec = Z.flatten()
x_resolutions = [5, 10] # Individual resolutions for each dimension
interpolator = PiecewiseLinearInterpolation(raw_src_vec, raw_target_vec, x_resolutions)
src_grid, target_vec = interpolator.optimize_support_points()
interpolator.display_results(src_grid, target_vec)
#display_support_points(src_grid, target_vec)
if __name__ == "__main__":
main()
</code></pre>
<p><a href="https://i.sstatic.net/29UgJPM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/29UgJPM6.png" alt="However, the 2D version places the support points all over, leading to poor interpolation results." /></a></p>
<p>To me, it seems as if the optimizer messed it up; however, it is more likely that I messed up with the multi-dimensional stuff. I wrote tests and verified pieces that seemed suspicious, and those were correct. How can I streamline the code to be more straightforward? Where is my data corruption?</p>
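One concrete place to check (a possible culprit, not a verified fix): <code>np.array(np.meshgrid(*grids, indexing='ij')).T.reshape(-1, n)</code> does not enumerate points in the same C order in which the target grid is raveled once the per-axis resolutions differ, whereas stacking on the last axis does. A numpy sketch of the difference:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0])   # resolution 3
y = np.array([10.0, 20.0])      # resolution 2

gx, gy = np.meshgrid(x, y, indexing="ij")

# Transpose-then-reshape: point order no longer matches C-order raveling.
pts_t = np.array([gx, gy]).T.reshape(-1, 2)
# Stack on the last axis: row k corresponds to values.ravel()[k].
pts_s = np.stack([gx, gy], axis=-1).reshape(-1, 2)

values = gx + gy  # shape (3, 2), raveled in C order

print(np.array_equal(pts_t, pts_s))  # False once resolutions differ
```

If the grid points fed to <code>interpn</code> are ordered differently from the raveled target vector, the optimizer is effectively fitting scrambled data, which would produce exactly the scattered support points in the plot.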
|
<python><optimization>
|
2024-08-07 15:12:35
| 1
| 405
|
Andreas Schuldei
|
78,844,342
| 1,176,573
|
Sort only few columns in pandas dataframe by month and year
|
<p>How do I sort the only the <code>month year</code> columns in below dataframe? These columns could differ for different stock.</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th></th>
<th>Quarterly Results</th>
<th>Stockname</th>
<th>Mar 2021</th>
<th>Jun 2021</th>
<th>Sep 2021</th>
<th>Dec 2021</th>
<th>Dec 2022</th>
<th>Mar 2023</th>
<th>Jun 2023</th>
<th>Sep 2023</th>
<th>Mar 2022</th>
<th>Jun 2022</th>
<th>Sep 2022</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>Sales</td>
<td>526433</td>
<td>40.8</td>
<td>41.64</td>
<td>47.7</td>
<td>47.62</td>
<td>54.14</td>
<td>51.94</td>
<td>50.95</td>
<td>50.39</td>
<td>54.71</td>
<td>50.92</td>
<td>63.42</td>
</tr>
<tr>
<td>1</td>
<td>Expenses</td>
<td>526433</td>
<td>36.78</td>
<td>36.7</td>
<td>39.98</td>
<td>42.3</td>
<td>48.94</td>
<td>49.74</td>
<td>48.58</td>
<td>48.46</td>
<td>51.69</td>
<td>45.54</td>
<td>57.43</td>
</tr>
<tr>
<td>2</td>
<td>Operating Profit</td>
<td>526433</td>
<td>4.02</td>
<td>4.94</td>
<td>7.72</td>
<td>5.32</td>
<td>5.2</td>
<td>2.2</td>
<td>2.37</td>
<td>1.93</td>
<td>3.02</td>
<td>5.38</td>
<td>5.99</td>
</tr>
<tr>
<td>3</td>
<td>OPM %</td>
<td>526433</td>
<td>9.85%</td>
<td>11.86%</td>
<td>16.18%</td>
<td>11.17%</td>
<td>9.60%</td>
<td>4.24%</td>
<td>4.65%</td>
<td>3.83%</td>
<td>5.52%</td>
<td>10.57%</td>
<td>9.44%</td>
</tr>
<tr>
<td>4</td>
<td>Other Income</td>
<td>526433</td>
<td>0.7</td>
<td>0.5</td>
<td>0.35</td>
<td>3.89</td>
<td>2.74</td>
<td>1.55</td>
<td>1.98</td>
<td>0.42</td>
<td>2.19</td>
<td>2.43</td>
<td>1.58</td>
</tr>
</tbody>
</table></div>
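<p>One hedged approach: parse the column labels with <code>pd.to_datetime(format='%b %Y', errors='coerce')</code>, sort only the labels that parse as month-year, and leave the other columns where they are:</p>

```python
import pandas as pd

# Toy frame with the same mix of label types as in the question.
df = pd.DataFrame([[1, 2, 3, 4]],
                  columns=["Quarterly Results", "Dec 2022", "Mar 2021", "Jun 2021"])

# Labels that don't match "%b %Y" (e.g. "Quarterly Results") coerce to NaT.
parsed = pd.to_datetime(df.columns, format="%b %Y", errors="coerce")
date_cols = [c for c, d in zip(df.columns, parsed) if pd.notna(d)]
other_cols = [c for c in df.columns if c not in date_cols]
date_cols_sorted = sorted(date_cols, key=lambda c: pd.to_datetime(c, format="%b %Y"))

df = df[other_cols + date_cols_sorted]
print(list(df.columns))  # ['Quarterly Results', 'Mar 2021', 'Jun 2021', 'Dec 2022']
```

<p>Because the selection is driven by which labels parse, this adapts automatically when different stocks have different month-year columns.</p>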
|
<python><pandas>
|
2024-08-07 14:49:08
| 2
| 1,536
|
RSW
|
78,844,205
| 22,963,183
|
How to develop a Generalized RAG Pipeline for Text, Images, and Structured Data
|
<p>I'm trying to find a general RAG solution for problems involving text, images, charts, tables, etc., spread across many different formats such as <strong>.docx</strong>, <strong>.xlsx</strong>, and <strong>.pdf</strong>.</p>
<p><strong>The requirement for the answer:</strong></p>
<ul>
<li>Some answers are just images</li>
<li>Some answers only contain text and need to be absolutely accurate because it relates to a process,...</li>
<li>On the other hand, the answers may not need to be absolutely accurate but should still ensure logical consistency; this is something I am already working on</li>
</ul>
<p><strong>The features of the documents:</strong></p>
<ul>
<li>Some documents in <strong>DOCX</strong> and <strong>Excel</strong> formats contain only text; this is the simplest form. My task is to determine the embedding model and LLM, in addition to selecting hyperparameters such as <strong>chunk size</strong>, <strong>chunk overlap</strong>, etc., and experimenting to find the <strong>appropriate values</strong></li>
<li>If the documents have more <strong>complex content</strong>, such as DOCX files containing <strong>text</strong> and <strong>images</strong>, or <strong>PDF</strong> files containing <strong>text</strong>, <strong>images</strong>, <strong>charts</strong>, <strong>tables</strong>, etc., I haven't found a general solution to handle them yet.</li>
</ul>
<p>Below are some resources I have read, but I don't fully understand them or how they can help me.</p>
<ul>
<li><a href="https://medium.com/kx-systems/guide-to-multimodal-rag-for-images-and-text-10dab36e3117" rel="nofollow noreferrer">https://medium.com/kx-systems/guide-to-multimodal-rag-for-images-and-text-10dab36e3117</a></li>
<li><a href="https://blog.langchain.dev/semi-structured-multi-modal-rag/" rel="nofollow noreferrer">https://blog.langchain.dev/semi-structured-multi-modal-rag/</a></li>
</ul>
<p><strong>I want to be able to outline a pipeline to answer questions according to the requirements of my system</strong>. Any help would be greatly appreciated!</p>
<p><strong>System:</strong></p>
<ul>
<li>LLM was run locally (Llama 3.1 13N Instruct, Qwen2-7B-Instruct,...)</li>
</ul>
|
<python><langchain><large-language-model><python-embedding><rag>
|
2024-08-07 14:22:48
| 1
| 515
|
happy
|
78,844,160
| 15,547,292
|
Getting a flat view of a nested list
|
<p>In Python, is it possible to get a flat view of a list of lists that dynamically adapts to changes to the original, nested list?</p>
<p><strong>To be clear, I am not looking for a static snapshot, but for a view that reflects changes.</strong></p>
<p>Further, the sub-lists should not be restricted to a primitive type, but be able to contain arbitrary objects, and not tied to a static size, but be allowed to shrink or expand freely.</p>
<p>Simple example:</p>
<pre class="lang-py prettyprint-override"><code>a = ["a", "b", "c"]
b = ["d", "e", "f"]
view = flat_view([a, b])
# `view` should show ["a", "b", "c", "d", "e", "f"]
b[0] = "x"
# `view` should show ["a", "b", "c", "x", "e", "f"]
</code></pre>
<p>The implementation of <code>flat_view()</code> is what I'm looking for.</p>
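<p>One way to get such live behaviour (a sketch: read-only, integer indices only, no slice support) is a small <code>Sequence</code> wrapper that keeps references to the sub-lists and resolves every index at access time:</p>

```python
from collections.abc import Sequence

class FlatView(Sequence):
    """A live, read-only flat view over a list of lists."""
    def __init__(self, lists):
        self._lists = lists  # keep references, not copies

    def __len__(self):
        return sum(len(sub) for sub in self._lists)

    def __getitem__(self, index):
        if index < 0:
            index += len(self)
        for sub in self._lists:
            if index < len(sub):
                return sub[index]
            index -= len(sub)
        raise IndexError(index)

a = ["a", "b", "c"]
b = ["d", "e", "f"]
view = FlatView([a, b])
b[0] = "x"
print(list(view))  # ['a', 'b', 'c', 'x', 'e', 'f']
```

<p>Because lengths are recomputed per access, sub-lists may grow or shrink freely and may hold arbitrary objects; iteration comes for free from the <code>Sequence</code> mixin.</p>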
|
<python><list><flatten>
|
2024-08-07 14:12:41
| 3
| 2,520
|
mara004
|
78,843,966
| 4,232,472
|
Kafka broker does not receive message from python producer
|
<p>Hello, I use the following Kafka configuration with Docker Compose:</p>
<p><strong>compose_kafka.yml</strong></p>
<pre><code>version: '3'
services:
zookeeper:
image: wurstmeister/zookeeper
container_name: zookeeper
ports:
- "2181:2181"
environment:
ZOO_MY_ID: 1
kafka:
image: wurstmeister/kafka
container_name: kafka
ports:
- "9092:9092"
environment:
KAFKA_ADVERTISED_HOST_NAME: 192.168.1.10
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
kafka_manager:
image: kafkamanager/kafka-manager
container_name: kafka-manager
restart: always
ports:
- "9000:9000"
environment:
ZK_HOSTS: "192.168.1.10:2181"
APPLICATION_SECRET: "random-secret"
</code></pre>
<p>I create a producer which produces messages to the kafka server</p>
<p><strong>Generate.py</strong></p>
<pre><code>from faker import Faker
fake = Faker()
class Registered_user:
def get_registered_user():
return {
"name": fake.name(),
"address": fake.address(),
"created_at": fake.year()
}
</code></pre>
<p><strong>Producer_registered_user.py</strong></p>
<pre><code>import time
import json
from kafka import KafkaProducer
from fake_data import Generate
def json_serializer(data):
return json.dumps(data).encode("utf-8")
producer = KafkaProducer(bootstrap_servers='192.168.1.10:9092',
value_serializer=json_serializer)
if __name__ == '__main__':
while 1 == 1:
user = Generate.Registered_user.get_registered_user()
producer.send('registered_user', user)
print(user)
time.sleep(4)
</code></pre>
<p><a href="https://i.sstatic.net/EYHgdBZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EYHgdBZP.png" alt="enter image description here" /></a></p>
<p>But the consumer does not receive any messages:</p>
<p><a href="https://i.sstatic.net/Z4qMgT5m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z4qMgT5m.png" alt="enter image description here" /></a></p>
<p><strong>Consumer_registered_user.py</strong></p>
<pre><code>import json
from kafka import KafkaConsumer
if __name__ == '__main__':
consumer = KafkaConsumer(
bootstrap_servers='192.168.1.10:9092',
auto_offset_reset="from-beginning",
group_id="consumer-group-a"
)
for message in consumer:
print("User = {}".format(json.loads(message.value)))
</code></pre>
<p>I also checked if the topic is listed under the topics:</p>
<p><a href="https://i.sstatic.net/A2l2rd78.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A2l2rd78.png" alt="enter image description here" /></a></p>
<p>and if kafka received the messages:</p>
<p><a href="https://i.sstatic.net/A2QOBx98.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A2QOBx98.png" alt="enter image description here" /></a></p>
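<p>Two things stand out in the consumer (hedged, since this can't be verified without your broker): no topic is ever passed to <code>KafkaConsumer</code>, and kafka-python accepts only <code>'earliest'</code>/<code>'latest'</code> for <code>auto_offset_reset</code> (<code>from-beginning</code> is the CLI tool's flag, not a valid value here). A sketch of the corrected construction:</p>

```python
import json

# Corrected consumer settings for kafka-python; broker address and
# group id are taken from the question.
consumer_kwargs = dict(
    bootstrap_servers="192.168.1.10:9092",
    auto_offset_reset="earliest",  # kafka-python accepts 'earliest' or 'latest'
    group_id="consumer-group-a",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

# With a broker running, the topic must be named explicitly:
# consumer = KafkaConsumer("registered_user", **consumer_kwargs)
# for message in consumer:
#     print(f"User = {message.value}")
```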
<p>Could you please help me with this problem?</p>
|
<python><apache-kafka><apache-zookeeper>
|
2024-08-07 13:28:01
| 1
| 1,650
|
Erik hoeven
|
78,843,838
| 315,427
|
CFMessagePortSendRequest FAILED(-1)
|
<p>I'm using <code>PyQT6</code> on MacOS and <code>exec()</code> to open dialog</p>
<pre><code>if dialog.exec() == QDialog.DialogCode.Accepted:
print("Adding...")
else:
print("Adding cancelled")
</code></pre>
<p>The reason for exec is to block the execution until I get result from the dialog. However, often I am getting the following lines</p>
<blockquote>
<p>TSMSendMessageToUIServer: CFMessagePortSendRequest FAILED(-1) to send
to port com.apple.tsm.uiserver</p>
</blockquote>
<p>What this error means and how to solve it?</p>
|
<python><macos><pyqt><pyqt6>
|
2024-08-07 12:59:23
| 0
| 29,709
|
Pablo
|
78,843,741
| 2,051,572
|
Error Using Ollama Docker Image with LangChain to Perform LLaMA Model Inference
|
<p>I'm trying to use the official Ollama Docker image to run a Python script that performs inference using the <code>llama3.1</code> model from Ollama with LangChain. My script performs some operations on a Pandas DataFrame and then uses the LLaMA model to answer questions based on the DataFrame's content.</p>
<p>I am pretty new to working with Docker, and created a simple script with the help of ChatGPT 4o and the <a href="https://hub.docker.com/r/ollama/ollama" rel="nofollow noreferrer">reference in Docker on using ollama</a>.</p>
<p>I received the error message:</p>
<pre><code>ERROR: failed to solve: python:3.9-slim: failed to resolve source metadata for docker.io/library/python:3.9-slim: failed to do request: Head "https://registry-1.docker.io/v2/library/python/manifests/3.9-slim": EOF
</code></pre>
<p>Here's a simplified version of my script:</p>
<pre><code>import pandas as pd
from langchain_ollama.llms import OllamaLLM
from langchain.prompts import PromptTemplate
from langchain.schema.runnable import RunnableLambda
# Sample DataFrame
df = pd.DataFrame({'product': ['Product A', 'Product B'], 'description': ['Description A', 'Description B']})
# Initialize the LLaMA model using Ollama
model = OllamaLLM(model="llama3.1")
# Define the prompt template
question_template = """
Description: {description}
Question: {question}
"""
# Create a PromptTemplate
prompt_to_list_creation = PromptTemplate(
template=question_template,
input_variables=["description", "question"]
)
# Define a function to parse the response into a string
def parse_response(response) -> str:
return response.strip() if isinstance(response, str) else response.content.strip()
# Create a RunnableLambda to parse the response
parse_lambda = RunnableLambda(func=parse_response)
# Chain the model and parsing function
expression_with_parsing = prompt_to_list_creation | model | parse_lambda
# Function to get answers based on the DataFrame input
def get_answers_from_df(df: pd.DataFrame, question: str):
answers = []
for _, row in df.iterrows():
input_data = {"description": row['description'], "question": question}
response = expression_with_parsing.invoke(input_data)
answers.append(response)
return answers
# Example usage
question = "What are the key features?"
answers = get_answers_from_df(df, question)
print(answers)
</code></pre>
<p>I tried to use the following Dockerfile to pull the <code>llama3.1</code> model and run the script:</p>
<pre><code># Use the official Ollama image to pull the model
FROM ollama/ollama:latest as ollama
RUN ollama pull llama3.1
# Use a Python image to set up the environment and run the script
FROM python:3.9-slim
COPY --from=ollama /root/.ollama/models /root/.ollama/models
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY script.py .
CMD ["python", "script.py"]
</code></pre>
<p>With requirements.txt</p>
<pre><code>networkx
langchain
langchain_ollama
</code></pre>
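<p>Two hedged observations: the <code>EOF</code> on <code>registry-1.docker.io</code> means Docker never managed to pull the <code>python:3.9-slim</code> base image (a network, proxy, or registry problem rather than a script bug), and separately, copying model files into a plain Python image is not enough, because <code>OllamaLLM</code> talks HTTP to a running Ollama server. A common layout (service names and ports here are illustrative assumptions) runs the server as its own Compose service:</p>

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama_models:/root/.ollama   # persist pulled models
  app:
    build: .
    depends_on:
      - ollama
volumes:
  ollama_models:
```

<p>The script would then point at the server, e.g. <code>OllamaLLM(model="llama3.1", base_url="http://ollama:11434")</code> (assuming <code>base_url</code> on your langchain_ollama version), and <code>ollama pull llama3.1</code> is run once against the <code>ollama</code> service rather than during the image build.</p>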
|
<python><docker><containers><langchain><ollama>
|
2024-08-07 12:37:08
| 0
| 2,512
|
Ohm
|
78,843,713
| 2,521,423
|
PySide6 GUI displays differently on different computers
|
<p>I have a GUI application built in PySide6. Mostly it works fine, but a specific part of it displays differently on different machines: specifically, the text color is white on one machine, but black (as it should be) on every other machine I have tested on. Comparison below:</p>
<p>Working:
<a href="https://i.sstatic.net/UmwUPCoE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UmwUPCoE.png" alt="enter image description here" /></a></p>
<p>Not working
<a href="https://i.sstatic.net/F0RHvaqV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F0RHvaqV.png" alt="enter image description here" /></a></p>
<p>Same code, same python version, same library versions. I have completely uninstalled python, and reinstalled all packages from scratch, to no effect.</p>
<p>The major difference between the test setups is the screen resolution, but resetting the screen resolutions to match across different machines does not change anything. I have also added</p>
<pre><code>QGuiApplication.setAttribute(Qt.AA_EnableHighDpiScaling)
QGuiApplication.setAttribute(Qt.AA_UseHighDpiPixmaps)
</code></pre>
<p>before instantiating anything per some advice elsewhere about standardizing resolutions, but this also has no visible effect.</p>
<p>I am hoping that someone recognizes a known issue from the visual difference. Any ideas?</p>
|
<python><pyside6>
|
2024-08-07 12:31:17
| 0
| 1,488
|
KBriggs
|
78,843,705
| 1,876,739
|
Replace pandas DataFrame column values with dict values looked up by separate column
|
<p>Given a pandas <code>DataFrame</code> containing <code>nan</code>:</p>
<pre><code>import numpy as np
import pandas as pd
bad_df = pd.DataFrame({'foo': [np.nan, 1.0, 2.0], 'bar': ['a', 'b', 'c']})
</code></pre>
<p>And also given a dictionary with lookup keys that may exist in the <code>bar</code> column:</p>
<pre><code>replace_values = {'a': 7.0, 'd': 10.0}
</code></pre>
<p><strong>How can the <code>nan</code> values in the <code>foo</code> column be replace with values in the <code>replace_values</code> dictionary based on the lookup of <code>bar</code>?</strong> The resulting DataFrame would look like:</p>
<pre><code>expected_df = pd.DataFrame({'foo': [7.0, 1.0, 2.0], 'bar': ['a', 'b', 'c']})
</code></pre>
<p>The typical <code>fillna</code> and <code>replace</code> methods on DataFrames don't seem to have this functionality since they will replace all na with the same value as opposed to a lookup value.</p>
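<p>Combining <code>Series.map</code> with <code>fillna</code> does give exactly this lookup behaviour: map <code>bar</code> through the dictionary (keys absent from the dict become NaN), then use that series only where <code>foo</code> is missing:</p>

```python
import numpy as np
import pandas as pd

bad_df = pd.DataFrame({'foo': [np.nan, 1.0, 2.0], 'bar': ['a', 'b', 'c']})
replace_values = {'a': 7.0, 'd': 10.0}

# fillna accepts a Series aligned on the index, so the per-row lookup
# comes from mapping 'bar' through the dict.
bad_df['foo'] = bad_df['foo'].fillna(bad_df['bar'].map(replace_values))
print(bad_df)
```

<p>Rows whose <code>bar</code> value is not in the dictionary keep their NaN, and non-NaN <code>foo</code> values are untouched.</p>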
<p>Thank you in advance for your consideration and response.</p>
|
<python><pandas>
|
2024-08-07 12:29:18
| 1
| 17,975
|
Ramón J Romero y Vigil
|
78,843,541
| 3,402,703
|
hide firefox downloads window (or contain to one desktop)
|
<p>I don't even know if the name of that is "window": it's not in the DOM, but in the browser, and Firefox opens it every time a download is completed. It can be accessed with <code>Ctrl+Shift+y</code> if I do it manually.</p>
<p>I'm downloading some pdfs with python and selenium. I set all options for downloading those files without any additional window, and to my belief, to avoid "the window" (which I thought was the "download panel"):</p>
<pre><code>profile.set_preference("browser.download.folderList", 2)
profile.set_preference("browser.download.dir", download_dir)
profile.set_preference("browser.helperApps.neverAsk.saveToDisk", "application/pdf")
profile.set_preference("pdfjs.disabled", True) # Disable built-in PDF viewer
# Suppress download panel notifications
profile.set_preference("browser.download.panel.shown", False)
profile.set_preference("browser.download.panel.suppress", True)
</code></pre>
<p>That didn't help, so I tried to, after each iteration, change context and send <code>Ctrl+Shift+y</code>:</p>
<pre><code>driver.set_context("chrome")
# Create a var of the window and send the key combo
win = driver.find_element(By.TAG_NAME,"html")
win.send_keys(Keys.CONTROL + Keys.SHIFT + 'y')
# Set the focus back to content
driver.set_context("content")
</code></pre>
<p>That didn't work either, it sent the keys flawlessly, but the intruding window remained there.</p>
<p>It wouldn't be much of a problem if the window appeared only on the desktop where I'm running the script, but it shows up on every desktop (below: the one running the script, and an empty desktop; at this moment I'm using only half the screen to write this!).</p>
<p><a href="https://i.sstatic.net/VCb8dq3t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VCb8dq3t.png" alt="desktop running the script" /></a></p>
<p><a href="https://i.sstatic.net/vh37jPo7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vh37jPo7.png" alt="empty desktop" /></a></p>
<p>I'm running:</p>
<ul>
<li>Ubuntu 22.04 (both gnome/wayland and i3 display the problem)</li>
<li>Mozilla Firefox 128.0.3</li>
<li>Python 3.10.12</li>
<li>selenium Version: 4.23.1</li>
</ul>
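<p>One hedged suggestion: in Firefox 98 and later, the downloads panel popup is governed by the <code>browser.download.alwaysOpenPanel</code> preference; to my knowledge the <code>browser.download.panel.shown</code>/<code>panel.suppress</code> prefs do not disable it. A sketch of the extra preference to set on the profile:</p>

```python
# Hedged sketch: 'browser.download.alwaysOpenPanel' (Firefox 98+) is the pref
# that stops the downloads panel from opening when a download completes.
extra_prefs = {
    "browser.download.alwaysOpenPanel": False,
}

# Applied to the existing profile from the question:
# for name, value in extra_prefs.items():
#     profile.set_preference(name, value)
```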
|
<python><selenium-webdriver><firefox><selenium-firefoxdriver>
|
2024-08-07 11:56:33
| 1
| 6,507
|
PavoDive
|
78,843,458
| 3,070,181
|
Ignore specific string in flake8 warning when using icecream
|
<p>I use <a href="https://github.com/gruns/icecream" rel="nofollow noreferrer">icecream</a> to support my debugging and a large project I use the <a href="https://github.com/gruns/icecream#import-tricks" rel="nofollow noreferrer">install method</a> to make it available in all files.</p>
<p>Unfortunately, flake8, reports that 'ic' is an undefined name</p>
<pre><code>src/forms/frm_player_edit.py:149:9: F821 undefined name 'ic'
</code></pre>
<p>Is there any way to suppress this warning just for this string using a global flake8 setting? (I still want it to warn undefined names in general)</p>
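<p>flake8 forwards a <code>builtins</code> option to pyflakes that whitelists extra names globally, which matches this use case: it silences F821 only for the listed names while still flagging other undefined names:</p>

```ini
; setup.cfg, tox.ini, or .flake8
[flake8]
builtins = ic
```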
|
<python><flake8>
|
2024-08-07 11:35:35
| 2
| 3,841
|
Psionman
|
78,843,191
| 1,016,428
|
NumPy on small arrays: elementary arithmetic operations performances
|
<p>I am not 100% positive that this question has a solution besides "that's the overhead, live with it", but you never know.</p>
<p>I have a very simple set of elementary mathematical operations done on rather small 1D NumPy arrays (6 to 10 elements). The arrays' <code>dtype</code> is <code>np.float32</code>, while other inputs are standard Python floats.</p>
<p>The differences in timings are reproducible on all machines I have (Windows 10 64 bit, Python 3.9.10 64 bit, NumPy 1.21.5 MKL).</p>
<p>An example:</p>
<pre class="lang-py prettyprint-override"><code>def NumPyFunc(array1, array2, float1, float2, float3):
output1 = (array2 - array1) / (float2 - float1)
output2 = array1 + output1 * (float3 - float1)
return output1, output2
</code></pre>
<p>Given these inputs:</p>
<pre class="lang-py prettyprint-override"><code>import numpy
sz = 6
array1 = 3000.0 * numpy.random.uniform(size=(sz,)).astype(numpy.float32)
array2 = 2222.0 * numpy.random.uniform(size=(sz,)).astype(numpy.float32)
float1 = float(numpy.random.uniform(100000, 1e7))
float2 = float(numpy.random.uniform(100000, 1e7))
float3 = float(numpy.random.uniform(100000, 1e7))
</code></pre>
<p>I get on machine 1:</p>
<pre><code>%timeit NumPyFunc(array1, array2, float1, float2, float3)
3.33 µs ± 18 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
</code></pre>
<p>And on machine 2:</p>
<pre><code>%timeit NumPyFunc(array1, array2, float1, float2, float3)
1.5 µs ± 19.4 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
</code></pre>
<p>All nice and well, but I have to do these operations millions upon millions of times. One suggestion would be to use Numba LLVM JIT-compiler (which I know nothing about), but I heard it can get cumbersome to distribute an application with py2exe when Numba is involved.</p>
<p>So I thought I'd make a simple Fortran subroutine and wrap it with f2py, just for fun:</p>
<pre><code>pure subroutine f90_small_arrays(n, array1, array2, float1, float2, float3, output1, output2)
implicit none
integer, intent(in) :: n
real(4), intent(in), dimension(n) :: array1, array2
real(4), intent(in) :: float1, float2, float3
real(4), intent(out), dimension(n) :: output1, output2
output1 = (array2 - array1) / (float2 - float1)
output2 = array1 + output1 * (float3 - float1)
end subroutine f90_small_arrays
</code></pre>
<p>and time it in a Python function like this:</p>
<pre class="lang-py prettyprint-override"><code>from f90_small_arrays import f90_small_arrays
def FortranFunc(array1, array2, float1, float2, float3):
output1, output2 = f90_small_arrays(array1, array2, float1, float2, float3)
return output1, output2
</code></pre>
<p>I get on machine 1:</p>
<pre><code>%timeit FortranFunc(array1, array2, float1, float2, float3)
654 ns ± 0.869 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
</code></pre>
<p>And on machine 2:</p>
<pre><code>%timeit FortranFunc(array1, array2, float1, float2, float3)
286 ns ± 5.92 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
</code></pre>
<p>Which is more than 5 times faster than NumPy, even though I am just doing basic math operations.</p>
<p>While I get it that array creation has its own overhead, I wasn't expecting such a big ratio between the two. I have also tried to upgrade to NumPy 1.26.3, but it is actually 15% slower than NumPy 1.21.5...</p>
<p>I can of course get the answer "just replace the NumPy code with the Fortran one", which will imply a loss of readability - the code doing the actual operation is in another file, a Fortran file.</p>
<p>It may also be that there is nothing that can be done to narrow the gap between NumPy and Fortran, and the overhead of operations in NumPy arrays is what it is.</p>
<p>But of course any ideas/suggestions are more than welcome :-) .</p>
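<p>For what it's worth, part of NumPy's per-call cost here is allocating four temporary arrays. A hedged middle ground, before reaching for Numba or Fortran, is to preallocate output buffers and use the ufuncs' <code>out=</code> arguments:</p>

```python
import numpy as np

def numpy_func_inplace(array1, array2, float1, float2, float3, out1, out2):
    # Same math as NumPyFunc, but writing into caller-supplied buffers
    # avoids allocating temporaries on every call.
    np.subtract(array2, array1, out=out1)
    out1 /= (float2 - float1)
    np.multiply(out1, (float3 - float1), out=out2)
    out2 += array1
    return out1, out2

sz = 6
a1 = np.random.rand(sz).astype(np.float32)
a2 = np.random.rand(sz).astype(np.float32)
buf1 = np.empty(sz, np.float32)
buf2 = np.empty(sz, np.float32)
o1, o2 = numpy_func_inplace(a1, a2, 1.0, 3.0, 2.0, buf1, buf2)
```

<p>This won't match the Fortran numbers (the per-call Python and ufunc dispatch overhead remains), but it trims the allocation share of the gap while keeping the math readable and in one file.</p>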
|
<python><arrays><numpy><performance><f2py>
|
2024-08-07 10:30:33
| 2
| 1,449
|
Infinity77
|
78,843,073
| 6,477,565
|
OSError: Bad file descriptor
|
<p>I'm trying to run celery beat worker and a normal celery worker as docker containers. I have my tasks in flask application.</p>
<p>The normal celery worker is working fine but getting following error in <strong>beat worker</strong></p>
<pre><code>[2024-08-07 09:57:24,692: INFO/MainProcess] mingle: searching for neighbors
[2024-08-07 09:57:25,413: INFO/Beat] beat: Starting...
[2024-08-07 09:57:25,416: ERROR/Beat] Process Beat
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/billiard-4.2.0-py3.11.egg/billiard/process.py", line 323, in _bootstrap
self.run()
File "/usr/local/lib/python3.11/site-packages/celery-5.4.0-py3.11.egg/celery/beat.py", line 718, in run
self.service.start(embedded_process=True)
File "/usr/local/lib/python3.11/site-packages/celery-5.4.0-py3.11.egg/celery/beat.py", line 634, in start
humanize_seconds(self.scheduler.max_interval))
^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/kombu/utils/objects.py", line 40, in __get__
return super().__get__(instance, owner)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/functools.py", line 1001, in __get__
val = self.func(instance)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/celery-5.4.0-py3.11.egg/celery/beat.py", line 677, in scheduler
return self.get_scheduler()
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/celery-5.4.0-py3.11.egg/celery/beat.py", line 667, in get_scheduler
aliases = dict(load_extension_class_names(extension_namespace))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/celery-5.4.0-py3.11.egg/celery/utils/imports.py", line 150, in load_extension_class_names
_entry_points = entry_points(group=namespace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/importlib/metadata/__init__.py", line 1041, in entry_points
return SelectableGroups.load(eps).select(**params)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/importlib/metadata/__init__.py", line 476, in load
ordered = sorted(eps, key=by_group)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/importlib/metadata/__init__.py", line 1038, in <genexpr>
eps = itertools.chain.from_iterable(
^
File "/usr/local/lib/python3.11/importlib/metadata/_itertools.py", line 16, in unique_everseen
k = key(element)
^^^^^^^^^^^^
File "/usr/local/lib/python3.11/importlib/metadata/__init__.py", line 955, in _normalized_name
or super()._normalized_name
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/importlib/metadata/__init__.py", line 627, in _normalized_name
return Prepared.normalize(self.name)
^^^^^^^^^
File "/usr/local/lib/python3.11/importlib/metadata/__init__.py", line 622, in name
return self.metadata['Name']
^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/importlib/metadata/__init__.py", line 611, in metadata
or self.read_text('PKG-INFO')
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/importlib/metadata/__init__.py", line 939, in read_text
return self._path.joinpath(filename).read_text(encoding='utf-8')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/zipfile.py", line 2468, in read_text
with self.open('r', encoding, *args, **kwargs) as strm:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/zipfile.py", line 2434, in open
stream = self.root.open(self.at, zip_mode, pwd=pwd)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/zipfile.py", line 1579, in open
fheader = zef_file.read(sizeFileHeader)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/zipfile.py", line 786, in read
self._pos = self._file.tell()
^^^^^^^^^^^^^^^^^
OSError: [Errno 9] Bad file descriptor
</code></pre>
<p><strong>I'm using python 3.11.9-slim as the base image for the celery beat worker</strong></p>
<p>Following is the versions of the packages installed in the container</p>
<pre><code>amqp==5.2.0
attrs==24.1.0
billiard==4.2.0
blinker==1.8.2
boto3==1.34.145
botocore==1.34.145
celery==5.4.0
certifi==2024.7.4
cffi==1.17.0
charset-normalizer==3.3.2
CIRAUtils==9.0.0
click==8.1.7
click-didyoumean==0.3.1
click-plugins==1.1.1
click-repl==0.3.0
cobble==0.1.4
cryptography==43.0.0
cssselect==1.2.0
dnspython==2.6.1
et-xmlfile==1.1.0
flasgger==0.9.7.1
Flask==3.0.2
Flask-Cors==4.0.1
Flask-JWT-Extended==4.6.0
Flask-PyMongo==2.3.0
gunicorn==22.0.0
idna==3.7
importlib_metadata==8.2.0
itsdangerous==2.2.0
Jinja2==3.1.4
jmespath==1.0.1
jsonschema==4.23.0
jsonschema-specifications==2023.12.1
kombu==5.3.7
lxml==5.2.2
mammoth==1.8.0
MarkupSafe==2.1.5
mistune==3.0.2
msal==1.30.0
numpy==2.0.1
openpyxl==3.1.5
packaging==24.1
pandas==2.2.2
prompt_toolkit==3.0.47
pycparser==2.22
pycryptodome==3.20.0
PyJWT==2.8.0
pymongo==4.8.0
pyquery==2.0.0
python-dateutil==2.9.0
pytz==2024.1
PyYAML==6.0.1
redis==5.0.7
referencing==0.35.1
requests==2.32.3
rpds-py==0.19.1
s3transfer==0.10.2
sentry-sdk==2.10.0
six==1.16.0
tzdata==2024.1
urllib3==2.2.2
vine==5.1.0
wcwidth==0.2.13
Werkzeug==3.0.3
xmltodict==0.12.0
zipp==3.19.2
</code></pre>
<p><code>CIRAUtils==9.0.0</code> is custom package</p>
|
<python><flask><celery><python-importlib><kombu>
|
2024-08-07 10:06:03
| 1
| 815
|
Albin Antony
|
78,842,962
| 7,920,004
|
Airflow DAG throws GlueJobOperator is not JSON serializable
|
<p>The Airflow task below throws an <strong>error</strong>:</p>
<pre><code>[2024-08-07T09:05:00.142+0000] {{xcom.py:664}} ERROR - Object of type GlueJobOperator is not JSON serializable. If you are using pickle instead of JSON for XCom, then you need to enable pickle support for XCom in your airflow config or make sure to decorate your object with attr.
</code></pre>
<p><strong>Code</strong>:</p>
<pre><code>import os
import sys
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.dummy_operator import DummyOperator
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator
from airflow.utils.task_group import TaskGroup
from airflow.decorators import task_group, task, dag
from airflow.operators.python import PythonOperator
sys.path.insert(0, os.path.abspath(os.path.dirname(__file__)))
from utils.environment_config import EnvironmentConfig # noqa: E402
config = EnvironmentConfig(__file__)
import json
params_one = ["value"]
params_two = ["1","2"]
params_three = [4, 12, 52]
params_four = [3]
param_five = ["col"]
playground_bucket = config.get_environment_variable("playground_bucket_name", default_var="undefined")
intermittent_data_location = config.get_environment_variable("stage3_output_intermittent_location", default_var="undefined")
stage3_task_role = config.get_environment_variable("stage3_task_role", default_var="undefined")
join_bridge_script = config.get_bridge_script("join_bridge_script.py")
#default_args={ "donot_pickle": "True" }
@dag(dag_id='chore_task_group_stage3', schedule=None, catchup=False)
def pipeline():
@task
def lag_tasks_with_filter(
param_one,
param_two,
param_three,
param_four,
param_five,
lag_task_role,
intermittent_data_location,
playground_bucket
):
return GlueJobOperator(
task_id=f"create_task_{param_one}_{param_two}_w{param_three}param_four{param_four}param_five{param_five}",
job_name=config.generate_job_name(f"param_four{param_four}-weeks{param_three}-" + f"filter{param_five}-job-{param_one}-{param_two}"),
script_location=config.get_bridge_script("lags_bridge_script.py"),
iam_role_name=lag_task_role,
script_args={
"--lagWithCatPath": f"s3://{intermittent_data_location}/output/with_cat" + f"/param_one={param_one}/param_two={param_two}",
"--rawDataInputPath": f"s3://{playground_bucket}/output/oneyear" + f"/param_one={param_one}/param_two={param_two}/",
"--numberOfLagWeeks": str(param_four),
"--windowSizeWeeks": str(param_three),
"--filterCol": param_five,
"--taskId": f"create_task_{param_one}_{param_two}_w{param_three}param_four{param_four}param_five{param_five}",
},
create_job_kwargs={
"WorkerType": "G.2X",
"NumberOfWorkers": 5,
"GlueVersion": "4.0",
"DefaultArguments": {
"--job-language": "python",
"--enable-job-insights": "true",
"--enable-metrics": "true",
"--enable-auto-scaling": "true",
"--enable-observability-metrics": "true",
"--TempDir": f"s3://{config.get_environment_variable('glue_tmp_dir_location', default_var='undefined')}",
"--extra-py-files": config.get_asset_file_location(
"ctc_telligence_forecasting_data_product-0.0.1-py3-none-any.whl"
),
"--enable-spark-ui": "true",
"--spark-event-logs-path": f"s3://{config.get_environment_variable('glue_spark_ui_logs_location', default_var='undefined')}",
},
},
update_config=True,
)
ts = DummyOperator(task_id='start')
te = DummyOperator(task_id='end')
t1 = lag_tasks_with_filter.partial(lag_task_role=stage3_task_role, intermittent_data_location=intermittent_data_location, playground_bucket=playground_bucket).expand(param_one=params_one, param_two=params_two, param_three=params_three, param_four=params_four, param_five=param_five)
# setting dependencies
ts >> t1 >> te
pipeline()
</code></pre>
<p>When I remove the <code>return</code>, the DAG parses, but the Glue jobs are never created or triggered.</p>
<p>I want to keep <code>@task</code> decorator syntax since it allows for creating mapped instances with <code>expand()</code>.</p>
|
<python><amazon-web-services><airflow><mwaa>
|
2024-08-07 09:46:23
| 1
| 1,509
|
marcin2x4
|
78,842,905
| 3,885,696
|
Standardization of a 1d vector in torch
|
<p>I am working with torch dataset of 1d signals, and would like to standardize the vectors to mean 0, std 1 before further processing the data. If I would have dealt with an image, I could use torchvision.transforms:</p>
<pre class="lang-py prettyprint-override"><code> import torchvision.transforms as transforms
import torch
data_2d = torch.rand(1, 100,100)
normalized_data_2d = transforms.Normalize(mean = (data_2d.mean(),), std = (data_2d.std(),))(data_2d)
print(f'mean: {normalized_data_2d.mean()} ~ 0 , std: {normalized_data_2d.std()} ~ 1, ok')
</code></pre>
<p>I get:<br />
<code>mean: -4.1373571235681084e-08 ~ 0 , std: 0.9999999403953552 ~ 1, ok</code></p>
<p>When I use 1d data in the same manner:</p>
<pre class="lang-py prettyprint-override"><code> data_1d = torch.rand(100)
normalized_data_1d = transforms.Normalize(mean = (data_1d.mean(),), std = (data_1d.std(),))(data_1d)
</code></pre>
<p>I get <code>TypeError: Tensor is not a torch image</code> error:</p>
<p>Is there an elegant way to standardize 1d vectors using torch transform?</p>
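<p><code>transforms.Normalize</code> is written for image tensors of shape <code>(C, H, W)</code>, which is why a 1-D tensor is rejected; for a plain signal, the standardization is just tensor arithmetic (a sketch):</p>

```python
import torch

data_1d = torch.rand(100)

# Standardize to mean 0, std 1 without torchvision:
normalized = (data_1d - data_1d.mean()) / data_1d.std()

print(normalized.mean().item(), normalized.std().item())  # ~0, ~1
```

<p>If a transform object is needed for a dataset pipeline, the same two lines can be wrapped in a small callable class.</p>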
|
<python><torch>
|
2024-08-07 09:36:11
| 1
| 1,954
|
Nir
|
78,842,887
| 12,397,370
|
Map RGB colors from a heatmap back to their original scale values
|
<p>I have the following heatmap</p>
<p><a href="https://i.sstatic.net/bZejlPlU.webp" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZejlPlU.webp" alt="heatmap" /></a></p>
<p>and all I know is that the mapping between the colors and the original values is as follows: -100 is blue, 0 is black, and +10 is red. The mapping is assumed to be linear. White squares can be ignored (i.e nan).</p>
<p>I have managed to extract the heatmap as an SVG (the one I'm showing here is a WEBP because stackoverflow does not allow SVG) and I therefore have the RGB color of each square.</p>
<p>How could I map each RGB color back to its original value, or at least a good approximation?</p>
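<p>Since the mapping is stated to be linear through black, one possible sketch, assuming a diverging blue/black/red map in which only one of the R and B channels is nonzero at a time (this is an assumption, not something given in the question):</p>

```python
def rgb_to_value(r, g, b):
    # Assumed linear diverging map with 0-255 channels:
    # (0, 0, 255) -> -100, (0, 0, 0) -> 0, (255, 0, 0) -> +10.
    if b > r:
        return -100.0 * b / 255.0
    if r > b:
        return 10.0 * r / 255.0
    return 0.0

print(rgb_to_value(0, 0, 255))  # -100.0
print(rgb_to_value(255, 0, 0))  # 10.0
```

<p>Real colormaps rarely vary a single channel linearly, so a more robust approach is to sample the actual colormap at known values and invert it by nearest-neighbour lookup in RGB space.</p>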
|
<python><matplotlib><colors>
|
2024-08-07 09:31:51
| 0
| 485
|
Gabriel Cia
|
78,842,630
| 13,840,270
|
Add column by accessing item in Array based on ID without SQL expression
|
<p>I have the following data in a dataframe <code>df</code>:</p>
<pre class="lang-json prettyprint-override"><code>{
"data":[
{
"id":"a",
"val":1
},
{
"id":"b",
"val":2
}
]
}
</code></pre>
<p>I would now like to add a new column "test" for the id <code>'b'</code> containing its value <code>2</code>.</p>
<p>I know I can do this with:</p>
<pre class="lang-py prettyprint-override"><code>(
df
.withColumn(
"test",
F.expr("filter(data,x->x.id=='b')")[0]["val"]
)
.show()
)
</code></pre>
<p>Yielding the desired:</p>
<pre><code>+----------------+----+
| data |test|
+----------------+----+
|[{a, 1}, {b, 2}]| 2 |
+----------------+----+
</code></pre>
<p>Could this be achieved in a more "native" way (not using SQL)? I know that <code>F.col("data")[1]["val"]</code> can be used if I go by index rather than ID as an example.</p>
|
<python><apache-spark><pyspark>
|
2024-08-07 08:35:02
| 2
| 3,215
|
DuesserBaest
|
78,842,605
| 7,395,592
|
16-byte offset in MPEG-4 video export from DICOM file
|
<p><strong>Short version</strong>: Where is the 16-byte offset coming from when exporting an MPEG-4 video stream from a DICOM file with <a href="https://pydicom.github.io/" rel="nofollow noreferrer"><code>Pydicom</code></a> via the following code? (And, bonus question, is it always a 16-byte offset?)</p>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path
import pydicom
in_dcm_filename: str = ...
out_mp4_filename: str = ...
ds = pydicom.dcmread(in_dcm_filename)
Path(out_mp4_filename).write_bytes(ds.PixelData[16:]) # 16-byte offset necessary
</code></pre>
<p><em>For reproducibility, one can use e.g. <a href="https://www.dropbox.com/s/0jmh0ob4vftd9o0/test_720.dcm?dl=1" rel="nofollow noreferrer">this DICOM file</a> which I found in <a href="https://groups.google.com/g/comp.protocols.dicom/c/JU-fdVgReV8/m/33VKVjQeRp0J" rel="nofollow noreferrer">this old discussion</a> on Google Groups (content warning: the video shows the open brain in a neurosurgical intervention).</em></p>
<h2>Long version</h2>
<p>I have a number of DICOM files containing surgical MPEG-4 video streams (transfer syntax UID <em>1.2.840.10008.1.2.4.102 – MPEG-4 AVC/H.264 High Profile / Level 4.1</em>). I wanted to export the video streams from the DICOM files for easier handling in downstream tasks.</p>
<p>After a bit of googling, I found the <a href="https://forum.dcmtk.org/viewtopic.php?t=4447" rel="nofollow noreferrer">following discussion</a>, suggesting the use of <code>dcmdump</code> from <a href="https://dicom.offis.de/dcmtk.php.en" rel="nofollow noreferrer"><code>DCMTK</code></a>, as follows (which I was able to reproduce):</p>
<ul>
<li>Run <code>dcmdump +P 7fe0,0010 <in_dcm_filename> +W <out_folder></code>.</li>
<li>From the resulting two files in <code><out_folder></code>, <code>mpeg4.dcm.0.raw</code> and <code>mpeg4.dcm.1.raw</code>, discard the first one, which has a size of 0 bytes, and keep the second one (potentially changing its suffix to <code>.mp4</code>), which is a regular, playable video file.</li>
</ul>
<p>From what I saw in the <code>dcmdump</code> command, I concluded this was just a raw dump of tag <code>7fe0,0010</code> (which is the <em>Pixel Data</em> attribute)¹, so I thought I could reproduce this with <code>Pydicom</code>. My first attempt was using <code>Path(out_mp4_filename).write_bytes(ds.PixelData)</code> (see code sample above for complete details); however, I ended up with a file that could not be played. I then compared a hex dump of the <code>dcmdump</code> result and of the <code>Pydicom</code> result:</p>
<pre class="lang-bash prettyprint-override"><code>$ hd ./dcmdump.mp4 | head
00000000 00 00 00 20 66 74 79 70 69 73 6f 6d 00 00 02 00 |... ftypisom....|
00000010 69 73 6f 6d 69 73 6f 32 61 76 63 31 6d 70 34 31 |isomiso2avc1mp41|
00000020 00 00 00 08 66 72 65 65 00 ce 97 1d 6d 64 61 74 |....free....mdat|
...
$ hd ./pydicom.mp4 | head
00000000 fe ff 00 e0 00 00 00 00 fe ff 00 e0 3e bc ce 00 |............>...|
00000010 00 00 00 20 66 74 79 70 69 73 6f 6d 00 00 02 00 |... ftypisom....|
00000020 69 73 6f 6d 69 73 6f 32 61 76 63 31 6d 70 34 31 |isomiso2avc1mp41|
...
</code></pre>
<p>From this I noticed that my <code>Pydicom</code> export contained 16 preceding extra bytes. Once I removed them via <code>Path(out_mp4_filename).write_bytes(ds.PixelData[16:])</code>, I got the exact same, playable video export as with <code>dcmdump</code>.</p>
<p>So, again, my question is: Where do these 16 extra bytes come from, what is their meaning, and am I safe simply removing them?</p>
<p><sub>¹) <strong>Update:</strong> In hindsight, I should have gotten suspicious because of the <em>two</em> files that were created by <code>dcmdump</code>.</sub></p>
|
<python><dicom><pydicom>
|
2024-08-07 08:29:45
| 1
| 6,750
|
simon
|
78,842,466
| 6,930,340
|
Split a polars DataFrame into multiple chunks with groupby
|
<p>Consider the following <code>pl.DataFrame</code>s:</p>
<pre><code>import datetime
import polars as pl
df_orig = pl.DataFrame(
{
"symbol": [*["A"] * 10, *["B"] * 8],
"date": [
*pl.datetime_range(
start=datetime.date(2024, 1, 1),
end=datetime.date(2024, 1, 10),
eager=True,
),
*pl.datetime_range(
start=datetime.date(2024, 1, 1),
end=datetime.date(2024, 1, 8),
eager=True,
),
],
"data": [*range(10), *range(8)],
}
)
df_helper = pl.DataFrame({"symbol": ["A", "B"], "start_idx": [[0, 5], [0, 3]]})
chunk_size = 5
with pl.Config(tbl_rows=30):
print(df_orig)
print(df_helper)
shape: (18, 3)
┌────────┬─────────────────────┬──────┐
│ symbol ┆ date ┆ data │
│ --- ┆ --- ┆ --- │
│ str ┆ datetime[μs] ┆ i64 │
╞════════╪═════════════════════╪══════╡
│ A ┆ 2024-01-01 00:00:00 ┆ 0 │
│ A ┆ 2024-01-02 00:00:00 ┆ 1 │
│ A ┆ 2024-01-03 00:00:00 ┆ 2 │
│ A ┆ 2024-01-04 00:00:00 ┆ 3 │
│ A ┆ 2024-01-05 00:00:00 ┆ 4 │
│ A ┆ 2024-01-06 00:00:00 ┆ 5 │
│ A ┆ 2024-01-07 00:00:00 ┆ 6 │
│ A ┆ 2024-01-08 00:00:00 ┆ 7 │
│ A ┆ 2024-01-09 00:00:00 ┆ 8 │
│ A ┆ 2024-01-10 00:00:00 ┆ 9 │
│ B ┆ 2024-01-01 00:00:00 ┆ 0 │
│ B ┆ 2024-01-02 00:00:00 ┆ 1 │
│ B ┆ 2024-01-03 00:00:00 ┆ 2 │
│ B ┆ 2024-01-04 00:00:00 ┆ 3 │
│ B ┆ 2024-01-05 00:00:00 ┆ 4 │
│ B ┆ 2024-01-06 00:00:00 ┆ 5 │
│ B ┆ 2024-01-07 00:00:00 ┆ 6 │
│ B ┆ 2024-01-08 00:00:00 ┆ 7 │
└────────┴─────────────────────┴──────┘
shape: (2, 2)
┌────────┬───────────┐
│ symbol ┆ start_idx │
│ --- ┆ --- │
│ str ┆ list[i64] │
╞════════╪═══════════╡
│ A ┆ [0, 5] │
│ B ┆ [0, 3] │
└────────┴───────────┘
</code></pre>
<p>Now, I need to split the dataframe into two chunks of length 5 (<code>chunk_size</code>), grouped by the <code>symbol</code> column. The lists in column <code>start_idx</code> indicate the rows at which each chunk starts within its group. That is, group A will be split into two chunks of length 5 starting at rows 0 and 5, while the chunks of group B start at rows 0 and 3.
Finally, all chunks need to be concatenated on <code>axis=0</code>, with a new column <code>split_idx</code> indicating which split each row comes from.</p>
<p>Here's what I am looking for:</p>
<pre><code>shape: (20, 4)
┌───────────┬────────┬─────────────────────┬──────┐
│ split_idx ┆ symbol ┆ date                ┆ data │
│ ---       ┆ ---    ┆ ---                 ┆ ---  │
│ i64       ┆ str    ┆ datetime[μs]        ┆ i64  │
╞═══════════╪════════╪═════════════════════╪══════╡
│ 0         ┆ A      ┆ 2024-01-01 00:00:00 ┆ 0    │
│ 0         ┆ A      ┆ 2024-01-02 00:00:00 ┆ 1    │
│ 0         ┆ A      ┆ 2024-01-03 00:00:00 ┆ 2    │
│ 0         ┆ A      ┆ 2024-01-04 00:00:00 ┆ 3    │
│ 0         ┆ A      ┆ 2024-01-05 00:00:00 ┆ 4    │
│ 0         ┆ B      ┆ 2024-01-01 00:00:00 ┆ 0    │
│ 0         ┆ B      ┆ 2024-01-02 00:00:00 ┆ 1    │
│ 0         ┆ B      ┆ 2024-01-03 00:00:00 ┆ 2    │
│ 0         ┆ B      ┆ 2024-01-04 00:00:00 ┆ 3    │
│ 0         ┆ B      ┆ 2024-01-05 00:00:00 ┆ 4    │
│ 1         ┆ A      ┆ 2024-01-06 00:00:00 ┆ 5    │
│ 1         ┆ A      ┆ 2024-01-07 00:00:00 ┆ 6    │
│ 1         ┆ A      ┆ 2024-01-08 00:00:00 ┆ 7    │
│ 1         ┆ A      ┆ 2024-01-09 00:00:00 ┆ 8    │
│ 1         ┆ A      ┆ 2024-01-10 00:00:00 ┆ 9    │
│ 1         ┆ B      ┆ 2024-01-04 00:00:00 ┆ 3    │
│ 1         ┆ B      ┆ 2024-01-05 00:00:00 ┆ 4    │
│ 1         ┆ B      ┆ 2024-01-06 00:00:00 ┆ 5    │
│ 1         ┆ B      ┆ 2024-01-07 00:00:00 ┆ 6    │
│ 1         ┆ B      ┆ 2024-01-08 00:00:00 ┆ 7    │
└───────────┴────────┴─────────────────────┴──────┘
</code></pre>
<p>Keep in mind that the lists in column <code>start_idx</code> may be of variable length for each individual row. The length of each list determines the number of chunks for each group.</p>
|
<python><python-polars>
|
2024-08-07 07:52:59
| 2
| 5,167
|
Andi
|
78,842,293
| 298,847
|
Python @overload using union types causes overlapped function signature errors
|
<p>I would like to write the following overloaded Python function:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, TypeVar, overload
_T1 = TypeVar('_T1')
_T2 = TypeVar('_T2')
_T3 = TypeVar('_T3')
@overload
def parse_as(ty: type[_T1] | type[_T2], s: bytes) -> _T1 | _T2:
...
@overload
def parse_as(ty: type[_T1] | type[_T2] | type[_T3], s: bytes) -> _T1 | _T2 | _T3:
...
def parse_as(ty: Any, s: bytes) -> Any:
raise NotImplementedError()
</code></pre>
<p>The goal of <code>parse_as()</code> is to attempt to parse the input bytes as the given types and, if successful, return a value of the given types. However this gives the following mypy error:</p>
<pre><code>error: Overloaded function signature 2 will never be matched: signature 1's parameter type(s) are the same or broader
</code></pre>
<p>Is there any way to express the type of <code>parse_as()</code>?</p>
<p>Aside: in my particular case all the <code>TypeVar</code>s share the same class as their <code>bound</code>, if that matters.</p>
|
<python><python-typing><mypy>
|
2024-08-07 07:16:38
| 0
| 9,059
|
tibbe
|
78,842,246
| 1,008,698
|
python socket recv timeout while Wireshark shows received UDP packet (windows)
|
<p>I have the following diagram:</p>
<blockquote>
<p>Send: <strong>client</strong> --(tx_socket)--> <strong>server</strong></p>
<p>Reply: <strong>server</strong> -- (rx_socket) --> <strong>client</strong></p>
</blockquote>
<pre><code>def connect(self) -> None:
self.tx_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
self.tx_socket.bind(("", self.tx_src_port))
self.rx_socket = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
self.rx_socket.settimeout(3.0)
self.rx_socket.bind(("", self.rx_dst_port))
</code></pre>
<p>However, I always get timeout error when waiting for reply from server, even when Wireshark shows the reply packet:</p>
<pre><code>self.tx_socket.sendto(command_str, (self.dst_addr, self.tx_dst_port))
self.rx_socket.recv(4096) # <--- TimeoutError
</code></pre>
<p>Somehow, this can be solved by sending a character from client to server using rx_socket in advance:</p>
<pre><code>self.rx_socket.sendto("\n".encode('ascii'), (self.dst_addr, self.tx_dst_port))
</code></pre>
<p>But I do not understand why this happens. Also, the same issue does not exist on Linux.</p>
<p>Can you give me an explanation? Any reply is greatly appreciated.</p>
|
<python><sockets>
|
2024-08-07 07:01:07
| 1
| 2,660
|
Tran Ngu Dang
|
78,842,133
| 5,109,026
|
"gcloud storage cp t.txt gs://my.bucket.name/" gives error: ModuleNotFoundError: No module named 'google.auth'
|
<p>I'm trying to upload a file to a bucket using:</p>
<pre><code>gcloud storage cp t.txt gs://my.bucket.name/
</code></pre>
<p>But there's an error:</p>
<pre><code>Copying file://t.txt to gs://adhoc.textra.me/t.txt
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/opt/homebrew/Cellar/python@3.12/3.12.3/Frameworks/Python.framework/Versions/3.12/lib/python3.12/multiprocessing/spawn.py", line 122, in spawn_main
exitcode = _main(fd, parent_sentinel)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.3/Frameworks/Python.framework/Versions/3.12/lib/python3.12/multiprocessing/spawn.py", line 132, in _main
self = reduction.pickle.load(from_parent)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/share/google-cloud-sdk/lib/googlecloudsdk/command_lib/storage/tasks/task_graph_executor.py", line 37, in <module>
from googlecloudsdk.command_lib.storage import encryption_util
File "/opt/homebrew/share/google-cloud-sdk/lib/googlecloudsdk/command_lib/storage/encryption_util.py", line 28, in <module>
from googlecloudsdk.command_lib.storage import hash_util
File "/opt/homebrew/share/google-cloud-sdk/lib/googlecloudsdk/command_lib/storage/hash_util.py", line 25, in <module>
from googlecloudsdk.command_lib.storage import fast_crc32c_util
File "/opt/homebrew/share/google-cloud-sdk/lib/googlecloudsdk/command_lib/storage/fast_crc32c_util.py", line 32, in <module>
from googlecloudsdk.command_lib import info_holder
File "/opt/homebrew/share/google-cloud-sdk/lib/googlecloudsdk/command_lib/info_holder.py", line 45, in <module>
from googlecloudsdk.core.credentials import store as c_store
File "/opt/homebrew/share/google-cloud-sdk/lib/googlecloudsdk/core/credentials/store.py", line 34, in <module>
from googlecloudsdk.api_lib.auth import external_account as auth_external_account
File "/opt/homebrew/share/google-cloud-sdk/lib/googlecloudsdk/api_lib/auth/external_account.py", line 24, in <module>
from googlecloudsdk.core.credentials import creds as c_creds
File "/opt/homebrew/share/google-cloud-sdk/lib/googlecloudsdk/core/credentials/creds.py", line 33, in <module>
from google.auth import compute_engine as google_auth_compute_engine
ModuleNotFoundError: No module named 'google.auth'
Completed files 0/1 | 0B/16.5kiB
</code></pre>
<p>It hangs at that point and doesn't upload.</p>
<p>This is on Macos with the <code>google-cloud-sdk</code> installed via Brew:</p>
<pre><code>$ brew info google-cloud-sdk
==> google-cloud-sdk: 479.0.0 (auto_updates)
https://cloud.google.com/sdk/
Installed
/opt/homebrew/Caskroom/google-cloud-sdk/479.0.0 (132B)
From: https://github.com/Homebrew/homebrew-cask/blob/HEAD/Casks/g/google-cloud-sdk.rb
==> Name
Google Cloud SDK
==> Description
Set of tools to manage resources and applications hosted on Google Cloud
==> Artifacts
google-cloud-sdk/install.sh (Installer)
google-cloud-sdk/bin/gsutil (Binary)
google-cloud-sdk/completion.bash.inc -> /opt/homebrew/etc/bash_completion.d/google-cloud-sdk (Binary)
google-cloud-sdk/bin/bq (Binary)
google-cloud-sdk/bin/docker-credential-gcloud (Binary)
google-cloud-sdk/completion.zsh.inc -> /opt/homebrew/share/zsh/site-functions/_google_cloud_sdk (Binary)
google-cloud-sdk/bin/gcloud (Binary)
google-cloud-sdk/bin/git-credential-gcloud.sh -> git-credential-gcloud (Binary)
==> Caveats
To add gcloud components to your PATH, add this to your profile:
for bash users
source "$(brew --prefix)/share/google-cloud-sdk/path.bash.inc"
for zsh users
source "$(brew --prefix)/share/google-cloud-sdk/path.zsh.inc"
source "$(brew --prefix)/share/google-cloud-sdk/completion.zsh.inc"
for fish users
source "$(brew --prefix)/share/google-cloud-sdk/path.fish.inc"
==> Analytics
install: 11,361 (30 days), 33,066 (90 days), 118,204 (365 days)
</code></pre>
<p>The SDK has been updated to the latest version:</p>
<pre><code>$
~/delicious/textra (git) $ gcloud -v
Google Cloud SDK 487.0.0
beta 2024.08.06
bq 2.1.7
core 2024.08.06
gcloud-crc32c 1.0.0
gsutil 5.30
</code></pre>
<p>Uploading using <code>gsutil</code> works fine, but I want to use <code>gcloud storage cp ...</code> so I can specify a configuration, e.g. <code>gcloud --configuration=dev storage cp ...</code>.</p>
<p><strong>Edit</strong>: I've tried completely uninstalling and deleting config files and reinstalling, with the same result.</p>
<p><strong>Edit</strong>: it turns out unsetting the <code>CLOUDSDK_PYTHON_SITEPACKAGES</code> environment variable fixed the issue!</p>
|
<python><google-cloud-platform><gcloud>
|
2024-08-07 06:34:43
| 1
| 971
|
Theo Lassonder
|
78,841,959
| 1,447,953
|
Iterate over dependent iterators together
|
<p>So here is my scenario:</p>
<pre><code>def gen1(a):
for item in a:
yield item
def gen2(a):
for item in a:
yield item**2
a = range(10)
g1 = gen1(a)
g2 = gen2(g1)
for item1, item2 in zip(g2, g1):
print(item1, item2)
</code></pre>
<p>Output:</p>
<pre><code>0 1
4 3
16 5
36 7
64 9
</code></pre>
<p>But I want to get:</p>
<pre><code>0 0
1 1
4 2
9 3
16 4
25 5
36 6
49 7
64 8
81 9
</code></pre>
<p>Basically I have several generators that feed through each other doing some streaming transformations on data. However, I then want to go and write some test functions that can look at the input and output streams simultaneously so I can compare them and make sure the results are what I expect. But, I cannot iterate over them in the way I tried above because the iterations get triggered for the bottom iterator twice per loop, messing it all up.</p>
<p>Is there some clever thing I can do to duplicate the input or something to achieve what I want? I suppose I could instantiate two copies of the bottom-level iterator, but my real one is rather more complicated to create than the <code>range(10)</code> in this example, so it would be better to be able to copy it somehow. Not sure what the best approach is.</p>
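<p>One common approach (not the only one) is <code>itertools.tee</code>, which duplicates an iterator so two consumers can advance independently, buffering items as needed:</p>

```python
import itertools

def gen1(a):
    for item in a:
        yield item

def gen2(a):
    for item in a:
        yield item ** 2

g1 = gen1(range(10))
# tee() gives two independent views of g1; g1 itself must not be
# advanced directly after this point.
g1_for_g2, g1_copy = itertools.tee(g1)
g2 = gen2(g1_for_g2)

for squared, original in zip(g2, g1_copy):
    print(squared, original)  # 0 0, 1 1, 4 2, ...
```

<p>Note that <code>tee</code> buffers every item one view has consumed but the other has not, so if the two consumers drift far apart it can use significant memory.</p>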
|
<python><python-itertools>
|
2024-08-07 05:39:12
| 0
| 2,974
|
Ben Farmer
|
78,841,908
| 1,870,400
|
Why python wiremock doesn't respect precision and returns numbers as strings?
|
<p>I am trying basic mock example on decimals using wiremock and I couldn't get it to work. And I dont see any example online either. Below is the response section of my json file</p>
<pre><code> "response": {
"status": 200,
"jsonBody": {
"weekly_rate": "{{randomDecimal lower=100.00 upper=800.00 precision=2}}",
"max_amount": "{{randomDecimal lower=5000.00 upper=20000.00 precision=2}}"
},
"transformers": ["response-template"]
}
</code></pre>
<p>and I get this response</p>
<pre><code>{
"weekly_rate": "220.87939761296946",
"max_amount": "8860.916637737479"
}
</code></pre>
<p>Two issues</p>
<ul>
<li>The numbers have more than 2 decimal places so the precision is not
respected.</li>
<li>The numbers are strings instead of numeric</li>
</ul>
|
<python><wiremock>
|
2024-08-07 05:13:57
| 1
| 6,414
|
user1870400
|
78,841,671
| 3,624,000
|
Fetch Nth Keys from a Struct in Python
|
<p>Here is my JSON, which has nested structs and lists in it. I want to extract all the nth keys from the JSON after parsing through all of them. I am doing this in Python and was able to solve it with a recursive method, but it would help to solve it with an iterative method. Any help here in Python is really appreciated!</p>
<p>Here is the logic:</p>
<ol>
<li>Traverse every key in the json. If the key is not <strong>dict</strong> or <strong>list</strong>, add them to a list called <strong>nthkeys_list</strong></li>
<li>If the key is a <strong>dict</strong>, again parse through dict till you find the nth key.</li>
<li>If the key is a <strong>list</strong>, navigate through each json from the list till you find the nth key.</li>
</ol>
<p><strong>NOTE</strong>: The <strong>nthkeys_list</strong> should not contain keys whose values are of type <strong>dict or list</strong>, but only keys with values of other types.</p>
<p><strong>Input JSON:</strong></p>
<pre><code>
key:{
List2:[
{
keyN1:"Value1",
keyN2:"Value2"
}
],
List3:[
{
keyN3:"Value3",
keyN4:"Value4"
},
{
keyN5:"Value5",
keyN6:"Value6"
}
],
colid:"999999999999",
md:{
        keyN6:"dev",
keyN7:false
},
prrrid:null,
},
mesd:"d5ad787e-1ee9-4785-a02f-719fe4531748",
dacntrl:{
addds:0,
keyNote:{
cn:"dev",
ie:false
}
},
ci:620126060707,
planss:{
planss:{
processid:null,
mAddOnIds:[
],
planrenew:true,
},
uflag:false
}
}
</code></pre>
<p><strong>Expected Output</strong>
nthkeys_list = [keyN1, keyN2, keyN3, keyN4, keyN5, keyN6, colid, keyN7, keyN8, prrrid, mesd, addds, cn, ie, ci, processid, planrenew, uflag]</p>
|
<python><arrays><json>
|
2024-08-07 02:59:20
| 1
| 311
|
user3624000
|
78,841,576
| 10,737,147
|
strange matplotlib limits when aspect ratio is fixed
|
<p>I'm facing a strange behaviour from matplotlib when an equal aspect ratio is preserved.
For starters, this is the minimal code sample to show this behaviour, first with the correct behaviour and second with the crooked behaviour:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
# Generate some random data
x = np.arange(0, 10, 0.1)
y = np.random.randn(len(x))
# Create a plot
fig, ax = plt.subplots()
# ax.set_aspect('equal', 'datalim')
# Set x-axis limits
ax.set_xlim(2, 5)
# Print the x and y limits
print(f"x-axis limits before plotting: {ax.get_xlim()}")
print(f"y-axis limits before plotting: {ax.get_ylim()}")
ax.plot(x, y*10)
# Print the x and y limits
print(f"x-axis limits after plotting: {ax.get_xlim()}")
print(f"y-axis limits after plotting: {ax.get_ylim()}")
# Set x-axis limits
ax.set_xlim(2, 5)
# Print the x and y limits
print(f"x-axis limits after setting xlim: {ax.get_xlim()}")
print(f"y-axis limits after setting xlim: {ax.get_ylim()}")
# Show the plot
plt.show()
</code></pre>
<p>So this prints</p>
<pre><code>x-axis limits before plotting: (2.0, 5.0)
y-axis limits before plotting: (0.0, 1.0)
x-axis limits after plotting: (2.0, 5.0)
y-axis limits after plotting: (-23.30742047433786, 21.093948480431905)
x-axis limits after setting xlim: (2.0, 5.0)
y-axis limits after setting xlim: (-23.30742047433786, 21.093948480431905)
</code></pre>
<p>which correctly corresponds to the plot being generated:
<a href="https://i.sstatic.net/JpgdIt2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpgdIt2C.png" alt="enter image description here" /></a></p>
<p>Now the strange thing happens when I uncomment the <code>ax.set_aspect('equal', 'datalim')</code> line.</p>
<p>It prints the data below:</p>
<pre><code>x-axis limits before plotting: (2.0, 5.0)
y-axis limits before plotting: (0.0, 1.0)
x-axis limits after plotting: (2.0, 5.0)
y-axis limits after plotting: (-31.012258207723225, 22.633044899779353)
x-axis limits after setting xlim: (2.0, 5.0)
y-axis limits after setting xlim: (-31.012258207723225, 22.633044899779353)
</code></pre>
<p>But it shows a graph with completely different limits:
<a href="https://i.sstatic.net/cwwnaxSg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cwwnaxSg.png" alt="enter image description here" /></a></p>
<p>My questions are:</p>
<ol>
<li>is there anyway to get matplotlib to return the correct x,y limits</li>
<li>is there anyway to keep AR by changing only y data limits keeping manually set xlimits untouched ?</li>
</ol>
<p><strong>EDIT:</strong>
@jodyKlymac pointed out that <code>fig.draw_without_rendering()</code> can be used to get the correct limits, so that works.</p>
<p>But after plotting, I tried changing the xlim on line number 28. Matplotlib ignores this <code>set_xlim</code>. Why, and how can I fix this? I want to see the final image with x limits going from 2 to 5. It doesn't matter how cropped the y axis is, but I cannot use <code>adjustable='box'</code> because I need the plotting bbox large enough.</p>
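<p>The tip from the EDIT can be turned into a small self-contained check (the Agg backend is assumed here so it runs headless): with <code>adjustable='datalim'</code> the aspect-ratio bookkeeping only happens at draw time, so the limits must be read back after a draw.</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(0, 10, 0.1)
fig, ax = plt.subplots()
ax.set_aspect("equal", "datalim")
ax.plot(x, np.sin(x) * 10)
ax.set_xlim(2, 5)

# Force a draw so the datalim-adjusted limits are reconciled with the
# aspect ratio before reading them back.
fig.draw_without_rendering()
print(ax.get_xlim(), ax.get_ylim())
```

<p>With <code>'datalim'</code>, matplotlib is free to change either axis's data limits to satisfy the aspect ratio, which is why a manually set xlim may not survive the draw.</p>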
|
<python><matplotlib><aspect-ratio>
|
2024-08-07 01:57:32
| 1
| 437
|
XYZ
|
78,841,431
| 1,103,734
|
Correctly reimplementing `defaultdict`
|
<p>I want to re-implement <code>defaultdict</code> myself. In principle, this feels like a really simple question, and one I might give as a "homework assignment", but the more I dug in, the more complicated it got. My first attempt looked something like this</p>
<pre class="lang-py prettyprint-override"><code>class DefaultDict(dict):
def __init__(self, factory):
self._factory = factory
super().__init__()
def __getitem__(self, key):
missing = object()
super_value = super().get(key, missing)
if super_value is missing:
return self._factory()
return super_value
</code></pre>
<p>However, this does not pass even the most trivial test suite of</p>
<pre class="lang-py prettyprint-override"><code>x = DefaultDict(lambda: 'no value :(')
y = collections.defaultdict(lambda: 'no value :(')
x['a'] = 'hello'
y['a'] = 'hello'
print(x['a'] == y['a']) # True
print(x.get('a') == y.get('a')) # True
print(x['b'] == y['b']) # True
print(x.get('b') == y.get('b')) # False; x.get('b') is None
</code></pre>
<p>After some research, I found <code>__missing__</code>, but an implementation using it fails the above test suite in exactly the same way</p>
<pre><code>class DefaultDict(dict):
def __init__(self, factory):
self._factory = factory
super().__init__()
def __missing__(self, key):
return self._factory()
</code></pre>
<p>It seems to me that the only way to do this completely is to re-implement <code>dict</code> methods like <code>get</code> and <code>pop</code>, which I don't think should be necessary. I glanced at the <a href="https://github.com/python/cpython/blob/b5e142ba7c2063efe9bb8065c3b0bad33e2a9afa/Modules/_collectionsmodule.c#L2287-L2299" rel="nofollow noreferrer">C implementation of defaultdict</a>, and as far as I can tell, it only implements <code>__missing__</code> as well, so I highly doubt this is required.</p>
<p>What would be the most "pythonic" way to re-implement <code>defaultdict</code>?</p>
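<p>For what it's worth, a minimal sketch that passes the test suite above: the detail both attempts miss is that <code>__missing__</code> must <em>insert</em> the generated default into the dict, which is also why <code>y.get('b')</code> returns the default only after <code>y['b']</code> has been looked up (<code>dict.get</code> itself never calls <code>__missing__</code>).</p>

```python
class DefaultDict(dict):
    def __init__(self, factory):
        super().__init__()
        self._factory = factory

    def __missing__(self, key):
        # Store the default so that subsequent get()/pop()/"in" checks
        # see it, matching collections.defaultdict semantics.
        value = self._factory()
        self[key] = value
        return value
```

<p>With this version, no <code>get</code>/<code>pop</code> re-implementation is needed: they behave correctly because the defaults are actually in the dict once a missing key has been accessed via <code>[]</code>.</p>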
|
<python><dictionary><defaultdict>
|
2024-08-07 00:23:56
| 1
| 4,846
|
ollien
|
78,841,378
| 16,171,413
|
A more elegant approach to writing Django’s unit tests
|
<p>I am currently writing tests using Django’s unit tests (based on Python standard library module: unittest). I have written this test for my Contact model which passes:</p>
<pre><code>class ContactTestCase(TestCase):
def setUp(self):
"""Create model objects."""
Contact.objects.create(
name='Jane Doe',
email='janedoe@gmail.com',
phone='+2348123940567',
subject='Sample Subject',
message='This is my test message for Contact object.'
)
def test_user_can_compose_message(self):
""" Test whether a user can compose a messsage in the contact form."""
test_user = Contact.objects.get(name='Jane Doe')
self.assertEqual(test_user.email, 'janedoe@gmail.com')
self.assertEqual(test_user.phone, '+2348123940567')
self.assertEqual(test_user.subject, 'Sample Subject')
self.assertEqual(test_user.message, 'This is my test message for Contact object.')
</code></pre>
<pre><code>Found 1 test(s).
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
.
----------------------------------------------------------------------
Ran 1 test in 0.005s
OK
Destroying test database for alias 'default'...
</code></pre>
<p>However, I had to use the <code>assertEqual</code> method 4 times in this test (could be more when testing models with more fields). It seems like this doesn't follow the DRY principle.</p>
<p>I know from the <a href="https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertEqual" rel="nofollow noreferrer">docs</a> that <code>assertEqual(first, second, msg=None)</code> tests that first and second are equal. If the values do not compare equal, the test will fail.</p>
<p>Is there a workaround or a more elegant approach to writing such tests?</p>
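<p>One common pattern is a single loop over an expected-values mapping with <code>subTest</code>, so each field is checked and reported separately instead of stopping at the first failed <code>assertEqual</code>. Sketched here with a plain dict so it is self-contained; with Django you would fetch the <code>Contact</code> object in <code>setUp()</code> exactly as in the question:</p>

```python
import unittest


class ContactFieldsTest(unittest.TestCase):
    def test_fields(self):
        # A plain dict stands in for the Django model instance.
        contact = {
            "email": "janedoe@gmail.com",
            "phone": "+2348123940567",
            "subject": "Sample Subject",
        }
        expected = {
            "email": "janedoe@gmail.com",
            "phone": "+2348123940567",
            "subject": "Sample Subject",
        }
        for field, value in expected.items():
            # subTest reports each failing field individually.
            with self.subTest(field=field):
                self.assertEqual(contact[field], value)
```

<p>The same loop works with model attributes via <code>getattr(test_user, field)</code>, keeping the test body a single assertion no matter how many fields the model has.</p>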
|
<python><django><python-unittest><django-testing><django-unittest>
|
2024-08-06 23:47:35
| 1
| 5,413
|
Uchenna Adubasim
|
78,841,259
| 825,227
|
Two sided seaborn barplot not referencing proper y-values
|
<p>Trying to make a two-sided barplot using Seaborn in Python and it doesn't appear to be using the proper levels for one side of the plot.</p>
<p>Data looks like this:</p>
<pre><code> Time Symbol Position Operation Side Price Size
0 2023-07-25 15:09:12.249964 MCDU3 0 0 1 0.7595 -2
1 2023-07-25 15:09:12.255196 MCDU3 1 0 1 0.7594 -7
2 2023-07-25 15:09:12.258575 MCDU3 2 0 1 0.7593 -8
3 2023-07-25 15:09:12.267100 MCDU3 3 0 1 0.7592 -16
4 2023-07-25 15:09:12.270027 MCDU3 4 0 1 0.7591 -14
5 2023-07-25 15:09:12.272276 MCDU3 5 0 1 0.759 -407
6 2023-07-25 15:09:12.274441 MCDU3 6 0 1 0.7589 -14
7 2023-07-25 15:09:12.276581 MCDU3 7 0 1 0.7588 -14
8 2023-07-25 15:09:12.278742 MCDU3 8 0 1 0.7587 -264
9 2023-07-25 15:09:12.280768 MCDU3 9 0 1 0.7586 -15
10 2023-07-25 15:09:12.283094 MCDU3 0 0 0 0.7596 102
11 2023-07-25 15:09:12.286398 MCDU3 1 0 0 0.7597 8
12 2023-07-25 15:09:12.289751 MCDU3 2 0 0 0.7598 8
13 2023-07-25 15:09:12.292842 MCDU3 3 0 0 0.7599 17
14 2023-07-25 15:09:12.295488 MCDU3 4 0 0 0.76 409
15 2023-07-25 15:09:12.297606 MCDU3 5 0 0 0.7601 16
16 2023-07-25 15:09:12.299546 MCDU3 6 0 0 0.7602 16
17 2023-07-25 15:09:12.302073 MCDU3 7 0 0 0.7603 14
18 2023-07-25 15:09:12.305483 MCDU3 8 0 0 0.7604 14
19 2023-07-25 15:09:12.307733 MCDU3 9 0 0 0.7605 658
</code></pre>
<p>Code looks like this--not clear to me why <code>Price</code> levels for first <code>Side</code> are being used instead of the actual <code>Price</code> present for second plot.</p>
<pre><code>import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
f, ax = plt.subplots()
sns.set_color_codes('muted')
# d.loc[d.Side==1,'Size'] = d[d.Side==1].Size*-1
sns.barplot(data = d[d.Side==1], x = 'Size', y = 'Price', color = 'b', orient = 'h')
sns.barplot(data = d[d.Side==0], x = 'Size', y = 'Price', color = 'r', orient = 'h')
</code></pre>
<p>Output:
<a href="https://i.sstatic.net/VC8X5jlt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VC8X5jlt.png" alt="enter image description here" /></a></p>
|
<python><seaborn>
|
2024-08-06 22:44:02
| 1
| 1,702
|
Chris
|
78,841,211
| 2,697,895
|
How to get the dimensions of a toga.Canvas() in Python?
|
<p>In a Python BeeWare project, I want to use toga.Canvas to draw some horizontal rectangles, but I don't know where to get the Canvas width from. I can't find any documentation for the <code>toga.Canvas()</code> dimensions on the internet...</p>
<pre><code> def redraw_canvas(self):
x = 4; y = 4;
for i in range(7):
with self.canvas.context.Fill(color=self.clRowBkg) as fill:
fill.rect(x, y, 100, self.row_height)
y += self.row_height + 4
</code></pre>
|
<python><beeware><toga>
|
2024-08-06 22:22:37
| 1
| 3,182
|
Marus Gradinaru
|
78,841,091
| 12,919,727
|
Option to try flexible install first when using ```conda install -c conda-forge PACKAGE_NAME```
|
<p>I am working from an AWS SageMaker Notebook Instance. For installation of most packages for my project, conda fails to install using frozen install:</p>
<pre><code>conda install -c conda-forge geopandas
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solv
e.
</code></pre>
<p>This takes several minutes... Is there a way to configure conda to try the flexible solve first?</p>
|
<python><conda><amazon-sagemaker>
|
2024-08-06 21:35:33
| 0
| 491
|
McM
|
78,841,040
| 6,357,916
|
ModuleNotFoundError: No module named 'yolov7'
|
<p>I am trying out code in <a href="https://github.com/NirAharon/BoT-SORT" rel="nofollow noreferrer">this repository</a>.
I run one of the commands given in their README, but I am getting a ModuleNotFoundError:</p>
<pre><code>/workspace/BoTSORT$ python3 tools/mc_demo_yolov7.py --weights pretrained/mot17_sbs_S50.pth --source /workspace/datasets/videos/seq1/seq1_florida_00-15-50_00-16-05.mp4 --fuse-score --agnostic-nms --with-reid
Traceback (most recent call last):
File "tools/mc_demo_yolov7.py", line 14, in <module>
from yolov7.models.experimental import attempt_load
ModuleNotFoundError: No module named 'yolov7'
</code></pre>
<p>Here is <code>ls</code> command output:</p>
<pre><code>/workspace/BoTSORT$ ls
LICENSE VideoCameraCorrection assets fast_reid requirements.txt setup.py tools yolov7
README.md YOLOX_outputs datasets pretrained setup.cfg start_container.sh tracker yolox
</code></pre>
<p>So, <code>yolov7</code> folder does indeed exist in the directory <code>/workspace/BoTSORT</code>.</p>
<p>One thing I noticed is that it does not contain <code>__init__.py</code>. So I tried adding one in the <code>/workspace/BoTSORT/yolov7</code> directory with the following content (similar to the existing <code>/workspace/BoTSORT/yolox/__init__.py</code>):</p>
<pre><code>#!/usr/bin/env python3
# -*- coding:utf-8 -*-
from .models import *
__version__ = "0.1.0"
</code></pre>
<p>But it did not help; I am still getting the same error.</p>
<p><strong>PS:</strong></p>
<p>I am running inside a Docker container, but I believe this has no bearing on the error.</p>
|
<python><python-3.x><pip><python-packaging><yolov7>
|
2024-08-06 21:18:40
| 0
| 3,029
|
MsA
|
78,841,010
| 20,591,261
|
Format datetime in polars
|
<p>I have a polars dataframe that contains a <code>datetime</code> column. I want to convert this column to strings in the format <code>%Y%m</code>. For example, all dates in January 2024 should be converted to <code>"202401"</code>.</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime
import polars as pl

data = {
    "ID": [1, 2, 3],
    "dates": [datetime(2024, 1, 2), datetime(2024, 1, 3), datetime(2024, 1, 4)],
}
df = pl.DataFrame(data)
</code></pre>
<p>I have tried using <code>strftime</code>. However, the following <code>AttributeError</code> is raised.</p>
<pre class="lang-py prettyprint-override"><code>AttributeError: 'Expr' object has no attribute 'strftime'
</code></pre>
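<p>For reference, this plain-Python snippet (no polars, just the stdlib) produces exactly the strings I want for the new column, so the question is only how to express it as a polars expression:</p>

```python
from datetime import datetime

dates = [datetime(2024, 1, 2), datetime(2024, 1, 3), datetime(2024, 1, 4)]
# "%Y%m" is the target format: four-digit year followed by two-digit month
formatted = [d.strftime("%Y%m") for d in dates]
print(formatted)  # ['202401', '202401', '202401']
```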
|
<python><datetime><python-polars><strftime>
|
2024-08-06 21:09:10
| 2
| 1,195
|
Simon
|
78,840,951
| 17,800,932
|
Abstracting a real hardware and simulated device with the same interface in Python
|
<p>I am looking for a more idiomatic or concise OOP pattern to implement the equivalent of the following.</p>
<h4>Interface and implementations</h4>
<pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod
from typing import override


class Device:
    """Main interface that hides whether the device is a real hardware device or
    a simulated device.
    """

    def __init__(self, ip_address: str, port: int, simulated: bool) -> None:
        if simulated:
            self.__session = DeviceSimulated()
        else:
            self.__session = DeviceHardware(ip_address, port)

    def initialize(self) -> None: self.__session.initialize()
    def close(self) -> None: self.__session.close()
    def turn_on(self) -> None: self.__session.turn_on()
    def turn_off(self) -> None: self.__session.turn_off()


class DeviceHardware(Device):
    def __init__(self, ip_address: str, port: int) -> None:
        self._ip_address = ip_address
        self._port = port

    @override
    def initialize(self) -> None:
        print(f"DeviceHardware initialized with IP: {self._ip_address} and Port: {self._port}")

    @override
    def close(self) -> None: print("DeviceHardware closed")

    @override
    def turn_on(self) -> None: print("DeviceHardware turned on")

    @override
    def turn_off(self) -> None: print("DeviceHardware turned off")


class DeviceSimulated(Device):
    def __init__(self) -> None:
        # Nothing to initialize for the simulated device
        pass

    @override
    def initialize(self) -> None: print("DeviceSimulated initialized")

    @override
    def close(self) -> None: print("DeviceSimulated closed")

    @override
    def turn_on(self) -> None: print("DeviceSimulated turned on")

    @override
    def turn_off(self) -> None: print("DeviceSimulated turned off")
</code></pre>
<h4>Client usage</h4>
<p>This would be used by a client like this.</p>
<pre class="lang-py prettyprint-override"><code># Simulated case
device = Device(ip_address="192.168.1.10", port=123, simulated=True)
device.initialize()
device.turn_on()
device.turn_off()
device.close()
# Real hardware case
device = Device(ip_address="192.168.1.10", port=123, simulated=False)
device.initialize()
device.turn_on()
device.turn_off()
device.close()
</code></pre>
<h4>Question</h4>
<p>Basically, the point is that I present a unified interface to users, such that all they need to be concerned with is toggling the <code>simulated</code> argument. And similarly, I present a shared unified interface for the simulated and real hardware device implementations of the interface.</p>
<p>Is there a better Python and/or OOP way to handle this to satisfy these two constraints? This works, is simple, and is straightforward, but it is a little verbose perhaps. So while there may be more elegant solutions, I am particularly interested in those that require even less code and boilerplate.</p>
<p>I've done this in a different language; however, the objects were value-based and not reference-based, so it was done differently. As it stands here in Python, it's a little bit like the State Pattern, in that the <code>Device</code> changes its "state" based upon the <code>simulated</code> argument, with each method deferring down to the specific, actual "state" of the class.</p>
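<p>One alternative I have been weighing, shown here as a rough sketch trimmed to two methods, is to keep <code>Device</code> as a pure abstract interface and move the simulated-vs-hardware selection into a factory method, which removes the per-method forwarding boilerplate:</p>

```python
from abc import ABC, abstractmethod


class Device(ABC):
    """Abstract interface; create() picks the concrete implementation."""

    @abstractmethod
    def initialize(self) -> None: ...

    @abstractmethod
    def turn_on(self) -> None: ...

    @staticmethod
    def create(ip_address: str, port: int, simulated: bool) -> "Device":
        # Selection happens once, here, instead of in a wrapping session object
        return DeviceSimulated() if simulated else DeviceHardware(ip_address, port)


class DeviceHardware(Device):
    def __init__(self, ip_address: str, port: int) -> None:
        self._ip_address = ip_address
        self._port = port

    def initialize(self) -> None:
        print(f"DeviceHardware initialized with IP: {self._ip_address} and Port: {self._port}")

    def turn_on(self) -> None:
        print("DeviceHardware turned on")


class DeviceSimulated(Device):
    def initialize(self) -> None:
        print("DeviceSimulated initialized")

    def turn_on(self) -> None:
        print("DeviceSimulated turned on")


device = Device.create("192.168.1.10", 123, simulated=True)
device.initialize()
device.turn_on()
```

<p>But I am unsure whether this factory style is considered more idiomatic than the delegating wrapper above, which is part of what I am asking.</p>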
|
<python><oop><design-patterns><state-pattern><proxy-pattern>
|
2024-08-06 20:43:40
| 1
| 908
|
bmitc
|
78,840,589
| 235,472
|
How to split a Python class into multiple files in different subdirectories?
|
<p>I can split a class definition in multiple files only if these are located in the same directory:</p>
<pre><code># entrypoint.py
class C(object):
    from ImplFile import OtherMethod

    def __init__(self):
        self.a = 1

# ImplFile.py
def OtherMethod(self):
    print("a = ", self.a)
</code></pre>
<p>If the <code>ImplFile.py</code> file is in a subdirectory, I get the error:</p>
<pre><code>ModuleNotFoundError: No module named 'OtherMethod'
</code></pre>
<p>How can I specify that the member function is defined in that file?</p>
<hr />
<p>The reason I want to split the implementation in more than one file is due to its size, both in terms of number of member functions and their size.</p>
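<p>For reference, here is a self-contained reproduction I put together that builds the layout in a temporary directory; it shows the same trick working when the subdirectory is a proper package (with an <code>__init__.py</code>) reachable from <code>sys.path</code>. The package name <code>impl</code> is just a placeholder:</p>

```python
import sys
import tempfile
import textwrap
from pathlib import Path

root = Path(tempfile.mkdtemp())

# The subdirectory as a real package: impl/__init__.py + impl/ImplFile.py
pkg = root / "impl"
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "ImplFile.py").write_text("def OtherMethod(self):\n    print('a = ', self.a)\n")

(root / "entrypoint.py").write_text(textwrap.dedent("""\
    class C(object):
        from impl.ImplFile import OtherMethod

        def __init__(self):
            self.a = 1
"""))

sys.path.insert(0, str(root))
import entrypoint

c = entrypoint.C()
c.OtherMethod()  # prints: a =  1
```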
|
<python><subdirectory>
|
2024-08-06 18:41:38
| 1
| 13,528
|
Pietro
|
78,840,534
| 5,568,409
|
How to understand a strange command linked to Delaunay and Line2D?
|
<p>I was intrigued by a line of code that I don't understand, given in the answer to the (already old) post <a href="https://stackoverflow.com/questions/29807551/how-to-change-matplotlib-line-color-in-scipy-spatial-delaunay-plot-2d">HERE</a>.</p>
<p>So, I tried to investigate and made the following program:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D
from scipy.spatial import Delaunay, delaunay_plot_2d
np.random.seed(12345)
points = np.random.rand(8, 2)
tri = Delaunay(points)
fig, ax = plt.subplots(figsize=(8, 4))
D = delaunay_plot_2d(tri, ax=ax)
ax.set_aspect('equal', 'box')
lines = [l for l in D.axes[0].get_children() if type(l) is Line2D]  # renamed from 'list' to avoid shadowing the built-in
lines
</code></pre>
<p>which gave me <code>3</code> items in the <code>list</code>:</p>
<pre><code>[<matplotlib.lines.Line2D at 0x246dd230550>,
<matplotlib.lines.Line2D at 0x246dd230850>,
<matplotlib.lines.Line2D at 0x246dd230af0>]
</code></pre>
<p>I then could use the <code>1st</code> item to change the color of the points:</p>
<pre><code>patch_point = [l for l in D.axes[0].get_children() if type(l) is Line2D][0]
patch_point.set_color('blue')
</code></pre>
<p>and the <code>2nd</code> item to change the color of the lines:</p>
<pre><code>patch_line = [l for l in D.axes[0].get_children() if type(l) is Line2D][1]
patch_line.set_color('green')
</code></pre>
<p>but what is the meaning of the <code>3rd</code> item? For example, the following lines don't change anything...</p>
<pre><code>patch_what = [l for l in D.axes[0].get_children() if type(l) is Line2D][2]
patch_what.set_color('red')
</code></pre>
<p>Could someone help me understand what this strange command <code>[l for l in D.axes[0].get_children() if type(l) is Line2D]</code> does?</p>
|
<python><matplotlib><delaunay>
|
2024-08-06 18:23:57
| 1
| 1,216
|
Andrew
|
78,840,528
| 9,890,009
|
How to optimize method with multiple query calls?
|
<p>I have an event system in my app. Every time a user deletes/updates/creates an object, a new event is created in the database with the info of the object and the type of event it is.
My Event model looks something like this:</p>
<pre><code>class Event(models.Model):
    uuid = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    model = models.CharField(max_length=200)
    object_id = models.IntegerField()
    event_type = enum.EnumField(ModelEventType)
</code></pre>
<p>In this model, model is the name of the object's model, and object_id is the ID of that object.</p>
<p>My app also has multiple access permissions: some objects are private to the user, while others are shared among teammates. Thus, I need to restrict events to only those that correspond to a user's access level. To fetch each user's events, I retrieve all objects the user has access to, obtain their IDs, and then fetch the events related to those objects.</p>
<p>I use a method like this to fetch the objects belonging to a user:</p>
<pre><code>class Event(models.Model):
    ....

    @staticmethod
    def get_events_ids(request):
        object_ids = {}
        Object1 = apps.get_model("myapp", "Object1")
        Object2 = apps.get_model("myapp", "Object2")
        Object3 = apps.get_model("myapp", "Object3")
        Object4 = apps.get_model("myapp", "Object4")
        Object5 = apps.get_model("myapp", "Object5")
        objects1 = Object1.objects.filter(...)
        objects2 = Object2.objects.filter(...)
        objects3 = Object3.objects.filter(...)
        objects4 = Object4.objects.filter(...)
        objects5 = Object5.objects.filter(...)
        objects = {
            "Object1": objects1,
            "Object2": objects2,
            "Object3": objects3,
            "Object4": objects4,
            "Object5": objects5,
        }
        for key in objects.keys():
            object_ids[key] = list(objects[key].values_list("id", flat=True))
        return object_ids
</code></pre>
<p>I apply this approach to many more models. With this method, I get the IDs of each model and then filter events by model and ID:</p>
<pre><code>event_objects = Event.get_events_ids(request)
events = []

# Get the events of each model
for key in event_objects.keys():
    ids = event_objects[key]
    events_list = qs.filter(object_id__in=ids, model=key)
    events.extend(list(events_list))

ids = [obj.uuid for obj in events]
filtered_events = qs.filter(uuid__in=ids)
</code></pre>
<p>The problem I'm facing is that this approach is slow and inefficient due to the large number of queries involved. Is there a way to perform a bulk fetch or something similar to improve response time? Any suggestions for optimizing this process would be greatly appreciated.</p>
|
<python><django><django-orm>
|
2024-08-06 18:21:22
| 1
| 792
|
Paul Miranda
|
78,840,402
| 186,099
|
How to infer the type of the first element in an iterable?
|
<p><strong>How to infer the type of the first element in an iterable in MyPy/Pyright?</strong></p>
<p>Is there any way to annotate the code below so the result is more narrowly scoped?
Meaning that I want the type checker to infer that the function below returns an object of the type contained in the iterable or list.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Iterable, TypeVar

T = TypeVar('T')

def get_first(iterable: Iterable[T]) -> T | None:
    for item in iterable:
        return item
    return None
</code></pre>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>from typing import List

class Bob:
    pass

ll: List[Bob] = [Bob(), Bob()]  # Create a list that contains objects of type Bob

first_bob = get_first(ll)  # <--- Type checker should infer that the type of first_bob is Bob
</code></pre>
<p>I am looking for a solution that works specifically in MyPy/Pyright.</p>
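<p>For convenience, here are the two fragments above combined into a single runnable file (this is exactly what I feed to mypy and Pyright; it runs fine, my question is purely about the inferred type):</p>

```python
from typing import Iterable, List, Optional, TypeVar

T = TypeVar("T")

def get_first(iterable: Iterable[T]) -> Optional[T]:
    for item in iterable:
        return item
    return None

class Bob:
    pass

ll: List[Bob] = [Bob(), Bob()]
first_bob = get_first(ll)  # I want the checkers to tie this to Bob
assert isinstance(first_bob, Bob)
```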
|
<python><python-typing><mypy><pyright>
|
2024-08-06 17:44:24
| 2
| 9,511
|
Vlad
|
78,840,242
| 8,436,290
|
How to return used context to answer using Langchain in Python
|
<p>I have built a RAG system like this:</p>
<pre><code>loader = PyPDFLoader(pdf_file_name)
raw_documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
documents = text_splitter.split_documents(raw_documents)

print(documents[-1])
Document(
    metadata={'source': '/Appraisal.pdf', 'page': 37},
    page_content='File No.\nProperty Address\nCity County State Zip Code\nClient10828\nBorrower or Owner John Smith & Kitty Smith\n29 Dream St\nDreamTown SC 99999\nSouthern First Bank\nBB Appraisals, LLC'
)

compressor = CohereRerank(
    top_n=top_n,
    model="rerank-english-v3.0",
    cohere_api_key=""
)

retriever = vectorstore.as_retriever(
    search_type="similarity",
    search_kwargs={"k": top_n}
)

compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor, base_retriever=retriever
)

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

response_schemas = [
    ResponseSchema(name="price", description="Price", type="float"),
    ResponseSchema(name="unit", description="Unit", type="int"),
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)

rag_prompt = PromptTemplate(
    input_variables=["context", "question"],
    template=template,
    partial_variables={"format_instructions": output_parser.get_format_instructions()},
)

rag_chain = (
    {"context": compression_retriever | format_docs, "question": RunnablePassthrough()}
    | rag_prompt
    | llm
    | output_parser
)

query = "What is the price? How many units?"
response = rag_chain.invoke(query, config={"configurable": {"session_id": "abc123"}})
</code></pre>
<p>But then my response is a JSON with my price and unit as keys only. And I would like to be able to have a "context" variable that stores the paragraphs used in my document that the algo relied upon to answer the questions.</p>
<p>Any idea how I could do that please?</p>
|
<python><langchain><large-language-model>
|
2024-08-06 16:57:27
| 1
| 467
|
Nicolas REY
|
78,840,095
| 2,827,582
|
Datetime doesn't seem to be converting
|
<p>I am trying to convert a 'created_at' value in UTC to a value in Eastern Time. But after converting the time using astimezone, the two values still compare as equal, which triggers the assert error. I tried making the datetime aware by doing <code>f_time.replace(tzinfo=pytz.utc)</code>, although I am not sure this is necessary.</p>
<pre class="lang-py prettyprint-override"><code>def get_f_datetime(self, rec):
    f_time = datetime.datetime.strptime(rec['created_at'], "%Y-%m-%dT%H:%M:%SZ")
    f_time = f_time.replace(tzinfo=pytz.utc)
    eastern_time = f_time.astimezone(pytz.timezone('US/Eastern'))
    try:
        assert f_time != eastern_time
    except AssertionError:
        raise Exception(f'the time is the utc: {f_time} eastern: {eastern_time}. This is wrong.')
    dt_string = eastern_time.strftime("%m/%d/%Y %I:%M %p")
    return dt_string
</code></pre>
<p>The assertion error is triggering, indicating that f_time is identical to eastern_time. I want the eastern time to be offset.</p>
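<p>To isolate the comparison behavior, here is a pytz-free reproduction using a fixed offset from the stdlib (the <code>-4</code> hour offset is my stand-in for US/Eastern daylight time):</p>

```python
from datetime import datetime, timedelta, timezone

utc_time = datetime(2024, 8, 6, 12, 0, tzinfo=timezone.utc)
eastern_time = utc_time.astimezone(timezone(timedelta(hours=-4)))

# Aware datetimes compare by the instant they represent, not by wall-clock time
print(utc_time == eastern_time)                    # True
print(utc_time.hour, eastern_time.hour)            # 12 8
print(eastern_time.strftime("%m/%d/%Y %I:%M %p"))  # 08/06/2024 08:00 AM
```

<p>So here the formatted string does show the shifted hour even though the equality holds, which matches the behavior I am seeing above.</p>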
|
<python><datetime><strptime><pytz>
|
2024-08-06 16:22:59
| 1
| 1,542
|
Steve Scott
|
78,839,970
| 305,597
|
How can I add several help arguments using argparse, which don't require adding any required arguments?
|
<p>I'm using <code>ArgumentParser</code> to document the usage of a complicated program. Since the program has several long lists of capabilities, it requires several <code>--help</code> arguments.</p>
<p>My current <code>ArgumentParser</code> looks like this.</p>
<pre><code>parser = ArgumentParser(description = 'Complicated program')
parser.add_argument('--list-models', action = 'store_true', help = 'List the available models')
parser.add_argument('--list-stuff', action = 'store_true', help = 'List the other stuff')
# ...
parser.add_argument('important_data', help = 'Some important data that is required')
</code></pre>
<p>I want <code>--list-models</code> and <code>--list-stuff</code> to print some data and exit, similar to how <code>--help</code> works.</p>
<p>However, running the program with either argument fails since <code>important_data</code> is a positional argument, which <code>ArgumentParser</code> always requires.</p>
<p>What's the nicest way to implement this help pattern?</p>
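<p>For reference, the closest shape I can think of is a custom <code>Action</code> that prints and exits during parsing, the same way <code>--help</code> does, so the positional argument is never validated (rough sketch; <code>ListModelsAction</code> and the model names are made up):</p>

```python
import argparse

class ListModelsAction(argparse.Action):
    def __init__(self, option_strings, dest, **kwargs):
        # nargs=0: the flag consumes no value, just like --help
        super().__init__(option_strings, dest, nargs=0, **kwargs)

    def __call__(self, parser, namespace, values, option_string=None):
        print("model-a\nmodel-b")
        parser.exit()  # exits before 'important_data' is checked

parser = argparse.ArgumentParser(description="Complicated program")
parser.add_argument("--list-models", action=ListModelsAction,
                    help="List the available models")
parser.add_argument("important_data", help="Some important data that is required")
```

<p>I am mainly asking whether there is a nicer built-in way than this.</p>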
|
<python><arguments><argparse>
|
2024-08-06 15:50:26
| 2
| 9,705
|
Martín Fixman
|
78,839,834
| 3,070,181
|
python extension not loading in vscode
|
<p>I have seen <a href="https://stackoverflow.com/questions/74466183/why-is-my-python-extension-not-loading-in-vscode-microsoft">this question</a> and <a href="https://stackoverflow.com/questions/72791043/how-would-i-fix-the-issue-of-the-python-extension-loading-and-extension-activati">this one</a></p>
<p>Neither helps me.</p>
<p>When I add the python extension for vscode, the status line states</p>
<pre><code>python extension loading ...
</code></pre>
<p>But it never does</p>
<p>I have removed all extensions from vscode and deleted all ms-python files from .config, removed and reinstalled vscode</p>
<p>I am running python 3.12.4 on manjaro</p>
<pre><code>code - OSS
Version: 1.91.1
Commit: f1e16e1e6214d7c44d078b1f0607b2388f29d729
Date: 2024-07-12T15:10:53.972Z
Electron: 29.4.5
ElectronBuildId: undefined
Chromium: 122.0.6261.156
Node.js: 20.9.0
V8: 12.2.281.27-electron.0
OS: Linux x64 6.6.41-1-MANJARO
</code></pre>
<p>Can anyone help?</p>
|
<python><visual-studio-code>
|
2024-08-06 15:14:02
| 1
| 3,841
|
Psionman
|
78,839,662
| 1,391,441
|
emcee refuses to properly explore function
|
<p>I have a 1D function that looks like this:</p>
<p><a href="https://i.sstatic.net/EyatUXZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EyatUXZP.png" alt="function to be fitted" /></a></p>
<p>i.e. a pronounced drop towards a stable value around 0. The function is written as:</p>
<pre><code>((1. / np.sqrt(1. + x ** 2)) - (1. / np.sqrt(1. + C ** 2))) ** 2
</code></pre>
<p>where <code>C</code> is the parameter I'm trying to explore using emcee. The problem is that emcee (using a uniform prior) refuses to explore the region of large likelihood and instead wanders around the entire allowed range for this parameter seemingly randomly. Here's what the traces look like (full code is below):</p>
<p><a href="https://i.sstatic.net/657AWjxB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/657AWjxB.png" alt="emcee trace plot" /></a></p>
<p>where the true value is shown with a red line. Whereas <code>scipy.optimize.minimize</code> is able to easily zoom in on the true value, emcee is apparently unable to do it.</p>
<p>Am I doing something wrong or is this function just not able to be explored using a uniform prior like I am doing?</p>
<hr />
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import emcee


def main():
    # Set true value for the variable
    C_true = 27.

    # Generate synthetic data
    x = np.arange(.1, 100)
    y_true = func(x, C_true)
    noise = 0.01
    y_obs = np.random.normal(y_true, noise)

    # Set up the MCMC
    nwalkers = 4
    ndim = 1
    nburn = 500
    nsteps = 5000

    # Maximum value for the 'C' parameter
    C_max = 5 * C_true

    # Use a 10% STDDEV around the true value for the initial state
    p0 = [np.random.normal(C_true, C_true * .1, nwalkers)]
    p0 = np.array(p0).T

    # Run the MCMC
    print("Running emcee...")
    sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=(x, y_obs, C_max))

    # Burn-in
    state = sampler.run_mcmc(p0, nburn)
    sampler.reset()
    sampler.run_mcmc(state, nsteps)
    samples = sampler.chain.reshape((-1, ndim))

    # Print the median and 1-sigma uncertainty of the parameters
    C_median = np.median(samples)
    C_percnt = np.percentile(samples, [16, 84])
    print(f'C = {C_median:.2f} ({C_percnt[0]:.2f}, {C_percnt[1]:.2f})')

    # Chains
    plt.plot(sampler.chain[:, :, 0].T, c='k', alpha=0.1)
    plt.axhline(C_true, color='r')
    plt.ylabel('C')
    plt.xlabel('Step')
    plt.tight_layout()
    plt.show()

    # Fitted func
    plt.scatter(x, y_obs)
    y_emcee = func(x, C_median)
    plt.scatter(x, y_emcee)
    plt.show()


def func(x, C):
    x_C = ((1. / np.sqrt(1. + x ** 2)) - (1. / np.sqrt(1. + C ** 2))) ** 2
    # Beyond C, the function is fixed to 0
    return np.where(x < C, x_C, 0)


def lnlike(C, x, y_obs):
    model = func(x, C)
    lkl = -np.sum((y_obs - model) ** 2)
    return lkl


def lnprior(C, C_max):
    if 0 < C < C_max:
        return 0.0
    return -np.inf


def lnprob(C, x, y_obs, C_max):
    lp = lnprior(C, C_max)
    if not np.isfinite(lp):
        return -np.inf
    return lp + lnlike(C, x, y_obs)


if __name__ == '__main__':
    main()
</code></pre>
|
<python><optimization><bayesian><emcee>
|
2024-08-06 14:40:54
| 1
| 42,941
|
Gabriel
|
78,839,616
| 2,197,296
|
Tensorrt installation issues in WSL
|
<p>I'm trying to get tensorrt working with Tensorflow 2.17, yet after trying all of the official and unofficial instructions several times I still can't get it to work and I'm at the edge of sanity.
I have installed CUDA 12.5 (as that's reported as the last version supported by TensorRT).</p>
<pre><code>$ sudo apt-get install tensorrt libnvinfer10 [16:24:34]
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
libnvinfer10 is already the newest version (10.2.0.19-1+cuda12.5).
libnvinfer10 set to manually installed.
tensorrt is already the newest version (10.2.0.19-1+cuda12.5).
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
</code></pre>
<pre><code>$ pip install -U tensorrt [16:24:26]
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: tensorrt in ./miniconda3/envs/tf/lib/python3.12/site-packages (10.2.0.post1)
Requirement already satisfied: tensorrt-cu12==10.2.0.post1 in ./miniconda3/envs/tf/lib/python3.12/site-packages (from tensorrt) (10.2.0.post1)
</code></pre>
<pre><code>$ nvidia-smi [16:26:49]
Tue Aug 6 16:26:53 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.147.05 Driver Version: 560.76 CUDA Version: 12.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... On | 00000000:01:00.0 On | Off |
| 0% 39C P8 15W / 450W | 1514MiB / 24564MiB | 1% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
</code></pre>
<pre><code>$ python -c "import tensorflow as tf; print(tf.__version__)" [15:31:54]
2024-08-06 16:22:46.458404: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-08-06 16:22:46.596055: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-08-06 16:22:46.644047: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8454] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-08-06 16:22:46.662195: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-08-06 16:22:46.758216: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-08-06 16:22:47.524911: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2.17.0
</code></pre>
<pre><code>$ ls /usr/lib/tensorrt/bin [16:24:49]
ls: cannot access '/usr/lib/tensorrt/bin': No such file or directory
</code></pre>
<p>No matter what installation method I try, the <code>/usr/lib/tensorrt/bin</code> path is always missing.</p>
<p>Running Ubuntu 22.04 on WSL2, Windows 11 host.</p>
<p>Any help appreciated.</p>
|
<python><tensorflow><ubuntu-22.04><tensorrt>
|
2024-08-06 14:30:20
| 0
| 1,266
|
Nicolas
|
78,839,580
| 1,576,804
|
Removing ```json and ``` from Markdown
|
<p>I have a result object from the code:</p>
<pre><code>for fragment_string in fragment_strings:
    prompt = generate_prompt(fragment_string)
    messages = [{"role": "user", "content": prompt}]
    result = chat_completion(messages)

    # Post-process the response to remove the "json" prefix
    if result.strip().startswith("json"):
        result = result.strip()[4:].strip()

    print(f'{result} \n')
</code></pre>
<p>That looks like</p>
<pre><code>```json
{
"question": "What are the key findings and recommendations regarding the use of AI in clinical practice and the integration of smart devices for health monitoring?",
"answer": "The key findings and recommendations regarding the use of AI in clinical practice and the integration of smart devices for health monitoring are as follows:
1. AI Applications in Clinical Practice:
- Findings: The development of new techniques as established standards of care uses robust peer-reviewed R&D practices, providing safeguards against deceptive or poorly-validated AI algorithms. AI diagnostics replacing established medical standards will require extensive validation.
- Recommendations: Support the preparation of promising AI applications for rigorous approval procedures needed for clinical practice. Create testing and validation approaches for AI algorithms to evaluate performance under conditions different from the training set.
2. Confluence of AI and Smart Devices for Monitoring Health and Disease:
- Findings: Revolutionary changes in health and health care are beginning with the use of smart devices to monitor individual health, often outside traditional clinical settings. AI and smart devices will become increasingly interdependent, with AI powering many health-related mobile devices and apps, and mobile devices generating massive datasets for AI-based health tools.
- Recommendations: Support the development of AI applications that enhance the performance of new mobile monitoring devices and apps. Develop data infrastructure to capture and integrate data from smart devices to support AI applications. Ensure privacy and transparency in data use. Track developments in foreign health care systems for useful technologies and potential failures.",
"fragments_needed": [
{
"index": "32",
"content": "1. AI Applications in Clinical Practice\nFindings:\n● The process of developing a new technique as an established standard of care uses the robust practice of peer-reviewed R&D, and can provide safeguards against the deceptive or poorly-validated use of AI algorithms. (Section 2.3)\n● The use of AI diagnostics as replacements for established steps in medical standards of care will require far more validation than the use of such diagnostics to provide supporting information that aids in decisions. (Section 2.3)\nRecommendations:\n● Support work to prepare AI results for the rigorous approval procedures needed for acceptance for clinical practice. Create testing and validation approaches for AI algorithms to evaluate performance of the algorithms under conditions that differ from the training set. (Section 2.3)"
},
{
"index": "32",
"content": "2. Confluence of AI and Smart Devices for Monitoring Health and Disease\nFindings:\n● Revolutionary changes in health and health care are already beginning in the use of smart devices to monitor individual health. Many of these developments are taking place outside of traditional diagnostic and clinical settings. (Section 3.1)\n● In the future, AI and smart devices will become increasingly interdependent, including in health-related fields. On one hand, AI will be used to power many health-related mobile monitoring devices and apps. On the other hand, mobile devices will create massive datasets that, in theory, could open new possibilities in the development of AI-based health and health care tools. (Section 3.1)\nRecommendations:\n● Support the development of AI applications that can enhance the performance of new mobile monitoring devices and apps. (Section 3.1)\n● Develop data infrastructure to capture and integrate data generated from smart devices to support AI applications. (Section 3.1)\n● Require that development include approaches to insure privacy and transparency of data use. (Section 3.1)\n● Track developments in foreign health care systems, looking for useful technologies and also technology failures. (Section 3.1)"
}
]
}
```
</code></pre>
<p>I want to extract only the actual JSON object in Python, without the Markdown ```json prefix and the trailing ```. What is the way to do this?</p>
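<p>To make the goal concrete, this is the kind of stripping I am after, shown on a shortened payload (my real result string is much longer):</p>

```python
import json
import re

result = '```json\n{"price": 350000.0, "unit": 2}\n```'

# Drop a leading ``` / ```json fence and a trailing ``` fence, then parse
cleaned = re.sub(r"^```(?:json)?\s*|\s*```$", "", result.strip())
data = json.loads(cleaned)
print(data)  # {'price': 350000.0, 'unit': 2}
```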
|
<python><json><markdown>
|
2024-08-06 14:22:45
| 1
| 4,234
|
vkaul11
|
78,839,568
| 17,311,709
|
Tableau data source not updating through hyper API
|
<p>I have a workbook containing one data source, which is an extract file. I'm trying to update this file through the Hyper API; this is what my code looks like:</p>
<pre><code>with server.auth.sign_in(tableau_auth):
    # Get the project ID
    all_projects, pagination_item = server.projects.get()
    project_id = next(project.id for project in all_projects if project.name == project_name)

    # Find the workbook
    all_workbooks, pagination_item = server.workbooks.get()
    print("Workbooks available:")
    for workbook in all_workbooks:
        print(f"Workbook name: {workbook.name}, Workbook ID: {workbook.id}")
    workbook = next(workbook for workbook in all_workbooks if workbook.name == workbook_name)

    # Get the workbook's data sources
    server.workbooks.populate_connections(workbook)
    data_source = next(ds for ds in workbook.connections if ds.datasource_name == data_source_name)

    # Update the data source with the new file
    new_data_source = TSC.DatasourceItem(project_id, name=data_source.datasource_name)
    new_data_source = server.datasources.publish(new_data_source, local_file_path,
                                                 TSC.Server.PublishMode.Overwrite)

    # Refresh the workbook to reflect the new data
    server.workbooks.refresh(workbook.id)
    print(f"Workbook {workbook.name} (ID: {workbook.id}) has been refreshed.")
    print("Data source updated and workbook refreshed.")
</code></pre>
<p>This code executes correctly, but when I go to my Tableau Server I don't see any changes. I also find the output.hyper file that I'm trying to push in the project folder, like so:</p>
<p><a href="https://i.sstatic.net/4hla6a2L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4hla6a2L.png" alt="enter image description here" /></a></p>
<p>After I run the code I also get a notification saying that the workbook is out of date, but clicking refresh doesn't do anything either.</p>
<p>Am I doing something wrong here?</p>
|
<python><tableau-api><tableau-cloud>
|
2024-08-06 14:21:21
| 0
| 635
|
Rafik Bouloudene
|
78,839,513
| 6,620,090
|
python import issue with similar directory structure
|
<p>I have the below directory structure:</p>
<pre><code>project_root_dir
├── a
│ └── b
│ ├── __init__.py
│ └── test_cases.py
└── subdir
└── a
└── b
└── __init__.py
</code></pre>
<p>My <code>PYTHONPATH</code> env is set as below:</p>
<pre><code>export PYTHONPATH=<workspace_path>/project_root_dir/:<workspace_path>/project_root_dir/subdir
</code></pre>
<p>I am launching the interpreter from <code>project_root_dir/subdir</code></p>
<pre><code># from inside project_root_dir/subdir
import a.b.test_cases
</code></pre>
<p>It throwing module not found error</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'a.b.test_cases'
</code></pre>
<p>I expected the interpreter to look for the module first under <code>subdir/a/b</code>, not find it there, and then find it under <code>project_root_dir/a/b</code>, since both paths are present in <code>PYTHONPATH</code>. Then why is the error being thrown?</p>
<p>Is there a way I can get it to work?</p>
|
<python><python-3.x><import>
|
2024-08-06 14:08:51
| 3
| 1,292
|
Rachit Tayal
|
78,839,482
| 5,568,409
|
Is it possible to plot a Voronoi tessellation directly from a previous Delaunay triangulation?
|
<p>I have a small Python code plotting a small Delaunay triangulation:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import Delaunay

points = np.array([[0, 0],
                   [1, 0],
                   [0.5, 0.5],
                   [1, 0.5]])
tri = Delaunay(points)

fig, ax = plt.subplots(figsize=(8, 4))
ax.set_aspect('equal', 'box')
plt.triplot(points[:,0], points[:,1], tri.simplices.copy())
plt.plot(points[:,0], points[:,1], 'o')
plt.show()
</code></pre>
<p>Do you know if it's now possible in Python to directly superimpose on the previous plot the corresponding Voronoi tessellation?</p>
<p>I am using Python '3.9.7' and a Matplotlib '3.8.4'</p>
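One approach (a sketch, assuming SciPy's `scipy.spatial` module): build a `Voronoi` diagram from the same point set and draw it onto the existing axes with `voronoi_plot_2d(..., ax=ax)`. The `Agg` backend and output filename below are only there so the script also runs headless:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this also runs headless
import matplotlib.pyplot as plt
from scipy.spatial import Delaunay, Voronoi, voronoi_plot_2d

points = np.array([[0, 0],
                   [1, 0],
                   [0.5, 0.5],
                   [1, 0.5]])
tri = Delaunay(points)
vor = Voronoi(points)  # second diagram over the same points

fig, ax = plt.subplots(figsize=(8, 4))
ax.set_aspect('equal', 'box')
ax.triplot(points[:, 0], points[:, 1], tri.simplices.copy())
# Draw the Voronoi edges on the same axes; hide its own markers so the
# original point style is kept.
voronoi_plot_2d(vor, ax=ax, show_points=False, show_vertices=False)
ax.plot(points[:, 0], points[:, 1], 'o')
fig.savefig("delaunay_voronoi.png")
```

Delaunay triangulation and Voronoi tessellation are duals of the same point set, which is why superimposing them this way is well defined.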
|
<python><python-3.x><matplotlib><voronoi><delaunay>
|
2024-08-06 14:03:10
| 1
| 1,216
|
Andrew
|
78,839,425
| 10,518,698
|
How to convert PPTX to PDF on a Non-Windows machine
|
<p>I'm sure many have come across this question, but I couldn't find any solutions for non-Windows machines.</p>
<p>I need to prepare a Python script that can run on Linux (multiple versions) as well as On Windows machines. The script should read a PPT file, and convert it into a PDF, then convert the PDF into images. I can achieve the second part (PDF to images), but I can't achieve the first part (PPT to PDF).</p>
<p>Can anyone please help me to achieve this?</p>
<p>I tried Spire and Aspose, but both of them are commercial and we need to pay. python-pptx doesn't support conversion. Win32 is not available on Linux.</p>
<p>I am at a dead end.</p>
<p>This is the code I used to achieve the above (both the first and second parts) using the spire library, but it doesn't convert more than 10 pages.</p>
<pre><code>!pip install Spire.Presentation

from spire.presentation import Presentation  # import path assumed from the Spire docs

# Function to convert PPT to images
def ppt_to_images(ppt_file, destination_folder):
    presentation = Presentation()
    presentation.LoadFromFile(ppt_file)
    presentation_name = ppt_file.split('/')[-1].split('.')[0]
    # Save each slide of the PPT document as an image
    for i, slide in enumerate(presentation.Slides):
        file_name = presentation_name + '_' + str(i) + ".png"
        image = slide.SaveAsImage()
        image.Save(destination_folder + '/' + file_name)
        image.Dispose()
    presentation.Dispose()
    return None
</code></pre>
<p>Calling the above function using</p>
<pre><code>ppt_to_images("/content/drive/MyDrive/Colab Notebooks/DummyPPTs/dummy_powerpoint.pptx", OUTPUT_FOLDER)
</code></pre>
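A common license-free route on Linux (and on Windows, if it is installed) is LibreOffice's headless converter. This sketch assumes the `soffice` binary is on `PATH`; it only builds and runs the command, and the resulting PDF can then be fed to the existing PDF-to-images step:

```python
import shutil
import subprocess

def libreoffice_pdf_cmd(pptx_path: str, out_dir: str) -> list[str]:
    # LibreOffice ships a headless converter; the same command line works
    # on Linux and Windows as long as soffice is on PATH.
    return ["soffice", "--headless", "--convert-to", "pdf",
            "--outdir", out_dir, pptx_path]

def pptx_to_pdf(pptx_path: str, out_dir: str) -> None:
    if shutil.which("soffice") is None:
        raise RuntimeError("LibreOffice (soffice) not found on PATH")
    subprocess.run(libreoffice_pdf_cmd(pptx_path, out_dir), check=True)
```

Unlike the Spire free tier, LibreOffice has no page-count limit, but it must be installed system-wide rather than via pip.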
|
<python><python-pptx>
|
2024-08-06 13:51:28
| 1
| 513
|
JSVJ
|
78,839,378
| 16,591,917
|
Can you use python xlwings library to trigger VBA code
|
<p>I have a use case where I want to automate and interface an Excel model from Python. The workflow would be something like:</p>
<ol>
<li>Transfer data from Python to Excel</li>
<li>Execute a macro to run the model to convergence (about 45 secs)</li>
<li>Wait for the macro to set a cell status indicating completion.</li>
<li>Collect the data from Excel and post-process in Python</li>
</ol>
<p>Can I do this with xlwings or any other Excel interface library?</p>
|
<python><excel><xlwings>
|
2024-08-06 13:41:15
| 1
| 319
|
JacquesStrydom
|
78,839,329
| 34,509
|
Wheels created by "pip freeze" and "pip wheel" not usable with --no-index and --find-links?
|
<p>I created wheels from an existing venv, that resulted in, among others, the following wheel. The wheel was created within <code>docker</code>, with a Python version of 3.9 (the smallest possible versions were chosen to provide maximal compatibility)</p>
<pre><code>wheels/codechecker-6.23.1-cp39-cp39-linux_x86_64.whl
</code></pre>
<p>My requirements.txt has that version "6.23.1" listed. But when I try to install that version (with Python 3.11), using</p>
<pre><code>python3 -m pip install --no-index --find-links wheels -r requirements.txt
</code></pre>
<p>Python gives an error message:</p>
<blockquote>
<pre><code>Looking in links: wheels
Processing wheels/alembic-1.5.5-py2.py3-none-any.whl
ERROR: Could not find a version that satisfies the requirement codechecker==6.23.1 (from > versions: none)
ERROR: No matching distribution found for codechecker==6.23.1
</code></pre>
</blockquote>
<p>My suspicion is that the version encoding of "cp39" (for CPython3.9) somehow gets in the way with using it with CPython3.11, but from what I read, wheels should be upward-compatible and Pip should be downward compatible with older wheel versions. So what could be wrong here? Both systems are 64bit systems.</p>
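The suspicion is correct: a <code>cp39-cp39</code> wheel contains a CPython 3.9 extension-ABI build and is not installable on CPython 3.11 — only pure-Python wheels tagged <code>py3-none-any</code> are interpreter-version-agnostic. What a wheel filename claims can be decoded with the standalone `packaging` library (an assumption that it is installed; pip vendors the same code). A small sketch using the wheel name from the question:

```python
from packaging.tags import Tag
from packaging.utils import parse_wheel_filename

# Decode the name, version, and compatibility tags encoded in the filename.
name, version, build, tags = parse_wheel_filename(
    "codechecker-6.23.1-cp39-cp39-linux_x86_64.whl"
)
# packaging.tags.sys_tags() lists the tags the *running* interpreter
# accepts; a cp39 tag is never in that list on CPython 3.11, which is
# exactly why pip reports "from versions: none".
print(name, version, sorted(str(t) for t in tags))
```

The fix is to build the wheels with (or for) the same interpreter that will install them, e.g. by running `pip wheel` under Python 3.11.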
|
<python><linux><pip><python-wheel>
|
2024-08-06 13:33:36
| 1
| 509,594
|
Johannes Schaub - litb
|
78,839,287
| 3,494,328
|
f2py in numpy 2.0.1 does not expose variables the way numpy 1.26 did. How can I access Fortran variables in Python?
|
<p>I used to run a collection of Fortran 95 subroutines from Python by compiling it via f2py. In the Fortran source I have a module with my global variables:</p>
<pre><code> MODULE GEOPLOT_GLOBALS
IMPLICIT NONE
INTEGER, PARAMETER :: N_MAX = 16
INTEGER, PARAMETER :: I_MAX = 18
INTEGER, PARAMETER :: J_MAX = 72
...
END MODULE GEOPLOT_GLOBALS
</code></pre>
<p>The compiled file has the name "geoplot.cpython-312-darwin.so" and is in a subfolder named "geo". When using f2py in numpy 1.26, I could do this:</p>
<pre><code>import geo.geoplot as geo
maxN = geo.geoplot_globals.n_max
maxI = geo.geoplot_globals.i_max
maxJ = geo.geoplot_globals.j_max
</code></pre>
<p>Now, with numpy 2.0.1, I do the same but get the error message</p>
<pre><code>AttributeError: module 'geo.geoplot' has no attribute 'geoplot_globals'
</code></pre>
<p>Which can be confirmed by listing the <code>__dict__</code> attribute or using the <code>getmembers</code> module: They all list the Fortran subroutines and modules which contain source code, except for the <code>geoplot_globals</code> module which contains only variable declarations.</p>
<p>So my question is: How am I supposed to access global Fortran variables from Python when using numpy 2.0? And please do not suggest to write all to a file in Fortran only to read it in Python. There should be a more direct way.</p>
|
<python><numpy><f2py>
|
2024-08-06 13:27:19
| 2
| 714
|
Peter Kämpf
|
78,839,246
| 5,029,509
|
PyTorch and Opacus for Differential Privacy
|
<p>When testing an example code from the <strong>TensorFlow</strong> website using <strong>Jupyter Notebook</strong>, which is available <a href="https://www.tensorflow.org/responsible_ai/privacy/tutorials/classification_privacy" rel="nofollow noreferrer">here</a>, I encountered an error. You can find my SO question about that error <a href="https://stackoverflow.com/q/78836989/5029509">here</a>.</p>
<p>As a result, I decided to write equivalent implementations for the same functionality using <strong>PyTorch</strong> with <strong>Opacus</strong> and <strong>PySyft</strong>. However, I unfortunately encountered another error.</p>
<p>Below is the code for implementing the same functionality of the example code from the TensorFlow website, but using PyTorch with Opacus and PySyft, along with the error message.</p>
<pre><code>import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
from opacus import PrivacyEngine

# Define a simple model
class SimpleCNN(nn.Module):
    def __init__(self):
        super(SimpleCNN, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3)
        self.fc1 = nn.Linear(32*26*26, 10)

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        x = x.view(-1, 32*26*26)
        x = self.fc1(x)
        return torch.log_softmax(x, dim=1)

# Data loaders
transform = transforms.Compose([transforms.ToTensor()])
train_dataset = datasets.MNIST('.', train=True, download=True, transform=transform)
train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True)

# Initialize model, optimizer, and loss function
model = SimpleCNN()
optimizer = optim.SGD(model.parameters(), lr=0.01)
criterion = nn.NLLLoss()

# Initialize PrivacyEngine
privacy_engine = PrivacyEngine(
    model,
    batch_size=64,
    sample_size=len(train_loader.dataset),
    epochs=1,
    max_grad_norm=1.0,
)

privacy_engine.attach(optimizer)

# Training loop
model.train()
for epoch in range(1):
    for data, target in train_loader:
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()

# Print privacy statistics
epsilon, best_alpha = optimizer.privacy_engine.get_privacy_spent(1e-5)
print(f"Epsilon: {epsilon}, Delta: 1e-5")
</code></pre>
<p>Error:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[1], line 32
29 criterion = nn.NLLLoss()
31 # Initialize PrivacyEngine
---> 32 privacy_engine = PrivacyEngine(
33 model,
34 batch_size=64,
35 sample_size=len(train_loader.dataset),
36 epochs=1,
37 max_grad_norm=1.0,
38 )
40 privacy_engine.attach(optimizer)
42 # Training loop
TypeError: PrivacyEngine.__init__() got an unexpected keyword argument 'batch_size'
</code></pre>
|
<python><tensorflow><machine-learning><pytorch><pysyft>
|
2024-08-06 13:17:08
| 1
| 726
|
Questioner
|
78,839,105
| 736,662
|
JSONPath extracting
|
<p>I have used an online helper, <a href="https://jsonpath.com/" rel="nofollow noreferrer">https://jsonpath.com/</a>, to find a working JSONPath to extract a value from the attached JSON structure. The expression is as follows:</p>
<pre><code>$.activations[0].bidReference
</code></pre>
<p>How can I translate this into Python? I have tried this:</p>
<pre><code>jdata = response.json()
bidreference = jdata["activations[0].bidReference"]
</code></pre>
<p>But I get a "KeyError".</p>
<p>Here is the JSON response:</p>
<pre><code> {
"orderReference": "88a13d98-f9c9-49cb-8682-2ec2ef27bc73",
"requestReference": "106d1b23-978a-4894-a1e1-e3de2b0c0f02",
"createdUtc": "2024-08-06T12:22:30+00:00",
"receivedUtc": "0001-01-01T00:00:00+00:00",
"startUtc": "2024-08-06T12:30:00+00:00",
"endUtc": "2024-08-06T12:45:00+00:00",
"numberOfActivations": 1,
"activations": [
{
"requestReference": "106d1b23-978a-4894-a1e1-e3de2b0c0f02",
"orderReference": "88a13d98-f9c9-49cb-8682-2ec2ef27bc73",
"bidReference": "e810ff33-4946-425d-9c63-1584125efc9b",
"status": "Activated",
"direction": "UP",
"startUtc": "2024-08-06T12:30:00+00:00",
"endUtc": "2024-08-06T12:45:00+00:00",
"quantity": 15,
"reasonCode": null,
"reasonText": null,
"powerPlantName": "Nore 1",
"powerPlantId": 7303,
"priceAreaId": 1,
"regionId": 10
}
],
"events": [
{
"activationReference": "88a13d98-f9c9-49cb-8682-2ec2ef27bc73",
"activationRequestReference": "106d1b23-978a-4894-a1e1-e3de2b0c0f02",
"eventType": "AcceptedAcknowledgeOut",
"eventMessage": "Activation request from TSO was acknowledged",
"eventUtc": "2024-08-06T12:22:32.604432+00:00"
},
{
"activationReference": "88a13d98-f9c9-49cb-8682-2ec2ef27bc73",
"activationRequestReference": "106d1b23-978a-4894-a1e1-e3de2b0c0f02",
"eventType": "AcceptedOut",
"eventMessage": "1 accepted and 0 rejected activations sent to TSO",
"eventUtc": "2024-08-06T12:22:32.832895+00:00"
},
{
"activationReference": "88a13d98-f9c9-49cb-8682-2ec2ef27bc73",
"activationRequestReference": "106d1b23-978a-4894-a1e1-e3de2b0c0f02",
"eventType": "AcceptedAcknowledgeIn",
"eventMessage": "TSO accepted acknowledge",
"eventUtc": "2024-08-06T12:22:34.387739+00:00"
}
],
"activationType": "ACTIVATION",
"activationRelations": [
{
"reference": "e810ff33-4946-425d-9c63-1584125efc9b",
"requestReference": "106d1b23-978a-4894-a1e1-e3de2b0c0f02",
"powerPlantId": 7303,
"powerPlantName": "Nore 1",
"priceAreaId": 1,
"regionId": 10
}
],
"error": false
}
</code></pre>
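Each step of the JSONPath expression becomes its own subscript in plain Python: `$.activations[0].bidReference` is `["activations"][0]["bidReference"]`, not one composite key string (that composite string is why the `KeyError` is raised). A minimal sketch against a trimmed copy of the response above:

```python
# Trimmed stand-in for response.json(); only the fields needed here.
jdata = {
    "activations": [
        {"bidReference": "e810ff33-4946-425d-9c63-1584125efc9b"}
    ]
}

# $.activations[0].bidReference, step by step:
bidreference = jdata["activations"][0]["bidReference"]
print(bidreference)  # e810ff33-4946-425d-9c63-1584125efc9b
```

For more complex expressions a dedicated library such as `jsonpath-ng` can evaluate the JSONPath string directly, but for a fixed path like this one chained indexing is simpler.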
|
<python><json>
|
2024-08-06 12:47:03
| 2
| 1,003
|
Magnus Jensen
|
78,839,103
| 18,091,040
|
How to return plain text or JSON depending on condition?
|
<p>Is there a way to do something like this using FastAPI:</p>
<pre class="lang-py prettyprint-override"><code>@app.post("/instance/new", tags=["instance"])
async def MyFunction(condition):
    if condition:
        response = {"key": "value"}
        return response
    else:
        return some_big_plain_text
</code></pre>
<p>The way it is coded now, the JSON is returned fine, but <code>some_big_plain_text</code> is not human friendly. If I do: <code>@app.post("/instance/new", tags=["instance"], PlainTextResponse)</code> I get an error when returning a JSON response.</p>
|
<python><fastapi>
|
2024-08-06 12:46:41
| 1
| 640
|
brenodacosta
|
78,839,092
| 11,357,695
|
Defining fixtures with variable import paths
|
<p>I am testing functions from different modules that use an imported helper function from <code>helper.py</code>. I am monkeypatching this helper function in each module function's test (please see below). I would like to put <code>setup_do_thing()</code> and <code>mock_do_thing()</code> in a <code>conftest.py</code> file to cut down on repetition and remove <code>general_mocks.py</code>, but am unable to do this due to the different import paths supplied to <code>monkeypatch.setattr()</code> in <code>test_module_1.py</code> and <code>test_module_2.py</code>.</p>
<p>Is there a way to put the fixture in a single file and supply to multiple test modules?</p>
<p><code>app/helper.py</code>:</p>
<pre><code>def do_thing():
    ...  # do_thing
</code></pre>
<p><code>app/module1.py</code>:</p>
<pre><code>from helper import do_thing

def use_do_thing_for_x():
    ...
    do_thing()
    ...
</code></pre>
<p><code>app/module2.py</code>:</p>
<pre><code>from helper import do_thing

def use_do_thing_for_y():
    ...
    do_thing()
    ...
</code></pre>
<p><code>tests/general_mocks.py</code>:</p>
<pre><code>def mock_do_thing():
    ...  # do_other_thing
</code></pre>
<p><code>tests/test_module_1.py</code>:</p>
<pre><code>import pytest

from general_mocks import mock_do_thing

@pytest.fixture
def setup_do_thing(monkeypatch):
    monkeypatch.setattr('app.module1.do_thing', mock_do_thing)

def test_use_do_thing_for_x(setup_do_thing):
    ...
</code></pre>
<p><code>tests/test_module_2.py</code>:</p>
<pre><code>import pytest

from general_mocks import mock_do_thing

@pytest.fixture
def setup_do_thing(monkeypatch):
    monkeypatch.setattr('app.module2.do_thing', mock_do_thing)

def test_use_do_thing_for_y(setup_do_thing):
    ...
</code></pre>
|
<python><pytest><monkeypatching><pytest-fixtures>
|
2024-08-06 12:43:26
| 1
| 756
|
Tim Kirkwood
|
78,839,067
| 11,159,734
|
Azure SAS token does not expire even though expiry is specified
|
<p>I created a SAS token using the following python function:</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime, timedelta
from azure.storage.blob import BlobServiceClient, BlobSasPermissions, generate_blob_sas

def generate_sas_token(connection_string: str, container_name: str, blob_name: str, duration: int = 60) -> tuple[str, str]:
    blob_service_client = BlobServiceClient.from_connection_string(connection_string)

    sas_token = generate_blob_sas(
        account_name=blob_service_client.account_name,
        container_name=container_name,
        blob_name=blob_name,
        account_key=blob_service_client.credential.account_key,
        permission=BlobSasPermissions(read=True),  # Set the permissions as needed
        expiry=datetime.now() + timedelta(minutes=duration)  # Set the expiry time as needed
    )

    sas_url = f"https://{blob_service_client.account_name}.blob.core.windows.net/{container_name}/{blob_name}?{sas_token}"
    return sas_token, sas_url
</code></pre>
<p>As you can see, I specify the expiry date using a 60-minute offset from the current time, so I would expect the token to be invalid after 60 minutes. However, even 2 hours later the SAS token is still valid. I opened the SAS URL in another browser in private mode to prevent any browser caching from messing something up, but I was still able to download the file just fine.
The <a href="https://learn.microsoft.com/en-us/python/api/azure-storage-blob/azure.storage.blob?view=azure-python#azure-storage-blob-generate-blob-sas" rel="nofollow noreferrer">documentation</a> mentions that the 'expiry' field can be overwritten by an access policy but I checked my storage account and there are no access policies present.</p>
<p>Is there anything I missed as I don't see any errors in the code?</p>
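One likely culprit (a hypothesis, since it depends on the machine's timezone): `datetime.now()` returns naive *local* time, and a naive `expiry` is treated as UTC by the service, so in any timezone ahead of UTC the token effectively lives longer than intended — two hours longer at UTC+2. A sketch of the fix, computing the expiry as an explicit, timezone-aware UTC timestamp:

```python
from datetime import datetime, timedelta, timezone

def sas_expiry(minutes: int = 60) -> datetime:
    # Use an explicit UTC timestamp for the SAS expiry; a naive local
    # datetime can silently shift the effective lifetime by the local
    # UTC offset when it is interpreted as UTC.
    return datetime.now(timezone.utc) + timedelta(minutes=minutes)
```

In the function above this means passing `expiry=sas_expiry(duration)` (or `datetime.now(timezone.utc) + timedelta(minutes=duration)`) to `generate_blob_sas` instead of `datetime.now() + ...`.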
|
<python><azure><azure-blob-storage>
|
2024-08-06 12:36:15
| 1
| 1,025
|
Daniel
|
78,839,059
| 3,183,808
|
Efficient sky searches with HEALPix-alchemy
|
<p>I am trying to implement a cone search for a catalog with the HEALPix-alchemy package. I have the following models set up, based on the examples here: <a href="https://arxiv.org/pdf/2112.06947" rel="nofollow noreferrer">https://arxiv.org/pdf/2112.06947</a></p>
<pre><code>class Field(Base):
    """
    Represents a collection of FieldTiles making up the area of interest.
    """
    id = Column(Integer, primary_key=True, autoincrement=True)
    tiles = relationship(lambda: FieldTile, order_by="FieldTile.id")


class FieldTile(Base):
    """
    A HEALPix tile that is a component of the Field being selected.
    """
    id = Column(ForeignKey(Field.id), primary_key=True)
    hpx = Column(Tile, index=True)
    pk = Column(Integer, primary_key=True, autoincrement=True)


class Source(Base):
    """
    Represents a source and its location.
    """
    id = mapped_column(Integer, primary_key=True, index=True, autoincrement=True)
    name = Column(String, unique=True)
    Heal_Pix_Position = Column(Point, index=True, nullable=False)
</code></pre>
<p>I am then using CDSHealpix to get the HEALPix cells contained within a specified cone.</p>
<p>I construct the Multi Order Coverage map from the HEALPix cells using MOCpy and extract the HEALPix tiles using HEALPix-alchemy.</p>
<p>I then populate the Field table with this collection of tiles.</p>
<p>Finally, I perform the following query:</p>
<pre><code>query = db.query(Source).filter(FieldTile.hpx.contains(Source.Heal_Pix_Position)).all()
</code></pre>
<p>However, this is very inefficient as I am effectively checking each source in my catalog to see if it is contained within a FieldTile.</p>
<p>How can I modify my approach so that I am checking each tile and returning the sources that it contains?</p>
|
<python><sqlalchemy><orm><healpy><object-relational-model>
|
2024-08-06 12:34:32
| 0
| 435
|
jm22b
|
78,838,965
| 9,715,816
|
pip >24.0 causes Expected matching RIGHT_PARENTHESIS for LEFT_PARENTHESIS, after version specifier kubernetes (>=9.0.0a1.0) ; extra == 'all_extras'
|
<p>I am trying to install <code>prefect[kubernetes,azure]==1.4.0</code> using <code>pip</code>.</p>
<p>It get installed successfully with <code>pip<=24.0</code>.</p>
<p>However with any <code>pip>24.0</code> the installation fails. For example:</p>
<pre><code># in a venv
pip install pip==24.2 # released on 2024-07-29
pip install prefect[kubernetes,azure]==1.4.0
</code></pre>
<p>results in</p>
<pre><code>ERROR: Exception:
Traceback (most recent call last):
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_vendor/pkg_resources/__init__.py", line 3367, in _dep_map
return self.__dep_map
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_vendor/pkg_resources/__init__.py", line 3147, in __getattr__
raise AttributeError(attr)
AttributeError: _DistInfoDistribution__dep_map
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_vendor/packaging/requirements.py", line 36, in __init__
parsed = _parse_requirement(requirement_string)
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_vendor/packaging/_parser.py", line 62, in parse_requirement
return _parse_requirement(Tokenizer(source, rules=DEFAULT_RULES))
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_vendor/packaging/_parser.py", line 80, in _parse_requirement
url, specifier, marker = _parse_requirement_details(tokenizer)
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_vendor/packaging/_parser.py", line 118, in _parse_requirement_details
specifier = _parse_specifier(tokenizer)
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_vendor/packaging/_parser.py", line 208, in _parse_specifier
with tokenizer.enclosing_tokens(
File "/usr/lib/python3.10/contextlib.py", line 142, in __exit__
next(self.gen)
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_vendor/packaging/_tokenizer.py", line 189, in enclosing_tokens
self.raise_syntax_error(
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_vendor/packaging/_tokenizer.py", line 167, in raise_syntax_error
raise ParserSyntaxError(
pip._vendor.packaging._tokenizer.ParserSyntaxError: Expected matching RIGHT_PARENTHESIS for LEFT_PARENTHESIS, after version specifier
kubernetes (>=9.0.0a1.0) ; extra == 'all_extras'
~~~~~~~~~~^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_internal/cli/base_command.py", line 105, in _run_wrapper
status = _inner_run()
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_internal/cli/base_command.py", line 96, in _inner_run
return self.run(options, args)
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_internal/cli/req_command.py", line 67, in wrapper
return func(self, options, args)
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_internal/commands/install.py", line 379, in run
requirement_set = resolver.resolve(
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 95, in resolve
result = self._result = resolver.resolve(
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_vendor/resolvelib/resolvers.py", line 546, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_vendor/resolvelib/resolvers.py", line 427, in resolve
failure_causes = self._attempt_to_pin_criterion(name)
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_vendor/resolvelib/resolvers.py", line 239, in _attempt_to_pin_criterion
criteria = self._get_updated_criteria(candidate)
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_vendor/resolvelib/resolvers.py", line 229, in _get_updated_criteria
for requirement in self._p.get_dependencies(candidate=candidate):
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/provider.py", line 247, in get_dependencies
return [r for r in candidate.iter_dependencies(with_requires) if r is not None]
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/provider.py", line 247, in <listcomp>
return [r for r in candidate.iter_dependencies(with_requires) if r is not None]
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 401, in iter_dependencies
for r in self.dist.iter_dependencies():
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_internal/metadata/pkg_resources.py", line 247, in iter_dependencies
return self._dist.requires(extras)
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_vendor/pkg_resources/__init__.py", line 3070, in requires
dm = self._dep_map
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_vendor/pkg_resources/__init__.py", line 3369, in _dep_map
self.__dep_map = self._compute_dependencies()
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_vendor/pkg_resources/__init__.py", line 3379, in _compute_dependencies
reqs.extend(parse_requirements(req))
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_vendor/pkg_resources/__init__.py", line 3434, in __init__
super().__init__(requirement_string)
File "/home/lambis/Projects/Prefect/prefect-flows/.venv/lib/python3.10/site-packages/pip/_vendor/packaging/requirements.py", line 38, in __init__
raise InvalidRequirement(str(e)) from e
pip._vendor.packaging.requirements.InvalidRequirement: Expected matching RIGHT_PARENTHESIS for LEFT_PARENTHESIS, after version specifier
kubernetes (>=9.0.0a1.0) ; extra == 'all_extras'
~~~~~~~~~~^
</code></pre>
|
<python><pip>
|
2024-08-06 12:10:57
| 2
| 2,019
|
Charalamm
|
78,838,932
| 9,640,238
|
Fill NA in pandas column with new UUID
|
<p>My dataframe includes a column with UUIDs. Some values may be missing, and I need to fill them with a new UUID.</p>
<p>Of course, if I do:</p>
<pre class="lang-py prettyprint-override"><code>df["B"].fillna(str(uuid.uuid4()))
</code></pre>
<p>all missing entries will be filled with the same value, which is not what I want. I would instead like the <code>uuid4()</code> function to be executed every time.</p>
<p>How can I achieve this result?</p>
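`fillna` evaluates its argument once, before the call, so every hole gets that single value. One sketch of a per-row alternative (column name and data are illustrative): build a boolean mask of the missing entries and assign one freshly generated UUID per masked row.

```python
import uuid

import pandas as pd

# Illustrative frame: one real UUID and two missing entries.
df = pd.DataFrame({"B": [str(uuid.uuid4()), None, None]})

# Generate exactly one new UUID per missing row, then assign via the mask.
mask = df["B"].isna()
df.loc[mask, "B"] = [str(uuid.uuid4()) for _ in range(mask.sum())]
```

An equivalent one-liner is `df["B"] = df["B"].apply(lambda v: v if pd.notna(v) else str(uuid.uuid4()))`; the mask version avoids calling the lambda on the non-missing rows.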
|
<python><pandas>
|
2024-08-06 12:03:08
| 2
| 2,690
|
mrgou
|