| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,873,281
| 15,587,184
|
Creating and fixing a PDF with margins and text styles in ReportLab
|
<p>I'm having an issue creating a report using ReportLab in Python.</p>
<p>I'm using a JSON file from an AWS S3 bucket. Here is a sample of the information:</p>
<pre><code>from reportlab.lib.pagesizes import A1
from reportlab.pdfgen import canvas
from reportlab.lib.units import cm
company_info = [
    {'title': 'What were Harley\'s total sales for Q3?',
     'context': 'Harley\'s total sales for Q3 were \n **$15 million**, representing a **10% increase** compared to Q2.'},
    {'title': 'Which region showed the highest sales?',
     'context': 'The **North American region** showed the highest sales, contributing **$8 million** to the total.'},
    {'title': 'What was the percentage increase in sales for the European market?',
     'context': 'The **European market** experienced a \n **12% increase** in sales, totaling **$4 million** for Q3.'},
    {'title': 'Did Harley\'s introduce any new products in Q3?',
     'context': ('In Q3, Harley\'s made a significant impact with the introduction of **two new products** that have been well-received by the market. '
                 'The **Harley Davidson X1** is a cutting-edge motorcycle designed with advanced technology and performance enhancements, catering to the evolving needs of enthusiasts. '
                 'Alongside, the **Harley Davidson Pro Series** offers a range of high-performance accessories aimed at enhancing the riding experience. '
                 'These new products have been introduced in response to customer feedback and market trends, reflecting Harley\'s commitment to innovation and quality. '
                 'The product launches have been supported by comprehensive marketing efforts, including promotional events and digital campaigns, which have effectively generated excitement and increased consumer interest.')},
    {'title': 'What was the impact of the new product launches on sales?',
     'context': ('The recent product launches had a substantial impact on Harley\'s sales performance for Q3. The introduction of the **Harley Davidson X1** and the **Harley Davidson Pro Series** '
                 'contributed an additional **$2 million** in revenue, accounting for approximately **13%** of the total Q3 sales. '
                 'These new products not only boosted overall sales but also enhanced Harley\'s market position by attracting new customers and increasing repeat purchases. '
                 'The successful integration of these products into the company\'s existing lineup demonstrates Harley\'s ability to innovate and adapt to market demands. '
                 'Ongoing customer feedback and sales data will continue to inform product development and marketing strategies, ensuring that Harley\'s maintains its competitive edge and meets consumer expectations.')},
]
</code></pre>
<p>I'm using this code to generate the report, and I want to include a background image.</p>
<pre><code>def create_report(company_info, image_path, output_pdf_path):
    # Define margins
    top_margin = 12 * cm
    bottom_margin = 8 * cm
    left_margin = 2 * cm
    right_margin = 2 * cm

    # Create a canvas object
    c = canvas.Canvas(output_pdf_path, pagesize=A1)
    # Draw the background image
    c.drawImage(image_path, 0, 0, width=A1[0], height=A1[1])

    # Define text properties
    font_size = 18
    c.setFont("Helvetica-Bold", font_size)
    # Calculate the starting y position
    y_position = A1[1] - top_margin

    # Write the content
    for i, item in enumerate(company_info):
        question = item.get('title', '')
        answer = item.get('context', '')

        # Write question number and question
        c.setFont("Helvetica-Bold", font_size)
        c.drawString(left_margin, y_position, f'{i + 1}. {question}')
        y_position -= font_size + 10  # Adjust for line spacing

        # Write answer
        c.setFont("Helvetica", font_size)
        c.drawString(left_margin, y_position, answer)
        y_position -= 2 * font_size + 30  # Adjust for space between question and answer

        # Check if y position is below the bottom margin
        if y_position < bottom_margin:
            c.showPage()
            c.drawImage(image_path, 0, 0, width=A1[0], height=A1[1])
            y_position = A1[1] - top_margin
            c.setFont("Helvetica-Bold", font_size)  # Reapply font settings for the new page

    # Save the PDF
    c.save()

bg = 'bg_temp.jpg'  # Path to your background image
output_pdf_path = 'report_harleys.pdf'  # Path where the PDF will be saved
create_report(company_info, bg, output_pdf_path)
</code></pre>
<p>I haven't yet been able to achieve the following:</p>
<ul>
<li>Justify the text (both the title and the context) so that it does not extend beyond the right margin of the page.</li>
<li>Ensure that new lines specified by \n and bold text indicated by ** in my JSON are correctly reflected in the PDF.</li>
</ul>
<p>Here is what I'm currently getting:</p>
<p><a href="https://i.sstatic.net/CuVJftrk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CuVJftrk.png" alt="enter image description here" /></a></p>
<p>I would like it to look something like this:</p>
<p><a href="https://i.sstatic.net/9nnoJpAK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9nnoJpAK.png" alt="enter image description here" /></a></p>
<p>Notice how a \n produces a new line and the text between ** is rendered bold? Compare with the first item of <code>company_info</code>.</p>
<p>What might I be missing? I've tried numerous tutorials but haven't achieved the desired outcome.</p>
|
<python><pdf><reportlab>
|
2024-08-15 00:16:24
| 1
| 809
|
R_Student
|
78,873,223
| 5,082,048
|
Draw arrow from data coordinates to AnnotationBBox
|
<p>I have a figure with two axes, and an annotation box that sits below the top axis. I want an arrow that starts at the data point in the top axis and points to the top-center of the box (currently it points to (0,0), see picture below). How can I do this?
<a href="https://i.sstatic.net/wjRKgfUY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wjRKgfUY.png" alt="enter image description here" /></a></p>
<p>The position of the box is influenced by</p>
<ul>
<li>the data coordinates (in x dimension)</li>
<li>the axes fraction (in y dimension)</li>
<li>the box is moved by a certain amount of points (using boxcoords and xybox, respectively)</li>
<li>and the box is moved w.r.t. the anchorpoint (using the box_alignment
argument).</li>
</ul>
<p>Here is the code that generated the figure:</p>
<pre><code>%matplotlib ipympl
import matplotlib.pyplot as plt
from matplotlib.offsetbox import AnnotationBbox, DrawingArea
from matplotlib.patches import FancyArrowPatch
# Create figure and axes
fig = plt.figure()
ax2 = fig.add_subplot(212)
ax1 = fig.add_subplot(211)
ax1.plot(0.3,0.4,'ok')
# Dimensions and position of the annotation box
box_width, box_height = 20, 20
box_position = (0.2, 0) # Place it in the center of the axes
# Create the box
box = DrawingArea(box_width, box_height)
# Create the AnnotationBbox
annotation_box = AnnotationBbox(box,
box_position, # xy, controlled by xycoords
xycoords=('data', 'axes fraction'),
xybox = (0,-30), # additional shift, controlled by boxcoords
boxcoords="offset points",
pad=0.5,
box_alignment=(0.5,1),
annotation_clip=False)
# Add the annotation box to the desired axis (e.g., ax1)
ax1.add_artist(annotation_box)
# Ensure that this annotation is drawn on top of everything else by adjusting the zorder
annotation_box.set_zorder(10)
arrow = FancyArrowPatch(
posA=(0.3,0.4),
posB=(0,0), # should be updated to point to the annotation box
arrowstyle="->",
mutation_scale=10,
color="black",
linewidth=1,
clip_on = False
)
ax1.add_patch(arrow)
# Show plot
plt.show()
</code></pre>
<p>I have had a look at matplotlib's <a href="https://matplotlib.org/stable/users/explain/artists/transforms_tutorial.html" rel="nofollow noreferrer">transformation tutorial</a> but I cannot seem to get it right.</p>
<p>The ideal solution would work independently of which coordinate systems are used to place the AnnotationBbox (e.g. if xycoords were changed to something else, it would be ideal if the solution still worked).</p>
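<p>One sketch that stays independent of the chosen <code>xycoords</code>: draw the figure once so the box's final position is known, read its extent back in display coordinates, and invert <code>ax1</code>'s data transform to get the arrow target. The headless Agg backend and <code>get_renderer()</code> are assumptions here, not part of the question's setup.</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, assumed for this sketch
import matplotlib.pyplot as plt
from matplotlib.offsetbox import AnnotationBbox, DrawingArea
from matplotlib.patches import FancyArrowPatch

fig = plt.figure()
ax2 = fig.add_subplot(212)
ax1 = fig.add_subplot(211)
ax1.plot(0.3, 0.4, 'ok')

box = DrawingArea(20, 20)
annotation_box = AnnotationBbox(box, (0.2, 0),
                                xycoords=('data', 'axes fraction'),
                                xybox=(0, -30), boxcoords="offset points",
                                pad=0.5, box_alignment=(0.5, 1),
                                annotation_clip=False)
ax1.add_artist(annotation_box)

# Draw once so the box's layout is finalized, then read its extent back
# in display (pixel) coordinates and convert to ax1's data coordinates.
fig.canvas.draw()
bbox = annotation_box.get_window_extent(fig.canvas.get_renderer())
top_center = ax1.transData.inverted().transform(((bbox.x0 + bbox.x1) / 2, bbox.y1))

arrow = FancyArrowPatch(posA=(0.3, 0.4), posB=tuple(top_center),
                        arrowstyle="->", mutation_scale=10,
                        color="black", linewidth=1, clip_on=False)
ax1.add_patch(arrow)
```

<p>Because the target is recovered from the rendered extent rather than from <code>xycoords</code>, changing how the box is placed should not require changing the arrow code; the cost is the extra draw (and a redraw if the figure is later resized).</p>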
|
<python><matplotlib>
|
2024-08-14 23:41:57
| 1
| 3,950
|
Arco Bast
|
78,873,119
| 6,595,551
|
How to prevent ruff formatter from adding a newline after module-level docstring?
|
<p>I'm using <code>ruff</code> as a replacement for the <code>black</code> formatter, but I want to keep the diff to a minimum. I'm noticing that it automatically inserts a newline between the module-level docstring and the first import statement.</p>
<p>For example, given this code:</p>
<pre class="lang-py prettyprint-override"><code>"""base api extension."""
import abc
from typing import List, Optional, Type
</code></pre>
<p>After running:</p>
<pre class="lang-bash prettyprint-override"><code>ruff format file.py --diff
</code></pre>
<p>It gives me this:</p>
<pre class="lang-bash prettyprint-override"><code>@@ -1,4 +1,5 @@
"""base api extension."""
+
import abc
from typing import List, Optional, Type
</code></pre>
<p>If I format the file, the output is like this:</p>
<pre class="lang-py prettyprint-override"><code>"""base api extension."""
import abc
from typing import List, Optional, Type
</code></pre>
<p>I want to keep the original formatting without that added newline. I couldn't find any setting I could use to ignore this. Is there a way to configure ruff to prevent this behaviour? Thank you!</p>
<p>My <code>pyproject.toml</code> before adding ruff:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.black]
line_length = 120
include = '\.py$'
[tool.isort]
multi_line_output = 3
include_trailing_comma = true
force_grid_wrap = 0
line_length = 120
profile = "black"
</code></pre>
<p>After adding <code>ruff</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.ruff]
line-length = 120
</code></pre>
<pre><code>Context:
Python: 3.11.9
Ruff Version: 0.5.7
</code></pre>
|
<python><python-black><ruff>
|
2024-08-14 22:41:01
| 1
| 1,647
|
Iman Shafiei
|
78,873,083
| 2,774,885
|
Choosing the non-empty group when I have multiple regex alternatives in a Python regex
|
<p>I have a regex defined as such, it's two regular expressions separated by an OR - I want to find lines from a text file that match either of these regexes...</p>
<pre><code>my_regex = re.compile(r'^\s+V.*CORE.*?: (\S+)|^\s+NP._VDDC_V: (\S+)')
# ^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^
# regex1 regex2
# now we do the search...
my_match = my_regex.search(line)
</code></pre>
<p>The lines I am searching will match regex1 or regex2 or neither (but never both!). The value I'm interested in is the <code>\S+</code> enclosed in grouping parentheses in both regexes.</p>
<p>When the line matches regex1, I get what I want from <code>my_match.group(1)</code>. But when the line matches regex2, I have to get what I want from <code>my_match.group(2)</code>.</p>
<p>Is there a super slick pythonic way to say "give me the group from the thing that <strong>actually</strong> matched" ?</p>
<p>I know that I can do something like dig through the match object, looking at <code>my_match.regs</code>... the group that didn't match will have <code>.regs[n] = (-1,-1)</code> and the one that did match will have actual offsets ... I could look at them in order and find the one that is NOT <code>(-1,-1)</code>... but it seems like there's always some more elegant way to do this that I'm just not aware of.</p>
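<p>For reference, a sketch of the "pick whichever group matched" idea described above, expressed with a generator over <code>groups()</code> instead of inspecting <code>.regs</code>; the sample lines are made up to fit the patterns:</p>

```python
import re

my_regex = re.compile(r'^\s+V.*CORE.*?: (\S+)|^\s+NP._VDDC_V: (\S+)')

def matched_value(line):
    """Return the captured value from whichever alternative matched, else None."""
    m = my_regex.search(line)
    if m:
        # Unmatched groups come back as None, so filter them out.
        return next(g for g in m.groups() if g is not None)
    return None

print(matched_value("  VDD_CORE_V: 0.75"))   # -> 0.75
print(matched_value("  NPX_VDDC_V: 1.1"))    # -> 1.1
print(matched_value("no voltage here"))      # -> None
```

<p>Named groups would work too if the two alternatives were given distinct names, but with a shared extraction like this the filter keeps the pattern unchanged.</p>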
|
<python><regex>
|
2024-08-14 22:22:08
| 2
| 1,028
|
ljwobker
|
78,872,939
| 22,312,722
|
Homebrew Python not working after running conda deactivate
|
<p>I am having so much trouble with what I assume is a conflict between conda and homebrew. When I run my code with the line <code>print(sys.executable)</code>, this is the output:</p>
<p><code>/Users/user/opt/anaconda3/bin/python</code></p>
<p>Even though my interpreter in VSCode is set to this:</p>
<p><code>/opt/homebrew/bin/python3.9</code></p>
<p>So I tried doing <code>conda deactivate</code>. But when I did this, I get this error:</p>
<p><code>zsh: command not found: python</code></p>
<p>I've seen some advice online saying it might have to do with conda being initialized automatically. So I went into my .bash_profile and commented out all of this:</p>
<pre><code># >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/Users/user/opt/anaconda3/bin/conda' 'shell.bash' 'hook' 2$
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/Users/user/opt/anaconda3/etc/profile.d/conda.sh" ]; then
        . "/Users/user/opt/anaconda3/etc/profile.d/conda.sh"
    else
        export PATH="/Users/user/opt/anaconda3/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda initialize <<<
</code></pre>
<p>and I did something similar in .zshrc</p>
<p>But this seems to be producing other problems. I can't deactivate the conda environment now, and the output of <code>print(sys.executable)</code> remains the same.</p>
<p>If anyone needs more info from me, I will gladly provide it.</p>
|
<python><conda>
|
2024-08-14 21:13:11
| 0
| 761
|
simey
|
78,872,884
| 8,062,181
|
Multi-index lookup between 2 dataframes
|
<p>I have 2 dataframes: one that acts as a lookup dataframe, and another that I insert values into where numerous rows match between them.</p>
<p>The lookup dataframe looks like this:</p>
<pre><code>data1 = {
    'store': ['1', '1', '1', '1', '1', '1'],
    'department': ['produce', 'produce', 'produce', 'bakery', 'meat', 'bakery'],
    'task': ['water', 'stock', 'water', 'bread', 'stock', 'doughnuts'],
    'employee': ['232', '111', '232', '121', '333', '121'],
    'step': ['A', 'B', 'C', 'A', 'C', 'A'],
    'month_year_date': ['1991-01', '1991-01', '1991-01', '1991-02', '1991-03', '1991-02'],
    'work_time': [1.2, 2.4, 1.1, 4.0, 1.5, 3.5]}

df_1 = pd.DataFrame.from_dict(data1)
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: right;">store</th>
<th style="text-align: right;">department</th>
<th style="text-align: right;">task</th>
<th style="text-align: right;">employee</th>
<th style="text-align: right;">step</th>
<th style="text-align: right;">month_year_date</th>
<th style="text-align: right;">work_time</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">produce</td>
<td style="text-align: right;">water</td>
<td style="text-align: right;">232</td>
<td style="text-align: right;">A</td>
<td style="text-align: right;">1991-01</td>
<td style="text-align: right;">1.2</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">produce</td>
<td style="text-align: right;">stock</td>
<td style="text-align: right;">111</td>
<td style="text-align: right;">B</td>
<td style="text-align: right;">1991-01</td>
<td style="text-align: right;">2.4</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">produce</td>
<td style="text-align: right;">water</td>
<td style="text-align: right;">232</td>
<td style="text-align: right;">C</td>
<td style="text-align: right;">1991-01</td>
<td style="text-align: right;">1.1</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">bakery</td>
<td style="text-align: right;">bread</td>
<td style="text-align: right;">121</td>
<td style="text-align: right;">A</td>
<td style="text-align: right;">1991-02</td>
<td style="text-align: right;">4.0</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">meat</td>
<td style="text-align: right;">stock</td>
<td style="text-align: right;">333</td>
<td style="text-align: right;">C</td>
<td style="text-align: right;">1991-03</td>
<td style="text-align: right;">1.5</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">bakery</td>
<td style="text-align: right;">doughnuts</td>
<td style="text-align: right;">121</td>
<td style="text-align: right;">A</td>
<td style="text-align: right;">1991-02</td>
<td style="text-align: right;">3.5</td>
</tr>
</tbody>
</table></div>
<p>and the insert dataframe looks like this:</p>
<pre><code>data2 = {
    'store': ['1', '1', '1', '1', '1', '1'],
    'department': ['produce', 'produce', 'produce', 'bakery', 'meat', 'bakery'],
    'task': ['water', 'stock', 'water', 'bread', 'stock', 'doughnuts'],
    'employee': ['232', '111', '232', '121', '333', '121'],
    'step': ['A', 'B', 'A', 'A', 'C', 'A'],
    'month_year_date': ['1991-01', '1991-01', '1991-02', '1991-02', '1991-03', '1991-02']}

df_2 = pd.DataFrame.from_dict(data2)
df_2[['A', 'B', 'C']] = 0
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: right;">store</th>
<th style="text-align: right;">department</th>
<th style="text-align: right;">task</th>
<th style="text-align: right;">employee</th>
<th style="text-align: right;">step</th>
<th style="text-align: right;">month_year_date</th>
<th style="text-align: right;">A</th>
<th style="text-align: right;">B</th>
<th style="text-align: right;">C</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">produce</td>
<td style="text-align: right;">water</td>
<td style="text-align: right;">232</td>
<td style="text-align: right;">A</td>
<td style="text-align: right;">1991-01</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">produce</td>
<td style="text-align: right;">stock</td>
<td style="text-align: right;">111</td>
<td style="text-align: right;">B</td>
<td style="text-align: right;">1991-01</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">produce</td>
<td style="text-align: right;">water</td>
<td style="text-align: right;">232</td>
<td style="text-align: right;">A</td>
<td style="text-align: right;">1991-02</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">bakery</td>
<td style="text-align: right;">bread</td>
<td style="text-align: right;">121</td>
<td style="text-align: right;">A</td>
<td style="text-align: right;">1991-02</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">meat</td>
<td style="text-align: right;">stock</td>
<td style="text-align: right;">333</td>
<td style="text-align: right;">C</td>
<td style="text-align: right;">1991-03</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">bakery</td>
<td style="text-align: right;">doughnuts</td>
<td style="text-align: right;">121</td>
<td style="text-align: right;">A</td>
<td style="text-align: right;">1991-02</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
</tr>
</tbody>
</table></div>
<p>My goal is to modify <code>df_2</code> so that each row is a unique combination of <code>store</code>, <code>department</code>, <code>task</code>, <code>employee</code>, <code>step</code>, and <code>month_year_date</code>, with the values for columns <code>A</code>, <code>B</code>, <code>C</code> corresponding to the <code>work_time</code> from <code>df_1</code>. I'm currently using something like this:</p>
<pre><code>for i in range(len(df_2)):
    indx = np.where(
        (df_2.department[i] == df_1.department)
        & (df_2.task[i] == df_1.task)
        & (df_2.employee[i] == df_1.employee)
        & (df_2.month_year_date[i] == df_1.month_year_date))
    for n in range(0, len(indx[0])):
        df_2.loc[i, df_1.step[indx[0][n]]] = df_1.loc[indx[0][n]].work_time
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: right;">store</th>
<th style="text-align: right;">department</th>
<th style="text-align: right;">task</th>
<th style="text-align: right;">employee</th>
<th style="text-align: right;">step</th>
<th style="text-align: right;">month_year_date</th>
<th style="text-align: right;">A</th>
<th style="text-align: right;">B</th>
<th style="text-align: right;">C</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">produce</td>
<td style="text-align: right;">water</td>
<td style="text-align: right;">232</td>
<td style="text-align: right;">A</td>
<td style="text-align: right;">1991-01</td>
<td style="text-align: right;">1.2</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">1.1</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">produce</td>
<td style="text-align: right;">stock</td>
<td style="text-align: right;">111</td>
<td style="text-align: right;">B</td>
<td style="text-align: right;">1991-01</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">2.4</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">produce</td>
<td style="text-align: right;">water</td>
<td style="text-align: right;">232</td>
<td style="text-align: right;">A</td>
<td style="text-align: right;">1991-02</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">bakery</td>
<td style="text-align: right;">bread</td>
<td style="text-align: right;">121</td>
<td style="text-align: right;">A</td>
<td style="text-align: right;">1991-02</td>
<td style="text-align: right;">4.0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">meat</td>
<td style="text-align: right;">stock</td>
<td style="text-align: right;">333</td>
<td style="text-align: right;">C</td>
<td style="text-align: right;">1991-03</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">1.5</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">bakery</td>
<td style="text-align: right;">doughnuts</td>
<td style="text-align: right;">121</td>
<td style="text-align: right;">A</td>
<td style="text-align: right;">1991-02</td>
<td style="text-align: right;">3.5</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
</tr>
</tbody>
</table></div>
<p>which yields what I'm looking for, but it takes a significant amount of time on larger dataframes (2+ million rows).</p>
<p>Is there a more efficient way to accomplish this task? I know that these nested <code>for</code> loops scale quadratically with the size of the dataframes, and would love to speed this step up!</p>
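<p>A sketch of one vectorized alternative, untested at the 2M-row scale mentioned above: pivot <code>df_1</code> so each <code>step</code> becomes its own column, then attach those columns to <code>df_2</code> with a single left merge on the key columns.</p>

```python
import pandas as pd

# The question's sample data.
data1 = {
    'store': ['1', '1', '1', '1', '1', '1'],
    'department': ['produce', 'produce', 'produce', 'bakery', 'meat', 'bakery'],
    'task': ['water', 'stock', 'water', 'bread', 'stock', 'doughnuts'],
    'employee': ['232', '111', '232', '121', '333', '121'],
    'step': ['A', 'B', 'C', 'A', 'C', 'A'],
    'month_year_date': ['1991-01', '1991-01', '1991-01', '1991-02', '1991-03', '1991-02'],
    'work_time': [1.2, 2.4, 1.1, 4.0, 1.5, 3.5]}
data2 = {
    'store': ['1', '1', '1', '1', '1', '1'],
    'department': ['produce', 'produce', 'produce', 'bakery', 'meat', 'bakery'],
    'task': ['water', 'stock', 'water', 'bread', 'stock', 'doughnuts'],
    'employee': ['232', '111', '232', '121', '333', '121'],
    'step': ['A', 'B', 'A', 'A', 'C', 'A'],
    'month_year_date': ['1991-01', '1991-01', '1991-02', '1991-02', '1991-03', '1991-02']}
df_1 = pd.DataFrame(data1)
df_2 = pd.DataFrame(data2)

keys = ['store', 'department', 'task', 'employee', 'month_year_date']

# One column per step value; rows with no entry for a step get 0.
wide = (df_1.pivot_table(index=keys, columns='step',
                         values='work_time', fill_value=0)
            .reset_index())

# Keys absent from df_1 produce NaN after the merge, hence the final fillna.
out = df_2.merge(wide, on=keys, how='left').fillna(0)
```

<p>This replaces the row-by-row <code>np.where</code> scan with one hash-based merge; on the sample data it reproduces the table shown above. If the same key appears with the same step more than once in <code>df_1</code>, <code>pivot_table</code> would aggregate (mean by default), which may or may not be the desired behaviour.</p>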
|
<python><pandas><dataframe><indexing>
|
2024-08-14 20:51:08
| 1
| 411
|
Luxo_Jr
|
78,872,754
| 251,589
|
Onboarding a new codebase to mypy - silencing errors one-by-one
|
<p>I am currently converting an existing codebase to <code>mypy</code>.</p>
<p>There are ~500 type errors. I am not familiar with the code, so fixing all of the errors would be time-consuming.</p>
<p>I would like to:</p>
<ol>
<li>Run <code>mypy</code></li>
<li>For each error listed, edit the code and add a <code># type: ignore # Legacy types. Ignoring for now. See <ticket number>. If you edit this code, please fix the types.</code></li>
</ol>
<p>What is the best way to do this?</p>
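<p>A sketch of step 2, under the assumption that mypy's <code>file:line: error:</code> output format is what gets parsed; the ticket reference is a placeholder exactly as in the question:</p>

```python
import re
from collections import defaultdict

IGNORE = ("  # type: ignore  # Legacy types. Ignoring for now. "
          "See <ticket number>. If you edit this code, please fix the types.")

def parse_error_locations(mypy_output):
    """Map each file path to the set of 1-based line numbers mypy flagged."""
    locations = defaultdict(set)
    for report in mypy_output.splitlines():
        # Matches "path.py:12: error: ..." and "path.py:12:5: error: ..."
        m = re.match(r"(.+?):(\d+):(?:\d+:)? error:", report)
        if m:
            locations[m.group(1)].add(int(m.group(2)))
    return locations

def annotate(locations):
    """Append the ignore comment to every flagged line, at most once per line."""
    for path, line_numbers in locations.items():
        with open(path) as f:
            lines = f.readlines()
        for n in sorted(line_numbers):
            if "# type: ignore" not in lines[n - 1]:
                lines[n - 1] = lines[n - 1].rstrip("\n") + IGNORE + "\n"
        with open(path, "w") as f:
            f.writelines(lines)
```

<p>Feeding <code>parse_error_locations</code> the captured stdout of a <code>mypy</code> run (e.g. via <code>subprocess.run</code>) and then calling <code>annotate</code> would implement the per-error silencing; errors that span continuation lines or need <code># type: ignore[code]</code> specificity would still need a manual pass.</p>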
|
<python><mypy>
|
2024-08-14 20:03:00
| 1
| 27,385
|
sixtyfootersdude
|
78,872,343
| 1,313,890
|
Efficient Merge Code in Pyspark / Databricks
|
<p>I have a library built out for handling MERGE statements on Databricks delta tables. The code for these statements is pretty straightforward and for almost every table resembles the following:</p>
<pre><code>def execute_call_data_pipeline(self, df_mapped_data: DataFrame, call_data_type: str = 'columns:mapped'):
    dt_call_data = get_delta_table(self.Spark, self.Catalog, self.Schema, 'call_data')

    dt_call_data.alias('old').merge(
        source=df_mapped_data.alias('new'),
        condition=expr('old.callDataId = new.callDataId')
    ).whenMatchedUpdate(set=
        {
            'callId': col('new.callId') if 'callId' in df_mapped_data.columns else col('old.callId'),
            'callDataMapId': col('new.callDataMapId') if 'callDataMapId' in df_mapped_data.columns else col('old.callDataMapId'),
            'callDataType': col('new.callDataType') if 'callDataType' in df_mapped_data.columns else col('old.callDataType'),
            'legacyColumn': col('new.legacyColumn') if 'legacyColumn' in df_mapped_data.columns else col('old.legacyColumn'),
            'dataValue': col('new.dataValue') if 'dataValue' in df_mapped_data.columns else col('old.dataValue'),
            'isEncrypted': col('new.isEncrypted') if 'isEncrypted' in df_mapped_data.columns else col('old.isEncrypted'),
            'silverUpdateOn': to_date(lit(datetime.now(timezone.utc)), 'yyyy-MM-dd HH:mm:ss.S')
        }
    ).whenNotMatchedInsert(values=
        {
            'callId': col('new.callId'),
            'callDataMapId': col('new.callDataMapId') if 'callDataMapId' in df_mapped_data.columns else lit(None),
            'callDataType': col('new.callDataType') if 'callDataType' in df_mapped_data.columns else lit(call_data_type),
            'legacyColumn': col('new.legacyColumn') if 'legacyColumn' in df_mapped_data.columns else lit(None),
            'dataValue': col('new.dataValue'),
            'isEncrypted': col('new.isEncrypted') if 'isEncrypted' in df_mapped_data.columns else lit(False),
            'silverCreateOn': to_date(lit(datetime.now(timezone.utc)), 'yyyy-MM-dd HH:mm:ss.S')
        }
    ).execute()
</code></pre>
<p>The code executes fine but is rather tedious to write out as I have to spell out every column on every table across 3 different catalogs (medallion approach lakehouse). I was looking to make this a bit more efficient to write, so I abstracted out the dictionary creation (since the rules were almost always the same for every column) and developed the following:</p>
<pre><code>def get_column_value(df: DataFrame, column_name: str, table_alias: str = 'new', default_value=None) -> Column:
    default_value = default_value if default_value is Column else lit(default_value)
    return col(f'{table_alias}.{column_name}') if column_name in df.columns else default_value


def build_update_values(df: DataFrame, update_on_field, update_columns: list, new_alias: str = 'new', old_alias: str = 'old') -> dict:
    update_values = dict(map(lambda x: (x, get_column_value(df, x, new_alias, col(f'{old_alias}.{x}'))), update_columns))
    update_values.update({update_on_field: to_date(lit(datetime.now(timezone.utc)), 'yyyy-MM-dd HH:mm:ss.S')})
    return update_values


def build_insert_values(df: DataFrame, create_on_field: str, update_columns: list, field_defaults: dict = None, table_alias: str = 'new', identity_col: str = None) -> dict:
    column_list = update_columns + [identity_col] if identity_col is not None else update_columns
    column_defaults = dict(map(lambda c: (c, None), update_columns))
    if field_defaults is not None:
        column_defaults = column_defaults | field_defaults
    insert_values = dict(map(lambda x: (x, get_column_value(df, x, table_alias, column_defaults[x])), column_defaults.keys()))
    insert_values.update({create_on_field: to_date(lit(datetime.now(timezone.utc)), 'yyyy-MM-dd HH:mm:ss.S')})
    return insert_values


def build_update_columns(df: DataFrame, skip_columns: list) -> list:
    return list(set(df.columns) - set(skip_columns))


def execute_call_data_pipeline(self, df_mapped_data: DataFrame, call_data_type: str = 'columns:mapped'):
    dt_call_data = get_delta_table(self.Spark, self.Catalog, self.Schema, 'call_data')

    insert_defaults = dict([('callDataType', call_data_type),
                            ('isEncrypted', False)])
    update_columns = build_update_columns(df_mapped_data, self.__helper.SkipColumns + ['callDataId'])
    update_values = build_update_values(df_mapped_data, 'silverUpdateOn', update_columns)
    insert_values = build_insert_values(df_mapped_data, 'silverCreateOn', update_columns, field_defaults=insert_defaults)

    dt_call_data.alias('old').merge(
        source=df_mapped_data.alias('new'),
        condition=expr('(old.callDataId = new.callDataId) OR (old.callId = new.callId AND old.callDataType = new.callDataType AND old.legacyColumn = new.legacyColumn)')
    ).whenMatchedUpdate(set=update_values).whenNotMatchedInsert(values=insert_values).execute()
</code></pre>
<p>This makes the code MUCH easier to write but I'm noticing a SIGNIFICANT performance degradation. Jobs that used to execute in a handful of minutes now will take hours (or sometimes just hang). I'm admittedly not that comfortable with debugging the nitty gritty in Spark using the SparkUI, but the only thing that really seemed to jump out at me was some significant memory spill (~20GB although I wasn't seeing any out of memory errors).</p>
<p>I ran a few experiments to confirm it was the code changes that caused the issue and that is definitely the case, but I'm confused as to why. I'd love to find a way to keep the new code as it's much faster to write when onboarding new tables, but it's currently worthless given the performance. Can anyone point me towards what the problem may be or even where I might want to look in the SparkUI to identify the issue?</p>
<p>I am on Azure Databricks working with a Unity catalog.</p>
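<p>One Python detail worth double-checking in <code>get_column_value</code>: <code>default_value is Column</code> compares identity with the class object itself, so it is <code>False</code> for every <code>Column</code> instance, and every default, including the <code>col('old.x')</code> fallbacks, takes the <code>lit(...)</code> branch. A standalone illustration with a stand-in class:</p>

```python
class Column:
    """Stand-in for pyspark.sql.Column, just to show identity vs. type checks."""

c = Column()
print(c is Column)            # False: `is` tests identity with the class object
print(isinstance(c, Column))  # True: the membership test presumably intended

# So the guard would presumably read:
# default_value = default_value if isinstance(default_value, Column) else lit(default_value)
```

<p>Whether this contributes to the slowdown depends on how <code>lit()</code> treats <code>Column</code> inputs, so it may be harmless, but it is a cheap thing to rule out before digging through the SparkUI; the OR in the new merge condition, which the original condition did not have, is another candidate since it can prevent merge-key pruning.</p>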
|
<python><pyspark><databricks><azure-databricks>
|
2024-08-14 17:52:00
| 1
| 547
|
Shane McGarry
|
78,872,318
| 11,106,572
|
pyinstaller creating a package more than a GB for a simple python script having import pandas, and some operations
|
<p>When I run pyinstaller on a script as simple as <code>sam.py</code> below, the resulting package is 1.6 GB. How can I justify sharing such a big file with someone? Is this normal?</p>
<pre><code>import pandas as pd
d = pd.read_csv("~/a.csv")
cols = ['a', 'b']
d[cols].to_csv("~/new.csv", index = False)
</code></pre>
<pre><code>Package size
1.6G _internal
48M sam
</code></pre>
<p>The package was generated with <code>pyinstaller sam.spec</code>, where the spec file was initially produced by <code>pyinstaller sam.py</code> and later edited to increase the recursion limit to resolve related issues:</p>
<pre><code># -*- mode: python ; coding: utf-8 -*-
import sys
sys.setrecursionlimit(5000) # Increase recursion limit
</code></pre>
<p>The <code>_internal</code> package shows the following large files:</p>
<pre><code>126M libLLVM-14.so
95M botocore
72M libmkl_core.so.2
65M libmkl_avx512.so.2
63M libmkl_intel_thread.so.2
52M libmkl_avx.so.2
49M libmkl_mc3.so.2
49M libmkl_avx2.so.2
48M libmkl_mc.so.2
42M libmkl_tbb_thread.so.2
41M libmkl_def.so.2
38M libmkl_pgi_thread.so.2
34M libarrow.so.1400
31M libpython3.12.so.1.0
31M libmkl_gnu_thread.so.2
31M libicudata.so.73
31M bokeh
31M babel
29M libmkl_sequential.so.2
24M PyQt5
23M pyarrow
23M pandas
22M libmkl_intel_lp64.so.2
18M libstdc++.so.6.0.29
18M libmkl_intel_ilp64.so.2
18M libmkl_gf_lp64.so.2
17M libomptarget.rtl.level0.so
16M libmkl_vml_avx.so.2
16M libmkl_rt.so.2
16M lib-dynload
</code></pre>
|
<python><pandas><ubuntu><pyinstaller>
|
2024-08-14 17:46:18
| 1
| 318
|
cryptickey
|
78,872,300
| 893,254
|
How to convert multiple-level Pandas DataFrame column names into a single level?
|
<p>There are other seemingly similar looking questions which already exist on this site, however all of them appear to be related to performing some form of melt operation, rather than a column renaming operation, which is what I ask about here.</p>
<p>I have a Pandas DataFrame with multiple columns. The column names are <em>tuples</em> meaning that there are (effectively) multiple levels to the column names.</p>
<p>I want to rename these columns, and change the names from tuples to single strings.</p>
<p>I have currently found one way to do this, but I am not convinced this is the best approach.</p>
<p>Since Pandas allows the <code>df.columns</code> object to be replaced with a list of strings of the same length as <code>len(df.columns)</code>, one possible approach is to create a list of strings by concatenating the existing tuple values.</p>
<p>Here's some code which does this.</p>
<pre><code>new_column_names = (
list(
map(
'_'.join,
df.columns,
)
)
)
df.columns = new_column_names
</code></pre>
<p>Is there any better way to do this? It feels like a lot of manual work to do something which seems like it might be a common operation with a more idiomatic solution.</p>
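<p>A minimal sketch of the same flattening using the index's own <code>map</code> method (assuming every level is already a string):</p>

```python
import pandas as pd

df = pd.DataFrame(
    [[1, 2], [3, 4]],
    columns=pd.MultiIndex.from_tuples([("a", "x"), ("b", "y")]),
)

# MultiIndex.map passes each column tuple to the callable,
# so '_'.join flattens ('a', 'x') into 'a_x' in one step.
df.columns = df.columns.map("_".join)
print(list(df.columns))  # -> ['a_x', 'b_y']
```

<p>If some levels may be non-strings, <code>df.columns.map(lambda c: "_".join(map(str, c)))</code> is a safer variant.</p>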
|
<python><pandas><dataframe>
|
2024-08-14 17:41:54
| 1
| 18,579
|
user2138149
|
78,872,237
| 1,251,099
|
Protobuf (Python): read binary data file and assign the data to repeated object
|
<p>I have a protobuf file as below.</p>
<pre><code>syntax = "proto3";
message Message {
repeated bytes data = 1;
}
</code></pre>
<p>This is the Python code.</p>
<pre><code>import test_pb2
message = test_pb2.Message()
with open("test.dat", mode='rb') as file:
message.data.extend(file.read())
</code></pre>
<p>This is the error.</p>
<pre><code>Traceback (most recent call last):
File "./test.py", line 7, in <module>
message.data.extend(file.read())
TypeError: 48 has type int, but expected one of: bytes
</code></pre>
<p>I also tried <code>read_bytes()</code>, which gives a similar error.</p>
<p>The following code works fine.</p>
<pre><code>x = list()
with open("test.dat", mode='rb') as file:
x.extend(file.read())
</code></pre>
<p>Seemingly, the repeated field does not behave like a list.</p>
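<p>For reference, the int-vs-bytes behaviour can be reproduced without protobuf: iterating a <code>bytes</code> object yields <code>int</code>s, which is what <code>extend()</code> passes element-wise, while a repeated <code>bytes</code> field expects each element to be <code>bytes</code>. A stdlib sketch:</p>

```python
data = b"0123"

# Iterating a bytes object yields ints (the byte values), not bytes
assert list(data) == [48, 49, 50, 51]

# To keep bytes elements, pass bytes objects, e.g. fixed-size chunks
chunks = [data[i:i + 2] for i in range(0, len(data), 2)]
assert chunks == [b"01", b"23"]
```

<p>So, assuming the whole file should become a single element of the repeated field, <code>message.data.append(file.read())</code> would be the shape to try; splitting the read into chunks and passing that list to <code>extend()</code> is the other option. Both are sketches of intent, not protobuf-specific API beyond what the question already shows.</p>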
|
<python><python-3.x><protocol-buffers>
|
2024-08-14 17:21:16
| 1
| 6,206
|
user180574
|
78,872,166
| 41,060
|
How do I stream a large BZ2 data file using Python requests stream=True/iter_content()?
|
<p>I have a large data file at <a href="https://dumps.wikimedia.org/other/pageview_complete/monthly/2024/2024-07/pageviews-202407-user.bz2" rel="nofollow noreferrer">https://dumps.wikimedia.org/other/pageview_complete/monthly/2024/2024-07/pageviews-202407-user.bz2</a> (3.3 GB).</p>
<p>I'm trying to stream its contents and process it line by line, by using requests <code>stream=True</code> and <code>iter_content()</code>.</p>
<p>The logic works fine, and I can process data for about 15-20 minutes before I get:</p>
<pre><code><SNIP>
urllib3.exceptions.IncompleteRead: IncompleteRead(123551405 bytes read, 3479106984 more expected)
The above exception was the direct cause of the following exception:
<SNIP>
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(123551405 bytes read, 3479106984 more expected)', IncompleteRead(123551405 bytes read, 3479106984 more expected))
During handling of the above exception, another exception occurred:
<SNIP>
requests.exceptions.ChunkedEncodingError: ('Connection broken: IncompleteRead(123551405 bytes read, 3479106984 more expected)', IncompleteRead(123551405 bytes read, 3479106984 more expected))
</code></pre>
<p>Here's my code:</p>
<pre class="lang-python prettyprint-override"><code>@contextmanager
def get_pageview_response():
url = get_pageview_url()
with requests.get(url,
stream=True,
headers={
'User-Agent': WP1_USER_AGENT,
'Connection': 'keep-alive'
},
timeout=120) as r:
r.raise_for_status()
yield r
def raw_pageviews(decode=False):
def as_bytes():
with get_pageview_response() as r:
decompressor = BZ2Decompressor()
trailing = b''
# Read data in 128 MB chunks
for http_chunk in r.iter_content(chunk_size=128 * 1024 * 1024):
data = decompressor.decompress(http_chunk)
lines = [line for line in data.split(b'\n') if line]
if not lines:
continue
# Reunite incomplete lines
yield trailing + lines[0]
yield from lines[1:-1]
trailing = lines[-1]
# Nothing left, yield the last line
yield trailing
if decode:
for line in as_bytes():
yield line.decode('utf-8')
else:
yield from as_bytes()
def pageview_components():
for line in raw_pageviews():
pass # Processing logic goes here
</code></pre>
<p>Based on reading the error message, my current guess is that after 15-20 minutes, my data processing (which includes I/O operations to write to a MariaDB instance) is "falling behind" the HTTP streaming. I imagine that the HTTP socket goes "idle" and the server closes the connection.</p>
<p>Does that sound right? Does it have anything to do with my BZ2 decompression step?</p>
<p>I've tried increasing the HTTP chunk size to 128 MB (from 16 MB) to allow for more "buffer", but that doesn't seem to help.</p>
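<p>Separately from the dropped connection, the line-reassembly logic can be exercised offline; a small self-contained sketch that splits byte chunks on newlines while carrying a trailing fragment across chunk boundaries:</p>

```python
def iter_lines(chunks):
    """Yield complete newline-delimited lines from an iterable of byte chunks."""
    trailing = b""
    for chunk in chunks:
        trailing += chunk
        # split() always returns at least one element, so the unpack is safe;
        # the last element is the (possibly empty) unterminated fragment
        *lines, trailing = trailing.split(b"\n")
        yield from lines
    if trailing:
        yield trailing

chunks = [b"alpha\nbe", b"ta\nga", b"mma"]
print(list(iter_lines(chunks)))  # -> [b'alpha', b'beta', b'gamma']
```

<p>This is only a sketch of the buffering step; it does not address the server-side timeout, for which resuming with an HTTP <code>Range</code> request on failure is the usual direction to explore.</p>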
|
<python><http><python-requests><http-chunked>
|
2024-08-14 16:59:43
| 0
| 2,860
|
audiodude
|
78,872,103
| 868,574
|
python - How to get Unicode characters to display as boxes instead of accented letters - "x96\x88" and "x96\x80"
|
<p>I have a table that is returning the characters "Γ’\x96\x88" and "Γ’\x96\x80"</p>
<p>These are displaying as "Γ’" and "Γ’"</p>
<p>However, what I need is for them to display as "β" and "β".</p>
<p>How do I handle this in Python so they display as I wish?</p>
<p>This is in macOS using python 3, both in the terminal console, and also within jupyter-notebook.</p>
<p>I am scraping the table with these values from a web page with pandas.read_html (I can't share the page unfortunately) where they display as the boxes, but then within the dataframe and thereafter in python they are Γ’.</p>
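<p>The "Γ’\x96\x88" pattern looks like classic mojibake (UTF-8 bytes decoded as Latin-1); a hedged sketch of the round-trip repair:</p>

```python
# What the dataframe shows: code points U+00E2, U+0096, U+0088, ...
garbled = "\xe2\x96\x88\xe2\x96\x80"

# Re-encode as Latin-1 to recover the original bytes, then decode as UTF-8
repaired = garbled.encode("latin-1").decode("utf-8")
print(repaired)  # -> 'ββ'
```

<p>If this matches, the cleaner fix is upstream: pass the correct <code>encoding</code> to the reader so the bytes are decoded as UTF-8 in the first place.</p>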
|
<python><unicode><character-encoding><non-ascii-characters>
|
2024-08-14 16:44:14
| 1
| 447
|
Lachlan Macnish
|
78,872,037
| 6,733,654
|
role_required decorator for FastAPI route
|
<p>Disclaimer and apologies: it's been quite a long time since I last asked a question here, and I am a complete novice in FastAPI, so please don't judge too harshly.</p>
<p>I am playing with FastAPI authorization and wondering how I can protect my routes from users who are authenticated but do not have permission for those specific routes.</p>
<p>I have done some code that solves this.</p>
<p>Here are my routes:</p>
<pre><code>@router.get('/student_route_only')
@role_required(allowed_roles=[UserRole(name='student')])
async def student_route_only(
token: Annotated[str, Depends(oauth2_scheme)],
auth_service: AuthService = Depends(get_auth_service),
user_data: UserAuthResponse = None,
):
return UserAuthResponse(
user_id=user_data.user_id,
role=user_data.role,
name=user_data.name,
)
@router.get('/routes_for_student_and_admin')
@role_required(allowed_roles=[UserRole(name='student'), UserRole(name='admin')])
async def routes_for_student_and_admin(
token: Annotated[str, Depends(oauth2_scheme)],
auth_service: AuthService = Depends(get_auth_service),
user_data: UserAuthResponse = None,
):
return UserAuthResponse(
user_id=user_data.user_id,
role=user_data.role,
name=user_data.name,
)
</code></pre>
<p>These are my Pydantic models:</p>
<pre><code>class UserAuthResponse(BaseModel):
user_id: int
role: str
name: str
class UserRole(BaseModel):
name: str
@validator('name')
def name_must_be_valid(cls, value):
allowed_roles = ['admin', 'student', 'teacher']
if value.lower() not in allowed_roles:
raise ValueError(
f"Invalid role. Allowed roles are: {', '.join(allowed_roles)}"
)
return value
</code></pre>
<p>And this is my decorator that does the job:</p>
<pre><code>def role_required(allowed_roles: list[UserRole]):
def decorator(func):
@wraps(func)
async def wrapper(*args, **kwargs):
auth_service = kwargs.get('auth_service')
token = kwargs.get('token')
user_data = await auth_service.get_current_user(token)
if not user_data or user_data.role not in [role.name for role in allowed_roles]:
raise HTTPException(
status_code=status.HTTP_403_FORBIDDEN,
detail='This operation is forbidden for you',
)
kwargs['user_data'] = user_data # pushing gotten user_data back
return await func(*args, **kwargs)
return wrapper
return decorator
</code></pre>
<p>I have searched for a relatively simple solution for role-based route protection in FastAPI but did not find anything that is easy to implement.</p>
<p>So I would like to ask: is it okay to use this kind of code in production?</p>
<p>Because I think it is quite simple to use; for instance, if I have to protect some route and make it available, let's say, only for admin, I can simply do this:</p>
<pre><code>@router.get('/admin_route_only')
@role_required(allowed_roles=[UserRole(name='admin')]) # and that's it. The decorator does the rest of the job
async def admin_route_only(
token: Annotated[str, Depends(oauth2_scheme)],
auth_service: AuthService = Depends(get_auth_service),
user_data: UserAuthResponse = None,
):
return UserAuthResponse(
user_id=user_data.user_id,
role=user_data.role,
name=user_data.name,
)
</code></pre>
<p>On the other hand, my PyCharm does not like that <code>auth_service</code> and <code>token</code> are unused inside the function (but they are needed in the decorator). Would that be okay for other developers? And for linters?</p>
<p>And also, is it okay to delegate authorization to a decorator like this and then push the user data back through kwargs?</p>
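<p>The kwargs-based flow of the decorator can be exercised in isolation without FastAPI; a stripped-down sketch in which <code>HTTPException</code> is replaced by a plain <code>PermissionError</code> and the auth-service lookup is omitted (the role is taken straight from <code>user_data</code>):</p>

```python
import asyncio
from functools import wraps

def role_required(allowed_roles):
    """Reject calls whose user_data role is not in allowed_roles."""
    def decorator(func):
        @wraps(func)
        async def wrapper(*args, **kwargs):
            user = kwargs.get("user_data")
            if user is None or user["role"] not in allowed_roles:
                raise PermissionError("This operation is forbidden for you")
            return await func(*args, **kwargs)
        return wrapper
    return decorator

@role_required(allowed_roles=["admin"])
async def admin_route(user_data=None):
    return user_data["name"]

print(asyncio.run(admin_route(user_data={"role": "admin", "name": "Ann"})))  # -> Ann
```

<p>The more idiomatic FastAPI alternative is usually a <code>Depends</code>-based dependency that resolves the user and checks the role, which also avoids the unused-parameter warnings, but the decorator shape above is what the question tests.</p>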
<p>Thank you very much in advance!</p>
|
<python><authentication><fastapi><decorator><rbac>
|
2024-08-14 16:29:30
| 2
| 475
|
John
|
78,871,680
| 1,072,352
|
In NumPy arrays, is there a syntax to set a value in the last dimension based on the first dimension?
|
<p>I've Googled and asked ChatGPT and looked through NumPy docs and can't find any way to do this, so thought I'd ask here.</p>
<p>Suppose I have a 4-dimensional array -- in this case, of shape (3, 2, 2, 2):</p>
<pre><code>a = np.array([
[[[0, 0], [0, 0]],
[[0, 0], [0, 0]]],
[[[0, 0], [0, 0]],
[[0, 0], [0, 0]]],
[[[0, 0], [0, 0]],
[[0, 0], [0, 0]]],
])
</code></pre>
<p>and I want to set the last (second) element of the last dimension to a different value according to each row of the first dimension. In my example I have 3 rows in the first dimension, so let's suppose I wanted to apply the values [1, 2, 3] to result in:</p>
<pre><code>[
[[[0, 1], [0, 1]],
[[0, 1], [0, 1]]],
[[[0, 2], [0, 2]],
[[0, 2], [0, 2]]],
[[[0, 3], [0, 3]],
[[0, 3], [0, 3]]],
]
</code></pre>
<p>The closest syntax I've been able to think of would be:</p>
<pre><code>a[:, ..., 1] = [1, 2, 3]
</code></pre>
<p>But it produces an error (<code>ValueError: could not broadcast input array from shape (3,) into shape (3,2,2)</code>). No error is produced if I try:</p>
<pre><code>a[:, ..., 1] = [1, 2]
</code></pre>
<p>but it produces a different result which isn't what I want:</p>
<pre><code>a = np.array([
[[[0, 1], [0, 2]],
[[0, 1], [0, 2]]],
[[[0, 1], [0, 2]],
[[0, 1], [0, 2]]],
[[[0, 1], [0, 2]],
[[0, 1], [0, 2]]],
])
</code></pre>
<p>Is there any way to elegantly and compactly do what I want?</p>
<p>For now I've written a loop to cycle over every row of the first dimension and then set the values per-row, but I wonder if there's a more powerful way to do this in a single line.</p>
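<p>One approach that appears to work is reshaping the assigned values so they broadcast along the first axis; a sketch:</p>

```python
import numpy as np

a = np.zeros((3, 2, 2, 2), dtype=int)

# a[..., 1] has shape (3, 2, 2); adding trailing singleton axes turns
# the (3,) value array into (3, 1, 1), which broadcasts per first-axis row
a[..., 1] = np.array([1, 2, 3])[:, None, None]

print(a[0].tolist())  # -> [[[0, 1], [0, 1]], [[0, 1], [0, 1]]]
```

<p><code>np.array([1, 2, 3]).reshape(3, 1, 1)</code> is an equivalent spelling of the same idea.</p>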
|
<python><arrays><numpy><numpy-ndarray><array-broadcasting>
|
2024-08-14 15:08:03
| 1
| 1,375
|
crazygringo
|
78,871,472
| 2,889,521
|
CPython: Usage of `tp_finalize` in C-defined static types with no custom `tp_dealloc`
|
<p><a href="https://peps.python.org/pep-0442/" rel="nofollow noreferrer">PEP 442</a> introduced the <code>tp_finalize</code> callback to Python type definitions (as a one-to-one equivalent of Pythons classes' <code>__del__</code> function), and recommends using this for any non-trivial destruction.</p>
<p>The <a href="https://docs.python.org/3/c-api/typeobj.html#c.PyTypeObject.tp_finalize" rel="nofollow noreferrer">official API doc</a> states:</p>
<blockquote>
<p>If <code>tp_finalize</code> is set, the interpreter calls it once when finalizing an instance.</p>
</blockquote>
<p>However, this does not seem to be true for C-defined static Python types that do not define a custom deallocator (either directly or inherited from their base). From my understanding:</p>
<ul>
<li>Static types with no custom <code>tp_dealloc</code> will inherit the deallocator from the base python type (<code>PyBaseObject_Type</code>), which is <code>object_dealloc</code>.
<ul>
<li><code>object_dealloc</code> is extremely simple, it just calls the <code>tp_free</code> of the given object's type.</li>
<li>Therefore, <code>tp_finalize</code> will never be called for these objects.</li>
</ul>
</li>
<li>Types defined on the heap using <a href="https://docs.python.org/3/c-api/type.html#c.PyType_FromSpec" rel="nofollow noreferrer"><code>PyType_FromSpec</code></a> and similar will inherit by default the <code>subtype_dealloc</code> deallocator.
<ul>
<li><code>subtype_dealloc</code> is much more complex and will call <code>PyObject_CallFinalizerFromDealloc</code>.</li>
</ul>
</li>
</ul>
<h2>Questions</h2>
<p>Assuming that I am understanding the current behavior of CPython correctly:</p>
<ul>
<li>Is it expected that <code>PyBaseObject_Type</code> deallocator does not call <code>tp_finalize</code>?</li>
<li>If yes, what are the reasons for this exception?</li>
<li>And would it make sense then to expose <code>subtype_dealloc</code> as a generic dealloc callback for C-defined static types?</li>
</ul>
|
<python><python-3.x><destructor><cpython><python-c-api>
|
2024-08-14 14:23:40
| 1
| 398
|
mont29
|
78,871,372
| 7,713,770
|
How to generate the confirm password reset view with Django?
|
<p>I have a Django Rest Framework API app, and I'm trying to implement some functionality for forgotten passwords. At the moment there is an API call available for password reset, and the user gets an email with a reset link.</p>
<p>But the problem I am facing is that when the user follows the reset email link, this results in an error:</p>
<pre><code>Page not found (404)
Request Method: GET
Request URL: http://127.0.0.1:8000/reset-password/MjE/cbr2cj-0d6c660c151de4e79594212801241fed/
</code></pre>
<p>So this is the views.py file with the function</p>
<blockquote>
<p>PasswordResetConfirmView</p>
</blockquote>
<pre><code>class PasswordResetConfirmView(generics.GenericAPIView):
serializer_class = PasswordResetConfirmSerializer
print("password rest")
def post(self, request, uidb64, token, *args, **kwargs):
try:
uid = urlsafe_base64_decode(uidb64).decode()
user = Account.objects.get(pk=uid)
print(user)
except (TypeError, ValueError, OverflowError, Account.DoesNotExist):
user = None
if user and default_token_generator.check_token(user, token):
serializer = self.get_serializer(data=request.data)
serializer.is_valid(raise_exception=True)
serializer.save(user=user)
return Response({"message": "Password reset successful."}, status=status.HTTP_200_OK)
else:
return Response({"error": "Invalid token or user."}, status=status.HTTP_400_BAD_REQUEST)
</code></pre>
<p>And the serializers.py file:</p>
<pre><code>class PasswordResetConfirmSerializer(serializers.Serializer):
new_password = serializers.CharField(write_only=True)
def validate_new_password(self, value):
validate_password(value)
return value
def save(self, user):
user.set_password(self.validated_data['new_password'])
user.save()
# Render the success email template
email_subject = 'Password Reset Successful'
email_body = render_to_string('password_reset_success_email.html', {
'user': user,
})
# Send success email to user
send_mail(
subject=email_subject,
message=email_body,
        from_email='email',  # set this to your "from" email
recipient_list=[user.email],
html_message=email_body,
fail_silently=False
)
</code></pre>
<p>And model account looks:</p>
<pre><code>
class MyAccountManager(BaseUserManager):
@allowed_users(allowed_roles=['account_permission'])
def create_user(self, email, password=None, **extra_field):
if not email:
raise ValueError("Gebruiker moet een email adres hebben.")
if not password:
raise ValueError("Gebruiker moet een password hebben.")
user = self.model(email=email, **extra_field)
user.set_password(password)
user.save(using=self._db)
return user
def create_superuser(self, email, username, password):
user = self.create_user(
email=self.normalize_email(email),
username=username,
password=password,
)
user.is_admin = True
user.is_active = True
user.is_staff = True
user.is_superadmin = True
user.save(using=self._db)
return user
</code></pre>
<p>main urls.py file:</p>
<pre><code>from django.conf import settings
from django.conf.urls.static import static
from django.contrib import admin
from django.urls import include, path
from drf_spectacular.views import (SpectacularAPIView, SpectacularSwaggerView)
from rest_framework.schemas import get_schema_view
urlpatterns = [
path('accounts/', include('accounts.urls', namespace='accounts')) ,
</code></pre>
<p>and accounts urls.py file:</p>
<pre><code>from django.urls import path
from rest_framework.authtoken.views import obtain_auth_token
from accounts import views
from .views import PasswordResetRequestView, PasswordResetConfirmView, test_reverse_url
app_name='accounts'
urlpatterns = [
path('reset-password/', PasswordResetRequestView.as_view(), name='password_reset_request'),
path('reset-password/<uidb64>/<token>/', PasswordResetConfirmView.as_view(), name='password_reset_confirm'),
]
</code></pre>
<p>So I can't figure out why a 404 page not found error is thrown.</p>
<p>Question: how do I resolve the 404 page not found error?</p>
|
<python><django><django-rest-framework>
|
2024-08-14 14:01:42
| 0
| 3,991
|
mightycode Newton
|
78,871,370
| 3,512,538
|
pybind11 cross compilation can link against native python executable, can't install cross python
|
<p>Working on x64 ubuntu22.04 and on ubuntu23.10 with python3.10 and python3.11 on both.</p>
<p>I can build my project natively for both python versions, controlling the python version using <code>PYBIND11_PYTHON_VERSION</code> - which then triggers <code>find_package(Python3 ${PYBIND11_PYTHON_VERSION} REQUIRED EXACT COMPONENTS Interpreter Development)</code>, or by passing directly <code>PYTHON_EXECUTABLE</code>.</p>
<p>However, trying to build this project for aarch64 (using a toolchain for aarch64, see at the bottom), fails.</p>
<p>I can make it build and run, if I manually set the <code>PYTHON_EXECUTABLE</code> to my native <code>PYTHON_EXECUTABLE</code>. I don't understand how and why...</p>
<p>What I expected to do was:</p>
<ol>
<li>install an aarch64 python (lib, headers, executable) on my machine</li>
<li>use <code>find_package(Python3 ${PYBIND11_PYTHON_VERSION} REQUIRED EXACT COMPONENTS Interpreter Development)</code> or better yet <code>DPYBIND11_FINDPYTHON=ON</code> (as can be seen <a href="https://pybind11.readthedocs.io/en/stable/compiling.html#findpython-mode" rel="nofollow noreferrer">here</a>)</li>
</ol>
<p>I can't install the cross python3 [step (1)], no matter what I tried... Is this the correct method for cross compilation? Or should I really use the x64 python executable for an aarch64 build?</p>
<pre><code># toolchain file
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR aarch64)
set(CMAKE_C_COMPILER aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)
set(CMAKE_STRIP aarch64-linux-gnu-strip CACHE FILEPATH "Strip")
</code></pre>
|
<python><linux><cross-compiling><pybind11>
|
2024-08-14 14:01:32
| 0
| 12,897
|
CIsForCookies
|
78,871,265
| 8,605,348
|
Stripe Python AttributeError: 'coroutine' object has no attribute 'auto_paging_iter'
|
<p>An example in the Stripe Python library doesn't seem to work. The <a href="https://github.com/stripe/stripe-python/blob/master/README.md#async" rel="nofollow noreferrer">README</a> says:</p>
<pre class="lang-py prettyprint-override"><code># .auto_paging_iter() implements both AsyncIterable and Iterable
async for c in await stripe.Customer.list_async().auto_paging_iter():
....
</code></pre>
<p>I try running this locally:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import stripe
stripe.api_key = "secret"
async def main():
async for c in await stripe.Customer.list_async().auto_paging_iter():
print(c)
if __name__ == "__main__":
asyncio.run(main())
</code></pre>
<p>But I get an error <strong>and</strong> a warning:</p>
<pre><code>Traceback (most recent call last):
File "/Users/person/project/call_stripe/main.py", line 13, in <module>
asyncio.run(main())
File "/Users/person/.pyenv/versions/3.12.4/lib/python3.12/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/Users/person/.pyenv/versions/3.12.4/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/person/.pyenv/versions/3.12.4/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/Users/person/project/call_stripe/main.py", line 8, in main
async for c in await stripe.Customer.list_async().auto_paging_iter():
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'coroutine' object has no attribute 'auto_paging_iter'
sys:1: RuntimeWarning: coroutine 'Customer.list_async' was never awaited
</code></pre>
<p>Digging through the source code, I see <code>.auto_paging_iter_async()</code> exists, but it doesn't work:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import stripe
stripe.api_key = "secret"
async def main():
async for c in await stripe.Customer.list_async().auto_paging_iter_async():
print(c)
if __name__ == "__main__":
asyncio.run(main())
</code></pre>
<pre><code>Traceback (most recent call last):
File "/Users/person/project/call_stripe/main.py", line 13, in <module>
asyncio.run(main())
File "/Users/person/.pyenv/versions/3.12.4/lib/python3.12/asyncio/runners.py", line 194, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/Users/person/.pyenv/versions/3.12.4/lib/python3.12/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/person/.pyenv/versions/3.12.4/lib/python3.12/asyncio/base_events.py", line 687, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/Users/person/project/call_stripe/main.py", line 8, in main
async for c in await stripe.Customer.list_async().auto_paging_iter_async():
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'coroutine' object has no attribute 'auto_paging_iter_async'
sys:1: RuntimeWarning: coroutine 'Customer.list_async' was never awaited
</code></pre>
<p>And it's the same if I use the <code>StripeClient</code> object:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from stripe import StripeClient
api_key = "secret"
client = StripeClient(api_key)
async def main():
# auto_paging_iter_async() also fails
async for c in await client.customers.list_async().auto_paging_iter():
print(c)
if __name__ == "__main__":
asyncio.run(main())
</code></pre>
<p>What am I doing wrong?</p>
<p>I'm using Python 3.12.4 installed with PyEnv 2.4.10 on Mac. My local Poetry environment:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry.dependencies]
python = ">=3.8,<3.13"
httpx = "^0.27.0"
stripe = "^10.7.0"
</code></pre>
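<p>For what it's worth, the traceback pattern can be reproduced without Stripe: in <code>await x().y()</code>, the attribute <code>.y()</code> is looked up on the coroutine object itself before anything is awaited. A stdlib sketch of the failing form and the parenthesised fix (class and function names are illustrative, not Stripe's API):</p>

```python
import asyncio

class Page:
    def auto_paging_iter(self):
        return iter([1, 2, 3])

async def list_async():
    return Page()

async def main():
    # `await list_async().auto_paging_iter()` raises AttributeError,
    # because the coroutine object has no 'auto_paging_iter'.
    # Await the coroutine first, then call the method on the result:
    page = await list_async()
    return list(page.auto_paging_iter())

print(asyncio.run(main()))  # -> [1, 2, 3]
```

<p>Whether the awaited Stripe list object then exposes <code>auto_paging_iter()</code> with exactly the README's semantics is a separate question for the library's docs; the sketch only isolates the precedence issue.</p>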
|
<python><python-3.x><asynchronous><stripe-payments><python-asyncio>
|
2024-08-14 13:40:17
| 1
| 1,294
|
ardaar
|
78,871,239
| 327,074
|
BleakScanner timeout with asyncio.wait_for when function has returned
|
<ul>
<li>bleak version: 0.22.2</li>
<li>Python version: 3.8.18</li>
<li>Operating System: Linux</li>
<li>BlueZ version (<code>bluetoothctl -v</code>) in case of Linux: 5.64</li>
</ul>
<h3>Description</h3>
<p>I have a BleakScanner with a timeout. The timeout still occurred after returning from the <code>asyncio.wait_for</code> function (20s after returning from the function). Any further scanning is then blocked with the following DBus error.</p>
<h3>What I Did</h3>
<p>I have a gateway continually connecting to 4 of the same type of device in a round-robin fashion.</p>
<p>I ran the following code and got the log from the <code>asyncio.TimeoutError</code> even though <code>asyncio.wait_for(_scan_with_timeout(filter, device_id), timeout)</code> should have returned:</p>
<pre class="lang-py prettyprint-override"><code>async def scan(
device_id, timeout=SCAN_TIMEOUT_S
) -> Optional[Tuple[BLEDevice, Set[str]]]:
try:
filter = get_device_filter(device_id)
return await asyncio.wait_for(_scan_with_timeout(filter, device_id), timeout)
except asyncio.TimeoutError:
logger.info(
f"Timeout reached. Device {device_id} not found after {timeout} seconds"
)
return None
async def _scan_with_timeout(filter, device_id):
seen = set()
async with BleakScanner() as scanner:
async for device, advertisement in scanner.advertisement_data():
found_device_id = get_device_id(device)
if found_device_id and found_device_id.startswith(filter):
logger.info(f"Found {device!r}")
logger.debug(f"{advertisement!r}")
logger.debug(f"platform_data: {advertisement.platform_data}")
logger.debug(f"service_data: {advertisement.service_data}")
seen.add(found_device_id)
if found_device_id == device_id:
logger.info(f"Connecting")
return (device, seen)
</code></pre>
<h3>Logs</h3>
<p>These are the logs I have, I don't have any further in depth logs yet.</p>
<pre><code>2024-08-13 15:25:10,187 | INFO | xtel | Found BLEDevice(D0:83:D4:00:A2:61, XAIRQ-00A261)
2024-08-13 15:25:10,385 | INFO | xtel | Found BLEDevice(D0:83:D4:00:A2:97, XAIRQ-00A297)
2024-08-13 15:25:10,426 | INFO | xtel | Found BLEDevice(D0:83:D4:00:A2:D1, XAIRQ-00A2D1)
2024-08-13 15:25:10,427 | INFO | xtel | Connecting
2024-08-13 15:25:30,122 | INFO | xtel | Timeout reached. Device D0:83:D4:00:A2:D1 not found after 30 seconds
2024-08-13 15:25:30,123 | INFO | xtel | Rendezvous failed
2024-08-13 15:25:30,124 | INFO | gateway | Device D0:83:D4:00:A2:D1 data collection: None.
2024-08-13 15:25:30,124 | INFO | gateway | Device D0:83:D4:00:A2:D1 status: DeviceStatus(t_slot_index=1, is_configured=True, is_defective=False, last_connect=1723562100.0084567, retry_count=1).
2024-08-13 15:25:30,125 | INFO | gateway | Time to sensor connect: 119.87453603744507
2024-08-13 15:27:30,098 | INFO | gateway | Time slots: ['D0:83:D4:00:A2:6A', 'D0:83:D4:00:A2:D1', 'D0:83:D4:00:A2:82', 'D0:83:D4:00:5E:9A']
2024-08-13 15:27:30,100 | INFO | xtel | Reconnecting to device to fetch data
2024-08-13 15:27:30,101 | INFO | xtel | Rendezvous - waiting for data...
2024-08-13 15:27:30,101 | INFO | xtel | Scanning for D0:83:D4:00:A2:82...
2024-08-13 15:27:30,107 | ERROR | xtel | Rendezvous connect failed because: BleakDBusError('org.bluez.Error.InProgress', 'Operation already in progress')
2024-08-13 15:27:30,108 | INFO | xtel | Rendezvous failed
2024-08-13 15:27:30,109 | INFO | gateway | Device D0:83:D4:00:A2:82 data collection: None.
2024-08-13 15:27:30,109 | INFO | gateway | Device D0:83:D4:00:A2:82 status: DeviceStatus(t_slot_index=2, is_configured=True, is_defective=False, last_connect=1723562250.0088952, retry_count=1).
2024-08-13 15:27:30,111 | INFO | gateway | Time to sensor connect: 149.88888812065125
</code></pre>
<p>These two lines are the ones that are the problem:</p>
<pre><code>2024-08-13 15:25:10,427 | INFO | xtel | Connecting
2024-08-13 15:25:30,122 | INFO | xtel | Timeout reached. Device D0:83:D4:00:A2:D1 not found after 30 seconds
</code></pre>
<p>The "Connecting" log occurs just before the return of the function.</p>
|
<python><bluetooth-lowenergy><python-asyncio><python-bleak>
|
2024-08-14 13:34:20
| 1
| 13,115
|
icc97
|
78,871,211
| 881,712
|
Continuously iterate through a dict
|
<p>If we have a <code>dict</code> we can easily iterate through its values:</p>
<pre><code>a = {'mon': 10, 'tue': 13, 'wed': 6, 'thu': 24, 'fri': 15}
for v in a:
print(v)
</code></pre>
<p>I also have a nested loop like this:</p>
<pre><code>while True:
# some conditions to break the outer loop
for v in a:
# some conditions to break the inner loop
print(v)
</code></pre>
<p>Every time it runs a cycle in the outer loop, it starts again from the "first" <code>dict</code> element (I'm aware dictionaries are not ordered, but if they are not modified the cycles should be the same).</p>
<p>Instead, I want it continues from the last key, and if it ends, starts again from the beginning.</p>
<p>Example:</p>
<pre><code>1st outer cycle: 'mon', 'tue' (then breaks then inner cycle)
2nd outer cycle: 'wed' (then breaks then inner cycle)
3rd outer cycle: 'thu', 'fri', 'mon' (then breaks then inner cycle)
4th outer cycle: 'tue', 'wed' (then breaks then inner cycle)
breaks the outer cycle
</code></pre>
<p>My attempt was to save the last key and skip the previous ones, but it is not straightforward, and I have to restart the cycle if it ends before I have iterated through all keys.</p>
<p>Is there a simple way in Python to achieve this?</p>
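<p>A sketch with <code>itertools.cycle</code>, which keeps its position between entries into the inner loop (dict iteration order is insertion order in Python 3.7+):</p>

```python
from itertools import cycle

a = {'mon': 10, 'tue': 13, 'wed': 6, 'thu': 24, 'fri': 15}
days = cycle(a)  # endless iterator over the keys, resuming where it left off

first = [next(days) for _ in range(2)]   # 1st outer cycle
second = [next(days) for _ in range(1)]  # 2nd outer cycle
third = [next(days) for _ in range(3)]   # 3rd outer cycle wraps to 'mon'

print(first, second, third)  # -> ['mon', 'tue'] ['wed'] ['thu', 'fri', 'mon']
```

<p>The caveat is the one from the question: the dict must not gain or lose keys while the <code>cycle</code> iterator is live.</p>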
|
<python><dictionary>
|
2024-08-14 13:28:24
| 3
| 5,355
|
Mark
|
78,871,193
| 66,191
|
Alembic (MySQL) - Always wants to drop/create indexes where index contains column that requires a size
|
<p>I have the following class..</p>
<pre><code>class X( Base ):
__tablename__ = "X"
__table_args__ = (
Index( "index1", "downloaded", text( "x_name( 255 )" ) ),
)
id: Mapped[ int ] = mapped_column( BIGINT( unsigned = True ), primary_key = True )
downloaded: Mapped[ date ] = mapped_column( DATE )
x_name: Mapped[ str ] = mapped_column( TINYTEXT )
</code></pre>
<p>As you can see, I have an index over <code>downloaded</code> and <code>x_name</code>. <code>x_name</code> needs to have its size specified, so I've used <code>text()</code> to denote this.</p>
<p>All is well until I try and perform a migration using alembic with this.</p>
<p>Every time I run the auto migration I get a warning along the lines of..</p>
<pre><code>Generating approximate signature for index Index( 'index1', Column('downloaded', DATE(), table=<X>, nullable=False ), <sqlalchemy.sql.elements.TextClause object at 0x7b87bc1aafe0 ). The dialect implementation should either skip expression indexes or provide a custom implementation.
</code></pre>
<p>And the migration script drops and creates the index every time. Every migration I do, even if I don't make any changes to this table, drops and recreates the index.</p>
<p>I just don't know how to fix this. I have to keep the <code>text()</code> in the index, as otherwise the DDL doesn't render correctly. The warning mentions providing a "custom implementation", but I've no idea what that means or where to start.</p>
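<p>One commonly used escape hatch, sketched here under the assumption that the index name is stable, is an <code>include_object</code> hook in Alembic's <code>env.py</code> that tells autogenerate to leave the expression index alone:</p>

```python
# env.py (sketch): keep autogenerate away from expression indexes it
# cannot compare; the index name below is taken from the question.
EXPRESSION_INDEXES = {"index1"}

def include_object(obj, name, type_, reflected, compare_to):
    """Return False for objects autogenerate should ignore."""
    if type_ == "index" and name in EXPRESSION_INDEXES:
        return False
    return True
```

<p>The hook is passed via <code>context.configure(..., include_object=include_object)</code> in both the offline and online run functions; the trade-off is that real changes to the skipped index then have to be written by hand.</p>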
|
<python><mysql><sqlalchemy><alembic>
|
2024-08-14 13:24:27
| 1
| 2,975
|
ScaryAardvark
|
78,870,853
| 3,265,791
|
SQLAlchemy select with arrow ADBC driver
|
<p>I am trying to use SQLAlchemy queries with the <a href="https://arrow.apache.org/adbc/current/index.html" rel="nofollow noreferrer">ADBC</a> driver.</p>
<p>The main issue is that I use a lot of sqlalchemy queries, i.e. <code>query = select(*cols)...</code> and I can't simply mix the two like in the following way, as <code>query</code> needs to be a <code>str</code> type with the ADBC engine.</p>
<pre><code>from adbc_driver_postgresql import dbapi
from sqlalchemy.sql import select
query = select(*cols) ...
with dbapi.connect('postgres:///db_name') as conn:
    data = pd.read_sql(query, con=conn)
</code></pre>
<p>The question is: how to combine the two? Simply <code>str(query)</code> won't work. Or is there a way to use <code>create_engine()</code> with the ADBC driver?</p>
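<p>One hedged way to bridge the two is to compile the Core statement to a SQL string for the PostgreSQL dialect; <code>literal_binds</code> inlines parameter values into the string, which is only safe for trusted inputs. A sketch with a throwaway table:</p>

```python
from sqlalchemy import column, select, table
from sqlalchemy.dialects import postgresql

users = table("users", column("id"), column("name"))
query = select(users.c.id, users.c.name).where(users.c.id == 5)

# Render dialect-specific SQL with the bound value inlined
sql = str(query.compile(dialect=postgresql.dialect(),
                        compile_kwargs={"literal_binds": True}))
print(sql)  # SELECT ... WHERE users.id = 5
```

<p>Without <code>literal_binds</code>, the compiled string keeps placeholders, and the parameters would have to be passed to the DBAPI cursor separately.</p>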
|
<python><postgresql><sqlalchemy><apache-arrow>
|
2024-08-14 12:13:54
| 1
| 639
|
MMCM_
|
78,870,831
| 9,270,577
|
Best way to resolve conflicts between application packages
|
<p>I am developing a package that will be used by third-party applications that I have no control over whatsoever.</p>
<p>I am wondering what is the best way to solve conflicts between packages.</p>
<p>For example:</p>
<p>Let's say a third-party app wants to use my package, called <code>external-package</code>.</p>
<pre><code>βββ Dockerfile
βββ app.py
βββ external-package
β βββ Core
β β βββ ClassA.py
β β βββ __init__.py
β βββ setup.py
βββ requirements.txt
</code></pre>
<p>App.py:</p>
<pre><code>from Core import ClassA
app = ClassA()
app.send()
</code></pre>
<p>ClassA:</p>
<pre><code>import backoff
import requests
class ClassA:
@backoff.on_exception(
backoff.expo,
requests.exceptions.RequestException,
max_tries=35,
backoff_log_level=20,
max_time=2400,
raise_on_giveup=True,
)
def send(self):
requests.get('https://google.com')
</code></pre>
<p>Dockerfile:</p>
<pre><code>FROM python:3.10-slim-buster
RUN apt-get update && apt-get install -y \
automake \
build-essential \
libtool \
wget \
--no-install-recommends && \
rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY ./external-package /app/external-package
RUN pip install --no-cache-dir /app/external-package
COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt
COPY . /app
CMD ["python", "app.py"]
</code></pre>
<p>Running this container outputs an error:</p>
<pre><code>Traceback (most recent call last):
File "/app/app.py", line 4, in <module>
app.send()
File "/usr/local/lib/python3.10/site-packages/backoff/_sync.py", line 87, in retry
wait = _init_wait_gen(wait_gen, wait_gen_kwargs)
File "/usr/local/lib/python3.10/site-packages/backoff/_common.py", line 23, in _init_wait_gen
return wait_gen(**kwargs)
TypeError: expo() got an unexpected keyword argument 'backoff_log_level'
</code></pre>
<p>This happens because the external package needs <code>backoff==1.9.2</code> while the 3rd-party app needs <code>backoff==2.2.1</code>, but the container only has <code>backoff==1.9.2</code> installed.</p>
<p>Keep in mind <code>backoff</code> is only an example; I want a way to solve this issue for any library.</p>
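<p>For context, the only direction I have found so far is vendoring: shipping a pinned copy of each conflicting dependency inside my package under a private name (similar to what pip does with <code>pip._vendor</code>). Below is a rough sketch of the loading side; the alias and file names are my own invention for the demo.</p>

```python
import importlib.util
import pathlib
import sys
import tempfile

def import_vendored(module_path, alias):
    """Load a module file under a private alias so it cannot collide with
    a same-named distribution installed in site-packages."""
    spec = importlib.util.spec_from_file_location(alias, module_path)
    mod = importlib.util.module_from_spec(spec)
    sys.modules[alias] = mod
    spec.loader.exec_module(mod)
    return mod

# Demo: write a tiny stand-in "backoff" module and load it as a vendored copy.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "backoff.py").write_text("VERSION = '1.9.2'\n")
vendored = import_vendored(tmp / "backoff.py", "external_package_vendor_backoff")
print(vendored.VERSION)
```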
|
<python><python-3.x><pip>
|
2024-08-14 12:09:08
| 1
| 1,164
|
rook
|
78,870,803
| 5,084,560
|
DPY-3008: unsupported in-band notification with error number 2396
|
<p>I run a SQL query from a Python script using the python-oracledb library. The same script works for other queries without error. Running the query in Toad gives no problem or error. I searched for the error message but couldn't find a solution.</p>
<p>Any ideas?</p>
|
<python><oracle-database><python-oracledb>
|
2024-08-14 12:02:45
| 2
| 305
|
Atacan
|
78,870,739
| 1,930,508
|
Reuse inheritance tree of one out of multiple classes in Python
|
<p>I have 2 classes each implementing a specific behavior and using parent classes.</p>
<p>Now I have a new class that needs to use either behavior but which one can only be determined during/after construction.</p>
<p>That is currently done using multiple inheritance.
However, as the base classes were not written with this in mind, the contained <code>super()</code> call will call into the wrong sub-tree.</p>
<p>Consider the following MWE:</p>
<pre><code>class A:
def __init__(self, use_b):
self.init_b = use_b # Maybe complicated
def foo(self):
print("A")
class B(A):
def foo(self):
super().foo()
print("B")
class C1(A):
def foo(self):
super().foo()
print("C1")
class C2(C1):
def foo(self):
super().foo()
print("C2")
class D(B, C2):
def __init__(self, *args):
super().__init__(*args)
self.use_b = self.init_b
def foo(self):
if self.use_b:
B.foo(self)
else:
C2.foo(self)
d = D(True)
d.foo() # Expected: "A B"
print("NEXT")
d = D(False)
d.foo() # Expected: "A C1 C2"
</code></pre>
<p>Somewhere in <code>A</code> a property is initialized that is used in <code>D</code> to determine the appropriate class to use. Both candidates <code>B</code> and <code>C2</code> inherit from the base class, as they are (or used to be) independent implementations.</p>
<p>It works for calling <code>C2.foo</code> but calling <code>B.foo</code> ends up calling <code>foo</code> in <code>C2</code> and <code>C1</code> too, which I do not want.</p>
<p>I do know WHY this happens (MRO) but have no good idea to resolve this.</p>
<p>I might be able to have <code>D</code> only inherit from <code>A</code>, initialize this and then decide which subclass to use, i.e.:</p>
<pre><code>class D(A):
def __init__(self, *args):
super().__init__(*args)
if self.init_b:
self.impl = B(*args)
else:
self.impl = C2(*args)
def foo(self):
self.impl.foo()
</code></pre>
<p>But this has 3 issues:</p>
<ol>
<li>It initializes at least <code>A</code> multiple times. That is bad for performance and might fail if <code>A.__init__</code> modifies the ref-type args (e.g. lists)</li>
<li>It might fail if there are some args only <code>B</code> and <code>C2</code> understand so <code>A</code> can't handle them</li>
<li>I'll need to implement and wrap <strong>all</strong> methods of <code>A</code> that could be called. If I forget one that was overwritten by either of the <code>B</code> or <code>C</code> classes the behavior will be silently wrong. Similar for properties</li>
</ol>
<p>Is there a clean and safe way to handle this scenario?</p>
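<p>For completeness, one direction I have been toying with (I am not sure it counts as clean): deriving a class from exactly one subtree once the flag is known, so that <code>super()</code> only walks that subtree. A sketch using the MWE classes:</p>

```python
class A:
    def __init__(self, use_b):
        self.init_b = use_b

    def foo(self):
        print("A")

class B(A):
    def foo(self):
        super().foo()
        print("B")

class C1(A):
    def foo(self):
        super().foo()
        print("C1")

class C2(C1):
    def foo(self):
        super().foo()
        print("C2")

def make_d(use_b, *args):
    # Derive from exactly one subtree once use_b is known, so super()
    # only ever walks B -> A or C2 -> C1 -> A.
    base = B if use_b else C2
    cls = type("D", (base,), {})
    return cls(use_b, *args)

make_d(True).foo()   # prints: A, B
make_d(False).foo()  # prints: A, C1, C2
```

<p>The obvious drawback is that the choice happens before <code>A.__init__</code> runs, so it only works if the flag is derivable from the constructor arguments alone.</p>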
|
<python><inheritance><overloading><multiple-inheritance><method-resolution-order>
|
2024-08-14 11:50:20
| 1
| 5,927
|
Flamefire
|
78,870,698
| 10,215,301
|
Is it necessary for torch_dtype when loading a model and the precision for trainable weights to be different? If so, why?
|
<p>According to <a href="https://github.com/huggingface/peft/issues/341#issuecomment-1884911753" rel="nofollow noreferrer">this comment</a> in the huggingface/peft package, if a model is loaded in fp16, the trainable weights must be cast to fp32. From this comment, I understand that generally, the <code>torch_dtype</code> used when loading a model and the precision used for training must be different. Why is it necessary to change the precision? Also, does this principle apply to both fine-tuning and continual pretraining?</p>
<p>As a minimal working example, I'm attempting to perform continual pretraining on <a href="https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/" rel="nofollow noreferrer">microsoft/Phi-3-mini-128k-instruct</a>, whose default <code>torch_dtype</code> is <a href="https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/blob/bb5bf1e4001277a606e11debca0ef80323e5f824/config.json#L135" rel="nofollow noreferrer">bfloat16</a>. When loading the model with <code>torch_dtype=torch.float16</code>, training commenced when the precision for trainable weights was set to <code>TrainingArguments(fp16=False, bf16=True)</code> (i.e. different precision for model loading and trainable weights). However, when the precision for trainable weights was set to <code>TrainingArguments(fp16=True, bf16=False)</code> (i.e. the same precision for model loading and trainable weights), an error <code>raise ValueError("Attempting to unscale FP16 gradients.")</code> occurred, preventing the start of training.</p>
<p>The execution environment was an NVIDIA RTX3060 with only 12GB of vRAM. For continual pretraining of the Phi-3 model, how should the <code>torch_dtype</code> be set when loading the model and for trainable weights to minimize vRAM usage? For instance, should the model be loaded with <code>torch_dtype=fp32</code> and the precision for trainable weights set to <code>TrainingArguments(fp16=True, bf16=False)</code>, or should the model be loaded with <code>load_in_8bit</code> and the precision for trainable weights set to <code>TrainingArguments(fp16=True, bf16=False)</code>? I would like to know effective and feasible combinations.</p>
<h1>MWE</h1>
<h2>train_deepspeed.py</h2>
<pre class="lang-py prettyprint-override"><code>import argparse
import os
import warnings
from typing import Dict, List
import deepspeed
import torch
from datasets import load_dataset
from omegaconf import OmegaConf
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
PreTrainedTokenizer,
Trainer,
TrainingArguments,
)
import gc
from utils import seed_everything
warnings.filterwarnings("ignore")
os.environ["TOKENIZERS_PARALLELISM"] = "false"
def preprocess_function(
examples: Dict[str, List[str]],
tokenizer: PreTrainedTokenizer,
max_length: int,
) -> Dict[str, List[int]]:
inputs = tokenizer(
examples["text"],
truncation=True,
padding="max_length",
max_length=max_length,
)
inputs["labels"] = inputs.input_ids.copy()
return inputs
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
"--train_config",
"-p",
type=str,
default="./configs/train_configs/train_base.yaml",
)
parser.add_argument(
"--local_rank",
"-l",
type=int,
default=0,
)
args = parser.parse_args()
config = OmegaConf.load(args.train_config)
# distributed learning
deepspeed.init_distributed()
# set seed
seed_everything(config.seed)
# load model
model = AutoModelForCausalLM.from_pretrained(
config.model.model,
torch_dtype=torch.float16,
use_cache=config.model.use_cache,
device_map={"": 0},
attn_implementation="flash_attention_2",
)
tokenizer = AutoTokenizer.from_pretrained(
config.model.tokenizer,
add_eos_token=True,
)
# load dataset
dataset = load_dataset(
path=config.dataset.path,
name=config.dataset.subset,
split=config.dataset.split,
cache_dir=config.dataset.cache_dir,
)
# transform dataset
dataset = dataset.map(
lambda examples: preprocess_function(
examples, tokenizer, config.model.max_length
),
batched=True,
remove_columns=dataset.column_names,
num_proc=32,
)
dataset = dataset.train_test_split(test_size=0.2)
# initiate training
training_args = TrainingArguments(**config.train)
trainer = Trainer(
model=model,
tokenizer=tokenizer,
train_dataset=dataset["train"],
eval_dataset=dataset["test"],
args=training_args,
# data_collator=data_collator,
)
with torch.autocast("cuda"):
trainer.train()
del dataset
del trainer
gc.collect()
deepspeed.runtime.utils.empty_cache()
torch.cuda.empty_cache()
if __name__ == "__main__":
main()
</code></pre>
<h2>train_base.yaml</h2>
<pre class="lang-yaml prettyprint-override"><code>model:
model: microsoft/Phi-3-mini-128k-instruct
tokenizer: microsoft/Phi-3-mini-128k-instruct
use_cache: False
max_length: 512
train:
output_dir: ./outputs
evaluation_strategy: steps
logging_strategy: steps
save_strategy: steps
learning_rate: 1e-6
num_train_epochs: 3
per_device_train_batch_size: 1
per_device_eval_batch_size: 1
gradient_accumulation_steps: 256 # per_device_train_bath_size*gradient_accumulation_steps=256
gradient_checkpointing: True
weight_decay: 0.01
warmup_ratio: 0.1
optim: adamw_bnb_8bit # adamw_torch
fp16: False
bf16: True
dataloader_num_workers: 1
eval_steps: 50
save_steps: 100
logging_steps: 5
run_name: test
save_total_limit: 2
save_on_each_node: False
neftune_noise_alpha: 5 # NEFTTune
# deepspeed: ./configs/deepspeed/ds_config_zero2.json
report_to: wandb
torch_compile: True
logging_dir: ./outputs/log
seed: 42
dataset:
path: hotchpotch/wikipedia-ja-20231030
subset: chunked #!!null
split: train
cache_dir: /mnt/d/huggingface/datasets
</code></pre>
<h2>pyproject.toml</h2>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry]
name = "continual-pretrain"
version = "0.1.0"
description = ""
authors = ["Carlos Luis Rivera"]
license = "MIT"
readme = "README.md"
[tool.poetry.dependencies]
python = "^3.11"
fsspec = "2024.3.1"
datasets = "^2.19.2"
accelerate = "^0.31.0"
aiohttp = "^3.9.5"
aiosignal = "^1.3.1"
annotated-types = "^0.7.0"
appdirs = "^1.4.4"
async-timeout = "^4.0.3"
attrs = "^23.2.0"
bitsandbytes = "^0.43.1"
certifi = "^2024.6.2"
charset-normalizer = "^3.3.2"
click = "^8.1.7"
deepspeed = "^0.14.2"
dill = "^0.3.8"
docker-pycreds = "^0.4.0"
docstring-parser = "^0.16"
filelock = "^3.14.0"
frozenlist = "^1.4.1"
gitdb = "^4.0.11"
gitpython = "^3.1.43"
hjson = "^3.1.0"
huggingface-hub = "^0.23.3"
idna = "^3.7"
jinja2 = "^3.1.4"
markdown-it-py = "^3.0.0"
markupsafe = "^2.1.5"
mdurl = "^0.1.2"
mpmath = "^1.3.0"
multidict = "^6.0.5"
multiprocess = "^0.70.16"
networkx = "^3.3"
ninja = "^1.11.1.1"
numpy = "^1.26.4"
nvidia-ml-py = "^12.555.43"
packaging = "^24.0"
pandas = "^2.2.2"
peft = "0.6.0"
protobuf = "<5.0.0"
psutil = "^5.9.8"
py-cpuinfo = "^9.0.0"
pyarrow = "^16.1.0"
pyarrow-hotfix = "^0.6"
pydantic = "^2.7.3"
pydantic-core = "^2.18.4"
pygments = "^2.18.0"
pynvml = "^11.5.0"
python-dateutil = "^2.9.0.post0"
pytz = "^2024.1"
pyyaml = "^6.0.1"
regex = "^2024.5.15"
requests = "^2.32.3"
rich = "^13.7.1"
safetensors = "^0.4.3"
scipy = "^1.13.1"
sentencepiece = "^0.2.0"
sentry-sdk = "^2.5.1"
setproctitle = "^1.3.3"
shtab = "^1.7.1"
six = "^1.16.0"
smmap = "^5.0.1"
sympy = "^1.12.1"
tokenizers = "^0.19.1"
tqdm = "^4.66.4"
transformers = "^4.41.2"
trl = "^0.9.4"
typing-extensions = "^4.12.2"
tyro = "^0.8.4"
tzdata = "^2024.1"
urllib3 = "^2.2.1"
wandb = "^0.17.1"
xxhash = "^3.4.1"
yarl = "^1.9.4"
omegaconf = "^2.3.0"
llama-cpp-python = { version = "^0.2.77", source = "llama_cpp_python_cu121" }
torch = { version = "^2.3.1+cu121", source = "torch_cu121" }
nvidia-cublas-cu12 = { version = "^12.1.3.1", source = "torch_cu121" }
nvidia-cuda-cupti-cu12 = { version = "^12.1.105", source = "torch_cu121" }
nvidia-cuda-nvrtc-cu12 = { version = "^12.1.105", source = "torch_cu121" }
nvidia-cuda-runtime-cu12 = { version = "^12.1.105", source = "torch_cu121" }
nvidia-cudnn-cu12 = { version = "^8.9.2.26", source = "torch_cu121" }
nvidia-cufft-cu12 = { version = "^11.0.2.54", source = "torch_cu121" }
nvidia-curand-cu12 = { version = "^10.3.2.106", source = "torch_cu121" }
nvidia-cusolver-cu12 = { version = "^11.4.5.107", source = "torch_cu121" }
nvidia-cusparse-cu12 = { version = "^12.1.0.106", source = "torch_cu121" }
nvidia-nccl-cu12 = { version = "^2.20.5", source = "torch_cu121" }
nvidia-nvtx-cu12 = { version = "^12.1.105", source = "torch_cu121" }
optimum = "^1.20.0"
tensorboard = "^2.17.0"
wheel = "^0.43.0"
pytorch-triton = { version = "^2.3.0", source = "torch_cu121" }
[tool.poetry.group.dev.dependencies]
black = "^24.4.2"
flake8 = "^7.0.0"
ipykernel = "^6.29.4"
ipywidgets = "^8.1.3"
seedir = "^0.4.2"
emoji = "^2.12.1"
nbformat = "^5.10.4"
nbclient = "^0.10.0"
nbconvert = "^7.16.4"
[[tool.poetry.source]]
name = "torch_cu121"
url = "https://download.pytorch.org/whl/cu121"
priority = "explicit"
[[tool.poetry.source]]
name = "llama_cpp_python_cu121"
url = "https://abetlen.github.io/llama-cpp-python/whl/cu121"
priority = "explicit"
[[tool.poetry.source]]
name = "torch_nightly_cu121"
url = "https://download.pytorch.org/whl/nightly/cu121/"
priority = "explicit"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
</code></pre>
|
<python><pytorch><nlp><huggingface-transformers><large-language-model>
|
2024-08-14 11:39:28
| 0
| 3,723
|
Carlos Luis Rivera
|
78,870,533
| 5,457,202
|
Issues trying to load saved Keras U-Net model from h5 file
|
<p>I've been assigned a task in my company to try to hydrate a model that was trained for a previous project, and while I can load it again, it fails when I try to use it and I don't know why.</p>
<p>The model follows a U-Net architecture, and here's the output of the <code>summary()</code> method after calling <code>load_weights()</code>.</p>
<pre><code>Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 640, 512, 1 0 []
)]
conv2d (Conv2D) (None, 640, 512, 64 640 ['input_1[0][0]']
)
conv2d_1 (Conv2D) (None, 640, 512, 64 36928 ['conv2d[0][0]']
)
max_pooling2d (MaxPooling2D) (None, 320, 256, 64 0 ['conv2d_1[0][0]']
)
conv2d_2 (Conv2D) (None, 320, 256, 12 73856 ['max_pooling2d[0][0]']
8)
conv2d_3 (Conv2D) (None, 320, 256, 12 147584 ['conv2d_2[0][0]']
8)
max_pooling2d_1 (MaxPooling2D) (None, 160, 128, 12 0 ['conv2d_3[0][0]']
8)
conv2d_4 (Conv2D) (None, 160, 128, 25 295168 ['max_pooling2d_1[0][0]']
6)
conv2d_5 (Conv2D) (None, 160, 128, 25 590080 ['conv2d_4[0][0]']
6)
max_pooling2d_2 (MaxPooling2D) (None, 80, 64, 256) 0 ['conv2d_5[0][0]']
conv2d_6 (Conv2D) (None, 80, 64, 512) 1180160 ['max_pooling2d_2[0][0]']
conv2d_7 (Conv2D) (None, 80, 64, 512) 2359808 ['conv2d_6[0][0]']
max_pooling2d_3 (MaxPooling2D) (None, 40, 32, 512) 0 ['conv2d_7[0][0]']
conv2d_8 (Conv2D) (None, 40, 32, 1024 4719616 ['max_pooling2d_3[0][0]']
)
conv2d_9 (Conv2D) (None, 40, 32, 1024 9438208 ['conv2d_8[0][0]']
)
up_sampling2d (UpSampling2D) (None, 80, 64, 1024 0 ['conv2d_9[0][0]']
)
concatenate (Concatenate) (None, 80, 64, 1536 0 ['up_sampling2d[0][0]',
) 'conv2d_7[0][0]']
conv2d_10 (Conv2D) (None, 80, 64, 512) 7078400 ['concatenate[0][0]']
conv2d_11 (Conv2D) (None, 80, 64, 512) 2359808 ['conv2d_10[0][0]']
up_sampling2d_1 (UpSampling2D) (None, 160, 128, 51 0 ['conv2d_11[0][0]']
2)
concatenate_1 (Concatenate) (None, 160, 128, 76 0 ['up_sampling2d_1[0][0]',
8) 'conv2d_5[0][0]']
conv2d_12 (Conv2D) (None, 160, 128, 25 1769728 ['concatenate_1[0][0]']
6)
conv2d_13 (Conv2D) (None, 160, 128, 25 590080 ['conv2d_12[0][0]']
6)
up_sampling2d_2 (UpSampling2D) (None, 320, 256, 25 0 ['conv2d_13[0][0]']
6)
concatenate_2 (Concatenate) (None, 320, 256, 38 0 ['up_sampling2d_2[0][0]',
4) 'conv2d_3[0][0]']
conv2d_14 (Conv2D) (None, 320, 256, 12 442496 ['concatenate_2[0][0]']
8)
conv2d_15 (Conv2D) (None, 320, 256, 12 147584 ['conv2d_14[0][0]']
8)
up_sampling2d_3 (UpSampling2D) (None, 640, 512, 12 0 ['conv2d_15[0][0]']
8)
concatenate_3 (Concatenate) (None, 640, 512, 19 0 ['up_sampling2d_3[0][0]',
2) 'conv2d_1[0][0]']
conv2d_16 (Conv2D) (None, 640, 512, 64 110656 ['concatenate_3[0][0]']
)
conv2d_17 (Conv2D) (None, 640, 512, 64 36928 ['conv2d_16[0][0]']
)
conv2d_18 (Conv2D) (None, 640, 512, 1) 65 ['conv2d_17[0][0]']
==================================================================================================
Total params: 31,377,793
Trainable params: 31,377,793
Non-trainable params: 0
__________________________________________________________________________________________________
</code></pre>
<p>My main concern is that when I load a picture as a numpy array, ending up with an input shaped (640, 512, 1), just like the first layer, I get the following error.</p>
<pre><code>from tensorflow.keras.preprocessing.image import load_img, img_to_array
img_size = (640,512)
color_mode = "grayscale"
image = img_to_array(load_img(image_path, target_size=img_size, color_mode=color_mode))
image = image/255.0
print(image.shape)
#(640, 512, 1)
#unet is a wrapper class which contains the model described above
#unet.load_weights('../../models/unet_model_i_04.h5')
unet.model.predict(image)
</code></pre>
<p>This snippet produces this error:</p>
<pre><code>ValueError: Input 0 of layer "model" is incompatible with the layer: expected shape=(None, 640, 512, 1), found shape=(32, 512, 1)
</code></pre>
<p>I tried changing the shape of the input, and hence the image size (I suspect, based on an outdated notebook I was granted access to, that when the model was initially trained and exported, it was done using 320x256 images), but that only changes the error to:</p>
<pre><code>ValueError: Input 0 of layer "model" is incompatible with the layer: expected shape=(None, 320, 256, 1), found shape=(32, 256, 1)
</code></pre>
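<p>For what it's worth, the shapes in the error suggest the single image is being interpreted as a batch of 640 (or 320) rows. A minimal shape sketch of what I believe <code>predict</code> expects, namely a leading batch axis (plain numpy, no Keras needed to show the shapes):</p>

```python
import numpy as np

# The model expects (batch, 640, 512, 1); a single image loaded as
# (640, 512, 1) gets misread as a batch of 640 row-vectors.
image = np.zeros((640, 512, 1), dtype=np.float32)

# Add a leading batch axis; equivalently image[np.newaxis, ...].
batch = np.expand_dims(image, axis=0)
print(batch.shape)
```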
|
<python><tensorflow><keras><deep-learning><unet-neural-network>
|
2024-08-14 10:52:17
| 1
| 436
|
J. Maria
|
78,870,532
| 12,424,131
|
Python metaclass keyword arguments not getting used by subclass
|
<p>I am trying to write a metaclass to assist in serialization. The intention of the metaclass was to isolate the production code from the serialization mode (e.g. YAML or JSON) as much as possible. So, things could inherit from the class <code>Serializable</code>, and not have to worry (too much) about whether it was going to be serialized as YAML or JSON.</p>
<p>I have the following code (for a YAML serializer). It is basically working, but I wanted to specify the Loader and Dumper as keyword arguments (since PyYAML can use SafeLoader, FullLoader, etc.). This is where I have the problem. I added keyword arguments for Loader and Dumper to the <code>Serializable</code> class. These work for that class, but not for subclasses, e.g. when I define the class <code>A</code> (which is a subclass of <code>Serializable</code>) the metaclass's <code>__new__</code> does not get any keyword arguments.</p>
<p>What am I doing wrong?</p>
<pre><code>import yaml
class SerializableMeta(type):
@classmethod
def __prepare__(cls, name, bases, **kwargs):
return {"yaml_tag": None, "to_yaml": None, "from_yaml": None}
def __new__(cls, name, bases, namespace, **kwargs):
yaml_tag = f"!{name}"
cls_ = super().__new__(cls, name, bases, namespace)
cls_.yaml_tag = yaml_tag
cls_.to_yaml = lambda dumper, obj: dumper.represent_mapping(yaml_tag, obj.__dict__)
cls_.from_yaml = lambda loader, node: cls_(**loader.construct_mapping(node))
kwargs["Dumper"].add_representer(cls_, cls_.to_yaml)
kwargs["Loader"].add_constructor(yaml_tag, cls_.from_yaml)
return cls_
class Serializable(metaclass=SerializableMeta, Dumper=yaml.Dumper, Loader=yaml.Loader):
pass
class A(Serializable):
def __init__(self, a, b):
self.a = a
self.b = b
def __repr__(self):
return f"A({self.a}, {self.b})"
def __eq__(self, other):
if isinstance(other, A):
return self.a == other.a and self.b == other.b
else:
return NotImplemented
</code></pre>
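<p>To illustrate what I mean, here is a stripped-down sketch (not my real serializer) of the behavior I observe and the fallback I am considering: class-statement keyword arguments only reach the metaclass for the class that literally spells them out, so subclasses would have to re-read what the nearest base stored.</p>

```python
class Meta(type):
    def __new__(mcls, name, bases, ns, **kwargs):
        cls_ = super().__new__(mcls, name, bases, ns)
        # kwargs is only non-empty when keywords appear in the class
        # statement itself; subclasses fall back to the base's stored value.
        registry = kwargs.get("registry")
        if registry is None:
            registry = getattr(cls_, "_registry", None)
        cls_._registry = registry
        if registry is not None:
            registry.append(name)
        return cls_

    def __init__(cls, name, bases, ns, **kwargs):
        # Swallow the class keywords so type.__init__ never sees them.
        super().__init__(name, bases, ns)

shared = []

class Base(metaclass=Meta, registry=shared):
    pass

class Child(Base):  # no keyword arguments here
    pass

print(shared)
```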
|
<python><metaclass>
|
2024-08-14 10:52:13
| 1
| 466
|
Steven Dickinson
|
78,870,514
| 4,340,985
|
How to read a csv into pandas with missing columns in the header?
|
<p>I have a CSV file from a measurement device, that produces a bunch of values (Temperature, Rain and Wind) and gives some metadata for the device:</p>
<pre><code>Station, Hillside
ID, 12345
elevation, 54321
units, °C, mm, kph
time, temp, prec, wind
2024-08-01 00:00, 18, 0, 5
2024-08-01 01:00, 18, 0, 2
2024-08-01 02:00, 17.5, 3, 14
</code></pre>
<p>When I try to read this with <code>df=pd.read_csv('12345.csv', header=[0,1,2,3,4], index_col=0, parse_dates=True)</code> I get an <code>Header rows must have an equal number of columns</code> error. If I read it in with <code>header=[3,4]</code>, where I have all columns, it works, but especially the <code>Station</code> name would be rather important to have.</p>
<p>I can probably read that in separately and then add it to the df's multiindex, but I wonder if I'm missing some obvious shortcut, since a lot of devices that produce csv files appear to add the metadata on top, without filling out all columns. Especially since I can have multiindex rows, that stretch over many columns.</p>
<p><em>Edit:</em> Essentially, I'm looking for a dataframe like this:</p>
<pre><code>Station Hillside |m i
ID 12345 |u n
elevation 54321 |l d
units °C mm kph |t e
time temp prec wind |i x
__________________________________________
2024-08-01 00:00:00 18.0 0 5 |d
2024-08-01 01:00:00 18.0 0 2 |a
2024-08-01 02:00:00 17.5 3 14 |t
... |a
</code></pre>
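<p>The workaround I have so far is a two-pass read: consume the metadata lines by hand, then let <code>read_csv</code> parse the rest of the same buffer (units written as <code>degC</code> below only to keep the sketch ASCII):</p>

```python
import io
import pandas as pd

raw = """Station, Hillside
ID, 12345
elevation, 54321
units, degC, mm, kph
time, temp, prec, wind
2024-08-01 00:00, 18, 0, 5
2024-08-01 01:00, 18, 0, 2
"""

# Pass 1: the first four lines are key/value metadata.
buf = io.StringIO(raw)
meta = {}
for _ in range(4):
    key, *rest = buf.readline().strip().split(",")
    meta[key.strip()] = [v.strip() for v in rest]

# Pass 2: the buffer now sits at the real header row.
df = pd.read_csv(buf, index_col=0, parse_dates=True, skipinitialspace=True)
print(meta, df.shape)
```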
|
<python><pandas><csv><multi-index>
|
2024-08-14 10:47:44
| 3
| 2,668
|
JC_CL
|
78,870,445
| 864,245
|
Class inheritance where the children are simple variable-only classes
|
<p>I am working with some YAML patches. These patches are a similar structure, but contain different values. The values are often difficult to remember, so I want to abstract them away into class instances that I can reference.</p>
<p>Here is the approach I have taken so far:</p>
<pre class="lang-py prettyprint-override"><code>class YamlPatch:
def __init__(self, kind, name, namespace, op, path, value):
target={
"kind": kind,
"name": name,
"namespace": namespace
},
scalar=[{
"op": op,
"path": path,
"value": value
}]
self.yaml = (target, scalar)
class PatchA(YamlPatch):
def __init__(self, name):
namespace = "my-namespace"
kind = "test"
op = "replace"
path = "/test"
value = "hello"
super().__init__(kind, name, namespace, op, path, value)
class PatchB(YamlPatch):
def __init__(self, path):
namespace = "my-namespace"
name = "my-name"
kind = "test"
op = "replace"
value = "hello"
super().__init__(kind, name, namespace, op, path, value)
### Insert 4 or 5 other types of patches here...
patches = []
patches.append(PatchA("hello").yaml)
for app in ["app1", "app2"]:
patches.append(PatchB(f"/{app}").yaml)
print(patches)
### output: [(({'kind': 'test', 'name': 'hello', 'namespace': 'my-namespace'},), [{'op': 'replace', 'path': '/test', 'value': 'hello'}]), (({'kind': 'test', 'name': 'my-name', 'namespace': 'my-namespace'},), [{'op': 'replace', 'path': '/app1', 'value': 'hello'}]), (({'kind': 'test', 'name': 'my-name', 'namespace': 'my-namespace'},), [{'op': 'replace', 'path': '/app2', 'value': 'hello'}])]
</code></pre>
<p>This feels messy and repetitive, especially when you add in type-hinting and comments. Not very DRY. Some of the values are fairly common defaults, and having to <code>__init__</code> then <code>super()</code> in every child class (patch) doesn't feel good.</p>
<p>I tried using dataclasses, but because the required "input" arguments for the child classes are different, I would have to use the <code>kw_only</code> argument which can be tricky to remember with so many different patches (e.g. <code>PatchA(value="blah")</code> or was it <code>PatchA(name="blah")</code>, I can't remember?).</p>
<p>In short, I'm looking for the quickest and most efficient way to write code which allows me to reference a memorable, simple name (I've called them <code>PatchA</code> and <code>PatchB</code> here but in real code they'll be something unique and obvious to the maintainers) and return the correctly-formatted YAML patch. E.g. <code>print(PatchA)</code>.</p>
<p>I am using Python 3.11.</p>
<p>--- EDIT FOR CLARIFICATION</p>
<p>The reason I want to avoid having to use keyword arguments is because each patch has a different set of values, and they're not simple to remember. Also, if the values change, I want to be able to change them in one place rather than every time they're referenced in my code.</p>
<p>Here's a realistic (albeit abridged) example:</p>
<pre class="lang-py prettyprint-override"><code>class YamlPatch:
yaml = ...
class PackageApplicationFromGit(YamlPatch):
path = "/spec/path"
name = f"application-{application}"
value = f"/some/applications/path/{application}"
class AppsGitRepoBranchPatch(YamlPatch):
kind = "GitRepository"
path = "/spec/ref/branch"
value = "my-branch-name"
</code></pre>
<p>The two patches have the same structure, but wildly different values. These values are all static <em>apart from a single argument</em>, e.g. a branch name or an application name.</p>
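<p>The closest I have come to something DRY is pushing all defaults into class attributes and keeping a single <code>__init__</code> on the base that only accepts overrides; sketch below (field names simplified from my real patches):</p>

```python
class YamlPatch:
    # Class-level defaults; each concrete patch overrides only what differs.
    kind = "test"
    name = "my-name"
    namespace = "my-namespace"
    op = "replace"
    path = "/test"
    value = "hello"

    def __init__(self, **overrides):
        # One __init__ for every patch: unknown keys fail fast, known keys
        # shadow the class-level default on this instance only.
        for key, val in overrides.items():
            if not hasattr(type(self), key):
                raise TypeError(f"unknown field: {key!r}")
            setattr(self, key, val)

    @property
    def yaml(self):
        target = {"kind": self.kind, "name": self.name, "namespace": self.namespace}
        scalar = [{"op": self.op, "path": self.path, "value": self.value}]
        return (target, scalar)

class PatchB(YamlPatch):
    pass  # everything default; only the path varies per call

p = PatchB(path="/app1")
print(p.yaml)
```

<p>Subclasses then shrink to a list of overridden attributes, and each default lives in exactly one place.</p>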
|
<python><python-3.x>
|
2024-08-14 10:32:51
| 2
| 1,316
|
turbonerd
|
78,870,086
| 13,942,929
|
Cython : How can we add default value to fused_type parameter?
|
<p>I want my function, which has a fused-type parameter, to take a default value.
I already added <code>int</code> as part of my fused type, but I keep getting an error.</p>
<pre><code>ctypedef fused test_type:
double
int
str
MyObject
</code></pre>
<hr />
<pre><code>def test_point(self, a : test_type = 0):
print("hahah")
</code></pre>
<hr />
<p>[Error]</p>
<pre><code>TypeError: Expected unicode, got int
</code></pre>
|
<python><cython><cythonize>
|
2024-08-14 09:14:28
| 0
| 3,779
|
Punreach Rany
|
78,870,005
| 6,450,267
|
How to input multiple inputs in RunnableWithMessageHistory of LangChain?
|
<p>I would like to ask about LangChain for LLMs in Python.</p>
<p>I need multiple inputs together with the chat history to run the model, so I tried to use RunnableWithMessageHistory but got an error.</p>
<blockquote>
<p><strong>Error in RootListenersTracer.on_chain_end callback: KeyError('input')</strong></p>
<p><strong>{'output1': 'future', 'output2': 'past'}</strong></p>
</blockquote>
<pre><code>llm = ChatOpenAI(
model="gpt-4o",
temperature=0,
max_tokens=None,
timeout=None,
max_retries=2,
api_key=api_info['api_key'],
organization=api_info['organization']
# base_url="...",
# other params...
)
store = {}
def get_session_history(session_ids):
if session_ids not in store:
store[session_ids] = ChatMessageHistory()
return store[session_ids]
response_schemas = [
ResponseSchema(name="output1", description="translates first word to English"),
ResponseSchema(name="output2", description="translates next word to English")
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
prompt_template_1 = ChatPromptTemplate(
messages=[
SystemMessagePromptTemplate.from_template(
"Output MUST be JSON format"
),
MessagesPlaceholder(variable_name="chat_history"),
HumanMessagePromptTemplate.from_template(
"translates {korean_word1} and {korean_word2} to English."
"\n{format_instructions}"
)
],
input_variables=["korean_word1", "korean_word2"],
partial_variables={"format_instructions": output_parser.get_format_instructions()}
)
chain1 = prompt_template_1 | llm | output_parser
chain_with_history = RunnableWithMessageHistory(
chain1,
get_session_history,
history_messages_key="chat_history",
)
word1 = '미래'
word2 = '과거'
result = chain_with_history.invoke(
{"korean_word1": word1, "korean_word2": word2},
config={"configurable": {"session_id": "abc123"}},
)
</code></pre>
<p>I also tried to use <code>input_messages_key</code> in RunnableWithMessageHistory, but that failed as well, because <code>input_messages_key</code> allows only a single key, not multiple keys.</p>
<pre><code>chain_with_history = RunnableWithMessageHistory(
chain1,
get_session_history,
input_messages_key=["korean_word1", "korean_word2"],  # Error here
history_messages_key="chat_history",
)
</code></pre>
<p>Please help me write the correct code.</p>
|
<python><openai-api><langchain>
|
2024-08-14 08:59:49
| 0
| 340
|
Soonmyun Jang
|
78,869,764
| 2,612,235
|
Local Package Development with Poetry on Ubuntu 24.04?
|
<p>I'm working on a Python project that uses Poetry for dependency management. Recently, I found a bug in a third-party package, so I cloned its repository to work on it locally. My goal is to integrate this local version of the package into my project, but I've run into some issues.</p>
<p>First, I attempted to manually add the package path to my <code>pyproject.toml</code> like this:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry.dependencies]
buggy-one = { path = './buggy-one' }
</code></pre>
<p>This does install the package, but not in editable mode, which is not ideal for development.</p>
<p>Next, I tried using <code>pip install -e ./buggy-one</code> to install the package in editable mode. Unfortunately, on newer versions of Ubuntu (like 24.04), this results in the following error:</p>
<pre><code>error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install python3-xyz...
</code></pre>
<p>The error message suggests using <code>pipx</code> as an alternative, but this isn't suitable for my needs since <code>pipx</code> installs the package in a separate environment, and I need to work on it within my Poetry-managed project.</p>
<p>Given these challenges, what is the correct approach in 2024 to work on a local package within a Poetry-managed project on Ubuntu? Is there a way to properly link the local package so that I can develop and test it simultaneously with my project?</p>
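<p>For reference, the closest thing I have found in Poetry's path-dependency syntax is the <code>develop</code> flag, which is supposed to install the path dependency in editable mode (I have not fully verified that it behaves identically to <code>pip install -e</code>):</p>

```toml
[tool.poetry.dependencies]
# develop = true asks Poetry to install the path dependency editable.
buggy-one = { path = "./buggy-one", develop = true }
```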
<h3>New pip message on Ubuntu 24.04</h3>
<p>Here the full new message you get if you try to use <code>pip</code> on Ubuntu 24.04:</p>
<pre><code>$ pip install gettext
error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
python3-xyz, where xyz is the package you are trying to
install.
If you wish to install a non-Debian-packaged Python package,
create a virtual environment using python3 -m venv path/to/venv.
Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
sure you have python3-full installed.
If you wish to install a non-Debian packaged Python application,
it may be easiest to use pipx install xyz, which will manage
a virtual environment for you. Make sure you have pipx
installed.
See /usr/share/doc/python3.12/README.venv for more information.
note: If you believe this is a mistake, please contact your Python installation or
OS distribution provider. You can override this, at the risk of
breaking your Python installation or OS, by passing
--break-system-packages.
</code></pre>
|
<python><ubuntu><pip><python-poetry><pipx>
|
2024-08-14 08:03:52
| 1
| 29,646
|
nowox
|
78,869,763
| 10,855,529
|
Using starts_with for comparing a string to a list of strings in Polars
|
<p>Could I do a <code>starts_with</code> check for a string against a list of strings, returning <code>True</code> if the string starts with <em>any</em> of the strings in the list?</p>
<p>For now, I came up with the following.</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({
'a': ['https://abcd.com', 'https://youtube.com'],
'validation_urls': [
['https://abcd.com', 'https://efgh.com'],
['https://abcd.com', 'https://efgh.com']
],
})
</code></pre>
<pre class="lang-py prettyprint-override"><code>df = df.with_columns(is_valid=pl.col('a').str.starts_with(pl.col('validation_urls').explode()))
</code></pre>
<pre><code>βββββββββββββββββββββββ¬βββββββββββββββββββββββββββββββββββββββββββ¬βββββββββββ
β a β validation_urls β is_valid β
β --- β --- β --- β
β str β list[str] β bool β
βββββββββββββββββββββββͺβββββββββββββββββββββββββββββββββββββββββββͺβββββββββββ‘
β https://abcd.com β ["https://abcd.com", "https://efgh.com"] β true β
β https://youtube.com β ["https://abcd.com", "https://efgh.com"] β false β
βββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββββββββ΄βββββββββββ
</code></pre>
<p>However, I am not sure if this is the ideal way to perform this check.</p>
|
<python><python-polars>
|
2024-08-14 08:03:49
| 1
| 3,833
|
apostofes
|
78,869,587
| 11,046,379
|
Get boolean expression from hierarchical Pandas DataFrame
|
<p>The dataframe is given as :</p>
<pre><code>df = pd.DataFrame(
{
"id": [1, 2, 3, 4, 5, 6, 7, 8],
"parent_id": [0, 0, 1, 1, 2, 2, 4, 4],
        "value": ["a>2", "b<4", "d>5", "e<3", "h>1", "i>10", "f>3", "g>2"],
}
)
</code></pre>
<p>I need to get this string expression:</p>
<pre><code>"or(and (a > 2,
or(d > 5,
and (e<3,
or (f > 3,
g > 2)
)
)
),
and(b < 4,
or(h > 1,
i > 10)
)
)"
</code></pre>
<p>I.e. sibling children should become the parameters of an "or" function, and a parent together with its children the parameters of an "and" function.</p>
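<p>One way to build that expression (a sketch; it assumes the <code>value</code> column holds the condition strings) is a recursive walk over a parent-to-children map, where a leaf is its own condition and a parent becomes <code>and(parent, or(children))</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "id": [1, 2, 3, 4, 5, 6, 7, 8],
    "parent_id": [0, 0, 1, 1, 2, 2, 4, 4],
    "value": ["a>2", "b<4", "d>5", "e<3", "h>1", "i>10", "f>3", "g>2"],
})

# Adjacency map: parent id -> list of child ids (0 is the virtual root).
children = df.groupby("parent_id")["id"].apply(list).to_dict()
values = df.set_index("id")["value"].to_dict()

def build(node: int) -> str:
    """A leaf is its own condition; a parent is and(parent, or(children))."""
    kids = children.get(node, [])
    if not kids:
        return values[node]
    inner = "or(" + ", ".join(build(k) for k in kids) + ")"
    return f"and({values[node]}, {inner})"

# Top-level nodes (parent_id == 0) are combined with "or".
expr = "or(" + ", ".join(build(r) for r in children[0]) + ")"
print(expr)
```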
|
<python><pandas><dataframe>
|
2024-08-14 07:14:58
| 1
| 1,658
|
harp1814
|
78,869,394
| 11,895,964
|
Import Errors for Custom Django Apps in XBlock Development on OpenEdX Palm Version
|
<p>I'm having trouble importing custom Django apps in an Open edX XBlock. Even though the Django application is installed in the Open edX environment, I'm encountering errors.</p>
<p>The <code>video_rating</code> custom Django application is installed and working perfectly in this environment.</p>
<p><code>onlineoffline</code> is my XBlock.</p>
<blockquote>
<p>2024-07-22 10:43:36,866 WARNING 32 [xblock.plugin] [user None] [ip None] plugin.py:144 - Unable to load XBlock 'onlineoffline'
Traceback (most recent call last):
File "/openedx/venv/lib/python3.8/site-packages/xblock/plugin.py", line 141, in load_classes
yield (class_.__name__, cls.load_class_entry_point(class_))
File "/openedx/venv/lib/python3.8/site-packages/xblock/plugin.py", line 70, in load_class_entry_point
class_ = entry_point.load()
File "/openedx/venv/lib/python3.8/site-packages/pkg_resources/__init__.py", line 2517, in load
return self.resolve()
File "/openedx/venv/lib/python3.8/site-packages/pkg_resources/__init__.py", line 2523, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/openedx/venv/lib/python3.8/site-packages/onlineoffline/__init__.py", line 1, in &lt;module&gt;
from .onlineoffline import OnlineOfflineClassXBlock
File "/openedx/venv/lib/python3.8/site-packages/onlineoffline/onlineoffline.py", line 4, in &lt;module&gt;
from openedx.features.video_rating.models import UserFeedbackSave, Questions, Type, Ratings
ModuleNotFoundError: No module named 'openedx.features.video_rating'</p>
</blockquote>
|
<python><django><openedx><edx><palm>
|
2024-08-14 06:17:42
| 1
| 414
|
Neeraj Kumar
|
78,869,326
| 11,678,700
|
Why is it that calling standard sum on a numpy array produces a different result than numpy.sum?
|
<p>Observe in the following code: creating a numpy array and calling the builtin Python <code>sum</code> function produces different results than <code>numpy.sum</code>.</p>
<p>How is numpy's sum function implemented? And why is the result different?</p>
<pre><code>test = [.1]*10
test = [np.float64(x) for x in test]
test[5]= np.float64(-.9)
d = [np.asarray(test) for x in range(0,60000)]
sum(sum(d))
</code></pre>
<p>outputs</p>
<pre><code>np.float64(-1.7473212210461497e-08)
</code></pre>
<p>but</p>
<pre><code>np.sum(d)
</code></pre>
<p>outputs</p>
<pre><code>np.float64(9.987344284922983e-12)
</code></pre>
|
<python><numpy><data-science>
|
2024-08-14 05:55:28
| 2
| 328
|
Liam385
|
78,869,236
| 6,101,419
|
Use Custom Manager to Filter on a Reverse Relation
|
<p>I have a set of users and a set of assignments each user submits.</p>
<pre class="lang-py prettyprint-override"><code>class User(models.Model):
name = models.CharField()
class Assignment(models.Model):
user = models.ForeignKey(
"User",
related_name="assignments"
)
status = models.CharField()
approved = AssignmentActiveManager()
rejected = AssignmentRejectedManager()
...
</code></pre>
<p>I created custom managers to determine different states of assignments, as it's complex and should be internal to the model. For example:</p>
<pre class="lang-py prettyprint-override"><code>class AssignmentActiveManager(models.Manager):
def get_queryset(self):
return Assignment.objects.filter(status__in=["Approved", "Accepted"])
</code></pre>
<p>Now I want to get all users with an approved assignment, using the <code>Assignment.approved</code> manager, because I don't want to duplicate the filtering code.</p>
<p>I can do</p>
<pre><code>Users.objects.filter(assignments__in=Assignments.approved.all()).all()
</code></pre>
<p>However, this query involves a <code>WHERE status IN (SELECT ...)</code> subquery. This is going to be less efficient than the query generated if I had written the filtering explicitly:</p>
<pre><code>Users.objects.filter(assignments__status__in=["Approved", "Accepted"]).all()
</code></pre>
<p>Which would do an <code>INNER JOIN</code> and <code>WHERE user.assignment_id IN (Approved, Accepted)</code>.</p>
<p>So my question is: Is there a way to select <em>users</em> by filtering on <em>assignments</em> using the <code>Assignment</code> custom managers <em>efficiently</em>?</p>
|
<python><django>
|
2024-08-14 05:16:28
| 1
| 2,137
|
Enrico Borba
|
78,869,092
| 14,808,637
|
Visualization of Graphs Data
|
<p>I need to visualize graphs where each pair of nodes represents an edge, and the numeric value represents the intensity of that edge. For instance, ('A', 'B'): 0.71 means that node A is connected to node B with an edge intensity of 0.71. I need to visualize these graphs in Python. Here is the data for each graph.</p>
<p>Graph 1</p>
<pre><code>('A', 'B'): 0.71
('M', 'B'): 0.67
('N', 'B'): 0.64
('A', 'O'): 0.62
('O', 'B'): 0.60
('N', 'O'): 0.53
('M', 'O'): 0.46
('A', 'N'): 0.18
('M', 'N'): 0.11
</code></pre>
<p>Graph2</p>
<pre><code>('ABC', 'ADC'): 0.53
</code></pre>
<p>Graph3</p>
<pre><code>('CDE', 'CFH'): 0.28
</code></pre>
<p>Graph4</p>
<pre><code>('GHI', 'GMI'): 0.20
</code></pre>
<p>Graph5</p>
<pre><code>('XYZ', 'XWZ'): 0.17
</code></pre>
<p>Can anyone assist me in visualizing these graphs clearly?</p>
<p>I tested your code, and I encountered the same issue of overlapping nodes that I had with my own code. I have attached the output image generated by your code for your reference. I used the following command to save the file: <code>plt.savefig('graph_image.png')</code></p>
<p><a href="https://i.sstatic.net/657xgMtB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/657xgMtB.png" alt="enter image description here" /></a></p>
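<p>A sketch with <code>networkx</code> (assuming that library is acceptable): increasing <code>spring_layout</code>'s <code>k</code> pushes the nodes apart, which is the usual fix for overlap, and fixing <code>seed</code> keeps the layout reproducible; edge widths can encode the intensity:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this when running interactively
import matplotlib.pyplot as plt
import networkx as nx

edges = {
    ("A", "B"): 0.71, ("M", "B"): 0.67, ("N", "B"): 0.64,
    ("A", "O"): 0.62, ("O", "B"): 0.60, ("N", "O"): 0.53,
    ("M", "O"): 0.46, ("A", "N"): 0.18, ("M", "N"): 0.11,
}

G = nx.Graph()
for (u, v), w in edges.items():
    G.add_edge(u, v, weight=w)

# Larger k spreads nodes apart; a fixed seed makes the layout reproducible.
pos = nx.spring_layout(G, k=2.0, seed=42)
widths = [5 * G[u][v]["weight"] for u, v in G.edges()]
nx.draw_networkx(G, pos, width=widths, node_size=900, node_color="lightblue")
nx.draw_networkx_edge_labels(G, pos, edge_labels=nx.get_edge_attributes(G, "weight"))
plt.axis("off")
plt.savefig("graph_image.png")
```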
|
<python><python-3.x><matplotlib><graph>
|
2024-08-14 04:03:44
| 1
| 774
|
Ahmad
|
78,869,085
| 2,966,723
|
Using and changing a variable name for a value in a dict
|
<p>I've encountered a challenge in doing repeated simulations with a changing parameter value using Python. I'm looking for a clean way to change the parameter value.</p>
<p>Reducing it to the simplest case, we can assume the parameter is showing up as the value for some specific keys in a dictionary, <code>D</code>, let's say <code>D[1]</code> and <code>D[2]</code> are both <code>delta</code>. I need to rerun the code with a changed value of <code>delta</code>, but updating <code>D</code> in practice is not as simple in my real context as updating <code>D[1]</code> and <code>D[2]</code>:</p>
<p><strong>Here's the question:</strong></p>
<p>Is there a way to create a dict <code>D</code> with some values equal to a parameter <code>delta</code> so that if I change <code>delta</code> later, the corresponding values in <code>D</code> are automatically updated? Basically I'm sort of trying to set the value of <code>D</code> to be passed by reference.</p>
<hr />
<p>More context:</p>
<p>Right now I have something like:</p>
<pre><code>delta = 1
D = {1:delta, 2:delta, 3:1} #the real process of creating `D` is computationally slow.
#... some arbitrary code here, some of which I can't modify that needs D[1] to behave like a number.
delta = 2
D = {1:delta, 2:delta, 3:1}
#... equivalent code here
</code></pre>
<p>I can't really wrap this in a for loop because I'm actually doing this in a Jupyter notebook and I want to include a markdown cell describing the outputs each time I change delta. Also creating <code>D</code> is pretty involved.</p>
<p>My best solution right now is to track which keys correspond to delta in a separate list and then go through and update all of them. This process will be a touch messy and might cause confusion for some users who will not be all that familiar with python.</p>
<p>I don't think it is possible to do what I'm asking, but if so it would make my life much easier.</p>
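<p>Plain numbers are immutable, so <code>D</code> cannot reference <code>delta</code> by itself; but a small number-like wrapper (a minimal sketch, add whichever dunder methods the downstream code actually needs) gives the pass-by-reference behaviour without rebuilding <code>D</code>:</p>

```python
class Param:
    """Mutable number-like wrapper; extend with more dunders as needed."""
    def __init__(self, value):
        self.value = value
    def __float__(self):
        return float(self.value)
    def __add__(self, other):
        return self.value + other
    __radd__ = __add__
    def __mul__(self, other):
        return self.value * other
    __rmul__ = __mul__
    def __repr__(self):
        return repr(self.value)

delta = Param(1)
D = {1: delta, 2: delta, 3: 1}   # both keys share one Param object
first_run = 2 * D[1]             # behaves like a number in arithmetic
delta.value = 2                  # updating delta updates D[1] and D[2] too
second_run = 2 * D[1]
```

<p>The caveat is the one noted in the question: code that demands a real <code>int</code>/<code>float</code> (e.g. C extensions) may need <code>float(D[1])</code> or more dunders on the wrapper.</p>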
|
<python><dictionary>
|
2024-08-14 04:01:35
| 1
| 24,012
|
Joel
|
78,868,737
| 11,233,365
|
How to relate new SQL table row to an existing row in a related SQL table without creating a duplicate new row
|
<p>I am trying to create two tables using SQLModel where a parent can have multiple children. The parents and their children will be found and registered to the database individually, so I need to somehow implement the ability to point already existing entries in the parent table to their children as they are found and registered, and vice-versa.</p>
<p>I have tried coming up with a proof-of-concept to register the parents and their children separately, and map the relationships as they are registered. I am running this using FastAPI, so the installation of the Python environment I used is as follows:</p>
<pre class="lang-none prettyprint-override"><code># Setting up the test Python environment
$ mamba/conda create my-sql-db python==3.9.*
$ mamba/conda activate my-sql-db
$ python -m pip install sqlmodel fastapi "uvicorn[standard]"
$ uvicorn db:app --reload # Run the FastAPI applet using Uvicorn
</code></pre>
<p>This is the standalone script to generate the two API endpoints I am trying to implement, with the script being named <code>db.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from contextlib import asynccontextmanager
from sqlmodel import SQLModel, Session, Field, Relationship, create_engine, select
from fastapi import FastAPI
from typing import Optional, List, Union
"""
Set up example database to show the relationships I'm trying to establish
"""
# Define tables and relationships
class Parent(SQLModel, table=True): # type: ignore
id: Optional[int] = Field(default=None, primary_key=True)
name: str = Field() # In case more parameters are needed
children: List["Child"] = Relationship(
back_populates="parent",
sa_relationship_kwargs={"cascade": "delete"},
)
class Child(SQLModel, table=True): # type: ignore
id: Optional[int] = Field(default=None, primary_key=True)
name: str = Field() # In case more parameters are needed
parent: Optional["Parent"] = Relationship(
back_populates="children",
)
parent_id: Optional[int] = Field(
foreign_key="parent.id",
default=None,
)
# Set up database URL
db_name = "database.db"
db_url = f"sqlite:///{db_name}"
connect_args = {"check_same_thread": False}
engine = create_engine(db_url, echo=True, connect_args=connect_args)
def create_db_and_tables():
SQLModel.metadata.create_all(engine)
"""
Set up FastAPI app to interact with the database through
"""
app = FastAPI()
# Initialise database on FastAPI startup
@app.on_event("startup")
def on_startup():
create_db_and_tables()
# Helper functions
def check_db_entry_exists(
table: Union[Parent, Child],
name: str,
) -> bool:
"""
Checks if the name has already been registered in the database. Used to avoid registering duplicates
"""
with Session(engine) as session:
try:
session.exec(
select(table)
.where(table.name == name)
).one()
return True
except Exception:
return False
@app.post("/parent/")
def create_parent(
name: str,
children: List[str] = [],
) -> Parent:
"""
With this API endpoint, I am hoping to be able to add new parents to the database,
but also revisit and update an existing parent when a new child entry gets added.
"""
with Session(engine) as session:
# Check if the parent already exists
if check_db_entry_exists(Parent, name) is True:
parent = session.exec(
select(Parent)
.where(Parent.name == name)
).one()
# Eventually, I want to be able to check if info associated with the parent
# has changed, and to update the entry if it has
# Register the parent if they're new
else:
parent = Parent()
# Add name
parent.name = name
# Add children
if len(children) < 1:
pass
else:
for child in children:
parent.children.append(Child(name=child))
# Add to database and show
session.add(parent)
session.commit()
session.refresh(parent)
return parent
@app.post("/child/")
def create_child(
name: str,
parent: Optional[str] = None,
) -> Child:
"""
With this table, I'm hoping to add the child as a standalone entry first, then come
back and point it to a newly added parent once one gets added to the database
"""
with Session(engine) as session:
# Check if the child being registered already exists
if check_db_entry_exists(Child, name) is True:
child = session.exec(
select(Child)
.where(Child.name == name)
).one()
# Eventually, I want to be able to check if the information associated with
# the child has changed, and update it if it has
# Register child to the database if it doesn't already exist
else:
child = Child()
# Add name
child.name = name
# Add parent
child.parent = Parent(name=parent)
# Add to database and show
session.add(child)
session.commit()
session.refresh(child)
return child
</code></pre>
<p>At present, when I add a new entry to the child table via the FastAPI docs page and try to point it to the parent, a duplicate of the parent gets created instead of using an already existing entry. I'm new to working with SQL databases and SQLModel, so could you tell me how I can modify/add to what I've already got in order to implement what I've described?</p>
|
<python><fastapi><sqlmodel>
|
2024-08-14 00:55:00
| 1
| 301
|
TheEponymousProgrammer
|
78,868,652
| 4,718,221
|
Inheriting from Python dataclass
|
<p>I started using data classes recently and am having some issues understanding how inheritance works. I can't understand why the values end up assigned to the wrong fields, as if too many variables were being passed on. Am I using the super() method correctly here?</p>
<p>Attaching code below:</p>
<pre><code>from dataclasses import dataclass, field, InitVar
from typing import Optional
@dataclass
class ZooAnimals():
myrange = list(range(10, 16, 1))
food_daily_kg: Optional[int]
price_food: float
area_required: list = field(default_factory=list)
name: str = field(default='Zebra', kw_only=True)
weight: InitVar[int] = field(default = 50, kw_only=True)
weight_pounds: int = field(init=False)
dangerous_animal_warning: str = field(default='No', kw_only=True, init=False)
def __post_init__(self, weight):
self.weight_pounds = self.weight * 2.7
if self.name == 'Crocodile':
self.dangerous_animal_warning = "Dangerous"
@dataclass
class Cats(ZooAnimals):
def __init__(self, area_required, name, weight):
super().__init__(area_required, name, weight)
z = Cats([30], name='Leopard', weight=35)
z
Cats(food_daily_kg=[30], price_food='Leopard', area_required=35, name='Zebra', weight_pounds=135.0, dangerous_animal_warning='No')
</code></pre>
|
<python><inheritance><python-dataclasses>
|
2024-08-13 23:52:20
| 0
| 604
|
user4718221
|
78,868,598
| 169,252
|
Friend optimized some python code of mine, what is it really doing?
|
<p>A friend optimized some code I wrote. I was trying to make sense of it.
Using some actual values it boils down to this:</p>
<pre><code>from itertools import cycle
list = sorted([8, 4, 6, 2])
c = cycle(list)
u = [set(next(c) for _ in range(2)) for _ in range(6)]
print(u)
</code></pre>
<p>This code prints
<code>[{2, 4}, {8, 6}, {2, 4}, {8, 6}, {2, 4}, {8, 6}]</code></p>
<p>What I do not understand, is why the second pair is always <code>{8, 6}</code>, when the list is sorted?</p>
<p>Looking at <code>set(next(c) for _ in range(2))</code>, I thought it should always get the next from the cycle, so I was expecting <code>[{2, 4}, {6, 8}, {2, 4}, {6, 8}, {2, 4}, {6, 8}]</code></p>
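<p><code>cycle</code> really does yield 6 before 8; the surprise is only the display order of the set, which follows the hash table rather than insertion order. Replacing <code>set</code> with <code>tuple</code> makes the arrival order visible:</p>

```python
from itertools import cycle

c = cycle(sorted([8, 4, 6, 2]))          # yields 2, 4, 6, 8, 2, 4, ...
pairs = [tuple(next(c) for _ in range(2)) for _ in range(6)]
print(pairs)   # arrival order preserved: [(2, 4), (6, 8), (2, 4), ...]

# Sets are unordered: {6, 8} and {8, 6} are the same set; CPython just
# happens to display small ints in hash-table order, here as {8, 6}.
same = ({6, 8} == {8, 6})
```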
|
<python><cycle>
|
2024-08-13 23:22:52
| 0
| 6,390
|
unsafe_where_true
|
78,868,592
| 20,302,906
|
Sending different content to clients throug websocket connection
|
<p>I'm working on an online blackjack game to be implemented with python websockets. Player 1 and 2, both human, will interact with the server through two client instances I'm planning to make with <a href="https://textual.textualize.io" rel="nofollow noreferrer">textual</a> which make it an online terminal based game.</p>
<p>What I'm trying to find out is how to show Player 1's and Player 2's games independently. That means Player 1 should see all of his own cards, including the covered one, but only Player 2's uncovered cards. The same applies to Player 2.</p>
<p><strong>Example</strong></p>
<p><em>Player 1</em></p>
<pre><code>My game: A, 2, 3, 4
P2 game: [covered], 5, 6,7
</code></pre>
<p><em>Player 2</em></p>
<pre><code>My game: J, 5, 6, 7
P1 game: [covered], 2, 3, 4
</code></pre>
<p>I've done some research on this topic, but all the results share a single UI state between players. That's basically the same approach the official tutorial takes, and I've not found anything that matches my idea of using URLs, like a REST API, to send dedicated content to each player.</p>
<p>Is there a way to implement a system that uses urls like <em>ws://localhost:8001/player_one</em> and <em>ws://localhost:8001/player_two</em> for this purpose?</p>
<p>Is there any other approach to do this?</p>
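<p>Whichever routing you choose, the key server-side move is to render a per-viewer projection of the shared game state and send each connection its own payload. A transport-agnostic sketch (function and payload names are my own):</p>

```python
# Render each player's view of the shared state: the viewer sees their own
# hand fully; everyone else's first (hole) card is replaced with a marker.
def render_view(hands: dict, viewer: str) -> dict:
    view = {}
    for player, cards in hands.items():
        if player == viewer:
            view[player] = cards                      # own hand: all visible
        else:
            view[player] = ["[covered]"] + cards[1:]  # hide the hole card
    return view

hands = {"p1": ["A", "2", "3", "4"], "p2": ["J", "5", "6", "7"]}
p1_payload = render_view(hands, "p1")
p2_payload = render_view(hands, "p2")
```

<p>With the <code>websockets</code> library you can then branch on the connection's request path (e.g. <code>/player_one</code> vs <code>/player_two</code>) inside a single handler to decide which payload to send to which socket.</p>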
|
<python><websocket>
|
2024-08-13 23:17:09
| 1
| 367
|
wavesinaroom
|
78,868,372
| 738,811
|
Mocking a function import with from keyword
|
<p>There are three files, a file with a test and two simple modules:</p>
<p><code>a.py</code>:</p>
<pre><code>import b
def bar():
b.foo()
</code></pre>
<p><code>b.py</code>:</p>
<pre><code>def foo():
print("Hello from b.py")
</code></pre>
<p><code>test_a.py</code>:</p>
<pre><code>from a import bar
from unittest import mock
def test_bar():
with mock.patch('b.foo') as mock_foo:
bar()
mock_foo.assert_called_once()
</code></pre>
<p>The goal is to mock the function <code>foo</code> from the module <code>b</code> which is then called inside the function <code>bar</code>. The above code works as expected and the test will pass.</p>
<p>However, when the file <code>a.py</code> will be modified as follows, where the <code>foo</code> function is imported directly from the module <code>b</code>, then the mocking will fail and the original, non-mocked <code>foo</code> function is getting called inside the function <code>bar</code>:</p>
<pre><code>from b import foo
def bar():
foo()
</code></pre>
<p>Why is the function <code>foo</code> not mocked properly in the test when it's imported with the <code>from</code> keyword in the <code>a</code> module?</p>
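<p>Because <code>from b import foo</code> binds a separate name <code>a.foo</code> at import time, the patch must target the name where it is <em>used</em>, i.e. <code>mock.patch('a.foo')</code>. A minimal in-memory reproduction (building the modules with <code>types.ModuleType</code> instead of separate files is my shortcut):</p>

```python
import sys
import types
from unittest import mock

# Module b with the real foo.
b = types.ModuleType("b")
def _foo():
    return "real"
b.foo = _foo
sys.modules["b"] = b

# Module a does `from b import foo`, so it holds its own reference a.foo.
a = types.ModuleType("a")
sys.modules["a"] = a
exec("from b import foo\ndef bar():\n    return foo()", a.__dict__)

# Patch the name bar actually looks up: a.foo, not b.foo.
with mock.patch("a.foo", return_value="mocked") as mock_foo:
    patched_result = a.bar()
mock_foo.assert_called_once()
unpatched_result = a.bar()
```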
|
<python><unit-testing>
|
2024-08-13 21:31:59
| 1
| 15,721
|
scdmb
|
78,868,296
| 12,820,205
|
Icons not found by briefcase/android-emulator
|
<p>I am building an app for android and iOS using beeware. When running the android emulator through briefcase, my icons are not included. I get the warnings:</p>
<pre><code>I/python.stdout: WARNING: Can't find icon view_white; falling back to default icon
I/python.stdout: WARNING: Can't find icon add_white; falling back to default icon
I/python.stdout: WARNING: Can't find icon info_white; falling back to default icon
</code></pre>
<p>While continuously updating, building and running the emulator, I have done the following to address the problem:</p>
<ol>
<li><p>Double checked the icon references in the toga code (correct case and no file extensions etc.)</p>
</li>
<li><p>Made it so that the icons are included in the <strong>APK</strong>'s <strong>drawable</strong> folder by the system when completing the android build</p>
</li>
<li><p>Inspected the <strong>adb logcat</strong> (to no avail)</p>
</li>
</ol>
<p>How can I make the android emulator find the icons?</p>
|
<python><android-emulator><beeware>
|
2024-08-13 20:58:06
| 1
| 1,994
|
rjen
|
78,868,163
| 5,013,066
|
Does Poetry for Python use a nonstandard pyproject.toml? How?
|
<p>I am considering introducing my organization to Poetry for Python, and I came across this claim:</p>
<blockquote>
<p>Avoid using the Poetry tool for new projects. Poetry uses non-standard implementations of key features. For example, it does not use the standard format in pyproject.toml files, which may cause compatibility issues with other tools.</p>
</blockquote>
<p>--<a href="https://www.stuartellis.name/articles/python-modern-practices/" rel="nofollow noreferrer">Modern Good Practices for Python Development</a></p>
<p>Is this true? I didn't immediately turn up anything searching. What does Poetry do that is nonstandard, in the <em>pyproject.toml</em> or anywhere else?</p>
|
<python><python-poetry><pyproject.toml>
|
2024-08-13 20:12:08
| 1
| 839
|
Eleanor Holley
|
78,868,024
| 2,986,153
|
How to know when to use map_elements, map_batches, lambda, and struct when using UDFs?
|
<pre><code>import polars as pl
import numpy as np
df_sim = pl.DataFrame({
"daily_n": [1000, 2000, 3000, 4000],
"prob": [.5, .5, .5, .6],
"size": 1
})
df_sim = df_sim.with_columns(
pl.struct("daily_n", "prob", "size")
.map_elements(lambda x:
np.random.binomial(n=x['daily_n'], p=x['prob'], size=x['size']))
.cast(pl.Int32)
.alias('events')
)
df_sim
</code></pre>
<pre><code>shape: (4, 4)
┌─────────┬──────┬──────┬────────┐
│ daily_n │ prob │ size │ events │
│ ---     │ ---  │ ---  │ ---    │
│ i64     │ f64  │ i32  │ i32    │
╞═════════╪══════╪══════╪════════╡
│ 1000    │ 0.5  │ 1    │ 491    │
│ 2000    │ 0.5  │ 1    │ 979    │
│ 3000    │ 0.5  │ 1    │ 1524   │
│ 4000    │ 0.6  │ 1    │ 2449   │
└─────────┴──────┴──────┴────────┘
</code></pre>
<p>However, the following code fails with the message "TypeError: float() argument must be a string or a number, not 'Expr'":</p>
<pre><code>df_sim.with_columns(
np.random.binomial(n=pl.col('daily_n'), p=pl.col('prob'), size=pl.col('size'))
.alias('events')
)
</code></pre>
<p>Why do some functions require use of <code>struct()</code>, <code>map_elements()</code> and <code>lambda</code>, while others do not?</p>
<p>In my case below I am able to simply refer to polars columns as function arguments by using <code>col()</code>.</p>
<pre><code>def local_double(x):
return(2*x)
df_ab = pl.DataFrame({
"group": ["A", "A", "B", "B"],
"converted": [True, False, True, True],
"revenue": [150, 200, 300, 500]
})
df_ab.with_columns(rev_2x = local_double(pl.col("revenue")))
</code></pre>
<pre><code>shape: (4, 4)
┌───────┬───────────┬─────────┬────────┐
│ group │ converted │ revenue │ rev_2x │
│ ---   │ ---       │ ---     │ ---    │
│ str   │ bool      │ i64     │ i64    │
╞═══════╪═══════════╪═════════╪════════╡
│ A     │ true      │ 150     │ 300    │
│ A     │ false     │ 200     │ 400    │
│ B     │ true      │ 300     │ 600    │
│ B     │ true      │ 500     │ 1000   │
└───────┴───────────┴─────────┴────────┘
</code></pre>
|
<python><dataframe><python-polars>
|
2024-08-13 19:29:16
| 2
| 3,836
|
Joe
|
78,867,953
| 1,592,427
|
Psycopg hangs on connection
|
<p>Running the latest (3.2.1) version of psycopg, I faced a problem where it hangs on getting a connection from the pool to the Postgres DB. I saw that the timeout there is 0.1, so it should raise an error, but instead it just gets stuck at that point. Is there any way to force it to raise an error if it fails to connect?</p>
<pre><code>(Python) File "/sqlalchemy/pool/base.py", line 674, in _init_
self.__connect()
(C) File "/python3.9/Include/internal/pycore_ceval.h", line 40, in function_code_fastcall (/libpython3.9.so.1.0)
(Python) File "/sqlalchemy/pool/base.py", line 896, in __connect
self.dbapi_connection = connection = pool._invoke_creator(self)
(Python) File "/sqlalchemy/engine/create.py", line 643, in connect
return dialect.connect(*cargs, **cparams)
(Python) File "/sqlalchemy/engine/default.py", line 621, in connect
return self.loaded_dbapi.connect(*cargs, **cparams)
(Python) File "/psycopg/connection.py", line 103, in connect
rv = waiting.wait_conn(gen, interval=_WAIT_INTERVAL)
(Python) File "/psycopg/waiting.py", line 98, in wait_conn
fileno, s = gen.send(ready)
(C) File "/python3.9/Include/internal/pycore_ceval.h", line 40, in gen_send_ex (/libpython3.9.so.1.0)
(Python) File "/psycopg/_connection_base.py", line 430, in _connect_gen
pgconn = yield from generators.connect(conninfo, timeout=timeout)
(C) File "???", line 0, in PQconnectPoll (/psycopg_binary.libs/libpq-0c1bcb9b.so.5.16)
(C) File "???", line 0, in gss_acquire_cred (/psycopg_binary.libs/libgssapi_krb5-497db0c6.so.2.2)
(C) File "???", line 0, in gss_acquire_cred_from (/psycopg_binary.libs/libgssapi_krb5-497db0c6.so.2.2)
(C) File "???", line 0, in gss_indicate_mechs_by_attrs (/psycopg_binary.libs/libgssapi_krb5-497db0c6.so.2.2)
(C) File "???", line 0, in gss_indicate_mechs (/psycopg_binary.libs/libgssapi_krb5-497db0c6.so.2.2)
(C) File "???", line 0, in updateMechList (/psycopg_binary.libs/libgssapi_krb5-497db0c6.so.2.2)
(C) File "???", line 0, in krb5int_open_plugin (/psycopg_binary.libs/libkrb5support-d0bcff84.so.0.1)
(C) File "/usr/src/debug/glibc-2.28-225.el8.x86_64/dlfcn/dlopen.c", line 87, in dlopen@@GLIBC_2.2.5 (/usr/lib64/libdl-2.28.so)
(C) File "/usr/src/debug/glibc-2.28-225.el8.x86_64/dlfcn/dlerror.c", line 142, in _dlerror_run (/usr/lib64/libdl-2.28.so)
(C) File "/usr/src/debug/glibc-2.28-225.el8_8.11.x86_64/elf/dl-error-skeleton.c", line 227, in _dl_catch_error (/usr/lib64/libc-2.28.so)
(C) File "/usr/src/debug/glibc-2.28-225.el8_8.11.x86_64/elf/dl-error-skeleton.c", line 208, in _dl_catch_exception (/usr/lib64/libc-2.28.so)
(C) File "/usr/src/debug/glibc-2.28-225.el8.x86_64/dlfcn/dlopen.c", line 66, in dlopen_doit (/usr/lib64/libdl-2.28.so)
(C) File "/usr/src/debug/glibc-2.28-225.el8_8.11.x86_64/elf/dl-open.c", line 927, in _dl_open (/usr/lib64/ld-2.28.so)
(C) File "/usr/src/debug/glibc-2.28-225.el8_8.11.x86_64/elf/dl-close.c", line 138, in _dl_close_worker (/usr/lib64/ld-2.28.so)
(C) File "/usr/src/debug/glibc-2.28-225.el8_8.11.x86_64/elf/dl-close.c", line 508, in _dl_close_worker (inlined) (/usr/lib64/ld-2.28.so)
(C) File "/usr/src/debug/glibc-2.28-225.el8.x86_64/nptl/allocatestack.c", line 1243, in __wait_lookup_done (/usr/lib64/libpthread-2.28.so)
(C) File "../sysdeps/nptl/futex-internal.h", line 135, in futex_wait_simple (inlined) (/usr/lib64/libpthread-2.28.so)
(C) File "../sysdeps/unix/sysv/linux/futex-internal.h", line 61, in futex_wait (inlined) (/usr/lib64/libpthread-2.28.so)
</code></pre>
|
<python><postgresql><sqlalchemy><psycopg2><psycopg3>
|
2024-08-13 19:05:41
| 0
| 414
|
Andrew
|
78,867,805
| 13,562,186
|
Struggling to get mathematical model to work
|
<p>I am trying to mimic a mathematical model based on the following model documentation.</p>
<p>4.1.1.4 Exposure to Vapour: Evaporation</p>
<p><a href="https://www.rivm.nl/bibliotheek/rapporten/2017-0197.pdf" rel="nofollow noreferrer">https://www.rivm.nl/bibliotheek/rapporten/2017-0197.pdf</a></p>
<p><a href="https://i.sstatic.net/82Vt6geT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82Vt6geT.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/jT7iZnFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jT7iZnFd.png" alt="enter image description here" /></a></p>
<p>Python script:</p>
<pre><code>import math
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
class ExposureToVapourEvaporation:
def __init__(self):
# Input parameters
self.frequency = 197 # per year
self.exposure_duration = 0.75 # minute
self.product_amount = 500 # g (amount of diluted product applied on a surface)
self.weight_fraction_substance = 0.2 # fraction in the product
self.room_volume = 1 # mΒ³
self.ventilation_rate = 0.5 # per hour
self.inhalation_rate = 22.9 / 1000 # mΒ³/min (converted from L/min)
self.vapour_pressure = 0.0106 # Pa
self.molecular_weight = 46.1 / 1000 # kg/mol (converted from g/mol)
self.release_area = 0.002 # mΒ²
self.release_duration = 0.3 # minute
self.application_temperature = 20 + 273.15 # K (converted from Β°C)
self.mass_transfer_coefficient = 10 / 3600 # m/s (converted from m/h)
self.body_weight = 9.8 # kg
# Optional parameters
self.is_product_used_in_dilution = True
self.dilution = 1 # times (only used if is_product_used_in_dilution is True)
self.molecular_weight_matrix = 22 / 1000 # kg/mol (converted from g/mol)
self.is_pure_substance = True
# Derived parameters
self.is_constant_surface_area = True
self.weight_fraction_solution = self.calculate_weight_fraction_solution()
def calculate_weight_fraction_solution(self):
if self.is_product_used_in_dilution:
return self.weight_fraction_substance / self.dilution
return self.weight_fraction_substance
def calculate_equilibrium_vapour_pressure(self):
return self.vapour_pressure
def evaporation_ode(self, y, t):
A_air, A_prod = y
K = self.mass_transfer_coefficient
P_eq = self.calculate_equilibrium_vapour_pressure()
P_air = A_air * 8.314 * self.application_temperature / (self.molecular_weight * self.room_volume)
if t < self.release_duration * 60: # During release
dA_air_dt = K * self.release_area * (self.molecular_weight / (8.314 * self.application_temperature)) * (P_eq - P_air) - (self.ventilation_rate / 3600) * self.room_volume * A_air
dA_prod_dt = -dA_air_dt
elif t == self.release_duration * 60: # At the exact end of release
dA_air_dt = 100 * K * self.release_area * (self.molecular_weight / (8.314 * self.application_temperature)) * (P_eq - P_air) - (self.ventilation_rate / 3600) * self.room_volume * A_air
else: # After release
dA_air_dt = -(self.ventilation_rate / 3600) * self.room_volume * A_air
dA_prod_dt = 0
if A_prod <= 0:
dA_prod_dt = 0
dA_air_dt = -(self.ventilation_rate / 3600) * self.room_volume * A_air
return [dA_air_dt, dA_prod_dt]
def solve_evaporation(self):
t = np.linspace(0, self.exposure_duration * 60, 1000) # seconds
t = np.sort(np.unique(np.append(t, self.release_duration * 60))) # Ensure we have a point at the end of release
y0 = [0, (self.product_amount / 1000) * self.weight_fraction_solution] # kg
solution = odeint(self.evaporation_ode, y0, t)
return t, solution
def calculate_metrics(self):
t, solution = self.solve_evaporation()
A_air = solution[:, 0]
concentrations = A_air / self.room_volume * 1e6 # Convert to mg/mΒ³
mean_concentration = np.mean(concentrations)
peak_concentration = np.max(concentrations)
# Calculate TWA 15 min
twa_15_min = mean_concentration if self.exposure_duration <= 15 else np.mean(concentrations[-int(15*60/self.exposure_duration):])
# Calculate daily and yearly averages
daily_average = mean_concentration * (self.exposure_duration / 1440)
yearly_average = daily_average * (self.frequency / 365)
# Calculate doses
event_dose = (mean_concentration * self.inhalation_rate * self.exposure_duration) / self.body_weight
day_dose = event_dose # Assuming one event per day
metrics = {
"mean_event_concentration": mean_concentration,
"peak_concentration_twa_15_min": twa_15_min,
"mean_concentration_day": daily_average,
"year_average_concentration": yearly_average,
"external_event_dose": event_dose,
"external_day_dose": day_dose
}
return metrics, t, concentrations
def plot_air_concentration(self, t, concentrations):
plt.figure(figsize=(10, 6))
plt.plot(t / 60, concentrations) # Convert time to minutes
plt.axvline(x=self.release_duration, color='r', linestyle='--', label='End of Release')
plt.axvline(x=self.exposure_duration, color='g', linestyle='--', label='End of Exposure')
plt.title('Air Concentration Over Time')
plt.xlabel('Time (minutes)')
plt.ylabel('Concentration (mg/mΒ³)')
plt.legend()
plt.grid(True)
plt.show()
# Create an instance of the model and calculate metrics
model = ExposureToVapourEvaporation()
metrics, t, concentrations = model.calculate_metrics()
print("Calculated Results:")
for key, value in metrics.items():
print(f"{key}: {value:.2e} mg/mΒ³" if "concentration" in key else f"{key}: {value:.2e} mg/kg bw")
# Plot the air concentration over time
model.plot_air_concentration(t, concentrations)
</code></pre>
<p>The expected result is:</p>
<p><a href="https://i.sstatic.net/51bkzKyH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/51bkzKyH.png" alt="enter image description here" /></a></p>
<p>Current:</p>
<p><a href="https://i.sstatic.net/TgfciMJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TgfciMJj.png" alt="enter image description here" /></a></p>
<p>I have been trying to get the maths to work, but the output just doesn't look right, and I wonder if the documentation might be wrong.</p>
|
<python><math><differential-equations>
|
2024-08-13 18:16:23
| 1
| 927
|
Nick
|
78,867,587
| 835,730
|
Parse WhatsApp message read status
|
<p>My question is more about HTML layout and parsing dynamic content.</p>
<p>My task: parse the contacts who read a particular message of mine in a group.
I looked at the DOM structure of the DIV block that holds those contacts,
but it is dynamic and only lists 20 items at a time, so the list is updated as I scroll down.</p>
<p>So what should I do to read and parse all contacts (up to 1,000)?</p>
<p>I would appreciate links/tutorials on how people do this type of parsing nowadays.</p>
<p>P.S. My WA group is full of contacts, so I wanted to write an app to see who has not been active for a long time, so I can remove them and free slots for new people.</p>
<p>Thanks!</p>
|
<python><c#><html><selenium-webdriver><html-parsing>
|
2024-08-13 17:22:12
| 1
| 393
|
Jeffrey Rasmussen
|
78,867,241
| 19,369,310
|
Applying function that takes a list as input and a list as output to a pandas dataframe
|
<p>I have defined the following function:</p>
<pre><code>def my_function(inputList):
intermediateList = []
outputList = []
S = 0
for x in inputList:
S += x
y = 6*x
intermediateList.append(y)
for elt in intermediateList:
z = S / elt
outputList.append(z)
return outputList
</code></pre>
<p>In words, <code>my_function</code> outputs the 'sum of the list' divided by (6 * element)
and I have a Pandas dataframe that looks like</p>
<pre><code>data = {
'Class_ID': [1,1,1,1,1,1,1,3,3,3,3],
'Date': ['1/1/2023','1/1/2023','1/1/2023','1/1/2023','1/1/2023','1/1/2023','1/1/2023','17/4/2022','17/4/2022','17/4/2022','17/4/2022'],
'Student_ID': [3,4,6,8,9,1,10,5,2,3,4],
'feature': [1,3,-2,8,4,5,6,0.2,0.1,0.55,0.15]
}
df = pd.DataFrame(data)
</code></pre>
<p>And I would like to apply <code>my_function</code> to the <code>feature</code> column grouped by <code>Class_ID</code> and here is my code:</p>
<pre><code>df['New_feature'] = df.groupby('Class_ID')['feature'].apply(my_function)
</code></pre>
<p>and the desired output looks like</p>
<pre><code> Class_ID Date Student_ID feature New_feature
0 1 1/1/2023 3 1.00 4.166667
1 1 1/1/2023 4 3.00 1.388889
2 1 1/1/2023 6 -2.00 -2.083333
3 1 1/1/2023 8 8.00 0.520833
4 1 1/1/2023 9 4.00 1.041667
5 1 1/1/2023 1 5.00 0.833333
6 1 1/1/2023 10 6.00 0.694444
7 3 17/4/2022 5 0.20 0.833333
8 3 17/4/2022 2 0.10 1.666667
9 3 17/4/2022 3 0.55 0.303030
10 3 17/4/2022 4 0.15 1.111111
</code></pre>
<p>However, it gives the following dataframe instead:</p>
<pre><code> Class_ID Date Student_ID feature New_feature
0 1 1/1/2023 3 1.00 NaN
1 1 1/1/2023 4 3.00 [4.166666666666667, 1.3888888888888888, -2.083...
2 1 1/1/2023 6 -2.00 NaN
3 1 1/1/2023 8 8.00 [0.8333333333333333, 1.6666666666666665, 0.303...
4 1 1/1/2023 9 4.00 NaN
5 1 1/1/2023 1 5.00 NaN
6 1 1/1/2023 10 6.00 NaN
7 3 17/4/2022 5 0.20 NaN
8 3 17/4/2022 2 0.10 NaN
9 3 17/4/2022 3 0.55 NaN
10 3 17/4/2022 4 0.15 NaN
</code></pre>
<p>What did I do wrong and how can I fix it? Thank you.</p>
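<p>For reference, a minimal sketch of the <code>transform</code>-based variant (assuming <code>transform</code> is acceptable here), which aligns each group's returned list back to the original rows instead of leaving one list per group:</p>

```python
import pandas as pd

def my_function(input_list):
    # sum of the group divided by (6 * element), one value per row
    s = sum(input_list)
    return [s / (6 * x) for x in input_list]

df = pd.DataFrame({
    'Class_ID': [1, 1, 1, 1, 1, 1, 1, 3, 3, 3, 3],
    'Date': ['1/1/2023'] * 7 + ['17/4/2022'] * 4,
    'Student_ID': [3, 4, 6, 8, 9, 1, 10, 5, 2, 3, 4],
    'feature': [1, 3, -2, 8, 4, 5, 6, 0.2, 0.1, 0.55, 0.15],
})

# transform() expects a result the same length as each group,
# so the per-group list is broadcast back onto the original index
df['New_feature'] = df.groupby('Class_ID')['feature'].transform(my_function)
print(df)
```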
|
<python><pandas><dataframe><group-by><apply>
|
2024-08-13 15:54:52
| 1
| 449
|
Apook
|
78,867,160
| 3,710,004
|
PyPDF2 stalling while parsing pdf for unknown reason
|
<p>I have a script in which I go through and parse a large collection of PDFs. I noticed that when I tried to parse a particular PDF, the script just stalls forever. But it doesn't throw up an error and as far as I can tell, the PDF is not corrupted. I can't tell what the issue is, but I can see that it happens on page 4. Is there a way to find out what is causing this issue, or to just skip the PDF if it is taking longer than one minute to parse?</p>
<p>For reference, here is the PDF: <a href="https://go.boarddocs.com/fl/palmbeach/Board.nsf/files/CTWGW9459021/$file/22C-001R_2ND%20RENEWAL%20CONTRACT_TERRACON.pdf" rel="nofollow noreferrer">https://go.boarddocs.com/fl/palmbeach/Board.nsf/files/CTWGW9459021/$file/22C-001R_2ND%20RENEWAL%20CONTRACT_TERRACON.pdf</a></p>
<pre><code>from PyPDF2 import PdfReader
doc = "somefile.pdf"
doc_text = ""
try:
print(doc)
reader = PdfReader(doc)
for i in range(len(reader.pages)):
print(i)
page = reader.pages[i]
text = page.extract_text()
doc_text += text
except Exception as e:
print(f"The file failed due to error {e}:")
doc_text = ""
</code></pre>
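<p>A generic per-call timeout wrapper is one way to skip such files; the sketch below relies on <code>signal.alarm</code>, so it is Unix-only and must run on the main thread (the PyPDF2 call would be dropped in where <code>func</code> is, which is an assumption about how it would be wired up):</p>

```python
import signal

class PageTimeout(Exception):
    """Raised when a wrapped call exceeds its time budget."""

def with_timeout(seconds, func, *args):
    """Run func(*args); raise PageTimeout if it takes longer than `seconds`."""
    def _handler(signum, frame):
        raise PageTimeout()
    old = signal.signal(signal.SIGALRM, _handler)
    signal.alarm(seconds)
    try:
        return func(*args)
    finally:
        signal.alarm(0)                      # cancel any pending alarm
        signal.signal(signal.SIGALRM, old)   # restore previous handler

# usage sketch: text = with_timeout(60, page.extract_text)
```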
|
<python><pypdf>
|
2024-08-13 15:38:17
| 1
| 686
|
user3710004
|
78,867,121
| 2,287,458
|
Fill several polars columns with a constant value
|
<p>I am working with the following code...</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
'region': ['GB', 'FR', 'US'],
'qty': [3, 6, -8],
'price': [100, 102, 95],
'tenor': ['1Y', '6M', '2Y'],
})
cols_to_set = ['price', 'tenor']
fill_val = '-'
df.with_columns([pl.lit(fill_val).alias(c) for c in cols_to_set])
</code></pre>
<p>...with the following output.</p>
<pre><code>shape: (3, 4)
ββββββββββ¬ββββββ¬ββββββββ¬ββββββββ
β region β qty β price β tenor β
β --- β --- β --- β --- β
β str β i64 β str β str β
ββββββββββͺββββββͺββββββββͺββββββββ‘
β GB β 3 β - β - β
β FR β 6 β - β - β
β US β -8 β - β - β
ββββββββββ΄ββββββ΄ββββββββ΄ββββββββ
</code></pre>
<p>Instead of using a list of <code>pl.lit</code> expressions, I thought I could use a single <code>pl.lit(fill_val).alias(cols_to_set)</code>. However, this crashes with an error</p>
<pre class="lang-py prettyprint-override"><code>TypeError: argument 'name': 'list' object cannot be converted to 'PyString'
</code></pre>
<p>Is there a way to simplify the above and set all columns in <code>cols_to_set</code> to a specific constant value <code>fill_val</code>?</p>
|
<python><python-polars>
|
2024-08-13 15:30:39
| 2
| 3,591
|
Phil-ZXX
|
78,866,838
| 1,812,732
|
Create filter based on specific column
|
<p>How do I filter a 2D array based on the values in a column?</p>
<p>I tried <code>arr[arr[2] > 5]</code> but it fails:</p>
<pre><code>>>> arr = np.arange(12).reshape((4,3))
>>> print(arr)
[[ 0 1 2]
[ 3 4 5]
[ 6 7 8]
[ 9 10 11]]
>>> print(arr[arr[2] > 5])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: boolean index did not match indexed array along dimension 0; dimension is 4 but corresponding boolean dimension is 3
</code></pre>
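<p>For comparison, a sketch of the column-based mask: <code>arr[2]</code> selects the third <em>row</em>, whereas <code>arr[:, 2]</code> selects the third <em>column</em>, which is what the row filter needs:</p>

```python
import numpy as np

arr = np.arange(12).reshape((4, 3))
# build the boolean mask from the third COLUMN, one entry per row
mask = arr[:, 2] > 5
print(arr[mask])  # rows whose third-column value exceeds 5
```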
|
<python><numpy>
|
2024-08-13 14:24:34
| 1
| 11,643
|
John Henckel
|
78,866,648
| 6,439,229
|
How to get 'underlying' key in shift combinations
|
<p>In order to implement customisable hotkeys in an application I'm looking for a way to capture and display key presses with modifiers.</p>
<p>What I have now is this customised <code>QLineEdit</code>:</p>
<pre><code>from PyQt6.QtWidgets import QApplication, QLineEdit
from PyQt6.QtGui import QKeyEvent, QKeySequence
from PyQt6.QtCore import Qt, QKeyCombination
class HKLineEdit(QLineEdit):
modifiers = {16777248, 16777249, 16777250, 16777251}
def keyPressEvent(self, event: QKeyEvent):
if event.isAutoRepeat():
return
if event.key() in self.modifiers:
return
seq = QKeySequence(event.keyCombination())
self.setText(seq.toString())
# alternative with native virtual key:
mods = Qt.KeyboardModifier(event.modifiers())
nv_key = Qt.Key(event.nativeVirtualKey())
seq = QKeySequence(QKeyCombination(mods, nv_key))
self.setText(seq.toString())
app = QApplication([])
window = HKLineEdit()
window.show()
app.exec()
</code></pre>
<p>This works fine but it's not completely perfect for Shift-combinations.
For example <code>Shift+2</code> is displayed as <code>Shift+@</code></p>
<p>Is there a (cross-platform, cross-locale) way to display Shift-combinations with the 'base key' instead of the actual key?</p>
<p>Because it sounded appropriate I've tried to use <code>event.nativeVirtualKey()</code> and that gives the desired result for alphanumeric keys, but for other keys the results are very unexpected. For example <code>,</code> gives <code>ΒΌ</code>, and <code>'</code> gives <code>Γ</code> with my standard US keyboard on Win10.<br />
Please note that beyond the descriptive name I have no idea what a 'native virtual key' actually is and I couldn't find an explanation.</p>
|
<python><pyqt6><qkeyevent>
|
2024-08-13 13:41:48
| 0
| 1,016
|
mahkitah
|
78,866,546
| 436,315
|
PyInstaller causing a crash, PyCharm works a charm
|
<p>I have someone else's Python code that I am upgrading from 3.7 to 3.12 before changing database calls to API calls. The Python code is compiled using PyInstaller into an executable to be distributed as required.</p>
<p>If I run the PyInstaller command, it compiles fine, creating an executable and an "_internal" folder containing all of the files required (although the previous version had more files present in the same location as the executable). When I double-click the executable, I get the runtime error shown below:</p>
<pre><code>Traceback (most recent call last):
File "D:\SourceCode\ODL\ODL_App.py", line 10, in <module>
import core.core_utils
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "core\__init__.py", line 5, in <module>
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "core\ODL_Method.py", line 4, in <module>
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "als_microservices\__init__.py", line 2, in <module>
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "als_microservices\aqc_stl.py", line 11, in <module>
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "scipy\stats\__init__.py", line 610, in <module>
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "scipy\stats\_stats_py.py", line 37, in <module>
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "scipy\sparse\__init__.py", line 293, in <module>
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "scipy\sparse\_base.py", line 5, in <module>
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "scipy\sparse\_sputils.py", line 10, in <module>
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "scipy\_lib\_util.py", line 18, in <module>
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "scipy\_lib\_array_api.py", line 21, in <module>
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "scipy\_lib\array_api_compat\numpy\__init__.py", line 1, in <module>
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "numpy\f2py\__init__.py", line 19, in <module>
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "numpy\f2py\f2py2e.py", line 23, in <module>
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "numpy\f2py\crackfortran.py", line 159, in <module>
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "numpy\f2py\auxfuncs.py", line 19, in <module>
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "numpy\f2py\cfuncs.py", line 19, in <module>
AttributeError: 'NoneType' object has no attribute 'write'
</code></pre>
<p>Obviously I can see that the last line contains the crux of the error, but I do not know to which module or line within said module that applies to, as it references a number of modules.</p>
<p>Now when running through PyCharm, I can get the application running, although it times out at a database function, so I cannot see where the error occurs by stepping through the code.</p>
<p>If I enter in the command "python --version" within either the terminal of PyCharm or the Window by which I am instigating the PyInstaller, I get the same Python version. If I import sys and then do a print(sys.version) within the PyCharm Python window, I also get the same version.</p>
<p>Now this used to work when everything was running on Python 3.7, so I know it is down to either a mismatch of versions between Python and modules, or something that has been deprecated or a function that has changed.</p>
<p>The big question is: how do I narrow this down in order to resolve the issue?</p>
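<p>One narrowing step worth trying: the final <code>AttributeError: 'NoneType' object has no attribute 'write'</code> matches the pattern of <code>sys.stdout</code> being <code>None</code>, which typically happens in windowed/no-console PyInstaller builds when an imported library writes to stdout at import time. A sketch of a stream stub placed at the very top of the entry script (this is purely an assumption about the cause):</p>

```python
import io
import sys

def ensure_std_streams():
    # In a windowed PyInstaller build, sys.stdout / sys.stderr can be None,
    # which breaks libraries (e.g. numpy.f2py) that write to them on import
    if sys.stdout is None:
        sys.stdout = io.StringIO()
    if sys.stderr is None:
        sys.stderr = io.StringIO()

ensure_std_streams()
# ... the rest of the application's imports would go below this point
```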
|
<python><pycharm><pyinstaller><executable>
|
2024-08-13 13:20:48
| 1
| 1,148
|
Jim Grant
|
78,866,437
| 1,975,199
|
"unpacking" binary into floating point
|
<p>I've got a data set that is in binary, and trying to convert it to decimal/float.</p>
<p>For now, I am just using Python to build up my understanding.
Typically, I would achieve this in one of two ways: either using Python's struct module, or bit-shifting and OR'ing.</p>
<p>For instance, I have a two byte array of [67,1] (little endian).</p>
<p>I can do this:</p>
<p><code>print(struct.unpack('<H', bytes(byte_array))[0])</code></p>
<p>or this:</p>
<p><code>byte_array[1] << 8 | byte_array[0]</code></p>
<p>Both give me the same value, 323. Which is correct.</p>
<p>Now, I also have a vendor provided application that reads the same raw data file that I am reading, but they represent their value differently:</p>
<p><a href="https://i.sstatic.net/5337nlJH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5337nlJH.png" alt="enter image description here" /></a></p>
<p>What are they doing to take a 16-bit value and make a float out of it with that kind of precision?</p>
<p>Likewise, the second value shown is -4.56, but when I do the above code I just get -5.</p>
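<p>One scheme consistent with these symptoms is fixed-point scaling: the vendor may read the 16 bits as a <em>signed</em> integer and divide by a constant scale factor. The divisor of 100 below is purely an assumption for illustration (a signed read would also explain seeing -4.56 where an unscaled interpretation gives a negative whole number):</p>

```python
import struct

byte_array = [67, 1]  # little-endian pair from the raw file
# '<h' = little-endian SIGNED 16-bit, vs '<H' which is unsigned
raw = struct.unpack('<h', bytes(byte_array))[0]
scaled = raw / 100  # hypothetical fixed-point scale factor
print(raw, scaled)
```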
|
<python><bit-manipulation>
|
2024-08-13 12:57:36
| 1
| 432
|
jgauthier
|
78,866,416
| 10,527,135
|
Serving Static Files with Nginx and Django in Docker
|
<p>Despite seeing many similar issues in other threads, I've been unable to configure Nginx to serve static files from my Django project.</p>
<p>Here are my two static variables in my <code>settings.py</code>:</p>
<pre><code>STATIC_URL = '/static/'
STATIC_ROOT='/opt/django/portfolio/collectstatic'
</code></pre>
<p>Here is my dockerfile to build project image:</p>
<pre><code>FROM python:3.11-slim
WORKDIR opt/django/
COPY pyproject.toml .
RUN python -m pip install .
COPY ./portfolio/ ./portfolio/
WORKDIR portfolio/
RUN python manage.py collectstatic --noinput
RUN python manage.py makemigrations
RUN python manage.py migrate
EXPOSE 8000
CMD ["gunicorn", "portfolio.wsgi:application", "--bind", "0.0.0.0:8000"]
</code></pre>
<p>Here is the <code>docker-compose.yml</code>:</p>
<pre><code>services:
web:
build:
context: .
dockerfile: ./docker/Dockerfile_django
container_name: webserver
volumes:
- static_data:/opt/django/portfolio/collectstatic
expose:
- "8000"
ports:
- "8000:8000"
depends_on:
- db
networks:
- docker_network
db:
image: postgres:15
container_name: db_postgres
expose:
- "5432"
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
environment:
POSTGRES_DB: postgres
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
networks:
- docker_network
nginx:
image: nginx:latest
container_name: nginx
ports:
- "80:80"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf
- static_data:/opt/django/portfolio/collectstatic
depends_on:
- web
networks:
- docker_network
volumes:
postgres_data:
static_data:
networks:
docker_network:
driver: bridge
name: docker_network
</code></pre>
<p>And finally, here is my <code>nginx.conf</code>:</p>
<pre><code>events {}
http {
server {
listen 80;
server_name localhost;
location /static {
alias /opt/django/portfolio/collectstatic;
autoindex on;
}
# skip favicon.ico
location /favicon.ico {
access_log off;
return 204;
}
location / {
proxy_pass http://web:8000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
}
</code></pre>
<p>I see no errors in my logs, and nginx is reporting 200's when I make a GET to a url in the project. When I visit <code>http://127.0.0.1/static/website/style.css</code> I can see the css (and js files). I've cleared my cache. When I go to <code>/opt/django/portfolio/collectstatic</code> in my web container, I can see the static files. The structure of my project is like this:</p>
<pre><code> - my_project/
- nginx/
- nginx.conf
- my_project/
- my_project/
- settings.py
...
- web/
- js/
- some_js.js
- some_other_js.js
- website/
- style.css
admin.py
apps.py
forms.py
models.py
tests.py
...
</code></pre>
<p>What am I missing? I appreciate any direction / feedback on potential misconfigurations I may be overlooking.</p>
|
<python><django><docker><nginx>
|
2024-08-13 12:54:22
| 2
| 349
|
fjjones88
|
78,866,188
| 9,438,759
|
CTypes bitfield sets whole byte
|
<p>I have the following structure:</p>
<pre><code>class KeyboardModifiers(Structure):
_fields_ = [
('left_control', c_bool, 1),
('right_control', c_bool, 1),
('left_shift', c_bool, 1),
('right_shift', c_bool, 1),
('left_alt', c_bool, 1),
('right_alt', c_bool, 1),
('left_meta', c_bool, 1),
('right_meta', c_bool, 1),
('left_super', c_bool, 1),
('right_super', c_bool, 1),
('left_hyper', c_bool, 1),
('right_hyper', c_bool, 1),
]
</code></pre>
<p>It represents a structure returned by a C function, the fields are properly set and their values returned, issue comes when setting a field. For example, if I were to do something like:</p>
<pre><code>my_keyboard_mods.left_shift = True
</code></pre>
<p>The first 8 fields would add be set to True, similarly with the next 8. What seems to happen is that it sets the value for the whole byte not respecting the bitfield. My question is:</p>
<ol>
<li>Am I doing something wrong? If so, what?</li>
<li>Is this a bug with ctypes? If so, is there a workaround?</li>
</ol>
<p>Thanks.</p>
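<p>A workaround sketch that sidesteps the <code>c_bool</code> bit-field behavior by declaring the flags on an integer storage type instead; the single 16-bit unit below is an assumption about the intended C layout:</p>

```python
from ctypes import Structure, c_uint16

class KeyboardModifiers(Structure):
    # twelve 1-bit flags packed into 16-bit integer storage units
    _fields_ = [(name, c_uint16, 1) for name in (
        'left_control', 'right_control',
        'left_shift', 'right_shift',
        'left_alt', 'right_alt',
        'left_meta', 'right_meta',
        'left_super', 'right_super',
        'left_hyper', 'right_hyper',
    )]

mods = KeyboardModifiers()
mods.left_shift = True  # sets only this bit, not the whole byte
print(mods.left_shift, mods.left_control)
```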
|
<python><ctypes><bit-fields>
|
2024-08-13 12:05:07
| 2
| 321
|
Slendi
|
78,865,964
| 6,400,277
|
RS485 communication between C++ and Python, message splitted and receive in multiple reception
|
<p><strong>Context</strong></p>
<p>I am working on a personal project on which I have to connect a Python script (3.7) to a QML application (Qt 5.2, C++17), all running on a Linux RHEL8 distribution.</p>
<p><strong>Hardware connections</strong></p>
<p>I have two computers with 2 USB ports, and 2 homemade cables with USB/RS485 converters in each sides.</p>
<p><strong>What works</strong></p>
<p>I can communicate from PC1 to PC2 with first cable, and in another direction with second cable.</p>
<p><strong>What is wrong</strong></p>
<p>The Python side receives data from C++, sometimes in one reception, sometimes in two. So I handle it by receiving the data one character at a time and waiting for the SOF and EOF characters before processing the received data. Still, it seems wrong to me that data sometimes arrives split across multiple receptions.</p>
<p>The same thing happens on the C++ side when it reads the data sent by Python: sometimes the data is complete on the first reception, sometimes two receptions are needed.</p>
<p>Example: a message of 15 bytes can be received in one go, or I may first receive 4 characters and then the 11 following ones.</p>
<p><strong>Wished RS485 configuration</strong></p>
<ul>
<li>1 start bit set to '0',</li>
<li>8 bits constituting the byte,</li>
<li>1 even parity bit,</li>
<li>1 stop bit set to '1',</li>
<li>Idle state of the data lines between frames set to β1β,</li>
<li>Baudrate: 115200 baud.</li>
</ul>
<p><strong>Question</strong></p>
<p>Not being an expert in RS485 communication, I have read a lot of tutorials and documentation to understand how to configure both sides (C++ and Python). Can you tell me whether the split receptions are due to a wrong RS485 configuration? I don't know if the problem comes from the C++ or the Python side.</p>
<p><strong>Minimalist code</strong></p>
<p>In each side I have a thread for the reception, and another one to send data.</p>
<p>Python:</p>
<pre><code>import threading, struct
from .GenericMessage import crc_16
import serial
class SerialManager:
# Constructor
def __init__(self):
# Update parameter for the RS485 connection
self.serialPortName_read = "/dev/ttyUSB1"
self.serialPortName_send = "/dev/ttyUSB0"
self.serialPort_read = serial.Serial()
self.serialPort_send = serial.Serial()
# Create sockets and start listening thread
self.createRS485connection()
self.receiverThread = threading.Thread(target=self.listeningThread)
self.receiverThread.daemon = True
self.receiverThread.start()
def createRS485connection(self):
# Configure sending device
self.serialPort_send.port = self.serialPortName_send # Set device name
self.serialPort_send.baudrate = 115200 # Set baud rate at 115 200 baud
self.serialPort_send.bytesize = serial.EIGHTBITS # Set Number of data bits for a byte = 8
self.serialPort_send.parity = serial.PARITY_EVEN # Set parity bit to even
self.serialPort_send.stopbits = serial.STOPBITS_ONE # Set stop bit to 1
# Open sending device
self.serialPort_send.open()
# Configure receiving device
self.serialPort_read.port = self.serialPortName_read # Set device name
self.serialPort_read.baudrate = 115200 # Set baud rate at 115 200 baud
self.serialPort_read.bytesize = serial.EIGHTBITS # Set Number of data bits for a byte = 8
self.serialPort_read.parity = serial.PARITY_EVEN # Set parity bit to even
self.serialPort_read.stopbits = serial.STOPBITS_ONE # Set stop bit to 1
self.serialPort_read.timeout = 0.0001 # Set timeout to read to 0.0001 second
# Open reading device
self.serialPort_read.open()
# Reset all previous data received
self.serialPort_read.reset_input_buffer()
return
def sendMessage(self, message):
bytesSent = self.serialPort_send.write(message)
if bytesSent < 0:
print("Message not sending")
elif bytesSent != len(message):
print("Message not sent completely")
else:
return
def listeningThread(self):
# Initialize variables
dataReceived = b''
while self.isListening:
# Wait for data to read
if self.serialPort_read.in_waiting > 0:
# Read byte one by one
for i in range(self.serialPort_read.in_waiting):
dataReceived += self.serialPort_read.read(1)
# Function to manage character one by one and reconstruct the original message to deal with
</code></pre>
<p>C++</p>
<pre><code>#include "../inc/SerialManager.h"
SerialManager::SerialManager()
{
m_RS485_serialPortName_sending = "/dev/ttyUSB0";
m_RS485_serialPortName_reading = "/dev/ttyUSB1";
// Configure RS485 devices
configureRS485sendingSide();
configureRS485readingSide();
// Create a new thread for listening and one for sending
m_threadListeningServer = std::thread(&SerialManager::listenWhileLoop, this);
m_threadListeningServer.detach();
m_threadSendingServer = std::thread(&SerialManager::sendWhileLoop, this);
m_threadSendingServer.detach();
}
SerialManager::~SerialManager()
{
// Close the RS485 devices when finished
if (close(m_RS485_serialPort_reading) < 0)
qDebug() << "Reading device not closed properly: " << m_RS485_serialPortName_reading;
if (close(m_RS485_serialPort_sending) < 0)
qDebug() << "Reading device not closed properly: " << m_RS485_serialPortName_sending;
}
bool SerialManager::configureRS485readingSide()
{
// Open the serial port in read only mode
m_RS485_serialPort_reading = open(m_RS485_serialPortName_reading.toStdString().c_str(), O_RDONLY);
// Create new termios struct, we call it 'tty' for convention
struct termios tty;
// Read in existing settings, and handle any error
if(tcgetattr(m_RS485_serialPort_reading, &tty) != 0)
{
std::cout << "Error in reading side from tcgetattr: " + std::string(std::strerror(errno));
throw std::runtime_error("Error in reading side from tcgetattr: " + std::string(std::strerror(errno)));
}
// Set control modes
tty.c_cflag |= PARENB; // 1. Set parity bit, enabling parity
tty.c_cflag &= ~CSTOPB; // 2. Clear stop field, only one stop bit used in communication
tty.c_cflag &= ~CSIZE; // 3.a. Clear all bits that set the data size
tty.c_cflag |= CS8; // 3.b. 8 bits per byte
tty.c_cflag &= ~CRTSCTS; // 4. Disable RTS/CTS hardware flow control (most common)
tty.c_cflag |= CREAD | CLOCAL; // 5. Turn on READ & ignore ctrl lines (CLOCAL = 1)
// Set local modes
tty.c_lflag &= ~ICANON; // 1. Disable canonical mode
tty.c_lflag &= ~ECHO; // 2. Disable echo
tty.c_lflag &= ~ECHOE; // 3. Disable erasure
tty.c_lflag &= ~ECHONL; // 4. Disable new-line echo
tty.c_lflag &= ~ISIG; // 5. Disable interpretation of INTR, QUIT and SUSP characters
// Set input modes
tty.c_iflag &= ~(IXON | IXOFF | IXANY); // 1. Turn off software flow control
tty.c_iflag &= ~(IGNBRK|BRKINT|PARMRK|ISTRIP|INLCR|IGNCR|ICRNL); // 2. Disable any special handling of received bytes
// Set output modes
tty.c_oflag &= ~OPOST; // 1. Prevent special interpretation of output bytes (e.g. newline chars)
tty.c_oflag &= ~ONLCR; // 2. Prevent conversion of newline to carriage return/line feed
// Set read configuration: blocking read of any number of chars with a maximum timeout (given by VTIME)
tty.c_cc[VTIME] = 10; // Wait for up to 1s (10 deciseconds), returning as soon as any data is received
tty.c_cc[VMIN] = 0;
// Set in baud rate to be 115200 baud
cfsetispeed(&tty, B115200);
// Save tty settings, also checking for error
if (tcsetattr(m_RS485_serialPort_reading, TCSANOW, &tty) != 0)
{
std::cout << "Error in reading side from tcgetattr: " + std::string(std::strerror(errno));
throw std::runtime_error("Error in reading side from tcsetattr: " + std::string(std::strerror(errno)));
}
qDebug() << "Reading side correctly configured";
return true;
}
bool SerialManager::configureRS485sendingSide()
{
// Open the serial port in write only mode
m_RS485_serialPort_sending = open(m_RS485_serialPortName_sending.toStdString().c_str(), O_WRONLY);
// Create new termios struct, we call it 'tty' for convention
struct termios tty;
// Read in existing settings, and handle any error
if(tcgetattr(m_RS485_serialPort_sending, &tty) != 0)
{
std::cout << "Error in sending side from tcgetattr: " + std::string(std::strerror(errno));
throw std::runtime_error("Error in sending side from tcgetattr: " + std::string(std::strerror(errno)));
}
// Set control modes
tty.c_cflag |= PARENB; // 1. Set parity bit, enabling parity
tty.c_cflag &= ~CSTOPB; // 2. Clear stop field, only one stop bit used in communication
tty.c_cflag &= ~CSIZE; // 3.a. Clear all bits that set the data size
tty.c_cflag |= CS8; // 3.b. 8 bits per byte
// tty.c_cflag &= ~CRTSCTS; // 4. Disable RTS/CTS hardware flow control (most common)
tty.c_cflag |= CLOCAL; // 5. Do not turn on READ & ignore ctrl lines (CLOCAL = 1)
// Set local modes
tty.c_lflag &= ~ICANON; // 1. Disable canonical mode
tty.c_lflag &= ~ECHO; // 2. Disable echo
tty.c_lflag &= ~ECHOE; // 3. Disable erasure
tty.c_lflag &= ~ECHONL; // 4. Disable new-line echo
tty.c_lflag &= ~ISIG; // 5. Disable interpretation of INTR, QUIT and SUSP characters
// Set input modes
tty.c_iflag &= ~(IXON | IXOFF | IXANY); // 1. Turn off software flow control
tty.c_iflag &= ~(IGNBRK|BRKINT|PARMRK|ISTRIP|INLCR|IGNCR|ICRNL); // 2. Disable any special handling of received bytes
// Set output modes
tty.c_oflag &= ~OPOST; // 1. Prevent special interpretation of output bytes (e.g. newline chars)
tty.c_oflag &= ~ONLCR; // 2. Prevent conversion of newline to carriage return/line feed
// Set read configuration: blocking read of any number of chars with a maximum timeout (given by VTIME)
tty.c_cc[VTIME] = 10; // Wait for up to 1s (10 deciseconds), returning as soon as any data is received
tty.c_cc[VMIN] = 0;
// Set out baud rate to be 115200 baud
cfsetospeed(&tty, B115200);
// Save tty settings, also checking for error
if (tcsetattr(m_RS485_serialPort_sending, TCSANOW, &tty) != 0)
{
std::cout << "Error in sending side from tcgetattr: " + std::string(std::strerror(errno));
throw std::runtime_error("Error in sending side from tcsetattr: " + std::string(std::strerror(errno)));
}
qDebug() << "Writing side correctly configured";
return true;
}
bool SerialManager::sendResponse(const char* msg, int msgSize)
{
int bytesSent = write(m_RS485_serialPort_sending, msg, msgSize);
if (bytesSent < 0)
{
std::cout << "Error sending message: " + std::string(std::strerror(errno));
throw std::runtime_error("Error sending message: " + std::string(std::strerror(errno)));
}
else if (static_cast<size_t>(bytesSent) != msgSize)
{
std::cout << "Incomplete message sent: " + std::to_string(bytesSent) + " bytes sent instead of " + std::to_string(msgSize) + " bytes";
throw std::runtime_error("Incomplete message sent: " + std::to_string(bytesSent) + " bytes sent instead of " + std::to_string(msgSize) + " bytes");
}
return true;
}
void SerialManager::listenWhileLoop()
{
pthread_setname_np(pthread_self(), "listenWhileLoop");
// First wait the end of the initialisation
while (!m_swIsInitialized)
{
// Sleep for 100 ms so we don't hog the processor
std::this_thread::sleep_for(std::chrono::milliseconds(100));
}
char buf[1024];
while (m_softwareIsRunning)
{
// Wait the software to be initialized
if (m_swIsInitialized)
{
// Wait until receive a message from a socket
int sizeDataReceived = read(m_RS485_serialPort_reading, &buf, sizeof(buf));
if (sizeDataReceived > 0)
{
switchTypeMessageReceived(buf, sizeDataReceived);
}
//clean buffer
memset(&buf[0], 0, sizeof(buf));
}
}
}
void SerialManager::switchTypeMessageReceived(char *buffer, int sizeDataReceived)
{
// Manage data received
}
</code></pre>
|
<python><c++><rs485>
|
2024-08-13 11:21:51
| 1
| 635
|
Mathieu Gauquelin
|
78,865,932
| 390,224
|
Get rotation angle from rotation vector over axis Y/Z
|
<p>For some of you this could be a really easy question, but I didn't find a solution or was too dumb to understand some of the math papers.</p>
<p>I am currently working in Python but that shouldn't matter.
What I want is to get the rotation angle of the unit vector representing the rotation, over a specific axis.</p>
<p><strong>Example:</strong></p>
<p>I have the Vector <code>(1.0, 0, 0)</code> and I know this represents the <code>rotation over Y-axis</code>.
Or the Vector <code>(0, 0, -1)</code> which is the representation for a <code>rotation over Z-axis</code>.</p>
<p>How do I get the angles represented by these vectors?</p>
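If these vectors follow the usual axis-angle ("rotation vector") convention, the vector's direction is the rotation axis and its Euclidean norm is the angle in radians. A minimal sketch under that assumption (scipy's `Rotation.from_rotvec` uses the same convention):

```python
import numpy as np

def rotvec_to_axis_angle(rotvec):
    """Split a rotation vector into (unit axis, angle in radians)."""
    rotvec = np.asarray(rotvec, dtype=float)
    angle = np.linalg.norm(rotvec)  # magnitude of the vector is the angle
    if angle == 0.0:
        return np.zeros(3), 0.0     # no rotation: axis is undefined
    return rotvec / angle, angle

axis, angle = rotvec_to_axis_angle([1.0, 0.0, 0.0])
```

For `(1.0, 0, 0)` this yields the X axis and an angle of 1 radian; if the vectors in the question use some other encoding, the mapping above would need to change accordingly.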
|
<python><math><vector><rotation>
|
2024-08-13 11:14:37
| 0
| 2,597
|
Nuker
|
78,865,667
| 7,695,845
|
How to draw a rectangle with one side in matplotlib?
|
<p>I want to draw a rectangle in matplotlib and I want only the top edge to show. I tried to draw a line on top of the rectangle to make it work, but I was not satisfied with the result. Here's my code:</p>
<pre class="lang-python prettyprint-override"><code>import matplotlib.pyplot as plt
from matplotlib.axes import Axes
from matplotlib.figure import Figure
from matplotlib.patches import Rectangle
LIM = 5
def configure_plot(lim: float, **kwargs) -> tuple[Figure, Axes]:
fig: Figure
ax: Axes
fig, ax = plt.subplots(**kwargs)
ax.axis("equal")
ax.set_axis_off()
ax.set_xlim(-lim, lim)
ax.set_ylim(-lim, lim)
fig.tight_layout()
return fig, ax
def main() -> None:
fig: Figure
ax: Axes
fig, ax = configure_plot(LIM)
width = 8
height = 3.5
color = "tab:blue"
rect = Rectangle(
(-width / 2, -height),
width,
height,
facecolor=color,
alpha=0.8,
linewidth=3,
)
ax.add_patch(rect)
ax.plot([-width / 2, width / 2], [0, 0], color=color)
plt.show()
if __name__ == "__main__":
main()
</code></pre>
<p>This gives me this output:</p>
<p><a href="https://i.sstatic.net/0bkPB3wC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0bkPB3wC.png" alt="" /></a></p>
<p>It's almost the result I am looking for, but the line of the edge continues slightly beyond the rectangle's corners which makes it look worse in my opinion. I could play with the line end points to shorten it and make it fit, but this doesn't sound like an ideal solution. Is there a way to draw a rectangle and only one of its edges in matplotlib?</p>
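The overshoot at the corners is likely the line's default projecting cap rather than wrong endpoints; one hedged workaround is to keep the line approach but draw it with `solid_capstyle="butt"` so the stroke stops exactly at its endpoints:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

fig, ax = plt.subplots()
width, height, color = 8, 3.5, "tab:blue"
ax.add_patch(Rectangle((-width / 2, -height), width, height,
                       facecolor=color, alpha=0.8, linewidth=0))
# 'butt' caps end the stroke exactly at the data endpoints, so the edge
# no longer sticks out past the rectangle's corners
edge, = ax.plot([-width / 2, width / 2], [0, 0], color=color,
                linewidth=3, solid_capstyle="butt")
```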
|
<python><matplotlib>
|
2024-08-13 10:12:58
| 2
| 1,420
|
Shai Avr
|
78,865,556
| 3,128,122
|
Only one FTP query to get directory files and metadata
|
<p>I'm working on a project that needs solid performance.</p>
<p>I need to analyze the contents of an FTP folder (and its sub-folders) with a single FTP request (to avoid making a call per file, which I could do with <code>ftp_host.stat(file_path)</code>).</p>
<p>For each folder, I need to retrieve :</p>
<ul>
<li>The file/folder name (and whether it's a folder or a file)</li>
<li>File size</li>
<li>The file's last modification date</li>
</ul>
<p>I'm working with <code>ftputil</code> in Python. The framework I'm working in requires this information (I can't use <code>download_if_newer()</code>). I tried with the code:</p>
<pre><code>ftp_host._dir(folder)
</code></pre>
<p>This code returns almost everything I need in this line:</p>
<pre><code>-rw-r--r-- 1 test ftpusers 37 Aug 13 09:37 fruits.csv
</code></pre>
<p>which gives me this information:</p>
<ul>
<li>leading <code>-</code> indicates that it's a file (<code>d</code> for directory)</li>
<li><code>37</code> is the size</li>
<li>and <code>fruits.csv</code> is the name</li>
</ul>
<p>However, the date is incomplete: I only get <code>Aug 13 09:37</code>. The year is missing.</p>
<p>Is there a solution for retrieving the year as well?</p>
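One option worth trying, if the server supports it, is the MLSD command (`ftplib.FTP.mlsd` in the standard library): it returns machine-readable "facts" per entry, including a full `modify` timestamp (`YYYYMMDDHHMMSS`) instead of the ambiguous ls-style date. A sketch of parsing one raw MLSD response line (the sample line is made up):

```python
def parse_mlsd_line(line):
    """Split one raw MLSD line into (name, facts dict)."""
    facts_str, name = line.split(" ", 1)      # facts, then a space, then the name
    facts = dict(f.split("=", 1) for f in facts_str.rstrip(";").split(";"))
    return name, facts

name, facts = parse_mlsd_line("modify=20240813093700;type=file;size=37; fruits.csv")
```

In practice `FTP.mlsd(path)` already yields `(name, facts)` pairs, so no manual parsing is needed; the sketch just shows what the facts carry, including the year.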
|
<python><ftp><ftputil>
|
2024-08-13 09:51:59
| 1
| 11,394
|
Samuel Dauzon
|
78,865,475
| 6,300,438
|
Python on Mac does not recognize venv executable
|
<p>I recently moved to a MacBook for programming. It has a really weird problem that I don't know where to start debugging. I have already performed an exhaustive search on Google but can't find a solution.</p>
<p>Basically, I try to create a virtual environment <code>env</code> with <code>python3.10</code>, activate it, and expect to get <code>python3.10</code>, but surprisingly it's <code>python3.12</code>, which is a different executable.</p>
<pre class="lang-bash prettyprint-override"><code>luanpham@Macc:~/ws/RCAEval$ python3.10 -m venv env
luanpham@Macc:~/ws/RCAEval$ . env/bin/activate
(env) luanpham@Macc:~/ws/RCAEval$ which python
/Users/luanpham/ws/RCAEval/env/bin/python
(env) luanpham@Macc:~/ws/RCAEval$ python --version
Python 3.12.3
(env) luanpham@Macc:~/ws/RCAEval$ python
Python 3.12.3 (v3.12.3:f6650f9ad7, Apr 9 2024, 08:18:47) [Clang 13.0.0 (clang-1300.0.29.30)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> ^D
(env) luanpham@Macc:~/ws/RCAEval$ which python3.10
/Users/luanpham/ws/RCAEval/env/bin/python3.10
(env) luanpham@Macc:~/ws/RCAEval$ python
Python 3.12.3 (v3.12.3:f6650f9ad7, Apr 9 2024, 08:18:47) [Clang 13.0.0 (clang-1300.0.29.30)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> print(sys.executable)
/Library/Frameworks/Python.framework/Versions/3.12/bin/python3.12
</code></pre>
<p>It would be great if anyone could let me know what's happening here...</p>
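One thing worth checking: a venv records which base interpreter created it in its `pyvenv.cfg`, and `python3.10 -m venv env` does not fully rebuild an `env` directory that already exists (e.g. one created earlier by 3.12). A small sketch of reading that file's entries; the sample text stands in for a real `env/pyvenv.cfg`:

```python
def parse_pyvenv_cfg(text):
    """Parse pyvenv.cfg entries; 'home' points at the base interpreter's bin dir."""
    pairs = (line.split("=", 1) for line in text.splitlines() if "=" in line)
    return {k.strip(): v.strip() for k, v in pairs}

# In practice: Path("env/pyvenv.cfg").read_text()
sample = """home = /Library/Frameworks/Python.framework/Versions/3.12/bin
version = 3.12.3
"""
info = parse_pyvenv_cfg(sample)
```

If `home`/`version` mention 3.12, deleting the `env` directory and recreating it with `python3.10 -m venv env` should fix the mismatch.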
|
<python><macos>
|
2024-08-13 09:35:00
| 2
| 599
|
Luan Pham
|
78,865,470
| 11,046,379
|
Recursively traverse Pandas Dataframe
|
<p>There is Pandas dataframe</p>
<pre><code> id parent_id result
1 0 True
2 0 False
3 1 True
4 1 False
5 2 True
6 2 False
7 4 True
8 4 True
</code></pre>
<p>How can I recursively traverse the entire frame so that, if a node has descendants, I apply an "OR" operation across all the descendant rows and then "AND" that intermediate result with the parent row? i.e.</p>
<pre><code>AND(parent['result'], OR (descendants['result']))
</code></pre>
<p>So I need to get the result of this boolean expression:</p>
<pre><code>df.loc[df['id'] == 1, 'result'] AND
(df.loc[df['id'] == 3, 'result'] OR
(
df.loc[df['id'] == 4, 'result']
AND
(
df.loc[df['id'] == 7, 'result'] OR
df.loc[df['id'] == 8, 'result']
)
)
)
OR
(
df.loc[df['id'] == 2, 'result']
AND (
df.loc[df['id'] == 5, 'result'] OR
df.loc[df['id'] == 6, 'result']
)
)
</code></pre>
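Assuming roots are the nodes whose <code>parent_id</code> never appears as an <code>id</code>, and that root results are combined with OR (as in the expression above), one recursive sketch:

```python
import pandas as pd

df = pd.DataFrame({
    "id":        [1, 2, 3, 4, 5, 6, 7, 8],
    "parent_id": [0, 0, 1, 1, 2, 2, 4, 4],
    "result":    [True, False, True, False, True, False, True, True],
})

def evaluate(node_id):
    own = bool(df.loc[df["id"] == node_id, "result"].iloc[0])
    children = df.loc[df["parent_id"] == node_id, "id"].tolist()
    if not children:
        return own
    # AND(parent, OR(descendants)), applied recursively
    return own and any(evaluate(c) for c in children)

roots = df.loc[~df["parent_id"].isin(df["id"]), "id"].tolist()
result = any(evaluate(r) for r in roots)   # OR across the root subtrees
```

For deep trees an iterative post-order walk over a precomputed parent-to-children dict would avoid both recursion limits and the repeated `df.loc` scans.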
|
<python><pandas>
|
2024-08-13 09:34:09
| 2
| 1,658
|
harp1814
|
78,865,284
| 8,510,149
|
PySpark optimization mindset - loop over groups with joins and union
|
<p>I'm looping over groups in a PySpark dataframe and performing one filter operation, several joins (depending on the depth of the group), and one union operation on each group. The individual groups are quite small; in my real-world use cases the number of rows per group ranges from 3 to 20. I have around 1500 groups to loop through, and it takes a very long time.</p>
<p>I run this on Databricks 14.3, driver: 64 GB,8 workers.</p>
<p>I'm interested in how I should think in terms of optimization. There are a lot of recommendations online - broadcasting, caching, etc. But I find it hard to know when to use what, and why.</p>
<p>How can the code snippet below be optimized? How would a Spark-developer think?</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql import functions as F
spark = SparkSession.builder.appName("OPT").getOrCreate()
spark.conf.set("spark.sql.shuffle.partitions", "auto")
data = [
("A", 1, 0, 2121),
("A", 2, 2121, 5567),
("A", 3, 5567, 5566),
("A", 3, 5567, 5568),
("A", 3, 5567, 5569),
("A", 3, 5567, 5570),
("B", 1, 0, 3331),
("B", 2, 3331, 5515),
]
columns = ["group_id", "level", "parent", "node"]
test_df = spark.createDataFrame(data, columns)
test_df = test_df.withColumn("path", F.array("parent"))
# Create list to iterate over
list_to_iterate = test_df.groupBy("group_id").agg(F.max("level").alias("depth")).collect()
# Empty dataframe to store result from loop
new_result_df = spark.createDataFrame([], schema=test_df.schema)
for group in list_to_iterate:
current_level = group['depth']
tmp = test_df.filter(F.col('group_id') == group['group_id'])
original_group = tmp
while current_level > 1:
# Repeatedly join operation
joined_df = tmp.alias("child").join(
original_group.alias("parent"),
F.col("child.parent") == F.col("parent.node"),
"left"
).select(
F.col("child.group_id"),
F.col("child.level"),
F.col("parent.parent").alias("parent"),
F.col("child.node"),
# Append operation
F.expr("CASE WHEN parent.parent IS NOT NULL THEN array_union(child.path, array(parent.parent)) ELSE child.path END").alias("path")
)
tmp = joined_df
current_level -= 1
# Union operation
new_result_df = new_result_df.union(joined_df)
new_result_df.show(truncate=False)
</code></pre>
<p>Output:</p>
<pre><code>
+--------+-----+------+----+---------------+
|group_id|level|parent|node|path |
+--------+-----+------+----+---------------+
|A |1 |NULL |2121|[0] |
|A |2 |NULL |5567|[2121, 0] |
|A |3 |0 |5566|[5567, 2121, 0]|
|A |3 |0 |5568|[5567, 2121, 0]|
|A |3 |0 |5569|[5567, 2121, 0]|
|A |3 |0 |5570|[5567, 2121, 0]|
|B |1 |NULL |3331|[0] |
|B |2 |0 |5515|[3331, 0] |
+--------+-----+------+----+---------------+
</code></pre>
|
<python><pyspark><optimization><databricks>
|
2024-08-13 08:56:35
| 1
| 1,255
|
Henri
|
78,865,258
| 1,617,563
|
How to pass an untyped dictionary assigned to a variable to a method that expects a TypedDict without mypy complaining?
|
<p>I would like to pass a dictionary to a method <code>foo</code> that expects a <code>TypedDict</code> without explicitly mentioning the type. When I pass the dictionary directly to the method, everything is good. However, when I assign the dictionary to a variable <code>configs</code> first, mypy complains about incompatible types. Interestingly, PyCharm does not complain about <code>foo(configs)</code> as long as <code>configs</code> contains the correct keys and values. I could add type information to <code>configs</code> at [1] below but wonder if I could improve the typing to make the usage of <code>foo</code> and <code>Params</code> less verbose.</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypedDict
class Params(TypedDict):
a: str
b: str
def foo(config: Params):
pass
foo({"a": "a", "b": "b"}) # Okay
configs = {"a": "a", "b": "b"} # [1]
foo(configs) # error: Argument 1 to "foo" has incompatible type "dict[str, str]"; expected "Params" [arg-type]
</code></pre>
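One low-ceremony fix, assuming you control the assignment site: annotate the variable so the type checker treats the dict literal as <code>Params</code> rather than inferring <code>dict[str, str]</code>:

```python
from typing import TypedDict

class Params(TypedDict):
    a: str
    b: str

def foo(config: Params) -> str:
    return config["a"]

configs: Params = {"a": "a", "b": "b"}  # annotation makes the literal a Params
result = foo(configs)
```

The other common option is `typing.cast(Params, configs)` at the call site, which is noisier but works when the dict is built elsewhere.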
|
<python><python-typing><mypy>
|
2024-08-13 08:50:15
| 2
| 2,313
|
aleneum
|
78,865,145
| 6,212,999
|
ltrace doesn't work for Python compiled with --enable-shared
|
<p>When I build Python 3.12.4 with <a href="https://docs.python.org/3/using/configure.html#cmdoption-enable-shared" rel="nofollow noreferrer">--enable-shared</a> option:</p>
<blockquote>
<p>Enable building a shared Python library: libpython (default is no).</p>
</blockquote>
<p>the only output I get from ltrace is:</p>
<pre><code>Py_BytesMain(2, 0x7ffd1e465518, 0x7ffd1e465530, 0x649272aa9da0) = 0
+++ exited (status 0) +++
</code></pre>
<p>If the option is not used, ltrace works as expected.</p>
<p>Why is it happening? How can I troubleshoot it?</p>
<p>My config:</p>
<ul>
<li>Ubuntu 22.04</li>
<li>ltrace version 0.7.3</li>
</ul>
|
<python><gcc><linker><shared-libraries><ltrace>
|
2024-08-13 08:19:00
| 1
| 405
|
Marcin BarczyΕski
|
78,865,114
| 6,597,296
|
Using Twisted to implement implicit FTPS server
|
<p>I am writing an FTP server using the Python framework Twisted. Twisted has its own plain FTP implementation - but it doesn't support FTPS. I've noticed that most clients connect and immediately issue an <code>AUTH TLS</code> command, requesting an encrypted FTPS connection. If the server responds that this command is not supported, they just disconnect.</p>
<p>There are third-party libraries that implement implicit FTPS server (i.e., the client connects via FTPS right off the bat) like <a href="https://github.com/chevah/txftps" rel="nofollow noreferrer">this one</a> - but this is not what I need. I need explicit FTPS support - i.e., to switch to a TLS connection from within an FTP connection when the <code>AUTH TLS</code> command is received.</p>
<p>Any ideas how to do this?</p>
<p>P.S. Edited to correctly use explicit/implicit.</p>
|
<python><ftp><twisted><ftps>
|
2024-08-13 08:09:48
| 1
| 578
|
bontchev
|
78,865,047
| 6,234,139
|
KeyError when filtering time series on basis of datetimeindex in pandas
|
<p>I am trying to filter a time series on the basis of a datetimeindex in pandas, but get a KeyError. The example below for instance yields the error KeyError: '2021'. What causes this?</p>
<pre><code>import pandas as pd
data = {'date':['2021-11-1', '2021-12-1', '2022-01-1', '2022-02-1'],
'value':['hello', 'bonjour', 'merhaba', 'goeiendag']}
df = pd.DataFrame(data)
df['date'] = pd.to_datetime(df['date'])
df.set_index('date', inplace=True)
print(df['2021'])
#print(df['2021-01'])
#print(df['2021-01-01'])
</code></pre>
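`df['2021']` is a column lookup; partial-string date indexing through that syntax was deprecated and removed in pandas 2.0. Selecting rows via `.loc` still supports partial date strings. A sketch with the same data:

```python
import pandas as pd

df = pd.DataFrame({
    "date":  ["2021-11-1", "2021-12-1", "2022-01-1", "2022-02-1"],
    "value": ["hello", "bonjour", "merhaba", "goeiendag"],
})
df["date"] = pd.to_datetime(df["date"])
df = df.set_index("date")

year_2021 = df.loc["2021"]     # all rows in 2021
nov_2021 = df.loc["2021-11"]   # all rows in November 2021
```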
|
<python><pandas>
|
2024-08-13 07:51:51
| 0
| 701
|
koteletje
|
78,864,992
| 4,934,344
|
Quick search list of tuples
|
<p>I have a very long list of tuples, about 4000 entries. Data is not sorted. This is not the full code. It is a simplified example. Just looking for a way to speed this up if possible.</p>
<pre><code>arr = [['863', '0.31', '0.00', '0.69'], ['621', '1.00', '0.00', '0.00'], ['834', '1.00', '0.00', '0.00']]
</code></pre>
<p>etc</p>
<p>I'm running a loop to compare a text file with 40000 entries, and I need to search the array above for matches. The best I have come up with is using a numpy array:</p>
<pre><code>nparr = np.asarray(arr)
lines = []
for i in range(40000): # a simplified version of the loop, which has varying values in the original
ind = np.where(nparr == '834') #varies in original
item = nparr[ind[0][0]]
lines.append(item[0])
</code></pre>
<p>Write out results</p>
<pre><code>with open("output.txt", "w", buffering=8192) as file:
file.writelines(lines)
</code></pre>
<p>On my windows machine the loop takes about 1.3s, and the writeout takes 0.02s. For some reason on a linux machine that has better specs the loop takes 2x longer. I have narrowed the speed drop to the np.where function.</p>
<p>Is there a way to speed this up without np.where, or am I at max speed here?</p>
<p>Thanks</p>
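Since `np.where` scans the whole array on every lookup, the 40 000-iteration loop does 40 000 full scans. A plain dict keyed on the first field turns each lookup into O(1) after a single O(n) build pass; a sketch, assuming the first element of each row is the key being matched:

```python
arr = [['863', '0.31', '0.00', '0.69'],
       ['621', '1.00', '0.00', '0.00'],
       ['834', '1.00', '0.00', '0.00']]

# build once: key -> first matching row
index = {}
for row in arr:
    index.setdefault(row[0], row)   # keep the first occurrence, like np.where[0][0]

lines = []
for key in ['834'] * 3:             # stand-in for the 40 000 varying lookups
    row = index.get(key)
    if row is not None:
        lines.append(row[0])
```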
|
<python><numpy><performance><loops>
|
2024-08-13 07:37:23
| 2
| 611
|
Rankinstudio
|
78,864,333
| 1,940,534
|
How do I use presence_of_element_located to click a checkbox
|
<p>I want to use a piece of code like this:</p>
<pre><code>elementspan = WebDriverWait(driver, 10).until(EC.presence_of_element_located(("xpath", '//span[text()="" and @class="cb-i"]')))
</code></pre>
<p>The trick is that the HTML I am presented with is shown below.
I want to click the checkbox below; I'm just not sure how to write the command above so it hunts down and clicks the "checkbox".</p>
<pre><code><div class="cb-c" role="alert" style="display: flex;">
<label class="cb-lb">
<input type="checkbox">
<span class="cb-i"></span>
<span class="cb-lb-t">Verify you are human</span></label></div>
</code></pre>
|
<python><selenium-webdriver><xpath><chrome-web-driver>
|
2024-08-13 03:58:55
| 1
| 1,217
|
robm
|
78,863,941
| 2,449,857
|
Parsing XML with lxml without unescaping characters
|
<p>Is there a way to read XML from a string with <code>lxml</code>, without converting escaped characters (<code>&apos;</code> for <code>'</code>, <code>&quot;</code> for <code>"</code>, etc) back to their original form?</p>
<p>For example,</p>
<pre class="lang-py prettyprint-override"><code>from lxml import etree
def test_etree_parse():
test_str = "Text&apos;s escaped"
xml = etree.XML(f"<root>{test_str}</root>")
assert xml.find(".").text == test_str
</code></pre>
<p>This fails because <code>etree.XML</code> converts <code>"Text&apos;s escaped"</code> to <code>"Text's escaped"</code>.</p>
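Entity decoding happens in every conforming XML parser (the stdlib's ElementTree behaves the same way as lxml here), because <code>&amp;apos;</code> and a literal quote are the same character at the document level; escaping is purely a serialization concern. If the escaped form is needed for a comparison, it can be re-applied after parsing:

```python
import xml.etree.ElementTree as ET  # lxml's parser decodes entities identically
from xml.sax.saxutils import escape

root = ET.fromstring("<root>Text&apos;s escaped</root>")
decoded = root.text                           # parser always yields "Text's escaped"
reescaped = escape(decoded, {"'": "&apos;"})  # back to the wire form
```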
|
<python><lxml>
|
2024-08-12 23:53:17
| 0
| 3,489
|
Jack Deeth
|
78,863,925
| 8,251,318
|
mock.patch object extending into non-decorated methods?
|
<p>I have the following code:</p>
<pre><code>def get_mocked_contract_event():
event = {#stuff}
return event
@pytest.fixture(scope="function")
def context():
yield
#works as expected
@mock.patch("ticketfactory.TicketFactory", autospec = True)
def test_success_ticket(mock_ticket_factory, context):
return_ticket_mock = mock.Mock()
soap_mock = mock.Mock()
mock_ticket_factory.getTicket.return_value = return_ticket_mock
return_ticket_mock.buildRequest.return_value = soap_mock
from call_ticket_lambda import lambda_handler
response = lambda_handler(str(get_mocked_contract_event()), context)
assert response == "success string"
#fails on AssertionError, ticket is of type mock?
def test_contract_ticket_soap():
from call_ticket_lambda import TicketFactory
ticket = TicketFactory.getTicket("CONTRACT")
Logger().info("ticket:" + str(type(ticket)))
assert isinstance(ticket, ContractTicket) == True
</code></pre>
<p><code>test_success_ticket</code> works like a charm. However, when I try to run <code>test_contract_ticket_soap()</code>, it fails on an AssertionError, saying it's a mock.</p>
<p>The error: <code>test_call_ticket_lambda.py:93: AssertionError</code></p>
<p>The log: <code>{"level":"INFO","location":"test_ticket_soap:91","message":"ticket:<class 'unittest.mock.Mock'>"}</code></p>
<p>It seems as though the @patch decorators are somehow extending beyond the <code>test_success_ticket</code> method. How is this possible? Is there some sort of teardown logic I need to run between each test case?</p>
|
<python><unit-testing><pytest>
|
2024-08-12 23:40:49
| 0
| 877
|
Matthew
|
78,863,704
| 15,231,102
|
Why will my python file run via a terminal command but not when I run the same command inside of a function in my Flutter macos app?
|
<p>I want to execute python code in my Flutter macos app to run a YOLO object detection model.</p>
<p>I am using the <a href="https://pub.dev/packages?q=process_run" rel="nofollow noreferrer">https://pub.dev/packages?q=process_run</a> package to execute shell commands.
The package works and will run a python file with a simple print function, but not my object detection file.</p>
<p>I have cloned <a href="https://github.com/ultralytics/yolov5" rel="nofollow noreferrer">https://github.com/ultralytics/yolov5</a> inside of my app.</p>
<p>In my flutter app I have a function:</p>
<pre><code>_runPython() async {
var shell = Shell();
await shell.run('''
python3 yolov5/detect.py
''');
}
</code></pre>
<p>When I call _runPython I get the error:</p>
<pre><code>Traceback (most recent call last):
File "yolov5/detect.py", line 39, in <module>
import torch
ModuleNotFoundError: No module named 'torch'
</code></pre>
<p>However when I run <code>python3 yolov5/detect.py</code> in my terminal the code executes just fine.</p>
<p>Looking inside of my python files inside of my app I don't see any errors, and everything appears to be imported.</p>
<p>Why is my detect.py working via the terminal and not from my Flutter app?</p>
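GUI apps on macOS don't inherit the shell's PATH or active virtualenv, so the `python3` the Flutter process resolves is often a different interpreter (without torch installed) than the terminal's. A first diagnostic step - this only reveals the mismatch, it doesn't fix it - is to log which interpreter and search paths are in use at the top of detect.py:

```python
import sys

# compare this output between a terminal run and a Flutter-spawned run;
# a different executable or sys.path explains the missing 'torch'
interpreter = sys.executable
search_paths = list(sys.path)
print(interpreter)
```

If they differ, invoking the interpreter by absolute path from the Dart side (e.g. the path the terminal's `which python3` reports) is the usual workaround.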
|
<python><flutter><pytorch><torch><yolo>
|
2024-08-12 21:47:11
| 1
| 597
|
greenzebra
|
78,863,647
| 695,984
|
Define default arguments using a dict
|
<p>I have an unusual situation where a dict of keyword-type argument defaults, <code>the_defaults</code>, is created before a function <code>my_function</code> is defined:</p>
<pre><code>the_defaults = {'kwarg_a': 1, 'kwarg_b': 2}
def my_function(my_args = the_defaults):
print(my_args['kwarg_a'])
print(my_args['kwarg_b'])
</code></pre>
<p>This code is ugly because my function body has to address entries of <code>my_args</code> instead of using local variables.
Is there any way to avoid this?
I am imagining a dynamic function header <code>def my_function(**kwargs=**the_defaults): [...]</code> but this is obviously meaningless.</p>
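One common pattern, assuming the call sites can pass keyword arguments: accept `**kwargs` and merge it over the defaults dict. The defaults stay in one place, callers override selectively, and the body can unpack into local names:

```python
the_defaults = {"kwarg_a": 1, "kwarg_b": 2}

def my_function(**overrides):
    args = {**the_defaults, **overrides}   # caller values win over defaults
    kwarg_a, kwarg_b = args["kwarg_a"], args["kwarg_b"]
    return kwarg_a, kwarg_b

both_default = my_function()
one_override = my_function(kwarg_a=10)
```

This also sidesteps the mutable-default-argument pitfall of `my_args=the_defaults`, where mutations inside the function would silently change the shared dict.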
|
<python><function><dynamic>
|
2024-08-12 21:23:28
| 1
| 1,044
|
Christian Chapman
|
78,863,622
| 24,191,255
|
Extracting a curve and identifying coordinates from an image using OpenCV
|
<p>Currently I am particularly interested in pacing strategy optimization in different sports. As part of these kinds of processes, I have to define certain courses in terms of distance and changes in altitude along the course. I thought I could make this process easier by extracting the data I need from downloaded schematic course profiles (.png) using OpenCV, instead of other, much more manual and time-consuming solutions.</p>
<p>After reviewing some of the previous questions on SO, I aimed to identify the curve representing the course profile of a specific course, sample some of its points using a predetermined step size (pixels), and save their coordinates to .csv. However, since I am a very beginner in OpenCV, I believe there must be better approaches; unsurprisingly, my attempt wasn't well executed.</p>
<p>Something I also find problematic in my approach is that identification of the curve is very color-dependent, so if an image with different graphics is used, I need to rewrite parts of the code. Is there a way to avoid this?</p>
<p>Here you can find an example image, representing a course profile I used for testing my code <a href="https://i.sstatic.net/xVA3UK7i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xVA3UK7i.png" alt="Course profile" /></a></p>
<p>My code looks like as follows:</p>
<pre><code>import cv2
import numpy as np
import matplotlib.pyplot as plt
image_path = r"image"
image = cv2.imread(image_path)
hsv_image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
color_blue1 = np.array([100, 50, 50])
color_blue2 = np.array([140, 255, 255])
mask = cv2.inRange(hsv_image, color_blue1, color_blue2)
blue_curve = cv2.bitwise_and(image, image, mask=mask)
# Converting the result to grayscale
gray_curve = cv2.cvtColor(blue_curve, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray_curve, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
curve_contour = max(contours, key=cv2.contourArea)
curve_points = curve_contour.squeeze()
curve_points = curve_points[np.argsort(curve_points[:, 0])]
def sample_curve(points, step):
x_values = points[:, 0]
sampled_points = []
for x in range(x_values.min(), x_values.max(), step):
interval_points = points[(x_values >= x) & (x_values < x + step)]
if interval_points.size:
avg_y = interval_points[:, 1].mean()
sampled_points.append([x, avg_y])
return np.array(sampled_points)
# Set step frequency (pixels)
step_frequency = 5
sampled_curve = sample_curve(curve_points, step_frequency)
plt.figure(figsize=(15, 7))
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
plt.plot(curve_points[:, 0], curve_points[:, 1], 'r-', label='Detected curve')
plt.scatter(sampled_curve[:, 0], sampled_curve[:, 1], color='blue', label='Sampled points')
plt.legend()
plt.show()
np.savetxt('sampled_curve.csv', sampled_curve, delimiter=',', header='distance,altitude', comments='')
</code></pre>
<p>which gave the following result:
<a href="https://i.sstatic.net/8elPblTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8elPblTK.png" alt="Result" /></a></p>
<p>EDIT:
Here is a course profile relatively clean of markings, for testing methods:
<a href="https://i.sstatic.net/BKAhv2zu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BKAhv2zu.png" alt="Profile 2" /></a></p>
|
<python><opencv><image-processing><computer-vision><plot-parsing>
|
2024-08-12 21:14:20
| 2
| 606
|
MΓ‘rton HorvΓ‘th
|
78,863,608
| 1,786,016
|
Django 5 update_or_create reverse one to one field
|
<p>On Django 4.x</p>
<p>Code is working as expected</p>
<pre><code>from django.db import models
class Project(models.Model):
rough_data = models.OneToOneField(
"Data",
related_name="rough_project",
on_delete=models.SET_NULL,
null=True,
blank=True,
)
final_data = models.OneToOneField(
"Data",
related_name="final_project",
on_delete=models.SET_NULL,
null=True,
blank=True,
)
class Data(models.Model):
pass
data, created = Data.objects.update_or_create(
rough_project=project, defaults=data
)
</code></pre>
<p>On Django 5.x:</p>
<pre><code>ValueError: The following fields do not exist in this model: rough_project
</code></pre>
<p>I do not see any changes related to this in <a href="https://docs.djangoproject.com/en/5.0/releases/5.0/" rel="nofollow noreferrer">changelog</a></p>
<p>In Django 4.2 and below it worked as one line with <code>update_or_create</code> and in 5.0 it stopped working, so instead I need to do something like this:</p>
<pre><code>data = getattr(project, "rough_data", None)
if data:
# update fields here
else:
# create Data object
</code></pre>
|
<python><django><django-models>
|
2024-08-12 21:09:14
| 1
| 7,822
|
Arti
|
78,863,540
| 3,949,008
|
Force PyArrow table write to ignore NULL type and use original schema type for a column
|
<p>I have this piece of code that appends two parts of the same data to a PyArrow table. The second write fails because the column gets assigned <code>null</code> type. I understand why it is doing that. Is there a way to force it to use the type in the table's schema, and not use the inferred one from the data in second write?</p>
<pre><code>import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq
data = {
'col1': ['A', 'A', 'A', 'B', 'B'],
'col2': [0, 1, 2, 1, 2]
}
df1 = pd.DataFrame(data)
df1['col3'] = 1
df2 = df1.copy()
df2['col3'] = pd.NA
pat1 = pa.Table.from_pandas(df1)
pat2 = pa.Table.from_pandas(df2)
writer = pq.ParquetWriter('junk.parquet', pat1.schema)
writer.write_table(pat1)
writer.write_table(pat2)
</code></pre>
<p>My error on second write above:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/anaconda3/lib/python3.10/site-packages/pyarrow/parquet/core.py", line 1094, in write_table
raise ValueError(msg)
ValueError: Table schema does not match schema used to create file:
table:
col1: string
col2: int64
col3: null
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 578 vs.
file:
col1: string
col2: int64
col3: int64
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 577
</code></pre>
|
<python><pandas><pyarrow>
|
2024-08-12 20:45:38
| 1
| 10,535
|
Gopala
|
78,863,539
| 1,940,534
|
Python selenium webdriver.chrome issue
|
<p>I have code using these libraries:</p>
<pre><code>from arcgis.gis import GIS
from arcgis.geometry import Point, Polyline, Polygon
import datetime
import os
from selenium import webdriver
from selenium.webdriver.chrome.service import Service as ChromeService
from selenium.webdriver.common.keys import Keys
from webdriver_manager.chrome import ChromeDriverManager
import time
from selenium.webdriver.support.select import Select
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from getpass import getpass
</code></pre>
<p>I did a</p>
<pre><code>pip install -r requirements.txt
</code></pre>
<p>but I still get this error with loading the chromedriver below:</p>
<pre><code>PS E:\code\esri\FissionStaking2> python .\marswaitforlogin.py
enter in a ZULU time 00:00 -> 23:59 (15:10) for 3:10pm
14:30
Traceback (most recent call last):
File "E:\code\esri\FissionStaking2\marswaitforlogin.py", line 34, in <module>
driver = webdriver.Chrome(service=ChromeService(ChromeDriverManager().install()))
File "E:\code\esri\FissionStaking2\.venv\lib\site-packages\selenium\webdriver\chrome\webdriver.py", line 45, in __init__
super().__init__(
File "E:\code\esri\FissionStaking2\.venv\lib\site-packages\selenium\webdriver\chromium\webdriver.py", line 55, in __init__
self.service.start()
File "E:\code\esri\FissionStaking2\.venv\lib\site-packages\selenium\webdriver\common\service.py", line 98, in start
self._start_process(self._path)
File "E:\code\esri\FissionStaking2\.venv\lib\site-packages\selenium\webdriver\common\service.py", line 208, in _start_process
self.process = subprocess.Popen(
File "E:\Program Files\Python310\lib\subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "E:\Program Files\Python310\lib\subprocess.py", line 1456, in _execute_child
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
OSError: [WinError 193] %1 is not a valid Win32 application
PS E:\code\esri\FissionStaking2>
</code></pre>
<p>any idea what I am missing?</p>
|
<python><selenium-webdriver><selenium-chromedriver>
|
2024-08-12 20:45:28
| 1
| 1,217
|
robm
|
78,863,526
| 6,622,697
|
Loading package from disk in Pycharm
|
<p>I am trying to load a package in Pycharm that I have on disk.</p>
<p>Everything works if I do this outside of Pycharm in a venv</p>
<pre><code>pip install -e python/createInput
</code></pre>
<p>But using Pycharm's Install Package from Disk</p>
<p><a href="https://i.sstatic.net/cKuzu1gY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cKuzu1gY.png" alt="enter image description here" /></a></p>
<p>When I click OK, nothing happens. No error, no acknowledgment. When I search for my package in the installed packages, it's not there. And I can't import it anywhere.</p>
<h1>Update</h1>
<p>I restarted Pycharm and refreshed its packages. I was able to install my package. I got a confirmation in the notifications. And it appears in the package manager
<a href="https://i.sstatic.net/nut9oWjP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nut9oWjP.png" alt="enter image description here" /></a></p>
<p>But Pycharm still doesn't recognize it in my code. Importing in a terminal windows seems to work. But Pycharm can't find it<br />
<a href="https://i.sstatic.net/2fs74EDM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fs74EDM.png" alt="enter image description here" /></a></p>
<p>But <code>pip</code> knows that it's there
<a href="https://i.sstatic.net/9QEX2hmK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9QEX2hmK.png" alt="enter image description here" /></a></p>
<p>So somehow, I can install it in the package manager, but Pycharm can't seem to find it</p>
<p>Here is a screenshot of my code and the Problems view</p>
<p><a href="https://i.sstatic.net/Jp9xYEz2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jp9xYEz2.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/3PngqtlD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3PngqtlD.png" alt="enter image description here" /></a></p>
|
<python><pip><pycharm>
|
2024-08-12 20:40:51
| 1
| 1,348
|
Peter Kronenberg
|
78,863,462
| 4,710,409
|
How to retrieve a ChoiceField value from a form?
|
<p>I want to get the selected choice from a <strong>ChoiceField</strong> in a view. When I submit the form, I process it in a view:</p>
<p>views.py</p>
<pre><code>def myFormSubmitView(request):
...
if form.is_valid():
print("valid form")
post = Post()
post.title = form.cleaned_data["title"]
post.body = form.cleaned_data["body"]
post.choice_test = form.cleaned_data["choice_test"]  # <-- how do I get this value?
post.save()
return HttpResponseRedirect(request.path_info)
</code></pre>
<p>I want to get the value of choice_test. How can I do that?</p>
<p>=========EDIT:========</p>
<p>This is the code of forms.py and models.py:</p>
<p><strong>forms.py</strong></p>
<pre><code>from django import forms
from django.db import models
from django.forms import ModelForm

from blog.models import Post

class NewBlogForm(ModelForm):
    title = models.CharField(max_length=100)
    section_ids = (
        ("1", "sec1"),
        ("2", "sec2"),
    )
    section = forms.ChoiceField(choices=section_ids)
    body = models.CharField(max_length=255)
    image = models.ImageField(upload_to='static/.../', error_messages={'invalid': ("Image files only")})

    class Meta:
        model = Post
        fields = ['title', 'section', 'body', 'image']

    def __str__(self):
        return self.title
</code></pre>
<p><strong>models.py</strong></p>
<pre><code>class Post(models.Model):
    title = models.CharField(max_length=255)
    author = models.ForeignKey(xxx, on_delete=models.CASCADE)
    image = models.ImageField(null=True, upload_to='static/xyz')
    body = models.TextField()
    created_on = models.DateTimeField(auto_now_add=True)
    last_modified = models.DateTimeField(auto_now=True)
    categories = models.ManyToManyField("Category", related_name="posts")
    likes = models.IntegerField(default=0, null=False)
    shares = models.IntegerField(default=0, null=False)

    def __str__(self):
        return self.title
</code></pre>
<p>Post() is a model not the request.POST.</p>
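<p>To illustrate what I expect to happen (a plain-Python sketch of the cleaned data; note the form above names the field <code>section</code>, so I assume <code>cleaned_data</code> is keyed by that name):</p>

```python
section_ids = (("1", "sec1"), ("2", "sec2"))

# a ChoiceField returns the stored value of the chosen pair, not the label
cleaned_data = {"section": "1"}  # stand-in for form.cleaned_data

selected_value = cleaned_data["section"]
selected_label = dict(section_ids)[selected_value]
print(selected_value, selected_label)
```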
|
<python><django><forms><modelform>
|
2024-08-12 20:20:35
| 2
| 575
|
Mohammed Baashar
|
78,863,444
| 14,250,641
|
Efficiently fetch sequences for sliding window (large dataset)
|
<p>The dataset I have stored is just coordinates of DNA sequences.</p>
<p>df:</p>
<pre><code>chr start stop label
chr1 9000 9100 1
chr1 8803 8903 1
chr1 8903 9000 0
</code></pre>
<p>My goal is to expand the original dataset by creating a sliding window around each coordinate to capture context sequences.</p>
<p>new_df:</p>
<pre><code>chr start stop label
chr1 9000-5000 9000+5000 1
chr1 9001-5000 9001+5000 1
chr1 9002-5000 9002+5000 1
...
chr1 9100-5000 9100+5000 1
...
</code></pre>
<ol>
<li>Create Sliding Window: For each entry in my original dataset, generate new rows by creating a sliding window of 5000 nucleotides on either side of the start and stop coordinates. This effectively captures the sequence context around each original coordinate.</li>
<li>Expand Dataset: For each original entry, create new rows where each row represents a specific position within the 5000-nucleotide context window. For example, if the original start is 9000, I will generate rows with start values from 9000-5000 to 9000+5000, and similarly adjust the stop values. Each sequence is now 10,001 characters in length.</li>
</ol>
<p>using this function:</p>
<pre><code>def expand_coordinates(element_locs, context=3):
    # expand coordinates row by row (note: these Python loops are not vectorized)
    start = element_locs['Start'].astype(int)
    end = element_locs['End'].astype(int)
    expanded_data = []
    for idx, row in element_locs.iterrows():
        chr_name = row['Chromosome']
        chr_start = start[idx]
        chr_end = end[idx]
        for i in range(chr_start, chr_end + 1):
            expanded_data.append({
                'Chromosome': chr_name,
                'Start': max((i - 1) - context, 0),
                'End': min(i + context, max_sizes[chr_name])
            })
    expanded_df = pd.DataFrame(expanded_data)
    return expanded_df
</code></pre>
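<p>For comparison, a genuinely vectorized version of the expansion (a sketch: it collapses the per-row loops into NumPy operations, and assumes a single scalar <code>max_size</code> clamp instead of the per-chromosome <code>max_sizes</code> dict used above):</p>

```python
import numpy as np
import pandas as pd

def expand_coordinates_vectorized(element_locs, context=3, max_size=10**9):
    # one output row per position inside each [Start, End] interval
    starts = element_locs["Start"].to_numpy(dtype=np.int64)
    ends = element_locs["End"].to_numpy(dtype=np.int64)
    lengths = ends - starts + 1
    positions = np.concatenate(
        [np.arange(s, e + 1) for s, e in zip(starts, ends)]
    )
    # repeat each chromosome name once per generated position
    chroms = np.repeat(element_locs["Chromosome"].to_numpy(), lengths)
    return pd.DataFrame({
        "Chromosome": chroms,
        "Start": np.maximum(positions - 1 - context, 0),
        "End": np.minimum(positions + context, max_size),
    })
```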
<ol start="3">
<li>Fetch Sequences: Use the expanded coordinates from new_df to fetch the corresponding DNA sequences from the dataset.</li>
</ol>
<pre><code>def get_element_seqs(element_locs, context=3):
    expanded_df = expand_coordinates(element_locs, context=context)
    # open the reference genome once, then fetch per row
    genome = pysam.Fastafile(ref_genome)

    def fetch_sequences(row):
        return genome.fetch(row['Chromosome'], row['Start'], row['End'])

    # note: apply() runs row by row, it is not vectorized
    expanded_df['sequence'] = expanded_df.apply(fetch_sequences, axis=1)
    return expanded_df
</code></pre>
<ol start="4">
<li>Tokenize the sequences</li>
</ol>
<pre><code>dataset = Dataset.from_pandas(element_final[['Chromosome', 'sequence', 'label']])
dataset = dataset.shuffle(seed=42)
tokenizer = AutoTokenizer.from_pretrained("InstaDeepAI/nucleotide-transformer-500m-human-ref")

def tokenize_function(examples):
    outputs = tokenizer.batch_encode_plus(examples["sequence"], return_tensors="pt", truncation=False, padding=False, max_length=80)
    return outputs

# Creating tokenized dataset
tokenized_dataset = dataset.map(
    tokenize_function,
    batched=True, batch_size=2000)
</code></pre>
<ol start="5">
<li>Get the embeddings from the tokens</li>
</ol>
<pre><code>input_file = f"tokenized_elements/tokenized_{ELEMENT_LABEL}/(unknown).arrow"
# Load input data
d1 = Dataset.from_file(input_file)

def embed_function(examples):
    torch.cuda.empty_cache()
    gc.collect()
    inputs = torch.tensor(examples['input_ids'])  # Convert to tensor
    inputs = inputs.to(device)
    with torch.no_grad():
        outputs = model(input_ids=inputs, output_hidden_states=True)
    # Step 3: Extract the embeddings
    hidden_states = outputs.hidden_states  # List of hidden states from all layers
    embeddings = hidden_states[-1]  # Embeddings from the last layer
    averaged_embeddings = torch.mean(embeddings, dim=1)  # Mean along the sequence dimension
    averaged_embeddings = averaged_embeddings.to(torch.float32)  # Ensure float32 data type
    return {'embeddings': averaged_embeddings}

# Map embeddings function to input data
embeddings = d1.map(embed_function, batched=True, batch_size=1550)
embeddings = embeddings.remove_columns(["input_ids", "attention_mask"])
# Save embeddings to disk
output_dir = f"embedded_elements/embeddings_{ELEMENT_LABEL}/(unknown)"  # Assuming ELEMENT_LABEL is defined elsewhere
</code></pre>
<ol start="6">
<li>Plug in the embeddings to XGBoost</li>
</ol>
<p>This ends up giving me huge datasets that make my code crash (e.g. I start with 700K rows and they expand to roughly a billion rows). I have been using pandas, so maybe that's part of the problem, and I don't think I'm batching anywhere.
Unfortunately, my code keeps crashing between steps 2 and 3. I think I need to implement batching, but I'm unsure how everything will fit together, since I eventually need to feed the output to an LLM.</p>
|
<python><pandas><dataframe><bigdata><bioinformatics>
|
2024-08-12 20:13:36
| 1
| 514
|
youtube
|
78,863,326
| 2,071,807
|
Structural pattern matching for checking if a list contains an element
|
<p>Boto3 lists buckets unhelpfully like this:</p>
<pre class="lang-py prettyprint-override"><code>{'Buckets': [{'CreationDate': datetime.datetime(1, 1, 1, 0, 0, tzinfo=tzlocal()),
'Name': 'foo'},
{'CreationDate': datetime.datetime(1, 1, 1, 0, 0, tzinfo=tzlocal()),
'Name': 'bar'},
{'CreationDate': datetime.datetime(2024, 7, 30, 15, 4, 59, 150000, tzinfo=tzlocal()),
'Name': 'baz'}],
'Owner': {...}}
</code></pre>
<p>If I want to know whether a bucket <code>foo</code> is at the start of that list, I can write this:</p>
<pre class="lang-py prettyprint-override"><code>match s3_response:
case {"Buckets": [{"Name": "foo", "CreationDate": _}, *_]}:
print("Bucket 'foo' found")
case _:
print("Bucket 'foo' not found")
</code></pre>
<p>And I can think of a similar solution if the bucket is known to be at the end.</p>
<p>But what about finding bucket <code>bar</code> which is in the middle of the list of buckets? You might hope you could do this:</p>
<pre class="lang-py prettyprint-override"><code>match s3_response:
case {"Buckets": [*_, {"Name": "bar", "CreationDate": _}, *_]}:
print("Bucket 'bar' found")
case _:
print("Bucket 'bar' not found")
</code></pre>
<p>But that gives:</p>
<blockquote>
<p>SyntaxError: multiple starred names in sequence pattern</p>
</blockquote>
<p>Of course I could do things the boring old way, but where's the fun in that:</p>
<pre class="lang-py prettyprint-override"><code>'bar' in [bucket.get("Name", "") for bucket in s3_response.get("Buckets", [])]
</code></pre>
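<p>(The closest I've found to keeping it lazy without pattern matching is a generator with <code>any()</code>, which at least short-circuits at the first matching bucket instead of building the whole list:)</p>

```python
# sample shaped like the boto3 response above (datetimes omitted)
s3_response = {
    "Buckets": [{"Name": "foo"}, {"Name": "bar"}, {"Name": "baz"}],
}

# generator + any() stops scanning as soon as "bar" is seen
found = any(
    bucket.get("Name") == "bar"
    for bucket in s3_response.get("Buckets", [])
)
```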
|
<python><pattern-matching>
|
2024-08-12 19:33:31
| 1
| 79,775
|
LondonRob
|
78,863,319
| 12,694,438
|
ValueError: Namespace Atspi not available
|
<p>I'm trying to use pyatspi.</p>
<p>I installed pygobject with <code>conda install conda-forge::pygobject</code>.</p>
<p>Then I ran both <code>sudo apt-get install python3-at-spi</code> and <code>sudo apt-get -y install python3-pyatspi</code>, just in case.</p>
<p>I'm using a conda environment, so I added a symlink to <code>pyatspi</code> into the environment.</p>
<p>When I <code>import pyatspi</code>, it throws this error:</p>
<pre><code>Traceback (most recent call last):
File "...", line 1, in <module>
import pyatspi
File "...", line 18, in <module>
gi.require_version('Atspi', '2.0')
File "...", line 122, in require_version
raise ValueError('Namespace %s not available' % namespace)
ValueError: Namespace Atspi not available
</code></pre>
|
<python><pygobject>
|
2024-08-12 19:31:51
| 1
| 944
|
splaytreez
|
78,863,131
| 2,687,427
|
Why does the inclusion of an inner function that uses a local variable change order of locals
|
<p>The following prints <code>{'a': 'a', 'b': 'b'}</code>:</p>
<pre class="lang-py prettyprint-override"><code>def foo(a: str = "a", b: str = "b") -> None:
    print(locals())

foo()  # Prints {'a': 'a', 'b': 'b'}
</code></pre>
<p>Which I'd expect as <code>locals</code> in Python 3.7+ <a href="https://github.com/python/cpython/issues/76871" rel="nofollow noreferrer">returns the order of creation</a>.</p>
<p>But the below prints <code>{'b': 'b', 'a': 'a'}</code></p>
<pre class="lang-py prettyprint-override"><code>def foo(a: str = "a", b: str = "b") -> None:
    print(locals())
    lambda: a
    # or:
    # def inner() -> None:
    #     a

foo()  # Prints {'b': 'b', 'a': 'a'}
</code></pre>
<p>It seems like it delayed the order of variable creation, which is strange. Why is this?</p>
<p>Edit: Ah, this only happens with Python 3.10 and lower. I guess it's a Python issue? If anyone could find the relevant GitHub issue, please share it. I might be blind as I can't find the relevant details in the 3.11 changelogs.</p>
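<p>A way to observe what changes (a sketch): when an inner function captures <code>a</code>, CPython turns it into a cell variable, and on 3.10 and lower the <code>locals()</code> snapshot appears to list ordinary fast locals before cell variables, which would explain the reordering:</p>

```python
def foo_plain(a: str = "a", b: str = "b") -> None:
    pass

def foo_capture(a: str = "a", b: str = "b") -> None:
    lambda: a  # capturing `a` turns it into a cell variable

# once captured, `a` shows up in co_cellvars
print(foo_plain.__code__.co_cellvars)    # ()
print(foo_capture.__code__.co_cellvars)  # ('a',)
```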
|
<python><lambda><local-variables>
|
2024-08-12 18:32:35
| 0
| 3,472
|
Nelson Yeung
|
78,863,093
| 6,197,439
|
PyQt5 dynamically expand widget to two columns in QGridLayout?
|
<p>The example below, which I've modified from the answer in <a href="https://stackoverflow.com/questions/59429678/pyqt5-widget-in-grid-expansion">Pyqt5 widget in grid expansion</a>, starts like this:</p>
<p><a href="https://i.sstatic.net/Kn61JhnG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Kn61JhnG.png" alt="example starting GUI" /></a></p>
<p>Then when you click the Column Cover button, the idea is that the first label "Row 0, Column 0" (outlined in red) expands on both columns of the QGridLayout - however, I get this:</p>
<p><a href="https://i.sstatic.net/zOWxSEG5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zOWxSEG5.png" alt="example GUI after click" /></a></p>
<p>So, it seems that both grid columns' widths were taken into account, in order to center the outlined label, which is good - however, the outlined label did not expand to take up the combined width of both columns.</p>
<p>How can I have the label expand in width, and take up the space of both columns in grid layout, when "Column Cover" button is clicked? I.e. I'd want to obtain this after first button click after program start (edited in image editor, box is approx what I think the union of the bounding boxes of both labels below the outlined one are).</p>
<p><a href="https://i.sstatic.net/xVdzsCEi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xVdzsCEi.png" alt="desired result GUI" /></a></p>
<p>Note that I am aware I could do probably that by manipulating <code>.setFixedWidth</code> of <code>MainUi.label</code> - however, I'm looking for some sort of a setup, where the label would automatically expand to fill available width, depending on whether its placed in one or two columns of the QGridLayout.</p>
<p>The code:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from PyQt5.QtCore import *
from PyQt5.QtGui import *
from PyQt5.QtWidgets import *
from PyQt5 import QtCore, QtGui, QtWidgets
# from example import Ui_MainWindow
class Ui_MainWindow(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.resize(191, 136)
        self.centralwidget = QtWidgets.QWidget(MainWindow)
        self.centralwidget.setObjectName("centralwidget")
        self.gridLayout = QtWidgets.QGridLayout(self.centralwidget)
        self.gridLayout.setObjectName("gridLayout")
        self.pushButton = QtWidgets.QPushButton(self.centralwidget)
        self.pushButton.setObjectName("pushButton")
        self.gridLayout.addWidget(self.pushButton, 0, 0, 1, 2)
        self.label_3 = QtWidgets.QLabel(self.centralwidget)
        self.label_3.setObjectName("label_3")
        self.gridLayout.addWidget(self.label_3, 3, 0, 1, 1)
        self.label_4 = QtWidgets.QLabel(self.centralwidget)
        self.label_4.setObjectName("label_4")
        self.gridLayout.addWidget(self.label_4, 2, 1, 1, 1)
        self.label_5 = QtWidgets.QLabel(self.centralwidget)
        self.label_5.setObjectName("label_5")
        self.gridLayout.addWidget(self.label_5, 3, 1, 1, 1)
        self.label_6 = QtWidgets.QLabel(self.centralwidget)
        self.label_6.setObjectName("label_6")
        self.gridLayout.addWidget(self.label_6, 1, 1, 1, 1)
        self.label = QtWidgets.QLabel(self.centralwidget)
        self.label.setObjectName("label")
        self.gridLayout.addWidget(self.label, 1, 0, 1, 1)
        self.label.setStyleSheet("border: 1px solid; border-color:red;")
        self.label_2 = QtWidgets.QLabel(self.centralwidget)
        self.label_2.setObjectName("label_2")
        self.gridLayout.addWidget(self.label_2, 2, 0, 1, 1)
        MainWindow.setCentralWidget(self.centralwidget)
        self.statusbar = QtWidgets.QStatusBar(MainWindow)
        self.statusbar.setObjectName("statusbar")
        MainWindow.setStatusBar(self.statusbar)
        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)

    def retranslateUi(self, MainWindow):
        _translate = QtCore.QCoreApplication.translate
        MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
        self.pushButton.setText(_translate("MainWindow", "Column Cover"))
        self.label_3.setText(_translate("MainWindow", "Row 2, Column 0"))
        self.label_4.setText(_translate("MainWindow", "Row 1, Column 1"))
        self.label_5.setText(_translate("MainWindow", "Row 2, Column 1"))
        self.label_6.setText(_translate("MainWindow", "Row 0, Column 1"))
        self.label.setText(_translate("MainWindow", "Row 0, Column 0"))
        self.label_2.setText(_translate("MainWindow", "Row 1, Column 0"))

class MainWindow(QMainWindow):
    def __init__(self, parent=None):
        super(MainWindow, self).__init__(parent)
        QWidget.__init__(self, parent)
        self.MainUi = Ui_MainWindow()
        self.MainUi.setupUi(self)
        self.MainUi.pushButton.setCheckable(True)  # +++ True
        self.MainUi.pushButton.clicked.connect(self.expand_row)

    def expand_row(self, state):  # +++ state
        if state:
            self.MainUi.label_6.hide()
            self.MainUi.gridLayout.addWidget(self.MainUi.label, 1, 0, 1, 2, Qt.AlignHCenter)  # ...2, 1)
        else:
            self.MainUi.gridLayout.addWidget(self.MainUi.label, 1, 0, 1, 1)  # ...1, 1)
            self.MainUi.label_6.show()
            # self.MainUi.gridLayout.setRowStretch(0,1)

if __name__ == "__main__":
    app = QApplication(sys.argv)
    MainWindow = MainWindow()
    MainWindow.setWindowTitle('Example')
    MainWindow.show()
    sys.exit(app.exec_())
</code></pre>
|
<python><pyqt5><qgridlayout>
|
2024-08-12 18:16:41
| 1
| 5,938
|
sdbbs
|
78,862,861
| 2,287,458
|
Expand/Unnest Polars struct into rows, not into columns
|
<p>I have this DataFrame</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
    'as_of': ['2024-08-01', '2024-08-02', '2024-08-03', '2024-08-04'],
    'quantity': [{'A': 10, 'B': 5}, {'A': 11, 'B': 7}, {'A': 9, 'B': 4, 'C': -3},
                 {'A': 15, 'B': 3, 'C': -14, 'D': 50}]
}, schema={'as_of': pl.String, 'quantity': pl.Struct})
</code></pre>
<pre><code>shape: (4, 2)
ββββββββββββββ¬βββββββββββββββββββ
β as_of β quantity β
β --- β --- β
β str β struct[4] β
ββββββββββββββͺβββββββββββββββββββ‘
β 2024-08-01 β {10,5,null,null} β
β 2024-08-02 β {11,7,null,null} β
β 2024-08-03 β {9,4,-3,null} β
β 2024-08-04 β {15,3,-14,50} β
ββββββββββββββ΄βββββββββββββββββββ
</code></pre>
<p>Which if I unnest</p>
<pre class="lang-py prettyprint-override"><code>df.unnest('quantity')
</code></pre>
<p>Gives me the following</p>
<pre><code>shape: (4, 5)
ββββββββββββββ¬ββββββ¬ββββββ¬βββββββ¬βββββββ
β as_of β A β B β C β D β
β --- β --- β --- β --- β --- β
β str β i64 β i64 β i64 β i64 β
ββββββββββββββͺββββββͺββββββͺβββββββͺβββββββ‘
β 2024-08-01 β 10 β 5 β null β null β
β 2024-08-02 β 11 β 7 β null β null β
β 2024-08-03 β 9 β 4 β -3 β null β
β 2024-08-04 β 15 β 3 β -14 β 50 β
ββββββββββββββ΄ββββββ΄ββββββ΄βββββββ΄βββββββ
</code></pre>
<p>Instead of each unnesting into columns, can I unnest into <strong>rows</strong> to get a dataframe like so?</p>
<pre><code>shape: (11, 3)
ββββββββββββββ¬βββββββ¬βββββββββββ
β as_of β name β quantity β
β --- β --- β --- β
β str β str β i64 β
ββββββββββββββͺβββββββͺβββββββββββ‘
β 2024-08-01 β A β 10 β
β 2024-08-01 β B β 5 β
β 2024-08-02 β A β 11 β
β 2024-08-02 β B β 7 β
β 2024-08-03 β A β 9 β
β β¦ β β¦ β β¦ β
β 2024-08-03 β C β -3 β
β 2024-08-04 β A β 15 β
β 2024-08-04 β B β 3 β
β 2024-08-04 β C β -14 β
β 2024-08-04 β D β 50 β
ββββββββββββββ΄βββββββ΄βββββββββββ
</code></pre>
|
<python><dataframe><python-polars><unpivot><unnest>
|
2024-08-12 17:17:30
| 2
| 3,591
|
Phil-ZXX
|
78,862,781
| 2,676,598
|
Improving low-light image capture using OpenCV
|
<p>We are using the following Python script combined with OpenCV to record video of salmon migration in order to estimate total fish passage,</p>
<pre><code>import cv2
import numpy as np
import time
import datetime
import pathlib
import imutils
import os
import shutil
import socket
szPlacename = 'Yukon River'
iCaptureDuration = 600
iFramesPerSec = 3
szFramesPerSec = str(iFramesPerSec)
cap = cv2.VideoCapture(0)
iSleepDuration = 0
iFrameWidth = 1024
iFrameHeight = 768
szFrameWidth = str(iFrameWidth)
szFrameHeight = str(iFrameHeight)
szComputerName = socket.gethostname()
iTotal, iUsed, iFree = shutil.disk_usage("/")
iPercent = 100 * iUsed / iTotal
iPercent = round(iPercent, 1)
szPercent = str(iPercent)+"%"
font = cv2.FONT_HERSHEY_COMPLEX_SMALL
if (cap.isOpened() == False):
    print("Unable to read camera feed")
pathlib.Path(('C:\\LZ\\')+szPlacename+"-"+(datetime.datetime.now().strftime("%Y%m%d"))).mkdir(parents=True, exist_ok=True)
out = cv2.VideoWriter('C:\\LZ\\'+szPlacename+"-"+datetime.datetime.now().strftime("%Y%m%d")+'/'+datetime.datetime.now().strftime("%Y%m%d%H%M%S") + " "+ szPlacename + '_StarLink.avi',cv2.VideoWriter_fourcc('m','j','p','g'),iFramesPerSec, (iFrameWidth,iFrameHeight))
iPrev = 0
iStartTime = time.time()
while( int(time.time() - iStartTime) < iCaptureDuration ):
    # start fps
    iTimeElapsed = time.time() - iPrev
    while(iTimeElapsed > 1./iFramesPerSec):
        ret, frame = cap.read()
        if not ret:
            break
        if iTimeElapsed > 1./iFramesPerSec:
            iPrev = time.time()
        szDateTime = str(datetime.datetime.now())
        szDateTime = szDateTime[0:22]
        if ret==True:
            frame = imutils.resize(frame, width=iFrameWidth)
            frame = cv2.putText(frame,"Laptop: "+szComputerName+", HD used= "+szPercent+", FPS= "+szFramesPerSec+", Size(px)= "+szFrameWidth+"x"+szFrameHeight,(10,90),font, 1,(0,255,255),2,cv2.LINE_8)
            frame = cv2.putText(frame,"Location: "+szPlacename +", Time: "+ szDateTime,(10,120),font, 1,(0, 255, 255),2,cv2.LINE_8)
            out.write(frame)
            cv2.imshow('frame',frame)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        else:
            break
cap.release()
out.release()
cv2.destroyAllWindows()
</code></pre>
<p>Migration occurs during the day and at night. We have installed underwater lights to facilitate counting after dark.
Here are links to video captured at dusk as well as after dark,</p>
<p>Dusk: <a href="https://youtu.be/EtMkmfISS48" rel="nofollow noreferrer">https://youtu.be/EtMkmfISS48</a></p>
<p>Dark: <a href="https://youtu.be/FHELMpw_mpQ" rel="nofollow noreferrer">https://youtu.be/FHELMpw_mpQ</a></p>
<p><strong>Question:</strong> Are there other aspects of OpenCV that could be utilized that would improve low light image capture?</p>
<p>The camera is a USB ELP 1080P Low Light (0.01 lux) 30fps H.264 IMX323 camera for industrial machine vision purchased from AliExpress.</p>
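<p>One preprocessing idea I've been experimenting with (a sketch, not a tuned solution): gamma correction via a lookup table, which lifts shadow detail before the frame is written. <code>cv2.LUT</code> can apply the same table; plain NumPy indexing shows the idea:</p>

```python
import numpy as np

def gamma_correct(frame, gamma=0.5):
    # map 0-255 through an inverse-gamma curve; gamma < 1 brightens shadows
    table = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
    return table[frame]

# a uniformly dark 8-bit frame gets visibly brighter
dark = np.full((4, 4), 16, dtype=np.uint8)
bright = gamma_correct(dark, gamma=0.5)
```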
|
<python><opencv><webcam><video-capture><light>
|
2024-08-12 16:55:11
| 0
| 2,174
|
portsample
|
78,862,691
| 3,241,486
|
`mlflow.transformers.log_model()` does not finish
|
<h3>Problem</h3>
<p>I want to use <code>mlflow.transformers.log_model()</code> to log a finetuned huggingface model.</p>
<p><strong>However, when the <code>mlflow.transformers.log_model</code> method is running, it simply does not finish - runs forever - throws no errors.</strong></p>
<p>I suspect my configuration is not right, or the model is too big?
The output says <code>Skipping saving pretrained model weights to disk</code>, so that should not be the problem.</p>
<p>Any ideas how to do this properly?</p>
<h3>Example</h3>
<p>This is more or less what my setup looks like; you cannot run this, it includes some pseudocode...</p>
<p>I am on python 3.11.9 with <code>transformers = "^4.41.2"</code> & <code>mlflow = "^2.15.1"</code>.</p>
<pre><code>import mlflow
import torch
from peft import LoraConfig
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from trl import SFTTrainer, setup_chat_format

train_dataset = ...
eval_dataset = ...

model_id = "LeoLM/leo-hessianai-7b-chat-bilingual"

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    quantization_config=bnb_config,  # bnb_config defined elsewhere (pseudocode)
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer_no_pad = AutoTokenizer.from_pretrained(model_id, add_bos_token=True)
model, tokenizer = setup_chat_format(model, tokenizer)

peft_config = LoraConfig(...)
args = TrainingArguments(...)

# Define Trainer
trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    peft_config=peft_config,
    tokenizer=tokenizer,
    packing=True,
)

# mlflow
mlflow.set_experiment("my_experiment")

with mlflow.start_run() as run:
    mlflow.transformers.autolog()
    trainer.train()

    components = {
        "model": trainer.model,
        "tokenizer": tokenizer_no_pad,
    }

    # !!! This function call does not finish... !!!
    mlflow.transformers.log_model(
        transformers_model=components,
        artifact_path="model",
    )
</code></pre>
<p>The last output I get in the console is:</p>
<pre><code>INFO mlflow.transformers: Overriding save_pretrained to False for PEFT models, following the Transformers behavior. The PEFT adaptor and config will be saved, but the base model weights will not and reference to the HuggingFace Hub repository will be logged instead.
Unrecognized keys in `rope_scaling` for 'rope_type'='linear': {'type'}
/mypath/llm4pa-open-source/.venv/lib/python3.11/site-packages/peft/utils/save_and_load.py:209: UserWarning: Setting `save_embedding_layers` to `True` as the embedding layer has been resized during finetuning.
warnings.warn(
2024/08/12 18:21:14 INFO mlflow.transformers: Skipping saving pretrained model weights to disk as the save_pretrained is set to False. The reference to HuggingFace Hub repository LeoLM/leo-hessianai-7b-chat-bilingual will be logged instead.
/mypath/llm4pa-open-source/.venv/lib/python3.11/site-packages/_distutils_hack/__init__.py:26: UserWarning: Setuptools is replacing distutils.
warnings.warn("Setuptools is replacing distutils.")
</code></pre>
|
<python><nlp><huggingface-transformers><mlflow><mlops>
|
2024-08-12 16:27:32
| 1
| 2,533
|
chamaoskurumi
|
78,862,684
| 6,778,374
|
Regular expression for letters within a Unicode range?
|
<p>I'm using the <code>re</code> module in Python. Let's say I want to find all Arabic letters in a string. Essentially I want to combine <code>\w</code> with <code>[\u0600-\u06FF]</code>.</p>
<p>Is there a way of doing this? Specify both a character range and a class, where both must match?</p>
<p>If not possible with python's native <code>re</code> module, is it possible with <code>regex</code> from pip?</p>
<p>This question is about combining two different criteria for a single character match (i.e. intersection), it's not really about Arabic letters. And besides, all the answers to questions regarding matching Arabic letters completely miss the many alternate forms that can appear in Arabic text.</p>
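<p>For context, the closest I've come with the stdlib is emulating intersection with a lookahead, which requires both conditions to hold at the same position (a sketch; note it deliberately skips characters such as the Arabic comma U+060C, which is in the block but not <code>\w</code>):</p>

```python
import re

# each matched character must satisfy \w AND lie in the Arabic block
pattern = re.compile(r"(?=\w)[\u0600-\u06FF]")

text = "hello \u0633\u0644\u0627\u0645\u060C world"  # "سلام،" with Arabic comma
letters = pattern.findall(text)
print(letters)
```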
|
<python><regex>
|
2024-08-12 16:26:25
| 2
| 675
|
NeatNit
|
78,862,640
| 20,898,396
|
Async wrapper - Pylance doesn't recognize that the function is now awaitable
|
<p>For convenience, I created a wrapper to run blocking code in a separate thread.</p>
<pre class="lang-py prettyprint-override"><code>from functools import wraps
import time
import asyncio

def async_wrapper(func):
    @wraps(func)
    async def wrapper(*args, **kwargs):
        print("wrapper")
        return await asyncio.to_thread(func, *args, **kwargs)
    return wrapper

@async_wrapper
def blocking_code():
    print("Start")
    time.sleep(5)
    print("end")

await blocking_code()  # type issue
</code></pre>
<pre><code>wrapper
Start
end
</code></pre>
<p>However, Pylance shows the following warning (in VSCode, python 3.10):</p>
<pre><code>"None" is not awaitable
"None" is incompatible with protocol "Awaitable[_T_co@Awaitable]"
"__await__" is not present PylancereportGeneralTypeIssues
(function) def blocking_code() -> None
</code></pre>
<p>Is there a way to fix the type hinting?</p>
<p>(This is to avoid having to create a public async function that simply calls a private blocking function each time.)</p>
<p>Edit:
<a href="https://i.sstatic.net/E1c1sIZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E1c1sIZP.png" alt="strict mode" /></a></p>
<pre><code>{
"python.analysis.typeCheckingMode": "strict"
}
</code></pre>
<p>After enabling strict mode, I see that the return type is <code>Unknown</code>. The error for <code>await blocking_code()</code> is the same.</p>
|
<python><python-asyncio><python-typing><pyright>
|
2024-08-12 16:15:06
| 1
| 927
|
BPDev
|
78,862,578
| 16,725,431
|
Pip breaks with AttributeError
|
<p>I am trying to install pillow, however this is what happened:</p>
<pre><code>> pip3 install pillow
Traceback (most recent call last):
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\USER\Project\Scripts\pip3.exe\__main__.py", line 7, in <module>
  File "C:\Users\USER\Project\lib\site-packages\pip\_internal\cli\main.py", line 78, in main
    command = create_command(cmd_name, isolated=("--isolated" in cmd_args))
  File "C:\Users\USER\Project\lib\site-packages\pip\_internal\commands\__init__.py", line 114, in create_command
    module = importlib.import_module(module_path)
  File "C:\Users\DELL\AppData\Local\Programs\Python\Python39\lib\importlib\__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "C:\Users\USER\Project\lib\site-packages\pip\_internal\commands\install.py", line 16, in <module>
    from pip._internal.cli.req_command import (
  File "C:\Users\USER\Project\lib\site-packages\pip\_internal\cli\req_command.py", line 19, in <module>
    from pip._internal.index.package_finder import PackageFinder
  File "C:\Users\USER\Project\lib\site-packages\pip\_internal\index\package_finder.py", line 31, in <module>
    from pip._internal.req import InstallRequirement
  File "C:\Users\USER\Project\lib\site-packages\pip\_internal\req\__init__.py", line 9, in <module>
    from .req_install import InstallRequirement
  File "C:\Users\USER\Project\lib\site-packages\pip\_internal\req\req_install.py", line 40, in <module>
    from pip._internal.operations.install.wheel import install_wheel
  File "C:\Users\USER\Project\lib\site-packages\pip\_internal\operations\install\wheel.py", line 40, in <module>
    from pip._vendor.distlib.scripts import ScriptMaker
  File "C:\Users\USER\Project\lib\site-packages\pip\_vendor\distlib\scripts.py", line 64, in <module>
    WRAPPERS = {
  File "C:\Users\USER\Project\lib\site-packages\pip\_vendor\distlib\scripts.py", line 64, in <dictcomp>
    WRAPPERS = {
  File "C:\Users\USER\Project\lib\site-packages\pip\_vendor\distlib\resources.py", line 196, in iterator
    for name in resource.resources:
  File "C:\Users\USER\Project\lib\site-packages\pip\_vendor\distlib\util.py", line 465, in __get__
    value = self.func(obj)
  File "C:\Users\USER\Project\lib\site-packages\pip\_vendor\distlib\resources.py", line 115, in resources
    return self.finder.get_resources(self)
  File "C:\Users\USER\Project\lib\site-packages\pip\_vendor\distlib\resources.py", line 180, in get_resources
    return set([f for f in os.listdir(resource.src_path) if allowed(f)])
AttributeError: 'ResourceContainer' object has no attribute 'src_path'
</code></pre>
<p>All operations with <code>pip</code> and <code>pip3</code>, including upgrading pip, result in the exact same error above.</p>
<p>I am working in a virtual env, the global <code>pip</code> works just fine but the venv pip fails.</p>
<p>The last thing I did with <code>pip</code> was installing <code>customtkinter</code></p>
<p>More details:</p>
<pre><code>> py -m pip list
Package Version
------------- ---------
customtkinter 5.2.2
darkdetect 0.8.0
easygui 0.98.3
mido 1.3.2
numpy 2.0.1
opencv-python 4.10.0.84
packaging 23.2
pip 24.2
setuptools 60.2.0
wheel 0.37.1
</code></pre>
<p>Python version:</p>
<pre><code>> py --version
Python 3.9.6
</code></pre>
<p>Pip version:</p>
<pre><code>py -m pip --version
pip 24.2 from D:\Users\USER\Project\lib\site-packages\pip (python 3.9)
</code></pre>
|
<python><python-3.x><pip>
|
2024-08-12 15:56:28
| 0
| 444
|
Electron X
|
78,862,530
| 7,124,155
|
Why do I get error when importing Python module in Databricks?
|
<p>I'm trying to import a module between plain Python files in Databricks (not a notebook, and not Spark). Python version 3.10.12.</p>
<p>In the Databricks workspace (Git repo, not user workspace), I define this in testmodule.py:</p>
<pre><code>import math

def sum_numbers(a, b):
    return a + b

class Circle:
    def __init__(self, radius):
        self.radius = radius
        self.color = 'black'

    def calculate_area(self):
        return round(math.pi * self.radius ** 2, 2)
</code></pre>
<p>In another Python file, I run</p>
<pre><code>import testmodule as tm
tm.sum_numbers(4,20)
</code></pre>
<p>which correctly gives the output of 24. So why can't I do this?</p>
<pre><code>from testmodule import Circle
ImportError: cannot import name 'Circle' from 'testmodule' (/Workspace/Repos/myuserID@mycompany.com/myGitRepo/testmodule.py)
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
File <command-123>, line 12
11 tm.sum_numbers(4,20)
---> 12 from testmodule import Circle
</code></pre>
<p>FYI, I'm aware how to import a notebook like this, but I need to be able to run the code in other environments that may not be Databricks or Spark.</p>
<pre><code>%run ./testmodule
</code></pre>
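<p>One thing I've seen suggested (a sketch): the workspace may be holding a stale cached copy of <code>testmodule</code> imported before <code>Circle</code> was added, which <code>importlib.reload</code> would refresh. A self-contained demonstration of that caching behavior (the temp-module name here is made up):</p>

```python
import importlib
import pathlib
import sys
import tempfile

# simulate a stale module: v1 has only sum_numbers, v2 adds Circle
tmp = tempfile.mkdtemp()
mod_path = pathlib.Path(tmp) / "testmodule_demo.py"
mod_path.write_text("def sum_numbers(a, b):\n    return a + b\n")
sys.path.insert(0, tmp)

import testmodule_demo as tm
assert not hasattr(tm, "Circle")  # the cached v1 has no Circle yet

mod_path.write_text(
    "import math\n"
    "def sum_numbers(a, b):\n"
    "    return a + b\n"
    "class Circle:\n"
    "    def __init__(self, radius):\n"
    "        self.radius = radius\n"
    "    def calculate_area(self):\n"
    "        return round(math.pi * self.radius ** 2, 2)\n"
)
importlib.reload(tm)  # re-executes the updated file
from testmodule_demo import Circle
```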
|
<python><pyspark><module><databricks>
|
2024-08-12 15:44:55
| 0
| 1,329
|
Chuck
|
78,862,512
| 4,381,589
|
use xcom shared variable as key in BigQueryInsertJobOperator configuration
|
<p>I have a case where I want to pass a string to a BigQueryInsertJobOperator and use it as a key to select a given configuration. I am trying to do it like this:</p>
<p>Inside dag definition I am using PythonOperator to push a variable to xcom:</p>
<pre><code>def capture_mode(**context):
    trigger = context["dag_run"].conf["message"].strip()
    mode = "case_a" if (trigger == "trigger_casea") else "case_b"
    task_instance = context["task_instance"]
    task_instance.xcom_push(key="mode", value=mode)
    return mode

capture_mode = PythonOperator(
    task_id="capture_mode",
    provide_context=True,
    python_callable=capture_mode,
    dag=dag,
)
</code></pre>
<p>Afterwards I try to pull the parameter <code>mode</code> and use it as a key into a JSON configuration:</p>
<pre><code>copy_data = BigQueryInsertJobOperator(
    task_id="copy-table",
    configuration={
        "copy": {
            "sourceTable": {
                "projectId": copy_data_config["sourceTable"]["projectId"],
                "datasetId": copy_data_config["sourceTable"]["datasetId"],
                "tableId": copy_data_config["sourceTable"]["tableId"][
                    "{{task_instance.xcom_pull(task_ids='capture_mode', key='mode')}}"
                ],
            },
            "destinationTable": {
                "projectId": copy_data_config["destinationTable"]["projectId"],
                "datasetId": copy_data_config["destinationTable"]["datasetId"],
                "tableId": copy_data_config["destinationTable"]["tableId"][
                    "{{task_instance.xcom_pull(task_ids='capture_mode', key='mode')}}"
                ],
            },
            "createDisposition": "CREATE_IF_NEEDED",
            "writeDisposition": "WRITE_TRUNCATE",
            "useLegacySql": False,
            "priority": "BATCH",
        }
    },
    impersonation_chain=config["serviceAccountName"],
    location=config["datasetsLocation"],
)
</code></pre>
<p>the json configuration looks like this:</p>
<pre><code>"copy_data": {
    "sourceTable": {
        "projectId": "project_id",
        "datasetId": "dataset_id",
        "tableId": {
            "case_a": "table_a",
            "case_b": "table_b"
        }
    }
}
</code></pre>
<p>Here I am trying to pull the right value in order to select the <code>case_a</code> or <code>case_b</code> configuration:</p>
<pre><code> {{task_instance.xcom_pull(task_ids='capture_mode', key='mode')}}
</code></pre>
<p>But I am getting the following <code>KeyError</code> on the lines that reference the template above:</p>
<pre><code> Traceback (most recent call last):
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/airflow/gcs/dags/my_dag.py", line 139, in <module>
"tableId": copy_data_config["sourceTable"]["tableId"][
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
KeyError: "{{task_instance.xcom_pull(task_ids='capture_mode', key='mode')}}"
</code></pre>
<p>What am I doing wrong?</p>
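<p>For context on why the lookup fails (my reading, not stated in the post): the <code>configuration</code> dict is built in plain Python when the DAG file is parsed, before Airflow renders any Jinja templates, so the template is still a literal string at that point. A minimal sketch reproducing this outside Airflow, with a stand-in <code>copy_data_config</code> whose structure is assumed from the post:</p>

```python
# Stand-in for the post's copy_data_config (structure assumed from the post)
copy_data_config = {
    "sourceTable": {
        "tableId": {"case_a": "table_a", "case_b": "table_b"},
    }
}

# At DAG parse time the Jinja expression has not been rendered yet,
# so it is used literally as a dict key and the lookup fails:
key = "{{task_instance.xcom_pull(task_ids='capture_mode', key='mode')}}"
try:
    table = copy_data_config["sourceTable"]["tableId"][key]
except KeyError as exc:
    print("KeyError:", exc)
```

<p>A common workaround is to defer the lookup to runtime, e.g. by building the whole <code>configuration</code> inside a Python-callable task, since dict keys are not template fields.</p>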
|
<python><google-bigquery><airflow><airflow-xcom>
|
2024-08-12 15:37:56
| 1
| 427
|
saadoune
|
78,862,511
| 810,815
|
Sumy Package Failing to Tokenize in Python
|
<p>I am using the following code to summarize my text in Python. The code is run in a Jupyter Notebook, and I have already installed sumy using pip:</p>
<pre><code>pip install sumy nltk
python -m nltk.downloader punkt
</code></pre>
<pre><code>from sumy.parsers.plaintext import PlaintextParser
from sumy.nlp.tokenizers import Tokenizer
from sumy.summarizers.lsa import LsaSummarizer

# Define the text to be summarized
text = """
Natural Language Processing (NLP) is a field of artificial intelligence that focuses on the interaction between computers and humans through natural language. The ultimate objective of NLP is to enable computers to understand, interpret, and respond to human language in a way that is both valuable and meaningful.
NLP is used to apply algorithms to identify and extract the natural language rules such that the unstructured language data is converted into a form that computers can understand. When the text has been provided, a computer can take many different approaches to process it. The algorithm can be a rule-based or a machine learning-based approach.
"""

# Parse the text (from_string takes the text directly;
# from_file expects a file path, not a StringIO object)
parser = PlaintextParser.from_string(text, Tokenizer("english"))

# Initialize the LSA summarizer
summarizer = LsaSummarizer()

# Generate the summary (you can adjust the number of sentences)
summary = summarizer(parser.document, sentences_count=2)

# Print the summary
for sentence in summary:
    print(sentence)
</code></pre>
<p>When I run the program I get the following error:</p>
<pre><code>ame)
662 def find_class(self, module, name):
663 # Forbid every function
--> 664 raise pickle.UnpicklingError(f"global '{module}.{name}' is forbidden")
UnpicklingError: global 'copy_reg._reconstructor' is forbidden
</code></pre>
<p>Any ideas?</p>
|
<python><machine-learning>
|
2024-08-12 15:37:15
| 2
| 9,764
|
john doe
|
78,862,484
| 6,145,729
|
Remove Leading/Trailing spaces from header row in Excel using Python
|
<p>I'm using pandas to read an XLSB file into a DataFrame before using <code>to_sql</code> to push it to my SQLite database. The engine is pyxlsb and is not the issue.</p>
<p>My issue is that my XLSB column headers (row 1) have a mixture of spaces before and after the column names, so the database also captures those errors in the create statement:</p>
<pre><code>CREATE TABLE "EXAMPLE" (
"NAME " TEXT,
" AGE" TEXT,
"POST CODE " TEXT
)
</code></pre>
<p>So, do I need to fix the issue before I <code>pd.read_excel</code>?
Can I <code>to_sql</code> without using the data frame headers?
Can I <code>to_sql</code> where the data frame headers don't match the table in the DB, by doing a drop &amp; insert?</p>
<p>Any guidance would be much appreciated; sadly, fixing the XLSB at source is not an option.</p>
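<p>A minimal sketch of one option (my suggestion, not from the post): strip the whitespace from the DataFrame's columns after <code>pd.read_excel</code> and before <code>to_sql</code>, so neither the XLSB nor the database schema needs fixing by hand. The literal frame below stands in for the file read:</p>

```python
import pandas as pd

# Stand-in for pd.read_excel("file.xlsb", engine="pyxlsb"),
# with the stray spaces shown in the post's CREATE TABLE statement
df = pd.DataFrame({"NAME ": ["Lee"], " AGE": ["30"], "POST CODE ": ["AB1 2CD"]})

# Remove leading/trailing spaces from the header row
df.columns = df.columns.str.strip()

print(list(df.columns))  # ['NAME', 'AGE', 'POST CODE']
```

<p>After this, <code>df.to_sql(...)</code> would create the table with clean column names.</p>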
|
<python><pandas><dataframe><sqlite>
|
2024-08-12 15:31:34
| 1
| 575
|
Lee Murray
|
78,862,471
| 160,245
|
Extend TypedDict to save/retrieve as JSON to/from file
|
<p>I want to have a simple data structure, save it to disk as JSON, and later load the JSON back into the data structure.</p>
<p>This is the error I get:</p>
<blockquote>
<p>AttributeError: 'dict' object has no attribute 'saveToFile'</p>
</blockquote>
<p>with the code below:</p>
<pre><code>import json
from typing_extensions import TypedDict

class AutoBlog(TypedDict):
    blog_title: str
    blog_prompt: str
    min_words: int
    max_words: int

    def saveToFile(self, filename):
        str_json = json.dumps(self)
        with open(filename, 'w', encoding='unicode-escape') as f:
            f.write(str_json)

    def readFromFile(self, filename):
        with open(filename) as f:
            str_json = f.read()
        self = json.loads(str_json)

if __name__ == "__main__":
    filename = 'c:/Users/All Users/WebsiteGarden/test_serialize.json'
    print("Filename=" + filename)

    autoBlogData1 = AutoBlog()
    autoBlogData1['blog_title'] = "my blog title"
    autoBlogData1['blog_prompt'] = "write an article about xyz"
    autoBlogData1['min_words'] = 800
    autoBlogData1['max_words'] = 1200
    autoBlogData1.saveToFile(filename)

    autoBlogData2 = AutoBlog()
    autoBlogData2.readFromFile(filename)
    print("autoBlogData2['blog_title']=" + autoBlogData2['blog_title'])
</code></pre>
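<p>For context, <code>TypedDict</code> exists only for static type checking: <code>AutoBlog()</code> returns a plain <code>dict</code> at runtime, so methods defined in the class body are never attached to instances, hence the <code>AttributeError</code>. A sketch of one alternative using a standard-library dataclass (the method names here are my own, not from the post):</p>

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AutoBlog:
    blog_title: str
    blog_prompt: str
    min_words: int
    max_words: int

    def save_to_file(self, filename):
        # asdict() turns the instance into a plain dict for json.dump
        with open(filename, "w", encoding="utf-8") as f:
            json.dump(asdict(self), f)

    @classmethod
    def read_from_file(cls, filename):
        # Rebuild an instance from the saved JSON object
        with open(filename, encoding="utf-8") as f:
            return cls(**json.load(f))
```

<p>Usage: <code>AutoBlog("my blog title", "write an article about xyz", 800, 1200).save_to_file(path)</code> followed by <code>AutoBlog.read_from_file(path)</code> round-trips the data.</p>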
|
<python><typeddict>
|
2024-08-12 15:27:46
| 2
| 18,467
|
NealWalters
|