| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,547,268
| 12,358,733
|
Proper way to do simple single row update in SqlAlchemy 2.x?
|
<p>I'm in the process of migrating from SQLAlchemy 1.4 to 2.0. I've got inserts working via this function:</p>
<pre><code>async def db_insert(engine, table_name: str, values={}):
    from sqlalchemy import Table, MetaData
    try:
        async with engine.begin() as conn:
            table = await conn.run_sync(lambda conn: Table(table_name, MetaData(), autoload_with=conn))
            statement = table.insert().values(values)
            result = await conn.execute(statement)
            return result
    except Exception as e:
        raise e
</code></pre>
<p>Example code calling the function:</p>
<pre><code>async def main():
    from sqlalchemy.ext.asyncio import create_async_engine
    try:
        engine = create_async_engine()
        values = {'col_a': "value1", 'col_b': "value2", 'col_c': 123}
        result = await db_insert(engine, "mytable", values)
        await engine.dispose()
    except Exception as e:
        quit(e)
</code></pre>
<p>Now the question: how do I update one of those existing rows, for example change 'col_c' to 456? I'd assume <a href="https://docs.sqlalchemy.org/en/20/tutorial/data_update.html" rel="nofollow noreferrer">update()</a>, but I'm having some trouble understanding what the syntax should be and how to specify which row to update without hard-coding column names.</p>
<p>The database is MySQL 5, the framework is Starlette 0.25.0, and Python is 3.11.2.</p>
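For reference, a minimal sketch (not the definitive answer) of what an update helper in the same reflected-table style as <code>db_insert</code> above might look like; <code>match_values</code> and <code>new_values</code> are hypothetical parameter names:

```python
# Sketch: a generic single-row update mirroring db_insert above.
# `match_values` (hypothetical) identifies the row via reflected columns,
# so no column names are hard-coded in the helper itself.
async def db_update(engine, table_name: str, match_values: dict, new_values: dict):
    from sqlalchemy import Table, MetaData, and_

    async with engine.begin() as conn:
        table = await conn.run_sync(
            lambda sync_conn: Table(table_name, MetaData(), autoload_with=sync_conn)
        )
        statement = (
            table.update()
            .where(and_(*(table.c[name] == value
                          for name, value in match_values.items())))
            .values(new_values)
        )
        return await conn.execute(statement)  # result.rowcount = rows matched
```

Calling it as `await db_update(engine, "mytable", {"col_a": "value1"}, {"col_c": 456})` would then target the row without naming columns in the helper.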
|
<python><mysql><sqlalchemy><python-asyncio>
|
2023-06-24 17:30:37
| 0
| 931
|
John Heyer
|
76,547,234
| 13,132,728
|
How to index JSON by value name in python
|
<p>Let's say I pull JSON daily and after some initial indexing, what I have looks like this:</p>
<pre><code>[{'CategoryId': 493, 'name': 'foo'},
{'CategoryId': 735, 'name': 'bar'},
{'CategoryId': 988, 'name': 'baz'},
{'CategoryId': 1024, 'name': 'foo_foo'},
{'CategoryId': 729, 'name': 'foo_bar'},
{'CategoryId': 743, 'name': 'foo_baz'},
{...}
]
</code></pre>
<p>And I want to access the dict where <code>name</code> is <code>foo_baz</code>.</p>
<p>Here is what I do now (ignore the <code>[...]</code> as they are just initial indexing I do to get to this point and are beside the point of the question):</p>
<pre><code>(json.loads(response.text)[...][...][5])
</code></pre>
<p>Now this works... most of the time. Occasionally, there will be additional <code>CategoryId</code> that screws up my indexing, like so:</p>
<pre><code>[{'CategoryId': 493, 'name': 'foo'},
{'CategoryId': 735, 'name': 'bar'},
{'CategoryId': 988, 'name': 'baz'},
{'CategoryId': 1024, 'name': 'foo_foo'},
{'CategoryId': 729, 'name': 'foo_bar'},
{'CategoryId': 522, 'name': 'new_cat'},
{'CategoryId': 743, 'name': 'foo_baz'},
{...}
]
</code></pre>
<p>This now ruins my indexing thanks to <code>new_cat</code>. What I have been doing is just implementing a try except, but this becomes un-pythonic very quickly:</p>
<pre><code>try:
    lists = (json.loads(response.text)[...][...][5])
except KeyError:
    lists = (json.loads(response.text)[...][...][6])
...
</code></pre>
<p>So, my question is: is there a way I can just index this JSON based on the value of <code>name</code>? Something like:</p>
<pre><code>(json.loads(response.text)[...][...][function where name is equal to 'foo_baz'])
</code></pre>
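For what it's worth, a small sketch of a value-based lookup that sidesteps positional indexing entirely (the function and variable names are illustrative):

```python
# Illustrative sketch: return the first dict whose 'name' matches, or None.
def find_by_name(categories, name):
    return next((item for item in categories if item.get('name') == name), None)

categories = [{'CategoryId': 493, 'name': 'foo'},
              {'CategoryId': 522, 'name': 'new_cat'},
              {'CategoryId': 743, 'name': 'foo_baz'}]
print(find_by_name(categories, 'foo_baz'))  # {'CategoryId': 743, 'name': 'foo_baz'}
```

Because the scan is by value, inserting an extra category like <code>new_cat</code> no longer shifts the result.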
|
<python><json>
|
2023-06-24 17:23:49
| 1
| 1,645
|
bismo
|
76,547,058
| 11,423,076
|
How to send click inputs to a background program without putting it into focus or using the cursor?
|
<p>I am trying to automate a tedious task in an idle clicker game (it is not competitive and scripting is generally considered fine by the community). I initially tried using AutoHotkey's <code>ControlClick</code>, but this does not work with the game. I tried creating a Python script using various methods including <code>win32api.mouse_event</code>, <code>win32gui.PostMessage</code>, <code>win32gui.SendMessage</code>, <code>pywinauto</code>, <code>pydirectinput</code>, and maybe others I am forgetting.</p>
<p>I am able to send key presses using <code>pywinauto.Application</code> module (code below), but cannot figure out how to send mouse clicks.</p>
<pre><code>if(__name__ == "__main__"):
    app = pywinauto.Application(backend='win32').connect(title_re="NGU Idle").window(title_re="NGU Idle")
    while(True):
        if keyboard.is_pressed("pause"):  # just to exit the loop
            break
        app.send_keystrokes("w")
        time.sleep(0.02)
</code></pre>
<p>I think the issue that I am facing is that for most programs of this sort, there is a parent controller and a child controller to which background clicks can be sent; however, Unity manages things differently. Using <code>Window Spy</code>, the window has the class <code>UnityWndClass</code>. Looking around at other threads on Stack Overflow and elsewhere (e.g. <a href="https://stackoverflow.com/questions/59285854/is-there-a-way-to-send-a-click-event-to-a-window-in-the-background-in-python">Is there a way to send a click event to a window in the background in python?</a>), the way this seems to be done for other programs is using <code>hWnd</code> as below (the answer in the linked thread), which doesn't work since <code>hWnd1</code> is <code>0</code>:</p>
<pre><code>def click(x, y):
    hWnd = win32gui.FindWindow(None, "BlueStacks")
    lParam = win32api.MAKELONG(x, y)
    hWnd1 = win32gui.FindWindowEx(hWnd, None, None, None)
    win32gui.SendMessage(hWnd1, win32con.WM_LBUTTONDOWN, win32con.MK_LBUTTON, lParam)
    win32gui.SendMessage(hWnd1, win32con.WM_LBUTTONUP, None, lParam)

click(100, 100)
</code></pre>
<p>So, is this possible? Maybe not in Python? Any leads are welcome!</p>
|
<python><python-3.x><unity-game-engine><automation>
|
2023-06-24 16:42:04
| 0
| 449
|
anInputName
|
76,546,879
| 2,215,904
|
Unpack tuple to mixed array, variables
|
<p>I have a string of bytes that comes from an embedded system, and I use Python's <code>struct.unpack()</code> to give me a tuple of values.</p>
<p>As there are a lot of values, I wish to make the code more readable.
For now I have it done like this:</p>
<pre><code>ArrA = [None] * 3
ArrB = [None] * 2
ArrA[0],ArrA[1],ArrA[2], n, ArrB[0],ArrB[1], crc = (1, 2, 3, 4, 5, 6, 7)
# tuple came from unpack()
</code></pre>
<p>Result:</p>
<pre><code>ArrA = [1, 2, 3]
n = 4
ArrB = [5, 6]
crc = 7
</code></pre>
<p>Is there some way to shorten the array targets, e.g. <code>ArrA[0],ArrA[1],ArrA[2]</code> to something like <code>ArrA[]*3</code>?</p>
<p>In the example the arrays are short; in reality they are pretty long.</p>
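One possible sketch: slice the unpacked tuple by known field widths instead of naming every element (the widths below are illustrative):

```python
# Sketch: slice the unpacked tuple by field widths rather than listing
# ArrA[0], ArrA[1], ... individually.
data = (1, 2, 3, 4, 5, 6, 7)   # stand-in for the tuple from struct.unpack()
ArrA = list(data[0:3])
n = data[3]
ArrB = list(data[4:6])
crc = data[6]
```

For long records, the slice bounds can themselves be computed from a list of field widths, which keeps one place to edit when the format changes.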
|
<python><struct><unpack>
|
2023-06-24 15:56:33
| 3
| 460
|
eSlavko
|
76,546,838
| 5,942,779
|
Why is streamlit messing up my Plotly chart?
|
<p>Importing the streamlit library into my Python code has turned my Plotly chart black and white. The same behavior is also described in this <a href="https://github.com/streamlit/streamlit/issues/6897" rel="nofollow noreferrer">GitHub issue</a>.</p>
<p>My problem can be replicated with the following steps:</p>
<p><strong>First of all, I have the following system configuration:</strong></p>
<pre><code>Streamlit version: 1.23.1
Plotly version: 5.15
Python version: 3.11
Operating System: Windows 11 Pro (64-bit)
Browser: Microsoft Edge Version 114.0.1823.58 (64-bit)
Virtual environment: miniconda 23.3.1
</code></pre>
<p><strong>Step 1. Setup a new environment at the command prompt</strong></p>
<pre><code>conda create -n test python==3.11
conda activate test
pip install numpy pandas plotly==5.15 streamlit==1.23.1
</code></pre>
<p><strong>Step 2. Launch Python at the command prompt</strong></p>
<pre><code>python
</code></pre>
<p><strong>Step 3. Create a Plotly chart without importing the streamlit library by running this code</strong></p>
<pre><code>import plotly.graph_objects as go
import numpy as np
t = np.linspace(0, 10, 100)
y = np.sin(t)
fig = go.Figure(go.Scatter(x=t, y=y, mode='markers'))
fig.show()
</code></pre>
<p><strong>Step 4: My Plotly plot looks like this (expected behavior)</strong>
<a href="https://i.sstatic.net/iBkto.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iBkto.png" alt="enter image description here" /></a></p>
<p><strong>Step 5: Import the streamlit library and re-run the same code.</strong></p>
<pre><code>import streamlit as st
import plotly.graph_objects as go
import numpy as np
t = np.linspace(0, 10, 100)
y = np.sin(t)
fig = go.Figure(go.Scatter(x=t, y=y, mode='markers'))
fig.show()
</code></pre>
<p><strong>Step 6: My plotly chart becomes black and white like this (unexpected behavior)</strong>
<a href="https://i.sstatic.net/161lT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/161lT.png" alt="enter image description here" /></a></p>
<p><strong>Step 7: It also affects pandas plots with the Plotly backend</strong></p>
<pre><code>import streamlit as st
import pandas as pd
import numpy as np
pd.options.plotting.backend='plotly'
df = pd.DataFrame(np.random.random((10,10)))
fig = df.plot()
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/wkVKy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wkVKy.png" alt="enter image description here" /></a></p>
<p><strong>Does anyone know how to fix this issue?</strong></p>
|
<python><pandas><plotly><streamlit>
|
2023-06-24 15:46:17
| 1
| 689
|
Scoodood
|
76,546,774
| 291,033
|
How to add internal whitespace to YAML sequence with Python ruamel
|
<p>I'm using ruamel.yaml to format YAML documents in Python:</p>
<pre><code>#!/usr/bin/env python
import sys
from ruamel.yaml import *
yaml = YAML()
numbers = CommentedSeq()
numbers.fa.set_flow_style()
numbers.extend([1, 2, 3])
root = CommentedMap()
root['numbers'] = numbers
yaml.dump(root, sys.stdout)
</code></pre>
<p>I followed the advice in <a href="https://stackoverflow.com/a/56543954/291033">this accepted answer</a> to format the sequence inline. This outputs the following:</p>
<pre><code>numbers: [1, 2, 3]
</code></pre>
<p>How can I add internal whitespace, and output this instead:</p>
<pre><code>numbers: [ 1, 2, 3 ]
</code></pre>
|
<python><yaml><ruamel.yaml>
|
2023-06-24 15:31:53
| 1
| 2,256
|
Huw Walters
|
76,546,572
| 16,360,640
|
Removing Title Bar in PyQt5 from a New Window
|
<p>I have created an application which contains a button. On the click of the button, <strong>another window opens on top of the first window. I want to remove the whole title bar (menu, minimize, maximize, and exit)</strong> and only keep the title. What I am actually expecting is for <strong>this window to open up like a pop-up window</strong> in which I can enter credentials and the like. There should be a button to go back, which will close this pop-up window.</p>
<p>I have tried using <code>Qt.FramelessWindowHint</code>, but it does not generate the desired results. I have tried a few other things like <code>WA_TranslucentBackground</code>, etc., but those did not work either.</p>
<p>The code I tried is this:</p>
<pre><code>self.setWindowFlags(Qt.FramelessWindowHint)
self.setAttribute(Qt.WA_TranslucentBackground)
</code></pre>
<p>This causes some significant issues; for example, the new window's content is overlaid on the first window.</p>
<p><strong>Code:</strong></p>
<pre><code>class LoginWindow(QMainWindow):
    def __init__(self, parent=None):
        super().__init__(parent)
        # make window size auto adjust to the size of the widgets but with a minimum size
        self.setSizePolicy(QSizePolicy.Minimum, QSizePolicy.Minimum)
        self.setMinimumSize(500, 500)
        # remove minimize and maximize buttons
        self.setWindowFlags(Qt.FramelessWindowHint)
        self.setAttribute(Qt.WA_TranslucentBackground)
        self.setStyleSheet("background-color: #F8F8F8;")
        self.setup_ui()
        self.add_navigation_arrow()

    def setup_ui(self):
        central_widget = QWidget(self)
        self.setCentralWidget(central_widget)
        main_layout = QVBoxLayout(central_widget)
        main_layout.setAlignment(Qt.AlignCenter)
        main_layout.setSpacing(20)
        username_label = QLabel("Username", self)
        username_label.setFont(QFont("Arial", 12))
        username_input = QLineEdit(self)
        username_input.setPlaceholderText("Enter your username")
        # fix the width of the input field
        username_input.setFixedWidth(250)
        organization_label = QLabel("Organization", self)
        organization_label.setFont(QFont("Arial", 12))
        organization_input = QLineEdit(self)
        organization_input.setPlaceholderText("Enter your organization name")
        organization_input.setFixedWidth(250)
        next_button = QPushButton("Next", self)
        next_button.setStyleSheet(
            "QPushButton {background-color: #000000; color: white; border-radius: 10px; padding: 10px;}"
            "QPushButton:hover {background-color: #333333;}"
            "QPushButton:pressed {background-color: #555555;}"
        )
        main_layout.addSpacing(20)
        main_layout.addWidget(username_label)
        main_layout.addWidget(username_input)
        main_layout.addWidget(organization_label)
        main_layout.addWidget(organization_input)
        main_layout.addSpacing(20)
        main_layout.addWidget(next_button, alignment=Qt.AlignHCenter)

    def add_navigation_arrow(self):
        back_arrow = QPushButton("←", self)
        back_arrow.setStyleSheet(
            "QPushButton {background-color: transparent; color: black; font-size: 20px; font-weight: bold;}"
            "QPushButton:hover {color: #555555;}"
        )
        back_arrow.setFixedSize(40, 40)
        back_arrow.clicked.connect(self.go_back)
        back_arrow.move(10, 10)  # Adjust the position of the arrow button

    def go_back(self):
        self.close()
        self.parent().setEnabled(True)
        self.parent().show()


class StartupWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Startup Window")
        self.setFixedSize(500, 300)
        self.setStyleSheet("background-color: #F8F8F8;")
        self.setup_ui()

    def setup_ui(self):
        central_widget = QWidget(self)
        self.setCentralWidget(central_widget)
        main_layout = QVBoxLayout(central_widget)
        main_layout.setAlignment(Qt.AlignCenter)
        main_layout.setSpacing(20)
        main_layout.addStretch(1)
        button_layout = QHBoxLayout()
        button_layout.setAlignment(Qt.AlignCenter)
        login_button = QPushButton("Login", self)
        login_button.setStyleSheet(
            "QPushButton {background-color: #44A147; color: white; border-radius: 10px; "
            "padding-left: 20px; padding-right: 20px; padding-top: 10px; padding-bottom: 10px;}"
            "QPushButton:hover {background-color: #3B8548;}"
        )
        signup_button = QPushButton("Sign Up", self)
        signup_button.setStyleSheet(
            "QPushButton {background-color: #007BFF; color: white; border-radius: 10px; padding-left: 20px; "
            "padding-right: 20px; padding-top: 10px; padding-bottom: 10px;} "
            "QPushButton:hover {background-color: #0069D9;}"
        )
        exit_button = QPushButton("Close/Exit", self)
        exit_button.setStyleSheet(
            "QPushButton {background-color: #DC3545; color: white; border-radius: 10px; padding: 10px;}"
            "QPushButton:hover {background-color: #C82333;}"
        )
        login_button.clicked.connect(self.open_login_window)
        button_layout.addWidget(login_button)
        button_layout.addSpacing(20)
        button_layout.addWidget(signup_button)
        main_layout.addLayout(button_layout)
        main_layout.addStretch(1)
        main_layout.addWidget(exit_button, alignment=Qt.AlignCenter)

    def open_login_window(self):
        self.setEnabled(False)
        self.login_window = LoginWindow(self)
        self.login_window.show()
</code></pre>
|
<python><pyqt5>
|
2023-06-24 14:37:54
| 0
| 341
|
Shisui Otsutsuki
|
76,546,455
| 11,021,252
|
How come pandas has this OutOfBoundsDatetime: Out of bounds nanosecond timestamp error?
|
<p>I have seen several solutions for this 'OutOfBoundsDatetime: Out of bounds nanosecond timestamp' error for some extreme dates in pandas, but none of them really offers a solution. How come a widespread data analytics library like pandas shipped with this limitation?</p>
<p>Other than substituting NaT values, is there any proper workaround for this limitation that keeps these dates in datetime format?</p>
|
<python><pandas><datetime>
|
2023-06-24 14:12:22
| 0
| 507
|
VGB
|
76,546,454
| 11,613,489
|
Using Selenium, I want to click a URL defined in a class
|
<p>I need another hint today.
I'm trying to build Python/Selenium code; the idea is to click on <em>www.thewebsiteIwantoclickon</em>. Below is a sample of the HTML I am working on.</p>
<p>The class <em>entity-result__title-text</em> is repeated several times in the HTML. For each <em>entity-result__title-text</em> class I want to click the <em>href</em> element, open the site <em>www.thewebsiteIwantoclickon</em>, perform some actions (I have these in a separate piece of code), go back to the previous HTML, and repeat the same process until the last <em>entity-result__title-text</em> class.</p>
<pre><code> <span class="entity-result__title-text
t-16">
<a class="app-aware-link " href="https://www.thewebsiteIwantoclickon" data-
test-app-aware-link="">
<span dir="ltr"><span aria-hidden="true"><!---->Mi Name<!----></span><span class="visually-hidden"><!---->See something<!----></span></span>
</a>
<span class="entity-result__badge t-14 t-normal t-black--light">
<div class="display-flex
flex-row-reverse
align-items-baseline">
<!----> <span class="image-text-lockup__text entity-result__badge-text">
<span aria-hidden="true"><!---->• 2º<!----></span><span class="visually-hidden"><!---->example<!----></span>
</span>
</div>
</span>
</span>
</code></pre>
<p>I have written the following code, but no action is performed.</p>
<pre><code>links = driver.find_elements(By.XPATH, "//span[@class='entity-result__title-text']/a[@class='app-aware-link']")
for link in links:
    href = link.get_attribute("href")
    link.click()
    # My Action done and I'm ready to close the website
    driver.back()
</code></pre>
<p>But nothing happens.</p>
|
<python><html><selenium-webdriver><webdriver>
|
2023-06-24 14:11:48
| 1
| 642
|
Lorenzo Castagno
|
76,546,450
| 307,428
|
Pandas not seeing data as datetime
|
<p>Pandas is not picking up the datetime (i.e., I need to keep minutes and seconds), nor can I convert it.</p>
<pre><code> import pandas as pd
rides = pd.read_parquet('../data/raw/rides_2022-01.parquet')
rides.head(20)
rides = rides[['pickup_datetime', 'PULocationID']]
rides.rename(columns={
'tpep_pickup_datetime': 'pickup_datetime',
'PULocationID': 'pickup_location_id',
}, inplace=True)
rides.to_datetime(rides['pickup_datetime']) <-- errors here AttributeError: 'DataFrame' object has no attribute 'to_datetime'
rides['pickup_datetime'].describe(include='all')
count 2463879
mean 2022-01-17 01:58:40.393673472
min 2022-01-01 00:00:08
25% 2022-01-09 15:37:56
50% 2022-01-17 12:11:54
75% 2022-01-24 13:49:37
max 2022-01-31 23:59:58
Name: pickup_datetime, dtype: object
</code></pre>
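A sketch of the immediate fix for the AttributeError: <code>to_datetime</code> is a top-level pandas function, not a DataFrame method, and its result has to be assigned back to the column (the inline data below is illustrative).

```python
# Sketch: convert the column with the module-level function and assign back.
import pandas as pd

rides = pd.DataFrame({"pickup_datetime": ["2022-01-01 00:00:08",
                                          "2022-01-31 23:59:58"]})
rides["pickup_datetime"] = pd.to_datetime(rides["pickup_datetime"])
print(rides["pickup_datetime"].dtype)  # datetime64[ns]
```

After the conversion, `describe()` reports datetime statistics and minutes/seconds are preserved.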
|
<python><pandas>
|
2023-06-24 14:09:49
| 1
| 10,799
|
jdog
|
76,546,365
| 1,140,382
|
Specify type of Jinja2 variable in PyCharm
|
<p>Is it possible to assign a "type" or class to a variable passed to a Jinja template in Pycharm?</p>
<p>Specifically, similar to what is done in IntelliJ IDEA for Freemarker templates using this declaration in a comment:</p>
<pre class="lang-java prettyprint-override"><code><#-- @ftlvariable name="foo" type="java.lang.String" file="path/to/file" -->
</code></pre>
<p>See here for a reference as to how they do it:</p>
<p><a href="https://www.jetbrains.com/help/idea/template-data-languages.html#7b90b664" rel="nofollow noreferrer">https://www.jetbrains.com/help/idea/template-data-languages.html#7b90b664</a></p>
<p>After the above comment/annotation is included in the template file, the variable "foo" has a type reference to java.lang.String and auto-complete works for it. I'm not asking for anything complicated, just for fields on a class. E.g.:</p>
<pre><code>{{ author.Name }}
</code></pre>
<p>In the above example, it would be ideal if Name and other fields would be listed/populated.</p>
|
<python><pycharm><jinja2><fastapi>
|
2023-06-24 13:49:10
| 0
| 1,252
|
Zoran Pavlovic
|
76,546,062
| 8,388,707
|
Merge complex nested JSON and check value in comma separated column
|
<p>I have two dataframe like this</p>
<pre><code>df1 = pd.DataFrame({"name":["cream", "hello"], "keywords": ["[pudding, heavy Cream, heavycream, Sourcream]", "[hello, world, hi]"]})
df2 = pd.DataFrame({"ingredient":["cream", "hi", "MS"], "Frequency": [5, 10, 23]})
</code></pre>
<p>I want to merge both data frames into another one, where the requirement is that we should not have duplicate values: the new data frame has all the values of df1, and a df2 value is added only if its 'ingredient' value (e.g. 'cream') exists neither in the df1 'name' field nor in the df1 'keywords' field in exact form, so 'cream' and 'heavy cream' count as different.</p>
<p>Ideally, the result should be like this
<a href="https://i.sstatic.net/H3c9a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H3c9a.png" alt="enter image description here" /></a></p>
<p>The first two rows are copied directly from df1. The df2 value 'cream' exists directly in the df1 'name' column, so it is not added again; the df2 value 'hi' exists in the df1 'keywords', so it is not added either; meanwhile the third value, 'ms', is added.</p>
<p>I have written this code, but I feel it's not the most optimized:</p>
<pre><code>df_new = df1.copy()
for ingredient in df2["ingredient"].str.lower():
    if df1["name"].str.contains(ingredient).any():
        df1 = df1.loc[~df1["name"].str.contains(ingredient)]
    else:
        df1["keywords"] = df1["keywords"].astype('str')
        df1["keywords"] = df1["keywords"].apply(lambda x: x[1:-1].split(", "))
        for keyword in df1["keywords"]:
            print(ingredient)
            print(keyword)
            if ingredient not in keyword:
                print("add value")
                df_new = pd.concat([df_new, pd.DataFrame({"name": ingredient, "keywords": ""}, index=[0])], ignore_index=True)
</code></pre>
<p>On the first run it gives the expected table, but on repeated runs it keeps adding backslashes and breaking the output, like this:
<a href="https://i.sstatic.net/PWKfY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PWKfY.png" alt="enter image description here" /></a></p>
<p>Can someone please suggest a better way to do it?</p>
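As a sketch of one possibly simpler approach (not necessarily the optimum): build a single lowercase set of all names and keywords up front, then append only the unseen ingredients, so df1 is never mutated and repeated runs stay stable.

```python
# Sketch: one pass to collect known terms, one vectorized filter for df2.
import pandas as pd

df1 = pd.DataFrame({"name": ["cream", "hello"],
                    "keywords": ["[pudding, heavy Cream, heavycream, Sourcream]",
                                 "[hello, world, hi]"]})
df2 = pd.DataFrame({"ingredient": ["cream", "hi", "MS"], "Frequency": [5, 10, 23]})

known = set(df1["name"].str.lower())
for kw in df1["keywords"]:
    known.update(part.strip().lower() for part in kw.strip("[]").split(","))

unseen = df2[~df2["ingredient"].str.lower().isin(known)]
result = pd.concat(
    [df1, pd.DataFrame({"name": unseen["ingredient"].str.lower(), "keywords": ""})],
    ignore_index=True)
```

Because the keyword strings are only read, never rewritten, the backslash accumulation seen on repeated runs should not occur.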
|
<python><pandas><dataframe>
|
2023-06-24 12:26:31
| 1
| 1,592
|
Vineet
|
76,545,756
| 8,055,025
|
Selecting one column of dataframe as index and one column as value for a pandas series
|
<p>I have a CSV (pokemon.csv) with multiple columns like <code>name</code>, <code>type1</code>, <code>type2</code>, <code>pokedex</code>, etc.
I want to load the csv as a <code>dataframe</code> and then get one pandas <code>series</code> out of it.
I want to give <code>name</code> column as index and <code>type1</code> column as value, so I am doing this:</p>
<pre><code>import pandas as pd
indexed_series = pd.read_csv('./pokemon.csv', index_col = 'name', usecols = ['type1']).squeeze()
</code></pre>
<p>But I am getting the following error:</p>
<pre><code>ValueError: Index name invalid
</code></pre>
<p>What am I missing?</p>
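For reference, a sketch of the likely fix: <code>usecols</code> must include the index column as well, otherwise <code>read_csv</code> cannot find 'name' among the loaded columns (the inline CSV below is illustrative).

```python
# Sketch: include the index column in usecols, then squeeze to a Series.
import io
import pandas as pd

csv = io.StringIO("name,type1,type2\nbulbasaur,grass,poison\ncharmander,fire,\n")
s = pd.read_csv(csv, index_col="name", usecols=["name", "type1"]).squeeze("columns")
print(s.loc["bulbasaur"])  # grass
```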
|
<python><pandas><dataframe>
|
2023-06-24 11:04:54
| 1
| 2,063
|
Ankit Sahay
|
76,545,714
| 9,581,027
|
Generate sphinx documentation of only rst files from different directory (don't have any code in app)
|
<p>Project structure</p>
<pre><code>App
├───2023
│ ├───JULLY
│ │ ├───WEEK26.rst
│ │ └───WEEK27.rst
│ └───JUNE
│ └───WEEK25.rst
├───2022
│ ├───JULLY
│ │ ├───WEEK26.rst
│ │ └───WEEK27.rst
│ └───JUNE
│ └───WEEK25.rst
Doc
├───conf.py
├───modules.rst
├───index.rst
</code></pre>
<p>How can I automatically reference all the folders, recursively, in index.rst?</p>
<p>I need to Display data in following format on HTML</p>
<pre><code>2023
JAN
WEEK 1
WEEK 2
WEEK 3
WEEK 4
FEB
WEEK 5
WEEK 6
WEEK 7
WEEK 8
MARCH
APRIL
2023
JAN
WEEK 1
WEEK 2
WEEK 3
WEEK 4
FEB
WEEK 5
WEEK 6
WEEK 7
WEEK 8
MARCH
APRIL
</code></pre>
<p>How can I achieve the same structure in the index.rst file without hard-coding it, so that whenever a new file arrives in App I don't need to update index.rst by hand?</p>
<p>I Tried</p>
<pre><code>Documentation Sections
======================
.. toctree::
:maxdepth: 2
:glob:
../app/2023/JUNE/*.rst
</code></pre>
<p>ERROR</p>
<pre><code>WARNING: toctree glob pattern '../app/2023/JUNE/*.rst' didn't match any documents
</code></pre>
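Two hedged observations about the warning above: toctree glob entries are document names, so the `.rst` extension should be dropped, and Sphinx only collects documents that live inside its source directory, so files under `../app` may need to be moved (or symlinked) under the Doc tree, or the source directory adjusted, before any pattern can match. Assuming the App content sits under the source dir, a sketch might look like:

```
Documentation Sections
======================

.. toctree::
   :maxdepth: 2
   :glob:

   2023/JUNE/*
```

Note also the blank line between the toctree options and the glob pattern, which reStructuredText requires.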
|
<python><python-sphinx>
|
2023-06-24 10:55:15
| 0
| 503
|
ravishankar chavare
|
76,545,651
| 1,077,533
|
Remove tag containing a certain text
|
<p>I have this HTML code:</p>
<pre><code><p>
<strong>
Read our full
<a href="">
review
</a>
</strong>
</p>
</code></pre>
<p>I would like to remove all <code>p</code> tags containing the 'Read our full' string using BeautifulSoup. I tried the following code, but it doesn't work:</p>
<pre><code>for x in soup.find_all("p", text=re.compile('Read our full')):
    x.decompose()
</code></pre>
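A hedged guess at the cause, with a sketch: `text=` (now `string=`) only matches tags whose *direct* string equals the pattern, and here the text lives inside a nested `<strong>`; filtering on `get_text()` instead matches the whole subtree (the inline HTML below is illustrative).

```python
# Sketch: match on the paragraph's full text rather than its direct string.
from bs4 import BeautifulSoup

html = "<p><strong>Read our full <a href=''>review</a></strong></p><p>Keep me</p>"
soup = BeautifulSoup(html, "html.parser")
for p in soup.find_all("p"):
    if "Read our full" in p.get_text():
        p.decompose()
print(soup)  # <p>Keep me</p>
```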
|
<python><beautifulsoup>
|
2023-06-24 10:40:39
| 2
| 1,051
|
Floppy88
|
76,545,595
| 1,340,744
|
convert utc date time to utc timestamp in Python
|
<p>I need to convert a datetime to a UTC timestamp, but the problem is that different systems return different timestamps based on the machine's timezone.</p>
<p>example:
date1 = "2022-05-17 04:35:38"</p>
<p>UTC timestamp is:
1652762138</p>
<p>Code:</p>
<pre><code>date1 = "2022-05-17 04:35:38"
date_utc = int(datetime.datetime.strptime(date1, '%Y-%m-%d %H:%M:%S').timestamp())
</code></pre>
<p>The problem, as I mentioned, is that it returns a number that depends on the machine's timezone.</p>
<p>I need the simplest solution, please.</p>
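One simple sketch: mark the parsed datetime as UTC explicitly, so `timestamp()` no longer consults the machine's local timezone.

```python
# Sketch: attach an explicit UTC tzinfo before calling timestamp().
from datetime import datetime, timezone

date1 = "2022-05-17 04:35:38"
dt = datetime.strptime(date1, "%Y-%m-%d %H:%M:%S").replace(tzinfo=timezone.utc)
print(int(dt.timestamp()))  # 1652762138
```

An aware datetime's `timestamp()` is computed from its own tzinfo, which is what makes the result machine-independent.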
|
<python><python-3.x><datetime><utc>
|
2023-06-24 10:26:51
| 1
| 382
|
HabibS
|
76,545,549
| 14,672,356
|
Unilateral error bars in plotly stacked bar chart
|
<p>How can I get unilateral error bars in the stacked bar chart in plotly below?</p>
<pre><code>import plotly.graph_objects as go
y1_values = [5, 4, 4, 5, 6, 5]
y2_values = [9, 8, 7, 6, 5, 4]
y3_values = [4, 6, 7, 7, 7, 9]
y1_errors = [0.5, 0.6, 0.4, 0.7, 0.5, 0.8]
y2_errors = [0.4, 0.3, 0.5, 0.7, 0.3, 0.5]
y3_errors = [0.6, 0.2, 0.3, 0.4, 0.4, 0.4]
fig = go.Figure()
# config = dict({'scrollZoom': True})
# Bars for increase with error bars
fig.add_bar(x=[1, 1.5, 2.2, 2.7, 3.4, 3.9], y=y3_values, name="Increase", marker_color='#FF6692',
error_y=dict(type="data", array=y3_errors, thickness=2, width=10),
base=[sum(x) for x in zip(y1_values, y2_values)]
)
# Bars for decrease with error bars
fig.add_bar(x=[1, 1.5, 2.2, 2.7, 3.4, 3.9], y=y2_values, name="Decrease", marker_color='#19D3F3',
error_y=dict(type="data", array=y2_errors, thickness=2, width=10),
base=y1_values
)
# Bars for unchanged with error bars
fig.add_bar(x=[1, 1.5, 2.2, 2.7, 3.4, 3.9], y=y1_values, name="Unchanged", marker_color='#00CC96',
error_y=dict(type="data", array=y1_errors, thickness=2, width=10),
base=[0] * 6
)
# Annotation for bar1 and bar2
fig.add_annotation(text="10 minutes",
xref="paper", yref="paper",
x=0.07, y=-0.18,
showarrow=False,
font_size=20
)
# Annotation for bar3 and bar4
fig.add_annotation(text="30 minutes",
xref="paper", yref="paper",
x=0.5, y=-0.18,
showarrow=False,
font_size=20
)
# Annotation for bar5 and bar6
fig.add_annotation(text="60 minutes",
xref="paper", yref="paper",
x=0.93, y=-0.18,
showarrow=False,
font_size=20
)
# Layout
fig.update_layout(barmode="stack", showlegend=True, template='plotly_white',
height=700,
width=1000,
legend=dict(title='', orientation="h", traceorder='normal', x=0.46, y=1.05,
bgcolor='rgba(0,0,0,0)', bordercolor='rgba(0,0,0,1)', borderwidth=0,
font_size=20
)
)
fig.update_yaxes(showline=True, showgrid=False, linewidth=0.5, linecolor='black',
title='<b>Percent</b>', titlefont=dict(size=24), title_standoff=30,
ticks='outside',
dtick=2,
ticklen=10,
tickfont=dict(size=20),
range=[0, 20]
)
fig.update_xaxes(title='', tickvals=[1, 1.5, 2.2, 2.7, 3.4, 3.9],
ticktext=["<b>Cat</b><br>n = 15", "<b>Dog</b><br>n = 15",
"<b>Cat</b><br>n = 15", "<b>Dog</b><br>n = 15",
"<b>Cat</b><br>n = 15", "<b>Dog</b><br>n = 15"],
ticks="", tickfont_size=20, linecolor="black", linewidth=1
)
fig.show()
</code></pre>
<p>gives</p>
<p><a href="https://i.sstatic.net/L0P5q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L0P5q.png" alt="stacked bar chart with bidirectional error bars" /></a></p>
<p>I would like to have the following (modified in Illustrator)</p>
<p><a href="https://i.sstatic.net/TCeVc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TCeVc.png" alt="stacked bar chart with bidirectional error bars, modified" /></a></p>
|
<python><plotly><confidence-interval><stacked-bar-chart>
|
2023-06-24 10:13:30
| 2
| 353
|
hans
|
76,545,397
| 3,225,420
|
Use .format() with StrMethodFormatter in Matplotlib?
|
<p>This works:</p>
<pre><code>for ax in fig.axes:
    ax.xaxis.set_major_formatter(StrMethodFormatter("{x:,.3f}"))
</code></pre>
<p>This returns KeyError: 'x':</p>
<pre><code>for ax in fig.axes:
    ax.xaxis.set_major_formatter(StrMethodFormatter("{x:,.{}f}".format(3)))
</code></pre>
<p>I want to set the number of decimal places in my labels but don't want to hard code how many.</p>
<p>My approach inspired by this <a href="https://stackoverflow.com/a/53482674/3225420">answer</a>.</p>
<p>Updates on attempts:</p>
<p>This also works:</p>
<pre><code>`ax.xaxis.set_major_formatter(StrMethodFormatter('{}'.format('{x:,.0f}'))) # No decimal places`
</code></pre>
<p>This doesn't, which is confusing:</p>
<pre><code>ax.xaxis.set_major_formatter(StrMethodFormatter('{}'.format('{x:,.{}f}'.format('0') ) ) )
</code></pre>
<p>This doesn't, which is also confusing:</p>
<pre><code>x = '{x:,.{}f}'.format(str(0))
ax.xaxis.set_major_formatter(StrMethodFormatter('{}'.format(x) ))
</code></pre>
<p>Tried this 'just because', it did not work:</p>
<pre><code>ax.xaxis.set_major_formatter(StrMethodFormatter('{}'.format('{x:,{}}'.format('.0f') ) ) )
</code></pre>
<p>What can I try next?</p>
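A sketch of why the KeyError appears and one way around it: the outer `.format()` call tries to substitute *every* brace pair, including `{x:...}`; doubling the braces that should survive defers them to the formatter.

```python
# Sketch: escape the braces meant for StrMethodFormatter by doubling them.
from matplotlib.ticker import StrMethodFormatter

decimals = 3
fmt_string = "{{x:,.{}f}}".format(decimals)   # -> "{x:,.3f}"
formatter = StrMethodFormatter(fmt_string)
```

This also explains the earlier observations: `'{}'.format('{x:,.0f}')` works because the substitution result happens to be a valid format string, while nesting a second `.format()` inside the `{x:...}` spec does not.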
|
<python><matplotlib><axis-labels><string.format><x-axis>
|
2023-06-24 09:27:39
| 1
| 1,689
|
Python_Learner
|
76,545,352
| 8,236,050
|
Using Stable-baselines3 with Gymnasium action space error
|
<p>I was trying to use my <code>gym</code> environment with Stable-Baselines, but when I had to update the <code>stable-baselines3</code> version to 2.0.0a5 my environment did not work anymore, and after looking at several documentation pages and forum threads I saw I had to start using <code>gymnasium</code> instead of <code>gym</code> to make it work. Now my code does work well on my macOS machine and on Google Colab. Nevertheless, I have tried to create a virtual environment on Windows using the same requirements file as in Google Colab (where the code does work), but on Windows I get:</p>
<pre><code>AssertionError: The algorithm only supports (<class 'gym.spaces.box.Box'>, <class 'gym.spaces.discrete.Discrete'>, <class 'gym.spaces.multi_discrete.MultiDiscrete'>, <class 'gym.spaces.multi_binary.MultiBinary'>) as action spaces but Discrete(5) was provided
</code></pre>
<p>The versions of <code>gym</code>, <code>gymnasium</code> and <code>stable-baselines3</code> I have in both environments are the same, so I do not understand why this happens. My versions are the following:</p>
<ul>
<li><code>gym</code> --> Version: 0.25.2</li>
<li><code>gymnasium</code> --> Version: 0.28.1</li>
<li><code>stable-baselines3</code> --> Version: 2.0.0a5</li>
</ul>
|
<python><openai-gym><stable-baselines>
|
2023-06-24 09:13:51
| 1
| 513
|
pepito
|
76,545,231
| 12,224,591
|
Remove NaN Values from list (Python 3.10)
|
<p>I'm attempting to remove all <code>NaN</code> entries from a Python 3.10 NumPy array of X-Y data points, prior to creating a polynomial fit via NumPy's <code>polyfit</code> function on the data. The actual <code>NaN</code> values are located on the Y-axis, but I would like to remove both the X and Y values for each Y instance that's a <code>NaN</code>.</p>
<hr />
<p>The <a href="https://stackoverflow.com/questions/28647172/numpy-polyfit-doesnt-handle-nan-values">following attempt</a>:</p>
<pre><code>import numpy as np
def main():
    dataX = [1, 2, 3, 4, 5]
    dataY = [1, np.nan, 5, np.nan, 1]
    finiteIdx = np.isfinite(dataX) & np.isfinite(dataY)
    poly = np.polyfit(dataX[finiteIdx], dataY[finiteIdx], 2)

if (__name__ == "__main__"):
    main()
</code></pre>
<p>Results in:</p>
<pre><code> poly = np.polyfit(dataX[finiteIdx], dataY[finiteIdx], 2)
TypeError: only integer scalar arrays can be converted to a scalar index
</code></pre>
<hr />
<p>The <a href="https://stackoverflow.com/questions/11620914/how-do-i-remove-nan-values-from-a-numpy-array">following attempt</a>:</p>
<pre><code>import numpy as np
def main():
dataX = [1, 2, 3, 4, 5]
dataY = [1, np.nan, 5, np.nan, 1]
poly = np.polyfit(dataX[~np.isnan(dataY)], dataY[~np.isnan(dataY)], 2)
if (__name__ == "__main__"):
main()
</code></pre>
<p>Results in:</p>
<pre><code> poly = np.polyfit(dataX[~np.isnan(dataY)], dataY[~np.isnan(dataY)], 2)
TypeError: only integer scalar arrays can be converted to a scalar index
</code></pre>
<hr />
<p>The <a href="https://stackoverflow.com/questions/42387613/delete-nan-or-reduce-length-of-numpy-array-if-array-contains-nan-after-convert?noredirect=1&lq=1">following attempt</a>:</p>
<pre><code>import numpy as np
def main():
dataX = [1, 2, 3, 4, 5]
dataY = [1, np.nan, 5, np.nan, 1]
poly = np.polyfit(dataX[dataY != np.nan], dataY[dataY != np.nan], 2)
if (__name__ == "__main__"):
main()
</code></pre>
<p>Results in:</p>
<pre><code> raise TypeError("expected 1D vector for x")
TypeError: expected 1D vector for x
</code></pre>
<hr />
<p>What is the proper way of removing all <code>NaN</code> values from a NumPy array?</p>
<p>Thanks for reading my post, any guidance is appreciated.</p>
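One likely cause of the `TypeError` in the first two attempts is that `dataX` and `dataY` are plain Python lists, which do not support boolean-array indexing; converting them to NumPy arrays first should make the finite-mask approach work. A minimal sketch:

```python
import numpy as np

dataX = np.array([1, 2, 3, 4, 5], dtype=float)
dataY = np.array([1, np.nan, 5, np.nan, 1], dtype=float)

# Boolean mask of positions where Y is a real number; indexing a plain
# Python list with such a mask raises the TypeError seen above, so both
# sides must be NumPy arrays before indexing.
finite = np.isfinite(dataY)
poly = np.polyfit(dataX[finite], dataY[finite], 2)
print(poly)  # coefficients of the fitted parabola
```

The third attempt fails for a different reason: `x != np.nan` is always `True` element-wise (NaN compares unequal to everything, including itself), so `np.isnan`/`np.isfinite` is the right tool.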
|
<python><arrays><numpy><nan>
|
2023-06-24 08:35:14
| 2
| 705
|
Runsva
|
76,545,223
| 20,776,947
|
Install Firefox on Google Colab and Access using Public URL
|
<p>I am trying to install Firefox on Google Colab and access it Interface using Public URL.</p>
<p><strong>My Code :</strong></p>
<pre><code>!apt-get update
!pip install pyngrok
!pip install selenium
!apt-get install firefox -y
from pyngrok import ngrok
import subprocess
# Launch the web browser
subprocess.Popen(['firefox', '--headless', '--no-sandbox', '--disable-dev-shm-usage', '--remote-debugging-port=8888'])
# Wait for Firefox to start
import time
time.sleep(5) # Adjust the delay if needed
# Create a public URL using ngrok
ngrok.set_auth_token('2Rds12n2TZQrrPo0SF7F4i7ZmGg_3qDDTjYdSAyH31y55mJuB')
browser_url = ngrok.connect(8888).public_url
# Display the public URL
print("Web Browser URL:", browser_url)
#Verification
!firefox --version
!lsof -i :8888
!netstat :8888
# Keep the cell running until interrupted
try:
while True:
pass
except KeyboardInterrupt:
print("Cell execution interrupted.")
finally:
# Stop ngrok
ngrok.kill()
</code></pre>
<p><strong>Error :</strong></p>
<p><em><strong>After opening the public URL in my web browser, it shows a bad request.</strong></em></p>
|
<python><python-3.x><firefox><jupyter-notebook><google-colaboratory>
|
2023-06-24 08:32:43
| 0
| 457
|
Mehul Kumar
|
76,545,178
| 3,020,379
|
how to isolate python 2.7 and 3.4 and prevent them from reading the windows registry PythonPath
|
<p>All Python versions, including the latest one (3.12), will read PythonPath from the Windows registry when the keys are created using <a href="https://yuji.wordpress.com/2009/08/31/python-on-windows-setting-pythonpath-environment-variable/" rel="nofollow noreferrer">this method</a>, including inside venv or virtualenv.</p>
<p>as the doc says :</p>
<blockquote>
<p>When running python.exe, or any other .exe in the main Python
directory (either an installed version, or directly from the PCbuild
directory), the core path is deduced, and the core paths in the
registry are ignored. <strong>Other “application paths” in the registry are
always read.</strong> <a href="https://docs.python.org/3/using/windows.html" rel="nofollow noreferrer">source</a></p>
</blockquote>
<p>so if someone adds this to the Windows registry:</p>
<pre class="lang-ini prettyprint-override"><code>Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\2.7\PythonPath\ccc]
@="D:\\tttttt27;gggggggg27;"
</code></pre>
<p>then python will happily add them to <code>sys.path</code>.</p>
<p><strong>How can I prevent this for Python 2.7/3.4? (I know they are old, but sometimes you are forced to use them.)</strong></p>
<h1>Safe solutions:</h1>
<p>These are the solutions I tested, but none work for 2.7/3.4: <code>venv/virtualenv</code>, <code>-E</code>, <code>-I</code>, <code>python*._pth</code>, <code>set "PYTHONHOME=%pythonExe_dir%"</code>
<a href="https://i.sstatic.net/OHmeT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OHmeT.png" alt="enter image description here" /></a></p>
<h1>risked solutions:</h1>
<p>I found a hackish solution that works, but it seems risky:</p>
<ul>
<li><p>for python <code>2.7</code>, open <code>python27.dll</code> in notepad++ for example, search for <code>PythonCore</code> word , it will be in line 22539, then change <code>P</code> to <code>y</code> and <code>y</code> to <code>P</code>, so <code>PythonCore</code> become <code>yPthonCore</code></p>
</li>
<li><p>for python <code>3.4</code>, open <code>python34.dll</code>, search for <code>PythonCore</code> word using this notpad++ regex: <code>\\\0P\0y\0t\0h\0o\0n\0C\0o\0r\0e</code> , it will be in line 17356, change <code>P</code> to <code>y</code> and <code>y</code> to <code>P</code></p>
</li>
</ul>
<p><strong>if there is no safe solution what implications can I suffer from using the risked solution? can something break if I edit the dll like I did?</strong></p>
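Since `-E`/`-I` and `._pth` files are not honored by 2.7/3.4, one less invasive workaround (a sketch, not a vetted solution) is a `sitecustomize.py` placed on the interpreter's default path, which runs at startup before user code and strips `sys.path` entries that came from neither the interpreter's own install tree nor `PYTHONPATH` — which is where the registry-injected "application paths" end up:

```python
# sitecustomize.py -- hypothetical startup hook; runs automatically if it is
# importable from the default sys.path (e.g. next to the stdlib).
import os
import sys

def prune_registry_paths(path_entries, trusted_roots):
    """Keep '' (the script directory) and entries under a trusted root."""
    normed_roots = [os.path.normcase(os.path.abspath(r)) for r in trusted_roots]
    kept = []
    for entry in path_entries:
        if entry == "":          # '' stands for the script/current directory
            kept.append(entry)
            continue
        full = os.path.normcase(os.path.abspath(entry))
        if any(full.startswith(root) for root in normed_roots):
            kept.append(entry)
    return kept

if sys.platform == "win32":
    # Trust the interpreter's install tree plus anything in PYTHONPATH.
    trusted = [os.path.dirname(os.path.abspath(sys.executable))]
    trusted += [p for p in os.environ.get("PYTHONPATH", "").split(os.pathsep) if p]
    sys.path[:] = prune_registry_paths(sys.path, trusted)
```

Caveat: this filters `sys.path` *after* the registry entries were added, so it mitigates rather than prevents the mechanism; editing the DLL as you describe is the only way I know of to stop the registry lookup itself in those versions.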
|
<python><python-3.x><python-2.7>
|
2023-06-24 08:17:59
| 0
| 1,816
|
Badr Elmers
|
76,545,117
| 3,719,167
|
Neo4J Django neomodel not showing nodes in the explorer
|
<p>I'm new to graph databases and am using a Neo4j database with the <code>neomodel</code> library in a <code>Django application</code>.</p>
<p>The settings are defined as</p>
<pre class="lang-py prettyprint-override"><code>NEOMODEL_NEO4J_BOLT_URL = os.environ.get('NEO4J_BOLT_URL')
NEOMODEL_SIGNALS = True
NEOMODEL_FORCE_TIMEZONE = False
NEOMODEL_ENCRYPTED_CONNECTION = True
NEOMODEL_MAX_POOL_SIZE = 50
</code></pre>
<p>and the <code>models.py</code> file has the following model</p>
<pre class="lang-py prettyprint-override"><code>class Person(StructuredNode):
SEXES = {
'm': 'Male',
'f': 'Female',
}
id = properties.UniqueIdProperty()
first_name = properties.StringProperty(unique_index=True, required=True, max_length=100)
gender = properties.StringProperty(choices=SEXES, required=True)
</code></pre>
<p>In Django shell <code>python manage.py shell</code>, creating a node as</p>
<pre class="lang-bash prettyprint-override"><code>>>> from family_tree.models import Person
>>> person = Person(first_name='Anuj', gender='m')
>>> person.save()
<Person: {'id': '572d0b8ff3a8402ba58c4a03cace78ba', 'first_name': 'Anuj', 'middle_name': None, 'last_name': None, 'gender': 'm', 'date_of_birth': None, 'date_of_death': None, 'created': datetime.datetime(2023, 6, 24, 7, 44, 26, 223871, tzinfo=<UTC>)}>
</code></pre>
<p>But the database explorer has no nodes in it.</p>
<p><a href="https://i.sstatic.net/AP0H1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AP0H1.png" alt="enter image description here" /></a></p>
<p>Also refreshing node gives <code>DoesNotExist</code> exception</p>
<pre class="lang-bash prettyprint-override"><code>>>> person.refresh()
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/local/lib/python3.9/site-packages/neomodel/core.py", line 616, in refresh
raise self.__class__.DoesNotExist("Can't refresh non existent node")
neomodel.core.PersonDoesNotExist: (PersonDoesNotExist(...), "Can't refresh non existent node")
</code></pre>
|
<python><django><neo4j><neomodel>
|
2023-06-24 07:59:01
| 1
| 9,922
|
Anuj TBE
|
76,545,079
| 1,980,208
|
Take every second row from existing dataframe and arrange them in to columns
|
<p>I have a time series which looks like this :</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>min</th>
<th>Volume every minute</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>0.2</td>
</tr>
<tr>
<td>2</td>
<td>0.3</td>
</tr>
<tr>
<td>3</td>
<td>0.4</td>
</tr>
<tr>
<td>.</td>
<td>.</td>
</tr>
<tr>
<td>1440</td>
<td>0.8</td>
</tr>
<tr>
<td>1441</td>
<td>0.5</td>
</tr>
<tr>
<td>.</td>
<td>.</td>
</tr>
<tr>
<td>4018</td>
<td>0.6</td>
</tr>
<tr>
<td>.</td>
<td>.</td>
</tr>
<tr>
<td>252000</td>
<td>0.7</td>
</tr>
<tr>
<td>252001</td>
<td>0.9</td>
</tr>
</tbody>
</table>
</div>
<p>I have to arrange this into a new dataframe in such a way that the <strong>min</strong> column should have every 1441st value. Also, the created columns are something like this : <br></p>
<p><strong>1_1, 1_2,1_3, ...1_1339, 1_1440</strong> -- Entries every min (1440 entries from 1 | 1440 entries from 1441 | 1440 entries from 2881 .... )<br>
<strong>3_1, 3_2,3_3, ...3_1339, 3_1440</strong> -- Entries every 3 min (1440 entries from 1 | 1440 entries from 1441 | 1440 entries from 2881 .... )<br>
<strong>5_1, 5_2,5_3, ...5_1339, 5_1440</strong> -- Entries every 5 min (1440 entries from 1 | 1440 entries from 1441 | 1440 entries from 2881 .... )<br></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">min</th>
<th style="text-align: center;">1_1</th>
<th style="text-align: right;">..</th>
<th style="text-align: right;">1_1440</th>
<th style="text-align: right;">3_1</th>
<th style="text-align: right;">..</th>
<th style="text-align: right;">3_1440</th>
<th style="text-align: right;">..</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">0.2</td>
<td style="text-align: right;">.</td>
<td style="text-align: right;">0.8</td>
<td style="text-align: right;">0.2</td>
<td style="text-align: right;">.</td>
<td style="text-align: right;">0.6</td>
<td style="text-align: right;"></td>
</tr>
<tr>
<td style="text-align: left;">1441</td>
<td style="text-align: center;">0.5</td>
<td style="text-align: right;">.</td>
<td style="text-align: right;">val at 2880 min</td>
<td style="text-align: right;">0.5</td>
<td style="text-align: right;">.</td>
<td style="text-align: right;">val at 5458 min</td>
<td style="text-align: right;"></td>
</tr>
<tr>
<td style="text-align: left;">.</td>
<td style="text-align: center;">.</td>
<td style="text-align: right;">.</td>
<td style="text-align: right;">.</td>
<td style="text-align: right;">.</td>
<td style="text-align: right;">.</td>
<td style="text-align: right;">.</td>
<td style="text-align: right;"></td>
</tr>
<tr>
<td style="text-align: left;">.</td>
<td style="text-align: center;">.</td>
<td style="text-align: right;">.</td>
<td style="text-align: right;">.</td>
<td style="text-align: right;">.</td>
<td style="text-align: right;">.</td>
<td style="text-align: right;">.</td>
<td style="text-align: right;"></td>
</tr>
<tr>
<td style="text-align: left;">252001</td>
<td style="text-align: center;">0.9</td>
<td style="text-align: right;">NaN</td>
<td style="text-align: right;">NaN</td>
<td style="text-align: right;">NaN</td>
<td style="text-align: right;">.</td>
<td style="text-align: right;">NaN</td>
<td style="text-align: right;"></td>
</tr>
</tbody>
</table>
</div>
<p>I know I might have to use <code>DataFrame.pivot_table</code> with <code>DataFrame.add_prefix</code>, but I'm a little clueless.</p>
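For the per-minute (`1_*`) columns, one approach that avoids `pivot_table` entirely is to pad the series to a whole number of days and `reshape` it into a `(days, 1440)` matrix. A sketch using a miniature 4-minute "day" so the shapes are easy to check (use 1440 for the real data):

```python
import numpy as np
import pandas as pd

MIN_PER_DAY = 4  # use 1440 for the real data
vol = pd.Series([0.2, 0.3, 0.4, 0.8, 0.5, 0.6, 0.7, 0.9])

# Pad the tail with NaN so the series divides evenly into whole "days".
n_days = int(np.ceil(len(vol) / MIN_PER_DAY))
padded = np.full(n_days * MIN_PER_DAY, np.nan)
padded[: len(vol)] = vol.to_numpy()

day_starts = 1 + MIN_PER_DAY * np.arange(n_days)      # 1, 1441, 2881, ...
cols = [f"1_{i}" for i in range(1, MIN_PER_DAY + 1)]  # 1_1 .. 1_1440
out = pd.DataFrame(padded.reshape(n_days, MIN_PER_DAY), columns=cols)
out.insert(0, "min", day_starts)
print(out)
```

The 3- and 5-minute variants could plausibly be built the same way from slices of the reshaped matrix (e.g. every third column via `[:, ::3]`), though the exact sampling you want for those columns isn't fully clear from the description.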
|
<python><pandas>
|
2023-06-24 07:46:00
| 1
| 439
|
prem
|
76,544,877
| 1,613,047
|
How to convert .safetensors or .ckpt Files and Using in FlaxStableDiffusionImg2ImgPipeline?
|
<p>I am trying to convert a .safetensors model to a diffusers model using the Python script found at <a href="https://github.com/huggingface/diffusers/blob/main/scripts/convert_original_stable_diffusion_to_diffusers.py" rel="nofollow noreferrer">https://github.com/huggingface/diffusers/blob/main/scripts/convert_original_stable_diffusion_to_diffusers.py</a>. The command I tried is <code>python3 convert_original_stable_diffusion_to_diffusers.py --checkpoint_path /home/aero/stable-diffusion-webui/models/Stable-diffusion/chilloutmix_NiPrunedFp32Fix.safetensors --scheduler_type euler-ancestral --dump_path /home/aero/diffusers/models/chilloutmix_NiPrunedFp32Fix --from_safetensors</code>. After the conversion, I intend to use the diffusers model within the FlaxStableDiffusionImg2ImgPipeline.</p>
<p>However, I encountered an error when running the script I provided below (full code):</p>
<pre><code>First error: OSError: diffusion_pytorch_model.bin file found in directory /home/aero/diffusers/models/chilloutmix_NiPrunedFp32Fix/vae. Please load the model using from_pt=True.
I modified the code by adding from_pt=True.
Second error: TypeError: getattr(): attribute name must be string
</code></pre>
<p>My question is how I can fix these issues and properly convert the .safetensors model to a diffusers model, so I can use it with FlaxStableDiffusionImg2ImgPipeline without encountering any errors?</p>
<p>Full Code:</p>
<pre class="lang-py prettyprint-override"><code>import jax
import numpy as np
import jax.numpy as jnp
from flax.jax_utils import replicate
from flax.training.common_utils import shard
import requests
from io import BytesIO
from PIL import Image
from diffusers import FlaxStableDiffusionImg2ImgPipeline
import time
from datetime import datetime
import random
def create_key(seed=0):
return jax.random.PRNGKey(seed)
start_time = time.time()
url = "https://i.pinimg.com/564x/e6/36/a6/e636a664f860a1ec9f7b5f3c4e2f634b.jpg"
response = requests.get(url)
init_img = Image.open(BytesIO(response.content)).convert("RGB")
init_img = init_img.resize((784, 784))
prompts = "hyperreal, artstation, (masterpiece:1.0), (best quality:1.4), (ultra highres:1.2), (photorealistic:1.4), (8k, RAW photo:1.2), (soft focus:1.4), (sharp focus:1.4)"
num_samples = jax.device_count()
pipeline, params = FlaxStableDiffusionImg2ImgPipeline.from_pretrained(
"/home/aero/diffusers/models/chilloutmix_NiPrunedFp32Fix",
dtype=jnp.bfloat16,
safety_checker=None,
# from_pt=True,
)
for x in range(4):
rng = create_key(random.randint(0, 7183698734589870))
rng = jax.random.split(rng, num_samples)
prompt_ids, processed_image = pipeline.prepare_inputs(
prompt=[prompts] * num_samples, image=[init_img] * num_samples
)
p_params = replicate(params)
prompt_ids = shard(prompt_ids)
processed_image = shard(processed_image)
output = pipeline(
prompt_ids=prompt_ids,
image=processed_image,
params=p_params,
prng_seed=rng,
strength=0.6,
num_inference_steps=50,
jit=True,
height=784,
width=784,
).images
output_images = pipeline.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:])))
# Get timestamp
timestamp = datetime.now().strftime("%Y%m%d-%H%M%S")
# Loop over images and save to output directory with unique name
for i, image in enumerate(output_images):
filename = f"./{timestamp}_{x}_{i}.jpg"
image.save(filename)
duration = time.time() - start_time
print(f"Elapsed time: {duration:.4f} seconds")
</code></pre>
<p>Error Stack:</p>
<pre><code>╭──────────────────────────── Traceback (most recent call last) ─────────────────────────────╮
│ /home/aero/diffusers/./test.py:28 in <module> │
│ │
│ 25 │
│ 26 num_samples = jax.device_count() │
│ 27 │
│ ❱ 28 pipeline, params = FlaxStableDiffusionImg2ImgPipeline.from_pretrained( │
│ 29 │ "/home/aero/diffusers/models/chilloutmix_NiPrunedFp32Fix", │
│ 30 │ dtype=jnp.bfloat16, │
│ 31 │ safety_checker=None, │
│ │
│ /home/aero/.local/lib/python3.8/site-packages/diffusers/pipelines/pipeline_flax_utils.py:4 │
│ 46 in from_pretrained │
│ │
│ 443 │ │ │ │ │ if class_candidate is not None and issubclass(class_obj, class_c │
│ 444 │ │ │ │ │ │ load_method_name = importable_classes[class_name][1] │
│ 445 │ │ │ │ │
│ ❱ 446 │ │ │ │ load_method = getattr(class_obj, load_method_name) │
│ 447 │ │ │ │ │
│ 448 │ │ │ │ # check if the module is in a subdirectory │
│ 449 │ │ │ │ if os.path.isdir(os.path.join(cached_folder, name)): │
╰────────────────────────────────────────────────────────────────────────────────────────────╯
</code></pre>
|
<python><pytorch><stable-diffusion><flax>
|
2023-06-24 06:41:57
| 0
| 9,455
|
Aero Wang
|
76,544,858
| 2,732,959
|
Python: best practice to manage logging handlers
|
<p>What's the best practice for logger instantiation? Currently, I'm using this to create a logger:</p>
<pre><code>def create_logger(logger_name='default_logger_name', tags={"application": "default-app", "environment": "development"}):
handler = logging_loki.LokiQueueHandler(
Queue(-1),
url="https://somewhere",
tags=tags,
auth=("some", "thing="),
version="1",
)
logger = logging.getLogger(logger_name)
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)
logger.addHandler(get_console_handler())
return logger
def get_console_handler():
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.DEBUG)
log_format = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
console_handler.setFormatter(log_format)
return console_handler
</code></pre>
<p>and then calling</p>
<pre><code>self.logger = create_logger(__name__)
</code></pre>
<p>in any class where I want logging.
This has two problems. The first one is that my class is suddenly coupled to Loki, which is obviously bad. The second problem is that even if I wasn't using any handler that required extra modules, I noticed that during unit testing I have to remove the handlers explicitly between different tests, as I otherwise get 2x the output for the test that was run second, 3x the output for the third, and so on, as duplicate handlers keep getting added without ever being removed.
What kind of pattern should I use to avoid these issues? The first thing that occurred to me is to pass the logger creation method in the class constructor. This solves the first problem but does not solve the issue of having to remove the handlers. The second would be to pass a logger instance and handle everything outside of the class (like handler removal between tests). However, this would still leave me with having to do an explicit handler removal which feels a bit weird.</p>
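One common pattern that addresses both issues at once (a sketch, not the only reasonable design) is to make logger configuration idempotent and inject the handler construction, so the class never imports `logging_loki` and repeated calls never stack duplicate handlers:

```python
import logging

def get_logger(name, handler_factories=()):
    """Return a configured logger; configuring the same name twice is a no-op.

    `handler_factories` is a sequence of zero-argument callables that build
    handlers, so the composition root decides about Loki, console, etc. --
    the classes that log never touch those modules.
    """
    logger = logging.getLogger(name)
    if logger.handlers:          # already configured: don't add duplicates
        return logger
    logger.setLevel(logging.DEBUG)
    for factory in handler_factories:
        logger.addHandler(factory())
    return logger

log = get_logger("demo", handler_factories=[logging.StreamHandler])
log = get_logger("demo", handler_factories=[logging.StreamHandler])  # no-op
print(len(log.handlers))  # → 1
```

An alternative with the same effect is configuring everything once at process startup via `logging.config.dictConfig` and having classes call plain `logging.getLogger(__name__)`; handlers then live on the root logger, and tests can swap the whole config rather than removing handlers one by one.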
|
<python><logging>
|
2023-06-24 06:36:19
| 1
| 715
|
John Smith
|
76,544,760
| 15,245,889
|
Finding lucky numbers?
|
<p>I have this rule:
A lucky number is a number with a 6 or an 8 in it, but not a 6 and an 8 in it.</p>
<p>The problem is to find how many lucky numbers in an inclusive range l, r.</p>
<p>First, I tried using a for loop. It worked, but it was slow.
Now, I am trying another method, but I'm not quite fond of it and it appears to be failing.
Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>l, r = [int(i) for i in input().split()]
def nums_without_digit(l, r, x):
"""
Returns the number of numbers in the range (l, r) that do not contain the digit x.
Args:
l: The lower bound of the range.
r: The upper bound of the range.
x: The digit to be excluded.
Returns:
The number of numbers in the range that do not contain the digit x.
"""
count = 0
for i in range(10):
if i != x:
count += (r // 10**i) - (l - 1) // 10**i
return count
def nums_without_digits(l, r, x, y):
"""
Returns the number of numbers in the range (l, r) that do not contain the digits x and y.
Args:
l: The lower bound of the range.
r: The upper bound of the range.
x: The first digit to be excluded.
y: The second digit to be excluded.
Returns:
The number of numbers in the range that do not contain the digits x and y.
"""
count = 0
for i in range(10):
if i != x and i != y:
count += (r // 10**i) - (l - 1) // 10**i
return count * 2
total = r - l + 1
total -= nums_without_digit(l, r, 6)
total -= nums_without_digit(l, r, 8)
total -= nums_without_digits(l, r, 6, 8)
print(total)
</code></pre>
<p>The idea is to start with all the possible numbers and remove those that are not possible.
The problem is, when given something like L=1, R=10, this program will spit out -34 instead of the expected 2.</p>
<p>Any help is appreciated.</p>
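One issue with the subtraction approach as written is that the per-position counts don't correspond to counts of numbers in the range, so the totals don't compose. A different fast approach (a sketch, not the only one) is a digit count of how many integers up to `n` avoid a given digit set; "contains a 6 xor an 8" then falls out by inclusion-exclusion, and a range query is a difference of two prefix counts:

```python
def count_avoiding(n, forbidden):
    """How many integers in [0, n] contain none of the `forbidden` digits."""
    if n < 0:
        return 0
    digits = str(n)
    allowed = 10 - len(forbidden)
    total = 0
    for i, ch in enumerate(digits):
        d = int(ch)
        # Numbers sharing the prefix so far but with a smaller digit here:
        smaller = sum(1 for x in range(d) if x not in forbidden)
        total += smaller * allowed ** (len(digits) - i - 1)
        if d in forbidden:       # the prefix itself is invalid: stop here
            return total
    return total + 1             # n itself avoids every forbidden digit

def lucky_up_to(n):
    """Numbers in [0, n] containing a 6 or an 8, but not both."""
    no6, no8, neither = (count_avoiding(n, s) for s in ({6}, {8}, {6, 8}))
    # with 6, no 8:  (avoid 8) - (avoid both);  symmetrically for 8.
    return (no8 - neither) + (no6 - neither)

def lucky_in_range(l, r):
    return lucky_up_to(r) - lucky_up_to(l - 1)

print(lucky_in_range(1, 10))  # → 2  (the numbers 6 and 8)
```

This runs in time proportional to the number of digits of `r`, so it handles very large ranges that a plain loop cannot.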
|
<python><math>
|
2023-06-24 06:01:54
| 4
| 384
|
UCYT5040
|
76,544,529
| 5,132,860
|
RuntimeError with Microsoft's Text-to-Speech on Google Cloud Run
|
<p>I am experiencing an issue with running Microsoft's text-to-speech on Google Cloud Run. The problem arose suddenly last night and I've been getting the following error:</p>
<pre><code> Traceback (most recent call last):
File "/code/app/./speech/backend.py", line 42, in save_text_to_speech
speech_api.speech()
File "/code/app/./speech/speech_api.py", line 266, in speech
synthesizer = SpeechSynthesizer(speech_config=self.speech_config, audio_config=audio_config)
File "/usr/local/lib/python3.9/site-packages/azure/cognitiveservices/speech/speech.py", line 1598, in __init__
self._impl = self._get_impl(impl.SpeechSynthesizer, speech_config, audio_config,
File "/usr/local/lib/python3.9/site-packages/azure/cognitiveservices/speech/speech.py", line 1703, in _get_impl
_impl = synth_type._from_config(speech_config._impl, None if audio_config is None else audio_config._impl)
RuntimeError: Runtime error: Failed to initialize platform (azure-c-shared). Error: 2153
</code></pre>
<p>The error occurs when I try to execute synthesizer.speak_ssml(). Here is the related code:</p>
<pre><code>audio_config = AudioOutputConfig(filename=file_name)
synthesizer = SpeechSynthesizer(speech_config=self.speech_config, audio_config=audio_config)
synthesizer.speak_ssml(self.input_data['text'])
</code></pre>
<p>Interestingly, this issue doesn't occur in my local environment. Additionally, if I build the image locally and deploy it to Cloud Run, I don't encounter this error.</p>
<p>My local environment is:</p>
<ul>
<li>MacOS 11.6.2</li>
<li>Docker v20.10.10</li>
</ul>
<p>However, when I build it with CloudBuild and deploy it to Cloud Run, I get the above error. I have tried the following to resolve it:</p>
<ul>
<li>Clearing the kaniko cache</li>
<li>Switching from 'kaniko' to 'gcr.io/cloud-builders/docker'</li>
</ul>
<p>Neither of these attempts resolved the issue. Considering the circumstances under which the error occurs, I suspect there might be a problem with CloudBuild, but I can't pinpoint the exact cause. If there are any other potential solutions I could try, I would greatly appreciate your advice.</p>
<h2>Update 2023-07-18</h2>
<pre><code>FROM python:3.11
WORKDIR /app
RUN apt-get update && \
apt-get install -y build-essential libssl-dev ca-certificates libasound2 wget && \
wget -O - https://www.openssl.org/source/openssl-1.1.1u.tar.gz | tar zxf - && \
cd openssl-1.1.1u && \
./config --prefix=/usr/local && \
make -j $(nproc) && \
make install_sw install_ssldirs && \
ldconfig -v && \
export SSL_CERT_DIR=/etc/ssl/certs && \
cd ../ && \
rm -rf openssl-1.1.1u && \
pip install --no-cache-dir azure-cognitiveservices-speech==1.30.0
COPY . /app
CMD ["python3", "app.py"]
</code></pre>
<pre><code>
import os
import azure.cognitiveservices.speech as speechsdk
# This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION"
speech_config = speechsdk.SpeechConfig(subscription=os.environ.get('SPEECH_KEY'), region=os.environ.get('SPEECH_REGION'))
audio_config = speechsdk.audio.AudioOutputConfig(use_default_speaker=True)
print('key:'+os.environ.get('SPEECH_KEY'))
print('region:'+os.environ.get('SPEECH_REGION'))
# The language of the voice that speaks.
speech_config.speech_synthesis_voice_name='en-US-JennyNeural'
speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_config)
# Get text from the console and synthesize to the default speaker.
text = "Hello world!"
speech_synthesis_result = speech_synthesizer.speak_text_async(text).get()
if speech_synthesis_result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
print("Speech synthesized for text [{}]".format(text))
elif speech_synthesis_result.reason == speechsdk.ResultReason.Canceled:
cancellation_details = speech_synthesis_result.cancellation_details
print("Speech synthesis canceled: {}".format(cancellation_details.reason))
if cancellation_details.reason == speechsdk.CancellationReason.Error:
if cancellation_details.error_details:
print("Error details: {}".format(cancellation_details.error_details))
print("Did you set the speech resource key and region values?")
</code></pre>
<p>Code from <a href="https://learn.microsoft.com/en-us/azure/cognitive-services/speech-service/get-started-text-to-speech?tabs=linux%2Cterminal&pivots=programming-language-python" rel="nofollow noreferrer">MS document</a></p>
<h3>Error</h3>
<pre><code>Speech synthesis canceled: CancellationReason.Error
Error details: Connection failed (no connection to the remote host). Internal error: 1. Error details: Failed with error: WS_OPEN_ERROR_UNDERLYING_IO_OPEN_FAILED
wss://southeastasia.tts.speech.microsoft.com/cognitiveservices/websocket/v1
X-ConnectionId: c4955d953f8e480c906061e6219eb8fd USP state: Sending. Received audio size: 0 bytes.
Did you set the speech resource key and region values?
</code></pre>
<p>I had set SPEECH_KEY and SPEECH_REGION; they were printed to the console. However, I still got the error. Please help me.</p>
|
<python><azure><google-cloud-run><google-cloud-build>
|
2023-06-24 04:10:28
| 2
| 3,104
|
Nori
|
76,544,127
| 10,293,576
|
Python convert list of IPs inrange of IPs into dash
|
<p>Given list of IPs</p>
<pre><code>list_of_ips = ['10.0.0.10', '10.0.0.11', '10.0.0.12', '10.0.0.13', '10.0.0.40', '10.0.0.43', '10.0.0.44', '10.0.0.45', '10.0.0.46', '10.0.0.47', '10.0.0.48', '10.0.0.49', '10.0.0.50', '10.0.0.51', '10.0.0.52', '10.0.0.53', '10.0.0.54', '10.0.0.55', '10.0.0.56', '10.0.0.57', '10.0.0.58', '10.0.0.59', '10.0.0.60']
</code></pre>
<p>I want to collapse the list into ranges of IPs joined with a dash where the IPs are consecutive, and single IPs otherwise.</p>
<p>Example:</p>
<pre><code>output = ['10.0.0.10-10.0.0.13', '10.0.0.40', '10.0.0.43-10.0.0.60']
</code></pre>
<p>Here is the partially working code; for some reason I can't get the last range, <code>10.0.0.43 - 10.0.0.60</code>.</p>
<p>Not sure why it is not being added:</p>
<pre><code>def compare_ips(ip1, ip2):
last_octet1 = int(ip1.split('.')[-1]) # splits the ip and grabs the last octet
last_octet2 = int(ip2.split('.')[-1]) # splits the ip and grabs the last octet
if last_octet1 > last_octet2:
return -1
if last_octet1 < last_octet2:
if (last_octet2 - last_octet1) == 1:
return 1
else:
print("More then 1")
return 99
return 0
range, group = [], []
for a, b in zip(list_of_ips, list_of_ips[1:]):
check = compare_ips(a, b)
if check == 1:
group.append(a)
continue
elif check == 99:
if not a in group:
check2 = compare_ips(a, b)
if check2 == 1:
group.append(a)
continue
else:
group.append(a)
if not b in range:
range.append(b)
if len(group) > 1:
range.append("{}-{}".format(group[0],group[-1]))
group = []
</code></pre>
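Rather than comparing last octets by hand (which also breaks across octet boundaries like `10.0.0.255` → `10.0.1.0`), one compact approach is to convert each address to its integer form with the stdlib `ipaddress` module and group consecutive runs via the classic `value - index` trick:

```python
import ipaddress
from itertools import groupby

def collapse_ips(ips):
    """Collapse runs of consecutive IPv4 addresses into 'first-last' strings."""
    nums = sorted(int(ipaddress.IPv4Address(ip)) for ip in ips)
    out = []
    # Within a run of consecutive integers, (value - index) is constant.
    for _, run in groupby(enumerate(nums), key=lambda t: t[1] - t[0]):
        block = [value for _, value in run]
        first = ipaddress.IPv4Address(block[0])
        last = ipaddress.IPv4Address(block[-1])
        out.append(str(first) if first == last else f"{first}-{last}")
    return out

list_of_ips = ['10.0.0.10', '10.0.0.11', '10.0.0.12', '10.0.0.13', '10.0.0.40',
               '10.0.0.43', '10.0.0.44', '10.0.0.45', '10.0.0.46', '10.0.0.47',
               '10.0.0.48', '10.0.0.49', '10.0.0.50', '10.0.0.51', '10.0.0.52',
               '10.0.0.53', '10.0.0.54', '10.0.0.55', '10.0.0.56', '10.0.0.57',
               '10.0.0.58', '10.0.0.59', '10.0.0.60']
print(collapse_ips(list_of_ips))
# → ['10.0.0.10-10.0.0.13', '10.0.0.40', '10.0.0.43-10.0.0.60']
```

The integer comparison also fixes the likely bug in the original: the loop only appends a range when a gap (`check == 99`) is seen, so the final run has no following gap to flush it.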
|
<python><python-3.x><list><range><ip>
|
2023-06-24 00:35:46
| 2
| 301
|
miu
|
76,543,909
| 2,196,270
|
How to create a polars column listing duplicates of another column
|
<p>I have a hard time searching for the answer to this, as I find it hard to put into words.</p>
<p>I'm trying to aggregate multiple listings of files on disks, some of which have the same files. I want only one row for a given file, and a separate column with the disks that file may be on.</p>
<p>Say I have the following DataFrame:</p>
<pre class="lang-py prettyprint-override"><code>recordings = pl.DataFrame(
{
"disk": ["NT23", "NT24", "NT23", "NT24"],
"path_on_disk": ["file_a.txt", "file_a.txt", "file_b.txt", "file_b.txt"],
"other_data": [2.0, 2.0, 3.0, 3.0],
}
)
</code></pre>
<p>Which looks something like this (SO doesn't like terminal characters I guess):</p>
<pre class="lang-none prettyprint-override"><code>┌──────┬──────────────┬────────────┐
│ disk ┆ path_on_disk ┆ other_data │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ f64 │
╞══════╪══════════════╪════════════╡
│ NT23 ┆ file_a.txt ┆ 2.0 │
│ NT24 ┆ file_a.txt ┆ 2.0 │
│ NT23 ┆ file_b.txt ┆ 3.0 │
│ NT24 ┆ file_b.txt ┆ 3.0 │
└──────┴──────────────┴────────────┘
</code></pre>
<ul>
<li>Actual df has > 500k rows</li>
<li>Actual df has more columns, but if <code>path_on_disk</code> matches in two or more rows then we will assume that the rest of the fields in those rows match, except for <code>disk</code>.</li>
</ul>
<p>I want to:</p>
<ol>
<li>Find all rows where <code>path_on_disk</code> are the same</li>
<li>Make a new column <code>disks</code> containing the different values of <code>disk</code> joined together with <code>", ".join()</code></li>
</ol>
<p>Something like this:</p>
<pre class="lang-none prettyprint-override"><code>┌──────────────┬────────────┬────────────┐
│ path_on_disk ┆ disks ┆ other_data │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ f64 │
╞══════════════╪════════════╪════════════╡
│ file_a ┆ NT23, NT24 ┆ 2.0 │
│ file_b ┆ NT23, NT24 ┆ 3.0 │
└──────────────┴────────────┴────────────┘
</code></pre>
<p>I've figured out that I can use <code>recordings.group_by("path_on_disk")</code> to accomplish the first objective:</p>
<pre class="lang-py prettyprint-override"><code>for group_df in recordings.group_by("path_on_disk"):
if len(group_df) > 1:
print(group_df)
break
</code></pre>
<p>This shows the first found group of rows where <code>path_on_disk</code> match.</p>
<p>I tried the following, but got an error:</p>
<pre class="lang-py prettyprint-override"><code>def merge_disknames(df: pl.DataFrame):
return ", ".join(sorted(df["disk"]))
recordings.group_by("path_on_disk").map_groups(merge_disknames).rename("disks")
</code></pre>
<pre class="lang-none prettyprint-override"><code>PanicException: Could net get DataFrame attribute '_df'. Make sure that you return a DataFrame object.: PyErr { type: <class 'AttributeError'>, value: AttributeError("'str' object has no attribute '_df'"), traceback: None }
</code></pre>
|
<python><python-polars>
|
2023-06-23 23:09:23
| 1
| 507
|
RandyP
|
76,543,865
| 2,732,959
|
Python: How do I import all exception from some module?
|
<p>I have the following scenario:
Program A uses library L. Library L defines some exceptions. If an exception defined in L is raised in A, I want to serialize it into a JSON payload and send it via some channel (e.g. Redis). This exception is then received by some program B, where I want to deserialize the exception and do something with it. In order to be able to deserialize the exception, B needs to know all the exceptions defined in L. How would I get all exceptions from L? Would it make sense to loop through all members of L and check if they are subclasses of Exception?</p>
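Looping over the module's members and keeping the `Exception` subclasses is a reasonable approach; a name-keyed registry then gives B what it needs to rebuild the exception. A sketch, using the stdlib `json` module as a stand-in for library L:

```python
import inspect
import json  # stands in for library L

def exception_registry(module):
    """Map names to the Exception subclasses exposed by `module`."""
    return {
        name: obj
        for name, obj in vars(module).items()
        if inspect.isclass(obj) and issubclass(obj, Exception)
    }

registry = exception_registry(json)
print(sorted(registry))  # e.g. ['JSONDecodeError']

# Program B can then rebuild an exception from a serialized payload
# (payload shape here is hypothetical -- adapt to your wire format):
payload = {"type": "JSONDecodeError", "args": ["Expecting value", "", 0]}
exc = registry[payload["type"]](*payload["args"])
```

Note that `vars(module)` only sees names in the module's top-level namespace; if L is a package whose exceptions live in submodules, you would need to walk those submodules (e.g. with `pkgutil.walk_packages`) as well.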
|
<python><exception>
|
2023-06-23 22:58:33
| 1
| 715
|
John Smith
|
76,543,862
| 1,914,781
|
print ascii symbol with hex code value in python
|
<p>What's the proper way to print an ASCII symbol given its hex code? Say the symbol └ needs to be printed out; its hex value is 0xC0.</p>
<p>ascii table reference is <a href="https://www.matematica.pt/en/useful/complete-ascii-table.php" rel="nofollow noreferrer">here</a>!</p>
<p>The code below does not work:</p>
<pre><code>x = 0xC0
print(ord(x))
</code></pre>
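Two things are going on here: `ord()` maps a character to its code point (the inverse of what's wanted — `chr()` goes the other way), and 0xC0 is `└` only in the legacy IBM CP437 "extended ASCII" table that the linked page shows, not in Unicode. A sketch of both conversions:

```python
x = 0xC0

# chr() converts a code point to its character (ord() is the inverse).
print(chr(x))                       # 'À' -- Unicode/Latin-1 at 0xC0

# The └ glyph at 0xC0 comes from the CP437 code page, so decode the
# raw byte with that codec to get it:
print(bytes([x]).decode("cp437"))   # '└' (U+2514)
```

So for the table on that reference page, `bytes([value]).decode("cp437")` is the mapping that reproduces the listed symbols above 0x7F.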
|
<python>
|
2023-06-23 22:57:34
| 1
| 9,011
|
lucky1928
|
76,543,794
| 9,092,669
|
How to convert text into a CSV by comma delimiter and double quotes?
|
<p>This is my current string value:</p>
<pre><code>"""
|name, model, os
|A,"I PAD (10.0"", 2020, Wi-Fi)",OS_A
"""
</code></pre>
<p>And I would like the output to be like below, and eventually save as a csv:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">name</th>
<th style="text-align: center;">model</th>
<th style="text-align: right;">os</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">A</td>
<td style="text-align: center;">I PAD (10.0"", 2020, Wi-Fi)</td>
<td style="text-align: right;">OS</td>
</tr>
</tbody>
</table>
</div>
<p>I am getting tripped up because the string in the model field has commas and double quotes inside of it. My current thought is to regex out any offending text, but is there another solution?</p>
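The doubled quotes (`""`) inside a quoted field are standard CSV escaping (RFC 4180), so no regex should be needed — the stdlib `csv` module parses this directly, collapsing each `""` to a single `"`. A sketch:

```python
import csv
import io

raw = 'name,model,os\nA,"I PAD (10.0"", 2020, Wi-Fi)",OS_A\n'

rows = list(csv.reader(io.StringIO(raw)))
print(rows[1])  # → ['A', 'I PAD (10.0", 2020, Wi-Fi)', 'OS_A']
```

Writing the rows back out with `csv.writer` re-doubles the quotes automatically. Since the question is tagged pyspark: Spark's CSV reader defaults to backslash escaping, so passing `escape='"'` to `spark.read.csv` (alongside `quote='"'`) should make it honor the same doubled-quote convention.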
|
<python><csv><pyspark>
|
2023-06-23 22:36:12
| 4
| 395
|
buttermilk
|
76,543,675
| 1,545,014
|
PyParsing: Conditional parsing, depending on value
|
<p>I need to parse Touchstone files (<a href="https://www.ibis.org/connector/touchstone_spec11.pdf" rel="nofollow noreferrer">version 1.1</a>, and <a href="https://ibis.org/touchstone_ver2.0/touchstone_ver2_0.pdf" rel="nofollow noreferrer">version 2.0</a>), but these have a strange rule in the syntax (see page 11 in the 1.1 spec, the top paragraph starting with <code>Note</code>).</p>
<p>So, I need to change the syntax rule from 'data points' to 'noise parameters', depending on the first <code>float</code> of the line. Like in this example:</p>
<pre><code>! NETWORK PARAMETERS
2 .95 -26 3.57 157 .04 76 .66 -14
22 .60 -144 1.30 40 .14 40 .56 -85
! NOISE PARAMETERS (the downward jump from 22 - line above - to 4 - below - should trigger the change of syntax)
4 .7 .64 69 .38
18 2.7 .46 -33 .40
</code></pre>
<p>(The lines starting with <code>!</code> are comments and are optional)</p>
<p>There is no other parameter in the data file to help. (This only occurs in 'old' 1.x version of the spec. In the 2.0 version (which still has to be compatible with 1.*), a keyword was introduced).</p>
<p>How can I implement this in a single grammar? (I suspect the only solution is a line-by-line parser?)</p>
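Since the switch depends on a runtime value, a single context-free grammar cannot express it directly; a line-by-line pre-pass that splits the body into two sections, each of which can then be fed to its own pyparsing grammar, is one workable approach. A rough sketch (function name and the exact "frequency decreased" trigger are illustrative):

```python
def split_sections(lines):
    """Split Touchstone 1.x data lines into (network, noise) sections.

    The noise-parameter section starts when the frequency column decreases.
    """
    network, noise = [], []
    last_freq, in_noise = None, False
    for line in lines:
        line = line.split('!', 1)[0].strip()   # drop trailing comments
        if not line:
            continue
        vals = [float(tok) for tok in line.split()]
        if last_freq is not None and vals[0] < last_freq:
            in_noise = True                     # frequencies restarted lower
        last_freq = vals[0]
        (noise if in_noise else network).append(vals)
    return network, noise
```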
|
<python><grammar><pyparsing>
|
2023-06-23 21:55:52
| 1
| 5,472
|
jcoppens
|
76,543,612
| 5,942,100
|
Organizing and create column names with values using Pandas
|
<p>I would like to organize and create column names with values using Pandas</p>
<p><strong>Data</strong></p>
<pre><code>name Box stat size
dd1 HDL FALSE 3
dd1 LDL FALSE 3
dd2 LDL FALSE 4
dd3 HDL TRUE 1
dd5 HDL FALSE 5
</code></pre>
<p><strong>Desired</strong></p>
<pre><code>Box TRUE FALSE
HDL 1 8
LDL 0 7
</code></pre>
<p><strong>Doing</strong></p>
<pre><code>pivot_df = pd.pivot_table(df, values='size', index='Box', columns='stat', aggfunc=sum, fill_value=0)
</code></pre>
<p>However, the first column is not visible. Any suggestion is appreciated.</p>
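A hedged sketch with the sample data (assuming `stat` is boolean; if it holds the strings `'TRUE'`/`'FALSE'`, map it to bool first). `reset_index()` turns the `Box` index back into a visible column, and `reindex` guarantees both a True and a False column even when one value never occurs:

```python
import pandas as pd

df = pd.DataFrame({
    "name": ["dd1", "dd1", "dd2", "dd3", "dd5"],
    "Box":  ["HDL", "LDL", "LDL", "HDL", "HDL"],
    "stat": [False, False, False, True, False],
    "size": [3, 3, 4, 1, 5],
})
out = (df.pivot_table(index="Box", columns="stat", values="size",
                      aggfunc="sum", fill_value=0)
         .reindex(columns=[True, False], fill_value=0)
         .reset_index())
# Box 'HDL': True 1, False 8; 'LDL': True 0, False 7
```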
|
<python><pandas><numpy>
|
2023-06-23 21:41:53
| 1
| 4,428
|
Lynn
|
76,543,458
| 1,989,579
|
Converting a real-time MP3 audio stream to 8000/mulaw in Python
|
<p>I'm working with an API that streams real-time audio in the MP3 format (44.1kHz/16bit) and I need to convert this stream to 8000/mulaw. I've tried several solutions, but all have run into issues due to the structure of the MP3 data.</p>
<p>My current approach is to decode and process each chunk of audio as it arrives, using PyDub and Python's audioop module. However, I often encounter errors that seem to arise from trying to decode a chunk of data that doesn't contain a complete MP3 frame.</p>
<p>Here's a simplified version of my current code:</p>
<pre><code>from pydub import AudioSegment
from pydub.exceptions import CouldntDecodeError  # needed for the except clause below
import audioop
import io

class StreamConverter:
    def __init__(self):
        self.state = None
        self.buffer = b''

    def convert_chunk(self, chunk):
        # Add the chunk to the buffer
        self.buffer += chunk
        # Try to decode the buffer
        try:
            audio = AudioSegment.from_mp3(io.BytesIO(self.buffer))
        except CouldntDecodeError:
            return None
        # If decoding was successful, empty the buffer
        self.buffer = b''
        # Ensure audio is mono
        if audio.channels != 1:
            audio = audio.set_channels(1)
        # Get audio data as bytes
        raw_audio = audio.raw_data
        # Sample rate conversion
        chunk_8khz, self.state = audioop.ratecv(raw_audio, audio.sample_width, audio.channels, audio.frame_rate, 8000, self.state)
        # μ-law conversion
        chunk_ulaw = audioop.lin2ulaw(chunk_8khz, audio.sample_width)
        return chunk_ulaw

# This is then used as follows:
for chunk in audio_stream:
    if chunk is not None:
        ulaw_chunk = converter.convert_chunk(chunk)
        # send ulaw_chunk to twilio api
</code></pre>
<p>I believe my issue stems from the fact that MP3 data is structured in frames, and I can't reliably decode the audio if a chunk doesn't contain a complete frame. Also, a frame could potentially be split between two chunks, so I can't decode them independently.</p>
<p>Does anyone have any ideas on how I can handle this? Is there a way to process an MP3 stream in real-time while converting to 8000/mulaw, possibly using a different library or approach?</p>
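One common workaround is to cut the buffer at MP3 frame-sync markers (a 0xFF byte followed by a byte whose top three bits are set), decode only the part before the last marker, and carry the remainder over to the next chunk. A very rough sketch; real code should validate the full frame header, since this bit pattern can also occur by accident inside audio data:

```python
def split_on_last_sync(buf: bytes):
    """Split buf into (decodable, leftover) at the last MP3 frame-sync marker."""
    for i in range(len(buf) - 2, -1, -1):
        if buf[i] == 0xFF and buf[i + 1] & 0xE0 == 0xE0:
            return buf[:i], buf[i:]
    return b"", buf   # no sync found: keep buffering
```

`convert_chunk` would then decode only the `decodable` part and prepend `leftover` to the next incoming chunk.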
|
<python><audio><twilio><mp3><mu-law>
|
2023-06-23 21:05:31
| 1
| 3,512
|
user60108
|
76,543,388
| 5,165,649
|
REGEX of hive output is different from spark sql regex output
|
<p>I researched many similar questions, but how to correct the pattern so that it works in a Spark SQL statement is not clearly explained.</p>
<pre><code>test1= spark.sql("""SELECT regexp_extract(UPPER("This is the first sentence.This is second sentence. This is the third sentence"),'\\.([^\.]+)\\.',1) as s""")
test1=test1.toPandas()
test1
</code></pre>
<p><a href="https://i.sstatic.net/0Ffvb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0Ffvb.png" alt="enter image description here" /></a></p>
<p>But in Hive I wrote</p>
<pre><code>SELECT regexp_extract(UPPER("This is the first sentence.This is second sentence. This is the third sentence"),'\\.([^\.]+)\\.',1)
</code></pre>
<p>the output is different</p>
<p><a href="https://i.sstatic.net/hKvPQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hKvPQ.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/7Uq3L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7Uq3L.png" alt="enter image description here" /></a></p>
<p>Above are the versions I am using.
I would like the same output in Spark SQL as I get in Hive; how do I extract the second sentence?</p>
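The pattern itself does capture the second sentence; the Hive/Spark difference most likely comes from an extra layer of string-literal unescaping in Spark SQL (controlled by `spark.sql.parser.escapedStringLiterals`), so doubling the backslashes again in the pattern string is a common fix. The regex behaviour can be sanity-checked in plain Python:

```python
import re

s = ("This is the first sentence.This is second sentence."
     " This is the third sentence").upper()
m = re.search(r"\.([^.]+)\.", s)   # text between the first and second dot
print(m.group(1))                  # THIS IS SECOND SENTENCE
```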
|
<python><regex><pyspark><hive>
|
2023-06-23 20:48:46
| 1
| 487
|
viji
|
76,543,326
| 6,087,667
|
Code autocomplete not working in the debug console of pycharm
|
<p>When I enter into the debug mode and type something in the python debug console, it doesn't show the popup menu with suggestions. It shows suggestions only when one presses the <code>TAB</code>. How can I make the popups to show up automatically, just like in the editor?</p>
<p><a href="https://i.sstatic.net/f49xP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f49xP.png" alt="enter image description here" /></a></p>
|
<python><debugging><pycharm>
|
2023-06-23 20:33:36
| 0
| 571
|
guyguyguy12345
|
76,543,056
| 971,602
|
How to distribute pandas dataframe operations
|
<p>I am trying to parallelize dataframe operations for faster performance using a Dask cluster with a 6GB RAM limit per node. Here is the code I am using.</p>
<p>First I am using pandas as a baseline.</p>
<pre class="lang-py prettyprint-override"><code>from dask.distributed import Client, LocalCluster
import dask.dataframe as dd
import pandas as pd
def print_cnt(df):
    print(len(df))
# 170MB parquet, 13 Million Records
df = pd.read_parquet('trip-data.parquet', engine='pyarrow')
print_cnt(df[df.tip_amount == 0])
print_cnt(df[df.payment_type == 1])
print_cnt(df[df.trip_distance > 10])
# 5717616
# 7834821
# 724979
# Wall time: 4.92 s
</code></pre>
<pre class="lang-py prettyprint-override"><code>df.info(memory_usage='deep')
# dtypes: ...
# memory usage: 2.9 GB
</code></pre>
<p>So far, so good. Now I am trying to parallelize the pandas operation using Dask:</p>
<pre class="lang-py prettyprint-override"><code>def evaluate(df, condition):
    return len(df[condition])
cluster = LocalCluster(n_workers=2, threads_per_worker=4, memory_limit='4GB')
client = Client(cluster)
futures = [
    client.submit(evaluate, df, df.tip_amount == 0),
    client.submit(evaluate, df, df.payment_type == 1),
    client.submit(evaluate, df, df.trip_distance > 10)
]
res = [fut.result() for fut in futures]
</code></pre>
<p>This ends up hanging for a long time and never finishes.</p>
<p>Finally I try using dask dataframes instead:</p>
<pre class="lang-py prettyprint-override"><code># 170MB parquet, 13 Million Records
ddf = dd.read_parquet('trip-data.parquet', engine='pyarrow')
ddf = ddf.repartition(npartitions=8)
print_cnt(ddf[ddf.tip_amount == 0])
print_cnt(ddf[ddf.payment_type == 1])
print_cnt(ddf[ddf.trip_distance > 10])
</code></pre>
<p>but I end up getting this message:</p>
<pre><code>distributed.nanny - WARNING - Worker exceeded 95% memory budget. Restarting
distributed.nanny - WARNING - Restarting worker
distributed.nanny - WARNING - Worker exceeded 95% memory budget. Restarting
distributed.nanny - WARNING - Restarting worker
</code></pre>
<pre class="lang-py prettyprint-override"><code>ddf.info(memory_usage='deep')
# dtypes: ...
# memory usage: 1.9 GB
</code></pre>
<p>How can I successfully parallelize these operations?</p>
|
<python><pandas><dataframe><dask>
|
2023-06-23 19:40:57
| 1
| 1,203
|
locorecto
|
76,542,999
| 857,031
|
How to handle different data for KMeans: standardize, drop, or something else?
|
<p>I'm going to build a KMeans model using records with different data types. I plan to use feature scaling to standardize the money amounts (like last 3 periods, rolling year value). For other amounts, like the number of years active, would I do the same? Finally, I have 0/1 bit data. I assume I should remove them.</p>
<p>Is this the right approach for this data? Thanks.</p>
<p>Edit:
Added picture of data</p>
<p><a href="https://i.sstatic.net/NpvwJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NpvwJ.png" alt="data example" /></a></p>
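Generally, yes: z-scoring is usually applied to all continuous features alike (money amounts and counts such as years active), so that no single scale dominates the Euclidean distance; whether 0/1 flags are dropped or kept un-scaled is a judgment call rather than a fixed rule. A dependency-free sketch of the standardization step itself:

```python
def zscore(values):
    """Standardize a numeric column to mean 0 and standard deviation 1."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / std for v in values]

print(zscore([10, 20, 30]))   # symmetric around 0
```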
|
<python><k-means><data-preprocessing>
|
2023-06-23 19:29:12
| 0
| 1,878
|
jabs
|
76,542,939
| 15,285,210
|
dbus service does not register to the dbus
|
<p>I created a service using dbus-python which outputs the time. The problem is that the service is not registered on the bus, and when I call it I get the error</p>
<pre><code>dbus.exceptions.DBusException: org.freedesktop.DBus.Error.ServiceUnknown: The name com.gkbrk.Time was not provided by any .service files
</code></pre>
<p>This is the code:
Service.py</p>
<pre><code>import dbus
import dbus.service
import time

class Time(dbus.service.Object):
    def __init__(self):
        self.bus = dbus.SessionBus()
        name = dbus.service.BusName('com.gkbrk.Time', bus=self.bus)
        super().__init__(name, '/Time')

    @dbus.service.method('com.gkbrk.Time', out_signature='s')
    def CurrentTime(self):
        """Use strftime to return a formatted timestamp
        that looks like 23-02-2018 06:57:04."""
        formatter = '%d-%m-%Y %H:%M:%S'
        return time.strftime(formatter)

if __name__ == '__main__':
    import dbus.mainloop.glib
    from gi.repository import GLib

    dbus.mainloop.glib.DBusGMainLoop(set_as_default=True)
    loop = GLib.MainLoop()
    object = Time()
    loop.run()
</code></pre>
<p>Client.py</p>
<pre><code>import dbus
bus = dbus.SessionBus()
time = bus.get_object('com.gkbrk.Time', '/Time')
curr = time.CurrentTime()
print('The current time is', curr)
</code></pre>
<p>How do I solve this? I run service.py and nothing happens; if I list the D-Bus services, my service does not appear.</p>
|
<python><linux><ipc><dbus>
|
2023-06-23 19:19:14
| 1
| 347
|
varo111
|
76,542,862
| 6,346,514
|
python, assign a path of an excel file without know the full file name
|
<p>I am trying to assign a dynamically dated Excel file path to my variable <code>source_path</code>.</p>
<p>However, I don't know what the date in the file name will be; it could be anything. Is there a way to search my drive for just "weekly time report", ignoring the date, and read that file into source path?</p>
<pre><code>from pathlib import Path # to work with file paths
import pandas as pd
import pathlib
import re
import os
import shutil
source_path = Path(r'...desktop...path\weekly time report 6.19.23.xslx')
dest_path = Path(r'networkdrive\weekly time report 6.19.23')
shutil.copyfile(source_path ,dest_path )
</code></pre>
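`pathlib`'s `glob` can match the fixed prefix and ignore the date suffix. A sketch (the folder location and the "pick the newest by modification time" tie-break are assumptions):

```python
from pathlib import Path

def find_report(folder: Path):
    """Return the newest file matching 'weekly time report*.xlsx', or None."""
    matches = list(folder.glob("weekly time report*.xlsx"))
    return max(matches, key=lambda p: p.stat().st_mtime, default=None)

# e.g. source_path = find_report(Path.home() / "Desktop")
```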
|
<python>
|
2023-06-23 19:03:01
| 1
| 577
|
Jonnyboi
|
76,542,797
| 1,025,177
|
Is "suggesting" a type for a generic Python methods possible in VSCode?
|
<p>In C# it's possible to "suggest" which type a generic function uses by manually supplying it, instead of letting the compiler guess it from context.</p>
<p>So a method <code>T func<T>()</code> would return a string if used like <code>var x = func<String>()</code>.</p>
<p>I'm looking for the same effect for type hinting purposes in VSCode.<br />
The reason being, that I have a lookup function that returns a <code>tkinter.Widget</code>, but depending on which Widget it looks up, it could be any variant of that.</p>
<p>While just having the method type hinted as <code>-> tkinter.Widget:</code> works for the majority of properties, I'd like to be able to specifically tell VSCode which type to expect to get specific properties too.<br />
I've found out that I can "force" a type onto a variable, by using <code>widget: tkinter.Label = getWidget()</code> to tell VSCode that the result is to be treated as a <code>tkinter.Label</code>, but at this point I'm simply curious if an approach closer to C#'s, maybe with an optional named parameter, exists as well.</p>
<p>I've found some posts about <code>typing.TypeVar</code>, but I'm not fully getting yet how that works nor if it's even relevant to what I'm looking for.</p>
<p>Edit: code example from C#</p>
<pre class="lang-cs prettyprint-override"><code>public class JsonConfig
{
    public static T Load<T>(string jsonFile = null) { ... }
}
</code></pre>
<p>This is a snippet from a class I wrote to easily use <code>Newtonsoft.Json</code> for (de)serializing my own simple settings classes.<br />
The useful part about C#'s Generics is that this method can accept any class I throw at it at runtime, as long as the underlying (de)serializer can make sense of it.<br />
In my case this meant any simple datatype up to <code>Dictionary<T,T></code> and <code>List<T></code> and any structure that can be constructed with those.</p>
<p>If I call the method like</p>
<pre><code>var data = ConfigInstance.Load<List<String>>();
</code></pre>
<p>then the IDE knows that the expected result ending up in the variable must be <code>List<String></code> and from there on treats the variable as such for intellisense etc.</p>
<p>In VSCode Python the closest I got to this behaviour is setting a default return type like</p>
<pre><code>def Load(jsonFile: str = None) -> tkinter.Widget: ...
</code></pre>
<p>and if needed "forcing" the variable I'm loading into to take the expected derived type</p>
<pre><code>frame: tkinter.Frame = Load()
</code></pre>
<p>At this point I'm just wondering if there's another approach to this, maybe by providing the expected return type with a named variable, or however Pylance does type hinting in VSCode.</p>
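Python's `typing` module offers roughly the same mechanism: a `TypeVar` tied to a parameter of type `Type[T]` lets the caller pass the expected class explicitly, which is the closest analogue of C#'s `Load<T>()`. A hedged sketch (`load` and its body are illustrative stand-ins, not a real API):

```python
from typing import Optional, Type, TypeVar

T = TypeVar("T")

def load(cls: Type[T], json_file: Optional[str] = None) -> T:
    # placeholder body: a real implementation would deserialize json_file
    return cls()

x = load(dict)   # Pylance infers x: dict
y = load(list)   # Pylance infers y: list
```

For the widget lookup, `TypeVar("W", bound=tkinter.Widget)` would additionally constrain the passed class to widget subclasses while still narrowing the return type.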
|
<python><tkinter><python-typing>
|
2023-06-23 18:50:45
| 1
| 364
|
BloodyRain2k
|
76,542,789
| 480,118
|
pandas: find or mark first occurrence of a new high over a lookback window
|
<p>I have the following code which adds a column to indicate whether the closing value is a new 52-week high.</p>
<pre><code> df1['52wkh'] = df1['high'].rolling(min_periods=1, window=252).max()
df1['is_52wkch'] = df1['close'] >= df1.shift(1)['52wkh']
</code></pre>
<p>This produces data that looks like this:</p>
<pre><code> close high 52wkh is_52wkch
date
2021-11-22 4682.939941 4743.830078 4743.830078 False
2021-12-27 4791.189941 4791.490234 4791.490234 True
2021-12-28 4786.350098 4807.020020 4807.020020 False
2021-12-30 4778.729980 4808.930176 4808.930176 False
2022-01-04 4793.540039 4818.620117 4818.620117 False
2023-06-12 4338.930176 4340.129883 4340.129883 True
2023-06-13 4369.009766 4375.370117 4375.370117 True
2023-06-14 4372.589844 4391.819824 4391.819824 False
2023-06-15 4425.839844 4439.200195 4439.200195 True
2023-06-16 4409.589844 4448.470215 4448.470215 False
</code></pre>
<p>What i would like to do is actually mark true only the first occurrence of a high within that rolling window. so the table should look like this actually:</p>
<pre><code> close high 52wkh is_52wkch
date
2021-11-22 4682.939941 4743.830078 4743.830078 False
2021-12-27 4791.189941 4791.490234 4791.490234 True
2021-12-28 4786.350098 4807.020020 4807.020020 False
2021-12-30 4778.729980 4808.930176 4808.930176 False
2022-01-04 4793.540039 4818.620117 4818.620117 False
2023-06-12 4338.930176 4340.129883 4340.129883 True
2023-06-13 4369.009766 4375.370117 4375.370117 False
2023-06-14 4372.589844 4391.819824 4391.819824 False
2023-06-15 4425.839844 4439.200195 4439.200195 False
2023-06-16 4409.589844 4448.470215 4448.470215 False
</code></pre>
<p>Im thinking if i can count the number of these occurrences over the 52 week window, then i could filter for a value of 1. Something like below:</p>
<p><code>df1.groupby((df1['is_52wkch'] != df1['is_52wkch'].shift(1)).cumsum()).cumcount()+1</code></p>
<p>That doesn't do the trick for me though, because I think it has to be a conditional count applied to the rolling 52-week windows. In other words, count only True values over the last 52 weeks.
I'm struggling to figure out the correct API or methods to do this efficiently... any help would be appreciated.</p>
<hr />
<p><strong>UPDATE: This might be a possible solution im thinking:</strong></p>
<pre><code>df1['52wkh'] = df1['high'].rolling(min_periods=1, window=252).max()
#mark all 52 week high that meets this condition
df1['is_52wkch'] = df1['close'] >= df1.shift(1)['52wkh']
#count the occurrences of above over the last 52 weeks
df1['is_52wkch_cnt'] = df1.rolling(window=252)['is_52wkch'].sum()
</code></pre>
<p>So I think after this, I would be able to filter on is_52wkch==True and is_52wkch_cnt == 1 to get the first 52-week high marked within the last 52 weeks</p>
<p>Does this look correct?</p>
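The update looks workable. A hedged miniature check of the same idea (window 3 instead of 252, plain booleans): a flag survives only when it is the sole True inside the trailing window, which is exactly the `is_52wkch & (rolling count == 1)` filter:

```python
import pandas as pd

flags = pd.Series([False, True, False, True, True, False])
window = 3
first_only = flags & (flags.astype(int).rolling(window, min_periods=1).sum() == 1)
print(first_only.tolist())   # [False, True, False, False, False, False]
```

The second and third True values are suppressed because another True already sits inside their trailing window.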
|
<python><pandas><numpy>
|
2023-06-23 18:49:15
| 2
| 6,184
|
mike01010
|
76,542,644
| 3,611,472
|
Manage build number Python script
|
<p>I am writing an extensive Python script on my laptop. At every major change, I commit and push the project into a Git directory.</p>
<p>I have a virtual machine on Azure where I clone the repository and run the script. The script sometimes runs for more than a day and while it’s running, it may happen that I push some changes in the git repository.</p>
<p>At the beginning of the script, I would like to print a build number that uniquely identifies a version of the script, so that I know which version is running.</p>
<p>What is the conventional way to do so? I don’t want to do it manually, of course.</p>
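A common convention is to print the Git commit hash at startup; `git rev-parse` yields it with no manual bookkeeping, since the clone on the VM already carries it. A hedged sketch (it assumes the script runs from inside the clone, and falls back otherwise):

```python
import subprocess

def build_id() -> str:
    """Short commit hash of the checkout the script runs from."""
    try:
        return subprocess.check_output(
            ["git", "rev-parse", "--short", "HEAD"],
            text=True, stderr=subprocess.DEVNULL).strip()
    except (OSError, subprocess.CalledProcessError):
        return "unknown"

print(f"build: {build_id()}")
```

If the script may run outside the clone, a build-time alternative is to bake the version into the package (e.g. with setuptools-scm).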
|
<python><git><version-control>
|
2023-06-23 18:23:02
| 1
| 443
|
apt45
|
76,542,433
| 1,747,834
|
How to wrap an existing module?
|
<p>Our code makes extensive use of the <code>fcntl</code>-module to lock files -- to prevent multiple instances of the program from stepping over each other. This works fine on Linux, but, as we just discovered, there is no fcntl on Windows...</p>
<p>Fortunately, we don't need the multiple-instance safety on Windows and could just <em>fake</em> it there. But replacing the existing use of the <code>fcntl</code> module, its constants (like <code>fcntl.LOCK_EX</code>) and function (<code>fcntl.flock</code>) is something I want to avoid.</p>
<p>So I tried creating an <code>fcntl.py</code> of my own with something like:</p>
<pre><code>import sys
import os
if os.name == 'posix':
    syspath = sys.path
    sys.path = sys.path[1:] # Remove current directory from search path
    import fcntl as realfcntl
    LOCK_EX = realfcntl.LOCK_EX
    LOCK_SH = realfcntl.LOCK_SH
    LOCK_UN = realfcntl.LOCK_UN
    sys.path = syspath
    def flock(fd, mode):
        return realfcntl.flock(fd, mode)
else:
    # Fake it:
    LOCK_EX = -1
    LOCK_SH = -1
    LOCK_UN = -1
    def flock(fd, mode):
        print('Pretending to manipulate locking on FD %s' % fd, file = sys.stderr)
</code></pre>
<p>To my dismay, this fails at import time on Unix, on line 8: <code>module 'fcntl' has no attribute 'LOCK_EX'</code>, which tells me my attempt to trick it into loading the <em>real</em> fcntl failed.</p>
<p>How can a wrapper like mine load the <em>real</em> module being wrapped?</p>
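The trick likely fails because the wrapper itself is named `fcntl`: by the time its body runs, the half-initialized wrapper already sits in `sys.modules`, so `import fcntl` returns it (no `LOCK_EX` yet) and the `sys.path` edit never matters. One way around this, a lightly-tested sketch that resolves the spec explicitly while skipping the script's own directory:

```python
import importlib.machinery
import importlib.util
import sys

def import_shadowed(name):
    """Import the stdlib/site module `name`, even if this file shadows it."""
    saved = sys.modules.pop(name, None)   # hide the wrapper itself
    try:
        spec = importlib.machinery.PathFinder().find_spec(name, sys.path[1:])
        if spec is None:
            return None
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        return module
    finally:
        if saved is not None:
            sys.modules[name] = saved
```

This assumes a regular on-disk layout (it won't cover frozen or zipimport setups).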
|
<python><python-import>
|
2023-06-23 17:41:25
| 2
| 4,246
|
Mikhail T.
|
76,542,432
| 20,220,485
|
How do you group a dataframe based on a column with string values?
|
<p>I am having some trouble using <code>groupby</code> to group a <code>df</code> based on the numeric value in a string. The regular expression <code>(\w+)_\w+</code> should match the number in the string by which I wish to group, however I am unsure how to implement this with <code>groupby</code>.</p>
<p>Any assistance would be appreciated.</p>
<p>Data:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'x':['ab_c_1.0','ab_c_1.1','ab_c_12.0','ab_c_12.1','ab_c_123.0','ab_c_123.1']})
</code></pre>
<p>Desired groupings:</p>
<pre><code> x
0 ab_c_1.0
1 ab_c_1.1
2 ab_c_12.0
3 ab_c_12.1
4 ab_c_123.0
5 ab_c_123.1
</code></pre>
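`groupby` accepts any aligned Series as the key, so the digits can be extracted with `Series.str.extract` and passed straight in. A sketch; note the pattern here targets the digits before the dot, slightly adjusted from `(\w+)_\w+`, which would not isolate them:

```python
import pandas as pd

df = pd.DataFrame({'x': ['ab_c_1.0', 'ab_c_1.1', 'ab_c_12.0',
                         'ab_c_12.1', 'ab_c_123.0', 'ab_c_123.1']})
key = df['x'].str.extract(r'_(\d+)\.', expand=False)
for name, group in df.groupby(key):
    print(name, group['x'].tolist())
```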
|
<python><pandas><regex><group-by>
|
2023-06-23 17:41:21
| 2
| 344
|
doine
|
76,542,312
| 2,956,276
|
Poetry: Can we use `explicit` priority for chained dependencies?
|
<p>Let's say I have three Python packages <code>A</code>, <code>B</code> and <code>C</code>. All of them are managed by poetry and published in a private package repository.
Package <code>B</code> depends on <code>A</code>, package <code>C</code> depends on <code>B</code>, and all of them have additional dependencies on packages from the PyPI repository.</p>
<p>Using poetry 1.5.1 I tried this setup, which doesn't work for me (I've omitted irrelevant parts of the <code>pyproject.toml</code> files for brevity):</p>
<p><code>B/pyproject.toml</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry.dependencies]
A = {version = "*", source = "my-private"}
[[tool.poetry.source]]
name = "PyPI"
priority = "primary"
[[tool.poetry.source]]
name = "my-private"
url = "https://gitlab.com/api/v4/groups/65473942/-/packages/pypi/simple"
priority = "explicit"
</code></pre>
<p><code>C/pyproject.toml</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry.dependencies]
B = {version = "*", source = "my-private"}
[[tool.poetry.source]]
name = "PyPI"
priority = "primary"
[[tool.poetry.source]]
name = "my-private"
url = "https://gitlab.com/api/v4/groups/65473942/-/packages/pypi/simple"
priority = "explicit"
</code></pre>
<p>When I run <code>poetry lock</code> in project <code>B</code>, then it works fine. I mean, the package <code>A</code> is found in <code>my-private</code> repository and all other dependencies are found in PyPI.</p>
<p>But when I run <code>poetry lock</code> in project <code>C</code>, then the command fails because it cannot find the package <code>A</code>. It seems to me that it is trying to install <code>A</code> from PyPI, but there is no such package.</p>
<p><strong>My questions:</strong></p>
<ul>
<li>Is this behavior intentional or is it a poetry bug?</li>
<li>Is it possible to use <code>explicit</code> priority for repositories containing packages with transitive dependencies (<code>C</code> depends on <code>B</code> which depends on <code>A</code> - all stored in private repository)?</li>
</ul>
<p>The only solution I have found so far is not to use <code>explicit</code> priority.
If I change the priority of <code>my-private</code> repository to <code>default</code> (and remove the <code>source</code> attribute in the dependencies section), then it works - all packages are searched in <code>my-private</code> repository, and if they are not found there, then PyPI is used as a "fallback".</p>
<p>But I still wonder if the <code>explicit</code> priority can be used in this case, because it seems safer to me (poetry will fail if my package is not found in <code>my-private</code> repository and a package with the same name from PyPI will not be installed by mistake).
My current solution also does not work properly when I use more than one private repository (if they contain packages with the same name).</p>
<hr />
<p><strong>Update 2024-08-06</strong></p>
<p>The <code>default</code> priority is obsolete these days. The workaround which works for me now (poetry 1.8.3) looks like this:</p>
<p><code>C/pyproject.toml</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry.dependencies]
B = {version = "*", source = "my-private"}
[tool.poetry.group.explicit.dependencies]
A = {version = "*", source = "my-private"}
[[tool.poetry.source]]
name = "my-private"
url = "https://gitlab.com/api/v4/groups/65473942/-/packages/pypi/simple"
priority = "explicit"
</code></pre>
<p>It means I use the <code>explicit</code> priority for <code>my-private</code> repository and I mention (redundantly) the transitive dependencies in <em>explicit</em> group - to not mix them with direct dependencies of my project. This workaround is in line with current <a href="https://python-poetry.org/docs/repositories#package-source-constraint" rel="nofollow noreferrer">official documentation</a> mentioned by <a href="https://stackoverflow.com/users/6615402/jack">@jack</a> in <a href="https://stackoverflow.com/a/78666390/2956276">his answer</a>.</p>
<p>The downside is, that we need to watch transitive dependencies and edit the <em>explicit</em> group after dependencies update (if the transitive dependencies was changed). That's why I consider this "solution" as a workaround only.</p>
|
<python><python-poetry>
|
2023-06-23 17:24:18
| 1
| 1,313
|
eNca
|
76,542,251
| 8,869,570
|
Printing columns from dataframe with column names that have a similar string
|
<p>I have a dataframe with several columns whose names end with <code>_avg</code>. Sometimes I want to do some quick analysis and just print all those columns ending with <code>_avg</code>.</p>
<p>Is there something like</p>
<pre><code>print(df["*_avg"])
</code></pre>
<p>where <code>*</code> is used like how it's used in shell?</p>
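`DataFrame.filter` does exactly this: `like` for substring containment, `regex` for anchored matches. A small sketch with made-up column names:

```python
import pandas as pd

df = pd.DataFrame({'a_avg': [1.0], 'b_avg': [2.0], 'c': [3.0]})
print(df.filter(like='_avg'))      # substring match
print(df.filter(regex=r'_avg$'))   # anchored at the end of the name
```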
|
<python><pandas><dataframe>
|
2023-06-23 17:14:31
| 0
| 2,328
|
24n8
|
76,542,189
| 14,715,428
|
Command Python: Select Interpreter resulted in an error: Cannot destructure property 'filename' of 'r' as it is undefined
|
<p>I just ran into infinite <em>Discovering Python Interpreters</em>.</p>
<p><a href="https://i.sstatic.net/hQVYe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hQVYe.png" alt="Discovering Python Interpreters" /></a></p>
<p>Then I tried almost everything, including deleting extensions, reloading VS Code, changing the Python path, etc.
But finally, when I tried to select the interpreter, I came across this error:</p>
<blockquote>
<p>Command Python: Select Interpreter resulted in an error: Cannot destructure property 'filename' of 'r' as it is undefined</p>
</blockquote>
<p><a href="https://i.sstatic.net/sIjXe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sIjXe.png" alt="Error" /></a></p>
<p>However I have <code>PYTHONPATH</code> defined and <code>python3</code> works well in console.</p>
<p>P.S. Created issue on Github: <a href="https://github.com/microsoft/vscode-python/issues/21481" rel="nofollow noreferrer">https://github.com/microsoft/vscode-python/issues/21481</a></p>
|
<python><visual-studio-code>
|
2023-06-23 17:04:23
| 2
| 349
|
TayJen
|
76,542,172
| 5,489,190
|
How to make pytorch 11-output model by combining three sequential models of 4, 3 and 4 shape?
|
<p>I'm trying to move my project from Tensorflow to Pytorch to compare the accuracy. The overall data flow is as follows (the number in brackets refers to layer output size):
<a href="https://i.sstatic.net/6GqDV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6GqDV.png" alt="enter image description here" /></a></p>
<p>Now, in Tensorflow I can use functional API and write three <code>tf.keras.Sequential</code>:</p>
<pre><code>def single_model(topology):
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(topology, activation = relu),
        tf.keras.layers.Dense(topology, activation = relu)])
    return model

input_ANN = tf.keras.layers.Input(shape=(21,), name="Input")
model1 = single_model(4)(input_ANN)
model2 = single_model(3)(input_ANN)
model3 = single_model(4)(input_ANN)
concat = tf.keras.layers.Concatenate(axis=-1, name='Concatenate')([model1, model2, model3])
model = tf.keras.Model(inputs=[input_ANN], outputs=[concat])
</code></pre>
<p>And from this point I have a complete single model to work with, as with a typical single-branch ANN. But how do I do the same with <code>Pytorch</code>? I was trying to do it as follows (21 is the input size)</p>
<pre><code># define the model
def single_model(topology):
    model = nn.Sequential(
        nn.Linear(21, topology),
        nn.ReLU(),
        nn.Linear(topology, topology),
        nn.ReLU())
    return model

# define the model
model1 = single_model(3)
model2 = single_model(4)
model3 = single_model(3)
model = nn.Sequential(*model1.children(), *model2.children(), *model3.children())
</code></pre>
<p>But it fails with the error <code>running_mean should contain 3 elements not 21</code>. The error occurs on the line where I try to generate model predictions in the fitting loop:</p>
<pre><code>for e in range(epochs):
    train_loss = 0.0
    model.train() # Optional when not using Model Specific layer
    for data, labels in train_dataloader:
        optimizer.zero_grad()
        target = model(data) # Here it crashes
        loss = MARELoss(target, labels)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()
</code></pre>
<p>Can you please advise me how to connect the sub-models in the correct way?</p>
|
<python><tensorflow><keras><pytorch>
|
2023-06-23 17:01:40
| 1
| 749
|
Karls
|
76,542,019
| 10,430,394
|
RDkit removes explicit Hydrogens while optimizing geometry
|
<p>I'm trying to optimize a bunch of molecules from an SDF file using <code>AllChem.MMFFOptimizeMolecule</code>. My molecules already have explicit hydrogens, but the warning told me:</p>
<pre><code>Molecule does not have explicit Hs. Consider calling AddHs()
</code></pre>
<p>So I decided that it won't hurt to add the <code>AddHs()</code> command just in case. But even after adding that, the warning persists. Not only that, but when I write out the optimized molecules to a new SDF file, the hydrogens are missing.</p>
<p>So it seems that RDKit is removing my explicit hydrogens when reading in the SDF file using <code>Chem.SDMolSupplier(path)</code>, not adding hydrogens when I call <code>Chem.AddHs(mol)</code>, doing the optimization without them and then writing the opt geometries without hydrogens to the new file and I do not understand what I did to deserve this...</p>
<p>Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>from rdkit import Chem
from rdkit.Chem import AllChem

def opt_sdf_coords(path='all_moles.sdf', outpath='all_moles_opt.sdf'):
    suppl = Chem.SDMolSupplier(path)
    w = Chem.SDWriter(outpath)
    for mol in suppl:
        Chem.AddHs(mol)
        AllChem.MMFFOptimizeMolecule(mol)
        w.write(mol)
    w.close()
</code></pre>
<p>This is an excerpt from the SDF file so that you can run the code yourself:</p>
<pre><code>
OpenBabel06232316383D
9 9 0 0 0 0 0 0 0 0999 V2000
0.9772 0.0715 -0.0147 Br 0 0 0 0 0 0 0 0 0 0 0 0
2.8573 0.0846 -0.0063 C 0 0 0 0 0 0 0 0 0 0 0 0
3.6483 1.2076 -0.0051 C 0 0 0 0 0 0 0 0 0 0 0 0
4.9310 0.7321 0.0017 N 0 0 0 0 0 0 0 0 0 0 0 0
4.8622 -0.6331 0.0043 C 0 0 0 0 0 0 0 0 0 0 0 0
3.6184 -1.0534 -0.0005 N 0 0 0 0 0 0 0 0 0 0 0 0
3.4204 2.2636 -0.0083 H 0 0 0 0 0 0 0 0 0 0 0 0
5.7787 1.2827 0.0044 H 0 0 0 0 0 0 0 0 0 0 0 0
5.7442 -1.2604 0.0095 H 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0 0
2 3 2 0 0 0 0
2 6 1 0 0 0 0
3 4 1 0 0 0 0
3 7 1 0 0 0 0
4 5 1 0 0 0 0
4 8 1 0 0 0 0
5 6 2 0 0 0 0
5 9 1 0 0 0 0
M END
$$$$
OpenBabel06232316383D
9 9 0 0 0 0 0 0 0 0999 V2000
0.9772 0.0715 -0.0147 Br 0 0 0 0 0 0 0 0 0 0 0 0
2.8573 0.0846 -0.0063 C 0 0 0 0 0 0 0 0 0 0 0 0
3.6483 1.2076 -0.0051 C 0 0 0 0 0 0 0 0 0 0 0 0
4.9310 0.7321 0.0017 N 0 0 0 0 0 0 0 0 0 0 0 0
4.8622 -0.6331 0.0043 C 0 0 0 0 0 0 0 0 0 0 0 0
3.6184 -1.0534 -0.0005 N 0 0 0 0 0 0 0 0 0 0 0 0
3.4204 2.2636 -0.0083 H 0 0 0 0 0 0 0 0 0 0 0 0
5.7787 1.2827 0.0044 H 0 0 0 0 0 0 0 0 0 0 0 0
5.7442 -1.2604 0.0095 H 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 0 0 0 0
2 3 2 0 0 0 0
2 6 1 0 0 0 0
3 4 1 0 0 0 0
3 7 1 0 0 0 0
4 5 1 0 0 0 0
4 8 1 0 0 0 0
5 6 2 0 0 0 0
5 9 1 0 0 0 0
M END
$$$$
</code></pre>
|
<python><rdkit>
|
2023-06-23 16:38:32
| 1
| 534
|
J.Doe
|
76,541,544
| 1,028,270
|
What is the proper way to use mock_secretsmanager in a module scoped pytest fixture?
|
<p>This does not work</p>
<pre><code>@pytest.fixture(scope="module")
def monkeypatch_module():
    # gross bug: https://github.com/pytest-dev/pytest/issues/363
    from _pytest.monkeypatch import MonkeyPatch
    mpatch = MonkeyPatch()
    yield mpatch
    mpatch.undo()

@pytest.fixture(scope="module")
@mock_secretsmanager
def setup_stuff(monkeypatch_module):
    secret_name = "test_mock_secret01"
    sm_client = boto3.client("secretsmanager", region_name="us-east-1")
    sm_client.create_secret(
        Name=secret_name,
        SecretString='{"username":"mockuser","password":"mockpass"}',
    )
    # module level env vars
    monkeypatch_module.setenv("MY_VAR", "sldkfjsdf")

@pytest.mark.unittest
def test__mytest(setup_stuff):
    secret_name = "test_mock_secret01"
    my_method_that_gets_the_secret(secret_name)
</code></pre>
<p>I get this error:</p>
<pre><code>botocore.errorfactory.ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the GetSecretValue operation: Secrets Manager can't find the specified secret.
</code></pre>
<p>I had to make it a function and use it like this:</p>
<pre><code>@mock_secretsmanager
def setup_stuff(monkeypatch_module):
    secret_name = "test_mock_secret01"
    sm_client = boto3.client("secretsmanager", region_name="us-east-1")
    sm_client.create_secret(
        Name=secret_name,
        SecretString='{"username":"mockuser","password":"mockpass"}',
    )
    # module level env vars
    monkeypatch_module.setenv("MY_VAR", "sldkfjsdf")

@mock_secretsmanager
@pytest.mark.unittest
def test__mytest(monkeypatch, monkeypatch_module):
    setup_stuff(monkeypatch_module)
    # function level env vars
    monkeypatch.setenv("MY_LOCAL_VAR", "sldkfjsdf")
</code></pre>
<p>But this will run with every function call.</p>
<p>I just want to create a fixture that creates mock secrets (sets env vars and other stuff) once for the entire module.</p>
<p>What is the proper way to use <code>mock_secretsmanager</code> in a module scoped fixture?</p>
|
<python><pytest><moto>
|
2023-06-23 15:31:48
| 2
| 32,280
|
red888
|
76,541,326
| 7,446,003
|
AWS Elastic beanstalk django migration not working
|
<p>I am trying to deploy my application using <code>eb deploy</code></p>
<p>I have my migrate file at .platform/hooks/predeplou/01_migrate_pre.sh:</p>
<pre><code>source /var/app/venv/staging-LQM1lest/bin/activate
echo "Running migrations predeploy"
python /var/app/current/manage.py migrate
python /var/app/current/manage.py showmigrations
echo "finished migrations"
</code></pre>
<p>The deployment appears to go successfully, but the migration does not occur. When I check the logs, eb-hooks.log shows that the 01_migrate_pre.sh file has run, but the migration has not been applied.
There are migrations that are shown as not run in the output of <code>showmigrations</code>:</p>
<pre><code>Running migrations predeploy
[X] 0027_alter_sideeffect_se_name
[ ] 0028_alter_sideeffect_se_name
finished migrations
</code></pre>
<p>Please can anyone advise how I can get the migration to run, or even just how to get a printout of the output of the migrate command? It is not printing anything to the log, such as 'No migrations to apply.'</p>
|
<python><django><amazon-web-services><amazon-elastic-beanstalk><django-migrations>
|
2023-06-23 15:01:20
| 0
| 422
|
RobMcC
|
76,541,323
| 2,741,831
|
Way to play a sound in python on Linux without providing a file?
|
<p>I need a piece of code that plays a sound, any sound to indicate an operation is finished. It should work on Linux and play some kind of sound on the currently selected audio output.
I don't want to have to provide a file, I just need some code that can play sound that is easy to copy paste in a jupyter notebook and works on Linux.</p>
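<p>One file-free approach that is easy to paste into a notebook is to synthesize a short tone in pure Python and pipe the raw PCM to <code>aplay</code> (a sketch; it assumes ALSA's <code>aplay</code> binary is present, which it is on most Linux desktops, and PulseAudio's <code>paplay</code> would need different flags):</p>

```python
import math
import struct
import subprocess

def tone(freq=440.0, duration=0.5, rate=44100):
    """Generate `duration` seconds of a signed-16-bit little-endian mono sine wave."""
    n = int(rate * duration)
    return b"".join(
        struct.pack("<h", int(32767 * 0.5 * math.sin(2 * math.pi * freq * t / rate)))
        for t in range(n)
    )

def beep():
    """Pipe the raw PCM samples to ALSA's aplay on the current output device."""
    subprocess.run(
        ["aplay", "-q", "-t", "raw", "-f", "S16_LE", "-r", "44100", "-c", "1"],
        input=tone(),
        check=False,
    )

# beep()  # uncomment to play a short A440 tone
```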
|
<python><audio>
|
2023-06-23 15:01:08
| 2
| 2,482
|
user2741831
|
76,541,257
| 2,149,641
|
pytest: Fixture shortcut to object inside another fixture breaks when object is updated
|
<p>I have code that is essentially like this, where the fixture <code>foo</code> is a shortcut to the <code>foo</code> object inside the fixture <code>myclass</code>. This is purely for convenience, so I don't have to write <code>myclass.foo</code>; however, when <code>myclass.foo</code> gets updated, the <code>foo</code> fixture is not, even though they seem to be the same object before the update. Is this just a bad practice I should avoid?</p>
<pre><code>class MyClass:
    def __init__(self):
        self.foo = object()

    def update_foo(self):
        self.foo = object()

@pytest.fixture
def myclass():
    yield MyClass()

@pytest.fixture
def foo(myclass):
    return myclass.foo

def test_foo(myclass, foo):
    assert myclass.foo is foo
    myclass.update_foo()
    assert myclass.foo is foo  # <-- this one fails
</code></pre>
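<p>For anyone skimming: this is ordinary attribute rebinding rather than a pytest quirk, and the same behaviour can be reproduced with no fixtures involved (a self-contained sketch of the same situation):</p>

```python
class Holder:
    """Stand-in for MyClass: update_foo rebinds the attribute to a new object."""
    def __init__(self):
        self.foo = object()

    def update_foo(self):
        self.foo = object()  # new object; earlier references are unaffected

h = Holder()
snapshot = h.foo          # what the `foo` fixture returns: a reference to the *current* object
assert snapshot is h.foo
h.update_foo()
assert snapshot is not h.foo  # the snapshot still points at the old object
```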
|
<python><pytest><fixtures><pytest-fixtures>
|
2023-06-23 14:53:38
| 2
| 799
|
jjj
|
76,541,233
| 14,661,648
|
psycopg2 - Update a json column with a new key/value pair
|
<p>Let's say I have a table that has a <strong>json</strong> type column.</p>
<p>I have data like so:</p>
<pre><code>{"Value A": 20}
</code></pre>
<p>How do I update the row and only add a key value pair to the json column of that row so it looks like this?</p>
<pre><code>{"Value A" : 20, "Value B": 25}
</code></pre>
<p>I am using psycopg2 and PostgreSQL 14.</p>
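<p>One common trick in PostgreSQL is the <code>||</code> merge operator, which is <code>jsonb</code>-only, so a <code>json</code> column needs casts. A sketch of the parameterized statement (table name <code>mytable</code>, key column <code>id</code>, and JSON column <code>data</code> are placeholders, not from the question):</p>

```python
import json

# Merge {"Value B": 25} into the existing object; || only exists for jsonb,
# hence the round-trip casts on a json column.
query = """
UPDATE mytable
   SET data = (data::jsonb || %s::jsonb)::json
 WHERE id = %s
"""
params = (json.dumps({"Value B": 25}), 1)

# with psycopg2:
#   with conn.cursor() as cur:
#       cur.execute(query, params)
```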
|
<python><postgresql><psycopg2>
|
2023-06-23 14:51:02
| 0
| 1,067
|
Jiehfeng
|
76,541,228
| 1,188,943
|
Extracting Values from SVG tooltips with python selenium
|
<p>I've been working for almost two days on getting the text values from the tooltips of the Geo chart at the following link:</p>
<pre><code>https://exportpotential.intracen.org/en/markets/geo-map?fromMarker=i&exporter=4&whatMarker=ls&what=6&toMarker=j
</code></pre>
<p>The problem is that the tooltips are only visible when hovering the mouse cursor over them. What I did was click on the respective items and get the values. But the problem is that some items are not clickable. The code I'm using for this is:</p>
<pre><code>all_nodes = driver.find_element(By.CSS_SELECTOR, ".flamingo.chart").find_elements(By.CSS_SELECTOR, ".node")
for node in all_nodes:
    node.find_element(By.CSS_SELECTOR, ".circle.total").click()
    country = driver.find_element(By.CSS_SELECTOR, ".tooltip-title").text.strip()
</code></pre>
<p>but when I use this code I get:</p>
<pre><code>Message: element click intercepted: Element <circle class="circle total IND" cx="0" cy="0" r="65" style="fill: rgba(158, 106, 117, 0.6);"></circle> is not clickable at point (830, 543). Other element would receive the click: <circle class="circle perc IRQ" cx="0" cy="0" r="61.56828206861062" style="fill: rgb(158, 106, 117);"></circle>
(Session info: chrome=114.0.5735.133)
Stacktrace:
0 chromedriver 0x000000010c8d56b8 chromedriver + 4937400
1 chromedriver 0x000000010c8ccb73 chromedriver + 4901747
2 chromedriver 0x000000010c48a616 chromedriver + 435734
3 chromedriver 0x000000010c4d5b97 chromedriver + 744343
4 chromedriver 0x000000010c4d3450 chromedriver + 734288
5 chromedriver 0x000000010c4d0a84 chromedriver + 723588
6 chromedriver 0x000000010c4cfbe4 chromedriver + 719844
7 chromedriver 0x000000010c4c21e1 chromedriver + 664033
8 chromedriver 0x000000010c4f2012 chromedriver + 860178
9 chromedriver 0x000000010c4c19c1 chromedriver + 661953
10 chromedriver 0x000000010c4f21ce chromedriver + 860622
11 chromedriver 0x000000010c50ce76 chromedriver + 970358
12 chromedriver 0x000000010c4f1de3 chromedriver + 859619
13 chromedriver 0x000000010c4bfd7f chromedriver + 654719
14 chromedriver 0x000000010c4c10de chromedriver + 659678
15 chromedriver 0x000000010c8912ad chromedriver + 4657837
16 chromedriver 0x000000010c896130 chromedriver + 4677936
17 chromedriver 0x000000010c89cdef chromedriver + 4705775
18 chromedriver 0x000000010c89705a chromedriver + 4681818
19 chromedriver 0x000000010c86992c chromedriver + 4495660
20 chromedriver 0x000000010c8b4838 chromedriver + 4802616
21 chromedriver 0x000000010c8b49b7 chromedriver + 4802999
22 chromedriver 0x000000010c8c599f chromedriver + 4872607
23 libsystem_pthread.dylib 0x00007fff2050c8fc _pthread_start + 224
24 libsystem_pthread.dylib 0x00007fff20508443 thread_start + 15
</code></pre>
|
<python><selenium-webdriver><svg>
|
2023-06-23 14:50:25
| 2
| 1,035
|
Mahdi
|
76,540,983
| 208,525
|
Custom Annotated class with default metadata
|
<p>I'm trying to develop a custom <a href="https://docs.python.org/3/library/typing.html#typing.Annotated" rel="nofollow noreferrer">Annotated</a> class...</p>
<p>Annotated allows adding some metadata to a type hint that can be checked at runtime:</p>
<pre><code>from typing import Annotated
some: Annotated[int, 'must be even']
</code></pre>
<p>so metadata is always required, but I want to develop a similar type that initialises the metadata with some default value:</p>
<pre><code>some: MyAnnotated[int] # <- this must now must be equal to Annotated[int, '<default-meta>']
</code></pre>
<p>I'm able to make it work with this code:</p>
<pre><code>from typing import Generic, TypeVar, Annotated, Any
T = TypeVar('T')
class MyAnnotated(Generic[T]):
@classmethod
def __class_getitem__(cls, param: Any) -> T:
if isinstance(param, tuple):
return Annotated[param[0], param[1]] # type: ignore
return Annotated[param, '<default-meta>'] # type: ignore
assert MyAnnotated[int, 'my-meta'] == Annotated[int, 'my-meta']
assert MyAnnotated[int] == Annotated[int, '<default-meta>']
</code></pre>
<p>this works as expected, but editors (VS Code) do not understand it and cannot provide autocompletion:</p>
<p><a href="https://i.sstatic.net/6z51f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6z51f.png" alt="enter image description here" /></a></p>
<p>while works fine with default Annotated class:</p>
<p><a href="https://i.sstatic.net/PcigE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PcigE.png" alt="enter image description here" /></a></p>
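<p>One workaround, a sketch with a known trade-off: give the type checker a different definition than the runtime one via <code>TYPE_CHECKING</code>. Tools then see a plain <code>Annotated</code> alias, so completion works, although the checker will also expect the metadata argument and may flag the single-argument form <code>MyAnnotated[int]</code>:</p>

```python
from typing import TYPE_CHECKING, Annotated, Any, TypeVar

T = TypeVar('T')

if TYPE_CHECKING:
    # What the type checker sees: a plain alias, so VS Code / mypy treat
    # MyAnnotated[...] exactly like Annotated[...] and autocompletion works.
    MyAnnotated = Annotated
else:
    class MyAnnotated:
        def __class_getitem__(cls, param: Any) -> Any:
            if isinstance(param, tuple):
                return Annotated[param]                    # MyAnnotated[int, 'meta']
            return Annotated[param, '<default-meta>']      # MyAnnotated[int]

assert MyAnnotated[int, 'my-meta'] == Annotated[int, 'my-meta']
assert MyAnnotated[int] == Annotated[int, '<default-meta>']
```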
|
<python><python-typing><mypy>
|
2023-06-23 14:20:06
| 2
| 5,931
|
Djangonaut
|
76,540,939
| 1,116,675
|
Duplicating courses in Moodle by using REST webservice with Python
|
<p>I'm really struggling to get my code for duplicating moodle courses via REST to work, just couldn't figure out the reason:</p>
<pre><code>import requests
import json

token = 'mytoken123'
domainname = 'https://example.com'
functionname = 'core_course_duplicate_course'

# Course ID of the course to be duplicated
courseid = 61  # Replace with your course ID

# New course details
newcoursedata = {
    'fullname': 'New Course Full Name',
    'shortname': 'NewCourseShortName',
    'categoryid': 10,
    'visible': 1,
}

restformat = 'json'  # Also possible to use 'xml'

data = {
    'courseid': courseid,
    'options': newcoursedata,
}

headers = {
    'Content-Type': 'application/json',
}

# REST CALL
serverurl = f'{domainname}/webservice/rest/server.php?wstoken={token}&wsfunction={functionname}&moodlewsrestformat={restformat}'
response = requests.post(serverurl, headers=headers, data=json.dumps(data))

# Print response
print(json.dumps(response.json(), indent=4))
</code></pre>
<p>The error message is:</p>
<pre><code>{
"exception": "invalid_parameter_exception",
"errorcode": "invalidparameter",
"message": "invalid parameter value (Missing required key in single structure: courseid)",
"debuginfo": "Missing required key in single structure: courseid"
}
</code></pre>
<p>What am I missing here?
Any help much appreciated!</p>
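<p>For reference, Moodle's REST server expects flat, form-encoded parameters rather than a JSON body, and <code>core_course_duplicate_course</code> takes <code>courseid</code>/<code>fullname</code>/<code>shortname</code>/<code>categoryid</code> as top-level parameters, not nested under <code>options</code>. A sketch of a helper that flattens the arguments, including the <code>options[i][name]</code>/<code>options[i][value]</code> layout used for backup options (the argument values shown are the ones from the question):</p>

```python
def moodle_duplicate_params(courseid, fullname, shortname, categoryid,
                            visible=1, options=None):
    """Flatten arguments into the form-encoded key layout Moodle's REST server expects."""
    data = {
        "courseid": courseid,
        "fullname": fullname,
        "shortname": shortname,
        "categoryid": categoryid,
        "visible": visible,
    }
    # Backup/restore options go in as indexed name/value pairs:
    for i, (name, value) in enumerate((options or {}).items()):
        data[f"options[{i}][name]"] = name
        data[f"options[{i}][value]"] = value
    return data

data = moodle_duplicate_params(61, "New Course Full Name", "NewCourseShortName", 10)
# Then POST it form-encoded (no json.dumps, no Content-Type: application/json):
#   response = requests.post(serverurl, data=data)
```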
|
<python><rest><web-services><moodle-api>
|
2023-06-23 14:14:22
| 1
| 938
|
Madamadam
|
76,540,730
| 514,149
|
Can't spawn background processes after the app was running for some time
|
<p>In our Python application we spawn some background processes when the program starts up. Then in some cases we need to shut down one of them, and some time later we want to re-start (= spawn) another background process again.</p>
<p>However, it seems that sometimes, when the application has been running for some days already (it's a server application), it cannot spawn another process. In this case we get the following error log at the call of <code>new_proc.start()</code>:</p>
<pre><code>Jun 23 14:23:46 srv gunicorn[2717986]: Traceback (most recent call last):
Jun 23 14:23:46 srv gunicorn[2717986]: File "<string>", line 1, in <module>
Jun 23 14:23:46 srv gunicorn[2717986]: File "/usr/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
Jun 23 14:23:46 srv gunicorn[2717986]: exitcode = _main(fd, parent_sentinel)
Jun 23 14:23:46 srv gunicorn[2717986]: File "/usr/lib/python3.8/multiprocessing/spawn.py", line 126, in _main
Jun 23 14:23:46 srv gunicorn[2717986]: self = reduction.pickle.load(from_parent)
Jun 23 14:23:46 srv gunicorn[2717986]: File "/usr/lib/python3.8/multiprocessing/synchronize.py", line 110, in __setstate__
Jun 23 14:23:46 srv gunicorn[2717986]: self._semlock = _multiprocessing.SemLock._rebuild(*state)
Jun 23 14:23:46 srv gunicorn[2717986]: FileNotFoundError: [Errno 2] No such file or directory
</code></pre>
<p>So apparently the Python multiprocessing module cannot open the file backing a semaphore (or something similar). When I then shut down the whole application and restart it, all subprocesses can be spawned again. So it really seems to be related to the fact that the app was running for some days already ...</p>
<p>Any ideas what might be the issue here?</p>
|
<python><multiprocessing>
|
2023-06-23 13:42:41
| 1
| 10,479
|
Matthias
|
76,540,561
| 2,458,922
|
Using Pandas, find a Column's value when Group By on another Column has a Max Value of yet another column. Also in PySpark DataFrame
|
<p>Similar to <a href="https://stackoverflow.com/questions/44383136/pandas-groupby-where-you-get-the-max-of-one-column-and-the-min-of-another-column">pandas groupby where you get the max of one column and the min of another column</a>.
Given that the DataFrame is about car races, df has the columns [Race_Event_ID, Car_ID, Driver_ID, AvgSpeed].
A group-by such as <code>df.groupby(['Race_Event_ID'])['Speed'].agg(['max','mean',.....])</code> can give per-group stats.
But I need the <code>'Speed', Car_ID, Driver_ID</code> <strong>of the topper</strong>, i.e. the row whose <code>speed == max speed</code>, and similarly the tailer's Speed, Car_ID, Driver_ID, i.e. speed == min speed.</p>
<p>Given that there could be a tie for top speed, let's get both, or at least one.</p>
<p>One solution is:</p>
<pre><code>df_max = df.groupby(['Race_Event_ID'])['Speed'].agg(['max']).reset_index()
df_max.merge(df, on='Race_Event_ID')
</code></pre>
<p>This solution may take time, and the result should be achievable in a single pass over the data if we were to iterate ourselves. Is there an efficient way to do this in Pandas?
I would also be curious to see a Spark DataFrame solution.</p>
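<p>A sketch of the usual single-pass pandas idiom, <code>idxmax</code>/<code>idxmin</code> (the sample data is made up). In PySpark the analogous pattern is a Window partitioned by <code>Race_Event_ID</code>, ordered by the speed column, keeping <code>row_number() == 1</code> (or <code>rank() == 1</code> to keep ties):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Race_Event_ID": [1, 1, 1, 2, 2],
    "Car_ID":        [10, 11, 12, 20, 21],
    "Driver_ID":     [100, 101, 102, 200, 201],
    "AvgSpeed":      [210.0, 198.5, 210.0, 180.0, 176.2],
})

# idxmax/idxmin return the row label of the extreme per group, so .loc pulls
# the whole rows (Car_ID, Driver_ID, ...) without a merge:
toppers = df.loc[df.groupby("Race_Event_ID")["AvgSpeed"].idxmax()]
tailers = df.loc[df.groupby("Race_Event_ID")["AvgSpeed"].idxmin()]

# Ties: keep EVERY row equal to its group maximum via a broadcast transform:
all_toppers = df[df["AvgSpeed"] == df.groupby("Race_Event_ID")["AvgSpeed"].transform("max")]
```

Note that <code>idxmax</code> keeps only the first row of a tie, while the <code>transform</code> variant keeps all of them.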
|
<python><pandas><dataframe><pyspark>
|
2023-06-23 13:24:04
| 1
| 1,731
|
user2458922
|
76,540,502
| 12,871,587
|
Alternatives for long .when().then().when().then().otherwise() chains
|
<p>Are there clever alternatives for writing long when().then().otherwise() chains without hardcoding the values? See the example below.</p>
<p>Let's say we have the following dataframe</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(
{
"Market":["AT", "AT", "DE", "DE", "CA", "DE", "UK", "US"],
"Number of Days":[1, 2, 3, 4, 3, 4, 2, 1],
}
)
</code></pre>
<p>User defines some conditions as a dictionary for different countries</p>
<pre class="lang-py prettyprint-override"><code>params = {
"AT":{"Value": 1},
"DE":{"Value": 2},
"CA":{"Value": 3},
"UK":{"Value": 1},
"US":{"Value": 2}
}
</code></pre>
<p>Then I hard-code the countries and use the countries in the Polars .with_columns() as below:</p>
<pre class="lang-py prettyprint-override"><code>(
df
.with_columns(
pl.when(pl.col("Market") == "AT").then(pl.col("Number of Days") + params["AT"]["Value"])
.when(pl.col("Market") == "DE").then(pl.col("Number of Days") + params["DE"]["Value"])
.when(pl.col("Market") == "CA").then(pl.col("Number of Days") + params["CA"]["Value"])
.when(pl.col("Market") == "UK").then(pl.col("Number of Days") + params["UK"]["Value"])
.when(pl.col("Market") == "US").then(pl.col("Number of Days") + params["US"]["Value"])
.otherwise(None)
.alias("New Column")
)
)
</code></pre>
<p>Is there a way for me not to hard-code the countries in .with_columns, but instead loop through the dictionary and create the expressions based on the values provided?</p>
<p>I tried the below but it says I have duplicate column names.</p>
<pre class="lang-py prettyprint-override"><code>exprs = []
for market, data in params.items():
    condition = (pl.col("Market") == market)
    result = (pl.col("Number of Days") + data["Value"])
    expr = pl.when(condition).then(result)
    exprs.append(expr)

df.with_columns(exprs)
</code></pre>
|
<python><dataframe><python-polars>
|
2023-06-23 13:17:57
| 2
| 713
|
miroslaavi
|
76,540,492
| 1,150,683
|
Is pip-installing a GitHub PR future-proof?
|
<p>I am in dependency hell right now where we need to use the functionality of library X which is only available if a specific, unmerged PR to library Y is installed. If I add this PR to my requirements, like so:</p>
<pre><code>git+https://github.com/org/project.git@refs/pull/123/head
</code></pre>
<p>will that keep working even after the PR has been merged? And what if the contributor who made the PR removes their branch that was used for the PR?</p>
|
<python><github><pip>
|
2023-06-23 13:16:22
| 0
| 28,776
|
Bram Vanroy
|
76,540,475
| 10,862,144
|
How do I call a member function from another file in python
|
<p>I am trying to create a multi-file environment for my project. The File structure is as such</p>
<pre><code>project_folder
|- FileA.py
|- FileB.py
</code></pre>
<p><code>FileA.py</code> mainly contains the script, and I have also imported <em>FileB</em> here.<br />
<strong>FileA.py</strong></p>
<pre><code>import os, sys
from FileB import temp_class as T

# lines of code
str = "somestring"
A, B, C = T.func(str)
# lines of code
</code></pre>
<p><strong>FileB.py</strong></p>
<pre><code>import os, sys

class temp_class:
    def func(self, S):
        # lines of code
        return (someA, someB, someC)
</code></pre>
<p>But I am getting the error as</p>
<blockquote>
<p>TypeError: unbound method func() must be called with temp_class instance as first argument (got str instance instead)</p>
</blockquote>
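<p>That error message is Python 2 behaviour: <code>temp_class.func</code> is an unbound method, so calling it through the class passes the string where <code>self</code> is expected. Two common fixes are instantiating the class or making the method static; a sketch (the body is placeholder logic, since the real "lines of code" are elided):</p>

```python
class temp_class:
    @staticmethod
    def func(s):
        # placeholder logic standing in for the elided "lines of code"
        return s.upper(), s.lower(), len(s)

# Option 1: static method, call through the class as before
a, b, c = temp_class.func("somestring")

# Option 2 (class unchanged, no @staticmethod): create an instance first
#   t = temp_class()
#   a, b, c = t.func("somestring")
```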
|
<python>
|
2023-06-23 13:14:07
| 4
| 978
|
RC0993
|
76,540,474
| 17,987,266
|
How to make an asynchronous call to a function that doesn't return anything?
|
<p>Suppose I define the following function:</p>
<pre><code>import time

def slow() -> None:
    time.sleep(60)
</code></pre>
<p>How to make a call to <code>slow</code> asynchronously, such that the thread calling it never has to wait for it to finish before proceeding, for example:</p>
<pre><code>def main():
    print("Hello")
    slow()
    print("World!")  # Should not have to wait 60 seconds to execute this
</code></pre>
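<p>Since <code>slow()</code> blocks in <code>time.sleep</code> and returns nothing, the simplest fire-and-forget is a plain background thread (a sketch with the sleep shortened so it runs quickly; inside an async framework, <code>asyncio.to_thread(slow)</code> is the analogous tool):</p>

```python
import threading
import time

def slow() -> None:
    time.sleep(0.2)  # shortened stand-in for the 60 s sleep

def main() -> None:
    print("Hello")
    worker = threading.Thread(target=slow, daemon=True)
    worker.start()   # returns immediately; slow() runs in the background
    print("World!")  # printed without waiting for slow() to finish

main()
```

<p><code>daemon=True</code> means the program may exit before <code>slow()</code> finishes; call <code>worker.join()</code> later if completion matters.</p>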
|
<python><asynchronous>
|
2023-06-23 13:13:50
| 1
| 369
|
sourcream
|
76,540,408
| 11,613,489
|
Differentiate by section class in Selenium
|
<p>I'm facing a problem in Python Selenium.
I would like my code to print the following data, an email address: caioplinio@hotmail.it</p>
<p>I just need a hint, that's all...</p>
<p>HTML:</p>
<pre><code><section tabindex="-1" class="pv-profile-section pv-contact-info artdeco-container-card">
<!---->
<h2 class="text-body-large-open mb4">
Información de contacto
</h2>
<div class="pv-profile-section__section-info section-info" tabindex="-1">
<section class="pv-contact-info__contact-type ci-vanity-url">
<li-icon aria-hidden="true" type="linkedin-bug" class="pv-contact-info__contact-icon" size="medium">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" data-supported-dps="24x24" fill="currentColor" class="mercado-match" width="24" height="24" focusable="false">
<path d="M20.5 2h-17A1.5 1.5 0 002 3.5v17A1.5 1.5 0 003.5 22h17a1.5 1.5 0 001.5-1.5v-17A1.5 1.5 0 0020.5 2zM8 19H5v-9h3zM6.5 8.25A1.75 1.75 0 118.3 6.5a1.78 1.78 0 01-1.8 1.75zM19 19h-3v-4.74c0-1.42-.6-1.93-1.38-1.93A1.74 1.74 0 0013 14.19a.66.66 0 000 .14V19h-3v-9h2.9v1.3a3.11 3.11 0 012.7-1.4c1.55 0 3.36.86 3.36 3.66z"></path>
</svg>
</li-icon>
<h3 class="pv-contact-info__header t-16 t-black t-bold">
Perfil de Marco
</h3>
<div class="pv-contact-info__ci-container t-14">
<a href="https://www.mywebsite.com" class="pv-contact-info__contact-link link-without-visited-state t-14">
mywebsite.com/italia/caio_plinio
</a>
</div>
</section>
<section class="pv-contact-info__contact-type ci-email">
<li-icon aria-hidden="true" type="envelope" class="pv-contact-info__contact-icon">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" data-supported-dps="24x24" fill="currentColor" class="mercado-match" width="24" height="24" focusable="false">
<path d="M2 4v13a3 3 0 003 3h14a3 3 0 003-3V4zm18 2v1.47l-8 5.33-8-5.33V6zm-1 12H5a1 1 0 01-1-1V8.67L12 14l8-5.33V17a1 1 0 01-1 1z"></path>
</svg>
</li-icon>
<h3 class="pv-contact-info__header t-16 t-black t-bold">
Email
</h3>
<div class="pv-contact-info__ci-container t-14">
<a href="mailto: caioplinio@hotmail.it" class="pv-contact-info__contact-link link-without-visited-state t-14" target="_blank" rel="noopener noreferrer">
caioplinio@hotmail.it
</a>
</div>
</code></pre>
<p>This is part of the code I have written:</p>
<pre><code>contact_info = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.CLASS_NAME, 'pv-contact-info__ci-container')))
# Find the email element within the contact info section
email_element = contact_info.find_element(By.CSS_SELECTOR, 'a.pv-contact-info__contact-link')
email = email_element.get_attribute('innerHTML')
print(email)
</code></pre>
<blockquote>
<p>Output: mywebsite.com/italia/caio_plinio</p>
</blockquote>
<p>I see the problem is that I am using the same class more than one time.</p>
<p>I see the solution is to differentiate by <em>section class</em>, but how can I write it? (I just need a hint)</p>
<p><a href="https://i.sstatic.net/9EABC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9EABC.png" alt="enter image description here" /></a></p>
<p>My desired ouput:</p>
<blockquote>
<p>caioplinio@hotmail.it</p>
</blockquote>
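<p>A hint in miniature: scope the lookup to the email section first (its <code>&lt;section&gt;</code> carries the <code>ci-email</code> class), then take the link inside it. The same CSS selector should work in Selenium, e.g. <code>driver.find_element(By.CSS_SELECTOR, "section.ci-email a.pv-contact-info__contact-link")</code>; here it is demonstrated with BeautifulSoup on a trimmed copy of the HTML above:</p>

```python
from bs4 import BeautifulSoup

html = """
<section class="pv-contact-info__contact-type ci-vanity-url">
  <div class="pv-contact-info__ci-container t-14">
    <a href="https://www.mywebsite.com" class="pv-contact-info__contact-link">mywebsite.com/italia/caio_plinio</a>
  </div>
</section>
<section class="pv-contact-info__contact-type ci-email">
  <div class="pv-contact-info__ci-container t-14">
    <a href="mailto: caioplinio@hotmail.it" class="pv-contact-info__contact-link">caioplinio@hotmail.it</a>
  </div>
</section>
"""

soup = BeautifulSoup(html, "html.parser")
# "section.ci-email" narrows the search to the email block only:
email = soup.select_one("section.ci-email a.pv-contact-info__contact-link").get_text(strip=True)
print(email)
```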
|
<python><html><selenium-webdriver>
|
2023-06-23 13:05:08
| 2
| 642
|
Lorenzo Castagno
|
76,540,320
| 12,415,855
|
Get elements from dropbox using selenium and bs4?
|
<p>I would like to get the elements, or the element links, from this page
<a href="https://irglobal.com/advisors/" rel="nofollow noreferrer">https://irglobal.com/advisors/</a>
from the "Firm" select box.</p>
<p>So in the end I would like to have:
<a href="https://irglobal.com/advisors/?firm=4i-advisory-services" rel="nofollow noreferrer">https://irglobal.com/advisors/?firm=4i-advisory-services</a>
<a href="https://irglobal.com/advisors/?firm=a-c-legal" rel="nofollow noreferrer">https://irglobal.com/advisors/?firm=a-c-legal</a>
and so on</p>
<p>I tried to get the element for this select box using the following code:</p>
<pre><code>import time
import os
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import WebDriverWait
from webdriver_manager.chrome import ChromeDriverManager

if __name__ == '__main__':
    WAIT = 1
    print(f"Checking Browser driver...")
    os.environ['WDM_LOG'] = '0'
    options = Options()
    options.add_argument("start-maximized")
    options.add_experimental_option("prefs", {"profile.default_content_setting_values.notifications": 1})
    options.add_experimental_option("excludeSwitches", ["enable-automation"])
    options.add_experimental_option('excludeSwitches', ['enable-logging'])
    options.add_experimental_option('useAutomationExtension', False)
    options.add_argument('--disable-blink-features=AutomationControlled')
    srv = Service(ChromeDriverManager().install())
    driver = webdriver.Chrome(service=srv, options=options)
    waitWD = WebDriverWait(driver, 10)

    link = "https://irglobal.com/advisors/"
    print(f"Working for {link}")
    driver.get(link)
    time.sleep(WAIT)

    soup = BeautifulSoup(driver.page_source, 'lxml')
    tmp = soup.find("span", {"class": "select2-selection select2-selection--single"})
    print(tmp.prettify())

    driver.quit()
</code></pre>
<p>But the element I can find is only a span element, not the select element I was expecting:</p>
<pre><code><span aria-disabled="false" aria-expanded="false" aria-haspopup="true" aria-labelledby="select2-firm-container" class="select2-selection select2-selection--single" role="combobox" tabindex="0">
<span aria-readonly="true" class="select2-selection__rendered" id="select2-firm-container" role="textbox">
<span class="select2-selection__placeholder">
Select
</span>
</span>
</span>
<span class="select2-selection__arrow" role="presentation">
<b role="presentation">
</b>
</span>
</span>
</code></pre>
<p>How can I get the elements / element links from this select box on the website?</p>
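<p>One possible shortcut (an assumption based on how select2 normally works, not verified against this site): the span is only select2's widget; the original native <code>&lt;select&gt;</code> element, with all its <code>&lt;option value="..."&gt;</code> entries, usually remains in the page source, hidden. If so, you can read the options directly and build the URLs; demonstrated here with BeautifulSoup on a sketch of such markup (the id <code>firm</code> is inferred from <code>aria-labelledby="select2-firm-container"</code>):</p>

```python
from bs4 import BeautifulSoup

html = """
<select id="firm" name="firm" style="display: none;">
  <option value="">Select</option>
  <option value="4i-advisory-services">4i Advisory Services</option>
  <option value="a-c-legal">A &amp; C Legal</option>
</select>
"""

soup = BeautifulSoup(html, "html.parser")
links = [
    "https://irglobal.com/advisors/?firm=" + opt["value"]
    for opt in soup.select("select#firm option")
    if opt.get("value")  # skip the empty "Select" placeholder
]
print(links)
```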
|
<python><selenium-webdriver><beautifulsoup>
|
2023-06-23 12:53:53
| 1
| 1,515
|
Rapid1898
|
76,540,270
| 16,155,080
|
Uvicorn reload : kill all created sub process when auto reloading
|
<p>I currently have a FastAPI app deployed with uvicorn that starts a thread on initialisation (among other things) using <code>threading</code>.
This thread is infinite (it's a routine that updates every x seconds).
Before I updated to Python 3.10, everything was working fine: every time I changed the code, the server would detect the change and reload, killing the old thread and creating a new one at init.</p>
<p>But now, when I modify my code, the server detects the change and tries to reload, but the created thread isn't killed (prints still continue to flow in the console), which prevents the server from fully reloading.</p>
<pre><code>my print from my thread
WARNING: StatReload detected changes in 'app\api.py'. Reloading...
INFO: Shutting down
INFO: Waiting for application shutdown.
INFO: Application shutdown complete.
INFO: Finished server process [3736]
my print from my thread
my print from my thread
...
</code></pre>
<p>It behaves the same way if I Ctrl+C in the console: the thread stays alive.
My solution for the moment is to <code>kill PID</code> every time I want to refresh, but that's a bit annoying.</p>
<p>I tried going back to Python 3.7.9, but the problem remains.
I also tried to implement <code>atexit</code> and manually kill the process, but it didn't work.</p>
<p>Any leads on how to properly handle this?</p>
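<p>A common pattern is to make the routine stoppable: run it as a daemon thread and poll a <code>threading.Event</code> with an interruptible wait instead of <code>time.sleep</code>, then set the event from a shutdown hook (in FastAPI, for example, a shutdown event handler). A minimal sketch:</p>

```python
import threading

stop_event = threading.Event()

def routine(period: float = 0.05) -> None:
    while not stop_event.is_set():
        # ... periodic work goes here ...
        stop_event.wait(timeout=period)  # interruptible: wakes immediately when set

# daemon=True: the thread cannot keep the interpreter (or a reload) alive by itself
worker = threading.Thread(target=routine, daemon=True)
worker.start()

# In a shutdown handler (e.g. FastAPI's shutdown event), stop it cleanly:
stop_event.set()
worker.join(timeout=1)
```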
|
<python><multithreading><fastapi><uvicorn>
|
2023-06-23 12:46:42
| 1
| 641
|
Jules Civel
|
76,540,266
| 6,645,564
|
How do I get the markers in my scatterplot to be connected by lines only along one axis?
|
<p>I have been trying to create a scatter plot using the plotly package, but I keep running into a problem with how the plot is formatted. The plot I am trying to make has a categorical X axis and a continuous Y axis. What I want is for markers to delineate each point on the plot, and for the markers to be connected to each other with lines. This sounds like it should be a relatively simple formatting task, but I have been unable to get it to work.</p>
<p>Here is a snapshot of an example dataframe that I am inputting into the code:
<a href="https://i.sstatic.net/Sy3FP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Sy3FP.png" alt="example dataframe" /></a></p>
<p>Here is the current code that I am using, where df is my input dataframe:</p>
<pre><code>grouping = df.set_index(sample_col)[group_col].to_dict()
fig = make_subplots(2, 1, subplot_titles=subplot_titles)
n = 0
colors = ["red", "blue"]
colors_dict = {color_col_value: color for color_col_value, color in zip(list(df[class_col].unique()), colors)}
symbols = ["diamond", "arrow"]
symbols_dict = {id_type: symbol for id_type, symbol in zip(df[class_col].unique(), symbols)}

for index, gdf in enumerate(df.groupby([class_col])):
    m, gdf = gdf
    gdf = natsort_column(gdf, sample_col).reset_index(drop=True)
    gdf[group_col] = gdf[sample_col].map(grouping)
    fig.append_trace(go.Scatter(x=[gdf[group_col], gdf[sample_col], gdf['Sequence']],
                                y=gdf[intensity_col],
                                name=m,
                                mode='markers',
                                marker=dict(symbol=symbols_dict[m], size=12, color=colors_dict[m]),
                                legendgroup='group{}'.format(index),
                                showlegend=True), n, 1)
    n += 1

fig.update_layout(template='plotly_white', height=1000, width=800)
fig.update_xaxes(categoryorder='array', categoryarray=sorted(samples))
</code></pre>
<p>When I have the <code>mode</code> of the scatter plot set to <code>markers</code>, the plot looks like this:
<a href="https://i.sstatic.net/HD9ox.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HD9ox.png" alt="plot example markers" /></a></p>
<p>However, I really want the markers to be connected with lines. But when I set <code>mode='lines+markers'</code>, I get a plot that looks like this:</p>
<p><a href="https://i.sstatic.net/oIHrg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oIHrg.png" alt="plot example lines+markers" /></a></p>
<p>The markers are all connected both along the x and y axes. This is frustrating because I only want the markers to be connected along the x axis, where the markers are connected based on their corresponding sequence. This does mean that not all markers would be connected between different samples, but this what I would want anyways. Connecting the markers along the intensity values is not useful for what I'm trying to visualize at all. My suspicion is the root of this problem is that the x axis is multi-category, but I'm not sure how I can amend this.</p>
<p>I don't know why this is not working and it would be really helpful if anyone could point out the right direction to me.</p>
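<p>Plotly draws one continuous line per trace, in data order, which is why <code>lines+markers</code> also connects points vertically within a sample. Two ways around it: add one trace per Sequence, or keep a single trace and insert <code>None</code> between sequences so the line breaks there. A data-shaping sketch of the second option (the sample points are made up):</p>

```python
# Hypothetical points grouped by the sequence whose markers should be connected:
points_by_sequence = {
    "PEPTIDEA": [("Sample1", 10.0), ("Sample2", 12.5)],
    "PEPTIDEB": [("Sample1", 8.0), ("Sample3", 9.1)],
}

xs, ys = [], []
for sequence, points in points_by_sequence.items():
    for sample, intensity in points:
        xs.append(sample)
        ys.append(intensity)
    xs.append(None)  # plotly treats None as a gap: the line breaks here
    ys.append(None)

# go.Scatter(x=xs, y=ys, mode="lines+markers", connectgaps=False)
```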
|
<python><plotly><plotly.graph-objects>
|
2023-06-23 12:46:19
| 1
| 924
|
Bob McBobson
|
76,540,258
| 4,593,199
|
scipy fftshift and fftw fftshift gives different values (shifted matrix are not the same)
|
<p>I have a C++ code which performs <code>fftshift</code> for a given <code>meshgrid</code>.</p>
<pre class="lang-cpp prettyprint-override"><code> const std::size_t size = 5;
auto ar = xt::meshgrid(xt::arange<double>(0, size), xt::arange<double>(0, size));
int translate = (size + 1) / 2;
xt::xarray<double> x = std::get<0>(ar) - translate;
xt::xarray<double> y = std::get<1>(ar) - translate;
xt::xarray<double> xy_ = xt::stack(xt::xtuple(x, y));
auto p = xt::fftw::fftshift(xy_);
std::cout << p << std::endl;
</code></pre>
<p>Which gives the following shifted matrix:</p>
<pre><code>{{{-3., -2., -1., 0., 1.},
{-3., -2., -1., 0., 1.},
{-3., -2., -1., 0., 1.},
{-3., -2., -1., 0., 1.},
{-3., -2., -1., 0., 1.}},
{{-1., -1., -1., -1., -1.},
{ 0., 0., 0., 0., 0.},
{ 1., 1., 1., 1., 1.},
{-3., -3., -3., -3., -3.},
{-2., -2., -2., -2., -2.}}}
</code></pre>
<p>whereas in Python the same <code>fftshift</code> results in:</p>
<pre class="lang-py prettyprint-override"><code>mat = np.mgrid[:size, :size] - int( (size + 1)/2 )
fftshifted_mat = scipy.fftpack.fftshift(mat)
print(fftshifted_mat)
</code></pre>
<pre><code>[[[ 0 1 -3 -2 -1]
[ 0 1 -3 -2 -1]
[ 0 1 -3 -2 -1]
[ 0 1 -3 -2 -1]
[ 0 1 -3 -2 -1]]
[[ 0 0 0 0 0]
[ 1 1 1 1 1]
[-3 -3 -3 -3 -3]
[-2 -2 -2 -2 -2]
[-1 -1 -1 -1 -1]]]
</code></pre>
<p>How can I make the c++ fftshift output matrix exactly equal to scipy's output matrix?</p>
<p>I tried using <code>xt::roll</code>, <code>xt::transpose + xt::swap</code>, and manual circular shift combinations, but none of them worked.</p>
<p>Update:
Tried using <code>roll</code></p>
<pre><code> for (std::size_t axis = 0; axis < xy_.shape().size(); ++axis){
std::size_t dim_size = xy_.shape()[axis];
std::size_t shift = (dim_size - 1) / 2;
xy_ = xt::roll(xy_, shift, axis);
}
</code></pre>
<p>However, for some reason I only get a matrix matching the <code>scipy.fft.fftshift</code> output when size = 5 or size = 125. I am not sure why.</p>
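<p>For reference, a minimal NumPy check (no xtensor) of the shift amount per axis — <code>fftshift</code> rolls each axis forward by <code>n // 2</code>, which coincides with <code>(n - 1) / 2</code> only when <code>n</code> is odd, which would explain why odd sizes like 5 and 125 happen to work:</p>

```python
import numpy as np

size = 4  # even, so (size - 1) // 2 != size // 2
mat = np.arange(size * size).reshape(size, size)

shifted = mat
for axis in range(mat.ndim):
    # fftshift shifts each axis forward by n // 2
    shifted = np.roll(shifted, mat.shape[axis] // 2, axis=axis)

assert np.array_equal(shifted, np.fft.fftshift(mat))
```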
<p>Update 2:
As per @chris' answer I added manual shift with roll. It seems to replicate scipy's fftshift but seems to be quite slow.</p>
<pre class="lang-cpp prettyprint-override"><code>template <typename T>
void fftshift_roll(xt::xarray<T>& array)
{
std::size_t ndims = array.dimension();
std::vector<std::ptrdiff_t> shift_indices(ndims);
for (std::size_t i = 0; i < ndims; ++i) {
std::ptrdiff_t shift = static_cast<std::ptrdiff_t>(array.shape(i)) / 2;
shift_indices[i] = shift;
}
for (std::size_t i = 0; i < ndims; ++i) {
auto rolled = xt::roll(array, shift_indices[i], i);
array = xt::view(rolled, xt::all(), xt::all());
}
}
</code></pre>
|
<python><c++><scipy><fft><xtensor>
|
2023-06-23 12:45:20
| 1
| 1,133
|
Jarwin
|
76,540,138
| 5,437,090
|
python f-string and regular expression to generate customized path of a file in a directory
|
<p><strong>Given</strong>:</p>
<p>Files in a working directory:</p>
<pre><code>WKDIR = "/scratch/project_2004072/Nationalbiblioteket/dataframes"
$ ls -l
nikeX_docworks_lib_helsinki_fi_access_log_07_02_2021_lemmaMethod_stanza_27450_vocabs.json
nikeX_docworks_lib_helsinki_fi_access_log_07_02_2021_lemmaMethod_stanza_tfidf_matrix_RF_large.gz
nikeX_docworks_lib_helsinki_fi_access_log_07_02_2021_lemmaMethod_stanza_tfidf_vectorizer_large.gz
nikeX_docworks_lib_helsinki_fi_access_log_07_02_2021_lemmaMethod_stanza_user_tokens_df_27452_BoWs.gz
nikeY_docworks_lib_helsinki_fi_access_log_07_02_2021_lemmaMethod_stanza_26042_vocabs.json
nikeY_docworks_lib_helsinki_fi_access_log_07_02_2021_lemmaMethod_stanza_tfidf_matrix_RF_large.gz
nikeY_docworks_lib_helsinki_fi_access_log_07_02_2021_lemmaMethod_stanza_tfidf_vectorizer_large.gz
nikeY_docworks_lib_helsinki_fi_access_log_07_02_2021_lemmaMethod_stanza_user_tokens_df_26050_BoWs.gz
</code></pre>
<p><strong>Goal</strong>:</p>
<p>I'd like to build a customized path using a regular expression (an <code>fr</code>-string) so that only files ending in <code>user_tokens_df_XXXX_BoWs.gz</code> are read, and then load them via a helper function later in my code.
Right now, I have a Python script combining an f-string and a regex, which does not work:</p>
<pre><code>import re
import os
fprefix = f"nikeY_docworks_lib_helsinki_fi_access_log_07_02_2021_lemmaMethod_stanza_"
fpath = os.path.join(WKDIR, f'{fprefix}_user_token_sparse_df'fr'_user_tokens_df_(\d+)_BoWs.gz')
#fpath = os.path.join(WKDIR, f'{fprefix}_user_token_sparse_df'fr'(_user_tokens_df_(\d+)_BoWs.gz)') # did not work either!
print(fpath) # >>>> it's wrong! <<<<
try:
# load via helper function
df = load_pickle(fpath)
except:
# do something else
</code></pre>
<p>Is there a better approach to fix this? Am I wrong in thinking that <code>re.search()</code> won't help here, since <code>fpath</code> is fed into another function inside the <code>try</code>/<code>except</code> block in my code?</p>
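<p>For clarity, the behaviour I'm after is roughly this (matching the regex against a directory listing rather than baking <code>(\d+)</code> into a literal path; <code>WKDIR</code> is a placeholder here):</p>

```python
import os
import re

WKDIR = "."  # placeholder for the real dataframes directory
pattern = re.compile(r".*_user_tokens_df_(\d+)_BoWs\.gz$")

# collect full paths of only the files whose names end in
# user_tokens_df_<digits>_BoWs.gz
matches = [
    os.path.join(WKDIR, name)
    for name in os.listdir(WKDIR)
    if pattern.match(name)
]
print(matches)
```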
|
<python><regex><os.path>
|
2023-06-23 12:29:30
| 1
| 1,621
|
farid
|
76,540,116
| 10,303,173
|
How to listen to the playwright page.on("request") or playwright.on("response")
|
<p>My question is: how do I listen to <code>page.on(...)</code> events, as in plain Playwright? I am using scrapy-playwright.</p>
<pre><code>def start_requests(self):
# GET request
yield scrapy.Request(
url,
meta={
"playwright": True,
"playwright_include_page": True,
},
)
def parse(self, response):
print("parse")
page = response.meta["playwright_page"]
print(page, "printing")
</code></pre>
<p>What I hope to achieve is to capture a POST request, since the base URL sends multiple background requests to the server that return JSON responses. I am only interested in the JSON response.</p>
<p><strong>EDIT</strong></p>
<pre><code>2023-06-24 12:39:35 [scrapy-playwright] DEBUG: [Context=default] Request: <POST https://turo.com/api/bulk-quotes/v2> (resource type: fetch, referrer: https://turo.com/gb/en/search?country=US&defaultZoomLevel=11&delivery=false&deliveryLocationType=city&endDate=06%2F30%2F2023&endTime=10%3A00&isMapSearch=false&itemsPerPage=200&latitude=37.7749295&location=San%20Francisco%2C%20CA&locationType=CITY&longitude=-122.41941550000001&pickupType=ALL&placeId=ChIJIQBpAG2ahYAR_6128GcTUEo&region=CA&sortType=RELEVANCE&startDate=06%2F25%2F2023&startTime=10%3A00&useDefaultMaximumDistance=true)
2023-06-24 12:39:35 [scrapy-playwright] DEBUG: [Context=default] Response: <200 https://turo.com/api/bulk-quotes/v2> (referrer: None)
</code></pre>
<p>I see the request and response in the terminal. How do I capture the response and return it to <code>parse</code> for further processing?</p>
|
<python><scrapy><scrapy-playwright>
|
2023-06-23 12:26:46
| 1
| 1,215
|
Onyilimba
|
76,540,098
| 8,458,083
|
How to create a nix project with the python package streamlit
|
<pre><code>{
description = "virtual environment with python and streamlit";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
inputs.flake-utils.url = "github:numtide/flake-utils";
outputs = { self, nixpkgs, flake-utils }:
flake-utils.lib.eachDefaultSystem (system:
let
pkgs = nixpkgs.legacyPackages.${system};
python=pkgs.python311;
python_packages= python.withPackages(ps: with ps;[
ipython
matplotlib
pandas
streamlit
]);
myDevTools = [
python_packages
];
in {
devShells.default = pkgs.mkShell {
buildInputs = myDevTools;
};
});
}
</code></pre>
<p>the command nix develop works fine until I add the package streamlit</p>
<blockquote>
<p>error:
… while calling the 'derivationStrict' builtin</p>
<pre><code> at /builtin/derivation.nix:9:12: (source not available)
… while evaluating derivation 'nix-shell'
whose name attribute is located at /nix/store/s1z7nb9n6r5n0r34fabp6yybwkbr8mjk-source/pkgs/stdenv/generic/make-derivation.nix:303:7
… while evaluating attribute 'buildInputs' of derivation 'nix-shell'
at /nix/store/s1z7nb9n6r5n0r34fabp6yybwkbr8mjk-source/pkgs/stdenv/generic/make-derivation.nix:350:7:
349| depsHostHost = lib.elemAt (lib.elemAt dependencies 1) 0;
350| buildInputs = lib.elemAt (lib.elemAt dependencies 1) 1;
| ^
351| depsTargetTarget = lib.elemAt (lib.elemAt dependencies 2) 0;
(stack trace truncated; use '--show-trace' to show the full trace)
error: undefined variable 'streamlit'
at /nix/store/zxv0741pn2r7h0vk1f1i9knvybfh3yff-source/flake.nix:15:11:
14| pandas
15| streamlit
</code></pre>
</blockquote>
|
<python><streamlit><nix><nix-flake>
|
2023-06-23 12:23:56
| 1
| 2,017
|
Pierre-olivier Gendraud
|
76,540,063
| 10,413,428
|
mypy, lambda, and QThread.quit() | QThread.deleteLater()
|
<p>I have several worker/thread setups in my application which I am currently refactoring to support mypy.</p>
<p>The setup is always similar to the following:</p>
<pre class="lang-py prettyprint-override"><code>self._test_thread = QThread()
self._test_worker = TestWorker()
self._test_worker.moveToThread(self._test_thread)
# Bind all the signals to the slots
self._test_thread.started.connect(self._test_worker.run)
self._test_worker.finished.connect(lambda: {
self._test_thread.quit(),
self._test_thread.wait(),
self._test_worker.deleteLater(),
self._some_custom_ui_change_method(),
self.ui.btn_test.setEnabled(True),
})
</code></pre>
<p>The problem is that mypy reports an error for <code>.quit()</code> and <code>.deleteLater()</code> of the QThread class.</p>
<p>The following error is displayed <code>Mypy: "quit" of "QThread" does not return a value [func-returns-value]</code>.</p>
<p>Both methods are annotated with <code>-> None</code>. Is there a rule that lambda bodies can only consist of expressions that return values? If so, is creating an extra function for the five statements a good solution?</p>
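<p>A minimal non-Qt sketch of why the <code>func-returns-value</code> complaint appears: the lambda body above is actually a <em>set literal</em>, so each call runs but its return value becomes a set element, and mypy flags collecting the <code>None</code> returns (the function names here are stand-ins for the Qt slots):</p>

```python
calls = []

def quit_thread() -> None:      # stand-in for self._test_thread.quit
    calls.append("quit")

def delete_later() -> None:     # stand-in for self._test_worker.deleteLater
    calls.append("deleteLater")

# The body is a set literal: both calls are evaluated left to right,
# but their None return values are collected into a set -- which is
# exactly what mypy's func-returns-value check objects to.
handler = lambda: {quit_thread(), delete_later()}
result = handler()

print(result)  # {None}
print(calls)
```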
|
<python><lambda><pyside><pyside6>
|
2023-06-23 12:19:36
| 0
| 405
|
sebwr
|
76,539,772
| 3,423,825
|
How to resample and calculate amount weighted average price in Pandas?
|
<p>I have a dataframe of trades in which each trade has a corresponding amount and price. How can I resample the dataframe in a 1 minute period and calculate the amount weighted average prices of each period ?</p>
<pre><code>>df
amount price
datetime
2023-06-23 10:55:40.420 0.04657 30106.01
2023-06-23 10:55:42.348 0.00085 30104.54
2023-06-23 10:55:42.491 0.00368 30104.54
2023-06-23 10:55:43.211 0.03008 30104.54
2023-06-23 10:55:45.910 0.00035 30101.56
... ... ...
2023-06-23 10:58:06.401 0.00863 30108.00
2023-06-23 10:58:06.661 0.00829 30108.00
2023-06-23 10:58:07.474 0.00305 30108.00
2023-06-23 10:58:07.599 0.00048 30108.00
2023-06-23 10:58:08.393 0.00041 30108.00
[428 rows x 2 columns]
</code></pre>
<p>I can resample trade amounts with :</p>
<pre><code>>>> df['amount'].resample("1Min").sum()
datetime
2023-06-23 10:55:00 0.78885
2023-06-23 10:56:00 12.84216
2023-06-23 10:57:00 9.56456
2023-06-23 10:58:00 0.08334
Freq: T, Name: amount, dtype: float64
</code></pre>
<p>But what is the best solution to calculate the average price of each periods based on the amount of each trade ?</p>
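<p>One direction I've sketched (not sure it's the idiomatic way) is to resample the notional (<code>amount * price</code>) and divide by the resampled amount — shown here with a toy frame standing in for the real trades:</p>

```python
import pandas as pd

# toy stand-in for the trades frame
idx = pd.to_datetime([
    "2023-06-23 10:55:40", "2023-06-23 10:55:50", "2023-06-23 10:56:10",
])
df = pd.DataFrame(
    {"amount": [0.04, 0.01, 0.02], "price": [30106.0, 30104.0, 30101.0]},
    index=idx,
)

# weighted mean per bin = sum(amount * price) / sum(amount)
amount_sum = df["amount"].resample("1Min").sum()
notional = (df["amount"] * df["price"]).resample("1Min").sum()
vwap = notional / amount_sum
print(vwap)
```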
|
<python><pandas>
|
2023-06-23 11:42:43
| 1
| 1,948
|
Florent
|
76,539,454
| 8,321,705
|
Remove gaps from unused categories in plotly python facet group bar plot
|
<p>Is there an easy way to remove the gaps between the bars belonging to a single group? Just to make that clear, in my case the "actions" in the example data will never overlap across the facetting variable "animal", so these values will always be missing for the "other" "animal".</p>
<p>EDIT: as asked in comments, a gap within a subplot is desired (e.g. the category in general is used, only one instance is missing), to keep a consistent layout per subplot. Just the space reserved for categories exclusively belonging to another subplot should be removed.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
data = [
[1, 10, "dog", "run"],
[2, 15, "dog", "run"],
[2, 13, "fish", "dive"],
[1, 14, "fish", "swim"],
[2, 13, "dog", "bark"],
[1, 12, "dog", "growl"],
[2, 12, "dog", "growl"],
[1, 16, "fish", "dive"],
[2, 7, "fish", "blub"],
[1, 7, "fish", "swim"]
]
df = pd.DataFrame(data, columns=['day', 'temperature', "animal", "action"])
fig = px.bar(
df,
x=df["temperature"],
y=df["day"].astype(str), # to use only existing values,
color="action",
barmode="group",
orientation="h",
facet_row="animal",
height=600
)
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/EC6ZK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EC6ZK.png" alt="grouped_bar_plot_with_facets" /></a></p>
|
<python><pandas><plotly>
|
2023-06-23 10:59:53
| 1
| 633
|
crazysantaclaus
|
76,539,445
| 12,955,644
|
Pipenv is throwing error and not creating virtual environment
|
<p>I've reviewed other Pipenv-related questions, but none matched my specific issue. I'm using Ubuntu 20.04 with Python 3.8 and Pipenv 11.9.0, which was functioning properly.</p>
<p>However, when I encountered trouble installing a package, I decided to update both Python and Pipenv. I installed Pipenv 2023.6.18, but I couldn't use it because the "--version" command still displayed the old version, and the virtual environment continued to be created with Python 3.8.</p>
<p>To address this, I followed the instructions in the Pipenv documentation on selecting a specific Python version for virtual environment creation.
By running the command <code>pipenv --python 3.11</code>, a virtual environment was created with Python 3.1, not 3.11.</p>
<p>I verified the contents of my ~/.local/bin directory and found two files: autopep8 and pycodestyle.</p>
<p>autopep8 contains this code:</p>
<pre><code>#!/bin/python3.9
# -*- coding: utf-8 -*-
import re
import sys
from autopep8 import main
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
sys.exit(main())
</code></pre>
<p>pycodestyle contains:</p>
<pre><code>#!/bin/python3.9
# -*- coding: utf-8 -*-
import re
import sys
from pycodestyle import _main
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
sys.exit(_main())
</code></pre>
<p>After running this command to update pipenv again:</p>
<pre><code>pip install --user --upgrade pipenv
</code></pre>
<p>I ended up messing it up completely. I am now unable to create and activate the virtual environment as I could before.</p>
<p>Then I uninstalled pipenv with this command: <code>pip uninstall pipenv</code> and it uninstalled the latest version.</p>
<p>Now checking pipenv --version throws this:</p>
<pre><code>/usr/bin/pipenv:6: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
from pkg_resources import load_entry_point
pipenv, version 11.9.0
</code></pre>
<p>pipenv shell throws this error:</p>
<pre><code>user@user:~/Desktop/Test-ML$ pipenv shell
/usr/bin/pipenv:6: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
from pkg_resources import load_entry_point
Shell for UNKNOWN_VIRTUAL_ENVIRONMENT already activated.
No action taken to avoid nested environments.
</code></pre>
<p>Here are some files from various directories that could be helpful in identifying the issue.</p>
<p>Inside ~/.local/lib:</p>
<pre><code>user@user:~/.local/lib$ ls
python3.8 python3.9
</code></pre>
<p>Inside /usr/lib:</p>
<pre><code>user@user:/usr/lib$ ls
Other packages
python2.7
python3
python3.11
python3.8
python3.9
Other packages
</code></pre>
<p>Inside the pipenv file of /usr/bin directory:</p>
<pre><code>user@user:/usr/bin$ cat pipenv
#!/usr/bin/python3
# EASY-INSTALL-ENTRY-SCRIPT: 'pipenv==11.9.0','console_scripts','pipenv'
__requires__ = 'pipenv==11.9.0'
import re
import sys
from pkg_resources import load_entry_point
if __name__ == '__main__':
sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
sys.exit(
load_entry_point('pipenv==11.9.0', 'console_scripts', 'pipenv')()
)
</code></pre>
<p>Now, as I'm trying to completely remove pipenv, I tried uninstalling again:</p>
<pre><code>user@user:~/Desktop/Test-ML$ pip uninstall pipenv
/usr/bin/pip:6: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
from pkg_resources import load_entry_point
Found existing installation: pipenv 11.9.0
Not uninstalling pipenv at /usr/lib/python3/dist-packages, outside environment /usr
Can't uninstall 'pipenv'. No files were found to uninstall.
</code></pre>
<p>It failed. But if I check the version:</p>
<pre><code>user@user:~/Desktop/Test-ML$ pipenv --version
/usr/bin/pipenv:6: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
from pkg_resources import load_entry_point
pipenv, version 11.9.0
</code></pre>
<p>I want to utilize the latest version of Pipenv to create and manage virtual environments while having the option to use the desired Python version.</p>
<p>Any guidance or suggestions would be greatly appreciated.</p>
<p>Thank you</p>
|
<python><ubuntu><pipenv><virtual-environment>
|
2023-06-23 10:58:32
| 1
| 333
|
Imtiaz Ahmed
|
76,539,374
| 20,776,947
|
Mount Multiple Google Drive Account with Google Colab
|
<p>I am trying to connect and Mount multiple Google Drive account with Google Colab. So, <code>qDiffusion Application</code> load file from both Google Drive account and show in the <code>client app</code></p>
<p><em>Github Repo for reference :</em> <a href="https://github.com/arenasys/qDiffusion" rel="nofollow noreferrer">https://github.com/arenasys/qDiffusion</a></p>
<p><strong>Original Code (<a href="https://colab.research.google.com/github/arenasys/qDiffusion/blob/master/remote_colab.ipynb" rel="nofollow noreferrer">Notebook URL</a>):</strong></p>
<pre><code>%cd /content
import IPython.display
import os
import sys
if not os.path.exists("/content/sd-inference-server"):
!git clone https://github.com/arenasys/sd-inference-server.git
%cd /content/sd-inference-server
model_folder = "/content/sd-inference-server/models"
try:
# decline the popup to use the local folder ^
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
model_folder = "/content/drive/My Drive/qDiffusion/models"
if not os.path.exists(model_folder):
!mkdir '/content/drive/My Drive/qDiffusion' -p
!cp -r 'models/' '/content/drive/My Drive/qDiffusion/'
except Exception as e:
pass
if not os.path.exists("venv"):
!curl -OJL https://huggingface.co/datasets/arenasys/qDiffusion/resolve/main/cached_venv5.tar
!tar xf cached_venv5.tar; mv cached_venv5 venv; rm cached_venv5.tar
!rm /content/sd-inference-server/venv/lib/python3.10/site-packages/pathlib.py
!pip install timm==0.6.13
!pip install tomesd==0.1.3
!pip install torchmetrics
!pip uninstall -y tensorflow
!wget http://launchpadlibrarian.net/367274644/libgoogle-perftools-dev_2.5-2.2ubuntu3_amd64.deb
!wget https://launchpad.net/ubuntu/+source/google-perftools/2.5-2.2ubuntu3/+build/14795286/+files/google-perftools_2.5-2.2ubuntu3_all.deb
!wget https://launchpad.net/ubuntu/+source/google-perftools/2.5-2.2ubuntu3/+build/14795286/+files/libtcmalloc-minimal4_2.5-2.2ubuntu3_amd64.deb
!wget https://launchpad.net/ubuntu/+source/google-perftools/2.5-2.2ubuntu3/+build/14795286/+files/libgoogle-perftools4_2.5-2.2ubuntu3_amd64.deb
!apt install -qq libunwind8-dev
!dpkg -i *.deb
%env LD_PRELOAD=libtcmalloc.so
IPython.display.clear_output()
if not sys.path[0] == "/content/sd-inference-server/":
sys.path.insert(0, "/content/sd-inference-server/venv/lib/python3.10/site-packages")
sys.path.insert(0, "/content/sd-inference-server/")
from pycloudflared import try_cloudflare
tunnel_url = try_cloudflare(port=28888, verbose=False)
print("ENDPOINT:", tunnel_url.tunnel.replace("https", "wss"))
!python remote.py "$model_folder"
try_cloudflare.terminate(28888)
</code></pre>
<p><strong>Multiple Mount Code (<a href="https://stackoverflow.com/a/57394402/20776947">Solution from Stackoverflow</a>) :</strong></p>
<pre><code>!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
!mkdir -p /drive2
!google-drive-ocamlfuse /drive2
</code></pre>
<p><strong>After Merging, i am trying Like :</strong></p>
<pre><code>!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
# Define a function to connect and mount Google Drive
def mount_drive():
creds = GoogleCredentials.get_application_default()
import getpass
vcode = getpass.getpass()
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} -code={vcode}
# Define a function to mount a specific Google Drive account
def mount_google_drive(account_name, drive_path):
# Create a folder to mount the drive
mount_folder = f"/drive_{account_name}"
!mkdir -p {mount_folder}
# Mount the drive
!google-drive-ocamlfuse {mount_folder}
# Move the contents to the desired drive path
!mv {mount_folder}/* {drive_path}
# Remove the empty folder
!rm -rf {mount_folder}
# Example usage:
mount_drive()
# Mount and specify the accounts and paths
accounts = [
{"name": "account1", "drive_path": "/content/drive/My Drive/qDiffusion/models"},
{"name": "account2", "drive_path": "/content/drive/My Drive/other_drive"}
]
for account in accounts:
print(f"Mounting {account['name']}...")
mount_google_drive(account["name"], account["drive_path"])
print(f"{account['name']} mounted at {account['drive_path']}")
%cd /content
import IPython.display
import os
import sys
if not os.path.exists("/content/sd-inference-server"):
!git clone https://github.com/arenasys/sd-inference-server.git
%cd /content/sd-inference-server
model_folder = "/content/sd-inference-server/models"
try:
# decline the popup to use the local folder ^
from google.colab import drive
drive.mount('/content/drive', force_remount=True)
model_folder = "/content/drive/My Drive/qDiffusion/models"
if not os.path.exists(model_folder):
!mkdir '/content/drive/My Drive/qDiffusion' -p
!cp -r 'models/' '/content/drive/My Drive/qDiffusion/'
except Exception as e:
pass
if not os.path.exists("venv"):
!curl -OJL https://huggingface.co/datasets/arenasys/qDiffusion/resolve/main/cached_venv5.tar
!tar xf cached_venv5.tar; mv cached_venv5 venv; rm cached_venv5.tar
!rm /content/sd-inference-server/venv/lib/python3.10/site-packages/pathlib.py
!pip install timm==0.6.13
!pip install tomesd==0.1.3
!pip install torchmetrics
!pip uninstall -y tensorflow
!wget http://launchpadlibrarian.net/367274644/libgoogle-perftools-dev_2.5-2.2ubuntu3_amd64.deb
!wget https://launchpad.net/ubuntu/+source/google-perftools/2.5-2.2ubuntu3/+build/14795286/+files/google-perftools_2.5-2.2ubuntu3_all.deb
!wget https://launchpad.net/ubuntu/+source/google-perftools/2.5-2.2ubuntu3/+build/14795286/+files/libtcmalloc-minimal4_2.5-2.2ubuntu3_amd64.deb
!wget https://launchpad.net/ubuntu/+source/google-perftools/2.5-2.2ubuntu3/+build/14795286/+files/libgoogle-perftools4_2.5-2.2ubuntu3_amd64.deb
!apt install -qq libunwind8-dev
!dpkg -i *.deb
%env LD_PRELOAD=libtcmalloc.so
IPython.display.clear_output()
if not sys.path[0] == "/content/sd-inference-server/":
sys.path.insert(0, "/content/sd-inference-server/venv/lib/python3.10/site-packages")
sys.path.insert(0, "/content/sd-inference-server/")
from pycloudflared import try_cloudflare
tunnel_url = try_cloudflare(port=28888, verbose=False)
print("ENDPOINT:", tunnel_url.tunnel.replace("https", "wss"))
!python remote.py "$model_folder"
try_cloudflare.terminate(28888)
</code></pre>
<p>It is not working as expected: there is no popup to connect a different Google Drive account, and no TryCloudflare public endpoint URL is printed.</p>
|
<python><python-3.x><jupyter-notebook><google-drive-api><google-colaboratory>
|
2023-06-23 10:46:43
| 0
| 457
|
Mehul Kumar
|
76,539,143
| 4,797,662
|
Not enough free space error while compiling a model with Neuron SDK
|
<p>I'm trying to compile a model on Sagemaker Studio with AWS Neuron SDK in order to run the model on inferentia instance. I'm getting the following error during compilation phase.</p>
<pre><code>ERROR:Neuron:Not enough free space to write 268435456 bytes
Traceback (most recent call last):
File "/opt/conda/lib/python3.9/site-packages/torch_neuron/convert.py", line 413, in op_converter
neuron_function = self.subgraph_compiler(
File "/opt/conda/lib/python3.9/site-packages/torch_neuron/decorators.py", line 135, in trace
weight_paths = write_params(model_dir, constant_parameter_tensors, constant_param_key_mapping)
File "/opt/conda/lib/python3.9/site-packages/torch_neuron/decorators.py", line 1517, in write_params
np.save(f'{directory}/weights/{file_name}.npy', weight.numpy())
File "<__array_function__ internals>", line 5, in save
File "/opt/conda/lib/python3.9/site-packages/numpy/lib/npyio.py", line 529, in save
format.write_array(fid, arr, allow_pickle=allow_pickle,
File "/opt/conda/lib/python3.9/site-packages/numpy/lib/format.py", line 691, in write_array
array.tofile(fp)
OSError: Not enough free space to write 268435456 bytes
</code></pre>
<p>Command</p>
<pre><code># Compile the model using the wrapper
compiler_args = ['--verbose','1', '--num-neuroncores', str(1)]
optimizations = ['FLOAT32_TO_FLOAT16']
model_neuron = torch.neuron.trace(mpt_model, example, compiler_args=compiler_args, optimizations=optimizations, separate_weights=True, strict=False)
</code></pre>
<p>Model - <code>mosaicml/mpt-7b-instruct</code></p>
<p>AWS Sagemaker instance - <code>ml.m5.4xlarge</code></p>
<p>Running <code>df -h</code> on the terminal shows that the overlay filesystem is being used to write the files during compilation.</p>
<p><a href="https://i.sstatic.net/j0AHH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j0AHH.png" alt="enter image description here" /></a></p>
<p>The two possible resolutions I can see are to</p>
<ul>
<li>Either increase disk space</li>
<li>Or use another directory to write temporary files during neuron compilation</li>
</ul>
<p>But I don't see a way to pass a custom storage path to the Neuron SDK API, nor a way to extend the filesystem storage.</p>
<p>Neuron API Ref - <a href="https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuron/api-compilation-python-api.html#torch_neuron.Optimization" rel="nofollow noreferrer">https://awsdocs-neuron.readthedocs-hosted.com/en/latest/frameworks/torch/torch-neuron/api-compilation-python-api.html#torch_neuron.Optimization</a></p>
|
<python><amazon-web-services><machine-learning><amazon-sagemaker>
|
2023-06-23 10:15:29
| 0
| 664
|
Ishank Gulati
|
76,539,059
| 272,023
|
How to unit test Pydantic validators?
|
<p>I have a Pydantic class that performs custom validation</p>
<pre><code>class MyClass(BaseModel):
foo: str
@validator('foo')
def perform_custom_validation(cls, val):
# we would normally perform custom validation logic here on the value val
raise ValueError('Always fails')
</code></pre>
<p>I want to test my validation logic:</p>
<pre><code>def create_class(a_foo: str):
MyClass(foo=a_foo)
# now inside my unit test
with self.assertRaises(ValidationError):
create_class('fail')
</code></pre>
<p>I think I should be catching that exception, but my unit test never sees it, so the test fails. However, if I run <code>create_class</code> outside of the context manager, the exception does occur and my code fails.</p>
<p>How am I supposed to test that this exception is thrown?</p>
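<p>For reproduction, here is a fully self-contained version of what I'm running (pydantic v1-style <code>@validator</code>; class, helper, and test in one file):</p>

```python
import unittest

from pydantic import BaseModel, ValidationError, validator


class MyClass(BaseModel):
    foo: str

    @validator("foo")
    def perform_custom_validation(cls, val):
        # custom validation logic would normally go here
        raise ValueError("Always fails")


def create_class(a_foo: str):
    MyClass(foo=a_foo)


class TestMyClass(unittest.TestCase):
    def test_validator_raises(self):
        # pydantic wraps the ValueError from the validator
        # in a ValidationError at construction time
        with self.assertRaises(ValidationError):
            create_class("fail")
```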
|
<python><pydantic>
|
2023-06-23 10:03:53
| 1
| 12,131
|
John
|
76,538,957
| 12,436,050
|
Select column by regex in python
|
<p>I have a dataframe with following columns</p>
<pre><code>Name Company generic name generic name R&D Number (DC-) R&D Number (A) type
A AB 53654 5767 1111 3333 a
C CD 56767 56667
</code></pre>
<p>I would like to create a subset of this dataframe and combine columns with a pattern by aggregating the values by a ','</p>
<p>The desired output is:</p>
<pre><code>Name Company generic name R&D Number
A AB 53654, 5767 1111, 3333
C CD 56767 56667
</code></pre>
<p>I found there is a way to filter the columns based on a regex, like
<code>df.filter(regex=("R&D Number.*"))</code></p>
<p>But is there a way to join these with the other columns and concatenate the values with ',' to get the final output? Any help is highly appreciated.</p>
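<p>A toy sketch of the target transformation (the frame here is hypothetical and sidesteps the duplicate <code>generic name</code> columns; only the <code>R&D Number</code> group is shown):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Name": ["A", "C"],
    "Company": ["AB", "CD"],
    "R&D Number (DC-)": ["1111", "56667"],
    "R&D Number (A)": ["3333", None],
})

# select the column group by regex, then join non-missing values per row
rd = df.filter(regex=r"R&D Number.*")
df["R&D Number"] = rd.apply(
    lambda row: ", ".join(v for v in row.dropna().astype(str)), axis=1
)
print(df[["Name", "Company", "R&D Number"]])
```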
|
<python><pandas>
|
2023-06-23 09:51:47
| 1
| 1,495
|
rshar
|
76,538,913
| 5,559,342
|
How do I get my Telegram bot to write in monospace?
|
<p>I am trying to write something in "monospace" with my telegram bot using the following Python function:</p>
<pre><code>@bot.message_handler(commands=['test_monospace'])
def test_monospace(message):
bot.reply_to(message, "```This message should be in monospace```")
</code></pre>
<p>Unfortunately, when I test the bot I get the message without the monospace formatting</p>
<p><a href="https://i.sstatic.net/ZKr2t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZKr2t.png" alt="The bot output" /></a></p>
<p>How can I get it to use monospace?</p>
|
<python><formatting><telegram-bot>
|
2023-06-23 09:47:03
| 1
| 5,075
|
Robb1
|
76,538,814
| 497,219
|
Updating the file contents of a Django FileField during upload results in I/O error
|
<p>I'm trying to encrypt the contents of a file that is being uploaded. This is the relevant code snippet:</p>
<pre><code>class AppFile(models.Model):
app_file = models.FileField(upload_to=upload_to, validators=[validate_file_size])
encrypted_data_key = models.CharField(max_length=500, blank=True)
def encrypt_file_with_data_key(self, data_key):
cipher = Fernet(data_key)
with self.app_file.open(mode='rb') as file:
file_data = file.read()
encrypted_data = cipher.encrypt(file_data)
with self.app_file.open(mode='wb') as encrypted_file:
encrypted_file.write(encrypted_data)
def save(self, *args, **kwargs):
if self._state.adding is True:
# New image being uploaded
encrypted_data_key, data_key = self.generate_data_key_from_vault()
self.encrypted_data_key = encrypted_data_key
# Encrypt the uploaded image file
self.encrypt_file_with_data_key(data_key)
        super().save(*args, **kwargs)
</code></pre>
<p>I prefer this approach as this is agnostic of the StorageProvider being used. I also want to avoid detaching the file to a temporary folder, and re-attach it after encryption.</p>
<p>However, this results in the following error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/jeroenjacobs/.pyenv/versions/myapp/lib/python3.11/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/jeroenjacobs/.pyenv/versions/myapp/lib/python3.11/site-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jeroenjacobs/.pyenv/versions/myapp/lib/python3.11/site-packages/django/views/generic/base.py", line 84, in view
return self.dispatch(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jeroenjacobs/.pyenv/versions/myapp/lib/python3.11/site-packages/django/views/generic/base.py", line 119, in dispatch
return handler(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jeroenjacobs/.pyenv/versions/myapp/lib/python3.11/site-packages/django/views/generic/edit.py", line 184, in post
return super().post(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jeroenjacobs/.pyenv/versions/myapp/lib/python3.11/site-packages/django/views/generic/edit.py", line 153, in post
return self.form_valid(form)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/jeroenjacobs/.pyenv/versions/myapp/lib/python3.11/site-packages/django/contrib/messages/views.py", line 12, in form_valid
response = super().form_valid(form)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jeroenjacobs/.pyenv/versions/myapp/lib/python3.11/site-packages/django/views/generic/edit.py", line 135, in form_valid
self.object = form.save()
^^^^^^^^^^^
File "/Users/jeroenjacobs/.pyenv/versions/myapp/lib/python3.11/site-packages/django/forms/models.py", line 548, in save
self.instance.save()
File "/Users/jeroenjacobs/PycharmProjects/myapp/mainapp/models.py", line 90, in save
self.encrypt_file_with_data_key(data_key)
File "/Users/jeroenjacobs/PycharmProjects/myapp/mainapp/models.py", line 77, in encrypt_file_with_data_key
with self.app_file.open(mode='wb') as encrypted_file:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/jeroenjacobs/.pyenv/versions/myapp/lib/python3.11/site-packages/django/db/models/fields/files.py", line 80, in open
self.file.open(mode)
File "/Users/jeroenjacobs/.pyenv/versions/myapp/lib/python3.11/site-packages/django/core/files/uploadedfile.py", line 115, in open
self.file.seek(0)
ValueError: I/O operation on closed file.
</code></pre>
<p>Reading the contents doesn't seem to be a problem, but writing seems to generate an error, despite <code>open</code> being called.</p>
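<p>A possible workaround (a sketch, not a confirmed fix; the <code>encrypt</code> helper below is hypothetical): instead of reopening the just-uploaded file for writing, hand the ciphertext back to the field's storage via <code>ContentFile</code>, which keeps the approach storage-agnostic:</p>

```python
# Sketch (assumption: self.app_file is a Django FileField file, and
# encrypt() is a hypothetical helper producing the ciphertext bytes).
def encrypt_file_with_data_key(self, data_key):
    # local import so this sketch stands alone
    from django.core.files.base import ContentFile

    plaintext = self.app_file.read()
    ciphertext = encrypt(plaintext, data_key)  # hypothetical encryption helper
    # FieldFile.save() routes the bytes through the configured storage
    # backend; save=False defers the model save so the caller's save()
    # can finish normally.
    self.app_file.save(self.app_file.name, ContentFile(ciphertext), save=False)
```

<p>This avoids writing through the closed in-memory upload object entirely: the storage backend receives a fresh file-like object instead.</p>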
|
<python><django>
|
2023-06-23 09:34:43
| 0
| 1,583
|
Jeroen Jacobs
|
76,538,749
| 735,786
|
Can't use apt-get on Ubuntu 16.04 VM because python3 is missing
|
<p>I can't seem to find a solution for this. I need <code>sudo apt-get install <package></code> to work because <code>sudo apt-get -f -qq install -y</code> doesn't work.</p>
<p>Here is the log:</p>
<pre><code>ubuntu@ip-10-8-0-145:~$ sudo apt-get -f -qq install -y
Preconfiguring packages ...
(Reading database ... 104324 files and directories currently installed.)
Preparing to unpack .../dh-python_2.20151103ubuntu1.2_all.deb ...
/var/lib/dpkg/info/dh-python.prerm: 6: /var/lib/dpkg/info/dh-python.prerm: py3clean: not found
dpkg: warning: subprocess old pre-removal script returned error exit status 127
dpkg: trying script from the new package instead ...
/var/lib/dpkg/tmp.ci/prerm: 6: /var/lib/dpkg/tmp.ci/prerm: py3clean: not found
dpkg: error processing archive /var/cache/apt/archives/dh-python_2.20151103ubuntu1.2_all.deb (--unpack):
subprocess new pre-removal script returned error exit status 127
/var/lib/dpkg/info/dh-python.postinst: 6: /var/lib/dpkg/info/dh-python.postinst: py3compile: not found
dpkg: error while cleaning up:
subprocess installed post-installation script returned error exit status 127
Preparing to unpack .../python3-distupgrade_1%3a16.04.32_all.deb ...
/var/lib/dpkg/info/python3-distupgrade.prerm: 6: /var/lib/dpkg/info/python3-distupgrade.prerm: py3clean: not found
dpkg: warning: subprocess old pre-removal script returned error exit status 127
dpkg: trying script from the new package instead ...
/var/lib/dpkg/tmp.ci/prerm: 6: /var/lib/dpkg/tmp.ci/prerm: py3clean: not found
dpkg: error processing archive /var/cache/apt/archives/python3-distupgrade_1%3a16.04.32_all.deb (--unpack):
subprocess new pre-removal script returned error exit status 127
/var/lib/dpkg/info/python3-distupgrade.postinst: 6: /var/lib/dpkg/info/python3-distupgrade.postinst: py3compile: not found
dpkg: error while cleaning up:
subprocess installed post-installation script returned error exit status 127
Preparing to unpack .../python3-update-manager_1%3a16.04.17_all.deb ...
/var/lib/dpkg/info/python3-update-manager.prerm: 6: /var/lib/dpkg/info/python3-update-manager.prerm: py3clean: not found
dpkg: warning: subprocess old pre-removal script returned error exit status 127
dpkg: trying script from the new package instead ...
/var/lib/dpkg/tmp.ci/prerm: 6: /var/lib/dpkg/tmp.ci/prerm: py3clean: not found
dpkg: error processing archive /var/cache/apt/archives/python3-update-manager_1%3a16.04.17_all.deb (--unpack):
subprocess new pre-removal script returned error exit status 127
/var/lib/dpkg/info/python3-update-manager.postinst: 6: /var/lib/dpkg/info/python3-update-manager.postinst: py3compile: not found
dpkg: error while cleaning up:
subprocess installed post-installation script returned error exit status 127
Preparing to unpack .../python3-problem-report_2.20.1-0ubuntu2.30_all.deb ...
/var/lib/dpkg/info/python3-problem-report.prerm: 6: /var/lib/dpkg/info/python3-problem-report.prerm: py3clean: not found
dpkg: warning: subprocess old pre-removal script returned error exit status 127
dpkg: trying script from the new package instead ...
/var/lib/dpkg/tmp.ci/prerm: 6: /var/lib/dpkg/tmp.ci/prerm: py3clean: not found
dpkg: error processing archive /var/cache/apt/archives/python3-problem-report_2.20.1-0ubuntu2.30_all.deb (--unpack):
subprocess new pre-removal script returned error exit status 127
No apport report written because MaxReports is reached already
/var/lib/dpkg/info/python3-problem-report.postinst: 6: /var/lib/dpkg/info/python3-problem-report.postinst: py3compile: not found
dpkg: error while cleaning up:
subprocess installed post-installation script returned error exit status 127
Preparing to unpack .../python3-apport_2.20.1-0ubuntu2.30_all.deb ...
/var/lib/dpkg/info/python3-apport.prerm: 6: /var/lib/dpkg/info/python3-apport.prerm: py3clean: not found
dpkg: warning: subprocess old pre-removal script returned error exit status 127
dpkg: trying script from the new package instead ...
/var/lib/dpkg/tmp.ci/prerm: 6: /var/lib/dpkg/tmp.ci/prerm: py3clean: not found
dpkg: error processing archive /var/cache/apt/archives/python3-apport_2.20.1-0ubuntu2.30_all.deb (--unpack):
subprocess new pre-removal script returned error exit status 127
No apport report written because MaxReports is reached already
/var/lib/dpkg/info/python3-apport.postinst: 6: /var/lib/dpkg/info/python3-apport.postinst: py3compile: not found
dpkg: error while cleaning up:
subprocess installed post-installation script returned error exit status 127
Preparing to unpack .../python3-cryptography_1.2.3-1ubuntu0.3_amd64.deb ...
/var/lib/dpkg/info/python3-cryptography.prerm: 6: /var/lib/dpkg/info/python3-cryptography.prerm: py3clean: not found
dpkg: warning: subprocess old pre-removal script returned error exit status 127
dpkg: trying script from the new package instead ...
/var/lib/dpkg/tmp.ci/prerm: 6: /var/lib/dpkg/tmp.ci/prerm: py3clean: not found
dpkg: error processing archive /var/cache/apt/archives/python3-cryptography_1.2.3-1ubuntu0.3_amd64.deb (--unpack):
subprocess new pre-removal script returned error exit status 127
No apport report written because MaxReports is reached already
/var/lib/dpkg/info/python3-cryptography.postinst: 6: /var/lib/dpkg/info/python3-cryptography.postinst: py3compile: not found
dpkg: error while cleaning up:
subprocess installed post-installation script returned error exit status 127
Preparing to unpack .../python3-urllib3_1.13.1-2ubuntu0.16.04.4_all.deb ...
/var/lib/dpkg/info/python3-urllib3.prerm: 6: /var/lib/dpkg/info/python3-urllib3.prerm: py3clean: not found
dpkg: warning: subprocess old pre-removal script returned error exit status 127
dpkg: trying script from the new package instead ...
/var/lib/dpkg/tmp.ci/prerm: 6: /var/lib/dpkg/tmp.ci/prerm: py3clean: not found
dpkg: error processing archive /var/cache/apt/archives/python3-urllib3_1.13.1-2ubuntu0.16.04.4_all.deb (--unpack):
subprocess new pre-removal script returned error exit status 127
No apport report written because MaxReports is reached already
/var/lib/dpkg/info/python3-urllib3.postinst: 6: /var/lib/dpkg/info/python3-urllib3.postinst: py3compile: not found
dpkg: error while cleaning up:
subprocess installed post-installation script returned error exit status 127
Preparing to unpack .../software-properties-common_0.96.20.10_all.deb ...
/var/lib/dpkg/info/software-properties-common.prerm: 6: /var/lib/dpkg/info/software-properties-common.prerm: py3clean: not found
dpkg: warning: subprocess old pre-removal script returned error exit status 127
dpkg: trying script from the new package instead ...
/var/lib/dpkg/tmp.ci/prerm: 6: /var/lib/dpkg/tmp.ci/prerm: py3clean: not found
dpkg: error processing archive /var/cache/apt/archives/software-properties-common_0.96.20.10_all.deb (--unpack):
subprocess new pre-removal script returned error exit status 127
No apport report written because MaxReports is reached already
/var/lib/dpkg/info/software-properties-common.postinst: 6: /var/lib/dpkg/info/software-properties-common.postinst: py3compile: not found
dpkg: error while cleaning up:
subprocess installed post-installation script returned error exit status 127
Preparing to unpack .../python3-software-properties_0.96.20.10_all.deb ...
/var/lib/dpkg/info/python3-software-properties.prerm: 6: /var/lib/dpkg/info/python3-software-properties.prerm: py3clean: not found
dpkg: warning: subprocess old pre-removal script returned error exit status 127
dpkg: trying script from the new package instead ...
/var/lib/dpkg/tmp.ci/prerm: 6: /var/lib/dpkg/tmp.ci/prerm: py3clean: not found
dpkg: error processing archive /var/cache/apt/archives/python3-software-properties_0.96.20.10_all.deb (--unpack):
subprocess new pre-removal script returned error exit status 127
No apport report written because MaxReports is reached already
/var/lib/dpkg/info/python3-software-properties.postinst: 6: /var/lib/dpkg/info/python3-software-properties.postinst: py3compile: not found
dpkg: error while cleaning up:
subprocess installed post-installation script returned error exit status 127
Preparing to unpack .../sosreport_3.9.1-1ubuntu0.16.04.2_amd64.deb ...
/var/lib/dpkg/info/sosreport.prerm: 6: /var/lib/dpkg/info/sosreport.prerm: py3clean: not found
dpkg: warning: subprocess old pre-removal script returned error exit status 127
dpkg: trying script from the new package instead ...
/var/lib/dpkg/tmp.ci/prerm: 6: /var/lib/dpkg/tmp.ci/prerm: py3clean: not found
dpkg: error processing archive /var/cache/apt/archives/sosreport_3.9.1-1ubuntu0.16.04.2_amd64.deb (--unpack):
subprocess new pre-removal script returned error exit status 127
No apport report written because MaxReports is reached already
/var/lib/dpkg/info/sosreport.postinst: 6: /var/lib/dpkg/info/sosreport.postinst: py3compile: not found
dpkg: error while cleaning up:
subprocess installed post-installation script returned error exit status 127
Preparing to unpack .../cloud-init_21.1-19-gbad84ad4-0ubuntu1~16.04.2_all.deb ...
/var/lib/dpkg/info/cloud-init.prerm: 14: /var/lib/dpkg/info/cloud-init.prerm: py3clean: not found
dpkg: warning: subprocess old pre-removal script returned error exit status 127
dpkg: trying script from the new package instead ...
/var/lib/dpkg/tmp.ci/prerm: 14: /var/lib/dpkg/tmp.ci/prerm: py3clean: not found
dpkg: error processing archive /var/cache/apt/archives/cloud-init_21.1-19-gbad84ad4-0ubuntu1~16.04.2_all.deb (--unpack):
subprocess new pre-removal script returned error exit status 127
No apport report written because MaxReports is reached already
/var/lib/dpkg/info/cloud-init.postinst: 329: /var/lib/dpkg/info/cloud-init.postinst: py3compile: not found
dpkg: error while cleaning up:
subprocess installed post-installation script returned error exit status 127
Errors were encountered while processing:
/var/cache/apt/archives/dh-python_2.20151103ubuntu1.2_all.deb
/var/cache/apt/archives/python3-distupgrade_1%3a16.04.32_all.deb
/var/cache/apt/archives/python3-update-manager_1%3a16.04.17_all.deb
/var/cache/apt/archives/python3-problem-report_2.20.1-0ubuntu2.30_all.deb
/var/cache/apt/archives/python3-apport_2.20.1-0ubuntu2.30_all.deb
/var/cache/apt/archives/python3-cryptography_1.2.3-1ubuntu0.3_amd64.deb
/var/cache/apt/archives/python3-urllib3_1.13.1-2ubuntu0.16.04.4_all.deb
/var/cache/apt/archives/software-properties-common_0.96.20.10_all.deb
/var/cache/apt/archives/python3-software-properties_0.96.20.10_all.deb
/var/cache/apt/archives/sosreport_3.9.1-1ubuntu0.16.04.2_amd64.deb
/var/cache/apt/archives/cloud-init_21.1-19-gbad84ad4-0ubuntu1~16.04.2_all.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)
ubuntu@ip-10-8-0-145:~$
</code></pre>
<p>From the log, I gather that python3 isn't installed. However, running <code>sudo apt-get install python3</code> seems to indicate it is installed. Here is the log:</p>
<pre><code>ubuntu@ip-10-8-0-145:~$ sudo apt-get install python3
Reading package lists... Done
Building dependency tree
Reading state information... Done
python3 is already the newest version (3.5.1-3).
You might want to run 'apt-get -f install' to correct these:
The following packages have unmet dependencies:
ubuntu-release-upgrader-core : Depends: python3-distupgrade (= 1:16.04.32) but 1:16.04.25 is to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
ubuntu@ip-10-8-0-145:~$
</code></pre>
<p>However, after checking /usr/bin/, I see no python3 binary installed.</p>
<pre><code>ubuntu@ip-10-8-0-145:~$ ls -aslt /usr/bin | grep -i python
0 lrwxrwxrwx 1 root root 23 Mar 1 2021 pdb2.7 -> ../lib/python2.7/pdb.py
0 lrwxrwxrwx 1 root root 33 Mar 1 2021 python2.7-config -> x86_64-linux-gnu-python2.7-config
3412 -rwxr-xr-x 1 root root 3492624 Mar 1 2021 python2.7
4 -rwxr-xr-x 1 root root 2909 Mar 1 2021 x86_64-linux-gnu-python2.7-config
0 lrwxrwxrwx 1 root root 23 Jan 26 2021 pdb3.5 -> ../lib/python3.5/pdb.py
0 lrwxrwxrwx 1 root root 9 Nov 24 2017 python -> python2.7
0 lrwxrwxrwx 1 root root 9 Nov 24 2017 python2 -> python2.7
0 lrwxrwxrwx 1 root root 16 Nov 24 2017 python2-config -> python2.7-config
0 lrwxrwxrwx 1 root root 16 Nov 24 2017 python-config -> python2.7-config
0 lrwxrwxrwx 1 root root 29 Nov 24 2017 pyversions -> ../share/python/pyversions.py
0 lrwxrwxrwx 1 root root 33 Nov 24 2017 x86_64-linux-gnu-python-config -> x86_64-linux-gnu-python2.7-config
4 -rwxr-xr-x 1 root root 1056 Nov 24 2017 dh_python2
0 lrwxrwxrwx 1 root root 26 May 18 2016 dh_pypy -> ../share/dh-python/dh_pypy
0 lrwxrwxrwx 1 root root 29 May 18 2016 dh_python3 -> ../share/dh-python/dh_python3
0 lrwxrwxrwx 1 root root 26 May 18 2016 pybuild -> ../share/dh-python/pybuild
0 lrwxrwxrwx 1 root root 31 Mar 23 2016 py3versions -> ../share/python3/py3versions.py
4 -rwxr-xr-x 1 root root 976 Nov 27 2015 python3-jsondiff
4 -rwxr-xr-x 1 root root 3662 Nov 27 2015 python3-jsonpatch
4 -rwxr-xr-x 1 root root 974 Nov 27 2015 python2-jsondiff
4 -rwxr-xr-x 1 root root 3660 Nov 27 2015 python2-jsonpatch
4 -rwxr-xr-x 1 root root 1342 Oct 24 2015 python3-jsonpointer
4 -rwxr-xr-x 1 root root 1340 Oct 24 2015 python2-jsonpointer
ubuntu@ip-10-8-0-145:~$
</code></pre>
<p>I tried to do what was instructed and ran <code>sudo apt-get install --reinstall python3</code>, but got:</p>
<pre><code>ubuntu@ip-10-8-0-145:~$ sudo apt-get install --reinstall python3
Reading package lists... Done
Building dependency tree
Reading state information... Done
You might want to run 'apt-get -f install' to correct these:
The following packages have unmet dependencies:
ubuntu-release-upgrader-core : Depends: python3-distupgrade (= 1:16.04.32) but 1:16.04.25 is to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
ubuntu@ip-10-8-0-145:~$
</code></pre>
<p>I have tried to follow numerous articles claiming to install Python 3 on Ubuntu 16.04 without using <code>apt-get</code>, but I can't seem to get it right. Can someone post instructions on how to fix my machine?</p>
|
<python><python-3.x><ubuntu><ubuntu-16.04><python-install>
|
2023-06-23 09:25:53
| 2
| 3,996
|
tyronegcarter
|
76,538,675
| 22,009,322
|
How to make an arrow blink without changing its position using matplotlib animation?
|
<p>I am new to Python, but I finally seem to get the basic idea of how animation works in matplotlib.
However, I still cannot figure out how to make the arrow blink in one place while pointing at the red circle (both circles remain static and are not animated in any way).
I thought I could make it work just by duplicating the coordinates of the arrow, giving two frames with identical coordinates, but unfortunately that didn't work.
I tried the examples shown here: <a href="https://stackoverflow.com/questions/40033948/arrow-animation-in-python">Arrow animation in Python</a>
But ax.clear() doesn't suit my needs, and ax.patches.remove(patch) is not working for some reason:
the arrow remains static and I get an "IndexError: list index out of range" error.
I appreciate any advice!</p>
<p>Output:
<a href="https://i.sstatic.net/E2XIu.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E2XIu.gif" alt="enter image description here" /></a></p>
<p>Example of my code:</p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt
from matplotlib.animation import FuncAnimation
from matplotlib import animation
import matplotlib.patches as patches
fig = plt.figure()
ax = fig.gca()
# Axes labels and title are established
ax = fig.gca()
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_ylim(-20, 20)
ax.set_xlim(-20, 20)
# Drawing static circles:
circle1 = plt.Circle((10, 10), 3, color='red')
circle2 = plt.Circle((-10, -4), 2, color='blue')
ax.add_patch(circle1)
ax.add_patch(circle2)
# Coordinates of the arrow:
x = np.array([2, 2])
y = np.array([2, 2])
dx = x*2
dy = y*2
patch = patches.Arrow(x[0], y[0], dx[0], dy[0])
def init():
ax.add_patch(patch)
return patch,
def animate(i):
global patch
ax.patches.remove(patch)
patch = patches.Arrow(x[i], y[i], dx[i], dy[i])
ax.add_patch(patch)
return patch,
anim = FuncAnimation(fig, animate, init_func=init,
frames=len(x), interval=200)
anim.save('222c.gif', writer='pillow')
plt.show()
plt.close()
</code></pre>
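<p>One way to get the blinking effect (a sketch, assuming "blinking" means toggling visibility each frame): since the arrow never moves, draw it once and flip its visibility inside <code>animate</code>, instead of removing and re-adding the patch. Mutating <code>ax.patches</code> directly is not supported in recent Matplotlib versions; <code>patch.remove()</code> is the supported removal API, but toggling visibility sidesteps removal entirely:</p>

```python
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib.animation import FuncAnimation

fig, ax = plt.subplots()
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_xlim(-20, 20)
ax.set_ylim(-20, 20)

# Static circles, as in the question
ax.add_patch(plt.Circle((10, 10), 3, color='red'))
ax.add_patch(plt.Circle((-10, -4), 2, color='blue'))

# Draw the arrow once; its position never changes
arrow = patches.Arrow(2, 2, 4, 4)
ax.add_patch(arrow)

def animate(i):
    # Blink: visible on even frames, hidden on odd frames
    arrow.set_visible(i % 2 == 0)
    return arrow,

anim = FuncAnimation(fig, animate, frames=2, interval=200)
# anim.save('blink.gif', writer='pillow')  # same save call as before
```

<p>Because the patch is created exactly once, there is nothing to remove and no index to go out of range.</p>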
|
<python><matplotlib><matplotlib-animation>
|
2023-06-23 09:15:47
| 1
| 333
|
muted_buddy
|
76,538,612
| 7,347,925
|
Best way to parse lat and lon string
|
<p>I want to convert one string, which contains lat and lon values, to two floats. However, the two values may be separated by a comma, a comma plus a space, a single space, or multiple spaces, such as <code>28.111,77.222</code>, <code>28.111, 77.222</code>, <code>28.111 77.222</code>, <code>28.111  77.222</code>.</p>
<p>I have tried this method:</p>
<pre><code>import re
test = '28.111,77.222'
res = re.findall(r'\d+', test)
lat = res[0]
lon = res[1]
</code></pre>
<p>It only captures the integer parts (splitting <code>28.111</code> into <code>28</code> and <code>111</code>). How can I match float numbers in cases like this?</p>
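<p>A pattern that also captures decimals (a sketch; it assumes the latitude comes first and allows an optional sign):</p>

```python
import re

def parse_latlon(s):
    # -?\d+(?:\.\d+)? matches an optional sign, digits, and an optional
    # decimal part, so "28.111" is captured as one token instead of two.
    nums = re.findall(r'-?\d+(?:\.\d+)?', s)
    return float(nums[0]), float(nums[1])

for s in ('28.111,77.222', '28.111, 77.222', '28.111 77.222', '28.111   77.222'):
    print(parse_latlon(s))  # (28.111, 77.222) for every variant
```

<p>The separator never needs to be parsed explicitly: the regex only extracts the two numeric tokens, whatever sits between them.</p>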
|
<python><string><split><numbers><python-re>
|
2023-06-23 09:06:04
| 1
| 1,039
|
zxdawn
|
76,538,597
| 14,143,152
|
Falcon LLM no output running on Azure Notebook
|
<p>I have been unable to get any output from the model, even after waiting 10 minutes. I am running on an Azure Notebook with a Standard_E4ds_v4 compute instance (4 cores, 32 GB).
Any assistance is appreciated.</p>
<p>Code:</p>
<pre><code>!source activate llm_env
%pip install conda
import conda
%conda install cudatoolkit
%pip install torch
%pip install einops
%pip install accelerate
%pip install transformers==4.27.4
%pip install huggingface-hub
%pip install chardet
%pip install cchardet
from transformers import AutoTokenizer, AutoModelForCausalLM, TFAutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-7b"
rrmodel = AutoModelForCausalLM.from_pretrained(model,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",)
tokenizer = AutoTokenizer.from_pretrained(model)
input_text = "What is a giraffe?"
input_ids = tokenizer.encode(input_text, return_tensors='pt')
attention_mask = torch.ones(input_ids.shape)
output = rrmodel.generate(input_ids,
attention_mask=attention_mask,
max_length=2000,
do_sample=True,
pad_token_id = 50256,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,)
#Never goes into this section
print(f"Got output: {output}")
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
</code></pre>
|
<python><azure><machine-learning><jupyter-notebook><huggingface-transformers>
|
2023-06-23 09:04:57
| 2
| 388
|
Yassin Sameh
|
76,538,567
| 529,273
|
Why does this method write a file in the same directory and not in the one specified?
|
<p>There are images in E:/images/projects/1/extracts. extracts has a subdirectory called cropped. When I run the code, the .bmp files are generated in the extracts folder, not in the cropped subfolder. I am running under Windows.</p>
<pre><code>from PIL import Image
import os.path, sys
path = "E:/images/projects/1/extracts"
dirs = os.listdir(path)
def crop():
for item in dirs:
fullpath = os.path.join(path,item) #corrected
if os.path.isfile(fullpath):
im = Image.open(fullpath)
f, e = os.path.splitext(fullpath)
w, h = im.size
imCrop = im.crop((0, 30, w, h-90))
imCrop.save(os.path.join(path,"cropped",f + '.bmp'), "BMP", quality=100)
crop()
</code></pre>
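<p>The likely culprit, shown in isolation: <code>f</code> from <code>os.path.splitext(fullpath)</code> still contains the directory, and <code>os.path.join</code> discards earlier components when a later argument is an absolute path (on Windows, one carrying a drive letter), so the output lands next to the originals. Joining only the bare file name avoids this:</p>

```python
import os.path

base = "E:/images/projects/1/extracts"
fullpath = base + "/item.png"

f, e = os.path.splitext(fullpath)
print(f)  # E:/images/projects/1/extracts/item  -- still carries the directory

# Join the bare file name instead, so "cropped" stays in the path:
name, _ = os.path.splitext(os.path.basename(fullpath))
out = os.path.join(base, "cropped", name + ".bmp")
print(out)
```

<p>With the bare name, <code>imCrop.save(out, "BMP")</code> writes into the cropped subfolder as intended.</p>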
|
<python>
|
2023-06-23 09:01:04
| 1
| 4,510
|
CarbonMan
|
76,538,523
| 13,396,497
|
Python: find multiple sets in a dataframe based on a condition and read the first and last line of every set
|
<p>I have a CSV file (two different samples):</p>
<p>Sample-1 -</p>
<pre><code> Date-Time Func Byte
2023/01/01 16:45:13 APP 0
2023/01/01 16:57:08 APP 1
2023/01/01 17:10:44 APP 5
2023/01/01 17:11:04 APP 0
2023/01/01 17:12:24 APP 0
2023/01/01 17:15:27 APP 4
2023/01/01 17:16:08 APP 4
</code></pre>
<p>Sample-2 -</p>
<pre><code> Date-Time Func Byte
2023/01/01 16:45:13 APP 4
2023/01/01 16:57:08 APP 1
2023/01/01 17:10:44 APP 5
2023/01/01 17:11:04 APP 0
2023/01/01 17:12:24 APP 0
2023/01/01 17:15:27 APP 4
2023/01/01 17:16:08 APP 4
2023/01/01 17:16:20 APP 0
</code></pre>
<p>I want to get the start time (when the Byte value first becomes non-zero) and the end time (when the Byte value is last non-zero),
and leave the end time empty if the non-zero run extends to the last line of the file:</p>
<p>Output from Sample 1 -</p>
<pre><code>Date/Time(Start) Func Date/Time(End)
2023/01/01 16:57:08 APP 2023/01/01 17:10:44
2023/01/01 17:15:27 APP
</code></pre>
<p>Output from Sample 2 -</p>
<pre><code>Date/Time(Start) Func Date/Time(End)
2023/01/01 16:45:13 APP 2023/01/01 17:10:44
2023/01/01 17:15:27 APP 2023/01/01 17:16:08
</code></pre>
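<p>A sketch of one approach (column names as in the samples): flag the non-zero rows, number each contiguous run with a shift/cumsum pattern, then take the first and last timestamp of every run, blanking the end time when the run reaches the final row:</p>

```python
import pandas as pd

# Sample 1 from the question
df = pd.DataFrame({
    "Date-Time": ["2023/01/01 16:45:13", "2023/01/01 16:57:08",
                  "2023/01/01 17:10:44", "2023/01/01 17:11:04",
                  "2023/01/01 17:12:24", "2023/01/01 17:15:27",
                  "2023/01/01 17:16:08"],
    "Func": ["APP"] * 7,
    "Byte": [0, 1, 5, 0, 0, 4, 4],
})

nz = df["Byte"].ne(0)                 # True where Byte is non-zero
run_id = (nz != nz.shift()).cumsum()  # new id each time nz flips
runs = (df[nz].groupby(run_id[nz])
              .agg(start=("Date-Time", "first"),
                   Func=("Func", "first"),
                   end=("Date-Time", "last"))
              .reset_index(drop=True))

if nz.iloc[-1]:                       # run reaches the last line -> blank end
    runs.loc[runs.index[-1], "end"] = ""

print(runs)
```

<p>For Sample 2 the same code yields a non-empty end time on the second run, because a trailing zero row closes it.</p>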
|
<python><pandas><datetime>
|
2023-06-23 08:54:39
| 2
| 347
|
RKIDEV
|
76,538,350
| 12,131,472
|
How to convert a JSON file to a desired dataframe with Python
|
<p>I have a JSON file in a local folder; here is its content:</p>
<pre><code> [
{
"id": "TD3$-FFA",
"shortCode": "TD3$-FFA",
"dataSet": {
"id": "TD3C",
"shortCode": "TD3C",
"shortDescription": "Dirty Middle East Gulf to China",
"displayGroup": "BDTI",
"datumUnit": "Worldscale",
"datumPrecision": 2,
"data": [
{
"value": 53.59,
"date": "2023-06-09"
},
{
"value": 67.95,
"date": "2023-06-12"
},
{
"value": 73.77,
"date": "2023-06-13"
}
],
"apiIdentifier": "RDSSYGSJBFEV9P2FLSCXGQC3510G2EGE"
},
"datumUnit": "$/mt",
"datumPrecision": 3,
"projectionStartOn": "2010-05-10T00:00:00",
"projectionEndOn": "2023-06-26T00:00:00",
"groupings": [
{
"date": "2023-06-09T00:00:00",
"groups": [
{
"periodType": "m",
"projections": [
{
"identifier": "TD3$BALMO",
"period": "Jun 23",
"value": 14.483,
"validFrom": "2023-06-01",
"validTo": "2023-06-30",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-09"
},
{
"identifier": "TD3$CURMON",
"period": "Jun 23",
"value": 13.542,
"validFrom": "2023-06-01",
"validTo": "2023-06-30",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-09"
},
{
"identifier": "TD3$+1_M",
"period": "Jul 23",
"value": 12.729,
"validFrom": "2023-07-01",
"validTo": "2023-07-31",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-09"
},
{
"identifier": "TD3$+2_M",
"period": "Aug 23",
"value": 11.058,
"validFrom": "2023-08-01",
"validTo": "2023-08-31",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-09"
},
{
"identifier": "TD3$+3_M",
"period": "Sep 23",
"value": 11.361,
"validFrom": "2023-09-01",
"validTo": "2023-09-29",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-09"
},
{
"identifier": "TD3$+4_M",
"period": "Oct 23",
"value": 12.308,
"validFrom": "2023-10-01",
"validTo": "2023-10-31",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-09"
},
{
"identifier": "TD3$+5_M",
"period": "Nov 23",
"value": 13.168,
"validFrom": "2023-11-01",
"validTo": "2023-11-30",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-09"
}
]
},
{
"periodType": "q",
"projections": [
{
"identifier": "TD3$CURQ",
"period": "Q2 23",
"value": 13.805,
"validFrom": "2023-04-01",
"validTo": "2023-06-30",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-09"
},
{
"identifier": "TD3$+1Q",
"period": "Q3 23",
"value": 11.716,
"validFrom": "2023-07-01",
"validTo": "2023-09-29",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-09"
},
{
"identifier": "TD3$+2Q",
"period": "Q4 23",
"value": 14.151,
"validFrom": "2023-10-01",
"validTo": "2023-12-22",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-09"
},
{
"identifier": "TD3$+3Q",
"period": "Q1 24",
"value": 12.984,
"validFrom": "2024-01-01",
"validTo": "2024-03-29",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-09"
},
{
"identifier": "TD3$+4Q",
"period": "Q2 24",
"value": 11.41,
"validFrom": "2024-04-01",
"validTo": "2024-06-28",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-09"
},
{
"identifier": "TD3$+5Q",
"period": "Q3 24",
"value": 11.117,
"validFrom": "2024-07-01",
"validTo": "2024-09-30",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-09"
}
]
},
{
"periodType": "y",
"projections": [
{
"identifier": "TD3$+1CAL",
"period": "Cal 24",
"value": 12.522,
"validFrom": "2024-01-01",
"validTo": "2024-12-24",
"nextRolloverDate": "2023-12-22",
"archiveDate": "2023-06-09"
},
{
"identifier": "TD3$+2CAL",
"period": "Cal 25",
"value": 11.974,
"validFrom": "2025-01-01",
"validTo": "2025-12-24",
"nextRolloverDate": "2023-12-22",
"archiveDate": "2023-06-09"
},
{
"identifier": "TD3$+3CAL",
"period": "Cal 26",
"value": 11.63,
"validFrom": "2026-01-01",
"validTo": "2026-12-24",
"nextRolloverDate": "2023-12-22",
"archiveDate": "2023-06-09"
},
{
"identifier": "TD3$+4CAL",
"period": "Cal 27",
"value": 11.554,
"validFrom": "2027-01-01",
"validTo": "2027-12-24",
"nextRolloverDate": "2023-12-22",
"archiveDate": "2023-06-09"
}
]
}
]
},
{
"date": "2023-06-12T00:00:00",
"groups": [
{
"periodType": "m",
"projections": [
{
"identifier": "TD3$BALMO",
"period": "Jun 23",
"value": 16.409,
"validFrom": "2023-06-01",
"validTo": "2023-06-30",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-12"
},
{
"identifier": "TD3$CURMON",
"period": "Jun 23",
"value": 14.864,
"validFrom": "2023-06-01",
"validTo": "2023-06-30",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-12"
},
{
"identifier": "TD3$+1_M",
"period": "Jul 23",
"value": 14.259,
"validFrom": "2023-07-01",
"validTo": "2023-07-31",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-12"
},
{
"identifier": "TD3$+2_M",
"period": "Aug 23",
"value": 12.182,
"validFrom": "2023-08-01",
"validTo": "2023-08-31",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-12"
},
{
"identifier": "TD3$+3_M",
"period": "Sep 23",
"value": 12.378,
"validFrom": "2023-09-01",
"validTo": "2023-09-29",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-12"
},
{
"identifier": "TD3$+4_M",
"period": "Oct 23",
"value": 13.092,
"validFrom": "2023-10-01",
"validTo": "2023-10-31",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-12"
},
{
"identifier": "TD3$+5_M",
"period": "Nov 23",
"value": 14.027,
"validFrom": "2023-11-01",
"validTo": "2023-11-30",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-12"
}
]
},
{
"periodType": "q",
"projections": [
{
"identifier": "TD3$CURQ",
"period": "Q2 23",
"value": 14.246,
"validFrom": "2023-04-01",
"validTo": "2023-06-30",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-12"
},
{
"identifier": "TD3$+1Q",
"period": "Q3 23",
"value": 12.94,
"validFrom": "2023-07-01",
"validTo": "2023-09-29",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-12"
},
{
"identifier": "TD3$+2Q",
"period": "Q4 23",
"value": 14.677,
"validFrom": "2023-10-01",
"validTo": "2023-12-22",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-12"
},
{
"identifier": "TD3$+3Q",
"period": "Q1 24",
"value": 13.124,
"validFrom": "2024-01-01",
"validTo": "2024-03-29",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-12"
},
{
"identifier": "TD3$+4Q",
"period": "Q2 24",
"value": 11.6,
"validFrom": "2024-04-01",
"validTo": "2024-06-28",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-12"
},
{
"identifier": "TD3$+5Q",
"period": "Q3 24",
"value": 11.286,
"validFrom": "2024-07-01",
"validTo": "2024-09-30",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-12"
}
]
},
{
"periodType": "y",
"projections": [
{
"identifier": "TD3$+1CAL",
"period": "Cal 24",
"value": 12.688,
"validFrom": "2024-01-01",
"validTo": "2024-12-24",
"nextRolloverDate": "2023-12-22",
"archiveDate": "2023-06-12"
},
{
"identifier": "TD3$+2CAL",
"period": "Cal 25",
"value": 12.103,
"validFrom": "2025-01-01",
"validTo": "2025-12-24",
"nextRolloverDate": "2023-12-22",
"archiveDate": "2023-06-12"
},
{
"identifier": "TD3$+3CAL",
"period": "Cal 26",
"value": 11.723,
"validFrom": "2026-01-01",
"validTo": "2026-12-24",
"nextRolloverDate": "2023-12-22",
"archiveDate": "2023-06-12"
},
{
"identifier": "TD3$+4CAL",
"period": "Cal 27",
"value": 11.661,
"validFrom": "2027-01-01",
"validTo": "2027-12-24",
"nextRolloverDate": "2023-12-22",
"archiveDate": "2023-06-12"
}
]
}
]
},
{
"date": "2023-06-13T00:00:00",
"groups": [
{
"periodType": "m",
"projections": [
{
"identifier": "TD3$BALMO",
"period": "Jun 23",
"value": 18.077,
"validFrom": "2023-06-01",
"validTo": "2023-06-30",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-13"
},
{
"identifier": "TD3$CURMON",
"period": "Jun 23",
"value": 15.922,
"validFrom": "2023-06-01",
"validTo": "2023-06-30",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-13"
},
{
"identifier": "TD3$+1_M",
"period": "Jul 23",
"value": 14.388,
"validFrom": "2023-07-01",
"validTo": "2023-07-31",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-13"
},
{
"identifier": "TD3$+2_M",
"period": "Aug 23",
"value": 12.258,
"validFrom": "2023-08-01",
"validTo": "2023-08-31",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-13"
},
{
"identifier": "TD3$+3_M",
"period": "Sep 23",
"value": 12.478,
"validFrom": "2023-09-01",
"validTo": "2023-09-29",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-13"
},
{
"identifier": "TD3$+4_M",
"period": "Oct 23",
"value": 13.327,
"validFrom": "2023-10-01",
"validTo": "2023-10-31",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-13"
},
{
"identifier": "TD3$+5_M",
"period": "Nov 23",
"value": 14.12,
"validFrom": "2023-11-01",
"validTo": "2023-11-30",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-13"
}
]
},
{
"periodType": "q",
"projections": [
{
"identifier": "TD3$CURQ",
"period": "Q2 23",
"value": 14.599,
"validFrom": "2023-04-01",
"validTo": "2023-06-30",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-13"
},
{
"identifier": "TD3$+1Q",
"period": "Q3 23",
"value": 13.042,
"validFrom": "2023-07-01",
"validTo": "2023-09-29",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-13"
},
{
"identifier": "TD3$+2Q",
"period": "Q4 23",
"value": 14.567,
"validFrom": "2023-10-01",
"validTo": "2023-12-22",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-13"
},
{
"identifier": "TD3$+3Q",
"period": "Q1 24",
"value": 13.159,
"validFrom": "2024-01-01",
"validTo": "2024-03-29",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-13"
},
{
"identifier": "TD3$+4Q",
"period": "Q2 24",
"value": 11.624,
"validFrom": "2024-04-01",
"validTo": "2024-06-28",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-13"
},
{
"identifier": "TD3$+5Q",
"period": "Q3 24",
"value": 11.276,
"validFrom": "2024-07-01",
"validTo": "2024-09-30",
"nextRolloverDate": "2023-06-30",
"archiveDate": "2023-06-13"
}
]
},
{
"periodType": "y",
"projections": [
{
"identifier": "TD3$+1CAL",
"period": "Cal 24",
"value": 12.7,
"validFrom": "2024-01-01",
"validTo": "2024-12-24",
"nextRolloverDate": "2023-12-22",
"archiveDate": "2023-06-13"
},
{
"identifier": "TD3$+2CAL",
"period": "Cal 25",
"value": 12.1,
"validFrom": "2025-01-01",
"validTo": "2025-12-24",
"nextRolloverDate": "2023-12-22",
"archiveDate": "2023-06-13"
},
{
"identifier": "TD3$+3CAL",
"period": "Cal 26",
"value": 11.734,
"validFrom": "2026-01-01",
"validTo": "2026-12-24",
"nextRolloverDate": "2023-12-22",
"archiveDate": "2023-06-13"
},
{
"identifier": "TD3$+4CAL",
"period": "Cal 27",
"value": 11.675,
"validFrom": "2027-01-01",
"validTo": "2027-12-24",
"nextRolloverDate": "2023-12-22",
"archiveDate": "2023-06-13"
}
]
}
]
}
],
"apiIdentifier": "RPSVJJTJBXBCAF2FAG2PQAVYN4UGQ9LN"
}
]
</code></pre>
<p>What is the easiest way to convert this to the dataframe below? The periodType column takes the value m, q, or y, meaning month, quarter, or year respectively.</p>
<pre><code>date periodType identifier period value validFrom validTo archiveDate
2023-06-09T00:00:00 m TD3$BALMO Jun 23 14.483 2023-06-01 2023-06-30 2023-06-09
2023-06-09T00:00:00 m TD3$CURMON Jun 23 13.542 2023-06-01 2023-06-30 2023-06-09
2023-06-09T00:00:00 m TD3$+1_M Jul 23 12.729 2023-07-01 2023-07-31 2023-06-09
2023-06-09T00:00:00 m TD3$+2_M Aug 23 11.058 2023-08-01 2023-08-31 2023-06-09
2023-06-09T00:00:00 m TD3$+3_M Sep 23 11.361 2023-09-01 2023-09-29 2023-06-09
2023-06-09T00:00:00 m TD3$+4_M Oct 23 12.308 2023-10-01 2023-10-31 2023-06-09
2023-06-09T00:00:00 m TD3$+5_M Nov 23 13.168 2023-11-01 2023-11-30 2023-06-09
2023-06-09T00:00:00 q TD3$CURQ Q2 23 13.805 2023-04-01 2023-06-30 2023-06-09
2023-06-09T00:00:00 q TD3$+1Q Q3 23 11.716 2023-07-01 2023-09-29 2023-06-09
2023-06-09T00:00:00 q TD3$+2Q Q4 23 14.151 2023-10-01 2023-12-22 2023-06-09
2023-06-09T00:00:00 q TD3$+3Q Q1 24 12.984 2024-01-01 2024-03-29 2023-06-09
2023-06-09T00:00:00 q TD3$+4Q Q2 24 11.41 2024-04-01 2024-06-28 2023-06-09
2023-06-09T00:00:00 q TD3$+5Q Q3 24 11.117 2024-07-01 2024-09-30 2023-06-09
2023-06-09T00:00:00 y TD3$+1CAL Cal 24 12.522 2024-01-01 2024-12-24 2023-06-09
2023-06-09T00:00:00 y TD3$+2CAL Cal 25 11.974 2025-01-01 2025-12-24 2023-06-09
2023-06-09T00:00:00 y TD3$+3CAL Cal 26 11.63 2026-01-01 2026-12-24 2023-06-09
2023-06-09T00:00:00 y TD3$+4CAL Cal 27 11.554 2027-01-01 2027-12-24 2023-06-09
2023-06-12T00:00:00 m TD3$BALMO Jun 23 16.409 2023-06-01 2023-06-30 2023-06-12
2023-06-12T00:00:00 m TD3$CURMON Jun 23 14.864 2023-06-01 2023-06-30 2023-06-12
2023-06-12T00:00:00 m TD3$+1_M Jul 23 14.259 2023-07-01 2023-07-31 2023-06-12
2023-06-12T00:00:00 m TD3$+2_M Aug 23 12.182 2023-08-01 2023-08-31 2023-06-12
2023-06-12T00:00:00 m TD3$+3_M Sep 23 12.378 2023-09-01 2023-09-29 2023-06-12
2023-06-12T00:00:00 m TD3$+4_M Oct 23 13.092 2023-10-01 2023-10-31 2023-06-12
2023-06-12T00:00:00 m TD3$+5_M Nov 23 14.027 2023-11-01 2023-11-30 2023-06-12
2023-06-12T00:00:00 q TD3$CURQ Q2 23 14.246 2023-04-01 2023-06-30 2023-06-12
2023-06-12T00:00:00 q TD3$+1Q Q3 23 12.94 2023-07-01 2023-09-29 2023-06-12
2023-06-12T00:00:00 q TD3$+2Q Q4 23 14.677 2023-10-01 2023-12-22 2023-06-12
2023-06-12T00:00:00 q TD3$+3Q Q1 24 13.124 2024-01-01 2024-03-29 2023-06-12
2023-06-12T00:00:00 q TD3$+4Q Q2 24 11.6 2024-04-01 2024-06-28 2023-06-12
2023-06-12T00:00:00 q TD3$+5Q Q3 24 11.286 2024-07-01 2024-09-30 2023-06-12
2023-06-12T00:00:00 y TD3$+1CAL Cal 24 12.688 2024-01-01 2024-12-24 2023-06-12
2023-06-12T00:00:00 y TD3$+2CAL Cal 25 12.103 2025-01-01 2025-12-24 2023-06-12
2023-06-12T00:00:00 y TD3$+3CAL Cal 26 11.723 2026-01-01 2026-12-24 2023-06-12
2023-06-12T00:00:00 y TD3$+4CAL Cal 27 11.661 2027-01-01 2027-12-24 2023-06-12
</code></pre>
<p>Thanks so much!</p>
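<p>A possible approach is <code>pandas.json_normalize</code> with a nested <code>record_path</code> and <code>meta</code> fields. The miniature payload below is hypothetical: the outer <code>"date"</code> key and the <code>"curves"</code> key holding the periodType groups are assumptions based on the fragment and the desired output, so adjust the names to the real JSON.</p>

```python
import pandas as pd

# Hypothetical miniature of the payload; "date" and "curves" are
# assumed key names inferred from the fragment and the target table.
data = [
    {
        "date": "2023-06-09T00:00:00",
        "curves": [
            {
                "periodType": "m",
                "projections": [
                    {"identifier": "TD3$BALMO", "period": "Jun 23", "value": 14.483,
                     "validFrom": "2023-06-01", "validTo": "2023-06-30",
                     "nextRolloverDate": "2023-06-30", "archiveDate": "2023-06-09"},
                ],
            },
            {
                "periodType": "q",
                "projections": [
                    {"identifier": "TD3$CURQ", "period": "Q2 23", "value": 13.805,
                     "validFrom": "2023-04-01", "validTo": "2023-06-30",
                     "nextRolloverDate": "2023-06-30", "archiveDate": "2023-06-09"},
                ],
            },
        ],
        "apiIdentifier": "RPSVJJTJBXBCAF2FAG2PQAVYN4UGQ9LN",
    }
]

# Flatten the doubly nested list: one row per projection, carrying the
# outer date and each group's periodType along as meta columns.
df = pd.json_normalize(
    data,
    record_path=["curves", "projections"],
    meta=["date", ["curves", "periodType"]],
).rename(columns={"curves.periodType": "periodType"})

df = df[["date", "periodType", "identifier", "period", "value",
         "validFrom", "validTo", "archiveDate"]]
print(df)
```

<p>With the daily files, you can build one such frame per day and <code>pd.concat</code> them into the long table shown above.</p>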
|
<python><json><pandas>
|
2023-06-23 08:31:22
| 1
| 447
|
neutralname
|
76,538,285
| 14,282,714
|
Check if pdf document is encrypted with Spacy
|
<p>I would like to check if a PDF document is encrypted. We could use the <code>.is_encrypted</code> property from <code>pypdf</code> like this:</p>
<pre><code>import pypdf
if pypdf.PdfReader("test_doc_encrypted.pdf").is_encrypted:
print("Yes")
else:
print("No")
</code></pre>
<p>Output:</p>
<pre><code>Yes
</code></pre>
<p>But I would like to only use <code>spacy</code> for this. So I was wondering if anyone knows if it is possible to check if a document is encrypted using <code>spacy</code>?</p>
<p>To make it reproducible, here I share the encrypted pdf document: <a href="https://drive.google.com/file/d/1pstF7TxjvFW5mV264G6QW_53KGRIs2vO/view?usp=sharing" rel="nofollow noreferrer">encrypted document</a></p>
|
<python><spacy>
|
2023-06-23 08:25:06
| 2
| 42,724
|
Quinten
|
76,538,273
| 10,153,759
|
Spotfire Weighted Median value
|
<p>Is there any function that returns a weighted median value?</p>
<p>The input is two columns, e.g.
weight = [100, 210, 30, 50]
values = [1, 2, 3, 4]</p>
<p>I am looking for something like WeightedMedian(w = weight, v = values),</p>
<p>or can you help me construct code for that under Data --> Data Function Properties --> Expression Functions --> New,</p>
<p>or a formula in a Custom Expression?</p>
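<p>I am not aware of a built-in Spotfire expression for this, but as a sketch of what a Python/TERR data function could compute — a plain-Python weighted median (the value at which the cumulative weight first reaches half of the total), with no Spotfire-specific API assumed:</p>

```python
def weighted_median(values, weights):
    """Return the value at which the cumulative weight first reaches
    half of the total weight (one common weighted-median definition)."""
    pairs = sorted(zip(values, weights))          # sort by value
    total = sum(w for _, w in pairs)
    cumulative = 0.0
    for v, w in pairs:
        cumulative += w
        if cumulative >= total / 2:
            return v

weight = [100, 210, 30, 50]
values = [1, 2, 3, 4]
print(weighted_median(values, weight))  # prints 2: 100 + 210 >= 390 / 2
```

<p>In a Spotfire data function you would map the two input columns to <code>values</code> and <code>weights</code> and return the result as an output parameter.</p>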
|
<python><r><spotfire><median><weighted>
|
2023-06-23 08:23:48
| 2
| 379
|
Easyhyum
|
76,538,245
| 9,730,403
|
Unable to access global variable in main function | python
|
<p>I wanted to understand how global scope works in an <code>if __name__ == '__main__':</code> block.<br>
E.g.:</p>
<pre><code>x = 2
if __name__ == '__main__':
global x
x = 10
print(x)
</code></pre>
<p>The above will throw an exception:</p>
<pre><code>global x
^^^^^^^^
SyntaxError: name 'x' is assigned to before global declaration
</code></pre>
<p>But, when I'm creating an new function and calling global in it then its working fine.</p>
<pre><code>x=10
def test():
global x
x=2
print(x)
pass
if __name__ == '__main__':
test()
print(x)
</code></pre>
<p>It works fine without any error. I'm just trying to understand what's happening here:
when the Python file is run, it enters the <code>if __name__ == '__main__':</code> block and then accesses the global variable.</p>
|
<python><program-entry-point>
|
2023-06-23 08:20:02
| 2
| 1,124
|
Tushar
|
76,538,222
| 7,318,120
|
OSError: [WinError 433] A device which does not exist was specified: 'csv_files'
|
<p>I am trying to troubleshoot this problem, which appears to be intermittent. The function is called once per hour.</p>
<ul>
<li>sometimes the code runs for several days (hundreds of loops)</li>
<li>other times it runs for just a day (20 or so loops)</li>
</ul>
<pre><code>OSError: [WinError 433] A device which does not exist was specified: 'csv_files'
</code></pre>
<p>I trace the code back to this function, which sometimes returns an empty dataframe.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
def get_existing_df(symbol:str)-> pd.DataFrame:
'''
check if file exists & get the the existing dataframe for a given symbol
args:
symbol (str): the ticker symbol
return (df, last_trade):
df: (DataFrame) of the last trades (empty df if no trades)
'''
file_name = 'csv_files/all_trades_' + symbol + '.csv'
if os.path.exists(file_name):
df = pd.read_csv(file_name, index_col=0)
else:
df = pd.DataFrame()
return df
</code></pre>
<p>The code uses a relative path. I can also confirm the obvious:</p>
<ol>
<li>the path exists</li>
<li>the file exists</li>
<li>I believe that I have permission (the full path is my local Google Drive).</li>
</ol>
<p>But every now and again the method gives me an empty dataframe, which breaks the code (with the above error).</p>
<p>What is causing this, and how can I prevent it?</p>
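<p>WinError 433 usually points at the storage device itself — here the Google Drive virtual drive briefly disappearing while syncing — rather than at the code. A pragmatic workaround is to retry the read a few times before giving up; the retry count and delay below are arbitrary choices, not recommendations:</p>

```python
import os
import time
import pandas as pd

def get_existing_df(symbol: str, retries: int = 3, delay: float = 5.0) -> pd.DataFrame:
    """Read the symbol's CSV, retrying on transient OS errors
    (e.g. the Google Drive virtual device briefly vanishing)."""
    file_name = 'csv_files/all_trades_' + symbol + '.csv'
    for attempt in range(retries):
        try:
            if os.path.exists(file_name):
                return pd.read_csv(file_name, index_col=0)
            return pd.DataFrame()
        except OSError:
            if attempt == retries - 1:
                raise  # out of retries: surface the error to the caller
            time.sleep(delay)
```

<p>A longer-term fix is to write to a plain local directory and let the Drive client sync it separately, so reads never touch the virtual device.</p>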
|
<python><pandas><read-csv>
|
2023-06-23 08:15:32
| 1
| 6,075
|
darren
|
76,538,214
| 4,847,250
|
How calling pick_event only once if I click on several lines in matplotlib?
|
<p>I'm trying to call a function on a line click in a matplotlib figure.
My issue is that if I click where multiple lines cross, the pick_event calls the function several times. I need the function to be called only once in that case.</p>
<p>Here is a minimal example:</p>
<pre><code> from matplotlib import pyplot as plt
from matplotlib.lines import Line2D
import numpy as np
x = np.arange(100)
y = np.sin(x/10)
fig, ax1 = plt.subplots()
line, = ax1.plot(x, y, picker=10)
line, = ax1.plot(x, y, picker=10)
line, = ax1.plot(x, y, picker=10)
def pick_handler(event):
print("Pick_handler called!")
if event.mouseevent.button==1 and isinstance(event.artist, Line2D):
print("Do the thing.")
fig.canvas.mpl_connect('pick_event', pick_handler)
plt.show()
</code></pre>
<p>If I click on the lines (3 superposed lines), the function is called three times instead of once.</p>
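<p>One way to deduplicate is to remember the <code>MouseEvent</code>: every pick event fired by a single click carries the same <code>mouseevent</code> object, so the handler can act only on the first one it sees. A sketch (using the Agg backend so it runs headless; the dedup guard is the point):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
from matplotlib import pyplot as plt
from matplotlib.lines import Line2D
import numpy as np

x = np.arange(100)
y = np.sin(x / 10)
fig, ax1 = plt.subplots()
for _ in range(3):
    ax1.plot(x, y, picker=10)  # three superposed pickable lines

handled = []          # record of clicks actually acted upon
_last_mouseevent = None

def pick_handler(event):
    global _last_mouseevent
    # All pick events triggered by one click share the same MouseEvent
    # instance, so skip every repeat after the first.
    if event.mouseevent is _last_mouseevent:
        return
    _last_mouseevent = event.mouseevent
    if event.mouseevent.button == 1 and isinstance(event.artist, Line2D):
        handled.append(event.mouseevent)
        print("Do the thing.")

fig.canvas.mpl_connect('pick_event', pick_handler)
```

<p>The identity check (<code>is</code>) is deliberate: a new click produces a new <code>MouseEvent</code>, so the handler fires again for the next click.</p>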
|
<python><matplotlib><click>
|
2023-06-23 08:13:35
| 1
| 5,207
|
ymmx
|
76,538,202
| 3,607,738
|
Langchain agent streams its though process into websocket
|
<p>I am using Flask and Langchain as a QA microservice, and depending on some parameters from the frontend (react) and backend (express), it would call either:</p>
<ul>
<li>a simple question service (no index, no agent)</li>
<li>an indexed question service (ChromaDB index, no agent)</li>
<li>an agent service (no index, ZapierNLA agent)</li>
</ul>
<p>We stream the responses using Websockets (we also have a REST API alternative if we don't want to stream the answers), and here is the implementation of a custom callback handler on my side of things:</p>
<pre class="lang-py prettyprint-override"><code>class CustomHandler(StreamingStdOutCallbackHandler):
user_id = None
def __init__(self, user_id):
self.user_id = user_id
def on_llm_new_token(self, token: str, **kwargs):
emit('result', {'message_id': self.user_id, 'data': token})
if token == '':
emit('result', {'message_id': self.user_id, 'data': 'STREAM_END'})
</code></pre>
<p>This is how we add it to the chat model:</p>
<pre class="lang-py prettyprint-override"><code>handler = CustomHandler(user_id)
llm = ChatOpenAI(model_name=model_name, temperature=temperature, streaming=streaming,
callback_manager=CallbackManager([handler]), verbose=True)
# Somehow, this does not work if verbose=False or if we ommit the verbose
</code></pre>
<p>For the first two services, the stream works as intended, but the agent service streams his whole thought process, like this:</p>
<pre><code>I need to use the Gmail: Create Draft tool to create a draft email with the ten greek god names listed in the body. Action: Gmail: Create Draft Action Input: Body: "1. Zeus\n2. Hera\n3. Poseidon\n4. Demeter\n5. Athena\n6. Apollo\n7. Artemis\n8. Ares\n9. Aphrodite\n10. Hephaestus", To: my own email address, Subject: "List of Ten Greek Gods"The draft email has been created successfully. Action: None Action Input: NoneI need to use the Gmail: Send Email tool to send the draft email I just created. Action: Gmail: Send Email Action Input: Body: "1. Zeus\n2. Hera\n3. Poseidon\n4. Demeter\n5. Athena\n6. Apollo\n7. Artemis\n8. Ares\n9. Aphrodite\n10. Hephaestus", To: my own email address, Subject: "List of Ten Greek Gods"I need to go back to using the Gmail: Create Draft tool and add in the parameter for "To" to send the email to myself. Action: Gmail: Create Draft Action Input: Body: "1. Zeus\n2. Hera\n3. Poseidon\n4. Demeter\n5. Athena\n6. Apollo\n7. Artemis\n8. Ares\n9. Aphrodite\n10. Hephaestus", To: my own email address, Subject: "List of Ten Greek Gods"The draft email has been created and I can now send it to myself by using Gmail: Send Draft tool. Action: Gmail: Send Draft Action Input: Thread Id: "188e72dae1b0f2b7"I need to go back to using the Gmail: Create Draft tool and add in the parameter for "To" to send the email to myself. After that, I should use Gmail: Send Message tool to send the email. Action: Gmail: Create Draft Action Input: Body: "1. Zeus\n2. Hera\n3. Poseidon\n4. Demeter\n5. Athena\n6. Apollo\n7. Artemis\n8. Ares\n9. Aphrodite\n10. Hephaestus", To: my own email address, Subject: "List of Ten Greek Gods"Now that the draft email has been created with the correct parameters, I can use the Gmail: Send Message tool to send the email to myself. Action: Gmail: Send Message Action Input: Id: "188e72dec0ec5524"I need to review the list of available tools and find one that can send an email given a draft message Id. 
Action: None Action Input: NoneI know that the Gmail API has a function to send draft messages using the draft Id, so I can use a code snippet to accomplish this. Action: Code Execution Action Input: Use the Gmail API to send the email draft with Id "188e72dec0ec5524" to my own email addressI will need to use some external method or language to execute the code to send the email using the Gmail API. I can use a programming language like Python or a tool like Postman. Action: Python Code Execution Action Input: Use the Gmail API to send the email draft with Id "188e72dec0ec5524" to my own email addressI can use a third-party integration tool like Zapier or IFTTT to automate the process of sending the email draft from Gmail to my own email address. Action: Zapier Integration Action Input: Set up a Zapier integration to automatically send the email draft with Id "188e72dec0ec5524" to my own email addressSince I am not able to use any of the tools provided to directly send the email draft, I will have to manually copy and paste the contents of the draft and email it to myself. Final Answer: Manually copy and paste the contents of the draft and email it to myself.
</code></pre>
<p>This is the function that calls the ZapierNLA agent:</p>
<pre class="lang-py prettyprint-override"><code>def simple_order(human_message: str, system_message: str, model_name: str = 'gpt-3.5-turbo', streaming: bool = False, temperature: float = 0.6, user_id: str = None, history=None):
if history is None:
history = []
handler = CustomHandler(user_id)
llm = ChatOpenAI(model_name=model_name, temperature=temperature)
if streaming:
llm = ChatOpenAI(model_name=model_name, temperature=temperature, streaming=streaming, callback_manager=CallbackManager([handler]), verbose=True)
messages = generate_messages(history=history, system_message=system_message, human_message=human_message)
zapier = ZapierNLAWrapper()
toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)
agent = initialize_agent(tools=toolkit.get_tools(), llm=llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
return agent.run(messages)
</code></pre>
<p>I am aware that the <code>verbose=True</code> is what causes all of this. And maybe because I took a similar approach as my implementation of a simple question, I may need to tweak a few things.</p>
<p>I have already tried defining <code>ignore_agent</code> to <code>True</code> in the <code>CustomHandler</code> class, but nothing changed.</p>
<p>I have tried removing the <code>verbose</code> but nothing is being emitted and on the front, it's just showing a "typing" animation from the service</p>
<p>I have tried putting the <code>verbose</code> parameter in the <code>initialize_agent</code> call, but it gives the same result as what I have just mentioned previously.</p>
<p>How do I stream only the <code>Final Answer</code> of the agent?</p>
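<p>One common pattern is to buffer tokens in <code>on_llm_new_token</code> and start emitting only once the accumulated text contains the <code>Final Answer:</code> marker that ZERO_SHOT_REACT agents emit. The class below is a framework-agnostic sketch of that buffering idea — it deliberately does not subclass any LangChain handler, since the exact base classes vary between versions; <code>emit</code> stands in for whatever sends a chunk to the client (e.g. a socket emit):</p>

```python
class FinalAnswerStreamer:
    """Buffers LLM tokens and forwards only what follows 'Final Answer:'.

    `emit` is injected so the sketch stays framework-agnostic; wire it
    to your websocket emit (or any callable taking a string chunk).
    """

    MARKER = "Final Answer:"

    def __init__(self, emit):
        self.emit = emit
        self.buffer = ""
        self.streaming = False

    def on_llm_new_token(self, token: str, **kwargs):
        if self.streaming:
            self.emit(token)          # past the marker: stream through
            return
        self.buffer += token          # still in the thought process
        marker_pos = self.buffer.find(self.MARKER)
        if marker_pos != -1:
            self.streaming = True
            # Flush anything already received after the marker.
            tail = self.buffer[marker_pos + len(self.MARKER):]
            if tail:
                self.emit(tail)
```

<p>In a real handler you would also reset <code>buffer</code> and <code>streaming</code> in <code>on_llm_start</code>, since a ReAct agent makes several LLM calls per run and only the last one contains the final answer.</p>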
|
<python><websocket><flask-socketio><langchain>
|
2023-06-23 08:10:58
| 2
| 367
|
prout
|
76,538,170
| 3,182,496
|
VSCode debug - how to show full list contents
|
<p>When debugging Python in VSCode, does anyone know how to get it to show the full list contents instead of collapsing it?</p>
<p>There's probably a setting for this, but I can't find it. It's a pain having to remember to wrap everything you want to see in a <code>str()</code> call.</p>
<p><a href="https://i.sstatic.net/QbzPE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QbzPE.png" alt="enter image description here" /></a></p>
<p>I have the same problem in both the Debug Console and Variables panel.</p>
<p>Before anyone suggests clicking the <code>></code> to expand the values, that isn't always helpful either.</p>
<p><a href="https://i.sstatic.net/jgv3c.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jgv3c.png" alt="enter image description here" /></a></p>
<p>EDIT:</p>
<p>Minimum reproducible example (as shown in the image above). In the Debug Console, type</p>
<pre><code>mycoordinates=[[(0.0,0.0),(1.0,1.0)]]
</code></pre>
<p>Then try and show that variable</p>
<pre><code>mycoordinates
</code></pre>
<p>VSCode shows the list in the collapsed form. If you want to see the actual values, you need to do this</p>
<pre><code>str(mycoordinates)
</code></pre>
|
<python><visual-studio-code><vscode-debugger>
|
2023-06-23 08:04:55
| 0
| 1,278
|
jon_two
|
76,538,125
| 7,745,011
|
Is it possible to await multiprocessing.Process.join() in python?
|
<p>Basically, what I am asking is whether there is an "<code>await p.join()</code>" function that awaits a separate Process while letting the event loop in the main process do other things.</p>
<p>I have a python-asyncio architecture in a FastAPI application. In case I have a long running and CPU heavy operation I would like to handle it in a separate Process, something like the following toy example:</p>
<pre><code>def heavy_duty(input: InputModel) -> None:
# long running calculation here
async def process(input: InputModel) -> None:
p = Process(target=heavy_duty, args=(input,))
p.start()
await p.join() # give up event loop and wait for heavy duty to finish while main process can do other stuff
</code></pre>
<p>Would this be possible to do in the described way?</p>
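<p><code>Process.join()</code> has no awaitable variant, but you can push the blocking join into a worker thread so the event loop stays free, e.g. with <code>asyncio.to_thread</code> (Python 3.9+). A sketch with a stand-in workload:</p>

```python
import asyncio
from multiprocessing import Process

def heavy_duty(n: int) -> None:
    sum(i * i for i in range(n))  # stand-in for the CPU-heavy work

async def process(n: int) -> int:
    p = Process(target=heavy_duty, args=(n,))
    p.start()
    # join() blocks, so run it in a thread; the event loop keeps
    # serving other coroutines while the child process works.
    await asyncio.to_thread(p.join)
    return p.exitcode

if __name__ == '__main__':
    print(asyncio.run(process(100_000)))  # exit code 0 on success
```

<p>On Python < 3.9 the equivalent is <code>loop.run_in_executor(None, p.join)</code>. If you need a result back (not just completion), <code>loop.run_in_executor(ProcessPoolExecutor(), heavy_duty, input)</code> is usually cleaner than managing <code>Process</code> objects by hand.</p>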
|
<python><python-asyncio><python-multiprocessing>
|
2023-06-23 07:58:12
| 1
| 2,980
|
Roland Deschain
|
76,538,038
| 7,991,581
|
Implementing REST API digital signature
|
<p>I'm currently trying to implement a minimal REST API with digital signature.</p>
<p>I understand that the underlying concept is that the sender signs the payload with its private key using HMAC and that the receiver verifies this signature with the public key.</p>
<p>However I have some difficulties implementing a working example.</p>
<p>My API is implemented in PHP and the client in python.</p>
<p>I verify requests in PHP as follow</p>
<pre><code>public function verify_request(Request $request)
{
$method = $request->method();
$timestamp = $request->header("timestamp");
$public_key = $request->header("api-key");
$signature = $request->header("signature");
$url = $request->fullUrl();
$data = $method.$timestamp.$url;
$result = openssl_verify($data, $signature, $public_key, OPENSSL_ALGO_SHA256);
Logger::info("Result : ".$result);
return $result;
}
</code></pre>
<p>I already successfully signed requests on several other APIs using this minimal example in python</p>
<pre><code>import requests
import hashlib
import hmac
import time
public_key = ""
private_key = ""
method = "GET"
base_url = "https://testing-api.com"
endpoint = "/private/test"
url = base_url + endpoint
timestamp = str(int(time.time()))
signature_data = method + timestamp + url
signature = hmac.new(private_key.encode("utf-8"), signature_data.encode("utf-8"), hashlib.sha256).hexdigest()
headers = {
"api-key": public_key,
"timestamp": timestamp,
"signature": signature,
}
response = requests.request(method, url, headers=headers)
print(response)
</code></pre>
<p>However what I don't understand is what kind of keys are used.</p>
<p>For example I generated RSA keys as follow</p>
<pre><code>public_key = """-----BEGIN PUBLIC KEY-----
MFwwDQYJKoZIhvcNAQEBBQADSwAwSAJBAL7Jz6mHX1cau3wlc6QrGW+kcVHO0HPj
iwUDxdM1B8KSLgkfle4+Snq6HhOp5VXdQXZkYF4u9ctWbx/sB2oiD1sCAwEAAQ==
-----END PUBLIC KEY-----"""
private_key = """-----BEGIN PRIVATE KEY-----
MIIBUwIBADANBgkqhkiG9w0BAQEFAASCAT0wggE5AgEAAkEAvsnPqYdfVxq7fCVz
pCsZb6RxUc7Qc+OLBQPF0zUHwpIuCR+V7j5KeroeE6nlVd1BdmRgXi71y1ZvH+wH
aiIPWwIDAQABAkAnxyvkzLS0FH7Cg4x4zgOfo0l9JQGRJ//0K7UzM/tKNZOy881O
+qEDld7XMaUOmyGd/FxSskrJrzWTSqEZtQ8BAiEA7wsKUjGWFyTfuMkj/3/E0XI2
YYEaTRpROl4NN6FUttcCIQDMUnn7JDovxYlp17QhyOb41qc1I402QZ6zNiz33ytP
HQIgA2rr/drZo4ESdcjia9++x6PTZTd8Ucfji2sW00nKNUcCIFo7Oh9Mml2qcMrL
NYOOA2J0+RaggqYpSHqAPE+iwK+JAiB2YbMutRh5EDyTcI2ql/hYFLlAJVT/E1AN
/ItM1KCQQQ==
-----END PRIVATE KEY-----"""
</code></pre>
<p>But it doesn't match the format of the API keys I tested on several other APIs, and when I try to pass the public key into the header I got the following error</p>
<pre><code>requests.exceptions.InvalidHeader: Invalid return character or leading space in header: api-key
</code></pre>
<p>For example on an API I tested, the keys had the following format</p>
<pre><code>public_key = "Tub3FH4CLjHezvRcSKdeE18a9VrtzL"
private_key = "tBssW3InKRq09OMLkveyuZ65LDRokjVX8FXO0CL5VpdaV73NWayUvJSSAtCs"
</code></pre>
<p>But when I try to use those API keys for signing into my python script I have the following error on the PHP</p>
<pre><code>ErrorException: openssl_verify(): supplied key param cannot be coerced into a public key
</code></pre>
<p>So I have two issues here about the API key format.</p>
<p>First I would like to know what kind of keys are used by those APIs so I can have keys as simple strings instead of multi-line strings.</p>
<p>Secondly, I'd like to understand why those API keys are not considered valid by <code>openssl_verify</code>, since I suppose that is how the signatures are verified.</p>
<p><strong>Edit:</strong> As @Toppaco said in his answer, my issue came from the fact that I thought those APIs were using asymmetric keys and only keeping the public key when in fact they use two unrelated keys, one for authentication, and one for signing, thus my confusion.</p>
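<p>To make the symmetric-key point concrete: with HMAC, both sides use the <em>same</em> shared secret, so the server recomputes the digest (in PHP, <code>hash_hmac</code>) and compares, rather than calling <code>openssl_verify</code> — which only works on real asymmetric signatures. A sketch of the shared logic in Python (the key is just an example string, not a real credential):</p>

```python
import hashlib
import hmac

# Shared secret: HMAC is symmetric, so the server verifies with the
# same key the client signed with (no public/private pair involved).
SECRET = b"tBssW3InKRq09OMLkveyuZ65LDRokjVX8FXO0CL5VpdaV73NWayUvJSSAtCs"

def sign(method: str, timestamp: str, url: str, key: bytes = SECRET) -> str:
    data = (method + timestamp + url).encode("utf-8")
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify(method: str, timestamp: str, url: str,
           signature: str, key: bytes = SECRET) -> bool:
    expected = sign(method, timestamp, url, key)
    # Constant-time comparison avoids leaking the digest via timing.
    return hmac.compare_digest(expected, signature)
```

<p>If you do want public/private-key signatures instead, the client must produce an actual RSA (or ECDSA) signature over the data — e.g. with the <code>cryptography</code> package — and then the server-side <code>openssl_verify</code> with the PEM public key would work; an HMAC hex digest is not an RSA signature, which is why <code>openssl_verify</code> rejects it.</p>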
|
<python><php><digital-signature>
|
2023-06-23 07:43:59
| 1
| 924
|
Arkaik
|