| QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (2022-12-10 09:42:47 – 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars) |
|---|---|---|---|---|---|---|---|---|
75,557,082
| 5,896,319
|
How to calculate how many days are left until the due date in a query?
|
<p>I have a model with a <code>due_date</code> field, and I want to write a query that annotates an <code>overdue</code> value counting how many days are left until <code>due_date</code>. I can't get it to work in my code; I'm getting this error:</p>
<pre><code>Case() got an unexpected keyword argument 'default'
</code></pre>
<p>and if I remove the default I get:</p>
<pre><code> TypeError: QuerySet.annotate() received non-expression(s):
</code></pre>
<p>PS: Some of the objects have a null <code>due_date</code>.</p>
<pre><code>query_list = file.qs.annotate(overdue=Case(
    When(due_date__gt=timezone.now().date(), then=Value((F('due_date') - timezone.now().date()))),
    When(due_date__isnull=True, then=Value(None)),
    default=Value(0),
    output_field=DurationField())).values_list('overdue', ...)
</code></pre>
|
<python><django>
|
2023-02-24 13:22:00
| 0
| 680
|
edche
|
75,556,838
| 7,168,098
|
change one level of a multiindex dataframe with a function
|
<p>Assuming a multiindex dataframe as follows:</p>
<pre><code>import pandas as pd
import numpy as np
arrays = [np.array(['John', 'John', 'John', 'Jane', 'Jane', 'Jane']),
np.array(['New York', 'New York', 'San Francisco', 'New York', 'New York', 'San Francisco']),
np.array(['2018', '2019', '2018', '2019', '2020', '2020']),
np.array(['HR', 'Finance', 'HR', 'Finance', 'HR', 'Finance']),
np.array(['Company A', 'Company A', 'Company A', 'Company B', 'Company B', 'Company B']),
np.array(['Manager 1', 'Manager 1', 'Manager 2', 'Manager 2', 'Manager 3', 'Manager 3'])]
df = pd.DataFrame(np.random.randn(3, 6), columns=arrays)
df.columns.names = ['userid', 'city', 'year', 'department', 'company', 'line_manager']
display(df)
</code></pre>
<p>(The real case is much bigger of course)</p>
<p>I need to change the values of the names in the level called userid based on a function.</p>
<p>Here an example:</p>
<pre><code>def change_name(name):
    if name.startswith("J"):
        # only changing the name under certain conditions
        result = name + "00"
    else:
        result = name
    return result
</code></pre>
<p>How do I do that?
The rest of the index values remain the same.</p>
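For reference, pandas' <code>rename</code> accepts a <code>level</code> argument, which applies a mapping function to a single level of a MultiIndex and leaves the other levels untouched. A minimal sketch on a reduced two-level version of the frame above:

```python
import numpy as np
import pandas as pd

arrays = [
    np.array(['John', 'John', 'Jane']),
    np.array(['New York', 'San Francisco', 'New York']),
]
df = pd.DataFrame(np.random.randn(2, 3), columns=arrays)
df.columns.names = ['userid', 'city']

def change_name(name):
    # only changing the name under certain conditions
    return name + "00" if name.startswith("J") else name

# rename() with level= applies the function to that level only
df = df.rename(columns=change_name, level='userid')
print(df.columns.get_level_values('userid').tolist())
# → ['John00', 'John00', 'Jane00']
```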
|
<python><pandas><multi-index>
|
2023-02-24 12:56:31
| 1
| 3,553
|
JFerro
|
75,556,786
| 14,269,252
|
Double-ended slider that points to specific dates in a Streamlit app
|
<p>I am building a Streamlit app. I have a DataFrame with a start, end, and active date.
I want to make a double-ended slider that runs from the first date to the last date and also marks the active date (pointing to it) per ID.</p>
<p>I know how to make a slider that spans the first and last dates and marks the index date for a single ID, but it is not double-ended, and I want to show the index date per ID on the slider.</p>
<pre><code>id first_month last_month Active_date
PT1 2011-06-01 2019-10-01 2015-10-01
PT3 2020-09-01 2022-06-01 2021-10-01
</code></pre>
<pre class="lang-py prettyprint-override"><code>df['first_month_active'] = pd.to_datetime(df['first_month_active'])
start_dt = st.sidebar.date_input('Start date', value=df['first_month_active'].min())
df['last_month_active'] = pd.to_datetime(df['last_month_active'])
end_dt = st.sidebar.date_input('End date', value=df['last_month_active'].max())
#If we agree that index date is
df['Active_date'] = pd.to_datetime(df['Active_date'])
index_dt = st.sidebar.date_input('Index date', value="2015-10-01")
st.write(start_dt)
cols1,_ = st.columns((3,2)) # To make it narrower
format = 'MMM DD,YYYY' # format output
max_days = end_dt-start_dt
slider = cols1.slider('Select date', min_value=start_dt, value=index_dt ,max_value=end_dt, format=format)
</code></pre>
|
<python><pandas><streamlit>
|
2023-02-24 12:51:43
| 1
| 450
|
user14269252
|
75,556,558
| 9,997,666
|
Sampling DataFrame based on Start and End time for each process per Group - Pandas
|
<p>I have a dataframe with a column holding the group number. Each group has multiple associated processes, each with a specific start and end time.</p>
<p>It looks like the following:</p>
<pre><code>Group | Process | StartTime | EndTime |
-----------------------------------------------------------------
1 | A | 2023-01-01 10:09:18 | 2023-01-01 11:19:28 |
1 | B | 2023-01-01 11:29:01 | 2023-01-01 19:29:00 |
1 | C | 2023-01-01 19:56:11 | 2023-01-02 01:09:10 |
2 | A | 2023-02-14 23:54:11 | 2023-02-15 04:01:14 |
2 | B | 2023-02-14 05:56:11 | 2023-02-14 09:00:20 |
2 | D | 2023-02-14 10:16:01 | 2023-02-14 21:06:30 |
</code></pre>
<p>For each group, I want to resample the dataframe at a 1-minute frequency between the start and end time of each process.</p>
<p>For example, for Group 1, Process A, I will have rows from 2023-01-01 10:09 to 11:20, sampled at a 1-minute frequency, i.e. <code>df.resample('1T')</code>.</p>
<pre><code>Group | Process | Sample Timestamp | StartTime | EndTime |
--------------------------------------------------------------------------------------
1 | A | 2023-01-01 10:09:00 | 2023-01-01 10:09:18 | 2023-01-01 11:19:28|
1 | A | 2023-01-01 10:10:00 | 2023-01-01 10:09:18 | 2023-01-01 11:19:28|
1 | A | 2023-01-01 10:11:00 | 2023-01-01 10:09:18 | 2023-01-01 11:19:28|
.... | ... | ... | ... | ... |
1 | A | 2023-01-01 11:18:00 | 2023-01-01 10:09:18 | 2023-01-01 11:19:28|
1 | A | 2023-01-01 11:19:00 | 2023-01-01 10:09:18 | 2023-01-01 11:19:28|
1 | B | 2023-01-01 11:29:00 | 2023-01-01 11:29:01 | 2023-01-01 19:29:00|
1 | B | 2023-01-01 11:30:00 | 2023-01-01 11:29:01 | 2023-01-01 19:29:00|
.... | ... | ... | ... | ... |
1 | B | 2023-01-01 19:28:00 | 2023-01-01 11:29:01 | 2023-01-01 19:29:00|
1 | B | 2023-01-01 19:29:00 | 2023-01-01 11:29:01 | 2023-01-01 19:29:00|
< same for Process C and other Groups as well>
</code></pre>
<p>As a reference I tried this piece of code over here: <a href="https://stackoverflow.com/questions/43806497/pandas-resample-a-dataframe-using-a-specified-start-date-end-date-and-granula">Reference Code</a></p>
<p>But unfortunately, I am unable to apply it per group.</p>
<p>Any help is appreciated.</p>
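One loop-free idea (a sketch on a cut-down frame, not tested on the real data): build a 1-minute <code>date_range</code> per row, then <code>explode</code> it. Flooring the start to the minute reproduces the 10:09:00 first sample shown above, and because it works row by row, no per-group handling is needed at all:

```python
import pandas as pd

df = pd.DataFrame({
    'Group': [1, 1],
    'Process': ['A', 'B'],
    'StartTime': pd.to_datetime(['2023-01-01 10:09:18', '2023-01-01 11:29:01']),
    'EndTime': pd.to_datetime(['2023-01-01 10:12:28', '2023-01-01 11:31:00']),
})

# one 1-minute range per row, then explode to long form
df['Sample Timestamp'] = [
    pd.date_range(s.floor('min'), e, freq='1min')
    for s, e in zip(df['StartTime'], df['EndTime'])
]
out = df.explode('Sample Timestamp', ignore_index=True)
print(out[['Group', 'Process', 'Sample Timestamp']].head(3))
```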
|
<python><pandas><time-series>
|
2023-02-24 12:25:49
| 1
| 1,193
|
Debadri Dutta
|
75,556,417
| 948,655
|
Numpy's `NDArray[np.int_]` not compatible with Python's `Sequence[Integral]`?
|
<p>Code to reproduce:</p>
<pre class="lang-py prettyprint-override"><code>from numbers import Integral
from collections.abc import Sequence

import numpy as np
from numpy.typing import NDArray


def f(s: Sequence[Integral]):
    print(s)


def g() -> NDArray[np.int_]:
    return np.asarray([1, 2, 3])


def _main() -> None:
    a = g()
    f(a)


if __name__ == "__main__":
    _main()
</code></pre>
<p>On line 18 I get the following Mypy error:</p>
<pre><code>Argument 1 to "f" has incompatible type "ndarray[Any, dtype[signedinteger[Any]]]"; expected "Sequence[Integral]"
</code></pre>
<p>What's also weird is that this error doesn't occur if I remove the <code> -> None</code> from the <code>def _main() -> None</code>.</p>
<p><strong>EDIT</strong></p>
<p>It's even worse. Forget about <code>Integral</code> and <code>np.int_</code>. <code>NDArray</code> itself (or indeed the <code>np.ndarray</code> class) seems to be incompatible with <code>Sequence</code>, which doesn't make any sense to me. Run the following code:</p>
<pre class="lang-py prettyprint-override"><code>from collections.abc import Sequence

import numpy as np
from numpy.typing import NDArray


def f(s: Sequence):
    print(s)


def g() -> np.ndarray:
    return np.asarray([1, 2, 3])


def _main() -> None:
    a = g()
    f(a)


if __name__ == "__main__":
    _main()
</code></pre>
<p>I get the Mypy error: <code>Argument 1 to "f" has incompatible type "ndarray[Any, Any]"; expected "Sequence[Any]" [arg-type]</code>.</p>
<p><strong>EDIT AGAIN</strong>
My versions are:</p>
<ul>
<li>Python 3.10.1</li>
<li>Mypy 1.0.1</li>
<li>Numpy 1.24.2</li>
</ul>
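For what it's worth, the stubs match the runtime behaviour here: <code>ndarray</code> supports <code>len()</code> and indexing, but it is not registered with the <code>Sequence</code> ABC (it has no <code>index()</code> or <code>count()</code> methods), which is presumably why Mypy rejects it:

```python
from collections.abc import Sequence

import numpy as np

a = np.asarray([1, 2, 3])
# ndarray quacks like a sequence for len() and indexing...
print(len(a), a[0])
# ...but it is not a Sequence, at runtime or in the stubs
print(isinstance(a, Sequence))  # → False
```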
|
<python><numpy><python-typing><mypy>
|
2023-02-24 12:10:23
| 1
| 8,813
|
Ray
|
75,556,353
| 5,711,995
|
How to render a Python Panel component from React with Pyodide?
|
<p>I am trying to use an example from <a href="https://panel.holoviz.org/user_guide/Running_in_Webassembly.html#pyodide" rel="nofollow noreferrer">the panel documentation</a> of how to display a panel component from python using pyodide, but from a react component, instead of from pure html.</p>
<p>I have set up a <a href="https://github.com/AshleySetter/nextjs-react-app-with-pyodide" rel="nofollow noreferrer">minimal NextJS react app</a> which can be cloned and run locally simply with <code>npm i && npm start</code>. My example works for simple python code returning a string or number, but when I attempt to use it with the example panel code for a simple slider, I am unsure what to return in order for react to render the slider.</p>
<p>The python code is contained in <a href="https://github.com/AshleySetter/nextjs-react-app-with-pyodide/blob/main/src/App.js" rel="nofollow noreferrer">src/App.js</a>. I am simply overwriting the myPythonCodeString variable from the panel code to a simple 1+9 arithmetic to demonstrate it works in that simple case.</p>
<p>Any help would be much appreciated.</p>
<p><strong>Edit:</strong> I have added commits to this repo fixing the problem, the state of the repo when this question was asked can be seen in commit 3c735653dda0e873f17a98d0fb74edaca367ca00.</p>
|
<python><reactjs><holoviz><pyodide><holoviz-panel>
|
2023-02-24 12:02:48
| 2
| 1,609
|
SomeRandomPhysicist
|
75,556,221
| 1,473,517
|
Why is np.dot so much faster than np.sum?
|
<p>Why is np.dot so much faster than np.sum? Following this <a href="https://stackoverflow.com/questions/61945412/which-method-is-faster-and-why-np-sumarr-vs-arr-sum/61945719#61945719">answer</a> we know that np.sum is slow and has faster alternatives.</p>
<p>For example:</p>
<pre><code>In [20]: A = np.random.rand(1000)
In [21]: B = np.random.rand(1000)
In [22]: %timeit np.sum(A)
3.21 µs ± 270 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
In [23]: %timeit A.sum()
1.7 µs ± 11.5 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
In [24]: %timeit np.add.reduce(A)
1.61 µs ± 19.6 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
</code></pre>
<p>But all of them are slower than:</p>
<pre><code>In [25]: %timeit np.dot(A,B)
1.18 µs ± 43.9 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
</code></pre>
<p>Given that np.dot is both multiplying two arrays elementwise and then summing them, how can this be faster than just summing one array? If B were set to the all ones array then np.dot would simply be summing A.</p>
<p>So it seems the fastest option to sum A is:</p>
<pre><code>In [26]: O = np.ones(1000)
In [27]: %timeit np.dot(A,O)
1.16 µs ± 6.37 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
</code></pre>
<p>This can't be right, can it?</p>
<p>This is on Ubuntu with numpy 1.24.2 using openblas64 on Python 3.10.6.</p>
<p>Supported SIMD extensions in this NumPy install:</p>
<pre><code>baseline = SSE,SSE2,SSE3
found = SSSE3,SSE41,POPCNT,SSE42,AVX,F16C,FMA3,AVX2
</code></pre>
<p><strong>Update</strong></p>
<p>The order of the timings reverses if the array is much longer. That is:</p>
<pre><code>In [28]: A = np.random.rand(1000000)
In [29]: O = np.ones(1000000)
In [30]: %timeit np.dot(A,O)
545 µs ± 8.87 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [31]: %timeit np.sum(A)
429 µs ± 11 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [32]: %timeit A.sum()
404 µs ± 2.95 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [33]: %timeit np.add.reduce(A)
401 µs ± 4.21 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
</code></pre>
<p>This implies to me that there is some fixed-size overhead when calling np.sum(A), A.sum(), or np.add.reduce(A) that doesn't exist when calling np.dot(), but that the part of the code that does the summation is in fact faster.</p>
<hr />
<p>Any speed ups using cython, numba, python etc would be great to see.</p>
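(The identity being exploited is at least easy to sanity-check: dotting with a ones vector is, up to floating-point rounding, exactly a sum.)

```python
import numpy as np

A = np.random.rand(1000)
O = np.ones(1000)
# np.dot(A, O) = sum_i A[i] * 1, so it must agree with A.sum()
# up to floating-point rounding
print(np.isclose(np.dot(A, O), A.sum()))  # → True
```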
|
<python><numpy><cython><simd><numba>
|
2023-02-24 11:48:28
| 2
| 21,513
|
Simd
|
75,555,968
| 6,241,554
|
Can't access alert from Selenium webdriver
|
<p>I can't access/find/click alert with <code>alert_is_present</code> or <code>driver.switch_to.alert</code> features in both Firefox and Chrome drivers.</p>
<p>Easy example:</p>
<pre class="lang-py prettyprint-override"><code>from selenium import webdriver
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as ec
from selenium.webdriver.common.by import By


def test_easy():
    driver = webdriver.Firefox()  # The same with Chrome
    driver.get("https://webcamtests.com/check")
    WebDriverWait(driver, 10).until(ec.element_to_be_clickable((By.ID, "webcam-launcher"))).click()
    alert = WebDriverWait(driver, 10).until(ec.alert_is_present())
</code></pre>
<p>The alert is visible:</p>
<p><a href="https://i.sstatic.net/wSqbP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wSqbP.png" alt="alert image" /></a></p>
<p>but I'm getting <code>TimeoutException</code></p>
<p>Similarly with the last line replaced by:</p>
<pre class="lang-py prettyprint-override"><code> time.sleep(10)
alert = driver.switch_to.alert
</code></pre>
<p>I'm getting <code>NoAlertPresentException</code>.</p>
<pre><code>Additional info:
Python 3.11
Selenium 4.8.2
Firefox Driver Version: 102.8.0
Chrome Driver Version: 110.0.5481.177
</code></pre>
<p>PS: No, I don't want to ignore that alert or auto-click the 'Allow'/'Block' button. Let's say I want to get the alert text.</p>
|
<python><selenium-webdriver>
|
2023-02-24 11:23:59
| 0
| 1,841
|
Piotr Wasilewicz
|
75,555,764
| 10,981,411
|
python tkinter - how do I show a specific value in the input field
|
<p>Below is my code</p>
<p>When I run the script, I want my input field to show the value in <code>list_val</code>, i.e. 23 in this case. How do I do that?
At the moment nothing shows up in my input field.
Also, if I replace the value in the input field with some other number, it should replace the value in <code>list_val</code>.</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tkinter import *

root = Tk()

list_val = [23]

e = Entry(root, width=50)
e.pack()


def myClick():
    myLabel = Label(root, text=e.get())
    myLabel.pack()


myButton = Button(root, text='Enter Value', command=myClick)
myButton.pack()

root.mainloop()
</code></pre>
|
<python><tkinter>
|
2023-02-24 11:04:05
| 1
| 495
|
TRex
|
75,555,751
| 8,076,879
|
How to add text next to an image with Pillow or OpenCV?
|
<p>I am trying to add some text next to and under a <code>QR code</code>.</p>
<p>The problem is that I am struggling with how to compose the image into <code>QR code + text</code>. The image below is what I would like to have as a result. The function signature can be changed too.</p>
<p><a href="https://i.sstatic.net/0N02l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0N02l.png" alt="Target Image" /></a></p>
<p>This is the code I have so far</p>
<pre class="lang-py prettyprint-override"><code>"""
requirements.txt

qrcode==7.4.2
Pillow==8.1.0
opencv-python==4.7.0.68
"""
import os
from pathlib import Path

import cv2
import qrcode
from PIL import Image, ImageFont, ImageDraw, ImageOps


def create_qr_img() -> str:
    QRcode = qrcode.QRCode(
        error_correction=qrcode.constants.ERROR_CORRECT_H,
        box_size=5,
    )
    url = 'google.com'
    QRcode.add_data(url)
    QRcode.make()

    # adding color to QR code
    QRimg = QRcode.make_image(
        fill_color='Black', back_color="white").convert('RGB')

    # save the QR code generated
    out_fp = f'temp_/QR.png'
    QRimg.save(out_fp)
    return out_fp


def add_str_to_img(img_path: str,
                   str1: str,
                   str2: str,
                   show: bool = False) -> str:
    black_color_rgb = (0, 0, 0)
    white_color_rgb = (255, 255, 255)

    img = Image.open(img_path)

    # failed attempt 1)
    # expanding the border works only for writing on top or under the QR code
    # but if the string is too long, it gets cut off
    img = ImageOps.expand(img, border=30, fill=white_color_rgb)

    # failed attempt 2)
    # add empty space to the left of the QR code
    # exp_cm = 3
    # exp_px = int(exp_cm * 37.79527559055118)
    # new_shape_pixels = (img.width+exp_px, img.height)
    # img = ImageOps.fit(img, new_shape_pixels, method=Image.ANTIALIAS,
    #                    #bleed=0.0, centering=(0.5, 0.5)
    #                    )
    # end failed attempt 2)

    draw = ImageDraw.Draw(img)
    font_path = os.path.join(cv2.__path__[0], 'qt', 'fonts', 'DejaVuSans.ttf')
    font = ImageFont.truetype(font_path, size=52)

    # on top of the QR code
    draw.text((62, 0), str1, (0, 0, 0), font=font,
              align='center'
              )
    # bottom
    draw.text((0, 470), str2, black_color_rgb, font=font,
              align='center',
              )

    print('QR code TO BE generated!')
    out_fp = f'temp_/QR_print.png'
    Path(out_fp).unlink(missing_ok=True)
    img.save(out_fp)
    if show:
        img.show()
    print('QR code generated!')
    return out_fp


if __name__ == '__main__':
    img_path = create_qr_img()
    add_str_to_img(img_path,
                   'ExampleAboveQr',
                   'This is some long string. It could be multi-line. 22222222',
                   show=True)
</code></pre>
<p>I think the solution should be something like with <code>ImageOps.fit</code> but I could not get it work how I wanted (see <code>attempt 2)</code>) in code.</p>
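A sketch of one way to get the layout in the target image, sidestepping <code>ImageOps.fit</code> entirely: create a larger white canvas, paste the QR into its top-left corner, and draw into the free margins. A plain black square stands in for the real QR code here, and the sizes are made-up:

```python
from PIL import Image, ImageDraw, ImageFont

qr = Image.new('RGB', (200, 200), 'black')  # stand-in for the QR image

# canvas with extra room to the right of and below the QR code
canvas = Image.new('RGB', (qr.width + 300, qr.height + 60), 'white')
canvas.paste(qr, (0, 0))

draw = ImageDraw.Draw(canvas)
font = ImageFont.load_default()
draw.text((qr.width + 10, 10), 'Text beside the QR code', fill='black', font=font)
draw.text((10, qr.height + 10), 'Text below the QR code', fill='black', font=font)

print(canvas.size)  # → (500, 260)
```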
|
<python><python-imaging-library>
|
2023-02-24 11:02:39
| 1
| 2,438
|
DaveR
|
75,555,623
| 1,438,934
|
Python: Return pre-stored JSON file in response in Django Rest Framework
|
<p>I want to write an API which, on a GET call, returns a simple pre-stored JSON file. This file should be pre-stored on the file system. How do I do that?</p>
<p><code>register</code> is the app name, and <code>static</code> is a folder inside it where I keep the <code>stations.json</code> file: <code>register/static/stations.json</code>.</p>
<p>The content of this <code>stations.json</code> file should be returned in the response.</p>
<p>settings.py:</p>
<pre><code>STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'register/static/')
]
STATIC_URL = 'static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
</code></pre>
<p>views.py:</p>
<pre><code>from django.shortcuts import render

# Create your views here.
from django.contrib.auth.models import User
from .serializers import RegisterSerializer
from rest_framework import generics
from django.http import JsonResponse
from django.conf import settings
import json


class RegisterView(generics.CreateAPIView):
    queryset = User.objects.all()
    serializer_class = RegisterSerializer


def get_stations(request):
    with open(settings.STATICFILES_DIRS[0] + '/stations.json', 'r') as f:
        data = json.load(f)
    return JsonResponse(data)
</code></pre>
<p>urls.py:</p>
<pre><code>from django.urls import path
from register.views import RegisterView
from . import views
urlpatterns = [
path('register/', RegisterView.as_view(), name='auth_register'),
path('stations/', views.get_stations, name='get_stations'),
]
</code></pre>
<p>setup/urls.py:</p>
<pre><code>from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path('api/', include('register.urls')),
]
</code></pre>
<p>When I hit GET request from Postman: "http://127.0.0.1:8000/api/stations/",</p>
<p>I get error: 500 Internal server error.</p>
<p>TypeError at /api/stations/</p>
<p>Error:</p>
<pre><code><html lang="en">
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<meta name="robots" content="NONE,NOARCHIVE">
<title>TypeError
at /api/stations/</title>
<style type="text/css">
html * {
padding: 0;
margin: 0;
}
</code></pre>
|
<python><django>
|
2023-02-24 10:49:18
| 1
| 1,182
|
Anish Mittal
|
75,555,521
| 11,747,861
|
Polars equivalent of pandas factorize
|
<p>Does Polars have a function to encode a string column into integers (1, 2, 3) like <a href="https://pandas.pydata.org/docs/reference/api/pandas.factorize.html" rel="nofollow noreferrer">pandas.factorize</a>?</p>
<p>I didn't find one in the Polars documentation.</p>
|
<python><dataframe><python-polars>
|
2023-02-24 10:39:35
| 2
| 2,757
|
Mark Wang
|
75,554,956
| 5,081,366
|
Python: Visualize duplicated rows with horizontal lines from a Pandas series
|
<p>I would like to accomplish what the following R code does, but using Python. The idea is to visualize the duplicated rows in a dataframe as horizontal black lines. I think I can use the <code>pandas.core.series.Series</code> that <code>df.duplicated()</code> returns (with True and False values), but I don't know how to create the plot.</p>
<p>Here is the R code and the resultant plot:</p>
<pre><code># get the row numbers of duplicated rows
duplicated_rows <- data_frame(duplicated = duplicated(ign_data), row = 1:nrow(ign_data)) %>%
filter(duplicated == T)
# Plot duplicated rows as black lines
ggplot(duplicated_rows, aes(xintercept = row)) +
geom_vline(aes(xintercept = row)) + # plot a black line for each duplicated row
ggtitle("Indexes of duplicated rows") + # add a title
coord_flip() + scale_x_reverse() #flip x & y axis and reverse the x axis
</code></pre>
<p><a href="https://i.sstatic.net/VJSVP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VJSVP.png" alt="enter image description here" /></a></p>
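A rough matplotlib translation (a sketch with made-up data): take the row numbers from <code>df.duplicated()</code> and draw a horizontal line per duplicated row, inverting the y-axis to mimic <code>coord_flip() + scale_x_reverse()</code>:

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 1, 3, 2], 'b': [9, 8, 9, 7, 8]})

dup_rows = df.index[df.duplicated()]  # row numbers of duplicated rows
print(dup_rows.tolist())  # → [2, 4]

fig, ax = plt.subplots()
ax.hlines(dup_rows, xmin=0, xmax=1, colors='black')
ax.set_ylim(len(df), 0)  # reversed axis, as in the R plot
ax.set_title('Indexes of duplicated rows')
fig.savefig('duplicated_rows.png')
```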
|
<python><pandas>
|
2023-02-24 09:44:16
| 1
| 752
|
mjbsgll
|
75,554,807
| 9,827,719
|
Google Cloud Functions Python Sessions: "The session is unavailable because no secret key was set"
|
<p>I have created a simple login script that I want to host on <a href="https://cloud.google.com/functions" rel="nofollow noreferrer">Google Cloud Functions</a>.</p>
<p>The application works fine when I run it locally because I then run <strong>app.py</strong> which uses <code>app.secret_key</code> and starts a Flask app with <code>app.run(...)</code>.</p>
<p>However, when running on Google Cloud Functions, the starting point is the function <code>main(request=None)</code>. This does not start a Flask app. Because of this, a secret_key is not set and I get the error "The session is unavailable because no secret key was set".</p>
<p>How can I solve this problem?</p>
<p><strong>app.py - Only used locally</strong></p>
<pre><code>import os

import flask
from flask import request
from flask_cors import CORS

import main

app = flask.Flask(__name__)
cors = CORS(app)
app.config['CORS_HEADERS'] = 'Content-Type'


@app.route("/", methods=['GET', 'POST'])
def index():
    print("index()")
    return main.main(request)


# - Main start ----------------------------------------------------------------
if __name__ == "__main__":
    # Sessions
    app.secret_key = os.urandom(12)

    app.run(debug=True, host="0.0.0.0", port=8080, ssl_context=('certificates/cert.pem', 'certificates/key.pem'))
</code></pre>
<p><strong>main.py - The entry point for the Google Cloud Function</strong></p>
<pre><code>import json
import os

import flask
from flask import session

from src.pages.login import login


def main(request=None):
    print("main()")

    # Logged in?
    if not session.get('logged_in'):
        return login(app)
    else:
        return "logged in!"

    # Finish
    return {"message": "completed successfully"}


if __name__ == '__main__':
    main()
</code></pre>
<p><strong>src/pages/login.py</strong></p>
<pre><code>from flask import request, session


def login(app):
    print("login() :: Init")

    # Get :: Process
    process: int = int(request.args['process'])

    # Process == 1
    if process == 1:

        # Fetch username and password
        inp_username: str = ""
        inp_password: str = ""
        try:
            login = request.form
            inp_username = login['inp_username']
            inp_password = login['inp_password']
        except Exception as e:
            print(f"login() :: Process error when getting request form e={e}")

        # Test username and password
        if inp_username == "admin" and inp_password == "admin":
            print("login() :: Welcome")

            # Sessions
            session['logged_in'] = True

            return "Welcome"
        else:
            print("login() :: Wrong password")
            return "Wrong password"

    return '''
        <form method="post" action="?process=1" enctype="multipart/form-data">
            <p>Username:<br />
            <input type="text" name="inp_username" value="" />
            </p>
            <p>Password:<br />
            <input type="password" name="inp_password" value="" />
            </p>
            <p>
            <input type="submit" value="Login" />
            </p>
        </form>
    '''
</code></pre>
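A minimal sketch of the usual fix, assuming the Cloud Functions runtime imports the module: set <code>secret_key</code> at module level so the assignment runs in both environments, ideally from an environment variable, since <code>os.urandom</code> per instance would invalidate sessions whenever a new instance starts. <code>FLASK_SECRET_KEY</code> is a hypothetical variable name:

```python
import os

import flask

app = flask.Flask(__name__)
# module level: executed both locally and when the Cloud Functions
# runtime imports the file; FLASK_SECRET_KEY is a hypothetical env var
app.secret_key = os.environ.get('FLASK_SECRET_KEY') or os.urandom(12)

# sessions now work even though app.run() is never called
with app.test_request_context('/'):
    flask.session['logged_in'] = True
    print(flask.session['logged_in'])  # → True
```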
|
<python><google-cloud-functions>
|
2023-02-24 09:29:54
| 1
| 1,400
|
Europa
|
75,554,753
| 10,743,830
|
Iterating fast over h5 file and perform some calculations
|
<ol start="2">
<li>I need a super fast solution that takes at most 5 seconds on the 9,000 datapoints I provide in the link, because the real data is actually millions of rows.</li>
<li>Link to the h5 file: <a href="https://drive.google.com/file/d/16aI3plRFa3M6nSIiT1XioUIgsPYl1Wg8/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/16aI3plRFa3M6nSIiT1XioUIgsPYl1Wg8/view?usp=sharing</a></li>
</ol>
<p>The task at hand is as follows: given the coordinate data of different body parts of different mice in the h5 file, read in the h5 file (hopefully as a numpy array, not as pandas as I did below) and then calculate the centroid based on the tail1, tail2 and tail3 body parts.</p>
<p>My suspicion is that <code>.loc</code> indexing is what causes the problem, and that dataframe iteration is generally sub-optimal.</p>
<p>What I have done is standard <code>.loc</code> indexing:</p>
<pre><code>filename = "look at the h5 file in the link"  # h5 above
new_centroid_trackings = np.array([[0, 0, 0, 0, 0, 0, 0, 0]])  # initialize the data to concatenate after every iteration
model_name = "DLC_resnet50_4mice_new_video_no_wheelFeb17shuffle1_220000"  # not relevant for task
tracking_coords = pd.read_hdf(filename)  # read in the data

for frame in range(tracking_coords.shape[0]):
    centroid_mouse1_x = (tracking_coords.loc[frame, model_name]["mouse1"]["tail1"]["x"]+tracking_coords.loc[frame, model_name]["mouse1"]["tail2"]["x"]+tracking_coords.loc[frame, model_name]["mouse1"]["tail3"]["x"])/3
    centroid_mouse1_y = (tracking_coords.loc[frame, model_name]["mouse1"]["tail1"]["y"]+tracking_coords.loc[frame, model_name]["mouse1"]["tail2"]["y"]+tracking_coords.loc[frame, model_name]["mouse1"]["tail3"]["y"])/3
    centroid_mouse2_x = (tracking_coords.loc[frame, model_name]["mouse2"]["tail1"]["x"]+tracking_coords.loc[frame, model_name]["mouse2"]["tail2"]["x"]+tracking_coords.loc[frame, model_name]["mouse2"]["tail3"]["x"])/3
    centroid_mouse2_y = (tracking_coords.loc[frame, model_name]["mouse2"]["tail1"]["y"]+tracking_coords.loc[frame, model_name]["mouse2"]["tail2"]["y"]+tracking_coords.loc[frame, model_name]["mouse2"]["tail3"]["y"])/3
    centroid_mouse3_x = (tracking_coords.loc[frame, model_name]["mouse3"]["tail1"]["x"]+tracking_coords.loc[frame, model_name]["mouse3"]["tail2"]["x"]+tracking_coords.loc[frame, model_name]["mouse3"]["tail3"]["x"])/3
    centroid_mouse3_y = (tracking_coords.loc[frame, model_name]["mouse3"]["tail1"]["y"]+tracking_coords.loc[frame, model_name]["mouse3"]["tail2"]["y"]+tracking_coords.loc[frame, model_name]["mouse3"]["tail3"]["y"])/3
    centroid_mouse4_x = (tracking_coords.loc[frame, model_name]["mouse4"]["tail1"]["x"]+tracking_coords.loc[frame, model_name]["mouse4"]["tail2"]["x"]+tracking_coords.loc[frame, model_name]["mouse4"]["tail3"]["x"])/3
    centroid_mouse4_y = (tracking_coords.loc[frame, model_name]["mouse4"]["tail1"]["y"]+tracking_coords.loc[frame, model_name]["mouse4"]["tail2"]["y"]+tracking_coords.loc[frame, model_name]["mouse4"]["tail3"]["y"])/3

    # now concatenate the centroids to the previous ones
    new_centroid_trackings = np.concatenate((new_centroid_trackings, np.array([[centroid_mouse1_x, centroid_mouse1_y, centroid_mouse2_x, centroid_mouse2_y, centroid_mouse3_x, centroid_mouse3_y, centroid_mouse4_x, centroid_mouse4_y]])), axis=0)
</code></pre>
<p>This takes around 90 seconds for all the rows.</p>
<p>Needed solution: a NumPy (or other) solution that takes at most 5 seconds for all the rows.</p>
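A vectorized sketch on a synthetic frame with the same four-level column layout (the level names here are hypothetical stand-ins for whatever the real h5 file uses): transpose, group the column labels by mouse and coordinate, and average the tail parts for every frame in one step, which avoids both the per-frame loop and the repeated <code>np.concatenate</code>:

```python
import numpy as np
import pandas as pd

# synthetic stand-in for the h5 layout: (model, mouse, bodypart, coord)
cols = pd.MultiIndex.from_product(
    [['model'], ['mouse1', 'mouse2'], ['tail1', 'tail2', 'tail3'], ['x', 'y']],
    names=['scorer', 'individuals', 'bodyparts', 'coords'])
rng = np.random.default_rng(0)
tracking_coords = pd.DataFrame(rng.random((5, len(cols))), columns=cols)

# mean over the three tail parts, per mouse and per coordinate,
# for all frames at once
centroids = tracking_coords.T.groupby(level=['individuals', 'coords']).mean().T
print(centroids.shape)  # → (5, 4)
```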
|
<python><pandas><dataframe><numpy><performance>
|
2023-02-24 09:24:35
| 1
| 352
|
Noah Weber
|
75,554,740
| 5,320,906
|
Schema validation does not report all missing children
|
<p>Given this example schema ("big.xsd"):</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8" ?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<xsd:element name="root">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="A"/>
<xsd:element name="B"/>
<xsd:element name="C1" minOccurs="0"/>
<xsd:element name="C2" minOccurs="0"/>
<xsd:element name="C3" minOccurs="0"/>
<xsd:element name="C4" minOccurs="0"/>
<xsd:element name="C5" minOccurs="0"/>
<xsd:element name="C6" minOccurs="0"/>
<xsd:element name="C7" minOccurs="0"/>
<xsd:element name="C8" minOccurs="0"/>
<xsd:element name="C9" minOccurs="0"/>
<xsd:element name="C10" minOccurs="0"/>
<xsd:element name="C11" minOccurs="0"/>
<xsd:element name="C12" minOccurs="0"/>
<xsd:element name="C13" minOccurs="0"/>
<xsd:element name="C14" minOccurs="0"/>
<xsd:element name="C15" minOccurs="0"/>
<xsd:element name="D"/>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:schema>
</code></pre>
<p>and this example document ("big.xml"):</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" ?>
<root>
<A/>
<B/>
</root>
</code></pre>
<p>Validating the schema with lxml reports only the first ten "missing" children (line break inserted for readability):</p>
<pre><code>>>> from lxml import etree
>>> schema_doc = etree.parse('big.xsd')
>>> schema = etree.XMLSchema(schema_doc)
>>>
>>> doc = etree.parse('big.xml')
>>> schema.assertValid(doc)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "src/lxml/etree.pyx", line 3643, in lxml.etree._Validator.assertValid
lxml.etree.DocumentInvalid: Element 'root': Missing child element(s).
Expected is one of ( C1, C2, C3, C4, C5, C6, C7, C8, C9, C10 )., line 2
</code></pre>
<p>This is consistent with xmllint's output (I believe lxml delegates the validation to libxml2) (line break inserted for readability):</p>
<pre class="lang-none prettyprint-override"><code>$ xmllint --noout --schema big.xsd big.xml
big.xml:2: element root: Schemas validity error : Element 'root':
Missing child element(s). Expected is one of ( C1, C2, C3, C4, C5, C6, C7, C8, C9, C10 ).
big.xml fails to validate
</code></pre>
<p>Is there a way to make lxml report <em>all</em> the missing children, in particular the <code>D</code> element which is required to conform to the schema?</p>
<hr />
<p>Notes</p>
<ul>
<li>The actual schema is from a third party, so it cannot be changed.</li>
<li>Since the codebase I'm working with already depends on lxml I'm not asking for other packages (such as <a href="https://xmlschema.readthedocs.io/en/latest/intro.html" rel="nofollow noreferrer">xmlschema</a>) which might produce more useful error messages. I want to avoid adding more dependencies if possible.</li>
</ul>
|
<python><xml><xsd><lxml><libxml2>
|
2023-02-24 09:22:47
| 1
| 56,990
|
snakecharmerb
|
75,554,738
| 691,505
|
Attempting to run threads concurrently inside while loop in Python 3.11
|
<p>I am trying to get my head around threading in Python 3.11, and I am trying to work out why, when I put a <code>time.sleep(120)</code> inside <code>execute_subtasks</code>, the next thread is not processed and the code appears to run sequentially instead of concurrently.</p>
<p>Do I need to <code>start</code> the thread outside of the <code>for</code> loop or do I need to move the location of the <code>join</code>?</p>
<pre class="lang-py prettyprint-override"><code>from threading import Thread, Event, active_count, current_thread

threads = list()

while True:
    try:
        # ignore the main thread
        if active_count() > 1:
            for index, thread in enumerate(threads):
                thread.join()

        for message in queue.get_messages(4):
            if active_count() >= config.maxthreads + 1:
                # Thread pool is about to overflow. Skipping.
                continue

            x = Thread(
                target=message.get_task().execute_subtasks,
                daemon=True,
                args=[message.get_context()]
            )
            threads.append(x)
            x.start()
    except Exception:
        pass  # Exception in dispatch loop
</code></pre>
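As a point of comparison (a sketch, not a drop-in replacement for the dispatch loop above): <code>concurrent.futures.ThreadPoolExecutor</code> caps concurrency without any manual <code>join</code> bookkeeping, and because submission never blocks, slow tasks overlap instead of running sequentially:

```python
import time
from concurrent.futures import ThreadPoolExecutor


def execute_subtasks(context):
    time.sleep(0.2)  # stand-in for the real work
    return context


start = time.monotonic()
# the pool caps concurrency at max_workers; map()/submit() never block
# the way a join() inside the dispatch loop does
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(execute_subtasks, range(4)))
elapsed = time.monotonic() - start

print(results)            # → [0, 1, 2, 3]
print(elapsed < 0.8)      # the four 0.2 s sleeps overlap
```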
|
<python><python-3.x><multithreading><concurrency><python-multithreading>
|
2023-02-24 09:22:40
| 1
| 17,341
|
crmpicco
|
75,554,696
| 14,359,801
|
Dataframe : replace value and values around based on condition
|
<p>I would like to create a filter to replace values in a dataframe column based on a condition and also the values around it.</p>
<p>For example I would like to filter values and replace them with NaN if they are greater than 45, but also the value before and after, even if those do not meet the condition:</p>
<pre><code>df[i] = 10, 12, 25, 60, 32, 26, 23
</code></pre>
<p>In this example the filter should replace 60 with NaN and also the value before (25) and the value after (32). The result of the filter would be:</p>
<pre><code>df[i] = 10, 12, NaN, NaN, NaN, 26, 23
</code></pre>
<p>So far I am using this line, but it only replaces values that meet the condition, not the values around them:</p>
<pre><code>df[i].where(df[i] <= 45, np.nan, inplace=True)
</code></pre>
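One way to sketch this (the column is shown as a standalone Series for illustration): build the boolean mask of offending values, widen it to the neighbours with `shift()`, then blank out every marked position at once with `mask()`.

```python
# Widen the condition mask to the previous/next rows with shift(),
# then replace every marked position with NaN in one pass.
import numpy as np
import pandas as pd

s = pd.Series([10, 12, 25, 60, 32, 26, 23], dtype=float)

bad = s > 45                                   # values meeting the condition
widened = bad | bad.shift(1, fill_value=False) | bad.shift(-1, fill_value=False)
s = s.mask(widened)                            # marked positions become NaN

# s is now: 10, 12, NaN, NaN, NaN, 26, 23
```

`shift(1)` marks the row after each offender and `shift(-1)` the row before it; widening the window further is just more shifted terms OR-ed in.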
|
<python><pandas><dataframe><filter><replace>
|
2023-02-24 09:18:33
| 2
| 308
|
Ketchup
|
75,554,694
| 7,032,878
|
How to correctly refresh aws credentials with Python
|
<p>I'm trying to use the RefreshableCredentials module from botocore in order to manage the credential refresh automatically.</p>
<pre><code>import boto3
import botocore
from botocore.credentials import RefreshableCredentials
from botocore.session import get_session
def get_aws_credentials(aws_role_arn, session_name):
sts_client = boto3.client('sts')
assumed_role_object = sts_client.assume_role(
RoleArn = aws_role_arn,
RoleSessionName = session_name,
DurationSeconds = 900
)
return {
'access_key': assumed_role_object['Credentials']['AccessKeyId'],
'secret_key': assumed_role_object['Credentials']['SecretAccessKey'],
'token': assumed_role_object['Credentials']['SessionToken'],
'expiry_time': assumed_role_object['Credentials']['Expiration'].isoformat()
}
def get_aws_autorefresh_session(aws_role_arn, session_name):
session_credentials = RefreshableCredentials.create_from_metadata(
metadata = get_aws_credentials(aws_role_arn, session_name),
refresh_using = get_aws_credentials,
method = 'sts-assume-role'
)
session = get_session()
session._credentials = session_credentials
autorefresh_session = boto3.Session(botocore_session=session)
return autorefresh_session, session_credentials
</code></pre>
<p>Generating the credentials like this:</p>
<pre><code>arn = "1234"
session = "Test"
session, credentials = get_aws_autorefresh_session(arn, session)
</code></pre>
<p>And then I'm passing the session_credentials from get_aws_autorefresh_session to whatever function may need them.</p>
<p>With this approach, I've noticed that everything works, but after 300 seconds this exception is raised:</p>
<pre><code>get_aws_credentials() missing 2 required positional arguments: 'aws_role_arn' and 'session_name'
</code></pre>
<p>On the contrary, if I modify the function <code>get_aws_credentials</code> eliminating the variables, and passing static values for them:</p>
<pre><code>def get_aws_credentials():
sts_client = boto3.client('sts')
assumed_role_object = sts_client.assume_role(
RoleArn = "1234",
RoleSessionName = "Test",
DurationSeconds = 900
)
return {
'access_key': assumed_role_object['Credentials']['AccessKeyId'],
'secret_key': assumed_role_object['Credentials']['SecretAccessKey'],
'token': assumed_role_object['Credentials']['SessionToken'],
'expiry_time': assumed_role_object['Credentials']['Expiration'].isoformat()
}
def get_aws_autorefresh_session():
session_credentials = RefreshableCredentials.create_from_metadata(
metadata = get_aws_credentials(),
refresh_using = get_aws_credentials,
method = 'sts-assume-role'
)
session = get_session()
session._credentials = session_credentials
autorefresh_session = boto3.Session(botocore_session=session)
return autorefresh_session, session_credentials
</code></pre>
<p>Everything works smoothly.</p>
<p>My question is how to retrieve the credentials using variables for the role_arn.</p>
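The error comes from `RefreshableCredentials` calling `refresh_using` with no arguments, so the two-parameter version fails on refresh. One sketch of a fix is to bind the arguments up front with `functools.partial` (in the real code that would be `refresh_using=partial(get_aws_credentials, aws_role_arn, session_name)`); here a stub fetcher stands in for the STS call so the mechanics are visible:

```python
# Bind role_arn/session_name in advance so the refresher can be invoked
# with no arguments, which is how botocore calls it on expiry.
from functools import partial

def get_credentials(role_arn, session_name):
    # stand-in for the real sts_client.assume_role(...) call
    return {"access_key": f"AK-{role_arn}", "token": f"tok-{session_name}"}

refresh_using = partial(get_credentials, "1234", "Test")

# botocore later calls refresh_using() with no positional arguments:
creds = refresh_using()
```

A zero-argument closure (`lambda: get_aws_credentials(arn, name)`) works just as well; `partial` merely makes the bound values explicit.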
|
<python><amazon-web-services>
|
2023-02-24 09:18:30
| 2
| 627
|
espogian
|
75,554,605
| 1,230,694
|
Python Lambda Function Response Structure For App Sync Resolver
|
<p>I am trying to get a very simple example working in App Sync in which I will return a hardcoded list and an ID from a query, but I cannot work out what the response from my python lambda should look like. Whatever I try, I end up with <code>null</code> values when I run the query.</p>
<p>Here is my schema:</p>
<pre><code>type BMSDefinition {
object_id: String
definition: String
}
type BMSDefinitions {
premise_id: ID
definitions: [BMSDefinition]
}
type Query {
getDefinitions(premise_id: ID!): BMSDefinitions
}
schema {
query: Query
}
</code></pre>
<p>Here is my simple function:</p>
<pre><code>import json
def handler(event, context):
data = json.dumps(get_definitions())
return data
def get_definitions():
return {
"premise_id":"123456",
"definitions": [
{
"object_id": "123456",
"definition": "ABC"
},
{
"object_id": "7891011",
"definition": "DEF"
}
]
}
</code></pre>
<p>Here is my query:</p>
<pre><code>query MyQuery {
getDefinitions(premise_id: "123") {
premise_id
definitions {
object_id
definition
}
}
}
</code></pre>
<p>And here is the result of calling the API:</p>
<pre><code>{
"data": {
"getDefinitions": {
"premise_id": null,
"definitions": null
}
}
}
</code></pre>
<p>The lambda function definitely executes each time I run the query, I can see this from the lambda logs. If I look at the logs for AppSync I can see this in the response section of an execution of the query:</p>
<pre><code>"context": {
"arguments": {
"installation_id": "123"
},
"result": "{\"premise_id\": \"123456\", \"definitions\": [{\"object_id\": \"123456\", \"definition\": \"ABC\"}, {\"object_id\": \"7891011\", \"definition\": \"DEF\"}]}",
"stash": {},
"outErrors": []
},
"transformedTemplate": "{\"premise_id\": \"123456\", \"definitions\": [{\"object_id\": \"123456\", \"definition\": \"ABC\"}, {\"object_id\": \"7891011\", \"definition\": \"DEF\"}]}"
</code></pre>
<p>I have attached the lambda data source to the <code>getDefinitions</code> query in the schema view and I thought that using no VTL and the lambda resolver that AppSync should be mapping the fields in my response to the model in the schema.</p>
<p>I don't really know where to go from here as I cannot see why the values are null.</p>
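The AppSync log above hints at the likely cause: the `result` is a quoted JSON string, not an object, because of the `json.dumps` call. AppSync maps fields from the raw return value of the Lambda, so a pre-serialised string arrives as one opaque value and every mapped field resolves to null. A sketch of the fix is to return the dict directly:

```python
# Return the dict itself rather than json.dumps(...) of it; AppSync
# serialises the Lambda result for you, and field mapping needs real keys.
def handler(event, context):
    return get_definitions()

def get_definitions():
    return {
        "premise_id": "123456",
        "definitions": [
            {"object_id": "123456", "definition": "ABC"},
            {"object_id": "7891011", "definition": "DEF"},
        ],
    }

result = handler({}, None)
```

With a dict return value the `result` in the AppSync log would show a JSON object rather than an escaped string.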
|
<python><graphql><aws-appsync>
|
2023-02-24 09:10:48
| 1
| 3,899
|
berimbolo
|
75,554,575
| 14,693,246
|
Pytest mock different behaviour to same function on loop
|
<p>I'm having trouble mocking the return value inside looped lists. Imagine I have a function that paints cars like this:</p>
<pre><code>def do_operate(car_list: list):
flawless_car_list = list()
for car in car_list:
if paint_the_car(car):
flawless_car_list.append(car)
else:
print_log(f"{car} couldn't get painted")
return flawless_car_list
</code></pre>
<p>On each iteration there's some random error probability (like being unable to connect to the server, or failing to get car details), so a car might not get painted for an unknown reason. We have tests to ensure that even if a car can't get painted, we continue painting the rest of the upcoming cars and log the erroneous ones.</p>
<p>If I test the happy path, everything is ok like this:</p>
<pre><code>def test_happy_path(mocker):
test_list = ["honda", "bmw", "ford"]
mocker.patch("pytest1.paint_the_car", return_value=True)
mocker.patch("pytest1.print_log")
card_list = do_operate(car_list=test_list)
assert len(card_list) == 3
paint_the_car.assert_called()
print_log.assert_not_called()
</code></pre>
<p>But at some point I need to be able to mock <code>paint_the_car</code> like this: <code>mocker.patch("pytest1.paint_the_car", return_value=False)</code> <strong>for a specific car (the second car)</strong> and <code>mocker.patch("pytest1.paint_the_car", return_value=True)</code> for the <strong>first and third cars</strong>. In the end, I would like to pass these assertions:</p>
<pre><code>def test_error_occured_on_second_item(mocker):
test_list = ["honda", "bmw", "ford"]
# HOW TO CHANGE MOCKS?
mocker.patch("pytest1.paint_the_car", return_value=True)
mocker.patch("pytest1.print_log")
card_list = do_operate(car_list=test_list)
assert len(card_list) == 2
paint_the_car.assert_called()
print_log.assert_called_with("bmw couldn't get painted")
</code></pre>
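A sketch of the usual answer: pass a list as `side_effect` instead of `return_value`, and each successive call returns the next element. Shown here self-contained with `unittest.mock` (with pytest-mock it would be `mocker.patch("pytest1.paint_the_car", side_effect=[True, False, True])`):

```python
# A side_effect list makes successive calls return successive values,
# so the second car fails while the first and third succeed.
from unittest.mock import MagicMock

paint_the_car = MagicMock(side_effect=[True, False, True])

painted = [car for car in ["honda", "bmw", "ford"] if paint_the_car(car)]

# painted == ["honda", "ford"]; paint_the_car was called three times
```

If a call should raise instead of returning, put the exception class or instance in the list at that position.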
|
<python><pytest>
|
2023-02-24 09:08:28
| 1
| 749
|
Fatih Ersoy
|
75,554,533
| 1,852,526
|
Read text from os.Popen that opens new command prompt
|
<p>I am using <code>os.popen</code> to open a new command prompt window and run a process. How can I read the text within that command prompt?</p>
<pre><code>import os
def OpenServers():
os.chdir(coreServerFullPath)
process=os.popen("start cmd /K CoreServer.exe -c -s").read()
print(process) #Prints nothing
</code></pre>
<p>This is the output text that's shown in the command prompt which I want to print.</p>
<p><a href="https://i.sstatic.net/iTaTL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iTaTL.png" alt="Output text" /></a></p>
<h2>Edit</h2>
<p>I also tried this way, but no luck</p>
<pre><code> from subprocess import Popen, PIPE, STDOUT
def OpenServers():
os.chdir(coreServerFullPath)
result = subprocess.Popen(['start', 'cmd', '/k', 'CoreServer.exe -c -s'], shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True)
time.sleep(4)
output=result.stdout.read()
print(output) #Prints nothing
</code></pre>
<h2>Edit 2</h2>
<p>I tried something like this. The problem is, it requires running it twice. The first time I run it, the console is blank. The second time it works, but gives me an error because I can only open one instance of the server.</p>
<pre><code>def Test1():
os.chdir(coreServerFullPath)
result = subprocess.check_output(['CoreServer.exe', '-c', '-s'])
print(result.stdout)
</code></pre>
<p>Here is the full code that I was trying. I can run CoreServer only as an admin, so I'm doing it like this:</p>
<pre><code>import os
import sys
import subprocess
from subprocess import Popen, CREATE_NEW_CONSOLE
from subprocess import Popen, PIPE, STDOUT
import time
import ctypes, sys
#The command prompts must be opened as administrators. So need to run the python script with elevated permissions. Or else it won't work
def is_admin():
try:
return ctypes.windll.shell32.IsUserAnAdmin()
except:
return False
if is_admin():
#The program can only run with elevated admin privileges.
#Get the directory where the file is residing.
currentDirectory=os.path.dirname(os.path.abspath(__file__))
coreServerFullPath=os.path.join(currentDirectory,"Core\CoreServer\Server\CoreServer/bin\Debug")
isExistCoreServer=os.path.exists(coreServerFullPath)
echoServerFullPath=os.path.join(currentDirectory,"Echo\Server\EchoServer/bin\Debug")
isExistEchoServer=os.path.exists(echoServerFullPath)
#For now this is the MSBuild.exe path. Later we can get this MSBuild.exe as a standalone and change the path.
msBuildPath="C:\Program Files (x86)\Microsoft Visual Studio/2019\Professional\MSBuild\Current\Bin/amd64"
pathOfCorecsProjFile=os.path.join(currentDirectory,"Core\CoreServer\Server\CoreServer\CoreServer.csproj")
pathOfEchocsProjFile=os.path.join(currentDirectory,"Echo\Server\EchoServer\EchoServer.csproj")
def OpenServers():
os.chdir(coreServerFullPath)
#os.system("start /wait cmd /c {command}")
command_line = [coreServerFullPath, '-c', '-s']
result = subprocess.Popen(['start', 'cmd', '/k', 'CoreServer.exe -c -s'], shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT, close_fds=True)
time.sleep(4)
output=result.stdout.read()
print(output)
#process=os.popen("start cmd /K CoreServer.exe -c -s").read()
#print(process)
def Test():
os.chdir(coreServerFullPath)
output = subprocess.check_output(['CoreServer.exe', '-c', '-s'],shell=True)
time.sleep(4)
print(output)
def Test1():
os.chdir(coreServerFullPath)
result = subprocess.check_output(['CoreServer.exe', '-c', '-s'])
print(result.stdout)
if(not isExistCoreServer):
if(os.path.isfile(pathOfCorecsProjFile)):
os.chdir(msBuildPath)
startCommand="start cmd /c"
command="MSBuild.exe "+pathOfCorecsProjFile+" /t:build /p:configuration=Debug"
#os.system(startCommand+command)
cmd=subprocess.Popen(startCommand+command)
if(not isExistEchoServer):
if(os.path.isfile(pathOfEchocsProjFile)):
os.chdir(msBuildPath)
startCommand="start cmd /c"
command="MSBuild.exe "+pathOfEchocsProjFile+" /t:build /p:configuration=Debug"
os.system(startCommand+command)
if(isExistCoreServer and isExistEchoServer):
Test1()
else:
# Re-run the program with admin rights
ctypes.windll.shell32.ShellExecuteW(None, "runas", sys.executable, " ".join(sys.argv), None, 1)
</code></pre>
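The core issue is that `start cmd /K` hands the program to a brand-new console, so its output goes to that window and never reaches your pipe. A sketch of the alternative: run the executable directly with its stdout captured (here `sys.executable` printing a line stands in for `CoreServer.exe -c -s`):

```python
# Run the program directly (no "start cmd /K") so stdout flows back
# through the pipe instead of into a separate console window.
import subprocess
import sys

proc = subprocess.run(
    [sys.executable, "-c", "print('server listening')"],  # e.g. ["CoreServer.exe", "-c", "-s"]
    capture_output=True,
    text=True,
)
output = proc.stdout
```

For a long-running server that never exits, `subprocess.run` would block; use `subprocess.Popen(..., stdout=PIPE, text=True)` and read `proc.stdout` line by line instead. The trade-off is that the output then appears in your own console/pipe, not a separate window.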
|
<python><windows><popen>
|
2023-02-24 09:04:04
| 1
| 1,774
|
nikhil
|
75,554,457
| 11,692,124
|
How to enable automatic time and timezone setting in Windows with Python
|
<p>I have this code to enable Windows' automatic time and timezone setting.
With the first lines I am making sure that the code has admin privileges. The code runs and gives no error, but the changes in the registry are not applied.</p>
<pre><code>import ctypes
if not ctypes.windll.shell32.IsUserAnAdmin():
raise Exception("Admin privileges required to modify registry.")
import win32api
import win32con
# Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient
# Define the registry key path and value name
key_path = r"SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient"
value_name = "Enabled"
# Enable the "Set time automatically" feature
win32api.RegSetValueEx(
win32con.HKEY_LOCAL_MACHINE,
key_path + "\\" + value_name,
0,
win32con.REG_DWORD,
1,
)
# Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\tzautoupdate
# Define the registry key path and value name
key_path = r"SYSTEM\CurrentControlSet\Services\tzautoupdate"
value_name = "Start"
# Enable the "Set time zone automatically" feature
win32api.RegSetValueEx(
win32con.HKEY_LOCAL_MACHINE,
key_path + "\\" + value_name,
0,
win32con.REG_DWORD,
1,
)
print('done')
</code></pre>
|
<python><windows><pywin32>
|
2023-02-24 08:55:57
| 1
| 1,011
|
Farhang Amaji
|
75,554,456
| 231,934
|
Processing AWS Athena result with Python
|
<p>We are using Amazon Athena for some analytical processing. Athena produces CSV into an S3 bucket, which we process with Python. This works until we use composite values as query results.</p>
<p>It seems that Athena uses some SerDe format (I suspect it's LazySimpleSerDe, but it's hard to find that in the official documentation).</p>
<p>Is there any library for Python that is capable of deserialising composite types in CSV that's produced by Athena? And is it really SimpleLazySerDe or another (maybe even standard) format?</p>
<p>An example query</p>
<pre class="lang-sql prettyprint-override"><code>SELECT ARRAY[1,2,3] as array,
ARRAY[ARRAY[1,2], ARRAY[3,4]] as array_of_arrays,
ARRAY[MAP(
ARRAY['a'],
ARRAY['1']
)]
</code></pre>
<p>Produces this CSV</p>
<pre><code>"array","array_of_arrays","_col2"
"[1, 2, 3]","[[1, 2], [3, 4]]","[{a=1}]"
</code></pre>
<p>It's apparent that the format used by Athena for complex values is not any standard format (not JSON, YAML, etc.). Without a specification or grammar, it's hard to parse, because without knowing all the rules for separators and escaping literals it would be trial and error. Please note that the query is only an example to produce complex values so everyone can take a look and provide information on what format this is and how to parse it.</p>
<p>Please note that I don't search answers for how to orchestrate Athena runs with Python nor some workarounds like CTAS. My original question is</p>
<ul>
<li>what format is it</li>
<li>is it standard format</li>
<li>is there any Python library that is capable of SerDe operations on top of it</li>
</ul>
<p>Thank you</p>
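For the concrete examples above, the rendering resembles Java's `Collection`/`Map` `toString` output (brackets for arrays, `{key=value}` for maps), which is consistent with the LazySimpleSerDe hypothesis; I'm not aware of a published grammar or a Python library for it. A best-effort recursive parser for exactly the forms shown is sketched below; it does not handle values that themselves contain `,`, `=`, `]` or `}`, since the escaping rules for those are unknown.

```python
# Best-effort parser for Athena's Java-toString-style complex values:
# "[1, 2, 3]" -> list, "{a=1}" -> dict, everything scalar stays a string
# (the CSV carries no type information).
def parse_athena_value(text):
    value, _ = _parse(text.strip(), 0)
    return value

def _parse(s, i):
    if s[i] == "[":                         # array -> Python list
        return _parse_items(s, i + 1, "]")
    if s[i] == "{":                         # map -> Python dict
        return _parse_entries(s, i + 1, "}")
    j = i                                   # bare scalar, kept as a string
    while j < len(s) and s[j] not in ",]}=":
        j += 1
    return s[i:j].strip(), j

def _parse_items(s, i, close):
    items = []
    while s[i] != close:
        if s[i] in ", ":                    # skip separators
            i += 1
            continue
        value, i = _parse(s, i)
        items.append(value)
    return items, i + 1

def _parse_entries(s, i, close):
    entries = {}
    while s[i] != close:
        if s[i] in ", ":
            i += 1
            continue
        key, i = _parse(s, i)               # key ends at '='
        value, i = _parse(s, i + 1)         # skip the '='
        entries[key] = value
    return entries, i + 1

parsed = parse_athena_value("[{a=1}]")      # [{'a': '1'}]
```

A more robust route is to avoid the problem entirely, e.g. by casting complex columns to JSON in the query (`CAST(col AS JSON)`), so the CSV cell holds standard JSON.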
|
<python><amazon-web-services><amazon-athena>
|
2023-02-24 08:55:55
| 1
| 3,842
|
Martin Macak
|
75,554,366
| 14,269,252
|
Generate a fake date between two date in Python data frame
|
<p>I have a data frame called "test" as follows, and I would like to generate a random date between these two dates.</p>
<pre><code>id first_month last_month
PT1 2011-06-01 2019-10-01
PT3 2020-09-01 2022-06-01
</code></pre>
<pre><code>
import random
test["random_date"] = test.first_month + (test.last_month - test.first_month) * random.random()
</code></pre>
<p>I tried with this code but the error is :</p>
<pre><code>TypeError: unsupported operand type(s) for +: 'TimedeltaArray' and 'datetime.date'
</code></pre>
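The error comes from mixing plain `datetime.date` objects with pandas timedelta arithmetic. A sketch of one fix: convert both columns to pandas datetime64 first, then add a per-row random fraction of the gap (`np.random.rand(len(test))` gives a different factor per row, unlike a single `random.random()`):

```python
# Convert to datetime64 so datetime/timedelta arithmetic works, then add
# a per-row random fraction of the gap between the two dates.
import numpy as np
import pandas as pd

test = pd.DataFrame({
    "id": ["PT1", "PT3"],
    "first_month": ["2011-06-01", "2020-09-01"],
    "last_month": ["2019-10-01", "2022-06-01"],
})
test["first_month"] = pd.to_datetime(test["first_month"])
test["last_month"] = pd.to_datetime(test["last_month"])

gap = test["last_month"] - test["first_month"]
test["random_date"] = test["first_month"] + gap * np.random.rand(len(test))
```

Multiplying a timedelta column by a float array scales each gap elementwise, so every row gets an independent random date inside its own interval.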
|
<python><pandas>
|
2023-02-24 08:46:10
| 2
| 450
|
user14269252
|
75,554,354
| 12,415,855
|
Selenium / select dropbox?
|
<p>I try to use selenium with this site:</p>
<p><a href="https://gesund.bund.de/suchen/aerztinnen-und-aerzte" rel="nofollow noreferrer">https://gesund.bund.de/suchen/aerztinnen-und-aerzte</a></p>
<p>with the following code:</p>
<pre><code>import time
import os, sys
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select
from webdriver_manager.chrome import ChromeDriverManager
# from fake_useragent import UserAgent
if __name__ == '__main__':
path = os.path.abspath(os.path.dirname(sys.argv[0]))
options = Options()
options.add_argument("start-maximized")
options.add_experimental_option("prefs", {"profile.default_content_setting_values.notifications": 1})
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('excludeSwitches', ['enable-logging'])
options.add_experimental_option('useAutomationExtension', False)
options.add_argument('--disable-blink-features=AutomationControlled')
srv=Service(ChromeDriverManager().install())
driver = webdriver.Chrome (service=srv, options=options)
waitWD = WebDriverWait (driver, 10)
link = f"https://gesund.bund.de/suchen/aerztinnen-und-aerzte"
driver.get (link)
waitWD.until(EC.presence_of_element_located((By.XPATH,'//input[@id="arztsuche__field_where"]'))).clear()
time.sleep(0.5)
waitWD.until(EC.presence_of_element_located((By.XPATH,'//input[@id="arztsuche__field_where"]'))).send_keys("13403")
time.sleep(0.5)
# select = Select(driver.find_element(By.XPATH,'//div[@id="arztsuche-fachrichtung-list"]'))
# driver.execute_script("arguments[0].click();", waitWD.until(EC.element_to_be_clickable((By.XPATH, '//div[@id="arztsuche-fachrichtung-list"]'))))
waitWD.until(EC.presence_of_element_located((By.XPATH,'//div[@id="arztsuche-fachrichtung-list"]'))).send_keys("Hausarzt")
</code></pre>
<p>input("Press!")</p>
<p>I try to select an entry from the combobox "Fachrichtung" in the middle, but it is not working. I tried several things, as you can see in the comments:</p>
<pre><code> # select = Select(driver.find_element(By.XPATH,'//div[@id="arztsuche-fachrichtung-list"]'))
# driver.execute_script("arguments[0].click();", waitWD.until(EC.element_to_be_clickable((By.XPATH, '//div[@id="arztsuche-fachrichtung-list"]'))))
waitWD.until(EC.presence_of_element_located((By.XPATH,'//div[@id="arztsuche-fachrichtung-list"]'))).send_keys("Hausarzt")
</code></pre>
<p>I tried to use Select (but it tells me it's no select element). I tried to click on it, but it is not clickable. And I tried to simply assign a text to it, but that is also not working.</p>
<p>How can I automate this element on the site?</p>
|
<python><selenium-webdriver><xpath><webdriver><webdriverwait>
|
2023-02-24 08:44:40
| 1
| 1,515
|
Rapid1898
|
75,554,263
| 14,159,253
|
beanie.exceptions.CollectionWasNotInitialized error
|
<p>I'm new to the <a href="https://beanie-odm.dev/" rel="nofollow noreferrer"><code>Beanie</code></a> library which is</p>
<blockquote>
<p>an asynchronous Python object-document mapper (ODM) for MongoDB. Data models are based on Pydantic.</p>
</blockquote>
<p>I was trying this library with the <code>fastAPI</code> framework, and made an ODM for some document; let's say its name is <code>SomeClass</code>. I then tried to insert some data in the db using this ODM.<br />
Here's the code for the ODM and the method to create a document (in <code>someClass.py</code>):</p>
<pre class="lang-py prettyprint-override"><code>from beanie import Document
from pydantic import Field, BaseModel
class SomeClassDto(BaseModel):
"""
A Class for Data Transferring.
"""
name: str = Field(max_length=maxsize, min_length=1)
class SomeClassDao:
"""
This is a class which holds the 'SomeClass' class (inherited from Beanie Document),
and also, the methods which use the 'SomeClass' class.
"""
class SomeClass(Document):
name: str = Field(max_length=20, min_length=1)
@classmethod
async def create_some_class(cls, body: SomeClassDto):
some_class = cls.SomeClass(**body.dict())
return await cls.SomeClass.insert_one(some_class)
</code></pre>
<p>I've used and called the <code>create_some_class</code> function, but it threw this error:<br />
<code>beanie.exceptions.CollectionWasNotInitialized</code></p>
<p>Although the error is self-explanatory, I didn't understand it at first, and couldn't find any related question about my problem on SO, so I decided to post this question and answer it myself for future reference.</p>
|
<python><mongodb><fastapi><odm>
|
2023-02-24 08:35:58
| 2
| 1,725
|
Behdad Abdollahi Moghadam
|
75,554,257
| 2,897,115
|
python: pysftp only download first 10MB and exit
|
<p>I am using pysftp to download files from a server.</p>
<p>I am debugging my code. For that purpose I want pysftp to download only the first 10MB and exit.</p>
<pre><code>
sftp_folder_location = 'outbound'
sftp = pysftp.Connection(host=Hostname, username=Username, password=Password,cnopts=cnopts)
with sftp.cd(sftp_folder_location):
local_path = '/home/ubuntu/data'
sftp.isfile(filename)
sftp.get(filename,os.path.join(local_path, filename))
sftp.close()
</code></pre>
|
<python>
|
2023-02-24 08:35:36
| 1
| 12,066
|
Santhosh
|
75,553,947
| 282,328
|
How to perform async tasks during shutdown?
|
<p>I'm trying to create an async connection pool that keeps reference count for connections (multiple consumers can use one connection in parallel) and shuts down inactive ones after a timeout. I'm struggling to correctly implement shutdown logic: <code>cleanup_all</code> is called when the event loop is already shut down so I can't correctly call and await disconnect methods that return Futures. Is there a way to detect shutdown earlier and do clean up while the event loop is still active? I'm currently testing shutting down with keyboard interrupt but it should work in other modes too.</p>
<pre><code>class ClientPool:
class PoolItem:
def __init__(self, client: Client):
self.client = client
self.last_used = 0.
self.counter = 0
clients: dict[int, PoolItem] = {}
def __init__(self, max_idle_time=1*60):
self.max_idle_time = max_idle_time
atexit.register(self.cleanup_all)
@asynccontextmanager
async def acquire(self):
item = await self.get_item()
item.counter += 1
try:
yield item.client
finally:
item.counter -= 1
item.last_used = time.monotonic()
async def get_item(self) -> PoolItem:
client = ...
if client.id not in self.clients:
await client.connect()
            self.clients[client.id] = self.PoolItem(client)
return self.clients[client.id]
async def dispose_item(self, item: PoolItem):
del self.clients[item.client.id]
await item.client.disconnect()
def cleanup_all(self):
tasks = [item.client.disconnect() for item in list(self.clients.values())]
loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(*tasks))
async def cleanup(self):
while True:
await asyncio.sleep(self.max_idle_time)
try:
t = time.monotonic()
for client in list(self.clients.values()):
if client.counter == 0 and t - client.last_used > self.max_idle_time:
await self.dispose_item(client)
except Exception as e:
logger.exception(e)
</code></pre>
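One common pattern (a sketch, not the only option) is to drop `atexit` and run the async cleanup inside the main coroutine's `finally` block, while the loop created by `asyncio.run()` is still alive. On Ctrl-C the `KeyboardInterrupt` cancels the main task, the `CancelledError` propagates, and the `finally` still gets to await the disconnects:

```python
# Do the async teardown inside main()'s finally, while asyncio.run()'s
# event loop is still running, instead of in an atexit hook that fires
# after the loop is already gone.
import asyncio

log = []

async def disconnect(name):
    await asyncio.sleep(0)          # stands in for the real async disconnect
    log.append(f"closed {name}")

async def main():
    try:
        await asyncio.sleep(0.01)   # the application's real work runs here
    finally:
        # awaited while the event loop is still alive, even on cancellation
        await asyncio.gather(disconnect("a"), disconnect("b"))

asyncio.run(main())
```

For non-interrupt shutdown paths (e.g. SIGTERM in a service), registering a signal handler that cancels the main task funnels every exit through the same `finally`, so the pool's `cleanup_all` could simply become an awaited method called there.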
|
<python><python-3.x><contextmanager>
|
2023-02-24 08:02:06
| 0
| 8,574
|
Poma
|
75,553,784
| 813,970
|
Iterating on group of columns in a dataframe from custom list - pandas
|
<p>I have a dataframe <strong>df</strong> like this</p>
<pre><code>TxnId TxnDate TxnCount
100 2023-02-01 2
500 2023-02-01 1
400 2023-02-01 4
100 2023-02-02 3
500 2023-02-02 5
100 2023-02-03 3
500 2023-02-03 5
400 2023-02-03 2
</code></pre>
<p>I have the following custom lists</p>
<pre><code>datelist = [datetime.date(2023,02,03), datetime.date(2023,02,02)]
txnlist = [400,500]
</code></pre>
<p>I want to iterate the df as per below logic:</p>
<pre><code>for every txn in txnlist:
sum = 0
for every date in datelist:
sum += df[txn][date].TxnCount
</code></pre>
<p>I would also be interested to understand how to find the average of TxnCount for the filtered TxnIds.</p>
<p>After the <strong>sum step</strong>, based on the above input and filters:</p>
<pre><code> TxnId TxnCount
400 2
500 10
</code></pre>
<p>Average corresponding to <code>TxnId 400 = (2+0)/2 = 1</code></p>
<p>Average corresponding to <code>TxnId 500 = (5+5)/2 = 5</code></p>
<p>If average > 3 , add row from dataframe to breachList</p>
<pre><code>breachList =[[500,10]]
</code></pre>
<p>Please help me do this in pandas.</p>
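A vectorised sketch of the whole pipeline (no explicit loops): filter with `isin` on both lists, sum per `TxnId`, `reindex` so ids that matched no rows still appear with 0, divide by the number of requested dates for the average, then collect the breaches.

```python
# Filter by both lists, sum per TxnId (reindex keeps ids with no matching
# rows as 0), average over the number of requested dates, collect breaches.
import pandas as pd

df = pd.DataFrame({
    "TxnId":   [100, 500, 400, 100, 500, 100, 500, 400],
    "TxnDate": pd.to_datetime(
        ["2023-02-01"] * 3 + ["2023-02-02"] * 2 + ["2023-02-03"] * 3),
    "TxnCount": [2, 1, 4, 3, 5, 3, 5, 2],
})
datelist = pd.to_datetime(["2023-02-03", "2023-02-02"])
txnlist = [400, 500]

picked = df[df["TxnId"].isin(txnlist) & df["TxnDate"].isin(datelist)]
sums = (picked.groupby("TxnId")["TxnCount"].sum()
              .reindex(txnlist, fill_value=0))       # 400 -> 2, 500 -> 10
avgs = sums / len(datelist)                          # 400 -> 1.0, 500 -> 5.0
breach_list = [[txn, total] for txn, total in sums.items() if avgs[txn] > 3]
```

On the sample data this reproduces the expected sums (400 → 2, 500 → 10), averages (1 and 5), and `breach_list == [[500, 10]]`.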
|
<python><pandas>
|
2023-02-24 07:41:47
| 2
| 628
|
KurinchiMalar
|
75,553,740
| 458,661
|
Airflow: retry task with different parameter or settings
|
<p>I'm executing a query that sometimes fails because of the setting of one parameter.
Setting it to the 'safe' value is not desired, as this greatly hurts performance.
So I'd love to retry the same task on failure, but with a changed value for this parameter.</p>
<p>Is there a 'native' way of doing this in Airflow?
Or should I use the 'try: except:' way and work around this?</p>
|
<python><airflow><google-cloud-composer>
|
2023-02-24 07:37:31
| 1
| 1,956
|
Chrisvdberge
|
75,553,614
| 8,151,881
|
Tensorflow: The channel dimension of the inputs should be defined
|
<p>I am new to Tensorflow and am trying to train a specific deep neural network. I am using Tensorflow (2.11.0) to build the model described below; the data I use is also given below:</p>
<p><strong>Data:</strong></p>
<p>Here is some example data. For the sake of ease we can consider 10 samples. Each sample has shape <code>(128,128)</code>.</p>
<p>One can consider the below code as example training data.</p>
<pre><code>x_train = np.random.rand(10, 128, 128, 1)
</code></pre>
<p><strong>Normalization layer:</strong></p>
<pre><code>normalizer = tf.keras.layers.Normalization(axis=-1)
normalizer.adapt(x_train)
</code></pre>
<p><strong>Build model:</strong></p>
<pre><code>def build_and_compile_model(norm):
model = tf.keras.Sequential([
norm,
layers.Conv2D(128, 128, activation='relu'),
layers.Conv2D(3, 3, activation='relu'),
layers.Flatten(),
layers.Dense(units=32, activation='relu'),
layers.Dense(units=1)
])
model.compile(loss='mean_absolute_error', optimizer=tf.keras.optimizers.Adam(0.001))
return model
</code></pre>
<p>When I do</p>
<pre><code>dnn_model = build_and_compile_model(normalizer)
dnn_model.summary()
</code></pre>
<p>I get the below error:</p>
<pre><code>ValueError: The channel dimension of the inputs should be defined. The input_shape received is (None, None, None, None), where axis -1 (0-based) is the channel dimension, which found to be `None`.
</code></pre>
<p><strong>What am I doing wrong here?</strong></p>
<p>I have tried to get insights from <a href="https://stackoverflow.com/questions/48264676/tensorflow-valueerror-the-channel-dimension-of-the-inputs-should-be-defined-fo">this</a>, <a href="https://stackoverflow.com/questions/66013918/valueerror-the-channel-dimension-of-the-inputs-should-be-defined-found-none">this</a>, <a href="https://stackoverflow.com/questions/68978375/the-channel-dimension-of-the-inputs-should-be-defined-found-none">this</a> and <a href="https://stackoverflow.com/questions/64881851/tensorflow-keras-model-load-error-valueerror-the-last-dimension-of-the-inputs">this</a>. But, I have not found a workable solution yet.</p>
<p>What should I do to remove the error and get the model to work?</p>
<p>I will appreciate any help.</p>
|
<python><tensorflow><machine-learning><keras><deep-learning>
|
2023-02-24 07:22:13
| 1
| 592
|
Ling Guo
|
75,553,554
| 8,967,422
|
How not to start docker container on system reboot?
|
<p>I have <code>compose.yml</code> file:</p>
<pre><code> api:
restart: on-failure
command: uvicorn app:app
...
jobs:
restart: on-failure
command: python job.py
...
</code></pre>
<p><code>job.py</code>:</p>
<pre><code>import asyncio
from prometheus_client import start_http_server
async def background(sio):
await asyncio.gather(...) # <- there are many tasks with while True
start_http_server(5000)
asyncio.run(background(sio))
</code></pre>
<p>It works okay. After stopping, everything turns off. But when I restart the system, the <code>jobs</code> container starts automatically. Why?! <code>api</code> does not start, so why does <code>jobs</code> start?</p>
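A likely explanation (an assumption based on the code shown, not something the compose file proves): `restart: on-failure` also applies when the Docker daemon comes back up at boot, if the container's last exit code was non-zero. `uvicorn` handles SIGTERM and exits 0 at shutdown, while `job.py` sits blocked in `asyncio.gather` with no signal handling and is likely killed with a non-zero code, so only `jobs` qualifies for a restart. If the container should never come back automatically, a sketch of the service fragment:

```yaml
  jobs:
    restart: "no"   # quoted, or YAML parses it as the boolean false
    command: python job.py
```

Alternatively, keeping `on-failure` but making `job.py` handle SIGTERM and exit cleanly (exit code 0) should also stop the boot-time restart while preserving crash recovery.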
|
<python><docker><docker-compose><jobs>
|
2023-02-24 07:14:06
| 2
| 486
|
Alex Poloz
|
75,553,461
| 10,829,044
|
Access ipynb files from shared drive using local Jupyter notebook
|
<p>Currently, I have Jupyter Notebook installed on my laptop.</p>
<p>So, I am able to see two <code>.exe files</code>, <code>jupyter-dejavue.exe</code> and <code>jupyter-nbconvert.exe</code>, under the below path:</p>
<pre><code>C:\Users\test\AppData\Roaming\Python\Python38\Scripts\
</code></pre>
<p>Currently, I have been asked to move all my code files to the company network shared drive which looks like below</p>
<pre><code>\\ANZINDv024.abcde.com\GLOBAL_CODE_FILES
</code></pre>
<p>So, when I launch Jupyter notebook from my start menu (in my laptop), am not able to navigate to the shared drive in the below screen (because I don't see the shared drive folder)</p>
<p><a href="https://i.sstatic.net/QDzdR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QDzdR.png" alt="enter image description here" /></a></p>
<p>So, I went to my shared drive and double-clicked the <code>.ipynb</code> files, but they open in the browser (in text format).</p>
<p>So, I chose <code>open with</code> and tried opening with <code>jupyter-dejavue.exe</code> and <code>jupyter-nbconvert.exe</code>, but neither launches the Jupyter notebook.</p>
<p>How to launch Jupyter notebook to run <code>.ipynb</code> files stored in shared drive?</p>
|
<python><jupyter-notebook><jupyter><jupyter-lab><jupyterhub>
|
2023-02-24 07:01:27
| 3
| 7,793
|
The Great
|
75,553,449
| 2,739,700
|
Azure blob create using Rest API python signature error
|
<p>I am trying to create a blob file using Python code with a shared key, but it is failing; the details are below.</p>
<p>Note: we cannot use the Azure blob Python SDK due to our storage vendor; we must use a shared key.</p>
<p>Python code:</p>
<pre><code>import requests
import datetime
import hmac
import hashlib
import base64
storage_account_name = '<my-storage-account>'
storage_account_key = 'xxxxxxx'
# Define the container name and blob name
container_name = 'mycontainer5226'
blob_name = "test2"
# Set the request method and version
REQUEST_METHOD = 'PUT'
REQUEST_VERSION = '2020-04-08'
# Set the request date
REQUEST_DATE = datetime.datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT')
CANONICALIZED_HEADERS = f'x-ms-date:{REQUEST_DATE}\nx-ms-version:{REQUEST_VERSION}\n'
# Set the canonicalized resource string
CANONICALIZED_RESOURCE = f'/{storage_account_name}/{container_name}/'
REQUEST_DATE = datetime.datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT')
data = "test 1 23 file content"
Content_Length = str(len(data))
VERB = 'PUT'
Content_Encoding = ''
Content_Language = ''
Content_MD5 = ''
Content_Type = 'text/plain'
Date = ''
If_Modified_Since = ''
If_Match = ''
If_None_Match = ''
If_Unmodified_Since = ''
Range = ''
CanonicalizedHeaders = CANONICALIZED_HEADERS
CanonicalizedResource = CANONICALIZED_RESOURCE
# \'PUT\n\n\n22\n\ntext/plain\n\n\n\n\n\n\nx-ms-blob-type:BlockBlob\nx-ms-date:Fri, 24 Feb 2023 07:58:30 GMT\nx-ms-meta-m1:v1\nx-ms-meta-m2:v2\nx-ms-version:2020-04-08\n/blobmediapedevwus2/mycontainer5226/test2\
STRING_TO_SIGN = (VERB + '\n' + Content_Encoding + '\n' + Content_Language + '\n' +
Content_Length + '\n' + Content_MD5 + '\n' + Content_Type + '\n' +
Date + '\n' + If_Modified_Since + '\n' + If_Match + '\n' +
If_None_Match + '\n' + If_Unmodified_Since + '\n' + Range + '\n' +
CanonicalizedHeaders + CanonicalizedResource).encode('utf-8').strip()
print(STRING_TO_SIGN)
signed_string = base64.b64encode(hmac.new(base64.b64decode(storage_account_key), msg=STRING_TO_SIGN, digestmod=hashlib.sha256).digest()).decode()
headers = {
'x-ms-date' : REQUEST_DATE,
'x-ms-version' : REQUEST_VERSION,
'Content-Type': 'text/plain',
'Content-Length': Content_Length,
'x-ms-blob-type': "BlockBlob",
'x-ms-meta-m1': "v1",
'x-ms-meta-m2': "v2",
'Authorization' : ('SharedKey ' + storage_account_name + ':' + signed_string)
}
url = ('https://' + storage_account_name + f'.blob.core.windows.net/{container_name}/{blob_name}')
r = requests.put(url, headers = headers, data=data)
print(r.status_code)
print(r.content)
</code></pre>
<p>below is the string_to_sign:</p>
<pre><code>\'PUT\n\n\n22\n\ntext/plain\n\n\n\n\n\n\nx-ms-blob-type:BlockBlob\nx-ms-date:Fri, 24 Feb 2023 08:03:17 GMT\nx-ms-meta-m1:v1\nx-ms-meta-m2:v2\nx-ms-version:2020-04-08\n/blobmediapedevwus2/<storage-account>/test2\
b'PUT\n\n\n22\n\ntext/plain\n\n\n\n\n\n\nx-ms-date:Fri, 24 Feb 2023 08:03:17 GMT\nx-ms-version:2020-04-08\n/<storage-name>/mycontainer5226/'
</code></pre>
<p>Actual server response:</p>
<pre><code>b'\xef\xbb\xbf<?xml version="1.0" encoding="utf-8"?><Error><Code>AuthenticationFailed</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.\nRequestId:262e0db6-201e-0010-651c-482d3a000000\nTime:2023-02-24T06:51:30.2303424Z</Message><AuthenticationErrorDetail>The MAC signature found in the HTTP request \'3xxxxxxxxxxxxxxxxx\' is not the same as any computed signature. Server used following string to sign: \'PUT\n\n\n22\n\ntext/plain\n\n\n\n\n\n\nx-ms-blob-type:BlockBlob\nx-ms-date:Fri, 24 Feb 2023 06:51:27 GMT\nx-ms-meta-m1:v1\nx-ms-meta-m2:v2\nx-ms-version:2020-04-08\n/<storage account>/mycontainer5226/test2\'.</AuthenticationErrorDetail></Error>'
</code></pre>
<p>It seems there is some issue in how the signature string is constructed.</p>
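Comparing the two strings, the client's string-to-sign is missing the <code>x-ms-blob-type</code> and <code>x-ms-meta-*</code> headers, and its canonicalized resource lacks the blob name — the server's version includes all of these. A sketch of building the string from every <code>x-ms-*</code> header, lowercased and sorted (header values taken from the question; the account name is a placeholder):

```python
import hmac
import hashlib
import base64

def canonicalized_headers(headers):
    """Lowercase, sort and join all x-ms-* headers, as the service does."""
    ms = {k.lower().strip(): v.strip() for k, v in headers.items()
          if k.lower().startswith('x-ms-')}
    return ''.join(f'{k}:{ms[k]}\n' for k in sorted(ms))

def sign(key_b64, string_to_sign):
    """HMAC-SHA256 over the string-to-sign, keyed by the decoded account key."""
    return base64.b64encode(
        hmac.new(base64.b64decode(key_b64),
                 string_to_sign.encode('utf-8'),
                 hashlib.sha256).digest()).decode()

headers = {
    'x-ms-date': 'Fri, 24 Feb 2023 08:03:17 GMT',
    'x-ms-version': '2020-04-08',
    'x-ms-blob-type': 'BlockBlob',
    'x-ms-meta-m1': 'v1',
    'x-ms-meta-m2': 'v2',
}
# The resource must include the blob name, not just the container
canonicalized_resource = '/mystorageaccount/mycontainer5226/test2'
string_to_sign = (
    'PUT\n\n\n22\n\ntext/plain\n\n\n\n\n\n\n'
    + canonicalized_headers(headers)
    + canonicalized_resource
)
```

With the headers above, the constructed string matches the one the server reports in its <code>AuthenticationErrorDetail</code>.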
|
<python><azure><rest><azure-blob-storage><azure-storage>
|
2023-02-24 07:00:05
| 1
| 404
|
GoneCase123
|
75,553,432
| 16,115,413
|
can't locate popup button with selenium
|
<p>I have been trying to use selenium on a webpage but this popup is refraining me to do so.</p>
<p><a href="https://i.sstatic.net/xD7Nj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xD7Nj.png" alt="enter image description here" /></a></p>
<p>Note that the popup is only shown when you are not signed in (meaning you have to run my code so that Selenium opens up a new browser window for you which does not have any accounts).</p>
<p>I want to click on the "Not Interested" button through selenium.</p>
<p>I don't want to close the popup manually every time; is there a way to automate this?</p>
<p>here is my code:</p>
<pre><code># relevant packages & modules
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
import time
# relevant website
website = 'https://www.daraz.pk/'
# initialize Chrome
driver = webdriver.Chrome('C:\webdrivers\chromedriver.exe')
# open website
driver.get(website)
#maximize window
driver.maximize_window()
# waiting for popup
time.sleep(5)
# dealing with pop up
# with xpath
pop_up_deny = driver.find_element(By.XPATH , '/html/body/div[9]//div/div/div[3]/button[1]')
pop_up_deny.click()
</code></pre>
<p>It raised this error:</p>
<p><a href="https://i.sstatic.net/vEVkR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vEVkR.png" alt="enter image description here" /></a></p>
<p>My Chrome version: 110.0.5481.178 (Official Build) (64-bit)
My ChromeDriver version: 110.0.5481.77</p>
|
<python><python-3.x><selenium-webdriver><web-scraping>
|
2023-02-24 06:57:56
| 2
| 549
|
Mubashir Ahmed Siddiqui
|
75,553,243
| 661,716
|
python run asyncio task in the background
|
<p>I want to run an asyncio task in the background so that the code below prints 'b' once while printing 'a' repeatedly. Instead it prints 'a' forever with no 'b'.</p>
<p>It should cancel the task after 3 seconds, as shown in the code.</p>
<p>I tried with create_task and event_loops with no success. Any help would be appreciated.</p>
<pre><code># SuperFastPython.com
# example of running an asyncio coroutine in the background
import asyncio
import time
import uuid
async def custom_coro():
while True:
print('a')
time.sleep(1)
# await asyncio.sleep(1)
async def run_ws():
await asyncio.wait([
custom_coro(),
])
if __name__ == '__main__':
tasks = {}
task = asyncio.run(run_ws())
# task = asyncio.create_task(run_ws())
task_id = str(uuid.uuid4())
tasks[task_id] = task
print('b')
time.sleep(3)
tasks[task_id].cancel()
# tasks = asyncio.all_tasks()
# t = tasks.pop()
# t.cancel()
# asyncio.run(run_ws())
# print('b')
</code></pre>
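A minimal working sketch of the intended behavior: the two key changes are using <code>await asyncio.sleep()</code> instead of the blocking <code>time.sleep()</code>, and creating/cancelling the task inside a running event loop, since <code>asyncio.run()</code> blocks until its coroutine finishes. (Timings are shortened here; output is collected in a list instead of printed.)

```python
import asyncio

async def custom_coro(out):
    while True:
        out.append('a')
        await asyncio.sleep(0.1)  # non-blocking; time.sleep would block the loop

async def main():
    out = []
    task = asyncio.create_task(custom_coro(out))  # schedule in the background
    out.append('b')            # the main coroutine continues immediately
    await asyncio.sleep(0.35)  # let the background task run for a while
    task.cancel()              # then stop it
    try:
        await task
    except asyncio.CancelledError:
        pass                   # expected: the task was cancelled
    return out

result = asyncio.run(main())
```

The 'b' is appended before the task ever runs, because <code>create_task</code> only schedules the coroutine; it first executes when the main coroutine yields at its <code>await</code>.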
|
<python><multithreading><asynchronous><python-asyncio>
|
2023-02-24 06:28:49
| 1
| 1,226
|
tompal18
|
75,553,223
| 14,546,482
|
Loop over multiple databases and save database name in df
|
<p>I am attempting to connect to a Pervasive server and loop over many databases inside of that server. I can't quite figure out how to save the results so that each chunk of results stores an additional column with the database's name. Any help would be greatly appreciated!</p>
<p>Here's what I have so far:</p>
<pre><code>import pyodbc
import pandas as pd
databases = ['db1','db2']
sql = 'select top 10 * from testdatabase'
connect_string = 'DRIVER=Pervasive ODBC Interface;SERVERNAME=1.1.1.1:1111;DBQ={}'
connections = [pyodbc.connect(connect_string.format(n)) for n in databases]
cursors = [conn.cursor() for conn in connections]
data = []
try:
for cur in cursors:
rows = cur.execute(sql).fetchall()
df = pd.DataFrame.from_records(rows, columns=[col[0] for col in cur.description])
for name in databases:
df['DATABASE'] = name
#filename is getting overwritten
data.append(df)
except Exception as e:
print(e)
finally:
for cur in cursors:
cur.close()
df = pd.concat(data, axis=0).reset_index(drop=True)
</code></pre>
<p>ideally the output would look something like this:</p>
<pre><code>column 1 column 2 column 3 DATABASE
random data random data random data db1
random data random data random data db1
random data random data random data db1
random data random data random data db2
random data random data random data db2
random data random data random data db2
</code></pre>
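The inner <code>for name in databases</code> loop re-assigns the whole <code>DATABASE</code> column once per name, so every chunk ends up tagged with the last database. Pairing each cursor with its own database name via <code>zip</code> fixes this; a sketch with stand-in DataFrames in place of real cursor results:

```python
import pandas as pd

databases = ['db1', 'db2']
# Stand-ins for the per-database query results (one frame per cursor);
# with real cursors this would be: for name, cur in zip(databases, cursors)
results = [
    pd.DataFrame({'col1': [1, 2, 3]}),
    pd.DataFrame({'col1': [4, 5, 6]}),
]

data = []
for name, df in zip(databases, results):
    df = df.copy()
    df['DATABASE'] = name   # tag each chunk with its own database name
    data.append(df)

combined = pd.concat(data, axis=0).reset_index(drop=True)
```

Each chunk now carries the name of the database it came from, matching the desired output.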
|
<python><pandas><dataframe><pyodbc>
|
2023-02-24 06:24:16
| 0
| 343
|
aero8991
|
75,553,212
| 1,770,724
|
best way to check if a numpy array is all non negative
|
<p>This works, but it is not algorithmically optimal, since I don't need the min value to be stored while the function scans the array:</p>
<pre><code>def is_non_negative(m):
return np.min(m) >= 0
</code></pre>
<p>Edit: Depending on the data, an optimal function could indeed save a lot, because it will terminate at the first encounter of a negative value. If only one negative value is expected, time will be cut by a factor of two on average. However, building the optimal algorithm outside the NumPy library would come at a huge cost (Python code vs. C++ code).</p>
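Without the min, the fully vectorized check is <code>(m >= 0).all()</code>. For early termination while staying in NumPy, one common compromise is to scan in chunks, so a negative value near the front stops the scan after one chunk of vectorized work; a sketch (the chunk size is an arbitrary choice):

```python
import numpy as np

def is_non_negative(m, chunk=4096):
    """Chunked scan: vectorized per chunk, but stops at the first
    chunk that contains a negative value."""
    flat = np.ravel(m)
    for start in range(0, flat.size, chunk):
        if (flat[start:start + chunk] < 0).any():
            return False
    return True
```

This keeps the per-element work in C while bounding the wasted work after the first negative value to at most one chunk.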
|
<python><numpy><numpy-ndarray>
|
2023-02-24 06:22:54
| 2
| 4,878
|
quickbug
|
75,553,207
| 11,976,344
|
How to pass lambda hypterparameter to Sagemaker XGboost estimator with set_hyperparameters
|
<pre><code>xgb = sagemaker.estimator.Estimator(**training_dict, sagemaker_session=sagemaker_session)
xgb.set_hyperparameters( num_round = 2000,
objective = 'binary:logistic',
tree_method = 'hist',
eval_metric = 'auc',
.
.
.
lambda = 0.5,
alpha = 1
)
xgb.fit({'train': s3_input_train, 'validation': s3_input_validation})
</code></pre>
<p>The documentation here lists <code>lambda</code> for L2 regularization, but when I pass this to the <code>set_hyperparameters</code> method of the SageMaker estimator I get a syntax error, because <code>lambda</code> is a keyword.</p>
<p><a href="https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost_hyperparameters.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost_hyperparameters.html</a></p>
<pre><code> lambda = 0.5,
^
SyntaxError: invalid syntax
</code></pre>
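Since <code>lambda</code> is a Python keyword, it cannot appear as a bare keyword argument, but it is a perfectly valid dictionary key, and keyword arguments can be supplied by unpacking a dict with <code>**</code>. A sketch using a stand-in for <code>set_hyperparameters</code> (which itself accepts <code>**kwargs</code>):

```python
def set_hyperparameters(**kwargs):
    # Stand-in for xgb.set_hyperparameters; it just records what it receives
    return kwargs

params = {
    'num_round': 2000,
    'objective': 'binary:logistic',
    'lambda': 0.5,   # a reserved word is fine as a dict key
    'alpha': 1,
}
received = set_hyperparameters(**params)
```

With the real estimator the call would be <code>xgb.set_hyperparameters(**params)</code>.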
|
<python><python-3.x><amazon-web-services><machine-learning><amazon-sagemaker>
|
2023-02-24 06:22:21
| 1
| 398
|
Gaurav Chawla
|
75,553,199
| 14,900,600
|
Allow eval() to evaluate only arithmetic expressions and certain functions
|
<p>I have a calculator that uses the <code>eval()</code> function to evaluate expressions, but I am aware of the fact that using <code>eval()</code> is dangerous as it can be used to run arbitrary code on the machine.</p>
<p>So, I want it to only be able to evaluate arithmetic expressions and certain defined functions (imported from another file).</p>
<p>consolecalc.py:</p>
<pre class="lang-py prettyprint-override"><code>from mymathmodule import *
import time
while True:
try:
Choose = '''Choose your function:
1: Percentage calculator
2: Factorial calculator
3: Basic arithmetics (+-*/)
'''
for character in Choose:
print(character, end="", flush=True)
time.sleep(0.006)
a = int(input(':'))
if a == 1:
percentagecalc()
elif a == 2:
factorialcalc()
elif a == 3:
calc = input("Enter Expression:")
print("Expression equals:",eval(calc))
else:
print("\nOops! Error. Try again...")
time.sleep(0.6)
except:
print("\nOops! Error. Try again...")
time.sleep(0.6)
</code></pre>
<p>The defined functions are present in <em>mymathmodule.py</em> and I only want <code>eval()</code> to be able to evaluate those along with basic arithmetic expressions</p>
<p>I would also like to know if there are any alternatives to <code>eval()</code> that could do this for me...</p>
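One common pattern, sketched below, is to pass an empty <code>__builtins__</code> and a whitelist of allowed functions as the globals/locals of <code>eval()</code>. Note that this is a mitigation, not a real sandbox: a determined attacker can still escape via attribute access on literals, so for fully untrusted input a proper expression parser (e.g. walking the output of <code>ast.parse</code>) is safer. The whitelisted names here are illustrative:

```python
import math

# Whitelist of callables the expression may use (illustrative choices)
ALLOWED = {'sqrt': math.sqrt, 'factorial': math.factorial}

def safe_eval(expr):
    # An empty __builtins__ hides open(), __import__(), etc.;
    # only the whitelisted names are visible to the expression
    return eval(expr, {'__builtins__': {}}, ALLOWED)

result = safe_eval('2 + 3 * sqrt(16)')  # arithmetic plus a whitelisted function
```

Attempts to call anything outside the whitelist (such as <code>open</code>) raise <code>NameError</code> instead of executing.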
|
<python><eval>
|
2023-02-24 06:21:17
| 1
| 541
|
SK-the-Learner
|
75,553,171
| 3,847,651
|
How to use chart.js drawing multiple lines from line sets with (X, Y) values?
|
<p>I'm not trying to chart multiple lines by month as in the Chart.js documentation.
I have some data like:</p>
<p>line1={(X=1.1, Y=100), (X=2.1, Y=200), ... }</p>
<p>line2={(X=1.2, Y=110), (X=2.2, Y=210), ... }</p>
<p>......</p>
<p>I want all lines being charted on the same canvas as the below picture.
<a href="https://i.sstatic.net/iwhrf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iwhrf.png" alt="Multiple lines demo, coordinates not the same as the numbers above" /></a></p>
<p>I mean different lines may have different X coordinates.</p>
<p>Chart.js does not seem to support this sort of line charting (points must be aligned exactly on the fixed X values in the label list). Chart.js seems to expect:</p>
<p>X = {X=1.1, X=2.1, ...}</p>
<p>line1={ y1, y2, y3 ... }</p>
<p>line2={ y'1, y'2, y'3 ... }</p>
<p>All lines share the same X. This is not what I want.</p>
<p>I need every line to have its own X list: line1(X1 list, Y1 list), line2(X2 list, Y2 list), ..., lineN(XN list, YN list)</p>
<p>However, Qt can handle this well, as I expect.</p>
<p>If Chart.js is not the best tool for this, please tell me which one is. I don't want to draw lines on a canvas with raw JS. Is Plotly a good choice?</p>
|
<javascript><python><chart.js>
|
2023-02-24 06:16:25
| 3
| 1,553
|
Wason
|
75,553,096
| 4,348,400
|
Why is `sympy.Matrix.inv` slow?
|
<p>Here is a small example. I was surprised at how slow it is.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import sympy as sp
X = np.empty(shape=(10,3), dtype=object)
for i in range(10):
for j in range(3):
X[i,j] = sp.Symbol(f'X{i}{j}')
X = sp.Matrix(X.T @ X)
print(X.inv()) # This is really slow...
</code></pre>
<p>I would have thought that taking an inverse of a relatively small matrix would be fast. Am I missing something about the intended use case?</p>
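Nothing is being misused; the cost comes from symbolic expression swell rather than matrix size. Each entry of <code>X.T @ X</code> is already a sum of 10 products of symbols, and a symbolic inverse divides cofactors by the full symbolic determinant, so the result is enormous even for a 3x3 Gram matrix in 30 symbols. A tiny sketch showing the shape of what <code>inv()</code> has to build:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
M = sp.Matrix([[a, b], [c, d]])

# Every entry of a symbolic inverse is a cofactor divided by det(M);
# with complicated entries, these rational expressions explode in size
Minv = M.inv()

identity = sp.simplify(Minv * M)  # algebraically the 2x2 identity
```

Even at 2x2, every entry of the inverse carries the full determinant <code>a*d - b*c</code> in its denominator; with polynomial entries the determinant and cofactors grow combinatorially, which is what dominates the runtime.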
|
<python><matrix><sympy><symbolic-math><matrix-inverse>
|
2023-02-24 06:02:27
| 1
| 1,394
|
Galen
|
75,553,061
| 6,384,423
|
plotnine:: is there a better way to refer args of facet_wrap in after_stat
|
<p>I want to show percentages for bar plots using <code>plotnine</code> with <code>facet_wrap</code> and <code>stat = 'count'</code>.<br />
(Of course I could do it by preparing the values and using <code>stat = 'identity'</code>, but I want to avoid that.)<br />
When I pass the <code>facet_wrap</code> variable to <code>aes</code>, I can refer to it in <code>after_stat</code>.<br />
But that requires nullifying the <code>aes</code> manually, which seems ridiculous.<br />
Is there a better way to do it?</p>
<p>Any help would be greatly appreciated. Below is an example;</p>
<pre><code>from plotnine import *
from plotnine.data import mtcars
import pandas as pd
def prop_per_xcc(x, color, count):
df = pd.DataFrame({'x': x, 'color': color, 'count': count})
prop = df['count']/df.groupby(['x', 'color'])['count'].transform('sum')
return prop
facet_num = mtcars.vs.nunique()
print(
ggplot(mtcars, aes('factor(cyl)', fill='factor(am)')) +
geom_bar(position='fill') +
geom_text(aes(color = "factor(vs)", # sets arg of facet wrap to refer in after_stat
label = after_stat('prop_per_xcc(x, color, count) * 100')),
stat = 'count',
position = position_fill(vjust = 0.5),
format_string = '{:.1f}%',
show_legend = False) +
scale_color_manual(values = ["black"] * facet_num) + # nullify the aes manually
facet_wrap("vs")
)
</code></pre>
<p><a href="https://i.sstatic.net/miJUW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/miJUW.png" alt="enter image description here" /></a></p>
|
<python><plotnine>
|
2023-02-24 05:55:10
| 1
| 6,911
|
cuttlefish44
|
75,553,033
| 139,150
|
count similar values from list of dictionaries
|
<p>I have a list of dictionaries and I need to count unique entries.
Then I need to sort the values based on the tuple that is part of the key "corrected_word" (2 < 3 < 33)</p>
<pre><code>mylist = [
{'original_word': 'test1', 'corrected_word': ('test12', 3)},
{'original_word': 'test1', 'corrected_word': ('test12', 3)},
{'original_word': 'test2', 'corrected_word': ('test22', 2)},
{'original_word': 'test3', 'corrected_word': ('test3', 33)},
{'original_word': 'test3', 'corrected_word': ('test3', 33)},
{'original_word': 'test3', 'corrected_word': ('test3', 33)}
]
</code></pre>
<p>Expected Output:</p>
<pre><code>mylist = [
{'original_word': 'test2', 'corrected_word': ('test22', 2, 1)},
{'original_word': 'test1', 'corrected_word': ('test12', 3, 2)},
{'original_word': 'test3', 'corrected_word': ('test3', 33, 3)}
]
</code></pre>
<p>I have tried this:</p>
<pre><code>from collections import Counter
Counter([str(i) for i in mylist])
</code></pre>
<p>But it does not return the list of dictionaries.</p>
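Dicts are not hashable, but the <code>(original_word, corrected_word)</code> pair is, so <code>Counter</code> can count those tuples directly; the list of dicts is then rebuilt with the count appended and sorted by the tuple's number. A sketch reproducing the expected output from the question:

```python
from collections import Counter

mylist = [
    {'original_word': 'test1', 'corrected_word': ('test12', 3)},
    {'original_word': 'test1', 'corrected_word': ('test12', 3)},
    {'original_word': 'test2', 'corrected_word': ('test22', 2)},
    {'original_word': 'test3', 'corrected_word': ('test3', 33)},
    {'original_word': 'test3', 'corrected_word': ('test3', 33)},
    {'original_word': 'test3', 'corrected_word': ('test3', 33)},
]

# Count hashable (original_word, corrected_word) pairs instead of dicts
counts = Counter((d['original_word'], d['corrected_word']) for d in mylist)

# Rebuild the dicts with the count appended, sorted by the tuple's number
result = sorted(
    ({'original_word': orig, 'corrected_word': (*corr, n)}
     for (orig, corr), n in counts.items()),
    key=lambda d: d['corrected_word'][1],   # 2 < 3 < 33
)
```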
|
<python>
|
2023-02-24 05:51:14
| 2
| 32,554
|
shantanuo
|
75,552,842
| 12,319,746
|
Adding new text data to an existing blob in Azure
|
<p>I have a blob with data like this</p>
<pre><code>2324
2321
2132
</code></pre>
<p>How do I add a new value to this blob? So if I add '2200', it becomes</p>
<pre><code>2324
2321
2132
2200
</code></pre>
<p>I have tried <code>append_block()</code>, but that gives the error</p>
<blockquote>
<pre><code>Exception: ResourceExistsError: The blob type is invalid for this operation.
RequestId:16a8f0f9-001e-
Time:2023-02-24T05:05:16.1581160Z
ErrorCode:InvalidBlobType
</code></pre>
</blockquote>
<pre><code> blob_client = container_client.get_blob_client("LIST.txt")
blob_client.append_block('5231\n')
stuff = blob_client.download_blob().readall()
ans = stuff.decode('utf-8')
ans_list = ans.split('\r\n')
# print(ans_list)
for an in ans_list:
if an == '5231':
print("Num Exists")
</code></pre>
|
<python><azure>
|
2023-02-24 05:15:45
| 2
| 2,247
|
Abhishek Rai
|
75,552,588
| 368,453
|
Time complexity of the function multiset_permutations from sympy.utilities.iterables
|
<p>I'd like to know the time complexity of the function <code>multiset_permutations</code> from SymPy.</p>
<p>We could use this function as:</p>
<pre><code>from sympy.utilities.iterables import multiset_permutations
from sympy import factorial
[''.join(i) for i in multiset_permutations('aab')]
</code></pre>
<p>I'd like to know the time complexity of using this function in comparison to the time complexity of the function permutations from itertools.</p>
<p>I have searched the documentation, but I could not find it.</p>
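The documentation does not state a complexity, but the output sizes differ: <code>itertools.permutations</code> yields all n! orderings (with duplicates for repeated elements), while <code>multiset_permutations</code> yields only the n!/(k1!·k2!·...) distinct ones. The total work is therefore roughly proportional to the number of results each one emits. A quick check of the counts for <code>'aab'</code>:

```python
from itertools import permutations
from math import factorial

from sympy.utilities.iterables import multiset_permutations

word = 'aab'
distinct = [''.join(p) for p in multiset_permutations(word)]
all_perms = list(permutations(word))

n = len(word)
# Multinomial coefficient: n! / (2! * 1!) for the multiset {a, a, b}
expected_distinct = factorial(n) // (factorial(2) * factorial(1))
```

For 'aab' that is 3 distinct permutations versus 6 from <code>itertools</code>; the gap widens rapidly with more repeated elements.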
|
<python><algorithm><scipy><time-complexity>
|
2023-02-24 04:17:15
| 2
| 17,488
|
Alucard
|
75,552,575
| 1,852,526
|
Read command prompt output in new window in Python
|
<p>I referred to this <a href="https://stackoverflow.com/questions/15198967/read-a-command-prompt-output-in-a-separate-window-in-python">Read Command Prompt output</a> here. But I can't get it to work.</p>
<p>What I am trying to do is, I open a new command prompt window using subprocess.Popen and I want to run an exe file with some arguments. After I run that process, I want to capture the output or read the text in that command prompt.</p>
<p>When I say <code>cmd = subprocess.Popen('cmd.exe /K "CoreServer.exe -c -s"',creationflags=CREATE_NEW_CONSOLE,shell=True,stdout=subprocess.PIPE)</code> it won't run the process at all.</p>
<pre><code>import subprocess
from subprocess import Popen, CREATE_NEW_CONSOLE
def OpenServers():
os.chdir(coreServerFullPath)
cmd = subprocess.Popen('cmd.exe /K "CoreServer.exe -c -s"',creationflags=CREATE_NEW_CONSOLE)
time.sleep(3)
os.chdir(echoServerFullPath)
#cmd.exe /K "EchoServer.exe -c -s"
cmd1=subprocess.Popen('cmd.exe /K "EchoServer.exe -c -s"',creationflags=CREATE_NEW_CONSOLE)
#subprocess.Popen(['runas', '/user:Administrator', '"CoreServer.exe -c -s"'],creationflags=CREATE_NEW_CONSOLE
print("OUTPUT 1 "+cmd.stdout.readline())
</code></pre>
<p>Please see this screenshot, I want to read the text in the command prompt.</p>
<p><a href="https://i.sstatic.net/jiBtL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jiBtL.png" alt="Command prompt text" /></a></p>
<p>Just in case, here is the full code.</p>
<pre><code>import os
import subprocess
from subprocess import Popen, CREATE_NEW_CONSOLE
import time
import ctypes, sys
#The command prompts must be opened as administrator, so we need to run the Python script with elevated permissions, or else it won't work
def is_admin():
try:
return ctypes.windll.shell32.IsUserAnAdmin()
except:
return False
if is_admin():
    #The program can only run with elevated admin privileges.
#Get the directory where the file is residing.
currentDirectory=os.path.dirname(os.path.abspath(__file__))
coreServerFullPath=os.path.join(currentDirectory,"Core\CoreServer\Server\CoreServer/bin\Debug")
isExistCoreServer=os.path.exists(coreServerFullPath)
echoServerFullPath=os.path.join(currentDirectory,"Echo\Server\EchoServer/bin\Debug")
isExistEchoServer=os.path.exists(echoServerFullPath)
#For now this is the MSBuild.exe path. Later we can get this MSBuild.exe as a standalone and change the path.
msBuildPath="C:\Program Files (x86)\Microsoft Visual Studio/2019\Professional\MSBuild\Current\Bin/amd64"
pathOfCorecsProjFile=os.path.join(currentDirectory,"Core\CoreServer\Server\CoreServer\CoreServer.csproj")
pathOfEchocsProjFile=os.path.join(currentDirectory,"Echo\Server\EchoServer\EchoServer.csproj")
def OpenServers():
os.chdir(coreServerFullPath)
cmd = subprocess.Popen('cmd.exe /K "CoreServer.exe -c -s"',creationflags=CREATE_NEW_CONSOLE)
time.sleep(3)
os.chdir(echoServerFullPath)
#cmd.exe /K "EchoServer.exe -c -s"
cmd1=subprocess.Popen('cmd.exe /K "EchoServer.exe -c -s"',creationflags=CREATE_NEW_CONSOLE)
#subprocess.Popen(['runas', '/user:Administrator', '"CoreServer.exe -c -s"'],creationflags=CREATE_NEW_CONSOLE
if(not isExistCoreServer):
if(os.path.isfile(pathOfCorecsProjFile)):
os.chdir(msBuildPath)
startCommand="start cmd /c"
command="MSBuild.exe "+pathOfCorecsProjFile+" /t:build /p:configuration=Debug"
#os.system(startCommand+command)
cmd=subprocess.Popen(startCommand+command)
if(not isExistEchoServer):
if(os.path.isfile(pathOfEchocsProjFile)):
os.chdir(msBuildPath)
startCommand="start cmd /c"
command="MSBuild.exe "+pathOfEchocsProjFile+" /t:build /p:configuration=Debug"
os.system(startCommand+command)
if(isExistCoreServer and isExistEchoServer):
OpenServers()
else:
# Re-run the program with admin rights
ctypes.windll.shell32.ShellExecuteW(None, "runas", sys.executable, " ".join(sys.argv), None, 1)
</code></pre>
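A process started with <code>CREATE_NEW_CONSOLE</code> writes to its own console window, not to your pipe, so there is nothing to read from <code>cmd.stdout</code>. To capture the text, drop the new console and attach a pipe instead; a sketch using <code>sys.executable</code> as a portable stand-in for <code>CoreServer.exe -c -s</code>:

```python
import subprocess
import sys

# Stand-in command; on the real setup this would be
# ['CoreServer.exe', '-c', '-s'] run from coreServerFullPath
cmd = [sys.executable, '-c', "print('server started')"]

proc = subprocess.Popen(cmd,
                        stdout=subprocess.PIPE,     # capture instead of new console
                        stderr=subprocess.STDOUT,   # merge stderr into stdout
                        text=True)                  # decode bytes to str
first_line = proc.stdout.readline().rstrip()
proc.wait()
```

The trade-off is that the output then appears in your pipe rather than in a visible console window; you can both display and capture it by reading lines in a loop and printing them yourself.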
|
<python><subprocess>
|
2023-02-24 04:13:19
| 1
| 1,774
|
nikhil
|
75,552,548
| 2,900,089
|
"TypeError: Object of type int64 is not JSON serializable" while trying to convert a nested dict to JSON
|
<p>I have a nested dictionary that I am trying to convert to JSON using <code>json.dumps(unserialized_data, indent=2)</code>. The dictionary currently looks like this:</p>
<pre><code>{
"status": "SUCCESS",
"data": {
"cal": [
{
"year": 2022,
"month": 8,
"a": [
{
"a_id": 1,
"b": [
{
"abc_id": 1,
"val": 2342
}
]
}
]
},
{
"year": 2022,
"month": 9,
"a": [
{
"a_id": 2,
"b": [
{
"abc_id": 3,
"val": 2342
}
]
}
]
}
]
}
}
</code></pre>
<p>How can I convert all integers of type <code>int64</code> to <code>int</code> while leaving the structure of the dict and values of any other data type unaffected?</p>
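The offending values are NumPy scalars (<code>np.int64</code>), which the stock encoder refuses. One widely used fix is a custom <code>json.JSONEncoder</code> whose <code>default()</code> converts NumPy types to native Python ones while leaving the dict structure and all other values untouched:

```python
import json

import numpy as np

class NpEncoder(json.JSONEncoder):
    """Fall back to native Python types for NumPy scalars and arrays."""
    def default(self, obj):
        if isinstance(obj, np.integer):
            return int(obj)
        if isinstance(obj, np.floating):
            return float(obj)
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        return super().default(obj)   # anything else: normal error behavior

payload = {'year': np.int64(2022), 'val': np.float64(2.718)}
serialized = json.dumps(payload, cls=NpEncoder, indent=2)
```

<code>default()</code> is only called for objects the encoder cannot already handle, so nested dicts and lists are traversed as usual and only the NumPy leaves are converted.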
|
<python><json><int64>
|
2023-02-24 04:09:25
| 3
| 625
|
Somnath Rakshit
|
75,552,504
| 2,604,247
|
How to Change the Loglevel or Silence All Logs for a Third Party Library in Python?
|
<p>Here is what my code looks like.</p>
<pre><code>#!/usr/bin/env python3
# encoding:utf-8
import logging, asyncua
logging.basicConfig(format='%(asctime)s | %(levelname)s: %(message)s',
level=logging.INFO)
...# Other stuffs
logging.info(msg='My message')
</code></pre>
<p>But it seems the <code>asyncua</code> library has extremely verbose logging, even at the info level. So when I set my log level to info or debug, the library fills my console with unnecessary (at least to me) messages in which my own log messages get lost. The workaround I am using is setting my own log level to ERROR to suppress its info/debug logs and logging all my messages as errors, which seems like a terrible hack I would like to avoid.</p>
<p>So how do I totally silence any log from my imports, or set them all at error/warning level, to make sure my own logs do not get lost in the console? Is there a method that covers all relevant libraries, not just <code>asyncua</code>, but also, for example, <code>tensorflow</code>?</p>
|
<python><logging><python-logging>
|
2023-02-24 04:00:58
| 1
| 1,720
|
Della
|
75,552,490
| 10,715,700
|
How do I store a large number of regex and find the regex that has a match for a given string?
|
<p>We generally use a regex to match against strings. I want to do it the other way around. I have a large number of regexes. Now, given a string, I should identify which regexes match the string. How do I do this?</p>
<p>I was considering storing all the regexes in Elasticsearch and then querying it with the string, but I am not able to find any documentation to see if that is possible.</p>
<p>I could store all the regexes in a DB, fetch the ones I want to check, and then find matches, but is there a better way to do it?</p>
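For moderate numbers of patterns, the straightforward approach is to compile them all once and test each against the incoming string; the pattern set below is illustrative. For very large sets there are dedicated multi-pattern engines (e.g. Hyperscan), and Elasticsearch's percolate queries can index queries and match documents against them, though it would be worth verifying its regex support for your specific patterns:

```python
import re

# Illustrative name -> pattern store; in practice this could come from a DB
patterns = {
    'email': r'^[\w.]+@[\w.]+$',
    'phone': r'^\+?\d{7,12}$',
    'zipcode': r'^\d{5}$',
}
# Compile once up front so each lookup only pays for matching
compiled = {name: re.compile(p) for name, p in patterns.items()}

def matching_patterns(s):
    """Return the names of every stored regex that matches the string."""
    return [name for name, rx in compiled.items() if rx.match(s)]
```

This is O(number of patterns) per lookup; the precompilation is what keeps the per-string cost manageable.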
|
<python><regex><elasticsearch><search>
|
2023-02-24 03:58:39
| 1
| 430
|
BBloggsbott
|
75,552,483
| 5,283,144
|
Add partition columns of Parquet files from Google Cloud Storage to BigQuery
|
<p>I have Parquet files stored in a Google Cloud Storage Bucket with paths such as:
<code>gs://some_storage/files.parquet/category=abc/type=xyz/partition.parquet</code></p>
<p>Each parquet file has the fields:
<code>{'date':'2023-03-01','value': 2.718}</code></p>
<p>I am loading these fields into BigQuery and I need to include the partition columns, i.e. <code>category</code> and <code>type</code>, in the final table, so that each row would have the fields:
<code>{'date':'2023-03-01','value': 2.718, 'category': 'abc', 'type': 'xyz'}</code></p>
<p>Currently I'm iterating over the directory <code>gs://some_storage/files.parquet</code>, extracting the category and type partitions from the paths with a regexp, appending the values to the parquet data at read time, and inserting into BigQuery.</p>
<p>There must be a better way, since this form of partitioning is standard with Parquet files. Is there any method, either via pyarrow or Google Cloud services, that will read in the partitions directly without having to iterate over paths and use a regexp? Or better, is there any way I can end up with a BigQuery table that includes the <code>category</code> and <code>type</code> columns?</p>
<p>Thank you in advance for any help.</p>
<p>My current method looks like this:</p>
<pre><code>import re
import gcsfs
import pyarrow.parquet as pq
fs = gcsfs.GCSFileSystem(project='gcs-project')
# extract paths
paths = []
root_dir = 'gs://some_storage/files.parquet'
category_paths = fs.ls(root=root_dir)
for category_path in category_paths:
feature_paths = fs.ls(category_path)
for file_path in feature_paths:
[file_paths] = fs.ls(file_path)
paths.append(file_paths)
# read and append partition columns
for path in paths:
    category = re.search(r'category=(.*?)/', path).group(1)
    part_type = re.search(r'type=(.*?)/', path).group(1)
    df = pq.read_table(path).to_pandas()
    # append the category and type on the df
    df['category'] = category
    df['type'] = part_type
# finally insert to bigquery
</code></pre>
|
<python><python-3.x><google-bigquery><google-cloud-storage><parquet>
|
2023-02-24 03:56:52
| 1
| 553
|
Mike G
|
75,552,468
| 19,238,204
|
How to find Expectation from continuous function with SymPy and Python?
|
<p>I want to calculate the expectation and variance in terms of μ and X, but I do not know what to put for X below, since it is neither a Normal nor a Poisson distribution, but an arbitrary pdf.</p>
<pre><code>from sympy import symbols, Integral
from sympy.stats import Normal, Expectation, Variance, Probability
mu = symbols("μ", positive=True)
sigma = symbols("σ", positive=True)
pdf = (15/512)*(x**2)*((4-x)**2)
X = ?
print('Var(X) =',Variance(X).evaluate_integral())
print('E(X-μ) =',Expectation((X - mu)**2).expand())
print('final computation:')
print('E(X-μ) =',Expectation((X - mu)**2).doit())
</code></pre>
<p>In addition to that, there is another condition <code>0<x<4</code>. Thus <code>E(X)=2</code> should be the right answer.
<a href="https://i.sstatic.net/cbmfq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cbmfq.png" alt="1" /></a>
Thanks.</p>
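For an arbitrary density, <code>sympy.stats.ContinuousRV</code> builds the random variable directly from the pdf and its support, and the <code>Interval(0, 4)</code> support encodes the <code>0&lt;x&lt;4</code> condition. A sketch (using <code>Rational(15, 512)</code> instead of <code>15/512</code> to keep the arithmetic exact):

```python
from sympy import symbols, Rational, Interval, simplify
from sympy.stats import ContinuousRV, E, variance

x = symbols('x')
# The pdf from the question, with an exact rational coefficient
pdf = Rational(15, 512) * x**2 * (4 - x)**2

# ContinuousRV builds a random variable from an arbitrary density and support
X = ContinuousRV(x, pdf, Interval(0, 4))

mean = E(X)          # the expectation
var = variance(X)    # E((X - E(X))**2)
```

This yields <code>E(X) = 2</code>, matching the value expected in the question.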
|
<python><sympy>
|
2023-02-24 03:53:53
| 1
| 435
|
Freya the Goddess
|
75,552,390
| 5,947,182
|
Python playwright unable to access elements
|
<p>I want to scrape the words which reside in the <code><li></code> elements. The results return an empty list. Do they reside within a frame? As far as I can see, they are not within any <code><iframe></iframe></code> elements. If they do, how do you access the frame or find the frame id in this case? Here is the <a href="https://www.paperrater.com/page/lists-of-adjectives" rel="nofollow noreferrer">site</a> and the code</p>
<pre><code>from playwright.sync_api import sync_playwright, expect
def test_fetch_paperrater():
path = r"https://www.paperrater.com/page/lists-of-adjectives"
with sync_playwright() as playwright:
browser = playwright.chromium.launch()
page = browser.new_page()
page.goto(path)
texts = page.locator("div#header-container article.page ul li").all_inner_texts()
print(texts)
browser.close()
</code></pre>
|
<python><web-scraping><playwright><playwright-python>
|
2023-02-24 03:34:27
| 2
| 388
|
Andrea
|
75,552,338
| 1,039,486
|
Cannot import langchain.agents.load_tools
|
<p>I am trying to use LangChain Agents and am unable to import load_tools.
Version: <code>langchain==0.0.27</code></p>
<p>I tried these:</p>
<pre><code>from langchain.agents import initialize_agent
from langchain.llms import OpenAI
from langchain.agents import load_tools
</code></pre>
<p>shows the output</p>
<pre><code>---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-36-8eb0012265d0> in <module>
1 from langchain.agents import initialize_agent
2 from langchain.llms import OpenAI
----> 3 from langchain.agents import load_tools
ImportError: cannot import name 'load_tools' from 'langchain.agents' (C:\ProgramData\Anaconda3\lib\site-packages\langchain\agents\__init__.py)
</code></pre>
|
<python><python-3.x><langchain>
|
2023-02-24 03:22:02
| 3
| 392
|
Shooter
|
75,552,196
| 13,176,726
|
Getting 500 Internal Server Error when Deploying Django Project on Linux Server using Ubuntu 18.04 and Apache
|
<p>I have deployed a Django project on Ubuntu 18.04. Before using Apache I was testing using <code>Portal:8000</code> and it worked perfectly fine; when I added Apache and made the required changes I started getting a 500 Internal Server Error, and in the log I am getting the following error:</p>
<pre><code>[Fri Feb 24 02:22:13.453312 2023] [wsgi:error] [pid 7610:tid 140621178033920] Traceback (most recent call last):
[Fri Feb 24 02:22:13.453355 2023] [wsgi:error] [pid 7610:tid 140621178033920] File "/home/user/project/project/wsgi.py", line 12, in <module>
[Fri Feb 24 02:22:13.453364 2023] [wsgi:error] [pid 7610:tid 140621178033920] from django.core.wsgi import get_wsgi_application
[Fri Feb 24 02:22:13.453388 2023] [wsgi:error] [pid 7610:tid 140621178033920] ModuleNotFoundError: No module named 'django'
[Fri Feb 24 02:25:31.905130 2023] [wsgi:error] [pid 7610:tid 140621161248512] mod_wsgi (pid=7610): Target WSGI script '/home/user/project/project/wsgi.py' cannot be loaded as Python module.
[Fri Feb 24 02:25:31.906124 2023] [wsgi:error] [pid 7610:tid 140621161248512] mod_wsgi (pid=7610): Exception occurred processing WSGI script '/home/user/project/project/wsgi.py'.
[Fri Feb 24 02:25:31.906647 2023] [wsgi:error] [pid 7610:tid 140621161248512] Traceback (most recent call last):
[Fri Feb 24 02:25:31.906871 2023] [wsgi:error] [pid 7610:tid 140621161248512] File "/home/user/project/project/wsgi.py", line 12, in <module>
[Fri Feb 24 02:25:31.906979 2023] [wsgi:error] [pid 7610:tid 140621161248512] from django.core.wsgi import get_wsgi_application
[Fri Feb 24 02:25:31.907098 2023] [wsgi:error] [pid 7610:tid 140621161248512] ModuleNotFoundError: No module named 'django'
</code></pre>
<p>Here is the port 80 virtual host config:</p>
<pre><code><VirtualHost *:80>
ServerAdmin webmaster@localhost
DocumentRoot /var/www/html
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
Alias /static /home/user/project/static
<Directory /home/user/project/static>
Require all granted
</Directory>
Alias /media /home/user/project/media
<Directory /home/user/project/media>
Require all granted
</Directory>
<Directory /home/user/project/project>
<Files wsgi.py>
Require all granted
</Files>
</Directory>
WSGIScriptAlias / /home/user/project/project/wsgi.py
WSGIDaemonProcess project python-path=/home/user/project python-home=/home/user/project/venv
WSGIProcessGroup project
ServerName server_domain_name_or_IP
<Directory /var/www/html/>
AllowOverride All
</Directory>
</VirtualHost>
</code></pre>
<p>Here is the settings.py:</p>
<pre><code>import os
from pathlib import Path
import json
#from decouple import config
with open('/etc/config.json') as config_file:
config=json.load(config_file)
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = config['SECRET_KEY']
............................................
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / config.get('DATABASE_NAME'),
#'NAME': BASE_DIR / config('DATABASE_NAME'),
}
}
</code></pre>
<p>Here are the directory permissions:</p>
<pre><code>total 3120
drwxrwxr-x 12 user www-data 4096 Feb 23 04:52 .
drwxr-xr-x 7 user user 4096 Feb 22 04:29 ..
drwxr-xr-x 4 user user 4096 Feb 22 03:55 api
-rw-rw-r-- 1 user www-data 561152 Feb 23 04:52 db.sqlite3
-rw-rw-r-- 1 user user 2574273 Feb 23 04:22 get-pip.py
drwxr-xr-x 7 user user 4096 Feb 22 03:55 .git
drwxr-xr-x 3 user user 4096 Feb 24 02:38 project
drwxr-xr-x 3 user user 4096 Feb 22 03:55 .idea
-rwxr-xr-x 1 user user 685 Feb 22 03:55 manage.py
drwxrwxr-x 2 user www-data 4096 Feb 22 03:55 media
drwxr-xr-x 6 user user 4096 Feb 22 03:55 my_project
-rw-r--r-- 1 user user 661 Feb 22 03:55 requirements.txt
drwxr-xr-x 8 user user 4096 Feb 23 04:42 static
drwxr-xr-x 5 user user 4096 Feb 22 03:55 tac
drwxr-xr-x 5 user user 4096 Feb 22 03:55 users
drwxrwxr-x 6 user user 4096 Feb 22 04:00 venv
</code></pre>
<p><em>Update:</em>
I have made the following changes, but I still get the same error.
In the wsgi.py:</p>
<pre><code>import os
import sys
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')
# Add individual virtual environment packages at the end of sys.path; global env $
sys.path.append('/home/user/project/venv/lib/python3.8/site-packages')
# Add individual virtual environment packages at the beginning of sys.path; indiv$
sys.path.insert(0, '/home/user/project/venv/lib/python3.8/site-packages')
# Add the path to the 'project' Django project
sys.path.append('/home/user/project')
application = get_wsgi_application()
</code></pre>
<p>In the port 80 config:</p>
<pre><code># Alias /static /home/user/project/static
# <Directory /home/user/project/static>
# Require all granted
# </Directory>
# Alias /media /home/user/project/media
# <Directory /home/user/project/media>
# Require all granted
# </Directory>
# <Directory /home/user/project/project>
# <Files wsgi.py>
# Require all granted
# </Files>
# </Directory>
# WSGIScriptAlias / /home/user/project/project/wsgi.py
# WSGIDaemonProcess project python-path=/home/user/project:/home/user$
# WSGIProcessGroup project
</code></pre>
<p>My question:
What am I doing wrong, and how can I fix it? Note that I installed Python 3.8 on the server and the Django site was working perfectly fine before, so I am not sure what went wrong.</p>
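<p>For reference, here is a sketch of a daemon-process configuration that keeps the interpreter inside the virtualenv (the paths are the ones from the question and are assumptions about the actual layout; note also that the venv's Python minor version must match the Python that mod_wsgi was compiled against):</p>

```apacheconf
# Point python-home at the venv root (the directory containing bin/ and lib/),
# and python-path at the project root that holds manage.py.
WSGIDaemonProcess project python-home=/home/user/project/venv python-path=/home/user/project
WSGIProcessGroup project
WSGIScriptAlias / /home/user/project/project/wsgi.py
```

<p>A quick check that the venv itself can import Django: run <code>/home/user/project/venv/bin/python -c "import django; print(django.get_version())"</code>; if that fails, the problem is in the venv, not in Apache.</p>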
|
<python><django><linux><apache><wsgi>
|
2023-02-24 02:48:46
| 1
| 982
|
A_K
|
75,552,168
| 11,248,638
|
What is the correct order in data preprocessing stage for Machine Learning?
|
<p>I am trying to create some sort of step-by-step guide/cheat sheet for myself on how to correctly go over the data preprocessing stage for Machine Learning.</p>
<p>Let's imagine we have a binary classification problem.
Would the strategy below work, or do I have to change the order of some of the steps? Should anything be added or removed?</p>
<p><strong>1.</strong> <strong>LOAD DATA</strong></p>
<pre><code>import pandas as pd
df = pd.read_csv("data.csv")
</code></pre>
<p><strong>2.</strong> <strong>SPLIT DATA</strong> - I understand, that to prevent "data leakage", we <strong>MUST</strong> split data into training (work with it) and testing (pretend it does not exist) sets.</p>
<pre><code>from sklearn.model_selection import train_test_split
# stratify = df['target'] if there is class imbalance in the data, so the training and testing sets keep the same class proportions after splitting.
train_df, test_df = train_test_split(df, test_size = 0.33, random_state = 42, stratify = df['target'])
</code></pre>
<p><strong>3.</strong> <strong>EDA ON TRAINING DATA</strong> - Is it correct to look at the training set only or should we do EDA before splitting? If we assume the Test set doesn't exist, then we should not care what is there, right?</p>
<pre><code>train_df.info()
train_df.describe()
# + Plots etc.
</code></pre>
<p><strong>4.</strong> <strong>OUTLIERS ON TRAINING DATA</strong> - If we have to scale the data, the Mean (Average) is very sensitive to outliers, therefore we have to take care of them in the beginning. Also, if we decide to fill Null numerical features with mean, outliers may be a problem in this case.</p>
<pre><code>import matplotlib.pyplot as plt
import seaborn as sns
# Check distributions
sns.displot(train_df)
sns.boxplot(train_df)
train_df.corr() # Correlation between all features and label
train_df.corr()["target"].sort_values()
sns.scatterplot(x = "Column X", y = 'target', data = train_df)
train_df.describe() # above 75% + 1.5 * (75% - 25%) and below 25% - 1.5 * (75% - 25%)
</code></pre>
<p><strong>5.</strong> <strong>MISSING VALUES ON TRAINING DATA</strong> - We can't have Null values. We either remove or fill in them. This step should be taken care of in the beginning.</p>
<pre><code>train_df.info()
train_df.isnull().sum() # or train_df.isna().sum()
# Show the rows with Null values
train_df[train_df["Column"].isnull()]
</code></pre>
<p><strong>6.</strong> <strong>FEATURE ENGINEERING ON TRAINING DATA</strong> - Should this step be taken care of at the beginning as well? I think so, because we might create a feature that needs to be scaled.</p>
<pre><code># If some columns (not target) correlated with each other, we should delete one of them, or make some sort of blending.
train_df.corr()
train_df = train_df.drop("1 of Correlated X Column", axis = 1)
# For normally distributed data, the skewness should be about 0. A skewness value > 0 means there is more weight in the left tail of the distribution
# We should try to have normal distribution in the columns
train_df["Not Skewed Column"] = np.log(train_df["Skewed Column"] + 1)
train_df["Not Skewed Column"].hist(figsize = (20,5))
plt.show()
</code></pre>
<p><strong>7.</strong> <strong>CATEGORICAL DATA</strong> - We can't have objects in the data frame.</p>
<pre><code>from sklearn.preprocessing import OneHotEncoder # Just an example
# Create X and y variables
X_train = train_df.drop('target', axis = 1)
y_train = np.where(train_df['target'] == 'yes', 1, 0)
# Create the one hot encoder
onehot = OneHotEncoder(handle_unknown = 'ignore')
# Apply one hot encoding to categorical columns
encoded_columns = onehot.fit_transform(X_train.select_dtypes(include = 'object')).toarray()
X_train = X_train.select_dtypes(exclude = 'object')
X_train[onehot.get_feature_names_out()] = encoded_columns
</code></pre>
<p><strong>8.</strong> <strong>IMBALANCED DATA</strong> - Good to have the same or similar number of observations in the target column.</p>
<pre><code> from imblearn.over_sampling import SMOTE # Just an example
# Create the SMOTE class
sm = SMOTE(random_state = 42)
# Resample to balance the dataset
X_train, y_train = sm.fit_resample(X_train, y_train)
</code></pre>
<p><strong>9.</strong> <strong>SCALE DATA</strong> - Should we scale the target column in the Regression task?</p>
<pre><code># Brings mean close to 0 and std to 1. Formula = (x - mean) / std
from sklearn.preprocessing import StandardScaler # Just an example
scaler = StandardScaler()
scaled_X_train = scaler.fit_transform(X_train) # X_test we don't fit, only transform!
</code></pre>
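<p>The "fit on train, transform on test" rule from steps 2 and 9 can be illustrated with a toy standard scaler (a minimal sketch, not sklearn's implementation):</p>

```python
# Hand-rolled standard scaler: all statistics come from the training data
# only, and the test data merely reuses them. This is what prevents leakage.
class ToyStandardScaler:
    def fit(self, xs):
        self.mean = sum(xs) / len(xs)
        var = sum((x - self.mean) ** 2 for x in xs) / len(xs)
        self.std = var ** 0.5 or 1.0  # guard against zero variance
        return self

    def transform(self, xs):
        return [(x - self.mean) / self.std for x in xs]

train, test = [1.0, 2.0, 3.0], [4.0, 5.0]
scaler = ToyStandardScaler().fit(train)   # fit: train only
scaled_test = scaler.transform(test)      # transform: reuse train's mean/std
```

<p>sklearn's <code>StandardScaler</code> follows the same contract: <code>fit_transform</code> on the training set, plain <code>transform</code> on the test set.</p>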
<p><strong>10.</strong> <strong>PRINCIPAL COMPONENT ANALYSIS (PCA) - REDUCING DIMENSIONALITY</strong> - Should data be scaled before applying PCA?</p>
<pre><code># Example: PCA = 50 (n_components). Let's say Input is 100 X features, after applying PCA, Output will be 50 X features.
# Why don't use PCA all the time? We lose the ability to explain what each value is because they are now in combination with a whole bunch of features.
# Will not be able to look at feature importance, trees, etc. We use it when we need to.
# If we are able to train the model with all features, then great. if can't, we can apply PCA, but be ready to lose the ability to explain what is driving the machine learning model.
from sklearn.decomposition import PCA # Just an example
pca = PCA(n_components = 50) # Just an Example
scaled_X_train = pca.fit_transform(scaled_X_train) # X_test we don't fit, only transform!
</code></pre>
<p><strong>11.</strong> <strong>MODEL, FIT, EVALUATE, PREDICT</strong></p>
<pre><code>from sklearn.linear_model import RidgeClassifier # Just an Example
from sklearn.metrics import accuracy_score, recall_score, precision_score, f1_score, confusion_matrix
model = RidgeClassifier()
model.fit(scaled_X_train, y_train)
# HERE we should create and / or execute transformation function that will take test_df as input and will return scaled_X_test and y_test
y_pred = model.predict(scaled_X_test)
# Evaluate model - Calculate Classification metrics
accuracy = accuracy_score(y_test, y_pred)
precision = precision_score(y_test, y_pred)
recall = recall_score(y_test, y_pred)
f1 = f1_score(y_test, y_pred)
print(f"RidgeClassifier model scores Accuracy: {accuracy}, Precision: {precision}, Recall: {recall}, F1-Score: {f1}")
confusion_matrix(y_test, y_pred, labels = [1,0])
</code></pre>
<p><strong>12.</strong> <strong>SAVE MODEL</strong></p>
<pre><code>import joblib # Just an example
# Save Model
joblib.dump(model, 'best_model.joblib')
</code></pre>
|
<python><machine-learning><scikit-learn><pipeline><data-preprocessing>
|
2023-02-24 02:41:33
| 2
| 401
|
Yara1994
|
75,552,079
| 12,969,608
|
Having trouble looping through HTML tbody and creating a DataFrame from a table
|
<p>Here is the URL/Table that I'm having trouble with:</p>
<p><a href="https://www.naturalstattrick.com/teamtable.php?fromseason=20222023&thruseason=20222023&stype=2&sit=all&score=all&rate=y&team=all&loc=B&gpf=410&fd=&td=" rel="nofollow noreferrer">https://www.naturalstattrick.com/teamtable.php?fromseason=20222023&thruseason=20222023&stype=2&sit=all&score=all&rate=y&team=all&loc=B&gpf=410&fd=&td=</a></p>
<p>The HTML structure looks like:</p>
<pre class="lang-HTML prettyprint-override"><code><table>
<colgroup>
<col />
</colgroup>
<thead>
<tr>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
</tr>
</tbody>
</table>
</code></pre>
<p>Python:</p>
<pre class="lang-py prettyprint-override"><code>import requests
from bs4 import BeautifulSoup
import pandas as pd
##### LOAD IN NHL TEAM STATS ######
# Send a GET request to the Natural Stat Trick website and retrieve the HTML content
nhl_team_stats = "https://www.naturalstattrick.com/teamtable.php?fromseason=20222023&thruseason=20222023&stype=2&sit=all&score=all&rate=y&team=all&loc=B&gpf=410&fd=&td="
nhl_team_stats_response = requests.get(nhl_team_stats)
nhl_team_stats_html_content = nhl_team_stats_response.content
# Use BeautifulSoup to parse the HTML content
nhl_team_stats_soup = BeautifulSoup(nhl_team_stats_html_content, "html.parser")
# Find the table containing the NHL schedule data
nhl_team_stats_table = nhl_team_stats_soup.find("table", attrs={"id": "teams"})
# Extract the table headers
nhl_team_stats_headers = []
for thead in nhl_team_stats_table.find_all('thead'):
for tr in thead.find_all("tr"):
for th in tr.find_all("th"):
nhl_team_stats_headers.append(th.text.strip())
nhl_team_stats_headers[0] = "ID"
print(nhl_team_stats_headers)
# Extract the table rows
nhl_team_stats_rows = []
teams_list = []
tbody = nhl_team_stats_table.find("tbody")
for tr in tbody.find_all("tr"):
row = []
for td in tr.find_all(["td"]):
cell_text = td.text.strip()
row.append(cell_text)
nhl_team_stats_rows.append(row)
teams_list.append(row[1])
print(nhl_team_stats_rows)
# print(teams_list)
# Convert the nhl_team_stats_rows to a DataFrame
# nhl_team_stats_df = pd.DataFrame(nhl_team_stats_rows, nhl_team_stats_headers)
# Get the mean xGF/60
# mean_gf = pd.to_numeric(nhl_team_stats_df[22], errors='coerce').mean()
# Convert the teams_list to a DataFrame
# teams_df = pd.DataFrame(teams_list, columns=["Team"])
</code></pre>
<p>Is anything jumping out at anyone? I feel like I am looping through the table correctly. Printing out the headers works fine; it is just the rows I am having an issue with. It seems like the data is duplicated, which makes me think there are too many loops, but I have not found a way to get the data without duplicates and a list whose length equals 32. Any ideas?</p>
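<p>To sanity-check the row loop itself, here is a self-contained run on a miniature version of the table (stdlib only, standing in for the BeautifulSoup calls; the team names are made up). If this pattern yields clean rows but the live page does not, the duplicates may be coming from a second hidden copy of the table or a nested table on the page, so it is worth scoping the search to the direct children of the intended <code>tbody</code>:</p>

```python
# Parse a tiny well-formed table and extract one row per <tr>, exactly as the
# question's loop intends. xml.etree works here because the fragment is XML-clean.
import xml.etree.ElementTree as ET

html = """<table id="teams">
  <thead><tr><th>ID</th><th>Team</th></tr></thead>
  <tbody>
    <tr><td>1</td><td>Team A</td></tr>
    <tr><td>2</td><td>Team B</td></tr>
  </tbody>
</table>"""

table = ET.fromstring(html)
rows = [[td.text.strip() for td in tr.findall("td")]
        for tr in table.find("tbody").findall("tr")]
teams = [row[1] for row in rows]
```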
|
<python><pandas><dataframe>
|
2023-02-24 02:19:37
| 0
| 587
|
Tony Ingle
|
75,552,070
| 4,117,975
|
How to mock the post() and get() calls of httpx in python unitest?
|
<p>The following test works when I patch the entire functions <code>get_post()</code> and <code>get_call()</code>. How can I patch <code>httpx.post()</code> and <code>httpx.get()</code> themselves?</p>
<p>In <code>src/app.py</code></p>
<pre><code>import httpx
class Client:
def __init__(self, url):
self.url = url
def post_call(self, payload):
response = httpx.post(
url=self.url,
json=payload,
)
response.raise_for_status()
return response.json()
def get_call(self, id):
response = httpx.get(
url=self.url,
params={"id": id},
)
response.raise_for_status()
return response.json()
</code></pre>
<p>In <code>test/test.py</code></p>
<pre><code>from unittest.mock import patch
import httpx
import pytest
from src import app
@pytest.mark.anyio
@patch("app.get_call",return_value=httpx.Response(200, json ={"status":"passed"}))
def test_get(mocker):
cl = Client("test-url")
result = cl.get_call("test")
assert result.json() == {"status":"passed"}
@pytest.mark.anyio
@patch("app.get_post", return_value=httpx.Response(200,json={"status":"passed"}),)
def test_post(mocker):
cl = Client("test-url")
result = cl.post_call({"data":"test"})
assert result.json() == {"status":"passed"}
</code></pre>
<p>When I tried to patch the httpx call, I got the error:
<code>ModuleNotFoundError: No module named 'app.Client'; 'app' is not a package</code></p>
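<p>Leaving the <code>ModuleNotFoundError</code> aside (the patch target must be an importable dotted path, e.g. <code>"src.app.httpx.get"</code> given the <code>from src import app</code> layout), the general rule is to patch the name where it is <em>looked up</em>, not where it is defined. Below is a stdlib-only sketch of that rule; the module and function names are hypothetical stand-ins for <code>httpx</code> and <code>src/app.py</code>:</p>

```python
import sys
import types
from unittest.mock import patch

# "transport" plays the role of httpx; "app" plays the role of src/app.py.
transport = types.ModuleType("transport")
transport.get = lambda url, params=None: {"status": "real"}

app = types.ModuleType("app")
app.transport = transport
app.get_call = lambda url, id: app.transport.get(url, params={"id": id})
sys.modules["app"] = app

# Patch the attribute through the module that looks it up at call time.
with patch("app.transport.get", return_value={"status": "passed"}):
    result = app.get_call("test-url", 1)
```

<p>With the real code the equivalent target would be <code>patch("src.app.httpx.get", ...)</code>, and the mock's return value should behave like an <code>httpx.Response</code> so that <code>raise_for_status()</code> and <code>.json()</code> still work.</p>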
|
<python><unit-testing><pytest><python-unittest><python-unittest.mock>
|
2023-02-24 02:17:46
| 1
| 1,258
|
Amogh Mishra
|
75,551,991
| 2,954,547
|
Re-number disjoint sections of an array, by order of appearance
|
<p>Consider an array of contiguous "sections":</p>
<pre class="lang-py prettyprint-override"><code>x = np.asarray([
1, 1, 1, 1,
9, 9, 9,
3, 3, 3, 3, 3,
5, 5, 5,
])
</code></pre>
<p>I don't care about the actual values in the array. I only care that they demarcate disjoint sections of the array. I would like to renumber them so that the first section is all <code>0</code>, the second section is all <code>1</code>, and so on:</p>
<pre class="lang-py prettyprint-override"><code>desired = np.asarray([
0, 0, 0, 0,
1, 1, 1,
2, 2, 2, 2, 2,
3, 3, 3,
])
</code></pre>
<p>What is an elegant way to perform this operation? I don't expect there to be a single best answer, but I think this question could provide interesting opportunities to show off applications of various Numpy and other Python features.</p>
<p>Assume for the sake of this question that the array is 1-dimensional and non-empty.</p>
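<p>One idiomatic NumPy approach, offered as a sketch rather than the definitive answer the question invites: flag where the value changes and cumulative-sum the flags.</p>

```python
import numpy as np

x = np.asarray([1, 1, 1, 1, 9, 9, 9, 3, 3, 3, 3, 3, 5, 5, 5])
# np.diff(x) != 0 is True exactly at section boundaries; the cumulative sum
# of those booleans is the 0-based index of the section each element is in.
labels = np.concatenate(([0], np.cumsum(np.diff(x) != 0)))
```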
|
<python><numpy>
|
2023-02-24 02:00:32
| 3
| 14,083
|
shadowtalker
|
75,551,933
| 11,116,696
|
how to write a pyomo optimization to select optimal volume of renewables?
|
<p><strong>Background</strong></p>
<p>I am trying to write a pyomo optimization which takes in a customer's electricity load and the generation data of several renewable projects, then optimally solves for the lowest cost selection of renewable projects to minimize electricity consumption, subject to a few constraints.</p>
<p><strong>What I've tried</strong></p>
<p>Using the pyomo Read the Docs pages and Stack Overflow, I've written my first attempt (below), but I am having two issues.</p>
<p><strong>Problem</strong></p>
<ol>
<li>ERROR: Rule failed for Expression 'd_spill_var' with index 0: PyomoException:
Cannot convert non-constant Pyomo expression</li>
</ol>
<p>I think this is because I am trying to return a max(expr, 0) value for one of my dependent Expressions. However, even if I change this, I still get issue 2 below:</p>
<ol start="2">
<li>RuntimeError: Cannot write legal LP file. Objective 'objective' has nonlinear terms that are not quadratic.</li>
</ol>
<p><strong>Help Requested</strong></p>
<p>Could someone please point me in the right direction to solving the above two issues? Any help would be greatly appreciated!</p>
<p><strong>Code OLD v1.0</strong></p>
<pre><code>import os
import pandas as pd
from pyomo.environ import *
import datetime
def model_to_df(model, first_period, last_period):
# Need to increase the first & last hour by 1 because of pyomo indexing
periods = range(model.T[first_period + 1], model.T[last_period + 1] + 1)
spot = [value(model.spot[i]) for i in periods]
load = [value(model.load[i]) for i in periods]
slr1 = [value(model.slr1_size[i]) for i in periods]
slr2 = [value(model.slr2_size[i]) for i in periods]
slr3 = [value(model.slr3_size[i]) for i in periods]
wnd1 = [value(model.wnd1_size[i]) for i in periods]
wnd2 = [value(model.wnd2_size[i]) for i in periods]
wnd3 = [value(model.wnd3_size[i]) for i in periods]
d_slrgen_var = [value(model.d_slrgen_var[i]) for i in periods]
d_wndgen_var = [value(model.d_wndgen_var[i]) for i in periods]
d_spill_var = [value(model.d_spill_var[i]) for i in periods]
d_selfcons_var = [value(model.d_selfcons_var[i]) for i in periods]
df_dict = {
'Period': periods,
'spot': spot,
'load': load,
'slr1': slr1,
'slr2': slr2,
'slr3': slr3,
'wnd1': wnd1,
'wnd2': wnd2,
'wnd3': wnd3,
'd_slrgen_var': d_slrgen_var,
'd_wndgen_var': d_wndgen_var,
'd_spill_var': d_spill_var,
'd_selfcons_var': d_selfcons_var
}
df = pd.DataFrame(df_dict)
return df
LOCATION = r"C:\cbc-win64"
os.environ["PATH"] = LOCATION + ";" + os.environ["PATH"]
df = pd.DataFrame({
'hour': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
'load': [100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100],
'spot': [65.4, 62.7, 60.9, 60.3, 61.8, 64.5, 65.9, 57.9, 39.7, 28.3, 20.9, 16.3, 18.1, 23.9, 32.3, 43.2, 59.3, 76.3, 80.5, 72.5, 73.1, 69.0, 67.9, 67.7],
'slr1': [0.00, 0.00, 0.00, 0.00, 0.00, 0.04, 0.20, 0.44, 0.60, 0.69, 0.71, 0.99, 1.00, 0.66, 0.75, 0.63, 0.52, 0.34, 0.14, 0.02, 0.00, 0.00, 0.00, 0.00],
'slr2': [0.00, 0.00, 0.00, 0.00, 0.03, 0.19, 0.44, 0.68, 1.00, 0.83, 0.90, 0.88, 0.98, 0.94, 0.83, 0.70, 0.36, 0.11, 0.02, 0.00, 0.00, 0.00, 0.00, 0.00],
'slr3': [0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.03, 0.17, 0.39, 0.87, 0.91, 1.00, 0.89, 0.71, 0.71, 0.85, 0.63, 0.52, 0.32, 0.12, 0.02, 0.00, 0.00, 0.00],
'wnd1': [1.00, 0.72, 0.74, 0.94, 0.69, 0.90, 0.92, 0.76, 0.51, 0.35, 0.31, 0.34, 0.37, 0.28, 0.35, 0.40, 0.39, 0.32, 0.42, 0.48, 0.74, 0.63, 0.80, 0.97],
'wnd2': [0.95, 0.67, 0.82, 0.48, 0.51, 0.41, 0.33, 0.42, 0.34, 0.30, 0.39, 0.29, 0.34, 0.55, 0.67, 0.78, 0.84, 0.73, 0.77, 0.89, 0.76, 0.97, 1.00, 0.91],
'wnd3': [0.32, 0.35, 0.38, 0.28, 0.33, 0.38, 0.41, 0.38, 0.51, 0.65, 0.54, 0.88, 0.93, 0.89, 0.90, 1.00, 0.90, 0.76, 0.76, 0.92, 0.71, 0.56, 0.52, 0.40]
})
first_model_period = df['hour'].iloc[0]
last_model_period = df['hour'].iloc[-1]
# **********************
# Build Model
# **********************
model = ConcreteModel()
# Fixed Parameters
model.T = Set(initialize=df.index.tolist(), doc='hourly intervals', ordered=True)
model.load_v = Param(model.T, initialize=df.load, doc='customers load', within=Any)
model.spot_v = Param(model.T, initialize=df.spot, doc='spot price for each interval', within=Any)
model.slr1 = Param(model.T, initialize=df.slr1, doc='1MW output solar farm 1', within=Any)
model.slr2 = Param(model.T, initialize=df.slr2, doc='1MW output solar farm 2', within=Any)
model.slr3 = Param(model.T, initialize=df.slr3, doc='1MW output solar farm 3', within=Any)
model.wnd1 = Param(model.T, initialize=df.wnd1, doc='1MW output wind farm 1', within=Any)
model.wnd2 = Param(model.T, initialize=df.wnd2, doc='1MW output wind farm 2', within=Any)
model.wnd3 = Param(model.T, initialize=df.wnd3, doc='1MW output wind farm 3', within=Any)
# Variable Parameters
model.slr1_flag = Var(model.T, doc='slr 1 on / off', within=Binary, initialize=0)
model.slr2_flag = Var(model.T, doc='slr 2 on / off', within=Binary, initialize=0)
model.slr3_flag = Var(model.T, doc='slr 3 on / off', within=Binary, initialize=0)
model.wnd1_flag = Var(model.T, doc='wnd 1 on / off', within=Binary, initialize=0)
model.wnd2_flag = Var(model.T, doc='wnd 2 on / off', within=Binary, initialize=0)
model.wnd3_flag = Var(model.T, doc='wnd 3 on / off', within=Binary, initialize=0)
model.slr1_size = Var(model.T, bounds=(0, 1500), doc='selected size in MWs', initialize=0, within=NonNegativeIntegers)
model.slr2_size = Var(model.T, bounds=(0, 1500), doc='selected size in MWs', initialize=0, within=NonNegativeIntegers)
model.slr3_size = Var(model.T, bounds=(0, 1500), doc='selected size in MWs', initialize=0, within=NonNegativeIntegers)
model.wnd1_size = Var(model.T, bounds=(0, 1500), doc='selected size in MWs', initialize=0, within=NonNegativeIntegers)
model.wnd2_size = Var(model.T, bounds=(0, 1500), doc='selected size in MWs', initialize=0, within=NonNegativeIntegers)
model.wnd3_size = Var(model.T, bounds=(0, 1500), doc='selected size in MWs', initialize=0, within=NonNegativeIntegers)
model.total_gen = Var(model.T, initialize=0, within=NonNegativeReals)
# Dependent Expression Parameters
def dependent_solar_gen(model, t):
"Total selected solar Generation"
return (model.slr1[t] * model.slr1_flag[t] * model.slr1_size[t]) + \
(model.slr2[t] * model.slr2_flag[t] * model.slr2_size[t]) + \
(model.slr3[t] * model.slr3_flag[t] * model.slr3_size[t])
model.d_slrgen_var = Expression(model.T, rule=dependent_solar_gen)
def dependent_wind_gen(model, t):
"Total selected wind Generation"
return (model.wnd1[t] * model.wnd1_flag[t] * model.wnd1_size[t]) + \
(model.wnd2[t] * model.wnd2_flag[t] * model.wnd2_size[t]) + \
(model.wnd3[t] * model.wnd3_flag[t] * model.wnd3_size[t])
model.d_wndgen_var = Expression(model.T, rule=dependent_wind_gen)
def dependent_spill(model, t):
"Volume of energy not consumed by customer (spilled into grid)"
expr = (model.d_slrgen_var[t] + model.d_wndgen_var[t]) - model.load_v[t]
return max(0, expr)
model.d_spill_var = Expression(model.T, rule=dependent_spill)
def dependent_self_cons(model, t):
"Volume of energy consumed by customer"
expr = (model.d_slrgen_var[t] + model.d_wndgen_var[t]) - model.d_spill_var[t]
return expr
model.d_selfcons_var = Expression(model.T, rule=dependent_self_cons)
# -----------------------
# Constraints
# -----------------------
def min_spill(model, t):
"Limit spill renewables to 10% of total"
return model.d_spill_var[t] <= 0.1 * (model.d_slrgen_var[t] + model.d_wndgen_var[t])
model.min_spill_c = Constraint(model.T, rule=min_spill)
def load_match(model, t):
"contract enough renewables to offset 100% load, even if its not time matched"
return (model.d_slrgen_var[t] + model.d_wndgen_var[t]) >= model.load_v[t]
model.load_match_c = Constraint(model.T, rule=load_match)
# **********************
# Define the income, expenses, and profit
# **********************
green_income = sum(model.spot_v[t] * model.d_spill_var[t] for t in model.T)
black_cost = sum(model.spot_v[t] * (model.load_v[t] - model.d_selfcons_var[t]) for t in model.T)
slr_cost = sum(40 * model.d_slrgen_var[t] for t in model.T)
wnd_cost = sum(70 * model.d_wndgen_var[t] for t in model.T)
profit = green_income - black_cost - slr_cost - wnd_cost
model.objective = Objective(expr=profit, sense=maximize)
# Solve the model
# solver = SolverFactory('glpk')
solver = SolverFactory('cbc')
solver.solve(model, timelimit=10)
results_df = model_to_df(model, first_period=first_model_period, last_period=last_model_period)
print(results_df)
</code></pre>
<p><strong>Solved</strong></p>
<p>Thanks @airliquid, your advice helped me solve the issue.
What I had to do was linearize the max constraint, redefine the dependent expressions as constraints, remove some redundant variables, and change the last two constraints to summations.</p>
<p>It's not the prettiest answer, but it works!</p>
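<p>For reference, this is the standard big-M linearization of <code>s = max(g - L, 0)</code> that the updated constraints implement (here <code>g</code> is total generation, <code>L</code> the load, <code>y</code> a binary variable, and <code>M</code> any valid upper bound on <code>|g - L|</code>; the symbol names are illustrative, not taken from the code):</p>

```
s >= g - L
s >= 0
s <= g - L + M * (1 - y)
s <= M * y
```

<p>When y = 1, the last two constraints force s = g - L; when y = 0, they force s = 0 (which also requires g - L <= 0), so together with the first two inequalities s equals max(g - L, 0).</p>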
<p><strong>UPDATED CODE (V2)</strong></p>
<pre><code>import os
import pandas as pd
from pyomo.environ import *
import datetime
def model_to_df(model, first_period, last_period):
# Need to increase the first & last hour by 1 because of pyomo indexing
periods = range(model.T[first_period + 1], model.T[last_period + 1] + 1)
spot = [value(model.spot_v[i]) for i in periods]
load = [value(model.load_v[i]) for i in periods]
slr1 = [value(model.slr1_size[i]) for i in periods]
slr2 = [value(model.slr2_size[i]) for i in periods]
slr3 = [value(model.slr3_size[i]) for i in periods]
wnd1 = [value(model.wnd1_size[i]) for i in periods]
wnd2 = [value(model.wnd2_size[i]) for i in periods]
wnd3 = [value(model.wnd3_size[i]) for i in periods]
slr1_gen = [value(model.slr1_gen[i]) for i in periods]
slr2_gen = [value(model.slr2_gen[i]) for i in periods]
slr3_gen = [value(model.slr3_gen[i]) for i in periods]
wnd1_gen = [value(model.wnd1_gen[i]) for i in periods]
wnd2_gen = [value(model.wnd2_gen[i]) for i in periods]
wnd3_gen = [value(model.wnd3_gen[i]) for i in periods]
total_gen = [value(model.total_gen[i]) for i in periods]
spill_gen = [value(model.spill_gen[i]) for i in periods]
spill_gen_sum = [value(model.spill_gen_sum[i]) for i in periods]
spill_binary = [value(model.spill_binary[i]) for i in periods]
self_cons = [value(model.self_cons[i]) for i in periods]
df_dict = {
'Period': periods,
'spot': spot,
'load': load,
'slr1': slr1,
'slr2': slr2,
'slr3': slr3,
'wnd1': wnd1,
'wnd2': wnd2,
'wnd3': wnd3,
'slr1_gen': slr1_gen,
'slr2_gen': slr2_gen,
'slr3_gen': slr3_gen,
'wnd1_gen': wnd1_gen,
'wnd2_gen': wnd2_gen,
'wnd3_gen': wnd3_gen,
'total_gen': total_gen,
'spill_gen': spill_gen,
'self_cons': self_cons,
'spill_gen_sum': spill_gen_sum,
'spill_binary': spill_binary
}
df = pd.DataFrame(df_dict)
return df
LOCATION = r"C:\cbc-win64"
os.environ["PATH"] = LOCATION + ";" + os.environ["PATH"]
df = pd.DataFrame({
'hour': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
'load': [100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100, 100],
'spot': [65.4, 62.7, 60.9, 60.3, 61.8, 64.5, 65.9, 57.9, 39.7, 28.3, 20.9, 16.3, 18.1, 23.9, 32.3, 43.2, 59.3, 76.3, 80.5, 72.5, 73.1, 69.0, 67.9, 67.7],
'slr1': [0.00, 0.00, 0.00, 0.00, 0.00, 0.04, 0.20, 0.44, 0.60, 0.69, 0.71, 0.99, 1.00, 0.66, 0.75, 0.63, 0.52, 0.34, 0.14, 0.02, 0.00, 0.00, 0.00, 0.00],
'slr2': [0.00, 0.00, 0.00, 0.00, 0.03, 0.19, 0.44, 0.68, 1.00, 0.83, 0.90, 0.88, 0.98, 0.94, 0.83, 0.70, 0.36, 0.11, 0.02, 0.00, 0.00, 0.00, 0.00, 0.00],
'slr3': [0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.03, 0.17, 0.39, 0.87, 0.91, 1.00, 0.89, 0.71, 0.71, 0.85, 0.63, 0.52, 0.32, 0.12, 0.02, 0.00, 0.00, 0.00],
'wnd1': [1.00, 0.72, 0.74, 0.94, 0.69, 0.90, 0.92, 0.76, 0.51, 0.35, 0.31, 0.34, 0.37, 0.28, 0.35, 0.40, 0.39, 0.32, 0.42, 0.48, 0.74, 0.63, 0.80, 0.97],
'wnd2': [0.95, 0.67, 0.82, 0.48, 0.51, 0.41, 0.33, 0.42, 0.34, 0.30, 0.39, 0.29, 0.34, 0.55, 0.67, 0.78, 0.84, 0.73, 0.77, 0.89, 0.76, 0.97, 1.00, 0.91],
'wnd3': [0.32, 0.35, 0.38, 0.28, 0.33, 0.38, 0.41, 0.38, 0.51, 0.65, 0.54, 0.88, 0.93, 0.89, 0.90, 1.00, 0.90, 0.76, 0.76, 0.92, 0.71, 0.56, 0.52, 0.40]
})
# df.to_csv('example.csv', index=False)
first_model_period = df['hour'].iloc[0]
last_model_period = df['hour'].iloc[-1]
# **********************
# Build Model
# **********************
model = ConcreteModel()
# Fixed Parameters
model.T = Set(initialize=df.index.tolist(), doc='hourly intervals', ordered=True)
model.load_v = Param(model.T, initialize=df.load, doc='customers load', within=Any)
model.spot_v = Param(model.T, initialize=df.spot, doc='spot price for each interval', within=Any)
model.slr1 = Param(model.T, initialize=df.slr1, doc='1MW output solar farm 1', within=Any)
model.slr2 = Param(model.T, initialize=df.slr2, doc='1MW output solar farm 2', within=Any)
model.slr3 = Param(model.T, initialize=df.slr3, doc='1MW output solar farm 3', within=Any)
model.wnd1 = Param(model.T, initialize=df.wnd1, doc='1MW output wind farm 1', within=Any)
model.wnd2 = Param(model.T, initialize=df.wnd2, doc='1MW output wind farm 2', within=Any)
model.wnd3 = Param(model.T, initialize=df.wnd3, doc='1MW output wind farm 3', within=Any)
# Variable Parameters
model.slr1_size = Var(model.T, bounds=(0, 1000), doc='selected size in MWs', initialize=0, within=NonNegativeIntegers)
model.slr2_size = Var(model.T, bounds=(0, 1000), doc='selected size in MWs', initialize=0, within=NonNegativeIntegers)
model.slr3_size = Var(model.T, bounds=(0, 1000), doc='selected size in MWs', initialize=0, within=NonNegativeIntegers)
model.wnd1_size = Var(model.T, bounds=(0, 1000), doc='selected size in MWs', initialize=0, within=NonNegativeIntegers)
model.wnd2_size = Var(model.T, bounds=(0, 1000), doc='selected size in MWs', initialize=0, within=NonNegativeIntegers)
model.wnd3_size = Var(model.T, bounds=(0, 1000), doc='selected size in MWs', initialize=0, within=NonNegativeIntegers)
model.slr1_gen = Var(model.T, initialize=0, within=NonNegativeReals)
model.slr2_gen = Var(model.T, initialize=0, within=NonNegativeReals)
model.slr3_gen = Var(model.T, initialize=0, within=NonNegativeReals)
model.wnd1_gen = Var(model.T, initialize=0, within=NonNegativeReals)
model.wnd2_gen = Var(model.T, initialize=0, within=NonNegativeReals)
model.wnd3_gen = Var(model.T, initialize=0, within=NonNegativeReals)
model.total_gen = Var(model.T, initialize=0, within=NonNegativeReals)
model.spill_gen_sum = Var(model.T, initialize=0, within=Reals)
model.spill_binary = Var(model.T, doc='to get max', within=Binary, initialize=0)
model.spill_gen = Var(model.T, initialize=0, within=NonNegativeReals)
model.self_cons = Var(model.T, initialize=0, within=NonNegativeReals)
# -----------------------
# Constraints
# -----------------------
# SIZE CONSTRAINTS
def slr1_size(model, t):
"slr1 size"
return model.slr1_size[t] == model.slr1_size[1]
model.slr_size1_c = Constraint(model.T, rule=slr1_size)
def slr2_size(model, t):
"slr2 size"
return model.slr2_size[t] == model.slr2_size[1]
model.slr_size2_c = Constraint(model.T, rule=slr2_size)
def slr3_size(model, t):
"slr3 size"
return model.slr3_size[t] == model.slr3_size[1]
model.slr_size3_c = Constraint(model.T, rule=slr3_size)
def wnd1_size(model, t):
"wnd1 size"
return model.wnd1_size[t] == model.wnd1_size[1]
model.wnd_size1_c = Constraint(model.T, rule=wnd1_size)
def wnd2_size(model, t):
"wnd2 size"
return model.wnd2_size[t] == model.wnd2_size[1]
model.wnd_size2_c = Constraint(model.T, rule=wnd2_size)
def wnd3_size(model, t):
"wnd3 size"
return model.wnd3_size[t] == model.wnd3_size[1]
model.wnd_size3_c = Constraint(model.T, rule=wnd3_size)
# GENERATION EXPRESSIONS / CONSTRAINTS
def slr1_gen(model, t):
"solar 1 generation"
return model.slr1_gen[t] == model.slr1[t] * model.slr1_size[t]
model.slr_gen1_c = Constraint(model.T, rule=slr1_gen)
def slr2_gen(model, t):
"solar 2 generation"
return model.slr2_gen[t] == model.slr2[t] * model.slr2_size[t]
model.slr_gen2_c = Constraint(model.T, rule=slr2_gen)
def slr3_gen(model, t):
"solar 3 generation"
return model.slr3_gen[t] == model.slr3[t] * model.slr3_size[t]
model.slr_gen3_c = Constraint(model.T, rule=slr3_gen)
def wnd1_gen(model, t):
"wind 1 generation"
return model.wnd1_gen[t] == model.wnd1[t] * model.wnd1_size[t]
model.wnd_gen1_c = Constraint(model.T, rule=wnd1_gen)
def wnd2_gen(model, t):
"wind 2 generation"
return model.wnd2_gen[t] == model.wnd2[t] * model.wnd2_size[t]
model.wnd_gen2_c = Constraint(model.T, rule=wnd2_gen)
def wnd3_gen(model, t):
"wind 3 generation"
return model.wnd3_gen[t] == model.wnd3[t] * model.wnd3_size[t]
model.wnd_gen3_c = Constraint(model.T, rule=wnd3_gen)
# TOTAL GENERATION
def total_gen(model, t):
"sum of generation"
return model.total_gen[t] == model.slr1_gen[t] + model.slr2_gen[t] + model.slr3_gen[t] + \
model.wnd1_gen[t] + model.wnd2_gen[t] + model.wnd3_gen[t]
model.total_gen_c = Constraint(model.T, rule=total_gen)
# SPILL GENERATION
def spill_gen_sum(model, t):
"X >= x1"
return model.spill_gen_sum[t] == model.total_gen[t] - model.load_v[t]
model.spill_gen_sum_c = Constraint(model.T, rule=spill_gen_sum)
def spill_check_one(model, t):
"X >= x1"
return model.spill_gen[t] >= model.spill_gen_sum[t]
model.spill_check_one_c = Constraint(model.T, rule=spill_check_one)
def spill_check_two(model, t):
"X >= x2"
return model.spill_gen[t] >= 0
model.spill_check_two_c = Constraint(model.T, rule=spill_check_two)
def spill_binary_one(model, t):
"X <= x1 + M(1-y)"
return model.spill_gen[t] <= model.spill_gen_sum[t] + 9999999*(1-model.spill_binary[t])
model.spill_binary_one_c = Constraint(model.T, rule=spill_binary_one)
def spill_binary_two(model, t):
"X <= x2 + My"
return model.spill_gen[t] <= 9999999*model.spill_binary[t]
model.spill_binary_two_c = Constraint(model.T, rule=spill_binary_two)
# SELF CONS
def self_cons(model, t):
"X <= x2 + My"
return model.self_cons[t] == model.total_gen[t] - model.spill_gen[t]
model.self_cons_c = Constraint(model.T, rule=self_cons)
# ACTUAL CONSTRAINTS
def min_spill(model, t):
"Limit spill renewables to 10% of total"
return sum(model.spill_gen[t] for t in model.T) <= 0.2 * sum(model.total_gen[t] for t in model.T)
model.min_spill_c = Constraint(model.T, rule=min_spill)
def load_match(model, t):
"contract enough renewables to offset 100% load, even if its not time matched"
return sum(model.total_gen[t] for t in model.T) >= sum(model.load_v[t] for t in model.T)
model.load_match_c = Constraint(model.T, rule=load_match)
# **********************
# Define the battery income, expenses, and profit
# **********************
green_income = sum(model.spot_v[t] * model.spill_gen[t] for t in model.T)
black_cost = sum(model.spot_v[t] * (model.load_v[t] - model.self_cons[t]) for t in model.T)
slr_cost = sum(40 * (model.slr1_gen[t] + model.slr2_gen[t] + model.slr3_gen[t]) for t in model.T)
wnd_cost = sum(70 * (model.wnd1_gen[t] + model.wnd2_gen[t] + model.wnd3_gen[t]) for t in model.T)
cost = black_cost + slr_cost + wnd_cost - green_income
model.objective = Objective(expr=cost, sense=minimize)
# Solve the model
# solver = SolverFactory('glpk')
solver = SolverFactory('cbc')
solver.solve(model)
# , timelimit=10
results_df = model_to_df(model, first_period=first_model_period, last_period=last_model_period)
print(results_df)
results_df.to_csv('temp.csv', index=False)
</code></pre>
|
<python><optimization><pyomo><nonlinear-optimization><linear-optimization>
|
2023-02-24 01:48:12
| 1
| 601
|
Bobby Heyer
|
75,551,873
| 27,657
|
tkinter scrollable listbox refresh
|
<p>I have the following tkinter UI that I am building, which on load does not immediately display the listbox data, and I'm not sure why. Instead, on load I get the scrollbar and an empty listbox (the button shows up fine too). As soon as I interact with the window at all, the contents of the listbox show up:</p>
<pre><code>from tkinter import *
gui = Tk()
gui.eval('tk::PlaceWindow . center')
# gui.geometry("500x200")
top_frame = Frame(gui)
top_frame.pack(side=TOP)
bot_frame = Frame(gui)
bot_frame.pack(side=BOTTOM)
scrollbar = Scrollbar(top_frame)
scrollbar.pack(side=LEFT, fill=Y)
lb = Listbox(top_frame)
lb.pack()
def onselect(evt):
w = evt.widget
index = int(w.curselection()[0])
value = w.get(index)
print(index, value)
lb.bind('<<ListboxSelect>>', onselect)
lb.insert(0, *range(100))
scrollbar.config(command=lb.yview)
lb.config(yscrollcommand=scrollbar.set)
quit_button = Button(bot_frame, text="Quit", command=gui.destroy)
quit_button.pack()
mainloop()
</code></pre>
<p>It seems like there is some ordering in which the pack calls need to occur that I can't seem to get right.</p>
<p>How can I get the items to show up on window load while keeping the scrollbar on the left?</p>
<p>EDIT 1: system info:</p>
<pre><code>platform.platform(): macOS-12.6.2-x86_64-i386-64bit
platform.python_version(): 3.10.6
tk.TkVersion: 8.6
</code></pre>
|
<python><user-interface><tkinter>
|
2023-02-24 01:30:05
| 1
| 17,799
|
javamonkey79
|
75,551,851
| 1,572,215
|
Different output when comparing dates with timezone when using psycopg2
|
<p>A simple query</p>
<pre><code>SELECT current_date = current_date AT TIME ZONE 'Asia/Kolkata'
</code></pre>
<p>returns <code>TRUE</code>,</p>
<p>but if I run the same script through python psycopg2 returns <code>FALSE</code>.</p>
<pre><code>c = connection.cursor()
c.execute("SELECT current_date = current_date AT TIME ZONE 'Asia/Kolkata'")
c.fetchone()
(False,)
</code></pre>
<p>As you can imagine, I went mad debugging my software.</p>
<p>and then I did this</p>
<pre><code>c.execute("SELECT current_date = (current_date AT TIME ZONE 'Asia/Kolkata')::date")
c.fetchone()
(True,)
</code></pre>
<p>Voila <code>TRUE</code></p>
<p>So when querying through psycopg2 we must cast the result of <code>AT TIME ZONE 'Asia/Kolkata'</code> to the <code>date</code> type. The behaviour is consistent when using <code>AT TIME ZONE 'UTC'</code>, i.e. the output is the same in psql and psycopg2.</p>
<p>Is this a flaw in psycopg2 or is it something else which I am missing.</p>
|
<python><sql><postgresql><psycopg2>
|
2023-02-24 01:25:28
| 0
| 1,026
|
Shh
|
75,551,818
| 2,073,640
|
Cannot get pinecone upsert to work in python
|
<p>No matter what I am trying on my flask server, I cannot get an upsert to work. I even stopped using the document I was trying with, and have switched over to just the sample from pinecone. That works on the website itself, but not when I try via the sdk. Any help is appreciated, I am going nuts here.</p>
<pre><code>sample_doc = {
"vectors": [
{
"id": "item_0",
"metadata": {
"category": "sports",
"colors": [
"blue",
"red",
"green"
],
"time_stamp": 0
},
"values": [
0.07446312652229216,
0.8866284618893006,
0.5244262265711986
]
},
],
"namespace": "example_namespace"
}
# Upsert the vector into the Pinecone index
index = pinecone.Index(index_name)
index.upsert(vectors=[sample_doc])
ValueError: Invalid vector value passed: cannot interpret type <class 'list'>
</code></pre>
<p>Note that I have tried every possible variation on this; I've tried <code>index.upsert(sample_doc)</code>, variations of the insert object, etc.</p>
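<p>A possible reshaping, assuming the Python client's <code>upsert</code> wants the vector list directly rather than the REST-style request body (the exact signature depends on your <code>pinecone-client</code> version):</p>

```python
# Hypothetical reshaping of the REST-style body into (id, values, metadata)
# tuples plus a separate namespace argument, which is the shape the Python
# client's upsert accepts.
sample_doc = {
    "vectors": [
        {
            "id": "item_0",
            "metadata": {"category": "sports"},
            "values": [0.074, 0.886, 0.524],
        },
    ],
    "namespace": "example_namespace",
}

vectors = [(v["id"], v["values"], v["metadata"]) for v in sample_doc["vectors"]]
print(vectors[0][0])  # item_0

# index.upsert(vectors=vectors, namespace=sample_doc["namespace"])
```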
|
<python><machine-learning>
|
2023-02-24 01:18:11
| 1
| 358
|
SimonStern
|
75,551,763
| 14,722,297
|
Pyspark extract all that comes after the second period
|
<p>I am looking to create a new column that contains all characters after the second last occurrence of the '.' character.</p>
<p>If there are less that two '.' characters, then keep the entire string.</p>
<p>I am looking to do this in spark 2.4.8 without using a UDF. Any ideas?</p>
<pre><code>data = [
('google.com',),
('asdasdasd.google.com',),
('a.d.a.google.com',),
('www.google.com',)
]
df = sc.parallelize(data).toDF(['host'])
df.withColumn('domain', functions.regexp_extract(df['host'], r'\b\w+\.\w+\b', 0)).show()
+--------------------+----------------+
| host| domain|
+--------------------+----------------+
| google.com| google.com|
|asdasdasd.google.com|asdasdasd.google|
| a.d.a.google.com| a.d|
| www.google.com| www.google|
+--------------------+----------------+
</code></pre>
<p>The desired result is the following.</p>
<pre><code>+--------------------+----------------+
| host| domain|
+--------------------+----------------+
| google.com| google.com|
|asdasdasd.google.com| google.com|
| a.d.a.google.com| google.com|
| www.google.com| google.com|
+--------------------+----------------+
</code></pre>
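<p>One pattern that appears to satisfy all four cases is to anchor at the end of the string: the last two dot-free runs joined by a dot. Sketched with Python's <code>re</code> below; the same pattern should drop into <code>regexp_extract</code> (note that <code>regexp_extract</code> returns an empty string on no match, so a <code>when</code>/<code>otherwise</code> fallback to <code>host</code> would cover dot-less values):</p>

```python
import re

# [^.]+\.[^.]+$ : a dot-free run, a literal dot, a dot-free run, end of string.
# Hosts with fewer than two dots either match whole ('google.com') or not at
# all ('localhost'), in which case we keep the entire string.
PATTERN = r'[^.]+\.[^.]+$'

def domain(host):
    m = re.search(PATTERN, host)
    return m.group(0) if m else host

print(domain('a.d.a.google.com'))  # google.com
```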
|
<python><regex><string><pyspark><substring>
|
2023-02-24 01:04:07
| 3
| 1,895
|
BoomBoxBoy
|
75,551,750
| 9,105,621
|
how to encode a dataframe table to json
|
<p>I have a flask function in which I create a plotly chart and then pass it through the plotly JSON encoder.</p>
<p>Is there something similar for datatables library? I want to pass both the fig and a table object into my json for a javascript function to parse later.</p>
<p>Here is my current code:</p>
<pre><code>def drawtieringchart(service='All'):
fig = create_tiering_scatter(service)
graphJSON = json.dumps(fig, cls=plotly.utils.PlotlyJSONEncoder)
generallogger.info(graphJSON)
return graphJSON
</code></pre>
<p>I want something like this:</p>
<pre><code>def drawtieringchart(service='All'):
fig = create_tiering_scatter(service)
graphJSON = json.dumps(fig, cls=plotly.utils.PlotlyJSONEncoder)
df = somedata
dfJson = json.dumps(df, cls=ENCODER HERE?
final_JSON = COMBINE BOTH JSONS
return final_JSON
</code></pre>
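<p>For what it's worth, pandas can serialise the table itself via <code>DataFrame.to_json</code>, so one option is to parse both JSON strings back and dump a single combined object (sketch with stand-in data, since <code>somedata</code> is not shown):</p>

```python
import json

import pandas as pd

df = pd.DataFrame({'service': ['A', 'B'], 'tier': [1, 2]})
fig_json = '{"data": [], "layout": {}}'  # stand-in for the Plotly-encoded figure

# Parse each piece back to Python objects, then dump once so the result
# is a single valid JSON document with two top-level keys.
combined = json.dumps({
    'figure': json.loads(fig_json),
    'table': json.loads(df.to_json(orient='records')),
})
print(combined)
```

<p>The JavaScript side can then read <code>parsed.figure</code> and <code>parsed.table</code> separately.</p>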
|
<python><datatables>
|
2023-02-24 01:00:58
| 1
| 556
|
Mike Mann
|
75,551,671
| 1,120,622
|
Reference Python module docstring from within a class
|
<p>Consider the following code:</p>
<pre><code>"""Module documentation."""
import argparse
class HandleArgs:
    """Class documentation"""
    def __call__(self):
        """Method documentation"""
        parser = argparse.ArgumentParser(description=__doc__)
</code></pre>
<p>This code will try to use the docstring for the method, not the module. How do I access the module docstring from within a method within the class?</p>
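<p>One unambiguous way, regardless of which scope the bare name <code>__doc__</code> resolves to, is to go through <code>sys.modules</code> (a sketch, with the syntax of the snippet above fixed up):</p>

```python
"""Module documentation."""
import argparse
import sys


class HandleArgs:
    """Class documentation."""

    def __call__(self):
        """Method documentation."""
        # sys.modules[__name__] is the module object this class lives in,
        # so its __doc__ is always the module docstring, never the method's.
        module_doc = sys.modules[__name__].__doc__
        return argparse.ArgumentParser(description=module_doc)


parser = HandleArgs()()
print(parser.description)
```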
|
<python><docstring>
|
2023-02-24 00:43:01
| 1
| 2,927
|
Jonathan
|
75,551,570
| 816,566
|
Python: How do I force interpretation of a value as a single tuple of one string, not a collection of characters?
|
<p>I'm using Python 3.10.8.</p>
<p>I have a function that splits regex delimited strings into a tuple of arbitrary length. I want to count the number of sub-strings returned from my function. But when the source string does not have the delimiter, and my function correctly returns a tuple with a single string, the built-in len() returns the length of the string. How can I know/force that the return value is a single string, and not a collection of characters?
This test function does not work as desired:</p>
<pre><code>def test_da_tuple(subject_string, expected_length):
da_tuple = MangleSplitter.tuple_of(subject_string)
pprint.pprint(da_tuple)
tuple_len = len(da_tuple)
assert tuple_len == expected_length, ("\"%s\" split into %d not %d" % (subject_string, tuple_len, expected_length))
</code></pre>
<p>And some samples</p>
<pre><code>MANGLED_STR_00 = "Jack L. Chalker - Demons of the Dancing GodsUC - #2DG"
CRAZYNESS = "A - B - C - D - F - F - G - H - I"
MANGLED_STR_07 = "Book Over"
</code></pre>
<p>I want my test_da_tuple() to verify 3 for MANGLED_STR_00, 9 for CRAZYNESS, and 1 for MANGLED_STR_07. Instead I get an assertion error that MANGLED_STR_07 split into 9 not 1.</p>
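<p>Since <code>MangleSplitter</code> isn't shown, here is a sketch assuming the delimiter is <code>" - "</code>: <code>re.split</code> always returns a list, even a one-element list when the delimiter is absent, so wrapping it in <code>tuple()</code> guarantees the caller never sees a bare string whose <code>len()</code> counts characters:</p>

```python
import re


def tuple_of(subject_string):
    # re.split returns a list in every case, so this is always a tuple of
    # substrings: ('Book Over',) has length 1, not 9.
    return tuple(re.split(r'\s-\s', subject_string))


print(len(tuple_of("Book Over")))  # 1
```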
|
<python><tuples>
|
2023-02-24 00:18:43
| 1
| 1,641
|
Charlweed
|
75,551,527
| 12,044,155
|
Why Keras and SKLearn give me different accuracy values?
|
<p>I have created a linear classification model. It takes images as input and classifies them with one of two classes. I want to test the accuracy of my model with a different dataset than the one used for training.</p>
<p>These are the parameters I have used to compile the model:</p>
<pre class="lang-py prettyprint-override"><code>model.compile(
optimizer=keras.optimizers.SGD(learning_rate=0.1),
loss=keras.losses.CategoricalCrossentropy(),
metrics=['accuracy']
)
</code></pre>
<p>After training, I load a different dataset for the "production" tests.</p>
<pre class="lang-py prettyprint-override"><code>dataset = keras.utils.image_dataset_from_directory(
'/content/test-dataset-2',
image_size=(512, 384),
labels='inferred',
label_mode='categorical',
).map(preprocess)
</code></pre>
<p>I calculate the accuracy of the model against the given dataset using the <code>evaluate</code> method:</p>
<pre><code>model.evaluate(dataset)
</code></pre>
<p>It outputs an accuracy of 73,33% and loss of ~56.</p>
<p>However, if I calculate the accuracy through a different method, I get a different result. For example:</p>
<pre class="lang-py prettyprint-override"><code>from tensorflow.keras.utils import to_categorical
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay, classification_report
def get_labels(arr):
return np.argmax(arr, axis=1)
# I am probably doing too much hard work over here...
predictions = to_categorical(np.argmax(model.predict(dataset), axis=1), 2)
y = np.concatenate([y for x, y in dataset], axis=0)
cm = confusion_matrix(get_labels(y), get_labels(predictions))
cm_display = ConfusionMatrixDisplay(cm).plot()
</code></pre>
<p>This outputs the following Confusion Matrix:</p>
<p><a href="https://i.sstatic.net/o6zbJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o6zbJ.png" alt="enter image description here" /></a></p>
<p>Which amounts to 47,33% accuracy, a drastically different result than the one given by <code>model.evaluate</code>.</p>
<p>So, which one is wrong, and why?</p>
<p>On one hand, I'm prone to believe the accuracy given by <code>model.evaluate</code> is too high, I don't think the model is that good.</p>
<p>On the other hand, it's likely that I have made some mistake in the code that one hot encodes the predictions before passing them to the confusion matrix function.</p>
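<p>One frequent cause of exactly this discrepancy (an assumption about your setup, since it depends on dataset options): <code>image_dataset_from_directory</code> shuffles by default on <em>every</em> iteration, so <code>model.predict(dataset)</code> and the later pass that concatenates <code>y</code> see the data in different orders. A stand-in sketch of the misalignment, with no TensorFlow involved:</p>

```python
class ReshufflingDataset:
    """Stand-in for a tf.data pipeline that reorders on every iteration."""

    def __init__(self, pairs):
        self.pairs = pairs
        self.epoch = 0

    def __iter__(self):
        k = self.epoch % len(self.pairs)
        self.epoch += 1
        # deterministic rotation instead of a real shuffle, for illustration
        return iter(self.pairs[k:] + self.pairs[:k])


pairs = [(f'x{i}', f'y{i}') for i in range(4)]
ds = ReshufflingDataset(pairs)

xs = [x for x, _ in ds]  # first pass: what model.predict(dataset) consumes
ys = [y for _, y in ds]  # second pass: a different order entirely
print(list(zip(xs, ys)))
```

<p>If that is the cause, passing <code>shuffle=False</code> to <code>image_dataset_from_directory</code>, or collecting inputs and labels in a single pass before predicting on the collected inputs, should make the two accuracy figures agree.</p>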
|
<python><tensorflow><machine-learning><keras>
|
2023-02-24 00:07:59
| 0
| 2,692
|
Allan Juan
|
75,551,463
| 8,162,603
|
What's the best way to programmatically highlight strings in Jinja?
|
<p>I am working on some code to do quote identification/attribution in articles and I'd like to highlight the identified quotes in the HTML file the code generates.</p>
<p>I have a function which formats a Jinja/HTML file with the article text and metadata:</p>
<pre><code>def render_doc(doc_id):
# load article html into BeautifulSoup
data = dl.load_file(doc_id)
soup = extract_soup(data)
# extract metadata
context = get_metadata(soup)
context['bodytext'] = soup.bodytext
# render and open html file
path = './'
filename = 'ln_template.html'
rendered = Environment(
loader=FileSystemLoader(path)
).get_template(filename).render(context)
file_name = f"{doc_id}.html"
with open(file_name, "w+") as f:
f.write(rendered)
os.system(f"open {file_name}")
</code></pre>
<p>If I have the text of the quotes, what would be the best way to add a <code><mark></code> tag or some css or otherwise highlight the text in the HTML file?</p>
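<p>One approach that avoids fighting Jinja's autoescaping: escape the article text first, wrap each (escaped) quote in <code>&lt;mark&gt;</code>, then render with the <code>| safe</code> filter so the tags survive. A sketch, assuming the quotes occur verbatim in the text:</p>

```python
import html


def highlight(text, quotes):
    """Escape untrusted text, then wrap each identified quote in <mark>."""
    escaped = html.escape(text)
    for quote in quotes:
        escaped_quote = html.escape(quote)  # match against the escaped form
        escaped = escaped.replace(escaped_quote,
                                  f'<mark>{escaped_quote}</mark>')
    return escaped


print(highlight('He said "hi" today', ['"hi"']))
```

<p>Escaping before marking keeps arbitrary article HTML from being injected; only the <code>&lt;mark&gt;</code> tags added here reach the template unescaped (via <code>{{ bodytext | safe }}</code>).</p>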
|
<python><html><jinja2>
|
2023-02-23 23:57:32
| 1
| 527
|
steadynappin
|
75,551,453
| 5,203,117
|
How to refine heatmap?
|
<p>I have a pandas data frame that looks like this:</p>
<pre><code> SPX RYH RSP RCD RYE ... RTM RHS RYT RYU EWRE
Date ...
2022-02-25 NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN
2022-03-04 9.0 5.0 8.0 12.0 1.0 ... 6.0 4.0 11.0 2.0 3.0
2022-03-11 8.0 12.0 6.0 11.0 1.0 ... 3.0 13.0 9.0 2.0 4.0
2022-03-18 5.0 6.0 8.0 1.0 13.0 ... 9.0 10.0 2.0 12.0 11.0
2022-03-25 5.0 12.0 9.0 13.0 1.0 ... 2.0 4.0 10.0 3.0 7.0
</code></pre>
<p>Here is the info on it:</p>
<pre><code>>>> a.ranks.info()
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 52 entries, 2022-02-25 to 2023-02-17
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 SPX 51 non-null float64
1 RYH 51 non-null float64
2 RSP 51 non-null float64
3 RCD 51 non-null float64
4 RYE 51 non-null float64
5 RYF 51 non-null float64
6 RGI 51 non-null float64
7 EWCO 51 non-null float64
8 RTM 51 non-null float64
9 RHS 51 non-null float64
10 RYT 51 non-null float64
11 RYU 51 non-null float64
12 EWRE 51 non-null float64
dtypes: float64(13)
memory usage: 5.7 KB
>>>
</code></pre>
<p>I plot a heatmap of it like so:</p>
<pre><code> cmap = sns.diverging_palette(133, 10, as_cmap=True)
sns.heatmap(self.ranks, cmap=cmap, annot=True, cbar=False)
plt.show()
</code></pre>
<p>This is the result:<a href="https://i.sstatic.net/5yGsB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5yGsB.png" alt="enter image description here" /></a>
What I would like to have is the image flipped with symbols on the y-axis and dates on the x-axis. I have tried .imshow() and the various pivot methods to no avail.</p>
<p>I suspect that I have two questions:</p>
<ul>
<li>Is seaborn or imshow the right way to go about this?</li>
<li>How do I pivot a pandas dataframe where the index is datetime?</li>
</ul>
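<p>Possibly relevant: no pivot may be needed if the goal is just to swap the axes, since transposing the frame already turns symbols into rows and dates into columns (sketch with a cut-down frame):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(6).reshape(3, 2),
                  index=pd.to_datetime(['2022-03-04', '2022-03-11', '2022-03-18']),
                  columns=['SPX', 'RYH'])

flipped = df.T  # symbols on the index (heatmap y-axis), dates on the columns (x-axis)
print(flipped.shape)
# sns.heatmap(flipped, ...) would then label y with symbols and x with dates
```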
|
<python><pandas><seaborn><imshow>
|
2023-02-23 23:55:56
| 1
| 597
|
John
|
75,551,439
| 1,107,226
|
Clicking dropdown option, getting base WebDriverException with empty Message
|
<p>Using Python & Selenium in Safari browser; trying to select from a dropdown box.</p>
<p>The dropdown looks like this in the HTML:</p>
<pre class="lang-html prettyprint-override"><code><select name="ctl00$cph1$d1$cboExchange" onchange="javascript:setTimeout('__doPostBack(\'ctl00$cph1$d1$cboExchange\',\'\')', 0)" id="ctl00_cph1_d1_cboExchange" style="width:205px;margin-left:30px;">
<option value="AMEX">American Stock Exchange</option>
<option value="ASX">Australian Securities Exchange</option>
<option value="CFE">Chicago Futures Exchange</option>
<option value="EUREX">EUREX Futures Exchange</option>
<option value="FOREX">Foreign Exchange</option>
<option selected="selected" value="INDEX">Global Indices</option>
<option value="HKEX">Hong Kong Stock Exchange</option>
<option value="KCBT">Kansas City Board of Trade</option>
<option value="LIFFE">LIFFE Futures and Options</option>
<option value="LSE">London Stock Exchange</option>
<option value="MGEX">Minneapolis Grain Exchange</option>
<option value="USMF">Mutual Funds</option>
<option value="NASDAQ">NASDAQ Stock Exchange</option>
<option value="NYBOT">New York Board of Trade</option>
<option value="NYSE">New York Stock Exchange</option>
<option value="OTCBB">OTC Bulletin Board</option>
<option value="SGX">Singapore Stock Exchange</option>
<option value="TSX">Toronto Stock Exchange</option>
<option value="TSXV">Toronto Venture Exchange</option>
<option value="WCE">Winnipeg Commodity Exchange</option>
</select>
</code></pre>
<p>My select-related code looks like this:</p>
<pre class="lang-py prettyprint-override"><code> try:
exchange_dropdown_id = 'ctl00_cph1_d1_cboExchange'
exchange_dropdown = driver.find_element(by=By.ID, value=exchange_dropdown_id)
exchange_dropdown_select = Select(exchange_dropdown)
print('-- clicking dropdown element')
exchange_dropdown_select.select_by_value('AMEX')
print('IT WORKED!')
except (selenium.common.exceptions.ElementClickInterceptedException,
selenium.common.exceptions.ElementNotInteractableException,
selenium.common.exceptions.ElementNotSelectableException,
selenium.common.exceptions.ElementNotVisibleException,
selenium.common.exceptions.ImeActivationFailedException,
selenium.common.exceptions.ImeNotAvailableException,
selenium.common.exceptions.InsecureCertificateException,
selenium.common.exceptions.InvalidCookieDomainException,
selenium.common.exceptions.InvalidCoordinatesException,
selenium.common.exceptions.InvalidElementStateException,
selenium.common.exceptions.InvalidSelectorException,
selenium.common.exceptions.InvalidSessionIdException,
selenium.common.exceptions.InvalidSwitchToTargetException,
selenium.common.exceptions.JavascriptException,
selenium.common.exceptions.MoveTargetOutOfBoundsException,
selenium.common.exceptions.NoAlertPresentException,
selenium.common.exceptions.NoSuchAttributeException,
selenium.common.exceptions.NoSuchCookieException,
selenium.common.exceptions.NoSuchElementException,
selenium.common.exceptions.NoSuchFrameException,
selenium.common.exceptions.NoSuchShadowRootException,
selenium.common.exceptions.NoSuchWindowException,
selenium.common.exceptions.ScreenshotException,
selenium.common.exceptions.SeleniumManagerException,
selenium.common.exceptions.SessionNotCreatedException,
selenium.common.exceptions.StaleElementReferenceException,
selenium.common.exceptions.TimeoutException,
selenium.common.exceptions.UnableToSetCookieException,
selenium.common.exceptions.UnexpectedAlertPresentException,
selenium.common.exceptions.UnexpectedTagNameException,
selenium.common.exceptions.UnknownMethodException,
) as ex:
print('*** POSSIBLE EXCEPTION FOUND:\n' + repr(ex) + '\n*** END REPR')
except selenium.common.exceptions.WebDriverException as ex:
print('*** WEBDRIVER BASE EXCEPTION:\n' + repr(ex) + '\n*** END BASE EXCEPTION DETAIL')
except Exception as ex:
print('*** Failure selecting item. Exception:\n' + str(ex) + '*** END EXCEPTION MESSAGE ***')
print('*** Exception Detail?:\n' + repr(ex) + '\n*** END EXCEPTION MESSAGE ***')
</code></pre>
<p>The result is:</p>
<blockquote>
<p>-- clicking dropdown element<br />
*** WEBDRIVER BASE EXCEPTION:<br />
WebDriverException()<br />
*** END BASE EXCEPTION DETAIL</p>
</blockquote>
<p>If I use <code>str(ex)</code> instead of <code>repr(ex)</code>, I get the word MESSAGE: with no other information.</p>
<p>I entered <a href="https://selenium-python.readthedocs.io/api.html" rel="nofollow noreferrer">all the exceptions in the WebDriver API</a> in hopes of catching "the one" that would tell me what's happening.</p>
<p>I have tried using <code>select_by_visible_text('American Stock Exchange')</code> as well, with the same result.</p>
<p>I can iterate through all the options with the <code>Select</code> object:</p>
<pre class="lang-py prettyprint-override"><code> for option in exchange_dropdown_select.options:
print(option.get_attribute('value'))
if option.get_attribute('value') == 'AMEX':
print('AMEX found')
</code></pre>
<p>and it finds the option. However, if I add a click on the option:</p>
<pre class="lang-py prettyprint-override"><code> for option in exchange_dropdown_select.options:
print(option.get_attribute('value'))
if option.get_attribute('value') == 'AMEX':
print('AMEX found')
option.click()
print('clicked AMEX')
</code></pre>
<p>it still ends with the empty WebDriver exception:</p>
<blockquote>
<p>AMEX found<br />
*** WEBDRIVER BASE EXCEPTION:<br />
WebDriverException()<br />
*** END BASE EXCEPTION DETAIL.</p>
</blockquote>
<p>The HTML is enclosed in a table, if that makes a difference. And, I'm using Python 3.9.</p>
<p>Any ideas what might be happening and how I might fix it?</p>
<p>Also, I'd prefer not to use the XPATH or index values as the developers might change the order of the options, and the XPATH doesn't contain any ID information relating to the item(s) I want to select - again, a path that could change if the option order changes.</p>
<p>Oh, and my imports are:</p>
<pre class="lang-py prettyprint-override"><code>import sys
import selenium # for all of those exceptions down there; will be removed when solved
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import Select
from selenium.webdriver.support.ui import WebDriverWait
</code></pre>
<p><em><strong>Update:</strong></em> to get to the issue -</p>
<ul>
<li>start with the URL <a href="https://eoddata.com/" rel="nofollow noreferrer">eoddata.com</a></li>
<li>log in (free account is fine)</li>
<li>click the Download button at the upper left (next to Home Page button)</li>
<li>click the 'X' to close the popup</li>
</ul>
<p>Then, I'm looking at the Download Data portion in the upper left.</p>
<p>In this particular example, I want to select the Exchange - <code>value = "AMEX"</code> and <code>visible text = "American Stock Exchange"</code>. It's currently the first index, but again, this could change at any time, so I don't want to count on an exact position, if I can help it.</p>
<p>Finally, in respectful netizenship, the current <em>robots.txt</em> is:</p>
<pre><code>User-agent: *
Crawl-delay:10
Disallow: /images/
Disallow: /styles/
Disallow: /scripts/
User-agent: turnitinbot
Disallow: /
Sitemap: http://www.eoddata.com/sitemapindex.xml
</code></pre>
<p><em><strong>Update 2:</strong></em>
Apple Docs related to <code>safaridriver</code> (accessed 28 Feb 2023):</p>
<ul>
<li><a href="https://developer.apple.com/documentation/webkit/about_webdriver_for_safari" rel="nofollow noreferrer">About WebDriver for Safari</a>.</li>
<li><a href="https://developer.apple.com/documentation/webkit/testing_with_webdriver_in_safari" rel="nofollow noreferrer">Testing with WebDriver in Safari</a>.</li>
<li><a href="https://developer.apple.com/documentation/webkit/macos_webdriver_commands_for_safari_12_and_later" rel="nofollow noreferrer">macOS WebDriver Commands for Safari 12 and later</a></li>
</ul>
<p>I have found a number of posts, without selected answers, asking about this issue in Safari. I've not found any instructional info with a SELECT specifically - and all of the rest of the steps I listed above are working perfectly.</p>
<p>It looks like I need a REST way of accessing the SELECT/OPTION item -- or, I need to switch to Firefox or Chrome, lol.</p>
|
<python><selenium-webdriver><safaridriver>
|
2023-02-23 23:53:56
| 1
| 8,899
|
leanne
|
75,551,292
| 1,868,358
|
Python / Django - Safest way to normalize relative paths cross-platform?
|
<p><em>Please note this question is about <strong>relative</strong> paths and must account for <strong>cross-platform</strong> functionality.</em></p>
<p>When a user uploads a file, I want to generate a timestamped path to it:</p>
<pre><code>
def get_filepath(instance, filename) -> str:
"""
Returns a timestamped directory path to deposit files into.
i.e. "evidence/2023/02/23/96672aa5-94d9-4289-8531-d7a8dc8f060d/data.jpg"
"""
dt = timezone.now()
YYYY = dt.year
MM = str(dt.month).zfill(2)
DD = str(dt.day).zfill(2)
UID = str(uuid.uuid4())
# settings.MEDIA_URL = 'media/', I just checked
path = (str(x) for x in (
settings.MEDIA_URL, 'evidence', YYYY, MM, DD, UID, filename
))
path = os.path.join(*path) # dubious
return path
</code></pre>
<p>This is raising <code>SuspiciousFileOperation</code>, which is not at all surprising:</p>
<pre><code>raise SuspiciousFileOperation(
django.core.exceptions.SuspiciousFileOperation: Detected path traversal attempt in '/media/evidence\2023\02\23\1cfa40b9-b522-4c70-b59b-fd270d722ab2\templeos.txt'
</code></pre>
<p>I'm at a loss as to how to normalize this though.</p>
<ul>
<li><code>pathlib.PurePath</code> -- didn't work; Django's <code>FileField</code> doesn't like the <code>Path</code> object and wants to treat the string as an absolute path by sticking <code>C:\</code> in front.</li>
<li><code>os.path.normpath</code> -- didn't work either (<code>SuspiciousFileOperation: The joined path (C:\media\evidence\2023\02\23\cc32be4a-76d8-47c5-a274-9783707844b3\templeos.txt) is located outside of the base path component (C:\Users\...</code>)</li>
<li>This does appear to work if I do <code>str(x).strip('/\\')</code> when defining the path, but this is hacky and the sort of thing I'd expect a proper library to handle...</li>
</ul>
<p>What am I "supposed" to do to normalize <em>relative</em> paths such that this works on Windows and Linux?</p>
|
<python><django><path><pathlib><os.path>
|
2023-02-23 23:21:47
| 0
| 1,507
|
Ivan
|
75,551,188
| 18,125,194
|
Change the grid color or change spacing of cells for Plotly imshow
|
<p>I have generated a Plotly imshow figure, but I feel like it's a bit hard to read for my data since I have numerous categories.</p>
<p>I would like to either change the grid colours to help define the cells or, as a preferred option, to add space between the cells to make the plot easier to read (add white space in between the colours so they are not right beside each other).</p>
<p>I tried to use</p>
<pre><code>fig.update_xaxes(showgrid=True,
gridwidth=1,
gridcolor='blue')
</code></pre>
<p>but it doesn't work, neither does</p>
<pre><code>fig.update_xaxes(visible=True)
</code></pre>
<p>I have no idea how I would add white space in between the cells (I'm not sure it's possible, but I thought I'd ask).</p>
<p>My full code is below</p>
<pre><code>import pandas as pd
import plotly.express as px
# Sample data
df = pd.DataFrame({'cat':['A','B','C'],
'att1':[1,0.55,0.15],
'att2':[0.55,0.3,0.55],
'att3':[0.55,0.55,0.17]
})
df =df.set_index('cat')
# Graphing
fig = px.imshow(df, text_auto=False,height=800, width=900,
color_continuous_scale=px.colors.diverging.Picnic,
aspect='auto')
fig.update_layout(xaxis={'side': 'top'})
fig.update_xaxes(tickangle=-45,
showgrid=True,
gridwidth=1,
gridcolor='blue')
fig.update(layout_coloraxis_showscale=False)
fig.update_xaxes(visible=True)
fig.show()
</code></pre>
|
<python><plotly>
|
2023-02-23 23:01:45
| 1
| 395
|
Rebecca James
|
75,551,175
| 2,299,245
|
Update raster values using Python
|
<p>I'm trying to read in a raster file. It's a 32-bit float raster, with either values 1 or no data. I want to update the values of 1 to 10, and write it out again (probably as a UNIT8 data type?). Here is my attempt:</p>
<pre><code>import rioxarray
import numpy
my_rast = rioxarray.open_rasterio("my_file.tif", masked=True, cache=False)
my_rast[numpy.where(my_rast == 1)] = 10
my_rast.rio.set_nodata(255)
my_rast.rio.to_raster("output.tif", compress='lzw', num_threads='all_cpus', tiled=True,
dtype='uint8', driver="GTiff", predictor=2, windowed=True)
</code></pre>
<p>However the fourth line never seems to finish (maybe as it's a fairly large raster?). I'm not sure what I'm doing wrong.</p>
<p>Here is the result of <code>print(my_rast)</code>:</p>
<pre><code><xarray.DataArray (band: 1, y: 1140, x: 1053)>
[1200420 values with dtype=float64]
Coordinates:
* band (band) int64 1
* x (x) float64 9.412 9.412 9.412 9.413 ... 9.703 9.704 9.704 9.704
* y (y) float64 47.32 47.32 47.32 47.32 ... 47.0 47.0 47.0 47.0
spatial_ref int64 0
Attributes:
AREA_OR_POINT: Area
scale_factor: 1.0
add_offset: 0.0
</code></pre>
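<p>The slow line is probably the fancy indexing with <code>numpy.where</code> index tuples on the DataArray; a vectorised boolean replacement avoids that entirely. Sketch with plain numpy standing in for the raster (the xarray equivalent would be something like <code>my_rast.where(my_rast != 1, 10)</code>, an assumption about your versions):</p>

```python
import numpy as np

# Stand-in for the raster's underlying array: 1s plus NaN nodata.
arr = np.array([[1.0, np.nan],
                [np.nan, 1.0]])

# One vectorised pass: 1 -> 10, everything else (incl. NaN) -> 255 nodata,
# then narrow to uint8 for the output GeoTIFF.
out = np.where(arr == 1, 10, 255).astype('uint8')
print(out.tolist())  # [[10, 255], [255, 10]]
```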
|
<python><raster><python-xarray><rasterio>
|
2023-02-23 23:00:12
| 1
| 949
|
TheRealJimShady
|
75,551,166
| 1,714,490
|
How can I create a python3 venv in cmake install?
|
<p>I have a medium-sized project composed of many parts, mostly in C++, but testing and configuration rely on Python3 scripts.</p>
<p>The project buildsystem is generated using CMake and installed (by CMake rules) in a "deploy" directory.</p>
<p>I would like to create a Python venv to segregate changes.
I have the following CMake fragment:</p>
<pre><code>set(VENV ${CMAKE_INSTALL_PREFIX}/venv)
set(REQUIREMENTS ${CMAKE_SOURCE_DIR}/tools/testing/scripts/requirements.txt)
set(BIN_DIR ${VENV}/bin)
set(PYTHON ${BIN_DIR}/python)
set(OUTPUT_FILE ${VENV}/environment.txt)
add_custom_command(
OUTPUT ${OUTPUT_FILE}
COMMAND ${Python3_EXECUTABLE} -m venv ${VENV}
COMMAND ${BIN_DIR}/pip install -U pip wheel
COMMAND ${BIN_DIR}/pip install -r ${REQUIREMENTS}
COMMAND ${BIN_DIR}/pip freeze > ${VENV}/environment.txt
DEPENDS ${REQUIREMENTS}
)
add_custom_target(venv DEPENDS ${PYTHON})
</code></pre>
<p>... but I don't know how to trigger the <code>venv</code> target while doing installation.</p>
<p>I am unsure if this is the right approach (comments welcome, of course), but the need should be quite clear: I need to be able to run my scripts from the "deploy" directory using something like:</p>
<pre><code>venv/bin/python test-script.py
</code></pre>
<p>or use a custom: <code>#!venv/bin/python</code> line (I am working under Linux).</p>
<h2>UPDATE:</h2>
<p>My somewhat working code, based on @user Answer, is as follows:</p>
<pre><code>find_package(Python3 REQUIRED COMPONENTS Interpreter)
set(VENV "${CMAKE_INSTALL_PREFIX}/venv")
set(REQUIREMENTS "${CMAKE_SOURCE_DIR}/tools/testing/scripts/requirements.txt")
set(BIN_DIR "${VENV}/bin")
install(CODE "
MESSAGE(\"Creating VENV from ${Python3_EXECUTABLE} to ${VENV}\")
execute_process(COMMAND_ECHO STDOUT COMMAND ${Python3_EXECUTABLE} -m venv ${VENV} )
execute_process(COMMAND_ECHO STDOUT COMMAND ${BIN_DIR}/pip install -U pip wheel )
execute_process(COMMAND_ECHO STDOUT COMMAND ${BIN_DIR}/pip install -r ${REQUIREMENTS} )
")
</code></pre>
<p>Equivalent code with <code>set(...</code> and <code>find_package(...</code> inside <code>install(CODE...</code> did not work; also quoting seems wrong.</p>
<p>I will accept this answer, but I would like to clarify the above.</p>
|
<python><cmake><python-venv>
|
2023-02-23 22:59:38
| 1
| 3,106
|
ZioByte
|
75,551,108
| 127,682
|
convert a column(string) in csv file to a tuple of ints
|
<p>Currently, I process a file using the <code>csv</code> module, which creates a list of dictionaries.</p>
<pre class="lang-py prettyprint-override"><code>import csv
file = open('csvfile.csv')
lines = csv.reader(file)
header = next(lines) # ['name', 'price', 'date']
# when I do the following
for line in lines:
print(line)
# I get the following
['xxxx', '5.00', '2/23/2023']
# assigning types to the columns to do type conversion using a function
types = [
str,
float,
    str # this needs to be a tuple
# tried tuple(map(int, cannotchoosecolumn.split('/')))
# did not work
]
# now to create a list of dicts
alist_of_dicts = [
{
name: func(val)
for name, func, val in zip(header, types, line)
}
for line in lines
]
</code></pre>
<p>How would I select the third column <code>str(2/23/2023)</code> to change to a <code>tuple(2, 23, 2023)</code> using the format I am currently using?</p>
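<p>A sketch of one way that keeps the converter-list format: any callable fits in <code>types</code>, so the third slot can be a small function (or lambda) that splits on <code>/</code> and maps to <code>int</code>:</p>

```python
import csv
import io

data = "name,price,date\nxxxx,5.00,2/23/2023\n"  # stand-in for csvfile.csv
lines = csv.reader(io.StringIO(data))
header = next(lines)


def date_tuple(s):
    # '2/23/2023' -> (2, 23, 2023)
    return tuple(map(int, s.split('/')))


types = [str, float, date_tuple]

alist_of_dicts = [
    {name: func(val) for name, func, val in zip(header, types, line)}
    for line in lines
]
print(alist_of_dicts[0]['date'])  # (2, 23, 2023)
```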
|
<python><csv>
|
2023-02-23 22:49:34
| 2
| 465
|
capnhud
|
75,551,043
| 3,255,453
|
pandas series to_json memory leak
|
<p>My production service's memory was constantly increasing, and I think the root cause is <code>pandas.Series.to_json</code>.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import gc
for i in range(0,10):
series = pd.Series([0.008, 0.002])
json_string = series.to_json(orient="records")
_ = gc.collect()
print("gc_count={}".format(len(gc.get_objects())))
</code></pre>
<p>output:</p>
<pre><code>gc_count=46619
gc_count=46619
gc_count=46620
gc_count=46621
gc_count=46622
gc_count=46623
gc_count=46624
gc_count=46625
gc_count=46626
gc_count=46627
</code></pre>
<p>What's interesting is that the first and the second calls always have the same GC count, and then it starts increasing by one in each iteration.</p>
<p>Has anyone faced this before? Are there ways to avoid the memory leak?</p>
<p>[Python versions tried: 3.8 and 3.9]</p>
<p>Update: This seems to be related: <a href="https://github.com/pandas-dev/pandas/issues/24889" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/issues/24889</a> and using to_dict and converting it using json seems to be a workaround.</p>
|
<python><python-3.x><pandas>
|
2023-02-23 22:38:17
| 1
| 504
|
learnerer
|
75,550,956
| 552,613
|
How to use Scapy in Python 3 to write a PCAP to a byte or string?
|
<p>I am trying to get a PCAP into a byte variable, but I cannot figure out how to do it. So far I have something like:</p>
<pre><code>import io
from scapy.all import *
packet = Ether() / IP(dst="1.2.3.4") / UDP(dport=123)
packet = IP(src='127.0.0.1', dst='127.0.0.2')/TCP()/"GET / HTTP/1.0\r\n\r\n"
content = b''
wrpcap(io.BytesIO(content), [packet])
</code></pre>
<p>But <code>content</code> is empty. How can I get this to work?</p>
|
<python><scapy>
|
2023-02-23 22:28:25
| 1
| 2,562
|
Addy
|
75,550,882
| 9,064,615
|
Trying to install the latest Pytorch (1.13.1) instead installs 1.11.0
|
<p>I'm trying to install the latest Pytorch version, but it keeps trying to instead install 1.11.0. I'm on Windows 10 running Python 3.10.8, and I have CUDA 11.6 installed.</p>
<p>I'm running the following command:</p>
<pre><code>pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
</code></pre>
<p>Even if I give it the flag --no-cache-dir, it proceeds to download 1.11.0 anyways. Running</p>
<pre><code>pip install torch==1.13.1
</code></pre>
<p>Installs the CPU version. Any way that I can download the specific module directly and install it manually?</p>
|
<python><python-3.x><pytorch>
|
2023-02-23 22:16:51
| 1
| 608
|
explodingfilms101
|
75,550,869
| 10,404,281
|
Rolling average base on two columns in Pandas
|
<p>I want to get a rolling sum for each month based on weeks.
Here is what my df looks like</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame([1, 2, 1, 2, 3, 4, 1, 2, 1, 2, 3, 4, 1, 1, 2], index=pd.MultiIndex.from_arrays([[ 30, 20,15, 10, 20, 20,5,15,20,10,10, 30, 20,15, 10], ['red','red','red','red', 'red', 'red', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue','Green', 'Green', 'Green'],['Feb', 'Feb', 'Mar', 'Mar', 'Mar', 'Mar', 'Feb', 'Feb', 'Mar', 'Mar', 'Mar', 'Mar', 'Feb', 'Mar', 'Mar']], names=['number', 'color', 'Mon'])).reset_index()
df
number color mon 0
30 red Feb 1
20 red Feb 2
15 red Mar 1
10 red Mar 2
20 red Mar 3
20 red Mar 4
5 blue Feb 1
15 blue Feb 2
20 blue Mar 1
10 blue Mar 2
10 blue Mar 3
30 blue Mar 4
20 Green Feb 1
15 Green Mar 1
10 Green Mar 2
</code></pre>
<p>I want to get the rolling sum for each month, for each color, based on that month's weeks.
For color red and month Feb, the 1st week is 30 and the 2nd week is 50 (30+20). For Mar, red should be 15, 25, 45, 65.</p>
<pre class="lang-py prettyprint-override"><code>number color mon 0 rolling
30 red Feb 1 30
20 red Feb 2 50
15 red Mar 1 15
10 red Mar 2 25
20 red Mar 3 45
20 red Mar 4 65
5 blue Feb 1 5
15 blue Feb 2 20
20 blue Mar 1 20
10 blue Mar 2 30
10 blue Mar 3 40
30 blue Mar 4 70
20 Green Feb 1 20
15 Green Mar 1 15
10 Green Mar 2 25
</code></pre>
<p>I'm trying to use groupby with a rolling window, with and without a lambda, but it didn't work out:</p>
<pre class="lang-py prettyprint-override"><code>df.groupby(by=['color','Mon']).rolling(window=2).sum()
# also below command
df.groupby(by=['color','Mon']).apply(lambda x: x.rolling(2).sum())
</code></pre>
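<p>A running total within each (color, Mon) group is a cumulative sum rather than a fixed-size rolling window, so one possible sketch (using a small frame mirroring the red rows above; column names assumed from the question) is:</p>

```python
import pandas as pd

# Small frame mirroring the red rows of the example above.
df = pd.DataFrame({
    "number": [30, 20, 15, 10, 20, 20],
    "color": ["red"] * 6,
    "Mon": ["Feb", "Feb", "Mar", "Mar", "Mar", "Mar"],
})

# Cumulative sum of `number` within each (color, Mon) group.
df["rolling"] = df.groupby(["color", "Mon"])["number"].cumsum()
print(df["rolling"].tolist())  # [30, 50, 15, 25, 45, 65]
```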
<p>Any help would be greatly appreciated.
Thanks in advance!!</p>
|
<python><pandas><group-by>
|
2023-02-23 22:15:33
| 1
| 819
|
rra
|
75,550,787
| 6,676,101
|
How do we test if a character is usually preceded by a backslash?
|
<p>When writing strings, computer programmers will often insert something called an "<em><strong>escape sequence</strong></em>".</p>
<p>For example, the string literal <code>"Hello World\n"</code> ends in a line feed character <code>\n</code>.</p>
<p>As another example, the string literal <code>"Hello World\r"</code> ends in a carriage return character <code>\r</code>.</p>
<p>How do we test whether or not a character is usually preceded by a backslash character?</p>
<pre class="lang-python prettyprint-override"><code>characters = [chr(num) for num in range(32, 127)]
for ch in characters:
if is_slashed(ch):
print(repr(ch).ljust(20), "IS SLASHED")
else:
print(repr(ch).ljust(20), "IS **NOT** SLASHED")
</code></pre>
<p>Suppose that <code>ch</code> is a line-feed character.</p>
<p>Then we have:</p>
<pre class="lang-None prettyprint-override"><code>repr(ch) ==
[single_quote, backslash_character, the_letter_n, single_quote]
</code></pre>
<p>After that, we have...</p>
<pre class="lang-None prettyprint-override"><code> repr(ch)[1:-2] ==
[backslash, letter n]
</code></pre>
<p>Can we test <code>repr(ch)[1:-2][0] == backslash</code>?</p>
<p>Why is there an error message in the following code?</p>
<pre class="lang-python prettyprint-override"><code>characters = [chr(num) for num in range(32, 127)]
for ch in characters:
if not repr(ch)[1:-2][0] == "\\":
print(ch)
</code></pre>
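<p>For printable ASCII, <code>repr(ch)</code> of a non-escaped character is just quote + char + quote (three characters), so <code>repr(ch)[1:-2]</code> is an empty string and indexing it with <code>[0]</code> raises an IndexError. One sketch that avoids the slicing is to look at the character right after the opening quote:</p>

```python
def is_slashed(ch):
    # repr() wraps the character in quotes; if the escaped form starts
    # with a backslash, it sits right after the opening quote.
    return repr(ch)[1] == "\\"

print(is_slashed("\\"))  # True
print(is_slashed("a"))   # False
```

<p>Note that quote characters themselves are not "slashed" this way: <code>repr("'")</code> simply switches to double quotes.</p>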
|
<python><python-3.x><string>
|
2023-02-23 22:03:49
| 1
| 4,700
|
Toothpick Anemone
|
75,550,770
| 11,693,768
|
Drop duplicates in a subset of columns per row, rowwise, only keeping the first copy, rowwise
|
<p>I have the following pandas dataframe, which is over 7 million rows</p>
<pre><code>import numpy as np
import pandas as pd
data = {'date': ['2023-02-22', '2023-02-21', '2023-02-23'],
'x1': ['descx1a', 'descx1b', 'descx1c'],
'x2': ['ALSFNHF950', 'KLUGUIF615', np.nan],
'x3': [np.nan, np.nan, 24319.4],
'x4': [np.nan, np.nan, 24334.15],
'x5': [np.nan, np.nan, 24040.11],
'x6': [np.nan, 75.33, 24220.34],
'x7': [np.nan, np.nan, np.nan],
'v': [np.nan, np.nan, np.nan],
'y': [404.29, np.nan, np.nan],
'ay': [np.nan, np.nan, np.nan],
'by': [np.nan, np.nan, np.nan],
'cy': [np.nan, np.nan, np.nan],
'gy': [np.nan, np.nan, np.nan],
'uap': [404.29, 75.33, np.nan],
'ubp': [404.29, 75.33, np.nan],
'sf': [np.nan, 2.0, np.nan]}
df = pd.DataFrame(data)
</code></pre>
<p>If there are any duplicates of a number in any of the columns x3,x4,x5,x6,x7,v,y,ay,by,cy,gy,uap,ubp, I want to delete the duplicates and only keep one copy: either the one in column <code>x6</code>, or the one in the first column in which the duplicate appears.</p>
<p>In most rows, if there are copies, the first copy appears in column x6.</p>
<p>The output should look like this,</p>
<pre><code>data = {'date': ['2023-02-22', '2023-02-21', '2023-02-23'],
'x1': ['descx1a', 'descx1b', 'descx1c'],
'x2': ['ALSFNHF950', 'KLUGUIF615', np.nan],
'x3': [np.nan, np.nan, 24319.4],
'x4': [np.nan, np.nan, 24334.15],
'x5': [np.nan, np.nan, 24040.11],
'x6': [np.nan, 75.33, 24220.34],
'x7': [np.nan, np.nan, np.nan],
'v': [np.nan, np.nan, np.nan],
'y': [404.29, np.nan, np.nan],
'ay': [np.nan, np.nan, np.nan],
'by': [np.nan, np.nan, np.nan],
'cy': [np.nan, np.nan, np.nan],
'gy': [np.nan, np.nan, np.nan],
'uap': [np.nan, np.nan, np.nan],
'ubp': [np.nan, np.nan, np.nan],
'sf': [np.nan, 2.0, np.nan]}
</code></pre>
<p>So far I only figured out,</p>
<pre><code>check = ['x3', 'x4', 'x5', 'x6', 'x7', 'v', 'y', 'ay', 'by', 'cy', 'gy', 'uap', 'ubp']
df[check] = df[check].where(~df[check].duplicated(), np.nan)
</code></pre>
<p>But it's wrong.</p>
<p>Is there a way to get this done?</p>
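<p>One possible sketch of the per-row logic (illustration only; a row-wise <code>apply</code> will be slow on 7 million rows): reorder the checked columns so x6 comes first, blank later duplicates within each row, then restore the original column order. Shown on a tiny frame mirroring the first two rows of the question:</p>

```python
import numpy as np
import pandas as pd

def drop_row_dups(df, check, prefer="x6"):
    # Put the preferred column first so its copy is the one that survives.
    order = [prefer] + [c for c in check if c != prefer]

    def one_row(s):
        t = s[order]
        # Blank every later duplicate, but never touch NaN cells.
        t = t.mask(t.duplicated() & t.notna())
        return t.reindex(check)

    out = df.copy()
    out[check] = out[check].apply(one_row, axis=1)
    return out

# Tiny frame mirroring the first two rows of the question.
df = pd.DataFrame({"x6": [np.nan, 75.33], "y": [404.29, np.nan],
                   "uap": [404.29, 75.33], "ubp": [404.29, 75.33]})
res = drop_row_dups(df, ["x6", "y", "uap", "ubp"])
print(res["uap"].tolist(), res["ubp"].tolist())  # [nan, nan] [nan, nan]
```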
|
<python><pandas><dataframe>
|
2023-02-23 22:02:06
| 1
| 5,234
|
anarchy
|
75,550,497
| 2,730,554
|
bokeh show series name when hovering over line on chart
|
<p>I am plotting a line chart. I use the HoverTool so that when a user hovers over a line they can see the date & the value; this bit works. However, it doesn't show them the series name. I have tried using the special <code>$name</code> variable, but it just shows three question marks. What am I doing wrong?</p>
<pre><code>import datetime as dt
from bokeh.models import ColumnDataSource, HoverTool
from bokeh.plotting import figure, output_file, show

data_dict = dict(
    x=[dt.datetime(2023, 1, 1), dt.datetime(2023, 1, 2), dt.datetime(2023, 1, 3), dt.datetime(2023, 1, 4), dt.datetime(2023, 1, 5)],
    y1=[2, 5, 8, 2, 7],
    y2=[3, 2, 1, 4, 7],
    y3=[1, 4, 5, 6, 3]
)
source = ColumnDataSource(data=data_dict)
tools = "pan,box_zoom,zoom_in,zoom_out,redo,undo,reset,crosshair"
p = figure(title = "Blah",
x_axis_label='date',
width=1600,
height=850,
x_axis_type='datetime',
tools=tools,
toolbar_location='above')
for c in data_dict.keys():
if c != 'x':
p.line(x='x', y=c, source=source, legend_label=c)
p.legend.location = "top_left"
p.legend.click_policy = "hide"
formatters_tooltips = {'$x': 'datetime'}
p.add_tools(HoverTool(
tooltips=[('Date', '$x{%Y-%m-%d}'), ('Name', '$name'), ('Value', '$y')],
formatters=formatters_tooltips))
output_file("C:/some_path/some_file.html")
show(p)
</code></pre>
|
<python><bokeh>
|
2023-02-23 21:29:23
| 1
| 6,738
|
mHelpMe
|
75,550,166
| 8,869,570
|
How do you call timestamp() on column of pandas dataframe?
|
<pre><code>import datetime
import pandas as pd

time1 = time2 = time3 = datetime.datetime(2022, 12, 2, 8, 15)
rows = pd.DataFrame(
    {
        "id": [1, 1, 1],
        "time": [time1, time2, time3],
    })
</code></pre>
<p>When I do</p>
<pre><code>rows.time.dt.timestamp()
</code></pre>
<p>I get the error</p>
<pre><code>AttributeError: 'DatetimeProperties' object has no attribute 'timestamp'
</code></pre>
<p>I can call <code>timestamp()</code> on each individual <code>rows.time.iloc[i]</code>, but I would like to do it on the whole column.</p>
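<p>One possible sketch: map <code>pd.Timestamp.timestamp</code> over the column, or compute seconds since the epoch in a vectorized way (note that both treat naive datetimes as UTC):</p>

```python
import datetime

import pandas as pd

rows = pd.DataFrame({
    "id": [1, 1, 1],
    "time": [datetime.datetime(2022, 12, 2, 8, 15)] * 3,
})

# Element-wise: call Timestamp.timestamp on every value.
ts = rows["time"].map(pd.Timestamp.timestamp)

# Vectorized: seconds elapsed since the Unix epoch.
ts2 = (rows["time"] - pd.Timestamp("1970-01-01")) / pd.Timedelta(seconds=1)
```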
|
<python><pandas><dataframe><datetime>
|
2023-02-23 20:47:21
| 1
| 2,328
|
24n8
|
75,550,124
| 7,613,669
|
Python Polars: How to add a progress bar to map_elements / map_groups?
|
<p>Is it possible to add a <strong>progress bar</strong> to a Polars <strong>apply loop with a custom function</strong>?</p>
<p>For example, how would I add a progress bar to the following toy example:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame(
{
"team": ["A", "A", "A", "B", "B", "C"],
"conference": ["East", "East", "East", "West", "West", "East"],
"points": [11, 8, 10, 6, 6, 5],
"rebounds": [7, 7, 6, 9, 12, 8]
}
)
df.group_by("team").map_groups(lambda x: x.select(pl.col("points").mean()))
</code></pre>
<p><strong>Edit 1:</strong></p>
<p>After help from @Jcurious, I have the following 'tools' that can be re-used for other functions; however, it does not print to the console correctly.</p>
<pre class="lang-py prettyprint-override"><code>def pl_progress_applier(func, task_id, progress, **kwargs):
progress.update(task_id, advance=1, refresh=True)
return func(**kwargs)
def pl_groupby_progress_apply(data, group_by, func, drop_cols=[], **kwargs):
global progress
with Progress() as progress:
num_groups = len(data.select(group_by).unique())
task_id = progress.add_task('Applying', total=num_groups)
return (
data
.group_by(group_by)
.map_groups(lambda x: pl_progress_applier(
x=x.drop(drop_cols), func=func, task_id=task_id, progress=progress, **kwargs)
)
)
# and using the function custom_func, we can return a table, however the progress bar jumps to 100%
def custom_func(x):
return x.select(pl.col('points').mean())
pl_groupby_progress_apply(
data=df,
group_by='team',
func=custom_func
)
</code></pre>
<p>Any ideas on how to get the progress bar to actually work?</p>
<p><strong>Edit 2:</strong></p>
<p>It seems like the above functions do indeed work, however if you're using PyCharm (like me), then it does not work. Enjoy non-PyCharm users!</p>
|
<python><python-polars>
|
2023-02-23 20:43:26
| 2
| 348
|
Sharma
|
75,549,817
| 1,143,669
|
Transformer train a new tokenizer base on existing one
|
<p>In the following code</p>
<pre><code>from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
tokenizer_new = tokenizer.train_new_from_iterator(training_corpus, 50000, new_special_tokens = ['ε₯εΊ·','ε»ε¦','θ―εη',....])
</code></pre>
<p>where training_corpus is an iterator generating lines of text from the chinese_medical.txt file on the hard drive.</p>
<p>For readers who may not be familiar with "bert-base-chinese": it is a single-character tokenizer with "wordpiece" as the default model.</p>
<p>My question is</p>
<p><strong>tokenizer_new</strong> has totally different indexing from <strong>tokenizer</strong>, for example</p>
<pre><code>tokenizer.vocab['ε']      # 1044
tokenizer_new.vocab['ε']  # 5151
</code></pre>
<p>this makes continued training (i.e. training a Chinese-medical-specific BERT model) of the model</p>
<pre><code>model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese")
</code></pre>
<p>impossible, because the indexing id for the same character is totally different in <strong>tokenizer_new</strong>.</p>
<p>How can I keep the indexing for the same characters fixed?</p>
|
<python><nlp><tokenize><transformer-model><huggingface-tokenizers>
|
2023-02-23 20:06:42
| 1
| 332
|
Katelynn ruan
|
75,549,697
| 7,926,383
|
How to speed up updating cell colors with pygsheets?
|
<p>I'm using this code to update the background color of a range of cells in a google sheet:</p>
<pre><code>import pygsheets

gc = pygsheets.authorize(service_file='path/to/credentials.json')
workbook = gc.open('spreadsheet_name')
worksheet = workbook.worksheet_by_title('Sheet1')
cell_range = worksheet.range('E2:J37')
for row in cell_range:
for cell in row:
cell.color = (0.8, 0.8, 0.8)
</code></pre>
<p>But the program is extremely slow. After it does a chunk of cells, it will hang for several minutes before continuing, and as a result for a range this size it takes like 20 minutes, somewhat undermining the point of automating this. Is there a way to speed this up? From what I can tell there isn't a way to set the formatting for a range of cells directly, necessitating this iterative approach.</p>
|
<python><google-sheets><google-sheets-api><pygsheets>
|
2023-02-23 19:53:44
| 1
| 607
|
Jacob H
|
75,549,599
| 7,447,976
|
how to efficiently read pq files - Python
|
<p>I have a list of files with <code>.pq</code> extension, whose names are stored in a list. My intention is to read these files, filter them based on <code>pandas</code>, and then merge them into a single <code>pandas</code> data frame.</p>
<p>Since there are thousands of files, the code currently runs very inefficiently. The biggest bottleneck is reading the pq files. During the experiments, I commented out the filtering part. I've tried the three different ways shown below; however, it takes about 1.5 seconds to read each file, which is quite slow. Are there alternative ways to perform these operations?</p>
<pre><code>from tqdm import tqdm
from fastparquet import ParquetFile
import pandas as pd
import pyarrow.parquet as pq
files = [.....]
#First way
for file in tqdm(files ):
temp = pd.read_parquet(file)
#filter temp and append
#Second way
for file in tqdm(files):
temp = ParquetFile(file).to_pandas()
# filter temp and append
#Third way
for file in tqdm(files):
temp = pq.read_table(source=file).to_pandas()
# filter temp and append
</code></pre>
<p>Each read of <code>file</code> inside the for loop takes quite a long time. For 24 files, I spend 28 seconds:</p>
<pre><code> 24/24 [00:28<00:00, 1.19s/it]
24/24 [00:25<00:00, 1.08s/it]
</code></pre>
<p>One sample file is on average 90MB that corresponds to 667858 rows and 48 columns. Data type is all numerical (i.e. float64). The number of rows may vary, but the number of columns remains the same.</p>
|
<python><pandas><parquet><fastparquet>
|
2023-02-23 19:42:07
| 2
| 662
|
sergey_208
|
75,549,553
| 1,451,614
|
Sklearn OrdinalEncoder Parameters
|
<p>For sklearn OrdinalEncoder:
<a href="https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OrdinalEncoder.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OrdinalEncoder.html</a></p>
<p>What is the difference between <code>unknown_value</code> and <code>encoded_missing_value</code>?</p>
<p>Looking at the source code, it almost seems like they are used interchangeably.
<a href="https://github.com/scikit-learn/scikit-learn/blob/8c9c1f27b/sklearn/preprocessing/_encoders.py#L1071" rel="nofollow noreferrer">https://github.com/scikit-learn/scikit-learn/blob/8c9c1f27b/sklearn/preprocessing/_encoders.py#L1071</a></p>
<p>For example, for the code snippet below, changing the <code>encoded_missing_value</code> seems to not do anything.</p>
<pre><code>import numpy as np
from sklearn.preprocessing import OrdinalEncoder
import sklearn
print(sklearn.__version__)
print()
ordinal_encoder_1 = OrdinalEncoder(
categories = [["a", "b", "c", "d", "e", "f", "g"]],
handle_unknown="use_encoded_value",
unknown_value=-1,
#encoded_missing_value=-2
)
print(ordinal_encoder_1.fit_transform([[np.nan],["a"], ["a"], ['asdf'], [None], ["b"], ["c"]]))
1.1.1
[[-1.]
[ 0.]
[ 0.]
[-1.]
[-1.]
[ 1.]
[ 2.]]
</code></pre>
|
<python><machine-learning><scikit-learn>
|
2023-02-23 19:37:39
| 1
| 511
|
gsandhu
|
75,549,463
| 10,159,065
|
Merging dataframes based on pairs
|
<p>I have a dataframe that looks like this:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'col_1': ['1', '2', '3', '4'],
'col_2': ['a:b,c:d', ':v', 'w:,x:y', 'a:g,h:b,j:']
})
</code></pre>
<p>The datatype of col_2 is a string, so we must do string manipulation/regex.</p>
<p>I also have another dataframe that has a mapping between key-value pair from col_2. It looks like this:</p>
<pre><code>df1 = pd.DataFrame({'col_1': ['a', 'c', '', 'w', 'x', 'a', 'h', 'j','t'],
'col_2': ['b', 'd', 'v', '','y', 'g', 'b', '', 'g'],
'col_3': ['aw', 'rt', 'er', 'aa', 'ey', 'wk', 'oo', 'ri', 'ty'],
'col_4': ['rt', 'yu', 'gq', 'tr', 'ui', 'pi', 'pw', 'pp', 'uu']
})
</code></pre>
<p>basically <code>a:b</code> translates to <code>aw:rt</code>, which means you can't reach <code>aw</code> and <code>rt</code> without both <code>a</code> and <code>b</code>.</p>
<p>I want to get all the values from col_4 corresponding to the key-value pairs in col_2, so i want my output to be</p>
<pre><code>pd.DataFrame({'col_1': ['1', '2', '3', '4'],
'col_2': ['a:b,c:d', ':v', 'w:,x:y', 'a:g,h:b,j:'],
'col_3': ['rt,yu', 'gq', 'tr,ui','pi,pw,pp' ]
})
</code></pre>
<p>I am able to extract a key-value pair as separate columns using</p>
<pre><code>df[['c1', 'c2']] = df['col_2'].str.extract(r'^([^:,]*):([^:,]*)')
</code></pre>
<p>so I can extract all the key-value pairs as columns and then do the merge, but it looks like a lengthy route. Is there another, more optimised way?</p>
|
<python><pandas><string><dataframe><lookup>
|
2023-02-23 19:27:47
| 1
| 448
|
Aayush Gupta
|
75,549,180
| 7,248,882
|
how to type a custom callable type in Python
|
<p>I have a class called Foo:</p>
<pre><code>class Foo:
def __init__(self, callable):
self.my_attr = "hi"
self.callable = callable
def __call__(self, *args, **kwargs):
# call the wrapped in function
return self.callable(*args, **kwargs)
</code></pre>
<p>I would like to type its instances (the <code>__call__</code> method and the <code>my_attr</code> attribute).</p>
<p>Thank you for your help,</p>
|
<python><generics><mypy><typing><callable>
|
2023-02-23 18:57:21
| 1
| 512
|
Bashir Abdelwahed
|
75,549,173
| 10,492,521
|
SWIG for C++ code to Python , is this the valid way to wrap a std::array the same as a C-style array?
|
<p>SWIG newbie here. Let's say I have some typemaps defined for a C-style array:</p>
<pre><code>%typemap(in) double[ANY] (double temp[$1_dim0]) {
...
}
// Convert from C to Python for c-style arrays
%typemap(out) double [ANY] {
...
}
</code></pre>
<p>I want these to use the same logic for a <code>std::array</code> of doubles instead of a C-style array. However, I am not sure what the correct syntax is for this typemap to apply to an array of arbitrary dimension. Would the following be sufficient?</p>
<pre><code>%apply double[ANY] { std::array<double, ANY> };
</code></pre>
<p>Thanks!</p>
|
<python><c++><arrays><swig>
|
2023-02-23 18:56:48
| 1
| 515
|
Danny
|
75,549,065
| 6,067,528
|
How can I use list unpacking with condition in list?
|
<p>This is what I'm attempting</p>
<p><code>[1,2,3 if False else *[6,5,7]]</code></p>
<p>This is what I'm expecting</p>
<p><code>[1,2,3,6,5,7]</code></p>
<p>How could I get this to work without flattening the list - i.e. <code>np.flatten([1,2,3 if False else [6,5,7]])</code> or similar</p>
<p>Is there an approach I can use to unpack <code>[6,5,7]</code> inside my list? Advice much appreciated!</p>
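<p>The <code>*</code> can't appear inside a conditional expression itself, but the conditional can produce a list that is then unpacked; a small sketch:</p>

```python
cond = False
# Wrap the conditional in parentheses so the whole expression yields a
# list, then unpack that list into the outer one.
result = [1, 2, *([3] if cond else [6, 5, 7])]
print(result)  # [1, 2, 6, 5, 7]
```

<p>With <code>cond = True</code> the same expression gives <code>[1, 2, 3]</code>.</p>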
|
<python>
|
2023-02-23 18:45:18
| 1
| 1,313
|
Sam Comber
|
75,548,903
| 7,483,211
|
How to make Snakemake wildcard work for empty string?
|
<p>I expected Snakemake to allow wildcards to be empty strings; alas, this isn't the case.</p>
<p>How can I make a wildcard accept an empty string?</p>
|
<python><wildcard><bioinformatics><snakemake>
|
2023-02-23 18:29:20
| 1
| 10,272
|
Cornelius Roemer
|
75,548,900
| 19,325,656
|
Fill out JSON file from pandas df
|
<p><strong>TLDR</strong>: I got a JSON file with data about exercises, parsed it, added custom data, and filled out the template JSON, but to my surprise my template wasn't filled out with different rows but with multiple copies of one row.</p>
<p>How can I populate my template with the values from each row? One row, one template.</p>
<pre><code>{
"exercises": [
{
"name": "3/4 Sit-Up",
"force": "pull",
"level": "beginner",
"mechanic": "compound",
"equipment": "body only",
"primaryMuscles": [
"abdominals"
],
"secondaryMuscles": [],
"instructions": [
"Lie down on the floor and secure your feet. Your legs should be bent at the knees.",
"Place your hands behind or to the side of your head. You will begin with your back on the ground. This will be your starting position.",
"Flex your hips and spine to raise your torso toward your knees.",
"At the top of the contraction your torso should be perpendicular to the ground. Reverse the motion, going only ΓΒΎ of the way down.",
"Repeat for the recommended amount of repetitions."
],
"category": "strength"
},
{
"name": "90/90 Hamstring",
"force": "push",
"level": "beginner",
"mechanic": null,
"equipment": "body only",
"primaryMuscles": [
"hamstrings"
],
"secondaryMuscles": [
"calves"
],
"instructions": [
"Lie on your back, with one leg extended straight out.",
"With the other leg, bend the hip and knee to 90 degrees. You may brace your leg with your hands if necessary. This will be your starting position.",
"Extend your leg straight into the air, pausing briefly at the top. Return the leg to the starting position.",
"Repeat for 10-20 repetitions, and then switch to the other leg."
],
"category": "stretching"
}]
}
</code></pre>
<p>I've imported this to pandas df with only 3 values that are important to me</p>
<pre><code>with open('exercises.json','r') as f:
data = json.loads(f.read())
df = pd.json_normalize(data, record_path = ['exercises'])
parsed_df = df[['name', 'category']]
parsed_df.loc[:, 'authorized'] = True
</code></pre>
<p>Then I've created my template and loaded in to the script</p>
<pre><code>{
"model": "myapp.exercise",
"pk": 1,
"fields": {
"name": "",
"category": "",
"authorized": ""
}
}
</code></pre>
<pre><code>with open('template.json','r') as f:
data_template = json.loads(f.read())
</code></pre>
<p>Next I iterated over each row in my parsed_df to get all the values, fill out copies of the template, and append them to a list:</p>
<pre><code>dict_list = []
for index, row in parsed_df.iterrows():
data_template['pk'] = index
data_template['fields']['name'] = row['name']
data_template['fields']['category'] = row['category']
data_template['fields']['authorized'] = row['authorized']
dict_list.append(data_template)
</code></pre>
<p>I printed out the row and index values first, to be sure I'm seeing different values in each iteration. Everything looked fine, but then I checked my dict_list and it contained multiple copies of the last row of parsed_df.</p>
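<p>The symptom described (a list full of copies of the last row) usually means the very same dict object was appended on every iteration; copying the template each time is one fix. A minimal sketch with two hypothetical rows:</p>

```python
import copy

data_template = {"model": "myapp.exercise", "pk": 1,
                 "fields": {"name": "", "category": "", "authorized": ""}}
rows = [("3/4 Sit-Up", "strength"), ("90/90 Hamstring", "stretching")]

dict_list = []
for index, (name, category) in enumerate(rows):
    entry = copy.deepcopy(data_template)  # fresh copy, incl. nested "fields"
    entry["pk"] = index
    entry["fields"]["name"] = name
    entry["fields"]["category"] = category
    entry["fields"]["authorized"] = True
    dict_list.append(entry)

print([d["fields"]["name"] for d in dict_list])
# ['3/4 Sit-Up', '90/90 Hamstring']
```

<p><code>deepcopy</code> matters here because the template nests another dict under "fields"; a shallow copy would still share it.</p>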
|
<python><pandas>
|
2023-02-23 18:29:02
| 1
| 471
|
rafaelHTML
|
75,548,786
| 1,410,221
|
How do I export audio stored in a numpy array in a lossy format like m4a?
|
<p>I have some text-to-speech code that gives me a numpy array for its audio output. I can export this audio array to a WAV file like so:</p>
<pre class="lang-py prettyprint-override"><code>sample_rate = 48000
audio_normalized = audio
audio_normalized = audio_normalized / np.max(np.abs(audio_normalized))
# [[https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.wavfile.write.html][scipy.io.wavfile.write β SciPy v1.10.0 Manual]]
scipy.io.wavfile.write(output_path, sample_rate, audio_normalized,)
</code></pre>
<p>But when the text is long, I get this error:</p>
<pre class="lang-py prettyprint-override"><code> File "/Users/evar/code/python/blackbutler/blackbutler/butler.py", line 216, in cmd_zsh
scipy.io.wavfile.write(output_path,
File "/Users/evar/mambaforge/lib/python3.10/site-packages/scipy/io/wavfile.py", line 812, in write
raise ValueError("Data exceeds wave file size limit")
ValueError: Data exceeds wave file size limit
</code></pre>
<p>So I think I need to convert the numpy array to a small, lossy format like <code>m4a</code> or <code>mp3</code> using Python, and then save that.</p>
|
<python><audio><scipy><mp3><text-to-speech>
|
2023-02-23 18:16:31
| 1
| 4,193
|
HappyFace
|
75,548,741
| 11,710,304
|
How to conditionally replace row values with when-then-otherwise in Polars?
|
<p>I have a data set with three columns. Column A is to be checked for partial string matches. If the string partially matches "foo", the value in L should be replaced by the value from column G; otherwise nothing should change. For this I have tried the following.</p>
<pre><code>import polars as pl

df = pl.DataFrame(
{
"A": ["foo", "ham", "spam", "egg",],
"L": ["A54", "A12", "B84", "C12"],
"G": ["X34", "C84", "G96", "L6",],
}
)
print(df)
shape: (4, 3)
ββββββββ¬ββββββ¬ββββββ
β A β L β G β
β --- β --- β --- β
β str β str β str β
ββββββββͺββββββͺββββββ‘
β foo1 β A54 β X34 β
β ham β A12 β C84 β
β foo2 β B84 β G96 β
β egg β C12 β L6 β
ββββββββ΄ββββββ΄ββββββ
expected outcome
shape: (4, 3)
ββββββββ¬ββββββ¬ββββββ
β A β L β G β
β --- β --- β --- β
β str β str β str β
ββββββββͺββββββͺββββββ‘
β foo1 β X34 β X34 β
β ham β A12 β C84 β
β foo2 β G96 β G96 β
β egg β C12 β L6 β
ββββββββ΄ββββββ΄ββββββ
</code></pre>
<p>I tried this</p>
<pre><code>df = df.with_columns(
pl.when(
pl.col("A")
.str.contains("foo"))
.then(pl.col("L"))
.alias("G")
.otherwise(pl.col("G"))
)
</code></pre>
<p>However, this does not work. Can someone help me with this?</p>
|
<python><dataframe><python-polars>
|
2023-02-23 18:11:12
| 2
| 437
|
Horseman
|
75,548,734
| 11,220,141
|
Out of bound timestamps in pandas
|
<p>I need to rewrite some SQL code in Python, and my problem is calculating differences in days:
<a href="https://i.sstatic.net/wtLEM.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wtLEM.jpg" alt="enter image description here" /></a>
As you can see, for cases with final_pmt_date '9999-12-31', the dates are subtracted easily.</p>
<p>But in pandas there is limit for datetime64 type, so I get exception:
<a href="https://i.sstatic.net/YGf2L.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YGf2L.jpg" alt="enter image description here" /></a></p>
<p>All the answers I saw were about converting these dates to NaT (with the 'coerce' keyword). But I need to calculate the number of days for such datetimes as well.</p>
<p>Thank you in advance</p>
|
<python><pandas><datetime><timestamp>
|
2023-02-23 18:11:03
| 2
| 365
|
ΠΡΠ±ΠΎΠ²Ρ ΠΠΎΠ½ΠξΠΌΠ°ΡΠ΅Π²Π°
|
75,548,708
| 5,924,264
|
How to vectorize a for loop of yields?
|
<p>I currently have the following for loop that loops over each row and yields a <code>RowClass</code> object that's instantiated based on a row in <code>df</code></p>
<pre><code># df is a Pandas dataframe
for i in range(len(df)):
# RowClass is a C++ class that's exposed through pybind that simply converts the row to an
# object with attributes corresponding to each column
yield RowClass(df.iloc[i])
</code></pre>
<p>Is it possible to vectorize this and optimize away the for loop?</p>
|
<python><pandas><vectorization><yield>
|
2023-02-23 18:08:25
| 0
| 2,502
|
roulette01
|