| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,923,269
| 10,866,873
|
Tkinter multi-parent class not initialising
|
<p>I am working on overhauling Tkinter to do a few extra things. To facilitate this I have created a bunch of classes for each of the widgets along with a base class for global methods.</p>
<p>The issue is that the global base class is never initialised when a widget calls <code>super().__init__()</code>, because the underlying Tkinter widget classes do not make a cooperative <code>super()</code> call themselves.</p>
<pre class="lang-py prettyprint-override"><code>from tkinter import *
from uuid import uuid4

class baseWidget:
    def __init__(self, master, *args, **kwargs):
        super().__init__()
        print("baseWidget Initialised")
        self.__unique_id = uuid4()  # example of why it's needed

class a:
    def __init__(self, master, **kwargs):
        print("a Initialised")
        super().__init__(master, **kwargs)

class cFrame(Frame, baseWidget):
#class cFrame(a, baseWidget):
    def __init__(self, master, **kwargs):
        super(cFrame, self).__init__(master, **kwargs)
        #baseWidget.__init__(self, master, **kwargs)

a = 1
if a:
    root = Tk()
    root.geometry("400x400")
    f = cFrame(root, bg="red")
    f.pack(side=TOP, fill=X, expand=1)
    b = Button(f, text="submit")
    b.pack(side=TOP)
    root.mainloop()
else:
    cFrame('test', a=1, b=2)
</code></pre>
<p>In the above code there are 2 toggle states, set a=0 and switch the cFrame class comments around for the correct implementation of how it <strong>should</strong> work (noted as s1), or leave them as shown to see the current issue (noted as s2).</p>
<p>In s1 both class <code>a</code> and class <code>baseWidget</code> are called (with class <code>a</code> being a stand-in for <code>Frame</code>) and both classes are initialised.</p>
<p>In s2 class <code>Frame</code> is initialised but class <code>baseWidget</code> is not initialised, however methods available to <code>baseWidget</code> are available to the child class.</p>
<p>As a work around I did create a second method:</p>
<pre class="lang-py prettyprint-override"><code>def __xinit__(self, master, **kwargs):
    super(cFrame, self).__init__(master, **kwargs)
    baseWidget.__init__(self, master, **kwargs)
</code></pre>
<p>and switched the standard <code>super()</code> call in my <code>__init__</code> method to call <code>__xinit__</code> instead. However, this solution requires modifying every class to include the same copy-pasted workaround.</p>
<p>I am hoping to find a better solution to initialise both classes without bloating the code with the same method throughout.</p>
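<p>A pattern that avoids a per-class workaround (a sketch with a plain stand-in for <code>Frame</code>, since the widget classes are not cooperative): put the shared mixin <em>first</em> in the bases, so the MRO reaches it before the non-cooperative class, and let the mixin forward the <code>super()</code> call.</p>

```python
# Stand-in for tkinter.Frame: initialises itself but never calls super()
class NonCooperative:
    def __init__(self, master, **kwargs):
        self.inited = ["NonCooperative"]

# Shared mixin: forwards along the MRO before doing its own setup
class BaseWidget:
    def __init__(self, master, **kwargs):
        super().__init__(master, **kwargs)
        self.inited.append("BaseWidget")

# Mixin listed first, so BaseWidget.__init__ runs and reaches NonCooperative
class CFrame(BaseWidget, NonCooperative):
    def __init__(self, master, **kwargs):
        super().__init__(master, **kwargs)

f = CFrame("root")
print(f.inited)  # ['NonCooperative', 'BaseWidget']
```

<p>With real tkinter the analogous ordering would be <code>class cFrame(baseWidget, Frame)</code>; this is untested against Tk itself, so treat it as a starting point.</p>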
|
<python><python-3.x><tkinter>
|
2023-04-03 19:48:50
| 0
| 426
|
Scott Paterson
|
75,922,887
| 21,420,742
|
Counting how many IDs report to a manager in Python
|
<p>I have a pandas/numpy dataset that tracks employees and their managers. I want to count how many reports each manager has.</p>
<p>Here is the sample dataset:</p>
<pre><code>ID Date Job Dept. Manager ID
1 Oct 2022 Sales Rep Sales 5
1 Dec 2022 Sales Rep Sales 5
1 Feb 2023 Sales Rep Sales 5
2 Feb 2022 Tech Support Tech 4
2 Jun 2022 Sales Advisor Sales 5
2 Nov 2022 Sales Advisor Sales 5
3 Dec 2021 Tech Consult Tech 4
3 Sept 2022 Tech Advisor Tech 4
</code></pre>
<p>The output I want is:</p>
<pre><code>Manager ID Reports
4 1
5 2
</code></pre>
<p>My current code I have used is :</p>
<pre><code>counts = df['ID'].groupby(df['Manager ID'].astype(float)).nunique()
df['Reports'] = df['ID'].astype(str).map(counts).fillna(0, downcast ='infer')
</code></pre>
<p>This code only outputs zeros. Any suggestions?</p>
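<p>For reference, one way to get the desired table (a hedged sketch rebuilding only the relevant columns; it assumes "reports" means each employee is counted under their most recent manager, and that rows are in chronological order per ID):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "ID":         [1, 1, 1, 2, 2, 2, 3, 3],
    "Manager ID": [5, 5, 5, 4, 5, 5, 4, 4],
})

# Keep each employee's last (most recent) row, then count distinct IDs
latest = df.drop_duplicates(subset="ID", keep="last")
reports = (latest.groupby("Manager ID")["ID"]
                 .nunique()
                 .reset_index(name="Reports"))
print(reports)
```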
|
<python><python-3.x><pandas><dataframe><numpy>
|
2023-04-03 18:57:29
| 4
| 473
|
Coding_Nubie
|
75,922,876
| 4,454,635
|
Python - overwrite constant from imported file and use in imported functions
|
<p>I have a module where some constants are defined and also used in several functions. How can I override their values from my main file?</p>
<p>Say this is the module, <code>test_import.py</code></p>
<pre><code>MY_CONST = 1

def my_func(var=MY_CONST):
    print(var)
</code></pre>
<p>And this is my <code>main.py</code> file:</p>
<pre><code>import test_import
MY_CONST = 2
test_import.MY_CONST = 3
test_import.my_func()
</code></pre>
<p>This code still prints "1". I want it to print some other value (obviously, without passing a value when calling <code>my_func()</code>)</p>
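<p>For context, a default like <code>var=MY_CONST</code> is evaluated once, when <code>def</code> runs, so later assignments to <code>test_import.MY_CONST</code> cannot affect it. A sketch of the usual sentinel workaround (single-file here for brevity; the same applies with <code>test_import.MY_CONST = 3</code> from <code>main.py</code>):</p>

```python
MY_CONST = 1

def my_func(var=None):
    if var is None:
        var = MY_CONST   # looked up at call time, not at def time
    return var

before = my_func()
MY_CONST = 3             # what test_import.MY_CONST = 3 does from main.py
after = my_func()
print(before, after)  # 1 3
```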
|
<python><python-import>
|
2023-04-03 18:56:10
| 1
| 3,186
|
horace_vr
|
75,922,599
| 5,212,614
|
Is Spyder pointed to the wrong folder or is my setup totally wrong?
|
<p>I was using Spyder just fine until a few hours ago. Now, I realize, there are two ways to access Spyder, as seen in the image below.</p>
<p><a href="https://i.sstatic.net/wUi5S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wUi5S.png" alt="enter image description here" /></a></p>
<p>I guess something just recently changed, but I'm not sure what it is. Anyway, if I click on the first icon, nothing happens. This used to open my Spyder IDE. If I click on the second icon, the Spyder IDE opens and I see this.</p>
<p><a href="https://i.sstatic.net/95Udr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/95Udr.png" alt="enter image description here" /></a></p>
<p>Now, when I select that single line of code and hit F9, I get a message that says <code>ModuleNotFoundError: No module named 'pandas'</code>. However, Pandas is already installed: if I open the Anaconda Prompt and type <code>pip install pandas</code>, it confirms that Pandas is already installed.</p>
<p><a href="https://i.sstatic.net/GHYxK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GHYxK.png" alt="enter image description here" /></a></p>
<p>What is wrong with my setup? How can I reset things to the way they were a few hours ago?</p>
|
<python><python-3.x><anaconda><spyder>
|
2023-04-03 18:21:25
| 1
| 20,492
|
ASH
|
75,922,563
| 14,024,882
|
xticklabels not disappearing even after setting them to null
|
<p>I am trying to plot things from 0.01 to 0.1 on a log scale. I am using the following piece of code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import matplotlib.ticker as tck
from matplotlib.ticker import StrMethodFormatter

fig = plt.figure(figsize=(4/1.6, 3/1.6), constrained_layout=True)
ax = plt.axes()
plt.rcParams["axes.labelweight"] = "bold"
ax.tick_params(direction='in', bottom=True, top=True, left=True, right=True, which='both')
ax.tick_params(axis='x', labelsize=8)
ax.tick_params(axis='y', labelsize=8)
ax.plot(x_data, y_data)
ax.minorticks_off()
ax.set_xscale('log')
yticks = np.arange(0.985, 1.005, 0.005)
ax.set_yticks(yticks)
ax.set_yticklabels(ax.get_yticks(), weight='bold')
ax.set_ylim(0.985, 1.0)
ax.set_xlim(0.01, 0.1)
ax.yaxis.set_minor_locator(tck.AutoMinorLocator())
ax.yaxis.set_major_formatter(StrMethodFormatter('{x:1.3f}'))
ax.xaxis.set_major_formatter(StrMethodFormatter('{x:1.2f}'))
ax.set_aspect('auto')
</code></pre>
<p>As I am doing this, I am still getting the labels of minorticks on the x-axis. This is what I am seeing:
<a href="https://i.sstatic.net/FwNpw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FwNpw.png" alt="enter image description here" /></a></p>
<p>My question is: why am I seeing all this extraneous stuff in the xtick labels? I have tried calling <code>ax.set_xticklabels([])</code> at the very top and inserting labels manually, but that still did not work. <strong>What am I doing wrong here?</strong></p>
<p><em>My desired output is for xticklabels to be bold and be "0.01, 0.025, 0.05, 0.075, 0.1".</em></p>
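<p>On a log-scaled axis the extra numbers come from the <em>minor</em> tick formatter, so clearing the major labels alone does not help. A sketch (headless backend, no real data) that silences the minor formatter and places the desired majors explicitly:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.ticker import NullFormatter

fig, ax = plt.subplots()
ax.set_xscale("log")
ax.set_xlim(0.01, 0.1)

# Minor tick labels on a log axis come from the minor formatter
ax.xaxis.set_minor_formatter(NullFormatter())

ticks = [0.01, 0.025, 0.05, 0.075, 0.1]
ax.set_xticks(ticks)
ax.set_xticklabels([f"{t:g}" for t in ticks], weight="bold")

fig.canvas.draw()
labels = [t.get_text() for t in ax.get_xticklabels()]
print(labels)
```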
|
<python><matplotlib>
|
2023-04-03 18:17:21
| 1
| 365
|
megamence
|
75,922,456
| 14,236,252
|
Is session auth possible in Django Rest Framework with a mobile client?
|
<p>I want my application to have a mobile client (non-native, built using Quasar). As I understand it, token auth will work, so I have chosen and implemented JWT for the web app, though I haven't tested the mobile client yet. Is it possible/recommended to use session auth in this context?</p>
|
<python><django><django-rest-framework><quasar>
|
2023-04-03 18:02:48
| 2
| 514
|
bendowlingtech
|
75,922,425
| 7,009,988
|
pyplot second subplot won't show
|
<p>I'm trying to plot 2 scatterplots next to each other using pyplot, but I can't get it to work.</p>
<pre><code>x1 = list(stablepdfs.keys())
y1 = list(stablepdfs.values())
plt.scatter(x1,y1)
plt.title('Size of Succeeded vs Failed Pdfs')
plt.xlabel('Pdf Size in MB')
plt.ylabel('# of Pdfs')
plt.subplot(211)
#-----------------------------------
x2 = list(unstablepdfs.keys())
y2 = list(unstablepdfs.values())
plt.scatter(x2,y2, c='r')
plt.title('Size of Succeeded vs Failed Pdfs')
plt.xlabel('Pdf Size in MB')
plt.ylabel('# of Pdfs')
plt.subplot(212)
plt.show()
</code></pre>
<p>However, I only see one plot.</p>
<p><img src="https://stackoverflow.microsoft.com/images/a/d0c14a24-31da-4e40-81d4-e99833a60698.png" alt="enter image description here" /></p>
<p>Interestingly, when I reverse the order of the code and plot the other one first, now I am only able to see the other plot.</p>
<pre><code>x2 = list(unstablepdfs.keys())
y2 = list(unstablepdfs.values())
plt.scatter(x2,y2, c='r')
plt.title('Size of Succeeded vs Failed Pdfs')
plt.xlabel('Pdf Size in MB')
plt.ylabel('# of Pdfs')
plt.subplot(212)
#-----------------------------------
x1 = list(stablepdfs.keys())
y1 = list(stablepdfs.values())
plt.scatter(x1,y1)
plt.title('Size of Succeeded vs Failed Pdfs')
plt.xlabel('Pdf Size in MB')
plt.ylabel('# of Pdfs')
plt.subplot(211)
plt.show()
</code></pre>
<p><a href="https://stackoverflow.microsoft.com/images/a/73caa631-820e-4417-950e-a8752f87d0e3.png" rel="nofollow noreferrer"><img src="https://stackoverflow.microsoft.com/images/a/73caa631-820e-4417-950e-a8752f87d0e3.png" alt="enter image description here" /></a></p>
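<p>For reference, <code>plt.subplot()</code> selects the current axes, so it has to be called <em>before</em> drawing into that panel; in the snippets above each <code>subplot()</code> call comes after the plotting commands it was meant to receive. A sketch with dummy data:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig = plt.figure()

plt.subplot(211)                   # choose the top panel first...
plt.scatter([1, 2, 3], [4, 5, 6])  # ...then draw into it
plt.ylabel('# of Pdfs')

plt.subplot(212)                   # then the bottom panel
plt.scatter([1, 2, 3], [6, 5, 4], c='r')
plt.xlabel('Pdf Size in MB')

print(len(fig.axes))
```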
|
<python><matplotlib>
|
2023-04-03 17:57:47
| 0
| 1,643
|
wheeeee
|
75,922,371
| 10,620,003
|
Change a parameter or function name across all .py files
|
<p>I have ~10 different .py files. For example, here are two functions in two different .py files. I want to rename the parameter b to c and the function sum to new_name, and have the change applied across all the other .py files. Does Python have any option for that?</p>
<p>f1.py:</p>
<pre><code>def sum(a,b):
    return a+b
</code></pre>
<p>f2.py:</p>
<pre><code>import f1

def foo(a,b):
    return a+2*b + sum(a,b)
</code></pre>
<p>The desired output is:</p>
<p>f1.py:</p>
<pre><code>def new_name(a,c):
    return a+c
</code></pre>
<p>f2.py:</p>
<pre><code>def foo(a,c):
    return a+2*c + new_name(a,c)
</code></pre>
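<p>A proper rename refactor is normally done with an AST-aware tool (an IDE rename, or the <code>rope</code> library). For illustration only, a naive whole-word textual sketch; it will also rewrite matching words inside strings and comments, so treat it as a rough approximation:</p>

```python
import re
import tempfile
from pathlib import Path

renames = {"sum": "new_name", "b": "c"}  # old -> new

def rename_in_text(text: str) -> str:
    # \b...\b keeps e.g. "sum" from matching inside "summary"
    for old, new in renames.items():
        text = re.sub(rf"\b{re.escape(old)}\b", new, text)
    return text

root = Path(tempfile.mkdtemp())
(root / "f1.py").write_text("def sum(a,b):\n    return a+b\n")

for path in root.glob("**/*.py"):
    path.write_text(rename_in_text(path.read_text()))

result = (root / "f1.py").read_text()
print(result)
```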
|
<python>
|
2023-04-03 17:50:25
| 1
| 730
|
Sadcow
|
75,922,153
| 21,420,742
|
How to get count of Employee activity by Manager in Python
|
<p>I have a pandas/numpy dataset of employee activity by ID #: promotions, terminations, job switches, etc. I want to count the number of changes, grouped by manager.</p>
<p>Here is a sample of the data. For reference: 1 = Yes, 0 = No.</p>
<pre><code>ID Date Job_Title ManagerID Status Terminated Job_Change Team_Change
1 May 2022 Sales Rep 7 Active 0 0 0
1 Oct 2022 Sales Consultant 7 Active 0 1 0
1 Jan 2023 Sales Consultant 7 Active 0 0 0
2 Feb 2022 Tech Advisor 3 Active 0 0 0
2 May 2022 Tech Advisor 3 Termed 1 0 0
3 Dec 2021 Sales Supervisor 7 Active 0 0 0
3 Jan 2022 Tech Supervisor 10 Active 0 1 1
3 Feb 2023 Tech Manager 10 Active 0 1 0
</code></pre>
<p>What I want the output to look like:</p>
<pre><code>ManagerID Terminated Job_Change Team Change
3 1 0 0
7 0 1 0
10 0 2 1
</code></pre>
<p>Is there a way to print this output without having to create a new dataframe?</p>
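<p>Since the flags are 0/1, summing them per manager counts the events directly (a sketch rebuilding only the flag columns; <code>print</code> avoids keeping a new dataframe around):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "ManagerID":   [7, 7, 7, 3, 3, 7, 10, 10],
    "Terminated":  [0, 0, 0, 0, 1, 0, 0, 0],
    "Job_Change":  [0, 1, 0, 0, 0, 0, 1, 1],
    "Team_Change": [0, 0, 0, 0, 0, 0, 1, 0],
})

# 0/1 flags: the per-manager sum is the event count
out = df.groupby("ManagerID")[["Terminated", "Job_Change", "Team_Change"]].sum()
print(out)
```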
|
<python><python-3.x><pandas><dataframe><numpy>
|
2023-04-03 17:21:44
| 1
| 473
|
Coding_Nubie
|
75,921,970
| 726,730
|
PyQt5 QGroupBox with checkbox - disable checkbox only not the child widgets
|
<p>In PyQt5, if I have a checkable QGroupBox, is there an easy way to disable only the checkbox (its True/False toggling) but not the child widgets of the QGroupBox?</p>
|
<python><pyqt5><qgroupbox>
|
2023-04-03 16:59:19
| 0
| 2,427
|
Chris P
|
75,921,952
| 6,928,292
|
Create N tasks in a step function, one for each lambda function in a list
|
<p>I'm just trying to create a simple step function where each task runs sequentially:
<code>task1 -> task2 -> task3</code></p>
<p>One task for every lambda in <code>lambda_handler_paths</code> = <code>['/Users/aaronwest/Code/ce-engineering-test-live/tests/tf-senior-engineer/validators/my-other-function/lambda.zip', '/Users/aaronwest/Code/ce-engineering-test-live/tests/tf-senior-engineer/validators/my-function/lambda.zip']</code></p>
<p>Here's what I've been playing with to try and get something working:</p>
<pre><code>def build_state_machine_definition(validators: list) -> dict:
    """Builds a state machine definition for a given list of tests"""
    # Create initial state
    definition = sfn.Chain.start(state=sfn.Pass(self, "Pass"))
    # Loop over tests and add to state machine definition
    for i, validator in enumerate(validators):
        # Get the name of the lambda function
        lambda_name = validator.split("/")[-2]
        print("lambda_name: " + str(lambda_name))
        lambda_function = _lambda.Function(
            self,
            f"{lambda_name}-lambda",
            runtime=_lambda.Runtime.PYTHON_3_8,
            code=_lambda.Code.from_asset(validator),
            handler="lambda_function.lambda_handler",
            timeout=Duration.seconds(30),
        )
        if i == len(validators) - 1:
            # This is the last LambdaInvoke task
            definition.next(
                tasks.LambdaInvoke(
                    self, f"{lambda_name}-Validation", lambda_function=lambda_function
                )
            )
        else:
            definition.next(
                tasks.LambdaInvoke(
                    self, f"{lambda_name}-Validate", lambda_function=lambda_function
                )
            )
    return definition


state_machine_definition = build_state_machine_definition(lambda_handler_paths)

# Create a single Step Function
sfn.StateMachine(
    self,
    "StateMachine",
    definition=state_machine_definition,
    timeout=Duration.minutes(10),
)
</code></pre>
<p>At the moment I'm receiving <code>RuntimeError: Error: State 'Pass' already has a next state</code>.</p>
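<p>The error suggests <code>next()</code> is being called repeatedly on the same <code>Pass</code> state. In the CDK, <code>Chain.next()</code> returns the extended chain, so the result should be reassigned each iteration (e.g. <code>definition = definition.next(tasks.LambdaInvoke(...))</code>). A CDK-free sketch of the pattern:</p>

```python
# Minimal stand-in for a state that, like sfn states, allows one successor
class State:
    def __init__(self, name):
        self.name = name
        self._next = None

    def next(self, other):
        if self._next is not None:
            raise RuntimeError(f"State '{self.name}' already has a next state")
        self._next = other
        return other  # chain from the returned tail, not the head

definition = State("Pass")
tail = definition
for name in ["task1", "task2", "task3"]:
    tail = tail.next(State(name))  # reassign; don't call definition.next(...)

order, s = [], definition
while s is not None:
    order.append(s.name)
    s = s._next
print(order)
```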
|
<python><python-3.x><aws-cdk><aws-step-functions>
|
2023-04-03 16:57:51
| 1
| 1,815
|
aphexlog
|
75,921,938
| 13,004,915
|
Bootstrap a function with multiple arguments using scipy.stats.bootstrap
|
<p>I am trying to compute the Standard Error of an estimate using <code>scipy.stats.bootstrap</code>. The function I am using takes two arguments. E.g. I have two lists like:</p>
<pre class="lang-py prettyprint-override"><code>x = [12, 14, 82, 55, 63, 56]
w = [0.61, 1.01, 1.8, 2.6, 0.93, 1.13]
</code></pre>
<p>And I would like to bootstrap a function similar to:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
np.average(x, weights=w) # <- Or any other function that takes 2 or more arguments.
</code></pre>
<p>I have tried:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.stats import bootstrap

x = [12, 14, 82, 55, 63, 56]
w = [0.61, 1.01, 1.8, 2.6, 0.93, 1.13]

# I tried converting 2 arguments into only 1.
def weighted_mean(z):
    return np.average(z[0], weights=z[1])

bootstrap(((np.array(x), np.array(w)), ),
          statistic=weighted_mean,
          confidence_level=0.95, axis=0)
</code></pre>
<p>But I get the following error:</p>
<pre><code># IndexError: index 1 is out of bounds for axis 0 with size 1
</code></pre>
<p>How can I compute the Standard Error using <code>scipy.stats.bootstrap</code> for that function or a similar one?</p>
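<p>One route that should avoid packing the arguments yourself (a sketch; it assumes SciPy ≥ 1.7, where <code>paired=True</code> resamples <code>(x_i, w_i)</code> pairs together and passes the sequences to the statistic as separate arguments):</p>

```python
import numpy as np
from scipy.stats import bootstrap

x = [12, 14, 82, 55, 63, 56]
w = [0.61, 1.01, 1.8, 2.6, 0.93, 1.13]

def weighted_mean(x, w):
    return np.average(x, weights=w)

# paired=True keeps x[i] and w[i] together in each resample
res = bootstrap((x, w), statistic=weighted_mean, paired=True,
                vectorized=False, confidence_level=0.95,
                method="percentile")
print(res.standard_error)
```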
|
<python><scipy>
|
2023-04-03 16:55:42
| 1
| 759
|
Josep Espasa
|
75,921,890
| 504,717
|
How to force SQLAlchemy to use bigint for a specific AND clause
|
<p>I have this code</p>
<pre><code>my_query = query.filter(
sa.and_(Content.target_type == 'a_name', Content.target_ids.overlap(a_name_ids))
)
</code></pre>
<p><code>a_name_ids</code> is a value that gets fed into the system from another API. The API has changed: the value now contains both big ints and small ints, i.e. values like 30 and <code>5000002212222</code>. Because of this, my app throws the error <code>operator does not exist: integer[] && bigint[]</code>.</p>
<p>What's the best way to solve this? Can I force SQLAlchemy to cast all the values from <code>a_name_ids</code> to BigInt? Any other approach?</p>
<p>I believe we are using <code>sqlalchemy=1.1.18</code>.</p>
<p><code>Content.target_ids</code> is defined as <code>target_ids = Column(ARRAY(sa.Integer))</code>, but changing that would require a big change across the other platforms that use this shared lib as well.</p>
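<p>One option that avoids changing the column definition is to cast both sides of the comparison to <code>bigint[]</code> in the query itself. A sketch against a hypothetical stand-in column (modern SQLAlchemy; 1.1.18 may need the equivalent spelled slightly differently):</p>

```python
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql

# Hypothetical stand-in for Content.target_ids (ARRAY(sa.Integer) in the model)
target_ids = sa.column("target_ids", postgresql.ARRAY(sa.Integer))
a_name_ids = [30, 5000002212222]

# Cast both sides so Postgres resolves bigint[] && bigint[]
expr = sa.cast(target_ids, postgresql.ARRAY(sa.BigInteger)).overlap(
    sa.cast(postgresql.array(a_name_ids), postgresql.ARRAY(sa.BigInteger))
)
sql = str(expr.compile(dialect=postgresql.dialect()))
print(sql)
```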
|
<python><postgresql><sqlalchemy><python-3.8>
|
2023-04-03 16:49:32
| 0
| 8,834
|
Em Ae
|
75,921,584
| 10,271,487
|
Pyplot creating chart as 1 continuous line instead of multiple individual lines
|
<p>Plotting a dataframe results in one line instead of one line per iteration.</p>
<p>Instead of multiple individual lines tracing positions from the dataframe, I get a single line that connects the end of one iteration to the start of the next, and I'm not sure why.</p>
<pre><code>fig = plt.figure(figsize=(16,6))
lane2 = trajec.loc[trajec.Lane_ID == 2].sort_values(by=['Vehicle_ID', 'Frame_ID']).loc[slice(None), slice(0, 1500),:]
for id in lane2.index.get_level_values(0).unique():  # gets vehicle Ids
    yaxis = lane2['ewm_y'].loc[slice(id)]
    xaxis = yaxis.index.get_level_values(1)
    plt.plot(xaxis, yaxis)
plt.show()
</code></pre>
<p>Output:
<a href="https://i.sstatic.net/PXQWx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PXQWx.png" alt="enter image description here" /></a></p>
<p>As you can see, the graph is convoluted, with each iteration connected to the previous <code>plt.plot(x, y)</code> call.</p>
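<p>One possible culprit (an assumption based on the snippet): <code>lane2['ewm_y'].loc[slice(id)]</code> means <code>.loc[:id]</code>, which selects every vehicle up to and including <code>id</code>, so each <code>plt.plot</code> call redraws all earlier trajectories as one series whose x values jump back between vehicles. A small sketch of the difference:</p>

```python
import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [(1, 0), (1, 1), (2, 0), (2, 1)], names=["Vehicle_ID", "Frame_ID"])
s = pd.Series([10.0, 11.0, 20.0, 21.0], index=idx, name="ewm_y")

cumulative = s.loc[slice(2)]  # same as s.loc[:2] -> vehicles 1 AND 2
single = s.loc[2]             # just vehicle 2
print(len(cumulative), len(single))
```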
|
<python><matplotlib>
|
2023-04-03 16:14:28
| 1
| 309
|
evan
|
75,921,577
| 488,917
|
Murmur3 Hash Compatibility Between Go and Python
|
<p>We have two different libraries, one in Python and one in Go, that need to compute murmur3 hashes identically. Unfortunately, no matter how hard we try, we cannot get the libraries to produce the same result. It appears from <a href="https://stackoverflow.com/questions/29932956/murmur3-hash-different-result-between-python-and-java-implementation">this SO question about Java and Python</a> that compatibility isn't necessarily straightforward.</p>
<p>Right now we're using the <a href="https://pypi.org/project/mmh3/" rel="nofollow noreferrer">python mmh3</a> and <a href="https://pkg.go.dev/github.com/spaolacci/murmur3" rel="nofollow noreferrer">Go github.com/spaolacci/murmur3</a> libraries.</p>
<p>In Go:</p>
<pre class="lang-golang prettyprint-override"><code>hash := murmur3.New128()
hash.Write([]byte("chocolate-covered-espresso-beans"))
fmt.Println(base64.RawURLEncoding.EncodeToString(hash.Sum(nil)))
// Output: cLHSo2nCBxyOezviLM5gwg
</code></pre>
<p>In Python:</p>
<pre class="lang-py prettyprint-override"><code>name = "chocolate-covered-espresso-beans"
hash = mmh3.hash128(name.encode('utf-8'), signed=False).to_bytes(16, byteorder='big', signed=False)
print(base64.urlsafe_b64encode(hash).decode('utf-8').strip("="))
# Output: jns74izOYMJwsdKjacIHHA (big byteorder)
hash = mmh3.hash128(name.encode('utf-8'), signed=False).to_bytes(16, byteorder='little', signed=False)
print(base64.urlsafe_b64encode(hash).decode('utf-8').strip("="))
# Output: HAfCaaPSsXDCYM4s4jt7jg (little byteorder)
hash = mmh3.hash_bytes(name.encode('utf-8'))
print(base64.urlsafe_b64encode(hash).decode('utf-8').strip("="))
# Output: HAfCaaPSsXDCYM4s4jt7jg
</code></pre>
<p>In Go, <code>murmur3</code> returns a <code>uint64</code> so we assume <code>signed=False</code> in Python; however we also tried <code>signed=True</code> and did not get matching hashes.</p>
<p>We're open to different libraries, but are wondering if there is something wrong with either our Go or Python methodologies of computing a base64 encoded hash from a string. Any help appreciated.</p>
|
<python><go><hash><murmurhash>
|
2023-04-03 16:14:02
| 1
| 5,432
|
bbengfort
|
75,921,438
| 6,224,557
|
Replace column value in pandas dataframe based on another column value
|
<p>I have the following pandas dataframe:</p>
<pre><code> Material Min_lot Reorder
0 E01 1 0
1 E02 1 -1
2 E03 1 10
3 E04 1 1
4 E05 5 0
5 E06 5 -1
6 E07 5 10
7 E08 5 5
8 E09 5 3
</code></pre>
<p>I want to create a new column or replace the existing Reorder based on the following rules</p>
<pre><code>Reorder < 0        -> replace Reorder with 0
Reorder >= Min_lot -> do not change Reorder
Reorder < Min_lot  -> replace Reorder with the Min_lot value
</code></pre>
<p>as an example the output is:</p>
<p><a href="https://i.sstatic.net/hliQY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hliQY.png" alt="enter image description here" /></a></p>
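<p>A sketch producing one possible reading of the rules (assumption: rule 1 wins, so a negative Reorder becomes 0 and is not then raised to Min_lot):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Material": [f"E0{i}" for i in range(1, 10)],
    "Min_lot":  [1, 1, 1, 1, 5, 5, 5, 5, 5],
    "Reorder":  [0, -1, 10, 1, 0, -1, 10, 5, 3],
})

r, m = df["Reorder"], df["Min_lot"]
# negative -> 0; below Min_lot -> Min_lot; otherwise unchanged
df["Reorder"] = np.where(r < 0, 0, np.where(r < m, m, r))
print(df["Reorder"].tolist())
```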
|
<python><pandas>
|
2023-04-03 15:57:40
| 5
| 305
|
Rolando Azevedo
|
75,921,409
| 8,862,235
|
Group by with applying a custom function
|
<p>I want to get a list of points after performing an aggregation.
Suppose that I have the following dataset:</p>
<pre><code>AC_ID,AC_PERIOD,ROU_ROUTE_ID,SEQUENCE_NR,SEG_POINT_FROM_ID,SEG_POINT_TO_ID
502,S,A1,1,BRY,LUREN
502,S,A1,2,LUREN,DJL
502,S,A1,3,DJL,LISMO
502,S,A1,4,LISMO,SIROD
502,S,A100,1,KL,*ROS1
502,S,A100,2,*ROS1,AMEPU
502,S,A100,3,AMEPU,SARZI
</code></pre>
<p>In the end, I wish to have:</p>
<pre><code>A1,[BRY, LUREN, DJL, LISMO]
A100,[KL, *ROS1, AMEPU, SARZI]
</code></pre>
<p>As you can see, I would group by ROU_ROUTE_ID and take all the elements from SEG_POINT_FROM_ID plus the last element of SEG_POINT_TO_ID.</p>
<p>To do this I started with a group by, and I want to apply a function (defined beforehand) that implements the point logic, something like this:</p>
<pre><code>data.groupby('ROU_ROUTE_ID')[['SEG_POINT_FROM_ID','SEG_POINT_TO_ID']].apply(lambda x: f(x))
</code></pre>
<p>My problem is that I can't access both columns inside <code>f(x)</code>.</p>
<p>I think there is a faster way of doing this calculation, but I have been stuck on this for hours.</p>
<p>I was also considering an iterative algorithm without pandas, but I don't know whether it would be faster than using pandas out of the box.</p>
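<p>Selecting both columns before <code>apply</code> hands the function a per-group DataFrame, so both columns are reachable inside it. A sketch on the sample (following the written rule "all FROM points plus the last TO point"; note that the desired output above appears to omit A1's final SIROD):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "ROU_ROUTE_ID": ["A1"] * 4 + ["A100"] * 3,
    "SEG_POINT_FROM_ID": ["BRY", "LUREN", "DJL", "LISMO", "KL", "*ROS1", "AMEPU"],
    "SEG_POINT_TO_ID":   ["LUREN", "DJL", "LISMO", "SIROD", "*ROS1", "AMEPU", "SARZI"],
})

def route_points(g):
    # all FROM points, then the final TO point closes the route
    return g["SEG_POINT_FROM_ID"].tolist() + [g["SEG_POINT_TO_ID"].iloc[-1]]

routes = (df.groupby("ROU_ROUTE_ID", sort=False)
            [["SEG_POINT_FROM_ID", "SEG_POINT_TO_ID"]]
            .apply(route_points))
print(routes["A100"])
```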
|
<python><pandas><dataframe><group-by>
|
2023-04-03 15:54:41
| 3
| 323
|
Filip
|
75,921,348
| 4,044,696
|
Read CSV with specific columns and the rest into the last column
|
<p>I have a set of columnar data, separated by spaces:</p>
<pre><code>Lorem ipsum dolor sit amet, consectetur adipiscing elit
</code></pre>
<p>What is needed is to read it as CSV, separated by whitespace, but with only the first 5 columns parsed individually; the rest of the text should be a single column, something like this:</p>
<pre><code>|Lorem|ipsum|dolor|sit|amet,|consectetur adipiscing elit|
</code></pre>
<p>The last column should contain everything beyond the first 5 fields, so it has a variable number of words and spaces.</p>
<p>Reading it as CSV with one column per word is no problem.</p>
<p>But I am struggling to get the variable-word-count tail into a single column.</p>
<p>Any help is greatly appreciated.</p>
<p>I did read it line by line, parsing and inserting into a Pandas DF, but this was slow. So anything with <code>read_csv()</code> or similar would be nice.</p>
<p>Basically, it's a log file:</p>
<pre><code>|ID|Datetime|Level|Recovered|Place|Message
1 20230102 WARN Yes func1 Something wrong
2 20230103 ERROR No func2 Something really bad
</code></pre>
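<p><code>str.split(maxsplit=5)</code> does exactly this per line: five splits, remainder kept whole. A sketch building the frame from the log sample (faster than per-row inserts, though not a pure <code>read_csv</code> call):</p>

```python
import io
import pandas as pd

raw = io.StringIO(
    "1 20230102 WARN Yes func1 Something wrong\n"
    "2 20230103 ERROR No func2 Something really bad\n"
)

cols = ["ID", "Datetime", "Level", "Recovered", "Place", "Message"]
# at most 5 splits per line: five fields + the free-text tail
rows = [line.strip().split(maxsplit=5) for line in raw if line.strip()]
df = pd.DataFrame(rows, columns=cols)
print(df["Message"].tolist())
```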
|
<python><pandas><csv>
|
2023-04-03 15:47:21
| 3
| 20,330
|
Severin Pappadeux
|
75,921,330
| 2,009,409
|
Django and Time Zone for regions that no longer observe the Daylight Save Time (DST)
|
<p>I have a Django project with time zones enabled for multiple purposes and <code>TIME_ZONE = 'America/Mexico_City'</code>. Since 2023, this time zone no longer observes DST. I use <code>localtime</code> to get the right date/time in some cases, but it still applies the DST offset:</p>
<pre><code>>>> localtime(timezone.now())
datetime.datetime(2023, 4, 3, 10, 14, 49, 782365, tzinfo=<DstTzInfo 'America/Mexico_City' CDT-1 day, 19:00:00 DST>)
>>> timezone.now()
datetime.datetime(2023, 4, 3, 15, 14, 54, 953013, tzinfo=<UTC>)
>>> datetime.now()
datetime.datetime(2023, 4, 3, 9, 15, 7, 628038)
</code></pre>
<p><code>datetime.now()</code> has the correct date/time. Of course I could change <code>localtime</code> to <code>datetime.now()</code>, but there are a lot of them. I want to understand how to "update" or "sync" my Django project so it uses the correct DST rules when a region changes whether or not it observes them.</p>
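<p>Whether DST is applied comes from the installed time zone database, not from Django itself, so updating the tz data (the <code>tzdata</code>/<code>pytz</code> package for Django's configured backend, or the OS zoneinfo files) is what picks up the 2022 rule change for Mexico. A sketch with the stdlib <code>zoneinfo</code> (the default in Django ≥ 4.0); the offset printed depends on how current the local tz database is:</p>

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

mx = ZoneInfo("America/Mexico_City")
dt = datetime(2023, 4, 3, 15, 14, tzinfo=timezone.utc).astimezone(mx)
# With tzdata >= 2022g this is CST (UTC-6) year-round; stale data says CDT
print(dt.tzname(), dt.utcoffset())
```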
|
<python><django><timezone><dst>
|
2023-04-03 15:44:23
| 1
| 333
|
Alejandro Franco
|
75,921,214
| 13,039,962
|
How to give a 3D shape to the bars of a 2D grouped bar plot
|
<p>I have this df:</p>
<pre><code> YEAR CODE ANOM%
0 1964 152101 -85.294118
1 1965 152101 -62.352941
2 1965 152111 -85.862069
3 1966 152101 -81.352941
4 1966 152111 -77.873563
.. ... ... ...
108 2020 152101 -38.823529
109 2021 152101 -59.352941
110 2021 152111 74.137931
111 2022 152101 -95.235294
112 2022 152111 -76.436782
[113 rows x 3 columns]
</code></pre>
<p>With this code I'm creating a table to plot grouped bars:</p>
<pre><code>df.sort_values('YEAR', axis=0, ascending=True, inplace=True, ignore_index=True)
years = df["YEAR"].unique()
prv = df["CODE"].unique()

# Categorization
df["YEAR"] = pd.Categorical(df["YEAR"], categories=years)
df["CODE"] = pd.Categorical(df["CODE"], categories=prv)

# Pivot the table
df_pivot = pd.pivot_table(
    df,
    values="ANOM%",
    index="YEAR",
    columns="CODE",
    aggfunc=np.sum
)
df_pivot.columns = df_pivot.columns.add_categories('AVERAGE')
df_pivot['AVERAGE'] = df_pivot.mean(axis=1).astype(int)
</code></pre>
<p>And with this code I'm creating the figure and plotting the grouped bars:</p>
<pre><code>fig = plt.figure('Grouped Bars', figsize=(16,10), dpi=100)
ax1 = fig.add_axes([0.1, 0.1, 0.85, 0.75])
df_pivot.plot(kind='bar',
              color=['black' if col != 'AVERAGE' else 'red' for col in df_pivot.columns],
              fontsize=12.0, zorder=1, edgecolor='black', linewidth=2,
              alpha=0.7, ax=ax1, depthshade=True)
</code></pre>
<p>I want to give my bars a 3D effect (I don't want to plot a 3D bar graphic; I just want my bars to have a slight 3D effect). I tried these parameters:</p>
<pre><code>edgecolor='black', linewidth=2,
alpha=0.7, depthshade=True
</code></pre>
<p>but with <code>depthshade=True</code> I got this error: <code>AttributeError: Rectangle.set() got an unexpected keyword argument 'depthshade'</code></p>
<p>How can I give the bars a 3D shape? Matplotlib version: 3.7.0.
PS: If I do have to plot a bar graph in 3D, how can I do it?</p>
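<p>For the closing question: <code>depthshade</code> only exists on real 3D axes, where collections are shaded by depth. A minimal true-3D bar sketch (the <code>shade=True</code> flag is what gives the faces their 3D look):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure()
ax = fig.add_subplot(projection="3d")

x = np.arange(4)
# bars start at z=0 and extend by dz (negative values grow downward)
ax.bar3d(x, np.zeros(4), np.zeros(4), dx=0.6, dy=0.6,
         dz=[-85, -62, -38, 74], shade=True)
fig.canvas.draw()
print(type(ax).__name__)
```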
|
<python><pandas><matplotlib>
|
2023-04-03 15:33:12
| 0
| 523
|
Javier
|
75,921,094
| 9,363,181
|
Unable to mock the return value of the private method in python unittest
|
<p>I have the core logic as below:</p>
<pre><code>class DBTProjectUploader:
    def __init__(self, file_manager: FileManager, s3_client):
        self.file_manager = file_manager
        self.s3_client = s3_client

    def copy(self, dbt_attr: DbtAttr):
        old_bucket, old_bucket_name, old_prefix, new_bucket, new_prefix = self.__get_bucket_name_and_prefixes(dbt_attr)
        for f in self.file_manager.list_objects(old_bucket, old_prefix):
            old_source = {'Bucket': old_bucket_name, 'Key': f.key}
            new_key = f.key.replace(old_prefix, new_prefix, 1)
            self.file_manager.copy_object(new_bucket, new_key, old_source)
        return "Success"

    def __get_bucket_name_and_prefixes(self, dbt_attr: DbtAttr):
        old_bucket_name, old_prefix = self.__get_bucket_and_key(dbt_attr.dag_file_location)
        new_bucket_name, new_prefix = self.__get_bucket_and_key("s3://" + old_bucket_name + "/dags/dbt/")
        old_bucket = self.s3_client.Bucket(old_bucket_name)
        new_bucket = self.s3_client.Bucket(new_bucket_name)
        return old_bucket, old_bucket_name, old_prefix, new_bucket, new_prefix

    def __get_bucket_and_key(self, path: str):
        return path[5:].split('/', 1)


class FileManager:
    logger = get_provisioner_logger()

    def __init__(self, s3_client):
        self.s3_client = s3_client

    def list_objects(self, old_bucket, old_prefix):
        return old_bucket.objects.filter(Prefix=old_prefix)

    def copy_object(self, new_bucket, new_key, old_source):
        new_obj = new_bucket.Object(new_key)
        new_obj.copy(old_source)
        return "Success"
</code></pre>
<p>Here <code>DBTProjectUploader.copy</code> is responsible for copying nested <code>S3</code> objects from one path to another. Now I am trying to write a test case for the <code>copy</code> method using the Python <code>unittest</code> module. However, I am unable to mock the value returned from the private method, so I am getting the mock id in the return value.</p>
<pre><code>def mock_s3_object(self, key, body):
    obj = MagicMock()
    obj.key = key
    obj.get.return_value = {'Body': MagicMock(read=MagicMock(return_value=body))}
    return obj

@patch('boto3.resource')
@patch('provisioner.src.services.dbt_project_uploader.DBTProjectUploader._DBTProjectUploader__get_bucket_name_and_prefixes',
       return_value="bucket-name")
def test_upload_dbt_project(self, mock_resource, mock_get_bucket_name_and_prefixes):
    source_bucket = MagicMock()
    dest_bucket = MagicMock()
    mock_s3_client = MagicMock()
    file_manager = MagicMock()
    dbt_attr = DbtAttr("path", "s3://bucket-name/old-prefix/", "dbt_mwaa01")
    mock_resource.Bucket.side_effect = [source_bucket, dest_bucket]
    print(source_bucket)
    source_bucket.objects.filter.return_value = [
        self.mock_s3_object('new_prefix/dags/dbt/dbt_mwaa01/', b'Sample data')
    ]
    uploader = DBTProjectUploader(file_manager, mock_s3_client)
    result = uploader.copy(dbt_attr)
    file_manager.list_objects.assert_called_once_with('bucket-name', "old-prefix/")
    # source_bucket.objects.filter.assert_called_once_with("old-prefix/")
    dest_bucket.Object.assert_called_once_with('new_prefix/dags/dbt/dbt_mwaa01/')
    dest_bucket.Object.return_value.copy.assert_called_once_with(
        {'Bucket': 'bucket-name', 'Key': 'new_prefix/dags/dbt/dbt_mwaa01/'})
    self.assertEqual(result, 'Success')
</code></pre>
<p>It gives an error below:</p>
<pre><code>Expected: list_objects('bucket-name', 'old-prefix/')
Actual: list_objects(<MagicMock name='mock.Bucket()' id='4572034384'>, 'old-prefix/')
</code></pre>
<p>I have already tried mocking the private method, but when I print the variables inside the original methods, they show mock ids only; the patched return value is never used. What else am I missing here? TIA</p>
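<p>Two things worth checking in the test above (assumptions from the snippet): stacked <code>@patch</code> decorators inject mocks bottom-up, so the parameter names here are swapped (the first mock argument is actually the private-method patch, not <code>boto3.resource</code>); and the patched private method returns a single string, which <code>copy()</code> cannot unpack into five values. A minimal sketch of the ordering rule:</p>

```python
from unittest.mock import patch

class Service:
    def fetch(self):
        return "real-fetch"
    def parse(self):
        return "real-parse"

# The decorator CLOSEST to the function supplies the FIRST mock argument
@patch.object(Service, "fetch", return_value="mock-fetch")
@patch.object(Service, "parse", return_value="mock-parse")
def run(mock_parse, mock_fetch):
    s = Service()
    return s.fetch(), s.parse()

print(run())  # ('mock-fetch', 'mock-parse')
```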
|
<python><unit-testing><python-unittest>
|
2023-04-03 15:19:35
| 1
| 645
|
RushHour
|
75,921,024
| 2,233,608
|
Automatically modernize python syntax
|
<p>I'm looking for a tool that will automatically modernize my python code. Something that will take a python version as configuration, and automatically modernize the code for that version.</p>
<p>For example, with python 3.9+ we can convert all</p>
<pre><code>from typing import List
my_list:List[str] = []
</code></pre>
<p>to</p>
<pre><code>my_list:list[str] = []
</code></pre>
<p>And with python 3.10+ we can convert all</p>
<pre><code>from typing import Optional
my_optional: Optional[str] = None
</code></pre>
<p>to</p>
<pre><code>my_optional: str | None = None
</code></pre>
<p>I'm sure there are other syntax updates that can also be done.</p>
<p>Is there some tool that does this? I've looked into mypy, black, and flake8 and I can't see anything that will convert automatically for me, or even warn me about these so that I can update them manually.</p>
|
<python><mypy><flake8><python-black>
|
2023-04-03 15:12:49
| 1
| 1,178
|
niltz
|
75,920,984
| 21,351,146
|
How do you indicate in Python that a class method will return a class instance?
|
<p>I am trying to emulate the idea of ok and err variants from rust in python, here is a result class</p>
<pre><code>from typing import Any, Union

class Result:
    def __init__(self, value: Any, status: bool = True) -> None:
        self.value = value
        self.status = status

    @classmethod
    def Ok(cls, result: Any) -> Result:
        return cls(result, status=True)

    @classmethod
    def Err(cls, error: Any):
        return cls(error, status=False)

    def unwrap(self):
        if self.status:
            print("from Result -> self.status TRUE")
            print(f"from Result -> self.value = {self.value}")
            return self.value
        else:
            print("from Result -> self.status FALSE")
            print(f"from Result -> self.value = {self.value}")
            return self.value
</code></pre>
<p>I cannot just use the class name <code>Result</code> to indicate the return type, so how can I properly do so using the <code>typing</code> module or with whatever other method is available? TY!</p>
<p>EDIT:</p>
<p>would i so something like</p>
<pre><code> @classmethod
def ok(cls, result: Any) -> __init__:
return cls(result, status=True)
</code></pre>
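<p>A minimal sketch of the standard options (assuming Python 3.7+ for the <code>__future__</code> import; on 3.11+ <code>typing.Self</code> is another option): at class-definition time the name <code>Result</code> is not yet bound, so the annotation has to be made lazy, either as a string literal or via PEP 563 postponed evaluation:</p>

```python
from __future__ import annotations  # PEP 563: all annotations become lazy strings
from typing import Any


class Result:
    def __init__(self, value: Any, status: bool = True) -> None:
        self.value = value
        self.status = status

    @classmethod
    def Ok(cls, result: Any) -> "Result":  # quoted form also works without the import
        return cls(result, status=True)

    @classmethod
    def Err(cls, error: Any) -> "Result":
        return cls(error, status=False)
```

<p>With the <code>__future__</code> import in place, a plain <code>-&gt; Result</code> annotation is also accepted.</p>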
|
<python><python-typing>
|
2023-04-03 15:08:14
| 0
| 301
|
userh897
|
75,920,773
| 9,403,794
|
How to change columns type in numpy ndarray
|
<p>Because I do not understand numpy dtypes, I currently create a pandas DataFrame first and change the column types there. But that is the wrong way because of memory consumption. My pandas columns look like this:</p>
<pre><code>df[0 ] = df[0 ].astype(np.int64)
df[1 ] = df[1 ].astype(np.int8)
df[2 ] = df[2 ].astype(np.int8)
df[3 ] = df[3 ].astype(np.int8)
df[4 ] = df[4 ].astype(np.int8)
df[5 ] = df[5 ].astype(np.int8)
df[6 ] = df[6 ].astype(np.int8)
df[7 ] = df[7 ].astype(np.int8)
df[8 ] = df[8 ].astype(np.int8)
df[9 ] = df[9 ].astype(np.int8)
df[10 ] = df[10 ].astype(np.int16)
df[11] = df[11].astype(np.float64)
df[12] = df[12].astype(np.int16)
df[13] = df[13].astype(np.int16)
df[14] = df[14].astype(np.int8)
df[15] = df[15].astype(np.float64)
df[16] = df[16].astype(np.int16)
df[17] = df[17].astype(np.int16)
df[18] = df[18].astype(np.int8)
</code></pre>
<p>If a numpy ndarray has, for example, shape (55541, 19), how can I change the types of its columns:</p>
<pre><code>df[0 ] = df[0 ].astype(np.int64)
df[1 ] = df[1 ].astype(np.int8)
df[10 ] = df[10 ].astype(np.int16)
df[11] = df[11].astype(np.float64)
</code></pre>
<p>I tried to do the same for the ndarray:</p>
<pre><code>arr[:, 0 ] = arr[:, 0 ].astype(np.int64)
</code></pre>
<p>but it does not work.</p>
<p>I want each column to have the right type after reading the ndarray from a pickle file.</p>
<p>Thank you for your answer.</p>
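<p>A minimal sketch of why per-column casting fails on a plain ndarray and what NumPy offers instead (field names <code>c0</code>/<code>c1</code>/<code>c2</code> are illustrative): a plain ndarray has one dtype for every element, so the closest NumPy equivalent of per-column types is a structured array:</p>

```python
import numpy as np

# One named, typed field per "column"; shape is (rows,) rather than (rows, cols).
dt = np.dtype([("c0", np.int64), ("c1", np.int8), ("c2", np.float64)])
arr = np.zeros(5, dtype=dt)

arr["c1"] = [1, 2, 3, 4, 5]   # assign a whole column; values are cast to int8
print(arr.dtype["c1"])        # int8
```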
|
<python><pandas><numpy><numpy-ndarray>
|
2023-04-03 14:49:05
| 0
| 309
|
luki
|
75,920,755
| 19,198,552
|
Why does the parameter "disabledwidth" of a tkinter canvas rectangle not work?
|
<p>I want the outline of a rectangle in a canvas to get a bigger width when the rectangle is in state "disabled". Therefore I use the parameter "disabledwidth=4". But when the rectangle is in state "disabled", the outline still has a width of 1 instead of 4.</p>
<p>This is my code, which shows the problem: When I move the mouse over the rectangle, the state of the rectangle changes to "active", everything works as expected, especially the outlinewidth changes to 4. But when I change the state to "disabled" by clicking on the button the outline stays at width 1. What am I doing wrong?</p>
<pre><code>import tkinter as tk
def disabled():
canvas.itemconfig(rect, state="disabled")
def normal():
canvas.itemconfig(rect, state="normal")
root = tk.Tk()
canvas = tk.Canvas(root, height=250, width=250)
button1 = tk.Button(root, text="change rectangle to state disabled", command=disabled)
button2 = tk.Button(root, text="change rectangle to state normal" , command=normal )
rect = canvas.create_rectangle(40, 40, 180, 180,
fill = "red",
activefill = "green2",
activeoutline = "green3",
activewidth = 4,
disabledfill = "grey",
disabledoutline= "grey2",
disabledwidth = 4
)
canvas.grid()
button1.grid()
button2.grid()
root.mainloop()
</code></pre>
|
<python><tkinter><canvas>
|
2023-04-03 14:47:15
| 2
| 729
|
Matthias Schweikart
|
75,920,640
| 16,436,095
|
What is the fundamental difference between flags and kwargs with bool values in Python?
|
<p>E.g., <a href="https://docs.python.org/3/library/re.html#flags" rel="nofollow noreferrer">flags are used in the <code>re</code> module</a>.</p>
<p>When do I need to use flags when writing my own module, and when do I need keywords with bool values? Is it just a matter of convenience or not?</p>
<pre class="lang-py prettyprint-override"><code>def foo1 (bar, flags=0):
...
def foo2 (bar, flag=False):
...
foo1 (bar, FLAG)
foo2 (bar, flag=True)
</code></pre>
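<p>A minimal sketch of the convention (names are illustrative): flags are integer bit masks combined with <code>|</code>, so one parameter can carry many independent booleans; this is exactly how <code>re</code> packs <code>re.IGNORECASE | re.MULTILINE</code> into a single <code>flags</code> argument. The standard library's <code>enum.IntFlag</code> makes the pattern explicit:</p>

```python
import enum


class Opt(enum.IntFlag):
    NONE = 0
    IGNORECASE = 1   # each flag is a distinct bit
    MULTILINE = 2


def search(pattern: str, flags: Opt = Opt.NONE) -> list:
    # Several independent booleans travel in one argument.
    modes = []
    if flags & Opt.IGNORECASE:
        modes.append("case-insensitive")
    if flags & Opt.MULTILINE:
        modes.append("multi-line")
    return modes


print(search("x", Opt.IGNORECASE | Opt.MULTILINE))
```

<p>A keyword like <code>flag=False</code> is clearer for a single boolean; bit flags pay off when several independent options must be combined and passed around as one value.</p>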
|
<python>
|
2023-04-03 14:33:39
| 1
| 370
|
maskalev
|
75,920,592
| 19,674,402
|
Replace only first row for each user
|
<p>I have a table that has users, food, and which one is their favorite.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">user</th>
<th style="text-align: center;">food</th>
<th style="text-align: right;">is_favorite</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">Beef</td>
<td style="text-align: right;">False</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">Pork</td>
<td style="text-align: right;">False</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">Pork</td>
<td style="text-align: right;">False</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">Beef</td>
<td style="text-align: right;">False</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">Potatoes</td>
<td style="text-align: right;">False</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">Beef</td>
<td style="text-align: right;">False</td>
</tr>
</tbody>
</table>
</div>
<p>The same user appears in several rows. I need to set exactly 1 of the rows <em>per user</em> as favorite (<code>is_favorite</code>=True):</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">user</th>
<th style="text-align: center;">food</th>
<th style="text-align: right;">is_favorite</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">Beef</td>
<td style="text-align: right;">True</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">Pork</td>
<td style="text-align: right;">False</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">Pork</td>
<td style="text-align: right;">True</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">Beef</td>
<td style="text-align: right;">False</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">Potatoes</td>
<td style="text-align: right;">False</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">Beef</td>
<td style="text-align: right;">True</td>
</tr>
</tbody>
</table>
</div>
<p>Now every user has exactly 1 favorite food.</p>
<hr />
<p>I successfully got exactly 1 row for each user, but can't apply it to my initial <code>df</code>. I'm pretty sure it's something simple I'm missing, but I don't know pandas that well. It also feels like this is the wrong way to do it:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
dict(
user=[1, 1, 3, 3, 3, 4],
food=['Beef', 'Pork', 'Pork', 'Beef', 'Potatoes', 'Beef'],
is_favorite=[False, False, False, False, False, False]))
# This works. It gives me exactly 1 row per user
first_food_per_user = df.groupby('user').nth(0).reset_index()
# This doesn't work
for _, row in first_food_per_user.iterrows():
df['is_favorite'].loc[
(df['user'] == row['user'])
&
df['food'] == row['food'],
] = True
</code></pre>
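<p>One possible loop-free sketch (assuming, as in the attempt above, that the first row per user is the one to mark): the first occurrence of each user is precisely the row where <code>user</code> is not yet a duplicate, so a boolean mask can set the column directly:</p>

```python
import pandas as pd

df = pd.DataFrame(
    dict(
        user=[1, 1, 3, 3, 3, 4],
        food=['Beef', 'Pork', 'Pork', 'Beef', 'Potatoes', 'Beef'],
        is_favorite=[False, False, False, False, False, False]))

# duplicated() is False only for the first row of each user,
# so this flags exactly one favorite per user in place.
df.loc[~df['user'].duplicated(), 'is_favorite'] = True
print(df)
```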
|
<python><pandas><dataframe><group-by>
|
2023-04-03 14:28:55
| 1
| 496
|
PythonForEver
|
75,920,451
| 11,501,976
|
How to modify numpy array with arbitrary indices in vectorized way?
|
<h1>Simplified story</h1>
<p>Suppose I have an array <code>arr</code> and indices <code>idx</code>.
For each <code>i</code> occurring in <code>idx</code>, I want to increase <code>arr[i]</code> by one.</p>
<p>A non-vectorized approach would look like this:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
arr = np.zeros(5)
idx = [0, 1, 1, 2, 0]
for i in idx:
arr[i] += 1
</code></pre>
<p>Is there any way to vectorize this?</p>
<p>Note that <code>arr[idx] += 1</code> is invalid because of the duplicate indices.</p>
<pre><code>arr = np.zeros(1)
idx = [0, 0]
arr[idx] += 1 # arr becomes array([1]), not array([2])
</code></pre>
<p>Of course, using <code>np.unique()</code> can achieve the same goal in this 1D array example. But actually I am trying to deal with 2D array and I doubt that counting the elements would be the best solution.</p>
<h2>Edit</h2>
<p><code>np.unique</code> indeed works, but it seems to cause an unnecessary slowdown. I would like a faster approach (if one exists).</p>
<p>Here is an example of 2D indices for 10,000 points without duplicate.</p>
<pre class="lang-py prettyprint-override"><code>arr = np.zeros((10000, 10000))
idx = np.stack([np.arange(10000), np.arange(10000)])
%timeit np.unique(idx, axis=1, return_counts=True) # takes 1.93 ms
%timeit arr[idx[0], idx[1]] += 1 # takes 235 μs
</code></pre>
<p>Apparently, iterating by indexing is ~10 times faster.</p>
<h2>Edit2</h2>
<p>@PaulS's answer was faster than <code>np.unique</code>.</p>
<pre class="lang-py prettyprint-override"><code>%timeit np.add.at(arr, (idx[0], idx[1]), 1) # takes 925 μs
</code></pre>
<h2>Edit3</h2>
<p>Here is the example with random index to test duplicate indices.</p>
<pre class="lang-py prettyprint-override"><code>arr = np.zeros((10000, 10000))
ran = (np.random.rand(10000)*10).astype(int)
idx = np.stack([ran, ran])
%timeit np.unique(idx, axis=1, return_counts=True) # takes 3.24 ms
%timeit np.add.at(arr, (idx[0], idx[1]), 1) # takes 859 μs
</code></pre>
<p>(edit: typo)</p>
<h1>Detailed story</h1>
<p>I am trying to implement a Hough line transformation algorithm using NumPy. (The reason why I'm not using <code>cv2.HoughLines()</code> is because I want the result directly from the coordinates of the points, not from binary array).</p>
<p>Getting the curves in <code>(r, θ)</code> plane was easy, but I am having trouble implementing the accumulator in vectorized way. Currently I am relying on flattening the 2D data into 1D. Is there a nicer and faster way to perform accumulation?</p>
<p>Thank you in advance!</p>
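<p>As the edits above found, <code>np.add.at</code> (the unbuffered <code>ufunc.at</code> form) is the documented way to make duplicate indices accumulate; a condensed, self-contained sketch of both the 1D case and a 2D accumulator:</p>

```python
import numpy as np

# 1D: duplicate indices each contribute, unlike plain fancy-index +=.
arr = np.zeros(5)
np.add.at(arr, [0, 1, 1, 2, 0], 1)
print(arr)                      # [2. 2. 1. 0. 0.]

# 2D accumulator (as in a Hough transform): index with a tuple of arrays.
acc = np.zeros((4, 4))
rows = np.array([0, 0, 3])
cols = np.array([1, 1, 2])
np.add.at(acc, (rows, cols), 1)
print(acc[0, 1], acc[3, 2])     # 2.0 1.0
```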
|
<python><numpy>
|
2023-04-03 14:16:25
| 2
| 378
|
JS S
|
75,920,227
| 21,787,377
|
How to use choice field in flask
|
<p>I'm from django; in django models, when we want to use <code>choices</code> in a field, we define it like this:</p>
<pre><code>class ListAlbum(models.Model):
LIST_OF_ALBUM = (
('Shakira', 'shakira'),
('Don Jazy', 'Don Jazy'),
('Psquare', 'Psquare')
)
name = models.Charfield(max_length=20, choices=LIST_OF_ALBUM)
</code></pre>
<p>As a beginner in <code>flask</code>, I don't know how to use it in flask models.</p>
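<p>A minimal sketch, with the caveat that Flask itself has no model layer: with Flask-SQLAlchemy the column would typically be declared as something like <code>db.Column(db.Enum(Album))</code> (illustrative, depends on your setup), and the choices themselves can live in a plain standard-library <code>Enum</code> that validates values on its own:</p>

```python
import enum


class Album(enum.Enum):
    SHAKIRA = "Shakira"
    DON_JAZY = "Don Jazy"
    PSQUARE = "Psquare"


def set_album(name: str) -> Album:
    # Enum lookup raises ValueError for anything outside the choices,
    # mimicking Django's choices= validation.
    return Album(name)


print(set_album("Shakira"))     # Album.SHAKIRA
```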
|
<python><flask>
|
2023-04-03 13:51:56
| 0
| 305
|
Adamu Abdulkarim Dee
|
75,920,179
| 12,990,915
|
Why does torch.cuda.is_available() return True in terminal but False in Jupyter notebook?
|
<p>As you can see from the image below, <code>torch.cuda.is_available()</code> returns <code>True</code> in terminal but <code>False</code> in Jupyter notebook. It seems that I am using the same Python.</p>
<p><strong>In Jupyter notebook:</strong></p>
<pre><code>import torch
</code></pre>
<pre><code>!which python
</code></pre>
<p>/home/s1612415/miniconda3/envs/test/bin/python</p>
<pre><code>torch.cuda.is_available()
</code></pre>
<p>False</p>
<p><strong>In terminal:</strong></p>
<pre><code>(test) w7830$ which python
~/miniconda3/envs/test/bin/python
(test) w7830$ python
Python 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
</code></pre>
<p><a href="https://i.sstatic.net/iwdVY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iwdVY.png" alt="enter image description here" /></a></p>
|
<python><jupyter-notebook><pytorch>
|
2023-04-03 13:46:34
| 1
| 383
|
user572780
|
75,920,123
| 2,744,242
|
How to import the 'PyPDF4' library in Pyodide?
|
<p>I'm trying to import the 'PyPDF4' library in Pyodide using micropip but it's giving me an error:</p>
<pre><code>ValueError: Can't find a pure Python 3 wheel for 'pypdf4'.
</code></pre>
<p>Would anyone know what I'm doing wrong?</p>
<pre><code>async function main() {
let pyodide = await loadPyodide();
await pyodide.loadPackage('micropip');
const micropip = pyodide.pyimport("micropip");
await micropip.install('PyPDF4');
pyodide.runPython(`
import PyPDF4
print(PyPDF4.__version__)
`);
}
main();
</code></pre>
|
<python><pyodide>
|
2023-04-03 13:39:57
| 1
| 13,406
|
rafaelcb21
|
75,919,967
| 1,586,860
|
How to benchmark a single function call in Python?
|
<p>How to interactively (micro)benchmark a code expression in Python (i.e. the equivalent of <code>@btime</code>/<code>@benchmark</code> in Julia) ?</p>
<p>I would like to benchmark interactively an expression (a function call), where the number of times this expression should be evaluated depends on its computational cost / variance....</p>
<p>I have tried <code>timeit.timeit</code> but I have some questions:</p>
<ol>
<li>How do I interpret the time output of the following results:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>>>> a = 2.2
>>> timeit.repeat("round(a)", setup='from __main__ import a; gc.enable()', repeat=5,number=10)
[4.631001502275467e-06, 1.3809185475111008e-06, 1.2170057743787766e-06, 1.1718366295099258e-06, 1.1730007827281952e-06]
>>> timeit.timeit("round(a)", setup='from __main__ import a; gc.enable()', number=10)
5.741836503148079e-06
>>> timeit.timeit("round(a)", setup='from __main__ import a; gc.enable()')
0.11461802502162755
>>> timeit.Timer('round(a)', setup='from __main__ import a; gc.enable()').autorange()
(5000000, 0.4272152939811349)
</code></pre>
<p>It seems that when I increase <code>number</code> from <code>1</code> to <code>10</code>, the timing remains pretty stable, but then it grows by orders of magnitude....</p>
<ol start="2">
<li>Does it makes sense a workflow where I first create the <code>Timer</code> object once, and then I modify several times the code to evaluate and benchmark the modified Timer ? E.g.</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import timeit
import gc
bclass = timeit.Timer('round(1.1)','gc.enable()', globals=globals())
[...]
bclass.stmt = "small_3x2_nash.vertex_enumeration()"
bclass.autorange()
bclass.stmt = "small_3x2_nash.lemke_howson_enumeration()"
bclass.autorange()
bclass.stmt = "small_3x2_nash.support_enumeration()"
bclass.autorange()
</code></pre>
<p>(seems this is not possible, as I have tried very different code and the result of the timing remains almost the same)</p>
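<p>On question 1: <code>timeit.timeit</code> and each entry of <code>timeit.repeat</code> return the <em>total</em> time for <code>number</code> executions, not a per-call time, which is why dropping <code>number=10</code> (the default is 1,000,000) makes the result jump by orders of magnitude. A small self-contained sketch of the usual interpretation (take the minimum repeat as the least-noisy run, then divide by <code>number</code>):</p>

```python
import timeit

a = 2.2
number = 100_000
runs = timeit.repeat("round(a)", globals={"a": a}, repeat=5, number=number)

# Each entry is the TOTAL time for `number` calls.
per_call = min(runs) / number
print(f"{per_call * 1e9:.1f} ns per call")
```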
|
<python><benchmarking><microbenchmark>
|
2023-04-03 13:23:57
| 1
| 6,503
|
Antonello
|
75,919,928
| 12,248,220
|
where can I find the numpy.matmul() source code?
|
<p>I do not obtain the same results when I use <code>np.matmul(A, b)</code> in Python and when I use <code>xtensor-blas</code>'s <code>xt::linalg::dot(A, b)</code> in C++.</p>
<p>I am investigating the reasons: when saved to disk and read back, the <code>A</code> and <code>b</code> used in both environments compare as identical with <code>np.allclose</code> in Python.</p>
<p>The result of these 2 multiplications (in Py and in C++) is a 250 element 1D array. The first elements are identical, the elements from the middle are different, the elements from the end are identical. As below:</p>
<pre><code> [ True True True True True True True True True True True True
True True True True True True True True True True True True
True True True True True True True True True True True True
False False False False False False False False False False False False
False False False False False False False False False False False False
False False False False False False False False False False False False
False False True True False False False False False False False False
False False False False False False False False False False False False
False False False False False False False False False False False False
False False False False False False False False False False False False
False False False False False False False False False False False False
False False False False False False False False False False False True
True True True True True True True True True True True True
True True True True True True True True True True True True
True True True True True True True True True True True True
True True True True True True True True True True True True
True True True True True True True True True True True True
True True True True True True True True True True True True
True True True True True True True True True True True True
True True True True True True True True True True True True
True True True True True True True True True True]
</code></pre>
<p>I identified what exactly is called by <code>xtensor-blas</code>: it is <code>gemv</code> from <code>BLAS</code>.</p>
<p>I want to see what is called by <code>np.matmul(A, b)</code>.
Does anyone know how to find what exactly is called from BLAS when doing <code>np.matmul(A, b)</code>?</p>
<p>Thank you!</p>
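<p>A first step that may help (this only identifies the backend, not the exact routine): NumPy can report which BLAS/LAPACK implementation it was linked against, which narrows down whose <code>gemv</code>/<code>gemm</code> <code>np.matmul</code> ultimately dispatches to:</p>

```python
import numpy as np

# Prints the BLAS/LAPACK build configuration (e.g. OpenBLAS, MKL).
# matmul on float arrays dispatches to this backend.
np.show_config()
```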
|
<python><arrays><numpy><optimization><blas>
|
2023-04-03 13:18:40
| 1
| 576
|
velenos14
|
75,919,862
| 10,844,937
|
Is it possible to print different in a decorator in Python?
|
<p>I need to call the function <code>notice</code> <strong>twice</strong>, where <code>notice</code> is wrapped by a decorator.</p>
<pre><code>def decorator(func):
def wrapper(*args, **kwargs):
# print 'first' or 'second' here.
return func(*args, **kwargs)
return wrapper
@decorator
def notice():
print("call")
notice()
notice()
</code></pre>
<p>Here what I would like to print is</p>
<pre><code>first call
second call
</code></pre>
<p>Is it possible?</p>
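<p>One possible sketch: keep a counter in the decorator's closure with <code>nonlocal</code>, so the state survives across calls (the ordinal wording is illustrative):</p>

```python
import functools


def decorator(func):
    count = 0

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        nonlocal count                 # closure state shared across calls
        count += 1
        ordinal = {1: "first", 2: "second"}.get(count, f"{count}th")
        print(ordinal, end=" ")
        return func(*args, **kwargs)

    return wrapper


@decorator
def notice():
    print("call")


notice()   # first call
notice()   # second call
```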
|
<python><python-3.x>
|
2023-04-03 13:12:21
| 0
| 783
|
haojie
|
75,919,809
| 21,420,742
|
How to get Multiple Results without using if elif statement in Python
|
<p>I am trying to apply conditions to get multiple results without using an if/elif statement in Python. I am using NumPy and pandas at the moment, and I once found a way to do this but forgot the exact steps and don't remember the name of this conditional construct.</p>
<pre><code>grade = []
condition = [(if df['score'] => 90),
((if df['score'] < 90) & (df['score'] >= 80)),
((if df['score'] < 80) & (df['score'] >= 70)),
((if df['score'] < 70) & (df['score'] >= 65)),
(if df['score'] < 65)]
results = {'A','B','C','D','F'}
grade.append(condition,results)
df['grade'] = results
</code></pre>
<p>This isn't right, and I was hoping to get help on how to make this conditional work, or to see if someone has a better way of doing it.
This is what I want:</p>
<pre><code>score grade
45 F
79 C
88 B
56 F
94 A
</code></pre>
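<p>The construct being half-remembered is most likely <code>numpy.select</code>, which takes a list of boolean conditions and a parallel list of results (grade boundaries below follow the question):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"score": [45, 79, 88, 56, 94]})

conditions = [
    df["score"] >= 90,
    df["score"] >= 80,
    df["score"] >= 70,
    df["score"] >= 65,
]
# The first matching condition wins; `default` covers scores below 65.
df["grade"] = np.select(conditions, ["A", "B", "C", "D"], default="F")
print(df)
```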
|
<python><python-3.x><pandas><dataframe><numpy>
|
2023-04-03 13:06:05
| 2
| 473
|
Coding_Nubie
|
75,919,792
| 6,645,564
|
What is the most efficient way to read from a template text file and then write interpolated values from different variables from a script?
|
<p>I have a Python script that I want to ship with a README.txt file for users. The README.txt will describe what was run in the script, including the values used. These values change from project to project, and since the README.txt is fairly long, copying and pasting values into multiple README.txt files is time consuming. I therefore want a quick function that reads a template README.txt and interpolates the correct values in their appropriate places, producing a new, project-specific README.txt.</p>
<p>So an example snippet from the README.txt template would read as:</p>
<pre><code>In this script, we ran the statistical processes with variable A set to {variableA} and variable B set to {variableB}.
We also used the following comparisons where we examined group {group1} vs group {group2}.
</code></pre>
<p>I have placed the curly brackets to indicate where I want variables from my script (i.e. variableA, variableB, group1, and group2) so their values are interpolated into the correct places. But I am not certain how best I should format my README.txt template and set it up to efficiently take in the values of the relevant variables. All these variables are defined globally, so scope should not be a problem here.</p>
<p>So far my code reads:</p>
<pre><code>with open("README_template.txt") as readme:
readme = readme.readlines()
readme_new = []
for line in readme:
#interpolate the pertinent values into the line if necessary
readme_new.append(line)
#write out new README.txt file
</code></pre>
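<p>A minimal sketch (template text abbreviated from the snippet above): since the template already uses <code>{placeholders}</code>, <code>str.format_map</code> can fill the whole file from a dict in one pass, with no per-line loop:</p>

```python
# In practice the values dict would be built from the script's variables.
template = (
    "In this script, we ran the statistical processes with variable A set to "
    "{variableA} and variable B set to {variableB}.\n"
    "We also used the following comparisons where we examined group "
    "{group1} vs group {group2}.\n"
)

values = {"variableA": 0.05, "variableB": 10, "group1": "control", "group2": "treated"}
readme_text = template.format_map(values)
print(readme_text)

# With files: readme_text = open("README_template.txt").read().format_map(values),
# then write readme_text out to the new README.txt.
```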
|
<python><string-interpolation>
|
2023-04-03 13:03:47
| 1
| 924
|
Bob McBobson
|
75,919,742
| 6,057,371
|
pandas apply groupby and agg, and update orig columns
|
<p>I have a dataframe <code>df</code>:</p>
<pre><code> Group1 Group2 Val
0 1 Q 2
1 1 Q 3
2 2 R 8
3 4 Y 9
</code></pre>
<p>I want to update df with list of values per group, so new df will be</p>
<pre><code> Group1 Group2 Val new
0 1 Q 2 [2, 3]
1 1 Q 3 [2, 3]
2 2 R 8 [8]
3 4 Y 9 [9]
</code></pre>
<p>What is the best way to do so?</p>
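<p>One possible sketch (grouping on <code>Group1</code> alone, which matches the example; add <code>Group2</code> to the key if groups can differ): build one list per group with <code>apply(list)</code>, then broadcast it back onto every row via <code>map</code>, since <code>transform</code> cannot return list-valued cells directly:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Group1": [1, 1, 2, 4],
    "Group2": ["Q", "Q", "R", "Y"],
    "Val": [2, 3, 8, 9],
})

lists = df.groupby("Group1")["Val"].apply(list)   # one list per group
df["new"] = df["Group1"].map(lists)               # broadcast back to rows
print(df)
```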
|
<python><pandas><dataframe><group-by>
|
2023-04-03 12:58:04
| 2
| 2,050
|
Cranjis
|
75,919,738
| 4,940,741
|
how can i add numeric scales to x and y axis of scanpy umap?
|
<p><a href="https://i.sstatic.net/DLb84.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DLb84.png" alt="enter image description here" /></a>I am using the code below to make two population on my umap</p>
<pre><code>adata_all.obs["cell_type_lvl0"] = adata_all.X[:, adata_all.var["marker"] == 'CD45RA'] > 0.5
adata_all.obs["cell_type_lvl0"] = adata_all.obs["cell_type_lvl0"].map(
{True: "CD45+", False: "CD45-"}
)
import matplotlib.pyplot as plt
import scanpy.plotting as sc_pl
# create a new figure with a specified size
fig = plt.figure(figsize=(8, 8))
# plot UMAP with color
sc_pl.umap(adata_all, color='cell_type_lvl0', ax=True)
# set axis labels and tick labels
plt.xlabel('UMAP 1', fontsize=12)
plt.ylabel('UMAP 2', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
# show the figure
plt.show()
</code></pre>
<p>The code above gives me an empty umap, but the line <code>sc_pl.umap(adata_all, color='cell_type_lvl0')</code> shows the umap, just without numeric x and y axes. How can I fix this?</p>
|
<python><matplotlib><visualization><scanpy>
|
2023-04-03 12:57:53
| 1
| 575
|
minoo
|
75,919,642
| 4,100,282
|
Combine three markers in a single matplotlib legend item
|
<p>I would like to plot square markers of two colors, with some of the markers of each color having an extra dot in the middle:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from matplotlib import pyplot as ppl
x = np.array([1,2,3,4])
y = np.array([1,2,3,4])
kw = dict(mew = 1, mec = 'k', ms = 10)
ppl.plot(x[::2], y[::2], 's', mfc = 'w', label = 'A', **kw)
ppl.plot(x[1::2], y[1::2], 's', mfc = [.8]*3, label = 'B', **kw)
ppl.plot(x[:2], y[:2], 'ks', mew = 0, ms = 3, label = 'special data')
ppl.legend()
ppl.show()
</code></pre>
<p>Which yields this:</p>
<p><a href="https://i.sstatic.net/LpEHo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LpEHo.png" alt="plot" /></a></p>
<p>I would like to replace the small black marker used in thre third legend item by a combined marker with two side-by-side squares, one white and one grey, both with the central black mark, but so far all my attempts at hacking something with <code>matplotlib.legend_handler.HandlerTuple()</code> have been fruitless.</p>
<p>Is there a simple way to achieve this?</p>
|
<python><matplotlib><legend>
|
2023-04-03 12:46:22
| 2
| 305
|
Mathieu
|
75,919,241
| 14,131,782
|
Pandas.ExcelWriter() file corruption unless use .save() function
|
<p>I am trying to use Pandas to write a dataframe to Excel. The data is just some price data from a public Crypto API:</p>
<p><a href="https://i.sstatic.net/QlPeX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QlPeX.png" alt="first few rows of data in Excel" /></a></p>
<p>My code is as follows:</p>
<pre><code>writer = pd.ExcelWriter('crypto.xlsx')
btc.to_excel(writer, sheet_name='Bitcoin') # btc = dataframe
</code></pre>
<p>This code produces a corrupt .xlsx file. I have found that I can produce a working one if I add the following to the end of the code:</p>
<pre><code>writer.save()
</code></pre>
<p>However, adding this line gives this warning:</p>
<blockquote>
<p>C:\Users\UserName\AppData\Local\Temp\ipykernel_1504\1045303506.py:7:
FutureWarning: save is not part of the public API, usage can give
unexpected results and will be removed in a future version<br />
writer.save()</p>
</blockquote>
<p>What is the proper way to do this?</p>
|
<python><pandas><dataframe><pandas.excelwriter>
|
2023-04-03 11:59:03
| 1
| 1,078
|
user14131782
|
75,919,230
| 4,125,774
|
azure information protection remove label in linux
|
<p>I am in a situation that requires decrypting/removing an AIP label on Linux, e.g. Debian or Raspbian. I have looked into the Azure products and they clearly state that there is no "official" support for Linux, but my question is: is there any workaround to achieve this on Linux?</p>
<p>One silly way I could think of is to open the protected file on a Windows PC, decrypt it there, and send it to the Linux PC over scp, but that requires keeping a bypass PC around just for this, which seems pretty silly. Any ideas/suggestions?</p>
<p>Thanks in advance</p>
|
<python><linux><azure>
|
2023-04-03 11:58:01
| 1
| 307
|
KapaA
|
75,919,215
| 6,546,835
|
win32serviceutil issue with starting a custom service
|
<p>I'm trying to create a Windows service using the code below.
The global variables are defined and the imports are in place.
The main bulk of the code is:</p>
<pre><code>class MyHandler(FileSystemEventHandler):
def __init__(self):
self.changed_files = {}
def on_any_event(self, event):
if event.is_directory or event.event_type == 'modified':
root, dirs, files = next(os.walk(folder_to_monitor))
for file_name in files:
file_path = os.path.join(root, file_name)
if event.is_directory or file_name in self.changed_files.get(root, set()):
self.changed_files[root] = {file_name}
for dir_path in dirs:
self.changed_files[os.path.join(root, dir_path)] = set()
elif event.event_type == 'deleted' or event.event_type == 'created':
root, file_name = os.path.split(event.src_path)
self.changed_files[root].add(file_name)
def should_upload_files(self):
return len(self.changed_files) > 0
def get_changed_files_dict(self):
return self.changed_files
class CloudService(win32serviceutil.ServiceFramework):
_svc_name_ = global_service_name
_svc_display_name_ = global_service_name
_svc_account_ = global_service_account
def __init__(self, args):
win32serviceutil.ServiceFramework.__init__(self, args)
self.stop_event = win32event.CreateEvent(None, 0, 0, None)
self.is_running = False
self.svc_account = _svc_account_
def SvcStop(self):
self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
win32event.SetEvent(self.stop_event)
self.is_running = False
def SvcDoRun(self):
self.is_running = True
self.ReportServiceStatus(win32service.SERVICE_RUNNING)
while self.is_running:
event_handler = MyHandler()
observer = Observer()
observer.schedule(event_handler, folder_to_monitor, recursive=True)
observer.start()
while self.is_running:
if event_handler.should_upload_files():
changed_files = event_handler.get_changed_files_dict()
# upload_files_to_server(changed_files)
with open("log.txt", "w") as f:
f.write(str(changed_files))
event_handler.changed_files.clear()
time.sleep(1)
if __name__ == '__main__':
# Delete the service
subprocess.call(["sc", "delete", global_service_name])
# Create the service
python_path = sys.executable
service_path = os.path.abspath(__file__)
# print(python_path)
subprocess.call(
[
'sc',
'create',
global_service_name,
f'binPath="{python_path} {service_path}"',
'start=auto',
]
)
print(f'\nService "{global_service_name}" created.\n')
# Set up the service
win32serviceutil.HandleCommandLine(CloudService)
</code></pre>
<p>The goals are:</p>
<ol>
<li><p>automatically delete the service (reset for testing), then re-create it, with a specific name/description and to have it be in status of "Running".</p>
</li>
<li><p>When monitoring a folder, any modification or change will be logged in a .txt file on the desktop (for testing)</p>
</li>
</ol>
<p>At the moment the service is being created in the <code>services.msc</code> list, but the status is <strong>empty</strong>, and manually starting it produces errors:</p>
<blockquote>
<p>Error 2: The system cannot find the file specified.</p>
</blockquote>
<p>or</p>
<blockquote>
<p>Error 1053: The service did not respond to the start or control request in a timely fashion.</p>
</blockquote>
<p>Solutions attempted:</p>
<ul>
<li><p>Tried looking in the forum and saw some answers as to copying the python.dll to site-packages folder but that didn't work.</p>
</li>
<li><p>using admin terminal to manually install the .py file, generates same output...</p>
</li>
<li><p>in depth conversation with chat-gpt3.5 about possible solutions :) didn't help in the end..</p>
</li>
</ul>
<hr />
<p><strong>EDIT</strong></p>
<p>After browsing again through some tutorials such as:</p>
<p><a href="https://thepythoncorner.com/posts/2018-08-01-how-to-create-a-windows-service-in-python/" rel="nofollow noreferrer">https://thepythoncorner.com/posts/2018-08-01-how-to-create-a-windows-service-in-python/</a></p>
<p>and looking at posts here such as</p>
<p><a href="https://stackoverflow.com/questions/2106366/python-win32-service">Python win32 service</a></p>
<p>I am still stuck.
My modified, hopefully simpler code now is:</p>
<pre><code>class CloudService(win32serviceutil.ServiceFramework):
_svc_name_ = global_service_name
_svc_display_name_ = global_service_name
_svc_description_ = global_service_description
def __init__(self, args):
win32serviceutil.ServiceFramework.__init__(self, args)
self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)
socket.setdefaulttimeout(60)
def SvcStop(self):
self.stop()
self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
win32event.SetEvent(self.hWaitStop)
def SvcDoRun(self):
self.start()
servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE,
servicemanager.PYS_SERVICE_STARTED,
(self._svc_name_, ''))
self.main()
def main(self):
# do some stuff
if __name__ == '__main__':
if len(sys.argv) == 1:
servicemanager.Initialize()
servicemanager.PrepareToHostSingle(CloudService)
servicemanager.StartServiceCtrlDispatcher()
else:
win32serviceutil.HandleCommandLine(CloudService)
</code></pre>
<p>If I use terminal with admin to <code>python CloudService.py install</code> then the service appears, which also happened before, but when I try to Start it, I still get the error again:</p>
<blockquote>
<p>Error 1053: The service did not respond to the start or control request in a timely fashion.</p>
</blockquote>
<p>Any ideas on what this could be...?
I'm guessing it's something with user permissions, but it's not clear to me what exactly is happening.</p>
|
<python><automation><service><windows-services><win32serviceutil>
|
2023-04-03 11:56:26
| 1
| 636
|
Yafim Simanovsky
|
75,919,008
| 10,216,028
|
How to pass "-c" argument correctly to pytest.main?
|
<p>I already checked <a href="https://stackoverflow.com/questions/55933832/pytest-define-custom-path-for-tests-collection">this question</a> but couldn't fix the error.</p>
<p>I am designing a testing framework based on using <code>pytest</code>.</p>
<p><code>pytest</code> will be called in a python script and not from the command line.</p>
<p>I have several directories in each of which there are:</p>
<ul>
<li>a file called <code>tests.py</code> in which <code>test_</code> functions are placed.</li>
<li>a <code>.ini</code> file</li>
</ul>
<p>What I am trying to do is to store each testing report in its corresponding directory. That is why I am using a separate <code>.ini</code> file in each directory.</p>
<p>Following is the content of each <code>.ini</code> file:</p>
<pre><code>[pytest]
addopts = --html=<path>/test_report.html
</code></pre>
<p>And the whole setup is:</p>
<pre><code>root
+-- dir_1
| +-- d1.ini
| +-- tests_d1.py
+-- dir_2
| +-- d2.ini
| +-- tests_d2.py
.
.
.
+-- dir_n
| +-- dn.ini
| +-- tests_dn.py
</code></pre>
<p>When calling <code>pytest.main</code>, I do it as follows:</p>
<pre><code>pytest_params = ['-v', <path> + tests.py,
'-c', <path> + <some name>.ini]
pytest.main(pytest_params)
</code></pre>
<p>But when I run <code>pytest.main</code> I get the following error:</p>
<pre><code>ERROR: usage: main.py [options] [file_or_dir] [file_or_dir] [...]
main.py: error: unrecognized arguments: --html=<path>/test_report.html <path>/tests.py
inifile: <path>/<some name>.ini
rootdir: <path>
</code></pre>
<p>(I removed the paths from the error message).</p>
<p>How do I fix the error?</p>
|
<python><python-3.x><pytest>
|
2023-04-03 11:33:24
| 1
| 455
|
Coder
|
75,918,895
| 4,373,805
|
is there a way to implement pandas wide_to_long in Polars?
|
<p>I use pandas <code>wide_to_long</code> to stack survey data, and it works beautifully with regex and stub names. Is this possible to do in Polars?</p>
<p>e.g. in Pandas -</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'famid': [1, 1, 1, 2, 2, 2, 3, 3, 3],
'birth': [1, 2, 3, 1, 2, 3, 1, 2, 3],
'ht_one': [2.8, 2.9, 2.2, 2, 1.8, 1.9, 2.2, 2.3, 2.1],
'ht_two': [3.4, 3.8, 2.9, 3.2, 2.8, 2.4, 3.3, 3.4, 2.9]
})
changed_df = pd.wide_to_long(df,
stubnames='ht',
i=['famid', 'birth'],
j='age',
sep='_',
suffix=r'\w+')
</code></pre>
<p>stubnames can take a list as well.</p>
<p>Edit- Added code after taking inspiration from Jqurious -</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
import polars as pl
import re
# Create age group data
age_groups = np.random.choice(['0-18', '19-35', '36-50', '51-65', '65+'], size=10)
# Create gender data
genders = np.random.choice(['Male', 'Female', 'Other'], size=10)
# Create familiarity and affinity data
fam_aff = np.random.rand(10, 4)
# Create column names
cols = ['Age_group', 'Gender', 'Familiarity_loop1', 'Familiarity_loop2', 'Affinity_loop1', 'Affinity_loop2']
# Combine data into dataframe
data = np.column_stack([age_groups, genders, fam_aff])
df = pd.DataFrame(data=data, columns=cols)
df["unique_records"] = np.arange(len(df))
regex_pattern = '^.*_loop\d'
# get polars DF
pl_df = pl.from_pandas(df)
# get all columns list
col_list = pl_df.columns
loop_list = [] # list of columns which contains _loop
sans_loop_list = [] # list of columns which do not contain _loop
for col in col_list:
    if re.search(regex_pattern, col):
        loop_list.append(col)
    else:
        sans_loop_list.append(col)

pl_long_df = (
    pl_df
    .unpivot(
        index=pl_df.select(sans_loop_list).columns,
        variable_name="master_stack")
    .with_columns(pl.col("master_stack").str.replace(r"_loop\d", ""))
)
pl_long_df.pivot(on="master_stack", index=sans_loop_list, values="value", aggregate_function=pl.element())
</code></pre>
<p>I want to see Affinity and Familiarity as their own columns, but I am not able to achieve it.</p>
<p>Edit 2 - Added Polars output and Pandas output</p>
<p>Polars -
<a href="https://i.sstatic.net/UduRV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UduRV.png" alt="polars melt and pivot output" /></a></p>
<p>Pandas output -
<a href="https://i.sstatic.net/1ItWU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1ItWU.png" alt="pandas wide_to_long output" /></a></p>
|
<python><python-polars>
|
2023-04-03 11:18:02
| 1
| 468
|
Ezio
|
75,918,753
| 12,284,585
|
What is the equivalent of this Python code in Haskell?
|
<p>I am trying to make the Haskell code below equivalent to the Python code, but it doesn't work and I can't spot the error. Can somebody spot it? The Python output is what I want.</p>
<pre><code>def balanceList(lst):
    length = len(lst)
    if length < 3:
        return lst
    else:
        middle = length // 2
        return [lst[middle]] + balanceList(lst[middle+1:] + balanceList(lst[:middle]))

print(balanceList(list(range(1, 10))))
# [5, 3, 8, 2, 7, 4, 9, 6, 1]
</code></pre>
<pre class="lang-hs prettyprint-override"><code>balanceList :: [a] -> [a]
balanceList lst
  | len < 3   = lst
  | otherwise = [lst !! middle] ++ balanceList (drop (middle+1) lst) ++ balanceList (take middle lst)
  where len = length lst; middle = len `div` 2
-- >>> balanceList [1..9]
-- [5,8,9,6,7,3,4,1,2]
</code></pre>
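For comparison, here is a pure-Python sketch (my own illustration, not from the question) of the two recursion shapes side by side. The Python version above nests the recursion on the left half *inside* the argument of the outer recursive call, while the Haskell version balances the two halves independently and concatenates them, which is exactly why the outputs differ:

```python
def balance_nested(lst):
    # mirrors the question's Python: the balanced left half is appended
    # to the right half *before* recursing on the combined list
    if len(lst) < 3:
        return lst
    m = len(lst) // 2
    return [lst[m]] + balance_nested(lst[m + 1:] + balance_nested(lst[:m]))


def balance_concat(lst):
    # mirrors the question's Haskell: three independent pieces concatenated
    if len(lst) < 3:
        return lst
    m = len(lst) // 2
    return [lst[m]] + balance_concat(lst[m + 1:]) + balance_concat(lst[:m])


print(balance_nested(list(range(1, 10))))  # [5, 3, 8, 2, 7, 4, 9, 6, 1]
print(balance_concat(list(range(1, 10))))  # [5, 8, 9, 6, 7, 3, 4, 1, 2]
```

So a Haskell translation matching the Python output would also need to recurse on the concatenation, along the lines of `balanceList (drop (middle+1) lst ++ balanceList (take middle lst))`.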
|
<python><haskell>
|
2023-04-03 11:03:24
| 1
| 1,333
|
tturbo
|
75,918,731
| 4,847,250
|
Why does predicting with a TensorFlow model not give the same answer for each signal separately and for all signals at once?
|
<p>I have created a TensorFlow model that takes 512 input samples (shape 1 * N * 512) and I would like to make a prediction with new input.</p>
<p>I have a variable <code>s</code> holding 19 signals of 512 samples each (19 * 512). If I predict the output of my model one signal at a time:</p>
<pre><code>[DLmodel( s[i,:][np.newaxis,np.newaxis,:] ).numpy()[0,:,0] for i in range(19)]
</code></pre>
<p>I got this answer:</p>
<pre><code>[[0.41768566 0.5564939 0.30202574 0.35190994 0.27736259 0.28247398 0.2699227 0.33878434 0.35135144 0.31779674 0.3259031 0.3272484 0.32065392 0.33836302 0.31446803 0.26727855 0.29702038 0.30528304 0.32032394]]
</code></pre>
<p>but if I predict directly with a 2D matrix (all signals) I get :</p>
<pre><code>DLmodel( s[np.newaxis,:,:] ).numpy()[0,:,0]
[4.1768566e-01 3.5780075e-01 1.5305097e-01 9.7242827e-03 8.3400400e-06 2.6045337e-09 2.0279233e-11 1.0051511e-12 4.4332330e-13 2.3794513e-13 2.0760676e-13 1.8587506e-13 1.7166681e-13 1.7180506e-13 1.7025846e-13 1.5340669e-13 1.8261155e-13 1.4610023e-13 1.4570285e-13]
</code></pre>
<p>I don't understand why the answers are not equal.</p>
<p>I also don't understand why, if I make a 2D matrix input from a sliding window of 5 signals with a 1-sample shift, I don't get the expected answer:</p>
<pre><code>Signals = []
k = 0
for i in range(int(437*Fs), int(437*Fs)+5):
    Signals.append(Sigs[10, (k+i):(k+i)+size])
Signals = np.array(Signals)
Signals = np.expand_dims(Signals, axis=[0])
print(DLmodel(Signals).numpy()[0,:,0])

Signals = []
k = 0
for i in range(int(437*Fs), int(437*Fs)+5):
    Signals.append(Sigs[10, (k+i+1):(k+i+1)+size])
Signals = np.array(Signals)
Signals = np.expand_dims(Signals, axis=[0])
print(DLmodel(Signals).numpy()[0,:,0])
</code></pre>
<p>print this :</p>
<pre><code>[0.9198115 0.98681784 0.997053 0.9992207 0.9997619 ]
[0.92536646 0.9863089 0.99667054 0.99903715 0.999721 ]
</code></pre>
<p>I offset the second printed line so that values for the same windows line up, so the overlapping numbers above and below should be equal. This is very confusing.</p>
<p>Here's the model I used:</p>
<pre><code>DLmodel = Sequential()
DLmodel.add(LSTM(units=size, return_sequences=True, input_shape=(None, size),
activation='tanh')) # , kernel_regularizer=L2(0.01)))
DLmodel.add(Dropout(0.3))
DLmodel.add(Dense(size // 2, activation="relu", kernel_initializer="uniform"))
DLmodel.add(Dropout(0.3))
DLmodel.add(Dense(size // 4, activation="relu", kernel_initializer="uniform"))
DLmodel.add(Dropout(0.3))
DLmodel.add(Dense(size // 8, activation="relu", kernel_initializer="uniform"))
DLmodel.add(Dropout(0.3))
DLmodel.add(Dense(1, activation="sigmoid", kernel_initializer="uniform"))
DLmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy', 'mse'], run_eagerly=True)
</code></pre>
|
<python><tensorflow><predict>
|
2023-04-03 11:00:42
| 1
| 5,207
|
ymmx
|
75,918,462
| 7,386,830
|
How to save each iterating Statsmodel as a file to be used later?
|
<p>I have the following table generated:</p>
<pre><code>import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
# Generate 'random' data
np.random.seed(0)
X = 2.5 * np.random.randn(10) + 1.5
res = 0.5 * np.random.randn(10)
y = 2 + 0.3 * X + res
Name = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J']
# Create pandas dataframe to store our X and y values
df = pd.DataFrame(
{'Name': Name,
'X': X,
'y': y})
# Show the dataframe
df
</code></pre>
<p>Resulting in the following table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>X</th>
<th>y</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>5.910131</td>
<td>3.845061</td>
</tr>
<tr>
<td>B</td>
<td>2.500393</td>
<td>3.477255</td>
</tr>
<tr>
<td>C</td>
<td>3.946845</td>
<td>3.564572</td>
</tr>
<tr>
<td>D</td>
<td>7.102233</td>
<td>4.191507</td>
</tr>
<tr>
<td>E</td>
<td>6.168895</td>
<td>4.072600</td>
</tr>
<tr>
<td>F</td>
<td>-0.943195</td>
<td>1.883879</td>
</tr>
<tr>
<td>G</td>
<td>3.875221</td>
<td>3.909606</td>
</tr>
<tr>
<td>H</td>
<td>1.121607</td>
<td>2.233903</td>
</tr>
<tr>
<td>I</td>
<td>1.241953</td>
<td>2.529120</td>
</tr>
<tr>
<td>J</td>
<td>2.526496</td>
<td>2.330901</td>
</tr>
</tbody>
</table>
</div>
<p>The the following code iterates to exludes one row at a time, and builds a set of regression plots:</p>
<pre><code>import statsmodels.formula.api as smf
import warnings
warnings.filterwarnings('ignore')
# Initialise and fit linear regression model using `statsmodels`
for row_index, row in df.iterrows():
    # dataframe with all rows except for one
    df_reduced = df[~(df.index == row_index)]
    model = smf.ols('X ~ y', data=df_reduced)
    model = model.fit()
    intercept, slope = model.params
    print(model.summary())
    y1 = intercept + slope * df_reduced.y.min()
    y2 = intercept + slope * df_reduced.y.max()
    plt.plot([df_reduced.y.min(), df_reduced.y.max()], [y1, y2], label=row.Name, color='red')
    plt.scatter(df_reduced.y, df_reduced.X)
    plt.legend()
    plt.savefig(f"All except {row.Name} analogue.pdf")
    plt.show()
</code></pre>
<p>The question is: how can I save each of the generated models as a file that can be used later? In this example at least 9 regression models are generated, and I would like each saved as a file with an identifiable name.</p>
<p>The second question is: how can I add some space between each model summary and its plot in the matplotlib output?</p>
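On persisting the models: a statsmodels results object can be written out with its own `save()`/`load()` helpers or with plain `pickle`. A minimal sketch of the pickle mechanics, using a hypothetical dict as a stand-in for the fitted model (inside the loop you would use a per-row filename such as `f"model_{row.Name}.pickle"`):

```python
import os
import pickle
import tempfile

# hypothetical stand-in for one fitted model; a statsmodels results
# object can be dumped the same way (or via results.save(filename))
model_like = {"name": "All except A", "intercept": 1.23, "slope": 0.45}

path = os.path.join(tempfile.gettempdir(), "model_all_except_A.pickle")
with open(path, "wb") as fh:
    pickle.dump(model_like, fh)   # write one model per file

with open(path, "rb") as fh:
    restored = pickle.load(fh)    # later: load it back for reuse

print(restored["slope"])  # 0.45
```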
|
<python><pandas><matplotlib><regression><statsmodels>
|
2023-04-03 10:28:54
| 2
| 754
|
Dinesh
|
75,918,378
| 9,356,904
|
Python script for retrieving Reddit posts works but returns blank results
|
<p>I've written a python script which is intended to :</p>
<ul>
<li>Call the Reddit API</li>
<li>Retrieve posts and comments of the subreddit 'r/booksuggestions'</li>
<li>Save the results into a CSV for further analysis</li>
</ul>
<p>The CSV file is created successfully, but its content is blank. I've confirmed the credentials are correct and I don't get a 401 error any more (which I did before I adjusted the client ID to the correct one). I'm using the PRAW Reddit/Python library.</p>
<p>I seem to be using the correct subreddit name and have confirmed there are posts in the time period. What am I doing wrong?</p>
<pre><code>import praw
import pandas as pd
import datetime as dt
import csv
import datetime

reddit = praw.Reddit(client_id='my client ID',
                     client_secret='my secret',
                     user_agent='my agent')

# Get the subreddit of /r/booksuggestions
subreddit = reddit.subreddit("booksuggestions")

# set time for last 7 days
end_time = datetime.datetime.now()
start_time = end_time - datetime.timedelta(days=7)

# Prep the CSV
with open("booksuggestions_data.csv", mode="w", newline="") as csv_file:
    fieldnames = ["type", "id", "created_utc", "author", "body"]
    writer = csv.DictWriter(csv_file, fieldnames=fieldnames)
    writer.writeheader()

    # Search for posts
    for submission in subreddit.search(query=None, sort="new", time_filter="week"):
        try:
            if dt.datetime.utcfromtimestamp(submission.created_utc) >= start_time:
                writer.writerow(
                    {
                        "type": "post",
                        "id": submission.id,
                        "created_utc": submission.created_utc,
                        "author": submission.author.name,
                        "body": submission.selftext,
                    }
                )

                # Search for comments in the post
                submission.comments.replace_more(limit=None)
                for comment in submission.comments.list():
                    if dt.datetime.utcfromtimestamp(comment.created_utc) >= start_time:
                        writer.writerow(
                            {
                                "type": "comment",
                                "id": comment.id,
                                "created_utc": comment.created_utc,
                                "author": comment.author.name,
                                "body": comment.body,
                            }
                        )
        except Exception as e:
            print(f"Error processing {submission.id}: {str(e)}")
</code></pre>
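One detail that may bite here, though it may not be the cause of the empty CSV: the code compares `datetime.datetime.now()` (local time) against `datetime.utcfromtimestamp(...)` (UTC), so the 7-day cutoff is silently shifted by the machine's UTC offset. A small stand-alone sketch of doing the comparison with timezone-aware UTC datetimes on both sides:

```python
from datetime import datetime, timedelta, timezone

# aware UTC datetimes on both sides of the comparison
end_time = datetime.now(timezone.utc)
start_time = end_time - timedelta(days=7)

# hypothetical created_utc value, standing in for submission.created_utc
created_utc = (end_time - timedelta(days=3)).timestamp()
created = datetime.fromtimestamp(created_utc, tz=timezone.utc)

print(created >= start_time)  # True
```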
|
<python><reddit><praw>
|
2023-04-03 10:18:31
| 1
| 559
|
nc14
|
75,918,366
| 10,428,677
|
Groupby and generate a column saying how many values are imputed
|
<p>I have a dataframe that looks like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Region</th>
<th>Country</th>
<th>Imputed</th>
<th>Year</th>
<th>Price</th>
</tr>
</thead>
<tbody>
<tr>
<td>Africa</td>
<td>South Africa</td>
<td>No</td>
<td>2016</td>
<td>500</td>
</tr>
<tr>
<td>Africa</td>
<td>South Africa</td>
<td>No</td>
<td>2017</td>
<td>400</td>
</tr>
<tr>
<td>Africa</td>
<td>South Africa</td>
<td>Yes</td>
<td>2018</td>
<td>432</td>
</tr>
<tr>
<td>Africa</td>
<td>South Africa</td>
<td>No</td>
<td>2019</td>
<td>450</td>
</tr>
<tr>
<td>Africa</td>
<td>Nigeria</td>
<td>Yes</td>
<td>2016</td>
<td>750</td>
</tr>
<tr>
<td>Africa</td>
<td>Nigeria</td>
<td>Yes</td>
<td>2017</td>
<td>780</td>
</tr>
<tr>
<td>Africa</td>
<td>Nigeria</td>
<td>No</td>
<td>2018</td>
<td>816</td>
</tr>
<tr>
<td>Africa</td>
<td>Nigeria</td>
<td>No</td>
<td>2019</td>
<td>890</td>
</tr>
<tr>
<td>Africa</td>
<td>Kenya</td>
<td>Yes</td>
<td>2016</td>
<td>212</td>
</tr>
<tr>
<td>Africa</td>
<td>Kenya</td>
<td>No</td>
<td>2017</td>
<td>376</td>
</tr>
<tr>
<td>Africa</td>
<td>Kenya</td>
<td>No</td>
<td>2018</td>
<td>415</td>
</tr>
<tr>
<td>Africa</td>
<td>Kenya</td>
<td>No</td>
<td>2019</td>
<td>430</td>
</tr>
</tbody>
</table>
</div>
<p>Here is the sample data:</p>
<pre><code>data1 = {'Region': ['Africa','Africa','Africa','Africa','Africa','Africa','Africa','Africa','Africa','Africa','Africa','Africa'],
'Country': ['South Africa','South Africa','South Africa','South Africa','Nigeria','Nigeria','Nigeria','Nigeria','Kenya','Kenya','Kenya','Kenya'],
'Imputed': ['No','No','Yes','No','Yes','Yes','No','No','Yes','No','No','No'],
'Year': [2016, 2017, 2018, 2019,2016, 2017, 2018, 2019,2016, 2017, 2018, 2019],
'Price': [500, 400, 432,450,750,780,816,890,212,376,415,430]}
df = pd.DataFrame(data1)
</code></pre>
<p>I have to do a <code>groupby</code> using <code>Region</code> and <code>Year</code> to calculate the regional price for each year, which is straightforward to do. However, I would like to add a new column which says how many values have been imputed when doing the <code>groupby</code>.</p>
<p>The output should look like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Region</th>
<th>Imputed</th>
<th>Year</th>
<th>Price</th>
</tr>
</thead>
<tbody>
<tr>
<td>Africa</td>
<td>2/3 Components Imputed</td>
<td>2016</td>
<td>487.3</td>
</tr>
<tr>
<td>Africa</td>
<td>1/3 Components Imputed</td>
<td>2017</td>
<td>518.7</td>
</tr>
<tr>
<td>Africa</td>
<td>1/3 Components Imputed</td>
<td>2018</td>
<td>554.3</td>
</tr>
<tr>
<td>Africa</td>
<td>0/3 Components Imputed</td>
<td>2019</td>
<td>590</td>
</tr>
</tbody>
</table>
</div>
<p>Below is my code so far:</p>
<pre><code>df = df.groupby(['Region','Year'])['Price'].mean()
</code></pre>
<p>Is there any way of adding the additional column as per my desired output example?</p>
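A sketch of one way to do this with a helper boolean column and named aggregation (the helper column and variable names below are my own choices):

```python
import pandas as pd

data1 = {'Region': ['Africa'] * 12,
         'Country': ['South Africa'] * 4 + ['Nigeria'] * 4 + ['Kenya'] * 4,
         'Imputed': ['No', 'No', 'Yes', 'No', 'Yes', 'Yes',
                     'No', 'No', 'Yes', 'No', 'No', 'No'],
         'Year': [2016, 2017, 2018, 2019] * 3,
         'Price': [500, 400, 432, 450, 750, 780, 816, 890, 212, 376, 415, 430]}
df = pd.DataFrame(data1)

out = (
    df.assign(is_imputed=df['Imputed'].eq('Yes'))
      .groupby(['Region', 'Year'], as_index=False)
      .agg(n_imputed=('is_imputed', 'sum'),   # how many rows were imputed
           n_total=('is_imputed', 'size'),    # group size
           Price=('Price', 'mean'))
)
out['Imputed'] = (out['n_imputed'].astype(str) + '/'
                  + out['n_total'].astype(str) + ' Components Imputed')
out = out[['Region', 'Imputed', 'Year', 'Price']]
print(out)
```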
|
<python><pandas><dataframe>
|
2023-04-03 10:17:39
| 2
| 590
|
A.N.
|
75,918,361
| 2,405,663
|
How to set date format in DatePicker bokeh
|
<p>I'm building a page with some charts using Bokeh and Python. I'm using <code>DatePicker</code> to show a field for selecting a date.</p>
<p>This is the code:</p>
<pre><code>date_pickerStart = DatePicker(title='', value=dateStart, width=100)
</code></pre>
<p>Now this field is displayed on the page and the user can select the date using the calendar. After a date is set, it is displayed in the following format:
yyyy-mm-dd</p>
<p>I need to set another format, or even better another language (Italian), so the format would be like this:
dd-mm-yyyy</p>
<p>Is it possible to do this?</p>
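As a sketch of the string-reformatting workaround (assuming the picked value only needs to be *displayed* in dd-mm-yyyy elsewhere in the app; this does not change the text shown inside the widget itself):

```python
from datetime import datetime

# hypothetical value as returned by date_pickerStart.value
# (Bokeh's DatePicker hands back an ISO "yyyy-mm-dd" string)
iso_value = "2023-04-03"

# reformat it in the Python callback before showing it
formatted = datetime.strptime(iso_value, "%Y-%m-%d").strftime("%d-%m-%Y")
print(formatted)  # 03-04-2023
```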
|
<python><datepicker><bokeh>
|
2023-04-03 10:16:53
| 2
| 2,177
|
bircastri
|
75,918,140
| 1,587,118
|
Getting RuntimeError: expected scalar type Half but found Float in AWS P3 instances in opt6.7B fine tune
|
<p>I have a simple code which takes an opt-6.7b model and fine-tunes it. When I run this code in Google Colab (Tesla T4, 16GB) it runs without any problem. But when I try to run the same code in an AWS p3.2xlarge environment (Tesla V100 GPU, 16GB) it gives the error:</p>
<pre><code>RuntimeError: expected scalar type Half but found Float
</code></pre>
<p>To be able to run the fine-tuning on a single GPU I use LoRA and peft, which are installed exactly the same way (pip install) in both cases. I can use <code>with torch.autocast("cuda"):</code> and then that error vanishes. But the training loss then becomes very strange: it does not gradually decrease, but fluctuates within a large range (0-5) (and if I change the model to GPT-J, the loss always stays 0), whereas on Colab the loss decreases gradually. So I am not sure whether using <code>with torch.autocast("cuda"):</code> is a good thing or not.</p>
<p>The transformers version is <code>4.28.0.dev0</code> in both cases. The torch version on Colab shows <code>1.13.1+cu116</code>, whereas p3 shows <code>1.13.1</code> (does this mean it does not have CUDA support? I doubt it; on top of that, <code>torch.cuda.is_available()</code> returns True).</p>
<p>The only large difference I can see is that for colab, bitsandbytes has this following setup log</p>
<pre><code>===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 118
</code></pre>
<p>Whereas for p3 it is the following</p>
<pre><code>===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
CUDA SETUP: CUDA runtime path found: /opt/conda/envs/pytorch/lib/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.0
CUDA SETUP: Detected CUDA version 117
CUDA SETUP: Loading binary /opt/conda/envs/pytorch/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cuda117_nocublaslt.so...
</code></pre>
<p>What am I missing? I am not posting the full code here, but it is really a very basic version that takes opt-6.7b and fine-tunes it on the alpaca dataset using LoRA and peft.</p>
<p>Why does it run on Colab but not on p3? Any help is welcome :)</p>
<p>-------------------- EDIT</p>
<p>I am posting a minimal code example that I actually tried</p>
<pre><code>import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
import torch
import torch.nn as nn
import bitsandbytes as bnb
from transformers import AutoTokenizer, AutoConfig, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b",
    load_in_8bit=True,
    device_map='auto',
)

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b")

for param in model.parameters():
    param.requires_grad = False  # freeze the model - train adapters later
    if param.ndim == 1:
        # cast the small parameters (e.g. layernorm) to fp32 for stability
        param.data = param.data.to(torch.float32)

model.gradient_checkpointing_enable()  # reduce number of stored activations
model.enable_input_require_grads()

class CastOutputToFloat(nn.Sequential):
    def forward(self, x): return super().forward(x).to(torch.float32)

model.lm_head = CastOutputToFloat(model.lm_head)

def print_trainable_parameters(model):
    """
    Prints the number of trainable parameters in the model.
    """
    trainable_params = 0
    all_param = 0
    for _, param in model.named_parameters():
        all_param += param.numel()
        if param.requires_grad:
            trainable_params += param.numel()
    print(
        f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}"
    )

from peft import LoraConfig, get_peft_model

config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM"
)

model = get_peft_model(model, config)
print_trainable_parameters(model)

import transformers
from datasets import load_dataset

tokenizer.pad_token_id = 0
CUTOFF_LEN = 256
data = load_dataset("tatsu-lab/alpaca")
data = data.shuffle().map(
    lambda data_point: tokenizer(
        data_point['text'],
        truncation=True,
        max_length=CUTOFF_LEN,
        padding="max_length",
    ),
    batched=True
)
# data = load_dataset("Abirate/english_quotes")
# data = data.map(lambda samples: tokenizer(samples['quote']), batched=True)

trainer = transformers.Trainer(
    model=model,
    train_dataset=data['train'],
    args=transformers.TrainingArguments(
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        warmup_steps=100,
        max_steps=400,
        learning_rate=2e-5,
        fp16=True,
        logging_steps=1,
        output_dir='outputs'
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False)
)
model.config.use_cache = False  # silence the warnings. Please re-enable for inference!
trainer.train()
</code></pre>
<p>And here is the full stack trace</p>
<pre><code>/tmp/ipykernel_24622/2601578793.py:2 in <module> │
│ │
│ [Errno 2] No such file or directory: '/tmp/ipykernel_24622/2601578793.py' │
│ │
│ /opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/trainer.py:1639 in train │
│ │
│ 1636 │ │ inner_training_loop = find_executable_batch_size( │
│ 1637 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │
│ 1638 │ │ ) │
│ ❱ 1639 │ │ return inner_training_loop( │
│ 1640 │ │ │ args=args, │
│ 1641 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │
│ 1642 │ │ │ trial=trial, │
│ │
│ /opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/trainer.py:1906 in │
│ _inner_training_loop │
│ │
│ 1903 │ │ │ │ │ with model.no_sync(): │
│ 1904 │ │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │
│ 1905 │ │ │ │ else: │
│ ❱ 1906 │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │
│ 1907 │ │ │ │ │
│ 1908 │ │ │ │ if ( │
│ 1909 │ │ │ │ │ args.logging_nan_inf_filter │
│ │
│ /opt/conda/envs/pytorch/lib/python3.9/site-packages/transformers/trainer.py:2662 in │
│ training_step │
│ │
│ 2659 │ │ │ loss = loss / self.args.gradient_accumulation_steps │
│ 2660 │ │ │
│ 2661 │ │ if self.do_grad_scaling: │
│ ❱ 2662 │ │ │ self.scaler.scale(loss).backward() │
│ 2663 │ │ elif self.use_apex: │
│ 2664 │ │ │ with amp.scale_loss(loss, self.optimizer) as scaled_loss: │
│ 2665 │ │ │ │ scaled_loss.backward() │
│ │
│ /opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/_tensor.py:488 in backward │
│ │
│ 485 │ │ │ │ create_graph=create_graph, │
│ 486 │ │ │ │ inputs=inputs, │
│ 487 │ │ │ ) │
│ ❱ 488 │ │ torch.autograd.backward( │
│ 489 │ │ │ self, gradient, retain_graph, create_graph, inputs=inputs │
│ 490 │ │ ) │
│ 491 │
│ │
│ /opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/autograd/__init__.py:197 in backward │
│ │
│ 194 │ # The reason we repeat same the comment below is that │
│ 195 │ # some Python versions print out the first line of a multi-line function │
│ 196 │ # calls in the traceback and some print out the last line │
│ ❱ 197 │ Variable._execution_engine.run_backward( # Calls into the C++ engine to run the bac │
│ 198 │ │ tensors, grad_tensors_, retain_graph, create_graph, inputs, │
│ 199 │ │ allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to ru │
│ 200 │
│ │
│ /opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/autograd/function.py:267 in apply │
│ │
│ 264 │ │ │ │ │ │ │ "Function is not allowed. You should only implement one " │
│ 265 │ │ │ │ │ │ │ "of them.") │
│ 266 │ │ user_fn = vjp_fn if vjp_fn is not Function.vjp else backward_fn │
│ ❱ 267 │ │ return user_fn(self, *args) │
│ 268 │ │
│ 269 │ def apply_jvp(self, *args): │
│ 270 │ │ # _forward_cls is defined by derived class │
│ │
│ /opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/utils/checkpoint.py:157 in backward │
│ │
│ 154 │ │ │ raise RuntimeError( │
│ 155 │ │ │ │ "none of output has requires_grad=True," │
│ 156 │ │ │ │ " this checkpoint() is not necessary") │
│ ❱ 157 │ │ torch.autograd.backward(outputs_with_grad, args_with_grad) │
│ 158 │ │ grads = tuple(inp.grad if isinstance(inp, torch.Tensor) else None │
│ 159 │ │ │ │ │ for inp in detached_inputs) │
│ 160 │
│ │
│ /opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/autograd/__init__.py:197 in backward │
│ │
│ 194 │ # The reason we repeat same the comment below is that │
│ 195 │ # some Python versions print out the first line of a multi-line function │
│ 196 │ # calls in the traceback and some print out the last line │
│ ❱ 197 │ Variable._execution_engine.run_backward( # Calls into the C++ engine to run the bac │
│ 198 │ │ tensors, grad_tensors_, retain_graph, create_graph, inputs, │
│ 199 │ │ allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to ru │
│ 200 │
│ │
│ /opt/conda/envs/pytorch/lib/python3.9/site-packages/torch/autograd/function.py:267 in apply │
│ │
│ 264 │ │ │ │ │ │ │ "Function is not allowed. You should only implement one " │
│ 265 │ │ │ │ │ │ │ "of them.") │
│ 266 │ │ user_fn = vjp_fn if vjp_fn is not Function.vjp else backward_fn │
│ ❱ 267 │ │ return user_fn(self, *args) │
│ 268 │ │
│ 269 │ def apply_jvp(self, *args): │
│ 270 │ │ # _forward_cls is defined by derived class │
│ │
│ /opt/conda/envs/pytorch/lib/python3.9/site-packages/bitsandbytes/autograd/_functions.py:456 in │
│ backward │
│ │
│ 453 │ │ │ │
│ 454 │ │ │ elif state.CB is not None: │
│ 455 │ │ │ │ CB = state.CB.to(ctx.dtype_A, copy=True).mul_(state.SCB.unsqueeze(1).mul │
│ ❱ 456 │ │ │ │ grad_A = torch.matmul(grad_output, CB).view(ctx.grad_shape).to(ctx.dtype │
│ 457 │ │ │ elif state.CxB is not None: │
│ 458 │ │ │ │ │
│ 459 │ │ │ │ if state.tile_indices is None:
</code></pre>
<p>(Sorry if this is a very novice question but I have no solution at the moment :( )</p>
|
<python><pytorch><huggingface-transformers><huggingface>
|
2023-04-03 09:53:31
| 4
| 2,271
|
SRC
|
75,918,043
| 6,064,623
|
How to avoid extra digit being added in the addition of decimals in Python
|
<p><code>print(37.4<(32.0+5.0+0.2+0.2))</code>
This statement gives <code>True</code>, but I need it to give <code>False</code>: mathematically the sum 32.0+5.0+0.2+0.2 is exactly 37.4, but Python evaluates it as 37.400000000000006, so the comparison comes out <code>True</code> rather than <code>False</code>.</p>
<p>I am new to Python, so I don't know how to deal with this.</p>
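To illustrate what is happening and two common workarounds, a small sketch using `math.isclose` and `decimal.Decimal`:

```python
import math
from decimal import Decimal

total = 32.0 + 5.0 + 0.2 + 0.2
print(total)         # 37.400000000000006 (0.2 has no exact binary representation)
print(37.4 < total)  # True

# option 1: treat nearly-equal floats as equal with a tolerance
print(37.4 < total and not math.isclose(37.4, total))  # False

# option 2: do the arithmetic in exact decimal from the start
exact = Decimal("32.0") + Decimal("5.0") + Decimal("0.2") + Decimal("0.2")
print(Decimal("37.4") < exact)  # False
```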
<p>Many thanks</p>
|
<python>
|
2023-04-03 09:44:05
| 3
| 1,607
|
amol
|
75,917,994
| 3,614,197
|
Web scraping to obtain pdf reports with beautiful soup Python
|
<p>I am trying to write some web-scraping code to search the net for feasibility studies of mining companies. As a starting point I would like reports that are PDFs.
What I have so far is below, but I get an error on the last line:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd
# Set the search query
query = 'mining companies pre-feasibility feasibility studies'
# Set the URL of the search results page
url = 'https://www.google.com.au/search?q=' + query
# Set the user-agent to avoid being detected as a scraper
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'}
# Send a GET request to the search results page
response = requests.get(url, headers=headers)
# Parse the HTML content of the search results page with Beautiful Soup
soup = BeautifulSoup(response.content, 'html.parser')
# Find all the search result links
links = soup.find_all('a')
# Filter the links to those that point to PDF documents
pdf_links = [link['href'] for link in links if link['href'].endswith('.pdf')]
</code></pre>
<p>The error I get is <code>KeyError: 'href'</code>.</p>
<p>Where am I going wrong?</p>
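A minimal sketch of the usual fix: `link["href"]` raises `KeyError` for any `<a>` tag that has no `href` attribute, whereas `link.get("href")` returns `None`, which can be filtered out. Demonstrated on a small hypothetical HTML snippet:

```python
from bs4 import BeautifulSoup

html = '<a name="no-href-here"></a><a href="report.pdf">r</a><a href="page.html">p</a>'
soup = BeautifulSoup(html, "html.parser")

# .get("href") returns None for the first tag instead of raising KeyError
pdf_links = [
    link.get("href") for link in soup.find_all("a")
    if link.get("href") and link.get("href").endswith(".pdf")
]
print(pdf_links)  # ['report.pdf']
```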
|
<python><web-scraping><beautifulsoup>
|
2023-04-03 09:37:17
| 1
| 636
|
Spooked
|
75,917,956
| 8,040,369
|
modbus controller: Trying to get data from Modbus controller using IP address in Python
|
<p>I am trying to get data from a Modbus controller using its IP address.</p>
<pre><code>from pymodbus.client.sync import ModbusTcpClient as ModbusClient
client = ModbusClient("10.98.237.80", port=502, auto_open=True)
client.connect()
rr = client.read_holding_registers(60, 2, unit=1)
print (rr.registers)
</code></pre>
<p>This gives the below exception</p>
<pre><code>2023-04-03 14:57:18,678 MainThread DEBUG sync :216 Connection to Modbus server established. Socket ('10.80.144.99', 64492)
2023-04-03 14:57:18,680 MainThread DEBUG transaction :140 Current transaction state - IDLE
2023-04-03 14:57:18,681 MainThread DEBUG transaction :145 Running transaction 1
2023-04-03 14:57:18,682 MainThread DEBUG transaction :273 SEND: 0x0 0x1 0x0 0x0 0x0 0x6 0x1 0x3 0x0 0x3c 0x0 0x2
2023-04-03 14:57:18,683 MainThread DEBUG sync :76 New Transaction state 'SENDING'
2023-04-03 14:57:18,684 MainThread DEBUG transaction :287 Changing transaction state from 'SENDING' to 'WAITING FOR REPLY'
2023-04-03 14:57:20,082 MainThread DEBUG transaction :375 Changing transaction state from 'WAITING FOR REPLY' to 'PROCESSING REPLY'
2023-04-03 14:57:20,083 MainThread DEBUG transaction :297 RECV: 0x0 0x1 0x0 0x0 0x0 0x3 0x1 0x83 0xfc
2023-04-03 14:57:20,084 MainThread DEBUG socket_framer :147 Processing: 0x0 0x1 0x0 0x0 0x0 0x3 0x1 0x83 0xfc
2023-04-03 14:57:20,084 MainThread DEBUG factory :266 Factory Response[131]
2023-04-03 14:57:20,085 MainThread DEBUG transaction :454 Adding transaction 1
2023-04-03 14:57:20,085 MainThread DEBUG transaction :465 Getting transaction 1
2023-04-03 14:57:20,086 MainThread DEBUG transaction :224 Changing transaction state from 'PROCESSING REPLY' to 'TRANSACTION_COMPLETE'
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-11-1ea661f8edc1> in <module>
6 rr = client.read_holding_registers(60, 2, unit=1)
7
----> 8 print (rr.registers)
AttributeError: 'ExceptionResponse' object has no attribute 'registers'
</code></pre>
<p>I am completely new to Modbus and Modbus servers/clients.
Please help; any help is much appreciated.</p>
|
<python><modbus><modbus-tcp><pymodbus>
|
2023-04-03 09:32:23
| 1
| 787
|
SM079
|
75,917,940
| 7,132,596
|
Django post_save is not executed on CI/CD
|
<p>I recently increased the default permissions for all my Django Views. In order for the users to have some default permissions I add new users to a default Group that has mainly viewing permissions. This is done via a <code>post_save</code> signal.</p>
<pre class="lang-py prettyprint-override"><code>MODELS_VIEW = [
    "time entry",
    "time entry version",
    "project",
    "client",
    "user",
]
MODELS_CHANGE = ["time entry version", "user"]
MODELS_CREATE = ["time entry version"]
MODELS_DELETE = ["time entry version"]


def _add_permissions_to_group(group, perm_type: str, models: List[str]) -> None:
    """Add the permission of the `perm_type` to the given models."""
    from django.contrib.auth.models import Permission

    for model in models:
        perm_name = f"Can {perm_type} {model}"
        permission = Permission.objects.get(name=perm_name)
        group.permissions.add(permission)


def _add_view_permissions_to_group(group) -> None:
    _add_permissions_to_group(group, perm_type="view", models=MODELS_VIEW)


def _add_create_permissions_to_group(group) -> None:
    _add_permissions_to_group(group, perm_type="add", models=MODELS_CREATE)


def _add_change_permissions_to_group(group) -> None:
    _add_permissions_to_group(group, perm_type="change", models=MODELS_CHANGE)


def _add_delete_permissions_to_group(group) -> None:
    _add_permissions_to_group(group, perm_type="delete", models=MODELS_DELETE)


def _get_default_user_group_with_permissions():
    """Get or create the default UserGroup and add all standard permissions."""
    from django.contrib.auth.models import Group

    logger.debug("Getting default group for user")
    group, _ = Group.objects.get_or_create(name="default")

    # Add the standard permissions for users
    _add_view_permissions_to_group(group)
    _add_create_permissions_to_group(group)
    _add_change_permissions_to_group(group)
    _add_delete_permissions_to_group(group)
    return group


class UsersConfig(AppConfig):
    """Config for Users"""

    default_auto_field = "django.db.models.BigAutoField"
    name = "users"

    def ready(self):
        """Add the user to the default UserGroup."""
        logger.debug("User-config: App ready")

        def _add_to_default_group(sender, **kwargs):
            """Add the user to the default group on creation."""
            if kwargs["created"]:
                group = _get_default_user_group_with_permissions()
                user = kwargs["instance"]
                logger.debug(f"Adding user {user} to group {group}")
                user.groups.add(group)

        post_save.connect(_add_to_default_group, sender=settings.AUTH_USER_MODEL)
</code></pre>
<p>I have a fixture in my tests that creates a user and the following test using that fixture:</p>
<pre class="lang-py prettyprint-override"><code>@pytest.fixture()
def user_max(django_user_model):
"""Create a normal user called max."""
logger.debug("Creating user max")
user = django_user_model.objects.create_user(
username="max",
email="abc@de.com",
password="password",
)
logger.debug(f"Permissions after creating: {user.get_all_permissions()}")
return user
def test_default_permissions(django_user_model):
"""Test default permissions."""
print("Group default permissions: ", Group.objects.get(name="default").permissions.all())
print(
"Permissions of user_max: ",
django_user_model.objects.get(id=django_user_model.objects.get(username="max").id).get_all_permissions(),
)
    assert len(django_user_model.objects.get(username="max").get_all_permissions()) > 5
</code></pre>
<p>When running the test I receive the debug messages from <code>ready()</code>-function, <code>_get_default_user_group_with_permissions()</code>-function as well as the test:</p>
<pre class="lang-bash prettyprint-override"><code>2023-04-03 11:13:05,685 - conftest - DEBUG - Creating user max
2023-04-03 11:13:05,900 - users.apps - DEBUG - Getting default group for user
2023-04-03 11:13:05,978 - users.apps - DEBUG - Adding user max to group default
2023-04-03 11:13:05,995 - conftest - DEBUG - Permissions after creating: {'projects.view_task', 'projects.view_project', 'bookings.delete_timeentryversion', 'projects.view_projecttype', 'bookings.view_timeentry', 'bookings.view_timeentryversion', 'clients.view_client', 'users.view_resource', 'projects.view_taskgroup', 'users.change_user', 'users.view_team', 'bookings.change_timeentryversion', 'bookings.add_timeentryversion', 'users.view_user'}
</code></pre>
<p>Now to my issue: When running the same code on Gitlab CI/CD this test errors because no <code>default</code>-Group is created and there are neither debug messages from the <code>ready()</code>-function nor the <code>_get_default_user_group_with_permissions()</code>-function:</p>
<pre class="lang-bash prettyprint-override"><code>2023-04-03 10:31:07,963 - conftest - DEBUG - Creating user max
2023-04-03 10:31:08,293 - conftest - DEBUG - Permissions after creating: set()
</code></pre>
<p>How come the code from <code>ready()</code> is not executed in the CI/CD pipeline?</p>
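One configuration detail worth checking (a hedged sketch of a common cause, not a confirmed diagnosis for this pipeline): if the settings used on CI list the app differently, Django may load a different AppConfig and never call <code>UsersConfig.ready()</code>. Naming the config class explicitly removes that ambiguity; <code>users.apps.UsersConfig</code> below is assumed to be the dotted path to the AppConfig shown in the question.

```python
# settings.py sketch — listing the AppConfig class explicitly instead of
# just "users" guarantees UsersConfig.ready() (and the signal hookup) runs.
INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "users.apps.UsersConfig",  # explicit, instead of just "users"
]
```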
|
<python><django><continuous-integration><pytest><pytest-django>
|
2023-04-03 09:30:42
| 2
| 956
|
Hans Bambel
|
75,917,750
| 9,143,046
|
Very slow aggregate on Pandas 2.0 dataframe with pyarrow as dtype_backend
|
<p>Let's say I have the following dataframe:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Code</th>
<th>Price</th>
</tr>
</thead>
<tbody>
<tr>
<td>AA1</td>
<td>10</td>
</tr>
<tr>
<td>AA1</td>
<td>20</td>
</tr>
<tr>
<td>BB2</td>
<td>30</td>
</tr>
</tbody>
</table>
</div>
<p>And I want to perform the following operation on it:</p>
<pre><code>df.groupby("Code").aggregate({
    "Price": "sum"
})
</code></pre>
<p>I have been experimenting with the new pyarrow dtypes introduced in Pandas 2.0: I created three copies of the dataframe and, for each copy, measured the execution time (average of 5 runs) of the operation above.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Code column dtype</th>
<th>Price column dtype</th>
<th>Execution time</th>
</tr>
</thead>
<tbody>
<tr>
<td>Object</td>
<td>float64</td>
<td>2.94 s</td>
</tr>
<tr>
<td>string[pyarrow]</td>
<td>double[pyarrow]</td>
<td>49.5 s</td>
</tr>
<tr>
<td>string[pyarrow]</td>
<td>float64</td>
<td>1.11 s</td>
</tr>
</tbody>
</table>
</div>
<p>Can anyone explain why applying an aggregate function on a column with double pyarrow dtype is so slow compared to the standard numpy float64 dtype?</p>
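For reference, this is the baseline operation on the sample data with plain numpy-backed dtypes (a minimal runnable sketch of the aggregate being timed, independent of the performance question):

```python
import pandas as pd

# The sample frame from the question, default numpy-backed dtypes
df = pd.DataFrame({"Code": ["AA1", "AA1", "BB2"],
                   "Price": [10.0, 20.0, 30.0]})

# Sum prices per code — the operation whose timing is compared above
result = df.groupby("Code").aggregate({"Price": "sum"})
```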
|
<python><pandas><group-by><pyarrow><apache-arrow>
|
2023-04-03 09:06:55
| 1
| 331
|
ADEL NAMANI
|
75,917,657
| 630,971
|
Translate list of labels into array of labels per ID in python
|
<p>I have a data frame with texts and labels. Each text has multiple rows, each with one label.</p>
<pre><code>dummy_df = pd.DataFrame([['Text1','label1'], ['Text1', 'label2']], columns=["TEXT", "LABELS"])
</code></pre>
<p>I would like to have the following so that I can apply the MultiLabelBinarizer() function.</p>
<pre><code>TEXT | LABEL
Text1| [[label1,label2]]
</code></pre>
<p><a href="https://www.projectpro.io/recipes/one-hot-encoding-with-multiple-labels-in-python" rel="nofollow noreferrer">Reference 1</a>
<a href="https://www.kaggle.com/questions-and-answers/66693" rel="nofollow noreferrer">Reference 2</a></p>
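A hedged sketch of the grouping step (using only the sample frame from the question): collecting all labels of the same text into one list per row gives the shape that <code>MultiLabelBinarizer</code> expects.

```python
import pandas as pd

dummy_df = pd.DataFrame([['Text1', 'label1'], ['Text1', 'label2']],
                        columns=["TEXT", "LABELS"])

# One row per text, with all of its labels gathered into a list
grouped = dummy_df.groupby("TEXT")["LABELS"].agg(list).reset_index()
```

From here, `sklearn.preprocessing.MultiLabelBinarizer().fit_transform(grouped["LABELS"])` would produce the one-hot matrix (assuming scikit-learn is available, as in the linked references).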
|
<python><pandas><dataframe>
|
2023-04-03 08:54:48
| 1
| 472
|
sveer
|
75,917,449
| 4,837,637
|
Python SQLAlchemy cursor not executing select statement
|
<p>I have developed this Database class to provide reusable functions for other files:</p>
<pre><code>import threading
import sqlite3
from queue import Queue
import sqlalchemy
class Database(threading.Thread):
def __init__(self, db, logger = None):
super(Database, self).__init__()
self.db=db
self.reqs=Queue()
self.log = logger
self.start()
self.join()
def run(self):
try:
# create connection pool
pool = sqlalchemy.create_engine("postgresql+pg8000://",creator=self.db,pool_size= 20,max_overflow= 0)
cnx = pool.raw_connection()
cursor = cnx.cursor()
print('Conection DB')
except Exception as e:
print(f"[DB] An error occurred: {e}")
return
while True:
req, arg, res = self.reqs.get()
if req=='--close--': break
cursor.execute(req, arg)
if res:
for rec in cursor:
res.put(rec)
res.put('--no more--')
cnx.close()
def execute(self, req, arg=None, res=None):
self.reqs.put((req, arg or tuple(), res))
def sel(self, req, arg=None,res=None):
self.execute(req, arg, res)
def select(self, req, arg=None):
res=Queue()
self.execute(req, arg, res)
while True:
rec=res.get()
if rec=='--no more--': break
yield rec
def close(self):
self.execute('--close--')
</code></pre>
<p>Now, in another Python file, I try to call the select function this way:</p>
<pre><code># initialize Connector object
connector = Connector()
def getconn():
with Connector() as connector:
conn = connector.connect(
instance_connection_name,
"pg8000",
user = db_user,
password = db_pass,
db = db_name,
ip_type = IPTypes.PUBLIC
)
return conn
def get_p(found:Database):
#data = select("SELECT * FROM products")
data = found.select("SELECT count(*) FROM products")
return data
dbstore = Database(db=getconn)
try:
print(get_p(dbstore))
except Exception as e:
print(e)
</code></pre>
<p>In the terminal I see that the connection message ('Conection DB') is printed, but the SQL statement in the select function is never executed.</p>
<p>Where am I going wrong?</p>
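One pattern-level observation (a hedged sketch, not a verified diagnosis of this exact program): calling <code>self.join()</code> inside <code>__init__</code> right after <code>self.start()</code> makes the constructor wait for <code>run()</code> to finish, while <code>run()</code> waits for a <code>'--close--'</code> request that can only be enqueued after the constructor returns. A minimal stand-in with no database involved:

```python
import queue
import threading

class Worker(threading.Thread):
    """Minimal stand-in for the Database thread, without any DB."""

    def __init__(self):
        super().__init__()
        self.reqs = queue.Queue()
        self.start()
        # NOTE: `self.join()` here would deadlock: run() only exits after
        # close() enqueues the sentinel, and close() runs after __init__.

    def run(self):
        while True:
            req = self.reqs.get()
            if req == "--close--":
                break

    def close(self):
        self.reqs.put("--close--")

w = Worker()
w.close()
w.join()  # joins cleanly once the sentinel has been consumed
```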
|
<python><sqlalchemy><cursor>
|
2023-04-03 08:31:14
| 0
| 415
|
dev_
|
75,917,443
| 14,269,252
|
Plotly - how to show all data points on the x and y axes and adjust the chart size depending on the number of data points
|
<p>I wrote the following code to draw a scatter plot using Plotly, and I want to stick with this library.</p>
<ul>
<li><p>How can I get rid of the messy x and y axes when I want to show all the data?</p>
</li>
<li><p>How can I adjust the size of the chart depending on the number of data points shown? The chart is connected to a Streamlit filter and is dynamic, depending on what I choose.</p>
</li>
<li><p>Also, when I use all the data points the x axis is messy; is there any way to adjust the plot?</p>
</li>
</ul>
<pre><code>def chart(df):
color_discrete_map = {'df1': 'rgb(255,0,0)',
'df2': 'rgb(0,255,0)',
'df3': '#11FCE4',
'df4': '#9999FF',
'df5': '#606060',
'df6': '#CC6600'}
fig = px.scatter(df, x='DATE', y='CODE', color='SOURCE', width=1200,
height=800,color_discrete_map=color_discrete_map)
fig.update_layout(xaxis_type='category')
fig.update_layout(
margin=dict(l=250, r=0, t=0, b=20),
)
fig.update_layout(xaxis=dict(tickformat="%y-%m", tickmode = 'linear', tick0 = 0.5,dtick = 0.75))
fig.update_xaxes(ticks= "outside",
ticklabelmode= "period",
tickcolor= "black",
ticklen=10,
minor=dict(
ticklen=4,
dtick=7*24*60*60*1000,
tick0="2016-07-03",
griddash='dot',
gridcolor='white')
)
fig.update_layout(yaxis={"dtick":1},margin={"t":0,"b":0},height=500)
</code></pre>
<p><a href="https://i.sstatic.net/Jn1V1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jn1V1.png" alt="enter image description here" /></a></p>
<p>Update :</p>
<p>I was able to add the following code to create a range slider for the x axis, so now the question is: how can I make a slider for the y axis?</p>
<pre><code>
fig.update_layout(
xaxis=dict(
rangeselector=dict(
buttons=list([
dict(count=1,
label="1m",
step="month",
stepmode="backward"),
dict(count=6,
label="6m",
step="month",
stepmode="backward"),
dict(count=1,
label="YTD",
step="year",
stepmode="todate"),
dict(count=1,
label="1y",
step="year",
stepmode="backward"),
dict(step="all")
])
),
rangeslider=dict(
visible=True
),
type="date"
)
)
fig.update_xaxes(tickformat = '%Y-%B', dtick='M1')
</code></pre>
|
<python><plot><charts><plotly><streamlit>
|
2023-04-03 08:30:44
| 0
| 450
|
user14269252
|
75,917,238
| 9,850,681
|
Error "Auto offset commit failed: [Error 25] UnknownMemberIdError:" after days of use
|
<p>This problem always occurs after the microservices have been communicating with Kafka for a few days. I have 3 nodes, and each microservice uses a group id on a specific topic. The error is as follows.</p>
<pre><code>Unable connect to node with id 1:
Failed fetch messages from 1: NodeNotReadyError: Attempt to send a request to node which is not ready (node id 1).
Failed fetch messages from 2: [Error 7] RequestTimedOutError
Failed fetch messages from 1: [Error 7] RequestTimedOutError
Failed fetch messages from 2: [Error 7] RequestTimedOutError
Error sending JoinGroupRequest_v2 to node 1 [[Error 7] RequestTimedOutError] -- marking coordinator dead
Marking the coordinator dead (node 1)for group _message_alarm_ticket_app1.
Failed fetch messages from 3: [Error 7] RequestTimedOutError
Heartbeat failed: local member_id was not recognized; resetting and re-joining group
Heartbeat session expired - marking coordinator dead
Marking the coordinator dead (node 3)for group nce_alarms.
OffsetCommit failed for group nce_alarms due to group error ([Error 25] UnknownMemberIdError: nce_alarms), will rejoin
OffsetCommit failed for group nce_alarms due to group error ([Error 25] UnknownMemberIdError: nce_alarms), will rejoin
OffsetCommit failed for group nce_alarms due to group error ([Error 25] UnknownMemberIdError: nce_alarms), will rejoin
OffsetCommit failed for group nce_alarms due to group error ([Error 25] UnknownMemberIdError: nce_alarms), will rejoin
OffsetCommit failed for group nce_alarms due to group error ([Error 25] UnknownMemberIdError: nce_alarms), will rejoin
OffsetCommit failed for group nce_alarms due to group error ([Error 25] UnknownMemberIdError: nce_alarms), will rejoin
OffsetCommit failed for group nce_alarms due to group error ([Error 25] UnknownMemberIdError: nce_alarms), will rejoin
OffsetCommit failed for group nce_alarms due to group error ([Error 25] UnknownMemberIdError: nce_alarms), will rejoin
OffsetCommit failed for group nce_alarms due to group error ([Error 25] UnknownMemberIdError: nce_alarms), will rejoin
OffsetCommit failed for group nce_alarms due to group error ([Error 25] UnknownMemberIdError: nce_alarms), will rejoin
OffsetCommit failed for group nce_alarms due to group error ([Error 25] UnknownMemberIdError: nce_alarms), will rejoin
OffsetCommit failed for group nce_alarms due to group error ([Error 25] UnknownMemberIdError: nce_alarms), will rejoin
OffsetCommit failed for group nce_alarms due to group error ([Error 25] UnknownMemberIdError: nce_alarms), will rejoin
OffsetCommit failed for group nce_alarms due to group error ([Error 25] UnknownMemberIdError: nce_alarms), will rejoin
OffsetCommit failed for group nce_alarms due to group error ([Error 25] UnknownMemberIdError: nce_alarms), will rejoin
OffsetCommit failed for group nce_alarms due to group error ([Error 25] UnknownMemberIdError: nce_alarms), will rejoin
Auto offset commit failed: [Error 25] UnknownMemberIdError: nce_alarms
</code></pre>
<p><strong>Describe Topic</strong></p>
<pre><code>root@dev-s-kafka1:/opt/kafka/bin# ./kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic nce_alarms
Topic: nce_alarms TopicId: zDniSSlUTgS4bWyXKPg5Zw PartitionCount: 8 ReplicationFactor: 3 Configs: segment.bytes=1073741824
Topic: nce_alarms Partition: 0 Leader: 3 Replicas: 3,1,2 Isr: 3,2,1
Topic: nce_alarms Partition: 1 Leader: 1 Replicas: 1,2,3 Isr: 3,2,1
Topic: nce_alarms Partition: 2 Leader: 2 Replicas: 2,3,1 Isr: 2,3,1
Topic: nce_alarms Partition: 3 Leader: 3 Replicas: 3,2,1 Isr: 3,2,1
Topic: nce_alarms Partition: 4 Leader: 1 Replicas: 1,3,2 Isr: 3,2,1
Topic: nce_alarms Partition: 5 Leader: 2 Replicas: 2,1,3 Isr: 2,3,1
Topic: nce_alarms Partition: 6 Leader: 3 Replicas: 3,1,2 Isr: 3,2,1
Topic: nce_alarms Partition: 7 Leader: 1 Replicas: 1,2,3 Isr: 2,3,1
</code></pre>
<p><strong>Environment</strong></p>
<ul>
<li>aiokafka version 0.8.0</li>
<li>kafka-python version 2.0.2:</li>
<li>Kafka Broker version 3.0.0:</li>
<li>Python version 3.9.16</li>
</ul>
<p>If more information is needed please let me know; unfortunately I have only recently started working on this project, but I can ask my colleagues.</p>
<p>Thank you.</p>
|
<python><apache-kafka><aiokafka>
|
2023-04-03 08:04:01
| 0
| 460
|
Plaoo
|
75,917,020
| 3,650,477
|
Doxygen for Python produces bad formatting since an update
|
<p>I have been using doxygen for a while to document my project. However, on a new computer with a newer version of doxygen (1.9.1) it produces bad formatting for function docstrings.</p>
<p>I attach an image that summarises the issue very well.</p>
<p><a href="https://i.sstatic.net/j9CO6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j9CO6.png" alt="enter image description here" /></a></p>
<p>The documentation right after the first line is included within a code block, not as regular text. This also breaks the special doxygen keywords. This did not happen in previous versions of doxygen; it only happens when I run doxygen on the new computer. I have not found any change in the doxygen documentation that explains that this is the (new) expected behaviour or how to avoid it.</p>
<p>Am I the only one having this issue?</p>
<p>PS: The documentation is like this:</p>
<pre class="lang-py prettyprint-override"><code> """!Figure with the delays between source images and the composites.
@param output_img name of the output file
@param df_composites DataFrame with the composites database
"""
</code></pre>
|
<python><doxygen><docstring>
|
2023-04-03 07:36:58
| 1
| 2,729
|
Pythonist
|
75,916,896
| 5,025,009
|
Trying to understand the order of the legend labels when using pandas and matplotlib.pyplot
|
<p>I am trying to understand why the order of the labels behaves like this in the following case.</p>
<p>I first plot the pandas dataframe and then I plot a horizontal line. I then change the legend labels and create them in the order (pandas column names, line label i.e. <code>['case0', 'case1', 'case2'] + ['line label']</code>). Why is the final result altered?</p>
<p>The only way to get the correct order is to use <code>ax.legend(['line label']+['case0', 'case1', 'case2'], fontsize=6, loc='upper left')</code>, but this is counterintuitive.</p>
<pre><code>import numpy as np, pandas as pd
import matplotlib.pyplot as plt
data = np.random.rand(5,3)
plt.figure(figsize=figsize, dpi=120)
ax=pd.DataFrame(data).plot.bar(color=pal)
ax.axhline(y=0.4, lw=1, ls='--', c='k', label='line')
ax.legend(['case0', 'case1', 'case2'] + ['line label'], fontsize=6, loc='upper left')
</code></pre>
<p><a href="https://i.sstatic.net/HcFkx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HcFkx.png" alt="enter image description here" /></a></p>
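A hedged way to sidestep the ordering question entirely (with stand-in data, since <code>figsize</code> and <code>pal</code> are not shown): pass handle/label pairs from <code>get_legend_handles_labels()</code> instead of a bare list of strings, so each label stays attached to its own artist regardless of the order matplotlib collects them in.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

data = np.random.rand(5, 3)
fig, ax = plt.subplots(dpi=120)
# Stand-in for the pandas bar plot: one bar series per column
for i in range(data.shape[1]):
    ax.bar(np.arange(5) + i * 0.25, data[:, i], width=0.25, label=f"case{i}")
ax.axhline(y=0.4, lw=1, ls="--", c="k", label="line label")

# Each label is paired with its handle, whatever the internal order is
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels, fontsize=6, loc="upper left")
```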
|
<python><pandas><matplotlib>
|
2023-04-03 07:21:14
| 1
| 33,417
|
seralouk
|
75,916,837
| 869,402
|
Poetry include additional data files in wheel
|
<p>I have a simple python package, let's call it <code>my_package</code>.</p>
<p>Its files are located in <code>src/python/my_package</code>.
In addition, there is a <code>data</code> folder in the repository root, which should be included in the resulting python wheel within the <code>my_package</code>.</p>
<pre><code>.
├── src
│   └── python
│       └── my_package
├── data
│   └── stuff.json
├── pyproject.toml
</code></pre>
<p>I did not find any way to configure poetry so that it includes the additional <code>data</code> folder in the correct way.</p>
<p>Here is my <code>pyproject.toml</code></p>
<pre><code>[tool.poetry]
name = "my-package"
version = "2.10.0"
packages = [
{ include = "my_package", from = "src/python" }
]
# This does not work! It puts the `data` folder into site-packages/data instead of site-packages/my_package/data
include = [
{ path = "data", format = ["sdist", "wheel"] }
]
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p>I also found the following solution, using a pre-build script:
<a href="https://github.com/python-poetry/poetry/issues/5539#issuecomment-1126818974" rel="noreferrer">https://github.com/python-poetry/poetry/issues/5539#issuecomment-1126818974</a></p>
<p>Problem: it changes the wheel that it is not pure anymore, but depends on CPython.</p>
<p>Also tried with symlink, but that does not work: <a href="https://stackoverflow.com/questions/65670606/how-to-include-symlinks-and-the-linked-file-in-a-python-wheel-using-poetry">How to include symlinks and the linked file in a python wheel using Poetry?</a></p>
<p><strong>Question:
What is the correct way to include additional resource files in a python wheel using poetry?</strong></p>
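One commonly suggested workaround (an assumption on my part, not verified against every poetry version): place the data inside the package directory itself, where poetry's package include picks it up and installs it as <code>site-packages/my_package/data</code>.

```toml
# pyproject.toml sketch — assumes data/ has been moved to
# src/python/my_package/data
[tool.poetry]
name = "my-package"
version = "2.10.0"
packages = [
    { include = "my_package", from = "src/python" }
]
# Only needed if the files are gitignored; paths are relative to the root
include = [
    { path = "src/python/my_package/data", format = ["sdist", "wheel"] }
]
```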
|
<python><python-poetry><python-wheel>
|
2023-04-03 07:10:54
| 1
| 6,896
|
Stefan Profanter
|
75,916,672
| 2,291,357
|
rails deployment fails on asset precompilation due to npm/python issue
|
<p>In the process of deploying to a Ubuntu 20.04 server</p>
<pre><code> 01 $HOME/.rbenv/bin/rbenv exec bundle exec rake assets:precompile
01 yarn install v1.22.19
01 [1/4] Resolving packages...
01 [2/4] Fetching packages...
01 [3/4] Linking dependencies...
01 warning " > webpack-dev-server@3.11.2" has unmet peer dependency "webpack@^4.0.0 || ^5.0.0".
01 warning "webpack-dev-server > webpack-dev-middleware@3.7.3" has unmet peer dependency "webpack@^4.0.0 || ^5.0.0".
01 [4/4] Building fresh packages...
01 error /home/deploy/mainapp/releases/20230403051422/node_modules/node-sass: Command failed.
[...]
01 gyp verb clean removing "build" directory
01 gyp verb command configure []
01 gyp verb check python checking for Python executable "python2" in the PATH
01 gyp verb `which` failed Error: not found: python2
01 gyp verb `which` failed at getNotFoundError (/home/deploy/mainapp/shared/node_modules/which/which.js:13:12)
...
01 gyp verb check python checking for Python executable "python" in the PATH
01 gyp verb `which` failed Error: not found: python
01 gyp verb `which` failed at getNotFoundError (/home/deploy/mainapp/shared/node_modules/which/which.js:13:12)
[...]
01 gyp ERR! stack Error: Can't find Python executable "python", you can set the PYTHON env variable.
01 gyp ERR! stack at PythonFinder.failNoPython (/home/deploy/mainapp/shared/node_modules/node-gyp/lib/configure.js:484:19)
[... concluding with]
01 gyp ERR! command "/usr/bin/node" "/home/deploy/mainapp/shared/node_modules/node-gyp/bin/node-gyp.js" "rebuild" "--verbose" "--libsass_ext=" "--libsass_cflags=" "--libsass_ldflags=" "--libsass_library="
01 gyp ERR! cwd /home/deploy/mainapp/shared/node_modules/node-sass
01 gyp ERR! node -v v18.15.0
01 gyp ERR! node-gyp -v v3.8.0
01 gyp ERR! not ok
</code></pre>
<p>Investigating: <code>/usr/bin</code> has two Python entries, <code>python3</code> and <code>python3.8</code>. Yet, the deploy seems to hunt for <code>python</code> or <code>python2</code>...</p>
<p>This did not occur on another deployment for the same app on a different machine, still with Ubuntu 20.04.</p>
<p>The intricacies of npm, python and webpack are confounding.</p>
<p>What needs to be done to get the asset precompilation to complete?</p>
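A hedged sketch of the usual workarounds (note the log shows node-gyp v3.8.0, which predates Python 3 support, so simply aliasing <code>python3</code> may not be enough):

```shell
# Option 1: give node-gyp 3.x the Python 2 it expects, e.g.
#   sudo apt-get install -y python2
# Option 2: point gyp explicitly at whatever interpreter is available
# via the PYTHON environment variable it mentions in the error:
export PYTHON="$(command -v python2 || command -v python3)"
echo "gyp will use: $PYTHON"
```

npm also honours a per-user setting, `npm config set python "$PYTHON"`; the longer-term fix is usually replacing `node-sass` (which pins the old node-gyp) with `sass`/`dart-sass`.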
|
<python><ruby-on-rails><npm><webpacker>
|
2023-04-03 06:47:14
| 1
| 6,329
|
Jerome
|
75,916,653
| 6,765,276
|
Convert MATLAB use of Probability Density Function (PDF) to Python
|
<p>I'm working to convert below MATLAB code to Python:</p>
<p><strong>MATLAB:</strong></p>
<pre><code>data = [ 44374 44034 44150 44142 44332 43986 44423 44346 44199 44129 44173 43981 44152 43797 44167 43944 44069 44327 44083 44473 44530 44361 44067 44154 44212 44537 44158 44428 43911 44397];
RMS = sqrt( data );
MA = movmean( RMS , 15 );
x = RMS - MA;
y = linspace( -10 , 10 , 21 );
f_x = pdf( x , y );
</code></pre>
<p>So far I have this <strong>Python</strong> code:</p>
<pre><code>from decimal import *
import numpy as np
from scipy.stats import gaussian_kde
data = [44374, 44034, 44150, 44142, 44332, 43986, 44423, 44346, 44199, 44129, 44173, 43981, 44152, 43797, 44167, 43944, 44069, 44327, 44083, 44473, 44530, 44361, 44067, 44154, 44212, 44537, 44158, 44428, 43911, 44397]
getcontext().prec = 18 # Change the precision
RMS = [Decimal(x).sqrt() for x in data]
RMS = [float(x) for x in RMS]
RMS = np.array(RMS)
MA = movmean(RMS, 15)
x = np.subtract(RMS, MA)
y = np.linspace(-10, 10, 21)
kde = gaussian_kde(x)
f_x = kde.pdf(y)
</code></pre>
<p>I have implemented the <code>movmean</code> function to match MATLAB's. Comparing both codes, I have ensured that the <code>x</code> and <code>y</code> values are the same in both <code>MATLAB</code> and <code>Python</code>.</p>
<p>Unfortunately the results after running the <code>pdf</code> <a href="https://en.wikipedia.org/wiki/Probability_density_function" rel="nofollow noreferrer">Probability density function</a> are not equal...</p>
<p><strong>Results in MATLAB:</strong></p>
<blockquote>
<p>0, 0, 0, 0, 0, 0, 0, 0, 0, 0.0666666666666667, 0.800000000000000,
0.133333333333333, 0, 0, 0, 0, 0, 0, 0, 0, 0</p>
</blockquote>
<p><strong>Results in Python:</strong></p>
<blockquote>
<p>0, 0, 0, 0, 0, 0, 0, 0, 0.0000005849, 0.1135087105, 0.6936653409,
0.0836530196, 0.0000000072, 0, 0, 0, 0, 0, 0, 0, 0</p>
</blockquote>
<p>I tried various combinations of <code>bw_method</code>s in <code>gaussian_kde</code> function without luck.</p>
<p><strong>Edit:</strong></p>
<p>Per request in the comment here is my <code>movmean</code> function for Python code:</p>
<pre><code>from numpy.lib.stride_tricks import sliding_window_view
def movmean_edge_cases(arr, window_size):
# adding NaN for edges
padding = (window_size - 1) // 2
arr = np.pad(arr, (padding, padding), 'constant')
# convert zeros to NaN
arr[arr == 0] = np.nan
sliding_windows = sliding_window_view(arr, window_shape=window_size)
left_edge = np.nanmean(sliding_windows[:padding], axis=1)
right_edge = np.nanmean(sliding_windows[-padding:], axis=1)
return left_edge, right_edge
def movmean(arr, window_size):
# Create a windowed filter with equal weights
weights = np.ones(window_size) / window_size
# Compute the convolution between the signal and the filter
mean_values = np.convolve(arr, weights, mode='valid')
# Compute the edge values
left_edge, right_edge = movmean_edge_cases(arr, window_size)
# add edge values to the original list
mean_values = np.insert(mean_values, 0, left_edge)
mean_values = np.append(mean_values, right_edge)
return mean_values
</code></pre>
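One hedged observation: the MATLAB output (0.0667, 0.8, 0.1333 = 1/15, 12/15, 2/15 over 30 points binned pairwise) looks like a normalized histogram over the integer grid rather than a Gaussian KDE, so <code>np.histogram</code> may be the closer translation than <code>gaussian_kde</code>. A sketch with stand-in data, since the exact residuals depend on <code>movmean</code>:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 30)   # stand-in for the RMS - MA residuals
y = np.linspace(-10, 10, 21)   # the same grid as in the question

# Bin edges halfway between grid points, so each bin is centred on a y value
edges = np.concatenate(([y[0] - 0.5], y + 0.5))
counts, _ = np.histogram(x, bins=edges)
f_x = counts / counts.sum()    # fractions summing to 1, like the MATLAB output
```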
|
<python><numpy><matlab>
|
2023-04-03 06:44:20
| 0
| 569
|
DavidDr90
|
75,916,373
| 3,810,748
|
Do I need to manually disconnect my Google Drive credentials from Google Colab?
|
<p>I am new to using Google Colab and I am currently mounting data from Google Drive like this:</p>
<pre><code>from google.colab import drive
drive.mount('/content/drive')
</code></pre>
<p>Do I need to be worried about potential security issues? Do I need to manually clear out sessions?</p>
<p>Or does Google Colab automatically know to scrub credentials when we terminate a remote connection?</p>
|
<python><security><google-colaboratory>
|
2023-04-03 05:54:53
| 1
| 6,155
|
AlanSTACK
|
75,916,309
| 10,621,921
|
Using tf.data.Dataset prefetch makes model performance overfit?
|
<p>I'm trying to train a simple LRCN model on a sequential image dataset in Tensorflow 2.5.0.
Training performance was fine: both training and validation accuracy rose to 0.9x within the first 5 epochs, and training and validation loss kept decreasing during training.</p>
<p>Then, I tried to optimize the data pipeline using prefetch().
The dataset consists of sequential images (.png) whose titles and information are listed in a .csv file, so I made the data generator like below:</p>
<pre><code>def setData(data):
X, y = [], []
name = data.loc['fileName'].values.tolist()[0]
info1 = data.loc['info1'].values.tolist()[0]
info2 = data.loc['info2'].values.tolist()[0]
info3 = data.loc['info3'].values.tolist()[0]
if os.path.isfile(filepath + name) == False:
print('No file for img')
return
try:
        img = np.load(filepath + name)
except:
print(name)
if info1 in info_list:
X.append(img)
if info2 == 'True':
y.append(0)
else:
y.append(1)
X = np.array(X)
X = np.reshape(X, (3, 128, 128, 1)).astype(np.float64)
y = np_utils.to_categorical(y, num_classes = 2)
y = np.reshape(y, (2)).astype(np.float64)
return X, y
</code></pre>
<p>And I added the data generator load function like this :</p>
<pre><code>def generatedata(i):
i = i.numpy()
X_batch, y_batch = setData(pd.DataFrame(traindata.iloc[i]))
return X_batch, y_batch
</code></pre>
<p>Finally, I prefetched dataset using map</p>
<pre><code>z = list(range(len(traindata[])))
trainDataset = tf.data.Dataset.from_generator(lambda: z, tf.uint8)
trainDataset = trainDataset.map(lambda i: tf.py_function(func = generatedata,
inp = [i],
Tout = [tf.float32, tf.float32]),
num_parallel_calls = tf.data.AUTOTUNE)
</code></pre>
<p>After I applied these steps, training accuracy reaches 0.9x in the first epoch and 1.0 within the first 3-5 epochs, while validation accuracy stays at around 0.6x and validation loss keeps growing past x.x.</p>
<p>I believe that prefetch only changes the data pipeline and should not affect model performance, so I'm not sure what caused these overfitting-like results.
I followed every step of the prefetch setup described in the Tensorflow documentation, though, since I'm not very familiar with Tensorflow, there might be some mistakes.</p>
<p>Is there any line that I missed? Any opinion would be really appreciated.
Thanks in advance.</p>
|
<python><tensorflow><machine-learning><deep-learning><prefetch>
|
2023-04-03 05:37:43
| 1
| 513
|
Young.J
|
75,916,234
| 3,810,748
|
Is there anything else I need to do besides calling model.to(device) for HuggingFace GPU?
|
<p>I am new to using HuggingFace and the PyTorch ML ecosystem. I am trying to use a GPU device instead of the default CPU.</p>
<p>Can someone tell me if the following script is correct? The only thing I am calling is <code>lmhead_model.to(device)</code>.</p>
<p>I am not sure whether or not I need to move the <code>tokenizer</code>, <code>train_dataset</code>, <code>data_collator</code>, or anything else. Any insight would be appreciated for this beginner.</p>
<pre><code>from transformers import GPT2Tokenizer
from transformers import GPT2LMHeadModel
from transformers import TextDataset
from transformers import DataCollatorForLanguageModeling
from transformers import Trainer
from transformers import TrainingArguments
# Load the GPT-2 tokenizer and LM head model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
lmhead_model = GPT2LMHeadModel.from_pretrained('gpt2')
# Load the training dataset and divide blocksize
train_dataset = TextDataset(
tokenizer=tokenizer,
file_path='tinyshakespeare.txt',
block_size=64
)
# Create a data collator for preprocessing batches
data_collator = DataCollatorForLanguageModeling(
tokenizer=tokenizer,
mlm=False
)
# Optionally configure the model and pytorch w/ gpu
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
lmhead_model = lmhead_model.to(device)
if device.type == 'cuda':
print('We are currently using a GPU')
else:
print('We are currently using a CPU')
# Defining the training arguments
training_args = TrainingArguments(
output_dir='tinyshakespeare', # output directory for checkpoints
overwrite_output_dir=True, # overwrite any existing content
per_device_train_batch_size=32, # sample batch size for training
dataloader_num_workers=0, # number of workers for dataloader
max_steps=1000, # maximum number of training steps
save_steps=1000, # after # steps checkpoints are saved
save_total_limit=1, # maximum number of checkpoints to save
prediction_loss_only=True, # only compute loss during prediction
learning_rate=3e-4, # learning rate
fp16=True if device.type == 'cuda' else False, # use 16-bit (mixed) precision
optim='adamw_torch', # define the optimizer for training
lr_scheduler_type='linear', # define the learning rate scheduler
logging_steps=10, # after # steps logs are printed
report_to='none', # report to wandb, tensorboard, etc.
)
# Performing the ML training loop
if __name__ == '__main__':
trainer = Trainer(
model=lmhead_model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
)
result = trainer.train()
print_summary(result)
</code></pre>
|
<python><pytorch><huggingface>
|
2023-04-03 05:22:10
| 1
| 6,155
|
AlanSTACK
|
75,916,129
| 11,774,808
|
Why am I unable to create Boto3 S3 client instance in Airflow 2.5 (task gets stuck without error)? How to debug it?
|
<p>When I try to create a Boto3 client instance (e.g., <code>s3 = boto3.client('s3')</code>), the Airflow task gets stuck. It doesn't end or get timed out or even throw any error.</p>
<p>Questions:</p>
<ol>
<li>How do I debug this?</li>
<li>Which environment variables / files should I ensure exist?</li>
</ol>
<p>More info:</p>
<ul>
<li>The task doesn't remain in scheduled / queued state. It is in running state (green bar).</li>
<li>This is only happening in latest version of Airflow (2.5). I ran similar code in an older version (2.1.2) which works fine.</li>
</ul>
<p>Task instance log:</p>
<pre><code>*** Reading local file: ...
...
--------------------------------------------------------------------------------
[2023-04-03, 10:13:15 IST] {taskinstance.py:1283} INFO - Starting attempt 1 of 1
...
[2023-04-03, 10:13:15 IST] {task_command.py:388} INFO - Running <TaskInstance: check_boto3_connection.check manual__2023-04-03T04:43:09.385235+00:00 [running]> on ...
[2023-04-03, 10:13:15 IST] {taskinstance.py:1509} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=***
AIRFLOW_CTX_DAG_ID=check_boto3_connection
AIRFLOW_CTX_TASK_ID=check
...
AIRFLOW_CTX_TRY_NUMBER=1
...
[2023-04-03, 10:13:15 IST] {logging_mixin.py:137} INFO - Creating Boto3 client...
</code></pre>
<p>The full DAG:</p>
<pre class="lang-py prettyprint-override"><code>import datetime
import boto3
import botocore.client
from airflow.decorators import dag, task
@dag(schedule=None, start_date=datetime.datetime(2023, 4, 1), catchup=False)
def check_boto3_connection():
check()
@task
def check():
print('Creating Boto3 client...')
s3 = boto3.client(
's3',
config=botocore.client.Config(
connect_timeout=5,
retries={'max_attempts': 0},
),
)
print('S3 client:', s3)
check_boto3_connection()
</code></pre>
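As a general (not Airflow-specific) aid for question 1, the standard-library <code>faulthandler</code> module can show where a stuck process is blocked. This is a sketch to drop at the top of the task; the 60-second timeout is an arbitrary choice:

```python
import faulthandler
import sys

# If the process is still running after 60s, dump every thread's stack
# to stderr (which ends up in the Airflow task log); exit=False keeps the
# process alive, repeat=True re-dumps every 60s so you can watch where it
# stays blocked.
faulthandler.dump_traceback_later(60, repeat=True, file=sys.stderr, exit=False)
```

In practice the dump often shows boto3 blocked resolving credentials or waiting on a network call, which narrows the search considerably.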
|
<python><macos><airflow><boto3>
|
2023-04-03 04:53:09
| 1
| 1,142
|
m01010011
|
75,915,991
| 17,698,538
|
Mock a library method response using pytest
|
<p>I am new to pytest and want to create a test for a method that is called inside a library. The following is my sample use case.</p>
<p>I have a python library called <strong>core</strong>:</p>
<ul>
<li>Inside <strong>core</strong> there is a folder called <strong>p_core</strong>.</li>
<li>Inside <strong>p_core</strong> there is python file <strong>common.py</strong></li>
<li>In this <strong>common.py</strong> file, there is a class <strong>CommonBase</strong>; this class has the following method:</li>
</ul>
<pre><code>def get_timestamp(input_example):
# I have an import statement in this file that is, import time
timestamp = time.time()
return f'{input_example}-{timestamp}'
</code></pre>
<p>So I want to mock <code>time.time()</code>, because I don't want to get a different timestamp every time.</p>
<p>Following is my pytest code:</p>
<pre><code>import time
from core.p_core.common import CommonBase
def test_get_timestamp(mocker):
mock_now = mocker.patch("core.p_core.common.CommonBase.time.time",
return_value=1680087494.2400253)
assert CommonBase.get_timestamp('TEST') == "TEST1680087494.2400253"
</code></pre>
<p>But I am getting an error <strong>ModuleNotFoundError: No module named core.p_core.common.CommonBase</strong></p>
<p>Please ignore my <code>get_timestamp</code> method's logic; this is just an example.</p>
<p>Thank you.</p>
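For what it's worth, the patch target must be a module path where the name is looked up, not an attribute path through the class. Assuming <code>common.py</code> does <code>import time</code> at module level, a target such as <code>mocker.patch('core.p_core.common.time')</code> (or patching <code>time.time</code> on the <code>time</code> module itself, which works because the module object is shared) is the usual pattern. A dependency-free sketch of the idea, with <code>get_timestamp</code> as a stand-in for the real method and 1000.0 as an arbitrary fixed value:

```python
import time
from unittest import mock

def get_timestamp(input_example):
    # stand-in for CommonBase.get_timestamp from the question
    timestamp = time.time()
    return f'{input_example}-{timestamp}'

# Patch time.time where it is looked up; any code that did `import time`
# will now see the fixed return value.
with mock.patch('time.time', return_value=1000.0):
    assert get_timestamp('TEST') == 'TEST-1000.0'
```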
|
<python><python-3.x><pytest><pytest-mock>
|
2023-04-03 04:12:58
| 1
| 367
|
randomDev
|
75,915,813
| 13,431,295
|
Upload KML data from Google Earth to django Models
|
<p>I have a model</p>
<pre class="lang-py prettyprint-override"><code>class Area(models.Model):
name = models.CharField(max_length = 255)
point = postgis.PointField(null = True, blank = True)
area = postgis.MultiPolygonField(null = True, blank = True)
</code></pre>
<p>I have a KML file which consists of different areas, and I want to upload it into this model so that later I can add a foreign key to the area in another table.</p>
<p>How can I do that?</p>
|
<python><django><kml><geodjango>
|
2023-04-03 03:18:01
| 2
| 965
|
Shaheem PP
|
75,915,809
| 7,421,447
|
Accuracy value more than 1 with nn.BCEWithLogitsLoss() loss function pytorch in Binary Classifier
|
<p>I am trying to use <code>nn.BCEWithLogitsLoss()</code> for a model which <a href="https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html" rel="nofollow noreferrer">initially</a> used <code>nn.CrossEntropyLoss()</code>. However, after making some changes to the training function to accommodate the <code>nn.BCEWithLogitsLoss()</code> loss function, the model accuracy values are shown as more than 1. Please find the code below.</p>
<pre><code># Data augmentation and normalization for training
# Just normalization for validation
data_transforms = {
'train': transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'val': transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
data_dir = '/kaggle/input/catsndogsorg/hymenoptera_data'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
data_transforms[x])
for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
shuffle=True, num_workers=4)
for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
#############
def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
since = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in range(num_epochs):
print(f'Epoch {epoch}/{num_epochs - 1}')
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'val']:
if phase == 'train':
model.train() # Set model to training mode
else:
model.eval() # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for inputs, labels in dataloaders[phase]:
inputs = inputs.to(device)
labels = labels.to(device).unsqueeze(1)
# zero the parameter gradients
optimizer.zero_grad()
# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
#print(outputs, labels)
loss = criterion(outputs, labels.float())
print(loss)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
if phase == 'train':
scheduler.step()
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects.double() / dataset_sizes[phase]
print(f'{phase} Loss: {epoch_loss:.4f} Acc: {epoch_acc:.4f}')
# deep copy the model
if phase == 'val' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
print()
time_elapsed = time.time() - since
print(f'Training complete in {time_elapsed // 60:.0f}m {time_elapsed % 60:.0f}s')
print(f'Best val Acc: {best_acc:4f}')
# load best model weights
model.load_state_dict(best_model_wts)
return model
</code></pre>
<p>EDIT:Model pipeline</p>
<pre><code>model_ft = models.resnet18(weights='ResNet18_Weights.DEFAULT')
num_ftrs = model_ft.fc.in_features
# Here the size of each output sample is set to 2.
# Alternatively, it can be generalized to nn.Linear(num_ftrs, len(class_names)).
model_ft.fc = nn.Linear(num_ftrs, 1)
model_ft = model_ft.to(device)
criterion = nn.BCEWithLogitsLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
model_ft = train_model(model_ft, criterion, optimizer_ft,
exp_lr_scheduler,num_epochs=25)
</code></pre>
<p>The outputs of training loop:</p>
<pre><code>outputs shape: torch.Size([4, 1])
labels shape: torch.Size([4, 1])
logits: tensor(0.3511, grad_fn=<BinaryCrossEntropyWithLogitsBackward0>)
train Loss: 1.0000 Acc: 2.0164
val Loss: 1.0000 Acc: 1.8105
</code></pre>
<p>Would anyone be able to help me with this, please?</p>
<p>Thanks & Best Regards
AMJS</p>
|
<python><machine-learning><pytorch><loss-function><transfer-learning>
|
2023-04-03 03:17:04
| 1
| 713
|
Alain Michael Janith Schroter
|
75,915,644
| 4,928,920
|
Call decorator only once in a nested class function
|
<p>My class's "public" functions have a decorator attached. In one of these functions, I call another class function that has the same decorator. But given that the decorator was already invoked, I'd like to skip the nested decorator call. Is there a way to achieve this?</p>
<p>Background: I have some virtual "circuit breakers" that I need to turn on/off between the calls. But if I call a function that has already touched the circuit breakers once, I'd want to avoid doing so in a nested call.</p>
<pre><code>from functools import wraps

class foo:
def my_decorator(func):
@wraps(func)
def _wrapper(self, *args, **kwargs):
print("before")
val = func(self, *args, **kwargs)
print("after")
return val
return _wrapper
@my_decorator
def bar(self):
return 0
@my_decorator
def baz(self):
self.bar()
return 1
</code></pre>
<p>In this example, I see:</p>
<pre><code>f.baz()
before
before
after
after
</code></pre>
<p>How do I modify it so I only see <code>before</code> and <code>after</code> once, like <code>bar</code> does:</p>
<pre><code>f.bar()
before
after
</code></pre>
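One common pattern, sketched below under the assumption of single-threaded use, is a re-entrancy flag on the instance: the wrapper runs the before/after logic only when it is the outermost decorated call. The decorator is moved to module level here only to keep the sketch short; for threaded code the flag would need to live in a <code>threading.local</code> instead.

```python
from functools import wraps

def my_decorator(func):
    @wraps(func)
    def _wrapper(self, *args, **kwargs):
        # Skip the breaker logic if a decorated call is already in progress
        # higher up the call stack on this instance.
        if getattr(self, '_in_decorated_call', False):
            return func(self, *args, **kwargs)
        self._in_decorated_call = True
        try:
            print("before")
            val = func(self, *args, **kwargs)
            print("after")
            return val
        finally:
            self._in_decorated_call = False
    return _wrapper

class foo:
    @my_decorator
    def bar(self):
        return 0

    @my_decorator
    def baz(self):
        self.bar()
        return 1

foo().baz()   # prints "before" and "after" exactly once
```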
|
<python><python-3.x><python-decorators>
|
2023-04-03 02:23:28
| 3
| 1,228
|
rookie
|
75,915,537
| 13,854,064
|
Using BeautifulSoup in a BeeWare app gives `ModuleNotFoundError: No module named 'bs4'`
|
<p>I am trying to import <code>beautifulsoup4</code> in a BeeWare app (with its own virtual environment) using the command:</p>
<pre><code>from bs4 import BeautifulSoup
</code></pre>
<p>But I get a <code>ModuleNotFoundError: No module named 'bs4'</code> even though I have installed <code>beautifulsoup4</code> in my virtual environment and added it to the dependencies in my <code>pyproject.toml</code> file. What's the issue then? Solutions I have found for regular virtual environments do not seem to apply to BeeWare apps. The issue only arises when I run the <code>briefcase run</code> command, but not with <code>briefcase dev</code>.</p>
|
<python><beautifulsoup><python-import><python-venv><beeware>
|
2023-04-03 01:45:20
| 1
| 434
|
gimi
|
75,915,513
| 2,392,151
|
parquet time stamp overflow with fastparquet/pyarrow
|
<p>I have a Parquet file I am reading from S3 using fastparquet/pandas. The Parquet file has a column with the date 2022-10-06 00:00:00, and I see it is being wrapped to 1970-01-20 06:30:14.400. Please see the code, the errors, and a screenshot of the Parquet file below. I am not sure why this is happening; 2022-09-01 00:00:00 seems to be fine.
If I choose "pyarrow" as the engine, it fails with an exception:</p>
<pre><code>pyarrow error:
pyarrow.lib.ArrowInvalid: Casting from timestamp[us] to timestamp[ns] would result in out of bounds timestamp: 101999952000000000
</code></pre>
<p>Please advise.</p>
<p>fastparquet error:</p>
<pre><code>OverflowError: value too large
Exception ignored in: 'fastparquet.cencoding.time_shift'
OverflowError: value too large
OverflowError: value too large
</code></pre>
<p>code:</p>
<pre class="lang-py prettyprint-override"><code>s3_client = boto3.client('s3')
obj = s3_client.get_object(Bucket="blah", Key="blah1")
df=pd.read_parquet(io.BytesIO(obj['Body'].read()),engine="fastparquet")
</code></pre>
|
<python><pandas><parquet><pyarrow><fastparquet>
|
2023-04-03 01:36:57
| 1
| 363
|
Bill
|
75,915,279
| 9,354,344
|
How to subtract raster1 and raster2 in rasterio
|
<p>I have tried to subtract <code>raster2.tif</code> from <code>raster1.tif</code> using rasterio in Python. The <code>raster1.tif</code> file is larger and completely overlaps <code>raster2.tif</code>.</p>
<p>I need to compute <code>raster1.tif - raster2.tif</code> in such a way that the output raster extent is equal to that of <code>raster1.tif</code>.</p>
<p>I have written the following code,</p>
<pre class="lang-py prettyprint-override"><code>import rasterio
with rasterio.open('raster1.tif') as src1, rasterio.open('raster2.tif') as src2:
data1 = src1.read(1)
data2 = src2.read(1)
data = data1 - data2
with rasterio.open("output.tif", 'w', **src1.meta) as dst:
dst.write(data,1)
</code></pre>
<p>It produces the following error,</p>
<pre><code>ValueError: operands could not be broadcast together with shapes (790,1554) (57,67)
</code></pre>
<p>I know this error is generated since my two rasters have different bounds. Can anyone help me fix this issue? Is there any way to reshape <code>data2</code> so I can subtract easily?</p>
<blockquote>
<p><strong>PS:</strong> I have reproduced the required thing with ArcGIS Pro using a raster calculator with the condition below,</p>
</blockquote>
<pre class="lang-sql prettyprint-override"><code>Con(IsNull('raster2.tif'), 'raster1.tif', 'raster1.tif' - 'raster2.tif')
</code></pre>
<blockquote>
<p>I am looking for the equivalent code in <code>rasterio</code>.</p>
</blockquote>
<p><a href="https://i.sstatic.net/vnFon.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vnFon.png" alt="Image showing raster1 and raster2" /></a></p>
|
<python><rasterio>
|
2023-04-03 00:14:51
| 1
| 2,397
|
Tek Kshetri
|
75,915,102
| 2,247,459
|
Does Python's 'any' function require evaluating all boolean values?
|
<p>Consider the following python instruction</p>
<pre><code>any([condition(element) for element in some_list])
</code></pre>
<p>Is it optimised to stop once <code>condition(element)</code> is <code>True</code>, or <code>any</code> is applied after the whole list of boolean values is built?</p>
<p>In case the whole list <em>is</em> created, is there a way to avoid it, apart from coding an explicit loop?</p>
<hr />
<p>Let me clarify the question, since someone misunderstood it.</p>
<p>Scenario 1. <code>condition(element)</code> is evaluated for <strong>each</strong> element of <code>some_list</code> and list of boolean values (containing True or False for corresponding element of some_list) is created. Only after that <em>any</em> is applied to the created list of booleans.</p>
<p>Scenario 2. Instead of evaluating <code>condition(element)</code> for each element, the lookup stops immediately after <code>condition(element)</code> holds for certain element of <code>some_list</code></p>
<p>Is Python optimised to follow Scenario 2 rather than Scenario 1?</p>
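For reference, this is easy to check empirically: with a list comprehension the predicate runs for every element before <code>any()</code> sees anything (Scenario 1), while a generator expression lets <code>any()</code> short-circuit (Scenario 2):

```python
calls = []

def condition(element):
    calls.append(element)      # record every evaluation
    return element > 1

some_list = [0, 1, 2, 3, 4]

any([condition(e) for e in some_list])   # list is fully built first
print(calls)                             # [0, 1, 2, 3, 4]

calls.clear()
any(condition(e) for e in some_list)     # stops at the first True
print(calls)                             # [0, 1, 2]
```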
|
<python><any>
|
2023-04-02 23:19:56
| 1
| 3,984
|
cyanide
|
75,915,020
| 7,327,257
|
Fill array with multiple arrays and multiple masks
|
<p>I'm working with a classifier's predictions, on a dataframe from which I removed NaN values before feeding the algorithm. After the prediction, I want to create a new array where, if there was a valid point in the original dataframe, it takes the prediction; otherwise things get complicated: if there was a NaN value in the dataframe, it has to check two other dataframes and take the point from the one that has a valid value.</p>
<p>An example of what I need:</p>
<pre><code>z = np.array([2, 4, 5, 7])
x = np.array([3, 6, 9, 8])
pred_value = 11
mask_z = z % 2 == 0  # array([True, True, False, False])
mask_x = x % 2 == 0  # array([False, True, False, True])
mask_pred = np.array([True, True, False, True])
</code></pre>
<p>Now I want to create a new array and fill it taking value from three different arrays. Lets say that, where there is a <code>False</code> in <code>mask_pred</code>, I want to take <code>pred_value</code>. But if there is a <code>True</code> in <code>mask_pred</code> I need to check <code>mask_z</code> and <code>mask_x</code> so that if there is a <code>True</code> in <code>mask_z</code> but a <code>False</code> in <code>mask_x</code>, it will take <code>x</code> value (and the other way around). In case both <code>mask_z</code> and <code>mask_x</code> are <code>True</code>, then it will take a NaN value:</p>
<pre><code>y = np.empty(mask_pred.shape)
y[~mask_pred] = pred_value
# Part to fix:
y[mask_pred] = if mask_z == True and mask_x == False then take x value;
if mask_z == False and mask_x == True then take z value;
if mask_z == True and mask_x == True then fill with np.NaN
print(y)
array([3, NaN, 11, 7])
</code></pre>
<p>I need to figure out how to make the last part working with masks and not loops, in an efficient way for large arrays.</p>
<p>Thanks in advance.</p>
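For reference, the three-branch rule on the small example above can be expressed fully vectorized with <code>np.select</code>; whether this maps cleanly onto the real classifier dataframes is an assumption:

```python
import numpy as np

z = np.array([2, 4, 5, 7])
x = np.array([3, 6, 9, 8])
pred_value = 11
mask_z = z % 2 == 0
mask_x = x % 2 == 0
mask_pred = np.array([True, True, False, True])

y = np.where(
    ~mask_pred,
    pred_value,                                # False in mask_pred -> prediction
    np.select(
        [mask_z & ~mask_x, ~mask_z & mask_x],  # exactly one of the masks is True
        [x, z],
        default=np.nan,                        # both True -> NaN
    ),
)
print(y)   # matches the expected 3, NaN, 11, 7
```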
|
<python><arrays><numpy>
|
2023-04-02 22:57:11
| 1
| 357
|
M. Merida-Floriano
|
75,915,006
| 2,817,520
|
Updating a WSGI application without rebooting the server
|
<p>Is is possible to update a WSGI application without the need to reboot the server? I mean you can do that with WordPress. In WordPress you can install plugins, update them or even update the whole WordPress. How? Is it because it is written in PHP?</p>
|
<python><wsgi>
|
2023-04-02 22:53:24
| 1
| 860
|
Dante
|
75,914,894
| 20,276,285
|
How to convert a hexstring to shellcode format?
|
<p>I have a bytes key defined as:</p>
<blockquote>
<p>KEY = b'\xb5\x89\xd5\x03\x03\x96`\x9dq\xa7\x81\xed\xb2gYR'</p>
</blockquote>
<p>I want this to be formatted like shellcode, i.e two hexa characters like: \x41\x42\x43...</p>
<p>So I tried to do it like this:</p>
<pre><code>KEY = b'\xb5\x89\xd5\x03\x03\x96`\x9dq\xa7\x81\xed\xb2gYR'
hexkey = KEY.hex()
l = []
for i in range(0, len(hexkey) - 2, 2):
b = '\\x' + hexkey[i] + hexkey[i+1]
l.append(b)
</code></pre>
<p>but this isn't escaping the backslash for some reason; I get this output:</p>
<blockquote>
<p>['\\xb5', '\\x89', '\\xd5', '\\x03', '\\x03', '\\x96', '\\x60',
'\\x9d', '\\x71', '\\xa7', '\\x81', '\\xed', '\\xb2', '\\x67',
'\\x59']</p>
</blockquote>
<p>What am I doing wrong? Is there a better way to do this?</p>
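Two things seem to be going on here. The doubled backslashes are only the repr that the list display shows; each element really contains a single backslash, as <code>print()</code> on one element confirms. Separately, <code>range(0, len(hexkey) - 2, 2)</code> stops one pair early, which is why <code>\x52</code> (the last byte) is missing from the output. Iterating over the bytes object directly sidesteps both; a sketch:

```python
KEY = b'\xb5\x89\xd5\x03\x03\x96`\x9dq\xa7\x81\xed\xb2gYR'

# Each item of a bytes object is an int 0-255; format it as \xNN directly.
shellcode = ''.join(f'\\x{b:02x}' for b in KEY)
print(shellcode)   # one \xNN escape per byte, none dropped
```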
|
<python><shellcode>
|
2023-04-02 22:20:47
| 1
| 308
|
IRP_HANDLER
|
75,914,807
| 7,386,830
|
Loop through each row and build regression model & plot (n-1 row step through)
|
<p>I have a simple linear regression model below that has been fit to 10 rows of data.</p>
<pre><code>import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
# Generate 'random' data
np.random.seed(0)
X = 2.5 * np.random.randn(10) + 1.5
res = 0.5 * np.random.randn(10)
y = 2 + 0.3 * X + res
Name = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J']
# Create pandas dataframe to store our X and y values
df = pd.DataFrame(
{'Name': Name,
'X': X,
'y': y})
# Show the dataframe
df
import statsmodels.formula.api as smf
# Initialise and fit linear regression model using `statsmodels`
model = smf.ols('X ~ y', data=df)
model = model.fit()
# Predict values
age_pred = model.predict()
# Plot regression line against actual data
plt.figure(figsize=(6, 4))
plt.plot(df['X'], df['y'], 'o') # scatter plot showing actual data
plt.plot(df['X'], age_pred, 'r', linewidth=2) # regression line
plt.show()
</code></pre>
<p>The dataframe consists data like the following:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>X</th>
<th>y</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>5.910131</td>
<td>3.845061</td>
</tr>
<tr>
<td>B</td>
<td>2.500393</td>
<td>3.477255</td>
</tr>
<tr>
<td>C</td>
<td>3.946845</td>
<td>3.564572</td>
</tr>
<tr>
<td>D</td>
<td>7.102233</td>
<td>4.191507</td>
</tr>
<tr>
<td>E</td>
<td>6.168895</td>
<td>4.072600</td>
</tr>
<tr>
<td>F</td>
<td>-0.943195</td>
<td>1.883879</td>
</tr>
<tr>
<td>G</td>
<td>3.875221</td>
<td>3.909606</td>
</tr>
<tr>
<td>H</td>
<td>1.121607</td>
<td>2.233903</td>
</tr>
<tr>
<td>I</td>
<td>1.241953</td>
<td>2.529120</td>
</tr>
<tr>
<td>J</td>
<td>2.526496</td>
<td>2.330901</td>
</tr>
</tbody>
</table>
</div>
<p>I would like to produce a regression model by looping through the table above, excluding one row of data at a time for each 'Name'. Each model should be built on n-1 rows.</p>
<p>So, for the first model and scatter plot, it should use all names except row A; for the second model and scatter plot, all data except row B, and so on.</p>
<p>How could this be implemented for the table above? How can I also produce a regression plot for each of the (n-1)-row regression models built automatically by the code?</p>
<p>On the resulting plots, can I include an annotation that says something like 'except A' (to indicate A has been excluded from the data used to build the model), followed by 'except B', then 'except C'?</p>
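A leave-one-out loop can be sketched as below. To keep the example dependency-light it fits the line with <code>numpy.polyfit</code> instead of statsmodels, and the data is a small stand-in for the question's dataframe; for the actual use, swap in <code>smf.ols('y ~ X', data=sub).fit()</code> and the real <code>df</code>:

```python
import numpy as np
import pandas as pd

# small stand-in for the question's dataframe
df = pd.DataFrame({'Name': list('ABCD'),
                   'X': [1.0, 2.0, 3.0, 4.0],
                   'y': [2.1, 4.2, 5.9, 8.1]})

fits = {}
for name in df['Name']:
    sub = df[df['Name'] != name]                  # all rows except `name`
    slope, intercept = np.polyfit(sub['X'], sub['y'], 1)
    fits[name] = (slope, intercept)
    # plotting per iteration would go here, e.g.:
    #   plt.plot(sub['X'], sub['y'], 'o')
    #   plt.plot(sub['X'], slope * sub['X'] + intercept, 'r')
    #   plt.annotate(f'except {name}', xy=(0.05, 0.9), xycoords='axes fraction')
    print(f'except {name}: slope={slope:.3f}, intercept={intercept:.3f}')
```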
|
<python><pandas><regression><statsmodels>
|
2023-04-02 21:57:57
| 2
| 754
|
Dinesh
|
75,914,786
| 13,960,043
|
How to ensure a clean exit with a script hosted on the cloud?
|
<p>I built a web scraper that scrapes multiple websites for data. The problem is that the data is very large and takes about 8 hours to scrape (I use sleep because I don't want to bother their servers too much).</p>
<p>Well, the cloud service I want to host it on will only run it for 6 hours before killing the script, so I'm making it pick up where it left off when restarted. How do I ensure a clean exit of the code when the cloud service kills it? I don't want anything unexpected to happen to the data.</p>
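Most cloud platforms send SIGTERM and allow a grace period before a hard SIGKILL (which cannot be caught), so a hedged sketch of the usual pattern is a signal handler that persists progress and exits; <code>save_progress</code> is a stand-in for whatever checkpointing the scraper already does:

```python
import signal
import sys

def save_progress():
    # stand-in: write the last scraped URL / partial results atomically,
    # e.g. to a temp file followed by os.replace()
    print("progress saved")

def handle_sigterm(signum, frame):
    save_progress()
    sys.exit(0)   # raises SystemExit, so try/finally blocks still run

# register the handler for the termination signal cloud hosts usually send
signal.signal(signal.SIGTERM, handle_sigterm)
```

Since SIGKILL gives no warning at all, checkpointing frequently during the run is still worthwhile regardless of the handler.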
|
<python><web-scraping><cloud><remote-server>
|
2023-04-02 21:52:12
| 1
| 317
|
Nandril
|
75,914,757
| 272,023
|
How to integrate Kerberos into FastApi?
|
<p>Is there an existing way of integrating Kerberos authentication into <a href="https://fastapi.tiangolo.com/" rel="nofollow noreferrer">FastApi</a>?</p>
<p>I can see in this <a href="https://stackoverflow.com/questions/64146591/custom-authentication-for-fastapi">SO question</a> details of creating a custom authentication method. The FastApi docs <a href="https://fastapi.tiangolo.com/advanced/security/http-basic-auth/" rel="nofollow noreferrer">here</a> describe how to implement Basic Auth. Am I correct in thinking that I need to modify this to return a 401 with Www-Authenticate: Negotiate for no token and validate a supplied token in the Authorization header? Is there an example in Python where this is fine or is there a ready-made Kerberos library for use with FastApi already?</p>
|
<python><fastapi><kerberos>
|
2023-04-02 21:43:30
| 1
| 12,131
|
John
|
75,914,570
| 9,840,684
|
Creating labels based on partial string matches in Python
|
<p>I have an index column whose leading 2 or 3 characters indicate a category I'd like to create labels for. Consider the following data:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
</tr>
</thead>
<tbody>
<tr>
<td>NDP2207342</td>
</tr>
<tr>
<td>OC22178098</td>
</tr>
<tr>
<td>SD88730948</td>
</tr>
<tr>
<td>OC39002847</td>
</tr>
<tr>
<td>PTP9983930</td>
</tr>
<tr>
<td>NDP9110876</td>
</tr>
</tbody>
</table>
</div>
<p>with a desired output of:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>Labels</th>
</tr>
</thead>
<tbody>
<tr>
<td>NDP2207342</td>
<td>NCAR</td>
</tr>
<tr>
<td>OC22178098</td>
<td>OCAR</td>
</tr>
<tr>
<td>SD88730948</td>
<td>SDAR</td>
</tr>
<tr>
<td>OC39002847</td>
<td>OCAR</td>
</tr>
<tr>
<td>PTP9983930</td>
<td>SWAR</td>
</tr>
<tr>
<td>NDP9110876</td>
<td>NCAR</td>
</tr>
</tbody>
</table>
</div>
<p>Unfortunately the <strong>number</strong> of leading characters is not consistent (could be two or three), but those leading characters do consistently map to the categories of interest.</p>
<p>My attempt hoped to do something as simple as (or similar to) a SQL wildcard search, but it unfortunately didn't work (which is obvious in retrospect). Here's what I had:</p>
<pre><code>
def labelApply(x):
'''
Applying labels for categories
'''
if x == 'OC%': return 'OCAR'
elif x == 'NDP%': return 'NCAR'
elif x == 'PTP%': return 'SWAR'
elif x == 'SD%' : return 'SDAR'
else: return 'Out of Area'
df['labels'] = df['index'].apply(labelApply)
</code></pre>
<p>But that didn't work.</p>
<p>Any thoughts?</p>
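For reference, <code>==</code> compares whole strings, so the <code>%</code> wildcards never match anything; <code>str.startswith</code> (here driven by a prefix dictionary) expresses the intent. A sketch with the sample values from the question:

```python
import pandas as pd

df = pd.DataFrame({'Index': ['NDP2207342', 'OC22178098', 'SD88730948',
                             'OC39002847', 'PTP9983930', 'NDP9110876']})

prefix_map = {'NDP': 'NCAR', 'OC': 'OCAR', 'PTP': 'SWAR', 'SD': 'SDAR'}

def label_apply(value):
    # try longer prefixes first, so a 3-char code cannot lose to a 2-char one
    for prefix in sorted(prefix_map, key=len, reverse=True):
        if value.startswith(prefix):
            return prefix_map[prefix]
    return 'Out of Area'

df['Labels'] = df['Index'].apply(label_apply)
print(df['Labels'].tolist())   # ['NCAR', 'OCAR', 'SDAR', 'OCAR', 'SWAR', 'NCAR']
```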
|
<python><pandas><function>
|
2023-04-02 20:56:15
| 2
| 373
|
JLuu
|
75,914,464
| 14,883,879
|
RuntimeError: The size of tensor a (61) must match the size of tensor b (64) at non-singleton dimension 3
|
<p>Does anyone have a clue what the error here could be? I already looked at other Stack Overflow threads and PyTorch forum threads, but I didn't find anything 😕</p>
<p>My Dataset is from <a href="https://github.com/skyatmoon/CHoiCe-Dataset" rel="nofollow noreferrer">https://github.com/skyatmoon/CHoiCe-Dataset</a>. For Labels, I use the name of the directories in which the images are.</p>
<p>If you need more code/information, don't hesitate to ask.</p>
<p><strong>Train Method</strong></p>
<pre><code>def train():
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.3, momentum=0.9)
for epoch in range(3000):
running_loss = 0
for images, labels in dataloader:
optimizer.zero_grad()
output = model(images)
loss = criterion(output, labels.view(1, -1))
loss.backward()
optimizer.step()
running_loss += loss.item()
</code></pre>
<p><strong>Model</strong></p>
<pre><code>model = nn.Sequential(
nn.Linear(28, 16),
nn.Sigmoid(),
nn.Linear(16, 16),
nn.Sigmoid(),
nn.Linear(16, 61)
)
</code></pre>
<p><strong>DataLoader</strong></p>
<pre><code>dataloader = DataLoader(
dataset=dataset,
batch_size=64,
shuffle=True,
)
</code></pre>
|
<python><tensorflow><deep-learning><pytorch><neural-network>
|
2023-04-02 20:31:29
| 1
| 445
|
basti394
|
75,914,405
| 4,913,254
|
How to filter values in a column by common values in another column and apply two conditions in the filtering
|
<p>I have a data frame like this</p>
<pre><code>df = pd.DataFrame({'patient': ['patient1', 'patient1', 'patient1','patient2', 'patient2', 'patient3','patient3','patient4'],
'gene':['TYR','TYR','TYR','TYR','TYR','TYR','TYR','TYR'],
'variant': ['buu', 'luu', 'stm','lol', 'bla', 'buu', 'lol','buu'],
'genotype': ['hom', 'het', 'hom','het', 'hom', 'het', 'het','het']})
df
patient gene variant genotype
0 patient1 TYR buu hom
1 patient1 TYR luu het
2 patient1 TYR stm hom
3 patient2 TYR lol het
4 patient2 TYR bla hom
5 patient3 TYR buu het
6 patient3 TYR lol het
7 patient4 TYR buu het
</code></pre>
<p>I want to identify patients who have buu and additional variants BUT not luu. So, the expected output should be like this</p>
<pre><code>patient gene variant genotype
patient3 TYR buu het
patient3 TYR lol het
</code></pre>
<p>How can I do this?</p>
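One way to express "has buu, has at least one other variant, and has no luu" is a per-patient <code>groupby</code> filter; a sketch using the sample frame above:

```python
import pandas as pd

df = pd.DataFrame({'patient': ['patient1', 'patient1', 'patient1', 'patient2',
                               'patient2', 'patient3', 'patient3', 'patient4'],
                   'gene': ['TYR'] * 8,
                   'variant': ['buu', 'luu', 'stm', 'lol', 'bla', 'buu', 'lol', 'buu'],
                   'genotype': ['hom', 'het', 'hom', 'het', 'hom', 'het', 'het', 'het']})

def wanted(group):
    variants = set(group['variant'])
    # buu present, luu absent, and at least one additional variant
    return 'buu' in variants and 'luu' not in variants and len(variants) > 1

result = df.groupby('patient').filter(wanted)
print(result)   # the two patient3 rows
```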
|
<python><pandas>
|
2023-04-02 20:17:11
| 3
| 1,393
|
Manolo Dominguez Becerra
|
75,914,306
| 5,081,918
|
Unable to set correct Email Template for Mailchimp Campaign via Python API
|
<p>I'm unable to set the correct email template for a Mailchimp campaign.</p>
<p>I'm picking the <code>Template_ID</code> from the browser URL like: <a href="https://xxx.admin.mailchimp.com/email/templates/editor?id=%60Template_ID%60" rel="nofollow noreferrer">https://xxx.admin.mailchimp.com/email/templates/editor?id=`Template_ID`</a>.</p>
<p>Is it correct?</p>
<pre><code> campaign_data = dict(
type='regular',
recipients=dict(list_id="SEGMENT ID", segment_opts=dict(
saved_segment_id=segment_id)),
settings=dict(
subject_line=subject_line,
from_name='FROM_EMAIL',
reply_to='REPLY_EMAIL',
template_id=<Template_ID>
),
content_type="multichannel"
)
campaign = client.campaigns.create(campaign_data)
</code></pre>
<p>With the above code, the campaign is created, but the correct template is not set.</p>
|
<python><mailchimp><mailchimp-api-v3.0>
|
2023-04-02 19:55:28
| 0
| 1,054
|
nishit chittora
|
75,914,293
| 1,053,961
|
Nested async-to-sync generator stops at first iteration when executing each iteration individually with asyncio.run
|
<p>I’m working on an async library that intends to provide sync support. For that I use <code>asgiref</code>, but for demonstration purposes I’ll use <code>asyncio</code>, which has the same issue.</p>
<p>I want to wrap an async function that returns an async generator in a decorator that will transform the result into a classic generator. <strong>I need to do this without consuming the source generator, because I might handle a very large amount of data during this process.</strong> So I need to yield items one at a time while synchronising in a classic Python context.</p>
<p>This subject was discussed in <a href="https://stackoverflow.com/questions/71580727/translating-async-generator-into-sync-one">another thread</a> for a non-nested async generator case, and a user complained about the same issue I encountered today in <a href="https://stackoverflow.com/questions/71580727/translating-async-generator-into-sync-one#comment127339112_71581122">this comment</a></p>
<p>I’ve summarised my issue into this example:</p>
<pre><code>import asyncio
async def async_generator_source():
yield 1
yield 2
yield 3
async def async_generator_wrapper():
async for item in async_generator_source():
yield f"source says: {item}"
def sync_generator_wrapper():
async_generator = async_generator_wrapper()
try:
while True:
print(asyncio.run(async_generator.__anext__()))
except StopAsyncIteration:
print("stop async iteration received")
sync_generator_wrapper()
</code></pre>
<p>I expect this output:</p>
<pre class="lang-none prettyprint-override"><code>source says: 1
source says: 2
source says: 3
stop async iteration received
</code></pre>
<p>But instead I have this output:</p>
<pre class="lang-none prettyprint-override"><code>source says: 1
stop async iteration received
</code></pre>
<p>How could I fix this behaviour?</p>
<p>Please remember that <code>async_generator_source</code> content should not be stored in a list since it is in my use case a very large set of data.</p>
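A sketch of one possible fix: <code>asyncio.run</code> creates and closes a fresh event loop on every call, and closing the loop finalises the suspended inner <code>async for</code>. Keeping a single loop alive across the <code>__anext__</code> calls avoids that while still yielding one item at a time:

```python
import asyncio

async def async_generator_source():
    yield 1
    yield 2
    yield 3

async def async_generator_wrapper():
    async for item in async_generator_source():
        yield f"source says: {item}"

def sync_generator_wrapper():
    async_generator = async_generator_wrapper()
    loop = asyncio.new_event_loop()   # one loop for the whole iteration
    try:
        while True:
            try:
                # resume the async generator on the SAME loop each time
                yield loop.run_until_complete(async_generator.__anext__())
            except StopAsyncIteration:
                break
    finally:
        loop.close()

for item in sync_generator_wrapper():
    print(item)
```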
|
<python><python-asyncio>
|
2023-04-02 19:53:03
| 1
| 440
|
Guibod
|
75,914,181
| 8,372,455
|
pandas creating M-F time series dummies
|
<p>Is it possible to create a Boolean one-hot style column in time series data for when the dataframe hour is between 7AM and 5PM, Monday through Friday? The code below errors out when I add in the additional <code>and df.index.weekday <= 4</code> to the lambda function:</p>
<pre><code>hours = pd.get_dummies(df.index.hour, prefix='hour')
df['hour'] = df.index.strftime('%H').astype('int')
df['7AM_to_5PM'] = df['hour'].apply(lambda hr : 1 if (hr > 7 and hr < 17 and df.index.weekday <= 4) else 0)
</code></pre>
<p>Error is:</p>
<pre><code>----> 3 df['7AM_to_5PM'] = df['hour'].apply(lambda hr : 1 if (hr > 7 and hr < 17 and df.index.weekday <= 4) else 0)
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>The code below works just fine but will also incorporate the weekend data, which I don't want:</p>
<pre><code> df['7AM_to_5PM'] = df['hour'].apply(lambda hr : 1 if (hr > 7 and hr < 17) else 0)
</code></pre>
<p>Ultimate use case: I am trying to create a separate dataframe with this dummy variable for plotting purposes:
<code>df_scatter = df[['oat','electricity','7AM_to_5PM']]</code></p>
<p>Thanks for any tips</p>
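The error comes from mixing the scalar <code>hr</code> with the whole <code>df.index.weekday</code> array inside the lambda. Building the mask fully vectorized avoids both the error and the <code>apply</code>. A sketch on synthetic hourly data; the <code>&gt; 7</code> / <code>&lt; 17</code> bounds are kept from the question even though they technically start at 8AM:

```python
import pandas as pd

# synthetic hourly data: Saturday 2023-04-01 through Saturday 2023-04-08
idx = pd.date_range('2023-04-01', '2023-04-08', freq='h')
df = pd.DataFrame({'electricity': range(len(idx))}, index=idx)

# vectorized mask: weekday 0-4 is Monday-Friday
mask = (df.index.hour > 7) & (df.index.hour < 17) & (df.index.weekday <= 4)
df['7AM_to_5PM'] = mask.astype(int)

# Monday 2023-04-03 10:00 is inside the window, Saturday 10:00 is not
print(df.loc[pd.Timestamp('2023-04-03 10:00'), '7AM_to_5PM'])  # 1
print(df.loc[pd.Timestamp('2023-04-01 10:00'), '7AM_to_5PM'])  # 0
```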
|
<python><pandas>
|
2023-04-02 19:28:04
| 1
| 3,564
|
bbartling
|
75,914,039
| 11,021,252
|
How to calculate the distance between shapely Linestring and Point (3D)
|
<p>As shown in the figure, I have a 3D shapely LineString (<code>line</code>) and a Point (<code>point</code>).
I am trying to calculate the distance between them using <code>line.distance(point)</code>, but I am getting <code>0.0</code> as the output. It is not supposed to be zero, as shown in the figure.</p>
<p><code>line='LINESTRING Z (7760.332870937675 -2204.1529076701922 13310.4921875, 5275.867565893014 -2204.1529076701922 13421.5302734375)'</code></p>
<p><code>point='POINT Z (6677.34068980373 -2204.1529076701922 12820.9072265625)'</code></p>
<p>How can I solve this? What am I doing wrong here?</p>
<p><a href="https://i.sstatic.net/az3CB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/az3CB.png" alt="Figure" /></a></p>
|
<python><gis><geopandas><shapely>
|
2023-04-02 18:59:57
| 1
| 507
|
VGB
|
75,913,917
| 565,200
|
Tkinter Progress bar without progress
|
<p>I'm having a problem displaying a progress bar while using an object-oriented approach.
The following code creates a main window with the progress bar and three buttons, but also another window. Why does the code create this second window?</p>
<p>In addition, the progress bar doesn't show real progress. The buttons are configured to increase the value of the progress bar using the <code>step(20)</code> method.</p>
<p>I've set up a maximum value, but it doesn't seem to work.</p>
<p>Note: I need to create the <code>progressBar</code> variable before the <code>__init__</code> method to be able to call it in the button commands.</p>
<pre><code>import logging
import tkinter
import tkinter.ttk
from tkinter import *

class Window(tkinter.Frame):
button1 = Button()
button2 = Button()
button3 = Button()
progressBar = tkinter.ttk.Progressbar()
def __init__(self, master):
super().__init__(master)
master.title("Prueba JVS")
master.geometry("600x600")
self.frame = Frame(root)
self.frame.pack()
leftframe = Frame(master)
leftframe.pack(side=LEFT)
rightframe = Frame(master)
rightframe.pack(side=RIGHT)
label = Label(self.frame, text="Genetic JVS App")
label.pack()
button1 = Button(leftframe, text="Botón +", height=5, width=10, command=self.increment())
button1.pack(padx=30, pady=3)
button2 = Button(leftframe, text="Valor", height=5, width=10, command=self.display())
button2.pack(padx=30, pady=3)
button3 = Button(rightframe, text="Generar", height=5, width=10, command=self.button_clicked())
button3.pack(padx=30, pady=3)
#self.frame = tkinter.Frame(self.master)
# Barra de progreso.
self.progressBar = tkinter.ttk.Progressbar(self.frame, length=500)
self.progressBar.configure(maximum=200)
self.progressBar.pack(padx=10, pady=50)
self.frame.pack(padx=50, pady=5)
def increment(self):
self.progressBar.step(20)
def decrement(self):
self.progressBar.step(-20)
def reset(self):
self.progressBar["value"] = 0
def display(self):
print(self.progressBar["value"])
logging.info("Valor: %i", self.progressBar["value"])
def button_clicked(self):
print('Button clicked')
logging.info("Botón 3 pulsado.")
self.progressBar.step(5)
root = Tk()
window = Window(root)
root.mainloop()
</code></pre>
<p><a href="https://i.sstatic.net/fOmUb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fOmUb.png" alt="Screenshot" /></a></p>
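<p>Two likely causes, sketched here as guesses rather than a tested fix: the class-level <code>Button()</code> and <code>Progressbar()</code> attributes are created before any <code>Tk()</code> exists, so tkinter silently creates a default root window (the second window), and <code>command=self.increment()</code> <em>calls</em> the method once at construction time instead of handing it to the button as a callback. The callback pitfall can be demonstrated without any GUI:</p>

```python
calls = []

def increment():
    calls.append(1)

def make_button(command=None):
    """Stand-in for tkinter's Button: it just stores the callback for later clicks."""
    return command

# Wrong: increment() runs immediately; the "button" stores its return value (None)
wrong = make_button(command=increment())

# Right: pass the function object itself; it runs only when "clicked"
right = make_button(command=increment)
right()  # simulate a click
```

<p>So the buttons should use <code>command=self.increment</code> (no parentheses), and the widgets should only be created inside <code>__init__</code>, after <code>master</code> exists.</p>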
|
<python><python-3.x><user-interface><tkinter>
|
2023-04-02 18:32:09
| 1
| 7,570
|
Jorge Vega Sánchez
|
75,913,916
| 2,272,824
|
How to create a useful Polars Dataframe from a numpy structured array?
|
<p>I'm converting an application from Pandas to Polars in search of better scalability and performance.
My app reads data from an hdf5 compound dataset (using h5py) into a numpy structured array from which I create the Pandas dataframe directly as follows,</p>
<pre><code># dset is the hdf5 compound Dataset
np_struct_array = np.empty(dset.shape, dset.dtype)
dset.read_direct(np_struct_array)
df=pd.DataFrame(np_struct_array, dset.dtype)
</code></pre>
<p>The dtype of the numpy structured array depends on the h5 dataset being read, but a typical example would be something like</p>
<pre><code>[('SID', '<i8'), ('G', '<i8'), ('C', '<i8'), ('D', '<f8'), ('DOMAIN_ID', '<i8')]
</code></pre>
<p>This is super fast and the dataframe uses the column names and types from the numpy structured array directly</p>
<p>When I switch to Polars and use the same approach, the resulting Polars dataframe is a single column dataframe of type object which is not what I need - for example the schema arising from the above numpy structured array is <code>{'column_0': Object}</code></p>
<p>I can do the following and get the dataframe I need, but the performance sucks - 10x slower than Pandas</p>
<pre><code>df=pl.DataFrame(
{
field_name: np_struct_array[field_name] for field_name in np_struct_array.dtype.fields
}
)
</code></pre>
<p>So my question is what is the fastest/most efficient way to get the hdf5 compound dataset into a Polars dataframe? Is there a better way to use the numpy structured array with Polars for example? I could continue to read the data into a Pandas dataframe and then create a Polars dataframe from that, but I think that will create a copy which I'd rather avoid since the data can be big.</p>
<p>Any advice will be gratefully received.</p>
<p>Doug</p>
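<p>One detail that may help: indexing a structured array by field name already returns a zero-copy <em>view</em>, so the per-field dict is not copying on the NumPy side; however, those field views are strided (one element per record), and Polars needs contiguous buffers, so each column gets copied on ingestion anyway. The sketch below shows the field-view behaviour in plain NumPy (no Polars required), with <code>np.ascontiguousarray</code> as the one explicit copy per field:</p>

```python
import numpy as np

dtype = [('SID', '<i8'), ('G', '<i8'), ('D', '<f8')]
arr = np.zeros(5, dtype=dtype)

sid = arr['SID']            # a view into arr, not a copy
sid[0] = 7
# The write is visible through the parent array: no data was copied
assert arr[0]['SID'] == 7

# The view is strided (one int64 every 24 bytes), hence not contiguous
assert not sid.flags['C_CONTIGUOUS']

# One explicit, contiguous copy per field
sid_contig = np.ascontiguousarray(sid)
assert sid_contig.flags['C_CONTIGUOUS']
```

<p>With that in mind, it may be worth profiling <code>pl.DataFrame({name: np.ascontiguousarray(arr[name]) for name in arr.dtype.names})</code>, which makes the copies explicit and hands Polars contiguous buffers it can often take zero-copy; recent Polars versions also accept structured arrays in <code>pl.from_numpy</code>, though that is worth verifying against your installed version.</p>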
|
<python><dataframe><numpy><hdf5><python-polars>
|
2023-04-02 18:32:02
| 1
| 391
|
scotsman60
|
75,913,847
| 11,611,246
|
Run and break loop with progress bar in tkinter
|
<p>I want to create a small GUI in <code>tkinter</code> to download a list of files stored as URLs in a <code>*.csv</code> table.</p>
<p>However, I could not find a method to combine a progress bar plus an option to interrupt the process at any time. If I run the download loop within the main thread, it will block anything else and some user clicking on "cancel" will have no effect. On the other hand, it seems I cannot get the current status of the loop from an external process and use it to update the progress bar value.</p>
<p>Currently, this is my code:</p>
<pre><code>#-----------------------------------------------------------------------------|
# Import modules
import os, requests
import pandas as pd
import threading as th
import tkinter as tk
from tkinter import ttk
from tkinter import filedialog as fd
#-----------------------------------------------------------------------------|
# Functions
def download(remote_url, local_file):
data = requests.get(remote_url)
try:
with open(local_file, "wb") as f:
f.write(data.content)
exit_status = 0
except:
exit_status = 1
return exit_status
#-----------------------------------------------------------------------------|
# GUI
class BulkDownloader(tk.Frame):
def __init__(self, master = None):
super().__init__(master)
self.pack(padx = 0, pady = 0)
self.create_widgets()
self.master.minsize(300, 50)
self._continue_loop = True
self._current_progress = 0
self._downloading = False
def create_widgets(self):
# Variables
## Input file
self.file_name = tk.StringVar()
self.file_name.set("")
## Output folder
self.output_folder = tk.StringVar()
self.output_folder.set("")
## Progress bar status
self.progress_var = tk.DoubleVar()
self.progress_var.set(0)
# Title
self.winfo_toplevel().title("Bulk Downloader")
# Display
self.display_current_in = tk.Entry(self)
self.display_current_in.grid(row = 0, column = 4, columnspan = 4,
sticky = "EW")
self.display_current_in["textvariable"] = self.file_name
self.display_current_out = tk.Entry(self)
self.display_current_out.grid(row = 1, column = 4, columnspan = 4,
sticky = "EW")
self.display_current_out["textvariable"] = self.output_folder
# Buttons
## Select input file
self.select_in_button = tk.Button(self)
self.select_in_button["text"] = "Select input file"
self.select_in_button["command"] = self.select_in_file
self.select_in_button.grid(row = 0, column = 0, columnspan = 3,
sticky = "EW")
## Select output directory
self.select_out_button = tk.Button(self)
self.select_out_button["text"] = "Select output directory"
self.select_out_button["command"] = self.select_out_folder
self.select_out_button.grid(row = 1, column = 0, columnspan = 3,
sticky = "EW")
## Close main window
self.cancel_button = tk.Button(self)
self.cancel_button["text"] = "Cancel"
self.cancel_button["command"] = self.cancel_all
self.cancel_button.grid(row = 2, column = 0, columnspan = 3,
sticky = "EW")
## Download files
self.download_button = tk.Button(self)
self.download_button["text"] = "Download"
self.download_button["command"] = self.start_download
self.download_button.grid(row = 2, column = 4, columnspan = 4,
sticky = "E")
## Progress bar
self.progress = ttk.Progressbar(self, variable = self.progress_var,
mode = "determinate")
self.progress.grid(row = 3, column = 0, columnspan = 38, sticky = "EW")
def select_in_file(self):
current_selection = self.file_name.get()
set_dir_to = current_selection if current_selection != "" else "/"
file_types = (("csv file", "*.csv"), ("all files", "*.*"))
selection = fd.askopenfilename(title = "Open file",
initialdir = set_dir_to,
filetypes = file_types)
self.file_name.set(selection)
def select_out_folder(self):
current_selection = self.output_folder.get()
set_dir_to = current_selection if current_selection != "" else "/"
selection = fd.askdirectory(title = "Select destination directory",
initialdir = set_dir_to)
self.output_folder.set(selection)
def download_files(self):
local_folder = self.output_folder.get()
csv_table = self.file_name.get()
table = pd.read_csv(csv_table)
url_list = table[table.columns[0]].to_list()
for enumerator, item in enumerate(url_list):
if not self._continue_loop:
break
local_file = os.path.join(local_folder, os.path.split(item)[1])
exit_status = download(item, local_file)
self._current_progress = ((enumerator + 1) * 100) // len(url_list)
self.finished(exit_status)
def start_download(self):
self._continue_loop = True
if not self._downloading:
self._current_progress = 0
t = th.Thread(target = self.download_files, daemon = True)
t.start()
self.check_progress()
def check_progress(self):
self.progress_var.set(self._current_progress)
#self.master.update()
self.after(1000, self.check_progress)
def finished(self, exit_status):
self._downloading = False
messageWindow = tk.Toplevel(self)
window_title = "Bulk Downloader finished" if exit_status == 0 else \
"Bulk download failed"
messageWindow.title(window_title)
tk.Label(messageWindow, text = window_title).pack()
tk.Button(messageWindow, text = "Close window",
command = self.master.destroy).pack()
def cancel_all(self):
self._continue_loop = False
self._downloading = False
self.master.destroy
#-----------------------------------------------------------------------------|
# Run Bulk Downloader
root = tk.Tk()
app = BulkDownloader(root)
app.mainloop()
</code></pre>
<p>I also tried to set the progress as a global variable (rather than a property of the class) with no success.
Also I tried to update the progress from within the external thread by adding</p>
<pre><code> self.progress_var.set(self._current_progress)
self.master.update()
</code></pre>
<p>to the loop within <code>self.download_files()</code>; however, in this case Python failed because [some message something not in the main loop], my IDE crashed before I could read what the problem is. But I assume I simply cannot update the <code>tkinter.DoubleVar</code> from an external thread.</p>
<p>I also tried to use <code>Queue</code> as in <a href="https://stackoverflow.com/questions/63836979/bar-progress-in-tkinter-with-thread">this answer</a> to a somewhat similar problem. Here, I adjusted the script as follows:</p>
<pre><code> def download_files(self, Q):
local_folder = self.output_folder.get()
csv_table = self.file_name.get()
table = pd.read_csv(csv_table)
url_list = table[table.columns[0]].to_list()
for enumerator, item in enumerate(url_list):
if not self._continue_loop:
break
local_file = os.path.join(local_folder, os.path.split(item)[1])
exit_status = download(item, local_file)
#self._current_progress = ((enumerator + 1) * 100) // len(url_list)
prog = ((enumerator + 1) * 100) // len(url_list)
Q.put(prog)
self.finished(exit_status)
def start_download(self):
self._continue_loop = True
if not self._downloading:
self.Q = qu.Queue()
self._current_progress = 0
t = th.Thread(target = self.download_files, daemon = True,
args = (self.Q,))
t.start()
self.check_progress()
def check_progress(self):
#self.progress_var.set(self._current_progress)
#self.master.update()
try:
value = int(self.Q)
except:
value = 0
self.progress_var.set(value)
self.after(1000, self.check_progress)
</code></pre>
<p>still with no changes in progress.</p>
<p>Maybe there is some different error in the code?</p>
<p><strong>Edit</strong></p>
<p>As pointed out by Michael Butscher, a <code>queue</code> cannot be converted to an <code>int</code>. I added <code>.get()</code> where I attempted this conversion. Unfortunately, that only caused the session to stop responding.</p>
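<p>For the <code>Queue</code> variant, the usual pattern is to drain the queue with non-blocking <code>get_nowait()</code> calls inside the polling callback: a blocking <code>get()</code> freezes the main loop (hence the unresponsive session), and <code>int(self.Q)</code> cannot work because the queue object itself is not a number. A GUI-free sketch of the drain loop, with <code>time.sleep</code> standing in for the <code>after()</code> reschedule:</p>

```python
import queue
import threading
import time

q = queue.Queue()

def worker():
    # Simulates download_files() reporting percentages from the worker thread
    for pct in (25, 50, 75, 100):
        q.put(pct)

threading.Thread(target=worker, daemon=True).start()

progress = 0
deadline = time.time() + 5.0
while progress < 100 and time.time() < deadline:
    try:
        while True:             # drain everything queued since the last poll
            progress = q.get_nowait()
    except queue.Empty:
        pass                    # nothing new: keep the last value seen
    time.sleep(0.01)
```

<p>In <code>check_progress</code>, the same <code>try/except queue.Empty</code> drain would replace <code>int(self.Q)</code>, followed by <code>self.progress_var.set(...)</code> and the <code>after()</code> call, so every tkinter interaction stays in the main thread.</p>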
|
<python><tkinter><python-multithreading><progress>
|
2023-04-02 18:20:28
| 1
| 1,215
|
Manuel Popp
|
75,913,796
| 2,288,287
|
What is the correct use of "key" in the min() function?
|
<p>I have a question about the min() function in Python. I am confused about the use of the "key" option in min(). Could someone explain how key works in min(), especially in relation to a lambda function?</p>
<p>For example:</p>
<pre><code>lst = [10, -5, 20, 1]
min(lst, key=lambda x: x > 0)
</code></pre>
<p>As far as I understand, this expression should find the minimum element in the list lst that is greater than zero. However, it returns -5 instead of 1, which is the minimum element of lst that is greater than zero. Can someone explain why this happens and how I should correctly use key in the min() function?</p>
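<p>What <code>key</code> does is map every element to a comparison value; <code>min</code> then returns the original <em>element</em> whose key is smallest, without filtering anything out. Here <code>x &gt; 0</code> yields booleans, so every non-positive number gets key <code>False</code> (0), which is smaller than <code>True</code> (1), and <code>min</code> returns the first such element: <code>-5</code>. To restrict the candidates, filter first and then take the minimum:</p>

```python
lst = [10, -5, 20, 1]

# key only transforms elements for comparison; -5 maps to False (0), the smallest key
assert [x > 0 for x in lst] == [True, False, True, True]
assert min(lst, key=lambda x: x > 0) == -5

# Filtering, not key, restricts the candidates
smallest_positive = min(x for x in lst if x > 0)
assert smallest_positive == 1
```
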
|
<python><min>
|
2023-04-02 18:10:21
| 1
| 328
|
Jose Angel
|
75,913,733
| 668,445
|
python: how close to the recursion limit?
|
<p>Is there a way I can measure how close I am to python's recursion limit? I know about <code>sys.getrecursionlimit</code> and <code>sys.setrecursionlimit</code>. I'd like to know how close I am to the limit, without actually reaching it: either by measuring the remaining gap, or by measuring how deep I am already and subtracting that from <code>sys.getrecursionlimit()</code>. If necessary, I suppose I could write a function that probes this by calling itself repeatedly until <code>RecursionError</code> and returning a counter, but is there a less messy way?</p>
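<p>One non-destructive approach is to count the frames already on the stack by walking <code>f_back</code> from the current frame (cheaper than <code>len(inspect.stack())</code>, which also resolves source context) and subtract that from <code>sys.getrecursionlimit()</code>. A sketch, with the caveat that <code>sys._getframe</code> is CPython-specific:</p>

```python
import sys

def stack_depth():
    """Number of Python frames currently on the call stack."""
    frame = sys._getframe()
    depth = 0
    while frame is not None:
        depth += 1
        frame = frame.f_back
    return depth

def recursion_headroom():
    """Roughly how many more nested calls fit before RecursionError."""
    return sys.getrecursionlimit() - stack_depth()

def recurse(n):
    return recursion_headroom() if n == 0 else recurse(n - 1)

# Ten extra nested calls cost about ten frames of headroom
assert recursion_headroom() - recurse(10) >= 10
```
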
|
<python><recursion>
|
2023-04-02 18:00:54
| 0
| 3,878
|
Bill Evans at Mariposa
|
75,913,609
| 5,640,517
|
Install mod-wsgi in a poetry virtual environment
|
<p>I have a django app.</p>
<p>I've created a poetry virtual environment to manage dependencies.</p>
<p>This app runs on python 3.10.</p>
<p>My <code>pyproject.toml</code> looks like this:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry]
name = "django_manager"
version = "0.1.0"
description = "A Game manager"
authors = [""]
[tool.poetry.dependencies]
python = "^3.10"
Django = "^4.1.7"
beautifulsoup4 = "^4.12.0"
requests = "^2.28.2"
pillow = "^9.4.0"
[tool.poetry.dev-dependencies]
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p>Now I'm trying to move the app to a CentOS VM. I copied the whole project folder to <code>/var/www/html/GameManager</code> and inside it ran <code>poetry install</code>, then <code>poetry shell</code>.</p>
<p>Now if I do <code>python -V</code> I get Python 3.10.4 so that side works OK.</p>
<p>If I try to serve the app with apache:</p>
<p><code>/etc/httpd/conf.d/gamemanager.com.conf</code></p>
<pre><code><VirtualHost *:80>
ServerName gamemanager.com
ServerAlias *.gamemanager.com
Redirect permanent / https://gamemanager.com/
</VirtualHost>
<VirtualHost *:443>
ServerName gamemanager.com
ServerAlias *.gamemanager.com
<Directory /var/www/html/GameManager>
<Files wsgi.py>
Require all granted
</Files>
</Directory>
WSGIDaemonProcess gamemanager.com display-name=gamemanager user=user group=user_group processes=2 threads=15
WSGIScriptAlias / /var/www/html/GameManager/app/wsgi.py
WSGIProcessGroup gamemanager.com
TimeOut 3600
LogLevel info
ErrorLog "/var/log/httpd/gamemanager.com-error.log"
CustomLog "/var/log/httpd/gamemanager.com-access.log" common
</VirtualHost>
</code></pre>
<p>I see in <code>gamemanager.com-error.log</code> an exception executing wsgi.py, probably because it's trying to use <code>/home/user/.local/lib/python3.6/site-packages/django/conf/__init__.py</code></p>
<p>So right now I'm trying to fix that by installing mod-wsgi inside the venv with python3.10 and pip3.10.</p>
<p>But I get a bunch of errors:</p>
<pre><code>/usr/bin/ld: /usr/local/lib/libpython3.10.a(pegen.o): relocation R_X86_64_32 against symbol `_Py_NoneStruct' can not be used when making a shared object; recompile con -fPIC
/usr/bin/ld: /usr/local/lib/libpython3.10.a(parser.o): relocation R_X86_64_32 against `.text' can not be used when making a shared object; recompile con -fPIC
/usr/bin/ld: /usr/local/lib/libpython3.10.a(string_parser.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile con -fPIC
/usr/bin/ld: /usr/local/lib/libpython3.10.a(myreadline.o): relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile con -fPIC
/usr/bin/ld: falló el enlace final: Sección no representable en la salida
collect2: error: ld devolvió el estado de salida 1
error: command '/usr/local/bin/gcc' failed with exit code 1
[end of output]
</code></pre>
<p>I've read that Python should be built with <code>--enable-shared</code> (not sure if that's possible here, since it's installed by poetry), or that I need the <code>LD_RUN_PATH</code> environment variable, but I don't know what its value should be. <code>/usr/local/lib/python3.10/</code>?</p>
|
<python><django><apache><mod-wsgi>
|
2023-04-02 17:33:50
| 1
| 1,601
|
Daviid
|
75,913,504
| 103,969
|
How can I reference a specific rect in a Bokeh plot?
|
<p>I'm working on a plot with ornamentations that draw rectangles over the plot to highlight some sections.</p>
<p>I'd like to get a reference to a rect that is plotted on top so that I can move it independently of the plotted data. I can't seem to figure out how to do that. If there were multiple glyphs, how would I be able to reference each one?</p>
<p>Let's say I had a number of highlight rectangles overlaying my plot, and the plot itself was drawn with rectangles: how would I find one particular highlight?</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>x = [x*0.005 for x in range(0, 200)]
y = x
source = ColumnDataSource(data=dict(x=x, y=y))
plot = figure(width=400, height=400, x_range=(0, 1), y_range=(0, 1))
plot.line('x', 'y', source=source, line_width=3, line_alpha=0.6)
# the points to be plotted
x1 = .30
y1 = .30
w1 = .40
h1 = .20
# plotting the graph
plot.rect(x1, y1,w1,h1)
slider = Slider(start=0.1, end=4, value=1, step=.1, title="power")
slider.js_link('value', plot,'x1')
layout = column(slider, plot)
show(layout)</code></pre>
</div>
</div>
</p>
<p>The <code>js_link</code> call doesn't find <code>x1</code> in the <code>Rect</code> under the plot structure, and <code>plot.rect</code> doesn't work, of course.</p>
<p>How can I reference the rect?</p>
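<p>The piece that seems to be missing (hedged, since I haven't run this against the full app): <code>figure.rect()</code> returns a <code>GlyphRenderer</code>, and the <code>Rect</code> model with the coordinate properties lives on its <code>.glyph</code> attribute, whose properties are <code>x</code>, <code>y</code>, <code>width</code> and <code>height</code>; there is no <code>x1</code> property anywhere on the plot. Keeping the return value, or tagging the renderer with <code>name=</code> and finding it later with <code>select</code>, gives a handle for each rectangle:</p>

```python
from bokeh.models import Slider
from bokeh.plotting import figure

plot = figure(width=400, height=400, x_range=(0, 1), y_range=(0, 1))

# Keep the GlyphRenderer that rect() returns, and/or give it a findable name
highlight = plot.rect(0.3, 0.3, 0.4, 0.2, name="highlight")
found = plot.select(name="highlight")

# Link the slider to the Rect glyph model, not to the plot
slider = Slider(start=0.1, end=0.9, value=0.3, step=0.1, title="x")
slider.js_link('value', highlight.glyph, 'x')
```

<p>With several highlights, distinct <code>name</code> values (or simply holding the renderers in a list or dict) distinguish them.</p>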
|
<python><bokeh>
|
2023-04-02 17:12:53
| 1
| 1,779
|
shigeta
|
75,913,334
| 926,319
|
Get Rust struct for parent or subclass from an attribute of a PyAny object
|
<p>I have the following class structure defined in a Rust PyO3 Extension:</p>
<pre class="lang-rust prettyprint-override"><code>#[pyclass(subclass)]
struct Parent {
foo: i32
}
#[pyclass(extends=Parent)]
struct Child {
bar: i32
}
</code></pre>
<p>Then in Python, a pure python class contains an attribute which can be an instance of either the parent or the subclass, like so:</p>
<pre class="lang-py prettyprint-override"><code>class MyOtherClass:
def __init__(self):
# This attribute can be an instance of Parent or Child
self.rust_class = Parent(foo=5)
</code></pre>
<p>Then in another PyO3 function, I want to be able to get the Rust struct for the <code>MyOtherClass.rust_class</code> attribute. I can easily get just one, say the Parent, by doing the below function, but how would I be able to retrieve either the Parent or Child struct based on whatever the attribute is an instance of?</p>
<pre class="lang-rust prettyprint-override"><code>#[pyfunction]
fn my_function(
my_other_class: &PyAny //This is an instance of MyOtherClass
) -> i32 {
let my_struct: Parent = my_other_class.getattr("rust_class").unwrap().extract().unwrap();
my_struct.foo
}
</code></pre>
|
<python><rust><pyo3>
|
2023-04-02 16:43:30
| 1
| 2,054
|
Darren
|
75,913,320
| 20,726,966
|
List all files and folders in a specific subdirectory from Firebase Cloud Storage through Python
|
<p>I would like to list all the files and folders in a specific directory inside Firebase Cloud Storage. I have tried <code>files = storage.list_files()</code> from other Stack Overflow answers, which lists all files and folders. I have also tried <code>files = storage.list_files(prefix='images/')</code>, but it does not work.</p>
<p>I am using pyrebase4 to initialize the app, and I have even provided <code>serviceAccount</code>, but it did not work.</p>
<pre><code>bucket = storage.bucket()
bucket.list_blobs(prefix="files/")
</code></pre>
<p>Please respond with the libraries you are importing, the syntax for initializing the Firebase app, and finally the code to fetch the list of all files from a specific directory in Firebase Cloud Storage.</p>
<p>If the above is not possible, would it be very resource-demanding (time &amp; cost) to do a <code>storage.list_files().includes(file_to_be_searched)</code> every time? For, let's say, 10k total files.</p>
|
<python><firebase><cloud-storage><pyrebase>
|
2023-04-02 16:39:55
| 2
| 318
|
Homit Dalia
|
75,913,208
| 12,129,443
|
Why Pandas .divide() method adding the divisor as a column with NAN values?
|
<p>I am trying to divide a Pandas time-series data frame by another data frame with an exactly matching datetime index, using pandas' <code>.divide()</code> method. Instead of the expected element-wise division, the result contains the divisor's column alongside the original columns, and every value is NaN. Why is that? I am unable to figure it out.</p>
<p>Please note that if I divide by a scalar there is no problem. I appreciate any input.</p>
<p>Below are the info of the data frames.</p>
<pre><code>print(week1_range)
Max TemperatureF Min TemperatureF
Date
2013-07-01 79 66
2013-07-02 84 66
2013-07-03 86 71
2013-07-04 86 70
2013-07-05 86 69
2013-07-06 89 70
2013-07-07 77 70
-----------------------------------------
print(week1_range.info())
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 7 entries, 2013-07-01 to 2013-07-07
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Max TemperatureF 7 non-null int64
1 Min TemperatureF 7 non-null int64
dtypes: int64(2)
memory usage: 168.0 bytes
None
-----------------------------------------------
print(week1_mean)
Mean TemperatureF
Date
2013-07-01 72
2013-07-02 74
2013-07-03 78
2013-07-04 77
2013-07-05 76
2013-07-06 78
2013-07-07 72
----------------------------------------------------
print(week1_mean.info())
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 7 entries, 2013-07-01 to 2013-07-07
Data columns (total 1 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Mean TemperatureF 7 non-null int64
dtypes: int64(1)
memory usage: 112.0 bytes
None
-------------------------------------------
print(week1_range.divide(week1_mean, axis='rows'))
Max TemperatureF Mean TemperatureF Min TemperatureF
Date
2013-07-01 NaN NaN NaN
2013-07-02 NaN NaN NaN
2013-07-03 NaN NaN NaN
2013-07-04 NaN NaN NaN
2013-07-05 NaN NaN NaN
2013-07-06 NaN NaN NaN
2013-07-07 NaN NaN NaN
</code></pre>
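<p>The all-NaN result comes from alignment: dividing one DataFrame by another aligns on both the index <em>and</em> the columns, and <code>'Mean TemperatureF'</code> matches neither of the other column names, so every aligned cell has a missing operand. Dividing by the mean as a <em>Series</em> with <code>axis='index'</code> aligns on the dates only. A sketch with abbreviated data:</p>

```python
import pandas as pd

idx = pd.to_datetime(["2013-07-01", "2013-07-02", "2013-07-03"])
week1_range = pd.DataFrame(
    {"Max TemperatureF": [79, 84, 86], "Min TemperatureF": [66, 66, 71]}, index=idx
)
week1_mean = pd.DataFrame({"Mean TemperatureF": [72, 74, 78]}, index=idx)

# DataFrame / DataFrame: no column labels in common, so everything is NaN
all_nan = week1_range.divide(week1_mean)
assert all_nan.isna().all().all()

# DataFrame / Series, aligned on the index: element-wise division as intended
ratio = week1_range.divide(week1_mean["Mean TemperatureF"], axis="index")
```

<p>This also explains why a scalar works: a scalar has nothing to align, so it is broadcast to every cell.</p>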
|
<python><pandas><dataframe>
|
2023-04-02 16:18:25
| 1
| 668
|
Srinivas
|