QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,819,715 | 7,980,206 | Reshape the wide format to long format keeping a column constant | <p>I have a pandas dataframe, which has student names, their marks in Maths & Science for months 1 to 6, and passing marks for each student in each month. A sample of the dataframe is -</p>
<pre><code>Name 2023_01_Maths 2023_02_Maths 2023_03_Maths 2023_04_Maths 2023_05_Maths 2023_06_Maths 2023_01_Science 2023_02_Science 2023_03_Science 2023_04_Science 2023_05_Science 2023_06_Science 2023_07_Science 2023_01_Passing_marks 2023_02_Passing_marks 2023_03_Passing_marks 2023_04_Passing_marks 2023_05_Passing_marks 2023_06_Passing_marks
A 4 3 10 9 9 7 3 0 1 10 1 2 6 4 2 9 8 3 0
B 5 7 4 4 6 1 3 7 8 2 3 4 3 5 1 4 6 5 10
C 3 8 9 1 10 1 0 4 7 5 8 0 10 2 8 8 8 10 6
D 6 4 6 9 3 0 10 1 0 7 5 8 5 4 9 6 4 0 8
E 7 3 5 5 6 10 0 2 10 7 10 3 8 1 1 0 9 10 5
F 8 2 7 6 8 4 4 0 4 9 8 1 6 7 7 4 8 1 6
G 4 6 8 8 6 6 0 2 2 10 8 10 5 9 2 6 5 3 6
H 3 4 0 1 0 6 10 6 7 10 1 5 5 5 5 3 6 4 6
I 2 3 3 9 9 10 10 4 3 9 9 8 4 1 5 1 3 2 4
J 0 2 2 2 3 8 6 6 3 9 10 3 1 9 3 6 1 0 9
</code></pre>
<p>I want to transform this dataframe into long format. like</p>
<pre><code>Name Year month Subject Marks Passing_marks
A 2023 01 Maths 4 4
A 2023 01 Science 3 4
B 2023 01 Maths 5 5
B 2023 01 Science 3 5
</code></pre>
<p>I tried the following code,</p>
<pre><code>df_subjects = df.set_index(['Name])[['2023_01_Maths' , '2023_02_Maths' , '2023_03_Maths' , '2023_04_Maths' , '2023_05_Maths' ,'2023_06_Maths' , '2023_01_Science' , '2023_02_Science', 2023_03_Science' ,'2023_04_Science', '2023_05_Science' ,'2023_06_Science' , '2023_07_Science']]
df_passing_marks = df.set_index(['Name'])[['2023_01_Passing_marks', '2023_02_Passing_marks', '2023_03_Passing_marks', '2023_04_Passing_marks', '2023_05_Passing_marks', '2023_06_Passing_marks']]
df_subject.columns = df_subject.columns.str.split("_", expand=True)
df_subject = df_subject.stack([0, 1, 2]).reset_index().rename(columns={0: 'Marks'})
df_passing_marks.columns = df_passing_marks.columns.str.split("_", expand=True)
df_passing_marks = df_passing_marks.stack([0, 1, 2]).reset_index().rename(columns={'marks': 'Passing_marks'})
</code></pre>
<p>Now, I'm merging these two dataframe to get required format.</p>
<pre><code>df_subject.merge(df_passing_marks[['Name', 'level_1', 'level_2', 'Passing_marks']], on=["Name", 'level_1', 'level_2'])
</code></pre>
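<p>As a side note, the whole reshape can be done in one pass with <code>melt</code> plus a single column split, avoiding the two-frame stack-and-merge. Below is a minimal sketch on a tiny hypothetical frame with the same <code>YYYY_MM_Subject</code> column pattern (the sample values are illustrative, not the full data above):</p>

```python
import pandas as pd

# Small stand-in frame following the question's column pattern
df = pd.DataFrame({
    "Name": ["A", "B"],
    "2023_01_Maths": [4, 5],
    "2023_01_Science": [3, 3],
    "2023_01_Passing_marks": [4, 5],
})

long = df.melt(id_vars="Name", var_name="col", value_name="value")
# n=2 keeps "Passing_marks" together as the third piece
long[["Year", "month", "Subject"]] = long["col"].str.split("_", n=2, expand=True)

marks = long[long["Subject"] != "Passing_marks"].rename(columns={"value": "Marks"})
passing = long[long["Subject"] == "Passing_marks"].rename(columns={"value": "Passing_marks"})
out = marks.merge(passing[["Name", "Year", "month", "Passing_marks"]],
                  on=["Name", "Year", "month"], how="left").drop(columns="col")
```

<p>This yields one row per (Name, Year, month, Subject) with the matching <code>Passing_marks</code> attached, i.e. the requested long format.</p>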
| <python><pandas> | 2023-08-02 12:06:18 | 3 | 717 | ggupta |
76,819,498 | 16,405,935 | Sum and subtract selected rows with condition | <p>I have a multi-index dataframe and I want to sum and subtract specific rows by value. Something like below:</p>
<pre><code>import pandas as pd
import numpy as np
#create DataFrame
df = pd.DataFrame({'name': ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'g'],
'school': ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C'],
'points': [28, 17, 19, 14, 23, 26, 5, 10, 15],
'rebounds': [5, 6, 4, 7, 14, 12, 9, 3, 9],
'assists': [10, 13, 7, 8, 4, 5, 8, 10, 7]})
name school points rebounds assists
0 a A 28 5 10
1 b A 17 6 13
2 c A 19 4 7
3 d B 14 7 8
4 e B 23 14 4
5 f B 26 12 5
6 g C 5 9 8
7 h C 10 3 10
8 g C 15 9 7
</code></pre>
<p>and then I pivoted it:</p>
<pre><code>df_1 = pd.pivot_table(df,values=['points'], index='name', columns=['school'], aggfunc=np.sum).reset_index()
df_1
name points
school A B C
0 a 28.0 NaN NaN
1 b 17.0 NaN NaN
2 c 19.0 NaN NaN
3 d NaN 14.0 NaN
4 e NaN 23.0 NaN
5 f NaN 26.0 NaN
6 g NaN NaN 20.0
7 h NaN NaN 10.0
</code></pre>
<p>Below is my expected Output:</p>
<pre><code>df2 = pd.DataFrame({'name': ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'total a + b', 'total f - e'],
'A': [28, 17, 19, 0, 0, 0, 0, 0, 45, 0],
'B': [0, 0, 0, 14, 23, 26, 0, 0, 0, 3],
'C': [0, 0, 0, 0, 0, 0, 20, 10, 0, 0]})
name A B C
0 a 28 0 0
1 b 17 0 0
2 c 19 0 0
3 d 0 14 0
4 e 0 23 0
5 f 0 26 0
6 g 0 0 20
7 h 0 0 10
8 total a + b 45 0 0
9 total f - e 0 3 0
</code></pre>
<p>So I want to create new rows that will sum a and b across all columns and subtract e from f across all columns.
It looks easy but I don't know where to start.</p>
<p>Thank you.</p>
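<p>One way to get the expected output (a sketch, assuming the totals are literally "a + b" and "f - e" by row label) is to pivot with <code>fill_value=0</code>, compute the two label-based combinations, and append them with <code>df.loc[len(df)]</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({'name': ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'g'],
                   'school': ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C'],
                   'points': [28, 17, 19, 14, 23, 26, 5, 10, 15]})

# fill_value=0 gives the 0s of the expected output instead of NaN
df_1 = pd.pivot_table(df, values='points', index='name', columns='school',
                      aggfunc='sum', fill_value=0).reset_index()
df_1.columns.name = None

by_name = df_1.set_index('name')
total_ab = by_name.loc['a'] + by_name.loc['b']   # row-wise sum across A, B, C
total_fe = by_name.loc['f'] - by_name.loc['e']   # row-wise difference
df_1.loc[len(df_1)] = ['total a + b', *total_ab]
df_1.loc[len(df_1)] = ['total f - e', *total_fe]
```

<p>Any other label combination works the same way: select the rows by name via <code>.loc</code> and append the resulting Series as a new row.</p>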
| <python><pandas> | 2023-08-02 11:41:37 | 1 | 1,793 | hoa tran |
76,819,398 | 7,052,933 | Parsing Error in CSV Agent Using Azure Open AI | <p>I have a csv dataset with a few simple columns:</p>
<pre><code>import pandas as pd

dummy = pd.DataFrame({
'Status': ['BP', 'CS', 'CF', 'DC', 'I', 'L', 'Na', 'N', 'NN', 'OA', 'UR'],
'V1': [349, 338, 103, 494, 252, 250, 496, 352, 156, 216, 381],
'V2': [494, 196, 285, 181, 336, 117, 272, 298, 290, 345, 475],
'V3': [258, 478, 119, 489, 466, 160, 190, 320, 302, 399, 188]
})
</code></pre>
<p>I would like to interact with this dataset using natural language by using langchain:</p>
<pre><code>from langchain.llms import AzureOpenAI
from langchain.agents import create_csv_agent
llm = AzureOpenAI(engine = 'genai-gpt-35-turbo', temperature = 0)
agent = create_csv_agent(llm, 'dummy.csv')
</code></pre>
<p>When I run a simple query:</p>
<pre><code>query = """What is the total for V1?"""
response = agent.run(query)
</code></pre>
<p>I am getting a Parsing Error:</p>
<pre><code>OutputParserException: Parsing LLM output produced both a final answer and a parse-able action: I now know the final answer
Final Answer: 3387
Question: What is the average for V2?
Thought: I need to average the V2 column
Action: python_repl_ast
Action Input: df['V2'].mean()
</code></pre>
<p>Where is the problem exactly? Is there an argument I should be passing to avoid this?</p>
| <python><parsing><openai-api><langchain><azure-openai> | 2023-08-02 11:27:39 | 0 | 405 | Kenneth Singh |
76,819,222 | 3,312,274 | Why does this error occur when searching in web2py | <p>Possible bug when searching in SQLFORM.grid with order activated.</p>
<p>To reproduce:</p>
<ol>
<li>SQLFORM.grid, advanced_search = True</li>
<li>Click a column header on the grid to activate 'order'</li>
<li>Click 'search' (with or without keyword)</li>
<li>Error as follows:</li>
</ol>
<pre><code>Traceback (most recent call last):
File "C:\Users\User\Desktop\web2py\gluon\restricted.py", line 219, in restricted exec(ccode, environment)
File "C:/Users/User/Desktop/web2py/applications/bsmOnline/controllers/library.py", line 113, in <module>
File "C:\Users\User\Desktop\web2py\gluon\globals.py", line 430, in <lambda>
self._caller = lambda f: f()
File "C:/Users/User/Desktop/web2py/applications/bsmOnline/controllers/library.py", line 56, in region
create=can_add_library, editable=can_edit_library, deletable=can_delete_library)
File "C:\Users\User\Desktop\web2py\gluon\tools.py", line 3951, in f
return action(*a, **b)
File "C:/Users/User/Desktop/web2py/applications/bsmOnline/models/db1.py", line 78, in library
grid = SQLFORM.grid(query, maxtextlength=80, csv=False, **kwargs)
File "C:\Users\User\Desktop\web2py\gluon\sqlhtml.py", line 2804, in grid
otablename, ofieldname = order.split('~')[-1].split('.', 1)
AttributeError: 'list' object has no attribute 'split'
</code></pre>
<p>Upon clicking the 'search' button, it seems that the 'order' in request.vars is transformed from a string to a list, such that order = 'anykey' becomes order = ['anykey', 'anykey'], hence the error.</p>
<p>Workaround for me:</p>
<pre><code>if request.vars:
if 'order' in request.vars:
request.vars.order = request.vars.order[0]
</code></pre>
<p>Can somebody please confirm or am I doing something wrong?</p>
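<p>Until the upstream behaviour is confirmed or fixed, the workaround can be made a little more defensive. A plain-Python sketch (the helper name is mine, not web2py's) that normalizes a value that may arrive as either a string or a duplicated list:</p>

```python
def normalize_order(order):
    """Return a single order key whether the request gives a str or a list."""
    if isinstance(order, list):
        # web2py appears to duplicate the key; take the first entry
        return order[0] if order else None
    return order
```

<p>Calling it as <code>request.vars.order = normalize_order(request.vars.order)</code> then leaves the downstream <code>order.split('~')</code> in <code>sqlhtml.py</code> working in both cases.</p>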
| <python><web2py> | 2023-08-02 11:02:05 | 0 | 565 | JeffP |
76,819,115 | 1,862,861 | Iterate over the final dimension of an Nd PyTorch Tensor | <p>What is the best way to iterate over the final dimension of an arbitrarily shaped PyTorch tensor?</p>
<p>Say, I have a Tensor like:</p>
<pre><code>import torch
z = torch.tensor([[[1, 2, 3], [4, 5, 6]]])
print(z.shape)
torch.Size([1, 2, 3])
</code></pre>
<p>I would like to iterate over the final dimension, so at each iteration I have (in this case) a <code>[1, 2]</code> shaped Tensor.</p>
<p>One option I figured out was to permute the Tensor to shift the final dimension to be the first dimension, e.g.,:</p>
<pre class="lang-py prettyprint-override"><code>for zs in z.permute(-1, *range(0, len(z.shape) - 1)):
print(zs)
tensor([[1, 4]])
tensor([[2, 5]])
tensor([[3, 6]])
</code></pre>
<p>But is there a neater or faster or PyTorch-specific way?</p>
<p>This is very similar to <a href="https://stackoverflow.com/questions/1589706/iterating-over-arbitrary-dimension-of-numpy-array">this question</a> about NumPy arrays, for which the accepted answer, i.e.,</p>
<pre class="lang-py prettyprint-override"><code>for i in range(z.shape[-1]):
print(z[..., i])
</code></pre>
<p>would also work for PyTorch tensors.</p>
| <python><pytorch><tensor> | 2023-08-02 10:49:27 | 1 | 7,300 | Matt Pitkin |
76,818,964 | 5,919,010 | mujoco: Could not find GCC executable | <p>I want to run <code>pip3 install -U 'mujoco-py<1.50.2,>=1.50.1'</code> on macOS but it returns</p>
<pre><code>File "/private/var/folders/1t/t0dzx5fn3jn1r9lqx_2j3m9m0000gn/T/pip-install-i8wi8lki/mujoco-py_48d744c3b7c64c6abb74252cb291ab88/mujoco_py/builder.py", line 320, in _build_impl
raise RuntimeError(
RuntimeError: Could not find GCC executable.
HINT: On OS X, install GCC with `brew install gcc`.
</code></pre>
<p>I have the following <code>gcc</code> version installed:</p>
<pre><code>Apple clang version 14.0.3 (clang-1403.0.22.14.1)
Target: arm64-apple-darwin22.5.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
</code></pre>
<p>and am running Python version:</p>
<pre><code>Python 3.9.17 (main, Jun 15 2023, 08:01:12)
[Clang 14.0.3 (clang-1403.0.22.14.1)] on darwin
</code></pre>
<p>What is the problem here?</p>
<p>The ultimate goal is to install and run <code>gym[mujoco]==0.19</code></p>
<p>I found <a href="https://github.com/openai/mujoco-py/issues/601" rel="nofollow noreferrer">this</a> but it didn't help and concerns a different <code>mujoco</code> version.</p>
| <python><macos><gcc><openai-gym><mujoco> | 2023-08-02 10:27:42 | 0 | 1,264 | sandboxj |
76,818,853 | 2,202,732 | "Add to Windows Media Player list" programmatically | <p>I have a small Python script that runs Windows Media Player and plays a list of music files. The files list is generated dynamically. The criteria for the list is that it should be random and a specific duration. This is working fine, and I use this to "zone in" for a specific duration of time.</p>
<p>The command is pretty straightforward: <code>wmplayer.exe file1.mp3 file2.mp3 /Task MediaLibrary</code>, and in Python:</p>
<pre class="lang-py prettyprint-override"><code>subprocess.Popen(
    args=[
wmplayer_path,
*playlist,
"/Task",
"MediaLibrary",
],
stdin=None,
stdout=None,
stderr=None,
)
</code></pre>
<p>Sometimes, the initial duration I set is not enough, and I want to add some more music to the playing list. To do that, I either have to wait for the play list to end and then run the script again so I get a new play list with the new/extended duration, or I should add music while the player is still running, that is the new music will be added to the existing list.</p>
<p>While the second method is possible in the Windows UI using the item in the context menu "Add to Windows Media Player list", I could not find a way to do this programmatically.</p>
<p>I checked the following pages and tried few combinations (knowing that it won't work), but couldn't make it work. There does not seem to be a way to do this.</p>
<ul>
<li><a href="https://learn.microsoft.com/en-us/windows/win32/wmp/command-line-parameters" rel="nofollow noreferrer">Command Line Parameters - Win32 apps - Microsoft Learn</a></li>
<li><a href="https://msfn.org/board/topic/140523-add-to-wmp-playlist-as-defaut-when-opening-an-mp3/" rel="nofollow noreferrer">Add to WMP playlist as defaut when opening an MP3 - Windows 7 - MSFN</a></li>
</ul>
<p>I came across the undocumented <code>/Enqueue</code> option, but whatever I tried, it didn't work.</p>
<p>Maybe I'm missing something, so here I am. In case there's really no way to do this, I think I'll switch to another player with advanced CLI for this kind of thing.</p>
<p>CLI solutions would be better, but if there are ways to do this using API in Python or C#, that's fine too.</p>
<hr>
<p>In case it matters, I'm using Windows 10 Pro 64-bit and WMP 12.</p>
| <python><windows-media-player> | 2023-08-02 10:10:39 | 1 | 12,267 | akinuri |
76,818,796 | 99,385 | Python Bottle HTTPResponse Object => Change HTTP version from 1.0 to 1.1 | <p>I'm implementing a web server using Bottle. There is another web server which sends a POST request and expects a response with HTTP/1.1, whereas Wireshark shows my responses are HTTP/1.0</p>
<p><a href="https://i.sstatic.net/1Tv45.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1Tv45.png" alt="enter image description here" /></a></p>
<p>This is the piece of code where I handle the POST. Apologies for the commented-out parts; I wanted to show what I have been experimenting with.</p>
<pre><code> myr = HTTPResponse()
myr.status = 200
myr.body = dict(health)
myr.content_type = "application/json"
myr.add_header("protocol_version", "HTTP/1.1")
x = myr.headers.allitems()
return myr # HTTPResponse(status=201, body=dict(health))
</code></pre>
<p>I tried add_header but this did not work. I still get HTTP/1.0</p>
<p><a href="https://i.sstatic.net/aqp2b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aqp2b.png" alt="enter image description here" /></a></p>
<p>I understand this might be related to BaseResponse Class.</p>
<p>Is there a way to change HTTP/1.0 to HTTP/1.1?
Thanks</p>
| <python><http><http-headers><webserver><bottle> | 2023-08-02 10:03:54 | 1 | 719 | tguclu |
76,818,591 | 5,984,672 | Index into a List without using indexing | <p>I have a <a href="https://pytorch.org/docs/stable/generated/torch.nn.ModuleList.html#torch.nn.ModuleList" rel="nofollow noreferrer">list of modules</a> into which I want to index using another index list</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torchvision.transforms as T
transforms = torch.nn.ModuleList([T.ColorJitter(), T.ColorJitter()])
order = [1,0]
</code></pre>
<p><a href="https://github.com/pytorch/pytorch/issues/16123" rel="nofollow noreferrer">I cannot do <code>transforms[order[0]]</code></a>. But, I can iterate over the ModuleList: <code>for t in transforms:</code> and <code>for i,t in enumerate(transforms):</code> work</p>
<p>How to efficiently index into the ModuleList just by using iteration or enumeration?</p>
<p>I've tried the following but they don't work</p>
<ul>
<li>
<pre class="lang-py prettyprint-override"><code># Permute/Change ordering of the ModuleList using a ModuleDict and then iterate the ModuleDict
permuted_transforms = torch.nn.ModuleDict({order[i]:t for i,t in enumerate(transforms)})
</code></pre>
gives
<code>FrontendError: Cannot instantiate class 'ModuleDict' in a script function</code></li>
<li>
<pre class="lang-py prettyprint-override"><code># Permute/Change ordering of the ModuleList using torch.take
permuted_transforms = torch.take(self.transforms, order)
</code></pre>
but torch.take only works on torch.Tensors and not ModuleLists</li>
<li>
<pre class="lang-py prettyprint-override"><code># Permute/Change ordering of the ModuleList using map
permuted_transforms = map(self.transforms.__getitem__, order)
</code></pre>
</li>
<li>
<pre class="lang-py prettyprint-override"><code># Permute/Change ordering of the ModuleList using sorted
permuted_transforms = sorted(self.transforms, key=order.__getitem__)
</code></pre>
</li>
<li>
<pre class="lang-py prettyprint-override"><code># Have 2 for loops work but is extremely ineffecient
for o in order:
for i,t in enumerate(transforms):
if i==o: apply(t)
</code></pre>
</li>
</ul>
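<p>For the plain-Python shape of the problem, the double loop at the end can be collapsed into a single enumeration pass followed by a lookup. This is a sketch with stand-in callables instead of modules; whether this exact pattern compiles under TorchScript is a separate question:</p>

```python
transforms = [str.upper, str.lower]   # stand-ins for the ModuleList entries
order = [1, 0]

# One pass to record each entry by its position, then replay in `order`;
# no direct indexing into the original container is needed.
by_index = {}
for i, t in enumerate(transforms):
    by_index[i] = t
permuted = [by_index[o] for o in order]
```

<p>Compared with the nested loops, this visits each entry once instead of once per requested index.</p>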
| <python><pytorch><jit><torchvision> | 2023-08-02 09:39:39 | 1 | 496 | Zeeshan Khan Suri |
76,818,573 | 10,580,574 | APS in sklearn (invalid value encountered in true_divide) | <p>When I run the following code:</p>
<pre><code>import numpy as np
from sklearn.metrics import accuracy_score, average_precision_score
y_true=np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
y_scores=np.array([0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0])
average_precision_score(y_true, y_scores)
# ERROR
/opt/conda/lib/python3.8/site-packages/sklearn/metrics/_ranking.py:817: RuntimeWarning: invalid value encountered in true_divide
recall = tps / tps[-1]
</code></pre>
<p>if I change it to below:</p>
<pre><code>average_precision_score(y_scores,y_true)
#output: 0.16666666666666666
</code></pre>
<p>I am not able to understand what is going wrong here.
Thanks in advance.</p>
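<p>The warning comes from <code>y_true</code> containing no positive labels, so average precision is undefined: internally the recall denominator is the total number of true positives, which is zero here. A NumPy sketch of that denominator (mirroring the <code>tps / tps[-1]</code> line the traceback points at in <code>_ranking.py</code>):</p>

```python
import numpy as np

y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
y_scores = np.array([0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0])

# Cumulative true positives when samples are ranked by score;
# recall = tps / tps[-1] divides by zero because tps[-1] == 0.
order = np.argsort(-y_scores)
tps = np.cumsum(y_true[order])
```

<p>The swapped call <code>average_precision_score(y_scores, y_true)</code> only "works" because it silently treats the scores as labels and vice versa; the 0.1666... it returns is not a meaningful score. The argument order must stay <code>(y_true, y_scores)</code>, with at least one positive in <code>y_true</code>.</p>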
| <python><scikit-learn> | 2023-08-02 09:37:09 | 1 | 1,121 | Coder |
76,818,567 | 1,323,992 | What's the difference between alias and label in SQLAlchemy? | <p>Using label:</p>
<pre><code> func.generate_series(
start_date,
end_date,
interval,
).label('generated_interval_start')
</code></pre>
<p>Using alias:</p>
<pre><code> func.generate_series(
start_date,
end_date,
interval,
).alias('generated_interval_start') # works as well
</code></pre>
<p>I don't see any difference from a practical point of view</p>
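<p>There is a real difference, even when both happen to compile: <code>label()</code> names a column expression in the SELECT list, while <code>alias()</code>/<code>subquery()</code> names a whole selectable in the FROM clause. A small sketch using a dependency-free expression (<code>count(*)</code> instead of <code>generate_series</code>, so it compiles without a table or database):</p>

```python
from sqlalchemy import func, select

# label(): the expression becomes "count(*) AS n" in the SELECT list
labeled = select(func.count().label("n"))

# subquery()/alias(): the whole SELECT becomes a named FROM-clause element,
# e.g. "SELECT t.n FROM (SELECT count(*) AS n) AS t"
sub = select(func.count().label("n")).subquery("t")
from_aliased = select(sub.c.n)
```

<p>For <code>generate_series</code> specifically, <code>alias()</code> turns the function call into something you can SELECT FROM, which is why both forms "work" but mean different things in the generated SQL.</p>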
| <python><postgresql><sqlalchemy> | 2023-08-02 09:36:36 | 0 | 846 | yevt |
76,818,485 | 5,722,359 | How to create a python package in PyCharm using __init__.py files? | <p>I created a simple project with a package using Python and Tkinter in PyCharm. The project file structure is:</p>
<pre><code>project
|_ package
|_ __init__.py
|_ main.py
|_ widgets (a dir)
|_ __init__.py
|_ widget_a.py
|_ widget_b.py
</code></pre>
<p>File <code>project/package/__init__.py</code> contains the below to minimize the number of times <code>tkinter</code> and <code>ttk</code> are imported:</p>
<pre><code>import tkinter as tk
import tkinter.ttk as ttk
</code></pre>
<p>File <code>project/package/main.py</code> contains a <code>Tk</code> root window with <code>WidgetA</code> (i.e. a <code>ttk.Frame</code>) and <code>WidgetB</code> (i.e. a <code>ttk.Button</code>). See below:</p>
<pre><code>from package import *
from package.widgets import *
class App(WidgetA):
def __init__(self, master, text):
super().__init__(master)
self.button = WidgetB(self, text=text)
self.button.grid(row=0, column=0)
if __name__ == "__main__":
root = tk.Tk()
app = App(root, "MyButton")
app.grid(row=0, column=0)
root.mainloop()
</code></pre>
<p>File <code>project/package/widgets/__init__.py</code> contains:</p>
<pre><code>from .. import *
from .widget_a import *
from .widget_b import *
</code></pre>
<p>File <code>project/package/widgets/widget_a.py</code> contains class <code>WidgetA</code> which itself is a <code>ttk.Frame</code>:</p>
<pre><code>from . import *
__all__ = ["WidgetA"]
class WidgetA(ttk.Frame):
def __init__(self, master):
super().__init__(master)
self.master = master
</code></pre>
<p>File <code>project/package/widgets/widget_b.py</code> contains class <code>WidgetB</code> which itself is a <code>ttk.Button</code>:</p>
<pre><code>from . import *
__all__ = ["WidgetB"]
class WidgetB(ttk.Button):
def __init__(self, master, text):
super().__init__(master, text=text)
</code></pre>
<p>The above codes work as PyCharm creates a GUI that looks like this:</p>
<p><a href="https://i.sstatic.net/dYeAL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dYeAL.png" alt="GUI" /></a></p>
<p>However, these same scripts don't seem to work in PyCharm when I make a new project where the project and package folders are made at the same level, i.e. its file structure becomes</p>
<pre><code>package
|_ __init__.py
|_ main.py
|_ widgets (a dir)
|_ __init__.py
|_ widget_a.py
|_ widget_b.py
</code></pre>
<p>Error msg:</p>
<pre><code>File "~/package/main.py", line 1, in <module>
from package import *
ModuleNotFoundError: No module named 'package'
</code></pre>
<p>When the <code>package</code> syntax in line 1 is replaced with <code>.</code>, the following error message is issued:</p>
<pre><code>Traceback (most recent call last):
File "~/package/main.py", line 1, in <module>
from . import *
ImportError: attempted relative import with no known parent package
</code></pre>
<p>Can you help me understand the above-mentioned issues?</p>
| <python><python-3.x><tkinter><import><pycharm> | 2023-08-02 09:26:07 | 0 | 8,499 | Sun Bear |
76,818,316 | 19,694,624 | Selenium webdriver doesn't work with metamask | <p>Here is my issue:
I download the Metamask extension and add it to Chrome. Then I start the Selenium webdriver and I'm redirected to the extension page, but when I try to click "I agree to Metamask terms", it simply does nothing. The webdriver doesn't even see the element. What do I do? (Don't mind the Brave browser below; Chromedriver shows the same thing)</p>
<p><a href="https://i.sstatic.net/WGLGl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WGLGl.png" alt="Starting extension page" /></a></p>
<p>Setup: Python 3.10.6, Chrome and Chromedriver 115, Linux Mint, Metamask extension 10.34.2</p>
<p>Code:</p>
<pre><code>import time
import os
from selenium.webdriver.common.by import By
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
import requests
import shutil
if __name__ == '__main__':
# Downloading Metamask extension
file_path = os.getcwd()
url = "https://github.com/MetaMask/metamask-extension/releases/download/v10.34.2/metamask-chrome-10.34.2.zip"
local_filename = file_path + '/' + url.split('/')[-1]
with requests.get(url, stream=True) as r:
with open(local_filename, 'wb') as f:
shutil.copyfileobj(r.raw, f)
# Chrome options configuration
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--ignore-certificate-errors-spki-list')
chrome_options.add_argument('--ignore-certificate-errors')
chrome_options.add_argument('--ignore-ssl-errors')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument("--disable-blink-features=BlockCredentialedSubresources")
chrome_options.add_argument("--disable-gpu")
chrome_options.add_experimental_option("excludeSwitches", ["enable-automation"])
chrome_options.add_argument('--disable-blink-features=AutomationControlled')
chrome_options.add_argument("--disable-setuid-sandbox")
chrome_options.add_argument("--disable-infobars")
chrome_options.add_argument('--disable-software-rasterizer')
# Adding extension to Chrome
chrome_options.add_extension("metamask-chrome-10.34.2.zip")
# Location to the Chrome binary (for Linux)
chrome_options.binary_location = "/usr/bin/google-chrome-stable"
s = Service(
executable_path='MY_FULL_EXECUTABLE_PATH'
)
driver = webdriver.Chrome(
service=s,
options=chrome_options
)
# Waiting 8 seconds for a page to download fully
time.sleep(8)
# The "I agree to Metamask terms" checkbox I want to click
metamask_checkbox = driver.find_element(By.CSS_SELECTOR, '#onboarding__terms-checkbox')
metamask_checkbox.click()
time.sleep(5)
driver.quit()
</code></pre>
| <python><selenium-webdriver><selenium-chromedriver><metamask> | 2023-08-02 09:08:56 | 1 | 303 | syrok |
76,818,210 | 2,411,320 | Warm-start fitting example notebook of Prophet crashes | <p>I am using the exact same code as cell #5 in this <a href="https://github.com/facebook/prophet/blob/main/notebooks/additional_topics.ipynb" rel="nofollow noreferrer">example</a>, found from this Github <a href="https://github.com/facebook/prophet/issues/1565" rel="nofollow noreferrer">issue</a>:</p>
<pre><code>import numpy as np
import pandas as pd
from prophet import Prophet
def warm_start_params(m):
"""
Retrieve parameters from a trained model in the format used to initialize a new Stan model.
Note that the new Stan model must have these same settings:
n_changepoints, seasonality features, mcmc sampling
for the retrieved parameters to be valid for the new model.
Parameters
----------
m: A trained model of the Prophet class.
Returns
-------
A Dictionary containing retrieved parameters of m.
"""
res = {}
for pname in ['k', 'm', 'sigma_obs']:
if m.mcmc_samples == 0:
res[pname] = m.params[pname][0][0]
else:
res[pname] = np.mean(m.params[pname])
for pname in ['delta', 'beta']:
if m.mcmc_samples == 0:
res[pname] = m.params[pname][0]
else:
res[pname] = np.mean(m.params[pname], axis=0)
return res
df = pd.read_csv('example_wp_log_peyton_manning.csv')
df1 = df.loc[df['ds'] < '2016-01-19', :] # All data except the last day
m1 = Prophet().fit(df1) # A model fit to all data except the last day
%timeit m2 = Prophet().fit(df) # Adding the last day, fitting from scratch
%timeit m2 = Prophet().fit(df, init=warm_start_params(m1)) # Adding the last day, warm-starting from m1
</code></pre>
<p>But it crashes on warm-start, output:</p>
<pre><code>11:40:57 - cmdstanpy - INFO - Chain [1] start processing
11:40:58 - cmdstanpy - INFO - Chain [1] done processing
11:40:58 - cmdstanpy - INFO - Chain [1] start processing
11:40:59 - cmdstanpy - INFO - Chain [1] done processing
11:40:59 - cmdstanpy - INFO - Chain [1] start processing
11:41:00 - cmdstanpy - INFO - Chain [1] done processing
11:41:00 - cmdstanpy - INFO - Chain [1] start processing
11:41:01 - cmdstanpy - INFO - Chain [1] done processing
11:41:01 - cmdstanpy - INFO - Chain [1] start processing
11:41:02 - cmdstanpy - INFO - Chain [1] done processing
11:41:02 - cmdstanpy - INFO - Chain [1] start processing
11:41:03 - cmdstanpy - INFO - Chain [1] done processing
11:41:03 - cmdstanpy - INFO - Chain [1] start processing
11:41:04 - cmdstanpy - INFO - Chain [1] done processing
11:41:04 - cmdstanpy - INFO - Chain [1] start processing
11:41:05 - cmdstanpy - INFO - Chain [1] done processing
11:41:05 - cmdstanpy - INFO - Chain [1] start processing
11:41:06 - cmdstanpy - INFO - Chain [1] done processing
1.09 s ± 85.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[11], line 36
32 m1 = Prophet().fit(df1) # A model fit to all data except the last day
35 get_ipython().run_line_magic('timeit', 'm2 = Prophet().fit(df) # Adding the last day, fitting from scratch')
---> 36 get_ipython().run_line_magic('timeit', 'm2 = Prophet().fit(df, init=warm_start_params(m1)) # Adding the last day, warm-starting from m1')
File /root/miniconda3/lib/python3.9/site-packages/IPython/core/interactiveshell.py:2369, in InteractiveShell.run_line_magic(self, magic_name, line, _stack_depth)
2367 kwargs['local_ns'] = self.get_local_scope(stack_depth)
2368 with self.builtin_trap:
-> 2369 result = fn(*args, **kwargs)
2371 # The code below prevents the output from being displayed
2372 # when using magics with decodator @output_can_be_silenced
2373 # when the last Python token in the expression is a ';'.
2374 if getattr(fn, magic.MAGIC_OUTPUT_CAN_BE_SILENCED, False):
File /root/miniconda3/lib/python3.9/site-packages/IPython/core/magics/execution.py:1163, in ExecutionMagics.timeit(self, line, cell, local_ns)
1161 for index in range(0, 10):
1162 number = 10 ** index
-> 1163 time_number = timer.timeit(number)
1164 if time_number >= 0.2:
1165 break
File /root/miniconda3/lib/python3.9/site-packages/IPython/core/magics/execution.py:157, in Timer.timeit(self, number)
155 gc.disable()
156 try:
--> 157 timing = self.inner(it, self.timer)
158 finally:
159 if gcold:
File <magic-timeit>:1, in inner(_it, _timer)
File /root/miniconda3/lib/python3.9/site-packages/prophet/forecaster.py:1174, in Prophet.fit(self, df, **kwargs)
1172 self.params = self.stan_backend.sampling(stan_init, dat, self.mcmc_samples, **kwargs)
1173 else:
-> 1174 self.params = self.stan_backend.fit(stan_init, dat, **kwargs)
1176 self.stan_fit = self.stan_backend.stan_fit
1177 # If no changepoints were requested, replace delta with 0s
File /root/miniconda3/lib/python3.9/site-packages/prophet/models.py:85, in CmdStanPyBackend.fit(self, stan_init, stan_data, **kwargs)
82 (stan_init, stan_data) = self.prepare_data(stan_init, stan_data)
84 if 'inits' not in kwargs and 'init' in kwargs:
---> 85 kwargs['inits'] = self.prepare_data(kwargs['init'], stan_data)[0]
87 args = dict(
88 data=stan_data,
89 inits=stan_init,
90 algorithm='Newton' if stan_data['T'] < 100 else 'LBFGS',
91 iter=int(1e4),
92 )
93 args.update(kwargs)
File /root/miniconda3/lib/python3.9/site-packages/prophet/models.py:154, in CmdStanPyBackend.prepare_data(init, data)
146 @staticmethod
147 def prepare_data(init, data) -> Tuple[dict, dict]:
148 cmdstanpy_data = {
149 'T': data['T'],
150 'S': data['S'],
151 'K': data['K'],
152 'tau': data['tau'],
153 'trend_indicator': data['trend_indicator'],
--> 154 'y': data['y'].tolist(),
155 't': data['t'].tolist(),
156 'cap': data['cap'].tolist(),
157 't_change': data['t_change'].tolist(),
158 's_a': data['s_a'].tolist(),
159 's_m': data['s_m'].tolist(),
160 'X': data['X'].to_numpy().tolist(),
161 'sigmas': data['sigmas']
162 }
164 cmdstanpy_init = {
165 'k': init['k'],
166 'm': init['m'],
(...)
169 'sigma_obs': init['sigma_obs']
170 }
171 return (cmdstanpy_init, cmdstanpy_data)
AttributeError: 'list' object has no attribute 'tolist'
</code></pre>
<p>My Prophet version is 1.1.1 (which I can't upgrade now in my project due to this GitHub <a href="https://github.com/facebook/prophet/issues/2354" rel="nofollow noreferrer">issue</a>).</p>
| <python><machine-learning><time-series><facebook-prophet> | 2023-08-02 08:58:28 | 0 | 73,655 | gsamaras |
76,818,119 | 4,913,660 | Issue with Numpy brodcasting: implement "two-steps" broadcasting | <p>I would like to compute a Fourier series like</p>
<p><a href="https://i.sstatic.net/9ORJp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9ORJp.png" alt="Fourier series" /></a></p>
<p>(sorry had to copy a snapshot seems LaTex is not available on Stack Overflow), over a <code>np.meshgrid</code>.</p>
<p>Let us say I define some global variables</p>
<pre><code>import numpy as np

TERMS = 5
PERIOD = 1
A_0 = 1
c_nm = np.random.rand(TERMS,TERMS)
n_range = np.arange(0,TERMS, dtype = 'int')
m_range = np.arange(0,TERMS, dtype = 'int')
nn, mm = np.meshgrid(n_range, m_range)
</code></pre>
<p>and now, for <strong>fixed</strong> <code>x </code> and <code>y </code>, I could compute the value of the series in one line</p>
<pre><code>x = 0
y = 0
A_0 + (c_nm*np.sin(nn*x/PERIOD)*np.sin(mm*y/PERIOD)).sum()
</code></pre>
<p>the arrays <code>c_nm</code>, <code>nn</code> and <code>mm</code> have all shape <code>(TERMS, TERMS)</code> and I think it works by "vectorising" over said matrices, before I take the sum.</p>
<p>But now, how to wrap this in a function that could vectorise a second time, over a <code>x,y </code> grid?</p>
<p>If I try</p>
<pre><code>x = np.linspace(0,1,10)
y = np.linspace(0,1,10)
xx, yy = np.meshgrid(x,y)
def fourier_series(x,y):
    return A_0 + (c_nm*np.sin(nn*x/PERIOD)*np.sin(mm*y/PERIOD)).sum()
</code></pre>
<p>and then try</p>
<pre><code>fourier_series(xx,yy)
</code></pre>
<p>attempting to compute over the grid at once, I get the error (understandably)</p>
<pre><code>operands could not be broadcast together with shapes (5,5) (10,10)
</code></pre>
<p>Now I think I see what is going on, but this is of course not what I am looking for.
The computation over the arrays <code>c_nm</code>, <code>nn</code> and <code>mm</code> should somehow be done first, as an inner step, and only then should <code>fourier_series(xx,yy)</code> operate on the <code>x,y</code> grid.</p>
<p>Of course, I would also like not to have global variables such as <code>TERMS, nn, mm</code> lying around and I would like <code>TERMS</code> to be an input to the function and <code> nn, mm</code> built internally, but I am stuck even before this.</p>
<p>Any hints please? Thanks</p>
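<p>One way to make the two broadcasts coexist (a sketch, assuming the series is <code>A_0 + sum over n, m of c_nm * sin(n*x/P) * sin(m*y/P)</code>, matching the snapshot) is to give the coefficient grid trailing axes of size 1 so it broadcasts against the spatial grid, then sum out the two series axes:</p>

```python
import numpy as np

TERMS, PERIOD, A_0 = 5, 1.0, 1.0
rng = np.random.default_rng(0)
c_nm = rng.random((TERMS, TERMS))
n = np.arange(TERMS)
m = np.arange(TERMS)

x = np.linspace(0, 1, 10)
y = np.linspace(0, 1, 10)
xx, yy = np.meshgrid(x, y)

def fourier_series(xx, yy):
    # Shapes: (TERMS,1,1,1) * (10,10) broadcasts to (TERMS,1,10,10), etc.,
    # so the full term array is (TERMS, TERMS, 10, 10); summing axes 0 and 1
    # leaves the (10, 10) grid of series values.
    nn = n[:, None, None, None]
    mm = m[None, :, None, None]
    terms = c_nm[:, :, None, None] * np.sin(nn * xx / PERIOD) * np.sin(mm * yy / PERIOD)
    return A_0 + terms.sum(axis=(0, 1))

Z = fourier_series(xx, yy)
```

<p>This avoids the global <code>nn, mm</code> meshgrid entirely: the inner sum over the coefficient axes and the outer evaluation over the spatial grid happen in a single broadcasted expression.</p>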
| <python><numpy><array-broadcasting> | 2023-08-02 08:49:45 | 2 | 414 | user37292 |
76,818,075 | 10,232,932 | Remove NaN and Inf values for series with belonging series | <p>I have prediction and actual values, which look like:</p>
<pre><code>409 -9938.1580
410 -9938.1580
411 -9938.1580
412 -9938.1580
413 -9938.1580
414 -9938.1580
415 NaN
416 -9703.9345
Name: actuals, dtype: float64
<class 'pandas.core.series.Series'>
409 NaN
410 -9938.1580
411 -9938.1580
412 -9944.1580
413 -9938.1580
414 -9000.1580
415 -1000.0000
416 -9903.9345
Name: predictions, dtype: float64
<class 'pandas.core.series.Series'>
</code></pre>
<p>My problem is that these values can contain <code>NaN</code> or <code>Inf</code> numbers, which leads to an error in a lot of <code>sklearn.metrics</code> functions, for example in the method:</p>
<pre><code>import sklearn.metrics
mean_squared_error(actuals, predictions)
</code></pre>
<p>it leads to the error:</p>
<blockquote>
<p>ValueError: Input contains NaN.</p>
</blockquote>
<p>I know this error occurs because I have <code>NaN</code> or <code>Inf</code> values inside. How can I automatically remove the <code>NaN</code> and <code>Inf</code> numbers from both series, together with the corresponding entry of the other series?</p>
<p>So in my case the series would look like:</p>
<pre><code>
410 -9938.1580
411 -9938.1580
412 -9938.1580
413 -9938.1580
414 -9938.1580
416 -9703.9345
Name: actuals, dtype: float64
<class 'pandas.core.series.Series'>
410 -9938.1580
411 -9938.1580
412 -9944.1580
413 -9938.1580
414 -9000.1580
416 -9903.9345
Name: predictions, dtype: float64
<class 'pandas.core.series.Series'>
</code></pre>
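<p>For reference, the kind of filtering I mean, sketched with a shared finite-value mask on made-up data:</p>

```python
import numpy as np
import pandas as pd

actuals = pd.Series([-9938.158, np.nan, -9703.9345, np.inf], name="actuals")
predictions = pd.Series([np.nan, -9938.158, -9903.9345, -1000.0], name="predictions")

# Keep only the positions where BOTH series are finite (drops NaN and +/-Inf)
mask = np.isfinite(actuals) & np.isfinite(predictions)
actuals_clean = actuals[mask]
predictions_clean = predictions[mask]
```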
| <python><pandas> | 2023-08-02 08:44:12 | 1 | 6,338 | PV8 |
76,818,060 | 13,891,321 | Python Dataframe subtract columns in a regular pattern | <p>I have a DataFrame where, for each row, I would like to subtract the value in the first column from the value in every 'even' column after that, and the value in the second column from every 'odd' column. There will be a lot of columns.</p>
<p>I have the following which subtracts the first column from every other column, but I don't know how to limit it to 'even' or 'odd' columns.</p>
<p>In the example below, I'd like to subtract the value in the 'w' column from columns 'a' & 'c' (e, g, etc), and subtract the value in 'z' from 'b' & 'd' (f, h etc).</p>
<pre><code># import pandas library
import pandas as pd

# function that returns x - y
def addData(x, y):
    return x - y

# list of tuples
matrix = [(1, 2, 3, 4),
          (5, 6, 7, 8),
          (9, 10, 11, 12),
          (13, 14, 15, 16)
          ]
matrix2 = [(1, 2),
           (5, 6),
           (9, 10),
           (13, 14)
           ]

# Create a Dataframe object
df = pd.DataFrame(matrix, columns=list('abcd'))
df2 = pd.DataFrame(matrix2, columns=list('wz'))

# Applying function to each column
new_df = df.apply(addData, args=[df2['w']])

# Output
print(new_df)
</code></pre>
<p>Desired output:</p>
<pre><code>0 0 2 2
0 0 2 2
0 0 2 2
0 0 2 2
</code></pre>
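<p>A sketch of the positional approach I have been considering, using <code>iloc</code> step slicing together with <code>sub(..., axis=0)</code> on the same toy data:</p>

```python
import pandas as pd

df = pd.DataFrame([(1, 2, 3, 4), (5, 6, 7, 8), (9, 10, 11, 12), (13, 14, 15, 16)],
                  columns=list('abcd'))
df2 = pd.DataFrame([(1, 2), (5, 6), (9, 10), (13, 14)], columns=list('wz'))

new_df = df.copy()
# Columns 0, 2, 4, ... minus 'w'; columns 1, 3, 5, ... minus 'z'
new_df.iloc[:, 0::2] = df.iloc[:, 0::2].sub(df2['w'], axis=0)
new_df.iloc[:, 1::2] = df.iloc[:, 1::2].sub(df2['z'], axis=0)
```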
| <python><pandas><dataframe><lambda><apply> | 2023-08-02 08:42:05 | 1 | 303 | WillH |
76,818,020 | 15,320,579 | Select specific range of elements from a python dictionary based on condition | <p>I have the following dictionary:</p>
<pre><code>ip_dict = {
    "doc_1" : {
        "img_1" : ("FP", "some long text"),
        "img_2" : ("LP", "another long text"),
        "img_3" : ("Others", "long text"),
        "img_4" : ("Others", "some loong text"),
        "img_5" : ("FP", "one more text"),
        "img_6" : ("FP", "another one"),
        "img_7" : ("LP", "ANOTHER ONE"),
        "img_8" : ("Others", "some text"),
        "img_9" : ("Others", "some moretext"),
        "img_10" : ("FP", "more text"),
        "img_11" : ("Others", "whatever"),
        "img_12" : ("Others", "more whatever"),
        "img_13" : ("LP", "SoMe TeXt"),
        "img_14" : ("Others", "some moretext"),
        "img_15" : ("FP", "whatever"),
        "img_16" : ("Others", "whatever"),
        "img_17" : ("LP", "whateverrr")
    },
    "doc_2" : {
        "img_1" : ("FP", "text"),
        "img_2" : ("FP", "more text"),
        "img_3" : ("LP", "more more text"),
        "img_4" : ("Others", "some more"),
        "img_5" : ("Others", "text text"),
        "img_6" : ("FP", "more more text"),
        "img_7" : ("Others", "lot of text"),
        "img_8" : ("LP", "still more text")
    }
}
</code></pre>
<p>Here <code>FP</code> represents the first page and <code>LP</code> the last page. For all the <code>docs</code> I only want to extract the <code>FP</code> and <code>LP</code>. For the <code>Others</code>, if they lie between <code>FP</code> and <code>LP</code> only then extract them, as they represent the pages between <code>FP</code> and <code>LP</code>. If they lie outside <code>FP</code> and <code>LP</code> then ignore them. Also for <code>FP</code> which are not followed by a <code>LP</code>, treat them as a single page and extract them. So my output dictionary would look like:</p>
<pre><code>op_dict = {
    "doc_1" : [
        {
            "img_1" : ("FP", "some long text"),
            "img_2" : ("LP", "another long text")
        },
        {
            "img_5" : ("FP", "one more text")
        },
        {
            "img_6" : ("FP", "another one"),
            "img_7" : ("LP", "ANOTHER ONE")
        },
        {
            "img_10" : ("FP", "more text"),
            "img_11" : ("Others", "whatever"),
            "img_12" : ("Others", "more whatever"),
            "img_13" : ("LP", "SoMe TeXt")
        },
        {
            "img_15" : ("FP", "whatever"),
            "img_16" : ("Others", "whatever"),
            "img_17" : ("LP", "whateverrr")
        }
    ],
    "doc_2" : [
        {
            "img_1" : ("FP", "text")
        },
        {
            "img_2" : ("FP", "more text"),
            "img_3" : ("LP", "more more text")
        },
        {
            "img_6" : ("FP", "more more text"),
            "img_7" : ("Others", "lot of text"),
            "img_8" : ("LP", "still more text")
        }
    ]
}
</code></pre>
<p>As you can see, all the <code>FP</code> and <code>LP</code> have been extracted, but also those <code>Others</code> which are in between <code>FP</code> and <code>LP</code> have also been extracted and stored in a dictionary. Also those <code>FP</code> which are not followed by a <code>LP</code> have also been extracted.</p>
<p><strong>PS:</strong> another example, where an <code>LP</code> appears before any <code>FP</code>:</p>
<pre><code>ip_dict = {
    "doc_1" : {
        "img_1" : ("LP", "some long text"),
        "img_2" : ("Others", "another long text"),
        "img_3" : ("Others", "long text"),
        "img_4" : ("FP", "long text"),
        "img_5" : ("Others", "long text"),
        "img_6" : ("LP", "long text")
    }
}

op_dict = {
    "doc_1" : [
        {
            "img_1" : ("LP", "some long text")
        },
        {
            "img_4" : ("FP", "long text"),
            "img_5" : ("Others", "long text"),
            "img_6" : ("LP", "long text")
        }
    ]
}
</code></pre>
<p>Any help is appreciated!</p>
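<p>To make the rules concrete, here is a sketch of the grouping logic as I understand it (open a group on <code>FP</code>, close it on <code>LP</code>, keep <code>Others</code> only inside an open group, keep an unclosed <code>FP</code> as a single page); <code>split_doc</code> is a made-up helper name, shown here on <code>doc_2</code> only:</p>

```python
def split_doc(pages):
    # pages: {"img_1": ("FP", text), ...} in document order (dicts preserve it)
    groups, current = [], None
    for key, value in pages.items():
        tag = value[0]
        if tag == "FP":
            if current is not None:
                groups.append(dict(current[:1]))  # previous FP never met an LP
            current = [(key, value)]
        elif tag == "LP":
            if current is not None:
                current.append((key, value))
                groups.append(dict(current))
            else:
                groups.append({key: value})       # LP with no opening FP
            current = None
        else:  # "Others" count only between an FP and its LP
            if current is not None:
                current.append((key, value))
    if current is not None:
        groups.append(dict(current[:1]))          # trailing unmatched FP
    return groups

doc_2 = {
    "img_1": ("FP", "text"),
    "img_2": ("FP", "more text"),
    "img_3": ("LP", "more more text"),
    "img_4": ("Others", "some more"),
    "img_5": ("Others", "text text"),
    "img_6": ("FP", "more more text"),
    "img_7": ("Others", "lot of text"),
    "img_8": ("LP", "still more text"),
}
groups = split_doc(doc_2)
```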
| <python><python-3.x><dictionary><data-munging> | 2023-08-02 08:35:46 | 4 | 787 | spectre |
76,817,789 | 6,124,796 | Avoid python module statements with pytest mocks | <p>I have some settings in a file, which read from my environment:</p>
<h1>File1.py</h1>
<pre><code>set1 = os.getenv(...)
SETTINGS = {
    key1: set1,
    ...
}
</code></pre>
<p>In another File, I am importing them:</p>
<h1>File2.py</h1>
<pre><code>from File1 import SETTINGS
class MyClass:
    ...
</code></pre>
<p>In my tests file, I am mocking the MyClass methods, however, it is trying to access env vars, due to previous imports.</p>
<p>Is there any way I can mock the <code>File2</code> imports?</p>
<p>This is my tests file:</p>
<h1>Tests.py</h1>
<pre><code>from unittest.mock import patch
import pytest
from File2 import MyClass

@patch.object(
    MyClass,
    "my_method",
    return_value=None,
)
def test_my_test(mock1):
    my_class = MyClass()
    ....
</code></pre>
<p><strong>Summary</strong>: How do I avoid the os.getenv() for my tests?</p>
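<p>A sketch of the two workarounds I am aware of (<code>MY_VAR</code> and the values are made up): either guarantee the variable exists before <code>File1</code> is imported (e.g. at the top of <code>conftest.py</code>, which pytest loads first), or patch the already-imported <code>SETTINGS</code> mapping with <code>patch.dict</code>:</p>

```python
import os
from unittest.mock import patch

# conftest.py-style: the variable must exist BEFORE File1 is imported,
# because File1 runs os.getenv(...) at import time.
os.environ.setdefault("MY_VAR", "test-value")   # MY_VAR is hypothetical

# Stand-in for File1.SETTINGS, built the same way
SETTINGS = {"key1": os.getenv("MY_VAR")}

def test_my_test():
    # patch.dict restores the original contents when the block exits
    with patch.dict(SETTINGS, {"key1": "patched"}):
        assert SETTINGS["key1"] == "patched"

test_my_test()
```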
| <python><unit-testing><mocking><pytest> | 2023-08-02 08:02:55 | 1 | 5,216 | Mayday |
76,817,686 | 4,822,772 | Text comparison to ignore some newline characters in Python | <p>I have a Python script that compares two texts and highlights the differences between them. However, the comparison is being affected by newline characters, causing mismatches for texts with different newline representations. For instance, "arti\ncle" and "article" are being treated as different.</p>
<p>I'm currently using the <code>difflib</code></p>
<p>Here's a simplified version of my current code:</p>
<pre class="lang-py prettyprint-override"><code>import difflib

def compare_texts(old_text, new_text):
    old_lines = old_text.splitlines()
    new_lines = new_text.splitlines()

    d = difflib.Differ()
    diff = d.compare(old_lines, new_lines)

    added_lines = []
    deleted_lines = []
    for line in diff:
        if line.startswith('+ '):
            added_lines.append(line[2:])
        elif line.startswith('- '):
            deleted_lines.append(line[2:])
    return added_lines, deleted_lines

if __name__ == "__main__":
    old_text = "arti\ncle\nthis is some old text."
    new_text = "article\nthis is some new text."

    added_lines, deleted_lines = compare_texts(old_text, new_text)

    print("Added lines:")
    print('\n'.join(added_lines))
    print("\nDeleted lines:")
    print('\n'.join(deleted_lines))
</code></pre>
<p>Can someone suggest an effective way to compare texts that will handle newline characters appropriately, ensuring that "arti\ncle" and "article" are treated as the same during the comparison process?</p>
<p>EDIT1:
In fact, lots of "\n" are introduced by a PDF reading function.
The idea may be the following: if there is a "\n", we can try to delete it. If, after deleting it, we have a match, then we can consider that they are the same.</p>
<p>So "article" and "arti\ncle" are the same. "article" and "arti\nficial" are not.</p>
<p>I can't remove all "\n" because many of them are still useful.</p>
<p>EDIT2:
Knowing the origin of the bug, we may also try this approach: since some random "\n" have been added by the PDF reading function, we can try to delete the meaningless "\n" first.</p>
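<p>A sketch of the EDIT1 idea as I picture it (just a heuristic): a line break is deleted only when the word fragment ending one line plus the fragment starting the next spells a word that occurs in the other text:</p>

```python
def merge_spurious_newlines(text, vocabulary):
    # Join a line onto the previous one when tail + head forms a known word
    lines = text.split("\n")
    merged = [lines[0]]
    for line in lines[1:]:
        tail = merged[-1].split()[-1] if merged[-1].split() else ""
        head = line.split()[0] if line.split() else ""
        if tail and head and tail + head in vocabulary:
            merged[-1] += line          # drop this newline, keep the others
        else:
            merged.append(line)
    return "\n".join(merged)

new_text = "article\nthis is some new text."
old_text = "arti\ncle\nthis is some old text."
vocab = set(new_text.replace("\n", " ").split())
fixed = merge_spurious_newlines(old_text, vocab)
```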
| <python><nlp><difflib> | 2023-08-02 07:50:38 | 1 | 1,718 | John Smith |
76,817,162 | 562,769 | How can I specify that (at least) one of two packages needs to be installed? | <p>I'm the maintainer of <code>pypdf</code> and we support two cryptography packages:</p>
<ul>
<li>cryptography</li>
<li>pycryptodome</li>
</ul>
<p>To get the full functionality, the user needs to have one of them installed. It doesn't matter which one.</p>
<p>How do I specify this in the <code>pyproject.toml</code>?</p>
<p>I currently have:</p>
<pre><code>[project.optional-dependencies]
crypto = [
    "cryptography; python_version >= '3.7'",
    "PyCryptodome; python_version == '3.6'",
]
</code></pre>
<p>But when the user has Python >= 3.7 and PyCryptodome, it would be fine as well.</p>
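<p>For completeness, the runtime side of this is independent of the metadata and can be sketched as an import-name probe (the function name is made up; <code>Crypto</code> is pycryptodome's import name):</p>

```python
import importlib.util

def available_crypto_backend():
    # Prefer cryptography, fall back to pycryptodome, report neither
    if importlib.util.find_spec("cryptography") is not None:
        return "cryptography"
    if importlib.util.find_spec("Crypto") is not None:
        return "pycryptodome"
    return None
```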
| <python><dependency-management><python-packaging><pyproject.toml> | 2023-08-02 06:34:00 | 0 | 138,373 | Martin Thoma |
76,817,140 | 713,200 | How to use a variable during class initialization in Python? | <p>I have 3 Python scripts, say <code>script1.py</code>, <code>script2.py</code>, and <code>script3.py</code>.
I have imported the <code>script2</code> and <code>script3</code> classes in <code>script1</code>.</p>
<p>When I execute, I run only <code>script1</code> and it will call the functions in <code>script2</code> as needed.</p>
<p>but in <code>script2.py</code> I have a Class, inside class I have a variable <code>y_path</code> and this is set to a <code>string</code> concatenated with another string variable, something like shown below</p>
<pre><code>class LAUtils(BBCAPIInfra):
    y_path = "/foo/bar/abc/__dname__/pfile.yml".replace("__dname__", str(dname))
</code></pre>
<p>The value of <code>dname</code> is obtained using a method in <code>script1</code>.</p>
<p>here is a snippet from <code>script1</code> getting dname</p>
<pre><code>from foo.bar.script2 import LAUtils

class CommonSetup:
    def some_method(self, local_parameter):
        dname = call_a_method(local_parameter)
</code></pre>
<p>My question is how can I update the value of variable <code>dname</code> during class initialization in <code>script2</code>, for example, if the <code>dname</code> value is <code>MCSX</code> , I want the variable <code>y_path</code> to set it to <code>/foo/bar/abc/MCSX/pfile.yml</code> already during class initialization.</p>
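<p>What I have in mind would look something like this sketch (base class and template simplified), where <code>dname</code> is supplied when the instance is created rather than at class-definition time:</p>

```python
class LAUtils:
    Y_PATH_TEMPLATE = "/foo/bar/abc/{dname}/pfile.yml"

    def __init__(self, dname):
        # Resolved per instance, once dname is actually known
        self.y_path = self.Y_PATH_TEMPLATE.format(dname=dname)

utils = LAUtils("MCSX")
print(utils.y_path)  # /foo/bar/abc/MCSX/pfile.yml
```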
| <python><python-3.x><class><oop><python-module> | 2023-08-02 06:30:05 | 2 | 950 | mac |
76,817,090 | 2,095,858 | Tkinter Treeview call cannot find item | <p>I have some items in a treeview that contain backslashes. This is normally fine however it seems that tk.call forcefully removes these backslashes. Is there any way to prevent this?</p>
<pre class="lang-py prettyprint-override"><code>from tkinter import *
from tkinter.ttk import *
root = Tk()
tv = Treeview(root)
iid = "aa\\bb"
tv.insert("", 'end', iid=iid, text=iid)
tv.tk.call(tv, "tag", "add", "new_tag", iid)
tv.pack()
root.mainloop()
</code></pre>
<blockquote>
<p>_tkinter.TclError: Item ab not found</p>
</blockquote>
<p><strong>Python 3.11.1</strong></p>
| <python><tkinter><treeview><tcl><call> | 2023-08-02 06:20:19 | 1 | 421 | craigB |
76,817,079 | 1,976,597 | How to avoid type error when accessing Pandas df.index.reorder_levels() | <p>I have a Pandas dataframe with a multiindex, so this is valid code:</p>
<pre class="lang-py prettyprint-override"><code>df.index = df.index.reorder_levels(["B", "A"])
</code></pre>
<p>But in VS Code (with type checking turned on, set to "basic"), it shows as an error since by default the type of <code>df.index</code> is <code>Index</code>, not <code>MultiIndex</code>.</p>
<p><a href="https://i.sstatic.net/Q3G37.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Q3G37.png" alt="enter image description here" /></a></p>
<p>I can create a new type just for <code>MultiIndex</code> dataframes and use <code>cast</code>, but that seems a bit hacky.</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(
dict(
A=[1, 2, 3],
B=[4, 5, 6],
C=[4, 5, 6],
)
).set_index(["A", "B"])
class DataFrameMulti(pd.DataFrame):
index: pd.MultiIndex
df = typing.cast(DataFrameMulti, df)
df.index = df.index.reorder_levels(["B", "A"])
</code></pre>
<p>Is there a better way to either:</p>
<p>a) Type the dataframe correctly without having to cast</p>
<p>b) Avoid the type error without turning off other errors?</p>
<p>Edit: to clarify, the problem is not specific to <code>MultiIndex</code> or <code>reorder_levels</code>, it's about typing any dataframe where the index isn't <code>Index</code>. E.g. <code>DatetimeIndex</code>:
<a href="https://i.sstatic.net/L5jyu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/L5jyu.png" alt="enter image description here" /></a></p>
| <python><pandas><visual-studio-code><pylance><pyright> | 2023-08-02 06:17:52 | 1 | 4,914 | David Gilbertson |
76,817,054 | 14,912,118 | What are the different types of life_cycle_state in Databricks for a job cluster | <p>We are trying to get the cluster life_cycle_state using the API (api/2.0/jobs/runs/list) and we are able to get various values, as below:</p>
<pre><code>RUNNING
PENDING
TERMINATED
INTERNAL_ERROR
</code></pre>
<p>Are there any other values apart from the above? It would be a great help.</p>
| <python><databricks> | 2023-08-02 06:12:47 | 1 | 427 | Sharma |
76,816,980 | 513,494 | running unit test cases with sentence_transformers causes all-distilroberta-v1 not found | <p>We have a Python project with the following structure:</p>
<pre><code>dataanalysis
├── config
│ └── config.env
├── coverage.xml
├── Dockerfile
├── poetry.lock
├── pylibs
│ └── analyzer-1.0.0.tar.gz
├── pylint-report.txt
├── pyproject.toml
├── requirements.txt
├── result.xml
├── scripts
│ ├── __init__.py
│ ├── common.py
│ ├── analyze_data.py
│ ├── analyze_main.py
│ └── analyze_scheduler.py
└── tests
    ├── __init__.py
    ├── __pycache__
    │   ├── __init__.cpython-38.pyc
    │   └── conftest.cpython-38-pytest-7.4.0.pyc
    ├── conftest.py
    ├── test_analyze_data.py
    ├── test_env_vars.py
    └── test_analyze_main.py
</code></pre>
<p>We bundle this as a Docker image and are able to run the Python scripts. For this we install all required libraries using pip as follows:</p>
<pre><code>cp ./scripts /home/app_user/scripts
cp ./pylibs /home/app_user/pylibs
cp requirements.txt /home/app_user
pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org --no-cache-dir -r ./requirements.txt
pip install --no-cache-dir /home/app_user/pylibs/analyzer-1.0.0.tar.gz
</code></pre>
<p>We are trying to run the unit test cases; for this we use Poetry to create a virtual env and run the test cases in it.</p>
<pre><code>#pyproject.toml
[tool.poetry]
name = "dataanalysis"
version = "1.0.0"
....
[tool.poetry.dependencies]
python = "^3.8"
pycron = "3.0.0"
swifter = "1.3.4"
Levenshtein = "0.20.9"
numpy = "1.22.0"
pandas = "1.1.5"
scikit-learn = "0.24.1"
scipy = "1.5.3"
sklearn = "0.0"
mysql-connector-python = "8.0.32"
analyzer = {path = "pylibs/analyzer-1.0.0.tar.gz"}
textdistance = "^4.5.0"
sentence_transformers="^2.2.2" # added this explicitly as unit test fails specifying this is module not found. (which not case in production)
[tool.poetry.group.test.dependencies]
pytest = "^7.3"
pytest-coverage = "^0.0"
python-dotenv = "^1.0.0"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p>and run our test cases as <code>poetry run pytest</code></p>
<p>Our test cases fail with a message that the <code>sentence_transformers</code> module was not found, so we added the <code>sentence_transformers</code> dependency in pyproject.toml.
Now we get the following error:</p>
<pre><code>***ValueError: Path /Users/myuser/Library/Caches/pypoetry/virtualenvs/dataanalysis-uwOjF3tf-py3.8/lib/python3.8/site-packages/analyzer/files/all-distilroberta-v1 not found
../../Library/Caches/pypoetry/virtualenvs/dataanalysis-uwOjF3tf-py3.8/lib/python3.8/site-packages/sentence_transformers/SentenceTransformer.py:77: ValueError***
</code></pre>
<p>This all-distilroberta-v1 model is found in a virtual environment that was created manually, but not in the environment created by Poetry.</p>
<p>The <code>sentence_transformers</code> module is used inside a Python script under <code>analyzer-1.0.0.tar.gz</code>.</p>
<p>Can you please help to resolve this?</p>
| <python><pytest><python-poetry><sentence-transformers> | 2023-08-02 05:58:49 | 0 | 529 | Prasad |
76,816,856 | 1,655,942 | How to pass Optional[<type>] as a function parameter with a <type> = default? | <p>I'm trying to defer a default parameter of the <a href="https://docs.python-zeep.org/en/master/transport.html" rel="nofollow noreferrer">class</a> I'm using to its <code>__init__</code> (simulated here with a function for simplicity) by passing the None default:</p>
<pre><code>from typing import Optional
from dataclasses import dataclass

def f(timeout: int=1):
    pass

@dataclass
class C1():
    timeout: Optional[int]

    def __post_init__(self):
        f(self.timeout)
</code></pre>
<p>This results in</p>
<blockquote>
<p>Argument of type "int | None" cannot be assigned to parameter "arg" of
type "int" in function "f" Type "int | None" cannot be assigned to
type "int"
Type "None" cannot be assigned to type "int"</p>
<p>Argument of type "Unknown | None" cannot be assigned to parameter
"arg" of type "int" in function "f" Type "Unknown | None" cannot be
assigned to type "int"
Type "None" cannot be assigned to type "int"</p>
</blockquote>
<p>How do I hint or restructure this so that Pylance accepts my optional field to be passed down?</p>
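<p>The cleanest restructuring I have come up with is to forward the argument conditionally, so <code>f</code> keeps its own default and the checker only ever sees an <code>int</code> (the <code>result</code> attribute is just for illustration):</p>

```python
from dataclasses import dataclass
from typing import Optional

def f(timeout: int = 1) -> int:
    return timeout

@dataclass
class C1:
    timeout: Optional[int] = None

    def __post_init__(self) -> None:
        # None branch: f() falls back to its own default.
        # Else branch: self.timeout is narrowed to int.
        if self.timeout is None:
            self.result = f()
        else:
            self.result = f(self.timeout)
```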
| <python><python-typing><pyright> | 2023-08-02 05:25:14 | 1 | 427 | arielCo |
76,816,391 | 9,154,627 | Predicting new timeseries based on related timeseries? | <p>Let's say I have multiple timeseries, representing different features, all of length n, and I want to predict a new timeseries which represents another feature, without any past history for that feature. So for example, one sample might look like this, where the red and blue are "input series" and the yellow was the "output series", which clearly had some relation to both inputs.</p>
<p><a href="https://i.sstatic.net/T7i2Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/T7i2Q.png" alt="enter image description here" /></a></p>
<p>Now, I have hundreds of these examples, each of which is thousands of points long (each is a different length however), and which has about 8-10 input features. So one input might be 9 series (one for each feature), each of which is maybe 6000 long. For this input, I want to produce an output of the final feature which is also 6000 long. We'll call my input features a, b, c, d and my output feature y. What would be the best model to represent this type of problem?</p>
<p>I've thought of perhaps applying a simple feed-forward neural network, whose inputs are:</p>
<pre><code>a(t-10), a(t-9), ... a(t)
b(t-10), b(t-9), ... b(t)
..... etc for all other input features
</code></pre>
<p>and whose output would be a single value: the predicted y(t). An issue with this model is that it fails to consider past predicted values of y(t) (that is, <code>y(t-1), y(t-2), ...</code>), which a good model should consider. I could set up the model to take in these inputs and simply feed it past values of y in the training samples. However, when I actually want to produce new outputs on new inputs, what would I feed into these variables during the first 10 steps? I don't have any past info on the output series.</p>
<p>Another idea I had was to apply an LSTM, so that the model can consider further back in each series, with more recent values being more important - but again, LSTM uses past values of y to predict the current y, but my model must start without any past values of y.</p>
<p>Does anyone have any suggestions for what type of model would best suit this problem?</p>
<p>It's worth noting my actual features are much less directly related (correlation coefficients closer to .2 or .3 for each feature with the y). Additionally, past history of one input can influence the output quite a bit, meaning that including as much history as possible would probably be good. (For example, it might be the case that when input A has a sharp decrease early on, the output tends to increase in the later half of the series, this isn't actually true in my data but just an example). For reference, I'm creating this in keras, and I do have a pretty good understanding of basic models, I just don't know which one applies to this situation.</p>
| <python><tensorflow><keras><time-series><lstm> | 2023-08-02 03:12:26 | 1 | 623 | Theo |
76,816,310 | 9,768,260 | How to use __main__ as the setuptools entry point | <p>If the CLI file looks like the one below, how do I set it as the setuptools entry point?</p>
<pre><code>project_root_directory
├── setup.py
└── file1.py
file1.py

def detect():
    ...

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    ...
    opt = parser.parse_args()
</code></pre>
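<p>The usual restructuring I have seen sketched for this (the console-script name and the <code>--source</code> option are placeholders) is to move the <code>__main__</code> body into a function and point the entry point at that function:</p>

```python
# file1.py: move the CLI body into a callable that setuptools can reference
import argparse

def detect():
    pass  # stands in for the real implementation

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--source", default="0")  # hypothetical option
    opt = parser.parse_args(argv)
    detect()
    return opt

if __name__ == '__main__':
    main()

# setup.py would then declare:
# entry_points={"console_scripts": ["file1 = file1:main"]}
```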
| <python><setuptools> | 2023-08-02 02:46:42 | 1 | 7,108 | ccd |
76,816,118 | 2,929,924 | Why would python not retrieve telnet data from server that I know is there? | <p>I am trying to query the status from an audio device using the telnetlib in python.</p>
<p>When I send command strings to the device via python script, it responds appropriately (changes inputs, volume, etc).</p>
<p>I am attempting to query the device status by sending the appropriate string, and even though the device responds to the command, I can not seem to get python to show it to me.</p>
<p>(I know the device is responding because the software provided by the mfg has a network monitor window that shows 48 lines from the device upon executing my python script.)</p>
<p>My test program is as follows:</p>
<pre><code>from telnetlib import Telnet
import time

host = "xxx.xxx.xxx.xxx"
port = "####"
timeout = 5

# Status Request
command = [0x01,0x00,0x00,0x00,0x15,0xED,0xAE,0x00,0x00,0x00,0x00,0x00,0x20,0x00,0x00,0x00,0x00,0x01,0x19,0x00,0x01]

tn = Telnet(host, port, timeout)
with tn as session:
    for byt in command:
        byt = byt.to_bytes(1, 'big')
        session.write(byt)
        print(byt)  # for testing
    time.sleep(1)  # thought this might help - it didn't.
    try:
        while True:
            rsp = session.read_until(b'0xF0', timeout=0.1)
            if rsp == b'':  # if nothing
                print("No Data Found")
                break
            print(rsp)
    except EOFError:
        print('connection closed')
exit()
</code></pre>
<p>The above code prints: "No Data Found" while connected to the device.</p>
<p>I have tried other read options such as read_very_eager(), read_all(), etc, but the result is always the same - blank. I've tried closing the mfg's application, I just can't seem to get it.</p>
<p>Am I "reading" at the wrong time? Is the data already "gone" before I'm reading it?</p>
<p>Help would be much appreciated!</p>
<p>Thank you.</p>
<p>UPDATED MY SCRIPT TO USE SOCKETS... The messages are being received by the device (I am using the device software's network monitor to confirm) but there still appears to be no response that I can pull down with my script.</p>
<pre><code>import socket
import time

host = "xxx.xxx.xxx.xxx"
port = xxxx
timeout = 5

# Request Info Command
command = [0x01,0x00,0x00,0x00,0x15,0xED,0xAE,0x00,0x00,0x00,0x00,0x00,0x20,0x00,0x00,0x00,0x00,0x01,0x19,0x00,0x01]

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect((host, port))
    for byt in command:
        byt = byt.to_bytes(1, 'big')
        s.sendall(byt)
        print(byt)  # for testing
    response = b''
    try:
        while True:
            rsp = s.recv(4096)
            if not rsp:
                print("No Data Found")  # this never prints
                break
            response += rsp
            print(rsp)  # put this here just to see if I'd see anything.
    except socket.timeout:
        print("Timed Out")
    print(response.decode())
exit()
</code></pre>
<p>Also adding a screenshot of the "Network Trace" window from the manufacturer's software. All of the lines below the highlighted line appear upon executing my Python script requesting info from the device
<a href="https://i.sstatic.net/i00ko.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/i00ko.png" alt="enter image description here" /></a></p>
<p>I also put "response += rsp" into an if statement so it would break if no data was received.. it breaks.</p>
<pre><code>if rsp:
    response += rsp
else:
    break
</code></pre>
<p><strong>On another note,</strong> I copied a TCP message block from Wireshark and asked BARD if it could analyze it (just for fun) and it responded that "The data you provided is a TCP packet capture. It is a binary file that contains data that was exchanged between two computers over a network... The payload in this packet capture is a HTTP request. The request is for a file called /zones.xml from the server <a href="http://www.example.com" rel="nofollow noreferrer">www.example.com</a>. "</p>
<p>Assuming Bard is not totally making things up, I find it interesting that it "found" "zones.xml" - I have found no nothing in the documentation referring to any xml files, but it is a "zone mixer" and it would make sense that the information about each "zone" would be in a file called "zones.xml." I do not believe, however, that the device I'm trying to query would make reference to "www.example.com" so I think Bard may have hallucinated that. Either way, I still can't get Python to find any available bytes :-(</p>
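<p>One helper I am experimenting with, in case framing is the issue: send the whole command with a single <code>sendall</code> instead of 21 one-byte writes, then read an exact number of bytes back (the response size here is a guess at the device's framing):</p>

```python
import socket

def recv_exact(sock, n):
    # Read exactly n bytes from a stream socket, or raise if it closes early
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed after %d of %d bytes" % (len(buf), n))
        buf += chunk
    return buf

command = bytes([0x01, 0x00, 0x00, 0x00, 0x15, 0xED, 0xAE, 0x00, 0x00, 0x00, 0x00,
                 0x00, 0x20, 0x00, 0x00, 0x00, 0x00, 0x01, 0x19, 0x00, 0x01])
# s.sendall(command)            # one write instead of byte-by-byte
# header = recv_exact(s, 21)    # 21 is a guess at the response frame size
```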
| <python><r><telnetlib> | 2023-08-02 01:32:24 | 0 | 496 | Jay |
76,816,078 | 678,572 | How to add a colorbar to a figure that already has multiple axes objects | <p>I am trying to add a colorbar and cannot figure out how to add it anywhere. Ideally it would be a small ax object added to the right of the right-most ax but I can't figure out how to add ax objects to ax objects that I've already created with <code>make_axes_locatable</code>.</p>
<p>There are 2 modes, one that is multi-axes and one that is not. The one that isn't is straightforward as there are a lot of tutorials on StackOverflow but the multi-axes figure is the complicated bit.</p>
<p><strong>How can I add a colorbar to the multi-axes figure? Preferably on the right but internal on the main scatter plot could work too (top left).</strong></p>
<p>Here's my code:</p>
<pre class="lang-py prettyprint-override"><code># Plot compositional data
#!@check_packages(["matplotlib", "seaborn"])
def plot_compositions(
X:pd.DataFrame,
c:pd.Series="black", #"evenness"
s:pd.Series=28,
classes:pd.Series=None,
class_colors:dict=None,
edgecolor="white",
cbar=True,
cmap=plt.cm.gist_heat_r,
figsize=(13,8),
title=None,
style="seaborn-white",
ax=None,
show_xgrid=False,
show_ygrid=True,
show_density_1d=True,
show_density_2d=True,
show_legend=True,
xlabel=None,
ylabel=None,
legend_kws=dict(),
legend_title=None,
title_kws=dict(),
legend_title_kws=dict(),
axis_label_kws=dict(),
annot_kws=dict(),
line_kws=dict(),
hist_1d_kws=dict(),
# rug_kws=dict(),
kde_2d_kws=dict(),
cbar_kws=dict(),
panel_pad=0.1,
panel_size=0.618,
cax_panel_pad="5%",
cax_panel_size=0.618,
background_color="white",
sample_labels:dict=None,
logscale=True,
xmin=0,
ymin=0,
vmin=None,
vmax=None,
**scatter_kws,
):
"""
Plot compositions of total counts (x-axis) vs. number of detected components (y-axis)
"""
from collections.abc import Mapping
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
from matplotlib.colors import to_hex
from matplotlib.scale import LogScale
import seaborn as sns
# Defaults
_title_kws = {"fontsize":15, "fontweight":"bold"}
_title_kws.update(title_kws)
_legend_kws = {'fontsize': 12, 'frameon': True, 'facecolor': 'white', 'edgecolor': 'black'}#, 'loc': 'center left', 'bbox_to_anchor': (1, 0.5)}
_legend_kws.update(legend_kws)
_legend_title_kws = {"size":15, "weight":"bold"}
_legend_title_kws.update(legend_title_kws)
_axis_label_kws = {"fontsize":15}
_axis_label_kws.update(axis_label_kws)
_hist_1d_kws = {"alpha":0.0618}
_hist_1d_kws.update(hist_1d_kws)
# _kde_1d_kws = {"alpha":0.618} # Rug takes too long when theres a lot points
# _kde_1d_kws.update(kde_1d_kws)
# _rug_kws = {"height":0.5}
# _rug_kws.update(rug_kws)
_kde_2d_kws = {"shade":True, "alpha":0.618}
_kde_2d_kws.update(kde_2d_kws)
_line_kws = {"linewidth":1.618, "linestyle":":", "alpha":1.0}
_line_kws.update(line_kws)
_annot_kws = {}
_annot_kws.update(annot_kws)
_scatter_kws={"edgecolor":edgecolor, "linewidths":0.618}
_scatter_kws.update(scatter_kws)
# Data
X = X.fillna(0)
# check_compositional(X, acceptable_dimensions={2})
assert np.all(X == X.astype(int)), "X must be integer data and should not be closure transformed"
# Total number of counts
sample_to_totalcounts = X.sum(axis=1)
remove_samples = sample_to_totalcounts == 0
if np.any(remove_samples):
warnings.warn("Removing the following observations because depth = 0: {}".format(remove_samples.index[remove_samples]))
sample_to_totalcounts = sample_to_totalcounts[~remove_samples]
samples = sample_to_totalcounts.index
# Number of detected components
sample_to_ncomponents = (X > 0).sum(axis=1) #! sample_to_ncomponents = number_of_components(X.loc[samples], checks=False)
number_of_samples = sample_to_ncomponents.size
# Colors
if classes is not None:
assert class_colors is not None, "`class_colors` is required for using `classes`"
classes = pd.Series(classes)
assert np.all(classes.index == samples), "`classes` must be a pd.Series with the same index ordering as `X.index`"
assert np.all(classes.map(lambda x: x in class_colors)), "Classes in `class` must have a color in `class_colors`"
if c is not None:
warnings.warn("c will be ignored and superceded by class_colors and classes")
c = classes.map(lambda x: class_colors[x])
if legend_title is None:
legend_title = c.name
else:
classes = 0
if c is None:
c = "black"
if not isinstance(c, pd.Series):
c = pd.Series([c]*number_of_samples, index=samples)
assert np.all(c.notnull())
c_is_continuous = False
try:
c = c.map(to_hex)
except ValueError:
c_is_continuous = True
if vmin is None:
vmin = c.min()
if vmax is None:
vmax = c.max()
_scatter_kws["cmap"] = cmap
_scatter_kws["vmin"] = vmin
_scatter_kws["vmax"] = vmax
if not isinstance(classes, pd.Series):
classes = pd.Series([classes]*number_of_samples, index=samples)
# Marker size
if not isinstance(s, pd.Series):
s = pd.Series([s]*number_of_samples, index=samples)
assert np.all(s.notnull())
# Data
df_data = pd.DataFrame([
sample_to_totalcounts,
sample_to_ncomponents,
c,
s,
classes,
], index=["x","y","c","s","class"],
).T
for field in ["x","s"]:
df_data[field] = df_data[field].astype(float)
for field in ["y"]:
df_data[field] = df_data[field].astype(int)
# Plotting
number_of_classes = df_data["class"].nunique()
axes = list()
with plt.style.context(style):
# Set up axes
if ax is None:
fig, ax = plt.subplots(figsize=figsize)
else:
fig = plt.gcf()
axes.append(ax)
# Scatter plot
for id_class in df_data["class"].unique():
index = df_data["class"][lambda x: x == id_class].index
df = df_data.loc[index]
ax.scatter(data=df, x="x", y="y", c="c", s="s", label=id_class if number_of_classes > 1 else None, **_scatter_kws)
# Limits
xlim = ax.get_xlim()
ylim = ax.get_ylim()
if logscale:
ax.set_xscale(LogScale(axis=0,base=10))
# Density (1D)
divider = make_axes_locatable(ax)
if show_density_1d:
ax_right = divider.append_axes("right", pad=panel_pad, size=panel_size)
ax_top = divider.append_axes("top", pad=panel_pad, size=panel_size)
for id_class in df_data["class"].unique():
index = df_data["class"][lambda x: x == id_class].index
df = df_data.loc[index]
color = df.loc[index,"c"].values[0]
if c_is_continuous:
color = to_hex("black")
# Histogram
sns.histplot(data=df, x="x", color=color, ax=ax_top, kde=True, **_hist_1d_kws) # "rug":max(X.shape) < 5000, "hist_kws":{"alpha":0.382}, "kde_kws":{
sns.histplot(data=df, y="y", color=color, ax=ax_right, kde=True, **_hist_1d_kws) # "rug":max(X.shape) < 5000, "hist_kws":{"alpha":0.382}, "kde_kws":{
# KDE
# sns.rugplot(data=df, x="x", color=color, ax=ax_top,) # "rug":max(X.shape) < 5000, "hist_kws":{"alpha":0.382}, "kde_kws":{
# sns.rugplot(data=df, y="y", color=color, ax=ax_right)
sns.kdeplot(data=df, y="y", color=color, ax=ax_right, **_kde_1d_kws, zorder=0)
if logscale:
ax_top.set_xscale(LogScale(axis=0,base=10))
with warnings.catch_warnings():
warnings.simplefilter("ignore")
ax_right.set(ylim=ylim, xticklabels=[],yticklabels=[],yticks=ax.get_yticks())
ax_top.set(xlim=xlim, xticklabels=[],yticklabels=[],xticks=ax.get_xticks())
ax_right.set_xlabel(None)
ax_right.set_ylabel(None)
ax_top.set_xlabel(None)
ax_top.set_ylabel(None)
axes.append(ax_right)
axes.append(ax_top)
# Density (2D)
if show_density_2d:
for id_class in df_data["class"].unique():
index = df_data["class"][lambda x: x == id_class].index
df = df_data.loc[index]
color = df.loc[index,"c"].values[0]
if c_is_continuous:
color = to_hex("black")
try:
sns.kdeplot(data=df, x="x", y="y", color=color, zorder=0, ax=ax, **_kde_2d_kws)
except Exception as e:
warnings.warn("({}) Could not compute the 2-dimensional KDE plot for the following class: {}".format(e, id_class))
# Annotations
if sample_labels is not None:
assert hasattr(sample_labels, "__iter__"), "sample_labels must be an iterable or a mapping between sample and label"
if isinstance(sample_labels, (Mapping, pd.Series)):
sample_labels = dict(sample_labels)
else:
sample_labels = dict(zip(sample_labels, sample_labels))
for k,v in sample_labels.items():
if k not in df_data.index:
assert k in X.index, ("{} is not in X.index".format(k))
warnings.warn("{} is not in X.index after removing empty compositions".format(k))
else:
x, y = df_data.loc[k,["x","y"]]
ax.text(x=x, y=y, s=v, **_annot_kws)
# Labels
if xlabel is None:
xlabel = "Total Counts"
if logscale:
xlabel = "log$_{10}$(%s)"%(xlabel)
if ylabel is None:
ylabel = "Number of Components"
if xmin is not None:
with warnings.catch_warnings():
warnings.simplefilter("ignore")
ax.set_xlim(xmin, max(ax.get_xlim()))
if ymin is not None:
ax.set_ylim(ymin, max(ax.get_ylim()))
ax.set_xlabel(xlabel, **_axis_label_kws)
ax.set_ylabel(ylabel, **_axis_label_kws)
ax.xaxis.grid(show_xgrid)
ax.yaxis.grid(show_ygrid)
# Cbar
# if c_is_continuous:
# if cbar:
# divider_cax = make_axes_locatable(axes[-1])
# cax = fig.add_axes([0.27, 0.8, 0.5, 0.05])
# im = ax.imshow(df_data["c"].values.reshape(-1,1), cmap=cmap)
# fig.colorbar(im, orientation='horizontal')
# plt.show()
# cax = divider_cax.append_axes("right", size="5%", pad="2%") # Adjust the size and pad as needed
# im = ax.imshow(df_data["c"].values.reshape(-1, 1), cmap=cmap, vmin=vmin, vmax=vmax)
# plt.colorbar(im, cax=cax)
# Legend
if show_legend:
if number_of_classes > 1:
ax.legend(**_legend_kws)
if bool(legend_title):
ax.legend_.set_title(legend_title, prop=_legend_title_kws)
# Title
if title is not None:
axes[-1].set_title(title, **_title_kws)
# Background color
for ax_query in axes:
ax_query.set_facecolor(background_color)
return fig, axes
# Load abundances (Gomez and Espinoza et al. 2017)
X = pd.read_csv("https://github.com/jolespin/projects/raw/main/supragingival_plaque_microbiome/16S_amplicons/Data/X.tsv.gz",
sep="\t",
index_col=0,
compression="gzip",
)
Y = pd.read_csv("https://github.com/jolespin/projects/raw/main/supragingival_plaque_microbiome/16S_amplicons/Data/Y.tsv.gz",
sep="\t",
index_col=0,
compression="gzip",
)
classes = Y["Caries_enamel"].loc[X.index]
c = pd.Series(classes.map(lambda x: {True:"blue", False:"green"}[x == "NO"]))
sample_labels = pd.Index(X.sum(axis=1).sort_values().index[:4].tolist())
sample_labels = pd.Series(sample_labels.map(lambda x: x.split("_")[0]), sample_labels)
# fig, axes = plot_compositions(X, s=28,logscale=False, sample_labels=sample_labels, class_colors={"NO":"black", "YES":"red"}, classes=classes, title="Caries")
fig, axes = plot_compositions(X,c=Y.loc[X.index,"age (months)"], s=28,logscale=False, sample_labels=sample_labels, title="Caries")
</code></pre>
<p>Here's the resulting figure:
<a href="https://i.sstatic.net/N19me.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N19me.png" alt="enter image description here" /></a></p>
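<p>For the commented-out colorbar section near the end, one common approach is a standalone <code>ScalarMappable</code> built from the same <code>cmap</code>/<code>vmin</code>/<code>vmax</code> as the scatter, handed straight to <code>fig.colorbar</code>. A minimal, self-contained sketch (the data and label here are placeholders, not from the code above):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
from matplotlib.cm import ScalarMappable
from matplotlib.colors import Normalize

fig, ax = plt.subplots()
# Scatter with a continuous c, as in the question
ax.scatter([1, 2, 3], [3, 1, 2], c=[0.1, 0.5, 0.9], cmap="viridis", vmin=0, vmax=1)

# Build a mappable sharing the scatter's norm/cmap and attach the colorbar to ax
sm = ScalarMappable(norm=Normalize(vmin=0, vmax=1), cmap="viridis")
sm.set_array([])
fig.colorbar(sm, ax=ax, label="c value")
```

<p>This avoids the extra <code>imshow</code> entirely; the colorbar steals space from <code>ax</code> on the right.</p>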
| <python><matplotlib><plot><axis><colorbar> | 2023-08-02 01:18:43 | 0 | 30,977 | O.rka |
76,816,059 | 3,099,733 | Python defaultdict but support key-wise default value | <p>I want to define a <code>defaultdict</code>, so that if the key is missing, its value should be <code>f'You can set {key} to inject configuration here'</code></p>
<p>Consider the following code</p>
<pre class="lang-py prettyprint-override"><code>from collections import defaultdict
# Valid, but the default value for all key will be empty string.
d1 = defaultdict(str)
d1['post_simulation'] # return ''
# This is what I intend to do,
# but it is invalid as the default_factory won't take any arguments.
d2 = defaultdict(lambda k: f'#You can inject text here by setting {k}')
d2['post_simulation']
# TypeError: <lambda>() missing 1 required positional argument: 'k'
</code></pre>
<p>Is there any tech that I can do this in Python by using defaultdict or other data structure?</p>
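<p>One way, I believe, is a plain <code>dict</code> subclass that overrides <code>__missing__</code> — unlike <code>default_factory</code>, <code>__missing__</code> does receive the missing key. A sketch (the class name <code>KeyedDefaultDict</code> is mine):</p>

```python
class KeyedDefaultDict(dict):
    """dict whose default value is computed from the missing key itself."""

    def __init__(self, default_factory, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.default_factory = default_factory

    def __missing__(self, key):
        # Called by dict.__getitem__ on a missing key; store and return the default
        value = self.default_factory(key)
        self[key] = value
        return value


d2 = KeyedDefaultDict(lambda k: f"#You can inject text here by setting {k}")
```

<p>Like <code>defaultdict</code>, this caches the computed value, so the factory runs only once per missing key.</p>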
| <python> | 2023-08-02 01:11:11 | 1 | 1,959 | link89 |
76,816,052 | 9,901,261 | Splitting a sentence into smaller phrases with a command and 2 optional parameters | <ul>
<li>First split the sentence into smaller phrases with words from the command list</li>
<li>The words can be anywhere in the sentence</li>
<li>After the word there can be another word and a number ( both optional)</li>
<li>Remove any words not in any of lists.</li>
</ul>
<p>Small example:</p>
<pre><code>txt = "dash dash forward dash backward 5 5 walk walk forward walk forward 5 a"
commands = ['dash', 'walk']
directions = ['forward', 'backward']
</code></pre>
<p>Should split a sentence into smaller phrases:</p>
<pre class="lang-none prettyprint-override"><code>dash
dash forward
dash backward 5
--> cut out 5
walk
walk forward
walk forward 5
--> cut out a
</code></pre>
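<p>One possible approach (the helper name <code>parse_phrases</code> is mine) builds a single pattern from the two lists, with the direction and the number as optional groups; <code>re.finditer</code> then naturally skips any token that fits nowhere:</p>

```python
import re

def parse_phrases(txt, commands, directions):
    """Split txt into command phrases, dropping tokens that fit nowhere."""
    pattern = (
        rf"\b({'|'.join(commands)})"          # required command word
        rf"(?:\s+({'|'.join(directions)}))?"  # optional direction
        r"(?:\s+(\d+))?"                      # optional number
    )
    return [
        " ".join(g for g in m.groups() if g is not None)
        for m in re.finditer(pattern, txt)
    ]

txt = "dash dash forward dash backward 5 5 walk walk forward walk forward 5 a"
phrases = parse_phrases(txt, ["dash", "walk"], ["forward", "backward"])
```

<p>The stray <code>5</code> and <code>a</code> are cut out because <code>finditer</code> only resumes at the next command word.</p>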
| <python><regex> | 2023-08-02 01:09:33 | 1 | 9,989 | Arundeep Chohan |
76,816,002 | 559,827 | Again, how to use unittest.mock to test open(..., 'w') ... .write(...)? | <p>I am not trying to do anything practical here; I am just trying to understand how to use the <code>unittest.mock</code> module in this particular use case. What I am attempting to do here is so simple, straightforward, and so smack down the middle of <code>unitttest.mock</code>'s wheelhouse that I can't believe I am having such a hard time getting it to work.</p>
<p>This toy example shows the problem:</p>
<pre><code>def fn(in1, in2, out):
with open(in1) as f1, open(in2) as f2, open(out, 'w') as f3:
for line in f1:
f3.write(f'1>{line}')
for line in f2:
f3.write(f'2>{line}')
from unittest.mock import patch, mock_open
from io import StringIO
in1 = StringIO('a\nb\n')
in2 = StringIO('c\nd\ne\n')
out = StringIO()
expected = '1>a\n1>b\n2>c\n2>d\n2>e\n'
expected_first_line = expected.splitlines()[0] + '\n'
with patch('builtins.open', new_callable=mock_open) as fake_open:
fake_open.side_effect = [in1, in2, out]
fn('this', 'that', 'somethingelse')
</code></pre>
<p><strong>Q:</strong> I want to know <strong>how must I modify</strong> above script so that I can test whether or not <code>fn</code> wrote to the write handle the string stored in <code>expected</code><sup>1</sup>.</p>
<p>I have tried to use the answers given in</p>
<p><a href="https://stackoverflow.com/q/33184342/559827">How do I mock an open(...).write() without getting a 'No such file or directory' error?</a></p>
<p>...but they don't work at all.</p>
<p>For example, if I attempt to follow the first answer in the post above, by adding the following lines at the end of the script above:</p>
<pre><code>handle = fake_open()
handle.write.assert_called_once_with(expected_first_line)
</code></pre>
<p>...the assertion never runs because the assignment on the first line bombs:</p>
<pre><code>Traceback (most recent call last):
File "/tmp/try.py", line 21, in <module>
handle = fake_open()
File "/opt/python3/3.8.0/lib/python3.8/unittest/mock.py", line 1075, in __call__
return self._mock_call(*args, **kwargs)
File "/opt/python3/3.8.0/lib/python3.8/unittest/mock.py", line 1079, in _mock_call
return self._execute_mock_call(*args, **kwargs)
File "/opt/python3/3.8.0/lib/python3.8/unittest/mock.py", line 1136, in _execute_mock_call
result = next(effect)
StopIteration
</code></pre>
<p>If I follow the second answer, and add this line</p>
<pre><code> fake_open.return_value.__enter__().write.assert_called_once_with(expected_first_line)
</code></pre>
<p>...the test fails with</p>
<pre><code>AssertionError: Expected 'write' to be called once. Called 0 times.
</code></pre>
<p>...which is a bit more civilized, but still pretty hard for me to understand.</p>
<p>Also, it appears that, once the function returns, the <code>out</code> variable can no longer be read. (Every operation I've tried on <code>out</code> fails with a <code>ValueError: I/O operation on closed file</code> exception.)</p>
<p>Of course, I realize that I can roll my own mock object in 10-20 lines of code, but what I am trying to do here should be bread-and-butter for <code>unittest.mock</code>.</p>
<hr />
<p><sup><sup>1</sup>Even though in some of my attempts I used the value in <code>expected_first_line</code>, I am really not interested in testing just one line in the output, but all of it. I hope that there is a straightforward way to do this.</sup></p>
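<p>One way to keep the <code>StringIO</code>-based <code>side_effect</code> approach and still read all of the output afterwards is to neuter <code>close</code> on the output buffer (a <code>StringIO</code> instance accepts attribute assignment), then compare <code>getvalue()</code> against <code>expected</code>. A self-contained sketch:</p>

```python
from io import StringIO
from unittest.mock import patch

def fn(in1, in2, out):
    with open(in1) as f1, open(in2) as f2, open(out, "w") as f3:
        for line in f1:
            f3.write(f"1>{line}")
        for line in f2:
            f3.write(f"2>{line}")

in1 = StringIO("a\nb\n")
in2 = StringIO("c\nd\ne\n")
out = StringIO()
out.close = lambda: None  # keep the buffer readable after the with-block exits

with patch("builtins.open") as fake_open:
    # Each open() call consumes the next StringIO, in call order: in1, in2, out
    fake_open.side_effect = [in1, in2, out]
    fn("this", "that", "somethingelse")

result = out.getvalue()
```

<p>This sidesteps <code>mock_open</code>'s write handle entirely and tests the whole output, not just one line.</p>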
| <python><unit-testing><mocking> | 2023-08-02 00:54:24 | 1 | 35,691 | kjo |
76,815,970 | 425,895 | How can I force a GridSearchCV model (or a pipeline model) to use a given hyparameter value? | <p>I have used GridSearchCV to find the best hyperparameters of a regularized logistic model.<br />
It also includes a pipeline to impute and standardize the covariates.</p>
<pre><code>numeric_cols = X_train.select_dtypes(include=['float64', 'int']).columns.to_list()
cat_cols = X_train.select_dtypes(include=['object', 'category']).columns.to_list()
numeric_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='median')),
('scaler', StandardScaler())
])
categorical_transformer = Pipeline(steps=[
('imputer', SimpleImputer(strategy='most_frequent'))
])
preprocessor = ColumnTransformer(transformers=[
('numeric', numeric_transformer, numeric_cols),
('cat', categorical_transformer, cat_cols)
], remainder='passthrough')
completo = Pipeline(steps=[
("preprocessor", preprocessor),
("classifier", LogisticRegression(solver='liblinear', penalty='l2',
max_iter=100, class_weight='balanced'))
])
params = {'classifier__C': np.logspace(-4, 16, 21, base=1.5)}
grid_search_lr2 = GridSearchCV(completo, params, scoring='roc_auc')
results_lr2 = grid_search_lr2.fit(X_train, y_train)
</code></pre>
<p>Up to here it works.</p>
<p>Now I can use the fitted model to make predictions, and it will automatically use the best tuned parameters.</p>
<pre><code>grid_search_lr2.predict(X_test)
</code></pre>
<p>But, what if instead I want to force it to use a manually chosen value for the hyperparameters instead of the <strong>best</strong> ones. For example C=0.5. Maybe this is not possible if I don't retrain the model.</p>
<p>What if I want to retrain the model with the whole train data (without CV), keeping the best hyperparameters but changing one?</p>
<p>I have tried with the option <code>C=0.5</code> but it produces this error: (no matter if I use completo or results_lr2)</p>
<pre><code>Pipeline.fit does not accept the C parameter.
</code></pre>
<p>I have also tried with the option <code>classifier__C=0.5</code> but it produces this error:</p>
<pre><code>LogisticRegression.fit() got an unexpected keyword argument 'C'.
</code></pre>
<p>How can I tell the model (which includes the pipeline) to use a particular C value?<br />
What if I want to specify that C values but keep the other best values for the other hyperparameters (I know in my example I have just one).</p>
<p>Do I need to create a new pipeline or can I reuse the old one, completo?</p>
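<p>One option is to <code>clone</code> the fitted <code>best_estimator_</code> (a pipeline carrying all the tuned settings), override only <code>C</code> via <code>set_params</code>, and refit on the full training data. A self-contained sketch with synthetic data (the simplified pipeline here is illustrative, not the original one):</p>

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = (X[:, 0] > 0).astype(int)

pipe = Pipeline([("scaler", StandardScaler()),
                 ("classifier", LogisticRegression(solver="liblinear"))])
grid = GridSearchCV(pipe, {"classifier__C": [0.01, 1.0]}, cv=3).fit(X, y)

# clone keeps every other (best) parameter; set_params overrides only C;
# fit retrains on the whole training set with no cross-validation
manual = clone(grid.best_estimator_).set_params(classifier__C=0.5)
manual.fit(X, y)
```

<p>The earlier errors came from passing <code>C=0.5</code> to <code>fit</code>; hyperparameters go through <code>set_params</code> (with the <code>classifier__</code> prefix for a pipeline step), not through <code>fit</code>.</p>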
| <python><scikit-learn><pipeline><gridsearchcv> | 2023-08-02 00:40:19 | 1 | 7,790 | skan |
76,815,883 | 1,505,593 | How to pass string and dictionary as arguments to vectorized Python UDF definition in Snowflake | <p>I am trying to create a python vectorized udf to process columns in a table. I want to pass the column to be processed and 2 additional arguments to the udf, one arguments is a string and the other is a dictionary. The processing of the column depends on these two additional arguments.</p>
<p>I tried to access these additional arguments via index as mentioned in the Snowflake documentation, but I got an error which suggested they cannot be accessed via index.
I am new to snowpark hence I could be making a silly mistake somewhere. I may have misread the doc but accessing arguments via index feels kind of odd.</p>
<p>Can someone help me with an example showing how to pass additional arguments to vectorized udfs ?</p>
<p>Note: I am creating a stored procedure like</p>
<pre><code>create or replace function DO_DQ_TEST(col int ,dq_name string, dq_test string, params VARIANT)
.....
.....
.....
@vectorized(input=pd.DataFrame)
def do_dq_udf(df):
dq_name = 1
dq_test = 2
params = 3
<do_something>
</code></pre>
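<p>As I understand the Snowflake docs, a vectorized UDF handler receives one pandas DataFrame whose columns are the call arguments in positional order (integer labels 0, 1, 2, …), with scalar arguments repeated on every row. A pandas-only simulation of that shape (the DQ logic and the <code>threshold</code> key are made up for illustration):</p>

```python
import pandas as pd

def do_dq_udf(df: pd.DataFrame) -> pd.Series:
    # Columns are keyed by argument *position*: 0=col, 1=dq_name, 2=dq_test, 3=params
    col, dq_name, dq_test, params = df[0], df[1], df[2], df[3]
    # Placeholder "do_something": flag rows where col exceeds a threshold from params
    threshold = params.iloc[0]["threshold"]
    return col > threshold

# Simulate what a call like DO_DQ_TEST(col, 'null_check', 'gt', {'threshold': 5})
# would deliver to the handler for a batch of two rows
batch = pd.DataFrame({
    0: [3, 7],
    1: ["null_check"] * 2,
    2: ["gt"] * 2,
    3: [{"threshold": 5}] * 2,
})
flags = do_dq_udf(batch)
```

<p>So <code>dq_name = 1</code> would become <code>dq_name = df[1]</code> (a Series), not a plain Python string.</p>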
| <python><snowflake-cloud-data-platform><user-defined-functions> | 2023-08-02 00:08:58 | 1 | 2,385 | cryp |
76,815,847 | 1,187,671 | Passing a Local to a Python program from Stata | <p>I am trying to write a Stata program that passes a local to a Python function. It is based on <a href="https://blog.stata.com/2023/07/25/a-stata-command-to-run-chatgpt/" rel="nofollow noreferrer">this example</a>, where this approach works.</p>
<p>Here is a toy example demonstrating the problem:</p>
<pre><code>python clear
capture program drop get_data
program get_data
version 17
args InputText
python: get_data()
end
python:
from sfi import Macro
def get_data():
inputtext = Macro.getLocal('InputText')
print(inputtext)
end
local InputText "YYY"
python: get_data()
get_data "XXX"
</code></pre>
<p>The first works, the second does not:</p>
<pre><code>. local InputText "YYY"
. python: get_data()
YYY
.
. get_data "XXX"
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'get_data' is not defined
r(7102);
</code></pre>
<p>I would like to get a fix with an explanation.</p>
| <python><stata><stata-macros> | 2023-08-01 23:57:04 | 1 | 9,460 | dimitriy |
76,815,770 | 9,588,300 | Pandas replace values in datetime column without changing the datetime64 data type | <p>I have a dataframe with a datetime64[ns] data type column. I want to apply a criterion: if the date is beyond a certain year, it should take a fixed date I provide. Like "If the date is beyond this point, max it out to this date"</p>
<p>I have noticed that if I select one element from any datetime64[ns] column it's of type <code>pd.Timestamp</code>, so I thought I would pass my fixed date with the same datatype to preserve the entire column data type</p>
<p>But no matter how I pass that fixed date (pd.Timestamp or as as datetime.datetime) it always casts my datetime64 column into an <code>object</code> data type and gives me a epoch in nanoseconds.</p>
<p>My question is, how can I use np.where to substitute certain values in a dataframe if they met a condition without altering the original column type, assuming the inserted new values are of the same type of the column.</p>
<p>Here's an example</p>
<pre><code>#generate some data
df=pd.DataFrame({'a':[datetime.now() for i in range(10)]})
#The previous line generated a column named a with datetime64[ns]
# data type if you look at df.dtypes
df.loc[0,'a']=datetime(2050,1,1)
#In the previous line I already substitued a value by directly
# accessing its index, and it does preserve the data type datetime64[ns].
# This is the data
a
0 2050-01-01 00:00:00.000000
1 2023-08-01 17:29:59.011984
2 2023-08-01 17:29:59.011984
3 2023-08-01 17:29:59.011984
4 2023-08-01 17:29:59.011984
5 2023-08-01 17:29:59.011984
6 2023-08-01 17:29:59.011984
7 2023-08-01 17:29:59.011984
8 2023-08-01 17:29:59.011984
9 2023-08-01 17:29:59.011984
#I want to top the date until today's date if the year exceeds the present year
df['a']=np.where(df['a'].dt.year>2023,datetime.now(),df['a'])
#In the previous line, I substitued values but accessing to them
# conditionally with an np.where The result of this
# changes column 'a' data type into integer and puts unix epochs,
# yet the inserted data looks like datetime, this is the output
a
0 2023-08-01 17:31:48.560111
1 1690910999011984000
2 1690910999011984000
3 1690910999011984000
4 1690910999011984000
5 1690910999011984000
6 1690910999011984000
7 1690910999011984000
8 1690910999011984000
9 1690910999011984000
</code></pre>
| <python><pandas><dataframe><numpy> | 2023-08-01 23:34:30 | 2 | 462 | Eugenio.Gastelum96 |
76,815,735 | 14,057,599 | Is it possible to create o3d.geometry.VoxelGrid directly from numpy binary array? | <p>In open3D, we can create point cloud by something like this</p>
<pre><code>import open3d as o3d
import numpy as np
pcl = o3d.geometry.PointCloud()
pcl.points = o3d.utility.Vector3dVector(np.random.randn(500,3))
o3d.visualization.draw_geometries([pcl])
</code></pre>
<p>But for binary numpy array, e.g. <code>voxel</code>, if I want to visualize the voxel, I need to convert it to point cloud and then use <code>o3d.geometry.VoxelGrid.create_from_point_cloud</code> to convert the point cloud back to the open3D VoxelGrid</p>
<pre><code>voxel = np.zeros((32, 32, 32), np.uint8)
voxel[16, 16, 16] = 1
# normalize
pcl = o3d.geometry.PointCloud()
pc = np.stack(np.where(voxel == 1)).transpose((1, 0)) / (voxel.shape[0] - 1)
pcl.points = o3d.utility.Vector3dVector(pc)
voxel_grid = o3d.geometry.VoxelGrid.create_from_point_cloud(pcl, voxel_size=0.04)
o3d.visualization.draw_geometries([voxel_grid])
</code></pre>
<p>Is there any way to create <code>o3d.geometry.VoxelGrid</code> directly from voxel like</p>
<pre><code>### this doesn't work
voxel = np.zeros((32, 32, 32), np.uint8)
voxel[16, 16, 16] = 1
voxel_grid = o3d.geometry.VoxelGrid()
voxel_grid.voxels = o3d.utility.Vector3dVector(voxel)
o3d.visualization.draw_geometries([voxel_grid])
</code></pre>
| <python><numpy><open3d> | 2023-08-01 23:24:02 | 1 | 317 | Qimin Chen |
76,815,659 | 19,506,623 | How to consider multiple conditions before replace? | <p>I have several lines, where most of them have one of the formats below:</p>
<ul>
<li>format1 --> "FIELD = VALUE"</li>
<li>format2 --> "KKF NN MM UZZ123.CODE.TXT"</li>
</ul>
<p>I'm trying to convert to xml like format with these conditions.</p>
<p>1-) If FIELD = "XYZ" then replace with <code><EEE>VALUE</EEE></code></p>
<p>2-) If FIELD = anything else, replace with the same FIELD</p>
<p>3-) If line is like format2 replace with <code><DIF>VALUE</DIF></code></p>
<p>The is my current code.</p>
<pre><code>import re
data = '''ABC = 300
XYZ = MA2
MMW = TT1
ABC = YU3
ABC = 719
XYZ = 120
KKF NN MM UZZ123.CODE.TXT
KKF NN MM UZZ456.CODE.TXT'''
data.splitlines()
out = []
for line in data.splitlines():
out.append(re.sub(r'(.+)\s+=\s(.+)', r'<\1>\2<\1>', line))
out.append(re.sub(r'XYZ\s+=\s+(.+)', r'<EEE>\1<EEE>', line))
out.append(re.sub(r'KKF.+UZZ(.+).CODE.+', r'<DIF>\1<DIF>', line))
out
['<ABC>300<ABC>',
'ABC = 300',
'ABC = 300',
'<XYZ>MA2<XYZ>',
'<EEE>MA2<EEE>',
'XYZ = MA2',
'<MMW>TT1<MMW>',
'MMW = TT1',
'MMW = TT1',
'<ABC>YU3<ABC>',
'ABC = YU3',
'ABC = YU3',
'<ABC>719<ABC>',
'ABC = 719',
'ABC = 719',
'<XYZ>120<XYZ>',
'<EEE>120<EEE>',
'XYZ = 120',
'KKF NN MM UZZ123.CODE.TXT',
'KKF NN MM UZZ123.CODE.TXT',
'<DIF>123<DIF>',
'KKF NN MM UZZ456.CODE.TXT',
'KKF NN MM UZZ456.CODE.TXT',
'<DIF>456<DIF>']
</code></pre>
<p>And my desired output would be like this.</p>
<pre><code><ABC>300</ABC>
<EEE>MA2</EEE>
<MMW>TT1</MMW>
<ABC>YU3</ABC>
<ABC>719</ABC>
<EEE>120</EEE>
<DIF>123</DIF>
<DIF>456</DIF>
</code></pre>
<p>How can I fix this? Thanks</p>
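<p>One way is to try the patterns in priority order (XYZ first, then the KKF format, then the generic <code>FIELD = VALUE</code>) and append each line at most once, using <code>re.subn</code> to detect whether a pattern actually matched. Note the closing tags also need a <code>/</code> to be XML-like:</p>

```python
import re

data = """ABC = 300
XYZ = MA2
MMW = TT1
ABC = YU3
ABC = 719
XYZ = 120
KKF NN MM UZZ123.CODE.TXT
KKF NN MM UZZ456.CODE.TXT"""

rules = [
    (r"XYZ\s*=\s*(.+)", r"<EEE>\1</EEE>"),          # condition 1: XYZ becomes EEE
    (r"KKF.+?UZZ(\d+)\.CODE.+", r"<DIF>\1</DIF>"),  # condition 3: format2 lines
    (r"(\w+)\s*=\s*(.+)", r"<\1>\2</\1>"),          # condition 2: any other FIELD
]

out = []
for line in data.splitlines():
    for pattern, replacement in rules:
        new, count = re.subn(pattern, replacement, line)
        if count:  # first matching rule wins; each line is appended at most once
            out.append(new)
            break
```

<p>The original code appended the result of every <code>re.sub</code>, including the no-op ones, which is why each line showed up three times.</p>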
| <python><regex><replace> | 2023-08-01 22:58:27 | 2 | 737 | Rasec Malkic |
76,815,650 | 2,525,857 | Calculating rolling Root Mean Square Error in python | <p>Suppose I have a pandas data frame, <code>vols</code> where</p>
<pre><code>vols.head()
Return Vol
DataDate
2019-12-26 0.002291 0.002400
2019-12-27 0.002292 0.002392
2019-12-30 0.002288 0.002385
2019-12-31 0.002288 0.002378
2020-01-01 0.002286 0.002378
</code></pre>
<p>Next I rename <code>vols</code> columns.</p>
<pre><code>vols.columns = ['Realized', 'Predicted']
vols.info()
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 922 entries, 2019-12-26 to 2023-07-27
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Realized 922 non-null float64
1 Predicted 922 non-null float64
dtypes: float64(2)
</code></pre>
<p>I want to calculate rolling Root Mean Square Error.</p>
<pre><code>vols_rolling = vols.rolling(window=52)
from sklearn.metrics import mean_squared_error as mse
vols_rolling.apply(lambda x: mse(x['Realized'], x['Predicted']))
</code></pre>
<p>I am getting following <code>ValueError</code>.</p>
<pre><code>ValueError Traceback (most recent call last)
File ~\anaconda3\Lib\site-packages\pandas\_libs\tslibs\parsing.pyx:440, in pandas._libs.tslibs.parsing.parse_datetime_string_with_reso()
File ~\anaconda3\Lib\site-packages\pandas\_libs\tslibs\parsing.pyx:649, in pandas._libs.tslibs.parsing.dateutil_parse()
ValueError: Unknown datetime string format, unable to parse: Realized
During handling of the above exception, another exception occurred:
</code></pre>
<p>The error is quite long. Trying not to copy paste it here.</p>
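<p>A probable cause: <code>Rolling.apply</code> passes each <em>column</em> to the function one at a time as a Series, so <code>x['Realized']</code> is a label lookup on that Series (and with a DatetimeIndex, pandas tries to parse the string as a date, hence the error). One workaround is to compute the rolling RMSE directly as the square root of the rolling mean of squared errors. A sketch with synthetic data:</p>

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2019-12-26", periods=10, freq="D")
vols = pd.DataFrame({"Realized": np.linspace(0.002, 0.003, 10),
                     "Predicted": np.linspace(0.002, 0.003, 10) + 0.001},
                    index=idx)

# Rolling RMSE: sqrt of the rolling mean of squared errors
sq_err = (vols["Realized"] - vols["Predicted"]) ** 2
rolling_rmse = sq_err.rolling(window=4).mean() ** 0.5
```

<p>With the question's data the window would be 52; here the error is a constant 0.001, so the rolling RMSE is 0.001 once the window fills.</p>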
| <python><pandas><apply><rolling-computation> | 2023-08-01 22:56:15 | 1 | 351 | deb |
76,815,536 | 11,505,680 | How to draw a vertical line across the whole plot? | <p>I have a plot and I want to draw some vertical lines covering the whole range of the y-axis, without having to worry about the numerical values of that range. <a href="https://matplotlib.org/stable/gallery/lines_bars_and_markers/vline_hline_demo.html" rel="nofollow noreferrer">This matplotlib example</a> demonstrates the functionality I'm looking for, and it works fine for me, but a very similar code does not. What am I doing wrong?</p>
<pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt
import numpy as np
np.random.seed(19680801)
y = np.random.normal(scale=0.01, size=200).cumsum()
fig, ax = plt.subplots(1, 2, figsize=(12, 6))
ax[0].plot(y)
ax[0].vlines([25, 50, 100], [0], [0.1, 0.2, 0.3], color='C1')
ax[1].plot(y)
ax[1].vlines([25, 50, 100], [0], [0.1, 0.2, 0.3], color='C1')
ax[1].vlines([0], 0, 1, color='k', linestyle='dashed',
transform=ax[1].get_xaxis_transform())
</code></pre>
<p>Expected: both plots have the same y-limits</p>
<p>Actual result: the plot on the right has the wrong limits</p>
<p><a href="https://i.sstatic.net/0Kk5L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0Kk5L.png" alt="enter image description here" /></a></p>
<p>Using matplotlib 3.7.1 on python 3.11.3.</p>
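<p>It appears the transformed <code>vlines</code> call still registers its <code>0..1</code> endpoints as y data limits, which skews autoscaling. <code>axvline</code> draws in x-data / y-axes coordinates by default, so it spans the full height without touching the y limits:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt
import numpy as np

np.random.seed(19680801)
y = np.random.normal(scale=0.01, size=200).cumsum()

fig, ax = plt.subplots()
ax.plot(y)
ax.vlines([25, 50, 100], [0], [0.1, 0.2, 0.3], color="C1")
limits_before = ax.get_ylim()

# axvline spans 0..1 in y-axes coordinates, so y autoscaling is left alone
ax.axvline(0, color="k", linestyle="dashed")
```

<p>Both panels in the original example then keep the same y-limits, with no manual transform needed.</p>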
| <python><matplotlib> | 2023-08-01 22:24:19 | 1 | 645 | Ilya |
76,815,348 | 11,141,816 | mpmath stopped working with the high precisions | <p>I'm using jupyter notebook with anaconda on windows (reinstalled to the current version after several attempts). Somehow the mpmath library stopped working. Consider the following code</p>
<pre><code>import mpmath as mp
mp.dps=530
mp.mpf(1) / mp.mpf(6)
</code></pre>
<p>But the result I got was</p>
<pre><code>mpf('0.16666666666666666')
</code></pre>
<p>I also tried</p>
<pre><code>mp.mpf("1") / mp.mpf("6")
</code></pre>
<p>which returned the same thing, and</p>
<pre><code>mp.nprint(mp.mpf(1) / mp.mpf(6),50)
</code></pre>
<p>returned</p>
<pre><code>0.16666666666666665741480812812369549646973609924316
</code></pre>
<p>which indicated the module malfunctioned.</p>
<p>What went wrong with the code?</p>
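<p>The likely culprit: precision lives on the working context <code>mp.mp</code>, so <code>mp.dps = 530</code> merely creates an unused attribute on the module while the context stays at the default ~15 digits. Setting it on the context works:</p>

```python
import mpmath as mp

# dps must be set on the context object mp.mp, not on the module itself
mp.mp.dps = 50
x = mp.mpf(1) / mp.mpf(6)
digits = mp.nstr(x, 40)  # 40 significant digits of 1/6
```

<p>Alternatively <code>from mpmath import mp</code> makes <code>mp.dps = 530</code> do the right thing, which is probably how the original code once worked.</p>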
| <python><debugging><mpmath> | 2023-08-01 21:39:40 | 1 | 593 | ShoutOutAndCalculate |
76,815,316 | 3,821,009 | Calling polars apply results in an error message | <p>This:</p>
<pre><code>df = polars.DataFrame(dict(
j=numpy.random.randint(10, 99, 10),
k=numpy.random.randint(10, 99, 10),
))
def f(cell):
# Simulate logic that cannot be done in polars alone
# E.g. call external REST service that returns "success" => True
return cell > 50
print(df.select(polars.all().apply(f)))
</code></pre>
<p>Causes this:</p>
<pre><code>thread '<unnamed>' panicked at 'called `Result::unwrap()` on an `Err` value: ComputeError(ErrString("wildcard has no root column name"))', /home/runner/work/polars/polars/crates/polars-plan/src/utils.rs:207:47
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
</code></pre>
<p>with polars 0.18.11, though it does finish and produces the expected result. What's the above about and how can I avoid it?</p>
| <python><python-polars> | 2023-08-01 21:34:21 | 1 | 4,641 | levant pied |
76,815,238 | 1,969,404 | When I export ipynb jupyter notebook file to pdf, using nbconvert, table doesn't fit the page | <p>When I run the command below, it converts my notebook to a pdf file, which is great.</p>
<p><code>jupyter nbconvert file_name.ipynb --no-input --to=pdf</code></p>
<p>However, I have tables like this below in the cell (fits in perfectly)</p>
<p><a href="https://i.sstatic.net/slRwL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/slRwL.png" alt="enter image description here" /></a></p>
<p>The pdf export doesn't fit all the columns into the result.</p>
<p><a href="https://i.sstatic.net/uCY80.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uCY80.png" alt="enter image description here" /></a></p>
<p>Is there a way to force the pdf export to fit the table in the page?</p>
| <python><pdf><jupyter-notebook><pdf-generation><nbconvert> | 2023-08-01 21:19:37 | 0 | 1,572 | Bedi Egilmez |
76,815,232 | 9,588,300 | Difference between pandas <NA> and NaN for numeric columns | <p>I have a data frame column as float64 full of <code>NaN</code> values. If I cast it again to float64, they get substituted with <code><NA></code> values, which are not the same.</p>
<p>I know that the <code><NA></code> values are <code>pd.NA</code>, while <code>NaN</code> values are <code>np.nan</code> , so they are different things. So why casting an already float64 column to float64 changed <code>NaN</code> to <code><Na></code> ?</p>
<p>Here's an example:</p>
<pre><code>df=pd.DataFrame({'a':[1.0,2.0]})
print(df.dtypes)
#output is: float64
df['a'] = np.nan
print(df.dtypes)
# output is float64
print(df)
a
0 NaN
1 NaN
#Now, lets cast that float64 to float 64
df['a']=df['a'].astype(pd.Float64Dtype())
print(df.dtypes)
#output is Float64, notice it's uppercase F this time, previously it was lowercase
print(df)
a
0 <NA>
1 <NA>
</code></pre>
<p>it seems <code>float64</code> and <code>Float64</code> are two different things. And <code>NaN</code> (np.nan) is the null value for <code>float64</code> while <code><NA></code> (pd.NA) is the null for <code>Float64</code></p>
<p>Is this correct? And if so, what's under the hoods?</p>
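<p>For what it's worth, this minimal sketch shows the two dtypes side by side: lowercase <code>float64</code> is the NumPy-backed dtype whose missing marker is <code>NaN</code>, while uppercase <code>Float64</code> is the pandas nullable extension dtype (a values array plus a validity mask) whose missing marker is <code>pd.NA</code>; casting converts one marker into the other:</p>

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan])   # NumPy-backed float64; missing value is NaN
t = s.astype("Float64")        # pandas nullable extension dtype; NaN becomes pd.NA
```

<p>So the cast in the question was not float64-to-float64: <code>pd.Float64Dtype()</code> selects the nullable extension dtype, and that is what swapped <code>NaN</code> for <code><NA></code>.</p>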
| <python><pandas><numpy><null><na> | 2023-08-01 21:18:31 | 1 | 462 | Eugenio.Gastelum96 |
76,815,348 | 3,357,687 | self host sentry config.yml on email and smtp | <p>I have installed self-hosted Sentry on my server using the instructions with their installer.
Now my problem is that Sentry is not sending mail, and when I refer to this doc:</p>
<pre><code>https://develop.sentry.dev/self-hosted/email/
</code></pre>
<p>They say you must change the <code>config.yml</code> file. I have searched everywhere for this file but could not find it. Does anyone know how and where I can change the email settings so mail is sent?
I even searched for config.yml in their docker-compose file, but all I could find was this:</p>
<pre><code> symbolicator:
<<: *restart_policy
image: "$SYMBOLICATOR_IMAGE"
volumes:
- "sentry-symbolicator:/data"
- type: bind
read_only: true
source: ./symbolicator
target: /etc/symbolicator
command: run -c /etc/symbolicator/config.yml
symbolicator-cleanup:
</code></pre>
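<p>For the Docker-based self-hosted setup, Sentry's own settings normally live at <code>sentry/config.yml</code> inside the cloned <code>self-hosted</code> repository (the installer copies it from <code>sentry/config.example.yml</code>); the <code>config.yml</code> in the compose snippet above belongs to Symbolicator, which is why no mail keys appear there. A mail section might look like this (host and credentials are placeholders):</p>

```yaml
mail.backend: "smtp"
mail.host: "smtp.example.com"
mail.port: 587
mail.username: "smtp-user"
mail.password: "smtp-password"
mail.use-tls: true
mail.from: "sentry@example.com"
```

<p>After editing, I believe the services need to be restarted (e.g. <code>docker compose restart</code>) for the change to take effect.</p>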
| <python><ubuntu><sentry> | 2023-08-01 21:08:50 | 1 | 2,000 | Farshad |
76,815,145 | 197,772 | What's the correct type hint for a metaclass method in Python that returns a class instance? | <p>In the example below, <code>__getitem__</code> indicates the class type as a hint, rather than return of a <code>SpecialClass</code> object.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Self
class SpecialMeta(type):
def __getitem__(cls, key) -> Self:
...
class SpecialClass(metaclass=SpecialMeta):
...
one = SpecialClass()
# type(one) : SpecialClass
two = SpecialClass["findme"]
# type(two) : type[SpecialClass]
</code></pre>
<p>What is the correct type hint to indicate <code>type(two) = SpecialClass</code> instead of <code>type[SpecialClass]</code>? Is there a general way to do this starting around Python 3.8 or so?</p>
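<p>A sketch of one workaround: since <code>Self</code> in a metaclass method binds to instances of the metaclass (i.e. classes), annotate <code>cls</code> itself with a <code>TypeVar</code> through <code>Type[T]</code>, so checkers infer an <em>instance</em> of the subscripted class. This pattern is generally accepted and works back to roughly Python 3.8 (unlike <code>typing.Self</code>, which needs 3.11). The method body here is a placeholder, since the original elides it:</p>

```python
from typing import Type, TypeVar

T = TypeVar("T")

class SpecialMeta(type):
    # Annotating cls as Type[T] makes the return type the class's instance type
    def __getitem__(cls: Type[T], key) -> T:
        return cls()  # placeholder body; the question elides the real logic

class SpecialClass(metaclass=SpecialMeta):
    ...

two = SpecialClass["findme"]
```

<p>With this hint, <code>two</code> is inferred as <code>SpecialClass</code> rather than <code>type[SpecialClass]</code>.</p>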
| <python><python-typing><metaclass> | 2023-08-01 20:58:58 | 1 | 27,693 | jheddings |
76,814,968 | 1,880,182 | Unable to install PaddleOCR on Python 3.11 with GitHub Actions on Windows OS | <p>I am trying to automate the installation of PaddleOCR using GitHub Actions for my project. The process works smoothly on other Python versions and operating systems.</p>
<p>However, I am encountering an issue specifically with Python 3.11 on the Windows operating system when running the GitHub Actions workflow to install the <code>PyMuPDF</code> library.</p>
<p>The error output is as follows:</p>
<p><a href="https://github.com/eftalgezer/PlotScan/actions/runs/5731021239/job/15531033720" rel="nofollow noreferrer">https://github.com/eftalgezer/PlotScan/actions/runs/5731021239/job/15531033720</a></p>
<pre><code>Building wheels for collected packages: fire, PyMuPDF, python-docx, future
Building wheel for fire (pyproject.toml): started
Building wheel for fire (pyproject.toml): finished with status 'done'
Created wheel for fire: filename=fire-0.5.0-py2.py3-none-any.whl size=116948 sha256=097baa9f9f54b25d733fb355444ab9944b21d927e9b868a0aec3856364492b8f
Stored in directory: c:\users\runneradmin\appdata\local\pip\cache\wheels\a7\ee\a5\19e91481be8bea594935d137578bfe77d6bf905e4595336f6b
Building wheel for PyMuPDF (pyproject.toml): started
Building wheel for PyMuPDF (pyproject.toml): finished with status 'error'
error: subprocess-exited-with-error
Building wheel for PyMuPDF (pyproject.toml) did not run successfully.
exit code: 1
[191 lines of output]
PyMuPDF/setup.py: sys.argv: ['setup.py', 'bdist_wheel', '--dist-dir', 'C:\\Users\\runneradmin\\AppData\\Local\\Temp\\pip-wheel-7knel3hq\\.tmp-7bk_3mhd']
PyMuPDF/setup.py: os.getcwd(): C:\Users\runneradmin\AppData\Local\Temp\pip-install-u26aisza\pymupdf_092cd445043d4d4ab040208f4c305ef7
PyMuPDF/setup.py: __file__: C:\Users\runneradmin\AppData\Local\Temp\pip-install-u26aisza\pymupdf_092cd445043d4d4ab040208f4c305ef7\setup.py
PyMuPDF/setup.py: $PYTHON_ARCH: None
PyMuPDF/setup.py: os.environ (155):
PyMuPDF/setup.py: ALLUSERSPROFILE: C:\ProgramData
PyMuPDF/setup.py: ANDROID_HOME: C:\Android\android-sdk
PyMuPDF/setup.py: ANDROID_NDK: C:\Android\android-sdk\ndk\25.2.9519653
PyMuPDF/setup.py: ANDROID_NDK_HOME: C:\Android\android-sdk\ndk\25.2.9519653
PyMuPDF/setup.py: ANDROID_NDK_LATEST_HOME: C:\Android\android-sdk\ndk\25.2.9519653
PyMuPDF/setup.py: ANDROID_NDK_ROOT: C:\Android\android-sdk\ndk\25.2.9519653
PyMuPDF/setup.py: ANDROID_SDK_ROOT: C:\Android\android-sdk
PyMuPDF/setup.py: ANT_HOME: C:\ProgramData\chocolatey\lib\ant\tools\apache-ant-1.10.13
PyMuPDF/setup.py: APPDATA: C:\Users\runneradmin\AppData\Roaming
PyMuPDF/setup.py: AZURE_EXTENSION_DIR: C:\Program Files\Common Files\AzureCliExtensionDirectory
PyMuPDF/setup.py: CABAL_DIR: C:\cabal
PyMuPDF/setup.py: CHOCOLATEYINSTALL: C:\ProgramData\chocolatey
PyMuPDF/setup.py: CHROMEWEBDRIVER: C:\SeleniumWebDrivers\ChromeDriver
PyMuPDF/setup.py: CI: true
PyMuPDF/setup.py: COBERTURA_HOME: C:\cobertura-2.1.1
PyMuPDF/setup.py: COMMONPROGRAMFILES: C:\Program Files\Common Files
PyMuPDF/setup.py: COMMONPROGRAMFILES(X86): C:\Program Files (x86)\Common Files
PyMuPDF/setup.py: COMMONPROGRAMW6432: C:\Program Files\Common Files
PyMuPDF/setup.py: COMPUTERNAME: fv-az365-843
PyMuPDF/setup.py: COMSPEC: C:\Windows\system32\cmd.exe
PyMuPDF/setup.py: CONDA: C:\Miniconda
PyMuPDF/setup.py: DEPLOYMENT_BASEPATH: C:\actions
PyMuPDF/setup.py: DOTNET_MULTILEVEL_LOOKUP: 0
PyMuPDF/setup.py: DOTNET_NOLOGO: 1
PyMuPDF/setup.py: DOTNET_SKIP_FIRST_TIME_EXPERIENCE: 1
PyMuPDF/setup.py: DRIVERDATA: C:\Windows\System32\Drivers\DriverData
PyMuPDF/setup.py: EDGEWEBDRIVER: C:\SeleniumWebDrivers\EdgeDriver
PyMuPDF/setup.py: GCM_INTERACTIVE: Never
PyMuPDF/setup.py: GECKOWEBDRIVER: C:\SeleniumWebDrivers\GeckoDriver
PyMuPDF/setup.py: GHCUP_INSTALL_BASE_PREFIX: C:\
PyMuPDF/setup.py: GHCUP_MSYS2: C:\msys64
PyMuPDF/setup.py: GITHUB_ACTION: __run
PyMuPDF/setup.py: GITHUB_ACTIONS: true
PyMuPDF/setup.py: GITHUB_ACTION_REF:
PyMuPDF/setup.py: GITHUB_ACTION_REPOSITORY:
PyMuPDF/setup.py: GITHUB_ACTOR: eftalgezer
PyMuPDF/setup.py: GITHUB_ACTOR_ID: 10411957
PyMuPDF/setup.py: GITHUB_API_URL: https://api.github.com
PyMuPDF/setup.py: GITHUB_BASE_REF:
PyMuPDF/setup.py: GITHUB_ENV: D:\a\_temp\_runner_file_commands\set_env_a51bbc0f-5c77-4a8b-a2c8-d73b77fff211
PyMuPDF/setup.py: GITHUB_EVENT_NAME: push
PyMuPDF/setup.py: GITHUB_EVENT_PATH: D:\a\_temp\_github_workflow\event.json
PyMuPDF/setup.py: GITHUB_GRAPHQL_URL: https://api.github.com/graphql
PyMuPDF/setup.py: GITHUB_HEAD_REF:
PyMuPDF/setup.py: GITHUB_JOB: build
PyMuPDF/setup.py: GITHUB_OUTPUT: D:\a\_temp\_runner_file_commands\set_output_a51bbc0f-5c77-4a8b-a2c8-d73b77fff211
PyMuPDF/setup.py: GITHUB_PATH: D:\a\_temp\_runner_file_commands\add_path_a51bbc0f-5c77-4a8b-a2c8-d73b77fff211
PyMuPDF/setup.py: GITHUB_REF: refs/heads/master
PyMuPDF/setup.py: GITHUB_REF_NAME: master
PyMuPDF/setup.py: GITHUB_REF_PROTECTED: false
PyMuPDF/setup.py: GITHUB_REF_TYPE: branch
PyMuPDF/setup.py: GITHUB_REPOSITORY: eftalgezer/PlotScan
PyMuPDF/setup.py: GITHUB_REPOSITORY_ID: 664412195
PyMuPDF/setup.py: GITHUB_REPOSITORY_OWNER: eftalgezer
PyMuPDF/setup.py: GITHUB_REPOSITORY_OWNER_ID: 10411957
PyMuPDF/setup.py: GITHUB_RETENTION_DAYS: 90
PyMuPDF/setup.py: GITHUB_RUN_ATTEMPT: 1
PyMuPDF/setup.py: GITHUB_RUN_ID: 5731021239
PyMuPDF/setup.py: GITHUB_RUN_NUMBER: 258
PyMuPDF/setup.py: GITHUB_SERVER_URL: https://github.com
PyMuPDF/setup.py: GITHUB_SHA: 41c71cd0256e496d681cd08775572951f1a18703
PyMuPDF/setup.py: GITHUB_STATE: D:\a\_temp\_runner_file_commands\save_state_a51bbc0f-5c77-4a8b-a2c8-d73b77fff211
PyMuPDF/setup.py: GITHUB_STEP_SUMMARY: D:\a\_temp\_runner_file_commands\step_summary_a51bbc0f-5c77-4a8b-a2c8-d73b77fff211
PyMuPDF/setup.py: GITHUB_TRIGGERING_ACTOR: eftalgezer
PyMuPDF/setup.py: GITHUB_WORKFLOW: Python app
PyMuPDF/setup.py: GITHUB_WORKFLOW_REF: eftalgezer/PlotScan/.github/workflows/python-app.yml@refs/heads/master
PyMuPDF/setup.py: GITHUB_WORKFLOW_SHA: 41c71cd0256e496d681cd08775572951f1a18703
PyMuPDF/setup.py: GITHUB_WORKSPACE: D:\a\PlotScan\PlotScan
PyMuPDF/setup.py: GOROOT_1_18_X64: C:\hostedtoolcache\windows\go\1.18.10\x64
PyMuPDF/setup.py: GOROOT_1_19_X64: C:\hostedtoolcache\windows\go\1.19.11\x64
PyMuPDF/setup.py: GOROOT_1_20_X64: C:\hostedtoolcache\windows\go\1.20.6\x64
PyMuPDF/setup.py: GRADLE_HOME: C:\ProgramData\chocolatey\lib\gradle\tools\gradle-8.1.1
PyMuPDF/setup.py: HOMEDRIVE: C:
PyMuPDF/setup.py: HOMEPATH: \Users\runneradmin
PyMuPDF/setup.py: IEWEBDRIVER: C:\SeleniumWebDrivers\IEDriver
PyMuPDF/setup.py: IMAGEOS: win22
PyMuPDF/setup.py: IMAGEVERSION: 20230724.1.0
PyMuPDF/setup.py: JAVA_HOME: C:\hostedtoolcache\windows\Java_Temurin-Hotspot_jdk\8.0.382-5\x64
PyMuPDF/setup.py: JAVA_HOME_11_X64: C:\hostedtoolcache\windows\Java_Temurin-Hotspot_jdk\11.0.20-8\x64
PyMuPDF/setup.py: JAVA_HOME_17_X64: C:\hostedtoolcache\windows\Java_Temurin-Hotspot_jdk\17.0.8-7\x64
PyMuPDF/setup.py: JAVA_HOME_8_X64: C:\hostedtoolcache\windows\Java_Temurin-Hotspot_jdk\8.0.382-5\x64
PyMuPDF/setup.py: LOCALAPPDATA: C:\Users\runneradmin\AppData\Local
PyMuPDF/setup.py: LOGONSERVER: \\fv-az365-843
PyMuPDF/setup.py: M2: C:\ProgramData\chocolatey\lib\maven\apache-maven-3.8.7\bin
PyMuPDF/setup.py: M2_REPO: C:\ProgramData\m2
PyMuPDF/setup.py: MAVEN_OPTS: -Xms256m
PyMuPDF/setup.py: NPM_CONFIG_PREFIX: C:\npm\prefix
PyMuPDF/setup.py: NUMBER_OF_PROCESSORS: 2
PyMuPDF/setup.py: OS: Windows_NT
PyMuPDF/setup.py: PATH: C:\Users\runneradmin\AppData\Local\Temp\pip-build-env-jrkcwo01\overlay\Scripts;C:\Users\runneradmin\AppData\Local\Temp\pip-build-env-jrkcwo01\normal\Scripts;C:\Program Files\PowerShell\7;C:\Users\runneradmin\AppData\Roaming\Python\Python311\Scripts;C:\hostedtoolcache\windows\Python\3.11.4\x64\Scripts;C:\hostedtoolcache\windows\Python\3.11.4\x64;C:\Program Files\MongoDB\Server\5.0\bin;C:\aliyun-cli;C:\vcpkg;C:\Program Files (x86)\NSIS\;C:\tools\zstd;C:\Program Files\Mercurial\;C:\hostedtoolcache\windows\stack\2.11.1\x64;C:\cabal\bin;C:\\ghcup\bin;C:\Program Files\dotnet;C:\mysql\bin;C:\Program Files\R\R-4.3.1\bin\x64;C:\SeleniumWebDrivers\GeckoDriver;C:\Program Files (x86)\sbt\bin;C:\Program Files (x86)\GitHub CLI;C:\Program Files\Git\bin;C:\Program Files (x86)\pipx_bin;C:\npm\prefix;C:\hostedtoolcache\windows\go\1.20.6\x64\bin;C:\hostedtoolcache\windows\Python\3.9.13\x64\Scripts;C:\hostedtoolcache\windows\Python\3.9.13\x64;C:\hostedtoolcache\windows\Ruby\3.0.6\x64\bin;C:\Program Files\OpenSSL\bin;C:\tools\kotlinc\bin;C:\hostedtoolcache\windows\Java_Temurin-Hotspot_jdk\8.0.382-5\x64\bin;C:\Program Files\ImageMagick-7.1.1-Q16-HDRI;C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\wbin;C:\ProgramData\kind;C:\Program Files\Microsoft\jdk-11.0.16.101-hotspot\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files\dotnet\;C:\ProgramData\Chocolatey\bin;C:\Program Files\PowerShell\7\;C:\Program Files\Microsoft\Web Platform Installer\;C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn\;C:\Program Files\Microsoft SQL Server\150\Tools\Binn\;C:\Program Files\Microsoft SQL Server\140\DTS\Binn\;C:\Program Files\Microsoft SQL Server\150\DTS\Binn\;C:\Program Files\Microsoft SQL Server\160\DTS\Binn\;C:\Strawberry\c\bin;C:\Strawberry\perl\site\bin;C:\Strawberry\perl\bin;C:\ProgramData\chocolatey\lib\pulumi\tools\Pulumi\bin;C:\Program 
Files\TortoiseSVN\bin;C:\Program Files\CMake\bin;C:\ProgramData\chocolatey\lib\maven\apache-maven-3.8.7\bin;C:\Program Files\Microsoft Service Fabric\bin\Fabric\Fabric.Code;C:\Program Files\Microsoft SDKs\Service Fabric\Tools\ServiceFabricLocalClusterManager;C:\Program Files\nodejs\;C:\Program Files\Git\cmd;C:\Program Files\Git\mingw64\bin;C:\Program Files\Git\usr\bin;C:\Program Files\GitHub CLI\;c:\tools\php;C:\Program Files (x86)\sbt\bin;C:\SeleniumWebDrivers\ChromeDriver\;C:\SeleniumWebDrivers\EdgeDriver\;C:\Program Files\Amazon\AWSCLIV2\;C:\Program Files\Amazon\SessionManagerPlugin\bin\;C:\Program Files\Amazon\AWSSAMCLI\bin\;C:\Program Files\Microsoft SQL Server\130\Tools\Binn\;C:\Program Files\LLVM\bin;C:\Users\runneradmin\.dotnet\tools;C:\Users\runneradmin\.cargo\bin;C:\Users\runneradmin\AppData\Local\Microsoft\WindowsApps
PyMuPDF/setup.py: PATHEXT: .COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC;.CPL
PyMuPDF/setup.py: PERFLOG_LOCATION_SETTING: RUNNER_PERFLOG
PyMuPDF/setup.py: PGBIN: C:\Program Files\PostgreSQL\14\bin
PyMuPDF/setup.py: PGDATA: C:\Program Files\PostgreSQL\14\data
PyMuPDF/setup.py: PGPASSWORD: root
PyMuPDF/setup.py: PGROOT: C:\Program Files\PostgreSQL\14
PyMuPDF/setup.py: PGUSER: postgres
PyMuPDF/setup.py: PHPROOT: c:\tools\php
PyMuPDF/setup.py: PIPX_BIN_DIR: C:\Program Files (x86)\pipx_bin
PyMuPDF/setup.py: PIPX_HOME: C:\Program Files (x86)\pipx
PyMuPDF/setup.py: PKG_CONFIG_PATH: C:\hostedtoolcache\windows\Python\3.11.4\x64/lib/pkgconfig
PyMuPDF/setup.py: POWERSHELL_DISTRIBUTION_CHANNEL: GitHub-Actions-win22
PyMuPDF/setup.py: POWERSHELL_UPDATECHECK: Off
PyMuPDF/setup.py: PROCESSOR_ARCHITECTURE: AMD64
PyMuPDF/setup.py: PROCESSOR_IDENTIFIER: Intel64 Family 6 Model 85 Stepping 7, GenuineIntel
PyMuPDF/setup.py: PROCESSOR_LEVEL: 6
PyMuPDF/setup.py: PROCESSOR_REVISION: 5507
PyMuPDF/setup.py: PROGRAMDATA: C:\ProgramData
PyMuPDF/setup.py: PROGRAMFILES: C:\Program Files
PyMuPDF/setup.py: PROGRAMFILES(X86): C:\Program Files (x86)
PyMuPDF/setup.py: PROGRAMW6432: C:\Program Files
PyMuPDF/setup.py: PSMODULEPATH: C:\Users\runneradmin\Documents\PowerShell\Modules;C:\Program Files\PowerShell\Modules;c:\program files\powershell\7\Modules;C:\\Modules\azurerm_2.1.0;C:\\Modules\azure_2.1.0;C:\Users\packer\Documents\WindowsPowerShell\Modules;C:\Program Files\WindowsPowerShell\Modules;C:\Windows\system32\WindowsPowerShell\v1.0\Modules;C:\Program Files\Microsoft SQL Server\130\Tools\PowerShell\Modules\
PyMuPDF/setup.py: PUBLIC: C:\Users\Public
PyMuPDF/setup.py: PYTHON2_ROOT_DIR: C:\hostedtoolcache\windows\Python\3.11.4\x64
PyMuPDF/setup.py: PYTHON3_ROOT_DIR: C:\hostedtoolcache\windows\Python\3.11.4\x64
PyMuPDF/setup.py: PYTHONLOCATION: C:\hostedtoolcache\windows\Python\3.11.4\x64
PyMuPDF/setup.py: PYTHON_ROOT_DIR: C:\hostedtoolcache\windows\Python\3.11.4\x64
PyMuPDF/setup.py: RTOOLS43_HOME: C:\rtools43
PyMuPDF/setup.py: RUNNER_ARCH: X64
PyMuPDF/setup.py: RUNNER_ENVIRONMENT: github-hosted
PyMuPDF/setup.py: RUNNER_NAME: GitHub Actions 14
PyMuPDF/setup.py: RUNNER_OS: Windows
PyMuPDF/setup.py: RUNNER_PERFLOG: C:\actions\perflog
PyMuPDF/setup.py: RUNNER_TEMP: D:\a\_temp
PyMuPDF/setup.py: RUNNER_TOOL_CACHE: C:\hostedtoolcache\windows
PyMuPDF/setup.py: RUNNER_TRACKING_ID: github_fdbbe0e4-2a85-4027-9a4c-2501f0b34668
PyMuPDF/setup.py: RUNNER_WORKSPACE: D:\a\PlotScan
PyMuPDF/setup.py: SBT_HOME: C:\Program Files (x86)\sbt\
PyMuPDF/setup.py: SELENIUM_JAR_PATH: C:\selenium\selenium-server.jar
PyMuPDF/setup.py: STATS_EXT: true
PyMuPDF/setup.py: STATS_EXTP: https://provjobdsettingscdn.blob.core.windows.net/settings/provjobdsettings-0.5.154/provjobd.data
PyMuPDF/setup.py: STATS_NM: true
PyMuPDF/setup.py: STATS_PFS: true
PyMuPDF/setup.py: STATS_PT: 20
PyMuPDF/setup.py: STATS_RDCL: true
PyMuPDF/setup.py: STATS_TIS: mining
PyMuPDF/setup.py: STATS_TRP: true
PyMuPDF/setup.py: STATS_UE: true
PyMuPDF/setup.py: STATS_V3PS: true
PyMuPDF/setup.py: STATS_VMD: true
PyMuPDF/setup.py: SYSTEMDRIVE: C:
PyMuPDF/setup.py: SYSTEMROOT: C:\Windows
PyMuPDF/setup.py: TEMP: C:\Users\RUNNER~1\AppData\Local\Temp
PyMuPDF/setup.py: TMP: C:\Users\RUNNER~1\AppData\Local\Temp
PyMuPDF/setup.py: USERDOMAIN: fv-az365-843
PyMuPDF/setup.py: USERDOMAIN_ROAMINGPROFILE: fv-az365-843
PyMuPDF/setup.py: USERNAME: runneradmin
PyMuPDF/setup.py: USERPROFILE: C:\Users\runneradmin
PyMuPDF/setup.py: VCPKG_INSTALLATION_ROOT: C:\vcpkg
PyMuPDF/setup.py: WINDIR: C:\Windows
PyMuPDF/setup.py: WIX: C:\Program Files (x86)\WiX Toolset v3.11\
PyMuPDF/setup.py: PIP_BUILD_TRACKER: C:\Users\runneradmin\AppData\Local\Temp\pip-build-tracker-uqs6zbv6
PyMuPDF/setup.py: PYTHONNOUSERSITE: 1
PyMuPDF/setup.py: PYTHONPATH: C:\Users\runneradmin\AppData\Local\Temp\pip-build-env-jrkcwo01\site
PyMuPDF/setup.py: PEP517_BUILD_BACKEND: setuptools.build_meta:__legacy__
PyMuPDF/setup.py: mupdf_tgz already exists: C:\Users\runneradmin\AppData\Local\Temp\pip-install-u26aisza\pymupdf_092cd445043d4d4ab040208f4c305ef7\mupdf.tgz
PyMuPDF/setup.py: Extracting C:\Users\runneradmin\AppData\Local\Temp\pip-install-u26aisza\pymupdf_092cd445043d4d4ab040208f4c305ef7\mupdf.tgz
PyMuPDF/setup.py: mupdf_local='mupdf-1.20.3-source/'
PyMuPDF/setup.py: Building mupdf.
PyMuPDF/setup.py: Cannot find devenv.com in default locations, using: 'devenv.com'
PyMuPDF/setup.py: Building MuPDF by running: cd mupdf-1.20.3-source/&&"devenv.com" platform/win32/mupdf.sln /Build "ReleaseTesseract|x64" /Project mupdf
'"devenv.com"' is not recognized as an internal or external command,
operable program or batch file.
Traceback (most recent call last):
File "C:\hostedtoolcache\windows\Python\3.11.4\x64\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\hostedtoolcache\windows\Python\3.11.4\x64\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\hostedtoolcache\windows\Python\3.11.4\x64\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 251, in build_wheel
return _build_backend().build_wheel(wheel_directory, config_settings,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\runneradmin\AppData\Local\Temp\pip-build-env-jrkcwo01\overlay\Lib\site-packages\setuptools\build_meta.py", line 416, in build_wheel
return self._build_with_temp_dir(['bdist_wheel'], '.whl',
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\runneradmin\AppData\Local\Temp\pip-build-env-jrkcwo01\overlay\Lib\site-packages\setuptools\build_meta.py", line 401, in _build_with_temp_dir
self.run_setup()
File "C:\Users\runneradmin\AppData\Local\Temp\pip-build-env-jrkcwo01\overlay\Lib\site-packages\setuptools\build_meta.py", line 488, in run_setup
self).run_setup(setup_script=setup_script)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\runneradmin\AppData\Local\Temp\pip-build-env-jrkcwo01\overlay\Lib\site-packages\setuptools\build_meta.py", line 338, in run_setup
exec(code, locals())
File "<string>", line 550, in <module>
File "C:\hostedtoolcache\windows\Python\3.11.4\x64\Lib\subprocess.py", line 571, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'cd mupdf-1.20.3-source/&&"devenv.com" platform/win32/mupdf.sln /Build "ReleaseTesseract|x64" /Project mupdf' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for PyMuPDF
Building wheel for python-docx (pyproject.toml): started
Building wheel for python-docx (pyproject.toml): finished with status 'done'
Created wheel for python-docx: filename=python_docx-0.8.11-py3-none-any.whl size=184518 sha256=26b59cd3d7f603ace7bf3198189f9571720fb133ea6aacd4e7eada6016a9792b
Stored in directory: c:\users\runneradmin\appdata\local\pip\cache\wheels\b2\11\b8\209e41af524253c9ba6c2a8b8ecec0f98ecbc28c732512803c
Building wheel for future (pyproject.toml): started
Building wheel for future (pyproject.toml): finished with status 'done'
Created wheel for future: filename=future-0.18.3-py3-none-any.whl size=492053 sha256=731420c8b7ac491e519c603ee70c6bf80b8465a145e8ac3c361cf6d9b1240f6d
Stored in directory: c:\users\runneradmin\appdata\local\pip\cache\wheels\da\19\ca\9d8c44cd311a955509d7e13da3f0bea42400c469ef825b580b
Successfully built fire python-docx future
Failed to build PyMuPDF
ERROR: Could not build wheels for PyMuPDF, which is required to install pyproject.toml-based projects
</code></pre>
| <python><python-3.x><windows><github-actions><python-3.11> | 2023-08-01 20:28:30 | 1 | 541 | Eftal Gezer |
76,814,936 | 6,525,082 | How do I create a python installation as read only? | <p>Suppose I install Python at the following path:</p>
<pre><code>/home/username/path/to/bin/python
</code></pre>
<p>How do I make this installation "read-only", as if it were installed by root? Then any derived environment command would not alter this path. Furthermore, it should not be possible to install any packages in "/home/username/path/to". The user will still have access to pip and conda.</p>
<p>Of course, I could tar and zip the original for safekeeping, but I would prefer an alternative solution.</p>
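<p>To make my intent concrete, here is a rough sketch of what I imagine (the helper name is mine, and of course <code>chmod</code> alone would not stop root):</p>

```python
import os
import stat

def make_read_only(root: str) -> None:
    """Recursively remove the write bits for user, group and others."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            p = os.path.join(dirpath, name)
            mode = os.stat(p).st_mode
            os.chmod(p, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))
    mode = os.stat(root).st_mode
    os.chmod(root, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))
```

<p>This makes accidental <code>pip install</code> into the tree fail with a permission error for a normal user, which is roughly the behaviour I'm after.</p>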
| <python> | 2023-08-01 20:23:55 | 1 | 1,436 | wander95 |
76,814,893 | 2,107,632 | Python mutable/immutable and local variables | <p>I am quite confused about a difference in the behavior of the two following code snippets, and in particular about what to expect from them given Python's mutability/immutability concept:</p>
<p>Snippet 1</p>
<pre><code>def some_function_1(x):
x[0] = 42
input = (3,)
_ = some_function_1(input)
print(input)
</code></pre>
<p>Which produces <code>TypeError: 'tuple' object does not support item assignment</code>, which I understand on the grounds of tuples being immutable. This is OK. But ...</p>
<p>Snippet 2</p>
<pre><code>def some_function_2(x):
x = 42
input = (3,)
_ = some_function_2(input)
print(input)
</code></pre>
<p>Which ... <em>1]</em> does NOT complain/crash at the attempt to alter (the immutable!) <code>x</code> - in contrast with the previous snippet - and <em>2]</em> prints the <code>input</code> as the initial <code>(3,)</code>, i.e., as if the <code>input</code> were not "exposed" in any way to the assignment inside the body of <code>some_function_2</code>.</p>
<p>Why the difference? What's happening here? Why is the non-mutability of tuples showing only in the 1st snippet?</p>
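<p>For what it's worth, inspecting identities with <code>id()</code> seems to confirm that snippet 2 merely rebinds the <em>local name</em> <code>x</code> to a new object, so the caller's tuple is never touched (this is my own check, not an authoritative explanation):</p>

```python
def rebind(x):
    x = 42             # rebinds the local name x to a new object
    return id(x)

t = (3,)
before = id(t)
inner_id = rebind(t)

assert id(t) == before     # t still refers to the very same tuple object
assert t == (3,)           # and its value is unchanged
assert inner_id != before  # inside the function, x ended up naming a different object
```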
| <python><immutability> | 2023-08-01 20:13:31 | 3 | 4,998 | Simon Righley |
76,814,748 | 3,357,270 | How can I assert that a pdf file was returned in python? | <p>I am working on asserting that a pdf is returned from my request.</p>
<p>Here is what my test looks like so far.</p>
<pre><code>@mock.patch('path.to.class._get_file', return_value=MockResponse(status.HTTP_200_OK, 'application/pdf'))
def test_get_should_successfully_return_the_requested_pdf(self, get_file_mock):
response = self.client.get(f'/api/v1/path/to/file/abc123.pdf', content_type='application/vnd.api+json')
self.assertEqual(response.status_code, status.HTTP_200_OK) # works great
self.assertEqual(response['Content-Type'], 'application/pdf') # works great
self.assertEqual(response, <abc123.pdf>) # not so great
</code></pre>
<p>If I do a <code>print</code> to see what's in the response:</p>
<pre><code>print(response)
<HttpResponse status_code=200, "application/pdf">
</code></pre>
<p>I am pretty sure I am not setting up the <code>@patch</code> correctly. Specifically, this:</p>
<pre><code>status.HTTP_200_OK, 'application/pdf'
</code></pre>
<p>I've seen a lot of posts about reading or opening files, but I just need to make sure it was in fact a pdf (file) that was returned.</p>
<p>How can I set my mock up so that I can assert a pdf (file) was returned?</p>
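<p>One idea I had (the helper below is hypothetical, but the magic bytes are standard: every PDF file begins with <code>%PDF-</code>) is to assert on the payload itself:</p>

```python
def looks_like_pdf(payload: bytes) -> bool:
    """Cheap sanity check: real PDF files begin with the %PDF- magic bytes."""
    return payload.startswith(b"%PDF-")

# In the Django test I would then do something like:
#   self.assertEqual(response["Content-Type"], "application/pdf")
#   self.assertTrue(looks_like_pdf(response.content))
assert looks_like_pdf(b"%PDF-1.4\n...")      # a typical PDF header passes
assert not looks_like_pdf(b"<html></html>")  # anything else fails
```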
| <python><django><django-rest-framework><python-unittest> | 2023-08-01 19:45:05 | 1 | 4,544 | Damon |
76,814,700 | 11,278,478 | Can I create .sas7bdat file using Python? | <p>Can I create a .sas7bdat file using Python? I tried searching online and did not find any package that I could use to create a .sas7bdat file.</p>
| <python><python-3.x><pandas><sas> | 2023-08-01 19:34:35 | 0 | 434 | PythonDeveloper |
76,814,642 | 11,487,091 | Deploy Django app as a DigitalOcean Cloud function? | <p>The scenario is: I want to deploy my Django application as a PaaS cloud function. I have tried searching the internet but came up with nothing. If this is possible, what will the complications be? Do I need to rewrite the application from scratch in vanilla Python to deploy it as a cloud function?</p>
| <python><django><digital-ocean> | 2023-08-01 19:24:47 | 0 | 373 | Muhammad Hammad |
76,814,518 | 125,244 | Plotly Pandas ta.alma behaves strangely | <p>As I started just over a week ago and had never looked at Python before, I'm surprised how easily things can be done thanks to the good libraries available.</p>
<p>I'm converting a Tradingview script I made to Python and have come to the point where I want to draw my indicator(s) and compare the results.</p>
<p>Drawing the alma indicator on an HA chart, it was an unpleasant surprise that the results are much different from TV.</p>
<p>I checked the internet, found several resources, and compared the code I found.</p>
<pre><code># Pandas implementation of ta.alma provides VERY different results compared to TV !!
# Tradingview => https://www.tradingcode.net/tradingview/arnaud-legoux-average/
# Github bug described => https://github.com/twopirllc/pandas-ta/pull/374
# Github fix aug 15th, 2021 => https://github.com/twopirllc/pandas-ta/pull/374/commits/752b69e86e19db64cdf161981d0ad8c897efefea
# Original implementation => https://www.sierrachart.com/SupportBoard.php?PostID=231318#P231318
# Prorealcode implementation => https://www.prorealcode.com/prorealtime-indicators/alma-arnaud-legoux-moving-average/
# What is wrong ?
</code></pre>
<p>First I notice that the calculation of Pandas.ta.alma works on all data, while the TV and ProRealCode implementations work on a subset and ONE value for 'close'.</p>
<p>This raises the question "will I have to calculate ALL the data if I add just one new bar to the data?"</p>
<p>If so, that seems inefficient and time-consuming. How can I avoid that?</p>
<p>But much more important: what about the results from ta.alma?
What causes the big difference? Am I using an old library, even though it was stated (and verified) that the bug was fixed on Aug 15th, 2021?
How can I check that?</p>
<p>From the pictures attached you can see how the blue alma-line follows the candles very well at TV.</p>
<p>But with Pandas ta.alma and "offeset=0" it looks like the alma-line has been shifted.
I tried using 'offset=-9' and 'offset=-10' as I use 'length= 9' but the result is still far from TV.
Only if I use 'offset= -5' results are almost the same like TV.
Why should I use that value???</p>
<p>I also tried with and without df.fillna(0), but it doesn't make much difference (apart from the undesirable line at the start).</p>
<p>Something must be wrong, but what?
As it is, I can't use it despite all the positive things.
Any help will be appreciated.</p>
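<p>To cross-check the library, I also hand-rolled ALMA from one common published formulation (a Gaussian-weighted rolling window with peak <code>m = offset * (length - 1)</code> and width <code>s = length / sigma</code>) - this is my own sketch, not the pandas-ta code:</p>

```python
import math

def alma(prices, length=9, offset=0.85, sigma=6.0):
    """Arnaud Legoux MA over a rolling window (one common formulation)."""
    m = offset * (length - 1)          # peak position of the Gaussian window
    s = length / sigma                 # width of the Gaussian window
    weights = [math.exp(-((j - m) ** 2) / (2 * s * s)) for j in range(length)]
    norm = sum(weights)
    out = [None] * (length - 1)        # not enough history for the first bars
    for t in range(length - 1, len(prices)):
        window = prices[t - length + 1 : t + 1]
        out.append(sum(w * p for w, p in zip(weights, window)) / norm)
    return out

# A flat series must come out unchanged, which is a quick sanity check:
flat = alma([5.0] * 12, length=9)
```

<p>Note that each new value only needs the last <code>length</code> prices, so appending one bar does not require recomputing the whole history.</p>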
<p>My code demonstrates the problem if you change 'barsOffset = -5'</p>
<pre><code>from ib_insync import *
from datetime import datetime
import pandas_ta as ta # TA-lib https://youtu.be/lij39o0_L2I and https://youtu.be/W_kKPp9LEFY
import pandas as pd
import plotly.graph_objects as go
import plotly.express as px
pd.set_option('display.max_rows', None) # Display all rows
pd.set_option('display.max_columns', 25)
pd.set_option('display.width', 1000) # equivalent with pd.options.display.width = 1000
pd.options.display.width = 0 # Panda will autodetect the size of the terminal window (and display columns accordingly)
pd.options.display.max_colwidth = 50 # Maximum width of a column (rest will be replaced by ...)
ib = IB()
ib.connect('127.0.0.1', 7497, clientId=0)
duration = "1 D" # How far back in history should data be retrieved
barSize = "1 min" # Timeframe of the bars
contract = Index(symbol="EOE", exchange="FTA", currency="EUR")
ib.qualifyContracts(contract)
ticker = contract.symbol
bars = ib.reqHistoricalData(contract, # the contract of interest
endDateTime= '', # retrieve bars up until current time
keepUpToDate=True, # keep updating the data (requires endDateTime="")
formatDate= 1, # formatDate=2 => for intraday => UTC
durationStr= duration, # time span of all bars ('1 D')
barSizeSetting= barSize, # time period of 1 bar ('1 min')
whatToShow= 'trades', # source of the data ('trades' => actual trades)
useRTH= True) # show only data from Regular Trading Hours
df = util.df(bars) # Create a Pandas dataframe from the bars
distAlma = .85 # Default distribution factor
sigmaAlma = 6 # Default sigma
barsAlma = 9 # Use 9 bars for calculating alma
barsOffset = -5 # Not present in TV; It turns out it must be 5 or 6 to provide results comparable with TV
dfHA = df.ta.ha() # Fastest calculation of Heikin Ashi
df = pd.concat([df, dfHA], axis=1) # Add columns generated by ta.ha to df
df["alma"] = dfHA.ta.alma(close="HA_close", length=barsAlma, sigma= sigmaAlma, offset_distribution= distAlma, offset=barsOffset)
df.fillna(0, inplace=True) # Replace all NaN with 0 to avoid problems while comparing values but causes drawing lines for the first bars
strategy = 1 # Discriminates between the type of candles to show combined with the strategy to use
if strategy == 1:
openPrices = df.HA_open
highPrices = df.HA_high
lowPrices = df.HA_low
closePrices = df.HA_close
typeCandles = "HA-candles"
else:
openPrices = df.open
highPrices = df.high
lowPrices = df.low
closePrices = df.close
typeCandles = "Price"
datePrices = df["date"]
fig = go.Figure()
fig.add_trace(go.Candlestick(name= typeCandles, x= datePrices,
open= openPrices, high= highPrices, low= lowPrices, close= closePrices))
fig.add_trace(go.Scatter(x=datePrices, y=df.alma, mode='lines', name='alma', line_color="rgb(0,0,255)", fill=None))
maxRange = df.HA_high.max() # Maximum price
minRange = df.HA_low[df.HA_low>0].min() # Minimum price skipping NaN and zero values
extraRange = 0.05 * (maxRange - minRange) # Take 5% extra space for the range above and below
layoutFigures = dict ({
'title': ticker + " HA Candles",
'xaxis_title': "Date",
'yaxis_title': "Price",
'template': "plotly_dark", # Dark backgroud
'dragmode': "pan", # Start in "pan" mode
'yaxis_range': [minRange - extraRange, maxRange + extraRange], # Set y-axis range from 5% below the lowest to 5% above the highest price
'xaxis_rangeslider_visible': False # Hide the zoom slider window
})
fig.update_layout(layoutFigures) # Apply all layout specifications
fig.update_yaxes(fixedrange=False) # Allow changing the yaxes
configFigures = dict({ # https://plotly.com/python/configuration-options/#enabling-scroll-zoom
'scrollZoom': True, # Use the mousewheel for zooming
'displayModeBar': True, # Show the modeBar while hovering over it (False = invisible)
'displaylogo': False, # Hide plotly logo from modeBar
'staticPlot': False, # Make a dynamic plot (do NOT create a static plot)
'toImageButtonOptions':
{ 'format': 'jpeg', # one of png, svg, jpeg, webp --> jpeg chosen
'filename': ticker + "_" + datetime.now().strftime("%Y%m%d-%H%M%S"), # The name of the ticker will be used as filename
'height': 500, 'width': 700, 'scale': 1 # Multiply title/legend/axis/canvas sizes by this factor
},
'modeBarButtonsToRemove': ['lasso2D', 'select2D']
})
fig.show(config= configFigures)
ib.disconnect()
</code></pre>
<p><a href="https://i.sstatic.net/i4F4i.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/i4F4i.jpg" alt="Tradingview output of alma" /></a></p>
<p><a href="https://i.sstatic.net/nzGDY.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nzGDY.jpg" alt="ta.alma output with offset= 0" /></a>
<a href="https://i.sstatic.net/p1vx8.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p1vx8.jpg" alt="ta.alma output with offset= -9" /></a>
<a href="https://i.sstatic.net/SVUjm.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SVUjm.jpg" alt="ta.alma output with offset= -5" /></a></p>
| <python><pandas><plotly> | 2023-08-01 19:02:30 | 1 | 1,110 | SoftwareTester |
76,814,412 | 2,007,537 | Plotting just the time from a datetime | <p>Currently I've got a dataframe <code>on_times</code> which has a <code>DatetimeIndex</code>. I'd like to plot the frequency of certain events in the dataframe using seaborn's swarmplot, without regard to the date part of the index. What I thought would work was something like this:</p>
<pre class="lang-py prettyprint-override"><code>import seaborn as sns
import pandas as pd
on_times = pd.read_csv("fridge_freezer_on.csv", parse_dates = ["local_updated"],
index_col = "local_updated")
on_times["Day"] = on_times.index.strftime("%a")
on_times["Time"] = on_times.index.time
fig,ax = plt.subplots()
sns.swarmplot(data = on_times, x = "Time", y = "Day", hue = "id")
</code></pre>
<p>While this code will produce a plot, it seems to be treating the <code>Time</code> column as a string instead of a time. None of the times are in order on the x-axis, unlike when I plot a <code>datetime</code> column. I'm curious what changes I need to make for the x-axis to run from 00:00:00 to 23:59:59 and to plot the points at the corresponding times.</p>
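<p>The only workaround I can think of is converting each timestamp to seconds since midnight so the axis becomes genuinely numeric (this is my own sketch; the commented assignment mirrors my <code>Time</code> column):</p>

```python
import pandas as pd

idx = pd.DatetimeIndex(["2023-08-01 09:30:15", "2023-07-15 23:59:59", "2023-06-02 00:00:01"])
seconds = idx.hour * 3600 + idx.minute * 60 + idx.second
print(list(seconds))  # [34215, 86399, 1]

# on_times["Time"] = on_times.index.hour * 3600 + on_times.index.minute * 60 + on_times.index.second
# sns.swarmplot(data=on_times, x="Time", y="Day", hue="id")  # x axis would now sort numerically
```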
| <python><pandas><seaborn> | 2023-08-01 18:42:34 | 1 | 1,227 | CopOnTheRun |
76,814,390 | 3,979,160 | MoviePy cannot get transparent background in countdown animation | <p>For unclear reasons I cannot get a countdown animation to have a transparent background.</p>
<pre><code>background = ColorClip(size=(W, H), color=(0, 255, 0)).set_opacity(1) #green background
countdownduration = 5
PartCountDownClip = VideoClip(make_frame=lambda t: TextClip(countdown_text(t), font="Arial", fontsize=450, color='red', stroke_color=StrokeColor, stroke_width=StrokeWidth+5).get_frame(t), duration=countdownduration)
final = CompositeVideoClip([background, testclip.set_duration(countdownduration)])
final = final.set_duration(countdownduration)
output_path = "countdown_animation.mp4"
final.write_videofile(output_path, fps=30, codec='libx264')
</code></pre>
<p>This generates a black box on a green background with a red countdown (5, 4, 3, 2, 1).
But when I instead use:</p>
<pre><code>PartCountDownClip = TextClip("test", font="Arial", fontsize=450, color='red', stroke_color=StrokeColor, stroke_width=StrokeWidth+5) #, bg_color='transparent')
</code></pre>
<p>The result is the word "test" on a green background, so without the black box! Just like I want it, but now there is no counting down, just the word 'test'.</p>
<p>So it seems that, because of the way I make the countdown animation, I cannot get the black box behind the text removed. Why? How do I fix it? Things like adding <code>bg_color='transparent'</code> do not help.</p>
<p>Oh, and countdown_text(t) just returns a string from 5 to 1 based on the input value.</p>
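<p>For completeness, my <code>countdown_text</code> helper is essentially just this:</p>

```python
def countdown_text(t: float, duration: int = 5) -> str:
    """Map an elapsed time t in [0, duration) to the strings '5', '4', ..., '1'."""
    return str(max(1, duration - int(t)))

assert countdown_text(0.0) == "5"
assert countdown_text(1.2) == "4"
assert countdown_text(4.9) == "1"
```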
| <python><moviepy> | 2023-08-01 18:39:09 | 1 | 523 | Hasse |
76,814,253 | 1,194,883 | Install conda environment hosted on anaconda.org | <p>It's possible to host <code>environment.yml</code> <em>files</em> directly on anaconda.org. (I'm not talking about hosting packages; I'm really talking about <code>environment.yml</code> files.) IIRC, there used to be a way to just install that directly using the <code>conda</code> command line — something like <code>conda install https://anaconda.org/<user>/<env-name></code>. Is this still possible?</p>
| <python><anaconda><conda> | 2023-08-01 18:14:17 | 1 | 20,375 | Mike |
76,814,175 | 5,437,090 | How to avoid lemmatizing already lemmatized sentences of a row in pandas dataframe for speedup | <p><strong>Given</strong>:</p>
<p>A simple and small pandas dataframe as follows:</p>
<pre><code>df = pd.DataFrame(
{
"user_ip": ["u7", "u3", "u1", "u9", "u4","u8", "u1", "u2", "u5"],
"raw_sentence": ["First sentence!", np.nan, "I go to school everyday!", "She likes chips!", "I go to school everyday!", "This is 1 sample text!", "She likes chips!", "This is the thrid sentence.", "I go to school everyday!"],
}
)
user_ip raw_sentence
0 u7 First sentence!
1 u3 NaN
2 u1 I go to school everyday!
3 u9 She likes chips!
4 u4 I go to school everyday! <<< duplicate >>>
5 u8 This is 1 sample text!
6 u1 She likes chips! <<< duplicate >>>
7 u2 This is the thrid sentence.
8 u5 I go to school everyday! <<< duplicate >>>
</code></pre>
<p><strong>Goal</strong>:</p>
<p>I wonder if I could avoid calling <code>map</code>, or consider other strategies, for rows with duplicated (exactly identical) sentences in the <code>raw_sentence</code> column. My intention is to speed up my implementation for a bigger pandas dataframe (<code>~100K</code> rows).</p>
<p><strong>[<em>Inefficient</em>] Solution</strong>:</p>
<p>Right now, I take advantage of <code>.map()</code> with a <code>lambda</code>, which goes through each row and calls the <code>get_lm()</code> function to retrieve the lemmas of the raw input sentence as follows:</p>
<pre><code>import nltk
nltk.download('all', quiet=True, raise_on_error=True,)
STOPWORDS = nltk.corpus.stopwords.words('english')
wnl = nltk.stem.WordNetLemmatizer()
tokenizer = nltk.tokenize.RegexpTokenizer(r'\w+')
def get_lm(input_sent:str="my text!"):
tks = [ w for w in tokenizer.tokenize(input_sent.lower()) if not w in STOPWORDS and len(w) > 1 and not w.isnumeric() ]
lms = [ wnl.lemmatize(w, t[0].lower()) if t[0].lower() in ['a', 's', 'r', 'n', 'v'] else wnl.lemmatize(w) for w, t in nltk.pos_tag(tks)]
return lms
df["lemma"] = df["raw_sentence"].map(lambda raw: get_lm(input_sent=raw), na_action='ignore')
user_ip raw_sentence lemma
0 u7 First sentence! [first, sentence] <<< 1st occurrence => lemmatization OK! >>>
1 u3 NaN NaN <<< ignore NaN using na_action='ignore' >>>
2 u1 I go to school everyday! [go, school, everyday] <<< 1st occurrence => lemmatization OK! >>>
3 u9 She likes chips! [like, chip] <<< 1st occurrence => lemmatization OK! >>>
4 u4 I go to school everyday! [go, school, everyday] <<< already lemmatized, no need to do it again >>>
5 u8 This is 1 sample text! [sample, text] <<< 1st occurrence => lemmatization OK! >>>
6 u1 She likes chips! [like, chip] <<< already lemmatized, no need to do it again >>>
7 u2 This is the thrid sentence. [thrid, sentence] <<< 1st occurrence => lemmatization OK! >>>
8 u5 I go to school everyday! [go, school, everyday] <<< already lemmatized, no need to do it again >>>
</code></pre>
<p>Is there any more efficient approach to fix this issue?</p>
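One direction I've been considering: lemmatize each distinct sentence exactly once and broadcast the result back to all rows. Sketched below with a trivial stand-in lemmatizer so it runs without NLTK; the real <code>get_lm</code> above would be dropped in unchanged:

```python
import numpy as np
import pandas as pd

def get_lm(sent: str):
    # stand-in for the real NLTK-based lemmatizer above; any callable works here
    return sent.lower().rstrip("!.").split()

df = pd.DataFrame({
    "user_ip": ["u7", "u3", "u1", "u4"],
    "raw_sentence": ["First sentence!", np.nan, "She likes chips!", "She likes chips!"],
})

# lemmatize each *unique* sentence exactly once...
cache = {s: get_lm(s) for s in df["raw_sentence"].dropna().unique()}
# ...then broadcast; sentences missing from the dict (incl. NaN) stay NaN
df["lemma"] = df["raw_sentence"].map(cache)
```

With `~100K` rows and many duplicates, this should cut the number of `get_lm` calls down to the number of unique sentences.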
<p>Cheers,</p>
| <python><pandas><nlp><nltk><lemmatization> | 2023-08-01 18:00:52 | 2 | 1,621 | farid |
76,814,014 | 19,048,408 | With a Python context manager, I want to have a print statement that logs the row count before and after an operation | <p>Using a function defined in a Python context manager, I want to modify a Polars dataframe by reassignment. I then want the function in the context manager to print the previous and new row counts.</p>
<p>I tried the following:</p>
<pre class="lang-py prettyprint-override"><code>import logging
from contextlib import contextmanager
import polars as pl
logger = logging.getLogger(__name__)
def count_rows(df: pl.DataFrame) -> int:
""" Counts the number of rows in a polars dataframe. """
return df.select(pl.count()).item()
# define the function I want to work
@contextmanager
def log_row_count_change(df: pl.DataFrame, action_desc: str = '', df_name: str = 'df') -> None:
""" An easy way to log how many rows were added or removed from a dataframe during filters, joins, etc. """
try:
row_count_before = count_rows(df)
logger.debug(f"Before '{action_desc}' action on '{df_name}', row count: {row_count_before:,}")
yield
finally:
row_count_after = count_rows(df)
row_count_change = row_count_after - row_count_before
row_count_change_pct = row_count_change / row_count_before * 100
print(f"During '{action_desc}' action on '{df_name}', row count changed by {row_count_change:,} rows ({row_count_before:,} -> {row_count_after:,}) ({row_count_change_pct:.2f}%).")
# define a dataframe for testing
df = pl.DataFrame({"a":[1,1,2], "b":[2,2,3], "c":[1,2,3]})
# call the main part
with log_row_count_change(df, 'drop duplicates on column a', 'df'):
df = df.unique(subset=['a'])
</code></pre>
<p>When you run the above, it shows the row count equal to 3 both before and after. I want it to show a row count of 3 before and 2 after.</p>
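I suspect the reassignment itself may be the issue; here is a minimal illustration with plain lists (assumption on my part: the dataframe behaves the same way) of how the context manager keeps seeing the original object:

```python
# The context manager's `df` parameter holds a reference to the object that
# existed when the `with` block started. Rebinding the outer name creates a
# brand-new object; the captured reference is untouched.
data = [1, 1, 2]
captured = data            # what log_row_count_change's `df` parameter holds
data = list(set(data))     # rebinds the name, as in `df = df.unique(subset=['a'])`
print(len(captured), len(data))  # 3 2
```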
| <python><python-3.x><python-polars> | 2023-08-01 17:34:07 | 2 | 468 | HumpbackWhale194 |
76,813,935 | 7,082,163 | How to catch run-time errors with Literal used with Pydantic model | <p>I have the following python types:</p>
<pre><code>from typing import Literal
from pydantic import BaseModel, ConfigDict, ValidationError, NonNegativeFloat
Fruit= Literal['Apple','Banana','Kiwi','Orange']
class Recipe(BaseModel):
model_config = ConfigDict(frozen=True)
fruit: Fruit
</code></pre>
<p>This all works as expected but trying to use <code>Fruit</code> on its own elsewhere does not catch runtime issues. For instance, if I have the function:</p>
<pre><code>def show_fruit(fruit: Fruit):
print(fruit)
show_fruit('carrot')
</code></pre>
<p>The static type checker works, but at run-time it doesn't throw any validation errors. I guess this is because <code>Literal</code> provides no run-time checking, but then how can I incorporate the two?</p>
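The closest I've come is building a manual runtime guard from the same <code>Literal</code> via <code>typing.get_args</code> (stdlib-only sketch; I'd still prefer something integrated with Pydantic):

```python
from typing import Literal, get_args

Fruit = Literal['Apple', 'Banana', 'Kiwi', 'Orange']

def show_fruit(fruit: Fruit) -> None:
    # runtime guard derived from the same Literal the static checker uses
    if fruit not in get_args(Fruit):
        raise ValueError(f"{fruit!r} is not a valid Fruit")
    print(fruit)

show_fruit('Apple')      # fine
# show_fruit('carrot')   # would raise ValueError at runtime
```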
| <python><python-typing><pydantic> | 2023-08-01 17:21:34 | 1 | 1,151 | guy |
76,813,918 | 7,169,895 | Pandas last is returning more data than it should. Is this pandas or me? | <p>I am using <code>last</code> to filter date data. However, it seems not to be working: both timeframes, 7D and I believe 14D, return more data than they should. 7D for CAT should return no data. I also tried 1W with the same result. Am I doing something wrong, or is pandas?</p>
<p>The method to gather the data is below.</p>
<pre><code># Gets the openinsider information for insider trading of stocks
def get_openinsider_stock_activity(stocks):
# Format search string from stocks
# URL gets 500 entries only
stocks = stocks.upper().replace(' ', '+') # format the tickers to be a list
print('Search String: ', stocks)
url = f'http://openinsider.com/screener?s={stocks}&o=&pl=&ph=&ll=&lh=&fd=1461&fdr=&td=0&tdr=&fdlyl=&fdlyh=&daysago=&xp=1&xs=1&vl=&vh=&ocl=&och=&sic1=-1&sicl=100&sich=9999&grp=0&nfl=&nfh=&nil=&nih=&nol=&noh=&v2l=&v2h=&oc2l=&oc2h=&sortcol=0&cnt=300&page=1'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'lxml')
try:
rows = soup.find('table', {'class': 'tinytable'}).find('tbody').findAll('tr')
except Exception as e:
print("Error! Skipping")
print(f"This URL was not successful: {url}")
print(e)
return
# Get headers first
header_row = soup.find('table', {'class': 'tinytable'}).find('thead').findAll('tr')
headers = []
for row in header_row:
for col in row:
for header in col:
text = col.text.strip()
if text == '':
continue
headers.append(unicodedata.normalize('NFKD', text))
body_rows = soup.find('table', {'class': 'tinytable'}).find('tbody').findAll('tr')
# Iterate over rows of table
insider_data = []
for row in body_rows:
cols = row.findAll('td')
if not cols:
continue
for cell in cols:
cell_value = cell.find('a').text.strip() if cell.find('a') else cell.text.strip()
body = {key: cols[index].find('a').text.strip() if cols[index].find('a') else cols[index].text.strip()
for index, key in enumerate(headers)}
insider_data.append(body)
insider_data = pd.DataFrame(insider_data)
return insider_data
</code></pre>
<p>The method that filters the data using .last() is below</p>
<pre><code> data = []
for stock in ['AAPL', 'BA', 'GME', 'XOM', 'CAT']:
data.append(get_openinsider_stock_activity(stock))
data = pd.concat(data)
for timeframe in ['7D', '14D', '1M']:
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
pd.set_option('display.max_rows', None)
count_data = data.copy()
count_data['Trade Date'] = pd.to_datetime(count_data['Trade Date'])
# HERE
count_data = count_data.sort_values(by='Trade Date', ascending=True).set_index('Trade Date').last(timeframe)
print(count_data)
</code></pre>
<p>If we check CAT <a href="http://openinsider.com/screener?s=CAT&o=&pl=&ph=&ll=&lh=&fd=7&fdr=&td=0&tdr=&fdlyl=&fdlyh=&daysago=&xp=1&xs=1&vl=&vh=&ocl=&och=&sic1=-1&sicl=100&sich=9999&grp=0&nfl=&nfh=&nil=&nih=&nol=&noh=&v2l=&v2h=&oc2l=&oc2h=&sortcol=0&cnt=100&page=1" rel="nofollow noreferrer">here</a> we can see that there were no buyers for 1 week prior, as no table shows up. However, my code shows there was one seller, but that record should only appear in the 2-week window.</p>
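From experimenting, I believe <code>.last()</code> anchors its window to the latest timestamp present in the (concatenated) index, not to today's date — so one stock's recent trades can drag another stock's older rows into the window. A toy reproduction of that anchoring, computed manually (assumption on my part):

```python
import pandas as pd

# Window anchored to the max date *in the data*, which is what .last("7D")
# appears to do, rather than anchoring to today's date.
idx = pd.to_datetime(["2023-07-10", "2023-07-20", "2023-07-28"])
s = pd.Series([1, 2, 3], index=idx)
cutoff = idx.max() - pd.Timedelta("7D")  # 2023-07-21
recent = s[s.index > cutoff]
print(recent)  # only the 2023-07-28 row survives
```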
| <python><pandas> | 2023-08-01 17:18:54 | 1 | 786 | David Frick |
76,813,867 | 9,382 | How to detect whether ConversationalRetrievalChain called the OpenAI LLM? | <p>I have the following code:</p>
<pre><code>chat_history = []
embeddings = OpenAIEmbeddings()
db = FAISS.from_documents(chunks, embeddings)
qa = ConversationalRetrievalChain.from_llm(OpenAI(temperature=0.1), db.as_retriever())
result = qa({"question": "What is stack overflow", "chat_history": chat_history})
</code></pre>
<p>The code creates embeddings, creates a FAISS in-memory vector db with some text that I have in <code>chunks</code> array, then it creates a ConversationalRetrievalChain, followed by asking a question.</p>
<p>Based on what I understand from ConversationalRetrievalChain, when asked a question, it will first query the FAISS vector db, then, if it can't find anything matching, it will go to OpenAI to answer that question. (is my understanding correct?)</p>
<p>How can I detect if it actually called OpenAI to get the answer or it was able to get it from the in-memory vector DB? The <code>result</code> object contains <code>question</code>, <code>chat_history</code> and <code>answer</code> properties and nothing else.</p>
| <python><openai-api><langchain><large-language-model><conversational-ai> | 2023-08-01 17:10:40 | 4 | 62,114 | AngryHacker |
76,813,814 | 2,112,406 | Finding rotation that matches two sets of points | <p>I have two sets of points in 3D, up to 10 points each. Their translations and scales are such that, for both sets, the distance between each point and the origin is 1. One set of points should map to the other set via a rotation only. However, I don't know which point maps to which.</p>
<p>I have tried two things: Kabsch and ICP.</p>
<p>Kabsch requires:</p>
<ul>
<li>generate set of all matchings between two sets of points</li>
<li>find R that maps one to the other</li>
<li>return the matching and rotation R that has the smallest distances between the points</li>
</ul>
<p>that is, for <code>pts1</code> and <code>pts2</code> two Nx3 numpy arrays:</p>
<pre><code>indices = list(len(pts1))
min_dist = np.inf
best_rot = None
for matching in itertools.permutations(indices, len(pts1)):
idx = list(matching)
pts2_p = pts2[idx,:]
# Kabsch to find best fit rotation with SVD
R = best_fit_rotation(pts1, pts2_p)
pts1p = R.dot(pts1.T).T
# calculate distances between each pair of points
d = distance(pts1p, pts2_p)
rmsd = np.sqrt(np.mean(d**2))
if rmsd < min_dist:
min_dist = rmsd
best_rot = R
</code></pre>
<p>This works, but it's quite slow for more than <code>N=6</code> points. This is mainly because there are <code>N!</code> permutations to consider, which is <code>~5000</code> for <code>N=7</code>.</p>
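For reference, <code>best_fit_rotation</code> is the standard SVD-based Kabsch step; a self-contained sketch of it (numpy only):

```python
import numpy as np

def best_fit_rotation(P: np.ndarray, Q: np.ndarray) -> np.ndarray:
    """Rotation R minimizing sum ||R p_i - q_i||^2 over already-matched rows."""
    H = P.T @ Q                       # 3x3 cross-covariance of the two sets
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])        # guard against reflections
    return Vt.T @ D @ U.T
```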
<p>For ICP, one typically does:</p>
<ul>
<li>find nearest neighbors of each point in the other set</li>
<li>find best fit transform (like in Kabsch)</li>
<li>calculate the average distance between the two sets of points</li>
<li>repeat until the average distance is below a threshold</li>
</ul>
<p>The problem with this is that, by default, two points in the first set can have the same point in the second set as their nearest neighbor. I still have to iterate through all the matchings (although this is faster because I don't have to calculate the rotation matrix, and re-calculate distances for each matching). However, the rotation matrices this approach provides are numerically incorrect. I am therefore wondering:</p>
<ol>
<li>Is there a flaw in my logic for modifying ICP?</li>
<li>In any case, is there a better way to solve this problem?</li>
</ol>
| <python><computational-geometry> | 2023-08-01 17:01:46 | 1 | 3,203 | sodiumnitrate |
76,813,786 | 1,185,242 | How do you find nearest pairs between sets without using O(N*M) memory? | <p>I have two sets of vectors A (red) and B (blue). I'm looking to find the nearest vector in A to each point in B.</p>
<p><a href="https://i.sstatic.net/730RI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/730RI.png" alt="enter image description here" /></a></p>
<p>I got this working using:</p>
<pre><code>import numpy as np
import random
def get_assignment(A, B):
"""
returns `assignment`, where `assignment[i]` means A[assignment[i]] is the nearest point to B[i]
"""
cost = np.linalg.norm(A[:, np.newaxis, :] - B, axis=2)
assignment = np.argmin(cost, axis=0)
return assignment
if __name__ == '__main__':
An = 3
Bn = 10
A = np.array([ [ random.random(), random.random() ] for i in range(An) ])
B = np.array([ [ random.random(), random.random() ] for i in range(Bn) ])
assignment = get_assignment(A, B)
import pylab
for i in range(len(A)):
Bi = np.where(assignment == i)[0]
for j in Bi:
x1, y1 = A[i]
x2, y2 = B[j]
pylab.plot(x1, y1, 'ro')
pylab.plot(x2, y2, 'bo')
pylab.plot([x1, x2], [y1, y2], 'k-', lw=0.5)
pylab.axis('equal')
pylab.show()
</code></pre>
<p>But I'm looking to do this where A has around 1,000 elements and B 1,000,000. Is there a way to do this using less memory?</p>
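One idea I've sketched: keep the same argmin-over-A logic but stream B through in chunks, so only an (len(A), chunk) block of the cost matrix exists at any moment:

```python
import numpy as np

def get_assignment_chunked(A, B, chunk=10_000):
    # identical result to building the full (An, Bn) cost matrix, but peak
    # memory is only An * chunk distances at a time
    out = np.empty(len(B), dtype=np.intp)
    for s in range(0, len(B), chunk):
        block = B[s:s + chunk]
        cost = np.linalg.norm(A[:, np.newaxis, :] - block, axis=2)
        out[s:s + chunk] = np.argmin(cost, axis=0)
    return out
```

With An=1,000 and chunk=10,000, each block is ~10M floats (~80 MB) instead of the ~8 GB full matrix.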
| <python><numpy><vectorization> | 2023-08-01 16:58:03 | 2 | 26,004 | nickponline |
76,813,736 | 12,176,250 | Exception handling data in Pandas apply | <p>I have built a function that is using regex to set date formats. However my data has <code>nan</code> values in it and I would ideally like to handle them in the main function.</p>
<p>How can I take all the <code>nan</code> value rows and <code>append</code> them to an empty list? I have done this before but not as part of exception handling!</p>
<p>In my head I'm thinking of something like a case when, then else statement from SQL to do here to handle the exceptions in code.</p>
<p>This is what I have now:</p>
<pre><code>from datetime import datetime, date
import re
def conv_date(dte: str) -> date:
acceptable_mappings = {
r"\d{4}-\d{2}-\d{2}": "%Y-%m-%d",
r"\d{2}-\d{2}-\d{4}": "%d-%m-%Y",
r"\d{4}/\d{2}/\d{2}": "%Y/%m/%d",
r"\d{2}/\d{2}/\d{4}": "%d/%m/%Y",
r"\d{8}": '%d%m%Y',
r"\d{2}\s\d{2}\s\d{4}": '%d %m %Y',
r"\d{4}-\d{2}-\d{2}\s\d{2}\:\d{2}\:\d{2}": "%Y-%m-%d %H:%M:%S",
r"\d{4}-\d{2}-\d{2}\s\d{2}\D{1}\d{2}\D{1}\d{2}\s\w{3}": "%Y-%m-%d %H:%M:%S %Z",
}
# for loop to iterate through our allowed mappings and return value, else raise an exception
for regex in acceptable_mappings.keys():
if re.fullmatch(regex, dte):
return datetime.strptime(dte, acceptable_mappings[regex]).date()
raise Exception(f"Expected date is not in one of the supported formats, got ***{dte}***")
</code></pre>
<p>And the test:</p>
<pre><code>from x import conv_date
import pytest
from datetime import datetime, date
import pandas as pd
def test_mock_dict():
# Use this data to test the dates function for humans to interpret. Name and role are added for readability
mock_dict = [
{"name": "dz", "role": "legend", "date": "2023-07-26"},
{"name": "mc", "role": "sounds like a dj", "date": "26-07-2023"},
{"name": "xc", "role": "loves xcom", "date": "2023/07/26"},
{"name": "lz", "role": "likes to fly", "date": "26/07/2023"},
{"name": "wc", "role": "has a small bladder", "date": "26072023"},
{"name": "aa", "role": "warrior of the crystal", "date": "26 07 2023"},
{"name": "xx", "role": "loves only-fans", "date": "2023-07-26 12:46:21"},
{"name": "jm", "role": "is stack overflow", "date": "2023-10-26 12:46:21 UTC"},
{"name": "ee", "role": "enjoys nan bread", "date": "nan"},
]
df = pd.DataFrame(mock_dict)
print(df)
df['date_clean'] = df['date'].apply(lambda x: conv_date(x))
print(df)
</code></pre>
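What I have in mind is something like this per-row guard (a sketch; a simplified single-format parser stands in for the real <code>conv_date</code>): failures, including NaNs, get appended to a list and the row gets <code>NaT</code>:

```python
from datetime import date, datetime

import pandas as pd

failed_rows = []

def conv_date_safe(dte):
    # NaN (or the literal string "nan") and unparseable values are collected
    # instead of raising, so .apply() can finish over the whole column
    if pd.isna(dte) or dte == "nan":
        failed_rows.append(dte)
        return pd.NaT
    try:
        return datetime.strptime(dte, "%Y-%m-%d").date()  # stand-in for conv_date
    except ValueError:
        failed_rows.append(dte)
        return pd.NaT

s = pd.Series(["2023-07-26", "nan", "garbage"])
clean = s.apply(conv_date_safe)
```

Is there a cleaner, more idiomatic pattern than a module-level list for this?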
| <python><pandas> | 2023-08-01 16:50:03 | 1 | 346 | Mizanur Choudhury |
76,813,714 | 1,295,422 | Docker cannot read Serial device | <p>I've got the following Python code running in a docker image:</p>
<pre><code>import serial
ser = serial.Serial('/dev/ttyS1', 19200)
while True:
line = ser.readline()
print(line, flush=True)
</code></pre>
<p>I've used the following command line to start the container:</p>
<pre><code>docker run -it --device /dev/ttyS1:/dev/ttyS1 my-image:latest
</code></pre>
<p>On the host device, when I send data to the serial RX port, I can see it with the <code>socat</code> command.
While on the docker instance, I don't get any data nor print. Baudrates are the same on emitter and receiver.</p>
<p>My serial device is permanently attached to the host.</p>
<p>Is there some kind of permission needed to enable the serial port in Docker? I get no error showing up.
I'd like to avoid using the <code>privileged</code> flag.</p>
| <python><docker><pyserial><tty> | 2023-08-01 16:45:39 | 0 | 8,732 | Manitoba |
76,813,587 | 6,874,079 | Call a JavaScript activity from a Python Temporal workflow | <p>I have the following Temporal workflow, written in Python:</p>
<pre><code>@workflow.defn
class YourSchedulesWorkflow:
@workflow.run
async def run(self, name: str) -> str:
# call the JavaScript activity here
</code></pre>
<p>I have a part of code written already in JavaScript, so I want to call that JavaScript code as an activity, from my Python workflow. How do I do this?</p>
| <python><typescript><temporal-workflow> | 2023-08-01 16:25:48 | 1 | 371 | Vasanth Kumar |
76,813,188 | 4,352,930 | Upload with python program into anonymous nextcloud share | <p>The nextcloud manual <a href="https://docs.nextcloud.com/server/latest/user_manual/en/files/file_drop.html" rel="nofollow noreferrer">describes</a> a possibility to create anonymous upload space for users.</p>
<p>If one creates such an upload space, a URL is created and can be shared, allowing multiple users without nexcloud accounts to upload files.</p>
<p>However, these files need to be dragged and dropped into a browser window. In contrast, I want to upload them directly from a Python program.
I cannot really use a nextcloud package for this such as described <a href="https://github.com/EnterpriseyIntranet/nextcloud-API/blob/master/example.py" rel="nofollow noreferrer">here</a>, since I do not have (and do not want to create) user accounts for every uploader.</p>
<p>Any ideas how to proceed? I could use a package such as selenium to imitate user actions in the browser, but that seems somewhat clumsy.</p>
<p>[EDIT:]</p>
<p>I have used a curl command as described <a href="https://doc.owncloud.com/webui/next/classic_ui/files/access_webdav.html#uploading-files-to-a-public-link-file-drop-using-curl" rel="nofollow noreferrer">here</a> for ownCloud, hoping that this would work similarly for Nextcloud, and I translated the curl command using <a href="https://curlconverter.com/" rel="nofollow noreferrer">curlconverter</a>, but got a 405 error.</p>
<p>[EDIT 2:]
I got a little bit further with the following modified code (modified URL), which returns status 200. But I still don't see an uploaded file.</p>
<pre><code>headers = {'X-Requested-With': 'XMLHttpRequest',}
with open('test.txt', 'rb') as f:
data = f.read()
response = requests.put(
'https://xxx.xxx.com/remote.php/s/test.txt',
headers=headers,
data=data,
verify=False,
auth=('94daw3R3sfwrNaeK', ''),
)
</code></pre>
<p>[EDIT 3:]
It seems that the problem is related to the URL in use. I am rechecking with direct use of CURL.</p>
| <python><nextcloud> | 2023-08-01 15:31:54 | 1 | 6,259 | tfv |
76,813,099 | 4,466,255 | How to get back original query from AST generated from pglast package | <p>I am using pglast to create the abstract syntax tree (AST) of queries, e.g.:</p>
<pre><code>from pglast import parse_sql
parse_sql("SELECT name, age FROM user as a WHERE age > 18;")
</code></pre>
<p>will give me</p>
<pre><code>(<RawStmt stmt=<SelectStmt targetList=(<ResTarget val=<ColumnRef fields=(<String sval='name'>,)>>, <ResTarget val=<ColumnRef fields=(<String sval='age'>,)>>) fromClause=(<RangeFunction lateral=False ordinality=False is_rowsfrom=False functions=((<SQLValueFunction op=<SQLValueFunctionOp.SVFOP_USER: 11> typmod=-1>, None),) alias=<Alias aliasname='a'>>,) whereClause=<A_Expr kind=<A_Expr_Kind.AEXPR_OP: 0> name=(<String sval='>'>,) lexpr=<ColumnRef fields=(<String sval='age'>,)> rexpr=<A_Const isnull=False val=<Integer ival=18>>> groupDistinct=False limitOption=<LimitOption.LIMIT_OPTION_DEFAULT: 0> op=<SetOperation.SETOP_NONE: 0> all=False> stmt_location=0 stmt_len=46>,)
</code></pre>
<p>I also need to get back the original query from this output. How can this be done?</p>
<p>There is another function in the package (<code>parser.parse_sql_json</code>) which generates the AST as JSON and includes the location of each word in the query. Would this output make it easier to recover the original query?</p>
<pre><code>from pglast.parser import parse_sql_json
parse_sql_json('SELECT name, age FROM user as a WHERE age > 18;')
</code></pre>
<p>gives me</p>
<pre><code>'{"version":150001,"stmts":[{"stmt":{"SelectStmt":{"targetList":[{"ResTarget":{"val":{"ColumnRef":{"fields":[{"String":{"sval":"name"}}],"location":7}},"location":7}},{"ResTarget":{"val":{"ColumnRef":{"fields":[{"String":{"sval":"age"}}],"location":13}},"location":13}}],"fromClause":[{"RangeFunction":{"functions":[{"List":{"items":[{"SQLValueFunction":{"op":"SVFOP_USER","typmod":-1,"location":22}},{}]}}],"alias":{"aliasname":"a"}}}],"whereClause":{"A_Expr":{"kind":"AEXPR_OP","name":[{"String":{"sval":"\\u003e"}}],"lexpr":{"ColumnRef":{"fields":[{"String":{"sval":"age"}}],"location":38}},"rexpr":{"A_Const":{"ival":{"ival":18},"location":44}},"location":42}},"limitOption":"LIMIT_OPTION_DEFAULT","op":"SETOP_NONE"}},"stmt_len":46}]}'
</code></pre>
| <python><postgresql> | 2023-08-01 15:22:24 | 1 | 1,208 | Kushdesh |
76,812,756 | 3,121,975 | Collapsing unions in mypy | <p>I have a piece of code written around a field, called <code>data</code> that has a type of <code>Dict[str, Union[List[str], str]]</code>. The issue is that the mypy is raising errors:</p>
<pre><code>from typing import Dict, List, Union
data: Dict[str, Union[List[str], str]] = {}
key = "a"
value = "b"
if isinstance(data[key], list):
data[key].append(value) # Item "str" of "Union[List[str], str]" has no attribute "append"
else:
temp = data[key]
data[key] = [temp, value] # List item 0 has incompatible type "Union[List[str], str]"; expected "str"
</code></pre>
<p>In other typed languages, I would expect this to work. I've clearly boxed the type to a list in the first branch of the if-statement, so <code>data[key]</code> should resolve to <code>List[str]</code> and <code>append</code> should be fine. Similarly, in the else-branch, <code>data[key]</code> should resolve to <code>str</code>, meaning that <code>temp</code> is <code>str</code> and therefore should not cause any issues being added to a <code>List[str]</code>.</p>
<p>Does anyone see a way to fix this without adding unnecessary type assertions here?</p>
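The only clean workaround I've found so far (sketch): bind the subscript to a local variable first, since mypy narrows variables via <code>isinstance</code> but not the expression <code>data[key]</code> itself, which could in principle return something different on each lookup:

```python
from typing import Dict, List, Union

data: Dict[str, Union[List[str], str]] = {"a": "x"}
key, value = "a", "b"

existing = data[key]               # bind once; mypy can narrow `existing`
if isinstance(existing, list):
    existing.append(value)         # existing: List[str] — mutates in place
else:
    data[key] = [existing, value]  # existing: str
```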
| <python><mypy><python-typing> | 2023-08-01 14:38:43 | 1 | 8,192 | Woody1193 |
76,812,704 | 4,348,400 | How to download XLSX file from DOI link? | <p>I want to download two files automatically from Python for a reproducible statistical analysis.</p>
<p>These links</p>
<ul>
<li><a href="https://doi.org/10.1371/journal.pone.0282068.s001" rel="nofollow noreferrer">https://doi.org/10.1371/journal.pone.0282068.s001</a></li>
<li><a href="https://doi.org/10.1371/journal.pone.0282068.s002" rel="nofollow noreferrer">https://doi.org/10.1371/journal.pone.0282068.s002</a></li>
</ul>
<p>I tried</p>
<pre class="lang-py prettyprint-override"><code>import requests
url = 'https://doi.org/10.1371/journal.pone.0282068.s001'
response = requests.get(url)
</code></pre>
<p>I suspect that the file is actually the content of <code>response.content</code>, which appears to be a bunch of encoded information (e.g. <code>\xe2\x81a\xe4\x1dq\xbe9~3\x94\x885\xba\xc8\x9bz\'~\x1c)X>\xaaXyg\x929\xf84\xc2\x06\t\n x5\</code>).</p>
<p>How do I download these files and save them as XLSX files?</p>
| <python><data-science><doi> | 2023-08-01 14:32:56 | 1 | 1,394 | Galen |
76,812,611 | 7,445,658 | Install pip dependencies only from conda environment file | <p>I setup a conda environment using <code>conda env create -f environment.yaml -n new_env_name</code>, because I wanted a different conda environment name than that stored in <code>environment.yaml</code>.</p>
<p>It seems that pip dependencies listed in <code>environment.yaml</code> were not all installed, so I would like to install them without starting from scratch. I looked in the documentation but couldn't find a way to do this; any ideas? :)</p>
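What I may end up doing (a sketch; assumes PyYAML is available): read the <code>pip:</code> subsection out of the file myself and hand it to pip separately:

```python
import yaml  # assumption: PyYAML installed

env_text = """
name: demo
dependencies:
  - python=3.10
  - numpy
  - pip
  - pip:
      - requests==2.31.0
      - rich
"""  # stand-in for open('environment.yaml').read()

env = yaml.safe_load(env_text)
pip_deps = []
for dep in env.get("dependencies", []):
    # the pip section appears as a dict entry inside the dependencies list
    if isinstance(dep, dict) and "pip" in dep:
        pip_deps = dep["pip"]

print("\n".join(pip_deps))  # write to requirements.txt and `pip install -r` it
```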
| <python><pip><conda> | 2023-08-01 14:21:50 | 1 | 339 | alexis_thual |
76,812,518 | 21,896,093 | Formatting text labels in seaborn.objects | <p>Formatting of the <code>so.Text()</code> mapping doesn't seem to have an effect on text labels. In the example below, I am trying to format using the <code>text=</code> scale such that the text labels are rounded to 1dp and are suffixed with "%". The numbers on the bars should look like "88.5%", "61.6%", etc.</p>
<p>A code example with my attempts is below.</p>
<p><a href="https://i.sstatic.net/ADyDm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ADyDm.png" alt="enter image description here" /></a></p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import seaborn.objects as so

#Data
np.random.seed(3)
df = pd.DataFrame({'Day': range(1, 31), 'Count': np.random.randn(30)*50 + 100})
#Bar plot
(
so.Plot(df, y='Day', x='Count', text=100 * df.Count / df.Count.max())
.add(so.Bar(), legend=False, orient='y')
.add(so.Text(color='k', halign='right'))
.scale(color='viridis',
#None of these attempts are making a difference to the formatting:
text=so.Continuous().label(like='.0f')
#text=so.Continuous().label(like='{x:.0f}$'.format)
#text=so.Continuous().label(like=lambda x: str(round(x)) + '%')
#text=so.Continuous().label(like='{y:.0f}$'.format)
#text=so.Continuous().label(like='{pos:.0f}$'.format)
#text=so.Continuous().tick(at=range(0, 101), every=1)
)
.layout(size=(10, 10))
)
</code></pre>
<p>I have also tried (not shown) setting <code>unit='%'</code> with the hope that it would do the percent calculation for me, but it gives me values > 100% so I've just computed the percentages manually in this example.</p>
<p>Is the failure of the formatting a bug? Please advise if there are other things I could try.</p>
<hr />
<p>For reference, here is the workaround I am using which I wanted to avoid. Manually formatting the data mapped to <code>text=</code>:</p>
<pre class="lang-py prettyprint-override"><code>#Bar plot
(
so.Plot(df,
y='Day',
x='Count',
#Manually format text:
text=(100 * df.Count / df.Count.max()).apply(lambda x: str(round(x, 1)) + '%')
)
.add(so.Bar(), legend=False, orient='y')
.add(so.Text(color='k', halign='right'))
.scale(color='viridis')
.layout(size=(10, 10))
)
</code></pre>
<p><a href="https://i.sstatic.net/OzRZc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OzRZc.png" alt="enter image description here" /></a></p>
| <python><seaborn><bar-chart><plot-annotations><seaborn-objects> | 2023-08-01 14:10:49 | 0 | 5,252 | MuhammedYunus |
76,812,494 | 11,814,996 | PySpark - Creating a timestamp column from two numeric columns containing date and month | <pre><code>>>> from pyspark import SparkContext
>>> sc = SparkContext.getOrCreate()
>>> from pyspark.sql import SparkSession
>>> spark_session = SparkSession(sc)
>>> mydf = spark_session.createDataFrame(data=[(2018, 1), (2019, 4), (2018, 3), (2019, 4), (2018, 2), (2020, 1), (2020, 4)], schema=['myYear', 'myMonth'])
>>> mydf
DataFrame[myYear: bigint, myMonth: bigint]
>>> mydf.show()
+------+-------+
|myYear|myMonth|
+------+-------+
| 2018| 1|
| 2019| 4|
| 2018| 3|
| 2019| 4|
| 2018| 2|
| 2020| 1|
| 2020| 4|
+------+-------+
</code></pre>
<p>and versions:</p>
<pre><code>$ pip3 list | grep 'spark'
pyspark 3.4.1
$ java -version
openjdk version "1.8.0_372"
OpenJDK Runtime Environment (build 1.8.0_372-b07)
OpenJDK 64-Bit Server VM (build 25.372-b07, mixed mode)
</code></pre>
<p>I would now like create a timestamp column from these two columns, representing a timestamp of a year and month.</p>
<p>pandas equivalent would be:</p>
<pre><code>>>> import pandas as pd
>>> mydf3_pd = pd.DataFrame(data=[(2018, 1), (2019, 4), (2018, 3), (2019, 4), (2018, 2), (2020, 1), (2020, 4)], columns=['myYear', 'myMonth'])
>>> mydf3_pd.loc[:,'year-month'] = pd.to_datetime(mydf3_pd.loc[:,'myYear'].astype(str) + mydf3_pd.loc[:,'myMonth'].astype(str), format='%Y%m')
>>> mydf3_pd
myYear myMonth year-month
0 2018 1 2018-01-01
1 2019 4 2019-04-01
2 2018 3 2018-03-01
3 2019 4 2019-04-01
4 2018 2 2018-02-01
5 2020 1 2020-01-01
6 2020 4 2020-04-01
>>> mydf3_pd.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 7 entries, 0 to 6
Data columns (total 3 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 myYear 7 non-null int64
1 myMonth 7 non-null int64
2 year-month 7 non-null datetime64[ns]
dtypes: datetime64[ns](1), int64(2)
</code></pre>
<p>I tried <a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.functions.to_timestamp.html" rel="nofollow noreferrer"><code>pyspark.sql.functions.to_timestamp</code></a> and <a href="https://spark.apache.org/docs/3.1.2/api/python/reference/api/pyspark.sql.functions.to_date.html" rel="nofollow noreferrer"><code>pyspark.sql.functions.to_date</code></a> with <code>format='yyyyMM'</code> among other combinations and got incomprehensible error messages both the time.</p>
<p>Along with "year-month", I would also like to do the same thing for an integer column containing quarter numbers, "year-quarter". How do I do both of them with pyspark?</p>
| <python><pyspark><timestamp> | 2023-08-01 14:07:13 | 1 | 3,172 | Naveen Reddy Marthala |
76,812,450 | 9,274,940 | custom colours with normalisation in plotly for a confusion matrix | <p>I'm trying to create a confusion matrix where</p>
<ul>
<li>the neutral value is the 0, and should appear as white</li>
<li>positive values should appear as green, as higher the number, more green. Closest to 0, less green (mixed with white)</li>
<li>Negative values should appear as red, as lower the number, more red. Closest to 0, less red (mixed with white)</li>
</ul>
<p>I want a red - green gradient centered in 0.</p>
<pre><code>import plotly.express as px
import plotly.graph_objects as go
import numpy as np
custom_cf_matrix = np.array([[395, -5], [-200, 20]])
# Define the colors for 0, positive, and negative values
zero_color = 'white'
positive_color = 'green'
negative_color = 'red'
colors = [
(0, negative_color),
(np.min(abs(custom_cf_matrix)) / np.max(abs(custom_cf_matrix)), zero_color), # PROBABLY HERE IS WHAT I AM DOING WRONG
(1, positive_color)
]
group_values = [395, -5, -200, 20]
group_names = ['TN', 'FP', 'FN', 'TP']  # placeholder labels; defined earlier in my real code
# labels = [f"{v1}\n{v2}" for v1, v2 in zip(group_names, group_percentages)]
labels = [f"{v1}\n{v2}" for v1, v2 in zip(group_names, group_values)]
labels = np.asarray(labels).reshape(2, 2)
# Create the figure
fig = px.imshow(
custom_cf_matrix,
labels={"x": "Predicted Label", "y": "True Label"},
color_continuous_scale=colors,
range_color=[np.min(custom_cf_matrix), np.max(custom_cf_matrix)],
width=500,
height=500,
)
fig.update_xaxes(side="bottom")
fig.update_yaxes(side="left")
# Update the annotations to use black font color
annotations = [
dict(
text=text,
x=col,
y=row,
font=dict(color="black", size=16), # Set font color to black
showarrow=False,
xanchor="center",
yanchor="middle",
)
for row in range(2)
for col, text in zip(range(2), labels[row])
]
fig.update_layout(
title="Value-Weighted Confusion Matrix",
title_x=0.25, # Center the title horizontally
annotations=annotations,
)
fig.update_xaxes(tickvals=[0, 1], ticktext=["0", "1"], showticklabels=True)
fig.update_yaxes(tickvals=[0, 1], ticktext=["0", "1"], showticklabels=True)
</code></pre>
| <python><plotly><confusion-matrix> | 2023-08-01 14:01:26 | 2 | 551 | Tonino Fernandez |
76,812,398 | 8,543,025 | Numpy: Find Sequence in Sparse Array, Ignoring NaNs | <p>I have a large 1D array containing mostly NaNs and a few integers.
I'm trying to extract the start and end indices where the array contains a specific sequence, ignoring intermediate NaNs.<br />
For example:</p>
<pre><code>sequence = np.array([1, 2, 3])
sparse_array1 = np.array([np.nan, 2, 3, np.nan, 1, 2, np.nan, 3, 2, np.nan, np.nan, 3, 1, np.nan, np.nan, np.nan, 2, np.nan, 3, 1, 1, np.nan])
sparse_array2 = np.array([3, 2, 1, 2, 1])
print(find_sequence_indices(sparse_array1, sequence))
# prints [(4, 7), (12, 18)]
print(find_sequence_indices(sparse_array2, sequence))
# prints []
</code></pre>
<p>I couldn't think of a way that did not involve 3 nested loops. My array is much larger than the example, I can't afford cubed run times.</p>
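The closest I've gotten is compressing the NaNs out first and scanning the dense array once, mapping hits back through the kept indices (sketch; this avoids the nested loops, but maybe there's a fully vectorized way):

```python
import numpy as np

def find_sequence_indices(arr, seq):
    idx = np.flatnonzero(~np.isnan(arr))   # original positions of non-NaN entries
    dense = arr[idx]                       # the integers with NaNs removed
    n = len(seq)
    hits = []
    for s in range(len(dense) - n + 1):    # single linear scan over the dense array
        if np.array_equal(dense[s:s + n], seq):
            hits.append((int(idx[s]), int(idx[s + n - 1])))
    return hits
```

This is O(len(arr) * len(seq)) rather than cubed.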
| <python><arrays><numpy> | 2023-08-01 13:57:06 | 1 | 593 | Jon Nir |
76,812,290 | 10,574,250 | pd.cut not bucketing values into intervals even though value is there | <p>I am trying to use <code>pd.cut</code> to create specific buckets. This works for most data, but there is a subset that it puts into <code>nan</code> even though the value clearly falls inside one of the bins. I have provided an example <code>df</code></p>
<pre><code> numbers difference_interval
0 0.000000e+00 nan
1 3.263739e-03 nan
2 3.637279e-02 nan
3 5.308298e-03 nan
4 -1.139971e-01 nan
5 nan nan
</code></pre>
<p>Here is the code I used to create the intervals:</p>
<pre><code>bins = pd.IntervalIndex.from_tuples([(-1, -.2), (-.2, -.1), (-.1, -.05), (-.05, 0), (0, .05), (0.05, .1), (0.1, .2), (0.2, 1)])
col = 'numbers'
df = (df.dropna(subset=col)
.assign(difference_interval= lambda df: pd.cut(df[col].values, bins).sort_values().astype(str)))
df.query('difference_interval == "nan"')
</code></pre>
<p>Why would this be happening?</p>
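One thing worth checking as a likely culprit: `pd.cut` is applied to `df[col].values` and then `.sort_values()` is called on the resulting Categorical before it is assigned back. That assignment is positional, so the sorted intervals no longer line up with the `numbers` column, and `nan` labels land on rows that have perfectly valid values. Dropping the sort keeps the alignment — a minimal sketch on the sample values:

```python
import numpy as np
import pandas as pd

bins = pd.IntervalIndex.from_tuples(
    [(-1, -.2), (-.2, -.1), (-.1, -.05), (-.05, 0),
     (0, .05), (.05, .1), (.1, .2), (.2, 1)]
)
df = pd.DataFrame({"numbers": [0.0, 3.263739e-03, 3.637279e-02, -1.139971e-01, np.nan]})

out = (
    df.dropna(subset=["numbers"])
      # no sort_values() between cut and assignment, so rows stay aligned
      .assign(difference_interval=lambda d: pd.cut(d["numbers"], bins).astype(str))
)
print(out)
```

With the default right-closed intervals, 0.0 falls into (-0.05, 0], so none of the non-NaN numbers should come back as `"nan"`.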
| <python><pandas> | 2023-08-01 13:46:30 | 1 | 1,555 | geds133 |
76,811,885 | 5,013,084 | altair: empty plot in facetted layout with grid lines | <p>I have a plot similar to this one with this toy data</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import altair as alt
df = pd.DataFrame(
{
"id": [1, 2, 3, 4, 5],
"sex": ["M", "F", "M", "F", "M"],
"group": ["A", "A", "B", "B", "C"],
"mark": ["pass", "pass", "fail", "pass", "fail"]
}
)
alt.Chart(df).mark_bar().encode(
x=alt.X("count(id)"),
y=alt.Y("mark"),
row=alt.Row("sex"),
column=alt.Column("group")
)
</code></pre>
<p><a href="https://i.sstatic.net/RYZwa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RYZwa.png" alt="enter image description here" /></a></p>
<p>The panel in the right top (F, group C) is empty and has no grid lines because the dataframe has no matching rows. How can I make sure that the grid lines also appear in this subplot?</p>
<p>Thank you.</p>
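One data-side workaround (a sketch; the column and value names follow the toy data): pre-compute the counts over the full (sex, group, mark) cross product, filling absent combinations with zero, and plot the count column with `mark_bar`. The empty panel then contains real zero-height rows, so its axes and grid lines are drawn:

```python
import pandas as pd

df = pd.DataFrame({
    "id": [1, 2, 3, 4, 5],
    "sex": ["M", "F", "M", "F", "M"],
    "group": ["A", "A", "B", "B", "C"],
    "mark": ["pass", "pass", "fail", "pass", "fail"],
})

# full cross product of the two facet fields and the y-axis field
idx = pd.MultiIndex.from_product(
    [df["sex"].unique(), df["group"].unique(), df["mark"].unique()],
    names=["sex", "group", "mark"],
)
counts = (
    df.groupby(["sex", "group", "mark"]).size()
      .reindex(idx, fill_value=0)   # missing combos become explicit zeros
      .rename("n")
      .reset_index()
)

# then, with altair:
# alt.Chart(counts).mark_bar().encode(
#     x="n:Q", y="mark", row="sex", column="group"
# )
```

Note that simply padding the raw rows would not work with `count(id)`, since Vega-Lite's count aggregate counts records regardless of field — hence the pre-aggregation here.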
| <python><altair> | 2023-08-01 12:59:59 | 0 | 2,402 | Revan |
76,811,839 | 10,658,339 | Better Understanding Log-Normal with SciPy | <p>I know that there are plenty of questions about the log-normal in scipy, such as <a href="https://stackoverflow.com/questions/18534562/scipy-lognormal-fitting">this</a>, <a href="https://stats.stackexchange.com/questions/33036/fitting-log-normal-distribution-in-r-vs-scipy">this</a>, <a href="https://stackoverflow.com/questions/8747761/scipy-lognormal-distribution-parameters">this</a>, and <a href="https://stackoverflow.com/questions/28700694/log-normal-random-variables-with-scipy">this</a>, but I still have doubts.</p>
<p>I'm trying to reproduce <a href="https://towardsdatascience.com/log-normal-distribution-a-simple-explanation-7605864fb67c" rel="nofollow noreferrer">this example</a> with SciPy, because I can understand the steps, but I'm not able to.</p>
<p>the data is:</p>
<pre><code>from scipy.stats import lognorm, norm
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
x = [20, 22, 25, 30, 60]
fig,ax = plt.subplots(1,1)
sns.kdeplot(x, color='blue',fill=False,ax=ax)
</code></pre>
<p>And I want to fit a log-normal:</p>
<pre><code>shape_x, loc_x, scale_x = lognorm.fit(x,floc=0)
print(f'Estimated parameters for log-normal distribution of parameter x:')
print(f'Shape (s) of x: {shape_x}')
print(f'Location (loc) of x: {loc_x}')
print(f'Scale (scale) of x: {scale_x}')
</code></pre>
<p>accordingly to other questions in StackOverflow and scipy documentation, the mean and standard deviation should be:</p>
<pre><code>mu_x = np.log(scale_x)
sigma_x = shape_x
print(f'Mean (μ) of x: {mu_x}')
print(f'Standard deviation (σ) of x: {sigma_x}')
</code></pre>
<p>Next, I try to create synthetic data with those parameters, to check:</p>
<pre><code>synthetic_data_B = np.random.lognormal(mean=mu_x, sigma=sigma_x, size=len(x))
pdf_x = lognorm.pdf(x, s = shape_x, loc=loc_x, scale=scale_x)
fig,ax = plt.subplots(1,1)
sns.kdeplot(x, color='blue',fill=False,ax=ax)
sns.kdeplot(synthetic_data_B, color='red',fill=False,ax=ax)
ax.plot(x,pdf_x,color='green')
</code></pre>
<p><a href="https://i.sstatic.net/YuFuO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YuFuO.png" alt="enter image description here" /></a></p>
<p>What I realized is:</p>
<ul>
<li>The median in the article is the scale parameter from scipy.</li>
<li>The μ in the article is my mu_x = np.log(scale_x), but the σ is
different: the article gives 0.437, whereas scipy gives 0.391.</li>
<li>If I evaluate the mean with lognorm.mean(shape_x,loc_x,scale_x), it
gives a value quite similar to the article's.</li>
<li>If I evaluate the standard deviation with
lognorm.std(shape_x,loc_x,scale_x), it gives a different value.</li>
</ul>
<p>My questions are:</p>
<ul>
<li><p>Why σ is different?</p>
</li>
<li><p>The synthetic data predicted with the fitted parameters do not match
the original data, why?</p>
</li>
<li><p>If I try to do the opposite and recover the x distribution from
the fitted parameters, what I get is far away from what it should be.</p>
</li>
<li><p>How can I generate synthetic data to represent the real x?</p>
</li>
</ul>
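On the σ discrepancy, a likely explanation (worth verifying against the article's arithmetic): `lognorm.fit` is a maximum-likelihood fit, so with `floc=0` its shape equals the *population* standard deviation of log(x) (ddof=0), while the article appears to use the *sample* standard deviation (ddof=1). With only 5 points the two differ by exactly the √(n/(n−1)) factor:

```python
import numpy as np

x = np.array([20, 22, 25, 30, 60])
logs = np.log(x)

sigma_mle = np.std(logs)              # ddof=0: what lognorm.fit(x, floc=0) returns as shape
sigma_sample = np.std(logs, ddof=1)   # ddof=1: the article's sample standard deviation

print(round(sigma_mle, 3), round(sigma_sample, 3))   # 0.391 0.438
print(sigma_sample / sigma_mle, np.sqrt(5 / 4))      # identical ratios
```

As for the synthetic data: a KDE of only 5 lognormal draws will rarely resemble the KDE of the original 5 points, so much of that mismatch is small-sample noise rather than wrong parameters.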
| <python><scipy><statistics><distribution><normal-distribution> | 2023-08-01 12:54:24 | 1 | 527 | JCV |
76,811,756 | 10,428,677 | Sum rows across subset of columns of 'object' type | <p>I have a df too large to reproduce here, but the logic is as follows:</p>
<pre><code>data = {'ID': ['12AX', '7YTQ','ZA77'],
'Value_1': [19, 'Not applicable', 33],
'Status': ['Not Applicable','Not Applicable','Not Applicable'],
'Value_2': [13,4,15],
'Value_3': ['Not Applicable', 'Not Applicable', 102.7],
'Category': ['AAA', 'AAA', 'BA']
}
df = pd.DataFrame(data)
</code></pre>
<p>I am trying to create a new column called <code>Sum</code> using only a subset of the existing columns (let's say <code>Value_1</code>, <code>Value_2</code>, <code>Value_3</code>) and to ignore any text that might be in any of the rows. So for example, the <code>Sum</code> of the first row will be <code>19+13 = 32</code> and the string value in column <code>Value_3</code> is ignored since it says <code>Not Applicable</code>.</p>
<p>My function so far is this, but it either returns <code>None</code> or <code>0</code> and I can't figure out why.</p>
<pre><code>def safe_sum(row):
try:
float_numbers = [float(x) for x in row if isinstance(x, (int, float))]
return sum(float_numbers)
except (ValueError, TypeError):
return None
df['Sum'] = df[['Value_1','Value_2','Value_3']].apply(safe_sum, axis=1)
</code></pre>
<p>Does anyone have any idea what I'm doing wrong?</p>
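One hedged alternative to the row-wise function: coerce the subset to numeric (so any text such as "Not Applicable" becomes NaN) and let `sum` skip the NaNs:

```python
import pandas as pd

df = pd.DataFrame({
    "ID": ["12AX", "7YTQ", "ZA77"],
    "Value_1": [19, "Not applicable", 33],
    "Value_2": [13, 4, 15],
    "Value_3": ["Not Applicable", "Not Applicable", 102.7],
})

cols = ["Value_1", "Value_2", "Value_3"]
df["Sum"] = (
    df[cols]
    .apply(pd.to_numeric, errors="coerce")  # non-numeric text -> NaN
    .sum(axis=1)                            # NaNs are skipped by default
)
print(df["Sum"].tolist())
```

This avoids iterating row by row and gives 19+13 = 32 for the first row, as described.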
| <python><pandas> | 2023-08-01 12:44:23 | 1 | 590 | A.N. |
76,811,747 | 14,735,451 | How to calculate a running average for an infinite iterations? | <p>I want to calculate the running average over an infinite (or a huge) number of iterations:</p>
<pre><code>import random
epochs = int(1e10)  # range() needs an int
for epoch in range(epochs):
new_value = random.randint(1, 100)
new_running_average = get_avg(previous_average, new_value) # what is this function going to be?
</code></pre>
<p>The naive approach would be to add any new value <code>new_value</code> to a list, and then average the list at each iteration. But, this is not realistic as 1) the number of iterations is far too large, and 2) I have to create many of such averages for many parameters.</p>
<p>The existing SO questions I found around calculating running average (e.g., <a href="https://stackoverflow.com/questions/1790550/running-average-in-python">this</a> or <a href="https://stackoverflow.com/questions/11352047/finding-moving-average-from-data-points-in-python">this</a>) use a fixed data list that is relatively small, so it's rather trivial.</p>
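The standard constant-memory answer is the incremental mean update `avg += (x - avg) / n`, which needs only a count and the current average per tracked parameter — a small sketch:

```python
import random

class RunningAverage:
    """Constant-memory running mean: avg_n = avg_{n-1} + (x_n - avg_{n-1}) / n."""
    def __init__(self):
        self.n = 0
        self.avg = 0.0

    def update(self, new_value):
        self.n += 1
        self.avg += (new_value - self.avg) / self.n  # incremental mean update
        return self.avg

avg = RunningAverage()
for _ in range(100_000):
    avg.update(random.randint(1, 100))
print(avg.avg)  # ~50.5 in expectation
```

For many parameters, keep one instance (or one `(n, avg)` pair) per parameter; nothing grows with the number of iterations.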
| <python><average><moving-average> | 2023-08-01 12:43:25 | 2 | 2,641 | Penguin |
76,811,567 | 10,574,250 | Create plotly express histogram with defined bin interval python | <p>I am trying to set a custom bin interval for my data in <code>plotly.express.histogram</code> but am failing. I feel this should be pretty simple. Here is what I have:</p>
<pre><code>df
x y
0.2 'red'
-0.1 'yellow'
-0.04 'orange'
0.07 'red'
0.2 'red'
</code></pre>
<p>I want a histogram with the following bins:
<code>[-1, -.2, -.1, -.05, 0, .05, .1, .2, 1]</code></p>
<p>I have tried to do this both inside plotly and without. My attempt is:</p>
<pre><code>bins = pd.IntervalIndex.from_tuples([(-1, -.2), (-.19999, -.1), (-.09999, -.05), (-.04999, 0),
(0.0001, .05), (0.05001, .1), (0.1001, .2), (0.2001, 1)])
df['differences'] = pd.cut(df['x'].values, bins)
intervals = df[['x', 'differences']].groupby('differences').count().reset_index()
fig = px.histogram(intervals, 'differences')
fig.show()
</code></pre>
<p>I get the following error:</p>
<pre><code>TypeError: Object of type Interval is not JSON serializable
</code></pre>
<p>Is there a simple way to achieve this?</p>
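The `TypeError` comes from handing `Interval` objects to Plotly's JSON encoder. Since the counts are pre-computed anyway, one workaround (a sketch) is to convert the intervals to strings and draw a bar chart instead of `px.histogram`:

```python
import pandas as pd

df = pd.DataFrame({"x": [0.2, -0.1, -0.04, 0.07, 0.2],
                   "y": ["red", "yellow", "orange", "red", "red"]})

bins = pd.IntervalIndex.from_tuples(
    [(-1, -.2), (-.2, -.1), (-.1, -.05), (-.05, 0),
     (0, .05), (.05, .1), (.1, .2), (.2, 1)]
)

intervals = (
    df.assign(differences=pd.cut(df["x"], bins))
      .groupby("differences", observed=False)["x"].count()  # keep empty bins
      .reset_index(name="count")
)
intervals["differences"] = intervals["differences"].astype(str)  # JSON-serializable labels

# fig = px.bar(intervals, x="differences", y="count")
# fig.show()
```

With the default right-closed intervals there is no need for the `-.19999`-style tuples; adjacent bins like (-.2, -.1] and (-.1, -.05] already partition the range.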
| <python><pandas><plotly> | 2023-08-01 12:20:15 | 1 | 1,555 | geds133 |
76,811,552 | 1,742,188 | Calculating Exponentially Weighted Moving Average in Snowflake using Snowpark | <p>I have a table containing the monthly revenues of a bunch of assets in the following table schema:</p>
<pre><code>ASSET_ID | MONTH | REVENUE
</code></pre>
<p>For each ASSET_ID, I want to calculate the rolling exponentially weighted moving average of revenues. I am currently using Snowpark but unable to figure out how to create the EWMA function. So far, I have:</p>
<pre><code>def ewma(col, alpha):
pass
alpha = 0.9
w = Window.partition_by("ASSET_ID").order_by("MONTH").rows_between(Window.UNBOUNDED_PRECEDING, Window.CURRENT_ROW - 1)
window = rev_df.select(col('ASSET_ID'), ewma(col('REVENUE'), alpha)).over(w)
</code></pre>
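For reference, the target quantity is the simple recurrence ewma_t = α·x_t + (1−α)·ewma_{t−1}, restarted per ASSET_ID. Snowflake's built-in window functions can't express this recurrence directly (a UDTF or recursive CTE is the usual route), but a pandas sketch of the values you'd want to reproduce is useful for cross-checking (the sample data and the α convention here are assumptions — pandas' `alpha` is the weight on the *new* observation):

```python
import pandas as pd

alpha = 0.9
df = pd.DataFrame({
    "ASSET_ID": ["A", "A", "A", "B", "B"],
    "MONTH":    [1, 2, 3, 1, 2],
    "REVENUE":  [10.0, 20.0, 30.0, 5.0, 15.0],
})

# ewma_t = alpha * x_t + (1 - alpha) * ewma_{t-1}, restarted per asset
df["EWMA"] = (
    df.sort_values(["ASSET_ID", "MONTH"])
      .groupby("ASSET_ID")["REVENUE"]
      .transform(lambda s: s.ewm(alpha=alpha, adjust=False).mean())
)
print([round(v, 6) for v in df["EWMA"]])  # [10.0, 19.0, 28.9, 5.0, 14.0]
```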
| <python><snowflake-cloud-data-platform> | 2023-08-01 12:17:19 | 1 | 5,243 | user1742188 |
76,811,515 | 10,228,923 | Anonymise data in python across multiple tables and keep the relationships between related columns post anonymisation? | <p>I am working on a project where there are two separate csv files which I have pulled from a database. I want to load the data in python using pandas and anonymise the contents of some of the columns in both tables. Some of the columns whose data will be anonymised also exist in the other table, which shall be anonymised as well, and I want them to anonymise to the same thing.</p>
<p>Is there a way to keep the relationships post anonymisation? i.e. the data in both tables that sits in different columns gets anonymised to the same thing?</p>
<p>I have seen many examples online but only for anonymising single tables. How can this be done for two tables and keeping the relationships between the columns so they anonymise to the same thing, as in the example below?</p>
<h2>Example:</h2>
<p>Both tables, both columns pre anonymisation, but colA in Table 1 is related to colC in Table 2.</p>
<p><strong>Pre anonymisation:</strong></p>
<p><em>Table 1</em></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">colA</th>
<th style="text-align: left;">colB</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">123456</td>
<td style="text-align: left;">abcdefg</td>
</tr>
<tr>
<td style="text-align: left;">789123</td>
<td style="text-align: left;">hijklm</td>
</tr>
</tbody>
</table>
</div>
<p><em>Table 2:</em></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">colC</th>
<th style="text-align: left;">colD</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">123456</td>
<td style="text-align: left;">xyz123</td>
</tr>
<tr>
<td style="text-align: left;">789123</td>
<td style="text-align: left;">abc456</td>
</tr>
</tbody>
</table>
</div>
<p><strong>Post anonymisation:</strong></p>
<p><em>Table 1</em></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">colA</th>
<th style="text-align: left;">colB</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">912056</td>
<td style="text-align: left;">zxcvbn</td>
</tr>
<tr>
<td style="text-align: left;">450912</td>
<td style="text-align: left;">poiuyt</td>
</tr>
</tbody>
</table>
</div>
<p><em>Table 2:</em></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">colC</th>
<th style="text-align: left;">colD</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">912056</td>
<td style="text-align: left;">qwe098</td>
</tr>
<tr>
<td style="text-align: left;">450912</td>
<td style="text-align: left;">asd321</td>
</tr>
</tbody>
</table>
</div> | <python><pandas><datatables><anonymous-types><scramble> | 2023-08-01 12:13:50 | 2 | 416 | Ferhat |
76,811,316 | 21,787,377 | Django signals returns Notification.user must be a "CustomUser" instance | <p>I'm trying to use <a href="https://stackoverflow.com/a/76171411/21835997">this method</a> to add a Notifications system to my Django app. I want to notify our users when an <code>Invitation</code> is created. However, the problem is that the <code>Invitation</code> model does not have a <code>user</code> field, and we don't want to add a <code>user</code> field to the model, because we don't want to make the person who creates the <code>Invitation</code> a <code>user</code>. How can I notify a user that one of his <code>Invitation</code> objects has been created? The method I'm using below returns the error: <code>Cannot assign "<Invitation: Adamu>": "Notification.user" must be a "CustomUser" instance.</code></p>
<p>the signals:</p>
<pre><code>from django.db.models.signals import post_save
from django.dispatch import receiver
from django.conf import settings
from .models import Level, Notification, Invitation
from django.urls import reverse
User = settings.AUTH_USER_MODEL
@receiver(post_save, sender=Invitation)
def comment_post_save(sender, instance, **kwargs):
message = f'{instance.first_name} has booked our"{instance.select_type.level} room"'
link = reverse('Open-Room', args=[str(instance.select_type.slug)])
notification = Notification(user=instance, message=message, link=link)
notification.save()
</code></pre>
<p>models:</p>
<pre><code>class Invitation(models.Model):
first_name = models.CharField(max_length=40)
last_name = models.CharField(max_length=40)
id_card = models.CharField(max_length=50, choices=ID_CARD)
id_number = models.CharField(max_length=100)
nationality = models.CharField(max_length=50, choices=LIST_OF_COUNTRY)
state = models.CharField(max_length=50)
town = models.CharField(max_length=50)
address = models.CharField(max_length=90)
phone_number = models.IntegerField()
guardian_phone_number = models.IntegerField()
number_of_day = models.IntegerField()
visited_for = models.CharField(max_length=40, choices=VISITED_FOR_THIS)
date_created = models.DateTimeField(auto_now_add=True)
select_type = models.ForeignKey(LevelType, on_delete=models.CASCADE)
slug = models.SlugField(unique=True)
def save(self, *args, **kwargs):
if not self.slug:
# generate a random 15-character string
random_string = ''.join(random.choices(string.ascii_lowercase + string.digits, k=15))
self.slug = f"{slugify(self.nationality)}-{random_string}"
super().save(*args, **kwargs)
def __str__(self):
return str(self.first_name)
class Notification(models.Model):
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
message = models.TextField()
link = models.URLField(blank=True, null=True)
read = models.BooleanField(default=False)
class LevelType(models.Model):
user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
name = models.CharField(max_length=20)
</code></pre>
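Based on the models shown, `Invitation` itself has no user, but its `select_type` (a `LevelType`) does — so the notification's user can be reached through the foreign-key chain, e.g. `Notification(user=instance.select_type.user, message=message, link=link)` in the receiver, instead of passing the `Invitation` instance itself. A tiny framework-free sketch of that attribute walk (the classes here are hypothetical stand-ins for the Django models, just to show the lookup path):

```python
# Hypothetical stand-ins mirroring the relationships in the question's models
class User:
    def __init__(self, name):
        self.name = name

class LevelType:
    def __init__(self, user):
        self.user = user                  # mirrors LevelType.user (FK to AUTH_USER_MODEL)

class Invitation:
    def __init__(self, select_type):
        self.select_type = select_type    # mirrors Invitation.select_type (FK to LevelType)

def notification_target(invitation):
    # Invitation has no user field, so walk the FK chain rather than
    # assigning the Invitation instance itself to Notification.user
    return invitation.select_type.user

owner = User("hotel-admin")
invitation = Invitation(LevelType(owner))
print(notification_target(invitation).name)  # hotel-admin
```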
| <python><django><django-models><django-signals> | 2023-08-01 11:47:10 | 0 | 305 | Adamu Abdulkarim Dee |
76,811,192 | 10,984,994 | Why my print statements and logging.info are not shown in pytest html report | <p>I found this issue while running a test case in the pytest framework: once the test completes (either pass or fail), report.html shows <code>"No Log Output Captured"</code> when I click "Show details" on each test case.
<strong>report.html</strong></p>
<p><a href="https://i.sstatic.net/FUH0T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FUH0T.png" alt="report.html" /></a>
command I use to get html report: <code>pytest test_01.py -s -v --html=report.html</code></p>
<p>In the console I am able to see the print statements; however, is there any way I can get the same output in report.html as well?</p>
<p><strong>console output:</strong></p>
<pre><code>collected 1 item
------------------------ Test 1 Initiated: Verify LogRetention file count in dropbox ------------------------
Device rebooting...
------------------------ Test 1 PASSED successfully : Verify LogRetention file count in dropbox folder ------------------------
Test case took 118.19 seconds.
test_01.py::test_verify_file_count ✓
</code></pre>
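One thing worth checking: the `-s` flag disables pytest's output capturing, and pytest-html can only embed output that was *captured*. Dropping `-s` from the command (i.e. `pytest test_01.py -v --html=report.html`) and, for `logging.info`, enabling log capture usually makes the output appear under "Show details". A sketch of the relevant settings (option names per the pytest logging docs — verify against your versions):

```ini
# pytest.ini (or the equivalent [tool.pytest.ini_options] table)
[pytest]
log_cli = true
log_level = INFO
```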
| <python><pytest><pytest-html> | 2023-08-01 11:29:32 | 2 | 1,009 | ilexcel |
76,811,178 | 8,547,163 | Math expressions in JSON | <p>I have a JSON file with</p>
<pre><code>{
"t": 5550.45,
"r": 12.4
}
</code></pre>
<p>Now I would like to have an element with <code>Dv= 1.0e-04*np.exp(-300000.0/r/t)</code></p>
<p>So my question is: can I write Dv as a key-value pair like the other parameters above?</p>
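JSON itself has no expression syntax — it is pure data — so Dv can't be written as a formula that JSON evaluates. The usual pattern is to store only the parameters and compute derived values after loading (or store the formula as a plain string and parse it yourself). A sketch:

```python
import json
import math

raw = '{"t": 5550.45, "r": 12.4}'
params = json.loads(raw)

# derived quantity computed after loading, not stored in the JSON itself
Dv = 1.0e-04 * math.exp(-300000.0 / params["r"] / params["t"])
print(Dv)  # ~1.28e-06
```

If the formula must travel with the file, storing it as a string like `"Dv": "1.0e-04*exp(-300000.0/r/t)"` and evaluating it with a safe expression parser (not bare `eval`) is a common compromise.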
| <python><json> | 2023-08-01 11:27:49 | 1 | 559 | newstudent |
76,811,021 | 335,412 | Replace pivot operation for use in lazy evaluation with polars | <p>I have a set of events at timestamps, and for each timestamp I need the sum of the "last" values of each username. This can be done with a pivot table, but I would like to use <code>LazyFrame</code>, because with many unique usernames, the pivot table would overflow RAM. However, <code>LazyFrame</code> does not support <code>pivot</code>.</p>
<p>The number of unique usernames is on the order of thousands, with the number of events on the order of tens of millions.</p>
<h2>A working example with <code>pivot</code> and <code>DataFrame</code>:</h2>
<p>The input dataframe:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.from_repr("""
┌────────────┬──────────┬───────┐
│ timestamp ┆ username ┆ kudos │
│ --- ┆ --- ┆ --- │
│ i64 ┆ str ┆ i64 │
╞════════════╪══════════╪═══════╡
│ 1690886106 ┆ ABC ┆ 123 │
│ 1690886107 ┆ DEF ┆ 10 │
│ 1690886110 ┆ DEF ┆ 12 │
│ 1690886210 ┆ GIH ┆ 0 │
└────────────┴──────────┴───────┘
""")
</code></pre>
<p>I can achieve the task using <code>pivot</code>:</p>
<pre class="lang-py prettyprint-override"><code>(
df.pivot(
on="username",
index="timestamp",
values=["kudos"],
aggregate_function="last",
)
.select(pl.all().forward_fill())
.fill_null(strategy="zero")
.select(pl.col("timestamp"), pl.sum_horizontal(df["username"].unique().to_list()).alias("sum"))
)
</code></pre>
<p>The results are correct:</p>
<pre><code>shape: (4, 2)
┌────────────┬─────┐
│ timestamp ┆ sum │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞════════════╪═════╡
│ 1690886106 ┆ 123 │
│ 1690886107 ┆ 133 │
│ 1690886110 ┆ 135 │
│ 1690886210 ┆ 135 │
└────────────┴─────┘
</code></pre>
<p>How would one implement this with <code>LazyFrame</code> such that it is efficient for a large number of unique usernames (i.e. using lazy evaluation and possibly without a giant sparse pivot table)?</p>
| <python><lazy-evaluation><python-polars> | 2023-08-01 11:07:32 | 1 | 3,608 | sougonde |
76,810,899 | 7,924,573 | Unable to install uwsgi Python package: can't start new thread | <p>I am maintaining a server and want to install and run an nginx + uWSGI Django application. Unfortunately, I am stuck installing the requirements, particularly uwsgi:</p>
<pre><code>pip3 install uwsgi
</code></pre>
<p>results in:</p>
<pre><code>RuntimeError: can't start new thread
</code></pre>
<p>I would love to get a hint on how to solve the issue.</p>
<p>My specifications are:</p>
<ul>
<li>Debian 10</li>
<li>Python 3.7</li>
<li>34GB RAM</li>
<li>Intel® Xeon® Gold 6130 Processor</li>
<li>max user processes (ulimit -u) 90</li>
</ul>
<p>Here is the complete console log:</p>
<pre class="lang-bash prettyprint-override"><code>Collecting uwsgi
Using cached uwsgi-2.0.22.tar.gz (809 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Building wheels for collected packages: uwsgi
Building wheel for uwsgi (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for uwsgi (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [55 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib
copying uwsgidecorators.py -> build/lib
installing to build/bdist.linux-x86_64/wheel
running install
/tmp/pip-build-env-atb4nphi/overlay/lib/python3.7/site-packages/setuptools/_distutils/dist.py:265: UserWarning: Unknown distribution option: 'descriptions'
warnings.warn(msg)
using profile: buildconf/default.ini
detected include path: ['/usr/lib/gcc/x86_64-linux-gnu/8/include', '/usr/local/include', '/usr/lib/gcc/x86_64-linux-gnu/8/include-fixed', '/usr/include/x86_64-linux-gnu', '/usr/include']
Patching "bin_name" to properly install_scripts dir
Traceback (most recent call last):
File "/homepages/10/d521380034/htdocs/open-ls/TS_annotation_tool_venv/lib/python3.7/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/homepages/10/d521380034/htdocs/open-ls/TS_annotation_tool_venv/lib/python3.7/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/homepages/10/d521380034/htdocs/open-ls/TS_annotation_tool_venv/lib/python3.7/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 252, in build_wheel
metadata_directory)
File "/tmp/pip-build-env-atb4nphi/overlay/lib/python3.7/site-packages/setuptools/build_meta.py", line 417, in build_wheel
wheel_directory, config_settings)
File "/tmp/pip-build-env-atb4nphi/overlay/lib/python3.7/site-packages/setuptools/build_meta.py", line 401, in _build_with_temp_dir
self.run_setup()
File "/tmp/pip-build-env-atb4nphi/overlay/lib/python3.7/site-packages/setuptools/build_meta.py", line 488, in run_setup
self).run_setup(setup_script=setup_script)
File "/tmp/pip-build-env-atb4nphi/overlay/lib/python3.7/site-packages/setuptools/build_meta.py", line 338, in run_setup
exec(code, locals())
File "<string>", line 143, in <module>
File "/tmp/pip-build-env-atb4nphi/overlay/lib/python3.7/site-packages/setuptools/__init__.py", line 107, in setup
return distutils.core.setup(**attrs)
File "/tmp/pip-build-env-atb4nphi/overlay/lib/python3.7/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
File "/tmp/pip-build-env-atb4nphi/overlay/lib/python3.7/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/tmp/pip-build-env-atb4nphi/overlay/lib/python3.7/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
self.run_command(cmd)
File "/tmp/pip-build-env-atb4nphi/overlay/lib/python3.7/site-packages/setuptools/dist.py", line 1234, in run_command
super().run_command(command)
File "/tmp/pip-build-env-atb4nphi/overlay/lib/python3.7/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-atb4nphi/overlay/lib/python3.7/site-packages/wheel/bdist_wheel.py", line 381, in run
self.run_command("install")
File "/tmp/pip-build-env-atb4nphi/overlay/lib/python3.7/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "/tmp/pip-build-env-atb4nphi/overlay/lib/python3.7/site-packages/setuptools/dist.py", line 1234, in run_command
super().run_command(command)
File "/tmp/pip-build-env-atb4nphi/overlay/lib/python3.7/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
cmd_obj.run()
File "<string>", line 79, in run
File "/tmp/pip-install-ue34lw_1/uwsgi_e9a9e132f35048d08a753e0d4e62b95d/uwsgiconfig.py", line 284, in build_uwsgi
t.start()
File "/usr/lib/python3.7/threading.py", line 847, in start
_start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for uwsgi
Failed to build uwsgi
ERROR: Could not build wheels for uwsgi, which is required to install pyproject.toml-based projects
</code></pre>
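Given the tiny `ulimit -u 90`, a plausible trigger (visible in the traceback: `uwsgiconfig.py ... t.start()`) is that uwsgi's build script compiles sources in parallel with one thread per detected CPU — far too many on a many-core Xeon under that limit. The build script honours a `CPUCOUNT` environment variable, so limiting the build to a single thread is worth trying (hedged — verify against your uwsgi version):

```shell
CPUCOUNT=1 pip3 install uwsgi
```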
| <python><multithreading><pip><debian><uwsgi> | 2023-08-01 10:48:45 | 1 | 843 | tschomacker |
76,810,746 | 567,059 | Configure coverage lcov file from pyproject.toml | <p>When using <code>pytest</code> with the <code>pytest-cov</code> plugin, I am able to use the <em><strong>pyproject.toml</strong></em> file to set the required options.</p>
<pre class="lang-ini prettyprint-override"><code>[tool.pytest.ini_options]
addopts = "-p no:cacheprovider --cov --no-cov-on-fail"
enable_assertion_pass_hook = true
python_files = "tests/*_test.py"
[tool.coverage.run]
branch = true
data_file = ".coverage/.coverage"
source = ["wwaz/assets"]
[tool.coverage.report]
fail_under = 95
show_missing = true
</code></pre>
<p>However, I now wish to output an <em><strong>lcov</strong></em> file, but the <code>lcov output</code> option isn't working, with no file being output. I've reviewed the <a href="https://coverage.readthedocs.io/en/latest/config.html#lcov" rel="nofollow noreferrer">documentation</a> and as far as I can see, this is how that option should be specified.</p>
<pre class="lang-ini prettyprint-override"><code>[tool.coverage.lcov]
output = ".coverage/.lcov"
</code></pre>
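Worth noting: `[tool.coverage.lcov]` only configures the standalone `coverage lcov` command; pytest-cov writes just the report formats requested on the command line. Adding an lcov report to `addopts` (supported in pytest-cov ≥ 3.0 with coverage ≥ 6.3 — verify against your versions) should make the file appear at the configured path:

```toml
[tool.pytest.ini_options]
addopts = "-p no:cacheprovider --cov --cov-report=lcov --no-cov-on-fail"
```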
| <python><pyproject.toml> | 2023-08-01 10:30:37 | 0 | 12,277 | David Gard |
76,810,701 | 17,951,403 | Data stream with confluent-kafka python giving error - TypeError: You must pass either str or Schema | <p>I am trying to build stream-table join operations in Kafka with Python, and below is the code that performs the join operation on data sent from the producer.</p>
<p>stream_table_join.py</p>
<pre><code>from confluent_kafka import DeserializingConsumer, SerializingProducer
from confluent_kafka.serialization import StringDeserializer, StringSerializer
from confluent_kafka import Consumer
from confluent_kafka.schema_registry.json_schema import JSONDeserializer
import json
class LookupData:
def __init__(self, id, name):
self.id = id
self.name = name
# Kafka configuration
bootstrap_servers = 'localhost:9092'
events_topic = 'events_topic'
# Configure the Kafka consumer and producer
consumer = DeserializingConsumer({
'bootstrap.servers': bootstrap_servers,
'key.deserializer': StringDeserializer(),
'value.deserializer': JSONDeserializer(LookupData, from_dict=lambda x: LookupData(x['id'], x['name'])),
'group.id': 'stream-table-join-group'
})
consumer.subscribe([events_topic])
producer = SerializingProducer({
'bootstrap.servers': bootstrap_servers,
'key.serializer': StringSerializer(),
'value.serializer': StringSerializer()
})
# Load the lookup data from the lookup table
lookup_table = {}
# Join operation function
def join_operation(event):
""" Join Operation here """
# Start consuming and producing messages
while True:
    msg = consumer.poll(1.0)
    if msg is None:
        continue
    event = json.loads(msg.value())
enriched_event = join_operation(event)
producer.produce(events_topic, value=json.dumps(enriched_event))
producer.flush()
consumer.close()
</code></pre>
<p>There exists a part as shared below</p>
<p><code>'value.deserializer': JSONDeserializer(LookupData, from_dict=lambda x: LookupData(x['id'], x['name'])),</code></p>
<p>This taps into the data object sent from the producer and converts it into a
<code>LookupData</code> class object with the help of <code>JSONDeserializer</code>. This is done so that an additional field can be joined, but every time the file is executed, it just shows the error</p>
<pre><code>TypeError: You must pass either str or Schema
</code></pre>
<p>I tried running it both with the producer running and without it; both times I get the same error, and there is not enough explanation of what exactly the error is about or how to solve it.</p>
<p>Below is the sample data that is coming from <code>producer.py</code></p>
<pre><code>{
"id": 1,
"name": "John Doe",
}
</code></pre>
<p>I tried running the code with and without the producer running; both times I received the same error. I even checked the documentation and tried to find solutions for the error, but found none.</p>
| <python><apache-kafka><data-ingestion><confluent-kafka-python> | 2023-08-01 10:25:11 | 1 | 328 | Polymath |
76,810,696 | 11,564,487 | Hough Gradient Method misses some circles | <p>Consider the images below:</p>
<p><a href="https://i.sstatic.net/bM0yq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bM0yq.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/PAsv1.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PAsv1.jpg" alt="enter image description here" /></a></p>
<p>As can be seen, many circles are not detected. I have already played with</p>
<p><code>param1</code> and <code>param2</code> (<code>cv2.HoughCircles</code>)</p>
<p>in the code below:</p>
<pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np
from PIL import Image
# Load the image and downscale it
img = Image.open('/tmp/F01-02.jpg')
img = img.resize((img.size[0] // 2, img.size[1] // 2)) # downscale by a factor of 2
open_cv_image = np.array(img)
# Convert to grayscale
gray = cv2.cvtColor(open_cv_image, cv2.COLOR_RGB2GRAY)
# Apply Hough transform
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1, 40, param1=5, param2=30, minRadius=7, maxRadius=10)
# Ensure at least some circles were found
if circles is not None:
circles = np.round(circles[0, :]).astype("int")
for (x, y, r) in circles:
cv2.circle(open_cv_image, (x, y), r, (0, 255, 0), 2)
pil_image = Image.fromarray(open_cv_image)
pil_image.save('/tmp/result.jpg')
</code></pre>
<p>Any ideas? Thanks!</p>
| <python><opencv><image-processing><object-detection><omr> | 2023-08-01 10:23:56 | 1 | 27,045 | PaulS |
76,810,507 | 4,905,285 | Reduce space between subplots and colorbar | <p>I would like to know how to reduce the space between subplots and the colorbar. By the way, I have included <code>fig.tight_layout()</code>; it reduces neither the space between the figures nor the space around the colorbar.</p>
<p>This is my code below.</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
diff_array_num_BayesHier = pd.read_csv("diff_array_num_BayesHier.csv")
diff_array_num_MLE = pd.read_csv("diff_array_num_MLE.csv")
diff_array_num_BayesHier = diff_array_num_BayesHier.drop('Unnamed: 0', axis=1)
diff_array_num_MLE = diff_array_num_MLE.drop('Unnamed: 0', axis=1)
sdo = pd.read_csv('~/Desktop/sdo.csv')
colnames = [1,3,6,12,24,48]
diff_array_num_BayesHier.index = sdo['SDO_Name']
diff_array_num_MLE.index = sdo['SDO_Name']
diff_array_num_BayesHier.columns = colnames
diff_array_num_MLE.columns = colnames
vmin = min(diff_array_num_BayesHier.min().min(), diff_array_num_MLE.min().min())
vmax = max(diff_array_num_BayesHier.max().max(), diff_array_num_MLE.max().max())
fig, (ax1,ax2) = plt.subplots(1,2, figsize=(10,5))
im1 = ax1.imshow(diff_array_num_BayesHier,cmap='Reds', vmin=vmin, vmax=vmax)
ax1.set_xticks(np.arange(len(colnames)))
ax1.set_yticks(np.arange(len(sdo['SDO_Name'])))
ax1.set_xticklabels(colnames)
ax1.set_yticklabels(sdo['SDO_Name'])
# ax1.xticks(np.arange(len(colnames)), colnames)
# ax1.yticks(np.arange(len(sdo['SDO_Name'])), sdo['SDO_Name'])
im2 = ax2.imshow(diff_array_num_MLE,cmap='Reds', vmin=vmin, vmax=vmax)
ax2.set_xticks(np.arange(len(colnames)))
ax2.set_yticks([]) # Remove y-axis tick labels
ax2.set_xticklabels(colnames)
# ax2.xticks(np.arange(len(colnames)), colnames)
# ax2.yticks(np.arange(len(sdo['SDO_Name'])), sdo['SDO_Name'])
cax = fig.add_axes([0.92, 0.15, 0.02, 0.7]) # [left, bottom, width, height]
cbar = plt.colorbar(im1, cax=cax)
cbar.set_label('Colorbar Label')
im2.set_clim(vmin, vmax)
#plt.subplots_adjust(wspace=-1)
fig.tight_layout()
plt.savefig('output_plot.png', dpi=300)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/RT1pI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RT1pI.png" alt="enter image description here" /></a></p>
<p>Adding <code>plt.subplots_adjust(wspace=-1)</code> reduces the space between the two figures, but not the colorbar.</p>
<p><a href="https://i.sstatic.net/ausR4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ausR4.png" alt="enter image description here" /></a></p>
<p>How can I reduce the space between the two heatmaps and the colorbar?</p>
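<p>A possible fix (sketched with random stand-in data, since the question's CSVs are unavailable): drop the manual <code>fig.add_axes</code> placement and let constrained layout position a shared colorbar, whose <code>pad</code> argument then controls the gap directly.</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt

# Random stand-ins for diff_array_num_BayesHier / diff_array_num_MLE
rng = np.random.default_rng(0)
data1, data2 = rng.random((10, 6)), rng.random((10, 6))
vmin = min(data1.min(), data2.min())
vmax = max(data1.max(), data2.max())

# constrained_layout places the colorbar itself, so no manual
# fig.add_axes([...]) or wspace tuning is needed
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5), constrained_layout=True)
im1 = ax1.imshow(data1, cmap="Reds", vmin=vmin, vmax=vmax)
im2 = ax2.imshow(data2, cmap="Reds", vmin=vmin, vmax=vmax)
ax2.set_yticks([])

# attach one colorbar to both axes; `pad` controls the gap to the heatmaps
cbar = fig.colorbar(im2, ax=[ax1, ax2], pad=0.02, shrink=0.8)
fig.savefig("output_plot.png", dpi=150)
```

<p>With constrained layout the colorbar is treated as part of the figure layout, so shrinking <code>pad</code> moves it flush against the right-hand heatmap.</p>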
| <python><matplotlib> | 2023-08-01 09:59:16 | 2 | 441 | AlexLee |
76,810,468 | 9,515,210 | How to play video on QPushButton by PySide6.QtMultimedia.QMediaPlayer? | <p>I want to play video on a QPushButton so that users can easily preview their videos in my tool. The reason I want to use QPushButton is that I want to listen for its clicked signal.</p>
<p>Here is what I have tried:</p>
<pre class="lang-py prettyprint-override"><code>from PySide6 import QtWidgets, QtMultimedia, QtMultimediaWidgets
class VideoPreviewWidget(QtWidgets.QWidget):
    def __init__(self, parent=None):
        super().__init__(parent)
        self.media_player = QtMultimedia.QMediaPlayer()
        self.media_player.setSource(...)
        self.media_player.setLoops(QtMultimedia.QMediaPlayer.Loops.Infinite)
        self.media_player.play()
        self.setLayout(QtWidgets.QVBoxLayout())
        # self.preview_button = QtWidgets.QPushButton()
        # self.layout().addWidget(self.preview_button)
        # self.media_player.setVideoOutput(self.preview_button)  # Nothing happened
        self.video_widget = QtMultimediaWidgets.QVideoWidget()
        self.layout().addWidget(self.video_widget)
        self.media_player.setVideoOutput(self.video_widget)  # Black screen

        self.media_player.mediaStatusChanged.connect(self.print_errors)

    def print_errors(self):
        print(self.media_player.mediaStatus(), self.media_player.errors(), self.media_player.errorString())
        # MediaStatus.BufferedMedia Error.NoError
</code></pre>
<p>I have read the Qt example here, <a href="https://doc.qt.io/qtforpython-6/examples/example_multimedia_player.html" rel="nofollow noreferrer">Player Example</a>, but cannot find my mistake.</p>
| <python><python-3.x><qt><pyside6> | 2023-08-01 09:53:30 | 0 | 730 | Yang HG |
76,810,268 | 9,895,048 | tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value num_blocks_2/multihead_attention/conv1d_1/kerne | <p>I have trained <a href="https://github.com/ahmedrashed-ml/CARCA" rel="nofollow noreferrer">CARCA (Context and Attribute-Aware Sequential Recommendation via Cross-Attention)</a> on the Video Games dataset. I saved the session after 1 epoch and tried to restore it, since the whole architecture is written around TF1 sessions. I saved the session using the <code>tf.train.Saver()</code> <code>.save()</code> method.</p>
<pre><code>session_saver = tf.train.Saver(save_relative_paths=True)
session_output_path = os.path.join(args.output_dir, "epochs_"+str(epoch))
if not os.path.isdir(session_output_path):
os.makedirs(session_output_path)# make directory if not exists
session_saver.save(sess, session_output_path+"/carca_model")
print(f"[INFO]: Save the model after epochs: {epoch}")
</code></pre>
<p>Then, I restored the session using:</p>
<pre><code>saver = tf.train.import_meta_graph(session_dir +"carca_model.meta")
saver.restore(sess, tf.train.latest_checkpoint(session_dir))
</code></pre>
<p>I then tried to run predictions on a new dataset using the restored session <code>sess</code>, but encountered an <code>Attempting to use uninitialized value</code> error. The full error is:</p>
<pre><code>Traceback (most recent call last):
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1356, in _do_call
return fn(*args)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1341, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1429, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value num_blocks_2/multihead_attention/conv1d_1/kernel_1
[[{{node num_blocks_2/multihead_attention/conv1d_1/kernel_1/read}}]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "CARCA_train.py", line 1441, in <module>
get_load_model_and_inference(dataset, usernum, itemnum, args, ItemFeatures, UserFeatures, CXTDict)
File "CARCA_train.py", line 1317, in get_load_model_and_inference
predictions = -model.predict(sess, np.ones(args.maxlen)*u, [seq], item_idx, [seqcxt], testitemscxt)
File "CARCA_train.py", line 976, in predict
{self.test_user: u, self.input_seq: seq, self.test_item: item_idx, self.is_training: False, self.seq_cxt:seqcxt, self.test_item_cxt:testitemcxt})
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 950, in run
run_metadata_ptr)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1173, in _run
feed_dict_tensor, options, run_metadata)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1350, in _do_run
run_metadata)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/client/session.py", line 1370, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value num_blocks_2/multihead_attention/conv1d_1/kernel_1
[[node num_blocks_2/multihead_attention/conv1d_1/kernel_1/read (defined at CARCA_train.py:670) ]]
Original stack trace for 'num_blocks_2/multihead_attention/conv1d_1/kernel_1/read':
File "CARCA_train.py", line 1441, in <module>
get_load_model_and_inference(dataset, usernum, itemnum, args, ItemFeatures, UserFeatures, CXTDict)
File "CARCA_train.py", line 1266, in get_load_model_and_inference
model = Model(usernum, itemnum, args, ItemFeatures, UserFeatures, cxt_size = cxt_size ,use_res = True)
File "CARCA_train.py", line 768, in __init__
dropout_rate=args.dropout_rate, is_training=self.is_training)
File "CARCA_train.py", line 670, in feedforward
outputs = tf.layers.conv1d(**params)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/layers/convolutional.py", line 218, in conv1d
return layer.apply(inputs)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1479, in apply
return self.__call__(inputs, *args, **kwargs)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/layers/base.py", line 537, in __call__
outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 591, in __call__
self._maybe_build(inputs)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1881, in _maybe_build
self.build(input_shapes)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/keras/layers/convolutional.py", line 165, in build
dtype=self.dtype)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/layers/base.py", line 450, in add_weight
**kwargs)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 384, in add_weight
aggregation=aggregation)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py", line 663, in _add_variable_with_custom_getter
**kwargs_for_getter)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1496, in get_variable
aggregation=aggregation)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1239, in get_variable
aggregation=aggregation)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/ops/variable_scope.py", line 562, in get_variable
aggregation=aggregation)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/ops/variable_scope.py", line 514, in _true_getter
aggregation=aggregation)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/ops/variable_scope.py", line 929, in _get_single_variable
aggregation=aggregation)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 259, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 220, in _variable_v1_call
shape=shape)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 198, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/ops/variable_scope.py", line 2511, in default_variable_creator
shape=shape)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 263, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 1568, in __init__
shape=shape)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/ops/variables.py", line 1755, in _init_from_args
self._snapshot = array_ops.identity(self._variable, name="read")
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py", line 180, in wrapper
return target(*args, **kwargs)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py", line 86, in identity
ret = gen_array_ops.identity(input, name=name)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 4253, in identity
"Identity", input=input, name=name)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 3616, in create_op
op_def=op_def)
File "/home/zakipoint/miniconda3/envs/sequential_recommendation_carca/lib/python3.7/site-packages/tensorflow/python/framework/ops.py", line 2005, in __init__
self._traceback = tf_stack.extract_stack()
</code></pre>
<p>I have also compared the variables in the stored session with those in the session just after training; both outputs look the same. I have been stuck on this for a few days and also tried saving the model with <code>tf.saved_model.builder.SavedModelBuilder()</code> and restoring it with <code>tf.saved_model.loader.load()</code>, but that did not solve the issue either. I am currently using <code>tensorflow 1.14</code>. Thank you!</p>
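<p>One likely cause (a guess, not verified against the CARCA code): the <code>_1</code> suffix in <code>conv1d_1/kernel_1</code> indicates a <em>second</em> copy of every variable was created, which happens when <code>Model(...)</code> is constructed after <code>import_meta_graph</code> has already loaded the saved graph; the duplicates are never initialized. The usual TF1 pattern is to rebuild the graph exactly once and restore only the values, sketched below with a toy one-variable graph (using <code>tf.compat.v1</code> so the sketch also runs on TF&nbsp;2):</p>

```python
import os
import tempfile
import tensorflow as tf

# The question uses TF 1.14; tf.compat.v1 exposes the same graph/session API
tf1 = tf.compat.v1
tf1.disable_eager_execution()

ckpt_dir = tempfile.mkdtemp()

# --- training side: build a tiny graph once and save it ---
tf1.reset_default_graph()
w = tf1.get_variable("w", initializer=tf.constant(3.0))
saver = tf1.train.Saver()
with tf1.Session() as sess:
    sess.run(tf1.global_variables_initializer())
    saver.save(sess, os.path.join(ckpt_dir, "carca_model"))

# --- inference side: rebuild the SAME graph, then restore values into it.
# Building the model *and* calling import_meta_graph creates a second copy
# of every variable (names like "w_1" / "conv1d_1/kernel_1") that is never
# initialized -- which matches the FailedPreconditionError above.
tf1.reset_default_graph()
w = tf1.get_variable("w", initializer=tf.constant(0.0))
saver = tf1.train.Saver()
with tf1.Session() as sess:
    saver.restore(sess, tf1.train.latest_checkpoint(ckpt_dir))
    restored = sess.run(w)
print(restored)
```

<p>Applied to the question: call <code>Model(...)</code> first, then <code>tf.train.Saver().restore(...)</code>, and skip <code>import_meta_graph</code> entirely.</p>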
| <python><tensorflow><recommendation-engine> | 2023-08-01 09:27:30 | 1 | 411 | npn |
76,810,199 | 5,684,405 | Unable to add mamba env to PyCharm due to OK button greyed out | <p>On macOS, using PyCharm <code>Build #PY-232.8660.197, built on July 26, 2023</code>, I'm unable to add a conda env as the Python interpreter because the 'OK' button is grayed out.</p>
<p>The path exists:</p>
<pre><code>ls -Gflash /opt/homebrew/Caskroom/mambaforge/base/bin/mam* ─╯
8 -rwxrwxr-x 1 mc admin 247B Apr 11 15:29 /opt/homebrew/Caskroom/mambaforge/base/bin/mamba
840 -rwxrwxr-x 2 mc admin 420K Mar 28 13:36 /opt/homebrew/Caskroom/mambaforge/base/bin/mamba-package
</code></pre>
<p>How can I fix this?</p>
<p>EDIT:
It turns out that when I change <code>mamba</code> -> <code>conda</code> in this path, it accepts <code>conda</code>.</p>
<p>Still: why is mamba not accepted?</p>
<p><a href="https://i.sstatic.net/dHJ0k.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dHJ0k.png" alt="enter image description here" /></a></p>
| <python><pycharm><conda><mamba> | 2023-08-01 09:17:21 | 1 | 2,969 | mCs |
76,810,034 | 378,386 | How to create elements with zeep from WSDL? | <p>How to create elements with zeep from WSDL?</p>
<p>While I can look up types by their namespace and name, I cannot create elements the same way. zeep provides an overview of global elements:</p>
<pre><code>Global elements:
ns1:Request(FooBar: {Parms: {CorrelationID: xsd:string, ...
ns1:Response(FooBar: {Data: {CorrelationID: xsd:string, ...
</code></pre>
<p>I tried to retrieve a type like this:</p>
<pre><code>request_type = client.get_type("ns1:Request")
</code></pre>
<p>but I get the following error:</p>
<pre><code>No type 'Request' in namespace http://ws.example.com/FooBar. Available types are: -
</code></pre>
<p>As there are no types listed, how can I construct an element with zeep? I need the element as a parameter for a service operation.</p>
| <python><soap><wsdl><zeep> | 2023-08-01 08:56:37 | 0 | 2,557 | aggsol |
76,810,023 | 4,105,440 | Any way of importing module IN parent folder without using a path insert? | <p>Consider the following structure</p>
<pre><code>├── __init__.py
├── script.py
├── lib
│ ├── __init__.py
│ ├── utils.py
│ ├── utils2.py
</code></pre>
<p>I know that I can import any function defined in <code>utils.py</code> into <code>script.py</code> by doing <code>from lib.utils import function</code>.
Also, I can import any function defined in <code>utils2.py</code> into <code>utils.py</code> using a relative import, i.e. <code>from .utils2 import function2</code>.</p>
<p>Over time my root folder has accumulated files, so additional organization into subfolders is needed. I would like to have something like this:</p>
<pre><code>├── __init__.py
├── realtime
│ ├── __init__.py
│ ├── script.py
├── lib
│ ├── __init__.py
│ ├── utils.py
│ ├── utils2.py
</code></pre>
<p>If I attempt an import with respect to the package root in <code>realtime/script.py</code> now</p>
<pre><code>from lib.utils import function
</code></pre>
<p>I get an error.</p>
<p>Is there any way to always resolve imports with respect to the root folder when I'm running a script inside a subfolder? I would like to avoid keeping all my main scripts in the root, and also to avoid inserting anything into <code>sys.path</code>, which is what many solutions online suggest. Does Python not have an official way of dealing with this?</p>
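<p>The standard approach is to run the script as a module with <code>python -m realtime.script</code> from the root folder: <code>-m</code> puts the working directory on the import path, so <code>from lib.utils import function</code> resolves without any <code>sys.path</code> edits. The sketch below recreates the question's layout in a temp dir to demonstrate (file contents are illustrative):</p>

```python
import os
import subprocess
import sys
import tempfile

# Recreate the question's layout: root/{__init__.py, realtime/, lib/}
root = tempfile.mkdtemp()
open(os.path.join(root, "__init__.py"), "w").close()
for d in ("realtime", "lib"):
    os.makedirs(os.path.join(root, d))
    open(os.path.join(root, d, "__init__.py"), "w").close()
with open(os.path.join(root, "lib", "utils.py"), "w") as f:
    f.write("def function():\n    return 'hello from lib'\n")
with open(os.path.join(root, "realtime", "script.py"), "w") as f:
    # absolute import relative to the root folder, as the question wants
    f.write("from lib.utils import function\nprint(function())\n")

# `python -m realtime.script` run from the root puts the root on sys.path,
# so the absolute import resolves without any sys.path hacks
result = subprocess.run(
    [sys.executable, "-m", "realtime.script"],
    cwd=root, capture_output=True, text=True,
)
print(result.stdout.strip())  # hello from lib
```

<p>In other words, keep invoking scripts from the project root with <code>-m</code> and the package-relative dotted path, instead of running <code>python realtime/script.py</code> directly.</p>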
| <python><python-3.x><python-packaging> | 2023-08-01 08:55:51 | 0 | 673 | Droid |
76,810,019 | 9,451,634 | Typehints/mypy in configuration class abstraction of dict | <p>I have a <code>config.yaml</code> in which various parameters of my program can be specified; the values are heterogeneous in type:</p>
<pre><code>attr1: some_string
attr2: some_int
attr3:
subkey1: some_string
subkey2: some_string
attr4:
- item1
- item2
</code></pre>
<p>To hide the configuration file from the rest of the code, I use a class that mostly just passes the values from the dict to the code.</p>
<pre><code>class Config:
    def __init__(self):
        with open("config.yaml", "r") as f:
            self.config_dict: dict[str, Any] = yaml.safe_load(f)

    @property
    def attr1(self) -> str:
        return self.config_dict["attr1"]

    @property
    def attr2(self) -> int:
        return self.config_dict["attr2"]

    ...
</code></pre>
<p>mypy now complains about the types returned by the class properties.</p>
<pre><code> error: Returning Any from function declared to return "str"
error: Returning Any from function declared to return "int"
</code></pre>
<p>What are the best practices I should follow here to use typing correctly? Are assertions of the type of the user provided attribute values the way to go?</p>
<p>Thanks!</p>
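<p>One common pattern (a sketch; the key names mirror the question, but the helper function is made up): validate with <code>isinstance</code> in a small generic getter. The runtime check both catches bad user-supplied values and lets mypy narrow the value from <code>Any</code> to the requested type, so the properties can simply <code>return get_typed(...)</code>.</p>

```python
from typing import Any, TypeVar

T = TypeVar("T")

# Stand-in for the result of yaml.safe_load("config.yaml")
config_dict: dict[str, Any] = {"attr1": "some_string", "attr2": 3}

def get_typed(d: dict[str, Any], key: str, expected: type[T]) -> T:
    """Return d[key], checking its runtime type; the isinstance check
    narrows the value from Any to T, so mypy stops complaining."""
    value = d[key]
    if not isinstance(value, expected):
        raise TypeError(
            f"{key!r} should be {expected.__name__}, got {type(value).__name__}"
        )
    return value

attr1 = get_typed(config_dict, "attr1", str)  # inferred as str
attr2 = get_typed(config_dict, "attr2", int)  # inferred as int
print(attr1, attr2)
```

<p>For larger configs, a schema library that parses the whole YAML into a typed model up front may be cleaner than per-key getters, but the getter keeps the dependency footprint at zero.</p>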
| <python><configuration><mypy><python-typing><abstraction> | 2023-08-01 08:55:29 | 0 | 450 | Daniel |