| QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars, nullable ⌀) |
|---|---|---|---|---|---|---|---|---|
75,483,294
| 5,371,582
|
Python type annotation when the default argument is None
|
<p>Maybe related:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/19202633/python-3-type-hinting-for-none">Python 3 type hinting for None?</a></li>
<li><a href="https://stackoverflow.com/questions/71431701/in-python-how-can-i-type-hint-a-list-with-an-empty-default-value">In Python, how can I type hint a list with an empty default value?</a></li>
</ul>
<pre class="lang-py prettyprint-override"><code>def other_function(x: int) -> None:
    pass


def foo(x: int = None) -> None:  # Inconsistent; how to do better?
    if x is None:
        x = 4
    other_function(x)
</code></pre>
<p>When I reach the line <code>other_function(x)</code>, I'm sure that <code>x</code> is an int.</p>
<p>My issue is that the linter (pyright in Neovim) believes that <code>x</code> is something like <code>Union[int, None]</code>, so it does not accept passing <code>x</code> to <code>other_function</code>.</p>
<p>QUESTION: Is there a way to say "At this point, I KNOW that <code>x</code> is an <code>int</code>"?</p>
<p>I've tried the solutions in the linked questions (and some variations) without success.</p>
<p>CLARIFICATION 1: I am not really interested in the type hint itself in the function declaration (that is the point of the linked questions). I'm interested in making Python/the linter know that, after <code>if x is None: ...</code>, <code>x</code> is an <code>int</code>.</p>
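<p>For reference, a minimal sketch of the declaration pyright understands: annotate the default-None parameter as <code>Optional[int]</code>, after which the <code>if x is None</code> check narrows <code>x</code> to <code>int</code> for the rest of the function:</p>

```python
from typing import Optional


def other_function(x: int) -> None:
    pass


def foo(x: Optional[int] = None) -> None:  # or `x: int | None` on 3.10+
    if x is None:
        x = 4
    # pyright narrows x to int here, so this call type-checks
    other_function(x)
```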
|
<python><python-typing><default-value>
|
2023-02-17 10:49:19
| 0
| 705
|
Laurent Claessens
|
75,483,258
| 990,639
|
Python saxonpy returns I/O error when transforming XML
|
<p>I am trying to perform an XSLT transform using <a href="https://pypi.org/project/saxonpy/" rel="nofollow noreferrer">saxonpy</a> with the following Python code:</p>
<pre><code>from saxonpy import PySaxonProcessor
import os

class XMLProcessor:
    proc = PySaxonProcessor(license=False)

    def processXSLT2(self, sourceXmlDocPath, xsltStyleSheetPath):
        # https://www.saxonica.com/saxon-c/doc1.2/html/saxonc.html#PyXslt30Processor
        print(self.proc.version)
        self.proc.set_cwd(os.getcwd())  # set the CWD first
        xsltproc = self.proc.new_xslt30_processor()
        output = xsltproc.transform_to_string(source_file=sourceXmlDocPath,
                                              stylesheet_file=xsltStyleSheetPath)
        return output
</code></pre>
<p>In my main py file, it is called using <code>XMLProcessor.processXSLT2(XMLProcessor, LOCAL_XML_FILE, os.environ['LAMBDA_TASK_ROOT'] + '/metadata.xsl')</code>.</p>
<p>However, the console shows this error message:</p>
<pre><code>Saxon/C 1.2.1 running with Saxon-HE 9.9.1.5C from Saxonica
Error
I/O error reported by XML parser processing D:\git\lambda\data\test.xml: unknown protocol: d
</code></pre>
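<p>For what it's worth, the <code>unknown protocol: d</code> message suggests the XML parser is reading the Windows path as a URL and taking <code>D:</code> for a scheme. A hedged sketch (not verified against saxonpy itself) of converting a local path into a <code>file://</code> URI that URL-oriented parsers accept:</p>

```python
import os
from pathlib import Path


def to_file_uri(path: str) -> str:
    """Turn a local filesystem path into a file:// URI so an XML
    parser does not mistake a drive letter like D: for a protocol."""
    return Path(path).resolve().as_uri()


# on Windows, r"D:\git\lambda\data\test.xml" becomes a file:///D:/... URI
print(to_file_uri(os.getcwd()))
```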
|
<python><xml><saxon>
|
2023-02-17 10:45:57
| 1
| 1,147
|
Eugene
|
75,483,237
| 3,922,727
|
Python: how to return an Excel file without converting to bytes using HttpResponse
|
<p>We want to return an Excel file from Python to the front end. The way we are doing it now is as follows, in an Azure HTTP trigger function.</p>
<pre><code>return func.HttpResponse(
customizedFile,
headers={"Content-Disposition": 'attachment; filename="{customizedFileName}"'},
mimetype='application/vnd.ms-excel',
status_code=200,
)
</code></pre>
<p>The motive for returning a file is to preserve the file formatting. There is another way, converting the workbook to bytes and sending them back, as mentioned in this <a href="https://stackoverflow.com/questions/72632654/return-excel-file-from-azure-function-via-http-using-python">post</a>:</p>
<pre><code>buffer = BytesIO()
excel_buf = df.to_excel(buffer)
return func.HttpResponse(buffer.getvalue(), status_code=200)
</code></pre>
<p>Our aim is to preserve the formatting after the needed customization.</p>
<p>Also we don't want to save in a blob as the end user should not have access to our storage.</p>
<p>The way mentioned at the top of the post is returning the following error:</p>
<blockquote>
<p>response is expected to be either of str, bytes, or bytearray, got
Workbook</p>
</blockquote>
<p>The returned file is saved successfully in a local directory.</p>
<p>How can we send the file as Excel directly, without the need to convert to bytes? Is it something to fix on the front end so that it accepts such types?</p>
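<p>One way to keep the formatting without handing the raw <code>Workbook</code> object to <code>HttpResponse</code> is to serialize the customized workbook itself (rather than re-exporting a DataFrame) into an in-memory buffer. A sketch, assuming the workbook object exposes an openpyxl-style <code>save(stream)</code> method:</p>

```python
from io import BytesIO


def workbook_to_bytes(workbook) -> bytes:
    """Serialize a workbook-like object (anything exposing
    .save(stream), e.g. an openpyxl Workbook) to bytes in memory.
    Formatting survives because .save writes the complete xlsx file."""
    buffer = BytesIO()
    workbook.save(buffer)
    return buffer.getvalue()
```

<p>The returned <code>bytes</code> can then be passed to <code>func.HttpResponse</code>, which accepts <code>bytes</code> but not a <code>Workbook</code>.</p>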
|
<python><excel><azure-functions><httpresponse><azure-http-trigger>
|
2023-02-17 10:44:07
| 1
| 5,012
|
alim1990
|
75,483,222
| 18,949,720
|
Finding contour around a cluster of pixels
|
<p>I have a set of images that look like this:</p>
<p><a href="https://i.sstatic.net/YmZd2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YmZd2.png" alt="enter image description here" /></a></p>
<p>Using Python, I need a way to find a contour around the yellow shape that ignores the isolated points and is not too complex. Something looking a bit like this:</p>
<p><a href="https://i.sstatic.net/yBY8S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yBY8S.png" alt="enter image description here" /></a></p>
<p>I tried some methods, such as the <code>find_contours</code> function from skimage, which gives this after keeping only the biggest contour:</p>
<p><a href="https://i.sstatic.net/Guj3N.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Guj3N.png" alt="enter image description here" /></a></p>
<p>which is not what I am looking for. I also tried active contour (snake), which had the problem of paying too much attention to isolated pixels. Is there a particular method that would help me in this situation?</p>
<p>Thank you</p>
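<p>A common preprocessing step before contour extraction is a morphological closing, which fuses the main blob while leaving isolated single pixels easy to discard afterwards. A NumPy-only sketch, assuming a boolean mask of the yellow pixels (<code>scipy.ndimage.binary_closing</code> does the same in one call):</p>

```python
import numpy as np


def dilate(mask: np.ndarray) -> np.ndarray:
    """3x3 binary dilation: union of the mask shifted over its neighborhood."""
    padded = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dy in range(3):
        for dx in range(3):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out


def erode(mask: np.ndarray) -> np.ndarray:
    """3x3 binary erosion: intersection of the shifted masks."""
    padded = np.pad(mask, 1, constant_values=True)
    out = np.ones_like(mask)
    for dy in range(3):
        for dx in range(3):
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out


def close_mask(mask: np.ndarray, iterations: int = 2) -> np.ndarray:
    """Morphological closing: dilate n times, then erode n times,
    bridging small gaps so find_contours sees one clean blob."""
    for _ in range(iterations):
        mask = dilate(mask)
    for _ in range(iterations):
        mask = erode(mask)
    return mask
```

<p>Running <code>find_contours</code> on the closed mask, then keeping the largest contour, should give a single smooth outline of the main shape.</p>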
|
<python><image-processing><contour>
|
2023-02-17 10:42:53
| 1
| 358
|
Droidux
|
75,483,205
| 5,722,359
|
How to change the dynamic appearance (i.e. color) of tkinter ttk.Scrollbar?
|
<p>I have found questions and answers on changing the static colour of a <code>ttk.Scrollbar</code>. However, I have not yet found one on changing its dynamic appearance, which is my question.</p>
<p>My <a href="https://stackoverflow.com/a/48933106/5722359">scripts</a> have exposed the elements of a <code>Vertical.TScrollbar</code> to be:</p>
<p><strong>clam theme:</strong></p>
<pre><code>Stylename = Vertical.TScrollbar
Layout = [('Vertical.Scrollbar.trough', {'sticky': 'ns', 'children': [('Vertical.Scrollbar.uparrow', {'side': 'top', 'sticky': ''}), ('Vertical.Scrollbar.downarrow', {'side': 'bottom', 'sticky': ''}), ('Vertical.Scrollbar.thumb', {'sticky': 'nswe'})]})]
Element(s) = ['Vertical.Scrollbar.trough', 'Vertical.Scrollbar.uparrow', 'Vertical.Scrollbar.downarrow', 'Vertical.Scrollbar.thumb']
Vertical.Scrollbar.trough options: ('orient', 'background', 'bordercolor', 'troughcolor', 'lightcolor', 'darkcolor', 'arrowcolor', 'arrowsize', 'gripcount', 'sliderlength')
Vertical.Scrollbar.uparrow options: ('orient', 'background', 'bordercolor', 'troughcolor', 'lightcolor', 'darkcolor', 'arrowcolor', 'arrowsize', 'gripcount', 'sliderlength')
Vertical.Scrollbar.downarrow options: ('orient', 'background', 'bordercolor', 'troughcolor', 'lightcolor', 'darkcolor', 'arrowcolor', 'arrowsize', 'gripcount', 'sliderlength')
Vertical.Scrollbar.thumb options: ('orient', 'background', 'bordercolor', 'troughcolor', 'lightcolor', 'darkcolor', 'arrowcolor', 'arrowsize', 'gripcount', 'sliderlength')
</code></pre>
<p><strong>default theme:</strong></p>
<pre><code>Stylename = Vertical.TScrollbar
Layout = [('Vertical.Scrollbar.trough', {'sticky': 'ns', 'children': [('Vertical.Scrollbar.uparrow', {'side': 'top', 'sticky': ''}), ('Vertical.Scrollbar.downarrow', {'side': 'bottom', 'sticky': ''}), ('Vertical.Scrollbar.thumb', {'sticky': 'nswe'})]})]
Element(s) = ['Vertical.Scrollbar.trough', 'Vertical.Scrollbar.uparrow', 'Vertical.Scrollbar.downarrow', 'Vertical.Scrollbar.thumb']
Vertical.Scrollbar.trough options: ('borderwidth', 'troughcolor', 'troughrelief')
Vertical.Scrollbar.uparrow options: ('background', 'relief', 'borderwidth', 'arrowcolor', 'arrowsize')
Vertical.Scrollbar.downarrow options: ('background', 'relief', 'borderwidth', 'arrowcolor', 'arrowsize')
Vertical.Scrollbar.thumb options: ('orient', 'width', 'relief', 'background', 'borderwidth')
</code></pre>
<p>I tried changing the dynamic color of its thumb using:</p>
<pre><code>ss.map("Vertical.TScrollbar", background=[('pressed', "yellow"), ('active', "yellow")])
</code></pre>
<p>or</p>
<pre><code>ss.map("Vertical.TScrollbar", thumb=[('pressed', "yellow"), ('active', "yellow")])
</code></pre>
<p>but these syntaxes don't work.</p>
<p>I want the scrollbar thumb to change to yellow color whenever the mouse pointer goes over or presses the <code>ttk.Scrollbar</code>. How can this objective be achieved? Thanks.</p>
<p><strong>Sample script (adapted from <a href="https://stackoverflow.com/a/46917680/5722359">@Aivar</a>):</strong></p>
<pre><code>import tkinter as tk
from tkinter import ttk
root = tk.Tk()
style = ttk.Style()
style.theme_use('clam')
# style.theme_use('default')
# list the options of the style
# (Argument should be an element of TScrollbar, eg. "thumb", "trough", ...)
print(style.element_options("Horizontal.TScrollbar.thumb"))
# configure the style
style.configure("Horizontal.TScrollbar", gripcount=0,
                background="Green", darkcolor="DarkGreen", lightcolor="LightGreen",
                troughcolor="gray", bordercolor="blue", arrowcolor="white")

# Syntax A - doesn't work
style.map("Vertical.TScrollbar.thumb",
          background=[('pressed', "yellow"), ('active', "yellow")],
          bordercolor=[('pressed', "yellow"), ('active', "yellow")],
          troughcolor=[('pressed', "yellow"), ('active', "yellow")],
          lightcolor=[('pressed', "yellow"), ('active', "yellow")],
          darkcolor=[('pressed', "yellow"), ('active', "yellow")],
          )

# Syntax B - doesn't work either
##style.map("Vertical.TScrollbar",
##          background=[('pressed', "yellow"), ('active', "yellow")],
##          bordercolor=[('pressed', "yellow"), ('active', "yellow")],
##          troughcolor=[('pressed', "yellow"), ('active', "yellow")],
##          lightcolor=[('pressed', "yellow"), ('active', "yellow")],
##          darkcolor=[('pressed', "yellow"), ('active', "yellow")],
##          )
hs = ttk.Scrollbar(root, orient="horizontal")
hs.place(x=5, y=5, width=150)
hs.set(0.2,0.3)
root.mainloop()
</code></pre>
<p>Here is useful documentation<a href="https://www.tcl.tk/man/tcl/TkCmd/ttk_scrollbar.html" rel="nofollow noreferrer"> [1</a>, <a href="https://anzeljg.github.io/rin2/book2/2405/docs/tkinter/ttk-map.html" rel="nofollow noreferrer">2]</a> on dynamic styling and <code>ttk.Scrollbar</code> that I have found. The <code>tcl</code> documentation did mention that the dynamic states of a <code>ttk.Scrollbar</code> are <code>active</code> and <code>disabled</code>. I have tried removing the <code>pressed</code> state in my test script so as to mention only the <code>active</code> state, but this amendment did not work either.</p>
|
<python><tkinter><tcl><ttk>
|
2023-02-17 10:42:07
| 1
| 8,499
|
Sun Bear
|
75,482,636
| 1,389,394
|
Pandas merge and join not picking up correct values
|
<p>The join is not working. Sample data and code are as follows.</p>
<p>Lookup file:</p>
<pre><code> helper time Loc FUEL Rep KM
0.1|A100|A 0.1 A100 100.00% -3.93 659
0.1|A200|A 0.1 A200 100.00% -4.49 628
0.22|A100|B 0.22 A100 90.00% -1.49 511
...
</code></pre>
<p>After importing the lookup file, I ran the following command to remove any spaces, as there was a KeyError before. I am guessing there might be a whitespace issue within the columns.</p>
<pre><code>dflookup.columns = dflookup.columns.to_series().apply(lambda x: x.strip())
</code></pre>
<p>Here's main file:</p>
<pre><code>time user loc dist flightKM loc2 helper1
0.1 PilotL1 A100 A 140 A200 0.1|A200|A
0.22 PilotL2 B100 B 230 A100 0.22|A100|B
...
</code></pre>
<p>Expect output of main df</p>
<pre><code>time user loc dist flightKM loc2 helper1 Rep2 FUEL2
0.1 PilotL1 A100 A 140 A200 0.1|A200|A -3.93 100%
0.22 PilotL2 B100 B 230 A100 0.22|A100|B -1.49 90%
...
</code></pre>
<p>I tried some of the solutions provided on SO but haven't gotten a fix yet.
Aim: to do a match using helper columns in left/right joins, to add two columns (Rep, FUEL) from the lookup into dfmain.</p>
<p><strong>PROBLEM:</strong> I would like some tips to solve the left-join issue, as it's not finding all the correct values from the lookup's "Rep, FUEL" for dfmain. Open to a quick fix as well as tips on optimizing the code in any way, as this is just a basic Python script with possible ad-hoc operations.</p>
<p>code:</p>
<pre><code>dfmain['Loc'] = dfmain['Loc'].str.replace(' ', '')

# creating a helper1 column in dfmain by concatenating columns, as the
# left/right joins did not allow multiple columns in the join operator
dfmain['helper1'] = dfmain[['time', 'loc2', 'dist']].apply(
    lambda x: '|'.join(x.dropna().astype(str)),
    axis=1
)

# search merge
dfmain = pd.merge(
    left=dfmain,
    right=dflookup[['helper', 'Rep', 'FUEL']],
    left_on='helper1',
    right_on='helper',
    how='left')

# tidy up
dfmain.rename(columns={'Rep': 'Rep2'}, inplace=True)
dfmain.rename(columns={'FUEL': 'FUEL2'}, inplace=True)
dfmain = dfmain.drop(columns=['helper'])
</code></pre>
<p><em><strong>For scrutiny sake:</strong></em></p>
<pre><code>import numpy as np
import pandas as pd

print("minimum reproducible code and dataset")
dflookup = pd.DataFrame([('falcon', 'bird', 100),
('parrot', 'bird', 50),
('lion', 'mammal', 50),
('monkey', 'mammal', 100)],
columns=['type', 'class', 'years'],
index=[0, 2, 3, 1])
dfmain = pd.DataFrame([('Target','falcon', 'bird', 389.0),
('Shout','parrot', 'bird', 24.0),
('Roar','lion', 'mammal', 80.5),
('Jump','monkey','mammal', np.nan),
('Sing','parrot','bird', 72.0)],
columns=['name','type', 'class', 'max_speed'],
index=[0, 2, 3, 1, 2])
</code></pre>
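<p>When a left join silently misses rows, <code>merge(..., indicator=True)</code> shows exactly which helper keys failed to match, which usually exposes a formatting mismatch (stray spaces, float-to-string differences) between the two helper columns. A sketch with made-up rows shaped like the question's data:</p>

```python
import pandas as pd

dflookup = pd.DataFrame({
    "helper": ["0.1|A200|A", "0.22|A100|B"],
    "Rep": [-4.49, -1.49],
    "FUEL": ["100.00%", "90.00%"],
})
dfmain = pd.DataFrame({
    "time": [0.1, 0.22],
    "loc2": ["A200", "A100"],
    "dist": ["A", "B"],
})

# build the key identically on both sides, stripping stray whitespace
dfmain["helper1"] = (
    dfmain["time"].astype(str).str.strip()
    + "|" + dfmain["loc2"].str.strip()
    + "|" + dfmain["dist"].str.strip()
)

merged = dfmain.merge(
    dflookup.rename(columns={"Rep": "Rep2", "FUEL": "FUEL2"}),
    left_on="helper1", right_on="helper", how="left", indicator=True,
)
# rows whose _merge is "left_only" are the keys that found no match
print(merged[["helper1", "_merge"]])
```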
|
<python><pandas><dataframe><left-join>
|
2023-02-17 09:51:23
| 1
| 14,411
|
bonCodigo
|
75,482,616
| 215,929
|
How do I get clipboard image data with Python?
|
<p>In Windows 10 you can press Win+Shift+S to take a screenshot of a given section of the screen, and can then press Ctrl+V in something like Discord and immediately send that screenshot to people.</p>
<p>I'm trying to get that data (presumably a bmp / png format file) from the clipboard using Python and the win32clipboard module, but it's not working and I'm not even sure my approach is correct:</p>
<pre class="lang-py prettyprint-override"><code># Using Python 3.10.4
import win32clipboard
win32clipboard.OpenClipboard()
z = win32clipboard.EnumClipboardFormats() # returns 49161
mem = win32clipboard.GetClipboardData(49161) # returns b'\x00\x00\x00\x00....'
dat = win32clipboard.GetGlobalMemory(mem) # raises TypeError: The object is not a PyHANDLE object
# ... don't worry I close the clipboard later.
</code></pre>
<p>I can't find any info on how to create a PyHANDLE object and I have a feeling that I'm going down the wrong way. Can someone point me in the right direction?</p>
|
<python><python-3.x><windows><winapi><clipboarddata>
|
2023-02-17 09:48:54
| 0
| 2,827
|
Enrico Tuvera Jr
|
75,482,432
| 13,921,399
|
Find connected components recursively in a data frame
|
<p>Consider the following data frame:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
df = pd.DataFrame(
{
"main": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
"component": [
[1, 2],
[np.nan],
[3, 8],
[np.nan],
[1, 5, 6],
[np.nan],
[7],
[np.nan],
[9, 10],
[np.nan],
[np.nan],
],
}
)
</code></pre>
<p>The column <code>main</code> represents a certain approach. Each approach consists of components. A component itself could also be an approach and is then called sub-approach.</p>
<p>I want to find all connected sub-approaches/components for a certain approach.</p>
<p>Suppose, for instance, I want to find all connected sub-approaches/components for the main approach '0'.
Then, my desired output would look like this:</p>
<pre><code>target = pd.DataFrame({
"main": [0, 0, 2, 2, 8, 8],
"component": [1, 2, 3, 8, 9, 10]
})
</code></pre>
<p>Ideally, I want to be able to just choose the approach and then get all sub-connections.
I am convinced that there is a smart approach to do so using <code>networkx</code>. Any hint is appreciated.</p>
<p>Ultimately, I want to create a graph that looks somewhat like this (for approach 0):</p>
<p><a href="https://i.sstatic.net/r0w0d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/r0w0d.png" alt="enter image description here" /></a></p>
<p><strong>Additional information:</strong></p>
<p>You can explode the data frame and then remove all components from the <code>main</code> column (components are approaches that do not have any component).</p>
<pre><code>df_exploded = df.explode(column="component").dropna(subset="component")
</code></pre>
<p>The graph can be constructed as follows:</p>
<pre class="lang-py prettyprint-override"><code>import networkx as nx
import graphviz
G = nx.Graph()
G.add_edges_from([(i, j) for i, j in target.values])
graph_attr = dict(rankdir="LR", nodesep="0.2")
g = graphviz.Digraph(graph_attr=graph_attr)
for k, v in G.nodes.items():
g.node(str(k), shape="box", style="filled", height="0.35")
for n1, n2 in G.edges:
g.edge(str(n2), str(n1))
g
</code></pre>
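<p>For the record, the exploded edge list is already enough to answer the question without <code>networkx</code>: a depth-first walk over an adjacency dict built from it yields exactly the pairs in the <code>target</code> frame. A sketch:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "main": range(11),
    "component": [[1, 2], [np.nan], [3, 8], [np.nan], [1, 5, 6],
                  [np.nan], [7], [np.nan], [9, 10], [np.nan], [np.nan]],
})

edges = df.explode("component").dropna(subset=["component"])
adjacency = edges.groupby("main")["component"].apply(list).to_dict()


def sub_edges(root):
    """Walk from `root`, returning every (main, component) pair of the
    connected sub-approaches, as in the desired `target` frame."""
    out, stack = [], [root]
    while stack:
        node = stack.pop()
        for child in adjacency.get(node, []):
            out.append((node, int(child)))
            stack.append(int(child))
    return out


print(sorted(sub_edges(0)))
```

<p>The resulting pairs can be fed straight into <code>G.add_edges_from(...)</code> for the graphviz rendering shown above.</p>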
|
<python><pandas><network-programming><networkx><connected-components>
|
2023-02-17 09:32:05
| 1
| 1,811
|
ko3
|
75,482,382
| 139,150
|
Compare each string with all other strings in a dataframe
|
<p>I have this dataframe:</p>
<pre><code>mylist = [
"₹67.00 to Rupam Sweets using Bank Account XXXXXXXX5343<br>11 Feb 2023, 20:42:25",
"₹66.00 to Rupam Sweets using Bank Account XXXXXXXX5343<br>10 Feb 2023, 21:09:23",
"₹32.00 to Nagori Sajjad Mohammed Sayyed using Bank Account XXXXXXXX5343<br>9 Feb 2023, 07:06:52",
"₹110.00 to Vikram Manohar Jsohi using Bank Account XXXXXXXX5343<br>9 Feb 2023, 06:40:08",
"₹120.00 to Winner Dinesh Gupta using Bank Account XXXXXXXX5343<br>30 Jan 2023, 06:23:55",
]
import pandas as pd
df = pd.DataFrame(mylist)
df.columns = ["full_text"]
ndf = df.full_text.str.split("to", expand=True)
ndf.columns = ["amt", "full_text"]
ndf2 = ndf.full_text.str.split("using Bank Account XXXXXXXX5343<br>", expand=True)
ndf2.columns = ["client", "date"]
df = ndf.join(ndf2)[["date", "client", "amt"]]
</code></pre>
<p>I have created embeddings for each client name:</p>
<pre><code>from openai.embeddings_utils import get_embedding, cosine_similarity
import openai
openai.api_key = 'xxx'
embedding_model = "text-embedding-ada-002"
embeddings = df.client.apply([lambda x: get_embedding(x, engine=embedding_model)])
df["embeddings"] = embeddings
</code></pre>
<p>I can now calculate the similarity index for a given string, e.g. "Rupam Sweet", using:</p>
<pre><code>query_embedding = get_embedding("Rupam Sweet", engine="text-embedding-ada-002")
df["similarity"] = df.embeddings.apply(lambda x: cosine_similarity(x, query_embedding))
</code></pre>
<p>But I need the similarity score of each client across all other clients. In other words, the client names will be in rows as well as in columns and the score will be the data. How do I achieve this?</p>
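<p>The pairwise table can be computed in one shot: stack the embeddings into a matrix, L2-normalize the rows, and multiply the matrix by its transpose. A sketch with tiny stand-in vectors in place of the real ada-002 embeddings:</p>

```python
import numpy as np
import pandas as pd

clients = ["Rupam Sweets", "Nagori Sajjad Mohammed Sayyed", "Winner Dinesh Gupta"]
emb = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # stand-ins, not real embeddings

# normalize rows, then the matrix product gives all cosine similarities at once
unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
sim = pd.DataFrame(unit @ unit.T, index=clients, columns=clients)
print(sim.round(3))
```

<p>With the real data, <code>emb</code> would be something like <code>np.vstack(df['embeddings'])</code> and <code>clients</code> would be <code>df['client'].tolist()</code>.</p>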
|
<python><numpy><nlp><vectorization><similarity>
|
2023-02-17 09:26:36
| 2
| 32,554
|
shantanuo
|
75,481,879
| 10,626,495
|
Multiple pytest sessions during tests run
|
<p>I am writing tests using <code>pytest</code> and <code>pytest-xdist</code> and I want to run <code>pytest_sessionstart</code> before all workers start running and <code>pytest_sessionfinish</code> when they are done.</p>
<p>I found this solution: <a href="https://github.com/pytest-dev/pytest-xdist/issues/271#issuecomment-826396320" rel="nofollow noreferrer">link</a>, but it is not working as expected. There are multiple sessions starting and finishing during the test run, and hence multiple cleanups are done, which causes the tests to fail (the cleanup empties the tmp directory and the tests fail with <code>FileNotFoundError</code>).</p>
<p>I added code to write to a file when a session starts and when it finishes. The log looks like this:</p>
<pre><code>init 0x00007f988f5ee120
worker init gw0
...
worker init gw7
init 0x00007f229cdac2e0
cleanup 0x00007f229cdac2e0 0
init 0x00007f1a31a4e2e0
cleanup 0x00007f1a31a4e2e0 0
worker done gw0
...
worker done gw4
cleanup 0x00007f988f5ee120 1
</code></pre>
<p>As you can see there are some session starting after all workers started and before they are done.</p>
<p>My code looks like this:</p>
<pre class="lang-py prettyprint-override"><code>def pytest_sessionstart(session: pytest.Session):
    if hasattr(session.config, 'workerinput'):
        # log to file 'worker init {id}'
        return
    # log to file 'init {sess id}'
    # do some stuff


def pytest_sessionfinish(session: pytest.Session, exitstatus: int):
    if hasattr(session.config, 'workerinput'):
        # log to file 'worker done {id}'
        return
    # log to file 'cleanup {sess id} {exitstatus}'
    # do some stuff
</code></pre>
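<p>One stdlib-only way to make the one-time setup robust against stray extra sessions is an atomic lock file: <code>os.O_CREAT | os.O_EXCL</code> guarantees that only the first session to reach it wins, regardless of how many controller-like sessions appear. A sketch (the lock path and placement are assumptions):</p>

```python
import os


def acquire_once(lock_path: str) -> bool:
    """Atomically create lock_path; returns True only for the first
    caller, so duplicate sessions skip the one-time setup/cleanup."""
    try:
        os.close(os.open(lock_path, os.O_CREAT | os.O_EXCL))
        return True
    except FileExistsError:
        return False
```

<p>Inside <code>pytest_sessionstart</code>, the setup would run only when <code>acquire_once(...)</code> returns <code>True</code>, and the lock file would be removed in the final <code>pytest_sessionfinish</code>.</p>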
|
<python><pytest><pytest-xdist>
|
2023-02-17 08:32:59
| 1
| 8,586
|
maciek97x
|
75,481,861
| 2,604,247
|
How to Get the Node IDs of an OPCUA Server from UA Expert?
|
<p>I am new to OPC UA and am trying the Python library to interface with a server. I have already established a connection with the UA Expert application; this is what a screenshot looks like.</p>
<p><a href="https://i.sstatic.net/7StnF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7StnF.png" alt="enter image description here" /></a></p>
<p>This is what my sample Python code looks like:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
# encoding: utf-8
from opcua import Client
from opcua import ua
url='opc.tcp://192.168.112.94:4840'
client=Client(url=url)
somenode=client.get_node(nodeid='NS2|String/Plc/MD') # UaStringParsingError: ('Error parsing string NS2|String/Plc/MD', ValueError('not enough values to unpack (expected 2, got 1)'))
</code></pre>
<p>So basically, what is a nodeid and how do I form it from what UA Expert shows? Doesn't the column bearing the same name directly give the node id? Some help on how to find the correct node id would be appreciated.</p>
<p>I realise I do not even have clarity on <em>what</em> a node is: is it like a node in graph theory, or in network routing, or something else?</p>
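<p>Background that may help: in OPC UA, a NodeId addresses a node in the server's address space and consists of a namespace index plus an identifier (string, numeric, GUID, or opaque). The Python <code>opcua</code> client expects the textual form <code>ns=&lt;index&gt;;s=&lt;string&gt;</code> (or <code>i=&lt;number&gt;</code>), so a UA Expert entry like <code>NS2|String|...</code> would translate to something like <code>client.get_node("ns=2;s=...")</code>; the exact identifier string here is an assumption. A sketch of how the textual form decomposes:</p>

```python
def parse_nodeid(text: str):
    """Split a textual NodeId such as 'ns=2;s=Plc/MD' into its
    (namespace_index, identifier_type, identifier) parts."""
    namespace = 0  # default namespace when no ns= prefix is given
    if text.startswith("ns="):
        ns_part, text = text.split(";", 1)
        namespace = int(ns_part[3:])
    id_type, identifier = text.split("=", 1)
    return namespace, id_type, identifier


print(parse_nodeid("ns=2;s=Plc/MD"))  # (2, 's', 'Plc/MD')
```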
|
<python><opc-ua><opc><node-opcua>
|
2023-02-17 08:31:29
| 1
| 1,720
|
Della
|
75,481,729
| 188,331
|
pandas.DataFrame.groupby loses index and messes up the data
|
<p>I have a <code>pandas.DataFrame</code> (named <code>df</code>) with the following data:</p>
<pre><code> labels texts
0 labelA Some Text 12345678
1 labelA Some Text 12345678
2 labelA Some Text 12345678
3 labelA Some Text 12345678
4 labelB Some Text 12345678
5 labelB Some Text 12345678
6 labelB Some Text 12345678
7 labelC Some Text 12345678
8 labelC Some Text 12345678
9 labelC Some Text 12345678
10 labelC Some Text 12345678
11 labelC Some Text 12345678
12 labelC Some Text 12345678
</code></pre>
<p>When I perform a group-by with the following (the goal is to take 2 samples from each label), the index is lost:</p>
<pre><code>grouped = df.groupby('labels')
result = grouped.apply(lambda x: x.sample(n=2))
print(result)
</code></pre>
<p>The output becomes:</p>
<pre><code> labels texts
labels
labelA 0 labelA Some Text 12345678
0 labelA Some Text 12345678
0 labelB Some Text 12345678
0 labelB Some Text 12345678
0 labelC Some Text 12345678
0 labelC Some Text 12345678
</code></pre>
<p>I would like the output to become:</p>
<pre><code> labels texts
0 labelA Some Text 12345678
1 labelA Some Text 12345678
2 labelB Some Text 12345678
3 labelB Some Text 12345678
4 labelC Some Text 12345678
5 labelC Some Text 12345678
</code></pre>
<p>How should I make the changes?</p>
<p>I tried to use <code>result.droplevel(0).reset_index()</code> according to <a href="https://stackoverflow.com/a/48761501/188331">this answer</a>, but it becomes:</p>
<pre><code> index labels texts
0 0 labelA Some Text 12345678
1 0 labelA Some Text 12345678
2 0 labelB Some Text 12345678
3 0 labelB Some Text 12345678
4 0 labelC Some Text 12345678
5 0 labelC Some Text 12345678
</code></pre>
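<p>Recent pandas has <code>GroupBy.sample</code>, which avoids <code>apply</code> and the extra group-key index level entirely; a plain <code>reset_index(drop=True)</code> then yields the 0..n-1 index. A sketch on data shaped like the question's:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "labels": ["labelA"] * 4 + ["labelB"] * 3 + ["labelC"] * 6,
    "texts": ["Some Text 12345678"] * 13,
})

# sample 2 rows per label, then rebuild a clean 0..n-1 index
result = df.groupby("labels").sample(n=2).reset_index(drop=True)
print(result)
```

<p>With the original <code>apply</code>-based approach, passing <code>group_keys=False</code> to <code>groupby</code> drops the added label level instead.</p>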
|
<python><pandas>
|
2023-02-17 08:18:02
| 1
| 54,395
|
Raptor
|
75,481,684
| 6,953,017
|
Django: Deleting multiple objects with a view that requires a object pk?
|
<p>Hey, I have this code to remove my keys:</p>
<pre class="lang-py prettyprint-override"><code>class AKeysRemove(DeleteView, ProgramContextMixin):
    model = AKeys
    template_name = 'administration/keys/remove.html'

    def dispatch(self, request, *args, **kwargs):
        return super(AKeysRemove, self).dispatch(request, *args, **kwargs)

    def get_success_url(self):
        return reverse('akeys_index', args=[self.get_program_id()])

    def delete(self, request, *args, **kwargs):
        # Get the query parameters from the request
        is_active = request.GET.get('is_active')
        category = request.GET.get('category')

        # Build a Q object to filter AccessKeys by is_active and category
        q_filter = Q()
        if is_active is not None:
            q_filter &= Q(is_active=is_active)
        if category is not None:
            q_filter &= Q(category=category)

        # Check if there are any filters
        has_filters = is_active is not None or category is not None

        # Delete the AKeys that match the filter, or just the one AKey
        if has_filters:
            queryset = self.get_queryset().filter(q_filter)
            deleted_count, _ = queryset.delete()
            if deleted_count == 1:
                messages.success(request, "One AKey deleted.")
            else:
                messages.success(request, f"{deleted_count} AKeys deleted.")
        else:
            obj = self.get_object()
            obj.delete()
            messages.success(request, f"AKey {obj} deleted.")
        return redirect(self.get_success_url())
</code></pre>
<p>My url looks like this:</p>
<pre class="lang-py prettyprint-override"><code>re_path(r'^p/(?P<p_id>[0-9]+)/keys/(?P<pk>[0-9]+)/delete/?$', AKeysRemove.as_view(), name='akeys_delete'),
</code></pre>
<p>Deleting one single key works fine, but I built myself a filter to delete keys from a certain category, or depending on whether they're active or not (<code>is_active</code>):</p>
<pre class="lang-html prettyprint-override"><code>
<div class="row" style="margin-top: 10px">
<div class="col-md-12">
<form method="POST" action="{% url 'akeys_delete' p.id %}" id="delete-akeys-form">
<div class="form-group">
<label for="category-filter">Category:</label>
<select name="category" id="category-filter" class="form-control">
<option value="">All</option>
{% for category in acategories %}
<option value="{{ category.name }}">{{ category.name }}</option>
{% endfor %}
</select>
</div>
<div class="form-group">
<label for="active-filter">Status:</label>
<select name="is_active" id="active-filter" class="form-control">
<option value="">All</option>
<option value="true">Active</option>
<option value="false">Inactive</option>
</select>
</div>
<button type="submit" class="btn btn-default">Delete</button>
</form>
</div>
</div>
</code></pre>
<p>The problem now is that when I open my site I get the obvious error:</p>
<p><code>Reverse for 'akeys_delete' with arguments '(3,)' not found. 1 pattern(s) tried: ['admin/p/(?P<p_id>[0-9]+)/keys/(?P<pk>[0-9]+)/delete/?$']</code></p>
<p>I understand this, since it's missing <code>key.pk</code>; I just can't figure out how I could rewrite my code to accept both a single object to delete and multiple ones.</p>
<p>I'm thankful for any help :)</p>
|
<python><django>
|
2023-02-17 08:13:42
| 1
| 930
|
NakedPython
|
75,481,619
| 8,169,680
|
Unusual import of a class in Python
|
<p>There is a file <code>exceptions.py</code> in the <code>kubernetes.client</code> package where the <code>ApiException</code> class is defined. So I can write the following line in my own file, say <code>myfile.py</code>, and use <code>ApiException</code> for raising exceptions.</p>
<p><strong>some_folder.myfile.py code snippet:</strong></p>
<pre><code>from kubernetes.client.exceptions import ApiException
.....
.....
try:
.....
except ApiException as e:
.....
</code></pre>
<p>That is fine.</p>
<p>Also, <code>rest.py</code> in the <code>kubernetes.client</code> package imports the same <code>ApiException</code> class and raises an exception with it.</p>
<p><strong>kubernetes.client.rest.py code snippet:</strong></p>
<pre><code>from kubernetes.client.exceptions import ApiException
.....
.....
if not 200 <= r.status <= 299:
raise ApiException(http_resp=r)
</code></pre>
<p>That is also fine. But I am pretty confused to see the code below, as <code>ApiException</code> is imported from <code>kubernetes.client.rest</code> in the <code>some_file.py</code> file (see below), <strong>not</strong> from <code>kubernetes.client.exceptions</code>, where the actual class definition of <code>ApiException</code> is.</p>
<p><strong>some_folder.some_file.py code snippet:</strong></p>
<pre><code>from kubernetes.client.rest import ApiException
.....
.....
try:
.....
except ApiException as e:
.....
</code></pre>
<p>The above code works, but I am really surprised. Can somebody explain to me what is happening here? Sorry, I am new to Python.</p>
<p><strong>Note:</strong></p>
<ol>
<li>ApiException class is not defined in <code>kubernetes.client.rest</code>, it is only defined in <code>kubernetes.client.exceptions</code></li>
<li>I have searched many articles online but did not get much information.</li>
</ol>
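<p>What makes this work is that <code>import</code> binds a name in the importing module's namespace: <code>rest.py</code>'s own <code>from kubernetes.client.exceptions import ApiException</code> turns <code>ApiException</code> into an attribute of <code>kubernetes.client.rest</code>, effectively re-exporting the very same class object. A self-contained sketch with throwaway in-memory modules:</p>

```python
import sys
import types

# stand-in for exceptions.py: defines the class
exceptions = types.ModuleType("exceptions_demo")
exec("class ApiException(Exception):\n    pass", exceptions.__dict__)
sys.modules["exceptions_demo"] = exceptions

# stand-in for rest.py: merely imports the class, which also binds it
# as an attribute of this module
rest = types.ModuleType("rest_demo")
exec("from exceptions_demo import ApiException", rest.__dict__)
sys.modules["rest_demo"] = rest

from exceptions_demo import ApiException as FromExceptions
from rest_demo import ApiException as FromRest
assert FromExceptions is FromRest  # one and the same class object
```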
|
<python><python-3.x>
|
2023-02-17 08:06:28
| 1
| 764
|
Surya
|
75,481,527
| 4,847,250
|
How to make TensorFlow use the GPU?
|
<p>I'm working with Python and I would like to use TensorFlow with my RTX 2080 Ti, but TensorFlow is using only the CPU.</p>
<p>When I ask for GPU devices on my computer, it always returns an empty list:</p>
<pre><code>In [3]: tf.config.list_physical_devices('GPU')
Out[3]: []
</code></pre>
<p>I tried this post: <a href="https://stackoverflow.com/questions/51306862/how-do-i-use-tensorflow-gpu">How do I use TensorFlow GPU?</a>,
but I don't use CUDA that way and the tensorflow-gpu package seems outdated.</p>
<p>I also tried this well-done tutorial <a href="https://www.youtube.com/watch?v=hHWkvEcDBO0" rel="nofollow noreferrer">https://www.youtube.com/watch?v=hHWkvEcDBO0</a>, without success.</p>
<p>I installed the card drivers, CUDA, and cuDNN, but I still get the same issue.
I also uninstalled TensorFlow and Keras and installed them again, without success.</p>
<p>I don't know how I can find what is missing or whether I did something wrong.</p>
<p>Python 3.10
TensorFlow version: 2.11.0
CUDA version: 11.2
cuDNN: 8.1</p>
<p>This line of code tells me that the build is not a CUDA build:</p>
<pre><code>print(tf_build_info.build_info)
OrderedDict([('is_cuda_build', False), ('is_rocm_build', False), ('is_tensorrt_build', False), ('msvcp_dll_names', 'msvcp140.dll,msvcp140_1.dll')])
</code></pre>
|
<python><tensorflow><gpu>
|
2023-02-17 07:56:10
| 1
| 5,207
|
ymmx
|
75,481,508
| 14,020,570
|
Python DataFrame column is a list of dicts; how to parse it?
|
<p>I have dataframe:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>name</th>
<th>describe</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>some</td>
<td>[{'id':20, 'name':'thisIwantAsNameColumn','value':'thisIwantasValueinRow'},{'id':22, 'name':'thisIwantAsNameColumn2','value':'thisIwantasValueinRow2'}]</td>
</tr>
<tr>
<td>2</td>
<td>some2</td>
<td>[{'id':23, 'name':'thisIwantAsNameColumn','value':'thisIwantasValueinRow'},{'id':24, 'name':'thisIwantAsNameColumn2','value':'thisIwantasValueinRow2'}]</td>
</tr>
</tbody>
</table>
</div>
<p>and I want:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>name</th>
<th>thisIwantAsNameColumn</th>
<th>thisIwantAsNameColumn2</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>some</td>
<td>thisIwantasValueinRow</td>
<td>thisIwantasValueinRow2</td>
</tr>
<tr>
<td>2</td>
<td>some2</td>
<td>thisIwantasValueinRow</td>
<td>thisIwantasValueinRow2</td>
</tr>
</tbody>
</table>
</div>
<p>I tried writing a function, but it creates a new DataFrame for me, and I would then have to join it back somehow, and that doesn't work well:</p>
<pre><code>def proccess_customFields(row):
customF={}
for item in row:
customF["custom_field-{}".format(item.get('name'))] = item.get('value')
result= pd.DataFrame.from_dict(customF,orient='index').T
return result
</code></pre>
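One way to avoid building a separate frame per row is to expand the list column into a `{name: value}` Series per row and join the result back onto the original frame. A minimal sketch with stand-in data and shortened column names (`custom1`, `custom2`) mirroring the table above:

```python
import pandas as pd

df = pd.DataFrame({
    "id": [1, 2],
    "name": ["some", "some2"],
    "describe": [
        [{"id": 20, "name": "custom1", "value": "v1"},
         {"id": 22, "name": "custom2", "value": "v2"}],
        [{"id": 23, "name": "custom1", "value": "v1"},
         {"id": 24, "name": "custom2", "value": "v2"}],
    ],
})

# Turn each list of dicts into a {name: value} mapping, expand the mappings
# to columns, then join the new columns back onto the original frame.
wide = df["describe"].apply(lambda items: pd.Series({d["name"]: d["value"] for d in items}))
result = df.drop(columns="describe").join(wide)
```

If a prefix like `custom_field-{name}` is wanted, it can be added inside the dict comprehension.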
|
<python><json><list><dataframe><dictionary>
|
2023-02-17 07:53:30
| 1
| 314
|
Cesc
|
75,481,316
| 5,132,860
|
How to avoid converting model to Traced-model every time in YOLOv7?
|
<p>I am using <a href="https://github.com/WongKinYiu/yolov7" rel="nofollow noreferrer">YOLOv7</a> for object detection. When I run the following command, object detection works fine:</p>
<pre><code>python detect.py --weights yolov7.pt --conf 0.25 --img-size 640 --source inference/images/horses.jpg
</code></pre>
<p>However, every time I run this command, the model is converted to a Traced-model, which takes a few seconds.</p>
<pre><code>Model Summary: 306 layers, 36905341 parameters, 6652669 gradients
Convert model to Traced-model...
traced_script_module saved!
model is traced!
</code></pre>
<p>Object detection takes only around 1 second. How can I avoid converting the model every time and just perform object detection?</p>
<p>I tried setting the output traced_model.pt as the weights, but I got the following error:</p>
<pre><code>AttributeError: 'RecursiveScriptModule' object has no attribute 'get'
</code></pre>
|
<python><yolov7>
|
2023-02-17 07:31:14
| 1
| 3,104
|
Nori
|
75,481,254
| 8,771,201
|
Python directory structure problem moving one directory up in the structure gives "no module named" error
|
<p>This is my project structure in Visual Studio Code:</p>
<pre><code>| database
|- __init__.py
|- database.py
| sales
|- __init__.py
|- shop.py
main.py
</code></pre>
<p>Inside database.py I have a function:</p>
<pre><code>def insertArticle():
</code></pre>
<p>Now I want to use this function in shop.py so I do this:</p>
<pre><code>import database.database as database
myResult = database.insertArticle()
</code></pre>
<p>This is one of the various options I tried but it all ends up with "No module named database"</p>
<p>I just run shop.py directly for testing purposes (so not from main.py)</p>
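For reference, a common workaround when running shop.py directly is to put the project root on `sys.path` before the import. A minimal sketch (the fallback path only keeps this snippet self-contained outside the real project):

```python
import sys
from pathlib import Path

# When shop.py is run directly, Python's import root is the sales/ folder,
# not the project root, so "database" is not importable. Putting the
# directory one level above this file on sys.path fixes that.
this_file = Path(__file__) if "__file__" in globals() else Path.cwd() / "sales" / "shop.py"
project_root = str(this_file.resolve().parent.parent)
if project_root not in sys.path:
    sys.path.insert(0, project_root)

# import database.database as database   # would now resolve
# my_result = database.insertArticle()
```

Running the file as a module from the project root (`python -m sales.shop`) achieves the same thing without any path manipulation.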
|
<python>
|
2023-02-17 07:24:39
| 0
| 1,191
|
hacking_mike
|
75,480,945
| 16,626,322
|
How can I show the progress of a script running in Gradio?
|
<p>I have successfully implemented a button in Gradio that runs a script.</p>
<pre><code>def generate_output(input_path,output_path):
cmd = f"python parse.py \"{input_path}\" \"{output_path}\""
subprocess.call(cmd, shell=True)
with gr.Row():
btn_run = gr.Button(
'RUN', elem_id='generate'
)
btn_run.click(
fn=generate_output,
inputs =[tb_input_path,tb_output_path],
outputs=[]
)
</code></pre>
<p>Although the script actually runs when the button is pressed, there is no intuitive UI that informs the user of what is happening, making it difficult to know what is going on.</p>
<p>I want a UI that can inform the execution rate. For example, like this.</p>
<pre><code>text_informs = gr.Markdown("")
def generate_output(input_path,output_path):
text_informs.update("started!")
try:
cmd = f"python parse.py \"{input_path}\" \"{output_path}\""
subprocess.call(cmd, shell=True)
text_informs.update("Completed")
except subprocess.CalledProcessError as e:
text_informs.update("error occured!")
</code></pre>
<p>How can the progress be reported? Anything that lets the user see that the script is still running would be fine.</p>
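One way to surface progress is to stream the script's stdout line by line instead of blocking in `subprocess.call`, pushing each line into the UI as it arrives. A minimal stdlib sketch (the `-c` one-liner stands in for `python parse.py ...`, which is assumed to print progress lines):

```python
import subprocess
import sys

# Stream the child's stdout line by line; each line can then be shown
# to the user as a status update.
cmd = [sys.executable, "-c", "print('step 1'); print('step 2'); print('done')"]

statuses = []
with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
    for line in proc.stdout:
        statuses.append(line.strip())   # in Gradio: yield the status here
```

In Gradio specifically, turning `generate_output` into a generator that yields these status strings (with a Textbox wired as the button's output) updates the UI per line; recent Gradio versions also ship a `gr.Progress` helper that can be taken as a hidden function argument.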
|
<python><gradio>
|
2023-02-17 06:38:47
| 2
| 539
|
sooyeon
|
75,480,817
| 1,942,868
|
Password shown twice for django register form
|
<p>I am making a django register form, but it shows the password field twice (three times including the password confirmation).</p>
<p><a href="https://i.sstatic.net/WGAYz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WGAYz.png" alt="enter image description here" /></a></p>
<p>Does anyone know the reason?</p>
<p>These are source code below.</p>
<p>register.html</p>
<pre><code>{% extends "defapp/base.html" %}
{% block title %}User Register{% endblock %}
{% block content %}
<form method="POST" class="form-group">
{% csrf_token %}
{{ form.as_p }}
<button type="submit" class="btn btn-success">Register</button>
</form>
{% endblock %}
</code></pre>
<p>RegisterView Class</p>
<pre><code>class RegisterView(CreateView):
form_class = f.RegisterForm
template_name = "defapp/register.html"
success_url = reverse_lazy("top")
def form_valid(self, form):
user = form.save()
login(self.request, user)
self.object = user
return HttpResponseRedirect(self.get_success_url())
class Login(LoginView):
form_class = f.LoginForm
template_name = 'defapp/login.html'
class Logout(LoginRequiredMixin, LogoutView):
template_name = 'defapp/login.html'
</code></pre>
<p>RegisterForm is here.</p>
<pre><code>from django.contrib.auth.forms import AuthenticationForm,UserCreationForm
from django.contrib.auth.models import User
class LoginForm(AuthenticationForm):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
for field in self.fields.values():
field.widget.attrs['class'] = 'form-control'
field.widget.attrs['placeholder'] = field.label
class RegisterForm(UserCreationForm):
class Meta:
model = User
fields = ["username", "password", "email"]
</code></pre>
|
<python><django>
|
2023-02-17 06:16:34
| 1
| 12,599
|
whitebear
|
75,480,778
| 2,954,547
|
Matplotlib shrink figure to size of axes, or otherwise match its aspect ratio
|
<p>I have been using Cartopy to plot data using <code>'equal'</code> aspect, resulting in all manner of non-square Axes sizes. These usually look OK in Jupyter notebooks, but when saving the images (or when doing more complicated operations like adding colorbars), the resulting Figures are often huge, with a lot of blank space around the Axes plotting area. They also look bad when using <code>%matplotlib widget</code>. An example is provided below.</p>
<p><a href="https://i.sstatic.net/g5r2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g5r2C.png" alt="Example of image with undesirable padding" /></a></p>
<p>It seems that the <em>figure</em> in this case is too big in at least one dimension. I would like to remove that extra space in the final output figure, without shrinking the size of the plotting area itself.</p>
<p>I know that I can adjust the figure size itself with <code>.set_figwidth</code> and <code>.set_figheight</code>, as well as setting <code>figsize=</code> upon creation. But I don't know how to figure out the correct dimensions to shrink the figure without shrinking the axes, and I haven't seen any way to do this automatically. What's the correct solution? I would like to avoid manually editing my images after creating them!</p>
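For the saved image at least, passing `bbox_inches="tight"` to `savefig` crops the output to the drawn artists without shrinking the axes. A minimal sketch (the Agg backend and in-memory buffer are only there to keep it self-contained):

```python
import io

import matplotlib
matplotlib.use("Agg")            # headless backend, just for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(8, 8))
ax.plot([0, 10], [0, 1])
ax.set_aspect("equal")           # short, wide axes inside a square figure

# bbox_inches="tight" crops the saved image to the drawn artists, discarding
# the blank figure area around the axes (pad_inches controls the margin).
buf = io.BytesIO()
fig.savefig(buf, format="png", bbox_inches="tight", pad_inches=0.1)
```

For resizing the live figure rather than just the saved file, one route is measuring the axes with `fig.get_tightbbox(...)` and feeding its width/height to `fig.set_size_inches`, though that behaves somewhat differently across Matplotlib versions.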
|
<python><matplotlib>
|
2023-02-17 06:08:31
| 0
| 14,083
|
shadowtalker
|
75,480,557
| 11,402,025
|
Receiving Error not all arguments converted during string formatting
|
<p>I am new to Python. I'm not able to figure out how to send the correct input to the query.</p>
<pre><code> list_of_names = []
for country in country_name_list.keys():
list_of_names.append(getValueMethod(country))
sql_query = f"""SELECT * FROM table1
where name in (%s);"""
db_results = engine.execute(sql_query, list_of_names).fetchone()
</code></pre>
<p>This gives the error:</p>
<pre><code>not all arguments converted during string formatting
</code></pre>
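The usual pattern for an `IN` clause is one placeholder per list element, with the list itself passed as the parameter sequence. A sketch of the pattern using the stdlib `sqlite3` driver (sqlite3 uses `?`; drivers such as psycopg2/pymysql use `%s`, and the table/column names here are stand-ins):

```python
import sqlite3

names = ["alice", "bob"]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (name TEXT)")
conn.executemany("INSERT INTO table1 VALUES (?)", [("alice",), ("carol",)])

# One placeholder per element, values passed separately so the driver
# quotes them.
placeholders = ",".join(["?"] * len(names))
sql = f"SELECT * FROM table1 WHERE name IN ({placeholders})"
row = conn.execute(sql, names).fetchone()
```

With SQLAlchemy specifically, `text(...)` combined with `bindparam("names", expanding=True)` is, as far as I know, the idiomatic way to expand a list into an `IN` clause.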
|
<python><sqlalchemy>
|
2023-02-17 05:28:01
| 2
| 1,712
|
Tanu
|
75,480,282
| 2,333,234
|
Python:how to use regex search in dict
|
<p>How do I apply a regex to a dict's keys?
I'm getting "expected string or bytes-like object, got 'dict'" with the code below.</p>
<p>Thanks</p>
<pre><code>with open('fbg3.csv','r') as file:
csvreader=csv.DictReader(file)
regex=r"^REP\W+BGR30$"
for row in csvreader:
if re.match(regex,row):
print(row)
</code></pre>
<p>fbg3.csv</p>
<pre><code>REPRESENTATIVE_BGR30,LEAD_AGR31,....
1,2,3
11,22,33
.............
</code></pre>
<p>expected output: REPRESENTATIVE_BGR30:1,11,..</p>
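`re.match` expects a string, so the regex has to be applied to each key of the row dict rather than to the dict itself. A sketch with inline data standing in for fbg3.csv; note it uses `\w+` (word characters, including the underscore) where the original `\W+` (NON-word characters) could never match the letters between `REP` and `BGR30`:

```python
import csv
import io
import re

data = "REPRESENTATIVE_BGR30,LEAD_AGR31\n1,2\n11,22\n"   # stand-in for fbg3.csv

# \w+ bridges "RESENTATIVE_"; \W+ would never match there.
regex = re.compile(r"^REP\w+BGR30$")

values = {}
for row in csv.DictReader(io.StringIO(data)):
    for key, value in row.items():          # match the keys, not the dict
        if regex.match(key):
            values.setdefault(key, []).append(value)
```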
|
<python><regex><dictionary>
|
2023-02-17 04:34:03
| 1
| 553
|
user2333234
|
75,480,268
| 19,425,874
|
Python script not returning any results while web scraping
|
<p>I am looking to scrape a list of URLs -- I want to visit each one & then return all IMG links contained within each HREF on the page (in essence, visit each link and return the image address of the player headshot on each player profile).</p>
<p>I have a successful script for one set of URLs below - this is what I'm trying to achieve:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import gspread
gc = gspread.service_account(filename='creds.json')
sh = gc.open_by_key('1TD4YmhfAsnSL_Fwo1lckEbnUVBQB6VyKC05ieJ7PKCw')
worksheet = sh.get_worksheet(0)
# AddValue = ["Test", 25, "Test2"]
# worksheet.insert_row(AddValue, 3)
def get_links(url):
data = []
req_url = requests.get(url)
soup = BeautifulSoup(req_url.content, "html.parser")
for td in soup.find_all('td', {'data-th': 'Player'}):
a_tag = td.a
name = a_tag.text
player_url = a_tag['href']
print(f"Getting {name}")
req_player_url = requests.get(
f"https://basketball.realgm.com{player_url}")
soup_player = BeautifulSoup(req_player_url.content, "html.parser")
div_profile_box = soup_player.find('div', {'class': 'profile-box'})
img_tag = div_profile_box.find('img')
image_url = img_tag['src']
row = {"Name": name, "URL": player_url, "Image URL": image_url}
data.append(row)
return data
urls = [
'https://basketball.realgm.com/dleague/players/2022',
'https://basketball.realgm.com/dleague/players/2021',
'https://basketball.realgm.com/dleague/players/2020',
'https://basketball.realgm.com/dleague/players/2019',
'https://basketball.realgm.com/dleague/players/2018',
]
res = []
for url in urls:
print(f"Getting: {url}")
data = get_links(url)
res = [*res, *data]
if res != []:
header = list(res[0].keys())
values = [
header, *[[e[k] if e.get(k) else "" for k in header] for e in res]]
worksheet.append_rows(values, value_input_option="USER_ENTERED")
</code></pre>
<p>This returns an output of: Player Name, Player URL, Player Headshot:</p>
<p><a href="https://i.sstatic.net/xYcng.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xYcng.png" alt="correct output" /></a></p>
<p>I tweaked the code for a different set of URLs, but it's not returning any information. No errors are showing, but nothing seems to be happening:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import gspread
gc = gspread.service_account(filename='creds.json')
sh = gc.open_by_key('1TD4YmhfAsnSL_Fwo1lckEbnUVBQB6VyKC05ieJ7PKCw')
worksheet = sh.get_worksheet(0)
def get_links(url):
data = []
req_url = requests.get(url)
soup = BeautifulSoup(req_url.content, "html.parser")
for td in soup.find_all('td', {'data-th': 'Player'}):
a_tag = td.a
name = a_tag.text
player_url = a_tag['href']
print(f"Getting {name}")
req_player_url = requests.get(
f"https://basketball.realgm.com{player_url}")
soup_player = BeautifulSoup(req_player_url.content, "html.parser")
div_profile_box = soup_player.find('div', {'class': 'profile-box'})
img_tags = div_profile_box.find_all('img')
for i, img_tag in enumerate(img_tags):
image_url = img_tag['src']
row = {"Name": name, "URL": player_url,
f"Image URL {i}": image_url}
data.append(row)
return data
urls = [
"https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/player/All/desc",
"https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/2",
"https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/3"
]
for url in urls:
data = get_links(url)
for row in data:
worksheet.insert_row(list(row.values()))
</code></pre>
<p>I also checked a version debugging "soup_player", but I'm still not receiving any results:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import gspread
gc = gspread.service_account(filename='creds.json')
sh = gc.open_by_key('1TD4YmhfAsnSL_Fwo1lckEbnUVBQB6VyKC05ieJ7PKCw')
worksheet = sh.get_worksheet(0)
def get_links(url):
data = []
req_url = requests.get(url)
soup = BeautifulSoup(req_url.content, "html.parser")
for td in soup.find_all('td', {'data-th': 'Player'}):
a_tag = td.a
name = a_tag.text
player_url = a_tag['href']
print(f"Getting {name}")
req_player_url = requests.get(
f"https://basketball.realgm.com{player_url}")
soup_player = BeautifulSoup(req_player_url.content, "html.parser")
print(f"soup_player for {name}: {soup_player}")
div_profile_box = soup_player.find('div', {'class': 'profile-box'})
img_tags = div_profile_box.find_all('img')
for i, img_tag in enumerate(img_tags):
image_url = img_tag['src']
row = {"Name": name, "URL": player_url, f"Image URL {i}": image_url}
data.append(row)
return data
urls = [
    "https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/player/All/desc",
    "https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/2",
    "https://basketball.realgm.com/international/stats/2023/Averages/Qualified/All/minutes/All/desc/3",
]
for url in urls:
data = get_links(url)
for row in data:
worksheet.insert_row(list(row.values()))
</code></pre>
<p>Any advice as to what I may be doing wrong here? Thank you in advance!</p>
|
<python><beautifulsoup><python-requests><python-requests-html>
|
2023-02-17 04:30:08
| 1
| 393
|
Anthony Madle
|
75,480,225
| 3,198,568
|
Using if-else in "with" statement in Python
|
<p>I want to open a file that may be gzipped or not. To open the file, I use either</p>
<pre><code>with open(myfile, 'r') as f:
some_func(f) # arbitrary function
</code></pre>
<p>or</p>
<pre><code>import gzip
with gzip.open(myfile, 'r') as f:
some_func(f)
</code></pre>
<p>I want to check if <code>myfile</code> has a <code>gz</code> extension or not, and then from there decide which <code>with</code> statement to use. Here's what I have:</p>
<pre><code># myfile_gzipped is a Boolean variable that tells me whether it's gzipped or not
if myfile_gzipped:
with gzip.open(myfile, 'rb') as f:
some_func(f)
else:
with open(myfile, 'r') as f:
some_func(f)
</code></pre>
<p>How should I go about it, without having to repeat <code>some_func(f)</code>?</p>
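One common pattern is a small helper that picks the opener, so the `with` statement and `some_func(f)` appear exactly once. A sketch (the temp file only makes the snippet self-contained; `gzip.open` is given text mode so both branches yield `str`):

```python
import gzip
import os
import tempfile

def smart_open(path):
    """Pick the opener by extension; text mode in both branches so the
    caller always sees str."""
    if path.endswith(".gz"):
        return gzip.open(path, "rt")
    return open(path, "r")

# Self-contained demo: write a gzipped file, read it through one code path.
path = os.path.join(tempfile.mkdtemp(), "demo.txt.gz")
with gzip.open(path, "wt") as f:
    f.write("hello")

with smart_open(path) as f:      # the single `with` replaces both branches
    content = f.read()
```

The original code then collapses to `with smart_open(myfile) as f: some_func(f)`. (The question's gzip branch used `'rb'`, which yields bytes; `'rt'` keeps both branches consistent.)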
|
<python>
|
2023-02-17 04:22:43
| 4
| 2,253
|
irene
|
75,480,219
| 17,347,824
|
Reading csv data into a table in postgresql via INSERT INTO with Python
|
<p>I have a postgresql table created in python that I need to then populate with data from a csv file. The csv file has 4 columns and a header row. When I use a for loop with <code>INSERT INTO</code> it's not working correctly.</p>
<p>It is giving me an error telling me that a certain column doesn't exist, but the "column" it names is actually the first value in the ID column.</p>
<p>I've looked over all the similar issues reported on other questions and can't seem to find something that works.</p>
<p>The table looks like this (with more lines):</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>Gender</th>
<th>Weight</th>
<th>Age</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>F</td>
<td>121</td>
<td>20</td>
</tr>
<tr>
<td>B</td>
<td>M</td>
<td>156</td>
<td>31</td>
</tr>
<tr>
<td>C</td>
<td>F</td>
<td>110</td>
<td>18</td>
</tr>
</tbody>
</table>
</div>
<p>The code I am running is the following:</p>
<pre><code>import pandas as pd
df = pd.read_csv('df.csv')
for x in df.index:
cursor.execute("""
INSERT INTO iddata (ID, Gender, Weight, Age)
VALUES (%s, %s, %d, %d)""" % (df.loc[x]['ID'],
df.loc[x]['Gender'],
df.loc[x]['Weight'],
df.loc[x]['Age']))
conn.commit
</code></pre>
<p>The error I'm getting says</p>
<pre><code>UndefinedColumn: column "a" does not exist
LINE 3: VALUES (A, F, 121, 20)
^
</code></pre>
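For reference, the parameterized form passes the values as a tuple in `execute`'s second argument, so the driver quotes them; `%`-formatting into the SQL string leaves strings unquoted, which is what produces the `column "a" does not exist` error. A sketch of the pattern using the stdlib `sqlite3` driver (sqlite3 uses `?`; psycopg2 uses `%s` for <em>every</em> type, never `%d`):

```python
import sqlite3

# Parameterized INSERT: with psycopg2 the same pattern is
#   cursor.execute("INSERT INTO iddata VALUES (%s, %s, %s, %s)", row_tuple)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE iddata (ID TEXT, Gender TEXT, Weight INT, Age INT)")

rows = [("A", "F", 121, 20), ("B", "M", 156, 31), ("C", "F", 110, 18)]
for row in rows:
    conn.execute("INSERT INTO iddata VALUES (?, ?, ?, ?)", row)
conn.commit()  # note: commit() is a call -- `conn.commit` alone does nothing

count = conn.execute("SELECT COUNT(*) FROM iddata").fetchone()[0]
```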
|
<python><postgresql>
|
2023-02-17 04:21:28
| 1
| 409
|
data_life
|
75,480,174
| 10,097,229
|
Aggregate time series data on weekly basis
|
<p>I have a dataframe that consists of 3 years of data and two columns <code>remaining useful life</code> and <code>predicted remaining useful life</code>.</p>
<p>I am aggregating <code>rul</code> and <code>pred_diff</code> of 3 years of data for each machineID at the maximum date they have. The original dataframe looks like this-</p>
<pre><code> rul pred_diff machineID datetime
10476749 870 312.207825 408 2021-05-25 00:00:00
11452943 68 288.517578 447 2023-03-01 12:00:00
12693829 381 273.159698 493 2021-09-16 16:00:00
3413787 331 291.326416 133 2022-10-26 12:00:00
464093 77 341.506195 19 2023-10-10 16:00:00
... ... ... ... ...
11677555 537 310.586090 456 2022-04-07 00:00:00
2334804 551 289.307129 92 2021-09-04 20:00:00
5508311 35 293.721771 214 2023-01-06 04:00:00
12319704 348 322.199219 479 2021-11-11 20:00:00
4777501 87 278.089417 186 2021-06-29 12:00:00
1287421 rows × 4 columns
</code></pre>
<p>And I am aggregating it based on this code-</p>
<pre><code>y_test_grp = y_test.groupby('machineID').agg({'datetime':'max', 'rul':'mean', 'pred_diff':'mean'})[['datetime','rul', 'pred_diff']].reset_index()
</code></pre>
<p>which gives the following output-</p>
<pre><code> machineID datetime rul pred_diff
0 1 2023-10-03 20:00:00 286.817681 266.419401
1 2 2023-11-14 00:00:00 225.561953 263.372531
2 3 2023-10-25 00:00:00 304.736237 256.933351
3 4 2023-01-13 12:00:00 204.084899 252.476066
4 5 2023-09-07 00:00:00 208.702431 252.487156
... ... ... ... ...
495 496 2023-10-11 00:00:00 302.445285 298.836798
496 497 2023-08-26 04:00:00 281.601613 263.479885
497 498 2023-11-28 04:00:00 292.593906 263.985034
498 499 2023-06-29 20:00:00 260.887529 263.494844
499 500 2023-11-08 20:00:00 160.223614 257.326034
500 rows × 4 columns
</code></pre>
<p>Since this is grouped only by machineID, it gives just 500 rows, which is too few. I want to aggregate <code>rul</code> and <code>pred_diff</code> on a weekly basis such that for each machineID I get 52weeks*3years=156 rows. I am not able to identify which function to use to take 7 days as the interval and aggregate <code>rul</code> and <code>pred_diff</code> over it.</p>
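`pd.Grouper` gives `groupby` a time bucket; combining it with `machineID` yields one row per machine per week. A minimal sketch with stand-in data:

```python
import pandas as pd

df = pd.DataFrame({
    "machineID": [1, 1, 1, 2],
    "datetime": pd.to_datetime(
        ["2021-01-04", "2021-01-05", "2021-01-12", "2021-01-04"]),
    "rul": [10.0, 20.0, 30.0, 40.0],
    "pred_diff": [1.0, 2.0, 3.0, 4.0],
})

# Group by machine AND a weekly bucket instead of collapsing each machine
# to a single row.
weekly = (df.groupby(["machineID", pd.Grouper(key="datetime", freq="W")])
            .agg({"rul": "mean", "pred_diff": "mean"})
            .reset_index())
```

`freq="W"` uses calendar weeks ending Sunday; `freq="7D"` would instead anchor fixed 7-day bins at the first timestamp.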
|
<python><pandas><datetime><time-series>
|
2023-02-17 04:12:30
| 1
| 1,137
|
PeakyBlinder
|
75,480,149
| 8,942,319
|
poetry --help (or any other command) results in Library not loaded: '/usr/local/Cellar/python@3.10/3.10.8/Frameworks/.../3.10/Python'
|
<p>I had a poetry project working fine. I re-opened it today for the first time in a while, made some changes (mostly in the app code itself) and ran <code>poetry install</code> to update the dependencies.</p>
<p>Any <code>poetry</code> command results in</p>
<pre><code>Library not loaded: '/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/Python'
Referenced from: '/Users/my_user/Library/Application Support/pypoetry/venv/bin/python'
</code></pre>
<p>Any tips on how to get poetry working again?</p>
|
<python><python-3.x><virtualenv><python-poetry>
|
2023-02-17 04:02:42
| 0
| 913
|
sam
|
75,480,122
| 13,176,726
|
How to pass change_password.html to replace Django Admin Password Reset
|
<p>Currently in my urls.py I have the following links for user to reset their password</p>
<pre><code>app_name = 'users'
urlpatterns = [
path('password/', user_views.change_password, name='change_password'),
path('password-reset/', auth_views.PasswordResetView.as_view(template_name='users/password_reset.html', success_url=reverse_lazy('users:password_reset_done')), name='password_reset'),
path('password-reset/done/', auth_views.PasswordResetDoneView.as_view(template_name='users/password_reset_done.html'),name='password_reset_done'),
path('password-reset-confirm/<uidb64>/<token>/',auth_views.PasswordResetConfirmView.as_view(template_name='users/change_password.html',success_url=reverse_lazy('users:password_reset_complete')),name='password_reset_confirm'),
path('password-reset-complete/', auth_views.PasswordResetCompleteView.as_view(template_name='users/password_reset_complete.html'),name='password_reset_complete'),
]
</code></pre>
<p>here is the change_password.html</p>
<pre><code> <main class="mt-5" >
<div class="container dark-grey-text mt-5">
<div class="content-section">
<form method="POST">
{% csrf_token %}
<fieldset class="form-group">
<legend class="border-bottom mb-4">Reset Password</legend>
{{ form|crispy }}
</fieldset>
<div class="form-group">
<button class="btn btn-outline-info" type="submit">Reset Password</button>
</div>
</form>
</div>
</div>
</main>
</code></pre>
<p>After the user receives the reset email and clicks on the link to reset password it goes to the Django Admin Style page to reset password.</p>
<p>How can I make Django use my <code>change_password.html</code> template there instead, and how can I redirect the user to the login page to log in afterwards?</p>
<p>Just to add more context not sure if it might be the reason in the main urls.py</p>
<pre><code>urlpatterns = [
path('', include('django.contrib.auth.urls')),
path('admin/', admin.site.urls),
path('users/', include('users.urls'), ),
]
</code></pre>
<p>Here is the terminal showing the sequence:</p>
<pre><code>"GET /users/password-reset/ HTTP/1.1" 200 1849
"POST /users/password-reset/ HTTP/1.1" 302 0
"GET /users/password-reset/done/ HTTP/1.1" 200 1339
"GET /reset/NTM/xxxxxxxxxxxxxxxxx/ HTTP/1.1" 302 0
"GET /reset/NTM/set-password/ HTTP/1.1" 200 2288
"POST /reset/NTM/set-password/ HTTP/1.1" 302 0
"GET /reset/done/ HTTP/1.1" 200 1459
</code></pre>
|
<python><django><django-templates><django-urls><django-authentication>
|
2023-02-17 03:54:57
| 1
| 982
|
A_K
|
75,480,056
| 6,108,107
|
Remove less than character '<' and return half the numeric component styled to show changes
|
<p>I need to clean up some data. For items in a dataframe that are of the format '<x' I want to return 'x/2', so if the cell contents are '<10' they should be replaced with '5', if the cell contents are '<0.006' they should be replaced with 0.003, etc. I want changed cells to be formatted red and bold. I have the following code, which operates in two steps; each step does (almost) what I want, but I get a TypeError: 'float' object is not iterable when I try to chain them using: <code>fixed_df=df.style.apply(color_less_than,axis=None).applymap(lessthan)</code></p>
<p>Note that the actual dataset may be thousands of rows and will contain mixed types. Dummy data and code:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'A': ['<10', '20', 'foo', '<30', '40'],
'B': ['baz', '<dlkj', 'bar', 'foo', '<5']})
def color_less_than(x):
c1 = 'color: red; font-weight: bold'
c2 = ''
df1 = pd.DataFrame(c2, index=x.index, columns=x.columns)
for col in x.columns:
mask = x[col].str.startswith("<")
#display(mask)
df1.loc[mask, col] = c1
return df1
def lessthan(x):
#for x in df:
if isinstance(x, np.generic):
return x.item()
elif type(x) is int:
return x
elif type(x) is float:
return x
elif type(x) is str and x[0]=="<":
try:
return float(x[1:])/2
except:
return x
elif type(x) is str and len(x)<10:
try:
return float(x)
except:
return x
else:
return x
coloured=df.style.apply(color_less_than,axis=None)
halved=df.applymap(lessthan)
display(coloured)
display(halved)
</code></pre>
<p>Note that the df item <dlkj does not display at all after applying color_less_than, and I don't know why; I want it returned unformatted since it should not be changed (it's a string and can't be 'halved'). I have been trying to use the boolean mask to do both the calculation and the formatting, but I can't get it to work.</p>
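One way to sidestep the chaining problem is to compute the boolean mask and the halved values from the original frame separately, then use the saved mask for styling. A sketch of the mask/halving part:

```python
import pandas as pd

df = pd.DataFrame({'A': ['<10', '20', 'foo', '<30', '40'],
                   'B': ['baz', '<dlkj', 'bar', 'foo', '<5']})

def halve_if_numeric(x):
    # Halve only strings like "<10" whose remainder parses as a number.
    if isinstance(x, str) and x.startswith("<"):
        try:
            return float(x[1:]) / 2
        except ValueError:
            return x
    return x

# Boolean mask of the cells that will actually change, computed up front
# from the original values (so "<dlkj" is excluded).
changed = df.apply(lambda col: col.astype(str).str.match(r"<\d*\.?\d+"))
halved = df.apply(lambda col: col.map(halve_if_numeric))
```

The saved mask can then drive the styling of the halved frame, e.g. `halved.style.apply(lambda _: changed.replace({True: 'color: red; font-weight: bold', False: ''}), axis=None)`. As I understand it, `Styler.applymap` styles cells (it expects CSS strings back) rather than transforming the underlying values, which is likely where the error in the chained version comes from.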
|
<python><pandas>
|
2023-02-17 03:40:02
| 2
| 578
|
flashliquid
|
75,480,002
| 3,884,713
|
In PyTorch, how can I avoid an expensive broadcast when adding two tensors then immediately collapsing?
|
<p>I have two 2-d tensors, which align via broadcasting, so if I add/subtract them, I incur a huge 3-d tensor. I don't really need that though, since I'll be performing a <code>mean</code> on one dimension. In this demo, I unsqueeze the tensors to show how they align, but they are 2-d otherwise.</p>
<pre><code>x = torch.tensor(...) # (batch , 1, B)
y = torch.tensor(...) # (1, , A, B)
out = torch.cos(x - y).mean(dim=2) # (batch, B)
</code></pre>
<p>Possible Solutions:</p>
<ul>
<li><p>An algebraic simplification, but for the life of me I haven't solved this yet.</p>
</li>
<li><p>Some PyTorch primitive that'll help? This is cosine similarity, but, a bit different than <code>torch.cosine_similarity</code>. I'm applying it to complex numbers' <code>.angle()</code>s.</p>
</li>
<li><p>Custom C/CPython code that loops efficiently.</p>
</li>
<li><p>Other?</p>
</li>
</ul>
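On the algebraic route: cos(x − y) expands as cos x · cos y + sin x · sin y, and the mean over the A axis lands only on the y factors, so the (batch, A, B) intermediate never needs to exist. A numpy sketch checking the identity (assuming the mean runs over the A axis, matching the stated (batch, B) output shape):

```python
import numpy as np

rng = np.random.default_rng(0)
batch, A, B = 4, 5, 3
x = rng.normal(size=(batch, B))   # the 2-d form of the (batch, 1, B) tensor
y = rng.normal(size=(A, B))       # the 2-d form of the (1, A, B) tensor

# Direct route: materializes the (batch, A, B) intermediate.
direct = np.cos(x[:, None, :] - y[None, :, :]).mean(axis=1)

# Identity route: cos(x - y) = cos(x)cos(y) + sin(x)sin(y), and the mean
# over the A axis distributes onto the y factors only.
identity = np.cos(x) * np.cos(y).mean(axis=0) + np.sin(x) * np.sin(y).mean(axis=0)
```

In PyTorch the same thing is `torch.cos(x) * torch.cos(y).mean(...) + torch.sin(x) * torch.sin(y).mean(...)` on the squeezed 2-d tensors, at O(batch·B + A·B) memory.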
|
<python><pytorch><cosine-similarity><array-broadcasting><numpy-einsum>
|
2023-02-17 03:27:51
| 1
| 3,806
|
Josh.F
|
75,479,827
| 288,609
|
how to fix the corrupted base environment
|
<p>I accidentally installed a lot of packages using <code>pip install -r requirements.txt</code> in the base environment. Then I tried <code>pip uninstall</code>, but the uninstall seems to have been unsuccessful.</p>
<p>I am using the miniconda on Windows. How can I recover the base environment to clean state? Or do I have to reinstall miniconda to remove the whole base environment?</p>
|
<python><pip><conda><miniconda>
|
2023-02-17 02:47:01
| 1
| 13,215
|
user288609
|
75,479,771
| 9,386,819
|
Why does instantiating a set with braces preserve a string within it while instantiating with the set function splits the string?
|
<p>Just the question above.</p>
<p>Why is</p>
<pre><code>>>> x = {'foo'}
>>> print(x)
{'foo'}
</code></pre>
<p>But</p>
<pre><code>>>> x = set('foo')
>>> print(x)
{'o', 'f'}
</code></pre>
|
<python><set>
|
2023-02-17 02:36:09
| 1
| 414
|
NaiveBae
|
75,479,701
| 8,954,291
|
pandas categorical doesn't sort multiindex
|
<p>I've pulled some data from SQL as a CSV:</p>
<pre><code>Year,Decision,Residency,Class,Count
2019,Applied,Resident,Freshmen,1143
2019,Applied,Resident,Transfer,404
2019,Applied," ",Grad/Postbacc,418
2019,Applied,Non-Resident,Freshmen,1371
2019,Applied,Non-Resident,Transfer,371
2019,Admitted,Resident,Freshmen,918
2019,Admitted,Resident,Transfer,358
2019,Admitted," ",Grad/Postbacc,311
2019,Admitted,Non-Resident,Freshmen,1048
2019,Admitted,Non-Resident,Transfer,313
2020,Applied,Resident,Freshmen,1094
2020,Applied,Resident,Transfer,406
2020,Applied," ",Grad/Postbacc,374
2020,Applied,Non-Resident,Freshmen,1223
2020,Applied,Non-Resident,Transfer,356
2020,Admitted,Resident,Freshmen,1003
2020,Admitted,Resident,Transfer,354
2020,Admitted," ",Grad/Postbacc,282
2020,Admitted,Non-Resident,Freshmen,1090
2020,Admitted,Non-Resident,Transfer,288
</code></pre>
<p>I've written a transform as follows:</p>
<pre class="lang-py prettyprint-override"><code>data = pd.read_csv("Data.csv")
#Categorize the rows
data["Class"] = pd.Categorical(data["Class"],["Freshmen","Transfer","Grad/Postbacc","Grand"],ordered=True)
data["Decision"] = pd.Categorical(data["Decision"],["Applied","Admitted"],ordered=True)
data["Residency"] = pd.Categorical(data["Residency"],["Resident","Non-Resident"],ordered=True)
#Subtotal classes
tmp = data.groupby(["Year","Class","Decision"],sort=False).sum("Count")
tmp["Residency"] = "Total"
tmp.reset_index(inplace=True)
tmp = pd.concat([data,tmp],ignore_index=True)
#Grand total
tmp2 = data.groupby(["Year","Decision"],sort=False).sum("Count")
tmp2["Class"] = "Grand"
tmp2["Residency"] = "Total"
tmp2.reset_index(inplace=True)
tmp = pd.concat([tmp,tmp2],ignore_index=True)
#Crosstab it
tmp = pd.crosstab(index=[tmp["Year"],tmp["Class"],tmp["Residency"]],
columns=[tmp["Decision"]],
values=tmp["Count"],
aggfunc="sum")
tmp = tmp.loc[~(tmp==0).all(axis=1)]
tmp["%"] = np.round(100*tmp["Admitted"]/tmp["Applied"],1)
tmp = tmp.stack().unstack(["Year","Decision"])
print(tmp)
</code></pre>
<p>and it outputs as follows:</p>
<pre><code>Year 2019 2020
Decision Applied Admitted % Applied Admitted %
Class Residency
Freshmen Non-Resident 1371.0 1048.0 76.4 1223.0 1090.0 89.1
Resident 1143.0 918.0 80.3 1094.0 1003.0 91.7
Total 2514.0 1966.0 78.2 2317.0 2093.0 90.3
Grad/Postbacc Total 418.0 311.0 74.4 374.0 282.0 75.4
Grand Total 3707.0 2948.0 79.5 3453.0 3017.0 87.4
Transfer Non-Resident 371.0 313.0 84.4 356.0 288.0 80.9
Resident 404.0 358.0 88.6 406.0 354.0 87.2
Total 775.0 671.0 86.6 762.0 642.0 84.3
</code></pre>
<p>Expected output is</p>
<pre><code>Year 2019 2020
Decision Applied Admitted % Applied Admitted %
Class Residency
Freshmen Resident 1143.0 918.0 80.3 1094.0 1003.0 91.7
Non-Resident 1371.0 1048.0 76.4 1223.0 1090.0 89.1
Total 2514.0 1966.0 78.2 2317.0 2093.0 90.3
Transfer Resident 404.0 358.0 88.6 406.0 354.0 87.2
Non-Resident 371.0 313.0 84.4 356.0 288.0 80.9
Total 775.0 671.0 86.6 762.0 642.0 84.3
Grad/Postbacc Total 418.0 311.0 74.4 374.0 282.0 75.4
Grand Total 3707.0 2948.0 79.5 3453.0 3017.0 87.4
</code></pre>
<p>The categories successfully sort themselves correctly right up until I throw the dataframe into <code>pd.crosstab</code> at which point it all falls apart. What's going on and how do I fix it?</p>
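As far as I can tell, `crosstab` rebuilds its index from the raw values, so the Categorical ordering is lost at that step. One workaround is to reimpose the order afterwards with `reindex` over the intended category product, then drop the combinations that never occur. A minimal sketch with toy data standing in for the admissions table:

```python
import pandas as pd

class_order = ["Freshmen", "Transfer", "Grad/Postbacc"]
res_order = ["Resident", "Non-Resident", "Total"]

df = pd.DataFrame({
    "Class": ["Transfer", "Freshmen", "Grad/Postbacc", "Freshmen"],
    "Residency": ["Resident", "Resident", "Total", "Total"],
    "Count": [1, 2, 3, 4],
})
# crosstab sorts its MultiIndex lexicographically, ignoring any
# Categorical ordering on the inputs.
ct = pd.crosstab(index=[df["Class"], df["Residency"]],
                 columns=pd.Series(["Count"] * len(df)),
                 values=df["Count"], aggfunc="sum")

# Rebuild the row order from the intended category orders, then drop
# combinations that never occurred.
wanted = pd.MultiIndex.from_product([class_order, res_order],
                                    names=["Class", "Residency"])
ct = ct.reindex(wanted).dropna(how="all")
```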
|
<python><pandas><pivot-table>
|
2023-02-17 02:22:19
| 1
| 1,351
|
Jakob Lovern
|
75,479,577
| 5,235,665
|
Calculating bottom quartile performers with Pandas DataFrames
|
<p>Brand new to Pandas (Python) here and am cutting my teeth with some lightweight analytics, but am having some difficulty getting started.</p>
<p>I have a spreadsheet with the following data in it:</p>
<pre><code>Fruit,HarvestCount,HarvestDate
Apple,100,08/03/2022
Banana,2500,04/15/2022
Apple,4000,10/11/2022
Pineapple,5,02/07/2022
Pear,250,06/09/2022
Banana,1000,08/11/2022
Orange,20,07/23/2022
Orange,140,11/29/2022
Strawberry,600,12/11/2022
Apple,5000,04/01/2022
Pear,10,07/07/2022
Banana,50,10/19/2022
</code></pre>
<p>I am reading this Excel into a dataframe like so:</p>
<pre><code>data = pd.read_excel('fruit-harvests.xlsx', sheet_name='Harvests')
df_temp = pd.DataFrame(data)
</code></pre>
<p>Now what I am trying to do is:</p>
<ul>
<li>collapse the dataframe by "Fruit" name and sum each fruit's Harvest Count; then</li>
<li>figure out who the bottom-quartile (25%) performers were (that is, the bottom 25% of fruits with the lowest summed harvest count)</li>
</ul>
<p>Hence if I did this manually, the collapse + sum would look like:</p>
<pre><code>Apple,9100
Banana,3550
Pineapple,5
Pear,260
Orange,160
Strawberry,600
</code></pre>
<p>Sorted by <code>HarvestCount</code> descending it looks like:</p>
<pre><code>Apple,9100
Banana,3550
Strawberry,600
Pear,260
Orange,160
Pineapple,5
</code></pre>
<p>Since after the collapse we see there are six (6) distinct fruits, the bottom quartile would be the worst-performing 1.5 fruits, or rounded up, the worst 2 fruits:</p>
<pre><code>Orange,160
Pineapple,5
</code></pre>
<p>So from the time I read my Excel into a dataframe, I have to:</p>
<ol>
<li>Sum/aggregate/collapse</li>
<li>Sort by HarvestCount descending (or ascending whichever is easier for the next step)</li>
<li>And finally create a new dataframe of the worst-performing 25% of fruits (rows)</li>
</ol>
<p>Can anyone point me in the right direction here please?</p>
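A sketch of the three steps — groupby-sum, then take the smallest `ceil(25%)` of fruits with `nsmallest` (the dates are omitted since they don't affect the result):

```python
import math
import pandas as pd

df = pd.DataFrame({
    "Fruit": ["Apple", "Banana", "Apple", "Pineapple", "Pear",
              "Banana", "Orange", "Orange", "Strawberry", "Apple",
              "Pear", "Banana"],
    "HarvestCount": [100, 2500, 4000, 5, 250, 1000, 20, 140, 600, 5000, 10, 50],
})

# 1. collapse by fruit and sum the harvest counts
totals = df.groupby("Fruit", as_index=False)["HarvestCount"].sum()

# 2./3. bottom 25% of fruits by summed count, rounding the cutoff up
n_bottom = math.ceil(len(totals) * 0.25)
bottom = totals.nsmallest(n_bottom, "HarvestCount")
```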
|
<python><pandas>
|
2023-02-17 01:54:18
| 1
| 845
|
hotmeatballsoup
|
75,479,462
| 2,599,709
|
How do I extract the weights of my quantized model for use on hardware?
|
<p>EDIT: attaching some code to help generate similar results (appended at end)</p>
<p>I have a really small model with architecture <code>[2, 3, 6]</code> where the hidden layer uses ReLU and it's a softmax activation for multiclass classification. Trained offline and statically quantized later to qint8. What I would like to do now is extract the weights so I can use them on other hardware via matrix multiplication/addition. The problem I'm encountering is it doesn't seem to behave as expected. Take for instance this GraphModule output of state_dict():</p>
<pre class="lang-py prettyprint-override"><code>OrderedDict([('input_layer_input_scale_0', tensor(0.0039)),
('input_layer_input_zero_point_0', tensor(0)),
('input_layer.scale', tensor(0.0297)),
('input_layer.zero_point', tensor(0)),
('input_layer._packed_params.dtype', torch.qint8),
('input_layer._packed_params._packed_params',
(tensor([[-0.1180, 0.1180],
[-0.2949, -0.5308],
[-3.3029, -7.5496]], size=(3, 2), dtype=torch.qint8,
quantization_scheme=torch.per_tensor_affine, scale=0.05898105353116989,
zero_point=0),
Parameter containing:
tensor([-0.4747, -0.3563, 7.7603], requires_grad=True))),
('out.scale', tensor(1.5963)),
('out.zero_point', tensor(243)),
('out._packed_params.dtype', torch.qint8),
('out._packed_params._packed_params',
(tensor([[ 0.4365, 0.4365, -55.4356],
[ 0.4365, 0.0000, 1.3095],
[ 0.4365, 0.0000, -13.9680],
[ 0.4365, -0.4365, 4.3650],
[ 0.4365, 0.4365, -3.0555],
[ 0.4365, 0.0000, -1.3095],
[ 0.4365, 0.0000, 3.0555]], size=(7, 3), dtype=torch.qint8,
quantization_scheme=torch.per_tensor_affine, scale=0.43650051951408386,
zero_point=0),
Parameter containing:
tensor([ 19.2761, -1.0785, 14.2602, -22.3171, 10.1059, 7.2197, -11.7253],
requires_grad=True)))])
</code></pre>
<p>If I directly access the weights the way I think I should like so:</p>
<pre class="lang-py prettyprint-override"><code>input_weights = np.array(
[[-0.1180, 0.1180],
[-0.2949, -0.5308],
[-3.3029, -7.5496]])
inputs_scale = 0.05898105353116989
inputs_zero_point = 0
W1=np.clip(np.round(input_weights/inputs_scale + inputs_zero_point), -127, 128)
b1=np.clip(np.round(np.array([-0.4747, -0.3563, 7.7603])/inputs_scale + inputs_zero_point), -127, 128)
output_weights = np.array(
[[ 0.4365, 0.4365, -55.4356],
[ 0.4365, 0.0000, 1.3095],
[ 0.4365, 0.0000, -13.9680],
[ 0.4365, -0.4365, 4.3650],
[ 0.4365, 0.4365, -3.0555],
[ 0.4365, 0.0000, -1.3095],
[ 0.4365, 0.0000, 3.0555]])
outputs_scale=0.43650051951408386
outputs_zero_point=0
W2=np.clip(np.round(output_weights/outputs_scale + outputs_zero_point), -127, 128)
b2=np.clip(np.round(np.array([ 19.2761, -1.0785, 14.2602, -22.3171, 10.1059, 7.2197, -11.7253])/outputs_scale + outputs_zero_point), -127, 128)
</code></pre>
<p>And then I give it some data:</p>
<pre><code>inputs = np.array(
[[1. , 1. ], # class 0 example
[1. , 0. ], # class 1 example
[0. , 1. ],
[0. , 0. ],
[0. , 0.9 ],
[0. , 0.75],
[0. , 0.25]]) # class 6 example
</code></pre>
<p>Where each row is an example, then I would expect to be able to do matrix multiplication and argmax over the rows to get the result. However, doing that gives me this:</p>
<pre><code>>>> (ReLU((inputs @ W1.T) + b1) @ W2.T + b2).argmax(axis=0)
array([0, 3, 0, 3, 0, 0, 3])
</code></pre>
<p>which is not right.
And when I test accuracy of the quantized model in pytorch it's high enough that it should get all examples correct here. So what am I misunderstanding in terms of accessing these weights/bias?</p>
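<p>For reference, PyTorch's per-tensor affine scheme stores <code>q = round(x / scale) + zero_point</code> and dequantizes with <code>x = scale * (q - zero_point)</code>; the floats printed in <code>state_dict()</code> are already the <em>dequantized</em> values (each is an integer multiple of the scale), so dividing them by the scale just recovers the int8 representation. A small round-trip sketch using the first layer's scale (the int values are illustrative):</p>

```python
import numpy as np

scale = 0.05898105353116989
zero_point = 0

# int8 representation of the first quantized weight matrix
q = np.array([[-2, 2], [-5, -9], [-56, -128]], dtype=np.int8)

x = scale * (q.astype(np.float64) - zero_point)                # dequantize
q_back = np.clip(np.round(x / scale) + zero_point, -128, 127)  # re-quantize

# x matches the floats shown in state_dict(), e.g. x[0, 0] is approx. -0.1180
```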
<p>EDIT: adding code to help people mess around with quantization. Now technically it doesn't matter how this code is generated - an OrderedDict of the quantized model will remain similar. If you want to mess around with it, here is some code to generate a model and quantize it on the XOR problem. Note that I'm using a multiclass classification still to help stick to my original model. Anyway.... here you go...</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn as nn
import random
import copy
import numpy as np
import tensorflow as tf
import torch.nn.functional as F
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx
from torch.utils.data import DataLoader, TensorDataset
from pytorch_lightning.callbacks.progress import RichProgressBar
from pytorch_lightning.callbacks.early_stopping import EarlyStopping
import pytorch_lightning as pl
class XORModel(nn.Module):
def __init__(self, h: int):
super().__init__()
self.input_layer = nn.Linear(2, h)
self.out = nn.Linear(h, 2)
def forward(self, x):
out = self.input_layer(x)
out = F.relu(out)
out = self.out(out)
return out
class LitModel(pl.LightningModule):
def __init__(self, model: XORModel):
super().__init__()
self.model = model
def forward(self, x):
return self.model(x)
def _generic_step(self, batch, batch_idx, calc_metric: bool = False):
x, y = batch
out = self.model(x)
if calc_metric:
with torch.no_grad():
soft = F.softmax(out, dim=-1)
metric = (soft.argmax(-1).ravel() == y.ravel()).float().mean()
self.log('Accuracy', metric, prog_bar=True)
loss = F.cross_entropy(out, y)
return loss
def training_step(self, batch, batch_idx):
loss = self._generic_step(batch, batch_idx)
self.log('train_loss', loss, prog_bar=True)
return loss
def validation_step(self, batch, batch_idx):
loss = self._generic_step(batch, batch_idx, calc_metric=True)
self.log('val_loss', loss, prog_bar=True)
return loss
def configure_optimizers(self):
return torch.optim.Adam(self.model.parameters())
def get_accuracy(model: XORModel, seed: int):
dataset = make_dataset(1000, 1000, False, seed)
model.eval()
ret = []
with torch.no_grad():
for X, y in dataset:
out = F.softmax(model(X), dim=-1).argmax(-1)
ret.append((out.cpu().numpy() == y.numpy()).mean())
model.train()
return np.array(ret).mean()
def make_dataset(samples: int, batch_size: int, shuffle: bool, seed: int):
inputs, outputs = [], []
rng = random.Random(seed)
for _ in range(samples):
x0 = rng.randint(0, 1)
x1 = rng.randint(0, 1)
y = x0 ^ x1
inputs.append((x0, x1))
outputs.append(y)
dataset = TensorDataset(torch.tensor(inputs, dtype=torch.float), torch.tensor(outputs, dtype=torch.long))
dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=shuffle)
return dataloader
def quantize_model(model: XORModel):
model_to_quantize = copy.deepcopy(model)
model_to_quantize.eval()
def calibrate(m, data_loader):
m.eval()
with torch.no_grad():
for x in data_loader:
m(x)
loader = make_dataset(1000, 1000, False, 0x42)
sample_inputs = next(iter(loader))[0]
qconfig_dict = {'': torch.quantization.get_default_qconfig('fbgemm')}
    prepared_model = prepare_fx(model_to_quantize, qconfig_dict)
calibrate(prepared_model, sample_inputs)
quantized_model = convert_fx(prepared_model)
return quantized_model
if __name__ == '__main__':
train_dataset = make_dataset(10_000, 256, True, 123456)
val_dataset = make_dataset(500, 64, True, 0xabcd)
test_dataset = make_dataset(1000, 1000, False, 0x1122)
model = XORModel(3)
lit_model = LitModel(model)
trainer = pl.Trainer(accelerator='cpu', max_epochs=100,
callbacks=[
RichProgressBar(refresh_rate=50),
EarlyStopping(monitor='val_loss', mode='min', patience=3)
])
trainer.fit(lit_model, train_dataset, val_dataset)
qmodel = quantize_model(lit_model.model)
print('accuracy of model', get_accuracy(model, 0xbeef)) # prints 1
    print('accuracy of qmodel', get_accuracy(qmodel, 0xbeef)) # prints 1
</code></pre>
<p>Now assuming you save off the qmodel for later, you can look at the parameters similar to how I do by calling <code>qmodel.state_dict()</code></p>
|
<python><tensorflow><deep-learning><pytorch><quantization>
|
2023-02-17 01:28:49
| 3
| 4,338
|
Chrispresso
|
75,479,376
| 728,286
|
Flask application fails to read data from SQLite database on server using pd.read_sql
|
<p>In my flask application, I have a model class called <code>Well</code>, which includes a function called <code>getProdDF</code>, which pulls production data from a model class called <code>ProductionData</code> and turns it into a pandas dataframe using <code>pd.read_sql</code>. On my local machine it works fine like this:</p>
<pre><code>class Well:
id = db.Column(db.Integer, primary_key=True)
production = db.relationship('ProductionData', backref='well', lazy='dynamic')
def getProdDF(self):
data = self.production
df = pd.read_sql(data.statement, data.session.bind)
class ProductionData(ProdData, db.Model):
id = db.Column(db.Integer, primary_key=True)
well_id = db.Column(db.Integer, db.ForeignKey('well.id'))
oil = db.Column(db.Float)
#etc
</code></pre>
<p>but when I moved the code to a server and tried to run it there, it gave me this error:</p>
<pre><code> File "C:\Users\usr\Code\ProjName\app\models.py", line 517, in getProdDF
df = pd.read_sql(data.statement, data.session.bind)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\usr\Envs\ProjName\Lib\site-packages\pandas\io\sql.py", line 564, in read_sql
return pandas_sql.read_query(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\usr\Envs\ProjName\Lib\site-packages\pandas\io\sql.py", line 2078, in read_query
cursor = self.execute(*args)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\usr\Envs\ProjName\Lib\site-packages\pandas\io\sql.py", line 2016, in execute
cur = self.con.cursor()
^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'cursor'
</code></pre>
<p>Does anyone know what I'm doing wrong? The database file (<code>app.db</code>) is copied over, I can connect to the database and get data from it in the command line, but I get the same error when I try to run the <code>Well</code> model's <code>getProdDF</code> function:</p>
<pre><code>>>> from app import app, db
>>> from app.models import Well
>>> app.app_context().push()
>>> w = Well.query.first()
>>> w.getProdDF()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\usr\Code\ProjName\app\models.py", line 517, in getProdDF
df = pd.read_sql(data.statement, data.session.bind)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\usr\Envs\ProjName\Lib\site-packages\pandas\io\sql.py", line 564, in read_sql
return pandas_sql.read_query(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\usr\Envs\ProjNameemphasized text\Lib\site-packages\pandas\io\sql.py", line 2078, in read_query
cursor = self.execute(*args)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\usr\Envs\ProjName\Lib\site-packages\pandas\io\sql.py", line 2016, in execute
cur = self.con.cursor()
^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'cursor'
</code></pre>
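<p>A sketch of a workaround worth trying: the traceback shows <code>pd.read_sql</code> received <code>None</code> as its connection (<code>self.con.cursor()</code> on a <code>NoneType</code>), so instead of relying on <code>data.session.bind</code> you could pass an engine or connection explicitly, e.g. <code>pd.read_sql(data.statement, db.engine)</code>. A toy sqlite example of the explicit-connection pattern (table and column names are illustrative):</p>

```python
import sqlite3
import pandas as pd

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE production (id INTEGER, well_id INTEGER, oil REAL)")
con.executemany("INSERT INTO production VALUES (?, ?, ?)",
                [(1, 1, 10.5), (2, 1, 7.2)])

# Passing the connection explicitly avoids depending on session.bind being set
df = pd.read_sql("SELECT * FROM production", con)
```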
<p>Thanks a lot,
Alex</p>
|
<python><pandas><sqlalchemy>
|
2023-02-17 01:09:02
| 1
| 4,914
|
Alex S
|
75,479,364
| 4,281,353
|
TensorFlow/Keras where are 'loss', 'mean_absolute_error', 'val_loss', 'val_mean_absolute_error' defined?
|
<p>TensorFlow/Keras has multiple metrics to monitor but where are they defined? Please point to the documentation or github code where those strings have been defined.</p>
<pre><code>tf.keras.callbacks.EarlyStopping(
monitor="val_loss", # <-----
min_delta=0,
patience=0,
verbose=0,
mode="auto",
baseline=None,
restore_best_weights=False,
start_from_epoch=0,
)
</code></pre>
<h1>Conclusion</h1>
<p>There is no documentation or definition from TF/Keras. We need to figure the names out by searching around, picking up bits and pieces from multiple resources. It is considered a documentation bug.</p>
<p>Terminologies and values to use should have been defined and documented before being used, but that has not been done in this case.</p>
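<p>For what it's worth, the accepted strings are not a fixed list: they are the keys of the per-epoch <code>logs</code> dict — the loss plus every metric name passed to <code>compile()</code>, and the same names prefixed with <code>val_</code> when validation data is supplied. A pure-Python sketch of that naming convention (an illustration, not a Keras API):</p>

```python
def monitorable(metric_names):
    """Names that callbacks like EarlyStopping can monitor, per Keras' convention."""
    train = ["loss"] + list(metric_names)
    return train + ["val_" + name for name in train]

# e.g. a model compiled with metrics=["mean_absolute_error"]
names = monitorable(["mean_absolute_error"])
```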
|
<python><tensorflow><keras>
|
2023-02-17 01:05:25
| 1
| 22,964
|
mon
|
75,479,363
| 17,696,880
|
Replace a string by another if it is found after a pattern and before another
|
<pre class="lang-py prettyprint-override"><code>import re
input_text = "Creo que ((PERS)los viejos gabinetes) estan en desuso, hay que hacer algo con ellos. ellos quedaron en el deposito de afuera, lloviznó temprano por lo que ((PERS)los viejos gabinetes) fueron llevados a la sala principal."
pattern_01 = r"((PERS)\s*los\s[\w\s]+)(\.)"
output = re.sub(pattern_01, r"\1, \1\3", input_text, flags = re.IGNORECASE)
print(output)
</code></pre>
<p>Replace any <code>"ellos"</code> substring that occurs after a <code>((PERS)los ...)</code> group and before the first dot <code>.</code> following it, with the content inside that group's brackets.</p>
<p>Running this code as-is does not modify the string.</p>
<p>But I would need to get this output:</p>
<pre><code>"Creo que ((PERS)los viejos gabinetes) estan en desuso, hay que hacer algo con los viejos gabinetes. ellos quedaron en el deposito de afuera, lloviznó temprano por lo que ((PERS)los viejos gabinetes) fueron llevados a la sala principal."
</code></pre>
<p>The number of replacements is not known in advance; there may be more than one <code>"ellos"</code> between the <code>((PERS)... )</code> group and the first dot <code>.</code> after it.</p>
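<p>One complication is that the literal parentheses in <code>((PERS)...)</code> must be escaped in the pattern, and per-match replacement is easiest with a replacement function. A sketch of that approach (assuming the group label never itself contains <code>)</code>):</p>

```python
import re

input_text = ("Creo que ((PERS)los viejos gabinetes) estan en desuso, hay que "
              "hacer algo con ellos. ellos quedaron en el deposito de afuera.")

def expand(m):
    label = m.group(1)   # e.g. "los viejos gabinetes"
    tail = m.group(2)    # text after the group, up to (not including) the next "."
    return "((PERS)%s)%s" % (label, re.sub(r"\bellos\b", label, tail))

# \(\(PERS\)...\) matches the literal bracketed group; ([^.]*) grabs up to the dot
pattern = r"\(\(PERS\)\s*(los\b[^)]*)\)([^.]*)"
output = re.sub(pattern, expand, input_text, flags=re.IGNORECASE)
```

<p>Because <code>([^.]*)</code> stops at the first dot, only <code>"ellos"</code> occurrences before that dot are rewritten, and every occurrence in that span is replaced in one pass.</p>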
|
<python><python-3.x><regex><replace><regex-group>
|
2023-02-17 01:05:15
| 1
| 875
|
Matt095
|
75,479,360
| 1,816,135
|
After updating chromium from 108 to 110: WebDriverException: Message: unknown error: unable to discover open pages
|
<p>I am using Selenium to access a service that requires login. I log in once, and the login data is saved into a user data directory that I specify as follows:</p>
<pre><code> chrome_options.add_argument("--user-data-dir=%s" % self.user_dir)
</code></pre>
<p>Everything was okay until I updated the system (Ubuntu server). Chromium was updated from 108 to 110. The first issue I faced was that I needed to update ChromeDriver to 110.0.5481.77.</p>
<p>Now, when I use Selenium as usual, it takes a long time and then I get the following error:</p>
<pre><code>File "/home/user/bots/teleBots/app/wa.py", line 49, in __init__
browser = webdriver.Chrome(executable_path=driver, options=self.chrome_options,
File "/home/user/ak_env_9/lib/python3.9/site-packages/selenium/webdriver/chrome/webdriver.py", line 81, in __init__
super().__init__(
File "/home/user/ak_env_9/lib/python3.9/site-packages/selenium/webdriver/chromium/webdriver.py", line 106, in __init__
super().__init__(
File "/home/user/ak_env_9/lib/python3.9/site-packages/selenium/webdriver/remote/webdriver.py", line 288, in __init__
self.start_session(capabilities, browser_profile)
File "/home/user/ak_env_9/lib/python3.9/site-packages/selenium/webdriver/remote/webdriver.py", line 381, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/home/user/ak_env_9/lib/python3.9/site-packages/selenium/webdriver/remote/webdriver.py", line 444, in execute
self.error_handler.check_response(response)
File "/home/user/ak_env_9/lib/python3.9/site-packages/selenium/webdriver/remote/errorhandler.py", line 249, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: unable to discover open pages
</code></pre>
<p>I searched for solution and most of them suggest using the option:</p>
<pre><code> chrome_options.add_argument("--no-sandbox")
</code></pre>
<p>and others:</p>
<pre><code> chrome_options.add_argument("--remote-debugging-port=9222")
</code></pre>
<p>But nothing works for me until I removed the user dir option:</p>
<pre><code> #chrome_options.add_argument("--user-data-dir=%s" % self.user_dir)
</code></pre>
<p>It works for me, but I have to log in every time I run the script.</p>
<p>How can I solve this issue?</p>
<p>Is there any way to downgrade Chromium to 108? (I need a deb file for Ubuntu.)</p>
<p>Or any way to keep the login active other than using user-data-dir?</p>
|
<python><linux><ubuntu><selenium-webdriver><selenium-chromedriver>
|
2023-02-17 01:03:59
| 2
| 1,002
|
AKMalkadi
|
75,479,350
| 13,430,381
|
How to drop elements from a series by using a Pandas for loop index as the index parameter for the drop function?
|
<p>I am attempting to run a loop that filters certain elements based on a condition and removes those that match, as shown below:</p>
<pre><code>for index, value in enumerate(some_dataset.iloc):
if min(some_dataset.iloc[index]) >= some_dataset.iloc[0].values[index]:
dataset_filtered = some_dataset.drop(index=index)
</code></pre>
<p>However, the value being passed to the index parameter in the variable <code>index</code> does not seem to behave as an integer. Instead, I receive the following error for the first value that attempts to be dropped:</p>
<blockquote>
<p>KeyError: '[1] not found in axis'</p>
</blockquote>
<p>Thinking it was a Series element, I attempted to cast it as an integer by setting <code>index = index.astype(int)</code> in the parameters for the drop() function, but in this case, it <em>does</em> seem to behave as an integer, producing the following error message:</p>
<blockquote>
<p>AttributeError: 'int' object has no attribute 'astype'</p>
</blockquote>
<p>To solve this problem, I looked at Anton Protopopov's answer to <a href="https://stackoverflow.com/questions/34066533/drop-elements-from-pandas-series-by-index">this question asked by jjjayn</a>, but it did not help in my situation as specific elements were referenced in place of an iterating index.</p>
<hr />
<p>For context, the if statement is in place to filter out any samples whose lowest values are at the 0th index (thus, where the <code>min()</code> value of a sample transect is equal to the value at index 0). Essentially, it would tell me that values in the sample only grow larger for increasing <code>x</code>, which here is wavelength. When I print a table to see which samples this applies to, the results are what I expect (100 nm wavelengths are the 0th index):</p>
<pre><code>Sample Value (100 nm) Value (minima) Min (λ)
#2 0.0050 0.0050 100
#3 0.0060 0.0060 100
#14 0.0025 0.0025 100
...
</code></pre>
<p>So, with these results printed, I don't think the condition is the issue. Indeed, the first index that should be getting dropped is also one that I'd expect to be dropped -- sample 2, which corresponds to [1], is getting passed, but I think the brackets are being passed along with it (at least, that's my guess). So in sum, the issue is that a single-element list/series <code>[n]</code> is being passed to the index parameter instead of the integer, <code>n</code>, which is what I want.</p>
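<p>For illustration, the usual fix is to collect the integer <em>positions</em> during the loop, translate them to index <em>labels</em> once, and call <code>drop()</code> a single time — note also that <code>drop()</code> returns a new DataFrame rather than modifying in place, so repeated drops inside a loop discard earlier results. A toy sketch (the condition is simplified):</p>

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 5, 2], "b": [3, 6, 0]}, index=["x", "y", "z"])

# positions whose row minimum is not below the row's first value
to_drop = [i for i in range(len(df)) if df.iloc[i].min() >= df.iloc[i, 0]]

# drop() works on labels, not integer positions, so translate once
filtered = df.drop(index=df.index[to_drop])
```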
|
<python><pandas><dataframe>
|
2023-02-17 01:01:39
| 1
| 526
|
ttoshiro
|
75,479,260
| 1,232,087
|
pyspark - creating Row instance inside createDataFrame() method
|
<p>The following code is supposed to create a dataframe <code>df2</code> with two columns - first column storing the name of each column of <code>df</code> and the second column storing the max length of each column of <code>df</code>. But I'm getting the error shown below:</p>
<p><strong>Question</strong>: What may I be doing wrong here, and how can I fix the error?</p>
<blockquote>
<p>NameError: name 'row' is not defined</p>
</blockquote>
<pre><code>from pyspark.sql.functions import col, length, max
from pyspark.sql import Row
df = df.select([max(length(col(name))).alias(name) for name in df.schema.names])
df2 = spark.createDataFrame([Row(col=name, length=row[name]) for name in df.schema.names], ['col', 'length'])
</code></pre>
|
<python><python-3.x><apache-spark><pyspark>
|
2023-02-17 00:38:41
| 1
| 24,239
|
nam
|
75,479,237
| 5,085,934
|
Dynamic pattern matching with MATCH failing
|
<p>I'm creating a dashboard with Dash on which I want a variable number of graphs with associated dropdowns underneath each other. The dropdowns control an aspect of the graph (how it's sorted, but this is unimportant). Here is the code:</p>
<pre><code>from dash import html, dcc
from dash.dependencies import Output, Input, State, MATCH
import dash_bootstrap_components as dbc
from app.plots import get_product_breakdown_bar_chart
from .selectors import get_product_selection_checklist, get_impact_parameter_selection_checklist, get_product_to_sort_on_dropdown, DEFAULT_PRODUCT_CHECKLIST_ID, DEFAULT_IMPACT_PARAMETER_CHECKLIST_ID, DEFAULT_PRODUCTS, DEFAULT_IMPACT_PARAMETERS
def get_selectors_pane():
selectors_row = dbc.Row([
dbc.Col(
[get_product_selection_checklist()],
width = 6
),
dbc.Col(
[get_impact_parameter_selection_checklist()],
width = 6
)
])
labels_row = dbc.Row([
dbc.Col(
[dbc.Label("Products:", html_for = DEFAULT_PRODUCT_CHECKLIST_ID)],
width = 6
),
dbc.Col(
[dbc.Label("Impact parameters: ", html_for = DEFAULT_IMPACT_PARAMETER_CHECKLIST_ID)],
width = 6
)
])
return html.Div([
labels_row,
selectors_row
])
saved_sorted_on_states = {}
selected_products = DEFAULT_PRODUCTS
def render_graph_rows(selected_products, selected_impact_parameters):
def sov(impact_parameter):
if impact_parameter in saved_sorted_on_states:
if saved_sorted_on_states[impact_parameter] in selected_products:
return saved_sorted_on_states[impact_parameter]
else:
saved_sorted_on_states.pop(impact_parameter)
return selected_products[0]
else:
return selected_products[0]
rows = []
for s_ip in selected_impact_parameters:
sort_on_dropdown_id = {"type": "sort-on-dropdown", "index": s_ip}
ip_graph_id = {"type": "impact-parameter-graph", "index": s_ip}
rows.append(
html.Div([
dbc.Row([
dbc.Col([dbc.Label("Sort on:", html_for = sort_on_dropdown_id)], width = 2),
dbc.Col([get_product_to_sort_on_dropdown(sort_on_dropdown_id, sov(s_ip))], width = 10)
]),
dbc.Row([
dbc.Col([
dcc.Graph(
id = ip_graph_id,
figure = get_product_breakdown_bar_chart(s_ip, selected_products, sov(s_ip))
)
], width = 12)
])
])
)
return rows
content_layout = html.Div(
id = "content",
children = [
get_selectors_pane(),
html.Div(
id = "graph-grid",
children = render_graph_rows(DEFAULT_PRODUCTS, DEFAULT_IMPACT_PARAMETERS)
)
],
style = {
"margin-left": "14rem",
"margin-right": "2rem",
"padding": "2rem 1rem",
}
)
def register_callback(app):
def sort_graph_callback(value, index):
global saved_sorted_on_states
saved_sorted_on_states[index] = value
return (get_product_breakdown_bar_chart(index, selected_products, value), )
app.callback(
[Output({"type": "impact-parameter-graph", "index": MATCH}, "figure")],
[Input({"type": "sort-on-dropdown", "index": MATCH}, "value")],
[State({"type": "sort-on-dropdown", "index": MATCH}, "id")]
)(sort_graph_callback)
def new_master_selection_callback(s_ps, s_ips):
global selected_products
selected_products = s_ps
return (render_graph_rows(s_ps, s_ips), )
app.callback(
[Output("graph-grid", "children")],
[Input(DEFAULT_PRODUCT_CHECKLIST_ID, "value"), Input(DEFAULT_IMPACT_PARAMETER_CHECKLIST_ID, "value")]
)(new_master_selection_callback)
</code></pre>
<p>The problem is that the sort_graph_callback defined on line 86 never gets called. This callback is supposed to connect dynamically added graphs with dynamically added dropdowns associated to them. But when I select a different option in such a dropdown nothing happens to the associated graph and the callback doesn't get called at all. I know this from setting breakpoints in them. I have verified that the correct id's are assigned to the rendered graph and dropdown components.</p>
<p>(Please note that I'm registering the callbacks in a peculiar way due to code organization reasons. I have verified that this is not the cause of the issue)</p>
<p>I have no clue anymore how to debug this issue. In my development environment pattern matching callback examples from the official documentation work just fine. Is there anything I'm missing?</p>
<p>Thank you so much in advance,<br />
Joshua</p>
|
<python><plotly-dash><dashboard>
|
2023-02-17 00:33:08
| 0
| 486
|
Joshua Schroijen
|
75,479,194
| 11,098,908
|
Why did sns.scatterplot produce a different output compared to plt.scatter on the same dataset
|
<p>I tried to visualise the PCA transformed data of the MNIST Digit dataset using <code>sns.scatterplot</code> and plt.scatter approaches as below</p>
<pre><code>from keras.datasets import mnist
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
(X_train, y_train), (X_test, y_test) = mnist.load_data()
dim_1 = X_train.shape[0]
dim_2 = X_train.shape[1]
dim_3 = X_train.shape[2]
arr = X_train.reshape(dim_1, dim_2 * dim_3)
sc = StandardScaler()
norm_arr = sc.fit_transform(arr)
pca = PCA(n_components=2)
pca_arr = pca.fit_transform(norm_arr)
pca_arr = np.vstack((pca_arr.T, y_train)).T
pca_df = pd.DataFrame(data=pca_arr, columns=("1st_principal", "2nd_principal", "label"))
pca_df = pca_df.astype({'label': 'int32'})
</code></pre>
<p>Using scatterplot from matplotlib produces this visual:</p>
<pre><code>sns.FacetGrid(pca_df, hue="label", height=6).map(plt.scatter, '1st_principal', '2nd_principal').add_legend()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/wLcNC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wLcNC.png" alt="enter image description here" /></a></p>
<p>On the other hand, the scatterplot from seaborn is quite different, in particular the location of digit 9 (in the bottom right corner instead of the upper left corner as in the first plot).</p>
<pre><code>plt.figure(figsize=(7,7))
sns.scatterplot(x = pca_arr[:, 0], y = pca_arr[:, 1],
                hue = pca_arr[:, 2], palette = sns.hls_palette(10), legend = 'full')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/TxffS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TxffS.png" alt="enter image description here" /></a></p>
<p>Can someone please explain why two different visuals can be produced from the same dataset? I was wondering whether <code>sns.FacetGrid</code> had something to do with it, but I am not sure why. Which scatterplot is correct?</p>
<p>Thanks.</p>
|
<python><matplotlib><seaborn><scatter-plot><pca>
|
2023-02-17 00:22:43
| 0
| 1,306
|
Nemo
|
75,479,151
| 2,225,373
|
How to get the IP address of incoming connection in custom PAM module
|
<p>I am using PAM authentication to authenticate with my linux server. I have created a view on my website through Apache2 where I can use python to manually validate each login through a web shell with facial recognition and two factor authentication. This is working, but I can't seem to recover the IP address of the incoming connection. I need a way to find the IP address of my connection to the server before SSH is connected, in the PAM module which is running Python. I would like to use bash for this.</p>
<p>I am trying to execute commands to recover the IP address. I tried using "who" and other commands to see incoming SSH connections, to no avail. I also tried "echo $PAM_RHOST", "$SSH_CLIENT", and "$SSH_CONNECTION" without success.</p>
|
<python><django><bash><ssh><pam>
|
2023-02-17 00:13:14
| 1
| 509
|
Charlotte Harper
|
75,479,119
| 21,113,865
|
How do I run multiple configuration commands in Dell EMC OS10 with Paramiko?
|
<p>I am trying to run a series of commands to configure a vlan on a Dell EMC OS10 server using Paramiko. However I am running into a rather frustrating problem.</p>
<p>I want to run the following</p>
<pre><code># configure terminal
(config)# interface vlan 3
(conf-if-vl-3)# description VLAN-TEST
(conf-if-vl-3)# end
</code></pre>
<p>However, I can't seem to figure out how to achieve this with <code>paramiko.SSHClient()</code>.</p>
<p>When I try to use <code>sshclient.exec_command("show vlan")</code> it works great, it runs this command and exits. However, I don't know how to run more than one command with a single <code>exec_command</code>.</p>
<p>If I run <code>sshclient.exec_command("configure")</code> to access the configuration shell, the command completes and I believe the channel is closed, since my next command <code>sshclient.exec_command("interface vlan ...")</code> is not successful since the switch is no longer in configure mode.</p>
<p>If there is a way to establish a persistent channel with <code>exec_command</code> that would be ideal.</p>
<p>Instead I have resorted to a function as follows</p>
<pre><code>chan = sshClient.invoke_shell()
chan.send("configure\n")
chan.send("interface vlan 3\n")
chan.send("description VLAN_TEST\n")
chan.send("end\n")
</code></pre>
<p>Oddly, this works when I run it from a Python terminal one command at a time.</p>
<p>However, when I call this function from my Python main, it fails. Perhaps the channel is closed too soon when it goes out of scope from the function call?</p>
<p>Please advise if there is a more reasonable way to do this</p>
|
<python><ssh><paramiko><switching><vlan>
|
2023-02-17 00:03:30
| 1
| 319
|
user21113865
|
75,479,115
| 1,394,353
|
redirecting rich.inspect to file?
|
<p>I am trying to troubleshoot why program receiving one set of parameters works (call it <strong>v1</strong>), while another call with <em>almost the same parameters</em> fails (call that <strong>v2</strong>).</p>
<p>So I want to diff some complex nested data structures. Trying a yaml dump resulted in errors (there are weakrefs and SQL connection objects). So I used <code>rich.inspect</code> in the pdb debugger and found some issues - rich is smart enough not to get itself in trouble. So now, I want to dump this out to a text file instead.</p>
<p>Is there a more elegant way than redirecting <code>sys.stdout</code>? The code below works, but it's fairly ugly near <code>with open("capture2.txt","w") as sys.stdout:</code>.</p>
<p><code>cat capture2.txt</code> prints substantially the same output as the rich.inspect, including the colors. So, that's good, but is there a cleaner way to send rich.inspect to a file? The <code>console</code> argument looks like it might, but then again it looks more like a way to specify color and terminal behavior, rather than allowing for redirection.</p>
<pre><code>from rich import inspect
import sys
#just a simple way to get some complexity cheap
from types import SimpleNamespace
foo = SimpleNamespace(
a = 1,
name = "Foo",
bar = SimpleNamespace(b=2, name="Bar!", di = dict(a=1,b=2,c=[1,2,3]))
)
#OK, simple prints to screen
inspect(foo)
# is there a better way here 👇
print("redirect stdout")
#save stdout
bup = sys.stdout
with open("capture2.txt","w") as sys.stdout:
inspect(foo)
# restore stdout
sys.stdout = bup
#if I don't restore `sys.stdout` from `bup` => IOError
print("coucou")
</code></pre>
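<p>For the generic "swap stdout temporarily" part, a tidier stdlib pattern is <code>contextlib.redirect_stdout</code>, which restores <code>sys.stdout</code> automatically even if an exception is raised (shown here with <code>print</code>; any writer to stdout, such as <code>inspect(foo)</code>, is captured the same way):</p>

```python
import contextlib
import io

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    print("captured")   # e.g. inspect(foo) would be captured the same way

print("back on the real console")
captured = buf.getvalue()
```

<p>The same context manager also accepts an open file, e.g. <code>with open("capture2.txt", "w") as f, contextlib.redirect_stdout(f): ...</code>, with no manual save/restore of <code>sys.stdout</code>.</p>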
<h3>Screenshot of output:</h3>
<p><a href="https://i.sstatic.net/HOQfw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HOQfw.png" alt="enter image description here" /></a></p>
<p>p.s. To make diffing easier I did get rid of some of the terminal codes (color and font-related) by using <code>rich.inspect(x,console = Console(force_terminal=False))</code>.</p>
|
<python><rich>
|
2023-02-17 00:02:09
| 1
| 12,224
|
JL Peyret
|
75,479,097
| 3,431,407
|
Scrape multiple pages with the same url using Python Selenium
|
<p>I have the following code that scrapes some information I need from a website. However, there are <strong>61</strong> pages I need to go through, scraping the same data, which requires me to click on the 'Next' button to go to the next page while the <code>url</code> remains the same.</p>
<p>I know it is possible to use <code>driver.find_element_by_link_text('Next').click()</code> to go to the next page but I am not sure how to include this in my code.</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd
import time
driver = webdriver.Chrome()
driver.get('https://mspotrace.org.my/Sccs_list')
time.sleep(20)
# Get list of elements
elements = WebDriverWait(driver, 20).until(EC.presence_of_all_elements_located((By.XPATH, "//a[@title='View on Map']")))
# Loop through element popups and pull details of facilities into DF
pos = 0
df = pd.DataFrame(columns=['facility_name','other_details'])
for element in elements:
try:
data = []
element.click()
time.sleep(10)
facility_name = driver.find_element_by_xpath('//h4[@class="modal-title"]').text
other_details = driver.find_element_by_xpath('//div[@class="modal-body"]').text
time.sleep(5)
data.append(facility_name)
data.append(other_details)
df.loc[pos] = data
WebDriverWait(driver,10).until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button[aria-label='Close'] > span"))).click() # close popup window
print("Scraping info for",facility_name,"")
time.sleep(15)
pos+=1
except Exception:
alert = driver.switch_to.alert
print("No geo location information")
alert.accept()
pass
print(df)
</code></pre>
|
<python><selenium-webdriver><web-scraping><webdriver>
|
2023-02-16 23:58:53
| 1
| 661
|
Funkeh-Monkeh
|
75,479,046
| 6,645,564
|
How can I combine a scatter plot with a density heatmap?
|
<p>I have a series of scatterplots (one example below), but I want to modify it so that the colors of the points in the plot become more red (or "hot") when they are clustered more closely with other points, while points that are spread out further are colored more blue (or "cold"). Is it possible to do this?</p>
<p><a href="https://i.sstatic.net/Nfk8T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Nfk8T.png" alt="scatterplot example" /></a></p>
<p>Currently, my code is pretty basic in its set up.</p>
<pre><code>import plotly.express as px
fig = px.scatter(data, x='A', y='B', trendline='ols')
</code></pre>
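<p>One common approach (a sketch, not specific to plotly) is to compute a per-point density estimate and pass it as the color. Here a simple neighbour count within a radius stands in for a proper kernel density estimate (with scipy installed, <code>scipy.stats.gaussian_kde</code> gives a smoother version of the same idea):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 0.1, 200), [5.0]])  # tight cluster + one outlier
y = np.concatenate([rng.normal(0, 0.1, 200), [5.0]])

# crude density: how many points fall within radius 0.5 of each point
pts = np.column_stack([x, y])
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
density = (d2 < 0.5 ** 2).sum(axis=1)

# then, for example:
# fig = px.scatter(x=x, y=y, color=density, color_continuous_scale="RdBu_r")
```

<p>With a reversed red-blue scale like <code>"RdBu_r"</code>, densely clustered points come out red ("hot") and isolated points blue ("cold").</p>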
|
<python><plotly><plotly-express>
|
2023-02-16 23:48:08
| 1
| 924
|
Bob McBobson
|
75,478,784
| 1,462,718
|
Clang Dump AST of Python headers
|
<p>I am using:
<code>clang -Xclang -ast-dump -fsyntax-only /usr/local/Cellar/python@3.11/3.11.2/Frameworks/Python.framework/Versions/3.11/include/python3.11/boolobject.h</code></p>
<p>on the header to try to get all function declarations in a header</p>
<p><code>boolobject.h</code>:</p>
<pre><code>/* Boolean object interface */
#ifndef Py_BOOLOBJECT_H
#define Py_BOOLOBJECT_H
#ifdef __cplusplus
extern "C" {
#endif
PyAPI_DATA(PyTypeObject) PyBool_Type;
#define PyBool_Check(x) Py_IS_TYPE(x, &PyBool_Type)
/* Py_False and Py_True are the only two bools in existence.
Don't forget to apply Py_INCREF() when returning either!!! */
/* Don't use these directly */
PyAPI_DATA(PyLongObject) _Py_FalseStruct;
PyAPI_DATA(PyLongObject) _Py_TrueStruct;
/* Use these macros */
#define Py_False ((PyObject *) &_Py_FalseStruct)
#define Py_True ((PyObject *) &_Py_TrueStruct)
// Test if an object is the True singleton, the same as "x is True" in Python.
PyAPI_FUNC(int) Py_IsTrue(PyObject *x);
#define Py_IsTrue(x) Py_Is((x), Py_True)
// Test if an object is the False singleton, the same as "x is False" in Python.
PyAPI_FUNC(int) Py_IsFalse(PyObject *x);
#define Py_IsFalse(x) Py_Is((x), Py_False)
/* Macros for returning Py_True or Py_False, respectively */
#define Py_RETURN_TRUE return Py_NewRef(Py_True)
#define Py_RETURN_FALSE return Py_NewRef(Py_False)
/* Function to return a bool from a C long */
PyAPI_FUNC(PyObject *) PyBool_FromLong(long);
#ifdef __cplusplus
}
#endif
#endif /* !Py_BOOLOBJECT_H */
</code></pre>
<p>But it gives me the following errors:</p>
<pre><code>boolobject.h:10:26: error: expected function body after function declarator
PyAPI_DATA(PyTypeObject) PyBool_Type;
^
boolobject.h:18:26: error: expected function body after function declarator
PyAPI_DATA(PyLongObject) _Py_FalseStruct;
^
boolobject.h:19:26: error: expected function body after function declarator
PyAPI_DATA(PyLongObject) _Py_TrueStruct;
^
boolobject.h:26:17: error: expected function body after function declarator
PyAPI_FUNC(int) Py_IsTrue(PyObject *x);
^
boolobject.h:30:17: error: expected function body after function declarator
PyAPI_FUNC(int) Py_IsFalse(PyObject *x);
^
boolobject.h:38:12: error: unknown type name 'PyObject'
PyAPI_FUNC(PyObject *) PyBool_FromLong(long);
^
boolobject.h:38:24: error: expected function body after function declarator
PyAPI_FUNC(PyObject *) PyBool_FromLong(long);
^
</code></pre>
<p>I also tried to parse it with <code>libclang</code> in <code>python</code>:</p>
<pre><code>import sys
from typing import List
import clang
import clang.cindex
from clang.cindex import *
def traverse(node):
for child in node.get_children():
traverse(child)
print('Found %s [line=%s, col=%s] -- %s' % (node.displayname, node.location.line, node.location.column, node.kind))
if __name__ == "__main__":
clang.cindex.Config.set_library_path('/Library/Developer/CommandLineTools/usr/lib')
index = Index.create()
args = ['-xc++', '--std=c++17']
translation_unit = index.parse(
"/usr/local/Cellar/python@3.11/3.11.2/Frameworks/Python.framework/Versions/3.11/include/python3.11/boolobject.h",
args=args,
options=TranslationUnit.PARSE_INCOMPLETE)
print('Translation unit:', translation_unit.spelling)
traverse(translation_unit.cursor)
</code></pre>
<p>But it doesn't parse everything. It only spits out <code>VAR_DECL</code>'s:</p>
<pre><code>Found PyTypeObject [line=10, col=12] -- CursorKind.VAR_DECL
Found PyLongObject [line=18, col=12] -- CursorKind.VAR_DECL
Found PyLongObject [line=19, col=12] -- CursorKind.VAR_DECL
Found [line=6, col=8] -- CursorKind.UNEXPOSED_DECL
</code></pre>
<p>Any ideas how to fix this, so that it can parse the header? Am I missing a flag somewhere, either on the command line or in the above Python code?</p>
|
<python><c++><clang>
|
2023-02-16 23:05:49
| 0
| 23,565
|
Brandon
|
75,478,738
| 19,121,443
|
Turtle Python Package Could Not Be Installed On Ubuntu
|
<p>As you can see below, the turtle package could not be installed on the machine and raised errors, even though I have installed the python-tk package on Ubuntu. How can I solve the problem?
<a href="https://i.sstatic.net/3VYsd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3VYsd.png" alt="Turtle Python Package" /></a></p>
|
<python><pip><pypi><python-turtle>
|
2023-02-16 22:58:26
| 1
| 361
|
Soroush Mirzaei
|
75,478,608
| 3,723,031
|
GitPython find most recent tag in the current branch only
|
<p>I am using this code snippet with <a href="https://gitpython.readthedocs.io/en/stable/index.html" rel="nofollow noreferrer">GitPython</a> to capture the current branch, most recent commit, and most recent tag. This information will be inserted into a version string.</p>
<pre><code>repo = git.Repo(search_parent_directories=True)
current_branch = str(repo.active_branch)
most_recent_tag = str(repo.tags[-1])
most_recent_commit = repo.head.object.hexsha[0:7]
num_commits_since_last_tag = len(list(repo.iter_commits(most_recent_tag + "..")))
</code></pre>
<p>As written, this code will find tags in other branches, not just the current branch. How can I restrict my search for most recent tag to only tags that point to commits in the current branch?</p>
|
<python><git><gitpython>
|
2023-02-16 22:38:57
| 1
| 1,322
|
Steve
|
75,478,574
| 854,183
|
Using dictionary comprehension to create a dictionary from list of dictionaries
|
<p>This is my original code, it works as I need:</p>
<pre><code>import collections
import json
import yaml
file_list = [
{'path': '/path/to/file1', 'size': 100, 'time': '2022-02-15'},
{'path': '/path/to/file2', 'size': 200, 'time': '2022-02-13'},
{'path': '/path/to/file3', 'size': 300, 'time': '2022-02-12'},
{'path': '/path/to/file4', 'size': 200, 'time': '2022-02-11'},
{'path': '/path/to/file5', 'size': 100, 'time': '2022-02-1-'}]
new_dict = collections.defaultdict(list)
for file in file_list:
new_dict[file['size']].append(file['path'])
print(json.dumps(new_dict, indent=4, sort_keys=True))
</code></pre>
<p>I have found that using collections.defaultdict(list) helps to simplify the loop code so I do not need to check if a key already exists before appending to its list.</p>
<p>EDIT:</p>
<p>Is it possible to make this code <em><strong>concise</strong></em> by using dictionary comprehension to create the new_dict? The collections.defaultdict(list) is catching me out.</p>
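<p>For reference, a comprehension-based version is possible, though it trades <code>defaultdict</code>'s single pass for an O(n²) rescan per unique size (a sketch):</p>

```python
file_list = [
    {'path': '/path/to/file1', 'size': 100, 'time': '2022-02-15'},
    {'path': '/path/to/file2', 'size': 200, 'time': '2022-02-13'},
    {'path': '/path/to/file3', 'size': 300, 'time': '2022-02-12'},
    {'path': '/path/to/file4', 'size': 200, 'time': '2022-02-11'},
    {'path': '/path/to/file5', 'size': 100, 'time': '2022-02-1-'},
]

# Outer comprehension over the unique sizes; each value is a list
# comprehension that re-scans file_list for the matching paths.
new_dict = {size: [f['path'] for f in file_list if f['size'] == size]
            for size in {f['size'] for f in file_list}}
```

<p>The inner list comprehension preserves the original order of <code>file_list</code>, matching the <code>defaultdict</code> result.</p>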
|
<python><dictionary-comprehension>
|
2023-02-16 22:32:32
| 3
| 2,613
|
quantum231
|
75,478,472
| 7,658,985
|
Pytesseract issue detecting dot
|
<p>I do have the following photo:</p>
<p><a href="https://i.sstatic.net/KCgZb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KCgZb.png" alt="enter image description here" /></a></p>
<p>I'm trying to extract the text from it, but I'm facing an issue detecting the <code>dot</code> within the email!</p>
<pre><code>In [1]: import cv2
In [2]: import pytesseract
In [3]: img = cv2.imread("images/10008.png")
In [4]: text = pytesseract.image_to_string(img,config='--psm 6')
In [5]: text
Out[5]: 'City: CALGARY\n\nProvince: AB\n\nCountry: CANADA\n\nCompany Name: Bonnyville Immigration Services Inc.\nEmail: sujit saha@live.com\n\nPhone Number: 403-805-0007\n'
In [6]:
</code></pre>
<p>But when I tried that online <a href="https://ocr.space/" rel="nofollow noreferrer">site</a>, it was able to detect the dot easily when I selected the option <code>Use OCR Engine5 (Especially strong with text on complex backgrounds/low contrast)</code>.</p>
<p>Is there a way I can replicate the same within <code>pytesseract</code>?</p>
|
<python><python-tesseract>
|
2023-02-16 22:19:06
| 1
| 11,557
|
αԋɱҽԃ αмєяιcαη
|
75,478,397
| 15,445,589
|
Python Package with Sub-Packages for Microservice purpose
|
<p>I'm currently refactoring my monolith into microservices. To make them communicate, each service has a client module containing clients that call the other services per request. I want to manage the different packages as simply as possible, so I created a repository which is my package. Each folder/module is then a service together with the modules it needs.</p>
<p>What I want to achieve is that I can simply call <code>pip install package[subpackage]</code> and it installs only that specific module of the package. I chose one big package over several small packages because of the naming problem: most services have basic names for which pip packages already exist.</p>
<p>Repository of package</p>
<p>repo</p>
<ul>
<li>payments/
client/
models/</li>
<li>auth/
client/
models/</li>
</ul>
<p>setup.py</p>
<p>Is there a way to specify what each submodule/module needs for installation, like an <code>install_requires</code> per module?</p>
<p>Is there another good approach that I should take? I know some companies do this with Java, so that each module is its own "package" but all live under a company package. Maybe Python has a better solution for this.</p>
|
<python><pip><microservices>
|
2023-02-16 22:08:43
| 1
| 641
|
Kevin Rump
|
75,478,394
| 13,978,463
|
Unexpected behavior when I duplicate rows in data frame with a new structure
|
<p>I have a data frame like this (test file, the real one is bigger):</p>
<pre><code>og cogs consensus p_function category gene_1 gene_2 t dnds dn ds
OG0000190 COG0593 99 / 99 Chromosomal replication initiation ATPase DnaA (DnaA) L apS2gMQ_00001 GAWBhUD_01925 0.0194 0.126 0.0021 0.0163
OG0000190 COG0593 99 / 99 Chromosomal replication initiation ATPase DnaA (DnaA) L apS2gMQ_00001 GcPBA0T_00001 0.0174 0.001 0 0.0168
OG0000335 COG0845 99 / 99 Multidrug efflux pump subunit AcrA (membrane-fusion protein) (AcrA) MV JdgVjSO_00092 IlhnQ8K_01601 0.0244 0.1508 0.0027 0.0181
OG0000335 COG0845 99 / 99 Multidrug efflux pump subunit AcrA (membrane-fusion protein) (AcrA) MV JdgVjSO_00092 IsqAoZB_00822 0.0359 0.1083 0.0029 0.0265
OG0000532 COG0534 99 / 99 Na+-driven multidrug efflux pump, DinF/NorM/MATE family (NorM) V pr2jcFN_01326 528cT6K_01654 0.1306 0.1176 0.013 0.1105
OG0000532 COG0534 99 / 99 Na+-driven multidrug efflux pump, DinF/NorM/MATE family (NorM) V GcPBA0T_00567 7QtjQYC_01559 0.0502 0.1786 0.0067 0.0373
OG0000223 2DSC2 99 / 99 NO_COG_HIT NO_COG_HIT HQyC1X2_00055 BDcxYt7_01158 0.0083 99 0.0053 1e-04
OG0000223 2DSC2 99 / 99 NO_COG_HIT NO_COG_HIT kNAVz3k_01037 7QtjQYC_00282 0.0083 99 0.0053 1e-04
</code></pre>
<p>I wrote a Python script to read every row and check the fifth column ('category'). The main goal is to check whether that column contains a category with more than one letter, for example MV in my data frame. If so, the "if" statement takes the row and duplicates it. The "else" part checks whether 'NO_COG_HIT' is present.
The script does work and creates a new data frame, duplicating the row when a category has more than one letter.
However, I want to do that without repeating the same two letters; I want to separate them. For example, my actual output for that row (with more than one letter) is:</p>
<pre><code>OG0000335 COG0845 99 / 99 Multidrug efflux pump subunit AcrA (membrane-fusion protein) (AcrA) MV JdgVjSO_00092 IlhnQ8K_01601 0.0244 0.1508 0.0027 0.0181
OG0000335 COG0845 99 / 99 Multidrug efflux pump subunit AcrA (membrane-fusion protein) (AcrA) MV JdgVjSO_00092 IsqAoZB_00822 0.0359 0.1083 0.0029 0.0265
OG0000335 COG0845 99 / 99 Multidrug efflux pump subunit AcrA (membrane-fusion protein) (AcrA) MV JdgVjSO_00092 IsqAoZB_00822 0.0359 0.1083 0.0029 0.0265
OG0000335 COG0845 99 / 99 Multidrug efflux pump subunit AcrA (membrane-fusion protein) (AcrA) MV JdgVjSO_00092 J02zmKx_01401 0.0162 0.121 0.0014 0.0118
</code></pre>
<p>The row is present 4 times (at the beginning there were 2), but the letter is still the same, "MV".
My expected output is (the letters will be separated into "M" and "V"):</p>
<pre><code>OG0000335 COG0845 99 / 99 Multidrug efflux pump subunit AcrA (membrane-fusion protein) (AcrA) M JdgVjSO_00092 IlhnQ8K_01601 0.0244 0.1508 0.0027 0.0181
OG0000335 COG0845 99 / 99 Multidrug efflux pump subunit AcrA (membrane-fusion protein) (AcrA) M JdgVjSO_00092 IlhnQ8K_01601 0.0244 0.1508 0.0027 0.0181
OG0000335 COG0845 99 / 99 Multidrug efflux pump subunit AcrA (membrane-fusion protein) (AcrA) V JdgVjSO_00092 IsqAoZB_00822 0.0359 0.1083 0.0029 0.0265
OG0000335 COG0845 99 / 99 Multidrug efflux pump subunit AcrA (membrane-fusion protein) (AcrA) V JdgVjSO_00092 J02zmKx_01401 0.0162 0.121 0.0014 0.0118
</code></pre>
<p>My script:</p>
<pre><code>import pandas as pd
# Load data frame
test_file = pd.read_csv("/path/test.tsv",
sep="\t", names=['og', 'cogs', 'consensus', 'p_function', 'category', 'gene_1', 'gene_2', 't',
'dnds', 'dn', 'ds'])
# Sets the variable rows to the index of the data frame
rows = test_file.index
# Set a new data frame
new_test_file = pd.DataFrame(columns=['og', 'cogs', 'consensus', 'p_function', 'category', 'gene_1', 'gene_2', 't',
'dnds', 'dn', 'ds'])
# Iterate over the rows in the data frame's index
for r in rows:
# Extract the value of "category" column for the current 'r' (row) and assigns it to the variable
join_cog = test_file.at[r, 'category']
# Check if the value in join_cog variable is not equal to 'NO_COG_HIT', if is not, it will execute
if (join_cog!= 'NO_COG_HIT'):
for letter in join_cog:
cog = letter
og = test_file.at[r, 'og']
cogs = test_file.at[r, 'cogs']
consensus = test_file.at[r, 'consensus']
p_function = test_file.at[r, 'p_function']
category = test_file.at[r, 'category']
gene_1 = test_file.at[r, 'gene_1']
gene_2 = test_file.at[r, 'gene_2']
t = test_file.at[r, 't']
dnds = test_file.at[r, 'dnds']
dn = test_file.at[r, 'dn']
ds = test_file.at[r, 'ds']
df_tmp = pd.DataFrame({'og': [og], 'cogs': [cogs], 'consensus': [consensus], 'p_function': [p_function],
'category': [category], 'gene_1': [gene_1], 'gene_2': [gene_2], 't': [t],
'dnds': [dnds], 'dn': [dn], 'ds': [ds]})
new_test_file = pd.concat([new_test_file, df_tmp], ignore_index=True)
# If 'NO_COG_HIT' is present, it will execute
else:
cog = letter
og = test_file.at[r, 'og']
cogs = test_file.at[r, 'cogs']
consensus = test_file.at[r, 'consensus']
p_function = test_file.at[r, 'p_function']
category = test_file.at[r, 'category']
gene_1 = test_file.at[r, 'gene_1']
gene_2 = test_file.at[r, 'gene_2']
t = test_file.at[r, 't']
dnds = test_file.at[r, 'dnds']
dn = test_file.at[r, 'dn']
ds = test_file.at[r, 'ds']
df_tmp = pd.DataFrame({'og': [og], 'cogs': [cogs], 'consensus': [consensus], 'p_function': [p_function],
'category': [category], 'gene_1': [gene_1], 'gene_2': [gene_2], 't': [t],
'dnds': [dnds], 'dn': [dn], 'ds': [ds]})
new_test_file = pd.concat([new_test_file, df_tmp], ignore_index=True)
</code></pre>
<p>I'm looking to solve this unexpected behavior.</p>
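<p>For what it's worth, the row-by-row copying can be avoided altogether: turning each multi-letter category into a list of letters and letting <code>DataFrame.explode</code> duplicate the rows yields one row per letter. A sketch on a toy frame (<code>ignore_index</code> needs pandas >= 1.1):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'category': ['MV', 'NO_COG_HIT', 'L'],
    'gene_1':   ['g1', 'g2', 'g3'],
})

# Split multi-letter categories into lists of single letters, leaving
# the NO_COG_HIT sentinel untouched; explode then emits one row per
# letter, copying all other columns.
df['category'] = df['category'].apply(
    lambda c: list(c) if c != 'NO_COG_HIT' else c)
out = df.explode('category', ignore_index=True)
```

<p><code>explode</code> leaves scalar (non-list) cells as single rows, which is why the <code>NO_COG_HIT</code> rows pass through unchanged.</p>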
|
<python><pandas>
|
2023-02-16 22:08:11
| 1
| 425
|
Someone_1313
|
75,478,284
| 2,662,728
|
TypeError ... not JSON serializable in Django with no reference to my code in the traceback
|
<p>Notice that my code is not listed. It is all libraries.</p>
<pre><code>Traceback (most recent call last):
File "/root/env/lib/python3.9/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
File "/root/env/lib/python3.9/site-packages/django/utils/deprecation.py", line 138, in __call__
response = self.process_response(request, response)
File "/root/env/lib/python3.9/site-packages/django/contrib/sessions/middleware.py", line 59, in process_response
request.session.save()
File "/root/env/lib/python3.9/site-packages/django/contrib/sessions/backends/db.py", line 82, in save
obj = self.create_model_instance(data)
File "/root/env/lib/python3.9/site-packages/django/contrib/sessions/backends/db.py", line 69, in create_model_instance
session_data=self.encode(data),
File "/root/env/lib/python3.9/site-packages/django/contrib/sessions/backends/base.py", line 94, in encode
return signing.dumps(
File "/root/env/lib/python3.9/site-packages/django/core/signing.py", line 150, in dumps
return TimestampSigner(key, salt=salt).sign_object(
File "/root/env/lib/python3.9/site-packages/django/core/signing.py", line 228, in sign_object
data = serializer().dumps(obj)
File "/root/env/lib/python3.9/site-packages/django/core/signing.py", line 125, in dumps
return json.dumps(obj, separators=(",", ":")).encode("latin-1")
File "/usr/lib/python3.9/json/__init__.py", line 234, in dumps
return cls(
File "/usr/lib/python3.9/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python3.9/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/usr/lib/python3.9/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
Exception Type: TypeError at /financialreconciliation/
Exception Value: Object of type AllCompany is not JSON serializable
</code></pre>
<p>I knew the offending view from the URL. I did a binary search in the view with print statements to try
to find the line where it occurred, but I did not find it.
I got all the way to the return with print statements and they all showed.</p>
<p>Then I did a search using <code>return HttpResponse()</code> and I finally tracked down the
problem being:</p>
<pre><code>request.session['variance_dict'] = variance_dict
</code></pre>
<p>And indeed, the dictionary had AllCompany object in it.</p>
<p>I am still learning the control portion of Model–View–Controller, so I didn't
realize that the update to the dictionary would not happen until the return. This
may also be a Python issue; I'm not sure which caused it. However, I thought others might be in my spot and this might help them.</p>
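<p>The rule is easy to reproduce outside Django: the session backend JSON-serialises the session dict only while the response is processed, and arbitrary objects are rejected there. A minimal stand-in (no Django required; <code>AllCompany</code> below is a plain class standing in for the model) shows the failure and the usual fix of storing the primary key:</p>

```python
import json

class AllCompany:            # plain stand-in for the Django model
    def __init__(self, pk):
        self.pk = pk

variance_dict = {"company": AllCompany(7), "variance": 1.25}

try:
    json.dumps(variance_dict)            # what the session backend does
    raised = False
except TypeError:                        # "Object of type AllCompany is not JSON serializable"
    raised = True

# Store only JSON-friendly values (e.g. the pk) and re-fetch the model later.
safe_dict = {"company_id": variance_dict["company"].pk,
             "variance": variance_dict["variance"]}
encoded = json.dumps(safe_dict)
```

<p>This also explains why the traceback shows no user code: the failing <code>json.dumps</code> runs inside the session middleware, after the view has already returned.</p>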
|
<python><json><django><session-variables>
|
2023-02-16 21:52:43
| 2
| 535
|
Anthony Petrillo
|
75,478,267
| 9,328,846
|
How to use Pandas groupby in a for loop (FutureWarning)
|
<p>I have the following pandas dataframe:</p>
<pre><code>d2 = {'col1': [0, 0, 1, 1, 2], 'col2': [10, 11, 12, 13, 14]}
df2 = pd.DataFrame(data=d2)
df2
</code></pre>
<p>Output:</p>
<pre><code> col1 col2
0 0 10
1 0 11
2 1 12
3 1 13
4 2 14
</code></pre>
<p>And I need to run the following:</p>
<pre><code>for i, g in df2.groupby(['col1']):
col1_val = g["col1"].iloc[0]
print(col1_val)
</code></pre>
<p>The original code is more complex but writing so for the purpose of illustration.</p>
<p>And the part <code>for i, g in df2.groupby(['col1']):</code> gives the following warning:</p>
<pre><code>FutureWarning: In a future version of pandas, a length 1 tuple will be returned
when iterating over a groupby with a grouper equal to a list of length 1.
Don't supply a list with a single grouper to avoid this warning.
</code></pre>
<p>How am I supposed to run the for loop to get rid of this warning?</p>
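<p>The warning goes away if the single grouper is passed as a plain string rather than a one-element list (a sketch; with a genuine multi-column grouper the key is a tuple and the list form stays appropriate):</p>

```python
import pandas as pd

d2 = {'col1': [0, 0, 1, 1, 2], 'col2': [10, 11, 12, 13, 14]}
df2 = pd.DataFrame(data=d2)

keys = []
# 'col1' as a string keeps each group key a scalar, so i and
# g["col1"].iloc[0] are interchangeable and no FutureWarning is emitted.
for i, g in df2.groupby('col1'):
    keys.append(i)
```

<p>If the list form must stay (for example when the groupers are built dynamically), unpacking the key with <code>for (i,), g in df2.groupby(['col1'])</code> works on pandas versions that already return the length-1 tuple.</p>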
|
<python><pandas><dataframe><group-by>
|
2023-02-16 21:50:18
| 1
| 2,201
|
edn
|
75,478,168
| 2,175,534
|
Python Request to C#
|
<p>I'm attempting to utilize an API call that I found in some Python code and translate it into C# so that I can integrate it into a Unity application I'm developing. The Python code is:</p>
<pre><code>response = requests.request(
method=method,
url=url.as_uri(),
verify=settings.CACERT_FILE,
**kwargs,
)
</code></pre>
<p>where Method = POST, Verify = None, and Kwargs = {'json': {'time': 1, 'types': ['the_types']}}.</p>
<p>My C# implementation:</p>
<pre><code>var client = new HttpClient();
client.BaseAddress = new Uri("theurl.com");
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
HttpResponseMessage response = client.GetAsync("{'json': {'time': 1, 'types': ['the_types']}}").Result;
if (response.IsSuccessStatusCode)
{
Debug.Log("Worked");
}
else
{
Debug.Log("Didn't work");
}
client.Dispose();
</code></pre>
<p>I'm quite new to C#, and I can't figure out what I'm missing or where to go from here. Thanks in advance!</p>
<p>EDIT: Utilizing client.PostAsync</p>
<pre><code>HttpResponseMessage response = client.PostAsync(new Uri("theUrl"), new StringContent("{'json': {'time': 1, 'types': ['the_types']}}")).GetAwaiter().GetResult();
</code></pre>
|
<python><c#>
|
2023-02-16 21:35:01
| 1
| 1,406
|
Bob
|
75,478,106
| 7,346,976
|
EarlyStopping custom callback to stop training if validation metric goes up/down after certain number of training steps
|
<p>I want to stop the model using a custom callback if the <code>val_accuracy</code> is decreasing after a certain number of <em>steps (steps here mean training_examples/batch_size)</em>.</p>
<p>Here's my first attempt which works but doesn't actually stop the model training:</p>
<pre><code>class CustomEarlyStopping(tf.keras.callbacks.Callback):
def __init__(self, monitor, max_steps, mode='min', delta=0):
super().__init__()
self.monitor = monitor
self.max_steps = max_steps
self.mode = mode
self.delta = delta
self.wait = 0
self.stopped_step = 0
self.best = None
def on_train_begin(self, logs=None):
self.wait = 0
self.stopped_step = 0
self.best = None
def on_train_batch_end(self, batch, logs=None):
current = logs.get(self.monitor)
if current is None:
return
if self.best is None:
self.best = current
elif (self.mode == 'min' and current < self.best - self.delta) or (self.mode == 'max' and current > self.best + self.delta):
self.wait += 1
if self.wait >= self.max_steps:
self.stopped_step = self.steps_per_epoch
self.model.stop_training = True
else:
self.wait = 0
self.best = current
early_stopping = CustomEarlyStopping(monitor='val_accuracy', max_steps=100)
</code></pre>
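<p>One likely reason the callback never fires: at <code>on_train_batch_end</code> time the <code>logs</code> dict typically carries only training metrics, so <code>logs.get('val_accuracy')</code> is <code>None</code> and the method returns immediately (and <code>self.steps_per_epoch</code> is never defined). Separately, the patience bookkeeping can be checked in isolation, outside Keras. In this plain-Python sketch ('max' mode, as suits an accuracy metric), <code>wait</code> grows only while the metric fails to improve, whereas the callback above increments it in its improvement branch:</p>

```python
# Framework-free sketch of the intended patience logic ('max' mode,
# suitable for an accuracy-style metric).
def stop_index(values, max_steps, delta=0.0):
    """Return the step index at which training should stop, or None."""
    best, wait = None, 0
    for step, current in enumerate(values):
        if best is None or current > best + delta:
            best, wait = current, 0          # improvement: reset patience
        else:
            wait += 1                        # no improvement: consume patience
            if wait >= max_steps:
                return step
    return None
```

<p>Once this logic is confirmed, it can be transplanted back into <code>on_train_batch_end</code>, with the metric read from wherever validation results actually become available.</p>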
|
<python><tensorflow><keras><early-stopping>
|
2023-02-16 21:27:11
| 0
| 924
|
mank
|
75,478,099
| 12,242,085
|
How to extract performance metrics from confusion matrix for multiclass classification model with 5 classes in Python?
|
<p>I built a multiclass classification model (with 5 classes in the target) in Python and I have a confusion matrix like the one below:</p>
<pre><code>confusion_matrix(y_test, model.predict(X_test))
[[2006 114 80 312 257]
[567 197 87 102 155]
[256 84 316 39 380]
[565 30 67 592 546]
[363 71 186 301 1402]]
</code></pre>
<p>How can I calculate based on confusion matrix above, the following values:</p>
<ol>
<li>True Negative</li>
<li>False Positive</li>
<li>False Negative</li>
<li>True Positive</li>
<li>Accuracy</li>
<li>True Positive Rate</li>
<li>False Positive Rate</li>
<li>True Negative Rate</li>
<li>False Negative Rate</li>
</ol>
<p>I have the following function to calculate these for a binary target, but how can I modify it to calculate them for my 5-class target?</p>
<pre><code>def xx(model, X_test, y_test):
CM = confusion_matrix(y_test, model.predict(X_test))
print(CM)
print("-"*40)
TN = CM[0][0]
FP = CM[0][1]
FN = CM[1][0]
TP = CM[1][1]
sensitivity=TP/float(TP+FN)
specificity=TN/float(TN+FP)
print("True Negative:", TN)
print("False Positive:", FP)
print("False Negative:", FN)
print("True Positive:", TP)
print("Accuracy", round((TN + TP) / len(model.predict(X_test)) * 100, 2), "%")
print("True Positive rate",round(TP/(TP+FN)*100,2), "%")
print("False Positive rate",round(FP/(FP+TN)*100,2), "%")
print("True Negative rate",round(TN/(FP+TN)*100,2), "%")
print("False Negative rate",round(FN/(FN+TP)*100,2), "%")
</code></pre>
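<p>In the multiclass case every class gets its own one-vs-rest counts, and all of them fall straight out of the matrix (rows = true labels, columns = predictions). A sketch with numpy, demonstrated on a small matrix for checkability; the same function applies unchanged to the 5x5 one:</p>

```python
import numpy as np

def per_class_counts(cm):
    """One-vs-rest TP/FP/FN/TN per class of a confusion matrix
    whose rows are true labels and columns are predicted labels."""
    cm = np.asarray(cm)
    tp = np.diag(cm)                  # correct predictions per class
    fp = cm.sum(axis=0) - tp          # predicted as the class, but wrong
    fn = cm.sum(axis=1) - tp          # true class, predicted as something else
    tn = cm.sum() - (tp + fp + fn)    # everything else
    return tp, fp, fn, tn

cm = [[5, 1],
      [2, 7]]
tp, fp, fn, tn = per_class_counts(cm)
tpr = tp / (tp + fn)                  # per-class true positive rate
fpr = fp / (fp + tn)                  # per-class false positive rate
accuracy = tp.sum() / np.asarray(cm).sum()
```

<p>For a ready-made summary, <code>sklearn.metrics.classification_report(y_test, y_pred)</code> prints per-class precision and recall as well.</p>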
|
<python><function><machine-learning><confusion-matrix><multiclass-classification>
|
2023-02-16 21:26:29
| 1
| 2,350
|
dingaro
|
75,478,003
| 4,199,229
|
Is there a Python datetime library function that does the same thing as Oracle's NEXT_DAY function?
|
<p>Oracle has a very useful function for finding the date of the next weekday. This function is called <code>NEXT_DAY</code>. It takes a start date, say <code>15-OCT-2009</code>, and the weekday we're expecting, say <code>TUESDAY</code>. The result of <code>NEXT_DAY('15-OCT-2009', 'TUESDAY')</code> is <code>20-OCT-2009</code> because that's the next Tuesday after the 15th of October 2009.</p>
<p>Does Python have a function, be it Built-in or via the datetime library, that does this same thing?</p>
<p>I need it to deterministically return true/false for the question "if this day was in November, is it Thanksgiving?" without referencing any kind of lookup table for any year.</p>
<p>It's used by looking at the day number, the 15th part from above, and <strong>finding the number of days until the next Thursday</strong> in October then adding 21 to that. An alternative method of answering the question "is the day Thanksgiving?", deterministically, at runtime, without any lookup tables, restricted to plain Python or the datetime library would work for my situation but would not work to answer the question in the title.</p>
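<p>There is no built-in equivalent, but both pieces take only a few lines with <code>datetime</code>. A sketch (the helper mirrors Oracle's <code>NEXT_DAY</code> semantics of returning the first matching date strictly after the input; the Thanksgiving test encodes "fourth Thursday of November", whose day number always falls in 22-28):</p>

```python
import datetime

WEEKDAYS = ['MONDAY', 'TUESDAY', 'WEDNESDAY', 'THURSDAY',
            'FRIDAY', 'SATURDAY', 'SUNDAY']

def next_day(d, weekday_name):
    """First date strictly after d falling on weekday_name,
    mirroring Oracle's NEXT_DAY."""
    target = WEEKDAYS.index(weekday_name.upper())
    days_ahead = (target - d.weekday() - 1) % 7 + 1
    return d + datetime.timedelta(days=days_ahead)

def is_thanksgiving(d):
    """True if d is the 4th Thursday of November (US Thanksgiving)."""
    return d.month == 11 and d.weekday() == 3 and 22 <= d.day <= 28
```

<p>Both functions are pure date arithmetic, so no lookup table is needed for any year.</p>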
|
<python><datetime>
|
2023-02-16 21:15:55
| 2
| 358
|
UpTide
|
75,477,897
| 12,297,666
|
Dropping all rows of dataframe where discontinuous data happens
|
<p>Consider the following part of a Pandas dataframe:</p>
<pre><code> 0 1 2
12288 1000 45047 0.403
12289 1000 45048 0.334
12290 1000 45101 0.246
12291 1000 45102 0.096
12292 1000 45103 0.096
12293 1000 45104 0.024
12294 1000 45105 0.023
12295 1000 45106 0.023
12296 1000 45107 0.024
12297 1000 45108 0.024
12298 1000 45109 0.024
12299 1000 45110 0.055
12300 1000 45111 0.107
12301 1000 45112 0.024
12302 1000 45113 0.024
12303 1000 45114 0.024
12304 1000 45115 0.060
12305 1000 45116 1.095
12306 1000 45117 1.090
12307 1000 45118 0.418
12308 1000 45119 0.292
12309 1000 45120 0.446
12310 1000 45121 0.121
12311 1000 45122 0.121
12312 1000 45123 0.090
12313 1000 45124 0.031
12314 1000 45125 0.031
12315 1000 45126 0.031
12316 1000 45127 0.031
12317 1000 45128 0.036
12318 1000 45129 0.124
12319 1000 45130 0.069
12320 1000 45131 0.031
12321 1000 45132 0.031
12322 1000 45133 0.031
12323 1000 45134 0.031
12324 1000 45135 0.031
12325 1000 45136 0.059
12326 1000 45137 0.115
12327 1000 45138 0.595
12328 1000 45139 1.375
12329 1000 45140 0.780
12330 1000 45141 0.028
12331 1000 45142 0.029
12332 1000 45143 0.029
12333 1000 45144 0.029
12334 1000 45145 0.028
12335 1000 45146 0.085
12336 1000 45147 0.528
12337 1000 45148 0.107
12338 1000 45201 0.024
12339 1000 45204 0.024
12340 1000 45205 0.024
12341 1000 45206 0.024
12342 1000 45207 0.024
12343 1000 45208 0.024
12344 1000 45209 0.045
12345 1000 45210 0.033
12346 1000 45211 0.025
12347 1000 45212 0.024
12348 1000 45213 0.024
12349 1000 45214 0.024
12350 1000 45215 0.024
12351 1000 45216 0.108
12352 1000 45217 1.109
12353 1000 45218 2.025
12354 1000 45219 2.918
12355 1000 45220 4.130
12356 1000 45221 0.601
12357 1000 45222 0.330
12358 1000 45223 0.400
12359 1000 45224 0.200
12360 1000 45225 0.093
12361 1000 45226 0.023
12362 1000 45227 0.023
12363 1000 45228 0.023
12364 1000 45229 0.024
12365 1000 45230 0.024
12366 1000 45231 0.118
12367 1000 45232 0.064
12368 1000 45233 0.023
12369 1000 45234 0.023
12370 1000 45235 0.023
12371 1000 45236 0.022
12372 1000 45237 0.022
12373 1000 45238 0.022
12374 1000 45239 0.106
12375 1000 45240 0.074
12376 1000 45241 0.105
12377 1000 45242 1.231
12378 1000 45243 0.500
12379 1000 45244 0.382
12380 1000 45245 0.405
12381 1000 45246 0.469
12382 1000 45247 0.173
12383 1000 45248 0.035
12384 1000 45301 0.026
12385 1000 45302 0.027
</code></pre>
<p>In column <code>1</code>, it's a code that represents when some measurements (values in column <code>2</code>) were taken. The first <strong>three</strong> digits of the values in column <code>1</code> represent a day, and the last <strong>two</strong> digits represent a half-hour time slot within that day. We start from day <code>450</code> (first two rows), and in the third row we are already on day <code>451</code>. From index <code>12290</code> to <code>12337</code> you can see that we have <code>48</code> values (which represent <code>48</code> half-hourly measurements of a single day). So, the last two digits <code>01</code> mean a measurement between 00:00:00 and 00:29:59, <code>02</code> means a measurement between 00:30:00 and 00:59:59, <code>03</code> means a measurement between <code>01:00:00</code> and <code>01:29:59</code>, and so on.</p>
<p>For example, a discontinuity happens in column <code>1</code> between index <code>12289</code> and index <code>12290</code>, but this discontinuity happened between <code>450</code> and <code>451</code> in the first three digits (a discontinuity between two days, since we moved from one day to another; the last two digits <code>48</code> in <code>45048</code> represent the measurement between <code>23:30:00</code> and <code>23:59:59</code> on day <code>450</code>), so those rows should not be dropped.</p>
<p>But now, if you look at index <code>12338</code> and index <code>12339</code>, there is a discontinuity happening within the same day <code>452</code>: we are missing the measurements for slots <code>02</code> and <code>03</code> (we have a measurement at <code>45201</code> and then the next at <code>45204</code>). So, <strong>ALL</strong> rows from day <code>452</code> should be dropped.</p>
<p>And again a discontinuity happens between index <code>12383</code> and index <code>12384</code>, but since that happens between two different days (<code>452</code> and <code>453</code>), nothing should be dropped.</p>
<p>All the values in column <code>1</code> are <code>int64</code>.</p>
<p>Sorry if this is long and/or confusing, but does anyone have ideas on how I can solve this?</p>
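<p>One pandas approach, sketched on a toy series (the real frame's column <code>1</code> plays the role of <code>codes</code>, and rows are assumed to be time-ordered): split each code into its day and slot parts with integer arithmetic, then keep only the days whose slots are strictly consecutive:</p>

```python
import pandas as pd

codes = pd.Series([45047, 45048, 45101, 45104, 45105, 45201, 45202])
day = codes // 100          # first three digits: the day number
slot = codes % 100          # last two digits: half-hour slot 1..48

# A day is "continuous" if its slots form an unbroken run of
# consecutive integers; drop every row of any day that is not.
# Gaps across a day boundary are never flagged, since slots are
# only compared within each day's group.
ok_days = slot.groupby(day).apply(
    lambda s: (s.diff().dropna() == 1).all())
kept = codes[day.map(ok_days)]
```

<p>Here day <code>451</code> jumps from slot <code>01</code> to <code>04</code>, so all of its rows are dropped, while the day-boundary jump from <code>45048</code> to <code>45101</code> is left alone.</p>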
|
<python><pandas>
|
2023-02-16 21:03:06
| 3
| 679
|
Murilo
|
75,477,823
| 12,284,585
|
Mosquitto MQTT: setting the `"ca_certs" to "/etc/ssl/certs/" leads to `IsADirectoryError: [Errno 21] Is a directory`
|
<p>I'm looking for the corresponding Python code for</p>
<pre class="lang-bash prettyprint-override"><code># this works fine!
mosquitto_sub --capath /etc/ssl/certs/ -u user -P xyz -h hostname.com -p 8883 -t '#'
</code></pre>
<p>This is my code, which should do the same...</p>
<pre class="lang-py prettyprint-override"><code>import ssl
import paho.mqtt.client as mqtt
def on_connect(client, userdata, flags, rc):
print("Connected with result code " + str(rc))
client.subscribe("#")
def on_message(client, userdata, msg):
print(msg.topic + " " + str(msg.payload))
client = mqtt.Client()
client.username_pw_set("user", "xyz ")
client.tls_set("/etc/ssl/certs/", tls_version=ssl.PROTOCOL_TLSv1_2) # <- line 16
client.on_connect = on_connect
client.on_message = on_message
client.connect("hostname.com", 8883)
client.loop_forever()
</code></pre>
<p>But it gives me an exception:</p>
<pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last):
File "/workspaces/mqtt.py", line 16, in <module>
client.tls_set("/etc/ssl/certs/", tls_version=ssl.PROTOCOL_TLSv1_2)
File "/home/vscode/.local/lib/python3.9/site-packages/paho/mqtt/client.py", line 804, in tls_set
context.load_verify_locations(ca_certs)
IsADirectoryError: [Errno 21] Is a directory
</code></pre>
<p>I do not understand why, as <a href="https://pypi.org/project/paho-mqtt/" rel="nofollow noreferrer">the documentation</a> says: <em>a string path to the Certificate Authority certificate files</em></p>
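<p>The mismatch is that <code>mosquitto_sub --capath</code> takes a <em>directory</em> of hashed certificates, while paho's <code>tls_set()</code> hands its first argument to <code>ssl</code> as <code>cafile</code>, a single PEM file (visible in the traceback's <code>load_verify_locations(ca_certs)</code>), hence the <code>IsADirectoryError</code>. The distinction can be shown with the standard library alone; on the paho side, building an <code>ssl.SSLContext</code> this way and passing it to <code>client.tls_set_context(context)</code> is the usual route to <code>--capath</code> behaviour:</p>

```python
import ssl
import tempfile

ctx = ssl.create_default_context()

with tempfile.TemporaryDirectory() as capath_dir:
    # A directory is fine as capath ...
    ctx.load_verify_locations(capath=capath_dir)
    # ... but not as cafile, which must be a single PEM file.
    try:
        ctx.load_verify_locations(cafile=capath_dir)
        directory_rejected = False
    except OSError:               # IsADirectoryError on Linux
        directory_rejected = True

# Sketch of the paho side (assumes paho-mqtt is installed):
# client = mqtt.Client()
# client.tls_set_context(ctx)
```

<p>Note that <code>capath</code> directories must contain certificates indexed by OpenSSL's hash names (as <code>/etc/ssl/certs/</code> does on most distributions).</p>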
|
<python><ssl><mqtt><mosquitto>
|
2023-02-16 20:55:07
| 1
| 1,333
|
tturbo
|
75,477,749
| 10,807,390
|
How can I add text to the same position in multiple matplotlib plots with different axis scales?
|
<p>I have ~20 plots with different axes, ranging from scales of 0-1 to 0-300. I want to use <code>plt.text(x,y)</code> to add text to the top left corner in my automated plotting function, but the changing axis size does not allow for this to be automated and completely consistent.</p>
<p>Here are two example plots:</p>
<pre><code>import matplotlib.pyplot as plt
plt.plot([1, 2, 3, 4])
plt.ylabel('some numbers')
plt.show()
#Plot 2
plt.plot([2, 4, 6, 8])
plt.ylabel('some numbers')
plt.show()
</code></pre>
<p>I want to use something like <code>plt.text(x, y, 'text', fontsize=8)</code> in both plots, but without specifying the <code>x</code> and <code>y</code> for each plot by hand, instead just saying that the text should go in the top left. Is this possible?</p>
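<p>Yes: position the text in <em>axes coordinates</em> rather than data coordinates via <code>transform=ax.transAxes</code>. A sketch (the Agg backend is selected only so it runs headless):</p>

```python
import matplotlib
matplotlib.use("Agg")              # headless backend, for the sketch only
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3, 4])
ax.set_ylabel('some numbers')

# In axes coordinates (0, 0) is the bottom-left and (1, 1) the top-right,
# independent of the data limits -- so this lands top-left on every plot.
txt = ax.text(0.02, 0.98, 'text', transform=ax.transAxes,
              fontsize=8, ha='left', va='top')
```

<p><code>ax.annotate('text', xy=(0.02, 0.98), xycoords='axes fraction', va='top')</code> is an equivalent spelling of the same idea.</p>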
|
<python><matplotlib><plot-annotations>
|
2023-02-16 20:45:51
| 2
| 427
|
Tom
|
75,477,740
| 4,889,914
|
How to run Python or C# as alternative to VBScript as automation engine
|
<p>For many years we have been developing an application with Visual C++ and MFC.
The application itself has no technical flow logic, but functions as a COM server. After initialisation, the program starts the Windows VBScript engine, which in turn controls the flow of the program through VBScript code.
To start VBScript, the classes <code>IActiveScriptParse</code> or <code>IActiveScript</code> are used.</p>
<p>Since VBScript has not been developed further by MS for years, I am afraid that in a few years the scripting host will disappear completely from Windows, which would be the ultimate disaster. Therefore, I would like to change the scripting to either Python or C#.</p>
<p>How can I start a Python script or a C# Dll method from the C++ program?
Are there implementation of <code>IActiveScriptParse</code> / <code>IActiveScript</code> for C#/Python?</p>
|
<python><c#><visual-c++><automation><iactivescript>
|
2023-02-16 20:44:59
| 0
| 341
|
suriel
|
75,477,603
| 1,045,909
|
Is there a way to prevent specific `enum.Flag` combinations?
|
<p>If I have an enum class like so:</p>
<pre class="lang-py prettyprint-override"><code>class TestFlag(enum.Flag):
A = enum.auto()
B = enum.auto()
C = enum.auto()
D = A | B # valid
</code></pre>
<p>Is it possible to specify a certain combination, such as, say <code>TestFlag.C | TestFlag.B</code> as invalid? In other words, is there a way to ensure that writing <code>TestFlag.C | TestFlag.B</code> will raise an Error?</p>
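One approach: composite `Flag` values that do not correspond to a named member are built through the `_missing_` classmethod, so a forbidden bit pattern can be rejected there. A sketch (explicit integer values are used so that `D = A | B` works in the class body on all Python versions):

```python
import enum

class TestFlag(enum.Flag):
    A = 1
    B = 2
    C = 4
    D = A | B  # valid named combination (plain ints at class-body time)

    @classmethod
    def _missing_(cls, value):
        # Unnamed composite values are created via _missing_, so any
        # combination containing both B and C can be rejected here.
        bad = cls.B.value | cls.C.value
        if isinstance(value, int) and value & bad == bad:
            raise ValueError(f"{cls.__name__}: B|C is not a valid combination")
        return super()._missing_(value)

ok = TestFlag.A | TestFlag.B   # -> TestFlag.D, no _missing_ involved
```

Note that inversion is affected too: `~TestFlag.A` computes the complement `B|C` and will therefore also raise, which may or may not be what you want.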
|
<python><enums><enum-flags>
|
2023-02-16 20:29:34
| 2
| 460
|
tkott
|
75,477,433
| 5,730,203
|
SQLALCHEMY - SQLITE: (sqlite3.InterfaceError) Error binding parameter 1 - probably unsupported type
|
<p>For the schema below:</p>
<pre><code>CREATE TABLE BatchData (
pk INTEGER PRIMARY KEY AUTOINCREMENT,
batchid TEXT NOT NULL,
status TEXT NOT NULL,
    strategyname TEXT NOT NULL,
createdon DATETIME
);
</code></pre>
<p>I have been trying to update a column value based on list of batchids.</p>
<p>Snapshot of data in db is:</p>
<pre><code>pk,batchid,status,strategyname,createdon
1,a3eaa908-dbfc-4d9e-aa2a-2604ee3fdd95,FINISHED,OP_Ma,2023-02-15 06:20:21.924608
2,8813d314-4548-4c14-bd28-f2775fd7a1a7,INPROGRESS,OP_Ma,2023-02-16 06:01:19.335228
3,d7b0ef19-97a9-47b1-a885-925761755992,INPROGRESS,OP_CL,2023-02-16 06:20:52.748321
4,e30e2485-e62c-4d3c-9640-05e1b980654b,INPROGRESS,OP_In,2023-02-15 06:25:04.201072
</code></pre>
<p>While I'm able to update this table with following query executed directly in the console:</p>
<pre><code>UPDATE BatchData SET status = 'FINISHED' WHERE batchid in ('a3eaa908-dbfc-4d9e-aa2a-2604ee3fdd95',
'8813d314-4548-4c14-bd28-f2775fd7a1a7',
'd7b0ef19-97a9-47b1-a885-925761755992')
</code></pre>
<p>When I try to do the same using Sqlalchemy:</p>
<pre><code>import sqlalchemy as sa
sqlite_eng = sa.create_engine('blah.db')
...
...
status = 'FINISHED'
tuple_data = tuple(batchids)
STMT = sa.text("""UPDATE BatchData SET status = :stat WHERE batchid IN (:bids)""")
STMT_proxy = sqlite_eng.execute(STMT, stat=status, bids=tuple_data)
</code></pre>
<p>I have also made sure status is of type <code><str></code> and bids of type <code>tuple(<str>)</code>.
Still getting the following error:</p>
<pre><code> InterfaceError: (sqlite3.InterfaceError) Error binding parameter 1 - probably unsupported type.
[SQL: UPDATE BatchData SET status = ? WHERE batchid IN (?)]
[parameters: ('FINISHED', ('e30e2485-e62c-4d3c-9640-05e1b980654b', 'ea5df18f-1610-4f45-a3ee-d27b7e3bd1b4',
'd226c86f-f0bc-4d0c-9f33-3514fbb675c2',
'4a6b53cd-e675-44a1-aea4-9ae0 ... (21900 characters truncated) ... -c3d9-430f-b06e-c660b8ed13d8',
'66ed5802-ad57-4192-8d76-54673bd5cf8d', 'e6a3a343-b2ca-4bc4-ad76-984ea4c55e7e', '647dc42d-eccc-4119-b060-9e5452c2e9e5'))]
</code></pre>
<p>Can someone please help me find the parameter type mismatch or parameter-binding mistake?</p>
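DB-API drivers bind one placeholder to one scalar, so a whole tuple bound to the single `:bids` parameter fails. With SQLAlchemy's `text()` the usual fix is an expanding bind parameter — `sa.text("... WHERE batchid IN :bids").bindparams(sa.bindparam("bids", expanding=True))` executed with a list — which rewrites the IN clause to one placeholder per element. The underlying mechanics can be shown with the stdlib `sqlite3` driver directly (table and ids simplified):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE BatchData (pk INTEGER PRIMARY KEY AUTOINCREMENT,"
             " batchid TEXT NOT NULL, status TEXT NOT NULL)")
conn.executemany("INSERT INTO BatchData (batchid, status) VALUES (?, ?)",
                 [("b1", "INPROGRESS"), ("b2", "INPROGRESS"), ("b3", "STARTED")])

batchids = ("b1", "b2")
# One "?" per value -- a tuple bound to a single "?" is what triggers
# "Error binding parameter ... probably unsupported type".
placeholders = ",".join("?" for _ in batchids)
conn.execute(
    f"UPDATE BatchData SET status = ? WHERE batchid IN ({placeholders})",
    ("FINISHED", *batchids),
)
conn.commit()
rows = conn.execute(
    "SELECT batchid, status FROM BatchData ORDER BY batchid").fetchall()
```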
|
<python><database><sqlite><sqlalchemy><query-string>
|
2023-02-16 20:11:55
| 1
| 1,432
|
Asif Ali
|
75,477,402
| 2,280,178
|
How can I read a CSV from Kaggle
|
<p>I want to read a CSV file from Kaggle:</p>
<pre><code>import os
import pandas as pd
df = pd.read_csv('/kaggle/input/ibm-hr-analytics-attrition-dataset/WA_Fn-UseC_-HR-Employee-Attrition.csv')
print("Shape of dataframe is: {}".format(df.shape))
</code></pre>
<p>But I get this error:</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: '/kaggle/input/ibm-hr-analytics-attrition-dataset/WA_Fn-UseC_-HR-Employee-Attrition.csv'
</code></pre>
<p>I took the file path from kaggle.</p>
<p>Thank you for any help.</p>
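A `FileNotFoundError` here usually means the path in the notebook differs from what is actually mounted; note that `/kaggle/input` only exists inside a Kaggle kernel, so the same path fails when run locally. Listing what really exists under the input root shows the actual dataset and file names (helper name is illustrative):

```python
import os

def list_inputs(root):
    """Return all file paths below root (e.g. '/kaggle/input' on Kaggle)."""
    found = []
    for dirname, _, filenames in os.walk(root):
        for filename in filenames:
            found.append(os.path.join(dirname, filename))
    return found

# Inside a Kaggle kernel: print(list_inputs('/kaggle/input'))
```

If the file is missing locally, download the dataset from Kaggle first and point `pd.read_csv` at the local copy.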
|
<python><csv><kaggle>
|
2023-02-16 20:08:55
| 1
| 517
|
SebastianS
|
75,477,373
| 6,046,626
|
SQLAlchemy is slow when executing a query the first time
|
<p>I'm using SQLAlchemy (2.0.3) with Python 3.10. After a fresh container boot it takes ~2.2 s to execute a specific query; all subsequent calls of the same query take ~70 ms. I'm using PostgreSQL, and the raw query takes 40-70 ms in DataGrip.
Here is the code:</p>
<pre class="lang-py prettyprint-override"><code>self._Session = async_sessionmaker(self._engine, expire_on_commit=False)
...
@property
def session(self):
return self._Session
...
async with PostgreSQL().session.begin() as session:
total_functions = aliased(db_models.Function)
finished_functions = aliased(db_models.Function)
failed_functions = aliased(db_models.Function)
stmt = (
select(
db_models.Job,
func.count(distinct(total_functions.id)).label("total"),
func.count(distinct(finished_functions.id)).label("finished"),
func.count(distinct(failed_functions.id)).label("failed")
)
.where(db_models.Job.project_id == project_id)
.outerjoin(db_models.Job.packages)
.outerjoin(db_models.Package.modules)
.outerjoin(db_models.Module.functions.of_type(total_functions))
.outerjoin(finished_functions, and_(
finished_functions.module_id == db_models.Module.id,
finished_functions.progress == db_models.FunctionProgress.FINISHED
))
.outerjoin(failed_functions, and_(
failed_functions.module_id == db_models.Module.id,
or_(
failed_functions.state == db_models.FunctionState.FAILED,
failed_functions.state == db_models.FunctionState.TERMINATED,
))
)
.group_by(db_models.Job.id)
)
start = time.time()
yappi.set_clock_type("WALL")
with yappi.run():
job_infos = await session.execute(stmt)
yappi.get_func_stats().print_all()
end = time.time()
</code></pre>
<p>Things I have tried and discovered:</p>
<ul>
<li>The problem is not related to connecting to or querying the database. On service boot I establish the connection and make some other queries.</li>
<li>The problem is most likely not related to the cache. I have disabled the cache with <code>query_cache_size=0</code>; however, I'm not 100% sure that it worked, since the documentation says:</li>
</ul>
<blockquote>
<p>ORM functions related to unit-of-work persistence as well as some attribute loading strategies will make use of individual per-mapper caches outside of the main cache.</p>
</blockquote>
<ul>
<li>Profiler didn't show anything that caught my attention:</li>
</ul>
<pre class="lang-bash prettyprint-override"><code>..urrency_py3k.py:130 greenlet_spawn 2/1 0.000000 2.324807 1.162403
..rm/session.py:2168 Session.execute 1 0.000028 2.324757 2.324757
..0 _UnixSelectorEventLoop._run_once 11 0.000171 2.318555 0.210778
..syncpg_cursor._prepare_and_execute 1 0.000054 2.318187 2.318187
..cAdapt_asyncpg_connection._prepare 1 0.000020 2.316333 2.316333
..nnection.py:533 Connection.prepare 1 0.000003 2.316154 2.316154
..nection.py:573 Connection._prepare 1 0.000017 2.316151 2.316151
..n.py:359 Connection._get_statement 2/1 0.001033 2.316122 1.158061
..ectors.py:452 EpollSelector.select 11 0.000094 2.315352 0.210487
..y:457 Connection._introspect_types 1 0.000025 2.314904 2.314904
..ction.py:1669 Connection.__execute 1 0.000027 2.314879 2.314879
..ion.py:1699 Connection._do_execute 1 2.314095 2.314849 2.314849
...py:2011 Session._execute_internal 1 0.000034 0.006174 0.006174
</code></pre>
<p>I have also seen that one may disable cache per connection:</p>
<pre class="lang-py prettyprint-override"><code>with engine.connect().execution_options(compiled_cache=None) as conn:
conn.execute(table.select())
</code></pre>
<p>However I'm working with ORM layer and not sure how to apply this in my case.</p>
<p>Any ideas where this delay might come from?</p>
|
<python><postgresql><sqlalchemy><asyncpg>
|
2023-02-16 20:05:50
| 2
| 458
|
JuicyKitty
|
75,477,365
| 3,647,167
|
Staged_predict from a Pipeline object
|
<p>I am having the same issue which was outlined years ago here:
<a href="https://github.com/scikit-learn/scikit-learn/issues/10197" rel="nofollow noreferrer">https://github.com/scikit-learn/scikit-learn/issues/10197</a></p>
<p>It seems to not have been resolved so I am looking for a work around. The example given there no longer works so here is one I wrote based on <a href="https://scikit-learn.org/stable/auto_examples/inspection/plot_partial_dependence.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/auto_examples/inspection/plot_partial_dependence.html</a></p>
<pre><code>from sklearn.datasets import fetch_openml
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OrdinalEncoder
from time import time
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import HistGradientBoostingRegressor
bikes = fetch_openml("Bike_Sharing_Demand", version=2, as_frame=True, parser="pandas")
# Make an explicit copy to avoid "SettingWithCopyWarning" from pandas
X, y = bikes.data.copy(), bikes.target
X["weather"].replace(to_replace="heavy_rain", value="rain", inplace=True)
mask_training = X["year"] == 0.0
X = X.drop(columns=["year"])
X_train, y_train = X[mask_training], y[mask_training]
X_test, y_test = X[~mask_training], y[~mask_training]
numerical_features = [
"temp",
"feel_temp",
"humidity",
"windspeed",
]
categorical_features = X_train.columns.drop(numerical_features)
hgbdt_preprocessor = ColumnTransformer(
transformers=[
("cat", OrdinalEncoder(), categorical_features),
("num", "passthrough", numerical_features),
],
sparse_threshold=1,
verbose_feature_names_out=False,
).set_output(transform="pandas")
hgbdt_model = make_pipeline(
hgbdt_preprocessor,
HistGradientBoostingRegressor(
categorical_features=categorical_features, random_state=0
),
)
hgbdt_model.fit(X_train, y_train)
staged_predict_train = [i for i in hgbdt_model.staged_predict(X_train)]
</code></pre>
<p>This produces <code>AttributeError: 'Pipeline' object has no attribute 'staged_predict'</code>.</p>
<p>The first thing I tried was to just pass it directly to the model in the pipeline</p>
<pre><code>staged_predict_train = [i for i in hgbdt_model['histgradientboostingregressor'].staged_predict(X_train)]
</code></pre>
<p>This fails because X_train is no longer encoded by the prior step in the pipeline.</p>
|
<python><pandas><machine-learning><scikit-learn><pipeline>
|
2023-02-16 20:05:17
| 1
| 4,950
|
Keith
|
75,477,316
| 13,636,121
|
How can I get rid of or recolour the background of ipywidgets frames in a Jupyter Notebook in VS Code?
|
<p>I have a Jupyter Notebook running in VS Code with an <code>ipywidgets</code> button. The button gets displayed in a layout frame with a white background. I'd like to get rid of that frame, or make its background transparent. I've tried putting it inside a <code>Layout</code> object such as an <code>HBox</code>, but the layout component renders inside this white container.</p>
<p>I suspect this can only be controlled in the VS Code Jupyter extension, but thought I'd ask anyway.</p>
<p>Here is a minimal example of the code I'm using to create the button in the notebook:</p>
<pre class="lang-py prettyprint-override"><code>import ipywidgets as widgets
button = widgets.Button(description='Click me')
button.on_click(lambda x: print('Clicked!'))
button
</code></pre>
<p><a href="https://i.sstatic.net/WmDO5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WmDO5.png" alt="Jupyter Notebook ipywidget frame" /></a></p>
|
<python><visual-studio-code><jupyter-notebook>
|
2023-02-16 19:59:00
| 1
| 873
|
Carmen DiMichele
|
75,477,177
| 8,907,384
|
Why can't I access a specific variable inside of a threaded class
|
<p>I'm new to Python so this could be a simple fix.</p>
<p>I am using Flask and sockets for this Python project. I am starting the socket on another thread so I can actively listen for new messages. I have an array variable called 'SocketConnections' that is within my UdpComms class. The variable gets a new 'Connection' appended to it when a new socket connection is made. That works correctly. My issue is that when I try to read 'SocketConnections' from outside the thread, it is an empty array.</p>
<p><strong>server.py</strong></p>
<pre><code>from flask import Flask, jsonify
import UdpComms as U
app = Flask(__name__)
@app.route('/api/talk', methods=['POST'])
def talk():
global global_server_socket
apples = global_server_socket.SocketConnections
return jsonify(message=apples)
global_server_socket = None
def start_server():
global global_server_socket
sock = U.UdpComms(udpIP="127.0.0.1", portTX=8000, portRX=8001, enableRX=True, suppressWarnings=True)
i = 0
global_server_socket = sock
while True:
i += 1
data = sock.ReadReceivedData() # read data
if data != None: # if NEW data has been received since last ReadReceivedData function call
print(data) # print new received data
time.sleep(1)
if __name__ == '__main__':
server_thread = threading.Thread(target=start_server)
server_thread.start()
app.run(debug=True,host='192.168.0.25')
</code></pre>
<p><strong>UdpComms.py</strong></p>
<pre><code>import json
import uuid
class UdpComms():
def __init__(self,udpIP,portTX,portRX,enableRX=False,suppressWarnings=True):
self.SocketConnections = []
import socket
self.udpIP = udpIP
self.udpSendPort = portTX
self.udpRcvPort = portRX
self.enableRX = enableRX
self.suppressWarnings = suppressWarnings # when true warnings are suppressed
self.isDataReceived = False
self.dataRX = None
# Connect via UDP
self.udpSock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) # internet protocol, udp (DGRAM) socket
self.udpSock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) # allows the address/port to be reused immediately instead of it being stuck in the TIME_WAIT state waiting for late packets to arrive.
self.udpSock.bind((udpIP, portRX))
# Create Receiving thread if required
if enableRX:
import threading
self.rxThread = threading.Thread(target=self.ReadUdpThreadFunc, daemon=True)
self.rxThread.start()
def __del__(self):
self.CloseSocket()
def CloseSocket(self):
# Function to close socket
self.udpSock.close()
def SendData(self, strToSend):
# Use this function to send string to C#
self.udpSock.sendto(bytes(strToSend,'utf-8'), (self.udpIP, self.udpSendPort))
def SendDataAddress(self, strToSend, guid):
# Use this function to send string to C#
print('finding connection: ' + guid)
if self.SocketConnections:
connection = self.GetConnectionByGUID(guid)
print('found connection: ' + guid)
if connection is not None:
self.udpSock.sendto(bytes(strToSend,'utf-8'), connection.Address)
def ReceiveData(self):
if not self.enableRX: # if RX is not enabled, raise error
raise ValueError("Attempting to receive data without enabling this setting. Ensure this is enabled from the constructor")
data = None
try:
data, _ = self.udpSock.recvfrom(1024)
print('Socket data recieved from: ', _)
if self.IsNewConnection(_) == True:
print('New socket')
self.SendDataAddress("INIT:" + self.SocketConnections[-1].GUID, self.SocketConnections[-1].GUID)
data = data.decode('utf-8')
except WindowsError as e:
if e.winerror == 10054: # An error occurs if you try to receive before connecting to other application
if not self.suppressWarnings:
print("Are You connected to the other application? Connect to it!")
else:
pass
else:
raise ValueError("Unexpected Error. Are you sure that the received data can be converted to a string")
return data
def ReadUdpThreadFunc(self): # Should be called from thread
self.isDataReceived = False # Initially nothing received
while True:
data = self.ReceiveData() # Blocks (in thread) until data is returned (OR MAYBE UNTIL SOME TIMEOUT AS WELL)
self.dataRX = data # Populate AFTER new data is received
self.isDataReceived = True
# When it reaches here, data received is available
def ReadReceivedData(self):
data = None
if self.isDataReceived: # if data has been received
self.isDataReceived = False
data = self.dataRX
self.dataRX = None # Empty receive buffer
if data != None and data.startswith('DIALOG:'): #send it info
split = data.split(':')[1]
return data
class Connection:
def __init__(self, gUID, address) -> None:
self.GUID = gUID
self.Address = address
def IsNewConnection(self, address):
for connection in self.SocketConnections:
if connection.Address == address:
return False
print('Appending new connection...')
connection = self.Connection(str(uuid.uuid4()),address)
self.SocketConnections.append(connection)
return True
def GetConnectionByGUID(self, guid):
for connection in self.SocketConnections:
if connection.GUID == guid:
return connection
return None
</code></pre>
<p>As mentioned above, when IsNewConnection() is called in UdpComms it does append a new object to SocketConnections. It's just that SocketConnections is empty when viewed from the app.route handler. My plan is to be able to send socket messages from the app.routes.</p>
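Threads in a single process do share module-level state, as this minimal sketch shows. So when a list populated in one "thread" looks empty elsewhere, the writer and reader are usually in different *processes*: Flask's `debug=True` starts a reloader that runs the server in a child process, so a thread started under `__main__` lives in a different process than the one serving requests. Running `app.run(debug=True, use_reloader=False)` is a quick way to test this hypothesis.

```python
import threading

connections = []  # module-level state, shared by all threads in one process

def listener():
    # Simulates the RX thread appending a new connection.
    connections.append("conn-1")

t = threading.Thread(target=listener)
t.start()
t.join()

# The main thread sees the append made by the worker thread.
print(connections)
```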
|
<python><multithreading><sockets><flask>
|
2023-02-16 19:43:28
| 1
| 497
|
Haley Mueller
|
75,477,157
| 6,552,836
|
Apply an equation to a dataframe using coefficients from another dataframe
|
<p>I have to 2 dataframes:</p>
<p><code>input_df</code></p>
<pre><code>Apples Pears Peaches Grapes
12 23 0 4
10 0 0 4
12 16 12 5
6 0 0 11
</code></pre>
<p><code>coefficients_df</code></p>
<pre><code>Fruit n w1 w2
Apples 2 0.4 40
Pears 1 0.1 43
Peaches 1 0.6 51
Grapes 2 0.5 11
</code></pre>
<p>I'm trying to apply an equation <code>y = w2*(1-exp(-w1*input_df^n))</code>to <code>input_df</code>. The equation takes coefficients from <code>coefficients_df</code></p>
<p>This is what I tried:</p>
<pre><code># First map coefficients_df to input_df
merged_df = input_df.merge(coefficients_df.pivot('Fruit'), on=['Apples','Pears','Peaches','Grapes'])
# Apply function to each row
output_df = merged_df.apply(lambda x: w2*(1-exp(-w1*x^n))
</code></pre>
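A merge isn't needed here: if `coefficients_df` is indexed by `Fruit`, pandas will align each coefficient Series to `input_df`'s columns automatically. A sketch, assuming the column names match the `Fruit` values exactly (columns without coefficients would come back as NaN after the `reindex`):

```python
import numpy as np
import pandas as pd

input_df = pd.DataFrame({
    "Apples": [12, 10, 12, 6],
    "Pears": [23, 0, 16, 0],
    "Peaches": [0, 0, 12, 0],
    "Grapes": [4, 4, 5, 11],
})
coefficients_df = pd.DataFrame({
    "Fruit": ["Apples", "Pears", "Peaches", "Grapes"],
    "n": [2, 1, 1, 2],
    "w1": [0.4, 0.1, 0.6, 0.5],
    "w2": [40, 43, 51, 11],
})

# Index coefficients by fruit, ordered like input_df's columns.
c = coefficients_df.set_index("Fruit").reindex(input_df.columns)
# y = w2 * (1 - exp(-w1 * x**n)), aligned column-by-column:
output_df = c["w2"] * (1 - np.exp(-c["w1"] * input_df.pow(c["n"])))
```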
|
<python><pandas><dataframe><numpy><data-wrangling>
|
2023-02-16 19:40:45
| 3
| 439
|
star_it8293
|
75,477,133
| 1,078,232
|
Is there some syntax to unpack a tuple but have an alternate value if one of the values is falsy?
|
<p>Example I might have:</p>
<pre><code>x = None
y = x or 0
</code></pre>
<p>In a tuple, am I able to somehow unpack the tuple but give some default value if a falsy value is found? Example:</p>
<pre><code>values = (1, None)
one or 0, two or 0 = values
</code></pre>
<p>I know that my solution doesn't work but I was wondering if there is some one-line syntax that does.</p>
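The target of an unpacking assignment must be plain names, but the right-hand side can be any iterable, so a generator expression gets close to a one-liner:

```python
values = (1, None)

# Apply the default to each element as the tuple is unpacked.
# Note: `v or 0` would also replace other falsy values (0, '', []),
# so the explicit None check is usually safer.
one, two = (v if v is not None else 0 for v in values)

print(one, two)  # 1 0
```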
|
<python><tuples>
|
2023-02-16 19:38:02
| 1
| 2,861
|
Pittfall
|
75,477,006
| 4,061,422
|
(Flask, APScheduler) Error: _mysql_connector.MySQLInterfaceError: Commands out of sync; you can't run this command now
|
<p>I have run into this problem multiple times.</p>
<p>I am deploying a SQL file into a remote database.</p>
<p><strong>scheduler.py</strong></p>
<pre><code>from execute_sql import execute_file # sql executor on remote db
from flask_apscheduler import APScheduler # scheduler
scheduler = APScheduler()
scheduler.init_app(app)
files = **Multiple file paths**
for file_path in files:
scheduler.add_job(func=execute_file, id=job_id, trigger='cron', coalesce=True, args=[file_path])
</code></pre>
<p><strong>execute_sql.py</strong></p>
<pre><code>from mysql.connector import errors
import mysql.connector
from remote_db import db # remote database connection
def execute_file(sql_file):
if sql_file:
try:
with open(sql_file, 'r') as sql_f:
sql = sql_f.read()
cursor = db.cursor(dictionary=True)
cursor.execute(sql, multi=True)
db.commit()
except errors.Error as e:
console.error(f'[ ERROR ]: Error while Executing {str(e)}')
console.error(f'[ ERROR ]: Rolling back ...')
db.rollback()
return False, str(e)
except Exception as e:
console.error(f'[ ERROR ]: Error while Executing {str(e)}')
console.error(f'[ ERROR ]: Rolling back ...')
db.rollback()
return False, str(e)
finally:
cursor.close()
db.close()
console.info(f'[ INFO ]: "Transaction committed....')
return True, "Transaction committed."
else:
console.error(f'[ ERROR ]: Sql File not found ...')
return False, "Sql File not found"
</code></pre>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
File "<console>", line 9, in execute_file
File "/usr/local/lib/python3.9/dist-packages/mysql/connector/connection_cext.py",
line 425, in commit
self._cmysql.commit()
_mysql_connector.MySQLInterfaceError: Commands out of sync; you can't run this command now
</code></pre>
<p><strong>sql_file.sql</strong></p>
<pre><code>DROP DATABASE IF EXISTS video_games;
CREATE DATABASE video_games;
USE video_games;
DROP TABLE IF EXISTS video_games.genre;
CREATE TABLE video_games.genre (
id INT NOT NULL AUTO_INCREMENT,
genre_name VARCHAR(50) DEFAULT NULL,
CONSTRAINT pk_genre PRIMARY KEY (id)
);
INSERT INTO video_games.genre (id, genre_name) VALUES
(1,'Action'),
(2,'Adventure'),
</code></pre>
|
<python><mysql><flask><mysql-connector>
|
2023-02-16 19:23:08
| 1
| 2,510
|
vipin
|
75,476,971
| 2,011,513
|
Introspecting Django queryset field lookups
|
<p>I'm building a querybuilder in Django for something similar to an issues dashboard, which allows users to query and save dashboards for custom subsets of the issues in our database (e.g., all issues assigned to team A, created by client X, and created or updated in the current quarter). The frontend would allow the user to interact with the issues API to generate the list of issues they're looking for, and to save the filter criteria so they can visit the page and see an updated list of issues that match the filter criteria.</p>
<p>I'm thinking of saving the filter criteria as a dictionary in a JSONField, which I'd pass to the <code>Model.objects.filter</code> and the <code>Q</code> APIs.</p>
<p>I'd like to provide the Frontend a list of all eligible Field lookups (e.g., <code>exact</code>, <code>iexact</code>, <code>contains</code>, <code>icontains</code>, <code>in</code>, <code>gt</code>, etc.). Is there a class I can introspect to programmatically get a list of these lookups?</p>
<p>I read through the <a href="https://docs.djangoproject.com/en/4.1/ref/models/querysets/#field-lookups" rel="nofollow noreferrer">Field Lookup Docs</a> tried looking through the <a href="https://github.com/django/django/tree/main/django" rel="nofollow noreferrer">Django source code</a>, but couldn't find something similar to <code>Model._meta.get_fields()</code> for introspecting the fields on a model, as listed in the <a href="https://docs.djangoproject.com/en/4.1/ref/models/meta/#retrieving-all-field-instances-of-a-model" rel="nofollow noreferrer">Model _meta API docs</a>.</p>
|
<python><django><postgresql><django-models><django-rest-framework>
|
2023-02-16 19:20:10
| 1
| 3,332
|
Ashwin Balamohan
|
75,476,866
| 2,100,039
|
Python Print List Elements with Defined Range
|
<p>I have a list that is a column of numbers in a df called "doylist" (for day-of-year list). I need to figure out how to print a user-defined range of rows in ascending order from the doylist df. For example, say I need to print from daysback=60 days before today's day of year to daysforward=19 days after it. So if today's day of year is 47, my new list would range from day of year 352 to day of year 67.</p>
<p>day_of_year =</p>
<pre><code>day_of_year = (today - datetime.datetime(today.year, 1, 1)).days + 1
</code></pre>
<p>doylist =</p>
<pre><code>doylist
Out[106]:
dyofyr
0 1
1 2
2 3
3 4
4 5
5 6
6 7
7 8
8 9
9 10
10 11
11 12
12 13
13 14
14 15
15 16
16 17
17 18
18 19
19 20
20 21
21 22
22 23
23 24
24 25
25 26
26 27
27 28
28 29
29 30
30 31
31 32
32 33
33 34
34 35
35 36
36 37
37 38
38 39
39 40
40 41
41 42
42 43
43 44
44 45
45 46
46 47
47 48
48 49
49 50
50 51
51 52
52 53
53 54
54 55
55 56
56 57
57 58
58 59
59 60
60 61
61 62
62 63
63 64
64 65
65 66
66 67
67 68
68 69
69 70
70 71
71 72
72 73
73 74
74 75
75 76
76 77
77 78
78 79
79 80
80 81
81 82
82 83
83 84
84 85
85 86
86 87
87 88
88 89
89 90
90 91
91 92
92 93
93 94
94 95
95 96
96 97
97 98
98 99
99 100
100 101
101 102
102 103
103 104
104 105
105 106
106 107
107 108
108 109
109 110
110 111
111 112
112 113
113 114
114 115
115 116
116 117
117 118
118 119
119 120
120 121
121 122
122 123
123 124
124 125
125 126
126 127
127 128
128 129
129 130
130 131
131 132
132 133
133 134
134 135
135 136
136 137
137 138
138 139
139 140
140 141
141 142
142 143
143 144
144 145
145 146
146 147
147 148
148 149
149 150
150 151
151 152
152 153
153 154
154 155
155 156
156 157
157 158
158 159
159 160
160 161
161 162
162 163
163 164
164 165
165 166
166 167
167 168
168 169
169 170
170 171
171 172
172 173
173 174
174 175
175 176
176 177
177 178
178 179
179 180
180 181
181 182
182 183
183 184
184 185
185 186
186 187
187 188
188 189
189 190
190 191
191 192
192 193
193 194
194 195
195 196
196 197
197 198
198 199
199 200
200 201
201 202
202 203
203 204
204 205
205 206
206 207
207 208
208 209
209 210
210 211
211 212
212 213
213 214
214 215
215 216
216 217
217 218
218 219
219 220
220 221
221 222
222 223
223 224
224 225
225 226
226 227
227 228
228 229
229 230
230 231
231 232
232 233
233 234
234 235
235 236
236 237
237 238
238 239
239 240
240 241
241 242
242 243
243 244
244 245
245 246
246 247
247 248
248 249
249 250
250 251
251 252
252 253
253 254
254 255
255 256
256 257
257 258
258 259
259 260
260 261
261 262
262 263
263 264
264 265
265 266
266 267
267 268
268 269
269 270
270 271
271 272
272 273
273 274
274 275
275 276
276 277
277 278
278 279
279 280
280 281
281 282
282 283
283 284
284 285
285 286
286 287
287 288
288 289
289 290
290 291
291 292
292 293
293 294
294 295
295 296
296 297
297 298
298 299
299 300
300 301
301 302
302 303
303 304
304 305
305 306
306 307
307 308
308 309
309 310
310 311
311 312
312 313
313 314
314 315
315 316
316 317
317 318
318 319
319 320
320 321
321 322
322 323
323 324
324 325
325 326
326 327
327 328
328 329
329 330
330 331
331 332
332 333
333 334
334 335
335 336
336 337
337 338
338 339
339 340
340 341
341 342
342 343
343 344
344 345
345 346
346 347
347 348
348 349
349 350
350 351
351 352
352 353
353 354
354 355
355 356
356 357
357 358
358 359
359 360
360 361
361 362
362 363
363 364
364 365
daysback = doylist.iloc[day_of_year-61] # 60 days back from today
daysforward = doylist.iloc[day_of_year+19] # 20 days forward from today
</code></pre>
<p>I need my final df or list to look like this:
final_list =</p>
<pre><code>352
353
354
355
356
357
358
359
360
361
362
363
364
365
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
</code></pre>
<p>I have tried variations of this with the df called "doylist", but I get the following error. Thank you!</p>
<pre><code>finallist = list(range(doylist.iloc[day_of_year-61],doylist.iloc[day_of_year+19]))
Traceback (most recent call last):
Cell In[113], line 1
finallist = list(range(doylist.iloc[day_of_year-61],doylist.iloc[day_of_year+19]))
TypeError: 'Series' object cannot be interpreted as an integer
</code></pre>
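Two separate problems show up here. First, `doylist.iloc[...]` on a DataFrame returns a whole row as a Series, which is why `range()` complains; a scalar would need `doylist['dyofyr'].iloc[...]`. Second, even with scalars, `range(352, 67)` is empty because the window wraps past the end of the year. Modular arithmetic sidesteps both, with no need to index the df at all. A sketch (helper name and the exact back/forward counts are illustrative, chosen to reproduce the 352→67 example):

```python
def doy_window(day, back=60, forward=20, year_len=365):
    """Day-of-year values from `back` days before `day` to `forward`
    days after it, wrapping around the end of the year (1..year_len)."""
    return [((day - back + i - 1) % year_len) + 1
            for i in range(back + forward + 1)]

window = doy_window(47)
print(window[0], window[-1], len(window))  # 352 67 81
```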
|
<python><list><range>
|
2023-02-16 19:07:12
| 3
| 1,366
|
user2100039
|
75,476,855
| 7,376,511
|
Class variable type depending on another class variable's type
|
<pre><code>class A:
var_a: bool = False
var_b: int | str # str if var_a is True, else int
a = A()
a.var_a = True
a.var_b # should be str
</code></pre>
<p>How can I type this so that mypy knows var_b should be a string if var_a is True? Is this possible? Maybe with some clever usage of <code>Literal[True]</code> or <code>Literal[False]</code>?</p>
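A single class can't make one attribute's static type depend on another attribute's runtime value. The usual workaround is a tagged union: two classes discriminated by `Literal[True]` / `Literal[False]`, which mypy can narrow in an `if` on the discriminator. A sketch (class names are illustrative):

```python
from dataclasses import dataclass
from typing import Literal, Union

@dataclass
class AStr:
    var_a: Literal[True]
    var_b: str

@dataclass
class AInt:
    var_a: Literal[False]
    var_b: int

A = Union[AStr, AInt]

def describe(a: A) -> str:
    if a.var_a:
        # mypy narrows `a` to AStr here, so var_b is str
        return a.var_b.upper()
    # ...and to AInt here, so var_b is int
    return str(a.var_b + 1)
```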
|
<python><type-hinting><mypy>
|
2023-02-16 19:06:24
| 2
| 797
|
Some Guy
|
75,476,769
| 10,260,806
|
Is there a way to write to a dataframe based on data in another data frame?
|
<p>I have 2 data frames (df_text and df_excel) and I am trying to write a script that takes the latest date (in this example it would be 2023-02-12) from the 'end' column in df_text, matches it to the date columns in df_excel, and then writes the 'status' from df_text to df_excel for the corresponding name.</p>
<h2>df_text</h2>
<pre><code> Name start end status
0 N1 2023-02-08 02:01:45 2023-02-08 08:15:01 completed
1 N2 2023-02-09 06:04:25 2023-02-09 10:35:50 completed
2 N1 2023-02-09 06:04:25 2023-02-09 10:35:50 completed
3 N1 2023-02-10 13:46:01 2023-02-10 16:35:50 completed
4 N4 2023-02-10 16:35:25 2023-02-10 19:35:50 started
5 N1 2023-02-11 16:35:25 2023-02-11 19:35:50 completed
6 N3 2023-02-11 16:35:25 2023-02-11 19:35:50 completed
7 N2 2023-02-11 16:35:25 2023-02-11 19:35:50 started
8 N4 2023-02-12 18:54:03 2023-02-12 23:53:09 completed
</code></pre>
<h2>df_excel</h2>
<pre><code>Unnamed: 0 2023-02-08 00:00:00 2023-02-09 00:00:00 ... 2023-02-12 00:00:00 2023-02-13 00:00:00 2023-02-14 00:00:00
0 N1 Completed Completed ... Waiting Waiting Waiting
1 N2 Waiting Completed ... Waiting Waiting Waiting
2 N3 Waiting Waiting ... Waiting Waiting Waiting
3 N4 Waiting Waiting ... Waiting Waiting Waiting
4 N5 Waiting Waiting ... Waiting Waiting Waiting
</code></pre>
<p>So since the latest 'end' date in df_text is 2023-02-12, the corresponding name is N4, and the status is 'completed', the cell for N4 in column '2023-02-12' in df_excel should be changed from 'Waiting' to 'completed'.</p>
<p>Is there a way to programmatically do this?</p>
<p>So far I have this script that reads the txt and Excel files, but I'm not sure of the best next step: maybe match the dates from the 'end' column in df_text with the columns in df_excel and then write the status from df_text to df_excel for the corresponding name (N1, N2, N3, etc.).</p>
<pre><code>import os
import pandas as pd
import openpyxl
import time
df_text = pd.read_csv('data/text.txt', sep='|',
skiprows=(0, 2,)).iloc[:, 1:].applymap(str.strip)
# df_text
df_excel = pd.read_excel('data/excel.xlsx', skiprows=1)
print(df_excel)
</code></pre>
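One possible next step, sketched on miniature frames mirroring the question (this assumes df_excel's date columns are datetime labels, as `read_excel` typically produces for the headers shown; if they come through as strings, compare formatted strings instead):

```python
import pandas as pd

df_text = pd.DataFrame({
    "Name": ["N1", "N4"],
    "end": ["2023-02-11 19:35:50", "2023-02-12 23:53:09"],
    "status": ["completed", "completed"],
})
df_excel = pd.DataFrame({
    "Unnamed: 0": ["N1", "N4"],
    pd.Timestamp("2023-02-11"): ["Waiting", "Waiting"],
    pd.Timestamp("2023-02-12"): ["Waiting", "Waiting"],
})

ends = pd.to_datetime(df_text["end"])
latest = df_text.loc[ends.idxmax()]     # row of df_text with the latest 'end'
date_col = ends.max().normalize()       # midnight -> matches the column label
df_excel.loc[df_excel["Unnamed: 0"] == latest["Name"], date_col] = latest["status"]
```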
|
<python><pandas><dataframe>
|
2023-02-16 18:57:39
| 1
| 982
|
RedRum
|
75,476,664
| 5,931,362
|
Force Google Colab to use R kernel for existing notebook
|
<p>I have several existing Jupyter notebooks that use R instead of python.</p>
<p>When I open these notebooks in Colab, sometimes it will automatically use the R kernel (ir), and other times it will use the Python 3 kernel (which results in all the code being broken). I can't figure out why it uses the R kernel for one notebook but not for another.</p>
<p>Is there a way to manually change the kernel to R? Or some code to include that ensures Colab will recognize the notebook as being an R notebook and not a Python notebook?</p>
<p>I know that I can start a new notebook with the R kernel using <a href="https://colab.research.google.com/#create=true&language=r" rel="nofollow noreferrer">https://colab.research.google.com/#create=true&language=r</a>. If you go to Runtime -> Change runtime type, then you can select between the R and Python 3 kernels. However, that only works for new notebooks.</p>
<p>If I open an existing notebook that doesn't automatically use the R kernel and go to Runtime -> Change runtime type, it only shows me options to change the hardware acceleration. It doesn't allow me to manually select the R kernel.</p>
<p>Any help would be greatly appreciated.</p>
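Colab picks the kernel from the notebook's own JSON metadata: notebooks created via the `?language=r` link carry a kernelspec naming `ir`, while notebooks without it fall back to Python 3. Adding (or repairing) that block in the `.ipynb` file, e.g. in a text editor, should make Colab open it with R. A sketch of the relevant fragment, based on how the R-created notebooks are tagged (exact fields may vary by notebook version):

```json
"metadata": {
  "kernelspec": {
    "name": "ir",
    "display_name": "R"
  },
  "language_info": {
    "name": "R"
  }
}
```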
|
<python><r><jupyter-notebook><google-colaboratory>
|
2023-02-16 18:46:53
| 2
| 355
|
jfrench
|
75,476,429
| 16,350,154
|
Pandas Read Continuously Growing CSV File
|
<p>I have a continuously growing CSV file that I want to read periodically. I am only interested in new values.</p>
<p>I was hoping to do something like:</p>
<pre><code>file_chunks = pd.read_csv('file.csv', chunksize=1)
while True:
do_something(next(file_chunks))
time.sleep(0.1)
</code></pre>
<p>in a frequency, that is faster than the .csv file is growing.
However, as soon as the iterator does not return a value once, it "breaks" and does not return values, even if the .csv file has grown in the meantime.</p>
<p>Is there a way to read continuously growing .csv files line by line?</p>
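The pandas chunk iterator stops for good once it reaches end-of-file, so it can't be reused after the file grows. A common alternative is to tail the file yourself and parse each complete new line (the `follow` helper below is illustrative, not a library function):

```python
import time

def follow(f, poll=0.1):
    """Yield complete lines appended to an open text file, forever."""
    buf = ""
    while True:
        chunk = f.readline()
        if not chunk:           # at EOF right now: wait for the file to grow
            time.sleep(poll)
            continue
        buf += chunk
        if buf.endswith("\n"):  # only hand back complete lines
            yield buf
            buf = ""

# Usage sketch (hypothetical file and callback):
# with open("file.csv") as f:
#     for line in follow(f):
#         do_something(line.rstrip("\n").split(","))
```

The `buf` accumulation handles the case where a writer has flushed only part of a line; the generator holds the partial line until its newline arrives.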
|
<python><pandas><csv>
|
2023-02-16 18:21:03
| 2
| 406
|
PlzBePython
|
75,476,288
| 11,015,558
|
Difference between 2 Polars dataframes
|
<p>What is the best way to find the differences between 2 Polars dataframes?
The <code>equals</code> method tells me whether there is a difference; I want to find where the difference is.</p>
<p>Example:</p>
<pre><code>import polars as pl
df1 = pl.DataFrame([
{'id': 1,'col1': ['a',None],'col2': ['x']},
{'id': 2,'col1': ['b'],'col2': ['y', None]},
{'id': 3,'col1': [None],'col2': ['z']}]
)
┌─────┬─────────────┬─────────────┐
│ id ┆ col1 ┆ col2 │
│ --- ┆ --- ┆ --- │
│ i64 ┆ list[str] ┆ list[str] │
╞═════╪═════════════╪═════════════╡
│ 1 ┆ ["a", null] ┆ ["x"] │
│ 2 ┆ ["b"] ┆ ["y", null] │
│ 3 ┆ [null] ┆ ["z"] │
└─────┴─────────────┴─────────────┘
df2 = pl.DataFrame([
{'id': 1,'col1': ['a'],'col2': ['x']},
{'id': 2,'col1': ['b', None],'col2': ['y', None]},
{'id': 3,'col1': [None],'col2': ['z']}]
)
┌─────┬─────────────┬─────────────┐
│ id ┆ col1 ┆ col2 │
│ --- ┆ --- ┆ --- │
│ i64 ┆ list[str] ┆ list[str] │
╞═════╪═════════════╪═════════════╡
│ 1 ┆ ["a"] ┆ ["x"] │
│ 2 ┆ ["b", null] ┆ ["y", null] │
│ 3 ┆ [null] ┆ ["z"] │
└─────┴─────────────┴─────────────┘
</code></pre>
<p>The difference in the example is for id = 1 and id = 2.</p>
<p>I can join the dataframes:</p>
<pre><code>df1.join(df2, on='id', suffix='_df2')
┌─────┬─────────────┬─────────────┬─────────────┬─────────────┐
│ id ┆ col1 ┆ col2 ┆ col1_df2 ┆ col2_df2 │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ list[str] ┆ list[str] ┆ list[str] ┆ list[str] │
╞═════╪═════════════╪═════════════╪═════════════╪═════════════╡
│ 1 ┆ ["a", null] ┆ ["x"] ┆ ["a"] ┆ ["x"] │
│ 2 ┆ ["b"] ┆ ["y", null] ┆ ["b", null] ┆ ["y", null] │
│ 3 ┆ [null] ┆ ["z"] ┆ [null] ┆ ["z"] │
└─────┴─────────────┴─────────────┴─────────────┴─────────────┘
</code></pre>
<p><strong>Expected result</strong></p>
<p>I would like to either:</p>
<ul>
<li>add a boolean columns that shows True in the rows with a difference</li>
<li>filter and only display rows with a difference.</li>
</ul>
<p>The example has only 2 columns, but there are more columns in the dataframe.</p>
|
<python><python-polars>
|
2023-02-16 18:05:50
| 2
| 1,994
|
Luca
|
75,476,282
| 3,542,535
|
ThreadPoolExecutor multiprocessing with while loop and breadth first search?
|
<p>I'm trying to speed up some API calls by using <code>ThreadPoolExecutor</code>. I have a class that accepts a string list of <a href="https://wolf-h3-viewer.glitch.me/" rel="nofollow noreferrer">h3 cells</a> like <code>cell1,cell2</code>. h3 uses hexagons at different resolutions to get finer detail in mapping. The class methods take the returned cells and gets information about them that is passed to an API with params. The API will return a total number of results (could be over 1000). Because the API is limited to returning at most the first 1000 results through pagination, I utilize h3 to zoom into each cell until all of its children/grandchildren/etc have a total number of results under 1000. This is effectively doing BFS from the original cells provided.</p>
<p>When running this code with the <code>run</code> method, the expectation is that the <code>search_queue</code> would be empty as all cells have been processed. However, with the way its set up currently, only the <code>origin_cells</code> provided to the class get processed and retrieving <code>search_queue</code> shows unprocessed items. Swapping the <code>while</code> and <code>ThreadPoolExecutor</code> lines does run everything as expected, but it runs at the same speed as without using <code>ThreadPoolExecutor</code>.</p>
<p>Is there a way to make the multiprocessing work as expected?</p>
<p>Edit with working example</p>
<pre class="lang-py prettyprint-override"><code>import h3
import math
import requests
from concurrent.futures import ThreadPoolExecutor
from time import sleep
dummy_results = {
    '85489e37fffffff': {'total': 1001},
    '85489e27fffffff': {'total': 999},
    '86489e347ffffff': {'total': 143},
    '86489e34fffffff': {'total': 143},
    '86489e357ffffff': {'total': 143},
    '86489e35fffffff': {'total': 143},
    '86489e367ffffff': {'total': 143},
    '86489e36fffffff': {'total': 143},
    '86489e377ffffff': {'total': 143},
}
class SearchH3Test(object):
    def __init__(self, origin_cells):
        self.search_queue = list(filter(None, origin_cells.split(',')))
        self.params_list = []

    def get_h3_radius(self, cell, buffer=False):
        """
        Get the approximate radius of the h3 cell
        """
        return math.ceil(
            math.sqrt(
                (h3.cell_area(cell))/(1.5*math.sqrt(3))
            )*1000
            + ((100*(h3.h3_get_resolution(cell)/10)) if buffer else 0)
        )

    def get_items(self, cell):
        """
        Return API items from passed params, including total number of items and a dict of items
        r = requests.get(
            url = 'https://someapi.com',
            headers = api_headers,
            params = params
        ).json()
        """
        sleep(1)
        r = dummy_results[cell]
        return r['total']

    def get_hex_params(self, cell):
        """
        Return results from the derived params of the h3 cell
        """
        lat, long = h3.h3_to_geo(cell)
        radius = self.get_h3_radius(cell, buffer=True)
        params = {
            'latitude': lat,
            'longitude': long,
            'radius': radius,
        }
        total = self.get_items(cell)
        print(total)
        return total, params

    def hex_search(self):
        """
        Checks if the popped h3 cell produces a total value over 1000.
        If over 1000, get the h3 cell children and append them to the search_queue
        If greater than 0, append params to params_list
        """
        cell = self.search_queue.pop(0)
        total, params = self.get_hex_params(cell)
        if total > 1000:
            self.search_queue.extend(list(h3.h3_to_children(cell)))
        elif total > 0:
            self.params_list.append(params)

    def get_params_list(self):
        """
        Keep looping through the search queue until no items remain.
        Use multiprocessing to speed things up
        """
        with ThreadPoolExecutor() as e:
            while self.search_queue:
                e.submit(self.hex_search)

    def run(self):
        self.get_params_list()
</code></pre>
<pre><code>h = SearchH3Test(
'85489e37fffffff,85489e27fffffff',
)
h.run()
len(h.search_queue) # returns 7 for the children that weren't processed as expected
len(h.params_list) # returns 1 for the cell under 1000
</code></pre>
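The core issue is that `while self.search_queue:` races ahead of the workers: `submit` returns immediately, a worker pops the queue before its children are appended, and the main loop can see an empty queue and exit while work is still outstanding. One hedged restructuring is to process the frontier level by level — each level is fanned out across threads, and the loop only advances once every item in the current level has been expanded. A generic sketch, not tied to the h3 API:

```python
from concurrent.futures import ThreadPoolExecutor

def bfs_pool(initial, expand):
    """Run BFS where expand(item) returns a list of children ([] for leaves).

    Each frontier is processed in parallel; the loop only moves on once
    every item in the current level has been expanded, so no work is lost
    the way it is with `while queue: pool.submit(...)`.
    """
    frontier = list(initial)
    processed = []
    with ThreadPoolExecutor() as pool:
        while frontier:
            children_lists = list(pool.map(expand, frontier))
            processed.extend(frontier)
            frontier = [child for kids in children_lists for child in kids]
    return processed

# toy expand function: count down to 0
visited = bfs_pool([3], lambda n: [n - 1] if n > 0 else [])
```

In `hex_search` terms, `expand` would call `get_hex_params`, append to `params_list` (guarded by a lock to be safe) when `0 < total <= 1000`, and return `list(h3.h3_to_children(cell))` when `total > 1000`.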
|
<python><asynchronous><multiprocessing><breadth-first-search><concurrent.futures>
|
2023-02-16 18:05:35
| 1
| 413
|
alpacafondue
|
75,476,211
| 3,082,461
|
Imputation after one hot encoding in scikit-learn
|
<p>I have a dataset where I have categorical and numerical data. I want to -</p>
<ol>
<li>Apply OneHot encoding for all categorical columns</li>
<li>Use the numerical data + one-hot encoded categorical data to do <code>Multiple Imputation</code> using IterativeImputer.</li>
<li>Integrate it to a pipeline where I have access to <code>fit</code> and <code>transform</code> methods.</li>
</ol>
<p>I can use <code>ColumnTransformer</code> to impute using only numerical columns but I want to use the categorical column too for the imputation.</p>
<p>E.g.</p>
<pre class="lang-py prettyprint-override"><code>sample_data = pd.DataFrame({
"a": [4.4, 1.0, None, 3.0, 2.7],
"b": ["HIGH", "HIGH", "LOW", "HIGH", "LOW"],
"c": [True, False, False, True, False]
})
</code></pre>
<p>I want to first encode columns <code>b,c</code> and then use them along with <code>a</code> to impute the missing value in column <code>a</code>.</p>
<p>Also, I do not have any missing values in the categorical columns.</p>
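A hedged sketch of that two-stage pipeline: a `ColumnTransformer` one-hot encodes `b` and `c` while passing `a` (and its NaN) through untouched, and an `IterativeImputer` step then sees all the resulting columns. `sparse_threshold=0.0` forces dense output, which the imputer needs; the step names are arbitrary.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.experimental import enable_iterative_imputer  # noqa: F401  (enables IterativeImputer)
from sklearn.impute import IterativeImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

sample_data = pd.DataFrame({
    "a": [4.4, 1.0, None, 3.0, 2.7],
    "b": ["HIGH", "HIGH", "LOW", "HIGH", "LOW"],
    "c": [True, False, False, True, False],
})

encode = ColumnTransformer(
    [("onehot", OneHotEncoder(handle_unknown="ignore"), ["b", "c"])],
    remainder="passthrough",   # numeric column "a" passes through untouched
    sparse_threshold=0.0,      # force a dense array for the imputer
)
pipe = Pipeline([
    ("encode", encode),                      # step 1: one-hot the categoricals
    ("impute", IterativeImputer(random_state=0)),  # step 2: impute using ALL columns
])
out = pipe.fit_transform(sample_data)        # NaN in "a" is filled in
```

`pipe` then exposes the usual `fit` / `transform` methods for use inside a larger pipeline.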
|
<python><machine-learning><data-science><data-preprocessing>
|
2023-02-16 17:57:09
| 2
| 440
|
rhn89
|
75,476,062
| 5,091,720
|
Pandas what is the row of error and enter substitution value on dtype error
|
<p>I am reading in a large tab-separated file with Pandas. I am trying to declare the dtypes prior to reading the data. I am getting errors with some data records. The file is so large that I don't know which row the bad data is in or what value (maybe null) is causing the error. The column that is causing the error is "Cert#", which is supposed to be a positive int.</p>
<p>some code:</p>
<pre><code>import pandas as pd
import numpy as np
my_cols = [ "Date", "code", "many other cols", "Cert#", "Column 19"]
my_types = { "code": str, "result#": np.float64, "Cert#": np.int64}
df = pd.read_table(file_path, usecols=my_cols, parse_dates=["Date"], dtype=my_types, sep='\t', engine='python', encoding="ISO-8859-1", on_bad_lines = 'warn')
</code></pre>
<p>The error:</p>
<blockquote>
<p>ValueError: Unable to convert column Cert# to type int64</p>
</blockquote>
<ul>
<li>So is it possible to have pandas print the error row so I can see the bad Cert# value? I tried <code>on_bad_lines = 'warn'</code>.</li>
<li>Is it possible to substitute the value that causes the error with -999? (-999 does not occur in the data because all Cert# values are positive integers.)</li>
</ul>
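One hedged way to both locate and patch the bad values is to read the column as strings first, then convert with `pd.to_numeric(errors="coerce")`: failures become NaN, which pinpoints the offending rows and lets you substitute -999. A sketch with inline sample data standing in for the real file (the bad value is invented for illustration):

```python
import io

import pandas as pd

# hypothetical tab-separated sample; the second row has a non-numeric Cert#
raw = "Date\tCert#\n2020-01-01\t101\n2020-01-02\tabc\n2020-01-03\t103\n"

# read the troublesome column as plain strings first (no dtype error possible)
df = pd.read_table(io.StringIO(raw), dtype={"Cert#": str})

converted = pd.to_numeric(df["Cert#"], errors="coerce")  # bad values become NaN
bad_rows = df[converted.isna()]                          # offending rows, for inspection

df["Cert#"] = converted.fillna(-999).astype("int64")     # substitute -999
```

Printing `bad_rows` (with its index) answers the "which row is bad" question before the substitution is applied.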
|
<python><pandas>
|
2023-02-16 17:43:03
| 1
| 2,363
|
Shane S
|
75,476,008
| 2,348,290
|
Python execute code in parent shell upon exit
|
<p>I have a search program that helps users find files on their system. I would like to have it perform tasks, such as opening the file within editor or changing the parent shell directory to the parent folder of the file exiting my python program.</p>
<p>Right now I achieve this by running a bash wrapper that executes the commands the python program writes to the stdout. I was wondering if there was a way to do this without the wrapper.</p>
<p>Note:
subprocess and os commands create a subshell and do not alter the parent shell. This is an acceptable answer for opening a file in the editor, but not for moving the current working directory of the parent shell to the desired location on exit.</p>
<p>An acceptable alternative might be to open a subshell in a desired directory</p>
<p>example</p>
<pre><code>#this opens a bash shell, but I can't send it to the right directory
subprocess.run("bash")
</code></pre>
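For the open-a-subshell-in-a-directory fallback: `subprocess.run` accepts `cwd=`, which starts the child in the chosen folder (a child process can never change the parent shell's directory, but it can start wherever you like). A non-interactive demonstration — the target directory is a placeholder:

```python
import subprocess

# The child starts with its working directory set by cwd=; for an
# interactive shell you would run: subprocess.run(["bash"], cwd=target)
target = "/tmp"  # hypothetical destination directory
result = subprocess.run(["bash", "-c", "pwd"], cwd=target,
                        capture_output=True, text=True)
shell_dir = result.stdout.strip()
```

When the interactive subshell exits, control returns to the unchanged parent shell.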
|
<python><posix>
|
2023-02-16 17:38:22
| 1
| 2,924
|
jeffpkamp
|
75,475,985
| 13,324,721
|
Does block_collapse() in sympy fail on inverse-transpose of a block matrix?
|
<p>I'm trying to perform some routine linear algebra on symbolic block matrices using <code>sympy</code>. When I call <code>block_collapse()</code> on a matrix that has been inversed and then transposed (in that order), <code>block_collapse()</code> outputs <code>1</code> instead of the correct matrix. See the last example of this reprex</p>
<pre class="lang-py prettyprint-override"><code>from sympy import (BlockMatrix, symbols, Identity, ZeroMatrix, block_collapse)
n = symbols('n')
In = Identity(n)
Zeron = ZeroMatrix(n,n)
A = BlockMatrix([
[In, Zeron],
[Zeron, In]])
At = A.transpose()
Ati = At.inverse()
Ai = A.inverse()
Ait = Ai.transpose()
block_collapse(A)
# Matrix([
# [I, 0],
# [0, I]])
block_collapse(At)
# Matrix([
# [I, 0],
# [0, I]])
block_collapse(Ati)
# Matrix([
# [I, 0],
# [0, I]])
block_collapse(Ai)
# Matrix([
# [I, 0],
# [0, I]])
block_collapse(Ait) # This one fails
# 1
</code></pre>
<p>On the other hand, manually calling <code>block_collapse()</code> on the intermediate matrix seems to resolve the problem and give the correct output.</p>
<pre class="lang-py prettyprint-override"><code>Aib = block_collapse(Ai)
Aibt = Aib.transpose()
block_collapse(Aibt)
# Matrix([
# [I, 0],
# [0, I]])
</code></pre>
<p>Have I misunderstood something about how <code>block_collapse</code> is meant to work, or is this a bug?</p>
|
<python><sympy><symbolic-math>
|
2023-02-16 17:36:12
| 0
| 710
|
David L Thiessen
|
75,475,933
| 10,687,615
|
Weighted Average Multiple Columns: TypeError: unhashable type: 'list'
|
<p>I have a table that looks like this:</p>
<pre><code>charge (0.0, 1.0) (0.0, 2.0) (0.0, 3.0)
84
116 1 10 147
226 9 842 342
343 2 278
503 10
939
</code></pre>
<p>I'm attempting to calculate the weighted average of each column, and the post below was very helpful. <a href="https://stackoverflow.com/questions/33574908/pandas-group-weighted-average-of-multiple-columns">Pandas Group Weighted Average of Multiple Columns</a>. So for example, for column <code>(0.0, 1.0)</code>, the weighted average would be 357.54.</p>
<p>However, I am doing something wrong where my code produces the following error. I've done some research on this error but am still unsure how to fix it <a href="https://stackoverflow.com/questions/509211/understanding-slicing">Understanding slicing</a>.</p>
<pre><code>**CODE**:
def weighted(x, cols, w="weights"):
    return pd.Series(np.average(x[cols], weights=x[w], axis=0), cols)

FIN_A_PIV_Table.apply(weighted, ['(0.0, 1.0)','(0.0, 2.0)','(0.0, 3.0)'])

**ERROR**:
  File Q:\VDI\Python Source\Provider_Coder_Updated_v_Spyder.py:372 in <module>
    FIN_A_PIV_Table.apply(weighted, ['(0.0, 1.0)','(0.0, 2.0)','(0.0, 3.0)'])
  File C:\Research\lib\site-packages\pandas\core\frame.py:8839 in apply
    op = frame_apply(
  File C:\Research\lib\site-packages\pandas\core\apply.py:88 in frame_apply
    axis = obj._get_axis_number(axis)
  File C:\Research\lib\site-packages\pandas\core\generic.py:550 in _get_axis_number
    return cls._AXIS_TO_AXIS_NUMBER[axis]
TypeError: unhashable type: 'list'
</code></pre>
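For reference, the `TypeError` comes from `apply`'s signature: the second positional argument of `DataFrame.apply` is `axis`, so the column list is being passed as an axis and fails the axis lookup. A hedged sketch that computes each column's weighted average directly — `apply` hands the function one column Series at a time, and the charge index supplies the values while the counts supply the weights (the table below is reconstructed from the question):

```python
import numpy as np
import pandas as pd

# reconstruction of the pivot table: charge is the index,
# each column holds counts (NaN where there are no observations)
tbl = pd.DataFrame(
    {"(0.0, 1.0)": [np.nan, 1, 9, 2, 10, np.nan],
     "(0.0, 2.0)": [np.nan, 10, 842, np.nan, np.nan, np.nan],
     "(0.0, 3.0)": [np.nan, 147, 342, 278, np.nan, np.nan]},
    index=pd.Index([84, 116, 226, 343, 503, 939], name="charge"),
)

def weighted_mean(col):
    # average of the charge index, weighted by this column's counts
    w = col.dropna()
    return np.average(w.index, weights=w)

result = tbl.apply(weighted_mean)  # column-wise by default (axis=0)
```

For column `(0.0, 1.0)` this evaluates (116·1 + 226·9 + 343·2 + 503·10) / 22 ≈ 357.5.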
|
<python><pandas>
|
2023-02-16 17:31:18
| 1
| 859
|
Raven
|
75,475,769
| 1,812,711
|
Simple list of branch for a Node Python AnyTree
|
<p>Given a simple tree in AnyTree, when I look at a node it tells me its path to the root, but I cannot find anything that will simply give me that path as a list of the node names, or as a string which I can simply turn into a list of the names.</p>
<p>Typical example</p>
<pre><code>from anytree import Node, RenderTree, AsciiStyle
f = Node("f")
b = Node("b", parent=f)
a = Node("a", parent=b)
d = Node("d", parent=b)
c = Node("c", parent=d)
e = Node("e", parent=d)
g = Node("g", parent=f)
i = Node("i", parent=g)
h = Node("h", parent=i)
print(RenderTree(f, style=AsciiStyle()).by_attr())
</code></pre>
<p>Which renders up as the tree:</p>
<pre><code>f
|-- b
| |-- a
| +-- d
| |-- c
| +-- e
+-- g
+-- i
+-- h
</code></pre>
<p>I just want the path from a leaf to the top or root. So some command that will do this:</p>
<pre><code>>>> ShowMePath(e,f) # a command like this
["e", "d", "b", "f"]
</code></pre>
<p>I'd even be happy if I could just get a string version of the node name which I could .split() to get that string.</p>
<pre><code>>>> e
Node('/f/b/d/e') < can I get this as a string to split it?
</code></pre>
<p>All the iterator method examples (e.g. PostOrderIter) seem to return parallel branches rather than just a simple path to the top. I've looked through the docs but don't see this simple give-me-the-path option. What am I missing? Isn't this something everyone needs?</p>
|
<python><tree><anytree>
|
2023-02-16 17:16:28
| 1
| 595
|
RossGK
|
75,475,737
| 7,760,910
|
unable to mock all the private methods using python unittest
|
<p>I have a core class where I am trying to read the zip file, unzip, get a specific file, and get the contents of that file. This works fine but now I am trying to mock all the things I used along the way.</p>
<pre><code>class ZipService:
    def __init__(self, path: str):
        self.path = path

    def get_manifest_json_object(self):
        s3 = boto3.resource('s3')
        bucket_name, key = self.__get_bucket_and_key()
        bucket = s3.Bucket(bucket_name)
        zip_object_reference = bucket.Object(key).get()["Body"]
        zip_object_bytes_stream = self.__get_zip_object_bytes_stream(zip_object_reference)
        zipf = zipfile.ZipFile(zip_object_bytes_stream, mode='r')
        return self.__get_manifest_json(zipf)

    def __get_bucket_and_key(self):
        pattern = "https:\/\/(.?[^\.]*)\.(.?[^\/]*)\/(.*)"  # this regex is working but don't know how :D
        result = re.split(pattern, self.path)
        return result[1], result[3]

    def __get_zip_object_bytes_stream(self, zip_object_reference):
        return io.BytesIO(zip_object_reference.read())

    def __get_manifest_json(self, zipf):
        manifest_json_text = [zipf.read(name) for name in zipf.namelist() if "/manifest.json" in name][0].decode("utf-8")
        return json.loads(manifest_json_text)
</code></pre>
<p>For this I have written a test case that throws an error:</p>
<pre><code>@patch('boto3.resource')
class TestZipService(TestCase):
    def test_zip_service(self, mock):
        s3 = boto3.resource('s3')
        bucket = s3.Bucket("abc")
        bucket.Object.get.return_value = "some-value"
        zipfile.ZipFile.return_value = "/some-path"
        inst = ZipService("/some-path")
        with mock.patch.object(inst, "_ZipService__get_manifest_json", return_value={"a": "b"}) as some_object:
            expected = {"a": "b"}
            actual = inst.get_manifest_json_object()
            self.assertIsInstance(expected, actual)
</code></pre>
<p><strong>Error</strong>:</p>
<pre><code>bucket_name, key = self.__get_bucket_and_key()
File "/Us.tox/py38/lib/python3.8/site-packages/services/zip_service.py", line 29, in __get_bucket_and_key
return result[1], result[3]
IndexError: list index out of range
</code></pre>
<p>What exactly is wrong here? Any hints would also be appreciated. TIA</p>
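One observation on the traceback: inside the test, `mock` is the MagicMock injected by `@patch('boto3.resource')`, so `mock.patch.object(...)` is just an attribute access on that MagicMock and patches nothing — the real `__get_bucket_and_key` still runs against `"/some-path"`, which the URL regex doesn't match, hence the `IndexError`. Name-mangled private methods *can* be patched through the real `unittest.mock` using their mangled names. A minimal standalone sketch (the `Service` class is invented for illustration):

```python
from unittest import mock

class Service:
    def run(self):
        return self.__helper()

    def __helper(self):            # stored on the class as _Service__helper
        return "real"

svc = Service()
# patch the mangled name with the real mock module, not an injected MagicMock
with mock.patch.object(svc, "_Service__helper", return_value="mocked"):
    patched = svc.run()            # uses the mock
unpatched = svc.run()              # real method is restored on exit
```

The same pattern applies to `_ZipService__get_bucket_and_key` etc., provided `mock` refers to `unittest.mock` rather than the decorator-injected argument.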
|
<python><python-3.x><amazon-web-services><unit-testing><amazon-s3>
|
2023-02-16 17:13:43
| 1
| 2,177
|
whatsinthename
|
75,475,706
| 3,624,000
|
Find the count of unique values in dataframe pandas ie) value occured only once in a column
|
<p>Can someone help me find the count of values that have occurred only once in a column of a pandas dataframe? I know we can use any number of functions like <code>nunique</code> or <code>unique().size</code> and so on, but they only give the count of distinct values, whereas I am trying to find values that have occurred exactly once in an entire column. Here is an example</p>
<pre><code>import pandas as pd
technologies = {
'Courses':["Spark","PySpark","Python","Pandas","Python","Spark","Pandas","AWS","Spark"],
'Fee' :[20000,25000,22000,30000,25000,20000,30000,50000,20000],
'Duration':['30days','40days','35days','50days','40days','30days','50days','90days','30days'],
'Discount':[1000,2300,1200,2000,2300,1000,2000,1500,1000]
}
df = pd.DataFrame(technologies)
</code></pre>
<p>Running <code>df.Discount.unique().size</code> will be value <code>5</code> which are <code>[1000, 2300, 1200, 2000, 1500]</code></p>
<p>But my intended output is <code>2</code> which are <code>[1200, 1500]</code></p>
<p>Thanks.</p>
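One hedged approach: `value_counts` gives per-value frequencies, and filtering that result for counts equal to 1 yields both the singleton values and how many there are.

```python
import pandas as pd

discount = pd.Series([1000, 2300, 1200, 2000, 2300, 1000, 2000, 1500, 1000])

counts = discount.value_counts()        # frequency of each distinct value
singletons = counts[counts == 1]        # values seen exactly once

n_unique_once = len(singletons)         # 2
values_once = sorted(singletons.index)  # [1200, 1500]
```

On the full dataframe this would be `df["Discount"].value_counts()` with the same filter.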
|
<python><pandas><dataframe>
|
2023-02-16 17:11:41
| 3
| 311
|
user3624000
|
75,475,692
| 3,740,839
|
Passing a number to a python script
|
<p>I am working in Linux using Ansible.
I am trying to run a script from a playbook, but I need to pass the script a few arguments: a string and 2 numbers. The first number is the number of times I need to retry running the script; the second is the retry attempt (which comes from an incrementing retry counter).</p>
<p>cat playbook.yml</p>
<pre><code>---
- name: Increment variable
  ansible.builtin.set_fact:
    attempt_number: "{{ attempt_number | d(0) | int + 1 }}"

- name: Running Script
  command: "{{ my_dir }}/script.py {{ string }} {{ max_retries }} {{ attempt_number | int }}"
</code></pre>
<p>The python script looks like this:
cat script.py</p>
<pre><code>var1=sys.argv[0]
var2=sys.argv[1]
var3=sys.argv[2]
print "String %s" % var1
print "Num1 %d" % var2
print "Num2 %d" % var3
</code></pre>
<p>First I am just trying to check If the variable are being passed to the python script but I am getting this error:</p>
<blockquote>
<p>" File "", line 6, in ", "TypeError: %d format: a
number is required, not str"]</p>
</blockquote>
<p>What am I doing wrong? How can I change this so my python script receives all 3 parameters?</p>
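Two details explain the error: `sys.argv[0]` is the script path (real arguments start at index 1), and every `argv` entry arrives as a string, so the numbers must go through `int()` before `%d` formatting. The script also uses Python 2 `print` statements; a hedged Python 3 sketch (the argument names are assumptions about their meaning):

```python
import sys

def main(argv):
    # argv[0] is the script path; real arguments start at argv[1],
    # and every entry arrives as a string, so numbers need int()
    name = argv[1]
    max_retries = int(argv[2])
    attempt = int(argv[3])
    print("String %s" % name)
    print("Num1 %d" % max_retries)
    print("Num2 %d" % attempt)
    return name, max_retries, attempt

if __name__ == "__main__":
    main(sys.argv)
```

Invoked as `script.py mystring 3 1`, this prints the string and both numbers without the `TypeError`.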
|
<python><ansible><ansible-inventory>
|
2023-02-16 17:10:50
| 3
| 887
|
fr0zt
|
75,475,640
| 288,609
|
perform math operation between scalar and pandas dataframe
|
<p>In an existing code, I used the following code to perform operation against a dataframe column.</p>
<pre><code>df.loc[:, ['test1']] = m/(df.loc[:, ['rh']]*d1)
</code></pre>
<p>Here, both <code>m</code> and <code>d1</code> are scalar. 'test1' and 'rh' are the column names.</p>
<p>Is this the right way or the best practice to perform math operation against the dataframe?</p>
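It works, but `df.loc[:, ['rh']]` (a *list* of labels) returns a one-column DataFrame, which is heavier than needed; operating on the Series directly is the more idiomatic form, since scalar operations broadcast element-wise over a Series. A small sketch with invented values for `m` and `d1`:

```python
import pandas as pd

df = pd.DataFrame({"rh": [2.0, 4.0, 5.0]})
m, d1 = 10.0, 2.0  # hypothetical scalars

# scalar / Series broadcasts element-wise; no list-of-labels indexing needed
df["test1"] = m / (df["rh"] * d1)
```

The result is the same as the `.loc`-with-list version, with simpler indexing on both sides of the assignment.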
|
<python><python-3.x><pandas><dataframe>
|
2023-02-16 17:06:13
| 1
| 13,215
|
user288609
|
75,475,466
| 11,249,098
|
Correct use of pytest fixtures of objects with Django
|
<p>I am relatively new to pytest, so I understand the simple use of fixtures that looks like that:</p>
<pre class="lang-py prettyprint-override"><code>@pytest.fixture
def example_data():
    return "abc"
</code></pre>
<p>and then using it in a way like this:</p>
<pre class="lang-py prettyprint-override"><code>def test_data(self, example_data):
    assert example_data == "abc"
</code></pre>
<p>I am working on a django app and where it gets confusing is when I try to use fixtures to create django objects that will be used for the tests.
The closest solution that I've found online looks like that:</p>
<pre class="lang-py prettyprint-override"><code>@pytest.fixture
def test_data(self):
    users = get_user_model()
    client = users.objects.get_or_create(username="test_user", password="password")
</code></pre>
<p>and then I am expecting to be able to access this user object in a test function:</p>
<pre class="lang-py prettyprint-override"><code>@pytest.mark.django_db
@pytest.mark.usefixtures("test_data")
async def test_get_users(self):
    # the user object should be included in this queryset
    all_users = await sync_to_async(User.objects.all)()
    .... (doing assertions) ...
</code></pre>
<p>The issue is that when I try to list all the users I can't find the one that was created as part of the <code>test_data</code> fixture and therefore can't use it for testing. <br>
I noticed that if I create the objects inside the function then there is no problem, but this approach won't work for me because I need to parametrize the function and depending on the input add different groups to each user.</p>
<p>I also tried some type of <code>init</code> or <code>setup</code> function for my testing class and creating the User test objects from there but this doesn't seem to be pytest's recommended way of doing things. But either way that approach didn't work either when it comes to listing them later.</p>
<p>Is there any way to create test objects which will be accessible when doing a queryset?
Is the right way to manually create separate functions and objects for each test case or is there a pytest-way of achieving that?</p>
|
<python><django><pytest><pytest-django><pytest-fixtures>
|
2023-02-16 16:49:36
| 1
| 489
|
Flora Biletsiou
|
75,475,460
| 10,200,497
|
getting the first row of two masks that meets conditions and creating a new column
|
<p>This is my dataframe:</p>
<pre><code>df = pd.DataFrame({'a': [20, 21, 333, 444, 1, 666], 'b': [20, 20, 20, 20, 20, 20], 'c': [222, 211, 2, 1, 100, 200]})
</code></pre>
<p>I want to use two masks. The first one finds the second row that <code>a</code> is greater than <code>b</code>. and creates column <code>d</code>. This mask is:</p>
<pre><code>mask = (df.a >= df.b)
df.loc[mask.cumsum().eq(2) & mask, 'd'] = 'x'
</code></pre>
<p>Now I want to add another mask. Basically what I want is to find the first row that has two conditions.</p>
<p>a) It is after the first mask (That is, it is after the second row that <code>a</code> >= <code>b</code>)</p>
<p>b) Column <code>c</code> is greater than column <code>b</code></p>
<p>My desired output is as follows:</p>
<pre><code> a b c d
0 20 20 222 NaN
1 21 20 211 NaN
2 333 20 2 NaN
3 444 20 1 NaN
4 1 20 100 x
5 666 20 200 NaN
</code></pre>
<p>I tried a couple of ways but the fact that it has to be after the first mask made it difficult for me.</p>
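A hedged sketch building the second mask on top of the first: mark the rows strictly *after* the second `a >= b` hit, AND that with `c > b`, then keep only the first such row via `cumsum().eq(1)`:

```python
import pandas as pd

df = pd.DataFrame({'a': [20, 21, 333, 444, 1, 666],
                   'b': [20, 20, 20, 20, 20, 20],
                   'c': [222, 211, 2, 1, 100, 200]})

mask = df.a >= df.b
second_hit = mask.cumsum().eq(2) & mask                       # row index 1 here

after = second_hit.cumsum().shift(fill_value=0).astype(bool)  # rows strictly after it
candidates = after & (df.c > df.b)                            # condition (b)
first_candidate = candidates & candidates.cumsum().eq(1)      # keep only the first

df.loc[first_candidate, 'd'] = 'x'
```

With this data, `candidates` is True at rows 4 and 5, and `cumsum().eq(1)` narrows it to row 4, matching the desired output.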
|
<python><pandas>
|
2023-02-16 16:48:57
| 2
| 2,679
|
AmirX
|
75,475,373
| 9,479,925
|
How to check if any of column value is updated/deleted in pandas dataframe?
|
<p>I have a pandas data frame as.</p>
<pre><code>_d = {'first_name':['Joe','Sha','Ram','Wes','David'],
'last_name':['Doe','Jhu','Krishna','County','John'],
'middle_name':['R.','M.','Q.','S.','I.']
}
df_A = pd.DataFrame(_d)
</code></pre>
<p><a href="https://i.sstatic.net/jEezF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jEezF.png" alt="enter image description here" /></a></p>
<p>Here, first I change the middle name of the person whose last name is Doe to RA., as below.</p>
<pre><code>df_A.loc[df_A['last_name']=='Doe','middle_name']='RA.'
</code></pre>
<p>Then, in the pandas dataframe df_A, an additional column is_changed should be created and filled in with the value Yes, as below.</p>
<p><a href="https://i.sstatic.net/ddNjW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ddNjW.png" alt="enter image description here" /></a></p>
<p>There are few more changes done as below</p>
<pre><code>df_A.loc[df_A['first_name']=='David','last_name']='Curey'
</code></pre>
<pre><code>df_A.loc[df_A['first_name']=='Ram','first_name']='Laxman'
</code></pre>
<p>Final expected output would be as below.</p>
<p><a href="https://i.sstatic.net/e0PqL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e0PqL.png" alt="enter image description here" /></a></p>
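A hedged sketch of one way to get there: snapshot the frame before any edits, apply the updates, then compare element-wise — `any(axis=1)` flags every row where at least one cell changed:

```python
import pandas as pd

_d = {'first_name': ['Joe', 'Sha', 'Ram', 'Wes', 'David'],
      'last_name': ['Doe', 'Jhu', 'Krishna', 'County', 'John'],
      'middle_name': ['R.', 'M.', 'Q.', 'S.', 'I.']}
df_A = pd.DataFrame(_d)
original = df_A.copy()               # snapshot before any edits

df_A.loc[df_A['last_name'] == 'Doe', 'middle_name'] = 'RA.'
df_A.loc[df_A['first_name'] == 'David', 'last_name'] = 'Curey'
df_A.loc[df_A['first_name'] == 'Ram', 'first_name'] = 'Laxman'

changed = (df_A != original).any(axis=1)        # True where any cell differs
df_A['is_changed'] = changed.map({True: 'Yes', False: ''})
```

This marks the Joe/Doe, Laxman, and David rows with Yes and leaves the untouched rows blank.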
|
<python><pandas>
|
2023-02-16 16:41:13
| 2
| 1,518
|
myamulla_ciencia
|
75,475,229
| 14,720,380
|
How can I migrate my Cython code from Shapely 1.8 to Shapely 2.0?
|
<p>I have some custom Cython code to detect if linestrings cross a prepared geometry.</p>
<p>In the migration from shapely 1.8 to 2.0, the following line segfaults:</p>
<pre><code>result[i] = <np.uint8_t> GEOSPreparedIntersects_r(geos_handle, geom1, geom2)
</code></pre>
<p>The full Cython code is:</p>
<pre class="lang-py prettyprint-override"><code>#!python
#cython: language_level=3
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

cimport cython
from libc.stdint cimport uintptr_t

import shapely.prepared
import numpy as np
cimport numpy as np

__all__ = ['two_points_intersect_geom', "DTYPE"]

np.import_array()

DTYPE = np.float32
ctypedef np.float32_t DTYPE_t

cdef extern from "geos_c.h":
    ctypedef void *GEOSContextHandle_t
    ctypedef struct GEOSGeometry
    ctypedef struct GEOSCoordSequence
    ctypedef struct GEOSPreparedGeometry
    GEOSCoordSequence *GEOSCoordSeq_create_r(GEOSContextHandle_t, unsigned int, unsigned int) nogil
    int GEOSCoordSeq_getSize_r(GEOSContextHandle_t, GEOSCoordSequence *, unsigned int *) nogil
    int GEOSCoordSeq_setX_r(GEOSContextHandle_t, GEOSCoordSequence *, int, double) nogil
    int GEOSCoordSeq_setY_r(GEOSContextHandle_t, GEOSCoordSequence *, int, double) nogil
    int GEOSCoordSeq_setZ_r(GEOSContextHandle_t, GEOSCoordSequence *, int, double) nogil
    GEOSGeometry *GEOSGeom_createLineString_r(GEOSContextHandle_t, GEOSCoordSequence *) nogil
    char GEOSPreparedIntersects_r(GEOSContextHandle_t, const GEOSPreparedGeometry *, const GEOSGeometry *) nogil
    char GEOSIntersects_r(GEOSContextHandle_t, const GEOSGeometry *, const GEOSGeometry *) nogil

cdef GEOSContextHandle_t get_geos_context_handle():
    # Note: This requires that lgeos is defined, so needs to be imported as:
    from shapely.geos import lgeos
    cdef uintptr_t handle = lgeos.geos_handle
    return <GEOSContextHandle_t> handle

cdef GEOSPreparedGeometry *geos_from_prepared(shapely_geom) except *:
    """Get the Prepared GEOS geometry pointer from the given shapely geometry."""
    cdef uintptr_t geos_geom = shapely_geom._geom
    return <GEOSPreparedGeometry *> geos_geom

@cython.boundscheck(False)
@cython.wraparound(False)
def two_points_intersect_geom(np.ndarray[DTYPE_t, ndim=3] latlon, geometry):
    """
    Example:

    import numpy as np
    import cartopy.feature as cfeature
    from shapely.ops import unary_union
    from shapely.prepared import prep

    land = prep(unary_union(list(cfeature.NaturalEarthFeature('physical', 'land', '50m').geometries())))
    latlon = np.array([
        [[0, 0], [0, 10]],
        [[0, 0], [0, -10]],
    ], dtype=float)
    two_points_intersect_geom(latlon, land)
    """
    cdef GEOSCoordSequence *coord_sequence
    cdef GEOSPreparedGeometry *geom1
    cdef GEOSGeometry *geom2
    cdef double lat, lon
    cdef int n_point_pairs = len(latlon)
    cdef int seqSize = 2
    cdef int seqDim = 2
    cdef int i, j
    cdef np.ndarray[np.uint8_t, ndim=1, cast=True] result = np.empty(n_point_pairs, dtype=np.uint8)

    if not isinstance(geometry, shapely.prepared.PreparedGeometry):
        geometry = shapely.prepared.prep(geometry)

    geos_handle = get_geos_context_handle()
    geom1 = geos_from_prepared(geometry)

    for i in range(n_point_pairs):
        coord_sequence = GEOSCoordSeq_create_r(geos_handle, seqSize, seqDim)
        for j in range(2):
            lat = latlon[i][j][0]
            lon = latlon[i][j][1]
            d = GEOSCoordSeq_setX_r(geos_handle, coord_sequence, j, lat)
            d = GEOSCoordSeq_setY_r(geos_handle, coord_sequence, j, lon)
        geom2 = GEOSGeom_createLineString_r(geos_handle, coord_sequence)
        result[i] = <np.uint8_t> GEOSPreparedIntersects_r(geos_handle, geom1, geom2)

    return result.view(dtype=np.bool_)
</code></pre>
<p>The test for this code is:</p>
<pre class="lang-py prettyprint-override"><code>geom = Polygon([[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]])
land = prep(geom)
latlon = np.array(
[
[[-0.5, 0.5], [0.5, 0.5]],
[[10, 10], [20, 20]],
],
dtype=DTYPE_CYTHON,
)
ret = two_points_intersect_geom(latlon, land)
self.assertListEqual([True, False], list(ret))
</code></pre>
<p>I am not sure why this code no longer works, because I am still using the <code>_geom</code> attribute rather than the deprecated <code>__geom__</code>.</p>
<p>I have tested that the seg fault is not coming from <code>geos_handle</code>, <code>geom2</code> or assigning to the result with <code>result[i] = <np.uint8_t></code>. So it is coming from <code>geom1</code>.</p>
<p>I believe the problem is in the line:</p>
<pre><code>cdef GEOSPreparedGeometry *geos_from_prepared(shapely_geom) except *:
"""Get the Prepared GEOS geometry pointer from the given shapely geometry."""
cdef uintptr_t geos_geom = shapely_geom._geom
return <GEOSPreparedGeometry *> geos_geom
</code></pre>
<p>However, looking through the Shapely and GEOS changes, I can't find a reason why this function now fails.</p>
|
<python><segmentation-fault><cython><shapely>
|
2023-02-16 16:29:07
| 1
| 6,623
|
Tom McLean
|
75,475,219
| 3,397,563
|
How to build an extracter spacy pipeline
|
<p>I am currently trying to extract some text from sentences with spaCy; I've done some courses on it but it is still a bit blurry to me.</p>
<p>I have the following sentence:
ZZZ LLC is a limited liability company formed in the UK.
XYZ LLC is a limited liability company formed in the UK. XYZ LLC owns a commercial property located in Germany known as ‘rentview’.
Mr X owns 21% of XYZ LLC, the remaining 79% are own by ZZZ LLC which is the sole director of XYZ LLC.</p>
<p>What I want to extract are the following:</p>
<pre><code>{"name": "XYZ LLC", "type": "ORG", "country": "UK"},
{"name": "ZZZ LLC", "type": "ORG", "country": "UK"},
{"name": "XYZ LLC", "type": "ORG",
"owns": [{"name": "rentview", "type": "commercial property", "country": "Germany"}],
"owned_by": [
{"name": "X", "type": "PERSON", "percent": 21},
{"name":"ZZZ LLC", "type": "ORG", "percent": 79}
]
}
</code></pre>
<p>My approach is first to assign an incorporation country to each company, then detect the owners of companies, then detect the ownedBy relations of companies, and finally generate the JSON object.</p>
<p>But I am already stuck on assigning a country to a company.</p>
<p>My code to do this is the following, but I am pretty sure I'm not using the right approach.</p>
<pre><code>Span.set_extension("incorporation_country", default=False)
@Language.component("assign_org_country")
def assign_org_country(doc):
    org_entities = [ent for ent in doc.ents if ent.label_ == "ORG"]
    for ent in org_entities:
        head = ent.root.head
        if head.lemma_ in ['be']:
            for child in head.children:
                if child.dep_ == "attr" and child.text == "company" and child.right_edge.ent_type_ == "GPE":
                    ent._.incorporation_country = child
                    print(f"country of {ent.text} is {ent._.incorporation_country}")
    return doc
</code></pre>
<p>Any ideas or tips of how to achieve this?</p>
|
<python><artificial-intelligence><spacy>
|
2023-02-16 16:28:22
| 1
| 1,778
|
oktapodia
|
75,475,193
| 12,596,824
|
Randomly Sample in Python with certain distirbution
|
<p>I want to create a dataframe with two columns: an id column which repeats the ids 1-100 three times, and an 'age' column where I randomly sample ages 0-14 17% of the time, ages 15-64 65% of the time, and ages 65-100 18% of the time.</p>
<p><strong>Example DF:</strong></p>
<pre><code>id age
1 21
1 21
1 21
2 45
2 45
2 45
3 64
3 64
3 64
</code></pre>
<p>Code i have so far:</p>
<pre><code>N = 100
R = 3
d = {'id': np.repeat(np.arange(1, N + 1), R)}
pd.DataFrame(d)
</code></pre>
<p>I'm stuck on how to simulate the age though.
How can I do this?</p>
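One hedged way to finish this: draw one age *band* per id with `rng.choice` and the three stated probabilities, draw a uniform integer inside that band (bounds assumed inclusive: 0-14, 15-64, 65-100), and repeat each age R times alongside the ids so the three rows per id share the same age:

```python
import numpy as np
import pandas as pd

N, R = 100, 3
rng = np.random.default_rng(0)

# pick an age band per id: 0 -> 0-14 (17%), 1 -> 15-64 (65%), 2 -> 65-100 (18%)
band = rng.choice(3, size=N, p=[0.17, 0.65, 0.18])
low = np.array([0, 15, 65])[band]
high = np.array([15, 65, 101])[band]      # exclusive upper bounds
ages = rng.integers(low, high)            # one uniform draw inside each band

df = pd.DataFrame({'id': np.repeat(np.arange(1, N + 1), R),
                   'age': np.repeat(ages, R)})
```

`rng.integers` accepts per-element bounds arrays, so no Python-level loop is needed.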
|
<python><pandas>
|
2023-02-16 16:26:21
| 3
| 1,937
|
Eisen
|