| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,276,568
| 11,154,841
|
How can I reuse logic to handle a keypress and a button click in Python's tkinter GUI?
|
<p>I have this code:</p>
<pre class="lang-py prettyprint-override"><code>from tkinter import *
import tkinter as tk

class App(tk.Frame):
    def __init__(self, master):
        def print_test(self):
            print('test')

        def button_click():
            print_test()

        super().__init__(master)
        master.geometry("250x100")
        entry = Entry()
        test = DoubleVar()
        entry["textvariable"] = test
        entry.bind('<Key-Return>', print_test)
        entry.pack()
        button = Button(root, text="Click here", command=button_click)
        button.pack()

root = tk.Tk()
myapp = App(root)
myapp.mainloop()
</code></pre>
<p>A click on the button throws:</p>
<pre class="lang-bash prettyprint-override"><code>Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\tkinter\__init__.py", line 1921, in __call__
return self.func(*args)
File "[somefilepath]", line 10, in button_click
print_test()
TypeError: App.__init__.<locals>.print_test() missing 1 required positional argument: 'self'
</code></pre>
<p>Pressing <code>Enter</code> while in the Entry widget, however, works and prints:
<code>test</code></p>
<p>See:</p>
<p><a href="https://i.sstatic.net/DZIZH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DZIZH.png" alt="enter image description here" /></a></p>
<p>Now if I drop the <code>(self)</code> from <code>def print_test(self):</code>, as <a href="https://stackoverflow.com/a/50948874/11154841">TypeError: button_click() missing 1 required positional argument: 'self'</a> shows, the button works, but pressing <code>Enter</code> in the Entry widget does not trigger the command but throws another exception:</p>
<p><a href="https://i.sstatic.net/Ksns9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ksns9.png" alt="enter image description here" /></a></p>
<pre class="lang-bash prettyprint-override"><code>Exception in Tkinter callback
Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\tkinter\__init__.py", line 1921, in __call__
return self.func(*args)
TypeError: App.__init__.<locals>.print_test() takes 0 positional arguments but 1 was given
</code></pre>
<p>How can I write the code so that both the button click event and pressing Enter will trigger the print command?</p>
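<p>A hedged sketch of one common fix (not the poster's exact code): give the callback an <code>event</code> parameter that defaults to <code>None</code>, so the same function works both as a <code>Button</code> command (called with no arguments) and as a <code>bind</code> callback (called with an <code>Event</code>). In the class-based version, the equivalent is <code>def print_test(self, event=None)</code> used as <code>self.print_test</code>.</p>

```python
def print_test(event=None):
    # `event` defaults to None so the same function serves both callers:
    # Button's command invokes it with no arguments, while bind() passes
    # a tkinter Event object.
    print('test')

def main():
    import tkinter as tk  # imported here so print_test stays usable without a display

    root = tk.Tk()
    root.geometry("250x100")
    entry = tk.Entry(root)
    entry.bind('<Key-Return>', print_test)
    entry.pack()
    tk.Button(root, text="Click here", command=print_test).pack()
    root.mainloop()

# main()  # uncomment to run the GUI; needs a display
```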
|
<python><user-interface><tkinter><tkinter-entry><tkinter-button>
|
2023-05-17 22:26:54
| 2
| 9,916
|
questionto42
|
76,276,528
| 1,850,272
|
Element found but it's not clicked and the test fails
|
<p>I'm creating an automated test using Amazon's website, and I have to navigate to the "sign up" page. The link to register an account is in a menu that only pops up when you hover over the sign-in link. When I type the following in the browser console, I get the result I need:</p>
<pre><code>$$("a.nav-a[href*='https://www.amazon.com/ap/register']")
</code></pre>
<p>So I used it in a <code>find_element()</code> call like this:</p>
<pre><code>driver.find_element(By.CSS_SELECTOR, "a.nav-a[href*='https://www.amazon.com/ap/register']").click()
</code></pre>
<p>The page, which I have set to open in incognito mode, loads; I can see the flyout menu appear as it does when you hover, but it doesn't navigate to the page, and I get an error pointing at this line of code.</p>
<p>I've also tried these</p>
<pre><code># searching from parent to child
driver.find_element(By.CSS_SELECTOR, 'div#nav-ya-flyout-ya-newCust a[href*="https://www.amazon.com/ap/register"]').click()
# searching by the text it contains
driver.find_element(By.CSS_SELECTOR, "a.nav-a[contains(text(), 'Start here.')]").click()
</code></pre>
<p>Both of these yield the same result: Selenium seems to find the element but doesn't click it. Does anybody know why this is happening?</p>
|
<python><selenium-webdriver>
|
2023-05-17 22:17:28
| 1
| 3,354
|
Optiq
|
76,276,514
| 3,380,902
|
Spark DataFrame casting string to date results in null values
|
<p>I get <code>null</code> when I attempt to cast string date in Spark DataFrame to <code>date</code> type.</p>
<pre><code># Create a list of data
data = [(1, "20230517"), (2, "20230518"), (3, "20230519"), (4, "null")]

# Create a DataFrame from the list of data
df = spark.createDataFrame(data, ("id", "date"))
df.show()
df.printSchema()

root
 |-- id: long (nullable = true)
 |-- date: string (nullable = true)

# Convert the SaleDate column to datetime format
df1 = df.withColumn("date", df.date.cast('date'))
df1.select('date').show()

+----+
|date|
+----+
|null|
|null|
|null|
|null|
+----+
</code></pre>
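<p>A likely explanation (hedged): <code>cast('date')</code> only understands ISO <code>yyyy-MM-dd</code> strings, and <code>"20230517"</code> has no separators, so every row becomes <code>null</code>. The usual Spark-side fix is <code>F.to_date("date", "yyyyMMdd")</code>. The underlying format mismatch can be reproduced with plain Python:</p>

```python
from datetime import datetime

s = "20230517"

# The Spark-side fix (hedged; requires pyspark, not run here) would be:
#   from pyspark.sql import functions as F
#   df1 = df.withColumn("date", F.to_date("date", "yyyyMMdd"))

# The same format mismatch, reproduced with plain Python:
parsed = datetime.strptime(s, "%Y%m%d").date()
print(parsed)  # 2023-05-17

try:
    datetime.strptime(s, "%Y-%m-%d")  # the shape cast('date') expects
except ValueError:
    print("ISO pattern does not match '20230517'")
```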
|
<python><apache-spark><apache-spark-dataset>
|
2023-05-17 22:12:50
| 1
| 2,022
|
kms
|
76,276,484
| 5,296,106
|
Sum of element wise or on columns triplet
|
<p>I have a NumPy array <code>A</code> of shape <code>(n, m)</code> and dtype <code>bool</code>:</p>
<pre><code>array([[ True, False, False, False],
[ True, False, False, True],
[False, True, False, False],
[False, False, False, True],
[False, False, False, True]])
</code></pre>
<p>I would like to get the result <code>R</code> of shape <code>(m, m, m)</code> of dtype <code>int</code>:</p>
<pre><code>array([[[2, 3, 2, 4],
[3, 3, 3, 5],
[2, 3, 2, 4],
[4, 5, 4, 4]],
[[3, 3, 3, 5],
[3, 1, 1, 4],
[3, 1, 1, 4],
[5, 4, 4, 4]],
[[2, 3, 2, 4],
[3, 1, 1, 4],
[2, 1, 0, 3],
[4, 4, 3, 3]],
[[4, 5, 4, 4],
[5, 4, 4, 4],
[4, 4, 3, 3],
[4, 4, 3, 3]]])
</code></pre>
<p>where <code>R[k, i, j]</code> is the sum of the element wise logical <em>or</em> on columns k, i and j. So, for example:</p>
<pre><code>R[3, 1, 2] = (np.logical_or(np.logical_or(A[:, 3], A[:, 1]), A[:, 2])).sum()
</code></pre>
<p>One way to get this result is</p>
<pre><code>R = np.array(
[[[np.logical_or(np.logical_or(A[:, i], A[:, j]), A[:, k]).sum()
for i in range(m)] for j in range(m)] for k in range(m)],
)
</code></pre>
<p>But this clearly isn't using NumPy's vectorized APIs. Is it possible to do it with NumPy, for example with broadcasting? See the related question below.</p>
<p><a href="https://stackoverflow.com/q/76276165/5296106">This</a> is a related (simpler) question.</p>
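<p>A hedged broadcasting sketch (memory is O(n·m³), so it only fits moderate <code>m</code>): OR three reshaped views of <code>A</code> and sum over the row axis.</p>

```python
import numpy as np

A = np.array([[ True, False, False, False],
              [ True, False, False,  True],
              [False,  True, False, False],
              [False, False, False,  True],
              [False, False, False,  True]])

# Three views of A shaped (n, m, 1, 1), (n, 1, m, 1) and (n, 1, 1, m)
# broadcast to (n, m, m, m); OR them and count True values down the rows.
R = (A[:, :, None, None] | A[:, None, :, None] | A[:, None, None, :]).sum(axis=0)
print(R[3, 1, 2])  # 4
```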
|
<python><numpy>
|
2023-05-17 22:05:58
| 1
| 15,494
|
Riccardo Bucco
|
76,276,258
| 4,581,822
|
lxml parse doesn't recognize Indexing
|
<p>I am trying to scrape and parse some HTML with Python and extract the Italian fiscal code 'VTLLCU86S03I348V'.</p>
<p>If I inspect the page and copy the full XPath of that object, I get no results and I do not understand why. I tried with a normal XPath like '//div/p/a', for example, and it worked. Things become messy when the XPath contains indexing like 'div[3]/div', even though that sounds weird to me as well. I have also noticed that the inspected XPath contains just one '/' at the beginning, even though I think that two are required (right?).</p>
<p>Hereunder a minimal reproducible example. What am I missing?</p>
<pre><code>from io import StringIO

from lxml import etree
import requests

# GET
page = requests.get('https://notariato.it/it/notary/luca-vitale/')

# PARSING
parser = etree.HTMLParser()
tree = etree.parse(StringIO(page.text), parser)
nome_raw = tree.xpath('//html/body/div[3]/section[2]/div/div[1]/div/section/div/div/div/div[6]/div/h4')
for i in range(len(nome_raw)):
    print(nome_raw[i].text)
</code></pre>
<p>Thank you!</p>
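<p>One caveat worth a sketch (hedged; the HTML below is a made-up stand-in, not the notariato.it page): DevTools computes "copy full XPath" against the live, JavaScript-modified DOM, which often differs from the static HTML that <code>requests</code> downloads, so positional steps like <code>div[3]</code> silently miss. A single leading <code>//</code> is valid XPath, meaning "search anywhere", and anchoring on the content is usually more robust:</p>

```python
from io import StringIO

from lxml import etree

# Hypothetical stand-in page: an Italian fiscal code is 16 characters long.
html = """
<html><body>
  <div><h4>Other heading</h4></div>
  <div><h4>VTLLCU86S03I348V</h4></div>
</body></html>
"""

parser = etree.HTMLParser()
tree = etree.parse(StringIO(html), parser)

# Match on a property of the text instead of a brittle positional path.
codes = tree.xpath("//h4[string-length(normalize-space()) = 16]")
print([c.text for c in codes])  # ['VTLLCU86S03I348V']
```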
|
<python><web-scraping><xpath><python-requests><lxml>
|
2023-05-17 21:16:14
| 1
| 7,210
|
SabDeM
|
76,276,165
| 5,296,106
|
Number of different elements for each columns pair
|
<p>I have a NumPy array <code>A</code> of shape <code>(n, m)</code> and dtype <code>bool</code>:</p>
<pre><code>array([[ True, False, False],
[ True, True, True],
[False, True, True],
[False, True, False]])
</code></pre>
<p>I would like to get the result <code>R</code> of shape <code>(m, m)</code> of dtype <code>int</code>:</p>
<pre><code>array([[0, 3, 2],
[3, 0, 1],
[2, 1, 0]])
</code></pre>
<p>where <code>R[i, j]</code> is the number of elements that are different in columns <code>i</code> and <code>j</code>. So, for example:</p>
<pre><code>R[0, 0] = (A[:, 0] != A[:, 0]).sum()
R[2, 1] = (A[:, 2] != A[:, 1]).sum()
R[0, 2] = (A[:, 0] != A[:, 2]).sum()
...
</code></pre>
<p>Is there a way to achieve this with NumPy?</p>
<p>Related question: <a href="https://stackoverflow.com/questions/76276484/sum-of-element-wise-or-on-columns-triplet">Sum of element wise or on columns triplet</a></p>
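<p>A hedged broadcasting sketch: compare a <code>(n, m, 1)</code> view against a <code>(n, 1, m)</code> view and count mismatches down the rows.</p>

```python
import numpy as np

A = np.array([[ True, False, False],
              [ True,  True,  True],
              [False,  True,  True],
              [False,  True, False]])

# (n, m, 1) != (n, 1, m) broadcasts to (n, m, m); summing over axis 0
# counts, for every column pair, the rows where the two columns disagree.
R = (A[:, :, None] != A[:, None, :]).sum(axis=0)
print(R)
```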
|
<python><numpy>
|
2023-05-17 21:01:29
| 2
| 15,494
|
Riccardo Bucco
|
76,275,890
| 1,927,834
|
Custom Library import reload in Robot Framework
|
<p>I am in a bit of a bind and looking for a solution to the following problem. I will present the code first.</p>
<pre><code>*** Settings ***
Library    Widget.py    ${FIRSTNAME}    ${LASTNAME}    WITH NAME    toy

*** Variables ***
${FIRSTNAME}    john
${LASTNAME}     doe

*** Test Cases ***
Test Case 1
    ${fullname}=    toy.get fullname
    Log To Console    ${fullname}
    Set Global Variable    ${FIRSTNAME}    jane
    ${fullname}=    toy.get fullname
    Log To Console    ${fullname}
</code></pre>
<p>In the code above, I am instantiating a library in <code>Settings</code> and creating an instance of the name <code>toy</code>.</p>
<p>After I use it, I am changing the <code>${FIRSTNAME}</code> variable value to <code>jane</code>.</p>
<p>After this line of code however, I would like that the instance <code>toy</code> refer to the new value of <code>${FIRSTNAME}</code> which is <code>jane</code>. However it still refers to the older value <code>john</code>.</p>
<p>Is there a way in Robot Framework to re-instantiate the library <code>Widget.py</code> with the new value of <code>${FIRSTNAME}</code> so that when I call <code>toy.get fullname</code> for the second time in the test case, it returns <code>jane doe</code> and NOT <code>john doe</code>?</p>
<p>OR</p>
<p>Is there a way to delete the <code>toy</code> instance and create a fresh instance in the same test case during runtime with the same name <code>toy</code> to maintain backward compatibility?</p>
|
<python><robotframework>
|
2023-05-17 20:13:06
| 0
| 361
|
utpal
|
76,275,867
| 7,895,542
|
How to clean up this directory traversal?
|
<p>I have the following directory structure that I want to traverse, calling a function on each (loaded) file if the directories and files pass certain criteria:</p>
<pre><code>self.directories[0]
│
└───maps_dir_1
│ │ file1.json
│ │ file2.txt
│ │ file3.json
│ └───subfolder1
│ │ file111.txt
│ │ file112.txt
│ │ ..
│
└───maps_dir_2
│ │ file4.json
│ │ file5.txt
│ │ file6.json
</code></pre>
<p>I have a working way, but it is super nested and ugly, and I am sure there has to be a cleaner way; I am just unsure what it is.</p>
<pre><code>for maps_dir in self.directories:
    for map_dir in os.listdir(maps_dir):
        if fails_check1(map_dir):
            continue
        for filename in os.listdir(os.path.join(maps_dir, map_dir)):
            if not filename.endswith(".json"):
                continue
            file_path = os.path.join(map_dir, filename)
            if os.path.isfile(file_path):
                with open(file_path, encoding="utf-8") as demo_json:
                    demo_data: Game = json.load(demo_json)
                match_id = os.path.splitext(filename)[0]
                if fails_check2(match_id):
                    continue
                self.do_stuff(demo_data, match_id)
</code></pre>
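<p>A hedged flattening of the same loop with <code>pathlib</code> (assuming <code>fails_check1</code>, <code>fails_check2</code> and <code>do_stuff</code> are the poster's helpers): a generator yields only the items that survive all filters, so the processing call sits at one indentation level.</p>

```python
import json
from pathlib import Path

def iter_demo_files(directories, fails_check1, fails_check2):
    """Yield (match_id, path) for every .json file passing both checks."""
    for maps_dir in directories:
        for map_dir in Path(maps_dir).iterdir():
            if not map_dir.is_dir() or fails_check1(map_dir.name):
                continue
            for file_path in map_dir.glob("*.json"):
                match_id = file_path.stem
                if not fails_check2(match_id):
                    yield match_id, file_path

# usage sketch:
# for match_id, path in iter_demo_files(self.directories, fails_check1, fails_check2):
#     self.do_stuff(json.loads(path.read_text(encoding="utf-8")), match_id)
```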
|
<python><directory>
|
2023-05-17 20:10:09
| 1
| 360
|
J.N.
|
76,275,859
| 1,329,652
|
Is there an easy way to add custom error messages in pyparsing?
|
<p>Suppose we have a grammar where there is an element that was expected but is not found. Assume that the backtracking is disabled, i.e. the element is preceded by a <code>-</code> sign.</p>
<p>Is there any way in the library to set a custom error message, such as "Foo was needed", on that element? Or is the only way to catch the parse exception and work it out using the location information?</p>
<p>Say:</p>
<pre><code>from pyparsing import *

grammar = (
    Literal("//")
    - Word(alphanums)("name")
    + Suppress(White()[1, ...])
    + Word(alphanums)("operation")
).leave_whitespace()

grammar.run_tests("""
//name op
// opnoname
""", print_results=True)
</code></pre>
<p>The output for the 2nd line is:</p>
<pre><code>// opnoname
^
ParseException: Expected W:(0-9A-Za-z), found ' ' (at char 2), (line:1, col:3)
FAIL: Expected W:(0-9A-Za-z), found ' ' (at char 2), (line:1, col:3)
</code></pre>
<p>I'd like a custom message, such as "name was needed", instead of the generic "Expected W:(0-9A-Za-z), found ' '".</p>
<p>So far it looks like catching the <code>ParseException</code>, modifying the <code>message</code> in it, and re-raising it would be a solution. Am I missing something more fundamental?</p>
<hr />
<p>For those curious: this came up when writing a <a href="https://en.wikipedia.org/wiki/Job_Control_Language" rel="nofollow noreferrer">JCL (Job Control Language)</a> parser.</p>
<p>I'm using latest pyparsing stable version at the moment: 3.0.9.</p>
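<p>A hedged sketch (assuming pyparsing 3.x): <code>set_name()</code> replaces the generated <code>W:(0-9A-Za-z)</code> text in error messages, which may be enough without catching and re-raising:</p>

```python
from pyparsing import Literal, ParseSyntaxException, Word, alphanums

# Naming the element changes the error text from
# "Expected W:(0-9A-Za-z)" to "Expected name".
name = Word(alphanums).set_name("name")
grammar = Literal("//") - name("name")

try:
    grammar.parse_string("// ")
except ParseSyntaxException as err:
    print(err)
```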
|
<python><pyparsing>
|
2023-05-17 20:08:37
| 1
| 99,011
|
Kuba hasn't forgotten Monica
|
76,275,721
| 1,471,980
|
How do you update a data dictionary in Python?
|
<p>I am going through a JSON file and extracting the "name" and "id" fields. I need to form a data dictionary called <code>my_dict</code> and store the "name" and "id" fields in it.</p>
<pre><code>report = "'name':{}, 'id':{}".format(content['name'], content['id'])
print(report)
'name': report1, 'id':1
</code></pre>
<p>I need to create a data dictionary and update the dictionary with the report values.</p>
<p>I tried this:</p>
<pre><code>my_dict.update(report)
</code></pre>
<p>I am getting this error:</p>
<pre><code>ValueError: dictionary update sequence element #0 has length 1; 2 is required
</code></pre>
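<p>The error appears because <code>report</code> is a plain string: <code>dict.update()</code> then iterates over its characters. A hedged sketch (the <code>content</code> dict here is a hypothetical stand-in for one parsed JSON record) building <code>report</code> as a dict instead:</p>

```python
content = {'name': 'report1', 'id': 1}  # hypothetical stand-in record

my_dict = {}
report = {'name': content['name'], 'id': content['id']}  # a dict, not a string
my_dict.update(report)
print(my_dict)  # {'name': 'report1', 'id': 1}
```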
|
<python><data-dictionary>
|
2023-05-17 19:49:20
| 1
| 10,714
|
user1471980
|
76,275,699
| 10,145,953
|
Replace value that does not equal specific values
|
<p>I've tried looking at other similar questions but haven't found an adequate answer for my needs.</p>
<p>I have a data frame with a column <code>answer</code> where respondents can answer <code>yes</code>, <code>no</code>, <code>maybe</code>, or some other response. The other responses are free text, and for my purposes I just need to categorize them as <code>other</code>, but I can't quite figure out how to replace a value in a pandas dataframe that does not equal a few different values (I've seen answers for replacing values that do not equal one value, but I don't want it to affect the rows with yes, no, and maybe).</p>
<p>Sample data frame is below, any help is appreciated.</p>
<pre><code>id | answer |
1 | Yes |
2 | Maybe |
3 | No |
4 | idk |
5 | wtf |
6 | wth |
</code></pre>
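<p>A hedged sketch of one way to do this: build a boolean mask with <code>isin</code> for the values to keep, invert it with <code>~</code>, and assign <code>'other'</code> to everything else.</p>

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 2, 3, 4, 5, 6],
                   'answer': ['Yes', 'Maybe', 'No', 'idk', 'wtf', 'wth']})

keep = ['Yes', 'No', 'Maybe']
# Rows whose answer is NOT one of the known values get overwritten.
df.loc[~df['answer'].isin(keep), 'answer'] = 'other'
print(df['answer'].tolist())  # ['Yes', 'Maybe', 'No', 'other', 'other', 'other']
```

<p>An equivalent non-mutating form is <code>df['answer'].where(df['answer'].isin(keep), 'other')</code>.</p>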
|
<python><pandas><dataframe>
|
2023-05-17 19:46:45
| 2
| 883
|
carousallie
|
76,275,641
| 14,094,460
|
mkdocs: how to attach a downloadable file
|
<p>I have a mkdocs project that resembles the following:</p>
<pre><code>project
├─mkdocs.yml
├─docs
│ ├─home.md
│ ├─chapter1.md
│
├─static
│ ├─file.ext
│ ├─image.png
</code></pre>
<p>I am trying to find a way to "attach" <code>file.ext</code> to the build, for instance as a link in <code>chapter1.md</code>.</p>
<p>Any suggestions how to achieve that? Detail: I want the file to be downloadable on click.</p>
|
<python><markdown><mkdocs><mkdocs-material>
|
2023-05-17 19:35:29
| 4
| 1,442
|
deponovo
|
76,275,627
| 17,896,651
|
Terminate a Subprocess of Subprocess
|
<p>I run 20 subprocesses by using:</p>
<pre><code>process1 = subprocess.Popen(command1, stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
shell=False, encoding='utf-8', cwd=full_path)
</code></pre>
<p><code>command1</code> is calling a file named: "scheduleInstasck.py" which triggers also <strong>another command</strong>:</p>
<pre><code>process2 = subprocess.Popen(command2, shell=False, stderr=subprocess.STDOUT, stdout=subprocess.PIPE, cwd='./{}'.format(client_folders[running_index]))
</code></pre>
<p><code>command2</code> is calling a file named: "quickstart.py" which is my app.</p>
<p>Once I call <code>process1.terminate()</code>, it kills only process1 while process2 is still alive, and I see all of its logs until it finishes.</p>
<p>I did try:</p>
<p><a href="https://stackoverflow.com/a/320712/4215840">https://stackoverflow.com/a/320712/4215840</a></p>
<p><a href="https://stackoverflow.com/a/31464349/4215840">https://stackoverflow.com/a/31464349/4215840</a></p>
<p>to catch the termination in scheduleInstasck and call <code>process2.terminate()</code>, but for some reason this is never invoked on Windows.</p>
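<p>A hedged POSIX sketch of the usual pattern (the poster is on Windows, where the analogue is <code>taskkill /F /T /PID &lt;pid&gt;</code> or the <code>psutil</code> package): start the subprocess in its own session/process group and signal the whole group, so grandchildren die together with the parent.</p>

```python
import os
import signal
import subprocess
import sys
import time

# The child spawns a grandchild ('sleep 60'), mimicking process1 -> process2.
code = "import subprocess, time; subprocess.Popen(['sleep', '60']); time.sleep(60)"

# start_new_session=True calls setsid(), making the child a process-group
# leader; the grandchild inherits that group.
child = subprocess.Popen([sys.executable, "-c", code], start_new_session=True)
time.sleep(1)  # give the grandchild a moment to start

# Signal the whole group: parent *and* grandchild receive SIGTERM.
os.killpg(os.getpgid(child.pid), signal.SIGTERM)
child.wait()
```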
|
<python><terminate>
|
2023-05-17 19:33:52
| 1
| 356
|
Si si
|
76,275,555
| 5,833,797
|
Programmatically measure database query complexity in Python SQLAlchemy
|
<p>Is this possible in python/sqlalchemy?</p>
<p>When I write an endpoint which retrieves a list of records, I might accidentally make my query very inefficient without realizing.</p>
<p>Is there a way to measure the complexity of database queries in a method/unit test and throw an error if too many transactions take place?</p>
<p>In my example, I am using strawberry for providing a graphql router. On more than one occasion, I've made the following mistake, which involves an additional database query being made for each <code>ParentModel</code> in the list to retrieve the <code>ChildModel</code>. To get around this, I can make the <code>ChildModel</code> be loaded eagerly in the initial query. I would like to be able to make it very obvious to myself if my method will result in a large number of database queries.</p>
<pre><code>import strawberry

@strawberry.type
class ChildGQLSchema:
    id: int

    @classmethod
    def from_model(cls, model: ChildModel):
        return cls(
            id=model.id,
        )

@strawberry.type
class ParentGQLSchema:
    id: int

    @strawberry.field
    def children(
        self, info, page: int = 1, limit: int = 20
    ) -> list[ChildGQLSchema]:
        # Unless explicitly loading the children, this will result in a
        # query to the database for each parent.
        models = (
            session.query(ChildModel)
            .filter(ChildModel.parent_id == self.id)
            .all()
        )

@strawberry.type
class Query:
    @strawberry.field
    def parent(self, info, id: int) -> ParentGQLSchema | None:
        model = session.query(ParentModel).filter(ParentModel.id == id).first()
        if not model:
            return None
        return ParentGQLSchema.from_model(model)
</code></pre>
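<p>A hedged sketch of one way to enforce this in a unit test: SQLAlchemy's <code>before_cursor_execute</code> engine event fires once per emitted statement, so a counter wrapped around the code under test can fail when an endpoint issues too many queries (the 5-query budget and the <code>endpoint</code> function below are arbitrary examples, not the poster's code).</p>

```python
from sqlalchemy import create_engine, event, text

def count_queries(engine, fn):
    """Run fn() and return how many SQL statements the engine emitted."""
    counter = {"n": 0}

    def before_cursor_execute(conn, cursor, statement, parameters, context, executemany):
        counter["n"] += 1

    event.listen(engine, "before_cursor_execute", before_cursor_execute)
    try:
        fn()
    finally:
        event.remove(engine, "before_cursor_execute", before_cursor_execute)
    return counter["n"]

engine = create_engine("sqlite://")  # in-memory stand-in for the real DB

def endpoint():  # hypothetical code under test
    with engine.connect() as conn:
        conn.execute(text("SELECT 1"))
        conn.execute(text("SELECT 2"))

n = count_queries(engine, endpoint)
assert n <= 5, f"endpoint ran {n} queries"
```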
|
<python><sqlalchemy>
|
2023-05-17 19:21:58
| 1
| 727
|
Dave Cook
|
76,275,525
| 7,102,572
|
Convert string that was originally a `timedelta` back into a `timedelta` object in Python
|
<p>I have strings which were originally produced from <code>timedelta</code> objects like so: <code>print(f'{my_delta}')</code>.</p>
<p>I have many of these statements logged (e.g. "12:21:00", "1 day, 0:53:00", "2 days, 9:28:00") and I simply want to parse these logged statements to convert back to <code>timedelta</code> objects. Is this possible with the <code>datetime</code> library?</p>
<p>The strings were literally produced from just printing timedelta objects, but I cannot seem to convert back by using <code>timedelta(my_string)</code>. Wondering if there is a standard way of doing this that I am missing.</p>
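<p>There is no inverse of <code>str(timedelta)</code> in the standard library, but the format is regular enough to parse with a small hedged helper (it handles the optional "N day(s)," prefix, negative days, and fractional seconds):</p>

```python
import re
from datetime import timedelta

def parse_timedelta(s):
    """Parse strings produced by str(timedelta(...)), e.g. '1 day, 0:53:00'."""
    m = re.fullmatch(
        r"(?:(-?\d+) days?, )?(\d+):(\d{2}):(\d{2})(?:\.(\d{1,6}))?", s
    )
    if m is None:
        raise ValueError(f"not a timedelta string: {s!r}")
    days, hours, minutes, seconds, micros = m.groups(default="0")
    return timedelta(days=int(days), hours=int(hours), minutes=int(minutes),
                     seconds=int(seconds), microseconds=int(micros.ljust(6, "0")))

print(parse_timedelta("1 day, 0:53:00"))  # 1 day, 0:53:00
```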
|
<python><datetime><serialization><timedelta><elapsedtime>
|
2023-05-17 19:17:51
| 2
| 1,048
|
efthimio
|
76,275,409
| 8,189,123
|
Adding PDF values and export values to ComboBox using PyMuPDF
|
<p>I am currently looking to set a face and export value to a PDF combobox using the good PyMuPDF module but I can't find the way. Normally, using Adobe API Javascript it would be something like this : <code>f.setItems( ["Ohio", "OH"], ["Oregon", "OR"], ["Arizona", "AZ"] );</code></p>
<p>I am wondering if it would be possible to apply something like this:</p>
<pre><code>import fitz

myPDFfile = r"C:\temp\myPDFfile.pdf"

with fitz.open(myPDFfile) as doc:
    for page in doc:
        widgets = page.widgets()
        for widget in widgets:
            if widget.field_type_string in ('ComboBox'):
                print('widget.field_name', widget.field_name, 'widget.field_value', widget.field_value)
                if widget.field_name == 'ComboBox1':
                    print('widget.field_name', widget.field_name)
                    widget.choice_values = (["Ohio", "OH"], ["Oregon", "OR"], ["Arizona", "AZ"])
                    widget.field_value = 'test'
                    widget.update()
    doc.saveIncr()
</code></pre>
<p>This code crashes my Jupyter Notebook kernel. The only way to make it run is by changing the line to
<code>widget.choice_values = ["Ohio", "Oregon", "Arizona"]</code>, but that won't set any export values on the combobox.</p>
<p>Any ideas or is something not available yet using this module?</p>
|
<python><pdf><pymupdf>
|
2023-05-17 19:00:52
| 1
| 437
|
Camilo
|
76,275,357
| 11,898,085
|
How to use imported xsensdeviceapi64.dll classes and functions in Python code?
|
<p>From Xsens Device API documentation:</p>
<blockquote>
<p>The C++ interface is not available in compiled form but is provided as part of the SDK as source code that is incorporated in the C header files. This C++ interface code implements a convenience wrapper around the C API. This means that the developer does not have to deal with memory management (i.e. easy object-lifetime management) as the class implementation takes care of this.</p>
<p>The API comprises of two C-interface libraries that are supplied for MS Windows (32 and 64 bit) and Linux. These are:
XsTypes that contains generic types (vectors, matrices, quaternions, etc.) and some basic operations on those types.Xsens Device API that contains the access to functionality as implemented in Xsens devices. The Xsens Device API library depends on the XsTypes, while XsTypes is an independent library.</p>
</blockquote>
<p>So I import <code>xsensdeviceapi64.dll</code> to my <code>Python 3.8.10</code> code:</p>
<pre><code>import os
xda = os.add_dll_directory(r'C:\path\xsensdeviceapi64.dll')
</code></pre>
<p>However, this line</p>
<pre><code>class XdaCallback(xda.XsCallback):
</code></pre>
<p>throws an error when trying to create a class based on XsCallback Xsens Device API class:</p>
<pre><code>builtins.AttributeError: '_AddedDllDirectory' object has no attribute 'XsCallback'
</code></pre>
<p>XsCallback is a clearly documented Xsens Device API class and I have a working code for Linux (Linux works with a Python wheel file, this code is for Windows) so there is no question about that. I tried also with <code>xsensdeviceapi_csharp64.dll</code> and <code>xsensdeviceapi_com64.dll</code> which come with the Software Development Kit Windows download, in addition to using an import of <code>xstypes64.dll</code> with the <code>api</code> imports, but it didn't make any difference. Also renaming the <code>...dll</code> files to <code>...pyd</code> did not help.</p>
<p>Why doesn't this work? How can I access the API classes and functions in Windows using Python 3.8.10?</p>
|
<python><dll><dllimport>
|
2023-05-17 18:52:35
| 1
| 936
|
jvkloc
|
76,275,323
| 6,327,888
|
How to get Python multiprocessing Process() names to handoff to OS utilities like ps, top, etc
|
<p>I would like to have the python generated subprocesses (multiprocessing) report their python-assigned dict "name" into the system level log such that I can use existing process tracking utilities, without having to map the PID individually to the name provided in my application.</p>
<p>Initially, when spawning a <code>multiprocessing.Process(...name="procName")</code> I can see that the ps utility does not inherit the name provided internally in Python's Process properties, but instead as a duplicate of the parent thread. (This feature limitation is reasonable from both software and OS perspective)
<a href="https://i.sstatic.net/eVgLo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eVgLo.png" alt="OS process name not being handed python's process name parameter" /></a>
(OS process name not being handed python's process name parameter)</p>
<p>I have done some work arounds to modify the value of the Process using a more direct memory modification via c library through Cython.</p>
<p>This appears to work in some manner, but appears to only appear in ps correctly when the process is defunct.
<a href="https://i.sstatic.net/DRSP9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DRSP9.png" alt="OS process spawns with parent name assignment, followed by my approach forcing the process name update with lower-level communication clib, as shown by the process name updating (same PID, not shown in snippet though" /></a>
(OS process spawns with parent name assignment, followed by my approach forcing the process name update with lower-level communication clib, as shown by the process name updating (same PID, not shown in snippet though)</p>
<p>Can I get this to update on currently-running processes (long running)? In the mean-time I have dumped the processes into a log on-demand...but now also have to cross-reference the PID's to get the Python process names.</p>
<p>EDIT:
Here's a quick-and-dirty test script just to demo the approach of modifying the name via the C library; the result "works" as shown above, but only after the child process is complete.</p>
<pre><code>from multiprocessing import current_process, Process, active_children
import time

def set_proc_name(newname):
    from ctypes import cdll, byref, create_string_buffer
    libc = cdll.LoadLibrary('libc.so.6')
    buff = create_string_buffer(len(newname) + 1)
    buff.value = newname
    s = libc.prctl(15, byref(buff), 0, 0, 0)
    return s

def get_proc_name():
    from ctypes import cdll, byref, create_string_buffer
    libc = cdll.LoadLibrary('libc.so.6')
    buff = create_string_buffer(128)
    # 16 == PR_GET_NAME from <linux/prctl.h>
    libc.prctl(16, byref(buff), 0, 0, 0)
    return buff.value
</code></pre>
<p>EDIT2:
<code>setproctitle</code> package clearly works here as suggested by @AKX - I will dive into this cython library and find out what approach they are using and update...
<a href="https://i.sstatic.net/HyrEQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HyrEQ.png" alt="enter image description here" /></a></p>
|
<python><python-3.x><unix><debian-based>
|
2023-05-17 18:47:20
| 0
| 721
|
Elysiumplain
|
76,275,157
| 1,368,479
|
PIL raises UnidentifiedImageError ("cannot identify image file") when used in a class method but not in a snippet
|
<p>I am trying to read a static image from a web request. This is my test code which works fine:</p>
<pre><code>import requests
from io import BytesIO
from PIL import Image
master_ip = '192.168.1.10'
url = 'http://' + master_ip + '/liveimage.jpg'
response = requests.get(url=url)
img = Image.open(BytesIO(response.content))
img.save('test.jpg')
</code></pre>
<p>However, when I embed this into a class like so:</p>
<pre><code>import requests
from io import BytesIO
from PIL import Image
class foo:
def get_image(self) -> None:
"""Read the last image from the web ui."""
url = 'http://' + self.master_ip + '/liveimage.jpg'
response = requests.get(url=url)
img = Image.open(BytesIO(response.content))
img.save('test1.jpg')
</code></pre>
<p>I get:</p>
<pre><code>Traceback (most recent call last):
File "/home/anon/code/test_keyence.py", line 9, in <module>
img = driver.get_image()
File "/home/anon/code/ambios-drivers/ambidrivers/keyence/sr5000.py", line 487, in get_image
img = Image.open(BytesIO(response.content))
File "/home/anon/ambi_venv/lib/python3.9/site-packages/PIL/Image.py", line 3298, in open
raise UnidentifiedImageError(msg)
PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x7fd0bd0e9630>
</code></pre>
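<p>PIL behaves identically inside and outside a class, so the usual culprit (hedged) is that the two code paths received different bytes — e.g. <code>self.master_ip</code> resolving to something that returns an HTML error page, which PIL then cannot identify. A sketch of a guard that surfaces this before <code>Image.open</code> (pair it with <code>response.raise_for_status()</code> after the request):</p>

```python
from io import BytesIO

from PIL import Image

def open_image(content: bytes, content_type: str = "") -> Image.Image:
    """Validate an HTTP payload before handing it to PIL."""
    if content_type and not content_type.startswith("image/"):
        # An HTML error or login page here is the usual cause of the
        # UnidentifiedImageError raised further down by Image.open.
        raise ValueError(f"expected an image, got {content_type!r}: {content[:80]!r}")
    return Image.open(BytesIO(content))
```

<p>In the class, that becomes <code>open_image(response.content, response.headers.get('Content-Type', ''))</code>.</p>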
|
<python><python-requests><python-imaging-library><bytesio>
|
2023-05-17 18:24:39
| 0
| 327
|
trycatch22
|
76,275,068
| 5,924,264
|
How to query from a csv repeatedly throughout the day in a simulation code
|
<p>I'm trying to write a simulation code in Python. This simulation relies on inputs from a large CSV file, with a separate CSV file for each day in the simulation. I need to make numerous queries (based on time, which is a column in the CSV file) each simulation day.</p>
<p>I'm thinking of using <code>pandas.read_csv</code> to read this in as a dataframe, store the result, and then query from this dataframe. One coding requirement is that I don't want the dataframe stored at the query site.</p>
<p>I think the easiest way to do this is with a class, e.g.,</p>
<pre><code>import pandas as pd

class DailyCSVLoader:
    def __init__(self, filepath):
        self.df = pd.read_csv(filepath)

    def query(self, time):
        # return the rows corresponding to time
        ...
</code></pre>
<p>with usage:</p>
<pre><code>import datetime
path = "/path/to/csv/file/filename.csv"
time = datetime.datetime(year=2020, month=1, day=1, hour=12, minute=2, second=0)
loader = DailyCSVLoader(path)
loader.query(time)
</code></pre>
<p>However, for my particular codebase, it might be slightly easier to do this outside of a class and with just a function and perhaps a static variable that holds the dataframe, e.g.,</p>
<pre><code>import pandas as pd

# because I don't want the calling site to store df, I keep it as a
# function attribute here
def daily_csv_loader(filepath):
    daily_csv_loader.df = pd.read_csv(filepath)

def query(time, df):
    # return rows from df corresponding to time
    ...
</code></pre>
<p>with usage</p>
<pre><code>import datetime
path = "/path/to/csv/file/filename.csv"
time = datetime.datetime(year=2020, month=1, day=1, hour=12, minute=2, second=0)
daily_csv_loader(path)
query(time, daily_csv_loader.df)
</code></pre>
<p>Are there any other approaches here, preferably a functional approach (I would prefer not to use OOP here as alluded to previously)? Is there a functional approach that can be done with a single function, perhaps with nested functions?</p>
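<p>A hedged functional sketch using <code>functools.lru_cache</code> instead of a function attribute: the cache keeps one DataFrame per file path, so the call site never holds the frame and repeated queries within a simulation day hit the cache. The <code>'time'</code> column of ISO timestamps is a hypothetical schema, not taken from the question.</p>

```python
from functools import lru_cache

import pandas as pd

@lru_cache(maxsize=4)
def _daily_frame(filepath):
    # Cached per path: repeated queries for the same day's file reuse
    # one DataFrame without the caller ever storing it.
    return pd.read_csv(filepath)

def query(filepath, time):
    df = _daily_frame(filepath)
    # hypothetical schema: a 'time' column holding ISO timestamps
    return df[df["time"] == str(time)]
```

<p>Call <code>_daily_frame.cache_clear()</code> when the simulation moves past a day to bound memory.</p>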
|
<python><pandas><dataframe><csv><design-patterns>
|
2023-05-17 18:12:31
| 1
| 2,502
|
roulette01
|
76,274,996
| 4,721,676
|
Connecting to Postgres via python using AAD
|
<p>If I am a member of an AD group that has an AD role defined and connect/select privileges on a Postgres DB, how can I use SQL Alchemy to connect to the DB with my username/pass?</p>
<p>If I just give my username/password, it tells me:</p>
<pre><code>FATAL: password authentication failed for user "<user>"
</code></pre>
<p>How can I tell it to use AAD for authentication?</p>
|
<python><postgresql><azure><sqlalchemy><azure-active-directory>
|
2023-05-17 18:03:25
| 1
| 1,031
|
David Schuler
|
76,274,950
| 15,804,190
|
Can you make a UDF in xlwings running on server?
|
<p>I'm playing around with <a href="https://docs.xlwings.org/en/stable/pro/server/server.html" rel="nofollow noreferrer">xlwings server</a> and I'm hoping to make use of user-defined functions in this context. I've been testing using it with FastAPI, but am open to others.</p>
<p>I'm able to follow the example well enough to get code to run on my test server (localhost) and make whatever changes to the workbook. That makes sense to me for something where I might have my end-user fill in lots of things and then hit a "calculate" button.</p>
<p>However, what I want to do is use a normal (at least normal-looking to the end-user) user-defined function that actually does the processing on the server. Doing all the processing code in VBA is sometimes possible but even when it is, code management is annoying and updates have to be pushed out to dozens of individual excel files where I'd prefer to just update the backend once.</p>
<p>Here's what I was working with so far:</p>
<pre class="lang-py prettyprint-override"><code>@app.post("/test")
def test(request: Request):
    headers = request.headers
    val1 = int(headers['val1'])
    val2 = int(headers['val2'])
    return {"result": val1 + val2}
</code></pre>
<p>Then in VBA I had this:</p>
<pre class="lang-vb prettyprint-override"><code>Function testfunc(ByRef myVal As Integer, ByRef val2 As Integer) As Integer
    Dim d As Variant
    Set d = CreateObject("Scripting.dictionary")
    d.Add "val1", myVal
    d.Add "val2", val2
    testfunc = runremotepython("http://127.0.0.1:8000/test", exclude:="xlwings.conf", headers:=d)
End Function
</code></pre>
<p>I then tried to go into the code for runremotepython itself (from xlwings) and set it
so that it returns the 'result' from the server response. This seems bad since I think this xlwings code gets re-written regularly, but just for testing here, it does seem to temporarily work - debugging <code>testfunc</code> shows that it does get set to the value I expect</p>
<p>Even this, though, isn't working. It seems that <code>testfunc()</code> runs a circular loop on itself? I'm getting a "one or more circular references" warning from excel when I try to run the cell normally.</p>
<p>If there is a standard way to do this, it presumably doesn't involve hacking the xlwings <code>runremotepython</code> code...</p>
|
<python><excel><vba><xlwings>
|
2023-05-17 17:55:28
| 0
| 3,163
|
scotscotmcc
|
76,274,881
| 1,711,271
|
How to change the font.size for all subplots with sns.set and rcParams
|
<p>I'm trying to change all font elements in a series of boxplots I'm plotting. My code is similar to</p>
<pre><code>from pathlib import Path
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
def plot_group(df, nrow, ncol, size, plot_title):
    sns.set(style="darkgrid")
    plt.rcParams['font.size'] = size
    fig, axes = plt.subplots(nrow, ncol, figsize=(15, 10), tight_layout=True)
    fig.suptitle(plot_title)
    n_boxplots = 2
    for i in range(nrow):
        for j in range(ncol):
            current_ax = axes[i, j]
            sns.boxplot(data=df[['foo', 'bar']], palette="Set2", ax=current_ax)
            current_ax.set_title("foo", fontsize=18)
    plt.savefig(f'font_size_{size}.png', dpi=75)
    plt.close(fig)
nrow = 3
ncol = 3
rng = np.random.default_rng(0)
arr = rng.random((30, 2))
df = pd.DataFrame(data=arr, columns=['foo', 'bar'])
for size in [14, 22]:
plot_title = f"Font size = {size}"
plot_group(df, nrow, ncol, size, plot_title)
</code></pre>
<p>However, this results in two figures where only the <code>suptitle</code> font size changes, but everything else stays the same:</p>
<p><a href="https://i.sstatic.net/qDxQU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qDxQU.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/CFKxZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CFKxZ.png" alt="enter image description here" /></a></p>
<p>Why? How can I fix this?</p>
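<p>A plausible explanation (an assumption, not a confirmed diagnosis): <code>sns.set()</code> pins rcParams such as <code>xtick.labelsize</code> and <code>axes.titlesize</code> to absolute point values, so a later change to <code>font.size</code> only affects elements that still use a <em>relative</em> size - like the suptitle. The mechanism is visible with matplotlib's <code>FontProperties</code>:</p>

```python
import matplotlib
from matplotlib.font_manager import FontProperties

# A *relative* size like 'medium' resolves against rcParams['font.size'],
# while an absolute size in points ignores it entirely:
matplotlib.rcParams['font.size'] = 22
relative = FontProperties(size='medium').get_size_in_points()
absolute = FontProperties(size=11).get_size_in_points()
print(relative, absolute)  # 22.0 11.0
```

<p>If that is the cause, passing the sizes through seaborn itself - e.g. <code>sns.set(style="darkgrid", font_scale=size / 10)</code> or <code>sns.set(style="darkgrid", rc={"font.size": size, "axes.titlesize": size, "xtick.labelsize": size, "ytick.labelsize": size})</code> - should scale every element consistently.</p>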
|
<python><matplotlib><seaborn><font-size><subplot>
|
2023-05-17 17:46:24
| 1
| 5,726
|
DeltaIV
|
76,274,808
| 1,409,674
|
Absolute imports works either in tests or in uvicorn
|
<p>I have a bit of an odd problem, namely I have this structure:</p>
<pre><code> |-tests
| |-__init__.py
| |-domain
| | |-items
| | | |-test_service.py
| |-test_app.py
|-poetry_demo
| |-__init__.py
| |-app.py
| |-domain
| | |-__init__.py
| | |-items
| | | |-service.py
| | | |-controller.py
| | | |-__init__.py
| | | |-model.py
</code></pre>
<p>And in <code>app.py</code> I do</p>
<pre><code>from fastapi import FastAPI
from poetry_demo.domain.items.controller import ItemsController
from poetry_demo.domain.items.service import ItemsService
app = FastAPI()
app.include_router(ItemsController(service=ItemsService).router, prefix="/items")
</code></pre>
<p>And when I go into <code>poetry_demo</code> dir and run uvicorn with <code>uvicorn app:app --reload</code>, I am getting an error:</p>
<pre><code> from poetry_demo.domain.items.controller import ItemsController
ModuleNotFoundError: No module named 'poetry_demo'
</code></pre>
<p>At the same time, however, when I run <code>pytest</code> from the main directory (one above <code>poetry_demo</code>), it works perfectly.</p>
<p>To have app running, I need to change <code>app.py</code> imports to:</p>
<pre><code>from domain.items.controller import ItemsController
from domain.items.service import ItemsService
</code></pre>
<p>And then it works, but... tests are breaking.</p>
<p>I tried changing import to relative (<code>.domain...</code>), but then I am getting</p>
<pre><code> from .domain.items.controller import ItemsController
ImportError: attempted relative import with no known parent package
</code></pre>
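<p>For reference, a small stdlib experiment reproducing what is likely happening (assuming standard <code>sys.path</code> behavior): launching from <em>inside</em> <code>poetry_demo</code> puts that directory itself on <code>sys.path</code>, so the package <em>name</em> is invisible; from the project root it resolves. The usual fix is to run <code>uvicorn poetry_demo.app:app --reload</code> from the root directory:</p>

```python
import importlib
import os
import sys
import tempfile

with tempfile.TemporaryDirectory() as root:
    pkg = os.path.join(root, "poetry_demo")
    os.makedirs(pkg)
    with open(os.path.join(pkg, "__init__.py"), "w") as f:
        f.write("VALUE = 42\n")

    # Like running uvicorn from inside poetry_demo/: the package directory
    # itself is on sys.path, so the name "poetry_demo" cannot be found.
    sys.path.insert(0, pkg)
    try:
        importlib.import_module("poetry_demo")
        found_from_inside = True
    except ModuleNotFoundError:
        found_from_inside = False
    finally:
        sys.path.remove(pkg)

    # Like running from the project root: the parent directory is on
    # sys.path, so the absolute import resolves.
    sys.path.insert(0, root)
    try:
        value = importlib.import_module("poetry_demo").VALUE
    finally:
        sys.path.remove(root)

print(found_from_inside, value)  # False 42
```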
<p>Any help is appreciated ;)</p>
|
<python>
|
2023-05-17 17:38:56
| 1
| 8,015
|
Tomek Buszewski
|
76,274,733
| 11,002,498
|
Running Linux Commands in Jupyter Notebook
|
<p>I have a google colab file that I want to run in Visual Studio Code. Normal cells are running ok but I have the following cell:</p>
<pre><code># 1. Downloads, extracts, and sets the permissions for the Elasticsearch installation image:
%%bash
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.9.2-linux-x86_64.tar.gz -q
tar -xzf elasticsearch-7.9.2-linux-x86_64.tar.gz
chown -R daemon:daemon elasticsearch-7.9.2
</code></pre>
<p>The problem is that I am getting <code>SyntaxError: invalid syntax</code> on the <code>wget</code> line.</p>
<p>I have tried multiple solutions from stackoverflow</p>
<ol>
<li>Use the generic <code>%%bash</code> I use in Google Colab too.</li>
<li>Use <code>%%shell</code></li>
<li>Use the longer version of 1, <code>%%script bash</code></li>
<li>Use <code>%%script</code> bash</li>
<li>Use the shortcut <code>%%!</code></li>
</ol>
<p>I used sources like <a href="https://stackoverflow.com/questions/71385453/how-to-get-jupyter-bash-to-display-output-in-real-time">this</a> and <a href="https://stackoverflow.com/questions/54604995/how-to-run-a-script-shell-in-google-colab">this</a>. <a href="https://haystack.deepset.ai/tutorials/03_scalable_qa_system" rel="nofollow noreferrer">This</a> is the tutorial I followed.</p>
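<p>One detail worth checking (an assumption based on how IPython parses cells): a cell magic like <code>%%bash</code> is only recognized when it is the <em>very first line</em> of the cell, and the <code># 1. Downloads...</code> comment above it makes the whole cell be parsed as Python, so <code>wget</code> becomes a syntax error. Alternatively, <code>subprocess</code> avoids cell magics entirely:</p>

```python
import subprocess

# %%bash must be the first line of a cell; any line above it (even a
# comment) makes IPython parse the cell as Python and `wget ...` fails.
# subprocess runs the same commands without relying on magics at all:
result = subprocess.run(["echo", "elasticsearch-7.9.2"],
                        capture_output=True, text=True, check=True)
print(result.stdout.strip())  # elasticsearch-7.9.2
```

<p>In a real notebook, the <code>["echo", ...]</code> placeholder would be replaced by the actual <code>wget</code>, <code>tar</code>, and <code>chown</code> invocations.</p>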
|
<python><bash><elasticsearch><jupyter-notebook><google-colaboratory>
|
2023-05-17 17:28:31
| 1
| 464
|
Skapis9999
|
76,274,714
| 17,896,651
|
Python: Why my Process stay alive on TK VS killed on command line (windows)
|
<p>On TK I do that:
<a href="https://i.sstatic.net/sVQpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sVQpB.png" alt="enter image description here" /></a></p>
<p><code>sys.exit(1)</code> exits the Tk app and the other running threads, but all of the processes it triggered stay alive in memory (I can still see the log files filling up).</p>
<p>To be more accurate:</p>
<p><strong>python Main.py</strong> triggers the process <strong>python schdule.py</strong>, which triggers the process <strong>python start.py</strong>.
I see logs from start.py.</p>
<p>On the other hand, when not using Tk and killing the command line window with the X button, no process is left alive - or, more precisely, they keep running until they die with an exception in a print statement or some OS-level error. Why?</p>
<p><a href="https://i.sstatic.net/k2sJo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/k2sJo.png" alt="enter image description here" /></a></p>
|
<python><process><terminate>
|
2023-05-17 17:26:07
| 0
| 356
|
Si si
|
76,274,662
| 969,423
|
Pythonic way to map optional
|
<p>In Java one can use <code>Optional.map</code> to conditionally transform nullable values.</p>
<pre><code>Optional<Integer> multiplyByTwo(Optional<Integer> i) {
    return i.map(x -> x * 2);
}
</code></pre>
<p>Is there a pythonic practice for this? This seems okay</p>
<pre><code>def multiply_by_two(i: int | None):
    return None if i is None else i * 2
</code></pre>
<p>But is less ideal when you have a scenario</p>
<pre><code>def get_object_for_dict(d: dict):
    return SomeType(i=None if d.get('i') is None else d['i'].to_decimal())
</code></pre>
<p>Where you'd like to avoid creating the extra intermediate variable</p>
<pre><code>def get_object_for_dict(d: dict):
    i = d.get('i')
    return SomeType(i=None if i is None else i.to_decimal())
</code></pre>
<p>Ideally, we could do</p>
<pre><code>def get_object_for_dict(d: dict):
    return SomeType(i=map_or_none(d.get('i'), lambda x: x.to_decimal()))
</code></pre>
<p>Or if we got really fancy, we could have a python type that overrides <code>__getattribute__</code> to always return None</p>
<pre><code>def get_object_for_dict(d: dict):
    return SomeType(i=Noneable(d.get('i')).to_decimal())
</code></pre>
<p>Or even better, have a dictionary type that supports this concept</p>
<pre><code>def get_object_for_dict(d: NoneableDict):
    return SomeType(i=d.get('i').to_decimal())
</code></pre>
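<p>For comparison, a plain helper along these lines works today; <code>map_or_none</code> is a made-up name from the question, not a stdlib function:</p>

```python
from typing import Callable, Optional, TypeVar

T = TypeVar("T")
R = TypeVar("R")

def map_or_none(value: Optional[T], fn: Callable[[T], R]) -> Optional[R]:
    """A hand-rolled equivalent of Java's Optional.map."""
    return None if value is None else fn(value)

print(map_or_none(21, lambda x: x * 2))    # 42
print(map_or_none(None, lambda x: x * 2))  # None
```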
|
<python>
|
2023-05-17 17:19:21
| 0
| 905
|
John K
|
76,274,619
| 2,823,912
|
How do type a function in python based on the return value of a method on an instance passed in?
|
<p>In TypeScript, I can define a function such that the return value is different depending on the behavior of an instance that was passed into the function. (See the example below.)</p>
<p>How can I do something similar in Python?</p>
<pre class="lang-js prettyprint-override"><code>class DocumentSuperClass {
}

class DocumentA extends DocumentSuperClass {}
class DocumentB extends DocumentSuperClass {}
// ... many more sibling classes

class SuperClass {
  doc: DocumentSuperClass
  meth() {
    return this.doc;
  }
}

class A {
  doc: DocumentA
  meth() {
    return this.doc;
  }
}

class B {
  doc: DocumentB
  meth() {
    return this.doc;
  }
}
// ... many more sibling classes

function a<T extends SuperClass>(input: T) {
  return input.meth() as ReturnType<T['meth']>;
}

const instanceOfDocumentA = a(new A());
const instanceOfDocumentB = a(new B());
// ... same behavior for many more sibling classes
</code></pre>
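<p>For reference, one way this is often approached in Python (a sketch assuming the classes can be made generic): parametrize the superclass over the document type, so a checker like mypy infers the function's return type from the argument, much like <code>ReturnType&lt;T['meth']&gt;</code>:</p>

```python
from typing import Generic, TypeVar

D = TypeVar("D")  # the document type carried by each class

class DocumentSuperClass: ...
class DocumentA(DocumentSuperClass): ...
class DocumentB(DocumentSuperClass): ...

class SuperClass(Generic[D]):
    doc: D
    def meth(self) -> D:
        return self.doc

class A(SuperClass[DocumentA]): ...
class B(SuperClass[DocumentB]): ...

def a(input: SuperClass[D]) -> D:
    # mypy infers DocumentA for a(A()) and DocumentB for a(B()).
    return input.meth()

obj = A()
obj.doc = DocumentA()
print(type(a(obj)).__name__)  # DocumentA
```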
|
<python><type-hinting>
|
2023-05-17 17:13:00
| 3
| 767
|
nbrustein
|
76,274,095
| 3,389,288
|
Generate a directed graph with k inputs in j outputs and n nodes in networkx
|
<p>I'm trying to generate a directed graph using networkx in Python with the following characteristics: specified k inputs, j outputs, and n nodes. Ideally I would also be able to specify the connectivity of the graph through an additional parameter. I've played around with the <a href="https://networkx.org/documentation/stable/reference/generators.html#module-networkx.generators.directed" rel="nofollow noreferrer">various graph generators</a> in networkx, but I can't seem to find one that fits this specific description. The closest I've found is the <code>random_k_out_graph()</code> function, but I don't think that lets you specify the number of inputs as well. Any help is appreciated!</p>
|
<python><networkx><directed-acyclic-graphs>
|
2023-05-17 16:08:00
| 1
| 1,034
|
user3389288
|
76,274,067
| 538,706
|
Pandas: calculate ratio between groups over time
|
<p>I have a dataframe that looks something like this:</p>
<pre><code> time type values
0 t1 type1 v1
1 t2 type1 v2
2 t3 type1 v3
3 t1 type2 v4
4 t2 type2 v5
5 t3 type2 v6
</code></pre>
<p>What I would like to create, is another column that is the ratio between the values of type1 and type2 calculated at matched up time steps, like this:</p>
<pre><code>  time   type values  ratio
0   t1  type1     v1  v1/v1
1   t2  type1     v2  v2/v2
2   t3  type1     v3  v3/v3
3   t1  type2     v4  v4/v1
4   t2  type2     v5  v5/v2
5   t3  type2     v6  v6/v3
</code></pre>
<p>Basically I want to calculate the ratio of type1 to type2 over time.</p>
<p>I looked at this: <a href="https://stackoverflow.com/questions/50892309/pandas-for-each-group-calculate-ratio-of-two-categories-and-append-as-a-new-col">pandas for each group calculate ratio of two categories, and append as a new column to dataframe using .pipe()</a></p>
<p>But I haven't quite been able to adapt this to my situation.</p>
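<p>For what it's worth, one common approach (a sketch on toy data with placeholder values) is to build a type1-per-time baseline and map it back onto every row by its time step:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "time": ["t1", "t2", "t3", "t1", "t2", "t3"],
    "type": ["type1"] * 3 + ["type2"] * 3,
    "values": [1.0, 2.0, 4.0, 3.0, 5.0, 8.0],
})

# Baseline: the type1 value at each time step, indexed by time.
baseline = df.loc[df["type"] == "type1"].set_index("time")["values"]

# Look up the baseline for each row's time, then divide row-wise.
df["ratio"] = df["values"] / df["time"].map(baseline)
print(df["ratio"].tolist())  # [1.0, 1.0, 1.0, 3.0, 2.5, 2.0]
```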
|
<python><pandas>
|
2023-05-17 16:03:43
| 1
| 1,231
|
Flux Capacitor
|
76,274,063
| 4,045,275
|
How to divide one pandas dataframe (pivot table) by another if columns names are different?
|
<h3>What I am trying to do</h3>
<p>I am calculating certain pivot tables with <code>pandas</code> , and I want to divide one by the other.</p>
<p>An example (with fictional data) is below, where I have data on the sales of certain items, and I want to calculate things like total sales in $, total # of items sold, unit price, etc.</p>
<p>Another example is to calculate weighted averages, where in the last step you need to divide by the weight.</p>
<p>The data has multi-indices because it is the result of slicing and dicing by various classifications (e.g. how many high-quality vs low-quality widgets you have sold in London vs New York).</p>
<h3>A minimum reproducible example</h3>
<p>In the example below, I create the dataframe <code>piv_all_sales</code> with dimensions 7x8:</p>
<ul>
<li>7 rows: 2 regions x 3 products + 1 row for the total</li>
<li>8 columns: 2 metrics (gross and net sales) x (3 types of quality (low, medium, high) + 1 column for the total)</li>
</ul>
<p><code>piv_all_sales</code> looks like this:
<a href="https://i.sstatic.net/UurD3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UurD3.png" alt="enter image description here" /></a></p>
<p><code>piv_count</code> counts how many items I have sold, and has dimensions 7x4:
<a href="https://i.sstatic.net/Jf6es.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jf6es.png" alt="enter image description here" /></a></p>
<p>I want to divide the former by the latter - but the <code>DataFrame.div()</code> method doesn't work - presumably because the two dataframes have different column names.</p>
<p>An additional complication is that <code>piv_all_sales</code> has 8 columns while <code>piv_count</code> has 4</p>
<pre><code>import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)

df = pd.DataFrame()
df['region'] = np.repeat(['USA', 'Canada'], 12)
df['product'] = np.tile(['apples', 'strawberries', 'bananas'], 8)
df['quality'] = np.repeat(['high', 'medium', 'low'], 8)
df['net sales'] = rng.integers(low=0, high=100, size=24)
df['gross sales'] = rng.integers(low=50, high=150, size=24)
df['# items sold'] = rng.integers(low=1, high=20, size=24)

piv_net_sales = pd.pivot_table(data=df,
                               values=['net sales'],
                               index=['region', 'product'],
                               columns=['quality'],
                               aggfunc='sum',
                               margins=True)

piv_all_sales = pd.pivot_table(data=df,
                               values=['net sales', 'gross sales'],
                               index=['region', 'product'],
                               columns=['quality'],
                               aggfunc='sum',
                               margins=True)

piv_count = pd.pivot_table(data=df,
                           values=['# items sold'],
                           index=['region', 'product'],
                           columns=['quality'],
                           aggfunc='sum',
                           margins=True)
</code></pre>
<h3>What I have tried</h3>
<p>I wouldn't know how to divide the (7x8) dataframe by the (7x4) one.</p>
<p>So I started by trying to divide a 7x4 by a 7x4, ie using the dataframe which has only the net sales, not the net and gross together. However, neither works:</p>
<pre><code>out1 = piv_net_sales / piv_count
out2 = piv_net_sales.div(piv_count)
</code></pre>
<p>presumably because pandas looks for, and doesn't find, columns with the same names?</p>
<p>Neither works because both produce a 7x8 dataframe of all nans</p>
<h3>Partial, inefficient solution</h3>
<p>The only thing which kind of works is converting each dataframe to a numpy array and then dividing the two arrays. This way it no longer matters that the column names are different:</p>
<pre><code># works, but neither elegant nor efficient
out3 = piv_net_sales.to_numpy() / piv_count.to_numpy()
</code></pre>
<p>However:</p>
<ul>
<li><p>it is very inelegant and tedious, because I'd have to convert the dataframes to numpy arrays and then recreate the dataframes with the right indices</p>
</li>
<li><p>I still don't know how to divide the 7x8 dataframe by the 7x4; maybe split the 7x8 into 2 (7x4) arrays, calculate each, and then combine them again?</p>
</li>
</ul>
<h3>Similar questions</h3>
<p>I have found some similar questions, e.g.</p>
<p><a href="https://stackoverflow.com/questions/54431994/how-to-divide-two-pandas-pivoted-tables">How to divide two Pandas Pivoted Tables</a></p>
<p>but I have been unable to adapt those answers to my case.</p>
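<p>One alignment-based approach (a sketch on a smaller toy pivot, not the exact frames above): selecting the metric from the divisor drops its top column level, leaving only <code>quality</code>, so each metric block divides with proper label alignment and the results can be reassembled with <code>pd.concat</code>:</p>

```python
import pandas as pd

idx = pd.Index(["USA", "Canada"], name="region")
quality = ["high", "low"]
sales = pd.DataFrame(
    [[10.0, 20.0, 30.0, 40.0], [50.0, 60.0, 70.0, 80.0]],
    index=idx,
    columns=pd.MultiIndex.from_product([["gross sales", "net sales"], quality]),
)
count = pd.DataFrame(
    [[2.0, 4.0], [5.0, 8.0]],
    index=idx,
    columns=pd.MultiIndex.from_product([["# items sold"], quality]),
)

# count["# items sold"] has single-level columns (just quality), so the
# division aligns label-by-label instead of producing all-NaN:
divisor = count["# items sold"]
per_item = pd.concat(
    {metric: sales[metric] / divisor for metric in sales.columns.levels[0]},
    axis=1,
)
print(per_item.loc["USA", ("gross sales", "high")])  # 10.0 / 2.0 -> 5.0
```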
|
<python><pandas><dataframe><pivot><pivot-table>
|
2023-05-17 16:02:37
| 1
| 9,100
|
Pythonista anonymous
|
76,273,984
| 14,088,919
|
Why are colors randomly removed from colormap?
|
<p>This is my code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt

data = {'bar_groups': ['A', 'A', 'B', 'C', 'B', 'A', 'C', 'B', 'C', 'B'],
        'color_groups': [1, 2, 3, 4, 5, 6, 7, 8, 1, 2],
        0: [0.258, 0.087, 0.168, 0.241, 0.182, 0.236, 0.231, 0.092, 0.283, 0.186],
        1: [0.212, 0.208, 0.135, 0.24, 0.256, 0.27, 0.218, 0.151, 0.162, 0.18],
        2: [0.009, 0.062, 0.031, 0.017, 0.027, 0.02, 0.043, 0.087, 0.011, 0.04],
        3: [0.006, 0.015, 0.006, 0.009, 0.009, 0.01, 0.016, 0.006, 0.004, 0.011],
        4: [0.002, 0.002, 0.002, 0.002, 0.007, 0.005, 0.18, 0.002, 0.025, 0.004],
        5: [0.268, 0.269, 0.262, 0.278, 0.278, 0.269, 0.19, 0.229, 0.395, 0.234],
        6: [0.004, 0.017, 0.008, 0.009, 0.018, 0.002, 0.005, 0.012, 0.002, 0.04],
        7: [0.242, 0.338, 0.387, 0.204, 0.222, 0.188, 0.117, 0.422, 0.117, 0.306],
        8: [0.006, 0.015, 0.006, 0.009, 0.009, 0.01, 0.016, 0.006, 0.004, 0.011]}
df = pd.DataFrame(data)

# Plot code
fig, ax = plt.subplots(figsize=(10, 7))

# bars on the left
df_target = df.groupby(["bar_groups", "color_groups"]).size()
df_target = (df_target / df_target.groupby("bar_groups").transform(sum)).unstack(level='color_groups')
df_target.plot(kind='bar', stacked=True, position=1, colormap='Set3', width=0.4, ax=ax, edgecolor='grey')

# bars on the right
df_prob = df[['bar_groups', 0, 1, 2, 3, 4, 5, 6, 7, 8]].groupby('bar_groups').mean()
df_prob.plot.bar(stacked=True, ax=ax, position=0, cmap='Set3', width=0.4, legend=False, alpha=0.6, edgecolor='grey')

ax.set_xlim(-0.75, 9)
ax.set_ylabel('Actual (left) & Predicted (right) bars')
plt.title('Grouped bar chart')
</code></pre>
<p>For external reasons I had to remove the last column (<code>8</code>) and now the colors completely changed - matplotlib seems to be removing random colors in the sequence (pink, purple).</p>
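<p>If it helps, pandas appears to choose bar colors by sampling the colormap evenly over the number of columns (an assumption about its internals, worth verifying for your version), so dropping one column re-spaces every sampling position and several colors can shift at once, not just the last one:</p>

```python
import matplotlib
import numpy as np

cmap = matplotlib.colormaps["Set3"]

# Sampling positions for 9 vs 8 stacked series: every position moves,
# so mid-sequence colors (like pink and purple) can change or vanish.
nine = [cmap(x) for x in np.linspace(0, 1, 9)]
eight = [cmap(x) for x in np.linspace(0, 1, 8)]
print(nine[3] == eight[3])
```

<p>Passing an explicit, fixed color list (e.g. <code>color=[cmap(i / 12) for i in range(9)]</code>) instead of <code>colormap=</code> keeps the color-to-group assignment stable when columns are added or removed.</p>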
|
<python><pandas><matplotlib><colormap>
|
2023-05-17 15:52:42
| 1
| 612
|
amestrian
|
76,273,930
| 3,541,631
|
Using a file as lock to not run a script(or related scripts) multiple times at the same time fails when a process pool is used
|
<p>I want to use a locking mechanism so that two scripts (either the same script or another, related script) are not started at the same time. I'm using a file as the lock.</p>
<pre><code>class LockFile:
    def __init__(self, lock_path):
        self.lock_path = lock_path

    def lock(self):
        if os.path.isfile(self.lock_path):
            raise ImporterLockError
        else:
            os.makedirs(os.path.dirname(self.lock_path), exist_ok=True)
            with open(self.lock_path, "w") as f:
                f.write("...")  # some info about the script

    def unlock(self):
        with contextlib.suppress(FileNotFoundError):
            os.remove(self.lock_path)
</code></pre>
<p>I'm using the lock inside a function <code>run_wrapper</code> that is used as a wrapper for other functions:</p>
<pre><code>def run_wrapper(fn_to_run):
    .......
    lock = LockFile(lock_path=lock_path)
    ................
    try:
        lock.lock()
    except ImporterLockError:
        return 1
    else:
        log.debug("Importer lock acquired.")

    try:
        fn_to_run()
    except SystemExit:
        raise
    except KeyboardInterrupt:
        log.info("Terminated by user request.")
        return 1
    except ImporterLockError:
        return 1
    except Exception as e:
        print(f"Unexpected error: ....")
        return 1
    finally:
        # do some cleaning
        lock.unlock()
        print("Importer lock released.")
</code></pre>
<p>I use <code>run_wrapper</code> to run a function <code>run</code> that uses a multiprocessing pool:</p>
<pre><code>run_wrapper(run)
</code></pre>
<pre><code>def run():
    ...................
    processes = set()
    with multiprocessing.Pool(2) as pool:
        processes.add(pool.apply_async(task_1))
        processes.add(pool.apply_async(task_2))
        [p.get() for p in processes]
</code></pre>
<p>The issue is that when an exception is raised in a child process, the locking mechanism does not work: the <code>finally:</code> block of the <code>try - except</code> does not execute, so <code>lock.unlock</code> never runs and the lock file is not deleted, even after the script exits.
Running a script again then fails, because the lock file was never deleted and no cleanup operations were done.</p>
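<p>Separately from the multiprocessing question, the <code>isfile()</code>-then-<code>open()</code> sequence has a race window, and cleanup on exceptions is easier to guarantee with a context manager. A minimal sketch (note that <em>nothing</em> can run <code>finally:</code> blocks if a process is killed hard, e.g. with SIGKILL or by closing the console):</p>

```python
import contextlib
import os
import tempfile

@contextlib.contextmanager
def file_lock(lock_path):
    # O_CREAT | O_EXCL makes create-if-absent atomic: two processes can
    # never both succeed, unlike a separate isfile() check plus open().
    fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    os.close(fd)
    try:
        yield
    finally:
        with contextlib.suppress(FileNotFoundError):
            os.remove(lock_path)

path = os.path.join(tempfile.mkdtemp(), "importer.lock")
try:
    with file_lock(path):
        held = os.path.exists(path)
        raise RuntimeError("boom")  # even an exception releases the lock
except RuntimeError:
    pass
print(held, os.path.exists(path))  # True False
```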
|
<python><python-3.x>
|
2023-05-17 15:46:25
| 1
| 4,028
|
user3541631
|
76,273,845
| 19,325,656
|
Improving queries by select related with many related models
|
<p>I'm writing an app with most of the data spread across different tables. I've recently installed Django Debug Toolbar and noticed that I have some duplicate and similar queries. Because of this, I'm somewhat skeptical about my queries and how I build them. They work, but are they efficient?</p>
<p><strong>models</strong></p>
<pre class="lang-py prettyprint-override"><code>class Location(models.Model):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False, db_index=True)
    city = models.CharField(max_length=40, blank=True)
    placeId = models.CharField(max_length=100)
    point = models.PointField(srid=4326, blank=True, null=True)


class User(AbstractBaseUser, PermissionsMixin):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False, db_index=True)
    email = models.EmailField(_('email address'), unique=True)
    first_name = models.CharField(max_length=150, blank=True)
    current_location = models.ForeignKey(Location, on_delete=models.SET_NULL, related_name="location", null=True)
    search_range = models.PositiveSmallIntegerField(default=10, blank=True, null=True)
    objects = CustomAccountManager()


class Profile(models.Model):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False, db_index=True)
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    dummy_section = models.TextField(max_length=155, null=True, blank=True)


class Blocked(models.Model):
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    user = models.ForeignKey(User, on_delete=models.CASCADE, related_name="user_thats_blocking")
    blocked_user = models.ForeignKey(User, on_delete=models.CASCADE, related_name="blocked_user")
    date_created = models.DateTimeField(auto_now_add=True)


class Interactions(models.Model):
    IntChoices = (
        ("yes", "yes"),
        ("no", "no"),
    )
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False, db_index=True)
    user = models.ForeignKey(User, on_delete=models.CASCADE, related_name="current_user")
    interacted_user = models.ForeignKey(User, on_delete=models.CASCADE, related_name="interacted_user")
    interacted = models.CharField(max_length=20, default="yes", choices=(IntChoices), db_index=True)
</code></pre>
<p>So what I'm querying is.</p>
<p>Getting all the potential profiles within my search_range so in location model. Then I'm getting interactions with users from the interaction model. If I had interacted with someone before I remove them from my potential users and then I'm removing blocked users from the blocked model.</p>
<pre class="lang-py prettyprint-override"><code>
range_number = 10
allowed_users = Profile.objects.select_related('user').filter(user__current_location__point__distance_lte=(current_loc, Distance(km=int(range_number))))
blocked = Blocked.objects.select_related('user').filter(user__id=current_id).values('blocked_user')
users_taken_inter = Interactions.objects.select_related('user').filter(user__id=current_id).values('interacted_user')
users_not_taken_action = allowed_users.exclude(user__in=users_taken_inter).exclude(user__in=blocked).exclude(user__id=current_id)
</code></pre>
<p>I know this works but I don't know how well. Im using pagination so on average I'm having 35 queries in 35/40ms. But can i remove duplicates and/or improve performance?</p>
|
<python><django><django-models><django-rest-framework><django-views>
|
2023-05-17 15:37:49
| 0
| 471
|
rafaelHTML
|
76,273,844
| 2,893,712
|
Pandas Query astype(str)
|
<p>I have a dataframe with a column that usually contains only numbers. Occasionally it has text values, which means it will be of type <code>object</code> instead of <code>int64</code>.</p>
<p>Instead of doing <code>ColumnA == "100" | ColumnA == 100</code>, I am trying to perform a query that compares the value to the string form of the integer:</p>
<pre><code>df.query('ColumnA.astype(str) == "100"')
</code></pre>
<p>But this gives me a <code>KeyError</code> saying the <code>str</code> key does not exist.</p>
<p>Is there a way to use <code>astype</code> within a pandas query?</p>
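<p>For reference, two variants that should behave as intended (a sketch; the <code>engine="python"</code> form is the one to double-check against your pandas version, since the default numexpr engine chokes on column method calls):</p>

```python
import pandas as pd

df = pd.DataFrame({"ColumnA": [100, "100", 7, "x"]})

# Plain boolean indexing always works for this:
out1 = df[df["ColumnA"].astype(str) == "100"]

# Inside query(), method calls on columns need the python engine:
out2 = df.query('ColumnA.astype("str") == "100"', engine="python")
print(len(out1), len(out2))
```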
|
<python><pandas><dataframe>
|
2023-05-17 15:37:34
| 1
| 8,806
|
Bijan
|
76,273,834
| 12,670,481
|
Use Generic type on a Typevar
|
<p>I am building a class decorator that works as well as possible with Python typing.
From the following code:</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar, Generic, List, Type

ExecutorType = TypeVar("ExecutorType")

class Task(Generic[ExecutorType]):
    _executor: ExecutorType
    pass

TaskType = TypeVar("TaskType", bound=Task)
ChildTaskType = TypeVar("ChildTaskType", bound=Task)
ExecutorType = TypeVar("ExecutorType", bound='Executor')

class Executor(Generic[TaskType]):
    _TASKS_CLASSES: List[Type[TaskType]] = []

    @classmethod
    def task(cls: Type[ExecutorType]):
        def decorator(task_cls: Type[ChildTaskType]):
            cls._TASKS_CLASSES.append(task_cls)
            return task_cls
        return decorator

class Checker(Executor):
    def checker_method(self):
        return "Check Method"
    pass
</code></pre>
<p>I need to be able to write the following decorator without any typing errors:</p>
<pre class="lang-py prettyprint-override"><code>@Checker.task()
class Check(Task):
    def check_method(self):
        self._executor.checker_method()
</code></pre>
<p>However, when I write this code, <code>self._executor</code> is not assigned to <code>Checker</code> as I would need it to be (It is still the base Executor class), so <code>checker_method</code> is obviously not found. To fix that, I need to specify that <code>Task</code> Generic is <code>Checker</code> like this:</p>
<pre class="lang-py prettyprint-override"><code>@Checker.task()
class Check(Task[Checker]):
    ...
</code></pre>
<p>My question: Is there any way of using typing inside my decorator method <code>task</code> to specify that the output's class Generic is <code>ExecutorType</code>? In other words, I would need to type my task output as follows <code>Type[ChildTaskType[Type[ExecutorType]]</code> but it seems not possible.</p>
|
<python><oop><python-typing>
|
2023-05-17 15:36:47
| 0
| 593
|
HenriChab
|
76,273,728
| 9,064,356
|
PySpark SQL 'OR' condition working in unexpected manner with dates
|
<p>I am running queries with PySpark SQL, but I am running into strange behavior.
I have a WHERE clause which looks like this:</p>
<pre><code>WHERE `foobar`.`business_date` = '2020-01-01'
OR `foobar_2`.`business_date` = '2020-01-01'
</code></pre>
<p>The WHERE clause above filters out all of my data, resulting in an empty df. What is strange is that when I change OR to AND, I no longer get an empty data frame but proper results. That doesn't make sense to me, since OR should be more inclusive. Is there a different operator for OR in PySpark SQL?</p>
<p><strong>Empty df:</strong></p>
<pre><code>spark.sql("""
select *
from `foobar`
full outer join `foobar_connector` on (`foobar_connector`.`some_id` = `foobar`.`other_id`)
full outer join `foobar_2` on (`foobar_connector`.`some_other_id` = `foobar_2`.`other_id`)
WHERE `foobar`.`business_date` = '2020-01-01'
OR `foobar_2`.`business_date` = '2020-01-01'
""").show()
</code></pre>
<p><strong>DF contains results:</strong></p>
<pre><code>spark.sql("""
select *
from `foobar`
full outer join `foobar_connector` on (`foobar_connector`.`some_id` = `foobar`.`other_id`)
full outer join `foobar_2` on (`foobar_connector`.`some_other_id` = `foobar_2`.`other_id`)
WHERE `foobar`.`business_date` = '2020-01-01'
AND `foobar_2`.`business_date` = '2020-01-01'
""").show()
</code></pre>
<p>What could be the reason for this behaviour? To my understanding, if the AND condition returns a non-empty df, then the OR condition shouldn't return an empty df; it should return the same number of records or more.</p>
<hr/>
<p><strong>UPDATE 1</strong></p>
<hr/>
<p>I have found out that if I change the statement from</p>
<pre><code>WHERE `foobar`.`business_date` = '2020-01-01'
OR `foobar_2`.`business_date` = '2020-01-01'
</code></pre>
<p>to</p>
<pre><code>WHERE `foobar`.`business_date` = DATE('2020-01-01')
OR `foobar_2`.`business_date` = DATE('2020-01-01')
</code></pre>
<p>then I get the expected behaviour, but I have no idea why AND works without the need to convert explicitly.</p>
|
<python><sql><apache-spark><pyspark><where-clause>
|
2023-05-17 15:24:28
| 0
| 961
|
hdw3
|
76,273,725
| 1,769,197
|
Pandas: read_csv with usecols that may not exist in the csv file
|
<p>I have several large csv files with many columns, but they do not all have the same set of columns, hence a different number of columns in each file. I only want to read columns that contain a certain string, call it "abc-".</p>
<p>I created a list of possible column names that contain the string "abc-" and set <code>usecols</code> equal to this when reading the csv using <code>pd.read_csv</code>. It gave me an error:</p>
<p><code>ValueError: Usecols do not match columns, columns expected but not found: ['abc-150']</code></p>
<p>What can I do ?</p>
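<p>For what it's worth, <code>usecols</code> also accepts a callable, which is evaluated against the header actually present in each file, so names missing from one file never raise (a sketch with inline data):</p>

```python
import io
import pandas as pd

csv = io.StringIO("abc-1,abc-150,other\n1,2,3\n")

# The callable sees each file's real columns, so a column present in one
# file but absent in another is simply not selected there - no ValueError:
df = pd.read_csv(csv, usecols=lambda name: "abc-" in name)
print(list(df.columns))  # ['abc-1', 'abc-150']
```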
|
<python><pandas>
|
2023-05-17 15:24:03
| 2
| 2,253
|
user1769197
|
76,273,670
| 10,466,809
|
How to import from neighboring test modules when they are outside package source directory?
|
<p>Suppose I have a package structure like (as shown at <a href="https://doc.pytest.org/en/latest/explanation/goodpractices.html#tests-outside-application-code" rel="nofollow noreferrer">pytest.org</a>):</p>
<pre><code>src/
    mypkg/
        __init__.py
        app.py
        view.py
tests/
    test_app.py
    test_view.py
    ...
</code></pre>
<p>Now suppose <code>test_view</code> defines a <code>important_attr</code> that I would like to import from within <code>test_app</code>. How can I accomplish this? It seems like absolute imports don't work because <code>test_app</code> and <code>test_view</code> are not technically part of any package (so the absolute import can't be resolved) and relative imports are not allowed for the same reason (since relative imports don't seem to be allowed outside of packages when files are run as scripts?)</p>
<p>e.g.:</p>
<pre><code># test_view.py
important_attr = 42
</code></pre>
<pre><code># test_app.py
from .test_view import important_attr
def main():
    print(important_attr)

if __name__ == "__main__":
    main()
</code></pre>
<p>When I run <code>test_app.py</code> I get <code>ImportError: attempted relative import with no known parent package</code></p>
<p>I could solve this by moving <code>tests</code> under <code>mypkg</code> directory, but I've seen the "separate tests dir from src dir" suggestion a number of times and I'm trying to understand how/if it can work for this use case. Maybe this is just a known downside of this strategy compared to having <code>tests</code> under <code>mypkg</code>?</p>
<p>Adding an <code>__init__.py</code> to the <code>tests</code> directory didn't seem to fix the error.</p>
|
<python><testing><import>
|
2023-05-17 15:18:44
| 2
| 1,125
|
Jagerber48
|
76,273,567
| 10,829,044
|
get past success rate at each timepoint using pandas
|
<p>I have a dataframe like the one below:</p>
<pre><code>import pandas as pd

data = {
    'project_id': [11, 22, 31, 45, 52, 61],
    'user_id': [10001, 10002, 10001, 10003, 10002, 10004],
    'customer_id': [20001, 20002, 20001, 20001, 20003, 20002],
    'project_start_date': ['2023-01-01', '2023-02-01', '2023-03-01', '2023-04-01', '2023-05-01', '2023-06-01'],
    'project_end_date': ['2023-01-31', '2023-02-28', '2023-03-31', '2023-04-30', '2023-05-31', '2023-05-10'],
    'product_name': ['Product A', 'Product B', 'Product C', 'Product A', 'Product B', 'Product C'],
    'success_flag': [0, 1, 1, 1, 0, 1]
}
df = pd.DataFrame(data)
</code></pre>
<p>I would like to do the below</p>
<p>a) <strong>for each user</strong>, compute two metrics - past_project_success_count, past_project_success_ratio</p>
<p>The logic to compute these two metrics is given below</p>
<p><code>past_project_success_count</code> : count the number of past projects where the success_flag = 1
<code>past_project_success_ratio</code> : find the ratio using formula -> all past success projects (for each user) divided by all past projects (for each user)</p>
<p>b) Similarly, I would like to compute the same metrics <strong>for each customer</strong></p>
<p>c) Similarly, I would like to compute the same metrics <strong>for each user and customer combo</strong></p>
<p>d) we should only look at the past and compute these metrics. Should not consider the current row/current project. (only past projects)</p>
<p>I tried the below but not going anywhere with this</p>
<pre><code>t1 = t1.sort_values('project_end_date')
t1['user_past_project_success_count'] = t1.groupby('user_id')['success_flag'].cumsum() / t1.groupby('user_id').cumcount()
t1['user_past_project_success_count'] = t1.groupby('customer_id')['success_flag'].cumsum() / t1.groupby('customer_id').cumcount()
t1['user_past_project_success_count'] = t1.groupby(['user_id','customer_id'])['success_flag'].cumsum() / t1.groupby(['user_id','customer_id']).cumcount()
</code></pre>
<p>One issue I observed is that sometimes <code>cumcount</code> returns the same value, and I don't know why. The past project count should simply increase by 1 for each project.</p>
<p>I expect my output to look like the below. I have shown it only for the user column, but the same is needed for the customer column and the user/customer combo. Remember that the first record of each group will always be zero, as there is no past to look at.</p>
<p><a href="https://i.sstatic.net/w6w4j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w6w4j.png" alt="enter image description here" /></a></p>
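The "past only" part of the logic, shifting the cumulative totals so the current project is excluded, can be sketched in plain Python (the function and variable names here are hypothetical):

```python
from collections import defaultdict

def past_success_metrics(rows):
    """rows: (user_id, success_flag) pairs in chronological order.
    Returns, per row, (past success count, past success ratio),
    looking only at strictly earlier rows for the same user."""
    seen = defaultdict(lambda: [0, 0])  # user -> [successes so far, projects so far]
    out = []
    for user, flag in rows:
        s, n = seen[user]
        out.append((s, s / n if n else 0.0))  # first row per user -> (0, 0.0)
        seen[user][0] += flag
        seen[user][1] += 1
    return out

rows = [(10001, 0), (10002, 1), (10001, 1), (10003, 1), (10002, 0), (10004, 1)]
print(past_success_metrics(rows))
# [(0, 0.0), (0, 0.0), (0, 0.0), (0, 0.0), (1, 1.0), (0, 0.0)]
```

In pandas terms the same exclusion is typically written, after sorting by date, as `g.cumsum() - df['success_flag']` for the count, with `g.cumcount()` (the number of strictly earlier rows in the group) as the denominator.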
|
<python><pandas><dataframe><list><group-by>
|
2023-05-17 15:08:39
| 1
| 7,793
|
The Great
|
76,273,502
| 10,967,961
|
cross merge/cartesian product in dask
|
<p>how can I perform the equivalent of this cross merge in dask?</p>
<pre><code>merged_df = pd.merge(df, df, how='cross', suffixes=('', '_y'))
</code></pre>
<p>To provide an example, say I have this dataframe, say dataframe A:</p>
<pre><code>#Niusup Niucust
#1 a
#1 b
#1 c
#2 d
#2 e
</code></pre>
<p>and want to obtain this one:</p>
<pre><code>#Niusup Niucust_x Niucust_y
#1 a a
#1 a b
#1 a c
#1 b a
#1 b b
#1 b c
#1 c a
#1 c b
#1 c c
#2 d d
#2 d e
#2 e d
#2 e e
</code></pre>
<p>I need Dask because dataframe A contains 5,000,000 rows, so I expect the cartesian product to contain a very large number of rows.</p>
<p>thank you</p>
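Judging by the expected output, rows are paired only within the same <code>Niusup</code> group, so a keyed self-merge, which Dask does support (e.g. <code>dd.merge(ddf, ddf, on='Niusup', suffixes=('_x', '_y'))</code>), should reproduce it without a full cross join. The intended semantics, sketched with the stdlib on the example data:

```python
from itertools import product

rows = [(1, 'a'), (1, 'b'), (1, 'c'), (2, 'd'), (2, 'e')]

# group Niucust values by Niusup, then pair each group with itself
groups = {}
for key, val in rows:
    groups.setdefault(key, []).append(val)

pairs = [(key, x, y) for key, vals in groups.items() for x, y in product(vals, vals)]
print(len(pairs))  # 3*3 + 2*2 = 13 rows, matching the expected output
```

For a true cross join with no shared key, the usual workaround is to add a constant helper column to both frames and merge on that column.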
|
<python><pandas><merge><dask><cartesian-product>
|
2023-05-17 15:00:12
| 1
| 653
|
Lusian
|
76,273,420
| 2,876,079
|
Flask app.run method does not work with WinPython 3.11.1 and next.js application: fetch failed
|
<p>When using WinPython 3.10.5 I am able to debug my flask & next.js application using the flask debug mode (to enable hot reloads):</p>
<pre><code>app.run(debug=True, host=host, port=port)
</code></pre>
<p>However, when using WinPython 3.11.1, the flask debug mode does not work any more. The fetch command in the static part of my next application fails and I get following error in Google Chrome dev-tools console:</p>
<pre><code>GET http://localhost:3000/ 500 (Internal Server Error)
react-dom.development.js:29840 Download the React DevTools for a better development experience: https://reactjs.org/link/react-devtools
index.js:619 Uncaught TypeError: fetch failed
at Object.fetch (node:internal/deps/undici/undici:11457:11)
at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
at async apiCall (webpack-internal:///./src/pages/_app.js:30:26)
at async MyApp.getInitialProps (webpack-internal:///./src/pages/_app.js:34:27)
at async loadGetInitialProps (file://C:\python_env\workspace\flask_demo\front_end\node_modules\next\dist\shared\lib\utils.js:146:19)
at async renderToHTML (file://C:\python_env\workspace\flask_demo\front_end\node_modules\next\dist\server\render.js:417:13)
at async doRender (file://C:\python_env\workspace\flask_demo\front_end\node_modules\next\dist\server\base-server.js:1031:34)
at async cacheEntry.responseCache.get.incrementalCache.incrementalCache (file://C:\python_env\workspace\flask_demo\front_end\node_modules\next\dist\server\base-server.js:1162:28)
at async (file://C:\python_env\workspace\flask_demo\front_end\node_modules\next\dist\server\response-cache\index.js:99:36)
getServerError @ client.js:1
eval @ index.js:619
setTimeout (async)
hydrate @ index.js:607
await in hydrate (async)
eval @ next-dev.js:48
Promise.then (async)
eval @ next-dev.js:42
./node_modules/next/dist/client/next-dev.js @ main.js?ts=1684760846398:181
options.factory @ webpack.js?ts=1684760846398:649
__webpack_require__ @ webpack.js?ts=1684760846398:37
__webpack_exec__ @ main.js?ts=1684760846398:1094
(anonymous) @ main.js?ts=1684760846398:1095
webpackJsonpCallback @ webpack.js?ts=1684760846398:1197
(anonymous) @ main.js?ts=1684760846398:9
favicon.ico:1 GET http://localhost:3000/favicon.ico 500 (Internal Server Error)
</code></pre>
<p>a) If I manually enter the fetched url <code>http://localhost:5000/api/id_mode</code> in an extra tab, I get the expected result from the flask application.</p>
<p>b) If I run the flask application with the waitress <code>serve</code> function (=without debug mode) instead of using <code>app.run(...)</code> , the application works just fine. Therefore, it does not seem to be a JavaScript issue.</p>
<pre><code>serve(app, host=host, port=port)
</code></pre>
<p>Project file structure:</p>
<p><a href="https://i.sstatic.net/HhgVL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HhgVL.png" alt="enter image description here" /></a></p>
<p>Python code in <strong>main.py</strong>:</p>
<pre><code>import sys
from flask import (
Flask,
jsonify
)
from flask_cors import CORS
from waitress import serve
def main():
# workaround to fix issue with relative path to python in PyCharm for debugging
# Also see
# https://gitlab.cc-asp.fraunhofer.de/isi/micat/-/issues/363
sys.executable = sys.executable.replace('flask_demo\\App', 'flask_demo\\..\\..\\App')
app = create_app()
host = 'localhost'
port = 5000
CORS(app, resources={r"/*": {"origins": ["http://localhost:3000"]}})
app.run(debug=True, host=host, port=port)
# serve(app, host=host, port=port)
def create_app():
app = Flask(__name__)
@app.route('/api/id_mode')
def hello_world():
return jsonify({
"status": "success",
"message": "Hello World!"
})
return app
if __name__ == '__main__':
main()
</code></pre>
<p><strong>requirements.txt</strong>:</p>
<pre><code>flask==2.3.2
flask-cors==3.0.10
</code></pre>
<p>Next.js application defined in <strong>_app.js</strong>:</p>
<pre><code>import React from 'react';
import App from 'next/app';
export default function MyApp({ Component, pageProps }) {
const parameters = { ...pageProps };
return <Component {...parameters} />;
}
MyApp.getInitialProps = async (appContext) => {
const appProperties = await App.getInitialProps(appContext);
async function apiCall(){
const url = 'http://localhost:5000/api/id_mode'
const response = await fetch(url);
const text = await response.text();
return text;
}
const extraProperty = await apiCall()
return {
...appProperties,
extraProperty
};
};
</code></pre>
<p><strong>index.js</strong></p>
<pre><code>import React from 'react';
import Router from 'next/router';
export default function Index(properties) {
return <>{"Hello from next.js"}</>;
}
</code></pre>
<p><strong>package.json</strong></p>
<pre><code>{
"name": "flask_demo",
"version": "1.0.0",
"main": "index.js",
"scripts": {
"dev": "cross-env NODE_OPTIONS='--inspect' next dev"
},
"dependencies": {
"next": "13.4.3"
},
"devDependencies": {
"cross-env": "7.0.3"
}
}
</code></pre>
<p>Output in Network tab:</p>
<p><a href="https://i.sstatic.net/cgg8V.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cgg8V.png" alt="enter image description here" /></a></p>
<p>Related:</p>
<p><a href="https://stackoverflow.com/questions/76242327/pycharm-runs-a-flask-app-but-fails-to-debug-it-in-python3-11/76263800#76263800">PyCharm runs a flask app but fails to debug it in python3.11</a></p>
<p><a href="https://stackoverflow.com/questions/74165121/next-js-fetch-request-gives-error-typeerror-fetch-failed">next.js fetch request gives error TypeError: fetch failed</a></p>
|
<python><flask><debugging><next.js><python-3.11>
|
2023-05-17 14:51:01
| 1
| 12,756
|
Stefan
|
76,273,139
| 6,690,682
|
rasterio - set compression quality
|
<p>I know about the possibility in the <code>rasterio</code> module to set a compression "family" like <code>webp</code> or <code>JPEG</code> by using the respective profile. But I want to have more control over the compression quality, like it is possible when using <code>GDAL</code>.</p>
<p>Ideally, I want something like this:</p>
<pre><code>from osgeo import gdal
driver = gdal.GetDriverByName('GTiff')
ds = driver.Create(MYFILE, cols, rows, 1, dtype, ['COMPRESS=JPEG', 'JPEG_QUALITY=90'])
</code></pre>
<p>I could not find anything in the rasterio <a href="https://rasterio.readthedocs.io/en/latest/topics/writing.html" rel="nofollow noreferrer">writing</a> or <a href="https://rasterio.readthedocs.io/en/latest/topics/profiles.html" rel="nofollow noreferrer">profile</a> documentation.</p>
|
<python><compression><gdal><rasterio>
|
2023-05-17 14:19:49
| 1
| 795
|
s6hebern
|
76,273,111
| 8,510,149
|
Issues with mean and groupby using pyspark
|
<p>I have an issue using <code>mean</code> together with <code>groupBy</code> (or a window) on a PySpark dataframe. The code below returns a dataframe called <code>mean_df</code> in which the column <code>mean_change1</code> contains only NaN values.</p>
<p>I don't understand why this is the case. Is it due to the presence of NaN in <code>df</code>?</p>
<pre><code>import pandas as pd
import numpy as np
from pyspark.sql import SparkSession
from pyspark.sql.window import Window
import pyspark.sql.functions as F
np.random.seed(10)
data = pd.DataFrame({'ID':list(range(1,101)) + list(range(1,101)) + list(range(1,101)),
'date':[pd.to_datetime('2021-07-30')]*100 + [pd.to_datetime('2022-12-31')] * 100 + [pd.to_datetime('2023-04-30')]*100,
'value1': [i for i in np.random.normal(98, 4.5, 100)] + [np.nan] *3 + [i for i in np.random.normal(100, 5, 97)] + [i for i in np.random.normal(120, 8, 95)] +[5.3, 9000, 160, -5222, 158],
'value2':[i for i in np.random.normal(52, 11, 100)] + [i for i in np.random.normal(50, 10, 100)] + [i for i in np.random.normal(49, 10, 100)]
})
spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(data)
# Calculate change
window = Window.partitionBy("ID").orderBy("date")
df = df.withColumn("previous_value1", F.lag("value1", 1).over(window))
df = df.withColumn("change1", df["value1"] - df["previous_value1"])
mean_df = df.groupBy("date").agg(F.mean("change1").alias("mean_change1"))
mean_df.toPandas()
</code></pre>
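One likely culprit: in Spark, NaN (unlike SQL NULL) is an ordinary double value, and <code>avg</code> skips nulls but not NaNs, so a single NaN in <code>change1</code> poisons the whole group mean, exactly as it does in plain arithmetic:

```python
import math

values = [1.0, float('nan'), 3.0]

naive_mean = sum(values) / len(values)  # NaN propagates through the sum
print(math.isnan(naive_mean))  # True

clean = [v for v in values if not math.isnan(v)]  # drop NaN before averaging
print(sum(clean) / len(clean))  # 2.0
```

If that is the cause here, filtering the NaNs out before aggregating (e.g. with <code>F.isnan</code>) should restore finite means.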
|
<python><pyspark>
|
2023-05-17 14:17:21
| 1
| 1,255
|
Henri
|
76,273,075
| 13,770,014
|
How to malloc cvalues from python?
|
<p>This C file:</p>
<pre class="lang-c prettyprint-override"><code>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
struct Foo {
char *s;
int i;
};
struct Bar {
struct Foo *f;
int n;
};
void func(struct Bar *b)
{
for(int i = 0; i < b->n; i++) {
printf("s:%s i:%i\n", b->f[i].s, b->f[i].i);
}
}
//int main()
//{
// struct Foo *f = malloc(sizeof(*f) * 2);
// f[0].s = strdup("asd");
// f[0].i = 1;
// f[1].s = strdup("qwe");
// f[1].i = 2;
// struct Bar *b = malloc(sizeof(*b));
// b->f = f;
// b->n = 2;
// func(b); // works as expected
//}
</code></pre>
<p>And corresponding Python:</p>
<pre class="lang-py prettyprint-override"><code>
import ctypes
clibrary = ctypes.CDLL('clibrary.so')
class Foo(ctypes.Structure):
_fields_ = [
('s', ctypes.c_char_p),
('i', ctypes.c_int32)
]
class Bar(ctypes.Structure):
_fields_ = [
('f', ctypes.POINTER(Foo)),
('n', ctypes.c_int32)
]
b = Bar()
# b.f = ctypes.Array(2)
b.f[0] = Foo(b'asd', 4)
b.f[1] = Foo(b'qwe', 5)
b.n = 2
clibrary.func(ctypes.byref(b))
</code></pre>
<p>gives:</p>
<pre class="lang-none prettyprint-override"><code>ValueError: NULL pointer access
</code></pre>
<p>I know that <code>Bar.f</code> is just an unallocated pointer, which is why I make the allocations in the test <code>main()</code> (commented out). But how can I do the allocations in Python? I am using <code>Bar.f</code> as a dynamic array in <code>func()</code> because I don't know in advance how many Foos I will need in that array. How do I handle this scenario with ctypes?</p>
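With ctypes you normally don't call <code>malloc</code> yourself: instantiating an array type, <code>(Foo * n)()</code>, allocates the memory, and <code>ctypes.cast</code> produces the <code>POINTER(Foo)</code> for the struct field. A sketch of filling <code>Bar</code> this way (same structs as above; no C library is needed to verify the fields):

```python
import ctypes

class Foo(ctypes.Structure):
    _fields_ = [('s', ctypes.c_char_p), ('i', ctypes.c_int32)]

class Bar(ctypes.Structure):
    _fields_ = [('f', ctypes.POINTER(Foo)), ('n', ctypes.c_int32)]

# (Foo * 2)() allocates space for two Foo structs, like malloc(sizeof(*f) * 2)
foos = (Foo * 2)(Foo(b'asd', 4), Foo(b'qwe', 5))

b = Bar()
b.f = ctypes.cast(foos, ctypes.POINTER(Foo))
b.n = 2

# keep `foos` referenced for as long as C code may dereference b.f;
# ctypes does not tie the array's lifetime to the pointer stored in the struct
print(b.f[0].s, b.f[1].i)  # b'asd' 5
```

With this in place, <code>clibrary.func(ctypes.byref(b))</code> should print both entries.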
|
<python><c><memory><ctypes>
|
2023-05-17 14:13:55
| 1
| 2,017
|
milanHrabos
|
76,272,632
| 4,399,016
|
How to convert the API response format into Pandas Dataframe?
|
<p>I have this code, with which I can download data from the API, but I don't know how to convert the response into a pandas DataFrame.</p>
<pre><code>import requests
import json
url = "https://statdb.luke.fi:443/PxWeb/api/v1/en/LUKE/02 Maatalous/04 Tuotanto/06 Lihantuotanto/02 Kuukausitilastot/02_Lihantuotanto_teurastamoissa_kk.px"
payload = json.dumps({
"query": [
{
"code": "Muuttuja",
"selection": {
"filter": "item",
"values": [
"Lihantuotanto"
]
}
},
{
"code": "Laji",
"selection": {
"filter": "item",
"values": [
"Lehmät"
]
}
}
],
"response": {
"format": "csv"
}
})
headers = {
'Content-Type': 'application/json',
'Cookie': 'rxid=710a361a-7044-494f-95b7-15261822712c'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
</code></pre>
<p>The code returns the data as text. I need guidance on how to build a pandas DataFrame from this output.</p>
<p>For this particular dataset, the column headers would be "Month", "Variable", "Cows 7) 8)", and so forth.</p>
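Since the payload requests <code>"format": "csv"</code>, <code>response.text</code> is CSV, and the usual route is <code>pd.read_csv(io.StringIO(response.text))</code>. The parsing step itself, sketched with the stdlib on a made-up two-line response (the header and values here are hypothetical):

```python
import csv
import io

text = '"Month","Variable","Cows 7) 8)"\n"2023M01","Meat production","5.2"\n'

rows = list(csv.reader(io.StringIO(text)))
header, data = rows[0], rows[1:]
print(header)  # ['Month', 'Variable', 'Cows 7) 8)']
print(data)    # [['2023M01', 'Meat production', '5.2']]
```

`pd.DataFrame(data, columns=header)`, or `pd.read_csv` on the same `StringIO`, then yields the DataFrame.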
|
<python><json><pandas>
|
2023-05-17 13:26:05
| 1
| 680
|
prashanth manohar
|
76,272,523
| 10,981,411
|
Using tkinter, when I place my tree and buttons on the canvas why do they disappear?
|
<p>Below is my code.</p>
<p>When I place my tree and buttons on the root, I can see them:</p>
<pre><code>import tkinter
from tkinter import ttk
from tkinter import *
root = tkinter.Tk()
root.geometry("1500x900")
root.title("Writer")
canvas = tkinter.Canvas(root, borderwidth=10)
frame = tkinter.Frame(canvas)
vsb = tkinter.Scrollbar(root, orient="vertical", command=canvas.yview)
canvas.configure(yscrollcommand=vsb.set)
canvas.pack(side="left", fill="both", expand=True)
canvas.create_window((0, 0), window=frame, anchor="nw")
frame.bind("<Configure>", lambda event, canvas=canvas: canvas.configure(scrollregion=canvas.bbox("all")))
def treeframe1(rows, df):
tree["columns"] = list(range(1, len(df.columns) + 1))
tree['show'] = 'headings'
tree['height'] = '45'
colnames = df.columns
tree.column(1, width=90, anchor='c')
tree.heading(1, text=colnames[0])
for i in range(2, len(df.columns) + 1):
tree.column(i, width=100, anchor='c')
tree.heading(i, text=colnames[i - 1])
for i in rows:
tree.insert('', 'end', values=i)
tree = ttk.Treeview(root, selectmode='browse', height='45')
tree.place(x=960 + 200 + 00, y=45, width=920)
vsb = ttk.Scrollbar(root, orient="vertical", command=tree.yview)
vsb.place(x=950 + 200 + 00, y=45, height=930)
tree.configure(yscrollcommand=vsb.set)
vsb = ttk.Scrollbar(root, orient="horizontal", command=tree.xview)
vsb.place(x=960 + 200 + 00, y=970, width=960)
tree.configure(xscrollcommand=vsb.set)
b1 = Button(root, text="dd1", width=15)
b1.place(x=650 + 200, y=465 + 200 - 600 + 5)
b2 = Button(root, text="dd2", width=15)
b2.place(x=650 + 200, y=515 + 200 - 600 + 5)
b3 = Button(root, text="dd3", width=15,)
b3.place(x=650 + 200, y=565 + 200 - 600 + 5)
root.mainloop()
</code></pre>
<p>But when I place them on the canvas (i.e. replacing "root" with "frame"), I don't see them. Why is that happening?</p>
<p>I am new to tkinter, so if someone could modify the code to make it work, that would be really helpful.</p>
|
<python><tkinter><tkinter-canvas>
|
2023-05-17 13:15:18
| 2
| 495
|
TRex
|
76,272,502
| 2,825,570
|
Get all labels / entity groups available to a model
|
<p>I have the following code to get the named entity values from a given text:</p>
<pre><code>from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/distilbert-base-multilingual-cased-ner-hrl")
model = AutoModelForTokenClassification.from_pretrained("Davlan/distilbert-base-multilingual-cased-ner-hrl")
nlp = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="max")
example = "My name is Johnathan Smith and I work at Apple"
ner_results = nlp(example)
print(ner_results)
</code></pre>
<p>The following is the output:</p>
<pre><code>[{'end': 26,
'entity_group': 'PER',
'score': 0.9994689,
'start': 11,
'word': 'Johnathan Smith'},
{'end': 46,
'entity_group': 'ORG',
'score': 0.9983876,
'start': 41,
'word': 'Apple'}]
</code></pre>
<p>In the above example the labels / entity groups are <code>ORG</code> and <code>PER</code>. How can I find all the labels / entity groups available?</p>
<p>Kindly advise.</p>
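In transformers, the full label set usually lives on the model config as <code>model.config.id2label</code>, a dict mapping class indices to label strings; the entity groups are those labels with the <code>B-</code>/<code>I-</code> prefixes stripped. A sketch with a hypothetical BIO-style mapping (the real one comes from the loaded config):

```python
# model.config.id2label would look like this for a BIO-tagged NER head
id2label = {0: 'O', 1: 'B-PER', 2: 'I-PER', 3: 'B-ORG',
            4: 'I-ORG', 5: 'B-LOC', 6: 'I-LOC'}

# entity groups = labels minus the 'O' tag, with B-/I- prefixes removed
groups = sorted({label.split('-')[-1] for label in id2label.values() if label != 'O'})
print(groups)  # ['LOC', 'ORG', 'PER']
```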
|
<python><pytorch><huggingface-transformers>
|
2023-05-17 13:11:36
| 2
| 8,621
|
Jeril
|
76,272,312
| 10,755,782
|
How to visualize segment-anything mask results in a single color?
|
<p>I'm trying to visualize the results of image segmentation which was done by 'segment-anything'. When I do this, the segmented boxes/masks are plotted with different colors. <a href="https://i.sstatic.net/DVMOn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DVMOn.png" alt="as shown here" /></a> How can I plot all the masks with the same color (black)?</p>
<pre><code>import cv2
import supervision as sv
import numpy as np
sam_result = np.load('sam_result_multy1.npy', allow_pickle=True)
print(sam_result[0].keys())
IMAGE_PATH = 'img_mult_sk.png'
image_bgr = cv2.imread(IMAGE_PATH)
mask_annotator = sv.MaskAnnotator()
detections = sv.Detections.from_sam(sam_result=sam_result)
annotated_image = mask_annotator.annotate(scene=image_bgr.copy(), detections=detections)
sv.plot_images_grid(
images=[image_bgr, annotated_image],
grid_size=(1, 2),
titles=['source image', 'segmented image']
)
</code></pre>
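One way to sidestep the annotator's per-detection palette is to OR all the boolean masks into a single mask and paint those pixels black directly. A NumPy sketch with tiny arrays standing in for the real image and SAM masks:

```python
import numpy as np

# stand-ins: a 2x2 white image and two single-pixel masks
image = np.full((2, 2, 3), 255, dtype=np.uint8)
masks = [np.array([[True, False], [False, False]]),
         np.array([[False, False], [False, True]])]

combined = np.zeros(image.shape[:2], dtype=bool)
for m in masks:
    combined |= m  # union of all segment masks

image[combined] = 0  # paint every masked pixel black
print(image[0, 0].tolist(), image[0, 1].tolist())  # [0, 0, 0] [255, 255, 255]
```

With supervision, the per-detection boolean masks are typically available as <code>detections.mask</code>.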
|
<python><computer-vision><image-segmentation>
|
2023-05-17 12:49:36
| 1
| 660
|
brownser
|
76,272,281
| 1,854,182
|
import openai fails (python): cannot import name 'COMMON_SAFE_ASCII_CHARACTERS'
|
<p>Trying to use openai:</p>
<pre><code>import openai
</code></pre>
<p>I've got the following errors:</p>
<pre><code>Traceback (most recent call last):
File "/.../lib/python3.11/site-packages/aiohttp/client_reqrep.py", line 70, in <module>
import cchardet as chardet
ModuleNotFoundError: No module named 'cchardet'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/.../chat.py", line 1, in <module>
import openai
File "/.../lib/python3.11/site-packages/openai/__init__.py", line 9, in <module>
from openai.api_resources import (
File "/.../lib/python3.11/site-packages/openai/api_resources/__init__.py", line 1, in <module>
from openai.api_resources.audio import Audio # noqa: F401
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.11/site-packages/openai/api_resources/audio.py", line 4, in <module>
from openai import api_requestor, util
File "/.../lib/python3.11/site-packages/openai/api_requestor.py", line 21, in <module>
import aiohttp
File "/.../lib/python3.11/site-packages/aiohttp/__init__.py", line 6, in <module>
from .client import (
File "/.../lib/python3.11/site-packages/aiohttp/client.py", line 59, in <module>
from .client_reqrep import (
File "/.../lib/python3.11/site-packages/aiohttp/client_reqrep.py", line 72, in <module>
import charset_normalizer as chardet # type: ignore[no-redef]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.../lib/python3.11/site-packages/charset_normalizer/__init__.py", line 23, in <module>
from charset_normalizer.api import from_fp, from_path, from_bytes, normalize
File "/.../lib/python3.11/site-packages/charset_normalizer/api.py", line 10, in <module>
from charset_normalizer.md import mess_ratio
File "charset_normalizer/md.py", line 5, in <module>
ImportError: cannot import name 'COMMON_SAFE_ASCII_CHARACTERS' from 'charset_normalizer.constant' (/.../lib/python3.11/site-packages/charset_normalizer/constant.py)
</code></pre>
|
<python><openai-api>
|
2023-05-17 12:46:34
| 1
| 888
|
Ofer Rahat
|
76,272,002
| 5,305,512
|
How to download NLTK package with proper security certificates inside docker container?
|
<p>I have tried all the combinations mentioned <a href="https://stackoverflow.com/questions/31143015/docker-nltk-download">here</a> and in other places, but I keep getting the same error.</p>
<p>Here is my <code>Dockerfile</code>:</p>
<pre><code>FROM python:3.9
RUN pip install virtualenv && virtualenv venv -p python3
ENV VIRTUAL_ENV=/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
WORKDIR /app
COPY requirements.txt ./
RUN pip install -r requirements.txt
RUN git clone https://github.com/facebookresearch/detectron2.git
RUN python -m pip install -e detectron2
# Install dependencies
RUN apt-get update && apt-get install libgl1 -y
RUN pip install -U nltk
RUN [ "python3", "-c", "import nltk; nltk.download('punkt', download_dir='/usr/local/nltk_data')" ]
COPY . /app
# Run the application:
CMD ["python", "-u", "app.py"]
</code></pre>
<p>The Docker image builds fine (I'm using the platform argument because the image will run on Linux, while my local build machine is Windows, where the <code>detectron</code> library does not install):</p>
<pre><code>>>> docker buildx build --platform=linux/amd64 -t my_app .
[+] Building 23.2s (16/16) FINISHED
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 634B 0.0s
=> [internal] load metadata for docker.io/library/python:3.9 0.9s
=> [internal] load build context 0.0s
=> => transferring context: 1.85kB 0.0s
=> [ 1/11] FROM docker.io/library/python:3.9@sha256:6ea9dafc96d7914c5c1d199f1f0195c4e05cf017b10666ca84cb7ce8e269 0.0s
=> CACHED [ 2/11] RUN pip install virtualenv && virtualenv venv -p python3 0.0s
=> CACHED [ 3/11] WORKDIR /app 0.0s
=> CACHED [ 4/11] COPY requirements.txt ./ 0.0s
=> CACHED [ 5/11] RUN pip install -r requirements.txt 0.0s
=> CACHED [ 6/11] RUN git clone https://github.com/facebookresearch/detectron2.git 0.0s
=> CACHED [ 7/11] RUN python -m pip install -e detectron2 0.0s
=> CACHED [ 8/11] RUN apt-get update && apt-get install libgl1 -y 0.0s
=> CACHED [ 9/11] RUN pip install -U nltk 0.0s
=> [10/11] RUN [ "python3", "-c", "import nltk; nltk.download('punkt', download_dir='/usr/local/nltk_data')" ] 22.1s
=> [11/11] COPY . /app 0.0s
=> exporting to image 0.1s
=> => exporting layers 0.1s
=> => writing image sha256:83e2495addbc4cdf9b0885e1bb4c5b0fb0777177956eda56950bbf59c095d23b 0.0s
=> => naming to docker.io/library/my_app
</code></pre>
<p>But I keep getting the error below when trying to run the image:</p>
<pre><code>>>> docker run -p 8080:8080 my_app
[nltk_data] Error loading punkt: <urlopen error EOF occurred in
[nltk_data] violation of protocol (_ssl.c:1129)>
[nltk_data] Error loading punkt: <urlopen error EOF occurred in
[nltk_data] violation of protocol (_ssl.c:1129)>
[nltk_data] Error loading averaged_perceptron_tagger: <urlopen error
[nltk_data] EOF occurred in violation of protocol (_ssl.c:1129)>
Traceback (most recent call last):
File "/app/app.py", line 16, in <module>
index = VectorstoreIndexCreator().from_loaders(loaders)
File "/venv/lib/python3.9/site-packages/langchain/indexes/vectorstore.py", line 72, in from_loaders
docs.extend(loader.load())
File "/venv/lib/python3.9/site-packages/langchain/document_loaders/unstructured.py", line 70, in load
elements = self._get_elements()
File "/venv/lib/python3.9/site-packages/langchain/document_loaders/pdf.py", line 37, in _get_elements
return partition_pdf(filename=self.file_path, **self.unstructured_kwargs)
File "/venv/lib/python3.9/site-packages/unstructured/partition/pdf.py", line 75, in partition_pdf
return partition_pdf_or_image(
File "/venv/lib/python3.9/site-packages/unstructured/partition/pdf.py", line 137, in partition_pdf_or_image
return _partition_pdf_with_pdfminer(
File "/venv/lib/python3.9/site-packages/unstructured/utils.py", line 43, in wrapper
return func(*args, **kwargs)
File "/venv/lib/python3.9/site-packages/unstructured/partition/pdf.py", line 248, in _partition_pdf_with_pdfminer
elements = _process_pdfminer_pages(
File "/venv/lib/python3.9/site-packages/unstructured/partition/pdf.py", line 293, in _process_pdfminer_pages
_elements = partition_text(text=text)
File "/venv/lib/python3.9/site-packages/unstructured/partition/text.py", line 89, in partition_text
elif is_possible_narrative_text(ctext):
File "/venv/lib/python3.9/site-packages/unstructured/partition/text_type.py", line 76, in is_possible_narrative_text
if exceeds_cap_ratio(text, threshold=cap_threshold):
File "/venv/lib/python3.9/site-packages/unstructured/partition/text_type.py", line 273, in exceeds_cap_ratio
if sentence_count(text, 3) > 1:
File "/venv/lib/python3.9/site-packages/unstructured/partition/text_type.py", line 222, in sentence_count
sentences = sent_tokenize(text)
File "/venv/lib/python3.9/site-packages/unstructured/nlp/tokenize.py", line 38, in sent_tokenize
return _sent_tokenize(text)
File "/venv/lib/python3.9/site-packages/nltk/tokenize/__init__.py", line 106, in sent_tokenize
tokenizer = load(f"tokenizers/punkt/{language}.pickle")
File "/venv/lib/python3.9/site-packages/nltk/data.py", line 750, in load
opened_resource = _open(resource_url)
File "/venv/lib/python3.9/site-packages/nltk/data.py", line 876, in _open
return find(path_, path + [""]).open()
File "/venv/lib/python3.9/site-packages/nltk/data.py", line 583, in find
raise LookupError(resource_not_found)
LookupError:
**********************************************************************
Resource punkt not found.
Please use the NLTK Downloader to obtain the resource:
>>> import nltk
>>> nltk.download('punkt')
For more information see: https://www.nltk.org/data.html
Attempted to load tokenizers/punkt/PY3/english.pickle
Searched in:
- '/root/nltk_data'
- '/venv/nltk_data'
- '/venv/share/nltk_data'
- '/venv/lib/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
- ''
**********************************************************************
</code></pre>
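One detail visible in the traceback: <code>punkt</code> was downloaded to <code>/usr/local/nltk_data</code>, but that directory is not in the searched list (only <code>/usr/local/share/nltk_data</code> and <code>/usr/local/lib/nltk_data</code> are). NLTK honors an <code>NLTK_DATA</code> environment variable, so one sketch of a fix is to point it at the build-time download directory:

```dockerfile
ENV NLTK_DATA=/usr/local/nltk_data
RUN [ "python3", "-c", "import nltk; nltk.download('punkt', download_dir='/usr/local/nltk_data')" ]
```

Alternatively, download to one of the searched directories, e.g. <code>download_dir='/usr/local/share/nltk_data'</code>; once the data is found locally, the failing runtime SSL re-download is skipped as well.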
|
<python><linux><docker><ssl-certificate><nltk>
|
2023-05-17 12:15:01
| 1
| 3,764
|
Kristada673
|
76,271,910
| 4,704,065
|
Highlight a particular value in X axis
|
<p>I want to highlight particular values on the x-axis in Matplotlib.</p>
<pre><code>import matplotlib.pyplot as plt

list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9]   # x-axis values
list2 = [1, 2, 23, 4, 5, 6, 7, 8, 9]  # y-axis values
list3 = [2, 3, 6, 7]                  # x values to highlight

plt.plot(list1, list2)
plt.show()
</code></pre>
<p>I want to highlight the x-axis labels, as well as their corresponding values, in some color (red, green, or whatever), but only where a value of list3 is present in list1.</p>
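A common approach: select the (x, y) pairs whose x is in <code>list3</code>, overdraw them (e.g. <code>plt.scatter(*zip(*highlight), color='red', zorder=3)</code>), and color the matching tick labels via <code>ax.get_xticklabels()</code>. The selection step in plain Python:

```python
list1 = [1, 2, 3, 4, 5, 6, 7, 8, 9]   # x-axis values
list2 = [1, 2, 23, 4, 5, 6, 7, 8, 9]  # y-axis values
list3 = [2, 3, 6, 7]                  # x values to highlight

# points to overdraw in the highlight color
highlight = [(x, y) for x, y in zip(list1, list2) if x in list3]
print(highlight)  # [(2, 2), (3, 23), (6, 6), (7, 7)]
```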
|
<python><matplotlib>
|
2023-05-17 12:05:51
| 2
| 321
|
Kapil
|
76,271,807
| 12,048,753
|
Polars: Parsing nested fields when reading a csv file
|
<p>I have the following csv structure:</p>
<pre><code>Some;Csv;Data;Nested|Field|Containing|An|Array;And;Some;More;Fields;Here
</code></pre>
<p>Is there any way to parse the nested list using Polars?</p>
<p>The expected output would be:</p>
<pre class="lang-py prettyprint-override"><code>{
"column_1": "Some",
"column_2": "Csv",
"column_3": "Data",
"column_4": [
"Nested",
"Field",
"Containing",
"An",
"Array"
],
"column_5": "And",
"column_6": "Some",
"column_7": "More",
"column_8": "Fields",
"column_9": "Here"
}
</code></pre>
<p>I'm trying to parse the csv file using something like:</p>
<pre class="lang-py prettyprint-override"><code>
pl.read_csv(
source="somefile.csv",
has_header=False,
separator=";"
)
</code></pre>
<p>In the documentation they mention the <code>List</code> type, but I don't understand how to use it in the <code>dtypes</code> parameter.</p>
<p>Thanks in advance!</p>
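One way is to read <code>column_4</code> as an ordinary string column and split it afterwards; in Polars that would be <code>pl.col('column_4').str.split('|')</code> inside a <code>with_columns</code> call. The splitting logic itself, on the example line, with the stdlib:

```python
import csv
import io

line = "Some;Csv;Data;Nested|Field|Containing|An|Array;And;Some;More;Fields;Here"

row = next(csv.reader(io.StringIO(line), delimiter=';'))
record = {f"column_{i + 1}": value for i, value in enumerate(row)}
record["column_4"] = record["column_4"].split('|')  # nested list field
print(record["column_4"])  # ['Nested', 'Field', 'Containing', 'An', 'Array']
```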
|
<python><dataframe><csv><python-polars>
|
2023-05-17 11:53:57
| 1
| 1,306
|
Fausto Alonso
|
76,271,668
| 9,476,917
|
pandas read_excel throws ValueError: Value does not match pattern
|
<p>I want to read an Excel file as a pandas DataFrame:</p>
<pre><code>import pandas as pd
import os
path = r"my/path/to/file"
file = "myFile.xlsx"
df = pd.read_excel(os.path.join(path, file))
</code></pre>
<p>and receive following error:</p>
<blockquote>
<p>ValueError: Value does not match pattern
^[$]?([A-Za-z]{1,3})[$]?(\d+)(:[$]?([A-Za-z]{1,3})[$]?(\d+)?)?$|^[A-Za-z]{1,3}:[A-Za-z]{1,3}$</p>
</blockquote>
<p>The Excel file has an active filter by default. If I perform a workaround using the <code>xlwings</code> library, I can read the Excel file as a DataFrame afterwards:</p>
<pre><code>#Start Workaround
wb = xw.Book(os.path.join(path, file))
if wb.sheets["Sheet1"].api.AutoFilterMode == True:
wb.sheets["Sheet1"].api.AutoFilterMode = False
wb.save()
wb.close()
#End Workaround
df = pd.read_excel(os.path.join(path, file)) #works now
</code></pre>
<p>Since a security update, I need to manually confirm a dialog box after opening the file with <code>xlwings</code>. Does anyone know how to read the Excel file directly into a DataFrame without using the workaround?</p>
<p>FYI, I can't replicate the error with another Excel file that I save with an active filter...</p>
|
<python><pandas><excel>
|
2023-05-17 11:36:13
| 0
| 755
|
Maeaex1
|
76,271,573
| 940,490
|
Could there be a bug in `pandas.rolling.std` when custom weights are used?
|
<p>I have observed odd behavior of <code>pandas.rolling.std</code> when custom weights are used and trying to understand what is happening. My question is about how this method works and if there could be a bug when custom weights are used, but below there is a visual example of what looks very suspicious.</p>
<p>Through the <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.window.rolling.Rolling.std.html" rel="nofollow noreferrer">official documentation</a> I get to the <a href="https://github.com/pandas-dev/pandas/blob/37ea63d540fd27274cad6585082c91b1283f963d/pandas/core/window/rolling.py#L2204-L2216" rel="nofollow noreferrer">source code</a>, which immediately sends me to <a href="https://github.com/pandas-dev/pandas/blob/v2.0.1/pandas/core/window/rolling.py#L1530-L1552" rel="nofollow noreferrer">this block</a>. If I am not mistaken, at this point the calculation of the rolling variance goes into <a href="https://github.com/pandas-dev/pandas/blob/37ea63d540fd27274cad6585082c91b1283f963d/pandas/_libs/window/aggregations.pyx#L415-L476" rel="nofollow noreferrer">this method</a>, implemented in a language (Cython, judging by the <code>.pyx</code> extension) I am not familiar with... It is very likely I am missing something, but I do not see where the weights are passed. Could this be an issue, or could there be something else that is off?</p>
<h2>Example of suspicious behavior</h2>
<p>Here is an example of a fictitious time series with an artificially introduced large value to create a spike in the rolling estimator. In addition, I compute the estimator via <code>pandas.rolling.mean</code>, which actually looks more sensible.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
df = pd.Series(np.random.normal(size=500))
df.loc[125] = 15
# Calculation as I thought it is supposed to work
df_std_stock = (
df.rolling(window=200, min_periods=100, win_type='exponential')
.std(tau=-(25 / np.log(2)), center=0, sym=False)
)
# Calculation that produces results that look more sensible visually
df_std_custom = (
(df ** 2).rolling(window=200, min_periods=100, win_type='exponential')
.mean(tau=-(25 / np.log(2)), center=0, sym=False)
)
df_std_custom_mean = (
df.rolling(window=200, min_periods=100, win_type='exponential')
.mean(tau=-(25 / np.log(2)), center=0, sym=False)
)
df_std_custom = df_std_custom - df_std_custom_mean**2
</code></pre>
<p><a href="https://i.sstatic.net/g8eFx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g8eFx.png" alt="enter image description here" /></a></p>
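The custom calculation above relies on the identity Var(x) = E[x²] − (E[x])², which holds exactly for any fixed set of normalized weights. A plain-Python check with exponential weights matching the <code>tau</code> used above (newest point weighted 1):

```python
import math

def weighted_moments(xs, tau=25 / math.log(2)):
    # exponential weights: the most recent observation gets weight 1
    n = len(xs)
    w = [math.exp(-(n - 1 - i) / tau) for i in range(n)]
    total = sum(w)
    mean = sum(wi * xi for wi, xi in zip(w, xs)) / total
    mean_sq = sum(wi * xi * xi for wi, xi in zip(w, xs)) / total
    var_direct = sum(wi * (xi - mean) ** 2 for wi, xi in zip(w, xs)) / total
    return var_direct, mean_sq - mean ** 2

v1, v2 = weighted_moments([1.0, 2.0, 15.0, 4.0])
print(abs(v1 - v2) < 1e-9)  # True: both formulas agree
```

So if the <code>win_type</code> path of <code>rolling.std</code> disagrees with the E[x²] − (E[x])² route, the weights must be handled differently in one of the two code paths.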
|
<python><pandas>
|
2023-05-17 11:25:46
| 1
| 1,615
|
J.K.
|
76,271,490
| 1,745,291
|
How to make a value awaitable in python?
|
<p>I have an <code>int</code> and I want to make it awaitable. How can I do it?</p>
<p>Context: I am implementing an async generator yielding awaitable objects. However, it needs to await the first (and only the first) value so that it can yield the others.</p>
<p>Thus, the first value is already awaited, but the others are yielded as they are, and the caller will await them. However, I don't want to add is-it-awaitable logic in the caller, which is why I want to make the first value awaitable again.</p>
<p>The first way I considered is to create a simple async identity function, but I was wondering whether such a function already exists (or whether there is a cleaner way):</p>
<pre><code>async def make_awaitable(value):
return value
</code></pre>
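<p>For illustration, a minimal sketch of how such an async identity function behaves when awaited:</p>

```python
import asyncio

async def make_awaitable(value):
    # async identity: simply hands the value back when awaited
    return value

async def main():
    result = await make_awaitable(42)
    print(result)

asyncio.run(main())
```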
|
<python><asynchronous><async-await>
|
2023-05-17 11:16:41
| 2
| 3,937
|
hl037_
|
76,271,327
| 16,797,805
|
Python Azure Function cannot connect to Blob storage with Firewall
|
<p>I have a function app (with python runtime and consumption plan) that once a month must load some data from a public source, process them and put them to an azure blob storage, that instead must remain private.</p>
<p>This flow works when I tested it leaving the blob storage open to public.</p>
<p>Then I added some IP addresses to the firewall of the blob storage: in particular, my own private IP and the whole list of outbound IPs available in the "networking" section of the function app. In this case the function cannot communicate with the blob storage and the required files are not written there.</p>
<p><a href="https://learn.microsoft.com/en-us/azure/azure-functions/configure-networking-how-to?tabs=portal" rel="nofollow noreferrer">I know there is the possibility</a> to use a premium plan to enable the virtual network integration (that I think it should work smoothly), but I would prefer to not have additional extra costs.</p>
<p>Also, I can use <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-create-vnet" rel="nofollow noreferrer">a private endpoint connection</a>, but again this entails having a premium plan for the app and a higher cost.</p>
<p>Why can't the function communicate with the storage even though I added all of its available outbound IPs? Are there any additional IPs that need to be added? I also read <a href="https://learn.microsoft.com/en-us/azure/azure-functions/ip-addresses?tabs=portal#find-outbound-ip-addresses" rel="nofollow noreferrer">this doc</a>, but I cannot find the required information in the JSON file as explained.</p>
<p>Can someone help on this? Many thanks!</p>
|
<python><azure-functions><azure-blob-storage>
|
2023-05-17 10:56:42
| 1
| 857
|
mattiatantardini
|
76,271,318
| 6,067,528
|
How to trim pyspark schema output
|
<p>My pyspark dataframe has the following schema...</p>
<p><code>DataFrame[ExternalData: struct<provider:string,data:string,modality:array<string>>]</code></p>
<p>If I write (where <code>sdf</code> is my pyspark dataframe)..</p>
<p><code>sdf.schema</code></p>
<p>I get...</p>
<pre><code>StructType([StructField('ExternalData', StructType([StructField('provider', StringType(), True), StructField('data', StringType(), True), StructField('modality', ArrayType(StringType(), True), True)]), True)])
</code></pre>
<p>How can I get just the below?</p>
<pre><code>StructType([StructField('provider', StringType(), True), StructField('data', StringType(), True), StructField('modality', ArrayType(StringType(), True), True)])
</code></pre>
<p>There is a subtle difference in that the <code>ExternalData</code> <code>StructType</code> and <code>StructField</code> have been removed. The reason I need to do this is because the system I'm integrating parquet with expects the parquet schema in this format, where the <code>ExternalData</code> field and struct are passed elsewhere.</p>
<p>Does anyone have any advice?</p>
|
<python><pyspark>
|
2023-05-17 10:55:05
| 1
| 1,313
|
Sam Comber
|
76,271,284
| 11,327,283
|
Plot the regression line of the machine learning prediction model in python
|
<p>I would like to plot the regression line of the machine learning model on the data scatter plot as shown below:</p>
<p><a href="https://i.sstatic.net/fnrRD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fnrRD.png" alt="enter image description here" /></a></p>
<p>This is what I have tried so far:</p>
<p>Firstly, take any two dependent variables and split the data to train and test the model.</p>
<pre><code>
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size = 0.25, random_state = 42)
GradientBoostingRegressor().fit(X_train, y_train).predict(X_test)
</code></pre>
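<p>To make the goal concrete, here is a minimal sketch with synthetic single-feature data (the data and variable names are made up, since my real features/labels are not shown): predict on a sorted grid of x-values and draw those predictions as a line over the scatter:</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(200, 1))         # a single feature, for plotting
y = 2.0 * X[:, 0] + rng.normal(0, 1.0, 200)   # noisy linear target

model = GradientBoostingRegressor().fit(X, y)

# evaluating on a sorted grid makes the prediction draw as a line
grid = np.linspace(0, 10, 100).reshape(-1, 1)
y_line = model.predict(grid)

fig, ax = plt.subplots()
ax.scatter(X[:, 0], y, s=10, alpha=0.5, label="data")
ax.plot(grid[:, 0], y_line, color="red", label="model prediction")
ax.legend()
```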
|
<python><machine-learning><scikit-learn><regression><random-forest>
|
2023-05-17 10:50:56
| 1
| 658
|
mArk
|
76,271,255
| 4,105,440
|
How to pass a parameter to an aggregation function in Dask
|
<p>I just discovered today that a sum of a column full of NaNs is 0 in <code>pandas</code> and <code>Dask</code> (why?!). I need a sum of all NaNs to be NaN, because having NaNs means those values are missing, so their sum should be NaN as well.</p>
<p>From the documentation it appears that you have to pass <code>min_count=1</code> to get NaN back in that case. However, I'm doing the sum inside an aggregation that looks like this</p>
<pre class="lang-py prettyprint-override"><code>ddf.groupby("code").aggregate({'rain':'sum'}).compute()
</code></pre>
<p>Adding the argument <code>min_count</code> to the <code>aggregate</code> function seems to have no impact, while using a <code>lambda</code> in place of <code>'sum'</code> causes an error.</p>
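<p>The underlying behaviour is reproducible in plain pandas (a minimal sketch; as far as I can tell it is <code>min_count=1</code>, not <code>0</code>, that makes an all-NaN sum come back as NaN):</p>

```python
import numpy as np
import pandas as pd

s = pd.Series([np.nan, np.nan])

default_sum = s.sum()            # 0.0: the default min_count=0 treats all-NaN as an empty sum
strict_sum = s.sum(min_count=1)  # nan: at least one non-NA value is now required
print(default_sum, strict_sum)
```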
|
<python><pandas><dask>
|
2023-05-17 10:46:15
| 1
| 673
|
Droid
|
76,271,210
| 3,907,364
|
SQLAlchemy: Add where clause based on count of many-to-many relation
|
<p>I am trying to create a model in which I can have different projects and each project has one or more authors. Since every author may be involved with multiple projects, this is a many-to-many relation. The setup for this is as follows</p>
<pre><code>from typing import List
from typing import Optional
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.orm import Mapped
from sqlalchemy.orm import mapped_column
from sqlalchemy.orm import relationship
from sqlalchemy import ForeignKey
from sqlalchemy import ForeignKeyConstraint
from sqlalchemy import Table
from sqlalchemy import Column
from sqlalchemy import UniqueConstraint
class Base(DeclarativeBase):
pass
author_project_association = Table(
"author_project_associations",
Base.metadata,
Column(
"author_id",
ForeignKey("authors.id", onupdate="CASCADE", ondelete="CASCADE"),
primary_key=True,
),
Column(
"project_id",
primary_key=True,
),
Column(
"project_name",
primary_key=True,
),
ForeignKeyConstraint(
["project_id", "project_name"], ["projects.id", "projects.name"]
),
)
class Author(Base):
__tablename__ = "authors"
id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
name: Mapped[str] = mapped_column(unique=True)
projects: Mapped[List["Project"]] = relationship(
back_populates="authors", secondary=author_project_association
)
class Project(Base):
__tablename__ = "projects"
id: Mapped[int] = mapped_column(primary_key=True)
name: Mapped[str] = mapped_column(primary_key=True)
authors: Mapped[List[Author]] = relationship(
back_populates="projects", secondary=author_project_association
)
calculations: Mapped[List["Calculation"]] = relationship(back_populates="project")
</code></pre>
<p>Now, I would like to query for a project, given its name and the list of authors. I came as far as</p>
<pre><code>name = "Sample"
authors = ["John", "Kate"]
project = session.scalars(
select(Project)
.where(*[Project.authors.any(Author.name == current) for current in authors])
.where(Project.name == name)
).first()
</code></pre>
<p>which I think should almost work, except that it currently only checks that all provided author names are indeed authors of the returned project, but that project may still have additional authors that are not part of the input list of authors.</p>
<p>How can I restrict the returned project to the one that matches the list of author names exactly?</p>
<p>My attempt was to try to embed something like a <code>where(len(Project.authors) == len(authors))</code> into my query, but that doesn't work and I also failed to get a sub-query that does the counting to work inside the <code>where</code> clause.</p>
|
<python><sqlalchemy>
|
2023-05-17 10:41:34
| 1
| 3,658
|
Raven
|
76,271,031
| 9,363,181
|
Makefile script not building the python project zip properly
|
<p>I have the below <code>Makefile</code> script inside a <code>docker container</code>; it executes a few commands to build the zip of the project, then initializes the terraform directory and uploads it to AWS. However, the zip which is uploaded for the lambda function raises a <code>module not found</code> error. Below is my script:</p>
<pre><code>run : build_and_deploy
build_and_deploy :
apt-get update && apt-get install zip; \
python -m pip install --upgrade pip; \
cd provisioner && pip install . --target=dependencies && cd dependencies && zip -r ../snp.zip . && cd .. && cd src && zip -r ../snp.zip . && cd .. && cd .. ; \
cd iac && terraform init -backend-config="bucket=some-state-8435202378" -backend-config="key=state/278435202378/somename.somedev.tfstate" -backend-config="region=eu-west-1" && terraform apply -auto-approve -var "environment=dev" -var "lambda_file=../provisioner/snp.zip" -var "lambda_handler=common.api.lambda_handler.lambda_handler" -var "domain_name=specific-snp-dev.dev.boot.com" -var "truststore_pem=${TRUSTSTORE_PEM}" -var "aws_route53_zone_name=dev.boot.com" -var 'tags={scope="boot.snowflake.specific.provisioner", alwaysOn=true, ownedBy="somename.Snowflake.SnowflakeSpecificProvisioner", createdBy="somename.Snowflake.SnowflakeSpecificProvisioner"}'; \
clean :
rm -rf __pycache__
</code></pre>
<p>When I deploy the project via a GitLab pipeline using the same commands, it works fine and I don't get any errors. However, when running this script in the Docker container, I get the below error:</p>
<pre><code>{"errorMessage": "Unable to import module 'common.api.lambda_handler': /var/task/cryptography/hazmat/bindings/_rust.abi3.so: cannot open shared object file: No such file or directory", "errorType": "Runtime.ImportModuleError", "stackTrace": []}
</code></pre>
|
<python><amazon-web-services><makefile><terraform><gitlab>
|
2023-05-17 10:20:00
| 0
| 645
|
RushHour
|
76,270,794
| 5,847,976
|
Copy data from one numpy array to another with possibly mismatched sizes
|
<p>So my issue is that I'm trying to paste a small image into a larger image with the typical numpy syntax :</p>
<pre><code>large_im = np.linspace(0,1,20)[:,None]*np.ones([1,20])
small_im = np.linspace(1,0,10)[None,:]*np.ones([10,1])
a0,b0 = 5,5
large_im[a0:a0+small_im.shape[0],b0:b0+small_im.shape[1]] = small_im
plt.figure()
plt.imshow(large_im)
</code></pre>
<p><a href="https://i.sstatic.net/GTPuD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GTPuD.png" alt="typical case" /></a></p>
<p>This works fine most of the time but there are edge cases, where part of the small image hangs outside of the large image and numpy doesn't like it.</p>
<p>Like so :</p>
<pre><code>a0,b0 = 15,15
large_im[a0:a0+small_im.shape[0],b0:b0+small_im.shape[1]] = small_im
</code></pre>
<p>Or :</p>
<pre><code>a0,b0 = -5,-5
large_im[a0:a0+small_im.shape[0],b0:b0+small_im.shape[1]] = small_im
</code></pre>
<p><a href="https://i.sstatic.net/jMpEn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jMpEn.png" alt="edge case" /></a><br />
These two snippets raise a <code>ValueError</code> because numpy indexing produces a target array that does not match the <code>small_im</code> size.</p>
<p>Right now I have to tediously handle all those edge cases by writing a bunch of code that checks each dimension of the source and target area and trims the sizes down so that everything fits properly.</p>
<p><strong>Do you know if there is a numpy function that does this automagically ?</strong></p>
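<p>For concreteness, the tedious clipping logic I currently write by hand looks roughly like this (a minimal sketch; the helper name <code>paste</code> is my own):</p>

```python
import numpy as np

def paste(large, small, a0, b0):
    """Copy ``small`` into ``large`` at (a0, b0), clipping any overhang."""
    # overlapping region inside the large array
    a_lo, a_hi = max(a0, 0), min(a0 + small.shape[0], large.shape[0])
    b_lo, b_hi = max(b0, 0), min(b0 + small.shape[1], large.shape[1])
    if a_lo >= a_hi or b_lo >= b_hi:
        return large  # no overlap at all
    # the matching region inside the small array
    large[a_lo:a_hi, b_lo:b_hi] = small[a_lo - a0:a_hi - a0, b_lo - b0:b_hi - b0]
    return large

large_im = np.zeros((20, 20))
small_im = np.ones((10, 10))
paste(large_im, small_im, 15, 15)   # bottom-right overhang: no error
paste(large_im, small_im, -5, -5)   # top-left overhang: no error
print(large_im[19, 19], large_im[0, 0])
```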
|
<python><numpy><numpy-slicing>
|
2023-05-17 09:53:32
| 1
| 3,542
|
jadsq
|
76,270,705
| 3,590,067
|
How to assign specific color to a category in an array
|
<p>I want to show the land use classes of a region.</p>
<p>I have different raster files that contain land use information. Each pixel value corresponds to a specific land use class. I want to assign a color to each class as shown below. This is what I am doing:</p>
<pre><code>from matplotlib.colors import ListedColormap
import rasterio
import numpy
import earthpy.plot as ep
cmap_values = [0, 11, 22, 33, 40, 55, 66, 77]
cmap_colors = ['white', ## 0: No Data
'black', ## 11: urban
'darkorange', ## 22: Cropland
'brown', ## 33: Pasture
'darkgreen', ## 40: Forest
'purple', ## 55: Grass
'gray', ## 66: Other land
'blue' ## 77: water
]
cmap = ListedColormap(cmap_colors)
value_text = ['No Data', 'Urban', 'Cropland', 'Pasture',
'Forest', 'Grass/shrubland',
'Other land', 'Water']
</code></pre>
<p>In some cases I do not have all the classes in the raster. For instance, in the following example I only have the following classes:</p>
<pre><code>src = rasterio.open('myFile.tif')
data = src.read(1)
print(np.unique(data))
array([11, 22, 33, 40, 55, 77], dtype=uint8)
</code></pre>
<p>If I try to show the image, it seems that it does not assign the right values to the colorbar but assigns the colors starting from the first value. In the figure below the color <code>white</code> is for the <code>urban</code> class, <code>black</code> for <code>cropland</code>, <code>darkorange</code> for <code>pasture</code> and so on. How can I keep the same color-to-class mapping in every case that I encounter?</p>
<pre><code>f,ax=plt.subplots()
im = ax.imshow(data, cmap=cmap)
ep.draw_legend(im, titles=value_text, classes=cmap_values)
</code></pre>
<p><a href="https://i.sstatic.net/mjVBO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mjVBO.png" alt="enter image description here" /></a></p>
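<p>My current understanding is that the colormap alone cannot know the class values, so I have been experimenting with <code>BoundaryNorm</code> to pin each class value to its color slot (a sketch; placing the boundaries halfway between consecutive class values is my own choice):</p>

```python
from matplotlib.colors import BoundaryNorm, ListedColormap

cmap_values = [0, 11, 22, 33, 40, 55, 66, 77]
cmap_colors = ['white', 'black', 'darkorange', 'brown',
               'darkgreen', 'purple', 'gray', 'blue']
cmap = ListedColormap(cmap_colors)

# boundaries halfway between consecutive class values, plus outer edges,
# so every class value always maps to the same color slot
bounds = ([cmap_values[0] - 0.5]
          + [(a + b) / 2 for a, b in zip(cmap_values, cmap_values[1:])]
          + [cmap_values[-1] + 0.5])
norm = BoundaryNorm(bounds, cmap.N)

# usage would then be: ax.imshow(data, cmap=cmap, norm=norm)
print(int(norm(11)), int(norm(40)), int(norm(77)))
```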
|
<python><matplotlib><colorbar><colormap><rasterio>
|
2023-05-17 09:44:45
| 1
| 7,315
|
emax
|
76,270,546
| 5,568,409
|
How to correct my misunderstandings of subplots
|
<p>I seemed to understand how the subplots and ax options work in <code>Matplotlib</code>, but I just did a graphical test and it doesn't work at all as expected, which shows me that I still don't quite understand the syntax of the statements.</p>
<p>The program is below; I should get 2 separate graphs but there is only 1, which seems to overlay the 2 datasets.</p>
<p>[NOTE: the code must use <code>sns.histplot</code>; the <code>p1</code> and <code>p2</code> data do not correspond to the reality of the data to be presented.]</p>
<p>Could you explain to me what my typing errors are?</p>
<pre><code>p1 = [7.86706728, 2.07023424, 8.59099644, 7.07850226, 9.79575806]
p2 = [1.48705512, 0.3142216 , 0.3407479 , 0.32947036, 0.32947036]
fig, ax = plt.subplots(1, 2, figsize = (8,3), tight_layout = True)
ax[0] = sns.histplot(data = None, x = p1, bins = 25, discrete = False, shrink = 1.0,
stat = "probability", element = "bars", color = "green", kde = False)
ax[0].set_title("p1", fontsize = '15')
ax[0].set_xlabel("p1" , fontsize = '15')
ax[0].set_ylabel("Probability", fontsize = '15')
ax[1] = sns.histplot(data = None, x = p2, bins = 25, discrete = False, shrink = 1.0,
stat = "probability", element = "bars", color = "green", kde = False)
ax[1].set_title("p2", fontsize = '15')
ax[1].set_xlabel("p2" , fontsize = '15')
ax[1].set_ylabel("Probability", fontsize = '15')
plt.show()
</code></pre>
<p>Wrong output:
<a href="https://i.sstatic.net/2XABd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2XABd.png" alt="enter image description here" /></a></p>
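<p>For comparison, a plain-Matplotlib sketch of the two-axes layout I expect, drawing directly on each <code>Axes</code> (using <code>hist</code> instead of <code>sns.histplot</code> and the non-interactive <code>Agg</code> backend, both my own simplifications):</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

p1 = [7.86706728, 2.07023424, 8.59099644, 7.07850226, 9.79575806]
p2 = [1.48705512, 0.3142216, 0.3407479, 0.32947036, 0.32947036]

fig, ax = plt.subplots(1, 2, figsize=(8, 3), tight_layout=True)
ax[0].hist(p1, bins=25, color="green")   # drawn on the first Axes
ax[0].set_title("p1")
ax[1].hist(p2, bins=25, color="green")   # drawn on the second Axes
ax[1].set_title("p2")
print(len(fig.axes))
```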
|
<python><matplotlib><seaborn><subplot>
|
2023-05-17 09:26:58
| 1
| 1,216
|
Andrew
|
76,270,413
| 9,513,536
|
Understanding why the leiden algorithm is not able to find communities for the iris dataset
|
<p>Good morning!</p>
<p>I am trying to understand the Leiden algorithm and its usage to find partitions and clusterings.
The example provided in the documentation already finds a partition directly, such as the following:</p>
<pre><code>import leidenalg as la
import igraph as ig
G = ig.Graph.Famous('Zachary')
partition = la.find_partition(G, la.ModularityVertexPartition)
G.vs['cluster'] = partition.membership
ig.plot(partition,vertex_size = 30)
</code></pre>
<p>If one checks <code>partition.membership</code>, it already gets 4 clusters.</p>
<p>However, I am trying to do a similar thing with the <code>iris dataset</code> and the algorithm is not able to find clusters.
I have tried getting the X variables and creating either:</p>
<ul>
<li>a correlation matrix, or</li>
<li>pairwise distances,</li>
</ul>
<p>but those do not work well (not even by scaling the values), because the algorithm is not able to create clusters based on the observations. I assume correlations (or pairwise distances) are not good for separating them.
What am I not understanding?</p>
<p>here is the code for the correlation matrix:</p>
<pre><code>import numpy as np
from sklearn import datasets
import igraph as ig
import leidenalg
import cairo
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import pairwise_distances
# Load the Iris dataset
iris = datasets.load_iris()
X = iris.data # Features
y = iris.target # Class labels
# Create an adjacency matrix based on observation similarity
# adj_matrix = abs(1-np.corrcoef(X))
adj_matrix = pairwise_distances(X)
print(adj_matrix)
# Create an igraph graph object
graph = ig.Graph.Weighted_Adjacency(adj_matrix)
# Apply the Leiden algorithm for community detection evaluating the nº of clusters created by changing the resolution parameter.
for i in np.arange(0.9,1.05,0.05):
partition = leidenalg.find_partition(graph, leidenalg.CPMVertexPartition,
resolution_parameter = i)
print(i,len(np.unique(partition.membership)) )
#0.9 1
#0.9500000000000001 1
#1.0 150
#1.0500000000000003 150
</code></pre>
<p>As one can see, once it gets to 1, there are 150 clusters (equal to the nº of observations), and before that, it considers everything 1 cluster. Let me know your ideas.</p>
<p>Thank you for your time</p>
|
<python><graph><cluster-analysis><leiden>
|
2023-05-17 09:12:13
| 2
| 2,847
|
Carles
|
76,270,202
| 383,793
|
How can I express two types are consistent?
|
<p>Say I'm using libraries that handle Json and they both define their own types that I know are consistent in the sense of <a href="https://peps.python.org/pep-0483/#summary-of-gradual-typing" rel="nofollow noreferrer">PEP 483</a>.</p>
<p>How can I express that?</p>
<pre class="lang-py prettyprint-override"><code>from A import create_some_json
from B import consume_some_json
# and the types are
create_some_json: Callable[[...], A.JsonValue]
consume_some_json: Callable[[B.JsonValue], ...]
</code></pre>
<p>I know I can use <code>typing.cast</code> when I write a function that uses the return value of one and passes it to the other. And that's okay-ish, I guess. But this becomes cumbersome with api's that use combinators, like e.g. <code>hypothesis.SearchStrategy</code>. Do I really have to pollute the call-stack with a decorator that does the cast, just for the type checker? And with this example, given the nested nature of Json; what happens with the types of the contents of a Json container when I cast it to the type of the other library? Or in other words: what does a cast called on a generic do to its covariance?</p>
|
<python><python-typing>
|
2023-05-17 08:50:20
| 0
| 6,396
|
Chris Wesseling
|
76,270,084
| 1,226,020
|
Installing AWS EB CLI using pip fails due to breaking changes in urllib 2
|
<p>Suddenly my GitHub Actions CI workflow failed to install AWS EB CLI, due to breaking changes in <code>urllib3</code> v2:</p>
<pre><code>ERROR: botocore 1.29.81 has requirement urllib3<1.27,>=1.25.4, but you'll have urllib3 2.0.2
</code></pre>
<p>When trying to use <code>eb deploy</code> anyway, this is the error:</p>
<pre><code>ImportError: cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_' (/home/runner/.local/lib/python3.8/site-packages/urllib3/util/ssl_.py)
</code></pre>
<p>Apparently, <code>DEFAULT_CIPHERS</code> was removed from <code>urllib3</code> in v2.</p>
<p>My command for installing <code>awsebcli</code> is simply this:</p>
<pre><code>pip install awsebcli==3.20.5 --upgrade --user
</code></pre>
<p>I really suck at Python as it's not my daily tool, so I want to know how this issue is fixed the "proper" way. And who's really at fault here? Is it <code>awsebcli</code> that fails to properly pin <code>urllib3</code> at <code><2</code>? Apparently <code>botocore</code> requires <code><1.27,>=1.25.4</code>, but still for some reason <code>2.0.2</code> was installed - why?
Is the correct solution to separately install <code>urllib3<2</code>, or to somehow instruct <code>awsebcli</code> to use <code>urllib3<2</code> as a command line option, or to raise an issue with <code>awsebcli</code>?</p>
<p>How do I fix this 1) short-term just to get on with my business and 2) the "proper" way?</p>
<p>Full output of installation:</p>
<pre><code>> pip install awsebcli==3.20.5 --upgrade --user
Collecting awsebcli==3.20.5
Downloading awsebcli-3.20.5.tar.gz (261 kB)
Requirement already satisfied, skipping upgrade: PyYAML<5.5,>=5.3.1 in /usr/lib/python3/dist-packages (from awsebcli==3.20.5) (5.3.1)
Collecting blessed>=1.9.5
Downloading blessed-1.20.0-py2.py3-none-any.whl (58 kB)
Collecting botocore<1.29.82,>1.23.41
Downloading botocore-1.29.81-py3-none-any.whl (10.5 MB)
Collecting cement==2.8.2
Downloading cement-2.8.2.tar.gz (165 kB)
Requirement already satisfied, skipping upgrade: colorama<0.4.4,>=0.2.5 in /usr/lib/python3/dist-packages (from awsebcli==3.20.5) (0.4.3)
Collecting docker-compose<1.26.0,>=1.25.2
Downloading docker_compose-1.25.5-py2.py3-none-any.whl (139 kB)
Collecting pathspec==0.10.1
Downloading pathspec-0.10.1-py3-none-any.whl (27 kB)
Collecting python-dateutil<3.0.0,>=2.1
Downloading python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Requirement already satisfied, skipping upgrade: requests<=2.26,>=2.20.1 in /usr/lib/python3/dist-packages (from awsebcli==3.20.5) (2.22.0)
Collecting semantic_version==2.8.5
Downloading semantic_version-2.8.5-py2.py3-none-any.whl (15 kB)
Requirement already satisfied, skipping upgrade: setuptools>=20.0 in /usr/lib/python3/dist-packages (from awsebcli==3.20.5) (45.2.0)
Requirement already satisfied, skipping upgrade: six<1.15.0,>=1.11.0 in /usr/lib/python3/dist-packages (from awsebcli==3.20.5) (1.14.0)
Collecting termcolor==1.1.0
Downloading termcolor-1.1.0.tar.gz (3.9 kB)
Collecting urllib3>=1.26.5
Downloading urllib3-2.0.2-py3-none-any.whl (123 kB)
Collecting wcwidth<0.2.0,>=0.1.7
Downloading wcwidth-0.1.9-py2.py3-none-any.whl (19 kB)
Collecting jmespath<2.0.0,>=0.7.1
Downloading jmespath-1.0.1-py3-none-any.whl (20 kB)
Collecting cached-property<2,>=1.2.0
Downloading cached_property-1.5.2-py2.py3-none-any.whl (7.6 kB)
Collecting texttable<2,>=0.9.0
Downloading texttable-1.6.7-py2.py3-none-any.whl (10 kB)
Collecting docopt<1,>=0.6.1
Downloading docopt-0.6.2.tar.gz (25 kB)
Collecting docker[ssh]<5,>=3.7.0
Downloading docker-4.4.4-py2.py3-none-any.whl (147 kB)
Requirement already satisfied, skipping upgrade: jsonschema<4,>=2.5.1 in /usr/lib/python3/dist-packages (from docker-compose<1.26.0,>=1.25.2->awsebcli==3.20.5) (3.2.0)
Collecting websocket-client<1,>=0.32.0
Downloading websocket_client-0.59.0-py2.py3-none-any.whl (67 kB)
Collecting dockerpty<1,>=0.4.1
Downloading dockerpty-0.4.1.tar.gz (13 kB)
Collecting paramiko>=2.4.2; extra == "ssh"
Downloading paramiko-3.1.0-py3-none-any.whl (211 kB)
Collecting pynacl>=1.5
Downloading PyNaCl-1.5.0-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl (856 kB)
Collecting bcrypt>=3.2
Downloading bcrypt-4.0.1-cp36-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (593 kB)
Requirement already satisfied, skipping upgrade: cryptography>=3.3 in /home/runner/.local/lib/python3.8/site-packages (from paramiko>=2.4.2; extra == "ssh"->docker[ssh]<5,>=3.7.0->docker-compose<1.26.0,>=1.25.2->awsebcli==3.20.5) (39.0.2)
Requirement already satisfied, skipping upgrade: cffi>=1.4.1 in /home/runner/.local/lib/python3.8/site-packages (from pynacl>=1.5->paramiko>=2.4.2; extra == "ssh"->docker[ssh]<5,>=3.7.0->docker-compose<1.26.0,>=1.25.2->awsebcli==3.20.5) (1.15.1)
Requirement already satisfied, skipping upgrade: pycparser in /home/runner/.local/lib/python3.8/site-packages (from cffi>=1.4.1->pynacl>=1.5->paramiko>=2.4.2; extra == "ssh"->docker[ssh]<5,>=3.7.0->docker-compose<1.26.0,>=1.25.2->awsebcli==3.20.5) (2.21)
Building wheels for collected packages: awsebcli, cement, termcolor, docopt, dockerpty
Building wheel for awsebcli (setup.py): started
Building wheel for awsebcli (setup.py): finished with status 'done'
Created wheel for awsebcli: filename=awsebcli-3.20.5-py3-none-any.whl size=363782 sha256=bc52de3458e34fdc14f637bf8e811fbef93e8e0b20afcf62164a5d1908228e2d
Stored in directory: /home/runner/.cache/pip/wheels/c5/98/dd/aa92c4cf9550e51cb6816fb1d143898a00554dc6e94de25e66
Building wheel for cement (setup.py): started
Building wheel for cement (setup.py): finished with status 'done'
Created wheel for cement: filename=cement-2.8.2-py3-none-any.whl size=99712 sha256=3f334f4c68ade19447b0016ec8fa647b4e3b2c1cbcafcc63a32700fcbda2a35b
Stored in directory: /home/runner/.cache/pip/wheels/0b/7b/ab/00b71f5a2fc054823e74d65a9376568f1223c0cca63fc20d17
Building wheel for termcolor (setup.py): started
Building wheel for termcolor (setup.py): finished with status 'done'
Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4830 sha256=699a8487c7089c85c3538bf175bb8d7190c6c99c22d155a2b20049d03b064430
Stored in directory: /home/runner/.cache/pip/wheels/a0/16/9c/5473df82468f958445479c59e784896fa24f4a5fc024b0f501
Building wheel for docopt (setup.py): started
Building wheel for docopt (setup.py): finished with status 'done'
Created wheel for docopt: filename=docopt-0.6.2-py2.py3-none-any.whl size=13704 sha256=0f8550a2be8324265c088430bf120eff949c982d2f2b228055d5412e92a5e701
Stored in directory: /home/runner/.cache/pip/wheels/56/ea/58/ead137b087d9e326852a851351d1debf4ada529b6ac0ec4e8c
Building wheel for dockerpty (setup.py): started
Building wheel for dockerpty (setup.py): finished with status 'done'
Created wheel for dockerpty: filename=dockerpty-0.4.1-py3-none-any.whl size=16604 sha256=fd663c2086f7b1223321239ff451413eaa89256c7f9065c3a9ed92a90375bbd7
Stored in directory: /home/runner/.cache/pip/wheels/1a/58/0d/9916bf3c72e224e038beb88f669f68b61d2f274df498ff87c6
Successfully built awsebcli cement termcolor docopt dockerpty
ERROR: botocore 1.29.81 has requirement urllib3<1.27,>=1.25.4, but you'll have urllib3 2.0.2 which is incompatible.
</code></pre>
|
<python><pip><amazon-elastic-beanstalk><ebcli>
|
2023-05-17 08:35:57
| 0
| 9,405
|
JHH
|
76,270,059
| 3,209,276
|
Upgrade a self hosted python package from within a python session
|
<p>We are developing a python package hosted on our own Bitbucket. So far the upgrade of the package works fine from the terminal:</p>
<pre class="lang-bash prettyprint-override"><code>python -m pip install --upgrade --no-cache-dir git+ssh://git@{package_git_url}
</code></pre>
<p>The idea is to let the user upgrade the package from within their notebook, as the package is in an early stage of development. For this we use this method within the package:</p>
<pre class="lang-py prettyprint-override"><code>def upgrade(self):
subprocess.run(
[
"python",
"-m",
"pip",
"install",
"--upgrade",
"--no-cache-dir",
self.config.PIP.URL,
]
)
</code></pre>
<p>But when calling this function within python, it does not upgrade the package in the virtual environment. Hence my question: is it possible to upgrade a python package from within a python session?</p>
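<p>One thing I am unsure about (an assumption on my part): the bare <code>"python"</code> passed to <code>subprocess.run</code> may resolve to a different interpreter than the one running the session, so I sketched a variant that pins it with <code>sys.executable</code> (the helper name <code>build_upgrade_command</code> and the URL are made up):</p>

```python
import sys

def build_upgrade_command(pip_url):
    # use the exact interpreter of the current session, not whatever
    # "python" happens to resolve to on the PATH
    return [
        sys.executable, "-m", "pip", "install",
        "--upgrade", "--no-cache-dir", pip_url,
    ]

cmd = build_upgrade_command("git+ssh://git@example.org/pkg.git")  # placeholder URL
print(cmd[0] == sys.executable)
# subprocess.run(cmd, check=True)  # then re-import / restart the kernel
```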
|
<python><pip>
|
2023-05-17 08:32:57
| 1
| 386
|
Rigoberta Raviolini
|
76,270,036
| 4,865,723
|
Create table instance not connected to a document with python-docx?
|
<p>This is an X-Post related to <a href="https://github.com/python-openxml/python-docx/issues/1190" rel="nofollow noreferrer">Issue #1190</a> at <code>python-docx</code> upstream.</p>
<p>This is how I currently create a table in a docx document:</p>
<pre><code>doc = docx.Document()
tab = doc.add_table(rows=300, cols=5)
</code></pre>
<p>So the table object is "connected" to its parent document.</p>
<p>Is there a way to create a table object <strong>without</strong> having a connection to a parent object and add it to the document later? Somehow like this?</p>
<pre><code>doc = docx.Document()
tab_one = docx.Table(rows=300, cols=5)
tab_two = docx.Table(rows=100, cols=3)
doc.add_table(tab_two)
doc.add_table(tab_one)
</code></pre>
<p>Or (as a workaround) can I move a table object from one document instance to another like this?</p>
<pre><code>doc_temp = docx.Document()
tab = doc_temp.add_table(rows=300, cols=5)
doc_main = docx.Document()
doc_main.add_table(tab)
</code></pre>
<p>The background of my question is that I create multiple tables with 100-300 rows and do formatting operations on each of their cells. So there is a lot of row and cell iteration going on, which eats a lot of performance and time.</p>
<p>Doing this in multiprocessing where each worker has its own table object would speed up the process. I would like to create multiple tables in parallel and adding them to the document in a later step.</p>
<p>It is also clear that multiprocessing isn't the whole and best solution for a performance problem. Such a problem isn't solved just with adding more CPU resources into it. The algorithm itself should be optimized. For me the multiprocessing is just one step of the way to a better solution.</p>
<p><strong>EDIT</strong>: As a real world example <a href="https://codeberg.org/buhtz/buhtzology/src/commit/43ba073ebb444273f93bc6e886e35512fcda0fec/src/buhtzology/report.py#L1647" rel="nofollow noreferrer">here</a> you can see how I create docx-tables based on <code>pandas.DataFrame</code> objects.</p>
|
<python><python-docx>
|
2023-05-17 08:30:11
| 1
| 12,450
|
buhtz
|
76,269,989
| 10,818,184
|
what happens when boolean mask of numpy array is just a single value
|
<p>I know that a boolean array can be used as a mask in a numpy array.
But when I use a single value as a mask in a numpy array, a strange thing happens:</p>
<pre><code>x = [1,2,3,4,5]
result = np.array(x)[True]
print(result)
</code></pre>
<p>It turns out to be <code>[[1 2 3 4 5]]</code>. Why is there a double bracket? What does NumPy do in this case?</p>
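<p>A minimal sketch of the shapes involved, comparing the scalar boolean index with <code>np.newaxis</code>:</p>

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5])

# a 0-d boolean index prepends a new axis of length 1 (True) or 0 (False)
print(x[True].shape)   # (1, 5)
print(x[False].shape)  # (0, 5)
print(x[None].shape)   # (1, 5), the same shape as x[True]
```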
|
<python><numpy>
|
2023-05-17 08:24:57
| 1
| 488
|
li_jessen
|
76,269,948
| 10,251,146
|
Pushing polar dataframe to redis
|
<p>What is the best way to push a polars dataframe to redis ?</p>
<p>Currently I am trying to read the _io.BytesIO buffer returned by write_ipc</p>
<pre><code>redis_instance.set("key",df.write_ipc(file = None, compression="lz4").read())
</code></pre>
<p>This doesn't seem to work as the key in redis is empty</p>
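<p>A stdlib-only sketch of what I suspect is happening with the buffer (a guess on my part): after writing, the <code>BytesIO</code> cursor sits at the end, so <code>.read()</code> returns nothing until the buffer is rewound:</p>

```python
import io

buf = io.BytesIO()
buf.write(b"ipc bytes")

first_read = buf.read()    # b'' because the cursor is at the end after writing
buf.seek(0)
second_read = buf.read()   # b'ipc bytes'
print(first_read, second_read, buf.getvalue())  # getvalue() ignores the cursor
```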
|
<python><redis><python-polars><py-redis>
|
2023-05-17 08:19:58
| 1
| 459
|
linus heinz
|
76,269,934
| 930,675
|
Issues creating a regex to extract code from Markdown
|
<p>I'm trying to extract code from a string of Markdown and I'm very close. My code is:</p>
<pre><code>import re
string = """
Lorem ipsum
```python
print('foo```bar```foo')
print('foo```bar```foo')
```
Lorem ipsum
"""
pattern = r'```(?:\w+\n)?(.*?)(?!.*```)'
result = re.search(pattern, string, re.DOTALL).group(1)
print(result)
</code></pre>
<p>And the result of this is:</p>
<pre><code>print('foo```bar```foo')
print('foo```bar```foo')
`
</code></pre>
<p>You'll notice the only problem I have is the extra backtick at the end of that code block. I can't figure out what's matching that or how to remove it but I'm certain it has something to do with the negative lookahead I'm using.</p>
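<p>For reference, a greedy variant I have been experimenting with, which anchors on the last fence line (a sketch only; it assumes exactly one fenced block and that inline backticks never start a line; the fence string is assembled programmatically here just for readability):</p>

```python
import re

fence = "`" * 3  # a literal triple backtick
string = (
    "\nLorem ipsum\n"
    + fence + "python\n"
    + "print('foo" + fence + "bar" + fence + "foo')\n"
    + "print('foo" + fence + "bar" + fence + "foo')\n"
    + fence + "\nLorem ipsum\n"
)

# the greedy (.*) runs to the LAST "newline + fence", so the inline
# fences inside the print calls are stepped over
pattern = fence + r'(?:\w+\n)?(.*)\n' + fence
result = re.search(pattern, string, re.DOTALL).group(1)
print(result)
```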
|
<python><regex>
|
2023-05-17 08:18:16
| 1
| 3,195
|
Sean Bannister
|
76,269,866
| 3,322,273
|
Plot two 2D-data histograms on the same chart
|
<p>I am looking for a way to plot multiple histograms of 2D data, but I could not find any documentation about addressing such a problem.</p>
<p>Plotting multiple <strong>1-dimensional histograms</strong> on a same chart with Matplotlib has been addressed in <a href="https://stackoverflow.com/questions/6871201/plot-two-histograms-on-single-chart-with-matplotlib">this thread</a> and it also covered in <a href="https://matplotlib.org/stable/gallery/statistics/histogram_multihist.html" rel="nofollow noreferrer">the Matplotlib docs</a>.</p>
<img src="https://i.sstatic.net/MzHP9.png" width="400" />
<p>There are <a href="https://stackoverflow.com/questions/14061061/how-can-i-render-3d-histograms-in-python">threads</a> and documentations that deal with 2D data (<a href="https://matplotlib.org/3.1.0/gallery/mplot3d/hist3d.html" rel="nofollow noreferrer">3d plots</a> or <a href="https://www.python-graph-gallery.com/83-basic-2d-histograms-with-matplotlib" rel="nofollow noreferrer">contours</a>), but for a <strong>single data series</strong>.</p>
<img src="https://matplotlib.org/3.1.0/_images/sphx_glr_hist3d_001.png" width="300" />
<img src="https://i.sstatic.net/DGWz8.png" width="200" />
<p>I could not find any thread/docs on plotting multiple 2D-data histograms on the same chart. Is there a way to do it?</p>
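One possible approach (a sketch, under the assumption that overlapping contour lines are acceptable): compute each 2D histogram with <code>numpy.histogram2d</code> and draw one set of contours per dataset on the same axes, each with its own colormap:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(1000, 2))  # toy dataset 1
b = rng.normal(2.0, 1.0, size=(1000, 2))  # toy dataset 2

fig, ax = plt.subplots()
for data, cmap in [(a, "Blues"), (b, "Reds")]:
    h, xedges, yedges = np.histogram2d(data[:, 0], data[:, 1], bins=30)
    # Transpose h so rows correspond to y, as contour expects.
    ax.contour(h.T, extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]],
               cmap=cmap)
fig.savefig("two_hist2d.png")
```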
|
<python><matplotlib><plot><histogram>
|
2023-05-17 08:09:24
| 1
| 12,360
|
SomethingSomething
|
76,269,750
| 1,025,140
|
Update TOML file with variables
|
<p>I am using this code to update an existing TOML file which contains variables.
The Python <code>toml</code> module is not able to parse this TOML file.</p>
<p>How can I use this code to update the TOML file?</p>
<p>I have also tried <code>from yaml.loader import SafeLoader</code>, but that didn't help.</p>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>import toml,requests,uuid,argparse
import collections.abc
class TOMLConfig:
def __init__(self,config, file):
self.config = config
self.file=file
def update(self):
with open(self.file, "r") as f:
existing_config = toml.load(f)
self._update_config(existing_config, self.config)
with open(self.file, "w") as f:
f.write(toml.dumps(existing_config))
def _update_config(self, existing_config, new_config):
for key, value in new_config.items():
if isinstance(value, collections.abc.Mapping):
self._update_config(existing_config.setdefault(key, {}), value)
else:
existing_config[key] = value
</code></pre>
<p>TOML FIle</p>
<pre class="lang-ini prettyprint-override"><code>[system]
data_directory = "${DATA_DIRECTORY}"
[server]
telemetry_enabled = ${CONFIGS_TELEMETRY_ENABLED}
</code></pre>
<p>Error:</p>
<pre><code>╰─> python3 /tmp/config.py -p "{'data':{'shared_buffer_capacity_mb':8096,'block_cache_capacity_mb':8096,'AA_cache_capacity_mb':2048,'AA_memory_limit_mb':10240},'streaming':{'minimal_scheduling':True}}" -f /tmp/b
Traceback (most recent call last):
File "/opt/homebrew/lib/python3.11/site-packages/toml/decoder.py", line 511, in loads
ret = decoder.load_line(line, currentlevel, multikey,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/toml/decoder.py", line 778, in load_line
value, vtype = self.load_value(pair[1], strictly_valid)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/toml/decoder.py", line 910, in load_value
raise ValueError("This float doesn't have a leading "
ValueError: This float doesn't have a leading digit
</code></pre>
<p>During handling of the above exception, another exception occurred:</p>
<pre><code>Traceback (most recent call last):
File "/tmp/config.py", line 45, in <module>
f=cobj.update()
^^^^^^^^^^^^^
File "/tmp/config.py", line 20, in update
existing_config = toml.load(f)
^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/toml/decoder.py", line 156, in load
return loads(f.read(), _dict, decoder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/toml/decoder.py", line 514, in loads
raise TomlDecodeError(str(err), original, pos)
toml.decoder.TomlDecodeError: This float doesn't have a leading digit (line 5 column 1 char 88)
</code></pre>
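The traceback points at the root cause: <code>${CONFIGS_TELEMETRY_ENABLED}</code> is a bare (unquoted) placeholder, which no TOML parser will accept as a value. One workaround sketch, assuming the placeholders should be resolved before parsing, substitutes them first (the values here are made up for illustration; in practice they might come from <code>os.environ</code>):

```python
import re

try:
    import tomllib  # stdlib on Python 3.11+
except ImportError:
    import toml as tomllib  # the third-party module exposes loads() too

raw = 'data_directory = "${DATA_DIRECTORY}"\ntelemetry_enabled = ${CONFIGS_TELEMETRY_ENABLED}\n'

# Hypothetical substitution values for the placeholders.
values = {"DATA_DIRECTORY": "/tmp/data", "CONFIGS_TELEMETRY_ENABLED": "true"}

# Replace ${VAR} so bare placeholders become valid TOML values.
resolved = re.sub(r"\$\{(\w+)\}", lambda m: values[m.group(1)], raw)
parsed = tomllib.loads(resolved)
```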
|
<python><toml>
|
2023-05-17 07:55:17
| 1
| 980
|
user87005
|
76,269,633
| 6,035,977
|
How to accept only the next word in PyCharm GitHub copilot suggestion
|
<p>I would like to be able to accept only the next word of a GitHub Copilot suggestion instead of the full suggestion. This is possible in VS Code, as documented <a href="https://stackoverflow.com/questions/75114315/how-to-accept-only-the-next-word-in-github-copilot-suggestion#:%7E:text=To%20enable%20a%20shortcut%20to,enter%20your%20desired%20keyboard%20shortcut.">here</a>.
Is there a way to do this in PyCharm too?</p>
|
<python><autocomplete><pycharm><github-copilot>
|
2023-05-17 07:42:24
| 1
| 333
|
Corram
|
76,269,605
| 11,611,246
|
Multiprocessing Listener-Client two-way connection
|
<p>I am trying to make two Python processes communicate: one script (<code>Listener.py</code>) should run in the background in an endless loop. The other script (<code>Client.py</code>) should be executed whenever needed.
Each time <code>Client.py</code> is executed, it is supposed to send some information in a Python <code>dict()</code>. <code>Listener.py</code> should read this information, use it as input to do some things, and send back a result. <code>Client.py</code> should wait for the response and save it or print it.</p>
<p>So far, I have found examples of how to send data from one process to another. Based on what I found online, I assumed the following should be working:</p>
<p><strong>Listener.py</strong></p>
<pre><code>import time
from multiprocessing.connection import Listener
#-----------------------------------------------------------------------------|
# Settings
address = ("localhost", 6000)
listener = Listener(address, authkey = b"Passwort")
def do_something(img_path):
time.sleep(1)
return "I slept for 1 sec.."
#-----------------------------------------------------------------------------|
# Run and listen
if __name__ == "__main__":
while True:
connection = listener.accept()
try:
conn_in = connection.recv()
print(conn_in)
if conn_in is not None:
done_something = do_something()
connection.send({"how long I slept" : done_something,
"you told me" : conn_in["tale"]})
except:
print("Something unexpected happened...")
time.sleep(0.5)
</code></pre>
<p><strong>Client.py</strong></p>
<pre><code>import datetime
from multiprocessing.connection import Client
address = ("localhost", 6000)
with Client(address, authkey = b"Passwort") as connection:
connection.send(
{"tale" : "Once upon a time..."}
)
response = connection.recv()
print(response)
</code></pre>
<p>However, it is not.</p>
<p>I start the <code>Listener.py</code> script first and then run <code>Client.py</code>. Unfortunately, something causes <code>Listener.py</code> to crash. It does receive the information sent by <code>Client.py</code>, but it does not respond.</p>
<p>Currently, I cannot wrap my head around how this should work and the <a href="https://docs.python.org/3/library/multiprocessing.html#multiprocessing-listeners-clients" rel="nofollow noreferrer">documentation</a> on the functions I think I need is rather brief. Maybe there are even better Python packages or different functions to do the job?</p>
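For reference, a minimal round trip that does work — a sketch only: the listener runs in a thread here purely so the example is self-contained in one process. (Note also that in the posted code <code>do_something()</code> is called without its <code>img_path</code> argument, which would raise inside the bare <code>except</code> and swallow the error.)

```python
import threading
from multiprocessing.connection import Client, Listener

address = ("localhost", 6009)  # assumed free port
ready = threading.Event()

def serve_once():
    with Listener(address, authkey=b"Passwort") as listener:
        ready.set()                      # bound; the client may now connect
        with listener.accept() as conn:
            msg = conn.recv()            # dict sent by the client
            conn.send({"you told me": msg["tale"]})

server = threading.Thread(target=serve_once)
server.start()

ready.wait()
with Client(address, authkey=b"Passwort") as conn:
    conn.send({"tale": "Once upon a time..."})
    response = conn.recv()
server.join()
```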
|
<python><python-3.x><multiprocessing><communication>
|
2023-05-17 07:39:09
| 1
| 1,215
|
Manuel Popp
|
76,269,463
| 19,189,069
|
How to sort a list of bigrams according to the second unigram?
|
<p>I have a list of bigrams like this:</p>
<pre><code>mylist = ["hello world", "this python", "black cat", "brown dog"]
</code></pre>
<p>I want to sort this list according to the second word in each element alphabetically. The result is like this:</p>
<pre><code>mylist = ["black cat", "brown dog", "this python", "hello world"]
</code></pre>
<p>Could anyone help?
Thanks.</p>
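For what it's worth, a <code>key</code> function that extracts the second token does this in one line:

```python
mylist = ["hello world", "this python", "black cat", "brown dog"]

# Sort by the second word of each bigram.
mylist.sort(key=lambda bigram: bigram.split()[1])
print(mylist)  # → ['black cat', 'brown dog', 'this python', 'hello world']
```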
|
<python>
|
2023-05-17 07:20:04
| 1
| 385
|
HosseinSedghian
|
76,269,260
| 2,252,356
|
Django CustomUser model Image field display in User List
|
<p><code>UserProfile</code> is CustomUser Model having columns avatar, bio and display fields.</p>
<p>I want to show Profile avatar in the User List</p>
<p>I have added avatar_tag function to be called by the users list</p>
<pre class="lang-py prettyprint-override"><code>#models.py
class UserProfile(models.Model):
user = models.OneToOneField(User,on_delete=models.CASCADE)
def get_upload_avatar_to(self, filename):
return f"images/avatar/{self.user.id}/{filename}"
avatar = models.ImageField(upload_to=get_upload_avatar_to, null=True, blank=True)
def avatar_tag(self):
if self.avatar.url is not None:
return mark_safe('<img src="{}" height="50"/>'.format(self.avatar.url))
else:
return ""
avatar_tag.short_description = 'Image'
avatar_tag.allow_tags = True
full_name = ''
bio = models.TextField(default="", blank=True)
display = models.CharField(default=full_name,max_length=512,blank=True)
</code></pre>
<p>admin.py code is below</p>
<pre class="lang-py prettyprint-override"><code>class UserProfileInline(admin.StackedInline):
model = UserProfile
fk_name = 'user'
can_delete = False
verbose_name = "User profile"
verbose_name_plural = 'Profiles'
classes = ('text-capitalize','collapse open')
extra = 1
list_display = ('avatar_tag')
fieldsets = (
("Profile", {'fields': ('avatar','bio', 'display',)}),)
class UserProfileAdmin(UserAdmin):
inlines = (UserProfileInline,)
</code></pre>
<p>How can I show the avatar image in the list view of the User table?</p>
<p>I am not getting any errors, but the avatar is not shown in the user list.</p>
<p>Here is the image
<a href="https://i.sstatic.net/cgtHu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cgtHu.png" alt="enter image description here" /></a></p>
|
<python><django><django-models><django-admin>
|
2023-05-17 06:51:46
| 1
| 2,008
|
B L Praveen
|
76,269,159
| 8,481,155
|
Python Pytest ModuleNotFoundError
|
<p>I have a folder structure like below</p>
<pre><code>.
├── src
│ ├── __init__.py
│ ├── b.py
│ └── main.py
└── test
├── __init__.py
└── test_main.py
</code></pre>
<p>main.py imports a function from b.py which is causing me all sorts of problems.</p>
<p>main.py</p>
<pre><code>from b import sub_m
def add_m(a, b):
return a + b
def add_sub(a, b):
return add_m(a, b) + sub_m(a, b)
if __name__ == '__main__':
print(add_m(5, 5))
print(add_sub(5,5))
</code></pre>
<p>b.py</p>
<pre><code>def sub_m(a, b):
return a - b
</code></pre>
<p>test_main.py</p>
<pre><code>import unittest
from src.main import add_m, add_sub
class TestFile1(unittest.TestCase):
def test_add_m(self):
self.assertEqual(add_m(1, 2), 3)
def test_add_sub(self):
self.assertEqual(add_sub(5, 5), 10)
if __name__ == '__main__':
unittest.main()
</code></pre>
<p>If I run <code>python src/main.py</code>, it works.
But when I run the tests by running <code>pytest test</code> I get an error <code>ModuleNotFoundError: No module named 'b'</code></p>
<p>I searched in StackOverflow and found different ways to get it working like running <code>python -m pytest</code>, <code>python -m pytest test</code>. But everything is giving me the same error.</p>
<p>The only way I could make the test work is by changing the <code>main.py</code> by updating the first line as</p>
<pre><code>from src.b import sub_m
</code></pre>
<p>Instead of,</p>
<pre><code>from b import sub_m
</code></pre>
<p>This change gets the tests working, but with it the execution of <code>main.py</code> fails: <code>python src/main.py</code> now says <code>ModuleNotFoundError: No module named 'src'</code>.</p>
<p>What is the best way to get both the test and main.py working? I know this won't be the first time someone is facing this issue. But I tried my best to look for existing answers but nothing worked for me.</p>
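One common arrangement (a sketch, not the only option): keep the absolute import <code>from src.b import sub_m</code> and always run from the project root, invoking the entry point as a module so that the root stays on <code>sys.path</code> for both the app and the tests:

```shell
set -e
# Recreate the layout in a scratch directory so the sketch is self-contained.
cd "$(mktemp -d)"
mkdir -p src test
touch src/__init__.py test/__init__.py
printf 'def sub_m(a, b):\n    return a - b\n' > src/b.py
printf 'from src.b import sub_m\n\ndef add_sub(a, b):\n    return a + b + sub_m(a, b)\n' > src/main.py

# From the project root, "src" resolves identically for both commands:
python3 -m src.main        # instead of: python src/main.py
# python3 -m pytest test   # tests then import "from src.main import ..." the same way
```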
|
<python><python-3.x><pytest><python-unittest>
|
2023-05-17 06:37:02
| 1
| 701
|
Ashok KS
|
76,268,858
| 972,647
|
Re-sort string based on precedence of separator
|
<p>I have a string with a certain meaning, for example <code>"a,b;X1"</code> or <code>e&r1</code>. In total there are 3 possible separators between values: <code>,;&</code>, where <code>;</code> has low precedence.</p>
<p>Also, <code>"a,b;X1"</code> and <code>"b,a;X1"</code> are the same, and to verify that they are equal I want to re-sort the string predictably so that the two can indeed be compared as equal. In essence <code>"b,a;X1"</code> must be "sorted" to become <code>"a,b;X1"</code>, and this is a rather simple example. The expression can be more complex.</p>
<p>The precedence is of importance as <code>"a,b;X1"</code> is not the same as <code>"a;b,X1"</code>.</p>
<p>In general I would need to split into groups by precedence, then sort the groups and merge things together again, but it is unclear to me how to achieve this.</p>
<p>So far I have:</p>
<pre><code>example = "b,a;X1"
ls = example.split(';')
ls2 = [x.split(",") for x in ls]
ls3 = [[y.split("&") for y in x] for x in ls2]
ls3.sort()
print(ls3)
# [[['X1']], [['b'], ['a']]]
</code></pre>
<p>Sorting doesn't yet work, as <code>a</code> should come before <code>b</code>, and then I'm not sure how to "stitch" the result back together again.</p>
<p>For clarification:</p>
<ul>
<li><code>,</code> means OR</li>
<li><code>&</code> means AND (high precedence)</li>
<li><code>;</code> means AND (low precedence)</li>
</ul>
<p><code>"a,b;X1"</code> therefore means (a OR b) AND X1</p>
<p><code>"b,a;X1"</code> therefore means (b OR a) AND X1 i.e. the same</p>
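One way to finish the idea sketched above (assuming separators never appear inside values): sort each nesting level recursively and join back with the same separators, producing a canonical form that is identical for logically equivalent strings:

```python
def canon(expr: str) -> str:
    # Split by precedence (";" lowest, then ",", then "&"),
    # sort each level, and stitch the pieces back together.
    return ";".join(sorted(
        ",".join(sorted(
            "&".join(sorted(term.split("&")))
            for term in clause.split(",")
        ))
        for clause in expr.split(";")
    ))

# Equivalent expressions map to the same canonical string.
print(canon("b,a;X1") == canon("a,b;X1"))
```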
|
<python><string><sorting>
|
2023-05-17 05:40:04
| 3
| 7,652
|
beginner_
|
76,268,799
| 9,346,370
|
How should I declare enums in SQLAlchemy using mapped_column (to enable type hinting)?
|
<p>I am trying to use <code>Enums</code> in SQLAlchemy 2.0 with <code>mapped_column</code>. So far I have the following code (taken from another question):</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy.dialects.postgresql import ENUM as pgEnum
import enum
class CampaignStatus(str, enum.Enum):
activated = "activated"
deactivated = "deactivated"
CampaignStatusType: pgEnum = pgEnum(
CampaignStatus,
name="campaignstatus",
create_constraint=True,
metadata=Base.metadata,
validate_strings=True,
)
class Campaign(Base):
__tablename__ = "campaign"
id: Mapped[UUID] = mapped_column(primary_key=True, default=uuid4)
created_at: Mapped[dt.datetime] = mapped_column(default=dt.datetime.now)
status: Mapped[CampaignStatusType] = mapped_column(nullable=False)
</code></pre>
<p>However, that gives the following error upon the construction of the <code>Campaign</code> class itself.</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 27, in <module>
class Campaign(Base):
...
AttributeError: 'ENUM' object has no attribute '__mro__'
</code></pre>
<p>Any hint about how to make this work?</p>
<p>The response from <a href="https://stackoverflow.com/questions/28894257/enum-type-in-sqlalchemy-with-postgresql">ENUM type in SQLAlchemy with PostgreSQL</a> does not apply as I am using version 2 of SQLAlchemy and those answers did not use <code>mapped_column</code> or <code>Mapped</code> types. Also, removing <code>str</code> from <code>CampaignStatus</code> does not help.</p>
|
<python><postgresql><sqlalchemy><fastapi>
|
2023-05-17 05:26:42
| 1
| 1,224
|
Javier Guzmán
|
76,268,656
| 10,698,244
|
How to extract dict values of pandas DataFrame in new columns?
|
<p>I would like to extract the values of a dictionary inside a Pandas DataFrame <code>df</code> into new columns of that DataFrame. The keys of the dicts are the same across all rows.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({'a': [1, 2, 3], 'b': [{'x':[101], 'y': [102], 'z': [103]}, {'x':[201], 'y': [202], 'z': [203]}, {'x':[301], 'y': [302], 'z': [303]}]})
</code></pre>
<p><a href="https://i.sstatic.net/FyBxs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FyBxs.png" alt="enter image description here" /></a></p>
<pre><code>dfResult = pd.DataFrame({'a': [1, 2, 3], 'x':[101, 201, 301], 'y': [102, 202, 302], 'z': [103, 203, 303]})
</code></pre>
<p><a href="https://i.sstatic.net/eWJZK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eWJZK.png" alt="enter image description here" /></a></p>
<p>I have gotten as far as extracting the keys and values from the dict, but I do not know how to make new columns out of them:</p>
<pre><code>df.b.apply(lambda x: [x[y] for y in x.keys()])
0 [[101], [102], [103]]
1 [[201], [202], [203]]
2 [[301], [302], [303]]
df.b.apply(lambda x: [y for y in x.keys()])
0 [x, y, z]
1 [x, y, z]
2 [x, y, z]
</code></pre>
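One possible route (a sketch assuming every dict holds single-element lists, as in the example): expand the dicts into their own frame, unwrap the lists, and join back onto the original columns:

```python
import pandas as pd

df = pd.DataFrame({
    "a": [1, 2, 3],
    "b": [{"x": [101], "y": [102], "z": [103]},
          {"x": [201], "y": [202], "z": [203]},
          {"x": [301], "y": [302], "z": [303]}],
})

# One column per dict key, then take the first element of each list.
expanded = pd.DataFrame(df["b"].tolist(), index=df.index).apply(lambda col: col.str[0])
result = df.drop(columns="b").join(expanded)
print(result)
```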
|
<python><python-3.x><pandas><dataframe>
|
2023-05-17 04:51:35
| 1
| 1,369
|
user7468395
|
76,268,529
| 11,505,680
|
Python string formatting for precision with no trailing zeros
|
<p>I have some floating-point numbers containing zero or more decimal places. I want to display them with <em>no more than</em> two decimal places, with no trailing zeros. In other words, I'm looking for a definition of <code>magic_fmt</code> that satisfies this condition:</p>
<pre class="lang-py prettyprint-override"><code>>>> ls = [123, 12, 1, 1.2, 1.23, 1.234, 1.2345]
>>> [magic_fmt(n, 2) for n in ls]
['123', '12', '1', '1.2', '1.23', '1.23', '1.23']
</code></pre>
<p>I tried a few things.</p>
<pre class="lang-py prettyprint-override"><code>>>> [f'{n:.2e}' for n in ls] # not right at all
['1.23e+02',
'1.20e+01',
'1.00e+00',
'1.20e+00',
'1.23e+00',
'1.23e+00',
'1.23e+00']
>>> [f'{n:.2f}' for n in ls] # has trailing zeros
['123.00', '12.00', '1.00', '1.20', '1.23', '1.23', '1.23']
>>> [f'{n:.2g}' for n in ls] # switches to exponential notation when I don't want it to
['1.2e+02', '12', '1', '1.2', '1.2', '1.2', '1.2']
>>> [str(round(n,2)) for n in ls]
['123', '12', '1', '1.2', '1.23', '1.23', '1.23']
</code></pre>
<p>The only solution that seems to work is abandoning the string formatting mini-language, which seems risky. Is there a better way?</p>
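For the record, one way to stay close to the formatting mini-language (a sketch): format to a fixed number of decimals first, then trim the trailing zeros and any dangling decimal point:

```python
def magic_fmt(n, places):
    # Fixed-point format, then strip trailing zeros and a bare point.
    return f"{n:.{places}f}".rstrip("0").rstrip(".")

ls = [123, 12, 1, 1.2, 1.23, 1.234, 1.2345]
print([magic_fmt(n, 2) for n in ls])
```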
|
<python><string-formatting>
|
2023-05-17 04:18:31
| 0
| 645
|
Ilya
|
76,268,513
| 3,121,518
|
Python Multiple Inheritance Diamond Problem
|
<p>I have a diamond inheritance scenario. The two middle classes inherit okay, but the Combo class, I can't quite figure out. I want the Combo class to inherit all attributes with the overridden methods coming from the Loan class.</p>
<p>I can't figure out how to write the constructor for the Combo class.</p>
<p>This is the database, the data is coming from:</p>
<p><a href="https://i.sstatic.net/PoaJH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PoaJH.png" alt="ERD" /></a></p>
<p>Business Rules:</p>
<ul>
<li>One customer can have zero or many accounts</li>
<li>One account belongs to
one customer</li>
<li>One account is optionally a loan account</li>
<li>One loan account is one
account</li>
<li>One account is optionally a transaction account</li>
<li>One transaction
account is one account</li>
</ul>
<p>An account can be both a loan account and a transaction account</p>
<p>The combo account means the loan account has a card attached and is also the transaction account.</p>
<p>And this is the class diagram:</p>
<p><a href="https://i.sstatic.net/pESWi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pESWi.png" alt="UML Diagram" /></a></p>
<pre class="lang-py prettyprint-override"><code>class Account:
def __init__(self, account_id, customer_id, balance, interest_rate):
self.account_id = account_id
self.customer_id = customer_id
self.balance = float(balance)
self.interest_rate = float(interest_rate)
def deposit(self):
try:
amount = float(input('Enter deposit amount: '))
self.balance = self.balance + amount
print('Account Number:', self.account_id)
print('Deposit Amount:', amount)
print('Closing Balance:', self.balance)
print('Deposit successful.\n')
return True
except ValueError as e:
print('Error: please enter a valid amount.\n')
return False
def withdraw(self):
try:
amount = float(input('Enter withdrawal amount: '))
if amount > self.balance:
print('Account Number: ', self.account_id)
print('Withdrawal Amount: ', amount)
print('Account Balance: ', self.balance)
print('Error: withdrawal amount greater than balance.\n')
return False
else:
self.balance = self.balance - amount
print('Account Number: ', self.account_id)
print('Withdraw Amount: ', amount)
print('Closing Balance: ', self.balance)
print('Withdrawal Successful.\n')
return True
except ValueError as e:
print('Error: please enter a valid amount.\n')
return False
def __str__(self):
return (f'\nAccount Number: {self.account_id}\n'
f'Balance: {self.balance:.2f}\n'
f'Interest Rate: {self.interest_rate:.2f}\n')
class Transaction(Account):
def __init__(self, account_id, customer_id, balance, interest_rate, card_no, cvn, pin):
super().__init__(account_id, customer_id, balance, interest_rate)
self.card_no = card_no
self.cvn = cvn
self.pin = pin
class Loan(Account):
def __init__(self, account_id, customer_id, balance, interest_rate, duration, frequency, payment, limit):
super().__init__(account_id, customer_id, balance, interest_rate)
self.duration = int(duration)
self.frequency = frequency
self.payment = float(payment)
self.limit = float(limit)
# override inherited method
def withdraw(self):
try:
amount = float(input('Enter withdrawal amount: '))
if self.balance - amount < self.limit:
print('Account Number: ', self.account_id)
print('Withdrawal Amount: ', amount)
print('Account Balance: ', self.balance)
print('Error: withdrawal amount greater than limit.\n')
return False
else:
self.balance = self.balance - amount
print('Account Number: ', self.account_id)
print('Withdraw Amount: ', amount)
print('Closing Balance: ', self.balance)
print('Withdrawal Successful.\n')
return True
except ValueError as e:
print('Error: please enter a valid amount.\n')
return False
def __str__(self):
return super().__str__() + (f'Duration: {self.duration} years\n'
f'Payment Frequency: {self.frequency}\n'
f'Limit: {self.limit:.2f}\n')
class Combo(Loan, Transaction):
def __init__(self, account_id, customer_id, balance, interest_rate, duration, frequency, payment, limit, card_no, cvn, pin):
# what goes here?
</code></pre>
<p>Test Data...</p>
<pre class="lang-py prettyprint-override"><code>from account import Account, Transaction, Loan, Combo
account_id = '1'
customer_id = '1'
balance = 0
interest_rate = 0.06
duration = 20
frequency = 'week'
payment = 500
limit = 500000
card_no = '5274728372688376'
cvn = '234'
pin = '9876'
loan = Loan(account_id, customer_id, balance, interest_rate, duration, frequency, payment, limit)
print(loan)
transaction = Transaction(account_id, customer_id, balance, interest_rate, card_no, cvn, pin)
print(transaction)
# whatever I do, it fails here
combo = Combo(account_id, customer_id, balance, interest_rate, duration, frequency, payment, limit, card_no, cvn, pin)
print(combo)
</code></pre>
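One way to make the diamond work (a sketch with trimmed fields) is the cooperative <code>super().__init__</code> pattern: every class consumes its own keyword arguments and forwards the rest. <code>Combo</code> then needs no constructor of its own, and the MRO (<code>Combo → Loan → Transaction → Account</code>) keeps <code>Loan</code>'s overridden methods first:

```python
class Account:
    def __init__(self, account_id, customer_id, balance, **kwargs):
        super().__init__(**kwargs)  # cooperative: forward leftovers
        self.account_id = account_id
        self.customer_id = customer_id
        self.balance = float(balance)

class Transaction(Account):
    def __init__(self, card_no, cvn, pin, **kwargs):
        super().__init__(**kwargs)
        self.card_no, self.cvn, self.pin = card_no, cvn, pin

class Loan(Account):
    def __init__(self, limit, **kwargs):
        super().__init__(**kwargs)
        self.limit = float(limit)

class Combo(Loan, Transaction):
    pass  # each __init__ along the MRO picks off its own kwargs

combo = Combo(account_id="1", customer_id="1", balance=0,
              limit=500000, card_no="5274728372688376", cvn="234", pin="9876")
```

Calling with keyword arguments is what lets each constructor in the chain take what it needs and pass the remainder on.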
|
<python><multiple-inheritance><diamond-problem>
|
2023-05-17 04:12:54
| 2
| 1,564
|
Harley
|
76,268,472
| 11,626,909
|
Python Code and Output in Bookdown pdf are not in multiple lines
|
<p>I am trying to write Python code using <code>rmarkdown</code> in <code>bookdown</code>. The Python code is fine. The problem is that when the book PDF is generated, some long lines of Python code, and sometimes the code output, extend beyond the PDF page and are therefore not visible. Please see the images below.</p>
<p>In the image you can see the <code>print ('The total number of rows and columns in the dataset is {} and {} respectively.'.format(iris_df.shape[0],iris_df.shape[1]))</code> function code is not fully visible, but the output is visible. Another case, for <code>new_col = iris_df.columns.str.replace('\(.*\)','').str.strip().str.upper().str.replace(' ','_')</code> code, the whole code line is not visible and also the output of the code. The same issue is in <code>sns.scatterplot ()</code> line of code.</p>
<p>I am just wondering whether there is any way in <code>bookdown</code> to keep both the code and the associated output inside the PDF page.</p>
<p><strong>Note</strong>: I tried to break the Python code across multiple lines in <code>rmarkdown</code>, but it did not work; in most cases the code is not executed when it is written across multiple lines in <code>rmarkdown</code>.</p>
<p><img src="https://i.sstatic.net/M7lHk.png" alt="pdfoutput1" /></p>
<p>Here is the code that I used to generate the output in the image</p>
<pre class="lang-py prettyprint-override"><code>from sklearn import datasets
iris = datasets.load_iris()
iris.keys()
iris_df = pd.DataFrame (data = iris.data, columns = iris.feature_names)
iris_df['target'] = iris.target
iris_df.sample(frac = 0.05)
iris_df.shape
print ('The total number of rows and columns in the dataset is {} and {} respectively.'.format(iris_df.shape[0],iris_df.shape[1]))
iris_df.info()
new_col = iris_df.columns.str.replace('\(.*\)','').str.strip().str.upper().str.replace(' ','_')
new_col
iris_df.columns = new_col
iris_df.info()
sns.scatterplot(data = iris_df, x = 'SEPAL_LENGTH', y = 'SEPAL_WIDTH', hue = 'TARGET', palette = 'Set2')
plt.xlabel('Sepal Length'),
plt.ylabel('Sepal Width')
plt.title('Scatterplot of Sepal Length and Width for the Target Variable')
plt.show()
</code></pre>
|
<python><latex><r-markdown><bookdown>
|
2023-05-17 04:01:21
| 1
| 401
|
Sharif
|
76,268,348
| 2,960,978
|
How to update/modify request headers and query parameters in a FastAPI middleware?
|
<p>I'm trying to write a middleware for a FastAPI project that manipulates the request <code>headers</code> and/or <code>query</code> parameters in some special cases.</p>
<p>I've managed to capture and modify the request object in the middleware, but it seems that even if I modify the request object that is passed to the middleware, the function that serves the endpoint receives the original, unmodified request.</p>
<p>Here is a simplified version of my implementation:</p>
<pre class="lang-python prettyprint-override"><code>from fastapi import FastAPI, Request
from starlette.datastructures import MutableHeaders, QueryParams
from starlette.middleware.base import BaseHTTPMiddleware
class TestMiddleware(BaseHTTPMiddleware):
def __init__(self, app: FastAPI):
super().__init__(app)
def get_modified_query_params(self, request: Request) -> QueryParams:
pass ## Create and return new query params
async def dispatch(
self, request: Request, call_next, *args, **kwargs
) -> None:
# Check and manipulate the X-DEVICE-INFO header if required
header_key = "X-DEVICE-INFO"
new_header_value = "new device info"
new_header = MutableHeaders(request._headers)
new_header[header_key] = new_header_value
request._headers = new_header
request._query_params = self.get_modified_query_params(request)
print("modified headers =>", request.headers)
print("modified params =>", request.query_params)
return await call_next(request)
</code></pre>
<p>Even though I see the updated values in the print statements above, when I try to print request object in the function that serves the endpoint, I see original values of the request.</p>
<p>What am I missing?</p>
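What may be missing (stated as an assumption about Starlette's internals): the endpoint builds a fresh <code>Request</code> from the underlying ASGI <code>scope</code>, so mutating the private <code>_headers</code>/<code>_query_params</code> caches on the middleware's own <code>Request</code> never reaches it, while mutating <code>request.scope</code> should propagate. A stdlib-only sketch of the scope manipulation, with a plain dict standing in for the real ASGI scope:

```python
# Stand-in for request.scope inside dispatch(); real scopes carry
# headers as a list of (bytes, bytes) pairs and a raw query string.
scope = {
    "headers": [(b"host", b"example.com"), (b"x-device-info", b"old")],
    "query_string": b"a=1",
}

# Replace the target header, keeping all other pairs intact.
scope["headers"] = [(k, v) for k, v in scope["headers"]
                    if k != b"x-device-info"] + [(b"x-device-info", b"new device info")]
scope["query_string"] = b"a=1&extra=2"
```

In the middleware this would be `request.scope["headers"] = ...` before `call_next(request)`.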
|
<python><header><fastapi><query-string><starlette>
|
2023-05-17 03:17:02
| 1
| 1,460
|
SercioSoydanov
|
76,268,304
| 2,897,115
|
what the need of simple tests in python django
|
<p>I was reading about tests and came across an example as below.</p>
<pre><code>import pytest
@pytest.fixture
def customer():
customer = Customer(first_name="Cosmo", last_name="Kramer")
return customer
def test_customer_sale(customer):
assert customer.first_name == "Cosmo"
assert customer.last_name == "Kramer"
assert isinstance(customer, Customer)
</code></pre>
<p>I am wondering why we should test the three things below if we already know how we are creating the object:</p>
<pre><code>customer.first_name == "Cosmo"
customer.last_name == "Kramer"
isinstance(customer, Customer)
</code></pre>
|
<python><pytest>
|
2023-05-17 03:01:29
| 2
| 12,066
|
Santhosh
|
76,268,261
| 866,082
|
How to fix ipywidgets when they are not rendered in a jupyter-lab notebook?
|
<p>I have jupyter-lab version 3.6.3 installed and it fails to render ipywidgets. I followed the instructions mentioned in their <a href="https://ipywidgets.readthedocs.io/en/8.0.5/user_install.html" rel="nofollow noreferrer">documentation</a> and it still does not render.</p>
<p>These are the steps I've taken:</p>
<pre><code>conda install -n base -c conda-forge widgetsnbextension
conda install -n test_env -c conda-forge ipywidgets
</code></pre>
<p>I run <code>jupyter-lab</code> in the <code>base</code> env from the terminal and then select <code>test_env</code> as the kernel when the notebook loads. Then I run the following cell:</p>
<pre class="lang-py prettyprint-override"><code>from IPython.display import display
import ipywidgets as widgets
w = widgets.IntSlider()
display(w)
</code></pre>
<p>And it outputs:</p>
<pre><code>IntSlider(value=0)
</code></pre>
<p>I have restarted the jupyter-lab process multiple times to make sure the changes are in effect. And still, no luck.</p>
<p>Any suggestions?</p>
|
<python><jupyter-lab><ipywidgets>
|
2023-05-17 02:46:36
| 1
| 17,161
|
Mehran
|
76,268,132
| 14,109,040
|
Assigning values to records in a dataframe based on datetime column being between a reference datetime range
|
<p>I have the following data frames:</p>
<p>period_df:</p>
<pre><code>Group1 Group2 Period Start time End time
G1 G2 Period 1 1900-01-01 05:01:00 1900-01-01 06:00:00
G1 G2 Period 2 1900-01-01 06:01:00 1900-01-01 07:00:00
G1 G2 Period 3 1900-01-01 07:01:00 1900-01-01 08:00:00
G1 G2 Period 4 1900-01-01 08:01:00 1900-01-01 09:00:00
G1 G2 Period 5 1900-01-01 09:01:00 1900-01-01 10:00:00
</code></pre>
<p>records_df:</p>
<pre><code>Group1 Group2 Original time
G1 G2 1900-01-01 05:05:00
G1 G2 1900-01-01 07:23:00
G1 G2 1900-01-00 07:45:00
G1 G2 1900-01-02 09:57:00
G1 G2 1900-01-02 08:23:00
</code></pre>
<p>I want to assign the corresponding <code>Period</code> from <code>period_df</code> to each record in <code>records_df</code>, based on the <code>Group1</code> and <code>Group2</code> columns and the time being between <code>Start time</code> and <code>End time</code>.</p>
<p>I wrote the following function to do that:</p>
<pre><code>def assign_period(record):
for _, period in period_df.iterrows():
if record['Group1'] == period['Group1'] and \
record['Group2'] == period['Group2'] and \
period['Start time'] <= record['Original time'] <= period['End time']:
return period['Period']
return None
</code></pre>
<p>And when I use the function to assign periods to the records I get the following output:</p>
<pre><code>records_df['Period'] = records_df.apply(assign_period, axis=1)
Group1 Group2 Original time Period
G1 G2 1900-01-01 05:05:00 Period 1
G1 G2 1900-01-01 07:23:00 Period 3
G1 G2 1900-01-00 07:45:00 None
G1 G2 1900-01-02 09:57:00 None
G1 G2 1900-01-02 08:23:00 None
</code></pre>
<p>Some records don't get assigned a period because the date is either a day before or after the dates in the reference <code>period_df</code> dataframe.</p>
<p>The expected output is for Periods to be assigned irrespective of the date:</p>
<pre><code>Group1 Group2 Original time Period
G1 G2 1900-01-01 05:05:00 Period 1
G1 G2 1900-01-01 07:23:00 Period 3
G1 G2 1900-01-00 07:45:00 Period 3
G1 G2 1900-01-02 09:57:00 Period 5
G1 G2 1900-01-02 08:23:00 Period 4
</code></pre>
<p>How can I also incorporate a check for records that are not assigned a period in the above function to either go a day ahead or before and match up with the <code>Period</code> from <code>period_df</code>?</p>
<pre><code>import pandas as pd
period_df = pd.DataFrame({
'Group1': [
'G1',
'G1',
'G1',
'G1',
'G1'],
'Group2': [
'G2',
'G2',
'G2',
'G2',
'G2'],
'Period': [
'Period 1',
'Period 2',
'Period 3',
'Period 4',
'Period 5'],
'Start time': [
'1900-01-01 05:01:00',
'1900-01-01 06:01:00',
'1900-01-01 07:01:00',
'1900-01-01 08:01:00',
'1900-01-01 09:01:00'],
'End time': [
'1900-01-01 06:00:00',
'1900-01-01 07:00:00',
'1900-01-01 08:00:00',
'1900-01-01 09:00:00',
'1900-01-01 10:00:00']})
records_df = pd.DataFrame({
'Group1': [
'G1',
'G1',
'G1',
'G1',
'G1'],
'Group2': [
'G2',
'G2',
'G2',
'G2',
'G2'],
'Original time': [
'1900-01-01 05:05:00',
'1900-01-01 07:23:00',
'1900-01-00 07:45:00',
'1900-01-02 09:57:00',
'1900-01-02 08:23:00']})
</code></pre>
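If the periods never cross midnight (an assumption that holds for the sample data), comparing only the time of day makes the calendar date irrelevant. A sketch with a trimmed version of the data:

```python
import pandas as pd

periods = pd.DataFrame({
    "Period": ["Period 1", "Period 3"],
    "Start time": pd.to_datetime(["1900-01-01 05:01:00", "1900-01-01 07:01:00"]),
    "End time": pd.to_datetime(["1900-01-01 06:00:00", "1900-01-01 08:00:00"]),
})

def assign_period(ts):
    # Compare .time() values only, ignoring the date part entirely.
    t = ts.time()
    for _, p in periods.iterrows():
        if p["Start time"].time() <= t <= p["End time"].time():
            return p["Period"]
    return None

records = pd.to_datetime(pd.Series(["1900-01-01 05:05:00", "1900-01-02 07:45:00"]))
assigned = records.apply(assign_period)
```

The same `.time()` check would slot into the existing `assign_period` alongside the `Group1`/`Group2` comparisons.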
|
<python><pandas>
|
2023-05-17 02:07:20
| 1
| 712
|
z star
|
76,267,960
| 3,247,006
|
How to make only the 1st inline object boolean field "True" by default only for "Add" page in Django Admin?
|
<p>I'm trying to make only the 1st inline object boolean field <code>True</code> by default only for <strong>Add</strong> page.</p>
<p>So, I have <code>Person</code> model and <code>Email</code> model which has the foreign key of <code>Person</code> model as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "models.py"
class Person(models.Model):
name = models.CharField(max_length=20)
def __str__(self):
return self.name
class Email(models.Model):
person = models.ForeignKey(Person, on_delete=models.CASCADE)
is_used = models.BooleanField()
email = models.EmailField()
def __str__(self):
return self.email
</code></pre>
<p>Then, I set the custom form <code>EmailForm</code> to <code>form</code> in <code>EmailInline()</code> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "admin.py"
class EmailForm(forms.ModelForm):
def __init__(self, *args, **kwargs):
kwargs['initial'] = {'is_used': True}
super().__init__(*args, **kwargs)
class EmailInline(admin.TabularInline):
model = Email
form = EmailForm # Here
@admin.register(Person)
class PersonAdmin(admin.ModelAdmin):
inlines = (EmailInline,)
</code></pre>
<p>But, all inline objects are <code>is_used=True</code> by default on <strong>Add</strong> page as shown below:</p>
<p><a href="https://i.sstatic.net/pG28w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pG28w.png" alt="enter image description here" /></a></p>
<p>And, I added one main object with one inline object which is <code>is_used=True</code> as shown below:</p>
<p><a href="https://i.sstatic.net/DLDAp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DLDAp.png" alt="enter image description here" /></a></p>
<p>But, the added inline object and other not-yet-added inline objects are <code>is_used=True</code> on <strong>Change</strong> page as shown below:</p>
<p><a href="https://i.sstatic.net/N74qm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N74qm.png" alt="enter image description here" /></a></p>
<p>And, I added one main object with one inline object which is <code>is_used=False</code> as shown below:</p>
<p><a href="https://i.sstatic.net/eY4nd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eY4nd.png" alt="enter image description here" /></a></p>
<p>But, the added inline object and other not-yet-added inline objects are <code>is_used=True</code> on <strong>Change</strong> page as shown below:</p>
<p><a href="https://i.sstatic.net/r7lhI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/r7lhI.png" alt="enter image description here" /></a></p>
<p>So, how can I make only the 1st inline object boolean field <code>True</code> by default only for <strong>Add</strong> page?</p>
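One possible direction (a sketch, not verified against this exact setup): the `ModelForm.__init__` runs for every form in the formset, so it cannot tell the first extra form on the Add page from the rest. A custom `BaseInlineFormSet` can distinguish both conditions — note that the inline attribute for this is `formset`, not `form`, and treat the `self.instance.pk is None` check for "Add page" as an assumption to verify:

```
# admin.py — hypothetical sketch
from django import forms

class EmailInlineFormSet(forms.BaseInlineFormSet):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Only when the parent Person is not saved yet (Add page),
        # and only for the first form, pre-set is_used.
        if self.instance.pk is None and self.forms:
            self.forms[0].initial.setdefault('is_used', True)

class EmailInline(admin.TabularInline):
    model = Email
    formset = EmailInlineFormSet
```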
|
<python><django-models><boolean><django-admin><default>
|
2023-05-17 01:08:07
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
76,267,954
| 8,278,630
|
org.elasticsearch.spark.sql.DefaultSource15 could not be instantiated
|
<p>I'm working in a <code>Databricks</code> workspace, using <code>Python</code> on a <code>cluster</code>. I want to connect to a remote Elasticsearch server and bring data into the Databricks workspace to work on.</p>
<ul>
<li><p>When I check the connection to the remote <code>Elasticsearch</code> server, the connection succeeds:</p>
<pre><code>#ELasticsearch server connection
%sh
nc -vz hostname port
-----------------------
Connection to hostname port [tcp/*] succeeded!
</code></pre>
</li>
<li><p>However when I try to read data from Elasticsearch using the <code>spark.read</code> method with the Elasticsearch connector, an error occurs:</p>
<pre><code>Error message:
java.util.ServiceConfigurationError:
org.apache.spark.sql.sources.DataSourceRegister:
Provider org.elasticsearch.spark.sql.DefaultSource15 could not be instantiated
</code></pre>
</li>
<li><p>Source code that's written in Databricks workspace is as following:</p>
</li>
</ul>
<pre class="lang-py prettyprint-override"><code>df = (spark.read
.format( "org.elasticsearch.spark.sql" )
.option( "es.nodes", hostname )
.option( "es.port", port )
.option( "es.net.ssl", "false" )
.option( "es.nodes.wan.only", "true" )
.load( f"index_name" )
)
display(df)
</code></pre>
<p>Error full stack trace:</p>
<pre><code>java.util.ServiceConfigurationError:
org.apache.spark.sql.sources.DataSourceRegister: Provider
org.elasticsearch.spark.sql.DefaultSource15 could not be instantiated
Py4JJavaError Traceback (most recent call last)
File <command-2516541018203898>:4
1 # NOTE: We **must** set the es.nodes.wan.only property to 'true' so that the connector will connect to the node(s) specified by the `es.nodes` parameter.
2 # Without this setting, the ES connector will try to discover ES nodes on the network using a broadcast ping, which won't work.
3 # We want to connect to the node(s) specified in `es.nodes`.
----> 4 df = (spark.read
5 .format( "org.elasticsearch.spark.sql" )
6 .option( "es.nodes", "ip-address" )
7 .option( "es.port", port )
8 .option( "es.net.ssl", "false" )
9 .option( "es.nodes.wan.only", "true" )
10 .load( f"index-name" )
11 )
13 display(df)
File /databricks/spark/python/pyspark/instrumentation_utils.py:48, in _wrap_function.<locals>.wrapper(*args, **kwargs)
46 start = time.perf_counter()
47 try:
---> 48 res = func(*args, **kwargs)
49 logger.log_success(
50 module_name, class_name, function_name, time.perf_counter() - start, signature
51 )
52 return res
File /databricks/spark/python/pyspark/sql/readwriter.py:302, in DataFrameReader.load(self, path, format, schema, **options)
300 self.options(**options)
301 if isinstance(path, str):
--> 302 return self._df(self._jreader.load(path))
303 elif path is not None:
304 if type(path) != list:
File /databricks/spark/python/lib/py4j-0.10.9.5-src.zip/py4j/java_gateway.py:1321, in JavaMember.__call__(self, *args)
1315 command = proto.CALL_COMMAND_NAME +\
1316 self.command_header +\
1317 args_command +\
1318 proto.END_COMMAND_PART
1320 answer = self.gateway_client.send_command(command)
-> 1321 return_value = get_return_value(
1322 answer, self.gateway_client, self.target_id, self.name)
1324 for temp_arg in temp_args:
1325 temp_arg._detach()
File /databricks/spark/python/pyspark/errors/exceptions.py:228, in capture_sql_exception.<locals>.deco(*a, **kw)
226 def deco(*a: Any, **kw: Any) -> Any:
227 try:
--> 228 return f(*a, **kw)
229 except Py4JJavaError as e:
230 converted = convert_exception(e.java_exception)
File /databricks/spark/python/lib/py4j-0.10.9.5-src.zip/py4j/protocol.py:326, in get_return_value(answer, gateway_client, target_id, name)
324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
325 if answer[1] == REFERENCE_TYPE:
--> 326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
328 format(target_id, ".", name), value)
329 else:
330 raise Py4JError(
331 "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n".
332 format(target_id, ".", name, value))
Py4JJavaError: An error occurred while calling o566.load.
: java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: Provider org.elasticsearch.spark.sql.DefaultSource15 could not be instantiated
at java.util.ServiceLoader.fail(ServiceLoader.java:232)
at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
at java.util.ServiceLoader$LazyIterator.access$700(ServiceLoader.java:323)
at java.util.ServiceLoader$LazyIterator$2.run(ServiceLoader.java:407)
at java.security.AccessController.doPrivileged(Native Method)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:409)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:46)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at scala.collection.TraversableLike.filterImpl(TraversableLike.scala:303)
at scala.collection.TraversableLike.filterImpl$(TraversableLike.scala:297)
at scala.collection.AbstractTraversable.filterImpl(Traversable.scala:108)
at scala.collection.TraversableLike.filter(TraversableLike.scala:395)
at scala.collection.TraversableLike.filter$(TraversableLike.scala:395)
at scala.collection.AbstractTraversable.filter(Traversable.scala:108)
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:714)
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSourceV2(DataSource.scala:782)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:331)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:240)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:380)
at py4j.Gateway.invoke(Gateway.java:306)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:195)
at py4j.ClientServerConnection.run(ClientServerConnection.java:115)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.NoClassDefFoundError: scala/$less$colon$less
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
at java.lang.Class.getConstructor0(Class.java:3075)
at java.lang.Class.newInstance(Class.java:412)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
... 34 more
Caused by: java.lang.ClassNotFoundException: scala.$less$colon$less
at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
at com.databricks.backend.daemon.driver.ClassLoaders$LibraryClassLoader.loadClass(ClassLoaders.scala:151)
at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
... 39 more
</code></pre>
<ul>
<li>Research on the issue that I've done so far:
<ol>
<li>The error is related to the <code>DataSourceRegister</code> interface in <code>Apache Spark</code>, which is used for registering data sources.</li>
<li>As far as I understand, there is a compatibility issue with a dependency. In particular, I presume that the <code>Elasticsearch Connector</code> and the <code>Spark Version</code> in the <code>Databricks</code> workspace are not compatible.</li>
<li>But I couldn't find any reliable sources on the issue, and still couldn't figure out the correct versions to make it work.</li>
</ol>
</li>
</ul>
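For what it's worth, the root cause `ClassNotFoundException: scala.$less$colon$less` in the stack trace points at a Scala version mismatch rather than a Spark one: that class exists as a top-level class only in Scala 2.13, so a connector jar built for Scala 2.13 was most likely loaded on a Scala 2.12 cluster. A hedged guess at the fix is to install the connector artifact whose suffix matches the cluster's Scala version (the version number below is illustrative, not confirmed):

```
org.elasticsearch:elasticsearch-spark-30_2.12:8.7.1
```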
|
<python><elasticsearch><databricks>
|
2023-05-17 01:05:24
| 0
| 1,073
|
Farkhod Abdukodirov
|
76,267,937
| 4,027,692
|
Quart `json_provider_class` makes no effect
|
<p>I am trying to modify my datetime object responses in Quart so that my frontend applications receive dates in ISO8601 format.</p>
<p>I expected the <code>json_provider_class</code> to do this job, but it's not working for me or maybe I am not doing it right.</p>
<p>Here's my code:</p>
<pre><code>class JSONProvider(DefaultJSONProvider):
@staticmethod
def default(object_):
if isinstance(object_, (datetime.date, datetime.datetime)):
return object_.isoformat()
if isinstance(object_, (Decimal, UUID)):
return str(object_)
if is_dataclass(object_):
return asdict(object_)
if hasattr(object_, "__html__"):
return str(object_.__html__())
raise TypeError(
f"Object of type {type(object_).__name__} is not JSON serializable")
@staticmethod
def dict_to_object(dict_):
return dict_
def loads(self, object_, **kwargs):
return super().loads(object_, object_hook=self.dict_to_object, **kwargs)
app = Quart('Asaph API')
app.json_provider_class = JSONProvider
</code></pre>
<p>Dates are still in RFC 2822 format and I can't figure out a way to make Quart controllers return dates in ISO8601 format</p>
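A possible explanation (a sketch based on the Flask-style provider API that Quart mirrors, not verified here): `json_provider_class` is read once, when the app object is constructed, so assigning it after `Quart(...)` has no effect. Either set it on the class before instantiation, or replace the provider instance on the existing app directly:

```
# Option 1: set the class attribute before the app is created
class MyQuart(Quart):
    json_provider_class = JSONProvider

app = MyQuart('Asaph API')

# Option 2: replace the already-constructed provider instance
app = Quart('Asaph API')
app.json = JSONProvider(app)
```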
|
<python><datetime><quart>
|
2023-05-17 00:59:17
| 1
| 423
|
Athus Vieira
|
76,267,924
| 2,738,250
|
Can startup scripts in subfolders access tools located in the parent folder?
|
<p>I have hundreds of startup scripts in the root of my project, and they all use tools from the "tools" folder. It has become challenging to locate the specific startup script I need, so I want to organize some of the startup scripts into subfolders. However, when I move a startup script to a subfolder, I encounter an error when trying to import tools using <code>from ..tools.test_tool import TestTool</code>.</p>
<p>The error message is:</p>
<blockquote>
<p>ImportError: attempted relative import with no known parent package</p>
</blockquote>
<p>Is there a way to move the startup script to a subfolder while still allowing it to access the tools (and how)? Or do all startup scripts have to remain in the root folder? I'm specifically <strong>looking for a yes or no answer</strong> to this question, rather than a general explanation of how relative paths work.</p>
|
<python>
|
2023-05-17 00:52:29
| 0
| 341
|
Vitaliy
|
76,267,849
| 6,273,496
|
group by multiple columns independently and plot distribution
|
<p>I have a dataframe that looks like this:</p>
<pre><code> user_id segment device operating_system
0 51958733 small and above desktop Chrome OS
1 48983182 unfunded desktop Chrome OS
2 54011662 unfunded desktop (not set)
3 53932081 unfunded desktop (not set)
4 51537380 unfunded desktop Chrome OS
... ... ... ... ...
503657 53898078 unfunded desktop Macintosh
503658 52169624 long tail desktop Macintosh
503659 53965505 unfunded desktop Macintosh
503660 50678194 unfunded desktop Macintosh
503661 52143912 unfunded desktop Macintosh
</code></pre>
<p>I'd like to find a way to efficiently count the distinct number of users for each group (I actually have many more columns/groups in my real dataframe) and plot the output in a bar chart (or maybe something else if better suited).</p>
<p>I'm working in a notebook and right now I'm running the following code for each column in a separate cell:</p>
<pre><code>groupby_segment = eda_df.groupby('segment').ahid.nunique()
groupby_segment.plot.bar(x="Segment", y="ahid", rot=70, title="Segment Distribution")
plt.show(block=True);
</code></pre>
<p>This is not very efficient because I have to create/update each cell of my notebook manually, and it's also bad for visualisation because each bar chart is separate. I'd like to have them grouped into the same visualisation, and displayed as a ratio instead of a simple distinct count.</p>
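A sketch of one way to do this in a single cell (the tiny `eda_df` and the `user_id` column are stand-ins; the question's own code counts an `ahid` column): compute the distinct-user ratio per value for every grouping column at once, then plot the combined Series as one bar chart instead of one figure per column:

```python
import pandas as pd

eda_df = pd.DataFrame({
    'user_id': [1, 2, 3, 4, 5],
    'segment': ['small and above', 'unfunded', 'unfunded',
                'long tail', 'unfunded'],
    'device': ['desktop', 'desktop', 'mobile', 'desktop', 'mobile'],
})

group_cols = ['segment', 'device']       # extend with all grouping columns
total_users = eda_df['user_id'].nunique()

# One Series per column, concatenated into a single MultiIndex Series of
# ratios: distinct users in each group / total distinct users.
ratios = pd.concat({
    col: eda_df.groupby(col)['user_id'].nunique() / total_users
    for col in group_cols
})
print(ratios)

# In a notebook, one combined figure instead of many separate cells:
# ratios.plot.bar(rot=70, title='Distinct-user ratio per group')
```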
|
<python><pandas><matplotlib><jupyter-notebook><visualization>
|
2023-05-17 00:26:58
| 1
| 2,904
|
Simon Breton
|
76,267,841
| 13,635,877
|
how to search for caret (^) in string
|
<p>I have a pandas dataframe with a bunch of strings. Some of the strings contain a caret (i.e. a ^ symbol).</p>
<p>I am trying to remove them using this:</p>
<pre><code>df['text'] = df[df['text'].str.contains('^') == False]
</code></pre>
<p>I don't get an error but it is finding a caret in every row which is not correct. Is there something special about that symbol?</p>
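Yes — `str.contains` treats its pattern as a regular expression by default, and `^` is the start-of-string anchor, which matches every row. Passing `regex=False` (or escaping it as `\^`) matches the literal character instead; a small sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({'text': ['plain text', 'with^caret']})

# regex=False makes '^' a literal character instead of an anchor.
mask = df['text'].str.contains('^', regex=False)
print(mask.tolist())  # [False, True]

df_no_caret = df[~mask]                                    # drop rows with a caret
df['text'] = df['text'].str.replace('^', '', regex=False)  # or strip it in place
```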
|
<python><pandas><string><contains>
|
2023-05-17 00:24:31
| 1
| 452
|
lara_toff
|
76,267,698
| 1,361,802
|
ValueError: Related model u'app.model' cannot be resolved when adding admin and custom user in existing app
|
<p>I have an existing app that has a lot of migrations and no users table. I am looking to add the Django admin and a custom user. I hit <code>Related model u'myapp.model' cannot be resolved</code> when I run <code>manage.py migrate</code>:</p>
<pre><code>Operations to perform:
Apply all migrations: admin, auth, contenttypes, myapp, sessions
Running migrations:
Applying myapp.0001_initial... OK
Applying contenttypes.0001_initial... OK
Applying admin.0001_initial...Traceback (most recent call last):
<redacted>
ValueError: Related model 'myapp.user' cannot be resolved
</code></pre>
<p>Why is this happening and how do I fix this?</p>
|
<python><django><django-admin><django-migrations>
|
2023-05-16 23:30:41
| 1
| 8,643
|
wonton
|
76,267,623
| 1,624,552
|
Reading zip file content for later compute sha256 checksum fails
|
<p>I have a zip file which contains some regular files. This file is uploaded to a fileserver.
Now I am trying to compute the sha256 checksum of the zip file, write the checksum into a *.sha256sum file, and upload that to the fileserver as well.</p>
<p>Then, when someone downloads the zip file and the checksum file (*.sha256sum) from the fileserver, they compute the sha256 of the zip file again and compare it with the one stored as text in the *.sha256sum file just downloaded.</p>
<p>When I try to compute the sha256 checksum of the zip file, I get an error.</p>
<pre><code>with open(filename) as f:
data = f.read()
hash_sha256 = hashlib.sha256(data).hexdigest()
</code></pre>
<p>The error is the following, thrown at the line <code>data = f.read()</code>:</p>
<pre><code>in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 44: character maps to <undefined>
</code></pre>
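The decode error comes from opening the zip in text mode: the default `open(filename)` tries to decode the bytes with a text codec, which fails on binary content. Opening with `'rb'` avoids it, and hashing in chunks means large archives don't have to fit in memory; a sketch:

```python
import hashlib

def sha256_of(filename):
    """Return the hex sha256 of a file, reading it as raw bytes."""
    hash_sha256 = hashlib.sha256()
    with open(filename, 'rb') as f:              # 'rb', not the default text mode
        for chunk in iter(lambda: f.read(8192), b''):
            hash_sha256.update(chunk)
    return hash_sha256.hexdigest()
```

The returned hex digest can then be written to the `.sha256sum` file as ordinary text and compared after download.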
|
<python><zip><sha256>
|
2023-05-16 23:09:09
| 1
| 10,752
|
Willy
|
76,267,598
| 8,236,733
|
How to unlock PGP Self Decrypting Archive .exe files (PGP SDAs) in python with a known passphrase?
|
<p>I have a set of PGP Self Decrypting Archive <code>.exe</code> files (<a href="https://knowledge.broadcom.com/external/article/153684/creating-a-self-decrypting-archive-with.html" rel="nofollow noreferrer">https://knowledge.broadcom.com/external/article/153684/creating-a-self-decrypting-archive-with.html</a>) (on a Windows system) and have the password that unlocks them all. How can I just iterate through all of these PGP SDAs and use the passphrase to unlock them in python? (I'm sure this is a simple matter of knowing the right libs and args to use, but I've never worked with these kinds of files before).</p>
<p>(Example image of what I see when clicking the <code>.exe</code>s, for reference)</p>
<p><a href="https://i.sstatic.net/ExrDP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ExrDP.png" alt="enter image description here" /></a></p>
<p>Trying something with the gnupg lib (<a href="https://gnupg.readthedocs.io/en/latest/#decryption" rel="nofollow noreferrer">https://gnupg.readthedocs.io/en/latest/#decryption</a>) like...</p>
<pre class="lang-py prettyprint-override"><code>import gnupg
PASSWD = mypassword
extracted_files = [PATHS_OF_SDA_FILES]
for extracted_file_path in extracted_files:
decr_file = gpg.decrypt_file(extracted_file_path, passphrase=PASSWD)
print(decr_file.ok)
print(decr_file.status)
</code></pre>
<p>...or like...</p>
<pre class="lang-py prettyprint-override"><code>import gnupg
PASSWD = mypassword
extracted_files = [PATHS_OF_SDA_FILES]
for extracted_file_path in extracted_files:
with open(extracted_file_path, 'rb') as file_obj:
decr_file = gpg.decrypt_file(file_obj, passphrase=PASSWD)
print(decr_file.ok)
print(decr_file.status)
</code></pre>
<p>...shows status error</p>
<blockquote>
<p>False</p>
<p>no data was provided</p>
</blockquote>
<p>I've installed gpg4win-4.1.0.exe (<a href="https://gnupg.org/download/" rel="nofollow noreferrer">https://gnupg.org/download/</a>) to try to bulk unlock them this way, but not really sure how to use it (and when running the kleopatra.exe UI that came with it, it cannot detect the .exe files in the target folder when trying to Import. When using the Decrypt option, it says "Failed to find encrypted or signed data in one or more files"). Totally in the dark here, so any guidance would be appreciated.</p>
|
<python><exe><pgp>
|
2023-05-16 23:03:38
| 3
| 4,203
|
lampShadesDrifter
|
76,267,537
| 4,314,501
|
Airflow does not load DAG
|
<p>I have a DAG in airflow 2.2.3, and it has 3 tasks.</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime
from typing import List
from airflow.decorators import dag, task
@dag(dag_id='myDag',
start_date=datetime(2023, 5, 15),
schedule_interval="0 5 * * *",
max_active_runs=1,
catchup=False)
def my_dag():
@task
def load() -> List[dict]:
... # load some data from database
@task
def make_jobs(params: list) -> List[List[dict]]:
... # with params from database, make some list of jobs. One job is List[dict].
@task
def do_job(job: list) -> None:
... # do job
dag = my_dag()
</code></pre>
<p>And I'd like to run the tasks with this flow, especially executing <code>do_job</code> in parallel:</p>
<pre><code>load() --> make_jobs() --> job_1
|-> job_2
|-> job_3
| ...
--> job_n
</code></pre>
<p>So, I configured my task flow like this,</p>
<pre class="lang-py prettyprint-override"><code> param_list = load()
job_list = make_jobs(param_list)
for job in job_list:
do_job(job)
</code></pre>
<p>...but this DAG does not show up in the Airflow UI. I suspect I made a mistake when building the task flow.</p>
<p>Even when I run the command <code>python my_dag_file.py</code>, it does not finish.</p>
<p>If I convert <code>my_dag</code> to just python function(not a DAG) and run, it operates well as I intended.</p>
<p>Also, if I remove <code>for job in job_list: do_job(job)</code>, it at least parses.</p>
<p>I can't find anything wrong in the taskflow code... any help will be appreciated.</p>
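One likely cause (an assumption, not verified against 2.2.3): at parse time, `make_jobs(param_list)` is an `XComArg` placeholder, not a list, so `for job in job_list:` tries to iterate something whose contents only exist at runtime — which would explain why parsing never finishes and the DAG never appears. Iterating over a task's output like this needs dynamic task mapping, which arrived in Airflow 2.3 as `.expand()`; on 2.3+ the loop would become:

```
param_list = load()
job_list = make_jobs(param_list)
do_job.expand(job=job_list)   # one mapped task instance per job, run in parallel
```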
|
<python><airflow><airflow-2.x>
|
2023-05-16 22:46:26
| 0
| 1,024
|
cointreau
|