QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,472,215 | 12,320,336 | Run Python script using a hotkey when a specific program is active | <p>Intended setup:</p>
<ul>
<li>Have <code>.py</code> script saved locally</li>
<li>Run required program</li>
<li>Use program normally until the need for the script arises</li>
<li>Press hotkey that tells Windows to run the python script</li>
<li>Close program when the script + my work is done</li>
<li>Hotkey should not run the script anymore unless the program is opened again</li>
</ul>
<p>Until now I've always run the script manually and quickly <code>Alt+Tab</code>'d into the program before the script starts executing, but it's kind of a hassle. What's the general approach to doing something like this?</p>
<p>OS: Windows</p>
<p>UPDATE: A method has been found. Though I'm keeping this thread active in case there are better options.</p>
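<p>For reference, a minimal sketch of one possible approach (the third-party <code>keyboard</code> and <code>pygetwindow</code> packages, the window title, and the script path below are all assumptions for illustration, not a confirmed solution): register a global hotkey and only run the script when the target program's window is in the foreground.</p>

```python
import subprocess

TARGET_TITLE = "MyProgram"       # hypothetical window title of the target program
SCRIPT = r"C:\scripts\task.py"   # hypothetical path to the .py script

def hotkey_should_fire(active_title, target=TARGET_TITLE):
    # Gate the hotkey on the foreground window title, so the hotkey is
    # inert whenever the target program is not focused.
    return target.lower() in (active_title or "").lower()

def on_hotkey():
    import pygetwindow as gw  # assumption: pip install pygetwindow
    win = gw.getActiveWindow()
    if win is not None and hotkey_should_fire(win.title):
        subprocess.run(["python", SCRIPT])

# Assumption: pip install keyboard (requires no elevated privileges on Windows)
# import keyboard
# keyboard.add_hotkey("ctrl+alt+r", on_hotkey)
# keyboard.wait()
```

<p>The <code>hotkey_should_fire</code> check is what keeps the hotkey from doing anything while other programs are focused.</p>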
| <python><windows><automation><hotkeys> | 2023-11-13 07:10:13 | 1 | 319 | SpectraXCD |
77,472,100 | 1,609,464 | How to get Selenium to extract the link for each tweet via clicking "Copy link" on Twitter? | <p>I'm trying to extract the links to each specific tweet on Twitter using Selenium (with Python). E.g., let's say I'm looking at <a href="https://twitter.com/search?q=from%3A%40POTUS&src=typed_query&f=live" rel="nofollow noreferrer">https://twitter.com/search?q=from%3A%40POTUS&src=typed_query&f=live</a></p>
<p>Each tweet has this arrow button on the bottom-right</p>
<p><a href="https://i.sstatic.net/wdw6C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wdw6C.png" alt="Arrow" /></a></p>
<p>that, upon clicking, reveals 3 more options, the first being the "Copy link" option, which saves the tweet's direct link to the clipboard.</p>
<p><a href="https://i.sstatic.net/IBgty.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IBgty.png" alt="enter image description here" /></a></p>
<p>How do I instruct Selenium to click that arrow + copy the link, and then save the URL for all the tweets on the page?</p>
<p>At the moment my code looks something like this:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By  # required for By.XPATH below

driver = webdriver.Firefox()
driver.get("https://twitter.com/search?q=from%3A%40POTUS&src=typed_query&f=live")

links = []
articles = driver.find_elements(By.XPATH, '//article[@data-testid = "tweet"]')
for article in articles:
    link = article.find_element(By.XPATH, './/svg').click()  # can't figure this part out
    links.append(link)
</code></pre>
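<p>For context, one alternative that avoids the clipboard entirely (a sketch, not a tested solution; the XPath is an assumption about Twitter's markup): each tweet's timestamp is wrapped in an anchor whose <code>href</code> is the tweet's permalink, so the link can be read directly instead of clicking "Copy link".</p>

```python
import re

# A tweet permalink looks like https://twitter.com/<user>/status/<id>
STATUS_RE = re.compile(r"^https?://(?:twitter|x)\.com/[^/]+/status/\d+$")

def is_tweet_permalink(href):
    # Filter out profile links, hashtag links, etc.
    return bool(STATUS_RE.match(href or ""))

# Selenium usage (assumption about the DOM — untested sketch):
# from selenium.webdriver.common.by import By
# for article in driver.find_elements(By.XPATH, '//article[@data-testid="tweet"]'):
#     a = article.find_element(By.XPATH, './/a[time]')  # the permalink wraps the timestamp
#     href = a.get_attribute("href")
#     if is_tweet_permalink(href):
#         links.append(href)
```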
| <python><selenium-webdriver><web-scraping><twitter> | 2023-11-13 06:41:00 | 1 | 1,541 | nwly |
77,471,991 | 41,948 | how to decide if StreamingResponse was closed in FastAPI/Starlette? | <p>When looping a generator in StreamingResponse() using FastAPI/starlette</p>
<p><a href="https://www.starlette.io/responses/#streamingresponse" rel="nofollow noreferrer">https://www.starlette.io/responses/#streamingresponse</a></p>
<p>how can we tell if the connection was somehow disconnected, so that an event can be fired and handled somewhere else?</p>
<p>Scenario: writing an API with <a href="https://developer.mozilla.org/en-US/docs/Web/API/EventSource" rel="nofollow noreferrer"><code>text/event-stream</code></a>; I need to know when the client closes the connection.</p>
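<p>A sketch of the polling pattern this enables, using Starlette's <code>Request.is_disconnected()</code> between yielded chunks (the <code>FakeRequest</code> below is a stand-in so the generator can be exercised without a running server):</p>

```python
import asyncio

async def event_stream(request):
    # Check request.is_disconnected() between chunks (Starlette Request API);
    # when it returns True, run cleanup / fire an event, then stop yielding.
    n = 0
    while True:
        if await request.is_disconnected():
            break  # client went away — fire your event here
        yield f"data: {n}\n\n"
        n += 1
        await asyncio.sleep(0)  # real code would sleep between events

# Minimal fake request so the generator can run without Starlette installed.
class FakeRequest:
    def __init__(self, disconnect_after):
        self.calls = 0
        self.disconnect_after = disconnect_after
    async def is_disconnected(self):
        self.calls += 1
        return self.calls > self.disconnect_after

async def demo():
    return [chunk async for chunk in event_stream(FakeRequest(disconnect_after=3))]

chunks = asyncio.run(demo())
```

<p>With a real <code>StreamingResponse</code>, the endpoint would accept the Starlette <code>Request</code> and pass it to the generator.</p>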
| <python><fastapi><starlette><asgi> | 2023-11-13 06:11:33 | 1 | 11,975 | est |
77,471,973 | 2,985,331 | Custom classifier won't accept data from test_train_split in sklearn | <p>I am attempting to write a custom classifier for use in a sklearn gridsearchCV pipeline.</p>
<p>I've stripped everything back to the bare minimum in the class which currently looks like this:</p>
<pre><code>from sklearn.base import BaseEstimator, ClassifierMixin
import pandas as pd

class DifferentialMethylation(BaseEstimator, ClassifierMixin):
    def fit(self, X, y=None):
        return self

    def transform(self, X, y=None):
        return self
</code></pre>
<p>In my main code, I have this:</p>
<pre><code>X_train, X_test, y_train, y_test = train_test_split(df, cancerType, test_size=0.2, random_state=42)

differentialMethylation = DifferentialMethylation()
featureSelection = RFE(estimator=RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1))
randomForest = RandomForestClassifier(random_state=42)

# Create the pipeline with feature selection and model refinement
pipeline = Pipeline([
    ('differentialMethylation', differentialMethylation),
    ('featureSelection', featureSelection),
    ('modelRefinement', randomForest)
])

search = GridSearchCV(pipeline,
                      param_grid=parameterGrid,
                      scoring='accuracy',
                      cv=5,
                      verbose=0,
                      n_jobs=-1,
                      pre_dispatch='2*n_jobs')

search.fit(X_train, y_train)
</code></pre>
<p>If I remove the custom classifier from the pipeline, so that the pipeline looks like this:</p>
<pre><code>pipeline = Pipeline([
    ('featureSelection', featureSelection),
    ('modelRefinement', randomForest)
])
</code></pre>
<p>it runs happily. If I add that line back in, I get:</p>
<pre><code>ValueError: Expected 2D array, got scalar array instead:
array=DifferentialMethylation().
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
</code></pre>
<p>The X_train is a two-dimensional data frame -
X_train.shape: (679, 369), y_train.shape: (679,). As best I can tell, the stripped-back classifier's .fit() method should act as a pass-through, leaving the data unchanged, so I have no idea why the output of train_test_split is accepted by the RFE in the featureSelection step, but not by differentialMethylation.</p>
<p>Unless there's some obscure piece of lore in the sklearn documentation about transforming input data for custom classifiers that I've missed.</p>
<p>Thoughts as to what's going on would be appreciated.</p>
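<p>For comparison, a minimal pass-through transformer sketch (note: scikit-learn expects <code>transform</code> to return the data, whereas the class above returns <code>self</code>, which matches the error's mention of <code>array=DifferentialMethylation()</code>):</p>

```python
from sklearn.base import BaseEstimator, TransformerMixin

class PassThrough(BaseEstimator, TransformerMixin):
    """Sketch: a do-nothing pipeline step that hands the data on unchanged."""

    def fit(self, X, y=None):
        return self  # fit returns the estimator, per sklearn convention

    def transform(self, X):
        return X  # transform returns the data, not the estimator
```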
| <python><scikit-learn><sklearn-pandas> | 2023-11-13 06:05:32 | 1 | 451 | Ben |
77,471,965 | 12,725,674 | Iteration in Parsing Script: Value remains the same | <p>I am currently developing a parser that iterates over .txt files containing transcripts of earnings conference calls. The objective is to extract sections spoken by a CEO. The provided code snippet is part of a larger script responsible for extracting various pieces of information, such as the date of the call and the company. You can find the complete transcript incl. the regex here: <a href="https://regex101.com/r/mhKevB/1" rel="nofollow noreferrer">https://regex101.com/r/mhKevB/1</a></p>
<pre><code>import re

presentation_part = """
--------------------------------------------------------------------------------
Inge G. Thulin, 3M Company - Chairman, CEO & President [3]
--------------------------------------------------------------------------------
Thank you, Bruce, and good morning, everyone. Coming off a strong 2017, our team opened the new year with broad-based organic growth across all business groups. We expanded margins and posted a double-digit increase in earnings per share while continuing to invest in our business and return cash to our shareholders.
"""

ceos_lname_clean = ['Thulin', 'Davis']

try:
    ceos_speaches_pres = []
    if len(ceos_lname_clean) != 0:
        for lname in ceos_lname_clean:
            # Alternative pattern that matches on the CEO's name as well as the term "CEO"
            ceo_pattern = fr'(?m){lname}.*?(?:CEO|Chief Executive Officer)\b(?:(?!\n-+$).)*?\[\d+\]\s+^-+\s+((?s:.*?))(?=\s+^-+|\Z)'
            ceo_textparts_pres = re.findall(ceo_pattern, presentation_part, re.DOTALL | re.IGNORECASE)
            ceo_speech_presentation = " ".join(ceo_textparts_pres)
            ceos_speaches_pres.append(ceo_speech_presentation)
        # Overall_dict[folder][comp_path]["CEO Presentation Speech"] = ceos_speaches_pres  ## Add the text to a dict
    else:  ## try for COO in case ceos_lname_clean is empty
        coos_speaches_pres = []
        for coo_lname in coos_lname_clean:
            # Alternative pattern that matches on the COO's name as well as the term "COO"
            coo_pattern = fr'(?m){coo_lname}.*?(?:COO|Chief Operating Officer)\b(?:(?!\n-+$).)*?\[\d+\]\s+^-+\s+((?s:.*?))(?=\s+^-+|\Z)'
            coo_textparts_pres = re.findall(coo_pattern, presentation_part, re.DOTALL | re.IGNORECASE)
            coo_speech_presentation = " ".join(coo_textparts_pres)
            coos_speaches_pres.append(coo_speech_presentation)
        # Overall_dict[folder][comp_path]["COO Presentation Speech"] = coos_speaches_pres  ## Add the text to a dict
except:
    print("PROBLEM")
</code></pre>
<p>The provided snippet successfully extracts the text spoken by Thulin. However, when integrated into the overall script, an issue arises: <code>ceo_textparts_pres</code> retains the value of the previous iteration. That is, even though <code>ceo_textparts_pres</code> for Davis should be empty, it still holds the text spoken by Thulin.</p>
<p>I have spent the entire day troubleshooting this problem without success and keep getting more and more frustrated. The entire script is unfortunately too extensive to post here, but even the smallest hint or suggestion about what might cause this problem would be greatly appreciated.</p>
<p>Thank you in advance for your assistance.</p>
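<p>Although the full script isn't shown, one generic way a loop variable can appear to "retain" a previous value is an exception swallowed by a bare <code>except</code> before the variable is reassigned. A self-contained sketch (the simulated failure is illustrative, not the actual script):</p>

```python
import re

text = "Thulin, CEO [3]\nspeech text"

results = []
for name in ["Thulin", "Davis"]:
    try:
        if name == "Davis":
            raise ValueError("simulated failure before reassignment")
        matches = re.findall(name, text)
    except Exception:
        pass  # the bare except hides the error; 'matches' keeps the last value
    results.append(" ".join(matches))

print(results)  # both entries end up holding the Thulin match
```

<p>Replacing <code>except: print("PROBLEM")</code> with a handler that re-raises (or at least logs the traceback), and resetting the per-iteration variables at the top of the loop, would make this kind of carry-over visible.</p>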
| <python><regex> | 2023-11-13 06:03:14 | 1 | 367 | xxgaryxx |
77,471,934 | 12,104,604 | How to get multiprocessing working in Python with Kivy | <p>I do not understand how to combine multiprocessing with Kivy, a Python GUI library, so please help me on this.</p>
<p>Initially, I had created a code using Kivy, a Python GUI library, and threading.Thread, as follows.</p>
<pre class="lang-py prettyprint-override"><code>#-*- coding: utf-8 -*-
from kivy.lang import Builder

Builder.load_string("""
<TextWidget>:
    BoxLayout:
        orientation: 'vertical'
        size: root.size

        Button:
            id: button1
            text: "start"
            font_size: 48
            on_press: root.buttonClicked()
""")

from kivy.app import App
from kivy.uix.widget import Widget
from kivy.properties import StringProperty

import threading
import time


class TextWidget(Widget):
    def __init__(self, **kwargs):
        super(TextWidget, self).__init__(**kwargs)
        self.process_test = None

    def p_test(self):
        i = 0
        while True:
            print(i)
            i = i + 1
            if self.ids.button1.text == "start":
                break

    def buttonClicked(self):
        if self.ids.button1.text == "start":
            self.process_test = threading.Thread(target=self.p_test)
            self.process_test.start()
            self.ids.button1.text = "stop"
        else:
            self.ids.button1.text = "start"
            self.process_test.join()


class TestApp(App):
    def __init__(self, **kwargs):
        super(TestApp, self).__init__(**kwargs)
        self.title = 'testApp'

    def build(self):
        return TextWidget()


if __name__ == '__main__':
    TestApp().run()
</code></pre>
<p>This code simply displays a single button, and when the button is pressed, it executes a print statement in a while loop.</p>
<p>This code, being a simple example, worked without any issues.</p>
<p>However, as the Kivy GUI definition file became larger, and the CPU processing load of the program running inside the p_test function increased, the program started to become choppy.</p>
<p>According to my machine's task manager, despite having plenty of CPU capacity, it seemed there were limitations to processing everything within a single process.</p>
<p>To circumvent this issue, I decided to use multiprocessing. However, I find the use of multiprocessing complex and hard to understand, so I would like to learn more about how to use it.</p>
<p>First, I replaced the threading.Thread in my earlier code with multiprocessing.Process as follows.</p>
<pre class="lang-py prettyprint-override"><code>#-*- coding: utf-8 -*-
from kivy.lang import Builder

Builder.load_string("""
<TextWidget>:
    BoxLayout:
        orientation: 'vertical'
        size: root.size

        Button:
            id: button1
            text: "start"
            font_size: 48
            on_press: root.buttonClicked()
""")

from kivy.app import App
from kivy.uix.widget import Widget
from kivy.properties import StringProperty

import time
from multiprocessing import Process


class TextWidget(Widget):
    def __init__(self, **kwargs):
        super(TextWidget, self).__init__(**kwargs)
        self.process_test = None

    def p_test(self):
        i = 0
        while True:
            print(i)
            i = i + 1
            if self.ids.button1.text == "start":
                break

    def buttonClicked(self):
        if self.ids.button1.text == "start":
            self.process_test = Process(target=self.p_test, args=())
            self.process_test.start()
            self.ids.button1.text = "stop"
        else:
            self.ids.button1.text = "start"
            self.process_test.join()


class TestApp(App):
    def __init__(self, **kwargs):
        super(TestApp, self).__init__(**kwargs)
        self.title = 'testApp'

    def build(self):
        return TextWidget()


if __name__ == '__main__':
    TestApp().run()
</code></pre>
<p>Unfortunately, this code did not work correctly and resulted in an error. The error message was as follows.</p>
<pre><code> Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\taichi\Documents\Winpython64-3.11.5.0\WPy64-31150\python-3.11.5.amd64\Lib\multiprocessing\spawn.py", line 122, in spawn_main
exitcode = _main(fd, parent_sentinel)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\taichi\Documents\Winpython64-3.11.5.0\WPy64-31150\python-3.11.5.amd64\Lib\multiprocessing\spawn.py", line 132, in _main
self = reduction.pickle.load(from_parent)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
EOFError: Ran out of input
</code></pre>
<p>I learned that I must use multiprocessing directly under <code>if __name__ == '__main__':</code>. However, this makes it difficult to pass values between Kivy and the multiprocessing code, and I'm not sure how to handle it.</p>
<p>The code I created as a trial is as follows.</p>
<pre class="lang-py prettyprint-override"><code>#-*- coding: utf-8 -*-
from kivy.lang import Builder

Builder.load_string("""
<TextWidget>:
    BoxLayout:
        orientation: 'vertical'
        size: root.size

        Button:
            id: button1
            text: "start"
            font_size: 48
            on_press: root.buttonClicked()
""")

from kivy.app import App
from kivy.uix.widget import Widget
from kivy.properties import StringProperty

import threading
import time
from multiprocessing import Process, Value, Array  # Value and Array are needed below


class TextWidget(Widget):
    def __init__(self, **kwargs):
        super(TextWidget, self).__init__(**kwargs)
        self.process_test = None

    def buttonClicked(self):
        if self.ids.button1.text == "start":
            self.ids.button1.text = "stop"
        else:
            self.ids.button1.text = "start"


class TestApp(App):
    def __init__(self, **kwargs):
        super(TestApp, self).__init__(**kwargs)
        self.title = 'testApp'

    def build(self):
        return TextWidget()


def p_test(count, array):
    i = 0
    while True:
        print(i)
        i = i + 1


if __name__ == '__main__':
    # shared memory
    count = Value('i', 0)
    array = Array('i', 0)

    process_kivy = Process(target=TestApp().run(), args=[count, array])
    process_kivy.start()

    process_test = Process(target=p_test(), args=[count, array])
    process_test.start()

    process_kivy.join()
    process_test.join()
<p>I created the above code because I learned that using <strong>shared memory</strong> allows for data sharing between multiprocessing instances. However, I don't understand how to pass data to a <strong>class</strong> with shared memory.</p>
<p>I want to set it up so that the while loop starts only when a Kivy button is pressed, but in reality, the print statement is executed after the Kivy GUI is closed in this code.</p>
<p>Furthermore, since the Kivy program also needs to be launched with multiprocessing, I don't know how to <strong>join</strong> my own process to itself.</p>
<p>How can I use multiprocessing correctly?</p>
<p>I am using Windows11 and WinPython.</p>
| <python><kivy> | 2023-11-13 05:56:00 | 1 | 683 | taichi |
77,471,818 | 308,827 | Exclude subplots without any data and left-align the rest in relplot | <p>Related to this question: <a href="https://stackoverflow.com/questions/77442845/use-relplot-to-plot-a-pandas-dataframe-leading-to-error">Use relplot to plot a pandas dataframe leading to error</a></p>
<p>Data for reproducible example is here:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
data = {'Index': ['TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN', 'TN10p', 'CSU', 'PRCPTOT', 'SDII', 'CWD', 'R99p', 'R99pTOT', 'TX', 'MIN'],
'Stage': [10, 10, 10, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 14, 14, 14, 15, 15, 15, 15, 15, 15, 15, 15, 15, 16, 16, 16, 16, 16, 16, 16, 16, 16, 17, 17, 17, 17, 17, 17, 17, 17, 17, 18, 18, 18, 18, 18, 18, 18, 18, 18, 19, 19, 19, 19, 19, 19, 19, 19, 19, 20, 20, 20, 20, 20, 20, 20, 20, 20, 21, 21, 21, 21, 21, 21, 21, 21, 21, 22, 22, 22, 22, 22, 22, 22, 22, 22, 23, 23, 23, 23, 23, 23, 23, 23, 23, 24, 24, 24, 24, 24, 24, 24, 24, 24, 25, 25, 25, 25, 25, 25, 25, 25, 25, 26, 26, 26, 26, 26, 26, 26, 26, 26, 27, 27, 27, 27, 27, 27, 27, 27, 27, 28, 28, 28, 28, 28, 28, 28, 28, 28],
'Z-Score CEI': [-0.688363146221944, 0.5773502691896258, -0.1132178081286216, -0.4278470185781525, 1.0564189237269357, -0.2085144140570746, -0.2085144140570747, 0.2094308186874662, 0.7196177629619716, 0.0, 0.2085144140570762, -1.3803992008056865, -1.3414801279616884, -0.898669162696764, -0.3015113445777637, -0.2953788838542738, 1.1753566728623484, 0.887285779752818, -0.7071067811865475, 0.2847473987257496, 0.1877402877114761, -0.14246249364941, 0.9686648999069224, -0.3015113445777636, -0.2734952011457535, 0.5888914135578924, -0.4488478006064821, -0.7745966692414834, 0.3052145041378634, 0.8197566686157259, 0.3377616284580471, 1.1832159566199232, -0.3015113445777637, -0.2952684241380082, -0.7971688059921156, 0.4479595231454734, -0.5805577953661853, 0.3015113445777642, -0.610500944190139, -0.7734588159553295, -0.5434722467562666, -0.2085144140570747, -0.2085144140570747, 0.8838570486142397, -0.7976091842744983, 2.213211486674006, 0.3779644730092272, -0.6900911175081499, -0.4856558012299846, -0.6044504143545613, -0.2085144140570746, -0.2085144140570747, 1.6498242899497324, 0.463638205246897, -0.064684622735315, 0.5488212999484522, -0.665392754456709, -1.096398502672124, 0.9387247898517332, -0.2085144140570747, -0.2085144140570748, 1.5486212537866115, 0.6776076459912243, -0.7973761651368712, 0.4773960376293314, 0.2611306759187019, -0.2450438178293888, 0.1097642599896903, -0.2085144140570746, -0.2085144140570747, 1.2468175442040146, 0.4912008775378222, -0.8071397220005339, 0.3015113445777636, -0.4051430868010012, -0.9843673918740764, 0.4231429298696365, -0.2085144140570746, -0.2182178902359924, 1.0617336112420042, 0.4221998839727844, -0.2267786838055363, 0.2847473987257496, 1.2708306299144654, 2.4058495687034616, -0.1042572070285372, 4.79583152331272, 4.79583152331272, -0.1758750648062869, 0.9614146130140746, -0.6493094697110509, 0.2847473987257496, -0.0566333001085325, 0.0970016157961683, -0.3380617018914065, -0.2085144140570746, -0.2132007163556104, 1.6462867435913509, 
0.8920062635166146, -0.649519052838329, 0.2847473987257496, -0.5727902328114448, -0.385256843427376, 0.123403510468459, -0.2085144140570747, -0.2085144140570747, 0.7206954054604126, -0.0169294393471337, -0.1547646465068273, 0.3900382256192578, -0.91200685504817, -0.7643838011372592, -0.8553913029328061, -0.2085144140570746, -0.2132007163556104, 1.999517273479448, 0.2135313581345105, 0.3577708763999664, 0.2085144140570741, -0.5245759407883583, -0.3972170332271401, 0.1363988678940945, -0.2085144140570746, -0.2085144140570747, 2.180043023382912, 0.6949201395674811, -0.0345238339879863, 0.3872983346207417, -1.054383845470446, -0.7524909974608698, -0.79555728417573, -0.2085144140570747, -0.2085144140570747, 2.597515932302782, -0.0173575308522844, -0.7839294959021852, 0.5496481403962044, 0.3346732026206391, -0.1729151200242987, 0.8108848540793832, -0.2085144140570747, -0.2085144140570747, -0.1975075078549267, -0.1333012766349092, -0.7300956427599692, 0.3495310368212778, -0.9383516638143292, 0.3757624051611033, -0.9198662110078, -0.2085144140570747, -0.2085144140570747, 0.1077379509580834, -0.0391099277150297, -0.8006407690254357, 0.5226257719601375, 0.2650955994479978, -0.3323178678594628, 1.348187695720845, -0.2085144140570746, -0.2085144140570748, 0.6009413558916348, 0.455353435995126, -0.5933908290969269, 0.0, 0.1226864783178058, -0.0252747129054563, 0.8212299340934688, -0.2085144140570746, -0.2132007163556105, -0.8954835101738379, -1.1134420487718968],
'Type': ['Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI', 'Cold', 'Heat', 'Rain', 'Rain', 'Rain', 'Rain', 'Rain', 'Temperature', 'VI']}
df = pd.DataFrame(data)
</code></pre>
<p>I want to plot the data; rows should be based on the column <code>Type</code>, cols should be based on the column <code>Index</code>, the x-axis should be <code>Z-Score CEI</code>, and the y-axis should be based on <code>Stage</code> column. Currently, I am using <code>relplot</code> to do this:</p>
<pre class="lang-py prettyprint-override"><code>import seaborn as sns

df = df.groupby('Index').filter(lambda x: not x['Z-Score CEI'].isna().all())
df["Type"] = df["Type"].astype("category")
df["Index"] = df["Index"].astype("category")
df["Type"] = df["Type"].cat.remove_unused_categories()
df["Index"] = df["Index"].cat.remove_unused_categories()

g = sns.relplot(
    data=df,
    x='Z-Score CEI',
    y='Stage',
    col='Index',
    row='Type',
    facet_kws={'sharey': True, 'sharex': True},
    kind='line',
    legend=False,
)

for (i, j, k), data in g.facet_data():
    if data.empty:
        ax = g.facet_axis(i, j)
        ax.set_axis_off()
</code></pre>
<p>However, this leads to a plot where the empty plots are distorting the placement of the subplots with data. I want there to be no empty areas.</p>
<p>Current output looks like so:
<a href="https://i.sstatic.net/SpwuN.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SpwuN.jpg" alt="current output" /></a></p>
<p>In the graphic above, I want to remove all the subplots that have no data. This will result in different rows having different numbers of subplots, e.g. the 1st row might have 5 subplots while the 2nd row has only 4, etc.</p>
<p>I want each row to only have the same <code>Type</code>, not mix multiple <code>Type</code>s.</p>
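<p>One possible workaround (a sketch with toy data, since <code>relplot</code> always lays out a full rectangular grid): build the grid manually with a <code>GridSpec</code> sized to the widest row and only add axes for the facets that have data, so each row is left-aligned with no empty areas.</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# toy stand-in for the question's dataframe
rng = np.random.default_rng(0)
toy = pd.DataFrame({
    "Type":  ["Cold"] * 4 + ["Rain"] * 8,
    "Index": ["TN10p"] * 4 + ["PRCPTOT"] * 4 + ["SDII"] * 4,
    "Stage": list(range(4)) * 3,
    "Z-Score CEI": rng.normal(size=12),
})

types = toy["Type"].unique()
max_cols = int(toy.groupby("Type")["Index"].nunique().max())

fig = plt.figure(figsize=(2.5 * max_cols, 2.5 * len(types)))
gs = fig.add_gridspec(len(types), max_cols)
all_axes = []
for i, t in enumerate(types):
    sub = toy[toy["Type"] == t]
    for j, idx in enumerate(sub["Index"].unique()):
        ax = fig.add_subplot(gs[i, j])  # only create axes where data exists
        d = sub[sub["Index"] == idx].sort_values("Z-Score CEI")
        ax.plot(d["Z-Score CEI"], d["Stage"])
        ax.set_title(f"{t} | {idx}")
        all_axes.append(ax)
fig.savefig("facets_left_aligned.png")
```

<p>The trade-off is giving up <code>relplot</code>'s conveniences (shared axes would need to be wired up manually), but each row then contains only its own <code>Type</code>'s non-empty facets.</p>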
| <python><pandas><seaborn><relplot> | 2023-11-13 05:06:51 | 2 | 22,341 | user308827 |
77,471,745 | 4,451,521 | Plot with Xaxis vertical and Yaxis horizontal to the left | <p>How can I use matplotlib in a way that I can get something like this:</p>
<p><a href="https://i.sstatic.net/G1exj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/G1exj.png" alt="enter image description here" /></a></p>
<p>As you can see X is vertical up and Y is horizontal to the left.</p>
<p>(You can see that in a normal plot this point would be (-5, 3))</p>
<p>These questions (<a href="https://stackoverflow.com/q/925024/7758804">1</a>, <a href="https://stackoverflow.com/q/31556446/7758804">2</a>) do not look at the direction of the Y-axis. <strong>Y positive goes to the left.</strong></p>
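<p>To frame the behaviour being asked about, a minimal sketch: swap the roles of the coordinates when plotting (horizontal axis carries Y, vertical axis carries X) and invert the horizontal axis so positive Y points left.</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt

x, y = -5, 3  # the point from the figure above

fig, ax = plt.subplots()
ax.plot(y, x, "ro")   # swapped: pass (y, x) instead of (x, y)
ax.invert_xaxis()     # positive Y now increases to the left
ax.axhline(0, color="black", linewidth=0.8)
ax.axvline(0, color="black", linewidth=0.8)
ax.set_xlabel("Y")
ax.set_ylabel("X")
fig.savefig("rotated_axes.png")
```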
| <python><matplotlib> | 2023-11-13 04:35:00 | 1 | 10,576 | KansaiRobot |
77,471,613 | 1,035,279 | When I use redirect_stdout in python 3.11, the terminal width appears to be set to 80. How can I change this value? | <p>For example, when automatically collating arparse help, I'd ideally like to set a specific width:</p>
<pre><code>import io
from contextlib import redirect_stdout

def parse_help(command_list=..., width=168):
    command_list = [] if command_list is ... else command_list
    s = io.StringIO()
    with redirect_stdout(s, width=width):  # wished-for API: redirect_stdout has no width parameter
        try:
            do_command([*command_list, '--help'])
        except SystemExit:
            pass
    help_text = s.getvalue()
    print_help(help_text)
    for subcommand in get_subcommands(help_text):
        parse_help([*command_list, subcommand])
</code></pre>
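<p>One workaround that doesn't require patching <code>redirect_stdout</code> (a sketch): <code>argparse</code> wraps help text to <code>shutil.get_terminal_size()</code>, which consults the <code>COLUMNS</code> environment variable before querying the real terminal, so setting it before collating the help controls the width:</p>

```python
import os
import shutil

# Force the width argparse will see when formatting --help output.
os.environ["COLUMNS"] = "168"
print(shutil.get_terminal_size().columns)  # → 168
```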
| <python><argparse> | 2023-11-13 03:32:58 | 2 | 16,671 | Paul Whipp |
77,471,567 | 188,331 | Evaluate's METEOR Implementation returns 0 score | <p>I have the following codes:</p>
<pre><code>import evaluate

metric_meteor = evaluate.load("meteor")  # load the metric before computing

reference1 = "犯人受到了嚴密的監控。"  # Ground Truth
hypothesis1 = "犯人受到嚴密監視。"  # Translated Sentence

meteor = metric_meteor.compute(predictions=[hypothesis1], references=[reference1])
print("METEOR:", meteor["meteor"])
</code></pre>
<p>It returns <code>0.0</code>.</p>
<p><strong>My question:</strong> How can I make the above code produce the same score as the below codes?</p>
<p>However, with NLTK, the score is <code>98.14814814814815</code>:</p>
<pre><code>from nltk.translate.meteor_score import single_meteor_score
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('fnlp/bart-base-chinese')
tokenized_reference1 = tokenizer(reference1)
tokenized_hypothesis1 = tokenizer(hypothesis1)
print("METEOR:", single_meteor_score(tokenized_reference1, tokenized_hypothesis1) * 100)
</code></pre>
<p>Looking at Evaluate's METEOR implementation, it's actually an NLTK wrapper:
<a href="https://huggingface.co/spaces/evaluate-metric/meteor/blob/main/meteor.py" rel="nofollow noreferrer">https://huggingface.co/spaces/evaluate-metric/meteor/blob/main/meteor.py</a></p>
| <python><meteor><huggingface-evaluate> | 2023-11-13 03:11:47 | 1 | 54,395 | Raptor |
77,471,517 | 11,645,727 | Debugger not stopping at breakpoints in VSCode (python code) | <p>When I run python in 1.84.2 vscode, I found that breakpoints did not work. It didn't stop where I interrupted, but stopped where thre was not breakpoint. This problem makes me very anxisous. I can't find a solution to the problem, and if I can't solve it, I would give up vscode, but I don't want to do this. Please help me.</p>
<p>The problem is as shown in:
<a href="https://i.sstatic.net/Rnw3o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Rnw3o.png" alt="enter image description here" /></a></p>
<p>launch.json file</p>
<pre><code>{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "justMyCode": true
        }
    ]
}
<p>I don't know where the problem is. I didn't have any problems before, but this issue appeared when I used VS Code again two months later. Please help me, thank you.</p>
| <python><visual-studio-code><breakpoints> | 2023-11-13 02:45:30 | 4 | 455 | Min Guo |
77,471,459 | 371,408 | OpenAI Whisper segfaulting when running inside Docker container on M1 Mac | <p>I am trying to run Whisper in a Docker container on my M1 MacBook Air. When I run it, it gives a segfault. Any ideas how to debug?</p>
<p>The Dockerfile is pretty simple. Relevant excerpt:</p>
<pre><code>FROM ubuntu:22.04
# Update base image
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get autoremove -y

# Set up Python and Whisper
RUN apt-get install -y \
    jq \
    git \
    curl \
    gnupg \
    ffmpeg \
    findutils \
    python3 \
    python3-pip
RUN pip3 install git+https://github.com/openai/whisper.git
</code></pre>
<p>Whisper is installed as recommended in the <a href="https://github.com/openai/whisper#setup" rel="nofollow noreferrer">repo readme</a>:</p>
<pre><code>pip install git+https://github.com/openai/whisper.git
</code></pre>
<p>I have a WAV file that says "Hello world" that I am testing the transcription with in each environment.</p>
<ol>
<li>When I run Whisper on my Mac directly, outside of Docker, it runs fine:</li>
</ol>
<pre><code>>> time whisper --task transcribe --output_format json --model tiny hello_world.wav
/opt/homebrew/Cellar/openai-whisper/20231106/libexec/lib/python3.11/site-packages/whisper/transcribe.py:115: UserWarning: FP16 is not supported on CPU; using FP32 instead
warnings.warn("FP16 is not supported on CPU; using FP32 instead")
Detecting language using up to the first 30 seconds. Use `--language` to specify the language
Detected language: English
[00:00.000 --> 00:00.840] Hello world.
whisper --task transcribe --output_format json --model tiny hello_world.wav 7.49s user 0.80s system 297% cpu 2.780 total
</code></pre>
<ol start="2">
<li>When I run in the Docker file, it segfaults:</li>
</ol>
<pre><code># time whisper --task transcribe --output_format json --model tiny hello_world.wav
/usr/local/lib/python3.10/dist-packages/whisper/transcribe.py:115: UserWarning: FP16 is not supported on CPU; using FP32 instead
warnings.warn("FP16 is not supported on CPU; using FP32 instead")
Detecting language using up to the first 30 seconds. Use `--language` to specify the language
Segmentation fault
real 0m2.233s
user 0m2.507s
sys 0m0.746s
</code></pre>
<ol start="3">
<li>If I cross-build the Docker image for linux/amd64 arch and run with Rosetta, it works but runs ridiculously slowly (7.5s up to 5m 41s):</li>
</ol>
<p>Build command:</p>
<pre><code> docker buildx build \
--platform=linux/amd64 \
-t whisper \
-f ./Dockerfile .
</code></pre>
<pre><code># time whisper --task transcribe --output_format json --model tiny hello_world.wav
/usr/local/lib/python3.10/dist-packages/whisper/transcribe.py:115: UserWarning: FP16 is not supported on CPU; using FP32 instead
warnings.warn("FP16 is not supported on CPU; using FP32 instead")
Detecting language using up to the first 30 seconds. Use `--language` to specify the language
Detected language: English
[00:00.000 --> 00:00.840] Hello world.
real 5m40.946s
user 5m40.920s
sys 0m1.897s
</code></pre>
| <python><docker><openai-whisper> | 2023-11-13 02:24:29 | 0 | 21,479 | Zach Rattner |
77,471,415 | 1,942,868 | How to check the order of appearance on django test | <p>I use this sentences to check the letters <code>line1</code> and <code>line2</code> appears.</p>
<pre><code>self.assertContains(response, "line1")
self.assertContains(response, "line2")
</code></pre>
<p>However I want to check the order of appearance.</p>
<p>I checked this page, but can't figure out which assertion I should use.</p>
<p><code>https://docs.djangoproject.com/en/2.2/topics/testing/tools/#django.test.SimpleTestCase.assertContains</code></p>
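<p>A direction I am considering in the meantime: comparing the positions of the two strings in the decoded response body with a small helper (the helper is my own sketch, not a Django API):</p>

```python
def assert_appears_in_order(content: str, *substrings: str) -> None:
    # Each substring must be found after the end of the previous match.
    pos = 0
    for s in substrings:
        idx = content.find(s, pos)
        if idx == -1:
            raise AssertionError(f"{s!r} not found in expected order")
        pos = idx + len(s)

# In a Django test this would be called on the decoded body, e.g.:
#     assert_appears_in_order(response.content.decode(), "line1", "line2")
assert_appears_in_order("<p>line1</p><p>line2</p>", "line1", "line2")
```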
| <python><django><testunit> | 2023-11-13 02:05:34 | 2 | 12,599 | whitebear |
77,471,197 | 16,307,782 | Add numpy random values to a Polars dataframe using a column as input to numpy.random? | <p>Let's say I have a dataframe that has a column named <code>mean</code> that I want to use as an input to a random number generator. Coming from R, this is relatively easy to do in a pipeline:</p>
<pre class="lang-r prettyprint-override"><code>library(dplyr)
tibble(alpha = rnorm(1000),
beta = rnorm(1000)) %>%
mutate(mean = alpha + beta) %>%
bind_cols(random_output = rnorm(n = nrow(.), mean = .$mean, sd = 1))
#> # A tibble: 1,000 × 4
#> alpha beta mean random_output
#> <dbl> <dbl> <dbl> <dbl>
#> 1 0.231 -0.243 -0.0125 0.551
#> 2 0.213 0.647 0.861 0.668
#> 3 0.824 -0.353 0.471 0.852
#> 4 0.665 -0.916 -0.252 -1.81
#> 5 -0.850 0.384 -0.465 -3.90
#> 6 0.721 0.679 1.40 2.54
#> 7 1.46 0.857 2.32 2.14
#> 8 -0.242 -0.431 -0.673 -0.820
#> 9 0.234 0.188 0.422 -0.662
#> 10 -0.494 -2.15 -2.65 -3.01
#> # ℹ 990 more rows
</code></pre>
<p><sup>Created on 2023-11-12 with <a href="https://reprex.tidyverse.org" rel="nofollow noreferrer">reprex v2.0.2</a></sup></p>
<p>In python, I can create an intermediate dataframe and use it as input to <code>np.random.normal()</code>, then bind that to the dataframe, but this feels clunky. Is there a way to add the <code>random_output</code> col as a part of the pipeline/chain?</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import numpy as np
# create a df
df = (
pl.DataFrame(
{
"alpha": np.random.standard_normal(1000),
"beta": np.random.standard_normal(1000)
}
)
.with_columns(
(pl.col("alpha") + pl.col("beta")).alias("mean")
)
)
# create an intermediate object
sim_vals = np.random.normal(df.get_column("mean"))
# bind the simulated values to the original df
(
df.with_columns(random_output = pl.lit(sim_vals))
)
#> shape: (1_000, 4)
┌───────────┬───────────┬───────────┬───────────────┐
│ alpha ┆ beta ┆ mean ┆ random_output │
│ --- ┆ --- ┆ --- ┆ --- │
│ f64 ┆ f64 ┆ f64 ┆ f64 │
╞═══════════╪═══════════╪═══════════╪═══════════════╡
│ -1.380249 ┆ 1.531959 ┆ 0.15171 ┆ 0.938207 │
│ -0.332023 ┆ -0.108255 ┆ -0.440277 ┆ 0.081628 │
│ -0.718319 ┆ -0.612187 ┆ -1.330506 ┆ -1.286229 │
│ 0.22067 ┆ -0.497258 ┆ -0.276588 ┆ 0.908147 │
│ … ┆ … ┆ … ┆ … │
│ 0.299117 ┆ -0.371846 ┆ -0.072729 ┆ 0.592632 │
│ 0.789633 ┆ 0.95712 ┆ 1.746753 ┆ 2.954801 │
│ -0.264415 ┆ -0.761634 ┆ -1.026049 ┆ -1.369753 │
│ 1.893911 ┆ 1.554736 ┆ 3.448647 ┆ 5.192537 │
└───────────┴───────────┴───────────┴───────────────┘
</code></pre>
| <python><dataframe><numpy><python-polars> | 2023-11-13 00:20:10 | 1 | 372 | Mark Rieke |
77,471,168 | 21,305,238 | How to know which pyparsing ParserElement created which ParseResults? | <h2>The problem</h2>
<p>I have a bunch of <code>Group</code>-ed expressions like these, all of which contribute to the same grammar:</p>
<pre class="lang-py prettyprint-override"><code>first_and_operator = Group(Char('ab')('first') + Char('+-')('operator'))
full_expression = Group(first_and_operator + Char('cd')('second'))
</code></pre>
<pre class="lang-py prettyprint-override"><code>full_expression.parse_string('a + d', parse_all = True)
</code></pre>
<p>This, for instance, outputs the following results:</p>
<pre class="lang-py prettyprint-override"><code>ParseResults([
ParseResults([
# How to know that this is first_and_operator's results?
ParseResults(['a', '+'], {'first': 'a', 'operator': '+'}),
'd'
], {'second': 'd'})
], {})
</code></pre>
<p>The innermost <code>ParseResults</code> has the following attributes, none of which helps figure out the original expression (these are also undocumented; there's not one mention of <code>modal</code> or <code>all_names</code> in <a href="https://pyparsing-docs.readthedocs.io/en/latest/HowToUsePyparsing.html" rel="nofollow noreferrer">this whole page</a>):</p>
<pre class="lang-py prettyprint-override"><code>[
attr_value for attr in dir(results)
if not attr.startswith('__')
and not callable(attr_value := getattr(results, attr))
]
</code></pre>
<pre class="lang-py prettyprint-override"><code>[
('_all_names', set()), ('_modal', True), ('_name', 'first'),
('_null_values', (None, [], ())), ('_parent', None),
('_tokdict', {...: ...}), ('_toklist', ['a', '+']),
('first', 'a'), ('operator', '+')
]
</code></pre>
<p>If I'm passed an arbitrary <code>ParseResults</code> object, how do I know which expression created it?</p>
<h2>Things to consider and what I tried</h2>
<p>As you may have noticed, the nested <code>ParseResults</code> object is one level deeper than needed, since <code>full_expression</code> is also a <code>Group</code>. I have <a href="https://gist.github.com/InSyncWithFoo/6804f7e07b6363f60d3d5fcafbdd4316" rel="nofollow noreferrer">an <code>Expand</code> class</a> that does the exact opposite (code omitted here for brevity) which may be used in a parent expression to flatten the results.</p>
<p>This example is simplified a lot, but in reality the equivalent of <code>'c'</code> and such are also (possibly nested) <code>ParseResults</code> objects. To handle each of these elements correctly, I need to know the type of the expression that generated it in the first place.</p>
<pre class="lang-py prettyprint-override"><code>expression_list = Expand(full_expression) + (',' + Expand(full_expression))[1, ...]
</code></pre>
<pre class="lang-py prettyprint-override"><code>expression_list.parse_string('a + c, a - d, b + d', parse_all = True)
</code></pre>
<pre class="lang-py prettyprint-override"><code>ParseResults([
ParseResults(['a', '+'], {'first': 'a', 'operator': '+'}),
'c', ',',
ParseResults(['a', '-'], {'first': 'a', 'operator': '-'}),
'd', ',',
ParseResults(['b', '+'], {'first': 'b', 'operator': '+'}),
'd'
], {})
</code></pre>
<p><code>Expand</code> is a generic subclass of <code>TokenConverter</code>. This means the object passed to <code>Expand.postParse()</code> <em>must</em> be an actual <code>ParseResults</code>, or at least have the same interface of one (iterability, subscriptability, methods, etc.).</p>
<p>I have thought about using <code>.add_parse_action()</code> to convert that <code>ParseResults</code> into a subclass instance like this, but I am not even sure that these are the intended arguments. The parameters are also undocumented, <a href="https://github.com/pyparsing/pyparsing/blob/fad68f40a12d969344d7529bab7639d40d057ce7/pyparsing/results.py#L148" rel="nofollow noreferrer">not type hinted</a>, and <a href="https://github.com/pyparsing/pyparsing/blob/fad68f40a12d969344d7529bab7639d40d057ce7/pyparsing/results.py#L171" rel="nofollow noreferrer">the code</a> is confusing to read.</p>
<pre class="lang-py prettyprint-override"><code>class TypedParseResults(ParseResults):
__slots__ = ('type',)
def __init__(self, *args, result_type: str, **kwargs):
self.type = result_type
super().__init__(*args, **kwargs)
@first_and_operator.add_parse_action
def make_results_typed(results: ParseResults):
return TypedParseResults(
results._toklist,
result_type = 'first_and_operator'
)
</code></pre>
<p>This only ever raises an exception from somewhere deep down in <code>core.py</code>:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File ".../main.py", line 85, in <module>
r = full_expression.parse_string('a + d', parse_all = True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...
File ".../site-packages/pyparsing/core.py", line 895, in _parseNoCache
ret_tokens = ParseResults(
^^^^^^^^^^^^^
TypeError: TypedParseResults.__init__() missing 1 required keyword-only argument: 'result_type'
</code></pre>
<p><a href="https://stackoverflow.com/q/71819618">This question</a>, while it sounds similar, does not answer mine since it is about debugging and not runtime checking. Also, none of the answers at <a href="https://stackoverflow.com/q/29282878">this question</a> works for me, for various reasons:</p>
<ul>
<li>The first is incompatible with <code>Expand</code>.</li>
<li>The second is ugly. Even more so with tens of expressions. In addition, it has the same problem as the third, which is explained below.</li>
</ul>
<h2>Some context</h2>
<p>This grammar is the origin of two parsers, one CST and one AST; the initial results need to be in some interchangeable format that can then easily be converted to both, or so I think. I would like to make the two parsers independent (no CST to AST conversion) if that is at all possible.</p>
<p>Each expression is a <code>Group</code>, and needs to work on its own, which means <code>.set_name()</code> (only stored at the expression, not the results) and <code>.set_result_name()</code> (might not be the same across parent expressions) don't work.</p>
<h2>Conclusion</h2>
<p>This is particularly hard to do that I wonder if I took the wrong way. If there are better solutions to modify my existing code around, I'm willing to hear.</p>
| <python><pyparsing> | 2023-11-13 00:04:41 | 1 | 12,143 | InSync |
77,471,038 | 5,246,226 | Pass C++ matrix into Python to use as Numpy array? | <p>I am doing some processing of matrices in C++ that I would like to pass to a Python program/function, which should accept data in the form of a Numpy array, and pass something back to the C++ program. I've looked through the documentation on <a href="https://pybind11.readthedocs.io/en/stable/advanced/pycpp/numpy.html" rel="nofollow noreferrer">Numpy arrays in Pybind11</a>, but it isn't clear to me how I can create buffer protocol objects in C++ programs and pass them to Python programs, to be called in the C++ program itself. Is there a way I can do this?</p>
| <python><c++><numpy><pybind11> | 2023-11-12 23:12:31 | 0 | 759 | Victor M |
77,471,022 | 1,924,087 | PyGame - Draw Line Based on Heading | <p>I'm using <code>pygame.draw.line</code> to draw a line between two coordinates in pixels, the required format of this function is <code>(begin_x,begin_y),(end_x, end_y)</code>.</p>
<p>The beginning coordinate for this line can be (0,0), and the line must have a total length of 100px. But what will the ending coordinate of this line be if the heading between the beginning and the end is 060º?</p>
<p>This line is the resultant vector of the airplane on screen.</p>
<p>E.g. if the airplane heading is 090º the ending coordinate is (0,100); if it is 0º it would be (-100, 0), and so on.</p>
<pre><code># Tried this but it's not working
ending_x = begin_x + math.cos(heading) * length
ending_y = begin_y + math.sin(heading) * length
</code></pre>
<p><a href="https://i.sstatic.net/qNEJa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qNEJa.png" alt="enter image description here" /></a></p>
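<p>For reference, my attempt above fails because <code>math.cos</code>/<code>math.sin</code> expect radians, not degrees. A sketch that reproduces the two examples; the sign convention is inferred from those examples, so it may need adjusting for a different screen setup:</p>

```python
import math

def heading_endpoint(begin_x, begin_y, heading_deg, length):
    # math.cos / math.sin take radians, so convert the heading first.
    rad = math.radians(heading_deg)
    # Signs chosen to match the examples: heading 0 -> (-100, 0),
    # heading 90 -> (0, 100). Adjust if your convention differs.
    end_x = begin_x - math.cos(rad) * length
    end_y = begin_y + math.sin(rad) * length
    return end_x, end_y

print(heading_endpoint(0, 0, 0, 100))   # (-100.0, 0.0)
print(heading_endpoint(0, 0, 60, 100))  # endpoint for a 060º heading
```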
| <python><math><trigonometry> | 2023-11-12 23:05:51 | 2 | 681 | Ygor Montenegro |
77,470,842 | 3,195,413 | tensorflow2 save_model method failing on encoding error | <p>Using Python 3.9 and TF 2.14, I am able to successfully run through the entire notebook posted here, create the model, and have it make its predictions:</p>
<p><a href="https://www.tensorflow.org/tutorials/keras/text_classification" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/keras/text_classification</a></p>
<p>notebook is here:
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/keras/text_classification.ipynb" rel="nofollow noreferrer">https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/keras/text_classification.ipynb</a></p>
<p>except that when I try to simply save the model by calling</p>
<pre><code>export_model.save("exported.keras")
</code></pre>
<p>I get ...</p>
<p>UnicodeEncodeError: 'charmap' codec can't encode character '\x96' in position 3325: character maps to &lt;undefined&gt;</p>
<p>partial trace:</p>
<pre><code>---------------------------------------------------------------------------
UnicodeEncodeError Traceback (most recent call last)
Cell In[27], line 1
----> 1 export_model.save("exported.keras")
File C:\python_39_venv\lib\site-packages\keras\src\utils\traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
67 filtered_tb = _process_traceback_frames(e.__traceback__)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
File ~\AppData\Local\Programs\Python\Python39\lib\encodings\cp1252.py:19, in IncrementalEncoder.encode(self, input, final)
18 def encode(self, input, final=False):
---> 19 return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\x96' in position 3325: character maps to <undefined>
</code></pre>
<p>Full trace to follow.</p>
<p>The crash happens here:</p>
<pre><code> def save_assets(self, dir_path):
if self.input_vocabulary:
# Vocab saved in config.
# TODO: consider unifying both paths.
return
vocabulary = self.get_vocabulary(include_special_tokens=True)
vocabulary_filepath = tf.io.gfile.join(dir_path, "vocabulary.txt")
with open(vocabulary_filepath, "w") as f:
f.write("\n".join([str(w) for w in vocabulary]))
</code></pre>
<p>It fails on the <code>f.write</code> call. I can in the meantime try to:</p>
<ol>
<li>identify which string it is that fails</li>
<li>try adding an optional encoding='utf-8' argument in the .open call</li>
</ol>
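<p>For reference, the failure is reproducible outside TensorFlow: <code>'\x96'</code> has no mapping in cp1252, the codec Windows uses by default when <code>open()</code> is called without an explicit encoding, while UTF-8 encodes it fine — which is why option 2 should work:</p>

```python
text = "a\x96b"  # '\x96' is the character from the traceback

# UTF-8 can encode it...
assert text.encode("utf-8") == b"a\xc2\x96b"

# ...but the cp1252 codec cannot, which is the crash in save_assets().
try:
    text.encode("cp1252")
except UnicodeEncodeError as e:
    print("cp1252 fails:", e.reason)  # character maps to <undefined>
```

<p>Setting the environment variable <code>PYTHONUTF8=1</code> (or passing <code>encoding='utf-8'</code> to the <code>open</code> call, as in option 2) should avoid this.</p>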
| <python><tensorflow2.0> | 2023-11-12 21:59:00 | 1 | 599 | 10mjg |
77,470,471 | 614,963 | filtered_df and str in Python | <p>I am new to Python. I'm trying to filter a dataset. The filter seems to work well, or I think it does :)</p>
<pre><code>valid_Cas = ["yut", "thj", "bnm","vfd"]
filtered_df = df[df['Cas ID'].str[-3:].isin(valid_Cas)]
</code></pre>
<p>but when I filter with more than three letters, it does not work, like:</p>
<pre><code>valid_Cas = ["yut", "thj", "bnm","vfd","cdret"]
filtered_df = df[df['Cas ID'].str[-3:].isin(valid_Cas)]
</code></pre>
<p>What does <code>str[-3:]</code> mean?</p>
<p>How can I filter with more than 3 letters?</p>
<p>Does the code keep "bnm5623" and "5623bnm", or does it leave them out?</p>
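<p>To illustrate the slicing behaviour in plain Python (the <code>endswith</code> idea is something I found while searching; in pandas the analogue would presumably be <code>df['Cas ID'].str.endswith(tuple(valid_Cas))</code>, which I have not verified against my pandas version):</p>

```python
s = "bnm5623"
assert s[-3:] == "623"          # the LAST three characters, so this row is dropped
assert "5623bnm"[-3:] == "bnm"  # this one is kept by the isin() filter

valid_cas = ["yut", "thj", "bnm", "vfd", "cdret"]
# str[-3:] can never equal a 5-letter code like "cdret"; matching
# suffixes of different lengths works with endswith and a tuple:
assert "xxcdret".endswith(tuple(valid_cas))
print("all checks passed")
```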
<p>thank you,</p>
| <python><string> | 2023-11-12 20:09:48 | 1 | 865 | user614963 |
77,470,456 | 726,730 | Qt Designer - Is it possible for QToolBar to have QToolButtons or QMenu as actions? | <p>Is it possible to have a QMenu or a QToolButton as an action for a QToolBar in Qt Designer?</p>
<p>The only choice is to create actions in the Action Editor, then add the actions to the QToolBar.</p>
<p>I want some way to have submenus for some actions.</p>
<p>I know (almost) how to do this in code, but I don't know how I can do this in Qt Designer.</p>
<p>Thanks!!</p>
| <python><pyqt5><qt-designer> | 2023-11-12 20:04:57 | 0 | 2,427 | Chris P |
77,470,446 | 21,370,869 | Python programme executes all lines but does not run any 'subprocess.Popen()' lines | <p>I have a simple Firefox add-on that I am using to test and discover the capabilities of <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Native_messaging#exchanging_messages" rel="nofollow noreferrer">Native Messaging</a>, something I recently came across.</p>
<p>The add-on side of things works as I expect it, nothing out of order there, I am struggling with the Python side of the add-on.</p>
<p>The add-on works like this:</p>
<ol>
<li>On the browser, when the page <a href="https://developer.mozilla.org/en-US/docs/Learn/Getting_started_with_the_web/HTML_basics" rel="nofollow noreferrer">HTML basics</a> is visited</li>
<li>The add-on will send the message "ping" to a Python script, on the local system, via stdin</li>
<li>The python script should reply back to the browser the message it received via stdin <strong>and</strong> run a process via <code>subprocess.Popen()</code></li>
</ol>
<p>Sure enough, in all my tests, on the browser console, I can see the Python programme sending back a reply like <a href="https://i.imgur.com/tz6MiTG.png" rel="nofollow noreferrer">THIS</a>. But the line <code>subprocess.Popen(["explorer", "C:/Temp"])</code> is never executed at all, no matter where I place it in the Python script.</p>
<p>If I create a separate Python script with just the following code and run it by double clicking the file in explorer, it works. A explorer window is created:</p>
<pre><code>import subprocess
subprocess.Popen(["explorer", "C:/Temp"])
</code></pre>
<p>Of course I am looking to do more than just open an explorer window; it's just a simple example. The point is that, for some reason, my Python programme is stuck either reading stdin or somewhere else.</p>
<p>I tried restructuring the Python code to something simple and tried "closing" the stdin stream to see if that will help it carry on with the execution of the remaining lines:</p>
<pre><code>import sys
import json
import struct
import subprocess
rawLength = sys.stdin.buffer.read(4)
if len(rawLength) == 0:
sys.exit(0)
messageLength = struct.unpack('@I', rawLength)[0]
message = sys.stdin.buffer.read(messageLength).decode('utf-8')
sys.stdin.buffer.flush() #Try closing the stdin buffer
sys.stdin.buffer.close() #Try closing the stdin buffer
subprocess.Popen(["explorer", "C:/Temp"]) #Again not executed
</code></pre>
<p>Same issue persists; the last line is again not executed. I am new to Python, JavaScript and add-on development. I asked around for any debugging tools for such a novel, edge use case, but sadly I have not turned up any answers. The Python programme does not spawn its own console window, so it's hard to tell where execution is stuck with something like <code>print()</code>.</p>
<p>I did try the following in its own python script file:</p>
<pre><code>import sys
import json
import struct
import subprocess
rawLength = sys.stdin.buffer.read(4)
print(rawLength)
subprocess.Popen(["explorer", "C:/Temp"])
</code></pre>
<p>It will spawn its own console window, the programme is stuck at <code>rawLength = sys.stdin.buffer.read(4)</code> and will remain there, even if I provide just a letter and press <code>enter</code>, it continues when I provide four letters, opening file explorer at <code>c:/Temp</code>.</p>
<p>Last time I asked around. I was told this might be what is happening and I looked for a way to stop the stdin stream reading or close it, which is what I tried to do with <code>flush()</code>/<code>close()</code> but it does not help.</p>
<p>Am I attempting to close the stdin stream the right way? If so am I succeeding? How does one know for sure? Is stdin even the culprit here?</p>
<p>I am out of ideas, any help would be greatly appreciated!</p>
<hr />
<p>For completeness, my add-on is comprised of only two files, a <code>manifest.json</code> and a <code>background.js</code>.</p>
<p><strong>Manifest.json</strong> file:</p>
<pre><code>{
"name": "test",
"manifest_version": 2,
"version": "1.0",
"browser_action": {"default_icon": "icons/message.svg"},
"browser_specific_settings": {"gecko": {"id": "test@example.org","strict_min_version": "50.0"}},
"background": {"scripts": ["background.js"]},
"permissions": ["tabs","activeTab", "webRequest", "<all_urls>", "nativeMessaging"]
}
</code></pre>
<p><strong>background.js</strong> file:</p>
<pre><code>browser.webRequest.onCompleted.addListener(sendNativeMsg, {urls:["https://developer.mozilla.org/en-US/docs/Learn/Getting_started_with_the_web/HTML_basics"]});
function onResponse(response) {console.log(`INCOMING MSG: ${response}`);}
function sendNativeMsg(activeTab) {
let thisMsg = "ping"
console.log(`OUTGOING MSG: "${thisMsg}"`);
let sending = browser.runtime.sendNativeMessage("test", thisMsg);
sending.then(onResponse);
}
</code></pre>
<p>And the source code for the Python script is the following, which I got from the Native Messaging, MDN page, linked above:</p>
<pre><code>import sys
import json
import struct
import subprocess
# Read a message from stdin and decode it.
def getMessage():
rawLength = sys.stdin.buffer.read(4)
if len(rawLength) == 0:
sys.exit(0)
messageLength = struct.unpack('@I', rawLength)[0]
message = sys.stdin.buffer.read(messageLength).decode('utf-8')
return json.loads(message)
# Encode a message for transmission,
def encodeMessage(messageContent):
encodedContent = json.dumps(messageContent, separators=(',', ':')).encode('utf-8')
encodedLength = struct.pack('@I', len(encodedContent))
return {'length': encodedLength, 'content': encodedContent}
# Send an encoded message to stdout
def sendMessage(encodedMessage):
sys.stdout.buffer.write(encodedMessage['length'])
sys.stdout.buffer.write(encodedMessage['content'])
sys.stdout.buffer.flush()
while True:
subprocess.Popen(["explorer", "C:/Temp"]) #This line is never executed. The lines after here are executed.
receivedMessage = getMessage()
if receivedMessage == "ping":
sendMessage(encodeMessage('stdin was "' + receivedMessage + '", Task is done'))
</code></pre>
| <javascript><python><subprocess><firefox-addon><firefox-addon-webextensions> | 2023-11-12 19:58:43 | 0 | 1,757 | Ralf_Reddings |
77,470,398 | 201,657 | Can I mandate that a function in a base class is always run from a derived class' __init__? | <p>I have an abstract base class <code>BudgetCategories</code> and a class, <code>BudgetCategoriesFromFile</code>, derived from it:</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABCMeta
import csv
class BudgetCategories(metaclass=ABCMeta):
def __init__(self) -> None:
self._budget_categories: list[str] = []
def _check_budget_categories_are_unique(self) -> None:
lower_case_budget_categories = [bc.lower() for bc in self._budget_categories]
if len(lower_case_budget_categories) != len(set(lower_case_budget_categories)):
duplicates = set(
[
bc
for bc in lower_case_budget_categories
if lower_case_budget_categories.count(bc) > 1
]
)
raise ValueError(
f"Budget categories must be unique. Duplicates: {duplicates}"
)
@property
def budget_categories(self) -> list[str]:
return self._budget_categories
class BudgetCategoriesFromFile(BudgetCategories):
def __init__(self, path_to_budget_categories_csv_file: str) -> None:
super().__init__()
with open(path_to_budget_categories_csv_file, "r") as f:
reader = csv.reader(f)
for row in reader:
self._budget_categories.append(row[0])
self._check_budget_categories_are_unique()
</code></pre>
<p>I want to mandate that <code>_check_budget_categories_are_unique()</code> is always called from any class derived from <code>BudgetCategories</code>. Is there a way to accomplish this?</p>
<p>I could call <code>_check_budget_categories_are_unique()</code> from the <code>budget_categories</code> property getter, but that is too late; I want to call it during object creation.</p>
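<p>For context, the closest I have come is a template-method arrangement, where the base <code>__init__</code> itself calls an abstract loader and then the check. The <code>InMemoryBudgetCategories</code> subclass here is just my own illustration, not part of the real code:</p>

```python
from abc import ABCMeta, abstractmethod

class BudgetCategories(metaclass=ABCMeta):
    def __init__(self) -> None:
        self._budget_categories: list[str] = []
        self._load_budget_categories()               # hook for subclasses
        self._check_budget_categories_are_unique()   # always runs last

    @abstractmethod
    def _load_budget_categories(self) -> None: ...

    def _check_budget_categories_are_unique(self) -> None:
        lower = [bc.lower() for bc in self._budget_categories]
        if len(lower) != len(set(lower)):
            raise ValueError("Budget categories must be unique.")

class InMemoryBudgetCategories(BudgetCategories):
    def __init__(self, categories: list[str]) -> None:
        self._initial = categories
        super().__init__()

    def _load_budget_categories(self) -> None:
        self._budget_categories.extend(self._initial)

InMemoryBudgetCategories(["food", "rent"])  # fine
try:
    InMemoryBudgetCategories(["Food", "food"])
except ValueError:
    print("duplicates rejected")
```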
<p>Any critique of this code structure is welcomed.</p>
| <python> | 2023-11-12 19:43:53 | 2 | 12,662 | jamiet |
77,470,014 | 7,895,542 | Python change directory while keeping symlinks in the path | <p>So I have this path:
<code>/nfs/dust/atlas/user/jnitschk/CAF/StatisticTool/Output/frac_100ex_3_smooth_truth</code></p>
<p>that I execute a script from, and thus that is what I get when I do <code>os.getcwd()</code>.</p>
<p>Now I need to change to the following path to execute some subprocesses from:</p>
<p><code>/nfs/dust/atlas/user/jnitschk/CAF/StatisticTool/Output/frac_100ex_3_smooth_truth/Output/ranking_output_Truth_Smooth_ExpSmoothUnblinded_2_100ex_3_frac_obs</code></p>
<p>However, as the second <code>Output</code> in the path is just a symlink to the first, this is what I get when I do <code>os.chdir(THE_LONG_PATH)</code> and then <code>os.getcwd()</code>:
<code>/nfs/dust/atlas/user/jnitschk/CAF/StatisticTool/Output/ranking_output_Truth_Smooth_ExpSmoothUnblinded_2_100ex_3_frac_obs</code></p>
<p>Is there an easy way to get <code>chdir</code> to keep the full path including the intermediate symlinks?</p>
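<p>A small demonstration of the behaviour, plus the workaround I am currently leaning towards — tracking the logical path myself, the way shells track <code>$PWD</code>:</p>

```python
import os
import tempfile

base = tempfile.mkdtemp()
real = os.path.join(base, "real")
link = os.path.join(base, "link")
os.mkdir(real)
os.symlink(real, link)

os.chdir(link)
# getcwd() returns the fully resolved (physical) path: the symlink
# component is gone, which is exactly the behaviour in the question.
print(os.getcwd())

# Workaround: keep the logical path in a variable and pass it
# explicitly (e.g. via cwd=...) to any subprocesses.
logical_cwd = link
```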
| <python><path> | 2023-11-12 17:53:33 | 1 | 360 | J.N. |
77,469,944 | 616,460 | Getting physical dimensions of screen in MacOS | <p>Is it possible to get the physical dimensions (millimeters or otherwise) of a given monitor with Python on macOS?</p>
<p>I was looking at <a href="https://developer.apple.com/documentation/appkit/nsscreen" rel="nofollow noreferrer">AppKit's NSScreen</a> but it only gives pixel resolution (via <code>frame</code>) and doesn't seem to report physical size. I'm not sure what other approach I could take.</p>
| <python><macos><appkit> | 2023-11-12 17:32:45 | 2 | 40,602 | Jason C |
77,469,808 | 11,860,883 | vectorized way of checking element existence | <p>I have a PyTorch tensor of floats <code>a</code> and a corresponding PyTorch tensor of ints <code>b</code> of the same size as <code>a</code>. I have another tensor of ints <code>c</code> of variable length. I need to multiply an element of <code>a</code> by two if the corresponding element in <code>b</code> appears in <code>c</code>.</p>
<pre><code>import torch
a = torch.tensor([1.0, 2.0, 3.0, 4.0])
b = torch.tensor([0, 2, 1, 3])
c = torch.tensor([1, 2])
# Convert 'c' to a set for faster membership testing
c_set = set(c.tolist())
# Use vectorized operations to update 'a'
update_mask = torch.tensor([idx in c_set for idx in b], dtype=torch.bool)
a[update_mask] *= 2
</code></pre>
<p>This is the fastest way I can think of. Is there a more optimal way of achieving this?</p>
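<p>For comparison, one alternative I came across is <code>torch.isin</code> (available since around PyTorch 1.10, as far as I can tell), which builds the mask without a Python-level loop:</p>

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0, 4.0])
b = torch.tensor([0, 2, 1, 3])
c = torch.tensor([1, 2])

# torch.isin builds the membership mask entirely in vectorized ops.
mask = torch.isin(b, c)
a[mask] *= 2
print(a)  # tensor([1., 4., 6., 4.])
```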
| <python><pytorch> | 2023-11-12 16:56:44 | 1 | 361 | Adam |
77,469,785 | 1,362,485 | Denormalize array of columns into multiple pandas columns | <p>Why is column 'x' always 't2'? I need it to be 't2', 't3', 't4', ... and the result should be 9 rows, not 3.</p>
<pre><code>import pandas as pd
rows = [{'id': 'x1', 'seq': 0, 'amount': 2000, 'payments': 4, 'dummy': 1 },
{'id': 'x2', 'seq': 0, 'amount': 4000, 'payments': 2, 'dummy': 11 },
{'id': 'x3', 'seq': 0, 'amount': 9000, 'payments': 3, 'dummy': 111 }]
df = pd.DataFrame(rows)
def generate_array(row):
times2 = row['amount'] + row['payments'] * 2
times3 = row['amount'] + row['payments'] * 3
times4 = row['amount'] + row['payments'] * 4
result = [ { 'x': 't2', 'y': times2 },
{ 'x': 't3', 'y': times3 },
{ 'x': 't4', 'y': times4 } ]
return result
df['complete_schedule'] = df.apply(generate_array, axis=1)
df_normalized = df['complete_schedule'].apply(lambda x: pd.Series(x[0]) )
print(df_normalized)
x y
0 t2 2008
1 t2 4004
2 t2 9006
</code></pre>
<p>Instead, I need the result to be:</p>
<pre><code> x y
0 t2 2008
1 t2 2012
2 t2 2016
3 t3 4008
4 t3 4012
5 t3 4016
6 t4 9008
7 t4 9012
8 t4 9016
</code></pre>
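<p>For reference, I did get all 9 rows with <code>explode()</code>, which turns each 3-element list into 3 rows before building the frame — so the fix is to stop taking only <code>x[0]</code>. A sketch with a reduced version of the data (note the ordering comes out per input row, t2/t3/t4 for each id, rather than grouped):</p>

```python
import pandas as pd

rows = [{'id': 'x1', 'amount': 2000, 'payments': 4},
        {'id': 'x2', 'amount': 4000, 'payments': 2},
        {'id': 'x3', 'amount': 9000, 'payments': 3}]
df = pd.DataFrame(rows)

def generate_array(row):
    return [{'x': f't{n}', 'y': row['amount'] + row['payments'] * n}
            for n in (2, 3, 4)]

df['complete_schedule'] = df.apply(generate_array, axis=1)

# explode() emits one row per list element; building a frame from the
# resulting dicts yields all 9 rows instead of only the first element.
df_normalized = pd.DataFrame(
    df['complete_schedule'].explode().tolist()
).reset_index(drop=True)
print(df_normalized)  # 9 rows of x / y
```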
| <python><pandas> | 2023-11-12 16:52:02 | 1 | 1,207 | ps0604 |
77,469,596 | 9,137,547 | Normalize list of floats to probabilities | <p>I have a list of probability weights like <code>weights = [3, 7, 4, 2]</code> and I want to <strong>normalize</strong> it so that <code>sum(weights) == 1</code>.</p>
<p>This can be later used for something like "<a href="https://stackoverflow.com/questions/3679694/a-weighted-version-of-random-choice">A weighted version of random.choice</a>" with <a href="https://numpy.org/doc/stable/reference/random/generated/numpy.random.choice.html" rel="nofollow noreferrer">numpy.random.choice</a></p>
<p>Currently I am doing something like:</p>
<pre><code>norm_one = sum(weights)
probabilities = [x / norm_one for x in weights]
</code></pre>
<p>I am wondering if there is any problem with what I am doing, since floating-point numbers are represented with a finite number of bits and the sum of the list might not equal exactly 1, and whether there is a builtin function to normalize either a list or a <code>numpy.array</code> that I should use instead (or any better approach).</p>
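<p>For what it is worth, the NumPy version of the same normalization. As far as I can tell, <code>numpy.random.choice</code> only requires the probabilities to sum to 1 within a small tolerance, so the residual floating-point error is normally fine:</p>

```python
import numpy as np

weights = np.array([3, 7, 4, 2], dtype=float)
probabilities = weights / weights.sum()
print(probabilities)  # sums to 1.0

rng = np.random.default_rng(0)
sample = rng.choice(len(weights), size=5, p=probabilities)
print(sample)
```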
| <python><numpy><probability><normalization> | 2023-11-12 15:51:01 | 1 | 659 | Umberto Fontanazza |
77,469,419 | 6,455,731 | Scrapy Spider: Stuck with callback not firing | <p>I am trying to scrape a <a href="https://github.com/orgs/COST-ELTeC/repositories" rel="nofollow noreferrer">github repo</a>.</p>
<p>I want to extract all XML file URLs at level1 of every repo and, in the best case, also extract information from the XML files.</p>
<pre><code>import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
repo_rule = Rule(
LinkExtractor(
restrict_xpaths="//a[@itemprop='name codeRepository']",
restrict_text=r"ELTeC-.+"
)
)
pagination_rule = Rule(
LinkExtractor(restrict_xpaths="//a[@class='next_page']")
)
level_rule = Rule(
LinkExtractor(allow=r"/level1"),
follow=True,
callback="parse_level"
)
class ELTecSpider(CrawlSpider):
"""Scrapy CrawlSpider for crawling the ELTec repo."""
name = "eltec"
start_urls = ["https://github.com/orgs/COST-ELTeC/repositories"]
rules = [
repo_rule,
pagination_rule,
level_rule,
]
def parse_level(self, response):
print("INFO: ", response.url)
process = CrawlerProcess(
settings={
"FEEDS": {
"items.json": {
"format": "json",
"overwrite": True
},
},
}
)
process.crawl(ELTecSpider)
process.start()
</code></pre>
<p>The above extracts the responses for all level1 folders, but somehow I am stuck at this point. My plan was to go down every level1 url using callbacks like so:</p>
<pre><code>def parse_level(self, response):
yield scrapy.Request(response.url, callback=self.parse_docs)
def parse_docs(self, response):
docs_urls = response.xpath("//a[@class='Link--primary']")
for url in docs_urls:
print("INFO: ", url)
</code></pre>
<p>But apparently the callback never even fires.</p>
<p>What am I doing wrong?</p>
| <python><scrapy> | 2023-11-12 14:57:26 | 1 | 964 | lupl |
77,469,382 | 201,657 | How do I view a polars dataframe without truncating the values? | <p>I have a polars dataframe containing a column called <code>Notes</code> that has long text in it. If I simply output that dataframe (i.e. execute <code>df</code>) in a Jupyter notebook then the values therein get truncated</p>
<p><a href="https://i.sstatic.net/82fv6.png" rel="noreferrer"><img src="https://i.sstatic.net/82fv6.png" alt="enter image description here" /></a></p>
<p>I would like all of the text to be displayed. How do I do that?</p>
<p>To put it another way, I'd like the polars equivalent of <a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.show.html" rel="noreferrer">Spark's <code>show(False)</code> method</a>.</p>
| <python><dataframe><python-polars> | 2023-11-12 14:45:01 | 2 | 12,662 | jamiet |
77,469,293 | 974,733 | Why does my Python CLI project structure result in ModuleNotFoundErrors? | <p>I have a Python application used as a CLI, let's call it <code>foo</code>. My problem is that the project works if I do <code>pip install .</code> in the root of my repository, and then execute <code>foo</code> in the terminal.<br />
But if I try to run the source code directly with the Python interpreter, then the <code>import</code>s don't work.</p>
<p>(I'm new to Python, and this is the first project I'm maintaining, so I suspect that I'm doing something incorrectly or not in an idiomatic way.)</p>
<p>The repository has the following folder structure:</p>
<pre><code>.
├── src/
│ └── foo/
│ ├── __init__.py
│ ├── __main__.py
│ └── applogic.py
├── pyproject.toml
└── setup.cfg
</code></pre>
<p><code>__init__.py</code> is an empty file. <code>applogic.py</code> contains various methods, let's say it defines one method:</p>
<pre><code>def execute_foo():
...
</code></pre>
<p>And <code>__main__.py</code> is the main entry point of the CLI, which imports and uses a method from <code>applogic.py</code>:</p>
<pre><code>from foo.applogic import execute_foo
def cli():
execute_foo()
if __name__ == '__main__': # pragma: no cover
cli()
</code></pre>
<p>This setup works if I install the package locally with <code>pip install .</code> in the root of the repo and run it with <code>foo</code> in the terminal.<br />
But if I do <code>python src/foo/__main__.py</code>, then the import fails with this error.</p>
<pre><code>$ python src/foo/__main__.py
Traceback (most recent call last):
File "C:\GitHub\foo\src\foo\__main__.py", line 10, in <module>
from foo.applogic import execute_foo
ModuleNotFoundError: No module named 'foo'
</code></pre>
<p>If I remove the package name in the import statement, so change it from</p>
<pre><code>from foo.applogic import execute_foo
</code></pre>
<p>to</p>
<pre><code>from applogic import execute_foo
</code></pre>
<p>Then it works when executing the code with <code>python</code>.<br />
But then it doesn't work when I install the package with <code>pip install</code> and try executing it with <code>foo</code>. It fails on the import with the error</p>
<pre><code>ModuleNotFoundError: No module named 'execute_foo'
</code></pre>
<p>So my question is: is there a way to structure the project and use imports in a way that it works both when executing the code directly with <code>python</code>, and when it is packaged with <code>pip install</code>?</p>
<p>(If it helps, here is the source code of the real project: <a href="https://github.com/markvincze/sabledocs" rel="nofollow noreferrer">https://github.com/markvincze/sabledocs</a>)</p>
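<p>For what it's worth, the difference can be reproduced without the real project. The sketch below (hypothetical file contents, nothing taken from the actual repo) rebuilds the <code>src/foo</code> layout in a temporary directory and shows that running the file directly fails, while running the package with <code>python -m foo</code> and <code>src</code> on <code>PYTHONPATH</code> succeeds:</p>

```python
# A sketch of the sys.path difference (hypothetical file contents, not the
# real project): running src/foo/__main__.py directly puts src/foo on
# sys.path, so the package 'foo' itself is not importable.
import os
import subprocess
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, "src", "foo")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "applogic.py"), "w") as f:
    f.write("def execute_foo():\n    print('foo ran')\n")
with open(os.path.join(pkg, "__main__.py"), "w") as f:
    f.write("from foo.applogic import execute_foo\nexecute_foo()\n")

# 1) Direct run: sys.path[0] becomes .../src/foo, so 'foo' is not found.
direct = subprocess.run([sys.executable, os.path.join(pkg, "__main__.py")],
                        capture_output=True, text=True)
print("direct run failed:", direct.returncode != 0)

# 2) Module run with src/ on PYTHONPATH: 'foo' resolves, absolute imports work.
env = dict(os.environ, PYTHONPATH=os.path.join(root, "src"))
as_module = subprocess.run([sys.executable, "-m", "foo"],
                           capture_output=True, text=True, env=env)
print("python -m foo output:", as_module.stdout.strip())
```

<p>This is also roughly what <code>pip install -e .</code> gives you during development: the package lands on the import path, so the absolute import <code>from foo.applogic import ...</code> works both for the installed <code>foo</code> entry point and for <code>python -m foo</code>.</p>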
| <python> | 2023-11-12 14:19:45 | 1 | 8,115 | Mark Vincze |
77,469,134 | 17,487,457 | StackingClassifier with base-models trained on feature subsets | <p>I can best describe my goal using a synthetic dataset. Suppose I have the following:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
X, y = make_classification(n_samples=1000, n_features=10, n_classes=3,
n_informative=3)
df = pd.DataFrame(X, columns=list('ABCDEFGHIJ'))
X_train, X_test, y_train, y_test = train_test_split(
df, y, test_size=0.3, random_state=42)
X_train.head()
A B C D E F G H I J
541 -0.277848 1.022357 -0.950125 -2.100213 0.883638 0.821387 1.154613 0.075376 1.176242 -0.470087
440 1.089665 0.841446 -1.701004 -1.036256 -1.229357 0.345068 1.876470 -0.750067 0.080685 -1.318271
482 0.016010 0.025488 -1.189296 -1.052935 -0.623029 0.669521 1.518927 0.690019 -0.045486 -0.494186
422 -0.133358 -2.16219 1.170989 -0.942150 1.933444 -0.55118 -0.059908 -0.938672 -0.924097 -0.796185
778 0.901954 1.479360 -2.639176 -2.588845 -0.753915 -1.650621 2.727146 0.075260 1.330432 -0.941594
</code></pre>
<p>After conducting a feature importance analysis, I discovered that each of the 3 classes in the dataset can best be predicted using a subset of the features, as opposed to the whole set. For example:</p>
<pre><code>class | optimal predictors
-------+-------------------
0 | A, B, C
1 | D, E, F, G
2 | G, H, I, J
-------+-------------------
</code></pre>
<p>At this point, I would like to use 3 <code>one-vs-rest</code> classifiers to train sub-models (as the base models), one for each class, each using that class's best predictors, and then a <code>StackingClassifier</code> for the final prediction.</p>
<p>I have a high-level understanding of the <code>StackingClassifier</code>, where different base models can be trained (e.g. <code>DT, SVC, KNN</code> etc.) and a meta classifier uses another model, e.g. <code>Logistic Regression</code>.</p>
<p>In this case, however, each base model is a <code>DT</code> classifier, only each is to be trained using the feature subset best suited to its class, as above.</p>
<p>Then finally make predictions on the <code>X_test</code>.</p>
<p>But I am not sure how this can be done, so I have described my goal using the synthetic data above.</p>
<p>How can I design this to train the base models and make the final prediction?</p>
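<p>A sketch of one way to wire this up (the feature subsets are the hypothetical ones from the table above): give each base <code>DecisionTreeClassifier</code> a <code>ColumnTransformer</code> that selects its class's columns, and stack them. Note that each tree here is still fitted as a multiclass model on its subset; a strict one-vs-rest variant would additionally binarise the labels per estimator, but the column-selection mechanics are the same:</p>

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, n_classes=3,
                           n_informative=3, random_state=0)
df = pd.DataFrame(X, columns=list('ABCDEFGHIJ'))
X_train, X_test, y_train, y_test = train_test_split(
    df, y, test_size=0.3, random_state=42)

# per-class "optimal" predictors from the (hypothetical) importance analysis
subsets = {0: ['A', 'B', 'C'], 1: ['D', 'E', 'F', 'G'], 2: ['G', 'H', 'I', 'J']}

# one pipeline per class: select that class's columns, then fit a tree
estimators = [
    (f'dt_{cls}',
     make_pipeline(ColumnTransformer([('sel', 'passthrough', cols)]),
                   DecisionTreeClassifier(random_state=0)))
    for cls, cols in subsets.items()
]

stack = StackingClassifier(estimators=estimators,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_train, y_train)
acc = stack.score(X_test, y_test)
print(f"test accuracy: {acc:.3f}")
```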
| <python><machine-learning><scikit-learn><classification><ensemble-learning> | 2023-11-12 13:36:50 | 2 | 305 | Amina Umar |
77,469,097 | 6,907,703 | How can I process a pdf using OpenAI's APIs (GPTs)? | <p>The web interface for ChatGPT has an easy PDF upload. <strong>Is there an API from OpenAI that can receive PDFs?</strong></p>
<p>I know there are 3rd-party libraries that can read PDFs, but given that there are images and other important information in a PDF, it might be better if a model like GPT-4 Turbo were fed the actual PDF directly.</p>
<p>I'll state my use case to add more context. I intend to do RAG. In the code below I handle the PDF and a prompt. Normally I'd append the text at the end of the prompt. I could still do that with a PDF if I extract its contents manually.</p>
<p>The following code is taken from here <a href="https://platform.openai.com/docs/assistants/tools/code-interpreter" rel="noreferrer">https://platform.openai.com/docs/assistants/tools/code-interpreter</a>. Is this how I'm supposed to do it?</p>
<pre class="lang-py prettyprint-override"><code># Upload a file with an "assistants" purpose
file = client.files.create(
file=open("example.pdf", "rb"),
purpose='assistants'
)
# Create an assistant using the file ID
assistant = client.beta.assistants.create(
instructions="You are a personal math tutor. When asked a math question, write and run code to answer the question.",
model="gpt-4-1106-preview",
tools=[{"type": "code_interpreter"}],
file_ids=[file.id]
)
</code></pre>
<p>There is an upload endpoint as well, but it seems the intent of those endpoints is fine-tuning and assistants. I think the RAG use case is a common one and not necessarily related to assistants.</p>
| <python><machine-learning><pdf><openai-api><chat-gpt-4> | 2023-11-12 13:25:44 | 7 | 1,281 | Muhammad Mubashirullah Durrani |
77,469,094 | 1,550,974 | ImportError: cannot import name 'ModelField' from 'pydantic.fields' | <p>I cloned <a href="https://huggingface.co/spaces/os1187/docquery" rel="nofollow noreferrer">this Hugging Face space</a> and tested it in my app, but I got this error:</p>
<pre><code>===== Application Startup at 2023-11-12 13:13:59 =====
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
Moving 0 files to the new cache system
0it [00:00, ?it/s]
0it [00:00, ?it/s]
image-classification is already registered. Overwriting pipeline for task image-classification...
Traceback (most recent call last):
File "/home/user/app/app.py", line 12, in <module>
from docquery.document import load_document, ImageDocument
File "/home/user/.local/lib/python3.10/site-packages/docquery/document.py", line 12, in <module>
from .ocr_reader import NoOCRReaderFound, OCRReader, get_ocr_reader
File "/home/user/.local/lib/python3.10/site-packages/docquery/ocr_reader.py", line 6, in <module>
from pydantic.fields import ModelField
ImportError: cannot import name 'ModelField' from 'pydantic.fields' (/home/user/.local/lib/python3.10/site-packages/pydantic/fields.py)
</code></pre>
<p>In my requirements.txt, I specify the dependencies like this, but it does not work:</p>
<pre><code>torch
git+https://github.com/huggingface/transformers.git@21f6f58721dd9154357576be6de54eefef1f1818
git+https://github.com/impira/docquery.git@a494fe5af452d20011da75637aa82d246a869fa0#egg=docquery[web,donut]
</code></pre>
<p>I also tested with <code>pydantic==1.3</code>, but that did not work either. Can someone please advise me?</p>
<p><em>Update</em>: I have attached the error log from my app on Hugging Face.</p>
<p><a href="https://i.sstatic.net/yDe9L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yDe9L.png" alt="enter image description here" /></a></p>
| <python><pydantic><huggingface> | 2023-11-12 13:24:43 | 1 | 6,125 | Khant Thu Linn |
77,469,040 | 8,120,580 | Python dataclass: automate ID incrementation in an abstract class | <p>I want each of my Python subclasses to get auto-incrementing unique IDs via an abstract base class, but I don't know how to give each subclass its own separate set of ID values. Here is my code:</p>
<pre><code>from dataclasses import dataclass, field
from itertools import count
@dataclass
class Basic:
identifier: int = field(default_factory=count().__next__, init=False)
def __init_subclass__(cls, **__):
super().__init_subclass__(**__)
# cls.identifier: int = field(default_factory=count().__next__, init=False)
class A(Basic):
pass
class B(Basic):
pass
</code></pre>
<p>The id values are shared between subclasses when I want to separate them:</p>
<pre><code>>>> import stack
>>> stack.A()
A(identifier=0)
>>> stack.B()
B(identifier=1)
</code></pre>
<p>Do you know how to solve the problem? I think the <code>__init_subclass__</code> is a solution, but I'm unsure how to make it work.</p>
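<p>For reference, one possible sketch (not necessarily the only way) that keeps the counter per subclass: let <code>__init_subclass__</code> attach a fresh <code>count()</code> to each subclass and assign the id in <code>__post_init__</code>:</p>

```python
from dataclasses import dataclass, field
from itertools import count


@dataclass
class Basic:
    # placeholder; the real value is assigned in __post_init__
    identifier: int = field(init=False, default=0)

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls._ids = count()  # each subclass gets its own counter

    def __post_init__(self):
        self.identifier = next(type(self)._ids)


class A(Basic):
    pass


class B(Basic):
    pass


a1, a2, b1 = A(), A(), B()
print(a1, a2, b1)  # A(identifier=0) A(identifier=1) B(identifier=0)
```

<p>(Instantiating <code>Basic</code> itself would fail here, since only subclasses get a counter; if you need that too, give <code>Basic</code> its own <code>_ids</code> as well.)</p>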
| <python><python-dataclasses> | 2023-11-12 13:09:13 | 1 | 1,385 | Alex |
77,468,918 | 20,878,436 | WinError 10060 Connection Timeout Error When Downloading 'punkt' in NLTK | <p>I'm experiencing an issue with the NLTK library in Python, specifically when trying to download the 'punkt' tokenizer models: the download fails with a <code>WinError 10060</code> connection timeout.</p>
<p><strong>Context:</strong>
I'm attempting to download the <code>punkt</code> package using the following code:</p>
<pre class="lang-py prettyprint-override"><code>import nltk
nltk.download('punkt')
</code></pre>
<p><strong>Troubleshooting steps I've taken:</strong></p>
<ul>
<li>Checked my internet connection, which seems to be working fine.</li>
<li>Temporarily disabled firewall and antivirus software, but the issue persists.</li>
<li>Attempted to use a different internet connection (e.g., mobile hotspot), but faced the same error.</li>
</ul>
| <python><python-3.x><jupyter-notebook><nltk> | 2023-11-12 12:31:09 | 2 | 368 | Shounak Das |
77,468,917 | 2,881,414 | What is a reliable way to specify a minimum pip version? | <p>What is a reliable way to specify the minimum required <code>pip</code> version in a project that uses <code>pyproject.toml</code>?</p>
<p>Is it:</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
requires = ["setuptools>=64.0", "pip>=x.y"]
build-backend = "setuptools.build_meta"
</code></pre>
<p>or should one put it into <code>requirements-dev.txt</code>?</p>
<p>or simply a <code>pip install --upgrade pip</code>, before installing the dev-requirements?</p>
<p>The background of the question is, I'm running tests across all main operating systems and Python versions. Some combinations fail because the pip version on the host machine is too old.</p>
| <python><pip><pyproject.toml> | 2023-11-12 12:30:23 | 1 | 17,530 | Bastian Venthur |
77,468,647 | 14,823,254 | The script for deleting files and folders consumes a lot of memory | <p>I wrote a script to delete files and folders, but it eats up a lot of RAM. The folder being scanned is about 1 TB, and new files are written to it in parallel. What can I do to make the script consume fewer resources?
The server has 48 GB of RAM (28 GB is in use when the script is not running), so about 20 GB goes to the script.</p>
<pre><code>import os, time, shutil
path = "/data/decision/"
now = time.time()
with open ('/root/decisionlist', 'w+') as f:
for filename in os.listdir(path):
if os.stat(os.path.join(path, filename)).st_mtime < now - 30*86400:
f.write(os.path.join(path, filename)+'\n')
with open('/root/decisionlist', 'r') as filetodelete:
for line in filetodelete:
shutil.rmtree(line)
os.remove('/root/decisionlist')
</code></pre>
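<p>One lower-memory sketch (illustrative only, using a temporary directory and demo files instead of the real path) streams directory entries with <code>os.scandir</code> instead of building a full listing or an intermediate file:</p>

```python
import os
import shutil
import tempfile
import time

path = tempfile.mkdtemp()            # stand-in for /data/decision/
cutoff = time.time() - 30 * 86400

# demo data: one "old" file and one "new" file
for name, age in [("old.txt", 40 * 86400), ("new.txt", 0)]:
    p = os.path.join(path, name)
    open(p, "w").close()
    os.utime(p, (time.time() - age, time.time() - age))

# os.scandir yields entries lazily and caches stat() results, so the
# directory listing is never materialised in memory all at once.
with os.scandir(path) as it:
    for entry in it:
        if entry.stat().st_mtime < cutoff:
            if entry.is_dir(follow_symlinks=False):
                shutil.rmtree(entry.path)
            else:
                os.remove(entry.path)

print(sorted(os.listdir(path)))  # ['new.txt']
```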
| <python><python-3.x> | 2023-11-12 10:58:36 | 0 | 341 | Iceforest |
77,468,274 | 17,569,967 | How to make a mutation that inverts one of the solution's genes when solutions are bit arrays | <p>That's what I have so far. As I can see from the output, my parameters are not enough to constrain the mutation to my needs. Sometimes no gene is changed, sometimes more than one.</p>
<pre><code>import pygad
import numpy as np
def divider(ga_instance):
return np.max(np.sum(ga_instance.population, axis=1))
def on_start(ga_instance):
print("on_start()")
    print(f'Initial population:\n {ga_instance.population}')
def fitness_function(ga_instance, solution, _):
return np.sum(solution) / divider(ga_instance)
def on_fitness(ga_instance, population_fitness):
print(f'\non_fitness()')
    print(f'Divisor: {divider(ga_instance)}')
for idx, (instance, fitness) in enumerate(zip(ga_instance.population, ga_instance.last_generation_fitness)):
print(f'{idx}. {instance}: {fitness}')
def on_parents(ga_instance, selected_parents):
print("\non_parents()")
    print(f'Selected parent indices: {ga_instance.last_generation_parents_indices}')
    print(f'Selected parents:\n {ga_instance.last_generation_parents}')
def on_crossover(ga_instance, offspring_crossover):
print("\non_crossover()")
    print(f'Crossover result:\n {ga_instance.last_generation_offspring_crossover}')
def on_mutation(ga_instance, offspring_mutation):
print("\non_mutation()")
    print(f'Mutation result:\n {ga_instance.last_generation_offspring_mutation}')
def on_generation(ga_instance):
print(f"\non_generation()")
    print("Resulting generation:\n ", ga_instance.population)
    sol = ga_instance.best_solution()
    print(f"Best solution: {sol[0]} : {sol[1]}")
ga_instance = pygad.GA(
num_generations=1,
num_parents_mating=2,
fitness_func=fitness_function,
gene_type=int,
init_range_low=0,
init_range_high=2,
sol_per_pop=10,
num_genes=8,
crossover_type='single_point',
parent_selection_type="rws",
mutation_type="inversion",
mutation_num_genes=1,
on_start=on_start,
on_fitness=on_fitness,
on_parents=on_parents,
on_crossover=on_crossover,
on_mutation=on_mutation,
on_generation=on_generation,
)
ga_instance.run()
</code></pre>
| <python><pygad> | 2023-11-12 08:37:59 | 1 | 412 | Intolighter |
77,468,234 | 12,471,420 | NumPy - Check for "5 in a row" pattern in a 2d array | <p>I am trying to write a function that checks if there is at least one "five in a row" of a number in a 2D array, in the row, column, and diagonal directions. The array to check can be any square matrix with size >= 5, but I'll most likely be working with 7x7 ones.</p>
<p>For example, the following matrix has 3 occurrences of the pattern (detecting at least one is sufficient) of five 1s in a sequence: one in the first column from (1, 0) to (5, 0), one diagonally from (0, 1) to (4, 5), and one anti-diagonally from (1, 4) to (5, 0).</p>
<pre class="lang-py prettyprint-override"><code>A = np.array(
[
[0, 1, 0, 0, 0, 0],
[1, 0, 1, 0, 1, 0],
[1, 0, 0, 1, 0, 0],
[1, 0, 1 ,0, 1, 0],
[1, 1, 0, 0, 0, 1],
[1, 0, 0, 0, 0, 0]
]
)
</code></pre>
<p>My attempt at this looks like the following</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import cProfile
import pstats
import time
class Board:
def __init__(self, size):
self.data = np.zeros((size, size), dtype=np.byte)
self.size = size
# make sliding window of size 5 for each row and col
self.rowWindows = np.lib.stride_tricks.sliding_window_view(self.data, window_shape=(1,5))
self.colWindows = np.lib.stride_tricks.sliding_window_view(np.transpose(self.data), window_shape=(1,5))
# make sliding window for both diagonal directions
# storing them as object array since the diagonals windows have different sizes
self.antiDiagonalWindow = np.array(
[
np.lib.stride_tricks.sliding_window_view(np.fliplr(self.data).diagonal(offset=i), window_shape=(5,))
for i in range(-self.size + 5, self.size - 5 + 1, 1)
],
dtype=object,
)
self.diagonalWindow = np.array(
[
np.lib.stride_tricks.sliding_window_view(self.data.diagonal(offset=i), window_shape=(5,))
for i in range(-self.size + 5, self.size - 5 + 1, 1)
],
dtype=object,
)
def hasFiveInRow(self, value):
return (
np.any(np.all(self.rowWindows == value, -1),)
or np.any(np.all(self.colWindows == value,-1), )
# have to use concat to turn sliding window views into 2d array
# since diagonals have different sizes
or np.any(np.all(np.concatenate(self.antiDiagonalWindow) == value, -1), )
or np.any(np.all(np.concatenate(self.diagonalWindow) == value, -1), )
)
def benchMark():
b = Board(size=7)
b.data[:]=np.random.randint(low=0, high=3, size=(7,7))
for i in range(100_000):
val = b.hasFiveInRow(1)
# t0 = time.time()
# benchMark()
# print(time.time() - t0)
with cProfile.Profile() as p:
benchMark()
res = pstats.Stats(p)
res.sort_stats(pstats.SortKey.TIME)
res.print_stats()
</code></pre>
<p>The resulting performance is not too bad, but I would like to improve it if possible, since I am using it as part of a game AI tree search and will have to call the function a very large number of times. I am thinking the <code>np.any(np.all(windows))</code> is not ideal here, since it has to create many boolean arrays just to be reduced to a single value.</p>
<p>The cProfile log shows a large number of calls to 'reduce', 'dictcomp', '_wrapreduction', etc., which take a long time to finish.</p>
<p>Is there a better way to look for this pattern? I only need to check if this pattern occurs at least once as a boolean, although getting the exact position and number of occurrences would be nice.</p>
<p>Any help would be appreciated!</p>
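<p>For comparison, a shift-and-AND sketch avoids building the window views entirely: per direction, AND together five shifted copies of the boolean mask, so only a handful of intermediate arrays are created (one possible approach, not necessarily the fastest):</p>

```python
import numpy as np

def has_five_in_row(board, value, n=5):
    """True if `value` occurs n times consecutively in any row,
    column, or diagonal of the 2-D array `board`."""
    m = board == value
    rows, cols = m.shape
    for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
        acc = m
        for k in range(1, n):
            # shifted[i, j] = m[i + dr*k, j + dc*k] (False outside the board)
            shifted = np.zeros_like(m)
            r0, r1 = max(0, -dr * k), rows - max(0, dr * k)
            c0, c1 = max(0, -dc * k), cols - max(0, dc * k)
            shifted[r0:r1, c0:c1] = m[r0 + dr * k:r1 + dr * k,
                                      c0 + dc * k:c1 + dc * k]
            acc = acc & shifted
        if acc.any():
            return True
    return False

A = np.array([[0, 1, 0, 0, 0, 0],
              [1, 0, 1, 0, 1, 0],
              [1, 0, 0, 1, 0, 0],
              [1, 0, 1, 0, 1, 0],
              [1, 1, 0, 0, 0, 1],
              [1, 0, 0, 0, 0, 0]])
print(has_five_in_row(A, 1))  # True
print(has_five_in_row(A, 2))  # False
```

<p>A surviving <code>True</code> in <code>acc</code> marks the starting cell of a run, so <code>np.argwhere(acc)</code> would also give you the positions if you ever need them.</p>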
| <python><algorithm><numpy> | 2023-11-12 08:17:46 | 2 | 365 | goldenotaste |
77,468,224 | 3,322,273 | Allowing multiple requests under the same socket connection with python socketserver | <p>I started a TCP server in Python and tried to send to it multiple requests using a single socket connection:</p>
<pre><code>>>> import socket
>>> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
>>> s.connect(('127.0.0.1', 4444))
>>> message = b'hi'
>>> s.sendall(message)
>>> resp = s.recv(4096)
>>> resp
b'hello'
>>> # Now trying to send another message without restarting the socket
>>> s.sendall(message)
>>> resp = s.recv(4096)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ConnectionAbortedError: [WinError 10053] An established connection was aborted by the software in your host machine
</code></pre>
<p>The server rejects my second request; I have to close and reopen the socket before I can send a new request.</p>
<p>This is what my server looks like:</p>
<pre><code>import socketserver
class MyTCPServer(socketserver.BaseRequestHandler):
def handle(self):
message = self.request.recv(4096)
self.request.sendall(b'hello')
server = socketserver.TCPServer(('127.0.0.1', 4444), MyTCPServer)
server.serve_forever()
</code></pre>
<p>How can I make the server keep the connection alive and thus accept multiple requests using a single socket?</p>
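<p>For reference, the usual fix (a sketch, not the only design) is to keep reading inside <code>handle()</code> until the client closes the connection, since <code>handle()</code> is called once per connection and the connection is closed when it returns:</p>

```python
import socket
import socketserver
import threading

class MyTCPServer(socketserver.BaseRequestHandler):
    def handle(self):
        # handle() is called once per connection; loop until the peer
        # closes its side (recv() then returns b'').
        while True:
            message = self.request.recv(4096)
            if not message:
                break
            self.request.sendall(b'hello')

server = socketserver.TCPServer(('127.0.0.1', 0), MyTCPServer)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(server.server_address)
responses = []
for _ in range(2):            # two requests over one socket
    s.sendall(b'hi')
    responses.append(s.recv(4096))
s.close()
server.shutdown()
print(responses)  # [b'hello', b'hello']
```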
| <python><tcpserver><socketserver> | 2023-11-12 08:15:25 | 1 | 12,360 | SomethingSomething |
77,468,212 | 2,660,756 | Custom error messages from queries based on data in the collection in mongodb | <p>I'm wondering if there's a way to return custom errors and/or messages from queries depending on the data in the dataset in mongodb.</p>
<p>For example and clarification, consider that I have the following collection, named <code>users</code>, with data in this format:</p>
<pre><code>{
"_id": "65506e0b9ab7d8f1f9e23373",
"first_name": "John",
"last_name": "Smith",
"age": 33,
"occupation": "Software engineer",
"is_active": false,
}
</code></pre>
<p>And I would like to change the <code>occupation</code> field <strong>if</strong> the <code>_id</code> field matches <strong>AND</strong> the <code>is_active</code> field is true. However, I would also like to know why the document wasn't updated, if it wasn't.</p>
<p>One way to do this of course is by fetching the document that matches the <code>_id</code> field then validate (using programming language code) if the field <code>is_active == true</code> and if it is, then use an update query to update the <code>occupation</code> field, but it seems wasteful to me to make two queries for a simple update.</p>
<p>The logic I'm looking for in a <code>SINGLE</code> query is something similar to:</p>
<pre><code>if there is a document with _id that matches then:
if document.is_active == false:
return CUSTOM_INACTIVE_USER_ERROR
else:
update document.occupation to the new value
</code></pre>
<p>the <code>CUSTOM_INACTIVE_USER_ERROR</code> can be a JSON document for example:</p>
<pre><code>{
  "error": "is_active is set to false, can't update"
}
</code></pre>
<p>Thank you in advance.</p>
| <python><mongodb><pymongo> | 2023-11-12 08:12:47 | 0 | 637 | Loay |
77,468,154 | 1,595,350 | How to Chat with my Data in Python using Microsoft Azure OpenAI and Azure Cognitive Search | <p>I have written code that extracts text from a PDF document and converts it into vectors using the text-embeddings-ada-002 model from Azure OpenAI. These vectors are then stored in a Microsoft Azure Cognitive Search index and can be queried. However, I now want to use Azure OpenAI to interact with this data and retrieve a generated result. My code so far works fine, but I don't know how to implement the interaction with my custom data in Azure Cognitive Search through Azure OpenAI in Python.</p>
<p>This is my code:</p>
<pre><code>OPENAI_API_BASE = "https://xxxxx.openai.azure.com"
OPENAI_API_KEY = "xxxxxx"
OPENAI_API_VERSION = "2023-05-15"
openai.api_type = "azure"
openai.api_key = OPENAI_API_KEY
openai.api_base = OPENAI_API_BASE
openai.api_version = OPENAI_API_VERSION
AZURE_COGNITIVE_SEARCH_SERVICE_ENDPOINT = "https://xxxxxx.search.windows.net"
AZURE_COGNITIVE_SEARCH_API_KEY = "xxxxxxx"
AZURE_COGNITIVE_SEARCH_INDEX_NAME = "test"
AZURE_COGNITIVE_SEARCH_CREDENTIAL = AzureKeyCredential(AZURE_COGNITIVE_SEARCH_API_KEY)
llm = AzureChatOpenAI(deployment_name="gpt35", openai_api_key=OPENAI_API_KEY, openai_api_base=OPENAI_API_BASE, openai_api_version=OPENAI_API_VERSION)
embeddings = OpenAIEmbeddings(deployment_id="ada002", chunk_size=1, openai_api_key=OPENAI_API_KEY, openai_api_base=OPENAI_API_BASE, openai_api_version=OPENAI_API_VERSION)
acs = AzureSearch(azure_search_endpoint=AZURE_COGNITIVE_SEARCH_SERVICE_ENDPOINT,
azure_search_key = AZURE_COGNITIVE_SEARCH_API_KEY,
index_name = AZURE_COGNITIVE_SEARCH_INDEX_NAME,
embedding_function = embeddings.embed_query)
def generate_embeddings(s):
    # important: 'engine' must be the name of my deployment!
response = openai.Embedding.create(
input=s,
engine="ada002"
)
embeddings = response['data'][0]['embedding']
return embeddings
def generate_tokens(s, f):
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
splits = text_splitter.split_text(s)
i = 0
documents = []
for split in splits:
metadata = {}
metadata["index"] = i
metadata["file_source"] = f
i = i+1
new_doc = Document(page_content=split, metadata=metadata)
documents.append(new_doc)
#documents = text_splitter.create_documents(splits)
return documents
drive.mount('/content/drive')
folder = "/content/drive/docs/pdf/"
page_content = ''
doc_content = ''
for filename in os.listdir(folder):
file_path = os.path.join(folder, filename)
if os.path.isfile(file_path):
print(f"Processing file: {file_path}")
doc = fitz.open(file_path)
for page in doc: # iterate the document pages
page_content += page.get_text() # get plain text encoded as UTF-8
doc_content += page_content
d = generate_tokens(doc_content, file_path)
print(d)
acs.add_documents(documents=d)
print("Done.")
query = "What are the advantages of an open-source ai model?"
search_client = SearchClient(AZURE_COGNITIVE_SEARCH_SERVICE_ENDPOINT, AZURE_COGNITIVE_SEARCH_INDEX_NAME, credential=AZURE_COGNITIVE_SEARCH_CREDENTIAL)
results = search_client.search(
search_text=None,
vector_queries= [vector_query],
select=["content_vector", "metadata"],
)
print(results)
for result in results:
print(result)
</code></pre>
<p>The fields in Azure Cognitive Search are <code>content_vector</code> for the vectors and <code>content</code> for the plain-text content. I have looked at a lot of GitHub code, including some published by Microsoft, and I know this has been implemented, but I obviously have some trouble understanding how this piece in particular works.</p>
<p>So I am looking for some help/hints on how to extend this code to interact with the content in Azure Cognitive Search via Azure OpenAI chat.</p>
| <python><azure><azure-cognitive-search><azure-openai> | 2023-11-12 07:46:36 | 1 | 4,326 | STORM |
77,468,134 | 4,769,404 | Truly portable executables written in python in 2023 | <p>I've recently spent a lot of time figuring out how to distribute tools written in Python as standalone, self-contained executables that will work on most Linux distributions without a special environment (including Python itself). I decided to write this post to discuss my approach.</p>
<p>Here are a few simple steps I've come up with to pull this off:</p>
<ol>
<li><p>The first is very obvious - use special tooling that can produce such an executable, like Nuitka, PyOxidizer, PyInstaller, python-appimage, etc.</p>
</li>
<li><p>Build your application against an old enough glibc. You can do this using some old Linux distribution or just by setting up such a toolchain manually. This matters for C extensions, for tools like Nuitka/PyOxidizer, and for Python itself.</p>
</li>
<li><p>Last but not least - don't forget about dependencies and the dependencies of your dependencies. All Python wheels should have either an <code>any</code> <a href="https://packaging.python.org/en/latest/specifications/platform-compatibility-tags/" rel="nofollow noreferrer">platform tag</a> or one of the <a href="https://peps.python.org/pep-0600/" rel="nofollow noreferrer">manylinux</a> tags that suits your requirements. You can check tags and repair the bad ones with <a href="https://github.com/pypa/auditwheel" rel="nofollow noreferrer">auditwheel</a>. But fortunately, in the last 5-10 years, most of the mainstream packages have received manylinux wheels.</p>
</li>
</ol>
<p>Of course, there will be issues with PyOxidizer/Nuitka/PyInstaller/etc. pitfalls, especially in big projects, so you need some e2e tests on every Linux you want to support.</p>
<p>For educational purposes, I made <a href="https://github.com/asapelkin/overpackaged" rel="nofollow noreferrer">this little repo</a>. It includes:</p>
<ol>
<li>A sample app with bad dependencies and a C extension.</li>
<li>A script to create a 'wheelhouse' with compatible wheels only (<code>manylinux*</code> and <code>any</code>).</li>
<li>Scripts to build executables using Nuitka, PyOxidizer, Appimage, pex.</li>
<li>Portability tests (run executables in different linux distros) and performance benchmarks.</li>
<li>Readme and Makefile to run any step with single command.</li>
</ol>
<p>It's obviously not a very useful repo in itself, but it helped me explore the topic a little, and maybe it can serve to demonstrate my approach.
I'd be grateful for any feedback and thoughts on the topic! Please prove me wrong, it would be very helpful!</p>
| <python><linux><packaging><portability><nuitka> | 2023-11-12 07:41:16 | 0 | 690 | Arseniy |
77,467,788 | 22,213,065 | Keep second regex lines in first regex lines matches | <p>I have a large number of txt list files in the <code>E:\Desktop\Linux_distro\asliiiii</code> directory; the following is a sample of one of my files:</p>
<pre><code>95
ROSA
139
96
Chakra
137
97
AV Linux
135
98
LibreELEC
134
99
Simplicity
131
100
Kodachi
130
20200301020449
79776361952441
</code></pre>
<p>Now I need a script that first finds the lines matching the regex <code>\d{14}</code> and then, among the lines found, keeps only those matching the regex <code>20(?:0[0-9]|1[0-9]|20)[0-1][0-9]</code>, leaving all other lines untouched.<br />
This means the following result should be produced:</p>
<pre><code>95
ROSA
139
96
Chakra
137
97
AV Linux
135
98
LibreELEC
134
99
Simplicity
131
100
Kodachi
130
20200301020449
</code></pre>
<p>I wrote the following Python script, but I don't know why it isn't working correctly for my lists!</p>
<pre><code>import os
import re
def process_file(file_path):
with open(file_path, 'r') as file:
lines = file.readlines()
# Find lines matching \d{14}
regex_pattern_1 = re.compile(r'\d{14}')
matching_lines = [line.strip() for line in lines if regex_pattern_1.search(line)]
# Keep only matches of the second regex in the found lines
regex_pattern_2 = re.compile(r'20(?:0[0-9]|1[0-9]|20)[0-1][0-9]\d{8}')
filtered_lines = []
for line in matching_lines:
matches = regex_pattern_2.findall(line)
filtered_lines.extend(matches)
# Write the filtered lines back to the file
with open(file_path, 'w') as file:
file.write('\n'.join(filtered_lines))
def process_files_in_directory(directory_path):
for filename in os.listdir(directory_path):
if filename.endswith('.txt'):
file_path = os.path.join(directory_path, filename)
process_file(file_path)
if __name__ == "__main__":
directory_path = r'E:\Desktop\Linux_distro\asliiiii'
process_files_in_directory(directory_path)
print("Processing complete.")
</code></pre>
<p>But this script gives me the following result!</p>
<pre><code>20200301020449
</code></pre>
<p>Where is the problem in this script?</p>
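<p>For reference, the desired filtering can be phrased as "keep every line unless it is a 14-digit line that fails the year pattern". A sketch of that logic on an abbreviated in-memory sample (file handling left out):</p>

```python
import re

sample = """95
ROSA
139
100
Kodachi
130
20200301020449
79776361952441"""

fourteen_digits = re.compile(r'^\d{14}$')
valid_date = re.compile(r'^20(?:0[0-9]|1[0-9]|20)[0-1][0-9]\d{8}$')

# keep a line unless it is 14 digits AND fails the date-like pattern
kept = [line for line in sample.splitlines()
        if not fourteen_digits.match(line) or valid_date.match(line)]
print('\n'.join(kept))
```

<p>The key point is that non-matching lines are kept as-is; the script in the question writes out only the filtered 14-digit matches, which is why everything else disappears from the files.</p>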
| <python><regex> | 2023-11-12 04:59:06 | 3 | 781 | Pubg Mobile |
77,467,787 | 806,160 | Why does changing a NumPy array change a tensor converted from it? | <p>When I convert a NumPy array to a tensor and then change the array in different ways, some changes affect the tensor and some do not. Here is a sample.</p>
<p>In this code:</p>
<pre><code>array = np.arange(1., 8.)
tensor = torch.from_numpy(array)
array, tensor
</code></pre>
<p>the output is:</p>
<pre><code>(array([1., 2., 3., 4., 5., 6., 7.]),
tensor([1., 2., 3., 4., 5., 6., 7.], dtype=torch.float64))
</code></pre>
<p>Now when I add 1 to the NumPy array in this way, it changes the tensor's values:</p>
<pre><code>array = np.arange(1., 8.)
tensor = torch.from_numpy(array)
array += 1
array, tensor
</code></pre>
<p>The output is:</p>
<pre><code>(array([2., 3., 4., 5., 6., 7., 8.]),
tensor([2., 3., 4., 5., 6., 7., 8.], dtype=torch.float64))
</code></pre>
<p>But when I change the NumPy array in another way, it does not change the tensor:</p>
<pre><code>array = np.arange(1., 8.)
tensor = torch.from_numpy(array)
array = array + 1
array, tensor
</code></pre>
<p>the output is:</p>
<pre><code>(array([2., 3., 4., 5., 6., 7., 8.]),
tensor([1., 2., 3., 4., 5., 6., 7.], dtype=torch.float64))
</code></pre>
<p>Again, this code does not change it:</p>
<pre><code>array = np.arange(1., 8.)
tensor = torch.from_numpy(array)
array = array + 1
array += 1
array, tensor
</code></pre>
<p>The same thing also happens when converting a PyTorch tensor to a NumPy array.</p>
<p>Why do NumPy and PyTorch work this way?</p>
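<p>The same behaviour can be reproduced with a plain NumPy view, with no PyTorch involved, which shows it is about in-place mutation versus rebinding a name:</p>

```python
import numpy as np

a = np.arange(1., 8.)
view = a.view()   # shares the same buffer, like torch.from_numpy(a)

a += 1            # in-place: writes into the shared buffer
print(view)       # [2. 3. 4. 5. 6. 7. 8.]
after_inplace = view.copy()

a = a + 1         # builds a NEW array and rebinds the name `a` to it
print(view)       # still [2. 3. 4. 5. 6. 7. 8.]; the old buffer is untouched
```

<p>In other words, <code>a += 1</code> mutates the memory the tensor shares, while <code>a = a + 1</code> leaves the shared memory alone and just points <code>a</code> at a new array.</p>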
| <python><arrays><numpy><pytorch><converters> | 2023-11-12 04:57:19 | 1 | 1,423 | Tavakoli |
77,467,685 | 10,200,497 | finding the first value of last streak in a column | <p>This is my dataframe:</p>
<pre><code>df = pd.DataFrame({'a': [1, 0, 0, 1, 0, -1, -1, -1, 0, -1, -1, -1]})
</code></pre>
<p>And this is the outcome that I want:</p>
<pre><code> a b
0 1 NaN
1 0 NaN
2 0 NaN
3 1 NaN
4 0 NaN
5 -1 NaN
6 -1 NaN
7 -1 NaN
8 0 NaN
9 -1 x
10 -1 NaN
11 -1 NaN
</code></pre>
<p>I want to find the last streak of numbers. In my sample it is -1, and I then want to mark the first value of that streak. I have four additional conditions:</p>
<p>a) I don't want the streak if it is a streak of 0s.</p>
<p>b) The streak must be at least two rows long.</p>
<p>c) If the last streak is a streak of 0s, I don't want to mark anything at all, like column <code>E</code> in the image below.</p>
<p>d) Note that in column <code>F</code> of the image I also don't want to mark anything.</p>
<p>To clarify further, in this image I highlighted the rows that I want across different samples. Each column represents a unique sample, like the example above. If there are no highlights, then no row should be marked.</p>
<p><a href="https://i.sstatic.net/tjpsp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tjpsp.png" alt="enter image description here" /></a></p>
<p>And this is what I have tried but did not work:</p>
<pre><code>mask = (df.a == -1)
df.loc[(mask.cumsum() == mask.sum() - 1) & mask, 'b'] = 'x'
</code></pre>
<p>And after finding the row, what is the best way to get that row without creating a new column like I did here?</p>
<p>I have a tendency to create a new column and then get the rows where the value is not <code>NaN</code>.</p>
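<p>For reference, a sketch of the usual run-labelling idiom (<code>shift</code>/<code>cumsum</code>) applied to the sample above; it returns the start index of the last run directly, without adding a column, though the four special-case conditions still need to be layered on top:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 0, 0, 1, 0, -1, -1, -1, 0, -1, -1, -1]})

# label consecutive runs: a new group starts whenever the value changes
grp = (df['a'] != df['a'].shift()).cumsum()
runs = (df.reset_index()
          .groupby(grp.values)
          .agg(start=('index', 'first'),
               value=('a', 'first'),
               size=('a', 'size')))

last = runs.iloc[-1]
if last['value'] != 0 and last['size'] >= 2:   # conditions a) and b)
    print(last['start'])  # 9 for this sample
```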
| <python><pandas> | 2023-11-12 03:52:50 | 1 | 2,679 | AmirX |
77,467,422 | 13,086,128 | RuntimeWarning: overflow encountered in scalar negative | <p>I am getting this warning when I upgrade my numpy from <code>1.23.3</code> to <code>1.25.0</code></p>
<pre><code>#For 1.23.3 version
import numpy as np
x = np.uint64(0xffffffff)
print(-x)
#output
18446744069414584321
</code></pre>
<hr />
<pre><code>#For 1.25.0 version
import numpy as np
x = np.uint64(0xffffffff)
print(-x)
#output
18446744069414584321
<ipython-input-4-adbfb5bb01c0>:1: RuntimeWarning: overflow encountered in scalar negative
print(-x)
</code></pre>
<p>Is anyone else getting this warning message, or is it just me?</p>
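<p>For what it's worth, the computed value is the same in both versions (unsigned negation wraps modulo 2**64); newer NumPy just reports the overflow for scalar operations too. A minimal sketch, assuming the wrap-around is intended and only the warning should be silenced locally:</p>

```python
import numpy as np

x = np.uint64(0xffffffff)

# Unsigned negation wraps around: 2**64 - 0xffffffff
with np.errstate(over='ignore'):
    y = -x

print(y)  # 18446744069414584321
```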
| <python><numpy> | 2023-11-12 01:21:50 | 2 | 30,560 | Talha Tayyab |
77,467,361 | 14,256,643 | Understanding and Fixing User Permissions Logic in Django | <p>I know this might be a silly question, but please don't laugh at me; I am struggling to understand this simple piece of logic. I don't understand why the condition below is not working. It is meant to check whether request.user is not the owner of the object or the user is not an admin. The first part works to prevent other users from overwriting the owner's object, but why doesn't the second part work, where a user should be able to overwrite the object if he is an admin?</p>
<pre><code>if (request.user != comment.user) or (not request.user.is_superuser):
</code></pre>
<p>Why does my logic above only work for the "user != owner of the obj" part? Why can't an admin user edit, even though he is an admin?</p>
<p>But this logic works for both parts:</p>
<pre><code>if not (request.user.is_superuser or (request.user == blog.blog_author)):
</code></pre>
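<p>For context, the two checks can be compared with plain booleans (a sketch; by De Morgan's law, <code>not (A or B)</code> equals <code>(not A) and (not B)</code>, so the first version's <code>or</code> should have been an <code>and</code>):</p>

```python
# deny == True means access is blocked

def deny_buggy(is_owner, is_admin):
    # mirrors: (request.user != comment.user) or (not request.user.is_superuser)
    return (not is_owner) or (not is_admin)

def deny_fixed(is_owner, is_admin):
    # mirrors: not (request.user.is_superuser or (request.user == blog.blog_author))
    return not (is_admin or is_owner)

# A non-owner admin:
print(deny_buggy(is_owner=False, is_admin=True))  # True  -> wrongly blocked
print(deny_fixed(is_owner=False, is_admin=True))  # False -> allowed
```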
| <python><python-3.x><django> | 2023-11-12 00:44:57 | 2 | 1,647 | boyenec |
77,467,308 | 1,432,980 | tree view scrollbars do not work properly | <p>I am trying to create a <code>tree view</code> with horizontal and vertical scrollbars. My code looks like this (I added some gibberish to build a deep hierarchy).</p>
<pre><code>import tkinter as tk
from tkinter import ttk
app = tk.Tk()
app.geometry("400x400")
app.title("Basic viewer")
main_window = ttk.PanedWindow(app, orient=tk.HORIZONTAL)
main_window.pack(expand=1, fill=tk.BOTH, padx=10, pady=10)
tree_view_frame = tk.Frame(main_window)
tree_view_frame.grid_columnconfigure(1, weight=1)
tree_view_frame.grid_rowconfigure(1, weight=1)
tree_view = ttk.Treeview(tree_view_frame, selectmode = tk.BROWSE)
tree_view.heading('#0', text='Managed Archives', anchor = tk.W)
tree_view.grid(row = 1, column = 1, sticky = "nswe")
item_1 = tree_view.insert('', tk.END, text='folder 1')
item_2 = tree_view.insert(item_1, tk.END, text='folder 2')
item_3 = tree_view.insert(item_2, tk.END, text='folder 3')
item_4 = tree_view.insert('', tk.END, text='folder 4')
item_5 = tree_view.insert('', tk.END, text='folder 5')
item_2 = tree_view.insert(item_1, tk.END, text='folder 2')
item_31 = tree_view.insert(item_2, tk.END, text='folder 3')
item_32 = tree_view.insert(item_31, tk.END, text='folder 3')
item_33 = tree_view.insert(item_32, tk.END, text='folder 3')
item_34 = tree_view.insert(item_33, tk.END, text='folder 3')
item_35 = tree_view.insert(item_34, tk.END, text='folder 3')
item_36 = tree_view.insert(item_35, tk.END, text='folder 3')
item_37 = tree_view.insert(item_36, tk.END, text='folder 3')
item_38 = tree_view.insert(item_37, tk.END, text='folder 3')
item_39 = tree_view.insert(item_38, tk.END, text='folder 3')
item_300 = tree_view.insert(item_39, tk.END, text='folder 3')
item_301 = tree_view.insert(item_300, tk.END, text='folder 3')
item_302 = tree_view.insert(item_301, tk.END, text='folder 3')
item_303 = tree_view.insert(item_302, tk.END, text='folder 3')
item_304 = tree_view.insert(item_303, tk.END, text='folder 3')
item_305 = tree_view.insert(item_304, tk.END, text='folder 3')
item_306 = tree_view.insert(item_305, tk.END, text='folder 3')
item_38 = tree_view.insert(item_37, tk.END, text='folder 3')
item_39 = tree_view.insert(item_38, tk.END, text='folder 3')
item_300 = tree_view.insert(item_39, tk.END, text='folder 3')
item_38 = tree_view.insert(item_37, tk.END, text='folder 3')
item_39 = tree_view.insert(item_38, tk.END, text='folder 3')
item_300 = tree_view.insert(item_39, tk.END, text='folder 3')
y_scroll = ttk.Scrollbar(tree_view_frame, orient = "vertical")
y_scroll.grid(row = 0, column = 2, rowspan = 3, sticky = "nswe")
x_scroll = ttk.Scrollbar(tree_view_frame, orient = "horizontal")
x_scroll.grid(row = 2, column = 0, columnspan = 2, sticky = "nswe")
tree_view.configure(yscrollcommand = y_scroll.set, xscrollcommand = x_scroll.set)
main_window.add(tree_view_frame)
app.mainloop()
</code></pre>
<p>But it seems that while vertical scrolling works (though only via the mouse wheel; dragging the scrollbar with the cursor does nothing), the horizontal scrollbar does not work at all. It stays white and that's it.</p>
<p>I get something like this</p>
<p><a href="https://i.sstatic.net/8WU9f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8WU9f.png" alt="enter image description here" /></a></p>
<p>Sometimes I do see the horizontal scroll working, but then it stops when I try to resize or do anything else.</p>
<p>What causes the problem?</p>
<p>I am running macOS.</p>
| <python><tkinter> | 2023-11-12 00:13:17 | 1 | 13,485 | lapots |
77,467,162 | 9,983,652 | ImportError: cannot import name 'KoalasFrame' from 'databricks.koalas' | <p>I got the warning and error below:</p>
<pre><code>from databricks.koalas import KoalasFrame
WARNING:root:Found pyspark version "3.5.0" installed. The pyspark version 3.2 and above has a built-in "pandas APIs on Spark" module ported from Koalas. Try `import pyspark.pandas as ps` instead.
WARNING:root:'PYARROW_IGNORE_TIMEZONE' environment variable was not set. It is required to set this environment variable to '1' in both driver and executor sides if you use pyarrow>=2.0.0. Koalas will set it for you but it does not work if there is a Spark context already launched.
ImportError Traceback (most recent call last)
cnrl\users\yongnual\Data\Spyder_workplace\DTS_dashboard\pandas2_high_performance_testing.ipynb Cell 18 line 1
----> 1 from databricks.koalas import KoalasFrame
ImportError: cannot import name 'KoalasFrame' from 'databricks.koalas' (c:\Anaconda\envs\dash2\lib\site-packages\databricks\koalas\__init__.py)
</code></pre>
| <python><python-3.x><apache-spark><spark-koalas> | 2023-11-11 23:01:40 | 2 | 4,338 | roudan |
77,467,082 | 9,983,652 | AttributeError: module 'dask' has no attribute 'set_options' after installing dask with conda | <p>I am using Anaconda and I just installed Dask as shown below.</p>
<p><a href="https://anaconda.org/conda-forge/dask" rel="nofollow noreferrer">https://anaconda.org/conda-forge/dask</a></p>
<pre><code>conda install -c conda-forge dask
</code></pre>
<p>I can use <code>read_csv()</code> with no error; however, I got the error below. I am wondering if I need to install everything using</p>
<p><code>python -m pip install "dask[complete]"</code></p>
<p>Will it conflict with the Dask installed via Anaconda? Thanks</p>
<pre><code>import dask.multiprocessing
dask.set_options(get=dask.multiprocessing.get) # set processes as default
AttributeError Traceback (most recent call last)
cnrl\users\yongnual\Data\Spyder_workplace\DTS_dashboard\pandas2_high_performance_testing.ipynb Cell 14 line 2
1 import dask.multiprocessing
----> 2 dask.set_options(get=dask.multiprocessing.get)
AttributeError: module 'dask' has no attribute 'set_options'
</code></pre>
| <python><dask><dask-distributed> | 2023-11-11 22:27:50 | 1 | 4,338 | roudan |
77,466,928 | 5,032,387 | LLM in LLMChain ignores prompt | <p>I'm getting incorrect output from an LLMChain that uses a prompt containing system and human messages, and I can't figure out why. I downloaded the gpt4all-falcon-q4_0 model from <a href="https://gpt4all.io/index.html" rel="nofollow noreferrer">here</a> to my machine.</p>
<pre class="lang-py prettyprint-override"><code>from langchain.chains import ConversationChain, LLMChain
from langchain.llms import GPT4All
from langchain.prompts import (
PromptTemplate,
SystemMessagePromptTemplate,
HumanMessagePromptTemplate,
ChatPromptTemplate
)
import sys
sys.path.insert(0,"..")
from local_config import MODEL_PATH
llm = GPT4All(
model=MODEL_PATH,
)
system_template = """
You are a helpful assistant who generates comma separated lists.
A user will only pass a category and you should generate subcategories of that category in a comma-separated list.
ONLY return comma-separated values and nothing more!
"""
system_message = SystemMessagePromptTemplate.from_template(
system_template
)
human_template = '{category}'
human_message = HumanMessagePromptTemplate.from_template(human_template)
prompt = ChatPromptTemplate.from_messages([system_message, human_message])
chain = LLMChain(llm = llm,
prompt=prompt,
verbose=True)
chain.run('Machine Learning')
</code></pre>
<p>Instead of getting a string of comma-separated values, I get the following output:</p>
<pre><code>Engineer\nSure, I can help you with that. Please provide the category for which you want to generate subcategories.
</code></pre>
<p>For reference, it does work if I just use a simpler template for the prompt:</p>
<pre><code>template = """
Return all the subcategories of the following category
{category}
"""
prompt = PromptTemplate(input_variables=["category"],
template = template)
chain = LLMChain(llm = llm,
prompt=prompt,
verbose=True)
chain.run('Machine Learning')
</code></pre>
<p>Output</p>
<pre class="lang-py prettyprint-override"><code>'Here\'s a Python function that takes in a list of categories and returns a list of all the subcategories of each category:\n```python\ndef get_subcategories(categories):\n subcategories = []\n for category in categories:\n subcategories.append(category)\n return subcategories\n```\nTo use this function, you can pass it a list of categories as an argument, like this:\n```python\ncategories = [\n "Machine Learning",\n "Deep Learning",\n "Computer Vision",\n "Natural Language Processing",\n "Artificial Intelligence",\n "Robotics",\n "Game Development",\n "Data Science",\n "Big Data",\n "Cloud Computing",\n "Blockchain",\n "Internet of Things",\n "Cybersecurity",\n "Machine Learning",\n "Deep Learning",\n "Computer Vision",\n "Natural Language Processing",\n "Artificial Intelligence",\n "Robotics",\n "Game Development",\n "Data Science",\n "Big Data",\n "Cloud Computing",\n "Blockchain",\n "Internet of Things",\n "Cybersecurity",\n]\n\nsubcategories = get_subcategories(categories)\nprint(subcategories)\n```\nThis'
</code></pre>
| <python><langchain><gpt4all> | 2023-11-11 21:37:25 | 1 | 3,080 | matsuo_basho |
77,466,728 | 12,904,808 | mypy django models typehint | <p>I integrated mypy into my project and would like to type my Django models. How would I enable the following conversions: <code>ForeignKey</code> to the actual model, <code>CharField</code> and <code>TextField</code> to <code>str</code>, <code>DecimalField</code> to <code>Decimal</code>, ...?</p>
<pre><code>class mymodel(models.Model):
address: str = models.CharField(max_length=100)
description: str = models.TextField()
organisation: Organisation = models.ForeignKey(
"organisations.Organisation",
on_delete=models.CASCADE,
related_name="properties",
editable=False,
)
</code></pre>
<p>My current setup.cfg file looks like this:</p>
<pre><code>
[mypy]
plugins = mypy_drf_plugin.main, mypy_django_plugin.main, pydantic.mypy
exclude = /migrations/|template
python_version = 3.11
no_implicit_optional = True
strict_optional = True
[mypy.plugins.django-stubs]
django_settings_module = "backend.settings"
</code></pre>
| <python><django><django-models><mypy> | 2023-11-11 20:27:30 | 0 | 302 | swaffelay |
77,466,604 | 8,690,357 | Movie cast with characters in Django | <p>So, I'm doing the classic movie app in Django (I'm loving the framework!) and I feel like I've hit a wall.</p>
<p>So, I have my Movie model:</p>
<pre><code>class Movie(models.Model):
TitlePRT = models.CharField(max_length=100)
TitleORG = models.CharField(max_length=100)
TitleAlt = models.CharField(max_length=100)
Poster = models.ImageField(upload_to=posterPath, max_length=100)
Synopsis = models.TextField()
Year = models.IntegerField()
DubStudio = models.ForeignKey("Studio", on_delete=models.CASCADE)
Trivia = models.TextField()
CreatedBy = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name="+")
CreatedAt = models.DateTimeField(auto_now=False, auto_now_add=True)
Modifiedby = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name="+")
ModifiedAt = models.DateTimeField(auto_now=True, auto_now_add=False)
</code></pre>
<p>My Actor model:</p>
<pre><code>class Actor(models.Model):
Bio = models.TextField()
Name = models.CharField(max_length=50)
FullName = models.CharField(max_length=50)
DateOfBirth = models.DateField()
Country = models.TextField(max_length=100)
Nationality = models.TextField(max_length=50)
Debut = models.ForeignKey("Movie", on_delete=models.CASCADE)
CreatedBy = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name="+")
CreatedAt = models.DateTimeField(auto_now=False, auto_now_add=True)
ModifiedBy = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name="+")
ModifiedAt = models.DateTimeField(auto_now=True, auto_now_add=False)
ClaimedBy = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name="+")
</code></pre>
<p>And, finally, my Cast model:</p>
<pre><code>class Cast(models.Model):
Movie = models.ForeignKey("Movie", on_delete=models.CASCADE)
Actor = models.ForeignKey("Actor", on_delete=models.CASCADE)
Character = models.TextField(max_length=50)
Observations = models.TextField(max_length=50)
</code></pre>
<p>Right now, I can add content through the Django admin, but it's awkward, because I can't add a cast in the Movie form. I have to go to the Cast form and add each character one by one.</p>
<p>I've searched about customizing admin forms and whatnot, but I get a little confused.</p>
<p>Is there a way to create a form field in the Movie form to add the cast, preferably with three elements: one where you can choose the actor from the actors in the DB, a text field to type the role, and another text field to add a little observation?</p>
<p>Or is there a less convoluted way to organize this data?</p>
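<p>For reference, the usual Django admin pattern for this is an inline, which lets Cast rows (actor, character, observations) be edited directly inside the Movie form. A configuration sketch, untested here and assuming the models above live in the same app's <code>models.py</code>:</p>

```python
# admin.py -- sketch: edit Cast rows directly inside the Movie form
from django.contrib import admin
from .models import Movie, Cast  # import path is an assumption

class CastInline(admin.TabularInline):
    model = Cast   # Cast.Movie FK links each inline row back to the movie
    extra = 1      # show one empty row for adding a new cast member

@admin.register(Movie)
class MovieAdmin(admin.ModelAdmin):
    inlines = [CastInline]
```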
| <python><django><django-models><django-forms> | 2023-11-11 19:45:55 | 1 | 397 | O Tal Antiquado |
77,466,563 | 3,238,679 | How to implement multi-band-pass filter with scipy.signal.butter | <p>Based on the band-pass filter <a href="https://stackoverflow.com/questions/12093594/how-to-implement-band-pass-butterworth-filter-with-scipy-signal-butter" rel="nofollow noreferrer">here</a>, I am trying to make a multi-band filter using the code below. However, the filtered signal is close to zero, which affects the result when the spectrum is plotted. Should the coefficients of the filter for each band be normalized? Can someone please suggest how I can fix the filter?</p>
<pre><code>from scipy.signal import butter, sosfreqz, sosfilt
from scipy.signal import spectrogram
import matplotlib
import matplotlib.pyplot as plt
from scipy.fft import fft
import numpy as np
def butter_bandpass(lowcut, highcut, fs, order=5):
nyq = 0.5 * fs
low = lowcut / nyq
high = highcut / nyq
sos = butter(order, [low, high], analog=False, btype='band', output='sos')
return sos
def multiband_filter(data, bands, fs, order=10):
sos_list = []
for lowcut, highcut in bands:
sos = butter_bandpass(lowcut, highcut, fs, order=order)
scalar = max(abs(fft(sos, 2000)))
# sos = sos / scalar
sos_list += [sos]
# sos_list = [butter_bandpass(lowcut, highcut, fs, order=order) for lowcut, highcut in bands]
# Combine filters into a single filter
sos = np.vstack(sos_list)
# Apply the multiband filter to the data
y = sosfilt(sos, data)
return y, sos_list
def get_toy_signal():
t = np.arange(0, 0.3, 1 / fs)
fq = [-np.inf] + [x / 12 for x in range(-9, 3, 1)]
mel = [5, 3, 1, 3, 5, 5, 5, 0, 3, 3, 3, 0, 5, 8, 8, 0, 5, 3, 1, 3, 5, 5, 5, 5, 3, 3, 5, 3, 1]
acc = [5, 0, 8, 0, 5, 0, 5, 5, 3, 0, 3, 3, 5, 0, 8, 8, 5, 0, 8, 0, 5, 5, 5, 0, 3, 3, 5, 0, 1]
toy_signal = np.array([])
for kj in range(len(mel)):
note_signal = np.sum([np.sin(2 * np.pi * 440 * 2 ** ff * t)
for ff in [fq[acc[kj]] - 1, fq[acc[kj]], fq[mel[kj]] + 1]], axis=0)
zeros = np.zeros(int(0.01 * fs))
toy_signal = np.concatenate((toy_signal, note_signal, zeros))
toy_signal += np.random.normal(0, 1, len(toy_signal))
toy_signal = toy_signal / (np.max(np.abs(toy_signal)) + 0.1)
t_toy_signal = np.arange(len(toy_signal)) / fs
return t_toy_signal, toy_signal
if __name__ == "__main__":
fontsize = 12
# Sample rate and desired cut_off frequencies (in Hz).
fs = 3000
f1, f2 = 100, 200
f3, f4 = 470, 750
f5, f6 = 800, 850
f7, f8 = 1000, 1000.1
cut_off = [(f1, f2), (f3, f4), (f5, f6), (f7, f8)]
# cut_off = [(f1, f2), (f3, f4)]
# cut_off = [(f1, f2)]
# cut_off = [f1]
t_toy_signal, toy_signal = get_toy_signal()
# toy_signal -= np.mean(toy_signal)
# t_toy_signal = wiener(t_toy_signal)
fig, ax = plt.subplots(6, 1, figsize=(8, 12))
fig.tight_layout()
ax[0].plot(t_toy_signal, toy_signal)
ax[0].set_title('Original toy_signal', fontsize=fontsize)
ax[0].set_xlabel('Time (s)', fontsize=fontsize)
ax[0].set_ylabel('Magnitude', fontsize=fontsize)
ax[0].set_xlim(left=0, right=max(t_toy_signal))
sos_list = [butter_bandpass(lowcut, highcut, fs, order=10) for lowcut, highcut in cut_off]
# Combine filters into a single filter
sos = np.vstack(sos_list)
# w *= 0.5 * fs / np.pi # Convert w to Hz.
#####################################################################
# First plot the desired ideal response as a green(ish) rectangle.
#####################################################################
# Plot the frequency response
for i in range(len(cut_off)):
w, h = sosfreqz(sos_list[i], worN=2000)
ax[1].plot(0.5 * fs * w / np.pi, np.abs(h), label=f'Band {i + 1}: {cut_off[i]} Hz')
ax[1].set_title('Multiband Filter Frequency Response')
ax[1].set_xlabel('Frequency [Hz]')
ax[1].set_ylabel('Gain')
ax[1].legend()
# ax[1].set_xlim(0, max(*cut_off) + 100)
#####################################################################
# Spectrogram of original signal
#####################################################################
f, t, Sxx = spectrogram(toy_signal, fs,
nperseg=930, noverlap=0)
ax[2].pcolormesh(t, f, np.abs(Sxx),
norm=matplotlib.colors.LogNorm(vmin=np.min(Sxx), vmax=np.max(Sxx)),
)
ax[2].set_title('Spectrogram of original toy_signal', fontsize=fontsize)
ax[2].set_xlabel('Time (s)', fontsize=fontsize)
ax[2].set_ylabel('Frequency (Hz)', fontsize=fontsize)
#####################################################################
# Compute filtered signal
#####################################################################
# Apply the multiband filter to the data
# toy_signal_filtered = sosfilt(sos, toy_signal)
toy_signal_filtered = np.sum([sosfilt(sos, toy_signal) for sos in sos_list], axis=0)
#####################################################################
# Spectrogram of filtered signal
#####################################################################
f, t, Sxx = spectrogram(toy_signal_filtered, fs,
nperseg=930, noverlap=0)
ax[3].pcolormesh(t, f, np.abs(Sxx),
norm=matplotlib.colors.LogNorm(vmin=np.min(Sxx),
vmax=np.max(Sxx))
)
ax[3].set_title('Spectrogram of filtered toy_signal', fontsize=fontsize)
ax[3].set_xlabel('Time (s)', fontsize=fontsize)
ax[3].set_ylabel('Frequency (Hz)', fontsize=fontsize)
ax[4].plot(t_toy_signal, toy_signal_filtered)
ax[4].set_title('Filtered toy_signal', fontsize=fontsize)
ax[4].set_xlim(left=0, right=max(t_toy_signal))
ax[4].set_xlabel('Time (s)', fontsize=fontsize)
ax[4].set_ylabel('Magnitude', fontsize=fontsize)
N = 1512
X = fft(toy_signal, n=N)
Y = fft(toy_signal_filtered, n=N)
# fig.set_size_inches((10, 4))
ax[5].plot(np.arange(N) / N * fs, 20 * np.log10(abs(X)), 'r-', label='FFT original signal')
ax[5].plot(np.arange(N) / N * fs, 20 * np.log10(abs(Y)), 'g-', label='FFT filtered signal')
ax[5].set_xlim(xmax=fs / 2)
ax[5].set_ylim(ymin=-20)
ax[5].set_ylabel(r'Power Spectrum (dB)', fontsize=fontsize)
ax[5].set_xlabel("frequency (Hz)", fontsize=fontsize)
ax[5].grid()
ax[5].legend(loc='upper right')
plt.tight_layout()
plt.show()
plt.figure()
# fig.set_size_inches((10, 4))
plt.plot(np.arange(N) / N * fs, 20 * np.log10(abs(X)), 'r-', label='FFT original signal')
plt.plot(np.arange(N) / N * fs, 20 * np.log10(abs(Y)), 'g-', label='FFT filtered signal')
plt.xlim(xmax=fs / 2)
plt.ylim(ymin=-20)
plt.ylabel(r'Power Spectrum (dB)', fontsize=fontsize)
plt.xlabel("frequency (Hz)", fontsize=fontsize)
plt.grid()
plt.legend(loc='upper right')
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/DPQQ5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DPQQ5.png" alt="enter image description here" /></a></p>
<p>The following is the result after applying <code>@Warren Weckesser</code>'s comment, using <code>np.mean</code>:</p>
<pre><code>toy_signal_filtered = np.mean([sosfilt(sos, toy_signal) for sos in sos_list], axis=0)
</code></pre>
<p><a href="https://i.sstatic.net/sUqcT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sUqcT.png" alt="enter image description here" /></a></p>
<p>The following is the result after applying <code>@Warren Weckesser</code>'s comment, using <code>np.sum</code>:</p>
<pre><code>toy_signal_filtered = np.sum([sosfilt(sos, toy_signal) for sos in sos_list], axis=0)
</code></pre>
<p><a href="https://i.sstatic.net/t9Oxc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t9Oxc.png" alt="enter image description here" /></a></p>
<p>Here is an example where a narrow band is used:</p>
<p><a href="https://i.sstatic.net/PkGSl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PkGSl.png" alt="enter image description here" /></a></p>
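<p>For reference, a minimal self-contained sketch of the parallel approach (filter with each band separately and sum the outputs, rather than stacking all sections into one cascade with <code>np.vstack</code>; a cascade applies the bands in series, so their mostly disjoint responses multiply and drive the output to zero). The two test tones here are made up for illustration:</p>

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 3000
t = np.arange(0, 1.0, 1 / fs)
# 150 Hz sits inside the first band, 1200 Hz sits outside both
sig = np.sin(2 * np.pi * 150 * t) + np.sin(2 * np.pi * 1200 * t)

def bandpass_sos(lowcut, highcut, fs, order=10):
    return butter(order, [lowcut, highcut], btype='band', fs=fs, output='sos')

bands = [(100, 200), (470, 750)]
# Parallel filter bank: run each band on the ORIGINAL signal, then sum
out = np.sum([sosfilt(bandpass_sos(lo, hi, fs), sig) for lo, hi in bands], axis=0)

spec = np.abs(np.fft.rfft(out))
print(spec[150] > 20 * spec[1200])  # in-band tone survives, out-of-band is attenuated
```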
| <python><scipy><signal-processing><digital-filter> | 2023-11-11 19:34:16 | 2 | 1,041 | Thoth |
77,466,494 | 21,370,869 | stdin is not being saved to file | <p>I am trying to get a better understanding how stdin works and how to specifically work with it in Python.</p>
<p>I am trying to save whatever is received from stdin to a file. The file should have two lines</p>
<ul>
<li>Line one should be the letter count of the string</li>
<li>Line two should be the string itself</li>
</ul>
<pre><code>rawLength = sys.stdin.buffer.read(12)
file = open("my_file.txt", "w")
file.write(len(rawLength) + "\n" + rawLength) #file.write(rawLength) <== Does not work either
file.close
</code></pre>
<p>The file does get created, but nothing happens to it. It is empty and remains empty after the Python program exits.</p>
<p>I tried this, and sure enough the console does print it, as shown <a href="https://i.imgur.com/yP7giFR.png" rel="nofollow noreferrer">HERE</a></p>
<pre><code>import time
rawLength = sys.stdin.buffer.read(12) #save std to var
time.sleep(3) #because console window closes too fast
print(len(rawLength))
print(rawLength)
time.sleep(44)
</code></pre>
<p>The point of this exercise is to increase my understanding of stdin, so I can solve <a href="https://stackoverflow.com/questions/77463403/my-simple-python-program-does-not-execute-the-last-line?noredirect=1#comment136564351_77463403">THIS</a> problem that I asked about yesterday.</p>
<p>Any help would be greatly appreciated!</p>
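<p>For reference, a minimal sketch of the writing step factored into a function. My understanding of the issues: <code>len(rawLength) + "\n"</code> adds an int to a str, <code>sys.stdin.buffer</code> yields bytes that a text-mode file won't accept, and <code>file.close</code> was missing its parentheses:</p>

```python
import sys

def save_with_length(raw: bytes, path: str) -> None:
    text = raw.decode("utf-8", errors="replace")  # bytes from stdin -> str
    with open(path, "w") as f:                    # 'with' also closes the file
        f.write(f"{len(text)}\n{text}")           # length on line 1, text on line 2

# usage: save_with_length(sys.stdin.buffer.read(12), "my_file.txt")
```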
| <python><stdin> | 2023-11-11 19:14:49 | 2 | 1,757 | Ralf_Reddings |
77,466,456 | 1,595,350 | AttributeError: 'str' object has no attribute 'get_token' in Python when using Azure Cognitive Search Vector Search | <p>I am trying to do a Vector Search with Azure Cognitive Search in Python. This is my code:</p>
<pre><code>query = "What are the advantages of an open-source ai model?"
search_client = SearchClient(AZURE_COGNITIVE_SEARCH_SERVICE_ENDPOINT, AZURE_COGNITIVE_SEARCH_INDEX_NAME, credential=AZURE_COGNITIVE_SEARCH_API_KEY)
vector_query = VectorizableTextQuery(text=query, k=3, fields="content_vector")
results = search_client.search(
search_text=None,
vector_queries= [vector_query],
select=["title", "content_vector", "metadata"],
)
for result in results:
print(result)
</code></pre>
<p>But this throws me the following error:</p>
<pre><code>AttributeError Traceback (most recent call last)
<ipython-input-32-a8dbad346de2> in <cell line: 16>()
14 print(results)
15
---> 16 for result in results:
17 print(result)
14 frames
/usr/local/lib/python3.10/dist-packages/azure/core/pipeline/policies/_authentication.py in on_request(self, request)
97 self._token = self._credential.get_token(*self._scopes, enable_cae=self._enable_cae)
98 else:
---> 99 self._token = self._credential.get_token(*self._scopes)
100 self._update_headers(request.http_request.headers, self._token.token)
101
AttributeError: 'str' object has no attribute 'get_token'
</code></pre>
<p>When I do a simple <code>print(results)</code> I get</p>
<pre><code><iterator object azure.core.paging.ItemPaged at 0x7f17cf3af550>
</code></pre>
<p>I am using the latest Azure Cognitive Search API via:</p>
<pre><code>!pip install azure-search-documents --pre --upgrade
> Requirement already satisfied: azure-search-documents in
> /usr/local/lib/python3.10/dist-packages (11.4.0b11)
</code></pre>
| <python><azure><azure-cognitive-search><vector-search> | 2023-11-11 19:01:43 | 1 | 4,326 | STORM |
77,466,432 | 2,304,735 | How to convert string variable to float variable JSON file? | <p>I am trying to read a string from a JSON file and convert it to a float. The other data in the file are numbers, but this specific value is a string.</p>
<p>JSON File:</p>
<pre><code>{
"Valuation": {
"TrailingPE": 29.4035,
"ForwardPE": 28.66,
"PriceSalesTTM": 5.5067,
"PriceBookMRQ": 44.6301,
"EnterpriseValue": 2906817822720,
"EnterpriseValueRevenue": 5.9171,
"EnterpriseValueEbitda": 23.52
},"Income_Statement": {
"currency_symbol": "USD",
"quarterly": {
"2023-09-30": {
"date": "2023-09-30",
"filing_date": "2023-11-03",
"currency_symbol": "USD",
"researchDevelopment": "7307000000.00",
"effectOfAccountingCharges": null,
"incomeBeforeTax": "26998000000.00",
"minorityInterest": null,
"netIncome": "22956000000.00",
"sellingGeneralAdministrative": "6151000000.00",
"sellingAndMarketingExpenses": null,
"grossProfit": "40427000000.00",
"reconciledDepreciation": "2653000000.00",
"ebit": "26969000000.00",
"ebitda": "29622000000.00",
"depreciationAndAmortization": "2653000000.00",
"nonOperatingIncomeNetOther": null,
"operatingIncome": "26969000000.00",
"otherOperatingExpenses": "62529000000.00",
"interestExpense": "1002000000.00",
"taxProvision": "4042000000.00",
"interestIncome": "18000000.00",
"netInterestIncome": null,
"extraordinaryItems": null,
"nonRecurring": null,
"otherItems": null,
"incomeTaxExpense": "4042000000.00",
"totalRevenue": "89498000000.00",
"totalOperatingExpenses": "13458000000.00",
"costOfRevenue": "49071000000.00",
"totalOtherIncomeExpenseNet": "29000000.00",
"discontinuedOperations": null,
"netIncomeFromContinuingOps": "22956000000.00",
"netIncomeApplicableToCommonShares": "22956000000.00",
"preferredStockAndOtherAdjustments": null
}
}
</code></pre>
<p>Code:</p>
<pre><code>import requests
import json
response = requests.get('some_url')
data = response.json()
# Write JSON File
with open('data.json', 'w', encoding='utf-8') as f:
json.dump(data, f, ensure_ascii=False, indent=4)
# Read JSON File
with open('path_to_file/data.json', 'r') as f:
data = json.load(f)
for statement in data['Income_Statement']['quarterly']['2023-09-30']['netIncome']:
value_ = float(statement)
</code></pre>
<p>When I run the above code I get the following error:</p>
<pre><code>ValueError: could not convert string to float: '.'
</code></pre>
<p>How can I read this string value and convert it to a float value and put it in the value_ variable?</p>
<p>*Note: The JSON snippet in the question is truncated/incorrect, but the file is correct in my code; it's too big to post in full.</p>
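<p>UPDATE: the loop iterates over the characters of the string, so <code>float</code> eventually receives the lone decimal point <code>'.'</code>, which explains the error. A minimal sketch converting the value directly (inline JSON stands in for the file here):</p>

```python
import json

raw = '{"Income_Statement": {"quarterly": {"2023-09-30": {"netIncome": "22956000000.00"}}}}'
data = json.loads(raw)

# No loop needed: grab the string once and convert the whole thing
value_ = float(data["Income_Statement"]["quarterly"]["2023-09-30"]["netIncome"])
print(value_)  # 22956000000.0
```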
| <python><json><python-3.x> | 2023-11-11 18:55:33 | 1 | 515 | Mahmoud Abdel-Rahman |
77,466,107 | 9,195,104 | Interactive server-driven Flask SQL table with option to download filtered data in .csv | <p>I am trying to download a SQLite queried table as .csv using <code>Flask</code>, <code>SQLAlchemy</code> (not <code>Flask-SQLAlchemy</code>) and <code>DataTables</code> (plug-in for jQuery) with the server-side processing option.</p>
<p>Basically, I have a query that connects to a SQLite dataset, imports the data and returns it to a DataTable.</p>
<p>A summary of what is already working:</p>
<ul>
<li>Drop down (HTML select element) to chose which table should be displayed.</li>
<li>Retrieval of data given the selected table, using DataTable's "server-side processing" option, which allows the fetch of large data across multiple partial requests via Ajax request (using <code>offset</code> and <code>limit</code>).</li>
<li>Filtering of case-insensitive text in each column, which is passed from the DataTables table to the Flask code (server-side) and retrieved accordingly. The user needs to press "Enter" after typing to apply the text filter.</li>
<li>Download of the complete displayed table as .csv.</li>
</ul>
<p>It looks like this:
<a href="https://i.imgur.com/2XsGyjl.png" rel="nofollow noreferrer"><img src="https://i.imgur.com/2XsGyjl.png" alt="" /></a></p>
<p>The problem that I am facing is that once I type in a filter from any column, the downloaded .csv still contains the full table instead of the filtered one. For example, when I search for "2" in <code>Column A</code> and press Enter, I see the filtered data as expected:</p>
<p><a href="https://i.imgur.com/LGr9dMl.png" rel="nofollow noreferrer"><img src="https://i.imgur.com/LGr9dMl.png" alt="" /></a></p>
<p>But when I click the button to download the table, the full table data is downloaded instead of the filtered one. I would like to download only the filtered data. Does anyone know how to fix this issue? Thanks in advance.</p>
<p>Below I provide a full working code with a sample SQLite "test.db" dataset with 2 tables ("table_a" and "table_b") to reproduce the code. I am using the latest versions of the packages (<code>Flask==3.0.0</code>, <code>pandas==2.1.3</code> and <code>SQLAlchemy==2.0.22</code>).</p>
<p>/templates/index.html</p>
<pre class="lang-html prettyprint-override"><code><!DOCTYPE html>
<html>
<head>
<title>Flask Application</title>
<link rel="stylesheet" href="https://cdn.datatables.net/1.13.7/css/jquery.dataTables.css"/>
<link rel="stylesheet" href="https://cdn.datatables.net/1.13.7/css/dataTables.bootstrap5.min.css"/>
<style>
.text-wrap{
white-space:normal;
}
.width-200{
width:200px;
}
</style>
</head>
<body>
<h1>Table Viewer</h1>
<form id="selector" action="{{ url_for('.database_download') }}" method="POST" enctype="multipart/form-data">
Select table:
<select id="table-selector" name="table-selector">
<!-- <option disabled selected value>Select a Dataset</option> -->
<option value="table_a" selected>Table A</option>
<option value="table_b">Table B</option>
</select>
</form>
<br>
<p>
<input type="checkbox" id="order-columns" value="True" checked> Order columns by date</p>
<p><input type="button" id="button-database-selector" value="Select Table" /></p>
<br>
<br>
<br>
<div id="content-a" class="table table-striped table-hover" style="display:none; width:100%; table-layout:fixed; overflow-x:auto;">
<table id="table-a">
<thead>
<tr>
<th></th>
<th></th>
</tr>
<tr>
<th></th>
<th></th>
</tr>
</thead>
</table>
</div>
<div id="content-b" class="table table-striped table-hover" style="display:none; width:100%; table-layout:fixed; overflow-x:auto;">
<table id="table-b">
<thead>
<tr>
<th></th>
<th></th>
</tr>
<tr>
<th></th>
<th></th>
</tr>
</thead>
</table>
</div>
<div id="download" style="display:none">
<p>Download table: <input type="submit" value="Download" form="selector"></button></p>
</div>
<script type="text/javascript" src="https://code.jquery.com/jquery-3.7.0.js"></script>
<script type="text/javascript" src="https://cdn.datatables.net/1.13.7/js/jquery.dataTables.min.js"></script>
<script type="text/javascript" src="https://cdn.datatables.net/1.13.7/js/dataTables.bootstrap5.min.js"></script>
<script type="text/javascript">
$(document).ready( function() {
$('input[id="button-database-selector"]').click( function() {
if ($('input[id="order-columns"]').prop('checked')) {
var order_columns = 'True';
} else {
var order_columns = '';
}
var database_selected = $('select[id="table-selector"]').val();
// Display download button
$('div[id="download"]').css('display', 'block');
if (database_selected == 'table_a') {
// Hide all tables
$('div[id^="content-"]').hide();
// Display table
$('div[id="content-a"]').css('display', 'block');
// Setup - add a text input to each header cell
$('#table-a thead tr:eq(1) th').each( function () {
var title = $('#table-a thead tr:eq(0) th').eq( $(this).index() ).text();
$(this).html( '<input type="text" placeholder="Search '+title+'" />' );
} );
var table = $('table[id="table-a"]').DataTable({
'pageLength': 50,
'ordering': false,
// 'scrollX': true,
// 'sScrollX': '100%',
'searching': true,
'dom': 'lrtip',
'orderCellsTop': true,
'serverSide': true,
'bDestroy': true,
'ajax': {
'url': "{{ url_for('.table_a') }}",
'type': 'GET',
'data': {
'orderColumns': order_columns,
},
},
'columns': [
{ 'title': 'Column Date', 'data': 'column_date' },
{ 'title': 'Column A', 'data': 'column_a' },
],
'columnDefs': [
{
'render': function (data, type, full, meta) {
return "<div class='text-wrap width-200'>" + data + "</div>";
},
'targets': '_all',
}
],
});
// Apply the search
table.columns().every( function (index) {
$('#table-a thead tr:eq(1) th:eq(' + index + ') input').on('keydown', function (event) {
if (event.keyCode == 13) {
table.column($(this).parent().index() + ':visible')
.search(this.value)
.draw();
}
});
});
} else if (database_selected === 'table_b') {
// Hide all tables
$('div[id^="content-"]').hide();
// Display table
$('div[id="content-b"]').css('display', 'block');
// Setup - add a text input to each header cell
$('#table-b thead tr:eq(1) th').each( function () {
var title = $('#table-b thead tr:eq(0) th').eq( $(this).index() ).text();
$(this).html( '<input type="text" placeholder="Search '+title+'" />' );
} );
var table = $('table[id="table-b"]').DataTable({
'pageLength': 50,
'ordering': false,
// 'scrollX': true,
// 'sScrollX': '100%',
'searching': true,
'dom': 'lrtip',
'orderCellsTop': true,
'serverSide': true,
'bDestroy': true,
'ajax': {
'url': "{{ url_for('.table_b') }}",
'type': 'GET',
'data': {
'orderColumns': order_columns,
},
},
'columns': [
{ 'title': 'Column Date', 'data': 'column_date' },
{ 'title': 'Column B', 'data': 'column_b' },
],
'columnDefs': [
{
'render': function (data, type, full, meta) {
return "<div class='text-wrap width-200'>" + data + "</div>";
},
'targets': '_all',
}
],
});
// Apply the search
table.columns().every( function (index) {
$('#table-b thead tr:eq(1) th:eq(' + index + ') input').on('keydown', function (event) {
if (event.keyCode == 13) {
table.column($(this).parent().index() + ':visible')
.search(this.value)
.draw();
}
});
});
}
});
});
</script>
</body>
</html>
</code></pre>
<p>/app.py</p>
<pre class="lang-py prettyprint-override"><code># Import packages
from datetime import date
from io import BytesIO
from fastapi.encoders import jsonable_encoder
from flask import Flask, render_template, request, jsonify, send_file
import pandas as pd
from sqlalchemy import create_engine, Column, DateTime, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker
app = Flask(__name__)
# Create a SQLite database engine
engine = create_engine(
url='sqlite:///test.db',
echo=False,
)
# Create a session
Session = sessionmaker(bind=engine)
session = Session()
# Create a base class for declarative models
Base = declarative_base()
# Define "table_a" model
class table_a(Base):
__tablename__ = 'table_a'
id = Column(Integer, primary_key=True)
column_date = Column(DateTime)
column_a = Column(String)
# Define "table_b" model
class table_b(Base):
__tablename__ = 'table_b'
id = Column(Integer, primary_key=True)
column_date = Column(DateTime)
column_b = Column(String)
with app.app_context():
Base.metadata.drop_all(bind=engine)
Base.metadata.create_all(bind=engine)
# Add sample data to the "table_a" table
session.add_all(
[
table_a(column_date=date(2023, 11, 10), column_a='1'),
table_a(column_date=date(2023, 10, 31), column_a='1'),
table_a(column_date=date(2023, 10, 31), column_a='2'),
],
)
# Add sample data to the "table_b" table
session.add_all(
[
table_b(column_date=date(2023, 11, 1), column_b='1'),
table_b(column_date=date(2023, 11, 11), column_b='1'),
],
)
session.commit()
def read_table(*, table_name):
# Read a SQLite database engine
engine = create_engine(
url='sqlite:///test.db',
echo=False,
)
# Create a session
Session = sessionmaker(bind=engine)
session = Session()
# Create a base class for declarative models
Base = declarative_base()
# Define "table_a" model
class table_a(Base):
__tablename__ = 'table_a'
id = Column(Integer, primary_key=True)
column_date = Column(DateTime)
column_a = Column(String)
# Define "table_b" model
class table_b(Base):
__tablename__ = 'table_b'
id = Column(Integer, primary_key=True)
column_date = Column(DateTime)
column_b = Column(String)
if table_name == 'table_a':
sql_query = session.query(table_a)
# Filters
filter_column_date = request.args.get(
key='columns[0][search][value]',
default='',
type=str,
)
filter_column_a = request.args.get(
key='columns[1][search][value]',
default='',
type=str,
)
# Other parameters
order_columns = request.args.get(key='orderColumns', default='', type=bool)
if filter_column_date != '':
sql_query = sql_query.filter(
table_a.column_date.ilike(
f'%{filter_column_date}%',
),
)
if filter_column_a != '':
sql_query = sql_query.filter(
table_a.column_a.ilike(
f'%{filter_column_a}%',
),
)
if order_columns is True:
sql_query = sql_query.order_by(table_a.column_date.asc())
if table_name == 'table_b':
sql_query = session.query(table_b)
# Filters
filter_column_date = request.args.get(
key='columns[0][search][value]',
default='',
type=str,
)
filter_column_b = request.args.get(
key='columns[1][search][value]',
default='',
type=str,
)
# Other parameters
order_columns = request.args.get(key='orderColumns', default='', type=bool)
if filter_column_date != '':
sql_query = sql_query.filter(
table_b.column_date.ilike(
f'%{filter_column_date}%',
),
)
if filter_column_b != '':
sql_query = sql_query.filter(
table_b.column_b.ilike(
f'%{filter_column_b}%',
),
)
if order_columns is True:
sql_query = sql_query.order_by(table_b.column_date.asc())
return sql_query
@app.route('/')
def home():
return render_template(template_name_or_list='index.html')
@app.route(rule='/table-a', methods=['GET'])
def table_a():
# Get the pagination parameters from the request
start = request.args.get(key='start', default=0, type=int)
length = request.args.get(key='length', default=10, type=int)
sql_query = read_table(table_name='table_a')
response = {
'draw': request.args.get(key='draw', default=1, type=int),
'recordsTotal': sql_query.count(),
'recordsFiltered': sql_query.count(),
'data': jsonable_encoder(sql_query.offset(start).limit(length).all()),
}
return jsonify(response)
@app.route(rule='/table-b', methods=['GET'])
def table_b():
# Get the pagination parameters from the request
start = request.args.get(key='start', default=0, type=int)
length = request.args.get(key='length', default=10, type=int)
sql_query = read_table(table_name='table_b')
response = {
'draw': request.args.get(key='draw', default=1, type=int),
'recordsTotal': sql_query.count(),
'recordsFiltered': sql_query.count(),
'data': jsonable_encoder(sql_query.offset(start).limit(length).all()),
}
return jsonify(response)
@app.route(rule='/database-download', methods=['GET', 'POST'])
def database_download():
if request.method == 'POST':
table = request.form.get('table-selector')
sql_query = read_table(table_name=f'{table}')
# Create a binary data memory file
buffer = BytesIO()
df = pd.read_sql(
sql=sql_query.statement,
con=create_engine(
url='sqlite:///test.db',
echo=False,
),
index_col=None,
coerce_float=True,
params=None,
parse_dates=None,
chunksize=None,
)
# DataFrame to buffer
df.to_csv(
path_or_buf=buffer,
sep=',',
na_rep='',
header=True,
index=False,
index_label=None,
encoding='utf-8-sig',
)
# Change the stream position to the start of the stream
buffer.seek(0)
return send_file(
path_or_file=buffer,
download_name=f'{table}.csv',
as_attachment=True,
)
if __name__ == '__main__':
app.run(host='localhost', port=5011, debug=True)
</code></pre>
| <python><flask><sqlalchemy><datatables> | 2023-11-11 17:27:21 | 1 | 421 | roboes |
77,465,672 | 2,589,895 | Deploy python app without any web server/framework to azure app service linux | <p>I have a custom Linux app where I am not using any framework. It is a simple app with a module that does some scheduled jobs.
To run it locally, I use <code>python main.py arg1 arg2</code>. That's it: it uses the <code>schedule</code> package to schedule jobs. I do have a <code>requirements.txt</code>.</p>
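For context, the worker has roughly this shape (a minimal sketch with hypothetical names such as <code>job</code> standing in for the real module; the <code>schedule</code> loop is shown only as a comment):

```python
import sys

def job(env: str) -> str:
    # placeholder for the real scheduled work (DB calls, API calls, ...)
    return f"ran in {env}"

def main(argv):
    env, mode = argv[1], argv[2]  # e.g. "dev" and "s"
    # with the `schedule` package the real loop looks roughly like:
    #   schedule.every(30).minutes.do(job, env)
    #   while True:
    #       schedule.run_pending()
    #       time.sleep(1)
    return job(env)

if __name__ == "__main__" and len(sys.argv) >= 3:
    main(sys.argv)
```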
<p>Now I want to deploy this to Azure. The service I picked is Azure Web App (Linux). I know this is not the correct service since I am not serving anything over HTTP; I am just connecting to some databases, APIs, etc.</p>
<p>When I try to deploy using a GitHub Action, I get issues with the virtual environment, etc.</p>
<p>For example, if I simply deploy with a startup command, I get this error:</p>
<pre><code>2023-11-11T15:19:27.054568534Z Launching oryx with: create-script -appPath /home/site/wwwroot -output /opt/startup/startup.sh -virtualEnvName antenv -defaultApp /opt/defaultsite -userStartupCommand 'python main.py dev s'
2023-11-11T15:19:27.193901315Z Found build manifest file at '/home/site/wwwroot/oryx-manifest.toml'. Deserializing it...
2023-11-11T15:19:27.214165170Z Build Operation ID: 7f870c51d6df2456
2023-11-11T15:19:27.231953721Z Oryx Version: 0.2.20230707.1, Commit: 0bd28e69919b5e8beba451e8677e3345f0be8361, ReleaseTagName: 20230707.1
2023-11-11T15:19:27.243554311Z Output is compressed. Extracting it...
2023-11-11T15:19:27.243579812Z Extracting '/home/site/wwwroot/output.tar.gz' to directory '/tmp/8dbe2c7322f355e'...
2023-11-11T15:19:30.560170296Z App path is set to '/tmp/8dbe2c7322f355e'
2023-11-11T15:19:36.581666909Z Writing output script to '/opt/startup/startup.sh'
2023-11-11T15:19:36.805404566Z Using packages from virtual environment antenv located at /tmp/8dbe2c7322f355e/antenv.
2023-11-11T15:19:36.805441267Z Updated PYTHONPATH to '/opt/startup/app_logs:/tmp/8dbe2c7322f355e/antenv/lib/python3.11/site-packages'
2023-11-11T15:19:39.130548772Z ERROR:root:Critical Error: [Errno 2] No such file or directory: '/tmp/8dbe2c7322f355e/config/config.json'
</code></pre>
<p>Here is my GitHub Action:</p>
<pre><code> - name: azure webapp deploy
uses: azure/webapps-deploy@v2
with:
app-name: ${{ vars.APP_NAME }}
package: ${{ vars.DEPLOY_PATH }}
startup-command: 'python main.py dev s'
</code></pre>
| <python><azure><github-actions> | 2023-11-11 15:22:32 | 1 | 2,580 | Mahajan344 |
77,465,549 | 203,204 | ValueError: "time data %r does not match format %r" | <p>I'm trying to parse time using <code>strptime</code>:</p>
<pre><code>t = datetime.strptime('2024-08-21T11:00:00 EDT', '%Y-%m-%dT%H:%M:%S %Z')
</code></pre>
<p>This code works for me in GMT+4, but it fails in GMT-9:</p>
<pre><code>ValueError: time data '2024-08-21T11:00:00 EDT' does not match format '%Y-%m-%dT%H:%M:%S %Z'
</code></pre>
<p>What am I doing wrong?</p>
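For what it's worth, <code>%Z</code> only reliably matches the local zone's own names (plus <code>UTC</code>/<code>GMT</code>), which is why the result depends on where the code runs. A portable sketch, assuming a fixed list of abbreviations you expect, maps the abbreviation to an offset by hand:

```python
from datetime import datetime, timedelta, timezone

# assumed abbreviation table; extend as needed
OFFSETS = {"EDT": -4, "EST": -5, "GMT": 0}

def parse(s: str) -> datetime:
    stamp, abbr = s.rsplit(" ", 1)
    naive = datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%S")
    return naive.replace(tzinfo=timezone(timedelta(hours=OFFSETS[abbr])))

t = parse("2024-08-21T11:00:00 EDT")  # works regardless of the local zone
```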
| <python><datetime><timezone><strptime> | 2023-11-11 14:48:40 | 2 | 12,842 | Anthony |
77,465,479 | 6,013,016 | Pygame rectangle fill with color and border with different colors | <p>I want to draw a rectangle filled with one color and a border with another color. Something like this:</p>
<p><a href="https://i.sstatic.net/emYwM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/emYwM.png" alt="enter image description here" /></a></p>
<p>And I can draw two rectangles, one filled with some color and the other with just a border:</p>
<pre><code>pygame.draw.rect(screen, "red", (100, 100, 100, 100))
pygame.draw.rect(screen, "green", (100, 100, 100, 100), 5)
</code></pre>
<p>But, is there any better way?</p>
| <python><pygame> | 2023-11-11 14:33:05 | 2 | 5,926 | Scott |
77,465,362 | 14,471,688 | How to access the value of a dictionary faster from a list of arrays in python? | <p>I was wondering how I can access the values of a dictionary rapidly from a list of arrays.</p>
<p>Here is my toy example:</p>
<pre><code>my_list = [np.array([ 1, 2, 3, 4, 5, 6, 8, 9, 10]), np.array([ 1, 3, 5, 6, 7, 10]), np.array([ 1, 2, 3, 4, 6, 8, 9, 10]), np.array([ 1, 3, 4, 7, 15]), np.array([ 1, 2, 4, 5, 10, 16]), np.array([6, 10, 15])]
my_dict = {1: 0, 2: 0, 3: 0, 4: 0, 5: 1, 6: 1, 7: 1, 8: 1, 9: 1, 10: 2, 11: 2, 12: 2, 13: 2, 14: 2, 15: 3, 16: 3}
</code></pre>
<p>Each key in <strong>my_dict</strong> corresponds to the values in the list named <strong>my_list</strong>.</p>
<p>I used the following code to get the desired output set:</p>
<pre><code>unique_string_parents = {' '.join(map(str, set(map(my_dict.get, sublist)))) for sublist in my_list}
# output: {'0 1 2', '0 1 3', '0 1 2 3', '1 2 3'}
</code></pre>
<p>Suppose <strong>my_list</strong> and <strong>my_dict</strong> have huge dimensions; the real files can be found here: <a href="https://gitlab.com/Schrodinger168/practice/-/tree/master/practice_dictionary" rel="nofollow noreferrer">https://gitlab.com/Schrodinger168/practice/-/tree/master/practice_dictionary</a></p>
<p>Here is the code to read the real files:</p>
<pre><code>import ast
file_list = 'list_array.txt'
with open(file_list, 'r') as file:
lines = file.readlines()
my_list= [np.array(list(map(int, line.split()))) for line in lines]
file_dictionary = "dictionary_example.txt"
with open(file_dictionary, 'r') as file_dict:
content = file_dict.read()
my_dict = ast.literal_eval(content)
</code></pre>
<p>Using the single line of code provided above, it takes about 16.70 seconds to get the desired output. I was wondering how I can speed up this algorithm, or whether there are other algorithms that would get the results faster in this case.</p>
<p>Any help or recommendations, please! Many thanks!</p>
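One approach that may help, assuming the keys are small non-negative integers as in the sample: build a flat NumPy lookup array once, so each sublist becomes a single vectorized gather plus <code>np.unique</code> instead of per-element <code>dict.get</code> calls (a sketch on a reduced sample, not benchmarked on the full data):

```python
import numpy as np

my_list = [np.array([1, 2, 3, 4, 5, 6, 8, 9, 10]), np.array([1, 3, 5, 6, 7, 10])]
my_dict = {1: 0, 2: 0, 3: 0, 4: 0, 5: 1, 6: 1, 7: 1, 8: 1, 9: 1, 10: 2}

# Build the lookup table once: lut[key] -> value
keys = np.fromiter(my_dict.keys(), dtype=np.int64)
vals = np.fromiter(my_dict.values(), dtype=np.int64)
lut = np.zeros(keys.max() + 1, dtype=np.int64)
lut[keys] = vals

# One vectorized gather + np.unique per sublist
result = {' '.join(map(str, np.unique(lut[sub]))) for sub in my_list}
# result == {'0 1 2'} for this reduced sample
```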
| <python><arrays><numpy><dictionary> | 2023-11-11 14:00:34 | 1 | 381 | Erwin |
77,465,301 | 1,595,350 | Trying to add embeddings in Azure Cognitive Search leads to error "The property 'content' does not exist on type 'search.documentFields'." | <p>I am extracting text from PDF documents and loading it into Azure Cognitive Search for a RAG approach. Unfortunately this does not work; I am receiving the error message</p>
<pre><code>HttpResponseError: () The request is invalid. Details: The property 'content' does not exist on type 'search.documentFields'. Make sure to only use property names that are defined by the type.
Code:
Message: The request is invalid. Details: The property 'content' does not exist on type 'search.documentFields'. Make sure to only use property names that are defined by the type.
</code></pre>
<p>What i want to do is</p>
<ol>
<li>Extract text from PDF via PyMuPDF - works</li>
<li>Upload it to Azure vector search as embeddings with vectors and metadata <code>filename</code></li>
<li>Query this through a ChatGPT model</li>
</ol>
<p>Besides the error, I want to add the metadata information <code>filename</code> to this <code>document</code> object, but I also don't know how to extend this ...</p>
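The error itself suggests the pre-existing index schema lacks the default <code>content</code> field that langchain's <code>AzureSearch</code> writes to; pointing the wrapper at a fresh <code>index_name</code> so it creates a matching schema may avoid the mismatch. For attaching <code>filename</code>, one hedged sketch is to pair each chunk with a metadata dict and hand both to <code>add_texts</code> (the upload call itself is left as a comment here so the snippet stays self-contained; <code>build_payload</code> is a hypothetical helper name):

```python
# Hypothetical helper: pair chunks with per-chunk metadata before uploading.
def build_payload(chunks, file_path):
    texts = list(chunks)
    metadatas = [{"filename": file_path, "index": i} for i in range(len(texts))]
    # with the AzureSearch vectorstore this would then be (not executed here):
    #   acs.add_texts(texts=texts, metadatas=metadatas)
    return texts, metadatas

texts, metadatas = build_payload(["chunk one", "chunk two"], "/content/drive/pdf/a.pdf")
```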
<p>My code:</p>
<pre><code>!pip install cohere tiktoken
!pip install openai==0.28.1
!pip install pymupdf
!pip install azure-storage-blob azure-identity
!pip install azure-search-documents --pre --upgrade
!pip install langchain
import fitz
import time
import uuid
import os
import openai
from PIL import Image
from io import BytesIO
from IPython.display import display
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chat_models import AzureChatOpenAI
from langchain.vectorstores import AzureSearch
from langchain.docstore.document import Document
from langchain.document_loaders import DirectoryLoader
from langchain.document_loaders import TextLoader
from langchain.text_splitter import TokenTextSplitter
from langchain.chains import ConversationalRetrievalChain
from langchain.prompts import PromptTemplate
from google.colab import drive
OPENAI_API_BASE = "https://xxx.openai.azure.com"
OPENAI_API_KEY = "xxx"
OPENAI_API_VERSION = "2023-05-15"
openai.api_type = "azure"
openai.api_key = OPENAI_API_KEY
openai.api_base = OPENAI_API_BASE
openai.api_version = OPENAI_API_VERSION
AZURE_COGNITIVE_SEARCH_SERVICE_NAME = "https://xxx.search.windows.net"
AZURE_COGNITIVE_SEARCH_API_KEY = "xxx"
AZURE_COGNITIVE_SEARCH_INDEX_NAME = "test"
llm = AzureChatOpenAI(deployment_name="gpt35", openai_api_key=OPENAI_API_KEY, openai_api_base=OPENAI_API_BASE, openai_api_version=OPENAI_API_VERSION)
embeddings = OpenAIEmbeddings(deployment_id="ada002", chunk_size=1, openai_api_key=OPENAI_API_KEY, openai_api_base=OPENAI_API_BASE, openai_api_version=OPENAI_API_VERSION)
acs = AzureSearch(azure_search_endpoint=AZURE_COGNITIVE_SEARCH_SERVICE_NAME,
azure_search_key = AZURE_COGNITIVE_SEARCH_API_KEY,
index_name = AZURE_COGNITIVE_SEARCH_INDEX_NAME,
embedding_function = embeddings.embed_query)
def generate_tokens(s, f):
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
splits = text_splitter.split_text(s)
i = 0
documents = []
for split in splits:
metadata = {}
metadata["index"] = i
metadata["file_source"] = f
i = i+1
new_doc = Document(page_content=split, metadata=metadata)
documents.append(new_doc)
#documents = text_splitter.create_documents(splits)
print (documents)
return documents
drive.mount('/content/drive')
folder = "/content/drive/.../pdf/"
page_content = ''
doc_content = ''
for filename in os.listdir(folder):
file_path = os.path.join(folder, filename)
if os.path.isfile(file_path):
print(f"Processing file: {file_path}")
doc = fitz.open(file_path)
for page in doc: # iterate the document pages
page_content += page.get_text() # get plain text encoded as UTF-8
# the add_documents call below throws the error
# how can i add the chunks + filename to
# Azure Cognitive Search?
doc_content += page_content
d = generate_tokens(doc_content, file_path)
acs.add_documents(documents=d)
print(metadatas)
print("----------")
print(doc_content)
count = len(doc_content.split())
print("Number of tokens: ", count)
HttpResponseError Traceback (most recent call last)
<ipython-input-11-d9eaff7ee027> in <cell line: 10>()
31 all_texts.extend(d)
32
---> 33 acs.add_documents(documents=d)
34
35 metadatas = [{"Source": f"{i}-pl"} for i in range(len(all_texts))]
7 frames
/usr/local/lib/python3.10/dist-packages/azure/search/documents/_generated/operations/_documents_operations.py in index(self, batch, request_options, **kwargs)
1249 map_error(status_code=response.status_code, response=response, error_map=error_map)
1250 error = self._deserialize.failsafe_deserialize(_models.SearchError, pipeline_response)
-> 1251 raise HttpResponseError(response=response, model=error)
1252
1253 if response.status_code == 200:
HttpResponseError: () The request is invalid. Details: The property 'content' does not exist on type 'search.documentFields'. Make sure to only use property names that are defined by the type.
Code:
Message: The request is invalid. Details: The property 'content' does not exist on type 'search.documentFields'. Make sure to only use property names that are defined by the type.
</code></pre>
<p>This is my index in Azure Cognitive Search index:</p>
<p><a href="https://i.sstatic.net/4RWfQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4RWfQ.png" alt="enter image description here" /></a></p>
| <python><azure><azure-cognitive-search><langchain><azure-openai> | 2023-11-11 13:44:10 | 1 | 4,326 | STORM |
77,465,100 | 3,156,300 | Mirror Selected QGraphicsItems based on boundingRect center | <p>For the life of me I cannot seem to figure out how to mirror the selected QGraphicsItems around their collective center. Each attempt I've tried eventually makes the items shoot off screen. I've recreated this same setup in 3D applications and got it to work, but I'm missing something here and can't figure it out. Any help would be much appreciated. The end goal is to be able to click the Mirror button and have it flip them back and forth like the image below...</p>
<p><a href="https://i.sstatic.net/CNcgJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CNcgJ.png" alt="enter image description here" /></a></p>
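Before fighting <code>QTransform</code>, it can help to verify the reflection arithmetic in isolation: reflecting an item's horizontal extent <code>[left, left + width]</code> about <code>centerX</code> sends its left edge to <code>2*centerX - (left + width)</code>. A Qt-free sketch (hypothetical names):

```python
def mirrored_left(left: float, width: float, center_x: float) -> float:
    # reflect the interval [left, left + width] about x = center_x
    return 2.0 * center_x - (left + width)

a = mirrored_left(0.0, 40.0, 100.0)    # 160.0: swaps with the rect at 160
b = mirrored_left(160.0, 40.0, 100.0)  # 0.0
```

In scene terms that means moving each item so its <code>sceneBoundingRect().left()</code> lands at this value (and, if the item's contents must flip too, applying <code>scale(-1, 1)</code> about the item's own local center). Applying the operation twice returns everything to its original place, which is a quick sanity check that items are not drifting off-screen.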
<pre class="lang-py prettyprint-override"><code>import sys
from PySide2 import QtWidgets, QtCore, QtGui
from PySide2.QtGui import QPixmap, QPainter, QColor
import os
import random
class ImageWindow(QtWidgets.QMainWindow):
def __init__(self):
super().__init__()
self.resize(1500,1080)
# controls
self.scene = QtWidgets.QGraphicsScene(self)
self.scene.setBackgroundBrush(QColor(40,40,40))
self.graphicsView = QtWidgets.QGraphicsView(self)
self.graphicsView.setSceneRect(-4000, -4000, 8000, 8000)
self.graphicsView.setRenderHints(QPainter.Antialiasing | QPainter.SmoothPixmapTransform)
self.graphicsView.setScene(self.scene)
# align actions
self.mirrorItemsHorizontalAct = QtWidgets.QAction('Mirror Horizontal', self)
self.mirrorItemsHorizontalAct.triggered.connect(self.mirror_items_horizontal)
transformToolbar = QtWidgets.QToolBar('Transform', self)
transformToolbar.addAction(self.mirrorItemsHorizontalAct)
self.addToolBar(transformToolbar)
self.setCentralWidget(self.graphicsView)
# Load images from subfolder
self.create_shapes()
def create_shapes(self):
gradient = QtGui.QLinearGradient(0, 0, 100, 0)
gradient.setColorAt(0, QtGui.QColor(0, 0, 255))
gradient.setColorAt(1, QtGui.QColor(255, 0, 0))
rectA = QtWidgets.QGraphicsRectItem(0,0,150,100)
rectA.setBrush(QtGui.QBrush(gradient))
rectA.setPen(QtGui.QPen(QtCore.Qt.NoPen))
rectA.setPen(QtGui.QPen(QtCore.Qt.green, 10, QtCore.Qt.SolidLine, QtCore.Qt.RoundCap, QtCore.Qt.RoundJoin))
rectA.setFlags(QtWidgets.QGraphicsItem.ItemIsSelectable | QtWidgets.QGraphicsItem.ItemIsMovable)
rectA.setSelected(True)
rectA.setPos(-120,50)
self.scene.addItem(rectA)
rectB = QtWidgets.QGraphicsRectItem(0,0,70,35)
rectB.setBrush(QtGui.QBrush(gradient))
rectB.setPen(QtGui.QPen(QtCore.Qt.NoPen))
rectB.setFlags(QtWidgets.QGraphicsItem.ItemIsSelectable | QtWidgets.QGraphicsItem.ItemIsMovable)
rectB.setSelected(True)
rectB.setPos(150,-150)
self.scene.addItem(rectB)
rectC = QtWidgets.QGraphicsRectItem(0,0,120,75)
rectC.setBrush(QtGui.QBrush(gradient))
rectC.setPen(QtGui.QPen(QtCore.Qt.NoPen))
rectC.setFlags(QtWidgets.QGraphicsItem.ItemIsSelectable | QtWidgets.QGraphicsItem.ItemIsMovable)
rectC.setSelected(True)
rectC.setPos(400,70)
self.scene.addItem(rectC)
# Align
def mirror_items_horizontal(self):
items = self.scene.selectedItems()
if not items:
items = self.scene.items()
# Calculate the collective bounding box
collectivePath = QtGui.QPainterPath()
for item in items:
collectivePath.addRect(item.sceneBoundingRect())
collectiveRect = collectivePath.boundingRect()
# Calculate the horizontal center of the collective bounding box
centerX = collectiveRect.center().x()
# print('centerX:', centerX)
# print('collectiveRect:', collectiveRect)
# Debug
cItem = QtWidgets.QGraphicsRectItem(collectiveRect)
cItem.setBrush(QtGui.QBrush(QtGui.QColor(255,0,0,128)))
cItem.setZValue(-1000)
self.scene.addItem(cItem)
cItem = QtWidgets.QGraphicsEllipseItem(0, 0, 10, 10)
cItem.setPos(collectiveRect.center() - QtCore.QPointF(cItem.boundingRect().width()*0.5, cItem.boundingRect().height()*0.5) )
cItem.setBrush(QtGui.QBrush(QtGui.QColor(255,255,0,128)))
cItem.setZValue(-1000)
self.scene.addItem(cItem)
for item in items:
local_offset = item.boundingRect().center().x() * 2.0
global_offset = collectiveRect.width() + item.boundingRect().x() * 2.0
print('scenePos', item.scenePos())
print('boundingRect', item.boundingRect())
print('boundingRect center', item.boundingRect().center())
print('local_offset', local_offset)
print('global_offset', global_offset)
scaleTm = QtGui.QTransform()
scaleTm.translate(local_offset, 0)
# scaleTm.translate(global_offset, 0)
# scaleTm.translate(645, 0)
scaleTm.scale(-1, 1)
tm = item.transform() * scaleTm
item.setTransform(tm)
if __name__ == '__main__':
app = QtWidgets.QApplication(sys.argv)
mainWindow = ImageWindow()
mainWindow.show()
sys.exit(app.exec_())
</code></pre>
| <python><pyside><qgraphicsview> | 2023-11-11 12:34:54 | 2 | 6,178 | JokerMartini |
77,465,090 | 13,176,896 | How to increase image resolution in openpyxl? | <p>I have some high-resolution images, but when adding them to Excel using openpyxl, their resolution becomes low. <br>
My example code is here:</p>
<pre><code>wb = openpyxl.Workbook()
ws = wb.active
img = openpyxl.drawing.image.Image('high_resolution_image.png')
img.anchor = AbsoluteAnchor(pos=position, ext=size)
ws.add_image(img)
</code></pre>
<p>What is the problem and how to solve it?</p>
| <python><excel><openpyxl><resolution> | 2023-11-11 12:31:39 | 0 | 2,642 | Gilseung Ahn |
77,465,049 | 3,259,222 | How to add new value for a given dimension and coordinate in xarray? | <p>I have data with the following xarray DataSet representation:</p>
<pre><code>df = pd.DataFrame({
'var': np.random.rand(16),
'country': 4*['UK'] + 4*['US'] + 4*['FR'] + 4*['DE'],
'time_delta': 4*list(pd.timedelta_range(
start='30D',
periods=4,
freq='30D'
)),
})
ds = df.set_index(['country','time_delta']).to_xarray()
</code></pre>
<p>I want to add a new value of the variable for a given new coordinate along a given dimension, while maintaining the existing dimensions:</p>
<blockquote>
<p>Set value=0 of variable=var for coord='0D' of dimension=time_delta while preserving other existing
dimensions (in that case country).</p>
</blockquote>
<p>In pandas I can do this via:</p>
<pre><code># 1. Pivot long to wide
df_wide = df.pivot(
index='country',
columns='time_delta'
).droplevel(0,axis=1)
# 2. Set value
df_wide['0D'] = 0
# 3. Melt wide to long
df_new = df_wide.melt(ignore_index=False).rename(
columns={'value': 'var'}
).reset_index()
ds_new = df_new.set_index(['country','time_delta']).to_xarray()
</code></pre>
<p>Is there a general way of doing such operations in xarray, in order to go directly <code>ds -> ds_new</code>?</p>
<p><strong>Edit</strong>: The following fails with <code>KeyError: "not all values found in index 'time_delta'"</code>:</p>
<pre><code>ds['var'].loc[{'time_delta': '0D'}] = 0
</code></pre>
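One xarray-native route that may achieve this directly, assuming the goal is a constant fill at the new coordinate: <code>reindex</code> along <code>time_delta</code> with the new coordinate prepended and <code>fill_value=0</code> (a sketch on a reduced example):

```python
import numpy as np
import pandas as pd
import xarray as xr

df = pd.DataFrame({
    'var': np.arange(4.0),
    'country': 2 * ['UK'] + 2 * ['US'],
    'time_delta': 2 * list(pd.timedelta_range(start='30D', periods=2, freq='30D')),
})
ds = df.set_index(['country', 'time_delta']).to_xarray()

# prepend the new coordinate; fill_value supplies var=0 for every country there
new_coords = [pd.Timedelta('0D')] + list(ds['time_delta'].values)
ds_new = ds.reindex(time_delta=new_coords, fill_value=0)
```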
| <python><pandas><python-xarray> | 2023-11-11 12:16:19 | 1 | 431 | Konstantin |
77,464,627 | 8,844,970 | SQLAlchemy using both primaryjoin and secondaryjoin with association table | <p>I am trying to create a database for genomics. I have two tables, <code>Gene</code> and <code>Mutation</code>, and I am trying to join them together. Genes are related to mutations if they are on the same chromosome and their positions overlap. I tried using an association table to connect them and get a join statement using Flask-SQLAlchemy, but I keep running into errors. Here is my setup:</p>
<pre><code>class Mutation(db.Model):
__tablename__ = 'mutation'
mutation_index = Column(String(100), primary_key=True)
chrom = Column(String(40), ForeignKey("contig.accession"))
mutation_start = Column(Integer)
mutation_end = Column(Integer)
gene = relationship("Gene",
primaryjoin="and_(gene_mutations.c.gene_chrom == Mutation.chrom, "
"Mutation.mutation_start <= gene_mutations.c.gene_end, "
"gene_mutations.c.gene_start <= Mutation.mutation_end)",
secondary="gene_mutations",
secondaryjoin="gene_mutations.c.gene_id == Gene.gene_id",
back_populates="mutations", viewonly=True)
class Gene(db.Model):
__tablename__ = 'gene'
gene_id = Column(String(40), primary_key=True)
chrom = Column(String(40))
gene_start = Column(Integer, nullable=True)
gene_end = Column(Integer, nullable=True)
mutations = relationship("Mutation",
primaryjoin="and_(Gene.chrom == gene_mutations.c.mutation_chrom, "
"gene_mutations.c.mutation_start <= Gene.gene_end, "
"Gene.gene_start <= gene_mutations.c.mutation_end)",
secondary="gene_mutations",
secondaryjoin="gene_mutations.c.mutation_id == Mutation.mutation_index",
back_populates="gene", viewonly=True)
gene_mutations = Table("gene_mutations",
db.Model.metadata,
db.Column("mutation_id", ForeignKey('mutation.mutation_index')),
db.Column("mutation_start", ForeignKey('mutation.mutation_start')),
db.Column("mutation_end", ForeignKey('mutation.mutation_end')),
db.Column("mutation_chrom", ForeignKey('mutation.chrom')),
db.Column("gene_id", ForeignKey('gene.gene_id')),
db.Column("gene_start", ForeignKey('gene.gene_start')),
db.Column("gene_end", ForeignKey('gene.gene_end')),
db.Column("gene_chrom", ForeignKey('gene.chrom'))
)
</code></pre>
<p>If I have the logic set up correctly, the complicated-looking <code>primaryjoin</code> should just find overlaps between <code>Gene</code> and <code>Mutation</code>. So my logic was to connect <code>Mutation</code> to <code>gene_mutations</code>, then connect <code>gene_mutations</code> to <code>Gene</code>, and vice versa, so I can join them together and filter based on different criteria. When I try to join the tables like this:</p>
<pre><code>Mutation.query.join(gene_mutations, (gene_mutations.c.mutation_id == Mutation.mutation_index)).all()
</code></pre>
<p>I get the error:</p>
<pre><code>sqlalchemy.exc.ProgrammingError: (mysql.connector.errors.ProgrammingError) 1146 (42S02): Table 'mutations.gene_mutations' doesn't exist
</code></pre>
<p>Is this the right way to go about joining these two tables?</p>
<p><strong>EDIT:</strong> So I launched MySQL and manually added the table <code>gene_mutations</code>, and that seems to have partially solved the error. Now there are two questions. First, how do I make sure the <code>gene_mutations</code> table gets created every time I run <code>db.create_all()</code>? I thought using <code>db.Model.metadata</code> would create it. Second, unfortunately the query</p>
<pre><code>result = Mutation.query.join(gene_mutations, (gene_mutations.c.vcf_id == Mutation.mutation_index))
</code></pre>
<p>comes back empty. How do I create a joined table that can map mutations to genes (a many-to-many mapping)?</p>
| <python><sqlalchemy> | 2023-11-11 09:58:45 | 0 | 369 | spo |
77,464,554 | 1,595,350 | Why am I receiving "AttributeError: 'str' object has no attribute 'page_content'" when trying to add my embeddings to Azure Cognitive Search | <p>I am extracting text from PDF documents and loading it into Azure Cognitive Search for a RAG approach. Unfortunately this does not work. I am receiving the error message</p>
<blockquote>
<p>AttributeError: 'str' object has no attribute 'page_content'</p>
</blockquote>
<p>What I want to do is</p>
<ol>
<li>Extract text from PDF via PyMuPDF - works</li>
<li>Upload it to Azure vector search as embeddings with <code>vectors</code> and <code>filename</code></li>
<li>Query this through a ChatGPT model</li>
</ol>
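The traceback shows <code>add_documents</code> iterating <code>doc.page_content</code>, so it expects <code>Document</code> objects rather than the raw strings the splitter returns. A hedged sketch of the wrapping step (a stand-in class is used here so the snippet is self-contained; with langchain it would be <code>from langchain.docstore.document import Document</code>, and <code>to_documents</code> is a hypothetical helper name):

```python
class Document:  # stand-in mirroring langchain's two fields
    def __init__(self, page_content, metadata=None):
        self.page_content = page_content
        self.metadata = metadata or {}

def to_documents(chunks, file_path):
    # wrap each chunk, carrying the source filename as metadata
    return [Document(page_content=c, metadata={"file_source": file_path, "index": i})
            for i, c in enumerate(chunks)]

docs = to_documents(["chunk one", "chunk two"], "a.pdf")
# then: acs.add_documents(documents=docs)
```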
<p>This is my code:</p>
<pre><code>!pip install cohere tiktoken
!pip install openai==0.28.1
!pip install pymupdf
!pip install azure-storage-blob azure-identity
!pip install azure-search-documents --pre --upgrade
!pip install langchain
import fitz
import time
import uuid
import os
import openai
from PIL import Image
from io import BytesIO
from IPython.display import display
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chat_models import AzureChatOpenAI
from langchain.vectorstores import AzureSearch
from langchain.document_loaders import DirectoryLoader
from langchain.document_loaders import TextLoader
from langchain.text_splitter import TokenTextSplitter
from langchain.chains import ConversationalRetrievalChain
from langchain.prompts import PromptTemplate
from google.colab import drive
OPENAI_API_BASE = "https://xxx.openai.azure.com"
OPENAI_API_KEY = "xxx"
OPENAI_API_VERSION = "2023-05-15"
openai.api_type = "azure"
openai.api_key = OPENAI_API_KEY
openai.api_base = OPENAI_API_BASE
openai.api_version = OPENAI_API_VERSION
AZURE_COGNITIVE_SEARCH_SERVICE_NAME = "https://xxx.search.windows.net"
AZURE_COGNITIVE_SEARCH_API_KEY = "xxx"
AZURE_COGNITIVE_SEARCH_INDEX_NAME = "test"
llm = AzureChatOpenAI(deployment_name="gpt35", openai_api_key=OPENAI_API_KEY, openai_api_base=OPENAI_API_BASE, openai_api_version=OPENAI_API_VERSION)
embeddings = OpenAIEmbeddings(deployment_id="ada002", chunk_size=1, openai_api_key=OPENAI_API_KEY, openai_api_base=OPENAI_API_BASE, openai_api_version=OPENAI_API_VERSION)
acs = AzureSearch(azure_search_endpoint=AZURE_COGNITIVE_SEARCH_SERVICE_NAME,
                  azure_search_key=AZURE_COGNITIVE_SEARCH_API_KEY,
                  index_name=AZURE_COGNITIVE_SEARCH_INDEX_NAME,
                  embedding_function=embeddings.embed_query)

def generate_tokens(s):
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    splits = text_splitter.split_text(s)
    return splits
drive.mount('/content/drive')
folder = "/content/drive/.../pdf/"
page_content = ''
doc_content = ''
for filename in os.listdir(folder):
    file_path = os.path.join(folder, filename)
    if os.path.isfile(file_path):
        print(f"Processing file: {file_path}")
        doc = fitz.open(file_path)
        for page in doc:  # iterate the document pages
            page_content += page.get_text()  # get plain text encoded as UTF-8
        doc_content += page_content
        d = generate_tokens(doc_content)
        # the following line throws the error
        # how can i add the chunks + filename to
        # Azure Cognitive Search?
        acs.add_documents(documents=d)

print(metadatas)
print("----------")
print(doc_content)
count = len(doc_content.split())
print("Number of tokens: ", count)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-33-d9eaff7ee027> in <cell line: 10>()
31 all_texts.extend(d)
32
---> 33 acs.add_documents(documents=d)
34
35 metadatas = [{"Source": f"{i}-pl"} for i in range(len(all_texts))]
1 frames
/usr/local/lib/python3.10/dist-packages/langchain/schema/vectorstore.py in <listcomp>(.0)
118 """
119 # TODO: Handle the case where the user doesn't provide ids on the Collection
--> 120 texts = [doc.page_content for doc in documents]
121 metadatas = [doc.metadata for doc in documents]
122 return self.add_texts(texts, metadatas, **kwargs)
AttributeError: 'str' object has no attribute 'page_content'
</code></pre>
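The traceback shows the cause: `add_documents` iterates `doc.page_content`, so it expects objects exposing `page_content` and `metadata` attributes (in LangChain, `langchain.schema.Document`), not plain strings. The sketch below is a dependency-free illustration of that contract — the `Document` dataclass and `add_documents_stub` here are stand-ins for the LangChain/Azure Search objects, not the real API:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """Stand-in for langchain.schema.Document."""
    page_content: str
    metadata: dict = field(default_factory=dict)

def add_documents_stub(documents):
    # mimics what the vector store does internally: it reads
    # .page_content from every item, which is where strings blow up
    return [doc.page_content for doc in documents]

chunks = ["chunk one", "chunk two"]   # hypothetical output of the text splitter
filename = "report.pdf"               # hypothetical source file name

# wrapping each chunk fixes the AttributeError raised for plain strings
docs = [Document(page_content=c, metadata={"filename": filename}) for c in chunks]
texts = add_documents_stub(docs)
```

With the real library, the equivalent move would be wrapping each split in a `Document` (attaching the filename as metadata) before calling `acs.add_documents`.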
| <python><azure-cognitive-services><azure-cognitive-search><langchain><azure-openai> | 2023-11-11 09:33:02 | 1 | 4,326 | STORM |
77,464,520 | 4,230,643 | Can anyone explain when two imports of the same Python module can result in two different module objects? | <p>Last year, I wrote the following three modules in Python 3.7.2:</p>
<pre class="lang-py prettyprint-override"><code># my_module.py
my_counter = 0

def increment_counter():
    global my_counter
    my_counter += 1
</code></pre>
<pre class="lang-py prettyprint-override"><code># client_module1.py
import my_module

def increment_counter():
    my_module.increment_counter()
    print('Counter from client 1 is: %d' % my_module.my_counter)
</code></pre>
<pre class="lang-py prettyprint-override"><code># client_module2.py
import my_module

def increment_counter():
    my_module.increment_counter()
    print('Counter from client 2 is: %d' % my_module.my_counter)
</code></pre>
<p>Then, I tried the following in IPython:</p>
<pre class="lang-py prettyprint-override"><code>In [1]: import client_module1
In [2]: import client_module2
In [3]: client_module1.increment_counter()
Counter from client 1 is: 1
In [4]: client_module2.increment_counter()
Counter from client 2 is: 1
In [5]: client_module1.my_module is client_module2.my_module
Out[5]: False
</code></pre>
<p>I assumed that this is just the unfortunate way that Python works.</p>
<p>Today, after reading documentation that claims that the above result isn't what happens, I tried the experiment again. But I got different results:</p>
<pre class="lang-py prettyprint-override"><code>In [1]: import client_module1
In [2]: import client_module2
In [3]: client_module1.increment_counter()
Counter from client 1 is: 1
In [4]: client_module2.increment_counter()
Counter from client 2 is: 2
In [5]: client_module1.my_module is client_module2.my_module
Out[5]: True
</code></pre>
<p>Apparently, this second result is how Python modules are supposed to behave. Documentation claims that the only exceptions occur when the same module is imported via different explicit paths (such as directly via PYTHONPATH, and then indirectly through a containing directory that is also on PYTHONPATH), or by different methods (such as <code>import my_module</code> in one place but <code>from somewhere import my_module</code> someplace else).</p>
<p>But that's not what happened here. The files were all in the current working directory when I started IPython during both experiments, and the import statements were exactly the ones shown above. PYTHONPATH was unset, and there were no extra files in the directory.</p>
<p>Does anyone know which Python import edge case can cause this to happen sometimes but not others? It's very important to know in order to be able to do certain things involving global state reliably.</p>
<p>In both experiments, I used Python 3.7.2 and IPython 7.33.0.</p>
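For what it's worth, a plain <code>import</code> normally resolves through the <code>sys.modules</code> cache, so both clients should receive the identical module object; a genuinely new object appears only when the cache entry is bypassed or removed (different search-path spellings, or a tool such as IPython's autoreload deleting entries). A standard-library-only sketch of both behaviors:

```python
import importlib
import os
import sys
import tempfile

# build a throwaway module on disk so the demo is self-contained
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, "my_module_demo.py"), "w") as f:
    f.write("my_counter = 0\n")

sys.path.insert(0, tmpdir)
try:
    mod_a = importlib.import_module("my_module_demo")
    mod_b = importlib.import_module("my_module_demo")  # served from sys.modules
    cached_is_same = mod_a is mod_b

    # dropping the cache entry forces a brand-new module object
    del sys.modules["my_module_demo"]
    mod_c = importlib.import_module("my_module_demo")
    fresh_is_different = mod_c is not mod_a
finally:
    sys.path.remove(tmpdir)
```

Anything that deletes or replaces the `sys.modules` entry between the two client imports would reproduce the first experiment's result.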
| <python><python-import> | 2023-11-11 09:20:33 | 0 | 2,681 | Throw Away Account |
77,464,293 | 11,353,092 | Is there a way to limit Python's keyboard.add_hotkey() to trigger only once per keystroke? | <pre><code>import keyboard

def on_c_pressed():
    print("C key was pressed")

keyboard.add_hotkey('c', on_c_pressed)
</code></pre>
<p>Currently it fires repeatedly while the keystroke is held down and released.</p>
<p>One solution is to track a global flag with on_c_pressed() and on_c_released(), but that seems a little messy. Does anyone know a cleaner way, or an alternative library that detects a key press exactly once per press?</p>
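The press/release-flag idea can at least be packaged once so it never clutters the handlers. The sketch below is pure Python (no <code>keyboard</code> dependency) and simulates the events; with the real library you would wire <code>on_press</code> and <code>on_release</code> to the corresponding press and release callbacks:

```python
def edge_trigger(callback):
    """Wrap callback so it fires once per physical press, not on auto-repeat."""
    state = {"down": False}

    def on_press():
        if not state["down"]:      # ignore repeats while the key is held
            state["down"] = True
            callback()

    def on_release():
        state["down"] = False      # re-arm for the next physical press

    return on_press, on_release

fired = []
on_press, on_release = edge_trigger(lambda: fired.append("c"))

# simulate: press, auto-repeat, auto-repeat, release, press again
for event in ["press", "press", "press", "release", "press"]:
    (on_press if event == "press" else on_release)()
```

The wrapper fires the callback exactly twice here — once per physical press — regardless of how many repeat events arrive in between.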
| <python><keyboard-events><keystroke><python-keyboard> | 2023-11-11 07:54:08 | 1 | 487 | Anonymous |
77,463,678 | 992,421 | removing , and converting to int | <p>I am banging my head trying to convert the following Spark RDD data using this code:</p>
<pre><code>[('4', ('1', '2')),
('10', ('5',)),
('3', ('2',)),
('6', ('2', '5')),
('7', ('2', '5')),
('1', None),
('8', ('2', '5')),
('9', ('2', '5')),
('2', ('3',)),
('5', ('4', '2', '6')),
('11', ('5',))]
def adjDang(line, tc):
    node, edges = line
    print(f'node {node} edges {edges}')
    if edges == None:
        return (int(node), (0, 0))
    else:
        if len(edges) == 1:
            newedges = (edges[0])  # remove the comma which is unnecessary, check '11'
        else:
            newedges = ()
            for i in range(len(edges)):
                newedges += edges[i]
    print(f'node {node} edge{newedges}')
    return (int(node), (1/tc, newedges))
</code></pre>
<p>I am getting the following output</p>
<pre><code>[(4, (0.09090909090909091, ('1', '2'))),
(10, (0.09090909090909091, '5')),
(3, (0.09090909090909091, '2')),
(6, (0.09090909090909091, ('2', '5'))),
(7, (0.09090909090909091, ('2', '5'))),
(1, (0, 0)),
(8, (0.09090909090909091, ('2', '5'))),
(9, (0.09090909090909091, ('2', '5'))),
(2, (0.09090909090909091, '3')),
(5, (0.09090909090909091, ('4', '2', '6'))),
(11, (0.09090909090909091, '5'))]
</code></pre>
<p>The expectation is to get the output in the format
(node_id, (score, edges)); for node 5, for example, it should look like
(5, (0.09090909090909091, 4, 2, 6)). The extra brackets should go away so that everything after the node id forms a single flat tuple, and the edges should be integers.</p>
<p>Appreciate any pointers on how to achieve this please</p>
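Two details in <code>adjDang</code> produce the output above: <code>(edges[0])</code> is just the string itself (a one-element tuple needs a trailing comma), and the values stay strings throughout. Building one flat tuple of ints directly avoids both. A sketch of the mapper (names mirror the question; <code>tc</code> is the total node count), which could then be applied with something like <code>rdd.map(lambda line: adj_dang(line, tc))</code>:

```python
def adj_dang(line, tc):
    node, edges = line
    if edges is None:
        return (int(node), (0, 0))
    # one flat tuple: (score, edge1, edge2, ...), with integer edges
    return (int(node), (1 / tc,) + tuple(int(e) for e in edges))

data = [('5', ('4', '2', '6')), ('1', None), ('11', ('5',))]
tc = 11
result = [adj_dang(line, tc) for line in data]
```

Concatenating a one-element tuple <code>(1 / tc,)</code> with the converted edge tuple handles the single-edge, multi-edge, and empty cases uniformly.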
| <python><apache-spark><pyspark><rdd> | 2023-11-11 02:17:26 | 1 | 850 | Ram |
77,463,403 | 21,370,869 | My simple python program does not execute the last line | <p>I have a simple Firefox extension, I have pretty much figured out the Firefox side of things (JavaScript, DOM and starting the Python program).</p>
<p>To explain how everything is supposed to work:</p>
<ol>
<li>An event occurs in the browser</li>
<li>Firefox launches the Python program on the local system (achieved with <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Native_messaging" rel="nofollow noreferrer">Native Messaging</a>)</li>
<li>Firefox passes a one time message to the Python program via stdin</li>
</ol>
<p>After step 3, Firefox is meant to exit the picture and Python is supposed to take over.</p>
<p>I am stuck on the Python part of this process. The python program does receive the message from Firefox, via stdin. But once execution goes past the line <code>receivedMessage = getMessage()</code>, I start to get odd behaviour. For example the last line <code>subprocess.Popen(...</code> is never executed. Even if I were to launch the Python program manually, say double clicking it in File explorer, the last line never executes.</p>
<p>The only way to make it execute is by commenting out <code>receivedMessage = getMessage()</code>.</p>
<pre><code>import subprocess
import json
import struct
import sys

def getMessage():
    rawLength = sys.stdin.buffer.read(4)
    messageLength = struct.unpack('@I', rawLength)[0]
    message = sys.stdin.buffer.read(messageLength).decode('utf-8')
    return json.loads(message)

receivedMessage = getMessage()
#subprocess.Popen(["explorer", "C:/Temp"]) #Is never executed
subprocess.Popen(['pwsh', 'C:/Temp/testProg.ps1']) #Is never executed
</code></pre>
<p>The core of the program is an example I got from the <a href="https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Native_messaging#app_side" rel="nofollow noreferrer">MDN documentation</a> page, which I reworked by removing the redundant parts. I don't know the technical details behind stdin or how it's specifically implemented in Python; I understand it only at a high level.</p>
<p>What could be holding back the execution of the program? Could it be held up by Firefox still streaming data to it?</p>
<p>Any help would be greatly appreciated!</p>
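One way to rule out the messaging layer is to exercise the reader against an in-memory stream: native messaging frames each message as a 4-byte native-endian length followed by UTF-8 JSON, and a second blocking <code>read(4)</code> on a still-open stdin is exactly the kind of thing that can appear to stall a script. A standard-library-only sketch:

```python
import io
import json
import struct

def get_message(stream):
    raw_length = stream.read(4)
    if len(raw_length) < 4:
        return None                     # EOF: the browser closed the pipe
    message_length = struct.unpack('@I', raw_length)[0]
    return json.loads(stream.read(message_length).decode('utf-8'))

# fake stdin carrying exactly one framed message
payload = json.dumps({"event": "go"}).encode('utf-8')
fake_stdin = io.BytesIO(struct.pack('@I', len(payload)) + payload)

first = get_message(fake_stdin)
second = get_message(fake_stdin)        # no more data -> None instead of hanging
```

With a real pipe, the second `read(4)` would block until the sender writes more bytes or closes the pipe, which is worth checking as a cause of the hang.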
| <python><firefox-addon><stdin> | 2023-11-11 00:03:08 | 0 | 1,757 | Ralf_Reddings |
77,463,264 | 7,530,306 | tomli-w, adding arrays from a python script | <p>I have a toml file like this</p>
<pre><code>[[fruits]]
fruit_property = "x"

[[fruits]]
fruit_property = "x"

[[fruits]]
fruit_property = "x"
</code></pre>
<p>And I want to produce this from a Python script. So far in Python I have:</p>
<pre><code>import tomli_w

fruits = [
    {'fruit_property': 'x'}, {'fruit_property': 'y'}, {'fruit_property': 'z'},
]
toml_string = tomli_w.dumps(fruits)
</code></pre>
<p>and then writing the string to a file. However, this doesn't convert the array of dicts; it just keeps them the way my code has them. I've looked and can't find any documentation for this; has anyone run into similar issues?</p>
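For context, `tomli_w.dumps` expects a dict at the top level, so an array of tables has to live under a key — i.e. `tomli_w.dumps({'fruits': fruits})` should emit one `[[fruits]]` header per dict. The sketch below mimics that output shape without the dependency, purely to illustrate the mapping (the hand-rolled emitter is not a substitute for tomli-w):

```python
fruits = [{"fruit_property": "x"}, {"fruit_property": "y"}, {"fruit_property": "z"}]

# tomli_w.dumps requires a dict at the top level, so nest the list under a key
data = {"fruits": fruits}

# minimal illustration of the array-of-tables output this maps to
lines = []
for item in data["fruits"]:
    lines.append("[[fruits]]")
    for key, value in item.items():
        lines.append(f'{key} = "{value}"')
toml_string = "\n".join(lines)
```

Passing a bare list to the serializer cannot work, because a TOML document is always a table at its root.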
| <python><toml> | 2023-11-10 23:11:55 | 2 | 665 | sf8193 |
77,463,217 | 688,563 | python pyroute2 move WiFi interface into network namespace | <p>I want to use python pyroute2 to discover a USB WiFi dongle on Linux and move it into a network namespace.</p>
<p>I can do this with an ethernet interface but I understand that I have to use the PHY to move it with pyroute2 if it is a wifi interface. I checked the docs and do not see an example for how to do this.</p>
<p>Below is what I have so far. How can I use pyroute2 to move my USB WiFi dongle into a network namespace?</p>
<pre><code># Inspect networking interfaces
import netifaces
# Set up networking namespaces
from pyroute2 import NetNS, NSPopen, netns, IPDB
from pyroute2 import IW
from pyroute2 import IPRoute
from pyroute2.netlink import NetlinkError
# Get a list of all the network interfaces
interfaces = netifaces.interfaces()
# Manage interfaces
ipdb = IPDB()
# Loop over all the interfaces and print their details
for iface in interfaces:
    iface_details = netifaces.ifaddresses(iface)
    if iface.find("wlxc") >= 0:
        print(f"Found USB Wi-Fi interface: {iface} with details: {iface_details}")
        # Truncate the long name
        truncated_name = iface[-8:-1]
        # Create a networking namespace
        ipdb_netns = IPDB(nl=NetNS(truncated_name))
        # Command to run
        program_command = ["ifconfig"]
        # Get the interface
        print(f"Truncated name is: {truncated_name}")
        link_iface = ipdb.interfaces[iface]
        print(f"Link interface: {link_iface} ...")
        # Look for the wireless index
        ip = IPRoute()
        iw = IW()
        index = ip.link_lookup(ifname=iface)[0]
        try:
            wifi_if = iw.get_interface_by_ifindex(index)
            print(f"Wireless interface: {wifi_if}")
            with ipdb.interfaces[iface] as if_to_move:
                print(f"Moving interface: {if_to_move} to: {truncated_name}")
                if_to_move.net_ns_fd = truncated_name
            print(f"Done moving to: {truncated_name} ...")
        except NetlinkError as e:
            if e.code == 19:  # 19 'No such device'
                print("not a wireless interface")
        finally:
            iw.close()
            ip.close()
</code></pre>
<p>Error:</p>
<pre><code>Found USB Wi-Fi interface: wlxc87f548d9633 with details: {17: [{'addr': 'c8:7f:54:8d:96:33', 'broadcast': 'ff:ff:ff:ff:ff:ff'}]}
Truncated name is: 548d963
Link interface: {'address': 'c8:7f:54:8d:96:33', 'broadcast': 'ff:ff:ff:ff:ff:ff', 'ifname': 'wlxc87f548d9633', 'mtu': 1500, 'qdisc': 'noqueue', 'txqlen': 1000, 'operstate': 'DOWN', 'linkmode': 1, 'group': 0, 'promiscuity': 0, 'num_tx_queues': 1, 'num_rx_queues': 1, 'carrier': 0, 'carrier_changes': 1, 'proto_down': 0, 'gso_max_segs': 65535, 'gso_max_size': 65536, 'xdp': {'attrs': [('IFLA_XDP_ATTACHED', None)]}, 'carrier_up_count': 0, 'carrier_down_count': 1, 'min_mtu': 256, 'max_mtu': 2304, 'perm_address': 'c8:7f:54:8d:96:33', 'parent_dev_name': '3-3:1.0', 'parent_dev_bus_name': 'usb', 'index': 10, 'flags': 4099, 'ipdb_scope': 'system', 'ipdb_priority': 0, 'vlans': (), 'ipaddr': (), 'ports': (), 'family': 0, 'ifi_type': 1, 'state': 'up', 'neighbours': ()} ...
Wireless interface: ({'cmd': 7, 'version': 1, 'reserved': 0, 'attrs': [('NL80211_ATTR_IFINDEX', 10), ('NL80211_ATTR_IFNAME', 'wlxc87f548d9633'), ('NL80211_ATTR_WIPHY', 1), ('NL80211_ATTR_IFTYPE', 2), ('NL80211_ATTR_WDEV', 4294967297), ('NL80211_ATTR_MAC', 'c8:7f:54:8d:96:33'), ('NL80211_ATTR_GENERATION', 17), ('NL80211_ATTR_4ADDR', '00'), ('NL80211_ATTR_WIPHY_TX_POWER_LEVEL', 1600), ('NL80211_ATTR_TXQ_STATS', '08:00:01:00:00:00:00:00:08:00:02:00:00:00:00:00:08:00:03:00:00:00:00:00:08:00:04:00:00:00:00:00:08:00:05:00:00:00:00:00:08:00:06:00:00:00:00:00:08:00:08:00:00:00:00:00:08:00:09:00:00:00:00:00:08:00:0a:00:00:00:00:00')], 'header': {'length': 188, 'type': 34, 'flags': 0, 'sequence_number': 255, 'pid': 14840, 'error': None, 'target': 'localhost', 'stats': Stats(qsize=0, delta=0, delay=0)}, 'event': 'NL80211_CMD_NEW_INTERFACE'},)
Moving interface: {'address': 'c8:7f:54:8d:96:33', 'broadcast': 'ff:ff:ff:ff:ff:ff', 'ifname': 'wlxc87f548d9633', 'mtu': 1500, 'qdisc': 'noqueue', 'txqlen': 1000, 'operstate': 'DOWN', 'linkmode': 1, 'group': 0, 'promiscuity': 0, 'num_tx_queues': 1, 'num_rx_queues': 1, 'carrier': 0, 'carrier_changes': 1, 'proto_down': 0, 'gso_max_segs': 65535, 'gso_max_size': 65536, 'xdp': {'attrs': [('IFLA_XDP_ATTACHED', None)]}, 'carrier_up_count': 0, 'carrier_down_count': 1, 'min_mtu': 256, 'max_mtu': 2304, 'perm_address': 'c8:7f:54:8d:96:33', 'parent_dev_name': '3-3:1.0', 'parent_dev_bus_name': 'usb', 'index': 10, 'flags': 4099, 'ipdb_scope': 'system', 'ipdb_priority': 0, 'vlans': (), 'ipaddr': (), 'ports': (), 'family': 0, 'ifi_type': 1, 'state': 'up', 'neighbours': ()} to: 548d963
fail <class 'NoneType'>
fail <class 'NoneType'>
</code></pre>
<p>Updated code with suggested answer:</p>
<pre><code>#!/usr/bin/python3
# Configure logging
import logging
logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO, format="%(asctime)s:%(levelname)s:%(message)s")
# Inspect networking interfaces
import netifaces
# Set up networking namespaces
from pyroute2 import NetNS, NSPopen, netns, IPDB, IPRoute
# Get a list of all the network interfaces
interfaces = netifaces.interfaces()
# Managing network interfaces
with IPDB() as ipdb:
    # Loop over all the interfaces and log their details
    for iface in interfaces:
        # Determine interface information
        iface_details = netifaces.ifaddresses(iface)
        if iface.find("wlxc") >= 0:
            logger.info(f"Found USB Wi-Fi interface: {iface} with details: {iface_details}")
            # Attempt to remove a networking namespace by the same name
            iface_truncate = iface[-4:]
            net_namespace = f"{iface_truncate}-ns"
            try:
                netns.remove(net_namespace)
            except Exception as err:
                logger.warning(f"Namespace does not exist and we could not remove it: {net_namespace} ...")
            # Create a new network namespace or identify an existing one
            logger.info(f"Trying to create network namespace: {net_namespace} ...")
            netns.create(net_namespace)
            # Get the interface index
            with IPRoute() as ip:
                index = ip.link_lookup(ifname=iface)[0]
            logger.info(f"Index of USB WiFi is: {index} ...")
            logger.info(f"Interfaces: {ipdb.interfaces[iface]} ...")
            # Move the interface to the new network namespace
            with ipdb.interfaces[index] as interf:
                interf.net_ns_fd = net_namespace
            logger.info(f"Moved {iface} to {net_namespace}")
    logger.info("Done moving interface ...")
    # Close IPDB
    ipdb.release()
</code></pre>
<p>Traceback with latest code:</p>
<pre><code>2023-11-13 21:05:36,676:WARNING:Deprecation warning https://docs.pyroute2.org/ipdb_toc.html
2023-11-13 21:05:36,676:WARNING:To remove this DeprecationWarning exception, start IPDB(deprecation_warning=False, ...)
2023-11-13 21:05:36,678:INFO:Found USB Wi-Fi interface: wlxc87f548d9633 with details: {17: [{'addr': 'c8:7f:54:8d:96:33', 'broadcast': 'ff:ff:ff:ff:ff:ff'}]}
2023-11-13 21:05:36,680:INFO:Trying to create network namespace: 9633-ns ...
2023-11-13 21:05:36,686:INFO:Index of USB WiFi is: 10 ...
2023-11-13 21:05:36,700:INFO:Interfaces: {'address': 'c8:7f:54:8d:96:33', 'broadcast': 'ff:ff:ff:ff:ff:ff', 'ifname': 'wlxc87f548d9633', 'mtu': 1500, 'qdisc': 'noqueue', 'txqlen': 1000, 'operstate': 'DOWN', 'linkmode': 1, 'group': 0, 'promiscuity': 0, 'num_tx_queues': 1, 'num_rx_queues': 1, 'carrier': 0, 'carrier_changes': 1, 'proto_down': 0, 'gso_max_segs': 65535, 'gso_max_size': 65536, 'xdp': {'attrs': [('IFLA_XDP_ATTACHED', None)]}, 'carrier_up_count': 0, 'carrier_down_count': 1, 'min_mtu': 256, 'max_mtu': 2304, 'perm_address': 'c8:7f:54:8d:96:33', 'parent_dev_name': '3-3:1.0', 'parent_dev_bus_name': 'usb', 'index': 10, 'flags': 4099, 'ipdb_scope': 'system', 'ipdb_priority': 0, 'vlans': (), 'ipaddr': (), 'ports': (), 'family': 0, 'ifi_type': 1, 'state': 'up', 'neighbours': ()} ...
2023-11-13 21:05:36,700:INFO:Moved wlxc87f548d9633 to 9633-ns
fail <class 'NoneType'>
fail <class 'NoneType'>
Traceback (most recent call last):
File "/home/user/Desktop/network-stress-test/network-stress-test.py", line 52, in <module>
with ipdb.interfaces[index] as interf:
File "/usr/local/lib/python3.10/dist-packages/pyroute2/ipdb/transactional.py", line 206, in __exit__
self.commit()
File "/usr/local/lib/python3.10/dist-packages/pyroute2/ipdb/interfaces.py", line 1136, in commit
raise error
File "/usr/local/lib/python3.10/dist-packages/pyroute2/ipdb/interfaces.py", line 1036, in commit
run(nl.link, 'update', **request)
File "/usr/local/lib/python3.10/dist-packages/pyroute2/ipdb/interfaces.py", line 515, in _run
raise error
File "/usr/local/lib/python3.10/dist-packages/pyroute2/ipdb/interfaces.py", line 510, in _run
return cmd(*argv, **kwarg)
File "/usr/local/lib/python3.10/dist-packages/pyroute2/iproute/linux.py", line 1672, in link
ret = self.nlm_request(msg, msg_type=msg_type, msg_flags=msg_flags)
File "/usr/local/lib/python3.10/dist-packages/pyroute2/netlink/nlsocket.py", line 870, in nlm_request
return tuple(self._genlm_request(*argv, **kwarg))
File "/usr/local/lib/python3.10/dist-packages/pyroute2/netlink/nlsocket.py", line 1214, in nlm_request
for msg in self.get(
File "/usr/local/lib/python3.10/dist-packages/pyroute2/netlink/nlsocket.py", line 873, in get
return tuple(self._genlm_get(*argv, **kwarg))
File "/usr/local/lib/python3.10/dist-packages/pyroute2/netlink/nlsocket.py", line 550, in get
raise msg['header']['error']
pyroute2.netlink.exceptions.NetlinkError: (22, 'Invalid argument')
</code></pre>
| <python><python-3.x><linux><namespaces><pyroute2> | 2023-11-10 22:56:24 | 0 | 368 | PhilBot |
77,463,187 | 61,624 | Show the stack trace from a stuck v2 Python.exe application in Windows | <p>I have the same question as <a href="https://stackoverflow.com/questions/132058/showing-the-stack-trace-from-a-running-python-application">Showing the stack trace from a running Python application</a> with two differences: I'm looking for answers that work on Windows 10 and Python v2.</p>
<blockquote>
<p>I have this Python application that gets stuck from time to time and I
can't find out where.</p>
<p>Is there any way to signal Python[.exe] to show you the exact
code that's running?</p>
<p>Some kind of on-the-fly stacktrace?</p>
</blockquote>
<p>I run it as a <code>.bat</code> file so I can see its log, but when it locks up like this, nothing prints.</p>
<p>I <em>have</em> access to the source code, but I'm unfamiliar with the project at that level; ideally, I wouldn't need to edit the source code to get a stack trace, but if that's the only way, I'm willing to do it. I'm trying to gather enough information to write up a useful bug report for the maintainers.</p>
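For reference, Python 3 ships a standard-library <code>faulthandler</code> module that dumps every thread's stack on demand, and a <code>faulthandler</code> backport with a similar interface exists on PyPI for Python 2 — on Windows the signal-based triggers are limited, but explicit dumps and the <code>dump_traceback_later</code> watchdog still work (details worth verifying against the backport's docs). A minimal Python 3 sketch:

```python
import faulthandler
import os
import tempfile

# dump the current stack of every thread into a log file on demand
with tempfile.NamedTemporaryFile(mode="w", suffix=".log", delete=False) as f:
    faulthandler.dump_traceback(file=f)
    log_path = f.name

with open(log_path) as f:
    dump = f.read()
os.remove(log_path)
```

Calling `faulthandler.dump_traceback_later(30, file=...)` near startup arms a watchdog that writes all stacks if the process is still running 30 seconds later, which is handy for hangs where nothing prints.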
| <python><windows><python-2.7> | 2023-11-10 22:46:05 | 1 | 68,373 | Daniel Kaplan |
77,463,153 | 6,396,569 | Why does Tkinter lag only the first time a listbox is populated with a large number of lines? | <p>My application, written for Python 3.12.0, loads a large number (around 6,000+ lines) of text lines into a Tkinter Listbox like this:</p>
<pre><code> self.listbox.delete(0,tk.END)
self.listbox.insert(tk.END,*filtered_files)
</code></pre>
<p>Where <code>filtered_files</code> is a list of file-name strings. I heard this was more performant than inserting the filenames one by one, so I just feed a big list into the method.</p>
<p>However, when this operation is run (I confirmed via profiling that this is indeed the operation), the first time, it takes 1-2 seconds and blocks the UI. However, subsequent times this and other similar operations are run by the program, they appear to occur instantaneously in the UI, even when the list data is entirely different. Why is this?</p>
| <python><python-3.x><tkinter><listbox> | 2023-11-10 22:33:10 | 0 | 2,567 | the_endian |
77,463,100 | 4,109,398 | formula for nth derivative in sympy | <p>I have been playing with <strong>sympy</strong> for a while. I want sympy to derive a formula for the nth derivative of a specific function.
I tried:</p>
<pre class="lang-py prettyprint-override"><code>import sympy as sp
x = sp.symbols('x')
n = sp.Symbol('n',integer=True)
f = x**n
dnf = f.diff((x,n)) # n times derivative
dnf
# Derivative(x**n, (x, n))
# apply do it
dnf.doit() # still Derivative(x**n, (x, n))
</code></pre>
<p>Is it possible to get <code>sp.factorial(n)</code> as it should be?</p>
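As the question shows, sympy leaves <code>Derivative(x**n, (x, n))</code> unevaluated in this setup rather than simplifying the symbolic-order derivative. The identity itself — that the nth derivative of <code>x**n</code> is <code>n!</code> — is easy to confirm for concrete <code>n</code> with plain coefficient lists, no sympy needed:

```python
from math import factorial

def diff_poly(coeffs):
    """Differentiate a polynomial given as [c0, c1, c2, ...] (c_k * x**k)."""
    return [k * c for k, c in enumerate(coeffs)][1:] or [0]

def nth_derivative_of_x_pow_n(n):
    coeffs = [0] * n + [1]          # coefficient list of x**n
    for _ in range(n):
        coeffs = diff_poly(coeffs)
    return coeffs[0]                # the surviving constant term

results = [nth_derivative_of_x_pow_n(n) for n in range(6)]
```

Each differentiation shifts the coefficient list down and multiplies by the exponent, so after n steps only the constant n! remains.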
| <python><sympy> | 2023-11-10 22:17:25 | 1 | 412 | lukas kiss |
77,462,887 | 3,611,164 | Order plotly figure by CategoricalDtype order | <p>The goal is to visualize boxplots for each group in order. Group is of <code>pd.CategoricalDtype</code> and ordered.
I do not manage to get plotly to respect the order. Instead, it only sorts alphabetically.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
import plotly.express as px
df = pd.DataFrame({'group': list('CAACCBAA'), 'values': np.random.random(8)})
cat_ordered = pd.CategoricalDtype(categories=['B', 'C', 'A'], ordered=True)
df['group'] = df['group'].astype(cat_ordered)
fig = px.box(df, x='group', y='values',
category_orders={'group': list(df.group.cat.categories)})
fig.update_xaxes(categoryorder='category ascending')
fig.show()
</code></pre>
<p>The expected order would be B - C - A.
<a href="https://i.sstatic.net/h1bMV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h1bMV.png" alt="px.box output" /></a></p>
<p>I believe plotly does not take over the order from the pandas dataframe, but manually setting it does not help either.</p>
<p><code>plotly==5.17.0</code></p>
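Worth noting: <code>fig.update_xaxes(categoryorder='category ascending')</code> explicitly requests an alphabetical sort and may be overriding the <code>category_orders</code> argument, so dropping that line — or using <code>categoryorder='array'</code> together with <code>categoryarray=['B', 'C', 'A']</code> — is the usual fix (parameter names worth confirming against the plotly axis docs). The underlying idea of ordering by an explicit list rather than alphabetically, in plain Python:

```python
groups = ["C", "A", "A", "C", "C", "B", "A", "A"]
category_order = ["B", "C", "A"]   # the ordered categories from the dtype

# order by position in the explicit list instead of alphabetically
rank = {cat: i for i, cat in enumerate(category_order)}
ordered_unique = sorted(set(groups), key=rank.__getitem__)
```

This is effectively what `category_orders` feeds to the axis, which is why a later alphabetical override hides it.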
| <python><pandas><plotly><categorical-data> | 2023-11-10 21:18:46 | 1 | 366 | Fabitosh |
77,462,744 | 6,458,245 | Scikit Kmeans tutorial, n_runs and n_init different purpose? | <p><a href="https://scikit-learn.org/stable/auto_examples/text/plot_document_clustering.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/auto_examples/text/plot_document_clustering.html</a></p>
<p>In the above documentation, we are running KMeans. The KMeans class has an n_init parameter like so:</p>
<pre><code>kmeans = KMeans(
n_clusters=true_k,
max_iter=100,
n_init=1,
)
</code></pre>
<p>Additionally, the fit_and_evaluation function from the docs uses an n_runs parameter for the number of times to rerun kmeans.</p>
<pre><code>for seed in range(n_runs):
    km.set_params(random_state=seed)
    t0 = time()
    km.fit(X)
    train_times.append(time() - t0)
    scores["Homogeneity"].append(metrics.homogeneity_score(labels, km.labels_))
    scores["Completeness"].append(metrics.completeness_score(labels, km.labels_))
    scores["V-measure"].append(metrics.v_measure_score(labels, km.labels_))
    scores["Adjusted Rand-Index"].append(
        metrics.adjusted_rand_score(labels, km.labels_)
    )
    scores["Silhouette Coefficient"].append(
        metrics.silhouette_score(X, km.labels_, sample_size=2000)
    )
</code></pre>
<p>Why are both n_runs and n_init needed? both parameters just restart the process with a different initialization</p>
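They restart different things. `n_init` re-seeds the centroid initialization inside one `fit` and silently keeps only the restart with the lowest inertia, so a single model comes out; the tutorial's `n_runs` loop refits the whole estimator under different `random_state` values to measure how much the evaluation scores vary between fits. A toy sketch of the distinction, with a random stand-in for inertia instead of real clustering:

```python
import random

def fake_fit(seed):
    """Stand-in for one k-means run: returns the inertia it converged to."""
    return random.Random(seed).uniform(10.0, 20.0)

def kmeans_fit(random_state, n_init):
    # n_init: several initializations inside ONE fit; only the best
    # (lowest-inertia) restart survives, so a single model comes out
    return min(fake_fit(random_state * 100 + i) for i in range(n_init))

# n_runs: refit the whole estimator per seed to see the spread of outcomes
n_runs = 5
inertias = [kmeans_fit(random_state=seed, n_init=10) for seed in range(n_runs)]
single_init = [kmeans_fit(random_state=seed, n_init=1) for seed in range(n_runs)]
```

With `n_init=10` each fit can only do as well or better than the corresponding `n_init=1` fit, while the `n_runs` loop reveals the variability that `n_init` optimizes away within a single fit.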
| <python><machine-learning><scikit-learn> | 2023-11-10 20:40:45 | 0 | 2,356 | JobHunter69 |
77,462,722 | 10,461,632 | Passing **kwargs to function results in `Unexpected keyword argument` in flask app, but not test script | <p>I'm getting an unexpected keyword argument in my flask app. When I execute the same functions in a test python file, I don't get the <code>TypeError</code>.</p>
<p>In my flask app, I have a worker class and a function:</p>
<pre><code>import concurrent.futures as cf
class Worker:
    """Class for multiprocessing."""

    def __init__(self, workers=1):
        self.workers = workers

    def run(self, fn, files, **kwargs):
        print(kwargs)
        num = len(files)
        with cf.ProcessPoolExecutor(max_workers=self.workers) as executor:
            results = [executor.submit(fn, f, **kwargs) for f in files]
            for i, f in enumerate(cf.as_completed(results), start=1):
                print(f'({i}/{num}): {f.result()}')
</code></pre>
<pre><code>def create_archive(folder, extension='.tar.gz', format='PAX', remove_directory=False):
    # do some stuff
</code></pre>
<p>I execute the <code>run</code> method <code>Worker</code> like this in one of the routes:</p>
<pre><code>worker = Worker(workers=5)
worker.run(
    create_archive,
    valid_folders,
    extension=request.json['extension'],
    format=request.json['format'],
    remove_directory=str2bool(request.json['removeDir'])
)
</code></pre>
<p>and get this error message:</p>
<pre><code>TypeError: create_archive() got an unexpected keyword argument 'extension'
</code></pre>
<p>The <code>print</code> statement in the <code>run</code> methods produces this output:</p>
<pre><code>{'extension': '.tar.gz', 'format': 'pax', 'remove_directory': False}
</code></pre>
<p>Now, I created a separate <code>test.py</code> script to try and replicate the issue. I get the same output from the print statement, and the code executes properly. Here's the <code>test.py</code>:</p>
<pre><code>import concurrent.futures as cf

class Worker:
    """Class for multiprocessing."""

    def __init__(self, workers=1):
        self.workers = workers

    def run(self, fn, files, **kwargs):
        print(kwargs)
        num = len(files)
        with cf.ProcessPoolExecutor(max_workers=self.workers) as executor:
            results = [executor.submit(fn, f, **kwargs) for f in files]
            for i, f in enumerate(cf.as_completed(results), start=1):
                print(f'({i}/{num}): {f.result()}')

def create_archive(folder, extension='.tar.gz', format='pax', remove_directory=False):
    print(folder, extension, format, remove_directory)
    return f'{folder} complete'

if __name__ == "__main__":
    worker = Worker(workers=5)
    worker.run(
        create_archive,
        ['folder 1', 'folder 2', 'folder 3'],
        extension='.tar',
        format='pax',
        remove_directory=False
    )
</code></pre>
<p>I've been looking at this for a long time and I don't see what the issue is. Any ideas as to why I'm getting this error in my flask app but not the test script?</p>
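One thing worth ruling out: in <em>app.py</em> the route handler is itself named <code>create_archive</code>, which shadows the <code>create_archive</code> imported from <code>api.archives</code> — so <code>worker.run</code> may be receiving the zero-argument view function rather than the archiver. That shadowing reproduces the exact error:

```python
def create_archive(folder, extension='.tar.gz'):
    """The worker function, as imported from api.archives."""
    return folder, extension

def create_archive():
    """A route handler reusing the name, shadowing the import."""
    return "view"

fn = create_archive   # this is now the zero-argument view function
error = None
try:
    fn("some_folder", extension=".tar")   # what worker.run effectively does
except TypeError as exc:
    error = str(exc)
```

The test script doesn't fail because it has only one `create_archive`; renaming either the route function or the import alias would remove the collision.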
<p>Full error message:</p>
<pre><code>concurrent.futures.process._RemoteTraceback:
"""
Traceback (most recent call last):
File "/Users/useridhere/miniconda3/lib/python3.11/concurrent/futures/process.py", line 256, in _process_worker
r = call_item.fn(*call_item.args, **call_item.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: create_archive() got an unexpected keyword argument 'extension'
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/useridhere/repos/myproject/venv/lib/python3.11/site-packages/flask/app.py", line 1478, in __call__
return self.wsgi_app(environ, start_response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/useridhere/repos/myproject/venv/lib/python3.11/site-packages/flask_socketio/__init__.py", line 43, in __call__
return super(_SocketIOMiddleware, self).__call__(environ,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/useridhere/repos/myproject/venv/lib/python3.11/site-packages/engineio/middleware.py", line 74, in __call__
return self.wsgi_app(environ, start_response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/useridhere/repos/myproject/venv/lib/python3.11/site-packages/flask/app.py", line 1458, in wsgi_app
response = self.handle_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/useridhere/repos/myproject/venv/lib/python3.11/site-packages/flask_cors/extension.py", line 176, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
^^^^^^^^^^^^^^^^^^^^
File "/Users/useridhere/repos/myproject/venv/lib/python3.11/site-packages/flask/app.py", line 1455, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/useridhere/repos/myproject/venv/lib/python3.11/site-packages/flask/app.py", line 869, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/useridhere/repos/myproject/venv/lib/python3.11/site-packages/flask_cors/extension.py", line 176, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
^^^^^^^^^^^^^^^^^^^^
File "/Users/useridhere/repos/myproject/venv/lib/python3.11/site-packages/flask/app.py", line 867, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/useridhere/repos/myproject/venv/lib/python3.11/site-packages/flask/app.py", line 852, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/useridhere/repos/myproject/app.py", line 93, in create_archive
worker.run(
File "/Users/useridhere/repos/myproject/api/utilities.py", line 13, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/useridhere/repos/myproject/api/worker.py", line 18, in run
print(f'({i}/{num}): {f.result()}')
^^^^^^^^^^^^^
File "/Users/useridhere/miniconda3/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/Users/useridhere/miniconda3/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
TypeError: create_archive() got an unexpected keyword argument 'extension'
</code></pre>
<p>Flask app:</p>
<p><em>app.py</em> (the <code>request.json</code> parameters are strings, the values aren't important because of the error message I'm getting)</p>
<pre><code>import os
from pathlib import Path
from flask import jsonify, request
from entry import app, app_config, socketio
from api.worker import Worker
from api.utilities import str2bool
from api.archives import create_archive
@socketio.on('create-logger', namespace='/archive')
@socketio.on('create-progress', namespace='/archive')
@app.route('/archive/create', methods=['POST'])
def create_archive():
if not request.json['items']:
return jsonify({
'status': 500,
'message': 'No folders were provided.',
'invalid': []
})
# Confirm items are folders and not files
invalid_folders = []
for folder in request.json['items']:
folder_path = folder['path']
if not os.path.isdir(folder_path):
invalid_folders.append(folder_path)
if invalid_folders:
return jsonify({
'status': 500,
'message': 'All items provided must be folders. Open the developer tools to see a list of the invalid items.',
'invalid': invalid_folders
})
folders = [folder['path'] for folder in request.json['items']]
valid_folders = []
existing_archives = []
for folder in folders:
archive_filename = os.path.join(Path(folder).parent, Path(
folder).name + request.json['extension'])
if os.path.isfile(archive_filename):
existing_archives.append(archive_filename)
else:
valid_folders.append(folder)
socketio.emit('create-logger', 'Creating archive file(s)...',
namespace='/archive')
worker = Worker(workers=5)
worker.run(
create_archive,
valid_folders,
extension=request.json['extension'],
format=request.json['format'],
remove_directory=str2bool(request.json['removeDir'])
)
socketio.emit('create-logger', 'Archiving complete',
namespace='/archive')
return jsonify({
'status': 200,
'message': 'Archiving complete.',
'exists': existing_archives
})
if __name__ == "__main__":
socketio.run(app, **app_config)
</code></pre>
<p><em>entry.py</em></p>
<pre><code>import sys

from flask import Flask
from flask_cors import CORS
from flask_socketio import SocketIO

app = Flask(__name__)
app_config = {"port": sys.argv[1]}

# Developer mode uses app.py
if "app.py" in sys.argv[0]:
    # Update app config
    app_config["debug"] = True

    # CORS settings
    cors = CORS(
        app,
        resources={r"/*": {"origins": "http://localhost*"}},
    )

    # CORS headers
    app.config["CORS_HEADERS"] = "Content-Type"

socketio = SocketIO(app, cors_allowed_origins='*')
</code></pre>
<p><em>archives.py</em></p>
<pre><code>import sys
sys.path.insert(0, '..')

import os  # NOQA
import tarfile  # NOQA
import fnmatch  # NOQA
import shutil  # NOQA
from pathlib import Path  # NOQA

from entry import socketio  # NOQA
from api.utilities import winapi_path  # NOQA


def create_archive(folder, extension='.tar.gz', format='PAX', remove_directory=False):
    """Create an archive file.

    Parameters
    ----------
    folder : str, path-like object
        Name of the folder to archive.
    extension : str, '.tar' or '.tar.gz' (default)
        File extension for archive file.
    format : str, 'PAX' (default) or 'GNU'
        Archive file format.
    remove_directory : bool, default is False
        Option to remove folder after the archive file is created.

    Returns
    -------
    dict
        Info containing filename and the total files included in the
        archive file.
    """
    socketio.emit('create-logger',
                  f'Creating archive for {folder}', namespace='/archive')

    # Build filename
    filename = os.path.join(Path(folder).parent, Path(folder).name + extension)

    # Initialize info
    info = {'filename': filename, 'total_files': 0}

    # Determine mode
    mode = 'x:' if extension.lower() == '.tar' else 'x:gz'

    # Set format
    fmt = tarfile.GNU_FORMAT if format.lower() == 'gnu' else tarfile.PAX_FORMAT

    with tarfile.open(filename, mode=mode, format=fmt) as tar:
        for dirpath, _, filenames in os.walk(folder):
            if filenames:
                for file in filenames:
                    path = Path(os.path.join(dirpath, file))
                    # arcname provides an alternate name that is relative
                    # to the folder being archived
                    altname = path.relative_to(Path(folder).parent)
                    # Enable long path
                    path = winapi_path(str(path))
                    tar.add(path, recursive=False, arcname=altname)
                    info['total_files'] += 1

    if remove_directory:
        shutil.rmtree(folder)

    socketio.emit('create-progress', 1, namespace='/archive')

    return info
</code></pre>
| <python><flask> | 2023-11-10 20:34:26 | 2 | 788 | Simon1 |
77,462,367 | 10,721,627 | How to create a persistent volume claim in the local Minikube cluster? | <p>I have installed Docker Desktop, Minikube, and Kubernetes (K8s) command line tools on Windows 10 and I am using Git bash. I created a FastAPI web service that logs some information. This is the content of the <code>requirements.txt</code> file:</p>
<pre><code>fastapi[all]
</code></pre>
<p>And this is the content of the <code>app/main.py</code> file, containing the web service code:</p>
<pre class="lang-py prettyprint-override"><code>import logging

from fastapi import FastAPI

logging.basicConfig(filename="my.log",
                    level=logging.DEBUG,
                    format="%(asctime)s [%(levelname)s] %(message)s")

app = FastAPI()


@app.get("/")
def read_root():
    logging.info("Inside the `read_root` function.")
    return {"Hello": "World"}


@app.get("/item/{item_id}")
def read_item(item_id: int):
    item_message = "This is the item message."
    logging.info(f"The item_id is the following: {item_id}")
    return {"item_id": item_id, "item_message": item_message}
</code></pre>
<p>I have two simple APIs that log to the <code>my.log</code> file. I created a Docker image so that it can be deployed in the local Minikube cluster. The content of the <code>Dockerfile</code> is:</p>
<pre><code>FROM python:3.9-alpine
WORKDIR /code
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
COPY ./app /code/app
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
</code></pre>
<p>I specified the <code>python:3.9-alpine</code> as the base image because it has fewer security vulnerabilities than most of the <a href="https://hub.docker.com/_/python/tags" rel="nofollow noreferrer">Python docker images</a>. I run the web service on the 8000 port using the Uvicorn webserver.</p>
<p>Next, I started the local Minikube cluster and pushed the docker image into it:</p>
<pre class="lang-bash prettyprint-override"><code>minikube start
eval $(minikube docker-env)
docker build -t fastapi-image .
</code></pre>
<p>The <code>eval $(minikube docker-env)</code> command reuses the Docker daemon inside the Minikube cluster. This means that I can just build docker images using Minikube's Docker daemon which speeds up local experiments. When listing the images with the <code>minikube image ls</code> command, I can see my docker image.</p>
<p>After that, I created a simple K8s deployment with one replica set and a service listening on the 8000 port. The <code>deployment.yml</code> manifest file is the following:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastapi-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fastapi
  template:
    metadata:
      labels:
        app: fastapi
    spec:
      containers:
        - name: fastapi
          image: fastapi-image
          imagePullPolicy: Never
          ports:
            - containerPort: 8000
---
apiVersion: v1
kind: Service
metadata:
  name: fastapi-service
spec:
  selector:
    app: fastapi
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 8000
</code></pre>
<p>Note that I added the <code>imagePullPolicy: Never</code> configuration because the <code>fastapi-image</code> is already in the Minikube Docker registry so Kubernetes does not attempt to download the specified image.</p>
<p>I created the resource from the manifest file and started a service tunnel for the FastAPI service:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl apply -f deployment.yml
minikube service fastapi-service
</code></pre>
<p>It opens up a browser window that serves the application. If I visit the Swagger UI on the <a href="http://127.0.0.1:12345/docs" rel="nofollow noreferrer">http://127.0.0.1:12345/docs</a> site and try out the APIs, I will get entries in the log file. To check the logs, I entered the Pod instance with the <code>kubectl exec -it <fastapi-deployment-pod> -- sh</code> command.</p>
<p>However, if the Pod dies (e.g. running <code>kubectl delete -f deployment.yml</code>) then the log entries are lost. I would like to persist the log files even if the Pod was deleted. To store the log files, I would like to create a persistent volume (PV) and then request storage using a persistent volume claim (PVC).</p>
<p>References:</p>
<ul>
<li><a href="https://fastapi.tiangolo.com/" rel="nofollow noreferrer">https://fastapi.tiangolo.com/</a></li>
<li><a href="https://fastapi.tiangolo.com/deployment/docker/" rel="nofollow noreferrer">https://fastapi.tiangolo.com/deployment/docker/</a></li>
<li><a href="https://minikube.sigs.k8s.io/docs/handbook/pushing/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/pushing/</a></li>
<li><a href="https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy</a></li>
<li><a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/</a></li>
<li><a href="https://kubernetes.io/docs/tutorials/hello-minikube/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/hello-minikube/</a></li>
</ul>
| <python><docker><kubernetes><minikube> | 2023-11-10 19:16:45 | 1 | 2,482 | Péter Szilvási |
77,462,354 | 1,100,107 | Get the matrix-vector representation of a set of symbolic linear inequalities | <p>With Sympy, it is possible to define symbolic linear inequalities, like:</p>
<p><a href="https://i.sstatic.net/LJ628.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LJ628.png" alt="enter image description here" /></a></p>
<p>This system is equivalent to:</p>
<p><a href="https://i.sstatic.net/Wgq7k.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wgq7k.png" alt="enter image description here" /></a></p>
<p>And then it can be represented by a matrix <code>A</code> of coefficients and a vector <code>b</code> of upper bounds:</p>
<pre class="lang-py prettyprint-override"><code>A = np.array([
    [-1, 0, 0],  # -x
    [ 1, 0, 0],  # x
    [ 0,-1, 0],  # -y
    [ 1, 1, 0],  # x+y
    [ 0, 0,-1],  # -z
    [ 1, 1, 1]   # x+y+z
])
b = np.array([5, 4, 5, 3, 10, 6])
</code></pre>
<p>I'm wondering whether it is possible to directly get <code>A</code> and <code>b</code> from the symbolic linear inequalities, without manual gymnastics?</p>
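<p>For reference, a sketch using SymPy's <code>linear_eq_to_matrix</code>. The inequalities below are reconstructed from the <code>A</code> and <code>b</code> shown above (the original symbolic system is only available as an image); moving each inequality into <code>lhs - rhs</code> form lets the equation-side machinery extract the coefficients and bounds:</p>

```python
import numpy as np
import sympy as sp

x, y, z = sp.symbols("x y z")
# Reconstructed from the A and b given above (the system A @ v <= b).
ineqs = [-x <= 5, x <= 4, -y <= 5, x + y <= 3, -z <= 10, x + y + z <= 6]

# linear_eq_to_matrix treats each expression e as "e == 0" and returns (A, b)
# with A @ v == b, where b collects the negated constant terms; for
# e = lhs - rhs those constants are exactly the upper bounds of "lhs <= rhs".
A_sym, b_sym = sp.linear_eq_to_matrix([q.lhs - q.rhs for q in ineqs], [x, y, z])
A = np.array(A_sym.tolist(), dtype=float)
b = np.array(b_sym.tolist(), dtype=float).ravel()
```

<p>With the inequalities above, <code>A</code> and <code>b</code> come out identical to the hand-written arrays in the question.</p>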
| <python><sympy> | 2023-11-10 19:13:37 | 1 | 85,219 | Stéphane Laurent |
77,462,065 | 11,584,105 | ValueError: could not broadcast input array from shape (3024,3024,3) into shape (3024,3024) | <p>I have this code which works fine:</p>
<pre class="lang-py prettyprint-override"><code>import cv2
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from matplotlib import rcParams
import numpy as np
import os
from PIL import Image
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator, SamPredictor

# Set directories for generation images and edit images
base_image_dir = os.path.join("IMG_4297.png")
mask_dir = os.path.join("masks")
edit_image_dir = os.path.join("03_edits")

# Point to your downloaded SAM model
sam_model_filepath = "../segment-anything/segment_anything/sam_vit_h_4b8939.pth"
# sam_model_filepath = "./sam_vit_h_4b8939.pth"

# Initiate SAM model
sam = sam_model_registry["default"](checkpoint=sam_model_filepath)


# Function to display mask using matplotlib
def show_mask(mask, ax):
    color = np.array([30 / 255, 144 / 255, 255 / 255, 0.6])
    h, w = mask.shape[-2:]
    mask_image = mask.reshape(h, w, 1) * color.reshape(1, 1, -1)
    ax.imshow(mask_image)


# Function to display where we've "clicked"
def show_points(coords, labels, ax, marker_size=375):
    pos_points = coords[labels == 1]
    neg_points = coords[labels == 0]
    ax.scatter(
        pos_points[:, 0],
        pos_points[:, 1],
        color="green",
        marker="*",
        s=marker_size,
        edgecolor="white",
        linewidth=1.25,
    )
    ax.scatter(
        neg_points[:, 0],
        neg_points[:, 1],
        color="red",
        marker="*",
        s=marker_size,
        edgecolor="white",
        linewidth=1.25,
    )


# Load chosen image using opencv
image = cv2.imread("./IMG_4297.png")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Display our chosen image
plt.figure(figsize=(10, 10))
plt.imshow(image)
plt.axis("on")
plt.show()

# Set the pixel coordinates for our "click" to assign masks
input_point = np.array([[525, 325]])
input_label = np.array([1])

# Display the point we've clicked on
plt.figure(figsize=(10, 10))
plt.imshow(image)
show_points(input_point, input_label, plt.gca())
plt.axis("on")
plt.show()

# Initiate predictor with Segment Anything model
predictor = SamPredictor(sam)
predictor.set_image(image)

# Use the predictor to gather masks for the point we clicked
masks, scores, logits = predictor.predict(
    point_coords=input_point,
    point_labels=input_label,
    multimask_output=True,
)

# Check the shape - should be three masks of the same dimensions as our image
masks.shape

# Display the possible masks we can select along with their confidence
for i, (mask, score) in enumerate(zip(masks, scores)):
    plt.figure(figsize=(10, 10))
    plt.imshow(image)
    show_mask(mask, plt.gca())
    show_points(input_point, input_label, plt.gca())
    plt.title(f"Mask {i+1}, Score: {score:.3f}", fontsize=18)
    plt.axis("off")
    plt.show()

# Choose which mask you'd like to use
chosen_mask = masks[1]

# We'll now reverse the mask so that it is clear and everything else is white
chosen_mask = chosen_mask.astype("uint8")
chosen_mask[chosen_mask != 0] = 255
chosen_mask[chosen_mask == 0] = 1
chosen_mask[chosen_mask == 255] = 0
chosen_mask[chosen_mask == 1] = 255

# create a base blank mask
width = 1512
height = 1512
mask = Image.new("RGBA", (width, height), (0, 0, 0, 1))  # create an opaque image mask

# Convert mask back to pixels to add our mask replacing the third dimension
pix = np.array(mask)
pix[:, :, 3] = chosen_mask

# Convert pixels back to an RGBA image and display
new_mask = Image.fromarray(pix, "RGBA")
new_mask

# We'll save this mask for re-use for our edit
new_mask.save(os.path.join(mask_dir, "new_mask.png"))
</code></pre>
<p>But I am trying to use the second half with a slightly different program / AI language model:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from lang_sam.utils import draw_image
from PIL import Image
from lang_sam import LangSAM
from heic2png import HEIC2PNG

if __name__ == '__main__':
    heic_img = HEIC2PNG('/Users/Downloads/IMG_4316.heic', quality=70)  # Specify the quality of the converted image
    heic_img.save()  # The converted image will be saved as `test.png`

    model = LangSAM()
    image_pil = Image.open("/Users/Downloads/IMG_4316.png").convert("RGB")
    text_prompt = "wall"
    masks, boxes, phrases, logits = model.predict(image_pil, text_prompt)
    masks.shape
    labels = [f"{phrase} {logit:.2f}" for phrase, logit in zip(phrases, logits)]
    image_array = np.asarray(image_pil)
    image = draw_image(image_array, masks, boxes, labels)
    image = Image.fromarray(np.uint8(image)).convert("RGB")
    image.show()

    chosen_mask = np.array(image).astype("uint8")
    chosen_mask[chosen_mask != 0] = 255
    chosen_mask[chosen_mask == 0] = 1
    chosen_mask[chosen_mask == 255] = 0
    chosen_mask[chosen_mask == 1] = 255

    # create a base blank mask
    width = 3024
    height = 3024
    mask = Image.new("RGBA", (width, height), (0, 0, 0, 1))  # create an opaque image mask

    # Convert mask back to pixels to add our mask replacing the third dimension
    pix = np.array(mask)
    pix[:, :, 3] = chosen_mask

    # Convert pixels back to an RGBA image and display
    new_mask = Image.fromarray(pix, "RGBA")
    new_mask.show()
    new_mask.save()
</code></pre>
<p>I believe that the problem lies within the format of the converted image on this line:</p>
<pre class="lang-py prettyprint-override"><code>pix[:, :, 3] = chosen_mask
</code></pre>
<p>Is there a transformation or some operation I need to perform on <code>chosen_mask</code> to make to image work here?</p>
<p>The full error is:</p>
<pre><code>Traceback (most recent call last):
  File "/Users/Desktop/code/lang-segment-anything/app.py", line 112, in <module>
    pix[:, :, 2] = chosen_mask
    ~~~^^^^^^^^^
ValueError: could not broadcast input array from shape (3024,3024,3) into shape (3024,3024)
</code></pre>
| <python><numpy><python-imaging-library> | 2023-11-10 18:12:05 | 1 | 1,922 | Mr. Robot |
77,462,042 | 4,398,966 | Python how to cast tuple to list and apply dot notation | <p>Why does:</p>
<pre><code>x = (5,4,3)
y = list(x)
y.sort()
</code></pre>
<p>work, but</p>
<pre><code>x = (5,4,3)
y = list(x).sort()
</code></pre>
<p>not work?</p>
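<p>For reference, a small sketch of the two behaviours: <code>list.sort</code> sorts in place and returns <code>None</code>, while <code>sorted</code> returns a new list that can be used in an expression:</p>

```python
x = (5, 4, 3)

# list.sort() mutates the list and returns None, so the chained
# expression binds y to None rather than to the sorted list.
y = list(x).sort()
assert y is None

# sorted() returns a new sorted list, so it can be chained directly.
z = sorted(x)
assert z == [3, 4, 5]
```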
| <python> | 2023-11-10 18:07:36 | 3 | 15,782 | DCR |
77,461,972 | 754,136 | Multi-indexing with tuples | <p>I have a multi-dimensional <code>np.array</code>. I know the shape of the first N dimensions and of the last M dimensions. E.g.,</p>
<pre class="lang-py prettyprint-override"><code>>>> n = (3,4,5)
>>> m = (6,)
>>> a = np.ones(n + m)
>>> a.shape
(3, 4, 5, 6)
</code></pre>
<p>Using tuples as indices allows for quick indexing of the first N dimensions, like</p>
<pre class="lang-py prettyprint-override"><code>>>> i = (1,1,2)
>>> a[i].shape
(6,)
</code></pre>
<p>Using a list does not give me the result I need:</p>
<pre class="lang-py prettyprint-override"><code>>>> i = [1,1,2]
>>> a[i].shape
(3, 4, 5, 6)
</code></pre>
<p>But I am having trouble doing multi-indexing (both to retrieve / assign values). For example,</p>
<pre class="lang-py prettyprint-override"><code>>>> i = (1,1,2)
>>> j = (2,2,2)
</code></pre>
<p>I need to pass something like</p>
<pre class="lang-py prettyprint-override"><code>>>> a[[i, j]]
</code></pre>
<p>and get an output shape of <code>(2, 6)</code>.</p>
<p>Instead I get</p>
<pre class="lang-py prettyprint-override"><code>>>> a[[i, j]].shape
(2, 3, 4, 5, 6)
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code>>>> a[(i, j)].shape
(3, 5, 6)
</code></pre>
<p>I can always loop or change how I index things (like using <code>np.reshape</code> and <code>np.unravel_index</code>), but is there a more pythonic way to achieve what I need?</p>
<p><strong>EDIT</strong>
I'd need this for any number of indices, e.g.,</p>
<pre class="lang-py prettyprint-override"><code>>>> i = (1,1,2)
>>> j = (2,2,2)
>>> k = (0,0,0)
...
</code></pre>
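<p>For illustration, a sketch of one way to obtain the <code>(2, 6)</code>-shaped result described above: stack the index tuples into one array, then transpose it into one index array per leading dimension so advanced indexing selects one "row" of the trailing dimensions per tuple:</p>

```python
import numpy as np

n = (3, 4, 5)
m = (6,)
a = np.ones(n + m)

i = (1, 1, 2)
j = (2, 2, 2)
k = (0, 0, 0)

# Stack the index tuples into a (num_indices, N) array, then transpose into
# a tuple of N index arrays; advanced indexing picks one entry of the first
# N dimensions per row, keeping the trailing M dimensions intact.
idx = np.array([i, j, k])
out = a[tuple(idx.T)]
print(out.shape)  # (3, 6)
```

<p>The same pattern works for any number of index tuples, e.g. <code>a[tuple(np.array([i, j]).T)]</code> gives shape <code>(2, 6)</code>.</p>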
| <python><numpy><indexing><multi-index> | 2023-11-10 17:53:18 | 4 | 5,474 | Simon |
77,461,832 | 17,561,414 | Autoloader: pass the folder names in the data source path with file notification | <p>I'm using the Autoloader to load data incrementally and I have <strong>enabled the file notification services</strong>.</p>
<p>My problem is whether the logic I want to include while passing the data source path conflicts with the file notification logic or not.</p>
<p>Every day we receive the data in the lake as a new folder (the name is the UTC timestamp when it was loaded).
I want Autoloader to read all the folders except the latest loaded one.</p>
<p>To achieve that I wrote this code:</p>
<pre><code>data_source = f"abfss://{container_read}@{storage_read}.dfs.core.windows.net/{folders}/{cdm}/*.csv"
</code></pre>
<p>In the <a href="https://docs.databricks.com/en/ingestion/auto-loader/patterns.html" rel="nofollow noreferrer">documentation</a> it's written that I can pass a number of different values like this: <code>{ab,cd}</code></p>
<p>so my <code>folders</code> looks like this <code>{2023-10-09T12.31.31Z,2023-10-09T14.02.15Z}</code></p>
<p>I realized that I will always be passing the full list of folders to be read except the latest one, which is kind of in conflict with the file notification idea, as many of the folders that I will be passing through the <code>folders</code> variable will have already been read in the past.</p>
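<p>For illustration, a sketch of how such a brace glob could be built while dropping the newest folder (the folder names here are hypothetical; in practice they would come from listing the landing path):</p>

```python
# Hypothetical folder names; in practice these would come from listing
# the landing path in the lake.
folder_names = [
    "2023-10-09T12.31.31Z",
    "2023-10-09T14.02.15Z",
    "2023-10-09T15.45.00Z",
]

# UTC timestamps in this fixed-width format sort lexicographically,
# so the maximum element is the latest load; drop it from the glob.
to_read = sorted(folder_names)[:-1]
folders = "{" + ",".join(to_read) + "}"
print(folders)  # {2023-10-09T12.31.31Z,2023-10-09T14.02.15Z}
```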
<p>So could someone explain how I can tell Autoloader to always exclude the latest loaded folder? The reason I want to exclude the latest folder is that the CSV files in there are updated every second, so Autoloader fails when reading it.</p>
<p>EDIT:</p>
<p>I just found in the <a href="https://docs.databricks.com/en/ingestion/auto-loader/file-notification-mode.html#what-is-auto-loader-file-notification-mode" rel="nofollow noreferrer">documentation</a> that changing the path is not supported with Autoloader when file notification is enabled. So I'm not sure now if my above solution even works properly without losing any data to load.</p>
| <python><azure><apache-spark-sql><azure-databricks><databricks-autoloader> | 2023-11-10 17:28:47 | 1 | 735 | Greencolor |
77,461,757 | 610,569 | Chomping left, right string by whitespace to iterate regex matches | <p>The goal is to extract the matching "words" (bounded by <code>\b|$|\s</code>), given the difflib <code>SequenceMatcher.get_matching_blocks()</code> output, e.g. given:</p>
<blockquote>
<p>s1 = "HYC00 Schulrucksack Damen, Causal Travel Schultaschen 14 Zoll Laptop Rucksack für Mädchen im Teenageralter Leichter Rucksack Wasserabweisend Bookbag College Boys Men Work Daypack"</p>
</blockquote>
<blockquote>
<p>s2 = "HYC00 School Backpack Women, Causal Travel School Bags 14 Inch Laptop Backpack for Teenage Girls Lightweight Backpack Water-Repellent Bookbag College Boys Men Work Daypack"</p>
</blockquote>
<p>The expected matching blocks to extract are:</p>
<pre><code>['HYC00', 'Causal Travel', '14', 'Laptop', 'Bookbag College Boys Men Work Daypack']
</code></pre>
<p>The simple cases are when the matching blocks from the difflib are immediately bounded by <code>\b|$|\s</code>, e.g.</p>
<pre><code>import re
from difflib import SequenceMatcher

s1 = "HYC00 Schulrucksack Damen, Causal Travel Schultaschen 14 Zoll Laptop Rucksack für Mädchen im Teenageralter Leichter Rucksack Wasserabweisend Bookbag College Boys Men Work Daypack"
s2 = "HYC00 School Backpack Women, Causal Travel School Bags 14 Inch Laptop Backpack for Teenage Girls Lightweight Backpack Water-Repellent Bookbag College Boys Men Work Daypack"


def is_substring_a_phrase(substring, s1):
    if substring:
        # Check if matching substring is bounded by word boundary.
        match = re.findall(rf"\b{substring}(?=\s|$)", s1)
        if match:
            return match[0]


def matcher(s1, s2):
    x = SequenceMatcher(None, s1, s2)
    for m in x.get_matching_blocks():
        # Extract the substring.
        full_substring = s1[m.a:m.a+m.size].strip()
        match = is_substring_a_phrase(full_substring, s1)
        if match:
            yield match
            continue

matcher(s1, s2)
</code></pre>
<p>[out]:</p>
<pre><code>['14', 'Laptop', 'Bookbag College Boys Men Work Daypack']
</code></pre>
<p>Then to capture the <code>HYC00</code> and <code>Causal Travel</code>, the matching blocks are respectively <code>HYC00 Sch</code> and <code>men, Causual Travel</code>, so we'll have to do some "chomping" and remove the left, right or left and right most partial "words", i.e.</p>
<pre><code>def matcher(s1, s2):
    x = SequenceMatcher(None, s1, s2)
    for m in x.get_matching_blocks():
        # Extract the substring.
        full_substring = s1[m.a:m.a+m.size].strip()
        match = is_substring_a_phrase(full_substring, s1)
        if match:
            yield match
            continue

        # Extract the left chomp substring.
        left = " ".join(s1[m.a:m.a+m.size].strip().split()[1:])
        match = is_substring_a_phrase(left, s1)
        if match:
            yield match
            continue

        # Extract the right chomp substring.
        right = " ".join(s1[m.a:m.a+m.size].strip().split()[:-1])
        match = is_substring_a_phrase(right, s1)
        if match:
            yield match
            continue

        # Extract the left-and-right chomp substring.
        leftright = " ".join(s1[m.a:m.a+m.size].strip().split()[1:-1])
        match = is_substring_a_phrase(leftright, s1)
        if match:
            yield match
            continue

matcher(s1, s2)
</code></pre>
<p>[out]:</p>
<pre><code>['HYC00',
'Causal Travel',
'14',
'Laptop',
'Bookbag College Boys Men Work Daypack']
</code></pre>
<p>While the code snippet above works as expected, my questions in parts:</p>
<ul>
<li>is there some way to avoid the repeated code for the various chomp and multiple if-else to extract the matching blocks bounded by <code>\b|$|\s</code>?</li>
<li>is there a direct way to specify in <code>.get_matching_blocks()</code> to get only the parts bounded by <code>\b|$|\s</code>?</li>
<li>are there other ways of achieving the same objective without using <code>get_matching_blocks</code> in this messy manner?</li>
</ul>
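<p>On the first point, for reference, a sketch that folds the four chomp variants into a single loop over word slices (intended to behave like the snippet above, with <code>re.escape</code> added defensively; it tries the full block, then the left-, right- and both-side chomps, in that order):</p>

```python
import re
from difflib import SequenceMatcher

s1 = "HYC00 Schulrucksack Damen, Causal Travel Schultaschen 14 Zoll Laptop Rucksack für Mädchen im Teenageralter Leichter Rucksack Wasserabweisend Bookbag College Boys Men Work Daypack"
s2 = "HYC00 School Backpack Women, Causal Travel School Bags 14 Inch Laptop Backpack for Teenage Girls Lightweight Backpack Water-Repellent Bookbag College Boys Men Work Daypack"

def word_bounded(substring, s):
    # Return substring if it occurs in s with a word boundary on the left
    # and whitespace / end-of-string on the right, else None.
    if substring:
        found = re.findall(rf"\b{re.escape(substring)}(?=\s|$)", s)
        if found:
            return found[0]
    return None

def matcher(s1, s2):
    for m in SequenceMatcher(None, s1, s2).get_matching_blocks():
        words = s1[m.a:m.a + m.size].strip().split()
        # Full block, left chomp, right chomp, then both-side chomp.
        for lo, hi in ((0, len(words)), (1, len(words)),
                       (0, len(words) - 1), (1, len(words) - 1)):
            candidate = " ".join(words[lo:hi])
            found = word_bounded(candidate, s1)
            if found:
                yield found
                break
```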
| <python><string><substring><difflib> | 2023-11-10 17:14:40 | 1 | 123,325 | alvas |
77,461,668 | 2,297,965 | Add key to dictionary within list of dictionaries based on value of another dictionary key | <p>I have a dictionary within a list of dictionaries that looks like this:</p>
<pre><code>input = [{'Clause':'Text that describes clause 1', 'Source':'pdf_file', 'additional_info': {'State_Index':0, 'City_Index':0, 'Population':100}},
         {'Clause':'Text that describes clause 2', 'Source':'pdf_file', 'additional_info': {'State_Index':0, 'City_Index':1, 'Population':150}},
         {'Clause':'Text that describes clause 3', 'Source':'pdf_file', 'additional_info': {'State_Index':1, 'City_Index':0, 'Population':75}},
         {'Clause':'Text that describes clause 4', 'Source':'pdf_file', 'additional_info': {'State_Index':1, 'City_Index':1, 'Population':20}}]
</code></pre>
<p>Basically, I have text of clauses and each clause is specific to a city/state combo, detailed within the second dictionary. The first dictionary captures the clause, source, and the second additional_info dictionary. Within the additional_info dictionary we capture some more information. Separately, I have the name of each state and city corresponding to their index. What I'd like to do is simply add <code>'State_Name'</code> to the additional_info dictionary with the name that corresponds to the <code>'State_Index'</code>. The index looks like this:</p>
<pre><code>| State_Index | State_Name |
| ----------- | -------------- |
| 0 | AL |
| 1 | AK |
| . | . |
| . | . |
| . | . |
| 49 | WY |
</code></pre>
<p>So the additional_info dictionary would then include whichever state name corresponds to the index listed in the dictionary.</p>
<p>I am having some trouble figuring out where to even begin with this. Any advice would be greatly appreciated.</p>
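<p>For illustration, a minimal sketch of the lookup being described (the index table and the trimmed list of records below are hypothetical stand-ins for the data above):</p>

```python
# Hypothetical State_Index -> State_Name mapping, standing in for the table.
state_names = {0: "AL", 1: "AK", 49: "WY"}

records = [
    {"Clause": "Text that describes clause 1", "Source": "pdf_file",
     "additional_info": {"State_Index": 0, "City_Index": 0, "Population": 100}},
    {"Clause": "Text that describes clause 3", "Source": "pdf_file",
     "additional_info": {"State_Index": 1, "City_Index": 0, "Population": 75}},
]

# Mutate each nested additional_info dict in place, looking the name up
# by its State_Index.
for record in records:
    info = record["additional_info"]
    info["State_Name"] = state_names[info["State_Index"]]
```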
| <python><dictionary> | 2023-11-10 16:57:31 | 1 | 484 | coderX |
77,461,573 | 6,077,457 | Attribute error when using contextvars in Python 3.10 | <p>When running the code below I got an</p>
<blockquote>
<p>AttributeError: <code>__enter__</code></p>
</blockquote>
<p>I looked around with no success.</p>
<pre><code>import contextvars

# Create a context variable
user_context = contextvars.ContextVar("user")

# Set the variable's value in the current context
with user_context.set("utilisateur123"):
    # This value is accessible inside this block
    print(user_context.get())  # Prints "utilisateur123"

# Outside the block, the value goes back to the default (None)
print(user_context.get())  # Prints "None"
</code></pre>
| <python><python-contextvars> | 2023-11-10 16:38:40 | 1 | 1,085 | Herve Meftah |
77,461,452 | 12,415,855 | Getting no result when parsing a url with request and bs4? | <p>I try to collect some information from a site using the attached code:</p>
<pre><code>import time
from bs4 import BeautifulSoup
import requests

if __name__ == '__main__':
    url = "https://www.bobvila.com/articles/post-sitemap12.xml/"
    page = requests.get(url)
    time.sleep(3)
    soup = BeautifulSoup(page.content, "lxml")
    worker = soup.find("table", {"id": "sitemap"})
    print(worker)
    worker = worker.find("tbody")
    findElems = worker.find_all("tr")[countElems*(-1):]
</code></pre>
<p>But I only get the error below; it's not finding the table.</p>
<pre><code>$ python temp1.py
G:\DEV\.venv\xlwings\lib\site-packages\bs4\builder\__init__.py:545: XMLParsedAsHTMLWarning: It looks like you're parsing an XML document using an HTML parser. If this really is an HTML document (maybe it's XHTML?), you can ignore or filter
this warning. If it's XML, you should know that using an XML parser will be more reliable. To parse this document as XML, make sure you have the lxml package installed, and pass the keyword argument `features="xml"` into the BeautifulSoup constructor.
warnings.warn(
None
Traceback (most recent call last):
File "G:\DEV\Fiverr\ORDER\ivokaza\temp1.py", line 12, in <module>
worker = worker.find("tbody")
AttributeError: 'NoneType' object has no attribute 'find'
(xlwings)
</code></pre>
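<p>For reference: the URL returns raw XML (as the warning hints), and the <code>table id="sitemap"</code> seen in a browser is most likely rendered client-side by a stylesheet, so it never exists in the downloaded document. A self-contained sketch of parsing the sitemap's <code>&lt;loc&gt;</code> entries with the standard library instead, using a hypothetical inline sitemap in place of the real response body:</p>

```python
import xml.etree.ElementTree as ET

# Hypothetical inline sitemap standing in for requests.get(url).content.
xml_text = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.example.com/page-1/</loc></url>
  <url><loc>https://www.example.com/page-2/</loc></url>
</urlset>"""

# Sitemap elements live in the sitemaps.org namespace; register it for findall.
ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(xml_text)
locs = [loc.text for loc in root.findall(".//sm:loc", ns)]
print(locs)
```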
| <python><python-requests> | 2023-11-10 16:20:20 | 2 | 1,515 | Rapid1898 |
77,461,362 | 5,213,015 | Django Rest - Redirect Password Form? | <p>I recently upgraded to Django 4.0 and upgraded <code>django-allauth</code> alongside <code>django-rest-auth</code>.</p>
<p>When a user fills out the password reset form under <code>http://localhost:8000/api/dj-rest-auth/password/reset/</code> they get a link in the console that goes to:
<code>http://localhost:8000/users/reset/2n/bxggn2-05019c81f9d6dfda6a10b7cfec09e839/</code></p>
<p><a href="https://i.sstatic.net/cXyST.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cXyST.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/yvqbD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yvqbD.png" alt="enter image description here" /></a></p>
<p>The link provided gets me to this old form:</p>
<p><a href="https://i.sstatic.net/bKoEg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bKoEg.png" alt="enter image description here" /></a></p>
<p>How can I get this message in the console to point to <code>accounts/password/reset/key/1-set-password/</code>?</p>
<p>That form looks like this:</p>
<p><a href="https://i.sstatic.net/LX4Da.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LX4Da.png" alt="enter image description here" /></a></p>
<p>That’s where my <code>allauth</code> form lives and I’m not fully sure if this is the correct approach.</p>
<p>Below is some of my settings and urls.</p>
<p>Any help is gladly appreciated.</p>
<p>Thanks!</p>
<p><strong>settings.py</strong></p>
<pre><code>INSTALLED_APPS = [
    # Local,
    'api.apps.ApiConfig',
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.humanize',
    'django.contrib.sessions',
    'django.contrib.messages',
    'whitenoise.runserver_nostatic',
    'django.contrib.staticfiles',
    'django.contrib.sites',
    'users',

    # 3rd Party
    'rest_framework',
    'rest_framework.authtoken',
    'allauth',
    'allauth.account',
    'allauth.socialaccount',
    'dj_rest_auth',
    'dj_rest_auth.registration',
    'corsheaders',
    'drf_yasg',
]

AUTHENTICATION_BACKENDS = [
    'django.contrib.auth.backends.ModelBackend',
    'allauth.account.auth_backends.AuthenticationBackend',
]

REST_FRAMEWORK = {
    'DEFAULT_PERMISSION_CLASSES': [
        'rest_framework.permissions.IsAdminUser',
        'rest_framework.permissions.IsAuthenticated',
    ],
    'DEFAULT_AUTHENTICATION_CLASSES': [
        'rest_framework.authentication.SessionAuthentication',
        'rest_framework.authentication.TokenAuthentication',
    ],
    'DEFAULT_SCHEMA_CLASS': 'rest_framework.schemas.coreapi.AutoSchema'
}
</code></pre>
<p><strong>urls.py</strong></p>
<pre><code>from django.urls import path, include
from allauth.account.views import ConfirmEmailView, EmailVerificationSentView

urlpatterns = [
    path('accounts/', include('allauth.urls')),
    path('', include('users.urls')),
    path('api/', include('api.urls')),
    path('api-auth/', include('rest_framework.urls')),
    path('api/dj-rest-auth/registration/account-confirm-email/<str:key>/',
         ConfirmEmailView.as_view()),  # Needs to be defined before the registration path
    path('api/dj-rest-auth/', include('dj_rest_auth.urls')),
    path('api/dj-rest-auth/registration/', include('dj_rest_auth.registration.urls')),
    path('api/rest-auth/registration/account-confirm-email/',
         EmailVerificationSentView.as_view(), name='account_email_verification_sent'),
    path('', include('django.contrib.auth.urls')),
    path('users/', include('users.urls')),
    path('users/', include('django.contrib.auth.urls')),
]
</code></pre>
| <python><django><django-rest-framework><django-serializer><django-allauth> | 2023-11-10 16:03:25 | 1 | 419 | spidey677 |
77,461,324 | 3,737,186 | Combination of logging.config.dictConfig(logger_config) and logging.getLogger(__name__) not working | <p>To have all logs below level ERROR written to STDOUT I have tried to use a logger configuration from this <a href="https://stackoverflow.com/a/53257669/3737186">answer</a>:</p>
<pre><code>class _ExcludeErrorsFilter(logging.Filter):
    def filter(self, record):
        """Only lets through log messages with log level below ERROR."""
        return record.levelno < logging.ERROR


logger_config = {
    'version': 1,
    'filters': {
        'exclude_errors': {
            '()': _ExcludeErrorsFilter
        }
    },
    'formatters': {
        # Modify log message format here or replace with your custom formatter class
        'default': {
            'format': '%(asctime)s.%(msecs)03dZ level=%(levelname)s message=\"%(message)s\" module=%(module)s name=%(name)s processName=%(processName)s',
            'datefmt': '%Y-%m-%d %H:%M:%S'
        }
    },
    'handlers': {
        'console_stderr': {
            # Sends log messages with log level ERROR or higher to stderr
            'class': 'logging.StreamHandler',
            'level': 'ERROR',
            'formatter': 'default',
            'stream': sys.stderr
        },
        'console_stdout': {
            # Sends log messages with log level lower than ERROR to stdout
            'class': 'logging.StreamHandler',
            'level': LOG_LEVEL,
            'formatter': 'default',
            'filters': ['exclude_errors'],
            'stream': sys.stdout
        }
    },
    'root': {
        # In general, this should be kept at 'NOTSET'.
        # Otherwise it would interfere with the log levels set for each handler.
        'level': 'NOTSET',
        'handlers': ['console_stderr', 'console_stdout']
    },
}

logging.config.dictConfig(logger_config)
</code></pre>
<p>But now no logs get written when I try to create a logger AFTER this configuration with</p>
<pre><code>logger = logging.getLogger(__name__)
</code></pre>
<p>I stress AFTER because I am aware of the <code>disable_existing_loggers</code> argument, so this can be ruled out as a cause.</p>
<p>What works is:</p>
<pre><code>logger = logging.getLogger("root")
</code></pre>
<p>Now my question is:</p>
<p>How can I configure my loggers so that I can create a separate logger per Python file with the configuration I want (so that it applies not just to <em>root</em>)?</p>
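<p>For reference, here is a minimal, self-contained sketch (the logger name, handler name, and format string are illustrative, and an <code>io.StringIO</code> stands in for <code>sys.stdout</code> so the output can be inspected). It relies on named loggers propagating records up to the root handlers, and sets <code>disable_existing_loggers</code> to <code>False</code>, which matters for any logger that already exists when <code>dictConfig</code> runs:</p>

```python
import io
import logging
import logging.config

stdout_buf = io.StringIO()  # stand-in for sys.stdout so the result is checkable

config = {
    "version": 1,
    # Without this, dictConfig disables any logger that already exists when it
    # runs (e.g. a module-level logging.getLogger(__name__) created at import time).
    "disable_existing_loggers": False,
    "formatters": {
        "default": {"format": "%(name)s %(levelname)s %(message)s"},
    },
    "handlers": {
        "console_stdout": {
            "class": "logging.StreamHandler",
            "level": "DEBUG",
            "formatter": "default",
            "stream": stdout_buf,  # an actual stream object is accepted here
        },
    },
    # Handlers live only on root; named loggers propagate records up to it.
    "root": {"level": "NOTSET", "handlers": ["console_stdout"]},
}

logging.config.dictConfig(config)

log = logging.getLogger("myapp.module")  # per-file logger, needs no handlers of its own
log.info("hello")
```

<p>With this in place, <code>logging.getLogger(__name__)</code> in each file emits through the root handlers via propagation, without per-file handler configuration.</p>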
| <python><logging> | 2023-11-10 15:58:29 | 1 | 3,508 | Sjoerd222888 |
77,461,319 | 5,423,080 | Refactoring if statement in nested for loops | <p>I have source code like this one:</p>
<pre><code>test_dict = {"a": [1, 2, 3], "b": [4, 5, 6]}
for k, v in test_dict.items():
# something else...
for i in v:
if k == "a":
x = 100
y = i
else:
x = i
y = 100
print(x**2+y)
</code></pre>
<p>Is there a way to move the <code>if</code> statement between the first and second loop?</p>
<p>I would like to avoid the checking for each second loop iteration as the result for each key is the same.</p>
<p><strong>Update</strong></p>
<p>What I would like to achieve here is the clearest code possible, without repetition.</p>
<p>I am stuck because the <code>x</code> and <code>y</code> variables are defined and used in the second <code>for</code> loop.</p>
<p><strong>Update 2</strong></p>
<p>As some comments pointed out, I abstracted too much the code, sorry for that, I will try to give more details and write a better code example, even if the code is quite big and I am trying to simplify it.</p>
<p>Some info:</p>
<ol>
<li><code>test_dict</code> is created in another part of the code and its length is not fixed; it could contain several elements</li>
<li>likewise for the lists inside <code>test_dict</code>: they may not all have the same length</li>
<li>the elements in the lists are objects</li>
<li><code>x</code> and <code>y</code> are the input of another function</li>
</ol>
<p>This, maybe, is a better code example:</p>
<pre><code>import dataclasses

@dataclasses.dataclass
class DummyObject:
id: str = ""
val: int = 0

test_dict = {
"ida": [DummyObject("id1", 1), DummyObject("id2", 2)],
"idb": [DummyObject("id3", 3), DummyObject("id4", 4), DummyObject("id5", 5)],
# other entries
}

for k, v in test_dict.items():
loop_result = []
# This is calculate based on other part of the code
compare_id = function_for_id(k)
for i in v:
if DummyObject("idz", 6).id == compare_id:
x = DummyObject("idz", 6)
y = i
else:
x = i
y = DummyObject("idz", 6)
loop_result.append(other_function(x, y))
</code></pre>
<p>I hope this gives a better idea.</p>
| <python><algorithm> | 2023-11-10 15:58:03 | 4 | 412 | cicciodevoto |
77,461,291 | 19,694,624 | FastAPI 422 Unprocessable Entity POST | <p>I have a FastAPI app with a POST method and I am trying to send a relatively large string with many words. Basically, it's a ChatGPT prompt with some text that needs to be rewritten by ChatGPT.
The problem is that when I send this text, I get the error:</p>
<blockquote>
<p>422 Unprocessable Entity</p>
</blockquote>
<p>However, when I send the exact same text as a text file, which is then read into a string variable and passed to ChatGPT, it works perfectly fine. Also, the POST method that takes a string as a parameter works fine if the string is relatively small. How can I fix this? I need to be able to send a large chunk of text without uploading it as a file.</p>
<p>FastAPI code:</p>
<pre><code>from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from pydantic import BaseModel
from fastapi import File, UploadFile
import g4f
app = FastAPI()
origins = ["*"]
app.add_middleware(
CORSMiddleware,
allow_origins=origins,
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
g4f.debug.logging = True
g4f.check_version = False
class Prompt(BaseModel):
text: str
@app.get("/")
def root():
return {"message": "some text whatever"}
@app.post("/prompt_text")
def create_prompt(prompt: Prompt):
response = g4f.ChatCompletion.create(
model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt.text}],  # use the field; str(prompt) would send "text='...'"
)
return {'result': response}
@app.post("/prompt_file")
def create_prompt(prompt: str = None, file: UploadFile = File(None)):
if file is not None:
file_content = file.file.read().decode('utf-8')
prompt_content = file_content
else:
prompt_content = prompt
response = g4f.ChatCompletion.create(
model="gpt-3.5-turbo",
messages=[{"role": "user", "content": prompt_content}],
)
return {'result': response}
</code></pre>
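<p>A hedged note on where a 422 like this usually comes from: the <code>/prompt_text</code> endpoint expects a JSON body matching the <code>Prompt</code> model, so the client has to send <code>{"text": "..."}</code> with a JSON content type rather than the raw text itself. The sketch below only demonstrates the payload shape; the URL in the comment is an illustrative assumption:</p>

```python
import json

large_text = "rewrite this " * 5000          # roughly 65 KB; size alone is not the problem
payload = {"text": large_text}               # field name must match the Prompt model

# json.dumps escapes quotes, newlines, etc., which raw text in a request body would not
body = json.dumps(payload)

# With requests (not executed here), the equivalent call would be:
#   requests.post("http://localhost:8000/prompt_text", json=payload)
# `json=payload` encodes the body and sets Content-Type: application/json;
# sending `data=large_text` would post unencoded text, which FastAPI rejects with a 422.

decoded = json.loads(body)
```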
| <python><python-3.x><rest><fastapi> | 2023-11-10 15:53:32 | 1 | 303 | syrok |
77,461,286 | 9,157 | why is GeneratorExit raised by this combination of yield & exception suppressing ContextDecorator? | <p>My goal is that instead of writing</p>
<pre class="lang-py prettyprint-override"><code>for item in items:
with MyCD():
... # processing logic
</code></pre>
<p>I would like to have a single line</p>
<pre class="lang-py prettyprint-override"><code> for item in MyCD.yield_each_item_wrapped(items):
... # processing logic
</code></pre>
<p>However, despite my <code>ContextDecorator</code> suppressing errors by returning <code>True</code> in <code>__exit__</code> on error, when yielding the item after an item that raised an error, the generator immediately gets a <code>GeneratorExit</code>.</p>
<p>So it seems that despite having handled the exception raised, the generator's <code>.close()</code> is called.</p>
<p>I'm running python 3.11.4</p>
<p>I would have expected to continue processing the remaining two items, since I deliberately suppressed the raised exception, i.e. as the snippet below does (it uses the same <code>MyCD</code> defined further down):</p>
<pre class="lang-py prettyprint-override"><code>def yield2(items):
for i, item in enumerate(items):
with MyCD():
if item:
raise Exception("my error")
yield i # yield success

if __name__ == "__main__":
raising_flags = [True, False, False, True]
expected = [1, 2]
actual = [i for i in yield2(raising_flags)]
assert expected == actual, (expected, actual)
</code></pre>
<hr />
<pre class="lang-py prettyprint-override"><code>import traceback
from contextlib import ContextDecorator
class MyCD(ContextDecorator):
@classmethod
def yield_each_item_wrapped(cls, items, **kwargs):
for i, item in enumerate(items):
print(f"yield_each_item_wrapped {i=}")
print(f"yield_each_item_wrapped {item=}")
with cls(**kwargs):
yield item
def __enter__(self):
print("__enter__")
def __exit__(self, exc_type, exc, exc_tb):
print("__exit__")
if exc:
traceback.print_exc()
return True # do not propagate

if __name__ == "__main__":
raising_flags = [True, False, False, True]
expected = [1, 2]
actual = []
for i, raising in enumerate(MyCD.yield_each_item_wrapped(raising_flags)):
if raising:
raise Exception("my error")
actual.append(i)
assert expected == actual, (expected, actual)
</code></pre>
<p>produces</p>
<pre><code>yield_each_item_wrapped i=0
yield_each_item_wrapped item=True
__enter__
__exit__
Traceback (most recent call last):
File "x.py", line 12, in yield_each_item_wrapped
yield item
GeneratorExit
yield_each_item_wrapped i=1
yield_each_item_wrapped item=False
__enter__
Exception ignored in: <generator object MyCD.yield_each_item_wrapped at 0x7f42e625aac0>
Traceback (most recent call last):
File "x.py", line 30, in <module>
raise Exception("my error")
RuntimeError: generator ignored GeneratorExit
Traceback (most recent call last):
File "x.py", line 30, in <module>
raise Exception("my error")
Exception: my error
</code></pre>
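<p>A hedged, self-contained reading of what seems to be going on: when the loop body raises, the half-finished generator is closed, and <code>close()</code> works by throwing <code>GeneratorExit</code> in at the paused <code>yield</code>. A context manager that swallows <em>everything</em> also swallows that, so the generator yields again and CPython raises <code>RuntimeError: generator ignored GeneratorExit</code>. Letting <code>GeneratorExit</code> through avoids it (class and function names here are made up for the demo):</p>

```python
from contextlib import ContextDecorator

class Swallow(ContextDecorator):
    """Suppresses every exception, including GeneratorExit."""
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc, tb):
        return True

class SwallowButClose(ContextDecorator):
    """Suppresses ordinary exceptions but lets GeneratorExit propagate."""
    def __enter__(self):
        return self
    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None and issubclass(exc_type, GeneratorExit):
            return False  # let close() do its job
        return True

def wrapped(cd_cls, items):
    for item in items:
        with cd_cls():
            yield item

# Closing a generator paused inside the all-swallowing manager blows up,
# because the suppressed GeneratorExit lets the loop reach the next yield:
bad = wrapped(Swallow, [1, 2, 3])
next(bad)
try:
    bad.close()
    bad_close_failed = False
except RuntimeError:  # "generator ignored GeneratorExit"
    bad_close_failed = True

# Letting GeneratorExit propagate makes close() succeed quietly:
good = wrapped(SwallowButClose, [1, 2, 3])
next(good)
good.close()
good_closed = True
```

<p>So even though returning <code>True</code> from <code>__exit__</code> is a handy way to suppress errors, <code>GeneratorExit</code> should normally be exempted whenever the manager wraps a <code>yield</code>.</p>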
| <python><generator><yield><contextmanager> | 2023-11-10 15:52:34 | 0 | 1,662 | zsepi |
77,461,175 | 945,511 | transform json to polars dataframe | <p>I have the following JSON file and I would like to transform it into a Polars DataFrame. How can I use the <code>pl.read_json</code> function, which has a <code>schema</code> attribute?</p>
<pre><code> {
"data": {
"names": [
"A",
"B",
"C",
"D",
"E"
],
"ndarray": [
[
"abc",
true,
0.374618,
1,
0.83252
],
[
"hello",
false,
0.1265374619,
0,
0.253
]
]
}
}
</code></pre>
<p><a href="https://i.sstatic.net/6esAT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6esAT.png" alt="enter image description here" /></a></p>
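<p>Not a full answer, but since the JSON is row-oriented under <code>"data"</code>, one option is to reshape it in plain Python first; the resulting dict of columns can be handed straight to <code>pl.DataFrame</code>. A minimal sketch (polars itself is not imported here, and the polars calls in the comments are untested suggestions):</p>

```python
import json

raw = """
{
  "data": {
    "names": ["A", "B", "C", "D", "E"],
    "ndarray": [
      ["abc", true, 0.374618, 1, 0.83252],
      ["hello", false, 0.1265374619, 0, 0.253]
    ]
  }
}
"""

parsed = json.loads(raw)
names = parsed["data"]["names"]
rows = parsed["data"]["ndarray"]

# Transpose the rows into {column_name: [values...]}
columns = {name: list(vals) for name, vals in zip(names, zip(*rows))}

# With polars this could then be, for example:
#   df = pl.DataFrame(columns)
# or, keeping row orientation:
#   df = pl.DataFrame(rows, schema=names, orient="row")
```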
| <python><json><dataframe><python-polars> | 2023-11-10 15:35:48 | 1 | 693 | Carlitos Overflow |
77,461,174 | 2,386,605 | How to build an asynchronous generator? | <p>When I run the code</p>
<pre><code>async def main(*args, **kwargs):
    await sub_process1()

async def sub_process1():
    iter = await sub_process2()
    for i in iter:
        yield i

async def sub_process2():
    return [1, 2, 3]
</code></pre>
<p>I get</p>
<pre><code> async def main(*args, **kwargs):
> await sub_process1()
E TypeError: object async_generator can't be used in 'await' expression
</code></pre>
<p>The same result is obtained if I use</p>
<pre><code>async def sub_process1():
iter = await sub_process2()
async for i in iter:
yield i
</code></pre>
<p>instead.</p>
<p>How can I fix this?</p>
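<p>For reference, a minimal working sketch of the intended shape: <code>await</code> the coroutine that returns the list, but consume the async generator with <code>async for</code> (or an async comprehension), never with <code>await</code>:</p>

```python
import asyncio

async def sub_process2():
    # a coroutine: awaiting it yields its return value
    return [1, 2, 3]

async def sub_process1():
    items = await sub_process2()  # await the coroutine...
    for i in items:
        yield i                   # ...the yield makes this an async generator

async def main():
    # async generators are consumed with `async for`, not `await`
    return [i async for i in sub_process1()]

result = asyncio.run(main())
```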
| <python><python-3.x><asynchronous><async-await><generator> | 2023-11-10 15:35:45 | 1 | 879 | tobias |
77,461,066 | 5,896,319 | How to handle wide tables in pylatex? | <p>I want to generate a PDF using PyLaTeX.
I can handle long-table problems, but I still have a wide-table problem.
I'm using the A4 format because the document should be printable. I do not want to rotate the table, but I am open to other solutions.
Do you have any recommendations?
Here is my code block:</p>
<pre><code>model = ...  # I'm getting a model object in this line
column_size = len(item.get("columns"))
tabular_size = "|c" * column_size + "|"
table = LongTable(tabular_size)
table.add_hline()
headers = tuple(str(item['header']).replace("_", " ") for item in item.get("columns"))
table.add_row(headers)
table.add_hline()
for obj in model:
row_list = []
for val in item.get("columns"):
key = (val["key"])
value = getattr(obj, key)
row_list.append(value)
table.add_row(MultiColumn(
column_size,
data=list(row_list))
)
table.add_hline()
doc.append(VerticalSpace(r'0.3in'))
doc.append(NewLine())
</code></pre>
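<p>One rotation-free direction that works with <code>LongTable</code> is to stop using <code>c</code> columns and give each column a fixed width that sums to roughly the text width, so cell contents wrap instead of overflowing the page. A sketch of building such a column spec (the 17 cm default is an assumption for A4 with modest margins, and the helper name is made up):</p>

```python
def p_spec(n_cols, text_width_cm=17.0):
    """Build a column spec like |p{4.25cm}|p{4.25cm}|... that divides the
    available text width evenly among the columns, so cell text wraps."""
    width = text_width_cm / n_cols
    return "|" + "|".join(f"p{{{width:.2f}cm}}" for _ in range(n_cols)) + "|"

spec = p_spec(4)
# e.g. table = LongTable(p_spec(len(item.get("columns"))))
```

<p>For tables that are wide but <em>not</em> long, wrapping the tabular in <code>\resizebox{\textwidth}{!}{...}</code> or the <code>adjustbox</code> package (appended as raw LaTeX via <code>NoEscape</code>) is another option, but those generally do not combine with <code>longtable</code>.</p>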
| <python><flask><pylatex> | 2023-11-10 15:20:17 | 1 | 680 | edche |
77,461,046 | 3,608,247 | Working with multiple Python versions **including development header files**? | <p>I develop a Python application with support for different Python versions. I use <code>pyenv</code> to have multiple Python versions on my system (Linux, if that matters) and to switch between them.</p>
<p>Now I also need to build Python packages with extensions, as not all of them publish wheels for the most recent Python versions yet. My system's development headers are guaranteed to work with only one Python version (package <code>python3-dev</code>, which, on my system, serves Python 3.10.6). So attempts to build (some) packages in e.g. a Python 3.12 <code>pyenv</code> environment fail.</p>
<p>Ideally, I want to be able to say:</p>
<pre><code>$ pyenv use 3.12
$ pip wheel <package> # <-- and GCC is using headers for Python 3.12, not for the version for which the development headers are installed system-wide
</code></pre>
<p>Is this a reasonable expectation? How can it be achieved?</p>
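<p>A possibly useful observation (hedged, since details vary by setup): interpreters built by <code>pyenv</code> ship their own header files, and <code>pip</code> builds extensions against the headers of the interpreter that runs it. <code>sysconfig</code> shows which include directory that is:</p>

```python
import sysconfig

# The include directory the build tooling uses for the *current* interpreter.
# Under pyenv this is typically something like
#   ~/.pyenv/versions/3.12.x/include/python3.12
# rather than the system-wide /usr/include/python3.10 from python3-dev.
include_dir = sysconfig.get_path("include")
print(include_dir)
```

<p>Running this under each <code>pyenv</code> environment is a quick way to check which headers a <code>pip wheel</code> build would see there.</p>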
| <python><packaging><multiple-versions> | 2023-11-10 15:17:46 | 1 | 2,627 | Konstantin Shemyak |