| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,583,405
| 595,305
|
Latency in detecting key press (hotkey) due to non-event-thread activity
|
<p>In my application I am using a non-event-thread to do some fairly heavy processing: using module docx2python to extract text content from multiple .docx files and then re-assemble batches of lines (10-line batches) to form "Lucene Documents" for the purpose of indexing with Elasticsearch.</p>
<p>I have a hotkey, configured via a menu item, which lets me toggle between viewing the initial frame and a second frame which represents what's going on in terms of executing tasks. This toggling had been working instantaneously... until I started actually working on the real tasks.</p>
<p>I can tell from the log messages that the toggle method is not the culprit: there can be a gap (about 1 s) between me pressing the key and seeing the log message at the start of the toggle method.</p>
<p>Just to show that I'm not making some textbook mistake in my use of threads, please have a look at this MRE below (F4 toggles). The trouble is, this <strong>MRE does NOT in fact illustrate the problem</strong>. I tried using various crunchy tasks which might potentially "congest" thread-handling, but nothing so far reproduces the phenomenon.</p>
<p>I thought of putting <code>QApplication.processEvents()</code> in the worker thread at a point where it would be called frequently. This changed nothing.</p>
<p>Is anyone familiar with how hotkeys are handled? It feels like I'm missing some obvious "yield" technique (i.e. yield to force detection of hotkey activity...).</p>
<pre><code>import sys, time, logging, pathlib
from PyQt5 import QtWidgets, QtCore

class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle('Blip')
        self.create_initial_gui_components()
        self.add_standard_menu_components()
        self.task_mgr = TaskManager()
        self.task_mgr.thread.start()
        self.alternate_view = False

    def create_initial_gui_components(self):
        self.resize(1200, 600)
        central_widget = QtWidgets.QFrame(self)
        self.setCentralWidget(central_widget)
        central_widget.setLayout(QtWidgets.QVBoxLayout(central_widget))
        central_widget.setStyleSheet('border: 5px solid green;')
        # initially showing frame
        self.frame1 = QtWidgets.QFrame()
        central_widget.layout().addWidget(self.frame1)
        self.frame1.setStyleSheet('border: 5px solid blue;')
        # initially hidden frame
        self.frame2 = QtWidgets.QFrame()
        central_widget.layout().addWidget(self.frame2)
        self.frame2.setStyleSheet('border: 5px solid red;')
        self.frame2.hide()

    def add_standard_menu_components(self):
        self.menubar = QtWidgets.QMenuBar(self)
        self.setMenuBar(self.menubar)
        self.main_menu = QtWidgets.QMenu('&Main', self.menubar)
        self.menubar.addMenu(self.main_menu)
        # make the menu item with hotkey
        self.main_menu.toggle_frame_view_action = self.make_new_menu_item_action('&Toggle view tasks', 'F4',
            self.toggle_frame_view, self.main_menu, enabled=True)

    def toggle_frame_view(self):
        # this is the message which occurs after pressing F4, after a very bad gap, in the "real world" app
        print('toggle...')
        if self.alternate_view:
            self.frame2.hide()
            self.frame1.show()
        else:
            self.frame2.show()
            self.frame1.hide()
        self.alternate_view = not self.alternate_view

    def make_new_menu_item_action(self, text, shortcut, connect_method, menu, enabled=True):
        action = QtWidgets.QAction(text, menu)
        action.setShortcut(shortcut)
        menu.addAction(action)
        action.triggered.connect(connect_method)
        action.setEnabled(enabled)
        return action

class TaskManager():
    def __init__(self):
        self.thread = QtCore.QThread()
        self.task = LongTask()
        self.task.moveToThread(self.thread)
        self.thread.started.connect(self.task.run)

class LongTask(QtCore.QObject):
    def run(self):
        out_file_path = pathlib.Path('out.txt')
        if out_file_path.is_file():
            out_file_path.unlink()
        # None of this works (has the desired effect of causing toggling latency)...
        s = ''
        for i in range(20):
            print(f'{i} blip')
            for j in range(1, 1000001):
                for k in range(1, 1000001):
                    l = j/k
                # performing I/O operations makes no difference
                with open('out.txt', 'w') as f:
                    f.write(f'l: {l} len(s) {len(s)}\n')
                s += f'{{once upon a {j}}}\n'
        print('here BBB')

def main():
    app = QtWidgets.QApplication(sys.argv)
    window = MainWindow()
    window.show()
    app.exec()

if __name__ == '__main__':
    main()
</code></pre>
<p><em><strong>Later</strong></em><br>
Suspicion about the answer: when a "long task" is started, some GUI elements have to be constructed and added to frame 2 (a progress bar and stop button on a small strip of a QFrame, one for each live task). I will have to do some tests, but it seems fairly unlikely that this could have such a dramatic effect (1 s of latency).</p>
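<p>A GUI-free sketch of one hypothesis (an assumption, not a verified diagnosis of the app): a worker thread running pure-Python bytecode holds the GIL between interpreter switch points, which can delay wakeups — including event delivery — in the main thread. docx2python-style parsing spends its time in pure-Python loops like the busy loop below; the size of the effect is machine- and workload-dependent:</p>

```python
import threading
import time

def busy_python(stop):
    # A pure-Python loop holds the GIL between interpreter switch points
    x = 0
    while not stop.is_set():
        x += 1

def avg_wakeup(n=100):
    # Average wall time of a nominal 1 ms sleep in the "event" thread
    t0 = time.perf_counter()
    for _ in range(n):
        time.sleep(0.001)
    return (time.perf_counter() - t0) / n

baseline = avg_wakeup()

stop = threading.Event()
worker = threading.Thread(target=busy_python, args=(stop,), daemon=True)
worker.start()
contended = avg_wakeup()
stop.set()
worker.join()

print(f"baseline ~{baseline * 1000:.2f} ms, with busy worker ~{contended * 1000:.2f} ms")
```

<p>If the real worker shows the same pattern, <code>time.sleep(0)</code> at loop boundaries or <code>sys.setswitchinterval()</code> might shift the balance, but that is speculation to be measured, not a known fix.</p>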
|
<python><multithreading><pyqt5><latency>
|
2023-06-29 17:36:41
| 1
| 16,076
|
mike rodent
|
76,583,115
| 8,741,781
|
Convert .kv to custom python class
|
<p>I'm new to Kivy and having a hard time understanding why my code isn't working.</p>
<p>I'd like to convert some reusable Kv language code into a custom Python class, but I can't figure out what's going wrong.</p>
<pre><code><ReceivingShipmentDetailScreen>:
    BoxLayout:
        orientation: 'vertical'
        padding: 20
        spacing: 15

        BoxLayout:
            size_hint_y: None
            height: 50
            canvas.before:
                Color:
                    rgba: (0.1803921568627451, 0.20784313725490197, 0.24313725490196078, 1)
                Rectangle:
                    pos: self.pos
                    size: self.size

            Label:
                text: 'Receive New Shipment'
                bold: True
                font_size: 20
</code></pre>
<p>I attempted to create a python class and pass in <code>page_header_text</code> when it's called in my Kv code but it doesn't seem to be working.</p>
<p>The text reads "Title" instead of the expected "Receive New Shipment". Also the format is messed up; the label and rectangle are in separate parts of the page. It seems that <code>CustomLayout</code> also doesn't have a parent widget and the size is different than expected.</p>
<pre><code>class CustomLayout(BoxLayout):
    page_header_text = StringProperty('Title')

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.orientation = 'vertical'
        self.padding = 20
        self.spacing = 15

        page_header = BoxLayout(
            size_hint_y=None,
            height=50,
        )
        with page_header.canvas.before:
            Color(
                0.1803921568627451,
                0.20784313725490197,
                0.24313725490196078,
                1,
            )
            Rectangle(pos=page_header.pos, size=page_header.size)
        page_header.add_widget(Label(
            text=self.page_header_text,
            bold=True,
            font_size=20,
        ))
        self.add_widget(page_header)
</code></pre>
<p>Here's the usage in the .kv file:</p>
<pre><code><ReceivingShipmentDetailScreen>:
    CustomLayout:
        page_header_text: 'Receive New Shipment'
</code></pre>
<p>What am I missing here?</p>
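<p>A sketch of one direction to try (an assumption, not verified here): keep the widget tree in a Kv rule for <code>CustomLayout</code>, so the label's text is <em>bound</em> to <code>page_header_text</code> instead of being read once in <code>__init__</code> before Kv has assigned the value; the canvas instructions then also track <code>pos</code>/<code>size</code> automatically:</p>

```
# Python side would shrink to just the property:
#     class CustomLayout(BoxLayout):
#         page_header_text = StringProperty('Title')

# Kv side:
<CustomLayout>:
    orientation: 'vertical'
    padding: 20
    spacing: 15
    BoxLayout:
        size_hint_y: None
        height: 50
        canvas.before:
            Color:
                rgba: (0.1803921568627451, 0.20784313725490197, 0.24313725490196078, 1)
            Rectangle:
                pos: self.pos
                size: self.size
        Label:
            text: root.page_header_text  # bound: updates whenever the property changes
            bold: True
            font_size: 20
```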
|
<python><kivy><kivy-language>
|
2023-06-29 16:53:05
| 1
| 6,137
|
bdoubleu
|
76,583,100
| 3,247,006
|
Is `pytz` deprecated now or in the future in Python?
|
<p><a href="https://pytz.sourceforge.net/" rel="noreferrer">pytz</a> is used in <strong>the Django ≤3.2 docs</strong>, <a href="https://docs.djangoproject.com/en/3.2/topics/i18n/timezones/#selecting-the-current-time-zone" rel="noreferrer">Selecting the current time zone</a>, as shown below:</p>
<pre class="lang-py prettyprint-override"><code>import pytz # Here
from django.utils import timezone

class TimezoneMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        tzname = request.session.get('django_timezone')
        if tzname:
            timezone.activate(pytz.timezone(tzname))
        else:
            timezone.deactivate()
        return self.get_response(request)
</code></pre>
<p>But <a href="https://docs.python.org/3/library/zoneinfo.html" rel="noreferrer">zoneinfo</a> is used instead of <code>pytz</code> in <strong>the Django ≥4.0 docs</strong>, <a href="https://docs.djangoproject.com/en/4.0/topics/i18n/timezones/#selecting-the-current-time-zone" rel="noreferrer">Selecting the current time zone</a>, as shown below:</p>
<pre class="lang-py prettyprint-override"><code>import zoneinfo # Here
from django.utils import timezone

class TimezoneMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        tzname = request.session.get('django_timezone')
        if tzname:
            timezone.activate(zoneinfo.ZoneInfo(tzname))
        else:
            timezone.deactivate()
        return self.get_response(request)
</code></pre>
<p>My questions:</p>
<ol>
<li>Is <code>pytz</code> deprecated now or in the future in Python?</li>
<li>Why isn't <code>pytz</code> used in <strong>the Django ≥4.0 docs</strong> <a href="https://docs.djangoproject.com/en/4.0/topics/i18n/timezones/#selecting-the-current-time-zone" rel="noreferrer">Selecting the current time zone</a>?</li>
</ol>
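<p>For context: <code>pytz</code> is a third-party package, so CPython itself does not deprecate it, but since Python 3.9 the standard library ships <code>zoneinfo</code> (PEP 615), which Django 4.0 adopted as its default timezone implementation while deprecating <code>pytz</code> support. A minimal stdlib-only example (named zones other than UTC need system tzdata or the <code>tzdata</code> PyPI package):</p>

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # standard library since Python 3.9

# "UTC" is always resolvable; keys like "Europe/Paris" need tzdata available
dt = datetime(2023, 6, 29, 12, 0, tzinfo=ZoneInfo("UTC"))
print(dt.isoformat())
```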
|
<python><django><timezone><pytz><zoneinfo>
|
2023-06-29 16:51:09
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
76,583,013
| 10,735,143
|
How to retrieve text by a keyword which contains punctuation?
|
<p>I am using SQLite with SQLAlchemy as the ORM. I want to retrieve sentences that contain certain keywords.</p>
<p>I handle situations in which a keyword appears at the start/end of the sentence using <code>table.message.startswith(keyword)</code> and <code>table.message.endswith(keyword)</code> (or <code>table.message.ilike(f' {keyword} ')</code>). But this fails when the keyword is adjacent to punctuation, as in:</p>
<pre class="lang-bash prettyprint-override"><code>"This is first example. This will continue..."
</code></pre>
<p>I can't retrieve anything using <code>example</code> or <code>continue</code> as the keyword. I tried <code>table.message.ilike(f'%{keyword}%')</code>, but that also retrieved sentences which only contain the keyword as part of a longer word.</p>
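<p>One option, shown here as a raw <code>sqlite3</code> sketch of the idea (with SQLAlchemy the same function can be registered on the underlying DBAPI connection): register a Python regex as SQLite's <code>REGEXP</code> operator and match on word boundaries, which treat punctuation as a boundary:</p>

```python
import re
import sqlite3

def regexp(pattern, value):
    # SQLite calls this for `message REGEXP pattern`
    return value is not None and re.search(pattern, value) is not None

conn = sqlite3.connect(":memory:")
conn.create_function("REGEXP", 2, regexp)
conn.execute("CREATE TABLE messages (message TEXT)")
conn.executemany(
    "INSERT INTO messages VALUES (?)",
    [("This is first example. This will continue...",),
     ("No examples here",)],  # 'examples' should NOT match the keyword 'example'
)

keyword = "example"
rows = conn.execute(
    "SELECT message FROM messages WHERE message REGEXP ?",
    (rf"\b{re.escape(keyword)}\b",),   # \b = word boundary, so '.' and ',' are fine
).fetchall()
print(rows)
```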
|
<python><sqlite><sqlalchemy>
|
2023-06-29 16:38:42
| 1
| 634
|
Mostafa Najmi
|
76,582,986
| 6,751,456
|
dataframe count unique list values from a column and add the sum as a row
|
<p>I have a following dataframe:</p>
<pre><code>data = {
    's1': [[1, 2], [None], [2, 3]],
    's2': [[4, 5], [6, 7], [3, 2]]
}

output:
       s1      s2
0  [1, 2]  [4, 5]
1     NaN  [6, 7]
2  [2, 3]  [3, 2]
</code></pre>
<p>I need to get a unique count of the elements in each of the columns <code>s1</code> and <code>s2</code>, and add these counts as rows like below.
EDIT: the count also needs to ignore None/null values.</p>
<pre><code>expected output:
   step  count
0     1      3    # unique values of s1: 1, 2, 3 (NaN ignored)
1     2      6    # unique values of s2: 4, 5, 6, 7, 3, 2
</code></pre>
<p>What I did was a bit dirty:</p>
<pre><code>s1_unique = df['s1'].explode().unique()
s2_unique = df['s2'].explode().unique()

new_df = pd.DataFrame()
new_df['step'] = [1, 2]
new_df['count'] = [len(s1_unique), len(s2_unique)]
new_df['name'] = 'Others'
</code></pre>
<p>Is there a "neat" dataframe way to handle this?</p>
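<p>A more compact version of the same idea (a sketch using <code>explode</code>/<code>nunique</code>, with <code>dropna()</code> handling the EDIT about ignoring nulls):</p>

```python
import pandas as pd

data = {'s1': [[1, 2], [None], [2, 3]],
        's2': [[4, 5], [6, 7], [3, 2]]}
df = pd.DataFrame(data)

# Per column: flatten the lists, drop None/NaN, count distinct values
counts = (df.apply(lambda col: col.explode().dropna().nunique())
            .rename_axis('col')
            .reset_index(name='count'))
counts.insert(0, 'step', list(range(1, len(counts) + 1)))
print(counts)
```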
|
<python><pandas><dataframe>
|
2023-06-29 16:33:13
| 4
| 4,161
|
Azima
|
76,582,830
| 11,857,974
|
Get value of input fields using PySide2
|
<p>I am trying to use PySide2 to build a dialog with buttons.</p>
<p>Each row of the dialog window has a checkbox, a button and a label.</p>
<p>Clicking on a <code>Set Text</code> button opens another dialog box to enter a text. I would like to get the input value of this dialog box (<code>just enter this</code>) and update the label next to the clicked button. In this case, clicking the first <code>Set text</code> button should change the first label from <code>Set value</code> to <code>just enter this</code>.</p>
<p>Finally, by clicking on the "Apply" button, I would like to get the values of the checkboxes and the labels (those at the end of each row, showing <code>Set value</code>).</p>
<p>Below are the dialog boxes and the code</p>
<p><a href="https://i.sstatic.net/Wnj9W.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wnj9W.png" alt="Illustration" /></a></p>
<pre><code>from PySide2.QtWidgets import (
    QApplication, QPushButton,
    QWidget, QLabel, QVBoxLayout, QHBoxLayout, QLineEdit, QInputDialog, QCheckBox
)
import sys
from PySide2 import QtWidgets

class AnotherWindow(QWidget):
    """
    This "window" is a QWidget. If it has no parent, it
    will appear as a free-floating window as we want.
    """
    def __init__(self):
        super().__init__()

    def init_input_dialog(self):
        text, ok = QInputDialog.getText(self, 'Text Input Dialog', 'Enter something:')
        if ok:
            return str(text)

class Window(QWidget):
    def __init__(self):
        super().__init__()
        v = QVBoxLayout()
        objets = ['label1', 'label2']
        for a in objets:
            h = QHBoxLayout()
            check_box = QCheckBox()
            labels = QLabel(a)
            button = QPushButton("Set text")
            button.pressed.connect(
                lambda val=a: self.show_new_window(val)
            )
            label_value = QLabel("Set value", objectName=a)
            h.addWidget(check_box)
            h.addWidget(labels)
            h.addWidget(button)
            h.addWidget(label_value)
            # Add the current objet entry to the main layout
            v.addLayout(h)
        # Add buttons: Ok and Cancel
        btn_ok = QPushButton("Apply")
        btn_ok.clicked.connect(self.get_dialog_box_values)
        btn_cancel = QPushButton("Cancel")
        btn_cancel.clicked.connect(self.close_dialog_box)  # pass the method, don't call it
        h = QHBoxLayout()
        h.addWidget(btn_ok)
        h.addWidget(btn_cancel)
        v.addLayout(h)
        # Set the layout
        self.setLayout(v)

    def show_new_window(self, n):
        win = AnotherWindow()
        val = win.init_input_dialog()
        print(val)
        # Replace the corresponding label with this value
        # label_value.setText(str(val))

    def get_dialog_box_values(self):
        # Get state of checkboxes, and label values
        check_box_values = []
        label_values = []
        print(check_box_values)
        print(label_values)

    def close_dialog_box(self):
        # Close the dialog box
        pass

app = QApplication(sys.argv)
w = Window()
w.show()
app.exec_()
</code></pre>
<p>I am also open to any suggestions on how to achieve this in a better way,
e.g. if there is a better way to add the <code>Apply</code> and <code>Cancel</code> buttons to the main dialog box.</p>
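<p>A Qt-free sketch of the usual pattern for this (the <code>FakeLabel</code> stand-in is invented so the example runs without Qt): keep a reference to each row's widgets in a list while building the rows, bind each button's callback to that row's label with <code>functools.partial</code>, and walk the list in the Apply handler:</p>

```python
from functools import partial

class FakeLabel:
    """Stand-in for QLabel so the pattern can be shown without Qt."""
    def __init__(self, text):
        self._text = text
    def setText(self, text):
        self._text = text
    def text(self):
        return self._text

# Keep a reference to each row's widgets instead of relying on loop variables
rows = []
for name in ['label1', 'label2']:
    label_value = FakeLabel('Set value')
    rows.append({'name': name, 'label': label_value})

def on_set_text(label):
    # In the real app the text would come from QInputDialog.getText
    label.setText('just enter this')

# The Qt connection would be: button.pressed.connect(partial(on_set_text, label_value))
handler = partial(on_set_text, rows[0]['label'])
handler()  # simulate pressing the first row's button

# The Apply handler can now read every row's current state
values = [(r['name'], r['label'].text()) for r in rows]
print(values)
```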
|
<python><qt><pyqt><pyside2><qdialog>
|
2023-06-29 16:09:50
| 1
| 707
|
Kyv
|
76,582,775
| 8,107,458
|
What is the best way to save comma separated keywords posted from client side into database table using Django?
|
<p>I have a text area input on a simple HTML page, from which I post comma-separated keywords to save into my database table.
For example, the keywords input looks like: <code>keyword1,keyword2,keyword3, ...</code></p>
<p><a href="https://i.sstatic.net/CfbCI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CfbCI.png" alt="enter image description here" /></a></p>
<p>I am using Django 3.2 and Tastypie as an API framework.</p>
<p>Here is my Django model:</p>
<pre><code>class Keywords(models.Model):
    keyword = models.CharField(max_length=200, unique=True)

    class Meta:
        db_table = 'keywords'
</code></pre>
<p>And Tastypie resource:</p>
<pre><code>class KeywordResource(ModelResource):
    class Meta(CommonMeta):
        queryset = Keywords.objects.all()
        resource_name = 'keyword-resource'
        filtering = {'id': ALL, 'keyword': ALL}
        allowed_methods = ['get', 'post', 'put', 'patch']
</code></pre>
<p>I want to save the comma-separated keywords in one POST request from the client side. How can I post the comma-separated keywords and insert them one by one into the <code>Keywords</code> table?
Also, what would be the best place to prepare the data for saving: the model's <code>save()</code> method or the Tastypie resource's <code>hydrate()</code> method?</p>
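<p>The parsing half is plain Python; a sketch (the helper name <code>parse_keywords</code> is made up for illustration):</p>

```python
def parse_keywords(raw: str) -> list[str]:
    # Split on commas, trim whitespace, drop empties and duplicates (order kept)
    seen: list[str] = []
    for part in raw.split(','):
        kw = part.strip()
        if kw and kw not in seen:
            seen.append(kw)
    return seen

print(parse_keywords("keyword1, keyword2 ,keyword3,,keyword1"))
```

<p>For the saving half, one hedged option is to loop over the parsed list with <code>Keywords.objects.get_or_create(keyword=kw)</code>, or use <code>Keywords.objects.bulk_create([...], ignore_conflicts=True)</code> to respect the <code>unique=True</code> constraint in a single query; whether that lives in a model method or a Tastypie <code>hydrate</code> hook is the design choice the question is asking about.</p>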
|
<python><django><django-models><tastypie>
|
2023-06-29 16:03:27
| 1
| 762
|
Shriniwas
|
76,582,545
| 219,976
|
How to export to dict some properties of an object
|
<p>I have a python class which has several properties. I want to implement a method which will return some properties as a dict. I want to mark the properties with a decorator. Here's an example:</p>
<pre><code>class Foo:
    @export_to_dict  # I want to add this property to the dict
    @property
    def bar1(self):
        return 1

    @property  # I don't want to add this property to the dict
    def bar2(self):
        return {"smth": 2}

    @export_to_dict  # I want to add this property to the dict
    @property
    def bar3(self):
        return "a"

    @property
    def bar4(self):
        return [2, 3, 4]

    def to_dict(self):
        return ...  # expected result: {"bar1": 1, "bar3": "a"}
</code></pre>
<p>One way to implement it is to set an additional attribute on the properties via the <code>export_to_dict</code> decorator, like this:</p>
<pre><code>def export_to_dict(func):
    setattr(func, '_export_to_dict', True)
    return func
</code></pre>
<p>and to search for properties with the <code>_export_to_dict</code> attribute when <code>to_dict</code> is called.</p>
<p>Is there another way to accomplish the task?</p>
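<p>Fleshing out the question's own idea into a runnable sketch: since <code>@export_to_dict</code> sits above <code>@property</code>, it receives the property object, and property instances reject new attributes, so the marker can go on the getter (<code>fget</code>) instead; <code>to_dict</code> then scans the class for marked properties:</p>

```python
def export_to_dict(prop):
    # `prop` is a property object here (because @property is applied first);
    # property instances have no __dict__, so mark the underlying getter
    prop.fget._export_to_dict = True
    return prop

class Foo:
    @export_to_dict
    @property
    def bar1(self):
        return 1

    @property
    def bar2(self):
        return {"smth": 2}

    @export_to_dict
    @property
    def bar3(self):
        return "a"

    def to_dict(self):
        # Look up marked properties on the class, then read them via self
        return {
            name: getattr(self, name)
            for name, attr in type(self).__dict__.items()
            if isinstance(attr, property)
            and getattr(attr.fget, '_export_to_dict', False)
        }

print(Foo().to_dict())
```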
|
<python><properties><decorator>
|
2023-06-29 15:32:45
| 5
| 6,657
|
StuffHappens
|
76,582,514
| 6,195,489
|
Two SQLalchemy relationships one produces None the other an empty list: Why?
|
<p>I have the following classes set up to define tables using sqlalchemy:</p>
<p>Project</p>
<pre><code>class Project(Base):
    __tablename__ = "projects"

    id: Mapped[int] = mapped_column(primary_key=True, nullable=True)
    title: Mapped[str] = mapped_column()
    outputs: Mapped[list[ResearchOutput]] = relationship(
        secondary=proj_output_association_tbl, back_populates="projects", uselist=True
    )
    users: Mapped[list[User]] = relationship(
        secondary=project_association_tbl, back_populates="projects"
    )

    def __init__(
        self,
        pure_id: int,
        title: str,
    ) -> None:
        self.id = pure_id
        self.title = title
</code></pre>
<p>Output</p>
<pre><code>class Output(Base):
    __tablename__ = "outputs"

    id: Mapped[int] = mapped_column(primary_key=True, nullable=True)
    users: Mapped[list[User]] = relationship(
        secondary=output_association_tbl, back_populates="outputs"
    )
    title: Mapped[str] = mapped_column()
    projects: Mapped[Optional[Project]] = relationship(
        secondary=proj_output_association_tbl,
        back_populates="outputs",
    )
    equipment: Mapped[Optional[Equipment]] = relationship(
        secondary=equipment_association_tbl,
        back_populates="outputs",
    )

    def __init__(
        self,
        id: int,
        title: str,
    ) -> None:
        self.id = id
        self.title = title
</code></pre>
<p>Equipment</p>
<pre><code>class Equipment(Base):
    __tablename__ = "equipment"

    id: Mapped[str] = mapped_column(primary_key=True, nullable=True)
    outputs: Mapped[list[ResearchOutput]] = relationship(
        secondary=equipment_association_tbl, back_populates="equipment"
    )
    name: Mapped[str] = mapped_column()

    def __init__(
        self,
        id: int,
        name: str,
    ) -> None:
        self.name = name
        self.id = id
</code></pre>
<p>I can initialise an instance of <code>Output</code> and add <code>equipment</code> to it by initialising them and appending to <code>output.equipment</code>, as <code>output.equipment</code> is initialised as an empty list <code>[]</code>.</p>
<p>However when I try to do the exact same approach to set up a project for an output I cannot do it, as <code>output.projects</code> is initialised as <code>None</code>.</p>
<p>Can anyone explain why this would be?</p>
<p>Thanks!</p>
|
<python><sqlalchemy><orm>
|
2023-06-29 15:28:23
| 1
| 849
|
abinitio
|
76,582,502
| 11,793,491
|
In DBT I cannot import name 'RuntimeException' from 'dbt.exceptions'
|
<p>I tried to set up dbt on an EC2 instance, creating a conda environment:</p>
<pre><code>conda create --name dbt-athena python=3.10
conda activate dbt-athena
conda install -c conda-forge dbt-athena-adapter
</code></pre>
<p>Then I see dbt is installed correctly:</p>
<pre><code>dbt --version
Core:
- installed: 1.5.1
- latest: 1.5.2 - Update available!
</code></pre>
<p>So I tried to initialize a project:</p>
<pre><code>dbt init gapminder
15:08:31 Running with dbt=1.5.1
The profile gapminder already exists in /home/ec2-user/.dbt/profiles.yml. Continue and overwrite it? [y/N]: y
Which database would you like to use?
[1] athena
(Don't see the one you want? https://docs.getdbt.com/docs/available-adapters)
</code></pre>
<p>But I get this error:</p>
<pre><code>15:08:55 Encountered an error:
cannot import name 'RuntimeException' from 'dbt.exceptions' (/home/ec2-user/miniconda3/envs/dbt-athena/lib/python3.10/site-packages/dbt/exceptions.py)
15:08:55 Traceback (most recent call last):
  File "/home/ec2-user/miniconda3/envs/dbt-athena/lib/python3.10/site-packages/dbt/cli/requires.py", line 86, in wrapper
    result, success = func(*args, **kwargs)
  File "/home/ec2-user/miniconda3/envs/dbt-athena/lib/python3.10/site-packages/dbt/cli/requires.py", line 71, in wrapper
    return func(*args, **kwargs)
  File "/home/ec2-user/miniconda3/envs/dbt-athena/lib/python3.10/site-packages/dbt/cli/main.py", line 457, in init
    results = task.run()
  File "/home/ec2-user/miniconda3/envs/dbt-athena/lib/python3.10/site-packages/dbt/task/init.py", line 305, in run
    self.create_profile_from_target(adapter, profile_name=project_name)
  File "/home/ec2-user/miniconda3/envs/dbt-athena/lib/python3.10/site-packages/dbt/task/init.py", line 173, in create_profile_from_target
    load_plugin(adapter)
  File "/home/ec2-user/miniconda3/envs/dbt-athena/lib/python3.10/site-packages/dbt/adapters/factory.py", line 202, in load_plugin
    return FACTORY.load_plugin(name)
  File "/home/ec2-user/miniconda3/envs/dbt-athena/lib/python3.10/site-packages/dbt/adapters/factory.py", line 57, in load_plugin
    mod: Any = import_module("." + name, "dbt.adapters")
  File "/home/ec2-user/miniconda3/envs/dbt-athena/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/ec2-user/miniconda3/envs/dbt-athena/lib/python3.10/site-packages/dbt/adapters/athena/__init__.py", line 1, in <module>
    from dbt.adapters.athena.connections import AthenaConnectionManager
  File "/home/ec2-user/miniconda3/envs/dbt-athena/lib/python3.10/site-packages/dbt/adapters/athena/connections.py", line 22, in <module>
    from dbt.exceptions import RuntimeException, FailedToConnectException
ImportError: cannot import name 'RuntimeException' from 'dbt.exceptions' (/home/ec2-user/miniconda3/envs/dbt-athena/lib/python3.10/site-packages/dbt/exceptions.py)
</code></pre>
<p>I checked my profiles.yml file:</p>
<pre><code>gapminder:
  outputs:
    dev:
      aws_profile_name: ec2-user
      database: awsdatacatalog
      region_name: us-east-1
      s3_data_dir: s3://mybucket/results/
      s3_staging_dir: s3://mybucket/results/
      schema: myschema
      threads: 1
      type: athena
  target: dev
</code></pre>
<p>So I can't create this project. Please, any help will be greatly appreciated.</p>
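<p>The traceback points at a version mismatch rather than a broken profile: <code>dbt.exceptions.RuntimeException</code> was renamed in newer dbt-core releases (to <code>DbtRuntimeError</code>), while the installed adapter still imports the old name. One hedged way out (the package name and version pins below are assumptions to verify against PyPI) is to install a core/adapter pair released together:</p>

```
# in the activated conda env; the pins are illustrative, check PyPI for a current matching pair
pip install "dbt-core~=1.5.0" "dbt-athena-community~=1.5.0"
```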
|
<python><amazon-web-services><amazon-athena><dbt>
|
2023-06-29 15:26:35
| 1
| 2,304
|
Alexis
|
76,582,455
| 11,025,049
|
How to convert an image frombytes using Pillow
|
<p>I'm trying to make a Futronic FS88 fingerprint reader work.</p>
<p>I'm using this reference: <a href="https://stackoverflow.com/questions/53182796/python-ctypes-issue-on-different-oses/53185316#53185316">python ctypes issue on different OSes</a></p>
<p>The code works. But now I want to save the fingerprint to a file, and I'm getting the following error:</p>
<pre class="lang-bash prettyprint-override"><code>ValueError: not enough image data
</code></pre>
<p>The code I added, based on the above reference, is:</p>
<pre class="lang-py prettyprint-override"><code># ct = ctypes
size = FTRSCAN_IMAGE_SIZE()
result_size = lib.ftrScanGetImageSize(handle_device, ct.byref(size))
pbuffer = ct.create_string_buffer(size.nImageSize)
result_frame = lib.ftrScanGetFrame(handle_device, pbuffer, None)
image_bytes = ct.string_at(pbuffer, size.nImageSize)
image = Image.frombytes("RGB", (size.nWidth, size.nHeight), image_bytes)
</code></pre>
<p>I'm receiving the error on the line <code>image = Image.frombytes("RGB", (size.nWidth, size.nHeight), image_bytes)</code>.</p>
<p>To better understand it, I created the following code:</p>
<pre class="lang-py prettyprint-override"><code>from PIL import Image
im = Image.open("/home/joca/Images/802610961_20633.jpg")
image_bytes = im.tobytes("raw")
size_width, size_height = im.size
image = Image.frombytes(
"RGB", (size_width, size_height), image_bytes, decoder_name="raw"
)
</code></pre>
<p>It generates the same error: <code>ValueError: not enough image data</code>.</p>
<p>What puzzles me is that I can make it work with width and height values below 500, but whenever I go above that, I always receive this error.</p>
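<p>A guess at the underlying cause (the single-channel assumption about the FS88 is unverified): <code>Image.frombytes</code> needs the buffer length to match the mode, and a fingerprint scanner's frame is typically 8-bit grayscale, i.e. <code>nImageSize == nWidth * nHeight</code> bytes, which fits mode <code>"L"</code> but is only a third of what <code>"RGB"</code> requires. A self-contained sketch of the mismatch:</p>

```python
from PIL import Image

w, h = 4, 3
gray = bytes(range(w * h))                 # 12 bytes: one per pixel
img = Image.frombytes("L", (w, h), gray)   # "L" (8-bit grayscale) expects w*h bytes
print(img.size, img.mode)

# "RGB" expects w*h*3 bytes, so the same buffer is "not enough image data"
try:
    Image.frombytes("RGB", (w, h), gray)
except ValueError as exc:
    print(exc)
```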
<h2>Changes</h2>
<p>I change this block:</p>
<pre class="lang-py prettyprint-override"><code>from PIL import Image
im = Image.open("/home/joca/Images/802610961_20633.jpg")
image_bytes = im.tobytes("raw")
size_width, size_height = im.size
image = Image.frombytes(
"RGB", (size_width, size_height), image_bytes, decoder_name="raw"
)
</code></pre>
<p>For this:</p>
<pre class="lang-py prettyprint-override"><code>
with open("/home/joca/Images/802610961_20633.jpg", "rb") as f:
im = f.read()
print(type(im))
print(dir(im))
image = Image.open(io.BytesIO(im))
</code></pre>
<p>This looks OK to me; I will test it.</p>
|
<python><python-imaging-library><ctypes>
|
2023-06-29 15:21:10
| 0
| 625
|
Joey Fran
|
76,582,236
| 774,133
|
grid search using a pipeline
|
<p>I am not sure I am using correctly the hyperparameter search functions in <code>scikit-learn</code>.</p>
<p>Please consider this code:</p>
<pre class="lang-py prettyprint-override"><code>from sklearn import datasets
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from sklearn.preprocessing import MinMaxScaler, StandardScaler

iris = datasets.load_iris()
X = iris.data
y = iris.target

scalers = [
    StandardScaler(),
    # MinMaxScaler(feature_range=(0,1)),
    MinMaxScaler(feature_range=(-1,1)),
    # PowerTransformer(),
    # RobustScaler(unit_variance=True)
]

svm_param = {'scaler': scalers,
             'learner': [LinearSVC()],
             # 'learner__dual': [True, False],  # with True svm selects random features
             'learner__C': [0.01, 0.1, 1.0, 10.0, 100.0, 1000.0],  # uniform(loc=1e-5, scale=1e+5), # [0.5, 1.0, 2],
             'learner__tol': [1e-4],  # svm_learner_tol, # [1e-5, 1e-4, 1e-3],
             'learner__random_state': [22],
             'learner__max_iter': [1000]}

pipe = Pipeline([
    ("scaler", None),
    ("learner", None)
])

grid = GridSearchCV(
    pipe, param_grid=svm_param,
    scoring="accuracy",
    verbose=2,
    refit=True,
    cv=5, return_train_score=True)

n_features = [X.shape[1], 20]
for nf in n_features:
    X = iris.data[:, :nf]  # we only take the first nf features
    print("n_features", nf)
    grid = GridSearchCV(
        pipe, param_grid=svm_param,
        scoring="accuracy",
        verbose=2,
        refit=True,
        cv=5, return_train_score=True)
    grid.fit(X, y)
</code></pre>
<p>Basically, I would like to perform a grid search using two scalers and some parameters of a classifier, that, in this case, is SVM.</p>
<p>However, this is the output I get. In the first iteration over the number of features, I read:</p>
<pre class="lang-py prettyprint-override"><code>(150, 4)
n_features 4
Fitting 5 folds for each of 12 candidates, totalling 60 fits
[CV] END learner=LinearSVC(), learner__C=0.01, learner__max_iter=1000, learner__random_state=22, learner__tol=0.0001, scaler=StandardScaler(); total time= 0.0s
[CV] END learner=LinearSVC(), learner__C=0.01, learner__max_iter=1000, learner__random_state=22, learner__tol=0.0001, scaler=StandardScaler(); total time= 0.0s
[CV] END learner=LinearSVC(), learner__C=0.01, learner__max_iter=1000, learner__random_state=22, learner__tol=0.0001, scaler=StandardScaler(); total time= 0.0s
[CV] END learner=LinearSVC(), learner__C=0.01, learner__max_iter=1000, learner__random_state=22, learner__tol=0.0001, scaler=StandardScaler(); total time= 0.0s
[CV] END learner=LinearSVC(), learner__C=0.01, learner__max_iter=1000, learner__random_state=22, learner__tol=0.0001, scaler=StandardScaler(); total time= 0.0s
...
</code></pre>
<p>that seems fine. However, at the second iteration of the for loop, I get:</p>
<pre><code>n_features 20
Fitting 5 folds for each of 12 candidates, totalling 60 fits
[CV] END learner=LinearSVC(C=100.0, random_state=22), learner__C=0.01, learner__max_iter=1000, learner__random_state=22, learner__tol=0.0001, scaler=StandardScaler(); total time= 0.0s
[CV] END learner=LinearSVC(C=100.0, random_state=22), learner__C=0.01, learner__max_iter=1000, learner__random_state=22, learner__tol=0.0001, scaler=StandardScaler(); total time= 0.0s
[CV] END learner=LinearSVC(C=100.0, random_state=22), learner__C=0.01, learner__max_iter=1000, learner__random_state=22, learner__tol=0.0001, scaler=StandardScaler(); total time= 0.0s
[CV] END learner=LinearSVC(C=100.0, random_state=22), learner__C=0.01, learner__max_iter=1000, learner__random_state=22, learner__tol=0.0001, scaler=StandardScaler(); total time= 0.0s
[CV] END learner=LinearSVC(C=100.0, random_state=22), learner__C=0.01, learner__max_iter=1000, learner__random_state=22, learner__tol=0.0001, scaler=StandardScaler(); total time= 0.0s
...
</code></pre>
<p>During the first iteration, the learner is printed out as:</p>
<pre><code>learner=LinearSVC(), learner__C=0.01, learner__max_iter=1000, ...
</code></pre>
<p>i.e., default constructor followed by the parameters.</p>
<p>During the second iteration, the constructor does not correspond to the default one:</p>
<pre><code>learner=LinearSVC(C=100.0, random_state=22), learner__C=0.01, learner__max_iter=1000, ...
</code></pre>
<p>In this case, the constructor uses <code>C=100.0</code>, but <code>learner_C=0.01</code>.</p>
<p>Is this normal, or am I doing something wrong?</p>
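<p>A plausible (hedged) explanation: the <code>LinearSVC()</code> placed in <code>svm_param</code> is a single shared instance, and setting candidate parameters mutates that same object, so the <code>learner=LinearSVC(C=100.0, ...)</code> part of the log is the leftover repr from a previous candidate, while the effective value is the <code>learner__C=...</code> printed next to it. A small sketch of that mutation (a simplification of the search's internals, not an exact reproduction):</p>

```python
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

shared = LinearSVC()                        # the single instance in the param grid

pipe_a = Pipeline([("learner", None)])
pipe_a.set_params(learner=shared, learner__C=100.0)   # one candidate's params

pipe_b = Pipeline([("learner", None)])
pipe_b.set_params(learner=shared, learner__C=0.01)    # next candidate reuses the object

# Both pipelines hold the very same estimator, whose repr reflects the last set_params
print(shared.C)
```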
|
<python><scikit-learn>
|
2023-06-29 14:55:12
| 1
| 3,234
|
Antonio Sesto
|
76,582,211
| 16,279,937
|
What method is called when using dot notation in MyClass.my_attribute?
|
<p>I want to print "getting x attr" whenever I use the dot notation to get an attribute.</p>
<p>For example</p>
<pre class="lang-py prettyprint-override"><code>class MyClass:
    my_attribute = "abc"

print(MyClass.my_attribute)
</code></pre>
<p>I'd expect it to output like this:</p>
<pre><code>>>> getting my_attribute attr
>>> abc
</code></pre>
<p>I'm <em>not</em> instantiating the class.
I tried adding a print statement to <code>MyClass.__getattr__()</code> and <code>MyClass.__getattribute__()</code>, but neither of them prints anything.</p>
<p>I tried the following:</p>
<pre><code>from typing import Any

class MyClass:
    def __getattr__(self, key):
        print("running __getattr__")
        return super().__getattr__(key)

    def __getattribute__(self, __name: str) -> Any:
        print("running __getattribute__")
        return super().__getattribute__(__name)

    my_attribute = "abc"
</code></pre>
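<p><code>__getattribute__</code> defined on the class only intercepts attribute access on <em>instances</em>; access on the class itself goes through <code>type(MyClass)</code>, i.e. the metaclass. A sketch using a custom metaclass:</p>

```python
class LoggingMeta(type):
    def __getattribute__(cls, name):
        # Dunder lookups also pass through here, so keep the message selective
        if not name.startswith('__'):
            print(f"getting {name} attr")
        return super().__getattribute__(name)

class MyClass(metaclass=LoggingMeta):
    my_attribute = "abc"

print(MyClass.my_attribute)
```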
|
<python><python-3.x>
|
2023-06-29 14:50:58
| 1
| 369
|
Benjamin_Mourgues
|
76,582,088
| 4,551,325
|
Pandas Dataframe interpolate inside with constant value
|
<p>How to make:</p>
<pre><code>[In1]: df = pd.DataFrame({
           'col1': [100, np.nan, np.nan, 100, np.nan, np.nan, np.nan],
           'col2': [np.nan, 100, np.nan, np.nan, np.nan, 100, np.nan]})
       df

[Out1]:
   col1  col2
0   100   NaN
1   NaN   100
2   NaN   NaN
3   100   NaN
4   NaN   NaN
5   NaN   100
6   NaN   NaN
</code></pre>
<p>into:</p>
<pre><code>[Out2]:
   col1  col2
0   100   NaN
1     0   100
2     0     0
3   100     0
4   NaN   NaN
5   NaN   100
6   NaN   NaN
</code></pre>
<p>So basically I want to interpolate/fill NaNs with zero, but only for the <em>inside</em> area and with a <code>limit=2</code>. Note that in <code>col2</code> there are three consecutive NaNs in the middle and only two of them are replaced with zero.</p>
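<p>One possible approach (a sketch; <code>fill_inside</code> is an invented helper name): mark the "inside" positions as those with a valid value both before and after, number the NaNs within each gap, and fill only the first <code>limit</code> NaNs of each inside gap:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'col1': [100, np.nan, np.nan, 100, np.nan, np.nan, np.nan],
    'col2': [np.nan, 100, np.nan, np.nan, np.nan, 100, np.nan]})

def fill_inside(s, value=0, limit=2):
    # "inside" = strictly between two valid values (leading/trailing NaNs excluded)
    inside = s.ffill().notna() & s.bfill().notna()
    # position of each NaN within its consecutive run: 1, 2, 3, ...
    gap_pos = s.isna().astype(int).groupby(s.notna().cumsum()).cumsum()
    return s.mask(s.isna() & inside & (gap_pos <= limit), value)

print(df.apply(fill_inside))
```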
|
<python><pandas><dataframe><interpolation><fillna>
|
2023-06-29 14:35:38
| 2
| 1,755
|
data-monkey
|
76,582,042
| 3,016,709
|
Extract sentences of text file and put in a dataframe
|
<p>Given a .txt file:</p>
<pre><code>file = open("file.txt", "r")
data = file.read()
</code></pre>
<p>with a text that looks like:</p>
<blockquote>
<p>"table products section: <code><bold></code>'This is the section A'<code></bold></code> introduction:
'blablabla' subsection: <code><bold></code>'This is subsection A1'<code></bold></code> content:
'blablabla' subsection: <code><bold></code>'This is subsection A2' <code></bold></code> content:
'blablabla' subsection: <code><bold></code>'This is subsection A3'<code></bold></code> content:
'blablabla' section: <code><bold></code>'This is the section B'<code></bold></code>
introduction: 'blablabla' subsection: <code><bold></code>'This is subsection
B1'<code></bold></code> content: 'blablabla' subsection: <code><bold></code>'This is subsection
B2'<code></bold></code> content: 'blablabla' section: <code><bold></code>'This is the section
C'<code></bold></code> introduction: 'blablabla' subsection: <code><bold></code>'This is
subsection C1'<code></bold></code> content: 'blablabla' subsection: <code><bold></code>'This is
subsection C2'<code></bold></code> content: 'blablabla' subsection: <code><bold></code>'This is
subsection C3'<code></bold></code> content: 'blablabla' subsection: <code><bold></code>'This is
subsection C4'<code></bold></code> content: 'blablabla'"</p>
</blockquote>
<p>I need to extract the sentences of 'section' and 'subsection', and put them in a table. Expected output:</p>
<pre><code>section               subsection
'This is section A'   'This is subsection A1'
'This is section A'   'This is subsection A2'
'This is section A'   'This is subsection A3'
'This is section B'   'This is subsection B1'
'This is section B'   'This is subsection B2'
'This is section C'   'This is subsection C1'
'This is section C'   'This is subsection C2'
'This is section C'   'This is subsection C3'
'This is section C'   'This is subsection C4'
</code></pre>
<p>So, basically, the code should look for the word 'section' then retrieve what's between <code><bold></code> and <code></bold></code>, then look for any 'subsection' after this 'section' and also retrieve what's between <code><bold></code> and <code></bold></code>. If no more 'subsection', do the same with next 'section' you find in the text, and so on. Until no more 'section'.</p>
<p>How could this be achieved?</p>
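<p>One way to sketch this (the simplified input string below is hypothetical; the regex and the one-pass state machine are not from the post): scan for <code>section</code>/<code>subsection</code> markers followed by a <code>&lt;bold&gt;</code>-wrapped title, remember the current section, and emit one row per subsection:</p>

```python
import re

# Hypothetical simplified input; the real file wraps titles in <bold>...</bold>
text = ("section: <bold>'This is the section A'</bold> introduction: 'x' "
        "subsection: <bold>'This is subsection A1'</bold> content: 'x' "
        "subsection: <bold>'This is subsection A2'</bold> content: 'x' "
        "section: <bold>'This is the section B'</bold> "
        "subsection: <bold>'This is subsection B1'</bold> content: 'x'")

rows = []
current_section = None
# One pass: a 'section' match updates the state, a 'subsection' match emits a row
for kind, title in re.findall(r"(section|subsection): <bold>'(.*?)'\s*</bold>", text):
    if kind == "section":
        current_section = title
    else:
        rows.append((current_section, title))
```

<p>The resulting list of pairs can then be handed to <code>pd.DataFrame(rows, columns=['section', 'subsection'])</code>.</p>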
|
<python><design-patterns><text>
|
2023-06-29 14:30:37
| 1
| 371
|
LRD
|
76,581,999
| 4,502,950
|
Jupyter notebook Kernel keeps crashing when I try to run hugging face model
|
<p>I am trying to do sentiment analysis on customer feedback. This is the code that I am using</p>
<pre><code>import warnings
from transformers import AutoModelForSequenceClassification, AutoTokenizer, AutoConfig
import numpy as np
import pandas as pd
from scipy.special import softmax
from transformers import logging
logging.set_verbosity_error()
# Example DataFrame
df = pd.DataFrame({'text': ['This movie is great!', 'I watched insidious', 'Happy this movie!', 'I feel bored.', 'The weather is nice.', np.nan]})
def predict_sentiment(text):
    if pd.isna(text):
        return 'N/A'
    MODEL = "cardiffnlp/twitter-roberta-base-sentiment-latest"
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    config = AutoConfig.from_pretrained(MODEL)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL)
    encoded_input = tokenizer(text, return_tensors='pt')
    output = model(**encoded_input)
    scores = output.logits[0].detach().numpy()
    scores = softmax(scores)
    ranking = np.argsort(scores)
    ranking = ranking[::-1]
    highest_sentiment = config.id2label[ranking[0]]
    return highest_sentiment

predicted_sentiments = []
for index, row in df.iterrows():
    text = row['text']
    sentiment = predict_sentiment(text)
    predicted_sentiments.append(sentiment)
df['Hugging Face'] = predicted_sentiments
df
</code></pre>
<p>It works and gives really good results. We tested it on a sample of 20 feedbacks and it worked well. But when I try to run this on the original dataset, which has only around 200 feedbacks, the kernel keeps dying. Not sure what is going on; the dataset is really small and the kernel crashes within 1 minute of running the code. What could be the possible reasons for the kernel crashing, and how can I fix that?</p>
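<p>A likely culprit (an assumption, not confirmed in the post): <code>predict_sentiment</code> reloads the tokenizer, config, and full model from disk for every single row, which can exhaust memory and kill the kernel. The usual fix is to load them once and reuse them; the pattern can be sketched with a cached stand-in loader:</p>

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def get_model(name):
    # Stand-in for the expensive AutoModelForSequenceClassification /
    # AutoTokenizer from_pretrained calls; the cache ensures it runs once
    print(f"loading {name}")
    return object()

a = get_model("cardiffnlp/twitter-roberta-base-sentiment-latest")
b = get_model("cardiffnlp/twitter-roberta-base-sentiment-latest")
```

<p>In the real code, hoist the four <code>from_pretrained</code> calls above <code>predict_sentiment</code> (or cache them as here) so they execute once instead of ~200 times.</p>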
|
<python><nlp><huggingface-transformers><huggingface>
|
2023-06-29 14:26:34
| 0
| 693
|
hyeri
|
76,581,885
| 18,018,869
|
How can I change form field population *after* successful POST request for Class-Based-Views
|
<p>I want to change the forms initial field population. Here is the catch: I want to change it for the POST request and not for the GET request.</p>
<p>Let's say my app offers a calculation based on a body weight input. If the user inputs a body weight calculation happens with that weight, if not a default of 50 is used. So it is allowed to let the field empty. But after the calculation ran, I want to display that the value used for calculation was 50. That's why I want to populate the field with 50 after the form was initially submitted with an empty value.</p>
<pre class="lang-py prettyprint-override"><code># forms.py
class CalculatorForm(forms.Form):
    body_weight = forms.FloatField(required=False)

    def clean(self):
        cleaned_data = super().clean()
        if cleaned_data['body_weight'] is None:
            cleaned_data['body_weight'] = 50.0
            # ideally I want to put the logic here
            # self.data[bpka] = 50.0 --> AttributeError: This QueryDict instance is immutable
            # self.fields['body_weight'] = 50.0 --> does not work
            # self.initial.update({'body_weight': 50.0}) --> does not work
        return cleaned_data
</code></pre>
<p>I feel like the problem is that I cannot reach the bound fields from the <code>clean()</code> method.</p>
<p>If it is not possible within the form the logic has to go to the view I guess. Feel free to modify here:</p>
<pre class="lang-py prettyprint-override"><code>class CalculatorView(FormView):
    template_name = 'myapp/mycalculator.html'
    form_class = CalculatorForm

    def get_context_data(self, res=None, **kwargs):
        context = super().get_context_data(**kwargs)
        if res is not None:
            context['result'] = res
        return context

    def form_valid(self, form, *args, **kwargs):
        # form['body_weight'] = 50.0 --> does not work
        res = form.cleaned_data['body_weight'] / 2
        return render(
            self.request,
            self.template_name,
            context=self.get_context_data(res=res, **kwargs)
        )
</code></pre>
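<p>One workaround (a sketch, not from the post): in the view, take a mutable copy of the POST data, inject the default, and re-bind the form, since Django's <code>QueryDict.copy()</code> returns a mutable copy while the original <code>request.POST</code> is immutable. The pattern is modeled below with a read-only mapping standing in for <code>request.POST</code>:</p>

```python
from types import MappingProxyType

# Stand-in for Django's immutable request.POST QueryDict
post_data = MappingProxyType({"body_weight": ""})

# Like request.POST.copy(): take a mutable copy, then inject the default
data = dict(post_data)
if not data.get("body_weight"):
    data["body_weight"] = "50.0"

# In the view, one would then re-bind: form = CalculatorForm(data)
```

<p>Rendering that re-bound form shows 50 in the field after submission, which is the behaviour asked for.</p>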
|
<python><django><django-views><django-forms>
|
2023-06-29 14:12:58
| 1
| 1,976
|
Tarquinius
|
76,581,831
| 11,079,284
|
Extracting URL from href attribute of specific <a> tag using Python and Selenium
|
<p>I'm trying to extract a URL from the <code>href</code> attribute of a specific <code>&lt;a&gt;</code> tag on a webpage using Python and Selenium. The tag has an <code>href</code> attribute that starts with "javascript:SetAzurePlayerFileName", and I want to extract the URL that follows this string.</p>
<p>Here's an example of the tag (from this <a href="https://main.knesset.gov.il/activity/committees/insolvency/pages/committeetvarchive.aspx?topicid=19798" rel="nofollow noreferrer">page</a>):</p>
<pre><code><a href="javascript:SetAzurePlayerFileName('https://video.knesset.gov.il/KnsVod/_definst_/mp4:CMT/CmtSession_2081117.mp4/manifest.mpd',...">Link text</a>
</code></pre>
<p>I want to extract the URL <code>"https://video.knesset.gov.il/KnsVod/_definst_/mp4:CMT/CmtSession_2081117.mp4/manifest.mpd"</code>.</p>
<p>I've tried using BeautifulSoup with the requests library, but it seems the content of the webpage is loaded dynamically with JavaScript, so requests can't fetch the tag.</p>
<p>I then tried using Selenium with ChromeDriver, but I encountered issues with the WebDriver and the browser driver. I also tried setting the User-Agent header and waiting for the tag to be present with WebDriverWait, but I still couldn't find the element.</p>
<p>I switched to Firefox and GeckoDriver, but I'm still unable to find the element, and I'm getting a "NoSuchElementError".</p>
<p>Here's the latest version of my script:</p>
<pre class="lang-py prettyprint-override"><code>from selenium import webdriver
from selenium.webdriver.firefox.service import Service
from webdriver_manager.firefox import GeckoDriverManager
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import re
def get_url_from_webpage(url):
    options = webdriver.FirefoxOptions()
    options.add_argument('user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:89.0) Gecko/20100101 Firefox/89.0')
    driver = webdriver.Firefox(service=Service(GeckoDriverManager().install()), options=options)
    driver.get(url)
    try:
        a_tag = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, '//a[starts-with(@href, "javascript:SetAzurePlayerFileName")]')))
        href = a_tag.get_attribute('href')
        match = re.search(r"javascript:SetAzurePlayerFileName\('(.*)',", href)
        if match:
            return match.group(1)
    except Exception as e:
        print(e)
    finally:
        driver.quit()
    return None
</code></pre>
<p>Does anyone have any ideas on how I can successfully extract the URL from the href attribute of this tag?</p>
<p>Note: I believe that the main issue is that the content doesn't seem to be loaded in this mode of Python, rather than a problem with the regex or the XPath expression.</p>
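<p>Independently of the WebDriver issue, the capture pattern can be verified on its own. A non-greedy <code>(.*?)</code> is also safer than the greedy <code>(.*)',</code> used above, since it stops at the first closing quote even if further quoted arguments follow (sample <code>href</code> copied from the question):</p>

```python
import re

# The href value as Selenium would return it (URL copied from the question)
href = ("javascript:SetAzurePlayerFileName('https://video.knesset.gov.il/"
        "KnsVod/_definst_/mp4:CMT/CmtSession_2081117.mp4/manifest.mpd','other')")

# Non-greedy (.*?) stops at the first closing quote, so arguments after
# the URL cannot bleed into the capture group
m = re.search(r"SetAzurePlayerFileName\('(.*?)'", href)
url = m.group(1) if m else None
```

<p>If this extraction works on a pasted <code>href</code> but the element is never found live, the remaining problem is locating the element (frames, lazy loading), not the regex.</p>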
|
<python><selenium-webdriver><web-scraping>
|
2023-06-29 14:08:36
| 1
| 1,052
|
Yanirmr
|
76,581,769
| 2,690,578
|
plot coordinates line in a pyplot
|
<p>I have a scatter plot. I would like to plot coordinate lines for each point, from X to Y, just like we used to do in school.</p>
<p>See picture below as a reference.</p>
<p><a href="https://i.sstatic.net/0QMp2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0QMp2.png" alt="enter image description here" /></a></p>
<p>I ended up using the "grid()" property, but that is not exactly what I want.</p>
<p>I also tried to use "axhline" and "axvline" passing xmin, xmax and ymin, ymax, but it did not work. It crossed a line throughout the entire plot.</p>
<p>Have a great day!</p>
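<p>One sketch (not from the post) using <code>Axes.hlines</code>/<code>Axes.vlines</code> per point, so each guide line runs only from the axis to the point instead of crossing the entire plot the way <code>axhline</code>/<code>axvline</code> do:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

xs, ys = [1, 2, 3], [2, 4, 1]
fig, ax = plt.subplots()
ax.scatter(xs, ys)
# One dotted guide per point, drawn from the axes up to the point only
for x, y in zip(xs, ys):
    ax.hlines(y, xmin=0, xmax=x, linestyles="dotted", colors="gray")
    ax.vlines(x, ymin=0, ymax=y, linestyles="dotted", colors="gray")
ax.set_xlim(0, 4)
ax.set_ylim(0, 5)
```

<p>Unlike <code>axhline</code>/<code>axvline</code>, these take data-coordinate start and end values, which is what produces the school-style dashed guides.</p>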
|
<python><matplotlib><plot><scatter-plot>
|
2023-06-29 14:01:02
| 1
| 609
|
Gabriel
|
76,581,759
| 17,987,266
|
Why does VS Code show the word "function" as a class?
|
<p>My VS Code colors the word <code>function</code> distinctively and says it's a class:</p>
<p><a href="https://i.sstatic.net/JpW2B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpW2B.png" alt="enter image description here" /></a></p>
<p>But it behaves like a regular name such as <code>foo</code>. It's also not a built-in class, as calling it raises a <code>NameError</code> exception.</p>
<p>Does the <code>function</code> word has any special meaning?</p>
|
<python><visual-studio-code>
|
2023-06-29 14:00:01
| 3
| 369
|
sourcream
|
76,581,727
| 14,385,099
|
Updating MULTIPLE values in a column in pandas dataframe using ffill (or other methods)
|
<p>I have a dataframe that can be simplified like this:</p>
<pre><code>TR = [17,18,19,20,21,22,23,24,25,26,27,28,29]
Event = ['SRNeg', np.NaN, np.NaN, np.NaN, np.NaN, np.NaN, np.NaN, np.NaN, 'Rating', np.NaN, np.NaN, 'ArrowTask', np.NaN]
df = pd.DataFrame({'Event':Event,'TR':TR})
</code></pre>
<p>I want to fill in some NaNs after the <code>event</code> value with that value. Note that not all NaNs are filled. Here is the ideal output:</p>
<pre><code>TR = [17,18,19,20,21,22,23,24,25,26,27,28,29]
Event = ['SRNeg', np.NaN, np.NaN, np.NaN, np.NaN, np.NaN, np.NaN, np.NaN, 'Rating', np.NaN, np.NaN, 'ArrowTask', np.NaN]
Event_BOLD_Duration = ['SRNeg', 'SRNeg', 'SRNeg', 'SRNeg', 'SRNeg', 'SRNeg', 'SRNeg', 'SRNeg', 'Rating', 'Rating', np.NaN, 'ArrowTask', 'ArrowTask']
df = pd.DataFrame({'Event':Event,'Event_BOLD_Duration':Event_BOLD_Duration,'TR':TR})
</code></pre>
<p>Here is the code I have so far to complete the above task.</p>
<pre><code>#line 1
scores_full['Event_BOLD_Duration'] = scores_full['Event'].where(scores_full['Event'].isin(['SRNeg', 'SINeg'])).ffill(limit=7).fillna(scores_full['Event'])
#line 2
scores_full['Event_BOLD_Duration'] = scores_full['Event'].where(scores_full['Event'].isin(['ArrowTask'])).ffill(limit=4).fillna(scores_full['Event'])
#line 3
scores_full['Event_BOLD_Duration'] = scores_full['Event'].where(scores_full['Event'].isin(['Rating'])).ffill(limit=1).fillna(scores_full['Event'])
</code></pre>
<p>However, each line seems to override the previous line's output. How can I fix this issue?</p>
<p>Thank you!</p>
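<p>The overriding happens because each line recomputes the column from <code>Event</code> from scratch, discarding the previous line's result. Layering the per-event fills with <code>combine_first</code> keeps earlier results; a sketch (the per-event limits follow the post's code):</p>

```python
import numpy as np
import pandas as pd

TR = list(range(17, 30))
Event = ['SRNeg', np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan,
         'Rating', np.nan, np.nan, 'ArrowTask', np.nan]
df = pd.DataFrame({'Event': Event, 'TR': TR})

# Each event value gets its own ffill limit; results are layered, not overwritten
limits = {'SRNeg': 7, 'Rating': 1, 'ArrowTask': 4}
out = pd.Series(np.nan, index=df.index, dtype=object)
for value, limit in limits.items():
    filled = df['Event'].where(df['Event'] == value).ffill(limit=limit)
    out = out.combine_first(filled)
df['Event_BOLD_Duration'] = out
```

<p><code>combine_first</code> only fills positions that are still NaN, so the three event types never clobber each other.</p>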
|
<python><pandas>
|
2023-06-29 13:57:02
| 2
| 753
|
jo_
|
76,581,516
| 10,353,865
|
"np.nan" isn't converted properly but "None" is
|
<p>In the following code I generate some data containing the value np.nan:</p>
<pre><code>import pandas as pd
import numpy as np
n = 20
df = pd.DataFrame({"x": np.random.choice(["dog","cat",np.nan],n), "y": range(0,n)})
</code></pre>
<p>Subsequently I check for missing values via the function pd.notnull and this does not indicate that there are any missing values:</p>
<pre><code>pd.notnull(df["x"])
</code></pre>
<p>Ok, the reason is that the np.nan used in the creation somehow got translated into the string "nan". But why? For instance, if I substitute None for np.nan, i.e. if I create the data via np.random.choice(["dog","cat",None],n), then everything works.</p>
<p>Can someone explain why np.nan isn't properly converted? And in general: How do I create random missing data for a string column without using np.nan or the None object?</p>
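<p>The coercion can be seen directly in NumPy: a mixed string/float list is forced to a single string dtype, turning <code>np.nan</code> into the literal string <code>'nan'</code>, whereas an object-dtype array keeps the real float <code>nan</code>, which pandas then detects as missing. A sketch:</p>

```python
import numpy as np

# A mixed list is coerced to one NumPy dtype; with strings present,
# np.nan is cast to the *string* 'nan'
arr = np.array(["dog", "cat", np.nan])

# An object-dtype array keeps the float nan untouched
arr_obj = np.array(["dog", "cat", np.nan], dtype=object)

# So for random missing string data, sample from the object-dtype array
n = 20
sample = np.random.choice(arr_obj, n)
```

<p>This is why the <code>None</code> version works: <code>None</code> has no string representation NumPy can unify with, so the array falls back to object dtype on its own.</p>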
|
<python><pandas><numpy>
|
2023-06-29 13:27:23
| 2
| 702
|
P.Jo
|
76,581,453
| 188,331
|
Compute BLEU score of a Pandas DataFrame with valid rows filtered
|
<p>I have a Pandas DataFrame from an Excel file, which contains text data which need to calculate the BLEU score row-by-row.</p>
<pre><code>import evaluate
import pandas as pd
sacrebleu = evaluate.load("sacrebleu")
testset = pd.read_excel(xlsx_filename)
# find out valid rows with all columns are valid
valid_rows = testset['col1'].notna() & testset['col2'].notna() & testset['col3'].notna()
for i in range(len(testset)):  # or... for i in range(len(testset.loc[valid_rows, 'col2']))
    score = sacrebleu.compute(predictions=[testset.loc[valid_rows, 'col1'][i], testset.loc[valid_rows, 'col2'][i]], references=[testset.loc[valid_rows, 'col3'][i]])
</code></pre>
<p>It raises <code>KeyError: 139</code>.</p>
<p>The length of <code>valid_rows</code> and <code>testset</code> are 13700, while the length of <code>testset.loc[valid_rows, 'col2']</code> is 12208.</p>
<p>I know that looping through a DataFrame with a for-loop is an anti-pattern, but how can I fit a Series into the <code>sacrebleu.compute()</code> function? It accepts only <code>[string, string], string</code> as input.</p>
<p>How can I solve this problem?</p>
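<p>The <code>KeyError</code> arises because <code>range(len(testset))</code> produces positional numbers while <code>testset.loc[valid_rows, 'col1']</code> keeps the original (now gappy) index labels, so label 139 may no longer exist. Iterating the filtered rows directly avoids the mismatch; sketched below with toy data and a stand-in for <code>sacrebleu.compute</code>:</p>

```python
import pandas as pd

# Toy stand-in for the Excel data; row 1 and row 2 have missing cells
testset = pd.DataFrame({
    "col1": ["a", None, "c"],
    "col2": ["x", "y", "z"],
    "col3": ["r", "s", None],
})
valid = testset[["col1", "col2", "col3"]].notna().all(axis=1)

scores = []
# itertuples walks only the surviving rows, so no stale index labels are used
for row in testset.loc[valid].itertuples(index=False):
    # real call: sacrebleu.compute(predictions=[row.col1, row.col2],
    #                              references=[row.col3])
    scores.append((row.col1, row.col2, row.col3))
```

<p>Alternatively, <code>testset.loc[valid].reset_index(drop=True)</code> realigns the labels with positions so the original loop works unchanged.</p>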
|
<python><pandas><bleu><huggingface-evaluate>
|
2023-06-29 13:17:52
| 0
| 54,395
|
Raptor
|
76,581,386
| 2,107,667
|
In VS Code, see dependencies from virtual environment using python and poetry Pylint(E0401:import-error)
|
<p>For my project I use <strong>poetry</strong> to install all <strong>python 3.10</strong> dependencies. It works.</p>
<pre class="lang-bash prettyprint-override"><code>$ poetry env list
## .venv (Activated)
</code></pre>
<p>But in VS Code, <strong>Pylint</strong> does not recognise any library installed via poetry, and the <em>Select interpreter</em> dialog doesn't propose this <code>.venv</code> environment.</p>
<p><strong>How may I declare the right python environment <code>.venv</code> such that pylint find the installed libraries?</strong></p>
<p>I want to get rid of that:</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI
## Unable to import 'fastapi' Pylint(E0401:import-error)
</code></pre>
|
<python><visual-studio-code><pylint><python-poetry>
|
2023-06-29 13:09:21
| 0
| 3,039
|
Costin
|
76,581,380
| 736,662
|
Locust/Python and using values from one list in another list
|
<p>I have list 1:</p>
<pre><code>TS_IDs = [
    '10158',
    '10159',
    '10174'
]
</code></pre>
<p>and list 2:</p>
<pre><code>values = [
    {
        "id": 10017,
        "values": [
            {
                "from": "2022-10-30T12:00:00Z",
                "to": "2022-10-30T13:00:00Z",
                "value": 15
            },
            {
                "from": "2022-10-30T13:00:00Z",
                "to": "2022-10-30T14:00:00Z",
                "value": 16
            }
        ]
    }
]
</code></pre>
<p>I want to use the values from list 1 as the <code>id</code> value in list 2 (here 10017).</p>
<p>How can that be obtained?</p>
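<p>One sketch (treating the list-2 entry as a template to stamp out per TS id, which is an assumption about the intent): deep-copy the template for each id and overwrite its <code>id</code> field, so the nested <code>values</code> lists stay independent between payloads:</p>

```python
import copy

TS_IDs = ['10158', '10159', '10174']
template = {
    "id": 10017,
    "values": [
        {"from": "2022-10-30T12:00:00Z", "to": "2022-10-30T13:00:00Z", "value": 15},
    ],
}

# One payload per TS id; deepcopy keeps the nested lists independent
payloads = []
for ts_id in TS_IDs:
    p = copy.deepcopy(template)
    p["id"] = int(ts_id)
    payloads.append(p)
```

<p>Each element of <code>payloads</code> can then be sent as the request body of a Locust task.</p>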
|
<python><locust>
|
2023-06-29 13:08:54
| 1
| 1,003
|
Magnus Jensen
|
76,581,376
| 1,612,060
|
Count minutes per day over index
|
<p>I have a dataframe with irregular timestamps (in seconds) spanning multiple days. I would like to bucket these entries into one-minute buckets and keep an increasing counter in a new column. All values within the same minute should get the same counter value, which increases with the number of minutes per day; on a new day the counter should start from 1 again.</p>
<pre><code>                     Value  Counter
2020-01-01 10:00:00    7.         1
2020-01-01 10:00:05   45.         1
2020-01-01 10:00:10   25.         1
2020-01-01 10:02:00   85.         2
2020-01-02 07:00:00   51.         1
2020-01-02 10:00:00   52.         2
</code></pre>
<p>I thought about something like this:</p>
<pre><code>df['Counter'] = df.groupby([df.index.dt.day, df.index.dt.minute]).count()
</code></pre>
<p>Which does not seem to work.</p>
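<p><code>df.index.dt</code> fails because a <code>DatetimeIndex</code> has no <code>.dt</code> accessor (that is for Series); its methods are available directly on the index. One sketch (not from the post): floor the timestamps to the minute, then number the distinct minutes within each day:</p>

```python
import pandas as pd

idx = pd.to_datetime([
    "2020-01-01 10:00:00", "2020-01-01 10:00:05", "2020-01-01 10:00:10",
    "2020-01-01 10:02:00", "2020-01-02 07:00:00", "2020-01-02 10:00:00",
])
df = pd.DataFrame({"Value": [7, 45, 25, 85, 51, 52]}, index=idx)

# Floor every timestamp to its minute, then rank distinct minutes per day;
# factorize assigns 0, 1, 2, ... per first appearance within the group
minute = df.index.floor("min")
df["Counter"] = (
    pd.Series(minute, index=df.index)
      .groupby(df.index.date)
      .transform(lambda s: s.factorize()[0] + 1)
)
```

<p><code>factorize</code> restarts from zero inside each daily group, which gives the day-relative counter asked for.</p>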
|
<python><dataframe><group-by><bucket><resample>
|
2023-06-29 13:08:29
| 2
| 843
|
ThatQuantDude
|
76,581,366
| 6,937,465
|
Convert pptx to pdf using an azure function
|
<p>I'm trying to convert a PowerPoint file to a PDF file using an Azure function which is based on a HTTP trigger. The files are located in one blob storage container as PowerPoints and I want them to be placed in another blob storage container as PDFs whenever the function app is called through a logic app. The idea is that whenever a new file is added to the ppt container, the logic app will call the function app and place the pdf into the pdf container as part of a workflow. The function app can work on one file at a time.</p>
<p>I've been looking at the answers to <a href="https://stackoverflow.com/questions/31487478/how-to-convert-a-pptx-to-pdf-using-python">this</a> question but they require an input path and an output path. I believe this isn't right for my needs as I'll just be passing the blob contents to the function via a logic app. I've developed this so far:</p>
<pre><code>def convert_pptx_to_pdf(inputFile):
    powerpoint = comtypes.client.CreateObject("Powerpoint.Application")
    deck = powerpoint.Presentations.Open(inputFile)
    with tempfile.NamedTemporaryFile(suffix = ".pdf") as temp:
        deck.SaveAs(temp, 32) # causes errors
    output_stream = io.BytesIO()
    deck.output(output_stream, "F")
    return output_stream.getvalue()

def main(req: func.HttpRequest) -> func.HttpResponse:
    pptx_file = req.get_body()
    pdf_data = convert_pptx_to_pdf(pptx_file)
    headers = {
        "Content-Disposition": "attachment;filename=output.pdf",
        "Content-Type": "application/pdf",
    }
    return func.HttpResponse(body=pdf_data, headers=headers, status_code=200)
</code></pre>
<p>However, I keep getting this error when I run convert_pptx_to_pdf because the temp file instance is not a string:</p>
<blockquote>
<p>ArgumentError: argument 1: : unicode string expected instead of _TemporaryFileWrapper instance</p>
</blockquote>
<p>Any suggestions for how to fix this is appreciated or any other suggestions of how to go about doing this are also welcome!</p>
<p>Thanks</p>
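<p>The error message says <code>SaveAs</code> received the <code>_TemporaryFileWrapper</code> object where it expects a path <em>string</em>. One sketch of the fix (the COM call itself is commented out, since PowerPoint/comtypes is Windows-only and unavailable on Azure Functions' Linux workers, which is a separate hurdle to consider):</p>

```python
import os
import tempfile

# SaveAs-style APIs want a filesystem path string, not the wrapper object.
# delete=False lets us close the handle first and hand over tmp.name (a str).
tmp = tempfile.NamedTemporaryFile(suffix=".pdf", delete=False)
tmp.close()
pdf_path = tmp.name  # this string is what deck.SaveAs(pdf_path, 32) expects

# ... deck.SaveAs(pdf_path, 32) would run here (comtypes, Windows only) ...

# Read the bytes back for the HTTP response, then clean up the temp file
with open(pdf_path, "rb") as f:
    pdf_data = f.read()
os.unlink(pdf_path)
```

<p>Passing <code>temp</code> itself hands COM the Python wrapper object, hence the "unicode string expected" error; <code>temp.name</code> is the string it wants.</p>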
|
<python><windows><azure-functions><type-conversion>
|
2023-06-29 13:07:06
| 1
| 426
|
Carolina Karoullas
|
76,581,229
| 11,143,781
|
Is it possible to check if GPU is available without using deep learning packages like TF or torch?
|
<p>I would like to check if there is access to GPUs by using any packages other than tensorflow or PyTorch. I have found the <code>psutil.virtual_memory().percent</code> function that returns the usage of the GPU. However, this function could still return 0 if GPUs are utilized but not loaded. I wonder if there is another way to check it. Thanks!</p>
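<p>Note that <code>psutil.virtual_memory()</code> reports system RAM, not GPU memory. One framework-free sketch (an NVIDIA-only assumption, relying on the driver's <code>nvidia-smi</code> tool being on <code>PATH</code>) is to shell out and check whether any GPU is listed:</p>

```python
import shutil
import subprocess

def gpu_available() -> bool:
    """Best-effort NVIDIA GPU check via nvidia-smi, with no ML frameworks."""
    if shutil.which("nvidia-smi") is None:
        return False  # driver tools not installed -> assume no GPU
    try:
        out = subprocess.run(["nvidia-smi", "-L"],
                             capture_output=True, timeout=5)
    except (OSError, subprocess.TimeoutExpired):
        return False
    # "-L" lists one line per detected GPU; empty output means none found
    return out.returncode == 0 and bool(out.stdout.strip())
```

<p>For richer queries (utilization, memory) the <code>pynvml</code> bindings over the same NVIDIA management library are another framework-free option.</p>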
|
<python><gpu>
|
2023-06-29 12:49:50
| 1
| 316
|
justRandomLearner
|
76,581,181
| 20,220,485
|
How do you preserve hue associations across multiple plots in seaborn?
|
<p>With the following code as an example, how can I preserve the column-colour associations that you see in Plot 1 in the plot below it, Plot 2?</p>
<p>Is there a way to instruct <code>sns</code> to 'skip' a hue at specific points? I am looking for a simple workaround that I can control finely, and ideally without manually assigning hex codes.</p>
<p>To be clear, in Plot 2 I would like <code>'C'</code> to appear green and <code>'E'</code> to appear purple.</p>
<p>Any advice is appreciated, thanks.</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
np.random.seed(42)
df_1 = pd.DataFrame(np.random.rand(5, 5), columns=['A', 'B', 'C', 'D', 'E'])
df_2 = df_1.drop(['B', 'D'], axis=1)
</code></pre>
<p>— Plot 1 —</p>
<pre><code>sns.boxplot(data=df_1,
orient="h")
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/gf846.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gf846m.png" alt="enter image description here" /></a></p>
<p>— Plot 2 —</p>
<pre><code>sns.boxplot(data=df_2,
orient="h")
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/bta5v.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bta5vm.png" alt="enter image description here" /></a></p>
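<p>One workaround (a sketch, not from the post): fix a colour per column up front and pass that mapping as seaborn's <code>palette</code>; seaborn accepts a dict keyed by the level names, so dropping columns no longer shifts the colours:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

cols = ["A", "B", "C", "D", "E"]
cmap = plt.cm.tab10
# One fixed colour per column name, reused for every subsequent plot
palette = {c: cmap(i) for i, c in enumerate(cols)}

# e.g. sns.boxplot(data=df_1, orient="h", palette=palette)
#      sns.boxplot(data=df_2, orient="h", palette=palette)
```

<p>Because the mapping is keyed by name rather than position, <code>'C'</code> and <code>'E'</code> keep their Plot 1 colours in Plot 2.</p>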
|
<python><matplotlib><colors><seaborn>
|
2023-06-29 12:43:14
| 1
| 344
|
doine
|
76,581,153
| 4,575,197
|
Removing a repetitive item in a list, ignores other items
|
<p>I have a list of words in Polish and I want to remove the single words from this list; that means I only want sentences. That's why I have an <em><strong>if</strong></em> with this condition: <code>if len(item.split())<=1:</code>. The problem occurs when I run the function: it removes only this word, <em><strong><code>Powrót</code></strong></em>, and ignores the words that come after it, like <code>Energetyka</code>.</p>
<pre><code>_webTextList=['MOTORYZACJA', 'Szukaj w serwisie', 'Powrót', 'Energetyka', 'Powrót', 'Gazownictwo', 'Powrót', 'Górnictwo', 'Powrót', 'Hutnictwo', 'Powrót', 'Nafta', 'Powrót', 'Chemia', ...]#the list of words and sentenses.
def remove_one_words_from_list(_webTextList):
    for item in _webTextList:
        if len(item.split())<=1:
            _webTextList.remove(item)
    return _webTextList
</code></pre>
<p>I cannot figure out what I'm doing wrong. Thanks for the help.</p>
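<p>The skips happen because <code>list.remove</code> shifts the remaining items one position left while the loop's internal position still advances, so the element that slides into the removed item's slot is never examined. Building a new list instead of mutating during iteration avoids this:</p>

```python
web_text = ['MOTORYZACJA', 'Szukaj w serwisie', 'Powrót', 'Energetyka',
            'Powrót', 'Gazownictwo', 'Powrót', 'Górnictwo']

# A list comprehension filters without mutating the list being iterated,
# so consecutive single words are all dropped
sentences = [item for item in web_text if len(item.split()) > 1]
```

<p>This is why only every other <code>'Powrót'</code> disappeared in the original: each removal skipped the following element.</p>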
|
<python><pandas><list><for-loop>
|
2023-06-29 12:39:44
| 1
| 10,490
|
Mostafa Bouzari
|
76,581,027
| 6,178,591
|
How to send date within an input date control with onkeydown="return false" using Selenium Python
|
<p>I have this minimal html:</p>
<pre><code><!DOCTYPE html>
<html>
<body>
<input type="date" max="2023-03-09" value="2023-03-09" onkeydown="return false">
</body>
</html>
</code></pre>
<p>That just asks for a date, but <code>onkeydown="return false"</code> prevents keyboard input. So I have to navigate the (I guess browser-generated) calendar, but don't know how to access it. Even the calendar icon in the control is difficult to access. I have resorted to clicking with a fixed offset, but maybe there's a better way.</p>
<p>My minimal python code is:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver import ActionChains
import time
driver = webdriver.Firefox()
driver.get("E:\\Web\\TestDate\\public_html\\index.html")
buttonDate = driver.find_element(By.TAG_NAME, "input")
action = ActionChains(driver)
w, h = buttonDate.size['width'], buttonDate.size['height']
x, y = buttonDate.location['x'], buttonDate.location['y']
wx, wy = driver.get_window_size()['width'], driver.get_window_size()['height']
action.move_to_element_with_offset(buttonDate, w - 10, h - 7)
action.click()
action.perform()
time.sleep(30)
driver.quit()
</code></pre>
<p>With that I can get the calendar control open, but cannot use <code>send_keys()</code> to change the date.</p>
<p>Edit: Thanks for all the answers, you all saved me. I have accepted the shortest, most general purpose one, even if all were good.</p>
|
<javascript><python><date><selenium-webdriver><onkeydown>
|
2023-06-29 12:22:43
| 4
| 329
|
Julen
|
76,580,823
| 1,481,986
|
Missingno heatmap line / border color
|
<p>I use the missingno <code>heatmap</code> visualization and would like to add a line / border to separate the different values. While I can adjust the font size, e.g. <code>msno.heatmap(df, fontsize=20)</code>, I couldn't find a way to adjust this property from the heatmap-creating method. The following attempts yield <code>TypeError</code>:</p>
<pre><code>msno.heatmap(df, linecolor='black')
msno.heatmap(df, line_color='black')
msno.heatmap(df, bordercolor='black')
msno.heatmap(df, border_color='black')
</code></pre>
|
<python><missingno>
|
2023-06-29 11:57:04
| 0
| 6,241
|
Tom Ron
|
76,580,643
| 1,171,193
|
jupyter notebook export omits markdown
|
<p>Given a jupyter notebook (here Test_Export.ipynb) with the following content:</p>
<pre><code>{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"This is markdown."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print('This is code.')"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.4"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}
</code></pre>
<p>I'd like to export the executed notebook without input code to asciidoc format. In order to do that, I use the offical docker image and execute:</p>
<p><code>docker run --rm -it --entrypoint /bin/bash -P -v ${PWD}:/home/jovyan/ jupyter/minimal-notebook:notebook-6.5.4 -c "jupyter nbconvert --execute --to asciidoc --no-input Test_Export.ipynb"</code></p>
<p>The result looks as follows:</p>
<pre><code>----
This is code.
----
</code></pre>
<p>Somehow every markdown area is completely omitted, which is the case for both the <em>base-notebook</em> and the <em>minimal-notebook</em> images. Using the <em>datascience-notebook</em> image, the markdown sections are correctly exported to the asciidoc file.</p>
<p>Why is that?</p>
|
<python><docker><jupyter-notebook><jupyter><nbconvert>
|
2023-06-29 11:34:18
| 2
| 3,694
|
janr
|
76,580,554
| 8,973,620
|
Does tf.data.Dataset.from_generator cache generated data?
|
<p>My question is, does the <code>dataset</code> created with the <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_generator" rel="nofollow noreferrer"><code>from_generator</code></a> function cache the generated data after the first epoch? If it pulls data from the generator after each epoch, is there a way to cache data to speed up model training?
I work with different shape arrays (RaggedTensor) and it looks like the generator is a way to go in this case.</p>
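<p><code>from_generator</code> does not cache: the generator callable is re-invoked for every fresh iteration over the dataset (e.g. every epoch), and chaining <code>.cache()</code> onto the dataset is the documented way to store elements after the first pass. The re-invocation behaviour can be modeled in plain Python:</p>

```python
calls = {"n": 0}

def gen():
    # Counts how many times the generator body starts running
    calls["n"] += 1
    yield from range(3)

# Each "epoch" re-invokes the generator, as tf.data does without .cache()
for _epoch in range(2):
    list(gen())
```

<p>With TensorFlow the equivalent fix would look like <code>ds = tf.data.Dataset.from_generator(gen, ...).cache()</code>, after which later epochs read cached elements instead of calling <code>gen</code> again (ragged outputs included, given a matching <code>output_signature</code>).</p>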
|
<python><tensorflow><keras><dataset><generator>
|
2023-06-29 11:21:30
| 0
| 18,110
|
Mykola Zotko
|
76,580,381
| 6,275,517
|
Parsing data from an rss feed using python to a CSV display issue for salary field
|
<p>I have written a script that takes job data from an RSS feed and adds it to a CSV file, and this works great! However, I came across an issue with how salary data was displaying. The below shows how the data is appended to the data list:</p>
<pre><code># Append the entry data to the feed_data list
feed_data.append(
[
title,
published_date,
summary,
location,
application_email,
company,
company_website,
salary_from + " to " + salary_to,
]
)
</code></pre>
<p>The line <code>salary_from + " to " + salary_to</code> will show a salary range, for example 35000 to 45000. However, it was actually displaying with four digits after the decimal point, e.g. 35000.0000 to 45000.0000.</p>
<p>To get around this I altered to the following:</p>
<pre><code>salary_from[:-5] + " to " + salary_to[:-5]
</code></pre>
<p>This worked a treat and displayed as 35000 to 45000. Super! That was until I realised that certain salaries had a 0 value, which caused the output to display as just "to".</p>
<p>To try and fix this I added an if else:</p>
<pre><code>salary_from[:-5] + " to " + salary_to[:-5]
if salary_from == 0
else "Unknown",
</code></pre>
<p>However, this would then lead to every single Salary field within the CSV file displaying as "Unknown", regardless of whether it had a salary or not.</p>
<p>What I'm trying to achieve is just those that have a 0 value displaying as "Unknown" and those with a value displaying with the correct salary range. Thank you for any help!</p>
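<p>The comparison fails because the feed delivers salaries as strings (e.g. <code>"35000.0000"</code>), so <code>salary_from == 0</code> is never true; the branch order in the snippet above is also inverted (it shows the range when the salary <em>is</em> 0). One sketch that parses first and formats after, instead of slicing characters:</p>

```python
def salary_range(salary_from, salary_to):
    # The feed delivers strings like "35000.0000"; "0.0000" means unknown
    lo, hi = float(salary_from), float(salary_to)
    if lo == 0 and hi == 0:
        return "Unknown"
    return f"{lo:.0f} to {hi:.0f}"
```

<p>Parsing with <code>float</code> also copes with values that do not carry exactly four decimal places, which the <code>[:-5]</code> slice silently mangles.</p>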
|
<python>
|
2023-06-29 10:54:57
| 1
| 4,917
|
Joe
|
76,580,284
| 10,973,108
|
Storing selenium driver pages when run test to improve performance
|
<p>I was using an internal API to get data from a website, and I stored these requests using a VCR cassette, but this API is no longer available.</p>
<p>Therefore, I'm using Selenium to handle pages and scrape the data that I need, but, to improve performance, I would like to store the pages that the webdriver visits, just as I was doing with plain requests; that is, store these pages (like the requests) in the VCR cassette or something similar.</p>
<p>I thought about downloading some of the pages that the crawler visits and injecting them into the driver when running the unit tests, but maybe it's possible to configure VCR to do something like that (which would be simpler).</p>
<p>Is it possible to do this?</p>
|
<python><selenium-webdriver><python-unittest><vcr><cassette>
|
2023-06-29 10:44:30
| 0
| 348
|
Daniel Bailo
|
76,580,282
| 12,436,050
|
Group by and aggregate unique values in pandas dataframe
|
<p>I have a dataframe with following values</p>
<pre><code>col1   col2  col3
10002  en    tea
10002  es    te
10002  ru    te
10003  en    coffee
10003  de    kaffee
10003  nl    kaaffee
</code></pre>
<p>I would like to group by col1 and aggregate the values of col3 if col2 values are other than 'en'. The expected output is:</p>
<pre><code>col1   Term Name  Synonyms
10002  tea        te
10003  coffee     kaffee | kaaffee
</code></pre>
<p>I am running following code to achieve this:</p>
<pre><code># group the data and concatenate the terms
grouped = df.groupby(['col1'])\
    .agg({'col3': lambda x: ' | '.join(x[df['col2'] != 'en'])})\
    .reset_index()

# merge the original data with the grouped data
df_result = pd.merge(df[df['col2'] == 'en'], grouped, on=['col1'], how='left')
df_result = df_result[['col1', 'col3_x', 'col3_y']]
df_result.columns = ['col1', 'Term Name', 'Synonyms']
</code></pre>
<p>How can I get the unique values of col3 per col1 (e.g. just one 'te')? Any help is highly appreciated.</p>
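<p>One sketch (not from the post): drop duplicate <code>(col1, col3)</code> pairs before joining, so repeated synonyms such as <code>te</code> collapse to a single entry:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "col1": [10002, 10002, 10002, 10003, 10003, 10003],
    "col2": ["en", "es", "ru", "en", "de", "nl"],
    "col3": ["tea", "te", "te", "coffee", "kaffee", "kaaffee"],
})

non_en = df[df["col2"] != "en"]
# drop_duplicates keeps one synonym per spelling before the join
syn = (non_en.drop_duplicates(["col1", "col3"])
             .groupby("col1")["col3"]
             .agg(" | ".join)
             .rename("Synonyms")
             .reset_index())
result = (df[df["col2"] == "en"][["col1", "col3"]]
          .rename(columns={"col3": "Term Name"})
          .merge(syn, on="col1", how="left"))
```

<p>Using <code>dict.fromkeys(x)</code> inside the lambda is an order-preserving alternative to <code>drop_duplicates</code> if the original aggregation style is preferred.</p>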
|
<python><pandas>
|
2023-06-29 10:44:17
| 1
| 1,495
|
rshar
|
76,580,217
| 13,097,721
|
Apscheduler on localhost Flask missed scheduled task by 2 seconds
|
<p>I have the following apscheduler config in the <code>__init__.py</code> file of my Flask app:</p>
<pre><code>from apscheduler.jobstores.sqlalchemy import SQLAlchemyJobStore
from apscheduler.schedulers.background import BackgroundScheduler

jobstore = SQLAlchemyJobStore(url=database_string, tablename='apscheduler_jobs')
scheduler = BackgroundScheduler(jobstores={'default': jobstore})

if not scheduler.running:
    scheduler.start()
    print("Scheduler main started...")
</code></pre>
<p>I then have this Flask route (<a href="http://127.0.0.1:5000/profile/test-double-email" rel="nofollow noreferrer">http://127.0.0.1:5000/profile/test-double-email</a>), that if I go to, would schedule a task daily at a timezone:</p>
<pre><code>@profile_blueprints.route('/test-double-email', methods=['POST'])
@login_required
def test_double_email():
    days = ['MON', 'TUE', 'WED', 'THU', 'FRI', 'SAT', 'SUN']
    timezone_obj = get_timezone_from_ip_kien("49.181.220.210")
    for day in days:
        trigger = CronTrigger(day_of_week=day, hour=20, minute=8, timezone=timezone_obj)
        job_name = f'job_driver_test_double_email_daily__{day}'
        scheduler.add_job(
            sendemail,
            trigger=trigger,
            name=job_name,
            jobstore='default'
        )
    return redirect(url_for('Profile.profile'))
</code></pre>
<p>I was able to add the scheduled tasks fine; printing out my apscheduler jobs would give something like:</p>
<pre><code>--------------
Job ID: d844c5657cad4b4a916f3ebc8d4a1ced
Job Name: job_driver_test_double_email_daily__THU
Next Run Time: 2023-06-29 20:08:00+10:00
Job Function: <function sendemail at 0x13895fb50>
Job Arguments: ()
Job Keyword Arguments: {}
Job Trigger: cron[day_of_week='thu', hour='20', minute='8']
Job Misfire Grace Time: 1
Job Coalesce: True
Job Max Instances: 1
</code></pre>
<p>I have kept my Flask app (app.py) running locally the whole time, and at 20:08, the scheduled time, the task was not carried out and I got this error in my local terminal:</p>
<pre><code>Run time of job "job_driver_test_double_email_daily__THU (trigger: cron[day_of_week='thu', hour='20', minute='8'], next run at: 2023-06-29 20:08:00 AEST)" was missed by 0:00:02.344646
</code></pre>
<p>The thing I don't understand is that the task was missed by just 2 seconds, while I believed the misfire grace time defaulted to 1 minute...
<strong>Can someone please explain to me why this happens and how to fix it?</strong> Weirdly, I have a live AWS instance hosting a website connected to the same MySQL database that holds the apscheduler jobs table, and if I schedule a task through the website, that task is still carried out fine. <strong>Another question I have: my live server also didn't run the tasks I scheduled locally, so maybe locally scheduled tasks can only be run by the local server?</strong></p>
<p><strong>Update 1</strong>
The apscheduler documentation says the following about limiting the number of concurrently executing instances of a job: "By default, only one instance of each job is allowed to be run at the same time. This means that if the job is about to be run but the previous run hasn't finished yet, then the latest run is considered a misfire. It is possible to set the maximum number of instances for a particular job that the scheduler will let run concurrently, by using the max_instances keyword argument when adding the job." So maybe I got the error "no history of having subscription" because both my live and local servers are trying to run the scheduled task at the scheduled time?
=> Nope, this did not work: setting max_instances to 2 didn't help.</p>
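One detail worth checking, hedged since it depends on the APScheduler version in use: the printed job above shows <code>Job Misfire Grace Time: 1</code>, and in APScheduler 3.x that value is in seconds, not minutes. A run that is 2.3 seconds late is therefore already past the grace window; passing <code>misfire_grace_time=60</code> to <code>scheduler.add_job(...)</code> (or via <code>job_defaults</code> on the scheduler) should let such a run proceed. The decision rule itself can be sketched with the standard library:

```python
from datetime import datetime, timedelta

def is_misfire(scheduled: datetime, now: datetime, grace_seconds: int) -> bool:
    # A run counts as missed when "now" is more than
    # misfire_grace_time seconds past the scheduled run time.
    return now - scheduled > timedelta(seconds=grace_seconds)

scheduled = datetime(2023, 6, 29, 20, 8, 0)
actual = scheduled + timedelta(seconds=2, microseconds=344646)  # "missed by 0:00:02.344646"

print(is_misfire(scheduled, actual, grace_seconds=1))   # grace of 1 s: missed
print(is_misfire(scheduled, actual, grace_seconds=60))  # grace of 60 s: would still run
```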
|
<python><flask><apscheduler>
|
2023-06-29 10:33:43
| 1
| 348
|
Upchanges
|
76,580,097
| 2,641,187
|
python minor version in built wheel name
|
<p>I am trying to include the minor python version in the name of a wheel built with <code>pip wheel</code>. My project is based on a <code>pyproject.toml</code> that specifies version number etc. and also requires a concrete minor version of python via <code>requires-python</code>.</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
requires = ['setuptools']
build-backend = 'setuptools.build_meta'
[project]
version = '1.0.0'
name = 'my-package'
requires-python = '==3.10.*'
dependencies = [
...
]
</code></pre>
<p>However, when building the wheel</p>
<pre class="lang-bash prettyprint-override"><code>pip wheel --no-deps .
</code></pre>
<p>the resulting file's name is <code>my_package-1.0.0-py3-none-any.whl</code>.</p>
<p>What I would like however is <code>my_package-1.0.0-py310-none-any.whl</code>. How can I get pip to include the minor version? Is there some setting in <code>pyproject.toml</code>?</p>
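One possible approach, hedged since it relies on setuptools-specific behavior: the wheel tag comes from setuptools' <code>bdist_wheel</code> command, which accepts a <code>python-tag</code> option. Even with metadata in <code>pyproject.toml</code>, a setuptools backend still reads a <code>setup.cfg</code> next to it, so the tag can be forced there:

```ini
[bdist_wheel]
python-tag = py310
```

Note this only renames the tag in the filename; the actual install-time restriction to Python 3.10 still comes from <code>requires-python</code>.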
|
<python><python-packaging><python-wheel><pyproject.toml>
|
2023-06-29 10:13:57
| 1
| 931
|
Darkdragon84
|
76,580,083
| 298,479
|
Why are enum members in Python 3.10 and 3.11 serialized to YAML differently?
|
<p>Using this snippet:</p>
<pre class="lang-py prettyprint-override"><code>import sys
import yaml
try:
from enum import ReprEnum
except ImportError:
from enum import Enum as ReprEnum
class MyEnum(int, ReprEnum):
foo = 123
print('Python version:', ''.join(map(str, sys.version_info[:3])))
print(yaml.dump({'test': MyEnum.foo}))
</code></pre>
<p>I get very different output on Python 3.10 and 3.11:</p>
<h3>3.10 output:</h3>
<pre class="lang-yaml prettyprint-override"><code>Python version: 3.10.12
test: !!python/object/apply:__main__.MyEnum
- 123
</code></pre>
<h3>3.11 output:</h3>
<pre class="lang-yaml prettyprint-override"><code>Python version: 3.11.4
test: !!python/object/apply:builtins.getattr
- !!python/name:__main__.MyEnum ''
- foo
</code></pre>
<p>My guess would be that it's related to the <a href="https://docs.python.org/3.11/whatsnew/3.11.html#enum" rel="nofollow noreferrer">many changes</a> in Python 3.11's enum module, but I'd like to understand why this changed...</p>
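A plausible explanation, hedged: Python 3.11 changed <code>Enum.__reduce_ex__</code> to reconstruct members by name (effectively <code>getattr(MyEnum, 'foo')</code>) instead of by value (<code>MyEnum(123)</code>), and PyYAML's <code>!!python/object/apply</code> output is emitted directly from whatever <code>__reduce_ex__</code> returns, hence the two different serializations. Both forms round-trip to the same singleton member, which a stdlib pickle check (driven by the same <code>__reduce_ex__</code> protocol) can illustrate:

```python
import pickle
from enum import Enum

class MyEnum(int, Enum):
    foo = 123

# __reduce_ex__ drives both pickle and PyYAML's python/object tags.
reconstructed = pickle.loads(pickle.dumps(MyEnum.foo))
print(reconstructed is MyEnum.foo)  # True: enum members are singletons
```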
|
<python><pyyaml><python-3.11>
|
2023-06-29 10:12:05
| 1
| 319,812
|
ThiefMaster
|
76,580,063
| 9,072,753
|
How to type annotate a chunker function?
|
<p>I try the following:</p>
<pre><code>from typing import Iterator, List, Any, TypeVar
T = TypeVar('T')
def chunker(seq: Iterator[T], size: int) -> Iterator[List[T]]:
for i in seq:
yield []
for i in chunker([1, 2, 3, 4], 5):
pass
</code></pre>
<p>However, pyright tells me that:</p>
<pre><code> test.py:5:18 - error: Argument of type "list[int]" cannot be assigned to parameter "seq" of type "Iterator[T@chunker]" in function "chunker"
"list[int]" is incompatible with protocol "Iterator[T@chunker]"
"__next__" is not present (reportGeneralTypeIssues)
1 error, 0 warnings, 0 informations
</code></pre>
<p>How should I annotate a function that takes something that you can <code>for i in seq</code> it?</p>
<p>I tried searching on Stack Overflow and Google, but I guess my google-fu is not good enough. I could use <code>Union[Iterator[T], List[T]]</code>, but I think there should be something better.</p>
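The type for "anything you can <code>for i in seq</code> over" is <code>Iterable[T]</code>; <code>Iterator[T]</code> additionally demands <code>__next__</code>, which a plain list lacks, hence the pyright error. A sketch of an annotated chunker that actually chunks (using <code>itertools.islice</code>, replacing the empty-yield placeholder body above):

```python
from collections.abc import Iterable, Iterator
from itertools import islice
from typing import TypeVar

T = TypeVar("T")

def chunker(seq: Iterable[T], size: int) -> Iterator[list[T]]:
    it = iter(seq)  # Iterable only promises __iter__, so take one iterator up front
    while chunk := list(islice(it, size)):
        yield chunk

print(list(chunker([1, 2, 3, 4], 5)))  # [[1, 2, 3, 4]]
```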
|
<python><python-typing>
|
2023-06-29 10:09:21
| 1
| 145,478
|
KamilCuk
|
76,580,060
| 15,528,750
|
PyTest: "no tests ran" even though tests were clearly run
|
<p>I have the following config file for <code>pytest</code>:</p>
<pre class="lang-bash prettyprint-override"><code>[pytest]
filterwarnings =
ignore::DeprecationWarning
ignore::UserWarning
python_files =
test_file_1.py
test_file_2.py
log_cli = True
log_level = INFO
</code></pre>
<p>I run <code>pytest</code> locally with the following command:</p>
<pre class="lang-py prettyprint-override"><code>python -m pytest path
</code></pre>
<p>and get the output</p>
<pre><code>====================================================================================================== no tests ran in 24.77s =======================================================================================================
</code></pre>
<p>which I cannot believe, since just before that I see the following output in the terminal (because of the logging):</p>
<pre><code>INFO root:test_file_1.py:40 torch.Size([100, 71])
</code></pre>
<p>I noticed that if I remove the <code>filterwarnings</code> in <code>pytest.ini</code>, I get a <code>warnings summary</code>, but not the output "no tests ran".</p>
<p>What exactly is happening here? Why does <code>filterwarnings</code> lead to the output "no tests ran"?</p>
|
<python><python-3.x><unit-testing><pytest>
|
2023-06-29 10:08:51
| 1
| 566
|
Imahn
|
76,579,941
| 8,874,154
|
How can I improve the speed and performance of database queries?
|
<p>I'm currently working on a Django project where I'm encountering performance issues, especially as my application scales. To provide better context, I've included a realistic code snippet below that represents a performance-intensive section of my application:</p>
<pre class="lang-py prettyprint-override"><code>from django.db import models

class Order(models.Model):
    # Fields and relationships...
    pass

class OrderItem(models.Model):
    order = models.ForeignKey(Order, on_delete=models.CASCADE)
    product = models.ForeignKey('Product', on_delete=models.CASCADE)
    quantity = models.PositiveIntegerField()

class Product(models.Model):
    # Fields and relationships...
    pass
def calculate_total_price(order_id):
order_items = OrderItem.objects.filter(order_id=order_id).select_related('product')
total_price = 0
for item in order_items:
product_price = item.product.price
quantity = item.quantity
total_price += product_price * quantity
return total_price
</code></pre>
<p>In the above code snippet, I have an Order model that is associated with multiple OrderItems, and each OrderItem references a Product. The <code>calculate_total_price</code> function is used to calculate the total price for a given order by iterating over its associated OrderItems. However, as the number of OrderItems increases, the calculation becomes significantly slower.</p>
<p>I would greatly appreciate any insights and suggestions on how to optimize this code snippet or any other general Django performance tips. Are there any specific techniques, database query optimizations, caching mechanisms, or Django features I should consider to enhance the performance of this code? Additionally, if there are any profiling tools or libraries you recommend for identifying and resolving performance issues, please let me know.</p>
<p>I'm keen to learn best practices and advanced techniques for optimizing Django performance, as I want to ensure my application performs efficiently as it grows. Thank you in advance for your valuable assistance!</p>
<p>I'm not sure what to do about the performance issues so I don't know where to begin optimisations.</p>
<p><strong>Edit</strong>
Including example view code:</p>
<pre class="lang-py prettyprint-override"><code>class OrderDetailView(View):
template_name = 'order_detail.html'
def get(self, request, order_id):
order = Order.objects.get(id=order_id)
total_price = calculate_total_price(order_id)
context = {
'order': order,
'total_price': total_price
}
return render(request, self.template_name, context)
</code></pre>
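The usual fix for this pattern is to push the multiply-and-sum into the database instead of looping in Python. In Django that would be roughly <code>OrderItem.objects.filter(order_id=order_id).aggregate(total=Sum(F('quantity') * F('product__price')))</code> (hedged: this assumes <code>Product</code> has a <code>price</code> field, as the loop implies). The principle, shown with stdlib sqlite3 so it is runnable here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE product (id INTEGER PRIMARY KEY, price REAL);
    CREATE TABLE order_item (order_id INTEGER, product_id INTEGER, quantity INTEGER);
    INSERT INTO product VALUES (1, 10.0), (2, 2.5);
    INSERT INTO order_item VALUES (7, 1, 3), (7, 2, 4), (8, 1, 1);
""")

# One aggregate query replaces the per-row Python loop over OrderItems.
total = conn.execute(
    """SELECT SUM(oi.quantity * p.price)
       FROM order_item oi JOIN product p ON p.id = oi.product_id
       WHERE oi.order_id = ?""",
    (7,),
).fetchone()[0]
print(total)  # 3*10.0 + 4*2.5 = 40.0
```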
|
<python><django>
|
2023-06-29 09:51:06
| 3
| 1,711
|
Swift
|
76,579,922
| 5,246,906
|
What is $ character to Python?
|
<p>I just realized I had never seen a dollar ($) character in Python, other than in a string. Does it have any defined usage, or is it reserved for future use? Google doesn't provide an answer, and experimentally it causes syntax errors. Just about any other "visible" Unicode character (not sure of the correct Unicode terminology) is accepted as part of a variable name (Python 3):</p>
<pre><code>>>> aμ = 44
>>> aμ
44
>>> a$=27
File "<stdin>", line 1
a$=27
^
SyntaxError: invalid syntax
</code></pre>
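For context: <code>$</code> has no meaning in Python source outside strings (and string templating such as <code>string.Template</code>); it is simply not a valid identifier character. Since PEP 3131, identifiers follow the Unicode <code>XID_Start</code>/<code>XID_Continue</code> classes, which include letters like <code>μ</code> but exclude <code>$</code>. <code>str.isidentifier()</code> reflects exactly this rule:

```python
# Identifier validity follows Unicode XID classes, not ASCII.
print("aμ".isidentifier())  # True:  μ is in XID_Continue
print("a$".isidentifier())  # False: $ is not, hence the SyntaxError
```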
|
<python><character>
|
2023-06-29 09:48:58
| 1
| 8,312
|
nigel222
|
76,579,783
| 10,039,331
|
RAPIDS pip installation issue
|
<p>I've been trying to install RAPIDS in my Docker environment, which initially went smoothly. However, over the past one or two weeks, I've been encountering an error.</p>
<p>The issue seems to be that pip is attempting to fetch from the default PyPI registry, where it encounters a placeholder project. I'm unsure who placed it there or why, as it appears to serve no practical purpose.</p>
<pre class="lang-bash prettyprint-override"><code> => ERROR [12/19] RUN pip3 install cudf-cu11 cuml-cu11 cugraph-cu11 cucim --extra-index-url=https://pypi.nvidia.com 2.1s
------
> [12/19] RUN pip3 install cudf-cu11 cuml-cu11 cugraph-cu11 cucim --extra-index-url=https://pypi.nvidia.com:
#0 1.038 Looking in indexes: https://pypi.org/simple, https://pypi.nvidia.com
#0 1.466 Collecting cudf-cu11
#0 1.542 Downloading cudf-cu11-23.6.0.tar.gz (6.8 kB)
#0 1.567 Preparing metadata (setup.py): started
#0 1.972 Preparing metadata (setup.py): finished with status 'error'
#0 1.980 error: subprocess-exited-with-error
#0 1.980
#0 1.980 × python setup.py egg_info did not run successfully.
#0 1.980 │ exit code: 1
#0 1.980 ╰─> [16 lines of output]
#0 1.980 Traceback (most recent call last):
#0 1.980 File "<string>", line 2, in <module>
#0 1.980 File "<pip-setuptools-caller>", line 34, in <module>
#0 1.980 File "/tmp/pip-install-8463q674/cudf-cu11_9d3e1a792dae4026962cdff29926ce8d/setup.py", line 137, in <module>
#0 1.980 raise RuntimeError(open("ERROR.txt", "r").read())
#0 1.980 RuntimeError:
#0 1.980 ###########################################################################################
#0 1.980 The package you are trying to install is only a placeholder project on PyPI.org repository.
#0 1.980 This package is hosted on NVIDIA Python Package Index.
#0 1.980
#0 1.980 This package can be installed as:
#0 1.980 ```
#0 1.980 $ pip install --extra-index-url https://pypi.nvidia.com cudf-cu11
#0 1.980 ```
#0 1.980 ###########################################################################################
#0 1.980
#0 1.980 [end of output]
#0 1.980
#0 1.980 note: This error originates from a subprocess, and is likely not a problem with pip.
#0 1.983 error: metadata-generation-failed
#0 1.983
#0 1.983 × Encountered error while generating package metadata.
#0 1.983 ╰─> See above for output.
#0 1.983
#0 1.983 note: This is an issue with the package mentioned above, not pip.
#0 1.983 hint: See above for details.
</code></pre>
<p>I attempted to explicitly set the <code>--index-url</code> to <code>pypi.nvidia.com</code>, but this approach wasn't feasible either, as the dependencies of the RAPIDS packages appear to be hosted on the default PyPI.</p>
|
<python><docker><pip><rapids>
|
2023-06-29 09:27:37
| 1
| 320
|
Steven
|
76,579,777
| 2,040,432
|
How to prefix column names while unnesting in Polars?
|
<p>In a Polars dataframe I make multiple calls to a complex function that returns a dictionary for different setups, which is then unpacked into the dataframe.
I would like to prefix the names of the nested columns so they do not create duplicate columns.</p>
<p>Here is an example python code</p>
<pre class="lang-py prettyprint-override"><code>import datetime
import random
import polars as pl
df = pl.DataFrame({
'datetime_local': pl.Series(
datetime.datetime(2001, 1, 1, 1, 1, 1, 0) + datetime.timedelta(milliseconds=20 * i) for i in range(500)
),
'value1': pl.Series(random.random() for _ in range(500)),
'value2': pl.Series(random.random() for _ in range(500))
})
def my_complex_function(param_a:float, param_b:float, param_c:float, param_d:float):
""" takes n parameters perform complex calculations and return a dictionary"""
output_1 = param_a + param_b + param_c + param_d # you have to imagine that there are some complex calculations here
output_2 = param_a * param_b * param_c * param_d
output_3 = param_a / param_b / param_c / param_d
return {"output_1": output_1, "output_2": output_2, "output_3": output_3}
setup_A_param_c_value = 1.0
setup_A_param_d_value = 2.0
setup_B_param_c_value = 3.0
setup_B_param_d_value = 4.0
# Etc...
df = df.with_columns(pl.struct('value1', 'value2').map_elements(lambda x: \
my_complex_function(param_a=x['value1' ], param_b=x['value2'], \
param_c=setup_A_param_c_value, param_d=setup_A_param_d_value, \
) ).alias('setup_A_results')).unnest('setup_A_results')
# This errors
df = df.with_columns(pl.struct('value1', 'value2').map_elements(lambda x: \
my_complex_function(param_a=x['value1' ], param_b=x['value2'], \
param_c=setup_B_param_c_value, param_d=setup_B_param_d_value, \
) ).alias('setup_B_results')).unnest('setup_B_results')
</code></pre>
<p>There the second call to the function will of course generate a duplicate error</p>
<pre class="lang-py prettyprint-override"><code># DuplicateError: could not create a new DataFrame: column with name 'output_1' has more than one occurrence
</code></pre>
<p>I have found a dirty fix, which is to pass the prefix to my function, but this is not a clean way to do it.
There must be a way to prefix the nested column names with a string.</p>
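One generic option, hedged as a plain-Python sketch: leave <code>my_complex_function</code> untouched and wrap it so the dictionary keys gain a prefix before Polars turns them into struct fields (Polars also offers struct field renaming, e.g. <code>Expr.struct.rename_fields</code>, though the exact API varies by version):

```python
def with_prefix(func, prefix):
    # Wrap a dict-returning function so every key gains a prefix,
    # keeping the wrapped function itself unchanged.
    def wrapper(*args, **kwargs):
        return {f"{prefix}{key}": value for key, value in func(*args, **kwargs).items()}
    return wrapper

# Simplified stand-in for the question's complex function.
def my_complex_function(param_a, param_b):
    return {"output_1": param_a + param_b, "output_2": param_a * param_b}

setup_a = with_prefix(my_complex_function, "setup_A_")
print(setup_a(2.0, 3.0))  # {'setup_A_output_1': 5.0, 'setup_A_output_2': 6.0}
```

The wrapped callable can then be passed to <code>map_elements</code> in place of the lambda, and each <code>unnest</code> produces uniquely named columns.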
|
<python><dataframe><python-polars>
|
2023-06-29 09:26:24
| 2
| 2,436
|
kolergy
|
76,579,703
| 8,968,910
|
What is entry point of BigQuery Cloud Function?
|
<p>I successfully updated a BigQuery table using an external API and a Cloud Function. The entry point of the code below is hello_pubsub; however, I do not know what those two parameters are. I didn't pass event and context to the function, so how can it still run my code without errors? I know the code in the function provides all the information needed to do the loading job.</p>
<pre><code>import pandas as pd
import requests
from datetime import datetime
from google.cloud import bigquery
def hello_pubsub(event, context):
PROJECT = "test-391108"
client = bigquery.Client(project=PROJECT)
table_id = "test-391108.TEAM.MEMBER"
API_ENDPOINT ='https://fantasy.premierleague.com/api/bootstrap-static/'
response = requests.get(API_ENDPOINT, timeout=5)
response_json = response.json()
df = pd.DataFrame(response_json['teams'])
df = df.iloc[:,:6]
job_config = bigquery.LoadJobConfig(write_disposition="WRITE_TRUNCATE")
job = client.load_table_from_dataframe(df, table_id, job_config=job_config)
</code></pre>
<p>Is there also another way to schedule my code, without using a Cloud Function, and load data into a BigQuery table from an external API?</p>
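For background: with a Pub/Sub-triggered Cloud Function, the platform itself calls the entry point and supplies both arguments: <code>event</code> is a dict describing the Pub/Sub message (with a base64-encoded <code>data</code> field) and <code>context</code> carries metadata such as the event ID and timestamp. The code above never reads them, which is why omitting them causes no error. A minimal sketch of what the platform does, hedged as an illustration rather than the exact runtime internals:

```python
import base64

def hello_pubsub(event, context):
    # Cloud Functions supplies both arguments on every invocation;
    # event["data"] is the base64-encoded Pub/Sub message payload.
    data = event.get("data")
    return base64.b64decode(data).decode("utf-8") if data else ""

# Simulating the platform's invocation locally:
fake_event = {"data": base64.b64encode(b"run load job").decode("ascii")}
print(hello_pubsub(fake_event, None))  # run load job
```

For scheduling, the usual pairing is Cloud Scheduler publishing to the Pub/Sub topic on a cron expression; alternatives include an HTTP-triggered function hit by Cloud Scheduler, or a Cloud Run job.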
|
<python><google-bigquery>
|
2023-06-29 09:16:41
| 1
| 699
|
Lara19
|
76,579,606
| 730,858
|
How does negative lookbehind work in this regex
|
<pre><code>import re
text = """
This is a line.
Short
Long line
<!-- Comment line -->
"""
pattern = r'''(?:^.{1,3}$|^.{4}(?<!<!--))'''
matches = re.findall(pattern, text, flags=re.MULTILINE)
print(matches)
</code></pre>
<p>OUTPUT with <code>pattern = r'''(?:^.{1,3}$|^.{4}(?<!<!--))'''</code> :</p>
<pre><code>['This', 'Shor', 'Long']
</code></pre>
<p>OUTPUT with <code>pattern = r'''(?:^.{1,3}$|^.{3}(?<!<!--))'''</code> :</p>
<pre><code>['Thi', 'Sho', 'Lon', '<!-']
</code></pre>
<p>OUTPUT with <code>pattern = r'''(?:^.{1,3}$|^.{5}(?<!<!--))'''</code> :</p>
<pre><code>['This ', 'Short', 'Long ', '<!-- ']
</code></pre>
<p>Any number other than 4 in <code>.{n}(?<!<!--)</code> causes <code><!--</code> to be matched and displayed. How?</p>
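The reason 4 is special: <code>(?&lt;!&lt;!--)</code> is a fixed-width lookbehind that inspects exactly the 4 characters ending at the current position. After <code>^.{4}</code> the engine sits at offset 4, so those 4 characters are the whole match; on the comment line they are exactly <code>&lt;!--</code>, and the negative lookbehind fails. After <code>^.{3}</code> only 3 characters precede the position, so a 4-character lookbehind cannot match at all and the negative assertion trivially succeeds; after <code>^.{5}</code> the inspected window is <code>!--&nbsp;</code> (offsets 1 to 5), which is not <code>&lt;!--</code> either. This can be verified directly:

```python
import re

text = "This is a line.\nShort\nLong line\n<!-- Comment line -->\n"

def run(n):
    # The lookbehind checks the 4 chars ending where .{n} stopped.
    return re.findall(rf"(?:^.{{1,3}}$|^.{{{n}}}(?<!<!--))", text, flags=re.MULTILINE)

print(run(4))  # ['This', 'Shor', 'Long']         - comment line excluded
print(run(3))  # ['Thi', 'Sho', 'Lon', '<!-']     - lookbehind can't fit in 3 chars
print(run(5))  # ['This ', 'Short', 'Long ', '<!-- ']
```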
|
<python><regex><negative-lookbehind>
|
2023-06-29 09:04:51
| 1
| 4,694
|
munish
|
76,579,388
| 4,481,287
|
Python Azure opentelemetry - inbound requests not getting logged
|
<p>I followed the instructions and the sample project here: <a href="https://github.com/microsoft/ApplicationInsights-Python/blob/05a8b1dd3556ab5e7c268c22ee30a365eaf5ec7a/azure-monitor-opentelemetry/samples/tracing/http_fastapi.py#L15" rel="nofollow noreferrer">https://github.com/microsoft/ApplicationInsights-Python/blob/05a8b1dd3556ab5e7c268c22ee30a365eaf5ec7a/azure-monitor-opentelemetry/samples/tracing/http_fastapi.py#L15</a>. I set up my app insights account and I can see exceptions and dependencies properly getting logged.</p>
<p>However, <strong>the requests table</strong> is not getting populated.. Based on the link above, requests to my FastAPI should automatically be populated. Am I doing something wrong?</p>
|
<python><fastapi><open-telemetry><azure-sdk-python><azure-monitor>
|
2023-06-29 08:36:30
| 2
| 1,371
|
Kevin Cohen
|
76,579,241
| 8,219,760
|
Reduce by multiple columns in pandas groupby
|
<p>Having dataframe</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame(
{
"group0": [1, 1, 2, 2, 3, 3],
"group1": ["1", "1", "1", "2", "2", "2"],
"relevant": [True, False, False, True, True, True],
"value": [0, 1, 2, 3, 4, 5],
}
)
</code></pre>
<p>I wish to produce a target</p>
<pre class="lang-py prettyprint-override"><code>target = pd.DataFrame(
{
"group0": [1, 2, 2, 3],
"group1": ["1","1", "2", "2",],
"value": [0, 2, 3, 5],
}
)
</code></pre>
<p>where <code>"value"</code> has been chosen by</p>
<ol>
<li>Maximum of all positive <code>"relevant"</code> indices in <code>"value"</code> column</li>
<li>Otherwise maximum of <code>"value"</code> if no positive <code>"relevant"</code> indices exist</li>
</ol>
<p>This would be produced by</p>
<pre class="lang-py prettyprint-override"><code>def fun(x):
tmp = x["value"][x["relevant"]]
if len(tmp):
return tmp.max()
return x["value"].max()
</code></pre>
<p>were <code>x</code> a groupby dataframe.</p>
<p>Is it possible to achieve the desired groupby reduction efficiently?</p>
<p>EDIT:</p>
<p>with payload</p>
<pre class="lang-py prettyprint-override"><code>from time import perf_counter

import numpy as np
import pandas as pd
df = pd.DataFrame(
{
"group0": np.random.randint(0, 30,size=10_000_000),
"group1": np.random.randint(0, 30,size=10_000_000),
"relevant": np.random.randint(0, 1, size=10_000_000).astype(bool),
"value": np.random.random_sample(size=10_000_000) * 1000,
}
)
start = perf_counter()
out = (df
.sort_values(by=['relevant', 'value'])
.groupby(['group0', 'group1'], as_index=False)
['value'].last()
)
end = perf_counter()
print("Sort values", end - start)
def fun(x):
tmp = x["value"][x["relevant"]]
if len(tmp):
return tmp.max()
return x["value"].max()
start = perf_counter()
out = df.groupby(["group0", "group1"]).apply(fun)
end = perf_counter()
print("Apply", end - start)
#Sort values 14.823943354000221
#Apply 1.5050544870009617
</code></pre>
<p>The <code>.apply</code> solution took 1.5 s, while the <code>sort_values</code> solution took 14.82 s. However, reducing the sizes of the test groups with</p>
<pre><code>...
"group0": np.random.randint(0, 500_000,size=10_000_000),
"group1": np.random.randint(0, 100_000,size=10_000_000),
...
</code></pre>
<p>led to vastly superior performance from the <code>sort_values</code> solution
(15.29 s versus 1423.84 s). The <code>sort_values</code> solution by <a href="https://stackoverflow.com/a/76579362/8219760">@mozway</a> is preferred, unless the user specifically knows that the data contains only a small number of large groups.</p>
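A vectorized alternative that avoids both the Python-level <code>apply</code> and the full sort, hedged as one possible sketch: mask <code>value</code> where the row is not relevant, take the per-group max of the masked column, and fall back to the unmasked max where no relevant row existed. Using the question's small frame:

```python
import pandas as pd

df = pd.DataFrame(
    {
        "group0": [1, 1, 2, 2, 3, 3],
        "group1": ["1", "1", "1", "2", "2", "2"],
        "relevant": [True, False, False, True, True, True],
        "value": [0, 1, 2, 3, 4, 5],
    }
)

g = df.assign(masked=df["value"].where(df["relevant"])).groupby(
    ["group0", "group1"], as_index=False
).agg(masked_max=("masked", "max"), all_max=("value", "max"))

# Prefer the max over relevant rows; fall back to the overall max otherwise.
g["value"] = g["masked_max"].fillna(g["all_max"])
out = g[["group0", "group1", "value"]]
print(out["value"].tolist())  # [0.0, 2.0, 3.0, 5.0] - matches the target
```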
|
<python><pandas><dataframe><group-by>
|
2023-06-29 08:17:43
| 3
| 673
|
vahvero
|
76,579,195
| 16,869,946
|
Pandas .groupby and .mean() based on conditions
|
<p>I have the following large dataset recording the result of a math competition among students in descending order of date: So for example, student 1 comes third in Race 1 while student 3 won Race 2, etc.</p>
<pre><code>Race_ID Date Student_ID Rank Studying_hours
1 1/1/2023 1 3 2
1 1/1/2023 2 2 5
1 1/1/2023 3 1 7
1 1/1/2023 4 4 1
2 11/9/2022 1 2 4
2 11/9/2022 2 3 2
2 11/9/2022 3 1 8
3 17/4/2022 5 4 3
3 17/4/2022 2 1 7
3 17/4/2022 3 2 2
3 17/4/2022 4 3 3
4 1/3/2022 1 3 7
4 1/3/2022 2 2 2
5 1/1/2021 1 2 2
5 1/1/2021 2 3 3
5 1/1/2021 3 1 6
</code></pre>
<p>and I want to generate a new column called "winning_past_studying_hours", which is, for each student, the average studying hours over his past competitions in which he ended up with Rank 1 or 2.</p>
<p>So for example, for student 1:</p>
<pre><code>Race_ID Date Student_ID Rank Studying_hours
1 1/1/2023 1 3 2
2 11/9/2022 1 2 4
4 1/3/2022 1 3 7
5 1/1/2021 1 2 2
</code></pre>
<p>the column looks like</p>
<pre><code>Race_ID Date Student_ID Rank Studying_hours winning_past_studying_hours
1 1/1/2023 1 3 2 (4+2)/2 = 3
2 11/9/2022 1 2 4 2/1 = 2
4 1/3/2022 1 3 7 2/1= 2
5 1/1/2021 1 2 2 NaN
</code></pre>
<p>Similarly, for student 2:</p>
<pre><code>Race_ID Date Student_ID Rank Studying_hours
1 1/1/2023 2 2 5
2 11/9/2022 2 3 2
3 17/4/2022 2 1 7
4 1/3/2022 2 2 2
5 1/1/2021 2 3 3
</code></pre>
<p>The column looks like</p>
<pre><code>Race_ID Date Student_ID Rank Studying_hours winning_past_studying_hours
1 1/1/2023 2 2 5 (7+2)/2=4.5
2 11/9/2022 2 3 2 (7+2)/2=4.5
3 17/4/2022 2 1 7 2/1=2
4 1/3/2022 2 2 2 NaN
5 1/1/2021 2 3 3 NaN
</code></pre>
<p>I know the basic <code>groupby</code> and <code>mean</code> function but I do not know how to include the condition <code>Rank.isin([1,2])</code> in the <code>groupby</code> function. Thank you so much.</p>
<p>EDIT: Desired output:</p>
<pre><code>Race_ID Date Student_ID Rank Studying_hours winning_past_studying_hours
1 1/1/2023 1 3 2 3
1 1/1/2023 2 2 5 4.5
1 1/1/2023 3 1 7 5.333
1 1/1/2023 4 4 1 NaN
2 11/9/2022 1 2 4 2
2 11/9/2022 2 3 2 4.5
2 11/9/2022 3 1 8 4
3 17/4/2022 5 4 3 NaN
3 17/4/2022 2 1 7 2
3 17/4/2022 3 2 2 6
3 17/4/2022 4 3 3 NaN
4 1/3/2022 1 3 7 2
4 1/3/2022 2 2 2 NaN
5 1/1/2021 1 2 2 NaN
5 1/1/2021 2 3 3 NaN
5 1/1/2021 3 1 6 NaN
</code></pre>
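One way to express "mean over strictly earlier wins", hedged as a sketch: sort each student's rows ascending by date, mask the hours of non-winning rows with <code>where</code>, then take a grouped expanding mean and shift it one row forward so the current race is excluded. Shown here for student 1 only, with rows already in ascending date order for brevity (the real frame would need a proper datetime sort first):

```python
import pandas as pd

# Student 1's races, oldest first (dates elided for brevity).
df = pd.DataFrame(
    {
        "Student_ID": [1, 1, 1, 1],
        "Race_ID": [5, 4, 2, 1],
        "Rank": [2, 3, 2, 3],
        "Studying_hours": [2, 7, 4, 2],
    }
)

# Keep hours only for winning rows (Rank 1 or 2); others become NaN.
wins = df["Studying_hours"].where(df["Rank"].isin([1, 2]))

# Expanding mean ignores the NaNs; shift(1) excludes the current race.
df["winning_past_studying_hours"] = (
    wins.groupby(df["Student_ID"]).transform(lambda s: s.expanding().mean().shift())
)
print(df["winning_past_studying_hours"].tolist())  # [nan, 2.0, 2.0, 3.0]
```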
|
<python><pandas><dataframe><group-by>
|
2023-06-29 08:12:44
| 1
| 592
|
Ishigami
|
76,579,142
| 14,728,691
|
Messages not sent to RabbitMQ when new files are added in a directory
|
<p>I have the following code that is used in a Kubernetes pod:</p>
<pre><code>import pika
import time
import os
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
class FileEventHandler(FileSystemEventHandler):
def __init__(self, channel):
self.channel = channel
def on_created(self, event):
if not event.is_directory:
message = 'New file created: %s' % event.src_path
print("Detected new file, sending message: ", message)
is_delivered = self.channel.basic_publish(exchange='', routing_key='myqueue', body=message)
print("Message delivery status: ", is_delivered)
def main():
credentials = pika.PlainCredentials('LBO64L', 'test')
print('Logged')
parameters = pika.ConnectionParameters('rabbitmq.developpement-dev-01.svc.cluster.local',
5672,
'/',
credentials,
socket_timeout=2)
print(parameters)
connection = pika.BlockingConnection(parameters)
print("Connection...")
channel = connection.channel()
channel.queue_declare(queue='myqueue')
print('Queue created !')
channel.basic_publish(exchange='', routing_key='myqueue', body='Test message')
print("Test message sent")
event_handler = FileEventHandler(channel)
if os.path.isdir('/mnt/lb064l/data/'):
print("Directory exists and is accessible")
else:
print("Directory does not exist or is not accessible")
print('Starting observer...')
observer = Observer()
observer.schedule(event_handler, path='/mnt/lb064l/data/', recursive=True)
observer.start()
try:
print('Running...')
while True:
time.sleep(1)
except KeyboardInterrupt:
observer.stop()
observer.join()
print('Message sent to queue !')
if __name__ == "__main__":
main()
</code></pre>
<p>This pod connects to RabbitMQ without problems. My goal is to send a message to a queue every time a new file is added to the directory /mnt/lb064l/data/, which contains data that comes from a persistent volume.</p>
<p>The following works well, but messages are not sent when new files arrive in that directory:</p>
<pre><code>channel.basic_publish(exchange='', routing_key='myqueue', body='Test message')
</code></pre>
<p>Files come from another service in Kubernetes: they are added via an SFTP server and persisted. In the Python pod I mount the same persistent volume used by the SFTP server.</p>
<p>Where could I be going wrong?</p>
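One likely culprit, hedged: watchdog's default <code>Observer</code> relies on inotify, and inotify events are generally not delivered for writes made by another pod (here, the SFTP server) onto a shared or network-backed volume, since only the writing kernel sees them. watchdog ships <code>watchdog.observers.polling.PollingObserver</code> as a drop-in replacement for exactly this situation. The underlying idea is a periodic directory diff, which can be sketched with the stdlib alone:

```python
import os
import tempfile

def scan_new_files(path, known):
    """Return (new_paths, updated_known) by diffing a directory listing."""
    current = {entry.name for entry in os.scandir(path) if entry.is_file()}
    new = sorted(current - known)
    return [os.path.join(path, name) for name in new], current

# Demo: polling notices a file regardless of which process wrote it.
with tempfile.TemporaryDirectory() as d:
    _, known = scan_new_files(d, set())          # initial snapshot: empty dir
    open(os.path.join(d, "data.csv"), "w").close()
    new_files, known = scan_new_files(d, known)  # next poll sees the file
    print([os.path.basename(p) for p in new_files])  # ['data.csv']
```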
|
<python><kubernetes><rabbitmq><python-watchdog>
|
2023-06-29 08:04:15
| 0
| 405
|
jos97
|
76,579,085
| 18,876,759
|
Python 3.9 use OR | operator for Union types?
|
<p>Since Python version 3.10, unions can be written as <code>X | Y</code>, which is equivalent to <code>Union[X, Y]</code>. Is there some way/workaround to easily use (or just ignore) the <code>X | Y</code> syntax on Python 3.9?</p>
<p>I have a bigger Python package that makes extensive use of the <code>X | Y</code> syntax for type annotations. It was originally developed for Python 3.10+.</p>
<p>Now there is a new requirement for the software to run on a Raspberry Pi, but the latest Raspberry Pi OS release (Bullseye) only ships Python 3.9.2, which does not support this syntax.</p>
<p>Debian Bookworm has been released recently and ships Python 3.11. I expect the release of a Bookworm-based Raspberry Pi OS within the next few months, so there's really just a small time window where I need Python 3.9 support.</p>
<p><strong>EDIT</strong>:
I have already added the <code>from __future__ import annotations</code> import.
The software uses the <code>pydantic</code> package and the exception occurs when pydantic is trying to parse the type annotations:</p>
<pre><code>Traceback (most recent call last):
File "/home/michael/margintest/./margin.py", line 6, in <module>
from vtcontrol.apps.margintest.app import App
File "/home/michael/margintest/vtcontrol/apps/margintest/app.py", line 16, in <module>
from .widgets.steps import StepsDetailView, TestProgressbar
File "/home/michael/margintest/vtcontrol/apps/margintest/widgets/steps.py", line 6, in <module>
from ..runner import Runner
File "/home/michael/margintest/vtcontrol/apps/margintest/runner.py", line 11, in <module>
from .models.config import Config
File "/home/michael/margintest/vtcontrol/apps/margintest/models/config.py", line 76, in <module>
class StepConfig(BaseModel):
File "/home/michael/margintest/venv/lib/python3.9/site-packages/pydantic/main.py", line 178, in __new__
annotations = resolve_annotations(namespace.get('__annotations__', {}), namespace.get('__module__', None))
File "/home/michael/margintest/venv/lib/python3.9/site-packages/pydantic/typing.py", line 400, in resolve_annotations
value = _eval_type(value, base_globals, None)
File "/usr/lib/python3.9/typing.py", line 283, in _eval_type
return t._evaluate(globalns, localns, recursive_guard)
File "/usr/lib/python3.9/typing.py", line 539, in _evaluate
eval(self.__forward_code__, globalns, localns),
File "<string>", line 1, in <module>
TypeError: unsupported operand type(s) for |: 'type' and 'NoneType'
</code></pre>
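For context: <code>from __future__ import annotations</code> only defers evaluation by storing annotations as strings. pydantic then evaluates those strings at class-creation time (the <code>_eval_type</code> call in the traceback), which re-executes the <code>X | None</code> expression on 3.9 and fails. The workaround that is safe on 3.9 is spelling those annotations with <code>typing.Optional</code>/<code>typing.Union</code>, which evaluate fine on every supported version:

```python
from typing import Optional, Union, get_type_hints

class StepConfig:
    # Equivalent to `timeout: float | None`, but evaluates on Python 3.9 too.
    timeout: Optional[float] = None
    retries: Union[int, str] = 0

# get_type_hints performs the same runtime evaluation pydantic does.
hints = get_type_hints(StepConfig)
print(hints["timeout"] == Optional[float])  # True
```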
|
<python><python-typing><pydantic><python-3.9>
|
2023-06-29 07:55:56
| 1
| 468
|
slarag
|
76,579,007
| 5,651,991
|
Is there an interactive table widget for Shiny for python that calls back selected rows?
|
<p>I'm using reactable, Datatable or rhandsontable in R Shiny to display an interactive table and as a user input for selecting rows. Given there are a few packages for doing this in R, I thought there would be even more libraries in python for selecting rows from an interactive table - however I haven't found one yet. Please let me know if one exists.</p>
|
<python><ipywidgets><py-shiny>
|
2023-06-29 07:46:04
| 1
| 3,794
|
Vlad
|
76,578,907
| 455,814
|
No FileSystem for scheme: abfss - running pyspark standalone
|
<p>Trying to read csv file stored in Azure Datalake Gen2 using standalone spark but getting <code>java.io.IOException: No FileSystem for scheme: abfss</code></p>
<p>Installed pyspark using: <code>pip install pyspark==3.0.3</code> and running it using following command, containing required deps:</p>
<p><code>pyspark --packages "org.apache.hadoop:hadoop-azure:3.0.3,org.apache.hadoop:hadoop-azure-datalake:3.0.3"</code></p>
<p>I found another answer here suggesting using <code>Spark 3.2+</code> with <code>org.apache.spark:hadoop-cloud_2.12</code> but it didn't work either, still getting the same exception, complete stack trace is pasted below:</p>
<pre><code>>>> spark.read.csv("abfss://raw@teststorageaccount.dfs.core.windows.net/members.csv")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/dev/binaries/spark-3.1.2-bin-hadoop2.7/python/pyspark/sql/readwriter.py", line 737, in csv
return self._df(self._jreader.csv(self._spark._sc._jvm.PythonUtils.toSeq(path)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dev/binaries/spark-3.1.2-bin-hadoop2.7/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1304, in __call__
File "/Users/dev/binaries/spark-3.1.2-bin-hadoop2.7/python/pyspark/sql/utils.py", line 111, in deco
return f(*a, **kw)
^^^^^^^^^^^
File "/Users/dev/binaries/spark-3.1.2-bin-hadoop2.7/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o35.csv.
: java.io.IOException: No FileSystem for scheme: abfss
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:46)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:377)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:325)
at org.apache.spark.sql.DataFrameReader.$anonfun$load$3(DataFrameReader.scala:307)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:307)
at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:795)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
</code></pre>
|
<python><apache-spark><pyspark><azure-data-lake>
|
2023-06-29 07:31:50
| 1
| 6,790
|
Waqas
|
76,578,897
| 880,783
|
(How) can I make a decorator narrow a type in the function body?
|
<p>I am implementing a decorator to replace repeated early-exit checks such as</p>
<pre class="lang-py prettyprint-override"><code>if self.data is None:
self.log_something()
return
</code></pre>
<p>at the top of my methods. This approach (highly simplified here) does work nicely:</p>
<pre class="lang-py prettyprint-override"><code>"""early exit decorator."""
from collections.abc import Callable
Method = Callable[["Class"], None]
Condition = Callable[["Class"], bool]
def early_exit_if(condition: Condition) -> Callable[[Method], Method]:
def decorator(method: Method) -> Method:
def wrapper(instance: "Class") -> None:
if condition(instance):
return
method(instance)
return wrapper
return decorator
class Class:
def __init__(self) -> None:
self.data: list[int] | None = None
@staticmethod
def data_is_none(instance: "Class") -> bool:
return instance.data is None
@early_exit_if(data_is_none)
def do_something_with_data(self) -> None:
self.data.append(0)
if __name__ == "__main__":
Class().do_something_with_data()
</code></pre>
<p>In short, <code>@early_exit_if(data_is_none)</code> ensures that <code>data</code> is not <code>None</code> in the body of <code>do_something_with_data</code>. However, <code>mypy</code> does not know that:</p>
<blockquote>
<p>bug.py:30: error: Item "None" of "list[int] | None" has no attribute "append" [union-attr]</p>
</blockquote>
<p>How can I let <code>mypy</code> know (ideally, from within the decorator) that <code>data</code> is not <code>None</code>?</p>
<p>I know that <code>assert self.data is not None</code> works, but I dislike duplicating the check.</p>
<p><em>Edit</em>: I just learned about <code>TypeGuard</code>s, and I wonder if this approach can be combined with decorators.</p>
|
<python><mypy><typing>
|
2023-06-29 07:30:07
| 1
| 6,279
|
bers
|
76,578,852
| 6,730,854
|
Why are rvecs and tvecs different for every view in OpenCV's calibratecamera?
|
<p>I'm struggling to understand the real meaning of <code>rvecs</code> and <code>tvecs</code> in the output of <code>cv2.calibrateCamera</code>. As I understand them, together they define the pose of the camera, i.e. its position and orientation.</p>
<p>However, if I pass multiple views shot from the same camera to <code>cv2.calibrateCamera</code> I get <code>rvecs</code> and <code>tvecs</code> for every view. Shouldn't they be the same if the camera didn't move?</p>
<p>My homography matrix for the first image is defined by</p>
<pre><code>K = mtx
R = cv2.Rodrigues(rvecs[0])[0]
T = tvecs[0]
RT = np.concatenate((R,T),axis=1)
H = np.dot(K, RT)
</code></pre>
<p>And I can see that it projects correctly. As I understand homography, it should stay the same for every image because the camera is not moving. However, if R and T change, H will change too.</p>
<p>It seems like I'm missing something very important about the way <code>rvecs</code> and <code>tvecs</code> are defined in OpenCV.</p>
|
<python><opencv>
|
2023-06-29 07:21:57
| 2
| 472
|
Mike Azatov
|
76,578,654
| 11,922,765
|
Understanding statistical hist and quantile plots and how to drop unreliable data
|
<p>I downloaded weather data and performed some statistical checks, basically checking whether the data is okay to use. Does it satisfy a normal distribution? Are there completely straight lines, or just constant values due to some error? I want to find and drop such data.</p>
<p>My code:</p>
<pre><code>from datetime import datetime

import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
import pylab
from meteostat import Point, Hourly

# df: pre-existing DataFrame with a DatetimeIndex (defined elsewhere)

## Geographical location details
lati = 39.758949
long = -84.191605
elev = 225

## get the data
start_date = datetime.strptime(df.index.strftime('%Y-%m-%d')[0], '%Y-%m-%d')
end_date = datetime.strptime(df.index.strftime('%Y-%m-%d')[-1], '%Y-%m-%d')
location = Point(lati, long, elev)
hourly_data = Hourly(location, start_date, end_date)
hourly_data = hourly_data.fetch()

# Basic filtering functions
def drop_low_count_numeric_columns(df, count_limit=50):
    """
    df: dataframe
    count_limit: drop column(s) if it has less than count_limit (%) numeric data.
    """
    # total length or count
    tot_cnt = len(df)
    cols_cnt = df.count(numeric_only=True)
    cols_cnt = cols_cnt[cols_cnt > len(df)*(count_limit/100)].index
    return df[cols_cnt]

hourly_data = drop_low_count_numeric_columns(hourly_data)

# plot statistical normal distribution and quantile plots
fig, ax = plt.subplots(nrows=len(hourly_data.columns), ncols=2, figsize=(10, 16))
# ,squeeze=False,
# height_ratios=[0.4]*len(hourly_data.columns))
for col in hourly_data.columns:
    # col = 'pres'
    # Seaborn histplot with kernel distribution estimate line (applicable to univariate data)
    sns.histplot(hourly_data[col], kde=True, ax=ax[hourly_data.columns.tolist().index(col), 0])
    # quantile-quantile plot
    sm.qqplot(hourly_data[col], line='s', ax=ax[hourly_data.columns.tolist().index(col), 1])
pylab.show()
# plt.subplots_adjust(left=0.1,
#                     bottom=0.1,
#                     right=0.9,
#                     top=0.9, hspace=1, wspace=1)
fig.tight_layout(pad=5.0)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/p0wmp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p0wmp.png" alt="enter image description here" /></a></p>
<p>My questions:</p>
<ol>
<li><p>What do the above statistical plots signify, and how should I read them? For instance, the 4th left-hand subplot seems to have just one bin, indicating that it has only one repeating value and no others. I would like to identify such columns logically, or through some score, and drop them.</p>
</li>
<li><p>I know perfectly well what a histogram is, with the x-axis denoting values and the y-axis denoting frequency. But I have no idea what is on the x-axis and y-axis of the QQ plot. What are the theoretical and sample quantiles here?</p>
</li>
<li><p>I did my best (as you can see from the commented lines in my code) to adjust the subplot height so that I can see each x-axis label, which usually denotes the column name. All my efforts were in vain.</p>
</li>
</ol>
|
<python><matplotlib><seaborn><statsmodels>
|
2023-06-29 06:50:32
| 0
| 4,702
|
Mainland
|
76,578,468
| 1,664,430
|
After change in URLBLockList registrykey for Edge, I need to restart Edge to reflect the changes
|
<p>I have a use case where, while the user is working on a Windows machine, we need to blocklist a few websites.</p>
<p>I am able to do this, but where I am stuck is that we need to restart Edge every time the registry value changes, otherwise the changes are not reflected. I am assuming that Edge caches the allowlist/blocklist when it starts. Is there any way I can disable this?</p>
<p>Registry key path - Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge\URLBlocklist</p>
|
<python><java><browser><microsoft-edge><chromium>
|
2023-06-29 06:15:37
| 1
| 3,610
|
Ashishkumar Singh
|
76,578,411
| 8,391,469
|
How to correct coordinate shifting in ax.annotate
|
<p>I tried to annotate a line plot with <code>ax.annotate</code> as follows.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x_start = 0
x_end = 200
y_start = 20
y_end = 20
fig, ax = plt.subplots(figsize=(5,5),dpi=600)
ax.plot(np.asarray([i for i in range(0,1000)]))
ax.annotate('', xy=(x_start, y_start), xytext=(x_end, y_end), xycoords='data', textcoords='data',
arrowprops={'arrowstyle': '|-|'})
plt.show()
</code></pre>
<p>which gave a plot (zoomed in)</p>
<p><a href="https://i.sstatic.net/dLqf0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dLqf0.png" alt="" /></a></p>
<p>Although I have specified <code>x_start</code> to be <code>0</code> and <code>x_end</code> to be <code>200</code>, the actual start is greater than <code>0</code> and actual end is smaller than <code>200</code> on the x-axis.</p>
<p>How do I correctly line up this annotation with the set coordinates?</p>
|
<python><matplotlib><plot-annotations>
|
2023-06-29 06:04:33
| 1
| 495
|
Johnny Tam
|
76,578,403
| 11,484,423
|
Sympy system of differential equations
|
<h3>Problem</h3>
<p>I'm making a symbolic solver for mechanical links <a href="https://stackoverflow.com/questions/76473155/sympy-solve-cant-eliminate-intermediate-symbols-from-systems-of-equations">previous question for details</a></p>
<p>Right now I can get sympy.solve to solve big systems of linear equations, but I'm having a hard time making the solver handle the partial differential equations. The solver <em>can</em> solve them, but it gets confused about when and what it should solve, and doesn't output something useful.</p>
<p>Minimal Code:</p>
<pre class="lang-py prettyprint-override"><code>#Try to solve Y=Z X=dY(Z)^3/dZ
import sympy as lib_sympy
def bad_derivative_wrong( in_x : lib_sympy.Symbol, in_y : lib_sympy.Symbol, in_z : lib_sympy.Symbol ):
l_equation = []
l_equation.append( lib_sympy.Eq( in_y, in_z ) )
l_equation.append( lib_sympy.Eq( in_x, lib_sympy.Derivative(in_y*in_y*in_y, in_z, evaluate = True) ) )
solution = lib_sympy.solve( l_equation, (in_x,in_y,), exclude = () )
return solution
def bad_derivative_unhelpful( in_x : lib_sympy.Symbol, in_y : lib_sympy.Symbol, in_z : lib_sympy.Symbol ):
l_equation = []
l_equation.append( lib_sympy.Eq( in_y, in_z ) )
l_equation.append( lib_sympy.Eq( in_x, lib_sympy.Derivative(in_y*in_y*in_y, in_z, evaluate = False) ) )
solution = lib_sympy.solve( l_equation, (in_x,in_y,), exclude = () )
return solution
def good_derivative( in_x : lib_sympy.Symbol, in_y : lib_sympy.Symbol, in_z : lib_sympy.Symbol ):
l_equation = []
l_equation.append( lib_sympy.Eq( in_y, in_z ) )
l_equation.append( lib_sympy.Eq( in_x, lib_sympy.Derivative(in_z*in_z*in_z, in_z, evaluate = True) ) )
#what happens here is that Derivative has already solved the derivative, it's not a symbol
solution = lib_sympy.solve( l_equation, (in_x,in_y,), exclude = () )
#lib_sympy.dsolve
return solution
if __name__ == '__main__':
#n_x = lib_sympy.symbols('X', cls=lib_sympy.Function)
n_x = lib_sympy.symbols('X')
n_y = lib_sympy.Symbol('Y')
n_z = lib_sympy.Symbol('Z')
print("Wrong Derivative: ", bad_derivative_wrong( n_x, n_y, n_z ) )
print("Unhelpful Derivative: ", bad_derivative_unhelpful( n_x, n_y, n_z ) )
print("Good Derivative: ", good_derivative( n_x, n_y, n_z ) )
</code></pre>
<p>Output:</p>
<pre><code>Wrong Derivative: {Y: Z, X: 0}
Unhelpful Derivative: {Y: Z, X: Derivative(Y**3, Z)}
Good Derivative: {Y: Z, X: 3*Z**2}
</code></pre>
<h3>Question:</h3>
<p>I need a way to add partial derivative symbols to my equations in a form that the solver is able to solve.</p>
<p>E.g. the speed is the derivative of the position over time.
E.g. the sensitivity of the position with respect to the angle is related to precision and force.</p>
|
<python><sympy><equation><differential-equations><symbolic-math>
|
2023-06-29 06:02:13
| 1
| 670
|
05032 Mendicant Bias
|
76,578,147
| 2,981,639
|
DST temporal feature from timestamp using polars
|
<p>I'm migrating code from pandas to polars. I have time-series data consisting of a timestamp and a value column, and I need to compute a bunch of features, e.g.:</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime, timedelta
df = pl.DataFrame({
"timestamp": pl.datetime_range(
datetime(2017, 1, 1),
datetime(2018, 1, 1),
timedelta(minutes=15),
time_zone="Australia/Sydney",
time_unit="ms", eager=True),
})
value = np.random.normal(0, 1, len(df))
df = df.with_columns([pl.Series(value).alias("value")])
</code></pre>
<p>I need to generate a column containing an indicator of whether the timestamp is standard or daylight time. I'm currently using <code>map_elements</code> because, as far as I can see, there isn't a <a href="https://docs.pola.rs/api/python/stable/reference/expressions/temporal.html" rel="nofollow noreferrer">Temporal Expr</a> for this, i.e. my current code is</p>
<pre class="lang-py prettyprint-override"><code>def dst(timestamp:datetime):
return int(timestamp.dst().total_seconds()!=0)
df = df.with_columns(pl.struct("timestamp").map_elements(lambda x: dst(**x)).alias("dst"))
</code></pre>
<p>(this uses a trick that effectively checks if the <code>tzinfo.dst(dt)</code> offset is zero or not)</p>
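<p>The trick can be seen in isolation with the stdlib <code>zoneinfo</code> module (a sketch independent of polars; the two dates are arbitrary picks from the range above):</p>

```python
from datetime import datetime
from zoneinfo import ZoneInfo

sydney = ZoneInfo("Australia/Sydney")
january = datetime(2017, 1, 15, tzinfo=sydney)  # Sydney summer: daylight saving time
july = datetime(2017, 7, 15, tzinfo=sydney)     # Sydney winter: standard time

print(int(january.dst().total_seconds() != 0))  # → 1
print(int(july.dst().total_seconds() != 0))     # → 0
```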
<p>Is there a (fast) way of doing this using <code>polars expressions</code> rather than (slow) <code>map_elements</code>?</p>
|
<python><datetime><dst><python-polars>
|
2023-06-29 04:57:10
| 2
| 2,963
|
David Waterworth
|
76,578,105
| 7,885,588
|
python mock patch not working on function used in __attrs_post_init__
|
<p>here's my mock:</p>
<pre><code>with mock.patch('utils.auth.get_access_token', mock.Mock(return_value=self.mock_access_token)) as mock_get_access_token:
    self.phraseController = PhraseController(
        cognito_client_id=self.cognito_client_id,
        cognito_client_secret=self.cognito_client_secret,
        raw_object_key=self.raw_object_key,
        backend_url=self.backend_url,
        cdn_url=self.cdn_url,
        tagging_version=self.tagging_version,
        output_dir=self.output_dir,
        event_time=self.event_time,
        output_bucket=self.output_bucket,
        s3_endpoint_url=self.s3_endpoint_url
    )
</code></pre>
<p>and the method that gets called after init in PhraseController:</p>
<pre><code>from utils.auth import get_access_token

@define(auto_attribs=True)
class PhraseController():
    def __attrs_post_init__(self):
        self.access_token = get_access_token(self.cognito_client_id, self.cognito_client_secret)
    ...
</code></pre>
<p>I just want to mock out get_access_token but it won't pick up the mock.</p>
|
<python><python-unittest>
|
2023-06-29 04:43:21
| 1
| 337
|
Steven Staley
|
76,578,071
| 10,916,136
|
Count number of words with 3 or more letters from a string in R
|
<p>I have a string sentence. I need to find the count of words in the sentence which have 3 or more letters.</p>
<p>For example:</p>
<pre><code>sentence <- 'I have a string sentence but i do not know how to get three lettered words from it'
# Total words = 18
# 3 or more lettered words = 12
</code></pre>
<p>How I do it in Python in a basic way:</p>
<pre><code>words = sentence.split(' ')  # this creates a list of words
count = 0
for each in words:
    if len(each) >= 3:
        count = count + 1
print(count)
</code></pre>
<p>An alternative way in Python (a little crude, but it works):</p>
<pre><code>print(len(list(filter(lambda word: len(word)>= 3, words))))
</code></pre>
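<p>(For comparison, a slightly tidier Python spelling of the same count, using only builtins; booleans sum as 0/1:)</p>

```python
sentence = 'I have a string sentence but i do not know how to get three lettered words from it'
words = sentence.split(' ')
count = sum(len(word) >= 3 for word in words)  # True counts as 1, False as 0
print(count)  # → 12
```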
<p>I tried doing the same thing in R:</p>
<pre><code>words <- strsplit(sentence, split = ' ')
count <- 0
for (word in words) {
    l <- nchar(word)
    if (l >= 3) {
        count <- count + 1
    }
}
print(count)
</code></pre>
<p>This results in an error for me:</p>
<pre><code># Error in if (l >= 3) { : the condition has length > 1
# ERROR!
# Execution halted
</code></pre>
<p>When I checked this error on the web, it says that if we provide a vector to the if condition, then this error occurs. But I provided it with a simple numeric variable, so I do not understand what is causing this error.</p>
<p>Can someone please explain and help me out?</p>
<p>P.s.: I do not want to use any external package for this. I am learning R so want to do it with basics.</p>
|
<python><r><string><count>
|
2023-06-29 04:31:19
| 1
| 571
|
Veki
|
76,577,951
| 3,853,711
|
Programs that can't run in parallel
|
<p>I am building a Python program that initiates the same program (<code>benchmark_program</code> in this example) with different parameters and then examines the results.</p>
<p>To improve efficiency I invoke several subprocesses from <code>multiprocessing.Pool</code> to run each command in parallel. Then the weird thing is: no matter how much I increase the number of cores (the number of parallel subprocesses, still below the physical core count, with all cores available), the total test time remains constant, and from <code>htop</code> I found that every time the script executes, only 1 core is actually engaged (IO is not busy either).</p>
<p>At first I thought it was a Python problem, so I wrote a simple program <code>compute_primes</code> to do some heavy computation on a single core, and there parallelizing does improve performance.</p>
<p>So I guess there is something in <code>benchmark_program</code>, perhaps some libraries, that can be invoked/called by only 1 CPU at a time. Now my question is: what makes a program unable to run in parallel? Is it some shared libraries or dependencies?</p>
<p>the test script:</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing import Pool
import subprocess
import time
def invoke_subprocess(arg):
#command = "benchmark_program {}".format(arg)
#command = "compute_primes {}".format(arg)
command = "sleep 0.5"
return subprocess.run(command,
shell=True,
check=True,
stdout=subprocess.DEVNULL,
stderr=subprocess.STDOUT).returncode
if __name__ == '__main__':
for num_core in [2, 4, 8, 16]:
start = time.time()
with Pool(num_core) as p:
collected_result = p.map(invoke_subprocess, range(16))
print("core-{} took {:.2f} second".format(num_core,
time.time() - start))
</code></pre>
<p>This is basically the parallel script. You get the original program by commenting out <code>command = "sleep 0.5"</code> and uncommenting <code>#command = "benchmark_program {}".format(arg)</code>.</p>
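<p>One diagnostic I can think of (an assumption about the cause, not something the script above proves): a child process can only run on the CPUs in its inherited affinity mask, so if <code>benchmark_program</code> or its launcher pins itself to one CPU, every subprocess inherits that. On Linux the mask can be inspected from Python:</p>

```python
import os

# set of CPU ids the current process (and, by inheritance, its children)
# is allowed to run on; a single-element set would explain the 1-core behaviour
allowed_cpus = os.sched_getaffinity(0)
print(len(allowed_cpus))
```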
|
<python><parallel-processing><shared-libraries><static-analysis>
|
2023-06-29 03:48:58
| 1
| 5,555
|
Rahn
|
76,577,745
| 5,722,359
|
How to get ttk.Treeview.bbox to work correctly during instantiating?
|
<p>The script below is meant to expose the ids (iid) of the visible top-level items of a <code>ttk.Treeview</code> widget. I created a method called <code>._show_visible_toplevel_iids</code> in the class to achieve this. I notice that this method (specifically the <code>.bbox</code> method of the <code>ttk.Treeview</code> widget) does not work correctly during the instantiation stage of the class. However, it works thereafter, i.e. whenever the <code>ttk.Button</code> widget which runs the <code>._show_visible_toplevel_iids</code> method is clicked with the mouse.</p>
<p>The following messages are printed out during instantiation:</p>
<pre><code>all_toplevel_iids=('G0', 'G1', 'G2', 'G3', 'G4', 'G5', 'G6', 'G7', 'G8', 'G9')
visible_toplevel_iids=[]
children_iids=[]
all_toplevel_iids=('G0', 'G1', 'G2', 'G3', 'G4', 'G5', 'G6', 'G7', 'G8', 'G9')
visible_toplevel_iids=[]
children_iids=[]
</code></pre>
<p>The following messages are printed out when the <code>ttk.Button</code> is clicked:</p>
<pre><code>all_toplevel_iids=('G0', 'G1', 'G2', 'G3', 'G4', 'G5', 'G6', 'G7', 'G8', 'G9')
visible_toplevel_iids=['G0', 'G1', 'G2', 'G3', 'G4']
children_iids=[('F0', 'F1'), ('F2', 'F3'), ('F4', 'F5'), ('F6', 'F7'), ('F8', 'F9')]
</code></pre>
<p><strong>Can you help me understand what I am doing wrong? Is there a bug in tkinter itself?</strong></p>
<p>Test Script:</p>
<pre><code>#!/usr/bin/python3
# -*- coding: utf-8 -*-

import tkinter as tk
import tkinter.ttk as ttk


class App(ttk.Frame):

    def __init__(self, parent, *args, **kwargs):
        BG0 = '#aabfe0'  #Blue scheme
        BG1 = '#4e88e5'  #Blue scheme
        ttk.Frame.__init__(self, parent, style='App.TFrame', borderwidth=0,
                           relief='raised', width=390, height=390)
        self.parent = parent
        self.parent.title('Treeview')
        self.parent.geometry('470x350')
        self.setStyle()
        self.createWidgets(BG0, BG1)
        self.rowconfigure(0, weight=1)
        self.columnconfigure(0, weight=1)

    def setStyle(self):
        style = ttk.Style()
        style.configure('App.TFrame', background='pink')

    def createWidgets(self, BG0, BG1):
        # Treeview with scroll bars
        columns = [f'Column {i}' for i in range(10)]
        self.tree = ttk.Treeview(
            self, height=20, selectmode='extended', takefocus=True,
            columns=("type", "property A", "property B", "selected"),
            displaycolumns=["property A", "property B", "selected"])
        self.ysb = ttk.Scrollbar(self, orient=tk.VERTICAL)
        self.xsb = ttk.Scrollbar(self, orient=tk.HORIZONTAL)
        self.tree.configure(yscrollcommand=self.ysb.set,
                            xscrollcommand=self.xsb.set)
        self.tree.grid(row=0, column=0, columnspan=4, sticky='nsew')
        self.ysb.grid(row=0, column=5, sticky='ns')
        self.xsb.grid(row=1, column=0, columnspan=4, sticky='ew')
        self.tree.column('#0', stretch=True, anchor='w', width=100)
        self.tree.column('property A', stretch=True, anchor='n', width=100)
        self.tree.column('property B', stretch=True, anchor='n', width=100)
        self.tree.column('selected', stretch=True, anchor='n', width=100)
        self.tree.heading('#0', text="Type", anchor='w')
        self.tree.heading('property A', text='Property A', anchor='center')
        self.tree.heading('property B', text='Property B', anchor='center')
        self.tree.heading('selected', text='Selected', anchor='center')
        counter = 0
        for i in range(10):
            self.tree.tag_configure(i)
            tliid = f"G{i}"
            self.tree.insert("", "end", iid=tliid, open=True,
                             tags=[tliid, 'Parent'], text=f"Restaurant {i}")
            ciid = f"F{counter}"
            self.tree.insert(tliid, "end", iid=ciid, text=f"Cookie",
                             tags=[tliid, ciid, 'Child', "Not Selected"],
                             values=(tliid, f"{i}-Ca", f"{i}-Cb", False))
            counter += 1
            ciid = f"F{counter}"
            self.tree.insert(tliid, "end", iid=ciid, text=f"Pudding",
                             tags=[tliid, ciid, 'Child', "Not Selected"],
                             values=(tliid, f"{i}-Pa", f"{i}-Pb", False))
            counter += 1
        # Button
        self.showbutton = ttk.Button(self, text="Show Visible Top-Level Items iid",
                                     command=self._show_visible_toplevel_iids)
        self.showbutton.grid(row=2, column=0, sticky='nsew')
        self._show_visible_toplevel_iids()  # .bbox doesn't work correctly
        self.showbutton.invoke()  # .bbox doesn't work correctly

    def _show_visible_toplevel_iids(self):
        tree = self.tree
        tree.update_idletasks()
        all_toplevel_iids = tree.get_children()
        visible_toplevel_iids = [i for i in all_toplevel_iids
                                 if isinstance(tree.bbox(i), tuple)]
        children_iids = [tree.get_children([i]) for i in visible_toplevel_iids]
        print()
        print(f"{all_toplevel_iids=}")
        print(f"{visible_toplevel_iids=}")
        print(f"{children_iids=}")


if __name__ == '__main__':
    root = tk.Tk()
    app = App(root)
    app.grid(row=0, column=0, sticky='nsew')
    root.rowconfigure(0, weight=1)
    root.columnconfigure(0, weight=1)
    root.mainloop()
</code></pre>
|
<python><tkinter><treeview><tcl>
|
2023-06-29 02:36:29
| 2
| 8,499
|
Sun Bear
|
76,577,744
| 3,179,698
|
Is there a way to train models in sequence from a list of model names in Python
|
<p>I was reading an article, but I can't find it now.</p>
<p>Do you have any clue how to train models using their names as parameters?</p>
<p>something like:</p>
<pre><code>model_level1 = ['svm','decision_tree','logistic_regression']
model_level2 = ['random_forest','neural_network']
</code></pre>
<p>and then training as a pipeline, using the results of the first-level models as the input to the second-level models.</p>
<p>But I don't remember the exact code and can't find it online. I clearly remember reading someone doing it that way, and it was elegant.</p>
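<p>To pin down what I mean: the closest thing I can sketch myself uses scikit-learn's <code>StackingClassifier</code> with a hypothetical name-to-estimator registry (this is my reconstruction, not the article's code; note <code>StackingClassifier</code> takes a single final estimator, so only one level-2 model fits directly):</p>

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# hypothetical registry mapping name strings to estimator classes
registry = {
    'svm': SVC,
    'decision_tree': DecisionTreeClassifier,
    'logistic_regression': LogisticRegression,
    'random_forest': RandomForestClassifier,
}

model_level1 = ['svm', 'decision_tree', 'logistic_regression']
level1 = [(name, registry[name]()) for name in model_level1]

# the level-1 predictions become the input of the level-2 (final) estimator
stack = StackingClassifier(estimators=level1,
                           final_estimator=registry['random_forest']())

X, y = make_classification(n_samples=200, random_state=0)
stack.fit(X, y)
print(round(stack.score(X, y), 3))
```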
|
<python><machine-learning>
|
2023-06-29 02:36:22
| 1
| 1,504
|
cloudscomputes
|
76,577,612
| 4,827,407
|
How to use python package multiprocessing in metaflow?
|
<p>I am trying to use the multiprocessing package in Metaflow, in which a fasttext model runs to predict some results. Here is my code:</p>
<pre><code>import pickle
import os
import boto3
import multiprocessing
from functools import partial
from multiprocessing import Manager
import time

from metaflow import batch, conda, FlowSpec, step, conda_base, Flow, Step
from util import pip_install_module


@conda_base(libraries={'scikit-learn': '0.23.1', 'numpy': '1.22.4', 'pandas': '1.5.1', 'fasttext': '0.9.2'})
class BatchInference(FlowSpec):
    pip_install_module("python-dev-tools", "2023.3.24")

    @batch(cpu=10, memory=120000)
    @step
    def start(self):
        import pandas as pd
        import numpy as np

        self.df_input = ['af', 'febrt', 'fefv fd we', 'fe hth dw hytht', ' dfegrtg hg df reg']
        self.next(self.predict)

    @batch(cpu=10, memory=120000)
    @step
    def predict(self):
        import fasttext
        fasttext.FastText.eprint = lambda x: None

        print('model reading started')
        # download the fasttext model from aws s3.
        manager = Manager()
        model_abn = manager.list([fasttext.load_model('fasttext_model.bin')])
        print('model reading finished')

        time_start = time.time()
        pool = multiprocessing.Pool()
        #results = pool.map(self.predict_abn, self.df_input)
        results = pool.map(partial(self.predict_abn, model_abn=model_abn), self.df_input)
        pool.close()
        pool.join()
        time_end = time.time()
        print(f"Time elapsed: {round(time_end - time_start, 2)}s")

        self.next(self.end)

    @step
    def end(self):
        print("Predictions evaluated successfully")

    def predict_abn(self, text, model_abn):
        model = model_abn[0]
        return model.predict(text, k=1)


if __name__ == '__main__':
    BatchInference()
</code></pre>
<p>The error message is:</p>
<pre><code>TypeError: cannot pickle 'fasttext_pybind.fasttext' object
</code></pre>
<p>I was told this is because the fasttext model cannot be serialised. I also tried other approaches, for example:</p>
<pre><code>self.model_bytes_abn = pickle.dumps(model_abn)
</code></pre>
<p>to convert the model to bytes. But it still does not work.</p>
<p>Please tell me what is wrong with the code and how to fix it.</p>
|
<python><serialization><multiprocessing><pickle><netflix-metaflow>
|
2023-06-29 01:50:10
| 1
| 2,273
|
Feng Chen
|
76,577,527
| 1,245,262
|
Remove all rows in a Pandas DataFrame where a column is True
|
<p>I'm attempting to filter values out of a Pandas dataframe, but have Googled and ChatGPT'd to no avail:</p>
<p>why does</p>
<pre><code>x1 = df[df!=True]
x2 = df[df==True]
</code></pre>
<p>result in 2 dataframes, each with the same shape as the original? How can I filter this dataframe into the parts that are True and those that are not.?</p>
<p>Ultimately, I want to do this filtering on a dataframe with several columns, so what I really want to do is more lke:</p>
<pre><code>x1 = df[df['col1']!=True]
x2 = df[df['col1']==True]
</code></pre>
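<p>A tiny demonstration of the difference I am seeing (my own toy frame, not real data): indexing with a boolean <em>DataFrame</em> masks element-wise and keeps the shape, while indexing with a boolean <em>Series</em> drops rows:</p>

```python
import pandas as pd

df = pd.DataFrame({'col1': [True, False, True], 'col2': [10, 20, 30]})

masked = df[df == True]            # boolean DataFrame: element-wise mask, NaN elsewhere
filtered = df[df['col1'] == True]  # boolean Series: keeps only matching rows

print(masked.shape)    # → (3, 2)
print(filtered.shape)  # → (2, 2)
```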
|
<python><pandas><dataframe>
|
2023-06-29 01:08:09
| 1
| 7,555
|
user1245262
|
76,577,388
| 1,715,544
|
Why doesn't the `nonlocal` keyword propagate the outer-scoped variable to the calling module?
|
<p>So this involves a maybe unusual chain of things:</p>
<p><strong>A.py</strong></p>
<pre><code>from B import call

def make_call():
    print("I'm making a call!")
    call(phone_number="867-5309")
</code></pre>
<p><strong>B.py</strong></p>
<pre><code>def call(phone_number):
    pass
</code></pre>
<p><strong>test_A.py</strong></p>
<pre><code>import pytest
from A import make_call

@pytest.fixture
def patch_A_call(monkeypatch):
    number = "NOT ENTERED"
    number_list = []

    def call(phone_number):
        nonlocal number
        number = phone_number
        number_list.append(phone_number)

    monkeypatch.setattr("A.call", call)
    return (number, number_list)

def test_make_call(patch_A_call):
    make_call()
    print(f"number: {patch_A_call[0]}")
    print(f"number_list: {patch_A_call[1]}")
</code></pre>
<p>What's printed is:</p>
<pre><code>number: NOT ENTERED
number_list: ['867-5309']
</code></pre>
<p>I expected "867-5309" to be the value for both results.</p>
<p>I know that lists are passed by reference in Python, but I assumed that the <code>nonlocal</code> declaration would pass the variable along down the chain.</p>
<p>Why doesn't it work this way?</p>
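<p>The same behaviour reproduces without pytest at all (a stdlib sketch of my understanding): <code>nonlocal</code> rebinds the name inside the closure, but the tuple returned earlier still holds the old string object, while the list is shared by reference:</p>

```python
def make_fixture():
    number = "NOT ENTERED"
    number_list = []

    def call(phone_number):
        nonlocal number                   # rebinds the closure cell only
        number = phone_number
        number_list.append(phone_number)  # mutates the shared list object

    # the tuple captures the string object as it is *now*, not the name itself
    return (number, number_list), call

snapshot, call = make_fixture()
call("867-5309")
print(snapshot[0])  # → NOT ENTERED
print(snapshot[1])  # → ['867-5309']
```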
|
<python><scope><pytest><monkeypatching><python-nonlocal>
|
2023-06-29 00:10:42
| 1
| 1,410
|
AmagicalFishy
|
76,577,266
| 12,206,312
|
How to send animated GIF from memory to FastAPI endpoint?
|
<p>I'm trying to send a GIF from memory to my FastAPI endpoint, which works, but the GIF isn't animated. When I save it locally instead, the animation works fine.
I don't want to save the image; I want to keep it in memory until it's returned from the endpoint.</p>
<p>I've already checked out this post, but still couldn't get it working: <a href="https://stackoverflow.com/questions/67571477/python-fastapi-returned-gif-image-is-not-animating">Python FastAPI: Returned gif image is not animating</a></p>
<p>So how can I return an animated .gif using FastAPI?</p>
<p>This is my attempted solution:</p>
<pre class="lang-py prettyprint-override"><code># main.py
@app.get('/youtube/{video_id}.gif')
def youtube_thumbnail (video_id: str, width: int = 320, height: int = 180):
image = util.create_youtube_gif_thumbnail(video_id)
return util.gif_to_streaming_response(image)
</code></pre>
<pre class="lang-py prettyprint-override"><code># util.py
def gif_to_streaming_response(image: Image):
imgio = BytesIO()
image.save(imgio, 'GIF')
imgio.seek(0)
return StreamingResponse(content=imgio, media_type="image/gif")
def create_gif(images, duration):
gif = images[0]
output = BytesIO() # for some reason this doesn't work (it shows a still image), but if I save the image with output = "./file.gif" the animation works!
# this works
output = "./file.gif"
gif.save(output, format='GIF', save_all=True, append_images=images, loop=0, duration=duration)
return gif
def create_youtube_gif_thumbnail(video_id: str):
images = [read_img_from_url(f"https://i.ytimg.com/vi/{video_id}/{i}.jpg") for i in range(1,3)]
return create_gif(images, 150)
</code></pre>
<p>Here's the full codebase: <a href="https://github.com/Snailedlt/Markdown-Videos/blob/418de75e200bf9d9f4f02e5a667af4c9b226b5d3/util.py#L74" rel="nofollow noreferrer">https://github.com/Snailedlt/Markdown-Videos/blob/418de75e200bf9d9f4f02e5a667af4c9b226b5d3/util.py#L74</a></p>
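<p>For what it's worth, Pillow itself can write an animated GIF into a <code>BytesIO</code> and read it back animated (a minimal sketch with arbitrary frames, unrelated to the YouTube thumbnails), so the in-memory save as such does seem to work:</p>

```python
from io import BytesIO
from PIL import Image

# three distinct solid-grey frames, just to have something to animate
frames = [Image.new("L", (16, 16), color=i * 60) for i in range(3)]

buf = BytesIO()
frames[0].save(buf, format="GIF", save_all=True,
               append_images=frames[1:], loop=0, duration=150)
buf.seek(0)

reloaded = Image.open(buf)
print(reloaded.is_animated, reloaded.n_frames)
```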
|
<python><flask><python-imaging-library><fastapi><animated-gif>
|
2023-06-28 23:30:34
| 1
| 823
|
Snailedlt
|
76,577,224
| 2,153,235
|
Python: Does importing "sys" module also import "os" module?
|
<p>I am following <a href="https://www.educative.io/answers/what-is-osgetenv-method-in-python" rel="nofollow noreferrer">this tutorial</a> to see why Python doesn't recognize an environment variable that was set at the Conda prompt (using CMD syntax, as this is an Anaconda installation on Windows 10).</p>
<p>It requires the <code>os</code> module, and as part of my getting familiar with Python, I decided to test whether <code>os</code> was already imported. Testing the presence of a module requires the <code>sys</code> module (as described <a href="https://stackoverflow.com/a/30483269/2153235">here</a>).</p>
<p>Strangely, right after importing <code>sys</code>, I found that <code>os</code> was imported without me having to do so. I find this odd, as most of my googling shows that you have to import them individually, e.g., <a href="https://thomas-cokelaer.info/tutorials/python/module_os.html" rel="nofollow noreferrer">here</a>.</p>
<p><strong>Does importing <code>sys</code> also import <code>os</code>, as it seems to? If so, why is it common to import both individually?</strong></p>
<p>I can't test for the presence of <code>os</code> <em>before</em> importing <code>sys</code>, as I need <code>sys</code> to test for the presence of modules.</p>
<p>Here is the code that shows the apparent presence of <code>os</code> from importing <code>sys</code>, formatted for readability. It starts from the Conda prompt in Windows. The Conda environment is "py39", which is for Python 3.9:</p>
<pre><code>(py39) C:\Users\User.Name > python
Python 3.9.16 (main, Mar 8 2023, 10:39:24)
[MSC v.1916 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license"
for more information.
>>> import sys
>>> "os" in sys.modules
True
</code></pre>
<p><strong>Afternote:</strong> Thanks to Zero's answer, I found <a href="https://stackoverflow.com/a/51637094/2153235">this</a> code to be more what I'm looking for. After loading <code>sys</code>, the appropriate test is <code>( 'os' in sys.modules ) and ( 'os' in dir() )</code>:</p>
<pre><code>(py39) C:\Users\User.Name > python
'os' in dir() # False
import sys
'os' in sys.modules , 'os' in dir() # (True, False)
( 'os' in sys.modules ) and ( 'os' in dir() ) # False
import os
'os' in sys.modules , 'os' in dir() # (True, True)
( 'os' in sys.modules ) and ( 'os' in dir() ) # True
</code></pre>
<p><code>sys.modules</code> shows whether the module has been imported anywhere (presumably in the code that the Python interpreter has executed) while <code>dir()</code> indicates whether the module name is in the current namespace. Thanks to <em>Carcigenicate</em> for clarifying this point, and I hope that I understood it properly.</p>
|
<python><module>
|
2023-06-28 23:16:05
| 2
| 1,265
|
user2153235
|
76,577,129
| 1,270,003
|
multiprocessing worker code not executing
|
<p>I have code like this:</p>
<pre><code>from multiprocessing import Pool

def worker(data_row):
    print("In worker", data_row)

if __name__ == '__main__':
    db_conn = init_db_conn()
    rows = db_conn.execute(query).fetchall()
    db_conn.close()

    pool = Pool(4)
    jobs = []
    for row in rows:
        print("In main", row)
        job = pool.apply_async(worker, (row,))
        jobs.append(job)
    for job in jobs:
        job.get()
    pool.join()
    pool.close()
</code></pre>
<p>When I execute this, only <code>In main</code> are printed, and zero <code>In worker</code>, so no worker code is executed. How to resolve this?</p>
|
<python><python-multiprocessing>
|
2023-06-28 22:43:54
| 1
| 15,444
|
SwiftMango
|
76,576,955
| 15,593,152
|
Find K nearest neighbors of points in DF from another DF
|
<p>I have a DataFrame "grid" that forms a 2D grid with nodes placed at regular intervals both in X and Y (square grid), but some of the nodes are missing from the grid. The nodes are determined by <code>id</code>, <code>x</code>, and <code>y</code>, where <code>x</code> and <code>y</code> are integers and <code>id</code> is a string.</p>
<p>On the other hand, I have another DF "seeds" that contains only some of the nodes of this "square_grid". "seeds" is determined by <code>seed_id</code>, <code>x</code>, and <code>y</code>, where <code>x</code> and <code>y</code> are integers and <code>seed_id</code> is a string.</p>
<p>I need to compute, for every <code>seed_id</code>, the <code>id</code>s of its K nearest neighbors. For example, if a seed is placed in a region where there are no missing nodes, it will have 8 first neighbors, 16 second nearest neighbors, etc.</p>
<p>So, if <code>K=24</code>, I need to find the match of the <code>seed_id</code> with the <code>id</code>s of its 24 nearest neighbors.</p>
<p>What I have tried is the following:</p>
<p>I used geopandas to obtain the point geometries from their <code>x</code> and <code>y</code> coordinates, then apply a buffer that forms a circle around each <code>seed_id</code>, with the radius set to catch just the first nearest neighbors (<code>K=8</code>), or all the first and second nearest neighbors (<code>K=24</code>), and so on, and then I use a spatial join (sjoin) to see which <code>id</code>s are inside that circle:</p>
<pre><code>import geopandas as gpd
circles = seeds.buffer(2)
tbuf = circles.to_frame('geometry').reset_index().merge(
seeds.reset_index(),on='index').drop(columns='index')
match = tbuf.sjoin(grid,predicate='contains').drop(
columns = ['geometry','index_right']).drop(columns=['X','Y'])
</code></pre>
<p>This code works, but I would like to obtain the nearest neighbors without using spatial operations (the grid positions are integers) and just something like:</p>
<ul>
<li>In the case I only need the first nearest neighbors, then, for each seed_id, match with all the <code>id</code>s that are in the square formed by <code>[x-1,x+1,y-1,y+1]</code>.</li>
<li>In the case I only need the first and second nearest neighbors, then, for each seed_id, match with all the <code>id</code>s that are in the square formed by <code>[x-2,x+2,y-2,y+2]</code>.</li>
</ul>
<p>Any ideas?</p>
<p>Example of a grid with one node missing:</p>
<pre><code>1 2 3 4
4 5 6
7 8 9 10
11 12 13 14
</code></pre>
<p>If the DF "seeds" contains only 7 and 10, and I only need the first neighbors, then the expected outcome is:</p>
<pre><code>7,4
7,5
7,8
7,11
7,12
10,6
10,9
10,13
10,14
</code></pre>
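<p>The kind of integer-only matching I have in mind, sketched on a toy 2x2 grid (a cross merge plus a Chebyshev-distance filter; the toy ids and this pandas approach are my own, not verified against the real data):</p>

```python
import pandas as pd

# Hypothetical tiny grid standing in for the real one
grid = pd.DataFrame({
    'id': ['a', 'b', 'c', 'd'],
    'x':  [0, 1, 0, 1],
    'y':  [0, 0, 1, 1],
})
seeds = pd.DataFrame({'seed_id': ['s1'], 'x': [0], 'y': [0]})

k = 1  # k=1 catches up to 8 first neighbors, k=2 up to 24, etc.
pairs = seeds.merge(grid, how='cross', suffixes=('_s', '_g'))
near = (
    (pairs.x_s - pairs.x_g).abs().le(k)
    & (pairs.y_s - pairs.y_g).abs().le(k)
    & ~((pairs.x_s == pairs.x_g) & (pairs.y_s == pairs.y_g))  # drop the seed's own node
)
matches = pairs.loc[near, ['seed_id', 'id']]
print(matches)
```

The cross merge builds every seed/node pair, so for large frames it may need chunking, but it avoids geometry entirely.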
|
<python><pandas><nearest-neighbor>
|
2023-06-28 22:00:22
| 0
| 397
|
ElTitoFranki
|
76,576,942
| 5,404,647
|
Performance degradation with increasing threads in Python multiprocessing
|
<p>I have a machine with 24 cores and 2 threads per core. I'm trying to optimize the following code for parallel execution. However, I noticed that the code's performance starts to degrade after a certain number of threads.</p>
<pre><code>import argparse
import glob
import h5py
import numpy as np
import pandas as pd
import xarray as xr
from tqdm import tqdm
import time
import datetime
from multiprocessing import Pool, cpu_count, Lock
import multiprocessing
import cProfile, pstats, io


def process_parcel_file(f, bands, mask):
    start_time = time.time()
    test = xr.open_dataset(f)
    print(f"Elapsed in process_parcel_file for reading dataset: {time.time() - start_time}")
    start_time = time.time()
    subset = test[bands + ['SCL']].copy()
    subset = subset.where(subset != 0, np.nan)
    if mask:
        subset = subset.where((subset.SCL >= 3) & (subset.SCL < 7))
    subset = subset[bands]
    # Adding a new dimension week_year and performing grouping
    subset['week_year'] = subset.time.dt.strftime('%Y-%U')
    subset = subset.groupby('week_year').mean().sortby('week_year')
    subset['id'] = test['id'].copy()

    # Store the dates and counting pixels for each parcel
    dates = subset.week_year.values
    n_pixels = test[['id', 'SCL']].groupby('id').count()['SCL'][:, 0].values.reshape(-1, 1)

    # Converting to dataframe
    grouped_sum = subset.groupby('id').sum()
    ids = grouped_sum.id.values
    grouped_sum = grouped_sum.to_array().values
    grouped_sum = np.swapaxes(grouped_sum, 0, 1)
    grouped_sum = grouped_sum.reshape((grouped_sum.shape[0], -1))
    colnames = ["{}_{}".format(b, str(x).split('T')[0]) for b in bands for x in dates] + ['count']
    values = np.hstack((grouped_sum, n_pixels))
    df = pd.DataFrame(values, columns=colnames)
    df.insert(0, 'id', ids)
    print(f"Elapsed in process_parcel_file til end: {time.time() - start_time}")
    return df


def fs_creation(input_dir, out_file, labels_to_keep=None, th=0.1, n=64, days=5, total_days=180, mask=False,
                mode='s2', method='patch', bands=['B02', 'B03', 'B04', 'B05', 'B06', 'B07', 'B08', 'B8A', 'B11', 'B12']):
    files = glob.glob(input_dir)
    times_pool = []  # For storing execution times
    times_seq = []
    cpu_counts = list(range(2, multiprocessing.cpu_count() + 1, 4))  # The different CPU counts to use
    for count in cpu_counts:
        print(f"Executing with {count} threads")
        if method == 'parcel':
            start_pool = time.time()
            with Pool(count) as pool:
                arguments = [(f, bands, mask) for f in files]
                dfs = list(tqdm(pool.starmap(process_parcel_file, arguments), total=len(arguments)))
            end_pool = time.time()
            start_seq = time.time()
            dfs = pd.concat(dfs)
            dfs = dfs.groupby('id').sum()
            counts = dfs['count'].copy()
            dfs = dfs.div(dfs['count'], axis=0)
            dfs['count'] = counts
            dfs.drop(index=-1).to_csv(out_file)
            end_seq = time.time()
            times_pool.append(end_pool - start_pool)
            times_seq.append(end_seq - start_seq)
    pd.DataFrame({'CPU_count': cpu_counts, 'Time pool': times_pool,
                  'Time seq': times_seq}).to_csv('cpu_times.csv', index=False)
    return 0
</code></pre>
<p>When executing the code, it scales well up to around 7-8 threads, but after that, the performance starts to deteriorate. I have profiled the code, and it seems that each thread takes more time to execute the same code.</p>
<p>For example, with 2 threads:</p>
<pre><code>Elapsed in process_parcel_file for reading dataset: 0.012271404266357422
Elapsed in process_parcel_file til end: 1.6681673526763916
Elapsed in process_parcel_file for reading dataset: 0.014229536056518555
Elapsed in process_parcel_file til end: 1.5836331844329834
</code></pre>
<p>However, with 22 threads:</p>
<pre><code>Elapsed in process_parcel_file for reading dataset: 0.17968058586120605
Elapsed in process_parcel_file til end: 12.049026727676392
Elapsed in process_parcel_file for reading dataset: 0.052398681640625
Elapsed in process_parcel_file til end: 6.014119625091553
</code></pre>
<p>I'm struggling to understand why the performance degrades with more threads. I've already verified that the system has the required number of cores and threads.</p>
<p>I would appreciate any guidance or suggestions to help me identify the cause of this issue and optimize the code for better performance.</p>
<p>It's really hard for me to provide a minimal working example so take that into account.</p>
<p>Thank you in advance.</p>
<p>Edit:
The files are around 80MB each. I have 451 files.
I added the following code to profile the function:</p>
<pre><code>...
start_time = time.time()
mem_usage_start = memory_usage(-1, interval=0.1, timeout=None)[0]
cpu_usage_start = psutil.cpu_percent(interval=None)
test = xr.open_dataset(f)
times['read_dataset'] = time.time() - start_time
memory['read_dataset'] = memory_usage(-1, interval=0.1, timeout=None)[0] - mem_usage_start
cpu_usage['read_dataset'] = psutil.cpu_percent(interval=None) - cpu_usage_start
...
</code></pre>
<p>And more code for each line in a similar fashion.
I used the libraries <code>memory_profiler</code> and <code>psutil</code>, and I have the information for each thread.
CSV's with the results are available here:
<a href="https://wetransfer.com/downloads/44df14ea831da7693300a29d8e0d4e7a20230703173536/da04a0" rel="nofollow noreferrer">https://wetransfer.com/downloads/44df14ea831da7693300a29d8e0d4e7a20230703173536/da04a0</a>
The results identify each line in the function with the number of cpus selected, so each one is a thread.</p>
<p>Edit2:</p>
<p>Here I have a report of a subset of the data, where you can clearly see what each thread is doing, and how some threads are getting less work than others:</p>
<p><a href="https://wetransfer.com/downloads/259b4e42aae6dd9cda5a22d576aba29520230717135248/ae3f88" rel="nofollow noreferrer">https://wetransfer.com/downloads/259b4e42aae6dd9cda5a22d576aba29520230717135248/ae3f88</a></p>
|
<python><multithreading><parallel-processing><multiprocessing><python-multiprocessing>
|
2023-06-28 21:57:10
| 3
| 622
|
Norhther
|
76,576,882
| 9,884,812
|
Django reverse() behind API Gateway/Proxy
|
<p>My Django REST API is deployed behind an API Gateway (Kong).<br />
I want to <code>reverse()</code> some urls in my <code>APIViews</code>.<br />
I would like to ask for help to get the right url format.<br />
Based on basepath of the API gateway.</p>
<p><strong>Communication flow:</strong><br />
Client (requesting basepath) <-> Kong (forwarding to upstream) <-> Apache (Reverse Proxy) <-> Django</p>
<p>Kong defines a <strong>basepath</strong> and an <strong>upstream</strong> to forward client request to Django.<br />
Kong includes <code>X_FORWARDED_HOST</code> and <code>X_FORWARDED_PATH</code> in the HTTP header.<br />
X_FORWARDED_PATH contains the basepath of the gateway.<br />
X_FORWARDED_HOST contains the gateway-url.</p>
<p>Gateway basepath is:<br />
<code>/gateway-basepath</code></p>
<p>Upstream path is:<br />
<code>mydomain.com/py/api/v1</code></p>
<p>Basically, without the gateway, Django <code>reverse()</code> creates the following url for the <strong>users</strong> endpoint:<br />
<code>mydomain.com/py/api/v1/users/</code></p>
<p>With the API gateway, Django creates the following path:<br />
<code>apigatewayurl.com/gateway-basepath/py/api/v1/users/</code><br />
Django is considering <strong>X_FORWARDED_HOST</strong>, but not <strong>X_FORWARDED_PATH</strong></p>
<p><strong>I need the following result:</strong><br />
<code>apigatewayurl.com/gateway-basepath/users</code><br />
Otherwise the Django url resolving is not usable within the api gateway.</p>
<p>I would appreciate any help.</p>
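<p>One direction I have started sketching (not verified): a small middleware that copies the forwarded prefix into <code>SCRIPT_NAME</code>, since <code>reverse()</code> prepends the script name to every resolved URL. The middleware itself and relying on the <code>X-Forwarded-Path</code> header are my assumptions:</p>

```python
# Hypothetical middleware sketch (my own): expose the gateway's forwarded
# prefix as SCRIPT_NAME so reverse() builds gateway-relative URLs.
class ForwardedPathMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        prefix = request.headers.get('X-Forwarded-Path')
        if prefix:
            # Django's URL reversing prepends SCRIPT_NAME to resolved paths
            request.META['SCRIPT_NAME'] = prefix.rstrip('/')
        return self.get_response(request)
```

I have not confirmed this interacts correctly with DRF's <code>HyperlinkedModelSerializer</code>, so pointers are welcome.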
<p><strong>urls.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from django.urls import path
from rest_framework.views import APIView
from rest_framework.response import Response
from rest_framework.reverse import reverse
from rest_framework import routers

from . import views


class APIRootView(APIView):
    def get(self, request, format=None):
        return Response({
            'users': reverse('user-list', request=request, format=format),
        })


router = routers.DefaultRouter()
router.register(r'users', views.UserViewSet)

urlpatterns = [
    path('api/v1/', APIRootView.as_view(), name="api_root"),
]
urlpatterns += router.urls
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>from rest_framework import viewsets
from django.contrib.auth import models as django_models
from .serializers import UserSerializer
class UserViewSet(viewsets.ModelViewSet):
    queryset = django_models.User.objects.all()
    serializer_class = UserSerializer
</code></pre>
<p><strong>serializers.py</strong></p>
<pre><code>from rest_framework import serializers
from django.contrib.auth.models import User
class UserSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = User
        fields = ["url", "username", "email", "is_staff"]
</code></pre>
|
<python><django><django-rest-framework>
|
2023-06-28 21:41:24
| 1
| 539
|
Ewro
|
76,576,802
| 3,901,216
|
Matplotlib's TeX implementation searched for a file named 'cmbr10.tfm' in your texmf tree, but could not find it
|
<p>I am getting the error: <code>FileNotFoundError: Matplotlib's TeX implementation searched for a file named 'cmbr10.tfm' in your texmf tree, but could not find it</code> when I try to use matplotlib with LaTeX with a non-default font.</p>
<p>The exact filename changes depending on which font I try to use. I have checked that my latex install works with the desired package and that my PATH is set correctly. I also reinstalled texlive in case that was the source of the problem. The "missing" file is located in a sub-directory of <code>/usr/local/texlive/2023/texmf-dist/fonts/tfm</code>.
I've seen several similar issues online, but none seem to be giving this exact error.</p>
<p>Any ideas?</p>
<p>MWE from here: <a href="https://stackoverflow.com/questions/33942210/consistent-fonts-between-matplotlib-and-latex">Consistent fonts between matplotlib and latex</a></p>
<pre><code>import matplotlib.pylab as pl
from matplotlib import rc
rc('text', usetex=True)
rc('font', size=14)
rc('legend', fontsize=13)
rc('text.latex', preamble=r'\usepackage{cmbright}')
pl.figure()
pl.plot([0,1], [0,1], label=r'$\theta_{\phi}^3$')
pl.legend(frameon=False, loc=2)
pl.xlabel(r'change of $\widehat{\phi}$')
pl.ylabel(r'change of $\theta_{\phi}^3$')
</code></pre>
|
<python><matplotlib><latex>
|
2023-06-28 21:25:28
| 1
| 881
|
overfull hbox
|
76,576,689
| 14,385,099
|
Updating values in a column in pandas dataframe using itself and values in other columns
|
<p>I have a dataframe that can be simplified like that:</p>
<pre><code>import pandas as pd
from numpy import random  # numpy's random, matching randint(..., size=...)

event = ['NaN', 'SRNeg', 'NaN', 'NaN', 'NaN', 'NaN', 'SINeg', 'NaN', 'NaN', 'NaN']
scores = random.randint(100, size=(10))
TR = [1,2,3,4,5,6,7,8,9,10]
df = pd.DataFrame({'event':event, 'scores':scores, 'TR':TR})
</code></pre>
<p>I want to modify the dataframe such that whenever the value in <code>event</code> is of interest (in this case 'SRNeg' and 'SINeg'), the same value is assigned to the events of the following two TRs. This is the example output:</p>
<pre><code>event = ['NaN', 'SRNeg', 'SRNeg', 'SRNeg', 'NaN', 'NaN', 'SINeg', 'SINeg', 'SINeg', 'NaN']
scores = random.randint(100, size=(10)) # same values as before
TR = [1,2,3,4,5,6,7,8,9,10]
df = pd.DataFrame({'event':event, 'scores':scores, 'TR':TR})
</code></pre>
<p>Does anyone know of an elegant way to do this?</p>
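<p>For reference, a plain-loop version I have working on the toy data above (it feels clunky, hence the question; note the hit positions are collected first so freshly written values don't cascade):</p>

```python
import pandas as pd

event = ['NaN', 'SRNeg', 'NaN', 'NaN', 'NaN', 'NaN', 'SINeg', 'NaN', 'NaN', 'NaN']
df = pd.DataFrame({'event': event, 'TR': range(1, 11)})

# Find the original positions of interest before mutating the column
hits = df.index[df['event'].isin(['SRNeg', 'SINeg'])]
for i in hits:
    # .loc slicing is label-based and inclusive: writes rows i+1 and i+2
    df.loc[i + 1:i + 2, 'event'] = df.loc[i, 'event']
print(df['event'].tolist())
```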
<p>Thanks!</p>
|
<python><pandas>
|
2023-06-28 21:03:19
| 0
| 753
|
jo_
|
76,576,620
| 848,277
|
Optional caching in python - Wrapping a cachetools decorator with arguments
|
<p>I'm using the <code>cachetools</code> library and I would like to wrap the decorator from this library and add a class self argument to enable/disable the caching at the class level, e.g. <code>MyClass(enable_cache=True)</code></p>
<p>An example usage would be something like:</p>
<pre><code>import operator

import cachetools


class MyClass(object):
    def __init__(self, enable_cache=True):
        self.enable_cache = enable_cache
        self.cache = cachetools.LRUCache(maxsize=10)

    @cachetools.cachedmethod(operator.attrgetter('cache'))
    def calc(self, n):
        return 1*n
</code></pre>
<p>I'm not sure how to keep the cache as a shared instance attribute while allowing for the enable_cache flag within my own wrapper decorator around this library.</p>
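<p>The shape I'm imagining for the wrapper, sketched with a plain dict in place of <code>cachetools.LRUCache</code> so it is dependency-free (the decorator name, the dict stand-in, and the <code>calls</code> counter are all mine; in real code the cached branch would dispatch to <code>cachetools.cachedmethod</code>):</p>

```python
import functools


def optionally_cached(method):
    """Cache results in self.cache only when self.enable_cache is True."""
    @functools.wraps(method)
    def wrapper(self, *args):
        if not getattr(self, 'enable_cache', False):
            return method(self, *args)  # caching disabled: always recompute
        key = (method.__name__, args)
        if key not in self.cache:
            self.cache[key] = method(self, *args)
        return self.cache[key]
    return wrapper


class MyClass:
    def __init__(self, enable_cache=True):
        self.enable_cache = enable_cache
        self.cache = {}   # plain-dict stand-in for cachetools.LRUCache
        self.calls = 0    # instrumentation to see whether caching kicks in

    @optionally_cached
    def calc(self, n):
        self.calls += 1
        return 2 * n
```

With caching on, repeated <code>calc(3)</code> calls hit the method once; with it off, every call recomputes.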
|
<python><caching><decorator><wrapper><cachetools>
|
2023-06-28 20:50:27
| 3
| 12,450
|
pyCthon
|
76,576,390
| 15,160,601
|
Understanding the Performance Advantages of C++ over Other Languages
|
<p>Why does C++ generally exhibit better execution speed than Java and Python? What factors contribute to this performance disparity? I conducted a series of tests to compare the execution speeds of these languages and seek a deeper understanding of the underlying reasons.</p>
<p>Context: As a computer science student, I have been exploring various programming languages to comprehend their performance characteristics. Through my experiments, I have consistently observed that C++ tends to outperform Java and Python in terms of execution speed. However, I desire a comprehensive understanding of the factors contributing to this performance difference.</p>
<p>Hardware and Compilation Details: To ensure a fair comparison, I executed the same algorithm using identical logic and datasets across all three languages. The experiments were conducted on a system equipped with an Intel Core i7 processor (8 cores) and 16 GB of RAM.</p>
<p>For the C++ code, I utilized GCC 10.2.0 with the following compilation flags:</p>
<p><code>g++ -O3 -march=native -mtune=native -std=c++17 -o program program.cpp</code></p>
<p>Java was executed using OpenJDK 11.0.1 with the following command:</p>
<p><code>java -Xmx8G -Xms8G Program</code></p>
<p>Python code was executed using Python 3.9.0 as follows:</p>
<p><code>python3 program.py</code>
C++ code:</p>
<pre><code>#include <iostream>
#include <chrono>
#include <vector>
#include <random>

// Function to generate a random matrix of size m x n
std::vector<std::vector<int>> generateRandomMatrix(int m, int n) {
    std::vector<std::vector<int>> matrix(m, std::vector<int>(n));
    std::random_device rd;
    std::mt19937 gen(rd());
    std::uniform_int_distribution<> dis(1, 100);

    for (int i = 0; i < m; ++i) {
        for (int j = 0; j < n; ++j) {
            matrix[i][j] = dis(gen);
        }
    }

    return matrix;
}

// Matrix multiplication function
std::vector<std::vector<int>> matrixMultiplication(const std::vector<std::vector<int>>& A, const std::vector<std::vector<int>>& B) {
    int m = A.size();
    int n = B[0].size();
    int k = B.size();

    std::vector<std::vector<int>> result(m, std::vector<int>(n, 0));

    for (int i = 0; i < m; ++i) {
        for (int j = 0; j < n; ++j) {
            for (int x = 0; x < k; ++x) {
                result[i][j] += A[i][x] * B[x][j];
            }
        }
    }

    return result;
}

int main() {
    // Generate random matrices A and B of size 3 x 3
    std::vector<std::vector<int>> A = generateRandomMatrix(3, 3);
    std::vector<std::vector<int>> B = generateRandomMatrix(3, 3);

    // Measure execution time
    auto start = std::chrono::steady_clock::now();

    // Perform matrix multiplication
    std::vector<std::vector<int>> result = matrixMultiplication(A, B);

    auto end = std::chrono::steady_clock::now();
    auto duration = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();

    std::cout << "Execution time (C++): " << duration << " microseconds" << std::endl;

    return 0;
}
</code></pre>
<p>Java code:</p>
<pre><code>import java.util.Arrays;
import java.util.Random;

public class Program {
    // Function to generate a random matrix of size m x n
    public static int[][] generateRandomMatrix(int m, int n) {
        int[][] matrix = new int[m][n];
        Random random = new Random();

        for (int i = 0; i < m; ++i) {
            for (int j = 0; j < n; ++j) {
                matrix[i][j] = random.nextInt(100) + 1;
            }
        }

        return matrix;
    }

    // Matrix multiplication function
    public static int[][] matrixMultiplication(int[][] A, int[][] B) {
        int m = A.length;
        int n = B[0].length;
        int k = B.length;

        int[][] result = new int[m][n];

        for (int i = 0; i < m; ++i) {
            for (int j = 0; j < n; ++j) {
                for (int x = 0; x < k; ++x) {
                    result[i][j] += A[i][x] * B[x][j];
                }
            }
        }

        return result;
    }

    public static void main(String[] args) {
        // Generate random matrices A and B of size 3 x 3
        int[][] A = generateRandomMatrix(3, 3);
        int[][] B = generateRandomMatrix(3, 3);

        // Measure execution time
        long start = System.nanoTime();

        // Perform matrix multiplication
        int[][] result = matrixMultiplication(A, B);

        long end = System.nanoTime();
        long duration = end - start;

        System.out.println("Execution time (Java): " + duration + " nanoseconds");
    }
}
</code></pre>
<p>Python code:</p>
<pre><code>import time
import numpy as np
import random

# Function to generate a random matrix of size m x n
def generateRandomMatrix(m, n):
    return [[random.randint(1, 100) for _ in range(n)] for _ in range(m)]

# Matrix multiplication function
def matrixMultiplication(A, B):
    A = np.array(A)
    B = np.array(B)
    result = np.dot(A, B)
    return result.tolist()

if __name__ == "__main__":
    # Generate random matrices A and B of size 3 x 3
    A = generateRandomMatrix(3, 3)
    B = generateRandomMatrix(3, 3)

    # Measure execution time
    start = time.time()

    # Perform matrix multiplication
    result = matrixMultiplication(A, B)

    end = time.time()
    duration = (end - start) * 1e6

    print("Execution time (Python): {} microseconds".format(duration))
</code></pre>
<p>I noticed a substantial performance difference in favor of C++. The execution times demonstrate C++'s superiority over Java and Python.</p>
<p>I understand that C++ is typically compiled, Java employs virtual machine emulation, and Python is interpreted. Consequently, I acknowledge that differences in execution approaches and compiler optimizations may significantly contribute to these performance disparities. Nonetheless, I would appreciate a more detailed explanation of the specific reasons underlying the observed performance differences.</p>
<p>Furthermore, I have taken the recommendation into account and conducted longer tests, running the algorithms on larger datasets to minimize the impact of initial startup costs for Java and Python. Nevertheless, the performance gap between C++ and the other languages remains substantial.</p>
<p>Could someone shed light on why C++ consistently outperforms Java and Python in this scenario? Are there specific compiler flags or runtime settings that could be adjusted in Java and Python to enhance their performance in similar computational tasks?</p>
<p>Thank you for sharing your insights and expertise!</p>
|
<python><java><c++><algorithm><performance>
|
2023-06-28 20:05:45
| 1
| 2,052
|
zoldxk
|
76,576,359
| 2,514,130
|
Create row for each unique value in a DataFrame column
|
<p>I have a Polars DataFrame that looks like this:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

data = {
    'id': ['N/A', 'N/A', '1', '1', '2'],
    'type': ['red', 'blue', 'yellow', 'green', 'yellow'],
    'area': [0, 0, 3, 4, 5]
}
df = pl.DataFrame(data)
</code></pre>
<pre><code>shape: (5, 3)
┌─────┬────────┬──────┐
│ id ┆ type ┆ area │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 │
╞═════╪════════╪══════╡
│ N/A ┆ red ┆ 0 │
│ N/A ┆ blue ┆ 0 │
│ 1 ┆ yellow ┆ 3 │
│ 1 ┆ green ┆ 4 │
│ 2 ┆ yellow ┆ 5 │
└─────┴────────┴──────┘
</code></pre>
<p>I would like to do some sort of pivot or transpose operation so that each row is an id (excluding 'N/A') and there is a column for each type, and the value is the area. If no value is given, it should be zero. So in this case, the result should look like this:</p>
<pre><code> red blue yellow green
'1' 0 0 3 4
'2' 0 0 5 0
</code></pre>
<p>How can I do this in Polars? I would rather avoid converting the whole thing into pandas.</p>
|
<python><dataframe><pivot><python-polars>
|
2023-06-28 20:01:17
| 2
| 5,573
|
jss367
|
76,576,213
| 7,474,744
|
Parse Large Gzip File and Manipulate Data with Limited Memory
|
<p>Use Case: Given a ~2GB .gz file with newline delimited json, manipulate each line and write output to zip file (csv)</p>
<p>Issue: The environment I am working with has ~1GB of memory and I do not have traditional access to the file system. The only way I can write to a file is by passing the entire data stream as a single object from memory (I cannot loop a generator and write to file)</p>
<p>My approach so far has been to loop through the data in my .gz file, modify the data, then compress it in memory and write it out after all data is processed. When I use chunking and do not manipulate the data this works. However, when I try to do this one line at a time it seems to run indefinitely and does not work.</p>
<p>Example gzip data:</p>
<pre><code>{"ip": "1.1.1.1", "org": "cloudflare"}
{"ip": "2.2.2.2", "org": "chickenNugget"}
</code></pre>
<p>Note that this is not true JSON; each line is valid JSON on its own, but the file as a whole is NOT a JSON array</p>
<p>Target Output:</p>
<pre><code>value,description
1.1.1.1, cloudflare
2.2.2.2, chickenNugget
</code></pre>
<p>Example that works in a few seconds using chunking:</p>
<pre><code>import gzip

chunksize = 100 * 1024 * 1024
compressed = b''  # initialisation implied in my original snippet
with gzip.open('latest.json.gz', 'rt', encoding='utf8') as f:
    while True:
        chunk = f.read(chunksize)
        if not chunk:
            break
        compressed += gzip.compress(chunk.encode())

# I am able to use the platform's internal file creation process to create
# a zip with the "compressed" variable - the issue here is that I cannot
# reliably manipulate the data.
</code></pre>
<p>What I tried but does NOT work</p>
<pre><code>import gzip
import json

compressed = 'value,description,expiration,active\n'.encode()
with gzip.open('latest.json.gz', 'rt', encoding='utf8') as f:
    for line in f:
        obj = json.loads(line)
        data = f'{obj.get("ip")}{obj.get("organization")},,True\n'
        compressed += gzip.compress(data.encode())

# This code never seems to complete - I gave up after running for 3+ hours
</code></pre>
<p><strong>EDIT</strong>
When I test the second example in an unconstrained environment it runs forever as well. However, if I modify the code like below to break after 10k lines it works as expected</p>
<pre><code>...
count = 0
for line in f:
    if count > 10000: break
    ...
    count += 1
</code></pre>
<p>Is there a better way to approach this?</p>
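<p>One alternative I've been sketching: write through a single <code>gzip.GzipFile</code> over an in-memory buffer rather than concatenating per-line members, since <code>compressed +=</code> re-copies the whole buffer on every line (quadratic), and compressing each line as its own gzip member also inflates the output. The helper name is mine, and the <code>org</code> key matches my sample data above:</p>

```python
import gzip
import io
import json

# Compress incrementally into one in-memory gzip stream (sketch)
def build_csv_gz(lines):
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode='wb') as gz:
        gz.write(b'value,description,expiration,active\n')
        for line in lines:
            obj = json.loads(line)
            row = f"{obj.get('ip')},{obj.get('org')},,True\n"
            gz.write(row.encode())
    return buf.getvalue()  # single bytes object to hand to the platform

data = [
    '{"ip": "1.1.1.1", "org": "cloudflare"}',
    '{"ip": "2.2.2.2", "org": "chickenNugget"}',
]
result = build_csv_gz(data)
print(gzip.decompress(result).decode())
```

This still holds the full compressed output in memory, but only once, and avoids the per-line <code>gzip.compress</code> calls.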
|
<python><json><python-3.x><gzip>
|
2023-06-28 19:34:46
| 2
| 3,097
|
Joe
|
76,576,165
| 961,631
|
Convert rtf to html, keep colors
|
<p>I need to convert a bunch of RTF files to HTML, keeping the formatting as close to the original as possible</p>
<p>So I tried that python script that converts all rtf files from a folder to its html variants:</p>
<pre class="lang-py prettyprint-override"><code>from glob import glob
import re
import os
import win32com.client as win32
import base64
import mammoth
from win32com.client import constants
from docx import Document
import sys


def change_word_format(file_path):
    word = win32.gencache.EnsureDispatch('Word.Application')
    doc = word.Documents.Open(file_path)
    doc.Activate()

    new_file_abs = os.path.abspath(file_path)
    new_file_abs = re.sub(r'\.\w+$', '.docx', new_file_abs)

    word.ActiveDocument.SaveAs(
        new_file_abs, FileFormat=constants.wdFormatDocumentDefault
    )
    doc.Close(False)
    return new_file_abs


def convert_image(image):
    with image.open() as image_bytes:
        encoded_src = base64.b64encode(image_bytes.read()).decode("ascii")
    return {"src": f"data:{image.content_type};base64,{encoded_src}"}


def convert_docx_to_html(input_docx):
    with open(input_docx, "rb") as docx_file:
        result = mammoth.convert_to_html(docx_file, convert_image=mammoth.images.img_element(convert_image))
        html = result.value

    output_html = os.path.splitext(input_docx)[0] + ".html"
    with open(output_html, "w", encoding="utf-8") as html_file:
        html_file.write(html)

    return output_html


def convert_rtf_to_html(input_rtf):
    input_docx = change_word_format(input_rtf)
    output_html = convert_docx_to_html(input_docx)
    os.remove(input_docx)
    return output_html


def convert_rtf_files_to_html(start_path):
    for root, dirs, files in os.walk(start_path):
        for file in files:
            if file.endswith(".rtf"):
                input_rtf = os.path.join(root, file)
                output_html_path = convert_rtf_to_html(input_rtf)
                print(f"Converted {input_rtf} to {output_html_path}")
                os.remove(input_rtf)


if __name__ == "__main__":
    start_path = sys.argv[1]
    convert_rtf_files_to_html(start_path)
</code></pre>
<p>I also want to keep the text colors from the RTF in the HTML; at the moment everything comes out black</p>
|
<python><html><converters><rtf><file-conversion>
|
2023-06-28 19:26:08
| 0
| 15,427
|
serge
|
76,576,155
| 14,385,099
|
Assign row numbers in pandas dataframe but starting from 1
|
<p>I have a dataframe that looks like this:</p>
<pre><code>import pandas as pd

data = [291.79, 499.31, 810.93, 1164.25]
df = pd.DataFrame(data, columns=['Onset'])
</code></pre>
<p>How can I assign 'row numbers' to each row such that each row is numbered in ascending order <strong>starting from 1</strong>?</p>
<p>The output should look like this:</p>
<pre><code>data = [291.79, 499.31, 810.93, 1164.25]
TR = [1,2,3,4]
df = pd.DataFrame(data, columns=['Onset'])
df['TR'] = TR
</code></pre>
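<p>A minimal sketch of one way this could work on the toy frame above (my own attempt; I'm hoping there is something more idiomatic):</p>

```python
import pandas as pd

data = [291.79, 499.31, 810.93, 1164.25]
df = pd.DataFrame(data, columns=['Onset'])

# Number the rows 1..n instead of the default 0-based index
df['TR'] = range(1, len(df) + 1)
print(df['TR'].tolist())
```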
<p>Thank you!</p>
|
<python><pandas>
|
2023-06-28 19:23:54
| 0
| 753
|
jo_
|
76,576,150
| 15,452,168
|
Filtering DataFrame based on specific conditions in Python
|
<p>I have a DataFrame with the following columns: INVOICE_DATE, COUNTRY, CUSTOMER_ID, INVOICE_ID, DESCRIPTION, USIM, and DEMANDQTY. I want to filter the DataFrame based on specific conditions.</p>
<p><a href="https://i.sstatic.net/oZIlJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oZIlJ.png" alt="enter image description here" /></a></p>
<p>The condition is that if the DESCRIPTION column contains the words "kids" or "baby", I want to include all the values from that INVOICE_ID in the filtered DataFrame. In other words, at least one item in the transaction should belong to the kids or baby category for the entire transaction to be included.</p>
<p>I tried using the str.contains() method in combination with a regular expression pattern, but I'm having trouble getting the desired results.</p>
<p>Here's my code:</p>
<pre><code>import pandas as pd
# Assuming the DataFrame is named 'df'
# Filter the DataFrame based on the condition
filtered_df = df[df['DESCRIPTION'].str.contains('kids|baby', case=False, regex=True)]
# Print the filtered DataFrame
filtered_df
</code></pre>
<p>However, this code does not provide the expected results. It filters the data frame based on individual rows rather than considering the entire transaction.</p>
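<p>The behaviour I'm after, shown on a tiny hand-made frame (my <code>isin</code> idea, which I haven't managed to verify on the full data yet): first find the invoices containing at least one kids/baby item, then keep every row of those invoices.</p>

```python
import pandas as pd

# Toy frame standing in for the real data
df = pd.DataFrame({
    'INVOICE_ID': [1, 1, 2, 2, 3],
    'DESCRIPTION': ['Kids T-shirt', "Men's Hat", "Women's Dress",
                    "Men's Pants", 'Baby Bib'],
})

# Invoices with >=1 matching item, then all rows of those invoices
hit = df['DESCRIPTION'].str.contains('kids|baby', case=False, regex=True)
keep_ids = df.loc[hit, 'INVOICE_ID'].unique()
filtered_df = df[df['INVOICE_ID'].isin(keep_ids)]
print(filtered_df)
```

Here invoices 1 and 3 survive in full (including the non-kids "Men's Hat" row of invoice 1), while invoice 2 is dropped entirely.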
<p>Please find below the test data: -</p>
<pre><code>import pandas as pd
import random
import string
import numpy as np

random.seed(42)
np.random.seed(42)

num_transactions = 100
max_items_per_transaction = 6

# Generate a list of possible items
possible_items = [
    "Kids T-shirt", "Baby Onesie", "Kids Socks",
    "Men's Shirt", "Women's Dress", "Kids Pants",
    "Baby Hat", "Women's Shoes", "Men's Pants",
    "Kids Jacket", "Baby Bib", "Men's Hat",
    "Women's Skirt", "Kids Shoes", "Baby Romper",
    "Men's Sweater", "Kids Gloves", "Baby Blanket"
]

# Create the DataFrame
rows = []
for i in range(num_transactions):
    num_items = random.randint(1, max_items_per_transaction)
    items = random.sample(possible_items, num_items)
    invoice_dates = pd.date_range(start='2022-01-01', periods=num_items, freq='D')
    countries = random.choices(['USA', 'Canada', 'UK'], k=num_items)
    customer_id = i + 1
    invoice_id = 1001 + i

    for j in range(num_items):
        item = items[j]
        usim = ''.join(random.choices(string.ascii_uppercase + string.digits, k=6))  # Generate a random 6-character USIM value
        demand_qty = random.randint(1, 10)

        row = {
            'INVOICE_DATE': invoice_dates[j],
            'COUNTRY': countries[j],
            'CUSTOMER_ID': customer_id,
            'INVOICE_ID': invoice_id,
            'DESCRIPTION': item,
            'USIM': usim,
            'DEMANDQTY': demand_qty
        }
        rows.append(row)

df = pd.DataFrame(rows)

# Print the DataFrame
df
</code></pre>
<p>Can anyone please guide me on how to properly filter the DataFrame based on the described condition? I would greatly appreciate any help or suggestions. Thank you!</p>
|
<python><python-3.x><pandas><dataframe>
|
2023-06-28 19:23:05
| 1
| 570
|
sdave
|
76,576,054
| 4,449,035
|
How to send an m4a file from FastAPI to OpenAI transcribe
|
<p>I'm trying to get an <code>m4a</code> file transcribed. I'm receiving this file at a FastAPI endpoint and then attempting to send it to OpenAI's <code>transcribe</code> but it seems like the format/shape is off. How can I turn the <a href="https://fastapi.tiangolo.com/tutorial/request-files/#uploadfile" rel="nofollow noreferrer">UploadFile</a> into something that OpenAI will accept? The OpenAI docs for transcribe are essentially:</p>
<blockquote>
<p>The transcriptions API takes as input the audio file you want to transcribe and the desired output file format for the transcription of the audio. We currently support multiple input and output file formats.</p>
</blockquote>
<p>Here's my current code:</p>
<pre><code>@app.post("/transcribe")
async def transcribe_audio_file(file: UploadFile = File(...)):
contents = await file.read()
contents_str = contents.decode()
buffer = io.StringIO(contents_str)
transcript_response = openai.Audio.transcribe("whisper-1", buffer)
</code></pre>
<p>I've modified the above code to several different scenarios, which return the respective errors:</p>
<pre><code> transcript_response = openai.Audio.transcribe("whisper-1", file) # AttributeError: 'UploadFile' object has no attribute 'name'
transcript_response = openai.Audio.transcribe("whisper-1", contents) # AttributeError: 'bytes' object has no attribute 'name'
transcript_response = openai.Audio.transcribe("whisper-1", contents_str) # UnicodeDecodeError: 'utf-8' codec can't decode byte 0x86 in position 13: invalid start byte
transcript_response = openai.Audio.transcribe("whisper-1", buffer) # UnicodeDecodeError: 'utf-8' codec can't decode byte 0x86 in position 13: invalid start byte
</code></pre>
<p>I have something similar working in a vanilla CLI python script that looks like this:</p>
<pre><code>audio_file = open("./audio-file.m4a", "rb")
transcript_response = openai.Audio.transcribe("whisper-1", audio_file)
</code></pre>
<p>So I also tried using a method like that:</p>
<pre><code> with open(file.filename, "rb") as audio_file:
transcript = openai.Audio.transcribe("whisper-1", audio_file)
</code></pre>
<p>But that gave the error:</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: '6ad52ad0-2fce-4d79-b4ac-e154379ceacd'
</code></pre>
<p>Any tips on how to debug this myself are also welcome. I'm coming from TypeScript land.</p>
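<p>A minimal sketch of one workaround: the 0.x OpenAI SDK infers the audio format from the file object's <code>.name</code> attribute, which a raw <code>BytesIO</code> (and an <code>UploadFile</code>) lacks. Wrapping the raw bytes in a <em>named</em> binary buffer sidesteps every error above; <code>FakeUpload</code> below is a hypothetical stand-in for FastAPI's <code>UploadFile</code>, and the actual <code>transcribe</code> call is left commented out:</p>

```python
import asyncio
import io

# Hypothetical stand-in for FastAPI's UploadFile -- only .read() and .filename are used.
class FakeUpload:
    def __init__(self, data, filename):
        self._data, self.filename = data, filename

    async def read(self):
        return self._data

async def to_named_buffer(file):
    contents = await file.read()   # raw m4a bytes -- do NOT .decode() them
    buffer = io.BytesIO(contents)  # binary buffer, not io.StringIO
    buffer.name = file.filename    # the SDK reads .name to infer the audio format
    # transcript = openai.Audio.transcribe("whisper-1", buffer)
    return buffer

buf = asyncio.run(to_named_buffer(FakeUpload(b"\x00\x00m4a-bytes", "clip.m4a")))
print(buf.name)
```

<p>The key point is that the m4a payload must stay as bytes end to end; any <code>decode()</code>/<code>StringIO</code> step will fail on the binary header.</p>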
|
<python><fastapi><openai-api>
|
2023-06-28 19:07:19
| 1
| 5,632
|
Brady Dowling
|
76,576,053
| 489,088
|
How to get Numba to free up resources from previous kernel execution?
|
<p>I have one kernel that needs to execute twice consecutively with a different data set. When the first one ends, I call <code>device_results.copy_to_host(results, stream=stream)</code> to copy the results back.</p>
<p>I then copy a bunch of data into the GPU for the second kernel to execute, with calls like:</p>
<pre><code> stream = cuda.stream()
largest_vals = np.full(
100000,
-9223372036854775808,
dtype=np.int64)
device_largest_vals = cuda.to_device(largest_vals, stream=stream)
</code></pre>
<p>which should fit just fine in the GPU memory (each data set fits, but both do not fit together in memory).</p>
<p>However, it seems that after the first kernel execution finishes copying back its result, if I attempt to copy new data for the second kernel run, I run into out-of-memory errors.</p>
<p>I attempted to solve this by calling</p>
<pre><code> stream.synchronize()
cuda.synchronize()
</code></pre>
<p>after the first kernel execution, but no luck. Then I attempted to reset the CUDA context before each time I need to copy lots of data with:</p>
<pre><code> cuda.current_context().reset()
cuda.synchronize()
</code></pre>
<p>but if I do that, then the second kernel fails to launch with:</p>
<pre><code>numba.cuda.cudadrv.driver.CudaAPIError: [400] Call to cuLaunchKernel results in CUDA_ERROR_INVALID_HANDLE
</code></pre>
<p>which seems to indicate that resetting the context made the kernel no longer executable (which is strange, since when I invoke it for the second run I would expect it to be set up again, but apparently it's not?)</p>
<p>What is the proper way to tell Numba I am done with all the data from the first kernel so I can copy new data for the second execution?</p>
|
<python><python-3.x><cuda><kernel><numba>
|
2023-06-28 19:07:12
| 0
| 6,306
|
Edy Bourne
|
76,575,992
| 14,385,099
|
Rounding down numbers to the nearest even number
|
<p>I have a dataframe with a column of numbers that look like this:</p>
<pre><code>data = [291.79, 499.31, 810.93, 1164.25]
df = pd.DataFrame(data, columns=['Onset'])
</code></pre>
<p>Is there an elegant way to round <strong>down</strong> the odd numbers to the nearest even number? Numbers whose integer part is already even should simply be truncated to that even integer. This should be the output:</p>
<pre><code>data = [290, 498, 810, 1164]
df = pd.DataFrame(data, columns=['Onset'])
</code></pre>
<p>Thank you!</p>
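<p>One compact way to express "round down to the nearest even integer" is floor-division by 2 followed by multiplying back; a minimal sketch on the sample data:</p>

```python
import pandas as pd

df = pd.DataFrame([291.79, 499.31, 810.93, 1164.25], columns=['Onset'])

# Floor-divide by 2, then multiply back: every value drops to the
# nearest even integer at or below it.
df['Onset'] = (df['Onset'] // 2 * 2).astype(int)
print(df['Onset'].tolist())  # [290, 498, 810, 1164]
```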
|
<python><pandas>
|
2023-06-28 18:58:24
| 5
| 753
|
jo_
|
76,575,991
| 8,874,388
|
Safe way of converting seconds to nanoseconds without int/float overflowing?
|
<ul>
<li>I currently have a UNIX timestamp as a 64-bit float: seconds with a fractional part, such as <code>1687976937.597064</code>.</li>
<li>I need to convert it to nanoseconds. But there's 1 billion nanoseconds in 1 second. And doing a straight multiplication by 1 billion <s>would overflow the 64-bit float.</s></li>
</ul>
<p>Let's first consider the limits:</p>
<ul>
<li><code>1_687_976_937_597_064_000</code> is the integer result of the above timestamp multiplied by 1 billion. The goal is figuring out a way to safely reach this number.</li>
<li><code>9_223_372_036_854_775_807</code> is the maximum number storable in a 64-bit signed integer.</li>
<li><code>9_007_199_254_740_992.0</code> is the maximum number storable in a 64-bit float. And at that scale, there aren't enough bits to store any decimals at all (it's permanently <code>.0</code>). Edit: This claim is not correct. See the end of this post...</li>
<li>So 64-bit signed integer can hold the result. But a 64-bit float cannot hold the result and would overflow.</li>
</ul>
<p>So I was thinking:</p>
<ul>
<li>Since an integer is able to easily represent the result, I thought I could first convert the integer portion to an integer, and multiply by 1 billion.</li>
<li>And then extract just the decimals so that I get a new <code>0.XXXXXX</code> float, and then multiply that by 1 billion. By leading with a zero, I ensure that the integer portion of the float will never overflow. But perhaps the decimals could still overflow somehow? Hopefully floats will just safely truncate the trailing decimals instead of overflowing. By multiplying a <code>0.X</code> number by 1 billion, the resulting value should never be able to be higher than <code>1_999_999_999.XXXXX</code> so it seems like this multiplication <em>should</em> be safe...</li>
<li>After that, I truncate the "decimals float" into an integer to ensure that the result will be an integer.</li>
<li>Lastly, I add together the two integers.</li>
</ul>
<p>It seems to work, but this technique looks so hacky. Is it safe?</p>
<p>Here's a Python repl showing the process:</p>
<pre class="lang-py prettyprint-override"><code>>>> num = 1687976937.597064
>>> whole = int(num)
>>> whole
1687976937
>>> decimals = num - whole
>>> decimals
0.5970640182495117
>>> (whole * 1_000_000_000)
1687976937000000000
>>> (decimals * 1_000_000_000)
597064018.2495117
>>> int(decimals * 1_000_000_000)
597064018
>>> (whole * 1_000_000_000) + int(decimals * 1_000_000_000)
1687976937597064018
>>> type((whole * 1_000_000_000) + int(decimals * 1_000_000_000))
<class 'int'>
</code></pre>
<p>So here's the comparison:</p>
<ul>
<li><code>1_687_976_937_597_064_018</code> was the result of the above algorithm. And yes, there's a slight, insignificant float rounding error but I don't mind.</li>
<li><code>1_687_976_937_597_064_000</code> is the scientifically correct answer given by Wolfram Alpha's calculator.</li>
</ul>
<p>It certainly looks like a success, but is there any risk that my algorithm would be dangerous and break?</p>
<p>I am not brave enough to put it into production without confirmation that it's safe.</p>
<hr />
<p>Concerning the 64-bit float limits: Here are the results in Python 3's repl (pay attention to the <code>993</code> input and the <code>992</code> in the output):</p>
<pre class="lang-py prettyprint-override"><code>>>> 9_007_199_254_740_993.0
9007199254740992.0
</code></pre>
<p>But perhaps I am reading that "limit" incorrectly... Perhaps this is just a float rounding error.</p>
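<p>The splitting approach above can be packaged as a small function. Two points worth noting: Python integers are arbitrary-precision, so the final integer sum can never overflow; and the fractional part times one billion stays below 2<sup>31</sup>, well inside the range where a 64-bit float is exact for integers. A minimal sketch:</p>

```python
def seconds_to_ns(ts: float) -> int:
    whole = int(ts)   # integer seconds -- exact
    frac = ts - whole # 0.xxx float, always < 1.0 (subtraction here is exact)
    # whole * 1_000_000_000 is pure integer math (Python ints never overflow);
    # frac * 1_000_000_000 stays far inside the float's exactly-representable range.
    return whole * 1_000_000_000 + int(frac * 1_000_000_000)

print(seconds_to_ns(1687976937.597064))  # 1687976937597064018
```

<p>The trailing-digit discrepancy versus the "scientifically correct" value comes from the original float literal itself: <code>1687976937.597064</code> is not exactly representable as a 64-bit float, so the precision was lost before the algorithm ever ran.</p>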
|
<python><python-3.x><algorithm><math>
|
2023-06-28 18:58:18
| 3
| 4,749
|
Mitch McMabers
|
76,575,878
| 11,280,068
|
python loguru output to stderr and a file
|
<p>I have the following line that configures my logger instance</p>
<pre><code>logger.add(sys.stderr, level=log_level, format="<green>{time:YYYY-MM-DD HH:mm:ss.SSS zz}</green> | <level>{level: <8}</level> | <yellow>Line {line: >4} ({file}):</yellow> <b>{message}</b>", colorize=True, backtrace=True, diagnose=True)
</code></pre>
<p>I want to configure my logger to use the above, and also output to a file, so that when I call logger.whatever() it outputs to the terminal and to a log file</p>
<p>This way, if I'm doing dev work and running the file directly, I can see the output in my terminal, and when the code is being run on a server by a cronjob, it can log to a file</p>
<p>I don't understand the whole concept of the "sinks" so sorry if this is an easy question</p>
|
<python><python-3.x><logging><loguru>
|
2023-06-28 18:39:33
| 1
| 1,194
|
NFeruch - FreePalestine
|
76,575,832
| 11,705,021
|
GIT recognize file as new when modifying key value using python in windows
|
<p>I have Python code which modifies a key value in a JSON file:</p>
<pre><code>with open(filename_write, 'r') as jsonFile:
data = json.load(jsonFile)
data['test']['ariel']['version'] = value
with open(filename_write, 'w') as jsonFile:
json.dump(data, jsonFile, indent=4)
</code></pre>
<p>But when I create a PR in Bitbucket, it shows the file as new instead of showing only the diff I made. Any idea how to overcome this?</p>
|
<python><json><git><bitbucket><pull-request>
|
2023-06-28 18:31:19
| 0
| 1,428
|
arielma
|
76,575,816
| 3,137,388
|
communication with subprocess using pipe created by os.pipe
|
<p>Our test case creates a pipe, starts a C++ binary as a subprocess (passing the write file descriptor as an argument), and waits for the C++ binary to write something to the pipe. We are not using stdout/stderr for this, as we use those to read the output and errors of the C++ binary.</p>
<p>Below is python code:</p>
<pre><code>import os
import subprocess
def main():
r, w = os.pipe()
cpp_path = "../bin/cpp_engine"
cpp_process = subprocess.Popen([cpp_path, "-v", "-p", str(w)])
r = os.fdopen(r)
stg = r.read()
print("test reads : ", stg)
cpp_process.kill()
if __name__ == "__main__":
main()
</code></pre>
<p>Below is C++ code where I am writing to pipe.</p>
<pre><code>#include <lyra/lyra.hpp>
int main(int argc, char * argv[])
{
auto intgTestPipe = -1;
auto cli = lyra::cli();
cli.add_argument(
lyra::opt(verboseFlag).name("-v").name("--verbose").help("enable verbose flag") |
lyra::opt(intgTestPipe, "Integraion test Pipe")
.name("-p")
.name("--pipe")
.help("Integration test pipe to communicate"));
auto result = cli.parse({argc, argv});
if (!result) {
std::cout << "Error in command line: " << result.message() << std::endl;
return 1;
}
std::cout << intgTestPipe << " " << typeid(intgTestPipe).name() << std::endl;
if (intgTestPipe != -1) {
std::cout << "in intgTestPipe" << std::endl;
auto fp = fdopen(intgTestPipe, "w");
std::string msg = "sending message\n";
if (fp != NULL) {
fputs(msg.c_str(), fp);
}
else {
std::cout << "Invalid file pointer" << std::endl;
}
fclose(fp);
return 0;
}
}
</code></pre>
<p>When I run the Python test, it hangs while reading from the pipe. I think the C++ binary is unable to write to the passed file descriptor.</p>
<p>output:</p>
<pre><code>4 i
in cpp intgTestPipe
Invalid file pointer
</code></pre>
<p>Not sure why I am getting bad file descriptor even though I am passing the write file descriptor from the pipe.</p>
<p>Is there any issue with this program, or is there another way to communicate from C++ to Python using a pipe other than stdin, stdout and stderr?</p>
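<p>One likely cause on the Python side: since Python 3, <code>subprocess.Popen</code> defaults to <code>close_fds=True</code>, so the child never inherits the write end of the pipe unless it is listed in <code>pass_fds</code> (POSIX-only) -- which matches the "Invalid file pointer" seen in the C++ process. The parent must also close its own copy of the write end, or <code>read()</code> never sees EOF. A minimal sketch, with a Python one-liner standing in for the C++ binary:</p>

```python
import os
import subprocess
import sys

r, w = os.pipe()

# Stand-in for the C++ binary: a child that writes to the fd handed to it in argv[1].
child_src = "import os, sys; os.write(int(sys.argv[1]), b'sending message\\n')"

# close_fds defaults to True in Python 3, so fd w is NOT inherited
# unless it is listed in pass_fds (POSIX-only).
proc = subprocess.Popen([sys.executable, "-c", child_src, str(w)], pass_fds=(w,))

os.close(w)  # parent must close its copy, or read() below never reaches EOF
with os.fdopen(r) as reader:
    msg = reader.read()
proc.wait()
print(msg)
```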
|
<python><c++><subprocess>
|
2023-06-28 18:29:07
| 0
| 5,396
|
kadina
|
76,575,796
| 2,153,235
|
What is "class ClassName : pass" in Python?
|
<p>In <a href="https://realpython.com/null-in-python" rel="nofollow noreferrer">this tutorial</a>, an entire class is defined as <code>class DontAppend: pass</code>. According to <a href="https://stackoverflow.com/questions/59296286/python-odd-way-of-declaring-a-class-with-a-name-after-the-colon">this Q&A</a> on Stack Overflow, <code>pass</code> defines the class.</p>
<p>The consensus from Google, however, is that <code>pass</code> is a <em>statement</em>. I've never seen statements define a class other than within definitions of class methods.</p>
<p>As a statement, <code>pass</code> also doesn't define a named class attribute/property.</p>
<p>Since <code>pass</code> doesn't serve to define a method or an attribute, how does it actually define the class? Is this just a special convention, solely for defining an empty class?</p>
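<p>For reference, <code>pass</code> defines nothing at all: the <code>class</code> statement itself creates the class object, and Python's grammar merely requires the indented body to contain at least one statement. <code>pass</code> is the no-op that satisfies that requirement. A minimal sketch:</p>

```python
class DontAppend:
    pass  # no-op: the class statement above already created the class

# The (empty) class works like any other class.
sentinel = DontAppend()
print(isinstance(sentinel, DontAppend))  # True

# A bare docstring is an equally valid "empty" body:
class AlsoEmpty:
    """Nothing here either."""
```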
|
<python><class>
|
2023-06-28 18:25:54
| 4
| 1,265
|
user2153235
|
76,575,794
| 17,275,588
|
Python subprocess (using Popen) hangs indefinitely and doesn't terminate when the script ends. Why, and how to fix?
|
<p>When I run the parent script without forking into the subprocess, it terminates fine at the end. When I run the subprocess script by itself, it terminates just fine at the end.</p>
<p>However when I run the subprocess script inside of the parent script as a subprocess as needed, instead of terminating at the end, it just hangs indefinitely. Any ideas why this happens?</p>
<p>Here's the parent script code -- with the subprocess execution code being in the move_current_image function:</p>
<pre><code>import os
from PIL import Image, ImageTk
import tkinter as tk
from tkinter import messagebox
import subprocess
image_folder = r'C:\Users\anton\Pictures\image_editing\CURRENT IMAGE FOLDER' # the folder path of the images to curate
requires_editing_folder = r'C:\Users\anton\Pictures\image_editing\REQUIRES IMAGE EDITING'
files = os.listdir(image_folder)
files_index = 0
root = tk.Tk() # creates a simple GUI window
root.geometry('800x600')
canvas = tk.Canvas(root, width=800, height=600)
canvas.pack(side="left", fill="both", expand=True)
scrollbar_v = tk.Scrollbar(root, orient="vertical", command=canvas.yview)
scrollbar_v.pack(side="right", fill="y")
scrollbar_h = tk.Scrollbar(root, orient="horizontal", command=canvas.xview)
scrollbar_h.pack(side="bottom", fill="x")
canvas.configure(yscrollcommand=scrollbar_v.set, xscrollcommand=scrollbar_h.set)
frame = tk.Frame(canvas)
canvas.create_window((0,0), window=frame, anchor='nw')
label = tk.Label(frame)
label.pack()
zoom_state = False
img = None
# Functions for mousewheel scrolling
def scroll_v(event):
canvas.yview_scroll(int(-1*(event.delta)), "units")
canvas.bind_all("<MouseWheel>", scroll_v)
def scroll_h(event):
canvas.xview_scroll(int(-1*(event.delta)), "units")
canvas.bind_all("<Shift-MouseWheel>", scroll_h)
# Functions for keyboard scrolling
def scroll_up(event):
canvas.yview_scroll(-5, "units")
def scroll_down(event):
canvas.yview_scroll(5, "units")
def scroll_left(event):
canvas.xview_scroll(-5, "units")
def scroll_right(event):
canvas.xview_scroll(5, "units")
def load_next_image():
global img
global files_index
if files_index >= len(files):
messagebox.showinfo("Information", "No more images left")
root.quit()
else:
img_path = os.path.join(image_folder, files[files_index])
img = Image.open(img_path)
update_image_display()
files_index += 1
def delete_current_image(): # deletes the current image and loads the next one
global files_index
img_path = os.path.join(image_folder, files[files_index - 1])
os.remove(img_path)
files.pop(files_index - 1)
files_index -= 1
load_next_image()
def update_image_display(): # toggles zoom in, zoom out
global img
if zoom_state:
tmp = img.resize((img.width, img.height)) # 100% size
else:
tmp = img.resize((int(img.width*0.4), int(img.height*0.4))) # 40% size
photo = ImageTk.PhotoImage(tmp)
label.config(image=photo)
label.image = photo
# Updating scroll region after image is loaded
root.update()
canvas.configure(scrollregion=canvas.bbox("all"))
def toggle_zoom(event): # toggles zoom state pt 2
global zoom_state
zoom_state = not zoom_state
update_image_display()
def move_current_image():
global files_index
img_path = os.path.join(image_folder, files[files_index - 1])
new_path = os.path.join(requires_editing_folder, files[files_index - 1])
os.rename(img_path, new_path)
subprocess.Popen(["python", "pixelbin-image-transformation-v3.py", files[files_index - 1], requires_editing_folder],
start_new_session=True)
files.pop(files_index - 1)
files_index -= 1
load_next_image()
# Bind the keys to the desired events
root.bind('<Right>', lambda event: load_next_image()) # approves this image, moves to next
root.bind('<Left>', lambda event: delete_current_image()) # deletes this image, as it doesn't meet the criteria
root.bind('<Up>', lambda event: move_current_image()) # moves this image into "requires photoshop editing" holding bay folder
root.bind('e', toggle_zoom) # bounces back between 100% and 40% zoom
# keyboard-based scrolling, for maximum efficiency
root.bind('f', scroll_down)
root.bind('r', scroll_up)
root.bind('d', scroll_left)
root.bind('g', scroll_right)
load_next_image() # start the workflow
root.mainloop() # start the GUI main loop
</code></pre>
<p>Here's the subprocess script code:</p>
<pre><code>import sys
import os
# STEP 1 = UPLOAD IMAGE FILE TO PIXELBIN
# DOCUMENTATION FOR API-BASED FILE UPLOADING: https://github.com/pixelbin-dev/pixelbin-python-sdk/blob/main/documentation/platform/ASSETS.md#fileupload
CurrentImageFilename = sys.argv[1] # this will be files[files_index - 1] from the parent script, which gets passed into this script as a variable
CurrentImageFolder = sys.argv[2] # this will be image_folder from the parent script, because this is run as a subprocess
CurrentFilepathFull = os.path.join(CurrentImageFolder, CurrentImageFilename)
FinalImageDownloadFolder = r"C:\Users\anton\Pictures\image_editing\IMAGE EDITING COMPLETED"
import asyncio
from pixelbin import PixelbinClient, PixelbinConfig
config = PixelbinConfig({
"domain": "https://api.pixelbin.io",
"apiSecret": "my_api_key_goes_here",
})
pixelbin:PixelbinClient = PixelbinClient(config=config)
try:
print("Pixelbin image upload in progress...")
result = pixelbin.assets.fileUpload( # sync method call (there's also an async method, in the above documentation)
file=open(CurrentFilepathFull, "rb"),
path="IMAGE_EDITING_FOLDER",
name=CurrentImageFilename, # uses the unique filename each time since I may be running transform operations in parallel. doing so on identical filenames, before deleting the previous, would absolutely cause timing problems.
access="public-read",
tags=["tag1","tag2"],
metadata={},
overwrite=True,
filenameOverride=True)
print("Image uploaded successfully!")
# print(result)
except Exception as e:
print(e)
# Pixelbin automatically modifies filenames like so. I therefore follow their convention so I can later delete the correctly-formatted filename.
FilenameModifiedByPixelbin = CurrentImageFilename.replace('.', '_')
# STEP 2 = RUN TRANSFORMATION ON SPECIFIED IMAGE
# DOCUMENTATION FOR URL-BASED TRANSFORMATIONS: https://www.pixelbin.io/docs/url-structure/#transforming-images-using-url, https://www.pixelbin.io/docs/transformations/ml/watermark-remover/
# this basically auto-runs the transformation if you submit the properly formatted URL;
# instead of literally navigating to the URL, this executes the same functionality via background HTTP GET requests
# in this case, the cloud-name is functionally like the API key that allows it to be authenticated
def auto_delete_original_image_from_storage():
# STEP 3 = AUTO-DELETE ORIGINAL IMAGE FROM PIXELBIN STORAGE, POST-TRANSFORMATION + DOWNLOAD
# DOCUMENTATION FOR API-BASED FILE DELETION: https://github.com/pixelbin-dev/pixelbin-python-sdk/blob/main/documentation/platform/ASSETS.md#deletefile
print("Pixelbin image deletion from storage in progress...")
try: # I don't redefine the config stuff / permissions here, because that was done earlier during fileUpload
result = pixelbin.assets.deleteFile( # sync method call
fileId=f"IMAGE_EDITING_FOLDER/{FilenameModifiedByPixelbin}")
print("Image deleted from Pixelbin storage!")
# print(result)
except Exception as e:
print(e)
import requests
import time
def fetch_image(url):
print("Pixelbin image transformation in progress...")
response = requests.get(url)
# continues functionally "refreshing" the CDN URL until the transformation is complete and the image is ready for download
# note that this does NOT continue to fire additional identical jobs + use up credits each time
# that's because once the exact specific job is sent via the specific "send job" URL? it simply runs until completed
# and you'll get a 202 code that simply says: "that job is currently in progress, please wait..."
# by "rerunning" it after fully completed too? it'll just auto-download the already-completed image, not re-run the original job
while response.status_code == 202: # status code 202 = transformation still in progress;
print("Transformation still processing. Waiting for 5 seconds before trying again.")
time.sleep(5)
response = requests.get(url)
if 'TransformationJobError' in response.text:
print("There was an error with the transformation.")
return None
if response.status_code == 200: # status code 200 = successful image transformation, ready for download;
filepath = os.path.join(FinalImageDownloadFolder, CurrentImageFilename)
with open(filepath, 'wb') as f:
f.write(response.content)
print("Image downloaded successfully!")
auto_delete_original_image_from_storage()
return True
print(f"Unexpected status code: {response.status_code}")
return None
url = f"https://cdn.pixelbin.io/v2/my_api_key_goes_here/wm.remove(rem_text:true)/IMAGE_EDITING_FOLDER/{FilenameModifiedByPixelbin}"
fetch_image(url)
</code></pre>
<p>Any ideas what could be causing this? Thanks!</p>
|
<python><windows><subprocess>
|
2023-06-28 18:25:14
| 1
| 389
|
king_anton
|
76,575,694
| 10,934,417
|
A quick way to squeeze multiple times in PyTorch?
|
<p>Is there any efficient way to squeeze (unsqueeze) multiple times on PyTorch tensors?</p>
<p>For example, a tensor <strong>a</strong> has shape [4, 1, 1, 2]:</p>
<pre><code>import torch
import torch.nn as nn
a = torch.tensor([[1,2,3,4],[5,6,7,8]], dtype=torch.float32)
a = a.reshape(4,1,1,2)
print(a.shape)
torch.Size([4, 1, 1, 2])
</code></pre>
<p>I would like to squeeze <strong>a</strong> to [4, 2] but without squeezing it twice manually, e.g.,</p>
<pre><code>a1 = a.squeeze(1).squeeze(1)
print(a1.shape)
torch.Size([4, 2])
</code></pre>
<p>Is there any way that I don't have to write <code>(un)squeeze(1)</code> twice/multiple times? Thanks</p>
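<p>Calling <code>squeeze()</code> with no argument removes every size-1 dimension in one go, and both NumPy and recent PyTorch (2.0+, where <code>dim</code> accepts a tuple) also let you name several axes at once. A minimal sketch using NumPy, whose <code>squeeze</code> semantics match <code>torch.Tensor.squeeze</code>:</p>

```python
import numpy as np

a = np.arange(1, 9, dtype=np.float32).reshape(4, 1, 1, 2)

squeezed = a.squeeze()           # no argument: drops every size-1 axis at once
print(squeezed.shape)            # (4, 2)

picked = a.squeeze(axis=(1, 2))  # or name exactly the axes to remove
print(picked.shape)              # (4, 2)
```

<p>The no-argument form is the usual idiom, but naming the axes is safer when a batch dimension might itself be 1 and must be preserved.</p>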
|
<python><numpy><pytorch>
|
2023-06-28 18:05:39
| 1
| 641
|
DaCard
|
76,575,620
| 8,162,211
|
PyQt5 Script: Problem aligning Qlabel and coloring background
|
<p>The main portion of my script looks like this:</p>
<pre><code>def main():
global window
app = QApplication([])
window = MainMenu()
window.show()
sys.exit(app.exec_())
if __name__ == '__main__':
main()
</code></pre>
<p>where <code>MainMenu</code>is a class whose definition starts as follows:</p>
<pre><code>class MainMenu(QMainWindow):
def __init__(self):
super(MainMenu, self).__init__()
self.window() #window function defined below
layout = self.frame_layout() #frame_layout function defined below
def window(self):
screen_width, screen_height = pyautogui.size()
self.setGeometry(0, 0, screen_width - 25, screen_height - 180)
self.showMaximized()
startup_background(self) # Inserts background image at startup (see below)
def frame_layout(self):
global frame
frame = QFrame()
self.setCentralWidget(frame)
frame_layout = QHBoxLayout()
frame.setLayout(frame_layout)
return frame_layout
</code></pre>
<p>Now for the <code>startup_background</code> function:</p>
<pre><code>def startup_background(window):
screen_width, screen_height = pyautogui.size()
img_path= 'brain.png'
pixmap = QPixmap(img_path).scaled(window.width(),
window.height(),QtCore.Qt.KeepAspectRatio)
label = QLabel(window)
label.setAlignment(Qt.AlignCenter) #attempt to center label in window
label.resize(pixmap.width(),pixmap.height()) #pixmap takes up entire label
label.setPixmap(pixmap)
label.setStyleSheet('background-color: white')
label.show()
</code></pre>
<p>The output at the startup of my program yields this:</p>
<p><a href="https://i.sstatic.net/6b8rk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6b8rk.png" alt="enter image description here" /></a></p>
<p>The problems I have are the following:</p>
<ol>
<li>I can't seem to horizontally center the label in the window. Am I misunderstanding what <code>setAlignment</code> does?</li>
<li>The background color of the window outside my label is grey. I've tried using
<code>setStyleSheet('background-color: white')</code> for the frame and for the window, but both
cover up the label altogether.</li>
</ol>
|
<python><pyqt5>
|
2023-06-28 17:51:47
| 0
| 1,263
|
fishbacp
|
76,575,611
| 2,612,592
|
Can't use asyncio with pywin32 to create a Windows Service
|
<p>I'm using pywin32 to create a Windows Service.</p>
<p>The problem is that whenever I call <code>asyncio</code> functions like <code>asyncio.run()</code>, <code>asyncio.ensure_future()</code>, <code>asyncio.get_event_loop()</code> or <code>asyncio.ProactorEventLoop()</code>, I get a <code>ValueError: set_wakeup_fd only works in main thread</code>.</p>
<p>Python version: 3.8.8</p>
<p>Executed as: <code>python test.py install</code> and then <code>python test.py start</code>. Error is found in <code>C:/xxx/errorLogs2.txt</code></p>
<p>Code:</p>
<pre><code>import socket
import win32serviceutil
import servicemanager
import win32event
import win32service
import asyncio
class Test(win32serviceutil.ServiceFramework):
_svc_name_ = 'testService'
_svc_display_name_ = 'Test Service'
_svc_description_ = 'Test Service Description'
@classmethod
def parse_command_line(cls):
win32serviceutil.HandleCommandLine(cls)
def __init__(self, args):
win32serviceutil.ServiceFramework.__init__(self, args)
self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)
socket.setdefaulttimeout(60)
def SvcStop(self):
self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
win32event.SetEvent(self.hWaitStop)
def SvcDoRun(self):
try:
l = asyncio.get_event_loop() # Fails here
servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE,
servicemanager.PYS_SERVICE_STARTED,
(self._svc_name_, ''))
except Exception as e:
with open('C:/xxx/errorLogs2.txt', 'a') as f:
f.write(str(e) + '\n')
if __name__ == '__main__':
Test.parse_command_line()
</code></pre>
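<p>A sketch of one common workaround: <code>SvcDoRun</code> runs in a worker thread, where no event loop exists and where asyncio must not touch signal handling (<code>signal.set_wakeup_fd</code> is main-thread-only -- on Windows/3.8 the proactor loop is the usual trigger). Creating a loop explicitly, and on Windows switching to the selector policy, avoids both problems; demonstrated here with a plain <code>threading.Thread</code> standing in for the service thread:</p>

```python
import asyncio
import sys
import threading

results = {}

async def work():
    await asyncio.sleep(0)
    return "ran in worker thread"

def service_main():
    # Stand-in for SvcDoRun: runs in a non-main thread.
    if sys.platform == "win32":
        # The proactor loop touches set_wakeup_fd on 3.8; the selector loop does not.
        asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
    loop = asyncio.new_event_loop()   # never get_event_loop() off the main thread
    asyncio.set_event_loop(loop)
    try:
        results["value"] = loop.run_until_complete(work())
    finally:
        loop.close()

t = threading.Thread(target=service_main)
t.start()
t.join()
print(results["value"])
```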
|
<python><windows-services><python-asyncio><pywin32>
|
2023-06-28 17:50:54
| 1
| 587
|
Oliver Mohr Bonometti
|
76,575,561
| 18,022,759
|
Get values of DataFrame index column
|
<p>I have a <code>DataFrame</code> with an index column that uses pandas <code>Timestamp</code>s.</p>
<p>How can I get the values of the index as if it were a normal column (<code>df["index"]</code>)?</p>
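<p>A minimal sketch of the usual options: read the index directly as a Series/array, or promote it to a real column with <code>reset_index()</code> (an unnamed index becomes a column literally called <code>"index"</code>):</p>

```python
import pandas as pd

df = pd.DataFrame({"value": [10, 20]},
                  index=pd.to_datetime(["2023-01-01", "2023-01-02"]))

ts = df.index.to_series()   # index as a Series aligned to the frame
arr = df.index.to_numpy()   # index as a plain array
flat = df.reset_index()     # or promote the index to an ordinary column
print(flat.columns.tolist())  # ['index', 'value']
```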
|
<python><pandas><dataframe>
|
2023-06-28 17:43:24
| 2
| 976
|
catasaurus
|
76,575,327
| 2,827,771
|
Pyspark: check if the consecutive values of a column are the same
|
<p>I have a pyspark dataframe with the following format:</p>
<pre><code>ID Name Score Rank
1 A 10 1
1 B 20 2
2 C 10 1
2 C 12 2
3 D 11 1
4 E 12 1
4 E 13 2
</code></pre>
<p>The goal is to find those IDs where the name is not the same for ranks 1 and 2; if the name is the same, or only one name is available, those rows should be filtered out. I think I can achieve this by creating another dataframe where I group by ID and count names, left join it with this dataframe, and filter those with count < 2, but that's too hacky, and I was wondering if there is a better way using window functions. So the above would look like this after filtering undesired rows:</p>
<pre><code>ID Name Score Rank
1 A 10 1
1 B 20 2
</code></pre>
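<p>In PySpark the analogous single-pass expression would be something like <code>F.size(F.collect_set("Name").over(Window.partitionBy("ID"))) > 1</code> as a filter (assumption: ranks other than 1 and 2 are excluded first). The same logic, sketched in pandas on the sample data so it can be checked without a Spark session -- keep only IDs whose rank-1/rank-2 rows carry more than one distinct name:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "ID":    [1, 1, 2, 2, 3, 4, 4],
    "Name":  ["A", "B", "C", "C", "D", "E", "E"],
    "Score": [10, 20, 10, 12, 11, 12, 13],
    "Rank":  [1, 2, 1, 2, 1, 1, 2],
})

top2 = df[df["Rank"].isin([1, 2])]
# Per-ID count of distinct names, broadcast back to each row.
keep = top2.groupby("ID")["Name"].transform("nunique") > 1
out = top2[keep]
print(out)  # only the two rows for ID 1 survive
```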
|
<python><dataframe><pyspark><apache-spark-sql>
|
2023-06-28 17:06:11
| 1
| 13,628
|
ahajib
|
76,575,080
| 875,977
|
memory increase after upgrading setuptools
|
<p>After upgrading <code>setuptools</code> to <code>65.3.0</code>, all of a sudden there is a memory increase for the majority of the processes in the system. To narrow down the issue, I tried to find the problematic <code>import</code>: it looks like the <code>distutils</code> import in the script causes loading of a lot of dynamic libraries in the new version, increasing both load time and memory usage.</p>
<pre class="lang-py prettyprint-override"><code>from distutils.version import StrictVersion
import time
while True:
time.sleep(1000)
</code></pre>
<pre class="lang-bash prettyprint-override"><code>[root@controller(FHGW-83) /home/robot]
# pmap 2440334 | wc -l
196
[root@controller(FHGW-83) /home/robot]
# pmap 2440334 | grep cpython | wc -l
111
# pmap 2432196 | grep cpython
00007f8606e8a000 72K r---- _ssl.cpython-39-x86_64-linux-gnu.so
...
00007f8606f02000 24K r-x-- _json.cpython-39-x86_64-linux-gnu.so
...
00007f8607296000 4K rw--- _csv.cpython-39-x86_64-linux-gnu.so
...
00007f860786f000 4K rw--- _sha512.cpython-39-x86_64-linux-gnu.so
...
00007f86078fe000 8K rw--- pyexpat.cpython-39-x86_64-linux-gnu.so
...
00007f86079f9000 4K rw--- _struct.cpython-39-x86_64-linux-gnu.so
...
and more...
$ top -p 2440334
PID USER PR NI VIRT RES %CPU %MEM TIME+ S COMMAND
2440334 root 20 0 31.9m 27.2m 0.0 0.2 0:00.68 S python test.py
</code></pre>
<p>On a machine where setuptools is at <code>57.0.0</code>, this issue is not seen.
Virtual and RES memory consumption is also lower.</p>
<pre class="lang-bash prettyprint-override"><code># python test.py &
[1] 1562394
]
# pmap 1562394 | wc -l
67
# pmap 1562394 | grep cpyth | wc -l
5
PID USER PR NI VIRT RES %CPU %MEM TIME+ S COMMAND
1562394 root 20 0 14.0m 9.3m 0.0 0.1 0:00.05 S python test.py
</code></pre>
<p>What is the relation with setuptools here? The Python version (3.9.16) is the same on both machines. Why does the script cause Python to load so many libraries?</p>
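<p>For context: setuptools 60+ installs a <code>distutils</code> shim (<code>_distutils_hack</code>) so that <code>import distutils</code> pulls in setuptools' vendored copy rather than the slim stdlib one, which plausibly explains the extra shared objects; the <code>SETUPTOOLS_USE_DISTUTILS=stdlib</code> environment variable is the documented switch back (on Python ≤ 3.11). One way to confirm exactly what an import drags in is a <code>sys.modules</code> diff -- sketched here with the stdlib module <code>tabnanny</code> standing in for the suspect <code>distutils.version</code> import:</p>

```python
import sys

before = set(sys.modules)
import tabnanny  # substitute the suspect import, e.g. distutils.version
pulled_in = sorted(set(sys.modules) - before)
# Every entry here is a module that the single import above caused to load.
print(len(pulled_in), pulled_in)
```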
|
<python><python-3.x><linux><setuptools><python-3.9>
|
2023-06-28 16:27:17
| 1
| 835
|
Dinesh Reddy
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.