| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,400,886 | 9,599,508 | Python CustomTkinter CTKButton change of state not taking effect? | <p>I am writing an app with CustomTkinter. I have created a collapsible frame with a button to collapse and uncollapse the frame. But I need to be able to disable this button. I wrote a function <code>disable_uncollapse</code> to do this but the button remains clickable.</p>
<pre><code>import tkinter
import customtkinter as ctk
import tkinter as tk
from tkinter import ttk
from PIL import Image
ctk.set_appearance_mode("Dark")
ctk.set_default_color_theme("dark-blue")
appWidth, appHeight = 600, 700
class CollapsibleFrame(ctk.CTkFrame):
    """
    Class to create a collapsible frame. Starts as collapsed. Row and Column are row and column of parent frame it should be gridded to
    """
    def __init__(self, master, row: int = 0, column: int = 0, collapsed_text: str = '', **kwargs):
        super().__init__(master, **kwargs)
        self.collapsed_text = collapsed_text
        # add widgets onto the frame, for example:
        self.label = ctk.CTkLabel(self)
        self.label.grid(row=0, column=0)
        self.title_label = ctk.CTkLabel(self, text=self.collapsed_text)
        self.title_label.grid(row=0, column=0)
        self.row = row
        self.column = column
        self.collapsed_widgets = []
        # load icons
        self.down_chevron = ctk.CTkImage(Image.open("chevron_up.png"), size=(7, 7))
        self.up_chevron = ctk.CTkImage(Image.open("chevron_down.png"), size=(7, 7))
        # Add hide/unhide buttons
        self.collapsebuton = ctk.CTkButton(self, image=self.down_chevron, text="")
        self.collapsebuton.grid(row=0, column=1, padx=20, pady=20, sticky="e")
        self.collapsebuton.bind("&lt;Button-1&gt;", lambda event: self.collapse())
        self.uncollapsebuton = ctk.CTkButton(self, image=self.up_chevron, text="")
        self.uncollapsebuton.grid(row=0, column=1, padx=20, pady=20, sticky="e")
        self.uncollapsebuton.bind("&lt;Button-1&gt;", lambda event: self.uncollapse())

    def hide(self):
        """ Completely hides frame """
        print('hiding')
        self.grid_forget()

    def unhide(self):
        """ Completely unhides frame """
        print('unhiding')
        self.grid(row=self.row, column=self.column, sticky='ew')
        self.columnconfigure(0, weight=1)
        #self.rowconfigure(0, weight=1)

    def collapse(self):
        """ Collapses frame, leaving icon to uncollapse """
        print('collapsing')
        # hide all known items
        sub_items = self.grid_slaves()
        for item in sub_items:
            self.collapsed_widgets.append(item)  # save these items for if we want to un-hide them later
            item.grid_remove()
        # add back uncollapse button, title
        self.uncollapsebuton.grid()
        self.collapsebuton.grid_remove()
        self.title_label.grid()

    def uncollapse(self):
        """ Uncollapses frame, leaving icon to collapse """
        print('uncollapsing')
        # unhide previously hidden widgets
        for item in self.collapsed_widgets:
            item.grid()
        self.collapsed_widgets.clear()
        self.unhide()
        self.collapsebuton.grid()
        self.uncollapsebuton.grid_remove()

    def disable_uncollapse(self):
        self.uncollapsebuton.configure(state=ctk.DISABLED)
        self.collapsebuton.configure(state='disabled')


class App(ctk.CTk):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.title("My awesome window")
        self.geometry(f"{appWidth}x{appHeight}")
        self.columnconfigure(0, weight=1)
        create_vault_frame = CollapsibleFrame(self, row=0, column=0, collapsed_text="Panel1")
        create_vault_frame.uncollapse()
        new_label = ctk.CTkLabel(create_vault_frame, text='aaah')
        new_label.grid(row=1, column=0)
        create_vault_frame.disable_uncollapse()


if __name__ == "__main__":
    app = App()
    app.mainloop()
</code></pre>
<p>I have tried changing the way I set the state, including the following options:</p>
<pre><code>self.uncollapsebuton.configure(state=ctk.DISABLED)
self.uncollapsebuton.configure(state=tk.DISABLED)
self.uncollapsebuton.configure(state="DISABLED")
</code></pre>
<p>I have also tried running <code>self.update()</code> from within the frame.</p>
<p>But none of them seem to take effect. I have added a print statement to check, and this function is indeed getting called, the state is even changing, but the button remains clickable.</p>
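<p>One hedged observation (not confirmed against CustomTkinter internals, but true of plain tkinter): handlers attached with <code>bind("&lt;Button-1&gt;")</code> fire regardless of widget state; <code>configure(state="disabled")</code> only suppresses the built-in <code>command</code> callback. A minimal sketch of a state guard, assuming the button exposes <code>cget("state")</code> like standard tkinter widgets (the <code>guarded</code> helper is illustrative, not a library API):</p>

```python
def guarded(button, fn):
    """Wrap fn so it only runs when the button is not disabled.

    Assumption: bind("<Button-1>") handlers fire even on a disabled
    widget; only the built-in `command` callback honors the state.
    """
    def handler(event=None):
        if str(button.cget("state")) == "disabled":
            return  # swallow clicks while disabled
        fn()
    return handler
```

<p>The bindings would then become <code>self.collapsebuton.bind("&lt;Button-1&gt;", guarded(self.collapsebuton, self.collapse))</code>, or, more simply, the callbacks could be passed as <code>command=</code> at construction so the state is honored automatically.</p>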
| <python><tkinter><customtkinter> | 2023-11-01 07:04:36 | 1 | 302 | Mr. T |
77,400,589 | 1,866,775 | What is the reason for MultiHeadAttention having a different call convention than Attention and AdditiveAttention? | <p><code>Attention</code> and <code>AdditiveAttention</code> <a href="https://github.com/keras-team/keras/blob/68f9af408a1734704746f7e6fa9cfede0d6879d8/keras/layers/attention/base_dense_attention.py#L160" rel="nofollow noreferrer">are called with their input tensors in a list</a>. (same as <code>Add</code>, <code>Average</code>, <code>Concatenate</code>, <code>Dot</code>, <code>Maximum</code>, <code>Multiply</code>, <code>Subtract</code>)</p>
<p>But <code>MultiHeadAttention</code> <a href="https://github.com/keras-team/keras/blob/68f9af408a1734704746f7e6fa9cfede0d6879d8/keras/layers/attention/multi_head_attention.py#L549-L550" rel="nofollow noreferrer">is called by passing the input tensors as separate arguments</a>.</p>
<p>The following minimal example shows the difference in how the <code>inbound_nodes</code> are linked:</p>
<pre class="lang-py prettyprint-override"><code>import json
from tensorflow.keras.layers import AdditiveAttention, Attention, Input, MultiHeadAttention
from tensorflow.keras.models import Model
inputs = [Input(shape=(8, 16)), Input(shape=(4, 16))]
outputs = [Attention()([inputs[0], inputs[1]]),
           AdditiveAttention()([inputs[0], inputs[1]]),
           MultiHeadAttention(num_heads=2, key_dim=2)(inputs[0], inputs[1])]
model = Model(inputs=inputs, outputs=outputs)
print(json.dumps(json.loads(model.to_json()), indent=4))
</code></pre>
<pre><code>[...]
"name": "attention",
"inbound_nodes": [
[
[
"input_1",
0,
0,
{}
],
[
"input_2",
0,
0,
{}
]
]
]
[...]
"name": "additive_attention",
"inbound_nodes": [
[
[
"input_1",
0,
0,
{}
],
[
"input_2",
0,
0,
{}
]
]
]
[...]
"name": "multi_head_attention",
"inbound_nodes": [
[
[
"input_1",
0,
0,
{
"value": [
"input_2",
0,
0
]
}
]
]
]
[...]
</code></pre>
<p>What's the reason for <code>MultiHeadAttention</code> not following the convention of the other two attention layers, and what does it mean for <code>input_2</code> being stored as <code>value</code> in <code>input_1</code> in the <code>inbound_nodes</code>?</p>
<p>(Some context on why this is relevant to me: I'm maintaining <a href="https://github.com/Dobiasd/frugally-deep" rel="nofollow noreferrer">this library</a> and would like to implement support for <code>MultiHeadAttention</code>.)</p>
| <python><tensorflow><keras><attention-model><multihead-attention> | 2023-11-01 05:47:10 | 0 | 11,227 | Tobias Hermann |
77,400,502 | 6,301,394 | DuckDB export / import for a subset of tables | <p>Exporting and importing of all tables works great. I would like to select only a subset of tables for export. I can do COPY but that does not include the load.sql and schema.sql command files for neat importing.</p>
| <python><sql><duckdb> | 2023-11-01 05:20:04 | 1 | 2,613 | misantroop |
77,400,073 | 7,766,024 | Specifying known_first_party and src_path still causes Isort to group my own module with third party imports | <p>The directory structure of my project is as follows:</p>
<pre><code>.
└── Home/
    └── project/
        ├── data/
        │   ├── dataset.py
        │   └── data_utils.py
        ├── src/
        │   └── main.py
        ├── .pre-commit-config.yaml
        └── .isort.cfg
</code></pre>
<p>I also have a <code>.env</code> file that I'm using Python-dotenv to load that has the <code>PYTHONPATH</code> environment variable set to <code>PYTHONPATH="$(pwd):$(pwd)/src:$(pwd)/data"</code> so that I can perform imports from within the <code>data</code> directory.</p>
<p>In the <code>data_utils.py</code> file I have an import that goes <code>from dataset import Dataset</code>. When I run my commits Isort groups <code>from dataset import Dataset</code> with the other third party import statements.</p>
<p>I've tried to specify the following in my <code>.isort.cfg</code> but none of them work:</p>
<ol>
<li><code>known_first_party=data_utils</code></li>
<li><code>known_first_party=data_utils</code> & <code>src_paths=data</code></li>
</ol>
<p>I'm a little stuck as to what else I could try.</p>
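<p>One hedged observation worth checking: isort resolves first-party modules from its own <code>src_paths</code> setting and does not consult <code>PYTHONPATH</code> or a <code>.env</code> file at all, and <code>known_first_party</code> expects module names rather than directory names. A configuration sketch matching the tree above (values are illustrative, not a confirmed fix):</p>

```ini
[settings]
# isort does not read PYTHONPATH or .env files; declare source roots here
src_paths = .,src,data
# known_first_party takes module/package names, not directory names
known_first_party = dataset,data_utils
```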
| <python><isort> | 2023-11-01 02:46:35 | 0 | 3,460 | Sean |
77,399,743 | 159,072 | Splitting a data set for CNN | <p>Suppose, I have a tensor <code>tfDataSet</code> as follows:</p>
<pre><code>data3d = [
[[7.042 9.118 0. 1. 1. 1. 1. 1. 0. 0. 1. ]
[5.781 5.488 7.47 0. 0. 0. 0. 1. 1. 0. 0. ]
[5.399 5.166 6.452 0. 0. 0. 0. 0. 1. 0. 0. ]
[5.373 4.852 6.069 0. 0. 0. 0. 1. 1. 0. 0. ]
[5.423 5.164 6.197 0. 0. 0. 0. 2. 1. 0. 0. ]]
,
[[ 5.247 4.943 6.434 0. 0. 0. 0. 1. 1. 0. 0. ]
[ 5.485 8.103 8.264 0. 0. 0. 0. 1. 0. 0. 1. ]
[ 6.675 9.152 9.047 0. 0. 0. 0. 1. 0. 0. 1. ]
[ 6.372 8.536 11.954 0. 0. 0. 0. 0. 0. 0. 1. ]
[ 5.669 5.433 6.703 0. 0. 0. 0. 0. 1. 0. 0. ]]
,
[[5.304 4.924 6.407 0. 0. 0. 0. 0. 1. 0. 0. ]
[5.461 5.007 6.088 0. 0. 0. 0. 1. 1. 0. 0. ]
[5.265 5.057 6.41 0. 0. 0. 0. 3. 0. 0. 1. ]
[5.379 5.026 6.206 0. 0. 0. 0. 1. 1. 0. 0. ]
[5.525 5.154 6. 0. 0. 0. 0. 1. 1. 0. 0. ]]
,
[[5.403 5.173 6.102 0. 0. 0. 0. 1. 1. 0. 0. ]
[5.588 5.279 6.195 0. 0. 0. 0. 1. 1. 0. 0. ]
[5.381 5.238 6.675 0. 0. 0. 0. 1. 0. 0. 1. ]
[5.298 5.287 6.668 0. 0. 0. 0. 1. 1. 0. 0. ]
[5.704 7.411 4.926 0. 0. 0. 0. 1. 1. 0. 0. ]]
,
... ... ... ...
... ... ... ...
]
tfDataSet = tf.convert_to_tensor(data3d)
</code></pre>
<p>In each 2D array inside the tensor, the first eight columns are features and the last three columns are one-hot-encoded labels.</p>
<p>Suppose, I want to feed this tensor into a CNN. For that, I need to do two things:</p>
<ul>
<li>(1) split the <code>data3d</code> into <code>trainData3d</code>, <code>validData3d</code>, and <code>testData3d</code></li>
<li>(2) split each of the above three into <code>featureData3d</code> and <code>labelData3d</code>.</li>
</ul>
<p>Now, my question is, which one of the above steps should I do first and which one second, in order to be least expensive?</p>
<p>Explain why.</p>
<p>If I do #2 first, how can the feature and labels data maintain their correspondence?</p>
<p>Cross-posted: <a href="https://softwareengineering.stackexchange.com/questions/448402/splitting-a-data-set-for-cnn">SoftwareEngineering</a></p>
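<p>For reference, a NumPy sketch of doing step (1) first and step (2) second (array sizes and split fractions here are illustrative, not from the question): both splits are pure slicing along fixed axes, so the order barely matters for cost, and row correspondence survives step (2) because features and labels are sliced from the same samples along the last axis:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
data3d = rng.random((100, 5, 11))  # (samples, rows, 8 features + 3 one-hot labels)

# (1) split samples into train/valid/test (70/15/15, illustrative fractions)
n = len(data3d)
train, valid, test = np.split(data3d, [int(0.70 * n), int(0.85 * n)])

# (2) split each part into features and labels along the last axis;
# both slices come from the same sample rows, so correspondence is kept
X_train, y_train = train[..., :8], train[..., 8:]
```

<p>The same two slices would be taken from <code>valid</code> and <code>test</code>; shuffling, if needed, should happen before step (1) so the feature/label pairing is never disturbed.</p>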
| <python><arrays><tensorflow><conv-neural-network> | 2023-11-01 00:31:01 | 1 | 17,446 | user366312 |
77,399,698 | 688,624 | Python "setuptools" build with C++ files in higher directory | <p>I am struggling to compile an extension module.</p>
<p>My C++ files are in a higher directory than my "setup.py"/etc. files. This is because the Python extension is a binding of a C++ library, and so is hierarchically subordinate<sup>[1]</sup>. This use-case, I should think, is basic and common.</p>
<p>I list my files using relative paths in my "setup.py" file. A simplified + untested but representative example:</p>
<pre class="lang-py prettyprint-override"><code>from setuptools import setup
from pybind11.setup_helpers import Pybind11Extension
myfiles = [ "../src/functionality.cpp", "../src/pybind11_binding_main.cpp" ]
setup_args = dict(
    ext_modules=[
        Pybind11Extension("myext", myfiles),
    ],
)
setup(**setup_args)
</code></pre>
<p>On Windows, running "py -m build --wheel" in this directory works just fine, producing a library that works. However, with the equivalent command on Linux, I get:</p>
<pre><code>Fatal error: can't create build/temp.linux-x86_64-cpython-311/../src/pybind11_binding_main.o: No such file or directory
</code></pre>
<p>This happens because apparently no one considered relative paths that go up, for example because of the use-case above. The only useful reference to this problem is a <a href="https://bugs.python.org/issue9023" rel="nofollow noreferrer">decade-old bug report</a> that wasn't fixed. The given workaround was to use absolute paths, which I interpret as the following:</p>
<pre class="lang-py prettyprint-override"><code>myfiles = [ "../src/functionality.cpp", "../src/pybind11_binding_main.cpp" ]
import os.path
myfiles = [ os.path.abspath(path) for path in myfiles ]
</code></pre>
<p>However, with modern <s>technology</s> versions of the tools, this produces the following error:</p>
<pre><code>error: Error: setup script specifies an absolute path:
/absolute/path/to/myproject/src/pybind11_binding_main.cpp
setup() arguments must *always* be /-separated paths relative to the
setup.py directory, *never* absolute paths.
ERROR Backend subprocess exited when trying to invoke build_wheel
</code></pre>
<p>The error is descriptive, sensible, and well-taken, but also maddeningly short-sighted in that <em>there apparently is no alternative</em>.</p>
<p>What am I <em>supposed</em> to do? Do they just assume any project is to be configured wholly and solely as a Python project? Can I do something like copy the C++ from the upper-level directory and the Python into some kind of temporary staging directory automatically? This should not be so difficult or weird!</p>
<hr />
<p><sub><sub><sup>[1]</sup>As well as that Python will name the package whatever its folder is called, whereas I am using that folder name for something else at root namespace. Also, as a practical issue, "python3 -m build" (apparently by unconfigurable <em>design</em>) spews a bunch of directories everywhere, and I'd rather not have that at my project root :V</sub></sub></p>
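<p>One workaround sketch (my own assumption, not an officially sanctioned setuptools mechanism): copy the out-of-tree C++ sources into a staging directory beneath the setup.py directory before calling <code>setup()</code>, so the build backend only ever sees downward relative paths. Names here are illustrative:</p>

```python
import os
import shutil

def stage_sources(paths, staging_dir="_cpp_staging"):
    """Copy sources that live above setup.py into a local staging dir,
    returning the new relative paths for use in ext_modules."""
    os.makedirs(staging_dir, exist_ok=True)
    staged = []
    for src in paths:
        dest = os.path.join(staging_dir, os.path.basename(src))
        shutil.copy(src, dest)  # re-copied on every build, so edits propagate
        staged.append(dest)
    return staged
```

<p>Usage would be <code>myfiles = stage_sources(["../src/functionality.cpp", "../src/pybind11_binding_main.cpp"])</code> before the <code>setup()</code> call, with the staging directory added to .gitignore.</p>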
| <python><setuptools><pybind11> | 2023-11-01 00:15:16 | 0 | 15,517 | geometrian |
77,399,617 | 10,487,817 | Unable to write a dataframe to SQL Server as a table using SQL Alchemy, connection is established but cannot write a new table to the database | <p>I can connect to SQL Server using SQLAlchemy, but I am not able to write the data frame to the database as a table.</p>
<p>I am getting the below error:</p>
<pre><code>try:
    SqlServerEngine = pyodbc.connect(driver='{ODBC Driver 17 for SQL Server}',
                                     host=server_name,
                                     database=database_name,
                                     trusted_connection=tcon)
    result = SqlServerEngine.execute('SELECT 1')
    for row in result:
        print(row)
    print("SQL Server connection established successfully.")
    App_test.to_sql('App_test',
                    schema='Test_Data',
                    con=SqlServerEngine,
                    if_exists='replace',
                    index=False)
    # Close the database connection
    SqlServerEngine.close()
except Exception as e:
    print(f"Error: {e}")
</code></pre>
<p>Output as below:</p>
<pre><code>(1, )
SQL Server connection established successfully.
Error: Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': ('42S02', "[42S02] [Microsoft][ODBC SQL Server Driver][SQL Server]Invalid object name 'sqlite_master'. (208) (SQLExecDirectW); [42S02] [Microsoft][ODBC SQL Server Driver][SQL Server]Statement(s) could not be prepared. (8180)")
</code></pre>
<p>How can I resolve this error?</p>
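<p>A hedged note on the error itself: when <code>DataFrame.to_sql</code> receives a raw DBAPI connection rather than a SQLAlchemy engine, pandas falls back to its SQLite code path, which is why it probes <code>sqlite_master</code> against SQL Server. A sketch of building the SQLAlchemy connection URL instead (the helper and its format string are illustrative, not pandas or SQLAlchemy API):</p>

```python
def mssql_odbc_url(server, database, driver="ODBC Driver 17 for SQL Server"):
    """Build a SQLAlchemy URL for a trusted (Windows auth) SQL Server
    connection via pyodbc. Illustrative helper, not a library API."""
    return (
        f"mssql+pyodbc://@{server}/{database}"
        f"?driver={driver.replace(' ', '+')}&amp;trusted_connection=yes"
    )
```

<p>The URL would then feed <code>sqlalchemy.create_engine(...)</code>, and that engine (not a pyodbc connection) is what <code>to_sql(con=...)</code> expects in order to emit SQL Server dialect statements.</p>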
| <python><sql-server><sqlalchemy> | 2023-10-31 23:45:16 | 0 | 363 | cyrus24 |
77,399,592 | 3,936,601 | 'Set like' behaviour of dict.items() for non hashable values | <p><a href="https://docs.python.org/3/library/stdtypes.html#dictionary-view-objects" rel="nofollow noreferrer">https://docs.python.org/3/library/stdtypes.html#dictionary-view-objects</a></p>
<p>The documentation says:</p>
<blockquote>
<p>If all values are hashable, so that (key, value) pairs are unique and hashable, then the items view is also set-like.</p>
</blockquote>
<p>However non-hashable items (lists) do seem to support 'set-like' operations:</p>
<pre class="lang-py prettyprint-override"><code>> {1:2, 3:4}.items() >= {1:2}.items()
True
> {1:[2], 3:4}.items() >= {1:[2]}.items()
True
> set({1:[2], 3:4}.items())
TypeError: unhashable type: 'list'
</code></pre>
<p>Why does this work? Can this behavior be relied upon to work correctly? I have not found a list that does not behave.</p>
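<p>A short check of the observed behavior (a CPython implementation detail, so hedge accordingly): the view comparison is implemented with per-key lookups and <code>==</code> tests rather than by hashing the (key, value) pairs, which is why unhashable values survive subset comparisons even though <code>set()</code> conversion fails:</p>

```python
big = {1: [2], 3: 4}.items()
small = {1: [2]}.items()

# subset test works: each (key, value) pair in `small` is located by key
# in `big` and compared with ==, so the list value is never hashed
assert big >= small
assert not (small >= big)

# converting to a set does hash each pair, so it fails on the list value
try:
    set(big)
    hashable = True
except TypeError:
    hashable = False
assert not hashable
```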
| <python><dictionary><set> | 2023-10-31 23:36:32 | 1 | 1,739 | Evan Benn |
77,399,310 | 11,855,904 | How to Override Django ManyRelatedManager Methods and Enforce Validations | <p>I have the following two models that have a Many-to-Many (m2m) relationship. I want to implement validation rules when creating a relationship, particularly when the <code>Field</code> object is controlled.</p>
<pre class="lang-py prettyprint-override"><code>class Category(models.Model):
    name = models.CharField(max_length=50, unique=True)
    slug = models.SlugField(max_length=50, unique=True, editable=False)
    fields = models.ManyToManyField("Field", related_name="categories")
</code></pre>
<pre class="lang-py prettyprint-override"><code>class Field(models.Model):
    controller = models.ForeignKey(
        "self",
        models.CASCADE,
        related_name="dependents",
        null=True,
        blank=True,
    )
    label = models.CharField(max_length=50)
    name = models.CharField(max_length=50)
</code></pre>
<p>When creating a relationship between a <code>Category</code> and a <code>Field</code> objects via the manager's <code>add()</code> method, I would like to validate that, for a controlled field, the controller field must already be associated with the category.</p>
<p>When creating via the <code>set()</code> method, I would like to validate for any controlled field in the list, the controller field must also be in the list, OR is already associated with the category.</p>
<pre class="lang-py prettyprint-override"><code># Category
category = Category.objects.create(name="Cars", slug="cars")
# Fields
brand_field = Field.objects.create(name="brand", label="Brand")
model_field = Field.objects.create(name="model", label="Model", controller=brand_field)
# controller should already be associated or raises an exception
category.fields.add(model_field)
# Controller must be included in the list as well,
# OR already associated, or raise an exception
category.fields.set([model_field])
</code></pre>
<p>Where can I enforce this constraints? I'm thinking of overriding <code>ManyRelatedManager</code>'s <code>add()</code> and <code>set()</code> methods but I do not understand how they're coupled to Django model or field.</p>
<h2>Alternatively</h2>
<p>If it's not viable to override the <code>ManyRelatedManager</code> methods, where in the <code>ModelAdmin</code> can I intercept objects before they're saved so that I can run these checks?</p>
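<p>Whichever hook ends up being used (overriding the related manager, or the <code>m2m_changed</code> signal, which is the usual interception point for <code>add()</code>/<code>set()</code>), the validation itself is plain set logic and can be kept framework-free. A sketch (the function name and input shapes are my own, not Django API):</p>

```python
def check_controllers(existing_field_ids, incoming):
    """Validate that every controlled field's controller is either
    already associated with the category or part of the same batch.

    existing_field_ids: ids already linked to the category
    incoming: iterable of (field_id, controller_id_or_None) being added
    """
    incoming = list(incoming)
    # the controller may come from what is already linked OR this batch
    pool = set(existing_field_ids) | {fid for fid, _ in incoming}
    for fid, controller in incoming:
        if controller is not None and controller not in pool:
            raise ValueError(
                f"field {fid}: controller {controller} must be associated first"
            )
```

<p>With the question's example, adding <code>model_field</code> alone to an empty category would raise, while adding <code>[brand_field, model_field]</code> together would pass.</p>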
| <python><django><django-models><many-to-many><manyrelatedmanager> | 2023-10-31 22:05:28 | 0 | 392 | cy23 |
77,399,227 | 9,165,040 | What are the best ways to optimize this mongo query? | <p>I inherited a mongo query that I've been tasked to put into a GCP cloud function. The cloud function is getting a mongo timeout when it hits this query. When running locally, it takes quite a while as well ~ 40 seconds - but it does finish. I could use help in 2 ways here;</p>
<ol>
<li>Does my prognosis about why the cloud function is timing out on mongo sound right? Or are there cloud function settings to adjust?</li>
<li>Any suggestions for optimizing it are welcome. ChatGPT suggests combining $project stages, removing $group stages, and limiting return fields to only those necessary.</li>
</ol>
<pre><code>cash_balances_pipeline = [
{
'$match': {
'audit.createdDate': {
'$gte': datetime(2022, 1, 1, 10, 0, 0, tzinfo=timezone.utc),
'$lt': end_date
},
'status': 'completed'
}
},
{
'$unwind': {
'path': '$valueMap.bonusFunds',
'preserveNullAndEmptyArrays': True
}
},
{
'$project': {
'type': 1,
'cents': {
'$ifNull': ['$valueMap.cents', 0]
},
'bonusFundsAwarded': {
'$cond': [
{ '$in': ['$valueMap.bonusFunds.promotionType', ['riskFreeEntry']] },
{ '$ifNull': ['$valueMap.bonusFunds.value', 0] },
0
]
},
'bonusFundsUsed': {
'$cond': [
{ '$in': ['$valueMap.bonusFunds.promotionType', ['riskFreeEntry']] },
{ '$ifNull': ['$valueMap.bonusFunds.amountUsed', 0] },
0
]
}
}
},
{
'$group': {
'_id': '$type',
'cents': { '$sum': '$cents' },
'bonusFundsAwarded': { '$sum': '$bonusFundsAwarded' },
'bonusFundsUsed': { '$sum': '$bonusFundsUsed' },
'totalCents': {
'$sum': {
'$add': ['$cents', '$bonusFundsAwarded', { '$multiply': ['$bonusFundsUsed', -1] }]
}
}
}
},
{
'$project': {
'totalCents': { '$multiply': ['$totalCents', 0.01] },
'bonusFundsAwarded': 1,
'bonusFundsUsed': 1
}
},
{
'$sort': { '_id': 1 }
}
]
</code></pre>
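<p>As a concrete version of the "combine $project into $group" suggestion (hedged: untested against the real collection; field paths are copied from the pipeline above): the per-document arithmetic can move inside the <code>$group</code> accumulators, dropping one full pass over the unwound documents:</p>

```python
# Hypothetical merged stage: the $project arithmetic folded into $group,
# so the pipeline becomes $match -> $unwind -> $group -> $project -> $sort
is_rfe = {'$in': ['$valueMap.bonusFunds.promotionType', ['riskFreeEntry']]}
merged_group = {
    '$group': {
        '_id': '$type',
        'cents': {'$sum': {'$ifNull': ['$valueMap.cents', 0]}},
        'bonusFundsAwarded': {
            '$sum': {'$cond': [is_rfe,
                               {'$ifNull': ['$valueMap.bonusFunds.value', 0]},
                               0]}
        },
        'bonusFundsUsed': {
            '$sum': {'$cond': [is_rfe,
                               {'$ifNull': ['$valueMap.bonusFunds.amountUsed', 0]},
                               0]}
        },
    }
}
```

<p><code>totalCents</code> can then be derived in the final <code>$project</code> as <code>cents + bonusFundsAwarded - bonusFundsUsed</code>, since summation is linear. A compound index on <code>status</code> and <code>audit.createdDate</code> would also be worth checking, so the <code>$match</code> stage does not scan the whole collection.</p>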
| <python><mongodb><google-cloud-functions><data-pipeline> | 2023-10-31 21:44:01 | 1 | 518 | cDub |
77,399,168 | 3,220,769 | Signal.alarm not raising while other function is executing | <p>I have a Lambda function which does some querying to a database using the <code>teradatasql</code> library. I have a 15 minute timeout on my Lambda, but I want to make sure to throw an exception before hitting that timeout to let users know their query timed out. Here's the basic idea:</p>
<pre><code>import signal
import teradatasql

class TimeoutException(Exception):
    pass

def timeout_handler(signal, frame):
    raise TimeoutException("Query failed to execute in time")

def query_database(statements):
    connect = teradatasql.connect(**kwargs)
    cursor = connect.cursor()
    for statement in statements:
        cursor.execute(statement)
    return cursor.fetchall()

def handler(event, context):
    try:
        # When an ALARM is raised, call the handler
        signal.signal(signal.SIGALRM, timeout_handler)
        # Raise an ALARM if we are within 15 seconds of the timeout
        signal.alarm((context.get_remaining_time_in_millis() // 1000) - 15)
        # Run queries
        query_database(event['statements'])
    except TimeoutException:
        raise Exception("Statements were not processed in time")
</code></pre>
<p>I am seeing some strange behavior.</p>
<ul>
<li>Currently, the alarm does not get raised and the Lambda simply fails with the RuntimeError for timing out at the threshold.</li>
<li>If I remove the <code>signal.signal</code> call that assigns a handler to the alarm, the alarm raises when expected, but without a handler attached it is useless.</li>
<li>If I replace <code>query_database</code> body with a <code>time.sleep(1000)</code> then the alarm will raise as expected.</li>
</ul>
<p>Is there something to do with the library <code>teradatasql</code> and how it uses threads? Should I place the <code>signal.signal</code> call somewhere else?</p>
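<p>A minimal Unix-only reproduction of the mechanism, which may explain the third bullet (hedged: I have not inspected the <code>teradatasql</code> driver): CPython only runs Python-level signal handlers between bytecode instructions on the main thread, so an alarm fires promptly during an interruptible call like <code>time.sleep</code>, but is deferred while a blocking C extension call holds the thread, until control returns to the interpreter:</p>

```python
import signal
import time

class QueryTimeout(Exception):
    pass

def on_alarm(signum, frame):
    raise QueryTimeout

signal.signal(signal.SIGALRM, on_alarm)
signal.alarm(1)  # fire in 1 second
try:
    # sleep is interruptible: the OS signal wakes it, then the interpreter
    # runs on_alarm before executing the next bytecode instruction
    time.sleep(5)
    timed_out = False
except QueryTimeout:
    timed_out = True
finally:
    signal.alarm(0)  # always cancel a pending alarm
```

<p>If <code>cursor.execute</code> blocks inside C code (or runs work on a background thread), the exception is delayed until that call returns, which would match the Lambda hitting its hard timeout first; running the query in a worker thread and joining it with a timeout is one signal-free alternative.</p>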
| <python><amazon-web-services><aws-lambda><signals> | 2023-10-31 21:29:49 | 1 | 3,327 | TomNash |
77,399,029 | 10,016,018 | Create table and view in the same migration in SQLAlchemy | <p>I'm trying to create a table and a view based on this table in a SQLAlchemy/alembic/PostgreSQL environment, with <code>alembic revision --autogenerate</code>. I could properly setup things so that alembic identifies the table and the view to be created, following the structure proposed <a href="https://stackoverflow.com/questions/36855336/alembic-generation-of-materialized-view/72829474#72829474">here</a>.</p>
<p>However, since I'm trying to create a table and the view based on this table on the same migration, problems arise. When it is time to create the view it fails saying that the table does not yet exist.</p>
<p>It works fine if I comment out the <code>register_entities([my_view])</code> call in my migrations/env.py file, run the migration once (so it creates the table), then restore the line and run the migration again (which creates the view). However, I would like to solve this properly so that deploys and new contributors to the repo don't depend on such a workaround.</p>
<p>I'm not sure if it's trying to create the view before the table or what. Any tips on how to solve this?</p>
<p>Here are the full logs from running <code>alembic revision --autogenerate</code>:</p>
<pre><code>INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.autogenerate.compare] Detected added table 'my_table'
INFO [alembic_utils.depends] Resolving entities with no dependencies
INFO [alembic_utils.depends] Resolving entities with dependencies. This may take a minute
INFO [alembic_utils.replaceable_entity] Detecting required migration op PGView PGView: public.my_view
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.UndefinedTable: view "my_view" does not exist
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/vscode/.local/lib/python3.11/site-packages/alembic_utils/simulate.py", line 47, in simulate_entity
sess.execute(entity.to_sql_statement_drop(cascade=True))
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 2306, in execute
return self._execute_internal(
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 2197, in _execute_internal
result = conn.execute(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1416, in execute
return meth(
^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/sql/elements.py", line 516, in _execute_on_connection
return connection._execute_clauseelement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1639, in _execute_clauseelement
ret = self._execute_context(
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1848, in _execute_context
return self._exec_single_context(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1988, in _exec_single_context
self._handle_dbapi_exception(
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2343, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) view "my_view" does not exist
[SQL: DROP VIEW "public"."my_view" cascade]
(Background on this error at: https://sqlalche.me/e/20/f405)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.UndefinedTable: relation "my_table" does not exist
LINE 2: FROM my_table;
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/alembic", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.11/site-packages/alembic/config.py", line 630, in main
CommandLine(prog=prog).main(argv=argv)
File "/usr/local/lib/python3.11/site-packages/alembic/config.py", line 624, in main
self.run_cmd(cfg, options)
File "/usr/local/lib/python3.11/site-packages/alembic/config.py", line 601, in run_cmd
fn(
File "/usr/local/lib/python3.11/site-packages/alembic/command.py", line 234, in revision
script_directory.run_env()
File "/usr/local/lib/python3.11/site-packages/alembic/script/base.py", line 579, in run_env
util.load_python_file(self.dir, "env.py")
File "/usr/local/lib/python3.11/site-packages/alembic/util/pyfiles.py", line 93, in load_python_file
module = load_module_py(module_id, path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/alembic/util/pyfiles.py", line 109, in load_module_py
spec.loader.exec_module(module) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/workspaces/api/app/migrations/env.py", line 99, in <module>
run_migrations_online()
File "/workspaces/api/app/migrations/env.py", line 93, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/usr/local/lib/python3.11/site-packages/alembic/runtime/environment.py", line 938, in run_migrations
self.get_context().run_migrations(**kw)
File "/usr/local/lib/python3.11/site-packages/alembic/runtime/migration.py", line 612, in run_migrations
for step in self._migrations_fn(heads, self):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/alembic/command.py", line 210, in retrieve_migrations
revision_context.run_autogenerate(rev, context)
File "/usr/local/lib/python3.11/site-packages/alembic/autogenerate/api.py", line 567, in run_autogenerate
self._run_environment(rev, migration_context, True)
File "/usr/local/lib/python3.11/site-packages/alembic/autogenerate/api.py", line 614, in _run_environment
compare._populate_migration_script(
File "/usr/local/lib/python3.11/site-packages/alembic/autogenerate/compare.py", line 59, in _populate_migration_script
_produce_net_changes(autogen_context, upgrade_ops)
File "/usr/local/lib/python3.11/site-packages/alembic/autogenerate/compare.py", line 92, in _produce_net_changes
comparators.dispatch("schema", autogen_context.dialect.name)(
File "/usr/local/lib/python3.11/site-packages/alembic/util/langhelpers.py", line 268, in go
fn(*arg, **kw)
File "/home/vscode/.local/lib/python3.11/site-packages/alembic_utils/replaceable_entity.py", line 328, in compare_registered_entities
maybe_op = entity.get_required_migration_op(sess, dependencies=has_create_or_update_op)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vscode/.local/lib/python3.11/site-packages/alembic_utils/replaceable_entity.py", line 163, in get_required_migration_op
db_def = self.get_database_definition(sess, dependencies=dependencies)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/vscode/.local/lib/python3.11/site-packages/alembic_utils/replaceable_entity.py", line 102, in get_database_definition
with simulate_entity(sess, self, dependencies) as sess:
File "/usr/local/lib/python3.11/contextlib.py", line 137, in __enter__
return next(self.gen)
^^^^^^^^^^^^^^
File "/home/vscode/.local/lib/python3.11/site-packages/alembic_utils/simulate.py", line 62, in simulate_entity
sess.execute(entity.to_sql_statement_create())
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 2306, in execute
return self._execute_internal(
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/orm/session.py", line 2197, in _execute_internal
result = conn.execute(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1416, in execute
return meth(
^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/sql/elements.py", line 516, in _execute_on_connection
return connection._execute_clauseelement(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1639, in _execute_clauseelement
ret = self._execute_context(
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1848, in _execute_context
return self._exec_single_context(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1988, in _exec_single_context
self._handle_dbapi_exception(
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 2343, in _handle_dbapi_exception
raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
self.dialect.do_execute(
File "/usr/local/lib/python3.11/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.UndefinedTable) relation "my_table" does not exist
LINE 2: FROM my_table;
^
[SQL: CREATE VIEW "public"."my_view" AS SELECT my_table.id AS id, my_table.field1 AS field1, my_table.field2 AS field2
FROM my_table;]
</code></pre>
<p>Versions:</p>
<ul>
<li>psycopg2~=2.9</li>
<li>sqlalchemy~=2.0</li>
<li>sqlalchemy-utils~=0.41.1</li>
<li>alembic~=1.10</li>
<li>alembic_utils~=0.8.2</li>
</ul>
| <python><sqlalchemy><orm><alembic><sqlalchemy-migrate> | 2023-10-31 20:54:26 | 1 | 982 | Danilo Filippo |
77,399,006 | 15,456,681 | Why are np.empty((1,), dtype=np.bool_) or global arrays slow as default arguments in numba? | <p>I was writing a function in numba which requires an optional argument to be an array of any dtype, and was experimenting with putting a default array directly in the function definition, and noticed that using <code>np.empty</code> or a global array was ~80x slower than using <code>array</code> or <code>zeros</code>. Why is using these functions in a function definition so slow, when neither of these functions is that slow in normal Python or in an njit function:</p>
<pre><code>import numba as nb
import numpy as np
@nb.njit
def func_zeros(b=np.zeros((1,), dtype=np.bool_)):
return b
@nb.njit
def func_array(b=np.array(0, dtype=np.bool_)):
return b
@nb.njit
def func_empty(b=np.empty((1,), dtype=np.bool_)):
return b
_global_array = np.zeros((1,), dtype=np.bool_)
@nb.njit
def func_global(b=_global_array):
return b
@nb.njit
def func_None(b=None):
if b is None:
b = np.empty((1,), dtype=np.bool_)
return b
func_zeros(), func_array(), func_empty(), func_global(), func_None()
func_zeros(np.zeros((1,), dtype=np.bool_))
func_array(np.array(0, dtype=np.bool_))
func_empty(np.empty((1,), dtype=np.bool_))
func_global(np.zeros((1,), dtype=np.bool_))
func_None(np.empty((1,), dtype=np.bool_))
%timeit -n 10000 func_zeros()
%timeit -n 10000 func_array()
%timeit -n 10000 func_empty()
%timeit -n 10000 func_global()
%timeit -n 10000 func_None()
%timeit -n 10000 func_zeros(np.zeros((1,), dtype=np.bool_))
%timeit -n 10000 func_array(np.array(0, dtype=np.bool_))
%timeit -n 10000 func_empty(np.empty((1,), dtype=np.bool_))
%timeit -n 10000 func_global(np.zeros((1,), dtype=np.bool_))
%timeit -n 10000 func_None(np.empty((1,), dtype=np.bool_))
</code></pre>
<p>Output:</p>
<pre><code>296 ns ± 2.73 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
291 ns ± 5.45 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
24.4 µs ± 166 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) #!!!
24.4 µs ± 262 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) #!!!
357 ns ± 5.69 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
463 ns ± 3.96 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
532 ns ± 4.17 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
468 ns ± 5.55 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
472 ns ± 6.65 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
475 ns ± 4.68 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
</code></pre>
<p>Secondary question, I'm running this in a Jupyter notebook in vscode, the times above are for the first time I run this cell after restarting the kernel, if I run the cell a second time, I get the following times:</p>
<pre><code>25 µs ± 728 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
23.1 µs ± 186 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
24.5 µs ± 477 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
23.8 µs ± 145 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
356 ns ± 4.9 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
466 ns ± 4.61 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
505 ns ± 3.59 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
456 ns ± 2.78 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
464 ns ± 2.61 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
458 ns ± 4.36 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
</code></pre>
<p>if these are the correct times, why is using arrays as default arguments so slow in numba, and if not, why is this happening?</p>
| <python><numpy><performance><numba> | 2023-10-31 20:48:18 | 0 | 3,592 | Nin17 |
77,398,956 | 3,584,765 | How to pass a parameter of a lambda function to another function which accepts the lambda function as a parameter | <p>I am using a function (more specifically <em>optim.lr_scheduler.LambdaLR</em> from torch) where a lambda function is passed as a parameter.</p>
<pre><code>from torch import optim
lambda1 = lambda epoch: epoch /10
optimizer = optim.SGD(model.parameters(), lr=0.1)
scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda1)
</code></pre>
<p>As long as the lambda function is simple I can handle it fine. But I found myself in need of using a normal function instead and I passed the function name as parameter.</p>
<pre><code>def func1(epoch, base=10):
return epoch/base
scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=func1)
</code></pre>
<p>which does the same thing exactly as the lambda function above but it can be more easily expanded to a complex function (as I needed it to be). The problem is I do not know how to actually pass the parameter (<em>base</em> in this case) to the function (<em>optim.lr_scheduler.LambdaLR</em> in this case) accepting the lambda function.</p>
<p>Is there a way to pass this parameter, in other words to pass <em>base</em> to <em>optim.lr_scheduler.LambdaLR</em> through <em>func1</em>?</p>
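<p>A sketch of one standard-library approach (the <code>base=5</code> value is illustrative): <code>functools.partial</code> binds <em>base</em> up front and returns the one-argument callable that <code>lr_lambda</code> expects.</p>

```python
from functools import partial

def func1(epoch, base=10):
    return epoch / base

# Bind base ahead of time; the result takes only `epoch`,
# which is the shape LambdaLR expects for lr_lambda.
lr_lambda = partial(func1, base=5)

# scheduler = optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_lambda)
print(lr_lambda(10))  # 2.0
```

<p>A plain closure, <code>lambda epoch: func1(epoch, base=5)</code>, works the same way.</p>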
| <python><lambda> | 2023-10-31 20:36:51 | 1 | 5,743 | Eypros |
77,398,915 | 14,751,691 | Python pandas - want to use values in both cols hitting road block | <p>I am trying to use both values, T and V, from a 2-column text file (iterating over all rows) to pass to other functions.</p>
<p><strong>SourceFile</strong></p>
<pre><code>T1|V1
T2|V2
T3|V3
</code></pre>
<p><strong>Output</strong></p>
<pre><code>T1:1 V1
Name: T1, dtype: object
T2:1 V2
Name: T1, dtype: object
T3:1 V3
Name: T3, dtype: object
</code></pre>
<p><strong>Code</strong></p>
<pre><code>import pandas as pd
dfcsv = pd.read_csv("mycsv.txt" , delimiter = '|', header=None, index_col=0)
df = pd.DataFrame(dfcsv)
for t,v in df.iterrows():
T = str(t)
V = str(v)
    print(T + ":" + V)
</code></pre>
| <python><pandas><dataframe><loops> | 2023-10-31 20:30:12 | 0 | 411 | irnerd |
77,398,903 | 5,522,303 | boto3: Get version ID and support callbacks on object upload | <p>I would like to upload a file to S3 with a progress callback as it uploads, and also get the version ID of the object I just uploaded.</p>
<p><a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3/client/put_object.html" rel="nofollow noreferrer"><code>client.put_object</code></a> returns the version ID, but doesn't have a callback parameter:</p>
<pre class="lang-py prettyprint-override"><code>client = boto3.client('s3')
response = client.put_object(...)
version_id = response['VersionId']
</code></pre>
<p><a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3/client/upload_file.html" rel="nofollow noreferrer"><code>client.upload_file</code></a> <em>does</em> have a callback parameter, but it doesn't return anything:</p>
<pre class="lang-py prettyprint-override"><code>client = boto3.client('s3')
client.upload_file(..., Callback=my_callback)
</code></pre>
<p>The only thing I can think of is to tag my objects with some unique value, then after uploading find the version of the object with that unique value tagged. But that sounds overly complicated for what I'm trying to do.</p>
<p>Is there some API call I'm missing?</p>
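<p>A hedged sketch (names illustrative, no real S3 call is made here): since <code>put_object</code> accepts any file-like <code>Body</code>, wrapping the stream lets you keep <code>put_object</code>'s response, including <code>VersionId</code>, while still observing progress. Whether boto3 reads the body in one call or in chunks depends on the transport, so progress granularity isn't guaranteed.</p>

```python
import io

class ProgressReader:
    """Minimal file-like wrapper that reports bytes read to a callback."""
    def __init__(self, fileobj, callback):
        self._f = fileobj
        self._cb = callback

    def read(self, size=-1):
        chunk = self._f.read(size)
        if chunk:
            self._cb(len(chunk))
        return chunk

# Usage sketch (not executed here; bucket/key/path are placeholders):
# response = client.put_object(Bucket=bucket, Key=key,
#                              Body=ProgressReader(open(path, "rb"), my_callback))
# version_id = response["VersionId"]

# Demonstration without S3: a 10-byte buffer read in 4-byte chunks.
seen = []
reader = ProgressReader(io.BytesIO(b"0123456789"), seen.append)
while reader.read(4):
    pass
print(seen)  # [4, 4, 2]
```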
| <python><boto3> | 2023-10-31 20:28:49 | 1 | 7,404 | Kevin |
77,398,891 | 7,994,685 | Langchain: Why does langchain agent return the action input instead of running it? | <p>I'm working with a Langchain agent.</p>
<p>I'm running into a phenomenon: after small talk with the chatbot in which the user provides it with parameters to execute a function (set as a tool), instead of running the function, the agent returns the action input to the user.</p>
<p>Any ideas why this might be happening and how to fix it?</p>
<p>Here is the agent type I am running:</p>
<pre><code> agent_executor = initialize_agent(
tools,
llm,
agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
memory=memory,
return_intermediate_steps=False,
agent_kwargs = {
'prefix': Prompt_Prefix,
"memory_prompts": [chat_history],
"input_variables": ["input", "agent_scratchpad", "chat_history"]
}
)
</code></pre>
| <python><langchain> | 2023-10-31 20:26:26 | 1 | 1,328 | Sharif Amlani |
77,398,737 | 29,347 | How can I get pandas read_csv converters to set the type on parsing? | <p>Given a CSV like:</p>
<pre><code>foo,geo,bar
1,{"country": "SG"},{}
2,{"country": "US"},{"id": "1234"}
</code></pre>
<p>I want to parse the JSON into a specific type, not just <code>object</code>. My type:</p>
<pre class="lang-py prettyprint-override"><code>geo_type = pd.ArrowDtype(pyarrow.struct([
('country', pyarrow.dictionary(pyarrow.uint8(), pyarrow.utf8()))
]))
</code></pre>
<p>This doesn't work:</p>
<pre class="lang-py prettyprint-override"><code>pd.read_csv(data,
converters={'geo': lambda v: pd.Series([json.loads(v)], dtype=geo_type)})
</code></pre>
<p>This does:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.read_csv(data, converters={'geo': json.loads})
df['geo'] = df['geo'].astype(geo_type)
</code></pre>
<p>What am I missing? Loading everything as Python dicts and then converting them is really heavy on memory compared to parsing things into the correct types from the start.</p>
| <python><pandas> | 2023-10-31 19:53:23 | 0 | 37,295 | Kit Sunde |
77,398,627 | 5,091,720 | Pandas worksheet.add_table AttributeError: 'int' object has no attribute 'lower' | <p>I created a function to format several dataframes the same way, placing them into the same Excel workbook.</p>
<p>The error is:</p>
<pre><code>Traceback (most recent call last):
File "f:\...\weather_WPC.py", line 282, in <module>
    frame_to_excel(writer, exl = NS_parts, df=dfsp)
File "f:\...\df_to_excel.py", line 41, in frame_to_excel
worksheet.add_table(0, 0, df.shape[0] , df.shape[1] - 1, {'columns': [{'header': column} for column in df.columns], 'name': exl['table_name']})
File "C:\...\Python\Python310\lib\site-packages\xlsxwriter\worksheet.py", line 125, in cell_wrapper
return method(self, *args, **kwargs)
File "C:\...\Python\Python310\lib\site-packages\xlsxwriter\worksheet.py", line 3329, in add_table
name = header_name.lower()
AttributeError: 'int' object has no attribute 'lower'
</code></pre>
<p>When I copy the dataframe with <code>.to_clipboard()</code>, it looks fine. I decided to try <code>pd.read_clipboard()</code> to see if the error occurs there.</p>
<pre class="lang-py prettyprint-override"><code>file = 'F:/.../testing_pd_to_excel.xlsx'
writer = pd.ExcelWriter(file, engine='xlsxwriter', date_format='m/d/yyyy')
dfb = pd.read_clipboard()
exl = {'sht': 'Test_B', 'table_name': 'New_B',}
frame_to_excel(writer, exl = exl, df=dfb)
</code></pre>
<p>The error did not occur.</p>
<p>The code:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
def frame_to_excel(writer, exl, df):
'''
This function takes a DataFrame and exports it to an Excel file with specific formatting.
'''
# Write the DataFrame data to the Excel writer.
df.to_excel(writer, sheet_name=exl['sht'], startrow=1, header=False, index=False)
worksheet = writer.sheets[exl['sht']]
worksheet.add_table(0, 0, df.shape[0] , df.shape[1] - 1, {'columns': [{'header': column} for column in df.columns], 'name': exl['table_name']})
# because of the error above, the rest of the formatting code does not execute.
</code></pre>
<p>Here is some sample data</p>
<pre><code> data = {
"Rainday": [1, 2, 3, "...", 31, 32, 33, 34],
"Day_O_M": [1, 2, 3, "...", 31, 1, 2, 3],
"Month": ["Oct", None, None, None, None, "Nov", None, None],
"Average": [0.01, 0.03, 0.08, "...", 2.85, 3.39, 3.56, 3.71],
"Driest": [0, 0, 0, "...", 1.01, 1.2, 1.26, 1.31],
"Wettest Actual": [0, 0, 0.3, "...", 11.93, 12.57, 12.69, 12.69],
"Wettest": [0, 0, 0.27, "...", 10.92, 11.37, 11.43, 11.38],
"2023": [0, 0, 0.01, "...", 0.01, 0, 0.93, 1.16],
"2024": [0, 0, 0.03, "...", 0.77, 0, 0, 0],
"Pct Yr-end": [100, 100, 175, "...", 27, 0, 0, 0],}
</code></pre>
<p>Any ideas on how to test or remedy this problem are appreciated.</p>
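<p>A guess worth testing (hedged: the sample dict above has string keys, but the real frame may not): <code>add_table</code> calls <code>.lower()</code> on each header, so an <code>int</code> column label such as <code>2023</code> would raise exactly this <code>AttributeError</code>. Coercing the labels to strings before building the column spec sidesteps it:</p>

```python
import pandas as pd

# Illustrative frame with int year columns, as read_excel/pivoting can produce.
df = pd.DataFrame({"Rainday": [1, 2], 2023: [0.0, 0.93], 2024: [0.77, 0.0]})

# add_table needs string headers; coerce before building the column spec.
columns = [{"header": str(col)} for col in df.columns]
print(columns)  # [{'header': 'Rainday'}, {'header': '2023'}, {'header': '2024'}]
```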
| <python><pandas><excel><dataframe> | 2023-10-31 19:30:46 | 1 | 2,363 | Shane S |
77,398,493 | 3,649,629 | Where is an inputShape in tf.keras.layers.Conv2D()? | <p>I am writing a simple CNN model for classic digit recognition task. Part of my code looks like this:</p>
<pre><code>model = tf.keras.models.Sequential()
image_width = 28
image_height = 28
image_channels = 1
model.add(layers.Conv2D(
8,
5,
inputShape = [image_width, image_height, image_channels],
strides = 1,
activation = 'relu',
kernelInitializer = 'varianceScaling'
));
</code></pre>
<p>where imports are:</p>
<pre><code>import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers
</code></pre>
<p>Currently, this model layer is marked as wrong by the interpreter, with the error message <code>keyword argument is not understood: inputShape</code>.</p>
<p>I am refering this codelab: <a href="https://codelabs.developers.google.com/codelabs/tfjs-training-classfication/index.html#5" rel="nofollow noreferrer">https://codelabs.developers.google.com/codelabs/tfjs-training-classfication/index.html#5</a> where <code>inputShape</code> is presented, but on another hand the latest tf docs indeed don't have this argument <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D</a>.</p>
<p>There are many breaking changes in API over passed 5 years, so existing answers on SO or other resources don't really bridge this gap in docs and I have to ask for your help with this issue.</p>
<p>Could you suggest where is this parameter go and how the <code>Conv2D</code> layer should look like without <code>inputShape</code>?</p>
<p>Thanks!</p>
| <python><tensorflow><keras><deep-learning> | 2023-10-31 19:02:53 | 1 | 7,089 | Gleichmut |
77,398,449 | 4,830,062 | Unit Test Imported Modules when A Global Function Call will Fail | <p>I am trying to mock a session that is created at import time. I have two files:</p>
<p>/lib/module.py</p>
<pre><code>import boto3
session = boto3.session()
</code></pre>
<p>/tests/test.py</p>
<pre><code>from lib.module import test_func
def test_function():
assert test_func()
</code></pre>
<p>My testing environment fails on import from the session call without credentials. How can I mock the import/session to prevent failure in the testing environment?</p>
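<p>A sketch of one approach, mirroring the <code>boto3.session()</code> call shown above (real boto3 spells this <code>boto3.session.Session()</code>, so adjust to your actual code): plant a fake <code>boto3</code> module in <code>sys.modules</code> before the first import of <code>lib.module</code>, so the module-level session call hits a mock instead of AWS.</p>

```python
import sys
import types
from unittest import mock

# Fake boto3 whose session() never touches AWS credentials.
fake_boto3 = types.ModuleType("boto3")
fake_boto3.session = mock.MagicMock(name="session")

# Must happen before the first `import lib.module` anywhere in the test run
# (e.g. in conftest.py), since the session is created at import time.
sys.modules["boto3"] = fake_boto3

import boto3               # resolves to the fake module
session = boto3.session()  # what lib/module.py executes on import

print(fake_boto3.session.called)  # True
```

<p>With pytest, doing this in <code>conftest.py</code>, or via <code>mock.patch.dict(sys.modules, ...)</code>, keeps the substitution scoped to the test session.</p>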
| <python><unit-testing><testing><python-import> | 2023-10-31 18:54:06 | 1 | 489 | J_Heads |
77,398,366 | 1,044,604 | How can I find the python client library for a gcloud command? | <p>To write a Python script to collect various details about all projects deployed in the organization, how can we find which shared-vpc host project/network is being used?</p>
<p>There's a <strong>gcloud</strong> command that provides details</p>
<pre class="lang-bash prettyprint-override"><code>gcloud compute shared-vpc get-host-project strong-bad-2g0v
## provides results
kind: compute#project
name: shared-vpc-host-52ey86a9
</code></pre>
<p>However, I want to use Python for the script, so I followed the docs on how to use the Python Client: searching the <strong>Google Cloud Python Client</strong> <a href="https://github.com/googleapis/google-cloud-python" rel="nofollow noreferrer">repo</a> for "shared-vpc" or "vpc" doesn't provide relevant results, and looking under /compute_v1 (assuming a structure similar to <strong>gcloud</strong>) doesn't show results for "shared-vpc" or "vpc" either.</p>
| <python><google-cloud-platform><gcloud><client-library> | 2023-10-31 18:38:58 | 1 | 877 | codeangler |
77,398,362 | 10,377,244 | Polars - Window function with condition | <p>Given this DataFrame:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(
{'id':['1']*3 + ['2']*3,
'actions': [0,1,2, 0, 1, 2],
'campaign_name': [None,None, '2', '1', '2', None] ,
'event_time': [1,2,3,0,1,2],
'session_id':['session_1']*6}
)
</code></pre>
<pre><code>┌─────┬─────────┬───────────────┬────────────┬────────────┐
│ id  ┆ actions ┆ campaign_name ┆ event_time ┆ session_id │
│ --- ┆ ---     ┆ ---           ┆ ---        ┆ ---        │
│ str ┆ i64     ┆ str           ┆ i64        ┆ str        │
╞═════╪═════════╪═══════════════╪════════════╪════════════╡
│ 1   ┆ 0       ┆ null          ┆ 1          ┆ session_1  │
│ 1   ┆ 1       ┆ null          ┆ 2          ┆ session_1  │
│ 1   ┆ 2       ┆ 2             ┆ 3          ┆ session_1  │
│ 2   ┆ 0       ┆ 1             ┆ 0          ┆ session_1  │
│ 2   ┆ 1       ┆ 2             ┆ 1          ┆ session_1  │
│ 2   ┆ 2       ┆ null          ┆ 2          ┆ session_1  │
└─────┴─────────┴───────────────┴────────────┴────────────┘
</code></pre>
<p>I want to get the min/max <code>event_time</code> for each combined group of <code>session_id</code> and <code>id</code>, under a condition on <code>campaign_name</code>.</p>
<p>Here is my code.</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(
pl.when(pl.col('campaign_name').is_in(['1','2']))
.then(
pl.col('actions').min().over('session_id', 'id').alias('min_action')
),
pl.when(pl.col('campaign_name').is_in(['1','2']))
.then(
pl.col('actions').max().over('session_id', 'id').alias('max_action')
)
)
</code></pre>
<p>But the results are not as expected.</p>
<pre><code>shape: (6, 7)
┌─────┬─────────┬───────────────┬────────────┬────────────┬────────────┬────────────┐
│ id  ┆ actions ┆ campaign_name ┆ event_time ┆ session_id ┆ min_action ┆ max_action │
│ --- ┆ ---     ┆ ---           ┆ ---        ┆ ---        ┆ ---        ┆ ---        │
│ str ┆ i64     ┆ str           ┆ i64        ┆ str        ┆ i64        ┆ i64        │
╞═════╪═════════╪═══════════════╪════════════╪════════════╪════════════╪════════════╡
│ 1   ┆ 0       ┆ null          ┆ 1          ┆ session_1  ┆ null       ┆ null       │
│ 1   ┆ 1       ┆ null          ┆ 2          ┆ session_1  ┆ null       ┆ null       │
│ 1   ┆ 2       ┆ 2             ┆ 3          ┆ session_1  ┆ 0          ┆ 2          │
│ 2   ┆ 0       ┆ 1             ┆ 0          ┆ session_1  ┆ 0          ┆ 2          │
│ 2   ┆ 1       ┆ 2             ┆ 1          ┆ session_1  ┆ 0          ┆ 2          │
│ 2   ┆ 2       ┆ null          ┆ 2          ┆ session_1  ┆ null       ┆ null       │
└─────┴─────────┴───────────────┴────────────┴────────────┴────────────┴────────────┘
</code></pre>
<p>The code works in the opposite direction.</p>
<p>The code calculates the min/max over all the values instead of only over the values filtered by the condition.</p>
<p>Expected results:</p>
<pre><code>shape: (6, 7)
┌─────┬─────────┬───────────────┬────────────┬────────────┬────────────┬────────────┐
│ id  ┆ actions ┆ campaign_name ┆ event_time ┆ session_id ┆ min_action ┆ max_action │
│ --- ┆ ---     ┆ ---           ┆ ---        ┆ ---        ┆ ---        ┆ ---        │
│ str ┆ i64     ┆ str           ┆ i64        ┆ str        ┆ i64        ┆ i64        │
╞═════╪═════════╪═══════════════╪════════════╪════════════╪════════════╪════════════╡
│ 1   ┆ 0       ┆ null          ┆ 1          ┆ session_1  ┆ 2          ┆ 2          │
│ 1   ┆ 1       ┆ null          ┆ 2          ┆ session_1  ┆ 2          ┆ 2          │
│ 1   ┆ 2       ┆ 2             ┆ 3          ┆ session_1  ┆ 2          ┆ 2          │
│ 2   ┆ 0       ┆ 1             ┆ 0          ┆ session_1  ┆ 0          ┆ 1          │
│ 2   ┆ 1       ┆ 2             ┆ 1          ┆ session_1  ┆ 0          ┆ 1          │
│ 2   ┆ 2       ┆ null          ┆ 2          ┆ session_1  ┆ 0          ┆ 1          │
└─────┴─────────┴───────────────┴────────────┴────────────┴────────────┴────────────┘
</code></pre>
| <python><dataframe><window-functions><python-polars> | 2023-10-31 18:38:16 | 1 | 1,127 | MPA |
77,398,225 | 1,072,625 | Google Cloud Compute Python Library package naming confusion | <p>In the Google Compute Python library we seem to have 2 packages, shown in this screenshot:
<a href="https://i.sstatic.net/UXLHV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UXLHV.png" alt="GCP Python Client GitHub" /></a>.</p>
<p>When importing the various modules, should we refer to <code>compute</code> or <code>compute_v1</code>? It seems <code>compute</code> internally refers to <code>compute_v1</code>. Why was this structure introduced, and what would be the right/consistent way to import the modules? Most of the examples on the internet and in the GCP documentation use <code>compute_v1</code>.</p>
<p>e.g.</p>
<pre class="lang-py prettyprint-override"><code>from google.cloud import compute_v1
OR
from google.cloud import compute
</code></pre>
| <python><google-cloud-platform><google-cloud-compute-engine> | 2023-10-31 18:10:47 | 0 | 316 | Neeraj |
77,398,137 | 471,136 | Adding qfrm fails | <p>I am trying to run this simple piece of code (implementing Excel's <code>YIELD()</code> using the <code>qfrm</code> library) in Python using <code>poetry</code>:</p>
<pre class="lang-py prettyprint-override"><code>from qfrm import Bond
def excel_yield(price, par, rate, years, freq):
return Bond(par=par, T=years, freq=freq, cpn=rate, yld=None, price=price).yield_to_maturity()
print(excel_yield(price=62.254, par=100, rate=1.875/100, years=18, freq=2))
</code></pre>
<p>I did <code>poetry add qfrm</code> and I got this <code>pyproject.toml</code> file:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry]
name = "fixed-income-annuity"
version = "0.1.0"
description = ""
authors = ["pathikrit <pathikritbhowmick@msn.com>"]
readme = "README.md"
[tool.poetry.dependencies]
python = "^3.9"
python-dateutil = "^2.8.2"
qfrm = "^0.2.0.27"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p>Now when I run using <code>poetry run python main.py</code>, I get the following error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "/Users/pbhowmick/workspace/fixed_income_annuity/main.py", line 1, in <module>
from qfrm.Bond import *
File "/Users/pbhowmick/Library/Caches/pypoetry/virtualenvs/fixed-income-annuity-qEcOl1hQ-py3.9/lib/python3.9/site-packages/qfrm/__init__.py", line 7, in <module>
from .Util import *
File "/Users/pbhowmick/Library/Caches/pypoetry/virtualenvs/fixed-income-annuity-qEcOl1hQ-py3.9/lib/python3.9/site-packages/qfrm/Util.py", line 2, in <module>
import yaml
ModuleNotFoundError: No module named 'yaml'
</code></pre>
<p>Ok, so I need <code>yaml</code>? Why didn't poetry add it for me when I added <code>qfrm</code>?</p>
<p>I went and did this then:</p>
<pre class="lang-bash prettyprint-override"><code>poetry add pyyaml
</code></pre>
<p>But, now when I do: <code>poetry run python main.py</code> I get the following error:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "/Users/pbhowmick/workspace/fixed_income_annuity/main.py", line 1, in <module>
from qfrm import Bond
File "/Users/pbhowmick/Library/Caches/pypoetry/virtualenvs/fixed-income-annuity-qEcOl1hQ-py3.9/lib/python3.9/site-packages/qfrm/__init__.py", line 8, in <module>
from .Options import *
ModuleNotFoundError: No module named 'qfrm.Options'
</code></pre>
<p>Why? Didn't I just install <code>qfrm</code>?</p>
<p>Note: There is nothing wrong with my poetry setup because adding some other package (<code>python-dateutil</code>) works fine and when I import and use it.</p>
| <python><python-poetry> | 2023-10-31 17:52:58 | 1 | 33,697 | pathikrit |
77,398,093 | 10,243,689 | QML application does not receive signals from python backend | <p>I have tried a minimal example of using Python as the backend for the QML application. But weirdly, the signals from the backend have not been received in the QML application.</p>
<p>The QML code is as below:</p>
<pre><code>import QtQuick 2.15
import QtQuick.Window 2.15
Window {
id: mainWindow
width: 640
height: 480
visible: true
title: qsTr("Hello World")
property QtObject backend
property string text
Connections{
target: backend
function onSig(msg){
mainWindow.text = msg
console.log(msg)
}
}
Rectangle{
id: content
anchors.fill: parent
Text {
id: textField
anchors.centerIn: parent
color: "white"
text: mainWindow.text
}
}
}
</code></pre>
<p>The code for Python backend is:</p>
<pre><code># This Python file uses the following encoding: utf-8
import sys
from pathlib import Path
from PySide2.QtGui import QGuiApplication
from PySide2.QtQml import QQmlApplicationEngine
from PySide2.QtCore import Signal, QObject
class backEnd(QObject):
sig = Signal(str)
def __init__(self):
super().__init__()
self._test()
return
def _test(self):
colors = ["blue", "green", "red", "white", "black"]
for color in colors:
self.sig.emit(color)
return
if __name__ == "__main__":
app = QGuiApplication(sys.argv)
backend = backEnd()
engine = QQmlApplicationEngine()
engine.rootContext().setContextProperty("backend", backend)
qml_file = Path(__file__).resolve().parent / "main.qml"
engine.load(str(qml_file))
if not engine.rootObjects():
sys.exit(-1)
sys.exit(app.exec_())
</code></pre>
<p>I have read several tutorials on how to connect a Python backend to a QML application, and all of them do it the way I did it here. So this behavior is weird to me, and I don't know what I should do.</p>
| <python><qt><qml> | 2023-10-31 17:45:37 | 0 | 621 | AMIR REZA SADEQI |
77,397,891 | 5,001,448 | Linearly interpolating in L*a*b* space yields negative RGB values? Is RGB's embedding within L*a*b* non-convex? | <p>I am trying to linearly interpolate between colors in CIELAB space starting out and retrieving colors in a linear RGB space.</p>
<p>My workflow is the following:</p>
<ol>
<li>Take 2 colors in linear RGB, let's say (1,0,0) and (0,0,1).</li>
<li>Convert those to XYZ space. I am using <a href="https://en.wikipedia.org/wiki/CIE_1931_color_space#Construction_of_the_CIE_XYZ_color_space_from_the_Wright%E2%80%93Guild_data" rel="nofollow noreferrer">this matrix</a>. This should be the matrix for RGB with the E-whitepoint.</li>
<li>Convert those to L*a*b* using the <a href="https://en.wikipedia.org/wiki/CIELAB_color_space#From_CIEXYZ_to_CIELAB" rel="nofollow noreferrer">formulas on
wikipedia</a>. For Xn, Yn, Zn, I am using the corresponding XYZ coordinates of RGB(1,1,1), which is (1,1,1) for this matrix. The coordinates are thus scaled so that the whitepoint has a Y of 1.0.</li>
<li>Linearly interpolate component-wise between these two
L*a*b* elements.</li>
<li>Convert the interpolated points to RGB by reversing
the above process (using the inverse transform from wikipedia, as well as the inverse matrix for XYZ->RGB.</li>
</ol>
<p>The problem is, these intermediate colors are outside my RGB space, even though I obviously start out with 2 colors that are inside it. To be specific: the G channel acquires negative values of about -0.058 when interpolating between RGB(1,0,0) and RGB(0,0,1).</p>
<p>If I use the primaries and whitepoint, i.e. the <a href="https://en.wikipedia.org/wiki/SRGB#From_sRGB_to_CIE_XYZ" rel="nofollow noreferrer">matrix of linear sRGB</a> instead, the same problem occurs.</p>
<p>Is this normal? It seems plausible to me that the L*a*b* color space might be such that there's no guarantee that the direct line between two RGB-representable colors is also within that representable region. If it's not normal, where could the error be? If it is normal, how should I deal with it?</p>
<p>My code:</p>
<pre><code>import numpy as np
mat = np.array([[0.49,0.31,0.2],[0.17697,0.81240,0.01063],[0.,0.01,0.99]])
def RGB_to_Lab(R,G,B):
X,Y,Z = (mat@[[R],[G],[B]])[:,0]
Xn, Yn, Zn = (mat@[[1],[1],[1]])[:,0]
f = lambda t: t**(1/3) if t>(6/29)**3 else (1/3)*(29/6)**2*t+4/29
L = 116*f(Y/Yn) - 16
a = 500*(f(X/Xn)-f(Y/Yn))
b = 200*(f(Y/Yn)-f(Z/Zn))
return L, a, b
def Lab_to_RGB(L,a,b):
f = lambda t:t**3 if t>6/29 else 3*(6/29)**2*(t-4/29)
Xn, Yn, Zn = (mat@[[1],[1],[1]])[:,0]
X = Xn*f((L+16)/116+a/500)
Y = Yn*f((L+16)/116)
Z = Zn*f((L+16)/116-b/200)
R,G,B = (np.linalg.inv(mat)@[[X],[Y], [Z]])[:,0]
return R,G,B
def steps(n):
Lab = np.array(RGB_to_Lab(1,0,0))
Lab2 = np.array(RGB_to_Lab(0,0,1))
for i in range(n+1):
r = i/n
tmp = r*Lab2+(1-r)*Lab
print(Lab_to_RGB(*tmp))
</code></pre>
| <python><colors><rgb><color-space><cielab> | 2023-10-31 17:11:38 | 1 | 2,123 | JMC |
77,397,854 | 4,309,334 | python change group to each folder in a tree | <p>I have a directory structure as this:</p>
<p><code>/app/<app_name>/server/logs</code></p>
<p>I need to give <code>user_a</code> read-only access to <code>logs</code> and all the files in it.</p>
<p>I created <code>user_a</code> with a group <code>group_a</code>, and I now have to change the group on each folder leading to <code>logs</code> so that <code>user_a</code> can traverse the directory tree and reach the logs.</p>
<p>In all the folders leading to <code>logs</code>, I then need to change the group owner and give the group r-x permission.</p>
<p>One challenge I am encountering: only <code>logs</code> folders that are under a <code>server</code> folder should be changed.</p>
<p>I managed to change the permissions on the parent directory of the <code>server</code> directory, but I am wondering how to descend into the deeper directories until I reach <code>logs</code>.</p>
<pre><code>#!/usr/bin/python
import grp
import pwd
import os
import stat
gid = grp.getgrnam('group_a').gr_gid
for root,dirs,files in os.walk('/app'):
for name in dirs:
if name.startswith('server'):
os.chown(os.path.dirname(os.path.join(root,name)), -1, gid)
os.chmod(os.path.dirname(os.path.join(root,name)), stat.S_IXGRP )
</code></pre>
<p><code>os.chmod</code> removed every other permission instead of just adding read and execute for the group. How could I achieve this?</p>
<p>Unfortunately <code>pathlib</code> is not available, so I am looking for a legacy method; the server is too old to run Python > 3.4. We will upgrade next year.</p>
<p>There are about 100 directories to change, which is why I am looking to automate the task.</p>
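<p>A sketch of the permission part (the temp directory stands in for a real <code>/app/&lt;app_name&gt;/server</code> path): read the current mode with <code>os.stat</code> and OR the group bits into it, so nothing already set is clobbered.</p>

```python
import os
import stat
import tempfile

path = tempfile.mkdtemp()   # stand-in for a real server/logs directory
os.chmod(path, 0o700)       # start as rwx------

# Add group read+execute on top of whatever is already set,
# instead of replacing the whole mode as a bare chmod does.
mode = os.stat(path).st_mode
os.chmod(path, mode | stat.S_IRGRP | stat.S_IXGRP)

print(oct(stat.S_IMODE(os.stat(path).st_mode)))  # 0o750
```

<p>The same <code>mode | bits</code> pattern drops into the <code>os.walk</code> loop above in place of the bare <code>stat.S_IXGRP</code>.</p>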
| <python> | 2023-10-31 17:03:33 | 1 | 319 | danidar |
77,397,723 | 8,030,794 | How to find max value with shift and condition? | <p>I have <code>Dataframe</code> like this:</p>
<pre><code>Index A
0 3
1 2
2 5
3 4
4 1
5 2
6 7
7 3
8 1
</code></pre>
<p>I need to slide a 5-row window over the column and keep a value only where it is the maximum sitting at the center of its window; everything else becomes 0.</p>
<p>Result:</p>
<pre><code>Index A Res
0 3 0
1 2 0
2 5 5
3 4 0
4 1 0
5 2 0
6 7 7
7 3 0
8 1 0
</code></pre>
<p>How can I implement this using pandas methods?</p>
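<p>A sketch of one way (hedged: ties within a window would all be kept): compare each value against the max of the 5-row window centred on it, and zero out everything else. Incomplete edge windows produce NaN, which compares False and also becomes 0.</p>

```python
import pandas as pd

df = pd.DataFrame({"A": [3, 2, 5, 4, 1, 2, 7, 3, 1]})

# True only where the value equals the max of its centred 5-row window.
is_center_max = df["A"].rolling(5, center=True).max() == df["A"]
df["Res"] = df["A"].where(is_center_max, 0)
print(df["Res"].tolist())  # [0, 0, 5, 0, 0, 0, 7, 0, 0]
```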
| <python><pandas> | 2023-10-31 16:40:05 | 1 | 465 | Fresto |
77,397,697 | 11,621,983 | OpenCV ArUco marker detection | <p>I am attempting to detect an ArUco marker on my image, but OpenCV is not detecting the marker. I believe it may potentially be because of low resolution, but I'm not sure.</p>
<p>I used <a href="https://chev.me/arucogen/" rel="nofollow noreferrer">this tool</a> to generate a 4x4 ID.</p>
<p>The following is my code:</p>
<pre class="lang-py prettyprint-override"><code>aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_100)
def find_marker(img, x1, y1, x2, y2):
img = img[y1:y2, x1:x2]
img = imutils.resize(img, width=500)
cv2.imshow('result', img)
cv2.waitKey(0)
corners, ids, rej = cv2.aruco.detectMarkers(img, aruco_dict)
print(corners)
print(ids)
</code></pre>
<p>When running the code, this is the image that <code>cv2.imshow</code> is showing me:
<a href="https://i.sstatic.net/NoVvz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NoVvz.png" alt="enter image description here" /></a></p>
<p>As you can see, there is a lower resolution ArUco label, but it should still be readable. However, the code prints out <code>None</code> for the ids and an empty <code>()</code> for corners.</p>
| <python><opencv><aruco> | 2023-10-31 16:37:51 | 1 | 382 | unfestive chicken |
77,397,644 | 1,636,598 | How to filter a DataFrame to only include categories that are linked to specific values? | <p>I have a CSV dataset that looks like this:</p>
<pre><code>country,year,happiness
A,2008,3.72
A,2009,4.4
A,2011,3.83
B,2009,5.49
B,2010,5.27
B,2011,5.87
B,2012,5.51
C,2010,5.46
C,2011,5.32
D,2009,6.42
D,2010,6.44
D,2011,6.78
</code></pre>
<p>My goal is to filter it to <strong>only include countries that have the years 2009 and 2010 and 2011</strong>, and then to further filter it to <strong>only include the years 2009 and 2010 and 2011</strong>.</p>
<p>Thus, here is the desired result:</p>
<pre><code>country,year,happiness
B,2009,5.49
B,2010,5.27
B,2011,5.87
D,2009,6.42
D,2010,6.44
D,2011,6.78
</code></pre>
<p>Notice that <strong>countries A and C were excluded</strong> from the output because they didn't include all three years, and the <strong>2012 record for country B was also excluded</strong> from the output.</p>
<p>This pandas code does accomplish the result:</p>
<pre class="lang-py prettyprint-override"><code>y09 = df.loc[df['year'] == 2009, 'country']
y10 = df.loc[df['year'] == 2010, 'country']
y11 = df.loc[df['year'] == 2011, 'country']
countries = set(y09) & set(y10) & set(y11)
years = [2009, 2010, 2011]
filtered = df.loc[df['country'].isin(countries) & df['year'].isin(years), :]
</code></pre>
<p>However, <strong>I'm looking for a more elegant way to achieve the same result using pandas</strong>, such that the solution would easily scale to a much larger range of years (say 50 years).</p>
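<p>For comparison, here is a <code>groupby</code>-based attempt I have been experimenting with. It avoids the per-year variables and scales with the size of the year range, though I am not sure it is the most idiomatic (or fastest) option:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "country": ["A", "A", "A", "B", "B", "B", "B", "C", "C", "D", "D", "D"],
    "year": [2008, 2009, 2011, 2009, 2010, 2011, 2012, 2010, 2011, 2009, 2010, 2011],
    "happiness": [3.72, 4.4, 3.83, 5.49, 5.27, 5.87, 5.51, 5.46, 5.32, 6.42, 6.44, 6.78],
})

years = range(2009, 2012)  # would also work for, say, range(1970, 2020)
subset = df[df["year"].isin(years)]
# keep only countries that have every year in the range
filtered = subset.groupby("country").filter(
    lambda g: g["year"].nunique() == len(years)
)
```
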
| <python><pandas><dataframe> | 2023-10-31 16:29:13 | 6 | 6,120 | Kevin Markham |
77,397,507 | 7,055,769 | swagger not working after deploying to vercel | <p>When deploying my code I see this under the /docs endpoint (swagger)</p>
<p><a href="https://i.sstatic.net/Xcvu7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Xcvu7.png" alt="enter image description here" /></a></p>
<p><a href="https://python-django-project.vercel.app/docs" rel="nofollow noreferrer">https://python-django-project.vercel.app/docs</a> - link</p>
<p>My <code>settings.py</code>:</p>
<pre><code>from pathlib import Path
BASE_DIR = Path(__file__).resolve().parent.parent
SECRET_KEY = "django-insecure-rjo^&xj7pgft@ylezdg!)n_+(6k$22gme@&mxw_z!jymtv(z+g"
DEBUG = True
INSTALLED_APPS = [
"django.contrib.admin",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
"api.apps.ApiConfig",
"rest_framework_swagger",
"rest_framework",
"drf_yasg",
]
SWAGGER_SETTINGS = {
"SECURITY_DEFINITIONS": {
"basic": {
"type": "basic",
}
},
"USE_SESSION_AUTH": False,
}
REST_FRAMEWORK = {
"DEFAULT_PARSER_CLASSES": [
"rest_framework.parsers.JSONParser",
]
}
MIDDLEWARE = [
"django.middleware.security.SecurityMiddleware",
"django.contrib.sessions.middleware.SessionMiddleware",
"django.middleware.common.CommonMiddleware",
"django.middleware.csrf.CsrfViewMiddleware",
"django.contrib.auth.middleware.AuthenticationMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
"django.middleware.clickjacking.XFrameOptionsMiddleware",
]
ROOT_URLCONF = "app.urls"
TEMPLATES = [
{
"BACKEND": "django.template.backends.django.DjangoTemplates",
"DIRS": [],
"APP_DIRS": True,
"OPTIONS": {
"context_processors": [
"django.template.context_processors.debug",
"django.template.context_processors.request",
"django.contrib.auth.context_processors.auth",
"django.contrib.messages.context_processors.messages",
],
},
},
]
APPEND_SLASH = False
WSGI_APPLICATION = "app.wsgi.app"
# Normally these variables would be in .env files on the Vercel platform
DATABASES = {
"default": {
"ENGINE": "",
"URL": "",
"NAME": "",
"USER": "",
"PASSWORD": "",
"HOST": "",
"PORT": 00000,
}
}
ALLOWED_HOSTS = ["127.0.0.1", ".vercel.app", "localhost"]
# Password validation
# https://docs.djangoproject.com/en/4.2/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
{
"NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
},
{
"NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
},
{
"NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
},
{
"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
},
]
# Internationalization
# https://docs.djangoproject.com/en/4.2/topics/i18n/
LANGUAGE_CODE = "en-us"
TIME_ZONE = "UTC"
USE_I18N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/4.2/howto/static-files/
STATIC_URL = "static/"
# Default primary key field type
# https://docs.djangoproject.com/en/4.2/ref/settings/#default-auto-field
DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
</code></pre>
<p><code>wsgi.py</code></p>
<pre><code>import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "app.settings")
app = get_wsgi_application()
</code></pre>
<p><code>urls.py</code></p>
<pre><code>urlpatterns = [
path("", index),
path("docs", schema_view.with_ui("swagger", cache_timeout=0)),
]
</code></pre>
<p><code>vercel.json</code></p>
<pre><code>{
"builds": [
{
"src": "build_files.sh",
"use": "@vercel/static-build",
"config": {
"distDir": "staticfiles_build"
}
},
{
"src": "/app/wsgi.py",
"use": "@vercel/python",
"config": { "maxLambdaSize": "15mb" }
}
],
"routes": [
{
"src": "/(.*)",
"dest": "app/wsgi.py"
},
{
"src": "/static/(.*)",
"dest": "/static/$1"
}
]
}
</code></pre>
<p>Why am I getting this error in the browser?</p>
| <python><django><django-rest-framework><swagger><vercel> | 2023-10-31 16:08:14 | 1 | 5,089 | Alex Ironside |
77,397,466 | 7,318,488 | Polars Python select based on dtype pl.list | <p>Hi, I want to select the columns of a Polars DataFrame that have the dtype <a href="https://pola-rs.github.io/polars/py-polars/html/reference/api/polars.List.html" rel="nofollow noreferrer">list</a>.<br />
Selecting by dtype usually works fine with <code>df.select(pl.col(pl.Utf8))</code>.</p>
<p>However, for the list type this does not seem to work...</p>
<h2>MRE</h2>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({"foo": [[c] for c in
["100CT pen", "pencils 250CT", "what 125CT soever", "this is a thing"]]}
)
df
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>foo
list[str]
["100CT pen"]
["pencils 250CT"]
["what 125CT soever"]
["this is a thing"]
</code></pre>
<pre class="lang-py prettyprint-override"><code>
df.select(pl.col(pl.List))
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>shape: (0, 0)
</code></pre>
| <python><python-polars><data-wrangling> | 2023-10-31 16:02:07 | 1 | 1,840 | BjΓΆrn |
77,397,374 | 2,725,810 | Using index better than sequential scan when every hundredth row is needed, but only with explicit list of values | <p>I have a table (under RDS Postgres v. 15.4 instance <code>db.m7g.large</code>):</p>
<pre class="lang-sql prettyprint-override"><code>CREATE TABLE MyTable (
content_id integer,
part integer,
vector "char"[]
);
</code></pre>
<p>There is a B-Tree index on <code>content_id</code>. My data consists of 100M rows. There are 1M (0 .. 10^6-1) different values of <code>content_id</code>. For each value of <code>content_id</code> there are 100 (0..99) values of <code>part</code>. The column <code>vector</code> contains an array of 384 byte-size numbers if <code>content_id</code> is divisible by 100 without a remainder. It is <code>NULL</code> otherwise.</p>
<p>I have constructed this artificial data to test performance of the following query submitted from a Python script (it will become clear in a moment why I left it in Python for the question):</p>
<pre class="lang-py prettyprint-override"><code>query = f"""
WITH
Vars(key) as (
VALUES (array_fill(1, ARRAY[{384}])::vector)
),
Projection as (
SELECT *
FROM MyTable P
WHERE P.content_id in ({str(list(range(0, 999999, 100)))[1:-1]})
)
SELECT P.content_id, P.part
FROM Projection P, Vars
ORDER BY P.vector::int[]::vector <#> key
LIMIT {10};
"""
</code></pre>
<p><code><#></code> is the dot product operator of the <code>pgvector</code> extension, and <code>vector</code> is the type defined by that extension, which to my understanding is similar to <code>real[]</code>.</p>
<p>Note that the <code>WHERE</code> clause specifies an explicit list of 10K values of <code>content_id</code> (which correspond to 1M rows, i.e. every hundredth row in the table). Because of this large explicit list, I have to leave my query in Python and cannot run <code>EXPLAIN ANALYZE</code>.</p>
<p>The above query takes ~6 seconds to execute.</p>
<p>However, when I prepend this query with <code>SET enable_seqscan = off;</code> the query takes only ~3 seconds.</p>
<p><strong>Question 1:</strong> Given that we need every 100-th row and that much of computation is about computing the dot products and ordering by them, how come sequential scans are <em>not</em> better than using the index? (All the more so, I can't understand how using the index could result in an improvement by a factor of 2.)</p>
<p><strong>Question 2:</strong> How come this improvement disappears if I change the explicit list of values for <code>generate_series</code> as shown below?</p>
<pre><code>WHERE content_id IN (SELECT generate_series(0, 999999, 100))
</code></pre>
<p>Now, for this latter query I have the output for <code>EXPLAIN ANALYZE</code>:</p>
<pre><code> Limit (cost=1591694.63..1591695.46 rows=10 width=24) (actual time=6169.118..6169.125 rows=10 loops=1)
-> Result (cost=1591694.63..2731827.31 rows=13819790 width=24) (actual time=6169.117..6169.122 rows=10 loops=1)
-> Sort (cost=1591694.63..1626244.11 rows=13819790 width=424) (actual time=6169.114..6169.117 rows=10 loops=1)
Sort Key: ((((p.vector)::integer[])::vector <#> '[1,1,...,1]'::vector))
Sort Method: top-N heapsort Memory: 34kB
-> Nested Loop (cost=194.30..1293053.94 rows=13819790 width=424) (actual time=2.629..6025.693 rows=1000000 loops=1)
-> HashAggregate (cost=175.02..177.02 rows=200 width=4) (actual time=2.588..5.321 rows=10000 loops=1)
Group Key: generate_series(0, 999999, 100)
Batches: 1 Memory Usage: 929kB
-> ProjectSet (cost=0.00..50.02 rows=10000 width=4) (actual time=0.002..0.674 rows=10000 loops=1)
-> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.000..0.001 rows=1 loops=1)
-> Bitmap Heap Scan on mytable p (cost=19.28..4204.85 rows=1382 width=416) (actual time=0.007..0.020 rows=100 loops=10000)
Recheck Cond: (content_id = (generate_series(0, 999999, 100)))
Heap Blocks: exact=64444
-> Bitmap Index Scan on idx_content_on_mytable (cost=0.00..18.93 rows=1382 width=0) (actual time=0.005..0.005 rows=100 loops=10000)
Index Cond: (content_id = (generate_series(0, 999999, 100)))
Planning Time: 0.213 ms
Execution Time: 6169.260 ms
(18 rows)
</code></pre>
<p><strong>UPDATE</strong> @jjanes commented regarding my first question:</p>
<blockquote>
<p>Assuming your data is clustered on content_id, you need 100 consecutive rows out of every set of 10,000 rows. That is very different than needing every 100th row.</p>
</blockquote>
<p>If I understand correctly, this means that each of the 10K look-ups of the index returns a range rather than 100 individual rows. That range can be then scanned sequentially.</p>
<p>Following are the outputs of <code>EXPLAIN (ANALYZE, BUFFERS)</code> for all three queries:</p>
<ol>
<li>The original query:</li>
</ol>
<pre><code> Limit (cost=1430170.64..1430171.81 rows=10 width=16) (actual time=6300.232..6300.394 rows=10 loops=1)
Buffers: shared hit=55868 read=436879
I/O Timings: shared/local read=1027.617
-> Gather Merge (cost=1430170.64..2773605.03 rows=11514348 width=16) (actual time=6300.230..6300.391 rows=10 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=55868 read=436879
I/O Timings: shared/local read=1027.617
-> Sort (cost=1429170.62..1443563.55 rows=5757174 width=16) (actual time=6291.083..6291.085 rows=8 loops=3)
Sort Key: ((((p.vector)::integer[])::vector <#> '[1,1,...,1]'::vector))
Sort Method: top-N heapsort Memory: 25kB
Buffers: shared hit=55868 read=436879
I/O Timings: shared/local read=1027.617
Worker 0: Sort Method: top-N heapsort Memory: 25kB
Worker 1: Sort Method: top-N heapsort Memory: 25kB
-> Parallel Seq Scan on mytable p (cost=25.00..1304760.16 rows=5757174 width=16) (actual time=1913.156..6237.441 rows=333333 loops=3)
Filter: (content_id = ANY ('{0,100,...,999900}'::integer[]))
Rows Removed by Filter: 33000000
Buffers: shared hit=55754 read=436879
I/O Timings: shared/local read=1027.617
Planning:
Buffers: shared hit=149
Planning Time: 8.444 ms
Execution Time: 6300.452 ms
(24 rows)
</code></pre>
<ol start="2">
<li>The query with <code>SET enable_seqscan = off;</code></li>
</ol>
<pre><code> Limit (cost=1578577.14..1578578.31 rows=10 width=16) (actual time=3121.539..3123.430 rows=10 loops=1)
Buffers: shared hit=95578
-> Gather Merge (cost=1578577.14..2922011.54 rows=11514348 width=16) (actual time=3121.537..3123.426 rows=10 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=95578
-> Sort (cost=1577577.12..1591970.05 rows=5757174 width=16) (actual time=3108.995..3108.997 rows=9 loops=3)
Sort Key: ((((p.vector)::integer[])::vector <#> '[1,1,...,1]'::vector))
Sort Method: top-N heapsort Memory: 25kB
Buffers: shared hit=95578
Worker 0: Sort Method: top-N heapsort Memory: 25kB
Worker 1: Sort Method: top-N heapsort Memory: 25kB
-> Parallel Bitmap Heap Scan on mytable p (cost=184260.30..1453166.66 rows=5757174 width=16) (actual time=42.277..3057.887 rows=333333 loops=3)
Recheck Cond: (content_id = ANY ('{0,100,...,999900}'::integer[]))
Buffers: shared hit=40000
Planning:
Buffers: shared hit=149
Planning Time: 8.591 ms
Execution Time: 3123.638 ms
(23 rows)
</code></pre>
<ol start="3">
<li>Like 2, but with <code>generate_series</code>:</li>
</ol>
<pre><code> Limit (cost=1591694.63..1591694.66 rows=10 width=16) (actual time=6155.109..6155.114 rows=10 loops=1)
Buffers: shared hit=104447
-> Sort (cost=1591694.63..1626244.11 rows=13819790 width=16) (actual time=6155.107..6155.111 rows=10 loops=1)
Sort Key: ((((p.vector)::integer[])::vector <#> '[1,1,...,1]'::vector))
Sort Method: top-N heapsort Memory: 25kB
Buffers: shared hit=104447
-> Nested Loop (cost=194.30..1293053.94 rows=13819790 width=16) (actual time=2.912..6034.798 rows=1000000 loops=1)
Buffers: shared hit=104444
-> HashAggregate (cost=175.02..177.02 rows=200 width=4) (actual time=2.870..5.484 rows=10000 loops=1)
Group Key: generate_series(0, 999999, 100)
Batches: 1 Memory Usage: 929kB
-> ProjectSet (cost=0.00..50.02 rows=10000 width=4) (actual time=0.002..0.736 rows=10000 loops=1)
-> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.000..0.001 rows=1 loops=1)
-> Bitmap Heap Scan on mytable p (cost=19.28..4204.85 rows=1382 width=416) (actual time=0.007..0.020 rows=100 loops=10000)
Recheck Cond: (content_id = (generate_series(0, 999999, 100)))
Heap Blocks: exact=64444
Buffers: shared hit=104444
-> Bitmap Index Scan on idx_content_on_mytable (cost=0.00..18.93 rows=1382 width=0) (actual time=0.005..0.005 rows=100 loops=10000)
Index Cond: (content_id = (generate_series(0, 999999, 100)))
Buffers: shared hit=40000
Planning:
Buffers: shared hit=180
Planning Time: 1.012 ms
Execution Time: 6155.251 ms
(24 rows)
</code></pre>
| <python><postgresql><query-optimization><amazon-rds><database-indexes> | 2023-10-31 15:50:09 | 2 | 8,211 | AlwaysLearning |
77,397,234 | 8,030,746 | How to get multiple elements of text from one div with BeautifulSoup? | <p>I'm using Beautiful Soup, together with Flask, to scrape and show elements on a page. However, I'm having some trouble understanding what I need to do to get multiple items from a single div, and how to write it in a for loop, to show up properly in a HTML file. I'll explain it much better with code.</p>
<p>This is the HTML layout I'm attempting to scrape:</p>
<pre><code><div class="card card-job">
<div class="container">
<div class="row">
<div class="col-12">
<div class="card-body">
<h2 class="card-title"><a class="stretched-link js-view-job" href="#">Associate Account Manager
[MedTech]</a></h2>
<div class="card-job-actions js-job" data-id="2306150237w"
data-jobtitle="Associate Account Manager [MedTech]">
<button class="btn-add-job " aria-label="Save Associate Account Manager [MedTech]" title="Save">
<svg class="icon-sprite">
<use xlink:href="/images/sprite.svg"></use>
</svg>
<span class="sr-only">Save</span>
</button>
<button class="btn-remove-job d-none" aria-label="Remove Associate Account Manager [MedTech]"
hidden="" title="Remove">
<svg class="icon-sprite">
<use xlink:href="/images/sprite.svg"></use>
</svg>
<span class="sr-only">Saved</span>
</button>
</div>
<ul class="list-inline job-meta">
<li class="list-inline-item">Sales - Selling MDD</li>
<li class="list-inline-item">Mongkok, China</li>
</ul>
</div>
</div>
</div>
</div>
</div>
</code></pre>
<p>And I need two things from it:</p>
<ol>
<li>The card title text, inside h2, under the <code>card-title</code> class, and</li>
<li>The description text, inside the list, under the <code>job-meta</code> class.</li>
</ol>
<p>I can get them individually, without any issues. For example, for card title:</p>
<pre><code>job_title = job.find_all("h2", {"class": "card-title"})
jobs = [i.get_text() for i in jobs_title]
@app.route('/')
def home():
return render_template('home.html', jobs=jobs)
</code></pre>
<p>And then I write a for loop inside my HTML file, that gets the list of job titles:</p>
<pre><code><div class="jobs">
{% for job in jobs %}
<h3>{{ job }}</h3>
{% endfor %}
</div>
</code></pre>
<p>However, if I take this approach and scrape the job description the same way, I have no way of adding it to the loop inside the HTML, so that it shows in proper order:</p>
<ol>
<li>Job Title 1, Job Description 1,</li>
<li>Job Title 2, Job Description 2</li>
</ol>
<p>Which leads me to believe I need a for loop inside my py file, as well. So this was the best I could come up with, but it's giving me a TypeError: 'ResultSet' object is not callable.</p>
<pre><code>soup = BeautifulSoup(source, 'lxml')
jobs = soup.find_all("div", {"class": "card-job"})
for job, desc in jobs(soup.find_all("h2", {"class": "card-title"}),
soup.find_all("ul", {"class": "job-meta"})):
print(job, desc)
</code></pre>
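<p>On a toy snippet, scoping the lookups inside each card does seem to pair titles and descriptions correctly (sketch below), but I am not sure this is the right pattern for my Flask view:</p>

```python
from bs4 import BeautifulSoup

html = """
<div class="card-job">
  <h2 class="card-title"><a href="#">Job A</a></h2>
  <ul class="job-meta"><li>Dept A</li><li>City A</li></ul>
</div>
<div class="card-job">
  <h2 class="card-title"><a href="#">Job B</a></h2>
  <ul class="job-meta"><li>Dept B</li><li>City B</li></ul>
</div>
"""

soup = BeautifulSoup(html, "html.parser")
jobs = []
for card in soup.find_all("div", {"class": "card-job"}):
    # search within each card, so titles and descriptions stay paired
    title = card.find("h2", {"class": "card-title"}).get_text(strip=True)
    desc = ", ".join(li.get_text(strip=True)
                     for li in card.select("ul.job-meta li"))
    jobs.append((title, desc))

print(jobs)
```
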
<p>What did I do wrong here? And how do I pass it into def home, and use it inside the HTML?</p>
<p>Thank you!</p>
| <python><flask><web-scraping><beautifulsoup> | 2023-10-31 15:31:46 | 1 | 851 | hemoglobin |
77,397,199 | 7,421,654 | No module named 'opentelemetry' in chromadb | <p>I have hosted Chroma DB in AWS using Docker. It was working fine, and with no changes made to the code and no redeployment done, I am now getting this error:</p>
<pre><code>ModuleNotFoundError: No module named 'opentelemetry'
</code></pre>
| <python><chromadb> | 2023-10-31 15:26:06 | 0 | 1,493 | Mohamed Anser Ali |
77,397,181 | 14,643,315 | Can't exit the script when using 'keyboard' package in Python | <p>Here is my code:</p>
<pre><code>import keyboard
def onEsc(e):
if e.event_type == keyboard.KEY_DOWN and e.name == 'esc':
print("Goodbye")
exit(1)
keyboard.on_press_key('esc', onEsc)
while True:
None
</code></pre>
<p>I get this weird behaviour - when pressing 'esc', it prints but doesn't exit.
Then, on the next press of 'esc', it doesn't print at all.
I checked this in the debugger, and it seems to lead to a sort of exception that gets caught, but I wasn't able to understand the source code entirely.</p>
<p>*It doesn't make a difference here if I use 'sys.exit(1)' instead</p>
| <python> | 2023-10-31 15:23:29 | 0 | 453 | sadcat_1 |
77,397,174 | 1,314,159 | Data structure for a CSV file using OpenAI and LangChains CSVLoader | <p>OK, this may be a stupid question, but I can't find an answer anywhere. I am trying to load a CSV file of US cancer-related data into an OpenAI application. I am trying to determine the "best" data structure for doing queries. What I mean is:</p>
<p>Is it better to have each item in a separate row, or does adding a column work just as well? For example:</p>
<pre><code>| State | County | Cancer | Rate |
|-------|------------|---------|------|
|SC | Charleston | Bladder | 3.9 |
|SC | Dorchester | Bladder | 4.4 |
|SC | Pickens | Bladder | 3.4 |
|SC | Charleston | Colon | 1.9 |
|SC | Dorchester | Colon | 8.5 |
|SC | Pickens | Colon | 3.4 |
</code></pre>
<p>or</p>
<pre><code>| State | County | Bladder_Rate | Colon_Rate |
|-------|------------|--------------|------------|
|SC | Charleston | 3.9 | 1.9 |
|SC | Dorchester | 4.4 | 8.5 |
|SC | Pickens | 3.4 | 3.4 |
</code></pre>
<p>The questions will only apply to the data in the CSV file. Is there a public resource that has determined the accuracy of responses based upon the data structure and the models selected? This will pretty much use static data, so I'm willing to experiment to find the most accurate structure. That being said, if a researcher asks a question and the answer is a total hallucination, then the app will never be used again. This will house data for the whole United States, published by governmental sources, for every county in every state and every cancer type that they follow, but the data does not change very often.</p>
| <python><openai-api><langchain> | 2023-10-31 15:22:51 | 2 | 1,517 | Bill |
77,396,807 | 10,557,442 | Discrepancies in PySpark DataFrame Results When Using Window Functions and Filters | <p>When I do certain types of transformations on a dataframe that involve window functions with filters I am getting the wrong results. Here is a minimal example of the results I get with my code:</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import SparkSession
import pyspark.sql.functions as f
from pyspark.sql.window import Window as w
from datetime import datetime, date
from chispa.dataframe_comparer import assert_df_equality
spark = SparkSession.builder.config("spark.sql.repl.eagerEval.enabled", True).getOrCreate()
# Base dataframe
df = spark.createDataFrame(
[
(1, date(2023, 10, 1), date(2023, 10, 2), "open"),
(1, date(2023, 10, 2), date(2023, 10, 3), "close"),
(2, date(2023, 10, 1), date(2023, 10, 2), "close"),
(2, date(2023, 10, 2), date(2023, 10, 4), "close"),
(3, date(2023, 10, 2), date(2023, 10, 4), "open"),
(3, date(2023, 10, 3), date(2023, 10, 6), "open"),
],
schema="id integer, date_start date, date_end date, status string"
)
# We define two partition functions
partition = w.partitionBy("id").orderBy("date_start", "date_end").rowsBetween(w.unboundedPreceding, w.unboundedFollowing)
partition2 = w.partitionBy("id").orderBy("date_start", "date_end")
# Define dataframe A
A = df.withColumn(
"date_end_of_last_close",
f.max(f.when(f.col("status") == "close", f.col("date_end"))).over(partition)
).withColumn(
"rank",
f.row_number().over(partition2)
)
display(A)
| id | date_start | date_end | status | date_end_of_last_close | rank |
|----|------------|------------|--------|------------------------|------|
| 1 | 2023-10-01 | 2023-10-02 | open | 2023-10-03 | 1 |
| 1 | 2023-10-02 | 2023-10-03 | close | 2023-10-03 | 2 |
| 2 | 2023-10-01 | 2023-10-02 | close | 2023-10-04 | 1 |
| 2 | 2023-10-02 | 2023-10-04 | close | 2023-10-04 | 2 |
| 3 | 2023-10-02 | 2023-10-04 | open | NULL | 1 |
| 3 | 2023-10-03 | 2023-10-06 | open | NULL | 2 |
# When filtering by rank = 1, I get this weird result
A_result = A.filter(f.col("rank") == 1).drop("rank")
display(A_result)
| id | date_start | date_end | status | date_end_of_last_close |
|----|------------|------------|--------|------------------------|
| 1 | 2023-10-01 | 2023-10-02 | open | NULL |
| 2 | 2023-10-01 | 2023-10-02 | close | 2023-10-02 |
| 3 | 2023-10-02 | 2023-10-04 | open | NULL |
</code></pre>
<p>Obviously this result is not correct; moreover, if I create the dataframe from scratch with the same data and apply the same filter, I do not get the same output:</p>
<pre class="lang-py prettyprint-override"><code># Define the schema as a string
schema = "id INT, date_start DATE, date_end DATE, status STRING, date_end_of_last_close DATE, rank INT"
# Define the same data as A dataframe
data = [
(1, date(2023, 10, 1), date(2023, 10, 2), "open", date(2023, 10, 3), 1),
(1, date(2023, 10, 2), date(2023, 10, 3), "close", date(2023, 10, 3), 2),
(2, date(2023, 10, 1), date(2023, 10, 2), "close", date(2023, 10, 4), 1),
(2, date(2023, 10, 2), date(2023, 10, 4), "close", date(2023, 10, 4), 2),
(3, date(2023, 10, 2), date(2023, 10, 4), "open", None, 1),
(3, date(2023, 10, 3), date(2023, 10, 6), "open", None, 2),
]
# Create the DataFrame
B = spark.createDataFrame(data, schema)
# Apply the same filter as before
B_result = B.filter(f.col("rank") == 1).drop("rank")
display(B_result)
| id | date_start | date_end | status | date_end_of_last_close |
|----|------------|------------|--------|------------------------|
| 1 | 2023-10-01 | 2023-10-02 | open | 2023-10-03 |
| 2 | 2023-10-01 | 2023-10-02 | close | 2023-10-04 |
| 3 | 2023-10-02 | 2023-10-04 | open | NULL |
</code></pre>
<p>What is happening here? I don't trust pyspark right now, because I am applying a series of transformations by adding columns and using different window functions to later apply filters, and in view of these discrepancies I don't know what to do anymore.</p>
<h1>Update 2023-10-31</h1>
<p>There seems to be some kind of problem in the latest version of pyspark (3.5.0). In the previous version (3.4.1) the script works as expected.</p>
| <python><apache-spark><pyspark><partitioning> | 2023-10-31 14:33:42 | 0 | 544 | Dani |
77,396,626 | 1,172,907 | How to mark rgb colors on a colorwheel in python? | <p>Given <code>(255,0,255)</code>, how can I output a colorwheel where this value is marked?</p>
<p><a href="https://i.sstatic.net/oTyIW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTyIWm.png" alt="enter image description here" /></a></p>
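<p>My understanding so far: the wheel maps hue to angle and saturation to radius, so the marker position for an RGB triple can be computed with the standard library and then drawn as a dot on a polar axis (e.g. with matplotlib). A sketch of the coordinate conversion, assuming that wheel layout:</p>

```python
import colorsys

r, g, b = (255, 0, 255)
h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)

angle_deg = h * 360   # angular position on the wheel (magenta sits near 300 degrees)
radius = s            # full saturation sits on the rim
print(angle_deg, radius)
# e.g. with matplotlib: ax = plt.subplot(projection="polar")
#      ax.plot(math.radians(angle_deg), radius, "o")
```
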
| <python><colors> | 2023-10-31 14:06:46 | 1 | 605 | jjk |
77,396,596 | 9,443,671 | How can I playback audio as it's being streamed to a file? | <p>I'm using <a href="https://github.com/rany2/edge-tts/tree/master" rel="nofollow noreferrer">edge-TTS</a> (text-to-speech software) that streams audio to a file as it's being generated. Is there a way to do some processing and then play back this audio as soon as it is generated, rather than when it finishes? I'm not too sure how to do this, but it seems like it should be possible. The particular code being used is in this script from the authors: <a href="https://github.com/rany2/edge-tts/blob/master/examples/basic_audio_streaming.py" rel="nofollow noreferrer">https://github.com/rany2/edge-tts/blob/master/examples/basic_audio_streaming.py</a></p>
<p>I'm guessing what this does is continuously stream audio to a file as soon as it's generated; now I'm wondering how I can do a little processing on the audio as soon as it arrives and play it back while streaming.</p>
| <python><file><streaming><text-to-speech><audio-streaming> | 2023-10-31 14:01:50 | 0 | 687 | skidjoe |
77,396,472 | 10,863,083 | Check if column values are consecutive in SQL and assign them a row_number | <p>I have the following table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Device_id</th>
<th>Code</th>
<th>SESSION_ID</th>
<th>row_nb</th>
</tr>
</thead>
<tbody>
<tr>
<td>MD50</td>
<td>XSC547</td>
<td>7</td>
<td>450</td>
</tr>
<tr>
<td>MD51</td>
<td>LM678</td>
<td>7</td>
<td>451</td>
</tr>
<tr>
<td>MD60</td>
<td>SVF652</td>
<td>7</td>
<td>452</td>
</tr>
<tr>
<td>VF35</td>
<td>MMM547</td>
<td>7</td>
<td>550</td>
</tr>
<tr>
<td>VF23</td>
<td>NNN678</td>
<td>7</td>
<td>551</td>
</tr>
<tr>
<td>MD60</td>
<td>ABC652</td>
<td>7</td>
<td>800</td>
</tr>
</tbody>
</table>
</div>
<p>I need to know if there is a way to create an extra column called <code>test</code>, as follows:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Device_id</th>
<th>Code</th>
<th>SESSION_ID</th>
<th>row_nb</th>
<th>test</th>
</tr>
</thead>
<tbody>
<tr>
<td>MD50</td>
<td>XSC547</td>
<td>7</td>
<td>450</td>
<td>1</td>
</tr>
<tr>
<td>MD51</td>
<td>LM678</td>
<td>7</td>
<td>451</td>
<td>1</td>
</tr>
<tr>
<td>MD60</td>
<td>SVF652</td>
<td>7</td>
<td>452</td>
<td>1</td>
</tr>
<tr>
<td>VF35</td>
<td>MMM547</td>
<td>7</td>
<td>550</td>
<td>2</td>
</tr>
<tr>
<td>VF23</td>
<td>NNN678</td>
<td>7</td>
<td>551</td>
<td>2</td>
</tr>
<tr>
<td>MD60</td>
<td>ABC652</td>
<td>7</td>
<td>800</td>
<td>3</td>
</tr>
</tbody>
</table>
</div>
<p>Each run of consecutive <code>row_nb</code> values gets the same value in the <code>test</code> column. Is that possible with SQL?</p>
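<p>For context, the data eventually lands in Python, and in pandas the grouping I am after is a classic gaps-and-islands computation. A sketch, in case it clarifies the intent (the SQL analogue would presumably compare <code>row_nb</code> against a <code>ROW_NUMBER()</code>):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Device_id": ["MD50", "MD51", "MD60", "VF35", "VF23", "MD60"],
    "row_nb": [450, 451, 452, 550, 551, 800],
})

# a row whose row_nb does not follow the previous one starts a new group
df["test"] = df["row_nb"].diff().ne(1).cumsum()
print(df["test"].tolist())  # [1, 1, 1, 2, 2, 3]
```
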
| <python><sql><snowflake-cloud-data-platform> | 2023-10-31 13:46:04 | 3 | 417 | baddy |
77,396,453 | 11,038,017 | sending POST, getting GET in Tornado | <p>I have a question. I am using the Tornado web server. I have a main handler which gets a request, pre-processes it, creates a new request, and sends it on to a worker. The code for sending the pre-processed request on to the worker looks like this:</p>
<pre><code> async def send_reworked_request(self, preprocessed_request, endpoint_url):
# Create an HTTP client instance
http_client = AsyncHTTPClient()
# Create a request to send the reworked request to the other handler
request = tornado.httpclient.HTTPRequest(
url=endpoint_url,
method="POST",
body=preprocessed_request,
connect_timeout = 1800.00,
request_timeout = 1800.00)
try:
# Send the request
response = await http_client.fetch(request)
return response
except Exception as e:
print("Error: %s" % e)
</code></pre>
<p>I am sending a POST request. I have async <code>get</code> and <code>post</code> methods on the worker. The request arrives at the worker as a GET request with an empty body, and ends up not in the <code>post</code> method as I expect, but in the <code>get</code> one. I have also captured the <code>remote_ip</code>; it is always the same, so the request is definitely coming from my client.</p>
<p>Do you know what is happening?</p>
| <python><request><tornado> | 2023-10-31 13:43:54 | 0 | 333 | Irina KΓ€rkkΓ€nen |
77,396,451 | 7,895,331 | VSCode: Python virtual environments not automatically activating in integrated terminal | <p>I recently installed VSCode and noticed that the Python virtual environments are not automatically activating when I open the integrated terminal.</p>
<p>From the information available within VSCode in this link:</p>
<p><a href="https://github.com/microsoft/vscode-python/wiki/Activate-Environments-in-Terminal-Using-Environment-Variables" rel="nofollow noreferrer">https://github.com/microsoft/vscode-python/wiki/Activate-Environments-in-Terminal-Using-Environment-Variables</a></p>
<p>it appears that the Python extension may not be contributing to the terminal's environment.</p>
<p>I've attached a screenshot below that illustrates the issue:</p>
<p><a href="https://i.sstatic.net/uH9nA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uH9nA.png" alt="enter image description here" /></a></p>
<ol>
<li>Selecting "Command Prompt" from the terminal options within VSCode.</li>
<li>The Python extension notification indicates its functionality: it's supposed to automatically activate all terminals using the chosen environment, regardless of whether the environment's name appears in the terminal prompt.</li>
<li>In the "Terminal Environment Changes" section on my system, only the git extension seems to contribute to this terminal.</li>
<li>Examining the environment variables on my system, I notice that variables associated with Python packages are absent; only those related to git are visible.</li>
<li>For reference, the screenshot depicts the expected "Terminal Environment Changes" as per the provided link. This correct configuration, seen in step 5, is notably missing from the current setup on my system, as observed in step 4.</li>
</ol>
<p>Any ideas how to solve this?</p>
| <python><visual-studio-code> | 2023-10-31 13:43:24 | 3 | 601 | Rafael Zanzoori |
77,396,335 | 2,307,570 | What is the fastest way to remove multiple elements from a set, just based on their properties? (E.g. remove all negative numbers.) | <p>In the following example all negative numbers are removed from the set.<br>
But first I have to create the subset of negative numbers.<br>
This does not seem like the most efficient way to achieve this.</p>
<pre class="lang-py prettyprint-override"><code>my_set = {-100, -5, 0, 123, 3000}
my_set.difference_update({e for e in my_set if e < 0})
assert my_set == {0, 123, 3000}
</code></pre>
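<p>For the simple numeric case, a single comprehension avoids materialising the intermediate subset, though it rebinds the name to a new set rather than updating in place (which is part of what I am asking about):</p>

```python
my_set = {-100, -5, 0, 123, 3000}
# keep what I want, rather than collect-and-remove what I don't
my_set = {e for e in my_set if e >= 0}
assert my_set == {0, 123, 3000}
```
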
<p>In the title I refer to a set, but I mean that in a mathematical sense.<br>
My question is not specific to the datatype.</p>
<p>The following example is what I actually want to do.<br>
I have a set of pairs, which could also be seen as a binary matrix,<br>
and I want to remove some rows and columns of that matrix.<br>
Again, the effort to create the set <code>to_be_removed</code> seems wasted to me.<br>
I am looking for a way to directly get rid of all elements with some property.</p>
<p><a href="https://i.sstatic.net/12MS3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/12MS3.png" alt="enter image description here" /></a></p>
<pre class="lang-py prettyprint-override"><code>my_set = {
(0, 0), (0, 2), (0, 3), (0, 6), (0, 7), (1, 0),
(2, 3), (2, 4), (2, 7), (3, 3), (3, 7), (3, 9),
(4, 2), (4, 4), (4, 6), (4, 10), (5, 0), (6, 0),
(6, 1), (6, 3), (6, 8), (6, 9), (7, 1), (7, 9)
}
to_be_removed = {(i, j) for (i, j) in my_set if i in {0, 1, 7} or j in {0, 1, 6, 7, 10}}
assert to_be_removed == {
(0, 0), (0, 2), (0, 3), (0, 6), (0, 7), (1, 0), (2, 7), (3, 7),
(4, 6), (4, 10), (5, 0), (6, 0), (6, 1), (7, 1), (7, 9)
}
my_set.difference_update(to_be_removed)
assert my_set == {
(2, 3), (2, 4), (3, 3), (3, 9), (4, 2), (4, 4),
(6, 3), (6, 8), (6, 9)
}
</code></pre>
<p>Maybe <code>set</code> does not allow this. But I do not care about the datatype.<br>
I suppose that arrays make it easy to set whole rows and columns to zero.<br>
But I would like to avoid wasting space for zeros.<br>
(Sparse matrices, on the other hand, are apparently not made to be changed.)</p>
<hr />
<p><strong>Edit:</strong> The comment by Joe suggests the following:</p>
<pre class="lang-py prettyprint-override"><code>rows, columns = {2, 3, 4, 5, 6}, {2, 3, 4, 5, 8, 9}
my_set = {(i, j) for (i, j) in my_set if i in rows and j in columns}
</code></pre>
<p>That does indeed work, and is probably faster.</p>
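<p>(For reference, the same keep-predicate idea applied to the first example, as a minimal sketch -- one pass, with no intermediate <code>to_be_removed</code> set:)</p>

```python
def filter_set(s, keep):
    # Rebuild the set, keeping only the elements for which keep(e) is true.
    return {e for e in s if keep(e)}

my_set = {-100, -5, 0, 123, 3000}
my_set = filter_set(my_set, lambda e: e >= 0)
```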
| <python><arrays><set><binary-matrix> | 2023-10-31 13:27:59 | 1 | 1,209 | Watchduck |
77,396,200 | 4,192,366 | Make @intrinsic return a tuple | <p>I'm trying to implement a <code>(uint64 a)*(uint64 b)->(uint64 higher,uint64 lower)</code> function in numba. This is what I ended up with after fighting with it all night. There are really few examples online about this:</p>
<pre><code>import numpy as np
from llvmlite import ir
from numba import njit, types
from numba.extending import intrinsic
@intrinsic
def mul_(ctx, a, b):
def gen(ctx, build: ir.IRBuilder, sig, a):
u8 = ir.IntType(64)
u16 = ir.IntType(128)
a = build.mul(build.zext(a[0], u16), build.zext(a[1], u16))
# return build.addrspacecast(a, ctx.get_value_type(sig.return_type))
o = build.alloca(u8, 2)
O = build.bitcast(o, ir.ArrayType(u8, 2))
build.store(build.trunc(build.lshr(a, u16(64)), u8), build.gep(o, [u8(0)]))
build.store(build.trunc(a, u8), build.gep(o, [u8(1)]))
return o
return types.Tuple((types.u8, types.u8))(types.u8, types.u8), gen
@njit
def mul(x, y):
mul_(x, y)
a = 2**63 - 1
t = mul(a, a)
</code></pre>
<p>I found that no matter what I do, it either says <code>Can't index at [0] in i64*</code> (for the lower-case <code>o</code>) or <code>'ArrayType' object has no attribute 'pointee'</code> (for the upper-case <code>O</code>). Can anyone tell me the proper way of doing this?</p>
<p>It would be even better to directly return an int128. Maybe as <code>S16</code>, <code>V16</code>, or similar? I tried casting <code>int128</code> directly to <code>[int64 x 2]</code>, but it says it can't cast this way. As long as it's returned to the outside Python world, I can always <code>np.view</code> it as anything I want.</p>
| <python><numpy><llvm><numba> | 2023-10-31 13:11:43 | 2 | 1,760 | ZisIsNotZis |
77,396,198 | 5,919,632 | how to auto format E501 line too long (89 > 88 characters) in pre-commit? | <p>I have the following <code>.pre-commit-config.yaml</code> file:</p>
<pre><code>default_language_version:
python: python3.10
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: check-yaml
- id: end-of-file-fixer
- id: trailing-whitespace
- id: check-byte-order-marker
- repo: https://github.com/pycqa/isort
rev: 5.12.0
hooks:
- id: isort
- repo: https://github.com/psf/black
rev: 23.7.0
hooks:
- id: black
- repo: https://github.com/charliermarsh/ruff-pre-commit
rev: 'v0.0.287'
hooks:
- id: ruff
args: [--fix, --exit-non-zero-on-fix]
</code></pre>
<p>I have a <code>demo.py</code> Python file where a few lines exceed the 88-character maximum. In that case, pre-commit only reports the
<code>E501 line too long (89 > 88 characters)</code> error instead of reformatting those lines.</p>
<p>Is there any way we can auto-fix such issues?</p>
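<p>For what it's worth, Ruff has no auto-fix for E501: lines that Black cannot split safely (long string literals, comments, URLs) will keep tripping it even after formatting. A possible workaround (sketch; adjust to your setup) is to make Black own line wrapping and silence E501 in <code>pyproject.toml</code>:</p>

```toml
[tool.black]
line-length = 88

[tool.ruff]
line-length = 88
# E501 is not auto-fixable; defer line wrapping to Black:
ignore = ["E501"]
```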
| <python><pre-commit><python-black><ruff> | 2023-10-31 13:11:29 | 2 | 647 | Akash Pagar |
77,396,096 | 2,817,520 | How to configure Flask-Babel to work with blueprints | <p>I have a Flask app with many blueprints. Each blueprint is a Python package. How can I have translations alongside each package?</p>
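<p>A possible direction (sketch; the paths are hypothetical): Flask-Babel's <code>BABEL_TRANSLATION_DIRECTORIES</code> config key accepts a semicolon-separated list of paths, so each blueprint package can ship its own <code>translations/</code> folder:</p>

```python
from flask import Flask
# from flask_babel import Babel  # assumes Flask-Babel is installed

app = Flask(__name__)
# One catalog directory per blueprint package (hypothetical layout):
app.config["BABEL_TRANSLATION_DIRECTORIES"] = ";".join([
    "translations",                   # app-wide catalog
    "blueprints/admin/translations",  # per-blueprint catalogs
    "blueprints/shop/translations",
])
# babel = Babel(app)
```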
| <python><flask> | 2023-10-31 12:58:10 | 1 | 860 | Dante |
77,396,021 | 264,136 | SSH to server and from there SCP to another server | <p>I want to SSH to a Linux server and, from there, do an SCP copy to an ESXi server. Both need a username and password.</p>
<ol>
<li><p>Connect to Linux box:</p>
<pre><code>ssh_linux = paramiko.SSHClient()
ssh_linux.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_linux.connect(linux_host, username=linux_user, password=linux_password)
</code></pre>
</li>
<li><p>SCP:</p>
<pre><code>scp_command = f"scp -o StrictHostKeyChecking=no {local_file_path} {esxi_user}@{esxi_host}:{remote_file_path}"
logging.info(f"{scp_command}")
stdin, stdout, stderr = ssh_linux.exec_command(scp_command)
scp_exit_status = stdout.channel.recv_exit_status()
if scp_exit_status == 0:
logging.info(f"File {file} copied successfully.")
else:
logging.error(f"Error copying. SCP command failed with exit code {scp_exit_status}")
logging.error(f"SCP STDOUT: {stdout.read().decode()}")
logging.error(f"SCP STDERR: {stderr.read().decode()}")
ssh_linux.close()
</code></pre>
</li>
</ol>
<p>How can I send the password of the ESXi host to the <code>scp</code> command?</p>
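<p>One common workaround (sketch; assumes <code>sshpass</code> is installed on the intermediate Linux box, and the host/path names below are illustrative) is to wrap the <code>scp</code> call with <code>sshpass</code>, which supplies the password non-interactively:</p>

```python
import shlex


def build_scp_command(local_path, user, host, remote_path, password):
    # shlex.quote guards against spaces/special characters in any field.
    return (
        f"sshpass -p {shlex.quote(password)} "
        f"scp -o StrictHostKeyChecking=no "
        f"{shlex.quote(local_path)} "
        f"{shlex.quote(f'{user}@{host}:{remote_path}')}"
    )


cmd = build_scp_command("/tmp/f.iso", "root", "esxi01", "/vmfs/volumes/ds1/", "s3cret")
# then: stdin, stdout, stderr = ssh_linux.exec_command(cmd)
```

<p>Note that putting a password on the command line makes it visible in the remote host's process list, so key-based authentication is preferable where possible.</p>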
| <python><ssh><paramiko><scp><openssh> | 2023-10-31 12:45:41 | 1 | 5,538 | Akshay J |
77,395,818 | 6,338,996 | FileNotFoundError for a file that is certainly there when running a command from a python script | <p>I need to perform consecutive concatenations of FITS files (or one big concatenation of all of them), and since doing them by hand is a terrible idea (around 200 files), I am using Python to write a script, which I then run in a WSL/UNIX terminal with <code>python3 code.py</code> on the command line.</p>
<p>The script accesses ascii tables that are in a subfolder (called <code>vvv_tiles_no_overlap</code>). Since I am running the script both on WSL on my Windows 11 PC for troubleshooting and on a UNIX terminal on a remote server for the real work, I use <code>os.getcwd()</code> and <code>os.path.join()</code> to create the paths that the script uses to access the files. So far, the code successfully completes the following tasks:</p>
<ol>
<li>Using <code>awk</code>, access the file in the subfolder and create a temporary CSV file in the same directory (<code>tmp.csv</code>).</li>
<li>Using <code>stilts</code>, convert <code>tmp.csv</code> to FITS (<code>tmp.fits</code>).</li>
<li>Run a <code>stilts</code> command to cross-match the local catalogue to a remote one and save it as its individual FITS file inside the subfolder.</li>
</ol>
<p>So far the code has no issues accessing the files and performing what it needs to do. Every command is called using <code>subprocess.call()</code>. Then, I ask the script to read every fits file created in the subfolder and concatenate them. When the code reaches that point, I get the following <code>FileNotFoundError</code>:</p>
<pre><code>Exception in thread Thread-1 (process): | time passed: 00:00:00
Traceback (most recent call last):
File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/usr/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/mnt/d/2023-2/awker-stiltser.py", line 183, in process
subprocess.call(f'stilts tcat lazy=True in="{Ft_paths[0]} {Ft_paths[1]}" out=VVVxGAIA.fits')
File "/usr/lib/python3.10/subprocess.py", line 345, in call
with Popen(*popenargs, **kwargs) as p:
File "/usr/lib/python3.10/subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.10/subprocess.py", line 1863, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'stilts tcat lazy=True in="vvv_tiles_no_overlap/VVVxGAIA_b201.fits vvv_tiles_no_overlap/VVVxGAIA_b202.fits" out=VVVxGAIA.fits'
</code></pre>
<p>(The paths do not seem to be absolute because I changed the code around to see if <code>os</code> was somehow causing the problem, but I assure you that it changed nothing.)</p>
<p>I didn't understand how the file could not be found when the path was generated by the code itself. I double checked and finally ran the same call that is reported by <code>FileNotFoundError</code>:</p>
<pre><code>stilts tcat lazy=True in="vvv_tiles_no_overlap/VVVxGAIA_b201.fits vvv_tiles_no_overlap/VVVxGAIA_b202.fits" out=VVVxGAIA.fits
</code></pre>
<p>The command ran without issue and I got the expected result. What could cause this problem?</p>
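<p>The traceback itself points at the cause: without <code>shell=True</code>, <code>subprocess</code> treats the whole string as the name of a single executable -- note how the <code>FileNotFoundError</code> quotes the entire command line as the "file". Running the same line in a terminal works because the shell does the word splitting. A minimal illustration of the two fixes:</p>

```python
import shlex
import subprocess

cmd = 'stilts tcat lazy=True in="a.fits b.fits" out=out.fits'

# Fix 1: let a shell do the word splitting (matches the terminal behaviour):
#     subprocess.call(cmd, shell=True)
# Fix 2: split the string into an argument list yourself:
args = shlex.split(cmd)
#     subprocess.call(args)
```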
| <python><awk><terminal><subprocess> | 2023-10-31 12:17:39 | 1 | 573 | condosz |
77,395,742 | 5,170,800 | Filtering rows of dataframe based on quintiles | <p>I created a dataframe using the following code. I am trying to create a new dataframe that contains only the rows from the original dataframe that are less than Q1 or greater than Q4 for each PP.</p>
<p>I tried <code>filtered_df = df[df['Hours per action'].lt(first_quartile) | df['Hours per action'].gt(fourth_quartile)]</code>, but that isn't right. I need all rows with 'PP' == 1 to use the Q1 and Q4 from 'PP' 1, all rows with 'PP' == 2 to use the Q1 and Q4 from 'PP' 2, and so on. Thank you</p>
<pre><code>import pandas as pd
# Create a list of all possible combinations of 'PP' and 'Name'
pp_employee_combinations = [(pp, name) for pp in range(1, 27) for name in ['Wyatt', 'Thom', 'Pete', 'Sue', 'Dave']]
# Create a new DataFrame with all of the possible combinations of 'PP' and 'Name'
df = pd.DataFrame(pp_employee_combinations, columns=['PP', 'Name'])
import numpy.random as rnd
# Fill in the 'Hours per action' column with random values
df['Hours per action'] = rnd.randint(10, 100, size=df.shape[0])
Q1 = df.groupby('PP')['Hours per action'].quantile(0.25)
Q4 = df.groupby('PP')['Hours per action'].quantile(0.75)
</code></pre>
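<p>A possible per-group approach (sketch; Q1/Q4 are taken to mean the 0.25 and 0.75 quantiles, as in the code above) is <code>groupby(...).transform</code>, which broadcasts each group's quantile back onto that group's own rows so the comparison can be done row-wise:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "PP": [1, 1, 1, 1, 2, 2, 2, 2],
    "Hours per action": [10, 20, 30, 40, 5, 6, 7, 100],
})

g = df.groupby("PP")["Hours per action"]
q1 = g.transform(lambda s: s.quantile(0.25))  # each row gets its own PP's Q1
q4 = g.transform(lambda s: s.quantile(0.75))  # each row gets its own PP's Q4
filtered_df = df[df["Hours per action"].lt(q1) | df["Hours per action"].gt(q4)]
```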
| <python><pandas><dataframe> | 2023-10-31 12:07:18 | 1 | 581 | Britt |
77,395,533 | 2,552,108 | Secondary Axis Range from 0 to None in Update Layout Not Working | <p>I have data on total sales and total invoices (row counts) per year. I want to plot them in a bar and line chart. I successfully made the chart with Plotly.</p>
<p>However, I am having a problem getting the secondary y-axis to start from 0 instead of being set automatically by Plotly.</p>
<pre><code>fig = make_subplots(specs=[[{"secondary_y": True}]])
fig.add_trace(go.Bar(
x=smy[smy['variable'] == 'Weekly_Sales_sum']['Year'],
y=smy[smy['variable'] == 'Weekly_Sales_sum']['value'],
name='Weekly Sales Sum'
))
fig.add_trace(go.Scatter(
x=smy[smy['variable'] == 'Weekly_Sales_size']['Year'],
y=smy[smy['variable'] == 'Weekly_Sales_size']['value'],
name='Weekly Sales Size',
yaxis='y2'
))
fig.update_layout(title='Bar and Line Chart for Weekly Sales Sum and Size',
yaxis=dict(title='Weekly Sales Sum'),
yaxis2=dict(title='Weekly Sales Size', overlaying='y', side='right', range=[0, None]))
pyo.iplot(fig)
</code></pre>
<p>I have set the <code>fig</code>'s <code>update_layout</code> to make the secondary axis start from 0 using <code>range=[0, None]</code>, but it still doesn't work.</p>
<p><a href="https://i.sstatic.net/BZkHC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BZkHC.png" alt="enter image description here" /></a></p>
| <python><plotly> | 2023-10-31 11:33:56 | 1 | 1,170 | user2552108 |
77,395,425 | 5,437,090 | Flatten SciPy lil_matrix without converting to dense to avoid memory error | <p>Given:</p>
<pre><code>import numpy as np
import scipy
print(np.__version__, scipy.__version__) # numpy: 1.23.5 scipy: 1.11.3
</code></pre>
<p>I would like to flatten or squeeze my sliced array without converting it to dense in order to avoid memory error:</p>
<pre><code>mtx=scipy.sparse.lil_matrix((int(2e8), int(4e9)), dtype=np.float32) # very large lil_matrix
mtx[:int(4e6), :int(7e4)]=np.random.rand(int(4e6), int(7e4))
flat_mtx=mtx.getrowview(0).flatten() # sliced example: row: 0
</code></pre>
<p>But I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'lil_matrix' object has no attribute 'flatten'
</code></pre>
<p>For smaller sizes, I can alternatively do <code>mtx.getrowview(0).toarray().flatten()</code> or a faster approach of <code>np.squeeze(sp_mtx.getrowview(0).toarray())</code> but for larger sizes, I do not want to convert to dense <code>numpy</code> array.</p>
<p>What is the easy and memory efficient approach to flatten or squeeze a <code>scipy</code> sparse <code>lil_matrix</code>?</p>
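<p>A couple of possibilities that stay sparse (sketch; assumes a 1×N sparse row, or the raw index/value arrays, are acceptable substitutes for a flat dense vector):</p>

```python
import numpy as np
import scipy.sparse as sp

mtx = sp.lil_matrix((1000, 1000), dtype=np.float32)
mtx[0, 3] = 1.5
mtx[0, 10] = 2.5

# Option 1: convert only the row view to CSR -- still sparse, shape (1, N).
row = mtx.getrowview(0).tocsr()

# Option 2: read the stored column indices and values straight out of the
# LIL internals (each row is kept as a sorted list of indices + values).
cols = np.asarray(mtx.rows[0])
vals = np.asarray(mtx.data[0])
```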
| <python><scipy><sparse-matrix><flatten> | 2023-10-31 11:18:48 | 2 | 1,621 | farid |
77,395,177 | 1,734,097 | How to set a value in session state in Streamlit? | <p>I'm new to Streamlit and I'm trying to create a budgeting app.
I have the following function:</p>
<pre><code>#the purpose of this function are:
# create a st.number_input
# store its value in sessions
def input_number(labelnya,session_name,value=None,help=None,format='%g'):
valuenya = st.number_input(labelnya,value=value,help=help,format=format)
valuenya = 0 if valuenya is None else valuenya
if session_name not in st.session_state:
st.session_state[session_name] = valuenya
print("All Session:\n{}".format(st.session_state))
return valuenya
</code></pre>
<p>then, i want session_state store something like this:</p>
<pre><code>{
"asset": {
"asset1": {
"cash_on_hand": 1,
"account_receivable": 2
},
"asset2": {}
},
"liabilities": {
"liabilities1": {},
"liabilities2": {}
}
}
</code></pre>
<p>The purpose of having dictionaries like the above in <code>session_state</code> is that I can summarize per group, e.g. <code>total per asset1</code> or <code>total per asset</code>.
The following code is executed in <code>app.py</code>:</p>
<pre><code>input_number("Cash on Hand",session_name='how_do_i_set_cash_on_hand_value_from_this_number_input')
</code></pre>
<p>I'm a bit confused because the <a href="https://docs.streamlit.io/library/api-reference/session-state" rel="nofollow noreferrer">Streamlit docs</a> only show how to use <code>session_state</code> with flat keys, not with nested dictionaries.</p>
<p>How and where do I start?</p>
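<p>One possible pattern (sketch; the key names are hypothetical): since <code>st.session_state</code> behaves like a dict, a small helper can walk and create nested dicts along a dotted path, so <code>input_number</code> could be called with <code>session_name="asset.asset1.cash_on_hand"</code>:</p>

```python
def set_nested(state, path, value):
    # Walk a dotted path, creating intermediate dicts as needed, then set
    # the leaf value. Works on st.session_state or on a plain dict.
    keys = path.split(".")
    node = state
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value


state = {}  # stand-in for st.session_state in this sketch
set_nested(state, "asset.asset1.cash_on_hand", 1)
set_nested(state, "asset.asset1.account_receivable", 2)
set_nested(state, "liabilities.liabilities1.loans", 5)
```

<p>Summaries then become ordinary dict traversals, e.g. <code>sum(state["asset"]["asset1"].values())</code> for a per-group total.</p>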
| <python><streamlit> | 2023-10-31 10:38:53 | 2 | 1,099 | Cignitor |
77,395,129 | 276,193 | No module named 'pybind11' when using poetry | <p>I've used pybind11 in the past without issue, pulled in as a submodule and used via CMake. Now I'm working on another project that uses Poetry, so I wanted to make everything Poetry-centric.</p>
<p>Trying to build this simple example project <a href="https://github.com/octavifs/poetry-pybind11-integration" rel="nofollow noreferrer">https://github.com/octavifs/poetry-pybind11-integration</a> results in the failure</p>
<pre><code>Preparing build environment with build-system requirements poetry-core>=1.0.0
Building poetry-pybind11-integration (0.1.0)
Traceback (most recent call last):
File "/blah/poetry-pybind11-integration/build.py", line 1, in <module>
from pybind11.setup_helpers import Pybind11Extension, build_ext
ModuleNotFoundError: No module named 'pybind11'
</code></pre>
<p>where the first line of build.py is <code>from pybind11.setup_helpers import Pybind11Extension, build_ext</code> and pybind11 is listed in the toml.</p>
<p>Why?</p>
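<p>Worth noting for anyone debugging this: modern pip/Poetry builds run <code>build.py</code> in an isolated build environment that contains <em>only</em> what <code>[build-system] requires</code> lists -- a regular project dependency on pybind11 is not visible there. A sketch of the fragment (version pins illustrative):</p>

```toml
[build-system]
requires = ["poetry-core>=1.0.0", "pybind11>=2.11"]
build-backend = "poetry.core.masonry.api"
```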
| <python><python-poetry><pybind11><python-extensions> | 2023-10-31 10:33:11 | 1 | 16,283 | learnvst |
77,395,017 | 12,133,068 | Dask DataFrame: write multiple CSV by column | <p>I have a Dask DataFrame with a column <code>"file"</code>. I want to write each row of the dataframe to a CSV whose path is given by the <code>"file"</code> column.</p>
<p>For instance, in the example below, rows 0, 1, and 4 should be written to the file <code>a.csv</code>; row 2 to <code>b.csv</code>; and rows 3 and 5 to <code>c.csv</code>:</p>
<pre><code>import pandas as pd
import dask.dataframe as dd
df = pd.DataFrame({"x": [1, 2, 3, 7, 11, 2], "y": [1, 1, 2, 8, 0, 0], "file": ["a.csv", "a.csv", "b.csv", "c.csv", "a.csv", "c.csv"]})
ddf = dd.from_pandas(df, npartitions=2)
</code></pre>
<p>I tried two solutions that both work but are super slow and crash from running out of memory (even with <code><100MB</code> chunks and <code>128GB</code> of total RAM). The priority right now is to make it less expensive in terms of memory, but if you can make it faster then it's even better!</p>
<h3>Bad solution 1</h3>
<p>Get each file group, and write each one with a for loop. Super ugly, and super inefficient...</p>
<pre><code>for file in ddf["file"].unique().compute():
ddf[ddf["file"] == file].to_csv(file, single_file=True)
</code></pre>
<h3>Bad solution 2</h3>
<p>Use <code>map_partitions</code> and group the dataframe on each partition separately.</p>
<pre><code>from pathlib import Path
def _write_partition(df: pd.DataFrame, partition_info=None) -> None:
if partition_info is not None:
for file, group_df in df.groupby("file"):
group_df.to_csv(
file, mode="a", header=not Path(file).exists(), index=False
)
ddf.map_partitions(_write_partition).compute()
</code></pre>
<p>This is working on small examples, but with my big dataframe (<code>20GB</code>), it runs for 3 hours without writing even one single line of CSV and then crashes because of memory (even with <code>128GB</code> of RAM). I'm quite new to Dask so maybe I'm doing something wrong...</p>
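<p>A likely improvement to the second approach (sketch): first shuffle the frame so that all rows of a given file land in the same partition (e.g. <code>ddf.shuffle("file")</code>, if available in your Dask version). Each partition can then write its groups exactly once -- no append mode, no header bookkeeping, and partitions can be processed one at a time. The per-partition writer reduces to plain pandas:</p>

```python
import os
import tempfile

import pandas as pd


def write_by_file(df: pd.DataFrame, out_dir: str) -> None:
    # Safe once every row of a given file lives only in this partition.
    for fname, group in df.groupby("file"):
        group.drop(columns="file").to_csv(os.path.join(out_dir, fname), index=False)


df = pd.DataFrame({"x": [1, 2, 3], "file": ["a.csv", "a.csv", "b.csv"]})
written = {}
with tempfile.TemporaryDirectory() as d:
    write_by_file(df, d)
    for name in os.listdir(d):
        written[name] = pd.read_csv(os.path.join(d, name))
```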
| <python><dask><dask-dataframe> | 2023-10-31 10:14:07 | 0 | 334 | Quentin BLAMPEY |
77,395,009 | 8,241,568 | Stacked bars in subplots with plotly | <p>I have the code below, but I am now trying to arrange the two subplots in a 2-row, 1-column format (one subplot underneath the other). Unfortunately, when I try to do this, my bar plot's bars are no longer stacked. Is there a way to do this with Plotly?</p>
<pre><code>import plotly.graph_objects as go
# Sample data for the stacked bar plot
categories = ['Category 1', 'Category 2', 'Category 3']
values1 = [10, 20, 15]
values2 = [5, 15, 10]
values3 = [15, 10, 5]
# Sample data for the line plot
x = [1, 2, 3]
y = [30, 40, 35]
# Create a subplot with a stacked bar plot
fig = go.Figure()
fig.add_trace(go.Bar(x=categories, y=values1, name='Value 1'))
fig.add_trace(go.Bar(x=categories, y=values2, name='Value 2'))
fig.add_trace(go.Bar(x=categories, y=values3, name='Value 3'))
# Create a subplot with a line plot
fig.add_trace(go.Scatter(x=x, y=y, mode='lines+markers', name='Line Plot'))
# Set layout for the entire figure
fig.update_layout(
title='Stacked Bar Plot and Line Plot',
barmode='stack', # To create a stacked bar plot
xaxis=dict(title='Categories'), # X-axis title
yaxis=dict(title='Values'), # Y-axis title
)
# Show the plot
fig.show()
</code></pre>
<p>Here is my attempt - as you can see, the bars end up not being stacked anymore:</p>
<pre><code>import plotly.subplots as sp
import plotly.graph_objects as go
# Create a 2x1 subplot grid
fig = sp.make_subplots(rows=2, cols=1)
# Data for the stacked bar plot
categories = ['Category A', 'Category B', 'Category C']
values1 = [10, 20, 15]
values2 = [5, 15, 10]
# Create the stacked bar plot
bar_trace1 = go.Bar(x=categories, y=values1, name='Trace 1')
bar_trace2 = go.Bar(x=categories, y=values2, name='Trace 2')
fig.add_trace(bar_trace1, row=1, col=1)
fig.add_trace(bar_trace2, row=1, col=1)
# Data for the line plot
x_values = [1, 2, 3, 4, 5]
y_values = [3, 5, 8, 4, 9]
# Create the line plot
line_trace = go.Scatter(x=x_values, y=y_values, mode='lines', name='Line Plot')
fig.add_trace(line_trace, row=2, col=1)
# Update layout
fig.update_layout(title='Stacked Bar and Line Plot',
xaxis_title='X-Axis Label',
yaxis_title='Y-Axis Label')
# Show the plot
fig.show()
</code></pre>
| <python><plotly> | 2023-10-31 10:13:25 | 1 | 1,257 | Liky |
77,394,929 | 3,104,974 | Convert pyspark.sql.column.Column to numpy array | <p>I'm new to PySpark and don't yet have a full overview of the available methods. I want to get the <em>unique values of a single column</em> of a PySpark dataframe. This approach doesn't work:</p>
<pre><code>F.array_distinct(my_spark_df.my_column).???
</code></pre>
<p>Whatever <code>???</code>-function I try to apply to the column, <code>toPandas()</code>, <code>collect()</code>, <code>display()</code> etc., I get:</p>
<pre><code>TypeError: 'Column' object is not callable
</code></pre>
<p>I also found <a href="https://stackoverflow.com/questions/38610559/convert-spark-dataframe-column-to-python-list">this thread</a> which is similar, but didn't help in my case since I want to select only distinct values before collecting them.</p>
| <python><python-3.x><pyspark><apache-spark-sql> | 2023-10-31 10:01:52 | 2 | 6,315 | ascripter |
77,394,812 | 1,678,780 | Awaiting request.json() in FastAPI hangs forever | <p>I added the exception handling as given here (<a href="https://github.com/tiangolo/fastapi/discussions/6678" rel="noreferrer">https://github.com/tiangolo/fastapi/discussions/6678</a>) to my code but I want to print the complete request body to see the complete content. However, when I await the <code>request.json()</code> it never terminates. <code>request.json()</code> returns a coroutine, so I need to wait for the coroutine to complete before printing the result.
How can I print the content of the request in case an invalid request was sent to the endpoint?</p>
<p>Code example from github with 2 changes by me in the error handler and a simple endpoint.</p>
<pre class="lang-py prettyprint-override"><code>import logging
from fastapi import FastAPI, Request, status
from fastapi.exceptions import RequestValidationError
from fastapi.responses import JSONResponse
from pydantic import BaseModel
app = FastAPI()
@app.exception_handler(RequestValidationError)
async def validation_exception_handler(
request: Request, exc: RequestValidationError
) -> JSONResponse:
exc_str = f"{exc}".replace("\n", " ").replace(" ", " ")
logging.error(f"{request}: {exc_str}")
body = await request.json() # This line was added by me and never completes
logging.error(body) # This line was added by me
content = {"status_code": 10422, "message": exc_str, "data": None}
return JSONResponse(
content=content, status_code=status.HTTP_422_UNPROCESSABLE_ENTITY
)
class User(BaseModel):
name: str
@app.post("/")
async def test(body: User) -> User:
return body
</code></pre>
| <python><fastapi><freeze><starlette><exceptionhandler> | 2023-10-31 09:46:01 | 1 | 1,216 | GenError |
77,394,744 | 9,860,033 | How to directly extract fields of a nested dict using itemgetter in Python? | <p>I'm using python's itemgetter from the operator module.</p>
<p>This piece of code works perfectly fine:</p>
<pre><code>from operator import itemgetter
example_dict = {
"field1": 1,
"field2": 2,
"nestedDict": {
"subfield1": 3
}
}
field1, field2, nestedDict = itemgetter('field1', 'field2', 'nestedDict')(example_dict)
print(field1, field2, nestedDict)
</code></pre>
<p>Output:</p>
<pre><code>1 2 {'subfield1': 3}
</code></pre>
<p>Is there any way that I could directly extract <code>subfield</code> from <code>nestedDict</code>?</p>
<p>Ideally something like this:</p>
<pre><code>field1, field2, subfield1 = itemgetter('field1', 'field2', ('nestedDict', 'subfield1'))(example_dict)
</code></pre>
<p>I'm looking for a pythonic one-liner. I know how to solve this using a custom function or class, but I'd like to learn a better way to do it.</p>
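<p>In case it helps others: <code>itemgetter</code> treats every argument as a single key, so a tuple argument would be looked up as one (tuple-valued) key. A small <code>reduce</code>-based helper gives the desired call shape (sketch; <code>path_getter</code> is a made-up name):</p>

```python
from functools import reduce
from operator import getitem

example_dict = {"field1": 1, "field2": 2, "nestedDict": {"subfield1": 3}}


def path_getter(*paths):
    # Each path is either a single key or a tuple of keys to walk in order.
    return lambda obj: tuple(
        reduce(getitem, p if isinstance(p, tuple) else (p,), obj) for p in paths
    )


field1, field2, subfield1 = path_getter(
    "field1", "field2", ("nestedDict", "subfield1")
)(example_dict)
```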
| <python><json><dictionary> | 2023-10-31 09:33:47 | 1 | 1,134 | waykiki |
77,394,717 | 1,219,593 | python read yaml file with variable substitution | <p>I use the yaml library to read yaml files, and it works great. My only issue is with the variable substitution.</p>
<p>For instance, here is a simple example of yaml file I want to read:</p>
<pre><code>queues:
mail-queue-name: mail-app-test
mail-dlq-name: ${queues.mail-queue-name}-dlq
</code></pre>
<p>(just imagine it's not only 1 variable, but a lot of them)</p>
<p>I'd like to extract that into something like:</p>
<pre><code>with open('application.yaml', 'r') as file:
yaml_file = yaml.safe_load(file)
print(yaml['queues']['mail-queue-name']) // print mail-app-test
print(yaml['queues']['mail-dlq-name']) // print mail-app-test-dlq
</code></pre>
<p>Just a simple example of variable substitution. How can I achieve that? Using other libraries it's not a problem.</p>
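<p>Since plain PyYAML has no interpolation, one option is a small post-processing pass over the loaded data (sketch; a single pass suffices as long as referenced values are themselves placeholder-free -- chained references would need a fixed-point loop). Libraries such as OmegaConf also support this <code>${a.b}</code> syntax natively, if a third-party dependency is acceptable:</p>

```python
import re

import yaml  # PyYAML

raw = """
queues:
  mail-queue-name: mail-app-test
  mail-dlq-name: ${queues.mail-queue-name}-dlq
"""

PLACEHOLDER = re.compile(r"\$\{([^}]+)\}")


def lookup(root, dotted):
    node = root
    for key in dotted.split("."):
        node = node[key]
    return node


def resolve(node, root):
    # Recursively substitute ${a.b.c} placeholders in every string value.
    if isinstance(node, dict):
        return {k: resolve(v, root) for k, v in node.items()}
    if isinstance(node, str):
        return PLACEHOLDER.sub(lambda m: str(lookup(root, m.group(1))), node)
    return node


data = yaml.safe_load(raw)
config = resolve(data, data)
```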
| <python><yaml> | 2023-10-31 09:29:07 | 1 | 314 | Peppe |
77,394,642 | 12,242,085 | How to fill NaN values in 3 columns based on a group of values in 2 other columns, leaving values in the rest of the columns untouched, in a Data Frame in Python Pandas? | <p>I have a Data Frame in Python Pandas like the one below:</p>
<pre><code>data = [
(1, None, None, None, '2023-01-10', None, None),
(1, None, None, None, '2023-01-10', 1, 0),
(1, 9, 0, 0.55, '2023-01-10', 15, None),
(2, None, None, None, '2023-11-22', 2, 1),
(2, 88, 1, 0.68, '2023-11-22', 103, 8)
]
df = pd.DataFrame(data, columns=['id', 'col1', 'col2', 'col3', 'col_date', 'col4', 'col5'])
df
</code></pre>
<p><a href="https://i.sstatic.net/gb90S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gb90S.png" alt="enter image description here" /></a></p>
<p>And I need, for each group of values from the columns <code>id</code> and <code>col_date</code> (datetime type), to fill the values in the columns <code>col1</code>, <code>col2</code>, and <code>col3</code>. For each such group, at least one row has values in <code>col1</code>, <code>col2</code>, and <code>col3</code>, and I need to fill the rest of the rows of that group with these values.</p>
<p>The values in the columns <code>col4</code> and <code>col5</code> (and the rest of the many columns not included in this example) have to stay untouched.</p>
<p>So as a result I need something like below:</p>
<pre><code>data = [
(1, 9, 0, 0.55, '2023-01-10', None, None),
(1, 9, 0, 0.55, '2023-01-10', 1, 0),
(1, 9, 0, 0.55, '2023-01-10', 15, None),
(2, 88, 1, 0.68, '2023-11-22', 2, 1),
(2, 88, 1, 0.68, '2023-11-22', 103, 8)
]
df = pd.DataFrame(data, columns=['id', 'col1', 'col2', 'col3', 'col_date', 'col4', 'col5'])
df
</code></pre>
<p><a href="https://i.sstatic.net/uaMjy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uaMjy.png" alt="enter image description here" /></a></p>
<p>How can I do that in Python Pandas ?</p>
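<p>One way that matches this exactly: <code>groupby(...).transform('first')</code> returns each group's first non-NaN value, broadcast back onto every row of the group, and assigning it to only the three columns leaves everything else untouched:</p>

```python
import pandas as pd

data = [
    (1, None, None, None, "2023-01-10", None, None),
    (1, None, None, None, "2023-01-10", 1, 0),
    (1, 9, 0, 0.55, "2023-01-10", 15, None),
    (2, None, None, None, "2023-11-22", 2, 1),
    (2, 88, 1, 0.68, "2023-11-22", 103, 8),
]
df = pd.DataFrame(data, columns=["id", "col1", "col2", "col3", "col_date", "col4", "col5"])

cols = ["col1", "col2", "col3"]
# 'first' skips NaN, so each group contributes its one complete row's values.
df[cols] = df.groupby(["id", "col_date"])[cols].transform("first")
```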
| <python><pandas><dataframe><date> | 2023-10-31 09:17:28 | 2 | 2,350 | dingaro |
77,394,442 | 12,242,085 | How to fill NaN values in 3 columns based on group of values in 2 other columns in Data Frame in Python Pandas? | <p>I have a Data Frame in Python Pandas like the one below:</p>
<pre><code>data = [
(1, None, None, None, '2023-01-10'),
(1, None, None, None, '2023-01-10'),
(1, 9, 0, 0.55, '2023-01-10'),
(2, None, None, None, '2023-11-22'),
(2, 88, 1, 0.68, '2023-11-22')
]
df = pd.DataFrame(data, columns=['id', 'col1', 'col2', 'col3', 'col_date'])
df
</code></pre>
<p><a href="https://i.sstatic.net/jbq7A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jbq7A.png" alt="enter image description here" /></a></p>
<p>And I need, for each group of values from the columns <code>id</code> and <code>col_date</code> (datetime type), to fill the values in the columns <code>col1</code>, <code>col2</code>, and <code>col3</code>. For each such group, at least one row has values in <code>col1</code>, <code>col2</code>, and <code>col3</code>, and I need to fill the rest of the rows with these values.</p>
<p>So, as a result, I need to have something like below:</p>
<pre><code>data = [
(1, 9, 0, 0.55, '2023-01-10'),
(1, 9, 0, 0.55, '2023-01-10'),
(1, 9, 0, 0.55, '2023-01-10'),
(2, 88, 1, 0.68, '2023-11-22'),
(2, 88, 1, 0.68, '2023-11-22')
]
df = pd.DataFrame(data, columns=['id', 'col1', 'col2', 'col3', 'col_date'])
df
</code></pre>
<p><a href="https://i.sstatic.net/t0eRd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t0eRd.png" alt="enter image description here" /></a></p>
<p>How can I do that in Python Pandas ?</p>
| <python><pandas><dataframe><date> | 2023-10-31 08:46:01 | 3 | 2,350 | dingaro |
77,394,432 | 6,021,482 | Microsoft Build Tools for Python openAI | <p>I am installing the OpenAI Python library with</p>
<p><code>pip install openai</code></p>
<p>It was throwing an error that the Microsoft build tools are required. I installed them by following this <a href="https://stackoverflow.com/questions/64261546/how-to-solve-error-microsoft-visual-c-14-0-or-greater-is-required-when-inst">Stack Overflow answer</a>.</p>
<p>I've installed the build tools and the individual packages
<a href="https://i.sstatic.net/P1uLp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/P1uLp.png" alt="enter image description here" /></a></p>
<p>Now I'm facing the below error
<a href="https://i.sstatic.net/uKUAw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uKUAw.png" alt="enter image description here" /></a></p>
<p>Please let me know how I can fix this error. My OS is Windows Server 2016.</p>
| <python><visual-c++><openai-api><visual-studio-2017-build-tools> | 2023-10-31 08:44:39 | 1 | 377 | Azeem112 |
77,394,187 | 12,224,591 | Insert Scrollbar into child ListBox? | <p>I'm attempting to insert a <code>Scrollbar</code> object into a <code>ListBox</code> object, when the <code>ListBox</code> object is a child of an existing window using Python's Tkinter library.</p>
<p>The vast majority of the Tkinter <code>Scrollbar</code> tutorials out there (<a href="https://www.tutorialspoint.com/python/tk_scrollbar.htm" rel="nofollow noreferrer">1</a>, <a href="https://www.geeksforgeeks.org/scrollable-frames-in-tkinter/#" rel="nofollow noreferrer">2</a>) put the <code>Scrollbar</code> object in a window that's directly a <code>ListBox</code>. I haven't seen one yet which places the <code>Scrollbar</code> into a <code>ListBox</code> that's a child inside of an existing window.</p>
<p>The following Python testing code:</p>
<pre><code>import tkinter as tk
def main():
mainWindow = tk.Tk(className = " TEST")
mainWindow.geometry("300x300")
scrollbar = tk.Scrollbar(mainWindow)
listBox = tk.Listbox(
mainWindow,
yscrollcommand = scrollbar.set,
)
listBox.place(x = 100, y = 100, anchor = "center")
for i in range(100):
listBox.insert(tk.END, "TEST")
scrollbar.pack(side = tk.RIGHT, fill = tk.Y)
scrollbar.config(command = listBox.yview())
mainWindow.resizable(False, False)
mainWindow.mainloop()
if (__name__ == "__main__"):
main()
</code></pre>
<p>Places the <code>Scrollbar</code> directly onto the right side of the main window, as such:</p>
<p><a href="https://i.sstatic.net/Jn1qpm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jn1qpm.png" alt="enter image description here" /></a></p>
<p>I'm assuming the <code>pack</code> function forces an element to one side of the parent window. If I try to specify the specific <code>Scrollbar</code> position using the <code>place</code> function instead, I cannot appear to change the height of the <code>Scrollbar</code>:</p>
<p><a href="https://i.sstatic.net/DO4C3m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DO4C3m.png" alt="enter image description here" /></a></p>
<p>I thought of specifying the <code>listBox</code> object to the argument of the <code>ScrollBar</code> constructor (instead of <code>mainWindow</code>), however it appears that I cannot create the <code>ScrollBar</code> object after I create the <code>ListBox</code> object, because I need to provide the <code>ScrollBar</code> object to the <code>ListBox</code> constructor for the <code>yscrollcommand</code> field. It appears that I cannot change the <code>yscrollcommand</code> field after creating the <code>ListBox</code> object.</p>
<p>What is the proper way to place a <code>Scrollbar</code> object inside of a <code>ListBox</code> object that's a child of an existing window?</p>
<p>Thanks for reading my post, any guidance is appreciated.</p>
| <python><tkinter> | 2023-10-31 08:06:39 | 3 | 705 | Runsva |
77,394,182 | 22,629,028 | Mojo (Mac) using wrong python lib | <p>Like most Mac users, I have two separate versions of Python on my machine: the one installed by Apple (version 3.9 in <code>/usr/bin</code>) and one installed via Homebrew (version 3.11 in <code>/opt/homebrew/bin</code>).</p>
<p>I've put the latter first in path and, sure enough,</p>
<pre class="lang-bash prettyprint-override"><code>which python3
</code></pre>
<p>prints</p>
<blockquote>
<p>/opt/homebrew/bin/python3</p>
</blockquote>
<p>and</p>
<pre class="lang-bash prettyprint-override"><code>python3 --version
</code></pre>
<p>prints</p>
<blockquote>
<p>Python 3.11.6</p>
</blockquote>
<p>but Mojo seems to use the other version, as packages installed via <code>pip3</code> are not accessible.</p>
<p>The following <code>sys_path.🔥</code></p>
<pre class="lang-py prettyprint-override"><code>from python import Python
def main():
let sys = Python.import_module("sys")
print(sys.prefix)
</code></pre>
<p>prints</p>
<blockquote>
<p>/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.9</p>
</blockquote>
<p>How do I make Mojo use the correct python lib?</p>
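Mojo locates libpython on its own rather than honoring the shell's <code>PATH</code>. Early Modular documentation describes pointing it at a specific interpreter's shared library via the <code>MOJO_PYTHON_LIBRARY</code> environment variable; the dylib path below is an assumption, so verify it for your install:

```shell
# Point Mojo at Homebrew's libpython; adjust 3.11 to your version
export MOJO_PYTHON_LIBRARY="$(find "$(brew --prefix)/opt/python@3.11" -name 'libpython3.11.dylib' | head -n 1)"
mojo sys_path.🔥
```

If <code>sys.prefix</code> still shows the Xcode framework path, the variable was probably not visible to the <code>mojo</code> process (for example, it was exported in a different shell).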
| <python><modular><mojolang> | 2023-10-31 08:06:12 | 1 | 2,212 | volkerschulz |
77,394,180 | 6,002,424 | Can't seem to find a proper way of splitting a huge (really huge) video into 1 hour chunks without any loss in quality | <p>I have backed up a video from Milestone system in mkv format. It's a ~106 hours-video with 37 GB size. I have searched for way to split into one-hour chunks without any quality loss. However, I am facing two problems:</p>
<ol>
<li>A python code with <code>subprocess</code> has produced videos in arbitrary lengths such as 11 minutes, 1.5 hours, 55 minutes, etc.</li>
<li>The produced videos look corrupt. When played, I can't use the forward or backward (5s) functions; the video instantly leaps to the end. There is no such issue when the original video is played.
Here is the code I used to do the job:</li>
</ol>
<pre><code>import subprocess
import math
input_file = "video.mkv"
output_prefix = "output"
segment_duration = 60 * 60
ffprobe_command = [
"ffprobe",
"-v",
"error",
"-show_entries",
"format=duration",
"-of",
"default=noprint_wrappers=1:nokey=1",
input_file,
]
total_duration = float(subprocess.check_output(ffprobe_command))
num_segments = int(total_duration / segment_duration)
for i in range(num_segments):
start_time = i * segment_duration
output_file = f"{output_prefix}_{i + 1}.mkv"
ffmpeg_command = [
"ffmpeg",
"-ss",
str(start_time),
"-i",
input_file,
"-t",
str(segment_duration),
"-c:v",
"copy",
"-c:a",
"copy",
output_file,
]
subprocess.run(ffmpeg_command)
</code></pre>
<p>I also tested with the ffmpeg command directly, with similarly arbitrary output lengths or other failures such as <code>Too many packets buffered for output stream 0:0</code>.</p>
<p>Could someone suggest a better way to do the splitting, in any language?
I don't mind the splitting taking a long time; all I need is one-hour videos without any corruption or quality loss.
I am on a Windows machine, if that helps.</p>
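One approach worth trying (a sketch, untested against the original 37 GB file) is to let ffmpeg's segment muxer do the split in a single pass: <code>-c copy</code> avoids re-encoding (no quality loss), and <code>-reset_timestamps 1</code> makes each chunk start at zero so seeking works. Because stream-copied cuts snap to keyframes, chunks will be approximately, not exactly, one hour.

```python
import subprocess


def build_segment_command(input_file, output_prefix, segment_seconds=3600):
    # One-pass split with ffmpeg's segment muxer; no per-chunk -ss seeking,
    # so chunk lengths stay close to segment_seconds (keyframe-aligned).
    return [
        "ffmpeg", "-i", input_file,
        "-c", "copy",            # no re-encoding, no quality loss
        "-map", "0",             # keep all streams
        "-f", "segment",
        "-segment_time", str(segment_seconds),
        "-reset_timestamps", "1",  # each chunk starts at t=0, seeks cleanly
        f"{output_prefix}_%03d.mkv",
    ]


if __name__ == "__main__":
    # Print the command instead of running it, so this sketch is side-effect free
    print(" ".join(build_segment_command("video.mkv", "output")))
```

Running the returned list with <code>subprocess.run(..., check=True)</code> should produce <code>output_000.mkv</code>, <code>output_001.mkv</code>, and so on.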
| <python><windows><video><ffmpeg> | 2023-10-31 08:06:01 | 0 | 1,510 | bit_scientist |
77,393,960 | 5,368,083 | JSON dumping a dictionary where some values are Pydantic models | <p>Assume a dictionary with an arbitrary structure, where some values are native Python objects and others are instances of Pydantic's <code>BaseModel</code> subclasses</p>
<p>e.g.</p>
<pre class="lang-py prettyprint-override"><code>my_dict = {"key1": "value1",
"key2": {"key3": pydantic_object1,
"key4": 4},
"key5": pydantic_object2}
</code></pre>
<p>I want to dump this dictionary to a JSON file. The naive solution of using <code>BaseModel.model_dump()</code> to first convert all the Pydantic objects to dictionaries and then using <code>json.dump</code> doesn't work, because some attributes of the Pydantic objects cannot be serialized by the native serializer, e.g. <code>datetime</code>s and other custom objects whose serializers are attached to the object's implementation.</p>
<p>I also couldn't figure out how to write a custom encoder that will use Pydantic's built-in JSON encoder.</p>
<p>How would you solve this (Pydantic v2 and above)?</p>
| <python><json><dictionary><pydantic> | 2023-10-31 07:26:33 | 1 | 12,767 | bluesummers |
77,393,783 | 13,238,846 | Python app running correctly on local environment but not on Azure | <p>I have built a Python web app for Teams and deployed it to Azure, and it is giving me the following error. The library versions I'm using are the same in both environments. When I checked the error log, the following errors were there; I cannot figure out what is causing this issue. Everything is similar in both versions, and the local version works perfectly fine with no errors.</p>
<pre><code>Traceback (most recent call last):
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/botbuilder/core/bot_adapter.py", line 128, in run_pipeline
return await self._middleware.receive_activity_with_status(
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/botbuilder/core/middleware_set.py", line 69, in receive_activity_with_status
return await self.receive_activity_internal(context, callback)
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/botbuilder/core/middleware_set.py", line 79, in receive_activity_internal
return await callback(context)
File "/tmp/8dbd9c87f8c7bef/bots/state_management_bot.py", line 46, in on_turn
await super().on_turn(turn_context)
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/botbuilder/core/activity_handler.py", line 70, in on_turn
await self.on_message_activity(turn_context)
File "/tmp/8dbd9c87f8c7bef/bots/state_management_bot.py", line 93, in on_message_activity
response = chat(turn_context.activity.text, user_profile.user_id)
File "/tmp/8dbd9c87f8c7bef/ai_models/openaifunc.py", line 139, in chat
response = mrkl.run(userMsg)
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/langchain/chains/base.py", line 505, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/langchain/chains/base.py", line 310, in __call__
raise e
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/langchain/chains/base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1146, in _call
next_step_output = self._take_next_step(
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/langchain/agents/agent.py", line 996, in _take_next_step
observation = tool.run(
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/langchain/tools/base.py", line 365, in run
raise e
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/langchain/tools/base.py", line 337, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/langchain/tools/base.py", line 510, in _run
self.func(
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/langchain/tools/base.py", line 449, in __call__
return self.run(tool_input, callbacks=callbacks)
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/langchain/tools/base.py", line 365, in run
raise e
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/langchain/tools/base.py", line 337, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/langchain/tools/base.py", line 631, in _run
else self.func(*args, **kwargs)
File "/tmp/8dbd9c87f8c7bef/ai_models/customTools.py", line 356, in productdescriptions
product_info = productdata(query)
File "/tmp/8dbd9c87f8c7bef/ai_models/customTools.py", line 366, in productdata
docsearch = Pinecone.from_existing_index(indexname, embedding)
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/langchain/vectorstores/pinecone.py", line 437, in from_existing_index
pinecone_index = cls.get_pinecone_index(index_name, pool_threads)
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/langchain/vectorstores/pinecone.py", line 354, in get_pinecone_index
indexes = pinecone.list_indexes() # checks if provided index exists
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/pinecone/manage.py", line 185, in list_indexes
response = api_instance.list_indexes()
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/pinecone/core/client/api_client.py", line 776, in __call__
return self.callable(self, *args, **kwargs)
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/pinecone/core/client/api/index_operations_api.py", line 1132, in __list_indexes
return self.call_with_http_info(**kwargs)
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/pinecone/core/client/api_client.py", line 838, in call_with_http_info
return self.api_client.call_api(
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/pinecone/core/client/api_client.py", line 413, in call_api
return self.__call_api(resource_path, method,
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/pinecone/core/client/api_client.py", line 200, in __call_api
response_data = self.request(
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/pinecone/core/client/api_client.py", line 439, in request
return self.rest_client.GET(url,
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/pinecone/core/client/rest.py", line 236, in GET
return self.request("GET", url,
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/pinecone/core/client/rest.py", line 202, in request
r = self.pool_manager.request(method, url,
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/urllib3/request.py", line 74, in request
return self.request_encode_url(
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/urllib3/request.py", line 96, in request_encode_url
return self.urlopen(method, url, **extra_kw)
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/urllib3/poolmanager.py", line 376, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 714, in urlopen
httplib_response = self._make_request(
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/urllib3/connectionpool.py", line 415, in _make_request
conn.request(method, url, **httplib_request_kw)
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/urllib3/connection.py", line 244, in request
super(HTTPConnection, self).request(method, url, body=body, headers=headers)
File "/opt/python/3.10.12/lib/python3.10/http/client.py", line 1283, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/opt/python/3.10.12/lib/python3.10/http/client.py", line 1324, in _send_request
self.putheader(hdr, value)
File "/tmp/8dbd9c87f8c7bef/antenv/lib/python3.10/site-packages/urllib3/connection.py", line 224, in putheader
_HTTPConnection.putheader(self, header, *values)
File "/opt/python/3.10.12/lib/python3.10/http/client.py", line 1260, in putheader
if _is_illegal_header_value(values[i]):
TypeError: expected string or bytes-like object
</code></pre>
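The final <code>TypeError: expected string or bytes-like object</code> inside <code>putheader</code> typically means an HTTP header value is <code>None</code>; in this stack that is most plausibly the Pinecone API key, set locally but missing from the Azure App Service application settings. A hedged sanity check (the variable names are assumptions; match them to your app settings) can fail fast with a readable message:

```python
import os


def require_env(*names):
    # Raise one clear error listing every missing variable, instead of a
    # deep urllib3 TypeError when a None header value is sent.
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {n: os.environ[n] for n in names}


# Demo values so the snippet is self-contained; on Azure these would come
# from the App Service "Configuration > Application settings" blade.
os.environ.setdefault("PINECONE_API_KEY", "dummy-key-for-demo")
os.environ.setdefault("PINECONE_ENVIRONMENT", "dummy-env-for-demo")

cfg = require_env("PINECONE_API_KEY", "PINECONE_ENVIRONMENT")
```

Call <code>require_env</code> once at startup, before any <code>pinecone.init</code>/<code>list_indexes</code> call.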
| <python><azure-web-app-service><microsoft-teams><langchain> | 2023-10-31 06:56:31 | 1 | 427 | Axen_Rangs |
77,393,662 | 13,086,128 | Change mode for plotly.express | <p>I have a bunch of plotly plots. I am giving example from the plotly site.</p>
<p>I want to plot all of them in dark mode.
I have to manually enter <code>template="plotly_dark"</code> for each plot.</p>
<p>How can I change the default setting of my display so that I do not have to manually enter <code>template="plotly_dark"</code> for each plot?</p>
<pre><code>import plotly.express as px
df = px.data.iris()
fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species", marginal_y="violin",
marginal_x="box", trendline="ols", template="plotly_dark")
fig.show()
</code></pre>
<hr />
<pre><code>import plotly.express as px
df = px.data.iris()
df["e"] = df["sepal_width"]/100
fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species", error_x="e", error_y="e", template="plotly_dark")
fig.show()
</code></pre>
| <python><plotly> | 2023-10-31 06:31:17 | 1 | 30,560 | Talha Tayyab |
77,393,610 | 12,242,085 | How to create additional rows for each combination of values in 2 columns based on data in third column in Data Frame in Python Pandas? | <p>I have Data Frame in Python Pandas like below:</p>
<pre><code>data = [
(1, '2023-10-10', '2023-09-25', 1, 20, 11),
(1, '2023-10-10', '2023-10-04', 0, 10, 10),
(1, '2023-05-05', '2023-05-01', 0, 10, None),
(2, '2023-02-13', '2023-02-10', 0, 25, 10),
(2, '2023-02-13', '2023-02-02', None, 12, None)
]
df = pd.DataFrame(data, columns=['id', 'survey', 'event', 'col1', 'col2', 'col3'])
df
</code></pre>
<p><a href="https://i.sstatic.net/BDoyv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BDoyv.png" alt="enter image description here" /></a></p>
<p><strong>Input dataset:</strong></p>
<p>And as you can see I have columns:</p>
<ul>
<li>id - id of client</li>
<li>survey - date of survey (data type object)</li>
<li>event - date of event (data type object)</li>
<li>col1, col2, col3 - rest of variables (in real dataset I have many more)</li>
</ul>
<p><strong>Requirements:</strong></p>
<p>For each combination of values in the columns id, survey I need to create rows with:</p>
<ul>
<li>the same values in column: id, survey</li>
<li>NaN values in columns: col1, col2, col3</li>
<li>dates (data type object) in the column event which do not exist for a given combination of values in id, survey, going back one month from the date in the survey column (e.g. for the combination id = 1 and survey = '2023-10-10' I need to create rows whose event values run from 2023-10-10 back to 2023-09-10; remember, each month needs to have 31 days!! :) )</li>
<li>we assume that each month has 31 days (it does not matter how many days a month really has)</li>
</ul>
<p><strong>Example of needed output:</strong></p>
<p>For each combination of values in id, survey I create rows for all the days going 31 days back from the date in the survey column which do not exist in the column event for that combination, and I put NaN in the columns col1, col2, col3 for all created rows.</p>
<pre><code>data = [
(1, '2023-10-10', '2023-09-25', 1, 20, 11),
(1, '2023-10-10', '2023-10-04', 0, 10, 10),
(1, '2023-10-10', '2023-10-09', None, None, None),
(1, '2023-10-10', '2023-10-08', None, None, None),
(1, '2023-10-10', '2023-10-07', None, None, None),
(1, '2023-10-10', '2023-10-06', None, None, None),
(1, '2023-10-10', '2023-10-05', None, None, None),
(1, '2023-10-10', '2023-10-03', None, None, None),
(1, '2023-10-10', '2023-10-02', None, None, None),
(1, '2023-10-10', '2023-10-01', None, None, None),
(1, '2023-10-10', '2023-09-31', None, None, None),
(1, '2023-10-10', '2023-09-30', None, None, None),
(1, '2023-10-10', '2023-09-29', None, None, None),
(1, '2023-10-10', '2023-09-28', None, None, None),
(1, '2023-10-10', '2023-09-27', None, None, None),
(1, '2023-10-10', '2023-09-26', None, None, None),
(1, '2023-10-10', '2023-09-24', None, None, None),
(1, '2023-10-10', '2023-09-23', None, None, None),
(1, '2023-10-10', '2023-09-22', None, None, None),
(1, '2023-10-10', '2023-09-21', None, None, None),
(1, '2023-10-10', '2023-09-20', None, None, None),
(1, '2023-10-10', '2023-09-19', None, None, None),
(1, '2023-10-10', '2023-09-18', None, None, None),
(1, '2023-10-10', '2023-09-17', None, None, None),
(1, '2023-10-10', '2023-09-16', None, None, None),
(1, '2023-10-10', '2023-09-15', None, None, None),
(1, '2023-10-10', '2023-09-14', None, None, None),
(1, '2023-10-10', '2023-09-13', None, None, None),
(1, '2023-10-10', '2023-09-12', None, None, None),
(1, '2023-10-10', '2023-09-11', None, None, None),
(1, '2023-10-10', '2023-09-10', None, None, None),
(1, '2023-05-05', '2023-05-01', 0, 10, None),
(1, '2023-05-05', '2023-05-05', None, None, None),
(1, '2023-05-05', '2023-05-04', None, None, None),
(1, '2023-05-05', '2023-05-03', None, None, None),
(1, '2023-05-05', '2023-05-02', None, None, None),
(1, '2023-05-05', '2023-04-31', None, None, None),
(1, '2023-05-05', '2023-04-30', None, None, None),
(1, '2023-05-05', '2023-04-29', None, None, None),
(1, '2023-05-05', '2023-04-28', None, None, None),
(1, '2023-05-05', '2023-04-27', None, None, None),
(1, '2023-05-05', '2023-04-26', None, None, None),
(1, '2023-05-05', '2023-04-25', None, None, None),
(1, '2023-05-05', '2023-04-24', None, None, None),
(1, '2023-05-05', '2023-04-23', None, None, None),
(1, '2023-05-05', '2023-04-22', None, None, None),
(1, '2023-05-05', '2023-04-21', None, None, None),
(1, '2023-05-05', '2023-04-20', None, None, None),
(1, '2023-05-05', '2023-04-19', None, None, None),
(1, '2023-05-05', '2023-04-18', None, None, None),
(1, '2023-05-05', '2023-04-17', None, None, None),
(1, '2023-05-05', '2023-04-16', None, None, None),
(1, '2023-05-05', '2023-04-15', None, None, None),
(1, '2023-05-05', '2023-04-14', None, None, None),
(1, '2023-05-05', '2023-04-13', None, None, None),
(1, '2023-05-05', '2023-04-12', None, None, None),
(1, '2023-05-05', '2023-04-11', None, None, None),
(1, '2023-05-05', '2023-04-10', None, None, None),
(1, '2023-05-05', '2023-04-09', None, None, None),
(1, '2023-05-05', '2023-04-08', None, None, None),
(1, '2023-05-05', '2023-04-07', None, None, None),
(1, '2023-05-05', '2023-04-06', None, None, None),
(1, '2023-05-05', '2023-04-05', None, None, None),
(2, '2023-02-13', '2023-02-10', 0, 25, 10),
(2, '2023-02-13', '2023-02-02', None, 12, None),
(2, '2023-02-13', '2023-02-13', None, None, None),
(2, '2023-02-13', '2023-02-12', None, None, None),
(2, '2023-02-13', '2023-02-11', None, None, None),
(2, '2023-02-13', '2023-02-09', None, None, None),
(2, '2023-02-13', '2023-02-08', None, None, None),
(2, '2023-02-13', '2023-02-07', None, None, None),
(2, '2023-02-13', '2023-02-06', None, None, None),
(2, '2023-02-13', '2023-02-05', None, None, None),
(2, '2023-02-13', '2023-02-04', None, None, None),
(2, '2023-02-13', '2023-02-03', None, None, None),
(2, '2023-02-13', '2023-02-01', None, None, None),
(2, '2023-02-13', '2023-01-31', None, None, None),
(2, '2023-02-13', '2023-01-30', None, None, None),
(2, '2023-02-13', '2023-01-29', None, None, None),
(2, '2023-02-13', '2023-01-28', None, None, None),
(2, '2023-02-13', '2023-01-27', None, None, None),
(2, '2023-02-13', '2023-01-26', None, None, None),
(2, '2023-02-13', '2023-01-25', None, None, None),
(2, '2023-02-13', '2023-01-24', None, None, None),
(2, '2023-02-13', '2023-01-23', None, None, None),
(2, '2023-02-13', '2023-01-22', None, None, None),
(2, '2023-02-13', '2023-01-21', None, None, None),
(2, '2023-02-13', '2023-01-20', None, None, None),
(2, '2023-02-13', '2023-01-19', None, None, None),
(2, '2023-02-13', '2023-01-18', None, None, None),
(2, '2023-02-13', '2023-01-17', None, None, None),
(2, '2023-02-13', '2023-01-16', None, None, None),
(2, '2023-02-13', '2023-01-15', None, None, None),
(2, '2023-02-13', '2023-01-14', None, None, None),
(2, '2023-02-13', '2023-01-13', None, None, None)
]
df = pd.DataFrame(data, columns=['id', 'survey', 'event', 'col1', 'col2', 'col3'])
df
</code></pre>
<p>How can I achieve this kind of output in Python Pandas ?</p>
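Here is a hedged sketch of one way to do this with <code>groupby</code> and a real calendar. Note that <code>pd.date_range</code> does not honor the fictional 31-day months, so dates like 2023-09-31 from the expected output will not appear; the fill-and-merge mechanics are the same either way:

```python
import pandas as pd


def expand_events(df):
    # For each (id, survey) pair, append rows for every calendar day from one
    # month before the survey up to the survey date that is missing from
    # `event`; col1/col2/col3 become NaN automatically during concat.
    pieces = []
    for (id_, survey), group in df.groupby(["id", "survey"], sort=False):
        end = pd.Timestamp(survey)
        days = pd.date_range(end - pd.DateOffset(months=1), end, freq="D")
        missing = sorted(set(days.strftime("%Y-%m-%d")) - set(group["event"]),
                         reverse=True)
        filler = pd.DataFrame({"id": id_, "survey": survey, "event": missing})
        pieces.append(pd.concat([group, filler], ignore_index=True))
    return pd.concat(pieces, ignore_index=True)


data = [(1, '2023-10-10', '2023-10-04', 0, 10, 10)]
df = pd.DataFrame(data, columns=['id', 'survey', 'event', 'col1', 'col2', 'col3'])
out = expand_events(df)
```

For the single group above, the range 2023-09-10 through 2023-10-10 covers 31 calendar days, one of which already exists, so 30 filler rows are added.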
| <python><pandas><dataframe><date> | 2023-10-31 06:18:32 | 2 | 2,350 | dingaro |
77,393,443 | 12,519,954 | How to run multiple Lambda functions locally with Docker | <p>I have multiple Lambda Python functions which are related to each other. Suppose I have a main Lambda function; from that main Lambda function, I have to go to different Lambda functions based on a condition.</p>
<p>I have dockerized the main lambda function like below.</p>
<p>Dockerfile</p>
<pre><code>FROM public.ecr.aws/lambda/python:3.10-x86_64
# Copy requirements.txt
COPY requirements.txt ${LAMBDA_TASK_ROOT}
# Install the specified packages
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
# Copy function code
COPY check_engine_function.py ${LAMBDA_TASK_ROOT}
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "check_engine_function.handler" ]
</code></pre>
<p>Lambda . check_engine_function.py</p>
<pre><code>
import json
import os

import boto3

lambda_client = boto3.client("lambda")  # was undefined in the original snippet
google_tts_lambda = os.environ["GOOGLE_TTS_LAMBDA"]  # name of the TTS Lambda

def handler(event, context):
    json_data = event  # the JSON payload this Lambda receives
    if json_data.get('tts_engine') == 'google':
lambda_client.invoke(
FunctionName=google_tts_lambda,
InvocationType='Event', # Asynchronous invocation
Payload=json.dumps(json_data)
)
return {
'statusCode': 200,
'body': json.dumps('TTS processing initiated.')
}
</code></pre>
<p>In this Lambda function I invoke another Lambda function.
Q1: How can I do this on my local PC using Docker?</p>
<p>Q2: Do I have to create a Dockerfile for each Lambda? Or can I use one Dockerfile for all Lambdas, create the image, and push it to Amazon ECR?</p>
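For Q1: the <code>public.ecr.aws/lambda</code> base images bundle the AWS Lambda Runtime Interface Emulator, so each function's container can be started locally and invoked over HTTP (a second function would simply run in another container on another port). The image tag and payload here are assumptions:

```shell
# Build and start the main function locally (the emulator listens on 8080 inside)
docker build -t check-engine .
docker run -p 9000:8080 check-engine

# From another terminal, invoke it the way Lambda would
curl -X POST "http://localhost:9000/2015-03-31/functions/function/invocations" \
     -d '{"tts_engine": "google"}'
```

For Q2: the usual pattern is one image per function, either one Dockerfile each or one shared Dockerfile whose <code>CMD</code> is overridden per function, each pushed to ECR. Note that <code>lambda_client.invoke</code> still targets the real AWS endpoint unless boto3 is given an <code>endpoint_url</code> pointing at the second local container.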
| <python><amazon-web-services><docker><amazon-s3><aws-lambda> | 2023-10-31 05:32:42 | 1 | 308 | Mahfujul Hasan |
77,393,415 | 7,279,111 | How to add a new route to an existing Flask app | <p>I am a newbie with Flask and am trying to add a new route to <a href="https://github.com/Sanster/lama-cleaner/blob/0b3a9a68a296a3c4fc993bd904b7df6792c34579/lama_cleaner/server.py" rel="nofollow noreferrer">Lama Cleaner</a>.</p>
<p>Existing routes work perfectly on my localhost, but I added the code below</p>
<pre><code>@app.route("/get_ping")
def get_ping():
return str(True), 200
</code></pre>
<p>But when I try to make the request to http://localhost:8080/get_ping, I get a 404 error. Am I missing something?</p>
<p>Thanks!</p>
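A quick way to rule out the usual causes of this 404 (route registered on a different <code>app</code> object than the one being served, server not restarted after the edit, request hitting a different process) is to inspect the app's URL map and exercise the route with the test client; this sketch uses a bare Flask app rather than Lama Cleaner's:

```python
from flask import Flask

app = Flask(__name__)


@app.route("/get_ping")
def get_ping():
    return str(True), 200


# Confirm the rule is registered on *this* app instance
assert any(r.rule == "/get_ping" for r in app.url_map.iter_rules())

# Exercise the route without a running server
resp = app.test_client().get("/get_ping")
```

If the same checks pass inside Lama Cleaner's <code>server.py</code> but the browser still gets a 404, the running process is probably an older build, or the request is hitting a different port or path.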
| <python><flask><custom-error-pages> | 2023-10-31 05:25:42 | 1 | 547 | KingHodor |
77,393,344 | 2,153,235 | Back-ticks in DataFrame.colRegex? | <p>For PySpark, I find back-ticks enclosing regular expressions for
<code>DataFrame.colRegex()</code>
<a href="https://sparkbyexamples.com/pyspark/select-columns-from-pyspark-dataframe" rel="nofollow noreferrer">here</a>,
<a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.DataFrame.colRegex.html" rel="nofollow noreferrer">here</a>,
and in <a href="https://stackoverflow.com/questions/75219588">this SO
question</a>. Here is the
example from the <code>DataFrame.colRegex</code> doc string:</p>
<pre><code>df = spark.createDataFrame([("a", 1), ("b", 2), ("c", 3)], ["Col1", "Col2"])
df.select(df.colRegex("`(Col1)?+.+`")).show()
+----+
|Col2|
+----+
| 1|
| 2|
| 3|
+----+
</code></pre>
<p><a href="https://stackoverflow.com/a/75240179">The answer</a> to the SO question
<em>doesn't</em> show back-ticks for Scala. It refers to the Java
documentation for the <a href="https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html" rel="nofollow noreferrer"><code>Pattern</code>
class</a>,
but that doesn't explain back-ticks.</p>
<p><a href="http://www.wellho.net/resources/ex.php?item=y108/backtick" rel="nofollow noreferrer">This page</a>
indicates the use of back-ticks in Python to represent the string
representation of the adorned variable, but that doesn't apply
to a regular expression.</p>
<p>What is the explanation for the back-ticks?</p>
| <python><regex><apache-spark><pyspark> | 2023-10-31 05:05:19 | 1 | 1,265 | user2153235 |
77,393,316 | 8,253,860 | Is vars() a correct hash function for python classes? | <p>I need to implement the <code>__eq__</code> and <code>__hash__</code> functions in order to allow the <code>lru_cache</code> decorator on some instance methods of the class. This class is supposed to be used as a base class for many downstream classes, so I've added these functions:</p>
<p>This seems to work syntactically for simple test cases, but I'm not sure if it's actually satisfies the conditions for a good hash.</p>
<pre><code>def __eq__(self, __value: object) -> bool:
return vars(self) == vars(__value)
def __hash__(self) -> int:
# convert values to string for comparison
return hash(tuple(sorted(str(vars(self).values()))))
</code></pre>
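One subtlety in the snippet above: <code>sorted(str(vars(self).values()))</code> builds a single string and sorts its characters, not the attribute values. A sketch closer to the intent hashes <code>(name, repr(value))</code> pairs instead. Two caveats still apply: the hash changes if attributes are mutated while the object is cached, and <code>repr</code>-based hashing assumes attribute reprs are stable and meaningful:

```python
class HashableByAttrs:
    def __eq__(self, other):
        return isinstance(other, type(self)) and vars(self) == vars(other)

    def __hash__(self):
        # (name, repr(value)) pairs make unhashable values (lists, dicts)
        # usable; sorting makes the result independent of insertion order.
        # Mutating an attribute changes the hash, which breaks dict/cache use.
        return hash(tuple(sorted((k, repr(v)) for k, v in vars(self).items())))


a, b = HashableByAttrs(), HashableByAttrs()
a.x, b.x = [1, 2], [1, 2]
```

Two instances with equal attributes now compare equal and hash equal, which is the invariant <code>lru_cache</code> relies on.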
| <python><caching><hash> | 2023-10-31 04:57:22 | 1 | 667 | Ayush Chaurasia |
77,393,250 | 15,320,579 | Python Flask API cannot handle multiple requests | <p>I have created a Python Flask API which basically accepts <code>POST</code> requests containing <strong>PDF documents</strong>, performs <strong>OCR</strong> on them and then runs a couple <strong>Deep Learning</strong> models on the text extracted.</p>
<p>I am using <strong>Multi threading</strong> in the <strong>OCR engine</strong> to extract the text from the pages <strong>concurrently</strong>. So the OCR will <strong>utilise all the threads available</strong> in the server in order to extract the text.</p>
<p>For a PDF document with <strong>87 pages</strong> it takes about <strong>70 seconds</strong> for the API to give a response.</p>
<p>If I send a <strong>single request</strong> through Postman, everything works great and I get a response. The issue arises when <strong>multiple requests</strong> are sent. Then I get no response and the API execution keeps on utilising the system resources. Even if I wait for 30 mins I still get no response. On analysing the results of <code>htop</code> I find that all the threads are being used at 100% but I do not get a response. I am running the API as follows with the <code>threaded=True</code> argument:</p>
<pre><code>app.run(debug=False, port = 6996, host = '0.0.0.0', threaded = True)
</code></pre>
<p>I don't know what the issue is. Any help/guidance will be much appreciated. Any links to articles or videos will help me too. Thanks in advance!</p>
<p><strong>PS:</strong> Below is the flow of the API.</p>
<ol>
<li>Convert PDF to image format.</li>
<li>Extract text from all the images using OCR engine. (Multi threading. Here all the available threads will be utilised!).</li>
<li>Clean the text extracted.</li>
<li>Run a couple Deep Learning models on the cleaned text and prepare the response.</li>
</ol>
| <python><python-3.x><flask><concurrency><flask-restful> | 2023-10-31 04:39:29 | 0 | 787 | spectre |
77,393,110 | 7,789,281 | Best way to host multiple pytorch model files for inference? | <h2>Context:</h2>
<ul>
<li>I'm working with an end to end deep learning TTS framework (you give it text input it gives you a wav object back)</li>
<li>I've created a FastAPI endpoint in a docker container that uses the TTS framework to do inference</li>
<li>My frontend client will hit this FastAPI endpoint to do inference on a GPU server</li>
<li>I'm going to have multiple docker containers behind a load balancer (haproxy) all running the same FastAPI endpoint image</li>
</ul>
<h3>My questions:</h3>
<ul>
<li><strong>Storage Choice:</strong> What is the recommended approach for hosting model files when deploying multiple Docker containers? Should I use Docker volumes, or is it advisable to utilize cloud storage solutions like S3 or Digital Ocean Spaces for centralized model storage?</li>
<li><strong>Latency Concerns:</strong> How can I minimize latency when fetching models from cloud storage? Are there specific techniques or optimizations (caching, partial downloads, etc.) that can be implemented to reduce the impact of latency, especially when switching between different models for inference?</li>
</ul>
<p>I'm still learning about mlops so I appreciate any help.</p>
| <python><deep-learning><pytorch><devops><mlops> | 2023-10-31 04:00:13 | 0 | 417 | Drew Scatterday |
77,392,937 | 1,755,083 | Fastest way to create a dataframe from a dict of lists | <p>I have a case where I have my initial data-structure as:</p>
<pre><code>import datetime
data = {
'col1': [1,2,3,4],
'col2': ['a', 'b', 'c', 'd'],
'col3': [datetime.datetime.now() for _ in range(4)],
'col4': [[1], [2,3], [4,5,6], [7,8,9,19]],
}
</code></pre>
<p>I want to create a pandas data frame out of this - So, I do:</p>
<pre><code>df = pd.DataFrame(data)
</code></pre>
<p>My biggest worry here is that I have multiple small dictionaries like the above coming in a streaming fashion. So, using <code>pd.DataFrame()</code> for each small dict is very slow.</p>
<p>On my actual data, it seems to be taking nearly 30-40 ms per dict. I am trying to achieve <5ms ideally... I've been wondering if it is possible.</p>
<p>If I could accumulate all the data together, that would make things faster, because accumulating it in a Python list and then creating a DF out of it is much faster. But I need it to work on the smaller dicts for my application.</p>
<p><strong>Approach 1</strong>: I tried doing something like:</p>
<pre><code>data = {
'col1': np.array([1.0,2.0,3.0,4.0], dtype=float),
'col2': np.array(['a', 'b', 'c', 'd'], dtype=object),
'col3': np.array([datetime.datetime.now() for _ in range(4)]),
'col4': np.array([[1], [2,3], [4,5,6], [7,8,9,19]]),
}
</code></pre>
<p>thinking that if I can give pandas numpy arrays directly - it would be much faster. But that did not help much ...</p>
<p><strong>Approach 2</strong>: I also tried using the internal function <code>pd.DataFrame._from_arrays</code> in pandas and giving the right inputs for that:</p>
<pre><code>df = pd.DataFrame._from_arrays(
[data[k] for k in data.keys()],
pd.Index(list(data.keys())),
pd.RangeIndex(0, len(data[next(iter(data.keys()))])),
)
</code></pre>
<p>and that seems to make it 5-10% faster</p>
<p>I have been reading about pyarrow and whether it can help me in this case, but any experiment I run with arrow seems to make things slower, not faster.<br />
Any suggestions?</p>
<p>Edit: Added a reproducible example</p>
<p>Some assumptions I can make:</p>
<ol>
<li>All the lists will be the same length</li>
<li>The data type in the list will be only float or datetime or object (str, arrays, dict) ... I avoided int due to the None issue they have</li>
<li>There are no corrupt or wrong values (i.e. no mixing of data types)</li>
</ol>
<p>Minimum reproducible example I am using (uses random data):</p>
<pre><code>import datetime
import random
import string
import time
import pandas as pd
random.seed(42)
trials = {
"num": {}, # 1 numeric column
"num10": {}, # 10 numeric columns
"text": {}, # 1 string column
"text10": {}, # 10 string columns
"date": {}, # 1 date column
"date10": {}, # 10 date columns
"mixed": {}, # 1 numeric, 1 string, 1 date column
}
def gen_num(size):
return [random.random() for i in range(size)]
def gen_text(size, strlen=100):
return [
"".join(
random.choice(string.ascii_letters)
for j in range(random.randint(1, strlen))
)
for i in range(size)
]
def gen_date(size):
return [datetime.datetime.now() for i in range(size)]
for size in [1, 10, 50, 100, 1000]:
trials["num"][size] = {"num": gen_num(size)}
trials["num10"][size] = {f"num{k}": gen_num(size) for k in range(1, 11)}
trials["text"][size] = {"text": gen_text(size)}
trials["text10"][size] = {f"text{k}": gen_text(size) for k in range(1, 11)}
trials["date"][size] = {"date": gen_date(size)}
trials["date10"][size] = {f"date{k}": gen_date(size) for k in range(1, 11)}
trials["mixed"][size] = {
"num": gen_num(size),
"text": gen_text(size),
"date": gen_date(size),
}
def run_tests(func, funcname, input="dict"):
times = {}
for name in trials:
times[name] = {}
for size in trials[name]:
start = time.perf_counter()
for _ in range(100):
out = func(trials[name][size])
end = time.perf_counter()
duration = (end - start) * 1000 # in ms
times[name][size] = duration
# print(f"{funcname} - {name} - {size} --- {duration:.3f} ms")
tot_time = sum(v2 for v1 in times.values() for v2 in v1.values())
tot_count = sum(1 for v1 in times.values() for v2 in v1.values())
print(
f"{funcname} - Total Time ".ljust(30, "-")
+ f" {tot_time:.3f} ms in {tot_count} iter"
)
def f1(data):
# Directly let pandas do its thing
return pd.DataFrame(data)
def f2(data):
# Use the internal function: pd.DataFrame._from_arrays
return pd.DataFrame._from_arrays(
[data[k] for k in data.keys()],
pd.Index(list(data.keys())),
pd.RangeIndex(0, len(data[next(iter(data.keys()))])),
)
run_tests(f1, "simple-pd")
# simple-pd - Total Time ------- 2152.598 ms in 35 iter
run_tests(f2, "fromarr")
# fromarr - Total Time --------- 2051.039 ms in 35 iter
</code></pre>
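One pattern worth benchmarking against <code>f1</code>/<code>f2</code> (a sketch, not a guaranteed win for every shape): merge a batch of incoming dicts column-wise in plain Python first, then pay the <code>pd.DataFrame</code> construction cost once per batch instead of once per dict. It assumes every dict has the same keys and list-valued columns, matching the assumptions stated above:

```python
import pandas as pd


def frame_from_dicts(dicts):
    # Concatenate list-valued columns in pure Python (cheap), then build one
    # DataFrame, so pandas' per-call type inference runs once per batch.
    merged = {k: [] for k in dicts[0]}
    for d in dicts:
        for k, v in d.items():
            merged[k].extend(v)
    return pd.DataFrame(merged)


batch = [
    {"col1": [1, 2], "col2": ["a", "b"]},
    {"col1": [3], "col2": ["c"]},
]
df = frame_from_dicts(batch)
```

Whether this helps depends on whether downstream consumers can tolerate the batching delay; if a DataFrame is needed per incoming dict, the per-call overhead is hard to avoid.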
| <python><pandas><dataframe><numpy><performance> | 2023-10-31 02:53:25 | 0 | 3,357 | AbdealiLoKo |
77,392,813 | 377,022 | How do I make interacting updates? | <p>The <a href="https://plotly.com/python/dropdowns/" rel="nofollow noreferrer">Plotly Documentation on Dropdowns</a> contains the following example:</p>
<pre class="lang-py prettyprint-override"><code>import plotly.graph_objects as go
import pandas as pd
# Load dataset
df = pd.read_csv(
"https://raw.githubusercontent.com/plotly/datasets/master/finance-charts-apple.csv")
df.columns = [col.replace("AAPL.", "") for col in df.columns]
# Initialize figure
fig = go.Figure()
# Add Traces
fig.add_trace(
go.Scatter(x=list(df.Date),
y=list(df.High),
name="High",
line=dict(color="#33CFA5")))
fig.add_trace(
go.Scatter(x=list(df.Date),
y=[df.High.mean()] * len(df.index),
name="High Average",
visible=False,
line=dict(color="#33CFA5", dash="dash")))
fig.add_trace(
go.Scatter(x=list(df.Date),
y=list(df.Low),
name="Low",
line=dict(color="#F06A6A")))
fig.add_trace(
go.Scatter(x=list(df.Date),
y=[df.Low.mean()] * len(df.index),
name="Low Average",
visible=False,
line=dict(color="#F06A6A", dash="dash")))
# Add Annotations and Buttons
high_annotations = [dict(x="2016-03-01",
y=df.High.mean(),
xref="x", yref="y",
text="High Average:<br> %.3f" % df.High.mean(),
ax=0, ay=-40),
dict(x=df.Date[df.High.idxmax()],
y=df.High.max(),
xref="x", yref="y",
text="High Max:<br> %.3f" % df.High.max(),
ax=-40, ay=-40)]
low_annotations = [dict(x="2015-05-01",
y=df.Low.mean(),
xref="x", yref="y",
text="Low Average:<br> %.3f" % df.Low.mean(),
ax=0, ay=40),
                   dict(x=df.Date[df.Low.idxmin()],
y=df.Low.min(),
xref="x", yref="y",
text="Low Min:<br> %.3f" % df.Low.min(),
ax=0, ay=40)]
fig.update_layout(
updatemenus=[
dict(
active=0,
buttons=list([
dict(label="None",
method="update",
args=[{"visible": [True, False, True, False]},
{"title": "Yahoo",
"annotations": []}]),
dict(label="High",
method="update",
args=[{"visible": [True, True, False, False]},
{"title": "Yahoo High",
"annotations": high_annotations}]),
dict(label="Low",
method="update",
args=[{"visible": [False, False, True, True]},
{"title": "Yahoo Low",
"annotations": low_annotations}]),
dict(label="Both",
method="update",
args=[{"visible": [True, True, True, True]},
{"title": "Yahoo",
"annotations": high_annotations + low_annotations}]),
]),
)
])
# Set title
fig.update_layout(title_text="Yahoo")
fig.show()
</code></pre>
<p>I would like to have two dropdowns instead, one for "High Visible" and the other for "Low Visible", and the title should be "Yahoo, High: {yes or no}, Low: {yes or no}", but neither internet searches nor ChatGPT can suggest how to change visibility of frames/traces depending on the current values of other dropdowns.</p>
<p>This is a somewhat contrived example; in my actual use case, I have three sliders with 8, 8, and 51 options respectively, and I don't want to create a single slider with 3264 values.</p>
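One direction to explore (an assumption, not a verified solution): since each trace's visibility can be restyled independently, two separate `updatemenus` can each target only their own trace indices with `method="restyle"`, leaving the other menu's traces untouched. The menus are plain dicts, so the structure can be sketched without rendering:

```python
# Menu 0 toggles "High Average" (trace 1); menu 1 toggles "Low Average"
# (trace 3). method="restyle" with an explicit trace-index list leaves
# all other traces alone, so the two menus act independently.
def visibility_menu(label, trace_index, x):
    return dict(
        type="dropdown",
        x=x,
        y=1.15,
        buttons=[
            dict(label=f"{label}: yes", method="restyle",
                 args=[{"visible": True}, [trace_index]]),
            dict(label=f"{label}: no", method="restyle",
                 args=[{"visible": False}, [trace_index]]),
        ],
    )

updatemenus = [visibility_menu("High", 1, x=0.1),
               visibility_menu("Low", 3, x=0.4)]
# fig.update_layout(updatemenus=updatemenus)
```

One caveat: `restyle` cannot update the layout title, so a title like "Yahoo, High: yes, Low: no" that reflects the state of *both* menus would still need client-side callbacks (e.g. Dash), since each button only knows its own `args`.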
| <python><plotly> | 2023-10-31 02:17:18 | 1 | 6,168 | Jason Gross |
77,392,792 | 4,898,202 | Is there a way to interpolate variables into a python string WITHOUT using the print function? | <p>Every example I have seen of Python string variable interpolation uses the <code>print</code> function.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>num = 6
# str.format method
print("number is {}".format(num))
# % placeholders
print("number is %s"%(num))
# named .format method
print("number is {num}".format(num=num))
</code></pre>
<p>Can you interpolate variables into strings without using <code>print</code>?</p>
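For the record, a short sketch showing that none of these need `print` — each formatting style simply returns a new `str` that can be assigned, stored, or passed around:

```python
num = 6

s1 = "number is {}".format(num)   # str.format
s2 = "number is %s" % num         # %-formatting
s3 = f"number is {num}"           # f-string (Python 3.6+)
```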
| <python><string><variables><format><string-interpolation> | 2023-10-31 02:11:28 | 2 | 1,784 | skeetastax |
77,392,603 | 480,118 | slice numpy/pandas array based on a None value in one of the array/columns | <p>i have the following code where im trying to slice/transform a data array into multiple Dataframes, then concat those dataframes together</p>
<pre><code>import pandas as pd, numpy as np
arr=[['id', 'aaa', 'bbb', None, 'ccc', None, None],
['period', 'd', 'd', None, 'd', None, None],
['date', 'price', 'price', 'volume','price','volume', 'mktcap'],
['01/03/2001', 103.1, 103.2, 10000, 103.4, 20000, 1000000],
['01/04/2001', 104.1, 104.2, 11000, 104.4, 30000, 1000000],
['01/05/2001', 105.1, 105.2, 12000, 105.4, 40000, 1000000],
]
data=np.array(arr)
all_ts =[]
for col in range(1,data.shape[1]):
id = data[0][col]
per = data[1][col]
if id is None:
continue
#from the 3rd row, take the date column and the current col and produce a dataframe
ts_data = data[2:,[0, col]]
cols = ts_data[:1,][0]
ts_data = pd.DataFrame(ts_data[1:], columns=cols)
ts_data = ts_data[cols].dropna()
ts_data['id'] = id
ts_data['period'] = per
all_ts.append(ts_data)
df = pd.concat(all_ts)
df
</code></pre>
<p>The code above will only generate dataframes with the columns
<code>id, period, date, price</code>, because when it encounters a None I just continue (as I can't yet figure out how to grab the following 'None' columns).</p>
<p>What i would like to end up with are 3 dataframes:</p>
<ol>
<li>The 1st with columns: <code>id, period, date, price </code></li>
<li>The 2nd with columns: <code>id, period, date, price, volume </code></li>
<li>The 3rd with columns: <code>id, period, date, price, volume, mktcap</code></li>
</ol>
<p>So basically, from row 3, i want to take the first column (date column) plus the 'field' columns and produce a dataframe - the catch im struggling with though is that multiple fields should be taken together if the subsequent fields have None above them (i.e. in the period or id columns).</p>
<p><strong>Edit/Update:</strong>
I was able to do this with the following tweak, but it doesn't feel very Pythonic/NumPy-idiomatic. Note the addition of another variable named <code>select_cols</code>, which keeps getting appended to when a None is encountered. I'm thinking NumPy must offer a better way of doing this... just not sure what that may be.</p>
<pre><code>all_ts =[]
select_cols = [0]
for col in range(1,data.shape[1]):
id = data[0][col]
per = data[1][col]
select_cols += [col]
if id is None:
continue
#from the 3rd row, take the date column and the current col and produce a dataframe
ts_data = data[2:, select_cols]
cols = ts_data[:1,][0]
ts_data = pd.DataFrame(ts_data[1:], columns=cols)
ts_data = ts_data[cols].dropna()
ts_data['id'] = id
ts_data['period'] = per
all_ts.append(ts_data)
select_cols=[0]
df = pd.concat(all_ts)
df
</code></pre>
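One NumPy-flavoured way to replace the running `select_cols` (a sketch, not necessarily the canonical idiom): treat every non-None entry in the id row as the start of a group, and let `np.split` cut the column indices at those boundaries:

```python
import numpy as np
import pandas as pd

arr = [['id', 'aaa', 'bbb', None, 'ccc', None, None],
       ['period', 'd', 'd', None, 'd', None, None],
       ['date', 'price', 'price', 'volume', 'price', 'volume', 'mktcap'],
       ['01/03/2001', 103.1, 103.2, 10000, 103.4, 20000, 1000000]]
data = np.array(arr)

# A new group starts at every column whose id-row entry is not None;
# np.split then cuts the column indices at those boundaries.
cols = np.arange(1, data.shape[1])
starts = np.flatnonzero([v is not None for v in data[0, 1:]])
groups = np.split(cols, starts[1:])

frames = []
for g in groups:
    sub = data[2:, np.r_[0, g]]                 # date column + group columns
    sub_df = pd.DataFrame(sub[1:], columns=sub[0])
    sub_df["id"] = data[0, g[0]]
    sub_df["period"] = data[1, g[0]]
    frames.append(sub_df)
```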
| <python><pandas><numpy> | 2023-10-31 00:57:58 | 1 | 6,184 | mike01010 |
77,392,594 | 11,922,765 | Python dataframe split a string column into many | <p>The data I import comes in irregular fashion.</p>
<pre><code>df =
# following data all in one column
1 CABATT CAR BATTERY VOLTAGE -10.0 200.0
2 CPTEMP CAR DAS PANEL TEMP C -10.0 200.0
3 CAPTMA CAR PANEL A TEMP C -10.0 200.0
205 SPPT4P SPEED INPUT 4 CYCLINDER 0.0 32000.0
# Slicing the first three digital numbers as SNO
print(df[df.columns[0]].str.extract('(?P<SNO>\S\d{3}\S)'))
0 NaN
1 NaN
2 NaN
3 NaN
4 NaN
</code></pre>
<p>Expected output:</p>
<pre><code>SNo ABBRE DESCRIPTION MIN MAX
1 CABATT CAR BATTERY VOLTAGE -10.0 200.0
2 CPTEMP CAR DAS PANEL TEMP C -10.0 200.0
3 CAPTMA CAR PANEL A TEMP C -10.0 200.0
205 SPPT4P SPEED INPUT 4 CYCLINDER 0.0 32000.0
</code></pre>
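One reason the attempted extract returns all NaN: the `SNO` pattern demands a non-space character on *each side* of three digits, which lines starting with "1 " or "205 " can never satisfy. A fuller pattern with one named group per target column (a sketch against a sample line; real files may need tweaks) could look like:

```python
import re

line = "1    CABATT CAR BATTERY VOLTAGE     -10.0  200.0"

pat = re.compile(
    r"^(?P<SNo>\d+)\s+"
    r"(?P<ABBRE>\S+)\s+"
    r"(?P<DESCRIPTION>.*?)\s+"          # lazy: stops before the two numbers
    r"(?P<MIN>-?\d+(?:\.\d+)?)\s+"
    r"(?P<MAX>-?\d+(?:\.\d+)?)\s*$"
)
m = pat.match(line)
```

With pandas, the same pattern can be applied column-wide — `df[df.columns[0]].str.extract(pat)` expands each named group into its own column.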
| <python><pandas><regex><dataframe> | 2023-10-31 00:53:13 | 1 | 4,702 | Mainland |
77,392,339 | 308,827 | Select top n groups in pandas dataframe | <p>I have the following dataframe:</p>
<pre><code> Country Crop Harvest Year Area (ha)
Afghanistan Maize 2019 94910
Afghanistan Maize 2020 140498
Afghanistan Maize 2021 92144
Afghanistan Winter Wheat 2019 2334000
Afghanistan Winter Wheat 2020 2668000
Afghanistan Winter Wheat 2021 1833357
Argentina Maize 2019 7232761
Argentina Maize 2020 7730506
Argentina Maize 2021 8146596
Argentina Winter Wheat 2019 6050953
Argentina Winter Wheat 2020 6729838
Argentina Winter Wheat 2021 6394102
China Maize 2019 41309740
China Maize 2020 41292000
China Maize 2021 43355859
China Winter Wheat 2019 23732560
China Winter Wheat 2020 23383000
China Winter Wheat 2021 23571400
Ethiopia Maize 2019 2274306
Ethiopia Maize 2020 2363507
Ethiopia Maize 2021 2530000
Ethiopia Winter Wheat 2019 1789372
Ethiopia Winter Wheat 2020 1829051
Ethiopia Winter Wheat 2021 1950000
France Maize 2019 1506100
France Maize 2020 1691130
France Maize 2021 1549520
France Winter Wheat 2019 5244250
France Winter Wheat 2020 4512420
France Winter Wheat 2021 5276730
India Maize 2019 9027130
India Maize 2020 9569060
India Maize 2021 9860000
India Winter Wheat 2019 29318780
India Winter Wheat 2020 31357020
India Winter Wheat 2021 31610000
Namibia Maize 2019 21123
Namibia Maize 2020 35000
Namibia Maize 2021 46070
Namibia Winter Wheat 2019 1079
Namibia Winter Wheat 2020 2000
Namibia Winter Wheat 2021 3026
</code></pre>
<p>I want to select the top 2 countries by the average value of the <code>Area (ha)</code> column across the <code>Harvest Year</code>s. I tried this but it does not work:</p>
<p><code>df = df.groupby("Crop", dropna=False).apply( lambda x: x.nlargest(2, "Area (ha)") )</code></p>
<p>Output should be as below. Here China and India are the countries with the largest average <code>Area (ha)</code> for both Maize and Winter Wheat, but in the full dataset different countries would have the largest values for different crops:</p>
<pre><code>Country Crop Harvest Year Area (ha)
China Maize 2019 41309740
China Maize 2020 41292000
China Maize 2021 43355859
China Winter Wheat 2019 23732560
China Winter Wheat 2020 23383000
China Winter Wheat 2021 23571400
India Maize 2019 9027130
India Maize 2020 9569060
India Maize 2021 9860000
India Winter Wheat 2019 29318780
India Winter Wheat 2020 31357020
India Winter Wheat 2021 31610000
</code></pre>
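A two-step approach that seems to fit (a sketch on a trimmed-down frame): rank the (Crop, Country) pairs by their mean area, keep the top 2 pairs per crop, then select the matching rows from the original frame:

```python
import pandas as pd

df = pd.DataFrame({
    "Country": ["Afghanistan"] * 2 + ["China"] * 2 + ["India"] * 2,
    "Crop": ["Maize"] * 6,
    "Harvest Year": [2019, 2020] * 3,
    "Area (ha)": [94910, 140498, 41309740, 41292000, 9027130, 9569060],
})

# Mean area per (Crop, Country), largest first.
means = (df.groupby(["Crop", "Country"])["Area (ha)"].mean()
           .sort_values(ascending=False))
# Top 2 countries per crop, as a MultiIndex of (Crop, Country) pairs.
top_pairs = means.groupby(level="Crop").head(2).index
out = df.set_index(["Crop", "Country"]).loc[top_pairs].reset_index()
```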
| <python><pandas> | 2023-10-30 23:09:53 | 2 | 22,341 | user308827 |
77,392,334 | 4,898,202 | How do I call multiple python functions by reference to operate on a variable so that I can change the sequence of function calls easily? | <p>I want to create a 'function pipeline', like a factory.</p>
<p>Let's say I have the following functions:</p>
<pre class="lang-py prettyprint-override"><code>def func1(var):
var = # do something with var
return var
def func2(var):
var = # do something else with var
return var
def func3(var):
var = # do another thing with var
return var
def func4(var):
var = # do something with var we haven't done before
return var
</code></pre>
<p>Now I want to call all functions on my object in sequence, but I want to test calling them in all possible sequence permutations and check the results because the final result will vary based upon the sequence in which the functions were called.</p>
<p>How do I change this manual sequence of function calls:</p>
<pre class="lang-py prettyprint-override"><code>myvar = something
myvar = func1(myvar)
myvar = func2(myvar)
myvar = func3(myvar)
myvar = func4(myvar) # final variable function transformation (result)
</code></pre>
<p>... into something that I can iterate through all possible sequences of, but without having to manually code it as follows?</p>
<pre class="lang-py prettyprint-override"><code>def func1(var):
var = # do something with var
return var
def func2(var):
var = # do something else with var
return var
def func3(var):
var = # do another thing with var
return var
def func4(var):
var = # do something with var we haven't done before
return var
myvar = func4(func3(func2(func1(myvar))))
# or
myvar = func3(func4(func2(func1(myvar))))
# or
myvar = func3(func2(func4(func1(myvar))))
# or
myvar = func3(func2(func1(func4(myvar))))
# or
myvar = func4(func2(func3(func1(myvar))))
# or
# ...etc...
# or (finally)
myvar = func1(func2(func3(func4(myvar))))
</code></pre>
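The manual nesting can be replaced by `itertools.permutations` to enumerate the orderings and `functools.reduce` to thread the value through one ordering; a sketch with toy functions:

```python
from functools import reduce
from itertools import permutations

def apply_pipeline(value, funcs):
    # Thread `value` through each function in `funcs`, left to right.
    return reduce(lambda acc, f: f(acc), funcs, value)

def add_one(x):
    return x + 1

def double(x):
    return x * 2

def minus_three(x):
    return x - 3

# One result per ordering, keyed by the sequence of function names.
results = {
    tuple(f.__name__ for f in order): apply_pipeline(10, order)
    for order in permutations([add_one, double, minus_three])
}
```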
| <python><function><sequence><pass-by-reference><call> | 2023-10-30 23:08:06 | 2 | 1,784 | skeetastax |
77,392,321 | 9,986,939 | Running Python 3.6 on Airflow | <p>Feels like this is a bit of a rookie question, but I figured I'd ask anyway.</p>
<p>I have some homegrown packages that are stuck on Python 3.6, and I want to update Airflow, which runs them. This means I'm stuck on the highest version of Airflow that supports 3.6, right? Can I work around this using an env (virtual environment) or something?</p>
| <python><airflow> | 2023-10-30 23:05:38 | 0 | 407 | Robert Riley |
77,392,037 | 4,001,750 | Issue while using map_partitions in dask with a power function on pandas table with list elements | <p>I am having the following code in python developed using the Dask framework:</p>
<pre><code># Create a Pandas DataFrame
df = pd.DataFrame({
'A': [[1], [2], [3], [4], [5]],
'B': [[6], [7], [8], [9], [10]]
})
# Convert the Pandas DataFrame to a Dask DataFrame
ddf = dd.from_pandas(df, npartitions=2)
def my_function2(x):
return x[0]**2
# Define a function to apply to each partition
def my_function(df):
return df.map(my_function2)
# Apply the function to each partition of the Dask DataFrame
result = ddf.map_partitions(my_function).compute()
</code></pre>
<p>When I execute it, I get the following error: <code>TypeError: unsupported operand type(s) for ** or pow(): 'str' and 'int'</code></p>
<p>This is a short version of my code. It essentially applies a power operator to floating-point numbers in the pandas table, which are stored as single-element lists in each cell. What could be the reason for this error? I tried <code>np.power(x[0])</code> and <code>x[0]*x[0]</code> instead of <code>x[0]**2</code>.</p>
| <python><dask> | 2023-10-30 21:45:30 | 1 | 720 | naseefo |
77,392,010 | 7,743,076 | How to fill a form outside a <form> with selenium? | <p>I'm trying to fill in a form. Here's my code:</p>
<pre class="lang-py prettyprint-override"><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.select import Select
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver import Keys
import time
driver = webdriver.Firefox()
driver.get('https://jazz.com.pk/choose-your-number')
prefixSelect = Select(driver.find_element(By.ID, 'ndc'))
criterionSelect = Select(driver.find_element(By.ID, 'searchcriteria'))
patternInput = driver.find_element(By.ID, 'pattern')
searchButton = driver.find_element(By.ID, 'submit')
prefixSelect.select_by_visible_text('0307')
criterionSelect.select_by_visible_text('First 4 Digits')
patternInput.send_keys('7384')
</code></pre>
<p>This is the error I get:</p>
<pre><code>line 19, in <module>
prefixSelect.select_by_visible_text('0307')
selenium.common.exceptions.ElementNotInteractableException: Message: Element <option> could not be scrolled into view
</code></pre>
<p>Below I'm posting a series of code blocks with my attempts to fill in the form (notice there is no <code><form></code> element on the page), and the errors I encounter:</p>
<p>Attempt</p>
<pre class="lang-py prettyprint-override"><code>prefixSelect = Select(driver.find_element(By.ID, 'ndc'))
criterionSelect = Select(driver.find_element(By.ID, 'searchcriteria'))
patternInput = driver.find_element(By.ID, 'pattern')
searchButton = driver.find_element(By.ID, 'submit')
WebDriverWait(driver, 10).until(expected_conditions.element_to_be_clickable((By.CSS_SELECTOR, 'option[value="0303"]')))
prefixSelect.select_by_visible_text('0307')
criterionSelect.select_by_visible_text('First 4 Digits')
patternInput.send_keys('7384')
</code></pre>
<p>Error</p>
<pre><code>line 19, in <module>
WebDriverWait(driver, 10).until(expected_conditions.element_to_be_clickable((By.CSS_SELECTOR, 'option[value="0303"]')))
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:
</code></pre>
<p>Attempt (Only the lines in between, for all attempts that follow)</p>
<pre class="lang-py prettyprint-override"><code>prefixOption = driver.find_element(By.CSS_SELECTOR, 'option[value="0307"]')
ActionChains(driver).move_to_element(prefixOption).perform()
</code></pre>
<p>Error</p>
<pre><code>line 20, in <module>
ActionChains(driver).move_to_element(prefixOption).perform()
selenium.common.exceptions.MoveTargetOutOfBoundsException: Message: Origin element <option> is not displayed
</code></pre>
<p>Attempt</p>
<pre class="lang-py prettyprint-override"><code>prefixOption = driver.find_element(By.CSS_SELECTOR, 'option[value="0307"]')
driver.execute_script('arguments[0].scrollIntoView();', prefixOption)
</code></pre>
<p>Error</p>
<pre><code>line 22, in <module>
prefixSelect.select_by_visible_text('0307')
selenium.common.exceptions.ElementNotInteractableException: Message: Element <option> could not be scrolled into view
</code></pre>
<p>Attempt</p>
<pre class="lang-py prettyprint-override"><code>prefixOption = driver.find_element(By.CSS_SELECTOR, 'option[value="0307"]')
driver.execute_script("arguments[0].dispatchEvent(new Event('mouseover'));", prefixOption)
</code></pre>
<p>Error</p>
<pre><code>line 22, in <module>
prefixSelect.select_by_visible_text('0307')
selenium.common.exceptions.ElementNotInteractableException: Message: Element <option> could not be scrolled into view
</code></pre>
<p>Then I tried to click the <code><select></code> which is already in view when the page is loaded.</p>
<p>Attempt</p>
<pre class="lang-py prettyprint-override"><code>driver.find_element(By.ID, 'ndc').click()
</code></pre>
<p>Error</p>
<pre><code>line 19, in <module>
driver.find_element(By.ID, 'ndc').click()
selenium.common.exceptions.ElementNotInteractableException: Message: Element <select id="ndc" class="ndc form-control mdbselectcustome abc w-100 select2-hidden-accessible" name="ndc"> could not be scrolled into view
</code></pre>
<p>Attempt</p>
<pre class="lang-py prettyprint-override"><code>for i in range(3):
driver.find_element(By.TAG_NAME, 'body').send_keys(Keys.ARROW_DOWN)
driver.find_element(By.ID, 'ndc').click()
</code></pre>
<p>Error</p>
<pre><code>line 22, in <module>
driver.find_element(By.ID, 'ndc').click()
selenium.common.exceptions.ElementNotInteractableException: Message: Element <select id="ndc" class="ndc form-control mdbselectcustome abc w-100 select2-hidden-accessible" name="ndc"> could not be scrolled into view
</code></pre>
<p>Attempt</p>
<pre class="lang-py prettyprint-override"><code>ActionChains(driver).move_to_element(driver.find_element(By.ID, 'ndc')).perform()
driver.find_element(By.ID, 'ndc').click()
</code></pre>
<p>Error</p>
<pre><code>line 20, in <module>
driver.find_element(By.ID, 'ndc').click()
selenium.common.exceptions.ElementNotInteractableException: Message: Element <select id="ndc" class="ndc form-control mdbselectcustome abc w-100 select2-hidden-accessible" name="ndc"> could not be scrolled into view
</code></pre>
<p>Attempt</p>
<pre class="lang-py prettyprint-override"><code>driver.execute_script('arguments[0].scrollIntoView();', driver.find_element(By.ID, 'ndc'))
driver.find_element(By.ID, 'ndc').click()
</code></pre>
<p>Error</p>
<pre><code>line 20, in <module>
driver.find_element(By.ID, 'ndc').click()
selenium.common.exceptions.ElementNotInteractableException: Message: Element <select id="ndc" class="ndc form-control mdbselectcustome abc w-100 select2-hidden-accessible" name="ndc"> could not be scrolled into view
</code></pre>
<p>Attempt</p>
<pre class="lang-py prettyprint-override"><code>driver.execute_script("arguments[0].dispatchEvent(new Event('mouseover'));", driver.find_element(By.ID, 'ndc'))
driver.find_element(By.ID, 'ndc').click()
</code></pre>
<p>Error</p>
<pre><code>line 20, in <module>
driver.find_element(By.ID, 'ndc').click()
selenium.common.exceptions.ElementNotInteractableException: Message: Element <select id="ndc" class="ndc form-control mdbselectcustome abc w-100 select2-hidden-accessible" name="ndc"> could not be scrolled into view
</code></pre>
<p>After all these attempts, maybe I can search more on the internet (probably just StackOverflow) and find out how to fill this form, but I just decided to ask here.</p>
<p>Please tell me how I can fill this form.</p>
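The class <code>select2-hidden-accessible</code> on the <code>&lt;select&gt;</code> suggests the page wraps it in a Select2 widget and hides the native element, which would explain why every native interaction reports "could not be scrolled into view". One workaround to try (an assumption — untested against this site) is to set the hidden select's value with JavaScript and fire a <code>change</code> event so the widget and page logic notice:

```python
# Hypothetical snippet to run with driver.execute_script(js, '0307');
# 'ndc' is the id of the hidden <select> from the question.
js = """
const sel = document.getElementById('ndc');
sel.value = arguments[0];
sel.dispatchEvent(new Event('change', { bubbles: true }));
"""
# driver.execute_script(js, '0307')
```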
| <python><html><selenium-webdriver><firefox> | 2023-10-30 21:40:58 | 1 | 532 | Huzaifa |
77,391,992 | 9,983,652 | how to split the string with brackets? | <p>I'd like to split the below string into something like this:
<code>'180-240','0-100','100-200','0-110','200-240','0-120'</code></p>
<pre><code>a='(180-240,0-100),(100-200,0-110),(200-240,0-120)'
a
'(180-240,0-100),(100-200,0-110),(200-240,0-120)'
</code></pre>
<p>After the split below, it is still one string element. I am expecting to get a list with 3 elements:</p>
<pre><code>['(180-240,0-100','100-200,0-110','200-240,0-120)']
</code></pre>
<p>However, I got a list with only one string, shown below. Not sure where the problem is. Thanks.</p>
<pre><code>
b=a.split('(,)')
b
['(180-240,0-100),(100-200,0-110),(200-240,0-120)']
</code></pre>
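One thing worth noting in a sketch: <code>str.split('(,)')</code> looks for the literal three-character substring <code>(,)</code>, which never occurs in the string, so nothing is split. A regex gets both of the shapes described above:

```python
import re

a = '(180-240,0-100),(100-200,0-110),(200-240,0-120)'

pairs = re.findall(r'\(([^)]*)\)', a)   # contents of each (...) group
ranges = re.findall(r'\d+-\d+', a)      # every individual range
tuples = [tuple(p.split(',')) for p in pairs]
```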
| <python> | 2023-10-30 21:36:56 | 1 | 4,338 | roudan |
77,391,988 | 2,205,916 | How do I break up a large for-loop in Python into multiple files? | <p>In actuality, I have hundreds of lines in a for-loop. It is very painful to edit these hundreds of lines. But let's examine the following, related toy code snippet:</p>
<pre><code>from datetime import datetime, timedelta, timezone
import pytz
import time
central_timezone = pytz.timezone('US/Central') # define the Eastern Timezone (US/Eastern)
current_time = datetime.now(central_timezone) # get the current time in Eastern Time
rounded_time = current_time + timedelta(minutes=(5 - current_time.minute % 5)) #
</code></pre>
<p>How would I put this into another file, <code>test_time.py</code>, then tell that file to execute to update <code>rounded_time</code> and use <code>rounded_time</code> in a Jupyter notebook, <code>main.ipynb</code>?</p>
<p>My actual use case is to have hundreds of lines of code in each of several <code>.py</code> files that I would call from <code>main.ipynb</code>. This will make my code more modular and easier to update.</p>
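A minimal sketch of the pattern (using stdlib <code>zoneinfo</code> instead of <code>pytz</code> to keep it dependency-free — an adaptation, not the original code): put the logic in a function inside a module, then import that function from the notebook. Re-running the notebook cell calls the function again, so <code>rounded_time</code> stays fresh:

```python
# Contents of a hypothetical helper module, e.g. rounded_time.py:
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # stdlib on Python 3.9+

def get_rounded_time():
    now = datetime.now(ZoneInfo("America/Chicago"))
    return now + timedelta(minutes=5 - now.minute % 5)

# In main.ipynb:
# from rounded_time import get_rounded_time
# rounded_time = get_rounded_time()
```

In Jupyter, <code>%load_ext autoreload</code> followed by <code>%autoreload 2</code> makes edits to the <code>.py</code> files take effect without restarting the kernel.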
| <python> | 2023-10-30 21:36:31 | 1 | 3,476 | user2205916 |
77,391,933 | 272,023 | Is it possible to get Kerberos logging using gssapi inside secure context? | <p>I am using <a href="https://github.com/pythongssapi/python-gssapi" rel="nofollow noreferrer"><code>gssapi</code></a> to create a Flask server that is protected by Kerberos authentication. I am wanting to debug some Kerberos errors I am seeing and hence I would like to turn on debug logging by means of the <a href="https://web.mit.edu/kerberos/krb5-latest/doc/admin/troubleshoot.html" rel="nofollow noreferrer"><code>KRB5_TRACE</code> environmental variable</a>:</p>
<pre><code>KRB5_TRACE=/dev/stdout
</code></pre>
<p>However, no logs are created, but they are created when I set that environmental variable and call <code>kinit</code>, meaning that my version of Kerberos supports this variable but it's just <code>python-gssapi</code> that isn't respecting that variable.</p>
<p>In the documentation linked to above, it states:</p>
<blockquote>
<p>Some programs do not honor KRB5_TRACE, ... because they use secure
library contexts</p>
</blockquote>
<p>When I look at the Kerberos documentation for the library call to make a secure context, <a href="https://web.mit.edu/tsitkova/www/build/krb_appldev/refs/api/krb5_init_secure_context.html#krb5_init_secure_context" rel="nofollow noreferrer"><code>krb5_init_secure_context()</code></a>, I see this statement:</p>
<blockquote>
<p>Create a context structure, using only system configuration files. All information passed through the environment variables is ignored.</p>
</blockquote>
<p>It sounds like <code>gssapi</code> may be somehow making a call to <code>krb5_init_secure_context()</code> and hence the trace logging configuration is being ignored.</p>
<p>Is there a way of turning on debug logging using <code>gssapi</code>? If <code>gssapi</code> is indeed creating a secure context, is there any way of turning on logging inside that context?</p>
| <python><kerberos><gssapi> | 2023-10-30 21:25:15 | 0 | 12,131 | John |
77,391,843 | 4,643,092 | Searching for values in large dataframe with unnamed columns | <p>I have a dataframe with ~300 columns in the following format:</p>
<pre><code>| Column1 | Column2 | Column3 | Column5
| ------------| -------------- |-----------|----------
| Color=Blue | Location=USA | Name=Steve| N/A
| Location=USA| ID=123 | Name=Randy| Color=Purple
| ID=987 | Name=Gary | Color=Red | Location=Italy
</code></pre>
<p>What is the best way to process such a huge and irregular dataset if I'm only interested in specific attributes, such as 'Color' and 'ID'?</p>
<p>An example output if I only wanted to see 'ID' could be something like:</p>
<pre><code>| Column1 | Column2 | Column3 | Column5
| ------------| -------------- |-----------|----------
| ID=987      | ID=123         |           |
</code></pre>
<p>Or maybe even a list of results would work:</p>
<pre><code>ID=[987, 123]
</code></pre>
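One sketch (assuming every cell is a single <code>Attr=value</code> string): build a boolean mask with a vectorised <code>startswith</code> per column, then <code>stack</code> to collapse the ~300 columns into one Series of matching cells:

```python
import pandas as pd

df = pd.DataFrame({
    "Column1": ["Color=Blue", "Location=USA", "ID=987"],
    "Column2": ["Location=USA", "ID=123", "Name=Gary"],
})

attr = "ID"
# True wherever a cell starts with "ID="; stack() then drops the NaNs
# left by where(), giving one flat Series of matching cells.
mask = df.apply(lambda col: col.str.startswith(attr + "=", na=False))
values = df.where(mask).stack().str.split("=").str[1].tolist()
```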
| <python><pandas><dataframe> | 2023-10-30 21:06:32 | 3 | 323 | NRH |
77,391,828 | 8,171,079 | Overload binary operators to support right application | <p>I'm trying to make a class <code>A</code> that supports both multiplication between two instances of <code>A</code> and scalar multiplication.</p>
<p>I currently have the following implementation for the <code>__mul__</code> method:</p>
<pre class="lang-py prettyprint-override"><code>class A:
def __init__(self, value: float):
self.value = value
def __mul__(self, other):
if type(other) == A:
return A(self.value * other.value)
elif type(other) == float:
return A(other * self.value)
else:
raise
</code></pre>
<p>This allows me to perform the multiplication <code>x * y</code> where <code>x</code> is an instance of <code>A</code> and <code>y</code> is an instance of <code>A</code> or <code>float</code>. However, if <code>x</code> is a <code>float</code> and <code>y</code> is an <code>A</code>, then a <code>TypeError</code> is raised.</p>
<p>Is there a way to overload this case as well? If it matters, I would like the same behaviour regardless of whether the instance of <code>A</code> is the left or right operand.</p>
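Python's standard hook for this case is <code>__rmul__</code>: when <code>float.__mul__</code> sees an <code>A</code> on the right it returns <code>NotImplemented</code>, and Python then tries the right operand's <code>__rmul__</code>. Returning <code>NotImplemented</code> from unsupported cases (rather than raising) keeps that protocol working; <code>isinstance</code> also covers <code>int</code>, which a <code>type(other) == float</code> check would reject. A sketch:

```python
class A:
    def __init__(self, value: float):
        self.value = value

    def __mul__(self, other):
        if isinstance(other, A):
            return A(self.value * other.value)
        if isinstance(other, (int, float)):
            return A(self.value * other)
        return NotImplemented  # let Python try the other operand's hooks

    # Multiplication is commutative here, so the right-hand case
    # can reuse the same logic.
    __rmul__ = __mul__

z = 2.0 * A(3.0)  # float.__mul__ returns NotImplemented, so A.__rmul__ runs
```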
| <python> | 2023-10-30 21:02:02 | 1 | 377 | user8171079 |
77,391,785 | 6,734,243 | How to add a reference in a Sphinx custom directive? | <p>I'm creating a custom directive to display the list of all the available components in pydata-sphinx theme. I try to avoid using the raw directive so I'm building a custom one to remain compatible with the other builders.</p>
<p>Here is the important part of the code:</p>
<pre class="lang-py prettyprint-override"><code>"""A directive to generate the list of all the built-in components.
Read the content of the component folder and generate a list of all the components.
This list will display some information about the component and a link to the
GitHub file.
"""
from docutils import nodes
from sphinx.application import Sphinx
from sphinx.util.docutils import SphinxDirective
class ComponentListDirective(SphinxDirective):
"""A directive to generate the list of all the built-in components."""
# ...
def run(self) -> List[nodes.Node]:
"""Create the list."""
# ...
# `component` is a list of pathlib Path
# `url` is a list of string
# `docs` is a list of string
# build the list of all the components
items = []
for component, url, doc in zip(components, urls, docs):
items.append(nodes.list_item(
"",
nodes.reference("", component.name, refuri=url), #this line is the source of the issue
nodes.Text(f": {doc}")
))
return [nodes.bullet_list("", *items)]
</code></pre>
<p>When I try to execute the previous code in my sphinx build I get the following error:</p>
<pre><code>Exception occurred:
File "/home/borntobealive/libs/pydata-sphinx-theme/.nox/docs/lib/python3.10/site-packages/sphinx/writers/html5.py", line 225, in visit_reference
assert len(node) == 1 and isinstance(node[0], nodes.image)
AssertionError
</code></pre>
<p>This assertion is performed by sphinx <a href="https://github.com/sphinx-doc/sphinx/blob/bb74aec2b6aa1179868d83134013450c9ff9d4d6/sphinx/writers/html5.py#L321" rel="nofollow noreferrer">if the parent node is not a TextELement</a>. So I tried to wrap things in a <code>Text</code> node:</p>
<pre class="lang-py prettyprint-override"><code>nodes.Text(nodes.reference("", component.name, refuri=url))
</code></pre>
<p>But the I only get the <code>__repr__</code> of the reference not a real link (I think it's because Text nodes only accept strings)</p>
<p>So I also tried using a <code>TextElement</code>:</p>
<pre><code>nodes.TextElement("", "", nodes.reference("", component.name, refuri=url))
</code></pre>
<p>which also raised an error:</p>
<pre><code>Exception occurred:
File "/home/borntobealive/libs/pydata-sphinx-theme/.nox/docs/lib/python3.10/site-packages/docutils/nodes.py", line 2040, in unknown_departure
raise NotImplementedError(
NotImplementedError: <class 'types.BootstrapHTML5Translator'> departing unknown node type: TextElement
</code></pre>
<p>Does anyone know how I should add the link at the start of the bullet list?
If you need more context, you can find the complete code of the directive <a href="https://github.com/12rambau/pydata-sphinx-theme/blob/9c3216cef4e2a5ff9bd7496bab6a81824bab92ec/docs/_extension/component_directive.py" rel="nofollow noreferrer">here (under 100 lines)</a>.</p>
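One fix to try (hedged — inferred from the assertion shown, which fires when a reference's parent is not a <code>TextElement</code>): wrap the reference and trailing text in a <code>nodes.paragraph</code>, which *is* a <code>TextElement</code>, and put that paragraph inside the <code>list_item</code>:

```python
from docutils import nodes

# Hypothetical component name and URL, standing in for the real values.
item = nodes.list_item(
    "",
    nodes.paragraph(
        "",
        "",
        nodes.reference("", "component.html", refuri="https://example.com"),
        nodes.Text(": short description"),
    ),
)
bullet = nodes.bullet_list("", item)
```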
| <python><python-sphinx><docutils> | 2023-10-30 20:53:39 | 1 | 2,670 | Pierrick Rambaud |
77,391,664 | 9,983,172 | matplotlib.path.contains_points inconsistent behavior | <p>I have been using <code>matplotlib.path.contains_points()</code> method without any figures or plots. I am getting inconsistent results depending on the path. In the following example, a simple square path works, but a longer path from an ellipse does not:</p>
<pre><code>import numpy as np
from skimage.draw import ellipse_perimeter
from matplotlib.path import Path
import matplotlib.pyplot as plt
s = 100
squarepath = Path([(0,0), (0, s), (s,s), (s, 0)])
print(squarepath.contains_points([(s/2, s/2)]))
(xv,yv) = ellipse_perimeter(200,360,260,360)
xyverts = np.stack((xv,yv),axis=1)
ellipsepath = Path(xyverts)
pt = (213,300)
print(ellipsepath.contains_points([pt]))
fig,ax = plt.subplots()
plt.scatter(ellipsepath.vertices[:,0],ellipsepath.vertices[:,1])
plt.scatter(pt[0],pt[1])
plt.show()
</code></pre>
<p>I am using <code>contains_points()</code> before anything is plotted, so there should not be any data versus display coordinates issue as is discussed in other similar questions about <code>contains_points()</code>. What else could be going on here?</p>
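One possibility worth checking (an assumption — the skimage docs would confirm): <code>ellipse_perimeter</code> returns the perimeter coordinates in pixel-scan order, not in order around the ellipse, so the <code>Path</code> is a self-intersecting polygon and <code>contains_points</code> gives surprising answers. Sorting the vertices by angle around the centroid restores a simple polygon; a minimal demonstration with a square:

```python
import numpy as np
from matplotlib.path import Path

# Perimeter points listed out of order describe a self-intersecting
# polygon; this zig-zag "square" is the smallest example.
pts = np.array([(0, 0), (1, 1), (0, 1), (1, 0)], dtype=float)

# Sort the vertices by angle around the centroid to recover a simple
# polygon before building the Path.
c = pts.mean(axis=0)
order = np.argsort(np.arctan2(pts[:, 1] - c[1], pts[:, 0] - c[0]))
inside = Path(pts[order]).contains_points([(0.5, 0.25)])
```

Also worth double-checking: <code>ellipse_perimeter</code> returns (rows, cols), so stacking them as (x, y) swaps the axes — harmless for a symmetric shape, but confusing when comparing against plotted points.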
| <python><matplotlib> | 2023-10-30 20:30:58 | 1 | 480 | J B |
77,391,636 | 7,055,769 | program gets stuck when sending email via gmail | <p>I have the following python function:</p>
<pre><code>import smtplib, ssl
port = 465 # For SSL
password = "valid password"
context = ssl.create_default_context()
def sendEmail(to_email, subject, content):
from_email = "valid_email@zaliczenie.pl"
message = (f"Tresc zadania: {content}",)
try:
print("connecting")
with smtplib.SMTP_SSL("smtp.gmail.com", port, context=context) as server:
print("connected")
server.login("tmp_email@gmail.com", password)
print("logged")
server.sendmail(
to_addrs=to_email,
from_addr=from_email,
msg=message,
)
print("sent")
except Exception as e:
print(e.message)
</code></pre>
<p>And here are the logs:</p>
<pre><code>connecting
connected
logged
</code></pre>
<p>So it connects, logs into google mail, but gets stuck when trying to send an email.</p>
<p>What am I missing?</p>
<p>I allowed less secure apps, and the password is the one generated from the Google security 2FA tab (App passwords).</p>
<p><a href="https://i.sstatic.net/BOOrH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BOOrH.png" alt="enter image description here" /></a></p>
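Two things stand out (observations, not certainties): the trailing comma makes <code>message</code> a tuple rather than a string, and <code>sendmail</code> expects a fully formatted RFC 822 message anyway. Building the mail with <code>email.message.EmailMessage</code> and sending it via <code>send_message</code> sidesteps both; a sketch with placeholder addresses:

```python
from email.message import EmailMessage

def build_message(to_email, subject, content):
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "valid_email@zaliczenie.pl"
    msg["To"] = to_email
    msg.set_content(f"Tresc zadania: {content}")
    return msg

msg = build_message("to@example.com", "Zadanie", "details")
# context = ssl.create_default_context()
# with smtplib.SMTP_SSL("smtp.gmail.com", 465, context=context) as server:
#     server.login("tmp_email@gmail.com", password)
#     server.send_message(msg)   # headers come from the EmailMessage itself
```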
| <python><django><django-rest-framework><smtplib> | 2023-10-30 20:24:34 | 0 | 5,089 | Alex Ironside |
77,391,595 | 5,392,289 | two copies of source code being checked by flake8 in CI/CD pipeline | <p>When I run <code>flake8 .</code>, it runs on two copies of my source code:</p>
<pre><code>./build/lib/smartx/smartpressing/frlo/silver/tagenriched.py:14:13: E225 missing whitespace around operator
./smartx/smartpressing/frlo/silver/tagenriched.py:14:13: E225 missing whitespace around operator
</code></pre>
<p>And because of it, I get duplicates of every issue as seen above.</p>
<p>I don't need the copy that is saved in <code>./build</code>...</p>
<p>Here is my <code>pyproject.toml</code> file:</p>
<pre><code>[project]
name = "mip"
version = "0.1.0"
license = {file = "LICENSE"}
description = "source code to run on mip databricks instance"
readme = "README.md"
requires-python = ">=3.10"
authors = [
{name = "MIP engineers"},
]
dependencies = [
]
[project.optional-dependencies]
test = [
"pyspark==3.5.0",
"flake8==6.1.0",
"mypy==1.6.1",
"pandas-stubs==2.1.1.230928",
"pytest==7.4.3",
"pytest-cov==4.1.0",
"isort==5.12.0"
]
[tool.setuptools]
include-package-data = true
[tool.setuptools.packages.find]
where = ["."]
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
</code></pre>
<p>And here is my CI <code>.yaml</code> file:</p>
<pre><code>pool:
name: SelfHosted-Linux
demands:
- Agent.Name -equals az02
strategy:
matrix:
Python310:
python.version: '3.10'
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '$(python.version)'
displayName: 'Use Python $(python.version)'
- script: |
pip install '.[test]'
displayName: 'Install Packages'
- script: |
pip list
flake8 . --count --max-complexity=10 --max-line-length=127 --statistics --ignore=W291,E302,W503,W292 --exclude notebooks --color always
displayName: 'Run flake8 code checks'
- script: |
mypy .
displayName: 'Run mypy type-hinting checks'
- script: |
pytest
displayName: 'Run pytest'
- script: |
isort --diff --check .
displayName: 'Run isort'
</code></pre>
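<p>For what it's worth, the workaround I am considering (a sketch only — flake8 does not read <code>pyproject.toml</code>, so the options would live in a separate <code>.flake8</code> or <code>setup.cfg</code> file) is to exclude the <code>build/</code> copy explicitly:</p>

```ini
# .flake8 (or the [flake8] section of setup.cfg)
[flake8]
max-line-length = 127
max-complexity = 10
extend-ignore = W291, E302, W503, W292
# skip the duplicate copy produced by the build backend, plus notebooks
extend-exclude = build, notebooks
```

<p>With the config in place, the CI step would reduce to plain <code>flake8 . --count --statistics --color always</code>.</p>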
| <python><python-3.x><continuous-integration><setuptools> | 2023-10-30 20:16:01 | 1 | 1,305 | Oliver Angelil |
77,391,573 | 11,305,741 | Why is `db.query(Order).get(order_id)` behaving as O(n) and not O(log(n))? - SQLAlchemy and Python | <p>We have this service, with the method <code>get_order</code>, whose execution time, for some reason, increases linearly with the number of orders:</p>
<pre class="lang-py prettyprint-override"><code>from nameko.events import EventDispatcher
from nameko.rpc import rpc
from nameko_sqlalchemy import DatabaseSession
from orders.exceptions import NotFound
from orders.models import DeclarativeBase, Order, OrderDetail # , Orders
from orders.schemas import OrderSchema
class OrdersService:
name = "orders"
db = DatabaseSession(DeclarativeBase)
event_dispatcher = EventDispatcher()
@rpc
def get_order(self, order_id):
order = self.db.query(Order).get(order_id)
if not order:
raise NotFound("Order with id {} not found".format(order_id))
return OrderSchema().dump(order).data
</code></pre>
<p>The <code>Order</code>:</p>
<pre class="lang-py prettyprint-override"><code>class Order(DeclarativeBase):
__tablename__ = "orders"
id = Column(Integer, primary_key=True, autoincrement=True)
class OrderDetail(DeclarativeBase):
__tablename__ = "order_details"
id = Column(Integer, primary_key=True, autoincrement=True)
order_id = Column(
Integer, ForeignKey("orders.id", name="fk_order_details_orders"), nullable=False
)
order = relationship(Order, backref="order_details")
product_id = Column(Integer, nullable=False)
price = Column(DECIMAL(18, 2), nullable=False)
quantity = Column(Integer, nullable=False)
</code></pre>
<p>The OrderSchema:</p>
<pre class="lang-py prettyprint-override"><code>class OrderDetailSchema(Schema):
id = fields.Int(required=True)
product_id = fields.Str(required=True)
price = fields.Decimal(as_string=True)
quantity = fields.Int()
class OrderSchema(Schema):
id = fields.Int(required=True)
order_details = fields.Nested(OrderDetailSchema, many=True)
</code></pre>
<p>I expected O(log(n)), or almost-constant, retrieval time. Instead, the response time is O(n).</p>
<p><a href="https://i.sstatic.net/4u1Fr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4u1Fr.png" alt="Graph hits vs. Execution time" /></a></p>
<p>I hypothesize it's because <code>Order</code> behaves like a list/vector, and that I should create something like an <code>Orders</code> table and an <code>OrdersSchema</code> that act as an object/dictionary, operating on a key-value basis. But I'm not sure how to do it.</p>
<p>I tried to use it, but without success.</p>
<pre class="lang-py prettyprint-override"><code>class OrdersSchema(Schema):
id = fields.Int(required=True, autoincrement=True)
order = fields.Nested(OrderSchema, many=False)
</code></pre>
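<p>One other suspicion I have (an assumption on my side, not something I have confirmed): serializing the nested <code>order_details</code> runs <code>SELECT ... WHERE order_id = ?</code>, and a <code>ForeignKey</code> alone does not create an index in most databases, so that lookup would scan the whole <code>order_details</code> table — which by itself would give O(n). A minimal sketch of adding the index:</p>

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine, inspect
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Order(Base):
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True, autoincrement=True)

class OrderDetail(Base):
    __tablename__ = "order_details"
    id = Column(Integer, primary_key=True, autoincrement=True)
    # index=True: a ForeignKey by itself does not index the column
    order_id = Column(Integer,
                      ForeignKey("orders.id", name="fk_order_details_orders"),
                      nullable=False, index=True)

# In-memory SQLite just to show the index is actually created
engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
print(inspect(engine).get_indexes("order_details"))
```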
| <python><sqlalchemy><orm> | 2023-10-30 20:11:41 | 0 | 627 | BuddhiLW |
77,391,524 | 1,609,514 | Why is df.dtypes.isin not giving expected results when passed a list of strings? | <p>I want to check that all the dtypes of a dataframe are contained in a subset of dtypes which I specify.</p>
<p>I thought this might be a good way to do that but getting unexpected results:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'A': [1., 2., 3.], 'B': [100, 101, 102], 'C': ['a', 'b', 'c']})
assert df.dtypes.A == 'float64'
assert df.dtypes.B == 'int64'
assert df.dtypes.C == 'object'
print(df.dtypes.isin(['int64', 'float64']))
</code></pre>
<p>Output</p>
<pre class="lang-none prettyprint-override"><code>A True
B False
C False
dtype: bool
</code></pre>
<p>In fact, the results seem to vary each time I run the script. Sometimes I get this:</p>
<pre class="lang-none prettyprint-override"><code>A False
B False
C False
dtype: bool
</code></pre>
<p>Sometimes this:</p>
<pre class="lang-none prettyprint-override"><code>A False
B True
C False
dtype: bool
</code></pre>
<p>Clearly, <code>df.dtypes.isin</code> was not meant to be used this way.</p>
<p>The following works as expected, so I suspect this is something to do with my use of strings in place of the dtype objects:</p>
<pre><code>df.dtypes.isin([np.dtype('float64'), np.dtype('int64')])
</code></pre>
<p>(I realize I could use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.select_dtypes.html#pandas-dataframe-select-dtypes" rel="nofollow noreferrer"><code>select_dtypes</code></a> if I wanted to select the columns of a certain type or <a href="https://pandas.pydata.org/docs/reference/api/pandas.api.types.is_numeric_dtype.html" rel="nofollow noreferrer"><code>is_numeric_dtype</code></a> if I wanted to check all columns are a numeric type, but that is not what I want to do.)</p>
<p><strong>Additional Info</strong></p>
<p>People in the comments are saying it is not reproducible. Here is a copy of my console output.</p>
<pre class="lang-none prettyprint-override"><code>(base) Mac-mini-2:stackoverflow username$ python pandas_dtypes_isin.py
A False
B False
C False
dtype: bool
(base) Mac-mini-2:stackoverflow username$ python pandas_dtypes_isin.py
A True
B False
C False
dtype: bool
(base) Mac-mini-2:stackoverflow username$ python pandas_dtypes_isin.py
A True
B False
C False
dtype: bool
(base) Mac-mini-2:stackoverflow username$ python pandas_dtypes_isin.py
A True
B True
C False
dtype: bool
(base) Mac-mini-2:stackoverflow username$ cat pandas_dtypes_isin.py
import pandas as pd
df = pd.DataFrame({'A': [1., 2., 3.], 'B': [100, 101, 102], 'C': ['a', 'b', 'c']})
assert df.dtypes.A == 'float64'
assert df.dtypes.B == 'int64'
assert df.dtypes.C == 'object'
print(df.dtypes.isin(['int64', 'float64']))
</code></pre>
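<p>For completeness, here are the two variants that do give me stable results — comparing dtype objects to dtype objects, or strings to strings (my understanding is that the flakiness comes from mixing the two, though I have not confirmed the internals):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1., 2., 3.], 'B': [100, 101, 102], 'C': ['a', 'b', 'c']})

# dtype objects on both sides ...
by_dtype = df.dtypes.isin([np.dtype('float64'), np.dtype('int64')])
# ... or strings on both sides
by_str = df.dtypes.astype(str).isin(['float64', 'int64'])

print(by_dtype.tolist())  # → [True, True, False]
print(by_str.tolist())    # → [True, True, False]
```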
| <python><pandas><dtype> | 2023-10-30 20:02:09 | 0 | 11,755 | Bill |
77,391,325 | 14,385,099 | Interpreting special case of 'x for x in ...' loop in python | <p>I have a question about the code chunk below:</p>
<pre><code>for i, name in enumerate([x[:-3] for x in dm_cov.columns[:10]]):
stats['beta'][i].write(f'../data/{sub}_beta_{name}.nii.gz')
</code></pre>
<p>Is it accurate to say that the above code chunk is functionally equivalent to the code chunk below?</p>
<pre><code>for i, name in enumerate([x for x in dm_cov.columns[:7]]):
stats['beta'][i].write(f'../data/{sub}_beta_{name}.nii.gz')
</code></pre>
<p>I'm curious because <code>x</code> should be a column name (a string), so I'm not sure how it can be sliced with <code>[:-3]</code>.</p>
<p>Thank you!</p>
| <python> | 2023-10-30 19:22:13 | 3 | 753 | jo_ |
77,391,204 | 11,628,437 | Pandas `merge_asof` generating error - ValueError: left keys must be sorted | <p>I have the following dataframes that I'd like to merge using <code>merge_asof</code> -</p>
<pre><code>import pandas as pd
data_0 = {'setting':[0,0,0,0,1,1,1,2,2,2,2,2,2],
'values': [0, 1, 2, 3, 0, 1, 2, 0,1,2,3, 5,6]
}
df_0 = pd.DataFrame(data_0)
print(df_0)
</code></pre>
<pre><code> setting values
0 0 0
1 0 1
2 0 2
3 0 3
4 1 0
5 1 1
6 1 2
7 2 0
8 2 1
9 2 2
10 2 3
11 2 5
12 2 6
</code></pre>
<pre><code>data_1 = {'setting':[0,1,2,2],
'start_value': [1,0,1,3],
'end_value':[3,2,3,6]
}
df_1 = pd.DataFrame(data_1)
print(df_1)
</code></pre>
<pre><code> setting start_value end_value
0 0 1 3
1 1 0 2
2 2 1 3
3 2 3 6
</code></pre>
<p>Based on <a href="https://stackoverflow.com/a/77380207/11628437">this</a> answer, I wrote the following code -</p>
<pre><code>out = (pd.merge_asof(df_0.reset_index().sort_values(by=['setting', 'values']),
df_1.sort_values(by=['setting','start_value']),
by='setting', left_on='values', right_on='start_value')
.query('values < end_value')
.set_index('index').sort_index()
.rename(columns={'values': 'values_from_data_0'})
)
print(out)
</code></pre>
<p>However, I changed the sorting process slightly. Now, I am getting this error -</p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-2-8ab16dcd6572> in <cell line: 20>()
18
19
---> 20 out = (pd.merge_asof(df_0.reset_index().sort_values(by=['setting', 'values']),
21 df_1.sort_values(by=['setting','start_value']),
22 by='setting', left_on='values', right_on='start_value')
3 frames
/usr/local/lib/python3.10/dist-packages/pandas/core/reshape/merge.py in _get_join_indexers(self)
2035 raise ValueError(f"Merge keys contain null values on {side} side")
2036 else:
-> 2037 raise ValueError(f"{side} keys must be sorted")
2038
2039 if not Index(right_values).is_monotonic_increasing:
ValueError: left keys must be sorted
</code></pre>
<p>While it seems that multi-column sorting is not the right way to go, <a href="https://stackoverflow.com/a/76022878/11628437">this</a> answer suggests using multi-column sorting, whereas other answers suggest sorting only by the required column. Can someone clarify which is the right process to follow?</p>
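<p>For reference, dropping <code>setting</code> from the sort and ordering by the merge keys alone does run without the error on the data above (my reading — not confirmed from the pandas source — is that <code>merge_asof</code> checks that the <code>left_on</code>/<code>right_on</code> columns themselves are monotonic, with <code>by</code> handling the grouping separately):</p>

```python
import pandas as pd

df_0 = pd.DataFrame({'setting': [0, 0, 0, 0, 1, 1, 1, 2, 2, 2, 2, 2, 2],
                     'values':  [0, 1, 2, 3, 0, 1, 2, 0, 1, 2, 3, 5, 6]})
df_1 = pd.DataFrame({'setting': [0, 1, 2, 2],
                     'start_value': [1, 0, 1, 3],
                     'end_value':   [3, 2, 3, 6]})

# Sort by the merge keys only; `by='setting'` does the grouping itself
out = pd.merge_asof(df_0.sort_values('values'),
                    df_1.sort_values('start_value'),
                    by='setting', left_on='values', right_on='start_value')
print(out.shape)  # one output row per left row → (13, 4)
```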
| <python><pandas><dataframe> | 2023-10-30 18:55:29 | 1 | 1,851 | desert_ranger |