| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string date) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string, nullable) |
|---|---|---|---|---|---|---|---|---|
78,470,209
| 8,884,239
|
How to explode string type column in pyspark dataframe and make individual columns in table
|
<p>I am getting the following value as a string from a DataFrame loaded from a table in PySpark. It is a list of nested dicts. I want to explode it and make the parts separate columns in a table using PySpark.</p>
<pre><code>dataframe col = [{ser={cos=hon, mgse=yyyyyyyy, bd=1994-06-11}, ap={}, ep={}}, {ser={cos=null, mgse=null, bd=null}, ap={ncd=035, ccd=A, scd2=C, cos=hon, pgse=nnnnnnn, pcd=06, nar=vvvvvvvv}, ep={eptd=bbbb, ept=bbbb}}]
</code></pre>
<p>I want to convert this column into individual columns from each occurrence of <code>ser</code>. For example, see my expected results below.</p>
<p><a href="https://i.sstatic.net/rUM29JSk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rUM29JSk.png" alt="enter image description here" /></a></p>
<p>And I want to further explode ap column and ep column and it needs to be looks like below:</p>
<p><a href="https://i.sstatic.net/pzgW1NOf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pzgW1NOf.png" alt="enter image description here" /></a></p>
<p>Can someone help me implement this for the string format that is coming from the Redshift table?</p>
<p>Thanks in Advance.
Bab</p>
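<p>A sketch of one possible approach (not from the original post): the column is in Spark's struct <code>toString</code> format rather than JSON, so one option is to rewrite it into JSON with a couple of regexes and parse it. The field names (<code>ser</code>, <code>ap</code>, <code>ep</code>, ...) are taken from the sample above, and all scalars come back as strings:</p>

```python
import json
import re

def spark_struct_to_obj(s):
    # quote keys: "name=" becomes '"name":'
    s = re.sub(r'([{\s,])(\w+)=', r'\1"\2":', s)
    # quote bare scalar values up to the next delimiter
    s = re.sub(r':(?!["{\[])([^,{}\[\]]+)',
               lambda m: ':"%s"' % m.group(1).strip(), s)
    # restore real nulls
    return json.loads(s.replace(':"null"', ':null'))

raw = ('[{ser={cos=hon, mgse=yyyyyyyy, bd=1994-06-11}, ap={}, ep={}}, '
       '{ser={cos=null, mgse=null, bd=null}, '
       'ap={ncd=035, ccd=A, cos=hon}, ep={eptd=bbbb, ept=bbbb}}]')
records = spark_struct_to_obj(raw)
print(records[0]["ser"]["cos"])
```

<p>The parsed list of dicts could then be turned back into rows (e.g. via <code>spark.createDataFrame</code> or a UDF returning a struct) and flattened with <code>explode</code> plus <code>col("ser.*")</code>-style selects. Treat this as a starting point only: values containing commas or braces would need a real parser.</p>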
|
<python><amazon-web-services><apache-spark><pyspark><aws-glue>
|
2024-05-13 05:31:14
| 1
| 301
|
Bab
|
78,470,100
| 4,390,618
|
Raspberry pi GPIO pin not turning off
|
<p>I bought a Raspberry Pi and flashed the OS. I also bought a 5V relay. I have connected pin 2 to the relay's VCC, pin 6 to its GND, and pin 40 to its 'IN' terminal. At present I have not connected anything to the other side of the relay (but the problem also exists if I connect an LED as a load). No other changes have been made to the Pi except enabling SSH and updating software with <code>sudo apt</code> commands, i.e. it is almost at factory settings.</p>
<p>My code:</p>
<pre><code>import RPi.GPIO as GPIO
import time
in1 = 29  # i.e. 40th pin
GPIO.setmode(GPIO.BCM)
GPIO.setup(in1, GPIO.OUT)
try:
    GPIO.output(in1, GPIO.HIGH)  # 1
    time.sleep(1)
    GPIO.output(in1, GPIO.LOW)  # 2
    print("inside try after low")  # 3
    time.sleep(1)
except KeyboardInterrupt:
    GPIO.cleanup()  # 4
</code></pre>
<p>My problem:
Once the relay turns high at comment <code># 1</code>, it does not turn off at point <code># 2</code>. If I kill the program using Ctrl+C, it turns off, which is not what I want; I just want it to turn on and then turn off.</p>
<p>The line at <code># 3</code> does get executed, and <code># 4</code> executes when Ctrl+C is pressed. If I remove the cleanup code and do not press Ctrl+C, the relay's green light remains on continuously.</p>
<p>This is raspberry pi 64bit OS.</p>
|
<python><raspberry-pi3><gpio>
|
2024-05-13 04:36:04
| 3
| 439
|
LearneriOS
|
78,470,098
| 19,048,408
|
How do I configure Python "yapf" to format method calls with chained methods in args (Polars-style) like this?
|
<p>How do I configure <a href="https://github.com/google/yapf" rel="nofollow noreferrer">YAPF</a> to auto-format Python code like the following example? This is very useful in the domain-specific language (DSL) that Polars creates for data transformations.</p>
<p>This is roughly how I want it to be formatted (note the indented chained method calls):</p>
<pre class="lang-py prettyprint-override"><code>df = df.select(
    event_type=pl.col("log_entry")
        .str.extract(r"Event Type: ([\w ]+)")
        .cast(pl.Utf8),
    event_duration=pl.col("log_entry")
        .str.extract(r"Duration: (\d+)")
        .cast(pl.Int32),
)
</code></pre>
<p>With the default settings, yapf left-aligns the chained method calls inside <code>.select</code>/<code>.with_columns</code> calls with many arguments. The following is how yapf, ruff, and black all format the code (which is difficult to read):</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = df.select(
    event_type=pl.col("log_entry")
    .str.extract(r"Event Type: ([\w ]+)")
    .cast(pl.Utf8),
    event_duration=pl.col("log_entry")
    .str.extract(r"Duration: (\d+)")
    .cast(pl.Int32),
)
</code></pre>
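<p>One knob that may help is yapf's <code>split_before_dot</code>. Below is a sketch of a <code>.style.yapf</code>; whether it reproduces the exact Polars-style indentation depends on the yapf version, so treat the knob names as something to verify against <code>yapf --style-help</code>:</p>

```ini
[style]
based_on_style = pep8
# split chained method calls before the dot instead of after the bracket
split_before_dot = true
continuation_indent_width = 4
```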
|
<python><python-polars><yapf>
|
2024-05-13 04:34:56
| 0
| 468
|
HumpbackWhale194
|
78,469,817
| 11,280,068
|
duckdb - can't insert data due to constraint, but also there is no constraint
|
<p>I have a table in postgres that I want to use duckdb to insert into. I have a unique constraint on the table that uses multiple columns to determine a unique row. On conflict, the duplicate rows should be ignored.</p>
<p>Here is the code I use to set it up (it lives in a class, but that doesn't make a difference):</p>
<pre class="lang-py prettyprint-override"><code>self.conn = duckdb.connect()
self.conn.install_extension('postgres')
self.conn.load_extension('postgres')
self.conn.sql("ATTACH 'dbname=player_data user=postgres host=127.0.0.1 password=mypassword' AS postgres (TYPE POSTGRES);")
</code></pre>
<p>If I run this</p>
<pre class="lang-py prettyprint-override"><code>r = self.conn.sql('''
INSERT INTO postgres.ranked.ranked_data
SELECT * FROM 'data/*.json';
''')
print(r)
</code></pre>
<p>it gives me this error</p>
<pre><code>duckdb.duckdb.Error: Failed to copy data: ERROR: duplicate key value violates unique constraint "unique_row"
DETAIL: Key (leagueid, queuetype, tier, rank, leaguepoints)=(766896e1-2b75-450f-acd4-083d5ef60b55, RANKED_SOLO_5x5, DIAMOND, I, 50) already exists.
CONTEXT: COPY ranked_data, line 45
</code></pre>
<p>If I add an <code>ON CONFLICT</code> statement</p>
<pre class="lang-py prettyprint-override"><code>r = self.conn.sql('''
INSERT INTO postgres.ranked.ranked_data
SELECT * FROM 'data/*.json'
ON CONFLICT DO NOTHING;
''')
print(r)
</code></pre>
<p>It gives me this error</p>
<pre><code>duckdb.duckdb.BinderException: Binder Error: There are no UNIQUE/PRIMARY KEY Indexes that refer to this table, ON CONFLICT is a no-op
</code></pre>
<p>So according to DuckDB, it can't insert because there is a constraint, but at the same time there is no constraint. Help me make this make sense, please!</p>
|
<python><sql><postgresql><constraints><duckdb>
|
2024-05-13 02:14:53
| 1
| 1,194
|
NFeruch - FreePalestine
|
78,469,772
| 5,656,369
|
Import fails with certain subdirectory name
|
<ul>
<li>I attempt to import a Python module that's in a subdirectory of my
current working directory.</li>
<li>If I name the subdirectory <code>foo_bar</code> or <code>one_two_three</code> or <code>python_foo</code>, the import succeeds.</li>
<li>But if I name the subdirectory <code>python_utils</code>, the import fails:</li>
</ul>
<pre><code>02:27: droot ⦿ tree
.
└── python_utils
└── dostuff.py
2 directories, 1 file
02:27: droot ⦿ python
Python 3.12.3 (main, Apr 17 2024, 00:00:00) [GCC 14.0.1 20240411 (Red Hat 14.0.1-0)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from python_utils.dostuff import foo
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'python_utils.dostuff'
>>>
02:27: droot ⦿ mv python_utils/ foo_bar/
02:27: droot ⦿ python
Python 3.12.3 (main, Apr 17 2024, 00:00:00) [GCC 14.0.1 20240411 (Red Hat 14.0.1-0)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from foo_bar.dostuff import foo
>>>
02:28: droot ⦿ mv foo_bar/ one_two_three/
02:29: droot ⦿ python
Python 3.12.3 (main, Apr 17 2024, 00:00:00) [GCC 14.0.1 20240411 (Red Hat 14.0.1-0)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from one_two_three.dostuff import foo
>>>
02:35: droot ⦿ mv one_two_three/ python_foo
02:37: droot ⦿python
Python 3.12.3 (main, Apr 17 2024, 00:00:00) [GCC 14.0.1 20240411 (Red Hat 14.0.1-0)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from python_foo.dostuff import foo
>>>
</code></pre>
<ul>
<li>What's going on? I have tried many names, and all work except <code>python_utils</code>!</li>
</ul>
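<p>One way to diagnose this (the likely culprit being an installed distribution named <code>python_utils</code> shadowing the local directory, though that is an assumption) is to ask the import system which <code>python_utils</code> it actually resolves:</p>

```python
import importlib.util

spec = importlib.util.find_spec("python_utils")
if spec is None:
    print("no module named python_utils on sys.path")
else:
    # origin / search locations reveal whether this is the local directory
    # or a site-packages installation shadowing it
    print(spec.origin, spec.submodule_search_locations)
```

<p>If the printed location is under <code>site-packages</code> rather than the current directory, uninstalling or renaming that package should restore the expected behaviour.</p>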
|
<python>
|
2024-05-13 01:49:34
| 0
| 462
|
teraspora
|
78,469,755
| 108,741
|
Zoho CRM Python SDK 2.1 won't update record owner
|
<p>When I try to change the owner on a Zoho record, I get a success message, but the Lead Owner field doesn't change. The record is in the "Potentials" module (I think it's a custom module).</p>
<pre><code>lead: Record = self.get_lead(int(lead_id))
user: ZCRMUser = self.get_user_by_email(owner_email)
# Perform the update
lead.add_key_value("Owner", user)
request = BodyWrapper()
request.set_data([lead])
request.set_trigger(["approval", "workflow", "blueprint"])
response = self.record_operators.update_record(int(lead_id), "Potentials", request)
</code></pre>
<p>The update is performed and reports no errors but it doesn't change the "Owner" field:</p>
<pre><code>INFO Status: success
INFO Code: SUCCESS
INFO Details
INFO Modified_Time : 2024-05-13 11:23:12+10:00
INFO Modified_By : <zcrmsdk.src.com.zoho.crm.api.users.user.User object at 0x115384c10>
INFO Created_Time : 2024-05-02 14:03:19+10:00
INFO id : 68968000000476063
INFO Created_By : <zcrmsdk.src.com.zoho.crm.api.users.user.User object at 0x115347d10>
INFO Message: record updated
</code></pre>
<p>If I inspect the lead itself the field exists:</p>
<pre><code>Owner = <zcrmsdk.src.com.zoho.crm.api.users.user.User object at 0x1153ae7d0>
</code></pre>
<p>Also the user object is a valid User object. The API token I'm using seems to have all permissions granted. It seems like Zoho just quietly drops the field. I don't know if this is a permissions thing, an API limitation or something else.</p>
<p>UPDATE: I've noticed the code I'm working on has pinned <code>zohocrmsdk2_1 = "^1.0.0"</code>, so this might be an old issue. I will retest with the latest 3.0.0 and see if it helps.
UPDATE 2: That didn't help. The same issue occurs with versions <code>1.1.0</code> and <code>3.0.0</code> of <code>zohocrmsdk2-1</code>.</p>
|
<python><zoho>
|
2024-05-13 01:40:43
| 1
| 39,106
|
SpliFF
|
78,469,621
| 2,684,250
|
Custom MultiInput Model Field in Django
|
<p>I'm trying to create a custom field for volume made up of 4 parts (length, width, height, and units). I'm thinking that extending the <code>models.JSONField</code> class makes the most sense.</p>
<p>Here is what I have so far.</p>
<p>inventory/models.py</p>
<pre><code>from django.db import models
from tpt.fields import VolumeField
class Measurement(models.Model):
    volume = VolumeField()
</code></pre>
<p>tpt/fields.py</p>
<pre><code>from django.db import models
from tpt import forms
class VolumeField(models.JSONField):
    description = 'Package volume field in 3 dimensions'

    def __init__(self, length=None, width=None, height=None, units=None, *args, **kwargs):
        self.widget_args = {
            "length": length,
            "width": width,
            "height": height,
            "unit_choices": units,
        }
        super(VolumeField, self).__init__(*args, **kwargs)

    def formfield(self, **kwargs):
        defaults = {"form_class": forms.VolumeWidgetField}
        defaults.update(kwargs)
        defaults.update(self.widget_args)
        return super(VolumeField, self).formfield(**defaults)
</code></pre>
<p>tpt/forms.py</p>
<pre><code>import json
from django import forms
class VolumeWidget(forms.widgets.MultiWidget):
    def __init__(self, attrs=None):
        widgets = [forms.NumberInput(),
                   forms.NumberInput(),
                   forms.NumberInput(),
                   forms.TextInput()]
        super(VolumeWidget, self).__init__(widgets, attrs)

    def decompress(self, value):
        if value:
            return json.loads(value)
        else:
            return [0, 0, 0, '']


class VolumeWidgetField(forms.fields.MultiValueField):
    widget = VolumeWidget

    def __init__(self, *args, **kwargs):
        list_fields = [forms.fields.CharField(max_length=8),
                       forms.fields.CharField(max_length=8),
                       forms.fields.CharField(max_length=8),
                       forms.fields.CharField(max_length=4)]
        super(VolumeWidgetField, self).__init__(list_fields, *args, **kwargs)

    def compress(self, values):
        return json.dumps(values)
</code></pre>
<p>I'm able to run the server but when I try and add a new entry to the Measurement Model in Admin I get this error:</p>
<pre><code>[13/May/2024 11:57:59] "GET /admin/inventory/measurement/ HTTP/1.1" 200 12250
Internal Server Error: /admin/inventory/measurement/add/
Traceback (most recent call last):
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/core/handlers/base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/contrib/admin/options.py", line 716, in wrapper
return self.admin_site.admin_view(view)(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/utils/decorators.py", line 188, in _view_wrapper
result = _process_exception(request, e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/utils/decorators.py", line 186, in _view_wrapper
response = view_func(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/views/decorators/cache.py", line 80, in _view_wrapper
response = view_func(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/contrib/admin/sites.py", line 240, in inner
return view(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/contrib/admin/options.py", line 1945, in add_view
return self.changeform_view(request, None, form_url, extra_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/utils/decorators.py", line 48, in _wrapper
return bound_method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/utils/decorators.py", line 188, in _view_wrapper
result = _process_exception(request, e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/utils/decorators.py", line 186, in _view_wrapper
response = view_func(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/contrib/admin/options.py", line 1804, in changeform_view
return self._changeform_view(request, object_id, form_url, extra_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/contrib/admin/options.py", line 1838, in _changeform_view
fieldsets = self.get_fieldsets(request, obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/contrib/admin/options.py", line 404, in get_fieldsets
return [(None, {"fields": self.get_fields(request, obj)})]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/contrib/admin/options.py", line 395, in get_fields
form = self._get_form_for_get_fields(request, obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/contrib/admin/options.py", line 786, in _get_form_for_get_fields
return self.get_form(request, obj, fields=None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/contrib/admin/options.py", line 837, in get_form
return modelform_factory(self.model, **defaults)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/forms/models.py", line 652, in modelform_factory
return type(form)(class_name, (form,), form_class_attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/forms/models.py", line 310, in __new__
fields = fields_for_model(
^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/forms/models.py", line 239, in fields_for_model
formfield = formfield_callback(f, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/contrib/admin/options.py", line 228, in formfield_for_dbfield
return db_field.formfield(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/Development/django/tpt/tpt/fields.py", line 22, in formfield
return super(VolumeField, self).formfield(**defaults)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/db/models/fields/json.py", line 159, in formfield
return super().formfield(
^^^^^^^^^^^^^^^^^^
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/db/models/fields/__init__.py", line 1145, in formfield
return form_class(**defaults)
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/stu/Development/django/tpt/tpt/forms.py", line 26, in __init__
super(VolumeWidgetField, self).__init__(list_fields, *args, **kwargs)
File "/Users/stu/.local/share/virtualenvs/storefront-QvD1zd1X/lib/python3.12/site-packages/django/forms/fields.py", line 1087, in __init__
super().__init__(**kwargs)
TypeError: Field.__init__() got an unexpected keyword argument 'encoder'
[13/May/2024 11:58:01] "GET /admin/inventory/measurement/add/ HTTP/1.1" 500 179729
</code></pre>
<p>I've tried extending <code>models.CharField</code> as a test, but then I get the error <code>CharFields must define a 'max_length' attribute.</code>, and when I do specify a <code>max_length</code> I get a failure similar to the JSONField one, but with <code>max_length</code> instead of <code>encoder</code>: <code>TypeError: Field.__init__() got an unexpected keyword argument 'max_length'</code></p>
<p>Any ideas what I'm doing wrong?</p>
<p>Thanks</p>
|
<python><django><django-models><django-forms>
|
2024-05-13 00:09:15
| 1
| 405
|
Stuart Clarke
|
78,469,592
| 16,717,009
|
How can I access the name of namedtuple attribute from inside a method if it wasn't passed as a string?
|
<p>This is a followup to <a href="https://stackoverflow.com/questions/78450557/how-can-i-pass-a-namedtuple-attribute-to-a-method-without-using-a-string">How can I pass a namedtuple attribute to a method without using a string?</a>.
Given this solution to the above:</p>
<pre><code>from typing import NamedTuple
class Record(NamedTuple):
    id: int
    name: str
    age: int


class NamedTupleList:
    def __init__(self, data):
        self._data = data

    def attempt_access(self, row, column):
        r = self._data[row]
        try:
            val = r[column]
        except TypeError:
            val = column.__get__(r)
        return val


data = [Record(1, 'Bob', 30),
        Record(2, 'Carol', 25),
        ]
class_data = NamedTupleList(data)
print(class_data.attempt_access(0, 2))           # 30
print(class_data.attempt_access(0, Record.age))  # 30
</code></pre>
<p>My question is: is there a way to get the name of the attribute from inside the method? In other words, in this example, how can I get <code>'age'</code> inside the method <code>attempt_access</code> so I can produce a more useful error message (for other things not shown)?</p>
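<p>A sketch of one way to recover the name (assuming CPython's <code>typing.NamedTuple</code>, where class access such as <code>Record.age</code> returns the <code>_tuplegetter</code> descriptor itself): match the descriptor against the class attributes named in <code>_fields</code>.</p>

```python
from typing import NamedTuple


class Record(NamedTuple):
    id: int
    name: str
    age: int


def field_name(record_cls, column):
    # integer form: look the name up positionally
    if isinstance(column, int):
        return record_cls._fields[column]
    # descriptor form: find which field name maps to this descriptor
    for name in record_cls._fields:
        if getattr(record_cls, name) is column:
            return name
    raise AttributeError(f"{column!r} is not a field of {record_cls.__name__}")


print(field_name(Record, Record.age))  # age
print(field_name(Record, 2))           # age
```

<p>Inside <code>attempt_access</code>, <code>field_name(type(r), column)</code> would give the string for the error message in both calling styles.</p>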
|
<python><namedtuple>
|
2024-05-12 23:54:40
| 1
| 343
|
MikeP
|
78,469,589
| 2,410,605
|
Python program to send updates to a vendor's API works except for processing the return response
|
<p>This is my first attempt at using an API. I went with Python and everything seemed to work: I was able to send 17,500+ records to a vendor and they show up perfectly on their website. The problem is that the vendor uses different IDs than we do. I work for a public school system and this is for our bus transportation. I pass the vendor bus stops, and in their response they pass me the ID that was created. I then have to take that ID and pass it into a SQL Server procedure to update a cross-reference table. I tested this by sending a single record and it worked great, so I turned on the process to push all 17,500+. When I checked the cross-reference table, I saw unexpected results: of the 17,500 records, only 13 got updated with the vendor ID. On top of that, there is a status field that should have been updated from "New" to "Processed", and it wasn't updated even in the 13 records that got the vendor ID.</p>
<p>I don't know what I did wrong, but I'm hoping somebody can spot the issue. Here is my python code:</p>
<pre><code>import pyodbc as db
import requests
url = "<vendor URL>"
def GetData():
    connVendorData = db.connect(<Driver Info>)
    connVendorData.setdecoding(db.SQL_CHAR, encoding='latin1')
    connVendorData.setencoding('latin1')
    cursor_Vendor = connVendorData.cursor()
    cursor_Vendor.execute('''exec get_addressBookUpdates''')
    for row in cursor_Vendor:
        payload = {
            "geofence": {"circle": {
                "latitude": str(row.latitude),
                "longitude": str(row.longitude),
                "radiusMeters": str(row.radius)
            }},
            "formattedAddress": row.formattedAddress,
            "longitude": str(row.longitude),
            "latitude": str(row.latitude),
            "name": row.addressBookTitle
        }
        headers = {
            "accept": "application/json",
            "content-type": "application/json",
            "authorization": "<vendor token>"
        }
        response = requests.post(url, json=payload, headers=headers)
        connMyCompanyData = db.connect(<Driver Info>)
        connMyCompanyData.setdecoding(db.SQL_CHAR, encoding='latin1')
        connMyCompanyData.setencoding('latin1')
        cursor_MyCompany = connMyCompanyData.cursor()
        cursor_MyCompany.execute('''exec update_stopIDs_xref_AfterProcessing ?, ?''', row.status, response.text)
GetData()
</code></pre>
<p>And here is the stored procedure that is updating the cross reference table:</p>
<pre><code>alter PROC update_stopIDs_xref_AfterProcessing
@status VARCHAR(25),
@json NVARCHAR(MAX)
AS
SET NOCOUNT ON
--Test Parms
--DECLARE @json nvarchar(MAX)
--DECLARE @status VARCHAR(25) = 'New'
--SET @json = '{
-- "data": {
-- "id": "137279769",
-- "name": "MILLGATE RD @ LA GRANGE RD",
-- "createdAtTime": "2024-05-12T15:12:47.880383692Z",
-- "formattedAddress": "MILLGATE RD @ LA GRANGE RD",
-- "geofence": {
-- "circle": {
-- "latitude": 38.27088968,
-- "longitude": -85.57484079,
-- "radiusMeters": 25
-- }
-- },
-- "latitude": 38.27088968,
-- "longitude": -85.57484079
-- }
--}'
--Load the parms into a #temp table
DROP TABLE IF EXISTS #temp
SELECT JSON_VALUE(@json, '$.data.id') vendorID,
JSON_VALUE(@json, '$.data.formattedAddress') formattedAddress,
JSON_VALUE(@json, '$.data.createdAtTime') createdTime
INTO #temp
--Update stopIDs_xref depending on the processing that happened
IF @status = 'New'
UPDATE stopIDs_xref
SET vendorId = t.vendorID,
status = 'Processed'
FROM stopIDs_xref sx
JOIN #temp t ON t.formattedAddress = sx.formattedAddress
ELSE IF @status = 'Update'
UPDATE stopIDs_xref
SET status = 'Processed'
FROM stopIDs_xref sx
JOIN #temp t ON t.formattedAddress = sx.formattedAddress
ELSE IF @status = 'Delete'
DELETE
FROM stopIDs_xref
FROM stopIDs_xref sx
JOIN #temp t ON t.formattedAddress = sx.formattedAddress
</code></pre>
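<p>Two things worth checking here (assumptions, not a confirmed diagnosis): the HTTP response status is never inspected before <code>response.text</code> is handed to the procedure, and pyodbc connections default to <code>autocommit=False</code>, so a per-row connection that is opened, used, and discarded without <code>commit()</code> rolls its update back. The sketch below shows the explicit-commit pattern with <code>sqlite3</code> standing in for pyodbc (both follow the same DB-API transaction rules):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stopIDs_xref (formattedAddress TEXT, vendorId TEXT, status TEXT)")
conn.execute("INSERT INTO stopIDs_xref VALUES ('MILLGATE RD @ LA GRANGE RD', NULL, 'New')")
conn.commit()

def record_vendor_id(conn, address, vendor_id):
    conn.execute(
        "UPDATE stopIDs_xref SET vendorId = ?, status = 'Processed' "
        "WHERE formattedAddress = ?",
        (vendor_id, address),
    )
    conn.commit()  # without this, DB-API drivers roll the change back

record_vendor_id(conn, "MILLGATE RD @ LA GRANGE RD", "137279769")
row = conn.execute("SELECT vendorId, status FROM stopIDs_xref").fetchone()
print(row)
```

<p>In the real code that would mean calling <code>connMyCompanyData.commit()</code> after the <code>execute</code>, and only doing so when <code>response.ok</code> is true; logging non-2xx responses would also show whether the vendor throttled the bulk run.</p>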
|
<python><t-sql>
|
2024-05-12 23:53:51
| 1
| 657
|
JimmyG
|
78,469,325
| 7,339,624
|
AttributeError: module 'networkx' has no attribute 'bfs_layout'
|
<p>I just installed <code>networkx</code> (currently version 3.3) and went to try their <a href="https://networkx.org/documentation/stable/reference/generated/networkx.drawing.layout.bfs_layout.html#networkx.drawing.layout.bfs_layout" rel="nofollow noreferrer">official sample code</a> for drawing a graph with a BFS layout. The code is as follows:</p>
<pre><code>G = nx.path_graph(4)
pos = nx.bfs_layout(G, 0)
</code></pre>
<p>But I get this error:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[43], line 2
1 G = nx.path_graph(8)
----> 2 pos = nx.bfs_layout(G, 0)
AttributeError: module 'networkx' has no attribute 'bfs_layout'
</code></pre>
<p>Any help to fix this would be appreciated.</p>
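<p>A first diagnostic (assuming the interpreter may be importing an older networkx than the freshly installed 3.3, since <code>bfs_layout</code> is a recent addition):</p>

```python
import networkx as nx

print(nx.__version__)              # version actually imported
print(nx.__file__)                 # which installation it came from
print(hasattr(nx, "bfs_layout"))   # False on older releases
```

<p>If the printed version is older than 3.3 (common in notebooks where the kernel predates the install, or with multiple environments), restarting the kernel or installing into the interpreter shown by <code>nx.__file__</code> should make <code>nx.bfs_layout</code> available.</p>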
|
<python><graph><networkx>
|
2024-05-12 21:20:06
| 1
| 4,337
|
Peyman
|
78,469,275
| 12,304,000
|
to_numeric() adds a decimal to all numeric values
|
<p>I have a dataset that looks like this:</p>
<pre><code>profession Australia_F Australia_M Canada_F Canada_M Kenya_F Kenya_M
Author #DIV/0! 80 55 34 60 23
Librarian 10 34 89 33 89 12
Pilot 78 12 67 90 12 55
</code></pre>
<p>To create a heatmap out of this data, I melt it and convert values to numeric values.</p>
<pre><code>melted_df = pd.melt(df, id_vars='Profession', var_name='Country_Gender', value_name='Number')
melted_df[['Country', 'Gender']] = melted_df['Country_Gender'].str.split('_', expand=True)
melted_df['Number'] = pd.to_numeric(melted_df['Number'], errors='coerce')
</code></pre>
<p>Before the <code>pd.to_numeric</code> step, the numbers are fine. However, after applying this function, two things happen:</p>
<ul>
<li>A decimal (.0) is added to all numeric values. For example 80 becomes 80.0</li>
<li>#DIV/0! is changed to NULL</li>
</ul>
<p>I am okay with changing the unknown value to NULL, but I want to avoid the decimals. What can I do here?</p>
<p>What I need to do after this step:</p>
<pre><code>heatmap_data = melted_df.pivot_table(index='Profession', columns=['Country', 'Gender'], values='Percentage')
...generating heatmap...
</code></pre>
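<p>A sketch of one fix: <code>to_numeric</code> has to produce <code>float64</code> because <code>NaN</code> (from <code>#DIV/0!</code>) cannot live in a plain integer column; casting to pandas' nullable <code>Int64</code> dtype keeps the integers and represents the bad cell as <code>&lt;NA&gt;</code>.</p>

```python
import pandas as pd

# sample stands in for the melted 'Number' column from the question
s = pd.Series(["#DIV/0!", "80", "55"])
num = pd.to_numeric(s, errors="coerce").astype("Int64")  # nullable integers
print(num.tolist())
```

<p>Alternatively, if the decimals only bother you in the heatmap annotations, <code>fmt=".0f"</code> in <code>sns.heatmap</code> hides them without changing the dtype.</p>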
|
<python><pandas><dataframe><data-analysis><pandas-melt>
|
2024-05-12 20:55:58
| 1
| 3,522
|
x89
|
78,469,262
| 12,304,000
|
create heatmap with only 2 colors
|
<p>I have a dataset that looks like this:</p>
<pre><code>profession Australia_F Australia_M Canada_F Canada_M Kenya_F Kenya_M
Author 20 80 55 34 60 23
Librarian 10 34 89 33 89 12
Pilot 78 12 67 90 12 55
</code></pre>
<p>I want to plot a sort of heatmap with these values. I tried this:</p>
<pre><code>melted_df = pd.melt(df, id_vars='Profession', var_name='Country_Gender', value_name='Number')
melted_df[['Country', 'Gender']] = melted_df['Country_Gender'].str.split('_', expand=True)
melted_df['Number'] = pd.to_numeric(melted_df['Number'], errors='coerce')
heatmap_data = melted_df.pivot_table(index='Profession', columns=['Country', 'Gender'], values='Number')
plt.figure(figsize=(10, 8))
sns.heatmap(heatmap_data, cmap='coolwarm', annot=True, fmt=".1f", linewidths=.5)
plt.xlabel('Country and Gender')
plt.ylabel('Profession')
plt.xticks(rotation=45)
plt.tight_layout()
plt.savefig('heatmap.png')
</code></pre>
<p>and it seems to work but currently it assigns different colors to all cells based on the numerical value. However, I only want 2 colors in my chart: red & blue.</p>
<p>What I want is, for each profession (each row), to compare each country's F vs M values and color the higher-value cell red.</p>
<p>For example, for Author, these three cells should be red:</p>
<p><strong>Australia_M (80)
Canada_F (55)
Kenya_F (60)</strong></p>
<p>while the other 3 in that row should be blue. How can I achieve this?</p>
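<p>A sketch of one approach (assuming the wide frame from the question, with <code>&lt;country&gt;_F</code>/<code>&lt;country&gt;_M</code> column pairs): build a boolean mask marking the higher value of each pair, then plot the mask with a two-color colormap while annotating with the original numbers.</p>

```python
import pandas as pd

# two rows of the sample data from the question
df = pd.DataFrame(
    {"Australia_F": [20, 10], "Australia_M": [80, 34],
     "Canada_F": [55, 89], "Canada_M": [34, 33],
     "Kenya_F": [60, 89], "Kenya_M": [23, 12]},
    index=["Author", "Librarian"],
)

mask = pd.DataFrame(False, index=df.index, columns=df.columns)
for country in sorted({c.rsplit("_", 1)[0] for c in df.columns}):
    f, m = f"{country}_F", f"{country}_M"
    mask[f] = df[f] >= df[m]   # F cell wins ties
    mask[m] = df[m] > df[f]

# plotting sketch:
# from matplotlib.colors import ListedColormap
# sns.heatmap(mask.astype(int), cmap=ListedColormap(["blue", "red"]),
#             annot=df, fmt="d", cbar=False, linewidths=.5)
```

<p>Passing the mask (0/1) as the heatmap values and the original frame as <code>annot</code> gives exactly two colors while keeping the numbers visible.</p>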
|
<python><pandas><matplotlib><seaborn><heatmap>
|
2024-05-12 20:49:21
| 2
| 3,522
|
x89
|
78,469,209
| 1,377,288
|
Unable to connect to VertexAI from Cloud Run, but it works locally
|
<p><strong>EDIT</strong></p>
<p>In my case, this was happening because of the VPC. I'm not sure how to configure things so that Vertex AI is accessible via the VPC (as I said below, I added the integration), but for now I disabled the VPC and the service is able to access the model.</p>
<hr />
<p>I have a Cloud Run service running a Flask project. This service has VertexAI integration configured and uses a service account with the roles "Vertex AI Viewer" and "Vertex AI User".</p>
<p>Here are the roles this Service Account has:</p>
<pre><code>{
    "role": "roles/aiplatform.viewer",
    "members": [
        "serviceAccount:ai-demo-vertex-accessor@<project_id>.iam.gserviceaccount.com"
    ]
}
{
    "role": "roles/aiplatform.user",
    "members": [
        "serviceAccount:ai-demo-vertex-accessor@<project_id>.iam.gserviceaccount.com"
    ]
}
</code></pre>
<p>Locally, I can run this project using my own account configured in the gcloud CLI.</p>
<p>Here's the code that accesses the model:</p>
<pre class="lang-py prettyprint-override"><code># get credentials and create client
credentials, project = default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
client_options = {"api_endpoint": API_ENDPOINT}
client = PredictionServiceClient(
    client_options=client_options, credentials=credentials
)
...
# classify text
instance = schema.predict.instance.TextClassificationPredictionInstance(
    content=CONTENT,
).to_value()
instances = [instance]
parameters_dict = {}
parameters = json_format.ParseDict(parameters_dict, Value())
endpoint = client.endpoint_path(
    project=PROJECT_ID, location=COMPUTE_REGION, endpoint=ENDPOINT_ID
)
response = client.predict(
    endpoint=endpoint, instances=instances, parameters=parameters
)
predictions = MessageToDict(response._pb)
predictions = predictions["predictions"][0]
</code></pre>
<p>Here's the Dockerfile I'm using to build and push to the Artifact Registry:</p>
<pre><code>FROM python:3.11-slim
WORKDIR /app
COPY poetry.lock pyproject.toml ./
RUN pip install poetry==1.8.2
RUN poetry install --no-root --no-interaction --no-ansi
COPY ./flask/ai_demo .
ENV ENVIRONMENT production
ENV PORT 8080
ENV HOST 0.0.0.0
EXPOSE $PORT
# Define the command to run the Flask application
CMD ["poetry", "run", "python", "main.py"]
</code></pre>
<p>This code works locally, both inside and outside a container, but when I deploy it to Cloud Run, I get this error:</p>
<p><code>google.api_core.exceptions.ServiceUnavailable: 503 failed to connect to all addresses; last error: UNKNOWN: ipv4:<IP>:443: Failed to connect to remote host: FD Shutdown</code></p>
<p>The same endpoint returns a 503 every time. I googled it and couldn't find anything helpful; most people said it's a temporary problem, but I've been trying to deploy this since Friday and keep getting the same error. Also, as I said before, I can connect to the model and classify text when I run this code locally.</p>
<p>What's happening, and how can I make my Cloud Run service access Vertex AI?</p>
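<p>If the service uses a VPC connector with egress set to <code>all-traffic</code>, calls to <code>*.googleapis.com</code> are also forced through the VPC, which can produce exactly this kind of connection failure when the network has no route to Google APIs. One hedged option (verify the flags against your <code>gcloud</code> version) is to route only private ranges through the VPC:</p>

```
# sketch — service name and region are placeholders
gcloud run services update YOUR_SERVICE \
  --region=YOUR_REGION \
  --vpc-egress=private-ranges-only
```

<p>The alternative, if all-traffic egress is required, is to give the VPC a path to Google APIs (Private Google Access on the subnet, or Private Service Connect).</p>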
|
<python><docker><gcloud><google-cloud-run><google-cloud-vertex-ai>
|
2024-05-12 20:23:53
| 0
| 527
|
Tuma
|
78,469,002
| 3,120,501
|
Python packaging: including files/modules which import each other
|
<p>Sorry for the simple question, but I've been through some tutorials and I'm still not sure what the best way is to solve my issue.</p>
<p>I'm trying to install my Python code locally to make it easier to import into other projects I'm developing (the core project is a simulator which I want to utilise in other applications).</p>
<p>I have the following package structure:</p>
<pre><code>simulator/
+- simulator/
+- data/
+- model.yaml
+- __init__.py
+- sim.py
+- dynamics.py
+- config.py
+- setup.py
</code></pre>
<p>with this <code>setup.py</code> file:</p>
<pre><code>from setuptools import setup, find_packages
setup(name='simulator',
      version='0.1.0',
      author='LC',
      packages=find_packages(include=['simulator', 'simulator.*'])
)
</code></pre>
<p>Running <code>pip install -e . --user</code> in the top-level <code>simulator</code> folder achieves the desired result of installing the package, e.g. I am able to use the statements</p>
<pre><code>import simulator
from simulator.sim import run
</code></pre>
<p>from other code files on my system. But, say that <code>sim</code> imports <code>dynamics</code>. This breaks things - I get a <code>ModuleNotFoundError</code> when the second statement is run.</p>
<p>What do I need to do to package the code when the files/modules within the package import each other in this way? I suppose I could import them all inside <code>__init__.py</code>, as in this article: <a href="https://changhsinlee.com/python-package/" rel="nofollow noreferrer">https://changhsinlee.com/python-package/</a>, but I don't know if this will cause some circular import issues, or other unintended consequences.</p>
<p>(As an aside, how can I ensure the data folder and yaml file are included too?)</p>
|
<python><python-import><setuptools><python-packaging>
|
2024-05-12 19:00:48
| 0
| 528
|
LordCat
|
78,468,872
| 16,674,436
|
matplotlib ValueError: Image size of 1781907x1084 pixels is too large. It must be less than 2^16 in each direction
|
<p>Yet another question regarding this somewhat typical error I guess, but I neither understand nor find the correct answer.</p>
<p>Here is the data frame I have:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'author': ['someguy', 'someone', 'again'],
'created_utc': ['2021-01-30 18:00:38', '2021-01-28 13:40:34', '2021-01-28 21:06:23'],
'score': ['417276', '317384', '282358']
})
</code></pre>
<p>And this is the code I can’t get working:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
# Assuming coms_per_month is your first dataset and best_score is the top scoring authors
plt.figure(figsize=(16, 12))
# Plot the first dataset (comments per month)
ax1 = plt.gca()
coms_per_month.plot(kind='bar', ax=ax1, color='blue', alpha=0.7) # Primary y-axis
plt.title('Frequency Distribution of Comments per Month', fontsize=20)
plt.xlabel('Date', fontsize=20)
plt.ylabel('Number of Comments (Primary)', fontsize=20)
plt.grid(True)
# Create a secondary y-axis
ax2 = ax1.twinx()
top_submissions = main_submissions.nlargest(10, 'score')
top_authors = top_submissions[['author', 'created_utc', 'score']]
top_authors['log_score'] = np.log(top_authors['score'])
plt.scatter(top_authors['created_utc'], top_authors['log_score'], c='red', s=100, label='Top Authors')
# Plot the names of the top scoring authors at the positions of their highest scores
for idx, row in top_authors.iterrows():
    ax2.text(row['created_utc'], row['log_score'], s=row['author'])
plt.tight_layout()
plt.savefig('results/Elites Influence.pdf')
plt.show()
</code></pre>
<p>I can’t reproduce the other data set, but I reckon it’s not the one causing the issue as I’m able to produce the plot with it. The only thing I’m missing is the second y-axis where I want to plot the name of <code>author</code> in the coordinates of <code>x=top_authors['created_utc']</code> and <code>y=top_authors['score']</code>.</p>
<p>I either get a plot with the main data set plotted correctly, but without the second y-axis and its corresponding values (author, etc.), or I get the following error:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/IPython/core/formatters.py in __call__(self, obj)
339 pass
340 else:
--> 341 return printer(obj)
342 # Finally look for special method names
343 method = get_real_method(obj, self.print_method)
8 frames
/usr/local/lib/python3.10/dist-packages/matplotlib/backends/backend_agg.py in __init__(self, width, height, dpi)
82 self.width = width
83 self.height = height
---> 84 self._renderer = _RendererAgg(int(width), int(height), dpi)
85 self._filter_renderers = []
86
ValueError: Image size of 891146x450 pixels is too large. It must be less than 2^16 in each direction.
<Figure size 800x400 with 2 Axes>
</code></pre>
<p>Which is why I tried to log the <code>score</code> field, but the issue remains the same.</p>
<p>I’m for sure missing something, but I’m not able to find what it is. I even asked ChatGPT and Claude, without success.</p>
|
<python><pandas><matplotlib><plot>
|
2024-05-12 18:18:37
| 0
| 341
|
Louis
|
78,468,862
| 10,089,194
|
Struggling with relative imports. Help me confirm that my understanding of "attempted relative import beyond top-level package" error is correct
|
<p>In my folder named <code>project</code>, this is my directory structure.</p>
<pre><code>.
├── folder_1
│ ├── folder_3
│ │ ├── module_3.py
│ │ └── module_4.py
│ └── module_1.py
├── folder_2
│ └── module_2.py
└── script.py
</code></pre>
<p>I am trying to do relative imports inside <code>module_3.py</code></p>
<pre><code># module_3.py
print("Code is currently in module_3.py file")
print("\n File name of module_3.py :",__name__)
from . import module_4
from .. import module_1
from ...folder_2 import module_2
</code></pre>
<ol>
<li><p>When I do <code>python3 -m project.folder_1.folder_3.module_3</code>, it runs successfully.</p>
</li>
<li><p>When I run <code>python3 -m folder_1.folder_3.module_3</code>, I am getting below error:</p>
</li>
</ol>
<pre><code>Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/Users/admin/Desktop/remote/project/folder_1/folder_3/module_3.py", line 13, in <module>
from ...folder_2 import module_2
ImportError: attempted relative import beyond top-level package
</code></pre>
<p>What I know is that relative imports work inside packages only.</p>
<p><strong>QUESTION</strong> - Is my understanding correct that including <code>project</code> in the command helps Python understand that <code>project</code> is also a package? Otherwise, Python considers only <code>folder_1</code> a package and cannot find <code>folder_2</code>, even though it is at the same level as <code>folder_1</code>, because it does not recognize the parent <code>project</code> directory as a package when the command omits the <code>project</code> prefix.</p>
<p>Does this mean it is always better to run your program with the topmost parent folder name in your command, hence telling Python the topmost package?</p>
<p>If not, what is the reason it cannot reach <code>folder_2</code>, which is a sibling directory at the same level as <code>folder_1</code>?</p>
|
<python><import><package><relative-import>
|
2024-05-12 18:14:59
| 1
| 369
|
user10089194
|
78,468,856
| 2,128,014
|
Advancing to the next video using PyQt6
|
<p>I am trying to run a PyQt6 <a href="https://doc.qt.io/qtforpython-6.2/examples/example_multimedia__player.html" rel="nofollow noreferrer">example</a> , but the method to advance the video player to the next clip gets the UI stuck (unresponsive). How can I fix that?</p>
<p>The code to move to the next clip is</p>
<pre><code>@Slot()
def next_clicked(self):
    print(f"next_from {self._playlist_index}")
    self._playlist_index = (self._playlist_index + 1) % len(self._playlist)
    self._player.setSource(self._playlist[self._playlist_index])
</code></pre>
<p>I have checked that I have a valid playlist containing more than one clip, and the index correctly progresses from <code>n-1</code> to <code>0</code>, etc. Once <code>setSource()</code> is executed, the UI becomes unresponsive and needs to be killed.</p>
<p>I am running this on Windows 11 with the latest patches and Python 3.11.</p>
<p>UPDATE: The code works better with shorter clips (~20s) and on a faster drive (C: vs external USB), so the problems might be due to synchronization, i.e., I should not attempt to play a clip before it is fully loaded. But how can I do that, i.e., which event do I need to handle? (Hypothetically: onMediaLoaded() <code>self._player.play()</code>).</p>
<p>The code posted below is a complete example. It should run for you as is, and allow you to open a clip from a directory containing several clips in MP4 format. When you press the "Next clip" button, it may become unresponsive.</p>
<p>Running with <code>QT_FFMPEG_DEBUG=1</code> does not print any debug messages.</p>
<p>My code, which is virtually identical to the example:</p>
<pre><code># see https://doc.qt.io/qtforpython-6/examples/example_multimedia_player.html
"""PySide6 Multimedia player example"""
import sys
import os
import glob

from PySide6.QtCore import QStandardPaths, Qt, Slot, QUrl
from PySide6.QtGui import QAction, QIcon, QKeySequence
from PySide6.QtWidgets import (QApplication, QDialog, QFileDialog,
                               QMainWindow, QSlider, QStyle, QToolBar)
from PySide6.QtMultimedia import (QAudioOutput, QMediaFormat,
                                  QMediaPlayer)
from PySide6.QtMultimediaWidgets import QVideoWidget

AVI = "video/x-msvideo"  # AVI
MP4 = 'video/mp4'


def get_supported_mime_types():
    result = []
    for f in QMediaFormat().supportedFileFormats(QMediaFormat.Decode):
        mime_type = QMediaFormat(f).mimeType()
        result.append(mime_type.name())
    return result


def is_directory(path):
    return os.path.isdir(path)


def list_files_with_extension(directory, extension):
    if not extension.startswith('.'):
        extension = '.' + extension
    search_path = os.path.join(directory, '*' + extension)
    return glob.glob(search_path)


class MainWindow(QMainWindow):

    def __init__(self):
        super().__init__()

        self._playlist = []  # FIXME 6.3: Replace by QMediaPlaylist?
        self._playlist_index = -1
        self._audio_output = QAudioOutput()
        self._player = QMediaPlayer()
        self._player.setAudioOutput(self._audio_output)
        self._player.errorOccurred.connect(self._player_error)

        tool_bar = QToolBar()
        self.addToolBar(tool_bar)

        file_menu = self.menuBar().addMenu("&File")
        icon = QIcon.fromTheme(QIcon.ThemeIcon.DocumentOpen)
        open_action = QAction(icon, "&Open...", self,
                              shortcut=QKeySequence.Open, triggered=self.open)
        file_menu.addAction(open_action)
        tool_bar.addAction(open_action)
        icon = QIcon.fromTheme(QIcon.ThemeIcon.ApplicationExit)
        exit_action = QAction(icon, "E&xit", self,
                              shortcut="Ctrl+Q", triggered=self.close)
        file_menu.addAction(exit_action)

        play_menu = self.menuBar().addMenu("&Play")
        style = self.style()
        icon = QIcon.fromTheme(QIcon.ThemeIcon.MediaPlaybackStart,
                               style.standardIcon(QStyle.SP_MediaPlay))
        self._play_action = tool_bar.addAction(icon, "Play")
        self._play_action.triggered.connect(self._player.play)
        play_menu.addAction(self._play_action)

        icon = QIcon.fromTheme(QIcon.ThemeIcon.MediaSkipBackward,
                               style.standardIcon(QStyle.SP_MediaSkipBackward))
        self._previous_action = tool_bar.addAction(icon, "Previous")
        self._previous_action.triggered.connect(self.previous_clicked)
        play_menu.addAction(self._previous_action)

        icon = QIcon.fromTheme(QIcon.ThemeIcon.MediaPlaybackPause,
                               style.standardIcon(QStyle.SP_MediaPause))
        self._pause_action = tool_bar.addAction(icon, "Pause")
        self._pause_action.triggered.connect(self._player.pause)
        play_menu.addAction(self._pause_action)

        icon = QIcon.fromTheme(QIcon.ThemeIcon.MediaSkipForward,
                               style.standardIcon(QStyle.SP_MediaSkipForward))
        self._next_action = tool_bar.addAction(icon, "Next")
        self._next_action.triggered.connect(self.next_clicked)
        play_menu.addAction(self._next_action)

        icon = QIcon.fromTheme(QIcon.ThemeIcon.MediaPlaybackStop,
                               style.standardIcon(QStyle.SP_MediaStop))
        self._stop_action = tool_bar.addAction(icon, "Stop")
        self._stop_action.triggered.connect(self._ensure_stopped)
        play_menu.addAction(self._stop_action)

        #{ Add a Delete button
        icon = QIcon.fromTheme(QIcon.ThemeIcon.MediaRecord,
                               style.standardIcon(QStyle.SP_MediaVolumeMuted))
        self._delete_action = tool_bar.addAction(icon, "Delete")
        self._delete_action.triggered.connect(self._delete_media)
        play_menu.addAction(self._delete_action)
        #} Add a Delete button

        self._volume_slider = QSlider()
        self._volume_slider.setOrientation(Qt.Horizontal)
        self._volume_slider.setMinimum(0)
        self._volume_slider.setMaximum(100)
        available_width = self.screen().availableGeometry().width()
        self._volume_slider.setFixedWidth(available_width / 10)
        self._volume_slider.setValue(self._audio_output.volume())
        self._volume_slider.setTickInterval(10)
        self._volume_slider.setTickPosition(QSlider.TicksBelow)
        self._volume_slider.setToolTip("Volume")
        self._volume_slider.valueChanged.connect(self._audio_output.setVolume)
        tool_bar.addWidget(self._volume_slider)

        icon = QIcon.fromTheme(QIcon.ThemeIcon.HelpAbout)
        about_menu = self.menuBar().addMenu("&About")
        about_qt_act = QAction(icon, "About &Qt", self, triggered=qApp.aboutQt)  # noqa: F821
        about_menu.addAction(about_qt_act)

        self._video_widget = QVideoWidget()
        self.setCentralWidget(self._video_widget)
        self._player.playbackStateChanged.connect(self.update_buttons)
        self._player.setVideoOutput(self._video_widget)

        self.update_buttons(self._player.playbackState())
        self._mime_types = []

    def closeEvent(self, event):
        self._ensure_stopped()
        event.accept()

    @Slot()
    def open(self):
        self._ensure_stopped()
        file_dialog = QFileDialog(self)

        is_windows = sys.platform == 'win32'
        if not self._mime_types:
            self._mime_types = get_supported_mime_types()
            if (is_windows and AVI not in self._mime_types):
                self._mime_types.append(AVI)
            elif MP4 not in self._mime_types:
                self._mime_types.append(MP4)

        file_dialog.setMimeTypeFilters(self._mime_types)

        #efault_mimetype = AVI if is_windows else MP4
        default_mimetype = MP4
        if default_mimetype in self._mime_types:
            file_dialog.selectMimeTypeFilter(default_mimetype)

        movies_location = QStandardPaths.writableLocation(QStandardPaths.MoviesLocation)
        file_dialog.setDirectory(movies_location)
        if file_dialog.exec() == QDialog.Accepted:
            url = file_dialog.selectedUrls()[0]
            print(f"Opening {url.toString()}")
            if url.isLocalFile():
                file_url = url.toLocalFile()
                if is_directory(file_url):
                    self.append_files(file_url)
                else:
                    dir_url = os.path.dirname(file_url)
                    self.append_files(dir_url)
            self._player.play()

    def append_file(self, url):
        self._playlist.append(url)
        self._playlist_index = len(self._playlist) - 1
        self._player.setSource(url)

    def append_files(self, dir_url):
        files = list_files_with_extension(dir_url, 'mp4')
        urls = [QUrl.fromLocalFile(file) for file in files]
        for url in urls:
            self.append_file(url)

    @Slot()
    def _ensure_stopped(self):
        if self._player.playbackState() != QMediaPlayer.StoppedState:
            self._player.stop()

    @Slot()
    def _delete_media(self):
        playing_now = self._playlist[self._playlist_index]
        print(f"Delete {playing_now}")

    @Slot()
    def previous_clicked(self):
        # Go to previous track if we are within the first 5 seconds of playback
        # Otherwise, seek to the beginning.
        if self._player.position() <= 5000 and self._playlist_index > 0:
            self._playlist_index -= 1
            self._playlist.previous()
            self._player.setSource(self._playlist[self._playlist_index])
        else:
            self._player.setPosition(0)

    @Slot()
    def next_clicked(self):
        print(f"next_from {self._playlist_index}")
        self._playlist_index = (self._playlist_index + 1) % len(self._playlist)
        self._player.setSource(self._playlist[self._playlist_index])
        #self._player.play()

    @Slot("QMediaPlayer::PlaybackState")
    def update_buttons(self, state):
        media_count = len(self._playlist)
        self._play_action.setEnabled(media_count > 0 and state != QMediaPlayer.PlayingState)
        self._pause_action.setEnabled(state == QMediaPlayer.PlayingState)
        self._stop_action.setEnabled(state != QMediaPlayer.StoppedState)
        self._previous_action.setEnabled(self._player.position() > 0)
        self._next_action.setEnabled(media_count > 1)

    def show_status_message(self, message):
        self.statusBar().showMessage(message, 5000)

    @Slot("QMediaPlayer::Error", str)
    def _player_error(self, error, error_string):
        print(error_string, file=sys.stderr)
        self.show_status_message(error_string)


if __name__ == '__main__':
    app = QApplication(sys.argv)
    main_win = MainWindow()
    available_geometry = main_win.screen().availableGeometry()
    main_win.resize(available_geometry.width() / 3,
                    available_geometry.height() / 2)
    main_win.show()
    sys.exit(app.exec())
</code></pre>
|
<python><pyqt6>
|
2024-05-12 18:14:27
| 0
| 4,191
|
radumanolescu
|
78,468,763
| 6,626,531
|
Using Multithreading to Call an API: How to Kill Slow Processes?
|
<p>I need to kill tasks that run for too long.</p>
<p>I have a bunch of API calls that I need to make. I'm currently using <code>ThreadPoolExecutor</code> with 12 workers running in parallel. Each worker calls the API and returns a result that I then join back together.</p>
<p>The problem is that the api that I'm calling takes around 20 seconds if it works and around 500 seconds if it's going to fail.</p>
<p>The problem is that, as far as I can tell, <code>ThreadPoolExecutor</code> has a timeout but can't kill executing workers; it'll just tell you that a task took longer than you asked for. <code>requests.get</code> also has a timeout, but that timeout isn't for the total duration of the request; it applies when <a href="https://requests.kennethreitz.org/en/latest/user/quickstart/#timeouts" rel="nofollow noreferrer">the server has not issued a response for timeout seconds</a>.</p>
<p>Is there anything else that I can do to cancel the long running tasks that I know will fail?</p>
<p>Code has been simplified to:</p>
<pre class="lang-py prettyprint-override"><code>def process():
response = get(url, stream=True, verify=self.cert, params=params, timeout=timeout)
return response
</code></pre>
<p>Simple ThreadPool</p>
<pre><code>with ThreadPoolExecutor(max_workers=number_of_threads) as executor:
    output = list(
        executor.map(self.process, df.to_dict(orient="records"))
    )
</code></pre>
<p>More complicated ThreadPool</p>
<pre><code>with ThreadPoolExecutor(max_workers=self.num_threads) as executor:
    futures = {executor.submit(process, record) for record in df.to_dict(orient="records")}
    for future in as_completed(futures):
        start_time = time()
        try:
            process_data_output = future.result(timeout)
        except concurrent.futures.TimeoutError:
            logger.warning("Failure: Timeout Error. Timeout: %s seconds.", timeout)
            future.cancel()  # Attempt to cancel the task
        duration = time() - start_time
        logger.info("Success: Timeout not met. Process took %s seconds", duration)
</code></pre>
<p>Thoughts?</p>
<p>I've looked into both timeout options, but neither one seems to be able to cancel the job if it takes a long time.</p>
|
<python><python-3.x><python-requests><threadpoolexecutor>
|
2024-05-12 17:34:21
| 0
| 1,975
|
Micah Pearce
|
78,468,180
| 9,815,697
|
How to install a locally developed python wheel file as part of a native Snowflake app
|
<p>I am trying to install a local python wheel file as part of a Snowflake native app.</p>
<p>I tried the steps in the following article:
<a href="https://medium.com/snowflake/introducing-simple-workflow-for-using-python-packages-in-snowpark-928f667ff1aa" rel="nofollow noreferrer">https://medium.com/snowflake/introducing-simple-workflow-for-using-python-packages-in-snowpark-928f667ff1aa</a></p>
<p>But, I keep getting the following error:</p>
<pre><code>093082 (42601): 01b44791-0102-aab9-0000-0002b61281f9: Failed to execute setup script:
Uncaught exception of type 'STATEMENT_ERROR' on line 34 at position 0 : SQL compilation error: Package 'perpetual' is not supported.
</code></pre>
<p>The only difference is that they use a package from pypi in the medium article but I am trying to install a local wheel file.</p>
<p>Is there a way to install a locally developed Python wheel file as part of a native Snowflake app?</p>
|
<python><snowflake-cloud-data-platform>
|
2024-05-12 14:18:27
| 1
| 1,182
|
Mutlu Simsek
|
78,468,135
| 5,515,287
|
Anaconda Navigator Backup on Mac OS is not working on Windows
|
<p>I have a backup of my Anaconda Navigator environment on macOS, but when I import it on Windows it does not work and shows an error that the packages are not found:</p>
<pre><code>name: university_web_scraping
channels:
- conda-forge
- defaults
dependencies:
- anyio=4.2.0=py310hecd8cb5_0
- appnope=0.1.2=py310hecd8cb5_1001
- argon2-cffi=21.3.0=pyhd3eb1b0_0
- argon2-cffi-bindings=21.2.0=py310hca72f7f_0
- asttokens=2.0.5=pyhd3eb1b0_0
- async-lru=2.0.4=py310hecd8cb5_0
- attrs=23.1.0=py310hecd8cb5_0
- babel=2.11.0=py310hecd8cb5_0
- beautifulsoup4=4.12.2=py310hecd8cb5_0
- blas=1.0=mkl
- bleach=4.1.0=pyhd3eb1b0_0
- bottleneck=1.3.7=py310h7b7cdfe_0
- bzip2=1.0.8=h6c40b1e_6
- ca-certificates=2024.3.11=hecd8cb5_0
- certifi=2024.2.2=py310hecd8cb5_0
- cffi=1.16.0=py310h6c40b1e_1
- charset-normalizer=2.0.4=pyhd3eb1b0_0
- chart-studio=1.1.0=pyh9f0ad1d_0
- comm=0.2.1=py310hecd8cb5_0
- debugpy=1.6.7=py310hcec6c5f_0
- decorator=5.1.1=pyhd3eb1b0_0
- defusedxml=0.7.1=pyhd3eb1b0_0
- exceptiongroup=1.2.0=py310hecd8cb5_0
- executing=0.8.3=pyhd3eb1b0_0
- html5lib=1.1=pyhd3eb1b0_0
- icu=73.1=hcec6c5f_0
- idna=3.7=py310hecd8cb5_0
- intel-openmp=2023.1.0=ha357a0b_43548
- ipykernel=6.28.0=py310hecd8cb5_0
- ipython=8.20.0=py310hecd8cb5_0
- jedi=0.18.1=py310hecd8cb5_1
- jinja2=3.1.3=py310hecd8cb5_0
- json5=0.9.6=pyhd3eb1b0_0
- jsonschema=4.19.2=py310hecd8cb5_0
- jsonschema-specifications=2023.7.1=py310hecd8cb5_0
- jupyter-lsp=2.2.0=py310hecd8cb5_0
- jupyter_client=8.6.0=py310hecd8cb5_0
- jupyter_core=5.5.0=py310hecd8cb5_0
- jupyter_events=0.8.0=py310hecd8cb5_0
- jupyter_server=2.10.0=py310hecd8cb5_0
- jupyter_server_terminals=0.4.4=py310hecd8cb5_1
- jupyterlab=4.0.11=py310hecd8cb5_0
- jupyterlab_pygments=0.1.2=py_0
- jupyterlab_server=2.25.1=py310hecd8cb5_0
- libcxx=14.0.6=h9765a3e_0
- libffi=3.4.4=hecd8cb5_1
- libiconv=1.16=h6c40b1e_3
- libsodium=1.0.18=h1de35cc_0
- libxml2=2.10.4=h45904e2_2
- libxslt=1.1.37=h6c40b1e_1
- lxml=5.2.1=py310h946e0e5_0
- markupsafe=2.1.3=py310h6c40b1e_0
- matplotlib-inline=0.1.6=py310hecd8cb5_0
- mechanize=0.4.10=pyhd8ed1ab_0
- mistune=2.0.4=py310hecd8cb5_0
- mkl=2023.1.0=h8e150cf_43560
- mkl-service=2.4.0=py310h6c40b1e_1
- mkl_fft=1.3.8=py310h6c40b1e_0
- mkl_random=1.2.4=py310ha357a0b_0
- nbclient=0.8.0=py310hecd8cb5_0
- nbconvert=7.10.0=py310hecd8cb5_0
- nbformat=5.9.2=py310hecd8cb5_0
- ncurses=6.4=hcec6c5f_0
- nest-asyncio=1.6.0=py310hecd8cb5_0
- notebook=7.0.8=py310hecd8cb5_0
- notebook-shim=0.2.3=py310hecd8cb5_0
- numexpr=2.8.7=py310h827a554_0
- numpy=1.26.4=py310h827a554_0
- numpy-base=1.26.4=py310ha186be2_0
- openssl=3.0.13=hca72f7f_1
- overrides=7.4.0=py310hecd8cb5_0
- packaging=23.2=py310hecd8cb5_0
- pandas=2.2.1=py310hcec6c5f_0
- pandocfilters=1.5.0=pyhd3eb1b0_0
- parso=0.8.3=pyhd3eb1b0_0
- pexpect=4.8.0=pyhd3eb1b0_3
- pip=24.0=py310hecd8cb5_0
- platformdirs=3.10.0=py310hecd8cb5_0
- plotly=5.19.0=py310h20db666_0
- prometheus_client=0.14.1=py310hecd8cb5_0
- prompt-toolkit=3.0.43=py310hecd8cb5_0
- prompt_toolkit=3.0.43=hd3eb1b0_0
- psutil=5.9.0=py310hca72f7f_0
- ptyprocess=0.7.0=pyhd3eb1b0_2
- pure_eval=0.2.2=pyhd3eb1b0_0
- pycparser=2.21=pyhd3eb1b0_0
- pygments=2.15.1=py310hecd8cb5_1
- python=3.10.14=h5ee71fb_1
- python-dateutil=2.9.0post0=py310hecd8cb5_0
- python-fastjsonschema=2.16.2=py310hecd8cb5_0
- python-json-logger=2.0.7=py310hecd8cb5_0
- python-tzdata=2023.3=pyhd3eb1b0_0
- pytz=2024.1=py310hecd8cb5_0
- pyyaml=6.0.1=py310h6c40b1e_0
- pyzmq=25.1.2=py310hcec6c5f_0
- readline=8.2=hca72f7f_0
- referencing=0.30.2=py310hecd8cb5_0
- requests=2.31.0=py310hecd8cb5_1
- retrying=1.3.3=pyhd3eb1b0_2
- rfc3339-validator=0.1.4=py310hecd8cb5_0
- rfc3986-validator=0.1.1=py310hecd8cb5_0
- rpds-py=0.10.6=py310hf2ad997_0
- send2trash=1.8.2=py310hecd8cb5_0
- setuptools=69.5.1=py310hecd8cb5_0
- six=1.16.0=pyhd3eb1b0_1
- sniffio=1.3.0=py310hecd8cb5_0
- soupsieve=2.5=py310hecd8cb5_0
- sqlite=3.45.3=h6c40b1e_0
- stack_data=0.2.0=pyhd3eb1b0_0
- tbb=2021.8.0=ha357a0b_0
- tenacity=8.2.2=py310hecd8cb5_0
- terminado=0.17.1=py310hecd8cb5_0
- tinycss2=1.2.1=py310hecd8cb5_0
- tk=8.6.14=h4d00af3_0
- tomli=2.0.1=py310hecd8cb5_0
- tornado=6.3.3=py310h6c40b1e_0
- traitlets=5.7.1=py310hecd8cb5_0
- typing-extensions=4.11.0=py310hecd8cb5_0
- typing_extensions=4.11.0=py310hecd8cb5_0
- tzdata=2024a=h04d1e81_0
- urllib3=2.1.0=py310hecd8cb5_0
- wcwidth=0.2.5=pyhd3eb1b0_0
- webencodings=0.5.1=py310hecd8cb5_1
- websocket-client=0.58.0=py310hecd8cb5_4
- wheel=0.43.0=py310hecd8cb5_0
- xz=5.4.6=h6c40b1e_1
- yaml=0.2.5=haf1e3a3_0
- zeromq=4.3.5=hcec6c5f_0
</code></pre>
<p>The Error:</p>
<pre><code>Channels:
- conda-forge
- defaults
Platform: win-64
Collecting package metadata (repodata.json): done
Solving environment: failed
PackagesNotFoundError: The following packages are not available from current channels:
- zeromq==4.3.5=hcec6c5f_0
- yaml==0.2.5=haf1e3a3_0
- xz==5.4.6=h6c40b1e_1
- wheel==0.43.0=py310hecd8cb5_0
- websocket-client==0.58.0=py310hecd8cb5_4
- webencodings==0.5.1=py310hecd8cb5_1
- urllib3==2.1.0=py310hecd8cb5_0
- typing_extensions==4.11.0=py310hecd8cb5_0
- typing-extensions==4.11.0=py310hecd8cb5_0
- traitlets==5.7.1=py310hecd8cb5_0
- tornado==6.3.3=py310h6c40b1e_0
- tomli==2.0.1=py310hecd8cb5_0
- tk==8.6.14=h4d00af3_0
- tinycss2==1.2.1=py310hecd8cb5_0
- terminado==0.17.1=py310hecd8cb5_0
- tenacity==8.2.2=py310hecd8cb5_0
- tbb==2021.8.0=ha357a0b_0
- sqlite==3.45.3=h6c40b1e_0
- soupsieve==2.5=py310hecd8cb5_0
- sniffio==1.3.0=py310hecd8cb5_0
- setuptools==69.5.1=py310hecd8cb5_0
- send2trash==1.8.2=py310hecd8cb5_0
- rpds-py==0.10.6=py310hf2ad997_0
- rfc3986-validator==0.1.1=py310hecd8cb5_0
- rfc3339-validator==0.1.4=py310hecd8cb5_0
- requests==2.31.0=py310hecd8cb5_1
- referencing==0.30.2=py310hecd8cb5_0
- readline==8.2=hca72f7f_0
- pyzmq==25.1.2=py310hcec6c5f_0
- pyyaml==6.0.1=py310h6c40b1e_0
- pytz==2024.1=py310hecd8cb5_0
- python-json-logger==2.0.7=py310hecd8cb5_0
- python-fastjsonschema==2.16.2=py310hecd8cb5_0
- python-dateutil==2.9.0post0=py310hecd8cb5_0
- python==3.10.14=h5ee71fb_1
- pygments==2.15.1=py310hecd8cb5_1
- psutil==5.9.0=py310hca72f7f_0
- prompt-toolkit==3.0.43=py310hecd8cb5_0
- prometheus_client==0.14.1=py310hecd8cb5_0
- plotly==5.19.0=py310h20db666_0
- platformdirs==3.10.0=py310hecd8cb5_0
- pip==24.0=py310hecd8cb5_0
- pandas==2.2.1=py310hcec6c5f_0
- packaging==23.2=py310hecd8cb5_0
- overrides==7.4.0=py310hecd8cb5_0
- openssl==3.0.13=hca72f7f_1
- numpy-base==1.26.4=py310ha186be2_0
- numpy==1.26.4=py310h827a554_0
- numexpr==2.8.7=py310h827a554_0
- notebook-shim==0.2.3=py310hecd8cb5_0
- notebook==7.0.8=py310hecd8cb5_0
- nest-asyncio==1.6.0=py310hecd8cb5_0
- ncurses==6.4=hcec6c5f_0
- nbformat==5.9.2=py310hecd8cb5_0
- nbconvert==7.10.0=py310hecd8cb5_0
- nbclient==0.8.0=py310hecd8cb5_0
- mkl_random==1.2.4=py310ha357a0b_0
- mkl_fft==1.3.8=py310h6c40b1e_0
- mkl-service==2.4.0=py310h6c40b1e_1
- mkl==2023.1.0=h8e150cf_43560
- mistune==2.0.4=py310hecd8cb5_0
- matplotlib-inline==0.1.6=py310hecd8cb5_0
- markupsafe==2.1.3=py310h6c40b1e_0
- lxml==5.2.1=py310h946e0e5_0
- libxslt==1.1.37=h6c40b1e_1
- libxml2==2.10.4=h45904e2_2
- libsodium==1.0.18=h1de35cc_0
- libiconv==1.16=h6c40b1e_3
- libffi==3.4.4=hecd8cb5_1
- libcxx==14.0.6=h9765a3e_0
- jupyterlab_server==2.25.1=py310hecd8cb5_0
- jupyterlab==4.0.11=py310hecd8cb5_0
- jupyter_server_terminals==0.4.4=py310hecd8cb5_1
- jupyter_server==2.10.0=py310hecd8cb5_0
- jupyter_events==0.8.0=py310hecd8cb5_0
- jupyter_core==5.5.0=py310hecd8cb5_0
- jupyter_client==8.6.0=py310hecd8cb5_0
- jupyter-lsp==2.2.0=py310hecd8cb5_0
- jsonschema-specifications==2023.7.1=py310hecd8cb5_0
- jsonschema==4.19.2=py310hecd8cb5_0
- jinja2==3.1.3=py310hecd8cb5_0
- jedi==0.18.1=py310hecd8cb5_1
- ipython==8.20.0=py310hecd8cb5_0
- ipykernel==6.28.0=py310hecd8cb5_0
- intel-openmp==2023.1.0=ha357a0b_43548
- idna==3.7=py310hecd8cb5_0
- icu==73.1=hcec6c5f_0
- exceptiongroup==1.2.0=py310hecd8cb5_0
- debugpy==1.6.7=py310hcec6c5f_0
- comm==0.2.1=py310hecd8cb5_0
- cffi==1.16.0=py310h6c40b1e_1
- certifi==2024.2.2=py310hecd8cb5_0
- ca-certificates==2024.3.11=hecd8cb5_0
- bzip2==1.0.8=h6c40b1e_6
- bottleneck==1.3.7=py310h7b7cdfe_0
- beautifulsoup4==4.12.2=py310hecd8cb5_0
- babel==2.11.0=py310hecd8cb5_0
- attrs==23.1.0=py310hecd8cb5_0
- async-lru==2.0.4=py310hecd8cb5_0
- argon2-cffi-bindings==21.2.0=py310hca72f7f_0
- appnope==0.1.2=py310hecd8cb5_1001
- anyio==4.2.0=py310hecd8cb5_0
Current channels:
- https://conda.anaconda.org/conda-forge/win-64
- https://repo.anaconda.com/pkgs/main/win-64
- https://repo.anaconda.com/pkgs/r/win-64
- https://repo.anaconda.com/pkgs/msys2/win-64
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
</code></pre>
|
<python><anaconda>
|
2024-05-12 14:01:06
| 1
| 3,123
|
Mustafa Poya
|
78,467,773
| 2,302,262
|
Use decorator for repeated overloading
|
<h3>TLDR</h3>
<p>I'm trying to make my type checker happy by specifying overloading of my functions. There is a specific signature I have many of, and I'd like to keep things DRY.</p>
<h3>Example and current code</h3>
<p>I have many functions that take, as their first argument, an <code>int</code> or a <code>float</code>, and return the same type. To make sure my editor understands the one-to-one mapping, I add overloading:</p>
<pre class="lang-py prettyprint-override"><code>from typing import overload
@overload
def add_two(x: int) -> int: ...
@overload
def add_two(x: float) -> float: ...
def add_two(x: int | float) -> int | float:
return x + 2
</code></pre>
<p>This works, great.</p>
<p>Now, I have many such functions, e.g.</p>
<pre class="lang-py prettyprint-override"><code>def add_two(x: int | float) -> int | float:
return x + 2
def square(x: int | float) -> int | float:
return x ** 2
def multiply_by_int_add_int(x: int | float, y: int, z: int) -> int | float:
return x * y + z
(...)
</code></pre>
<p>Adding the overloading in the way it's described, to each of the functions, will add a lot of almost identical code.</p>
<h3>Question</h3>
<p><strong>Is there a way to write a decorator such, that I can add it to each function definition?</strong></p>
<p>Note that:</p>
<ul>
<li><p>The functions have in common, that the type of the first argument determines the type of the return value.</p>
</li>
<li><p>The number and type of the remaining parameters is not pre-determined.</p>
</li>
</ul>
<h3>Current attempt</h3>
<pre class="lang-py prettyprint-override"><code>import functools
from typing import Callable, ParamSpec, TypeVar, overload
T = TypeVar("T")
P = ParamSpec("P")
def int2int_and_float2float(fn: Callable[P, T]) -> Callable[P, T]:
@overload
def wrapped(x: int, *args, **kwargs) -> int: ...
@overload
def wrapped(x: float, *args, **kwargs) -> float: ...
@functools.wraps(fn)
def wrapped(*args: P.args, **kwargs: P.kwargs):
return fn(*args, **kwargs)
return wrapped
@int2int_and_float2float
def add_two(x: int | float) -> int | float:
"""Add two"""
return x * 2
</code></pre>
<p>This is not working; <code>add_two(3)</code> is not recognised to give an integer:
<a href="https://i.sstatic.net/0fDQ1SCY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0fDQ1SCY.png" alt="enter image description here" /></a></p>
<p><strong>What am I missing?</strong></p>
<p>(I want to support python >=3.10)</p>
|
<python><python-typing>
|
2024-05-12 12:04:53
| 1
| 2,294
|
ElRudi
|
78,467,713
| 2,707,955
|
How to use a numpy polynomial expression without numpy?
|
<p>To follow up on a <a href="https://stackoverflow.com/questions/78455901/how-to-find-the-mathematical-function-from-list-of-plots">previous question</a>, I would like to extract the polynomial to calculate the result for certain values in a Python environment without numpy.</p>
<p>The code :</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from numpy.polynomial import Polynomial
x = [430, 518, 471, 581, 330, 560, 387, 280, 231, 196, 168, 148, 136, 124, 112, 104, 101, 93, 88, 84, 80, 76]
y = [13, 10.5, 11.5, 10, 16.9, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90]
plt.figure(figsize=(12,6))
plt.plot(x, y, 'o')
polynomial = Polynomial.fit(x, y, deg=7)
x1 = np.arange(76, 570, 1)
y1 = [polynomial(i) for i in x1]
plt.plot(x1, y1, '-')
print("polynomial result :")
print(polynomial)
print("-------------------")
# test single value of x
x2 = 330
y2_direct = 16.94343988 - 11.3117263*x2 + 17.65987276*x2**2 - 31.01304843*x2**3 - 21.01641368*x2**4 + 46.31228129*x2**5 + 36.3575357*x2**6 - 44.05000699*x2**7
y2_poly = polynomial(x2)
print("y2_direct = ", y2_direct)
print("y2_poly = ", y2_poly)
plt.legend('data deg7'.split())
plt.show()
</code></pre>
<p>I get this :</p>
<p><a href="https://i.sstatic.net/mLfaK2ID.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mLfaK2ID.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/xFTXGN0i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xFTXGN0i.png" alt="enter image description here" /></a></p>
<p>Why are the two values y2_direct and y2_poly different?</p>
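<p>A detail that may explain the discrepancy: <code>Polynomial.fit</code> stores its coefficients over a domain mapped onto the window <code>[-1, 1]</code>, so substituting the printed coefficients into a plain power series over the raw <code>x</code> values gives different numbers. A small sketch with made-up data (not the data above) showing <code>convert()</code> recovering unscaled coefficients:</p>

```python
import numpy as np
from numpy.polynomial import Polynomial

# Made-up sample points lying exactly on 1 + x + x**2 (not the question's data).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 1.0 + x + x**2

p = Polynomial.fit(x, y, deg=2)
# The coefficients stored in `p` are expressed over p.domain mapped onto
# the window [-1, 1]; convert() re-expresses them in the raw x scale.
q = p.convert()

assert np.allclose(q.coef, [1.0, 1.0, 1.0])  # the plain power-series coefficients
assert abs(p(3.0) - q(3.0)) < 1e-8           # both polynomials evaluate identically
```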
|
<python><numpy>
|
2024-05-12 11:46:22
| 1
| 365
|
Aurélien
|
78,467,681
| 1,381,340
|
How is Numpy able to get an array size at run-time?
|
<p>In C++ the size of an array must be determined at compile time.
I want to write a simple program in C++, say, to do a naive multiplication of a square matrix with itself, and I want to do it for any matrix size. However, I do not know how I can get the matrix sizes, or create matrices of various sizes, at run time in C++.</p>
<p>I have read that there are two types of memory, stack and heap. It seems the size of an array on the stack must be determined at compile time, while the size of an array on the heap can be determined at run time.</p>
<p>I also know that <a href="https://numpy.org/" rel="nofollow noreferrer">Numpy</a> can get the size of an array from the user in Python and create an array in C/C++, so the way Numpy does this might be the solution.</p>
<p>Does Numpy do compilation after getting the size of an array from the user? Or are Numpy arrays created on the heap in C?
Basically, how does Numpy avoid this issue?</p>
<p><strong>Edit</strong>
I changed the question to put the focus on how Numpy resolves the issue I had.</p>
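<p>As a concrete illustration of the run-time sizing in question: the shape can come from input the compiler never saw, because Numpy's C core allocates the buffer on the heap when the call is made (a trivial sketch):</p>

```python
import numpy as np

n = int("4")          # size known only at run time, e.g. parsed from user input
m = np.zeros((n, n))  # the backing buffer is heap-allocated at this call

assert m.shape == (4, 4)
assert m.nbytes == 4 * 4 * 8  # float64 entries, 8 bytes each
```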
|
<python><c++><numpy><heap-memory><stack-memory>
|
2024-05-12 11:37:27
| 1
| 2,871
|
MOON
|
78,467,670
| 9,864,539
|
Passing pointer to C struct to ioctl in Python
|
<p>I am writing a Python script that needs to use the <a href="https://docs.kernel.org/filesystems/fscrypt.html#fs-ioc-add-encryption-key" rel="nofollow noreferrer"><code>FS_IOC_ADD_ENCRYPTION_KEY</code></a> ioctl.</p>
<p>This ioctl expects an argument of type (pointer to) <code>fscrypt_add_key_arg</code>, which has this definition in the Linux kernel header files:</p>
<pre class="lang-c prettyprint-override"><code>struct fscrypt_add_key_arg {
struct fscrypt_key_specifier key_spec;
__u32 raw_size;
__u32 key_id;
__u32 __reserved[8];
__u8 raw[];
};
#define FSCRYPT_KEY_SPEC_TYPE_DESCRIPTOR 1
#define FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER 2
#define FSCRYPT_KEY_DESCRIPTOR_SIZE 8
#define FSCRYPT_KEY_IDENTIFIER_SIZE 16
struct fscrypt_key_specifier {
__u32 type; /* one of FSCRYPT_KEY_SPEC_TYPE_* */
__u32 __reserved;
union {
__u8 __reserved[32]; /* reserve some extra space */
__u8 descriptor[FSCRYPT_KEY_DESCRIPTOR_SIZE];
__u8 identifier[FSCRYPT_KEY_IDENTIFIER_SIZE];
} u;
};
</code></pre>
<p>This is my Python code:</p>
<pre class="lang-py prettyprint-override"><code>import fcntl
import struct
FS_IOC_ADD_ENCRYPTION_KEY = 0xc0506617
policy_data.key_descriptor: str = get_key_descriptor()
policy_key: bytes = get_policy_key(policy_data)
fscrypt_key_specifier = struct.pack(
'II16s',
0x2,
0,
bytes.fromhex(policy_data.key_descriptor)
)
fscrypt_add_key_arg = struct.pack(
f'{len(fscrypt_key_specifier)}sII8I{len(policy_key)}s',
fscrypt_key_specifier,
len(policy_key),
0,
0, 0, 0, 0, 0, 0, 0, 0,
policy_key
)
fd = os.open('/mnt/external', os.O_RDONLY)
res = fcntl.ioctl(fd, FS_IOC_ADD_ENCRYPTION_KEY, fscrypt_add_key_arg)
print(res)
</code></pre>
<p>When I execute this code I get an <code>OSError</code>:</p>
<pre><code>Traceback (most recent call last):
File "/home/foo/fscryptdump/./main.py", line 101, in <module>
res = fcntl.ioctl(fd, FS_IOC_ADD_ENCRYPTION_KEY, fscrypt_add_key_arg)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: [Errno 22] Invalid argument
</code></pre>
<p>I have double-checked the documentation of that ioctl and I think the values I am passing are correct, but there probably is a problem in the way they are packed.</p>
<p>How can I solve this?</p>
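<p>One thing worth checking (a size observation, not a verified fix): in the kernel header above, the <code>u</code> union of <code>fscrypt_key_specifier</code> reserves 32 bytes, so packing the identifier with <code>16s</code> yields a 24-byte specifier instead of the 40 bytes the kernel expects, which shifts every later field of the outer struct. A quick sanity check of the sizes:</p>

```python
import struct

FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER = 2
identifier = bytes.fromhex("00" * 16)  # placeholder bytes, not a real key id

too_short = struct.pack('II16s', FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER, 0, identifier)
padded = struct.pack('II32s', FSCRYPT_KEY_SPEC_TYPE_IDENTIFIER, 0, identifier)

assert len(too_short) == 24  # what the question's format string produces
assert len(padded) == 40     # 4 + 4 + 32: sizeof(struct fscrypt_key_specifier)
```

<p>Note that <code>32s</code> zero-pads the 16-byte identifier to fill the union's reserved space.</p>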
|
<python><c><ioctl>
|
2024-05-12 11:33:58
| 1
| 672
|
Ricky Sixx
|
78,467,605
| 1,113,579
|
Broken automatic parameter injection of Connexion 2.7.0 when used with OpenAPI 3.0.0
|
<p>I have a Flask backend which uses Connexion 2.7.0 with OpenAPI 3.0.0.</p>
<p>My swagger.yaml file is defined like this:</p>
<pre><code>openapi: 3.0.0
info:
  title: MyApp
  version: "1.0"
paths:
  /run_recon:
    post:
      summary: run_recon
      operationId: services.model.run_recon
      requestBody:
        content:
          application/json:
            schema:
              type: object
            x-name: payload
            x-body-name: payload
      responses:
        "200":
          description: response
          content:
            "application/json":
              schema:
                type: object
      security:
        - jwt-user: ["secret"]
</code></pre>
<p>swagger.yaml is loaded into connexion app like this:</p>
<pre><code>logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
load_dotenv()
options = {"swagger_ui": True,
"debug": True}
app = connexion.App(__name__, server='gevent')
app.add_api('swagger.yaml', options=options)
app.app.config["JSON_SORT_KEYS"] = False
app.app.wsgi_app = CorsHeaderMiddleware(app.app.wsgi_app)
if __name__ == "__main__":
print("DB_NAME:",os.getenv("DB_NAME"))
app.run(port=5002, host='0.0.0.0')
</code></pre>
<p>And the Python function in Flask app is defined like this:</p>
<pre><code>def run_recon(payload):
    print("[run_recon]")
</code></pre>
<p>When the frontend sends request to <code>/run_recon</code>, the request does not even reach the Python function.</p>
<p><strong>But if I simply change the function signature to <code>def run_recon(body)</code>, then the request reaches the function and it works.</strong></p>
<p>Why is the custom name for the request body breaking the automatic parameter injection of Connexion?</p>
<p>I have tried <code>x-name</code> and <code>x-body-name</code> individually, as well as using both together. Nothing seems to work.</p>
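<p>For reference, one fix often reported for Connexion 2.x with OpenAPI 3 (worth verifying against your exact Connexion version) is to place <code>x-body-name</code> inside the <code>schema</code> object itself, rather than at the media-type or requestBody level:</p>

```yaml
requestBody:
  content:
    application/json:
      schema:
        type: object
        x-body-name: payload   # moved inside the schema object
```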
|
<python><flask><swagger><openapi><connexion>
|
2024-05-12 11:09:59
| 0
| 1,276
|
AllSolutions
|
78,467,484
| 4,342,400
|
Installed requirements cannot be found in heroku
|
<p>Hello everyone, I'm trying to deploy a small monorepo app with the UI and the API in the same project. I want to deploy on Heroku solely through the Git integration, not the Heroku CLI or anything else. I use a package.json in my root folder, which is the following:</p>
<pre><code>{
"name": "app",
"version": "1.0.0",
"engines": {
"node": "18.17.0",
"npm": "10.7.0"
},
"scripts": {
"start": "cd api && flask run",
"install-client": "cd ui && npm install && ng build && cd ..",
"install-server": "cd api && curl -sSL https://bootstrap.pypa.io/get-pip.py -o get-pip.py && python get-pip.py && python -m pip install -r requirements.txt && python -m pip install flask && export PATH=\"/app/.local/bin:$PATH\" && cd ..",
"heroku-postbuild": "npm cache clean --force && npm run install-client && npm run install-server"
},
"dependencies": {
"@angular/cli": "^17.3.7",
"@angular-devkit/build-angular":"17.3.7"
}
}
</code></pre>
<p>I get an error when running the start script with flask:</p>
<pre><code>2024-05-12T10:14:11.608272+00:00 app[web.1]: sh: 1: flask: not found
</code></pre>
<p>At first I did not have the export PATH in the install-server script, but the error also persists after adding it (I added it because I saw in the logs that the dependencies installed from requirements.txt are not on PATH). I also tried using virtual environments for Python, but the same error happened when I added them. I also tried python3 instead of python in the scripts. I am not sure what else to check, and all relevant questions here do not use a package.json with this project structure. Any ideas?</p>
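<p>A detail that may explain the PATH part (a general shell fact, not Heroku-specific): each npm script runs in its own shell, so an <code>export PATH=...</code> inside <code>install-server</code> is gone by the time the <code>start</code> script runs. A tiny demonstration of that scoping:</p>

```python
import subprocess

# A variable exported in one shell invocation is invisible to a later,
# separate invocation -- just as `export PATH=...` in the install-server
# npm script does not carry over into the `start` script's shell.
subprocess.run(["sh", "-c", "export DEMO=bar"], check=True)
out = subprocess.run(
    ["sh", "-c", 'echo "DEMO=${DEMO:-unset}"'],
    capture_output=True, text=True, check=True,
)
assert out.stdout.strip() == "DEMO=unset"
```

<p>Setting PATH inline on the start command itself (in the same shell that launches flask) avoids this.</p>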
<p>requirements.txt:</p>
<pre><code>el-core-news-lg @ https://github.com/explosion/spacy-models/releases/download/el_core_news_lg-3.7.0/el_core_news_lg-3.7.0-py3-none-any.whl#sha256=810efefee1419e204499bb0e2524d727fd7bd748354d9c7648734f698fc49bc9
en-core-web-lg @ https://github.com/explosion/spacy-models/releases/download/en_core_web_lg-3.7.1/en_core_web_lg-3.7.1-py3-none-any.whl#sha256=ab70aeb6172cde82508f7739f35ebc9918a3d07debeed637403c8f794ba3d3dc
Flask==3.0.2
Flask-Cors==4.0.0
networkx==3.2.1
pandas==2.2.0
spacy==3.7.4
spacy-legacy==3.0.12
spacy-loggers==1.0.5
tqdm==4.66.1
nltk==3.8.1
emoji==1.7.0
textblob==0.18.0.post0
</code></pre>
<p>Heroku build logs:</p>
<pre><code> Running heroku-postbuild
> app@1.0.0 heroku-postbuild
> npm run install-server
> app@1.0.0 install-server
> cd api && curl -sSL https://bootstrap.pypa.io/get-pip.py -o get-pip.py && python get-pip.py && python -m pip install -r requirements.txt && python -m pip install flask && export PATH="/app/.local/bin:$PATH" && cd ..
Defaulting to user installation because normal site-packages is not writeable
Collecting pip
Downloading pip-24.0-py3-none-any.whl.metadata (3.6 kB)
Collecting setuptools
Downloading setuptools-69.5.1-py3-none-any.whl.metadata (6.2 kB)
Collecting wheel
Downloading wheel-0.43.0-py3-none-any.whl.metadata (2.2 kB)
Downloading pip-24.0-py3-none-any.whl (2.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 40.4 MB/s eta 0:00:00
Downloading setuptools-69.5.1-py3-none-any.whl (894 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 894.6/894.6 kB 55.5 MB/s eta 0:00:00
Downloading wheel-0.43.0-py3-none-any.whl (65 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 65.8/65.8 kB 8.2 MB/s eta 0:00:00
Installing collected packages: wheel, setuptools, pip
WARNING: The script wheel is installed in '/app/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The scripts pip, pip3 and pip3.10 are installed in '/app/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed pip-24.0 setuptools-69.5.1 wheel-0.43.0
Defaulting to user installation because normal site-packages is not writeable
Collecting el-core-news-lg@ https://github.com/explosion/spacy-models/releases/download/el_core_news_lg-3.7.0/el_core_news_lg-3.7.0-py3-none-any.whl#sha256=810efefee1419e204499bb0e2524d727fd7bd748354d9c7648734f698fc49bc9 (from -r requirements.txt (line 1))
Downloading https://github.com/explosion/spacy-models/releases/download/el_core_news_lg-3.7.0/el_core_news_lg-3.7.0-py3-none-any.whl (568.6 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 568.6/568.6 MB 1.3 MB/s eta 0:00:00
Collecting en-core-web-lg@ https://github.com/explosion/spacy-models/releases/download/en_core_web_lg-3.7.1/en_core_web_lg-3.7.1-py3-none-any.whl#sha256=ab70aeb6172cde82508f7739f35ebc9918a3d07debeed637403c8f794ba3d3dc (from -r requirements.txt (line 2))
Downloading https://github.com/explosion/spacy-models/releases/download/en_core_web_lg-3.7.1/en_core_web_lg-3.7.1-py3-none-any.whl (587.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 587.7/587.7 MB 1.3 MB/s eta 0:00:00
Collecting Flask==3.0.2 (from -r requirements.txt (line 3))
Downloading flask-3.0.2-py3-none-any.whl.metadata (3.6 kB)
Collecting Flask-Cors==4.0.0 (from -r requirements.txt (line 4))
Downloading Flask_Cors-4.0.0-py2.py3-none-any.whl.metadata (5.4 kB)
Collecting networkx==3.2.1 (from -r requirements.txt (line 5))
Downloading networkx-3.2.1-py3-none-any.whl.metadata (5.2 kB)
Collecting pandas==2.2.0 (from -r requirements.txt (line 6))
Downloading pandas-2.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (19 kB)
Collecting spacy==3.7.4 (from -r requirements.txt (line 7))
Downloading spacy-3.7.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (27 kB)
Collecting spacy-legacy==3.0.12 (from -r requirements.txt (line 8))
Downloading spacy_legacy-3.0.12-py2.py3-none-any.whl.metadata (2.8 kB)
Collecting spacy-loggers==1.0.5 (from -r requirements.txt (line 9))
Downloading spacy_loggers-1.0.5-py3-none-any.whl.metadata (23 kB)
Collecting tqdm==4.66.1 (from -r requirements.txt (line 10))
Downloading tqdm-4.66.1-py3-none-any.whl.metadata (57 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 57.6/57.6 kB 6.3 MB/s eta 0:00:00
Collecting nltk==3.8.1 (from -r requirements.txt (line 11))
Downloading nltk-3.8.1-py3-none-any.whl.metadata (2.8 kB)
Collecting emoji==1.7.0 (from -r requirements.txt (line 12))
Downloading emoji-1.7.0.tar.gz (175 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 175.4/175.4 kB 15.8 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting textblob==0.18.0.post0 (from -r requirements.txt (line 13))
Downloading textblob-0.18.0.post0-py3-none-any.whl.metadata (4.5 kB)
Collecting Werkzeug>=3.0.0 (from Flask==3.0.2->-r requirements.txt (line 3))
Downloading werkzeug-3.0.3-py3-none-any.whl.metadata (3.7 kB)
Collecting Jinja2>=3.1.2 (from Flask==3.0.2->-r requirements.txt (line 3))
Downloading jinja2-3.1.4-py3-none-any.whl.metadata (2.6 kB)
Collecting itsdangerous>=2.1.2 (from Flask==3.0.2->-r requirements.txt (line 3))
Downloading itsdangerous-2.2.0-py3-none-any.whl.metadata (1.9 kB)
Collecting click>=8.1.3 (from Flask==3.0.2->-r requirements.txt (line 3))
Downloading click-8.1.7-py3-none-any.whl.metadata (3.0 kB)
Collecting blinker>=1.6.2 (from Flask==3.0.2->-r requirements.txt (line 3))
Downloading blinker-1.8.2-py3-none-any.whl.metadata (1.6 kB)
Collecting numpy<2,>=1.22.4 (from pandas==2.2.0->-r requirements.txt (line 6))
Downloading numpy-1.26.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (61 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.0/61.0 kB 8.2 MB/s eta 0:00:00
Collecting python-dateutil>=2.8.2 (from pandas==2.2.0->-r requirements.txt (line 6))
Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl.metadata (8.4 kB)
Collecting pytz>=2020.1 (from pandas==2.2.0->-r requirements.txt (line 6))
Downloading pytz-2024.1-py2.py3-none-any.whl.metadata (22 kB)
Collecting tzdata>=2022.7 (from pandas==2.2.0->-r requirements.txt (line 6))
Downloading tzdata-2024.1-py2.py3-none-any.whl.metadata (1.4 kB)
Collecting murmurhash<1.1.0,>=0.28.0 (from spacy==3.7.4->-r requirements.txt (line 7))
Downloading murmurhash-1.0.10-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.0 kB)
Collecting cymem<2.1.0,>=2.0.2 (from spacy==3.7.4->-r requirements.txt (line 7))
Downloading cymem-2.0.8-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (8.4 kB)
Collecting preshed<3.1.0,>=3.0.2 (from spacy==3.7.4->-r requirements.txt (line 7))
Downloading preshed-3.0.9-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.2 kB)
Collecting thinc<8.3.0,>=8.2.2 (from spacy==3.7.4->-r requirements.txt (line 7))
Downloading thinc-8.2.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (15 kB)
Collecting wasabi<1.2.0,>=0.9.1 (from spacy==3.7.4->-r requirements.txt (line 7))
Downloading wasabi-1.1.2-py3-none-any.whl.metadata (28 kB)
Collecting srsly<3.0.0,>=2.4.3 (from spacy==3.7.4->-r requirements.txt (line 7))
Downloading srsly-2.4.8-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (20 kB)
Collecting catalogue<2.1.0,>=2.0.6 (from spacy==3.7.4->-r requirements.txt (line 7))
Downloading catalogue-2.0.10-py3-none-any.whl.metadata (14 kB)
Collecting weasel<0.4.0,>=0.1.0 (from spacy==3.7.4->-r requirements.txt (line 7))
Downloading weasel-0.3.4-py3-none-any.whl.metadata (4.7 kB)
Collecting typer<0.10.0,>=0.3.0 (from spacy==3.7.4->-r requirements.txt (line 7))
Downloading typer-0.9.4-py3-none-any.whl.metadata (14 kB)
Collecting smart-open<7.0.0,>=5.2.1 (from spacy==3.7.4->-r requirements.txt (line 7))
Downloading smart_open-6.4.0-py3-none-any.whl.metadata (21 kB)
Collecting requests<3.0.0,>=2.13.0 (from spacy==3.7.4->-r requirements.txt (line 7))
Downloading requests-2.31.0-py3-none-any.whl.metadata (4.6 kB)
Collecting pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4 (from spacy==3.7.4->-r requirements.txt (line 7))
Downloading pydantic-2.7.1-py3-none-any.whl.metadata (107 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 107.3/107.3 kB 14.5 MB/s eta 0:00:00
Requirement already satisfied: setuptools in /app/.local/lib/python3.10/site-packages (from spacy==3.7.4->-r requirements.txt (line 7)) (69.5.1)
Collecting packaging>=20.0 (from spacy==3.7.4->-r requirements.txt (line 7))
Downloading packaging-24.0-py3-none-any.whl.metadata (3.2 kB)
Collecting langcodes<4.0.0,>=3.2.0 (from spacy==3.7.4->-r requirements.txt (line 7))
Downloading langcodes-3.4.0-py3-none-any.whl.metadata (29 kB)
Collecting joblib (from nltk==3.8.1->-r requirements.txt (line 11))
Downloading joblib-1.4.2-py3-none-any.whl.metadata (5.4 kB)
Collecting regex>=2021.8.3 (from nltk==3.8.1->-r requirements.txt (line 11))
Downloading regex-2024.5.10-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (40 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 40.9/40.9 kB 5.2 MB/s eta 0:00:00
Collecting MarkupSafe>=2.0 (from Jinja2>=3.1.2->Flask==3.0.2->-r requirements.txt (line 3))
Downloading MarkupSafe-2.1.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.0 kB)
Collecting language-data>=1.2 (from langcodes<4.0.0,>=3.2.0->spacy==3.7.4->-r requirements.txt (line 7))
Downloading language_data-1.2.0-py3-none-any.whl.metadata (4.3 kB)
Collecting annotated-types>=0.4.0 (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy==3.7.4->-r requirements.txt (line 7))
Downloading annotated_types-0.6.0-py3-none-any.whl.metadata (12 kB)
Collecting pydantic-core==2.18.2 (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy==3.7.4->-r requirements.txt (line 7))
Downloading pydantic_core-2.18.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.5 kB)
Collecting typing-extensions>=4.6.1 (from pydantic!=1.8,!=1.8.1,<3.0.0,>=1.7.4->spacy==3.7.4->-r requirements.txt (line 7))
Downloading typing_extensions-4.11.0-py3-none-any.whl.metadata (3.0 kB)
Requirement already satisfied: six>=1.5 in /usr/lib/python3/dist-packages (from python-dateutil>=2.8.2->pandas==2.2.0->-r requirements.txt (line 6)) (1.16.0)
Collecting charset-normalizer<4,>=2 (from requests<3.0.0,>=2.13.0->spacy==3.7.4->-r requirements.txt (line 7))
Downloading charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (33 kB)
Collecting idna<4,>=2.5 (from requests<3.0.0,>=2.13.0->spacy==3.7.4->-r requirements.txt (line 7))
Downloading idna-3.7-py3-none-any.whl.metadata (9.9 kB)
Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/lib/python3/dist-packages (from requests<3.0.0,>=2.13.0->spacy==3.7.4->-r requirements.txt (line 7)) (1.26.5)
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests<3.0.0,>=2.13.0->spacy==3.7.4->-r requirements.txt (line 7)) (2020.6.20)
Collecting blis<0.8.0,>=0.7.8 (from thinc<8.3.0,>=8.2.2->spacy==3.7.4->-r requirements.txt (line 7))
Downloading blis-0.7.11-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (7.4 kB)
Collecting confection<1.0.0,>=0.0.1 (from thinc<8.3.0,>=8.2.2->spacy==3.7.4->-r requirements.txt (line 7))
Downloading confection-0.1.4-py3-none-any.whl.metadata (19 kB)
Collecting cloudpathlib<0.17.0,>=0.7.0 (from weasel<0.4.0,>=0.1.0->spacy==3.7.4->-r requirements.txt (line 7))
Downloading cloudpathlib-0.16.0-py3-none-any.whl.metadata (14 kB)
Collecting marisa-trie>=0.7.7 (from language-data>=1.2->langcodes<4.0.0,>=3.2.0->spacy==3.7.4->-r requirements.txt (line 7))
Downloading marisa_trie-1.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (8.6 kB)
Downloading flask-3.0.2-py3-none-any.whl (101 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 101.3/101.3 kB 16.4 MB/s eta 0:00:00
Downloading Flask_Cors-4.0.0-py2.py3-none-any.whl (14 kB)
Downloading networkx-3.2.1-py3-none-any.whl (1.6 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.6/1.6 MB 51.3 MB/s eta 0:00:00
Downloading pandas-2.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (13.0 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 13.0/13.0 MB 115.2 MB/s eta 0:00:00
Downloading spacy-3.7.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (6.6 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.6/6.6 MB 121.3 MB/s eta 0:00:00
Downloading spacy_legacy-3.0.12-py2.py3-none-any.whl (29 kB)
Downloading spacy_loggers-1.0.5-py3-none-any.whl (22 kB)
Downloading tqdm-4.66.1-py3-none-any.whl (78 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 78.3/78.3 kB 12.5 MB/s eta 0:00:00
Downloading nltk-3.8.1-py3-none-any.whl (1.5 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.5/1.5 MB 99.5 MB/s eta 0:00:00
Downloading textblob-0.18.0.post0-py3-none-any.whl (626 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 626.3/626.3 kB 64.1 MB/s eta 0:00:00
Downloading blinker-1.8.2-py3-none-any.whl (9.5 kB)
Downloading catalogue-2.0.10-py3-none-any.whl (17 kB)
Downloading click-8.1.7-py3-none-any.whl (97 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 97.9/97.9 kB 15.4 MB/s eta 0:00:00
Downloading cymem-2.0.8-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (46 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 46.1/46.1 kB 6.5 MB/s eta 0:00:00
Downloading itsdangerous-2.2.0-py3-none-any.whl (16 kB)
Downloading jinja2-3.1.4-py3-none-any.whl (133 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.3/133.3 kB 20.6 MB/s eta 0:00:00
Downloading langcodes-3.4.0-py3-none-any.whl (182 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 182.0/182.0 kB 25.3 MB/s eta 0:00:00
Downloading murmurhash-1.0.10-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (29 kB)
Downloading numpy-1.26.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 18.2/18.2 MB 106.3 MB/s eta 0:00:00
Downloading packaging-24.0-py3-none-any.whl (53 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 53.5/53.5 kB 7.3 MB/s eta 0:00:00
Downloading preshed-3.0.9-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (156 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 156.9/156.9 kB 21.2 MB/s eta 0:00:00
Downloading pydantic-2.7.1-py3-none-any.whl (409 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 409.3/409.3 kB 48.0 MB/s eta 0:00:00
Downloading pydantic_core-2.18.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 105.7 MB/s eta 0:00:00
Downloading python_dateutil-2.9.0.post0-py2.py3-none-any.whl (229 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 229.9/229.9 kB 30.7 MB/s eta 0:00:00
Downloading pytz-2024.1-py2.py3-none-any.whl (505 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 505.5/505.5 kB 46.6 MB/s eta 0:00:00
Downloading regex-2024.5.10-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (774 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 774.1/774.1 kB 68.7 MB/s eta 0:00:00
Downloading requests-2.31.0-py3-none-any.whl (62 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.6/62.6 kB 8.7 MB/s eta 0:00:00
Downloading smart_open-6.4.0-py3-none-any.whl (57 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 57.0/57.0 kB 8.7 MB/s eta 0:00:00
Downloading srsly-2.4.8-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (493 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 493.0/493.0 kB 57.6 MB/s eta 0:00:00
Downloading thinc-8.2.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (922 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 922.3/922.3 kB 82.3 MB/s eta 0:00:00
Downloading typer-0.9.4-py3-none-any.whl (45 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 46.0/46.0 kB 7.0 MB/s eta 0:00:00
Downloading tzdata-2024.1-py2.py3-none-any.whl (345 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 345.4/345.4 kB 37.9 MB/s eta 0:00:00
Downloading wasabi-1.1.2-py3-none-any.whl (27 kB)
Downloading weasel-0.3.4-py3-none-any.whl (50 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 50.1/50.1 kB 7.8 MB/s eta 0:00:00
Downloading werkzeug-3.0.3-py3-none-any.whl (227 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 227.3/227.3 kB 32.5 MB/s eta 0:00:00
Downloading joblib-1.4.2-py3-none-any.whl (301 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 301.8/301.8 kB 39.4 MB/s eta 0:00:00
Downloading annotated_types-0.6.0-py3-none-any.whl (12 kB)
Downloading blis-0.7.11-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (10.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.2/10.2 MB 128.9 MB/s eta 0:00:00
Downloading charset_normalizer-3.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (142 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 142.1/142.1 kB 20.9 MB/s eta 0:00:00
Downloading cloudpathlib-0.16.0-py3-none-any.whl (45 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.0/45.0 kB 6.8 MB/s eta 0:00:00
Downloading confection-0.1.4-py3-none-any.whl (35 kB)
Downloading idna-3.7-py3-none-any.whl (66 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 66.8/66.8 kB 9.8 MB/s eta 0:00:00
Downloading language_data-1.2.0-py3-none-any.whl (5.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.4/5.4 MB 116.4 MB/s eta 0:00:00
Downloading MarkupSafe-2.1.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
Downloading typing_extensions-4.11.0-py3-none-any.whl (34 kB)
Downloading marisa_trie-1.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.3/1.3 MB 91.3 MB/s eta 0:00:00
Building wheels for collected packages: emoji
Building wheel for emoji (setup.py): started
Building wheel for emoji (setup.py): finished with status 'done'
Created wheel for emoji: filename=emoji-1.7.0-py3-none-any.whl size=171034 sha256=eeef11f45e3b0973de4d98db5a2af88253e631a772543cba0dd08033b0b08232
Stored in directory: /app/.cache/pip/wheels/31/8a/8c/315c9e5d7773f74b33d5ed33f075b49c6eaeb7cedbb86e2cf8
Successfully built emoji
Installing collected packages: pytz, emoji, cymem, wasabi, tzdata, typing-extensions, tqdm, spacy-loggers, spacy-legacy, smart-open, regex, python-dateutil, packaging, numpy, networkx, murmurhash, MarkupSafe, marisa-trie, joblib, itsdangerous, idna, click, charset-normalizer, catalogue, blinker, annotated-types, Werkzeug, typer, srsly, requests, pydantic-core, preshed, pandas, nltk, language-data, Jinja2, cloudpathlib, blis, textblob, pydantic, langcodes, Flask, Flask-Cors, confection, weasel, thinc, spacy, en-core-web-lg, el-core-news-lg
WARNING: The script tqdm is installed in '/app/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script f2py is installed in '/app/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script normalizer is installed in '/app/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script nltk is installed in '/app/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script flask is installed in '/app/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script weasel is installed in '/app/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script spacy is installed in '/app/.local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed Flask-3.0.2 Flask-Cors-4.0.0 Jinja2-3.1.4 MarkupSafe-2.1.5 Werkzeug-3.0.3 annotated-types-0.6.0 blinker-1.8.2 blis-0.7.11 catalogue-2.0.10 charset-normalizer-3.3.2 click-8.1.7 cloudpathlib-0.16.0 confection-0.1.4 cymem-2.0.8 el-core-news-lg-3.7.0 emoji-1.7.0 en-core-web-lg-3.7.1 idna-3.7 itsdangerous-2.2.0 joblib-1.4.2 langcodes-3.4.0 language-data-1.2.0 marisa-trie-1.1.1 murmurhash-1.0.10 networkx-3.2.1 nltk-3.8.1 numpy-1.26.4 packaging-24.0 pandas-2.2.0 preshed-3.0.9 pydantic-2.7.1 pydantic-core-2.18.2 python-dateutil-2.9.0.post0 pytz-2024.1 regex-2024.5.10 requests-2.31.0 smart-open-6.4.0 spacy-3.7.4 spacy-legacy-3.0.12 spacy-loggers-1.0.5 srsly-2.4.8 textblob-0.18.0.post0 thinc-8.2.3 tqdm-4.66.1 typer-0.9.4 typing-extensions-4.11.0 tzdata-2024.1 wasabi-1.1.2 weasel-0.3.4
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: flask in /app/.local/lib/python3.10/site-packages (3.0.2)
Requirement already satisfied: Werkzeug>=3.0.0 in /app/.local/lib/python3.10/site-packages (from flask) (3.0.3)
Requirement already satisfied: Jinja2>=3.1.2 in /app/.local/lib/python3.10/site-packages (from flask) (3.1.4)
Requirement already satisfied: itsdangerous>=2.1.2 in /app/.local/lib/python3.10/site-packages (from flask) (2.2.0)
Requirement already satisfied: click>=8.1.3 in /app/.local/lib/python3.10/site-packages (from flask) (8.1.7)
Requirement already satisfied: blinker>=1.6.2 in /app/.local/lib/python3.10/site-packages (from flask) (1.8.2)
Requirement already satisfied: MarkupSafe>=2.0 in /app/.local/lib/python3.10/site-packages (from Jinja2>=3.1.2->flask) (2.1.5)
-----> Caching build
- node_modules
-----> Pruning devDependencies
removed 834 packages, and audited 1 package in 2s
found 0 vulnerabilities
-----> Build succeeded!
-----> Discovering process types
Procfile declares types -> (none)
Default types for buildpack -> web
-----> Compressing...
Done: 44.5M
-----> Launching...
       Released v16
</code></pre>
|
<python><heroku><deployment><requirements.txt>
|
2024-05-12 10:30:22
| 1
| 336
|
leo_bouts
|
78,467,412
| 9,506,773
|
How to toggle torch cpu and gpu installation using poetry
|
<p>I am trying to enable the installation of cpu and gpu versions of <code>torch</code> and <code>torchvision</code>, using <code>poetry install --with cpu</code> and <code>poetry install --with gpu</code>, respectively. I have the following in my <code>pyproject.toml</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry.dependencies]
python = "^3.8"
filterpy = "^1.4.5"
gdown = "^5.1.0"
lapx = "^0.5.5"
loguru = "^0.7.2"
numpy = "1.24.4"
pyyaml = "^6.0.1"
regex = "^2024.0.0"
yacs = "^0.1.8"
scikit-learn = "^1.3.0"
pandas = "^2.0.0"
opencv-python = "^4.7.0"
ftfy = "^6.1.3"
gitpython = "^3.1.42"
[tool.poetry.group.gpu]
optional = true
[tool.poetry.group.gpu.dependencies]
torch = [
{version = "^2.2.1", source="pypi", markers = "sys_platform == 'darwin'"},
{version = "^2.2.1", source="torch_cuda121", markers = "sys_platform == 'linux'"},
{version = "^2.2.1", source="torch_cuda121", markers = "sys_platform == 'win32'"},
]
torchvision = [
{version = "^0.17.1", source="pypi", markers = "sys_platform == 'darwin'"},
{version = "^0.17.1", source="torch_cuda121", markers = "sys_platform == 'linux'"},
{version = "^0.17.1", source="torch_cuda121", markers = "sys_platform == 'win32'"},
]
[tool.poetry.group.cpu]
optional = true
[tool.poetry.group.cpu.dependencies]
torch = [
{version = "^2.2.1", source="pypi", markers = "sys_platform == 'darwin'"},
{version = "^2.2.1", source="torchcpu", markers = "sys_platform == 'linux'"},
{version = "^2.2.1", source="torchcpu", markers = "sys_platform == 'win32'"},
]
torchvision = [
{version = "^0.17.1", source="pypi", markers = "sys_platform == 'darwin'"},
{version = "^0.17.1", source="torchcpu", markers = "sys_platform == 'linux'"},
{version = "^0.17.1", source="torchcpu", markers = "sys_platform == 'win32'"},
]
[[tool.poetry.source]]
name = "torchcpu"
url = "https://download.pytorch.org/whl/cpu"
priority = "explicit"
[[tool.poetry.source]]
name = "torch_cuda121"
url = "https://download.pytorch.org/whl/cu121"
priority = "explicit"
</code></pre>
<p>When running <code>poetry lock</code> I get:</p>
<pre class="lang-bash prettyprint-override"><code>Incompatible constraints in requirements of boxmot (10.0.71):
torch (>=2.2.1,<3.0.0) ; sys_platform == "linux" or sys_platform == "win32" ; source=torchcpu
torch (>=2.2.1,<3.0.0) ; sys_platform == "linux" or sys_platform == "win32" ; source=torch_cuda121
</code></pre>
<p>So my approach does not seem to be the right one, but I have no idea how to proceed either. What am I missing? How can this be achieved with poetry?</p>
|
<python><python-poetry>
|
2024-05-12 10:01:44
| 2
| 3,629
|
Mike B
|
78,467,030
| 6,032,979
|
Why does Cognito's admin-get-user report that a user does not exist if it is external user?
|
<p>There is a user, let's say <code>user@email.com</code>, whom I can see in the AWS Cognito Console. But when I try to access that user using <code>admin-get-user</code>, I get <code>UserNotFoundException</code>. Here is the Python code of what I was trying to do:</p>
<pre class="lang-py prettyprint-override"><code>import boto3
# Initialize the Cognito client with the specified profile
session = boto3.Session(profile_name='Profile_Name')
client = session.client('cognito-idp', region_name='us-east-1')
# Get the user details
try:
response = client.admin_get_user(
UserPoolId='user-pool-id',
Username='user@email.com'
)
# Extract the email and creation time
email = response['UserAttributes'][2]['Value']
created_at = response['UserCreateDate'].strftime('%Y-%m-%d %H:%M:%S')
# Print the user details
print(f"Email: {email}, Created At: {created_at}")
except client.exceptions.UserNotFoundException:
print("User not found")
</code></pre>
<p>The output of the above code is <code>User not found</code>.</p>
<hr />
<p>Thinking it might be that the Cognito SDK filters out external users, I experimented with <code>list-users</code>. The code was able to easily get the user. Here is what the code looked like:</p>
<pre class="lang-py prettyprint-override"><code>import boto3
# Initialize the Cognito client with the specified profile
session = boto3.Session(profile_name='Profile_Name')
client = session.client('cognito-idp', region_name='us-east-1')
# Get all users in the user pool
user_info = []
pagination_token = None
while True:
# Get users with pagination
if (pagination_token != None):
response = client.list_users(
UserPoolId='user-pool-id',
AttributesToGet=['email'],
PaginationToken=pagination_token
)
else:
response = client.list_users(
UserPoolId='user-pool-id',
AttributesToGet=['email'],
)
# Extract the emails from the response
for user in response['Users']:
# print(user)
user_info.append({
'email': user['Attributes'][0]['Value'],
'created_at': user['UserCreateDate'].strftime('%Y-%m-%d %H:%M:%S')
})
# Check if there are more users to fetch
if 'PaginationToken' in response:
pagination_token = response['PaginationToken']
else:
break
print("List of emails and creation times:")
for info in user_info:
print(f"Email: {info['email']}, Created At: {info['created_at']}")
# Check if the specified email is in the list
target_email = 'user@email.com'
is_email_present = any(info['email'] == target_email for info in user_info)
# Print the result
print(f"Email {target_email} is present: {is_email_present}")
</code></pre>
<p>The weird thing here is that <a href="https://docs.aws.amazon.com/cli/latest/reference/cognito-idp/admin-get-user.html" rel="nofollow noreferrer"><code>admin-get-user</code></a>'s doc says that it should work on any user:</p>
<blockquote>
<p>Gets the specified user by user name in a user pool as an administrator. <strong>Works on any user.</strong></p>
</blockquote>
<p>I need help solving this problem. Otherwise, I would have to use <code>list-users</code> to fetch all users just so that I can get details about a single user.</p>
|
<python><amazon-web-services><amazon-cognito>
|
2024-05-12 07:43:25
| 1
| 584
|
KapilDev Neupane
|
78,466,922
| 9,072,753
|
What is the rationale for specifying dynamic width and precision _after_ the argument in str.format?
|
<p>In the C programming language, dynamic field width and precision are specified before the argument.</p>
<pre><code>printf("%.*s", 2, "ABCDEF")
</code></pre>
<p>Yet when <code>str.format</code> was introduced in Python 3.0 in 2008, the dynamic field width and precision were specified <em>after</em> the argument.</p>
<pre><code>"{:.{}}".format("ABCDEF", 2)
</code></pre>
<p>Why is that? I find specifying dynamic field width after the argument very confusing. Is there a specific technical reason for it, or is it a preference of the author?</p>
<p>(The origin of my question is that C++ <code>{fmt}</code> also wants dynamic field width and precision after the argument, yet C++ is a statically typed language. My guess is that {fmt} blindly followed the Python specification. As C++ is not dynamically typed like Python, I do not think it was the correct decision, so I decided to ask the community for clarification of the original decision in Python.)</p>
<p>@edit Just mentioning: I wrote to the author of PEP 3101 and received a response that it was 17 years ago and he does not remember; he suggested going through the Python mailing list around 2007.</p>
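<p>For reference, the argument order can be made explicit with numbered replacement fields, which shows where the width and precision values are expected (a small illustrative sketch, not part of the original question):</p>

```python
# Numbered fields make the argument positions explicit:
# field 0 is the value, fields 1 and 2 supply width and precision.
print("{0:.{1}}".format("ABCDEF", 2))        # precision 2 on a string -> "AB"
print("{0:{1}.{2}}".format("ABCDEF", 5, 2))  # width 5, precision 2 -> "AB   "
```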
|
<python><python-3.x><format><f-string>
|
2024-05-12 06:52:19
| 3
| 145,478
|
KamilCuk
|
78,466,852
| 3,286,489
|
In Python, how to print a date in an array concisely without extracting it?
|
<p>When I extract a date from a PR and print it as below,</p>
<pre><code>updated = pull_request.updated_at
print(f"Updated At: {updated}")
</code></pre>
<p>It printed well</p>
<pre><code>Updated At: 2024-05-09 00:45:14+00:00
</code></pre>
<p>However, if I put it in an array (I have other elements in the array as well, but for simplicity I just show the date)</p>
<pre><code>updated = pull_request.updated_at
pull_requests.append([updated])
for pull_request in pull_requests:
print(pull_request)
</code></pre>
<p>it prints in the format below:</p>
<pre><code>[datetime.datetime(2024, 5, 9, 6, 7, 36, tzinfo=datetime.timezone.utc)]
</code></pre>
<p>How can I keep the date variable in the array and print the array directly, i.e. <code>print(pull_request)</code>, while still having the date shown concisely, in the <code>2024-05-09 00:45:14+00:00</code> format?</p>
<p>p/s: I can do it by extracting each element from the array to print, i.e. <code>print(pull_request[0])</code>, but that is cumbersome given I actually have several elements in the array and I don't want to extract them individually.</p>
<pre><code>updated = pull_request.updated_at
pull_requests.append([updated])
for pull_request in pull_requests:
print(pull_request[0])
</code></pre>
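<p>For reference, one way to get the concise form without indexing each element is to pass every item through <code>str()</code> when printing — an illustrative sketch (the second element is made up to stand in for the other fields):</p>

```python
import datetime

pull_request = [
    datetime.datetime(2024, 5, 9, 6, 7, 36, tzinfo=datetime.timezone.utc),
    "open",  # hypothetical extra element
]

# str() on an aware datetime yields the concise "YYYY-MM-DD HH:MM:SS+00:00"
# form, and join() covers every element without indexing them individually.
print(", ".join(str(item) for item in pull_request))
# 2024-05-09 06:07:36+00:00, open
```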
|
<python><arrays>
|
2024-05-12 06:20:39
| 3
| 61,245
|
Elye
|
78,466,835
| 5,043,301
|
Create Super User issue in django
|
<p>My model is like below.</p>
<pre><code>from django.db import models
from django.contrib.auth.models import AbstractUser
# Create your models here.
class User(AbstractUser):
GENDER_CHOICES = (
('M', 'Male'),
('F', 'Female'),
('O', 'Other'),
)
USER_TYPE = (
('S', 'Super Admin'),
('A', 'Admin'),
('P', 'Patient'),
('D', 'Doctor'),
('U', 'User'),
)
first_name = models.CharField(max_length=255)
last_name = models.CharField(max_length=255)
date_of_birth = models.DateField(null=True, default=None)
gender = models.CharField(max_length=1, choices=GENDER_CHOICES)
user_type = models.CharField(max_length=1, choices=USER_TYPE)
username = models.EmailField(unique=True)
email = models.EmailField(unique=True)
phone = models.CharField(max_length=15, blank=True, null=True)
address = models.TextField(blank=True, null=True)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
active = models.BooleanField(default=True)
REQUIRED_FIELDS = ['email', 'password']
def __str__(self):
return f"{self.first_name} {self.last_name}"
</code></pre>
<p>I am trying to run this command.</p>
<pre><code>python manage.py createsuperuser
</code></pre>
<p>I am getting these errors:</p>
<pre><code>Username: admin@admin.com
Password:
Password (again):
The password is too similar to the username.
This password is too common.
Bypass password validation and create user anyway? [y/N]: y
Traceback (most recent call last):
File "/home/foysal/.local/lib/python3.10/site-packages/django/db/backends/utils.py", line 105, in _execute
return self.cursor.execute(sql, params)
File "/home/foysal/.local/lib/python3.10/site-packages/django/db/backends/sqlite3/base.py", line 329, in execute
return super().execute(query, params)
sqlite3.IntegrityError: NOT NULL constraint failed: hospital_user.date_of_birth
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/foysal/Documents/pYtHoN/hospital_management/manage.py", line 22, in <module>
main()
File "/home/foysal/Documents/pYtHoN/hospital_management/manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "/home/foysal/.local/lib/python3.10/site-packages/django/core/management/__init__.py", line 442, in execute_from_command_line
utility.execute()
File "/home/foysal/.local/lib/python3.10/site-packages/django/core/management/__init__.py", line 436, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/foysal/.local/lib/python3.10/site-packages/django/core/management/base.py", line 413, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/foysal/.local/lib/python3.10/site-packages/django/contrib/auth/management/commands/createsuperuser.py", line 89, in execute
return super().execute(*args, **options)
File "/home/foysal/.local/lib/python3.10/site-packages/django/core/management/base.py", line 459, in execute
output = self.handle(*args, **options)
File "/home/foysal/.local/lib/python3.10/site-packages/django/contrib/auth/management/commands/createsuperuser.py", line 238, in handle
self.UserModel._default_manager.db_manager(database).create_superuser(
File "/home/foysal/.local/lib/python3.10/site-packages/django/contrib/auth/models.py", line 172, in create_superuser
return self._create_user(username, email, password, **extra_fields)
File "/home/foysal/.local/lib/python3.10/site-packages/django/contrib/auth/models.py", line 155, in _create_user
user.save(using=self._db)
File "/home/foysal/.local/lib/python3.10/site-packages/django/contrib/auth/base_user.py", line 78, in save
super().save(*args, **kwargs)
File "/home/foysal/.local/lib/python3.10/site-packages/django/db/models/base.py", line 822, in save
self.save_base(
File "/home/foysal/.local/lib/python3.10/site-packages/django/db/models/base.py", line 909, in save_base
updated = self._save_table(
File "/home/foysal/.local/lib/python3.10/site-packages/django/db/models/base.py", line 1067, in _save_table
results = self._do_insert(
File "/home/foysal/.local/lib/python3.10/site-packages/django/db/models/base.py", line 1108, in _do_insert
return manager._insert(
File "/home/foysal/.local/lib/python3.10/site-packages/django/db/models/manager.py", line 87, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/home/foysal/.local/lib/python3.10/site-packages/django/db/models/query.py", line 1847, in _insert
return query.get_compiler(using=using).execute_sql(returning_fields)
File "/home/foysal/.local/lib/python3.10/site-packages/django/db/models/sql/compiler.py", line 1823, in execute_sql
cursor.execute(sql, params)
File "/home/foysal/.local/lib/python3.10/site-packages/django/db/backends/utils.py", line 122, in execute
return super().execute(sql, params)
File "/home/foysal/.local/lib/python3.10/site-packages/django/db/backends/utils.py", line 79, in execute
return self._execute_with_wrappers(
File "/home/foysal/.local/lib/python3.10/site-packages/django/db/backends/utils.py", line 92, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/home/foysal/.local/lib/python3.10/site-packages/django/db/backends/utils.py", line 100, in _execute
with self.db.wrap_database_errors:
File "/home/foysal/.local/lib/python3.10/site-packages/django/db/utils.py", line 91, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/home/foysal/.local/lib/python3.10/site-packages/django/db/backends/utils.py", line 105, in _execute
return self.cursor.execute(sql, params)
File "/home/foysal/.local/lib/python3.10/site-packages/django/db/backends/sqlite3/base.py", line 329, in execute
return super().execute(query, params)
django.db.utils.IntegrityError: NOT NULL constraint failed: hospital_user.date_of_birth
</code></pre>
|
<python><django><django-models>
|
2024-05-12 06:11:13
| 0
| 7,102
|
abu abu
|
78,466,772
| 7,021,836
|
Single millisecond send resolution with python-can, Vector CANalyzer
|
<p>python version: 3.9.13, python-can version: 4.0.0</p>
<p>I am trying to send a CAN message every millisecond using python-can to access Vector's CANalyzer software. Using the code below, I would expect to see a message every millisecond:</p>
<pre><code>import can
import time
import logging
bus = can.interface.Bus(bustype="vector", channel=0, bitrate=500000, app_name='CANalyzer')
def periodic_send(msg, bus):
task = bus.send_periodic(msg, 0.001)
assert isinstance(task, can.CyclicSendTaskABC)
return task
tasks = []
msg = can.Message(arbitration_id=0xCE, data=[1, 1, 1, 1], is_extended_id=False)
tasks.append(periodic_send(msg,bus))
time.sleep(200)
</code></pre>
<p>However, when I look at the signal in CANalyzer, it shows the message being sent every 15ms or so (blue measurement cursor is first sample, orange is second):
<a href="https://i.sstatic.net/pjXAUjfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pjXAUjfg.png" alt="image" /></a></p>
<p>I have also tried managing timing directly and have similarly been unable to achieve single millisecond intervals:</p>
<pre><code>import can
import time
class PreciseTimer():
def __init__(self, task_function, period):
self.task_function = task_function
self.period = period
self.i = 0
self.t0 = time.perf_counter()
def sleep(self):
self.i += 1
delta = self.t0 + self.period * self.i - time.perf_counter()
if delta > 0:
time.sleep(delta)
def run(self):
while True:
self.task_function()
self.sleep()
bus = can.interface.Bus(bustype="vector", channel=0, bitrate=500000, app_name='CANalyzer')
def send_can_msg():
msg = can.Message(arbitration_id=0xCE, data=[1, 1, 1, 1], is_extended_id=False)
bus.send(msg)
timer = PreciseTimer(send_can_msg, 0.001)
timer.run()
</code></pre>
<p>This is possible using an Interactive Generator in CANalyzer, but is it possible with python-can?</p>
|
<python><can-bus><python-can>
|
2024-05-12 05:25:04
| 0
| 1,251
|
ack
|
78,466,708
| 193,939
|
CrewAI not finding co-worker
|
<p>I'm new to Python and its frameworks, but I'm an experienced C# developer. I'm attempting to write a test "Crew" for CrewAI, but I'm getting:</p>
<pre><code>Error executing tool. Co-worker mentioned not found, it must to be one of the following options:
- writer
</code></pre>
<p>A single-agent crew works as I would expect, but adding the second agent (writer) causes this failure. A few times it even asked me how to communicate with "writer," and it's generating that text from my local LLM as I would expect.</p>
<p>Changing to Process.hierarchical and adding the appropriate manager_llm results in a "Crew Manager" that can't talk to either agent. In Process.sequential, once the "researcher" limits out on attempts and returns a result, "writer" begins. None of the agents can talk to each other. From my understanding this is all built on langchain, which I know basically nothing about. I don't know where to even begin troubleshooting this.</p>
<p>Here's my entire code sample, largely derived from a direct sample in the CrewAI docs:</p>
<pre><code>from crewai import Agent, Task, Crew, Process
from langchain_openai import ChatOpenAI
from langchain.agents import tool
@tool
def get_client_feedback(query: str):
"""Pass a question defined in the 'query' variable to the human to get more details."""
return input("\n\n"+query)
# Creating a senior researcher agent with memory and verbose mode
researcher = Agent(
role='Senior Researcher',
goal='Uncover groundbreaking technologies',
verbose=True,
backstory=(
"Driven by curiosity, you're at the forefront of technological"
"innovation, eager to explore and share knowledge that could change"
"the world."
),
llm=ChatOpenAI(
model="crewai-llama3-latest",
base_url="http://localhost:11434/v1",
api_key="NA"
),
tools=[get_client_feedback],
allow_delegation=True
)
# Creating a writer agent with custom tools and delegation capability
writer = Agent(
role='Writer',
goal='Narrate compelling tech stories about technology',
verbose=True,
backstory=(
"With a flair for simplifying complex topics, you craft"
"engaging narratives that captivate and educate, bringing new"
"discoveries to light in an accessible manner."
),
llm=ChatOpenAI(
model="crewai-llama3-latest",
base_url="http://localhost:11434/v1",
api_key="NA"
),
tools=[get_client_feedback],
allow_delegation=False
)
# Research task
research_task = Task(
description=(
"Identify the next big trend in technology."
"Focus on identifying pros and cons and the overall narrative."
"Your final report should clearly articulate the key points,"
"its market opportunities, and potential risks."
),
expected_output='A comprehensive 3 paragraphs long report on the latest AI trends.',
tools=[get_client_feedback],
agent=researcher,
)
# Writing task with language model configuration
write_task = Task(
description=(
"Compose an insightful article on technology."
"Focus on the latest trends and how it's impacting the industry."
"This article should be easy to understand, engaging, and positive."
),
expected_output='A 4 paragraph article on technological advancements formatted as markdown.',
agent=writer,
async_execution=False,
output_file='new-blog-post.md' # Example of output customization
)
# Forming the tech-focused crew with some enhanced configurations
crew = Crew(
agents=[researcher, writer],
tasks=[research_task, write_task],
process=Process.sequential, # Optional: Sequential task execution is default
share_crew=True,
verbose=2
)
# Starting the task execution process with enhanced feedback
result = crew.kickoff()
print(result)
</code></pre>
|
<python><langchain><crewai>
|
2024-05-12 04:33:53
| 1
| 5,952
|
Nathan Wheeler
|
78,466,674
| 10,474,024
|
No such file or directory error for entrypoint.sh file in docker
|
<p>I am following a course and currently stuck on <a href="https://testdriven.io/courses/tdd-django/pytest-setup/" rel="nofollow noreferrer">this part</a> of it.</p>
<p>After starting up the Docker container (which says it is installed correctly when I hit localhost), I get: <code>service "movies" is not running</code></p>
<p>Folder structure:</p>
<p><a href="https://i.sstatic.net/ipcrh5j8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ipcrh5j8.png" alt="folder structure" /></a></p>
<p><code>docker-compose.yaml</code>:</p>
<pre><code># version: '3.8'
services:
movies:
build: ./TestDrivenIO
command: python manage.py runserver 0.0.0.0:8000
volumes:
- ./TestDrivenIO/:/usr/src/TestDrivenIO/
ports:
- 8009:8000
env_file:
- ./TestDrivenIO/.env.dev
depends_on:
- movies-db
movies-db:
image: postgres:15
volumes: # create a volume and bind directory to the container
- postgres_data:/var/lib/postgresql/data/
environment: # added ENV key to define a name for default DB & credentials
- POSTGRES_USER=movies
- POSTGRES_PASSWORD=movies
- POSTGRES_DB=movies_dev
volumes:
postgres_data:
</code></pre>
<p>Dockerfile:</p>
<pre><code># pull official base image
FROM python:3.11.2-slim-buster
# set working directory
WORKDIR /usr/src/TestDrivenIO
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# updated
# install system dependencies
RUN apt-get update \
&& apt-get -y install netcat gcc postgresql \
&& apt-get clean
# install dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
RUN pip install -r requirements.txt
# new
# copy entrypoint.sh
COPY ./entrypoint.sh /usr/src/TestDrivenIO/entrypoint.sh
RUN chmod +x /usr/src/TestDrivenIO/entrypoint.sh
# add app
COPY . .
# new
# run entrypoint.sh
ENTRYPOINT ["/usr/src/TestDrivenIO/entrypoint.sh"]
</code></pre>
<p>How can I determine the issue here? I don't see any way to check the logs. It seems as if I've done everything right, but I don't understand why it says it's not running.</p>
<p>Here is what it looks like when checking processes:</p>
<p><a href="https://i.sstatic.net/rU6m76Ik.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rU6m76Ik.png" alt="docker ps" /></a></p>
<p>After some additional research, I have found that for some reason, the <code>entrypoint.sh</code> file is not being copied over.</p>
<p><a href="https://i.sstatic.net/E46PFVSZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E46PFVSZ.png" alt="file not found error" /></a></p>
|
<python><django><docker-compose>
|
2024-05-12 04:06:49
| 1
| 321
|
ProsperousHeart
|
78,466,656
| 1,851,782
|
Matplotlib plot contours of a function with downhill direction?
|
<p>I am trying to recreate a plot style in matplotlib.</p>
<p>In the plot below, the ticks on each contour show the downhill direction of the function. I want to recreate this style for the surface <code>-x^3/3 + x*y^2 + x</code>, with tick marks showing the downhill direction in the same way.</p>
<p>How can I achieve this effect in matplotlib?</p>
<p><a href="https://i.sstatic.net/6HH07XqB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6HH07XqB.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><plot>
|
2024-05-12 03:49:03
| 0
| 1,724
|
random0620
|
78,466,652
| 5,361,994
|
How to exclude fields with null values using polar dataframe columns for final json output?
|
<p>I am trying to read and process a BigQuery table using polars. At the end I want to write each row as JSON to a file, such that all fields that have null values (no matter how deeply nested the JSON is) are excluded.</p>
<p>I am using polars to process the BigQuery rows. I am new to polars and could not achieve this result.</p>
<p>My code looks like below</p>
<pre class="lang-py prettyprint-override"><code>def clean_json(data):
    return json.dumps(remove_null(data), separators=(',', ':'))
def remove_null(data):
    """
    Recursively remove fields with null values.
    """
    if isinstance(data, dict):
        return {k: remove_null(v) for k, v in data.items() if v is not None}
    elif isinstance(data, list) or isinstance(data, pl.Series):
        return [remove_null(item) for item in data if item is not None]
    return data
def process_row(lazy_frames):
    table = pa.Table.from_batches(lazy_frames)
    df = pl.from_arrow(table)
    df = df.with_columns(
        pl.col("event_params").map_elements(clean_json, return_dtype=pl.Utf8).str.json_decode()
    )
    file = "file.json"
    df.write_ndjson(file)
query = f"select * from {self.table_name}"
query_job = self.bq_client.query(query)
arrow_iterable = query_job.result().to_arrow_iterable(bqstorage_client=self.bq_storage_client)
arrows = []
for arrow in arrow_iterable:
arrows.append(arrow)
.....
if num_of_rows >= 50000
process_row(arrows)
</code></pre>
<p>And the final results looks like</p>
<pre class="lang-json prettyprint-override"><code>{"event_date":"20240324","event_timestamp":1711213604759724,"event_name":"event_name","event_params":[{"key":"ignore_referrer","value":{"string_value":"true","int_value":null}}.......
</code></pre>
<p>As you can see the <code>event_params</code> still have <code>int_value</code> fields even though the field value is null.</p>
<p>And I think this is expected, because as soon as I do <code>.str.json_decode()</code> it converts to a Struct, and when a field is not present in one row but is present in another, the Struct takes that into account as well when determining the final type.</p>
<p>And if I don't use <code>.str.json_decode()</code> then I will get something like this</p>
<pre class="lang-json prettyprint-override"><code>{"event_date":"20240324","event_timestamp":1711213604759724,"event_name":"event_name","event_params":"[{\"key\":\"ignore_referrer\",\"value\":{\"string_value\":\"true\"}} .......
</code></pre>
<p>It removes the null-value fields, but it escapes the characters, which again is expected.</p>
<p>So, I would like to know how I can process the rows in polars so that in the end I get JSON lines that don't have fields with null values, no matter how deeply nested the JSON value is, like below:</p>
<pre class="lang-json prettyprint-override"><code>{"event_date":"20240324","event_timestamp":1711213604759724,"event_name":"first_visit","event_params":[{"key":"ignore_referrer","value":{"string_value":"true"}} ................
</code></pre>
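<p>As a sanity check on the plain-Python side (outside polars), this is the behavior the recursive null-stripping helper is meant to have — a self-contained sketch with a made-up row:</p>

```python
import json

def remove_null(data):
    # Recursively drop dict keys and list items whose value is None.
    if isinstance(data, dict):
        return {k: remove_null(v) for k, v in data.items() if v is not None}
    if isinstance(data, list):
        return [remove_null(item) for item in data if item is not None]
    return data

# Hypothetical event_params entry mirroring the output shown above.
row = {"key": "ignore_referrer", "value": {"string_value": "true", "int_value": None}}
print(json.dumps(remove_null(row), separators=(",", ":")))
# {"key":"ignore_referrer","value":{"string_value":"true"}}
```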
<p>Thanks in advance</p>
|
<python><google-bigquery><python-polars><google-bigquery-storage-api>
|
2024-05-12 03:46:30
| 2
| 428
|
programmingtech
|
78,466,591
| 222,977
|
Tensorflow input of varying shapes
|
<p>I'm using TensorFlow 2.16.1. I have a list of tensors I want to use as training data. They have varying shapes, and it's unclear how I can make this work. Here is some example code:</p>
<pre><code>import tensorflow as tf
indices = [[0, 0], [1, 1]]
values = [0.5, 0.5]
tensor_1 = tf.sparse.SparseTensor(indices=indices, values=values, dense_shape=[2, 2])
tensor_1 = tf.sparse.reorder(tensor_1)
tensor_1 = tf.sparse.to_dense(tensor_1)
indices = [[0, 0], [1, 1], [2, 2]]
values = [0.5, 0.5, 0.5]
tensor_2 = tf.sparse.SparseTensor(indices=indices, values=values, dense_shape=[3, 3])
tensor_2 = tf.sparse.reorder(tensor_2)
tensor_2 = tf.sparse.to_dense(tensor_2)
tensor_list = [tensor_1, tensor_2]
tensors = tf.convert_to_tensor(tensor_list)
</code></pre>
<p>Running this results in: <code>Shapes of all inputs must match: values[0].shape = [2,2] != values[1].shape = [3,3] [Op:Pack] name: packed</code></p>
<p>Ideally I wouldn't be converting them to dense tensors, but that's another issue. I can do something like this:</p>
<pre><code>class InfoModel(tf.keras.Model):
def __init__(self):
super(InfoModel, self).__init__()
def call(self, inputs, training=False):
print(inputs)
return inputs
InfoModel()(tensor_list)
</code></pre>
<p>The above code works fine. However, I'm trying to generate these tensors with a custom <code>tf.keras.utils.PyDataset</code> implementation. Here's an example:</p>
<pre><code>class TestGenerator(tf.keras.utils.PyDataset):
def __init__(self):
pass
def __getitem__(self, index):
indices = [[0, 0], [1, 1]]
values = [0.5, 0.5]
tensor_1 = tf.sparse.SparseTensor(indices=indices, values=values, dense_shape=[2, 2])
tensor_1 = tf.sparse.reorder(tensor_1)
tensor_1 = tf.sparse.to_dense(tensor_1)
indices = [[0, 0], [1, 1], [2, 2]]
values = [0.5, 0.5, 0.5]
tensor_2 = tf.sparse.SparseTensor(indices=indices, values=values, dense_shape=[3, 3])
tensor_2 = tf.sparse.reorder(tensor_2)
tensor_2 = tf.sparse.to_dense(tensor_2)
return tf.convert_to_tensor([tensor_1, tensor_2]), tf.convert_to_tensor([0.0, 0.0])
def __len__(self):
return 1
</code></pre>
<p>The above generator complains about shapes not matching. If I change this line:</p>
<pre><code>return tf.convert_to_tensor([tensor_1, tensor_2]), tf.convert_to_tensor([0.0, 0.0])
</code></pre>
<p>to:</p>
<pre><code>return [tensor_1, tensor_2], tf.convert_to_tensor([0.0, 0.0])
</code></pre>
<p>Then the error I get is:</p>
<pre><code>TypeError: `output_signature` must contain objects that are subclass of `tf.TypeSpec` but found <class 'list'> which is not.
</code></pre>
|
<python><tensorflow><keras>
|
2024-05-12 02:59:41
| 0
| 583
|
Dan
|
78,465,961
| 3,163,618
|
Correctness of smaller cube function
|
<p>For what range of inputs is this function, which I came up with to find the biggest cube smaller than an integer n, valid?</p>
<pre><code>def smaller_cube(n):
k = int(n**(1/3))
return k**3 if k**3 < n else (k-1)**3
</code></pre>
<p><code>64**(1/3) == 3.99999...</code> but the truncation works in this case, and if it were <code>4.00000...1</code> it would still work in the <code>k-1</code> case.</p>
<p>I think it could go wrong if, for some large cube <code>c**3</code>, <code>n > c**3</code> but <code>n**(1/3) < c</code> due to precision error, so it would return <code>(c-1)**3</code> instead of <code>c**3</code>. And for enormous values <code>n**(1/3)</code> will be completely off.</p>
<p>I'm also interested in what happens if <code>int</code> is replaced with <code>round</code>. Truncating a number always rounds down, but <code>round</code> could round values of 0.5 and above up.</p>
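<p>For what it's worth, the float estimate can be made robust against off-by-one precision errors by correcting it with a couple of exact integer comparisons — a sketch assuming <code>n >= 2</code> (for astronomically large <code>n</code>, <code>n**(1/3)</code> itself can overflow or lose too much precision, and an integer-only method such as Newton iteration would be needed):</p>

```python
def smaller_cube_exact(n):
    # Start from the float estimate, then fix any off-by-one from precision error
    # using only exact integer arithmetic for the comparisons.
    k = round(n ** (1 / 3))
    while k ** 3 >= n:        # estimate too high: step down
        k -= 1
    while (k + 1) ** 3 < n:   # estimate too low: step up
        k += 1
    return k ** 3

print(smaller_cube_exact(65))  # 64
print(smaller_cube_exact(64))  # 27 (biggest cube strictly smaller than 64)
```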
|
<python><integer><precision>
|
2024-05-11 20:03:04
| 1
| 11,524
|
qwr
|
78,465,956
| 2,789,445
|
Are WEBSITE_RUN_FROM_PACKAGE="1" and SCM_DO_BUILD_DURING_DEPLOYMENT=true compatible options for a ZipDeploy Azure App Service Python WebApp on linux?
|
<p>Are <code>WEBSITE_RUN_FROM_PACKAGE="1"</code> and <code>SCM_DO_BUILD_DURING_DEPLOYMENT=true</code> compatible options for a ZipDeploy Azure App Service Python WebApp on linux?
Also, my webapp is using gunicorn, if that matters.</p>
<p>When I deploy my python webapp as a zipfile, which includes my python files and a requirements.txt (but no venv). Ex command:</p>
<p><code>az webapp deploy --resource-group "$resource_group_name" --name "$app_name" --type "zip" --src-path "backend.zip"</code></p>
<p>My assumption is that my webapp will be built by <strong>Oryx</strong> and then <strong>Kudu</strong> will upload the package, a zip now including the venv, to <code>/home/data/SitePackages</code> with a <code>packagename.txt</code> in the same directory.
Then it will run it by mounting the package in <code>/home/site/wwwroot</code>.</p>
|
<python><web-applications><azure-web-app-service><gunicorn><azure-appservice>
|
2024-05-11 20:01:08
| 2
| 3,401
|
successhawk
|
78,465,666
| 8,131,903
|
How to save the child set instance in the parent object in Django's Many to One (Foreign Key) relationship?
|
<p>I am working on a social media project where users can follow each other. Here are the models that are relevant to this problem</p>
<pre class="lang-py prettyprint-override"><code>django.contrib.auth.models.User
</code></pre>
<pre class="lang-py prettyprint-override"><code>class UsersProfiles(ddm.Model):
user = ddm.OneToOneField(
dcam.User,
on_delete=ddm.CASCADE,
related_name="profile",
)
followers_count = ddm.IntegerField(default=0)
followings_count = ddm.IntegerField(default=0)
</code></pre>
<pre class="lang-py prettyprint-override"><code>class UsersFollows(ddm.Model):
follower = ddm.ForeignKey(
dcam.User,
on_delete=ddm.CASCADE,
related_name="followers",
)
following = ddm.ForeignKey(
dcam.User,
on_delete=ddm.CASCADE,
related_name="followings",
)
</code></pre>
<p>I am creating data redundancy for scalability by storing the follower and following counts instead of calculating them from the <code>UsersFollows</code> model. So I am using Django's transaction for atomicity when adding a follower.</p>
<pre class="lang-py prettyprint-override"><code>with ddt.atomic():
umuf.UsersFollows.objects.create(
follower=user,
following=followed_user,
)
followed_user.profile.followers_count += 1
user.profile.followings_count += 1
followed_user.profile.save()
user.profile.save()
</code></pre>
<p>But during testing, the user's followers and followings sets appear empty.</p>
<pre class="lang-py prettyprint-override"><code>def test_valid(self):
response = self.client.post(
c.urls.USERS_1_FOLLOWINGS,
{"user": 2},
headers=h.generate_headers(self.login_response),
)
h.assertEqualResponses(response, c.responses.SUCCESS)
user1 = dcam.User.objects.get(pk=1)
user2 = dcam.User.objects.get(pk=2)
self.assertEqual(user1.profile.followings_count, 1)
self.assertEqual(user2.profile.followers_count, 1)
follow = umuf.UsersFollows.objects.filter(
follower=user1,
following=user2,
)
self.assertEqual(len(follow), 1)
# Below 2 assertions are failing
self.assertIn(user1, user2.followers.all())
self.assertIn(user2, user1.followings.all())
</code></pre>
<p>I tried saving both the <code>User</code> instances. I also tried saving the <code>UserFollow</code> instance just in case the create function did not save it automatically.</p>
<pre class="lang-py prettyprint-override"><code>umuf.UsersFollows.objects.create(
follower=user,
following=followed_user,
).save()
user.save()
followed_user.save()
</code></pre>
<p>I don't see what I am missing here.</p>
|
<python><python-3.x><django><django-models>
|
2024-05-11 18:19:18
| 1
| 315
|
Aditya
|
78,465,537
| 3,052,438
|
CalledProcessError.stderr of subprocess.check_call is None despite the process outputting an error message
|
<p>I'm trying to capture the error message of a program called by <code>subprocess.check_call</code> but the <code>stderr</code> in the error object is always <code>None</code>.</p>
<p>Here is a short script that shows this:</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
import sys
if '--fail' in sys.argv: # the script calls itself with this option, for the test
print('this is the error message', file=sys.stderr)
sys.exit(1)
try:
subprocess.check_call([sys.executable, sys.argv[0], '--fail'],
stdout=subprocess.DEVNULL, stderr=subprocess.PIPE)
print('Not failed?') # this neither happens nor is expected
except subprocess.CalledProcessError as err:
if err.stderr:
print(err.stderr.decode()) # this is what I expect
else:
print('<No error message>') # this is what happens
</code></pre>
<p>I have Python 3.10.12.</p>
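<p>For comparison, a minimal sketch of the sibling API <code>subprocess.run</code>, which (unlike <code>check_call</code>) does wire the captured stream into the exception object; the failing child here is an inline <code>-c</code> script standing in for the real program:</p>

```python
import subprocess
import sys

# subprocess.run(capture_output=True, check=True) raises the same
# CalledProcessError, but with .stderr populated from the pipe.
try:
    subprocess.run(
        [sys.executable, "-c", "import sys; sys.exit('boom')"],
        capture_output=True,
        check=True,
    )
except subprocess.CalledProcessError as err:
    message = err.stderr.decode()

print(message)  # boom
```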
|
<python><subprocess>
|
2024-05-11 17:34:41
| 1
| 5,159
|
Piotr Siupa
|
78,465,459
| 9,659,840
|
how to combine array of map in a single map per column in pyspark
|
<p>I have followed <a href="https://stackoverflow.com/questions/43723864/combine-array-of-maps-into-single-map-in-pyspark-dataframe">this</a> question, but the answers there are not working for me.
I don't want a UDF for this, and <code>map_concat</code> doesn't work for me either.
Is there any other way to combine maps?</p>
<p>eg</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;">id</th>
<th style="text-align: center;">value</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">Map(k1 -> v1)</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">Map(k2 -> v2)</td>
</tr>
</tbody>
</table></div>
<p>output should be</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;">id</th>
<th style="text-align: center;">value</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">Map(k1 -> v1, k2 -> v2)</td>
</tr>
</tbody>
</table></div>
|
<python><pyspark><databricks>
|
2024-05-11 17:09:15
| 1
| 469
|
UC57
|
78,465,203
| 242,042
|
Internally, is asyncio run_forever() basically a while True loop?
|
<p><a href="https://stackoverflow.com/questions/32761095/python-asyncio-run-forever-or-while-true">python asyncio run_forever or while True</a> is similar but it is a "should I do this..." question.</p>
<p>I am more trying to understand whether the internals of Python's <code>asyncio</code> are basically a</p>
<pre><code>while True:
...
time.sleep(1)
</code></pre>
<p>(or, more precisely, a <code>while not stopped:</code> loop).</p>
<p>Does it also use <code>sleep</code> to prevent spin waits?</p>
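<p>To make the question concrete: the <code>selectors</code> module, which the default event loop builds on, blocks in the kernel with a timeout rather than spinning. A minimal sketch of that primitive (this illustrates the idea and is not asyncio's actual source):</p>

```python
import selectors
import time

# select() with a timeout parks the thread in the OS poller
# (epoll/kqueue/select); it is neither a busy-wait nor a time.sleep loop.
sel = selectors.DefaultSelector()
start = time.monotonic()
events = sel.select(timeout=0.05)   # blocks ~50 ms in the kernel
elapsed = time.monotonic() - start

print(events, elapsed)
```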
|
<python><python-asyncio>
|
2024-05-11 15:39:44
| 1
| 43,097
|
Archimedes Trajano
|
78,465,177
| 13,438,431
|
Generic Factory <-> Good relationship with PEP695 (Self referential generics)
|
<p>How to express this kind of relationship between types with PEP695 annotations? Pylance (pyright) says that <code>Self is not valid in this context</code>, but there seems to be no other way to express it?</p>
<p>The goal is to have <code>AbstractFactory</code> know what kind of goods it produces, and have <code>AbstractGood</code> know which kind of <code>AbstractFactory</code> produced it.</p>
<pre><code># factory.py
from typing import Self
from abc import ABC
class AbstractFactory[G: AbstractGood[Self]](ABC):
def __init__(self, good_class: type[G]) -> None:
self.good_class = good_class
def produce(self) -> G:
good = self.good_class(self)
return good
class AbstractGood[F: AbstractFactory[Self]](ABC):
def __init__(self, factory: F):
self.factory = factory
# charlie.py
from factory import AbstractFactory, AbstractGood
class ChocolateFactory(AbstractFactory[ChocolateBar]):
...
class ChocolateBar(AbstractGood[ChocolateFactory]):
...
</code></pre>
|
<python><generics><python-typing>
|
2024-05-11 15:32:42
| 1
| 2,104
|
winwin
|
78,465,166
| 2,850,522
|
How to Fix Shape Mismatch in TensorFlow when attempting to create a model from a trained data set
|
<p>I am working with machine learning project using TensorFlow and Keras to process audio data. The project is designed to handle inputs with multiple channels, but there's an issue during the training phase where the input shape does not match the expected model input shape.</p>
<p>In the GitHub repo you can find model.py, which I think is supposed to reduce the channel dimension of the audio data from 3 to 1 using a TimeDistributed layer wrapped around a Conv2D layer. However, during training, I'm encountering a shape mismatch error:</p>
<pre><code>TypeError: `generator` yielded an element of shape (32, 2, 15, 80, 3) where an element of shape (None, 2, 15, 80, 1) was expected.
</code></pre>
<p>The github repo is here <a href="https://github.com/cpuguy96/StepCOVNet" rel="nofollow noreferrer">https://github.com/cpuguy96/StepCOVNet</a></p>
<p>I used that repo to prep the training data from the raw data and went to create the model which gives the mismatch.</p>
<p>The prints beforehand seem to suggest that some shapes are correct some of the time.</p>
<p>I'm just not sure where to go from here. My goal is to create the model and use it to infer StepMania step data from a novel song.</p>
<p>I'm not sure if the repo is just in a broken, unworking state or if it's user error on my part.</p>
<p>I tend to learn by doing and thought this would be a good project to start with.</p>
<p>Exact command and output.</p>
<pre><code>(StepNet) nonlin@Nonlin:/mnt/c/Users/gerfe/StepCOVNet$ python train.py -i /home/nonlin/miniconda3/envs/StepNet/TrainingData -o /home/nonlin/miniconda3/envs/StepNet/Model -d 1 --name "MyModel"
2024-05-11 10:00:27.499059: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-05-11 10:00:27.499132: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-05-11 10:00:27.516496: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-05-11 10:00:27.564170: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-05-11 10:00:28.449546: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
/home/nonlin/miniconda3/envs/StepNet/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
2024-05-11 10:00:30.166859: I external/local_xla/xla/stream_executor/cuda/cuda_executor.cc:887] could not open file to read NUMA node: /sys/bus/pci/devices/0000:0b:00.0/numa_node
Your kernel may have been built without NUMA support.
2024-05-11 10:00:30.508089: W tensorflow/core/common_runtime/gpu/gpu_device.cc:2256] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
WARNING:root:Failed to set memory growth for GPU.
Traceback (most recent call last):
File "/mnt/c/Users/gerfe/StepCOVNet/stepcovnet/tf_config.py", line 10, in <module>
tf.config.list_physical_devices("GPU")[0], enable=True
IndexError: list index out of range
WARNING:tensorflow:Mixed precision compatibility check (mixed_float16): WARNING
The dtype policy mixed_float16 may run slowly because this machine does not have a GPU. Only Nvidia GPUs with compute capability of at least 7.0 run quickly with mixed_float16.
If you will use compatible GPU(s) not attached to this host, e.g. by running a multi-worker model, you can ignore this warning. This message will only be logged once
WARNING:tensorflow:Mixed precision compatibility check (mixed_float16): WARNING
The dtype policy mixed_float16 may run slowly because this machine does not have a GPU. Only Nvidia GPUs with compute capability of at least 7.0 run quickly with mixed_float16.
If you will use compatible GPU(s) not attached to this host, e.g. by running a multi-worker model, you can ignore this warning. This message will only be logged once
/home/nonlin/miniconda3/envs/StepNet/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
2024-05-11 10:02:19.167084: W external/local_tsl/tsl/framework/cpu_allocator_impl.cc:83] Allocation of 154389504 exceeds 10% of free system memory.
2024-05-11 10:02:19.202167: W external/local_tsl/tsl/framework/cpu_allocator_impl.cc:83] Allocation of 154389504 exceeds 10% of free system memory.
2024-05-11 10:02:19.232825: W external/local_tsl/tsl/framework/cpu_allocator_impl.cc:83] Allocation of 154389504 exceeds 10% of free system memory.
2024-05-11 10:02:20.980241: W external/local_tsl/tsl/framework/cpu_allocator_impl.cc:83] Allocation of 154389504 exceeds 10% of free system memory.
All PyTorch model weights were used when initializing TFGPT2Model.
All the weights of TFGPT2Model were initialized from the PyTorch model.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFGPT2Model for predictions without further training.
Model: "StepCOVNet"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
audio_input (InputLayer) [(None, 2, 15, 80, 1)] 0 []
time_distributed (TimeDist (None, 2, 15, 80, 1) 2 ['audio_input[0][0]']
ributed)
arrow_input (InputLayer) [(None, None)] 0 []
arrow_mask (InputLayer) [(None, None)] 0 []
batch_normalization (Batch (None, 2, 15, 80, 1) 4 ['time_distributed[0][0]']
Normalization)
tfgpt2_model (TFGPT2Model) TFBaseModelOutputWithPastA 1244398 ['arrow_input[0][0]',
ndCrossAttentions(last_hid 08 'arrow_mask[0][0]']
den_state=(None, None, 768
),
past_key_values=((2, None
, 12, None, 64),
(2, None, 12, None, 64),
(2, None, 12, None, 64),
(2, None, 12, None, 64),
(2, None, 12, None, 64),
(2, None, 12, None, 64),
(2, None, 12, None, 64),
(2, None, 12, None, 64),
(2, None, 12, None, 64),
(2, None, 12, None, 64),
(2, None, 12, None, 64),
(2, None, 12, None, 64)),
hidden_states=None, atten
tions=None, cross_attentio
ns=None)
VGGish (Functional) (None, 2, 512) 4499712 ['batch_normalization[0][0]']
global_max_pooling1d (Glob (None, 768) 0 ['tfgpt2_model[0][0]']
alMaxPooling1D)
bidirectional (Bidirection (None, 256) 656384 ['VGGish[0][0]']
al)
concatenate (Concatenate) (None, 1024) 0 ['global_max_pooling1d[0][0]',
'bidirectional[0][0]']
dense (Dense) (None, 256) 262400 ['concatenate[0][0]']
batch_normalization_1 (Bat (None, 256) 1024 ['dense[0][0]']
chNormalization)
activation (Activation) (None, 256) 0 ['batch_normalization_1[0][0]'
]
dropout_37 (Dropout) (None, 256) 0 ['activation[0][0]']
onehot_encoded_arrows (Den (None, 256) 65792 ['dropout_37[0][0]']
se)
==================================================================================================
Total params: 129925126 (495.63 MB)
Trainable params: 40370436 (154.00 MB)
Non-trainable params: 89554690 (341.62 MB)
__________________________________________________________________________________________________
Saving model metadata at /home/nonlin/miniconda3/envs/StepNet/Model/MyModel
Training on 1975957 samples (156 songs) and validating on 231660 samples (18 songs)
Starting training...
Epoch 1/15
2024-05-11 10:02:28.230065: W external/local_tsl/tsl/framework/cpu_allocator_impl.cc:83] Allocation of 154389504 exceeds 10% of free system memory.
2024-05-11 10:02:36.035559: W tensorflow/core/framework/op_kernel.cc:1827] INVALID_ARGUMENT: TypeError: `generator` yielded an element of shape (32, 2, 15, 80, 3) where an element of shape (None, 2, 15, 80, 1) was expected.
Traceback (most recent call last):
File "/home/nonlin/miniconda3/envs/StepNet/lib/python3.10/site-packages/tensorflow/python/ops/script_ops.py", line 270, in __call__
ret = func(*args)
File "/home/nonlin/miniconda3/envs/StepNet/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 643, in wrapper
return func(*args, **kwargs)
File "/home/nonlin/miniconda3/envs/StepNet/lib/python3.10/site-packages/tensorflow/python/data/ops/from_generator_op.py", line 235, in generator_py_func
raise TypeError(
TypeError: `generator` yielded an element of shape (32, 2, 15, 80, 3) where an element of shape (None, 2, 15, 80, 1) was expected.
Traceback (most recent call last):
File "/mnt/c/Users/gerfe/StepCOVNet/train.py", line 166, in <module>
train(
File "/mnt/c/Users/gerfe/StepCOVNet/train.py", line 115, in train
run_training(
File "/mnt/c/Users/gerfe/StepCOVNet/train.py", line 61, in run_training
executor.TrainingExecutor(stepcovnet_model=stepcovnet_model).execute(
File "/mnt/c/Users/gerfe/StepCOVNet/stepcovnet/executor.py", line 111, in execute
history = self.train(input_data, self.get_training_callbacks(hyperparameters))
File "/mnt/c/Users/gerfe/StepCOVNet/stepcovnet/executor.py", line 191, in train
history = self.stepcovnet_model.model.fit(
File "/home/nonlin/miniconda3/envs/StepNet/lib/python3.10/site-packages/keras/src/utils/traceback_utils.py", line 70, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/home/nonlin/miniconda3/envs/StepNet/lib/python3.10/site-packages/tensorflow/python/eager/execute.py", line 53, in quick_execute
tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.InvalidArgumentError: Graph execution error:
Detected at node PyFunc defined at (most recent call last):
<stack traces unavailable>
TypeError: `generator` yielded an element of shape (32, 2, 15, 80, 3) where an element of shape (None, 2, 15, 80, 1) was expected.
Traceback (most recent call last):
File "/home/nonlin/miniconda3/envs/StepNet/lib/python3.10/site-packages/tensorflow/python/ops/script_ops.py", line 270, in __call__
ret = func(*args)
File "/home/nonlin/miniconda3/envs/StepNet/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py", line 643, in wrapper
return func(*args, **kwargs)
File "/home/nonlin/miniconda3/envs/StepNet/lib/python3.10/site-packages/tensorflow/python/data/ops/from_generator_op.py", line 235, in generator_py_func
raise TypeError(
TypeError: `generator` yielded an element of shape (32, 2, 15, 80, 3) where an element of shape (None, 2, 15, 80, 1) was expected.
[[{{node PyFunc}}]]
[[IteratorGetNext]] [Op:__inference_train_function_33834]
</code></pre>
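<p>For what it's worth, the mismatch is confined to the last (channel) axis, 3 vs 1. A NumPy sketch of one possible reconciliation inside the data generator, assuming that averaging the three channels is acceptable for these features:</p>

```python
import numpy as np

# A fake generator batch shaped like the one in the error message...
batch = np.random.rand(32, 2, 15, 80, 3).astype("float32")

# ...collapsed to the single channel the model's input layer declares.
mono = batch.mean(axis=-1, keepdims=True)
print(mono.shape)  # (32, 2, 15, 80, 1)
```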
|
<python><tensorflow><deep-learning><conv-neural-network>
|
2024-05-11 15:27:03
| 0
| 570
|
Nonlin
|
78,464,919
| 1,888,781
|
Special Characters Error While converting STRING to JSON in python
|
<p>I have the following string coming from a third-party API.</p>
<pre><code>teststring = '{"@speccialcharacterssample": "/special.character.test.Sample","content": {"description": "Some number **451000**, Some Time **Monday, April 8th, UTC 11:00**. \n# BackSpace\nBreakline\nMore Special Character[Full Change Log](https://example.com/v1.0.1).\n# Special Character Test\ncomma test, colon test: breakline\nBreakline followed by hashes test\n## Breakline\nnew test **451000**, time **Monday, April 8th, UTC 11:00**. Square Brackets [HERE](https://example.com/451000).\nThats all.","plan": {"info": "{\"bin\":{\"example/zya32\":\"https://example.com/v1.0.1/version-v1.0.1-example-xya32?check=asda43tsd23423scasq\"}}"}}}'
</code></pre>
<p>When I try to convert it to JSON using the code</p>
<pre><code>teststring_json = json.loads(teststring)
</code></pre>
<p>It throws following error</p>
<pre><code> teststring_json = json.loads(teststring)
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/json/decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
^^^^^^^^^^^^^^^^^^^^^^
json.decoder.JSONDecodeError: Expecting ',' delimiter: line 1 column 1408 (char 1407)
</code></pre>
<p>I tried several ways to remove the special characters, but none of them works, e.g.:</p>
<p><code>updated_json = teststring.replace('\\"', '\\\\"')</code></p>
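<p>For what it's worth, if the string was pasted into source code as a normal (non-raw) literal, Python itself collapses <code>\"</code> and <code>\n</code> before <code>json.loads</code> ever runs. A cut-down sketch of the nested <code>"info"</code> part (with made-up values) showing that a raw string keeps the escaping intact:</p>

```python
import json

# In a non-raw literal, \" becomes a bare quote and breaks the outer JSON.
# In a raw literal, the backslashes survive, so both levels decode cleanly.
teststring = r'{"plan": {"info": "{\"bin\":{\"example/zya32\":\"https://example.com\"}}"}}'

data = json.loads(teststring)
inner = json.loads(data["plan"]["info"])   # the escaped inner JSON
print(inner["bin"]["example/zya32"])       # https://example.com
```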
|
<python><json>
|
2024-05-11 14:08:20
| 0
| 471
|
Khizar Shujaat
|
78,464,772
| 1,667,884
|
performance of getting a new list with an additional value
|
<p>To get a new list from a list with an additional value, I used to think that <code>list_b = [*list_a, value]</code> is more performant than <code>list_b = list_a + [value]</code>, as the latter generates an intermediate <code>[value]</code>.</p>
<p>However, according to the benchmark (tested in Python 3.12.3 / Windows 10), it seems that the result is opposite:</p>
<p><a href="https://i.sstatic.net/zy72lQ5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zy72lQ5n.png" alt="plot" /></a></p>
<pre><code>from timeit import timeit
import random
import matplotlib.pyplot as plt
num_data_points = 1000
step = 10
methods = [
# ordered from slowest to fastest to make the key easier to read
# "list_b = list_a.copy(); list_b += (value,)",
# "list_b = list_a.copy(); list_b.append(value)",
# "list_b = list(list_a); list_b.append(value)",
"list_b = [*list_a, value]",
"list_b = list_a + [value]",
]
x = list(range(0, num_data_points * step, step))
y = [[] for _ in methods]
for i in x:
list_a = list(range(i))
random.shuffle(list_a)
value = random.randint(0, num_data_points * step)
setup = f"list_a = {list_a}; value = {value}"
for method_index, method in enumerate(methods):
y[method_index].append(timeit(method, setup=setup, number=800))
print(i, "out of", num_data_points * step)
ax = plt.axes()
for method_index, method in enumerate(methods):
ax.plot(x, y[method_index], label=method)
ax.set(xlabel="size of the list", ylabel="time (s) (lower is better)")
ax.legend()
ax.figure.canvas.get_default_filetype = lambda: 'svg'
plt.show()
</code></pre>
<p>I have also checked the disassembled code:</p>
<pre><code>import dis
list_a = [1, 2, 3]
value = 4
def method_1():
list_b = [*list_a, value]
def method_2():
list_b = list_a + [value]
if __name__ == '__main__':
dis.dis(method_1)
dis.dis(method_2)
</code></pre>
<p>and have got the following:</p>
<pre><code># list_b = [*list_a, value]
6 0 RESUME 0
7 2 BUILD_LIST 0
4 LOAD_GLOBAL 0 (list_a)
14 LIST_EXTEND 1
16 LOAD_GLOBAL 2 (value)
26 LIST_APPEND 1
28 STORE_FAST 0 (list_b)
30 RETURN_CONST 0 (None)
# list_b = list_a + [value]
9 0 RESUME 0
10 2 LOAD_GLOBAL 0 (list_a)
12 LOAD_GLOBAL 2 (value)
22 BUILD_LIST 1
24 BINARY_OP 0 (+)
28 STORE_FAST 0 (list_b)
30 RETURN_CONST 0 (None)
</code></pre>
<p>which doesn't seem sufficient to explain the performance difference without knowing further implementation details.</p>
<p>Can someone help explain why?</p>
|
<python><list><performance>
|
2024-05-11 13:24:52
| 1
| 2,357
|
Danny Lin
|
78,464,654
| 3,120,501
|
Access values of design variables while solver is running (OpenMDAO with IPOPT)
|
<p>I'm trying to do trajectory optimisation in Dymos (a library built atop OpenMDAO), but I'm not getting the convergence properties I'm expecting and I'd like to inspect the intermediate solutions of the solver in order to debug.</p>
<p>Is there a way, in OpenMDAO, to access the design variables as the solver is running, so that intermediate values can be printed or plotted? I believe this is theoretically possible as other libraries seem to provide this functionality for IPOPT (see the bottom of this page: <a href="https://cyipopt.readthedocs.io/en/stable/tutorial.html#scipy-compatible-interface" rel="nofollow noreferrer">https://cyipopt.readthedocs.io/en/stable/tutorial.html#scipy-compatible-interface</a>), but I can't work out how to do this (or see any documentation) in OpenMDAO.</p>
|
<python><optimization><openmdao><ipopt>
|
2024-05-11 12:44:15
| 1
| 528
|
LordCat
|
78,464,573
| 4,505,301
|
AWS Boto3 EC2 client send_command process shuts down
|
<p>I am using the <code>send_command()</code> function to run a local Python script on about 100 EC2 instances for long periods (days).</p>
<pre><code>ssm_client = boto3.client('ssm', region_name = 'eu-west-1')
commands_to_execute = """cd /home/ubuntu
set -x
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>logv5.out 2>&1
python3 myscript.py"""
command_lines_to_execute = commands_to_execute.split('\n')
ssm_client.send_command(InstanceIds=[instance_id], DocumentName="AWS-RunShellScript", Parameters={'commands': commands_to_execute})
</code></pre>
<p>I am also logging the console output via the part below:</p>
<pre><code>set -x
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>logv5.out 2>&1
</code></pre>
<p>The logging part works as intended.</p>
<p>However, after about an hour, the Python process on each EC2 instance simply dies with no output.
I thought perhaps some kind of virtual bash session dies, which in turn kills the Python process it launched. So I tried launching screen before executing the other commands, but this did not help either.
What am I missing here? Why does my Python process die without any output?</p>
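<p>One possibility worth ruling out (an assumption on my part): the <code>AWS-RunShellScript</code> document has an <code>executionTimeout</code> parameter that defaults to 3600 seconds, which would kill the command tree after exactly one hour. It can be raised alongside the commands:</p>

```python
# executionTimeout is a documented AWS-RunShellScript parameter
# (seconds, default "3600", max "172800"); the send_command call is
# commented out so this sketch runs without AWS credentials.
params = {
    "commands": ["cd /home/ubuntu", "python3 myscript.py"],
    "executionTimeout": ["172800"],   # 48 h instead of the 1 h default
}
# ssm_client.send_command(
#     InstanceIds=[instance_id],
#     DocumentName="AWS-RunShellScript",
#     Parameters=params,
# )
print(params["executionTimeout"])  # ['172800']
```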
|
<python><amazon-web-services><bash><amazon-ec2><boto3>
|
2024-05-11 12:18:20
| 1
| 1,562
|
meliksahturker
|
78,464,534
| 7,307,824
|
Extracting int values from a string (in different formats) using a regex
|
<p>I have a string value (football score) in my Pandas dataset. I would like to extract the home goals and the away goals from this score.</p>
<p>The score can be written in a couple of ways (sometimes the match is won on penalties and presented with brackets).</p>
<p>Standard score:</p>
<p><code>"4-2"</code> I would like to extract 4 as the home goals and 2 as the away goals.</p>
<p>Won on penalties:</p>
<p><code>"(5) 2-3 (4)"</code> I would like to extract 2 as the home goals and 3 as the away goals and ignore the brackets</p>
<p>Is there a regex (or two) that would give me the two values I require in both instances? Maybe checking for the "-" and taking the number on either side of it?</p>
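<p>A sketch of that idea, assuming the bracketed penalty counts are never joined to a hyphen, so the first hyphen-joined pair of digits is always the in-game score:</p>

```python
import re

def parse_score(score):
    # Bracketed penalty totals are followed by a space, never a hyphen,
    # so the hyphen-joined pair is always the home/away goals.
    m = re.search(r"(\d+)\s*-\s*(\d+)", score)
    return (int(m.group(1)), int(m.group(2))) if m else None

print(parse_score("4-2"))          # (4, 2)
print(parse_score("(5) 2-3 (4)"))  # (2, 3)
```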
|
<python><pandas><regex>
|
2024-05-11 12:06:59
| 2
| 568
|
Ewan
|
78,464,426
| 774,575
|
How to correctly index a dataframe with a function?
|
<p>Given a dataframe:</p>
<pre><code> a b c
u 5 0 3
v 3 7 9
w 3 5 2
</code></pre>
<p>I'd like to select rows/columns in a dataframe using a function. This function gets the dataframe and returns a tuple of lists of labels, e.g. it returns <code>['v', 'u'], ['c']</code> using this lambda:</p>
<pre><code>get_labels = lambda df: (['v', 'u'], ['c'])
</code></pre>
<p>If I use <code>.loc</code> with the label literals, I get the desired result:</p>
<pre><code>df.loc[['v', 'u'], ['c']]
c
v 9
u 3
</code></pre>
<p>The <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer">documentation</a> says about using a function (a callable), that a function returning two lists of labels is valid ("one of the above"):</p>
<blockquote>
<p>A callable function with one argument (the calling Series or DataFrame) and that returns valid output for indexing (one of the above)</p>
</blockquote>
<p>But if I use <code>.loc</code> with the function, which I believe returns the same list tuple, I get an error:</p>
<pre><code>df.loc[get_labels]
unhashable type: 'list'
</code></pre>
<p>What's wrong? (As a possible clue: Pandas seems to be trying to create an <code>Index</code> object from the result of the function -- I don't know why; there is no explanation in the documentation.)</p>
<hr />
<p>Full code:</p>
<pre><code>import numpy.random as npr
import pandas as pd
npr.seed(0)
df = pd.DataFrame(npr.randint(10, size=(3,3)),
index=['u','v','w'],
columns=['a','b','c'])
get_labels = lambda df: (['v', 'u'], ['c'])
print(df.loc[['v', 'u'], ['c']])
print(df.loc[get_labels])
</code></pre>
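<p>For reference, two variants that do work at runtime. My understanding is that <code>.loc</code> applies a callable and treats its result as a single indexer before any tuple unpacking happens, so the tuple has to be produced per axis or unpacked by hand:</p>

```python
import numpy.random as npr
import pandas as pd

npr.seed(0)
df = pd.DataFrame(npr.randint(10, size=(3, 3)),
                  index=['u', 'v', 'w'], columns=['a', 'b', 'c'])

get_labels = lambda frame: (['v', 'u'], ['c'])

# Call the function yourself; .loc then sees a plain (rows, cols) tuple.
out1 = df.loc[get_labels(df)]

# Or hand .loc one callable per axis.
out2 = df.loc[lambda d: ['v', 'u'], lambda d: ['c']]

print(out1)
print(out1.equals(out2))  # True
```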
|
<python><pandas><indexing>
|
2024-05-11 11:33:11
| 2
| 7,768
|
mins
|
78,464,072
| 10,200,497
|
What is the best way to merge two dataframes that one of them has overlapping ranges?
|
<p>My DataFrames are:</p>
<pre><code>import pandas as pd
df_1 = pd.DataFrame(
{
'a': [10, 12, 14, 20, 25, 30, 42, 50, 80]
}
)
df_2 = pd.DataFrame(
{
'start': [9, 19],
'end': [26, 50],
'label': ['a', 'b']
}
)
</code></pre>
<p>Expected output: Adding column <code>label</code> to <code>df_1</code>:</p>
<pre><code>a label
10 a
12 a
14 a
20 a
25 a
20 b
25 b
30 b
42 b
50 b
</code></pre>
<p><code>df_2</code> defines the label ranges. For example, in the first row of <code>df_2</code>, the start of the range is 9 and the end is 26. Now I want to slice <code>df_1</code> based on start and end and give that label to the slice. Note that <code>start</code> is exclusive and <code>end</code> is inclusive. And my label ranges are overlapping.</p>
<p>These are my attempts. The first one works but I am not sure if it is the best.</p>
<pre><code># attempt_1
dfc = pd.DataFrame([])
for idx, row in df_2.iterrows():
start = row['start']
end = row['end']
label = row['label']
df_slice = df_1.loc[df_1.a.between(start, end, inclusive='right')]
df_slice['label'] = label
dfc = pd.concat([df_slice, dfc], ignore_index=True)
## attempt 2
idx = pd.IntervalIndex.from_arrays(df_2['start'], df_2['end'], closed='both')
label = df_2.iloc[idx.get_indexer(df_1.a), 'label']
df_1['label'] = label.to_numpy()
</code></pre>
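<p>A loop-free variant of attempt 1, assuming the frames are small enough for a cross join (<code>IntervalIndex.get_indexer</code> can't help here because it rejects overlapping intervals):</p>

```python
import pandas as pd

df_1 = pd.DataFrame({'a': [10, 12, 14, 20, 25, 30, 42, 50, 80]})
df_2 = pd.DataFrame({'start': [9, 19], 'end': [26, 50], 'label': ['a', 'b']})

# Pair every value with every range, then keep the (start, end] hits.
out = (
    df_1.merge(df_2, how='cross')
        .query('start < a <= end')
        .sort_values(['label', 'a'])[['a', 'label']]
        .reset_index(drop=True)
)
print(out)
```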
|
<python><pandas><dataframe>
|
2024-05-11 09:26:53
| 3
| 2,679
|
AmirX
|
78,463,870
| 17,718,669
|
Lock classes and Threading in python for dict modifications
|
<p>I was learning and testing <code>threading</code> in Python and the <code>Lock</code> class. Here I want to add a key to <code>persons</code>, but let's say it takes some time; in another thread I want to read the key. In the first thread I lock the process, but I still get an error.</p>
<pre class="lang-py prettyprint-override"><code>import threading
from time import sleep
persons = {
"test": "test",
"test2": "test2",
}
lock = threading.Lock()
def func1():
global persons
lock.acquire()
sleep(10)
persons["test3"] = "test3"
lock.release()
def func2():
global persons
print(persons["test3"])
t1 = threading.Thread(target=func1)
t2 = threading.Thread(target=func2)
t1.start()
t2.start()
t2.join()
t1.join()
</code></pre>
<p>I lock <code>func1()</code> but still get an error:</p>
<pre><code>Traceback (most recent call last):
File "c:\Users\p.riahi\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1038, in _bootstrap_inner
self.run()
File "c:\Users\p.riahi\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 975, in run
self._target(*self._args, **self._kwargs)
File "c:\Users\p.riahi\Desktop\temp\thred.py", line 20, in func2
print(persons["test3"])
~~~~~~~^^^^^^^^^
KeyError: 'test3'
</code></pre>
<p>Can anyone help me understand what I'm missing here? <code>func1()</code> should have locked <code>persons</code>, but apparently it hasn't.</p>
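<p>For what it's worth, a lock only provides mutual exclusion, not ordering: in the code above, <code>func2</code> can run before <code>func1</code> ever acquires the lock, or can acquire it first itself. An ordering primitive such as <code>threading.Event</code> expresses "wait until the key exists". A reworked sketch, with the result collected into a list so it can be checked:</p>

```python
import threading

persons = {"test": "test", "test2": "test2"}
ready = threading.Event()   # set once the new key is in place
results = []

def func1():
    persons["test3"] = "test3"
    ready.set()             # wake any thread blocked in wait()

def func2():
    ready.wait()            # blocks until func1 has added the key
    results.append(persons["test3"])

t2 = threading.Thread(target=func2)
t1 = threading.Thread(target=func1)
t2.start()                  # starts first, but blocks in wait()
t1.start()
t1.join()
t2.join()
print(results)  # ['test3']
```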
|
<python><multithreading><python-multiprocessing>
|
2024-05-11 08:17:20
| 2
| 326
|
parsariyahi
|
78,463,764
| 16,359,942
|
CCXT - load markets for exchange OneTrading
|
<p>I am trying to use the Python library <code>ccxt</code> to retrieve public data (e.g. available markets) from the crypto exchange OneTrading.</p>
<p>I created an API key with read-permissions.</p>
<p>Somehow, ccxt retrieves an empty dict. Why is this?</p>
<p>This is my example code:</p>
<pre class="lang-py prettyprint-override"><code>from pprint import pprint
import ccxt
api_key = "<redacted>"
creds = {
'apiKey': api_key,
'verbose': True, # for debug output
}
exchange = ccxt.onetrading(creds)
pprint(exchange.requiredCredentials) # prints required credentials
exchange.check_required_credentials() # raises AuthenticationError
pprint(exchange.load_markets())
</code></pre>
<p>Output:</p>
<pre><code>{'accountId': False,
'apiKey': True,
'login': False,
'password': False,
'privateKey': False,
'secret': False,
'token': False,
'twofa': False,
'uid': False,
'walletAddress': False}
fetch Request: onetrading GET https://api.onetrading.com/public/v1/currencies RequestHeaders: {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} RequestBody: None
fetch Response: onetrading GET https://api.onetrading.com/public/v1/currencies 200 ResponseHeaders: {'Date': 'Sat, 11 May 2024 07:34:44 GMT', 'Content-Type': 'application/json', 'Content-Length': '2', 'Connection': 'keep-alive', 'apigw-requestid': 'XmHfOicWDoEEPOg=', 'strict-transport-security': 'max-age=31536000; includeSubDomains', 'x-frame-options': 'sameorigin', 'x-xss-protection': '1; mode=block', 'x-content-type-options': 'nosniff', 'referrer-policy': 'no-referrer', 'Cache-Control': 'no-store', 'pragma': 'no-cache', 'bp-request-id': '6e914827-43be-4fb8-9b52-cd9d93caadaa', 'CF-Cache-Status': 'DYNAMIC', 'Server': 'cloudflare', 'CF-RAY': '88207cfe9cb49722-AMS'} ResponseBody: []
fetch Request: onetrading GET https://api.onetrading.com/public/v1/instruments RequestHeaders: {'User-Agent': 'python-requests/2.31.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive'} RequestBody: None
fetch Response: onetrading GET https://api.onetrading.com/public/v1/instruments 200 ResponseHeaders: {'Date': 'Sat, 11 May 2024 07:34:44 GMT', 'Content-Type': 'application/json', 'Content-Length': '2', 'Connection': 'keep-alive', 'apigw-requestid': 'XmHfQjFgjoEEPFA=', 'strict-transport-security': 'max-age=31536000; includeSubDomains', 'x-frame-options': 'sameorigin', 'x-xss-protection': '1; mode=block', 'x-content-type-options': 'nosniff', 'referrer-policy': 'no-referrer', 'Cache-Control': 'no-store', 'pragma': 'no-cache', 'bp-request-id': '7aee16cb-fc32-47bc-b8c5-60f2d8fa54e1', 'CF-Cache-Status': 'DYNAMIC', 'Server': 'cloudflare', 'CF-RAY': '88207d002e889722-AMS'} ResponseBody: []
{}
</code></pre>
|
<python><python-3.x><ccxt>
|
2024-05-11 07:43:38
| 1
| 464
|
dovregubben
|
78,463,746
| 1,324,366
|
How to set port number in Python Package manager Anaconda Navigator
|
<p>Every time I launch Anaconda Navigator, it opens on a new port. I want to set or fix the port number when I launch it.</p>
<p>I have searched the documentation a bit; however, I did not find anything, or I might have missed it.</p>
<p>Does anyone know how to set the port number when launching Anaconda Navigator?</p>
<p>Note
OS: Ubuntu</p>
|
<python><anaconda><conda>
|
2024-05-11 07:33:48
| 0
| 4,453
|
Ahmad Sharif
|
78,463,676
| 1,413,856
|
Understanding Tkinter window, and Canvas sizes
|
<p>I can’t get any information on exactly how window and canvas sizes work in Tkinter.</p>
<p>If I create a window, I can set its size using either the <code>.configure()</code> method or the <code>.geometry()</code> method. If I set both, the <code>.geometry()</code> method appears to override the <code>.configure()</code> method, and I know I can also set the position using the <code>.geometry()</code> method.</p>
<p>I can also create a <code>Canvas</code> object and pack it inside the window. Regardless of the size I set for the canvas, either too large or too small, it still fits neatly inside the window.</p>
<p>Here is a code snippet to illustrate my experiments:</p>
<pre class="lang-py prettyprint-override"><code>root = tkinter.Tk()
root.configure(background='#ECECEC', padx=20, pady=20)
root.geometry('800x600+200+200')
root.configure(width=1200, height=700)
canvas = tkinter.Canvas(root, width=600, height=1400, background='white')
canvas.pack(fill="both", expand=True)
root.mainloop()
</code></pre>
<p>What’s the relationship between the window <code>.configure()</code> and <code>.geometry()</code> methods, and how should this affect the included canvas?</p>
|
<python><tkinter><tkinter-canvas>
|
2024-05-11 07:06:06
| 0
| 16,921
|
Manngo
|
78,463,610
| 9,494,140
|
Django REST framework error: django.core.exceptions.ImproperlyConfigured: Router with basename X is already registered, even though it is not
|
<p>In my <code>django</code> app I am using <code>DRF</code>, and it gives me this strange error when I try to run the app:</p>
<pre><code>water_maps | router.register(r'breaches_edit_delete', api.BreacEditDeletehViewSet)
water_maps | File "/usr/local/lib/python3.10/dist-packages/rest_framework/routers.py", line 59, in register
water_maps | raise ImproperlyConfigured(msg)
water_maps | django.core.exceptions.ImproperlyConfigured: Router with basename "breach" is already registered. Please provide a unique basename for viewset "<class 'water_maps_dashboard.api.BreacEditDeletehViewSet'>"
</code></pre>
<p>despite there being no duplication in the routers. Here is my <code>URLS</code> file:</p>
<pre class="lang-py prettyprint-override"><code>from django.urls import path, include
from rest_framework import routers
from . import api
from . import views
from .api import check_current_version, check_for_device_id, check_for_login, get_cords_ids
router = routers.DefaultRouter()
router.register("main_depart", api.MainDepartmentViewSet)
router.register("job_description", api.JobDescriptionViewSet)
# router.register("section", api.SectionViewSet)
router.register("city", api.CityViewSet)
router.register("AppUser", api.AppUserViewSet)
router.register("Breach", api.BreachViewSet)
router.register("Section", api.SectionViewSet)
router.register("MapLayers", api.MapLayersViewSet)
router.register("LayersCategory", api.LayersCategoryViewSet)
router.register("UserMessage", api.UserMessageViewSet)
router.register("about_us", api.AboutUsViewSet)
# router.register(r'breaches_edit_delete', api.BreacEditDeletehViewSet)
router.register(r'edit_delete_breach', api.BreacEditDeletehViewSet)
urlpatterns = (
path("api/do_search/", api.do_search, name="do_search"),
path("api/activate_account/", api.activate_account, name="activate_account"),
path("api/user_login/", api.user_login, name="user_login"),
path("api/create_app_user/", api.create_app_user, name="create_app_user"),
path("api/save_breaches/", api.save_breaches, name="save_breaches"),
path("api/add_cords_to_breach/", api.add_cords_to_breach, name="add_cords_to_breach"),
path("api/do_forgot_password/", api.do_forgot_password, name="do_forgot_password"),
path("api/reset_password/", api.reset_password, name="reset_password"),
path("api/get_my_breaches/", api.get_my_breaches, name="get_my_breaches"),
path("api/map_view/", api.map_view, name="map_view"),
path("api/post_user_message/", api.post_user_message, name="post_user_message"),
path("api/add_edit_request/", api.add_edit_request, name="add_edit_request"),
path("api/request_delete_account/", api.request_delete_account, name="request_delete_account"),
path("api/", include(router.urls)),
path("api/check_for_login/", check_for_login, name="check_for_login"),
path("api/check_for_device_id/", check_for_device_id, name="check_for_device_id"),
path("api/check_current_version/", check_current_version, name="check_current_version"),
path("api/get_cords_ids/", get_cords_ids, name="get_cords_ids"),
path("upload_data/", views.upload_data, name="upload_data"),
path("admin_logs/", views.admin_logs, name="admin_logs"),
path("trigger_task/", views.admin_logs, name="trigger_task"),
path("send_email/", views.send_email_to_users, name="send_email"),
path("api/the_map/", api.show_map, name="the_map"),
path("reset_device_id/",views.reset_device_id,name="reset_device_id"),
path("create-folder/",views.create_folder,name="create_folder"),
path("get_folders_and_drawings/",views.fetch_folders_and_drawings,name="get_folders_and_drawings"),
path("get_connected_water_elements/",views.get_connected_water_elements,name="get_connected_water_elements"),
path('verify/<int:app_user_id>/', views.verify_otp_view, name='verify_otp'),
path("api/get_map_layers_count/", api.get_map_layers_count, name="get_map_layers_count"),
)
</code></pre>
<p><strong>api views</strong></p>
<pre><code>.....
class BreachViewSet(viewsets.ModelViewSet):
"""ViewSet for the Breach class"""
queryset = models.Breach.objects.all()
serializer_class = serializers.BreachSerializer
permission_classes = [permissions.IsAuthenticated]
class BreacEditDeletehViewSet(viewsets.ModelViewSet):
queryset = models.Breach.objects.all()
serializer_class = serializers.BreachEditDeleteSerializer
@permission_classes(["IsAuthenticated"])
@api_view(['POST'])
def save_breaches(request):
data = {}
if request.method == 'POST':
token = request.META['HTTP_AUTHORIZATION'][6:]
text = request.data['text']
admin_area_name = request.data['admin_area_name']
# breach_area_data = request.data['breach_area_data']
date = request.data['date']
time = request.data['time']
cont = request.data['cont']
others = request.data['others']
breach_area_appointments = request.data['breach_area_appointments']
trophies_fault = request.data['trophies_fault']
city_id = request.data['city_id']
helpers = request.data['helpers']
image = None
image_2 = None
if 'image' in request.data:
image = request.data['image']
if 'image_2' in request.data:
image_2 = request.data['image_2']
city_object = models.City.objects.get(id=city_id)
main_user = Token.objects.get(key=token).user
app_user = models.AppUser.objects.get(user=main_user)
new_breach = models.Breach.objects.create(
app_user=app_user, text=text, admin_area_name=admin_area_name,
breach_area_appointments=breach_area_appointments,
trophies_fault="مكافأة" if trophies_fault == "0" else "محاسبة" if trophies_fault == "1" else "قبول أداء",city=city_object,image=image,image_2=image_2,
helpers=helpers,date=date,time=time,main_contributors=cont,others=others)
data = {"success": True, "breach_id": new_breach.id}
return Response(data, headers=get_headers())
@permission_classes(["IsAuthenticated"])
@api_view(['POST'])
def add_cords_to_breach(request):
"""
add cords to breach
"""
data = {}
if request.method == 'POST':
# try:
breach_id = request.data['breach_id']
breach_object = models.Breach.objects.get(id=breach_id)
cords = request.data['cords']
kml = simplekml.Kml()
lineCords = []
for item in cords:
new_cord = models.BreachCords.objects.create(
latlng=item, breach=breach_object)
theLat = float(item.split(",")[0])
theLng = float(item.split(",")[1])
kml.newpoint(name=item.split(",")[0], coords=[(theLat, theLng)])
lineCords.append((theLat, theLng))
lin = kml.newlinestring(name="", description="",
coords=lineCords)
theDIR = os.path.join(BASE_DIR, f'kml_files/')
new_kml = kml.save(f'{theDIR}{breach_id}.kml')
with open(f'{theDIR}{breach_id}.kml') as f:
breach_object.kml_file.save(f"{breach_id}.kml", File(f))
create_report_file(breach_object)
data = {"success": True, }
return Response(data, headers=get_headers())
@permission_classes(["IsAuthenticated"])
@api_view(['GET'])
def get_my_breaches(request):
data = {}
if request.method == 'GET':
token = request.META['HTTP_AUTHORIZATION'][6:]
main_user = Token.objects.get(key=token).user
app_user = models.AppUser.objects.get(user=main_user)
my_breaches = models.Breach.objects.filter(app_user=app_user)
data = {
"success": True,
"details": serializers.BreachSerializer(my_breaches, many=True).data
}
return Response(data, headers=get_headers())
</code></pre>
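<p>While digging, I sketched in plain Python how I understand <code>SimpleRouter</code> picks a default basename (my own simplification, not the library's exact code): it lowercases the queryset model's class name, so <code>BreachViewSet</code> and <code>BreacEditDeletehViewSet</code>, which share <code>models.Breach.objects.all()</code>, would both default to <code>breach</code>:</p>

```python
# Simplified sketch of how I believe DRF's SimpleRouter derives a
# default basename (not the library's exact code): it lowercases the
# queryset model's class name.
def default_basename(model_name):
    return model_name.lower()

# Both viewsets use models.Breach.objects.all(), so both would get:
print(default_basename("Breach"))
```

<p>If that is right, passing an explicit <code>basename</code> to the second registration (e.g. a hypothetical <code>basename='breach_edit_delete'</code>) should avoid the collision.</p>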
|
<python><django><django-rest-framework>
|
2024-05-11 06:36:49
| 1
| 4,483
|
Ahmed Wagdi
|
78,463,455
| 5,821,028
|
Manual SVD implementation in Python fails
|
<p>I am trying to implement a function for Singular Value Decomposition (SVD) in Python by myself, using the eigenvalue decomposition of A^T A and AA^T, but the reconstructed matrix (B) does not always match the original matrix (A). Here is my code:</p>
<pre><code>import numpy as np
# Generate a random matrix A
row, col = 3, 3
A = np.random.normal(size=row * col).reshape(row, col)
# Eigen decomposition of A^T*A and A*A^T
ATA = A.T @ A
AAT = A @ A.T
eigenvalues_ATA, eigenvectors_ATA = np.linalg.eig(ATA)
eigenvalues_AAT, eigenvectors_AAT = np.linalg.eig(AAT)
# Sort eigenvalues and eigenvectors
idx_ATA = eigenvalues_ATA.argsort()[::-1]
idx_AAT = eigenvalues_AAT.argsort()[::-1]
sorted_eigenvectors_ATA = eigenvectors_ATA[:, idx_ATA]
sorted_eigenvectors_AAT = eigenvectors_AAT[:, idx_AAT]
# Calculate singular values
sorted_singularvalues_ATA = np.sqrt(np.abs(eigenvalues_ATA[idx_ATA]))
sorted_singularvalues_AAT = np.sqrt(np.abs(eigenvalues_AAT[idx_AAT]))
# Construct diagonal matrix S
S = np.zeros_like(A)
np.fill_diagonal(S, sorted_singularvalues_ATA)
# Reconstruct matrix B
B = sorted_eigenvectors_AAT @ S @ sorted_eigenvectors_ATA.T
print(np.allclose(A, B))
</code></pre>
<p>Does anyone know why this reconstruction only occasionally matches the original matrix A?</p>
<pre><code># Example it works
A_equal = [-1.59038869, -0.28431377, 0.36309318,
0.07133563, -0.20420962, 1.82207923,
0.84681193, 0.31419994, -0.93808105]
# Example it fails
A_not_equal = [ 1.61171729, 0.6436384, 0.47359656,
-1.04121454, 0.17558459, 0.36595138,
0.40957221, 0.20499528, 0.18525562]
A = np.array(A_not_equal).reshape(3,3)
# Expected output
[[ 1.61171729 0.6436384 0.47359656]
[-1.04121454 0.17558459 0.36595138]
[ 0.40957221 0.20499528 0.18525562]]
# Actual output
[[-1.61240387 -0.63872607 -0.47789066]
[ 1.04102391 -0.17422069 -0.36714363]
[-0.40734839 -0.22090623 -0.17134711]]
</code></pre>
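<p>For reference, here is a variant I tried that seems to reconstruct A every time (a sketch assuming a square, full-rank A with nonzero singular values): instead of two independent eigendecompositions, it derives V from U, so the arbitrary eigenvector signs stay consistent. I would still like to understand why my original version fails.</p>

```python
import numpy as np

A = np.array([[ 1.61171729,  0.6436384 ,  0.47359656],
              [-1.04121454,  0.17558459,  0.36595138],
              [ 0.40957221,  0.20499528,  0.18525562]])

# eigh is meant for the symmetric matrix A A^T and returns orthonormal
# eigenvectors sorted ascending, so reverse for descending order
eigvals, U = np.linalg.eigh(A @ A.T)
U = U[:, ::-1]
s = np.sqrt(np.clip(eigvals[::-1], 0.0, None))

# A = U S V^T implies V = A^T U S^{-1}; dividing by s scales each
# column, keeping the signs of V tied to the signs chosen for U
V = (A.T @ U) / s

B = U @ np.diag(s) @ V.T
print(np.allclose(A, B))
```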
|
<python><svd>
|
2024-05-11 05:10:46
| 1
| 1,125
|
Jihyun
|
78,463,439
| 13,380,708
|
Django products not being added to cart immediately
|
<p><a href="https://i.sstatic.net/kEPmcEeb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kEPmcEeb.png" alt="This is the add to cart button which when I click on, the product is added and the cart modal should show up with the added product" /></a></p>
<p><a href="https://i.sstatic.net/xfKteFiI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xfKteFiI.png" alt="This is what I am expecting to see, but I only see this after refreshing the page" /></a></p>
<p>The first picture shows the add-to-cart button; when I click it, the product is added and the cart modal should show up with the added product.</p>
<p>The second picture shows what I am expecting to see, but I only see this after refreshing the page.</p>
<p>The third picture shows what I am actually seeing when I click on the add to cart button, just an empty cart modal pops up.</p>
<p><a href="https://i.sstatic.net/jyhLRLBF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jyhLRLBF.png" alt="enter image description here" /></a>.</p>
<p>So I am trying to build a Django e-commerce website, where there is a cart modal which pops up when I try to add any product by clicking the add to cart button.</p>
<p>While the product is being added correctly in the backend (which I can verify in the admin panel), the product doesn't show up immediately in my cart modal, which is essential for the website to look good. Only after I refresh the page does the product show up in the modal. What can I try next?</p>
<p>My cart model:</p>
<pre><code>
class Cart(models.Model):
user = models.ForeignKey(CustomUser, on_delete=models.CASCADE, related_name='cart')
product = models.ForeignKey(Product, on_delete=models.CASCADE)
quantity = models.PositiveIntegerField(default=1)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
def __str__(self):
return f"{self.user.username} - {self.product.name}"
</code></pre>
<p>My views.py:</p>
<pre><code>class CartView(LoginRequiredMixin, View):
def get(self, request):
cart_items = Cart.objects.filter(user=request.user)
total_price = sum(item.product.price * item.quantity for item in cart_items)
return render(request, 'business/cart.html', {'cart_items': cart_items, 'total_price': total_price})
def post(self, request):
try:
data = json.loads(request.body)
product_id = data.get('product_id')
except json.JSONDecodeError:
logger.error("Invalid JSON data in the request body")
return JsonResponse({'error': 'Invalid JSON data'}, status=400)
logger.debug(f'Received product_id: {product_id}')
if not product_id:
logger.error("No product_id provided in the request")
return JsonResponse({'error': 'No product_id provided'}, status=400)
product = get_object_or_404(Product, id=product_id)
cart_item, created = Cart.objects.get_or_create(user=request.user, product=product)
if not created:
cart_item.quantity += 1
cart_item.save()
cart_items = Cart.objects.filter(user=request.user)
cart_data = []
for item in cart_items:
cart_data.append({
'id': item.id,
'product': {
'id': item.product.id,
'name': item.product.name,
'price': float(item.product.price),
'image': item.product.image.url if item.product.image else None,
},
'quantity': item.quantity,
})
total_price = sum(item.product.price * item.quantity for item in cart_items)
logger.debug("Returning updated cart data as JSON response")
return JsonResponse({'success': True, 'items': cart_data, 'subtotal': total_price})
def get_cart_data(self, request):
logger.debug("Received request to fetch cart data")
cart_items = Cart.objects.filter(user=request.user)
cart_data = []
for item in cart_items:
cart_data.append({
'id': item.id,
'product': {
'id': item.product.id,
'name': item.product.name,
'price': float(item.product.price),
'image': str(item.product.image.url) if item.product.image else None,
},
'quantity': item.quantity,
})
total_price = sum(item.product.price * item.quantity for item in cart_items)
logger.debug(f"Returning cart data: {cart_data}")
return JsonResponse({'items': cart_data, 'subtotal': total_price})
def update_quantity(self, request):
try:
data = json.loads(request.body)
item_id = data.get('item_id')
action = data.get('action')
except json.JSONDecodeError:
return JsonResponse({'error': 'Invalid JSON data'}, status=400)
# Retrieve the cart item
cart_item = get_object_or_404(Cart, id=item_id)
if action == 'increase':
cart_item.quantity += 1
elif action == 'decrease':
if cart_item.quantity > 1:
cart_item.quantity -= 1
cart_item.save()
# Calculate total price and prepare cart data
cart_items = Cart.objects.filter(user=request.user)
cart_data = [
{
'id': item.id,
'product': {
'id': item.product.id,
'name': item.product.name,
'price': float(item.product.price),
'image': item.product.image.url if item.product.image else None,
},
'quantity': item.quantity,
}
for item in cart_items
]
total_price = sum(item.product.price * item.quantity for item in cart_items)
return JsonResponse({'success': True, 'items': cart_data, 'subtotal': total_price})
def dispatch(self, request, *args, **kwargs):
if request.method == 'POST':
if request.path == '/cart/update_quantity/':
return self.update_quantity(request, *args, **kwargs)
else:
# Handle other POST requests here
pass
elif request.method == 'GET':
if request.path == '/cart/data/':
logger.debug(f"Request Path: {request.path}")
return self.get_cart_data(request)
else:
# Handle other GET requests here
pass
# Fall back to the default behavior if the request doesn't match any of the above conditions
return super().dispatch(request, *args, **kwargs)
</code></pre>
<p>My urls.py:</p>
<pre><code>
urlpatterns = [
path('cart/', views.CartView.as_view(), name='cart'),
path('cart/data/', views.CartView.as_view(), name='cart_data'),
path('cart/update_quantity/', views.CartView.as_view(), name='update_quantity'),
]
</code></pre>
<p>My cart modal in my base.html:</p>
<pre><code> <div class='modal-cart-block'>
<div class='modal-cart-main flex'>
<div class="right cart-block md:w-1/2 w-full py-6 relative overflow-hidden">
<div class="heading px-6 pb-3 flex items-center justify-between relative">
<div class="heading5">Shopping Cart</div>
<div
class="close-btn absolute right-6 top-0 w-6 h-6 rounded-full bg-surface flex items-center justify-center duration-300 cursor-pointer hover:bg-black hover:text-white">
<i class="ph ph-x text-sm"></i>
</div>
</div>
<div class="time countdown-cart px-6">
<div class=" flex items-center gap-3 px-5 py-3 bg-green rounded-lg">
<p class='text-3xl'>🔥</p>
<div class="caption1">Your cart will expire in <span
class='text-red caption1 font-semibold'><span class="minute">04</span>:<span
class="second">59</span></span>
minutes!<br />
Please checkout now before your items sell out!</div>
</div>
</div>
<div class="heading banner mt-3 px-6">
<div class="text">Buy <span class="text-button"> $<span class="more-price">0</span>.00 </span>
<span>more to get </span>
<span class="text-button">freeship</span>
</div>
<div class="tow-bar-block mt-3">
<div class="progress-line"></div>
</div>
</div>
<div class="list-product px-6">
{% for item in cart_items %}
<div data-item="2" class="item py-5 flex items-center justify-between gap-3 border-b border-line">
<div class="infor flex items-center gap-3 w-full">
<div class="bg-img w-[100px] aspect-square flex-shrink-0 rounded-lg overflow-hidden">
<img style="height: 130px" src="{{ item.product.image.url }}" alt="product" class="w-full h-full">
</div>
<div class="w-full">
<div class="flex items-center justify-between w-full">
<div class="name text-button">{{ item.product.name }}</div>
<div class="remove-cart-btn remove-btn caption1 font-semibold text-red underline cursor-pointer">
Remove
</div>
</div>
<div class="flex items-center justify-between gap-2 mt-3 w-full">
<div class="flex items-center text-secondary2 capitalize">
XS/white
</div>
<div class="product-price text-title">${{ item.product.price }}</div>
</div>
</div>
</div>
</div>
{% endfor %}
</div>
<div class="footer-modal bg-white absolute bottom-0 left-0 w-full">
<div class="flex items-center justify-center lg:gap-14 gap-8 px-6 py-4 border-b border-line">
<div class="note-btn item flex items-center gap-3 cursor-pointer">
<i class="ph ph-note-pencil text-xl"></i>
<div class="caption1">Note</div>
</div>
<div class="shipping-btn item flex items-center gap-3 cursor-pointer">
<i class="ph ph-truck text-xl"></i>
<div class="caption1">Shipping</div>
</div>
<div class="coupon-btn item flex items-center gap-3 cursor-pointer">
<i class="ph ph-tag text-xl"></i>
<div class="caption1">Coupon</div>
</div>
</div>
<div class="flex items-center justify-between pt-6 px-6">
<div class="heading5">Subtotal</div>
<div class="total-price">${{ total_price }}</div>
</div>
<div class="block-button text-center p-6">
<div class="flex items-center gap-4">
<a href='{% url "cart" %}'
class='button-main basis-1/2 bg-white border border-black text-black text-center uppercase'>
View cart
</a>
<a href='checkout.html' class='button-main basis-1/2 text-center uppercase'>
Check Out
</a>
</div>
<div
class="text-button-uppercase continue mt-4 text-center has-line-before cursor-pointer inline-block">
Or continue shopping</div>
</div>
<div class='tab-item note-block'>
<div class="px-6 py-4 border-b border-line">
<div class="item flex items-center gap-3 cursor-pointer">
<i class="ph ph-note-pencil text-xl"></i>
<div class="caption1">Note</div>
</div>
</div>
<div class="form pt-4 px-6">
<textarea name="form-note" id="form-note" rows=4
placeholder='Add special instructions for your order...'
class='caption1 py-3 px-4 bg-surface border-line rounded-md w-full'></textarea>
</div>
<div class="block-button text-center pt-4 px-6 pb-6">
<div class='button-main w-full text-center'>Save</div>
<div class="cancel-btn text-button-uppercase mt-4 text-center
has-line-before cursor-pointer inline-block">Cancel</div>
</div>
</div>
<div class='tab-item shipping-block'>
<div class="px-6 py-4 border-b border-line">
<div class="item flex items-center gap-3 cursor-pointer">
<i class="ph ph-truck text-xl"></i>
<div class="caption1">Estimate shipping rates</div>
</div>
</div>
<div class="form pt-4 px-6">
<div class="">
<label for='select-country' class="caption1 text-secondary">Country/region</label>
<div class="select-block relative mt-2">
<select id="select-country" name="select-country"
class='w-full py-3 pl-5 rounded-xl bg-white border border-line'>
<option value="Country/region">Country/region</option>
<option value="France">France</option>
<option value="Spain">Spain</option>
<option value="UK">UK</option>
<option value="USA">USA</option>
</select>
<i
class="ph ph-caret-down text-xs absolute top-1/2 -translate-y-1/2 md:right-5 right-2"></i>
</div>
</div>
<div class="mt-3">
<label for='select-state' class="caption1 text-secondary">State</label>
<div class="select-block relative mt-2">
<select id="select-state" name="select-state"
class='w-full py-3 pl-5 rounded-xl bg-white border border-line'>
<option value="State">State</option>
<option value="Paris">Paris</option>
<option value="Madrid">Madrid</option>
<option value="London">London</option>
<option value="New York">New York</option>
</select>
<i
class="ph ph-caret-down text-xs absolute top-1/2 -translate-y-1/2 md:right-5 right-2"></i>
</div>
</div>
<div class="mt-3">
<label for='select-code' class="caption1 text-secondary">Postal/Zip Code</label>
<input class="border-line px-5 py-3 w-full rounded-xl mt-3" id="select-code" type="text"
placeholder="Postal/Zip Code" />
</div>
</div>
<div class="block-button text-center pt-4 px-6 pb-6">
<div class='button-main w-full text-center'>Calculator
</div>
<div class="cancel-btn text-button-uppercase mt-4 text-center
has-line-before cursor-pointer inline-block">Cancel</div>
</div>
</div>
<div class='tab-item coupon-block'>
<div class="px-6 py-4 border-b border-line">
<div class="item flex items-center gap-3 cursor-pointer">
<i class="ph ph-tag text-xl"></i>
<div class="caption1">Add A Coupon Code</div>
</div>
</div>
<div class="form pt-4 px-6">
<div class="">
<label for='select-discount' class="caption1 text-secondary">Enter Code</label>
<input class="border-line px-5 py-3 w-full rounded-xl mt-3" id="select-discount"
type="text" placeholder="Discount code" />
</div>
</div>
<div class="block-button text-center pt-4 px-6 pb-6">
<div class='button-main w-full text-center'>Apply</div>
<div class="cancel-btn text-button-uppercase mt-4 text-center
has-line-before cursor-pointer inline-block">Cancel</div>
</div>
</div>
</div>
</div>
</div>
</div>
</code></pre>
<p>My cart modal in my main.js:</p>
<pre><code>// Modal Cart
const cartIcon = document.querySelector(".cart-icon");
const modalCart = document.querySelector(".modal-cart-block");
const modalCartMain = document.querySelector(".modal-cart-block .modal-cart-main");
const closeCartIcon = document.querySelector(".modal-cart-main .close-btn");
const continueCartIcon = document.querySelector(".modal-cart-main .continue");
const addCartBtns = document.querySelectorAll(".add-cart-btn");
const openModalCart = () => {
modalCartMain.classList.add("open");
};
const closeModalCart = () => {
modalCartMain.classList.remove("open");
};
function getCookie(name) {
let cookieValue = null;
if (document.cookie && document.cookie !== '') {
const cookies = document.cookie.split(';');
for (let i = 0; i < cookies.length; i++) {
const cookie = cookies[i].trim();
if (cookie.substring(0, name.length + 1) === (name + '=')) {
cookieValue = decodeURIComponent(cookie.substring(name.length + 1));
break;
}
}
}
return cookieValue;
}
const csrftoken = getCookie('csrftoken');
const addToCart = (productId) => {
const product_id = productId;
console.log('Product ID:', product_id);
fetch('/cart/', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-CSRFToken': csrftoken,
},
body: JSON.stringify({ product_id }),
})
.then(response => {
if (!response.ok) {
throw new Error('Failed to add product to cart');
}
return response.json();
})
.then(data => {
if (data.success) {
console.log('Product added successfully:', data);
updateCartModalContent(); // Ensure this function is called immediately after adding the product
openModalCart();
}
})
.catch(error => console.error('Error:', error));
};
document.addEventListener("DOMContentLoaded", function() {
const plusIcons = document.querySelectorAll(".ph-plus");
const minusIcons = document.querySelectorAll(".ph-minus");
plusIcons.forEach(icon => {
icon.addEventListener("click", function() {
const itemId = icon.dataset.itemId;
updateQuantity(itemId, 'increase');
});
});
minusIcons.forEach(icon => {
icon.addEventListener("click", function() {
const itemId = icon.dataset.itemId;
updateQuantity(itemId, 'decrease');
});
});
function updateQuantity(itemId, action) {
fetch('/cart/update_quantity/', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'X-CSRFToken': csrftoken,
},
body: JSON.stringify({ item_id: itemId, action: action }),
})
.then(response => {
if (!response.ok) {
throw new Error('Failed to update quantity');
}
return response.json();
})
.then(data => {
if (data.success) {
// Update the cart display based on the response
updateCartModalContent();
}
})
.catch(error => console.error('Error:', error));
}
});
addCartBtns.forEach((btn) => {
btn.addEventListener('click', () => {
const productId = btn.dataset.productId;
console.log('Product ID from button:', productId); // Add this line
addToCart(productId);
});
});
cartIcon.addEventListener("click", openModalCart);
modalCart.addEventListener("click", closeModalCart);
closeCartIcon.addEventListener("click", closeModalCart);
continueCartIcon.addEventListener("click", closeModalCart);
modalCartMain.addEventListener("click", (e) => {
e.stopPropagation();
});
function updateCartModalContent() {
console.log('Updating cart modal content...');
fetchCartData()
.then((cartData) => {
console.log('Cart data fetched:', cartData);
const cartItemsContainer = document.querySelector('.list-product');
cartItemsContainer.innerHTML = '';
if (cartData.items.length === 0) {
cartItemsContainer.innerHTML = '<p class="mt-1">No product in cart</p>';
} else {
cartData.items.forEach((item) => {
const cartItem = createCartItemElement(item);
cartItemsContainer.appendChild(cartItem);
});
}
const subtotalElement = document.querySelector('.total-cart');
const subtotal = typeof cartData.subtotal === 'number' ? cartData.subtotal : 0;
subtotalElement.textContent = `$${subtotal.toFixed(2)}`;
})
.catch((error) => {
console.error('Error fetching cart data:', error);
});
}
function fetchCartData() {
// Make an AJAX request to fetch the current cart data
return fetch('/cart/data/')
.then((response) => response.json())
.then((data) => data);
}
function createCartItemElement(item) {
console.log('Creating cart item element for:', item);
const cartItemElement = document.createElement('div');
cartItemElement.classList.add('item', 'py-5', 'flex', 'items-center', 'justify-between', 'gap-3', 'border-b', 'border-line');
cartItemElement.dataset.item = item.id;
const imageUrl = item.product.image || '/static/path/to/default-image.png';
cartItemElement.innerHTML = `
<div class="infor flex items-center gap-3 w-full">
<div class="bg-img w-[100px] aspect-square flex-shrink-0 rounded-lg overflow-hidden">
<img src="${imageUrl}" alt="product" class="w-full h-full">
</div>
<div class="w-full">
<div class="flex items-center justify-between w-full">
<div class="name text-button">${item.product.name}</div>
<div class="remove-cart-btn remove-btn caption1 font-semibold text-red underline cursor-pointer">
Remove
</div>
</div>
<div class="flex items-center justify-between gap-2 mt-3 w-full">
<div class="flex items-center text-secondary2 capitalize">
XS/white
</div>
<div class="product-price text-title">$${item.product.price}</div>
</div>
</div>
</div>
`;
return cartItemElement;
}
</code></pre>
<p>Note:</p>
<p>These are my console debug statements:</p>
<pre><code>Product ID from button: 4
main.js:496 Product ID: 4
main.js:514 Product added successfully: Object
main.js:587 Updating cart modal content...
main.js:590 Cart data fetched: Object
main.js:623 Creating cart item element for: Object
main.js:608 Error fetching cart data: TypeError: Cannot set properties of null (setting 'textContent')
at main.js:605:35
</code></pre>
<p>These are my terminal debug statements:</p>
<pre><code>Received product_id: 4
Returning updated cart data as JSON response
[14/May/2024 07:42:32] "POST /cart/ HTTP/1.1" 200 169
Request Path: /cart/data/
Received request to fetch cart data
Returning cart data: [{'id': 71, 'product': {'id': 4, 'name': 'Marlin Knit', 'price': 199.0, 'image': '/products/RG1.jpeg'}, 'quantity': 1}]
</code></pre>
|
<javascript><python><html><django>
|
2024-05-11 04:59:45
| 1
| 470
|
Jack
|
78,463,386
| 4,387,837
|
Python: MultiIndex Dataframe to json-like list of dictionaries
|
<p>I want to store this data frame</p>
<pre><code>df = pd.DataFrame({
'id':[1,1,2,2],
'gender':["m","m","f","f"],
'val1':[1,2,5,6],
'val2':[3,4,7,8]
}).set_index(['id','gender'])
</code></pre>
<p>as a json file that contains lists of dictionaries like this:</p>
<pre><code>d = [{'id':1, 'gender':"m", 'val1':[1,2], 'val2':[3,4]},
{'id':2, 'gender':"f", 'val1':[5,6], 'val2':[7,8]}]
</code></pre>
<p>All my attempts to use variations of <code>df.to_dict(orient = '...')</code> or <code>df.to_json(orient = '...')</code> did not produce the desired output.</p>
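<p>For comparison, one chain I sketched that seems to give this shape — grouping the value columns into lists, then dumping records — though I'd welcome a cleaner way (the <code>groupby</code>/<code>agg(list)</code> step is my own guess at the idiom):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'id': [1, 1, 2, 2],
    'gender': ["m", "m", "f", "f"],
    'val1': [1, 2, 5, 6],
    'val2': [3, 4, 7, 8],
}).set_index(['id', 'gender'])

# collect each value column into a list per (id, gender) pair,
# then flatten the index back into columns and emit records
d = (df.groupby(level=['id', 'gender'])
       .agg(list)
       .reset_index()
       .to_dict(orient='records'))
print(d)
```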
|
<python><pandas><dictionary>
|
2024-05-11 04:32:33
| 1
| 839
|
HOSS_JFL
|
78,463,166
| 4,367,371
|
Spark melt/transpose columns to values
|
<p>I am trying to transpose the columns of a table to rows</p>
<p>I have a table that looks like this:</p>
<pre><code>+-----+-----+-----+-------+
|Date |col_1|col_2|col_...|
+-----+-----+-----+-------+
| 1 | 0.0| 0.6| ... |
| 2 | 0.6| 0.7| ... |
| 3 | 0.5| 0.9| ... |
| ...| ...| ...| ... |
</code></pre>
<p>And the result should look like this:</p>
<pre><code>+-----+--------+-----------+
|Date |Location| Value |
+-----+--------+-----------+
| 1 | col_1| 0.0|
| 1 | col_2| 0.6|
| ...| ...| ...|
| 2 | col_1| 0.6|
| 2 | col_2| 0.7|
| ...| ...| ...|
| 3 | col_1| 0.5|
| 3 | col_2| 0.9|
| ...| ...| ...|
</code></pre>
<p>In Pandas I have the following code that does what I need</p>
<pre><code>df = df.melt(id_vars=["Date"],
var_name="Location",
value_name="Value")
</code></pre>
<p>What is the equivalent code in Pyspark?</p>
<p>I have seen a similar question here: <a href="https://stackoverflow.com/questions/37864222/transpose-column-to-row-with-spark">Transpose column to row with Spark</a></p>
<p>However, all the answers assume that you can list out all of the columns. In my case I have thousands of columns, and the column names are a mix of lat/long coordinates and other strings.</p>
<p>Is there an equivalent way to achieve what the pandas code does in pyspark?</p>
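<p>One direction I am considering (untested at scale) is building a Spark SQL <code>stack()</code> expression from <code>df.columns</code> at runtime, so no column has to be listed by hand. The snippet below only builds the expression string; the final <code>df.selectExpr(...)</code> call at the end is the PySpark part:</p>

```python
# Stand-in for df.columns, which is only known at runtime;
# "Date" is the single id column.
columns = ["Date", "col_1", "col_2", "col_3"]
value_cols = [c for c in columns if c != "Date"]

# Spark SQL's stack(n, name1, val1, name2, val2, ...) unpivots n pairs;
# backticks protect odd column names such as lat,long coordinates
stack_expr = "stack({}, {}) as (Location, Value)".format(
    len(value_cols),
    ", ".join("'{0}', `{0}`".format(c) for c in value_cols),
)
print(stack_expr)

# In PySpark this would then be:
#   long_df = df.selectExpr("Date", stack_expr)
```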
|
<python><pandas><apache-spark><pyspark><azure-databricks>
|
2024-05-11 02:11:14
| 2
| 3,671
|
Mustard Tiger
|
78,462,902
| 1,815,710
|
Django rest framework's ModelSerializer missing user field causing KeyError
|
<p>My <code>ModelSerializer</code> is getting a <code>KeyError</code> when I call <code>.data</code> on the serializer and I'm not sure how to properly fix this error.</p>
<p><a href="https://github.com/jazzband/django-rest-knox/blob/master/knox/models.py#L25" rel="nofollow noreferrer">Knox's AuthToken model</a> looks like this</p>
<pre><code>class AuthToken(models.Model):
objects = AuthTokenManager()
digest = models.CharField(
max_length=CONSTANTS.DIGEST_LENGTH, primary_key=True)
token_key = models.CharField(
max_length=CONSTANTS.TOKEN_KEY_LENGTH, db_index=True)
user = models.ForeignKey(User, null=False, blank=False,
related_name='auth_token_set', on_delete=models.CASCADE)
created = models.DateTimeField(auto_now_add=True)
expiry = models.DateTimeField(null=True, blank=True)
def __str__(self):
return '%s : %s' % (self.digest, self.user)
</code></pre>
<p>I created a Model serializer</p>
<pre><code>class AuthTokenResponseSerializer(serializers.ModelSerializer):
class Meta:
model = AuthToken
fields = "__all__"
</code></pre>
<p>However, when I create a <code>token_serializer</code>, I get this error</p>
<pre><code>>> token_serializer.data
*** KeyError: "Got KeyError when attempting to get a value for field `user` on serializer `AuthTokenResponseSerializer`.\nThe serializer field might be named incorrectly and not match any attribute or key on the `dict` instance.\nOriginal exception text was: 'user'."
</code></pre>
<p>I think I am getting this error because when the <code>token_serializer</code> is created, it's missing a <code>user</code> key. I can see there is a <code>user_id</code> key but not <code>user</code>:</p>
<pre><code>(Pdb) token_serializer
AuthTokenResponseSerializer(context={'request': <rest_framework.request.Request: POST '/auth/token/'>}, data={'_state': <django.db.models.base.ModelState object>, 'digest': '6b82ac8776113db376d8938a641c3ebdea573b4ab49cb648c171fccc7f4a517e6450cebb21682e26f36562cf7b659f5089957ba49e99c096ebc9eaaa93ab0e53', 'token_key': '872832f8', 'user_id': 34, 'created': None, 'expiry': datetime.datetime(2024, 5, 11, 9, 17, 2, 540546, tzinfo=datetime.timezone.utc)}, partial=True):
digest = CharField(max_length=128, validators=[<UniqueValidator(queryset=AuthToken.objects.all())>])
token_key = CharField(max_length=8)
created = DateTimeField(read_only=True)
expiry = DateTimeField(allow_null=True, required=False)
user = PrimaryKeyRelatedField(queryset=User.objects.all())
</code></pre>
<p>My User model</p>
<pre><code>from django.db import models
from django.utils import timezone
from django.contrib.auth.models import (
AbstractBaseUser,
PermissionsMixin,
BaseUserManager,
)
from django.utils.translation import gettext_lazy as _
from phonenumber_field.modelfields import PhoneNumberField
class User(AbstractBaseUser, PermissionsMixin):
email = models.EmailField(
verbose_name=_("email address"), unique=True, max_length=255
)
password = models.CharField(
max_length=128, null=True
)
first_name = models.CharField(max_length=128, null=True)
last_name = models.CharField(max_length=128, null=True)
phone_number = PhoneNumberField(null=True, blank=True)
is_staff = models.BooleanField(default=False)
is_active = models.BooleanField(default=True)
created_at = models.DateTimeField(default=timezone.now)
updated_at = models.DateTimeField(default=timezone.now)
</code></pre>
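<p>For what it's worth, the mismatch can be reproduced without Django: a model's <code>__dict__</code> stores the foreign key under <code>user_id</code>, while the serializer field is named <code>user</code>, so a dict-based lookup misses. (The usual fix — hedged, since the construction code isn't shown — is to pass the saved model instance, e.g. <code>AuthTokenResponseSerializer(instance=token)</code>, rather than <code>data=token.__dict__</code>.)</p>

```python
# Minimal illustration (no Django needed): when a serializer reads from a
# plain dict, it looks up the literal field name 'user'; the model's
# __dict__ only contains 'user_id', so the lookup raises KeyError.
state = {"digest": "6b82ac...", "token_key": "872832f8", "user_id": 34}

def read_field(source, name):
    try:
        return source[name]
    except KeyError:
        return f"missing field {name!r}"

assert read_field(state, "user_id") == 34
assert read_field(state, "user") == "missing field 'user'"
```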
|
<python><django><django-rest-framework><django-serializer><django-rest-knox>
|
2024-05-10 23:25:34
| 0
| 16,539
|
Liondancer
|
78,462,855
| 2,055,998
|
How to refer to configured Datasource from my code (IntelliJ + Python)
|
<p>I have Cassandra Datasource configured in my IntelliJ (as global).</p>
<p>I can manually establish a session and run CQL queries from console.</p>
<p>How do I do it from Python code without re-entering User ID and other parameters and referring to the Datasource instead?</p>
<p>This must be an <em>RTFM</em> case, but I could not find the answer.</p>
|
<python><intellij-idea><datasource>
|
2024-05-10 23:02:24
| 1
| 13,449
|
PM 77-1
|
78,462,450
| 22,056,059
|
Forecasting next year's trend from one year of historical data (10-minute intervals) using SARIMAX
|
<pre><code>from statsmodels.tsa.statespace.sarimax import SARIMAX
import pandas as pd
import itertools
import warnings
import numpy as np
import statsmodels.api as sm
try:
print("Starting script...")
# Load your dataset
print("Loading dataset...")
df = pd.read_csv('C:/Users/iameh/Downloads/archive/powerconsumption.csv')
# Convert 'Datetime' to datetime and set it as the index
print("Preprocessing data...")
df['Datetime'] = pd.to_datetime(df['Datetime'])
df.set_index('Datetime', inplace=True)
# Define the p, d, q parameters to take any value between 0 and 1
p = d = q = range(0, 2)
    # Generate all different combinations of p, d and q triplets
pdq = list(itertools.product(p, d, q))
    # Generate all different combinations of seasonal p, d and q triplets
seasonal_pdq = [(x[0], x[1], x[2], 12) for x in list(itertools.product(p, d, q))]
# Specify to ignore warning messages
warnings.filterwarnings("ignore")
# List of parameters to forecast
parameters = ['Temperature', 'Humidity', 'WindSpeed', 'GeneralDiffuseFlows', 'DiffuseFlows', 'PowerConsumption_Zone1', 'PowerConsumption_Zone2', 'PowerConsumption_Zone3']
print("Starting forecasting for each parameter...")
for parameter in parameters:
print(f"Forecasting for {parameter}...")
# Select the relevant column for forecasting
endog = df[parameter]
# Find the best parameters for the model
best_aic = np.inf
best_pdq = None
best_seasonal_pdq = None
temp_model = None
print("Finding best model parameters...")
for param in pdq:
for param_seasonal in seasonal_pdq:
try:
temp_model = SARIMAX(endog,
order = param,
seasonal_order = param_seasonal,
enforce_stationarity=False,
enforce_invertibility=False)
results = temp_model.fit()
# print("SARIMAX{}x{}12 - AIC:{}".format(param, param_seasonal, results.aic))
if results.aic < best_aic:
best_aic = results.aic
best_pdq = param
best_seasonal_pdq = param_seasonal
except:
continue
print("Best SARIMAX{}x{}12 model for {} - AIC:{}".format(best_pdq, best_seasonal_pdq, parameter, best_aic))
# Fit the best model
print("Fitting the best model...")
model = SARIMAX(endog,
order=best_pdq,
seasonal_order=best_seasonal_pdq,
enforce_stationarity=False,
enforce_invertibility=False)
results = model.fit()
# Get forecast steps ahead in future
print("Making forecasts...")
pred_uc = results.get_forecast(steps=52416)
# Write the DataFrame to a CSV file
print("Saving forecasts to CSV file...")
pred_uc.predicted_mean.to_csv(f'C:/Users/iameh/Downloads/archive/{parameter}_forecast.csv')
print(f"Forecasting for {parameter} completed!")
print("Script completed!")
except Exception as e:
print("An error occurred during the execution of the script:")
print(e)
</code></pre>
<p>I am using this code to generate the next year's predictions, but the script crashes and the shell restarts.</p>
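<p>One likely culprit (an assumption, since no traceback is shown) is memory: a year of 10-minute samples is roughly 52,560 rows, and the nested grid search fits 64 seasonal models per column on all of them. A common mitigation is to resample to a coarser frequency before fitting, which shrinks both the training data and the forecast horizon:</p>

```python
import pandas as pd

# Sketch: downsample 10-minute readings to hourly means before fitting,
# cutting both the training length and the forecast horizon by a factor
# of six. Uses a synthetic index; swap in your own 'Datetime'-indexed frame.
idx = pd.date_range("2017-01-01", periods=6 * 24 * 7, freq="10min")  # one week
df = pd.DataFrame({"PowerConsumption_Zone1": range(len(idx))}, index=idx)

hourly = df.resample("60min").mean()
steps_per_year_hourly = 24 * 365      # forecast horizon at hourly frequency
assert len(hourly) == len(df) // 6
```

<p>At hourly resolution the forecast call becomes <code>results.get_forecast(steps=8760)</code> instead of 52,416 steps, which is far cheaper to compute and store.</p>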
|
<python>
|
2024-05-10 20:38:36
| 0
| 314
|
Jahidul Hasan Razib
|
78,462,307
| 2,153,235
|
How best to avoid overwriting library source files from Spyder?
|
<p>I have Anaconda installed with a Python 3.9 environment under my personal user account on Windows. I use <code>runfile</code> to run a <code>*.py</code> file from the Spyder console. I normally have the Spyder editor closed and use Cygwin and Vim tabs/subwindows to do anything text related.</p>
<p>When <code>runfile</code> encounters an error, the Spyder editor window opens. I usually close it right away, but sometimes, I'm just trying to maintain mental flow in getting a code pattern to work via trial and error. At some point, an error caused the following file to open:</p>
<pre><code>/c/Users/User.Name/AppData/Local/anaconda3/envs/py39/Lib/site-packages/pandas/core/reshape/merge.py
</code></pre>
<p>This creates a hazard in that I may inadvertently make some changes in <code>merge.py</code>, e.g., if keyboard focus is not where I thought it was. However, I would never deliberately <em>save</em> any changes to the source file in a library.</p>
<p>It seems like changes were made anyway, and I <em>assume</em> that I made them. I don't know how they got saved. Maybe when I pressed <code>Ctrl+D</code> in the Spyder console to clean out the Variable Explorer, it saves the file to which the text editor is opened (I'm not really sure). The result is that <code>merge.py</code> has a missing block of code</p>
<pre><code>if rk is not None:
<...Here belongs the missing code...>
else:
# work-around for merge_asof(right_index=True)
right_keys.append(right.index)
if lk is not None and lk == rk: # FIXME: what about other NAs?
# avoid key upcast in corner case (length-0)
lk = cast(Hashable, lk)
</code></pre>
<p>Fortunately, I was able to use Cygwin's <code>find</code> to locate an almost identical <code>merge.py</code>:</p>
<pre><code>/c/Users/User.Name/AppData/Local/anaconda3/Lib/site-packages/pandas/core/reshape/merge.py
</code></pre>
<p>To prevent this from happening again, I am tempted make all files read-only by issuing one of the following in <code>/c/Users/User.Name/AppData/Local/anaconda3</code>:</p>
<pre><code>chmod -R u-w *
find * -type f -print0 | xargs -0 chmod u-w
</code></pre>
<p>However, I do need to update Anaconda or install packages. Even if I didn't, I am not sure if Anaconda, Conda, or Python needs write access. Or any of the packages that I installed to get Apache Spark functionality. I am relatively new to Python, Anaconda, and Spark.</p>
<p>What is best practice for avoiding what I assume is inadvertent modification of library source files?</p>
|
<python><anaconda><spyder>
|
2024-05-10 19:59:26
| 0
| 1,265
|
user2153235
|
78,462,178
| 480,118
|
Routing submodule functions using FastAPI's APIRouter
|
<p>I have the following in my <code>main.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, Request
from fastapi.responses import HTMLResponse
from fastapi.middleware.gzip import GZipMiddleware
from fastapi.templating import Jinja2Templates
from fastapi.staticfiles import StaticFiles
from my_app.web.views import default
app = FastAPI()
app.add_middleware(GZipMiddleware, minimum_size=500)
app.include_router(default.router)
#app.include_router(data_mgr_view.router)
app.mount("/static", StaticFiles(directory="static"), name="static")
app.mount("/templates", Jinja2Templates(directory="templates"), name="templates")
@app.get('/test')
async def test():
return "test successful"
</code></pre>
<p>This works fine. I can hit the URL at <code>localhost:5000/test</code> and it returns the expected string.
Now I have this file <code>default.py</code> whose handlers I want to be based off the root:</p>
<pre><code>import sys, traceback
import pandas as pd
from fastapi import APIRouter, Request, Query
from my_app.web.models.response_data import ResponseData
from my_app.data.providers.internals_provider import MktInternalsProvider
from my_app.config import tmplts
router = APIRouter(prefix="")
@router.route('/')
async def root(request: Request):
context = {"title":"Playground", "content":f"Place for my charts, studies, etc..."}
return tmplts.TemplateResponse(request=request, name="index.html", context=context)
@router.route('/test2')
async def test2():
return "test2 success"
</code></pre>
<p>The first method works fine when I hit <code>localhost:5000/</code>. The second method throws an exception when I hit <code>localhost:5000/test2</code>:</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/usr/local/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.11/site-packages/starlette/middleware/gzip.py", line 24, in __call__
await responder(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/middleware/gzip.py", line 44, in __call__
await self.app(scope, receive, self.send_with_gzip)
File "/usr/local/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 756, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 776, in app
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 297, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/usr/local/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/lib/python3.11/site-packages/starlette/routing.py", line 72, in app
response = await func(request)
^^^^^^^^^^^^^
TypeError: test2() takes 0 positional arguments but 1 was given
</code></pre>
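<p>A plausible reading of the traceback (hedged, but it matches the last frame): <code>@router.route</code> is the low-level Starlette registration, which always invokes the endpoint as <code>func(request)</code>, while the FastAPI decorators (<code>@router.get</code> etc.) inspect the signature and inject only declared parameters. The first handler works because it accepts <code>request</code>; <code>test2()</code> declares nothing. A framework-free model of the difference:</p>

```python
import asyncio
import inspect

async def test2():  # endpoint declaring no parameters
    return "test2 success"

def starlette_style(func, request):
    # router.route / add_route path: always passes the request through
    return func(request)

def fastapi_style(func, request):
    # simplified model of @router.get: inject only declared parameters
    params = inspect.signature(func).parameters
    return func(request) if "request" in params else func()

try:
    starlette_style(test2, object())
except TypeError as exc:
    print(exc)  # test2() takes 0 positional arguments but 1 was given

assert asyncio.run(fastapi_style(test2, object())) == "test2 success"
```

<p>So switching both decorators to the FastAPI style — <code>@router.get('/')</code> and <code>@router.get('/test2')</code> — is most likely the fix.</p>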
|
<python><fastapi><starlette>
|
2024-05-10 19:24:34
| 2
| 6,184
|
mike01010
|
78,462,165
| 19,299,757
|
How to return multiple lists in a Python return statement
|
<p>I have a Python block like this, and I want to return 2 lists in the return statement.</p>
<p>I currently have <code>return trade_result_list + blotter_result_list</code>, but this shows an error "Expected type 'list[TradeResult]' (matched generic type 'list[_T]'), got 'list[BlotterResults]' instead" on the line with the return statement.</p>
<p>Not sure what's wrong here. Any help is much appreciated.</p>
<pre><code>@dataclass(frozen=True, eq=False, order=False)
class TradeResult:
trade_ids: Optional[list[str]]
aIds: Optional[list[int]]
statuses: list[str]
messages: Optional[list[str]]
@dataclass(frozen=True, eq=False, order=False)
class BlotterResults:
trade_id: str
s_id: Optional[int]
portfolio_id: Optional[int]
ptxn_type: Optional[str]
class ExcelReportCreator:
def __init__(self):
pass
def _correlate_results(self, ptxn_match_results: List[PtxnFinderResult], ptxn_match_results_blotter: List[PtxnFinderResultForBlotter]) -> List[Union[TradeResult, BlotterResults]]:
trade_result_list: list[TradeResult] = list()
blotter_result_list: list[BlotterResults] = list()
results_list_for_blotter_and_bulk: List[Union[TradeResult, BlotterResults]] = list()
trade_id_list = []
a_Id_list = []
status_list = []
status_msg_list = []
for record in ptxn_match_results:
for txn_record in record.txn_record.get('records', []):
external_pks = txn_record['recordBody']['externalPks']
trade_id = next((pk['value'] for pk in external_pks if pk['key'] ==
'tradeId'), None)
trade_id_list.append(trade_id)
a_id = next((pk['value'] for pk in external_pks if pk['key'] == 'aId'),
None)
a_Id_list.append(a_id)
status = txn_record.get('status', None)
status_value = status[0].get('status')
status_list.append(status_value)
message = status[0].get('message', None)
status_msg_list.append(message)
trade_result_list.append(TradeResult(
trade_ids=trade_id_list,
aIds=a_Id_list,
statuses=status_list,
messages=status_msg_list
))
for i, r in enumerate(ptxn_match_results_blotter):
cur_result: PtxnFinderResultForBlotter = r
if cur_result.txn_record is not None:
s_id = cur_result.txn_record['sId']['id']
portfolio_id = cur_result.txn_record['portfolioId']['id']
ptxn_type = cur_result.txn_record['type']
else:
s_id = None
portfolio_id = None
ptxn_type = None
blotter_result_list.append(BlotterResults(
trade_id=cur_result.trade_id,
s_id=s_id,
portfolio_id=portfolio_id,
ptxn_type=ptxn_type
))
return trade_result_list + blotter_result_list
def create_report(
self,
config: ConfigValues,
report_file_name: str,
ptxn_match_results: list[PtxnFinderResult],
ptxn_match_results_blotter: list[PtxnFinderResultForBlotter],
trade_file_name: str,
ptxn_invalid_rows_list: list,
invalid_rows_result: Union[int, list[InvalidTradeRunDetails]]) -> Path:
trade_result_list: list[TradeResult] = self._correlate_results(
ptxn_match_results=ptxn_match_results,
ptxn_match_results_blotter=ptxn_match_results_blotter)
</code></pre>
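<p>For reference, the message is a static-typing complaint, not a runtime error: <code>list[TradeResult] + list[BlotterResults]</code> has no single element type the checker can infer from the declared locals. Two hedged sketches that keep the checker happy, using simplified stand-in dataclasses:</p>

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

@dataclass(frozen=True)
class TradeResult:       # simplified stand-in for the real dataclass
    trade_id: str

@dataclass(frozen=True)
class BlotterResults:    # simplified stand-in
    trade_id: str

Result = Union[TradeResult, BlotterResults]

# Option 1: declare the combined list with the union type up front and
# extend it, so `+` never has to merge two differently-typed lists.
def combined(trades: List[TradeResult], blotters: List[BlotterResults]) -> List[Result]:
    out: List[Result] = []
    out.extend(trades)
    out.extend(blotters)
    return out

# Option 2: return the two lists as a tuple and unpack at the call site,
# which keeps each element type precise.
def separate(trades: List[TradeResult], blotters: List[BlotterResults]) -> Tuple[List[TradeResult], List[BlotterResults]]:
    return trades, blotters

assert len(combined([TradeResult("t1")], [BlotterResults("t2")])) == 2
```

<p>Option 2 is often the cleaner choice when the two result kinds are consumed separately, as <code>create_report</code> appears to do.</p>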
|
<python>
|
2024-05-10 19:22:29
| 2
| 433
|
Ram
|
78,461,881
| 2,475,195
|
Pandas keep every nth row with special rule
|
<p>For example, I want to keep every 3rd row, but I must keep numbers divisible by 3 (or some special rule like that). When I see a number divisible by 3, that restarts the count, meaning I will start counting to 3 from there, unless I see another value divisible by 3. Example given below:</p>
<pre><code>import pandas as pd
df = pd.DataFrame.from_dict({'x': [0, 1, 2, 3, 4, 5, 7, 8, 9, 11, 12, 13, 14, 17, 20, 23]})
filtered = pd.DataFrame.from_dict({'x': [0, 3, 7, 9, 12, 17]}) # this is the desired dataframe
print (df, '\n\n--------------\n\n', filtered)
x
0 0
1 1
2 2
3 3
4 4
5 5
6 7
7 8
8 9
9 11
10 12
11 13
12 14
13 17
14 20
15 23
--------------
x
0 0
1 3
2 7
3 9
4 12
5 17
</code></pre>
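<p>As stated, the rule seems to be: keep a value when it is divisible by 3 <em>or</em> when it is the 3rd value since the last kept one, and every kept value restarts the count. A sketch of that reading (hedged, since the rule is described informally):</p>

```python
import pandas as pd

def keep_rule(values, n=3, divisor=3):
    # Keep a value if it is divisible by `divisor` or it is the n-th
    # value since the last kept one; either way, restart the count.
    kept, count = [], 0
    for v in values:
        count += 1
        if v % divisor == 0 or count == n:
            kept.append(v)
            count = 0
    return kept

df = pd.DataFrame({'x': [0, 1, 2, 3, 4, 5, 7, 8, 9, 11, 12, 13, 14, 17, 20, 23]})
filtered = df[df['x'].isin(keep_rule(df['x']))].reset_index(drop=True)
assert filtered['x'].tolist() == [0, 3, 7, 9, 12, 17]
```

<p>Because the count depends on which rows were previously kept, the scan is inherently sequential; a plain loop like this is usually clearer than trying to vectorize it.</p>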
|
<python><pandas><dataframe><subsampling>
|
2024-05-10 18:06:44
| 1
| 4,355
|
Baron Yugovich
|
78,461,790
| 2,671,688
|
How to split a NumPy array such that lengths of subarrays are evenly distributed?
|
<p>I'm new to NumPy and still figuring out what's easy to do with built-ins, vs. rolling my own.</p>
<p>I'm trying to split an array, but have the unevenness in subarrays be distributed evenly throughout the result.</p>
<p>To illustrate, if I run the following in numpy:</p>
<pre class="lang-py prettyprint-override"><code>a = np.arange(17)
for c in [3, 5, 10]:
chunks = np.array_split(a, c)
sizes = np.array([x.size for x in chunks])
print(f'{c} chunks sizes: {sizes}')
</code></pre>
<p>The output I get is:</p>
<pre><code>3 chunks sizes: [6 6 5]
5 chunks sizes: [4 4 3 3 3]
10 chunks sizes: [2 2 2 2 2 2 2 1 1 1]
</code></pre>
<p>I'd like it to set the sizes gradually to adjust for unevenness, rather than greedily. In other words, I'd like the output to look more like:</p>
<pre><code>3 chunks sizes: [6 5 6]
5 chunks sizes: [3 4 3 4 3]
10 chunks sizes: [2 2 1 2 2 1 2 2 1 2]
</code></pre>
<p>Is there a clean way to do this with NumPy builtins, or is the only good way to roll my own (rounding up/down as needed)? From what I've found in the docs so far, I most likely have to roll my own.</p>
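<p>As far as I can tell there is no direct built-in, but rolling your own is short: round evenly spaced fractional boundaries to integers and split there, so short and long chunks interleave (the exact pattern may differ slightly from the example above because of rounding, but the unevenness is spread out rather than clustered at the end):</p>

```python
import numpy as np

def balanced_split(a, c):
    # Round c+1 evenly spaced fractional boundaries to integers and cut
    # the array there; chunk sizes then differ by at most 1, interleaved.
    bounds = np.round(np.linspace(0, len(a), c + 1)).astype(int)
    return [a[bounds[i]:bounds[i + 1]] for i in range(c)]

a = np.arange(17)
for c in [3, 5, 10]:
    print(c, [chunk.size for chunk in balanced_split(a, c)])
```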
|
<python><numpy>
|
2024-05-10 17:43:07
| 1
| 701
|
user2671688
|
78,461,648
| 13,366,967
|
Pylance reportArgumentType with Pydantic’s BeforeValidator
|
<p>I’m writing a <code>pydantic</code> type similar to <code>ImportString</code> called <code>ImportPlugin</code> which imports and optionally calls objects and returns the call result. For example:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Annotated, TypeVar
from pydantic import BaseModel, BeforeValidator
class MyModel(BaseModel):
length: ImportPlugin[int]
MyModel(length={"obj": "builtins.len", "args": ([1,2,3],)}).length  # 3
</code></pre>
<p>My <code>ImportPlugin</code> type is defined like so:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, Dict  # also needed for the signature below

T = TypeVar('T')
def generate_plugin(plugin_dict: Dict[str, Any]):
return _Plugin.model_validate(plugin_dict).generate()
ImportPlugin = Annotated[T, BeforeValidator(generate_plugin)]
</code></pre>
<p>Where the <code>_Plugin</code> class is a generic <code>BaseModel</code> that handles importing and calling the object.</p>
<p>This works fine, but Pylance draws a red squiggly line under the dictionary input to <code>MyModel</code>, flagging it as a <code>reportArgumentType</code> error. Is there a way to solve this issue? Pylance should understand that the input type should be what the <code>BeforeValidator</code> expects, not <code>T</code>.</p>
|
<python><python-typing><pydantic><pyright>
|
2024-05-10 17:07:58
| 0
| 394
|
mohamed martini
|
78,461,566
| 869,180
|
Flask app running on AWS App Runner returns 502 Bad Gateway
|
<p>I have a Flask application running in AWS App Runner with Gunicorn. It was running fine, but I added a new feature that includes an API endpoint which calls several other endpoints and takes around 2-3 minutes to return a response.</p>
<p>It works fine locally, but when deployed it returns 502 Bad Gateway every time I call this API. The logs seem fine: no errors, and I can't see any differences from running locally. All the other endpoints work fine.</p>
<p>Dockerfile</p>
<pre><code>FROM python:3.11
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN pip cache purge
RUN pip install --upgrade pip
# Install dependencies:
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Run the application:
COPY . .
CMD ["gunicorn", "--config", "gunicorn_config.py", "app:app"]
</code></pre>
<p>Gunicorn_config.py</p>
<pre><code>import os
workers = int(os.environ.get('GUNICORN_PROCESSES', '2'))
threads = int(os.environ.get('GUNICORN_THREADS', '4'))
bind = os.environ.get('GUNICORN_BIND', '0.0.0.0:8080')
#timeout = int(os.environ.get('GUNICORN_TIMEOUT', '600'))
forwarded_allow_ips = '*'
secure_scheme_headers = { 'X-Forwarded-Proto': 'https' }
</code></pre>
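<p>One hedged observation: App Runner enforces its own request timeout (documented as 120 seconds and, to my knowledge, not configurable), so raising the Gunicorn <code>timeout</code> alone would not help a 2-3 minute request. The usual pattern is to return immediately and let the client poll for the result. A framework-free sketch of that job pattern (names like <code>start_job</code> are illustrative, not from any library):</p>

```python
import threading
import time
import uuid

jobs = {}  # in-memory job store; a real app would use a database or queue

def start_job(work, *args):
    # Kick the slow work off in a background thread and return a job id
    # immediately, so the HTTP response stays well under any proxy timeout.
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "running", "result": None}

    def runner():
        jobs[job_id]["result"] = work(*args)
        jobs[job_id]["status"] = "done"

    threading.Thread(target=runner, daemon=True).start()
    return job_id

def poll(job_id):
    # A second endpoint returns this until status == "done".
    return jobs[job_id]

job = start_job(lambda: sum(range(1000)))
while poll(job)["status"] != "done":
    time.sleep(0.01)
print(poll(job)["result"])
```

<p>In Flask terms, one route would call <code>start_job</code> and return the id with a 202 status, and a second route would wrap <code>poll</code>.</p>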
|
<python><amazon-web-services><flask><gunicorn><amazon-app-runner>
|
2024-05-10 16:51:52
| 1
| 1,068
|
andoni90
|
78,461,564
| 2,836,338
|
How to properly rotate 3D numpy array to uniformly sample sphere?
|
<p>I have a 3D numpy array with density values. I would like to rotate this array around the origin to uniformly sample the sphere as closely as possible. I found some answers here that allow me to get close (<a href="https://stackoverflow.com/a/44164075/2836338">Evenly distributing n points on a sphere</a>, <a href="https://stackoverflow.com/questions/59738230/">Apply rotation defined by Euler angles to 3D image, in python</a>).</p>
<p>However, when rotating the array, it does not seem to sample the sphere correctly: it leaves out about half of it and lacks sampling at the poles. I can't seem to figure out what my issue is. I'm guessing it has something to do with the conversion from spherical to Euler to rotation matrix. Any help would be appreciated.</p>
<pre><code>import sys
import numpy as np
from numpy.testing import assert_allclose
from scipy.spatial.transform import Rotation
from scipy import ndimage
import matplotlib.pyplot as plt
def generate_phi_theta_uniform_sphere_by_spiral_method(num_pts):
#taken from https://stackoverflow.com/a/44164075/2836338
indices = np.arange(0, num_pts, dtype=float) + 0.5
phi = np.arccos(1 - 2 * indices / num_pts)
theta = np.pi * (1 + 5 ** 0.5) * indices
return phi, theta
def rotate_density(rho, alpha, beta, gamma, order=1):
#https://nbviewer.jupyter.org/gist/lhk/f05ee20b5a826e4c8b9bb3e528348688
#https://stackoverflow.com/questions/59738230
# create meshgrid
dim = rho.shape
ax = np.arange(dim[0])
ay = np.arange(dim[1])
az = np.arange(dim[2])
coords = np.meshgrid(ax, ay, az)
# stack the meshgrid to position vectors, center them around 0 by substracting dim/2
xyz = np.vstack([coords[0].reshape(-1) - float(dim[0]) / 2, # x coordinate, centered
coords[1].reshape(-1) - float(dim[1]) / 2, # y coordinate, centered
coords[2].reshape(-1) - float(dim[2]) / 2]) # z coordinate, centered
# create transformation matrix
r = Rotation.from_euler('zyx', [alpha, beta, gamma], degrees=False)
mat = r.as_matrix()
# apply transformation
transformed_xyz = np.dot(mat, xyz)
# extract coordinates
x = transformed_xyz[0, :] + float(dim[0]) / 2
y = transformed_xyz[1, :] + float(dim[1]) / 2
z = transformed_xyz[2, :] + float(dim[2]) / 2
x = x.reshape((dim[1],dim[0],dim[2]))
y = y.reshape((dim[1],dim[0],dim[2]))
z = z.reshape((dim[1],dim[0],dim[2])) # reason for strange ordering: see next line
# the coordinate system seems to be strange, it has to be ordered like this
new_xyz = [y, x, z]
# sample
new_rho = ndimage.map_coordinates(rho, new_xyz, order=order)
return new_rho
def spherical_to_euler(phi, theta):
"""Converts spherical coordinates (phi, theta) to Euler angles (zyx convention).
Args:
phi: Azimuth angle in radians (0 to 2*pi).
        theta: Colatitude angle in radians (0 to pi or extended range).
Returns:
alpha, beta, gamma: Euler angles (zyx convention) in radians.
"""
phi = np.atleast_1d(phi)
theta = np.atleast_1d(theta)
# Check for theta close to pi (tolerance can be adjusted)
tolerance = 1e-6
theta_near_pi = np.abs(theta - np.pi) < tolerance
# Handle negative theta values (wrap to positive range)
theta = np.where(~theta_near_pi, np.mod(theta + np.pi, 2 * np.pi) - np.pi, theta)
# Alpha (rotation around Z)
alpha = phi
# Beta (rotation around Y) with handling for theta at pi
beta = np.pi / 2 - theta
beta = np.where(beta < -np.pi, beta + 2 * np.pi, beta) # Adjust for other negative beta
# Set beta to pi/2 for theta close to pi (South Pole)
beta = np.where(theta_near_pi, np.pi / 2, beta)
# Gamma (rotation around X) - set to zero for ZYX convention
gamma = np.zeros(phi.shape)
return alpha, beta, gamma
def test_spherical_to_euler():
# Test cases with various phi and theta values
# each tuple formatted as (phi, theta, alpha_expected, beta_expected, gamma_expected)
test_data = [
# Standard case (ZYX convention)
(np.pi/3, np.pi/4, np.pi/3, np.pi/2 - np.pi/4, 0),
# Edge cases (theta=0, pi)
(np.pi/3, 0, np.pi/3, np.pi/2, 0),
(np.pi/3, np.pi, np.pi/3, np.pi/2, 0),
# Edge cases (phi=0, 2*pi)
(0, np.pi/4, 0, np.pi/2 - np.pi/4, 0),
(2*np.pi, np.pi/4, 2*np.pi, np.pi/2 - np.pi/4, 0),
# Additional test cases
(np.pi/6, np.pi/2, np.pi/6, 0, 0),
(5*np.pi/6, np.pi/3, 5*np.pi/6, np.pi/2 - np.pi/3, 0),
]
for phi, theta, alpha_expected, beta_expected, gamma_expected in test_data:
alpha, beta, gamma = spherical_to_euler(phi, theta)
# Check results
np.testing.assert_allclose(alpha, alpha_expected)
np.testing.assert_allclose(beta, beta_expected)
np.testing.assert_allclose(gamma, gamma_expected)
print("Unit tests passed!")
test_spherical_to_euler()
num_pts = 1000 #number of samples on unit sphere
phi, theta = generate_phi_theta_uniform_sphere_by_spiral_method(num_pts)
#convert to cartesian coordinates for plotting for the spiral method
x_spiral, y_spiral, z_spiral = np.cos(theta) * np.sin(phi), np.sin(theta) * np.sin(phi), np.cos(phi)
#make a density map with only one nonzero pixel
n = 32
rho = np.zeros((n,n,n))
rho[n//4, n//4, n//4] = 1
x_ = np.linspace(-n/2.,n/2.,n)
x,y,z = np.meshgrid(x_,x_,x_,indexing='ij')
#convert spherical to euler
alpha, beta, gamma = spherical_to_euler(phi=phi, theta=theta)
#transform the array for each orientation
rho_rotated = np.copy(rho)
for i in range(num_pts):
#since its only one voxel, to make plotting easy just add up all the arrays
#to show all the voxels on one scatter plot easily
rho_rotated += rotate_density(rho, alpha[i], beta[i], gamma[i], order=0)
fig = plt.figure(figsize=plt.figaspect(0.5))
#plot the points using the spiral method
ax0 = fig.add_subplot(1, 2, 1, projection='3d')
ax0.set_box_aspect((np.ptp(x_spiral), np.ptp(y_spiral), np.ptp(z_spiral)))
ax0.scatter(x_spiral, y_spiral, z_spiral)
ax0.title.set_text('Spiral Method')
ax0.view_init(elev=45, azim=30)
#plot the voxel points for the array rotation method
ax1 = fig.add_subplot(1, 2, 2, projection='3d')
ax1.set_box_aspect((np.ptp(x), np.ptp(y), np.ptp(z)))
ax1.scatter(x[rho_rotated>0], y[rho_rotated>0], z[rho_rotated>0])
ax1.set_xlim([x.min(),x.max()])
ax1.set_ylim([y.min(),y.max()])
ax1.set_zlim([z.min(),z.max()])
ax1.title.set_text('3D Array Rotation Method')
ax1.view_init(elev=45, azim=30)
plt.savefig("points_sampled_on_sphere_by_spiral_method_or_array_rotation.png",dpi=150)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/JpxApp92.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpxApp92.png" alt="enter image description here" /></a></p>
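<p>One hedged suggestion for debugging: the spherical-to-Euler conversion can be sidestepped entirely by building each rotation as "tilt about y by the polar angle, then spin about z by the azimuth", which by construction carries the +z axis onto the sampled direction. A small self-check of that construction (same <code>scipy</code> <code>Rotation</code> API as in the question):</p>

```python
import numpy as np
from scipy.spatial.transform import Rotation

def rotation_to_direction(phi, theta):
    # Extrinsic 'yz': first R_y(phi) tilts +z to (sin phi, 0, cos phi),
    # then R_z(theta) spins it to the target azimuth.
    return Rotation.from_euler('yz', [phi, theta])

phi, theta = 0.7, 1.9   # arbitrary polar / azimuthal angles
target = np.array([np.cos(theta) * np.sin(phi),
                   np.sin(theta) * np.sin(phi),
                   np.cos(phi)])
rotated_z = rotation_to_direction(phi, theta).apply([0.0, 0.0, 1.0])
assert np.allclose(rotated_z, target)
```

<p>Feeding these rotation objects into <code>rotate_density</code> (instead of the <code>spherical_to_euler</code> output) would remove the convention mismatches as a suspect; this is a sketch and has not been tested against the full pipeline above.</p>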
|
<python><numpy><scipy><rotation><ndimage>
|
2024-05-10 16:51:32
| 0
| 363
|
tomerg
|
78,461,523
| 6,431,509
|
Python unresolved reference with inner class
|
<p>I'm learning inner classes in Python, and PyCharm warns me</p>
<blockquote>
<p>unresolved reference Clienta</p>
</blockquote>
<p>in this code</p>
<blockquote>
<p>model = <em><strong>Clienta</strong></em>.OverviewSchema.Overview</p>
</blockquote>
<pre><code>class Clienta(Document):
class Overview(EmbeddedDocument):
consume = IntField(required=True)
purchaseToken = ListField(StringField(), )
purchase_token = StringField(required=True, unique=True)
clientId = StringField(required=True)
overview = EmbeddedDocumentField(Overview, )
class OverviewSchema(ModelSchema):
class Meta:
model_skip_values = ()
model = Clienta.Overview #unresolved reference Clienta
# model = Overview #unresolved reference Overview
class ClientaSchema(ModelSchema):
class Meta:
# model_skip_values = ()
model = Clienta
</code></pre>
<p>How do I fix it? I know that moving <code>OverviewSchema</code> outside will solve the problem, but I'm wondering why <code>OverviewSchema</code> can't find the parent class or the parent's inner class (<code>Overview</code>).</p>
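<p>For context, this is not just a PyCharm quirk: at the moment a nested class body executes, the outer class name is not bound yet, so the reference fails at runtime too. A minimal reproduction without mongoengine:</p>

```python
# A class's body runs before its name is assigned in the enclosing scope,
# so a nested class body cannot refer to the still-undefined outer class.
failed = False
try:
    class Outer:
        class Inner:
            parent = Outer      # NameError: 'Outer' is not defined yet
except NameError:
    failed = True
assert failed

# The usual workaround: attach the attribute after the outer class exists.
class Outer2:
    class Inner2:
        pass

Outer2.Inner2.parent = Outer2   # fine: 'Outer2' is bound by now
assert Outer2.Inner2.parent is Outer2
```

<p>The sibling case (<code>model = Overview</code>) fails for a related reason: class bodies do not form enclosing scopes for name lookup, so names defined in <code>Clienta</code>'s body are not visible inside <code>Meta</code>'s body.</p>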
|
<python>
|
2024-05-10 16:42:40
| 0
| 521
|
candrwow
|
78,461,448
| 1,113,579
|
Why is this simple Python code resulting in an empty dict?
|
<p>Why is this simple Python code resulting in an empty dict?</p>
<pre><code>from collections import defaultdict
dict1 = defaultdict(list)
dict1['old_key'] = [{'name': 'A'}]
dict1['old_key'].extend(dict1.pop('old_key'))
print(dict1)
</code></pre>
<p>My understanding of how this should have worked:</p>
<p>The pop method will return the list associated with the <code>old_key</code> which will be passed to the extend method.
The extend method will set the <code>old_key</code> again; since it won't exist by now, it will initialize it with an empty list and extend that empty list with the list received from the pop method.</p>
<p>So this should have resulted in <code>{'old_key': [{'name': 'A'}]}</code>, but what I am getting is: <code>{}</code></p>
<p><strong>Context of the actual code:</strong></p>
<p>In my actual code, I need to change the key of the dict, so I am popping the value of the old key and extending the new key with those values.</p>
<p>However, sometimes, it so happens that the new key that is generated based on some conditions is same as the old key, and the code having the extend and pop in same line is behaving weird.</p>
<p>Of course, I can fix it either by splitting into 2 lines: pop first and store the result in a value, and then extend.</p>
<p>Or, before setting new key, I can check if the new key is already in the dict or same as the key being popped.</p>
<p>But I want to understand what is causing the weird behaviour of the code.</p>
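<p>The behaviour follows from Python's left-to-right evaluation: the object for the method lookup — <code>dict1['old_key']</code> — is evaluated <em>before</em> the argument <code>dict1.pop('old_key')</code>. So the existing list is fetched first, then the key is popped, and <code>extend</code> mutates a list the dict no longer holds. Unrolled:</p>

```python
from collections import defaultdict

d = defaultdict(list)
d['old_key'] = [{'name': 'A'}]

lst = d['old_key']         # step 1: subscript runs first, returns the list
popped = d.pop('old_key')  # step 2: key removed; same list object returned
assert popped is lst
lst.extend(popped)         # step 3: mutates a list no longer in the dict
assert dict(d) == {}       # hence the surprising empty defaultdict
assert lst == [{'name': 'A'}, {'name': 'A'}]  # the orphaned list doubled
```

<p>Had the subscript run <em>after</em> the pop, the <code>defaultdict</code> would have re-created the key as expected; splitting the statement (pop first, then extend) restores that order explicitly.</p>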
|
<python>
|
2024-05-10 16:28:17
| 4
| 1,276
|
AllSolutions
|
78,460,988
| 1,123,336
|
How to shut down the resource tracker after running Python's ProcessPoolExecutor
|
<p>I have been using the ProcessPoolExecutor in concurrent.futures for a while, but I recently noticed that jobs with concurrent processes, which were submitted to a cluster batch queue, were never properly shut down after completion. The reason appears to be that, although the parallel processes completed succesfully, a subprocess created by ProcessPoolExecutor is still active even after the pool itself is shut down.</p>
<p>I know that people expect a minimal reproducible example, so here is some code.</p>
<pre><code>import time
from concurrent.futures import ProcessPoolExecutor, as_completed
def test_function(i):
time.sleep(20)
return i
def test_pool():
with ProcessPoolExecutor(max_workers=6) as executor:
futures = []
result = []
for i in range(6):
futures.append(executor.submit(test_function, i))
for future in as_completed(futures):
result.append(future.result())
print(len(result))
if __name__ == '__main__':
test_pool()
</code></pre>
<p>The complication is that, if you run this from the command line, the code will complete as expected. To see the problem, you have to embed the functions in an importable module and run the test_pool function in a debugger. I ran this in an interactive IPython shell within the <a href="https://nexpy.github.io/nexpy/" rel="nofollow noreferrer">NeXpy application</a>. In VS Code, I see the following:</p>
<p>These are the processes before running test_pool.
<a href="https://i.sstatic.net/iALFN9j8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iALFN9j8.png" alt="enter image description here" /></a>
These are the processes during the run. Note that there are seven additional subprocesses, one for each process in the pool plus an additional one (9812).
<a href="https://i.sstatic.net/2TUkAGM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2TUkAGM6.png" alt="enter image description here" /></a>
These are the processes after the run. The pool subprocesses have been successfully removed, leaving the additional one (9812), which causes the problem.
<a href="https://i.sstatic.net/QJqq8InZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QJqq8InZ.png" alt="enter image description here" /></a>
Further debugging shows that the additional process is a resource tracker, which is launched when ProcessPoolExecutor uses the 'spawn' method. However, it is not removed when the executor is shut down, and I can’t see what additional steps I can use to remove it. This would not be a problem except that the existence of this subprocess seems to prevent batch queues from shutting the job down.</p>
<p>So my question is whether anyone knows how to close the resource tracker process.</p>
<p>Edit (2024/5/11): It turns out that if I force the ProcessPoolExecutor to use 'fork' instead of 'spawn' as the context, the additional process is not created. I had thought that 'fork' was the default context on MacOS, but apparently not. The additional process is a <a href="https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods" rel="nofollow noreferrer">resource tracker</a>, which is only needed when using the 'spawn' mode. I have edited the question to reflect this new information.</p>
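Following the edit above, a sketch of how to request the <code>'fork'</code> start method explicitly via <code>mp_context</code>, so that no resource tracker is spawned (the <code>square</code> helper is illustrative, not the original <code>test_function</code>; note <code>'fork'</code> is unavailable on Windows and can be unsafe in processes that also use threads):

```python
# Sketch: explicitly request the 'fork' start method so no resource
# tracker process is spawned. 'fork' is unavailable on Windows and can
# be unsafe in threaded processes, so treat this as a workaround.
import multiprocessing
from concurrent.futures import ProcessPoolExecutor, as_completed

def square(i):
    return i * i

def run_pool():
    ctx = multiprocessing.get_context("fork")
    with ProcessPoolExecutor(max_workers=2, mp_context=ctx) as executor:
        futures = [executor.submit(square, i) for i in range(4)]
        return sorted(f.result() for f in as_completed(futures))

if __name__ == "__main__":
    print(run_pool())  # [0, 1, 4, 9]
```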
|
<python><python-multiprocessing><concurrent.futures>
|
2024-05-10 15:00:48
| 0
| 582
|
Ray Osborn
|
78,460,765
| 2,244,766
|
How to convert to dict for append action?
|
<p>Having something like this:</p>
<pre><code>>>> import argparse
>>> ap = argparse.ArgumentParser()
>>> ap.add_argument('-e', '--env', action='append', nargs=2, metavar=('NAME', 'VALUE'), dest='extra_env', help='set some env var to given value')
_AppendAction(option_strings=['-e', '--env'], dest='extra_env', nargs=2, const=None, default=None, type=None, choices=None, help='set some env var to given value', metavar=('NAME', 'VALUE'))
</code></pre>
<p>it kind of works:</p>
<pre><code>>>> ap.parse_args(['-e', 'SOMEENVVAR', 'SOMEVAL', '--env', 'OTHERENVVAR', 'OTHERVAL'])
Namespace(extra_env=[['SOMEENVVAR', 'SOMEVAL'], ['OTHERENVVAR', 'OTHERVAL']])
</code></pre>
<p>but I was wondering how to transform this directly into a dict. Right now I have to do it in post-processing:</p>
<pre><code>>>> args = ap.parse_args(['-e', 'SOMEENVVAR', 'SOMEVAL', '--env', 'OTHERENVVAR', 'OTHERVAL'])
>>> dict(args.extra_env)
{'SOMEENVVAR': 'SOMEVAL', 'OTHERENVVAR': 'OTHERVAL'}
</code></pre>
<p>I wanted to use the <code>type</code> argument for that, but it seems that the combination of <code>action=append</code> and <code>nargs=2</code> seems to break it; the function passed to <code>type</code> is called for each of the args in <code>nargs</code> individually:</p>
<pre><code>>>> ap.add_argument('-e', '--env', action='append', nargs=2, metavar=('NAME', 'VALUE'), dest='extra_env', help='set some env var to given value', type=lambda x: print(f'called lambda for >{x}<'))
_AppendAction(option_strings=['-e', '--env'], dest='extra_env', nargs=2, const=None, default=None, type=<function <lambda> at 0x7f904df518c8>, choices=None, help='set some env var to given value', metavar=('NAME', 'VALUE'))
>>> ap.parse_known_args(['-e', 'SOMEENVVAR','SOMEVAL', '--env', 'OTHERENVVAR','OTHERVAL'])
called lambda for >SOMEENVVAR<
called lambda for >SOMEVAL<
called lambda for >OTHERENVVAR<
called lambda for >OTHERVAL<
(Namespace(extra_env=[[None, None], [None, None]]), [])
</code></pre>
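Since `type` is applied to each of the `nargs` tokens individually, one way around this is a custom `Action` that stores the pairs straight into a dict. A sketch (the class name `StoreDictPair` is my own, not an argparse built-in):

```python
import argparse

# Sketch: a custom Action that stores NAME VALUE pairs directly into a
# dict, sidestepping the append + post-processing dance.
class StoreDictPair(argparse.Action):
    def __call__(self, parser, namespace, values, option_string=None):
        d = getattr(namespace, self.dest) or {}
        name, value = values  # nargs=2 hands us a two-element list
        d[name] = value
        setattr(namespace, self.dest, d)

ap = argparse.ArgumentParser()
ap.add_argument("-e", "--env", action=StoreDictPair, nargs=2,
                metavar=("NAME", "VALUE"), dest="extra_env", default={},
                help="set some env var to given value")

args = ap.parse_args(["-e", "SOMEENVVAR", "SOMEVAL",
                      "--env", "OTHERENVVAR", "OTHERVAL"])
print(args.extra_env)  # {'SOMEENVVAR': 'SOMEVAL', 'OTHERENVVAR': 'OTHERVAL'}
```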
|
<python><argparse>
|
2024-05-10 14:25:13
| 1
| 4,035
|
murison
|
78,460,746
| 1,209,675
|
How to copy Numpy arrays into C++ arrays quickly
|
<p>I'm trying to create a circular buffer for images in C++ that receives the images as NumPy arrays. I can store the images and retrieve them correctly, but it's extremely slow, hundreds of times slower than just doing it all in Python. I'm learning C++, but I'm not sure how to speed this up. Any help would be greatly appreciated.</p>
<p>Edit: I'm using Visual Studio and have enabled all of the speed optimizations, however they didn't seem to have any effect at all.</p>
<pre><code>void Buffer::do_nothing(py::array_t<uint8_t> &input_array, int time_code, string time_stamp) {
auto view = input_array.unchecked<3>();
if ((view.shape(0) != Buffer::frame_height) || (view.shape(1) != Buffer::frame_width) || (view.shape(2) != 3)) {
cout << "Error: Incorrect incoming array size\n";
exit(1);
}
const int channels = view.shape(2);
const int height = view.shape(0);
const int width = view.shape(1);
int t;
for (int i = 0; i < channels; i++) {
for (int j = 0; j < height; j++) {
for (int k = 0; k < width; k++) {
Buffer::frames[(Buffer::high * Buffer::frame_size) + (Buffer::channel_size * i) + (j * Buffer::frame_width) + k] = view(j, k, i);
}
}
}
}
</code></pre>
<p>Buffer::frames is an array initialized by:</p>
<pre><code>Buffer::frames = new uint8_t[circ_buffer_size * frame_height * frame_width * 3];
</code></pre>
<p>For comparison, in Python I created a list of <code>[None] * circ_buffer_size</code> then assign the frame to one of the list elements.</p>
<pre><code>import numpy as np
import time
circ_buffer_size = 10
framebuffer = [None] * circ_buffer_size
test_array = np.random.randint(low=0, high=255, size=(1080,1920, 3), dtype=np.uint8)
start = time.time()
for i in range(200):
framebuffer[i % circ_buffer_size] = test_array
print(f"Elapsed: {time.time() - start}")
</code></pre>
<blockquote>
<p>Elapsed: 0.0030002593994140625</p>
</blockquote>
<p>By comparison, my C++ / Pybind11 example takes 1.08 seconds.</p>
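On the C++ side, the usual remedy is to replace the element-by-element triple loop with one bulk copy (`std::memcpy`/`std::copy`) from the buffer pointer; note, though, that the loop above also transposes the HWC layout into channel-planar order, so a plain memcpy only applies if that transpose is dropped. The same principle in pure NumPy, as a sketch of what the fast path looks like (not the pybind11 code itself):

```python
import numpy as np

# Illustration of the bulk-copy principle: one contiguous assignment
# into a preallocated ring buffer instead of per-element copies. The
# C++ analogue is a single std::memcpy from input_array.request().ptr,
# valid only when source and destination share the same memory layout.
circ_buffer_size = 4
h, w = 1080, 1920
frames = np.empty((circ_buffer_size, h, w, 3), dtype=np.uint8)

frame = np.random.randint(0, 255, size=(h, w, 3), dtype=np.uint8)
slot = 2
frames[slot] = frame  # contiguous bulk copy, no Python-level loop

assert np.array_equal(frames[slot], frame)
```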
|
<python><c++><numpy><pybind11>
|
2024-05-10 14:22:07
| 0
| 335
|
user1209675
|
78,460,333
| 23,260,297
|
Create new columns in dataframe with values from other columns based on condition
|
<p>Let's say I have a dataframe (df) that looks like this (dots indicate more columns):</p>
<pre><code>Type Price1 Price2 Price3 Price4 Price5 ... ...
A nan 1 nan nan 2
A nan 3 nan nan 2
B nan nan 4 5 nan
B nan nan 6 7 nan
C nan 2 nan nan 1
C nan 4 nan nan 3
D 1 8 nan nan nan
D 9 6 nan nan nan
</code></pre>
<p>I need to transform this dataframe to look like this:</p>
<pre><code>Type newcol1 newcol2 ... ...
A 2 1
A 2 3
B 4 5
B 6 7
C 1 2
C 3 4
D 1 8
D 9 6
</code></pre>
<p>These are the criteria for selecting which columns get which values:</p>
<pre><code>if type == 'A'
'Price5' -> 'newcol1' && 'Price2' -> 'newcol2'
</code></pre>
<pre><code>if type == 'B'
'Price3' -> 'newcol1' && 'Price4' -> 'newcol2'
</code></pre>
<pre><code>if type == 'C'
'Price5' -> 'newcol1' && 'Price2' -> 'newcol2'
</code></pre>
<pre><code>if type == 'D'
'Price1' -> 'newcol1' && 'Price2' -> 'newcol2'
</code></pre>
<p>My logic was to use the mask function to achieve this and assign the values to two columns, then just rename them. This works, but I am handling every case separately and it is kind of messy:</p>
<pre><code>df = df.mask(df['Type'].eq('A'), df.assign(**{'Price3': df['Price5'].values, 'Price4': df['Price2'].values}))
df.rename(columns = {'Price3':'newcol1'}, inplace = True)
df.rename(columns = {'Price4':'newcol2'}, inplace = True)
</code></pre>
<p>How could I achieve this in a more efficient manner?</p>
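One way to avoid per-type masks is to drive the column choice from a mapping and let `np.select` pick the right source column row by row. A sketch on a reduced version of the data above:

```python
import numpy as np
import pandas as pd

# Sketch: drive the column choice from a mapping instead of one
# mask/assign/rename block per type.
df = pd.DataFrame({
    "Type":   ["A", "B", "C", "D"],
    "Price1": [np.nan, np.nan, np.nan, 1],
    "Price2": [1, np.nan, 2, 8],
    "Price3": [np.nan, 4, np.nan, np.nan],
    "Price4": [np.nan, 5, np.nan, np.nan],
    "Price5": [2, np.nan, 1, np.nan],
})

# Type -> (source for newcol1, source for newcol2)
mapping = {"A": ("Price5", "Price2"), "B": ("Price3", "Price4"),
           "C": ("Price5", "Price2"), "D": ("Price1", "Price2")}

conds = [df["Type"].eq(t) for t in mapping]
df["newcol1"] = np.select(conds, [df[c1] for c1, _ in mapping.values()])
df["newcol2"] = np.select(conds, [df[c2] for _, c2ocol in [(c2, None) for _, c2 in mapping.values()] for c2 in [c2ocol] if False] or [df[c2] for _, c2 in mapping.values()])

print(df[["Type", "newcol1", "newcol2"]])
```

Adding a new type then only means adding one entry to `mapping`.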
|
<python><pandas><dataframe>
|
2024-05-10 13:07:51
| 3
| 2,185
|
iBeMeltin
|
78,460,166
| 21,049,944
|
How to effectively iterate through categories
|
<p>I have a large dataframe -> lazyframe that was cut by the <a href="https://docs.pola.rs/py-polars/html/reference/expressions/api/polars.Expr.cut.html" rel="nofollow noreferrer">Expr.cut</a> function. Now I would like to iterate through these categories, but I failed to find an effective way. When I use group_by, it tells me it is not iterable (unless I collect it before the grouping). It raises:</p>
<blockquote>
<p>TypeError: 'LazyGroupBy' object is not iterable</p>
</blockquote>
<p>I really don't want to filter the whole lazyframe once for every category. How can I achieve this?</p>
<p>The code I would love to be working (namely the for cycle in the end):</p>
<pre><code>import polars as pl
import numpy as np
data = pl.LazyFrame(dict(
a = np.linspace(1,5,100),
b = np.linspace(20,30,100),
))
N_bins = 10
cutPoints = np.linspace(
data.select(pl.max("a")).collect(),
data.select(pl.min("a")).collect(),
N_bins
)
data = data.with_columns(
pl.col("a").cut(cutPoints)
)
groups = data.group_by("a")
for name, group in groups:
print( name)
print(group)
</code></pre>
|
<python><group-by><python-polars><lazyframe>
|
2024-05-10 12:39:04
| 1
| 388
|
Galedon
|
78,459,786
| 2,475,195
|
Pandas every nth row from each group
|
<p>Assume groups will have more than <code>n</code> members; I want to take every <code>n</code>th row from each group. I looked at <a href="https://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.core.groupby.GroupBy.nth.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.core.groupby.GroupBy.nth.html</a> but that only takes one row from each group.</p>
<p>For example:</p>
<pre><code> import pandas as pd
x = pd.DataFrame.from_dict({'a': [1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3], 'b': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]})
a b
0 1 1
1 1 2
2 2 3
3 2 4
4 2 5
5 3 6
6 3 7
7 3 8
8 3 9
9 3 10
10 3 11
11 3 12
</code></pre>
<p>And if we are keeping every 2nd row, this would be the desired outcome:</p>
<pre><code> a b
1 1 2
3 2 4
6 3 7
8 3 9
10 3 11
</code></pre>
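One way to get this is to number the rows within each group with `cumcount` and keep every `n`-th one; the offset `n - 1` matches the "every 2nd row" example above:

```python
import pandas as pd

# Sketch: number rows within each group with cumcount, then keep every
# n-th one (offset n-1 reproduces the desired output above).
x = pd.DataFrame({"a": [1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3],
                  "b": range(1, 13)})
n = 2
out = x[x.groupby("a").cumcount() % n == n - 1]
print(out)
```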
|
<python><pandas><dataframe><group-by>
|
2024-05-10 11:23:11
| 1
| 4,355
|
Baron Yugovich
|
78,459,479
| 1,028,024
|
Odoo onchange event for invoice_status of sale.order not firing
|
<p>Does anyone know why the onchange event as coded below is not fired as expected? It fires only once, when making a new Sale Order (before saving). The invoice_status <em>is actually changing</em> (as is confirmable in the tree view). Expected behavior is that the event is fired on every change.</p>
<pre><code>import logging
from odoo import api, fields, models, _
_logger = logging.getLogger(__name__)
class SaleOrder(models.Model):
_inherit = "sale.order"
@api.onchange('invoice_status')
def _onchange_invoice_status(self):
_logger.warning("Invoice status: %s" % self.invoice_status)
</code></pre>
|
<python><odoo><odoo-17>
|
2024-05-10 10:21:33
| 1
| 655
|
Berdus
|
78,459,338
| 6,386,056
|
Scipy.optimize.least_square sensitive to order of inputs
|
<p>The following code generates different results depending on the order of the inputs I pass in. Why is that? I'd expect least-squares optimization to reach the same result no matter what order the inputs are passed to the error-generating function. What exactly creates this discrepancy?</p>
<pre><code>import numpy as np
from scipy.optimize import least_squares
def dummy_func(Y, T):
# Define fn to optimize
def func(X):
tau = X[0]
C1 = X[1]
C2 = X[2]
C3 = X[3]
C4 = X[4]
L0 = X[5]
S1 = X[6]
exp_tt1 = np.exp(- T / tau)
f1 = (1 - exp_tt1) / (T / tau)
f2 = 2 * f1 - exp_tt1 * (T / tau)
f3 = 3 * f2 - exp_tt1 * (T / tau) ** 2
f4 = 4 * f3 - exp_tt1 * (T / tau) ** 3
f5 = 5 * f4 - exp_tt1 * (T / tau) ** 4
y_hat = L0 + S1 * f1 + C1 * f2 + C2 * f3 + C3 * f4 + C4 * f5
errs = Y - y_hat
return errs
return func
# starting point and boundaries
initial_params = np.array([ 10, 0, 0, 0, 0, 100, 0])
bounds_lower = np.array([ 3, -500, -1e3, -100, -15, 0, -1e3])
bounds_upper = np.array([ 15, 500, 1e3, 100, 15, 1e5, 1e3])
# inputs
Y = np.array([5, 6, 7])
T = np.array([1, 1.5, 1.6])
# Get min function
min_fn = dummy_func(Y, T)
# Fit
res = least_squares(min_fn,
initial_params,
bounds=(bounds_lower, bounds_upper),
ftol=1e-3,
loss="linear",
verbose=0)
# swap items 1 and 2
T_swap = T.copy()
Y_swap = Y.copy()
ix1 = 1
ix2 = 2
T_swap[[ix1, ix2]] = T[[ix2, ix1]]
Y_swap[[ix1, ix2]] = Y[[ix2, ix1]]
# Get min function
min_fn_swap = dummy_func(Y_swap, T_swap)
# Fit
res_swap = least_squares(min_fn_swap,
initial_params,
bounds=(bounds_lower, bounds_upper),
ftol=1e-3,
loss="linear",
verbose=0)
print(f"same results? {all(res.x == res_swap.x)} \n res.x: {res.x} \nres_swap.x: {res_swap.x}")
</code></pre>
<p>This produces the following output:</p>
<pre><code>same results? False
res.x: [ 3.00001277 310.88942015 300.69037515 -40.34027084 -9.85586819
43.96214131 -290.54867755]
res_swap.x: [ 10.28332721 86.26342455 -67.14921168 8.76243152 0.51657765
93.50701816 -133.10992102]
</code></pre>
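One plausible seed of the divergence, worth noting here: floating-point addition is not associative, so reordering the residuals changes the sums computed inside the finite-difference Jacobian and the trust-region steps; with a loose `ftol=1e-3` and a surface with multiple minima, those tiny differences can steer the solver to different solutions. A minimal demonstration of the non-associativity itself:

```python
# Floating-point addition is not associative, so summing the same
# numbers in a different order can give a different result.
a = [1e16, 1.0, -1e16]
left_to_right = (a[0] + a[1]) + a[2]   # the 1.0 is absorbed by 1e16
reordered     = (a[0] + a[2]) + a[1]   # cancellation first keeps the 1.0
print(left_to_right, reordered)  # 0.0 1.0
```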
|
<python><scipy><scipy-optimize><least-squares>
|
2024-05-10 09:59:29
| 1
| 667
|
Mth Clv
|
78,459,217
| 10,425,150
|
Keep columns and rows that contains "FAIL" in pandas dataframe
|
<p>I would like to keep the columns that contain the word "FAIL".</p>
<p><strong>Input data:</strong></p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: right;">Values1</th>
<th style="text-align: right;">Values2</th>
<th style="text-align: right;">Values3</th>
<th style="text-align: left;">Status1</th>
<th style="text-align: left;">Status2</th>
<th style="text-align: left;">Status3</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">1</td>
<td style="text-align: left;">PASS</td>
<td style="text-align: left;">PASS</td>
<td style="text-align: left;">FAIL</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">2</td>
<td style="text-align: left;">PASS</td>
<td style="text-align: left;">PASS</td>
<td style="text-align: left;">PASS</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">3</td>
<td style="text-align: left;">PASS</td>
<td style="text-align: left;">PASS</td>
<td style="text-align: left;">PASS</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">4</td>
<td style="text-align: left;">PASS</td>
<td style="text-align: left;">FAIL</td>
<td style="text-align: left;">PASS</td>
</tr>
</tbody>
</table></div>
<p><strong>Expected output:</strong></p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Status2</th>
<th style="text-align: left;">Status3</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">PASS</td>
<td style="text-align: left;">FAIL</td>
</tr>
<tr>
<td style="text-align: left;">FAIL</td>
<td style="text-align: left;">PASS</td>
</tr>
</tbody>
</table></div>
<p><strong>Current Output:</strong></p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Status1</th>
<th style="text-align: left;">Status2</th>
<th style="text-align: left;">Status3</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">PASS</td>
<td style="text-align: left;">PASS</td>
<td style="text-align: left;">FAIL</td>
</tr>
<tr>
<td style="text-align: left;">PASS</td>
<td style="text-align: left;">FAIL</td>
<td style="text-align: left;">PASS</td>
</tr>
</tbody>
</table></div>
<p><strong>My code:</strong></p>
<pre><code>import pandas as pd
values = range(1,5)
status_pass = ["PASS"]*len(values)
status1 = status_pass[1:]+["FAIL"]
status2 = status1[::-1]
df = pd.DataFrame({"Values1":values,"Values2":values,"Values3":values,"Status1":status_pass,"Status2":status1,"Status3":status2})
# drop unwanted rows
words_to_keep = ["FAIL"]
df = df[df.stack().groupby(level=0).apply(
lambda x: all(x.str.contains(w, case=False).any() for w in words_to_keep))]
# Filter by column name
df = df.filter(like='Status', axis=1)
</code></pre>
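A more direct route is to build both masks from a single `eq("FAIL")` comparison: keep the rows where any cell is "FAIL" and, of those, only the columns where any cell is "FAIL". A sketch on a reduced frame:

```python
import pandas as pd

# Sketch: one eq("FAIL") comparison yields both the row mask and the
# column mask, replacing the stack/groupby approach.
df = pd.DataFrame({"Values1": [1, 2, 3, 4],
                   "Status1": ["PASS"] * 4,
                   "Status2": ["PASS", "PASS", "PASS", "FAIL"],
                   "Status3": ["FAIL", "PASS", "PASS", "PASS"]})

fail = df.eq("FAIL")
out = df.loc[fail.any(axis=1), fail.any(axis=0)]
print(out)
```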
|
<python><pandas><dataframe>
|
2024-05-10 09:37:11
| 2
| 1,051
|
Gооd_Mаn
|
78,459,119
| 9,494,140
|
How to fix the ImportError: cannot import name 'formatargspec' from 'inspect' (/usr/lib/python3.12/inspect.py). Did you mean: 'formatargvalues'?
|
<p>I'm trying to run a <code>django</code> app using <code>docker</code>. It gives me this error when I try to <code>docker-compose up --build</code>:</p>
<pre><code>water_maps | File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
water_maps | File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
water_maps | File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
water_maps | File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
water_maps | File "<frozen importlib._bootstrap_external>", line 995, in exec_module
water_maps | File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
water_maps | File "/var/www/html/demo_app/water_maps/__init__.py", line 4, in <module>
water_maps | from celery import Celery
water_maps | File "/usr/local/lib/python3.12/dist-packages/celery/__init__.py", line 17, in <module>
water_maps | from . import local # noqa
water_maps | ^^^^^^^^^^^^^^^^^^^
water_maps | File "/usr/local/lib/python3.12/dist-packages/celery/local.py", line 17, in <module>
water_maps | from .five import PY3, bytes_if_py2, items, string, string_t
water_maps | File "/usr/local/lib/python3.12/dist-packages/celery/five.py", line 7, in <module>
water_maps | import vine.five
water_maps | File "/usr/local/lib/python3.12/dist-packages/vine/__init__.py", line 8, in <module>
water_maps | from .abstract import Thenable
water_maps | File "/usr/local/lib/python3.12/dist-packages/vine/abstract.py", line 6, in <module>
water_maps | from .five import with_metaclass, Callable
water_maps | File "/usr/local/lib/python3.12/dist-packages/vine/five.py", line 364, in <module>
water_maps | from inspect import formatargspec, getargspec as _getargspec # noqa
water_maps | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
water_maps | ImportError: cannot import name 'formatargspec' from 'inspect' (/usr/lib/python3.12/inspect.py). Did you mean: 'formatargvalues'?
</code></pre>
<p>here is how my <code>Dockerfile</code> looks like :</p>
<pre><code>FROM ubuntu
RUN apt-get update
# Avoid tzdata infinite waiting bug
ARG DEBIAN_FRONTEND=noninteractive
ENV TZ=Africa/Cairo
RUN apt clean
RUN apt-get update
RUN apt-get install -y apt-utils vim curl apache2 apache2-utils git
RUN apt-get -y install python3 libapache2-mod-wsgi-py3
RUN apt -y install certbot python3-certbot-apache
RUN ln /usr/bin/python3 /usr/bin/python
RUN apt-get -y install python3-pip
RUN apt -y install software-properties-common
RUN add-apt-repository universe
RUN apt update
#Add sf to avoid ln: failed to create hard link '/usr/bin/pip': File exists
RUN ln -sf /usr/bin/pip3 /usr/bin/pip
RUN pip install --upgrade pip --break-system-packages
RUN pip install django ptvsd --break-system-packages
RUN pip uninstall web3.py --break-system-packages
RUN pip install git+https://github.com/ethereum/web3.py.git --break-system-packages
RUN apt install wait-for-it
RUN apt-get -y install gettext
RUN apt-get -y install poppler-utils
RUN apt-get -y install redis-server
RUN a2enmod headers
RUN service apache2 restart
COPY www/demo_app/water_maps/requirements.txt requirements.txt
RUN pip install -r requirements.txt --break-system-packages
ADD ./demo_site.conf /etc/apache2/sites-available/000-default.conf
EXPOSE 80 5432
WORKDIR /var/www/html/demo_app
#CMD ["apache2ctl", "-D", "FOREGROUND"]
#CMD ["python", "manage.py", "migrate", "--no-input"]
</code></pre>
<p>And here is how my <code>docker-compose.yaml</code> file looks like :</p>
<pre><code>version: "2"
services:
db:
image: postgres:14
restart: always
volumes:
- ./data/db:/var/lib/postgresql/data
- ./www/:/var/www/html
- ./www/demo_app/kml_files:/var/www/html/demo_app/kml_files
- ./www/demo_app/temp_kml_file:/var/www/html/demo_app/temp_kml_file
- ./www/demo_app/upload:/var/www/html/demo_app/upload
- ./data/log:/var/log/apache2
ports:
- '5432:5432'
environment:
- POSTGRES_DB=database_innvoentiq
- POSTGRES_USER=database_user_innvoentiq
- POSTGRES_PASSWORD=Yahoo000@
django-apache2:
build: .
container_name: water_maps
restart: always
environment:
- POSTGRES_DB=database_innvoentiq
- POSTGRES_USER=database_user_innvoentiq
- POSTGRES_PASSWORD=Yahoo000@
ports:
- 6000:80
- 6001:443
# - 80:80
# - 443:443
volumes:
- ./www/:/var/www/html
- ./www/demo_app/kml_files:/var/www/html/demo_app/kml_files
- ./www/demo_app/temp_kml_file:/var/www/html/demo_app/temp_kml_file
- ./www/demo_app/upload:/var/www/html/demo_app/upload
- ./data/log:/var/log/apache2
# - ./data/config/etc/apache2:/etc/apache2
# command: sh -c 'python manage.py migrate && python manage.py loaddata the_db.json '
command: sh -c 'wait-for-it db:5432 -- python manage.py makemigrations && python manage.py migrate && python manage.py collectstatic --noinput && python manage.py compilemessages && apache2ctl -D FOREGROUND'
# command: sh -c 'wait-for-it db:5432 -- python manage.py migrate && python manage.py loaddata last.json && apache2ctl -D FOREGROUND'
depends_on:
- db
</code></pre>
|
<python><docker>
|
2024-05-10 09:15:16
| 1
| 4,483
|
Ahmed Wagdi
|
78,458,998
| 18,949,720
|
Matplotlib interactive figures with streamlit
|
<p>I am trying to display matplotlib interactive figures in Streamlit generated pages i.e. to display the pan/zoom and mouse-location tools.</p>
<p>I know there are other ways to make figures interactive that are commonly used with streamlit, but I need to stick to matplotlib because they provide widgets that I use (rectangle selector for example).</p>
<p>Is such a thing possible?</p>
|
<python><matplotlib><streamlit><matplotlib-widget>
|
2024-05-10 08:51:04
| 1
| 358
|
Droidux
|
78,458,994
| 3,373,967
|
Plotly generated pie chart text exceed page boundary
|
<p>I am using plotly to generate a pie chart, with the textinfo placed outside of the sectors. However, when the number of sectors grows large, the textinfo is no longer shown within the figure frame.</p>
<p>A MWE is as the following</p>
<pre class="lang-py prettyprint-override"><code>import plotly.express as px
from random import randint
import pandas as pd
allData = []
for i in range(100):
ele = {}
ele['Category'] = "Category_" + str(i)
ele['number'] = i + 1
allData.append(ele)
df = pd.DataFrame(allData)
fig = px.pie(names = df['Category'], values = df['number'], hole = 0.5)
fig.update_traces(textinfo = 'label+value', textposition = "outside")
fig.show()
</code></pre>
<p>It gives me something like this. Some of the text info gets lost at the upper and lower boundaries:
<a href="https://i.sstatic.net/kcrHNtb8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kcrHNtb8.png" alt="Some Text Info Get Lost on upper and lower" /></a></p>
<p>When I zoom out the page, all the text info shows up</p>
<p><a href="https://i.sstatic.net/BXxCW1zu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BXxCW1zu.png" alt="Zoom out to show all contetents" /></a></p>
<p>But this time the font size becomes too small to read.</p>
<p>Is there a way to generate a large, full-size image with scrollbars, so that all of the information is visible?</p>
|
<python><plotly><bar-chart><image-size>
|
2024-05-10 08:49:22
| 1
| 973
|
Eric Sun
|
78,458,753
| 2,151,275
|
How to receive a file sent from flask with curl?
|
<p>Here is the server-side code with Python &amp; Flask:</p>
<pre><code>from flask import Flask, request, send_file
import io
import zipfile
import bitstring
app = Flask(__name__)
@app.route('/s/', methods=['POST'])
def return_files_tut():
try:
f = io.BytesIO()
f.write("abcd".encode())
return send_file(f, attachment_filename='a.ret.zip')
except Exception as e:
return str(e)
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>Below is the curl command:</p>
<pre><code>λ curl -X POST http://localhost:5000/s/ -o ret.tmp
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 4 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (18) transfer closed with 4 bytes remaining to read
</code></pre>
<p>How should I use curl to receive the file?</p>
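One likely culprit, independent of curl: after `f.write(...)` the `BytesIO` position sits at the end of the stream, so Flask advertises a Content-Length of 4 but streams zero bytes from the current position; rewinding with `f.seek(0)` before `send_file` should fix the truncated transfer. The position issue in isolation:

```python
import io

# Likely culprit: after write() the stream position is at the end, so
# a subsequent read() yields nothing. Rewind with seek(0) before
# handing the buffer to send_file.
f = io.BytesIO()
f.write(b"abcd")
print(f.read())   # b'' -- position is at the end of the stream
f.seek(0)
print(f.read())   # b'abcd'
```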
|
<python><web-services><flask><curl>
|
2024-05-10 07:56:55
| 1
| 1,864
|
heLomaN
|
78,458,709
| 480,118
|
FastAPI how to refer to a base/extend jina2 template
|
<p>In my old Flask app I have the following:</p>
<pre><code>{% extends "base.html" %} {% block title %}Home{% endblock %}
</code></pre>
<p>I'm converting my app to FastAPI.<br />
The folders 'static' and 'templates' are subfolders from where main.py resides.</p>
<p>main.py:</p>
<pre><code>app = FastAPI()
app.add_middleware(GZipMiddleware, minimum_size=500)
app.include_router(default.router)
#app.include_router(data_mgr_view.router)
app.mount("/static", StaticFiles(directory="static"), name="static")
app.mount("/templates", Jinja2Templates(directory="templates"), name="templates")
</code></pre>
<p>default.py:</p>
<pre><code>from alpha_insite.config import tmplts
router = APIRouter(
prefix="",
#dependencies=[Depends(get_token_header)],
responses={404: {"description": "resource not found"}},
)
@router.route('/')
async def root(request: Request):
context = {"title":"test", "content":f"Place for my charts, studies, etc..."}
return tmplts.TemplateResponse(request=request, name="index.html", context=context)
</code></pre>
<p>index.html:</p>
<pre><code>//{% extends "base.html" %} {% block title %}Home{% endblock %}
{% extends "{{ url_for('templates', path='/base.html')}}" %} {% block title %}Home{% endblock %}
</code></pre>
<p>When hitting the root of my URL, however, I see this:</p>
<pre><code> File "/usr/local/lib/python3.11/site-packages/jinja2/environment.py", line 936, in handle_exception
raise rewrite_traceback_stack(source=source)
File "templates/index.html", line 1, in top-level template code
{% extends "{{ url_for('templates', path='base.html')}}" %} {% block title %}Home{% endblock %}
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/jinja2/loaders.py", line 204, in get_source
raise TemplateNotFound(template)
jinja2.exceptions.TemplateNotFound: {{ url_for('templates', path='base.html')}}
</code></pre>
<p>I've tried both <code>/templates</code> and <code>/base.html</code> as well.
So I'm doing something wrong here. What is the proper way of doing this?</p>
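For what it's worth, `{% extends %}` takes a literal template name that the Jinja2 loader resolves itself, so `url_for()` cannot appear inside it; a plain `{% extends "base.html" %}` is resolved relative to the loader's directory. A minimal sketch of the loader resolution (using an in-memory `DictLoader` to stand in for `Jinja2Templates(directory="templates")`):

```python
from jinja2 import Environment, DictLoader

# Sketch: the loader resolves the name inside {% extends %}, so a
# plain relative name is all that is needed. DictLoader stands in for
# the directory-backed loader used by Jinja2Templates.
env = Environment(loader=DictLoader({
    "base.html": "<title>{% block title %}{% endblock %}</title>",
    "index.html": '{% extends "base.html" %}{% block title %}Home{% endblock %}',
}))
print(env.get_template("index.html").render())  # <title>Home</title>
```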
|
<python><jinja2><fastapi>
|
2024-05-10 07:45:31
| 0
| 6,184
|
mike01010
|
78,458,665
| 3,099,733
|
How to report `redefined-argument-from-local` as error in ruff?
|
<p>In order to avoid another bug caused by overriding function arguments in a for-loop, I found that the lint rule <a href="https://docs.astral.sh/ruff/rules/redefined-argument-from-local/" rel="nofollow noreferrer">redefined-argument-from-local</a> can detect this error. I tried to enable this rule by adding it to <code>pyproject.toml</code> and adding the <code>--preview</code> flag in the VS Code settings, but it is only reported as a warning, not an error.</p>
<pre class="lang-ini prettyprint-override"><code>#pyproject.toml
[tool.ruff]
line-length = 120
[tool.ruff.lint]
extend-select = ["PLR1704"]
</code></pre>
<pre class="lang-json prettyprint-override"><code># vscode workspace setting
{
"ruff.lint.args": [
"--preview"
]
}
</code></pre>
<p>Am I doing something wrong with the settings?</p>
<p>Besides, is there any other way to detect this notorious problem in Python?</p>
|
<python><pylint><ruff>
|
2024-05-10 07:34:40
| 0
| 1,959
|
link89
|
78,458,502
| 2,192,824
|
Is there a simple way in python to read and write a file from two separate threads?
|
<p>I'm not familiar with file operations and multi-threading in Python, so I'm asking here. I want to do something like this: one writer thread writes some data to a file, and another reader thread reads the data from the top of the same file; if the file is empty, the reader thread should just wait until the file has content. I think it should be easy with multithreading, but the approaches I experimented with didn't work. I would very much appreciate some code snippets. Thanks!</p>
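A minimal sketch of one way to do this with the standard library alone: the writer appends and flushes lines, while the reader tails the same file and sleeps briefly whenever no new data is available yet (a polling approach, not the only design):

```python
import os
import tempfile
import threading
import time

# Sketch: a writer thread appends lines; a reader thread tails the
# same file, sleeping briefly whenever no new data has arrived yet.
path = os.path.join(tempfile.mkdtemp(), "shared.log")
open(path, "w").close()  # create the empty file up front
results = []

def writer():
    with open(path, "a") as f:
        for i in range(3):
            f.write(f"line {i}\n")
            f.flush()          # make the line visible to the reader
            time.sleep(0.01)

def reader():
    with open(path) as f:
        while len(results) < 3:
            line = f.readline()
            if line:
                results.append(line.strip())
            else:
                time.sleep(0.005)  # file empty so far: wait and retry

t1, t2 = threading.Thread(target=writer), threading.Thread(target=reader)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # ['line 0', 'line 1', 'line 2']
```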
|
<python><python-3.x><multithreading><file>
|
2024-05-10 06:58:48
| 1
| 417
|
Ames ISU
|
78,458,418
| 2,641,038
|
How to get all attributes of a Python object `pxr.Usd.Prim` bound to a C++ object
|
<p>I am using the Python package <a href="https://pypi.org/project/usd-core/" rel="nofollow noreferrer">pxr</a> to read a <a href="https://en.wikipedia.org/wiki/Universal_Scene_Description" rel="nofollow noreferrer">USD</a> file. The problem is that I cannot find enough information to fully understand how to work with the objects defined in <code>pxr</code>.</p>
<h2>Example code</h2>
<p>I am trying to inspect a Python object <code>pxr.Usd.Prim</code>, which is a binding of the C++ object <code>UsdPrim</code>. The problem is that I cannot get all the attributes of the Python object.</p>
<pre class="lang-py prettyprint-override"><code>usdc_file = "./input/20240506_145520.usdc"
stage = pxr.Usd.Stage.Open(usdc_file)
</code></pre>
<p>The <code>stage</code> has the type <code><class 'pxr.Usd.Stage'></code>. I can traverse <code>stage</code> as follows:</p>
<pre class="lang-py prettyprint-override"><code>for prim in stage.Traverse():
print("prim", type(prim), prim)
</code></pre>
<p>The result is</p>
<pre><code>prim: <class 'pxr.Usd.Prim'> Usd.Prim(</_20240506_145520>)
prim: <class 'pxr.Usd.Prim'> Usd.Prim(</_20240506_145520/Geom>)
prim: <class 'pxr.Usd.Prim'> Usd.Prim(</_20240506_145520/Geom/MDL_OBJ_material0000>)
prim: <class 'pxr.Usd.Prim'> Usd.Prim(</_20240506_145520/Materials>)
prim: <class 'pxr.Usd.Prim'> Usd.Prim(</_20240506_145520/Materials/material0000>)
prim: <class 'pxr.Usd.Prim'> Usd.Prim(</_20240506_145520/Materials/material0000/surfaceShader>)
prim: <class 'pxr.Usd.Prim'> Usd.Prim(</_20240506_145520/Materials/material0000/st_texCoordReader>)
prim: <class 'pxr.Usd.Prim'> Usd.Prim(</_20240506_145520/Materials/material0000/diffuseColor_texture>)
</code></pre>
<p>As observed, all primitives in <code>stage</code> have type <code><class 'pxr.Usd.Prim'></code>.</p>
<h2>Try to read attributes</h2>
<p>I tried using <code>dir()</code> and <code>vars()</code> to get all attributes by following this <a href="https://stackoverflow.com/questions/192109/is-there-a-built-in-function-to-print-all-the-current-properties-and-values-of-a">question</a>. But only methods are returned. E.g.,</p>
<pre class="lang-py prettyprint-override"><code>for prim in stage.Traverse():
def dump(obj):
for attr in dir(obj):
print("obj.%s = %r" % (attr, getattr(obj, attr)))
print("prim:", type(prim), prim)
dump(prim)
break
</code></pre>
<p>The result is</p>
<pre><code>prim: <class 'pxr.Usd.Prim'> Usd.Prim(</_20240506_145520>)
obj.AddAppliedSchema = <bound method AddAppliedSchema of Usd.Prim(</_20240506_145520>)>
obj.ApplyAPI = <bound method ApplyAPI of Usd.Prim(</_20240506_145520>)>
obj.CanApplyAPI = <bound method CanApplyAPI of Usd.Prim(</_20240506_145520>)>
obj.ClearActive = <bound method ClearActive of Usd.Prim(</_20240506_145520>)>
obj.ClearAssetInfo = <bound method ClearAssetInfo of Usd.Prim(</_20240506_145520>)>
obj.ClearAssetInfoByKey = <bound method ClearAssetInfoByKey of Usd.Prim(</_20240506_145520>)>
obj.ClearChildrenReorder = <bound method ClearChildrenReorder of Usd.Prim(</_20240506_145520>)>
obj.ClearCustomData = <bound method ClearCustomData of Usd.Prim(</_20240506_145520>)>
obj.ClearCustomDataByKey = <bound method ClearCustomDataByKey of Usd.Prim(</_20240506_145520>)>
obj.ClearDisplayName = <bound method ClearDisplayName of Usd.Prim(</_20240506_145520>)>
obj.ClearDocumentation = <bound method ClearDocumentation of Usd.Prim(</_20240506_145520>)>
obj.ClearHidden = <bound method ClearHidden of Usd.Prim(</_20240506_145520>)>
obj.ClearInstanceable = <bound method ClearInstanceable of Usd.Prim(</_20240506_145520>)>
obj.ClearMetadata = <bound method ClearMetadata of Usd.Prim(</_20240506_145520>)>
obj.ClearMetadataByDictKey = <bound method ClearMetadataByDictKey of Usd.Prim(</_20240506_145520>)>
obj.ClearPayload = <bound method ClearPayload of Usd.Prim(</_20240506_145520>)>
obj.ClearPropertyOrder = <bound method ClearPropertyOrder of Usd.Prim(</_20240506_145520>)>
obj.ClearTypeName = <bound method ClearTypeName of Usd.Prim(</_20240506_145520>)>
obj.ComputeExpandedPrimIndex = <bound method ComputeExpandedPrimIndex of Usd.Prim(</_20240506_145520>)>
obj.CreateAttribute = <bound method CreateAttribute of Usd.Prim(</_20240506_145520>)>
obj.CreateRelationship = <bound method CreateRelationship of Usd.Prim(</_20240506_145520>)>
obj.FindAllAttributeConnectionPaths = <bound method FindAllAttributeConnectionPaths of Usd.Prim(</_20240506_145520>)>
obj.FindAllRelationshipTargetPaths = <bound method FindAllRelationshipTargetPaths of Usd.Prim(</_20240506_145520>)>
obj.GetAllAuthoredMetadata = <bound method GetAllAuthoredMetadata of Usd.Prim(</_20240506_145520>)>
obj.GetAllChildren = <bound method GetAllChildren of Usd.Prim(</_20240506_145520>)>
obj.GetAllChildrenNames = <bound method GetAllChildrenNames of Usd.Prim(</_20240506_145520>)>
obj.GetAllMetadata = <bound method GetAllMetadata of Usd.Prim(</_20240506_145520>)>
obj.GetAppliedSchemas = <bound method GetAppliedSchemas of Usd.Prim(</_20240506_145520>)>
obj.GetAssetInfo = <bound method GetAssetInfo of Usd.Prim(</_20240506_145520>)>
obj.GetAssetInfoByKey = <bound method GetAssetInfoByKey of Usd.Prim(</_20240506_145520>)>
obj.GetAttribute = <bound method GetAttribute of Usd.Prim(</_20240506_145520>)>
obj.GetAttributeAtPath = <bound method GetAttributeAtPath of Usd.Prim(</_20240506_145520>)>
obj.GetAttributes = <bound method GetAttributes of Usd.Prim(</_20240506_145520>)>
obj.GetAuthoredAttributes = <bound method GetAuthoredAttributes of Usd.Prim(</_20240506_145520>)>
obj.GetAuthoredProperties = <bound method GetAuthoredProperties of Usd.Prim(</_20240506_145520>)>
obj.GetAuthoredPropertiesInNamespace = <bound method GetAuthoredPropertiesInNamespace of Usd.Prim(</_20240506_145520>)>
obj.GetAuthoredPropertyNames = <bound method GetAuthoredPropertyNames of Usd.Prim(</_20240506_145520>)>
obj.GetAuthoredRelationships = <bound method GetAuthoredRelationships of Usd.Prim(</_20240506_145520>)>
obj.GetChild = <bound method GetChild of Usd.Prim(</_20240506_145520>)>
obj.GetChildren = <bound method GetChildren of Usd.Prim(</_20240506_145520>)>
obj.GetChildrenNames = <bound method GetChildrenNames of Usd.Prim(</_20240506_145520>)>
obj.GetChildrenReorder = <bound method GetChildrenReorder of Usd.Prim(</_20240506_145520>)>
obj.GetCustomData = <bound method GetCustomData of Usd.Prim(</_20240506_145520>)>
obj.GetCustomDataByKey = <bound method GetCustomDataByKey of Usd.Prim(</_20240506_145520>)>
obj.GetDescription = <bound method GetDescription of Usd.Prim(</_20240506_145520>)>
obj.GetDisplayName = <bound method GetDisplayName of Usd.Prim(</_20240506_145520>)>
obj.GetDocumentation = <bound method GetDocumentation of Usd.Prim(</_20240506_145520>)>
obj.GetFilteredChildren = <bound method GetFilteredChildren of Usd.Prim(</_20240506_145520>)>
obj.GetFilteredChildrenNames = <bound method GetFilteredChildrenNames of Usd.Prim(</_20240506_145520>)>
obj.GetFilteredNextSibling = <bound method GetFilteredNextSibling of Usd.Prim(</_20240506_145520>)>
obj.GetInherits = <bound method GetInherits of Usd.Prim(</_20240506_145520>)>
obj.GetInstances = <bound method GetInstances of Usd.Prim(</_20240506_145520>)>
obj.GetKind = <bound method GetKind of Usd.Prim(</_20240506_145520>)>
obj.GetMetadata = <bound method GetMetadata of Usd.Prim(</_20240506_145520>)>
obj.GetMetadataByDictKey = <bound method GetMetadataByDictKey of Usd.Prim(</_20240506_145520>)>
obj.GetName = <bound method GetName of Usd.Prim(</_20240506_145520>)>
obj.GetNamespaceDelimiter = <Boost.Python.function object at 0x64c26e21cde0>
obj.GetNextSibling = <bound method GetNextSibling of Usd.Prim(</_20240506_145520>)>
obj.GetObjectAtPath = <bound method GetObjectAtPath of Usd.Prim(</_20240506_145520>)>
obj.GetParent = <bound method GetParent of Usd.Prim(</_20240506_145520>)>
obj.GetPath = <bound method GetPath of Usd.Prim(</_20240506_145520>)>
obj.GetPayloads = <bound method GetPayloads of Usd.Prim(</_20240506_145520>)>
obj.GetPrim = <bound method GetPrim of Usd.Prim(</_20240506_145520>)>
obj.GetPrimAtPath = <bound method GetPrimAtPath of Usd.Prim(</_20240506_145520>)>
obj.GetPrimDefinition = <bound method GetPrimDefinition of Usd.Prim(</_20240506_145520>)>
obj.GetPrimInPrototype = <bound method GetPrimInPrototype of Usd.Prim(</_20240506_145520>)>
obj.GetPrimIndex = <bound method GetPrimIndex of Usd.Prim(</_20240506_145520>)>
obj.GetPrimPath = <bound method GetPrimPath of Usd.Prim(</_20240506_145520>)>
obj.GetPrimStack = <bound method GetPrimStack of Usd.Prim(</_20240506_145520>)>
obj.GetPrimStackWithLayerOffsets = <bound method GetPrimStackWithLayerOffsets of Usd.Prim(</_20240506_145520>)>
obj.GetPrimTypeInfo = <bound method GetPrimTypeInfo of Usd.Prim(</_20240506_145520>)>
obj.GetProperties = <bound method GetProperties of Usd.Prim(</_20240506_145520>)>
obj.GetPropertiesInNamespace = <bound method GetPropertiesInNamespace of Usd.Prim(</_20240506_145520>)>
obj.GetProperty = <bound method GetProperty of Usd.Prim(</_20240506_145520>)>
obj.GetPropertyAtPath = <bound method GetPropertyAtPath of Usd.Prim(</_20240506_145520>)>
obj.GetPropertyNames = <bound method GetPropertyNames of Usd.Prim(</_20240506_145520>)>
obj.GetPropertyOrder = <bound method GetPropertyOrder of Usd.Prim(</_20240506_145520>)>
obj.GetPrototype = <bound method GetPrototype of Usd.Prim(</_20240506_145520>)>
obj.GetReferences = <bound method GetReferences of Usd.Prim(</_20240506_145520>)>
obj.GetRelationship = <bound method GetRelationship of Usd.Prim(</_20240506_145520>)>
obj.GetRelationshipAtPath = <bound method GetRelationshipAtPath of Usd.Prim(</_20240506_145520>)>
obj.GetRelationships = <bound method GetRelationships of Usd.Prim(</_20240506_145520>)>
obj.GetSpecializes = <bound method GetSpecializes of Usd.Prim(</_20240506_145520>)>
obj.GetSpecifier = <bound method GetSpecifier of Usd.Prim(</_20240506_145520>)>
obj.GetStage = <bound method GetStage of Usd.Prim(</_20240506_145520>)>
obj.GetTypeName = <bound method GetTypeName of Usd.Prim(</_20240506_145520>)>
obj.GetVariantSet = <bound method GetVariantSet of Usd.Prim(</_20240506_145520>)>
obj.GetVariantSets = <bound method GetVariantSets of Usd.Prim(</_20240506_145520>)>
obj.GetVersionIfHasAPIInFamily = <bound method GetVersionIfHasAPIInFamily of Usd.Prim(</_20240506_145520>)>
obj.GetVersionIfIsInFamily = <bound method GetVersionIfIsInFamily of Usd.Prim(</_20240506_145520>)>
obj.HasAPI = <bound method HasAPI of Usd.Prim(</_20240506_145520>)>
obj.HasAPIInFamily = <bound method HasAPIInFamily of Usd.Prim(</_20240506_145520>)>
obj.HasAssetInfo = <bound method HasAssetInfo of Usd.Prim(</_20240506_145520>)>
obj.HasAssetInfoKey = <bound method HasAssetInfoKey of Usd.Prim(</_20240506_145520>)>
obj.HasAttribute = <bound method HasAttribute of Usd.Prim(</_20240506_145520>)>
obj.HasAuthoredActive = <bound method HasAuthoredActive of Usd.Prim(</_20240506_145520>)>
obj.HasAuthoredAssetInfo = <bound method HasAuthoredAssetInfo of Usd.Prim(</_20240506_145520>)>
obj.HasAuthoredAssetInfoKey = <bound method HasAuthoredAssetInfoKey of Usd.Prim(</_20240506_145520>)>
obj.HasAuthoredCustomData = <bound method HasAuthoredCustomData of Usd.Prim(</_20240506_145520>)>
obj.HasAuthoredCustomDataKey = <bound method HasAuthoredCustomDataKey of Usd.Prim(</_20240506_145520>)>
obj.HasAuthoredDisplayName = <bound method HasAuthoredDisplayName of Usd.Prim(</_20240506_145520>)>
obj.HasAuthoredDocumentation = <bound method HasAuthoredDocumentation of Usd.Prim(</_20240506_145520>)>
obj.HasAuthoredHidden = <bound method HasAuthoredHidden of Usd.Prim(</_20240506_145520>)>
obj.HasAuthoredInherits = <bound method HasAuthoredInherits of Usd.Prim(</_20240506_145520>)>
obj.HasAuthoredInstanceable = <bound method HasAuthoredInstanceable of Usd.Prim(</_20240506_145520>)>
obj.HasAuthoredMetadata = <bound method HasAuthoredMetadata of Usd.Prim(</_20240506_145520>)>
obj.HasAuthoredMetadataDictKey = <bound method HasAuthoredMetadataDictKey of Usd.Prim(</_20240506_145520>)>
obj.HasAuthoredPayloads = <bound method HasAuthoredPayloads of Usd.Prim(</_20240506_145520>)>
obj.HasAuthoredReferences = <bound method HasAuthoredReferences of Usd.Prim(</_20240506_145520>)>
obj.HasAuthoredSpecializes = <bound method HasAuthoredSpecializes of Usd.Prim(</_20240506_145520>)>
obj.HasAuthoredTypeName = <bound method HasAuthoredTypeName of Usd.Prim(</_20240506_145520>)>
obj.HasCustomData = <bound method HasCustomData of Usd.Prim(</_20240506_145520>)>
obj.HasCustomDataKey = <bound method HasCustomDataKey of Usd.Prim(</_20240506_145520>)>
obj.HasDefiningSpecifier = <bound method HasDefiningSpecifier of Usd.Prim(</_20240506_145520>)>
obj.HasMetadata = <bound method HasMetadata of Usd.Prim(</_20240506_145520>)>
obj.HasMetadataDictKey = <bound method HasMetadataDictKey of Usd.Prim(</_20240506_145520>)>
obj.HasPayload = <bound method HasPayload of Usd.Prim(</_20240506_145520>)>
obj.HasProperty = <bound method HasProperty of Usd.Prim(</_20240506_145520>)>
obj.HasRelationship = <bound method HasRelationship of Usd.Prim(</_20240506_145520>)>
obj.HasVariantSets = <bound method HasVariantSets of Usd.Prim(</_20240506_145520>)>
obj.IsA = <bound method IsA of Usd.Prim(</_20240506_145520>)>
obj.IsAbstract = <bound method IsAbstract of Usd.Prim(</_20240506_145520>)>
obj.IsActive = <bound method IsActive of Usd.Prim(</_20240506_145520>)>
obj.IsComponent = <bound method IsComponent of Usd.Prim(</_20240506_145520>)>
obj.IsDefined = <bound method IsDefined of Usd.Prim(</_20240506_145520>)>
obj.IsGroup = <bound method IsGroup of Usd.Prim(</_20240506_145520>)>
obj.IsHidden = <bound method IsHidden of Usd.Prim(</_20240506_145520>)>
obj.IsInFamily = <bound method IsInFamily of Usd.Prim(</_20240506_145520>)>
obj.IsInPrototype = <bound method IsInPrototype of Usd.Prim(</_20240506_145520>)>
obj.IsInstance = <bound method IsInstance of Usd.Prim(</_20240506_145520>)>
obj.IsInstanceProxy = <bound method IsInstanceProxy of Usd.Prim(</_20240506_145520>)>
obj.IsInstanceable = <bound method IsInstanceable of Usd.Prim(</_20240506_145520>)>
obj.IsLoaded = <bound method IsLoaded of Usd.Prim(</_20240506_145520>)>
obj.IsModel = <bound method IsModel of Usd.Prim(</_20240506_145520>)>
obj.IsPathInPrototype = <Boost.Python.function object at 0x64c26e22b350>
obj.IsPrototype = <bound method IsPrototype of Usd.Prim(</_20240506_145520>)>
obj.IsPrototypePath = <Boost.Python.function object at 0x64c26e22b230>
obj.IsPseudoRoot = <bound method IsPseudoRoot of Usd.Prim(</_20240506_145520>)>
obj.IsSubComponent = <bound method IsSubComponent of Usd.Prim(</_20240506_145520>)>
obj.IsValid = <bound method IsValid of Usd.Prim(</_20240506_145520>)>
obj.Load = <bound method Load of Usd.Prim(</_20240506_145520>)>
obj.MakeResolveTargetStrongerThanEditTarget = <bound method MakeResolveTargetStrongerThanEditTarget of Usd.Prim(</_20240506_145520>)>
obj.MakeResolveTargetUpToEditTarget = <bound method MakeResolveTargetUpToEditTarget of Usd.Prim(</_20240506_145520>)>
obj.RemoveAPI = <bound method RemoveAPI of Usd.Prim(</_20240506_145520>)>
obj.RemoveAppliedSchema = <bound method RemoveAppliedSchema of Usd.Prim(</_20240506_145520>)>
obj.RemoveProperty = <bound method RemoveProperty of Usd.Prim(</_20240506_145520>)>
obj.SetActive = <bound method SetActive of Usd.Prim(</_20240506_145520>)>
obj.SetAssetInfo = <bound method SetAssetInfo of Usd.Prim(</_20240506_145520>)>
obj.SetAssetInfoByKey = <bound method SetAssetInfoByKey of Usd.Prim(</_20240506_145520>)>
obj.SetChildrenReorder = <bound method SetChildrenReorder of Usd.Prim(</_20240506_145520>)>
obj.SetCustomData = <bound method SetCustomData of Usd.Prim(</_20240506_145520>)>
obj.SetCustomDataByKey = <bound method SetCustomDataByKey of Usd.Prim(</_20240506_145520>)>
obj.SetDisplayName = <bound method SetDisplayName of Usd.Prim(</_20240506_145520>)>
obj.SetDocumentation = <bound method SetDocumentation of Usd.Prim(</_20240506_145520>)>
obj.SetHidden = <bound method SetHidden of Usd.Prim(</_20240506_145520>)>
obj.SetInstanceable = <bound method SetInstanceable of Usd.Prim(</_20240506_145520>)>
obj.SetKind = <bound method SetKind of Usd.Prim(</_20240506_145520>)>
obj.SetMetadata = <bound method SetMetadata of Usd.Prim(</_20240506_145520>)>
obj.SetMetadataByDictKey = <bound method SetMetadataByDictKey of Usd.Prim(</_20240506_145520>)>
obj.SetPayload = <bound method SetPayload of Usd.Prim(</_20240506_145520>)>
obj.SetPropertyOrder = <bound method SetPropertyOrder of Usd.Prim(</_20240506_145520>)>
obj.SetSpecifier = <bound method SetSpecifier of Usd.Prim(</_20240506_145520>)>
obj.SetTypeName = <bound method SetTypeName of Usd.Prim(</_20240506_145520>)>
obj.Unload = <bound method Unload of Usd.Prim(</_20240506_145520>)>
obj._GetSourcePrimIndex = <bound method _GetSourcePrimIndex of Usd.Prim(</_20240506_145520>)>
obj.__bool__ = <bound method __bool__ of Usd.Prim(</_20240506_145520>)>
obj.__class__ = <class 'pxr.Usd.Prim'>
obj.__delattr__ = <method-wrapper '__delattr__' of Prim object at 0x787b16fcd7c0>
obj.__dict__ = {}
obj.__dir__ = <built-in method __dir__ of Prim object at 0x787b16fcd7c0>
obj.__doc__ = None
obj.__eq__ = <bound method __eq__ of Usd.Prim(</_20240506_145520>)>
obj.__format__ = <built-in method __format__ of Prim object at 0x787b16fcd7c0>
obj.__ge__ = <method-wrapper '__ge__' of Prim object at 0x787b16fcd7c0>
obj.__getattribute__ = <bound method __getattribute__ of Usd.Prim(</_20240506_145520>)>
obj.__gt__ = <method-wrapper '__gt__' of Prim object at 0x787b16fcd7c0>
obj.__hash__ = <bound method __hash__ of Usd.Prim(</_20240506_145520>)>
obj.__init__ = <bound method __init__ of Usd.Prim(</_20240506_145520>)>
obj.__init_subclass__ = <built-in method __init_subclass__ of Boost.Python.class object at 0x64c26e1e0390>
obj.__instance_size__ = 56
obj.__le__ = <method-wrapper '__le__' of Prim object at 0x787b16fcd7c0>
obj.__lt__ = <method-wrapper '__lt__' of Prim object at 0x787b16fcd7c0>
obj.__module__ = 'pxr.Usd'
obj.__ne__ = <bound method __ne__ of Usd.Prim(</_20240506_145520>)>
obj.__new__ = <built-in method __new__ of Boost.Python.class object at 0x787b44086040>
obj.__reduce__ = <bound method <unnamed Boost.Python function> of Usd.Prim(</_20240506_145520>)>
obj.__reduce_ex__ = <built-in method __reduce_ex__ of Prim object at 0x787b16fcd7c0>
obj.__repr__ = <bound method __repr__ of Usd.Prim(</_20240506_145520>)>
obj.__setattr__ = <method-wrapper '__setattr__' of Prim object at 0x787b16fcd7c0>
obj.__sizeof__ = <built-in method __sizeof__ of Prim object at 0x787b16fcd7c0>
obj.__str__ = <method-wrapper '__str__' of Prim object at 0x787b16fcd7c0>
obj.__subclasshook__ = <built-in method __subclasshook__ of Boost.Python.class object at 0x64c26e1e0390>
obj.__weakref__ = None
</code></pre>
<p>As you can see, it only returns methods.</p>
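<p>To make the "methods only" observation concrete: a generic filter over <code>dir()</code> that keeps non-callable attributes comes back empty for a Boost.Python-wrapped <code>Usd.Prim</code>, because all prim data sits behind <code>Get*()</code> methods (that is my reading of the dump above). A minimal sketch on a plain Python class:</p>

```python
# Sketch: list only the non-callable, non-dunder attributes of an object.
# On a Usd.Prim this kind of filter finds nothing useful (an assumption
# based on the dir() dump above); shown here on an ordinary class instead.
class Demo:
    kind = "mesh"            # a plain data attribute

    def GetName(self):       # a method, which the filter should skip
        return "demo"

def data_attrs(obj):
    return {
        name: getattr(obj, name)
        for name in dir(obj)
        if not name.startswith("_") and not callable(getattr(obj, name))
    }

print(data_attrs(Demo()))  # → {'kind': 'mesh'}
```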
<h2>Getting <code>points</code> attributes by using <code>GetAttribute()</code> and <code>Get()</code></h2>
<p>I found some old code that can get the <code>points</code> information from <code>pxr.UsdGeom.Mesh</code> as follows:</p>
<pre class="lang-py prettyprint-override"><code>for prim in stage.Traverse():
    if prim.IsA(pxr.UsdGeom.Mesh):
        points = prim.GetAttribute('points').Get()
        print(f"points: {type(points)}")
        points = np.array(points)
        print(f"points: {points.shape}")
</code></pre>
<p>It first checks if <code>prim</code> is a <code>pxr.UsdGeom.Mesh</code>. If so, it uses <code>prim.GetAttribute('points').Get()</code> to get the <code>points</code> from the mesh and turns them into a NumPy array. The output is:</p>
<pre><code>points: <class 'pxr.Vt.Vec3fArray'>
points: (26330, 3)
</code></pre>
<p>But how could I have known in advance that this operation can be used to get that information?</p>
<h2>My question</h2>
<p>Based on the above description, my question is the following:</p>
<ul>
<li>How do I know which attributes I can get from the <code>pxr.Usd.Prim</code>? I've read through the <a href="https://openusd.org/release/api/class_usd_prim.html" rel="nofollow noreferrer">USD manual</a> and the <a href="https://docs.omniverse.nvidia.com/kit/docs/pxr-usd-api/latest/pxr.html" rel="nofollow noreferrer">Omniverse manual</a> but cannot find any relevant information.</li>
</ul>
|
<python><c++><usdz><usd><omniverse>
|
2024-05-10 06:38:56
| 0
| 2,025
|
Der Fänger im Roggen
|
78,458,284
| 7,700,802
|
How to fix line bleeding in fpdf
|
<p>I have these functions</p>
<pre><code>def investment_summary(pdf, text):
    pdf.set_font("Arial", size=8)
    pdf.write(5, text)
    for point in text.splitlines():
        if point.startswith("-"):
            pdf.set_x(15)
            pdf.write(5, point)

def create_report(county_and_state, llm_text, analytics_location_path=None):
    pdf = FPDF()
    pdf.add_page()
    pdf.set_text_color(r=50, g=108, b=175)
    pdf.set_font('Arial', 'B', 18)
    pdf.cell(w=0, h=10, txt="Verus-AI: " + county_and_state, ln=1, align='C')
    pdf.set_font('Arial', 'B', 16)
    pdf.cell(w=0, h=10, txt="Investment Summary", ln=1, align='L')
    investment_summary(pdf, llm_text)
    # pdf.set_text_color(r=50, g=108, b=175)
    # pdf.set_font('Arial', 'B', 16)
    # pdf.cell(w=0, h=10, txt="Analytics", ln=1, align='L')
    pdf.output(f'./example1.pdf', 'F')
</code></pre>
<p>When I use this text:</p>
<pre><code>text = """
Branch County is a rural county in southwestern Michigan, with a population of about 45,000. The county seat is Coldwater, which is also the largest city in the county. The county has a diverse economy, with agriculture, manufacturing, health care, and education as the main sectors. The county is also home to several state parks, lakes, and recreational trails.
Pros of investing in Branch County, MI:
- Low property taxes and affordable housing prices, compared to the state and national averages.
- High rental demand and low vacancy rates, due to the lack of new construction and the presence of several colleges and universities in the county.
- Potential for appreciation and cash flow, as the county has a stable and growing population, a low unemployment rate, and a strong school system.
- Opportunity to diversify your portfolio and hedge against inflation, by investing in different types of properties, such as single-family homes, multi-family buildings, mobile homes, and land.
Cons of investing in Branch County, MI:
- Limited access to public transportation and major highways, which may affect the mobility and convenience of tenants and owners.
- High crime rates and low quality of education in some areas, which may deter some potential renters and buyers.
- Weather-related risks, such as harsh winters, floods, and tornadoes, which may require additional maintenance and insurance costs.
- Lack of economic and demographic diversity, which may limit the future growth and demand for real estate in the county.
Overall investment thesis:
Branch County, MI is a viable option for real estate investors who are looking for a long-term, passive, and income-generating strategy. The county offers a combination of low-cost, high-demand, and appreciating properties, with a relatively low risk of market fluctuations and downturns. However, investors should also be aware of the potential drawbacks and challenges of investing in a rural and homogeneous market, and be prepared to deal with the environmental and social issues that may arise. Investors should also conduct thorough due diligence and research on the specific locations, neighborhoods, and properties that they are interested in, and seek professional advice and guidance as needed.
"""
</code></pre>
<p>and call the function <code>create_report("Branch County, MI", text)</code></p>
<p>I get this</p>
<p><a href="https://i.sstatic.net/QsesrPFn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QsesrPFn.png" alt="enter image description here" /></a></p>
<p>I am not sure why the text at the bottom bleeds into other text; any suggestions on how to fix this would be greatly appreciated.</p>
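<p>One detail that may explain the duplication (my reading of the code above, not a confirmed answer): <code>investment_summary</code> first writes the entire text with <code>pdf.write(5, text)</code>, then writes every dash-prefixed line a second time inside the loop. A stdlib-only sketch of that double emission:</p>

```python
# Sketch reproducing the double write in investment_summary: the whole
# text is emitted once, then each "-" line is emitted again.
text = """Intro line
- first bullet
- second bullet
Outro line"""

written = []
written.append(text)              # mimics pdf.write(5, text): everything once
for point in text.splitlines():
    if point.startswith("-"):
        written.append(point)     # mimics the loop: dash lines a second time

print(written[1:])  # → ['- first bullet', '- second bullet']
```

<p>Since <code>write()</code> continues from the current cursor position rather than starting a new block, those second copies land on top of text that is already on the page.</p>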
|
<python><pyfpdf>
|
2024-05-10 06:02:21
| 1
| 480
|
Wolfy
|
78,458,074
| 913,494
|
Query dynamoDB with datetime filter expression - Incorrect operand
|
<p>I am trying to query my DynamoDB table to get items older than 5 minutes.</p>
<p>My item is stored as:</p>
<pre><code>{
    "main_event_code": {
        "S": "1160"
    },
    "event_code": {
        "S": "9999"
    },
    "last_upload": {
        "S": "2024-05-10T04:09:29.614535Z"
    }
}
</code></pre>
<p>Python script:</p>
<pre><code>current_utc_time = datetime.now(timezone.utc)
past_time = current_utc_time - timedelta(minutes=5)

# Format both times as ISO 8601 strings
current_utc_time_iso = current_utc_time.strftime('%Y-%m-%dT%H:%M:%SZ')
past_time_iso = past_time.strftime('%Y-%m-%dT%H:%M:%SZ')

# Print the current UTC time and the time minus 5 minutes
print(f"Current UTC Time: {current_utc_time_iso}")
print(f"Time 5 Minutes Ago: {past_time_iso}")

try:
    # Scan the table with a filter for items older than 'past_time'
    response = table.scan(
        FilterExpression="last_upload < :past_time",
        ExpressionAttributeValues={":past_time": {"S": past_time_iso}}
    )
</code></pre>
<p>I get:</p>
<blockquote>
<p>Error querying DynamoDB table: An error occurred (ValidationException) when calling the Scan operation: Invalid FilterExpression: Incorrect operand type for operator or function; operator or function: <, operand type: M</p>
</blockquote>
<p>I know the filter expression itself works because, using the AWS CLI, I get the item returned.</p>
<pre><code>aws dynamodb scan --table-name ingestion_queue --filter-expression "last_upload < :val" --expression-attribute-values '{":val":{"S":"2024-05-10T04:23:08Z"}}'
</code></pre>
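<p>One thing worth noting (my assumption about the cause; no accepted answer is included here): the typed <code>{"S": ...}</code> AttributeValue wrapper belongs to the low-level client and the CLI, while the boto3 <code>Table</code> resource expects plain Python values, so passing the wrapper makes DynamoDB see a map (hence operand type <code>M</code>). The string comparison itself is sound, because fixed-width ISO 8601 timestamps sort lexicographically in chronological order:</p>

```python
# Sketch: plain string comparison of ISO 8601 timestamps agrees with
# chronological order, which is what the FilterExpression relies on.
from datetime import datetime, timedelta, timezone

now = datetime(2024, 5, 10, 4, 23, 8, tzinfo=timezone.utc)
past = now - timedelta(minutes=5)

now_iso = now.strftime('%Y-%m-%dT%H:%M:%SZ')
past_iso = past.strftime('%Y-%m-%dT%H:%M:%SZ')

stored = "2024-05-10T04:09:29Z"   # hypothetical last_upload value
print(stored < past_iso)          # → True: the item is older than 5 minutes
```

<p>With the <code>Table</code> resource, <code>ExpressionAttributeValues={":past_time": past_time_iso}</code> (a plain string) would be the matching shape; the typed form shown above is what <code>boto3.client('dynamodb')</code> and the CLI use.</p>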
|
<python><amazon-dynamodb><boto3>
|
2024-05-10 05:01:29
| 1
| 535
|
Matt Winer
|
78,457,889
| 10,485,253
|
How can I find if another DAG is currently running and what configs were passed to it
|
<p>I'm using airflow 1.10.12</p>
<p>I have DAG A and DAG B. DAG A runs every 12 hours and DAG B is only triggered manually with a configuration JSON.</p>
<p>I know how to access the configuration JSON in DAG B, but is there a way to tell from DAG A whether DAG B is running, and to read its configuration JSON?</p>
|
<python><airflow>
|
2024-05-10 03:42:18
| 1
| 887
|
TreeWater
|
78,457,753
| 19,123,103
|
Importing GuardrailsOutputParser from langchain.output_parsers is deprecated
|
<p>I'm using langchain 0.1.17. I get this warning:</p>
<pre class="lang-none prettyprint-override"><code>\venv\Lib\site-packages\langchain\_api\module_import.py:87:
LangChainDeprecationWarning: Importing GuardrailsOutputParser
from langchain.output_parsers is deprecated. Please replace
the import with the following:
from langchain_community.output_parsers.rail_parser import GuardrailsOutputParser
warnings.warn(
</code></pre>
<p>even though my code doesn't import this package at all.</p>
<p>A code to trigger this warning:</p>
<pre class="lang-py prettyprint-override"><code>from langchain.agents import create_openai_tools_agent
</code></pre>
<p>What gives?</p>
|
<python><warnings><langchain>
|
2024-05-10 02:36:31
| 1
| 25,331
|
cottontail
|
78,457,650
| 3,788,767
|
Python Numba is not performing as expected
|
<p>I have the following code in Python 3 and I am comparing the <a href="https://numba.pydata.org/" rel="nofollow noreferrer">Numba JIT</a> compiler's performance against regular Python code.</p>
<p>My OS is Ubuntu 22.04, <code>pip3 install virtualenv</code> and <code>pip3 install numba numpy</code></p>
<p>benchmark.py</p>
<pre><code>import warnings
warnings.filterwarnings("ignore")  # https://github.com/numba/numba/issues/4553
import timeit
import numpy as np
from numba import jit

A = list(range(1, 5000000, 1))

# Algorithms to be tested
def makeSquareLoopFor(A):
    result = []
    for i, v in enumerate(A):
        result.append(A[i]**2)
    return result

makeSquareLoopForBenchmark = timeit.timeit("makeSquareLoopFor(A)", globals=globals(), number=1)
print("makeSquareLoopForBenchmark =", makeSquareLoopForBenchmark)

@jit(nopython=True)
def makeSquareLoopForJit(A):
    result = []
    for i, v in enumerate(A):
        result.append(A[i]**2)
    return result

makeSquareLoopForBenchmarkJit = timeit.timeit("makeSquareLoopForJit(A)", globals=globals(), number=1)
print("makeSquareLoopForBenchmarkJit =", makeSquareLoopForBenchmarkJit)
print()

def makeSquareLoopWhile(A):
    result = []
    i = 0
    while i < len(A):
        result.append(A[i]**2)
        i += 1
    return result

makeSquareLoopWhileBenchmark = timeit.timeit("makeSquareLoopWhile(A)", globals=globals(), number=1)
print("makeSquareLoopWhileBenchmark =", makeSquareLoopWhileBenchmark)

@jit(nopython=True)
def makeSquareLoopWhileJit(A):
    result = []
    i = 0
    while i < len(A):
        result.append(A[i]**2)
        i += 1
    return result

makeSquareLoopWhileJitBenchmark = timeit.timeit("makeSquareLoopWhileJit(A)", globals=globals(), number=1)
print("makeSquareLoopWhileJitBenchmark =", makeSquareLoopWhileJitBenchmark)
print()

def makeSquareListComprehension(A):
    return [v**2 for v in A]

makeSquareListComprehensionBenchmark = timeit.timeit("makeSquareListComprehension(A)", globals=globals(), number=1)
print("makeSquareListComprehensionBenchmark =", makeSquareListComprehensionBenchmark)

@jit(nopython=True)
def makeSquareListComprehensionJit(A):
    return [v**2 for v in A]

makeSquareListComprehensionJitBenchmark = timeit.timeit("makeSquareListComprehensionJit(A)", globals=globals(), number=1)
print("makeSquareListComprehensionJitBenchmark =", makeSquareListComprehensionJitBenchmark)
print()

def makeSquareMap(A):
    return list(map(lambda v: v**2, A))

makeSquareMapBenchmark = timeit.timeit("makeSquareMap(A)", globals=globals(), number=1)
print("makeSquareMapBenchmark =", makeSquareMapBenchmark)

@jit(nopython=True)
def makeSquareMapJit(A):
    return list(map(lambda v: v**2, A))

makeSquareMapJitBenchmark = timeit.timeit("makeSquareMapJit(A)", globals=globals(), number=1)
print("makeSquareMapJitBenchmark =", makeSquareMapJitBenchmark)
print()

def makeSquareNumpy(A):
    return np.square(A)

makeSquareNumpyBenchmark = timeit.timeit("makeSquareNumpy(A)", globals=globals(), number=1)
print("makeSquareNumpyBenchmark =", makeSquareNumpyBenchmark)

# @jit(nopython=True)
def makeSquareNumpyJit(A):
    return np.square(A)

makeSquareNumpyJitBenchmark = timeit.timeit("makeSquareNumpyJit(A)", globals=globals(), number=1)
print("makeSquareNumpyJitBenchmark =", makeSquareNumpyJitBenchmark)
</code></pre>
<p>As we can see in the following output, the Numba JIT-compiled functions are not performing as expected when compared with the plain Python versions.</p>
<pre class="lang-none prettyprint-override"><code>makeSquareLoopForBenchmark = 1.6181618960108608
makeSquareLoopForBenchmarkJit = 7.653679737006314
makeSquareLoopWhileBenchmark = 2.3787347140023485
makeSquareLoopWhileJitBenchmark = 7.815255765017355
makeSquareListComprehensionBenchmark = 2.4829419960151426
makeSquareListComprehensionJitBenchmark = 8.279022732982412
makeSquareMapBenchmark = 1.5203839530004188
makeSquareMapJitBenchmark = 7.935965497017605
makeSquareNumpyBenchmark = 0.43739607697352767
makeSquareNumpyJitBenchmark = 0.4406902039772831
</code></pre>
<p>What am I missing here? Why is the Numba code less performant than regular Python?</p>
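<p>A likely contributor (an assumption based on how Numba handles arguments, not a measurement of this exact script): each call hands Numba a 5-million-element Python <code>list</code>, which must be converted to a typed list at the call boundary, and <code>number=1</code> also folds the one-time JIT compilation into the measured time. Converting the data to a NumPy array once avoids the per-call conversion; a minimal NumPy-only sketch of that idea:</p>

```python
# Sketch: do the list -> array conversion once, outside anything you time,
# and check that the vectorized version produces the same result.
import numpy as np

A_list = list(range(1, 1000, 1))
A_arr = np.asarray(A_list)        # one-time conversion

def square_list(A):
    return [v**2 for v in A]      # pure-Python baseline

def square_array(A):
    return np.square(A)           # vectorized; no Python-level loop

assert square_list(A_list) == square_array(A_arr).tolist()
```

<p>Timing the JIT functions only after a warm-up call (so compilation is excluded), and passing NumPy arrays instead of lists, would give a fairer comparison.</p>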
|
<python><python-3.x><numba>
|
2024-05-10 01:49:51
| 1
| 5,580
|
IgorAlves
|
78,457,389
| 8,271,180
|
Auto inherite a parent class with decorator
|
<p>I have a class decorator <code>p()</code> that can be used only on classes that inherit from some abstract class <code>P</code>.</p>
<p>So anytime someone uses <code>p</code> it should be written as follows:</p>
<pre><code>@p()
class Child(P):
    ...
</code></pre>
<p>Is there any way to automatically inherit from <code>P</code> using the decorator <code>p()</code>, so it could simply be written as follows:</p>
<pre><code>@p()
class Child:
    ...
</code></pre>
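<p>For what it's worth, a decorator can inject the base class itself; a sketch under the assumption that <code>P</code> has no metaclass or <code>__init_subclass__</code> constraints that would make rebuilding the class unsafe:</p>

```python
# Sketch: a class decorator that appends P to the bases when it is missing.
class P:
    """Stand-in for the abstract parent class."""

def p():
    def wrap(cls):
        if not issubclass(cls, P):
            # Rebuild the class with P added; attributes are inherited from cls.
            return type(cls.__name__, (cls, P), {})
        return cls
    return wrap

@p()
class Child:
    x = 1

print(issubclass(Child, P), Child.x)  # → True 1
```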
|
<python><class><inheritance><python-decorators>
|
2024-05-09 23:43:10
| 1
| 1,356
|
Tomer Wolberg
|
78,457,374
| 2,057,516
|
Circular imports from classes whose attributes reference one another
|
<p>I know there are a lot of questions that talk about circular imports. I've looked at a lot of them, but I can't seem to be able to figure out how to apply them to this scenario.</p>
<p>I have a pair of data loading classes for importing data from an excel document with multiple sheets. The sheets are each configurable, so the sheet name and the individual column names can change (but they have defaults defined in the class attributes).</p>
<p>There are other inter-class references between the 2 classes in question, but I figured this example was the most straight-forward:</p>
<p>One of the features of a separate export script is to use the loaders' metadata to populate an excel template (with multiple sheets). That template has comments on the column headers in each sheet that reference the other sheets, because the contents of some sheets are used to populate dropdowns in other sheets.</p>
<p>So a comment in a header in one sheet may say "This column's dropdown data is populated by the contents of column X in sheet 2". And sheet 2 column X's header would have a comment that says "This column's contents is used to populate dropdowns for column Y in sheet 1."</p>
<p>I went ahead and added the respective imports, knowing I would end up with a circular import issue, but I figured I would get everything conceptually established as to what I wanted to do and then try and solve the import issue.</p>
<p>Here's some toy code to try and boil it down:</p>
<p><code>infusates_loader.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from DataRepo.loaders.tracers_loader import TracersLoader

class InfusatesLoader(TableLoader):
    DataColumnMetadata = DataTableHeaders(
        TRACERNAME=TableColumn.init_flat(
            ...
            source_sheet=TracersLoader.DataSheetName,
            source_column=TracersLoader.DataHeaders.NAME,
        ),
    )
</code></pre>
<p><code>tracers_loader.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from DataRepo.loaders.infusates_loader import InfusatesLoader

class TracersLoader(TableLoader):
    DataColumnMetadata = DataTableHeaders(
        NAME=TableColumn.init_flat(
            ...
            # Cannot reference the InfusatesLoader here (to include the name of its
            # sheet and its tracer name column) due to circular import
            target_sheet=InfusatesLoader.DataSheetName,
            source_column=InfusatesLoader.DataHeaders.TRACERNAME,
        ),
    )
</code></pre>
<p>I was able to avoid the issue for now just by setting static string values in <code>tracers_loader.py</code>, but ideally, those values would only live in one place (each in its respective class).</p>
<p>A lot of the circular import questions out there have to do with methods, so I don't think they apply to class attributes. I tried using <code>importlib</code> and I tried doing the import inside a function, but as soon as the class body was evaluated, I got hit with the import error.</p>
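<p>One common workaround (a sketch with stand-in classes, not the real loaders): keep the cross-class reference lazy by computing it inside a classmethod, so neither module needs the other at import time; in the real project, the import itself would also move inside that method.</p>

```python
# Sketch: both classes exist before either cross-reference is resolved,
# because the lookup happens at call time, not at class-definition time.
class TracersLoader:
    DataSheetName = "Tracers"

    @classmethod
    def column_metadata(cls):
        # Resolved lazily; with separate modules, the import would go here.
        return {"target_sheet": InfusatesLoader.DataSheetName}

class InfusatesLoader:
    DataSheetName = "Infusates"

    @classmethod
    def column_metadata(cls):
        return {"source_sheet": TracersLoader.DataSheetName}

print(TracersLoader.column_metadata())   # → {'target_sheet': 'Infusates'}
print(InfusatesLoader.column_metadata()) # → {'source_sheet': 'Tracers'}
```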
|
<python><python-import><circular-reference>
|
2024-05-09 23:31:39
| 1
| 1,225
|
hepcat72
|
78,457,370
| 4,526,441
|
How to create a tensor based on another one - Studying PyTorch in practice?
|
<p>I'm studying AI using PyTorch and implementing some toy examples.
First, I created a one-dimensional tensor (X) and a second tensor (y), derived from the first one:</p>
<pre class="lang-py prettyprint-override"><code>X = torch.arange(0, 100, 1.0).unsqueeze(dim=1)
y = X * 2
</code></pre>
<p>So I have something like</p>
<pre><code>X = tensor([[0.], [1.], [2.], [3.], [4.], [5.], ...
y = tensor([[ 0.], [ 2.], [ 4.], [ 6.], [ 8.], [10.], ...
</code></pre>
<p>Then, I trained a model to predict y and it was working fine.</p>
<p>Now, I would like something different. X will be 2D and y 1D. y is calculated by an operation on the elements of X:
<code>if x[0] + x[1] >= 0 then y = 10 else y = -10</code></p>
<pre><code>X = tensor([[ 55.5348, -97.7608],
[ 29.0493, -52.1908],
[ 47.1722, -43.1151],
[ 11.1242, -62.8652],
[ 44.8067, 80.8335],...
y = tensor([[-10.], [-10.], [ 10.], [-10.], [ 10.],...
</code></pre>
<p>First question: does this make sense in terms of machine learning?</p>
<p>Second one... I'm generating the tensors using NumPy. Could I do it in a smarter way?</p>
<pre class="lang-py prettyprint-override"><code># Create X input values for tests
X_numpy = np.random.uniform(low=-100, high=100, size=(1000, 2))
print("X", X_numpy)

# y_numpy = np.array([[ (n[0]+n[1]) >= 0 ? 10:-10] for n in X_numpy])
y_numpy = np.empty(shape=[0, 1])
for n in X_numpy:
    if n[0] + n[1] >= 0:
        y_numpy = np.append(y_numpy, [[10.]], axis=0)
    elif n[0] + n[1] < 0:
        y_numpy = np.append(y_numpy, [[-10.]], axis=0)
</code></pre>
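<p>On the second question: the loop with repeated <code>np.append</code> (which copies the whole array on every call) can be replaced by a single vectorized expression; a sketch:</p>

```python
# Sketch: vectorized replacement for the append loop above.
import numpy as np

X_numpy = np.random.uniform(low=-100, high=100, size=(1000, 2))
# 10. where the row sum is >= 0, else -10., shaped as a (1000, 1) column
y_numpy = np.where(X_numpy.sum(axis=1) >= 0, 10.0, -10.0).reshape(-1, 1)

print(y_numpy.shape)  # → (1000, 1)
```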
|
<python><numpy><machine-learning><pytorch><artificial-intelligence>
|
2024-05-09 23:28:50
| 1
| 313
|
Victor Vidigal Ribeiro
|
78,457,216
| 9,983,652
|
how to add a new column with different length into a existing dataframe
|
<p>I am looping to add columns to an empty DataFrame. Each column might have a different length. It looks like the final number of rows is defined by the length of the first column added, so longer columns have their extra values cut off.</p>
<p>How can I always keep all the values of each column when the column lengths are different? Thanks</p>
<p>Here is case 1, where the first column is shorter, so the second column's extra values are cut:</p>
<pre><code>import pandas as pd
df_profile=pd.DataFrame()
df_profile['A']=pd.Series([1,2,3,4])
df_profile['B']=pd.Series([10,20,30,40,50,60])
print(df_profile)
A B
0 1 10
1 2 20
2 3 30
3 4 40
</code></pre>
<p>Here is case 2, where the first column is the longest, so the other columns are fine:</p>
<pre><code>import pandas as pd
df_profile=pd.DataFrame()
df_profile['A']=pd.Series([1,2,3,4,5,6,7,8])
df_profile['B']=pd.Series([10,20,30,40,50,60])
df_profile['C']=pd.Series([100,200,300,400,500,600])
df_profile['D']=pd.Series([100,200])
print(df_profile)
A B C D
0 1 10.0 100.0 100.0
1 2 20.0 200.0 200.0
2 3 30.0 300.0 NaN
3 4 40.0 400.0 NaN
4 5 50.0 500.0 NaN
5 6 60.0 600.0 NaN
6 7 NaN NaN NaN
7 8 NaN NaN NaN
</code></pre>
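<p>One way to keep every value regardless of which column is added first (a sketch, assuming the default <code>RangeIndex</code> is acceptable): build the Series first and let <code>pd.concat(axis=1)</code> align them, which pads shorter columns with NaN instead of truncating longer ones.</p>

```python
# Sketch: concat aligns on the union of the indexes, so no values are dropped.
import pandas as pd

cols = {
    'A': pd.Series([1, 2, 3, 4]),
    'B': pd.Series([10, 20, 30, 40, 50, 60]),
}
df_profile = pd.concat(cols, axis=1)

print(len(df_profile))  # → 6: the longest column sets the row count
```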
|
<python><pandas>
|
2024-05-09 22:22:54
| 1
| 4,338
|
roudan
|