| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,424,946
| 10,474,998
|
Make subplots using plotly express with values coming from a dataframe
|
<p>Assume I have a toy DataFrame <code>df</code> which lists the car <code>model</code> and the <code>customer rating</code> for one car showroom.</p>
<pre><code>CustomerID Model Cust_rating
1 Corolla A
2 Corolla B
3 Forester A
4 GLC C
5 Xterra A
6 GLC A
</code></pre>
<p>Using Plotly Express, I created pie charts of the percentage of cars by model and by Cust_rating, as two separate graphs:</p>
<pre><code>import plotly.express as px
px.pie(df,names='Model',title='Proportion Of each Model')
px.pie(df,names='Cust_rating',title='Proportion Of each Rating')
</code></pre>
<p>Now, I want to create subplots, and all the ways of doing it using the documentation are throwing up errors:</p>
<pre><code>ValueError: Trace type 'pie' is not compatible with subplot type 'xy'
at grid position (1, 1)
</code></pre>
<p>This is what I tried:</p>
<pre><code>from plotly.subplots import make_subplots
import plotly.graph_objects as go
fig = make_subplots(rows=1, cols=2)
fig.add_trace(go.Pie(values=df['Model']), row=1, col=1)
fig.add_trace(go.Pie(values=df['Cust_rating']), row=1, col=2)
fig.update_layout(height=700, showlegend=False)
fig.show()
</code></pre>
|
<python><pandas><matplotlib><plotly>
|
2023-02-12 04:47:03
| 2
| 1,079
|
JodeCharger100
|
75,424,936
| 11,611,632
|
ModuleNotFoundError - from .local_settings import *; Django
|
<p>Rather than using environment variables to hide sensitive information such as <code>SECRET_KEY</code>, I'm using a module called <code>local_settings.py</code>, since that file is ignored via <code>.gitignore</code>.</p>
<ul>
<li><a href="https://stackoverflow.com/questions/33179419/private-settings-in-django-and-deployment">Private settings in Django and Deployment</a></li>
</ul>
<p>Within <code>settings.py</code>, I imported the module as <code>from .local_settings import *</code></p>
<p>I'm using PythonAnywhere as the means of deploying my website yet when <code>python manage.py collectstatic</code> is executed in its console the following error is raised:</p>
<pre><code> from .local_settings import *
ModuleNotFoundError: No module named 'stackoverflow_clone.local_settings'
</code></pre>
<p>Is this error occurring because <code>local_settings.py</code> is being treated as if it doesn't exist at all?
How can the error be resolved so that configuration such as <code>SECRET_KEY</code> can be imported?</p>
<p>This directory structure reflects what's on my local machine.</p>
<p><a href="https://i.sstatic.net/q9vqY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/q9vqY.png" alt="enter image description here" /></a></p>
<pre><code>(stackoverflow_clone-virtualenv) 23:19 ~ $ pwd
/home/ittybitty
(stackoverflow_clone-virtualenv) 23:19 ~ $ ls
README.txt django_stackoverflow
(stackoverflow_clone-virtualenv) 23:19 ~ $ cd django_stackoverflow
(stackoverflow_clone-virtualenv) 23:19 ~/django_stackoverflow (main)$ ls -l
-rw-rw-r-- 1 ittybitty registered_users 22 Feb 12 19:09 README.md
drwxrwxr-x 7 ittybitty registered_users 4096 Feb 12 19:09 authors
-rw-rw-r-- 1 ittybitty registered_users 675 Feb 12 19:09 manage.py
drwxrwxr-x 4 ittybitty registered_users 4096 Feb 12 19:09 node_modules
-rw-rw-r-- 1 ittybitty registered_users 369 Feb 12 19:09 package-lock.json
-rw-rw-r-- 1 ittybitty registered_users 569 Feb 12 19:09 package.json
-rw-rw-r-- 1 ittybitty registered_users 8581 Feb 12 19:09 paginated_db_set.json
drwxrwxr-x 7 ittybitty registered_users 4096 Feb 12 19:09 posts
-rw-rw-r-- 1 ittybitty registered_users 107 Feb 12 19:09 requirements.txt
drwxrwxr-x 3 ittybitty registered_users 4096 Feb 12 23:12 stackoverflow_clone
drwxrwxr-x 2 ittybitty registered_users 4096 Feb 12 19:09 templates
drwxrwxr-x 4 ittybitty registered_users 4096 Feb 12 19:09 web_assets
(stackoverflow_clone-virtualenv) 23:19 ~/django_stackoverflow (main)$ ls -l stackoverflow_clone
total 20
-rw-rw-r-- 1 ittybitty registered_users 0 Feb 12 19:09 __init__.py
drwxrwxr-x 2 ittybitty registered_users 4096 Feb 12 23:13 __pycache__
-rw-rw-r-- 1 ittybitty registered_users 415 Feb 12 19:09 asgi.py
-rw-rw-r-- 1 ittybitty registered_users 4034 Feb 12 23:12 settings.py
-rw-rw-r-- 1 ittybitty registered_users 3247 Feb 12 19:09 urls.py
-rw-rw-r-- 1 ittybitty registered_users 415 Feb 12 19:09 wsgi.py
</code></pre>
<p><em>/var/www/ittybitty_pythonanywhere_com_wsgi.py</em></p>
<pre><code># +++++++++++ DJANGO +++++++++++
# To use your own django app use code like this:
import os
import sys
# assuming your django settings file is at '/home/ittybitty/mysite/mysite/settings.py'
# and your manage.py is is at '/home/ittybitty/mysite/manage.py'
path = '/home/ittybitty/django_stackoverflow'
if path not in sys.path:
sys.path.append(path)
os.environ['DJANGO_SETTINGS_MODULE'] = 'stackoverflow_clone.settings'
# then:
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
</code></pre>
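<p>As an aside, a gitignored file never reaches the server through a clone or pull, so on PythonAnywhere <code>local_settings.py</code> genuinely does not exist until it is created there by hand. A tolerant loader (a sketch using only the standard library, with the module path taken from the question's layout) makes that failure explicit instead of crashing at import time:</p>

```python
# Sketch: probe for the gitignored settings module instead of letting the
# bare `from .local_settings import *` raise ModuleNotFoundError.
import importlib

def load_local_settings(module_name):
    """Return the settings module, or None if it was never deployed."""
    try:
        return importlib.import_module(module_name)
    except ModuleNotFoundError:
        return None

# Hypothetical usage with the question's dotted path:
local = load_local_settings("stackoverflow_clone.local_settings")
if local is None:
    # e.g. fall back to environment variables, or fail with a clear message
    pass
```

<p>Either way, the fix on the server side is to create the file manually (or copy it over scp), since git will never transport it.</p>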
|
<python><django><git><pythonanywhere>
|
2023-02-12 04:44:45
| 1
| 739
|
binny
|
75,424,934
| 4,531,757
|
Pandas-Transpose multi column level
|
<p>I got stuck with my limited Pandas knowledge. I have hundreds of patients who have test results (Success or Fail) spread across dates. My objective is to group the test results by patient, brand, and status type, and spread them by date.</p>
<p>I am enclosing my sample data; the desired result is shown in the screenshot below. Please help.</p>
<pre><code>df2 = pd.DataFrame({'patient': ['one', 'one', 'one', 'one','one', 'one', 'one', 'one','two', 'two','two','two','two', 'two','two','two'],
'Brand': ['A', 'A', 'A', 'A', 'B', 'B','B','B','A','A','A','A','B', 'B', 'B', 'B'],
'date': ['11/1/2022', '11/2/2022', '11/3/2022', '11/4/2022', '11/5/2022', '11/6/2022','11/7/2022', '11/8/2022', '11/9/2022','11/10/2022', '11/11/2022','11/12/2022',
'11/1/2022', '11/2/2022', '11/3/2022', '11/4/2022'],
'Status1': ['S', 'F', 'F', 'F', 'S', 'F','F', 'F', 'S','F', 'F','F','S', 'F', 'F', 'F'],
'Status2': ['F', 'S', 'F', 'F', 'F', 'S','F', 'F', 'F','S', 'F','F','F', 'S', 'F', 'F'],
'Status3': ['F', 'F', 'S', 'F', 'F', 'F','S', 'F', 'F','F', 'S','F','F', 'F', 'S', 'F'],
'Status4': ['F', 'F', 'F', 'S', 'F', 'F','F', 'S', 'F','F', 'F','S','F', 'F', 'F', 'S']})
</code></pre>
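<p>Since the desired layout only exists as a screenshot, here is one plausible reshaping (an assumption about the target): melt the four status columns into long form, then pivot with the dates as columns:</p>

```python
import pandas as pd

df2 = pd.DataFrame({'patient': ['one'] * 8 + ['two'] * 8,
                    'Brand': ['A'] * 4 + ['B'] * 4 + ['A'] * 4 + ['B'] * 4,
                    'date': ['11/1/2022', '11/2/2022', '11/3/2022', '11/4/2022',
                             '11/5/2022', '11/6/2022', '11/7/2022', '11/8/2022',
                             '11/9/2022', '11/10/2022', '11/11/2022', '11/12/2022',
                             '11/1/2022', '11/2/2022', '11/3/2022', '11/4/2022'],
                    'Status1': list('SFFFSFFFSFFFSFFF'),
                    'Status2': list('FSFFFSFFFSFFFSFF'),
                    'Status3': list('FFSFFFSFFFSFFFSF'),
                    'Status4': list('FFFSFFFSFFFSFFFS')})

# Long form: one row per (patient, Brand, date, Status)...
long = df2.melt(id_vars=['patient', 'Brand', 'date'],
                var_name='Status', value_name='Result')
# ...then spread the dates across the columns.
wide = long.pivot_table(index=['patient', 'Brand', 'Status'],
                        columns='date', values='Result', aggfunc='first')
```

<p>If the screenshot groups or orders things differently, the same melt/pivot pair can be re-keyed by swapping entries between <code>index=</code> and <code>columns=</code>.</p>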
<p>Result Desired in the screenshot:</p>
<p><a href="https://i.sstatic.net/dvze5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dvze5.png" alt="enter image description here" /></a></p>
|
<python><pandas><numpy>
|
2023-02-12 04:43:11
| 1
| 601
|
Murali
|
75,424,785
| 1,128,412
|
Case insensitive Array any() filter in sqlalchemy
|
<p>I'm migrating some code from SQLAlchemy 1.3 to SQLAlchemy 1.4 with Postgres 12. I found a query that looks like this:</p>
<pre><code>session.query(Horse)
.filter(Horse.nicknames.any("charlie", operator=ColumnOperators.ilike))
</code></pre>
<p>The type of the column <code>nicknames</code> is <code>Column(ARRAY(String(64)))</code>.</p>
<p>It seems to me that this queries any <code>Horse</code> that has a nickname equal to <code>charlie</code>, compared case-insensitively (<code>ilike</code>).</p>
<p>This code seems to work fine in <code>SqlAlchemy==1.3.0</code> and fails in version <code>1.4.40</code> with the following error:</p>
<pre><code>sqlalchemy.exc.UnsupportedCompilationError:
Compiler <sqlalchemy.dialects.postgresql.psycopg2.PGCompiler_psycopg2 object at 0x7fce54c80f10>
can't render element of type <function ColumnOperators.ilike at 0x7fce92944280>
(Background on this error at: https://sqlalche.me/e/14/l7de)
</code></pre>
<p>What would be an equivalent way of doing this that works, ideally for both versions?</p>
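<p>One workaround that is often suggested (sketched here and verified only by compiling the SQL, not against a live database) is to phrase the condition directly as <code>'charlie' ILIKE ANY (nicknames)</code> using <code>any_()</code>, which avoids passing a bare function as the <code>operator=</code> argument:</p>

```python
# Sketch: build "literal ILIKE ANY (array column)" explicitly.
from sqlalchemy import Column, Integer, String, literal
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import ARRAY
from sqlalchemy.orm import declarative_base
from sqlalchemy.sql import any_

Base = declarative_base()

class Horse(Base):
    __tablename__ = "horses"
    id = Column(Integer, primary_key=True)
    nicknames = Column(ARRAY(String(64)))

criterion = literal("charlie").ilike(any_(Horse.nicknames))
sql = str(criterion.compile(dialect=postgresql.dialect()))
```

<p>The filter then becomes <code>session.query(Horse).filter(criterion)</code>; whether this also compiles identically under 1.3 should be checked in that environment.</p>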
|
<python><postgresql><sqlalchemy>
|
2023-02-12 03:44:38
| 1
| 3,609
|
Juan Enrique MuΓ±oz Zolotoochin
|
75,424,637
| 10,613,037
|
How to get inclusive difference between timestamps
|
<p>I'd like to get the difference between the end and start date columns, inclusive</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame()
df['start'] = pd.to_datetime(['1/1/2020','1/2/2020'])
df['end'] = pd.to_datetime(['1/31/2020', '1/25/2020'])
df['diff'] = df['end'] - df['start']
</code></pre>
<p>So instead of</p>
<pre><code> start end diff
0 2020-01-01 2020-01-31 30 days
1 2020-01-02 2020-01-25 23 days
</code></pre>
<p>I want to get 31 and 24 days. I can solve it by adding a 1-day Timedelta, but it seems a bit fragile. Is there any other way?</p>
<p><code>df['diff'] = df['end'] - df['start'] + pd.Timedelta(days=1)</code></p>
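<p>For what it's worth, the off-by-one is inherent to date subtraction: the difference counts the gaps between dates, not the dates touched, so an inclusive count always needs the extra day. Working on the <code>.dt.days</code> integers makes that intent explicit:</p>

```python
import pandas as pd

df = pd.DataFrame()
df['start'] = pd.to_datetime(['1/1/2020', '1/2/2020'])
df['end'] = pd.to_datetime(['1/31/2020', '1/25/2020'])

# Inclusive day count: both endpoints are counted, hence the +1.
df['diff_days'] = (df['end'] - df['start']).dt.days + 1
```

<p>This also leaves the column as plain integers rather than Timedeltas, which is often more convenient downstream.</p>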
|
<python><pandas>
|
2023-02-12 02:52:10
| 1
| 320
|
meg hidey
|
75,424,585
| 343,215
|
How do I work with Pandas multi-level series?
|
<p>I'm using Pandas <code>filter()</code> and <code>groupby()</code> to arrive at a count of employee types on a given day. I need to analyze this data for outliers. In this toy example, the outlier is any shift & day where there are not two (2) shifts.</p>
<pre><code>shifts = [("Cashier", "Thursday"), ("Cashier", "Thursday"),
("Cashier", "Thursday"), ("Cook", "Thursday"),
("Cashier", "Friday"), ("Cashier", "Friday"),
("Cook", "Friday"), ("Cook", "Friday"),
("Cashier", "Saturday"), ("Cook", "Saturday"),
("Cook", "Saturday")]
labels = ["JOB_TITLE", "DAY"]
df = pd.DataFrame.from_records(shifts, columns=labels)
df1 = df[['JOB_TITLE', 'DAY']].groupby('DAY')
shifts_series = df1['JOB_TITLE'].value_counts()
</code></pre>
<p>The results:</p>
<pre><code>DAY JOB_TITLE
Friday Cashier 2
Cook 2
Saturday Cook 2
Cashier 1
Thursday Cashier 3
Cook 1
Name: JOB_TITLE, dtype: int64
</code></pre>
<p>In this example, the data I need as output is,</p>
<ul>
<li>Which value(s) is == 1? <code>[('Saturday', 'Cashier'), ('Thursday', 'Cook')]</code></li>
<li>Which value(s) is > 2? <code>[('Thursday', 'Cashier')]</code></li>
</ul>
<p><strong>I'm thinking the better question is...</strong>
How can I work with Multi-index Series to generate the expected list of tuples?</p>
<p>Which I can then use to get to my end goal, <strong>ultimately</strong>, to add two new columns to the DataFrame <code>Worked a Double: (True,False)</code> and <code>Over staffed: (True,False)</code>.</p>
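<p>The boolean-mask route works directly on the MultiIndex Series: filtering it and taking <code>.index.tolist()</code> yields exactly the lists of tuples described above (a sketch):</p>

```python
import pandas as pd

shifts = [("Cashier", "Thursday"), ("Cashier", "Thursday"),
          ("Cashier", "Thursday"), ("Cook", "Thursday"),
          ("Cashier", "Friday"), ("Cashier", "Friday"),
          ("Cook", "Friday"), ("Cook", "Friday"),
          ("Cashier", "Saturday"), ("Cook", "Saturday"),
          ("Cook", "Saturday")]
df = pd.DataFrame.from_records(shifts, columns=["JOB_TITLE", "DAY"])

counts = df.groupby("DAY")["JOB_TITLE"].value_counts()  # MultiIndex Series

# Masking a MultiIndex Series keeps the index; .index gives (DAY, JOB_TITLE) tuples.
understaffed = counts[counts == 1].index.tolist()
overstaffed = counts[counts > 2].index.tolist()
```

<p>From those tuple lists, the two boolean columns can be derived with e.g. <code>df.set_index(["DAY", "JOB_TITLE"]).index.isin(overstaffed)</code>.</p>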
|
<python><pandas>
|
2023-02-12 02:32:20
| 1
| 2,967
|
xtian
|
75,424,473
| 511,571
|
Python return type hint of literal string
|
<p>This code is from GitPython library, and in particular from the <a href="https://github.com/gitpython-developers/GitPython/blob/3.1.30/git/cmd.py#L1207" rel="nofollow noreferrer">cmd.py file</a> (removed comments, to make it more concise):</p>
<pre><code>class Git(LazyMixin):
# A lot of code, removed
def __call__(self, **kwargs: Any) -> "Git":
self._git_options = self.transform_kwargs(split_single_char_options=True, **kwargs)
return self
# Some more code here
</code></pre>
<p>My question is: what does the return type <code>"Git"</code> (in quotes), which is a string literal, signify? I suspect it has something to do with the fact that the enclosing class is <code>Git</code>.</p>
<p>I looked into <a href="https://docs.python.org/3/library/typing.html" rel="nofollow noreferrer">python typing</a>, and browsed over all the PEPs mentioned in there, the only similar thing I found is <a href="https://peps.python.org/pep-0647/" rel="nofollow noreferrer">pep-647</a> (similar in that, the type hint is quoted string "TypeGuard[Person]", but does not seem to be related).</p>
<p>Can you help me understand what that means, and preferably a name, or a doc for this?</p>
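<p>The quoted <code>"Git"</code> is a <em>forward reference</em> (PEP 484, "Forward references"): while the class body is executing, the name <code>Git</code> is not yet bound, so the annotation is written as a string that type checkers resolve later. A minimal sketch:</p>

```python
class Git:
    def __call__(self, **kwargs) -> "Git":
        # A bare `Git` here would raise NameError at class-definition time,
        # because the class object doesn't exist yet; the string defers it.
        return self

g = Git()
same = g()
```

<p>With <code>from __future__ import annotations</code> (PEP 563) all annotations become lazily evaluated and the quotes are unnecessary; Python 3.11+ also offers <code>typing.Self</code> for exactly this "returns its own class" pattern.</p>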
|
<python><python-3.x><type-hinting>
|
2023-02-12 01:57:11
| 0
| 1,685
|
Virtually Real
|
75,424,467
| 10,557,442
|
How to customize superuser creation in Django when using a custom User model?
|
<p>I have the following models defined on my user's app:</p>
<pre class="lang-py prettyprint-override"><code>class Address(models.Model):
tower = models.SmallIntegerField()
floor = models.SmallIntegerField()
door = models.CharField(max_length=5)
class User(AbstractUser):
address = models.OneToOneField(Address, on_delete=models.CASCADE)
REQUIRED_FIELDS = ["email", "address"]
</code></pre>
<p>That is, I'm extending the Django's base <code>User</code> object, by adding a one to one field to a table which handles the address for each user.</p>
<p>But taking this approach, then when I try to create a superuser account from CLI, with <code>python manage.py createsuperuser</code>, the following happens:</p>
<pre class="lang-bash prettyprint-override"><code>Username: admin
Email address: admin@admin.com
Address (Address.id): 1
Error: address instance with id 1 does not exist.
</code></pre>
<p>So Django is requesting me to enter an address id but as no address is yet stored in the database, that error is raised.</p>
<p>Is there any way to create a superuser by entering both fields from <code>User</code> model and from <code>Address</code> model, and creating a record in both tables? That is, something like:</p>
<pre class="lang-bash prettyprint-override"><code>Username: admin
Email address: admin@admin.com
Tower: 1
Floor: 12
Door: A
</code></pre>
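<p>One direction that is often suggested is a custom manager whose <code>create_superuser</code> builds the <code>Address</code> row first and then the user pointing at it. A full Django setup can't run in isolation, so this sketch uses stand-in classes purely to show the flow (everything beyond the question's field names is hypothetical):</p>

```python
# Stand-ins for the real Django models; in a project these would be
# models.Model / AbstractUser with a custom UserManager.
class Address:
    def __init__(self, tower, floor, door):
        self.tower, self.floor, self.door = tower, floor, door

class User:
    def __init__(self, username, email, address):
        self.username, self.email, self.address = username, email, address

def create_superuser(username, email, tower, floor, door):
    # Create the related Address row first, then the user referencing it.
    address = Address(tower=tower, floor=floor, door=door)
    return User(username=username, email=email, address=address)

admin = create_superuser("admin", "admin@admin.com", tower=1, floor=12, door="A")
```

<p>In a real project this logic would live in a <code>UserManager</code> subclass assigned to <code>User.objects</code>, possibly paired with a custom <code>createsuperuser</code> management command that prompts for tower, floor, and door instead of an Address id.</p>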
|
<python><django>
|
2023-02-12 01:54:57
| 2
| 544
|
Dani
|
75,424,412
| 5,500,634
|
No module named 'pm4py.objects.petri' in pm4py
|
<p>I am using the open-source Python library <a href="https://github.com/pm4py" rel="nofollow noreferrer">pm4py</a>, and following the blog post <a href="https://medium.com/@c3_62722/process-mining-with-python-tutorial-a-healthcare-application-part-2-4cf57053421f" rel="nofollow noreferrer">Process Mining with Python tutorial: A healthcare application - Part 2</a>. When I ran</p>
<pre><code>from pm4py.objects.conversion.log import converter as log_converter
from pm4py.algo.discovery.alpha import algorithm as alpha_miner
log = log_converter.apply(eventlog)
net, initial_marking, final_marking = alpha_miner.apply(log)
</code></pre>
<p>no problem at all.</p>
<p>But when I import the following visualization module</p>
<pre><code>from pm4py.visualization.petrinet import visualizer as pn_visualizer
</code></pre>
<p>or try other module:</p>
<pre><code>from pm4py.objects.petri import performance_map
</code></pre>
<p>it showed</p>
<pre><code>ModuleNotFoundError: No module named 'pm4py.visualization.petrinet'
ModuleNotFoundError: No module named 'pm4py.objects.petri'
</code></pre>
<p>I checked the documentation and the <a href="https://pm4py-source.readthedocs.io/en/stable/pm4py.visualization.petrinet.html#module-pm4py.visualization.petrinet.visualizer" rel="nofollow noreferrer">classes</a> do exist in the library, so they are not deprecated.</p>
<p>I googled but failed to find anyone mentioning this bug. I also tried different Python versions (3.8 and 3.9), and even</p>
<pre><code>pip install "pm4py==<early version>"
</code></pre>
<p>still doesn't work. Did anyone else have same issue? Thanks</p>
|
<python><machine-learning>
|
2023-02-12 01:36:31
| 1
| 489
|
TripleH
|
75,424,405
| 11,671,779
|
Python: decorate a `class.method` that modifies attributes on `self`
|
<p>How do I access the <code>self</code> of the decorated method?</p>
<p>Based on <a href="https://stackoverflow.com/a/57807897/11671779">this answer</a>, <code>self</code> here refers to the decorator:</p>
<pre class="lang-py prettyprint-override"><code>class Decorator:
def __init__(self, func):
self.func = func
def __call__(self, *args, **kwargs):
print(self.func.__name__)
self.counter += 1
return self.func(*args, **kwargs)
class A:
def __init__(self):
self.counter = 0
@Decorator
def method1(self):
pass
</code></pre>
<p>Above example will cause:</p>
<pre class="lang-bash prettyprint-override"><code> 5 def __call__(self, *args, **kwargs):
----> 6 self.counter += 1
7 return self.func(*args, **kwargs)
AttributeError: 'Decorator' object has no attribute 'counter'
</code></pre>
<p>NOTE:</p>
<p>The <code>return self.func(*args, **kwargs)</code> line also causes an error. I don't yet fully understand how to pass the instance through to <code>A.method1</code>. The point is just that I want to update the counter and print <code>self.func.__name__</code>, that is all.</p>
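<p>One way to keep the class-based decorator and still reach the instance of <code>A</code> is the descriptor protocol: implementing <code>__get__</code> so attribute access binds the decorator to the instance, which then arrives as an explicit argument (a sketch; the counter lives on the <code>A</code> instance, matching the question's <code>__init__</code>):</p>

```python
import functools

class Decorator:
    def __init__(self, func):
        self.func = func

    def __get__(self, obj, objtype=None):
        # `a.method1` triggers this, binding the decorator to `a`,
        # so __call__ receives the real A instance as `obj`.
        return functools.partial(self.__call__, obj)

    def __call__(self, obj, *args, **kwargs):
        print(self.func.__name__)
        obj.counter += 1                       # the instance's counter
        return self.func(obj, *args, **kwargs)  # call the original method

class A:
    def __init__(self):
        self.counter = 0

    @Decorator
    def method1(self):
        pass

a = A()
a.method1()
a.method1()
```

<p>Without <code>__get__</code>, <code>a.method1</code> is just the <code>Decorator</code> instance itself, which is why the original code saw <code>self</code> as the decorator and never received the <code>A</code> instance at all.</p>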
|
<python><python-3.x><python-decorators><python-class>
|
2023-02-12 01:35:01
| 1
| 2,276
|
Muhammad Yasirroni
|
75,424,382
| 6,024,751
|
Use different values of expandtabs() in the same string - Python
|
<p>How can we define several tab lengths in a Python string? For example, we want to print the keys, value types and values of a dict nicely aligned (with varying sizes of keys and types):</p>
<pre><code>my_dict = {
"short_key": 4,
"very_very_very_very_very_long_keys": 5.0
}
formatted_string_1 = '\n'.join([f"{k}:\t({type(v).__name__})\t{v}".expandtabs(10) for k, v in my_dict.items()])
print(f"Option 1 (.expandtabs(10)), first tab is too small:\n{formatted_string_1}")
formatted_string_2 = '\n'.join([f"{k}:\t({type(v).__name__})\t{v}".expandtabs(40) for k, v in my_dict.items()])
print(f"\n\nOption 2 (.expandtabs(40)), second tab is too large:\n{formatted_string_2}")
</code></pre>
<p>Running this we get:</p>
<pre><code>Option 1 (.expandtabs(10)), first tab is too small:
short_key: (int) 4
very_very_very_very_very_long_keys: (float) 5.0
</code></pre>
<p>and:</p>
<pre><code>Option 2 (.expandtabs(40)), second tab is too large:
short_key: (int) 4
very_very_very_very_very_long_keys: (float) 5.0
</code></pre>
<p>I would like to be able to define a long tab for the first space, and a short tab for the second one, something like <code>.expandtabs([40, 10])</code>, such that we get two nice alignments:</p>
<pre><code>short_key: (int) 4
very_very_very_very_very_long_keys: (float) 5.0
</code></pre>
<p>Any idea?</p>
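<p><code>str.expandtabs</code> only accepts a single tab size, so per-column widths have to be computed from the data instead; f-string width specifiers (or <code>str.ljust</code>) then give each column its own alignment (a sketch):</p>

```python
my_dict = {
    "short_key": 4,
    "very_very_very_very_very_long_keys": 5.0,
}

# Per-column widths derived from the data, instead of fixed tab stops.
key_w = max(len(k) for k in my_dict) + 2
type_w = max(len(type(v).__name__) for v in my_dict.values()) + 4

lines = [f"{k + ':':<{key_w}}{'(' + type(v).__name__ + ')':<{type_w}}{v}"
         for k, v in my_dict.items()]
print('\n'.join(lines))
```

<p>The <code>+ 2</code> / <code>+ 4</code> padding amounts are arbitrary spacing choices; the key point is that both columns are measured over the whole dict before any line is rendered, which is exactly what fixed tab stops can't do.</p>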
|
<python><string><string-formatting><tabexpansion>
|
2023-02-12 01:27:10
| 2
| 790
|
Julep
|
75,424,342
| 3,788,614
|
Python: Extract text and element selectors from html elements
|
<p>Given something like the following html:</p>
<pre><code><div>
<div>
<meta ... />
<img />
</div>
<div id="main">
<p class="foo">Hello, World</p>
<div>
<div class="bar">Hey, there!</div>
</div>
</div>
</div>
</code></pre>
<p>How would I go about selecting only the elements that have text and outputting a generated, unique css selector for said element?</p>
<p>For this example, that would be:</p>
<pre><code> # can be even more specific if there are other .foo's
------
[ |
{ "html": "Hello, World", "selector": ".foo"},
{ "html": "Hey, there!", "selector": ".bar" }
]
</code></pre>
<p>Was playing with <code>BeautifulSoup</code> and <code>html_sanitizer</code> but wasn't getting great results.</p>
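<p>A rough BeautifulSoup sketch of the idea: walk every element, keep those with direct (non-whitespace) text of their own, and derive a naive selector preferring <code>id</code>, then class, then tag name. The uniqueness logic is deliberately minimal (an assumption, not a complete selector generator):</p>

```python
from bs4 import BeautifulSoup

html = """<div>
  <div><meta/><img/></div>
  <div id="main">
    <p class="foo">Hello, World</p>
    <div><div class="bar">Hey, there!</div></div>
  </div>
</div>"""

soup = BeautifulSoup(html, "html.parser")
results = []
for el in soup.find_all(True):
    # Only text belonging to this element itself, not to its descendants.
    direct_text = "".join(el.find_all(string=True, recursive=False)).strip()
    if direct_text:
        if el.get("id"):
            selector = f"#{el['id']}"
        elif el.get("class"):
            selector = "." + ".".join(el["class"])
        else:
            selector = el.name
        results.append({"html": direct_text, "selector": selector})
```

<p>For guaranteed-unique selectors, the fallback branch would need to walk up the ancestry (e.g. building <code>#main > div > .bar</code>) or append an <code>:nth-of-type()</code> index.</p>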
|
<python><html><beautifulsoup><css-selectors>
|
2023-02-12 01:09:51
| 1
| 1,116
|
Jack
|
75,424,275
| 19,130,803
|
How to kill a thread in python
|
<p>I have a Flask app with a Start button and a Cancel button.
I have a task to process. I am creating a new thread for it to run in the background, set as a daemon thread. Since it is a long-running task, I am trying to provide a cancel option for it.</p>
<p>Methods used as below:</p>
<ul>
<li>I tried using Event by checking is_set() in while loop and on cancel button calling set()</li>
<li>I tried using signals by passing thread id (ident) and SIGTERM signal to pthread_kill()</li>
</ul>
<p>For both methods, when I click the Cancel button the task does not get cancelled and executes completely.</p>
<pre><code> class BackgroundWorker(Thread):
# event = Event()
def __init__(self, name: str, daemon: bool, a: str, b: str, c: int) -> None:
self.a: str = a
self.b: str = b
self.c: int = c
# self.event = Event()
super().__init__(name=name, daemon=daemon)
def run(self) -> None:
while True:
long_background_task(self.a, self.b, self.c)
# Creating object
task = BackgroundWorker("task", True, a, b, c)
task.start()
</code></pre>
<p>Is there any other way to terminate the running thread or what I am missing in the listed methods?</p>
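<p>For reference, Python offers no safe way to kill a thread from outside; the Event approach only works if the event is checked <em>inside</em> the long task, between small units of work, not just once around one big blocking call. A sketch of that structure, with <code>time.sleep</code> standing in for one unit of work:</p>

```python
from threading import Event, Thread
import time

class BackgroundWorker(Thread):
    def __init__(self, name):
        super().__init__(name=name, daemon=True)
        self.cancel_event = Event()

    def run(self):
        # The long-running work must poll the event between small steps;
        # setting the event cannot interrupt a single blocking call.
        for _ in range(1000):
            if self.cancel_event.is_set():
                return
            time.sleep(0.001)  # stand-in for one small unit of real work

worker = BackgroundWorker("task")
worker.start()
worker.cancel_event.set()   # what the Cancel button handler would do
worker.join(timeout=5)
```

<p>If <code>long_background_task</code> is one indivisible call, it has to be broken into resumable chunks (or moved into a separate <em>process</em>, which can be terminated) before any cancellation flag can take effect.</p>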
|
<python><flask>
|
2023-02-12 00:48:00
| 0
| 962
|
winter
|
75,424,197
| 1,441,864
|
Python email module behaves unexpectedly when trying to parse "raw" subject lines
|
<p>I have trouble parsing an email which is encoded in win-1252 and contains the following header (literally like that in the file):</p>
<pre><code>Subject: Счета на оплату по заказу   .  .
</code></pre>
<p>Here is a hexdump of that area:</p>
<pre><code>000008a0 56 4e 4f 53 41 52 45 56 40 41 43 43 45 4e 54 2e |VNOSAREV@ACCENT.|
000008b0 52 55 3e 0d 0a 53 75 62 6a 65 63 74 3a 20 d1 f7 |RU>..Subject: ..|
000008c0 e5 f2 e0 20 ed e0 20 ee ef eb e0 f2 f3 20 ef ee |... .. ...... ..|
000008d0 20 e7 e0 ea e0 e7 f3 20 20 20 2e 20 20 2e 20 20 | ...... . . |
000008e0 20 20 0d 0a 58 2d 4d 61 69 6c 65 72 3a 20 4f 73 | ..X-Mailer: Os|
000008f0 74 72 6f 53 6f 66 74 20 53 4d 54 50 20 43 6f 6e |troSoft SMTP Con|
</code></pre>
<p>I realize that this encoding doesn't adhere to the usual RFC 1342 style encoding of <code>=?charset?encoding?encoded-text?=</code> but I assume that many email clients will still correctly display the subject and hence I would like to extract it correctly as well. For context: I am not making these emails up or creating them, they are given and I need to deal with them as is.</p>
<p>My approach so far was to use the <code>email</code> module that comes with Python:</p>
<pre class="lang-py prettyprint-override"><code>import email
with open('data.eml', 'rb') as fp:
content = fp.read()
mail = email.message_from_bytes(content)
print(mail.get('subject'))
# ����� �� ������ �� ������ . .
print(mail.get('subject').encode())
# '=?unknown-8bit?b?0ffl8uAg7eAg7u/r4PLzIO/uIOfg6uDn8yAgIC4gIC4gICAg?='
</code></pre>
<p><strong>My questions are:</strong></p>
<ol>
<li>can I somehow convince the <code>email</code> module to parse mails with subjects like this correctly?</li>
<li>if not, can I somehow access the "raw" data of this header? i.e. the entries of <code>mail._headers</code> without accessing private properties?</li>
<li>if not, can someone recommend a more versatile Python module for email parsing?</li>
</ol>
<p><strong>Some random observations:</strong></p>
<p>a) Poking around in the internal data structure of <code>mail</code>, I arrived at <code>[hd[1] for hd in mail._headers if hd[0] == 'Subject']</code> which is:</p>
<pre><code>['\udcd1\udcf7\udce5\udcf2\udce0 \udced\udce0 \udcee\udcef\udceb\udce0\udcf2\udcf3 \udcef\udcee \udce7\udce0\udcea\udce0\udce7\udcf3 . . ']
</code></pre>
<p>b) According to the docs, <code>mail.get_charsets()</code> returns a list of character sets in case of multipart message, and it returns <code>[None, 'windows-1251', None]</code> here. So at least theoretically, the modules does have a chance to guessing the correct charset.</p>
<p>For completeness, the SHA256 has of the email file is <code>1aee4d068c2ae4996a47a3ae9c8c3fa6295a14b00d9719fb5ac0291a229b4038</code> (and I uploaded it to <a href="https://malshare.com/sample.php?action=detail&hash=1aee4d068c2ae4996a47a3ae9c8c3fa6295a14b00d9719fb5ac0291a229b4038" rel="nofollow noreferrer">MalShare</a> and <a href="https://www.virustotal.com/gui/file/1aee4d068c2ae4996a47a3ae9c8c3fa6295a14b00d9719fb5ac0291a229b4038" rel="nofollow noreferrer">VirusTotal</a>).</p>
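<p>Regarding question 2: the surrogates shown in observation a) <em>are</em> the raw header bytes. The email package stores undecodable bytes using the <code>surrogateescape</code> error handler, so round-tripping through it recovers the original bytes, which can then be decoded with the charset guessed in observation b) (a sketch, assuming windows-1251 is correct for this message):</p>

```python
def redecode_header(value: str, charset: str = "windows-1251") -> str:
    # \udc80-\udcff surrogates map one-to-one back to bytes 0x80-0xff.
    return value.encode("ascii", "surrogateescape").decode(charset)

# The surrogate-escaped subject fragment from observation a):
raw = ('\udcd1\udcf7\udce5\udcf2\udce0 \udced\udce0 '
       '\udcee\udcef\udceb\udce0\udcf2\udcf3')
subject = redecode_header(raw)  # Счета на оплату
```

<p>So <code>redecode_header(mail.get('subject'))</code> avoids touching <code>mail._headers</code> at all; the remaining (harder) problem is guessing the charset when <code>get_charsets()</code> gives no usable hint.</p>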
|
<python><email><character-encoding><subject><eml>
|
2023-02-12 00:24:41
| 3
| 823
|
born
|
75,424,185
| 6,228,056
|
Pandas pull rows where any column contains certain strings
|
<p>I'm trying to return rows where any of my columns contain any of the words in a word list. Let's say <code>word_list = ['Synthetic', 'Advanced or Advantage/Excellence']</code>. I've tried the following code <code>df[df.apply(' '.join, 1).str.contains('|'.join(word_list))]</code>.</p>
<p>The problem is that some of my columns contain null values, so after running that code I got the error <code>TypeError: sequence item 0: expected str instance, int found</code> (maybe Pandas treats the null values as "int" type?).</p>
<p>Is there any way I can construct my code so that Pandas either ignores the null values or treats them as strings, so that my function works?</p>
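<p>One sketch (with a made-up frame, since the real one isn't shown): stringify the whole frame first, so nulls and numbers can no longer break the join. If the search words ever contain regex metacharacters, each would additionally need <code>re.escape</code> before joining with <code>|</code>:</p>

```python
import pandas as pd

word_list = ['Synthetic', 'Advanced or Advantage/Excellence']

# Hypothetical frame mixing strings, nulls, and ints:
df = pd.DataFrame({
    "col1": ["Synthetic fiber", None, "plain"],
    "col2": [None, "Advanced or Advantage/Excellence", 3],
})

# fillna + astype(str) guarantees every cell is a string before joining.
joined = df.fillna('').astype(str).apply(' '.join, axis=1)
matches = df[joined.str.contains('|'.join(word_list))]
```

<p>This keeps the original one-liner's shape; only the row-joining step is hardened against non-string cells.</p>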
|
<python><pandas><string><dataframe><null>
|
2023-02-12 00:22:35
| 1
| 865
|
Stanleyrr
|
75,424,006
| 5,157,280
|
Is @staticmethod thread blocking?
|
<p>I had this question while adding an explicit wait for selenium as below.</p>
<pre class="lang-py prettyprint-override"><code>class Wait:
@staticmethod
def wait_until_element_present(driver: webdriver, timeout: int = 10) -> None:
WebDriverWait(driver, timeout).until(
EC.presence_of_element_located((By.ID, "myDynamicElement"))
)
</code></pre>
<p>Just in case you have not worked with Selenium: the above code simply polls the DOM until the element is present and then lets the program move on. So this method can hold the thread until the given condition evaluates to true. If I want to run tests in parallel, this method might get called simultaneously. My question is: will this delay the other calls to the method in a parallel test execution situation?</p>
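<p>A quick way to convince yourself: time several threads calling a static method concurrently. A <code>@staticmethod</code> holds no lock of its own, each call gets its own stack frame, and waits that sleep (as Selenium's polling does) release the GIL, so the calls overlap rather than queue (a sketch with <code>time.sleep</code> standing in for the DOM poll):</p>

```python
import threading
import time

class Wait:
    @staticmethod
    def wait_until(timeout: float) -> None:
        # Each call runs on its caller's own stack; nothing here is shared.
        time.sleep(timeout)

start = time.monotonic()
threads = [threading.Thread(target=Wait.wait_until, args=(0.5,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start  # ~0.5 s if concurrent, ~2.0 s if serialized
```

<p>So a static method is not inherently blocking; calls would only serialize if the method touched a shared lock or shared mutable state, neither of which the question's wait does.</p>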
|
<python><python-3.x><multithreading><blocking>
|
2023-02-11 23:36:17
| 1
| 322
|
Rasika
|
75,423,878
| 14,566,295
|
How to apply pipe to dictionary in python
|
<p>I am wondering if there is any way to apply a pipe operator to a dictionary object, like with a pandas DataFrame.</p>
<p>With pandas dataframe we can do below steps:</p>
<pre><code>import pandas as pd
dat1 = pd.DataFrame({'A' : [0], 'B' : [1]})
dat2 = pd.DataFrame({'C' : [0], 'D' : [1]})
chose = 'something'
((dat1 if chose == 'something' else dat2)
.pipe(lambda x : x.assign(col_new = lambda z : 'some_value'))
)
</code></pre>
<p>Similarly now let's say we have a dictionary:</p>
<pre><code>dat1 = {'A' : [0], 'B' : [1]}
dat2 = {'C' : [0], 'D' : [1]}
chose = 'something'
((dat1 if chose == 'something' else dat2)
.pipe(lambda x : x['A'])
)
</code></pre>
<p>But now I get below error</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 2, in <module>
AttributeError: 'dict' object has no attribute 'pipe'
</code></pre>
<p>Is there any way to apply pipe like pandas to dictionary object?</p>
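<p>Plain dicts have no <code>.pipe</code> method, but the behavior is trivial to reproduce as a free function that threads any object through a chain of callables (a sketch; libraries such as <code>toolz.pipe</code> provide the same thing ready-made):</p>

```python
def pipe(obj, *funcs):
    """Minimal stand-in for DataFrame.pipe that works on any object."""
    for f in funcs:
        obj = f(obj)
    return obj

dat1 = {'A': [0], 'B': [1]}
dat2 = {'C': [0], 'D': [1]}
chose = 'something'

result = pipe(dat1 if chose == 'something' else dat2,
              lambda x: x['A'])
```

<p>Because <code>pipe</code> is just left-to-right function application, any number of steps can be chained: <code>pipe(d, step1, step2, step3)</code>.</p>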
|
<python>
|
2023-02-11 23:08:00
| 2
| 1,679
|
Brian Smith
|
75,423,821
| 12,546,311
|
Find first row where value is less than threshold
|
<p>I have a pandas DataFrame where I try to find the first <code>ID</code> for which <code>left</code> is less than each of the values in</p>
<pre><code>list = [0,50,100,150,200,250,500,1000]
</code></pre>
<pre><code> ID ST ... csum left
0 0 AK ... 4.293174e+05 760964.996900
1 1 AK ... 4.722491e+06 760535.679500
2 2 AK ... 8.586347e+06 760149.293900
3 3 AK ... 2.683233e+07 758324.695200
4 4 AK ... 2.962290e+07 758045.638900
.. ... ... ... ... ...
111 111 AK ... 7.609006e+09 107.329336
112 112 AK ... 7.609221e+09 85.863469
113 113 AK ... 7.609435e+09 64.397602
114 114 AK ... 7.609650e+09 42.931735
115 115 AK ... 7.610079e+09 0.000000
</code></pre>
<p>So I would end up with a list or dataframe looking like</p>
<pre><code>threshold ID
0 115
50 114
100 112
150 100
200 100
250 99
500 78
1000 77
</code></pre>
<p>How can I achieve this?</p>
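<p>With a toy frame of the same shape (monotonically decreasing <code>left</code>), one sketch is a comprehension taking the first matching ID per threshold. Note the desired table pairs threshold 0 with the row where <code>left == 0.0</code>, which suggests an inclusive comparison (an assumption on my part):</p>

```python
import pandas as pd

# Toy stand-in for the question's frame: `left` decreases as ID grows.
df = pd.DataFrame({
    "ID": range(8),
    "left": [1_000_000.0, 600.0, 300.0, 175.0, 120.0, 80.0, 30.0, 0.0],
})
thresholds = [0, 50, 100, 150, 200, 250, 500, 1000]

# First ID whose `left` has dropped to (or below) each threshold.
out = pd.DataFrame({
    "threshold": thresholds,
    "ID": [df.loc[df["left"] <= t, "ID"].iloc[0] for t in thresholds],
})
```

<p>If a threshold can fall below every <code>left</code> value, the <code>.iloc[0]</code> needs a guard (e.g. check <code>(df["left"] &lt;= t).any()</code> first); for large frames, sorting and <code>numpy.searchsorted</code> would avoid the per-threshold scan.</p>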
|
<python><pandas><dataframe>
|
2023-02-11 22:51:02
| 2
| 501
|
Thomas
|
75,423,594
| 7,267,480
|
pyswarms optimization of function in python
|
<p>I am trying to implement a fitting routine for experimentally measured data.
The function that I am trying to optimize is a black box: I don't know anything about its internals right now, but I can call it with some parameters.</p>
<p>I am trying to find optimal parameters for a function f(x), where x is the list of parameters to optimize.</p>
<p>The function f() returns one value as a result.</p>
<p>I'm trying to use Particle Swarm Optimization to find optimal parameters for x.
I have bounds for all the parameters inside x, and I also have initial guesses for almost all of them.</p>
<p>As the toy-problem trying to get this code working:</p>
<pre><code>import pyswarms as ps
import numpy as np
# Define the function to optimize
def f1(x:list) -> float:
return x[0]**2 + x[1]**2 + x[2]**2
# Define the bounds for the parameters to optimize
# Create bounds
max_bound = 5 * np.ones(3)
min_bound = - max_bound
bounds = (min_bound, max_bound)
print(bounds)
# Set up the optimization options
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
# Perform the optimization
dimensions = 3 # because we have 3 inputs for f1()??
# how to give the PSO initial values for all optimization parameters?
# init_pos =
optimizer = ps.single.GlobalBestPSO(n_particles=100, dimensions=dimensions, options=options, bounds=bounds, init_pos=None)
cost, pos = optimizer.optimize(f1, iters=1000)
# Print the optimized parameters and the cost
optimized_params = pos
print("Optimized parameters: ", optimized_params)
print("Cost: ", cost)
</code></pre>
<p>It gives an error here:</p>
<pre><code>ValueError: operands could not be broadcast together with shapes (3,) (100,)
</code></pre>
<p>What am I doing wrong?</p>
<p>If I pass the <code>n_particles=3</code> parameter it actually runs, but it can't find the minimum of the function and is really slow. That is strange, so I am pretty confused.</p>
<p><strong>Note:</strong> in my real-world application the number of elements in the x-list can be relatively large, approximately 100.</p>
<p>And the real application must vary all the components inside the x-list...
Maybe someone can suggest a Python module to use PSO efficiently?</p>
<p>How can I give the optimizer the information on the initial guesses for the parameters in this case?</p>
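<p>The broadcast error comes from the objective's calling convention: pyswarms evaluates the <em>whole swarm</em> at once, passing an array of shape <code>(n_particles, dimensions)</code> and expecting one cost per particle back, shape <code>(n_particles,)</code>. Indexing <code>x[0]</code>, <code>x[1]</code>, <code>x[2]</code> as in the question therefore picks particles, not coordinates, and with <code>n_particles=3</code> the shapes coincide by accident, which is why that "works" yet optimizes nonsense. A sketch of the vectorized form and of shaping <code>init_pos</code> (pyswarms itself is not invoked here):</p>

```python
import numpy as np

# Vectorized objective: x has shape (n_particles, dimensions),
# the return value has shape (n_particles,).
def f1(x: np.ndarray) -> np.ndarray:
    return np.sum(x ** 2, axis=1)

swarm = np.random.uniform(-5, 5, size=(100, 3))  # 100 particles, 3 dimensions
costs = f1(swarm)

# Initial guesses go in via init_pos, also (n_particles, dimensions):
# e.g. jittered copies of one guess (the guess values here are made up).
guess = np.array([1.0, -2.0, 0.5])
init_pos = np.tile(guess, (100, 1)) + np.random.uniform(-0.1, 0.1, (100, 3))
# optimizer = ps.single.GlobalBestPSO(n_particles=100, dimensions=3,
#                                     options=options, bounds=bounds,
#                                     init_pos=init_pos)
```

<p>With the objective vectorized like this, <code>n_particles</code> is free to be anything, and the initial guesses simply become the swarm's starting positions.</p>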
|
<python><optimization><particle-swarm>
|
2023-02-11 22:04:37
| 1
| 496
|
twistfire
|
75,423,541
| 6,630,397
|
In Spyder, is it possible to avoid "SyntaxError: 'return' outside function" when running a cell with a return statement
|
<p>In the <a href="https://www.spyder-ide.org/" rel="nofollow noreferrer">Spyder</a> (v.5.4.2) IDE, let:</p>
<pre class="lang-py prettyprint-override"><code>import modules
# %%
def func(**args):
# %% Spyder cell N
do stuff
if not var:
return # <-- this line causes the error in Spyder when running the current cell
# %%
else:
do other stuff
retval = something
# %%
return something
</code></pre>
<p>When I run the Spyder cell N, it says: <code>SyntaxError: 'return' outside function</code>.</p>
<p>Is there a way to avoid that so that I can run this particular cell without having to comment the <code>return</code> statement in the middle of the function?</p>
<p>Maybe a trick to tell Spyder not to interpret this line, for example as the <code># noqa</code> comment for some linters?</p>
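<p>Absent a Spyder directive for this (none is known to me), one workaround is structural rather than IDE-specific: keep each cell's body in a module-level helper, so the cell itself contains no bare <code>return</code>. A sketch with hypothetical names:</p>

```python
# %% Spyder cell N -- now a top-level function, so it can be run as a cell
def cell_n(var, data):
    # do stuff
    if not var:
        return None
    return data

# %%
def func(var=None, data=None):
    # the original function body shrinks to calls into cell-sized helpers
    return cell_n(var, data)
```

<p>Running the cell then just (re)defines <code>cell_n</code>, which can be invoked from the console with whatever test inputs are handy, instead of executing a fragment torn out of a function body.</p>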
|
<python><spyder>
|
2023-02-11 21:54:07
| 0
| 8,371
|
swiss_knight
|
75,423,423
| 7,437,143
|
Adding a new attribute to a subclass of plotly's Annotation object?
|
<h2>Error message</h2>
<p>After creating a subclass that adds one attribute to the original <code>Annotation</code> class from Plotly, I am receiving this error:</p>
<pre><code>File "/home/name/git/snn/snncompare/src/snncompare/mwe0.py", line 8, in __init__
self.edge = edge
File "/home/name/anaconda/envs/snncompare/lib/python3.10/site-packages/plotly/basedatatypes.py", line 4910, in __setattr__
self._raise_on_invalid_property_error()(prop)
File "/home/name/anaconda/envs/snncompare/lib/python3.10/site-packages/plotly/basedatatypes.py", line 5070, in _ret
raise _error_to_raise(
ValueError: Invalid property specified for object of type plotly.graph_objs.layout.OtherAnnotation: 'edge'
Did you mean "name"?
Valid properties:
...
Did you mean `name`?
</code></pre>
<h2>Attempt</h2>
<p>In essence, I am not able to add the new attribute to the subclass:</p>
<pre class="lang-py prettyprint-override"><code>from plotly.graph_objs.layout import Annotation
class OtherAnnotation(Annotation):
def __init__(self, edge, *args, **kw):
print(f'args={args}')
print(f'kw={kw}')
super(OtherAnnotation, self).__init__(*args, **kw)
self.edge = edge
def create_annotation_with_identifier():
some_annotation= OtherAnnotation(
        edge="hello world",  # <-- new parameter
x=5,
y=6,
xref="x",
yref="y",
text="dict Text",
align='center',
showarrow=False,
yanchor='bottom',
textangle=90
)
print(some_annotation.edge)
</code></pre>
<h2>Question</h2>
<p>Is it possible and advisable to create some sort of super/sub class that can function as the original <code>Annotation</code> object whilst carrying an extra property that is not in the <code>Annotation</code> object?</p>
<p>A work-around could be to create a new object named:</p>
<pre class="lang-py prettyprint-override"><code>class NewObject():
def __init__(self, annotation:Annotation, extra_arg:str):
self.annotation:Annotation=annotation
self.extra_arg:str = extra_arg
</code></pre>
<p>However, I thought there may be a better way.</p>
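<p>For what it's worth, the composition work-around can be made more transparent with attribute delegation via <code>__getattr__</code>, so property reads fall through to the wrapped annotation. This is only a sketch: <code>AnnotationWithEdge</code> is a hypothetical name, and <code>SimpleNamespace</code> stands in for plotly's <code>Annotation</code> here so the example runs without plotly. The wrapper is still not a real <code>Annotation</code>, so plotly itself would need the inner object when building a figure.</p>

```python
from types import SimpleNamespace

class AnnotationWithEdge:
    """Wraps an annotation-like object and carries one extra attribute.

    Attribute lookups not found on the wrapper are delegated to the
    wrapped object, so reading properties feels like using the original.
    """
    def __init__(self, annotation, edge):
        self.annotation = annotation
        self.edge = edge

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails on the wrapper.
        return getattr(self.annotation, name)

# Stand-in for plotly.graph_objs.layout.Annotation, to stay self-contained.
base = SimpleNamespace(x=5, y=6, text="dict Text")
wrapped = AnnotationWithEdge(base, edge="hello world")
print(wrapped.edge)  # hello world
print(wrapped.text)  # dict Text (delegated)
```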
|
<python><properties><attributes><plotly-dash><super>
|
2023-02-11 21:31:18
| 0
| 2,887
|
a.t.
|
75,423,388
| 4,320,924
|
Python's orphaned parser commands expr, st2list, sequence2st, compilest: are there any replacements?
|
<p>I have (I might even say I inherited it) a superb curve fitting library entirely written in Python. It worked pretty well until 3.10 was released. Then the module <strong>parser</strong> was removed (it had been deprecated since 3.9) and, obviously, my much-loved library stopped working. It all boils down to four parser commands only: <strong>expr, st2list, sequence2st</strong> and <strong>compilest</strong>, which appear in just a few lines (listed in their order of appearance, but not necessarily one right after the other):</p>
<pre><code>st = parser.expr(stringToConvert)
stList = parser.st2list(st)
st = parser.sequence2st(stList)
self.userFunctionCodeObject = parser.compilest(st)
</code></pre>
<p>I never knew exactly what <strong>parser</strong> did, and, needless to say, I'm completely lost as to what to do now to get my library working again. I can't write my own parser commands because I don't know what the deprecated commands did. Also, I don't know what the parameters in the calls (stringToConvert, st, stList) are, type-wise, nor how they look like. I wish I could tell, but the execution aborts before any debug can be made since <strong>parser</strong> can't be imported. I tried (and have been trying) Google, but the more I delve into this matter, the more confused I get.</p>
<p>Any hints? I'm using Windows 11 and Python 3.10.8-64 bit. Thanks!</p>
<p><strong>PS:</strong>
Indeed, there is just 1 line in the middle of those commands. Here it is the sequence as it appears in the code:</p>
<pre><code># convert integer use such as (3/2) into floats such as (3.0/2.0)
st = parser.expr(stringToConvert)
stList = parser.st2list(st)
stList = self.RecursivelyConvertIntStringsToFloatStrings(stList)
st = parser.sequence2st(stList)
# later evals re-use this compiled code for improved performance
# in EvaluateCachedData() methods
self.userFunctionCodeObject = parser.compilest(st)
</code></pre>
<p>It's an internal function. I might try to check what it does.</p>
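<p>For anyone hitting the same wall: the usual stdlib replacement for those four calls is the <code>ast</code> module plus the built-in <code>compile</code>. <code>ast.parse(source, mode='eval')</code> plays the role of <code>parser.expr</code>, transforming the tree with <code>ast.NodeTransformer</code> replaces the <code>st2list</code>/<code>sequence2st</code> round trip, and <code>compile</code> replaces <code>parser.compilest</code>. A hedged sketch of the int-to-float rewrite described in the comment (note that in Python 3, <code>3/2</code> is already float division, so the rewrite mostly matters for preserving the library's legacy semantics):</p>

```python
import ast

def compile_with_float_constants(expression):
    """Parse an expression, turn integer literals into floats, compile it.

    Mirrors the old parser.expr -> st2list -> (rewrite) -> sequence2st
    -> compilest pipeline using the ast module.
    """
    tree = ast.parse(expression, mode="eval")  # like parser.expr

    class IntToFloat(ast.NodeTransformer):
        def visit_Constant(self, node):
            if isinstance(node.value, int) and not isinstance(node.value, bool):
                return ast.copy_location(ast.Constant(float(node.value)), node)
            return node

    tree = ast.fix_missing_locations(IntToFloat().visit(tree))
    return compile(tree, "<user function>", "eval")  # like parser.compilest

code_obj = compile_with_float_constants("(3/2) + x")
print(eval(code_obj, {"x": 1}))  # 2.5
```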
|
<python><python-3.x><parsing>
|
2023-02-11 21:23:48
| 1
| 408
|
Fausto Arinos Barbuto
|
75,423,382
| 14,509,475
|
How to remove carriage return characters from string as if it was printed?
|
<p>I would like to remove all occurrences of <code>\r</code> from a string as if it was printed via <code>print()</code> and store the result in another variable.</p>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>>>> s = "hello\rworld"
>>> print(s)
world
</code></pre>
<p>In this example, how do I "print" <code>s</code> to a new variable which then contains the string <code>"world"</code>?</p>
<p>Background:
I am using the subprocess module to capture the stdout which contains a lot of <code>\r</code> characters. In order to effectively analyze the string I would like to only have the resulting output.</p>
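<p>One hedged approach is to emulate what a terminal does: <code>\r</code> moves the cursor to the start of the line and later characters overwrite earlier ones. Keeping only the text after the last <code>\r</code> on each line (the hypothetical <code>render</code> helper below) is a simplification that assumes each new segment fully overwrites the previous one, which holds for typical progress-bar output:</p>

```python
def render(raw):
    """Approximate the visible result of printing text with carriage returns.

    Assumes each segment after a CR fully overwrites the previous one,
    which is true for typical progress-bar style output.
    """
    return "\n".join(line.split("\r")[-1] for line in raw.split("\n"))

print(render("hello\rworld"))  # world
print(render("step 1\rstep 2\ndone"))
```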
|
<python><python-3.x><string><stdout><carriage-return>
|
2023-02-11 21:22:58
| 3
| 496
|
trivicious
|
75,423,304
| 2,755,116
|
Make python script executable via pip using pyproject.toml
|
<p>I cannot find any way to make an executable script via <code>pyproject.toml</code>. All I found is to use the old <code>setup.cfg</code> file. The <a href="https://packaging.python.org/en/latest/tutorials/packaging-projects/" rel="nofollow noreferrer">official documentation</a> says nothing about that.</p>
<p>Suppose you simply create a function as the official documentation does:</p>
<pre><code>def add_one(number):
return number + 1
</code></pre>
<p>and you want to have this (save it as <code>src/example_package_YOUR_USERNAME_HERE/myscript.py</code> following the example in the documentation)</p>
<pre><code>from example_package_YOUR_USERNAME_HERE import example
example.add_one(2)
</code></pre>
<p>as executable. How to do that?</p>
|
<python><pip><executable>
|
2023-02-11 21:09:08
| 0
| 1,607
|
somenxavier
|
75,423,280
| 3,132,935
|
Python is printing 0x90c2 instead of just 0x90 NOP
|
<p>The following command is outputting 200 bytes of 'A' followed by one byte of 0x0a:</p>
<pre class="lang-python prettyprint-override"><code>python3 -c "print('\x41'*200)" > out.txt
</code></pre>
<p><code>hexdump out.txt</code> confirms this:</p>
<pre><code>0000000 4141 4141 4141 4141 4141 4141 4141 4141
*
00000c0 4141 4141 4141 4141 000a
00000c9
</code></pre>
<p>However, whenever I try to output 200 bytes of NOP sled (0x90), for some reason, python decides to also add a series of 0xc2 after every 0x90. So I'm running this:</p>
<pre class="lang-python prettyprint-override"><code>python3 -c "print('\x90'*200)" > out.txt
</code></pre>
<p>And according to <code>hexdump out.txt</code>:</p>
<pre><code>0000000 90c2 90c2 90c2 90c2 90c2 90c2 90c2 90c2
*
0000190 000a
0000191
</code></pre>
<p>This is not an issue in perl as the following outputs 200 bytes of NOP sled:</p>
<pre class="lang-perl prettyprint-override"><code>perl -e 'print "\x90" x 200' > out.txt
</code></pre>
<p>Why is Python outputting 0x90 followed by 0xc2?</p>
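<p>This looks like text encoding rather than Python inserting bytes: <code>print</code> writes a <code>str</code>, and when stdout is UTF-8 the code point U+0090 encodes as the two bytes <code>c2 90</code>, while <code>'\x41'</code> is ASCII and stays a single byte. Writing raw bytes to <code>sys.stdout.buffer</code> sidesteps the text layer; a sketch:</p>

```python
import sys

# '\x90' is the Unicode code point U+0090; UTF-8 encodes it as two bytes.
assert "\x90".encode("utf-8") == b"\xc2\x90"
assert "\x41".encode("utf-8") == b"\x41"  # ASCII stays one byte

# Writing raw bytes bypasses text encoding entirely:
sys.stdout.buffer.write(b"\x90" * 200)
```

<p>The equivalent one-liner would be <code>python3 -c "import sys; sys.stdout.buffer.write(b'\x90'*200)" > out.txt</code>.</p>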
|
<python><python-3.x><no-op>
|
2023-02-11 21:03:02
| 3
| 900
|
ramon
|
75,423,278
| 12,013,353
|
How are arrays formed in particle swarm optimization with pyswarms?
|
<p>I'm very new to optimization algorithms. I'm trying to do a particle swarm optimization in python with <code>pyswarms</code>, but I'm obviously missing something as I'm constantly getting errors like:<br />
<code>ValueError: operands could not be broadcast together with shapes (1,40,10) (10,40) (10,40)</code><br />
There are 40 variables in the function, and here I've chosen 10 particles. I don't understand where the first three-dimensional array is coming from. For the other two they seem to be the input array copied for each particle.
Here is the code of the objective function:</p>
<pre><code>def svm_func(x_in, model, target_disp=0, npart=10):
feats = tuple(model.feature_names_in_)
nfeat = model.n_features_in_
nsupp = int(model.n_support_)
support_vectors = model.support_vectors_
dual_coefs = model.dual_coef_[0]
fx = (np.matmul(dual_coefs.reshape(1,nsupp),
rbf_kernel(support_vectors.reshape(nsupp,nfeat),
Y=x_in.reshape(npart,nfeat),
gamma=model.get_params()['gamma']))
+ model.intercept_)
fx_err = (fx - target_disp)**2
return fx_err
</code></pre>
<p>And for the optimization:</p>
<pre><code>feats = tuple(model.feature_names_in_)
x_opt_vars = {'E1':[-3,3],'E5':[-3,3]} # optimizing only these 2 out of the 40 vars
bounds = (np.array([arr_input[feats.index(i)] if i not in x_opt_vars.keys() else x_opt_vars[i][0] for i in feats]),
np.array([arr_input[feats.index(i)] if i not in x_opt_vars.keys() else x_opt_vars[i][1] for i in feats]))
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
optimizer = GlobalBestPSO(n_particles=10, dimensions=40, options=options, bounds=bounds)
cost, pos = optimizer.optimize(svm_func, iters=1000, model=ksvr, target_disp=-2.78, npart=10)
</code></pre>
<p>I thought I messed up somewhere in the "complicated" function, so I tried to create a simple parabola and find the minimum but the same occurs.
The parabola:</p>
<pre><code>def test_fun(x):
fx = x ** 2
return fx
options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
optimizer = GlobalBestPSO(n_particles=1, dimensions=1, options=options)
optimized = optimizer.optimize(test_fun, iters=1000)
</code></pre>
<p>And here I'm getting a similar error:
<code>ValueError: non-broadcastable output operand with shape (1,1) doesn't match the broadcast shape (1,1,1)</code></p>
|
<python><optimization><particle-swarm>
|
2023-02-11 21:02:46
| 1
| 364
|
Sjotroll
|
75,423,163
| 14,722,297
|
Sorting by subsampling every nth element in numpy array?
|
<p>I am trying to sample every nth element to sort an array. My current solution works, but it feels like there should be a solution that does not involve concatenation.</p>
<p>My current implementation is as follows.</p>
<pre class="lang-py prettyprint-override"><code>arr = np.arange(10)
print(arr)
[0 1 2 3 4 5 6 7 8 9]
# sample every 5th element
res = np.empty(shape=0)
for i in range(5):
res = np.concatenate([res, arr[i::5]])
print(res)
[0. 5. 1. 6. 2. 7. 3. 8. 4. 9.]
</code></pre>
<p>Looking for any tips to make this faster/more pythonic. My use case is with an array of ~10,000 values.</p>
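<p>Since the slices <code>arr[i::5]</code> interleave cleanly when the length is a multiple of 5, the same ordering falls out of a reshape plus transpose, with no loop or concatenation:</p>

```python
import numpy as np

arr = np.arange(10)
# Rows of the reshape are consecutive runs; transposing interleaves
# every 5th element, matching the concatenation in the loop version.
res = arr.reshape(-1, 5).T.ravel()
print(res)  # [0 5 1 6 2 7 3 8 4 9]
```

<p>If the length is not a multiple of 5, a more general route is <code>arr[np.argsort(np.arange(len(arr)) % 5, kind='stable')]</code>, which produces the same ordering without requiring an even split.</p>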
|
<python><numpy><sorting>
|
2023-02-11 20:41:39
| 1
| 1,895
|
BoomBoxBoy
|
75,423,123
| 14,715,170
|
how to iterate dictionary over list in python?
|
<p>I have a list of dictionaries like the following,</p>
<pre><code>a = [{"x":"--New Value","y":20},{"x":"--New Value","y":21},{"x":"--New Value","y":27}]
</code></pre>
<p>While iterating using the code,</p>
<pre><code>for i in a:
print(i["x"])
print(i["y"])
</code></pre>
<p>I am getting the following output,</p>
<pre><code>--New Value
20
--New Value
21
--New Value
27
</code></pre>
<p>Well, the output I want is,</p>
<pre><code>--New Value
20
21
27
</code></pre>
<p>Any help please ?</p>
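<p>If <code>x</code> is known to be identical for every entry, one way to get that output is to print it once from the first element and then only the <code>y</code> values:</p>

```python
a = [{"x": "--New Value", "y": 20},
     {"x": "--New Value", "y": 21},
     {"x": "--New Value", "y": 27}]

print(a[0]["x"])  # header printed once
for item in a:
    print(item["y"])
```

<p>If several different <code>x</code> values can occur, <code>itertools.groupby</code> over the list sorted by <code>x</code> generalizes this pattern.</p>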
|
<python><python-3.x><list><dictionary><for-loop>
|
2023-02-11 20:32:23
| 2
| 334
|
sodmzs1
|
75,423,085
| 1,317,018
|
How do I make appending large vectors to list faster
|
<p>I am trying to implement the skip-gram algorithm in plain numpy (not pytorch), which requires doing a calculation like this (possibly avoidable details at the end):</p>
<pre><code>xy = []
for i in tqdm(range(100000)):
for j in range(15):
xy.append([np.zeros(50000), np.zeros(50000)])
</code></pre>
<p>So, this is a huge array <code>15*100000*2*50000</code> elements of type <code>numpy.float64</code>. There are two issues here:</p>
<ol>
<li><p>This takes huge memory (at least 3 GBs) as explained above. In fact, I was never able to complete this calculation because of second issue mentioned below. But it easily filled all my laptop RAM (total 16 GB).</p>
</li>
<li><p>This takes huge time (at least couple of hours), may be because of first issue above.</p>
</li>
</ol>
<p>I also tried to pre-generate x with all zeroes as follows:</p>
<pre><code> count = 15*100000
xy = [[np.zeros(vocabSize), np.zeros(vocabSize)] for _ in range(count)]
</code></pre>
<p>But the moment I step over this second line in my debugger, my RAM fills up.</p>
<p>How can I deal with this?</p>
<p><strong>Avoidable details</strong></p>
<p>I am trying to implement skip gram algorithm, in which we have to prepare list of skip grams <code>[target-word, context-words]</code>. Each target-word and context-words are represented as one hot vector of size equal to input vocabulary size (50000 above). 100000 above is number of sentences in data. 15 is average number of words per sentence.</p>
<p><strong>PS</strong></p>
<p>I have to implement this in plain python + numpy. That is, not using any ML library like pytorch or tensorflow.</p>
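<p>A common way out, still plain NumPy, is to never materialize the one-hot vectors up front: store each skip-gram as a pair of integer word indices and expand a small batch to one-hot form only at training time. That shrinks the stored data from <code>15*100000*2*50000</code> floats to <code>15*100000*2</code> integers. A hedged sketch with toy sizes and hypothetical names:</p>

```python
import numpy as np

vocab_size = 50_000

# Store (target_index, context_index) pairs, not one-hot vectors.
pairs = np.array([[7, 123], [7, 4001], [9, 123]], dtype=np.int64)

def one_hot_batch(indices, vocab_size):
    """Expand integer word indices to one-hot rows only when needed."""
    batch = np.zeros((len(indices), vocab_size))
    batch[np.arange(len(indices)), indices] = 1.0
    return batch

x = one_hot_batch(pairs[:, 0], vocab_size)  # targets for this batch
print(x.shape, x.sum())  # (3, 50000) 3.0
```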
|
<python><numpy><machine-learning><numpy-ndarray>
|
2023-02-11 20:24:36
| 1
| 25,281
|
Mahesha999
|
75,423,020
| 8,372,455
|
pyTest verify files are created correctly
|
<p>I'm using the <a href="https://python-docx.readthedocs.io/en/latest/" rel="nofollow noreferrer">Python docx package</a> to generate Microsoft Word documents with a script that also incorporates some other calculations/math functions handled in Python/Pandas/Numpy, etc... that automatically get inputted into the Word Document report via Docx.</p>
<p>Is it possible to use pyTest to verify that a different Python script is outputting a report properly? As I develop more complicated calculations, simply verifying that the Microsoft Word document is being generated would tell me the script hasn't errored out, so it would be useful if pyTest could check that the report generates okay. In time it would be nice to verify that the calculations work in a specific manner as well, but I am just trying to get set up with pyTest on something simple to begin with; any tips appreciated.</p>
<p>My project directory looks like this below with one report generating script named <code>fc1.py</code>:</p>
<pre><code>fc1.py β Python script with docx package to generate a Word Document
final_report β Directory for the Word Document report output from fc1.py
reports β Directory for an init.py
β init.py β Python docx/Pandas methods to generate the report called from fc1.py
tests β Directory for pyTest scripts
β fc1_test.py β pyTest script that calls fc1.py and attempts to see if the Microsoft Word report is getting generated in the final_report directoy
</code></pre>
<p>My pyTest script looks like this below where I am attempting to just call <code>fc1.py</code> to run with an argument and then verify the output file exists when complete in the <code>final_report</code> directory.</p>
<pre><code>import os
import pytest
import subprocess
# dir_path = os.path.dirname(os.path.realpath(__file__))
# pytest_args = os.path.join(dir_path,'fc1.py')
pytest_args = ['../fc1.py "Test Report"']
print(pytest_args)
def verify_if_file_exists():
final_report_path = './final_report/"Test Report".docx'
final_report_existing = os.path.exists(final_report_path)
is_existing = os.path.exists(final_report_existing)
# return boolean if report was generated
return is_existing
def test_report_generated():
assert generator() == True
def generator():
pytest.main(pytest_args)
return verify_if_file_exists()
</code></pre>
<p>Trying to run pyTest on Windows 10:</p>
<pre><code>C:\Users\tests>pytest
================================================= test session starts =================================================
platform win32 -- Python 3.9.13, pytest-7.1.2, pluggy-1.0.0
rootdir: C:\Users\bbartling\OneDrive - Slipstream\Desktop\pytester\tests
plugins: anyio-3.5.0
collected 1 item
fc1_test.py . [100%]
================================================== 1 passed in 0.77s ==================================================
</code></pre>
<p>It is showing a successful test, but I don't see the report being generated in the <code>final_report</code> directory; it's empty. Is there anything I am doing incorrectly? Or does pytest generate some sort of virtual duplicate directories and files, which would explain why I don't see anything in my <code>final_report</code> directory the way I would when running the actual <code>fc1.py</code> file? Not a lot of wisdom here; any tips appreciated.</p>
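<p>Two things stand out in the test above, noted here as observations rather than a definitive fix: <code>pytest.main(pytest_args)</code> collects <code>../fc1.py</code> as a test file instead of executing it as a script, and <code>os.path.exists(final_report_existing)</code> is called on a boolean, so it probably always returns <code>True</code> (a <code>False</code> argument is treated as file descriptor 0), which would explain a passing test with an empty directory. A sketch that runs the script as a subprocess and checks the output path; the inline stand-in script is hypothetical so the example is self-contained:</p>

```python
import pathlib
import subprocess
import sys
import tempfile

def run_and_check(workdir):
    """Run a stand-in for fc1.py in workdir and report whether the docx exists."""
    script = pathlib.Path(workdir) / "fc1.py"
    # Hypothetical stand-in for the real fc1.py so this sketch is runnable;
    # in the real test you would invoke the actual script instead.
    script.write_text(
        "import sys, pathlib\n"
        "out = pathlib.Path('final_report') / (sys.argv[1] + '.docx')\n"
        "out.parent.mkdir(exist_ok=True)\n"
        "out.write_text('report')\n"
    )
    result = subprocess.run([sys.executable, str(script), "Test Report"],
                            cwd=workdir, capture_output=True)
    report = pathlib.Path(workdir) / "final_report" / "Test Report.docx"
    return result.returncode == 0 and report.exists()

with tempfile.TemporaryDirectory() as d:
    print(run_and_check(d))  # True
```

<p>In pytest itself this would become <code>def test_report_generated(tmp_path): assert run_and_check(tmp_path)</code>, pointed at the real script path.</p>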
|
<python><pytest>
|
2023-02-11 20:11:16
| 1
| 3,564
|
bbartling
|
75,422,970
| 12,945,785
|
Plotly backend with pandas
|
<p>Hi, I would like to make multiple graphs (2, in fact) with pandas using the Plotly backend.
I don't know how to proceed.
Also, what are the main options for changing the size of my graph (it seems that figsize does not work)? And the color?</p>
<p>I did something like that:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
pd.options.plotting.backend = "plotly"
f1 = data.plot(y=['vl','bench'], title='Fonds vs Bench')
f2 = data.plot(y='aum', title='AuM du fonds')
f1.show(figsize=(8,5))
f2.show(figsize=(8,5))
</code></pre>
<p>and would like something equivalent of (without Plotly backend):</p>
<pre><code>f, (ax1,ax2) = plt.subplots(2, 1, figsize=(8,5), sharex=True)
data.plot(y=['vl', 'bench'], title='Fonds vs Bench', ax=ax1)
data.plot(y='aum', title='AuM du fonds',ax=ax2);
</code></pre>
|
<python><pandas><plotly>
|
2023-02-11 20:01:03
| 3
| 315
|
Jacques Tebeka
|
75,422,565
| 10,796,158
|
Why use read_fwf in Pandas if I can just use read_csv with a custom separator?
|
<p>I don't see the point of using <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_fwf.html#pandas.read_fwf" rel="nofollow noreferrer"><code>read_fwf</code></a> in Pandas. Why would I ever use this instead of <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html" rel="nofollow noreferrer"><code>read_csv</code></a>, which supports custom separators ? I tried testing both in terms of speed for a large fixed column width file, and <code>read_csv</code> is way faster on my machine:</p>
<pre><code>data = ("colum1 column2222 column3333 column4\n"
"id8141 360.242940 149.910199 11950.7\n"
"id1594 444.953632 166.985655 11788.4\n"
)
colspecs = [(0, 6), (8, 20), (21, 33), (34, 43)]
data = data * 10000000
with open("big_file.txt", "w") as f:
f.write(data)
</code></pre>
<pre><code>start_time = time.time()
df = pd.read_csv("big_file.txt", header=None, dtype={"colum1": str, "column2222": float, "column3333": float, "column4":float}, sep="\s+")
print(f"--- {time.time() - start_time} seconds ---")
--- 4.0295188426971436 seconds ---
start_time = time.time()
df = pd.read_fwf("big_file.txt", header=None, colspecs=colspecs,
dtype={"colum1": str, "column2222": float, "column3333": float, "column4":float})
print(f"--- {time.time() - start_time} seconds ---")
--- 77.41955280303955 seconds ---
</code></pre>
|
<python><pandas><dataframe><csv>
|
2023-02-11 18:52:11
| 1
| 1,682
|
An Ignorant Wanderer
|
75,422,440
| 17,124,619
|
Threading decorator is not callable
|
<p>I am implementing threading as a decorator on a class method, which should run the method in several threads concurrently. However, I get the following error:</p>
<blockquote>
<p>TypeError: 'TEST_THREAD' object is not callable</p>
</blockquote>
<p>The example below should print out each iteration over the maximum thread number.</p>
<pre><code>def start_workload(NUM_THREADS):
def wrapper(fn, *args):
thread = []
for i in range(*args):
t = threading.Thread(target=fn, args=(i,))
#t = threading.Thread(target=do_query, args=(i,))
t.start()
thread.append(t)
for i in range(*args):
thread[i].join()
return wrapper
class TEST_THREAD(object):
def __init__(self, *args):
super().__init__()
self._args = args
@start_workload
def print(self, threads):
print(threads, self._args)
if __name__ == '__main__':
test = TEST_THREAD(*list([1, 2, 3, 4, 5]))
test.print(5)
</code></pre>
<p>I was expecting the wrapper to perform the same functionality like the following approach:</p>
<pre><code>class TEST_THREAD:
def __init__(self, *args):
super().__init__()
self._args = args
def print(self, threads):
print(threads, self._args)
def start_workload(fn, num_thread):
thread = []
print(num_thread)
for i in range(num_thread):
t = threading.Thread(target=fn, args=(i,))
t.start()
thread.append(t)
for i in range(num_thread):
thread[i].join()
if __name__ == '__main__':
test = TEST_THREAD(*list([1, 2, 3, 4, 5]))
start_workload(test.print, 5)
</code></pre>
<p>Expected output:</p>
<pre><code>0 (1, 2, 3, 4, 5)
1 (1, 2, 3, 4, 5)
2 (1, 2, 3, 4, 5)
3 (1, 2, 3, 4, 5)
4 (1, 2, 3, 4, 5)
</code></pre>
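<p>The error follows from the decorator shape: a bare <code>@start_workload</code> calls <code>start_workload(print_method)</code>, so the method is replaced by <code>wrapper</code>; <code>test.print(5)</code> then runs <code>wrapper(test, 5)</code>, meaning <code>fn</code> is the instance itself, and each thread tries to call it, hence "'TEST_THREAD' object is not callable". A sketch of a decorator written for a bound method (the method is renamed <code>show</code> to avoid shadowing the built-in <code>print</code>):</p>

```python
import functools
import threading

def start_workload(fn):
    """Run the decorated method once per thread index, concurrently."""
    @functools.wraps(fn)
    def wrapper(self, num_threads):
        threads = [threading.Thread(target=fn, args=(self, i))
                   for i in range(num_threads)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
    return wrapper

class TestThread:
    def __init__(self, *args):
        self._args = args

    @start_workload
    def show(self, thread_index):
        print(thread_index, self._args)

test = TestThread(1, 2, 3, 4, 5)
test.show(5)  # prints each index with the args tuple (order may vary)
```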
|
<python><multithreading>
|
2023-02-11 18:34:13
| 1
| 309
|
Emil11
|
75,422,337
| 14,947,895
|
Dask LocalCluster Fails to compute random.random above 300Mio data points
|
<p>I wanted to create some random data for later benchmarking. The chunks need to be configured this way as I want to calculate the rfft later.</p>
<p>However, the sampling of the random data fails as soon as I am around (and above) 300 million data points. The code works fine in local mode. The code works fine when I store the samples directly into a zarr array.
The size at which the code breaks is consistent across multiple shapes and chunk sizes. It also does not depend on initialising the cluster with different values.</p>
<p>Following is a <strong>minimal example producing the error</strong>. Please be advised that the code works with an array of <code>size=(60, 4_000_000)</code>; however, using the slightly bigger array leads to the error.</p>
<pre class="lang-py prettyprint-override"><code>cluster = dd.LocalCluster(n_workers=1, threads_per_worker=10, memory_limit='30GB')
client = dd.Client(cluster)
# print(client)
RNG_da = da.random.RandomState(42)
_ = RNG_da.random((60, 5_000_000), chunks=(1, 5_000_000)).compute()
client.close()
cluster.close()
</code></pre>
<p>The same <strong>error</strong> occurs using <code>LocalCluster()</code> without parameters:</p>
<pre class="lang-py prettyprint-override"><code>cluster = dd.LocalCluster()
client = dd.Client(cluster)
RNG_da = da.random.RandomState(1212)
_ = RNG_da.random((60, 5_000_000), chunks=(1, 5_000_000)).compute()
print(_.shape)
client.close()
cluster.close()
</code></pre>
<p>However, not specifying or only using the <code>Client</code> works. So <strong>all of the versions below work</strong>:</p>
<pre class="lang-py prettyprint-override"><code>RNG_da = da.random.RandomState(1212)
_ = RNG_da.random((60, 5_000_000), chunks=(1, 5_000_000)).compute()
print(_.shape)
</code></pre>
<pre class="lang-py prettyprint-override"><code>client = dd.Client(processes=False)
RNG_da = da.random.RandomState(1212)
_ = RNG_da.random((60, 5_000_000), chunks=(1, 5_000_000)).compute()
print(_.shape)
</code></pre>
<pre class="lang-py prettyprint-override"><code>with dask.config.set(scheduler='processes'):
RNG_da = da.random.RandomState(1212)
_ = RNG_da.random((60, 5_000_000), chunks=(1, 5_000_000)).compute()
print(_.shape)
</code></pre>
<pre class="lang-py prettyprint-override"><code>with dask.config.set(scheduler='threads'):
RNG_da = da.random.RandomState(1212)
_ = RNG_da.random((60, 5_000_000), chunks=(1, 5_000_000)).compute()
print(_.shape)
</code></pre>
<pre class="lang-py prettyprint-override"><code>with dd.LocalCluster(n_workers=1, threads_per_worker=10, memory_limit='15GiB') as cluster, dd.Client(cluster) as client:
RNG_da = da.random.RandomState(1212)
_ = RNG_da.random((60, 5_000_000), chunks=(1, 5_000_000)).persist()
print(_.shape)
</code></pre>
<p>Could it have something to do with calling the sampling in a multi-process environment, since using <code>client = Client(process=True)</code> results in this <code>[...]return self.socket.recv_into(buf, len(buf)) OSError: [Errno 22] Invalid argument</code> error?</p>
<hr />
<p>Here is the error trace, however, I interrupted the program, since it usually runs super long...:</p>
<pre class="lang-py prettyprint-override"><code><Client: 'tcp://127.0.0.1:53084' processes=1 threads=10, memory=27.94 GiB>
2023-02-11 18:54:44,007 - distributed.scheduler - ERROR - Couldn't gather keys {"('random_sample-aaf2531c59d5bd1381c467d7a0f0644c', 0, 0)": ['tcp://127.0.0.1:53089'],
[...]
"('random_sample-aaf2531c59d5bd1381c467d7a0f0644c', 0, 1)": ['tcp://127.0.0.1:53089'], "('random_sample-aaf2531c59d5bd1381c467d7a0f0644c', 2, 1)": ['tcp://127.0.0.1:53089']} state: ['memory', 'memory', 'memory', 'memory', 'memory', 'memory', 'memory', 'memory', 'memory', 'memory', 'memory', 'memory', 'memory', 'memory', 'memory', 'memory', 'memory', 'memory', 'memory', 'memory', 'memory'] workers: ['tcp://127.0.0.1:53089']
NoneType: None
2023-02-11 18:54:44,007 - distributed.worker - ERROR - failed during get data with tcp://127.0.0.1:53089 -> None
Traceback (most recent call last):
File "/Users/me/opt/anaconda3/envs/zarr_benchmarking/lib/python3.10/site-packages/tornado/iostream.py", line 973, in _handle_write
num_bytes = self.write_to_fd(self._write_buffer.peek(size))
File "/Users/me/opt/anaconda3/envs/zarr_benchmarking/lib/python3.10/site-packages/tornado/iostream.py", line 1146, in write_to_fd
return self.socket.send(data) # type: ignore
ConnectionResetError: [Errno 54] Connection reset by peer
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/me/opt/anaconda3/envs/zarr_benchmarking/lib/python3.10/site-packages/distributed/worker.py", line 1768, in get_data
response = await comm.read(deserializers=serializers)
File "/Users/me/opt/anaconda3/envs/zarr_benchmarking/lib/python3.10/site-packages/distributed/comm/tcp.py", line 241, in read
convert_stream_closed_error(self, e)
File "/Users/me/opt/anaconda3/envs/zarr_benchmarking/lib/python3.10/site-packages/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) local=tcp://127.0.0.1:53089 remote=tcp://127.0.0.1:53096>: ConnectionResetError: [Errno 54] Connection reset by peer
2023-02-11 18:54:44,009 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: ['tcp://127.0.0.1:53089'], ('random_sample-aaf2531c59d5bd1381c467d7a0f0644c', 0, 0)
NoneType: None
2023-02-11 18:54:44,009 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: ['tcp://127.0.0.1:53089'], ('random_sample-aaf2531c59d5bd1381c467d7a0f0644c', 6, 2)
[...]
NoneType: None
2023-02-11 18:54:44,011 - distributed.scheduler - ERROR - Shut down workers that don't have promised key: ['tcp://127.0.0.1:53089'], ('random_sample-aaf2531c59d5bd1381c467d7a0f0644c', 2, 1)
NoneType: None
2023-02-11 18:54:44,013 - distributed.client - WARNING - Couldn't gather 21 keys, rescheduling {"('random_sample-aaf2531c59d5bd1381c467d7a0f0644c', 0, 0)": ('tcp://127.0.0.1:53089',), "('random_sample-aaf2531c59d5bd1381c467d7a0f0644c', 6, 2)": ('tcp://127.0.0.1:53089',), "('random_sample-aaf2531c59d5bd1381c467d7a0f0644c', 5, 0)": ('tcp://127.0.0.1:53089',),
[...]
"('random_sample-aaf2531c59d5bd1381c467d7a0f0644c', 6, 1)": ('tcp://127.0.0.1:53089',), "('random_sample-aaf2531c59d5bd1381c467d7a0f0644c', 0, 1)": ('tcp://127.0.0.1:53089',), "('random_sample-aaf2531c59d5bd1381c467d7a0f0644c', 2, 1)": ('tcp://127.0.0.1:53089',)}
^C
</code></pre>
<hr />
<hr />
<h2><strong>EDIT 2</strong></h2>
<p>I did some more testing to try and understand the problem, as I really don't understand why it behaves the way it does. I think I found two important differences, but I can't "understand the puzzle".</p>
<p><strong>It depends on whether I use <code>.compute()</code> or <code>.persists()</code></strong>. Although one should prefer <code>.persist()</code> for larger data, I do not understand why it fails with <code>.compute()</code>.</p>
<p><strong>However, the code works regardless of whether I use <code>.persist()</code> or <code>.compute()</code> if I do not register the client as the default scheduler (<code>dd.Client(cluster, set_as_default=False)</code>)</strong>.</p>
<pre class="lang-py prettyprint-override"><code>with dd.LocalCluster(n_workers=1, threads_per_worker=10, memory_limit='15GiB') as cluster, dd.Client(cluster, set_as_default=False) as client: # if set_as_default=False -> persist and compute work, if True, only persist does work
RNG_da = da.random.RandomState(1212)
_ = RNG_da.random((60, 5_000_000), chunks=(1, 5_000_000)).compute() # .persist() is always working
print(_)
</code></pre>
<p>It also <em>worked</em> when submitting the data via the client and using <code>client.compute()</code>, regardless of whether the client was registered as the default scheduler or not. Whether <code>pure=False</code> was set when the data was submitted or not does <em>not</em> change whether the program works or not. It also works if the data is not submitted, but only via <code>client.compute()</code>.</p>
<pre class="lang-py prettyprint-override"><code>with dd.LocalCluster(n_workers=1, threads_per_worker=10, memory_limit='15GiB') as cluster, dd.Client(cluster, set_as_default=True) as client:
RNG_da = da.random.RandomState(1212)
_ = client.submit(RNG_da.random, (60, 5_000_000), chunks=(1, 5_000_000), pure=False) # working does not depend on pure set false or true
# also working:
# _ = RNG_da.random((60, 5_000_000), chunks=(1, 5_000_000))
# _ = client.compute(_)
x = _.result() # if used commented section -> x = _
print(_)
print(x)
del _
try:
print(_)
except NameError:
print('caught: _ is not defined')
print(x) # working and making sure, that persist terminated
</code></pre>
<p>As a side note (I do not know if this is normal behaviour or not): I found that the client is not really respecting <code>asynchronous=False</code> in both the client and the cluster. Both have <code>self.asynchronous=True</code> after initialisation.</p>
<p>Perhaps you can understand more from this behaviour. Thanks for looking into this!</p>
|
<python><dask><dask-distributed>
|
2023-02-11 18:19:11
| 0
| 496
|
Helmut
|
75,422,292
| 7,744,106
|
Telethon: check if message has replies?
|
<p>Let's assume we have replies with this structure:</p>
<p>1 (How are you today?)<br />
&nbsp;&nbsp;&nbsp;&nbsp;2 (I'm fine, thanks. And you?)<br />
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;3 (I'm all right.)</p>
<p>And I somehow got the second message. I know that I can use <code>get_reply_message</code> - this way I will get message <code>1</code> (the message the current one replied to).</p>
<p>But how can I get message <code>3</code> (a message that replies to the current message)?</p>
|
<python><telethon><telegram-api>
|
2023-02-11 18:13:43
| 1
| 2,043
|
egvo
|
75,422,260
| 3,620,725
|
Do any libraries other than pandas use a MultiIndex?
|
<p>Are there any libraries with data tables that allow you to define multiple levels of column names the way you can in pandas with <a href="https://pandas.pydata.org/docs/user_guide/advanced.html" rel="nofollow noreferrer">MultiIndex</a>?</p>
<p>It seems like there is no equivalent in <a href="https://stackoverflow.com/questions/30944281/r-multi-index-on-columns-and-or-rows">R</a>, <a href="https://stackoverflow.com/questions/67088072/multi-level-indexing-of-data-frames-in-julia">Julia</a>, <a href="https://github.com/dask/dask/issues/1493" rel="nofollow noreferrer">dask</a>, <a href="https://pola-rs.github.io/polars-book/user-guide/coming_from_pandas.html#polars-does-not-have-a-multi-indexindex" rel="nofollow noreferrer">polars</a>, etc. Is pandas the only package that does this?</p>
<p>For example this table below has 2 levels of column headers, which doesn't seem possible to directly represent in most other packages and would have to be unstacked.</p>
<p><a href="https://i.sstatic.net/Ib772.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ib772.png" alt="enter image description here" /></a></p>
|
<python><pandas>
|
2023-02-11 18:09:50
| 0
| 5,507
|
pyjamas
|
75,422,158
| 998,070
|
Moving Points with 1D Noise
|
<p>I'd like to move points in X & Y with 1D Noise. To further clarify, I don't want each point to move by a unique random number, but rather a larger noise over the whole line with gradients moving the points. The Noise would serve as a multiplier for a move amount and would be a value between -1 and 1. For example, if the Noise value was 0.8, it would multiply the X & Y of points by that amount.</p>
<p>How would I go about this?</p>
<p>This is what I have so far (the black line is the original line). I think it's wrong, because the frequency is 1 but there appear to be multiple waves in the noise.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
import random
import math
from enum import Enum
#PerlinNoise by alexandr-gnrk
class Interp(Enum):
LINEAR = 1
COSINE = 2
CUBIC = 3
class PerlinNoise():
def __init__(self,
seed, amplitude=1, frequency=1,
octaves=1, interp=Interp.COSINE, use_fade=False):
self.seed = random.Random(seed).random()
self.amplitude = amplitude
self.frequency = frequency
self.octaves = octaves
self.interp = interp
self.use_fade = use_fade
self.mem_x = dict()
def __noise(self, x):
# made for improve performance
if x not in self.mem_x:
self.mem_x[x] = random.Random(self.seed + x).uniform(-1, 1)
return self.mem_x[x]
def __interpolated_noise(self, x):
prev_x = int(x) # previous integer
next_x = prev_x + 1 # next integer
frac_x = x - prev_x # fractional of x
if self.use_fade:
frac_x = self.__fade(frac_x)
# intepolate x
if self.interp is Interp.LINEAR:
res = self.__linear_interp(
self.__noise(prev_x),
self.__noise(next_x),
frac_x)
elif self.interp is Interp.COSINE:
res = self.__cosine_interp(
self.__noise(prev_x),
self.__noise(next_x),
frac_x)
else:
res = self.__cubic_interp(
self.__noise(prev_x - 1),
self.__noise(prev_x),
self.__noise(next_x),
self.__noise(next_x + 1),
frac_x)
return res
def get(self, x):
frequency = self.frequency
amplitude = self.amplitude
result = 0
for _ in range(self.octaves):
result += self.__interpolated_noise(x * frequency) * amplitude
frequency *= 2
amplitude /= 2
return result
def __linear_interp(self, a, b, x):
return a + x * (b - a)
def __cosine_interp(self, a, b, x):
x2 = (1 - math.cos(x * math.pi)) / 2
return a * (1 - x2) + b * x2
def __cubic_interp(self, v0, v1, v2, v3, x):
p = (v3 - v2) - (v0 - v1)
q = (v0 - v1) - p
r = v2 - v0
s = v1
return p * x**3 + q * x**2 + r * x + s
def __fade(self, x):
# useful only for linear interpolation
return (6 * x**5) - (15 * x**4) + (10 * x**3)
x = np.linspace(10, 10, 20)
y = np.linspace(0, 10, 20)
seed = 10
gen_x = PerlinNoise(seed=seed, amplitude=5, frequency=1, octaves=1, interp=Interp.CUBIC, use_fade=True)
noise_x = np.array([gen_x.get(pos) for pos in y])
fig, ax = plt.subplots(1)
ax.set_aspect("equal")
ax.plot(x, y, linewidth=2, color="k")
ax.scatter(x, y, s=20, zorder=4, color="k")
ax.plot(x+noise_x, y, linewidth=2, color="blue")
ax.scatter(x+noise_x, y, s=80, zorder=4, color="red")
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/LWnwq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LWnwq.png" alt="enter image description here" /></a></p>
<p>Thank you!</p>
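<p>For what it's worth, here is a minimal self-contained sketch of the effect I'm after (my own simplification, not using the <code>PerlinNoise</code> class above): a handful of random control values interpolated across the whole line give one smooth low-frequency curve, so neighbouring points move together rather than independently. The <code>frequency</code> value here is a hypothetical choice:</p>

```python
import numpy as np

rng = np.random.default_rng(10)
y = np.linspace(0, 10, 20)

# A few random control values interpolated over the whole line give one
# smooth low-frequency noise curve; neighbouring points then move together.
frequency = 0.3                                   # hypothetical: waves per unit of y
n_knots = max(int((y.max() - y.min()) * frequency) + 2, 2)
knot_y = np.linspace(y.min(), y.max(), n_knots)   # knot positions along the line
knot_v = rng.uniform(-1, 1, n_knots)              # noise value at each knot
noise = np.interp(y, knot_y, knot_v)              # one value in [-1, 1] per point

amplitude = 5
x_displaced = 10 + amplitude * noise              # move the vertical line x = 10
```

<p>Raising <code>frequency</code> adds more knots and therefore more waves along the line.</p>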
|
<python><numpy><scipy><noise><perlin-noise>
|
2023-02-11 17:54:00
| 0
| 424
|
Dr. Pontchartrain
|
75,422,072
| 3,376,169
|
Dataframe query to filter using dictionary results
|
<p>I want to filter dataframe rows using a dictionary: I want all the rows where <code>val1 > min_val_dict[user_id]</code>. But when I run the following I get an error:</p>
<pre><code>TypeError: unhashable type: 'Series'
</code></pre>
<p>Here is code:</p>
<pre><code>import pandas as pd
d={'user_id':[1,1,2,2,2,3,3],'val1':[101,102,103,104,105,106,107]}
df = pd.DataFrame(data=d)
min_val_dict={1:101,2:103,3:102}
df.query('val1 > @min_val_dict[user_id]')
</code></pre>
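<p>A hedged sketch of one approach that seems to work (a dict cannot be indexed with a Series inside <code>query</code>, but <code>Series.map</code> can translate each <code>user_id</code> to its per-user minimum, which can then be compared column-wise):</p>

```python
import pandas as pd

d = {'user_id': [1, 1, 2, 2, 2, 3, 3],
     'val1': [101, 102, 103, 104, 105, 106, 107]}
df = pd.DataFrame(data=d)
min_val_dict = {1: 101, 2: 103, 3: 102}

# Map each row's user_id to its per-user minimum, then compare element-wise.
filtered = df[df['val1'] > df['user_id'].map(min_val_dict)]
```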
|
<python><python-3.x><pandas>
|
2023-02-11 17:40:30
| 1
| 439
|
user3376169
|
75,422,064
| 3,963,430
|
Validate X-Hub-Signature-256 meta / whatsapp webhook request
|
<p>I can't manage to validate the X-Hub-Signature-256 for my meta / whatsapp webhook in flask successfully.</p>
<p>Can anyone tell me where the error is or provide me with a working example?</p>
<pre class="lang-python prettyprint-override"><code>import base64
import hashlib
import hmac
import os
from dotenv import load_dotenv
from flask import Flask, jsonify, request
from werkzeug.middleware.proxy_fix import ProxyFix
load_dotenv()
API_SECRET = os.environ.get('API_SECRET')
app = Flask(__name__)
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_host=1)
def verify_webhook(data, hmac_header):
    hmac_received = str(hmac_header).removeprefix('sha256=')
    digest = hmac.new(API_SECRET.encode('utf-8'), data,
                      digestmod=hashlib.sha256).digest()
    computed_hmac = base64.b64encode(digest)
    return hmac.compare_digest(computed_hmac, hmac_received.encode('utf-8'))
@app.route("/whatsapp", methods=["GET", "POST"])
def whatsapp_webhook():
if request.method == "POST":
try:
data = request.get_data()
if not verify_webhook(data, request.headers.get('X-Hub-Signature-256')):
return "", 401
except Exception as e:
print(e)
return "", 500
return jsonify({"status": "success"}, 200)
</code></pre>
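<p>A hedged sketch of a verification function that validates round-trip (my assumption: Meta sends the signature as <code>sha256=&lt;hex digest&gt;</code> of the raw body keyed with the <em>app secret</em>, so the comparison should be against <code>hexdigest()</code> rather than a base64-encoded digest; the secret below is a placeholder):</p>

```python
import hashlib
import hmac

APP_SECRET = "my-app-secret"  # hypothetical; Meta keys the HMAC with the app secret

def verify_webhook(payload: bytes, signature_header: str) -> bool:
    # The header looks like "sha256=<hex digest>", so compare hex digests,
    # not a base64-encoded digest.
    received = str(signature_header).removeprefix("sha256=")
    expected = hmac.new(APP_SECRET.encode("utf-8"), payload,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received)
```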
|
<python><flask><whatsapi>
|
2023-02-11 17:39:42
| 3
| 838
|
GurkenkΓΆnig
|
75,422,019
| 3,485,908
|
Running different celery tasks (@shared_tasks) on different databases in django multi tenant application
|
<p>The application I am working on is a multitenant application. I am using celery to run background tasks.</p>
<p>The background tasks that get pushed to the queue (RabbitMQ) execute properly when run against the <code>default</code> db setting configured in settings. But when I submit background jobs from other tenants, i.e. with settings other than <code>default</code>, they fail. This is because, in the normal sync flow, I am using a custom router that sets the DB to be used based on the request URL (which contains the tenant details), but that context is lost when the job is submitted as a background task.</p>
<p>Any suggestions ?</p>
<p>I tried using transaction block and passing the db_name as below, but still, it is using the <code>default</code> database only</p>
<pre><code>@shared_task
def run_background_task(task_id, db_name):
    with transaction.atomic(using=db_name):
task = DPTask.objects.get(pk=task_id)
</code></pre>
<p>Ideally, the above query should get executed on <code>db_name</code> database settings, but it is happening on default only</p>
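<p>A hedged sketch of one pattern that may help (my assumption, untested here: per-tenant aliases exist in <code>DATABASES</code>): pass the tenant's database alias into the task and route every query explicitly with <code>.using()</code>, since <code>transaction.atomic(using=...)</code> only scopes the transaction and does not redirect ORM reads:</p>

```python
# Sketch only; runnable inside a configured Django + Celery project.
from celery import shared_task
from django.db import transaction


@shared_task
def run_background_task(task_id, db_name):
    with transaction.atomic(using=db_name):
        # .using() is what actually routes the query to the tenant's DB.
        task = DPTask.objects.using(db_name).get(pk=task_id)
```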
|
<python><python-3.x><django><celery>
|
2023-02-11 17:32:17
| 2
| 1,539
|
ankit
|
75,421,997
| 5,212,614
|
How can I make node_color and node_size from items in a column and plot in networkx?
|
<p>I am trying to understand networkx. I am testing the sample code below.</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import networkx as nx
# Input data files check
from subprocess import check_output
import warnings
warnings.filterwarnings('ignore')
G = nx.Graph()
#df_nodes = df_deduped[['Circuit_Number','IEEE_Description','Duration','Device_County','Picklist_Count','Customers_Served_On_Circuit']].copy()
#df_nodes = df_nodes[df_nodes['Picklist_Count'].between(15, 30)]
#df_nodes = df_nodes.head(100)
data = [{'Circuit': 'html','Description':1, 'Duration':10.2, 'Source':'Westchester', 'Destination':'Davie', 'Picklist':1000, 'Postlist':50000.2},
{'Circuit': 'html', 'Description':2, 'Duration':12.1, 'Source':'Westchester', 'Destination':'Davie', 'Picklist':3000, 'Postlist':40000.1},
{'Circuit': 'html', 'Description':3, 'Duration':11.3, 'Source':'Westchester', 'Destination':'Davie', 'Picklist':7000, 'Postlist':50000.2},
{'Circuit': 'html', 'Description':3, 'Duration':8.1, 'Source':'West', 'Destination':'San Bernardino', 'Picklist':3000, 'Postlist':40000.0},
{'Circuit': '.net', 'Description':4, 'Duration':6.2, 'Source':'Queens', 'Destination':'San Bernardino', 'Picklist':5000, 'Postlist':6000.1},
{'Circuit': '.net', 'Description':3, 'Duration':20.1, 'Source':'Queens', 'Destination':'Los Angeles', 'Picklist':5000, 'Postlist':4000.1},
{'Circuit': '.net', 'Description':2, 'Duration':15.5, 'Source':'Brooklyn', 'Destination':'San Francisco', 'Picklist':5000, 'Postlist':9000.3},
{'Circuit': '.net', 'Description':4, 'Duration':7.7, 'Source':'Brooklyn', 'Destination':'Davie', 'Picklist':6000, 'Postlist':10000},
{'Circuit': '.net', 'Description':4, 'Duration':7.7, 'Source':'Los Angeles', 'Destination':'Westchester', 'Picklist':6000, 'Postlist':10000},
{'Circuit': '.net', 'Description':4, 'Duration':7.7, 'Source':'San Berdarnino', 'Destination':'Westchester', 'Picklist':6000, 'Postlist':10000}]
df = pd.DataFrame(data)
df
node_color = []
for word in df['Circuit']:
if word not in node_color:
node_color.append(word)
node_size = []
for word in df['Picklist']:
if word not in node_size:
node_size.append(word)
# G=nx.from_pandas_edgelist(df, "Description", "Picklist")
fig = plt.figure()
nx.draw_networkx(df, "Source", "Destination", node_color=node_color, node_size=node_size, font_color="whitesmoke")
fig.set_facecolor('blue')
plt.show()
</code></pre>
<p>When I run the code, I get the error described below. What am I doing wrong here?</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
~\AppData\Local\Temp\1\ipykernel_3332\2173665631.py in <module>
45
46 fig = plt.figure()
---> 47 nx.draw_networkx(df, "Source", "Destination", node_color=node_color, node_size=node_size, font_color="whitesmoke")
48 fig.set_facecolor('blue')
49 plt.show()
~\Anaconda3\lib\site-packages\networkx\drawing\nx_pylab.py in draw_networkx(G, pos, arrows, with_labels, **kwds)
301 pos = nx.drawing.spring_layout(G) # default to spring layout
302
--> 303 draw_networkx_nodes(G, pos, **node_kwds)
304 draw_networkx_edges(G, pos, arrows=arrows, **edge_kwds)
305 if with_labels:
~\Anaconda3\lib\site-packages\networkx\drawing\nx_pylab.py in draw_networkx_nodes(G, pos, nodelist, node_size, node_color, node_shape, alpha, cmap, vmin, vmax, ax, linewidths, edgecolors, label, margins)
423
424 try:
--> 425 xy = np.asarray([pos[v] for v in nodelist])
426 except KeyError as err:
427 raise nx.NetworkXError(f"Node {err} has no position.") from err
~\Anaconda3\lib\site-packages\networkx\drawing\nx_pylab.py in <listcomp>(.0)
423
424 try:
--> 425 xy = np.asarray([pos[v] for v in nodelist])
426 except KeyError as err:
427 raise nx.NetworkXError(f"Node {err} has no position.") from err
TypeError: string indices must be integers
</code></pre>
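<p>A hedged sketch of what I believe the intent is (my own reduced edge list): <code>draw_networkx</code> expects a graph, not the DataFrame, so the graph must be built first, and the colour/size sequences must have one entry per node, aligned with <code>G.nodes</code>:</p>

```python
import pandas as pd
import networkx as nx

# Hypothetical subset of the edge list; build the graph from it first.
data = [
    {'Source': 'Westchester', 'Destination': 'Davie', 'Picklist': 1000},
    {'Source': 'Queens', 'Destination': 'Los Angeles', 'Picklist': 5000},
    {'Source': 'Brooklyn', 'Destination': 'Davie', 'Picklist': 6000},
]
df = pd.DataFrame(data)
G = nx.from_pandas_edgelist(df, source='Source', target='Destination')

# One entry per node, in G.nodes order.
node_size = [300 * G.degree(n) for n in G.nodes]
node_color = ['green' if G.degree(n) > 1 else 'skyblue' for n in G.nodes]
# In a plotting session:
# nx.draw_networkx(G, node_size=node_size, node_color=node_color)
```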
|
<python><python-3.x><pandas><networkx>
|
2023-02-11 17:28:21
| 1
| 20,492
|
ASH
|
75,421,959
| 14,720,975
|
Python Equivalent for Deno's ensureDir
|
<p>What's the Python equivalent to Deno's <a href="https://deno.land/std@0.144.0/fs/ensure_dir.ts?s=ensureDir" rel="nofollow noreferrer"><code>ensureDir</code></a>?</p>
<p>Usage example:</p>
<pre><code>import { ensureDir, ensureDirSync } from "https://deno.land/std/fs/mod.ts";
ensureDir("./logs").then(
() => console.log("Success Created"),
).catch((err) => console.log(err));
</code></pre>
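<p>The closest standard-library equivalent I know of is creating the directory with missing parents and ignoring the case where it already exists; a small sketch wrapping it in an <code>ensure_dir</code> helper (the helper name mirrors Deno's, the paths are throwaway):</p>

```python
import tempfile
from pathlib import Path

def ensure_dir(path):
    # Create the directory and any missing parents; do nothing if it exists.
    Path(path).mkdir(parents=True, exist_ok=True)

base = Path(tempfile.mkdtemp())
ensure_dir(base / "logs" / "2023")
ensure_dir(base / "logs" / "2023")  # idempotent: no error on the second call
```

<p><code>os.makedirs(path, exist_ok=True)</code> is the equivalent non-<code>pathlib</code> spelling.</p>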
|
<python><file-handling>
|
2023-02-11 17:20:57
| 1
| 720
|
Eliaz Bobadilla
|
75,421,849
| 4,837,637
|
Google Cloud Run Flask App error import module
|
<p>I have developed a Flask REST API and I'm using flask_smorest. I'm developing in the Cloud Shell Editor on Google Cloud. When I run the Cloud Run emulator, I receive the error:</p>
<pre><code>##########Linting Output - pylint##########
************* Module app
3,0,error,import-error:Unable to import 'flask_smorest'
</code></pre>
<p>and this my dockerfile:</p>
<pre><code>FROM python:3.10
ENV PYTHONUNBUFFERED True
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
RUN pip install Flask gunicorn flask-smorest marshmallow
CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 app:app
</code></pre>
<p>This is app.py file:</p>
<pre><code>import os
from flask import Flask, request
from flask_smorest import Api
app = Flask(__name__)
app.config["PROPAGATE_EXCEPTIONS"] = True
app.config["API_TITLE"] = "Stores REST API"
app.config["API_VERSION"] = "v1"
app.config["OPENAPI_VERSION"] = "3.0.3"
app.config["OPENAPI_URL_PREFIX"] = "/"
app.config["OPENAPI_SWAGGER_UI_PATH"] = "/swagger-ui"
app.config["OPENAPI_SWAGGER_UI_URL"] ="https://cdn.jsdelivr.net/npm/swagger-ui-dist/"
if __name__ == "__main__":
app.run(debug=True, host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
</code></pre>
<p>How can I fix this issue? And why do I get this error?</p>
|
<python><docker><google-cloud-run>
|
2023-02-11 17:03:28
| 1
| 415
|
dev_
|
75,421,794
| 8,162,211
|
App created using Pyinstaller unable to run due to pyqtgraph colormap issue
|
<p>I have a simple script that uses pyqtgraph to create a heatplot animation. I receive no error messages when converting it to an .app using pyinstaller. However, when attempting to run the .app from the command line using</p>
<pre><code>./dist/MyApplication.app/Contents/MacOS/MyApplication
</code></pre>
<p>I obtain the error message ending with</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory:
'/var/folders/6j/xqx91xb15wl81pcnf33255bct7pzn4/T/_MEIsQn9RL/pyqtgraph/colors/maps/CET-D1'
</code></pre>
<p>Within my script, the line where I define my colormap is what's causing the error:</p>
<pre><code> colorMap = pg.colormap.get("CET-D1")
bar = pg.ColorBarItem( interactive=False,values=(0,1) , colorMap=colorMap)
</code></pre>
<p>Removing the line where I define <code>colorMap</code> and removing the <code>colorMap</code> argument appearing in the next line, eliminates the problem and the .app runs fine. Of course, the result is an animation where all heatplots appear in default-grayscale.</p>
<p>All I want is a simple colorbar--nothing fancy. Is there a different approach to defining my colorMap that might work, or is there something I might add to my <code>.spec</code> file so that the file is found?</p>
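<p>One thing that may help (a hedged sketch of a <code>.spec</code> fragment, untested): PyInstaller's <code>collect_data_files</code> helper can bundle pyqtgraph's colormap data files so <code>pg.colormap.get("CET-D1")</code> can find them inside the frozen app:</p>

```python
# Hypothetical fragment of the PyInstaller .spec file.
from PyInstaller.utils.hooks import collect_data_files

# Bundle pyqtgraph/colors/... (this is where the CET map files live).
datas = collect_data_files("pyqtgraph", subdir="colors")

a = Analysis(
    ["myscript.py"],   # hypothetical entry point
    datas=datas,
    # ... remaining Analysis arguments unchanged ...
)
```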
|
<python><pyqt><pyqtgraph>
|
2023-02-11 16:54:16
| 1
| 1,263
|
fishbacp
|
75,421,574
| 14,045,537
|
How to customize marker styles and add numbers inside Folium markers
|
<p>I saw an example to add numbers inside markers using <code>plugins.BeautifyIcon</code> here - <a href="https://stackoverflow.com/a/70896775/14045537">Folium markers with numbers inside</a>.</p>
<p>But I need an alternate solution to add numbers without using <code>plugins.BeautifyIcon</code>.</p>
<pre><code>import folium
from folium.plugins import MarkerCluster
m = folium.Map(location=[44, -73], zoom_start=5)
marker_cluster = MarkerCluster().add_to(m)
folium.Marker(
location=[40.67, -73.94],
popup="Add popup text here.",
icon=folium.Icon(color="green", icon="", prefix='fa'),
).add_to(marker_cluster)
folium.Marker(
location=[44.67, -73.94],
popup="Add popup text here.",
icon=folium.Icon(color="red", icon="", prefix='fa'),
).add_to(marker_cluster)
folium.Marker(
location=[44.67, -71.94],
popup="Add popup text here.",
icon=folium.Icon(color="blue", icon="", prefix='fa'),
).add_to(marker_cluster)
m
</code></pre>
<p><a href="https://i.sstatic.net/gD3M6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gD3M6.png" alt="enter image description here" /></a></p>
<p>Is it possible to customize the marker styles and add numbers inside, like the examples below?</p>
<p><a href="https://i.sstatic.net/0MZdE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0MZdE.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/SLXjd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SLXjd.png" alt="enter image description here" /></a></p>
|
<python><html><css><leaflet><folium>
|
2023-02-11 16:21:08
| 1
| 3,025
|
Ailurophile
|
75,421,404
| 14,045,537
|
How to add Search plugin in folium for multiple fields?
|
<p>I'm trying to add a search bar in <code>folium</code> map using <code>folium plugins</code>.</p>
<p><strong>Data:</strong></p>
<pre><code>import geopandas
states = geopandas.read_file(
"https://raw.githubusercontent.com/PublicaMundi/MappingAPI/master/data/geojson/us-states.json",
driver="GeoJSON",
)
states_sorted = states.sort_values(by="density", ascending=False)
states_sorted.head(5).append(states_sorted.tail(5))[["name", "density"]]
def rd2(x):
return round(x, 2)
minimum, maximum = states["density"].quantile([0.05, 0.95]).apply(rd2)
mean = round(states["density"].mean(), 2)
import branca
colormap = branca.colormap.LinearColormap(
colors=["#f2f0f7", "#cbc9e2", "#9e9ac8", "#756bb1", "#54278f"],
index=states["density"].quantile([0.2, 0.4, 0.6, 0.8]),
vmin=minimum,
vmax=maximum,
)
colormap.caption = "Population Density in the United States"
</code></pre>
<pre><code> id name density geometry
0 01 Alabama 94.650 POLYGON ((-87.35930 35.00118, -85.60667 34.984...
1 02 Alaska 1.264 MULTIPOLYGON (((-131.60202 55.11798, -131.5691...
2 04 Arizona 57.050 POLYGON ((-109.04250 37.00026, -109.04798 31.3...
3 05 Arkansas 56.430 POLYGON ((-94.47384 36.50186, -90.15254 36.496...
4 06 California 241.700 POLYGON ((-123.23326 42.00619, -122.37885 42.0...
</code></pre>
<p><strong>Folium Map:</strong></p>
<pre><code>import folium
from folium.plugins import Search
m = folium.Map(location=[38, -97], zoom_start=4)
def style_function(x):
return {
"fillColor": colormap(x["properties"]["density"]),
"color": "black",
"weight": 2,
"fillOpacity": 0.5,
}
stategeo = folium.GeoJson(
states,
name="US States",
style_function=style_function,
tooltip=folium.GeoJsonTooltip(
fields=["name", "density"], aliases=["State", "Density"], localize=True
),
).add_to(m)
statesearch = Search(
layer=stategeo,
geom_type="Polygon",
placeholder="Search for a US State",
collapsed=False,
search_label="name",
weight=3,
).add_to(m)
folium.LayerControl().add_to(m)
colormap.add_to(m)
m
</code></pre>
<p><a href="https://i.sstatic.net/eQd9A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eQd9A.png" alt="enter image description here" /></a></p>
<p>In the above map the user can search only by US state name; is it possible to include multiple fields for search, like searching based on <code>density</code> / <code>id</code> / <code>name</code>?</p>
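<p>A hedged sketch of one common workaround (my assumption: the Leaflet search control matches a single property, so several fields can be merged into one synthetic column and passed as <code>search_label="search"</code>; the DataFrame below is a toy stand-in for the states GeoDataFrame with the geometry omitted):</p>

```python
import pandas as pd

# Toy stand-in for the states data (geometry column omitted).
states = pd.DataFrame({
    "id": ["01", "02"],
    "name": ["Alabama", "Alaska"],
    "density": [94.65, 1.264],
})

# Combine the searchable fields into one column; the Search plugin would
# then be created with search_label="search".
states["search"] = (
    states["name"] + " " + states["id"] + " " + states["density"].astype(str)
)
```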
|
<python><leaflet><full-text-search><folium><folium-plugins>
|
2023-02-11 15:56:50
| 1
| 3,025
|
Ailurophile
|
75,421,384
| 5,308,892
|
No intellisense for Tensorflow functions in VSCode
|
<p>I've been trying to set up TensorFlow for Python in VSCode for a while now and I keep running into issues for which I cannot find a solution on the web. I installed it via <code>pip install tensorflow</code> and imported it with <code>import tensorflow as tf</code>. However, I do not seem to be getting any IntelliSense for TensorFlow member functions. Take the following example:</p>
<p><a href="https://i.sstatic.net/Thx7Y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Thx7Y.png" alt="enter image description here" /></a></p>
<p>Conversely, the Numpy equivalent function gets highlighted in yellow and I get information when I hover over it:</p>
<p><a href="https://i.sstatic.net/Go74V.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Go74V.png" alt="enter image description here" /></a></p>
<p>Naturally, autocomplete does not work either. I'm using Python 3.10.10 and Tensorflow 2.11.0. What could be causing this issue?</p>
|
<python><tensorflow><intellisense>
|
2023-02-11 15:54:46
| 1
| 2,146
|
cabralpinto
|
75,421,383
| 18,758,062
|
Resume Optuna study from most recent checkpoints
|
<p>Is there a way to pause or kill the Optuna study, and then resume it by either re-running the incomplete trials from the beginning or resuming them from the latest checkpoint?</p>
<pre><code>study = optuna.create_study()
study.optimize(objective)
</code></pre>
|
<python><pytorch><hyperparameters><optuna>
|
2023-02-11 15:54:38
| 3
| 1,623
|
gameveloster
|
75,421,382
| 7,090,501
|
Highlight a single point in a boxplot in Plotly
|
<p>I have a boxplot in Plotly. I would like to overlay a single point on some of the boxes. I thought I could do this by adding a scatter trace to the <code>fig</code>, but when I look into the data of the figure I can't see anything specifying the y coordinate of the boxes so I'm not sure how to overlay the point. How can I add a single point to some boxes in a boxplot?</p>
<pre class="lang-py prettyprint-override"><code>import plotly.graph_objects as go
import numpy as np
N = 10
fig = go.Figure(data=[go.Box(
x=2.5 * np.sin(np.pi * i/N) + i/N + (1.5 + 0.5 * np.cos(np.pi*i/N)) * np.random.rand(10),
) for i in range(int(N))])
fig.update_layout(height=600)
fig
</code></pre>
<p>Result:
<a href="https://i.sstatic.net/y2X0x.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y2X0x.png" alt="enter image description here" /></a></p>
<p>Ideal Result:</p>
<p><a href="https://i.sstatic.net/HQmtH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HQmtH.png" alt="enter image description here" /></a></p>
|
<python><plotly><scatter-plot><boxplot>
|
2023-02-11 15:54:26
| 1
| 333
|
Marshall K
|
75,421,365
| 4,451,315
|
Groupby sum string
|
<p>In pandas, I can do</p>
<pre class="lang-py prettyprint-override"><code>In [33]: df = pd.DataFrame({'a': [1, 1, 2], 'b': ['foo', 'bar', 'foo']})
In [34]: df
Out[34]:
a b
0 1 foo
1 1 bar
2 2 foo
In [35]: df.groupby('a')['b'].sum()
Out[35]:
a
1 foobar
2 foo
Name: b, dtype: object
</code></pre>
<p>and have the strings concatenated when I do <code>groupby.sum</code></p>
<p>In polars, however:</p>
<pre class="lang-py prettyprint-override"><code>In [36]: df = pl.DataFrame({'a': [1, 1, 2], 'b': ['foo', 'bar', 'foo']})
In [37]: df
Out[37]:
shape: (3, 2)
βββββββ¬ββββββ
β a β b β
β --- β --- β
β i64 β str β
βββββββͺββββββ‘
β 1 β foo β
β 1 β bar β
β 2 β foo β
βββββββ΄ββββββ
In [38]: df.group_by('a').agg(pl.col('b').sum())
Out[38]:
shape: (2, 2)
βββββββ¬βββββββ
β a β b β
β --- β --- β
β i64 β str β
βββββββͺβββββββ‘
β 2 β null β
β 1 β null β
βββββββ΄βββββββ
</code></pre>
<p>Is there a way to concatenate all the strings in each group in polars?</p>
|
<python><group-by><python-polars>
|
2023-02-11 15:52:10
| 3
| 11,062
|
ignoring_gravity
|
75,421,335
| 13,629,335
|
tkinter - Infinite Canvas "world" / "view" - keeping track of items in view
|
<p>I feel like this is a little bit complicated or at least I'm confused on it, so I'll try to explain it by rendering the issue. Let me know if the issue isn't clear.</p>
<hr />
<p>I get the output from my <code>viewing_box</code> through the <code>__init__</code> method and it shows:<br />
<code>(0, 0, 378, 265)</code><br />
Which is equivalent to a width of 378 and a height of 265.</p>
<p>When failing, I track the output:</p>
<pre><code>1 false
1 false
here ([0.0, -60.0], [100.0, 40.0]) (0, 60, 378, 325)
</code></pre>
<p>The tracking is done in <code>_scan_view</code> with the code:</p>
<pre><code> if not viewable:
current = self.itemcget(item,'tags')
if isinstance(current, tuple):
new = current-('viewable',)
else:
print('here',points, (x1,y1,x2,y2))
new = ''
self.inview_items.discard(item)
</code></pre>
<p>So the rectangle keeps its width and height of 100, but the coords are not the ones I expect. The view's width and height stay the same and, as far as I understand, the view moves correctly. I expect this check to hold:<br />
<code>if x1 <= point[0] <= x2 and y1 <= point[1] <= y2:</code> It feels like I've accidentally created two coordinate systems, but I don't see where. Can someone look at it and spot the issue?</p>
<p>Full Code:</p>
<pre><code>import tkinter as tk
class InfiniteCanvas(tk.Canvas):
def __init__(self, master, **kwargs):
super().__init__(master, **kwargs)
self.inview_items = set() #in view
self.niview_items = set() #not in view
self._xshifted = 0 #view moved in x direction
self._yshifted = 0 #view moved in y direction
self._multi = 0
self.configure(confine=False,highlightthickness=0,bd=0)
self.bind('<MouseWheel>', self._vscroll)
self.bind('<Shift-MouseWheel>', self._hscroll)
        master.bind('<Control-KeyPress>',lambda e:setattr(self,'_multi', 10))
        master.bind('<Control-KeyRelease>',lambda e:setattr(self,'_multi', 0))
print(self.viewing_box())
return None
def viewing_box(self):
'returns x1,y1,x2,y2 of the currently visible area'
x1 = 0 - self._xshifted
y1 = 0 - self._yshifted
x2 = self.winfo_reqwidth()-self._xshifted
y2 = self.winfo_reqheight()-self._yshifted
return x1,y1,x2,y2
def _scan_view(self):
x1,y1,x2,y2 = self.viewing_box()
for item in self.find_withtag('viewable'):
#check if one felt over the edge
coords = self.coords(item)
#https://www.geeksforgeeks.org/python-split-tuple-into-groups-of-n/
points = tuple(
coords[x:x + 2] for x in range(0, len(coords), 2))
viewable = False
for point in points:
if x1 <= point[0] <= x2 and y1 <= point[1] <= y2:
#if any point is in viewing box
viewable = True
print(item, 'true')
else:
print(item, 'false' )
if not viewable:
current = self.itemcget(item,'tags')
if isinstance(current, tuple):
new = current-('viewable',)
else:
print('here',points, (x1,y1,x2,y2))
new = ''
self.inview_items.discard(item)
self.itemconfigure(item,tags=new)
for item in self.find_overlapping(x1,y1,x2,y2):
#check if item inside of viewing_box not in inview_items
if item not in self.inview_items:
self.inview_items.add(item)
current = self.itemcget(item,'tags')
if isinstance(current, tuple):
new = current+('viewable',)
elif isinstance(current, str):
if str:
new = (current, 'viewable')
else:
new = 'viewable'
self.itemconfigure(item,tags=new)
print(self.inview_items)
def _create(self, *args):
if (current:=args[-1].get('tags', False)):
args[-1]['tags'] = current+('viewable',)
else:
args[-1]['tags'] = ('viewable',)
ident = super()._create(*args)
self._scan_view()
return ident
def _hscroll(self,event):
offset = int(event.delta/120)
if self._multi:
offset = int(offset*self._multi)
        self.move('all', offset, 0)
self._xshifted += offset
self._scan_view()
def _vscroll(self,event):
offset = int(event.delta/120)
if self._multi:
offset = int(offset*self._multi)
        self.move('all', 0, offset)
self._yshifted += offset
self._scan_view()
root = tk.Tk()
canvas = InfiniteCanvas(root)
canvas.pack(fill=tk.BOTH, expand=True)
size, offset, start = 100, 10, 0
canvas.create_rectangle(start,start, size,size, fill='green')
canvas.create_rectangle(
start+offset,start+offset, size+offset,size+offset, fill='darkgreen')
root.mainloop()
</code></pre>
<hr />
<p>PS: Before you think this is over-complicated: using just <code>find_overlapping</code> doesn't work, since it seems an item needs to be at least <code>51%</code> inside the view to get tracked by tkinter's algorithm.</p>
<h2><a href="https://codereview.stackexchange.com/q/283227/228833">You can find an improved version now on CodeReview!</a></h2>
|
<python><tkinter><canvas><tk-toolkit>
|
2023-02-11 15:47:30
| 1
| 8,142
|
Thingamabobs
|
75,421,185
| 683,945
|
Extract element names and type from XSD schemas
|
<p>I've got several schema definitions that I need to reconcile with a data model. I'm trying to extract the entity names, types and attribute names and types from these schema definitions with Python's ElementTree library, but am finding it difficult to get the exact information out.</p>
<p>In my XSDs there are several of the below complex types.</p>
<p>I've been trying to use <code>[elem.tag for elem in root.iter()]</code> but it gives all the tags as well which I'm not interested in. I'm thinking XPath might be the way but I'm stuck.</p>
<p>Is there an easy way to just extract the name and type attributes?</p>
<pre><code><xsd:complexType name="EntityX_t">
<xsd:sequence>
<xsd:element name="xyz" type="xxx:Entityx_t"/>
<xsd:element name="subElement">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="subelement" type="SubElement_t" maxOccurs="2"/>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:element name="elementRelationships" type="xxx:ElementRelationships_t" minOccurs="0">
<xsd:annotation>
<xsd:documentation>lorem ipsum...</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="notesSection" type="xxx:NotesSection_t" minOccurs="0"/>
<xsd:element name="extension" type="abcExtension_t" minOccurs="0"/>
</xsd:sequence>
</xsd:complexType>
</code></pre>
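<p>A hedged sketch of one way that seems to work (my own trimmed-down schema string standing in for the real files): restricting <code>iter()</code> to the namespaced <code>xsd:element</code> tag skips all the wrapper tags, and <code>name</code>/<code>type</code> are then ordinary attributes:</p>

```python
import xml.etree.ElementTree as ET

XSD_NS = "{http://www.w3.org/2001/XMLSchema}"

# Hypothetical trimmed-down schema; the real files contain many of these.
xsd = """<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <xsd:complexType name="EntityX_t">
    <xsd:sequence>
      <xsd:element name="xyz" type="xxx:Entityx_t"/>
      <xsd:element name="notesSection" type="xxx:NotesSection_t" minOccurs="0"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:schema>"""

root = ET.fromstring(xsd)

# Iterate over xsd:element nodes only, instead of every tag in the tree.
pairs = [(el.get("name"), el.get("type"))
         for el in root.iter(XSD_NS + "element")]
```

<p>Nested and anonymous elements are still visited, since <code>iter()</code> walks the whole subtree; elements without a <code>type</code> attribute simply yield <code>None</code> there.</p>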
|
<python>
|
2023-02-11 15:22:36
| 1
| 1,097
|
L4zl0w
|
75,421,008
| 173,748
|
Passing a python class constant to a decorator without self
|
<p>I'm using the Python ratelimit library, and it uses a decorator to handle rate limiting of a function. I have a class constant I'd like to pass into the decorator, but of course <code>self</code> won't work.</p>
<p>Is there a way to reference the <code>UNIVERSALIS_MAX_CALLS_PER_SECOND</code> constant within the decorator? Or, is there a clean, more appropriate way I should handle this?</p>
<p>Edit: I'm seeking to avoid globals. What I was hoping for was some python introspection maybe?</p>
<pre><code>class Universalis():
# FIXME: hard-coded North America
API_ENDPOINT = "http://universalis.app/api/v2/North-America/"
UNIVERSALIS_MAX_CALLS_PER_SECOND = 13 # 25 max
UNIVERSALIS_MAX_CONNECTIONS = 6 # 8 max
LISTINGS_PER_API_CALL = 500 # Universalis no max.
@sleep_and_retry
@limits(calls=self.UNIVERSALIS_MAX_CALLS_PER_SECOND, period=1)
def fetch_and_process_item_listings(self, item_ids_querystring):
# magic...
</code></pre>
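<p>A hedged sketch of the detail that seems to matter here: decorators in a class body run while the class body is executing, so a constant defined earlier in the body is an ordinary local name and can be referenced directly, without <code>self</code>. The <code>limits</code> stand-in below is hypothetical, only there to keep the sketch self-contained in place of the real ratelimit decorator:</p>

```python
# Stand-in for ratelimit's decorator, just to keep the sketch runnable.
def limits(calls, period):
    def decorate(fn):
        fn.max_calls, fn.period = calls, period
        return fn
    return decorate

class Universalis:
    UNIVERSALIS_MAX_CALLS_PER_SECOND = 13

    # Inside the class body the constant is an ordinary name in scope at
    # decoration time -- no self (and no global) needed.
    @limits(calls=UNIVERSALIS_MAX_CALLS_PER_SECOND, period=1)
    def fetch_and_process_item_listings(self, item_ids_querystring):
        ...
```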
|
<python><decorator><introspection>
|
2023-02-11 14:55:59
| 2
| 2,067
|
Chris Cummings
|
75,420,947
| 11,462,274
|
How to collect the updated expiry time of a service created using gspread?
|
<p>To avoid creating a new service each time I need one (sometimes I use it 8 times per code loop), I create a global object:</p>
<pre class="lang-python prettyprint-override"><code>import gspread
client_gspread = None
def send_sheet(id_sheet, page_sheet, sheet_data):
global client_gspread
if not client_gspread:
client_gspread = gspread.service_account(filename='client_secret.json')
sheet = client_gspread.open_by_key(id_sheet).worksheet(page_sheet)
sheet.clear()
sheet.update([sheet_data.columns.values.tolist()] + sheet_data.values.tolist())
print(client_gspread.auth.expiry)
</code></pre>
<p>Printing <code>client_gspread.auth.expiry</code> always delivers the original expiration value. However, I have already let my code loop with the same service for more than 24 hours, so the delivered expiration value (1 hour to expiry) cannot be fixed: it must be getting updated whenever the service performs any action on the worksheets.</p>
<p>How do I read the updated expiration value, so that I can refresh the credentials when they are about to expire?</p>
|
<python><google-oauth><gspread>
|
2023-02-11 14:46:02
| 1
| 2,222
|
Digital Farmer
|
75,420,922
| 19,369,393
|
How to transform every element of numpy array into an array of size N filled with the element?
|
<p>I have a numpy array <code>a</code> the shape of which is (m, n).
I want to transform this array into an array <code>b</code> with shape (m, n, l), where:</p>
<pre><code>b[i,j].length == l
b[i,j,k] == a[i,j]
0 <= i < m
0 <= j < n
0 <= k < l
</code></pre>
<p>For example:</p>
<pre><code>m = 2
n = 3
a = [[1,2,3],[4,5,6]]
</code></pre>
<p>If <code>l = 2</code>, then <code>b</code>:</p>
<pre><code>b = [[[1,1],[2,2],[3,3]],[[4,4],[5,5],[6,6]]]
</code></pre>
<p>How can I do it? Is there an easy one-line solution?</p>
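<p>One one-liner that appears to do exactly this: add a trailing axis and repeat each element <code>l</code> times along it (equivalently, broadcasting with <code>np.broadcast_to(a[:, :, None], (m, n, l))</code> gives a read-only view):</p>

```python
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])
l = 2

# Add a trailing axis, then repeat each element l times along it.
b = np.repeat(a[:, :, np.newaxis], l, axis=2)
```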
|
<python><arrays><numpy>
|
2023-02-11 14:40:29
| 1
| 365
|
g00dds
|
75,420,760
| 13,885,312
|
Django ORM JOIN of models that are related through JSONField
|
<p>If I have 2 models related through ForeignKey I can easily get a queryset with both models joined using select_related</p>
<pre><code>class Foo(Model):
data = IntegerField()
bar = ForeignKey('Bar', on_delete=CASCADE)
class Bar(Model):
data = IntegerField()
foos_with_joined_bar = Foo.objects.select_related('bar')
for foo in foos_with_joined_bar:
print(foo.data, foo.bar.data) # this will not cause extra db queries
</code></pre>
<p>I want to do the same thing but in the case where Foo keeps its reference to bar in a JSONField</p>
<pre><code>class Foo(Model):
data = IntegerField()
bar = JSONField() # here can be something like {"bar": 1} where 1 is the ID of Bar
class Bar(Model):
data = IntegerField()
foos_with_joined_bar = ???
</code></pre>
<p>Is it possible to get <strong>foos_with_joined_bar</strong> in this case using Django ORM?</p>
<p>P.S. We're not discussing the reasoning behind storing foreign keys in the JSONField, of course it's better to just use ForeignKey.</p>
|
<python><django><orm><django-orm>
|
2023-02-11 14:13:52
| 3
| 415
|
Anton M.
|
75,420,574
| 2,762,570
|
As of 2023, is there any way to line profile Cython at all?
|
<p>Something has substantially changed with the way that line profiling Cython works, such that previous answers no longer work. I am not sure if something subtle has changed, or if it is simply totally broken.</p>
<p>For instance, <a href="https://stackoverflow.com/questions/28301931/how-to-profile-cython-functions-line-by-line">here</a> is a very highly upvoted question about this from about 8 years ago.</p>
<p>The notebook in there no longer seems to work, even with the updates referenced in the post. For instance, here is a new version of the notebook incorporating the updates suggested in the original post: <a href="https://nbviewer.org/gist/battaglia01/f138f6b85235a530f7f62f5af5a002f0?flush_cache=true" rel="nofollow noreferrer">https://nbviewer.org/gist/battaglia01/f138f6b85235a530f7f62f5af5a002f0?flush_cache=true</a></p>
<p>The output of line_profiler to profiling that Cython function is simply</p>
<pre><code>Timer unit: 1e-09 s
</code></pre>
<p>with no line-by-line information at all.</p>
<p>I'm making a new question about this because other comments I've seen on the site all seem to reference these older answers, and they all seem to be broken. If anyone even has the beginnings of a starting point it would be much appreciated - either for the Jupyter notebook, or with something built using cythonize. My notes on what I've tried at least on the Jupyter side are in that notebook.</p>
<p><strong>Is there any way to get this to work?</strong></p>
|
<python><profiling><cython><profiler><cythonize>
|
2023-02-11 13:43:09
| 1
| 405
|
Mike Battaglia
|
75,420,397
| 6,357,916
|
nn.Linear gives `Only Tensors of floating point and complex dtype can require gradients`
|
<p>I am trying to understand how <code>nn.Embedding</code> works like <code>nn.Linear</code> in the case when the input is a one-hot vector.</p>
<p>Consider, input is <code>[0,0,0,1,0,0]</code> which is one hot vector corresponding to index 3. So, I first created both:</p>
<pre><code>_in = torch.tensor([0,0,0,1,0,0]).long() # used later
_index = torch.LongTensor([3])
</code></pre>
<p>Then I tried <code>nn.Embedding</code>:</p>
<pre><code>customEmb = torch.tensor([[1,1,1,1],[2,2,2,2],[3,3,3,3],[4,4,4,4],[5,5,5,5],[6,6,6,6]]).float()
emb = nn.Embedding(d_vocabSize, emb_size) # 6x4
emb.weight = torch.nn.Parameter(customEmb)
print(emb.weight)
print('\n----- embedding(input) ------')
hidden = emb(_index)
print(hidden)
</code></pre>
<p>which correctly outputted (selected 3rd row in embedding <code>[4., 4., 4., 4.]</code>):</p>
<pre><code>----- hiddent layer / embedding -----
Parameter containing:
tensor([[1., 1., 1., 1.],
[2., 2., 2., 2.],
[3., 3., 3., 3.],
[4., 4., 4., 4.],
[5., 5., 5., 5.],
[6., 6., 6., 6.]], requires_grad=True)
----- embedding(input) ------
tensor([[4., 4., 4., 4.]], grad_fn=<EmbeddingBackward0>)
</code></pre>
<p>I tried similar with <code>nn.Linear</code>:</p>
<pre><code>print('\n----- hiddent layer / embedding -----')
customEmb = torch.tensor([[1,1,1,1],[2,2,2,2],[3,3,3,3],[4,4,4,4],[5,5,5,5],[6,6,6,6]]).long()
emb = nn.Linear(d_vocabSize, emb_size) # 6x4
emb.weight = torch.nn.Parameter(customEmb.T)
print(emb.weight)
print('\n----- embedding(input) ------')
hidden = emb(_in)
print(hidden)
</code></pre>
<p>For above code, it gave following error:</p>
<pre><code>----- hiddent layer / embedding -----
RuntimeError
# ...
---> 16 emb.weight = torch.nn.Parameter(customEmb.T)
# ...
RuntimeError: Only Tensors of floating point and complex dtype can require gradients
</code></pre>
<p>So I tried to call <code>.float()</code> instead of <code>.long()</code> for <code>customEmb</code>, but got following error:</p>
<pre><code>----- hiddent layer / embedding -----
Parameter containing:
tensor([[1., 2., 3., 4., 5., 6.],
[1., 2., 3., 4., 5., 6.],
[1., 2., 3., 4., 5., 6.],
[1., 2., 3., 4., 5., 6.]], requires_grad=True)
----- embedding(input) ------
RuntimeError
# ...
---> 21 hidden = emb(_in)
# ...
RuntimeError: expected scalar type Long but found Float
</code></pre>
<p><strong>Q.</strong> What am I missing here?</p>
<p><strong>PS</strong></p>
<p>I was expecting <code>nn.Linear</code> to return something like:</p>
<pre><code>[[0.,0.,0.,0.],
[0.,0.,0.,0.],
[0.,0.,0.,0.],
[4.,4.,4.,4.],
[0.,0.,0.,0.],
[0.,0.,0.,0.],
]
</code></pre>
<p><strong>Q.</strong> Somewhat unrelated question: Am I wrong with above expectation?</p>
<p><strong>Update</strong></p>
<p>I tried converting both <code>_in</code> and <code>customEmb</code> to float and the error went away:</p>
<pre><code>_in = torch.tensor([0,0,0,1,0,0]).float()
customEmb = torch.tensor([[1,1,1,1],[2,2,2,2],[3,3,3,3],[4,4,4,4],[5,5,5,5],[6,6,6,6]]).float()
emb = nn.Linear(d_vocabSize, emb_size) # 6x4
emb.weight = torch.nn.Parameter(customEmb.T)
hidden = emb(_in)
print(hidden)
</code></pre>
<p>and it printed:</p>
<pre><code>tensor([3.6693, 4.3959, 3.9726, 4.3447], grad_fn=<AddBackward0>)
</code></pre>
<p><strong>Q2.</strong> Now I am wondering why it is not <code>[4.,4.,4.,4.]</code>. Am I completely wrong in my assumptions / understanding?</p>
|
<python><numpy><pytorch>
|
2023-02-11 13:04:36
| 1
| 3,029
|
MsA
|
75,420,317
| 15,363,250
|
How to remove nulls from a pyspark dataframe based on conditions?
|
<p>Let's say I have a table with the ids of my clients, which streaming platform they are subscribers to and how often they pay for their subscription:</p>
<pre><code>user_info
+------+------------------+---------------------+
| id | subscription_plan| payment_frequency |
+------+------------------+---------------------+
| 3004 | Netflix | Monthly |
| 3004 | Disney + | Monthly |
| 3004 | Netflix | Null |
| 3006 | Star + | Yearly |
| 3006 | Apple TV | Yearly |
| 3006 | Netflix | Monthly |
| 3006 | Star + | Null |
| 3009 | Apple TV | Null |
| 3009 | Star + | Monthly |
+------+------------------+---------------------+
</code></pre>
<p>The problem is that I have some duplicate values, and I need to get rid of the ones that are duplicate and where the status on the payment_frequency is null. If payment_frequency is null but the record is not duplicated, this is fine, like for example ID 3009 for Apple TV.</p>
<p>I could simply remove all the nulls from the payment_frequency column, but that's not ideal, as the only case where a null is worthless for me is when it comes from a duplicated id and subscription_plan. How do I make sure I get rid of only the nulls that match those requirements?</p>
<p>The result I need:</p>
<pre><code>user_info
+------+------------------+---------------------+
| id | subscription_plan| payment_frequency |
+------+------------------+---------------------+
| 3004 | Netflix | Monthly |
| 3004 | Disney + | Monthly |
| 3006 | Star + | Yearly |
| 3006 | Apple TV | Yearly |
| 3006 | Netflix | Monthly |
| 3009 | Apple TV | Null |
| 3009 | Star + | Monthly |
+------+------------------+---------------------+
</code></pre>
<p>Thanks</p>
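The rule itself — drop a null payment_frequency row only when the same (id, subscription_plan) pair also appears with a non-null value — can be sketched in plain Python first (the rows below are a made-up subset of the table):

```python
from collections import Counter

# Hypothetical in-memory rows: (id, subscription_plan, payment_frequency).
rows = [
    (3004, "Netflix", "Monthly"),
    (3004, "Netflix", None),
    (3009, "Apple TV", None),
]

# Count how many rows share each (id, plan) pair.
counts = Counter((uid, plan) for uid, plan, _ in rows)

# Keep a null-frequency row only when it is the sole row for its pair.
kept = [r for r in rows if r[2] is not None or counts[(r[0], r[1])] == 1]
print(kept)
```

In PySpark the same filter is usually expressed with a window count over (id, subscription_plan); the pure-Python version above just shows the condition being applied.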
|
<python><dataframe><pyspark>
|
2023-02-11 12:47:15
| 2
| 450
|
Marcos Dias
|
75,420,105
| 1,484,601
|
python typing: callback protocol and keyword arguments
|
<p>For example, mypy accepts this:</p>
<pre><code>from typing import Protocol

class P(Protocol):
def __call__(self, a: int)->int: ...
def call(a: int, f: P)->int:
return f(a)
def add_one(a: int)->int:
return a+1
call(1,add_one)
</code></pre>
<p>but not this:</p>
<pre><code>from typing import Any, Protocol

class P(Protocol):
def __call__(self, a: int, **kwargs: Any)->int: ...
def call(a: int, f: P)->int:
return f(a)
def add_one(a: int)->int:
return a+1
def add_one_and_b(a: int, b: int= 1)->int:
return a+1+b
call(1,add_one)
call(1,add_one_and_b)
</code></pre>
<p>errors (same error for the last two lines):</p>
<pre><code>error: Argument 2 to "call" has incompatible type "Callable[[int], int]"; expected "P" [arg-type]
</code></pre>
<p>How can one specify the type "method that takes as argument an int, possibly keyword arguments; and returns an int" ? (with or without using Protocol)</p>
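One possibility (a sketch that type-checks with recent mypy, assuming Python 3.8+) is to make the protocol's parameter positional-only with <code>/</code>, so that implementations may use any parameter name and may add extra parameters with defaults:

```python
from typing import Protocol

class P(Protocol):
    # Positional-only: any callable accepting one positional int and
    # returning int matches, whatever its parameter is named, and extra
    # parameters with defaults do not break compatibility.
    def __call__(self, a: int, /) -> int: ...

def call(a: int, f: P) -> int:
    return f(a)

def add_one(a: int) -> int:
    return a + 1

def add_one_and_b(a: int, b: int = 1) -> int:
    return a + 1 + b

print(call(1, add_one), call(1, add_one_and_b))  # 2 3
```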
|
<python><python-typing><callable>
|
2023-02-11 12:07:20
| 3
| 4,521
|
Vince
|
75,420,072
| 7,848,740
|
Django media file page not found
|
<p>So, I'm trying to follow <a href="https://docs.djangoproject.com/en/4.1/howto/static-files/#serving-files-uploaded-by-a-user-during-development" rel="nofollow noreferrer">Django documentation</a> about the static files and media files</p>
<p>I have a clean Django installation and I want to add the media folder. What have I done? I've changed the <code>urls.py</code> inside the project (not the app) and the <code>settings.py</code> as below.</p>
<h2>settings.py</h2>
<pre><code>STATIC_URL = 'static/'
MEDIA_URL = 'media/'
MEDIA_ROOT = BASE_DIR / "media"
STATICFILES_DIRS = [
BASE_DIR / "static",
]
</code></pre>
<h2>urls.py</h2>
<pre><code>urlpatterns = [
path('admin/', admin.site.urls),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
</code></pre>
<p>But I got the</p>
<blockquote>
<p>Page not found (404) Request Method: GET Request
URL: <a href="http://127.0.0.1:8000/" rel="nofollow noreferrer">http://127.0.0.1:8000/</a> Using the URLconf defined in
web_project.urls, Django tried these URL patterns, in this order:</p>
<p>admin/ ^media/(?P.*)$ The empty path didn't match any of these.</p>
</blockquote>
<p>I've also tried adding the <code>'django.template.context_processors.media'</code> into <code>TEMPLATES</code> and using os as below</p>
<pre><code>STATIC_URL = '/static/'
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
STATICFILES_DIRS = (os.path.join(BASE_DIR, 'static'),)
</code></pre>
<p>but nothing changes</p>
<p>What could it be?</p>
|
<python><django><django-urls><django-staticfiles><django-media>
|
2023-02-11 12:01:49
| 2
| 1,679
|
NicoCaldo
|
75,420,026
| 18,749,472
|
Python - create wrapper function for SQL queries
|
<p>In my project I have lots of functions that carry out SQL queries. These queries include SELECT, UPDATE, INSERT etc... When writing functions for SELECT queries for example, I write the same structure of code in every single function. e.g.</p>
<pre><code>def generic_select_function(self):
result = self.cursor.execute("""
SQL CODE
""")
return result.fetchall()
</code></pre>
<p>To avoid repeating this code over and over, I thought I could create a wrapper function that inserts the SQL code into a template for SELECT queries.</p>
<p>I understand that a wrapper function may not even be necessary for this but I would like to try and implement one to develop my understanding of them.</p>
<p><em>What I have tried:</em></p>
<pre><code>class DatabaseManagment():
def __init__(self) -> None:
self.con = sqlite3.connect("...")
self.cursor = self.con.cursor()
self.lock = threading.Lock()
def sql_select(self):
def inner(func):
result = self.cursor.execute(f"""
{func}
""")
return result.fetchall()
return inner
@sql_select
def test(self):
return "SELECT * FROM App_price"
</code></pre>
<blockquote>
<p>'function' object has no attribute 'cursor'</p>
</blockquote>
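For reference, the error happens because the decorator as written receives the function object where it expects <code>self</code>. A minimal working sketch (using an in-memory SQLite database, so the table contents here are made up) defines the decorator as a plain function that wraps the method and reads <code>self.cursor</code> at call time:

```python
import functools
import sqlite3

def sql_select(func):
    # The decorated method returns only the SQL text; the wrapper runs it.
    @functools.wraps(func)
    def inner(self, *args, **kwargs):
        query = func(self, *args, **kwargs)
        return self.cursor.execute(query).fetchall()
    return inner

class DatabaseManagement:
    def __init__(self):
        self.con = sqlite3.connect(":memory:")
        self.cursor = self.con.cursor()
        # Made-up schema/data so the example runs standalone.
        self.cursor.execute("CREATE TABLE App_price (id INTEGER, price REAL)")
        self.cursor.execute("INSERT INTO App_price VALUES (1, 9.99)")

    @sql_select
    def test(self):
        return "SELECT * FROM App_price"

db = DatabaseManagement()
print(db.test())  # [(1, 9.99)]
```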
|
<python><oop><decorator><wrapper><python-decorators>
|
2023-02-11 11:53:00
| 1
| 639
|
logan_9997
|
75,419,990
| 7,093,241
|
Capturing or printing variables in bashrc with shell=True in run command of subprocess module
|
<p>I am learning concurrency with <em>Python 3 Standard Library, 2nd Edition</em>. Is there a way to get the <code>subprocess</code> module to use variables in my <code>.bashrc</code> when I set <code>shell=True</code>?</p>
<p>I tried adding <code>echo "something"</code> to my <code>.bashrc</code> and ran the following, but I couldn't see <code>something</code> in the output, though I could see <code>$HOME</code>.</p>
<pre><code>import subprocess
completed = subprocess.run('echo $HOME', shell=True)
print('returncode:', completed.returncode)
</code></pre>
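For what it's worth, <code>shell=True</code> runs <code>/bin/sh</code> non-interactively, and a non-interactive shell never reads <code>~/.bashrc</code> — which is why the <code>echo</code> never appears. One workaround (a sketch; the rc file and variable name here are made up) is to source the file explicitly inside a bash command:

```python
import os
import subprocess
import tempfile

# shell=True uses "sh -c", a non-interactive shell that skips ~/.bashrc.
# Sourcing the rc file inside the command makes its variables visible.
def run_with_rc(command, rc_path):
    full = f"source {rc_path} && {command}"
    return subprocess.run(
        full, shell=True, executable="/bin/bash",
        capture_output=True, text=True,
    )

# Demo with a throwaway rc file standing in for ~/.bashrc.
with tempfile.NamedTemporaryFile("w", suffix=".rc", delete=False) as f:
    f.write('GREETING="something"\n')
    rc = f.name
result = run_with_rc("echo $GREETING", rc)
print(result.stdout.strip())  # something
os.unlink(rc)
```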
|
<python><bash><subprocess>
|
2023-02-11 11:48:33
| 1
| 1,794
|
heretoinfinity
|
75,419,794
| 720,877
|
How to specify setuptools entrypoints in a pyproject.toml
|
<p>I have a setup.py like this:</p>
<pre><code>#!/usr/bin/env python
from setuptools import setup, find_packages
setup(
name="myproject",
package_dir={"": "src"},
packages=find_packages("src"),
entry_points={
"console_scripts": [
"my-script = myproject.myscript:entrypoint",
],
},
)
</code></pre>
<p>How can I write that <code>entry_points</code> configuration in pyproject.toml using setuptools?</p>
<p>I'm guessing something like this, going on <a href="https://setuptools.pypa.io/en/latest/userguide/pyproject_config.html#dynamic-metadata" rel="noreferrer">setuptools' pyproject.toml docs</a>, which says I need to use "INI format" following <a href="https://packaging.python.org/en/latest/specifications/entry-points/" rel="noreferrer">the docs that references for entry-points</a> but it doesn't seem to give an example, and my guess at how to combine the setuptools syntax with the pyproject.toml syntax is wrong (I get a traceback from <code>pip install -e .</code> that reports <code>pip._vendor.tomli.TOMLDecodeError: Invalid value</code>, pointing at the <code>entry-points</code> line in pyproject.toml):</p>
<pre><code>[build-system]
requires = ["setuptools", "setuptools-scm"]
build-backend = "setuptools.build_meta"
[metadata]
name = "myproject"
[tool.setuptools]
package-dir = {"" = "src"}
[tool.setuptools.packages.find]
where = ["src"]
[tool.setuptools.dynamic]
entry-points =
my-script = myproject.myscript:entrypoint
</code></pre>
<p>Note I have a stub setup.py alongside that pyproject.toml, like this (which I read I need to support <code>pip install -e .</code> i.e. "editable installation"):</p>
<pre><code>from setuptools import setup
if __name__ == "__main__":
setup()
</code></pre>
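For context, with PEP 621-style metadata setuptools reads console scripts from the <code>[project.scripts]</code> table, not from <code>[tool.setuptools.dynamic]</code>. A minimal sketch (the project name and version are placeholders):

```toml
[build-system]
requires = ["setuptools", "setuptools-scm"]
build-backend = "setuptools.build_meta"

[project]
name = "myproject"
version = "0.0.1"

[project.scripts]
my-script = "myproject.myscript:entrypoint"

[tool.setuptools]
package-dir = {"" = "src"}

[tool.setuptools.packages.find]
where = ["src"]
```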
|
<python><setuptools><program-entry-point><python-packaging><pyproject.toml>
|
2023-02-11 11:17:53
| 1
| 2,820
|
Croad Langshan
|
75,419,592
| 12,263,543
|
AzureML SVD Model Deployment Fails for Managed Instance. Missing azureml.studio package
|
<p>I've built an SVD recommendation model in AzureML Studio based on the <a href="https://learn.microsoft.com/en-us/azure/machine-learning/component-reference/train-svd-recommender" rel="nofollow noreferrer">example in the azure docs</a>. If I deploy the model to a real-time Container Instance endpoint directly from the job it works. However, I'd like to create a real-time endpoint that uses the <strong>Managed Instance</strong> compute type.</p>
<p>The problem I'm running into is that when I create a new endpoint, I need to upload a <code>score.py</code> function. I've uploaded the <code>score.py</code> file that is automatically generated (attached below). However, the deployment then fails with:</p>
<pre><code> File "/azureml-envs/minimal/lib/python3.8/site-packages/azureml_inference_server_http/server/user_script.py", line 73, in load_script
main_module_spec.loader.exec_module(user_module)
File "<frozen importlib._bootstrap_external>", line 843, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/var/azureml-app/230210165016-1423170274/score.py", line 6, in <module>
from azureml.studio.core.io.model_directory import ModelDirectory
ModuleNotFoundError: No module named 'azureml.studio'
</code></pre>
<p>So it looks like the <code>azureml.studio</code> package is not available in the environment. Since the <code>azureml.studio</code> package doesn't seem to be public, I don't know how I can get the score.py function to work. I tried creating a custom environment and adding the package to the conda dependencies, but that didn't help either.</p>
<p>The <code>score.py</code> file:</p>
<pre><code>import os
import json
from collections import defaultdict
from pathlib import Path
from azureml.studio.core.io.model_directory import ModelDirectory
from azureml.studio.modules.recommendation.score_svd_recommender.score_svd_recommender import \
ScoreSVDRecommenderModule, RecommenderPredictionKind
from azureml.studio.common.datatable.data_table import DataTable
from azureml.designer.serving.dagengine.utils import decode_nan
from azureml.designer.serving.dagengine.converter import create_dfd_from_dict
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'trained_model_outputs')
schema_file_path = Path(model_path) / '_schema.json'
with open(schema_file_path) as fp:
schema_data = json.load(fp)
def init():
global model
model = ModelDirectory.load(load_from_dir=model_path).model
def run(data):
data = json.loads(data)
input_entry = defaultdict(list)
for row in data:
for key, val in row.items():
input_entry[key].append(decode_nan(val))
data_frame_directory = create_dfd_from_dict(input_entry, schema_data)
score_params = dict(
learner=model,
test_data=DataTable.from_dfd(data_frame_directory),
training_data=None,
prediction_kind=RecommenderPredictionKind.RatingPrediction)
result_dfd, = ScoreSVDRecommenderModule().run(**score_params)
result_df = result_dfd.data_frame
return json.dumps(result_df.to_dict("list"))
</code></pre>
<p>I would appreciate any comments and ideas. I've been stuck on this for a while. Thanks!</p>
|
<python><azure-machine-learning-service><azuremlsdk>
|
2023-02-11 10:39:24
| 0
| 1,655
|
picklepick
|
75,419,489
| 9,102,437
|
Jupyter lab doesn't open a notebook
|
<p>I am using JupyterLab on a server. I have recently reinstalled it using <code>pip-autoremove</code> and got a new issue: some <code>.ipynb</code> files which were created prior to the reinstall are not opening; they just display a blank page (as if there are no cells) and I am unable to run the code. At the same time, the Jupyter logs just spam the following:</p>
<pre><code>[E 2023-02-11 10:01:18.559 ServerApp] Uncaught exception GET /api/yjs/json:notebook:47dee2d6-9ed3-42b0-b765-dd14ef0d9928 (188.132.162.66)
HTTPServerRequest(protocol='http', host='35.188.218.231:8888', method='GET', uri='/api/yjs/json:notebook:47dee2d6-9ed3-42b0-b765-dd14ef0d9928', version='HTTP/1.1', remote_ip='188.132.162.66')
Traceback (most recent call last):
File "/home/jeff/.local/lib/python3.10/site-packages/tornado/websocket.py", line 944, in _accept_connection
await open_result
File "/home/jeff/.local/lib/python3.10/site-packages/jupyter_server_ydoc/handlers.py", line 222, in open
if self.room.document.source != model["content"]:
File "/home/jeff/.local/lib/python3.10/site-packages/jupyter_ydoc/ydoc.py", line 26, in source
return self.get()
File "/home/jeff/.local/lib/python3.10/site-packages/jupyter_ydoc/ydoc.py", line 166, in get
metadata=meta["metadata"],
KeyError: 'metadata'
</code></pre>
<p>I have installed JupyterLab with <code>pip install jupyterlab</code>, which should give the most recent version, but just in case here is the <code>jupyter --version</code> output:</p>
<pre class="lang-bash prettyprint-override"><code>Selected Jupyter core packages...
IPython : 8.10.0
ipykernel : 6.21.1
ipywidgets : 8.0.4
jupyter_client : 8.0.2
jupyter_core : 5.2.0
jupyter_server : 2.2.1
jupyterlab : 3.6.1
nbclient : 0.7.2
nbconvert : 7.2.9
nbformat : 5.7.3
notebook : 6.5.2
qtconsole : not installed
traitlets : 5.9.0
</code></pre>
<p>I am launching jupyterlab using <code>jupyter lab --ip 0.0.0.0 --port 8888 --no-browser --collaborative</code> to ensure that it is always on the port which is accessible and that I can use the notebook with other people.</p>
<p>I am not sure what could be causing this issue, because making a duplicate of the notebook fixes it entirely, but inspecting the code and permissions of the two resulting files reveals that they are absolutely the same. What could be causing this?</p>
<p>Bug report: <a href="https://github.com/jupyterlab/jupyterlab/issues/13966" rel="nofollow noreferrer">https://github.com/jupyterlab/jupyterlab/issues/13966</a></p>
|
<python><linux><jupyter-notebook><jupyter><jupyter-lab>
|
2023-02-11 10:19:12
| 0
| 772
|
user9102437
|
75,419,222
| 17,696,880
|
Replace all occurrences of a word with another specific word that must appear somewhere in the sentence before that word
|
<pre class="lang-py prettyprint-override"><code>import re
#example 1
input_text = "((PERSON)MarΓa Rosa) ((VERB)pasarΓ‘) unos dias aqui, hay que ((VERB)mover) sus cosas viejas de aqui, ya que sus cosmΓ©ticos ((VERB)estorban) si ((VERB)estan) tirados por aquΓ. ((PERSON)Cyntia) es una buena modelo, su cabello es muy bello, hay que ((VERB)lavar) su cabello"
#example 2
input_text = "Sus ΓΊtiles escolares ((VERB)estan) aqui, me sorprende que ((PERSON)Juan Carlos) los haya olvidado siendo que suele ((VERB)ser) tan cuidadoso con sus ΓΊtiles."
#I need replace "sus" or "su" but under certain conditions
subject_capture_pattern = r"\(\(PERSON\)((?:\w\s*)+)\)" #underlined in red in the image
associated_info_capture_pattern = r"(?:sus|su)\s+((?:\w\s*)+)(?:\s+(?:del|de )|\s*(?:\(\(VERB\)|[.,;]))" #underlined in green in the image
identification_pattern =
replacement_sequence =
input_text = re.sub(identification_pattern, replacement_sequence, input_text, flags = re.IGNORECASE)
</code></pre>
<p>this is the correct output:</p>
<pre class="lang-py prettyprint-override"><code>#for example 1
"((PERSON)MarΓa Rosa) ((VERB)pasarΓ‘) unos dias aqui, hay que ((VERB)mover) cosas viejas ((CONTEXT) de MarΓa Rosa) de aqui, ya que cosmΓ©ticos ((CONTEXT) de MarΓa Rosa) ((VERB)estorban) si ((VERB)estan) tirados por aquΓ. ((PERSON)Cyntia) es una buena modelo, cabello ((CONTEXT) de Cyntia) ((VERB)es) muy bello, hay que ((VERB)lavar) cabello ((CONTEXT) de Cyntia)"
#for example 2
"ΓΊtiles escolares ((CONTEXT) NO DATA) ((VERB)estan) aqui, me sorprende que ((PERSON)Juan Carlos) los haya olvidado siendo que suele ((VERB)ser) tan cuidadoso con ΓΊtiles ((CONTEXT) Juan Carlos)."
</code></pre>
<p>Details:</p>
<p>Replace the possessive pronouns <code>"sus"</code> or <code>"su"</code> with <code>"de " + the content inside the last ((PERSON) "THIS SUBSTRING")</code>, and if there is no <code>((PERSON) "THIS SUBSTRING")</code> before, then replace <code>sus</code> or <code>su</code> with <code>((CONTEXT) NO DATA)</code></p>
<p>Sentences are read from left to right, so the replacement will be the substring inside the parentheses <code>((PERSON)the substring)</code> before that <code>"sus"</code> or <code>"su"</code>, as shown in the example.</p>
<p>In the end, the replaced substrings should end up with this structure:</p>
<p><code>associated_info_capture_pattern + "((CONTEXT)" + subject_capture_pattern + ")"</code></p>
<p><a href="https://i.sstatic.net/PyDyR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PyDyR.png" alt="enter image description here" /></a></p>
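A sketch of one possible approach (simplified: it resolves each "su"/"sus" against the most recent ((PERSON)…) tag, and the possessive pattern here is a loose variant of the one above, so the output format differs slightly from the exact target shown):

```python
import re

PERSON = re.compile(r"\(\(PERSON\)([^)]+)\)")
POSSESSIVE = re.compile(r"\b[Ss]us?\s+(\w[\w ]*?)(?=\s*(?:\(\(VERB\)|[.,;]|$))")

def _rewrite(segment, person):
    # "su(s) <phrase>" -> "<phrase> ((CONTEXT) de <person>)"
    return POSSESSIVE.sub(
        lambda m: f"{m.group(1)} ((CONTEXT) de {person})", segment
    )

def replace_possessives(text):
    # Walk left to right, remembering the last ((PERSON)...) seen, and
    # rewrite possessives only in the stretch that follows it.
    out, last, current = [], 0, "NO DATA"
    for m in PERSON.finditer(text):
        out.append(_rewrite(text[last:m.start()], current))
        out.append(m.group(0))
        current, last = m.group(1), m.end()
    out.append(_rewrite(text[last:], current))
    return "".join(out)

print(replace_possessives("((PERSON)Ana) hay que lavar su auto."))
```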
|
<python><python-3.x><regex><string><regex-group>
|
2023-02-11 09:29:51
| 1
| 875
|
Matt095
|
75,419,141
| 1,999,585
|
How can I efficiently create a list with the figures of a number in Python?
|
<p>I have a number and I want to create a list whose elements are the digits of that number.</p>
<p>As an example, I have <code>n=225</code> and I want to have the list <code>l=[2,2,5]</code>.</p>
<p>I can convert the number into a string, then convert the string as a list, like this:</p>
<pre><code>l = list(str(n))
l = [int(i) for i in l]
</code></pre>
<p>But I think there is a better way of doing it. Can you help me?</p>
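For comparison, a loop with <code>divmod</code> avoids the int → str → int round trip entirely (whether it is actually faster depends on the size of n, so treat this as a sketch):

```python
def digits(n):
    # Peel digits off the least-significant end, then reverse.
    out = []
    while True:
        n, d = divmod(n, 10)
        out.append(d)
        if n == 0:
            break
    return out[::-1]

print(digits(225))  # [2, 2, 5]
```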
|
<python>
|
2023-02-11 09:11:44
| 0
| 2,424
|
Bogdan Doicin
|
75,419,046
| 19,429,024
|
Why can't I sort columns in my PyQt5 QTableWidget using UserRole data?
|
<p>I am trying to sort my QTableWidget columns by the values stored in the user role of each QTableWidgetItem, but I am unable to do so. I have enabled sorting with <code>self.setSortingEnabled(True)</code>, and I have set the data in each QTableWidgetItem with <code>item.setData(Qt.DisplayRole, f'M - {r}')</code> and <code>item.setData(Qt.UserRole, r)</code>. However, when I try to sort the columns by the values stored in the user role, it sorts the columns by the values stored in the display role instead.</p>
<p>Here is a minimal working example of my code:</p>
<pre class="lang-py prettyprint-override"><code>from random import randint
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QApplication, QMainWindow, QTableWidget, QWidget, QGridLayout, \
QTableWidgetItem, QPushButton
class Table(QTableWidget):
def __init__(self):
super().__init__()
self.setSortingEnabled(True)
def populate(self):
self.clear()
self.setColumnCount(3)
self.setRowCount(200)
for row in range(500):
for column in range(3):
r = randint(0, 1000)
item = QTableWidgetItem()
item.setData(Qt.DisplayRole, f'M - {r}')
item.setData(Qt.UserRole, r)
self.setItem(row, column, item)
class MainApp(QMainWindow):
def __init__(self):
super().__init__()
self.table = Table()
self.button = QPushButton('Roll')
self.button.clicked.connect(self.table.populate)
layout = QWidget()
self.setCentralWidget(layout)
grid = QGridLayout()
layout.setLayout(grid)
grid.addWidget(self.button)
grid.addWidget(self.table)
if __name__ == '__main__':
app = QApplication([])
main_app = MainApp()
main_app.showMaximized()
app.exec()
</code></pre>
<p><img src="https://i.sstatic.net/gPh0i.png" alt="Example" /></p>
<p>Additionally, I tried using EditRole, but the values that appear in the table are not the values from DisplayRole. For example, in the code below, I set item.setData(Qt.DisplayRole, f'M - {r}'), but even though r is an integer, the display role value is a string ('M - {r}'). I was hoping that sorting by UserRole or EditRole would sort based on the integer value of r, but that doesn't seem to be the case.</p>
<pre class="lang-py prettyprint-override"><code>item.setData(Qt.DisplayRole, f'M - {r}')
item.setData(Qt.EditRole, int(r))
</code></pre>
|
<python><qt><pyqt><pyqt5>
|
2023-02-11 08:51:01
| 2
| 587
|
Collaxd
|
75,418,911
| 9,363,441
|
How to set connection with database in the api Flask?
|
<p>I'm trying to create an API on my Linux PC. At this moment I have support for some basic requests which were done just for testing. My API works in cooperation with <code>uwsgi+nginx+flask</code>. Now I'm trying to add a connection to the database. For this purpose I installed <code>MySQL</code> and created a database, but I don't understand how to connect from the API to the database. For example, here is the code of a script which can connect to the DB, but it works separately from the API:</p>
<pre><code>try:
connection = mysql.connector.connect(host='localhost',
database='tired_db',
user='test',
password='pw')
if connection.is_connected():
mycursor = connection.cursor()
mycursor.execute("SHOW TABLES")
for x in mycursor:
print(x)
return connection
except Error as e:
print("Error while connecting to MySQL", e)
finally:
if connection.is_connected():
mycursor.close()
connection.close()
print("MySQL connection is closed")
</code></pre>
<p>and it works correctly. I thought that maybe I can call this connection like some metaclass:</p>
<pre><code>import mysql.connector
from mysql.connector import Error
class DbProvider(type):
@property
def my_data(cls):
try:
connection = mysql.connector.connect(host='localhost',
database='tired_db',
user='test',
password='pw')
if connection.is_connected():
mycursor = connection.cursor()
mycursor.execute("SHOW TABLES")
for x in mycursor:
print(x)
return connection
except Error as e:
print("Error while connecting to MySQL", e)
finally:
if connection.is_connected():
mycursor.close()
connection.close()
print("MySQL connection is closed")
class MyClass(metaclass=DbProvider):
pass
if __name__ == "__main__":
MyClass.my_data
</code></pre>
<p>but I think that such stuff can be done with more efficient way. For example here is some request in the api:</p>
<pre><code>@app.route("/api/login", methods = ['POST'])
def logIn():
return "all is ok"
</code></pre>
<p>and the idea is that, for example, during this request I have to connect to the DB and check whether a user exists or not and, if all is ok, generate and save a token to the database. I don't understand whether it is important to keep the connection alive during the API's whole uptime or only during requests, and whether it is important to close the connection after every request or to keep it alive forever. Also, how do I call the connection from a separate class, or do I have to have everything in one file together with the API calls?</p>
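One common pattern (sketched here with sqlite3 so it runs standalone — the same shape applies to mysql.connector, and the table/column names are made up) is to open a short-lived connection per request inside a context manager rather than keeping one global connection alive for the API's whole uptime:

```python
import sqlite3
from contextlib import contextmanager

@contextmanager
def get_db(path):
    # Open per request / unit of work; commit on success, always close.
    conn = sqlite3.connect(path)
    try:
        yield conn
        conn.commit()
    finally:
        conn.close()

# Inside a Flask view you would write, e.g.:
#   with get_db("tired.db") as conn:
#       row = conn.execute("SELECT ... WHERE user=?", (name,)).fetchone()
with get_db(":memory:") as conn:
    conn.execute("CREATE TABLE tokens (user TEXT, token TEXT)")
    conn.execute("INSERT INTO tokens VALUES (?, ?)", ("alice", "abc"))
    found = conn.execute(
        "SELECT token FROM tokens WHERE user=?", ("alice",)
    ).fetchone()
print(found)  # ('abc',)
```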
|
<python><mysql><flask>
|
2023-02-11 08:18:50
| 1
| 2,187
|
Andrew
|
75,418,680
| 9,757,174
|
firebase-admin adding a random alphanumeric key to the realtime database when using push()
|
<p>I am using the Python <code>firebase-admin</code> library to integrate Django REST framework with the Firebase Realtime Database. I am using the <code>push()</code> function to create a new child node. However, the function adds an alphanumeric key to each child. Is there a way I can avoid that and add my own custom keys to the data?</p>
<p>See example below:</p>
<pre class="lang-py prettyprint-override"><code> def post(self, request):
"""Function to handle post requests
Args:
request (_type_): _description_
"""
# Get the data to be posted from the request
name = request.POST.get('name')
age = request.POST.get('age')
location = request.POST.get('location')
# Set the reference to the database
ref = db.reference('/')
# Push the data to the database
ref.push({"user1" : {
'Name': name,
'Age': age,
'Location': location
}})
return JsonResponse({"message": "Data posted successfully"}, status=200)
</code></pre>
<p>When I run this, the node is created as follows</p>
<pre class="lang-json prettyprint-override"><code>{
"data": {
"-NNzIPh4SUHo6FLhs060": {
"user1": {
"Age": "20",
"Location": "US",
"Name": "Dummy name 2"
}
},
"user2": {
"Age": "22",
"Location": "count1",
"Name": "Dummy 1"
}
}
}
</code></pre>
<p>The <code>-NNzIPh4SUHo6FLhs060</code> key is created which I want to customize.</p>
|
<python><django><firebase><django-rest-framework><firebase-admin>
|
2023-02-11 07:19:26
| 1
| 1,086
|
Prakhar Rathi
|
75,418,655
| 6,778,118
|
How to load private python package when loading a MLFlow model?
|
<p>I am trying to use a private Python package as a model using the <code>mlflow.pyfunc.PythonModel</code>.</p>
<p>My <code>conda.yaml</code> looks like</p>
<pre><code>channels:
- defaults
dependencies:
- python=3.10.4
- pip
- pip:
- mlflow==2.1.1
- pandas
- --extra-index-url <private-pypa-repo-link>
- <private-package>
name: model_env
</code></pre>
<p><code>python_env.yaml</code></p>
<pre><code>python: 3.10.4
build_dependencies:
- pip==23.0
- setuptools==58.1.0
- wheel==0.38.4
dependencies:
- -r requirements.txt
</code></pre>
<p><code>requirements.txt</code></p>
<pre><code>mlflow==2.1.1
pandas
--extra-index-url <private-pypa-repo-link>
<private-package>
</code></pre>
<p>When running the following</p>
<pre class="lang-py prettyprint-override"><code>import mlflow
model_uri = '<run_id>'
# Load model as a PyFuncModel.
loaded_model = mlflow.pyfunc.load_model(model_uri)
# Predict on a Pandas DataFrame.
import pandas as pd
t = loaded_model.predict(pd.read_json("test.json"))
print(t)
</code></pre>
<p>The result is</p>
<pre><code>WARNING mlflow.pyfunc: Encountered an unexpected error (InvalidRequirement('Parse error at "\'--extra-\'": Expected W:(0-9A-Za-z)')) while detecting model dependency mismatches. Set logging level to DEBUG to see the full traceback.
</code></pre>
<p>Adding the following before loading the model makes it work:</p>
<pre class="lang-py prettyprint-override"><code>dep = mlflow.pyfunc.get_model_dependencies(model_uri)
print(dep)
import subprocess
import sys
subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", dep])
</code></pre>
<p>Is there a way to automatically install these dependencies rather than doing it explicitly? What are my options to get MLflow to install the private package?</p>
|
<python><pandas><mlflow><mlops>
|
2023-02-11 07:13:57
| 1
| 383
|
Vikrant Yadav
|
75,418,351
| 14,436,930
|
np.pad significantly slower than concatenate or assignment
|
<p>Given a list of <code>n</code> integer arrays of variable lengths (always less than 500), my goal is to form a single matrix of size <code>(n, 500)</code>, where arrays shorter than 500 are padded with a given constant at the front. However, I noticed that np.pad, which is designed for padding, is actually very slow compared to other methods; see the benchmark code below:</p>
<pre><code>import random
import time
import numpy as np
def pad(arr):
retval = np.empty((500,), dtype=np.int64)
idx = 500 - len(arr)
retval[:idx] = 100001 # pad value
retval[idx:] = arr # original array
return retval
a = [np.random.randint(low=0, high=100000, size=(random.randint(5, 500),), dtype=np.int64) for _ in range(32)]
# approach 1: np.pad
t = time.time()
for _ in range(10000):
b = np.array([np.pad(cur, pad_width=(500 - len(cur), 0), mode='constant', constant_values=100001) for cur in a])
print(time.time() - t)
# approach 2: np.concatenate
t = time.time()
for _ in range(10000):
b = np.array([np.concatenate((np.full((500 - len(cur),), 100001), cur)) for cur in a])
print(time.time() - t)
# approach 3: assign to an empty array
t = time.time()
for _ in range(10000):
b = np.array([pad(cur) for cur in a])
print(time.time() - t)
b1 = np.array([np.pad(cur, pad_width=(500 - len(cur), 0), mode='constant', constant_values=100001) for cur in a])
b2 = np.array([np.concatenate((np.full((500 - len(cur),), 100001), cur)) for cur in a])
b3 = np.array([pad(cur) for cur in a])
print(np.allclose(b1, b2))
print(np.allclose(b1, b3))
print(np.allclose(b2, b3))
</code></pre>
<p>Output:</p>
<pre><code>5.376873016357422
1.297654151916504
0.5892848968505859
True
True
True
</code></pre>
<p>Why is np.pad so slow? (It is actually 10 times slower than assigning to an empty array.) The custom pad() above could be optimized further by simply creating a single np.empty of size <code>(n, 500)</code>, which is even faster, but for fairness of comparison I still did the padding per row. I have also tried commenting the others out and benchmarking one by one, but the result is similar, so it probably isn't something like a caching issue.</p>
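As a follow-up to the remark about a single allocation: a sketch of the whole-matrix variant (assuming numpy is available), which removes the per-row Python temporaries entirely:

```python
import numpy as np

def pad_rows(arrays, width=500, fill=100001):
    # One allocation for the whole (n, width) result; each source array
    # is copied into the tail of its row, the head keeps the fill value.
    out = np.full((len(arrays), width), fill, dtype=np.int64)
    for i, arr in enumerate(arrays):
        out[i, width - len(arr):] = arr
    return out
```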
|
<python><arrays><numpy><performance>
|
2023-02-11 05:50:52
| 2
| 671
|
seermer
|
75,418,252
| 19,238,204
|
How to Create 3D Torus from Circle Revolved about x=2r, r is the radius of circle (Python or JULIA)
|
<p>I need help to create a torus out of a circle by revolving it about x=2r, where r is the radius of the circle.</p>
<p>I am open to either Julia code or Python code, whichever can solve my problem most efficiently.</p>
<p>I have Julia code to plot circle and the x=2r as the axis of revolution.</p>
<pre><code>using Plots, LaTeXStrings, Plots.PlotMeasures
gr()
θ = 0:0.1:2.1π
x = 0 .+ 2cos.(θ)
y = 0 .+ 2sin.(θ)
plot(x, y, label=L"x^{2} + y^{2} = a^{2}",
framestyle=:zerolines, legend=:outertop)
plot!([4], seriestype="vline", color=:green, label="x=2a")
</code></pre>
<p><a href="https://i.sstatic.net/M6mS4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6mS4.png" alt="1" /></a></p>
<p>I want to create a torus out of it, but unable, meanwhile I have solid of revolution Python code like this:</p>
<pre><code># Calculate the surface area of y = sqrt(r^2 - x^2)
# revolved about the x-axis
import matplotlib.pyplot as plt
import numpy as np
import sympy as sy
x = sy.Symbol("x", nonnegative=True)
r = sy.Symbol("r", nonnegative=True)
def f(x):
return sy.sqrt(r**2 - x**2)
def fd(x):
return sy.simplify(sy.diff(f(x), x))
def f2(x):
return sy.sqrt((1 + (fd(x)**2)))
def vx(x):
return 2*sy.pi*(f(x)*sy.sqrt(1 + (fd(x) ** 2)))
vxi = sy.Integral(vx(x), (x, -r, r))
vxf = vxi.simplify().doit()
vxn = vxf.evalf()
n = 100
fig = plt.figure(figsize=(14, 7))
ax1 = fig.add_subplot(221)
ax2 = fig.add_subplot(222, projection='3d')
ax3 = fig.add_subplot(223)
ax4 = fig.add_subplot(224, projection='3d')
# 1 is the starting point. The first 3 is the end point.
# The last 200 is the number of discretization points.
# help(np.linspace) to read its documentation.
x = np.linspace(1, 3, 200)
# Plot the circle
y = np.sqrt(2 ** 2 - x ** 2)
t = np.linspace(0, np.pi * 2, n)
xn = np.outer(x, np.cos(t))
yn = np.outer(x, np.sin(t))
zn = np.zeros_like(xn)
for i in range(len(x)):
zn[i:i + 1, :] = np.full_like(zn[0, :], y[i])
ax1.plot(x, y)
ax1.set_title("$f(x)$")
ax2.plot_surface(xn, yn, zn)
ax2.set_title("$f(x)$: Revolution around $y$")
# find the inverse of the function
y_inverse = x
x_inverse = np.power(2 ** 2 - y_inverse ** 2, 1 / 2)
xn_inverse = np.outer(x_inverse, np.cos(t))
yn_inverse = np.outer(x_inverse, np.sin(t))
zn_inverse = np.zeros_like(xn_inverse)
for i in range(len(x_inverse)):
zn_inverse[i:i + 1, :] = np.full_like(zn_inverse[0, :], y_inverse[i])
ax3.plot(x_inverse, y_inverse)
ax3.set_title("Inverse of $f(x)$")
ax4.plot_surface(xn_inverse, yn_inverse, zn_inverse)
ax4.set_title("$f(x)$: Revolution around $x$ \n Surface Area = {}".format(vxn))
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/jV4Re.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jV4Re.png" alt="2" /></a></p>
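<p>A hedged parametric sketch (not from the post): a circle of radius r revolved about a line at distance R = 2r from its center gives a torus with tube radius r and center-circle radius R. The mesh below is plain NumPy; the resulting <code>X, Y, Z</code> arrays can be handed to e.g. matplotlib's <code>Axes3D.plot_surface(X, Y, Z)</code> in place of the revolved-circle arrays above.</p>

```python
import numpy as np

r, R = 2.0, 4.0                          # tube radius, revolution radius (R = 2r)
theta = np.linspace(0, 2 * np.pi, 100)   # angle around the tube
phi = np.linspace(0, 2 * np.pi, 100)     # angle around the revolution axis
theta, phi = np.meshgrid(theta, phi)

# standard torus parametrization
X = (R + r * np.cos(theta)) * np.cos(phi)
Y = (R + r * np.cos(theta)) * np.sin(phi)
Z = r * np.sin(theta)
```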
|
<python><julia>
|
2023-02-11 05:23:33
| 3
| 435
|
Freya the Goddess
|
75,418,186
| 7,797,210
|
Linux NoHup fails for Streaming API IG Markets where file is python
|
<p>This is quite a specific question regarding nohup in Linux, which runs a Python file.
Back-story: I am trying to save down streaming data (from the IG Markets broadcast signal). As I am trying to run it via a remote server (so I don't have to keep my own local desktop up 24/7),
somehow, nohup will not engage when the script listens to a broadcast signal.</p>
<p>Below, is the example python code</p>
<pre><code>#!/usr/bin/env python
#-*- coding:utf-8 -*-
"""
IG Markets Stream API sample with Python
"""
user_ = 'xxx'
password_ = 'xxx'
api_key_ = 'xxx' # this is the 1st api key
account_ = 'xxx'
acc_type_ = 'xxx'
fileLoc = 'marketdata_IG_spx_5min.csv'
list_ = ["CHART:IX.D.SPTRD.DAILY.IP:5MINUTE"]
fields_ = ["UTM", "LTV", "TTV", "BID_OPEN", "BID_HIGH", \
"BID_LOW", "BID_CLOSE",]
import time
import sys
import pandas as pd  # needed below for read_csv/DataFrame
import traceback
import logging
import warnings
warnings.filterwarnings('ignore')
from trading_ig import (IGService, IGStreamService)
from trading_ig.lightstreamer import Subscription
cols_ = ['timestamp', 'data']
# A simple function acting as a Subscription listener
def on_prices_update(item_update):
# print("price: %s " % item_update)
    print("xxxxxxxx")
# A simple function acting as a Subscription listener
def on_charts_update(item_update):
# print("price: %s " % item_update)
    print("xxxxxx"\
.format(
stock_name=item_update["name"], **item_update["values"]
))
    res_ = [("xxxxx"\
            .format(
            stock_name=item_update["name"], **item_update["values"]
            ).split(' '))]
# display(pd.DataFrame(res_))
try:
data_ = pd.read_csv(fileLoc)[cols_]
data_ = data_.append(pd.DataFrame(res_, columns = cols_))
data_.to_csv(fileLoc)
print('there is data and we are reading it')
# display(data_)
except:
pd.DataFrame(res_, columns = cols_).to_csv(fileLoc)
print('there is no data and we are saving first time')
time.sleep(60) # sleep for 1 min
def main():
logging.basicConfig(level=logging.INFO)
# logging.basicConfig(level=logging.DEBUG)
ig_service = IGService(
user_, password_, api_key_, acc_type_
)
ig_stream_service = IGStreamService(ig_service)
ig_session = ig_stream_service.create_session()
accountId = account_
################ my code to set sleep function to sleep/read at only certain time intervals
s_time = time.time()
############################
# Making a new Subscription in MERGE mode
subscription_prices = Subscription(
mode="MERGE",
# make sure to put L1 in front of the instrument name
items= list_,
fields= fields_
)
# adapter="QUOTE_ADAPTER")
# Adding the "on_price_update" function to Subscription
subscription_prices.addlistener(on_charts_update)
# Registering the Subscription
sub_key_prices = ig_stream_service.ls_client.subscribe(subscription_prices)
print('this is the line here')
input("{0:-^80}\n".format("HIT CR TO UNSUBSCRIBE AND DISCONNECT FROM \
LIGHTSTREAMER"))
# Disconnecting
ig_stream_service.disconnect()
if __name__ == '__main__':
main()
#######
</code></pre>
<p>Then, I try to run it on Linux using this command: <code>nohup python marketdata.py</code>,
where marketdata.py is basically the Python code above.</p>
<p>Somehow, nohup will not engage. Can any experts/gurus see what I am missing in my code?</p>
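<p>A minimal sketch of one likely fix (file names are illustrative): a backgrounded nohup process has no usable terminal, and with stdin redirected from <code>/dev/null</code> the interactive <code>input()</code> prompt at the end of <code>main()</code> hits end-of-file and the script exits at once, so it should be replaced with e.g. a <code>time.sleep()</code> loop before running this way.</p>

```shell
# Background the script, capture its output, and keep its PID around.
nohup python marketdata.py < /dev/null > marketdata.log 2>&1 &
echo $! > marketdata.pid   # save the PID so the stream can be stopped later
```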
|
<python><linux><algorithmic-trading><nohup>
|
2023-02-11 05:02:03
| 1
| 571
|
Kiann
|
75,418,176
| 2,324,298
|
BERT get top N words for each category
|
<p>I trained a BERT text classification model with 38 categories. Now, for each of these 38 categories I want to find out the top N words.</p>
<p>To do that, I used sklearn's CountVectorizer to create a vocabulary from the training dataset.</p>
<p>I passed that vocabulary to the tokenizer and used those tokens to pass to the model and get the last layer activations. So now I have a dataframe thats <code>vocab x num categories</code> size.</p>
<pre><code>from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
import pandas as pd
import torch, os
from transformers import AutoTokenizer, AutoModelForSequenceClassification
if torch.cuda.is_available():
device = torch.device("cuda")
print('GPU:', torch.cuda.get_device_name(0))
else:
device = torch.device("cpu")
# Define a function to compute the activations of a hidden state for a given input
def get_hidden_state_activations(input_ids, model, layer_idx):
with torch.no_grad():
outputs = model(input_ids)
activations = outputs[layer_idx]
return activations#.mean(dim=1)
# create the vocabulary
vectorizer = CountVectorizer(ngram_range=(1,1), max_features=10000, min_df=50)
# vectorizer = TfidfVectorizer(ngram_range=(1,10), max_features=200, use_idf=True)
vecs = vectorizer.fit_transform(corpus)
print('collecting frequencies')
dense = vecs.todense()
lst1 = dense.tolist()
feature_names = vectorizer.get_feature_names_out()
print('sorting them around')
# get all data into a DF
df = pd.DataFrame(lst1, columns=feature_names)
df = df.T.sum(axis=1).sort_values(ascending=False).reset_index()
df.columns = ['word', 'frequency']
df = df[~df.word.str.isdigit()].reset_index(drop=True)
# load models
tokenizer = AutoTokenizer.from_pretrained(TOKENIZER_MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(BERT_MODEL_NAME).to(device)
# data and labels
data = df.word.tolist()
labels = []
# Convert the data to input ids using the tokenizer
input_ids = [tokenizer.encode(example, add_special_tokens=True, return_tensors='pt') for example in data]
# Compute the activations of the hidden states for each input in the data
layer_idx = -1 # Choose the index of the layer to extract activations from
activations = [get_hidden_state_activations(input_id.to(device), model, layer_idx) for input_id in tqdm(input_ids)]
# get all activations into a df
activations_df = pd.DataFrame([i.cpu().detach().numpy().reshape(len(labels)) for i in activations])
activations_df.columns = labels
activations_df['word'] = data
activations_df = activations_df[['word'] + labels.tolist()]
# activations_df = activations_df.set_index('word')
activations_df.head(2)
</code></pre>
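<p>Once <code>activations_df</code> is indexed by word, the top-N words per category can be read off with <code>Series.nlargest</code>. A sketch with a toy stand-in frame (the words, category names, and scores below are made up for illustration; the real ones come from the model):</p>

```python
import pandas as pd

# toy stand-in for activations_df: rows = words, columns = categories
activations_df = pd.DataFrame({
    "word":     ["goal", "election", "stadium", "ballot"],
    "sports":   [0.9, 0.1, 0.8, 0.2],
    "politics": [0.2, 0.95, 0.1, 0.85],
}).set_index("word")

top_n = 2
# for each category column, take the N words with the highest activation
top_words = {
    category: activations_df[category].nlargest(top_n).index.tolist()
    for category in activations_df.columns
}
```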
|
<python><pytorch><huggingface-transformers><bert-language-model>
|
2023-02-11 04:58:07
| 1
| 8,005
|
Clock Slave
|
75,418,150
| 1,354,930
|
Is there a way to access the "another exceptions" that happen in a python3 traceback?
|
<p>Let's assume you have some simple code that <em>you don't control</em> (eg: it's in a module you're using):</p>
<pre class="lang-py prettyprint-override"><code>def example():
try:
raise TypeError("type")
except TypeError:
raise Exception("device busy")
</code></pre>
<p>How would I go about accessing the <code>TypeError</code> in this traceback in order to handle it?</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/google/home/dthor/dev/pyle/pyle/fab/visa_instrument/exception_helpers.py", line 3, in example
raise TypeError("type")
TypeError: type
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/google/home/dthor/dev/pyle/pyle/fab/visa_instrument/exception_helpers.py", line 7, in <module>
example()
File "/usr/local/google/home/dthor/dev/pyle/pyle/fab/visa_instrument/exception_helpers.py", line 5, in example
raise Exception("device busy")
Exception: device busy
</code></pre>
<p>I can do the below, but i'm not happy with it because I'm doing string comparison - meaning things would break if the underlying module changes what string they raise (I don't control <code>example()</code>):</p>
<pre class="lang-py prettyprint-override"><code>try:
example()
except Exception as err:
if "device busy" in str(err):
# do the thing
pass
# But raise any other exceptions
raise err
</code></pre>
<p>I'd much rather have:</p>
<pre><code>try:
example()
except Exception as err:
if TypeError in err.other_errors: # magic that doesn't exist
# do the thing
pass
raise err
</code></pre>
<p>or even</p>
<pre><code>try:
example()
except TypeError in exception_stack: # Magic that doesn't exist
# do the thing
pass
except Exception:
</code></pre>
<p>I'm investigating the <code>traceback</code> module and <code>sys.exc_info()</code>, but don't have anything concrete yet.</p>
<p>Followup: would things be different if the exception was <em>chained</em>? Eg: <code>raise Exception from the_TypeError_exception</code></p>
|
<python><python-3.x><error-handling><try-except><traceback>
|
2023-02-11 04:49:10
| 1
| 1,917
|
dthor
|
75,418,148
| 9,596,111
|
Why does the "is" operator behave differently for strings and lists in Python?
|
<p>From what I read in the documents,</p>
<blockquote>
<p><code>is</code> operator is used to check if two values are located on the same part of the memory</p>
</blockquote>
<p>So I compared two empty lists and as I expected I got <code>False</code> as a result.</p>
<pre><code>print([] is []) # False
</code></pre>
<p>But why is it different for strings?</p>
<pre><code>print('' is '') # True
</code></pre>
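<p>A hedged sketch of what is going on: every list display builds a fresh object, while CPython interns many compile-time string constants (including <code>''</code>), so two identical string literals can end up referring to one object. This is a CPython implementation detail, not a language guarantee, so identity on strings should not be relied upon; <code>sys.intern</code> makes the sharing explicit.</p>

```python
import sys

# Two list displays always build two distinct objects:
x, y = [], []
assert x is not y

# Identical string constants may share one object in CPython:
a, b = "", ""
print(a is b)  # usually True on CPython, but an implementation detail

# A string built at runtime is generally a new object until interned:
c = "".join(["he", "llo"])
assert sys.intern(c) is sys.intern("hello")
```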
|
<python><arrays><string>
|
2023-02-11 04:48:56
| 0
| 878
|
Maran Sowthri
|
75,418,054
| 58,845
|
Where can I find information about model size or model-loading rate limits when using MLflow on Azure Databricks?
|
<p>How can I find out what the limits might be to the rate at which I can load MLflow models or the size of models I can register with MLflow? (I'm using MLflow as integrated with Azure Databricks.)</p>
<p>I've been looking at these two posts from the Databricks blog:</p>
<ul>
<li><a href="https://www.databricks.com/blog/2022/07/20/parallel-ml-how-compass-built-a-framework-for-training-many-machine-learning-models-on-databricks.html" rel="nofollow noreferrer">Parallel ML: How Compass Built a Framework for Training Many Machine Learning Models on Databricks</a></li>
<li><a href="https://www.databricks.com/blog/2021/09/21/managing-model-ensembles-with-mlflow.html" rel="nofollow noreferrer">Managing Model Ensembles With MLflow</a></li>
</ul>
<p>Both address ways to use a single custom MLflow PythonModel to wrap what are actually multiple models behind the scenes. (My use case is the same as in the first link - my "model" is really multiple models fit to sub-groups of a larger dataset.)</p>
<p>In either approach, however, there must be some limit or bottleneck. If I'm doing a batch of predictions, then a "meta-model" that loads the correct per-group model when it is asked to serve up a prediction will quickly send many <code>load_model</code> requests. If I pre-package multiple per-group models into a single custom model and register that, then there is only one thing to load but that thing might wind up quite large.</p>
<p>I've coded up small prototype solutions using both approaches and they work fine. And of course I plan to test full-sized solutions to see if they work in practice within our environment. But I'd feel better about choosing a path if I <em>also</em> had some source that described what the limits could or should be. Are they configurable within my Databricks workspace or more fundamental, etc.?</p>
<p>Could someone who knows Databricks and/or MLflow point me in the right direction?</p>
<p>EDIT: Updating with a bit of info I found...the <a href="https://docs.databricks.com/mlflow/tracking.html#:%7E:text=The%20maximum%20size%20for%20an%20MLflow%20artifact%20uploaded,must%20download%20them%20using%20a%20blob%20storage%20client." rel="nofollow noreferrer">MLflow guide for Databricks <em>on AWS</em></a> says, "The maximum size for an MLflow artifact uploaded to DBFS on AWS is 5GB." The same section of the <a href="https://learn.microsoft.com/en-us/azure/databricks/mlflow/tracking" rel="nofollow noreferrer">matching guide for Azure Databricks</a> doesn't mention a size limit. So perhaps this is something configurable within Azure Databricks? (My question is specific to Azure Databricks.)</p>
|
<python><azure><databricks><azure-databricks><mlflow>
|
2023-02-11 04:12:17
| 0
| 7,164
|
jtolle
|
75,417,919
| 11,652,655
|
Computing similarities between pairs of words
|
<p>This is the code I am using to compute similarities between pairs of words.</p>
<pre><code>computed_similarities=[]
for s in nlp.vocab.vectors:
_:nlp.vocab[s]
for word in nlp.vocab:
if word.has_vector:
if word.is_lower:
if word.is_alpha:
similarity=cosine_similarity(new_vec,word.vector)
computed_similarities.append((word,similarity))
computed_similarities=sorted (computed_similarities, key=lambda item:-item[1])
print([t[0].text for t in computed_similarities[:10] ])
</code></pre>
<p>What I didn't understand is what this piece of code means:</p>
<pre><code>for s in nlp.vocab.vectors:
_:nlp.vocab[s]
</code></pre>
<p>What does it do?</p>
|
<python><scipy><spacy>
|
2023-02-11 03:25:34
| 1
| 1,285
|
Seydou GORO
|
75,417,698
| 16,124,033
|
What type of code should I use to insert some strings in a string?
|
<p>I want to insert some strings in a string.</p>
<p>All I know is that there are four ways to do this, here are four examples:</p>
<pre class="lang-py prettyprint-override"><code>query = "What type of code should I use to insert some strings in a string?"
category = "Python"
query_category = "".join(["Query: ", query, " Category: ", category])
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code>query = "What type of code should I use to insert some strings in a string?"
category = "Python"
query_category = "Query: " + query + " Category: " + category
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code>query = "What type of code should I use to insert some strings in a string?"
category = "Python"
query_category = f"Query: {query} Category: {category}"
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code>query = "What type of code should I use to insert some strings in a string?"
category = "Python"
query_category = "Query: {query} Category: {category}".format(query = query, category = category)
</code></pre>
<p>What type of code should I use to insert some strings in a string? Can anyone explain the pros and cons of each code?</p>
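<p>A quick sketch confirming that all four approaches build the identical string; they differ only in readability and (slightly) in speed. F-strings are usually the most readable choice and typically the fastest in CPython, while <code>.format</code> is handy when the template string is stored separately (e.g. in a config file).</p>

```python
query = "What type of code should I use to insert some strings in a string?"
category = "Python"

joined = "".join(["Query: ", query, " Category: ", category])
concat = "Query: " + query + " Category: " + category
fstring = f"Query: {query} Category: {category}"
formatted = "Query: {query} Category: {category}".format(query=query, category=category)

# all four produce the same result
assert joined == concat == fstring == formatted
```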
|
<python>
|
2023-02-11 02:10:42
| 1
| 4,650
|
My Car
|
75,417,677
| 219,976
|
How to debug docker application running by gunicorn by PyCharm
|
<p>I have django rest framework application with socket.io which is run in docker using gunicorn as a WSGI-server. Here's how application is run in docker-compose:</p>
<pre><code> web:
build:
context: .
command:
- gunicorn
- my_app.wsgi:application
ports:
- "8000:8000"
</code></pre>
<p>I want to debug it (with breakpoints, step-by-step execution and variable watch) inside Docker using PyCharm Professional.
I already set my Python interpreter to the remote one from docker-compose:</p>
<p><a href="https://i.sstatic.net/RBXuZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RBXuZ.png" alt="enter image description here" /></a></p>
<p>What to do next to debug the application? It's important that it is run by gunicorn as well as I use custom worker which I want to debug as well.</p>
|
<python><docker><debugging><pycharm><gunicorn>
|
2023-02-11 02:05:38
| 0
| 6,657
|
StuffHappens
|
75,417,661
| 9,988,487
|
Make a Python memory leak on purpose
|
<p>I'm looking for an example that purposely makes a memory leak in Python.</p>
<p>It should be as short and simple as possible and ideally not use non-standard dependencies (that could simply do the memory leak in C code) or multi-threading/processing.</p>
<p>I've seen memory leaks achieved before but only when bad things were being done to libraries such as <code>matplotlib</code>. Also, there are many questions about how to find and fix memory leaks in Python, but they all seem to be big programs with lots of external dependencies.</p>
<p>The reason for asking this is to find out how good Python's GC really is. I know it detects reference cycles. However, can it be tricked? Is there some way to leak memory? It may be impossible to solve the most restrictive version of this problem; in that case, I'm very happy to see a rigorous argument why. Ideally, the answer should refer to the actual implementation and not just state that "an ideal garbage collector would be ideal and disallow memory leaks".</p>
<p>For nitpicking purposes: An ideal solution to the problem would be a program like this:</p>
<pre class="lang-py prettyprint-override"><code># Use Python version at least v3.10
# May use imports.
# Bonus points for only standard library.
# If the problem is unsolvable otherwise (please argue that it is),
# then you may use e.g. Numpy, Scipy, Pandas. Minus points for Matplotlib.
def memleak():
# do whatever you want but only within this function
# No global variables!
# Bonus points for no observable side-effects (besides memory use)
# ...
for _ in range(100):
memleak()
</code></pre>
<p>The function <em>must</em> return and be called multiple times. Goals in order of bonus points (high number = many bonus points)</p>
<ol>
<li>the program keeps using more memory, until it crashes.</li>
<li>after calling the function multiple times (e.g. the 100 specified above), the program may continue doing other (normal) things such that the memory leaked during the function is never freed.</li>
<li>Like 2 but the memory <strong>cannot</strong> be freed, even by by calling <code>gc</code> manually and similar means.</li>
</ol>
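<p>One sketch that arguably satisfies goal 3, under the assumption of a Unix-like libc (<code>ctypes.CDLL(None)</code> loads the process's C library on Linux/macOS; Windows would differ): allocate with C's <code>malloc</code> and drop the pointer. The memory never belongs to Python's allocator, so the GC never sees it and no amount of <code>gc.collect()</code> can reclaim it.</p>

```python
import ctypes

def memleak():
    libc = ctypes.CDLL(None)             # process's C library (Linux/macOS)
    libc.malloc.restype = ctypes.c_void_p
    size = 1024 * 1024                   # leak 1 MiB per call
    p = libc.malloc(size)
    if p:
        ctypes.memset(p, 0, size)        # touch the pages so they are committed
    return bool(p)                       # pointer is dropped here: unreclaimable

leaked = all(memleak() for _ in range(20))
```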
|
<python><memory-leaks><cpython>
|
2023-02-11 02:01:01
| 1
| 1,640
|
Adomas Baliuka
|
75,417,626
| 219,976
|
How do I set logging for gunicorn GeventWebSocketWorker
|
<p>I have a Django REST framework application with socket.io. To run it in staging I use gunicorn as a WSGI server and GeventWebSocketWorker as a worker. The thing I want to fix is that there are no logs for web requests like this:</p>
<pre><code>[2023-02-10 10:54:21 -0500] [35885] [DEBUG] GET /users
</code></pre>
<p>Here's my gunicorn.config.py:</p>
<pre><code>worker_class = "geventwebsocket.gunicorn.workers.GeventWebSocketWorker"
bind = "0.0.0.0:8000"
workers = 1
log_level = "debug"
accesslog = "-"
errorlog = "-"
access_log_format = "%(h)s %(l)s %(u)s %(t)s '%(r)s' %(s)s %(b)s '%(f)s' '%(a)s'"
</code></pre>
<p>Here's the command in docker compose I use deploy app:</p>
<pre><code>command:
- gunicorn
- my_app.wsgi:application
</code></pre>
<p>I saw the issue was discussed in gitlab: <a href="https://gitlab.com/noppo/gevent-websocket/-/issues/16" rel="nofollow noreferrer">https://gitlab.com/noppo/gevent-websocket/-/issues/16</a> but still I have no idea how to fix it.</p>
|
<python><logging><gunicorn>
|
2023-02-11 01:47:08
| 2
| 6,657
|
StuffHappens
|
75,417,581
| 10,568,883
|
How to properly connect QPushButton clicked signal to pyqtSlot
|
<p>I'm writing a tool with a GUI, where I inevitably need to use <code>pyqtSlot</code>. I had errors in this tool related to its usage and decided to try a minimal example. However, I still fail to figure out the problem.</p>
<p>I've read instructions <a href="https://pythonspot.com/pyqt5-buttons/" rel="nofollow noreferrer">here</a>. My findings for creation of a custom slot for a pushbutton were the following:</p>
<ol>
<li>In a class, defining my GUI, I need to create a method, decorated as <code>@pyqtSlot()</code>;</li>
<li>I need to write something like <code>self.mybtn.connect(self.<method name>)</code> after button creation.</li>
</ol>
<p>So, I created a UI in QtDesigner, compiled it and came up with the following code:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.verticalLayout = QtWidgets.QVBoxLayout(self.centralwidget)
self.verticalLayout.setObjectName("verticalLayout")
self.pushButton = QtWidgets.QPushButton(self.centralwidget)
self.pushButton.setObjectName("pushButton")
self.pushButton.clicked.connect(self.react_to_signal)
self.verticalLayout.addWidget(self.pushButton)
MainWindow.setCentralWidget(self.centralwidget)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
self.pushButton.setText(_translate("MainWindow", "PushButton"))
@QtCore.pyqtSlot()
def react_to_signal(self):
print("Button press signal emitted")
if __name__ == "__main__":
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
</code></pre>
<p>However, when this code is run, it fails for me with the following error:</p>
<pre class="lang-none prettyprint-override"><code>QObject::connect: Cannot connect QPushButton::clicked(bool) to (nullptr)::react_to_signal()
Traceback (most recent call last):
File "<mypath>/ui_test_slots.py", line 36, in <module>
ui.setupUi(MainWindow)
File "<mypath>/ui_test_slots.py", line 15, in setupUi
self.pushButton.clicked.connect(self.react_to_signal)
TypeError: connect() failed between clicked(bool) and react_to_signal()
</code></pre>
<p>Other questions I saw on SO for this problem were caused by usage of the decorated method as a class method, not an instance method. But this doesn't seem to be my case.</p>
<p>What am I missing? What are the real differences between the example I referred to and my code?</p>
|
<python><qt><pyqt5><qt5>
|
2023-02-11 01:34:00
| 0
| 499
|
ΠΠ²Π³Π΅Π½ΠΈΠΉ ΠΡΠ°ΠΌΠ°ΡΠΎΠ²
|
75,417,579
| 4,212,158
|
How to manually log to Ray Train's internal Tensorboard logger?
|
<p>Ray Train automatically stores various things to Tensorboard. In addition, I want to log custom histograms, images, PR curves, scalars, etc. How do I access Ray Train's internal TBXLogger so that I can log additional things?</p>
|
<python><tensorboard><ray><ray-tune><tensorboardx>
|
2023-02-11 01:33:33
| 1
| 20,332
|
Ricardo Decal
|
75,417,499
| 1,410,769
|
Sum cells with duplicate column headers in pandas during import - python
|
<p>I am trying to do some basic dimensional reduction. I have a CSV file that looks something like this:</p>
<pre><code>A B C A B B A C
1 1 2 2 1 3 1 1
1 2 3 0 0 1 1 2
0 2 1 3 0 1 2 2
</code></pre>
<p>I want to import it as a pandas DF but without renaming the headers to A.1, A.2, etc. Instead I want to sum the duplicates and keep the column names. Ideally my new DF should look like this:</p>
<pre><code>A B C
4 5 3
2 3 5
5 3 3
</code></pre>
<p>Is it possible to do this easily or would you recommend a different way? I can also use bash, R, or anything that can do the trick with a file that is 1 million lines and 1000 columns.</p>
<p>Thank you!</p>
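<p>A sketch of one way to do this in pandas (the <code>StringIO</code> below stands in for the real 1M-line CSV file): pandas mangles duplicate headers to <code>A.1</code>, <code>A.2</code>, ... on import, so strip that suffix back off, then sum groups of same-named columns.</p>

```python
import pandas as pd
from io import StringIO

csv = """A,B,C,A,B,B,A,C
1,1,2,2,1,3,1,1
1,2,3,0,0,1,1,2
0,2,1,3,0,1,2,2"""

df = pd.read_csv(StringIO(csv))
# undo the A.1/A.2 mangling of duplicate headers
df.columns = df.columns.str.replace(r"\.\d+$", "", regex=True)
# transpose, group rows by (now duplicated) name, sum, transpose back;
# this avoids the deprecated groupby(..., axis=1)
out = df.T.groupby(level=0).sum().T
```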
|
<python><pandas><csv>
|
2023-02-11 01:12:56
| 2
| 455
|
Taurophylax
|
75,417,495
| 3,508,811
|
How to use Python's regex to match a GPG signature?
|
<p>I am trying to see if I can match the gpgsig using the regex below, but ran into an error also shown below.</p>
<p>Is there any guidance on how to fix it?</p>
<pre><code>import re
if __name__ == '__main__':
log = '''
tree e76fa5ccd76492d843b6a4a06038d1c3b5aef6f8
parent 0d533a3a5fd51fd8c2x932832ef9ea91d0756c18
author firstname lastname <userid@company.com> 1676061999 -0800
committer firstname lastname <userid@company.com> 1676061999 -0800
gpgsig -----BEGIN SIGNED MESSAGE-----
MIAGCSqGSIb3DQEHAqCAMIACAQExDzANBglghkgBZQMEAgEFADCABgkqhkiG9w0B
BwEAAKCCAuswggLnMIICjKADAgECAhANVjmYTunVjjNs9EhuJ4YXMAoGCCqGSM49
BAMCME0xKTAnBgNVBAMMIEFmorplIENvcnBvcmF0ZSBTaWduaW5nIEVDQyBDQSAx
MRMwEQYDVQQKDApBcHBsZSBJbmMuMQswCQYDVQQGEwJVUzAeFw0yMzAyMDkxOTU2
NTlaFw0yMzAzMDIyMDA2NTlaMDIxEzARBgNVBAoMCkFmorplIEluYy4xGzAZBgNV
BAMMEmduYWtrYWxhQGFmorplLmNvbTBZMBMGByqGSM49AgEGCCqGSM49AwEHA0IA
BGwmvh7HYXCyerdERaLr+OOJ3AQxYNSfUorWkROO2xv/ra8yYGL/aBCYJSQUoYRY
kY4GE90s8NAUwmQmsthdbFSjggFnMIIBYzAMBgNVHRMBAf8EAjAAMB8GA1UdIwQY
MBaAFEJi3AGoy1MCpVzt8IjG9uFJdhE9MHMGCCsGAQUFBwEBBGcwZTAvBggrBgEF
BQcwAoYjaHR0cDovL2NlcnRzLmFmorplLmNvbS9hY3NlY2NhMS5kZXIwMgYIKwYB
BQUHMAGGJmh0dHA6Ly9vY3NwLmFmorplLmNvbS9vY3NwMDMtYWNzZWNjMTA0MB0G
A1UdEQQWMBSBEmduYWtrYWxhQGFmorplLmNvbTAUBgNVHSUEDTALBgkqhkiG92Nk
BBQwMgYDVR0fBCswKTAnoCWgI4YhaHR0cDovL2NybC5hcHBsZS5jb20vYWNzZWNj
YTEuY3JsMB0GA1UdDgQWBBR1dRRNvQ/7RwRTorG97HmKR4xoJjAOBgNVHQ8BAf8E
BAMCB4AwJQYDVR0gBB4wHDAMBgoqhkiG92NkBRQBMAwGCiqGSIb3Y2QFFAIwCgYI
KoZIzj0EAwIDSQAwRgIhAPQ4IiaCG6V5A7u0lwbhJxyXHf9jN2IoqRLj7BlFo4Uv
AiEAtJAekfgFoiE3h8ZZDgvhwRiwPJseo8GDfM0tb5DP0h8xggE3MIIBMwIBATBh
ME0xKTAnBgNVBAMMIEFmorplIENvcnBvcmF0ZSBTaWduaW5nIEVDQyBDQSAxMRMw
EQYDVQQKDApBcHBsZSBJbmMuMQswCQYDVQQGEwJVUwIQDVY5mE7p1Y4zbPRIbieG
FzANBglghkgBZQMEAgEFAKBpMBgGCSqGSIb3DQEJAzELBgkqhkiG9w0BBwEwHAYJ
KoZIhvcNAQkFMQ8XDTIzMDIxMDIwNDY1MFowLwYJKoZIhvcNAQkEMSIEIP8j8iYG
Ggpc74AeVdxLkIArVBLw3+vw6/FVmGtNig+uMAkGByqGSM49AgEERjBEAiB0dBI3
9c1b/bsStaT3blWb19ehQDt8J/NNov/TzSgEzAIgWvpSs/DZI7wmlHtIJ8HpmIp4
+oNOu4kJJlhtUy9ZImUAAAAAAAA=
-----END SIGNED MESSAGE-----
'''
pattern = "gpgsig -----BEGIN SIGNED MESSAGE------{3,}$(?s).*?^-{3,} -----END SIGNED MESSAGE-----"
if re.search(pattern,log):
print ("Found a match")
</code></pre>
<p>Here is the error:</p>
<pre><code>/Users/Documents/pythonscripts/test.py:40: DeprecationWarning: Flags not at the start of the expression 'gpgsig -----BEGIN SI' (truncated)
if re.search(pattern,log):
</code></pre>
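<p>A hedged sketch of a fix: the DeprecationWarning comes from the inline <code>(?s)</code> flag not being at the start of the pattern; moving the flag out to <code>re.DOTALL</code> (and dropping the stray <code>-{3,}</code>/<code>$</code> pieces, which required dashes that aren't in the log) gives a working match. The log below is a small stand-in for the commit object in the post.</p>

```python
import re

log = """tree e76fa5ccd76492d843b6a4a06038d1c3b5aef6f8
gpgsig -----BEGIN SIGNED MESSAGE-----
 MIAGCSqGSIb3DQEHAqCAMIACAQExDzANBglghkgBZQMEAgEFADCABgkqhkiG9w0B
 -----END SIGNED MESSAGE-----
"""

# re.DOTALL lets .*? span newlines; the non-greedy quantifier stops at
# the first END marker.
pattern = r"gpgsig -----BEGIN SIGNED MESSAGE-----.*?-----END SIGNED MESSAGE-----"
match = re.search(pattern, log, re.DOTALL)
```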
|
<python><python-3.x><python-re><gpg-signature>
|
2023-02-11 01:12:14
| 1
| 925
|
user3508811
|
75,417,406
| 13,460,543
|
Is there an error in the pandas.DataFrame.ewm calculation, or am I wrong?
|
<p>I chose the recursive option in order to calculate an exponentially weighted moving average starting from the latest calculated value.</p>
<p>According to <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.ewm.html" rel="nofollow noreferrer">Documentation</a> :</p>
<blockquote>
<p>When adjust=False, the exponentially weighted function is calculated
recursively:</p>
</blockquote>
<blockquote>
<p><code>y0 = x0</code></p>
</blockquote>
<blockquote>
<p><code>y(t) = (1-alpha) * y(t-1) + alpha * x(t)</code></p>
</blockquote>
<p>So I have the following code :</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({'col1':[1, 1, 2, 3, 3, 5, 8, 9],
})
alpha=0.5
df['ewm'] = df['col1'].ewm(alpha, adjust=False).mean()
</code></pre>
<p>which gives :</p>
<pre><code>>>> df
col1 ewm
0 1 1.000000
1 1 1.000000
2 2 1.666667
3 3 2.555556
4 3 2.851852
5 5 4.283951
6 8 6.761317
7 9 8.253772
</code></pre>
<p>The problem is that it's not corresponding to following mathematical calculations :</p>
<ul>
<li>y0 = x0 = <strong>1</strong></li>
<li>y1 = (1-0.5) * y0 + 0.5 * x1 = 0.5 + 0.5 = <strong>1</strong></li>
<li>y2 = (1-0.5) * y1 + 0.5 * x2 = 0.5 + 0.5 * 2 = <strong>1.5</strong></li>
<li>y3 = (1-0.5) * y2 + 0.5 * x3 = 0.5 * 1.5 + 0.5 * 3 = 0.75 + 1.5 = <strong>2.25</strong>
...</li>
</ul>
<p>We do not have the same values. What's wrong?</p>
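<p>A likely explanation (a sketch, to be verified against the docs): the first positional parameter of <code>ewm</code> is <code>com</code> (center of mass), not <code>alpha</code>, so <code>ewm(0.5, adjust=False)</code> silently meant alpha = 1/(1 + 0.5) = 2/3 — which reproduces the 1.666667, 2.555556, ... column above. Passing alpha by keyword matches the hand calculation:</p>

```python
import pandas as pd

df = pd.DataFrame({'col1': [1, 1, 2, 3, 3, 5, 8, 9]})

# alpha must be passed by keyword; positionally it would be taken as com
df['ewm'] = df['col1'].ewm(alpha=0.5, adjust=False).mean()
```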
|
<python><pandas>
|
2023-02-11 00:48:49
| 1
| 2,303
|
Laurent B.
|
75,417,334
| 825,227
|
Use of parameters dictionary with Python requests GET method
|
<p>Trying to retrieve data via the EIA data API (v2): <a href="https://www.eia.gov/opendata/documentation.php" rel="nofollow noreferrer">https://www.eia.gov/opendata/documentation.php</a>.</p>
<p>I'm able to use the API dashboard to return data:</p>
<p><a href="https://www.eia.gov/opendata/browser/electricity/retail-sales?frequency=monthly&data=price;revenue;sales;&start=2013-01" rel="nofollow noreferrer">https://www.eia.gov/opendata/browser/electricity/retail-sales?frequency=monthly&data=price;revenue;sales;&start=2013-01</a></p>
<p>But when I attempt to retrieve within Python using the attached documentation, I don't appear to be returning any values when using the same parameters.</p>
<pre><code>url = 'https://api.eia.gov/v2/electricity/retail-sales/data/?api_key=' + API_KEY
params = {
"frequency": "monthly",
"data": [
"revenue",
"sales",
"price"
],
"start": "2013-01"
}
if x.status_code == 200:
print('Success')
else:
print('Failed')
res = x.json()['response']
data = res['data']
</code></pre>
<p>If I print the URL created by the GET method and compare it to the API URL included in the dashboard, the issue appears to be in the way the GET method encodes the items of the <code>data</code> parameter:</p>
<p><strong>Works</strong></p>
<p><a href="https://api.eia.gov/v2/electricity/retail-sales/data/?frequency=monthly&data%5B0%5D=price&data%5B1%5D=revenue&data%5B2%5D=sales&start=2013-01&sort%5B0%5D%5Bcolumn%5D=period&sort%5B0%5D%5Bdirection%5D=desc&offset=0&length=5000" rel="nofollow noreferrer">https://api.eia.gov/v2/electricity/retail-sales/data/?frequency=monthly&data[0]=price&data[1]=revenue&data[2]=sales&start=2013-01&sort[0][column]=period&sort[0][direction]=desc&offset=0&length=5000</a></p>
<p><strong>Doesn't work (returned by GET method):</strong></p>
<p><a href="https://api.eia.gov/v2/electricity/retail-sales/data/?api_key=MY_API&frequency=monthly&data=revenue&data=sales&data=price&start=2013-01" rel="nofollow noreferrer">https://api.eia.gov/v2/electricity/retail-sales/data/?api_key=MY_API&frequency=monthly&data=revenue&data=sales&data=price&start=2013-01</a></p>
<p><strong>Can anyone provide guidance on how to coerce the GET method to pass my data parameters in the same way the API dashboard appears to?</strong></p>
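<p>A hedged workaround sketch (an assumption worth testing, not official EIA guidance): <code>requests</code> encodes a list value as repeated keys (<code>data=revenue&amp;data=sales</code>), while the working URL uses PHP-style indexed brackets (<code>data[0]=revenue</code>). Building the indexed keys explicitly reproduces the dashboard's encoding; the bracket characters get percent-encoded, which the API accepts:</p>

```python
from urllib.parse import urlencode

# Build EIA-style indexed keys by hand; requests applies the same
# percent-encoding as urlencode when given a params dict.
params = {"frequency": "monthly", "start": "2013-01"}
for i, col in enumerate(["revenue", "sales", "price"]):
    params[f"data[{i}]"] = col

query = urlencode(params)
print(query)  # frequency=monthly&start=2013-01&data%5B0%5D=revenue&...
```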
|
<python><python-requests>
|
2023-02-11 00:29:11
| 1
| 1,702
|
Chris
|
75,417,315
| 5,049,813
|
`isin` fails to detect a row that is in a dataframe
|
<p>I've been struggling with an error for days, and after many conversations with ChatGPT, finally got it boiled down to this one minimal example:</p>
<pre><code>import pandas as pd
# Create two data frames with duplicate values
goal_df = pd.DataFrame({'user_id': [1], 'sentence_id': [2]})
source_df = pd.DataFrame({'user_id': [1, 1], 'sentence_id': [2, 2]})
# The first assertion passes
assert (goal_df[['user_id', 'sentence_id']].iloc[0] == source_df[['user_id', 'sentence_id']].iloc[0]).all()
# The second assertion fails
assert goal_df[['user_id', 'sentence_id']].iloc[0].isin(source_df[['user_id', 'sentence_id']]).all()
</code></pre>
<p><em>Why does the second assertion fail?</em></p>
<p>When I print out intermediate values, it looks like even if I replaced the <code>all</code> with <code>any</code>, it would still fail. That is, the <code>isin</code> is saying that the <code>user_id</code> and <code>sentence_id</code> aren't in the <code>source_df</code> at all, despite the line just beforehand proving that they are.</p>
<p>I also thought it might be an indexing issue, where the example's index didn't match as <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.isin.html" rel="nofollow noreferrer"><code>isin</code></a> requires; however, even if you make <code>source_df = pd.DataFrame({'user_id': [1], 'sentence_id': [2]})</code>, the same behavior occurs (the first assert passes, the second fails).</p>
<p>What's going on here?</p>
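<p>A hedged sketch of the suspected mechanism (an assumption worth checking against the pandas docs): iterating a DataFrame yields its <em>column labels</em>, so <code>Series.isin(some_df)</code> tests membership against the column names rather than the cell values:</p>

```python
import pandas as pd

source_df = pd.DataFrame({'user_id': [1, 1], 'sentence_id': [2, 2]})

# Iterating a DataFrame yields its column labels, not its rows or values
print(list(source_df))  # ['user_id', 'sentence_id']

# So the row's values (1 and 2) get compared against the column names
row = pd.Series({'user_id': 1, 'sentence_id': 2})
print(row.isin(source_df).tolist())  # [False, False]
```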
|
<python><pandas>
|
2023-02-11 00:26:35
| 3
| 5,220
|
Pro Q
|
75,417,265
| 3,769,076
|
Error uploading Spark parquet files from Snowflake-S3-Stage to a Snowflake Table
|
<p>EDIT: The error was from Spark's <code>_SUCCESS</code> file. Only include parquet files in the SQL query: <code>pattern = '.*parquet'</code></p>
<p>Original:</p>
<p>Can Snowflake load my multi-part parquet files? I have other inserts that work in the same tech stack, but they all use a single parquet file. I'm wondering if the data is being partitioned under the hood or otherwise becoming unrecognizable to Snowflake.</p>
<p>Here's my simplified query</p>
<pre class="lang-sql prettyprint-override"><code>COPY INTO database.schema.table
FROM (
SELECT $1
FROM @database.schema.stage/path_to_parquet
)
file_format = (type = parquet)
</code></pre>
<p>I get this error when trying to copy data in S3 to a Snowflake table:</p>
<pre><code>snowflake.connector.errors.ProgrammingError: 100152 (22000):
Error parsing the parquet file:
Invalid:
Parquet file size is 0 bytes
Row 0 starts at line 0, column
</code></pre>
<p>If it helps, the command to generate the parquet files looks like this:</p>
<pre class="lang-py prettyprint-override"><code>(spark_dataframe.select("date", "cityid", "prediction")
    .write.mode("overwrite")
    .parquet(predictions_path))
</code></pre>
<p>And a sample of the parquet files (snappy compression, Spark's default):</p>
<pre><code>_SUCCESS
part-00000-75a71af4-e797-417a-a2f1-1c31cf9dc891-c000.snappy.parquet
part-00001-75a71af4-e797-417a-a2f1-1c31cf9dc891-c000.snappy.parquet
part-00002-75a71af4-e797-417a-a2f1-1c31cf9dc891-c000.snappy.parquet
part-00003-75a71af4-e797-417a-a2f1-1c31cf9dc891-c000.snappy.parquet
part-00004-75a71af4-e797-417a-a2f1-1c31cf9dc891-c000.snappy.parquet
</code></pre>
|
<python><apache-spark><snowflake-cloud-data-platform><parquet>
|
2023-02-11 00:15:42
| 1
| 1,068
|
solbs
|
75,417,119
| 6,484,157
|
How to find the latest version of Python that PyTorch supports
|
<p>When I try
<code>pip install torch</code>, I get</p>
<p>ERROR: Could not find a version that satisfies the requirement torch (from versions: none)</p>
<p>ERROR: No matching distribution found for torch</p>
<p>Searching here on Stack Overflow, I found that the issue is that I need an older version of Python; I'm currently using 3.11. That post said 3.8, but it was written some time ago, so how do I find the latest version of Python that will run PyTorch? I couldn't find it easily on the PyTorch pages.</p>
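<p>One hedged way to reason about the error: pip only installs wheels whose tag matches the running interpreter (<code>cp38</code>, <code>cp311</code>, ...), so "no matching distribution found" usually means no torch wheel had been published for that tag yet. Printing the tag tells you what to compare against the install selector on pytorch.org:</p>

```python
import sys

# pip matches wheels by the CPython tag of the interpreter running it
tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
print(tag)  # e.g. cp311 on Python 3.11
```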
|
<python><pytorch>
|
2023-02-10 23:42:00
| 2
| 501
|
usr0192
|
75,416,994
| 998,070
|
Maintaining Sharp Corners in a Numpy Interpolation
|
<p>I am interpolating a shape with numpy's <code>linspace</code> and <code>interp</code> using <a href="https://stackoverflow.com/users/2749397/gboffi">gboffi</a>'s magnificent code from <a href="https://stackoverflow.com/questions/75416188/get-evenly-spaced-points-from-a-curved-shape/">this post</a> (included, below).</p>
<p>This works well, however, the corners sometimes get missed and the resulting softened shape is undesired.</p>
<p><a href="https://i.sstatic.net/yML9N.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yML9N.png" alt="softened shape" /></a></p>
<p>I'd like to maintain the sharp corners of my shapes with an angle threshold parameter. Is there any way to keep the corners of the shape's interpolation if the next line segment is of a sharp enough angle? Thank you!</p>
<pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt
import numpy as np
x = np.array([815.9, 693.2, 570.4, 462.4, 354.4, 469.9, 585.4, 700.6, 815.9])
y = np.array([529.9, 637.9, 746, 623.2, 500.5, 326.9, 153.3, 341.6, 529.9])
fig, ax = plt.subplots(1)
ax.set_aspect('equal')
ax.scatter(x, y, s=40, zorder=3, alpha=0.3)
# compute the distances, ds, between points
dx, dy = x[+1:]-x[:-1], y[+1:]-y[:-1]
ds = np.array((0, *np.sqrt(dx*dx+dy*dy)))
# compute the total distance from the 1st point, measured on the curve
s = np.cumsum(ds)
# interpolate
xinter = np.interp(np.linspace(0,s[-1], 30), s, x)
yinter = np.interp(np.linspace(0,s[-1], 30), s, y)
# plot the interpolated points
ax.plot(xinter, yinter)
ax.scatter(xinter, yinter, s=5, zorder=4)
plt.show()
</code></pre>
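<p>One hedged sketch of the angle-threshold idea (the function name and threshold are illustrative, not from any library): compute the turn angle at each interior vertex and flag vertices whose turn exceeds the threshold, so those points can later be forced into the resampled set:</p>

```python
import numpy as np

def corner_mask(x, y, angle_deg=30.0):
    # Vectors into and out of each interior vertex
    v1 = np.stack([x[1:-1] - x[:-2], y[1:-1] - y[:-2]], axis=1)
    v2 = np.stack([x[2:] - x[1:-1], y[2:] - y[1:-1]], axis=1)
    cosang = np.sum(v1 * v2, axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1))
    turn = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return turn > angle_deg

x = np.array([0.0, 1.0, 2.0, 2.0, 2.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 2.0])
print(corner_mask(x, y))  # the 90-degree bend at (2, 0) is flagged
```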
|
<python><numpy><matplotlib><scipy><interpolation>
|
2023-02-10 23:18:17
| 4
| 424
|
Dr. Pontchartrain
|
75,416,990
| 3,821,425
|
Pythonic: How much work is too much work to run at the time of file import
|
<p>Conceptually, what I want to do is below. I have an enum for status, and I want to make a set that defines which statuses count as 'completed' (e.g. finished and failed).</p>
<p>My internal gripe is that the variable 'completed_statuses' is being created at import time. I've been on teams where people would make REST endpoint calls at the time of import, and it caused large issues, so I'm somewhat gun-shy on the topic. It's my understanding that it's non-Pythonic to 'do work' at the time of import. I'm not sure how else to do it other than maybe making a class with static methods. That seems overboard to me, though.</p>
<p>Is below the most pythonic way of doing it? Is there a better way?</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum, auto

class Status(Enum):
    PENDING = auto()
    FINISHED = auto()
    FAILED = auto()

completed_statuses = set((Status.FINISHED, Status.FAILED))
</code></pre>
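<p>For comparison, one hedged alternative sketch that keeps the membership check inside the class, so nothing beyond class creation itself runs at import time (one option among several, not the only idiomatic one):</p>

```python
from enum import Enum, auto

class Status(Enum):
    PENDING = auto()
    FINISHED = auto()
    FAILED = auto()

    @property
    def is_completed(self) -> bool:
        # The set literal is rebuilt per call; for three members the
        # cost is negligible, and no module-level work happens at import.
        return self in {Status.FINISHED, Status.FAILED}

print(Status.FAILED.is_completed)   # True
print(Status.PENDING.is_completed)  # False
```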
|
<python><enums>
|
2023-02-10 23:17:58
| 0
| 3,107
|
nanotek
|
75,416,925
| 702,846
|
group_by and add a counter column in polars dataframe
|
<p>I have a polars dataframe</p>
<pre><code>import polars as pl
df = pl.from_repr("""
βββββββββββ¬ββββββββββββββββββββββββββββββ
β item_id β num_days_after_first_review β
β --- β --- β
β i64 β i64 β
βββββββββββͺββββββββββββββββββββββββββββββ‘
β 1 β 3 β
β 1 β 3 β
β 1 β 10 β
β 2 β 2 β
β 2 β 2 β
β 3 β 1 β
β 3 β 5 β
βββββββββββ΄ββββββββββββββββββββββββββββββ
""")
</code></pre>
<p>I would like to have a column that indicates a counter for each <code>item_id</code> with respect to <code>num_days_after_first_review</code>;</p>
<p>so the result will be like</p>
<pre><code>shape: (7, 3)
βββββββββββ¬ββββββββββββββββββββββββββββββ¬ββββββ
β item_id β num_days_after_first_review β num β
β --- β --- β --- β
β i64 β i64 β i64 β
βββββββββββͺββββββββββββββββββββββββββββββͺββββββ‘
β 1 β 3 β 1 β
β 1 β 3 β 2 β
β 1 β 10 β 3 β
β 2 β 2 β 1 β
β 2 β 2 β 2 β
β 3 β 1 β 1 β
β 3 β 5 β 2 β
βββββββββββ΄ββββββββββββββββββββββββββββββ΄ββββββ
</code></pre>
|
<python><python-polars>
|
2023-02-10 23:08:18
| 1
| 6,172
|
Areza
|
75,416,834
| 1,451,579
|
How to apply a transformation matrix to the plane defined by the origin and normal
|
<p>I have a plane defined by an origin (a point) and a normal. I need to apply a 4-by-4 transformation matrix to it. How do I do this correctly?</p>
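<p>A hedged sketch of the standard approach (the function below is illustrative, not from a library): the origin transforms like any point, but the normal transforms by the inverse-transpose of the matrix's upper-left 3×3 block, which matters whenever the matrix contains non-uniform scale or shear:</p>

```python
import numpy as np

def transform_plane(M, origin, normal):
    # Point: homogeneous multiply, then dehomogenize
    p = M @ np.append(origin, 1.0)
    p = p[:3] / p[3]
    # Normal: inverse-transpose of the linear 3x3 part, renormalized
    A = np.linalg.inv(M[:3, :3]).T
    n = A @ normal
    return p, n / np.linalg.norm(n)

# Example: 90-degree rotation about z plus a translation along x
M = np.array([[0., -1., 0., 5.],
              [1.,  0., 0., 0.],
              [0.,  0., 1., 0.],
              [0.,  0., 0., 1.]])
p, n = transform_plane(M, np.array([1., 0., 0.]), np.array([0., 0., 1.]))
print(p, n)  # [5. 1. 0.] [0. 0. 1.]
```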
|
<python><numpy><geometry><rotation>
|
2023-02-10 22:51:15
| 1
| 659
|
Brans
|
75,416,760
| 8,548,583
|
Chrome does not use my download.default_directory settings, and always downloads to my Python execution folder
|
<p>This is Chrome 111 on Debian 11 - I am attempting to download a file to a folder. As of 3 AM last night it was working; as of 6 AM this morning, with no server modifications, updates, or resets, it was not - any Python script utilizing this code segment now downloads its files to the script's execution directory. Below is the Chrome Selenium headless browser driver instantiation:</p>
<pre><code>def create_temp_folder():
    temp_folder = new_file_folder + directory_separator + str(uuid.uuid4())
    os.mkdir(temp_folder)
    return temp_folder

def init_chrome_service(temp_directory=False):
    # init stage, notes for later class construction
    service = Service(executable_path=ChromeDriverManager().install())
    chrome_options = Options()
    download_directory = None
    if temp_directory:
        download_directory = create_temp_folder()
    chrome_options.add_argument("--disable-web-security")
    chrome_options.add_argument("--disable-extensions")
    chrome_options.add_argument("--no-sandbox")
    chrome_options.add_argument("--headless")
    prefs = {'download.default_directory': download_directory}
    chrome_options.add_experimental_option("detach", True)
    chrome_options.add_experimental_option('prefs', prefs)
    driver = webdriver.Chrome(service=service, options=chrome_options)
    return driver, download_directory
</code></pre>
<p>The <code>download.default_directory</code> value exists, is being set, is properly creating the directory, has permissions, hasn't randomly lost permissions, and the behavior is the same whether I run the Python script as sudo, as myself, or through a cron job. I have tried reinstalling Chrome and the chromedriver.</p>
<p>What's even stranger is, the exact same copy of the code on my Windows computer works perfectly - so something changed in the Debian environment, is my working theory, but I cannot for the life of me isolate what.</p>
<p>The code that actually downloads the file is trivial - locating the button that runs the report and calling <code>.click()</code> on it.</p>
|
<python><linux><selenium-chromedriver><debian>
|
2023-02-10 22:37:46
| 3
| 458
|
Kwahn
|
75,416,656
| 6,357,649
|
403 Request Failure Despite working Service Account google.oauth2
|
<p>I am consistently running into problems querying in Python using the following libraries. I am given a 403 error saying the "user does not have 'bigquery.readsessions.create' permissions" for the project I am accessing.</p>
<pre><code># BQ libs
from google.cloud import bigquery
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file('path.json')

# BigQuery connection and query execution
bqProjectId = 'project_id'
project_id = bqProjectId
client = bigquery.Client(credentials=credentials, project=project_id)
query = client.query("SELECT * FROM `table`")
output = query.to_dataframe()
</code></pre>
<p>I am using the same service account JSON file and the same query in Java, R, and even in a BI tool. All three successfully retrieved the data, so this seems to be Python-specific.</p>
<p>I have tried starting with a clean environment. I even reinstalled Anaconda. Nothing seems to work. What are some possible culprits here?</p>
<p>*Obviously, my path, query, and credentials are different in the actual script.</p>
|
<python><pandas><google-bigquery>
|
2023-02-10 22:20:48
| 1
| 323
|
Devin
|
75,416,655
| 17,274,113
|
Calling a function defined in a different Python script within the working folder
|
<p>I am trying to call a function defined in the file <code>lidar_source_code.py</code> from my main script <code>py_lidar_depressions.py</code>. There are numerous sources which explain how to do this, such as <a href="https://www.geeksforgeeks.org/python-call-function-from-another-file/" rel="nofollow noreferrer">this one</a>; however, when I attempt this, the error <code>ModuleNotFoundError: No module named 'lidar_source_code'</code> is returned. I am not surprised at this error, as at no point did I point to where the file is located, even though the relevant tutorials do not seem to suggest the need to. The code to import the file and all of its defined functions was as simple as the following:</p>
<pre><code>from lidar_source_code import *
</code></pre>
<p>Is there something I am missing here? Am I correct that one must first define the path of the function-containing file before importing it?</p>
<p>Thanks</p>
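<p>For reference, a hedged sketch of how resolution works (demonstrated with a throwaway module, since the real <code>lidar_source_code.py</code> isn't reproduced here): Python resolves <code>import lidar_source_code</code> by scanning <code>sys.path</code>. The directory containing the <em>launched script</em> is on it automatically, but other folders are not, so if the two files are not side by side, the folder must be added explicitly:</p>

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Create a throwaway module in a folder that is NOT on sys.path
folder = Path(tempfile.mkdtemp())
(folder / "lidar_source_code.py").write_text("def answer():\n    return 42\n")

# Put the folder on sys.path, then the import succeeds
sys.path.insert(0, str(folder))
lidar = importlib.import_module("lidar_source_code")
print(lidar.answer())  # 42
```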
|
<python><function>
|
2023-02-10 22:20:42
| 0
| 429
|
Max Duso
|