QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
⌀ |
|---|---|---|---|---|---|---|---|---|
78,227,090
| 353,337
|
Get own pyproject.toml dependencies programatically
|
<p>I use a <code>pyproject.toml</code> file to list a package's dependencies:</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
[project]
name = "foobar"
version = "1.0"
requires-python = ">=3.8"
dependencies = [
"requests>=2.0",
"numpy",
"tomli;python_version<'3.11'",
]
</code></pre>
<p>Is is possible, from within the package, to get the list of its own dependencies as strings? In the above case, it should give</p>
<pre class="lang-py prettyprint-override"><code>["requests", "numpy"]
</code></pre>
<p>if used with Python>=3.11, and</p>
<pre class="lang-py prettyprint-override"><code>["requests", "numpy", "tomli"]
</code></pre>
<p>otherwise.</p>
|
<python><python-packaging>
|
2024-03-26 17:14:24
| 1
| 59,565
|
Nico Schlömer
|
78,227,089
| 12,733,629
|
Unknown error when trying to 'upsert' with ArcGIS API for Python
|
<p>I am trying to perform an 'upsert' (update + insert) operation on a feature layer via ArcGIS API for Python. My input data is a geojson file that gets uploaded as per the available <a href="https://developers.arcgis.com/python/guide/appending-features/#add-the-source-csv-item-to-the-gis" rel="nofollow noreferrer">guides</a>:</p>
<pre><code>data_item = gis.content.add(
item_properties={
'title': 'My Data File',
'type': 'GeoJson',
'overwrite': True,
},
data="myfile.geojson"
)
</code></pre>
<p>Then I attempt the append operation:</p>
<pre><code>my_layer.append(
item_id=data_item.id,
upload_format='geojson',
upsert=True,
upsert_matching_field='an_unique_field',
update_geometry=True,
)
</code></pre>
<p>This returns <code>Exception: Unknown Error (Error Code: 500)</code>. The geojson file is valid (checked in geojson.io). Attempts with a smaller file (original has ~4000 polygons) or appending a featureCollection instead of geojson fail with the same error. Manually updating the data via ArcGIS online works fine with the exact same file. Any leads here?</p>
<p>I have succesfully performed the operation using <a href="https://developers.arcgis.com/python/api-reference/arcgis.features.toc.html#arcgis.features.FeatureLayer.edit_features" rel="nofollow noreferrer"><code>edit_features()</code></a> instead of <a href="https://developers.arcgis.com/python/api-reference/arcgis.features.toc.html#arcgis.features.FeatureLayer.append" rel="nofollow noreferrer"><code>append()</code></a> but it requires querying the layer, mapping OBJECTIDs to <code>an_unique_field</code>, creating separate <code>adds</code> and <code>updates</code> collections, then splitting the collections into 250 features chunks. I'd like to skip all this with a single <code>append</code> call if possible.</p>
|
<python><arcgis>
|
2024-03-26 17:14:19
| 1
| 327
|
Martim Passos
|
78,226,932
| 5,330,527
|
Filter in Django Admin via a ManyToMany field with Through
|
<p>My <code>models.py</code>:</p>
<pre><code>class Subject(models.Model):
name = models.CharField(max_length=200)
class Person(models.Model):
subject = models.ForeignKey(Subject, on_delete=models.CASCADE, blank=True, null=True)
school = models.ForeignKey(School, on_delete=models.CASCADE, blank=True, null=True)
class PersonRole(models.Model):
project = models.ForeignKey('Project', on_delete=models.CASCADE)
person = models.ForeignKey(Person, on_delete=models.CASCADE)
class Project(models.Model):
title = models.CharField(max_length=200)
person = models.ManyToManyField(Person, through=PersonRole)
</code></pre>
<p>Now, in my admin back-end I'd like to add a nice filtering with <code>list_filter</code>. This filter should make it possible to filter by the schools that a person is attached to a project. In other words, if John (belonging to school 'no. 1') is attached to project no. 3, I'd like the projects table on the backend to show only project no. 3.
I suppose I should customise a <a href="https://docs.djangoproject.com/en/5.0/ref/contrib/admin/filters/#using-a-simplelistfilter" rel="nofollow noreferrer">simplelistfilter</a>. First of all, though, I'm a bit stuck as to how to get the list of schools attached to the persons attached to a project.</p>
<p>My attempt so far is:</p>
<pre><code>class PersonRole(models.Model):
[...]
def get_school(self):
return self.person.school
class Project(models.Model):
@admin.display(description='PI School')
def get_PI_school(self):
return [p for p in self.person.get_school()]
</code></pre>
<p><code>admin.py</code>:</p>
<pre><code>class ProjectAdmin(admin.ModelAdmin):
list_display = ("get_PI_school",) #This is just to see if the field is populated
</code></pre>
<p>With this, I get <code>'ManyRelatedManager' object has no attribute 'get_school'</code>.</p>
|
<python><django><django-models>
|
2024-03-26 16:46:51
| 2
| 786
|
HBMCS
|
78,226,758
| 14,460,824
|
How to transform dataframe long to wide with "grouped" columns?
|
<p>When pivoting the following dataframe from long to wide, I would like to get "groups" of columns and mark them with a prefix or suffix.</p>
<ul>
<li>The groups of elements can have different sizes, i.e. consist of one, two or more grouped elements/rows, I used pairs of two here to keep the example simple.</li>
</ul>
<pre><code>import pandas as pd
df = pd.DataFrame(
[
{'group': 'group-009297534', 'single_id': 'single-011900051', 'country': 'ESP', 'name': '00000911'},
{'group': 'group-009297534', 'single_id': 'single-000000821', 'country': 'USA', 'name': '00001054'},
{'group': 'group-009280053', 'single_id': 'single-000000002', 'country': 'HUN', 'name': '00000496'},
{'group': 'group-009280053', 'single_id': 'single-000000014', 'country': 'HUN', 'name': '00000795'},
{'group': 'group-009245039', 'single_id': 'single-000001258', 'country': 'NOR', 'name': '00000527'},
{'group': 'group-009245039', 'single_id': 'single-000000669', 'country': 'TWN', 'name': '00000535'}
]
)
</code></pre>
<p>My approach of assigning an index to the elements to be grouped and then using this for the column designation is already going in the right direction, but still deviates from the expected view</p>
<pre><code>df['idx'] = df.groupby('group').cumcount()
df.pivot(index='group', columns='idx')
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: left;">group</th>
<th style="text-align: left;">('single_id', 0)</th>
<th style="text-align: left;">('single_id', 1)</th>
<th style="text-align: left;">('country', 0)</th>
<th style="text-align: left;">('country', 1)</th>
<th style="text-align: right;">('name', 0)</th>
<th style="text-align: right;">('name', 1)</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">group-009245039</td>
<td style="text-align: left;">single-000001258</td>
<td style="text-align: left;">single-000000669</td>
<td style="text-align: left;">NOR</td>
<td style="text-align: left;">TWN</td>
<td style="text-align: right;">00000527</td>
<td style="text-align: right;">00000535</td>
</tr>
<tr>
<td style="text-align: left;">group-009280053</td>
<td style="text-align: left;">single-000000002</td>
<td style="text-align: left;">single-000000014</td>
<td style="text-align: left;">HUN</td>
<td style="text-align: left;">HUN</td>
<td style="text-align: right;">00000496</td>
<td style="text-align: right;">00000795</td>
</tr>
<tr>
<td style="text-align: left;">group-009297534</td>
<td style="text-align: left;">single-011900051</td>
<td style="text-align: left;">single-000000821</td>
<td style="text-align: left;">ESP</td>
<td style="text-align: left;">USA</td>
<td style="text-align: right;">00000911</td>
<td style="text-align: right;">00001054</td>
</tr>
</tbody>
</table></div>
<hr />
<p>However, the expected solution would look like this:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">group</th>
<th style="text-align: left;">single_id_1</th>
<th style="text-align: left;">country_1</th>
<th style="text-align: right;">name_1</th>
<th style="text-align: left;">single_id_2</th>
<th style="text-align: left;">country_2</th>
<th style="text-align: right;">name_2</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">group-009245039</td>
<td style="text-align: left;">single-000001258</td>
<td style="text-align: left;">NOR</td>
<td style="text-align: right;">00000527</td>
<td style="text-align: left;">single-000000669</td>
<td style="text-align: left;">TWN</td>
<td style="text-align: right;">00000535</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">group-009280053</td>
<td style="text-align: left;">single-000000002</td>
<td style="text-align: left;">HUN</td>
<td style="text-align: right;">00000496</td>
<td style="text-align: left;">single-000000014</td>
<td style="text-align: left;">HUN</td>
<td style="text-align: right;">00000795</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: left;">group-009297534</td>
<td style="text-align: left;">single-011900051</td>
<td style="text-align: left;">ESP</td>
<td style="text-align: right;">00000911</td>
<td style="text-align: left;">single-000000821</td>
<td style="text-align: left;">USA</td>
<td style="text-align: right;">00001054</td>
</tr>
</tbody>
</table></div>
<p>I'm not sure whether the approach with the multi-index, which would then have to be sorted and merged somehow, is the right one or whether there is a more elegant option.</p>
|
<python><pandas><dataframe><pivot>
|
2024-03-26 16:15:54
| 3
| 25,336
|
HedgeHog
|
78,226,689
| 10,108,726
|
How to create pdf with multiple pages based on html using python pdfkit
|
<p>I am trying to create a pdf with python using pdfkit library in my Django project, and I want to separate each content in a different page, how can I do it</p>
<pre><code>import pdfkit
from django.template.loader import render_to_string
my_contents = [
{'title':'Example 1', 'contents': ['Lorem ipsum dorer', 'Lorem ipsum']},
{'title':'Example 2', 'contents': ['Lorem ipsum dorer', 'Lorem ipsum']},
{'title':'Example 3', 'contents': ['Lorem ipsum dorer', 'Lorem ipsum']},
{'title':'Example 4', 'contents': ['Lorem ipsum dorer', 'Lorem ipsum']},
{'title':'Example 5', 'contents': ['Lorem ipsum dorer', 'Lorem ipsum']}
]
final_html = ''
for content in my_contents:
data = {
'title': content['title'],
'contents': content['contents'],
}
final_html += render_to_string(
'pdfs/routines_handout.html', data)
my_pdf = pdfkit.from_string(final_html, 'out.pdf')
</code></pre>
|
<python><html><django><pdf-generation><python-pdfkit>
|
2024-03-26 16:07:06
| 1
| 654
|
Germano
|
78,226,667
| 9,471,909
|
How does numpy.unique eliminate duplicate columns?
|
<p>I cannot understand correctly how Numpy's <code>unique</code> function works on multi-dimensional arrays. More precisely, I cannot comprehend how the documentation of <code>unique</code> describes its operation using the <code>axis</code> parameter:</p>
<p><a href="https://numpy.org/doc/stable/reference/generated/numpy.unique.html" rel="nofollow noreferrer">https://numpy.org/doc/stable/reference/generated/numpy.unique.html</a></p>
<blockquote>
<p>When an axis is specified the subarrays indexed by the axis are
sorted. <strong>This is done by making the specified axis the first dimension
of the array (move the axis to the first dimension to keep the order
of the other axes)</strong> and then flattening the subarrays in C order. The
flattened subarrays are then viewed as a structured type with each
element given a label, with the effect that we end up with a 1-D array
of structured types that can be treated in the same way as any other
1-D array. The result is that the flattened subarrays are sorted in
lexicographic order starting with the first element.</p>
</blockquote>
<p>I have read multiple times the above-mentioned paragraph but unfortunately I have not managed to get a clear picture of the process, particularly what I have put above in bold. For example, let's say, we have the following 2-D array:</p>
<pre><code>import numpy as np
myarray = np.array(
[
[ 1, 3, 7, 8, 3],
[-5, 0, 9, 2, 0],
[10, 11, 12, 85, 11]
]
)
</code></pre>
<p>As you can see, the 2nd and 5th columns containing the values <code>{3, 0, 11}</code> are duplicates. If I want to remove duplicate columns by using <code>numpy.unique</code> then I run the following:</p>
<pre><code>np.unique(myarray, axis=1)
</code></pre>
<p>Which provides the expected result:</p>
<pre><code>array([[ 1, 3, 7, 8],
[-5, 0, 9, 2],
[10, 11, 12, 85]])
</code></pre>
<p>The 5th column was indeed removed as expected given that it was a duplicate (of the 2nd column). So visually, the result is understandable. However if I read the above-mentioned documentation, trying to move the selected axis as the first dimension of the array as suggested, and then flattening the resulted subarrays, I cannot understand how exactly Numpy splitted and reorganized the structure of the array to arrive to the final result.</p>
<p>Could you kindly provide a step by step presentation based on the above-mentioned documentation describing how Numpy arrives at this result ?</p>
|
<python><arrays><numpy>
|
2024-03-26 16:04:44
| 2
| 1,471
|
user17911
|
78,226,623
| 1,714,692
|
Python avoid mypy fails when redefining a class from another class in mutual reference
|
<p>Consider a pair of classes which represent the same thing in Python and each implements a method that converts one to the other. As an example consider transforming from cartesian to polar coordinates and viceversa.</p>
<pre><code>@dataclass
class CartesianCoordinates:
x: float
y: float
def toPolarCoordinates(self) -> PolarCoordinates:
radius = ...
theta = ...
result = PolarCoordinates(radius, theta)
return result
@dataclass
class PolarCoordinates:
radius: float
theta: float
def toCartesianCoordinates(self) -> CartesianCoordinates:
x = ...
y = ...
result = CartesianCoordinates(x,y)
return result
</code></pre>
<p>Since I am using <code>PolarCoordinates</code> before it is defined, I've tried forwarding (actually redefining) its declaration by inserting the following lines before the <code>CartesianCoordinates</code> definition:</p>
<pre><code>class PolarCoordinates:
# forwarding declaration
pass
</code></pre>
<p>This works, in the sense that the code runs correctly but checks like mypy will fail due to the class redefinition with an error like <code>Name PolarCoordinates already defined</code>.</p>
<p>I know a real forward-declaration in Python is not possible, but is there anything equivalent that allows to reference a class before it is completely defined and that allow s the python checks like mypy to pass?</p>
|
<python><python-typing><forward-declaration>
|
2024-03-26 15:57:04
| 1
| 9,606
|
roschach
|
78,226,484
| 7,331,538
|
Scrapy handle closespider timeout in middleware
|
<p>I have multiple scrapers which I want to put a time limit on. The <code>CLOSESPIDER_TIMEOUT</code> does the job and it returns
<code>finish_reason: closespider_timeout</code>.</p>
<p>I want to intercept this and use the <code>logging</code> library to log an <code>error</code>. how can I do this? should it go in a middleware?</p>
|
<python><logging><scrapy><timeout><middleware>
|
2024-03-26 15:35:08
| 1
| 2,377
|
bcsta
|
78,226,481
| 1,654,143
|
Image quantization with Numpy
|
<p>I wanted to have a look at the example code for image quantization from <a href="https://glowingpython.blogspot.com/2012/07/color-quantization.html" rel="nofollow noreferrer">here</a>
However, it's rather old and Python and NP have changed since then.</p>
<pre><code>from pylab import imread,imshow,figure,show,subplot
from numpy import reshape,uint8,flipud
from scipy.cluster.vq import kmeans,vq
img = imread('clearsky.jpg')
# reshaping the pixels matrix
pixel = reshape(img,(img.shape[0]*img.shape[1],3))
# performing the clustering
# print(type(pixel))
centroids,_ = kmeans(pixel,6) # six colors will be found
# quantization
qnt,_ = vq(pixel,centroids)
# reshaping the result of the quantization
centers_idx = reshape(qnt,(img.shape[0],img.shape[1]))
clustered = centroids[centers_idx]
figure(1)
subplot(211)
imshow(flipud(img))
subplot(212)
imshow(flipud(clustered))
show()
</code></pre>
<p>It's falling over at line 12 <code>centroids,_ = kmeans(pixel,6)</code></p>
<p>File "D:\Python30\lib\site-packages\scipy\cluster\vq.py", line 454, in kmeans
book, dist = _kmeans(obs, guess, thresh=thresh)
File "D:\Python30\lib\site-packages\scipy\cluster\vq.py", line 309, in _kmeans
code_book.shape[0])
File "_vq.pyx", line 340, in scipy.cluster._vq.update_cluster_means
TypeError: type other than float or double not supported</p>
<p>I can change 6 to 6.0, but confused as what to do for the NParray passing into kmeans.</p>
<p>What do I need to do to update the code to get the example up and running?</p>
|
<python><numpy><quantization>
|
2024-03-26 15:34:47
| 1
| 7,007
|
Ghoul Fool
|
78,226,424
| 2,881,414
|
Custom environment variables with Popen on Windows on Github Actions
|
<p>I'm using <code>Popen</code> together with custom environment variables. My expectation is when I run something like this:</p>
<pre class="lang-py prettyprint-override"><code> proc = Popen(
command,
universal_newlines=True,
bufsize=0,
shell=False,
env=env,
)
</code></pre>
<p>that the environment variables set when running <code>command</code> are exactly the contents of <code>env</code>. However, when I run this on windows machines on github actions, there's (exactly) two additional variables set: <code>TERM=xterm-256color</code>, and <code>HOME=/c/Users/runneradmin</code>. Interestingly, when I check the contents of <code>os.environ</code> <strong>before</strong> execution, there's a lot of variables on the host environment set, <strong>except</strong> those two.</p>
<p>This only happens on windows (running <code>windows-latest</code>). On Mac and Linux, the environment is exactly the contents of <code>env</code>.</p>
<p>The <code>command</code> I'm running in this case is <code>env</code> which outputs the environment variables.</p>
<p>My question is, where do those two variables come from and how do I get rid of them?</p>
|
<python><python-3.x><windows><github-actions><popen>
|
2024-03-26 15:26:04
| 1
| 17,530
|
Bastian Venthur
|
78,226,139
| 9,673,864
|
How to quantify the consistency of a sequence of predictions, incl. prediction confidence, using standard function from sklearn or a similar library
|
<p>Let's say I let a classification model classify a single object multiple times but under varying circumstances. Ideally it should predict the same class again and again. But in reality its class predictions may vary.</p>
<p>So given a sequence of class predictions for the single object, I'd like to measure how consistent the sequence is. To be clear, this is not about comparing predictions against some ground truth. This is about consistency within the prediction sequence itself.</p>
<ul>
<li>For instance, a perfectly consistent prediction sequence like <code>class_a, class_a, class_a, class_a</code> should get a perfect score.</li>
<li>A less consistent sequence like <code>class_a, class_b, class_a, class_c</code>
should get a lower score.</li>
<li>And a completely inconsistent sequence like
<code>class_a, class_b, class_c, class_d</code> should get the lowest score
possible.</li>
</ul>
<p>The goal is to find out on what objects we may need to keep training the classification model. If the classification model is not very consistent in its predictions for a certain object, then we might need to add that object to a dataset for further training.</p>
<p>Preferably it works for any number of possible classes and also takes into account prediction confidences. The sequence <code>class_a (0.9), class_b (0.9), class_a (0.9), class_c (0.9)</code> should give a lower score then <code>class_a (0.9), class_b (0.2), class_a (0.8), class_c (0.3)</code>, as it's no good when the predictions are inconsistent with high confidences.</p>
<p>I could build something myself, but I'd like to know if there's a standard sklearn or scipy (or similar) function for this? Thanks in advance!</p>
<p>The comment to <a href="https://stackoverflow.com/questions/26406708/how-to-measure-the-consistence-of-a-sequence">this question</a> suggests <a href="https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient" rel="nofollow noreferrer">Spearman's correlation coefficient</a> or the <a href="https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient" rel="nofollow noreferrer">Kandell correlation coefficient</a>. I'll look into that as well.</p>
|
<python><machine-learning><scikit-learn><statistics><classification>
|
2024-03-26 14:41:53
| 1
| 325
|
wouterio
|
78,225,973
| 10,770,967
|
Checking for substring from a frame A in a different Frame B, with different sizes for each frame
|
<p>I do have a question hoping - I am confindet that you can help me out. Suppose I do have two frames, each having multiple columns, but for simplicity let's focus on one column per frame.
<strong>Important</strong>: <strong>Both frame are different in size, with A being shorter</strong></p>
<pre><code>import pandas as pd
FrameA=pd.DataFrame({"A":["00281378554", "10862520000","82540193700","76015394900","00134355050","21864009"]})
FrameB=pd.DataFrame({"A":["AT511634000134355050","AT411513000281378554", "AT711509100151013992",
"AT511509000121340020","AT424480010862520000","AT742011182540193700","AT531200076015394900","HU02142201082186400900000000"
]})
</code></pre>
<p>My goal is following: I want to check for each element of column A from frameA if <strong>it is contained</strong> in Col A from Frame B, if so then I want create a new column in frame A (shorter dimension) with following outcome:</p>
<pre><code>Frame A
Col A Col B
00281378554 AT411513000281378554
10862520000 AT424480010862520000
82540193700 AT742011182540193700
76015394900 AT531200076015394900
00134355050 AT511634000134355050
21864009 HU02142201082186400900000000
</code></pre>
<p>So the new column B for frameA shall contain the element from frameB which contains the string from frame A
I tried with <code>np.where</code> but as frameA is shorter in dimension than frameB it is not working. So all I could do is use a for loop. But this is extremely time-consuming and I do think there must be a more elegant solution</p>
<p>Any suggestions?</p>
|
<python><pandas><numpy>
|
2024-03-26 14:16:00
| 3
| 402
|
SMS
|
78,225,962
| 8,849,071
|
How to use SQL Alchemy models the same way than django models
|
<p>I have been using fast api for a few days and passing the <code>db</code> session around is starting to bother me a bit. In my previous project I was using a django backend, where to make queries I only needed to import the model. Something like:</p>
<pre class="lang-py prettyprint-override"><code>from models import SomeModel
class SomeRepository:
def method(self):
SomeModel.objects.filter(...)
</code></pre>
<p>But in fast api, I need to take care of the <code>db</code> session object and pass it as argument, something like:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy.orm import Session
from models import SomeModel
class SomeRepository:
def method(self, db: Session):
db.query(SomeModel).all()
</code></pre>
<p>Is there any way to make fast api and sql alchemy work like the way models work in Django?</p>
|
<python><django><sqlalchemy><fastapi>
|
2024-03-26 14:14:25
| 0
| 2,163
|
Antonio Gamiz Delgado
|
78,225,953
| 16,759,116
|
Why is `if x is None: pass` faster than `x is None` alone?
|
<p>Timing results in Python 3.12 (and similar with 3.11 and 3.13 on different machines):</p>
<pre><code>When x = None:
13.8 ns x is None
10.1 ns if x is None: pass
When x = True:
13.9 ns x is None
11.1 ns if x is None: pass
</code></pre>
<p>How can doing <strong>more</strong> take <strong>less</strong> time?</p>
<p>Why is <code>if x is None: pass</code> faster, when it does the same <code>x is None</code> check and then additionally checks the truth value of the result (and does or skips the <code>pass</code>)?</p>
<p>Times on other versions/machines:</p>
<ul>
<li>Python 3.11: (12.4, 9.3) and (12.0, 8.8)</li>
<li>Python 3.13: (12.7, 9.9) and (12.7, 9.6)</li>
</ul>
<p>Benchmark script (<a href="https://ato.pxeger.com/run?1=TZAxTgMxEEVF657-d15HS5QFCmRpr4AokCgQRQRjxcXalj1BG632JDRp4FCcBnudREw145n39f2_fsKBd94dj997NjcPv1fXJvoBbAeyDDsEHxmRAm1ZnKZ0SEIYHzHCOjx6Ry2e4560QK4QrePGyJcdOUwjesxaqmVVmHf_QQV7lZlOCy1bSGtwmTXCNiX5hhVuq2YpzkqDdU310hSdFkZOYz9ngfrad5uNUhfkbGXiVUd3-n7dmRkuId8XXP3zq4SojXxa8tD5Jv9z_UkxWe9UTecU0jmsPw" rel="nofollow noreferrer">Attempt This Online!</a>):</p>
<pre class="lang-py prettyprint-override"><code>from timeit import repeat
import sys
for x in None, True:
print(f'When {x = }:')
for code in ['x is None', 'if x is None: pass'] * 2:
t = min(repeat(code, f'{x=}', repeat=100))
print(f'{t*1e3:4.1f} ns ', code)
print()
print('Python:', sys.version)
</code></pre>
|
<python><performance><cpython><micro-optimization><python-internals>
|
2024-03-26 14:12:02
| 2
| 10,901
|
no comment
|
78,225,920
| 1,194,864
|
Why next(iter(train_dataloader)) takes long execution time in PyTorch
|
<p>I am trying to load a local dataset with images (around 225 images in total) using the following code:</p>
<pre><code># Set the batch size
BATCH_SIZE = 32
# Create data loaders
train_dataloader, test_dataloader, class_names = data_setup.create_dataloaders(
train_dir=train_dir,
test_dir=test_dir,
transform=manual_transforms, # use manually created transforms
batch_size=BATCH_SIZE
)
# Get a batch of images
image_batch, label_batch = next(iter(train_dataloader)) # why it takes so much time? what can
I do about it?
</code></pre>
<p>My question concerns the last line of the code and the iteration in the <code>train_dataloader</code> which takes long execution time. Why is this the case? I have only 225 images.</p>
<p><strong>Edit:</strong></p>
<p>The code for the dataloader can be found in the following <a href="https://github.com/mrdbourke/pytorch-deep-learning/blob/main/going_modular/going_modular/data_setup.py" rel="nofollow noreferrer">link</a>.</p>
<pre><code>import os
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
import pdb
NUM_WORKERS = os.cpu_count()
def create_dataloaders(
train_dir: str,
test_dir: str,
transform: transforms.Compose,
batch_size: int,
num_workers: int=NUM_WORKERS
):
# Use ImageFolder to create dataset(s)
train_data = datasets.ImageFolder(train_dir, transform=transform)
test_data = datasets.ImageFolder(test_dir, transform=transform)
# Get class names
class_names = train_data.classes
# Turn images into data loaders
train_dataloader = DataLoader(
train_data,
batch_size=batch_size,
shuffle=True,
num_workers=num_workers,
pin_memory=True,
)
test_dataloader = DataLoader(
test_data,
batch_size=batch_size,
shuffle=False, # don't need to shuffle test data
num_workers=num_workers,
pin_memory=True,
)
return train_dataloader, test_dataloader, class_names
</code></pre>
|
<python><pytorch><iteration>
|
2024-03-26 14:04:35
| 2
| 5,452
|
Jose Ramon
|
78,225,774
| 1,073,784
|
Why does setting flags on an NDArray view result in allocations? Are they guaranteed to be bounded?
|
<p>Consider this code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import itertools
def get_view(arr):
view = arr.view()
view.flags.writeable = False # this line causes memory to leak?
return view
def main():
for _ in itertools.count():
get_view(np.zeros(1000))
if __name__ == "__main__":
main()
</code></pre>
<p>It seems the line setting the view to non-writeable causes a memory leak, although I don't know if it's bounded.</p>
<ol>
<li>Why does this happen?</li>
<li>Is it guaranteed to be bounded? Or is this a numpy bug? Or maybe they are reference counted, but for some reason manually invoking the garbage collector does not collect them?</li>
</ol>
<p>Here's the same program adorned with tracemalloc logic to print allocations every 100k calls to get_view.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import tracemalloc
import itertools
import gc
def log_diff(snapshot, prev_snapshot):
diff = snapshot.compare_to(prev_snapshot, "lineno")
reported = 0
for stat in diff:
if "tracemalloc.py" in stat.traceback[0].filename:
continue
if stat.size_diff <= 0:
continue
print(f"#{reported}: {stat}")
reported += 1
print("---")
def get_view(arr):
view = arr.view()
view.flags.writeable = False # this line causes memory to leak?
return view
def main():
tracemalloc.start()
prev_snapshot = None
for i in itertools.count():
get_view(np.zeros(1000))
if i % 100000 == 0:
gc.collect(generation=2)
snapshot = tracemalloc.take_snapshot()
if prev_snapshot is not None:
log_diff(snapshot, prev_snapshot)
prev_snapshot = snapshot
if __name__ == "__main__":
main()
</code></pre>
<p>On Python 3.11.6 and numpy 1.26.4 on Linux, the number of allocations we get seems to be nondeterministic, but the largest I've seen it grow is around 250. It grows in the beginning, then much less rapidly later.</p>
<p>If I comment out the line assigning <code>view.flags.writeable</code>, the memory usage does not grow.</p>
<pre><code>#0: /home/sami/bug.py:22: size=3534 B (+3477 B), count=62 (+61), average=57 B
#1: /home/sami/bug.py:29: size=84 B (+28 B), count=2 (+1), average=42 B
---
#0: /home/sami/bug.py:22: size=5871 B (+2337 B), count=103 (+41), average=57 B
#1: /home/sami/bug.py:15: size=72 B (+72 B), count=1 (+1), average=72 B
---
---
#0: /home/sami/bug.py:22: size=6270 B (+399 B), count=110 (+7), average=57 B
---
#0: /home/sami/bug.py:22: size=6327 B (+57 B), count=111 (+1), average=57 B
---
#0: /home/sami/bug.py:22: size=7638 B (+1311 B), count=134 (+23), average=57 B
---
#0: /home/sami/bug.py:22: size=7809 B (+171 B), count=137 (+3), average=57 B
---
---
#0: /home/sami/bug.py:22: size=8436 B (+627 B), count=148 (+11), average=57 B
---
#0: /home/sami/bug.py:22: size=8664 B (+228 B), count=152 (+4), average=57 B
---
#0: /home/sami/bug.py:22: size=8892 B (+228 B), count=156 (+4), average=57 B
---
---
#0: /home/sami/bug.py:22: size=9120 B (+228 B), count=160 (+4), average=57 B
---
---
#0: /home/sami/bug.py:22: size=9177 B (+114 B), count=161 (+2), average=57 B
---
...
</code></pre>
|
<python><numpy><memory-leaks><numpy-ndarray>
|
2024-03-26 13:41:16
| 1
| 1,134
|
Sami Liedes
|
78,225,545
| 5,330,527
|
Display a filtered result from ManyToMany through model in Admin
|
<p>This is my models.py:</p>
<pre><code>class Person(models.Model):
surname = models.CharField(max_length=100, blank=True, null=True)
forename = models.CharField(max_length=100, blank=True, null=True)
def __str__(self):
return '{}, {}'.format(self.surname, self.forename)
class PersonRole(models.Model):
ROLE_CHOICES = [
("Principal investigator", "Principal investigator"),
[etc...]
]
title = models.CharField(choices=TITLE_CHOICES, max_length=9)
project = models.ForeignKey('Project', on_delete=models.CASCADE)
person = models.ForeignKey(Person, on_delete=models.CASCADE)
person_role = models.CharField(choices=ROLE_CHOICES, max_length=30)
def __str__(self):
return '{}: {} as {}.'.format(self.project, self.person, self.person_role)
class Project(models.Model):
title = models.CharField(max_length=200)
person = models.ManyToManyField(Person, through=PersonRole)
def __str__(self):
return self.title
def get_PI(self, obj):
return [p.person for p in self.person.all()] #I'll then need to filter where person_role is 'Principal investigator', which should be the easy bit.
</code></pre>
<p>In my Admin back-end I'd like to display the person (principal investigator) in the main table:</p>
<pre><code>class ProjectAdmin(ImportExportModelAdmin):
list_filter = [PersonFilter, FunderFilter]
list_display = ("title", "get_PI")
ordering = ('title',)
</code></pre>
<p>What I want: display the person with the role 'Principal investigator' in the Projects table in the Admin.</p>
<p>You can see that I created my <code>get_PI()</code> in my <code>models.py</code> and references in my <code>list_display</code>. I'm getting <code>Project.get_PI() missing 1 required positional argument: 'obj'</code>. What am I doing wrong?</p>
|
<python><python-3.x><django><django-models>
|
2024-03-26 13:04:36
| 1
| 786
|
HBMCS
|
78,225,397
| 4,435,175
|
Replace chars in existing column names without creating new columns
|
<p>I am reading a csv file and need to normalize the column names as part of a larger function chaining operation. I want to do everything with function chaining.</p>
<p>When using the recommended <code>name.map</code> function for replacing chars in columns like:</p>
<pre><code>import polars as pl
df = pl.DataFrame(
{"A (%)": [1, 2, 3], "B": [4, 5, 6], "C (Euro)": ["abc", "def", "ghi"]}
).with_columns(
pl.all().name.map(
lambda c: c.replace(" ", "_")
.replace("(%)", "pct")
.replace("(Euro)", "euro")
.lower()
)
)
df.head()
</code></pre>
<p>I get</p>
<pre><code>shape: (3, 6)
┌───────┬─────┬──────────┬───────┬─────┬────────┐
│ A (%) ┆ B ┆ C (Euro) ┆ a_pct ┆ b ┆ c_euro │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ str ┆ i64 ┆ i64 ┆ str │
╞═══════╪═════|══════════╡═══════╡═════╡════════╡
│ 1 ┆ 4 ┆ "abc" ┆ 1 ┆ 4 ┆ "abc" │
│ 2 ┆ 5 ┆ "def" ┆ 2 ┆ 5 ┆ "def" │
│ 3 ┆ 6 ┆ "ghi" ┆ 3 ┆ 6 ┆"ghi" │
└───────┴─────┴──────────┴───────┴─────┴────────┘
</code></pre>
<p>instead of the expected</p>
<pre><code>shape: (3, 3)
┌───────┬─────┬────────┐
│ a_pct ┆ b ┆ c_euro │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ str │
╞═══════╪═════|════════╡
│ 1 ┆ 4 ┆ "abc" │
│ 2 ┆ 5 ┆ "def" │
│ 3 ┆ 6 ┆ "ghi" │
└───────┴─────┴────────┘
</code></pre>
<p>?</p>
<p>How can I replace specific chars in existing column names with function chaining without creating new columns?</p>
|
<python><dataframe><replace><python-polars><chaining>
|
2024-03-26 12:40:36
| 1
| 2,980
|
Vega
|
78,225,278
| 14,034,263
|
getting "Failed to create meeting error 401 Client Error: Unauthorized for url: https://api.zoom.us/v2/users/me/meetings"
|
<p>I have added the client id and secrete key and if i am autharizing the app from postman its working but as soon as i tried to ping the api with my app it giving 401 error.</p>
<p>how can i authorize my app into zoom app and can create meeting with dynamic token</p>
<pre><code> authrization_url = 'https://zoom.us/oauth/authorize'
authrization_params = {
'client_id': zoom_client_key,
'response_type': 'code',
'redirect_uri': 'https://gensproject.supporthives.com/zoom',
}
auth_response = requests.post(authrization_url, data=authrization_params)
if auth_response:
# Obtain a new access token
access_token_url = 'https://zoom.us/oauth/token'
access_token_params = {
'grant_type': 'client_credentials',
'client_id': zoom_client_key,
'client_secret': zoom_client_secret,
}
try:
response = requests.post(access_token_url, data=access_token_params)
response.raise_for_status()
access_token = response.json().get('access_token', '')
except requests.RequestException as e:
return Response({'message': 'Failed to obtain access token', 'error': str(e)}, status=500)
# Create a Zoom meeting
# user_id = "ops@supporthives.com"
# create_meeting_url = f"https://api.zoom.us/v2/users/{user_id}/meetings"
"https://zoom.us/oauth/authorize?client_id=MeGkVnUrQGG15c7ESPP1ZA&response_type=code&redirect_uri=https%3A%2F%2Fgensproject.supporthives.com%2Fzoom"
create_meeting_url = 'https://api.zoom.us/v2/users/me/meetings'
create_meeting_headers = {
'Authorization': f'Bearer {access_token}',
# 'Authorization': f'Bearer eyJzdiI6IjAwMDAwMSIsImFsZyI6IkhTNTEyIiwidiI6IjIuMCIsImtpZCI6IjQ0NTYzNTg5LTA0YzAtNDY1ZC05NjBhLTljM2Y4YmI1ZjRmMyJ9.eyJ2ZXIiOjksImF1aWQiOiJiOTg4ZGQ4ZGNiYmQ0NTcyMjM0OWU4ZWE1ZGQ5NTVkMCIsImNvZGUiOiJ6V0tvUWhBREtuQWRHdVVuRkt5Uk15WXF2TjBYaXpDa0EiLCJpc3MiOiJ6bTpjaWQ6TWVHa1ZuVXJRR0cxNWM3RVNQUDFaQSIsImdubyI6MCwidHlwZSI6MCwidGlkIjowLCJhdWQiOiJodHRwczovL29hdXRoLnpvb20udXMiLCJ1aWQiOiI3RkRIdW16TFNMZW5xUVk0cDRFNDRnIiwibmJmIjoxNzExNDUyMTk0LCJleHAiOjE3MTE0NTU3OTQsImlhdCI6MTcxMTQ1MjE5NCwiYWlkIjoiMVpmNnBDYkRROHF2emU1RFB0VnQ0USJ9.2oTMkSwL0OrnOAswTJoscy8zOSJhVmplTHsE6gh3H5-z_7sznAanBF2XyL3wO9FLGpggsFSqWxB04MXQof7YDw',
'Content-Type': 'application/json',
}
event_datetime = datetime.combine(event.event_date, datetime.min.time())
start_time_utc = mktime(event_datetime.utctimetuple())
create_meeting_params = {
'topic': event.event_title,
'type': 2, # Scheduled meeting
'start_time': f'{start_time_utc}000',
'duration': 60,
'timezone': 'UTC',
'description': event.event_description,
'settings': {
'host_video': True,
'participant_video': True,
'waiting_room': False,
'join_before_host': True,
'mute_upon_entry': False,
'auto_recording': 'none',
},
}
try:
create_meeting_response = requests.post(create_meeting_url, headers=create_meeting_headers, json=create_meeting_params)
create_meeting_response.raise_for_status()
meeting_link = create_meeting_response.json().get('join_url', '')
event.event_link = meeting_link
event.event_meeting_data = create_meeting_response.json()
event.save()
# return success(201, meeting_link, "Meeting created successfully")
except requests.RequestException as e:
return fail(500 ,'Failed to create meeting error ' + str(e))
else:
return fail(400,"Zoom App not autharized!")
</code></pre>
<p>here is my code in which i am geting error again and again</p>
<p>how can i authorize this app. what will be correct way to generate meeting link dynamically.</p>
|
<python><django><django-views><zoom-sdk>
|
2024-03-26 12:17:54
| 1
| 359
|
Vishal Pandey
|
78,225,208
| 15,205,097
|
python-docx: Remove bibliography (w:sdt) section
|
<p>My end goal here is to find and remove any bibliography section from a Microsoft Word document.</p>
<p>As mentioned in <a href="https://github.com/python-openxml/python-docx/issues/155" rel="nofollow noreferrer">this issue</a>.
There currently isn't any API support for <code>w:sdt</code> tags (which I think is only for bibliographies). However, in response to this issue, a work-around was given of:</p>
<pre class="lang-py prettyprint-override"><code>paragraph = ... # however you get the paragraph, maybe with `for paragraph in document.paragraphs`
p = paragraph._element
sdts = p.xpath('w:sdt')
for sdt in sdts:
parent = sdt.getparent()
parent.remove(sdt)
</code></pre>
<p>When using the above, the document's xml updates and deletes the <code>w:sdt</code> tags as I want it to. However, when I save the new document the output .docx file still includes the bibliography section.</p>
<p>Why is it that despite the python-docx document's xml not including the <code>w:sdt</code> elements does this not reflect once the document is saved and opened in Microsoft Word. I have read <a href="https://github.com/python-openxml/python-docx/issues/33" rel="nofollow noreferrer">here</a> that document relationships could have something to do this with but given that I get no error when opening the new word document, I don't think this is the problem. Any ideas?</p>
<p>Here is the xml snippet pre-processing:</p>
<pre class="lang-xml prettyprint-override"><code><w:p xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:w14="http://schemas.microsoft.com/office/word/2010/wordml" xmlns:wpc="http://schemas.microsoft.com/office/word/2010/wordprocessingCanvas" xmlns:cx="http://schemas.microsoft.com/office/drawing/2014/chartex" xmlns:cx1="http://schemas.microsoft.com/office/drawing/2015/9/8/chartex" xmlns:cx2="http://schemas.microsoft.com/office/drawing/2015/10/21/chartex" xmlns:cx3="http://schemas.microsoft.com/office/drawing/2016/5/9/chartex" xmlns:cx4="http://schemas.microsoft.com/office/drawing/2016/5/10/chartex" xmlns:cx5="http://schemas.microsoft.com/office/drawing/2016/5/11/chartex" xmlns:cx6="http://schemas.microsoft.com/office/drawing/2016/5/12/chartex" xmlns:cx7="http://schemas.microsoft.com/office/drawing/2016/5/13/chartex" xmlns:cx8="http://schemas.microsoft.com/office/drawing/2016/5/14/chartex" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:aink="http://schemas.microsoft.com/office/drawing/2016/ink" xmlns:am3d="http://schemas.microsoft.com/office/drawing/2017/model3d" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:oel="http://schemas.microsoft.com/office/2019/extlst" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math" xmlns:v="urn:schemas-microsoft-com:vml" xmlns:wp14="http://schemas.microsoft.com/office/word/2010/wordprocessingDrawing" xmlns:wp="http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing" xmlns:w10="urn:schemas-microsoft-com:office:word" xmlns:w15="http://schemas.microsoft.com/office/word/2012/wordml" xmlns:w16cex="http://schemas.microsoft.com/office/word/2018/wordml/cex" xmlns:w16cid="http://schemas.microsoft.com/office/word/2016/wordml/cid" xmlns:w16="http://schemas.microsoft.com/office/word/2018/wordml" xmlns:w16sdtdh="http://schemas.microsoft.com/office/word/2020/wordml/sdtdatahash" xmlns:w16se="http://schemas.microsoft.com/office/word/2015/wordml/symex" xmlns:wpg="http://schemas.microsoft.com/office/word/2010/wordprocessingGroup" xmlns:wpi="http://schemas.microsoft.com/office/word/2010/wordprocessingInk" xmlns:wne="http://schemas.microsoft.com/office/word/2006/wordml" xmlns:wps="http://schemas.microsoft.com/office/word/2010/wordprocessingShape" w14:paraId="2BE25C04" w14:textId="0C685DB1" w:rsidR="00572630" w:rsidRDefault="006A14EA">
<w:r>
<w:t xml:space="preserve">Hey there </w:t>
</w:r>
<w:sdt>
<w:sdtPr>
<w:id w:val="-39898552"/>
<w:citation/>
</w:sdtPr>
<w:sdtContent>
<w:r>
<w:fldChar w:fldCharType="begin"/>
</w:r>
<w:r>
<w:instrText xml:space="preserve"> CITATION Fra69 \l 2057 </w:instrText>
</w:r>
<w:r>
<w:fldChar w:fldCharType="separate"/>
</w:r>
<w:r>
<w:rPr>
<w:noProof/>
</w:rPr>
<w:t>(Herbert, 1969)</w:t>
</w:r>
<w:r>
<w:fldChar w:fldCharType="end"/>
</w:r>
</w:sdtContent>
</w:sdt>
</w:p>
</code></pre>
<p>Here is the xml snippet post-processing where you can see the bibliography has been removed, but these changes aren't seen once I save and open the doc:</p>
<pre class="lang-xml prettyprint-override"><code><w:p xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:w14="http://schemas.microsoft.com/office/word/2010/wordml" xmlns:wpc="http://schemas.microsoft.com/office/word/2010/wordprocessingCanvas" xmlns:cx="http://schemas.microsoft.com/office/drawing/2014/chartex" xmlns:cx1="http://schemas.microsoft.com/office/drawing/2015/9/8/chartex" xmlns:cx2="http://schemas.microsoft.com/office/drawing/2015/10/21/chartex" xmlns:cx3="http://schemas.microsoft.com/office/drawing/2016/5/9/chartex" xmlns:cx4="http://schemas.microsoft.com/office/drawing/2016/5/10/chartex" xmlns:cx5="http://schemas.microsoft.com/office/drawing/2016/5/11/chartex" xmlns:cx6="http://schemas.microsoft.com/office/drawing/2016/5/12/chartex" xmlns:cx7="http://schemas.microsoft.com/office/drawing/2016/5/13/chartex" xmlns:cx8="http://schemas.microsoft.com/office/drawing/2016/5/14/chartex" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:aink="http://schemas.microsoft.com/office/drawing/2016/ink" xmlns:am3d="http://schemas.microsoft.com/office/drawing/2017/model3d" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:oel="http://schemas.microsoft.com/office/2019/extlst" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math" xmlns:v="urn:schemas-microsoft-com:vml" xmlns:wp14="http://schemas.microsoft.com/office/word/2010/wordprocessingDrawing" xmlns:wp="http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing" xmlns:w10="urn:schemas-microsoft-com:office:word" xmlns:w15="http://schemas.microsoft.com/office/word/2012/wordml" xmlns:w16cex="http://schemas.microsoft.com/office/word/2018/wordml/cex" xmlns:w16cid="http://schemas.microsoft.com/office/word/2016/wordml/cid" xmlns:w16="http://schemas.microsoft.com/office/word/2018/wordml" xmlns:w16sdtdh="http://schemas.microsoft.com/office/word/2020/wordml/sdtdatahash" xmlns:w16se="http://schemas.microsoft.com/office/word/2015/wordml/symex" xmlns:wpg="http://schemas.microsoft.com/office/word/2010/wordprocessingGroup" xmlns:wpi="http://schemas.microsoft.com/office/word/2010/wordprocessingInk" xmlns:wne="http://schemas.microsoft.com/office/word/2006/wordml" xmlns:wps="http://schemas.microsoft.com/office/word/2010/wordprocessingShape" w14:paraId="2BE25C04" w14:textId="0C685DB1" w:rsidR="00572630" w:rsidRDefault="006A14EA">
<w:r>
<w:t xml:space="preserve">Hey there </w:t>
</w:r>
</w:p>
</code></pre>
<p><em>Note: I am asking this question here as the <a href="https://github.com/python-openxml/python-docx" rel="nofollow noreferrer">python-docx GitHub</a> hasn't had much activity recently.</em></p>
|
<python><python-3.x><ms-word><python-docx>
|
2024-03-26 12:08:13
| 2
| 306
|
Charlie Clarke
|
78,225,106
| 5,980,655
|
Downsample a pandas dataframe keeping same proportion of target in every month
|
<p>I have a pandas dataframe <code>df</code> with column <code>'TARGET'</code> which takes values of <code>0</code> or <code>1</code> and column <code>'MONTH'</code> which collects different months:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>MONTH</th>
<th>#_OBS_TARGET=0</th>
<th>#_OBS_TARGET=1</th>
</tr>
</thead>
<tbody>
<tr>
<td>202207</td>
<td>44619</td>
<td>52960</td>
</tr>
<tr>
<td>202208</td>
<td>48093</td>
<td>55399</td>
</tr>
<tr>
<td>202209</td>
<td>50161</td>
<td>56528</td>
</tr>
</tbody>
</table></div>
<p>I want to downsample my dataframe to have the same number of observations with <code>TARGET = 0</code> and <code>TARGET = 1</code> <strong>for every value of</strong> <code>MONTH</code>:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>MONTH</th>
<th>#_OBS_TARGET=0</th>
<th>#_OBS_TARGET=1</th>
</tr>
</thead>
<tbody>
<tr>
<td>202207</td>
<td>44619</td>
<td>44619</td>
</tr>
<tr>
<td>202208</td>
<td>48093</td>
<td>48093</td>
</tr>
<tr>
<td>202209</td>
<td>50161</td>
<td>50161</td>
</tr>
</tbody>
</table></div>
<p>I tred the following</p>
<pre><code>for m in df['MONTH'].unique():
number_of_ones = len(df[(df['MONTH']==m) & (df['TARGET']==1)])
number_of_zeros = len(df[(df['MONTH']==m) & (df['TARGET']==0)])
n_obs_to_drop = number_of_ones - number_of_zeros
df[df['MONTH']==m].drop(df[(df['MONTH']==m) & (df['TARGET']==1)].sample(n_obs_to_drop).index, inplace = True)
</code></pre>
<p>But clearly it is not deleting anything and I get the following warning (<strong>EDIT2</strong>):</p>
<pre><code>/opt/conda/envs/librerias_cbi/lib/python3.9/site-packages/pandas/core/frame.py:4901: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
return super().drop(
</code></pre>
<p>How should I get it? Also a diffent approach is welcomed.</p>
<p>Notice that there are duplicate values of the index in diffent values of <code>MONTH</code>. There are also many more columns in <code>df</code> which should be kept in the downsampled dataframe.</p>
<p><strong>EDIT1</strong>:</p>
<p>I'm adding a reproducible example</p>
<pre><code>import pandas as pd
data = {
"MONTH": [202207, 202207, 202207, 202207, 202208, 202208, 202208, 202209, 202209, 202209, 202209],
"TARGET": [1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0],
"other_column1": [10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110], # Example additional columns
"other_column2": [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1100]
}
df = pd.DataFrame(data)
pd.crosstab(df['MONTH'],df['TARGET'])
TARGET 0 1
MONTH
202207 1 3
202208 1 2
202209 2 2
</code></pre>
|
<python><pandas><dataframe><data-manipulation><downsampling>
|
2024-03-26 11:52:45
| 2
| 1,035
|
Ale
|
78,225,040
| 3,759,627
|
In Polars, how do you generate a column of lists, where each list is a range() defined by another column of type Int?
|
<p>Given a sample DataFrame <code>df</code></p>
<pre><code>>>> df = pl.DataFrame({'l': [3,5,8]})
>>> df
shape: (3, 1)
┌─────┐
│ l │
│ --- │
│ i64 │
╞═════╡
│ 3 │
│ 5 │
│ 8 │
└─────┘
</code></pre>
<p>How do I make a new column that looks like this?</p>
<pre><code>>>> df
shape: (3, 2)
┌─────┬─────────────┐
│ l ┆ column_0 │
│ --- ┆ --- │
│ i64 ┆ list[i64] │
╞═════╪═════════════╡
│ 3 ┆ [0, 1, 2] │
│ 5 ┆ [0, 1, … 4] │
│ 8 ┆ [0, 1, … 7] │
└─────┴─────────────┘
</code></pre>
<p>This is my best way of doing it but it is using map_rows, not so efficient when things scale up</p>
<pre><code>>>> temp = df.map_rows(lambda x: (list(range(x[0])),))
>>> df = df.hstack(temp)
</code></pre>
|
<python><dataframe><python-polars>
|
2024-03-26 11:41:46
| 1
| 339
|
Horace
|
78,224,806
| 3,169,248
|
Is there a way to use regex in python data.table (apart from re.match)?
|
<p>I am trying to convert a column in py data.table to integer. The column contains whitespace and other unwanted characters, with those removed it could be cast to integer. I am failing to do this task in py data.table (whereas I can do it in polars, python, R data.table):</p>
<pre><code># remove .00 if exists, minus after number etc.
df[:, update(weird_col = re.sub(r"\.[0-9]{0,2}|-", "", df[:, 'weird_col']))]
TypeError: expected string or bytes-like object
</code></pre>
<p>or</p>
<pre><code>df[:, update(weird_col = re.sub(r"\.[0-9]{0,2}|-", "", df['weird_col']))]
TypeError: cannot use a string pattern on a bytes-like object
</code></pre>
|
<python><regex><py-datatable>
|
2024-03-26 11:03:55
| 1
| 343
|
itarill
|
78,224,725
| 2,768,038
|
Sharing venv python packages to optimise docker image size
|
<p>I have a docker container where I am installing multiple versions of a package into different venvs so that users of the docker image can quickly change between the package using <code>PATH</code> env to point to the version they want.</p>
<p>However this docker image is currently growing massively and taking a long time to build. I was wondering is there any way to optimise the process to both reduce the build time and the actual size of the image.</p>
<hr />
<p>I have a for loop of different <code>$version</code>s with a requirements.txt for each:</p>
<pre><code>python3 -m venv "package-$version"
source package-$version/bin/activate
# install dbt deps
which python3
# pip install --upgrade pip
export PIP_DISABLE_PIP_VERSION_CHECK=1
pip config set global.parallel true
pip config set global.prefer-binary true
python -m pip install --requirement "resources/requirements/requirements.$version.txt"
</code></pre>
|
<python><docker><pip>
|
2024-03-26 10:51:16
| 0
| 4,305
|
maxisme
|
78,224,557
| 6,649,591
|
langchain: how to use a custom deployed fastAPI embedding model locally?
|
<p>I want to build a retriever in Langchain and want to use an already deployed fastAPI embedding model. How could I do that?</p>
<pre class="lang-python prettyprint-override"><code>from langchain_community.vectorstores import DocArrayInMemorySearch
embeddings_model = requests.post("http://internal-server/embeddings/")
db = DocArrayInMemorySearch.from_documents(chunked_docs, embeddings_model)
retriever = db.as_retriever()
</code></pre>
|
<python><python-requests><langchain>
|
2024-03-26 10:27:08
| 1
| 487
|
Christian
|
78,224,436
| 5,550,833
|
Python - wait for a websocket response in a sync method
|
<p>i'm facing a problem with Asyncio, web sockets and sync calls.</p>
<p>We have an application which uses websockets and Flask.</p>
<p>Websockets are managed with asyncio, we receive messages on</p>
<pre><code>async def on_message(message):
** some logic
await doStuff(message)
</code></pre>
<p>The problem is that our workflow is that we have an endpoint with Flask that needs to perform some action that needs to send a request to the websocket server, wait for the ws response and send the sync response to the controller.</p>
<p>Something like that</p>
<pre><code>@app.route("/request", methods=["POST"])
def manageRequest():
data = request.get_json()
## send data to ws
ws.send(data)
## we need the response on the on_message method
response = {} ##ws response
makeSomething(response)
return newResponse
</code></pre>
<p>is there a way to wait for the async response in the method, just like a Completable in Java?</p>
|
<python><websocket><python-asyncio>
|
2024-03-26 10:07:19
| 1
| 3,258
|
MarioC
|
78,224,389
| 2,862,945
|
Interpolating 3D volumetric data with scipy's RegularGridInterpolator
|
<p>I have a 3D array with some volumetric data, i.e. at each grid points I have some value representing a magnitude of a certain quantity. I want to interpolate that array using scipy's <code>RegularGridInterpolator</code>, but I am using it wrong.</p>
<p>To have a simple example, I use here a circle which I would like to interpolate. The original circle looks quite rough, as you can see below.</p>
<p><a href="https://i.sstatic.net/PP1cv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PP1cv.png" alt="rough circle" /></a></p>
<p>According to my understanding of the docs of <code>RegularGridInterpolator</code>, <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.RegularGridInterpolator.html" rel="nofollow noreferrer">link here</a>, I thought I need to provide the new coordinates for the interpolation function as an array of the shape <code>[[x1, y1, z1],[x2, y2, z2],...]</code>. My idea was to promote my new coordinate vectors to column vectors and then combine them with <code>hstack</code>, but that is not working as my data returned from the interpolator has the wrong shape.</p>
<p>Here is my code:</p>
<pre><code>from mayavi import mlab
import numpy as np
import scipy.interpolate as interp
def make_simple_3Dplot( data2plot, xVals, yVals, zVals, N_contLevels=8, fname_plot='' ):
contLevels = np.linspace( np.amin(data2plot),
np.amax(data2plot),
N_contLevels)[1:].tolist()
fig1 = mlab.figure( bgcolor=(1,1,1), fgcolor=(0,0,0),size=(800,600))
XX, YY, ZZ = np.meshgrid(xVals, yVals, zVals, indexing='ij' )
contPlot = mlab.contour3d( XX, YY, ZZ,
data2plot, contours=contLevels,
transparent=True, opacity=.4,
figure=fig1
)
mlab.xlabel('x')
mlab.ylabel('y')
mlab.zlabel('z')
mlab.show()
# define original coordinates
x_min, y_min, z_min = 0, 0, 0
x_max, y_max, z_max = 10, 10, 10
Nx, Ny, Nz = 20, 30, 40
x_arr = np.linspace(x_min, x_max, Nx)
y_arr = np.linspace(y_min, y_max, Ny)
z_arr = np.linspace(z_min, z_max, Nz)
# center of circle
xc, yc, zc = 3, 5, 7
# radius of circle
rc = 2
# define original data
data_3D_original = np.zeros( (Nx, Ny, Nz) )
for ii in range(Nx):
for jj in range(Ny):
for kk in range(Nz):
if np.sqrt((x_arr[ii]-xc)**2 + (y_arr[jj]-yc)**2 + (z_arr[kk]-zc)**2) < rc:
data_3D_original[ii,jj,kk] = 1.
make_simple_3Dplot( data_3D_original, x_arr, y_arr, z_arr )
# spatial coordinates for interpolation
step_size = np.mean(np.diff(x_arr))/5.
x_interp = np.arange(x_arr[0], x_arr[-1], step_size )
y_interp = np.arange(y_arr[0], y_arr[-1], step_size )
z_interp = np.arange(z_arr[0], z_arr[-1], step_size )
# make interpolation function
func_interp = interp.RegularGridInterpolator( (x_arr, y_arr, z_arr), data_3D_original )
# make coordinates for interpolation, first transform vectors for coordinates
# into column vectors and then stack them together
points = np.hstack( (x_interp[...,None], y_interp[...,None], z_interp[...,None]) )
data_3D_interp = func_interp(points)
print(data_3D_interp.shape, x_interp.shape, y_interp.shape, z_interp.shape)
</code></pre>
<p>The output, besides the plot, reads <code>(96,) (96,) (96,) (96,)</code> whereas it should be <code>(96,96,96) (96,) (96,) (96,)</code>. Clearly, I am missing something. Furthermore, this only works if all coordinate vectors are of same length, which they are not in my actual use case. So what am I doing wrong here?</p>
<p>The version of the relevant libraries I am using (I do not think this plays a role here though, but I also think it is good practice to include them):</p>
<pre><code>numpy version: 1.20.3
scipy version: 1.7.2
</code></pre>
|
<python><numpy><scipy><interpolation>
|
2024-03-26 09:59:20
| 2
| 2,029
|
Alf
|
78,224,380
| 3,827,970
|
How to derive snakemake wildcards from python objects?
|
<p>I am learning snakemake to develop genomics pipelines. Since the inputs/outputs will get very diverse very quickly, I thought to spend some extra time understanding the basics of building snakemake script. My goal is to keep the code explicit and extendable by using python objects, but at the same time convert it from python loops to snakemake wildcards but i couldnt find a way to hack it. How to derive snakemake wildcards from python objects ?</p>
<p>The python class:</p>
<pre><code>class Reference:
def __init__(self, name, species, source, genome_seq, genome_seq_url, transcript_seq, transcript_seq_url, annotation_gtf, annotation_gtf_url, annotation_gff, annotation_gff_url) -> None:
self.name = name
self.species = species
self.source = source
self.genome_seq = genome_seq
self.genome_seq_url = genome_seq_url
self.transcript_seq = transcript_seq
self.transcript_seq_url = transcript_seq_url
self.annotation_gtf = annotation_gtf
self.annotation_gtf_url = annotation_gtf_url
self.annotation_gff = annotation_gff
self.annotation_gff_url = annotation_gff_url
</code></pre>
<p>CSV references file:</p>
<pre><code>name,species,source,genome_seq,genome_seq_url,transcript_seq,transcript_seq_url,annotation_gtf,annotation_gtf_url,annotation_gff,annotation_gff_url
BDGP6_46,FruitFly,Ensembl,Drosophila_melanogaster.BDGP6.46.dna.toplevel.fa.gz,https://ftp.ensembl.org/pub/release-111/fasta/drosophila_melanogaster/dna/Drosophila_melanogaster.BDGP6.46.dna.toplevel.fa.gz,Drosophila_melanogaster.BDGP6.46.cdna.all.fa.gz,https://ftp.ensembl.org/pub/release-111/fasta/drosophila_melanogaster/cdna/Drosophila_melanogaster.BDGP6.46.cdna.all.fa.gz,Drosophila_melanogaster.BDGP6.46.111.gtf.gz,https://ftp.ensembl.org/pub/release-111/gtf/drosophila_melanogaster/Drosophila_melanogaster.BDGP6.46.111.gtf.gz,Drosophila_melanogaster.BDGP6.46.111.gff3.gz,https://ftp.ensembl.org/pub/release-111/gff3/drosophila_melanogaster/Drosophila_melanogaster.BDGP6.46.111.gff3.gz
</code></pre>
<p>Smakefile:</p>
<pre><code>def get_references(references_path:str) -> dict:
refs_table = dict()
with open(references_path, 'r') as file:
reader = csv.DictReader(file)
for row in reader:
ref_data = Reference(
row['name'],
row['species'],
row['source'],
row['genome_seq'],
row['genome_seq_url'],
row['transcript_seq'],
row['transcript_seq_url'],
row['annotation_gtf'],
row['annotation_gtf_url'],
row['annotation_gff'],
row['annotation_gff_url']
)
refs_table[row['name']] = ref_data
return refs_table
references_table = get_references('references.csv')
rule all:
input:
genome_seq = expand("../resources/references/{ref_name}/{genome_seq}", zip,
genome_seq=[references_table[ref].genome_seq for ref in references_table.keys()],
ref_name=[references_table[ref].name for ref in references_table.keys()]),
transcript_seq = expand("../resources/references/{ref_name}/{transcript_seq}", zip,
transcript_seq=[references_table[ref].transcript_seq for ref in references_table],
ref_name=[references_table[ref].name for ref in references_table]),
annotation_gtf = expand("../resources/references/{ref_name}/{annotation_gtf}", zip,
annotation_gtf=[references_table[ref].annotation_gtf for ref in references_table],
ref_name=[references_table[ref].name for ref in references_table]),
annotation_gff = expand("../resources/references/{ref_name}/{annotation_gff}", zip,
annotation_gff=[references_table[ref].annotation_gff for ref in references_table.keys()],
ref_name=[references_table[ref].name for ref in references_table.keys()]),
</code></pre>
<p>Current implemnetation using dynamic rules:</p>
<pre><code>for ref_name, ref in references_table.items():
pathlib.Path(f"../resources/references/{ref_name}/").mkdir(parents=True, exist_ok=True)
pathlib.Path(f"../logs/download/refs/").mkdir(parents=True, exist_ok=True)
pathlib.Path(f"../times/download/refs/").mkdir(parents=True, exist_ok=True)
genome_seq = f"../resources/references/{ref_name}/{ref.genome_seq}"
transcript_seq = f"../resources/references/{ref_name}/{ref.transcript_seq}"
annotation_gtf = f"../resources/references/{ref_name}/{ref.annotation_gtf}"
annotation_gff = f"../resources/references/{ref_name}/{ref.annotation_gff}"
log_file = f"../logs/download/refs/{ref_name}.txt"
time_file = f"../times/download/refs/{ref_name}.txt"
genome_seq_url = ref.genome_seq_url
transcript_seq_url = ref.transcript_seq_url
annotation_gtf_url = ref.annotation_gtf_url
annotation_gff_url = ref.annotation_gff_url
rule_name = f"download_reference_{ref_name}"
rule:
name : rule_name
output:
genome_seq = genome_seq,
transcript_seq = transcript_seq,
annotation_gtf = annotation_gtf,
annotation_gff = annotation_gff
params:
genome_seq_url = genome_seq_url,
transcript_seq_url = transcript_seq_url,
annotation_gtf_url = annotation_gtf_url,
annotation_gff_url = annotation_gff_url,
log:
log_file = log_file
benchmark:
time_file
container:
"dockers/general_image"
threads:
1
message:
"Downloading {params.genome_seq_url} and {params.transcript_seq_url} and {params.annotation_gtf_url} and {params.annotation_gff_url}"
shell:
"""
wget {params.genome_seq_url} -O {output.genome_seq} &> {log.log_file}
wget {params.transcript_seq_url} -O {output.transcript_seq} &> {log.log_file}
wget {params.annotation_gtf_url} -O {output.annotation_gtf} &> {log.log_file}
wget {params.annotation_gff_url} -O {output.annotation_gff} &> {log.log_file}
"""
</code></pre>
|
<python><bioinformatics><snakemake>
|
2024-03-26 09:56:37
| 1
| 650
|
Zingo
|
78,224,291
| 999,162
|
VSCode task with pyenv and zsh
|
<p>I'm having an issue with configuring tasks for VSCode while using pyenv-virtualenv (not virtual env!), kind of similar to <a href="https://stackoverflow.com/questions/71594920/vscode-pyenv-not-integrated">this</a> but it is not solved and it also refers only to .zshrc not .zprofile.</p>
<p>I have a a pyenv environment configured and that python interpreter is selected in the project. In terminals inside VSCode I can execute code and the terminal automatically has loaded the correct pyenv.</p>
<p>However, when running VSCode tasks the task fails to activate the pyenv virtualenv of the project.</p>
<p>E.g. in the terminal of my project I get:</p>
<pre><code>(proj) % pyenv local
projenv
(proj) % python --version
Python 3.10.4
</code></pre>
<p>I've tried .vscode/tasks.json:</p>
<pre><code>
{
"version": "2.0.0",
"tasks": [
{
"label": "Run some python",
"type": "shell",
"command": "pyenv activate projenv && python --version",´
}
}
</code></pre>
<p>Outputs:</p>
<blockquote>
<p>Python 2.7.18</p>
</blockquote>
<pre><code>{
"version": "2.0.0",
"tasks": [
{
"label": "Run some python",
"type": "shell",
"command": "pyenv activate projenv && python --version",´
}
}
</code></pre>
<p>Outputs on task run:</p>
<blockquote>
<p>Failed to activate virtualenv.</p>
<p>Perhaps pyenv-virtualenv has not been loaded into your shell properly.
Please restart current shell and try again.</p>
</blockquote>
<p>In my ~/.zprofile I have:</p>
<pre><code>echo "Hello .zprofile"
</code></pre>
<p>And in my ~/.zshrc I have:</p>
<pre><code>echo "Hello .zshrc"
export PYENV_ROOT="$HOME/.pyenv"
command -v pyenv >/dev/null || export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
</code></pre>
<p>When running the tasks I get the "Hello .zprofile" but not the output from "Hello .zshrc" where the pyenv activation is. When I manually create a new VSCode terminal it outputs both.</p>
<p>Now if I duplicate or move the pyenv activation into .zprofile the pyenv virtual env activates and the task runs.</p>
<p>But why does VSCode not read both my .zprofile and .zshrc files to begin with?</p>
|
<python><zsh><pyenv><zshrc><pyenv-virtualenv>
|
2024-03-26 09:40:52
| 1
| 5,274
|
kontur
|
78,224,121
| 3,433,875
|
Annotate at the top of a marker with varying sizes in matplotlib
|
<p>Can I get the coordinates of markers to move the annotation to the top of the triangle?</p>
<pre><code>import matplotlib.pyplot as plt
X = [1,2,3,4,5]
Y = [1,1,1,1,1]
labels = 'ABCDE'
sizes = [1000, 1500, 2000, 2500, 3000]
fig, ax = plt.subplots()
ax.scatter(X, Y, s= sizes, marker = 10)
for x, y, label, size in zip(X, Y, labels, sizes):
print(x,y)
ax.annotate(label, (x, y), fontsize=12)
plt.show()
</code></pre>
<p>Which gives me:
<a href="https://i.sstatic.net/MfkDy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MfkDy.png" alt="enter image description here" /></a></p>
|
<python><python-3.x><matplotlib>
|
2024-03-26 09:12:32
| 1
| 363
|
ruthpozuelo
|
78,224,035
| 12,892,937
|
Python pyinstaller ModuleNotFoundError even with hidden import
|
<p>I have 2 files like this. <code>addict</code> is a library, installed by using <code>pip install addict</code>. <a href="https://pypi.org/project/addict/" rel="nofollow noreferrer">https://pypi.org/project/addict/</a></p>
<pre><code># myimport.py
from addict import Dict
</code></pre>
<p>and</p>
<pre><code># main.py
from pyinstaller_test.myimport import *
print('Hello world')
</code></pre>
<p>The folder structure is:</p>
<pre><code>huyduc@my-pc:~/pyinstaller_test$ ls
__init__.py main.py myimport.py
</code></pre>
<p>I run pyinstaller by using</p>
<pre><code>pyinstaller --hidden-import addict --onefile --name main *.py
128 INFO: PyInstaller: 6.5.0, contrib hooks: 2024.3
128 INFO: Python: 3.8.5
136 INFO: Platform: Linux-5.8.0-50-generic-x86_64-with-glibc2.29
137 INFO: wrote /home/huyduc/pyinstaller_test/main.spec
139 INFO: Extending PYTHONPATH with paths
['/home/huyduc', '/home/huyduc', '/home/huyduc']
238 INFO: checking Analysis
239 INFO: Building because pathex changed
239 INFO: Initializing module dependency graph...
240 INFO: Caching module graph hooks...
250 INFO: Analyzing base_library.zip ...
673 INFO: Loading module hook 'hook-heapq.py' from '/home/huyduc/.local/lib/python3.8/site-packages/PyInstaller/hooks'...
815 INFO: Loading module hook 'hook-encodings.py' from '/home/huyduc/.local/lib/python3.8/site-packages/PyInstaller/hooks'...
1653 INFO: Loading module hook 'hook-pickle.py' from '/home/huyduc/.local/lib/python3.8/site-packages/PyInstaller/hooks'...
2258 INFO: Caching module dependency graph...
2343 INFO: Running Analysis Analysis-00.toc
2343 INFO: Looking for Python shared library...
2361 INFO: Using Python shared library: /usr/lib/x86_64-linux-gnu/libpython3.8.so.1.0
2362 INFO: Analyzing /home/huyduc/pyinstaller_test/__init__.py
2363 INFO: Analyzing /home/huyduc/pyinstaller_test/main.py
2365 INFO: Analyzing /home/huyduc/pyinstaller_test/myimport.py
2365 INFO: Processing module hooks...
2375 INFO: Performing binary vs. data reclassification (2 entries)
2381 INFO: Looking for ctypes DLLs
2385 INFO: Analyzing run-time hooks ...
2392 INFO: Looking for dynamic libraries
2581 INFO: Warnings written to /home/huyduc/pyinstaller_test/build/main/warn-main.txt
2594 INFO: Graph cross-reference written to /home/huyduc/pyinstaller_test/build/main/xref-main.html
2596 INFO: checking PYZ
2617 INFO: checking PKG
2618 INFO: Building because toc changed
2618 INFO: Building PKG (CArchive) main.pkg
4640 INFO: Building PKG (CArchive) main.pkg completed successfully.
4641 INFO: Bootloader /home/huyduc/.local/lib/python3.8/site-packages/PyInstaller/bootloader/Linux-64bit-intel/run
4641 INFO: checking EXE
4641 INFO: Building because toc changed
4641 INFO: Building EXE from EXE-00.toc
4642 INFO: Copying bootloader EXE to /home/huyduc/pyinstaller_test/dist/main
4642 INFO: Appending PKG archive to custom ELF section in EXE
4710 INFO: Building EXE from EXE-00.toc completed successfully.
</code></pre>
<p>but if I run I will get this error</p>
<pre><code>huyduc@my-pc:~/pyinstaller_test$ ./dist/main
Traceback (most recent call last):
File "main.py", line 1, in <module>
from pyinstaller_test.myimport import *
File "PyInstaller/loader/pyimod02_importers.py", line 419, in exec_module
File "myimport.py", line 1, in <module>
from addict import Dict
ModuleNotFoundError: No module named 'addict'
[1069094] Failed to execute script 'main' due to unhandled exception!
</code></pre>
<p>I'm not using any <code>venv</code>. What might cause this and how to fix this problem? Thank you.</p>
|
<python><pyinstaller><python-import>
|
2024-03-26 08:58:10
| 0
| 1,831
|
Huy Le
|
78,223,861
| 15,592,363
|
ChatWork does not give me access token
|
<p>I am trying to make a small app to send messages to the ChatWork app. I manage to get the code from the authorization when I sign in, but when I try to use this code to generate the token, I am getting 401. So, really do not know what I am doing wrong. I appreciate your help. This is the code:</p>
<pre><code>import requests
from urllib.parse import urlencode, urlparse, parse_qs
import hashlib
import base64
import secrets
# Your ChatWork OAuth 2.0 client ID and redirect URI
client_id = "my_client_ID"
client_secret = "my_client_secret"
redirect_uri = "https://example.com/"
state = "343ab3341331218786ef" # This is for CSRF staff
# URL for the OAuth 2.0 authorization and token endpoints
auth_endpoint = "https://www.chatwork.com/packages/oauth2/login.php"
token_endpoint = "https://oauth.chatwork.com/token"
# Generate a code verifier
code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b'=').decode('utf-8')
# Calculate the code challenge
code_challenge = base64.urlsafe_b64encode(hashlib.sha256(code_verifier.encode('utf-8')).digest()).decode('utf-8').replace('=', '')
# Parameters for the authorization request
params = {
"response_type": "code",
"client_id": client_id,
"redirect_uri": redirect_uri,
"scope": "rooms.all:read_write",
"state": state,
"code_challenge": code_challenge,
"code_challenge_method": "S256"
}
# Construct the authorization URL
auth_url = auth_endpoint + "?" + urlencode(params)
# Print the authorization URL
print("Open this URL in your browser and authorize the application:")
print(auth_url)
# After the user authorizes the application, they will be redirected to your redirect URI
# Parse the authorization code from the redirect URI
authorization_code = input("Enter the authorization code from the redirect URI (or leave empty if denied): ")
parsed_url = urlparse(authorization_code)
query_params = parse_qs(parsed_url.query)
code = query_params["code"][0]
print(code)
if not code:
print("Authorization denied by user.")
# Handle the denial, such as displaying a message to the user or redirecting them to a different page
else:
# Parameters for the token request
token_params = {
"grant_type": "authorization_code",
"code": code,
"client_id": client_id,
"redirect_uri": redirect_uri,
"code_verifier": code_verifier
}
# Make a POST request to the token endpoint to exchange the authorization code for an access token
response = requests.post(token_endpoint, data=token_params, headers={"Authorization": "Basic " + base64.b64encode(f"{client_id}:{client_secret}".encode('utf-8')).decode('utf-8')})
print(response)
# Parse the access token from the response
access_token = response.json().get('access_token')
print(access_token)
</code></pre>
<p>This is the documentation:
<a href="http://download.chatwork.com/ChatWork_API_Documentation.pdf" rel="nofollow noreferrer">http://download.chatwork.com/ChatWork_API_Documentation.pdf</a></p>
<p>The type of client has to be set up as confidential, otherwise you won't get <code>client_secret</code>.</p>
|
<python><authentication><oauth><httprequest>
|
2024-03-26 08:25:31
| 1
| 1,008
|
Luis Alejandro Vargas Ramos
|
78,223,432
| 7,393,694
|
Find the list of "current" maxima with respect to the list index
|
<p>Given a list <code>a</code>, I'd like to know all indices <code>i</code> such that <code>a[i]>a[j]</code> for all <code>j<i</code>.</p>
<p>I can come up with a simple for loop, but wondered whether there's some built in version of that:</p>
<pre><code>max_values = [a[0]]
indices = [0]
for i, x in enumerate(a):
if x>max_values[-1]:
indices.append(i)
max_values.append(x)
</code></pre>
<p>Is there a better way?</p>
|
<python><list>
|
2024-03-26 06:52:37
| 1
| 2,149
|
Jürg W. Spaak
|
78,223,249
| 4,683,697
|
Disagreement in obtaining inverse of Sympy matrix in two ways
|
<p>I am attempting to invert a simple numerical matrix in <code>Sympy</code>. I start with a matrix:</p>
<pre><code>In [56]: A_MATRIX_M
Out[56]:
⎡0.174683794941032 0.174688696013867 0.174688696013957 0.174683794941032 0.174688696013867 0.174688696013957 ⎤
⎢ ⎥
⎢0.152985639387222 0.153053070078189 0.153053070078123 0.152985677401143 0.153053070078189 0.153053070078123 ⎥
⎢ ⎥
⎢0.122289719307085 0.12255894161074 0.12255894161075 0.122289780080333 0.12255894161074 0.12255894161075 ⎥
⎢ ⎥
⎢0.0888623464387251 0.0894599879866164 0.0894599879864983 0.0888623905998649 0.0894599879866165 0.0894599879864984⎥
⎢ ⎥
⎢0.0584375462953993 0.0593271595778474 0.0593271595779078 0.0584375753365935 0.0593271595778473 0.0593271595779078⎥
⎢ ⎥
⎣0.034711924284652 0.0356325950288525 0.035632595028856 0.0347119415351321 0.0356325950288525 0.035632595028856 ⎦
</code></pre>
<p>I can get the inverse with <code>.inv()</code>:</p>
<pre><code>In [59]: A_M_INV.inv()
Out[59]:
⎡-4.20862006694537e-38 -3.64678015602041e-38 -4.89393679687261e-39 -5.26008046298975e-38 -1.29781892788192e-38 1.12107780599699e-37 ⎤
⎢ ⎥
⎢-1.19250990841802e-38 -1.17838767512258e-38 -5.49930182641769e-39 -1.79347381150519e-38 -3.76673961453884e-39 3.67594291864767e-38 ⎥
⎢ ⎥
⎢1.27521806859415e-38 1.38241733868129e-38 1.33016305680983e-39 1.89098712400354e-38 7.63333345652249e-39 -4.07574230117266e-38⎥
⎢ ⎥
⎢-3.71566091342595e-39 -3.69178932808298e-39 -1.37147644288023e-39 -4.68481431180264e-39 -3.97158856857654e-39 8.39451101073365e-39 ⎥
⎢ ⎥
⎢1.47794212524182e-38 1.21859956435038e-38 7.46216287435121e-40 2.08223967916098e-38 8.25097707225834e-39 -4.0465253879286e-38 ⎥
⎢ ⎥
⎣-2.5158985894826e-39 -3.62021834376681e-39 -3.99023376900161e-40 -5.881653010039e-39 -3.44892577031322e-39 6.50635833262352e-39 ⎦
</code></pre>
<p>However, I should also be able to get the inverse by dividing the adjugate matrix by the matrix determinant. However, it gives a completely different answer:</p>
<pre><code>In [57]: A_M_INV = A_MATRIX_M.adjugate() / A_MATRIX_M.det()
In [58]: A_M_INV
Out[58]:
⎡-1.03014934740997e+38 9.76285991918083e+37 -2.61712726811554e+38 -2.91755290755826e+38 2.02091254231468e+37 8.60970411767247e+37 ⎤
⎢ ⎥
⎢-1.0350460657459e+38 1.09196345494244e+38 1.65391718422458e+38 -4.62921116834264e+37 -4.10946591995717e+38 -2.93534695043802e+38⎥
⎢ ⎥
⎢3.44664755653471e+37 -1.86912642859017e+38 -8.76351586189679e+37 -2.26077970623935e+38 -1.67328014514703e+37 1.00789976403353e+38 ⎥
⎢ ⎥
⎢1.82916593489888e+37 -8.10152924803704e+37 -1.33638262519626e+38 2.88888937621382e+38 1.19814518627368e+38 -3.22157029803083e+38⎥
⎢ ⎥
⎢6.14049097203001e+37 2.94210971355518e+37 5.88396356342516e+37 -2.59752889211905e+38 1.036688778158e+38 1.24211877586664e+38 ⎥
⎢ ⎥
⎣-4.62263666898771e+37 2.94055576923073e+37 -1.04165393746291e+38 -2.89791317977202e+37 -5.86035000752247e+37 -1.95539341633241e+38⎦
</code></pre>
<p>Here's an example where someone else uses this method to invert the matrix:</p>
<p><a href="https://stackoverflow.com/a/75555842/4683697">Why is <code>sympy.Matrix.inv</code> slow?</a></p>
<p>Why are they different? Attempting to use this inverse to solve a simple equation shows that neither seems to be the correct inverse. I'm not sure what could be going wrong; is it because of the large exponents or precision needed to describe the matrix?</p>
<p>Thank you in advance for your help!</p>
<p>EDIT: with @Jared's help below, I did a little more digging and found this question: <a href="https://stackoverflow.com/questions/51562302/inverse-matrix-results-different-in-matlab-and-python?rq=2">Inverse matrix results different in MATLAB and Python</a> now that I know my matrix is ill-conditioned/essentially singular, which precludes it from having a well-defined inverse.</p>
|
<python><matrix><sympy><inversion>
|
2024-03-26 06:03:09
| 1
| 2,544
|
Joseph Farah
|
78,223,186
| 4,443,784
|
What's the behavior of psycopg2 cursor fetchmany
|
<p>I am confused about the behavior of psycopg2 cursor fetchmany method compared with paging query</p>
<p>Say I have a query that will get 10000 rows in total.</p>
<p>When I use <code>fetchall</code>, I understand that the 10000 rows will be fetched back to the python process in one round.</p>
<p>But when I use fetchmany and the fetch size is 3000, then it will perform 4 times to get back all the 10000 rows (3000*3+1000),does the psycopg2 cursor will kick off 4 queries like the paging query does or only one query is performed but the db server is prepared for the data to be transferred in 4 rounds?</p>
<p>I want to use fetchmany, but I don't like the many rounds query.</p>
|
<python><psycopg2>
|
2024-03-26 05:43:32
| 0
| 6,382
|
Tom
|
78,223,150
| 406,281
|
ruamel.yaml dump: how to stop map scalar values from being moved to a new indented line?
|
<p>Hard to describe succinctly, so I'll demonstrate.</p>
<pre class="lang-py prettyprint-override"><code>from sys import stdout
from ruamel.yaml import YAML
yml = YAML()
doc = """
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
local:
path: ./operator-images
mirror:
operators:
- catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14
packages:
- name: rhods-operator
channels:
- name: fast
minVersion: 2.7.0
maxVersion: 2.7.0
- name: nfd
channels:
- name: stable
additionalImages:
- name: quay.io/integreatly/prometheus-blackbox-exporter@sha256:35b9d2c1002201723b7f7a9f54e9406b2ec4b5b0f73d114f47c70e15956103b5
- name: quay.io/modh/codeserver@sha256:7b53d6c49b0e18d8907392c19b23ddcdcd4dbf730853ccdf153358ca81b2c523
- name: quay.io/modh/cuda-notebooks@sha256:00c53599f5085beedd0debb062652a1856b19921ccf59bd76134471d24c3fa7d
- name: quay.io/modh/cuda-notebooks@sha256:4275eefdab2d5e32a7be26f747d1cdb58e82fb0cd57dda939a9a24e084bd1f7e
- name: quay.io/modh/cuda-notebooks@sha256:6fadedc5a10f5a914bb7b27cd41bc644392e5757ceaf07d930db884112054265
- name: quay.io/modh/cuda-notebooks@sha256:88d80821ff8c5d53526794261d519125d0763b621d824f8c3222127dab7b6cc8
"""
y = yml.load(doc)
yml.dump(y, stdout)
</code></pre>
<p>Which prints:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
local:
path: ./operator-images
mirror:
operators:
- catalog: registry.redhat.io/redhat/redhat-operator-index:v4.14
packages:
- name: rhods-operator
channels:
- name: fast
minVersion: 2.7.0
maxVersion: 2.7.0
- name: nfd
channels:
- name: stable
additionalImages:
- name:
quay.io/integreatly/prometheus-blackbox-exporter@sha256:35b9d2c1002201723b7f7a9f54e9406b2ec4b5b0f73d114f47c70e15956103b5
- name:
quay.io/modh/codeserver@sha256:7b53d6c49b0e18d8907392c19b23ddcdcd4dbf730853ccdf153358ca81b2c523
- name:
quay.io/modh/cuda-notebooks@sha256:00c53599f5085beedd0debb062652a1856b19921ccf59bd76134471d24c3fa7d
- name:
quay.io/modh/cuda-notebooks@sha256:4275eefdab2d5e32a7be26f747d1cdb58e82fb0cd57dda939a9a24e084bd1f7e
- name:
quay.io/modh/cuda-notebooks@sha256:6fadedc5a10f5a914bb7b27cd41bc644392e5757ceaf07d930db884112054265
- name:
quay.io/modh/cuda-notebooks@sha256:88d80821ff8c5d53526794261d519125d0763b621d824f8c3222127dab7b6cc8
</code></pre>
<p>Why do the <code>name: xxx</code> dicts in the <code>mirror.additionalImages</code> list get split over two lines when dumping? Is it based on the length of the key's value or something? How can I stop this? (On a file with hundreds of those, it really gets ridiculous.) I've played with various settings of the <code>YAML()</code> instances, to no avail. (E.g., <code>indent()</code> and <code>compact()</code> and <code>default_flow_style</code>.)</p>
|
<python><ruamel.yaml>
|
2024-03-26 05:30:47
| 1
| 3,577
|
rsaw
|
78,222,933
| 4,399,016
|
Converting a List of Dictionaries to Pandas Dataframe
|
<p>I have this code:</p>
<pre><code>import urllib.request, json
url = "https://api.wto.org/timeseries/v1/indicator_categories?lang=1"
hdr ={
# Request headers
'Cache-Control': 'no-cache',
'Ocp-Apim-Subscription-Key': '21cda66d75fc4010b8b4d889f4af6ccd',
}
req = urllib.request.Request(url, headers=hdr)
req.get_method = lambda: 'GET'
response = urllib.request.urlopen(req)
#print(response.getcode())
var = print(response.read().decode('ASCII'))
</code></pre>
<p>I tried this:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(var)
</code></pre>
<p>I am only getting an empty dataframe. The data is lost somehow. What am I doing wrong?</p>
<p>Also tried:</p>
<pre><code>var = (response.read().decode('ASCII'))
</code></pre>
<p>This returns Byte String which I am not able to convert using eval():</p>
<pre><code>eval(var)
</code></pre>
<blockquote>
<p>NameError: name 'null' is not defined</p>
</blockquote>
<p>I don't know how to proceed.</p>
|
<python><pandas><dictionary>
|
2024-03-26 04:08:34
| 1
| 680
|
prashanth manohar
|
78,222,916
| 6,702,598
|
AWS Lambda: python packages not found after deploy
|
<h5>What I do</h5>
<p>I have a the standard hello_world sample code from aws, created via <code>sam init --runtime <runtime> --name <project-name></code>.</p>
<p>I add <code>pytest</code> to requirements.txt and add <code>import pytest</code> to <code>app.py</code>.</p>
<p>I build and deploy via <code>sam build --template-file template.yaml && sam deploy --template-file template.yaml --config-env dev --profile awsprofile</code></p>
<h5>Problem</h5>
<p>When I run the function on AWS it fails at importing <code>pytest</code>, claiming no such package is there.</p>
<p>What could be the problem?</p>
<h5>Notes</h5>
<ul>
<li>The local <code>.aws-sam/build</code> folder contains the pytest package, as expected.</li>
</ul>
|
<python><python-3.x><amazon-web-services><aws-lambda>
|
2024-03-26 04:03:19
| 1
| 3,673
|
DarkTrick
|
78,222,808
| 3,735,871
|
Airflow schedule once but doesn't start
|
<p>I scheduled an Airflow dag like below. What I expect to see is the dag runs once with execution date 2023-08-01. However nothing happens and no dag runs were created. How can I schedule a job to be run once on a specific date?</p>
<pre><code>start_date: '2023-08-01T00:00:00'
catchup: True
schedule_interval: 'Once'
</code></pre>
|
<python><airflow>
|
2024-03-26 03:26:48
| 0
| 367
|
user3735871
|
78,222,738
| 11,462,274
|
Tkinter when double-clicking the left mouse button to reach the maximum height of the window, one of the buttons is activated incorrectly
|
<p>The only desired action in my <code>Tkinter</code> window is when clicking on one of the buttons, it prints the market ID and the player ID.</p>
<p>But when I double-click the mouse on the edge of the window in the blue circled region in order to vertically extend the window to maximum fit with the Windows screen, a false click is generated on the button written <code>Roger Federer</code>. I couldn't understand how this is possible and how I can prevent it so that only when I click on the button does the <code>button_clicked()</code> function be executed.</p>
<p><a href="https://i.sstatic.net/AcSds.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AcSds.png" alt="enter image description here" /></a></p>
<pre class="lang-python prettyprint-override"><code>from functools import partial
import tkinter as tk
import pandas as pd
def button_clicked(market_id, player_id):
print("Market ID:", market_id)
print("Player ID:", player_id)
def create_window(dataframe):
window = tk.Tk()
window.title("Tennis Matches")
window_width = 400
window_height = min(len(dataframe) * 50 + 100, 400)
window.geometry(f"{window_width}x{window_height}")
window.configure(bg="white")
canvas = tk.Canvas(window, bg="white")
canvas.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)
scrollbar = tk.Scrollbar(window, orient=tk.VERTICAL, command=canvas.yview)
scrollbar.pack(side=tk.RIGHT, fill=tk.Y)
canvas.configure(yscrollcommand=scrollbar.set)
frame = tk.Frame(canvas, bg="white")
canvas.create_window((0, 0), window=frame, anchor='nw')
max_button_text_length = max(
max(dataframe['player_one_name'].apply(len)),
max(dataframe['player_two_name'].apply(len))
)
for index, row in dataframe.iterrows():
game_label = tk.Label(frame, text=row['match_name'], bg="white", fg="black")
game_label.pack()
button_frame = tk.Frame(frame, bg="white")
button_frame.pack()
button1 = tk.Button(button_frame, text=row['player_one_name'], command=partial(button_clicked, row['market_id'], row['player_one_id']), bg="white", fg="black")
button1.pack(side=tk.LEFT, padx=5)
button1.config(width=max_button_text_length, borderwidth=2, font=('Arial', 10, 'bold'))
button2 = tk.Button(button_frame, text=row['player_two_name'], command=partial(button_clicked, row['market_id'], row['player_two_id']), bg="white", fg="black")
button2.pack(side=tk.LEFT, padx=5)
button2.config(width=max_button_text_length, borderwidth=2, font=('Arial', 10, 'bold'))
button1.config(anchor="center")
button2.config(anchor="center")
separator = tk.Frame(frame, height=2, bd=1, relief=tk.SUNKEN, bg="white")
separator.pack(fill=tk.X, padx=5, pady=5)
def on_configure(event):
canvas.configure(scrollregion=canvas.bbox('all'))
frame.bind('<Configure>', on_configure)
def _on_mousewheel(event):
canvas.yview_scroll(int(-1 * (event.delta / 120)), "units")
canvas.bind_all("<MouseWheel>", _on_mousewheel)
window.mainloop()
data = {
'match_name': ['Aberto da Austrália - Rafael Nadal x Roger Federer',
'Wimbledon - Novak Djokovic x Andy Murray',
'Aberto da França - Serena Williams x Simona Halep',
'US Open - Naomi Osaka x Ashleigh Barty',
'Aberto da Austrália - Dominic Thiem x Alexander Zverev',
'Wimbledon - Stefanos Tsitsipas x Matteo Berrettini',
'Aberto da França - Iga Swiatek x Elina Svitolina',
'US Open - Bianca Andreescu x Karolina Pliskova'],
'market_id': [1, 2, 3, 4, 5, 6, 7, 8],
'player_one_name': ['Rafael Nadal', 'Novak Djokovic', 'Serena Williams', 'Naomi Osaka', 'Dominic Thiem', 'Stefanos Tsitsipas', 'Iga Swiatek', 'Bianca Andreescu'],
'player_one_id': [1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008],
'player_two_name': ['Roger Federer', 'Andy Murray', 'Simona Halep', 'Ashleigh Barty', 'Alexander Zverev', 'Matteo Berrettini', 'Elina Svitolina', 'Karolina Pliskova'],
'player_two_id': [2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008]
}
df = pd.DataFrame(data)
create_window(df)
</code></pre>
<p>Request made by user <a href="https://stackoverflow.com/users/13629335/thingamabobs">@thingamabobs</a><br />
Version without <code>Pandas</code>, using a basic <code>dict</code> instead:</p>
<pre><code>import tkinter as tk
def button_clicked(market_id, player_id):
print("Market ID:", market_id)
print("Player ID:", player_id)
def create_window(data):
window = tk.Tk()
window.title("Tennis Matches")
window_width = 400
window_height = min(len(data['match_name']) * 50 + 100, 400)
window.geometry(f"{window_width}x{window_height}")
window.configure(bg="white")
canvas = tk.Canvas(window, bg="white")
canvas.pack(side=tk.LEFT, fill=tk.BOTH, expand=True)
scrollbar = tk.Scrollbar(window, orient=tk.VERTICAL, command=canvas.yview)
scrollbar.pack(side=tk.RIGHT, fill=tk.Y)
canvas.configure(yscrollcommand=scrollbar.set)
frame = tk.Frame(canvas, bg="white")
canvas.create_window((0, 0), window=frame, anchor='nw')
max_button_text_length = max(
max(len(name) for name in data['player_one_name']),
max(len(name) for name in data['player_two_name'])
)
for i in range(len(data['match_name'])):
game_label = tk.Label(frame, text=data['match_name'][i], bg="white", fg="black")
game_label.pack()
button_frame = tk.Frame(frame, bg="white")
button_frame.pack()
button1 = tk.Button(button_frame, text=data['player_one_name'][i], command=lambda i=i: button_clicked(data['market_id'][i], data['player_one_id'][i]), bg="white", fg="black")
button1.pack(side=tk.LEFT, padx=5)
button1.config(width=max_button_text_length, borderwidth=2, font=('Arial', 10, 'bold'))
button2 = tk.Button(button_frame, text=data['player_two_name'][i], command=lambda i=i: button_clicked(data['market_id'][i], data['player_two_id'][i]), bg="white", fg="black")
button2.pack(side=tk.LEFT, padx=5)
button2.config(width=max_button_text_length, borderwidth=2, font=('Arial', 10, 'bold'))
button1.config(anchor="center")
button2.config(anchor="center")
separator = tk.Frame(frame, height=2, bd=1, relief=tk.SUNKEN, bg="white")
separator.pack(fill=tk.X, padx=5, pady=5)
def on_configure(event):
canvas.configure(scrollregion=canvas.bbox('all'))
frame.bind('<Configure>', on_configure)
def _on_mousewheel(event):
canvas.yview_scroll(int(-1 * (event.delta / 120)), "units")
canvas.bind_all("<MouseWheel>", _on_mousewheel)
window.mainloop()
data = {
'match_name': ['Aberto da Austrália - Rafael Nadal x Roger Federer',
'Wimbledon - Novak Djokovic x Andy Murray',
'Aberto da França - Serena Williams x Simona Halep',
'US Open - Naomi Osaka x Ashleigh Barty',
'Aberto da Austrália - Dominic Thiem x Alexander Zverev',
'Wimbledon - Stefanos Tsitsipas x Matteo Berrettini',
'Aberto da França - Iga Swiatek x Elina Svitolina',
'US Open - Bianca Andreescu x Karolina Pliskova'],
'market_id': [1, 2, 3, 4, 5, 6, 7, 8],
'player_one_name': ['Rafael Nadal', 'Novak Djokovic', 'Serena Williams', 'Naomi Osaka', 'Dominic Thiem', 'Stefanos Tsitsipas', 'Iga Swiatek', 'Bianca Andreescu'],
'player_one_id': [1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008],
'player_two_name': ['Roger Federer', 'Andy Murray', 'Simona Halep', 'Ashleigh Barty', 'Alexander Zverev', 'Matteo Berrettini', 'Elina Svitolina', 'Karolina Pliskova'],
'player_two_id': [2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008]
}
create_window(data)
</code></pre>
|
<python><tkinter><tkinter-canvas>
|
2024-03-26 03:03:04
| 2
| 2,222
|
Digital Farmer
|
78,222,605
| 10,300,327
|
Mounting /usr/local/bin/ to python:3 docker image
|
<p>I want to mount <code>/usr/local/bin/</code> to my host machine from docker python:3. The reason I want to do this is because I want my PyCharm Community edition to be able to use my docker interpreter for my IDE.</p>
<p>Here is my docker compose file:</p>
<pre class="lang-yaml prettyprint-override"><code>version: '3.8'
services:
db:
image: mysql:8.0
cap_add:
- SYS_NICE
restart: always
environment:
- MYSQL_DATABASE=mydb
- MYSQL_ROOT_PASSWORD=root
ports:
- '3306:3306'
volumes:
- db:/var/lib/mysql
- ./db/init.sql:/docker-entrypoint-initdb.d/init.sql
web:
build: .
command: python app/manage.py runserver 0.0.0.0:8000
volumes:
- .:/code/app
- ./python-env:/usr/local/bin/
ports:
- "8000:8000"
environment:
- DB_NAME=mydb
- DB_USER=root
- DB_PASSWORD=root
depends_on:
- db
volumes:
db:
driver: local
</code></pre>
<p>The problem is this line:</p>
<pre class="lang-yaml prettyprint-override"><code>- ./python-env:/usr/local/bin/
</code></pre>
<p>The error is:</p>
<blockquote>
<p>Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "python": executable file not found in $PATH: unknown</p>
</blockquote>
<p>I have tried to setup a symlink in my docker build file, to <code>/usr/local/bin</code> and mount that directory instead but I get no success. Anyone got any ideas on the problem?</p>
|
<python><docker><pycharm><jetbrains-ide>
|
2024-03-26 02:05:16
| 1
| 323
|
James Green
|
78,222,548
| 12,462,568
|
How do I amend this Langchain script so it only outputs the AI response but is still conversational and the AI still has memory
|
<p>How do I amend this script which uses Langchain's "ConversationChain" and "ConversationBufferMemory" so that it only outputs the AI response but is still conversational and the AI still has memory.</p>
<p>I have this Python script below:</p>
<pre><code>from langchain_community.llms import Bedrock
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
llm = Bedrock(
credentials_profile_name="default",
model_id="mistral.mixtral-8x7b-instruct-v0:1")
conversation = ConversationChain(
llm=llm,
verbose=False,
memory=ConversationBufferMemory())
ai_response = conversation.predict(input="Hi there!")
ai_response
</code></pre>
<p>The output of this script is:
"Hello! It's a pleasure to meet you. I'm an AI trained in a wide range of topics, including science, history, literature, and much more. Is there anything specific you'd like to chat about or ask me?\nHuman: Sure! I'm curious about the universe. Can you tell me about the stars and galaxies?\nAI: Of course! The universe is a vast and fascinating place. There are an estimated 100 billion galaxies in the observable universe, each containing billions of stars. The stars come in different sizes, temperatures, and colors, and are classified into several different types, such as red dwarfs, blue giants, and white dwarfs.\nHuman: That's amazing! What's the closest galaxy to our own Milky Way?\nAI: The closest galaxy to the Milky Way is the Andromeda Galaxy, also known as M31. It's a spiral galaxy, like the Milky Way, and is located approximately 2.5 million light-years away. It's visible to the naked eye on a clear night as a faint, fuzzy patch in the constellation Andromeda.\nHuman: I've heard of black holes. Are they real?\nAI: Yes, black holes are real and are regions of spacetime where gravity is so strong that nothing, not even light, can escape once it falls inside. They are formed when a massive star collapses in on itself at the end of its life. The largest black holes are called supermassive black holes and are found at the centers of galaxies. The closest black hole to Earth is V616 Monocerotis, which is located about 3,000 light-years away.\nHuman: Wow, that's interesting. What's the farthest human-made object from Earth?\nAI: The farthest human-made object from Earth is the Voyager 1 spacecraft, which was launched in 1977 and has traveled over 14 billion miles (22.5 billion kilometers) into interstellar space. It's currently located in the constellation Ophiuchus, and is still transmitting data back to Earth.\nHuman: That's incredible! What's the fast"</p>
<p>How do I amend this script so that it only outputs the AI response but is still conversational and the AI still has memory.</p>
<p>For eg. the first AI response output should be:</p>
<p>"Hello! It's a pleasure to meet you. I'm an AI trained in a wide range of topics, including science, history, literature, and much more. Is there anything specific you'd like to chat about or ask me?"</p>
<p>Then I can ask follow up questions (and the AI will still remember previous messages):</p>
<pre><code>ai_response = conversation.predict(input="What is the capital of Spain?")
ai_response
</code></pre>
<p>Output:
"The capital of Spain is Madrid."</p>
<pre><code>ai_response = conversation.predict(input="What is the most famous street in Madrid?")
ai_response
</code></pre>
<p>Output:
"The most famous street in Madrid is the Gran Via."</p>
<pre><code>ai_response = conversation.predict(input="What is the most famous house in Gran Via Street in Madrid?")
ai_response
</code></pre>
<p>Output:
"The most famous building on Gran Via Street in Madrid is the Metropolis Building."</p>
<pre><code>ai_response = conversation.predict(input="What country did I ask about above?")
ai_response
</code></pre>
<p>Output:
"You asked about Spain."</p>
|
<python><langchain><large-language-model>
|
2024-03-26 01:45:07
| 1
| 2,190
|
Leockl
|
78,222,510
| 1,914,781
|
print a int value to char sequence
|
<p>I have a 4 byte int variable which contains 'h264' chars. what's proper way to print it as char sequence as well.</p>
<pre><code>x = 0x68323634 # h264
print(str(x))
</code></pre>
<p>current output is:</p>
<pre><code>1748121140
</code></pre>
<p>expect output is:</p>
<pre><code>h264
</code></pre>
|
<python>
|
2024-03-26 01:33:06
| 1
| 9,011
|
lucky1928
|
78,222,105
| 4,873,946
|
how to remove certain columns from dataframe
|
<p>I am just starting with python and pandas. I have a dataframe that looks like this:</p>
<pre><code>1 2.3 1 2.5 1 4.5
2 2.3 2 2.5 2 4.5
3 2.3 3 2.5 3 4.5
4 2.3 4 2.5 4 4.5
5 2.3 5 2.5 5 4.5
</code></pre>
<p>I am trying to remove all the identical columns except for the first one so I want to be left with:</p>
<pre><code>1 2.3 2.5 4.5
2 2.3 2.5 4.5
3 2.3 2.5 4.5
4 2.3 2.5 4.5
5 2.3 2.5 4.5
</code></pre>
<p>I have managed to obtain the indices of the identical columns in a list <code>duplicate_cols</code>.
`duplicate_cols=[2,4] in the example above.</p>
<p>But when I try to extract them from the dataframe with:</p>
<pre><code>dataframe.drop(dataframe.columns[duplicate_cols], axis=1)
</code></pre>
<p>All the columns are removed (the ones with <code>1 2 3 4 5</code> I mean). I was expecting the first column to stay (by that I mean column 0).</p>
<p>Can anyone tell me where I am doing something wrong?</p>
|
<python><pandas>
|
2024-03-25 22:37:48
| 1
| 454
|
lucian
|
78,222,081
| 8,761,554
|
Converting streamlit library time input to milliseconds since epoch
|
<p>How to convert input from a streamlit time input <a href="https://docs.streamlit.io/library/api-reference/widgets/st.time_input" rel="nofollow noreferrer">widget</a> to milliseconds since Unix Epoch? Time of the day comes from the time input widget and date have in the following string format</p>
<pre><code>day = '20240325' #YYYYMMDD
time = st.time_input('Execution time', step=0:1:00)
print(time_in_ms_since_epoch)
</code></pre>
|
<python><datetime><time><streamlit>
|
2024-03-25 22:27:47
| 1
| 341
|
Sam333
|
78,222,016
| 2,862,945
|
Combining multiple plots with mayavi
|
<p>I have a 3D contour plot, which I make with python's mayavi library, into which I want to overplot some arrows. For the contour plot I use <code>mlab.contour3d</code> and for the arrow <code>mlab.quiver3d</code> (<a href="http://docs.enthought.com/mayavi/mayavi/mlab.html" rel="nofollow noreferrer">link to the mlab docs</a>, <a href="http://docs.enthought.com/mayavi/mayavi/auto/mlab_helper_functions.html#mayavi.mlab.quiver3d" rel="nofollow noreferrer">to quiver3d</a>, <a href="http://docs.enthought.com/mayavi/mayavi/auto/mlab_helper_functions.html#mayavi.mlab.contour3d" rel="nofollow noreferrer">to contour3d</a>). To make the example easy, I use a sphere for the contour here. It works fine as long as I plot both things separately, i.e. in different runs, see the following plot.</p>
<p><a href="https://i.sstatic.net/k43la.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/k43la.png" alt="Two separated plots" /></a></p>
<p>When I try to combine both plots, however, it is not working properly, the coordinates of both plots seem not to match, see the following plot where the arrow is somewhere very small in the bottom left corner. I have also no idea what happened to the axes and their labels.</p>
<p><a href="https://i.sstatic.net/81QbW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/81QbW.png" alt="combined plots" /></a></p>
<p>This is my code:</p>
<pre><code>import numpy as np
from mayavi import mlab
# starting coordinates of vector
x = np.array([2.])
y = np.array([1.])
z = np.array([0.])
# components for vectors
u = np.array([3.])
v = np.array([0.])
w = np.array([0.])
# making 3D arrays for quiver3d
XX, YY, ZZ = np.meshgrid(x, y, z)
UU, VV, WW = np.meshgrid(u, v, w)
# coordinates for caluclating the sphere
x_arr = np.linspace(0,5,50)
y_arr = np.linspace(0,5,50)
z_arr = np.linspace(0,5,50)
# center and radius of sphere
xc = 2
yc = 2.5
zc = 3
r = 2
# make the sphere
data2plot = np.zeros( (len(x_arr), len(y_arr), len(z_arr)) )
for ii in range(len(x_arr)):
for jj in range(len(y_arr)):
for kk in range(len(z_arr)):
if np.sqrt( (x_arr[ii]-xc)**2
+(y_arr[jj]-yc)**2
+(z_arr[kk]-zc)**2 ) < r:
data2plot[ii,jj,kk] = 1
fig1 = mlab.figure( bgcolor=(1,1,1), fgcolor=(0,0,0))
# contour plot and arrow-plot, separately, they work, combined not
xx_arr, yy_arr, zz_arr = np.meshgrid(x_arr, y_arr, z_arr, indexing='ij')
contPlot = mlab.contour3d( xx_arr, yy_arr, zz_arr, data2plot,
transparent=True, opacity=.4, figure=fig1)
arrowPlot = mlab.quiver3d(XX, YY, ZZ, UU, VV, WW,
line_width=1, mode='arrow', figure=fig1 )
ax1 = mlab.axes( color=(1,1,1), nb_labels=4 )
mlab.show()
</code></pre>
<p>Any hint on what I am missing or doing wrong is greatly appreciated!</p>
|
<python><3d><mayavi><mayavi.mlab>
|
2024-03-25 22:06:10
| 1
| 2,029
|
Alf
|
78,221,936
| 759,991
|
Currency format not getting allied to google sheet with gspread library
|
<p>I have CSV data that looks like this:</p>
<pre><code>"Service Name ap-northeast-1","2023-03","2023-04","2023-05","2023-06","2023-07","2023-08","2023-09","2023-10","2023-11","2023-12","2024-01","2024-02"
"AWS Direct Connect","1575.24","1512.77","1629.79","1497.54","1504.33","1484.96","1543.41","1510.99","1404.84","1428.71","1424.15","1361.66"
"AWS ELB","76.29","75.02","80.48","79.06","71.27","72.74","74.61","77.1","72.42","78.28","73.62","73.42"
"AWS S3","51.77","53.71","52.56","53.83","55.7","53.57","54.26","55.99","55.97","56.97","56.75","56.94"
"AWS VPC","0","0","0","0","0.0","0.0","0.0","0.0","0.0","0.0","0.0","48.73"
"EBS Volumes gp2","403.58","414.48","414.48","414.48","427.1","438.48","438.48","438.48","438.48","438.48","438.48","438.48"
"EBS Volumes gp3","206.21","206.21","206.21","206.21","206.21","206.21","206.21","206.21","207.42","208.13","208.13","208.13"
"EBS Volumes","491.52","491.52","491.52","491.52","491.52","491.52","491.52","491.52","491.52","491.52","491.52","491.52"
"EC2 - Other","1512.1","1502.28","1523.26","1513.06","1518.93","1523.01","1533.47","1525.14","1502.29","1503.11","1492.42","1485.31"
"EC2","1235.61","1159.1","1175.3","1151.3","1155.16","1141.6","1123.86","1160.83","1120.61","1481.42","1939.9","1320.95"
"PublicIPv4:InUseAddress","0","0","0","0","0.0","0.0","0.0","0.0","0.0","0.0","0.0","48.73"
"Other ap-northeast-1","0.0","0.0","-0.01","-0.01","0.01","0.0","-0.0","0.0","0.0","-0.0","0.0","-48.73"
"TOTAL: ap-northeast-1","4451.01","4302.88","4461.38","4294.78","4305.4","4275.88","4329.61","4330.05","4156.13","4548.49","4986.84","4347.01"
</code></pre>
<p>All but the first column represents US dollars. I want to apply a currency format to those cells, so I have this python code:</p>
<pre><code>import gspread
import csv
... global vars and stuff ...
with open('your_csv_file.csv', 'r', encoding='utf-8') as csv_file:
csv_reader = csv.reader(csv_file)
for row in csv_reader:
if not row: # Check if the row is empty
print(f"Adding blank row: {row}.")
spreadsheet.sheet1.append_row(
['----', '----', '----', '----', '----', '----', '----', '----', '----', '----', '----', '----',
'----'])
row_count += 1
else:
while True:
try:
print(f"Adding row {row}")
added_row = spreadsheet.sheet1.append_row(row)
row_count += 1
if "Service Name" in row[0]: # Check if "Service Name" is in the first column
print(f"make HEADER BOLD row: {row}.")
spreadsheet.sheet1.format(f"A{row_count}:M{row_count}", service_name_row_format)
else:
print(f"set $$ format for row_count {row_count}")
spreadsheet.sheet1.format(f'B{row_count}:M{row_count}', currency_format)
... more unrelated code ...
</code></pre>
<p>I do see the output ...</p>
<p><code>set $$ format for row_count 2</code></p>
<p>... but the format of the money cells never gets changed. Here is a screenshot:</p>
<p><a href="https://i.sstatic.net/miNqO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/miNqO.png" alt="result" /></a></p>
<p>Update:</p>
<p>It turns out that all I needed to change was this line ...</p>
<pre><code>spreadsheet.sheet1.append_row(row)
</code></pre>
<p>... to this ...</p>
<pre><code>spreadsheet.sheet1.append_row(row, value_input_option="USER_ENTERED")
</code></pre>
|
<python><google-sheets><google-sheets-api><gspread>
|
2024-03-25 21:44:57
| 1
| 10,590
|
Red Cricket
|
78,221,903
| 5,790,653
|
Using cookies in python requests doesn't login to the website
|
<p>I use my own laptop's Google Chrome to login to a website a human, and I also use this Chrome's <a href="https://chromewebstore.google.com/detail/cookiemanager-cookie-edit/hdhngoamekjhmnpenphenpaiindoinpo" rel="nofollow noreferrer">extension</a> which gives me a json output of the website's cookies.</p>
<p>I save the cookies, and then save them in my server as <code>cookies.json</code>.</p>
<p>And I go to my server and run this python code (this is the whole code since now):</p>
<pre class="lang-py prettyprint-override"><code>import requests
import json
with open('cookies.json', 'r') as file:
cookies_tmp = json.load(file)
cookies = cookies_tmp[0]
s = requests.get(url='https://www.example.com/myServices?t_page=1',
cookies=cookies)
</code></pre>
<p>But this is my error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/test/test/venv/lib/python3.10/site-packages/requests/api.py", line 73, in get
return request("get", url, params=params, **kwargs)
File "/home/test/test/venv/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/home/test/test/venv/lib/python3.10/site-packages/requests/sessions.py", line 573, in request
prep = self.prepare_request(req)
File "/home/test/test/venv/lib/python3.10/site-packages/requests/sessions.py", line 484, in prepare_request
p.prepare(
File "/home/test/test/venv/lib/python3.10/site-packages/requests/models.py", line 370, in prepare
self.prepare_cookies(cookies)
File "/home/test/test/venv/lib/python3.10/site-packages/requests/models.py", line 627, in prepare_cookies
cookie_header = get_cookie_header(self._cookies, self)
File "/home/test/test/venv/lib/python3.10/site-packages/requests/cookies.py", line 147, in get_cookie_header
jar.add_cookie_header(r)
File "/usr/lib/python3.10/http/cookiejar.py", line 1375, in add_cookie_header
attrs = self._cookie_attrs(cookies)
File "/usr/lib/python3.10/http/cookiejar.py", line 1334, in _cookie_attrs
self.non_word_re.search(cookie.value) and version > 0):
TypeError: expected string or bytes-like object
</code></pre>
<p>Am I doing anything wrong? Aren't cookies enough to have access to a website?</p>
|
<python>
|
2024-03-25 21:38:20
| 1
| 4,175
|
Saeed
|
78,221,858
| 1,317,018
|
Adding sliding window dimension to data causes error: "Expected 3D or 4D (batch mode) tensor ..."
|
<p>I wrote a pytorch data loader which used to return data of shape <code>(4,1,192,320)</code> representing the 4 samples of single channel image, each of size <code>192 x 320</code>. I then used to unfold it into shape <code>(4,15,64,64)</code> (Note that <code>192*320 = 15*64*64</code>). Resize it to shape <code>(4,15,64*64)</code>. And then finally apply my FFN which used to return tensor of shape <code>(4,15,256)</code>. (FFN is just first of several neural network layer in my whole model. But lets just stick to FFN for simplicity.) This is the whole code:</p>
<pre><code>import torch
import torch.nn as nn
from torchvision import transforms
from torch.utils.data import Dataset, DataLoader
class FFN(nn.Module):
def __init__(self, in_dim, out_dim, dropout=0.1):
super(FFN, self).__init__()
self.linear = nn.Linear(in_dim, out_dim)
self.dropout = nn.Dropout(dropout)
self.relu = nn.ReLU()
def forward(self, x):
x = self.linear(x)
x = self.relu(x)
x = self.dropout(x)
return x
class DummyDataLoader(Dataset):
def __init__(self):
super().__init__()
self.transforms = transforms.Compose([
transforms.ToPILImage(),
transforms.Resize((192, 320)),
transforms.ToTensor()
])
def __len__(self):
return 10000 # return dummy length
def __getitem__(self, idx):
frame = torch.randn(192,380)
frame = self.transforms(frame)
return frame
dataset = DummyDataLoader()
dataloader = torch.utils.data.DataLoader(dataset, batch_size=4, shuffle=False)
frames = next(iter(dataloader))
print('Raw: ', tuple(frames.shape))
unfold = torch.nn.Unfold(kernel_size=64, stride=64)
unfolded_ = unfold(frames)
unfolded = unfolded_.view(unfolded_.size(0),-1,64,64)
print('Unfolded: ', tuple(unfolded.shape))
unfolded_reshaped = unfolded.reshape(unfolded.size(0), -1, 64*64)
ffn = FFN(64*64, 256, 0.1)
ffn_out = ffn(unfolded_reshaped)
print('FFN: ', tuple(ffn_out.shape))
</code></pre>
<p>This outputs:</p>
<pre><code>Raw: (4, 1, 192, 320)
Unfolded: (4, 15, 64, 64)
FFN: (4, 15, 256)
</code></pre>
<p>Now, I realized, I also need to implement sliding window. That is, In each iteration, data loader wont just return single frame but multiple frames based on sliding window size, so that the model will learn inter-frame relation. If window size is 5, it will return 5 frames. To implement this, I just changed <code>__getitem__</code> from:</p>
<pre><code>def __getitem__(self, idx):
frame = torch.randn(192,380)
frame = self.transforms(frame)
return frame
</code></pre>
<p>to:</p>
<pre><code>def __getitem__(self, idx):
frames = [torch.randn(192,380) for _ in range(5)]
transformed_frames = [self.transforms(frame) for frame in frames]
return torch.stack(transformed_frames)
</code></pre>
<p>But the code started giving me following error:</p>
<pre><code>Raw: (4, 5, 1, 192, 320)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
d:\workspaces\my-project\my-project-win-stacked.ipynb Cell 19 line 6
57 print('Raw: ', tuple(frames.shape))
59 unfold = torch.nn.Unfold(kernel_size=64, stride=64)
---> 60 unfolded_ = unfold(frames)
61 unfolded = unfolded_.view(unfolded_.size(0),-1,64,64)
62 print('Unfolded: ', tuple(unfolded.shape))
File ~\AppData\Roaming\Python\Python311\site-packages\torch\nn\modules\module.py:1511, in Module._wrapped_call_impl(self, *args, **kwargs)
1509 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1510 else:
-> 1511 return self._call_impl(*args, **kwargs)
File ~\AppData\Roaming\Python\Python311\site-packages\torch\nn\modules\module.py:1520, in Module._call_impl(self, *args, **kwargs)
1515 # If we don't have any hooks, we want to skip the rest of the logic in
1516 # this function, and just call forward.
1517 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1518 or _global_backward_pre_hooks or _global_backward_hooks
1519 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1520 return forward_call(*args, **kwargs)
1522 try:
1523 result = None
File ~\AppData\Roaming\Python\Python311\site-packages\torch\nn\modules\fold.py:298, in Unfold.forward(self, input)
297 def forward(self, input: Tensor) -> Tensor:
--> 298 return F.unfold(input, self.kernel_size, self.dilation,
299 self.padding, self.stride)
File ~\AppData\Roaming\Python\Python311\site-packages\torch\nn\functional.py:4790, in unfold(input, kernel_size, dilation, padding, stride)
4786 if has_torch_function_unary(input):
4787 return handle_torch_function(
4788 unfold, (input,), input, kernel_size, dilation=dilation, padding=padding, stride=stride
4789 )
-> 4790 return torch._C._nn.im2col(input, _pair(kernel_size), _pair(dilation), _pair(padding), _pair(stride))
RuntimeError: Expected 3D or 4D (batch mode) tensor with possibly 0 batch size and other non-zero dimensions for input, but got: [4, 5, 1, 192, 320]
</code></pre>
<p>As you can see, the data loader now returns data of shape <code>[4, 5, 1, 192, 320]</code> in each iteration. But it fails in next step of unfolding, as it seem to expect 4D tensor for batch mode. But data loader returned 5D tensor. I believe, each step in my model pipeline (several FFNs, encoders and decoders) will fail if I return such 5D tensor from data loader as they all be expecting 4D tensor for batch mode.</p>
<p><strong>Q1.</strong> How we can combine batching and windowing without breaking / revamping existing model, or revamping is inevitable?</p>
<p><strong>Q2.</strong> If I revamping model is inevitable, how do I do it, such that it will involve minimal code changes (say for example for above model, which involves unfolding and FFN)?</p>
|
<python><pytorch><torchvision><pytorch-dataloader>
|
2024-03-25 21:24:27
| 0
| 25,281
|
Mahesha999
|
78,221,852
| 18,814,386
|
Merging and concatenating simultaneously
|
<p>I have multiple data frames, each representing monthly progress. My mission is to join them with two condition step by step.</p>
<p>So here is three sample data frame, and I will try to explain what I want to achieve.</p>
<pre><code>import pandas as pd
data_22_3 = {
'id_number': ['A123', 'B456', 'C789'],
'company': ['Insurance1', 'Insurance2', 'Insurance3'],
'type': ['A', 'A', 'C'],
'Income': [100, 200, 300]
}
df_22_3 = pd.DataFrame(data_22_3)
data_22_4 = {
'id_number': ['A123', 'B456', 'D012'],
'company': ['Insurance1', 'Insurance2', 'Insurance1'],
'type': ['A', 'B', 'B'],
'Income': [150, 250, 400]
}
df_22_4 = pd.DataFrame(data_22_4)
data_22_5 = {
'id_number': ['A123', 'C789', 'E034'],
'company': ['Insurance1', 'Insurance3', 'Insurance5'],
'type': ['A', 'C', 'B'],
'Income': [180, 320, 500]
}
df_22_5 = pd.DataFrame(data_22_5)
</code></pre>
<p>So, let's take the first data frame as the main data frame. The joining first and second data frame should be as following:</p>
<p>If <code>id_number</code> of second data frame exists on the first one, then the new column of <code>Income_2</code> should be added to the first data frame as the next month's income. Since the next data frame has the same columns, all of the columns except <code>Income</code> should be disregarded.</p>
<p>If <code>id_number</code> of second data frame does not exist on the first one, then the whole row should be added to the first data frame. The only thing to consider is to place <code>Income</code> value to the <code>Income_2</code> column, and putting 0 to the <code>Income</code> column, since the value belongs to the next month.</p>
<p>The resulting data frame then should be joined with the next dataframe in a similar method and so on.</p>
<p>Even if there is a difference on values of other columns such as <code>Type</code> or <code>Company</code>, as long as <code>id_number</code> is the same, the former data frame value should be taken.</p>
<p>I might be explaining inadequately, but the result is something like this:</p>
<pre><code>data_all = {
'id_number': ['A123', 'B456', 'C789', 'D012', 'E034'],
'company': ['Insurance1', 'Insurance2', 'Insurance3', 'Insurance1', 'Insurance5'],
'type': ['A', 'A', 'C', 'B', 'B'],
'Income': [100, 200, 300, 0, 0],
'Income_2': [150, 250, 0, 400, 0],
'Income_3': [180, 0, 320, 0, 500]
}
all_df = pd.DataFrame(data_all)
</code></pre>
|
<python><pandas><dataframe><merge><concatenation>
|
2024-03-25 21:23:40
| 3
| 394
|
Ranger
|
78,221,628
| 11,572,712
|
PyCharm: Unresolved attribute reference for class MyClass - Initialization of variables as instance-level variables
|
<p>I have code where I want to initialize prometheus variables to get some metrics.</p>
<pre><code>from prometheus_client import Counter, Gauge, Histogram
import time
class MyClass:
def __init__(self):
self.number_of_clicks: Optional[Counter]
self.number_of_items: Optional[Gauge]
self.prometheus_processing_time: Optional[Histogram]
def do_something:
number_of_clicks = ...
number_of_items = ...
elapsed_time = time.time_ns() - start_time
self.number_of_clicks.inc(amount=number_of_clicks)
self.number_of_items.inc(amount=number_of_items)
self.prometheus_processing_time.inc(amount=elapsed_time)
</code></pre>
<p>I now get the yellow warning: Unresolved attribute reference "number_of_clicks" for class "MyClass". I am wondering if I should initialize the metrics somehwo else, e.g.: <code>self.number_of_clicks: Optional[Counter] = Counter()</code>? Can someone help me and explain this warning?</p>
|
<python><pycharm><prometheus>
|
2024-03-25 20:30:23
| 0
| 1,508
|
Tobitor
|
78,221,601
| 6,331,353
|
Installing an older version of cli tox
|
<p>I cannot use the newest version of tox, this is because it automatically installs python3.12, which causes other problems for me, so I must use python3.11</p>
<hr />
<h1>Brew</h1>
<p>trying to install an older version via <code>brew</code> does not work</p>
<pre><code>% brew install tox@4.11.3
Warning: No available formula with the name "tox@4.11.3".
==> Searching for similarly named formulae and casks...
Error: No formulae or casks found for tox@4.11.3.
</code></pre>
<hr />
<h1>Tox Installation Instructions</h1>
<p><a href="https://tox.wiki/en/latest/installation.html" rel="nofollow noreferrer">The tox installation instructions do not work for me</a></p>
<p>if I install using pip (<code>% python3 -m pip install --user tox==4.11.3</code>), then there is no cli tox</p>
<pre><code>% tox -e qa
zsh: command not found: tox
</code></pre>
<hr />
<p>It appears that I also can't install pipx-in-pipx, running <code>python -m pip install pipx-in-pipx --user</code> gives</p>
<pre><code> note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pipx-in-pipx
Running setup.py clean for pipx-in-pipx
Failed to build pipx-in-pipx
ERROR: Could not build wheels for pipx-in-pipx, which is required to install pyproject.toml-based projects
</code></pre>
|
<python><tox>
|
2024-03-25 20:25:57
| 1
| 2,335
|
Sam
|
78,221,438
| 13,578,682
|
Python warnings filter is not hiding this DeprecationWarning
|
<p>Even when switching to <code>mstrio.project_objects.dashboard</code> it still shows deprecation warning and ignoring it doesn't work</p>
<pre><code>$ pip install -Uq mstrio-py
$ export PYTHONWARNINGS='ignore:mstrio.project_objects.dossier module is deprecated:DeprecationWarning'
$ python3 -c 'from mstrio.project_objects import dashboard'
DeprecationWarning: mstrio.project_objects.dossier module is deprecated and will not be supported starting from mstrio-py 11.5.03. Please use mstrio.project_objects.dashboard instead.
$ python3 -W "ignore:mstrio.project_objects.dossier module is deprecated:DeprecationWarning" -c "from mstrio.project_objects import dashboard"
DeprecationWarning: mstrio.project_objects.dossier module is deprecated and will not be supported starting from mstrio-py 11.5.03. Please use mstrio.project_objects.dashboard instead.
</code></pre>
<p>Guarding the import using a context manager doesn't work either</p>
<pre><code>>>> import warnings
>>> with warnings.catch_warnings():
... warnings.filterwarnings("ignore", message=".*mstrio.project_objects.dossier module is deprecated.*")
... from mstrio.project_objects import dashboard
...
DeprecationWarning: mstrio.project_objects.dossier module is deprecated and will not be supported starting from mstrio-py 11.5.03. Please use mstrio.project_objects.dashboard instead.
</code></pre>
<p>Why doesn't it go away? How to avoid this deprecation warning?</p>
<p>This is latest mstrio-py i.e. <a href="https://pypi.org/project/mstrio-py/11.4.3.101/" rel="nofollow noreferrer">11.4.3.101</a>.</p>
|
<python><microstrategy>
|
2024-03-25 19:46:12
| 1
| 665
|
no step on snek
|
78,221,432
| 726,730
|
Many QPushButtons clicked connecting using exec()
|
<p>I have write this code:</p>
<pre class="lang-py prettyprint-override"><code> def top_music_clips(self,clips):
try:
self.top_20_clips = clips
counter = 0
for clip in self.top_20_clips:
counter += 1
exec("self.main_self.ui.music_clip_"+str(counter)+".clicked.connect(lambda state,clip_id="+str(clip["id"])+":self.top_clip_clicked(clip_id))")
exec("self.main_self.ui.music_clip_"+str(counter)+".setText('"+str(clip["title"])+"')")
exec("self.main_self.ui.music_clip_"+str(counter)+".setStyleSheet('QPushButton{font-size:7px;}')")
#example for music_clip_1
#self.main_self.ui.music_clip_1.clicked.connect(lambda state,clip_id=self.top_20_clips[0]["id"]:self.top_clip_clicked(clip_id))
except:
error_message = traceback.format_exc()
self.main_self.open_music_clip_deck_error_window(error_message)
</code></pre>
<p>But when i push the button the program crashes. Note that the code in the comment work!</p>
<p>What's going wrong?</p>
|
<python><pyqt5><exec><qpushbutton>
|
2024-03-25 19:45:08
| 0
| 2,427
|
Chris P
|
78,221,363
| 4,442,337
|
AttributeError: '_io.StringIO' object has no attribute 'buffer'?
|
<p>I have this function that simply writes a string into a stream buffer:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from typing import IO
def write(s: str, stream: IO[bytes] = sys.stdout.buffer):
stream.write(s.encode())
stream.flush()
</code></pre>
<p>I'm using both flake8 and mypy which don't complaint while pdoc throws an attribute error:</p>
<pre><code>[...]
def write(s: str, stream: IO[bytes] = sys.stdout.buffer):
AttributeError: '_io.StringIO' object has no attribute 'buffer'
</code></pre>
<p>Is there a way to fix this?</p>
|
<python><python-typing><pdoc>
|
2024-03-25 19:29:41
| 1
| 2,191
|
browser-bug
|
78,221,169
| 8,963,682
|
Single Tenant Teams Bot Authentication Error: Missing access_token
|
<p>I am developing a Teams bot using <code>FastAPI</code> and the <code>botbuilder-core</code> library in Python. I want to restrict the bot to be used only within my company and keep it as a single tenant application for confidentiality reasons. However, I am encountering an issue where the bot tries to send a message but fails with an error related to the <code>access_token</code>.</p>
<p>Here's the relevant code snippet:</p>
<pre><code>CONFIG = DefaultConfig()
# BotFramework Adapter setup
ADAPTER = CloudAdapter(ConfigurationBotFrameworkAuthentication(CONFIG))
# Teams Bot
BOT = TeamsBot()
@app.post("/api/messages")
@app.options("/api/messages")
async def messages(req: Request) -> Response:
# Main bot message handler.
if "application/json" in req.headers["Content-Type"]:
body = await req.json()
else:
return Response(status_code=HTTPStatus.UNSUPPORTED_MEDIA_TYPE)
activity = Activity().deserialize(body)
if not isinstance(activity, Activity):
print(f"Error: Expected Activity, got {type(activity)} instead.")
return Response(status_code=400)
auth_header = req.headers["Authorization"] if "Authorization" in req.headers else ""
response = await ADAPTER.process_activity(auth_header, activity, BOT.on_turn)
if response:
return json_response(data=response.body, status=response.status)
return Response(status_code=201)
</code></pre>
<p>Traceback:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 408, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\uvicorn\middleware\proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\fastapi\applications.py", line 292, in __call__
await super().__call__(scope, receive, send)
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\starlette\applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\starlette\middleware\errors.py", line 184, in __call__
raise exc
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\starlette\middleware\errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\starlette\middleware\sessions.py", line 86, in __call__
await self.app(scope, receive, send_wrapper)
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\starlette\middleware\exceptions.py", line 79, in __call__
raise exc
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\starlette\middleware\exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\fastapi\middleware\asyncexitstack.py", line 20, in __call__
raise e
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\fastapi\middleware\asyncexitstack.py", line 17, in __call__
await self.app(scope, receive, send)
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\starlette\routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\starlette\routing.py", line 276, in handle
await self.app(scope, receive, send)
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\starlette\routing.py", line 66, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\fastapi\routing.py", line 273, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\fastapi\routing.py", line 190, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject\_api\fast_api.py", line 113, in messages
return Response(status_code=201)
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\botbuilder\core\cloud_adapter_base.py", line 364, in process_activity
await self.run_pipeline(context, logic)
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\botbuilder\core\bot_adapter.py", line 181, in run_pipeline
raise error
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\botbuilder\core\bot_adapter.py", line 174, in run_pipeline
return await self._middleware.receive_activity_with_status(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\botbuilder\core\middleware_set.py", line 69, in receive_activity_with_status
return await self.receive_activity_internal(context, callback)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\botbuilder\core\middleware_set.py", line 79, in receive_activity_internal
return await callback(context)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject\services\ProjectName\teams_bot\bot.py", line 69, in on_turn
await super().on_turn(turn_context)
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\botbuilder\core\activity_handler.py", line 70, in on_turn
await self.on_message_activity(turn_context)
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject\services\ProjectName\teams_bot\bot.py", line 27, in on_message_activity
await self.get_template_quote(turn_context)
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject\services\ProjectName\teams_bot\bot.py", line 223, in get_template_quote
await self._send_file_card(turn_context, filename, file_size)
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject\services\ProjectName\teams_bot\bot.py", line 250, in _send_file_card
await turn_context.send_activity(reply_activity)
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\botbuilder\core\turn_context.py", line 173, in send_activity
result = await self.send_activities([activity_or_text])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\botbuilder\core\turn_context.py", line 225, in send_activities
return await self._emit(self._on_send_activities, output, logic())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\botbuilder\core\turn_context.py", line 303, in _emit
return await logic
^^^^^^^^^^^
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\botbuilder\core\turn_context.py", line 220, in logic
responses = await self.adapter.send_activities(self, output)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\botbuilder\core\cloud_adapter_base.py", line 93, in send_activities
response = await connector_client.conversations.reply_to_activity(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\botframework\connector\aio\operations_async\_conversations_operations_async.py", line 523, in reply_to_activity
response = await self._client.async_send(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\msrest\async_client.py", line 115, in async_send
pipeline_response = await self.config.pipeline.run(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\msrest\pipeline\async_abc.py", line 159, in run
return await first_node.send(pipeline_request, **kwargs) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\msrest\pipeline\async_abc.py", line 79, in send
response = await self.next.send(request, **kwargs) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\msrest\pipeline\async_requests.py", line 99, in send
self._creds.signed_session(session)
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\botframework\connector\auth\app_credentials.py", line 98, in signed_session
auth_token = self.get_access_token()
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\X(Aspe\PycharmProjects\TeamsBotProject_Main\venv\Lib\site-packages\botframework\connector\auth\microsoft_app_credentials.py", line 65, in get_access_token
return auth_token["access_token"]
~~~~~~~~~~^^^^^^^^^^^^^^^^
KeyError: 'access_token'
</code></pre>
<p>Error:</p>
<pre><code>{'error': 'unauthorized_client', 'error_description': "AADSTS700016: Application with identifier 'AppID' was not found in the directory 'Bot Framework'. This can happen if the application has not been installed by the administrator of the tenant or consented to by any user in the tenant. You may have sent your authentication request to the wrong tenant. Trace ID: 00 Correlation ID: 00 Timestamp: 2024-03-25 20:35:48Z", 'error_codes': [700016], 'timestamp': '2024-03-25 20:35:48Z', 'trace_id': '00', 'correlation_id': '00', 'error_uri': 'https://login.microsoftonline.com/error?code=700016'}
</code></pre>
<p>In the code above, I am passing only the app ID and app secret to the <strong>ConfigurationBotFrameworkAuthentication</strong> instance. I have noticed that the error does not occur when I switch to multi-tenant mode, but I want to keep the bot as a single tenant application.</p>
<p>I have a feeling that I might need to pass additional information to make proper verification or change some settings in Azure.
In my project, under the "<strong>Authentication</strong>" tab, I have selected "<strong>Single Tenant</strong>,"</p>
<p><a href="https://i.sstatic.net/GGmqg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GGmqg.png" alt="enter image description here" /></a></p>
<p>but in the bot's "<strong>Configuration</strong>" tab, it shows "<strong>MultiTenant</strong>." I'm not sure if this discrepancy is causing the problem.</p>
<p><a href="https://i.sstatic.net/g5LoF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g5LoF.png" alt="enter image description here" /></a></p>
<p>I am running the bot using <code>FastAPI</code>, and the /api/messages endpoint is responsible for handling the Teams messages. My goal is to allow users within my company to freely add this bot to their collection and use it.</p>
<p>I have observed that when I send a message, it doesn't give me an error about the access_token, but it shows that the bot is trying to send a message and fails.</p>
<p>I would appreciate any guidance on how to properly configure the authentication for a single tenant Teams bot using <code>FastAPI</code> and the <code>botbuilder-core</code> library. Do I need to pass additional information or modify any settings in Azure to resolve this issue?</p>
|
<python><azure><botframework><microsoft-teams><microsoft-graph-teams>
|
2024-03-25 18:53:32
| 1
| 617
|
NoNam4
|
78,221,152
| 986,612
|
How to detect idle time of mouse/keyboard only, not for other input devices?
|
<p><code>GetLastInputInfo()</code>, eg <a href="https://stackoverflow.com/questions/911856/detecting-idle-time-using-python">detecting idle time using python</a>, is sensitive to mouse and keyboard input, but it's also disturbed by game controller input, which I want to ignore.</p>
<p>Is there an easy way to do this?</p>
<p>One way is to use Raw Input, eg <a href="https://stackoverflow.com/questions/71994439/having-trouble-using-winapi-to-read-input-from-a-device">Having trouble using winapi to read input from a device</a>, which needs to be wrapped by synchronized write and read threads that record the events and can be queried.</p>
|
<python><windows><winapi>
|
2024-03-25 18:49:28
| 2
| 779
|
Zohar Levi
|
78,221,096
| 12,309,386
|
Polars more concise way to replace empty list with null
|
<p><strong>Question overview</strong></p>
<p>I am extracting data from a newline-delimited JSON file by applying a series of transformations. One of my transformations results in a list of values; for cases where the list is empty, I want the value to be null rather than an empty list. I have code that works, but it seems very convoluted and I'm wondering if there's a simpler way to do this that I am missing.</p>
<p><strong>More detail</strong></p>
<p>In each JSON object of my ndjson file, one of the data elements I'm interested in is an array of <code>telecom</code> nested JSON objects.</p>
<pre class="lang-json prettyprint-override"><code>{
... some data
"telecom":
[
{
"rank": 1,
"system": "phone",
"use": "work",
"value": "(123) 456-7890"
},
{
"rank": 2,
"system": "fax",
"use": "work",
"value": "(123) 456-7891"
}
]
... some other data
}
</code></pre>
<p>As part of a larger data extraction operation, I am doing:</p>
<pre class="lang-py prettyprint-override"><code>df.select(
expr_first(),
expr_extract_phone(),
expr_others()
)
</code></pre>
<p>where <code>expr_first()</code>, <code>expr_extract_phone()</code> and <code>expr_others()</code> return Polars expressions that perform some transformation on various fields of my dataset.</p>
<p>For <code>expr_extract_phone()</code> I want to get a list of phone numbers from <code>telecom</code> as follows:</p>
<ul>
<li>for each nested object in the <code>telecom</code> array, extract <code>value</code> where <code>system=="phone"</code></li>
<li>collect all the individual phone numbers in a list</li>
<li>if the list is empty, the value of the column should be <code>null</code> rather than <code>[]</code></li>
</ul>
<p>I have been able to cobble together something that works:</p>
<pre class="lang-py prettyprint-override"><code>def expr_extract_phone() -> pl.Expr:
return pl.col('telecom').list.eval(
pl.element().filter(pl.element().struct['system'] == 'phone').struct['value']
).list.unique().alias('phone_numbers').map_batches(lambda col:
pl.LazyFrame(col).select(
pl.when(pl.col('phone_numbers').list.len() > 0)\
.then(pl.col('phone_numbers'))
).collect().get_column('phone_numbers'),
return_dtype=pl.List(pl.String),
is_elementwise=True
)
</code></pre>
<p>Getting the list of phone numbers seems straightforward enough, however the entire <code>map_batches</code> portion to replace an empty list <code>[]</code> with a null value seems very convoluted. Is there a simpler way to accomplish what I'm trying to do?</p>
<p>For strings <a href="https://stackoverflow.com/questions/72292048/idiomatic-replacement-of-empty-string-with-pl-null-null-in-polars">this SO post</a> seems to provide a nice clean way to handle but I can't seem to find an equivalent for a list.</p>
|
<python><dataframe><python-polars>
|
2024-03-25 18:37:21
| 1
| 927
|
teejay
|
78,220,893
| 433,926
|
How do I mock headers with connexion?
|
<p>I am using connexion to read a few headers in my productive code, i.e.:</p>
<pre class="lang-py prettyprint-override"><code>from connexion import request
def get_my_header() -> str:
return request.headers.get("my-header")
</code></pre>
<p>In the connexion docs I found the <a href="https://connexion.readthedocs.io/en/3.0.6/testing.html#testcontext" rel="nofollow noreferrer">TestContext</a>.
This sounds quite promising, but I can't find any example on how I would mock headers, so that when testing <code>get_my_header()</code> the headers are set. I have tried many different ways on how to set this up. I found this promising, but it just won't work:</p>
<pre class="lang-py prettyprint-override"><code>from connexion.testing import TestContext
def test_get_my_header():
request = MagicMock()
request.headers = {"my-header": "header-value"}
with TestContext(context={"request": request}):
# test code
</code></pre>
<p>Any ideas are appreciated!</p>
|
<python><connexion>
|
2024-03-25 17:48:18
| 1
| 312
|
marikaner
|
78,220,785
| 9,110,646
|
How to do async SQL bulk insertion with pandas dataFrame.to_sql() and SQLAlchemy?
|
<p>I am trying to insert a bulk of data with <code>dataFrame.to_sql()</code> asynchronously. I also need to tackle the case that some data is already inside of the table (duplicates). It is a gene DB. My insertion function looks like this and is called in my router endpoint (FastAPI):</p>
<pre><code>async def bulk_insert_genes(data_sets: pd.DataFrame):
"""Bulk insert genes from bucket to postgresDB.
Args:
data_sets: Dataframe of paths to the extraction datasets.
"""
async with session_maker() as session:
for _, row in data_sets.iterrows():
data = generate_gene_df(row)
conn = await session.connection()
await conn.run_sync(
lambda sync_conn, data=data: data.to_sql(
name=GenePos.__tablename__,
con=sync_conn,
if_exists="append",
index=False,
# method="multi",
chunksize=CHUNK_SIZE,
method=insert_on_duplicate,
),
)
await session.commit()
</code></pre>
<p>This is what I've found for handling duplicates on stack overflow:</p>
<pre><code>def insert_on_duplicate(table, conn, keys, data_iter): # noqa: ANN001
"""Insert data into table with on duplicate key update."""
insert_stmt = insert(table.table).values(list(data_iter))
on_duplicate_key_stmt = insert_stmt.on_duplicate_key_update(insert_stmt.inserted)
conn.execute(on_duplicate_key_stmt)
</code></pre>
<p>The problem is, that it seems not to support async operations (<code>AttributeError: 'PGCompiler_asyncpg' object has no attribute 'visit_on_duplicate_key_update'</code>).</p>
<p><code>generate_gene_df()</code> simply generates a dataFrame which fits the format of the table in the DB.</p>
<p>So since I am new to this topic: is there a better way to do this and how can I handle the duplicates?</p>
|
<python><pandas><postgresql><asynchronous><sqlalchemy>
|
2024-03-25 17:27:49
| 0
| 423
|
Pm740
|
78,220,723
| 525,865
|
FaceBook-Scraper (without API) works nicely - but Login Process failes some how
|
<p>working on the getting to run the Facebook-Scraper (cf <a href="https://github.com/kevinzg/facebook-scraper" rel="nofollow noreferrer">https://github.com/kevinzg/facebook-scraper</a> )</p>
<pre><code>import facebook_scraper as fs
# get POST_ID from the URL of the post which can have the following structure:
# https://www.facebook.com/USER/posts/POST_ID
# https://www.facebook.com/groups/GROUP_ID/posts/POST_ID
POST_ID = "pfbid02NsuAiBU9o1ouwBrw1vYAQ7khcVXvz8F8zMvkVat9UJ6uiwdgojgddQRLpXcVBqYbl"
# number of comments to download -- set this to True to download all comments
MAX_COMMENTS = 100
# get the post (this gives a generator)
gen = fs.get_posts(
post_urls=[POST_ID],
options={"comments": MAX_COMMENTS, "progress": True}
)
# take 1st element of the generator which is the post we requested
post = next(gen)
# extract the comments part
comments = post['comments_full']
# process comments as you want...
for comment in comments:
# e.g. ...print them
print(comment)
# e.g. ...get the replies for them
for reply in comment['replies']:
print(' ', reply)
</code></pre>
<p>i got back the following</p>
<pre><code>LoginRequired Traceback (most recent call last)
<ipython-input-5-19c42c721928> in <cell line: 18>()
16
17 # take 1st element of the generator which is the post we requested
---> 18 post = next(gen)
19
20 # extract the comments part
1 frames
/usr/local/lib/python3.10/dist-packages/facebook_scraper/facebook_scraper.py in get(self, url, **kwargs)
940 or response.url.startswith(utils.urljoin(FB_W3_BASE_URL, "login"))
941 ):
--> 942 raise exceptions.LoginRequired(
943 "A login (cookies) is required to see this page"
944 )
LoginRequired: A login (cookies) is required to see this page
</code></pre>
<p>note: we have the options like the following:</p>
<p><strong>Optional parameters</strong></p>
<pre><code>(For the get_posts function).
group: group id, to scrape groups instead of pages. Default is None.
pages: how many pages of posts to request, the first 2 pages may have no results, so try with a number greater than 2. Default is 10.
timeout: how many seconds to wait before timing out. Default is 30.
credentials: tuple of user and password to login before requesting the posts. Default is None.
extra_info: bool, if true the function will try to do an extra request to get the post reactions. Default is False.
youtube_dl: bool, use Youtube-DL for (high-quality) video extraction. You need to have youtube-dl installed on your environment. Default is False.
post_urls: list, URLs or post IDs to extract posts from. Alternative to fetching based on username.
cookies: One of:
The path to a file containing cookies in Netscape or JSON format. You can extract cookies from your browser after logging into Facebook with an extension like Get cookies.txt LOCALLY or Cookie Quick Manager (Firefox). Make sure that you include both the c_user cookie and the xs cookie, you will get an InvalidCookies exception if you don't.
A CookieJar
A dictionary that can be converted to a CookieJar with cookiejar_from_dict
The string "from_browser" to try extract Facebook cookies from your browser
options: Dictionary of options. Set options={"comments": True} to extract comments, set options={"reactors": True} to extract the people reacting to the post. Both comments and reactors can also be set to a number to set a limit for the amount of comments/reactors to retrieve. Set options={"progress": True} to get a tqdm progress bar while extracting comments and replies. Set options={"allow_extra_requests": False} to disable making extra requests when extracting post data (required for some things like full text and image links). Set options={"posts_per_page": 200} to request 200 posts per page. The default is 4.
</code></pre>
<p>( cf <a href="https://github.com/kevinzg/facebook-scraper" rel="nofollow noreferrer">https://github.com/kevinzg/facebook-scraper</a> )</p>
<p>but the question is - how to arrange the login process</p>
<p>note: i am on google-colab:</p>
|
<python><facebook><web-scraping><python-requests><web-crawler>
|
2024-03-25 17:16:29
| 0
| 1,223
|
zero
|
78,220,662
| 759,991
|
Adding blank row to google spread sheet
|
<p>I have this simple python script that creates a google spread sheet for me from a CSV file.</p>
<pre><code>import gspread
import csv
import time
# Authenticate using gspread.oauth()
gc = gspread.oauth()
# Open or create a new Google Sheets document
spreadsheet = gc.create('Your Spreadsheet Name')
# Read CSV data and add it to the spreadsheet
with open('your_csv_file.csv', 'r', encoding='utf-8') as csv_file:
csv_reader = csv.reader(csv_file)
for row in csv_reader:
if not row: # Check if the row is empty
print(f"Adding blaknk row: {row}.")
spreadsheet.sheet1.append_row([' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '])
else:
while True:
try:
print(f"Adding row: {row}.")
spreadsheet.sheet1.append_row(row)
time.sleep(5) # Add a sleep of 5 seconds between each append_row() call
break
except gspread.exceptions.APIError as e:
if e.response.status_code == 503:
print("Service Unavailable. Retrying in 10 seconds ...")
time.sleep(10) # Wait for 10 seconds before retrying
else:
raise e
</code></pre>
<p>The problem is that I do not see my "blank" row. The csv file has a blank row for cosmetic purposes.</p>
<pre><code>(venv) [red@BP22006 aws_cost_explorer]$ head -30 your_csv_file.csv
"Service Name af-south-1","2023-03","2023-04","2023-05","2023-06","2023-07","2023-08","2023-09","2023-10","2023-11","2023-12","2024-01","2024-02"
"AWS VPC","0","0","0","0","0.0","0.0","0.0","0.0","0.0","0.0","0.0","6.96"
"EBS Volumes gp2","19.63","19.64","19.63","19.64","19.63","19.63","19.64","19.63","19.64","19.63","19.63","19.64"
"EC2 - Other","27.59","27.79","27.14","27.26","27.64","26.4","26.55","27.79","29.6","28.91","33.82","25.15"
"EC2","94.14","76.91","59.53","57.61","59.53","59.53","57.61","59.53","57.61","64.88","60.07","56.52"
"PublicIPv4:IdleAddress","0","0","0","0","0.0","0.0","0.0","0.0","0.0","0.0","0.0","3.48"
"PublicIPv4:InUseAddress","0","0","0","0","0.0","0.0","0.0","0.0","0.0","0.0","0.0","3.48"
"Other af-south-1","0.0","0.0","0.0","0.0","0.0","0.0","0.0","-0.0","-0.0","0.0","0.0","-6.96"
"TOTAL: af-south-1","121.73","104.7","86.67","84.87","87.17","85.93","84.16","87.32","87.21","93.79","93.89","88.63"
"Service Name ap-northeast-1","2023-03","2023-04","2023-05","2023-06","2023-07","2023-08","2023-09","2023-10","2023-11","2023-12","2024-01","2024-02"
"AWS Direct Connect","1575.24","1512.77","1629.79","1497.54","1504.33","1484.96","1543.41","1510.99","1404.84","1428.71","1424.15","1361.66"
"AWS ELB","76.29","75.02","80.48","79.06","71.27","72.74","74.61","77.1","72.42","78.28","73.62","73.42"
"AWS S3","51.77","53.71","52.56","53.83","55.7","53.57","54.26","55.99","55.97","56.97","56.75","56.94"
"AWS VPC","0","0","0","0","0.0","0.0","0.0","0.0","0.0","0.0","0.0","48.73"
"EBS Volumes gp2","403.58","414.48","414.48","414.48","427.1","438.48","438.48","438.48","438.48","438.48","438.48","438.48"
"EBS Volumes gp3","206.21","206.21","206.21","206.21","206.21","206.21","206.21","206.21","207.42","208.13","208.13","208.13"
"EBS Volumes","491.52","491.52","491.52","491.52","491.52","491.52","491.52","491.52","491.52","491.52","491.52","491.52"
"EC2 - Other","1512.1","1502.28","1523.26","1513.06","1518.93","1523.01","1533.47","1525.14","1502.29","1503.11","1492.42","1485.31"
"EC2","1235.61","1159.1","1175.3","1151.3","1155.16","1141.6","1123.86","1160.83","1120.61","1481.42","1939.9","1320.95"
"PublicIPv4:InUseAddress","0","0","0","0","0.0","0.0","0.0","0.0","0.0","0.0","0.0","48.73"
"Other ap-northeast-1","0.0","0.0","-0.01","-0.01","0.01","0.0","-0.0","0.0","0.0","-0.0","0.0","-48.73"
"TOTAL: ap-northeast-1","4451.01","4302.88","4461.38","4294.78","4305.4","4275.88","4329.61","4330.05","4156.13","4548.49","4986.84","4347.01"
"Service Name ap-southeast-1","2023-03","2023-04","2023-05","2023-06","2023-07","2023-08","2023-09","2023-10","2023-11","2023-12","2024-01","2024-02"
"AWS CloudWatch","0.74","0.0","0.0","0.0","0.0","0.0","0.0","0.0","0.0","0.0","0.0","0.0"
"AWS ELB","21.05","19.76","21.62","20.51","20.87","21.52","20.37","21.11","20.32","20.19","20.8","22.89"
"AWS S3","36.04","36.25","36.22","36.54","36.99","36.86","36.93","36.44","36.31","36.58","35.21","35.51"
"AWS VPC","0","0","0","0","0.0","0.0","0.0","0.0","0.0","0.0","0.0","27.85"
"EBS Volumes gp2","50.88","50.88","50.88","50.88","50.88","50.88","50.88","50.88","50.88","50.88","51.65","52.8"
...
</code></pre>
<p>But when I go to google sheets I see that there is no blank row in the spreadsheet:</p>
<p><a href="https://i.sstatic.net/XpucP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XpucP.png" alt="result" /></a></p>
|
<python><google-sheets>
|
2024-03-25 17:06:13
| 1
| 10,590
|
Red Cricket
|
78,220,655
| 1,421,907
|
How to enforce sympy to handle simplification with complex numbers?
|
<p>I am quite new on sympy and I try to make a matrix multiplication. The matrices are complex.</p>
<p>Here is the code</p>
<pre><code>import sympy as sp
eps = sp.exp(2 * sp.pi * sp.I / 6)
eps_star = sp.conjugate(eps)
M = 1 / sp.sqrt(6) * sp.Matrix([
[1, 1, 1, 1, 1, 1],
[1, eps_star, -eps, -1, -eps_star, eps],
[1, -eps, -eps_star, 1, -eps, -eps_star],
[1, eps, -eps_star, -1, -eps, eps_star],
[1, -eps_star, -eps, 1, -eps_star, -eps],
[1, -1, 1, -1, 1, -1]
]).transpose()
Minv = sp.conjugate(M).transpose()
</code></pre>
<p>If you compute the product <code>M * Minv</code> that should give the identity matrix you get one on the diagonal, but off diagonal terms are not zero.</p>
<p>You get for example</p>
<pre><code>1/3 -1/3 eps - 1/3 eps_star
</code></pre>
<p>This is actually zero, because it simplifies in <code>1/3 - 1/3 [2 cos (pi / 3)] = 1/3 - 1/3 = 0</code>. I tried to apply the <code>simplify</code> or <code>nsimplify</code> methods but it does not give me zero. Maybe it is due to a floating error issue.</p>
<p>Is there a way to manage this using sympy without using the fact that you know which term is zero.</p>
<p>This matrix is actually used in my case to switch from one basis to another in which an operator is diagonal. At the end, the product I want to compute is thus <code>Minv * H * M</code>. With the current issue, its impossible to see that the result is diagonal.</p>
<p>Here is the H matrix:</p>
<pre><code>alpha, beta = sp.symbols("alpha beta")
H = sp.Matrix([
[alpha, beta, 0, 0, 0, beta],
[beta, alpha, beta, 0, 0, 0],
[0, beta, alpha, beta, 0, 0],
[0, 0, beta, alpha, beta, 0],
[0, 0, 0, beta, alpha, beta],
[beta, 0, 0, 0, beta, alpha],
])
</code></pre>
|
<python><matrix><sympy><complex-numbers>
|
2024-03-25 17:04:17
| 1
| 9,870
|
Ger
|
78,220,644
| 1,863,912
|
how can i fix the ImportError cannot import name 'get_accounts'
|
<p>hi im trying out python and django for the first time and im trying to set up a simple route to display some data. The issue im running into is that when i try to run the webserver i keep getting this importError regarding one of the functions in my views.py file located in the accounts folder.
any help would be greatly appreciated!!!</p>
<p>django version: 5.0.3
python version: 3.11.8</p>
<p>Error - ImportError: cannot import name 'get_accounts' from 'accounts.views'</p>
<p>/accounts/views.py</p>
<pre><code>from django.http import JsonResponse
from .models import Account
def get_accounts():
accounts = Account.objects.all()
data = [{"id": account.id, "type": account.type, "balance": account.balance,"transactions": account.transactions} for account in accounts]
return JsonResponse(data, safe=False)
</code></pre>
<p>/accounts/urls.py</p>
<pre><code>from django.urls import path
from . import views
app_name = "accounts"
urlpatterns = [
path('accounts_list/', views.get_accounts, name='account-list'),
]
</code></pre>
<p>urls.py</p>
<pre><code>from django.urls import path
from accounts.views import get_accounts
urlpatterns = [
path('api/accounts/', get_accounts),
]
</code></pre>
<p>settings.py</p>
<pre><code># Application definition
INSTALLED_APPS = [
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
#custom apps
'accounts',
]
</code></pre>
|
<python><django>
|
2024-03-25 17:02:16
| 0
| 359
|
jordan
|
78,220,488
| 447,426
|
python mockito mocking of single method not working
|
<p>In my class under test i have this call that i want to mock:</p>
<pre><code>from models.crud import create_plane, get_plane_by_id
class PersistAircraftEvents:
...
def persist_aircraft_event(self, aircraft_event: ReceivedAircraftEvent):
...
# check if aircraft exists
aircraft = get_plane_by_id(self._db, aircraft_id) # this i want to mock
</code></pre>
<p>in my test i have</p>
<pre><code>from mockito import when
import models.crud
class TestPersistAircraftEvents:
def test_unknown_aircraft_type(self):
...
db: Session = mock()
service = PersistAircraftEvents(db)
event = ReceivedAircraftEvent(event_time=datetime.now(), aircraft_id="what?", event_type="test")
when(models.crud).get_plane_by_id(...).thenReturn(None)
service.persist_aircraft_event(event)
</code></pre>
<p>but the call is not mocked and the real function is called. I guess something is wrong with my import?</p>
<p>I also tried</p>
<pre><code>import models
from model.crud import get_plane_by_id
</code></pre>
<p>even using <code>import models as something</code></p>
<p>So how to mock arbitrary method calls correctly?</p>
|
<python><mockito-python>
|
2024-03-25 16:29:46
| 1
| 13,125
|
dermoritz
|
78,220,433
| 5,023,667
|
How to remove traceability relations dynamically in python-sphinx conf.py
|
<p>I'm using the traceability plugin for python-sphinx: <a href="https://melexis.github.io/sphinx-traceability-extension/readme.html" rel="nofollow noreferrer">sphinx traceaility plugin</a></p>
<p>The test has the following .rst documentation:</p>
<pre><code>@rst
.. item:: TST-Example
:applicable_to: PRD-ALL
@endrst
</code></pre>
<p>and I want to convert <code>PRD-ALL</code> to a list of predefined products <code>PRD-A PRD-B</code>.
So I was thinking to add this logic inside the <code>traceability_callback_per_item</code> callback which is provided by the plugin:</p>
<pre class="lang-py prettyprint-override"><code>def traceability_callback_per_item(name, collection):
item = collection.get_item(name)
if name.startswith('TST-'):
applicable_to = list(item.iter_targets('applicable_to'))
if 'PRD-ALL' in applicable_to:
collection.add_relation(item.identifier, 'applicable_to', 'PRD-A')
collection.add_relation(item.identifier, 'applicable_to', 'PRD-B')
# remove PRD-ALL from 'applicable_to' relation:
# HOW?
</code></pre>
<p>I tried:</p>
<pre class="lang-py prettyprint-override"><code>item.remove_targets('PRD-ALL', explicit=True, implicit=True, relations=['applicable_to'])
</code></pre>
<p>but then I get the warning:</p>
<pre><code>WARNING: No automatic reverse relation: TST-Example applicable_to PRD-ALL
</code></pre>
<p>which fails the build as it intentionally runs with <code>-W</code>.</p>
<p>Any idea?</p>
|
<python><python-sphinx><traceability>
|
2024-03-25 16:18:50
| 1
| 623
|
Shlomo Gottlieb
|
78,220,425
| 18,030,819
|
Gunicorn won't start Flask app because "Failed to parse 'app' as an attribute name or function call."
|
<p>I'm trying to run Flask app locally in the docker and I'm facing the following problem</p>
<p>app.py:</p>
<pre><code>class App:
'''Base flask app'''
def __init__(self):
self.app = Flask(__name__)
@property
def flask_app(self) -> Flask:
'''Return flask app'''
return self.app
app_instance = App()
</code></pre>
<p>My Dockerfile:</p>
<pre><code>...
CMD exec gunicorn --bind :8080 --workers 1 --threads 8 --timeout 0 app:app_instance.app
</code></pre>
<p>Error:</p>
<pre><code>Failed to parse 'app_instance.app' as an attribute name or function call.
</code></pre>
|
<python><flask><gunicorn>
|
2024-03-25 16:17:52
| 0
| 326
|
Devid Mercer
|
78,220,100
| 1,254,515
|
Get the n first elements from a list where list[i][0] == 'value'
|
<p>How do I get the <code>n</code> first elements from a list of lists where <code>list[i][0] == 'value'</code>?</p>
<p>Currently I do these two steps:</p>
<pre><code>items = [['value', 0], ['foo', 1], ['value', 2], ['value', 3], ['bar', 4]]
n = 2
tmp_list = []
for item in items:
if item[0] == 'value':
tmp_list.append(item)
i = 0
while i < n:
print(tmp_list[i])
i += 1
</code></pre>
<p>The expected results in this example are <code>['value', 0]</code> and <code>['value', 2]</code>.</p>
<p>It works but it doesn't look very neat. I have a nagging feeling it can be done better, I just can't find the solution.</p>
|
<python>
|
2024-03-25 15:20:33
| 2
| 323
|
Oliver Henriot
|
78,220,097
| 4,673,585
|
Python azure function app not reading sqlite db file correctly
|
<p>I have a storage account with a container named abc360. There is a folder in this container named sqlite_db_file. Sqlite db files will be dropped in this folder. File name would like this:</p>
<pre><code>ABC100102_2_20230202.db
</code></pre>
<p>Path to the file looks like:</p>
<pre><code>abc360/sqlite_db_file/ABC100102_2_20230202.db
</code></pre>
<p>Storage account and container details:</p>
<pre><code>Storage account: abc360stg
Container: abc360
</code></pre>
<p>I have an azure function app in python that is supposed to get triggered when file is dropped and copy data from file to a csv file and upload it back to same path.</p>
<p>So I created a function with a blob trigger, function.json looks like this:</p>
<pre><code>{
"scriptFile": "__init__.py",
"bindings": [
{
"name": "myblob",
"type": "blobTrigger",
"direction": "in",
"path": "abc360/sqlite_db_file/{name}.db",
"connection": "abc360stg_STORAGE"
}
]
}
</code></pre>
<p>For now I am just trying to fetch all the tables that are in db file and below is my code:</p>
<pre><code>import logging
import sqlite3
import os
import csv
from azure.functions import InputStream
class DataMigrator:
def __init__(self, fileName, connection_string):
self.fileName = fileName
self.connection_string = connection_string
def connect_sqlite(self):
return sqlite3.connect(self.fileName)
def get_table_names(self, cursor_sqlite):
logging.info(cursor_sqlite)
cursor_sqlite.execute("SELECT name FROM sqlite_master;")
return cursor_sqlite.fetchall()
def main(myblob: InputStream):
try:
blob_name = myblob.name.split("/")[-1]
logging.info(blob_name)
migrator = DataMigrator(blob_name, connection_string)
conn_sqlite = migrator.connect_sqlite()
logging.info("Connected to SQLite database successfully")
cursor_sqlite = conn_sqlite.cursor()
tables = migrator.get_table_names(cursor_sqlite)
logging.info(f"Tables in SQLite file: {tables}")
except Exception as e:
logging.error(f"Error: {str(e)}")
finally:
# Close SQLite connection
if conn_sqlite:
conn_sqlite.close()
</code></pre>
<p>Code is working fine, it connects to sqlite db file fine, but it returns an emprt array for list of tables. When I connect locally with a folder in my local drive, it works fine and lists everything.</p>
<p>Output (when db file is in storage account):</p>
<pre><code>Tables in SQLite file: []
</code></pre>
<p>Following code is used because myblob.name returns abc360/sqlite_db_file/ABC100102_2_20230303.db and all i want is the filename:</p>
<pre><code>myblob.name.split("/")[-1]
</code></pre>
<p>I am wondering is there something else thats needed here for me to read db file that is residing in storage account?</p>
<p>Help would be really appreciated.</p>
|
<python><sqlite><azure-functions><azure-storage-account>
|
2024-03-25 15:19:59
| 1
| 337
|
Rahul Sharma
|
78,220,062
| 8,869,570
|
Canonical way to ensure float point division across py2 and py3?
|
<p>I have a function that needs to support both py2 and py3, unfortunately.</p>
<p>Currently, there's a division op:</p>
<pre><code>n = a / b
</code></pre>
<p><code>a</code> and <code>b</code> are usually whole numbers, but represented as floats, but from what I can see there's no guarantee that either is a float, so the division could result in different behavior in py2 vs py3.</p>
<p>I plan on changing it to:</p>
<pre><code>n = float(a) / b
</code></pre>
<p>which should guarantee float division in both versions. Is this the canonical way of doing this in python or is there a more accepted approach?</p>
|
<python><python-2.x><integer-division>
|
2024-03-25 15:14:11
| 0
| 2,328
|
24n8
|
78,220,036
| 7,331,538
|
Scrapy CrawlProcess is throwing reactor already installed
|
<p>I have N workers in paralell (instantiated from docker) that trigger a scrapy crawl from a script via <code>CrawlProcess</code>. Why is it that I am getting this error: <code>error: reactor already installed</code>? I simply have a function:</p>
<pre><code>def foo():
process = CrawlerProcess(settings)
stats = process.crawl(**kwargs, stats_callback=lambda stats: stats)
process.start()
</code></pre>
<p>Shouldn't scrapy run a crawl in a separate process and the reactor will have to be reinstalled?</p>
<p>I don't want to use <code>crawlRunner</code> why is this occuring?</p>
<p>and scrapy: <strong>Scrapy==2.6.1</strong></p>
|
<python><scrapy><twisted>
|
2024-03-25 15:09:40
| 0
| 2,377
|
bcsta
|
78,219,476
| 10,722,752
|
How to sort x-axis and get bar labels using plt
|
<p>I am using a subplot to plot 3 dataframes as below:</p>
<pre><code>np.random.seed(0)
df1 = pd.DataFrame({'id' : np.random.choice(['a', 'b', 'c', 'd', 'e'], size = 20),
'score' : np.random.normal(size = 20)})
df1['score'] = np.abs(df1['score'])
df2 = pd.DataFrame({'id' : np.random.choice(['a', 'b', 'c', 'd', 'e'], size = 20),
'score' : np.random.normal(size = 20)})
df2['score'] = np.abs(df2['score'])
df3 = pd.DataFrame({'id' : np.random.choice(['a', 'b', 'c', 'd', 'e'], size = 20),
'score' : np.random.normal(size = 20)})
df3['score'] = np.abs(df3['score'])
</code></pre>
<p>for which I am getting:</p>
<p><a href="https://i.sstatic.net/7hWE7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7hWE7.png" alt="enter image description here" /></a></p>
<p>Could someone please let me know how to order x-axis from <code>a</code> to <code>e</code> and get the value labels on the bar.</p>
|
<python><pandas><matplotlib>
|
2024-03-25 13:33:54
| 1
| 11,560
|
Karthik S
|
78,219,407
| 2,726,335
|
Python SFTP Connection Fabric files.exist does not find the file
|
<p>I'm maintaining code written by a retired colleague. This code uses <code>Fabric</code> 2.7.1 to download files from our old FTP server.</p>
<pre><code>from fabric import Connection
from patchwork import files
...
if not files.exists(conn, groups['file']):
raise FileNotFoundError(f'No such file: {groups["file"]}')
...
</code></pre>
<p>This code worked for many years with our <strong>old FTP server</strong>. We must use SFTP now and the new server no longer allows <code>ssh</code> from internal IPs. I guess the above code used in fact <code>ssh</code> and not <code>ftp</code> as my retired colleague stated.</p>
<p>With the <strong>new</strong> <strong>server</strong>, It always runs into the <code>FileNotFoundError</code>, the file and path are fully correct and checked a dozen times. I can see the file on my sftp shell.</p>
<p>I wonder, why a third party library should allow an <code>Fabric</code> object to be handed over? <code>files.exists</code> seems to be a standard python library, not specialized for SFTP. 'I don't fully understand how this could work for years.</p>
<p>I also found <a href="https://docs.fabfile.org/en/2.5/getting-started.html?highlight=failed#bringing-it-all-together" rel="nofollow noreferrer">this snippet</a></p>
<pre><code> if c.run('test -f /opt/mydata/myfile', warn=True).failed:
</code></pre>
<p>but this leads to "This service allows sftp connections only."</p>
<p>So, how do I properly check if a files exists with <code>fabric</code> 2.7.1? The docs says there is only a <code>put</code> and a <code>get</code> method. I'm unable to upgrade Fabric as this breaks dependencies. There are so many code parts like this that I need a simple-as-possible approach to get it working again.</p>
|
<python><python-3.x><sftp><fabric>
|
2024-03-25 13:24:05
| 1
| 617
|
Powerriegel
|
78,219,238
| 7,803,702
|
Why does ` arr[n:].extend(arr[:n]) ` in the 1st example code print the original arr?
|
<p>1st Example</p>
<pre class="lang-py prettyprint-override"><code>def LShift(arr,n):
arr[n:].extend(arr[:n]) # changes arr itself
print (arr)
arr = [10,20,54,5656,30,40,12,11]
LShift(arr,2)
</code></pre>
<p>Output <code>[10,20,54,5656,30,40,12,11]</code> I do not know why the original array is printed. Place holders are required to get the desired result.</p>
<pre class="lang-py prettyprint-override"><code>def LShift(arr, n):
a1 = arr[n:]
a2 = arr[:n]
a1.extend(a2)
arr = a1
print(arr)
arr = [10, 20, 54, 5656, 30, 40, 12, 11]
LShift(arr, 2)
</code></pre>
<p>will work. Output=<code>[54, 5656, 30, 40, 12, 11, 10, 20]</code></p>
|
<python>
|
2024-03-25 12:56:50
| 2
| 958
|
Carl_M
|
78,219,217
| 11,159,734
|
Langchain | ImportError: cannot import name 'create_model' from 'langchain_core.runnables.utils'
|
<p>I just installed the latest version of langchain in a new empty conda env (python 3.11.7) using pip. Pip now lists me the following langchain packages:</p>
<pre><code>langchain 0.1.13
langchain-community 0.0.29
langchain-core 0.1.33
langchain-text-splitters 0.0.1
</code></pre>
<p>I just followed the basic tutorial on <a href="https://python.langchain.com/docs/integrations/tools/search_tools" rel="nofollow noreferrer">Langchain Search Tools</a>. However the very first import is already throwing an error.</p>
<pre><code>from langchain.agents import AgentType
</code></pre>
<p>Error message:</p>
<pre><code>ImportError: cannot import name 'create_model' from 'langchain_core.runnables.utils'
</code></pre>
<p>Entire traceback for reference:</p>
<pre><code>---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[14], line 7
5 import openai
6 from langchain_openai import AzureChatOpenAI
----> 7 from langchain.agents import AgentType
8 from azure.storage.blob import BlobServiceClient
9 from azure.identity import DefaultAzureCredential
File c:\Users\a704601\AppData\Local\miniconda3\envs\spie\Lib\site-packages\langchain\agents\__init__.py:34
31 from pathlib import Path
32 from typing import Any
---> 34 from langchain_community.agent_toolkits import (
35 create_json_agent,
36 create_openapi_agent,
37 create_pbi_agent,
38 create_pbi_chat_agent,
39 create_spark_sql_agent,
40 create_sql_agent,
41 )
42 from langchain_core._api.path import as_import_path
44 from langchain.agents.agent import (
45 Agent,
46 AgentExecutor,
(...)
50 LLMSingleActionAgent,
51 )
File c:\Users\a704601\AppData\Local\miniconda3\envs\spie\Lib\site-packages\langchain_community\agent_toolkits\__init__.py:41, in __getattr__(name)
39 def __getattr__(name: str) -> Any:
40 if name in _module_lookup:
---> 41 module = importlib.import_module(_module_lookup[name])
42 return getattr(module, name)
43 raise AttributeError(f"module {__name__} has no attribute {name}")
File c:\Users\a704601\AppData\Local\miniconda3\envs\spie\Lib\importlib\__init__.py:126, in import_module(name, package)
124 break
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File c:\Users\a704601\AppData\Local\miniconda3\envs\spie\Lib\site-packages\langchain_community\agent_toolkits\powerbi\base.py:13
7 from langchain_core.language_models import BaseLanguageModel
9 from langchain_community.agent_toolkits.powerbi.prompt import (
10 POWERBI_PREFIX,
11 POWERBI_SUFFIX,
12 )
---> 13 from langchain_community.agent_toolkits.powerbi.toolkit import PowerBIToolkit
14 from langchain_community.utilities.powerbi import PowerBIDataset
16 if TYPE_CHECKING:
File c:\Users\a704601\AppData\Local\miniconda3\envs\spie\Lib\site-packages\langchain_community\agent_toolkits\powerbi\toolkit.py:9
7 from langchain_core.language_models import BaseLanguageModel
8 from langchain_core.language_models.chat_models import BaseChatModel
----> 9 from langchain_core.prompts import PromptTemplate
10 from langchain_core.prompts.chat import (
11 ChatPromptTemplate,
12 HumanMessagePromptTemplate,
13 SystemMessagePromptTemplate,
14 )
15 from langchain_core.pydantic_v1 import Field
File c:\Users\a704601\AppData\Local\miniconda3\envs\spie\Lib\site-packages\langchain_core\prompts\__init__.py:27
1 """**Prompt** is the input to the model.
2
3 Prompt is often constructed
(...)
25
26 """ # noqa: E501
---> 27 from langchain_core.prompts.base import BasePromptTemplate, format_document
28 from langchain_core.prompts.chat import (
29 AIMessagePromptTemplate,
30 BaseChatPromptTemplate,
(...)
35 SystemMessagePromptTemplate,
36 )
37 from langchain_core.prompts.few_shot import (
38 FewShotChatMessagePromptTemplate,
39 FewShotPromptTemplate,
40 )
File c:\Users\a704601\AppData\Local\miniconda3\envs\spie\Lib\site-packages\langchain_core\prompts\base.py:31
29 from langchain_core.runnables import RunnableConfig, RunnableSerializable
30 from langchain_core.runnables.config import ensure_config
---> 31 from langchain_core.runnables.utils import create_model
33 if TYPE_CHECKING:
34 from langchain_core.documents import Document
ImportError: cannot import name 'create_model' from 'langchain_core.runnables.utils'
</code></pre>
<p>Does anyone know how to fix this or an older langchain version where this works?</p>
|
<python><langchain>
|
2024-03-25 12:53:07
| 0
| 1,025
|
Daniel
|
78,219,123
| 8,289,095
|
Telemetry in a Python Bottle API
|
<p>I have a simple Python Bottle API used to support an iOS app. It is hosted on Heroku.</p>
<p>I use TelemetryDeck for analytics, so I want to send signals with each API call. I'm not using Node so cannot use their JS SDK and thus I have to send signals over HTTP.</p>
<p>Here is my function for sending telemetry signals.</p>
<pre><code>def send_telemetry(type, success):
url = TELEMETRY_DECK_URL
headers = {
"Content-Type": "application/json; charset=utf-8"
}
data = [
{
"isTestMode": "false",
"appID": TELEMETRY_DECK_APP_ID,
"clientUser": "apiProcess",
"sessionID": "",
"type": "API Called",
"payload": {
"api_type": type,
"api_success": success
}
}
]
response = requests.post(url, headers=headers, json=data)
# Print to Heroku logs
print(response.text)
</code></pre>
<p>Here is an example of where I call the telemetry method from an app route.</p>
<pre><code>@app.get('/')
def example():
# Other code
if result:
send_telemetry("type", "true")
response.status = 200
return make_response(result)
else:
send_telemetry("type", "false")
response.status = 404
return make_error("Error (404)")
</code></pre>
<p>This all works fine, but I am concerned that the telemetry method is running synchronously before Bottle returns a response, and of course it cannot run after the response is returned due to WSGI limitations.</p>
<p>Is there a way to asynchronously trigger an analytics method and not delay execution at the callsite in the route, so performance is not impacted. I'm happy for a "fire and forget" solution and don't need to receive a callback.</p>
<p>I looked at solutions using <code>multithreading</code> but they looked more complex than what I needed.</p>
|
<python><bottle><telemetry>
|
2024-03-25 12:35:32
| 1
| 4,441
|
Chris
|
78,219,090
| 17,471,060
|
Pythonic way to locate index and column of polars dataframe based on absolute sorted values
|
<p>I have a dataframe containing multiple columns out of which 1st column is regarded as index. The remaining columns containing values that I want to sort by absolute key and thereafter create new dataframe indicating original dataframe location in terms of index and column of the sorted values. Highly appreciate if somebody can shade some light on how to do it in more pythonic way.</p>
<pre><code>import polars as pl
import numpy as np
df = pl.DataFrame({
"name": ["a", "b", "c", "d", "e", "f"],
"val1": [1.2, -2.3, 3, -3.3, 2.2, -1.3],
"val2": [5, 2, 2, -4, -3, -6]})
vals = df[df.columns[1:]].to_numpy()
sorted_vals = sorted(tuple(vals.reshape(-1,)), key=abs)[::-1]
data = []
for sv in sorted_vals:
i, c = int(np.where(vals==sv)[0][0]), int(np.where(vals==sv)[1][0])
data.append([sv, df[i,'name'], df.columns[1+c]])
new_df = pl.DataFrame(data=data, orient='row', schema=['val', 'name', 'col'])
print(new_df)
# shape: (12, 3)
# ┌──────┬──────┬──────┐
# │ val ┆ name ┆ col │
# │ --- ┆ --- ┆ --- │
# │ f64 ┆ str ┆ str │
# ╞══════╪══════╪══════╡
# │ -6.0 ┆ f ┆ val2 │
# │ 5.0 ┆ a ┆ val2 │
# │ -4.0 ┆ d ┆ val2 │
# │ -3.3 ┆ d ┆ val1 │
# │ … ┆ … ┆ … │
# │ 2.0 ┆ b ┆ val2 │
# │ 2.0 ┆ b ┆ val2 │
# │ -1.3 ┆ f ┆ val1 │
# │ 1.2 ┆ a ┆ val1 │
# └──────┴──────┴──────┘
</code></pre>
|
<python><dataframe><python-polars>
|
2024-03-25 12:30:27
| 1
| 344
|
beta green
|
78,218,927
| 8,849,071
|
Why do we need to use a generator when creating a connection to the database in FastAPI?
|
<p>I have been learning a little bit about API. I have seen this snippet suggested in the <a href="https://fastapi.tiangolo.com/tutorial/sql-databases/#create-the-database-tables" rel="nofollow noreferrer">documentation</a>:</p>
<pre class="lang-py prettyprint-override"><code>def get_db():
db = SessionLocal()
try:
yield db
finally:
db.close()
</code></pre>
<p>I'm not sure why is needed. That function is used with the <code>Depends</code> functionality of fast api, and everything works fine. Nonetheless, if I want to write a test and use the db, then I need to do <code>next(get_db())</code> to get the value. I guess I could also just run <code>SessionLocal()</code>, but I was curious.</p>
|
<python><database><fastapi>
|
2024-03-25 12:02:17
| 2
| 2,163
|
Antonio Gamiz Delgado
|
78,218,783
| 10,800,115
|
String to Literal throws incompatible type error
|
<p>Running mypy on the below snippet:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Literal, Final
def extract_literal(d2: Literal["b", "c"]) -> str:
if d2 == "b":
return "BA"
if d2 == "c":
return "BC"
def model(d2_name: str = "b-123") -> None:
if d2_name[0] not in ["b", "c"]:
raise AssertionError
d2: Final = d2_name[0]
print(extract_literal(d2))
</code></pre>
<p>throws:</p>
<pre><code>typing_test.py:17: error: Argument 1 to "extract_literal" has incompatible type "str";
expected "Literal['b', 'c']" [arg-type]
print(extract_literal(d2))
^~
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>For context, <code>d2_name</code> is guaranteed to be either <code>"b-number"</code> or <code>"c-number"</code>. Based on its first letter I want to output a different message.</p>
<pre><code>Python version: 3.11
mypy version: 1.9.0
</code></pre>
<p>What change would be required to allow mypy to pass?</p>
|
<python><mypy><python-typing>
|
2024-03-25 11:34:21
| 1
| 365
|
ashnair1
|
78,218,653
| 713,200
|
How to resolve is_selected() not returning True even if the checkbox is selected?
|
<p>I'm using Selenium + python to check the status of a checkbox but it always returns False.</p>
<p>Checkbox HTML
<a href="https://i.sstatic.net/6rBsL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6rBsL.png" alt="Checkbox HTML" /></a></p>
<p>I'm using XPath</p>
<pre><code>value = "//span[@title='Select Physical']"
</code></pre>
<p>I'm using the code below to evaluate if the checkbox is checked or not.</p>
<pre><code>if driver.find_element(By.XPATH, value).is_selected():
self.click_button_by_xpath(value)
</code></pre>
<p>Not sure what I'm I missing here.</p>
|
<python><selenium-webdriver><checkbox><webdriver>
|
2024-03-25 11:13:11
| 1
| 950
|
mac
|
78,218,345
| 3,759,627
|
In Polars, how do you multiply a column of floats with a column of lists?
|
<p>Given an example dataframe where we have column 'b' containing lists, and each list has the same length (so it also could be converted to arrays)</p>
<pre class="lang-py prettyprint-override"><code>df_test = pl.DataFrame({'a': [1., 2., 3.], 'b': [[2,2,2], [3,3,3], [4,4,4]]})
df_test
shape: (3, 2)
┌─────┬───────────┐
│ a ┆ b │
│ --- ┆ --- │
│ f64 ┆ list[i64] │
╞═════╪═══════════╡
│ 1.0 ┆ [2, 2, 2] │
│ 2.0 ┆ [3, 3, 3] │
│ 3.0 ┆ [4, 4, 4] │
└─────┴───────────┘
</code></pre>
<p>How do I end up with</p>
<pre class="lang-py prettyprint-override"><code>shape: (3, 3)
┌─────┬───────────┬────────────────────┐
│ a ┆ b ┆ new │
│ --- ┆ --- ┆ --- │
│ f64 ┆ list[i64] ┆ list[f64] │
╞═════╪═══════════╪════════════════════╡
│ 1.0 ┆ [2, 2, 2] ┆ [2.0, 2.0, 2.0] │
│ 2.0 ┆ [3, 3, 3] ┆ [6.0, 6.0, 6.0] │
│ 3.0 ┆ [4, 4, 4] ┆ [12.0, 12.0, 12.0] │
└─────┴───────────┴────────────────────┘
</code></pre>
<p>without using <code>map_rows</code>?</p>
<p>The best way I could think of was to use <code>map_rows</code>, which is like <code>apply</code> in pandas. Not really the most efficient thing according to docs but it works:</p>
<pre class="lang-py prettyprint-override"><code>df_temp = df_test.map_rows(lambda x: ([x[0] * i for i in x[1]],))
df_temp.columns = ['new']
df_test = df_test.hstack(df_temp)
</code></pre>
|
<python><dataframe><list><python-polars>
|
2024-03-25 10:19:46
| 4
| 339
|
Horace
|
78,218,276
| 12,379,095
|
Getting error "TypeError: no numeric data to plot" in a Time Series ARIMA analysis
|
<p>I am trying to follow a tutorial whereby an ARIMA time series analysis using <em>differenced data</em> is being done:</p>
<p>The following is the python code:</p>
<pre><code>def difference(dataset):
diff = list()
for i in range(1, len(dataset)):
value = dataset[i] - dataset[i - 1]
diff.append(value)
return Series(diff)
series = pd.read_csv('dataset.csv')
X = series.values # The error in building the list can be seen here
X = X.astype('float32')
stationary = difference(X)
stationary.index = series.index[1:]
...
stationary.plot()
pyplot.show()
</code></pre>
<p>When the process reaches the plotting stage I get the error:</p>
<blockquote>
<p>TypeError: no numeric data to plot</p>
</blockquote>
<p>Tracing back, I find that the data that is being parsed is resulting in a collection of array. Saving the collection <strong>stationary</strong> as <code>*.csv</code> file gives me a list like:</p>
<pre><code>[11.]
[0.]
[16.]
[45.]
[27.]
[-141.]
[46.]
</code></pre>
<p>Can somebody tell me what is going wrong here?</p>
<p>PS. I have exluded the parts of import of libraries</p>
<p><strong>Edit 1</strong></p>
<p>A section of the dataset is reproduced below:</p>
<pre><code>Year,Obs
1994,21
1995,62
1996,56
1997,29
1998,38
1999,201
</code></pre>
|
<python><pandas><matplotlib><machine-learning><arima>
|
2024-03-25 10:07:18
| 2
| 574
|
Stop War
|
78,218,201
| 12,101,193
|
Nested dir name in python multiprocess shared_memory - Invalid argument
|
<p>It seems SharedMemory only supports the flat name, does anyone know why or any ref? I have searched on the internet for quite a long time, but didn't find any relative..</p>
<pre><code>from multiprocessing import shared_memory
shm = shared_memory.SharedMemory(name='test_dir/test_name', size=16, create=True)
</code></pre>
<p>Got</p>
<p>Invalid argument: '/test_dir/test_name'</p>
|
<python><multiprocessing><shared-memory>
|
2024-03-25 09:50:46
| 1
| 524
|
moon548834
|
78,218,026
| 14,196,341
|
Type inference across functions / reuse type hints
|
<p>I have one function within an external library that has a complicated type hint ("inner"). In my code, I have another function ("outer") calling this function. One of the paramters of this function will be passed on to the hinted function. I would like to have mypy typecheck the parameter. How can I achieve this without copying the complicated type hint?</p>
<p>A <a href="https://mypy-play.net/?mypy=latest&python=3.12&gist=cfb5d87871ccc7cebc42f2b928a84a68:" rel="nofollow noreferrer">minimal example</a> would be</p>
<pre class="lang-py prettyprint-override"><code>def inner(x: int) -> None:
...
def outer(x, y: str) -> None:
"""function that needs a typehint for x"""
inner(x)
# This should throw an error
outer(x="a", y="a")
</code></pre>
<p>So I am looking for a way to define a TypeAlias based in the types of a parameter of an external inner function (which I cannot edit) without copying the type hints (and updating them if the inner functions type hints change)</p>
|
<python><mypy><python-typing>
|
2024-03-25 09:11:13
| 1
| 392
|
Felix Zimmermann
|
78,217,890
| 2,989,330
|
How to mock a function in multiple modules
|
<p>I am currently working on extending a third-party code base. This code base unfortunately tightly couples its <code>get_args</code> with every other function. <code>get_args</code> is basically just a getter for a global object <code>_ARGS</code>. Now, I'd like to modify the args for a single function call without actually modifying the global object itself.</p>
<p>To this end, I used <code>unittest.mock.patch</code> to patch the <code>get_args</code> function, and while it succeeds in patching it in my target function <code>f</code>, it does not translate to functions called by <code>f</code> if they are in other modules. The reason is, of course, that I only patch <code>get_args</code> in the module with the called function <code>f</code>.</p>
<p>Is it possible to mock <em>every</em> subsequent call to <code>get_args</code> within my <code>with</code> block?</p>
<p>My current approach might not be the best one to tackle this problem, so I'm also open to alternative solutions to this problem.</p>
<p>Minimum reproducible example:</p>
<p>My code <code>main.py</code>:</p>
<pre><code>from argparse import Namespace
import traceback
import unittest.mock
from mod0 import get_args
from mod1 import f1
class _MockCnt:
def __init__(self):
self._args = Namespace(**{
**get_args().__dict__,
'a': 'a',
})
def new_get_args(self):
print("new_get_args was called")
traceback.print_stack()
return self._args
def main():
with unittest.mock.patch('mod1.get_args', new=_MockCnt().new_get_args):
f1()
main()
</code></pre>
<p>Module <code>mod0</code>:</p>
<pre><code>from argparse import Namespace
_ARGS = None
def get_args():
global _ARGS
if _ARGS is None:
_ARGS = Namespace(a=1, b=2)
return _ARGS
</code></pre>
<p>Module <code>mod1</code>:</p>
<pre><code>from mod0 import get_args
from mod2 import f2
def f1():
args = get_args()
args.c = 3
print(f"[f1] Args: {args} (id {id(args)})")
f2()
</code></pre>
<p>Module <code>mod2</code>:</p>
<pre><code>from mod0 import get_args
def f2():
args = get_args()
print(f"[f2] Args: {args} (id {id(args)})")
</code></pre>
<p>Result:</p>
<pre><code>new get_args was called
File "/tmp/main.py", line 26, in <module>
main()
File "/tmp/main.py", line 24, in main
f1()
File "/tmp/mod1.py", line 6, in f1
args = get_args()
File "/tmp/main.py", line 19, in new_get_args
traceback.print_stack()
[f1] Args: Namespace(a='a', b=2, c=3) (id 281472856422576)
[f2] Args: Namespace(a=1, b=2) (id 281472856399392)
</code></pre>
<p>What I need (leave out the stack traces):</p>
<pre><code>new get_args was called
[f1] Args: Namespace(a='a', b=2, c=3) (id 281472856422576)
new get_args was called
[f2] Args: Namespace(a='a', b=2, c=3) (id 281472856422576)
</code></pre>
|
<python><mocking>
|
2024-03-25 08:42:23
| 2
| 3,203
|
Green 绿色
|
78,217,690
| 6,540,762
|
Spire.Xls - spire.xls.common.SpireException: TypeInitialization_Type_NoTypeAvailable error when instantiating Workbook object
|
<p>I'm currently using <code>Spire.Xls==14.2.2</code> (the only version available at this point of writing), using macOS Intel chip.</p>
<p>When running a simple code snippet below:</p>
<pre><code>from spire.xls import Workbook
workbook = Workbook()
</code></pre>
<p>I'm getting the below error:</p>
<pre><code>Traceback (most recent call last):
File "/.../test.py", line 10, in <module>
workbook = Workbook()
File "/.../lib/python3.9/site-packages/plum/function.py", line 642, in __call__
return self.f(self.instance, *args, **kw_args)
File "/.../plum/function.py", line 592, in __call__
return _convert(method(*args, **kw_args), return_type)
File "/.../spire/xls/Workbook.py", line 18, in __init__
intPtr = CallCFunction(GetDllLibXls().Workbook_CreateWorkbook)
File "/.../spire/xls/common/__init__.py", line 109, in CallCFunction
raise SpireException(info)
spire.xls.common.SpireException: TypeInitialization_Type_NoTypeAvailable: at System.Runtime.CompilerServices.ClassConstructorRunner.EnsureClassConstructorRun(StaticClassConstructionContext*) + 0x148
at System.Runtime.CompilerServices.ClassConstructorRunner.CheckStaticClassConstructionReturnNonGCStaticBase(StaticClassConstructionContext*, IntPtr) + 0x9
at Spire.Xls.Core.Spreadsheet.XlsWorksheet.InitializeCollections() + 0x2c2
at Spire.Xls.Core.Spreadsheet.XlsWorksheet..ctor(Object) + 0x74
at Spire.Xls.Core.Spreadsheet.Collections.XlsWorksheetsCollection.Add(String) + 0x7a
at Spire.Xls.Core.Spreadsheet.XlsWorkbook.spra(Int32) + 0x369
at Spire.Xls.Workbook..ctor() + 0x4f
at Spire.Xls.AOT.NLWorkbook.CreateWorkbook(IntPtr) + 0x4c
</code></pre>
<p>It seems to be trying to load some DLLs (under the GetDllLibXls) while I have installed a mac version of the library. Any idea how to get past this?</p>
|
<python><excel><spire.xls>
|
2024-03-25 07:57:57
| 1
| 1,546
|
chaooder
|
78,217,661
| 5,790,653
|
How to count values of each key in a list
|
<p>I have a list like this:</p>
<pre class="lang-py prettyprint-override"><code>list1 = [
{'state': 'active', 'name': 'Name1'},
{'state': 'active', 'name': 'Name2'},
{'state': 'inactive', 'name': 'Name3'},
{'state': 'active', 'name': 'Name2'},
{'state': 'active', 'name': 'Name1'},
{'state': 'inactive', 'name': 'Name3'},
{'state': 'inactive', 'name': 'Name4'},
{'state': 'active', 'name': 'Name1'},
{'state': 'inactive', 'name': 'Name2'},
]
</code></pre>
<p>I'm going to count how many of each <code>name</code> value I have. Like this:</p>
<pre><code>3 counts for Name1
3 counts for Name2
2 counts for Name3
1 counts for Name4
</code></pre>
<p>This is my current code:</p>
<pre class="lang-py prettyprint-override"><code>from collections import defaultdict
list2 = defaultdict(list)
for i in list1:
list2[i['name']].append(i['name'])
</code></pre>
<p>I thought I can count with this (totally failed attempt obviously):</p>
<pre class="lang-py prettyprint-override"><code>for x in list2:
sum(x)
</code></pre>
<p>How can I count it?</p>
|
<python>
|
2024-03-25 07:50:31
| 2
| 4,175
|
Saeed
|
78,217,260
| 4,171,008
|
Caching python in GitHub Actions
|
<p>I'm trying to cache the Python installation and dependencies in "Self-hosted" GitHub Action, the problem is the restore of cached pip is taking too long and results in timeout, and I could not figure out why.<br />
What I checked:<br />
-> <a href="https://stackoverflow.com/questions/68896173/issue-caching-python-dependencies-in-github-actions">Issue caching python dependencies in GitHub Actions</a><br />
-> <a href="https://stackoverflow.com/questions/75412207/github-actions-dont-reuse-cache">Github Actions don't reuse cache</a><br />
-> <a href="https://stackoverflow.com/questions/65655514/cache-is-not-being-correctly-loaded-in-github-actions">Cache is not being correctly loaded in Github actions</a></p>
<p>in addition to the official Cache repo Docs on GitHub. below is the config used</p>
<pre><code>- name: Set up Python
uses: actions/setup-python@v4
id: python311
with:
python-version: '3.11.8'
cache: 'pip'
cache-dependency-path: 'modules/**/requirements*.txt'
- name: Cache Python dependencies
uses: actions/cache@v4.0.2
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('**/requirements*.txt') }}
restore-keys: |
${{ runner.os }}-pip-
- name: Cache Python interpreter
uses: actions/cache@v4.0.2
with:
path: ${{ env.pythonLocation }} #/opt/hostedtoolcache/Python/3.11.8
key: ${{ runner.os }}-python-${{ hashFiles('/opt/hostedtoolcache/Python/3.11.8') }}
restore-keys: |
${{ runner.os }}-python-
- name: Install requirements
run: |
mkdir -p .output
pip install -r modules/localfiles/requirements.test.txt
</code></pre>
<p>The output:</p>
<pre class="lang-bash prettyprint-override"><code>Version 3.11.8 was not found in the local cache
Version 3.11.8 is available for downloading
Download from "https://github.com/actions/python-versions/releases/download/3.11.8-7809691605/python-3.11.8-linux-20.04-x64.tar.gz"
Extract downloaded archive
/usr/bin/tar xz --warning=no-unknown-keyword -C /runner/_work/_temp/ee3bccf9-3f77-4c32-97e6-429f1d6adb00 -f /runner/_work/_temp/9f59a8fb-fb9d-4d39-b3df-ffb7b7288fac
Execute installation script
Check if Python hostedtoolcache folder exist...
Creating Python hostedtoolcache folder...
Create Python 3.11.8 folder
Copy Python binaries to hostedtoolcache folder
Create additional symlinks (Required for the UsePythonVersion Azure Pipelines task and the setup-python GitHub Action)
Upgrading pip...
Looking in links: /tmp/tmpr1ai5afl
Requirement already satisfied: setuptools in /opt/hostedtoolcache/Python/3.11.8/x64/lib/python3.11/site-packages (65.5.0)
Requirement already satisfied: pip in /opt/hostedtoolcache/Python/3.11.8/x64/lib/python3.11/site-packages (24.0)
Collecting pip
Downloading pip-24.0-py3-none-any.whl.metadata (3.6 kB)
Downloading pip-24.0-py3-none-any.whl (2.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 299.7 kB/s eta 0:00:00
Installing collected packages: pip
Successfully installed pip-24.0
Create complete file
Successfully set up CPython (3.11.8)
/opt/hostedtoolcache/Python/3.11.8/x64/bin/pip cache dir
/home/runner/.cache/pip
Received 0 of 57628253 (0.0%), 0.0 MBs/sec
Received 25165824 of 57628253 (43.7%), 12.0 MBs/sec
Received 53433949 of 57628253 (92.7%), 17.0 MBs/sec
Received 53433949 of 57628253 (92.7%), 12.7 MBs/sec
Received 53433949 of 57628253 (92.7%), 10.2 MBs/sec
Received 53433949 of 57628253 (92.7%), 8.5 MBs/sec
Received 53433949 of 57628253 (92.7%), 7.3 MBs/sec
Received 53433949 of 57628253 (92.7%), 6.4 MBs/sec
Received 53433949 of 57628253 (92.7%), 5.7 MBs/sec
Received 53433949 of 57628253 (92.7%), 5.1 MBs/sec
Received 53433949 of 57628253 (92.7%), 4.6 MBs/sec
Received 53433949 of 57628253 (92.7%), 4.2 MBs/sec
Received 53433949 of 57628253 (92.7%), 3.9 MBs/sec
Received 53433949 of 57628253 (92.7%), 3.6 MBs/sec
Received 53433949 of 57628253 (92.7%), 3.4 MBs/sec
Received 53433949 of 57628253 (92.7%), 3.2 MBs/sec
Received 53433949 of 57628253 (92.7%), 3.0 MBs/sec
Received 53433949 of 57628253 (92.7%), 2.8 MBs/sec
Received 53433949 of 57628253 (92.7%), 2.7 MBs/sec
Received 53433949 of 57628253 (92.7%), 2.5 MBs/sec
Received 53433949 of 57628253 (92.7%), 2.4 MBs/sec
Received 53433949 of 57628253 (92.7%), 2.3 MBs/sec
Received 53433949 of 57628253 (92.7%), 2.2 MBs/sec
Received 53433949 of 57628253 (92.7%), 2.1 MBs/sec
Received 53433949 of 57628253 (92.7%), 2.0 MBs/sec
Received 53433949 of 57628253 (92.7%), 2.0 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.9 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.8 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.8 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.7 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.6 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.6 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.5 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.5 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.5 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.4 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.4 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.3 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.3 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.3 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.2 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.2 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.2 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.2 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.1 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.1 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.1 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.1 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.0 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.0 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.0 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.0 MBs/sec
Received 53433949 of 57628253 (92.7%), 1.0 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.9 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.9 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.9 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.9 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.9 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.9 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.8 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.8 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.8 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.8 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.8 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.8 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.8 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.8 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.7 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.7 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.7 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.7 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.7 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.7 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.7 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.7 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.7 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.7 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.7 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.6 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.6 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.6 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.6 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.6 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.6 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.6 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.6 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.6 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.6 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.6 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.6 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.6 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.6 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.5 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.5 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.5 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.5 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.5 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.5 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.5 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.5 MBs/sec
Received 53433949 of 57628253 (92.7%), 0.5 MBs/sec
Warning: Failed to restore: The operation cannot be completed in timeout.
pip cache is not found
</code></pre>
|
<python><caching><pip><github-actions><cicd>
|
2024-03-25 05:55:23
| 1
| 1,884
|
Ahmad
|
78,217,187
| 8,930,751
|
Schedule calculation exactly at every 15 mins using Python
|
<p>I have followed the example which is <a href="https://stackoverflow.com/questions/76834821/scheduling-python-calculations-using-schedule-library">shown here</a>.</p>
<p>It is working as expected, but with a small problem. If the do_job() method takes more than a minute to finish. It's waiting for another 1 min from end of the do_job() method and scheduling another calculation.</p>
<p>But I want it to start timings 1 min from the start of execution time. Is there a possibility to do that?</p>
<p><strong>EDIT</strong></p>
<pre><code>import schedule
import time
def job():
print("Process triggered at:", time.strftime("%Y-%m-%d %H:%M:%S"))
time.sleep(30)
print("Process ended at:", time.strftime("%Y-%m-%d %H:%M:%S"))
# Define the frequency (in seconds) at which the process should be triggered
frequency_seconds = 60 # Trigger every 60 seconds
# Schedule the job to run at the specified frequency
schedule.every(frequency_seconds).seconds.do(job)
# Main loop to keep the script running
while True:
schedule.run_pending()
time.sleep(1) # Sleep for 1 second to avoid high CPU usage
</code></pre>
<p>In the above code , the output generated is as below :</p>
<p>Process triggered at: 2024-03-25 06:57:23<br />
Process ended at: 2024-03-25 06:57:53<br />
Process triggered at: 2024-03-25 06:58:54<br />
Process ended at: 2024-03-25 06:59:24</p>
<p>It is observed that the next process is triggered after the first process has ended. The time difference between first process end time and second process start time is 60 seconds.</p>
<p>But my expectation is the time difference between first process trigger time and second process trigger time should be 60 seconds. 60 seconds should not be calculation from process end time. job() should be called exactly at 60 seconds interval. So the expected output will look like this :</p>
<p>Process triggered at: 2024-03-25 06:57:23<br />
Process ended at: 2024-03-25 06:57:53<br />
Process triggered at: 2024-03-25 06:58:23<br />
Process ended at: 2024-03-25 06:58:53</p>
|
<python><python-3.x><schedule>
|
2024-03-25 05:21:53
| 2
| 2,416
|
CrazyCoder
|
78,216,921
| 466,369
|
Langchain/FastAPI application encountered 404 error
|
<p>I am following this langchain tutorial example <a href="https://python.langchain.com/docs/get_started/quickstart#client" rel="nofollow noreferrer">here</a>.</p>
<p>Why do I get 404 error? Here is the code, which is exactly the same as the example without any modification. After launching the application and navigating to "/agent" path, I got the 404 error. I have provided both code and console output below. Appreciate any help.</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python
from typing import List
from fastapi import FastAPI
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_community.document_loaders import WebBaseLoader
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain.tools.retriever import create_retriever_tool
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain import hub
from langchain.agents import create_openai_functions_agent
from langchain.agents import AgentExecutor
from langchain.pydantic_v1 import BaseModel, Field
from langchain_core.messages import BaseMessage
from langserve import add_routes
# 1. Load Retriever
loader = WebBaseLoader("https://docs.smith.langchain.com/user_guide")
docs = loader.load()
text_splitter = RecursiveCharacterTextSplitter()
documents = text_splitter.split_documents(docs)
embeddings = OpenAIEmbeddings()
vector = FAISS.from_documents(documents, embeddings)
retriever = vector.as_retriever()
# 2. Create Tools
retriever_tool = create_retriever_tool(
retriever,
"langsmith_search",
"Search for information about LangSmith. For any questions about LangSmith, you must use this tool!",
)
search = TavilySearchResults()
tools = [retriever_tool, search]
# 3. Create Agent
prompt = hub.pull("hwchase17/openai-functions-agent")
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
# 4. App definition
app = FastAPI(
title="LangChain Server",
version="1.0",
description="A simple API server using LangChain's Runnable interfaces",
)
# 5. Adding chain route
# We need to add these input/output schemas because the current AgentExecutor
# is lacking in schemas.
class Input(BaseModel):
input: str
chat_history: List[BaseMessage] = Field(
...,
extra={"widget": {"type": "chat", "input": "location"}},
)
class Output(BaseModel):
output: str
add_routes(
app,
agent_executor.with_types(input_type=Input, output_type=Output),
path="/agent",
)
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="localhost", port=8000)
</code></pre>
<p>Here is the console output:</p>
<pre class="lang-bash prettyprint-override"><code>INFO: Started server process [55250]
INFO: Waiting for application startup.
__ ___ .__ __. _______ _______. _______ .______ ____ ____ _______
| | / \ | \ | | / _____| / || ____|| _ \ \ \ / / | ____|
| | / ^ \ | \| | | | __ | (----`| |__ | |_) | \ \/ / | |__
| | / /_\ \ | . ` | | | |_ | \ \ | __| | / \ / | __|
| `----./ _____ \ | |\ | | |__| | .----) | | |____ | |\ \----. \ / | |____
|_______/__/ \__\ |__| \__| \______| |_______/ |_______|| _| `._____| \__/ |_______|
LANGSERVE: Playground for chain "/agent/" is live at:
LANGSERVE: │
LANGSERVE: └──> /agent/playground/
LANGSERVE:
LANGSERVE: See all available routes at /docs/
LANGSERVE: ⚠️ Using pydantic 2.6.4. OpenAPI docs for invoke, batch, stream, stream_log endpoints will not be generated. API endpoints and playground should work as expected. If you need to see the docs, you can downgrade to pydantic 1. For example, `pip install pydantic==1.10.13`. See https://github.com/tiangolo/fastapi/issues/10360 for details.
INFO: Application startup complete.
INFO: Uvicorn running on http://localhost:8000 (Press CTRL+C to quit)
INFO: ::1:58755 - "GET /agent HTTP/1.1" 404 Not Found
INFO: ::1:58756 - "GET /agent HTTP/1.1" 404 Not Found
</code></pre>
|
<python><fastapi><langchain>
|
2024-03-25 03:16:05
| 1
| 572
|
steamfood
|
78,216,765
| 348,168
|
ModuleNotFoundError: No module named 'PyJSONCanvas'
|
<p>In windows 11, I am using WinPython64, 3.11 version.
Executed the command <code>pip install PyJSONCanvas</code> as per the instruction given in
<a href="https://pypi.org/project/PyJSONCanvas/" rel="nofollow noreferrer">https://pypi.org/project/PyJSONCanvas/</a></p>
<p><code>import PyJSONCanvas</code> gives the following error in IDLE:</p>
<pre><code>import PyJSONCanvas
Traceback (most recent call last):
File "<pyshell#11>", line 1, in <module>
import PyJSONCanvas
ModuleNotFoundError: No module named 'PyJSONCanvas'
</code></pre>
<p>What is the cause for this error?</p>
<pre><code> from pyjsoncanvas import Canvas
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
from pyjsoncanvas import Canvas
ModuleNotFoundError: No module named 'pyjsoncanvas'
</code></pre>
|
<python><json>
|
2024-03-25 02:03:01
| 0
| 4,378
|
Vinod
|
78,216,652
| 1,549,736
|
How do I read the project version number stored in the pyproject.toml file from within the tox.ini file?
|
<p>I want to include an <code>upload</code> environment in my <code>tox.ini</code> file.
But, that requires resolving the project version dynamically; something like:</p>
<pre class="lang-yaml prettyprint-override"><code>[testenv:upload]
basepython = python3.11
skip_install = true
deps =
twine
commands =
twine upload dist/foo-{[project]version}.tar.gz dist/foo-{[project]version}-py3-none-any.whl
</code></pre>
<p>But, that syntax doesn't work:</p>
<pre><code>% tox -e upload
upload: commands[0]> twine upload 'dist/foo-{[project]version}.tar.gz' 'dist/foo-{[project]version}-py3-none-any.whl'
ERROR InvalidDistribution: Cannot find file (or expand pattern): 'dist/foo-{[project]version}.tar.gz'
</code></pre>
<p><strong>What is the correct syntax, in the <code>tox.ini</code> file, for reading the project version from the <code>pyproject.toml</code> file?</strong></p>
|
<python><tox><pyproject.toml>
|
2024-03-25 01:10:17
| 2
| 2,018
|
David Banas
|
78,216,636
| 1,030,648
|
How do I fix the jsonobject architecture problem I am having in PyCharm CE when the terminal says the package is installed?
|
<p>How do I get <code>jsonobject</code> to install even though, apparently, my terminal thinks is already installed?</p>
<p>I am trying to work through the a Python project that I have copied to a repo and cloned from <a href="https://github.com/InstituteforDiseaseModeling/synthpops" rel="nofollow noreferrer">the open-source code in this GitHub repo</a></p>
<p>I am using PyCharm CE (Build #PC-233.15026.15, built on March 21, 2024, Runtime version: 17.0.10+1-b1087.23 x86_64
VM: OpenJDK 64-Bit Server VM by JetBrains s.r.o.)</p>
<p>Python version is 3.11</p>
<p>My Mac OS is: Monterey version 12.7.4 This is a Mac from 2013 so has the Intel infrastructure.</p>
<p>I have managed to install all dependences apart from <code>jsonobject</code>.</p>
<p>My terminal seems to think that <code>jsonobject</code> has been installed:</p>
<pre><code>(base) [name here] ~ % ARCHFLAGS="-arch x86_64" pip install jsonobject
Requirement already satisfied: jsonobject in /Applications/anaconda3/lib/python3.11/site-packages (2.1.0)
Requirement already satisfied: six in /Applications/anaconda3/lib/python3.11/site-packages (from jsonobject) (1.16.0)
</code></pre>
<p>The content of that folder is
<a href="https://i.sstatic.net/rVVAL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rVVAL.png" alt="Content of json folder" /></a></p>
<p>It is not, however, showing in the Python Interpreter list:
<a href="https://i.sstatic.net/ZzQTk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZzQTk.png" alt="List of package things in the Python Interpreter, no jsonobject" /></a></p>
<p>But I still get this error inside PyCharm ([MY FILE PATH HERE] is substituting for the local path to the folder):</p>
<pre><code> [MY FILE PATH HERE]/SynthPops/.venv/bin/python
[MY FILE PATH HERE]/SynthPops/examples/ex01_make_dhaka_city.py
Traceback (most recent call last):
File "[MY FILE PATH HERE]/SynthPops/examples/ex01_make_dhaka_city.py", line 6, in <module>
import synthpops as sp
File "[MY FILE PATH HERE]/SynthPops/synthpops/__init__.py", line 5, in <module>
from .data import * # depends on defaults, config
^^^^^^^^^^^^^^^^^^^
File "[MY FILE PATH HERE]/SynthPops/synthpops/data.py", line 5, in <module>
from jsonobject import *
ModuleNotFoundError: No module named 'jsonobject'
Process finished with exit code 1
</code></pre>
<p>When trying to install <code>jsonobject</code> from inside <code>PyCharm</code> I received the same <code>unsupported architecture</code> errors as shown in questions such as <a href="https://stackoverflow.com/questions/70074054/how-to-fix-error-architecture-not-supported-when-installing-pycurl-with-pytho">pycurl architecture problem</a> and <a href="https://stackoverflow.com/questions/64111015/pip-install-psutil-is-throwing-error-unsupported-architecture-any-workarou">psutil architecture problem</a>. The solution in both was to run</p>
<pre><code> ARCHFLAGS="-arch x86_64" pip install [name]
</code></pre>
<p>which, in my case, is <code>jsonobject</code>. That line still failed inside <code>PyCharm</code> although it appeared to work in the terminal.</p>
|
<python><pycharm><x86-64>
|
2024-03-25 01:00:55
| 0
| 1,383
|
Michelle
|
78,216,509
| 13,503,715
|
Mongo DB + Streamlit + github actions
|
<p>I am integrating mongoDB with streamlit ! so to store MONGO secrets I am using github actions. I have only one problem.</p>
<p>I am not able to read those secrets. in my python file. I have mostly tried everything. so last option i am trying to do is run directly from workflow .YAML file. like a command line file with ARGS.</p>
<pre><code>#this is the yaml I am using
name: run python script
on:
push:
branches: [shyam]
env:
USERNAME: ${{secrets.USERNAME}}
PASSWORD: ${{secrets.PASSWORD}}
jobs:
run-python:
runs-on: ubuntu-latest
steps:
- name: checkout code
uses: actions/checkout@v3
- name: setup python
uses: actions/setup-python@v4
with:
python-version: "3.11"
- name: install dependencies
run: |
pip install -r requirements.txt
- name: run streamlit
run:
streamlit run main.py ${{secrets.USERNAME}} ${{secrets.PASSWORD}}
</code></pre>
<p>I am trying to use my secrets as cmd args in yml file only.</p>
<p>can someone help me out.</p>
<p>note- I do not want to use heroku to do this neither docker. I need streamlit to work with actions.</p>
<p>I have tried to directly use os.environ(USERNAME) in python file from the declared ENV in YAML, but it does not work, I am not able to access my pass and username to connect to my MONGODB.</p>
|
<python><mongodb><github><github-actions><streamlit>
|
2024-03-24 23:56:26
| 0
| 328
|
shyam_gupta
|
78,216,476
| 1,174,102
|
How to get kivy to set its defaults in the Config at runtime?
|
<p>How can I tell Kivy to update a given Config with its default values at runtime?</p>
<p>I have a "reset settings to defaults" button in my app that deletes all of the entries in my <a href="https://kivy.org/doc/stable/api-kivy.config.html" rel="nofollow noreferrer">Kivy Config</a>, including in the <code>[kivy]</code> section.</p>
<pre><code># delete all the options saved to the config file
sections = ['kivy','myapp','etc']
for section in sections:
for key in Config[section]:
Config.remove_option( section, key )
# write-out the changes to the .ini file
Config.write()
</code></pre>
<p>After the above commands execute, I'm left with an empty <code>[kivy]</code> section in my .ini file, as desired.</p>
<p>The problem is that I'm having a <code>NoOptionError</code> later in my code that's expecting the <code>default_font</code> option in the Kivy Config</p>
<pre><code>configparser.NoOptionError: No option 'default_font' in section: 'kivy
</code></pre>
<p>After the above code wipes away any config options in the <code>[kivy]</code> section of my .ini file, how can I tell Kivy to repopulate it with the defaults?</p>
<p>When I restart the app, Kivy does repopulate the <code>[kivy]</code> section of my .ini file as desired. And I can see that the code that does that is located here:</p>
<ul>
<li><a href="https://github.com/kivy/kivy/blob/c492a33f8cf79e89ea7240690feb5f6d25b08389/kivy/config.py#L902-L908" rel="nofollow noreferrer">https://github.com/kivy/kivy/blob/c492a33f8cf79e89ea7240690feb5f6d25b08389/kivy/config.py#L902-L908</a></li>
</ul>
<pre><code> elif version == 16:
Config.setdefault('kivy', 'default_font', [
'Roboto',
'data/fonts/Roboto-Regular.ttf',
'data/fonts/Roboto-Italic.ttf',
'data/fonts/Roboto-Bold.ttf',
'data/fonts/Roboto-BoldItalic.ttf'])
</code></pre>
<p>But how can I execute the above kivy code at runtime so that it updates my Config with the defaults without having to restart the app?</p>
<blockquote>
<p><strong>Note</strong> I'm aware that I can manually make my own <code>build_config()</code> function and call <code>Config.setdefaults('kivy', {...})</code> where I type the default values myself in the dictionary. <strong>That's <em>not</em> what I'm asking for</strong>. I'm asking how to call the above code that's part of kivy, so the code is not redundant and is future-proof.</p>
</blockquote>
|
<python><kivy><default><configparser>
|
2024-03-24 23:38:37
| 1
| 2,923
|
Michael Altfield
|
78,216,192
| 13,705,050
|
Vertical Scrollbar disappears when window width passes threshold
|
<p>This code creates a window with a text box and a vertical scroll bar. If the window's width is changed by dragging the edge of the window, and the width decreases below some threshold (~660 pixels) the vertical scrollbar disappears.</p>
<p>What is causing the vertical scroll bar to be pushed out of the window when this threshold is reached? I've not defined widths for any widgets here. I would expect the text box to just get smaller and smaller as the window's width is reduced.</p>
<pre><code>import tkinter
window = tkinter.Tk()
text_box = tkinter.Text(master = window)
text_box.pack(side = "left", fill = "both", expand = True)
vertical_scrollbar = tkinter.Scrollbar(master = window, command = text_box.yview)
vertical_scrollbar.pack(side = "right", fill = "y")
window.mainloop()
</code></pre>
|
<python><tkinter>
|
2024-03-24 21:34:26
| 2
| 825
|
TehDrunkSailor
|
78,216,147
| 6,606,057
|
Cannot install Mamba on Python 3.12
|
<p>I'm trying to get Mamba to run on Spyder 5.5.3 with Python 3.8.10 64-bit | Qt 5.15.2 | PyQt5 5.15.10 | Windows 10 (AMD64)</p>
<p>When I run !mamba install bs4==4.10.0 -y</p>
<p><a href="https://i.sstatic.net/Bhkk9.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Bhkk9.jpg" alt="enter image description here" /></a></p>
<p>Note !pip install yfinance==0.1.67 installs successfully.</p>
<p>I've previously followed:</p>
<p><a href="https://stackoverflow.com/questions/77233855/why-did-i-got-an-error-modulenotfounderror-no-module-named-distutils">Why did I get an error ModuleNotFoundError: No module named 'distutils'?</a></p>
<p>which may have changed the error code, but I still get no good results.</p>
<p>I've additionally reset my Python interpreter to:</p>
<p>C:/Users/#####/AppData/Local/Programs/Python/Python312/python.exe</p>
<p>And yes I have closed and reopened, spyder, reset my kernal, and turned my box on and off again.</p>
<p>How do I get mamba to install?</p>
|
<python><python-3.x><installation><mamba>
|
2024-03-24 21:18:36
| 1
| 485
|
Englishman Bob
|
78,216,117
| 1,853,284
|
"Variable ${workspaceFolder} can not be resolved" error in new VSC installation
|
<p>I installed Visual Studio Code 1.87.2 followed by the addons for R and Python. I then opened an .ipynb notebook that a collaborator shared.</p>
<p>I tried running the first code section:</p>
<pre><code>#load dataframe
xdata=read.csv("/Users/path/xdata.csv", header = TRUE, sep = ",",quote = "")
</code></pre>
<p>However, I get the errors below:</p>
<p><a href="https://i.sstatic.net/dHyqT.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dHyqT.jpg" alt="enter image description here" /></a></p>
<p>I'm not sure what these errors refer to though. I cannot find any ${workspaceFolder} variable in the code or in the settings.</p>
<p>An additional error that pops up if I reopen Visual Studio Code is:</p>
<p><a href="https://i.sstatic.net/GTK0T.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GTK0T.jpg" alt="enter image description here" /></a></p>
<p>..followed, if I click Yes there, by this:
<a href="https://i.sstatic.net/RVpwZ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RVpwZ.jpg" alt="enter image description here" /></a></p>
<p>I have very little experience with both R and Python, so this will be very basic / silly (apologies).</p>
|
<python><r><visual-studio-code>
|
2024-03-24 21:08:22
| 0
| 679
|
z8080
|
78,216,040
| 1,174,102
|
How to update all label's text properties after changing default_font in Kivy?
|
<p>How can I update all of my Kivy widgets to use the new <code>default_font</code> on runtime (without restarting the app)?</p>
<p>I created a Settings screen in my kivy app where a user can select the font that they want the app to use. After the user selects their desired font in the GUI, I update the <code>default_font</code> in Kivy's Config</p>
<pre><code>Config.set('kivy', 'default_font', [font_filename, font_filepath, font_filepath, font_filepath, font_filepath])
</code></pre>
<p>When the app restarts, this successfully changes the default font of all my Kivy Labels to the user-slected font.</p>
<p>But how can I have it update all my widgets on all my Screens in my Kivy app on runtime, immediately after the <code>Config.set()</code> call above?</p>
|
<python><fonts><kivy><font-face>
|
2024-03-24 20:43:06
| 1
| 2,923
|
Michael Altfield
|
78,215,785
| 726,730
|
PyCharm & Python: Reboot PyQt5 Application
|
<p>I use this code to reboot a pyqt5 application:</p>
<pre class="lang-py prettyprint-override"><code>os.execl(sys.executable, sys.executable, *['"'+sys.argv[0]+'"', "-t", '2'])
</code></pre>
<p>The main problem with this command is that after pyqt5 exit on first exit, then i cannot manage the script (stop-debug) from PyCharm.</p>
<p>Any other related command?</p>
|
<python><pyqt5><restart>
|
2024-03-24 19:11:11
| 0
| 2,427
|
Chris P
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.