| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,878,672
| 17,795,398
|
How can I get the seed of a numpy `Generator`?
|
<p>I am using <code>rng = np.random.default_rng(seed=None)</code> for testing purposes following <a href="https://numpy.org/doc/stable/reference/random/generator.html#numpy.random.default_rng" rel="nofollow noreferrer">documentation</a>.</p>
<p>My program is scientific code, so it is useful to test it with random values, but if I find a problem in the results I would like to recover the seed and run again to track the problem down. Is there any way to do that?</p>
<p>Approaches like the one in this <a href="https://stackoverflow.com/questions/32172054/how-can-i-retrieve-the-current-seed-of-numpys-random-number-generator">question</a> do not seem to work with a <code>Generator</code>:</p>
<blockquote>
<p>AttributeError: 'numpy.random._generator.Generator' object has no attribute 'get_state'</p>
</blockquote>
<p>Of course I can always try a set of predefined seeds, but that is not what I want.</p>
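A minimal sketch of one common workaround (assuming NumPy's `Generator` API): the seed itself is not stored after construction, but the bit generator's full state can be snapshotted up front and restored later to replay the exact same random stream.

```python
import numpy as np

# The seed is not recoverable after the fact, but the bit generator's
# state is a plain dict that can be saved (printed, pickled, logged).
rng = np.random.default_rng(seed=None)
saved_state = rng.bit_generator.state

first_draw = rng.integers(0, 100, size=5)

# ...later, to replay the exact same stream:
rng2 = np.random.default_rng()
rng2.bit_generator.state = saved_state
replayed = rng2.integers(0, 100, size=5)
print((first_draw == replayed).all())  # → True
```

Logging `saved_state` at the start of each test run gives the same reproducibility a seed would.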
|
<python><numpy><random>
|
2024-01-25 09:06:40
| 2
| 472
|
Abel Gutiérrez
|
77,877,808
| 2,998,077
|
Kivy to display current time on screen
|
<p>The lines below are meant to create a popup window that displays a running string (the current time) moving from the right of the screen to the left (system: Windows 10).</p>
<p>It runs, the window pops up, and no errors are reported, but the display is blank (black).</p>
<p>What went wrong here?</p>
<pre><code>from kivy.app import App
from kivy.uix.label import Label
from kivy.core.window import Window
from kivy.clock import Clock
from random import randint, choice
from datetime import datetime

class Danmaku(Label):
    def __init__(self, **kwargs):
        super(Danmaku, self).__init__(**kwargs)
        self.x = Window.width  # Start from the rightmost side
        self.y = Window.height // 2  # Start from the center vertically
        self.speed = 100
        self.text = datetime.now().strftime("%H:%M")
        self.color = (200, 255, 255)  # Random light colors
        self.font_size = int(0.5 * Window.height)  # Adjust font size to fill 90% of the screen height
        self.font_name = "Arial"
        self.font_hinting = None  # or remove this line, as None is the default
        self.update_text()
        self.size_hint = (None, None)  # Set size_hint to None
        self.size = self.texture_size  # Set size to the texture size

    def update_text(self):
        self.text = datetime.now().strftime("%H:%M")
        self.font_size = int(0.9 * Window.height)
        self.font_name = "Arial"
        self.color = (200, 255, 255)

    def shift(self, dt):
        self.x -= dt * self.speed

class DanmakuApp(App):
    def build(self):
        self.danmaku_list = [Danmaku()]
        self.pause_time = 1  # 1 second
        Clock.schedule_interval(self.update, 1.0 / 60.0)  # Update at 60 FPS
        return

    def update(self, dt):
        for danmaku in self.danmaku_list:
            danmaku.shift(dt)  # Update Danmaku position
        # Check if the current Danmaku has moved off the screen
        if self.danmaku_list[-1].x < -self.danmaku_list[-1].font_size:
            self.danmaku_list.append(Danmaku())  # Create a new Danmaku

    def on_resize(self, width, height):
        for danmaku in self.danmaku_list:
            danmaku.update_text()
            danmaku.size = danmaku.texture_size  # Update size to the new texture size
        return super(DanmakuApp, self).on_resize(width, height)

if __name__ == '__main__':
    DanmakuApp().run()
</code></pre>
|
<python><kivy>
|
2024-01-25 06:04:22
| 1
| 9,496
|
Mark K
|
77,877,804
| 1,039,302
|
Dataframe - difference of rows by some style
|
<p>I want the difference of the column 'position' between rows where the column 'Seg' is 'x': each 'x' row should get the difference from the previous 'x' row.</p>
<pre><code>import numpy as np
import pandas as pd
mydict = {'position':['0.0', '0.433', '2.013', '3.593', '5.173', '6.753', '6.9'],'Seg':['x', 'x', np.nan, np.nan, np.nan, np.nan, 'x']}
df = pd.DataFrame.from_dict(mydict)
df
position Seg
0 0.0 x
1 0.433 x
2 2.013 NaN
3 3.593 NaN
4 5.173 NaN
5 6.753 NaN
6 6.9 x
</code></pre>
<p>How can I get the difference 'diff' and 'Seg ID'? Note: 'x' can randomly be at any rows and 'Seg ID' changes accordingly.</p>
<pre><code> position Seg diff Seg ID
0 0.0 x NaN NaN
1 0.433 x 0.433 Seg 1
2 2.013 NaN NaN NaN
3 3.593 NaN NaN NaN
4 5.173 NaN NaN NaN
5 6.753 NaN NaN NaN
6 6.9 x 6.467 Seg 2
</code></pre>
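A sketch of one possible approach (an assumption on my part, not from the question: 'position' is converted to numeric first): select only the 'x' rows, take `diff()` over them, and number the resulting gaps.

```python
import numpy as np
import pandas as pd

mydict = {'position': ['0.0', '0.433', '2.013', '3.593', '5.173', '6.753', '6.9'],
          'Seg': ['x', 'x', np.nan, np.nan, np.nan, np.nan, 'x']}
df = pd.DataFrame.from_dict(mydict)
df['position'] = df['position'].astype(float)  # positions arrive as strings

mask = df['Seg'] == 'x'
# diff() over only the 'x' rows gives the gap to the previous 'x' row.
df.loc[mask, 'diff'] = df.loc[mask, 'position'].diff()
# Number the gaps; the first 'x' row has no previous segment, so it stays NaN.
labels = ['Seg %d' % n if n > 0 else np.nan for n in range(mask.sum())]
df.loc[mask, 'Seg ID'] = labels
print(df)
```

Because everything is driven by `mask`, the 'x' markers can sit on any rows and 'Seg ID' renumbers accordingly.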
|
<python><pandas><dataframe><difference>
|
2024-01-25 06:03:07
| 2
| 1,713
|
warem
|
77,877,686
| 6,824,915
|
Solving a LP Transportation Problem with Storage Constraints Using Pyomo Python
|
<p>It's been a few years since I have done any optimization, and I have set about solving a transportation problem using LP via Pyomo.</p>
<p>The minimization problem:</p>
<p><a href="https://i.sstatic.net/cOA1n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cOA1n.png" alt="LP Minimization Problem" /></a></p>
<p>This one is simple enough to solve and I did something like this (changed around tutorial code from Pyomo docs):</p>
<pre class="lang-py prettyprint-override"><code>from itertools import product  # needed below to build the plant/supplier combinations
from pyomo.environ import *
from pyomo.opt import SolverFactory

# problem data
demand = {'p1': 750, 'p2': 600, 'p3': 500, 'p4': 800}
supply = {'s1': 400, 's2': 600, 's3': 750, 's4': 300, 's5': 800}
combos = tuple(product(demand.keys(), supply.keys()))
vals = [114, 145, 153, 158, 163,
        155, 180, 190, 194, 202,
        135, 160, 172, 176, 185,
        142, 171, 180, 185, 194]
T = dict(zip(combos, vals))

# create model instance
model3 = ConcreteModel()
model3.dual = Suffix(direction=Suffix.IMPORT)

# define index sets
plants = list(demand.keys())
suppliers = list(supply.keys())

# define the decision variables
model3.x = Var(plants, suppliers, domain=NonNegativeReals)

# define objective
@model3.Objective(sense=minimize)
def cost(m):
    return sum([T[c, s] * model3.x[c, s] for c in plants for s in suppliers])

# constraints
@model3.Constraint(suppliers)
def src(m, s):
    return sum([model3.x[c, s] for c in plants]) <= supply[s]

@model3.Constraint(plants)
def dmd(m, c):
    return sum([model3.x[c, s] for s in suppliers]) == demand[c]

# solve
results = SolverFactory('glpk').solve(model3)
results.write()

# print results
if 'ok' == str(results.Solver.status):
    print("Total Shipping Costs = ", model3.cost())
    print("\nShipping Table:")
    for s in suppliers:
        for c in plants:
            if model3.x[c, s]() > 0:
                print("Ship from ", s, " to ", c, ":", model3.x[c, s]())
else:
    print("No Valid Solution Found")
</code></pre>
<p>This yields a solution which I know is correct:</p>
<pre class="lang-none prettyprint-override"><code>Solution:
- number of solutions: 0
number of solutions displayed: 0
Total Shipping Costs = 443600.0
Shipping Table:
Ship from s1 to p1 : 150.0
Ship from s1 to p4 : 250.0
Ship from s2 to p2 : 100.0
Ship from s2 to p3 : 500.0
Ship from s3 to p2 : 200.0
Ship from s3 to p4 : 550.0
Ship from s4 to p2 : 300.0
Ship from s5 to p1 : 600.0
</code></pre>
<p>What I am stuck on is the following added constraint:</p>
<p><a href="https://i.sstatic.net/vIszx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vIszx.png" alt="added constraint" /></a></p>
<p>I am fairly certain I should create a dict signifying this:</p>
<pre class="lang-py prettyprint-override"><code># added constraint
space_limit = [2, 1.5, 3, 1.2, 1.7]
supply_space = dict(zip(suppliers, space_limit))
</code></pre>
<p>I know I should add another constraint function, but I am at a loss for the syntax. I am struggling with unpacking Pyomo objects to see "what's under the hood" as I can't just print stuff out until it clicks.</p>
<p>Help would be appreciated!</p>
<p>My attempt (not the right answer):</p>
<pre class="lang-py prettyprint-override"><code>@model3.Constraint(suppliers)
def capacity(m, s):
    return sum([supply_space[s] * model3.x[c, s] for c in plants]) <= 2000

# solve
results = SolverFactory('glpk').solve(model3)
results.write()

# print results
if 'ok' == str(results.Solver.status):
    print("Total Shipping Costs = ", model3.cost())
    print("\nShipping Table:")
    for s in suppliers:
        for c in plants:
            if model3.x[c, s]() > 0:
                print("Ship from ", s, " to ", c, ":", model3.x[c, s]())
else:
    print("No Valid Solution Found")
</code></pre>
|
<python><linear-programming><pyomo>
|
2024-01-25 05:23:12
| 1
| 371
|
DudeWah
|
77,877,654
| 8,540,947
|
How to get the exact index ID numbers of Maya ramp points using Maya python commands and not API
|
<p>Is it possible to query the Index numbers of the points of a 2D graph texture in Maya using Maya commands and Python and not API code? The points do not necessarily start at 0 and are not always sequential. <a href="https://i.sstatic.net/ct07L.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ct07L.jpg" alt="enter image description here" /></a></p>
|
<python><textures><maya>
|
2024-01-25 05:11:51
| 1
| 519
|
winteralfs
|
77,877,602
| 12,789,602
|
Odoo 16 MailThread._message_auto_subscribe_notify() missing 1 required positional argument: 'template'
|
<p>In Odoo 16, a MailThread notify method is called in a custom module.</p>
<pre class="lang-py prettyprint-override"><code>if approver.user_id:
    self._message_auto_subscribe_notify([approver.user_id.partner_id.id])
</code></pre>
<p>But it throws an error:</p>
<p><a href="https://i.sstatic.net/uM8Di.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uM8Di.png" alt="enter image description here" /></a></p>
<p>What should the second argument, <code>template</code>, be, and what would example template code look like?</p>
|
<python><python-3.x><xml><odoo><odoo-16>
|
2024-01-25 04:49:57
| 2
| 552
|
Bappi Saha
|
77,877,444
| 19,675,781
|
Seaborn heatmap legend disturb the plots order with subplots
|
<p>I have 2 dataframes like this:</p>
<pre><code>df1 = pd.DataFrame(np.random.randint(0,50,size=(5,5)), columns=list('ABCDE'))
df2 = pd.DataFrame(np.random.randint(0,50,size=(5,5)), columns=list('FGHIJ'))
</code></pre>
<p>I want to create two heatmaps side by side for these two dataframes.
The data ranges of the dataframes are the same, so I want to use only one colorbar as a legend.</p>
<p>For this common colorbar, I want to shrink it to half size and move its label to the right of the bar.</p>
<pre><code>f, axs = plt.subplots(1, 2, sharex=False, figsize=[5, 5],
                      gridspec_kw={'height_ratios': [10], 'width_ratios': [1, 1]})
plot1 = sns.heatmap(df1, square=True, vmax=50, vmin=0,
                    cmap=sns.color_palette("viridis", as_cmap=True), cbar=True,
                    cbar_kws=dict(orientation='horizontal', location='bottom',
                                  label='Scale', pad=0.1, shrink=0.5),
                    ax=axs[0])
plot2 = sns.heatmap(df2, square=True, vmax=50, vmin=0,
                    cmap=sns.color_palette("viridis", as_cmap=True),
                    cbar=False, ax=axs[1])
plt.show()
</code></pre>
<p>My output looks like this:</p>
<p><a href="https://i.sstatic.net/Tp5Hh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Tp5Hh.png" alt="enter image description here" /></a></p>
<p>Here, the two heatmaps are not on the same level.</p>
<p>Can anyone help me how to get these heatmaps to the same level and move the color_bar label to right of the color bar?</p>
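One possible sketch (the figure size, colorbar position, and label placement below are my assumptions, not from the question): give the shared colorbar its own axes via `cbar_ax`, so neither heatmap axes is shrunk to make room for it, and place the label text manually to the right of the bar.

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns

df1 = pd.DataFrame(np.random.randint(0, 50, size=(5, 5)), columns=list('ABCDE'))
df2 = pd.DataFrame(np.random.randint(0, 50, size=(5, 5)), columns=list('FGHIJ'))

f, axs = plt.subplots(1, 2, figsize=[7, 4])
# A dedicated colorbar axes keeps both heatmap axes the same size and level.
cbar_ax = f.add_axes([0.3, 0.08, 0.4, 0.04])  # [left, bottom, width, height]
sns.heatmap(df1, square=True, vmin=0, vmax=50, cmap='viridis', ax=axs[0],
            cbar=True, cbar_ax=cbar_ax, cbar_kws={'orientation': 'horizontal'})
sns.heatmap(df2, square=True, vmin=0, vmax=50, cmap='viridis', ax=axs[1], cbar=False)
# Put the label to the right of the bar instead of underneath it.
cbar_ax.text(1.02, 0.5, 'Scale', transform=cbar_ax.transAxes, va='center', ha='left')
f.savefig('heatmaps.png')
```

The fixed `[left, bottom, width, height]` rectangle is what controls the bar's size (here 40% of the figure width), replacing `shrink`.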
|
<python><matplotlib><seaborn><legend>
|
2024-01-25 03:51:38
| 1
| 357
|
Yash
|
77,877,343
| 10,693,596
|
List all properties defined on a Pydantic BaseModel
|
<p>Is there an elegant way to list all properties defined on a Pydantic BaseModel?</p>
<p>It's possible to extract the list of fields via <code>BaseModel.model_fields</code> and the list of computed fields via <code>BaseModel.model_computed_fields</code>, but there does not appear to be a method for listing all the properties.</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel

class Test(BaseModel):
    field: str

    @property
    def len_field(self) -> int:
        return len(self.field)

    @property
    def first_char(self) -> str:
        return self.field[0]

t = Test(field="abc")
print(t.model_fields, dir(t))
# how do I list "len_field", "first_char"?
</code></pre>
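A sketch of one way to do it with plain introspection. It is shown on an ordinary class as a stand-in for the model; the assumption (which appears to hold in pydantic v2) is that plain `@property` members are left as `property` objects in the class namespace, so `isinstance(member, property)` finds them.

```python
def list_properties(cls):
    """Collect names of all `property` members, walking the MRO for inherited ones."""
    found = []
    for klass in cls.__mro__:
        for name, member in vars(klass).items():
            if isinstance(member, property) and name not in found:
                found.append(name)
    return found

# Plain-class stand-in for the pydantic model in the question.
class Test:
    @property
    def len_field(self) -> int:
        return 3

    @property
    def first_char(self) -> str:
        return "a"

print(list_properties(Test))  # → ['len_field', 'first_char']
```

On a `BaseModel` subclass you may want to restrict the MRO walk to your own classes, since the base model defines a few properties of its own (e.g. `model_extra`).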
|
<python><pydantic><python-class><pydantic-v2>
|
2024-01-25 03:10:07
| 2
| 16,692
|
SultanOrazbayev
|
77,877,288
| 11,163,122
|
How to cache playwright-python contexts for testing?
|
<p>I am doing some web scraping using <a href="https://github.com/microsoft/playwright-python" rel="nofollow noreferrer"><code>playwright-python>=1.41</code></a>, and have to launch the browser in headed mode (e.g. <code>launch(headless=False)</code>).</p>
<p>For CI testing, I would like to somehow cache the headed interactions with Chromium, to enable offline testing:</p>
<ul>
<li>First invocation: uses Chromium to make real-world HTTP transactions</li>
<li>Later invocations: uses Chromium, but all HTTP transactions read from a cache</li>
</ul>
<p>How can this be done? I can't find any clear answers on how to do this.</p>
|
<python><caching><playwright><playwright-python>
|
2024-01-25 02:48:18
| 3
| 2,961
|
Intrastellar Explorer
|
77,877,225
| 3,727,079
|
How can I do this calculation without opening the same file twice?
|
<p>I've got a bunch of txt files: <code>A.txt</code>, <code>B.txt</code>, etc. With each file, I can calculate a variable <code>varA</code>, <code>varB</code>, etc. I then want to take the difference of these variables, select the 10 combinations whose differences are smallest, and use those txt files to perform even more calculations. The new calculations require both relevant files. For example, suppose the first calculation finds that <code>A.txt</code> and <code>B.txt</code> have the smallest difference. The new calculation then requires a column from <code>A.txt</code> and a column from <code>B.txt</code>.</p>
<p>How can I do this without opening the txt files several times?</p>
<p>The brute-force approach would be to calculate the difference first, sort it, and then reopen the files again to do the new calculations</p>
<pre><code>varList = []
DifferenceList = []
for f in files:
    df = read_csv(f)
    var = ...  # do calculations on df
    varList.append(var)
for i in range(len(files)):
    for j in range(i + 1, len(files)):
        difference = varList[i] - varList[j]
        DifferenceList.append((difference, i, j))
DifferenceList.sort()
# Extract indices, make a smaller new_list_of_files using the best elements of DifferenceList
for i in range(len(new_list_of_files)):
    df_i = read_csv(new_list_of_files[i])
    for j in range(i + 1, len(new_list_of_files)):
        df_j = read_csv(new_list_of_files[j])
        # Do calculations using both df_i and df_j
</code></pre>
</code></pre>
<p>But this seems very inefficient - I have to open the .txt files twice, and since they are very big files, it's very time consuming.</p>
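One common pattern, sketched with hypothetical stand-in data (the file contents, the mean as the first calculation, and the sum as the second are all placeholders): read each file exactly once, keeping both the first-pass scalar and the one column the second pass needs in memory.

```python
import itertools

# Hypothetical stand-in: name -> the column pass 2 needs.
# In real code each list would come from a single read_csv(f) call.
data = {'A.txt': [1.0, 2.0, 3.0], 'B.txt': [1.1, 2.1, 2.9], 'C.txt': [5.0, 6.0, 7.0]}

var = {}    # first-pass scalar per file (here: the mean, as a stand-in)
cache = {}  # the column the second calculation needs; files are never reopened
for name, column in data.items():
    var[name] = sum(column) / len(column)
    cache[name] = column

# Keep the pairs with the smallest absolute difference (here the best 2 of 3).
best = sorted(itertools.combinations(data, 2),
              key=lambda p: abs(var[p[0]] - var[p[1]]))[:2]
for a, b in best:
    combined = sum(cache[a]) + sum(cache[b])  # second calculation, no re-read
print(best[0])  # → ('A.txt', 'B.txt')
```

If the columns are too large to keep all of them in memory, a middle ground is to cache only the columns belonging to files that appear in the selected pairs, reopening at most those.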
|
<python><algorithm>
|
2024-01-25 02:21:14
| 1
| 399
|
Allure
|
77,876,971
| 10,391,013
|
Creating list of modified sub-words with indexes
|
<p>I am writing code for a personal project and got stuck dealing with strings that have repeated characters. Given a string (a DNA string, of course), I need the correct indexes of every sub-word built from it, for sub-words of length 2 to 5. From those indexes I then need to build a list of modified words, for example ACnC (take a look at the tables below). For sub-words with consecutive indexes, like ACG = [0, 1, 2], the word is returned as it is, ACG.</p>
<p>I have this code here to get all combinations of words:</p>
<pre><code>from itertools import combinations

def get_kmer_combo(kmer, alphabet, repeats=1):
    """
    >>> alphabet = "ACGT"
    >>> kmer = "ACGCAT"
    >>> get_kmer_combo(kmer, alphabet, repeats=2)
    ['AC', 'AG', 'AC', 'AA', 'AT', 'CG', 'CC', 'CA',
     'CT', 'GC', 'GA', 'GT', 'CA', 'CT', 'AT']
    """
    # note: `alphabet` is not used here yet; it is needed later to fill the gaps
    return ["".join(c) for c in combinations(kmer, r=repeats)]

>>> alphabet = "ACGT"
>>> kmer = "ACGCAT"
>>> get_kmer_combo(kmer, alphabet, repeats=2)
['AC', 'AG', 'AC', 'AA', 'AT', 'CG', 'CC', 'CA', 'CT', 'GC', 'GA', 'GT', 'CA',
 'CT', 'AT']
>>> get_kmer_combo(kmer, alphabet, repeats=3)
['ACG', 'ACC', 'ACA', 'ACT', 'AGC', 'AGA', 'AGT', 'ACA', 'ACT', 'AAT', 'CGC',
 'CGA', 'CGT', 'CCA', 'CCT', 'CAT', 'GCA', 'GCT', 'GAT', 'CAT']
</code></pre>
<p>I will show you with examples what I need.</p>
<p>1- For three letter sub-words:</p>
<pre><code>0 1 2 3 4 5
A C G C A T
A C G
A C - C
A C - - A
A C - - - T
A - G C
A - G - A
A - G - - T
A - - C A
A - - C - T
A - - - A T
C G C
C G - A
C G - - T
C - C A
C - C - T
C A T
G C A
G C - T
G - A T
C A T
</code></pre>
<p>I need to get the indexes for each sub-word that occurs in the original string, then check the missing indexes (-), and then add letter(s) from the DNA alphabet "ACGT", depending on the number of missing indexes.</p>
<ul>
<li><p>When 1 index is missing in the sub-word, a combination of four sub-words must be returned, for example, <code>AC-C = ["ACAC", "ACTC", "ACCC", "ACGC"]</code>.</p>
</li>
<li><p>When 2 indexes are missing, 4^(number of missing indexes) = 4^2 = 16 combinations:</p>
<p>A C - - A = ['ACAAA','ACACA','ACAGA','ACATA','ACCAA','ACCCA','ACCGA','ACCTA','ACGAA','ACGCA',
'ACGGA','ACGTA','ACTAA','ACTCA', 'ACTGA','ACTTA']</p>
</li>
</ul>
<p>When three indexes are missing, 4^3 = 64 combinations, and so on.</p>
<p>The same with two letter sub-words:</p>
<pre><code>0 1 2 3 4 5
A C G C A T
A C
A - G
A - - C
A - - - A
A - - - - T
C G
C - C
C - - A
C - - - T
G C
G - A
G - - T
C A
C - T
A T
</code></pre>
<p>And also with 4 and 5 letter sub-words.</p>
<p>I tried to find the indexes of each combination of words with this function:</p>
<pre><code>def get_indices(kmer, sub_words):
    results = []
    for sw in sub_words:
        temp_results = []
        start_idxs = [i for i, c in enumerate(kmer) if c == sw[0]]
        for start in start_idxs:
            temp_idxs = [start]
            temp_start = start
            for char in sw[1:]:
                found = False
                for i in range(temp_start + 1, len(kmer)):
                    if kmer[i] == char:
                        temp_idxs.append(i)
                        temp_start = i
                        found = True
                        break
                if not found:
                    temp_idxs = []
                    break
            if temp_idxs and len(temp_idxs) == len(sw):
                temp_results.append(tuple(temp_idxs))
        results.extend(temp_results)
    return results
</code></pre>
<p>Running it with:</p>
<pre><code>kmer = "ACGCAT"
subwords_1 = ['ACG', 'ACC', 'ACA', 'ACT', 'AGC', 'AGA', 'AGT',
              'ACA', 'ACT', 'AAT', 'CGC', 'CGA', 'CGT', 'CCA',
              'CCT', 'CAT', 'GCA', 'GCT', 'GAT', 'CAT']

>>> results_1 = get_indices(kmer, subwords_1)
>>> for k, i in zip(subwords_1, results_1):
...     print(k, i)
ACG (0, 1, 2)
ACC (0, 1, 3)
ACA (0, 1, 4)
ACT (0, 1, 5)
AGC (0, 2, 3)
AGA (0, 2, 4)
AGT (0, 2, 5)
ACA (0, 1, 4)
ACT (0, 1, 5)
AAT (0, 4, 5)
CGC (1, 2, 3)
CGA (1, 2, 4)
CGT (1, 2, 5)
CCA (1, 3, 4)
CCT (1, 3, 5)
CAT (1, 4, 5)
GCA (3, 4, 5)
GCT (2, 3, 4)
GAT (2, 3, 5)
CAT (2, 4, 5)
</code></pre>
<p>It handles some indices well, but many are wrong and I am stuck, because I need correct indexes to create all the combinations of sub-words with the alphabet letters as explained above. The wrong indexes:</p>
<pre><code>ACA (0, 1, 4) -> (0, 3, 4)
ACT (0, 1, 5) -> (0, 3, 5)
GCA (3, 4, 5) -> (2, 3, 4)
GCT (2, 3, 4) -> (2, 3, 5)
GAT (2, 3, 5) -> (2, 4, 5)
CAT (2, 4, 5) -> (3, 4, 5)
</code></pre>
<p>I tried with two-letter sub-words and also got mistakes.</p>
<pre><code>AC (0, 1)
AG (0, 2)
AC (0, 1) -> (0, 3)
AA (0, 4)
AT (0, 5)
CG (4, 5) -> (1, 2)
CC (1, 2) -> (1, 3)
CA (1, 3) -> (1, 4)
CT (1, 4) -> (1, 5)
GC (3, 4) -> (2, 3)
GA (1, 5) -> (2, 4)
GT (3, 5) -> (2, 5)
CA (2, 3) -> (3, 4)
CT (2, 4) -> (3, 5)
AT (2, 5) -> (4, 5)
</code></pre>
<p>I worked on this all day and got stuck. Any ideas would be very welcome.</p>
<p>I have started to work with character distances, but that is still in progress.</p>
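For what it's worth, a sketch of an alternative that sidesteps the greedy first-occurrence search entirely: test every strictly increasing index combination directly. This is quadratic-to-quintic in the kmer length, which is fine for sub-words of length 2 to 5 on short kmers.

```python
from itertools import combinations

def all_indices(kmer, sub_word):
    """Every strictly increasing index tuple whose letters spell sub_word."""
    return [idxs for idxs in combinations(range(len(kmer)), len(sub_word))
            if all(kmer[i] == ch for i, ch in zip(idxs, sub_word))]

print(all_indices("ACGCAT", "ACA"))  # → [(0, 1, 4), (0, 3, 4)]
print(all_indices("ACGCAT", "CAT"))  # → [(1, 4, 5), (3, 4, 5)]
```

Unlike the greedy search, this returns every valid index tuple for a repeated sub-word such as ACA, which is what the gap-filling step needs.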
|
<python>
|
2024-01-25 00:35:04
| 2
| 504
|
Paulo Sergio Schlogl
|
77,876,917
| 6,111,957
|
PyTest produces warning DeprecationWarning: There is no current event loop
|
<p>I have added pytest-asyncio to my poetry test and dev groups, and I have the following test code:</p>
<pre class="lang-py prettyprint-override"><code>async def blah():
    return 1

@pytest.mark.asyncio
async def test_me(mock_client, mock_config, event_loop):
    res = await blah()
    assert 1 == res
</code></pre>
<p>When I run <code>poetry run pytest</code> I get this warning:</p>
<pre><code>/Users/ykhaburzaniya/Library/Caches/pypoetry/virtualenvs/my-project-6HJyiZWL-py3.11/lib/python3.11/site-packages/pytest_asyncio/plugin.py:884: DeprecationWarning: There is no current event loop
_loop = asyncio.get_event_loop()
</code></pre>
<p>In my poetry.lock I have pytest-asyncio version = "0.23.3".</p>
<p>What can I do to get rid of this warning?</p>
|
<python><pytest><pytest-asyncio>
|
2024-01-25 00:15:42
| 1
| 432
|
Yason
|
77,876,761
| 1,584,120
|
django rest framework serializer.data throws error about attribute in manytomany relationship
|
<p>I have the following models:</p>
<pre><code>class MenuItem(models.Model):
    class Category(models.TextChoices):
        PIZZA = "pizza"
        SIDE = "side"
        OTHER = "other"

    name = models.CharField(max_length=40)
    description = models.TextField(max_length=150)
    price = models.FloatField(default=0.0)
    category = models.CharField(max_length=50, choices=Category.choices, default=Category.OTHER)

class Order(models.Model):
    class OrderStatus(models.TextChoices):
        NEW = "new"
        READY = "ready"
        DELIVERED = "delivered"

    customer = models.CharField(max_length=50)
    order_time = models.DateTimeField(auto_now_add=True)
    items = models.ManyToManyField(MenuItem, through='OrderItem')
    status = models.CharField(max_length=50, choices=OrderStatus.choices, default=OrderStatus.NEW)

class OrderItem(models.Model):
    order = models.ForeignKey(Order, on_delete=models.CASCADE)
    menu_item = models.ForeignKey(MenuItem, on_delete=models.CASCADE)
    quantity = models.PositiveIntegerField()
</code></pre>
<p>And the following serializers:</p>
<pre><code>class MenuItemSerializer(serializers.Serializer):
    id = serializers.IntegerField(read_only=True)
    name = serializers.CharField(max_length=40)
    description = serializers.CharField(max_length=150)
    price = serializers.FloatField(default=0.0)
    category = serializers.CharField(max_length=50)

    def create(self, validated_data):
        return MenuItem.objects.create(**validated_data)

class OrderItemSerializer(serializers.Serializer):
    id = serializers.IntegerField(read_only=True)
    menu_item_id = serializers.PrimaryKeyRelatedField(queryset=MenuItem.objects.all(), source='menu_item', read_only=False)
    quantity = serializers.IntegerField(min_value=0)

class OrderSerializer(serializers.Serializer):
    id = serializers.IntegerField(read_only=True)
    customer = serializers.CharField(max_length=50)
    order_time = serializers.DateTimeField(read_only=True)
    items = OrderItemSerializer(many=True)

    def create(self, validated_data):
        order_items_data = validated_data.pop('items')
        order = Order.objects.create(**validated_data)
        for order_item_data in order_items_data:
            quantity = order_item_data.pop('quantity')
            menu_item = order_item_data.pop('menu_item')
            OrderItem.objects.create(order=order, quantity=quantity, menu_item=menu_item)
        return order
</code></pre>
<p>And then in views:</p>
<pre><code>@csrf_exempt
def order(request):
    if request.method == 'POST':
        data = JSONParser().parse(request)
        serializer = OrderSerializer(data=data)
        if serializer.is_valid():
            serializer.save()
            return JsonResponse(serializer.data, status=201)
        print(serializer.errors)
        return JsonResponse(serializer.errors, status=400)
</code></pre>
<p>An example request looks like:</p>
<pre><code>echo -n '{"customer": "John Doe", "items": [{"menu_item_id": 1, "quantity": 2}, {"menu_item_id": 2, "quantity": 1}]}' | http POST http://127.0.0.1:8000/order
</code></pre>
<p>When serializer.data is called an error is thrown:</p>
<blockquote>
<p>AttributeError: Got AttributeError when attempting to get a value for
field <code>menu_item_id</code> on serializer <code>OrderItemSerializer</code>. The
serializer field might be named incorrectly and not match any
attribute or key on the <code>MenuItem</code> instance. Original exception text
was: 'MenuItem' object has no attribute 'menu_item'.</p>
</blockquote>
<p>So it has gone through <code>serializer.save()</code> but errors on <code>serializer.data</code>. I can't work out why it can't get a value for <code>menu_item_id</code>, as it should be related to the <code>id</code> on <code>MenuItem</code> through the foreign key.</p>
|
<python><django><django-models><django-rest-framework><manytomanyfield>
|
2024-01-24 23:23:27
| 1
| 1,261
|
user1584120
|
77,876,547
| 3,112,985
|
Why are Alembic environment tags Optional[str]
|
<p>I'm running Alembic within our application code, where we have already created the sqlalchemy engine. I can pass the engine to the env.py using the <code>tag</code> parameter. However, the definition of the Alembic command I'm calling in our application is (copied from the Alembic python package):</p>
<pre><code>def upgrade(
    config: Config,
    revision: str,
    sql: bool = False,
    tag: Optional[str] = None,
) -> None:
</code></pre>
<p>I'm wondering why Alembic has the type restriction <code>Optional[str]</code> and not just an arbitrary type for the <code>tag</code>. I'm not enforcing these checks and things seem to work ok when passing the sqlalchemy engine for the <code>tag</code>. Is there a more appropriate way of doing what I want? I'm not sure if these types hints are specified for the <code>EnvironmentContext</code>. Or have I misunderstood these type specifications?</p>
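One clarifying point, sketched below with a hypothetical stand-in function: Python type annotations are documentation for static checkers, not runtime checks, which is why passing an engine as `tag` "works" despite the `Optional[str]` hint. The hint only reflects what the library's own callers are expected to pass.

```python
from typing import Optional

received = []

def upgrade(tag: Optional[str] = None) -> None:
    # The annotation is metadata only; CPython does not enforce it.
    received.append(tag)

class FakeEngine:
    pass

upgrade(tag=FakeEngine())            # no runtime error despite tag: Optional[str]
print(isinstance(received[0], str))  # → False
```

A static checker like mypy would flag the call, so relying on this behavior trades away type-checkability; passing the engine through `config.attributes` is the usual typed alternative to consider.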
|
<python><sqlalchemy><alembic>
|
2024-01-24 22:18:02
| 1
| 1,325
|
Jibbity jobby
|
77,876,520
| 1,591,921
|
Type-hint a Python function that *never* raises exception
|
<p>I have an async Python context manager that will (among other things) catch any exceptions raised in the with-block.</p>
<p>Is there a way to indicate to my users that wrapping it in a try/except is a bad idea (since the context manager already catches all exceptions)?</p>
<p>I think the answer is "no, it's impossible", but I couldn't find a definitive answer. I also understand that a function <em>can</em> always raise an exception, but in my particular case, wrapping this in a try/except will almost always be a mistake.</p>
|
<python><mypy><python-typing>
|
2024-01-24 22:12:39
| 0
| 11,630
|
Cyberwiz
|
77,876,489
| 16,547,860
|
Pyspark dataframe created using jdbc throws `year out of range` error when written to a parquet file in hdfs
|
<p>I am trying to do data ingestion from Oracle to Hive DB(hdfs) using Pyspark.</p>
<p>The source table has data row as given:</p>
<pre><code>LOAD_TIMESTAMP| OPERATION| COMMIT_TIMESTAMP|KSTOKEY| LPOLNUM|LSYSTYP| XPOLVSN| LPOLVSN| XSTOTYP| DSTODAT| DSTOTIM|
2024-01-24 16:23:...|2.0000000000|null| 2007.0000000000| QX-1123| BS2|1.0000000000| ORIGINAL TERM|4.0000000000|1998-04-23 00:00:00|0001-01-01 10:33:44|
</code></pre>
<p>I have this logic of code to read the data to a spark_df and then to write to a target location in hive hdfs path in parquet format:</p>
<pre><code># Read data from Oracle DB using Spark SQL
oracle_df = spark.read.format("jdbc").option("url", source_jdbc_url).option("driver", "oracle.jdbc.driver.OracleDriver").option("dbtable", "{}".format(final_select_statement))\
    .option("numPartitions", 20).option("partitionColumn", split_by_column).option("lowerBound", int(lowerbound_value))\
    .option("upperBound", int(upperbound_value)).option("user", source_jdbc_user).option("password", source_jdbc_pass).option("fetchsize", 10000).load()

oracle_df.show(20)

# Check if Dataframe is empty
if oracle_df.head(1):
    print("EXECUTING NON EMPTY.")
    # Write data to target HDFS path
    oracle_df.write.mode("overwrite").option("path", "{}/{}/{}".format(target_hdfs_path, schema, table_name)).partitionBy("dt").format("parquet").save()
else:
    # Create the target HDFS path including the partition
    target_partitioned_path = os.path.join(target_hdfs_path, schema, table_name, "dt=ingestion")
    empty_schema = oracle_df.schema
    empty_df = spark.createDataFrame([], schema=empty_schema)
    print("EXECUTING EMPTY")
    # Write the empty DataFrame to HDFS
    empty_df.write.mode("overwrite").option("path", target_partitioned_path).format("parquet").save()
</code></pre>
<p>The data is being read into a spark dataframe as oracle_df. The <code>oracle_df.show()</code> displays the data.</p>
<p>But right after that, the spark throws an error as given below.</p>
<pre><code>Traceback (most recent call last):
  File "Full_Ingestion_AQS.py", line 235, in &lt;module&gt;
    pool.map(ingest_table, chunk)
  File "/usr/lib64/python2.7/multiprocessing/pool.py", line 253, in map
    return self.map_async(func, iterable, chunksize).get()
  File "/usr/lib64/python2.7/multiprocessing/pool.py", line 572, in get
    raise self._value
ValueError: year is out of range
</code></pre>
<p>This code is part of the <code>ingest_table(table_name)</code> function, which is called within a <code>pool.map()</code> as given below:</p>
<pre><code># Split the tables into chunks for parallel processing
chunks = [tables_to_ingest[i:i + int(parallel_jobs)] for i in range(0, len(tables_to_ingest), int(parallel_jobs))]

# Create a ThreadPool with the desired number of workers
pool = ThreadPool(processes=int(parallel_jobs))

# Parallelize the ingestion of tables
for chunk in chunks:
    print("executing these tables: {}".format(chunk))
    pool.map(ingest_table, chunk)

# Close the ThreadPool to release resources
pool.close()
pool.join()
</code></pre>
<p>I know that the date <code>0001-01-01 10:33:44</code> is causing the error, but I want pyspark to tolerate it and still write the data to the hdfs target path.</p>
<p>Any suggestions are greatly appreciated. Thank you.</p>
|
<python><apache-spark><hadoop><pyspark><hdfs>
|
2024-01-24 22:05:53
| 0
| 312
|
Shiva
|
77,876,447
| 850,781
|
No module named 'distutils' despite setuptools installed
|
<p>I get:</p>
<pre class="lang-none prettyprint-override"><code>$ pip install -e .
...
ModuleNotFoundError: No module named 'distutils'
</code></pre>
<p>with Python 3.12, despite:</p>
<pre class="lang-none prettyprint-override"><code>$ conda install setuptools
...
All requested packages already installed.
$ pip install setuptools
Requirement already satisfied: setuptools in c:\users\...\appdata\local\miniconda3\envs\c312\
lib\site-packages (69.0.3)
</code></pre>
<p>My <em>pyproject.toml</em> file is:</p>
<pre class="lang-toml prettyprint-override"><code>[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
[project]
name = "zzz"
authors = [
...
]
description = "zzz"
readme = "README.md"
version = "36"
dependencies = [
"numpy",
"pandas",
]
[project.optional-dependencies]
test = [
"coverage",
]
speed = [
"python-rapidjson",
]
[project.urls]
Homepage = "..."
[tool.setuptools.packages.find]
exclude = ["tests"]
[tool.pylint.'MESSAGES CONTROL']
max-bool-expr = 10
</code></pre>
<p>My setup with Miniconda 3:</p>
<pre class="lang-none prettyprint-override"><code>$ conda create --name c312 python=3.12 pandas matplotlib scipy scikit-learn ...
...
$ conda activate c312
$ which pip
/c/Users/.../AppData/Local/miniconda3/envs/c312/Scripts/pip
$ pip --version
pip 23.3.2 from C:\Users\...\AppData\Local\miniconda3\envs\c312\Lib\site-packages\pip (python 3.12)
$ conda --version
conda 23.11.0
$ which python
/c/Users/.../AppData/Local/miniconda3/envs/c312/python
$ python
Python 3.12.1 | packaged by conda-forge | (main, Dec 23 2023, 07:53:56) [MSC v.1937 64 bit (AMD64)]
on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import setuptools
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\ssteingold\AppData\Local\miniconda3\envs\c312\Lib\site-packages\setuptools\__init__
.py", line 8, in <module>
import distutils.core
ModuleNotFoundError: No module named 'distutils'
>>> import distutils
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'distutils'
>>>
$ conda list '^(python|setuptools)$'
# packages in environment at C:\Users\...\AppData\Local\miniconda3\envs\c312:
#
# Name Version Build Channel
python 3.12.1 h2628c8c_1_cpython conda-forge
setuptools 69.0.3 pyhd8ed1ab_0 conda-forge
</code></pre>
<p>What am I doing wrong?</p>
<p>The following questions:</p>
<ul>
<li><em><a href="https://stackoverflow.com/q/77233855/850781">Why did I get an error ModuleNotFoundError: No module named 'distutils'?</a></em></li>
<li><em><a href="https://stackoverflow.com/q/77247893/850781">ModuleNotFoundError: No module named 'distutils' in Python 3.12</a></em></li>
</ul>
<p>basically say "install <code>setuptools</code>", but I already have it (and shouldn't it be auto-installed because I require it in <code>pyproject</code>?)</p>
<p><em><a href="https://stackoverflow.com/q/14295680/850781">Unable to import a module that is definitely installed</a></em> has nothing to do with mine.</p>
|
<python><conda><setuptools><distutils><python-3.12>
|
2024-01-24 21:55:48
| 2
| 60,468
|
sds
|
77,876,442
| 8,849,755
|
Why is pandas.DataFrame.join not doing what I expect?
|
<p>I have one data frame <code>inverters</code> which looks like this:</p>
<pre><code> Voltage (ADCu)
Time (ps) n_TDC signal
0 0 INV[0] 1404
INV[0] 1403
INV[0] 1403
INV[0] 1404
INV[0] 1403
... ...
5990 2 INV[8] 1182
INV[8] 1181
INV[8] 1181
INV[8] 1174
INV[8] 1192
</code></pre>
<p>Now I have the following code:</p>
<pre><code>_ = inverters['Voltage (ADCu)'] - inverters.groupby(['n_TDC','signal']).min()['Voltage (ADCu)']
_ = _ / _.groupby(['n_TDC','signal']).max()
_.name = 'Voltage (normalized)'
_ = _.to_frame()
_ = _.reset_index(drop=False).set_index(inverters.index.names).sort_index()
print(_)
inverters = inverters.join(_)
print(inverters)
</code></pre>
<p>which prints</p>
<pre><code> Voltage (normalized)
Time (ps) n_TDC signal
0 0 INV[0] 0.996522
INV[0] 0.994783
INV[0] 0.994783
INV[0] 0.996522
INV[0] 0.994783
... ...
5990 2 INV[8] 0.511062
INV[8] 0.509956
INV[8] 0.509956
INV[8] 0.502212
INV[8] 0.522124
[269973 rows x 1 columns]
Voltage (ADCu) Voltage (normalized)
Time (ps) n_TDC signal
0 0 INV[0] 1404 0.996522
INV[0] 1404 0.994783
INV[0] 1404 0.994783
INV[0] 1404 0.996522
INV[0] 1404 0.994783
... ... ...
5990 2 INV[8] 1192 0.511062
INV[8] 1192 0.509956
INV[8] 1192 0.509956
INV[8] 1192 0.502212
INV[8] 1192 0.522124
[4502709 rows x 2 columns]
</code></pre>
<p>I don't understand why the last print does not have 269973 rows. What is going on? The two data frames in the call to <code>join</code> have literally the same index... Even if I add <code>_.index = inverters.index</code> right before the call to <code>join</code> I get the same result.</p>
<p>What I want to do is the obvious horizontal concatenation matching by index value, i.e. just append <code>_</code> to the right of <code>inverters</code>.</p>
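<p>For reference, a minimal self-contained sketch (with tiny hypothetical frames, not the original data) of why a join on a non-unique index blows up the row count: every left row is matched against every right row sharing the same key, so duplicated index values multiply.</p>

```python
import pandas as pd

# Hypothetical frames sharing the duplicated index value 'a'.
left = pd.DataFrame({'x': [1, 2, 3]}, index=['a', 'a', 'a'])
right = pd.DataFrame({'y': [10, 20, 30]}, index=['a', 'a', 'a'])

# join matches by index value: 3 left rows x 3 right rows -> 9 result rows.
joined = left.join(right)
print(len(joined))  # 9, not 3
```

<p>If the two frames are known to be row-aligned, assigning the column positionally (something like <code>inverters['Voltage (normalized)'] = _['Voltage (normalized)'].to_numpy()</code>) sidesteps index matching entirely.</p>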
|
<python><pandas>
|
2024-01-24 21:53:55
| 2
| 3,245
|
user171780
|
77,876,395
| 11,422,610
|
How do I break out of this while loop in Python?
|
<pre><code>def main():
dmn_options = ["lenovo", "rasbberrypi", "Exit"]
while True:
display_menu(dmn_options)
user_choice = select_single_l(dmn_options);print(user_choice)
def display_menu(options):
print("Select an option:")
for index, option in enumerate(options, start=1):
print(f"{index}. {option}")
def select_single_l(options):
choice=None
while True:
try:
i = int(input("Enter the number of your choice: "))
if 1 <= i <= len(options) - 1:
choice=options[i - 1]
print(f"You selected {choice}.")
break
elif i == len(options):
print("Exiting the menu.")
break
else:
print("Invalid choice. Please enter a valid number.")
except ValueError:
print("Invalid input. Please enter a number.")
break
return choice
</code></pre>
<p>I've tried ChatGPT, but to no avail. I tell the code to break, but it does not break. When the user selects <code>Exit</code>, the selection is meant to stop, mimicking Bash's <code>select</code> statement. Please help.</p>
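<p>Not an authoritative fix, but one common pattern is to have the selection function return <code>None</code> as a sentinel when the "Exit" entry is chosen, and break the outer loop on that sentinel. A minimal sketch (function names here are illustrative, not the originals):</p>

```python
def select_single(options):
    # Return the chosen option, or None when the last entry ("Exit") is picked.
    while True:
        try:
            i = int(input("Enter the number of your choice: "))
        except ValueError:
            print("Invalid input. Please enter a number.")
            continue
        if i == len(options):        # the last entry is treated as "Exit"
            return None
        if 1 <= i < len(options):
            return options[i - 1]
        print("Invalid choice. Please enter a valid number.")

def menu_loop(options):
    while True:
        choice = select_single(options)
        if choice is None:           # sentinel from the inner function
            print("Exiting the menu.")
            break
        print(f"You selected {choice}.")
```

<p>This way only one <code>break</code> exists per loop, and the outer loop ends based on the returned value rather than on a <code>break</code> buried inside the helper.</p>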
|
<python><python-3.x><while-loop><break>
|
2024-01-24 21:44:31
| 2
| 937
|
John Smith
|
77,876,372
| 8,540,947
|
getting erroneous red points when controlling a Maya ramp texture using Python
|
<p>I am using a python script and the getAttr and setAttr Maya commands on 2d ramp textures in Maya 2022. The ramp attrs are saved out to a text file and read back in and applied to other ramps so the values can be shared. Sometimes, it works perfectly, but other times I am getting extra red points added to the ramp and it seems to go haywire. The expected ramp result after reading the file containing the attribution data would look like the image below, but without the red points. I can not find a pattern as to when it will work or break. However, once the ramp node has the issue, it seems to keep having the issue, but other ramp nodes work fine. Has anyone else experienced this problem and what causes the ramp to trigger extra red colored points when being called by a getAttr command?</p>
<pre><code>for attr in attributes:
attr_value = cmds.getAttr(ramp + '.' + attr)
ramp_preset_text = attr + ':' + str(attr_value) + '\n'
ramp_preset_file.write(ramp_preset_text)
n = 0
if cmds.attributeQuery('colorEntryList',node = ramp, ex = True ):
number_of_entries = cmds.getAttr(ramp + '.colorEntryList', size = True)
number_of_entries_text = 'entry number:' + '*' + str(number_of_entries) + '*' + '\n'
ramp_preset_file.write(number_of_entries_text)
print('number_of_entries_text', number_of_entries_text)
print('n', n)
while n < number_of_entries:
print('while n', n)
entry_position = cmds.getAttr(ramp + '.colorEntryList[' + str(n) + '].position')
entry_color = cmds.getAttr(ramp + '.colorEntryList[' + str(n) + '].color')
entry_preset_text = ('entry_' + str(n) + ' ' + ' position:' + str(entry_position) + ' color:' + str(entry_color) + '\n')
ramp_preset_file.write(entry_preset_text)
n = n + 1
ramp_preset_file.close()
</code></pre>
<p><a href="https://i.sstatic.net/K3Sii.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K3Sii.jpg" alt="enter image description here" /></a></p>
|
<python><textures><maya>
|
2024-01-24 21:40:06
| 2
| 519
|
winteralfs
|
77,876,253
| 13,171,274
|
Sort imports alphabetically with ruff
|
<p>I'm trying ruff for the first time and am not able to sort imports alphabetically using the default settings. According to the docs, ruff should be very similar to isort.</p>
<p>Here is a short example with unsorted imports</p>
<pre><code>import os
import collections
</code></pre>
<p>Run ruff command</p>
<pre><code>$ ruff format file.py
1 file left unchanged
</code></pre>
<p>But if I run isort the imports are properly sorted</p>
<pre><code>$ isort file.py
Fixing .../file.py
</code></pre>
<p>What am I doing wrong?</p>
|
<python><isort><ruff>
|
2024-01-24 21:12:26
| 2
| 358
|
alan
|
77,876,062
| 11,197,957
|
Safety of passing integers between Python and Rust
|
<h2>Is This Safe?</h2>
<p>Suppose I wanted to call some <strong>Rust</strong> code from within <strong>Python</strong>. Suppose my <code>lib.rs</code> looks something like this:</p>
<pre class="lang-rust prettyprint-override"><code>#[no_mangle]
pub extern fn add(left: i32, right: i32) -> i32 {
return left+right;
}
</code></pre>
<p>And supposed I called this code from Python using <code>ctypes</code> like this:</p>
<pre class="lang-py prettyprint-override"><code>import ctypes
def rust_add(left, right):
rust_lib = ctypes.CDLL("path/to/the/so/file")
return rust_lib.add(left, right)
</code></pre>
<p>Then <strong>is the above safe?</strong></p>
<h2>An Anticipated Snag: Integer Overflow</h2>
<h3>Overflow in the Inputs</h3>
<p>One issue which even I - with my extremely tenuous grasp of Rust - can foresee is <strong>integer overflow</strong>. I imagine that, if either <code>left</code> or <code>right</code> was greater than 2^32, that would cause bad things to happen: either Rust would hit a run-time error (most likely), or it would just return silly answers (worst case). Would some type-checking in the Python, perhaps using NumPy's <code>uint32</code>, be sufficient to prevent this issue?</p>
<h3>Overflow in the Workings</h3>
<p>This one is more insidious: suppose I ask, from Python, my Rust function to add (2^32)-1 and (2^32)-2. There shouldn't be any immediate overflow, but there will be once we add the two numbers together. As I understand it, on hitting such an overflow, Rust will panic in debug mode but try its best to soldier on in release mode.</p>
<h2>Summary</h2>
<ul>
<li>Is there any well-known practice for smoothing out the difficulties that arise when passing integers between a dynamic-width language, such as Python, and a fixed-width one, such as Rust, other than thorough testing?</li>
<li>Are there any issues with passing integers between Rust and Python <em>apart from</em> integer overflow?</li>
</ul>
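<p>As a concrete illustration of the boundary issue (not a full answer): <code>ctypes</code> fixed-width types silently wrap out-of-range Python integers into the C range, and declaring <code>argtypes</code>/<code>restype</code> on the loaded function is the usual way to at least pin the widths at the boundary. A minimal sketch; the library path is hypothetical, so the loading lines are left commented:</p>

```python
import ctypes

# Out-of-range values wrap silently when coerced to a fixed-width C type:
wrapped = ctypes.c_int32(2**31).value
print(wrapped)  # -2147483648

# When actually loading the library, pinning the signature is standard practice:
# rust_lib = ctypes.CDLL("path/to/the/so/file")  # hypothetical path
# rust_lib.add.argtypes = (ctypes.c_int32, ctypes.c_int32)
# rust_lib.add.restype = ctypes.c_int32
```

<p>Pinning <code>argtypes</code> makes <code>ctypes</code> reject obviously wrong argument kinds, but it does not range-check: that validation still has to happen in Python or in the Rust code itself.</p>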
|
<python><rust><ctypes><integer-overflow>
|
2024-01-24 20:34:26
| 1
| 734
|
Tom Hosker
|
77,875,984
| 10,467,153
|
How to activate a Python virtual environment without knowing the terminal
|
<p>I need to activate a Python virtual environment from inside a Makefile.</p>
<p>Python <code>venv</code> module creates virtual environment with activation files for 3 different terminals:</p>
<ul>
<li>activate</li>
<li>activate.bat</li>
<li>Activate.ps1</li>
</ul>
<p>Is it possible to activate a virtual environment from inside the Makefile without knowing what type of terminal (CMD, PowerShell, Bash) the person will run the Makefile command from?</p>
|
<python><makefile><terminal><virtualenv>
|
2024-01-24 20:19:30
| 1
| 352
|
tikej
|
77,875,969
| 10,935,321
|
Azure Machine Learning pin python version on notebook
|
<p>I'm looking for a way to use Python 3.10 on a compute resource associated with Azure ML Studio.
It appears 3.10 is available with ML Studio, and we select 3.10 with SDK v2 from the kernel selection dropdown, but running "python --version" in the terminal reports v3.8.5. Is there a way to get Azure ML Studio to use a different version of Python?
We've tried the environment option but had no luck.
It looks like the kernel selection does not match what the OS is running; perhaps it's designed to work this way?</p>
|
<python><azure><azure-machine-learning-service>
|
2024-01-24 20:16:09
| 1
| 385
|
mac
|
77,875,930
| 5,790,653
|
Find the lowest number among JSON values and the name it belongs to
|
<p>I have <code>file.json</code>:</p>
<pre class="lang-json prettyprint-override"><code>[
{
"name": "personA",
"valA": 2,
"valB": 0
},
{
"name": "personB",
"valA": 3,
"valB": 2
},
{
"name": "personC",
"valA": 2,
"valB": 1
},
{
"name": "personD",
"valA": 5,
"valB": 3
}
]
</code></pre>
<p>This is the Python code:</p>
<pre class="lang-py prettyprint-override"><code>with open('file.json', 'r') as file:
data = json.load(file)
valA_numbers = []
for valA in data:
valA_numbers.append(valA['valA'])
min_number = min(tuple(valA_numbers))
</code></pre>
<p>This is the logic I have:</p>
<blockquote>
<p>Open <code>file.json</code> and see who has the least <code>valA</code> value. Increment that +1 and print the related <code>name</code> should do today's <code>valA</code>.</p>
</blockquote>
<p>With my Python code, <code>min_number = 2</code>. I increment it by 1, but I don't know how to determine whether this <code>2</code> belongs to <code>personA</code> or <code>personC</code>.</p>
<p>I expect to have this:</p>
<blockquote>
<p>Number 2 belongs to <code>personA</code>.</p>
</blockquote>
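<p>For what it's worth, a minimal sketch of one way to get the whole record rather than just the number: <code>min</code> with a <code>key</code> function returns the first dictionary with the smallest <code>valA</code>, so the name comes along for free (the data is inlined here instead of read from <code>file.json</code>):</p>

```python
data = [
    {"name": "personA", "valA": 2, "valB": 0},
    {"name": "personB", "valA": 3, "valB": 2},
    {"name": "personC", "valA": 2, "valB": 1},
    {"name": "personD", "valA": 5, "valB": 3},
]

# min with a key returns the first element attaining the minimum valA,
# so ties (personA vs personC) resolve to the earlier entry.
winner = min(data, key=lambda d: d["valA"])
print(f"Number {winner['valA']} belongs to {winner['name']}.")
# -> Number 2 belongs to personA.
```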
|
<python>
|
2024-01-24 20:07:42
| 2
| 4,175
|
Saeed
|
77,875,829
| 10,318,539
|
Connection of python 3.10.9 with Matrikon Server
|
<p>I am currently working on connecting Python with a Matrikon OPC Server simulation. I am encountering several issues: I am not aware of its default port, and I am running into errors in the process.</p>
<p>Python Code:</p>
<pre><code>import opcua
c = opcua.Client("opc.tcp://localhost:53530/Matrikon.OPC.Simulation.1")
c.connect()
</code></pre>
<p>Errors:</p>
<pre><code>Discovery Error: [WinError 10061] No connection could be made because the target machine actively refused it
</code></pre>
<p>My Matrikon server:
<a href="https://i.sstatic.net/y3Jau.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y3Jau.png" alt="enter image description here" /></a></p>
|
<python><opc>
|
2024-01-24 19:45:12
| 1
| 485
|
Engr. Khuram Shahzad
|
77,875,736
| 9,462,829
|
fetch GeoJSON from Flask backend and use it in Google Maps Javascript API
|
<p>I'm working on an app that'll draw some shapes on a Google Maps map, using Flask as the backend. As a test, I'm trying to draw <a href="https://storage.googleapis.com/mapsdevsite/json/google.json" rel="nofollow noreferrer">these shapes</a> into my map. I have this simple function:</p>
<pre><code>@app.route('/test', methods=['POST'])
def receive_pos_data():
r = requests.get('https://storage.googleapis.com/mapsdevsite/json/google.json')
return json.dumps(r.json())
</code></pre>
<p>And then I have this script to fetch it:</p>
<pre><code>fetch('/test', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: jsonPos,
})
.then(response => response.json())
.then(data_geojson => {
// Handle the response from the server
console.log('Server response:', data_geojson);
})
.then(data_geojson => map.data.loadGeoJson(data_geojson))
.catch(error => {
console.error('Error:', error);
});
</code></pre>
<p>You can ignore its body, it's something for the future. Then, my InitMap function looks like this:</p>
<pre><code>function initMap() {
map = new google.maps.Map(document.getElementById("map"), {
zoom: 4,
center: { lat: -28, lng: 137 },
});
}
</code></pre>
<p>So, if I add the data directly through <code>map.data.loadGeoJson("https://storage.googleapis.com/mapsdevsite/json/google.json");</code> it works no problem, but if I try to do it through the fetch requests, I get or:</p>
<pre><code>util.js:231
GET http://127.0.0.1:5000/undefined 404 (NOT FOUND)
</code></pre>
<p>or</p>
<pre><code>util.js:231
GET http://127.0.0.1:5000/[object%20Object] 404 (NOT FOUND)
</code></pre>
<p>If I do this instead:</p>
<pre><code>fetch('/test', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: jsonPos,
})
.then(response => response.json())
.then(data_geojson => {
// Handle the response from the server
console.log('Server response:', data_geojson);
map.data.loadGeoJson(data_geojson); // moved this inside the second then statement
})
</code></pre>
<p>Internet says my JSON might be malformed, but I'm not sure how to fix it.</p>
<p>My desired output would be being able to plot this by fetching the geoJson from Python:</p>
<p><a href="https://i.sstatic.net/MvOgi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MvOgi.png" alt="enter image description here" /></a></p>
<p>Which is what happens when I do <code>map.data.loadGeoJson("https://storage.googleapis.com/mapsdevsite/json/google.json");</code> directly on Javascript</p>
|
<javascript><python><flask>
|
2024-01-24 19:26:24
| 1
| 6,148
|
Juan C
|
77,875,632
| 20,793,070
|
How to create pandas dataframe without rounding long values
|
<p>I have the sample:</p>
<pre><code>import pandas as pd
data = [[1706121660, 1.152229705572702e-06, 1.074741466619169e-06, 1.074741466619169e-06, 1.074741466619169e-06, 0.28], [1706121600, 1.125333298543622e-06, 1.157527924446797e-06, 1.088866832550798e-06, 1.152229705572702e-06, 4.681397952]]
df = pd.DataFrame(data)
df
</code></pre>
<p>I see this result:</p>
<pre><code> 0 1 2 3 4 5
0 1706121660 0.000001 0.000001 0.000001 0.000001 0.280000
1 1706121600 0.000001 0.000001 0.000001 0.000001 4.681398
</code></pre>
<p>How can I see the real values without rounding in the result? Thank you!</p>
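<p>For context, a small sketch showing that the stored values keep full <code>float64</code> precision and only the printed representation rounds; the display can be widened with the standard <code>display.float_format</code> option:</p>

```python
import pandas as pd

df = pd.DataFrame([[1706121660, 1.152229705572702e-06]])

# The underlying value is not rounded; only its printed form is.
stored = df.iloc[0, 1]
print(stored)  # 1.152229705572702e-06

# Show more digits when printing the frame:
pd.set_option('display.float_format', lambda v: f'{v:.15g}')
print(df)
```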
|
<python><pandas><dataframe>
|
2024-01-24 19:06:17
| 4
| 433
|
Jahspear
|
77,875,529
| 2,893,496
|
How to find a tag with an arbitrary attribute having an arbitrary value with Beautiful Soup?
|
<p>I am trying to find an <code>&lt;a&gt;</code> tag whose <code>itemprop</code> attribute is <code>url</code>. So far the only approach I have found is to use a lambda:</p>
<pre><code>soup.find(lambda a : a.name=='a' and a.has_attr('itemprop') and a['itemprop']=='url')
</code></pre>
<p>But somehow I cannot believe that this is the best approach. Is there something simpler?</p>
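<p>For comparison, a minimal sketch of the shorter form: <code>find</code> accepts attribute filters directly, either as keyword arguments or as an <code>attrs</code> dict (the sample HTML is made up):</p>

```python
from bs4 import BeautifulSoup

html = '<a href="/other">no itemprop</a><a href="/page" itemprop="url">link</a>'
soup = BeautifulSoup(html, 'html.parser')

# Attribute filters can be passed straight to find:
tag = soup.find('a', itemprop='url')               # keyword form
same = soup.find('a', attrs={'itemprop': 'url'})   # dict form
print(tag['href'])  # /page
```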
|
<python><beautifulsoup>
|
2024-01-24 18:45:32
| 1
| 5,886
|
v010dya
|
77,875,473
| 1,744,491
|
Pulumi error: key: massage(attr[key], seen) for key in attr if not key.startswith("_")
|
<p>I'm trying to build a simple bucket with an intelligent tiering configuration, but I'm getting an error from the Pulumi code. The error message isn't intuitive, and it seems to be a bug in the project or some config that I'm missing. Here is my code:</p>
<pre><code>import pulumi
from pulumi_aws import s3
# logging bucket
logging = s3.Bucket("my-bucket-name", bucket="my-bucket-name", tags={"Owner": "me"})
logging_intelligent_tiering = s3.BucketIntelligentTieringConfiguration(
"my-bucket-name" + "-intelligent-tiering",
bucket=logging.id,
tierings=[
s3.BucketIntelligentTieringConfigurationTieringArgs(
access_tier="ARCHIVE_ACCESS", days=90
),
s3.BucketIntelligentTieringConfigurationTieringArgs(
access_tier="DEEP_ARCHIVE_ACCESS", days=180
),
],
)
pulumi.export(logging.id, logging)
pulumi.export(logging_intelligent_tiering.id, logging_intelligent_tiering)
</code></pre>
<p>Then the error I'm getting (If needed I have the full log):</p>
<pre><code> error: Program failed with an unhandled exception:
Traceback (most recent call last):
File "C:\Program Files (x86)\Pulumi\pulumi-language-python-exec", line 197, in <module>
loop.run_until_complete(coro)
File "C:\Users\guilherme.francis\.pyenv\pyenv-win\versions\3.11.6\Lib\asyncio\base_events.py", line 653, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "C:\Users\guilherme.francis\Github\cda_iac\.venv\Lib\site-packages\pulumi\runtime\stack.py", line 137, in run_in_stack
await run_pulumi_func(lambda: Stack(func))
File "C:\Users\guilherme.francis\Github\cda_iac\.venv\Lib\site-packages\pulumi\runtime\stack.py", line 49, in run_pulumi_func
func()
File "C:\Users\guilherme.francis\Github\cda_iac\.venv\Lib\site-packages\pulumi\runtime\stack.py", line 137, in <lambda>
await run_pulumi_func(lambda: Stack(func))
^^^^^^^^^^^
File "C:\Users\guilherme.francis\Github\cda_iac\.venv\Lib\site-packages\pulumi\runtime\stack.py", line 162, in __init__
self.register_outputs(massage(self.outputs, []))
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\guilherme.francis\Github\cda_iac\.venv\Lib\site-packages\pulumi\runtime\stack.py", line 206, in massage
return massage_complex(attr, seen)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\guilherme.francis\Github\cda_iac\.venv\Lib\site-packages\pulumi\runtime\stack.py", line 241, in massage_complex
return {
^
File "C:\Users\guilherme.francis\Github\cda_iac\.venv\Lib\site-packages\pulumi\runtime\stack.py", line 242, in <dictcomp>
key: massage(attr[key], seen) for key in attr if not key.startswith("_")
^^^^^^^^^^^^^^^^^^^
TypeError: 'Output' object is not callable
</code></pre>
<p>Any suggestions on why this is happening?</p>
|
<python><pulumi-python>
|
2024-01-24 18:34:57
| 1
| 670
|
Guilherme Noronha
|
77,875,331
| 7,748,291
|
Relative import of local package with VS code
|
<p>I typically have a folder open in VS Code which contains my Python project with a <code>.venv</code> folder.</p>
<p>I placed my current project in such a way that I have to go back to the parent folder to find my other module.</p>
<pre><code>workspace_folder
-- module_to_import
--- __init__.py
--- submodule1
---- __init__.py
-- project_currently_working_on
</code></pre>
<p>When I try to do a relative import as such</p>
<pre><code>import ..module_to_import
</code></pre>
<p>I am getting a <code>ImportError: attempted relative import with no known parent package</code></p>
<p>I open the folder in VS code and configured the <code>launch.json</code> as</p>
<pre><code> // Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"name": "Python: Module - name module",
"type": "python",
"request": "launch",
// "program": "${file}",
"cwd": "${workspaceRoot}",
"env": {"PYTHONPATH": "${workspaceRoot}"},
"console": "integratedTerminal"
}
]
}
</code></pre>
<p>Tried this with <code>settings.json</code></p>
<pre><code>{
"editor.minimap.enabled": false,
"editor.renderWhitespace": "all",
"breadcrumbs.enabled": false,
"[python]": {
"editor.formatOnType": true
},
"python.terminal.launchArgs": [
],
"jupyter.interactiveWindow.creationMode": "perFile",
"jupyter.askForKernelRestart": false,
"workbench.startupEditor": "none",
"C_Cpp.default.compilerPath": "/usr/bin/g++",
"notebook.confirmDeleteRunningCell": false,
"jupyter.askForLargeDataFrames": false,
"[cpp]": {
"editor.wordBasedSuggestions": "off",
"editor.suggest.insertMode": "replace",
"editor.semanticHighlighting.enabled": true
},
"python.analysis.extraPaths": ["../module_to_import"], // <----- added
"explorer.confirmDelete": false,
"jupyter.interactiveWindow.textEditor.executeSelection": true,
}
</code></pre>
<p>To no avail.</p>
<p>If I put this instead, <code>"python.analysis.extraPaths": [".."]</code>, I can see the module in the IDE, and no warning appears. However, I get a <code>ModuleNotFoundError</code> when running the script:</p>
<pre><code>import module_to_import // <--- IDE automatically proposes me this
from module_to_import import submodule1 // <--- Can even see it pop-up in the IDE
</code></pre>
<p>Then, I asked chatGPT which proposed to create a workspace in such a way that:</p>
<pre><code>workspace_folder
-- module_to_import
-- project_currently_working_on
</code></pre>
<p>With the workspace configuration having:</p>
<pre><code>{
"folders": [
{
"path": "."
}
],
"settings": {
"python.envFile": "${workspaceFolder}/.env",
"python.analysis.extraPaths": ["${workspaceFolder}"]
},
}
</code></pre>
<p>Again, same <code>ModuleNotFoundError</code>.</p>
<p>When I try a relative import, I can see all of the other projects/modules when typing <code>..</code>. However, I again get an <code>ImportError: attempted relative import with no known parent package</code> when running the script with:</p>
<pre><code>import ..module_to_import // <--- IDE proposes a list of module after typing .. with the option of the module_to_import visible
</code></pre>
<p>Any idea how I could get a configuration that would allow me to easily add a local module to my current project?</p>
|
<python><relative-import>
|
2024-01-24 18:06:25
| 0
| 363
|
mooder
|
77,875,253
| 4,240,413
|
Why does local inference differ from the API when computing Jina embeddings?
|
<p>I am computing <a href="https://jina.ai/news/jina-embeddings-2-the-best-solution-for-embedding-long-documents/" rel="nofollow noreferrer">Jina v2 embeddings</a> via the <code>transformers</code> Python libraries and via the API (see <a href="https://jina.ai/embeddings/" rel="nofollow noreferrer">https://jina.ai/embeddings/</a>).</p>
<p>With <code>transformers</code> I can run something like</p>
<pre class="lang-py prettyprint-override"><code>from transformers import AutoModel
sentences = ['How is the weather today?']
model = AutoModel.from_pretrained('jinaai/jina-embeddings-v2-base-en', trust_remote_code=True)
embeddings_1 = model.encode(sentences)
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code>from sentence_transformers import SentenceTransformer
model = SentenceTransformer('jinaai/jina-embeddings-v2-base-en')
embeddings_2 = model.encode(sentences)
</code></pre>
<p>and the resulting <code>embeddings_1</code> and <code>embeddings_2</code> match.</p>
<p>However if I use the Jina API e.g. via</p>
<pre class="lang-py prettyprint-override"><code>import requests
url = 'https://api.jina.ai/v1/embeddings'
headers = {
'Content-Type': 'application/json',
'Authorization': 'Bearer jina_123456...' # visit https://jina.ai/embeddings/ for an API key
}
data = {
'input': sentences,
'model': 'jina-embeddings-v2-base-en' # note that the model name matches
}
response = requests.post(url, headers=headers, json=data)
embeddings_3 = eval(response.content)["data"][0]["embedding"]
</code></pre>
<p><code>embeddings_3</code> differ from the other two arrays by a small difference, around 2e-4 in absolute value on average. I see this discrepancy both with CPU and GPU runtimes. What am I doing wrong?</p>
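<p>Not specific to Jina, but a small sketch of how a discrepancy of that magnitude can be quantified: per-component differences around 2e-4 typically leave the cosine similarity extremely close to 1, which is why such gaps are usually attributed to numeric/runtime differences rather than a different model. The arrays here are synthetic stand-ins for the real embeddings:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=768)               # stand-in for embeddings_1
b = a + rng.normal(0, 2e-4, size=768)  # stand-in for embeddings_3

max_abs_diff = np.max(np.abs(a - b))
cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(max_abs_diff, cosine)
```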
|
<python><nlp><huggingface-transformers><sentence-transformers><jina>
|
2024-01-24 17:52:59
| 1
| 6,039
|
Davide Fiocco
|
77,875,128
| 2,386,113
|
How to align the data along the y-axis in Matplotlib
|
<p>I have created a MWE to explain my issue. As seen in the plot below, the curves are moving along the horizontal axis. I need to force the curves to move along the vertical axis, and set the corresponding ticks on the x-axis.</p>
<p><a href="https://i.sstatic.net/TlPmE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TlPmE.png" alt="enter image description here" /></a></p>
<p><strong>MWE:</strong></p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
employees = np.arange(0, 12, 76)
sub_one_grades = np.random.normal(0, 1, 76) + 10
sub_two_grades = np.random.normal(0, 1, 76) + 15
sub_three_grades = np.random.normal(0, 1, 76) + 30
# plot
plt.plot(sub_one_grades)
plt.plot(sub_two_grades)
plt.plot(sub_three_grades)
#plt.yticks(employees) #<------REQUIRED, but not working
plt.show()
print()
</code></pre>
<p><strong>REQUIRED PLOT:</strong></p>
<p><a href="https://i.sstatic.net/BPjiG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BPjiG.png" alt="enter image description here" /></a></p>
|
<python><matplotlib>
|
2024-01-24 17:32:55
| 1
| 5,777
|
skm
|
77,875,111
| 8,919,749
|
How to get minutes between two dates?
|
<p>I have the following dates (<code>"%Y-%m-%dT%H:%M:%S.%f"</code> format):</p>
<pre><code>dateA = "2024-01-24T03:09:28.000523"
dateB = "2024-01-24T03:24:04.000719"
</code></pre>
<p>How can we get the difference between these two dates, with the result in minutes?</p>
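<p>For reference, a minimal sketch using only the standard library: parse both strings with <code>datetime.strptime</code>, subtract to get a <code>timedelta</code>, and convert its total seconds to minutes.</p>

```python
from datetime import datetime

fmt = "%Y-%m-%dT%H:%M:%S.%f"
dateA = "2024-01-24T03:09:28.000523"
dateB = "2024-01-24T03:24:04.000719"

# Subtracting two datetimes yields a timedelta; total_seconds covers days too.
delta = datetime.strptime(dateB, fmt) - datetime.strptime(dateA, fmt)
minutes = delta.total_seconds() / 60
print(minutes)  # about 14.6
```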
|
<python><date><datetime>
|
2024-01-24 17:30:49
| 1
| 380
|
Mamaf
|
77,875,055
| 3,358,927
|
How to force using a lower-version Python library in Databricks
|
<p>I am trying to use Hugging Face <code>transformers</code> on Databricks Runtime 13.3. However, the current version of transformers can't be imported because it conflicts with the version of <code>urllib3</code> (2.1.0) that is preinstalled on the cluster. A lower version would work with <code>transformers</code>. However, according to Databricks, none of the libraries that come with the runtime can be uninstalled.</p>
<p>I tried something like <a href="https://i.sstatic.net/6iwg5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6iwg5.png" alt="this" /></a>, but it doesn't let me switch. What is the correct way to force using a lower version of a library on Databricks?</p>
|
<python><pip><databricks>
|
2024-01-24 17:22:39
| 1
| 5,049
|
ddd
|
77,875,049
| 1,254,515
|
Method association in python dataclass
|
<p>I have to maintain code which uses dataclasses. In these, methods from libraries are imported and associated and I find the way it's done confusing and not very readable. It's done like this (a very simplified example) :</p>
<pre><code>from dataclasses import dataclass
from code.custom.package import method_a, method_b
@dataclass
class myClass:
import numpy as np
var1: str
var2: str
var3: np.ndarray = None
myClass.method_a = method_a
myClass.method_b = method_b
</code></pre>
<p>The methods generally do operations on the data structures contained in the class.</p>
<p>Is this a common practice? Is this the proper way to do this? Or should this be avoided and done in another way?</p>
|
<python><python-dataclasses>
|
2024-01-24 17:22:05
| 2
| 323
|
Oliver Henriot
|
77,874,908
| 14,230,633
|
Polars `ValueError: could not convert value 'Unknown' as a Literal` when taking mean of function of variable
|
<pre><code>┌───────┬─────┐
│ group ┆ var │
│ --- ┆ --- │
│ i64 ┆ str │
╞═══════╪═════╡
│ 1 ┆ x │
│ 1 ┆ x │
│ 2 ┆ x │
│ 2 ┆ y │
│ 3 ┆ y │
│ 3 ┆ y │
└───────┴─────┘
</code></pre>
<p>Suppose the above dataframe is called <code>df</code> and I want to get the percentage of variables named <code>'x'</code> by group. The following gives me <code>ValueError: could not convert value 'Unknown' as a Literal</code>. Can someone explain why?</p>
<pre><code>df.group_by('group').agg(
pl.mean((pl.col('var') == 'x').cast(pl.Int8))
)
</code></pre>
|
<python><dataframe><python-polars>
|
2024-01-24 17:02:07
| 1
| 567
|
dfried
|
77,874,878
| 2,410,605
|
Selenium Python: How do I read the child elements of the '//following' td tag?
|
<p><a href="https://i.sstatic.net/bD0ge.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bD0ge.png" alt="enter image description here" /></a></p>
<p>I have this list that I need to cycle through until I reach the munprod DB on the host server ending with "01". I'm using the code below and apparently have a syntactical error that I cannot figure out because it's dropping into the code for the if statement as if it were true, but it's doing it for host = '02' and not '01'.</p>
<pre><code>#5. Select munprod from Availability Groups tab **NOTE: There are 2 munprods, get the one on 01...NOT 02
munprods = WebDriverWait(browser, 10).until(EC.visibility_of_all_elements_located((By.XPATH, "//span[contains(text(), 'munprod')]")))
for munprod in munprods:
munprodServer = munprod.find_element(By.XPATH, '//following::td')
if munprodServer.find_element(By.XPATH, '//div[contains(text(), "01.jefferson.ketsds.net")]'):
munprod.click()
logger.info("Step 5: Databases --> munprod 01.jefferson.ketsds.net selected")
break
</code></pre>
<p>I know it's host '02' because when it hits the munprod.click() line it goes to the next screen and shows that it's for host '02'. Can a pair of fresh eyes spot what I'm doing wrong?</p>
|
<python><selenium-webdriver>
|
2024-01-24 16:57:30
| 2
| 657
|
JimmyG
|
77,874,837
| 6,766,408
|
Calling list values in Robot Framework returns the error "List '${element}' has no item in index 0"
|
<p>I am using Robot Framework with the Selenium Library in PyCharm. I need to fetch the product ID from the table below and then save it in a list.</p>
<p><a href="https://i.sstatic.net/oWbxV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oWbxV.png" alt="enter image description here" /></a></p>
<p>Below is the code to create the list:</p>
<pre><code>${list}= Create List ${ProductID}
${data}= create list ${list}
Append To Csv File ProductID.csv ${data}
</code></pre>
<p>I got the values in the CSV:</p>
<p><a href="https://i.sstatic.net/cjBqR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cjBqR.png" alt="enter image description here" /></a></p>
<p>Then I call the saved values in another script like this:</p>
<pre><code>${rows} = get element count xpath=//*[@id="YearlyReport"]/tbody/tr
sleep 2
log to console Total Rows ${rows}
${ProductID}= Read Csv File To List ProductID.csv
FOR ${element} IN @{ProductID}
Log To Console ${element}[0]
FOR ${rowno} IN RANGE 1 ${rows}+1
Input Text xpath=//input[@placeholder='Search ...'] ${element}[0]
sleep 5
${Details} = Get Text xpath=//*[@id="YearlyReport"]/tbody/tr[${rowno}]/td[10]
log to console In Search Stage of ${Details}[0] is ${Details}
</code></pre>
<p>Getting below error</p>
<p><strong>List '${element}' has no item in index 0.</strong> If I change value to 1 then it gives same error <strong>List '${element}' has no item in index 1.</strong></p>
|
<python><list><selenium-webdriver><pycharm><robotframework>
|
2024-01-24 16:53:05
| 2
| 312
|
ADS KUL
|
77,874,808
| 2,556,795
|
How to get all possible combinations at a column level
|
<p>I have the following dataframe, which is a subset of my larger dataframe.</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'item':['a','a','a','b','b','a','b','a','a','b','a','b','a','a','b'],
'country':['in','in','in','in','in','in','us','us','us','us','us','uk','uk','uk','uk'],
'timeframe':['sep-23','oct-23','nov-23','dec-23','q3\'23','q1\'23','sep-23','jul-23','aug-23','q2\'23','q1\'23','jul-23','aug-23','oct-23','sep-23'],
'value':[10,23,26,46,21,34,65,87,56,43,32,16,10,74,85]})
</code></pre>
<pre class="lang-none prettyprint-override"><code>df
item country timeframe value
0 a in sep-23 10
1 a in oct-23 23
2 a in nov-23 26
3 b in dec-23 46
4 b in q3'23 21
...
7 a us jul-23 87
8 a us aug-23 56
9 b us q2'23 43
</code></pre>
<p>There are different timeframes for each country and item, and there can be missing timeframes for a few countries per item.
Wherever there is a missing timeframe, I want to include it with a <code>NaN</code> value. For each item, the minimum date or quarter will be the starting point. In the dataframe, for item <code>a</code>, country <code>in</code> doesn't have fields for q2'23, jul-23 and aug-23; similarly, <code>us</code> doesn't have values for q3'23, and so on.
Similarly, for item <code>b</code>, <code>uk</code> doesn't have quarterly data (the minimum quarter is q2'23, where US has values).</p>
<p>I tried using <code>MultiIndex.from_product</code> but irrespective of <code>item</code> this is creating product of all combinations. But I would need combinations at each item level.</p>
<pre class="lang-py prettyprint-override"><code>mux = pd.MultiIndex.from_product([df['item'].unique(),df['country'].unique(),df['timeframe'].unique()], names=('item','country','timeframe'))
mux.to_frame().reset_index(drop=True).merge(df,on=['item','country','timeframe'],how='outer')
</code></pre>
<p>One possible solution that I could think of is creating list of unique <code>item</code> and using for loop filtering each item and then using <code>MultiIndex</code> which could serve the purpose but there must be a better way to handle this.</p>
<p><strong>Edit:</strong>
The question was marked as a duplicate, but the answers provided there give me combinations of all columns, whereas I need combinations per item, which I couldn't get.</p>
<p><strong>PS:</strong> Please feel free to mark it as duplicate if there is an answer already provided.</p>
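<p>One possible sketch (my own, hedged; the helper name <code>expand</code> is made up): build the <code>MultiIndex</code> per item inside a <code>groupby</code>, so the product only covers the countries and timeframes actually seen for that item.</p>

```python
import pandas as pd

df = pd.DataFrame({'item': ['a','a','a','b','b','a','b','a','a','b','a','b','a','a','b'],
                   'country': ['in','in','in','in','in','in','us','us','us','us','us','uk','uk','uk','uk'],
                   'timeframe': ['sep-23','oct-23','nov-23','dec-23',"q3'23","q1'23",'sep-23','jul-23','aug-23',"q2'23","q1'23",'jul-23','aug-23','oct-23','sep-23'],
                   'value': [10,23,26,46,21,34,65,87,56,43,32,16,10,74,85]})

def expand(g: pd.DataFrame) -> pd.DataFrame:
    # Cartesian product restricted to THIS item's own countries/timeframes
    mux = pd.MultiIndex.from_product(
        [g['item'].unique(), g['country'].unique(), g['timeframe'].unique()],
        names=['item', 'country', 'timeframe'])
    return (g.set_index(['item', 'country', 'timeframe'])
             .reindex(mux)
             .reset_index())

out = pd.concat([expand(g) for _, g in df.groupby('item')], ignore_index=True)
print(len(out))  # 33 rows: 3*6 combos for item 'a' plus 3*5 for item 'b'
```

Missing combinations such as (<code>a</code>, <code>in</code>, jul-23) appear with <code>NaN</code> in <code>value</code>.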
|
<python><pandas>
|
2024-01-24 16:47:49
| 0
| 1,370
|
mockash
|
77,874,757
| 10,452,700
|
How can I down-sample periodic time-series data without losing the general pattern, for characterization analysis?
|
<p>I'm experimenting with the characterization of data over time after downsampling. After studying this <a href="https://stackoverflow.com/q/18060619/10452700">post</a>, I generated some synthesized data with certain periodic patterns at a granularity of 5 minutes (data generated at 5-minute intervals). This means that for each hour I generate 12 observations, and at the end of the day (24 hrs) I have 12*24 = 288 observations/data points over time. I generated the PWM data pattern with a certain period. I could generate other patterns, e.g. <em>Positive Triangle Pulse</em>, <em>Positive Pulses (Rectangles)</em>, <em>Impulses</em>, etc., but I picked PWM for a better understanding of changes in behavior.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import scipy.signal as signal
import matplotlib.pyplot as plt
# Generate periodic data : PWM Modulated Sinusoidal Signal
# Set the duty cycle percentage for PWM
percent = 40.0
# Set the time period for one cycle of the PWM signal
TimePeriod = 6.0
# Set the desired number of samples
desired_samples = 278
# Calculate the time step (dt) to achieve the desired number of samples
# 30 is the original number of cycles in the provided code
dt = TimePeriod / (desired_samples / 30)
# Calculate the number of cycles needed to achieve the desired number of samples
Cycles = int(desired_samples * dt / TimePeriod) + 1
# Create a time array
t = np.arange(0, Cycles * TimePeriod, dt)
# Generate a PWM signal
pwm = (t % TimePeriod) < (TimePeriod * percent / 100)
# Create a sinusoidal signal
x = np.linspace(-10, 10, len(pwm))
y_pwm = np.sin(x)
# Zero out the sinusoidal signal where PWM is zero
y_pwm[pwm == 0] = 0
# Convert data to a Pandas DataFrame
data = {'datetime': t, "PWM": y_pwm}  # use the time array t defined above
df = pd.DataFrame(data)
df.shape #(288, 2)
</code></pre>
<p>So now I have a univariate time series including a timestamp column <code>datetime</code> and some periodic values in the form of a <code>PWM</code> signal.</p>
<p>Before downsampling, I made sure the datetime column was properly typed by using:</p>
<ul>
<li><code>df.datetime = pd.to_timedelta(df.datetime, unit='T')</code> <a href="https://stackoverflow.com/a/52886911/10452700">ref</a></li>
<li><code>df['datetime'] = pd.to_datetime(df['datetime'])</code> <a href="https://stackoverflow.com/q/74302888/10452700">ref</a></li>
</ul>
<p>Then I applied <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.resample.html" rel="nofollow noreferrer"><code>resample()</code></a> to downsample <em>5mins to 1hour</em> like this post.</p>
<pre class="lang-py prettyprint-override"><code>resampled_df = (df.set_index('datetime') # Conform data by setting a datetime column as dataframe index needed for resample
.resample('1H') # resample with frequency of 1 hour
.mean() # used mean() to aggregate
.interpolate() # filling NaNs and missing values [just in case]
)
resampled_df.shape # (24, 1)
</code></pre>
<p>Another way, inspired by <a href="https://stackoverflow.com/a/71253512/10452700">this answer</a>:</p>
<pre class="lang-py prettyprint-override"><code>resampled_df2 = (df.set_index('datetime') # Conform data by setting a datetime column as dataframe index needed for resample
.groupby([pd.Grouper(freq='1H')]) # resample with frequency of 1 hour
.mean() # used mean() to aggregate
)
resampled_df2.shape # (24, 1)
</code></pre>
<p>I used the <code>mean()</code> method because I think the average of the 12 observations within each hour is a good representative of the behavior and has less negative impact on <em>Periodic Patterns/Behavior Identification</em>.</p>
<p>Now I want to demonstrate raw periodic data and resampled version:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import pandas as pd
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(15, 4))
# PWM
axes[0].plot( df['datetime'], df['PWM'], color='blue')
axes[0].scatter(df['datetime'], df['PWM'], color='blue', marker='o', s=10)
axes[0].set_title(f'PWM incl. {len(df)} observations')
# Resample of PWM
axes[1].plot( resampled_df.index, resampled_df['PWM'], color='blue')
axes[1].scatter(resampled_df.index, resampled_df['PWM'], color='blue', marker='o', s=10)
axes[1].set_title(f'PWM (resampled frequency=1H) incl. {len(resampled_df)} observations')
for ax in axes:
    ax.tick_params(axis='x', rotation=90)  # selected_ticks was undefined; just rotate the labels
plt.show()
</code></pre>
<hr />
<p>Output:
<img src="https://i.imgur.com/fqMuXxF.png" alt="img" /></p>
<p>My objective is to <strong>detect periodic/cyclic behavior</strong> for further <em>characterization of data over time</em>, but potential (nearly) periodic patterns are sometimes only visible at certain data resolutions, and one also needs to downsample because of the volume of (big) data. So I'm looking for best practices to downsample while still being able to detect periodic/cyclic behavior or potentially (nearly) periodic patterns.</p>
<p>I found the approach <a href="https://stackoverflow.com/q/43393945/10452700">Averaging periodic data with a variable sampling rate</a>, but I'm not sure it fits my problem.</p>
<p>The closest workaround I have found so far is <a href="https://stackoverflow.com/q/71876632/10452700"><em>How to decompose multiple periodicities present in the data without specifying the period?</em></a>. Another thing: people often propose FFT to look for patterns in the frequency domain instead of the time domain, but there are discussions which state:</p>
<blockquote>
<p><em>The FFT is the <strong>wrong</strong> tool to use for finding the periodicity. ...</em> <a href="https://stackoverflow.com/a/4242450/10452700">ref</a></p>
</blockquote>
<p>If FFT or DFT approaches are useful, what is the best practice to set the sampling frequency for the PWM I have generated, given <code>sample_freq=n_samples/(tmax-tmin)</code>, which should reflect periodicity via <code>sample_period=1/sample_freq</code> <a href="https://stackoverflow.com/a/66863541/10452700">ref</a>?</p>
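<p>For reference, here is a hedged sketch of my own (not from any of the linked answers) of estimating the dominant period via autocorrelation rather than a raw FFT peak; <code>estimate_period</code> is a made-up helper and the peak-picking heuristic is deliberately simple.</p>

```python
import numpy as np

def estimate_period(y, dt=1.0):
    """Rough period estimate of a uniformly sampled signal via autocorrelation."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    # one-sided autocorrelation for lags 0 .. N-1
    acf = np.correlate(y, y, mode='full')[len(y) - 1:]
    d = np.diff(acf)
    start = np.argmax(d > 0)          # skip the initial decline after lag 0
    lag = start + np.argmax(acf[start:])  # first strong peak approximates the period
    return lag * dt

t = np.arange(0, 60, 0.25)            # sampling step of 0.25 time units
y = np.sin(2 * np.pi * t / 6.0)       # true period = 6
print(estimate_period(y, dt=0.25))    # ~6.0, the true period
```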
<p>Please note that I'm not interested in this solution such as this <a href="https://stackoverflow.com/a/69383550/10452700">answer</a> using <code>.iloc</code> because I believe <em>(de-)selecting by position</em> would ruin the periodic patterns of data.</p>
<p>I'm also aware of some facts, like:</p>
<blockquote>
<ol>
<li><em>"Downsampling always means a loss of information, which is why in general downsampling is preferably avoided."</em> <a href="https://stackoverflow.com/a/61713037/10452700">ref.</a>
following picture from my own experiments for the showcase:
<img src="https://i.imgur.com/y293YFA.png" alt="img" /></li>
<li><em>"...down sample how much of data observations..."</em> <a href="https://stackoverflow.com/a/53660461/10452700">ref.</a></li>
</ol>
</blockquote>
<hr />
<p>Time-series:</p>
<ul>
<li><a href="https://stackoverflow.com/q/73221445/10452700">Downsampling time series data in pandas</a></li>
<li><a href="https://stackoverflow.com/q/72167661/10452700">Downsample non timeseries pandas dataframe</a></li>
<li><a href="https://stackoverflow.com/q/74302888/10452700">How to convert data into time series</a></li>
<li><a href="https://stackoverflow.com/a/19915420/10452700">Convert Pandas dataframe to time series</a></li>
<li><a href="https://stackoverflow.com/a/56049200/10452700">Resample DataFrame at certain time intervals</a></li>
</ul>
<p>Signal processing:</p>
<ul>
<li><p><a href="https://stackoverflow.com/a/66863541/10452700">Signal Processing: Sample rate vs sample period</a></p>
</li>
<li><p><a href="https://stackoverflow.com/a/8150168/10452700">Finding the sample at the beginning of a period of a compound periodic signal</a></p>
</li>
<li><p><a href="https://stackoverflow.com/a/22509639/10452700">how to find out the period of a signal based on its autocorrelation</a></p>
</li>
<li><p><a href="https://stackoverflow.com/a/69919920/10452700">Periodic Patterns Identification in R</a></p>
</li>
</ul>
|
<python><time-series><fft><resampling><periodic-processing>
|
2024-01-24 16:39:36
| 0
| 2,056
|
Mario
|
77,874,385
| 19,130,803
|
Simplifying long imports into short ones
|
<p>I am working on python application with following structure.</p>
<pre><code>proj/
- __init__.py
- pkg_one/
- __init__.py
- module_a.py
- module_b.py
- pkg_two/
- __init__.py
- module_c.py
- sub_pkg_two/
- __init__.py
- module_d.py
- class D
- module_e.py
- class E
- foo.py
</code></pre>
<p>Currently, to use these classes in <code>foo.py</code> or in <code>pkg_one</code>, I access them like this:</p>
<pre><code>from proj.pkg_two.sub_pkg_two.module_d import D
from proj.pkg_two.sub_pkg_two.module_e import E
</code></pre>
<p>I am trying to make the imports simpler, like this:</p>
<pre><code>from proj.sub_pkg_two import D
from proj.sub_pkg_two import E
</code></pre>
<p>I tried editing <code>__init__.py</code> at various levels, setting different possible paths, but none of it worked.
How can I achieve this?</p>
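<p>For reference, one hedged sketch of how this can be done: re-export <code>D</code> and <code>E</code> from <code>sub_pkg_two/__init__.py</code> and register an alias in <code>sys.modules</code> from <code>proj/__init__.py</code>. The script below builds a throwaway copy of the package in a temp directory purely to demonstrate the idea; in a real project you would just edit the two <code>__init__.py</code> files shown in the strings.</p>

```python
import os
import sys
import tempfile
import textwrap

# File contents for the two __init__.py files that do the actual work:
files = {
    "proj/__init__.py": textwrap.dedent("""\
        import sys
        from proj.pkg_two import sub_pkg_two
        # alias so 'from proj.sub_pkg_two import D' resolves as a module path
        sys.modules['proj.sub_pkg_two'] = sub_pkg_two
    """),
    "proj/pkg_two/__init__.py": "",
    "proj/pkg_two/sub_pkg_two/__init__.py": textwrap.dedent("""\
        from proj.pkg_two.sub_pkg_two.module_d import D
        from proj.pkg_two.sub_pkg_two.module_e import E
    """),
    "proj/pkg_two/sub_pkg_two/module_d.py": "class D:\n    pass\n",
    "proj/pkg_two/sub_pkg_two/module_e.py": "class E:\n    pass\n",
}

root = tempfile.mkdtemp()
for rel, src in files.items():
    path = os.path.join(root, rel)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        f.write(src)

sys.path.insert(0, root)
from proj.sub_pkg_two import D, E  # the shortened import now works
print(D.__name__, E.__name__)  # D E
```

The <code>sys.modules</code> alias is what lets the dotted path <code>proj.sub_pkg_two</code> resolve even though no such directory exists.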
|
<python>
|
2024-01-24 15:47:55
| 1
| 962
|
winter
|
77,874,240
| 1,488,821
|
Is there a way to block on applying tasks in Python ThreadPool if all the threads in the pool are busy?
|
<p>I'm consuming a queue and need to pass the data to a thread pool for processing, but I don't want to consume more items than the thread pool can handle. In other words, I want the thread pool to regulate the rate at which the queue is consumed.</p>
<p>With Python's default <code>ThreadPool</code>, the <code>apply_async()</code> function returns immediately and apparently the task queue for the thread pool can grow unbounded - exactly what I want to avoid. The <code>ThreadPoolExecutor</code>'s <code>submit()</code> function seems to do the same.</p>
<p>Is there a way to do it with the standard library or a third party module, or do I have to do it with regular threads pulling records from the queue?</p>
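<p>For reference, one common workaround (a hedged sketch under my own names, not a standard-library feature): wrap <code>ThreadPoolExecutor.submit()</code> with a <code>BoundedSemaphore</code> so that <code>submit()</code> blocks once a fixed number of tasks are in flight.</p>

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class BoundedExecutor:
    """Sketch: submit() blocks once max_workers + max_queue tasks are in flight."""
    def __init__(self, max_workers: int, max_queue: int = 0):
        self._sem = threading.BoundedSemaphore(max_workers + max_queue)
        self._pool = ThreadPoolExecutor(max_workers=max_workers)

    def submit(self, fn, *args, **kwargs):
        self._sem.acquire()                     # block instead of growing the queue
        try:
            future = self._pool.submit(fn, *args, **kwargs)
        except Exception:
            self._sem.release()
            raise
        future.add_done_callback(lambda _: self._sem.release())
        return future

    def shutdown(self, wait: bool = True):
        self._pool.shutdown(wait=wait)

pool = BoundedExecutor(max_workers=2)
futures = [pool.submit(lambda x: x * x, i) for i in range(10)]
print(sorted(f.result() for f in futures))  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
pool.shutdown()
```

Because the semaphore is released in a done-callback, the queue-consuming loop is throttled to the pool's actual throughput.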
|
<python><python-multithreading>
|
2024-01-24 15:25:29
| 1
| 2,030
|
Ivan Voras
|
77,874,031
| 8,737,016
|
Efficient access of nested dictionaries from a list of keys
|
<p>I would like to be able to access a nested dictionary from a list of keys.
For example, if</p>
<pre class="lang-py prettyprint-override"><code>d = {
'a': {
'b': {
'c': 1
},
'd': 2
}
}
</code></pre>
<p>To access <code>c</code> I need the keys <code>'a'</code>, <code>'b'</code>, and <code>'c'</code> and I can do it easily as <code>d['a']['b']['c']</code>. However, the value for key <code>'d'</code> is one level up w.r.t. <code>'c'</code> so I need one pair of square brackets <code>[]</code> less.</p>
<p>Of course, I can write a function</p>
<pre class="lang-py prettyprint-override"><code>from typing import Dict, List

def access(d: Dict, keys: List[str]):
    cur_dict = d
    for key in keys:
        cur_dict = cur_dict[key]
    return cur_dict
</code></pre>
<p>But I was wondering if there is any built-in method or library that does it more efficiently.</p>
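<p>For completeness: the standard library can express the same loop as a one-liner with <code>functools.reduce</code> and <code>operator.getitem</code> (as far as I know there is no dedicated built-in for this):</p>

```python
from functools import reduce
from operator import getitem

d = {'a': {'b': {'c': 1}, 'd': 2}}

def access(d, keys):
    # successively apply d[key] for each key in the list
    return reduce(getitem, keys, d)

print(access(d, ['a', 'b', 'c']))  # 1
print(access(d, ['a', 'd']))       # 2
```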
|
<python>
|
2024-01-24 14:57:32
| 0
| 2,245
|
Federico Taschin
|
77,873,841
| 2,182,636
|
Using Playwright to Find and Click on Button with Dynamic CSS (Scraping TikTok)
|
<p>I am working on an OSS scraper for various social media platforms. I have a small issue with TikTok. I can successfully scrape the profile and get metadata. However, I also want to pull back metadata about videos associated with the profile. The video information is contained in an XHR call.</p>
<p>However, after loading the page, a login modal appears. I've found that if I click on the <code>Continue as guest</code> button, the modal disappears and the XHR request is executed. To make things difficult, TikTok uses a generated CSS style for the button.</p>
<p>I've gotten this to work: <code>page.click('.css-dcgpa6-DivBoxContainer');</code> However, the identifier changes every few minutes.</p>
<p>So my question is, is there a way to:</p>
<ol>
<li>Find the button's css class by using the fact that it contains known text? And then:</li>
<li>Click on this button using playwright?</li>
</ol>
<p>Here is my code:</p>
<pre class="lang-py prettyprint-override"><code> def collect(self, username: str) -> dict:
_xhr_calls = []
final_url = f"{TIKTOK_BASE_URL}{username}"
def intercept_response(response):
"""Capture all background requests and save them."""
# We can extract details from background requests
if response.request.resource_type == "xhr":
logging.debug(f"Appending {response.request.url}")
_xhr_calls.append(response)
return response
with sync_playwright() as pw_firefox:
browser = pw_firefox.firefox.launch(headless=True, timeout=self.timeout)
context = browser.new_context(viewport={"width": 1920, "height": 1080},
strict_selectors=False)
page = context.new_page()
# Block cruft
page.route("**/*", AsyncUtils.intercept_route)
# Enable background request intercepting:
page.on("response", intercept_response)
# Navigate to the profile page
page.goto(final_url, referer=final_url)
page.wait_for_timeout(1500)
# Get the page content
html = page.content()
# Parse it.
soup = BeautifulSoup(html, 'html.parser')
# The user info is contained in a large JS object called __UNIVERSAL_DATA_FOR_REHYDRATION__.
tt_script = soup.find('script', attrs={'id': "__UNIVERSAL_DATA_FOR_REHYDRATION__"})
try:
raw_json = json.loads(tt_script.string)
except AttributeError as exc:
raise JSONDecodeError(
f"ScrapeOMatic was unable to parse the data from TikTok user {username}. Please try again.\n {exc}") from exc
user_data = raw_json['__DEFAULT_SCOPE__']['webapp.user-detail']['userInfo']['user']
stats_data = raw_json['__DEFAULT_SCOPE__']['webapp.user-detail']['userInfo']['stats']
"""
button = page.get_by_text('p:has-text("Continue as guest")')
guest_button = page.locator(selector="div", has=button)
if guest_button is not None:
logging.debug("Clicking button.")
guest_button.click(no_wait_after=True)
# page.click('.css-dcgpa6-DivBoxContainer');
# page.click('.emuynwa3');
# page.wait_for_timeout(500)
# page.keyboard.press("PageDown")
# page.wait_for_timeout(500)
# page.keyboard.press("PageDown")
"""
data_calls = [f for f in _xhr_calls if "list" in f.url]
for call in data_calls:
logging.debug(call.json())
profile_data = {
'sec_id': user_data['secUid'],
'id': user_data['id'],
'is_secret': user_data['secret'],
'username': user_data['uniqueId'],
'bio': emoji.demojize(user_data['signature'], delimiters=("", "")),
'avatar_image': user_data['avatarMedium'],
'following': stats_data['followingCount'],
'followers': stats_data['followerCount'],
'language': user_data['language'],
'nickname': emoji.demojize(user_data['nickname'], delimiters=("", "")),
'hearts': stats_data['heart'],
'region': user_data['region'],
'verified': user_data['verified'],
'heart_count': stats_data['heartCount'],
'video_count': stats_data['videoCount'],
'is_verified': user_data['verified'],
# 'videos': videos,
# 'hashtags': self.hashtags
}
return profile_data
</code></pre>
<p>Any help would be greatly appreciated. Also here is a link to the GitHub repo: <a href="https://github.com/geniza-ai/scrapeomatic" rel="nofollow noreferrer">https://github.com/geniza-ai/scrapeomatic</a></p>
<p>Thanks!!</p>
|
<python><playwright><playback><playwright-python><tiktok>
|
2024-01-24 14:31:18
| 1
| 586
|
cgivre
|
77,873,836
| 12,560,539
|
use typing.Dict or dict in the Python
|
<p>In Python, I have seen information saying that <code>typing.Dict</code> is "Deprecated since version 3.9".</p>
<p>I am confused: which should I choose since Python 3.9, <code>dict</code> or <code>typing.Dict</code>?</p>
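<p>For context, a small sketch of the post-3.9 style (PEP 585): the builtin <code>dict</code> can be subscripted directly in annotations, so <code>typing.Dict</code> is only needed when supporting older interpreters.</p>

```python
# Python >= 3.9: builtin generics work in annotations (PEP 585)
def count_words(text: str) -> dict[str, int]:
    counts: dict[str, int] = {}
    for word in text.split():
        counts[word] = counts.get(word, 0) + 1
    return counts

print(count_words("a b a"))  # {'a': 2, 'b': 1}
```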
|
<python>
|
2024-01-24 14:30:36
| 2
| 405
|
Joe
|
77,873,508
| 22,221,987
|
How to include logging.exception in the logfile, but exclude it from stdout, without defining a custom logging level
|
<p>I have a custom logger with the default levels. I want to show the <code>logging.exception</code> "level" in the logfile, but exclude it from stdout (I want to keep only short error messages in stdout). Importantly, I don't want to declare a custom level, because I want to leave some customisation space for the user (e.g. the ability to switch off my logging).</p>
<p>So this is the code:</p>
<pre><code>import logging
import sys
import os
from pathlib import Path
from dataclasses import fields
from API.global_constants.dataclasses import LoggerTemplate
from API.global_constants.vars import LOGGER_NAME
def set_logger(**kwargs):
logger_template = LoggerTemplate()
for key, value in kwargs.items():
if key in [field.name for field in fields(logger_template)]:
setattr(logger_template, key, value)
if not isinstance(logger_template.logger, logging.Logger):
logger_template.logger = logging.getLogger(LOGGER_NAME)
else: # means logger has been overriden by user
return logger_template.logger
logger = logger_template.logger
if not logger_template.enable_logger:
logger.setLevel(logging.CRITICAL + 1)
else:
logger.setLevel(logging.DEBUG)
std_formatter = Formatter()
std_handler = logging.StreamHandler(sys.stdout)
std_handler.setLevel(logger_template.log_std_level)
std_handler.setFormatter(std_formatter)
logger.addHandler(std_handler)
if not logger_template.enable_logfile:
logger.info(f'Logging started without file.')
else:
file_formatter = logging.Formatter(f"[%(asctime)s] %(levelname)-8s [%(filename)s/%(funcName)s:%(lineno)s] %(message)s")
Path.mkdir(Path(logger_template.logfile_path), parents=True, exist_ok=True)
log_file_path = Path(logger_template.logfile_path) / Path(str(logger_template.logfile_name))
file_handler = logging.FileHandler(log_file_path)
file_handler.setLevel(logger_template.logfile_level)
file_handler.setFormatter(file_formatter)
logger.addHandler(file_handler)
logger.info(f'Logging started in the [{log_file_path}].')
return logger
class Formatter(logging.Formatter):
def format(self, record):
color = {
logging.CRITICAL: 31,
logging.ERROR: 31,
logging.FATAL: 31,
logging.WARNING: 33,
logging.DEBUG: 36,
logging.INFO: 32
}.get(record.levelno, 0)
self._style._fmt = f"\033[37m[%(asctime)s]\033[0m \033[{color}m%(levelname)-8s\033[0m %(message)s"
return super().format(record)
if __name__ == '__main__':
logger1 = set_logger(logfile_name='some_user_defined_name')
logger1.debug('test message')
logger1.info('test message')
logger1.warning('test message')
logger1.error('test message')
logger1.critical('test message')
logger1.exception('test message')
</code></pre>
<p>And the dataclass (as a logging config template) from the <code>set_logger</code> function:</p>
<pre><code>import logging
import sys
import os
from dataclasses import dataclass
from typing import Tuple
from datetime import datetime
from pathlib import Path
@dataclass
class LoggerTemplate:
enable_logger: bool = True
enable_logfile: bool = False
logger: logging.Logger = None
logfile_path: str = Path(f'logfiles_[{os.path.basename(sys.argv[0])}]')
logfile_name: str = Path(f"{datetime.now().strftime('__%d.%m.%Y__%H.%M.%S')}__.log")
logfile_level: int = logging.INFO
log_std_level: int = logging.DEBUG
</code></pre>
<p>So, how can I write logging messages, including <code>logging.exception</code> messages, to the logfile, but exclude the <code>logging.exception</code> tracebacks from stdout, replacing them with short <code>logging.error</code>-style variants?</p>
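<p>One possible direction (my own hedged sketch, not necessarily the only way): give the stdout handler a formatter that suppresses the traceback, so <code>logging.exception</code> records still reach both handlers but stdout only shows the one-line message. <code>StringIO</code> buffers stand in for stdout and the logfile here.</p>

```python
import io
import logging

class NoTracebackFormatter(logging.Formatter):
    """Formatter that drops the traceback, leaving only the message line."""
    def formatException(self, exc_info):
        return ""
    def format(self, record):
        record.exc_text = ""  # discard traceback text cached by other formatters
        return super().format(record)

logger = logging.getLogger("demo")
logger.setLevel(logging.DEBUG)
logger.propagate = False

stdout_buf, file_buf = io.StringIO(), io.StringIO()

short = logging.StreamHandler(stdout_buf)          # stands in for sys.stdout
short.setFormatter(NoTracebackFormatter("%(levelname)s %(message)s"))

full = logging.StreamHandler(file_buf)             # stands in for the FileHandler
full.setFormatter(logging.Formatter("%(levelname)s %(message)s"))

logger.addHandler(short)
logger.addHandler(full)

try:
    1 / 0
except ZeroDivisionError:
    logger.exception("test message")

print("Traceback" in stdout_buf.getvalue())  # False
print("Traceback" in file_buf.getvalue())    # True
```

Resetting <code>record.exc_text</code> matters because <code>logging.Formatter</code> caches the rendered traceback on the record, which the other handler would otherwise reuse.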
|
<python><python-3.x><logging>
|
2024-01-24 13:40:23
| 1
| 309
|
Mika
|
77,873,455
| 14,586,843
|
model.fit throws the error "Expected tensor of type int64 but got type float" at a random epoch
|
<p>I am trying to fit this model with tensorflow :</p>
<pre class="lang-py prettyprint-override"><code>from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Input, Dense, Dropout, BatchNormalization
from tensorflow.keras.optimizers import Adam
import tensorflow.keras.regularizers as regularizers
from tensorflow.keras.callbacks import ModelCheckpoint
regularizer = regularizers.l2(REGULARIZATION_RATE)
model = Sequential(
[
Input(shape=[input_shape]),
Dense(64, activation="relu", kernel_regularizer=regularizer),
Dense(64, activation="relu", kernel_regularizer=regularizer),
Dense(32, activation="relu", kernel_regularizer=regularizer),
Dense(units=output_shape, activation="softmax"),
]
)
optimizer = Adam(learning_rate=LEARNING_RATE)
checkpointer_callback = ModelCheckpoint(
filepath=f"models/{model_name}/{model_name}.hdf5",
monitor="val_loss",
verbose=False,
save_best_only=True,
)
model.summary()
model.compile(
optimizer=optimizer, loss="categorical_crossentropy", metrics=["accuracy"]
)
</code></pre>
<p>But when I train my model like so :</p>
<pre class="lang-py prettyprint-override"><code>history = model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
epochs=NUM_EPOCHS,
validation_split=VALIDATION_SPLIT,
verbose=VERBOSE,
callbacks=[
checkpointer_callback,
csv_logger_callback,
early_stopping_callback,
tensorboard_callback,
],
)
</code></pre>
<p>I get the following error at a random epoch at the line of my <code>model.fit</code>:</p>
<p><code>Expected tensor of type int64 but got type float [[{{node Equal}}]] [Op:__inference_train_function_1276164]</code></p>
<p>Full error :</p>
<pre><code>---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
Cell In[47], line 9
5 print(f"Training model {model_name}...\n")
6 start_time = time.time()
----> 9 history = model.fit(
10 x_train,
11 y_train,
12 batch_size=BATCH_SIZE,
13 epochs=NUM_EPOCHS,
14 validation_split=VALIDATION_SPLIT,
15 verbose=VERBOSE,
16 callbacks=[
17 checkpointer_callback,
18 csv_logger_callback,
19 early_stopping_callback,
20 tensorboard_callback,
21 ],
22 )
25 training_duration = get_timestamp(time.time() - start_time)
27 print("\n-------------\n")
File ~/miniconda3/envs/kinected/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py:70, in filter_traceback.<locals>.error_handler(*args, **kwargs)
67 filtered_tb = _process_traceback_frames(e.__traceback__)
68 # To get the full stack trace, call:
69 # `tf.debugging.disable_traceback_filtering()`
---> 70 raise e.with_traceback(filtered_tb) from None
71 finally:
72 del filtered_tb
File ~/miniconda3/envs/kinected/lib/python3.11/site-packages/tensorflow/python/eager/execute.py:53, in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
51 try:
52 ctx.ensure_initialized()
---> 53 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
54 inputs, attrs, num_outputs)
55 except core._NotOkStatusException as e:
56 if name is not None:
InvalidArgumentError: Graph execution error:
Detected at node Equal defined at (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/ipykernel_launcher.py", line 17, in <module>
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/traitlets/config/application.py", line 1077, in launch_instance
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/ipykernel/kernelapp.py", line 739, in start
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/tornado/platform/asyncio.py", line 195, in start
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/asyncio/base_events.py", line 607, in run_forever
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/asyncio/base_events.py", line 1922, in _run_once
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/asyncio/events.py", line 80, in _run
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/ipykernel/kernelbase.py", line 529, in dispatch_queue
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/ipykernel/kernelbase.py", line 518, in process_one
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/ipykernel/kernelbase.py", line 424, in dispatch_shell
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/ipykernel/kernelbase.py", line 766, in execute_request
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/ipykernel/ipkernel.py", line 429, in do_execute
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/ipykernel/zmqshell.py", line 549, in run_cell
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 3048, in run_cell
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 3103, in _run_cell
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/IPython/core/async_helpers.py", line 129, in _pseudo_sync_runner
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 3308, in run_cell_async
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 3490, in run_ast_nodes
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/IPython/core/interactiveshell.py", line 3550, in run_code
File "/var/folders/85/04dxvd3x7wsbz88m2jf0z4000000gn/T/ipykernel_6585/1245095393.py", line 9, in <module>
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py", line 65, in error_handler
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/keras/src/engine/training.py", line 1807, in fit
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/keras/src/engine/training.py", line 1401, in train_function
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/keras/src/engine/training.py", line 1384, in step_function
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/keras/src/engine/training.py", line 1373, in run_step
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/keras/src/engine/training.py", line 1155, in train_step
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/keras/src/engine/training.py", line 1249, in compute_metrics
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/keras/src/engine/compile_utils.py", line 620, in update_state
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/keras/src/utils/metrics_utils.py", line 77, in decorated
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/keras/src/metrics/base_metric.py", line 140, in update_state_fn
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/keras/src/metrics/base_metric.py", line 723, in update_state
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/keras/src/metrics/accuracy_metrics.py", line 426, in categorical_accuracy
File "/Users/louislecouturier/miniconda3/envs/kinected/lib/python3.11/site-packages/keras/src/utils/metrics_utils.py", line 969, in sparse_categorical_matches
Expected tensor of type int64 but got type float
[[{{node Equal}}]] [Op:__inference_train_function_1276164]
</code></pre>
<p>I am using an M1 MacBook Pro.</p>
<p>Python <code>v3.11</code>
Tensorflow <code>v2.15.0</code></p>
<p>I don't know how to interpret this error... My inputs are all <code>float32</code> and my labels are one-hot encoded...</p>
<p>What am I doing wrong? Why is it expecting an <code>int64</code>?</p>
|
<python><python-3.x><tensorflow><keras>
|
2024-01-24 13:33:11
| 0
| 428
|
Louis Lecouturier
|
77,873,418
| 22,937,009
|
Efficient calculation of all permutations mapping a vector into another in Python?
|
<p>Given two vectors, I would like to calculate (in Python) all permutations (as vectors of coordinates) which map the first vector into the second. The vectors are given as <code>numpy</code> arrays of the same length, I call them <code>f_arr</code> (the source vector mapping from) and <code>t_arr</code> (the target vector mapping too). So I am looking for permutations <code>perm</code> of the index vector <code>list(range(len(f_arr)))</code> for which <code>f_arr[perm]</code> becomes equal to <code>t_arr</code>. It is important that the vectors can have repeated elements.</p>
<p>It is also important that I do not want to generate all the permutations. For example, the answers in this post do not work for me:
<a href="https://stackoverflow.com/questions/104420/how-do-i-generate-all-permutations-of-a-list">How do I generate all permutations of a list?</a></p>
<p>I have the following inefficient code. What I am looking for is an efficient backtracking algorithm, preferably implemented in an optimized Python library, which can use something like the <code>positions</code> vector below and generate <strong>only</strong> the valid permutations which <strong>map</strong> <code>f_arr</code> to <code>t_arr</code>.</p>
<pre><code>import itertools
import numpy as np

f_arr = np.array([1,2,3,4,3,4], dtype=np.uint8) # vector mapping from
t_arr = np.array([3,1,4,3,4,2], dtype=np.uint8) # vector mapping to
positions = [np.where(f_arr == a)[0] for a in t_arr]
for perm in itertools.product(*positions):
if len(perm) == len(set(perm)):
print(f'{perm} -> {f_arr[list(perm)]}')
else: # this branch is only for demonstration
print(f'Not a permutation: {perm}')
</code></pre>
<p>which prints:</p>
<pre><code>Not a permutation: (2, 0, 3, 2, 3, 1)
Not a permutation: (2, 0, 3, 2, 5, 1)
Not a permutation: (2, 0, 3, 4, 3, 1)
(2, 0, 3, 4, 5, 1) -> [3 1 4 3 4 2]
Not a permutation: (2, 0, 5, 2, 3, 1)
Not a permutation: (2, 0, 5, 2, 5, 1)
(2, 0, 5, 4, 3, 1) -> [3 1 4 3 4 2]
Not a permutation: (2, 0, 5, 4, 5, 1)
Not a permutation: (4, 0, 3, 2, 3, 1)
(4, 0, 3, 2, 5, 1) -> [3 1 4 3 4 2]
Not a permutation: (4, 0, 3, 4, 3, 1)
Not a permutation: (4, 0, 3, 4, 5, 1)
(4, 0, 5, 2, 3, 1) -> [3 1 4 3 4 2]
Not a permutation: (4, 0, 5, 2, 5, 1)
Not a permutation: (4, 0, 5, 4, 3, 1)
Not a permutation: (4, 0, 5, 4, 5, 1)
</code></pre>
<p>Is there some Python library which can efficiently generate <strong>only</strong> the valid permutations which <strong>map</strong> <code>f_arr</code> to <code>t_arr</code>?</p>
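<p>I'm not aware of an optimized library that does exactly this, but for reference here is a hedged sketch of the backtracking approach described above (<code>mapping_perms</code> is a made-up name); it only explores prefixes that can still be extended to a valid permutation, so no invalid candidates are generated.</p>

```python
import numpy as np

def mapping_perms(f_arr, t_arr):
    """Yield index tuples perm with f_arr[list(perm)] == t_arr (backtracking)."""
    n = len(f_arr)
    # candidate source positions for each target slot
    positions = [np.flatnonzero(f_arr == a) for a in t_arr]
    used = [False] * n
    perm = [0] * n

    def backtrack(i):
        if i == n:
            yield tuple(perm)
            return
        for j in positions[i]:
            if not used[j]:
                used[j] = True
                perm[i] = int(j)
                yield from backtrack(i + 1)
                used[j] = False

    yield from backtrack(0)

f_arr = np.array([1, 2, 3, 4, 3, 4], dtype=np.uint8)
t_arr = np.array([3, 1, 4, 3, 4, 2], dtype=np.uint8)
for p in mapping_perms(f_arr, t_arr):
    print(p)  # only the 4 valid permutations from the question are produced
```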
|
<python><numpy><permutation><combinatorics><enumeration>
|
2024-01-24 13:27:44
| 1
| 432
|
gabalz
|
77,873,084
| 3,975,218
|
Calling constructors of both parents in multiple inheritance in Python (general case)
|
<p>I'm trying to figure out the multiple inheritance in Python, but all articles I find are limited to simple cases. Let's consider the following example:</p>
<pre><code>class Vehicle:
    def __init__(self, name: str) -> None:
        self.name = name
        print(f'Creating a Vehicle: {name}')

    def __del__(self):
        print(f'Deleting a Vehicle: {self.name}')


class Car(Vehicle):
    def __init__(self, name: str, n_wheels: int) -> None:
        super().__init__(name)
        self.wheels = n_wheels
        print(f'Creating a Car: {name}')

    def __del__(self):
        print(f'Deleting a Car: {self.name}')


class Boat(Vehicle):
    def __init__(self, name: str, n_props: int) -> None:
        super().__init__(name)
        self.propellers = n_props
        print(f'Creating a Boat: {name}')

    def __del__(self):
        print(f'Deleting a Boat: {self.name}')


class Amfibii(Car, Boat):
    def __init__(self, name: str, n_wheels: int, n_props: int) -> None:
        Car.__init__(self, name, n_wheels)
        Boat.__init__(self, name, n_props)
        print(f'Creating an Amfibii: {name}')

    def __del__(self):
        print(f'Deleting an Amfibii: {self.name}')


my_vehicle = Amfibii('Mazda', 4, 2)
</code></pre>
<p>I want to understand the order in which constructors and destructors are called, as well as the correct, general use of the <code>super()</code> built-in.
In the example above, I get the following error:</p>
<blockquote>
<p><code>super().__init__(name)</code>
TypeError: <code>Boat.__init__()</code> missing 1 required positional argument: 'n_props'</p>
</blockquote>
<p>How should I correctly call constructors of both parents, which have different sets of constructor arguments?</p>
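For reference, the cooperative pattern usually recommended for this diamond is a sketch like the following: each <code>__init__</code> consumes its own keyword arguments and forwards the rest along the MRO with a single <code>super()</code> call, so every constructor runs exactly once.

```python
class Vehicle:
    def __init__(self, name: str, **kwargs) -> None:
        super().__init__(**kwargs)  # keeps the chain going along the MRO
        self.name = name


class Car(Vehicle):
    def __init__(self, n_wheels: int, **kwargs) -> None:
        super().__init__(**kwargs)
        self.wheels = n_wheels


class Boat(Vehicle):
    def __init__(self, n_props: int, **kwargs) -> None:
        super().__init__(**kwargs)
        self.propellers = n_props


class Amfibii(Car, Boat):
    def __init__(self, name: str, n_wheels: int, n_props: int) -> None:
        # one super() call walks the whole MRO: Car -> Boat -> Vehicle
        super().__init__(name=name, n_wheels=n_wheels, n_props=n_props)


my_vehicle = Amfibii('Mazda', 4, 2)
print([c.__name__ for c in Amfibii.__mro__])
```

Calling `Car.__init__` and `Boat.__init__` explicitly, as in the question, runs `Vehicle.__init__` twice (once per branch); the cooperative form above avoids that, at the cost of every class having to accept and forward `**kwargs`.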
|
<python><inheritance><super><diamond-problem>
|
2024-01-24 12:32:00
| 2
| 740
|
Karol Borkowski
|
77,873,027
| 194,000
|
Django: Basic auth on only certain URL paths?
|
<p>I would like to password-protect only certain parts of my Django app with basic-auth. I'd like to protect all URLs except anything under <code>/api</code>.</p>
<p>I'm trying to use the <a href="https://pypi.org/project/django-basicauth/" rel="nofollow noreferrer">django-basicauth package</a> to do this.</p>
<p>I've configured it as follows. My app has three parts:</p>
<pre><code>/api
/candc
/places
</code></pre>
<p>In <code>candc/localsettings.py</code> I've added:</p>
<pre><code>BASICAUTH_USERS = {
"myuser": "mypass"
}
</code></pre>
<p>The <code>candc/urls.py</code> file looks like this:</p>
<pre><code>urlpatterns = [
path('', include('places.urls')),
path('api/1.0/', include('api.urls')),
]
</code></pre>
<p>Then in my <code>places/views.py</code> file, I've added decorators to the URLs I want to protect, like this:</p>
<pre><code>from basicauth.decorators import basic_auth_required

@basic_auth_required
def index(request):
    template = loader.get_template('index.html')
    return HttpResponse(template.render({}, request))
</code></pre>
<p>However, my app is asking for basic-auth protect on URLs under <code>/api</code> as well. (In fact it's not even showing the dialog, just returning 403s for any requests.)</p>
<p>The <code>api</code> app is using django-rest-framework, which I suspect may somehow be related to this problem.</p>
<p>How can I configure this so URLs under <code>/api</code> are not password-protected, but everything else is?</p>
|
<python><django><basic-authentication>
|
2024-01-24 12:22:51
| 3
| 66,078
|
Richard
|
77,872,975
| 7,344,609
|
How to make use of Conditional expression of Pynamodb for graphQl?
|
<p>I am using PynamoDB for querying DynamoDB tables. It's a basic table with user_id as PK and name as SK. I have also created an LSI with user_id as PK and unit as SK.</p>
<p>To make a query based on user_id and name, we can write the query in the following way.</p>
<pre><code>result = userModel.query(user_id, name)
print(result)
</code></pre>
<p>The above code is working fine. Now let's say I need to query the table based on user_id and unit (multiple values). PynamoDB has conditional expressions, and I am trying to use the "is_in" method. Below is the query. Here "unit_index" is the LSI and "unit" is the attribute name.</p>
<pre><code>result = userModel.unit_index.query(
    user_id,
    userModel.unit.is_in(1, 2)
)
</code></pre>
<p>The above query is not working; I am getting the following error.</p>
<pre><code>Failed to query items: An error occurred (ValidationException) on request (525267c5-b4fb-46a3-a8a9-008af5b234a6) on table (user_table) when calling the Query operation: Invalid operator used in KeyConditionExpression: IN
</code></pre>
<p>Could you let me know the best way to write this query without using a for loop?</p>
|
<python><amazon-dynamodb><pynamodb>
|
2024-01-24 12:15:32
| 1
| 561
|
Darshan theerth
|
77,872,944
| 19,694,624
|
discord embed freezes on 2nd execution
|
<p>I have a problem with a slash command, with an embed actually.</p>
<p>When I run the slash command, I get an embed with a button labelled "xN" (where N is a number from 1 to 99). The embed gets edited every 2 seconds, and the button's label and id increase by 1. If I click the button, the callback function is called, some logic is executed there, and the main function stops.
It all works as intended until the 2nd run and every run after that: on the 2nd run the embed just freezes, nothing gets edited, the button does not change, and clicking the button does nothing.</p>
<p>Here is the screenshot:</p>
<p><a href="https://i.sstatic.net/wzfEV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wzfEV.png" alt="enter image description here" /></a></p>
<p>On this screenshot the first /test run worked properly, embed was being edited every two seconds, button's label and id were increasing by 1 and when I clicked the button it edited the embed with "stop function" message from callback function and ended the function (maybe the function wasn't ended tho ??), but when I run it 2nd time, it just freezes at "x1", and when I click it nothing happens.</p>
<p>Here is the code to replicate the problem:</p>
<p><em><strong>test.py</strong></em></p>
<pre><code>import asyncio
from discord.ui import Button, View
import discord
from discord.commands import slash_command
from discord.ext import commands


class Test(commands.Cog):
    def __init__(self, bot):
        self.bot = bot
        self.stop_loop = False

    @slash_command(name='test', description='')
    async def run(self, ctx):
        async def button_callback(interaction):
            self.stop_loop = True
            embed = discord.Embed(
                description=f"stop function",
            )
            await interaction.response.edit_message(embed=embed, view=None)
            return

        button = Button(custom_id=f"1", label=f'x1', style=discord.ButtonStyle.blurple)
        button.callback = button_callback
        my_view = View()
        my_view.add_item(button)
        embed = discord.Embed(
            title="some title",
        )
        sent_embed = await ctx.respond(embed=embed, view=my_view, ephemeral=True)
        await asyncio.sleep(2)

        for i in range(2, 100):
            if self.stop_loop:
                return
            button = Button(custom_id=f"{i}", label=f'x{i}', style=discord.ButtonStyle.blurple)
            button.callback = button_callback
            my_view = View()
            my_view.add_item(button)
            embed = discord.Embed(
                title="some title",
            )
            await sent_embed.edit_original_response(embed=embed, view=my_view)
            await asyncio.sleep(2)


def setup(bot):
    bot.add_cog(Test(bot))
</code></pre>
<p><strong>main.py</strong></p>
<pre><code>from discord.ext import commands
import discord
import os
from tools.config import TOKEN

intents = discord.Intents.default()
intents.members = True

client = commands.Bot(intents=intents, command_prefix="!")

for f in os.listdir("./cogs"):
    if f.endswith(".py"):
        client.load_extension("cogs." + f[:-3])

client.run(TOKEN)
</code></pre>
<p>I am using py-cord==2.4.1 and Python 3.10, with discord cogs.</p>
|
<python><python-3.x><discord><discord.py><pycord>
|
2024-01-24 12:12:01
| 1
| 303
|
syrok
|
77,872,941
| 534,238
|
How can I use `DoOutputsTuple` as either the parameter or the return of a `PTransform` in Apache Beam
|
<p>I have some <code>DoFn</code>s that return multiple output <code>PCollection</code>s, where one represents the "good" data path and the other captures errors to redirect them. Something like the following:</p>
<pre class="lang-py prettyprint-override"><code>output, error = (
    pcol
    | "Fix Timestamps"
    >> ParDo(ConvertTimestamp(), timestamp_field)
    .with_outputs(self.fail_tag, main=self.success_tag)
)
</code></pre>
<p>But I want to simplify and standardize, putting several of the <code>ParDo</code>/<code>DoFns</code> into a single <code>PTransform</code> that can be called directly. Something like the following:</p>
<pre class="lang-py prettyprint-override"><code>class ConvertToBQFriendlyTypes(PTransform):
    def __init__(
        self,
        timestamp_fields: tuple[str, ...],
        fail_tag: str = FAIL_TAG,
        success_tag: str = SUCCESS_TAG,
    ):
        super().__init__()
        self.fail_tag = fail_tag
        self.success_tag = success_tag
        self.timestamp_fields = timestamp_fields

    class _ConvertSingleTimestamp(DoFn):
        def __init__(self, fail_tag: str = FAIL_TAG):
            super().__init__()
            self.fail_tag = fail_tag

        def process(
            self,
            element: dict,
            field_name: str,
        ) -> Iterable[dict[str, Any]] | Iterable[pvalue.TaggedOutput]:
            timestamp_raw = element[field_name]
            if hasattr(timestamp_raw, "to_utc_datetime"):
                timestamp_utc = timestamp_raw.to_utc_datetime(has_tz=True)  # type: ignore
            else:
                timestamp_utc = timestamp_raw
            if hasattr(timestamp_utc, "timestamp"):
                timestamp_utc = datetime.fromtimestamp(round(timestamp_utc.timestamp()))
            if hasattr(timestamp_utc, "strftime"):
                result = timestamp_utc.strftime("%Y-%m-%d %H:%M:%S.%f")  # type: ignore
            elif isinstance(timestamp_utc, str) or timestamp_utc is None:
                result = timestamp_utc
            else:
                result = Failure(  # `Failure` is a simple data class with these attributes:
                    pipeline_step="ConvertToBQFriendlyTypes",
                    element=element,
                    exception=TimestampError(
                        f'Field "{field_name}" has no means to convert time to a '
                        "string, which is needed for writing to BigQuery."
                    ),
                )
            if isinstance(result, Failure):
                yield pvalue.TaggedOutput(self.fail_tag, result)
            else:
                element[field_name] = result
                yield element

    def expand(
        self, pcoll: PCollection[dict[str, Any]] | pvalue.PValue
    ) -> PCollection[dict[str, Any]] | pvalue.PValue:
        for timestamp_field in self.timestamp_fields:
            pcoll = pcoll | f'Convert "{timestamp_field}"' >> ParDo(
                self._ConvertSingleTimestamp(self.fail_tag), timestamp_field
            ).with_outputs(self.fail_tag, main=self.success_tag)
        return pcoll
</code></pre>
<p>But this fails because a <code>PTransform</code>'s <code>expand</code> method can only accept a <code>PCollection</code>, not a <a href="https://beam.apache.org/releases/pydoc/2.30.0/_modules/apache_beam/pvalue.html#DoOutputsTuple" rel="nofollow noreferrer"><code>DoOutputsTuple</code> object</a>, like what is received from any <code>ParDo</code> that returns multiple outputs using <a href="https://beam.apache.org/releases/pydoc/2.30.0/_modules/apache_beam/transforms/core.html#ParDo" rel="nofollow noreferrer"><code>with_outputs</code></a>. Moreover, it seems that a <code>PTransform</code> is only allowed to return a <code>PCollection</code>, not a <code>DoOutputsTuple</code>.</p>
<p>I have tried a few different ways to manage this, either splitting the outputs and managing each separately (in which case, I get either <code>TypeError: cannot unpack non-iterable PCollection object</code> or <code>TypeError: 'PCollection' object is not subscriptable</code>, depending upon how I try to unpack it) or by working directly with the unpacked <code>DoOutputsTuple</code> (in which case, I get the dreaded <code>TypeError: '_InvalidUnpickledPCollection' object is not subscriptable...</code> error).</p>
<hr />
<h3>Does anyone know how I can create a <code>PTransform</code> that both receives a <code>DoOutputsTuple</code> as the first argument to its <code>expand</code> and returns a <code>DoOutputsTuple</code>? If not, does anyone have any better suggestions for how to manage this?</h3>
<p>I am tempted to use the <a href="https://github.com/tosun-si/pasgarde/tree/main" rel="nofollow noreferrer"><code>pasgarde</code> package</a> as it seems elegant, but (a) I don't want to create a dependency upon a lightly adopted open source package, and (b) it boxes me too tightly into only using Map, Flatmap, and Filter.</p>
|
<python><apache-beam>
|
2024-01-24 12:11:15
| 1
| 3,558
|
Mike Williamson
|
77,872,848
| 5,733,813
|
Understanding interferences between python importing policy and pytest mocking
|
<p>I am having some trouble understanding how to properly mock a function using the <code>pytest-mock</code> module.</p>
<p>I will report a minimal reproducible example:</p>
<p>file: <code>src/mini_handler.py</code></p>
<pre><code>import tempfile

from src.mini_pdf_handler import mini_pdf_handler

def mini_handler():
    tmp_file = tempfile.NamedTemporaryFile()
    mini_pdf_handler()
    return tmp_file.name
</code></pre>
<p>file: <code>src/mini_pdf_handler.py</code></p>
<pre><code>import tempfile

def mini_pdf_handler():
    tmp_file = tempfile.NamedTemporaryFile()
    return tmp_file.name
</code></pre>
<p>file: <code>tests/test_handler.py</code></p>
<pre><code>from src.mini_handler import mini_handler

def test_mini_handler(mocker):
    mock_tempfile = mocker.MagicMock()
    mock_tempfile.return_value.name = 'outputs/output.pdf'
    mocker.patch("src.mini_handler.tempfile.NamedTemporaryFile", side_effect=mock_tempfile)
    mini_handler()
</code></pre>
<p>The problem is that the mock works, but it also mocks <code>NamedTemporaryFile</code> in the <code>mini_pdf_handler</code> module, when it should mock it only in <code>mini_handler</code>. I have the feeling the problem might be in the import policy that Python has: once a module is imported, it won't be imported again. At the same time, I feel the problem might be some silly oversight. Can someone help me?</p>
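One way to see why, sketched with throwaway in-memory modules so it runs standalone: <code>src.mini_handler.tempfile</code> *is* the shared <code>tempfile</code> module object, so patching an attribute on it leaks into every importer. Importing the name directly (<code>from tempfile import NamedTemporaryFile</code>) gives each module its own binding, which can then be patched per-module ("patch where it's used"):

```python
import sys
import types
from unittest import mock

# Throwaway in-memory stand-ins for src.mini_pdf_handler / src.mini_handler,
# each binding NamedTemporaryFile into its *own* namespace.
pdf_mod = types.ModuleType("mini_pdf_handler")
exec(
    "from tempfile import NamedTemporaryFile\n"
    "def mini_pdf_handler():\n"
    "    return NamedTemporaryFile().name\n",
    pdf_mod.__dict__,
)
sys.modules["mini_pdf_handler"] = pdf_mod

handler_mod = types.ModuleType("mini_handler")
exec(
    "from tempfile import NamedTemporaryFile\n"
    "from mini_pdf_handler import mini_pdf_handler\n"
    "def mini_handler():\n"
    "    inner = mini_pdf_handler()\n"
    "    return NamedTemporaryFile().name, inner\n",
    handler_mod.__dict__,
)
sys.modules["mini_handler"] = handler_mod

# Patch the name bound in mini_handler only; mini_pdf_handler keeps the real one.
fake = mock.MagicMock()
fake.return_value.name = "outputs/output.pdf"
with mock.patch("mini_handler.NamedTemporaryFile", fake):
    outer, inner = handler_mod.mini_handler()

print(outer)  # the mocked name
print(inner)  # a real temporary-file path from the untouched module
```

Applied to the question's layout, that would mean changing both source files to `from tempfile import NamedTemporaryFile` and patching `"src.mini_handler.NamedTemporaryFile"` in the test.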
|
<python><pytest><pytest-mock>
|
2024-01-24 11:56:30
| 1
| 521
|
Francesco Alongi
|
77,872,459
| 6,575,732
|
How to deal with nested protocols and generics for Python stubs
|
<p>I'm trying to define a <code>Protocol</code> for a <code>Stream</code> object. <code>Stream</code> should be generic over two parameters: inputs and outputs. <code>Stream</code> defines methods that return a new stream with the new inputs being the outputs from the original stream and the output something else. For example, the <code>map</code> and <code>sliding_window</code> operations can be defined as follows:</p>
<pre><code>from __future__ import annotations

from typing import (
    Callable,
    Protocol,
    TypeVar,
    Tuple,
)

U = TypeVar("U", contravariant=True)
V = TypeVar("V")
T = TypeVar("T")


class StreamInterface(Protocol[U, V]):
    def map(
        self: StreamInterface[U, V],
        func: Callable[[V], T],
    ) -> StreamInterface[V, T]:
        ...

    def sliding_window(
        self: StreamInterface[U, V],
        n: int,
    ) -> StreamInterface[V, Tuple[V, ...]]:
        ...
</code></pre>
<p>I now want to define a <code>flatten</code> operation which should behave as follows:</p>
<pre><code>K = TypeVar("K")
IterableK = Iterable[K]
M = TypeVar("M", bound=IterableK)

...

    def flatten(
        self: StreamInterface[U, M],
    ) -> StreamInterface[M, K]:
        ...
</code></pre>
<p>Effectively this operation will only work if <code>V</code> is an iterable, which I try to enforce with the <code>M</code> typehint.</p>
<p>However, mypy seems to no longer be able to follow the types:</p>
<pre><code>def func1(input: int) -> float:
    return input + 1.2


source: StreamInterface[int, int] = Stream()
slider = source.map(func1).sliding_window(3)
flattened = slider.flatten()  # mypy complains that flattened needs a type hint
flattened.map(func2).sink(print)
</code></pre>
<p>I'm guessing it has something to do with the fact that <code>K</code> and <code>M</code> are defined separately from <code>U</code> and <code>V</code>, but if I use <code>V</code> it doesn't work because <code>self: StreamInterface[U, Iterable[V]]</code> is not properly dealt with.</p>
<p>I'm wondering what the best way would be to deal with typing the <code>flatten</code> method.</p>
|
<python><generics><protocols><mypy><typing>
|
2024-01-24 10:59:46
| 0
| 351
|
DIN14970
|
77,872,293
| 12,892,937
|
Python how to vectorize this loop that perform assignment of a sub-rectangle inside an image
|
<p>I have 2 arrays, <code>bot</code> and <code>top</code>, and an <code>image</code>. I need to loop over the arrays, and for each pair <code>(bot, top)</code>, I need to fill column <code>3 * i + 1</code> from row <code>bot[i]</code> to row <code>top[i]</code> with 255.</p>
<p>I want to use vectorized code instead of a for loop to make it faster. I've tried some array indexing, but I don't know the correct syntax.</p>
<p>How should I change the code below?</p>
<pre><code>import numpy as np
import pandas as pd

N = 100
image = np.zeros((32, 32))
np.random.seed(42)
data = {
    'Bot': np.array([2, 5, 3, 1, 7], dtype=np.int32),
    'Top': np.array([3, 6, 4, 2, 10], dtype=np.int32)
}
# I don't have access to data, just df
df = pd.DataFrame(data)
value = 255

for i in range(len(df)):
    image[df.loc[i]['Bot'] : df.loc[i]['Top'] + 1, 3 * i + 1] = value

image2 = np.zeros((32, 32))
image2[df['Bot'] : df['Top'] + 1, np.arange(len(df)) * 3 + 1] = value  # error on this line

print(image == image2)
</code></pre>
<p>Running the code above, I have this error</p>
<pre><code>ERROR!
Traceback (most recent call last):
File "<string>", line 24, in <module>
TypeError: slice indices must be integers or None or have an __index__ method
</code></pre>
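Slicing cannot take a vector of start/stop values, but a boolean mask built by broadcasting can express the same thing. A sketch (assuming, as in the loop, that every column index <code>3*i+1</code> stays inside the image):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Bot': np.array([2, 5, 3, 1, 7], dtype=np.int32),
                   'Top': np.array([3, 6, 4, 2, 10], dtype=np.int32)})
value = 255

image2 = np.zeros((32, 32))
rows = np.arange(image2.shape[0])[:, None]          # shape (32, 1)
# mask[r, i] is True when Bot[i] <= r <= Top[i]; broadcasts to (32, len(df))
mask = (rows >= df['Bot'].to_numpy()) & (rows <= df['Top'].to_numpy())
# write each mask column into image column 3*i + 1 in one assignment
image2[:, 3 * np.arange(len(df)) + 1] = np.where(mask, value, 0.0)
```

This replaces the Python-level loop over rows of `df` with two vectorized operations, at the cost of materializing a `(32, len(df))` mask.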
|
<python><pandas><numpy><optimization><vectorization>
|
2024-01-24 10:37:04
| 1
| 1,831
|
Huy Le
|
77,872,291
| 11,937,086
|
Python tfcausalimpact: how can I access the p-value?
|
<p>I'm using tfcausalimpact (<a href="https://github.com/WillianFuks/tfcausalimpact" rel="nofollow noreferrer">https://github.com/WillianFuks/tfcausalimpact</a>).</p>
<p>When I run the analysis, I know that I can access the p-value by instantiating the model</p>
<p><code>ci = CausalImpact(data,pre_period,post_period)</code></p>
<p><code>ci.summary()</code></p>
<p>But that gives a wrapped-up text summary, and I want to save the P-value as a variable. How can I do this?</p>
|
<python><p-value><attribution>
|
2024-01-24 10:36:53
| 1
| 378
|
travelsandbooks
|
77,872,289
| 9,640,238
|
Disable caching or clear cache with QtWebEngine
|
<p>I have been using a slightly adapted version of the <code>OpenconnectSamlAuth</code> class that has been shared <a href="https://stackoverflow.com/a/65694690/9640238">here</a>, with PyQt5.</p>
<p>Now, I'd like to either clear the cache or disable it entirely. I have to admit that the code is a bit opaque to me (I'm not familiar with PyQt), and I couldn't find any relevant setting <a href="https://doc.qt.io/qt-5/qwebenginesettings.html" rel="nofollow noreferrer">here</a> either.</p>
<p>Any hint?</p>
|
<python><pyqt5>
|
2024-01-24 10:36:35
| 1
| 2,690
|
mrgou
|
77,872,274
| 1,631,190
|
Python Pandas Pivot Table Count Occurrences without dummy column
|
<p>What's the most elegant way to pivot a DataFrame and get counts from the following example:</p>
<pre><code>id type
1 A
1 A
1 B
2 A
1 B
</code></pre>
<p>to get a wide format df looking like this</p>
<pre><code>id A B
1 2 2
2 1 0
</code></pre>
<p>At the moment I am creating a separate column and then using aggfunc='sum':</p>
<pre><code>df.loc[:,"count"] = 1
df.pivot_table(index="id", columns="type", values="count", aggfunc='sum')
</code></pre>
<p>which works fine.</p>
<p>Using:</p>
<pre><code>df.pivot_table(index='id', columns='type', aggfunc='count')
</code></pre>
<p>returns not the expected result giving me a df with 0 columns and only the ids in its rows.</p>
<p>What am I doing wrong in the second approach, and is there a more elegant way than creating a separate column containing only 1s?</p>
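For reference, one dummy-column-free alternative (a sketch with the sample data above) is <code>pd.crosstab</code>, which counts co-occurrences directly; <code>df.pivot_table(index='id', columns='type', aggfunc='size', fill_value=0)</code> should behave the same way:

```python
import pandas as pd

# same sample data as in the question
df = pd.DataFrame({'id': [1, 1, 1, 2, 1],
                   'type': ['A', 'A', 'B', 'A', 'B']})

# cross-tabulate id against type: each cell is an occurrence count
wide = pd.crosstab(df['id'], df['type'])
print(wide)
```

`aggfunc='count'` without a `values` column produces the empty result seen in the question, because there is no remaining column left to count over.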
|
<python><pandas><pivot>
|
2024-01-24 10:35:20
| 1
| 1,901
|
J-H
|
77,871,919
| 8,144,423
|
Ensure all dates are present in each group (eg retailer) in dataframe
|
<p>I am trying to make sure that every group has all dates present (as I would like to compute a rolling 7-day sum). My initial data has "blanks" on days when a retailer was not selling products, and I would like to fill those with 0.</p>
<pre><code>Initial data:
| Date | Retailer | Sales |
| ---- | -------- | ----- |
| 2024-01-01 | A | 1 |
| 2024-01-02 | A | 1 |
| 2024-01-01 | B | 9 |
Desired output:
| Date | Retailer | Sales |
| ---- | -------- | ----- |
| 2024-01-01 | A | 1 |
| 2024-01-02 | A | 1 |
| 2024-01-01 | B | 9 |
| 2024-01-02 | B | 0 |
</code></pre>
<p>Minimal code of my working, but not best, solution:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    "Date": ["2024-01-01", "2024-01-02", "2024-01-01"],
    "Retailer": ["A", "A", "B"],
    "Sales": [1, 1, 9],
})

# create container df
unique_dates = df["Date"].unique()
container = pd.DataFrame()
for r in df["Retailer"].unique():
    tmp = pd.DataFrame({"Date": unique_dates})
    tmp["Retailer"] = r
    container = pd.concat([container, tmp], axis=0)

# join with original data
df_final = container.merge(df, on=["Date", "Retailer"], how="left")

# fill blanks
df_final = df_final.fillna(0)

# show
df_final
</code></pre>
<p>In my real-life example there are multiple products, shops and long time series, so looping over everything is not the best solution.</p>
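A loop-free sketch of the same fill, building the full retailer-by-date grid with a <code>MultiIndex</code> product and reindexing onto it (for real data you would likely build the date axis with <code>pd.date_range</code> instead of only the observed dates):

```python
import pandas as pd

df = pd.DataFrame({
    "Date": ["2024-01-01", "2024-01-02", "2024-01-01"],
    "Retailer": ["A", "A", "B"],
    "Sales": [1, 1, 9],
})

# Cartesian product of every retailer with every observed date
full_index = pd.MultiIndex.from_product(
    [df["Retailer"].unique(), df["Date"].unique()],
    names=["Retailer", "Date"],
)

# reindex onto the full grid; missing (Retailer, Date) pairs become 0
df_final = (
    df.set_index(["Retailer", "Date"])
      .reindex(full_index, fill_value=0)
      .reset_index()
)
print(df_final)
```

This scales with the size of the grid rather than the number of groups, so no Python-level loop over retailers (or products, shops, etc.) is needed.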
|
<python><dataframe>
|
2024-01-24 09:44:55
| 1
| 483
|
AAAA
|
77,871,888
| 6,535,324
|
source/activate python venv from shell script
|
<p>I have a shell script which I run on Windows 10 using Git Bash. It calls Python under the hood.
For this to work, I need to run the following two lines first in Git Bash:</p>
<pre><code>source /c/Mambaforge/etc/profile.d/conda.sh
conda activate my_py_venv
bash setup_stuff.sh
bash shell_script_using_python.sh
</code></pre>
<p>This works. I now wanted to move the first two lines into <code>setup_stuff.sh</code> (which itself does not use Python), since they are part of the everyday setup. This does not seem to work: the second script (<code>shell_script_using_python.sh</code>) complains that there is no Python. Maybe the source/activation only takes effect inside <code>setup_stuff.sh</code>. Any suggestions?</p>
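The likely culprit is that <code>bash setup_stuff.sh</code> starts a child shell: the <code>conda activate</code> happens there and dies with it, so the parent (and the next script) never sees it. Sourcing the setup script (<code>source setup_stuff.sh</code> or <code>. setup_stuff.sh</code>) runs it in the current shell instead. A standalone sketch of the difference:

```shell
# Minimal demo: a child shell ("bash script.sh") can never modify the
# parent's environment; sourcing runs the script in the current shell.
echo 'export MARKER=set_by_script' > /tmp/env_demo.sh

bash /tmp/env_demo.sh
echo "after bash:   ${MARKER:-unset}"    # prints: after bash:   unset

source /tmp/env_demo.sh                  # or: . /tmp/env_demo.sh
echo "after source: ${MARKER:-unset}"    # prints: after source: set_by_script
```

The same applies to `conda activate`: keep it in `setup_stuff.sh`, but invoke that file with `source` rather than `bash`.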
|
<python><bash><shell>
|
2024-01-24 09:39:29
| 0
| 2,544
|
safex
|
77,871,587
| 4,583,536
|
Environment variable interpolation does not work with @_here_
|
<p>I'm setting up hydra config for my python project and struggling a bit to get variable interpolation working in the following context.</p>
<p>Directory structure</p>
<pre><code>├── config/
│   ├── config.yaml
│   ├── deployment/
│   │   └── objects.yaml
│   └── environment/
│       ├── dev.yaml
│       └── prd.yaml
</code></pre>
<p>My config.yaml looks like this:</p>
<pre><code># config.yaml
defaults:
- _self_
- deployment: objects
</code></pre>
<p>I want to set up my objects.yaml such that it inherits config from the environment group.</p>
<p><strong>THIS WORKS</strong></p>
<pre><code># objects.yaml
defaults:
- _self_
- /environment/dev@_here_
file: my-file
</code></pre>
<p><strong>THIS DOES NOT WORK</strong></p>
<p>When I try and get dev from an env variable, it does not work.</p>
<pre><code>defaults:
- _self_
- /environment/${oc.env:CLUSTER_TARGET}@_here_
file: my-file
</code></pre>
<p>This is the error I get</p>
<blockquote>
<p>In 'deployment/objects': Could not load 'deployment/environment/dev'.</p>
</blockquote>
|
<python><fb-hydra><omegaconf>
|
2024-01-24 08:48:38
| 1
| 528
|
David Clarance
|
77,871,427
| 23,287,531
|
Script does not save output to correct path while using auto py to exe
|
<p>My program uses netsh to save Wi-Fi profile information to a txt file, in a directory named after the system time at the moment the program is run.</p>
<p>So let's say you have signed into three Wi-Fi networks on your PC:</p>
<p>Wifi1:
password:123456
wifi2:
password: abcdefg
wifi3:
password: 12456!</p>
<p>and the py file path is:</p>
<p>C:\Users\User\Desktop\Pp</p>
<p>and the system time is:</p>
<p>2024-01-23 16;51;44.894633</p>
<p>the output should be three txt files saved to directory:</p>
<p>c:\Users\User\Desktop\Pp\2024-01-23 16;51;44.894633</p>
<p>This works when I just run the Python program through VS Code, but I wanted to be able to turn it into an executable with auto-py-to-exe.</p>
<p>The problem is that it saves the txt files to a different directory ONLY when I use the exe file. So when I open the exe file, the three txt files are saved to this path:</p>
<p>saved to C:\Users\User\AppData\Local\Temp_MEI124562\2024-01-23 16;51;44.894633</p>
<p>In short, it doesn't save the files to the correct path only when using the .exe file; it works correctly when I run the .py file.</p>
<p>python code:</p>
<pre><code>import os
from datetime import datetime

names_string = os.popen("netsh wlan show profile")
names_list = []

for i in names_string:
    if ":" in i:
        names_list.append(i[i.index(":") + 2:len(i) - 1])
    else:
        next

foldername = f"{datetime.now()}"
folder_name = foldername
folder_name = folder_name.replace(":", ";")
parent_path = os.path.dirname(__file__)
path = os.path.join(parent_path, folder_name)
os.mkdir(path)

for i in names_list:
    with open(path + f"/{i}.txt", 'w') as fp:
        print("saved to", path + f"/{i}.txt")
        output = os.popen(f"""netsh wlan show profile name = "{i}" key = clear""").read()
        fp.write(output)

input("press enter to close window")
</code></pre>
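The observed paths are consistent with PyInstaller's one-file mode: the script is unpacked into a temporary <code>_MEIxxxx</code> folder, so <code>os.path.dirname(__file__)</code> points there rather than next to the .exe. A sketch of the usual workaround, checking the <code>sys.frozen</code> flag that the PyInstaller bootloader sets:

```python
import os
import sys

# With PyInstaller --onefile, __file__ lives in the temporary _MEIxxxx
# extraction dir, while sys.executable is the real path of the .exe.
if getattr(sys, "frozen", False):          # set by the PyInstaller bootloader
    base_path = os.path.dirname(sys.executable)
else:
    base_path = os.path.dirname(os.path.abspath(__file__))

print("output will be created under:", base_path)
```

Using `base_path` in place of `parent_path` in the script above would make the frozen build write its folder next to the .exe.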
<p>auto py to exe output:</p>
<pre><code>Running auto-py-to-exe v2.42.0
Building directory: C:\Users\User\AppData\Local\Temp\tmps_xeexcq
Provided command: pyinstaller --noconfirm --onefile --console "C:/Users/User/Desktop/Pp/main.py"
Recursion Limit is set to 5000
Executing: pyinstaller --noconfirm --onefile --console C:/Users/User/Desktop/Pp/main.py --distpath C:\Users\User\AppData\Local\Temp\tmps_xeexcq\application --workpath C:\Users\User\AppData\Local\Temp\tmps_xeexcq\build --specpath C:\Users\User\AppData\Local\Temp\tmps_xeexcq
21259 INFO: PyInstaller: 6.3.0
21263 INFO: Python: 3.12.1
21302 INFO: Platform: Windows-11-10.0.22621-SP0
21316 INFO: wrote C:\Users\User\AppData\Local\Temp\tmps_xeexcq\main.spec
21332 INFO: Extending PYTHONPATH with paths
['C:\\Users\\User\\Desktop\\Pp']
21955 INFO: checking Analysis
21969 INFO: Building Analysis because Analysis-00.toc is non existent
21984 INFO: Initializing module dependency graph...
22003 INFO: Caching module graph hooks...
22083 INFO: Analyzing base_library.zip ...
23851 INFO: Loading module hook 'hook-heapq.py' from 'C:\\Users\\User\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\PyInstaller\\hooks'...
23871 INFO: Loading module hook 'hook-encodings.py' from 'C:\\Users\\User\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\PyInstaller\\hooks'...
27590 INFO: Loading module hook 'hook-pickle.py' from 'C:\\Users\\User\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\PyInstaller\\hooks'...
31066 INFO: Caching module dependency graph...
31219 INFO: Running Analysis Analysis-00.toc
31223 INFO: Looking for Python shared library...
31243 INFO: Using Python shared library: C:\Users\User\AppData\Local\Programs\Python\Python312\python312.dll
31250 INFO: Analyzing C:\Users\User\Desktop\Pp\main.py
31270 INFO: Processing module hooks...
31282 INFO: Performing binary vs. data reclassification (2 entries)
31302 INFO: Looking for ctypes DLLs
31315 INFO: Analyzing run-time hooks ...
31338 INFO: Including run-time hook 'C:\\Users\\User\\AppData\\Local\\Programs\\Python\\Python312\\Lib\\site-packages\\PyInstaller\\hooks\\rthooks\\pyi_rth_inspect.py'
31363 INFO: Looking for dynamic libraries
31552 INFO: Extra DLL search directories (AddDllDirectory): []
31558 INFO: Extra DLL search directories (PATH): []
31870 WARNING: Library not found: could not resolve 'api-ms-win-core-path-l1-1-0.dll', dependency of 'C:\\Users\\User\\AppData\\Local\\Programs\\Python\\Python312\\python312.dll'.
31887 INFO: Warnings written to C:\Users\User\AppData\Local\Temp\tmps_xeexcq\build\main\warn-main.txt
31919 INFO: Graph cross-reference written to C:\Users\User\AppData\Local\Temp\tmps_xeexcq\build\main\xref-main.html
31990 INFO: checking PYZ
31994 INFO: Building PYZ because PYZ-00.toc is non existent
32005 INFO: Building PYZ (ZlibArchive) C:\Users\User\AppData\Local\Temp\tmps_xeexcq\build\main\PYZ-00.pyz
32311 INFO: Building PYZ (ZlibArchive) C:\Users\User\AppData\Local\Temp\tmps_xeexcq\build\main\PYZ-00.pyz completed successfully.
32336 INFO: checking PKG
32351 INFO: Building PKG because PKG-00.toc is non existent
32355 INFO: Building PKG (CArchive) main.pkg
34491 INFO: Building PKG (CArchive) main.pkg completed successfully.
34521 INFO: Bootloader C:\Users\User\AppData\Local\Programs\Python\Python312\Lib\site-packages\PyInstaller\bootloader\Windows-64bit-intel\run.exe
34528 INFO: checking EXE
34537 INFO: Building EXE because EXE-00.toc is non existent
34550 INFO: Building EXE from EXE-00.toc
34566 INFO: Copying bootloader EXE to C:\Users\User\AppData\Local\Temp\tmps_xeexcq\application\main.exe
34754 INFO: Copying icon to EXE
34865 INFO: Copying 0 resources to EXE
34884 INFO: Embedding manifest in EXE
35011 INFO: Appending PKG archive to EXE
35036 INFO: Fixing EXE headers
38225 INFO: Building EXE from EXE-00.toc completed successfully.
Moving project to: C:\Users\User\Desktop\Pp\output
Complete.
</code></pre>
<p>I tried reinstalling auto-py-to-exe multiple times, I tried disabling Windows Defender (as it pops up every time I use auto-py-to-exe) and my antivirus, I tried on two different PCs, and I tried reinstalling Python twice; nothing worked, and I haven't found any useful information online.</p>
<p>thanks in advance!</p>
|
<python><windows><visual-studio-code><netsh><auto-py-to-exe>
|
2024-01-24 08:13:47
| 1
| 433
|
Roee Zamir
|
77,871,393
| 3,098,783
|
Dockerfile build with setuptools - how to avoid full rebuild when files change
|
<p>I have to dockerize an existing project that uses setuptools for building from a <code>setup.py</code> file, instead of <code>requirements.txt</code>.</p>
<p>This build includes large binary downloads (pytorch, fast-whisper) and, at runtime, an initial download of the corresponding models. Altogether ~10GB.</p>
<h2>Problem</h2>
<p>In order to get the build done correctly I need to <code>COPY</code> the source files before the installation step, which invalidates the layer cache and triggers a full rebuild every time I change a source file.</p>
<p>If I only copy the <code>setup.py</code> for installation, there will be the package missing, <a href="https://stackoverflow.com/a/54357929/3098783">as detailed described in another question's answer</a>.</p>
<h2>Dockerfile example</h2>
<pre><code>FROM python:3.11-slim
WORKDIR /app
RUN apt update && \
apt install -y --no-install-recommends git ffmpeg curl
COPY setup.py /app
# this is the problem:
# if I move this line behind the next line,
# the build will result in an incomplete package
# but if I keep it here, all the following
# layers will not be cached and the
# downloads will run again
COPY mypackage /app/mypackage
# runs setuptools and installs deps,
# including 2.2GB pytorch
RUN pip install ./ --extra-index-url https://download.pytorch.org/whl/cu118
# downloads ~8GB of models
RUN ["mypackage", "init"]
# I would love to move COPY of the project
# files to this position
CMD ["mypackage", "start"]
</code></pre>
<h2>Content of the <code>setup.py</code> file</h2>
<pre class="lang-py prettyprint-override"><code>from setuptools import setup, find_packages
from distutils.util import convert_path
import platform

system = platform.system()
if system in ["Windows", "Linux"]:
    torch = "torch==2.0.0+cu118"
if system == "Darwin":
    torch = "torch==2.0.0"

main_ns = {}
ver_path = convert_path('mypackage/version.py')
with open(ver_path) as ver_file:
    exec(ver_file.read(), main_ns)

setup(
    name='aTrain',
    version=main_ns['__version__'],
    readme="README.md",
    license="LICENSE",
    python_requires=">=3.10",
    install_requires=[
        torch,
        "torchaudio==2.0.1",
        "faster-whisper>=0.8",
        "transformers",
        "ffmpeg-python>=0.2",
        "pandas",
        "pyannote.audio==3.0.0",
        "Flask==2.3.2",
        "pywebview==4.2.2",
        "flaskwebgui",
        "screeninfo==0.8.1",
        "wakepy==0.7.2",
        "show-in-file-manager==1.1.4"
    ],
    packages=find_packages(),
    include_package_data=True,
    entry_points={
        'console_scripts': ['mypackage = mypackage:cli', ]
    }
)
</code></pre>
<p>I am still new to all this, and I wonder what options I have to avoid downloading all the large dependencies and models again every time the source code changes.</p>
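One common layering pattern, sketched below. It is hypothetical in two places: it assumes a minimal skeleton (`setup.py`, `mypackage/__init__.py`, `mypackage/version.py`) is enough to resolve `install_requires` and install the entry point, and that `mypackage init` can run from that skeleton; if `init` needs the full source, only the pip layer stays cached and the model layer moves below the source copy.

```dockerfile
FROM python:3.11-slim
WORKDIR /app
RUN apt update && \
    apt install -y --no-install-recommends git ffmpeg curl

# Dependency layer: only invalidated when these few files change
COPY setup.py /app/
COPY mypackage/__init__.py mypackage/version.py /app/mypackage/
RUN pip install ./ --extra-index-url https://download.pytorch.org/whl/cu118

# Model layer: cached as long as the layers above are unchanged
RUN ["mypackage", "init"]

# Source layer: edits here no longer re-download torch or the models
COPY mypackage /app/mypackage
RUN pip install --no-deps --force-reinstall ./

CMD ["mypackage", "start"]
```

The idea is the Dockerfile equivalent of the classic `COPY requirements.txt` / `pip install` / `COPY .` split: dependencies resolve from a layer that rarely changes, and `pip install --no-deps` at the end reinstalls only the package itself.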
|
<python><docker><caching><setuptools>
|
2024-01-24 08:06:50
| 2
| 8,471
|
Jankapunkt
|
77,871,364
| 6,882,746
|
Vectorize a Convex Hull and Interpolation loop on a specific axis of an ndarray
|
<p>I'm struggling to find an efficient way to implement this interpolated convex hull data treatment.</p>
<p>I have a 2D ndarray, call it <code>arr</code>, with shape (2000000,19), containing floats.
I have a 1D ndarray, call it <code>w</code>, with shape (19,), also containing floats.</p>
<p>What I achieved (and works perfectly except that it's excruciatingly slow) is the following:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.interpolate import interp1d
# Sample data
arr = np.array([[49.38639913, 49.76769437, 49.66370476, 49.49905455, 49.15242251,
48.0518658 , 45.998071 , 45.31347273, 45.29614113, 45.25281212,
45.0448329 , 44.61154286, 43.72763117, 42.38443203, 41.17121991,
40.48662165, 40.35663463, 39.88001558, 39.55938095],
[47.97387359, 47.86121818, 47.69656797, 47.18528571, 46.70000087,
45.39146494, 43.50232035, 43.18168571, 43.82295498, 43.62364156,
43.31167273, 42.88704848, 42.37576623, 41.0585645 , 40.37396623,
39.09142771, 38.79679048, 38.51948485, 38.52815065]])
w = np.array([2.1017, 2.1197, 2.1374, 2.1548, 2.172 , 2.1893, 2.2068, 2.2254,
2.2417, 2.2592, 2.2756, 2.2928, 2.3097, 2.326 , 2.3421, 2.3588,
2.3745, 2.3903, 2.4064])
</code></pre>
<pre class="lang-py prettyprint-override"><code>def upper_andrews_hull(points: np.ndarray):
"""
Computes the upper half of the convex hull of a set of 2D points.
:param points: an iterable sequence of (x, y) pairs representing the points.
"""
# 2D cross product of OA and OB vectors, i.e. z-component of their 3D cross product.
# Returns a positive value, if OAB makes a counter-clockwise turn,
# negative for clockwise turn, and zero if the points are collinear.
def cross(o, a, b):
return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
# Reverse the points so that we can pop from the end
points = np.flip(points, axis=0)
# Build upper hull
upper = []
for p in points:
while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
upper.pop()
upper.append(p)
# Reverse the upper hull
upper = np.flip(np.array(upper), axis=0)
return upper
</code></pre>
<pre class="lang-py prettyprint-override"><code>result = np.empty(arr.shape)
for i in range(arr.shape[0]):
# Create points, using w as x values, and arr as y values
points = np.stack((w, arr[i,:]), axis=1)
# Calculate the convex hull around the points
hull = upper_andrews_hull(points)
# Interpolate the hull
interp_function = interp1d(*hull.T)
# Store interpolation's result to have the same x references as original points
result[i,:] = interp_function(w)
</code></pre>
<p>I'm pretty sure there's a way to forgo the loop and only use vectorized calculations, but I can't find it (plus, there's the issue that <code>hull</code> doesn't always have the same number of points, so all of the hulls would not be storable in an ndarray).</p>
<p>My expected behaviour would be something like <code>result = interpolated_hull(arr, w, axis=0)</code>, to apply the operations on the entire <code>arr</code> array without a loop.</p>
|
<python><numpy><vectorization><convex-hull>
|
2024-01-24 07:58:53
| 1
| 2,053
|
Laurent
|
77,871,144
| 12,705,907
|
TypeCheckError: argument "config_file" (None) did not match any element in the union: pathlib.Path: is not an instance of pathlib.Path
|
<p>I am getting an error when I try to use pandas profiling. Below are the code I tried, the error I got, and the versions of the packages I used.</p>
<p>Code:</p>
<pre><code>import pandas as pd
from pandas_profiling import ProfileReport
df = pd.read_csv("data.csv")
profile = ProfileReport(df)
profile
</code></pre>
<p>Error:</p>
<pre><code>---------------------------------------------------------------------------
TypeCheckError Traceback (most recent call last)
Cell In[18], line 1
----> 1 profile = ProfileReport(df)
2 profile
File ~\AppData\Local\anaconda3\lib\site-packages\pandas_profiling\profile_report.py:48, in ProfileReport.__init__(self, df, minimal, explorative, sensitive, dark_mode, orange_mode, tsmode, sortby, sample, config_file, lazy, typeset, summarizer, config, **kwargs)
45 _json = None
46 config: Settings
---> 48 def __init__(
49 self,
50 df: Optional[pd.DataFrame] = None,
51 minimal: bool = False,
52 explorative: bool = False,
53 sensitive: bool = False,
54 dark_mode: bool = False,
55 orange_mode: bool = False,
56 tsmode: bool = False,
57 sortby: Optional[str] = None,
58 sample: Optional[dict] = None,
59 config_file: Union[Path, str] = None,
60 lazy: bool = True,
61 typeset: Optional[VisionsTypeset] = None,
62 summarizer: Optional[BaseSummarizer] = None,
63 config: Optional[Settings] = None,
64 **kwargs,
65 ):
66 """Generate a ProfileReport based on a pandas DataFrame
67
68 Config processing order (in case of duplicate entries, entries later in the order are retained):
(...)
82 **kwargs: other arguments, for valid arguments, check the default configuration file.
83 """
85 if df is None and not lazy:
File ~\AppData\Local\anaconda3\lib\site-packages\typeguard\_functions.py:138, in check_argument_types(func_name, arguments, memo)
135 raise exc
137 try:
--> 138 check_type_internal(value, annotation, memo)
139 except TypeCheckError as exc:
140 qualname = qualified_name(value, add_class_prefix=True)
File ~\AppData\Local\anaconda3\lib\site-packages\typeguard\_checkers.py:759, in check_type_internal(value, annotation, memo)
757 checker = lookup_func(origin_type, args, extras)
758 if checker:
--> 759 checker(value, origin_type, args, memo)
760 return
762 if isclass(origin_type):
File ~\AppData\Local\anaconda3\lib\site-packages\typeguard\_checkers.py:408, in check_union(value, origin_type, args, memo)
403 errors[get_type_name(type_)] = exc
405 formatted_errors = indent(
406 "\n".join(f"{key}: {error}" for key, error in errors.items()), " "
407 )
--> 408 raise TypeCheckError(f"did not match any element in the union:\n{formatted_errors}")
TypeCheckError: argument "config_file" (None) did not match any element in the union:
pathlib.Path: is not an instance of pathlib.Path
str: is not an instance of str
</code></pre>
<p>Versions:</p>
<pre><code>pandas==1.5.3
pandas-profiling==3.6.6
</code></pre>
<p>Couldn't find any resource to debug this. Tried updating the versions of pandas and pandas-profiling, but still couldn't succeed.</p>
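The traceback itself points at the cause: the library declares <code>config_file: Union[Path, str] = None</code>, and newer versions of <code>typeguard</code> (used internally) reject the default <code>None</code> because it is not a member of that union. A minimal stdlib-only sketch of the check that is failing:

```python
from pathlib import Path
from typing import Union, get_args

def passes_union_check(value, annotation):
    # Simplified version of what typeguard does for a Union annotation:
    # the value must be an instance of at least one union member.
    return isinstance(value, get_args(annotation))

print(passes_union_check(None, Union[Path, str]))       # False: the default None fails
print(passes_union_check("cfg.yml", Union[Path, str]))  # True
```

Pinning <code>typeguard</code> to an older release, or moving to the renamed successor package <code>ydata-profiling</code>, are the usual workarounds.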
|
<python><python-3.x><pandas><pip><pandas-profiling>
|
2024-01-24 07:16:30
| 1
| 714
|
Senthil Vikram Vodapalli
|
77,871,139
| 11,607,378
|
How do I debug a Python-C++ mixed project with gdb attach process and catch throw?
|
<p>I have a mixed Python & C++ project and use the following steps to debug:</p>
<ul>
<li>start Python process</li>
<li>use <code>ps ax | grep python</code> to identify the Python process that the debugger will attach to</li>
<li>start the gdb attach process</li>
</ul>
<p>The "gdb" attach configuration in VSCode is shown below; it is intended to stop only on <code>std::vector</code> errors, using the pattern described in the gdb documentation: <code>catch throw <exception-regex></code></p>
<pre class="lang-json prettyprint-override"><code> {
"name": "(gdb) Attach",
"type": "cppdbg",
"request": "attach",
"program": "/opt/conda/bin/python", /* My virtual env */
"processId": "${command:pickProcess}",
"MIMode": "gdb",
"setupCommands": [
{
"description": "Enable all-exceptions",
"text": "catch throw std::vector*",
"ignoreFailures": true
}
],
}
</code></pre>
<p>However, it stopped at one point (not the expected <code>std::vector</code> error) with this debug log:</p>
<pre><code>Thread 1 "python" hit Catchpoint 1 (exception thrown), __cxxabiv1::__cxa_throw (obj=0xff0e650, tinfo=0x7f8f21433560 <typeinfo for torch::TypeError>, dest=0x7f8f209440f0 <torch::TypeError::~TypeError()>) at ../../../../libstdc++-v3/libsupc++/eh_throw.cc:80
</code></pre>
<p>and the VSCode debugger stopped, but it could not locate the offending source line; the status looked like this:
<a href="https://i.sstatic.net/FhnvC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FhnvC.png" alt="error screenshot" /></a></p>
<p>So:</p>
<ul>
<li>Why did it stop abruptly?</li>
<li>How can one trace the error stack?</li>
</ul>
<p>PS:
I have <code>RuntimeError: vector::_M_range_check: __n (which is 1) >= this->size() (which is 1)</code> thrown by the Python process, but I cannot locate the culprit line in the C++ code. This is the error I want the catchpoint to isolate.</p>
|
<python><c++><gdb>
|
2024-01-24 07:15:26
| 0
| 673
|
Litchy
|
77,871,084
| 4,948,889
|
Unable to communication Socket Python from Local Windows system to WSL Ubuntu
|
<p>I have a simple socket communication program in Python.
This client runs on Windows so that I can fetch the output:</p>
<pre><code>#!/usr/bin/env python3
import socket
HOST = "172.19.112.1"#"127.0.0.1" # The server's hostname or IP address
PORT = 65432 # The port used by the server
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.connect((HOST, PORT))
s.sendall(b"Hello, world")
data = s.recv(1024)
print(f"Received {data!r}")
</code></pre>
<p>This program runs in my WSL-Ubuntu system:</p>
<pre><code>#!/usr/bin/env python3
import socket
HOST = "0.0.0.0"#"117.248.124.120"#"127.0.0.1" # Standard loopback interface address (localhost)
PORT = 65432 # Port to listen on (non-privileged ports are > 1023)
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.bind((HOST, PORT))
s.listen()
conn, addr = s.accept()
with conn:
print(f"Connected by {addr}")
while True:
data = conn.recv(1024)
if not data:
break
conn.sendall(data)
</code></pre>
<p>The server runs fine inside Ubuntu, but when I tried to communicate and fetch output from the Windows system, I received this error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\system\Desktop\materials-python-sockets-tutorial\echo-client.py", line 9, in <module>
s.connect((HOST, PORT))
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it
</code></pre>
<p>I have tried to use help from microsoft blog: <a href="https://learn.microsoft.com/en-us/windows/wsl/networking" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/windows/wsl/networking</a> and have tried this command in power shell:</p>
<pre><code>netsh interface portproxy add v4tov4 listenport=65432 listenaddress=0.0.0.0 connectport=65432 connectaddress=UbuntuIP address
</code></pre>
<p>Now I was able to receive output, but it was an empty value:</p>
<pre><code>Received b''
</code></pre>
<p>Please let me know how I can resolve this issue because I do not know what went wrong with the code. It works fine on the local system but not with WSL Ubuntu.</p>
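As a side note, the <code>Received b''</code> result means the TCP connection succeeded but the peer closed without echoing anything, which often indicates the portproxy target IP is stale (WSL2's address changes across restarts), so nothing was actually listening at the forwarded address. The client/server logic itself is sound, as this self-contained loopback sketch (with an OS-chosen port) demonstrates:

```python
import socket
import threading

def start_echo_server(host="127.0.0.1"):
    # Bind to port 0 so the OS picks a free port for us
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))
    srv.listen()
    port = srv.getsockname()[1]

    def serve():
        conn, _ = srv.accept()
        with conn:
            conn.sendall(conn.recv(1024))  # echo one message back
        srv.close()

    threading.Thread(target=serve, daemon=True).start()
    return port

port = start_echo_server()
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(("127.0.0.1", port))
    s.sendall(b"Hello, world")
    data = s.recv(1024)
print(data)  # b'Hello, world'
```

Re-running the <code>netsh</code> command with the current WSL IP (obtained via <code>hostname -I</code> inside WSL) after each restart is the usual workaround.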
|
<python><python-3.x><sockets><windows-subsystem-for-linux>
|
2024-01-24 07:01:13
| 0
| 7,303
|
Jaffer Wilson
|
77,870,975
| 10,964,685
|
Append new data to existing dataframe more efficiently - python
|
<p>Is there a more efficient way to append new data to an existing dataframe? As per the following example, I'm importing an existing df (<code>frame_orig</code>). Then some more code is performed, which produces some new values. I then append these back to the original df and export out.</p>
<p>This works fine, but if the original df becomes too large, the process can take a long time.</p>
<p>Is it possible to append the new values as a list? Or can the type of the original df be manipulated?</p>
<pre><code>import pandas as pd
#frame_orig = pd.read_csv('C:/path/to/file/frame_orig.csv')
frame_orig = pd.DataFrame({'Val1': ['1','2'],
'Val2': ['3','4'],
})
##some code
new_Val1 = '5000'
new_Val2 = '6000'
newvalues = []
newvalues.append([new_Val1, new_Val2])
df_values = pd.DataFrame(newvalues, columns = ['Val1','Val2'])
new_df = pd.concat([frame_orig, df_values], ignore_index=True)
</code></pre>
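One common pattern (a sketch, with a stand-in loop in place of the real computation): accumulate plain Python rows and concatenate once at the end. Repeated <code>pd.concat</code> copies the whole frame on every call, which is what gets slow as the frame grows:

```python
import pandas as pd

frame_orig = pd.DataFrame({'Val1': ['1', '2'], 'Val2': ['3', '4']})

rows = []
for i in range(3):  # stand-in for the code that produces new values
    rows.append({'Val1': str(5000 + i), 'Val2': str(6000 + i)})

# One concat at the end instead of one per new row
new_df = pd.concat([frame_orig, pd.DataFrame(rows)], ignore_index=True)
print(len(new_df))  # 5
```

The accumulated rows can also be written out separately and merged on disk if the original frame never changes.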
|
<python><pandas>
|
2024-01-24 06:35:13
| 1
| 392
|
jonboy
|
77,870,893
| 896,112
|
"NumbaPerformanceWarning: '@' is faster on contiguous arrays" warning on `@jitclass` methods
|
<p>Consider the following program which creates a <code>Rotator</code> class that stores a rotation matrix <code>_R</code> and can <code>rotate</code> a vector <code>v</code>:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import numpy.typing as npt
import numba as nb
from numba.experimental import jitclass
@jitclass([("_R", nb.float64[:,:])])
class Rotator:
_R : npt.NDArray[np.float64]
def __init__(self, R : npt.NDArray[np.float64]):
self._R = R
def rotate(self, v : npt.NDArray[np.float64]) -> npt.NDArray[np.float64]:
return self._R @ v
R = np.eye(3)
print(R.flags)
rotator = Rotator(R)
rotator.rotate(np.array([1., 0., 0.]))
</code></pre>
<p>Here are <code>R</code>'s flags that get printed :</p>
<pre><code> C_CONTIGUOUS : True
F_CONTIGUOUS : False
OWNDATA : True
WRITEABLE : True
ALIGNED : True
WRITEBACKIFCOPY : False
</code></pre>
<p>It's clear that <code>R</code> is row-major contiguous, but Numba complains that it's not :</p>
<pre><code><string>:3: NumbaPerformanceWarning: '@' is faster on contiguous arrays, called on (Array(float64, 2, 'A', False, aligned=True), Array(float64, 1, 'C', False, aligned=True))
</code></pre>
<p>However the same function works without warning if it's pulled out of a class.</p>
<pre><code>@njit
def rotate(R : npt.NDArray[np.float64], v : npt.NDArray[np.float64]) -> npt.NDArray[np.float64]:
return R @ v
rotate(np.eye(3), np.array([1., 0., 0.]))
</code></pre>
<p>I would appreciate any explanation on why the contiguity of the array changes according to Numba when compiled as a class member.</p>
<p>Thank you!</p>
<p>EDIT: Following Jérôme's suggestion from the comments, I tried declaring <code>_R</code> as a row-major (C-style) 2D array:</p>
<pre class="lang-py prettyprint-override"><code>@jitclass([("_R", nb.float64[::1,:])])
</code></pre>
<p>but I now get the error</p>
<pre><code><string>:3: NumbaPendingDeprecationWarning: Code using Numba extension API maybe depending on 'old_style' error-capturing, which is deprecated and will be replaced by 'new_style' in a future release. See details at https://numba.readthedocs.io/en/latest/reference/deprecation.html#deprecation-of-old-style-numba-captured-errors
Exception origin:
File "/Users/xx/miniconda3/envs/xx/lib/python3.10/site-packages/numba/np/arrayobj.py", line 6397, in array_to_array
assert fromty.mutable != toty.mutable or toty.layout == 'A'
Traceback (most recent call last):
File "/private/tmp/test.py", line 17, in <module>
rotator = Rotator(R)
File "/Users/xx/miniconda3/envs/xx/lib/python3.10/site-packages/numba/experimental/jitclass/base.py", line 124, in __call__
return cls._ctor(*bind.args[1:], **bind.kwargs)
File "/Users/xx/miniconda3/envs/xx/lib/python3.10/site-packages/numba/core/dispatcher.py", line 468, in _compile_for_args
error_rewrite(e, 'typing')
File "/Users/xx/miniconda3/envs/xx/lib/python3.10/site-packages/numba/core/dispatcher.py", line 409, in error_rewrite
raise e.with_traceback(None)
numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Internal error at <numba.core.typeinfer.CallConstraint object at 0x11edbbbe0>.
Failed in nopython mode pipeline (step: native lowering)
Enable logging at debug level for details.
File "<string>", line 3:
<source missing, REPL/exec in use?>
</code></pre>
|
<python><numpy><numba>
|
2024-01-24 06:14:52
| 0
| 9,259
|
Carpetfizz
|
77,870,725
| 3,810,748
|
Python Plotly: X-Axis Glitches When Attempting to Draw Unified Spike Across Multiple Subplots
|
<h3>Background</h3>
<p>I am currently attempting to draw a unified spike across multiple subplots when hovering. I want to do this to improve the visualization of the x-axis. I am currently following <a href="https://github.com/plotly/plotly.py/issues/1677#issuecomment-514661015" rel="nofollow noreferrer">advice from this thread on GitHub.</a></p>
<h3>My Problem</h3>
<p>However, when I try to do it, the x-axis immediately glitches out and shoves all the data to the far right of the chart. Could anyone more familiar with Plotly explain why this is happening? I have included a reproducible piece of code below.</p>
<p><strong>Note: In order to see the visual bug, you have to uncomment the problematic lines</strong></p>
<h3>Reproducible Code</h3>
<pre><code>import plotly.graph_objects as go
from plotly.subplots import make_subplots
from datetime import date
trace1 = go.Scatter(x=[date(2023, 1, 15), date(2023, 2, 15), date(2023, 3, 15), date(2023, 4, 15),
date(2023, 5, 15), date(2023, 6, 15), date(2023, 7, 15), date(2023, 8, 15),
date(2023, 9, 15), date(2023, 10, 15), date(2023, 11, 15), date(2023, 12, 15)],
y=[1000, 2000, 3000, 4000,
5000, 6000, 7000, 8000,
9000, 10000, 11000, 12000],
name='trace1')
trace2 = go.Scatter(x=[date(2023, 1, 15), date(2023, 2, 15), date(2023, 3, 15), date(2023, 4, 15),
date(2023, 5, 15), date(2023, 6, 15), date(2023, 7, 15), date(2023, 8, 15),
date(2023, 9, 15), date(2023, 10, 15), date(2023, 11, 15), date(2023, 12, 15)],
y=[10_000, 20_000, 30_000, 40_000,
50_000, 60_000, 70_000, 80_000,
90_000, 100_000, 110_000, 120_000],
name='trace2')
fig = make_subplots(rows=2, cols=1, shared_xaxes=True, vertical_spacing=0.02)
fig.add_trace(trace1, row=1, col=1)
fig.add_trace(trace2, row=2, col=1)
fig.add_annotation(
xref='x domain',
yref='y domain',
x=0,
y=1,
text='FIRST ANNOTATION',
showarrow=False,
font=dict(size=14, color='blue'),
row=1, col=1
)
fig.add_annotation(
xref='x domain',
yref='y domain',
x=0,
y=1,
text='SECOND ANNOTATION',
showarrow=False,
font=dict(size=14, color='red'),
row=2, col=1
)
# fig.update_traces(xaxis='x')
# fig.update_xaxes(spikemode='across+marker')
# fig.update_shapes(selector=dict(type="line"), xref="x domain")
# fig.update_shapes(selector=dict(type="rect"), xref="x")
fig.update_layout(height=600, width=600, title_text='Example Chart')
fig.show()
</code></pre>
<h3>Chart Outputs</h3>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Before Uncommenting</th>
<th>After Uncommenting</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://i.sstatic.net/XKkXl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XKkXl.png" alt="enter image description here" /></a></td>
<td><a href="https://i.sstatic.net/ttKuK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ttKuK.png" alt="enter image description here" /></a></td>
</tr>
</tbody>
</table>
</div>
|
<python><plotly>
|
2024-01-24 05:27:27
| 1
| 6,155
|
AlanSTACK
|
77,870,655
| 22,285,820
|
Replacing google slides images using slides api in shared google drives
|
<p>Following up on my <a href="https://stackoverflow.com/questions/77706103/replacing-google-slides-images-using-slides-api">previous</a> question: I am now facing an error when I use a <code>shared google drive</code> with the <code>service account</code> having the <code>Manager</code> privilege on that drive; the image replacement in Google Slides fails. Using my personal account, however, I can edit any file on that shared drive. With that, I have two questions:</p>
<ol>
<li><p>why slides API's <code>batchUpdate</code> method needs public access when used to replace images in slides using <code>replaceImage</code> ?</p>
</li>
<li><p>what is the difference when interacting with the shared drives with <code>personal account</code> and <code>service account</code> ?</p>
</li>
</ol>
<p>Any suggestions and help are appreciated; note that I am still learning about the Google APIs. Thank you in advance.</p>
<p>When I use my personal account to edit the slides, it works.</p>
|
<python><presentation><google-slides-api><google-slides><google-drive-shared-drive>
|
2024-01-24 05:01:53
| 0
| 596
|
nisakova
|
77,870,505
| 1,802,483
|
Python 3.12 Write Chinese in Excel CSV - UTF-8-SIG not work
|
<p>I am using Python 3.12.1 and upload it to AWS Lambda.</p>
<p>What I am doing is to get data from a MySQL DB (with some Chinese text in it) and export to Excel CSV.</p>
<p>Here is the code:</p>
<pre class="lang-py prettyprint-override"><code># Copied from https://gist.github.com/tobywf/3773a7dc896f780c2216c8f8afbe62fc#file-unicode-csv-excel-py
with open(self.full_csv_path, 'w', encoding='utf-8-sig', newline='') as fp:
writer = csv.writer(fp)
writer.writerow(['Row', 'Emoji'])
for i, emoji in enumerate(['🎅', '🤔', '😎']):
writer.writerow([str(i), emoji])
</code></pre>
<p>Result in (I use Excel: Data > From text to import, not double click)</p>
<p><a href="https://i.sstatic.net/m4rcn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/m4rcn.png" alt="enter image description here" /></a></p>
<p>This also did not work:</p>
<pre class="lang-py prettyprint-override"><code>with open(self.full_csv_path, 'w', encoding='utf-8-sig') as csvfile:
# Did not work
csvfile.write("許蓋功")
# Did not work, also tried 'utf-8'
csvfile.write("許蓋功".encode('utf-8-sig').decode('utf-8-sig'))
</code></pre>
<p>Tried this, not working as well</p>
<pre class="lang-py prettyprint-override"><code># Write CSV BOM mark
csvfile.write('\ufeff') # did not work
csvfile.write(u'\ufeff') # did not work
csvfile.write(u'\ufeff'.encode('utf8').decode("utf8")) # did not work
</code></pre>
<p>It will prepend the above text to the excel file, not BOM mark</p>
<p>It seems clear that the string is treated as UTF-8 encoded, but for some reason it fails to come out as correct UTF-8.</p>
<p>Can you all please help?</p>
<p>Thank you very much.</p>
<p><strong>EDIT</strong>
What I want to do is attach this CSV file containing Chinese characters to an email and send it out from AWS Lambda.</p>
<p>Here is the code to send out email via SES:</p>
<pre class="lang-py prettyprint-override"><code>
# Create a multipart/alternative child container.
msg_body = MIMEMultipart('alternative')
# Encode the text and HTML content and set the character encoding. This step is
# necessary if you're sending a message with characters outside the ASCII range.
textpart = MIMEText(BODY_TEXT.encode(CHARSET), 'plain', CHARSET)
htmlpart = MIMEText(BODY_HTML.encode(CHARSET), 'html', CHARSET)
# Add the text and HTML parts to the child container.
msg_body.attach(textpart)
msg_body.attach(htmlpart)
# Define the attachment part and encode it using MIMEApplication.
att = MIMEApplication(open(ATTACHMENT, 'r', encoding='utf-8').read())
# Add a header to tell the email client to treat this part as an attachment,
# and to give the attachment a name.
att.add_header('Content-Disposition','attachment',filename=os.path.basename(ATTACHMENT))
# Attach the multipart/alternative child container to the multipart/mixed
# parent container.
msg.attach(msg_body)
# Add the attachment to the parent container.
msg.attach(att)
# print(msg)
response = ''
try:
#Provide the contents of the email.
response = client.send_raw_email(
Source=SENDER,
# Destinations=[ RECIPIENT ],
Destinations=RECIPIENT,
RawMessage={
'Data':msg.as_string(),
}
)
# Display an error if something goes wrong.
except ClientError as e:
print(e.response['Error']['Message'])
else:
print("Email sent! Message ID:"),
print(response['MessageId'])
print(f'Attachment: {ATTACHMENT}')
</code></pre>
<p>I am thinking about this line:</p>
<pre class="lang-py prettyprint-override"><code>RawMessage={
'Data':msg.as_string(),
}
</code></pre>
<p>It may be the cause of all this mess. But I have no idea how it works.</p>
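One likely culprit (a sketch, assuming the attachment is intact on disk): reading the attachment in text mode and handing a <code>str</code> to <code>MIMEApplication</code>. Opening the file in binary mode passes the BOM and the UTF-8 bytes through unchanged, and the <code>email</code> package base64-encodes them losslessly:

```python
import os
import tempfile
from email.mime.application import MIMEApplication

# Stand-in for the real CSV written with utf-8-sig
tmp = tempfile.NamedTemporaryFile(suffix=".csv", delete=False)
tmp.close()
with open(tmp.name, "w", encoding="utf-8-sig", newline="") as f:
    f.write("Row,Name\r\n0,許蓋功\r\n")

# Read as *bytes*: no decode/re-encode round trip to go wrong
with open(tmp.name, "rb") as f:
    att = MIMEApplication(f.read())
att.add_header("Content-Disposition", "attachment",
               filename=os.path.basename(tmp.name))

payload = att.get_payload(decode=True)
print(payload.startswith(b"\xef\xbb\xbf"))   # True: BOM survived
print("許蓋功".encode("utf-8") in payload)    # True
```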
|
<python><excel><csv><utf-8>
|
2024-01-24 03:56:17
| 1
| 705
|
Ellery Leung
|
77,870,483
| 9,184,839
|
How reboot remote machine via ssh avoiding timeout?
|
<p>I have this part of my code where I am trying to reboot a remote machine over SSH:</p>
<pre><code>def check_and_reboot(ssh, hostname):
# if not ssh_connect(ssh, hostname):
# print("SSH failed, restarting {}".format(hostname))
# Step 2: If connection fails, reboot the machine
if not reboot_machine(ssh, hostname):
print("Failed to reboot machine {}. Exiting.".format(hostname))
# return
# Step 3: Run the command over SSH
command = "ls -asl /raid/dfs/aiceph"
ssh_connect(ssh, hostname)
output, exit_code = execute_remote_command(ssh, command)
# Step 4: If the command returns an error code, reboot the machine
if exit_code != 0:
try:
# Execute the reboot command
_, stdout, stderr = ssh.exec_command("sudo nohup /sbin/reboot -f > /dev/null 2>&1 &")
# Wait for the command to complete
exit_status = stdout.channel.recv_exit_status()
# If the exit status is 0, the reboot command was sent successfully
if exit_status == 0:
print("Reboot command sent successfully")
return True
else:
print("Failed to send reboot command with exit status {}".format(exit_status))
return False
except Exception as e:
# If there is an exception, print the error and return False
print("Error sending reboot command: {}".format(str(e)))
return False
# Step 5: Validation successful
print("Validation successful for machine {}".format(hostname))
</code></pre>
<p>I set <code>exit_code = 1</code> to verify the flow, and the code gets stuck at step 4 because it keeps waiting for a response from the reboot command. How can I overcome this without having to wait for the reboot command's response?</p>
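One way to sidestep the hang, sketched here with a hypothetical <code>send_reboot</code> helper: treat the reboot as fire-and-forget. Set a channel timeout and never call <code>recv_exit_status()</code>, since the remote sshd dies mid-reboot and an exit status may never arrive:

```python
import socket

def send_reboot(ssh, command="sudo nohup /sbin/reboot -f > /dev/null 2>&1 &"):
    # Fire-and-forget: a short timeout bounds the wait, and losing the
    # connection here usually just means the machine is going down.
    try:
        ssh.exec_command(command, timeout=5)
        return True
    except (socket.timeout, EOFError, OSError):
        return True
```

After sending the command, poll with fresh connection attempts (e.g. retry <code>ssh_connect</code> in a loop with a sleep) until the host comes back.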
|
<python><ubuntu><ssh><paramiko><reboot>
|
2024-01-24 03:49:22
| 1
| 505
|
Aravind S
|
77,870,417
| 574,563
|
no data from polygon.io websocket
|
<p>I'm running a python script locally in the console to connect to polygon.io's stocks websocket to get aggregate second data.</p>
<p>When I run this code, I don't get any errors, but I also see no data streaming in the console. I am using AM.TSLA as a test. I verified with Polygon support that my API key can access websocket data.</p>
<p>Here is my code:</p>
<pre><code>import pandas as pd
import sys
import json
sys.path.insert(0, '/wamp64/www/market-data/py/client-python')
from polygon import WebSocketClient
from polygon.websocket.models import WebSocketMessage
from typing import List
def on_open(ws):
print("WebSocket connection opened.")
def handle_msg(msg: List[WebSocketMessage]):
for m in msg:
# Print the raw message data received
print("Raw message data:", m)
# If m.json is the correct attribute containing the message data
print("Parsed message data:", m.json)
def on_close(ws):
print("Connection closed")
def on_error(ws, error):
print("Error:", error)
def main():
print("Starting WebSocket client...")
# Polygon API key
api_key = '####'
# Create a WebSocket client
ws_client = WebSocketClient(api_key=api_key, subscriptions=["AM.TSLA"])
print("Getting AM.TSLA data...")
# Assign the callbacks
ws_client.on_open = on_open
ws_client.on_message = handle_msg
ws_client.on_close = on_close
ws_client.on_error = on_error
ws_client.run(handle_msg)
# Keep the script running
print("WebSocket client is running... Press Enter to stop listening.")
try:
# Wait for user input to close the connection
input()
finally:
# Ensure the WebSocket connection is closed properly
ws_client.close()
print("WebSocket connection closed.")
if __name__ == "__main__":
main()
</code></pre>
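One structural issue worth noting: <code>ws_client.run(handle_msg)</code> blocks until the connection closes, so the lines after it (including the <code>input()</code> prompt) are never reached while streaming. Below is a stub-based sketch of the usual pattern, running the blocking client on a worker thread (<code>StubClient</code> is a stand-in for the real <code>WebSocketClient</code>):

```python
import queue
import threading
import time

class StubClient:
    """Stand-in for a blocking websocket client such as polygon's."""
    def __init__(self):
        self._stop = threading.Event()

    def run(self, handler):
        while not self._stop.is_set():   # blocks, like WebSocketClient.run
            handler(["AM.TSLA tick"])
            time.sleep(0.01)

    def close(self):
        self._stop.set()

msgs = queue.Queue()
client = StubClient()
worker = threading.Thread(target=client.run, args=(msgs.put,), daemon=True)
worker.start()                 # main thread stays free for input()/shutdown
time.sleep(0.05)
client.close()
worker.join(timeout=1)
print(msgs.qsize() > 0)        # True: messages arrived while the main thread was free
```

Also note that aggregate channels only emit while the market is producing bars, so testing outside trading hours can look like silence.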
|
<python><websocket><polygon.io>
|
2024-01-24 03:28:03
| 1
| 487
|
Alexnl
|
77,870,183
| 10,518,698
|
Changing the name of the input layer of a pre-trained model in TensorFlow Transfer Learning
|
<p>I am trying to change the input layer's name of a pre-trained TensorFlow model imported from Keras.</p>
<p>I am able to set the output layer's name, but I keep failing to set the input model's name. Is it even possible to change the input layer name of a pre-trained model?</p>
<p>I can't find any info about this in the Keras documentation either. Not sure if I am missing something.</p>
<pre><code>tf_model = Sequential(name = 'mobilenet_model')
pretrained_model = tf.keras.applications.MobileNetV3Small(include_top = False,
input_shape = (180, 180, 3),
pooling = 'avg',
classes = len(CLASS_NAMES),
weights = 'imagenet')
for layer in pretrained_model.layers:
layer.trainable = False
tf_model.add(pretrained_model)
tf_model.add(Flatten())
tf_model.add(Dense(512, activation = 'relu'))
tf_model.add(Dense(len(CLASS_NAMES), activation = 'softmax', name = 'MODEL_OUTPUT_LAYER'))
tf_model.summary()
</code></pre>
<p>Currently the input model name is the default "MobilenetV3small" but I want to change it to something else.</p>
<p><a href="https://i.sstatic.net/7rQ4H.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7rQ4H.png" alt="enter image description here" /></a></p>
|
<python><tensorflow><keras><tensorflow-lite><transfer-learning>
|
2024-01-24 01:56:24
| 1
| 513
|
JSVJ
|
77,869,957
| 896,112
|
Numba fails to compile `jitclass` with with constructor accepting Numpy array arguments
|
<p>The following implementations of <code>evaluate</code> compile correctly :</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import numpy.typing as npt
from numba import njit
from numba.experimental import jitclass
</code></pre>
<p><strong>Standalone Function</strong></p>
<pre class="lang-py prettyprint-override"><code>@njit
def evaluate(x : npt.NDArray[np.float64], m : float, b : float) -> npt.NDArray[np.float64]:
return m * x + b
x = np.linspace(0, 100)
y = evaluate(x, 2, 3)
</code></pre>
<p><strong><code>@jitclass</code> with Standalone Function</strong></p>
<pre class="lang-py prettyprint-override"><code>@jitclass
class LineEvaluator:
def __init__(self):
...
def evaluate(self, x : npt.NDArray[np.float64], m : float, b : float) -> npt.NDArray[np.float64]:
return m * x + b
x = np.linspace(0, 100)
y = LineEvaluator().evaluate(x, 2, 3)
</code></pre>
<p>However the following implementation fails to compile with an error :</p>
<p><strong><code>@jitclass</code> with Arguments in Constructor</strong></p>
<pre class="lang-py prettyprint-override"><code>@jitclass
class LineEvaluator:
def __init__(self, x : npt.NDArray[np.float64], m : float, b : float):
self.x = x
self.m = m
self.b = b
def evaluate(self) -> npt.NDArray[np.float64]:
return self.m * self.x + self.b
x = np.linspace(0, 100)
y = LineEvaluator(x, 2, 3).evaluate()
</code></pre>
<pre><code>Failed in nopython mode pipeline (step: nopython frontend)
Cannot resolve setattr: (instance.jitclass.LineEvaluator#117caf610<>).x = array(float64, 1d, C)
File "test.py", line 9:
def __init__(self, x : npt.NDArray[np.float64], m : float, b : float):
self.x = x
^
During: typing of set attribute 'x' at /private/tmp/test.py (9)
File "test.py", line 9:
def __init__(self, x : npt.NDArray[np.float64], m : float, b : float):
self.x = x
^
During: resolving callee type: jitclass.LineEvaluator#117caf610<>
During: typing of call at <string> (3)
During: resolving callee type: jitclass.LineEvaluator#117caf610<>
During: typing of call at <string> (3)
File "<string>", line 3:
<source missing, REPL/exec in use?>
</code></pre>
<p><strong><code>@jitclass</code> Member Type Annotations</strong></p>
<pre class="lang-py prettyprint-override"><code>@jitclass
class LineEvaluator:
x : npt.NDArray[np.float64]
m : float
b : float
def __init__(self, x : npt.NDArray[np.float64], m : float, b : float):
self.x = x
self.m = m
self.b = b
def evaluate(self) -> npt.NDArray[np.float64]:
return self.m * self.x + self.b
x = np.linspace(0, 100)
y = LineEvaluator(x, 2, 3).evaluate()
</code></pre>
<pre><code>Traceback (most recent call last):
File "/private/tmp/test.py", line 7, in <module>
class LineEvaluator:
File "/Users/xx/miniconda3/envs/xx/lib/python3.10/site-packages/numba/experimental/jitclass/decorators.py", line 88, in jitclass
return wrap(cls_or_spec)
File "/Users/xx/miniconda3/envs/xx/lib/python3.10/site-packages/numba/experimental/jitclass/decorators.py", line 77, in wrap
cls_jitted = register_class_type(cls, spec, types.ClassType,
File "/Users/xx/miniconda3/envs/xx/lib/python3.10/site-packages/numba/experimental/jitclass/base.py", line 180, in register_class_type
spec[attr] = as_numba_type(py_type)
File "/Users/xx/miniconda3/envs/xx/lib/python3.10/site-packages/numba/core/typing/asnumbatype.py", line 121, in __call__
return self.infer(py_type)
File "/Users/xx/miniconda3/envs/xx/lib/python3.10/site-packages/numba/core/typing/asnumbatype.py", line 115, in infer
raise errors.TypingError(
numba.core.errors.TypingError: Cannot infer numba type of python type numpy.ndarray[typing.Any, numpy.dtype[numpy.float64]]
</code></pre>
<p>The error messages are pretty opaque, why is the compilation failing in this specific case?</p>
<p>Thank you!</p>
|
<python><numpy><jit>
|
2024-01-24 00:24:40
| 1
| 9,259
|
Carpetfizz
|
77,869,940
| 9,213,600
|
How to delete the outer circle in an image and only keep the content inside it in OpenCV?
|
<p>I want to segment the bacteria types inside this microscopic image (the original is grayscale). Applying various thresholding operations to convert the image into binary format returns noisy results that affect the erosion/dilation/morphological operations that come later. I want to remove the outer circle (the container) of the image without affecting the instances inside it.</p>
<p><a href="https://i.sstatic.net/CoGoY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CoGoY.png" alt="enter image description here" /></a></p>
<p>The issue is, having this circle will affect the quality of the binary image and hence will lead to lower quality outcomes in the next steps.</p>
<p>Here is the current binary image, you can see the outer circle slightly visible:
<a href="https://i.sstatic.net/26cWr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/26cWr.png" alt="enter image description here" /></a></p>
<p>Here is the code I am currently using to get the binary form of the gray image:</p>
<pre><code>import cv2
import numpy as np
import matplotlib.pyplot as plt

image = cv2.imread(image_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Sharpen before thresholding
kernel = np.array([[-1, -1, -1],
                   [-1, 9, -1],
                   [-1, -1, -1]])
sharpened_image = cv2.filter2D(image, -1, kernel)

gray_image = cv2.cvtColor(sharpened_image, cv2.COLOR_BGR2GRAY)
binary_image = cv2.threshold(gray_image, 190, 255, cv2.THRESH_BINARY_INV)[1]

plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.imshow(cv2.cvtColor(sharpened_image, cv2.COLOR_BGR2RGB))
plt.title('Original Sharpened Image')
plt.axis('off')
plt.subplot(1, 2, 2)
plt.imshow(binary_image, cmap='gray')
plt.title('Binary Image')
plt.axis('off')
plt.show()
</code></pre>
|
<python><opencv><image-processing>
|
2024-01-24 00:18:54
| 0
| 1,104
|
Ali H. Kudeir
|
77,869,427
| 2,394,163
|
When developing Ansible plugins, how should you use module_utils imports?
|
<p>When developing custom Ansible modules in an Ansible collection, you often want to share code between multiple modules.</p>
<p>If I have the following directory structure in my collection:</p>
<pre><code>collection/
  ...
  plugins/
    modules/
      module_one.py
      module_two.py
    module_utils/
      utils.py
  ...
</code></pre>
<p>What is the correct way to import the shared code between all the different modules in your <code>collection/plugins/modules</code> directory? In this case <code>module_one.py</code> and <code>module_two.py</code>.</p>
<p>Should I use the short path?</p>
<pre class="lang-py prettyprint-override"><code>from ansible.module_utils.utils import helper_function
</code></pre>
<p>Or the full path?</p>
<pre><code>from ansible.module_utils.namespace.collection.plugins.module_utils.utils import helper_function
</code></pre>
<p><a href="https://docs.ansible.com/ansible/latest/dev_guide/developing_module_utilities.html#using-and-developing-module-utilities" rel="nofollow noreferrer">Using and developing module utilities</a> says the following:</p>
<blockquote>
<p>The ansible.module_utils namespace is not a plain Python package: it is constructed dynamically for each task invocation, by extracting imports and resolving those matching the namespace against a search path derived from the active configuration.</p>
<p>To reduce the maintenance burden in a collection or in local modules, you can extract duplicated code into one or more module utilities and import them into your modules. For example, if you have your own custom modules that import a my_shared_code library, you can place that into a ./module_utils/my_shared_code.py file like this:</p>
<p><code>from ansible.module_utils.my_shared_code import MySharedCodeClient</code></p>
<p>When you run ansible-playbook, Ansible will merge any files in your local module_utils directories into the ansible.module_utils namespace in the order defined by the Ansible search path.</p>
</blockquote>
<p>This seems to imply that you use the short version above.</p>
<p>Looking at popular Open Source module code, like <a href="https://github.com/ansible-collections/amazon.aws/blob/main/plugins/modules/autoscaling_group.py#L666" rel="nofollow noreferrer">autoscaling_group.py</a>, I see the following</p>
<pre class="lang-py prettyprint-override"><code>from ansible_collections.amazon.aws.plugins.module_utils.botocore import is_boto3_error_code
</code></pre>
<p>This is using the long form.</p>
<p>Additionally, both seem to fail when testing your code directly via python based on the information at <a href="https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#verifying-your-module-code-locally" rel="nofollow noreferrer">Verifying your module code locally</a></p>
<pre class="lang-py prettyprint-override"><code>python plugins/modules/module_one.py /tmp/args.json
Traceback (most recent call last):
File ".../ansible/collections/namespace/collection/plugins/modules/module_one.py", line 69, in <module>
from ansible.module_utils.utils import helper_function
ModuleNotFoundError: No module named 'ansible.module_utils.utils'
</code></pre>
<ul>
<li>What is the correct format to import shared code?</li>
<li>What is the correct way to test Ansible modules directly using Python?</li>
</ul>
|
<python><python-3.x><ansible><ansible-collections>
|
2024-01-23 21:42:26
| 1
| 2,054
|
Nick
|
77,869,389
| 7,087,604
|
Fast boolean interaction matrix with Numpy
|
<p>I have 2 integer vectors, one very long <code>document</code> (1E5 to 1E7 elements), and one rather short <code>query</code> (typically 5-8 elements). I want to create a 2D boolean matrix <code>(i, j)</code> that puts <code>1</code> where <code>document[i] == query[j]</code>, otherwise <code>0</code>. For example:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;">Document \ Query</th>
<th>5</th>
<th>2</th>
<th>4</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">2</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td style="text-align: right;">8</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td style="text-align: right;">...</td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table>
</div>
<p>Is there a fast way to do this with NumPy, that is, without a Python loop? (Using Pandas here is not an option.)</p>
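<p>For reference, plain broadcasting produces exactly this matrix without any Python loop (a minimal sketch using the example values above):</p>

```python
import numpy as np

document = np.array([2, 8, 3, 4])
query = np.array([5, 2, 4])

# Broadcasting an (n, 1) column against an (m,) row compares every pair
# at once, yielding an (n, m) boolean matrix in one vectorized operation.
matrix = document[:, None] == query[None, :]
print(matrix.astype(int))
```

<p>Rows index the document, columns index the query, reproducing the table above.</p>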
|
<python><numpy>
|
2024-01-23 21:33:43
| 1
| 713
|
Aurélien Pierre
|
77,869,307
| 152,873
|
Altair: How to format times in axis labels in a non-UTC non-local timezone?
|
<p>I'm trying to plot some time series data with Altair, and have the times (on the axis labels) show up in a fixed timezone that's not UTC and not the user's local timezone. As far as I can tell from the docs, those are the only options supported (local being the default, and <a href="https://altair-viz.github.io/user_guide/times_and_dates.html#using-utc-time" rel="nofollow noreferrer">UTC can be used as described here</a>). Is there a way to configure this that I'm missing?</p>
<p>Using the times directly as string values is not an option because this is not supported with the interactive selection interval.</p>
<p>Another alternative I looked into was formatting the timestamp into the target timezone in Python and passing it to Altair as a string, but that didn't seem to work either:</p>
<pre class="lang-py prettyprint-override"><code>import altair as alt
import pandas as pd

data = pd.DataFrame({
    "temp": [-28, -28, -27],
    "timestamp": [1706043000, 1706043060, 1706043120]
})
data['time'] = pd.to_datetime(data['timestamp'], utc=True, unit='s')
data['formatted_time'] = data['time'].dt.tz_convert('Antarctica/South_Pole').dt.strftime('%Y-%m-%d %H:%M:%S')

selection = alt.selection_interval(bind='scales', encodings=['x'])

(alt.Chart(data)
 .mark_line()
 # Errors out: Unrecognized signal name: "formatted_time"
 # .encode(x=alt.X('time:T', axis=alt.Axis(values=alt.ExprRef('formatted_time'))), y='temp')
 # Shows nothing on the X axis?
 .encode(x=alt.X('time:T', axis=alt.Axis(values=data['formatted_time'].values)), y='temp')
 .add_selection(selection))
</code></pre>
<p><a href="https://i.sstatic.net/KuvMd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KuvMd.png" alt="graph with no values on the x axis" /></a></p>
|
<python><altair><vega-lite>
|
2024-01-23 21:15:32
| 2
| 9,164
|
aviraldg
|
77,869,275
| 7,155,895
|
Converting a theme part from Tcl to Python
|
<p>I have a problem I don't know how to solve, due to my lack of knowledge. I'm trying to recreate the vertical scrollbar of the "<a href="https://github.com/rdbende/Azure-ttk-theme/blob/main/theme/dark.tcl" rel="nofollow noreferrer">Azure-ttk-theme</a>" (the program is for personal, not commercial, use). To help myself, I used "<a href="https://svn.python.org/projects/sandbox/trunk/ttk-gsoc/samples/plastik_theme.py" rel="nofollow noreferrer">plastik_theme.py</a>", and was mostly successful in translating it from Tcl to Python. One last part remains that I can't translate, because I can't find the same or similar examples anywhere:</p>
<pre><code>ttk::style element create Vertical.Scrollbar.thumb \
    image [list $I(vert-accent) \
        disabled $I(vert-basic) \
        pressed $I(vert-hover) \
        active $I(vert-hover) \
    ] -sticky ns
</code></pre>
<p>How should I write this in Python? Unfortunately, I haven't found anything helpful, not even converters or similar tools. I don't want to use the whole theme, just its scrollbar, so what should I do? Thank you all.</p>
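<p>For context, the closest Python counterpart I can see (a sketch based on the <code>tkinter.ttk</code> <code>Style.element_create</code> image-element API; the <code>imgs</code> dict of already-loaded <code>PhotoImage</code> objects is an assumption, not part of the original snippet):</p>

```python
from tkinter import ttk

def create_thumb_element(style: ttk.Style, imgs: dict) -> None:
    """Python counterpart of the Tcl 'ttk::style element create' call above.

    `imgs` is assumed to map the theme's image names ('vert-accent',
    'vert-basic', 'vert-hover') to tkinter.PhotoImage objects you have
    already loaded from the theme's image files.
    """
    style.element_create(
        "Vertical.Scrollbar.thumb", "image", imgs["vert-accent"],
        # Each statespec tuple is (state..., image), mirroring the Tcl list.
        ("disabled", imgs["vert-basic"]),
        ("pressed", imgs["vert-hover"]),
        ("active", imgs["vert-hover"]),
        sticky="ns",
    )
```

<p>The statespec tuples map one-to-one onto the Tcl <code>[list ...]</code> entries, and <code>-sticky ns</code> becomes the <code>sticky="ns"</code> keyword argument.</p>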
|
<python><tkinter><python-3.11><tlc>
|
2024-01-23 21:08:39
| 1
| 579
|
Rebagliati Loris
|
77,869,089
| 353,337
|
mypy doesn't realize a dataclass's member is really what it is via `__post_init__`
|
<p>I have a <code>dataclass</code> that contains a <code>list[tuple[str,str]]</code>, and I'd like to be able to initialize with a <code>dict[str,str]</code> too. Programmatically it's this:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass


@dataclass
class Foobar:
    list_of_tuples: list[tuple[str, str]]

    def __post_init__(self):
        if isinstance(self.list_of_tuples, dict):
            self.list_of_tuples = list(self.list_of_tuples.items())


Foobar({"a": "b"})
</code></pre>
<p>but <a href="https://github.com/python/mypy" rel="nofollow noreferrer">mypy</a> isn't happy:</p>
<pre><code>e.py:12: error: Argument 1 to "Foobar" has incompatible type "dict[str, str]"; expected "list[tuple[str, str]]" [arg-type]
</code></pre>
<p>mypy doesn't realize I transform the <code>dict</code> to <code>list[tuple]</code> straight after initialization.</p>
<p>Unfortunately, there's no <code>__pre_init__</code> for dataclasses. I'd like to avoid overriding <code>__init__()</code> as well, if possible.</p>
<p>Any hints?</p>
|
<python><mypy>
|
2024-01-23 20:28:48
| 1
| 59,565
|
Nico Schlömer
|
77,869,087
| 2,410,605
|
Selenium Python Trying to Better Understand WebDriverWait and Splash/Transition Screens
|
<p>I'm coding for a website that uses the transition screen below in between each data screen, including when you first connect to the site.</p>
<p><a href="https://i.sstatic.net/PkXb9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PkXb9.png" alt="enter image description here" /></a></p>
<p>So this screen appears before the initial login screen. My code to sign into the page is:</p>
<pre><code>try:
    # username
    user_val = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.ID, "username")))
    user_val.send_keys(usr)
    # password
    pwd_val = WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.ID, "password")))
    pwd_val.send_keys(pwd)
    pwd_val.send_keys(Keys.ENTER)
except:
    logger.info("Step 1: Login prompt not found - could not log into Veritas.")
</code></pre>
<p><a href="https://i.sstatic.net/HKLLc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HKLLc.png" alt="enter image description here" /></a></p>
<p>Because of the transition screen it's blowing past this code and hitting the exception. I've been able to correct the problem by putting time.sleep(3) as the first line of code, but I would have thought that using WebDriverWait(browser, 10)...would have searched for this node for 10 seconds before giving up on it. Is that not happening because it's looking for this element on the transition screen? And if that's the case, is there a better way to delay my code until the transition screen has passed? I hate using the time.sleep method but it's the only thing I could think to do.</p>
|
<python><selenium-webdriver><webdriverwait>
|
2024-01-23 20:28:24
| 0
| 657
|
JimmyG
|
77,868,946
| 353,337
|
mypy doesn't understand that the function returns a string
|
<p>I have a function that returns a <code>str</code> or a specified fallback value (<code>str</code> or <code>None</code>). When giving a <code>str</code> fallback, the function is guaranteed to return a <code>str</code>. Running <a href="https://github.com/python/mypy" rel="nofollow noreferrer">mypy</a> on this however gives an error.</p>
<p>MWE:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any


def f(x: str, fallback: str | None) -> str | None:
    if x.lower() == "yes":
        return "okay"
    return fallback


f("no", fallback="").lower()
</code></pre>
<pre><code>f.py:10: error: Item "None" of "str | None" has no attribute "lower" [union-attr]
</code></pre>
<p>How can I tell mypy that it's safe to call <code>lower()</code> on the return value of the function?</p>
|
<python><mypy>
|
2024-01-23 20:00:47
| 3
| 59,565
|
Nico Schlömer
|
77,868,799
| 6,212,718
|
Polars dataframe: add columns conditional on other column yielding different lengths
|
<p>I have the following dataframe in <code>polars</code>.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import polars as pl

df = pl.DataFrame({
    "Buy_Signal": [1, 0, 1, 0, 0],
    "Returns": np.random.normal(0, 0.1, 5),
})
</code></pre>
<p>Ultimately, I want to do aggregations on column <code>Returns</code> conditional on different intervals - which are given by column <code>Buy_Signal</code>. In the above case the length is from each <code>1</code> to the end of the dataframe.</p>
<p><strong>What is the most idiomatic Polars way to do this?</strong></p>
<p>My "stupid" solution (before I can apply aggregations) is as follows:</p>
<pre class="lang-py prettyprint-override"><code>df = (df
    .with_columns(pl.col("Buy_Signal").cum_sum().alias("Holdings"))
    # (1) Determine returns for each holding period (>=1, >=2) and unpivot into one column
    .with_columns(
        pl.when(pl.col("Holdings") >= 1).then(pl.col("Returns")).alias("Port_1"),
        pl.when(pl.col("Holdings") >= 2).then(pl.col("Returns")).alias("Port_2"),
    )
)
</code></pre>
<p>The solution is obviously not working for many "buy signals". So I wrote a separate function <code>calc_port_returns</code> which includes a for loop which is then passed to polar's <code>pipe</code> function. See here:</p>
<pre class="lang-py prettyprint-override"><code>def calc_port_returns(_df: pl.DataFrame) -> pl.DataFrame:
    n_portfolios = _df["Buy_Signal"].sum()
    data = pl.DataFrame()
    _df = _df.with_columns(pl.col("Buy_Signal").cum_sum().alias("Holdings"))
    for i in range(1, n_portfolios + 1):
        tmp = (
            _df.with_columns(
                pl.when(pl.col("Holdings") >= i).then(pl.col("Returns")).alias(f"Port_{i}"),
            )
            .select(pl.col(f"Port_{i}"))
        )
        data = pl.concat([data, tmp], how="horizontal")
    _df = pl.concat([_df, data], how="horizontal")
    return _df


df = pl.DataFrame({
    "Buy_Signal": [1, 0, 1, 0, 0],
    "Returns": np.random.normal(0, 0.1, 5),
})
df.pipe(calc_port_returns)
</code></pre>
<p>What is the "polars" way to do this? In pandas I could imagine solving it using <code>df.assign({f"Port_{i}": ... for i in range(1, ...)})</code> with a few prior extra columns / side calculations.</p>
<p>Thanks for any suggestions.</p>
|
<python><python-polars>
|
2024-01-23 19:27:08
| 2
| 1,489
|
FredMaster
|
77,868,779
| 6,345,518
|
Python regex replace quotation marks
|
<p>Based on <a href="https://stackoverflow.com/a/77846735/6345518">my previous question</a>, I realized that many of my strings are in fact concatenated strings, which makes replacing quotation marks with C++ raw string literal delimiters even harder.</p>
<p>For instance, I'd like to substitute the quotation marks in this text:</p>
<pre><code>text = r'''
docstring = "mystr";
docstring = "some \"subs" + some_str + "next";
docstring = (
some_str + "cde" + some_str + "efg" + another_str + "ghi"
);
'''
</code></pre>
<p>to this</p>
<pre><code>target_text = r'''
docstring = R""""(mystr)"""";
docstring = R""""(some \"subs)"""" + some_str + R""""(next)"""";
docstring =
some_str + R""""(cde)"""" + some_str + R""""(efg)"""" + another_str + R""""(ghi)"""";
'''
</code></pre>
<p>with the parentheses around the last concatenation being dropped and preserving special characters like the escaped <code>\"</code>.</p>
<p>My current approach is the following regex</p>
<pre><code>import re

re.sub(
    r'(?<=[\s])((?:docstring|some_detailed_notes)\s*=\s*(?://.*\n\s*)*)(("|\(\s*")((?:[^"\\]|\\.)+)("|"\s*\)))([\+_a-z ]*)(\s*;)',
    fr'\1{cpp_rawstr_start}\3{cpp_rawstr_end}\5',
    text,
)
</code></pre>
<p>but of course this won't work, for two reasons:</p>
<ul>
<li>my regex doesn't match the groups unless there is only one concatenation</li>
<li>group order/numbering changes depending on the string to analyze. Honestly: Absolutely no idea how to deal with this...</li>
</ul>
<p>I also tried this <a href="https://regex101.com/r/2IGG9X/1" rel="nofollow noreferrer">regex on regex101</a> but can't make it work.</p>
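<p>For what it's worth, a simpler per-segment substitution (a sketch; it handles escaped quotes and arbitrarily many concatenations, but not the removal of the surrounding parentheses) can be done with a replacement callback, so each quoted string is rewritten independently:</p>

```python
import re

def to_raw(match: re.Match) -> str:
    # Build R""""(...)"""" via concatenation to avoid backreference
    # escaping pitfalls in the replacement string.
    return 'R""""(' + match.group(1) + ')""""'

text = r'docstring = "some \"subs" + some_str + "next";'

# Each double-quoted segment (escapes like \" included) matches on its own,
# so concatenated strings are converted piecewise.
result = re.sub(r'"((?:[^"\\]|\\.)*)"', to_raw, text)
print(result)
# docstring = R""""(some \"subs)"""" + some_str + R""""(next)"""";
```

<p>Because each quoted literal is matched separately, the group numbering problem disappears; only the parentheses-dropping step would still need separate handling.</p>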
|
<python><regex>
|
2024-01-23 19:22:35
| 2
| 5,832
|
JE_Muc
|
77,868,548
| 1,874,170
|
Standard way to convert errors into warnings?
|
<p>I've got this snippet of code, which converts certain <code>Exception</code>s into nonfatal <code>RuntimeWarning</code>s:</p>
<pre class="lang-py prettyprint-override"><code>    ...
    for f in filter(_tty_check, [sys.stderr, sys.stdout, sys.stdin]):
        cur_encoding = _tty_get_encoding(f)
        if f.encoding != cur_encoding:
            try:
                f.reconfigure(encoding=cur_encoding)  # 1. CATCH THE ERROR
            except (io.UnsupportedOperation, LookupError) as err:
                war = RuntimeWarning(*err.args)        # 2. CONVERT IT INTO A WARNING
                war.with_traceback(err.__traceback__)  # 3. RESTORE TRACEBACK
                if cur_encoding.startswith('x-ebcdic'):
                    try: f.reconfigure(encoding='cp037')
                    except (io.UnsupportedOperation, LookupError): pass
                warnings.warn(war)
    ...
</code></pre>
<p>However, this business of <code>try...except</code>, then creating the Warning, then manually sticking junk onto it looks a bit awkward and verbose.</p>
<p>Is there anything like the <strong>opposite</strong> of warnings.filter? Instead of a block where specified warnings become errors, I want a block where specified <em>errors</em> become <em>warnings</em>.</p>
<p>Or is there any way to just "cast" an error to become a warning without this kludgy 2-step where I inject the traceback back in?</p>
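<p>I'm not aware of a built-in "reverse filter", but the two-step can at least be packaged once in a context manager (a sketch; the name <code>errors_as_warnings</code> is mine):</p>

```python
import contextlib
import warnings


@contextlib.contextmanager
def errors_as_warnings(*exc_types, category=RuntimeWarning):
    """Run the body; convert the given exception types into warnings,
    carrying over the original args and traceback."""
    try:
        yield
    except exc_types as err:
        war = category(*err.args)
        war.with_traceback(err.__traceback__)
        warnings.warn(war, stacklevel=3)


with errors_as_warnings(LookupError):
    "x".encode("no-such-codec")  # raises LookupError -> emitted as a warning
print("still running")
```

<p>The awkward 2-step still happens, but only in one place; call sites just wrap the risky operation in the <code>with</code> block.</p>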
|
<python><exception><warnings>
|
2024-01-23 18:32:36
| 1
| 1,117
|
JamesTheAwesomeDude
|
77,868,494
| 10,101,636
|
Splitting a record into multiple records based on character length
|
<p>I have to split records in a dataframe into multiple records based on character lengths.<br />
<strong>UPDATE</strong>:</p>
<p>The dataframe has the key in a separate column. The <code>value</code> column should be split into 5-character chunks. The key should repeat in every row generated, and a sequence number is required for every 5-character chunk that is split into a new record.</p>
<pre><code>key | value
ABC  | 12345DEFMN45671
XYZ | 56712MNPQR2347762344
DEF | 38912JKLTR
</code></pre>
<p>Expected output:</p>
<pre><code>Key | seq | value
ABC | 1 | 12345
ABC | 2 | DEFMN
ABC | 3 | 45671
XYZ | 1 | 56712
XYZ | 2 | MNPQR
XYZ | 3 | 23477
XYZ | 4 | 62344
DEF | 1 | 38912
DEF | 2 | JKLTR
</code></pre>
<p>As shown above, there is a new sequence no column added to the dataframe. Please note that the seq no should exactly pertain to the order in which the split string occurs in the source.</p>
<p>Many thanks for your help.</p>
|
<python><pyspark>
|
2024-01-23 18:21:17
| 2
| 403
|
Matthew
|
77,868,464
| 907,714
|
Can I further speed up query to find all objects within a range in python
|
<p>Let's say I have a list of a simple dataclass like:</p>
<pre><code>@dataclass
class Widget:
    category: str
    weight: int
</code></pre>
<p>I want to write a function that returns all widgets that equal a certain category and are within a certain weight like:</p>
<pre><code>def query_widgets(category, lo_weight, hi_weight):
    pass
</code></pre>
<p>Assuming this list is fixed and I can preprocess this however I like, what is the fastest way to do this purely in python?</p>
<p>My solution is to sort the list by weight, create a map of category to widgets, and then find the indexes of the first and last occurrence by weight. This lets me quickly find the start and stop index.</p>
<p>Is there any way to speed this up? I tried using bisect instead of a loop to find the indexes, but that performed slightly worse.</p>
<p>Full Example:</p>
<pre><code>import random
import time
from collections import defaultdict
from dataclasses import dataclass
from typing import Iterable


@dataclass
class Widget:
    categories: str
    weight: int


class WidgetSearch:
    def __init__(self, widgets: Iterable[Widget]) -> None:
        self.widgets = sorted(widgets, key=lambda w: w.weight)
        self.category_to_widgets = defaultdict(list)
        self.category_weight_start_idx = defaultdict(dict)
        self.category_weight_end_idx = defaultdict(dict)
        for m in self.widgets:
            for category in m.categories:
                self.category_to_widgets[category].append(m)
        for category, widgets in self.category_to_widgets.items():
            prev_weight = None
            for i, widget in enumerate(widgets):
                weight = widget.weight
                if prev_weight != weight:
                    self.category_weight_start_idx[category][weight] = i
                    if prev_weight is not None:
                        self.category_weight_end_idx[category][prev_weight] = i
                    prev_weight = weight
            self.category_weight_end_idx[category][prev_weight] = len(widgets)

    def _find_most_recent_start_idx(self, category: str, weight: int) -> int:
        if weight in self.category_weight_start_idx[category]:
            return self.category_weight_start_idx[category][weight]
        for idx_weight, idx in self.category_weight_start_idx[category].items():
            if idx_weight > weight:
                return idx
        return 0

    def _find_most_recent_end_idx(self, category: str, weight: int) -> int:
        if weight in self.category_weight_end_idx[category]:
            return self.category_weight_end_idx[category][weight]
        for idx_weight in reversed(self.category_weight_end_idx[category].keys()):
            if idx_weight < weight:
                return self.category_weight_end_idx[category][idx_weight]
        return None

    def query_widgets(self, category: str, lo_weight: int, hi_weight: int) -> Iterable[Widget]:
        if category not in self.category_to_widgets or lo_weight > hi_weight:
            return []
        start = self._find_most_recent_start_idx(category, lo_weight)
        end = self._find_most_recent_end_idx(category, hi_weight)
        return self.category_to_widgets[category][start:end]


if __name__ == '__main__':
    widgets = []
    categories = list('ABCDEF')
    for _ in range(10000):
        cats = random.sample(categories, random.randint(1, 3))
        widgets.append(Widget(categories=cats, weight=random.randint(1000, 2000)))

    ws = WidgetSearch(widgets)

    start = time.perf_counter_ns()
    widgets = ws.query_widgets('C', 1400, 1800)
    end = time.perf_counter_ns()
    dur = (end - start) / 1000.0
    print(f'# Found: {len(widgets)} in {dur} microsec')
</code></pre>
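<p>For comparison, here is a bisect-based variant (a sketch; bisecting a parallel list of plain ints avoids per-element attribute access during the search, which may be why the earlier bisect attempt was slower):</p>

```python
import bisect
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Widget:
    categories: list
    weight: int


class BisectWidgetSearch:
    def __init__(self, widgets):
        by_cat = defaultdict(list)
        for w in sorted(widgets, key=lambda w: w.weight):
            for c in w.categories:
                by_cat[c].append(w)
        self.by_cat = dict(by_cat)
        # Parallel int lists per category, so bisect never touches the objects.
        self.weights = {c: [w.weight for w in ws] for c, ws in self.by_cat.items()}

    def query_widgets(self, category, lo_weight, hi_weight):
        ws = self.by_cat.get(category)
        if ws is None or lo_weight > hi_weight:
            return []
        idx = self.weights[category]
        # Inclusive [lo, hi] range: bisect_left for the start, bisect_right for the end.
        return ws[bisect.bisect_left(idx, lo_weight):bisect.bisect_right(idx, hi_weight)]
```

<p>The slicing at the end mirrors the start/end index lookup in the original, just with O(log n) searches over plain ints.</p>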
|
<python><algorithm><performance><optimization><data-structures>
|
2024-01-23 18:17:11
| 1
| 3,516
|
postelrich
|
77,868,386
| 1,041,958
|
Relative Import In Unittest
|
<p>I have reviewed several posts re: Relative vs Absolute imports, but none of the solutions proposed seemed to solve my problem.</p>
<p>I have a directory structure like the following:</p>
<pre><code>--src
  --dir1
    --signal
      __init__.py
      parent_class.py
      --hf
        __init__.py
        child_class.py
        child_class_tests.py
</code></pre>
<p>The import within <code>child_class.py</code> for the parent class is as follows <code>from ..parent_class import Parent</code></p>
<p>If I execute the <code>child_class_tests.py</code> with the pycharm Run configuration with a script path of: <code>C:\test\src\dir1\signal\hf\child_class_tests.py</code> with a working directory of <code>C:\test\src\dir1\signal\hf\</code> I get the exception: <code>ImportError: attempted relative import with no known parent package</code></p>
<p>within <code>child_class_tests.py</code> I have the following code:</p>
<pre><code>import unittest

import pandas as pd
import sys
import os

print(os.getcwd())

from child_class import Child


class TestSignals(unittest.TestCase):
    DATA = pd.read_pickle(r'data.pkl')

    def test_child_class(self):
        """
        Test
        """
        test_set = self.DATA.copy()
        print(test_set.head())


if __name__ == '__main__':
    unittest.main()
</code></pre>
<p>The error occurs when the attempted <code>from child_class import Child</code> triggers the <code>from ..parent_class import Parent</code> import inside the child class module.</p>
<p>I have tried adding absolute references as well as running from the higher directory and neither have worked. I'm using Python 3.9 with PyCharm 2021.1.1.</p>
<p>In following the other posts, I've tried to recreate the environments that they're referencing and have been unable to rectify the problem.</p>
|
<python><path><reference>
|
2024-01-23 18:04:22
| 1
| 955
|
StormsEdge
|
77,868,356
| 4,532,845
|
How to create a file-like wrapper that works just like a file but auto-deletes on close/__del__?
|
<pre class="lang-py prettyprint-override"><code>def ():
zip_fd = open(zip_path, mode="rb")
file = _TemporaryFileWrapper(zip_fd) # Like the one inside tempfile.NamedTemporaryFile
return FileResponse(file) # django
</code></pre>
<p>I would like the file to be deleted once Django has finished sending the response.</p>
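<p>A minimal wrapper along these lines (a sketch modeled loosely on tempfile's <code>_TemporaryFileWrapper</code>; not Django-specific, and the class name is mine) could look like:</p>

```python
import os


class SelfDeletingFile:
    """File-like wrapper that deletes the underlying file once it is
    closed (or garbage-collected, via __del__)."""

    def __init__(self, path, mode="rb"):
        self._path = path
        self._file = open(path, mode)

    def __getattr__(self, name):
        # Delegate read(), seek(), fileno(), ... to the wrapped file object.
        return getattr(self.__dict__["_file"], name)

    def __iter__(self):
        return iter(self._file)

    def close(self):
        try:
            self._file.close()
        finally:
            try:
                os.unlink(self._path)
            except FileNotFoundError:
                pass  # already removed (e.g. close() called twice)

    def __del__(self):
        try:
            self.close()
        except Exception:
            pass  # never raise from a destructor
```

<p>Since <code>FileResponse</code> closes the file-like object when the response finishes streaming, the overridden <code>close()</code> would be the hook where the deletion happens.</p>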
|
<python><django>
|
2024-01-23 17:57:39
| 1
| 391
|
Edgard Lima
|
77,868,289
| 10,446,433
|
Python subprocess & Poetry truncated/shortened stdout
|
<p>Since the <em>requirements.txt</em> files from <a href="https://python-poetry.org" rel="nofollow noreferrer">Poetry</a> are hard to read, I am trying to write a simple script that creates the <em>requirements.txt</em> file and, at the same time, a more human-readable file next to it. A fitting Poetry command is:</p>
<pre><code>~$ poetry show
</code></pre>
<p>OK..., so I start my subprocess to execute this:</p>
<pre class="lang-python prettyprint-override"><code>if __name__ == '__main__':
    PROJECT_DIR = Path(__file__).parent

    # CMD
    package_list = subprocess.run(['poetry', 'show'], cwd=PROJECT_DIR, encoding='utf_8', capture_output=True)
    # Powershell
    # package_list = subprocess.run(['powershell.exe', '-Command', 'poetry', 'show'], cwd=PROJECT_DIR, encoding='utf_8', capture_output=True)

    print(package_list.stdout)
</code></pre>
<p>The only problem is that I get this truncated content:</p>
<pre><code>alabaster 0.7.16 A light, configurable Sphin...
argon2-cffi 23.1.0 Argon2 for Python
argon2-cffi-bindings 21.2.0 Low-level CFFI bindings for...
asgiref 3.7.2 ASGI specs, helper code, an...
attrs 23.2.0 Classes Without Boilerplate
autopep8 2.0.4 A tool that automatically f...
babel 2.14.0 Internationalization utilities
certifi 2023.11.17 Python package for providin...
cffi 1.16.0 Foreign Function Interface ...
charset-normalizer 3.3.2 The Real First Universal Ch...
colorama 0.4.6 Cross-platform colored term...
concurrent-log-handler 0.9.25 RotatingFileHandler replace...
cryptography 41.0.7 cryptography is a package w...
django 5.0.1 A high-level Python web fra...
django-cors-headers 4.3.1 django-cors-headers is a Dj...
django-cprofile-middleware 1.0.5 Easily add cProfile profili...
django-rest-knox 4.2.0 Authentication for django r...
django-silk 5.1.0 Silky smooth profiling for ...
django-stubs 4.2.7 Mypy stubs for Django
django-stubs-ext 4.2.7 Monkey-patching and extensi...
djangorestframework 3.14.0 Web APIs for Django, made e...
djangorestframework-dataclasses 1.3.1 A dataclasses serializer fo...
djangorestframework-stubs 3.14.5 PEP-484 stubs for django-re...
docutils 0.20.1 Docutils -- Python Document...
drf-spectacular 0.27.1 Sane and flexible OpenAPI 3...
</code></pre>
<p>As you can see, I already tried using PowerShell, since I thought it was a CMD problem, but the result stays the same. I also tried redirecting <code>stdout</code> directly into a file, and an unbuffered run (<code>bufsize=0</code>); still the same.</p>
<p><strong>Now the question:</strong> What do I need to do to get the "full" text, like you get when you enter <code>poetry show</code> in your terminal?</p>
<h2>Update</h2>
<p>I have now tried this on an Ubuntu OS. The result is the same, therefore this is <strong>NOT</strong> Windows specific.</p>
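<p>One thing that may help (a sketch; it assumes cleo, Poetry's console library, honors the <code>COLUMNS</code> environment variable when no real terminal is attached) is to hint a wide terminal to the child process:</p>

```python
import os
import subprocess
import sys


def run_wide(cmd, columns="200"):
    """Run cmd with a COLUMNS hint so width-aware tools don't truncate output."""
    env = dict(os.environ, COLUMNS=columns)
    return subprocess.run(cmd, env=env, encoding="utf-8", capture_output=True)


# Probe with plain Python standing in for `poetry show`, to confirm the
# child process actually sees the hinted width:
probe = run_wide([sys.executable, "-c", "import os; print(os.environ['COLUMNS'])"])
# package_list = run_wide(["poetry", "show"])   # the real call
```

<p>If the truncation comes from Poetry falling back to an 80-column default when stdout is a pipe, the environment hint should restore the full descriptions.</p>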
|
<python><python-poetry>
|
2024-01-23 17:43:38
| 1
| 368
|
SimpleJack
|
77,868,077
| 12,320,336
|
How to prevent unintended blocks from being added?
|
<p>The top of my input file's third page:
<a href="https://i.sstatic.net/5xu8y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5xu8y.png" alt="enter image description here" /></a></p>
<p>The top of my output file's first page:
<a href="https://i.sstatic.net/PfKhb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PfKhb.png" alt="enter image description here" /></a></p>
<p>The first few blocks of my output file's first page:
<a href="https://i.sstatic.net/PhePm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PhePm.png" alt="enter image description here" /></a></p>
<p>Why was the source file's page number added as well? I thought it was because it's in the header, but the footer wasn't added alongside it to confirm such a suspicion. <code>ez_save()</code> doesn't work. <code>scrub()</code> doesn't work.</p>
<p>The clipping window for <code>show_pdf_page()</code> was set to precisely the topmost point of the question number. No extra padding space was taken so I'm confused as to why it detected the page number.</p>
<p>Similarly, I also have a section where I <em>only</em> grab the central footer (the QP number) from the source pdf's middle page (for this particular file, it happened to be page 11 where question 4 resides). When I add it, I can see the QP number is added as a content (instead of a footer), but when I print the blocks, I also see the entirety of question 4 (the parts of page 11) listed, so... What's up with that?</p>
<p>Here's the code to set a collection of <code>Rect</code> objects to be used to clip from the source file.</p>
<pre class="lang-py prettyprint-override"><code>def fetchQuestionList(file, Q1Page, edge, Q1Top, Q1Bot):
    listOfQuestions = []
    questionBoxes = {}
    currentQuestion = 0
    for page in file.pages(Q1Page):
        # As soon as you enter a new page and an existing question was already being scanned in the previous page,
        # immediately add a new rectangle as a potential continuation of the question from the previous page
        if currentQuestion != 0:
            questionBoxes[currentQuestion]['rects'].append(fitz.Rect(0, Q1Top, page.mediabox.x1, 0))
            questionBoxes[currentQuestion]['pages'].append(page.number)
        pageWords = page.get_text("words")
        for wordIndex, word in enumerate(pageWords):
            # If a new question number is found
            if word[0] < edge and word[4].isdigit() and word[5] > 1 and word[-1] == 0:
                if currentQuestion != 0:
                    # Update the currently scanning question's rectangle's bottom to the top of the new question
                    questionBoxes[currentQuestion]['rects'][-1] += fitz.Rect(0, 0, 0, word[1])
                    # If the old question's rectangle is not even as tall as a single line, discard it from the list
                    if questionBoxes[currentQuestion]['rects'][-1].height < Q1Bot - Q1Top:
                        questionBoxes[currentQuestion]['rects'].pop()
                        questionBoxes[currentQuestion]['pages'].pop()
                # Add the new question and also update the currentQuestion to the newly found question
                listOfQuestions.append(list(word) + [page.number])
                currentQuestion = word[4]
                questionBoxes[currentQuestion] = {'rects': [fitz.Rect(0, word[1], page.mediabox.x1, 0)], 'pages': [page.number]}
            # If the last word of the page has been reached, set the current question's rectangle's bottom to the bottom of this last word
            elif wordIndex == len(pageWords) - 1:
                questionBoxes[currentQuestion]['rects'][-1] += fitz.Rect(0, 0, 0, word[3])
    return listOfQuestions, questionBoxes
</code></pre>
<p>The <code>edge</code> parameter is the right coordinate (<code>x1</code>) of the question number '1' (the pymupdf 'word', not 'block'). The conditions <code>word[5]>1 and word[-1]==0</code> are just fail-safes. The first two conditions generally suffice.</p>
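<p>(To make that filter concrete: each pymupdf "word" is a tuple <code>(x0, y0, x1, y1, text, block_no, line_no, word_no)</code>. The stdlib-only sketch below runs the same four checks against made-up tuples; every coordinate here is hypothetical, not taken from the actual paper.)</p>

```python
# Hypothetical pymupdf word tuples: (x0, y0, x1, y1, text, block_no, line_no, word_no)
words = [
    (40.0, 100.0, 48.0, 112.0, "2", 3, 0, 0),       # question number in the left margin
    (60.0, 100.0, 120.0, 112.0, "State", 3, 0, 1),  # body text to the right of `edge`
    (60.0, 130.0, 70.0, 142.0, "3", 4, 0, 1),       # a digit that is not the first word of its line
]
edge = 50.0  # right x-coordinate of the question number '1', as described above

question_numbers = [
    w for w in words
    if w[0] < edge and w[4].isdigit() and w[5] > 1 and w[-1] == 0
]
print(question_numbers)  # only the first tuple passes all four checks
```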
<p>And here's the code to actually create the output pdf.</p>
<pre class="lang-py prettyprint-override"><code>def generateQuestionwisePDF(file, QPNUM, questionList, questionBoxes, padding):
newPDF = fitz.open()
for _, rectPageDict in questionBoxes.items():
# Generate pages
pageHeight = padding
for rectIndex, rectangle in enumerate(rectPageDict['rects']):
pageHeight += rectangle.height
pageHeight += padding/2
pageHeight += padding/2
blankPage = newPDF.new_page(width=file[0].rect.width, height=pageHeight)
# Paste content
pasteBox = fitz.Rect(0,0,file[0].rect.width,padding)
for rectIndex, rectangle in enumerate(rectPageDict['rects']):
pasteBox = fitz.Rect(0, pasteBox.y1, pasteBox.x1, pasteBox.y1 + rectangle.height)
blankPage.show_pdf_page(pasteBox, file, rectPageDict['pages'][rectIndex], clip=rectangle)
pasteBox = fitz.Rect(0, pasteBox.y1, pasteBox.x1, pasteBox.y1 + padding/2)
return newPDF
</code></pre>
<p>I've omitted the use of <code>QPNUM</code> parameter so only one problem can be focused on at a time. But in short, the QPNUM is also a 'word' containing the coordinates of the qp number from page 11 (with the page number itself appended).</p>
<p>UPDATE:</p>
<p><a href="https://pastpapers.papacambridge.com/viewer/caie/as-and-a-level-physics-9702-2023-may-june-9702-s23-ms-34-pdf-9702-s23-ms-35-pdf-9702-s23-ms-41-pdf-9702-s23-ms-42-pdf-9702-s23-ms-43-pdf-9702-s23-ms-51-pdf-9702-s23-ms-52-pdf-9702-s23-ms-53-pdf-9702-s23-qp-11-pdf-9702-s23-qp-12-pdf-9702-s23-qp-13-pdf-9702-s23-qp-21-pdf" rel="nofollow noreferrer">This is the source pdf</a></p>
<p>This is starting to drive me nuts. I tried using the solution in <a href="https://stackoverflow.com/questions/74963837/crop-pdf-content-with-python-not-just-the-cropbox">this post</a> but got a <code>source object number out of range</code> error. This error occurs during the second execution of <code>show_pdf_page()</code> because introducing a print page's blocks statement right before it gave me the contents of both the source pdf's pages 3 & 4.</p>
<p>As an alternative, I tried to test if <code>set_cropbox()</code> can delete outside content <a href="https://pymupdf.readthedocs.io/en/latest/page.html#Page.set_cropbox" rel="nofollow noreferrer">despite the documentation saying it only changes visible part</a>, and it turns out, it does!.......... until it doesn't.</p>
<p>Code edit summary:</p>
<ul>
<li>Added a dummy blank pdf in the beginning of <code>generateQuestionwisePDF()</code>: <code>anotherPDF = fitz.open()</code></li>
<li>Inside the <code># Paste content</code> for loop, added the following code</li>
</ul>
<pre class="lang-py prettyprint-override"><code> cropPage = file[rectPageDict['pages'][rectIndex]]
cropPage.set_cropbox(rectangle)
if rectPageDict['pages'][rectIndex]== 2:
print("---------USING PAGE------------")
for block in cropPage.get_text('blocks'): print(block)
</code></pre>
<ul>
<li>Just before the <code>return</code> statement, added the following code</li>
</ul>
<pre class="lang-py prettyprint-override"><code> print("---------USING DOC[0] BEFORE SAVE------------")
for block in anotherPDF[0].get_text('blocks'): print(block)
anotherPDF.save("another.pdf")
print("---------USING DOC[0] AFTER SAVE------------")
for block in anotherPDF[0].get_text('blocks'): print(block)
</code></pre>
<p>And the result?
<a href="https://i.sstatic.net/SH3ah.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SH3ah.png" alt="enter image description here" /></a></p>
<p>And to really nail it home, I tried to select text in the output file, and...
<a href="https://i.sstatic.net/AC00i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AC00i.png" alt="enter image description here" /></a></p>
|
<python><pdf><pdf-generation><pymupdf>
|
2024-01-23 17:08:11
| 0
| 319
|
SpectraXCD
|
77,867,894
| 13,518,907
|
Streaming local LLM with FastAPI, Llama.cpp and Langchain
|
<p>I have setup FastAPI with Llama.cpp and Langchain. Now I want to enable streaming in the FastAPI responses. Streaming works with Llama.cpp in my terminal, but I wasn't able to implement it with a FastAPI response.</p>
<p>Most tutorials focused on enabling streaming with an OpenAI model, but I am using a local LLM (quantized Mistral) with llama.cpp. I think I have to modify the Callbackhandler, but no tutorial worked. Here is my code:</p>
<pre><code>from fastapi import FastAPI, Request, Response
from fastapi.middleware.cors import CORSMiddleware
from functools import lru_cache
from langchain_community.llms import LlamaCpp
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
import copy
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
model_path = "../modelle/mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf"
prompt= """
<s> [INST] Im folgenden bekommst du eine Aufgabe. Erledige diese anhand des User Inputs.
### Hier die Aufgabe: ###
{typescript_string}
### Hier der User Input: ###
{input}
Antwort: [/INST]
"""
def model_response_prompt():
return PromptTemplate(template=prompt, input_variables=['input', 'typescript_string'])
def build_llm(model_path, callback=None):
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
#callback_manager = CallbackManager(callback)
    n_gpu_layers = 1  # Metal: set to 1 is enough (tried with several values)
    n_batch = 512  # or 1024; should be between 1 and n_ctx, consider the amount of RAM of your Apple Silicon chip
llm = LlamaCpp(
max_tokens =1000,
n_threads = 6,
model_path=model_path,
temperature= 0.8,
f16_kv=True,
n_ctx=28000,
n_gpu_layers=n_gpu_layers,
n_batch=n_batch,
callback_manager=callback_manager,
verbose=True,
top_p=0.75,
top_k=40,
repeat_penalty = 1.1,
streaming=True,
model_kwargs={
'mirostat': 2,
},
)
return llm
# caching LLM
@lru_cache(maxsize=100)
def get_cached_llm():
chat = build_llm(model_path)
return chat
chat = get_cached_llm()
app = FastAPI(
title="Inference API for Mistral and Mixtral",
    description="A simple API that uses Mistral or Mixtral",
version="1.0",
)
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
def bullet_point_model():
llm = build_llm(model_path=model_path)
llm_chain = LLMChain(
llm=llm,
prompt=model_response_prompt(),
verbose=True,
)
return llm_chain
@app.get('/model_response')
async def model(question : str, prompt: str):
model = bullet_point_model()
res = model({"typescript_string": prompt, "input": question})
result = copy.deepcopy(res)
return result
</code></pre>
<p>In an example notebook, I am calling FastAPI like this:</p>
<pre><code>import subprocess
import urllib.parse
import shlex
query = input("Insert your bullet points here: ")
task = input("Insert the task here: ")
# Safely encode the URL strings
encodedquery = urllib.parse.quote(query)
encodedtask = urllib.parse.quote(task)
# Join the curl command text
command = f"curl -X 'GET' 'http://127.0.0.1:8000/model_response?question={encodedquery}&prompt={encodedtask}' -H 'accept: application/json'"
print(command)
args = shlex.split(command)
process = subprocess.Popen(args, shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = process.communicate()
print(stdout)
</code></pre>
<p>So with this code, getting responses from the API works. But I only see streaming in my terminal (I think this is because of the <code>StreamingStdOutCallbackHandler</code>). After the streaming in the terminal is complete, I get my FastAPI response.</p>
<p>What do I have to change so that I can stream token by token with FastAPI and a local llama.cpp model?</p>
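<p>(For what it is worth, the pattern usually suggested for this is a queue-fed generator handed to FastAPI's <code>StreamingResponse</code>: a callback pushes tokens onto a queue from the generation thread, and the response generator drains it. The sketch below shows only that plumbing; <code>fake_llm</code> is a stand-in for the real callback-driven model, not llama.cpp itself.)</p>

```python
from queue import Queue
from threading import Thread

def fake_llm(emit):
    # Stand-in for the model: a real on_llm_new_token callback would call emit(token)
    for tok in ["Hello", " ", "world"]:
        emit(tok)
    emit(None)  # sentinel: generation finished

def stream_tokens():
    q = Queue()
    Thread(target=fake_llm, args=(q.put,), daemon=True).start()
    while (tok := q.get()) is not None:
        yield tok

# In the FastAPI route this generator would be wrapped as:
#   return StreamingResponse(stream_tokens(), media_type="text/plain")
print("".join(stream_tokens()))  # Hello world
```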
|
<python><fastapi><langchain><llamacpp>
|
2024-01-23 16:36:43
| 1
| 565
|
Maxl Gemeinderat
|
77,867,650
| 6,457,405
|
You tried to access openai.File, but this is no longer supported in openai>=1.0.0
|
<p>I am doing fine-tuning in ChatGPT, aiming to adjust the model so that it defines a set of subtopics (or related concepts) based on a specific topic (concept).</p>
<p>This is my training dataset:</p>
<pre><code>{"prompt":"Wellbeing of Child->","completion":" Immunization, Equal nurturing of boys and girls, Parents Mental health, Shaking baby, Children with disabilities, Dental care (only gum care), Importance of height and weight measurement, Feeding during sickness, Playing with rattle, Make mealtimes fun, Avoid physical maltreatment, Reducing screen time, Childrens common diseases, Nurturing care for children\n"}
{"prompt":"Learning and Development->","completion":" Reducing screen time, Ways of learnings, Family values, Show name talk, Importance of height and weight measurement, Toilet training, Sorting and matching, Drowning, Learning through play\n"}
{"prompt":"Hygiene and Safety->","completion":" Hygiene, Safety, Drowning, Handwashing, Avoid physical maltreatment\n"}
{"prompt":"Breastfeeding->","completion":" Importance of Breastfeeding, Method of breastfeeding, Demerits of Infant formula milk, Maternal Nutrition, Sore Nipples, Feeding during sickness, Complementary feeding, Healthy feeding, Interactions with baby during breastfeeding\n"}
</code></pre>
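<p>(Aside: each line above follows the legacy prompt/completion fine-tune format, whose conventions, a <code>-></code> separator ending the prompt and a completion that starts with a space and ends with a newline, can be sanity-checked with the stdlib alone.)</p>

```python
import json

# One line copied from the training file above
sample = '{"prompt":"Hygiene and Safety->","completion":" Hygiene, Safety, Drowning, Handwashing, Avoid physical maltreatment\\n"}'
record = json.loads(sample)

assert set(record) == {"prompt", "completion"}
assert record["prompt"].endswith("->")       # separator marking the end of the prompt
assert record["completion"].startswith(" ")  # completion begins with a space
assert record["completion"].endswith("\n")   # and ends with a newline
print("record looks well-formed")
```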
<p>This is the script I am running:</p>
<pre><code>import openai
import json
import os
api_key =" "
openai.api_key = api_key
## Checking training data
!openai tools fine_tunes.prepare_data -f training_data_prepared.jsonl -q
response = openai.File.create(
file=open("training_data_prepared.jsonl","rb"),
purpose = "fine-tune")
print(response)
</code></pre>
<p>In the response function I am getting this error:</p>
<pre><code>APIRemovedInV1:
You tried to access openai.File, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.
Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`
A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742
</code></pre>
<p>I know this is an openai library version error, but I have been looking for a solution and nothing works.</p>
|
<python><python-3.x><openai-api><chatgpt-api>
|
2024-01-23 15:58:48
| 1
| 395
|
Daniel
|
77,867,618
| 3,336,423
|
How to convert elapsed days since January 1st year 0000 to Python elapsed seconds since January 1st year 1970
|
<p>I have a float that comes from Matlab, expressed as a number of days elapsed since January 1st of year 0000.</p>
<p>Let's use <code>739183.392799916</code> as an example.</p>
<p>In Matlab, <code>datestr(739183.392799916 ,'dd/mm/yyyy-HH:MM:SS.FFF')</code> gives <code>24/10/2023-09:25:37.913</code></p>
<p>I'm working in Python, so I want to convert this value to a number of seconds since epoch (January 1st of year 1970), same standard as <code>time.time()</code></p>
<p>So I did:</p>
<pre><code>def matlab_time_to_python_float(matlab_time: float) -> float:
    # Matlab standard is days since January 1st of year 0000, 00:00:00
    # Python standard is seconds since January 1st 1970, 00:00:00
from datetime import datetime, date
matlab_ref = date(1, 1, 1)
python_ref = datetime.utcfromtimestamp(0).date()
delta = python_ref - matlab_ref
python_day = matlab_time - delta.days
seconds_in_a_day = 24*3600
python_time = python_day*seconds_in_a_day
return python_time
</code></pre>
<p>And additionnaly, to output the time:</p>
<pre><code>def python_float_to_str(python_time_float: float) -> str:
# Use localtime, not gmtime. With gmtime, we get UTC time without timezone/summer/winter
import time
local = time.localtime(python_time_float)
millisecondes = int(round((python_time_float - int(python_time_float)),3) * 1000)
return time.strftime('%d/%m/%Y ', local) + f"{local.tm_hour:02d}:{local.tm_min:02d}:{local.tm_sec:02d}.{millisecondes:03d}"
</code></pre>
<p>Now, when I do:</p>
<pre><code>as_str = ( python_float_to_str( matlab_time_to_python_float( 739183.392799916 ) ) )
print(as_str)
</code></pre>
<p>It outputs <code>25/10/2024 11:25:37.913</code>, which differs from Matlab's <code>24/10/2023-09:25:37.913</code> by <strong>1 year, 1 day, and 2 hours</strong>.</p>
<p>Any idea what I'm doing wrong? The code is very simple, I can't see what's wrong.</p>
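<p>(For reference, this conversion is usually written with the single constant <code>719529</code>, Matlab's datenum for 1970-01-01, instead of deriving the offset from Python's <code>date</code>, whose calendar starts at year 1 rather than year 0. A quick check on the example value, done entirely in UTC so no local-time shift creeps in:)</p>

```python
from datetime import datetime, timedelta, timezone

MATLAB_UNIX_EPOCH = 719529  # Matlab's datenum for 1970-01-01

def matlab_datenum_to_datetime(datenum):
    # Days since the Unix epoch, applied as a plain timedelta (UTC, no DST)
    return datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(days=datenum - MATLAB_UNIX_EPOCH)

dt = matlab_datenum_to_datetime(739183.392799916)
print(dt)  # 2023-10-24 09:25:37.91... UTC, matching Matlab's 24/10/2023-09:25:37.913
```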
|
<python><matlab><datetime>
|
2024-01-23 15:55:22
| 0
| 21,904
|
jpo38
|
77,867,589
| 127,251
|
How do you replace set_tight_layout with set_layout_engine?
|
<p>When I call <code>fig.set_tight_layout(True)</code> on a Matplotlib figure, I receive this deprecation warning:</p>
<pre><code>The set_tight_layout function will be deprecated in a future version. Use set_layout_engine instead.
</code></pre>
<p>How do I call <code>set_layout_engine</code> so as to match the current behavior as closely as possible?</p>
<pre><code>Environment:
OS: Mac
Python: 3.10.6
matplotlib: 3.7.2
</code></pre>
|
<python><matplotlib><deprecation-warning>
|
2024-01-23 15:50:25
| 1
| 5,593
|
Paul Chernoch
|
77,867,577
| 8,236,076
|
How to create table of contents using unstructured (the python package)
|
<h2>tl;dr</h2>
<p>How can I extract a clean table of contents from a pdf document that has hierarchical section headers using the <a href="https://unstructured.io/" rel="nofollow noreferrer"><code>unstructured</code></a> package?</p>
<h2>Some more details</h2>
<p>I have a pdf document that is multiple pages long. The text in the document is organised into multiple sections, each with a header/title. Each of these sections are potentially split up into subsections with their own header/title. These subsections can have subsubsections, etc.</p>
<p>The document does not have a table of contents page. How can I use the <a href="https://unstructured.io/" rel="nofollow noreferrer"><code>unstructured</code></a> package to automatically extract a table of contents from my document? The table of contents should have the same hierarchy as the sections and subsections in my document.</p>
<h2>Example</h2>
<p>If my document looks like this:</p>
<blockquote>
<p><strong>This is the title of section 1</strong></p>
<p>Bla bla bla.</p>
<p><strong>This is the title of subsection 1.1</strong></p>
<p>More bla bla bla.</p>
<p><strong>This is the title of subsubsection 1.1.1</strong></p>
<p>More bla bla bla.</p>
<p><strong>This is the title of subsection 1.2</strong></p>
<p>More bla bla bla.</p>
<p><strong>This is the title of section 2</strong></p>
<p>Even more bla bla.</p>
</blockquote>
<p>Then I would like to extract a table of contents from this that includes the hierarchy of headers. For example:</p>
<pre class="lang-py prettyprint-override"><code>{
"This is the title of section 1": 0,
"This is the title of subsection 1.1": 1,
"This is the title of subsubsection 1.1.1": 2,
"This is the title of subsection 1.2": 1,
"This is the title of section 2": 0,
}
</code></pre>
<p>Where the number indicates the level of the header in the hierarchy.</p>
|
<python><nlp>
|
2024-01-23 15:49:04
| 0
| 1,144
|
Willem
|