| QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (stringlengths 15–150) | QuestionBody (stringlengths 40–40.3k) | Tags (stringlengths 8–101) | CreationDate (stringdate, 2022-12-10 09:42:47 – 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (stringlengths 3–30, ⌀) |
|---|---|---|---|---|---|---|---|---|
76,366,550
| 2,011,284
|
SQLAlchemy: Difference between session.commit() and session.execute('COMMIT')
|
<p>I'm trying to understand the difference between <code>session.commit()</code> and <code>session.execute('COMMIT')</code> but I can't find anything in the documentation.</p>
<p>They seem to do different things, for example, this:</p>
<pre><code>db.session.commit()
db.session.execute("VACUUM ANALYZE my_table;")
</code></pre>
<p>throws the error: <code>VACUUM cannot run inside a transaction block</code>.</p>
<p>But this works fine:</p>
<pre><code>db.session.execute("COMMIT")
db.session.execute("VACUUM ANALYZE my_table;")
</code></pre>
<p>I'm using SQLAlchemy 1.4.42.</p>
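<p><code>session.commit()</code> ends the transaction the Session manages, but SQLAlchemy 1.4 autobegins a new one on the next statement, which is likely why <code>VACUUM</code> still sees a transaction block. The underlying rule can be illustrated with the stdlib <code>sqlite3</code> module, which enforces the same restriction as PostgreSQL (a sketch, not SQLAlchemy code):</p>

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x)")
con.execute("INSERT INTO t VALUES (1)")  # DML opens an implicit transaction

try:
    con.execute("VACUUM")                # refused while a transaction is open
except sqlite3.OperationalError as e:
    in_txn_error = str(e)

con.commit()                             # end the transaction, like session.commit()
con.execute("VACUUM")                    # now succeeds
```

<p>On the SQLAlchemy side, a common approach is to run <code>VACUUM</code> on a connection configured with <code>execution_options(isolation_level="AUTOCOMMIT")</code> rather than issuing a raw <code>COMMIT</code>.</p>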
|
<python><sqlalchemy>
|
2023-05-30 15:48:28
| 0
| 4,927
|
CIRCLE
|
76,366,462
| 11,065,874
|
fastapi body not working properly with Body and Depends for singular values in Body
|
<p>I have this small fastapi application</p>
<pre><code>import uvicorn
from fastapi import FastAPI, Body, Query
from fastapi import Path

app = FastAPI()

@app.get("/test/{test_id}")
def test(
    id: str = Path(...),
    q: str = Query(...),
    b: str = Body(...)
):
    return "Hello world"

def main():
    uvicorn.run("run:app", host="0.0.0.0", reload=True, port=8001)

if __name__ == "__main__":
    main()
</code></pre>
<p>it works as expected.</p>
<hr />
<p>But I now make some changes as below</p>
<pre><code>import uvicorn
from fastapi import FastAPI, Depends, Body, Query
from fastapi import Path
from pydantic import BaseModel

app = FastAPI()

class Input(BaseModel):
    id: str = Path(...)
    q: str = Query(...)
    b: str = Body(...)

@app.get("/test/{test_id}")
def test(inp: Input = Depends()):
    return "Hello world"

def main():
    uvicorn.run("run:app", host="0.0.0.0", reload=True, port=8001)

if __name__ == "__main__":
    main()
</code></pre>
<p>I expect b to be shown as Body in the docs but it is being interpreted as a query string.</p>
<p>What is wrong?</p>
|
<python><fastapi>
|
2023-05-30 15:36:50
| 1
| 2,555
|
Amin Ba
|
76,366,458
| 223,992
|
Python parse POST in Lambda
|
<p>I am trying to write a simple HTTP POST handler in Python (running on Lambda) dealing with data sent from a simple html form (i.e. application/x-www-form-urlencoded). I am not very familiar with Python. My code appears to parse the data, but the output is not in a form that Python can process.</p>
<p>I was using the instructions provided by AWS at <a href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html#lambda-examples-access-request-body-examples-read" rel="nofollow noreferrer">https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html#lambda-examples-access-request-body-examples-read</a></p>
<p>Using both bash and PHP, if I base64 encode <code>email=user%40example.com&message=hello+world</code>, I get:</p>
<pre><code>ZW1haWw9dXNlciU0MGV4YW1wbGUuY29tJm1lc3NhZ2U9aGVsbG8rd29ybGQ=
</code></pre>
<p>However when I decode this in Python, I get a "binary" string. While I can still run parse_qs().items against this, the returned key/value pairs are binary strings and subsequent operations fail.</p>
<p>Hopefully, this code illustrates what I am trying to achieve:</p>
<pre><code>import base64
from urllib.parse import parse_qs

# s = "email=user%40example.com&message=hello+world"
s = "ZW1haWw9dXNlciU0MGV4YW1wbGUuY29tJm1lc3NhZ2U9aGVsbG8rd29ybGQ="
s = base64.b64decode(s)

postVars = {k: v[0] for k, v in parse_qs(s).items()}
if ("email" in postVars) and ("message" in postVars) and (0 < len(postVars['message'])):
    body = body + "Email: " + postVars['email'] + "\n"
    body = body + div + bodyCleanUp(postVars['message']) + "\n"
else:
    body = "Error: message content missing or blank\n"
print(postVars['email'])
print(postVars['message'])
</code></pre>
<p>The <code>if</code> expression evaluates as false. Execution fails at the first print statement.</p>
<p>How can I get the data in a form that Python does not choke on?</p>
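<p>For what it's worth, <code>base64.b64decode</code> returns <code>bytes</code>, and <code>parse_qs</code> applied to bytes yields bytes keys and values, so <code>"email" in postVars</code> is false and the later string concatenation fails. Decoding to <code>str</code> first gives normal strings (a minimal sketch of just that step):</p>

```python
import base64
from urllib.parse import parse_qs

s = "ZW1haWw9dXNlciU0MGV4YW1wbGUuY29tJm1lc3NhZ2U9aGVsbG8rd29ybGQ="
raw = base64.b64decode(s)        # bytes: b'email=user%40example.com&message=hello+world'
decoded = raw.decode("utf-8")    # back to str before parsing

post_vars = {k: v[0] for k, v in parse_qs(decoded).items()}
# parse_qs also unquotes %40 -> @ and + -> space
# post_vars == {'email': 'user@example.com', 'message': 'hello world'}
```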
|
<python><aws-lambda>
|
2023-05-30 15:36:11
| 1
| 48,387
|
symcbean
|
76,366,371
| 12,493,545
|
How to fix module 'fluidsynth' has no attribute 'Synth'
|
<h1>What I have tried</h1>
<ol>
<li>I've installed <code>pip install pyfluidsynth</code> (installed 1.23.5) and <code>pip install fluidsynth</code> (0.2)</li>
<li>I followed the solution here, but it didn't work for me. I also think that this wouldn't have been a real solution since it links an older version: <a href="https://stackoverflow.com/questions/20022160/pyfluidsynth-module-object-has-no-attribute-synth">pyFluidsynth 'module' object has no attribute 'Synth'</a></li>
<li>I tried <code>print(fluidsynth.__version__)</code> which results in <code>module 'fluidsynth' has no attribute '__version__'</code></li>
</ol>
<h1>Additional Information</h1>
<p>I tried to follow this tutorial <a href="https://www.tensorflow.org/tutorials/audio/music_generation" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/audio/music_generation</a> but I get the error on <code>waveform = pm.fluidsynth(fs=_SAMPLING_RATE)</code>. I am using pycharm and I installed all packages into a single virtual environment.</p>
|
<python><ubuntu><fluidsynth>
|
2023-05-30 15:25:04
| 1
| 1,133
|
Natan
|
76,366,301
| 18,018,869
|
detach rectangle from scaling and give it "absolute" coordinates, or autoscale the rectangle
|
<p>This is the layout I am trying to achieve:
<a href="https://i.sstatic.net/b5Hxf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b5Hxf.png" alt="My Plot" /></a><br>
I actually manage to do this with this code:</p>
<pre class="lang-py prettyprint-override"><code>def create_bcs_plot(x_data: tuple = tuple(), y_data: tuple = tuple()):
    # initialize figure
    fig = Figure(figsize=(7, 6))
    axs = fig.subplots()
    minx, maxx, midx = 0, 1, 0.5
    miny, maxy, midy = 0, 1, 0.5
    offsetx, offsety = 0.1, 0.1
    axs.add_patch(mpatches.Rectangle((minx, maxy), maxx, offsety, fill=False, edgecolor="black", clip_on=False, lw=0.5))
    axs.add_patch(mpatches.Rectangle((maxx, miny), offsetx, maxy, fill=False, edgecolor="black", clip_on=False, lw=0.5))
    # colored boxes inside plot
    axs.add_patch(mpatches.Rectangle((minx, midy), midx, midy, alpha=0.1, facecolor="green"))
    axs.add_patch(mpatches.Rectangle((midx, midy), midx, midy, alpha=0.1, facecolor="yellow"))
    axs.add_patch(mpatches.Rectangle((minx, miny), midx, midy, alpha=0.1, facecolor="gray"))
    axs.add_patch(mpatches.Rectangle((midx, miny), midx, midy, alpha=0.1, facecolor="red"))
    # crossed lines that separate the four classes
    axs.add_line(Line2D(xdata=(minx, maxx + offsetx), ydata=(midy, midy), clip_on=False, color="black", lw=0.5))
    axs.add_line(Line2D(xdata=(midx, midx), ydata=(miny, maxy + offsety), clip_on=False, color="black", lw=0.5))
    # y-axis HIGH, LOW labeling
    axs.text(maxx + 0.5 * offsetx, 0.25 * maxy, "LOW", fontdict={}, rotation="vertical", va="center", ha="center")
    axs.text(maxx + 0.5 * offsetx, 0.75 * maxy, "HIGH", fontdict={}, rotation="vertical", va="center", ha="center")
    # x-axis HIGH, LOW labeling
    axs.text(0.25 * maxx, maxy + 0.5 * offsety, "HIGH", fontdict={}, va="center", ha="center")
    axs.text(0.75 * maxx, maxy + 0.5 * offsety, "LOW", fontdict={}, va="center", ha="center")
    # populate with dynamic datapoints
    # x_data, y_data = (5, 255, 2000), (0.2, 1.1, 95)
    # axs.scatter(x_data, y_data)
    buf = BytesIO()
    fig.savefig(buf, format="png")
    data = base64.b64encode(buf.getbuffer()).decode("ascii")
    return f"<img src='data:image/png;base64,{data}'/>"
</code></pre>
<p>Note: I can't use pyplot because it is not recommended for web applications.</p>
<p><strong>Problem: If I now add data (uncomment 'populate with dynamic datapoints' section) it messes up the entire plot.</strong> Also both axis need to have logarithmic scale.</p>
<p>Question: (select as you wish) <br>
A) How do I add dynamic scaling for the rectangles that are placed "outside" of the actual plot?<br>
B) How do I detach the rectangles from axis scaling to give them a fixed spot?</p>
<p>Maybe I could just not show the axis-tick-labels of this plot and then add something like a second layer displaying the scatter data and "correct" axis-tick-labels?</p>
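<p>For option B, one way to pin the header bands and labels to the axes box regardless of data limits or log scaling is to draw them in axes coordinates via <code>transform=axs.transAxes</code>. A minimal sketch (assuming the same Figure/patches objects as above; the data values are made up):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, usable without pyplot in a web app
from matplotlib.backends.backend_agg import FigureCanvasAgg
from matplotlib.figure import Figure
import matplotlib.patches as mpatches

fig = Figure(figsize=(7, 6))
axs = fig.subplots()
axs.set_xscale("log")
axs.set_yscale("log")
axs.scatter((5, 255, 2000), (0.2, 1.1, 95))  # dynamic datapoints

# transAxes pins the patch to axes coordinates: (0, 0) = bottom-left,
# (1, 1) = top-right of the axes box, independent of data scaling
offset = 0.1
top_band = mpatches.Rectangle((0.0, 1.0), 1.0, offset,
                              transform=axs.transAxes,
                              fill=False, edgecolor="black",
                              clip_on=False, lw=0.5)
axs.add_patch(top_band)
axs.text(0.25, 1.0 + 0.5 * offset, "HIGH", transform=axs.transAxes,
         va="center", ha="center")

FigureCanvasAgg(fig).draw()  # render to confirm the layout survives scaling
```

<p>The colored quadrant boxes and the crossed lines can take the same <code>transform=axs.transAxes</code> treatment, so only the scatter data lives in data coordinates.</p>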
|
<python><matplotlib><python-imaging-library>
|
2023-05-30 15:17:49
| 0
| 1,976
|
Tarquinius
|
76,366,229
| 10,266,106
|
Multiprocess Sharing Static Data Between Process Jobs
|
<p>Consider the following Python function to be parallelized, which utilizes a georeferenced ndarray (assembled from rioxarray) and a shapefile. This function uses both datasets to generate map plots with Matplotlib/CartoPy, the dependent variable being changes in map domain extent. Note that code governing cosmetic alterations to the plot (titles, etc.) has been removed to keep this example as straightforward as possible:</p>
<pre><code>def plotter(data, xgrid, ygrid, region) -> 'Graphics Plotter':
    fig = plt.figure(figsize=(14,9))
    gs = gridspec.GridSpec(ncols=1, nrows=2, width_ratios=[1], height_ratios=[0.15, 3.00])
    gs.update(wspace=0.00, hspace=0.00)
    bar_width = 0.40
    ax1 = fig.add_subplot(gs[0, :])
    ax1.axes.get_xaxis().set_visible(False)
    ax1.axes.get_yaxis().set_visible(False)
    for pos in ['top','right','left','bottom']:
        ax1.spines[pos].set_visible(False)
    ax2 = fig.add_subplot(gs[1, :], projection=crs.LambertConformal())
    ax2.set_extent(region, crs=crs.LambertConformal())
    ax2.set_adjustable('datalim')
    im = ax2.pcolormesh(xgrid, ygrid, data.variable.data[0], cmap=cmap, norm=norm)
    cb = plt.colorbar(im, ax=ax2, pad=0.01, ticks=ticks, aspect=80, orientation='horizontal')
    ax2.add_feature(counties_feature, linewidth=0.45)
    ax2.add_feature(states_feature, linewidth=1.25)
    ax2.add_feature(canada_feature, linewidth=1.25)
</code></pre>
<p>This plotting function is passed data, grid extents, and region constraints from the main function, where the parallel execution is also defined. Note that da, x, y, and all shapefiles are static and are never altered through the duration of this script execution.</p>
<pre><code>import multiprocess as mpr
import matplotlib as mpl
import cartopy.crs as crs
import cartopy.feature as cfeature
from cartopy.io.shapereader import Reader
from cartopy.feature import ShapelyFeature
import rioxarray as rxr

def main():
    canada_feature = ShapelyFeature(Reader(canada).geometries(), crs.LambertConformal(), facecolor='none', edgecolor='black')
    states_feature = ShapelyFeature(Reader(states).geometries(), crs.LambertConformal(), facecolor='none', edgecolor='black')
    counties_feature = ShapelyFeature(Reader(counties).geometries(), crs.LambertConformal(), facecolor='none', edgecolor='black')
    regions = pd.read_csv('/path/to/defined_regions.txt')
    da = rxr.open_rasterio('path/to/somefile.tif', lock=False, mask_and_scale=True)
    Y, X = da['y'], da['x']
    x, y = np.meshgrid(da['x'], da['y'])

    def parallel() -> 'Parallel Execution':
        processes = []
        for i, g in regions.iterrows():
            pro = mpr.Process(target=plotter, args=(da, x, y, g['region']))
            processes.extend([pro])
        for p in processes:
            p.start()
        for p in processes:
            p.join()

    parallel()
</code></pre>
</code></pre>
<p>The regions file contains 12 unique regions, which are each passed into a new process in the parallel function and executed. I'm noticing higher RAM usage when the pool executes, which I suspect is from inefficient utilization of memory when the ndarrays <code>da, x, & y</code> and shapefiles are utilized by the parallel function.</p>
<p>Is there an effective way to share these data across the Multiprocess pool such that the RAM use is less expensive?</p>
|
<python><matplotlib><multiprocessing><python-multiprocessing><shared-memory>
|
2023-05-30 15:08:07
| 1
| 431
|
TornadoEric
|
76,366,195
| 3,080,056
|
Python logger code stopping code from executing
|
<p>The code below stops my class from running; could someone please tell me why? From what I can tell it is the <code>logging.FileHandler</code> section, but I don't understand why, as I use the same code in a different script without issue. I have checked the file location: it exists, and the IIS server user has write access.</p>
<p>It is called using <code>logger.info(f'get_vlans({ip_address}, {int_len}, {type})')</code> and I have confirmed that all of the variables are supplying the correct values.</p>
<pre><code>import logging
import datetime
now = datetime.datetime.now()
now = now.strftime("%m-%Y")
# create logger with the name 'gp_uccx_app'
logger = logging.getLogger('netweb_logger')
logger.setLevel(logging.DEBUG)
# # create file handler
fh = logging.FileHandler(f'C:/inetpub/wwwroot/site1/python/custom/logs/log-{now}.txt')
fh.setLevel(logging.DEBUG)
# create formatter and add it to the handler
formatter = logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
fh.setFormatter(formatter)
# add the handler to the logger
logger.addHandler(fh)
</code></pre>
<p>Class code</p>
<pre><code>class Netweb:
    # Collects the vlans for all interfaces in the stack that have the line 'switchport access vlan' in the running config
    def get_vlans(ip_address, int_len, type):
        logger.info(f'get_vlans({ip_address}, {int_len}, {type})')
        vlans = []
        try:
            reSearch = ''
            if int_len == 'long' and type == 'gigabitethernet':
                reSearch = "^interface GigabitEthernet\d\/0\/\d"
            elif int_len == 'long' and type == 'fastethernet':
                reSearch = "^interface FastEthernet\d\/\d"
            elif int_len == 'short' and type == 'gigabitethernet':
                reSearch = "^interface GigabitEthernet\d\/\d"
            elif int_len == 'short' and type == 'fastethernet':
                reSearch = "^interface FastEthernet\d\/\d"
            net_connect = ConnectHandler( device_type=platform, ip=ip_address, username=cisco_username, password=cisco_password )
            output = net_connect.send_command( 'show run | i (interface)|.(access vlan)' )
            net_connect.disconnect()
            output = output.splitlines()
            x = 0
            for out in output:
                if re.search( reSearch, out ):
                    interface = out.split()
                    interface = interface[ 1 ]
                    if 'access vlan' in output[ x + 1 ]:
                        vlan = output[ x + 1 ].split( 'switchport access vlan' )[1].strip()
                    else:
                        vlan = '1'
                    vlans.append( [ interface, vlan ] )
                x = x + 1
            return vlans
        except Exception as e:
            logger.info(e)
</code></pre>
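<p>For reference, <code>logging.FileHandler</code> opens the log file as soon as it is constructed, so an unwritable or missing path makes the module raise at import time and the class never loads. A sketch of the same setup that defers opening until the first record is emitted (a temp directory stands in for the IIS path):</p>

```python
import logging
import os
import tempfile

logdir = tempfile.mkdtemp()                  # stand-in for C:/inetpub/.../logs
logpath = os.path.join(logdir, "log-demo.txt")

logger = logging.getLogger("netweb_logger_demo")
logger.setLevel(logging.DEBUG)

# delay=True postpones opening the file until the first record is written,
# so a bad path fails at the logging call instead of at import time
fh = logging.FileHandler(logpath, delay=True)
fh.setLevel(logging.DEBUG)
fh.setFormatter(logging.Formatter(
    "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
))
logger.addHandler(fh)

logger.info("get_vlans(10.0.0.1, long, gigabitethernet)")
fh.close()  # flush the record to disk
```

<p>Wrapping the <code>FileHandler(...)</code> construction in a try/except that logs to stderr would also surface the real exception (permissions, missing directory) instead of silently stopping the import.</p>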
|
<python><logging>
|
2023-05-30 15:03:59
| 0
| 1,564
|
Blinkydamo
|
76,366,153
| 3,614,254
|
Python Selenium-Beautifulsoup not loading dynamic text
|
<h2>Problem</h2>
<p>I'm scraping a dynamic page - one that appears to be loading results from a database for display - but, it appears, I'm only getting the placeholder for the text elements rather than the text itself. The page I'm loading is:
<code>https://www.bricklink.com/v2/search.page?q=8084#T=S</code></p>
<h2>Expected / Actual</h2>
<h4>Expected:</h4>
<pre class="lang-html prettyprint-override"><code><table>
<tr>
<td class="pspItemClick">
<a class="pspItemNameLink" href="www.some-url.com">The Name</a>
<br/>
<span class="pspItemCateAndNo">
<span class="blcatList">Catalog Num</span> : 1111
</span>
</td>
</tr>
</table
</code></pre>
<h4>Actual</h4>
<pre class="lang-html prettyprint-override"><code><table>
<tr>
<td class="pspItemClick">
<a class="pspItemNameLink" href="[%catalogUrl%]">[%strItemName%]</a>
<br/>
<span class="pspItemCateAndNo">
<span class="blcatList">[%strCategory%]</span> : [%strItemNo%]
</span>
</td>
</tr>
</table
</code></pre>
<h2>Attempted Solutions</h2>
<ol>
<li>I first just tried loading the site using the <code>requests</code> library which, of course, didn't work since it's not a static page.</li>
</ol>
<pre class="lang-py prettyprint-override"><code>def load_page(url: str) -> BeautifulSoup:
    headers = {
        'Access-Control-Allow-Origin': '*',
        'Access-Control-Allow-Methods': 'GET',
        'Access-Control-Allow-Headers': 'Content-Type',
        'Access-Control-Max-Age': '3600',
        'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0'
    }
    req = requests.get(url, headers=headers)
    return BeautifulSoup(req.content, 'html.parser')
</code></pre>
<ol start="2">
<li>I then tried Selenium's <code>webdriver</code> to load the dynamic content:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>def html_source_from_webdriver(url: str, wait: int = 0) -> BeautifulSoup:
    browser = webdriver.Chrome(service=selenium_chrome_service, chrome_options=options)
    browser.implicitly_wait(wait)
    browser.get(urljoin(ROOT_URL, url))
    page_source = browser.page_source
    return BeautifulSoup(page_source, features="html.parser")
</code></pre>
<p>Both attempts yield the same results. I haven't used the <code>implicitly_wait</code> feature much so I was just experimenting with different values (0-15) - none of which worked. I've also tried the <code>browser.set_script_timeout(<timeout>)</code> which also did not work.</p>
<p>Any thoughts on where to go from here would be greatly appreciated.</p>
<h2>Update</h2>
<p>I appreciate those of you providing suggestions. I've also tried the following with no luck:</p>
<ul>
<li>using <code>time.sleep()</code> - added after the <code>browser.get(...)</code> call.</li>
<li>using <code>browser.set_page_load_timeout()</code> - didn't expect this one to work, but tried anyway.</li>
</ul>
|
<python><selenium-webdriver><beautifulsoup>
|
2023-05-30 14:58:21
| 1
| 557
|
James B
|
76,366,080
| 9,002,568
|
Get the selected date from python/nicegui
|
<p>I have two functions for getting date from user, which are:</p>
<pre class="lang-py prettyprint-override"><code>def date_selector():
    with ui.input('Date') as date:
        with date.add_slot('append'):
            ui.icon('edit_calendar').on('click', lambda: menu.open()).classes('cursor-pointer')
        with ui.menu() as menu:
            ui.date().bind_value(date)
    s_date = date.value
    print(s_date)
    return s_date

def tab_date():
    ui.label(date_selector())
    return
</code></pre>
<p>But it does not assign a value to <code>s_date</code>. How can I fix this?</p>
|
<python><nicegui>
|
2023-05-30 14:50:32
| 1
| 593
|
kur ag
|
76,366,003
| 9,692,180
|
Not possible to access selected file name in Panel FileInput widget
|
<p>I am using the Python Panel FileInput widget (panel==0.13.0) to upload a file. The upload works, but I can't find a way to access the uploaded file name. After selecting a file, the widget displays the filename next to it, yet the filename is not included in the widget's objects, nor in filename,…</p>
<p>Using file_input.get_param_values(), I will get filename=None…</p>
<pre><code>import panel as pn

class WidgetManager:
    def __init__(self):
        self.load_row = self.load_file_widget()

    def load_file_widget(self):
        # Create a row of widgets for file load
        self.file_input = pn.widgets.FileInput(accept='.csv')
        load_row = pn.Row(self.file_input)
        print(self.file_input.filename)
        return load_row

# Create an instance of the WidgetManager class
widget_manager = WidgetManager()

# Create a FastList dashboard and add the widgets to it
dashboard = pn.Column(widget_manager.load_row)

# Show the dashboard
dashboard.show()
</code></pre>
<p>Any idea how can I access to the uploaded filename?</p>
|
<python><widget><bokeh><panel>
|
2023-05-30 14:41:23
| 0
| 787
|
Léo SOHEILY KHAH
|
76,365,676
| 11,649,050
|
Huggingface datasets.DatasetDict gives only labels to transform preprocessing method
|
<p>I'm trying to use Huggingface's datasets and transformers libraries to train a model on the CIFAR10 dataset. For the specific model I'm using however, there needs to be preprocessing on the images, which I do using the <code>.with_transform()</code> method. When picking individual datapoints for visualizing or testing, everything works. However when I use the <code>Trainer</code> class and its <code>.train()</code> method, something breaks.</p>
<p>If I include a print statement in my preprocessing function, I get that the input to the function is:</p>
<pre class="lang-py prettyprint-override"><code>{'label': [8, 1, 8, 9, 0, 5, 5, 5]}
</code></pre>
<p>Whereas if I pick a single datapoint I would get:</p>
<pre class="lang-py prettyprint-override"><code>>>> print(data['train'][0])
{'img': <PIL image object>, 'label': 0}
</code></pre>
<p>This results in a KeyError when I try to access <code>'img'</code> from the dictionary. I can't figure out why it passes only the labels to the preprocessing function. I get that there are multiple labels because it's processing in batches, but why are the images removed? I've also tried renaming the <code>'img'</code> key in the datapoints to <code>'image'</code> but this did not help.</p>
<p>Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from torchvision.models import mobilenet_v3_small, MobileNet_V3_Small_Weights
from transformers import TrainingArguments, Trainer
import evaluate
from datasets import load_dataset

weights = MobileNet_V3_Small_Weights.IMAGENET1K_V1
preprocess = weights.transforms()
raw_data = load_dataset("cifar10")

def transform(x):
    print("x")
    print(x)
    return dict(x, img=[preprocess(img) for img in x["img"]])

data = raw_data.with_transform(transform)
print("data['train'][0]\n", data["train"][0])  # works

training_args = TrainingArguments(
    output_dir="test_trainer", evaluation_strategy="epoch"
)
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(
        predictions=predictions,
        references=labels,
    )

model = mobilenet_v3_small(
    weights=weights,
)
model.train()
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=data["train"].select(range(5000)),
    eval_dataset=data["test"].select(range(1000)),
    compute_metrics=compute_metrics,
)
trainer.train()  # KeyError
</code></pre>
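<p>A plausible explanation, hedged since it depends on the Trainer version: <code>Trainer</code> defaults to <code>remove_unused_columns=True</code> and drops dataset columns whose names are not parameters of the model's <code>forward</code>. A torchvision model's <code>forward</code> only accepts <code>x</code>, so <code>'img'</code> is stripped before batches reach the transform, while the label column is kept specially. The mechanism, mimicked with <code>inspect</code>:</p>

```python
import inspect

def forward(self, x):          # torchvision-style signature: no 'img' parameter
    return x

batch = {"img": "<PIL image>", "label": 0}

# mirror of the column-dropping logic: keep forward() parameters plus labels
signature_cols = set(inspect.signature(forward).parameters)
allowed = signature_cols | {"label", "label_ids"}
filtered = {k: v for k, v in batch.items() if k in allowed}
# filtered == {'label': 0} -- 'img' never reaches the transform
```

<p>If that is indeed the cause, passing <code>remove_unused_columns=False</code> to <code>TrainingArguments</code> should keep the <code>'img'</code> column.</p>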
|
<python><pytorch><huggingface-transformers><huggingface-datasets>
|
2023-05-30 14:05:24
| 0
| 331
|
Thibaut B.
|
76,365,674
| 5,704,159
|
dataframe bar plot not looks good
|
<p>I'm trying to plot data with ~2000 values as a bar chart, but the plot looks bad, probably because of the number of values.</p>
<p>In case I have 500 values, the plot looks ok.</p>
<p>Here is my simple code:</p>
<pre><code>data = {
    'du': duList,
    'sum_of_avg': sum_of_avg,
    'sum_of_max': sum_of_max,
    'sum_of_std': sum_of_std
}
df = DataFrame(data)
df.plot(x='du', y=['sum_of_avg', 'sum_of_max', 'sum_of_std'], kind='bar')
</code></pre>
</code></pre>
<p><a href="https://i.sstatic.net/kcVJd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kcVJd.png" alt="enter image description here" /></a></p>
<p>What am I missing?</p>
|
<python><tkinter><plot>
|
2023-05-30 14:05:18
| 0
| 429
|
gogo
|
76,365,636
| 5,371,505
|
Warning: 'news' is an entry point defined in pyproject.toml, but it's not installed as a script. You may get improper `sys.argv[0]`
|
<ul>
<li>I am trying to run my poetry based python project inside docker using docker compose</li>
<li>When I run the application, it works but it gives me this warning</li>
</ul>
<pre><code>ch_news_dev_python | Warning: 'news' is an entry point defined in pyproject.toml, but it's not installed as a script. You may get improper `sys.argv[0]`.
ch_news_dev_python |
ch_news_dev_python | The support to run uninstalled scripts will be removed in a future release.
ch_news_dev_python |
ch_news_dev_python | Run `poetry install` to resolve and get rid of this message.
</code></pre>
<p><strong>My project structure</strong></p>
<pre><code>news
├── docker
│ ├── development
│ │ ├── ...
│ │ ├── python_server
│ │ │ └── Dockerfile
│ │ ├── .env
│ │ └── docker-compose.yml
│ ├── production
│ │ └── ...
│ └── test
│ └── ...
├── src
│ └── news
│ ├── __init__.py
│ ├── __main__.py
│ ├── app.py
│ └── ...
├── tests
├── .gitignore
├── pyproject.toml
├── poetry.lock
└── ...
</code></pre>
<p><strong>My python_server/Dockerfile</strong></p>
<pre><code>FROM python:3.10.11-slim
ENV PYTHONDONTWRITEBYTECODE 1 \
PYTHONUNBUFFERED 1
RUN apt-get update \
&& apt-get install --no-install-recommends -y gcc libffi-dev g++\
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
ENV POETRY_VERSION=1.5.0
RUN pip install "poetry==$POETRY_VERSION"
RUN groupadd --gid 10000 ch_news \
&& useradd --uid 10000 --gid ch_news --shell /bin/bash --create-home ch_news
WORKDIR /home/ch_news
COPY --chown=10000:10000 pyproject.toml poetry.lock ./
USER ch_news
RUN poetry install --no-root --no-ansi --without dev
COPY --chown=10000:10000 ./src ./
CMD ["poetry", "run", "news"]
</code></pre>
<p><strong>My docker-compose file</strong></p>
<pre><code>version: '3.9' # optional since v1.27.0

name: ch_news_dev

services:
  ...
  ch_news_dev_python:
    build:
      context: ../..
      dockerfile: ./docker/development/python_server/Dockerfile
    container_name: ch_news_dev_python
    depends_on:
      ch_news_dev_postgres:
        condition: service_healthy
    env_file:
      - .env
    image: ch_news_dev_python_image
    networks:
      - network
    restart: 'always'
    volumes:
      - postgres_certs:/home/ch_news/certs

networks:
  network:
    driver: bridge

volumes:
  postgres_certs:
    driver: local
  postgres_data:
    driver: local
</code></pre>
</code></pre>
<p><strong>My pyproject.toml file</strong></p>
<pre><code>[tool.poetry]
authors = ["..."]
description = "..."
name = "news"
version = "0.1.0"
[tool.poetry.dependencies]
feedparser = "^6.0.10"
python = "^3.10"
aiohttp = "^3.8.4"
python-dateutil = "^2.8.2"
asyncpg = "^0.27.0"
loguru = "^0.7.0"
[tool.poetry.dev-dependencies]
commitizen = "^3.2.2"
pre-commit = "^3.3.2"
pytest = "^7.3.1"
pytest-cov = "^4.0.0"
tox = "^4.5.1"
bandit = "^1.7.5"
black = "^23.3.0"
darglint = "^1.8.1"
flake8 = "^6.0.0"
flake8-bugbear = "^23.5.9"
flake8-docstrings = "^1.7.0"
isort = "^5.12.0"
mypy = "^1.3.0"
pytest-clarity = "^1.0.1"
pytest-sugar = "^0.9.7"
typeguard = "^4.0.0"
xdoctest = "^1.1.0"
aioresponses = "^0.7.4"
pytest-asyncio = "^0.21.0"
types-python-dateutil = "^2.8.19"
[tool.poetry.group.dev.dependencies]
isort = "^5.12.0"
types-python-dateutil = "^2.8.19.7"
flake8-docstrings = "^1.7.0"
xdoctest = "^1.1.1"
pre-commit = "^3.3.2"
commitizen = "^3.2.2"
tox = "^4.5.1"
mypy = "^1.3.0"
pytest = "^7.3.1"
flake8-bugbear = "^23.5.9"
black = "^23.3.0"
pytest-asyncio = "^0.21.0"
bandit = "^1.7.5"
typeguard = "^4.0.0"
pytest-sugar = "^0.9.7"
[tool.coverage.run]
branch = true
omit = ["src/news/__main__.py", "src/news/app.py"]
source = ["news"]
[tool.pytest.ini_options]
pythonpath = "src"
addopts = [
"--import-mode=importlib",
]
[tool.coverage.report]
fail_under = 95
[tool.isort]
profile = "black"
src_paths = ["src", "tests"]
skip_gitignore = true
force_single_line = true
atomic = true
color_output = true
[tool.mypy]
pretty = true
show_column_numbers = true
show_error_codes = true
show_error_context = true
ignore_missing_imports = true
strict = true
warn_unreachable = true
[tool.poetry.scripts]
news = "news.__main__:app"
[tool.commitizen]
name = "cz_conventional_commits"
tag_format = "v$major.$minor.$patch$prerelease"
version = "0.0.1"
[build-system]
build-backend = "poetry.core.masonry.api"
requires = ["poetry-core>=1.0.0"]
</code></pre>
<p>Can someone kindly tell me how to get rid of this warning?</p>
<p><strong>UPDATE 1</strong></p>
<p>I am still getting the warning even after removing <code>--no-root</code>.</p>
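<p>One common fix, offered as an assumption: the warning appears because the project itself is never installed, so the <code>news</code> entry point has no real script. Copying the sources into the <code>src/</code> layout that <code>pyproject.toml</code> expects and then running <code>poetry install</code> without <code>--no-root</code> installs it. A sketch of the reordered Dockerfile steps (paths are illustrative; note the original <code>COPY ./src ./</code> flattens the layout, which may be why dropping <code>--no-root</code> alone didn't help):</p>

```dockerfile
# install dependencies first (cache-friendly), then copy the sources and
# install the project itself so the `news` entry point becomes a real script
COPY --chown=10000:10000 pyproject.toml poetry.lock ./
RUN poetry install --no-root --no-ansi --without dev
COPY --chown=10000:10000 ./src ./src
RUN poetry install --no-ansi --without dev
CMD ["poetry", "run", "news"]
```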
|
<python><python-packaging><python-poetry>
|
2023-05-30 14:01:01
| 5
| 6,352
|
PirateApp
|
76,365,600
| 1,838,076
|
Python-MySQL connector demo works on WSL but fails when run from Windows
|
<p>I have a simple demo program demonstrating the connection to MySQL from Python, which works fine on WSL but fails on Windows.</p>
<p>Here is the sample code</p>
<pre class="lang-py prettyprint-override"><code>import mysql.connector

con = mysql.connector.connect(
    host='mysql-host',
    user='mysql-user',
    password='mysql-password',
    database='myql-db',
    port=3306,
    use_pure=True
)
print(con)

cur = con.cursor()
cur.execute('SHOW TABLES;')
for i in cur.fetchall():
    print(i)
</code></pre>
<p>Output on WSL</p>
<pre><code><mysql.connector.connection.MySQLConnection object at 0x7fd6cece4790>
('Table1',)
('Table2',)
('Table3',)
('Table4',)
...
('Table5',)
</code></pre>
<p>Output on Windows</p>
<pre><code><mysql.connector.connection.MySQLConnection object at 0x00000211CF9B5750>
Traceback (most recent call last):
File "C:\Users\User\hello.py", line 12, in <module>
cur = con.cursor()
^^^^^^^^^^^^
File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\site-packages\mysql\connector\connection.py", line 1410, in cursor
raise OperationalError("MySQL Connection not available")
mysql.connector.errors.OperationalError: MySQL Connection not available
</code></pre>
<p>It also works on Windows with <code>mysqlsh.exe</code></p>
<pre><code>>mysqlsh.exe mysql://mysql-host:3306 -u mysql-user --sql
MySQL Shell 8.0.33
Copyright (c) 2016, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.
Type '\help' or '\?' for help; '\quit' to exit.
Creating a Classic session to 'mysql-user@mysql-host:3306'
Fetching global names for auto-completion... Press ^C to stop.
Your MySQL connection id is 348624462
Server version: 5.6.12 Source distribution
No default schema selected; type \use <schema> to set one.
MySQL mysql-host:3306 SQL > use mysql-db;
Default schema set to `mysql-db`.
Fetching global names, object names from `mysql-db` for auto-completion... Press ^C to stop.
MySQL mysql-host:3306 mysql-db SQL > show tables;
+----------------------+
| Tables_in_mysql-db |
+----------------------+
| Table1 |
| ... |
+----------------------+
10 rows in set (0.8742 sec)
</code></pre>
<p>Ping to the <code>mysql-host</code> works fine on both. Any clues on what else might be wrong?</p>
|
<python><mysql><windows>
|
2023-05-30 13:57:07
| 1
| 1,622
|
Krishna
|
76,365,578
| 1,736,407
|
TypeError: Parameter to MergeFrom() must be instance of same class: expected google.cloud.dataproc.v1.WorkflowTemplate got str
|
<p>I am trying to write a Google cloud function that imports a Dataproc Workflow template from storage, and invokes the workflow. I am getting the above <code>TypeError</code> when trying to instantiate the template. Here is the offending piece of code:</p>
<pre class="lang-py prettyprint-override"><code># create workflow client
workflow_client = dataproc.WorkflowTemplateServiceClient()
parent = 'projects/{0}/regions/{1}'.format(project_id, region)

# submit request
operation = workflow_client.instantiate_inline_workflow_template(
    request={'parent': parent, 'template': '{0}'.format(workflow_json)}
)
</code></pre>
</code></pre>
<p>The value for <code>workflow_json</code> is definitely valid JSON and is of the <code>dict</code> type, which is required by this function.</p>
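<p>A likely cause, judging only from the error text: wrapping the dict in <code>'{0}'.format(...)</code> converts it to its string repr, so the client receives a <code>str</code> where a <code>dict</code> or <code>WorkflowTemplate</code> message is expected. A pure-Python illustration (the template contents are hypothetical):</p>

```python
workflow_json = {"id": "demo-workflow", "jobs": []}   # hypothetical template dict

template_as_str = "{0}".format(workflow_json)
# str.format stringifies the dict -- this is what triggers MergeFrom()'s TypeError
assert isinstance(template_as_str, str)

# passing the dict itself lets the client convert it to a WorkflowTemplate message
request = {"parent": "projects/p/regions/r", "template": workflow_json}
assert isinstance(request["template"], dict)
```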
<p>Here is the contents of my requirements.txt:</p>
<pre><code>google-cloud-dataproc==2.0.0
google-cloud-logging==1.15.1
google-cloud-storage==1.32.0
PyYAML==6.0
urllib3==1.26.16
</code></pre>
<p>Any help with this would be greatly appreciated</p>
|
<python><google-cloud-platform><google-cloud-functions><google-cloud-dataproc>
|
2023-05-30 13:54:21
| 1
| 2,220
|
Cam
|
76,365,558
| 353,337
|
Match at whitespace with at most one newline in regex
|
<p>I would like to match <code>a b</code> if between <code>a</code> and <code>b</code> is only whitespace with at most one newline.</p>
<p>Python example:</p>
<pre class="lang-py prettyprint-override"><code>import re
r = "a\s*b" # ?
# should match:
print(re.match(r, "ab"))
print(re.match(r, "a b"))
print(re.match(r, "a \n b"))
# shouldn't match:
print(re.match(r, "a\n\nb"))
print(re.match(r, "a \n\n b"))
</code></pre>
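<p>One pattern that satisfies all five cases (assuming "whitespace" here means spaces/tabs plus at most one newline) splits <code>\s*</code> into horizontal whitespace around a single optional newline:</p>

```python
import re

# horizontal whitespace, at most one newline, then horizontal whitespace
r = r"a[ \t]*\n?[ \t]*b"

assert re.match(r, "ab")
assert re.match(r, "a b")
assert re.match(r, "a \n b")
assert not re.match(r, "a\n\nb")
assert not re.match(r, "a \n\n b")
```

<p>If other whitespace characters (e.g. <code>\r</code>, <code>\f</code>) should also count, <code>[^\S\n]</code> ("whitespace except newline") can replace <code>[ \t]</code>.</p>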
|
<python><regex>
|
2023-05-30 13:51:39
| 2
| 59,565
|
Nico Schlömer
|
76,365,515
| 5,875,041
|
Django model constraint at least one field from two is not null or empty
|
<p>I added a constraint to the model, but it seems to have no impact because I'm still able to create a model instance without supplying either of the fields. Please help me enforce a constraint that checks that at least one of the two fields is not null or empty.</p>
<p>Fields:</p>
<pre><code>class Fruit(models.Model):
    name1 = models.CharField(max_length=255, blank=True, null=True)
    name2 = models.CharField(max_length=255, blank=True, null=True)

    class Meta:
        constraints = [
            models.CheckConstraint(
                check=(
                    Q(name1__isnull=False)
                    | Q(name2__isnull=False)
                    | ~Q(name1__exact='')
                    | ~Q(name2__exact='')
                ),
                name='color_names_not_null_or_empty'
            )
        ]
</code></pre>
<p>Django v4
Postgres v15</p>
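<p>The OR-chain looks like the culprit: <code>~Q(name1__exact='')</code> is already satisfied when <code>name1</code> is NULL, so the disjunction as written passes for every row. Each field must be non-null AND non-empty to count as "provided". A minimal sketch (the corrected Django constraint shown in comments, with the two predicates modeled in plain Python to make the logic checkable):</p>

```python
# Corrected constraint sketch for the Meta class:
#
#   models.CheckConstraint(
#       check=(
#           (Q(name1__isnull=False) & ~Q(name1__exact=''))
#           | (Q(name2__isnull=False) & ~Q(name2__exact=''))
#       ),
#       name='color_names_not_null_or_empty',
#   )
#
# Plain-Python models of the original and fixed predicates:

def provided(v):
    # non-null AND non-empty
    return v is not None and v != ''

def original_check(n1, n2):
    # mirror of the OR-chain in the question
    return (n1 is not None) or (n2 is not None) or (n1 != '') or (n2 != '')

def fixed_check(n1, n2):
    return provided(n1) or provided(n2)
```

<p>Note that a database <code>CheckConstraint</code> raises <code>IntegrityError</code> on save; it does not run in form validation, so you may also want a <code>clean()</code> check if you need friendly errors.</p>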
|
<python><django><postgresql><django-orm>
|
2023-05-30 13:46:21
| 0
| 445
|
Artem Dumanov
|
76,365,100
| 11,684,473
|
Argmax of numpy array returning non-flat indices, but on conditioned array
|
<p>I want to extend the following <a href="https://stackoverflow.com/questions/9482550/argmax-of-numpy-array-returning-non-flat-indices">question</a> with a particular concern:</p>
<p><strong>How to obtain the argmax of <code>a[...]</code> in proper <code>a</code> indices</strong></p>
<pre class="lang-py prettyprint-override"><code>>>> a = (np.random.random((10, 10))*10).astype(int)
>>> a
array([[4, 1, 7, 4, 3, 3, 8, 9, 3, 0],
[7, 7, 8, 9, 9, 6, 1, 4, 2, 0],
[6, 9, 4, 9, 2, 7, 9, 0, 8, 6],
[2, 4, 7, 8, 0, 6, 0, 7, 1, 8],
[7, 9, 7, 0, 1, 2, 3, 7, 9, 6],
[7, 1, 1, 0, 5, 1, 8, 8, 5, 5],
[5, 4, 3, 0, 0, 4, 4, 5, 5, 4],
[9, 5, 0, 5, 8, 1, 6, 4, 8, 5],
[5, 8, 0, 8, 2, 6, 4, 9, 5, 1],
[2, 5, 0, 1, 4, 0, 0, 9, 6, 4]])
>>> np.unravel_index(a.argmax(), a.shape)
(0, 7)
>>> np.unravel_index(a[a>5].argmax(), a.shape)
(0, 2)
>>> np.unravel_index(a[a>5].argmax(), a[a>5].shape)
(2,)
</code></pre>
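<p>One possible approach: <code>a[a&gt;5]</code> is a flattened copy, so its argmax indexes the masked values, not <code>a</code>. Mapping back through <code>np.flatnonzero</code> recovers the position in the original array:</p>

```python
import numpy as np

a = np.array([[4, 1, 7],
              [9, 2, 6]])
mask = a > 5

# argmax over the masked values, mapped back to a's flat index
flat_idx = np.flatnonzero(mask)[a[mask].argmax()]
idx = np.unravel_index(flat_idx, a.shape)   # -> a[1, 0] == 9
```

<p>An equivalent trick is <code>np.where(mask, a, -np.inf).argmax()</code>, which keeps the original shape so a plain <code>unravel_index</code> works directly.</p>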
|
<python><numpy>
|
2023-05-30 12:56:34
| 2
| 1,565
|
majkrzak
|
76,365,084
| 1,200,914
|
Rollback a SQS message
|
<p>I would like to know if one can remove a message from an SQS FIFO queue using only the response of <code>sqs_client.send_message</code>. I have tried the MessageId from the response, but it's not a valid <code>ReceiptHandle</code>.</p>
<p>I want to do this because I need to send several messages (around 10), and if one fails, I want to remove all of the ones that didn't fail to send.</p>
|
<python><amazon-sqs><aws-sqs-fifo>
|
2023-05-30 12:54:22
| 0
| 3,052
|
Learning from masters
|
76,365,022
| 17,596,179
|
Duplicate callback outputs
|
<p>So I created this dash project but I keep encountering this <code>Duplicate callback outputs</code> error.
This is my <code>index.py</code> file.</p>
<pre><code>from dash import html, dcc
from dash.dependencies import Input, Output
from app import app
from pages import portfolio, prices
from components import navbar
nav = navbar.Navbar()
app.layout = html.Div([
dcc.Location(id='url', refresh=False),
nav,
html.Div(id='index', children=[]),
])
@app.callback(Output('index', 'children', allow_duplicate=True),
[Input('url', 'pathname')],
prevent_initial_call=True,
)
def display_page(pathname):
if pathname == '/prices':
return prices.layout
if pathname == '/portfolio':
return portfolio.layout
else:
return "404 Page Error! Please choose a link"
if __name__ == '__main__':
app.run_server(debug=True)
</code></pre>
<p>And this is my <code>prices.py</code></p>
<pre><code># Import necessary libraries
from dash import html, dcc, callback, Output, Input
import dash_bootstrap_components as dbc
import plotly.express as px
import pandas as pd
from index import app
from help.helper_funcs import read_file
# Define the page layout
layout = dbc.Container([
dbc.Row([
html.Center(html.H1("Prices")),
html.Br(),
html.Hr(),
dcc.Graph(id='prices-graph', style={'height': '80vh'}),
dcc.Interval(id='interval-component', interval=15000, n_intervals=0),
html.Button('Buy', id='buy-button', n_clicks=0),
html.Textarea(id='buy-text', style={'width': '100%'}),
])
])
@app.callback(
[Output('prices-graph', 'figure')],
[Input('interval-component', 'n_intervals')],
allow_duplicate=True
)
def update_graph():
df = read_file("s3://silver-stage-bucket-test/silver_prices.parquet")
line = px.line(df.to_pandas(), x='datetime', y='solar_price', color_discrete_sequence=['red'], labels={'solar_price': 'Solar Price'})
second_line = px.line(df.to_pandas(), x='datetime', y='wind_price', color_discrete_sequence=['blue'], labels={'wind_price': 'Wind Price'})
line.add_trace(second_line.data[0])
line.update_layout(yaxis_title='Prices')
return [line]
</code></pre>
<p>I've looked across pages with the same problem, but most people have the same id in their Output in app.callback, which I don't. I've tried allowing duplicates, but nothing has worked for me yet.</p>
<p>Both files use app.callback, but I think my problem lies within the <code>index.py</code> file.
This is the full error</p>
<pre><code>In the callback for output(s):
index.children@b80108fca58ba4aeaa5e8c587a7e4ff4
Output 0 (index.children@b80108fca58ba4aeaa5e8c587a7e4ff4) is already in use.
To resolve this, set `allow_duplicate=True` on
duplicate outputs, or combine the outputs into
one callback function, distinguishing the trigger
by using `dash.callback_context` if necessary.
</code></pre>
<p>All help is greatly appreciated!</p>
<h3>EDIT</h3>
<p>contents of <code>app.py</code></p>
<pre><code>import dash
import dash_bootstrap_components as dbc
app = dash.Dash(__name__,
external_stylesheets=[dbc.themes.BOOTSTRAP],
meta_tags=[{"name": "viewport", "content": "width=device-width"}],
suppress_callback_exceptions=True)
</code></pre>
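<p>A likely culprit is the circular import: <code>prices.py</code> does <code>from index import app</code>, which executes <code>index.py</code> a second time (it is already running as <code>__main__</code>), so <code>display_page</code> is registered twice on <code>index.children</code>. Importing <code>app</code> from <code>app.py</code> instead, giving <code>update_graph</code> its <code>n_intervals</code> argument, and putting <code>allow_duplicate=True</code> on the <code>Output</code> (not on <code>app.callback</code>) should clear it. A stdlib-only sketch of how double-executing a module duplicates registrations (hypothetical names, no Dash required):</p>

```python
registry = {}

def callback(output):
    """Minimal stand-in for Dash's duplicate-output check."""
    def deco(fn):
        if output in registry:
            raise ValueError(f"Output {output} is already in use.")
        registry[output] = fn
        return fn
    return deco

def run_index_module():
    # everything index.py does at import time,
    # including registering the display_page callback
    @callback("index.children")
    def display_page(pathname):
        return pathname

run_index_module()      # index.py executed as __main__
try:
    run_index_module()  # executed AGAIN via `from index import app`
except ValueError as e:
    print(e)            # Output index.children is already in use.
```

<p>With <code>prices.py</code> changed to <code>from app import app</code>, each callback module is executed only once.</p>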
|
<python><flask><plotly-dash>
|
2023-05-30 12:47:39
| 0
| 437
|
david backx
|
76,364,921
| 9,779,999
|
Seaborn pairplot() error, OptionError: "No such keys(s): 'mode.use_inf_as_null'", any idea?
|
<p>I get an error when trying to apply seaborn's pairplot.
My full script is simple and is copied below:</p>
<pre><code>import seaborn as sns
import pandas as pd
import numpy as np
# Creating a sample DataFrame
data = {
'A': np.random.randn(100),
'B': np.random.randn(100),
'C': np.random.randn(100),
'D': np.random.randn(100)
}
df = pd.DataFrame(data)
# Create a pair plot
sns.pairplot(df)
</code></pre>
<p>But I am thrown this error:</p>
<pre><code>---------------------------------------------------------------------------
OptionError Traceback (most recent call last)
Cell In[26], line 15
12 df = pd.DataFrame(data)
14 # Create a pair plot
---> 15 sns.pairplot(df)
File ~/miniforge3/envs/marketing/lib/python3.9/site-packages/seaborn/_decorators.py:46, in _deprecate_positional_args..inner_f(*args, **kwargs)
36 warnings.warn(
37 "Pass the following variable{} as {}keyword arg{}: {}. "
38 "From version 0.12, the only valid positional argument "
(...)
43 FutureWarning
44 )
45 kwargs.update({k: arg for k, arg in zip(sig.parameters, args)})
---> 46 return f(**kwargs)
File ~/miniforge3/envs/marketing/lib/python3.9/site-packages/seaborn/axisgrid.py:2126, in pairplot(data, hue, hue_order, palette, vars, x_vars, y_vars, kind, diag_kind, markers, height, aspect, corner, dropna, plot_kws, diag_kws, grid_kws, size)
2124 diag_kws.setdefault("legend", False)
2125 if diag_kind == "hist":
-> 2126 grid.map_diag(histplot, **diag_kws)
2127 elif diag_kind == "kde":
2128 diag_kws.setdefault("fill", True)
File ~/miniforge3/envs/marketing/lib/python3.9/site-packages/seaborn/axisgrid.py:1478, in PairGrid.map_diag(self, func, **kwargs)
...
--> 121 raise OptionError(f"No such keys(s): {repr(pat)}")
122 if len(keys) > 1:
123 raise OptionError("Pattern matched multiple keys")
OptionError: "No such keys(s): 'mode.use_inf_as_null'"
</code></pre>
<p>I have tried removing seaborn and reinstalling it with conda, but the error is the same.</p>
<p>Has anyone encountered this error before?</p>
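<p>Likely cause, hedged: pandas 2.x removed the <code>mode.use_inf_as_null</code> option, while seaborn versions before 0.12.2 still read it (the <code>_decorators.py</code> frame in the traceback is characteristic of seaborn 0.11). Upgrading seaborn to 0.12.2 or later, or pinning <code>pandas&lt;2</code>, should resolve it. A small probe showing whether the installed pandas still exposes the option:</p>

```python
import pandas as pd

# pandas 1.x exposes this option; pandas 2.x removed it, which is
# what old seaborn trips over inside pairplot()
try:
    pd.get_option('mode.use_inf_as_null')
    has_option = True
except Exception:
    has_option = False

print(pd.__version__, "has mode.use_inf_as_null:", has_option)
```

<p>If <code>has_option</code> is False and seaborn is older than 0.12.2, the version mismatch is confirmed.</p>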
|
<python><seaborn><inf><pairplot>
|
2023-05-30 12:38:17
| 5
| 1,669
|
yts61
|
76,364,851
| 3,008,221
|
How to create a row number that infer the group it belongs to from another column? Want to do it in both pandas and postgresql/sql
|
<p>UPDATE: SQL solution provided below, but pandas solution not yet. Appreciate if anyone has a pandas solution.</p>
<p>I have a table/pandas dataframe that looks like this:</p>
<p><a href="https://i.sstatic.net/73WFh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/73WFh.png" alt="enter image description here" /></a></p>
<p>Where the first row of each user is always a new group, indicated by 'new', then the next row could be in the same group (indicated by 'same') or a new group (indicated by 'new').</p>
<p>I want to add a column group_number, that would create a number for each row related to its group, such that all rows of the first group of a user would be 1, all rows of the second group of the user would be 2, etc. It would then look like this in my example:</p>
<p><a href="https://i.sstatic.net/iWE4F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iWE4F.png" alt="enter image description here" /></a></p>
<p>I can do it in pandas while iterating on the rows like the below code, but it would be great if there is a vectorized solution?:</p>
<pre><code>group_number=[]
current_user=-1
for index, row in myDF.iterrows():
if row['user']!=current_user:
group_number.append(1)
counter=1
current_user=row['user']
elif row['group']=='same' :
group_number.append(counter)
else:
counter+=1
group_number.append(counter)
myDF['group_number']=group_number
</code></pre>
<p>(Side note that may or may not be relevant: I think this problem has some flavor of the gaps and islands structure/problem, but it is a bit different (I believe it is a bit more general))</p>
<p>How do I create that group_number in postgresql/sql and in pandas (a vectorized solution)?</p>
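<p>Since every 'new' row starts a fresh group and each user's first row is always 'new', the group number is simply a per-user cumulative count of 'new' flags. A vectorized pandas sketch (column names <code>user</code> and <code>group</code> assumed from the screenshots):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'user':  [1, 1, 1, 1, 2, 2, 2],
    'group': ['new', 'same', 'new', 'same', 'new', 'new', 'same'],
})

# cumulative sum of 'new' flags within each user
df['group_number'] = df['group'].eq('new').groupby(df['user']).cumsum()
print(df['group_number'].tolist())   # [1, 1, 2, 2, 1, 2, 2]
```

<p>The SQL equivalent is a windowed sum, e.g. <code>SUM(CASE WHEN "group" = 'new' THEN 1 ELSE 0 END) OVER (PARTITION BY "user" ORDER BY &lt;row order&gt;)</code>, assuming there is a column that preserves row order.</p>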
|
<python><sql><pandas><postgresql>
|
2023-05-30 12:30:16
| 1
| 433
|
Aly
|
76,364,741
| 8,419,962
|
QT Waiting Spinner not showing
|
<p>I am trying to use qwaitingspinner.py (<a href="https://github.com/snowwlex/QtWaitingSpinner" rel="nofollow noreferrer">QtWaitingSpinner on GitHub</a>) module in my project. The spinner's parent is an existing, visible QTabWidget page. When I add a page (a process whose initialization takes 5 to 10 seconds during which the 'tabVisible' attribute of the new page is kept at False), I start the spinner. This one is not displaying as I expect. However, it becomes visible and functional once the new page has been added and made visible (voluntarily, I don't stop the spinner to see what happens). I understand Python is busy executing the while loop in the example. And Qt's event loop also seems to be impacted since I don't see the spinner while the while loop is executing. So how to make the spinner functional while executing the loop in the example?</p>
<pre><code>import sys
from time import monotonic
from PyQt6.QtWidgets import (
QApplication,
QMainWindow,
QPushButton,
QTabWidget,
QVBoxLayout,
QWidget,
)
from waitingspinnerwidget import QtWaitingSpinner
class MyWindow(QMainWindow):
def __init__(self):
super().__init__()
self.resize(400, 400)
self.setWindowTitle("Spinner test")
layout = QVBoxLayout()
self._tab = QTabWidget(self)
_page_1 = QWidget()
self._page_index = self._tab.addTab(_page_1, "Page with the spinner")
layout.addWidget(self._tab)
btn_add_page = QPushButton()
btn_add_page.setText("Add a page")
btn_add_page.clicked.connect(self.add_new_page)
layout.addWidget(btn_add_page)
widget = QWidget()
widget.setLayout(layout)
self.setCentralWidget(widget)
self.spinner = None
def add_new_page(self):
_new_index = self._page_index + 1
widget = QWidget()
widget.setObjectName(f"page_{_new_index}")
self.start_spinner()
self._page_index = self._tab.addTab(widget, f"Page no {_new_index}")
self._tab.setTabVisible(self._page_index, False)
t = monotonic()
while monotonic() - t < 5.0:
# The purpose of this loop is to simulate time-consuming by the project constructor of a new page.
continue
self._tab.setTabVisible(self._page_index, True)
self._tab.setCurrentIndex(self._page_index)
self.stop_spinner()
def start_spinner(self):
self.spinner = QtWaitingSpinner(parent=self._tab.widget(self._tab.count() - 1))
self.spinner.start()
</code></pre>
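<p>The busy <code>while</code> loop blocks Qt's event loop, so the spinner never repaints. The usual fixes are either calling <code>QApplication.processEvents()</code> inside the loop, or (better) moving the slow page construction into a worker thread and finishing in a slot when it signals completion. A stdlib-only analogue of that worker-thread pattern (no Qt required, so it only illustrates the shape of the fix):</p>

```python
import threading
import time

done = threading.Event()

def build_page():
    # stand-in for the 5-10 s page constructor, running off the main thread
    time.sleep(0.2)
    done.set()

threading.Thread(target=build_page).start()

spins = 0
while not done.is_set():
    # in Qt this is where the event loop keeps running and repaints the
    # spinner; here we count iterations to show the main thread stays free
    spins += 1
    time.sleep(0.01)

print("page ready after", spins, "spinner frames")
```

<p>In PyQt terms, replace the counting loop with the event loop itself: start a <code>QThread</code> (or <code>QRunnable</code>), return from <code>add_new_page</code> immediately, and connect a finished signal to a slot that adds the tab and stops the spinner.</p>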
|
<python><pyqt6>
|
2023-05-30 12:16:57
| 1
| 418
|
Pierre Lepage
|
76,364,689
| 4,451,521
|
Tox can not find python interpreter
|
<p>I just installed pyenv. After that I installed python 3.8.0 in the system</p>
<p>So I have</p>
<pre><code>pyenv versions
system
* 3.8.0 (set by /home/me/.pyenv/version)
</code></pre>
<p>I have the tox.ini</p>
<pre><code>[tox]
envlist = py36,py38
skipsdist = True
[testenv]
# install pytest in the virtualenv where commands will be executed
deps = pytest
commands =
#NOTE: you can run any command line tool here -not just tests
pytest
</code></pre>
<p>When I tried tox from system, it could not find py38 but when I did <code>pyenv global 3.8.0</code> it worked.
mmm do I have to enter the version? - I asked myself</p>
<p>I then installed python 3.9.16 so now I have</p>
<pre><code>pyenv versions
system
3.8.0
* 3.9.16 (set by /home/me/.pyenv/version)
</code></pre>
<p>and I added py39 to the envlist in the <code>tox.ini</code></p>
<pre><code>[tox]
envlist = py36,py38,py39
skipsdist = True
[testenv]
# instsall pytest in the virtualenv where commands will be executed
deps = pytest
commands =
#NOTE: you can run any command line tool here -not just tests
pytest
</code></pre>
<p>However now, it always fail.</p>
<ul>
<li>If I am in 3.8.0 it can not find 3.9.16</li>
<li>If I am in 3.9.16 it can not find 3.8.0</li>
</ul>
<p>The curious thing is that inside <code>.tox</code> folder there are three complete folders with py36,py38 and py39</p>
<p>But tox cannot run them all.</p>
<p>How can this be solved?</p>
|
<python><pyenv><tox>
|
2023-05-30 12:10:21
| 1
| 10,576
|
KansaiRobot
|
76,364,685
| 264,136
|
Not able to update MongoDB database
|
<p>I have the below doc in DB:</p>
<pre><code>{
"_id": {
"$oid": "6475dd3485054c30333cf52c"
},
"job_queue_name": "CURIE_BLR",
"job_job_id": 1059,
"job_jenkins_job_id": 0,
"job_status": "ENQUEUED",
"job_platform_name": "FUGAZI",
"job_branch": "master",
"job_json": "fugazi_imix_profiles",
"job_email_id": "akshjosh@cisco.com",
"job_profiles_to_run": "IPSEC_MCAST-imix_1400,IPSEC_QOS_DPI_FNF_MCAST-imix_1400",
"job_qt_mode": "prod",
"job_baseline": "none",
"job_no_of_trials": "1",
"job_cycle": "1",
"job_type": "UP",
"job_submitted_time": {
"$date": "2023-05-29T07:15:43.825Z"
},
"job_start_time": {
"$date": {
"$numberLong": "-62135596800000"
}
},
"job_end_time": {
"$date": {
"$numberLong": "-62135596800000"
}
},
"job_results": "NA"
}
</code></pre>
<p>I want to update the job_jenkins_job_id and job_status. Using the below code but its not updating. No error is also thrown.</p>
<p>I want to make sure that the update happens and I get the count of updated docs. How to achieve this?</p>
<pre><code>myclient = pymongo.MongoClient("mongodb://10.64.127.94:27017/")
mydb = myclient["UPTeam"]
mycol = mydb["perf_sdwan_queue"]
myquery = {"$and":[
{"job_job_id":{"$gt":"{}".format("1059")}},
{"queue_name":{"$gt":"CURIE_BLR"}}
]}
newvalues = { "$set": { "job_jenkins_job_id": 12, "job_status": "RUNNING" } }
mycol.update_one(myquery, newvalues)
</code></pre>
<p>myquery is not returning anything.</p>
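<p>Three likely problems in the filter, hedged: the stored field is <code>job_queue_name</code>, not <code>queue_name</code>; <code>"$gt": "1059"</code> compares a string against the integer <code>1059</code> in the document (BSON types must match for comparison); and <code>$gt</code> excludes the very id shown in the doc, where an equality match seems intended. A corrected sketch, with the result object used to confirm the write:</p>

```python
# corrected filter: match the stored field names and BSON types
myquery = {
    "job_job_id": 1059,              # int, not the string "1059"
    "job_queue_name": "CURIE_BLR",   # field name was "queue_name" before
}
newvalues = {"$set": {"job_jenkins_job_id": 12, "job_status": "RUNNING"}}

# requires a live server, so shown commented out:
# result = mycol.update_one(myquery, newvalues)
# print(result.matched_count, result.modified_count)  # both should be 1
```

<p><code>UpdateResult.matched_count</code> tells you whether the filter found anything at all, which separates "query matched nothing" from "update changed nothing".</p>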
|
<python><mongodb><pymongo>
|
2023-05-30 12:10:08
| 1
| 5,538
|
Akshay J
|
76,364,600
| 16,371,459
|
running code in two gpus consume more time, when run independently, as compared to running in a single one
|
<p>I am trying to run model inference at the best possible speed.
While testing, I found that one inference takes 34 milliseconds on average when run on one GPU,
40 milliseconds on average when requested on two GPUs independently,
50 milliseconds on average when requested on three GPUs independently,
and 64 milliseconds on average when requested on four GPUs independently.</p>
<p>This inference is performed with different workers, i.e., for 4 GPUs, 4 workers are run, similarly, for n GPUs, n parallel workers are run. And each worker is assigned a specific GPU.</p>
<p>I am using the following code, to measure the time, for each inference</p>
<pre><code>st_m = time.time()
output = ort_model.generate( **inputs )
end_time_m = time.time()
</code></pre>
<p>I am investigating the possible reasons why the per-inference time increases as the number of independent parallel inferences, one per GPU, grows.</p>
|
<python><performance><gpu><nvidia><onnxruntime>
|
2023-05-30 12:01:47
| 0
| 318
|
Basir Mahmood
|
76,364,144
| 13,086,128
|
TypeError: histogram() got an unexpected keyword argument 'normed'
|
<p>I am using <code>numpy.histogram</code> and I am getting this error:</p>
<pre><code>import numpy as np
np.histogram(np.arange(4), bins=np.arange(5), normed=True)
TypeError: histogram() got an unexpected keyword argument 'normed'
</code></pre>
<p>I was expecting:</p>
<pre><code>(array([0.2,0.25,0.25]),array([0,1,2,3,4]))
</code></pre>
<p>I am using numpy 1.24.3</p>
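<p>The <code>normed</code> keyword was removed from <code>numpy.histogram</code> (it is gone in numpy 1.24); its replacement is <code>density=True</code>, which normalizes so the integral over the bins equals 1:</p>

```python
import numpy as np

hist, edges = np.histogram(np.arange(4), bins=np.arange(5), density=True)
print(hist, edges)   # [0.25 0.25 0.25 0.25] [0 1 2 3 4]
```

<p>With unit-width bins each of the four values contributes 0.25, which also explains why the expected <code>[0.2, 0.25, 0.25]</code> would not appear from this input under either keyword.</p>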
|
<python><python-3.x><numpy><math><histogram>
|
2023-05-30 11:03:03
| 2
| 30,560
|
Talha Tayyab
|
76,363,921
| 7,483,211
|
How to fix pandas v2 "ValueError: Cannot convert from timedelta64[ns] to timedelta64[D]."
|
<p>When upgrading from pandas version 1 to 2.0.0, I suddenly get a <code>ValueError</code> in a script that worked fine before upgrading pandas to version 2:</p>
<pre><code>ValueError: Cannot convert from timedelta64[ns] to timedelta64[D].
Supported resolutions are 's', 'ms', 'us', 'ns'
</code></pre>
<p>This is a minimally reproducible example:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({'designation_date': ['2021-01-01', '2021-01-02']})
df['recency'] = pd.to_datetime('today') - pd.to_datetime(df['designation_date'])
df['recency'] = df['recency'].astype('timedelta64[D]')
</code></pre>
<p>What do I need to replace <code>df['recency'].astype('timedelta64[D]')</code> with so that the code works with pandas v2?</p>
<p>Using <code>astype('timedelta64[D]')</code> is used quite a bit in answers across SO, e.g. <a href="https://stackoverflow.com/a/31918181/7483211">here</a>.</p>
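<p>In pandas 2, <code>astype</code> on timedeltas only converts between the s/ms/us/ns resolutions; day counts come from the <code>.dt.days</code> accessor (integer days) or floor/true division by <code>pd.Timedelta(days=1)</code>, both of which also work on pandas 1. A sketch with a fixed reference date so the result is deterministic:</p>

```python
import pandas as pd

df = pd.DataFrame({'designation_date': ['2021-01-01', '2021-01-02']})
recency = pd.to_datetime('2021-01-03') - pd.to_datetime(df['designation_date'])

df['recency_days'] = recency.dt.days               # integer days
df['recency_frac'] = recency / pd.Timedelta(days=1)  # fractional days

print(df['recency_days'].tolist())   # [2, 1]
```

<p>Use <code>.dt.days</code> when the old <code>astype('timedelta64[D]')</code> was meant to truncate to whole days, and division when fractional days were wanted.</p>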
|
<python><pandas><type-conversion><timedelta>
|
2023-05-30 10:34:29
| 3
| 10,272
|
Cornelius Roemer
|
76,363,837
| 11,197,301
|
Count the number of specific objects in a OrderedDict structure
|
<p>I create the following OrderedDict:</p>
<pre><code>import collections
class obj1():
def __init__(self):
self.aa = 22
self.bb = 23
class obj2():
def __init__(self):
self.dd = 22
self.ee = 23
my_test = collections.OrderedDict()
my_test['1'] = obj1
my_test['2'] = obj1
my_test['3'] = obj1
my_test['4'] = obj2
</code></pre>
<p>and this is the outcome:</p>
<pre><code>my_test
Out[41]:
OrderedDict([('1', __main__.obj1),
('2', __main__.obj1),
('3', __main__.obj1),
('4', __main__.obj2)])
</code></pre>
<p>As it can be noticed, there are two types of object: <code>obj1</code> and <code>obj2</code>. I would like to know if it is possible to know the number of <code>obj1</code> (i.e. 3) in that OrderedDict structure.</p>
<p>I was thinking about a cycle over all the tuples. This approach has, however, two problems:</p>
<ol>
<li>I do not know how to extract the object name,</li>
<li>it seems no so straightforward.</li>
</ol>
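<p>Since the dict stores the class objects themselves (not instances), they can be counted directly; <code>collections.Counter</code> over <code>.values()</code> does it in one pass, and the "object name" is just <code>obj1.__name__</code>:</p>

```python
import collections

class obj1:
    pass

class obj2:
    pass

my_test = collections.OrderedDict(
    [('1', obj1), ('2', obj1), ('3', obj1), ('4', obj2)]
)

counts = collections.Counter(my_test.values())
print(counts[obj1], counts[obj2])   # 3 1

# or without Counter:
n_obj1 = sum(1 for v in my_test.values() if v is obj1)
```

<p>If the dict ever holds instances instead of classes, count with <code>type(v)</code> or <code>isinstance(v, obj1)</code> rather than identity.</p>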
|
<python><counting>
|
2023-05-30 10:23:07
| 3
| 623
|
diedro
|
76,363,766
| 10,866,873
|
Tkinter get child widget on creation
|
<p>I am trying to get the widget that has just been created in a frame.</p>
<p>Using either the <code>&lt;Map&gt;</code> or <code>&lt;Configure&gt;</code> bindings will trigger <strong>only</strong> if I am binding <code>root (r)</code>; if I bind the <code>Frame (a)</code> then no event is triggered when creating new widgets inside.</p>
<pre class="lang-py prettyprint-override"><code>from tkinter import *
def test(event):
print(event, event.widget)
def add(w):
w.bind('<Configure>', test)
y = Button(w, text="Hello")
z = Button(w, text="World")
y.pack(side=TOP)
z.pack(side=TOP)
r = Tk()
r.geometry("400x400+100+100")
a = Frame(r)
a.pack(side=TOP, fill=X, expand=1)
w1=r
w2=a
b=Button(r, text="make new", command=lambda :add(w1)) ##switch w1 to w2 to see differences
b.pack(side=BOTTOM)
r.mainloop()
</code></pre>
<p>I don't think I could filter through the 10,000's of events triggered to somehow pick out the few I need without causing both high memory usage and huge lag.</p>
<p>Is there some way of triggering a configure event automatically when a widget is created inside?</p>
|
<python><tkinter>
|
2023-05-30 10:12:59
| 0
| 426
|
Scott Paterson
|
76,363,743
| 17,487,457
|
Retrieving data from two separate files and writing to a third csv file
|
<p>I have been thinking all day how to approach this task. I have these two files:</p>
<ol>
<li><code>user.plt:</code> contains timestamped GPS trajectory of user.</li>
<li><code>label.txt:</code> contains information about mode of travel used to cover user trips.</li>
</ol>
<p>The first file (<code>user.plt</code>) is a 7-field comma-separated data that looks like this:</p>
<pre><code>lat,lon,constant,alt,ndays,date,time
39.921712,116.472343,0,13,39298.1462037037,2007-08-04,03:30:32
39.921705,116.472343,0,13,39298.1462152778,2007-08-04,03:30:33
39.863516,116.373796,0,115,39753.1872916667,2008-11-01,04:29:42
39.863471,116.373711,0,112,39753.1873032407,2008-11-01,04:29:43
39.991778,116.333088,0,223,39753.2128240741,2008-11-01,05:06:28
39.991776,116.333031,0,223,39753.2128472222,2008-11-01,05:06:30
39.991568,116.331501,0,95,39756.4298611111,2008-11-04,10:19:00
39.99156,116.331508,0,95,39756.4298726852,2008-11-04,10:19:01
39.975891,116.333441,0,-98,39756.4312615741,2008-11-04,10:21:01
39.915171,116.455808,0,656,39756.4601157407,2008-11-04,11:02:34
39.915369,116.455791,0,620,39756.4601273148,2008-11-04,11:02:35
39.912271,116.470686,0,95,39756.4653587963,2008-11-04,11:10:07
39.912088,116.469958,0,246,39756.4681481481,2008-11-04,11:14:08
39.912106,116.469936,0,246,39756.4681597222,2008-11-04,11:14:09
39.912189,116.465108,0,184,39756.4741666667,2008-11-04,11:22:48
39.975859,116.334063,0,279,39756.6100115741,2008-11-04,14:38:25
39.975978,116.334041,0,272,39756.6100231481,2008-11-04,14:38:26
39.991336,116.331886,0,115,39756.6112847222,2008-11-04,14:40:15
39.991581,116.33131,0,164,39756.6123148148,2008-11-04,14:41:44
</code></pre>
<p>The second file (<code>label.txt</code>) is a tab separated 3 column of user trip info, and looks like:</p>
<pre><code>Start Time End Time Transportation Mode
2008/11/01 03:59:27 2008/11/01 04:30:18 train
2008/11/01 04:35:38 2008/11/01 05:06:30 taxi
2008/11/04 10:18:55 2008/11/04 10:21:11 subway
2008/11/04 11:02:34 2008/11/04 11:10:08 taxi
2008/11/04 11:14:08 2008/11/04 11:22:48 walk
</code></pre>
<p>I am looking for a way to read content of <code>user.plt</code> for the each period of a trip with travel mode annotation, and write to a <code>CSV</code> file this way:</p>
<ul>
<li><p>Read 1 row of <code>label.txt</code> (i.e travel mode info of particular trip). Create two fields <code>trip_id</code> initialised to <code>1</code> and <code>segment_id</code> also initialised to <code>1</code>.</p>
</li>
<li><p>Read each rows of <code>user.plt</code> whose date and time are within the interval of start-time/end-time from <code>label.txt</code> (i.e. get GPS traces of the trip).</p>
</li>
<li><p>Read the next row of <code>label.txt</code>.</p>
<ul>
<li>if the difference between end-time of previous row, and start-time of current row is less than 30 minutes (i.e. same trip, new segment), keep <code>trip_id</code> as <code>1</code>, update <code>segment_id</code> to <code>2</code>.</li>
<li>if the difference between end-time of previous row and the start time of current row is more than 30 minutes (then new trip, new segment), update <code>trip_id = 2</code> and <code>segment_id = 1</code>.</li>
</ul>
</li>
<li><p>Each time, write the values into a <code>CSV</code> file in the form:</p>
<p><code>trip_id, segment_id, lat, lon, date, time, transportation-mode</code></p>
</li>
</ul>
<p><strong>Expected result</strong></p>
<p>Given the 2 input files above, the expected CSV file (<code>processed.csv</code>) would be:</p>
<pre><code>trip_id,segment_id,lat,lon,date,time,transportation-mode
1,1,39.863516,116.373796,2008-11-01,04:29:42,train
1,1,39.863471,116.373711,2008-11-01,04:29:43,train
1,2,39.991778,116.333088,2008-11-01,05:06:28,taxi
1,2,39.991776,116.333031,2008-11-01,05:06:30,taxi
2,1,39.991568,116.331501,2008-11-04,10:19:00,subway
2,1,39.99156,116.331508,2008-11-04,10:19:01,subway
2,1,39.975891,116.333441,2008-11-04,10:21:01,subway
3,1,39.915171,116.455808,2008-11-04,11:02:34,taxi
3,1,39.915369,116.455791,2008-11-04,11:02:35,taxi
3,1,39.912271,116.470686,2008-11-04,11:10:07,taxi
3,2,39.912088,116.469958,2008-11-04,11:14:08,walk
3,2,39.912106,116.469936,2008-11-04,11:14:09,walk
3,2,39.912189,116.465108,2008-11-04,11:22:48,walk
</code></pre>
<p>N.B.: Not all rows of <code>user.plt</code> have corresponding trip info in <code>label.txt</code>. These rows are ignored and not needed.</p>
<p><strong>EDIT</strong></p>
<p>Below I give the data in form of dictionary as advised on the comment.</p>
<ol>
<li><code>user.plt</code>:</li>
</ol>
<pre><code>{'lat': [39.921712,39.921705,39.863516,39.863471,39.991778,39.991776,
39.991568,39.99156,39.975891,39.915171,39.915369,39.912271,39.912088,
39.912106,39.912189,39.975859,39.975978,39.991336,39.991581],
'lon': [116.472343,116.472343,116.373796,116.373711,116.333088,116.333031,
116.331501,116.331508,116.333441,116.455808,116.455791,116.470686,116.469958,
116.469936,116.465108,116.334063,116.334041,116.331886,116.33131],
'constant': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'alt': [13,13,115,112,223,223,95,95,-98,656,620,95,246,246,184,279,272,115,164],
'ndays': [39298.1462037037,39298.1462152778,39753.1872916667,39753.1873032407,
39753.2128240741,39753.2128472222,39756.4298611111,39756.4298726852,39756.4312615741,
39756.4601157407,39756.4601273148,39756.4653587963,39756.4681481481,39756.4681597222,
39756.4741666667,39756.6100115741,39756.6100231481,39756.6112847222,39756.6123148148],
'date': ['2007-08-04','2007-08-04','2008-11-01','2008-11-01','2008-11-01','2008-11-01',
'2008-11-04','2008-11-04','2008-11-04','2008-11-04','2008-11-04','2008-11-04',
'2008-11-04','2008-11-04','2008-11-04','2008-11-04','2008-11-04','2008-11-04','2008-11-04'],
'time': ['03:30:32','03:30:33','04:29:42','04:29:43','05:06:28','05:06:30','10:19:00',
'10:19:01','10:21:01','11:02:34','11:02:35','11:10:07','11:14:08','11:14:09','11:22:48',
'14:38:25','14:38:26','14:40:15','14:41:44']}
</code></pre>
<ol start="2">
<li><code>label.txt</code>:</li>
</ol>
<pre><code>{'Start Time': ['2008/11/01 03:59:27',
'2008/11/01 04:35:38',
'2008/11/04 10:18:55',
'2008/11/04 11:02:34',
'2008/11/04 11:14:08'],
'End Time': ['2008/11/01 04:30:18',
'2008/11/01 05:06:30',
'2008/11/04 10:21:11',
'2008/11/04 11:10:08',
'2008/11/04 11:22:48'],
'Transportation Mode': ['train', 'taxi', 'subway', 'taxi', 'walk']}
</code></pre>
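<p>The steps above boil down to: walk the label rows in order, bump the trip/segment ids by the 30-minute rule, and emit the plt rows whose timestamp falls inside each interval. A stdlib sketch over a small in-memory sample (write <code>rows</code> out with <code>csv.writer</code> afterwards):</p>

```python
from datetime import datetime, timedelta

def assign(points, labels, gap=timedelta(minutes=30)):
    """points: (lat, lon, datetime); labels: (start, end, mode) sorted by start."""
    rows, trip_id, seg_id, prev_end = [], 0, 0, None
    for start, end, mode in labels:
        if prev_end is None or start - prev_end > gap:
            trip_id, seg_id = trip_id + 1, 1   # new trip, first segment
        else:
            seg_id += 1                        # same trip, new segment
        prev_end = end
        for lat, lon, ts in points:
            if start <= ts <= end:             # GPS fix inside this labeled interval
                rows.append((trip_id, seg_id, lat, lon,
                             ts.date().isoformat(), ts.time().isoformat(), mode))
    return rows

f = lambda s: datetime.strptime(s, "%Y/%m/%d %H:%M:%S")
labels = [(f("2008/11/01 03:59:27"), f("2008/11/01 04:30:18"), "train"),
          (f("2008/11/01 04:35:38"), f("2008/11/01 05:06:30"), "taxi"),
          (f("2008/11/04 10:18:55"), f("2008/11/04 10:21:11"), "subway")]
points = [(39.863516, 116.373796, f("2008/11/01 04:29:42")),
          (39.991778, 116.333088, f("2008/11/01 05:06:28")),
          (39.991568, 116.331501, f("2008/11/04 10:19:00"))]

rows = assign(points, labels)
# -> trip 1 seg 1 (train), trip 1 seg 2 (taxi), trip 2 seg 1 (subway)
```

<p>The taxi interval starts 5 minutes after the train interval ends, so it stays in trip 1 as segment 2; the subway interval three days later starts trip 2, matching the expected CSV.</p>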
|
<python><python-3.x><csv><file><file-io>
|
2023-05-30 10:09:59
| 2
| 305
|
Amina Umar
|
76,363,493
| 7,089,108
|
Plotly. Animated 3D surface plots
|
<p>I want to make a 3D animation using Surface of Plotly.</p>
<p>However, I run into two issues:
(1) When I press play, the figure is only updated at the second frame.
(2) I see all the previous frames as well. I just want to see one frame.</p>
<p>What do I need to change? Below is a minimal example, which highlights my issues.</p>
<pre><code>import plotly.graph_objects as go
from plotly.graph_objs import *
import numpy as np
data = np.random.rand(10, 10,10)
fr = np.arange(10)
layout = go.Layout(
paper_bgcolor='rgba(0,0,0,0)',
plot_bgcolor='rgba(0,0,0,0)'
)
fig = go.Figure(layout=layout)
frames = []
for i in range(len(data)):
z = data[i]
sh_0, sh_1 = z.shape
x, y = np.linspace(0, 1, sh_0), np.linspace(0, 1, sh_1)
trace = go.Surface(
z=z,
x=x,
y=y,
opacity=1,
colorscale="Viridis",
colorbar=dict(title="Counts"),
cmin=0,
cmax=1
)
frame = go.Frame(data=[trace], layout=go.Layout(title=f"Frame: {fr[i]}"))
frames.append(frame)
fig.add_trace(trace)
fig.frames = frames
fig.update_layout(
autosize=False,
width=800,
height=800,
margin=dict(l=65, r=50, b=65, t=90)
)
zoom = 1.35
fig.update_layout(
scene={
"xaxis": {"nticks": 20},
"zaxis": {"nticks": 4},
"camera_eye": {"x": 0.1, "y": 0.4, "z": 0.25},
"aspectratio": {"x": 0.4 * zoom, "y": 0.4 * zoom, "z": 0.25 * zoom}
}
)
fig.update_layout(
updatemenus=[
dict(
type='buttons',
buttons=[
dict(
label='Play',
method='animate',
args=[None, {'frame': {'duration': 500, 'redraw': True}, 'fromcurrent': True, 'transition': {'duration': 0}}]
)
]
)
]
)
fig.show()
</code></pre>
|
<python><animation><plotly>
|
2023-05-30 09:37:11
| 1
| 433
|
cerv21
|
76,363,452
| 9,620,095
|
Display automatic row height adjustment. XLSXWRITER (python)
|
<p>How can I get automatic row height adjustment?
I tried using 'text_wrap': True, but it doesn't work for me. Also, I don't want to set_row() statically because the data is dynamic.</p>
<p>This is the option in Excel which I want .</p>
<p><a href="https://i.sstatic.net/s3wbL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/s3wbL.png" alt="enter image description here" /></a></p>
<p>For example:</p>
<pre><code> cell_format = workbook.add_format()
cell_format.set_text_wrap()
sheet.write(2, 0, "Some long text to wrap in a cell Some long text to wrap in a cell Some long text to wrap in a cell Some long text to wrap in a cell Some long text to wrap in a cell Some long text to wrap in a cell", cell_format)
sheet.write(4, 0, "It's\na bum\nwrapIt's\na bum\nwrapIt's\na bum\nwrapIt's\na bum\nwrapIt's\na bum\nwrapIt's\na bum\nwrapIt's\na bum\nwrap ", cell_format)
</code></pre>
<p>This is the output of text_wrap</p>
<p><a href="https://i.sstatic.net/VVzlv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VVzlv.png" alt="enter image description here" /></a></p>
<p>What I need is this:
<a href="https://i.sstatic.net/RaAwy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RaAwy.png" alt="enter image description here" /></a></p>
<p>Any help please?</p>
<p>Thanks .</p>
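<p>As far as I know, XlsxWriter cannot auto-fit row heights: Excel computes "AutoFit Row Height" at render time, and the file format only stores explicit heights. The common workaround is estimating the wrapped line count from text length and column width, then calling <code>set_row()</code> dynamically. A sketch (the 15-point line height and character-based width are rough approximations, not exact Excel metrics):</p>

```python
import math

DEFAULT_LINE_HEIGHT = 15   # points; Excel's default row height

def estimated_row_height(text, col_width_chars):
    """Rough height for a wrapped cell: explicit newlines plus overflow wraps."""
    lines = 0
    for part in text.split('\n'):
        lines += max(1, math.ceil(len(part) / col_width_chars))
    return lines * DEFAULT_LINE_HEIGHT

# hypothetical usage with the question's worksheet and format:
# sheet.set_row(2, estimated_row_height(long_text, 64), cell_format)
print(estimated_row_height("Some long text " * 12, 64))   # 45
```

<p>For a dynamic sheet, run the estimator per row as you write it; the result will be close to, but not pixel-identical with, Excel's own AutoFit.</p>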
|
<python><xlsxwriter>
|
2023-05-30 09:31:14
| 0
| 631
|
Ing
|
76,363,436
| 610,569
|
cannot import name 'PartialState' from 'accelerate' when using Huggingface pipeline on Kaggle notebook?
|
<p>When import pipeline from Huggingface on Kaggle notebook,</p>
<pre><code>from transformers import pipeline
</code></pre>
<p>it throws this error:</p>
<pre><code>/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/__init__.py:98: UserWarning: unable to load libtensorflow_io_plugins.so: unable to open file: libtensorflow_io_plugins.so, from paths: ['/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so']
caused by: ['/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so: undefined symbol: _ZN3tsl6StatusC1EN10tensorflow5error4CodeESt17basic_string_viewIcSt11char_traitsIcEENS_14SourceLocationE']
warnings.warn(f"unable to load libtensorflow_io_plugins.so: {e}")
/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/__init__.py:104: UserWarning: file system plugins are not loaded: unable to open file: libtensorflow_io.so, from paths: ['/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/libtensorflow_io.so']
caused by: ['/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/libtensorflow_io.so: undefined symbol: _ZTVN10tensorflow13GcsFileSystemE']
warnings.warn(f"file system plugins are not loaded: {e}")
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1172, in _LazyModule._get_module(self, module_name)
1171 try:
-> 1172 return importlib.import_module("." + module_name, self.__name__)
1173 except Exception as e:
File /opt/conda/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File <frozen importlib._bootstrap>:1050, in _gcd_import(name, package, level)
File <frozen importlib._bootstrap>:1027, in _find_and_load(name, import_)
File <frozen importlib._bootstrap>:1006, in _find_and_load_unlocked(name, import_)
File <frozen importlib._bootstrap>:688, in _load_unlocked(spec)
File <frozen importlib._bootstrap_external>:883, in exec_module(self, module)
File <frozen importlib._bootstrap>:241, in _call_with_frames_removed(f, *args, **kwds)
File /opt/conda/lib/python3.10/site-packages/transformers/pipelines/__init__.py:44
35 from ..utils import (
36 HUGGINGFACE_CO_RESOLVE_ENDPOINT,
37 is_kenlm_available,
(...)
42 logging,
43 )
---> 44 from .audio_classification import AudioClassificationPipeline
45 from .automatic_speech_recognition import AutomaticSpeechRecognitionPipeline
File /opt/conda/lib/python3.10/site-packages/transformers/pipelines/audio_classification.py:21
20 from ..utils import add_end_docstrings, is_torch_available, logging
---> 21 from .base import PIPELINE_INIT_ARGS, Pipeline
24 if is_torch_available():
File /opt/conda/lib/python3.10/site-packages/transformers/pipelines/base.py:36
35 from ..image_processing_utils import BaseImageProcessor
---> 36 from ..modelcard import ModelCard
37 from ..models.auto.configuration_auto import AutoConfig
File /opt/conda/lib/python3.10/site-packages/transformers/modelcard.py:48
32 from .models.auto.modeling_auto import (
33 MODEL_FOR_AUDIO_CLASSIFICATION_MAPPING_NAMES,
34 MODEL_FOR_CAUSAL_LM_MAPPING_NAMES,
(...)
46 MODEL_FOR_ZERO_SHOT_IMAGE_CLASSIFICATION_MAPPING_NAMES,
47 )
---> 48 from .training_args import ParallelMode
49 from .utils import (
50 MODEL_CARD_NAME,
51 cached_file,
(...)
57 logging,
58 )
File /opt/conda/lib/python3.10/site-packages/transformers/training_args.py:67
66 if is_accelerate_available():
---> 67 from accelerate import PartialState
68 from accelerate.utils import DistributedType
ImportError: cannot import name 'PartialState' from 'accelerate' (/opt/conda/lib/python3.10/site-packages/accelerate/__init__.py)
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
Cell In[1], line 2
1 from transformers import AutoModelForSequenceClassification, AutoTokenizer
----> 2 from transformers import pipeline
File <frozen importlib._bootstrap>:1075, in _handle_fromlist(module, fromlist, import_, recursive)
File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1162, in _LazyModule.__getattr__(self, name)
1160 value = self._get_module(name)
1161 elif name in self._class_to_module.keys():
-> 1162 module = self._get_module(self._class_to_module[name])
1163 value = getattr(module, name)
1164 else:
File /opt/conda/lib/python3.10/site-packages/transformers/utils/import_utils.py:1174, in _LazyModule._get_module(self, module_name)
1172 return importlib.import_module("." + module_name, self.__name__)
1173 except Exception as e:
-> 1174 raise RuntimeError(
1175 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
1176 f" traceback):\n{e}"
1177 ) from e
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):
cannot import name 'PartialState' from 'accelerate' (/opt/conda/lib/python3.10/site-packages/accelerate/__init__.py)
</code></pre>
<p><strong>How do I resolve this error?</strong></p>
|
<python><pipeline><huggingface-transformers><kaggle>
|
2023-05-30 09:29:24
| 3
| 123,325
|
alvas
|
76,363,394
| 7,657,180
|
Import packages in Pyscript framework
|
<p>I have this HTML page that uses the PyScript framework:</p>
<pre><code><!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF8">
<title>PyScript Hello, World!</title>
<link rel="stylesheet" href="https://pyscript.net/alpha/pyscript.css" />
<script defer src="https://pyscript.net/alpha/pyscript.js"></script>
<py-env>
- numpy
- matplotlib
</py-env>
</head>
<body>
<py-script>
print('Hello World')
</py-script>
<py-script>
a = 0
b = 1
for _ in range(10):
    print(a)
    a, b = b, a + b
</py-script>
<py-script>
import numpy as np
import matplotlib.pyplot as plt
x, y = np.random.random(100), np.random.random(100)
plt.scatter(x, y)
plt.gcf()
</py-script>
</body>
</html>
</code></pre>
<p>But when loading the page I got an error <code>JsException(PythonError: Traceback (most recent call last): File "/lib/python3.10/site-packages/_pyodide/_base.py", line 429, in eval_code .run(globals, locals) File "/lib/python3.10/site-packages/_pyodide/_base.py", line 300, in run coroutine = eval(self.code, globals, locals) File "", line 1, in ModuleNotFoundError: No module named 'numpy' )</code> at the end of the page</p>
|
<python><pyscript>
|
2023-05-30 09:25:31
| 1
| 9,608
|
YasserKhalil
|
76,363,375
| 661,716
|
numba jitclass with dictionary key tuple
|
<p>The code below works with @jit, but fails with jitclass at Dict.empty. Does jitclass not support dictionaries with tuple keys, even though @jit does?</p>
<pre><code>from numba import types, njit
from numba.experimental import jitclass
from numba.typed import Dict
from numba import int64, float64

spec = [
    # key: time, s
    ('values', types.DictType(types.Tuple((int64, types.unicode_type)), float64))
]

@jitclass(spec)
class Data(object):
    def __init__(self):
        self.values = Dict.empty(
            key_type=types.Tuple((int64, types.unicode_type)),
            value_type=float64,
        )

    def add_value(self, time: int, s: str, value: float):
        self.values[(time, s)] = value

    def get_value(self, time: int, s: str):
        return self.values.get((time, s), None)

    def iterate_values(self):
        for (time, s), value in self.values.items():
            print(f"At time {time}, s {s} had a value of {value}")

# Instantiate the class
data = Data()

# Add some sample data
data.add_value(20230530, "s", 150.45)
print(data.get_value(20230530, "s"))  # Outputs: 150.45

# Iterate over all values
data.iterate_values()
</code></pre>
|
<python><numba><jit>
|
2023-05-30 09:23:03
| 0
| 1,226
|
tompal18
|
76,363,333
| 10,215,160
|
How to preserve multiindex order on join in pandas?
|
<p>Consider The following Example Code:</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({'data1':[1,2,3,4]}, index=pd.MultiIndex.from_product([['a','b'],[1,2],], names=['index1','Index2']))
df2 = pd.DataFrame({'data2':[5,6]}, index=pd.MultiIndex.from_product([[1,2]], names=['Index2']))
df3 = df1.join(df2, how='left',sort=False)
</code></pre>
<p>If I print <code>df1</code>, The index is in Order as I expect it:</p>
<pre><code>               data1
index1 Index2
a      1           1
       2           2
b      1           3
       2           4
</code></pre>
<p>Now I want to add additional data with df2:</p>
<pre><code>        data2
Index2
1           5
2           6
</code></pre>
<p>but after the join with <code>df2</code>, the order of the index levels has changed: Index2 is now the first level. I explicitly tried to prevent this with <code>sort=False</code>, but it still promotes the joined index level to the first position:</p>
<pre><code>print(df3)

               data1  data2
Index2 index1
1      a           1      5
       b           3      5
2      a           2      6
       b           4      6
</code></pre>
<p>The pandas documentation states that <code>how='left'</code> should accomplish this, but it does not seem to work here.</p>
<p>Is there a way I can enforce the resulting index levels of the join to be in the same order as df1? Like this:</p>
<pre><code>               data1  data2
index1 Index2
a      1           1      5
       2           2      6
b      1           3      5
       2           4      6
</code></pre>
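<p>For reference, one workaround sketch (not necessarily the only way): go through <code>merge</code>, which keeps the left frame's row order with <code>how='left'</code>, then rebuild the index so the levels come back in df1's order:</p>

```python
import pandas as pd

df1 = pd.DataFrame(
    {'data1': [1, 2, 3, 4]},
    index=pd.MultiIndex.from_product([['a', 'b'], [1, 2]], names=['index1', 'Index2']),
)
df2 = pd.DataFrame(
    {'data2': [5, 6]},
    index=pd.MultiIndex.from_product([[1, 2]], names=['Index2']),
)

# merge with how='left' preserves the left frame's row order, and
# set_index restores df1's original level order afterwards.
df3 = (df1.reset_index()
          .merge(df2.reset_index(), on='Index2', how='left')
          .set_index(df1.index.names))
print(df3)
```

<p>This keeps both the row order and the level order of df1.</p>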
|
<python><python-3.x><pandas>
|
2023-05-30 09:17:20
| 2
| 1,486
|
Sandwichnick
|
76,363,301
| 12,764,964
|
What is the logic behind AWS Sagemaker error responses using boto3 python?
|
<p>I am working with the AWS SageMaker API using boto3 and Python 3.10. One of my goals is to implement proper error handling for several cases, to make method calls idempotent.</p>
<p>However, while exploring the errors, I found that they are inconsistent.</p>
<p>So my question is: what logic is behind these errors, and where can I read about how AWS implements them?</p>
<p>For example, I am trying to call API action for resources which do not exist:</p>
<p>Request:</p>
<pre><code>import boto3
s = boto3.Session()
sm = s.client("sagemaker")
sm.delete_pipeline(PipelineName="tressdgfsd")
</code></pre>
<p>Response - expected, ResourceNotFound:</p>
<pre><code>botocore.errorfactory.ResourceNotFound: An error occurred (ResourceNotFound) when calling the DeletePipeline operation: Pipeline '***' does not exist.
</code></pre>
<p>Request:</p>
<pre><code>sm.list_trial_components(TrialName="sdfds")
</code></pre>
<p>Response - expected, ResourceNotFound:</p>
<pre><code>botocore.errorfactory.ResourceNotFound: An error occurred (ResourceNotFound) when calling the ListTrialComponents operation: Trial '***' does not exist.
</code></pre>
<p>Request:</p>
<pre><code>sm.delete_trial(TrialName='sdfgdsfgds')
</code></pre>
<p>Response - expected, ResourceNotFound:</p>
<pre><code>An error occurred (ResourceNotFound) when calling the DeleteTrial operation: Trial '****' does not exist.
</code></pre>
<p>Request:</p>
<pre><code>sm.delete_model(ModelName="testfdgdfgfd")
</code></pre>
<p>Response - NOT EXPECTED, ClientError:</p>
<pre><code>botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the DeleteModel operation: Could not find model "***".
</code></pre>
<p>Request:</p>
<pre><code>sm.delete_endpoint_config(EndpointConfigName="testdfgdfgdf")
</code></pre>
<p>Response - NOT EXPECTED, ClientError:</p>
<pre><code>botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the DeleteEndpointConfig operation: Could not find endpoint configuration "***".
</code></pre>
<p>Request:</p>
<pre><code>sm.delete_model_package_group(ModelPackageGroupName="testasfsd")
</code></pre>
<p>Response - NOT EXPECTED, ClientError:</p>
<pre><code>botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the DeleteModelPackageGroup operation: ModelPackageGroup *** does not exist.
</code></pre>
<p>Request:</p>
<pre><code>sm.delete_model_package(ModelPackageName="testasfsd")
</code></pre>
<p>Response - NOT EXPECTED, ClientError:</p>
<pre><code>botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the DeleteModelPackage operation: ModelPackage *** does not exist.
</code></pre>
<p>To find more information,</p>
<ul>
<li><p>I checked the official <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker.html" rel="nofollow noreferrer">boto3 documentation for Sagemaker</a>, but I could not find relevant information.</p>
</li>
<li><p>I checked related questions here, e.g. the most relevant one - <a href="https://stackoverflow.com/questions/33068055/how-to-handle-errors-with-boto3">How to handle errors with boto3?</a> and all resources mentioned there</p>
</li>
<li><p>Raised an issue to AWS Support in my account.</p>
</li>
</ul>
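<p>For reference, until the behavior is documented, one defensive pattern is to normalize on the error payload rather than the exception class. <code>is_not_found</code> below is my own helper; the string checks are heuristics based on the messages shown above, not an official contract:</p>

```python
def is_not_found(error_response):
    """Return True if a botocore error response looks like 'resource does
    not exist', covering both the ResourceNotFound code and the
    ValidationException variants SageMaker raises for some Delete* calls."""
    err = error_response.get("Error", {})
    code = err.get("Code", "")
    msg = err.get("Message", "")
    if code == "ResourceNotFound":
        return True
    return code == "ValidationException" and (
        "does not exist" in msg or "Could not find" in msg
    )

# Usage sketch with botocore:
# try:
#     sm.delete_model(ModelName="my-model")
# except botocore.exceptions.ClientError as e:
#     if not is_not_found(e.response):
#         raise  # a real error; "not found" is fine for an idempotent delete
```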
|
<python><python-3.x><amazon-web-services><boto3><amazon-sagemaker>
|
2023-05-30 09:13:34
| 1
| 553
|
Kyrylo Kravets
|
76,362,880
| 8,248,194
|
Pandas assign with comprehension giving unexpected results
|
<p>I want to use assign in pandas passing a dictionary comprehension.</p>
<p>Reproducible example:</p>
<pre class="lang-py prettyprint-override"><code>
import pandas as pd

df = pd.DataFrame({
    "a": [1, 2, 3],
    "b": [4, 5, 6],
    "weight": [0.1, 0.2, 0.3]
})

metrics = ["a", "b"]

df = df.assign(
    **{
        f"weighted_{metric}": lambda df: df[metric] * df["weight"]
        for metric in metrics
    }
)

print(df)
</code></pre>
<p>Results:</p>
<pre class="lang-py prettyprint-override"><code>   a  b  weight  weighted_a  weighted_b
0  1  4     0.1         0.4         0.4
1  2  5     0.2         1.0         1.0
2  3  6     0.3         1.8         1.8
</code></pre>
<p>I don't get the expected results: for <code>weighted_a</code> I should get 0.1, 0.4, 0.9 (i.e. a * weight), but both new columns contain b * weight instead.</p>
<p>Do you know how I can compute weighted_a correctly?</p>
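<p>For reference, this is the classic late-binding closure pitfall: every lambda reads the loop variable <code>metric</code> when it is called, by which point the loop has finished and <code>metric</code> is "b". Binding the value as a default argument fixes it — a sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "a": [1, 2, 3],
    "b": [4, 5, 6],
    "weight": [0.1, 0.2, 0.3]
})

metrics = ["a", "b"]

# metric=metric captures the current loop value at definition time,
# so each lambda keeps its own column name instead of the last one.
df = df.assign(
    **{
        f"weighted_{metric}": lambda df, metric=metric: df[metric] * df["weight"]
        for metric in metrics
    }
)
print(df)
```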
|
<python><pandas><dataframe>
|
2023-05-30 08:18:09
| 3
| 2,581
|
David Masip
|
76,362,792
| 277,176
|
How to intern objects with weakref/gc support?
|
<p>I'm trying to implement interning of non-string objects in Python. For strings we have the <code>sys.intern</code> function; however, it doesn't support other immutable objects. To get this out of the way: I'm aware of the problems that may arise if interned objects are modified.</p>
<p>A commonly cited way of interning for non-strings is:</p>
<pre><code>S = {}
x = S.setdefault(x, x)
</code></pre>
<p>However such a <code>dict</code> holds strong references to the interned objects, and therefore will grow indefinitely, which is a problem in my application.</p>
<p>I figured out that there's a <code>WeakSet</code> type that would auto-collect unreferenced elements. However there doesn't seem to be a fast way to get the held object in the set:</p>
<pre><code>S = WeakSet()
S.add(x) # returns nothing
# a very slow way to work around it:
for y in S:
    if y == x:
        x = y
</code></pre>
<p>There are also <code>WeakKeyDictionary</code> and <code>WeakValueDictionary</code>, but there doesn't seem to be a <code>WeakKeyValueDictionary</code>.</p>
<p>So how can this be achieved in python?</p>
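<p>For reference, one possible sketch: store weak references keyed by themselves in a plain dict. <code>weakref.ref</code> hashes and compares via the referent while it is alive (and retains its cached hash afterwards), and a death callback evicts dead entries. Objects must be hashable and weak-referenceable — instances of ordinary classes qualify, plain tuples do not. <code>Interner</code> and <code>Point</code> are my own names, not standard-library API:</p>

```python
import weakref

class Interner:
    """Intern hashable, weak-referenceable objects without keeping them
    alive: the table maps each weakref to itself, and a death callback
    removes the entry when the referent is collected."""

    def __init__(self):
        self._refs = {}

    def intern(self, obj):
        def _remove(wr, selfref=weakref.ref(self)):
            table = selfref()
            if table is not None:
                table._refs.pop(wr, None)
        wr = weakref.ref(obj, _remove)
        # Lookup hashes wr via the (alive) referent; the hash is cached
        # inside the weakref, so eviction still works after death.
        existing = self._refs.setdefault(wr, wr)
        found = existing()
        return found if found is not None else obj

class Point:
    """Example interned value type: hashable, immutable by convention,
    and weak-referenceable via the __weakref__ slot."""
    __slots__ = ("x", "y", "__weakref__")

    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

    def __hash__(self):
        return hash((self.x, self.y))
```

<p>Usage: <code>p = interner.intern(Point(1, 2))</code> twice returns the same object, and the table entry disappears once the last strong reference is dropped.</p>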
|
<python><garbage-collection><weak-references>
|
2023-05-30 08:06:24
| 1
| 72,849
|
Yakov Galka
|
76,362,768
| 2,646,881
|
"in" clause, best (fastest) target type for checking
|
<p>I was working on a project and wanted to print a "summary" of the warnings that occurred during processing at the end.</p>
<p>What I did was store Enums in a list; then, for each list member, there is an <code>if</code> in the final <code>print_warnings()</code> method. Something like this:</p>
<pre class="lang-py prettyprint-override"><code>class MyClass:
    def __init__(self):
        self.warnings = []

    def foo(self):
        do_something()
        if it_failed:
            self.warnings.append(WarningEnum.foo_failed)

    def print_warnings(self):
        if WarningEnum.foo_failed in self.warnings:
            print('Very long')
            print('Error message')
            print('With explanation')
</code></pre>
<p>I wouldn't really care whether <code>self.warnings</code> is a <code>tuple</code>, a <code>list</code>, or something else.
Is there any performance difference when using the <code>in</code> operator with different types, like <code>tuple</code> or <code>list</code>? Maybe <code>tuple</code> is faster than <code>list</code>, or <code>list</code> is for some reason better than <code>tuple</code>. Maybe something completely different is more efficient (like making it a <code>string</code> and using <code>unicode</code> chars instead of an <code>Enum</code>).</p>
<hr />
<p>Edit: Yes, I know <code>warnings</code> shouldn't be initialized this way. I just wanted to save few lines of code in example :)</p>
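<p>For reference, a quick measurement sketch: membership in a <code>list</code> or <code>tuple</code> is a linear scan (O(n)), while a <code>set</code> does a hash lookup (O(1) on average). For a handful of warning flags any container is fine, but a set is the natural fit for "was this flag seen". The sizes and counts below are arbitrary:</p>

```python
import timeit

warnings_list = list(range(1000))
warnings_tuple = tuple(warnings_list)
warnings_set = set(warnings_list)

# Probe the worst-case element (last in the sequence) so the linear
# scan has to walk the whole container.
t_list = timeit.timeit(lambda: 999 in warnings_list, number=10_000)
t_tuple = timeit.timeit(lambda: 999 in warnings_tuple, number=10_000)
t_set = timeit.timeit(lambda: 999 in warnings_set, number=10_000)
print(f"list: {t_list:.4f}s  tuple: {t_tuple:.4f}s  set: {t_set:.4f}s")
```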
|
<python>
|
2023-05-30 08:02:59
| 1
| 418
|
rRr
|
76,362,664
| 1,342,516
|
Exclude overlapping intervals with numpy
|
<p>I have two lists of intervals. I would like to subtract the ranges in list2 from the ranges in list1.
Example:</p>
<p>List1:</p>
<blockquote>
<p>[(0,10),(15,20)]</p>
</blockquote>
<p>List2:</p>
<blockquote>
<p>[(2,3),(5,6)]</p>
</blockquote>
<p>Output:</p>
<blockquote>
<p>[(0,2),(3,5),(6,10),(15,20)]</p>
</blockquote>
<p>The same question has been asked for Java (<a href="https://stackoverflow.com/questions/16304245/exclude-overlapping-intervals">Exclude overlapping intervals</a>),
but I want to do it with numpy.</p>
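<p>As a baseline (plain Python rather than vectorized numpy — a numpy version could be built on <code>searchsorted</code> over the boundary points), a sweep that assumes both lists are sorted and internally non-overlapping reproduces the expected output. <code>subtract_intervals</code> is my own name:</p>

```python
def subtract_intervals(keep, remove):
    """Cut every interval in `remove` out of the intervals in `keep`.
    Assumes both lists are sorted and internally non-overlapping."""
    out = []
    for a, b in keep:
        cur = a  # left edge of the part of [a, b] not yet emitted
        for c, d in remove:
            # Clip the interval to cut against [a, b].
            c, d = max(c, a), min(d, b)
            if c >= d:
                continue  # no overlap with [a, b]
            if cur < c:
                out.append((cur, c))
            cur = max(cur, d)
        if cur < b:
            out.append((cur, b))
    return out

print(subtract_intervals([(0, 10), (15, 20)], [(2, 3), (5, 6)]))
# [(0, 2), (3, 5), (6, 10), (15, 20)]
```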
|
<python><numpy>
|
2023-05-30 07:51:01
| 2
| 539
|
user1342516
|
76,362,600
| 525,865
|
getting data out of clutch.co: with BS4 and requests failed:
|
<p>I am trying to gather the data from the page <code>https://clutch.co/il/it-services</code>. That said, I think there are probably several options to do that:</p>
<p>a. using <code>bs4</code> and requests, or b. using pandas.</p>
<p>The first approach below uses option a.</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd
url = "https://clutch.co/il/it-services"
response = requests.get(url)
soup = BeautifulSoup(response.content, "html.parser")
company_names = soup.find_all("h3", class_="company-name")
locations = soup.find_all("span", class_="locality")
company_names_list = [name.get_text(strip=True) for name in company_names]
locations_list = [location.get_text(strip=True) for location in locations]
data = {"Company Name": company_names_list, "Location": locations_list}
df = pd.DataFrame(data)
df.to_csv("it_services_data.csv", index=False)
</code></pre>
<p>This code should:</p>
<p>a. scrape the company names and locations from the specified webpage, b. store them in a Pandas DataFrame, and c. save the data to a CSV file named <code>it_services_data.csv</code> in the current working directory.</p>
<p>But I ended up with an empty results file. In fact, the file is completely empty.</p>
<p>What I did was the following:</p>
<p>1. Install the required packages:</p>
<pre><code>pip install beautifulsoup4 requests pandas
</code></pre>
<ol start="2">
<li><p>Import the necessary libraries:</p>
<p><code>import requests</code></p>
<p><code>from bs4 import BeautifulSoup</code></p>
<p><code>import pandas as pd </code></p>
</li>
<li><p>Send a GET request to the webpage and retrieve the HTML content:</p>
<p><code>url = "https://clutch.co/il/it-services" </code></p>
<p><code>response = requests.get(url) </code></p>
</li>
<li><p>Create a BeautifulSoup object to parse the HTML content:</p>
<p><code>soup = BeautifulSoup(response.content, "html.parser")</code></p>
</li>
<li><p>Identify the HTML elements containing the data we want to scrape. Inspect the webpage's source code to find the relevant tags and attributes. For example, let's assume we want to extract the company names and their respective locations. In this case, the company names are contained in tags with the class name "company-name" and the locations are in tags with the class name "locality":</p>
<p><code>company_names = soup.find_all("h3", class_="company-name")</code></p>
<p><code>locations = soup.find_all("span", class_="locality")</code></p>
</li>
<li><p>Extract the data from the HTML elements and store them in lists:</p>
<p><code>company_names_list = [name.get_text(strip=True) for name in company_names] locations_list = [location.get_text(strip=True) for location in locations]</code></p>
</li>
<li><p>Create a Pandas DataFrame to organize the extracted data:</p>
<p><code>data = {"Company Name": company_names_list, "Location": locations_list}</code></p>
<p><code>df = pd.DataFrame(data)</code></p>
</li>
</ol>
<p>8. Optionally, you can perform further data processing or analysis using the Pandas DataFrame, or export the data to a file. For example, to save the data to a CSV file:</p>
<pre><code>df.to_csv("it_services_data.csv", index=False)
</code></pre>
<p>That's it! That was all I did. I thought that with this approach I would be able to scrape the company names and their locations from the specified webpage using Python with the Beautiful Soup, Requests, and Pandas packages.</p>
<p>Well, I also need the URL of each company,
and if I could gather even a bit more data, that would be great.</p>
<p>Update: many thanks to badduker, awesome. I tried it out in Colab, and after installing the <code>cloudscraper</code> package and running the code, I got back the following:</p>
<pre><code>ERROR:root:Internal Python error in the inspect module.
Below is the traceback from this internal error.
ERROR:root:Internal Python error in the inspect module.
Below is the traceback from this internal error.
ERROR:root:Internal Python error in the inspect module.
Below is the traceback from this internal error.
cloudscraper.exceptions.CloudflareChallengeError: Detected a Cloudflare version 2 Captcha challenge, This feature is not available in the opensource (free) version.
During handling of the above exception, another exception occurred:
AttributeError: 'CloudflareChallengeError' object has no attribute '_render_traceback_'
During handling of the above exception, another exception occurred:
AssertionError
cloudscraper.exceptions.CloudflareChallengeError: Detected a Cloudflare version 2 Captcha challenge, This feature is not available in the opensource (free) version.
During handling of the above exception, another exception occurred:
AttributeError: 'CloudflareChallengeError' object has no attribute '_render_traceback_'
During handling of the above exception, another exception occurred:
TypeError: object of type 'NoneType' has no len()
During handling of the above exception, another exception occurred:
AttributeError: 'TypeError' object has no attribute '_render_traceback_'
During handling of the above exception, another exception occurred:
AssertionError
cloudscraper.exceptions.CloudflareChallengeError: Detected a Cloudflare version 2 Captcha challenge, This feature is not available in the opensource (free) version.
During handling of the above exception, another exception occurred:
AttributeError: 'CloudflareChallengeError' object has no attribute '_render_traceback_'
During handling of the above exception, another exception occurred:
TypeError: object of type 'NoneType' has no len()
During handling of the above exception, another exception occurred:
AttributeError: 'TypeError' object has no attribute '_render_traceback_'
During handling of the above exception, another exception occurred:
TypeError: object of type 'NoneType' has no len()
During handling of the above exception, another exception occurred:
AttributeError: 'TypeError' object has no attribute '_render_traceback_'
During handling of the above exception, another exception occurred:
AssertionError
</code></pre>
|
<python><pandas><beautifulsoup><python-requests>
|
2023-05-30 07:44:58
| 1
| 1,223
|
zero
|
76,362,579
| 7,257,731
|
Wrapper over Flask endpoint is very slow
|
<p>I have a Flask service made with <a href="https://editor-next.swagger.io/" rel="nofollow noreferrer">Swagger Editor</a> and Swagger Codegen (python flask). The code generated uses Connexion and Flask to serve the application.</p>
<p>This service has an endpoint that, on its own, answers the request in just 300ms (when tested both with a unit test and with curl).</p>
<pre><code>def endpoint():
    # Endpoint implementation
    ...
</code></pre>
<p>I also have a wrapper that makes a simple validation and takes just 200ms (when tested with a unit test).</p>
<pre><code>def validation_wrapper(param1, param2):
    def validation_wrapper_outer(func):
        @wraps(func)
        def validation_wrapper_inner(*args, **kwargs):
            # Do some validations
            ...
            return func(*args, **kwargs)
        return validation_wrapper_inner
    return validation_wrapper_outer
</code></pre>
<p>But when I combine the two, the response time skyrockets up to 2500ms (unit test and curl also). I need to keep it under 1000ms.</p>
<pre><code>@validation_wrapper(param1=1, param2=2)
def endpoint():
    # Endpoint implementation
    ...
</code></pre>
<p>What could be causing this problem? Is there another way I can implement this functionality but improving the response times?</p>
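<p>For reference, a first step I would try is measuring each layer inside the running app rather than via curl. A minimal timing decorator (my own sketch, not part of Flask or Connexion) can wrap the validation wrapper and the endpoint separately to show where the time goes; the <code>time.sleep</code> below is just a stand-in for real work:</p>

```python
import time
from functools import wraps

def timed(label):
    """Print how long the wrapped callable takes, in milliseconds."""
    def outer(func):
        @wraps(func)
        def inner(*args, **kwargs):
            t0 = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                print(f"{label}: {(time.perf_counter() - t0) * 1000:.1f} ms")
        return inner
    return outer

@timed("endpoint")
def endpoint():
    time.sleep(0.01)  # stand-in for the real endpoint body
    return "ok"

print(endpoint())
```

<p>Stacking <code>@timed(...)</code> around both the wrapper and the bare endpoint would show whether the extra 2000 ms is spent inside the decorator, inside the endpoint, or in the framework layer between them.</p>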
|
<python><flask><wsgi><swagger-codegen><connexion>
|
2023-05-30 07:42:01
| 1
| 392
|
Samuel O.D.
|
76,362,522
| 13,396,497
|
Read XML parent and child tag value from log file if child tag value condition match in Python
|
<p>I have a log file that contains both XML and plain text. I want to read the XML under the conditions below and extract some values into a data frame.</p>
<ul>
<li>Read only the messages in which a <code><NEW_DATA></code> tag is present inside the <code><Data></code> tag; ignore all others.</li>
<li>Then keep only those messages where <code>ItemState = 'ACTIVE'</code> and <code>ItemDisplayState = 'ACTIVE'</code>.</li>
<li>Also, create some fields from the <code><Transactions></code> tag: the first <code>TransactionID</code> is always <code>Main_ItemID</code>, and the second <code>TransactionID</code> is <code>Second_ItemID</code>.
<ul>
<li>If the next <code>'TransactionType' = 'POINT'</code>, then the following 4th & 5th <code>TransactionID</code> values go into one column (<code>Third_ItemID</code>) until the next <code>'TransactionType' = 'POINT'</code>.</li>
</ul>
</li>
</ul>
<p>Snippet of the log file to be parsed:</p>
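<p>For reference, a sketch of the first two rules (my own assumptions: each message starts on its own timestamped <code>SENT:</code> line and is complete XML; the POINT-grouping rule for <code>Third_ItemID</code> would need an extra pass over <code>tx_ids</code>; the column names follow the question). The log snippet itself follows below:</p>

```python
import re
import xml.etree.ElementTree as ET

def parse_log(text):
    """Split the log on its timestamped 'SENT:' prefixes and parse each
    <AutomationMessage> chunk as standalone XML, keeping only ACTIVE
    NEW_DATA messages."""
    chunks = re.split(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d+ - SENT: ", text)
    rows = []
    for chunk in chunks:
        chunk = chunk.strip()
        if not chunk.startswith("<AutomationMessage"):
            continue
        root = ET.fromstring(chunk)
        for new_data in root.iter("NEW_DATA"):
            if (new_data.findtext(".//ItemState") != "ACTIVE"
                    or new_data.findtext(".//ItemDisplayState") != "ACTIVE"):
                continue
            tx_ids = [t.findtext("TransactionID")
                      for t in new_data.iter("Transaction")]
            rows.append({
                "DeliveryID": new_data.findtext(".//DeliveryID"),
                "Main_ItemID": tx_ids[0] if tx_ids else None,
                "Second_ItemID": tx_ids[1] if len(tx_ids) > 1 else None,
            })
    return rows
```

<p><code>pd.DataFrame(parse_log(text))</code> would then give the data frame.</p>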
<pre><code>2023-01-29 18:00:01,091 - SENT: <AutomationMessage xmlns:xsi="2001/XMLSchema-instance" xmlns="">
<Header>
<SequenceNumber>146</SequenceNumber>
<Timestamp>1674986401</Timestamp>
<PVersion>500</PVersion>
<DeskName>DISPLAY-A</DeskName>
</Header>
<Data>
<AT_INFO xmlns:xsi="2001/XMLSchema-instance">
<MessageID>AT_INFO</MessageID>
<MessageTime>1674986401</MessageTime>
</AT_INFO>
</Data>
<AB33>1194238753</AB33>
</AutomationMessage>
2023-01-29 18:07:05,411 - SENT: <AutomationMessage xmlns:xsi="2001/XMLSchema-instance" xmlns="">
<Header>
<Timestamp>1674986825</Timestamp>
<DeskName>DISPLAY-A</DeskName>
</Header>
<Data>
<NEW_DATA xmlns:xsi="2001/XMLSchema-instance">
<MessageID>NEW_DATA</MessageID>
<Action>UPDATE</Action>
<Data>
<DeliveryID mi_key="DELIVERY_ID" read_only="true">AB011027 - 1015</DeliveryID>
<ItemName mi_key="ITEM_NAME" read_only="true">XYZ200</ItemName>
<ItemID mi_key="ID_HAL">1015</ItemID>
<Transactions mi_key="TRANSACTION_LIST" read_only="true">
<Transaction>
<TransactionPosition mi_key="TRANSACTION_POSITION">1</TransactionPosition>
<TransactionType mi_key="TRANSACTION_TYPE">ONLINE</TransactionType>
<TransactionID mi_key="TRANSACTION_ID">1015</TransactionID>
</Transaction>
<Transaction>
<TransactionPosition mi_key="TRANSACTION_POSITION">2</TransactionPosition>
<TransactionType mi_key="TRANSACTION_TYPE">ONLINE</TransactionType>
<TransactionID mi_key="TRANSACTION_ID">1005</TransactionID>
</Transaction>
</Transactions>
<NumberOfPackages mi_key="ONLINE_COUNT" read_only="true">3</NumberOfPackages>
<PointIDA mi_key="POINT_ID_A" read_only="true">23</PointIDA>
<PointIDB mi_key="POINT_ID_B" read_only="true">19</PointIDB>
<DeliveryDirection mi_key="DIRECTION_DELIVERY" read_only="true">SOUTH</DeliveryDirection>
<StartLocation mi_key="LOCATION_START" read_only="true">
<Station>ABC</Station>
<Road>
<Kilometerage>12484</Kilometerage>
</Road>
</StartLocation>
<ItemState mi_key="ITEM_STATE" read_only="true">ACTIVE</ItemState>
<ItemDisplayState mi_key="ITEM_DISPLAY_STATE">STARTING</ItemDisplayState>
</Data>
</NEW_DATA>
</Data>
</AutomationMessage>
2023-01-29 18:07:05,415 - SENT: <AutomationMessage xmlns:xsi="2001/XMLSchema-instance" xmlns="">
<Header>
<Timestamp>1674986825</Timestamp>
<DeskName>DISPLAY-B</DeskName>
</Header>
<Data>
<NEW_DATA xmlns:xsi="2001/XMLSchema-instance">
<MessageID>NEW_DATA</MessageID>
<Action>UPDATE</Action>
<Data>
<DeliveryID mi_key="DELIVERY_ID" read_only="true">AB011027 - 1015</DeliveryID>
<ItemName mi_key="ITEM_NAME" read_only="true">XYZ200</ItemName>
<ItemID mi_key="ID_HAL">1015</ItemID>
<Transactions mi_key="TRANSACTION_LIST" read_only="true">
<Transaction>
<TransactionPosition mi_key="TRANSACTION_POSITION">1</TransactionPosition>
<TransactionType mi_key="TRANSACTION_TYPE">ONLINE</TransactionType>
<TransactionID mi_key="TRANSACTION_ID">1015</TransactionID>
<TransactionDisplay mi_key="DISPLAY_ONLINE">SHORT</TransactionDisplay>
</Transaction>
<Transaction>
<TransactionPosition mi_key="TRANSACTION_POSITION">2</TransactionPosition>
<TransactionType mi_key="TRANSACTION_TYPE">ONLINE</TransactionType>
<TransactionID mi_key="TRANSACTION_ID">1005</TransactionID>
<TransactionDisplay mi_key="DISPLAY_ONLINE">LONG</TransactionDisplay>
</Transaction>
<Transaction>
<TransactionPosition mi_key="TRANSACTION_POSITION">3</TransactionPosition>
<TransactionType mi_key="TRANSACTION_TYPE">POINT</TransactionType>
<TransactionID mi_key="TRANSACTION_ID">23</TransactionID>
</Transaction>
<Transaction>
<TransactionPosition mi_key="TRANSACTION_POSITION">4</TransactionPosition>
<TransactionType mi_key="TRANSACTION_TYPE">ONLINE</TransactionType>
<TransactionID mi_key="TRANSACTION_ID">1013</TransactionID>
<TransactionDisplay mi_key="DISPLAY_ONLINE">SHORT</TransactionDisplay>
</Transaction>
<Transaction>
<TransactionPosition mi_key="TRANSACTION_POSITION">5</TransactionPosition>
<TransactionType mi_key="TRANSACTION_TYPE">ONLINE</TransactionType>
<TransactionID mi_key="TRANSACTION_ID">1019</TransactionID>
<TransactionDisplay mi_key="DISPLAY_ONLINE">SHORT</TransactionDisplay>
</Transaction>
<Transaction>
<TransactionPosition mi_key="TRANSACTION_POSITION">6</TransactionPosition>
<TransactionType mi_key="TRANSACTION_TYPE">POINT</TransactionType>
<TransactionID mi_key="TRANSACTION_ID">19</TransactionID>
</Transaction>
</Transactions>
<NumberOfPackages mi_key="ONLINE_COUNT" read_only="true">3</NumberOfPackages>
<NumberOfItems mi_key="ITEM_COUNT" read_only="true">240</NumberOfItems>
<PointIDA mi_key="POINT_ID_A" read_only="true">23</PointIDA>
<PointIDB mi_key="POINT_ID_B" read_only="true">19</PointIDB>
<DeliveryDirection mi_key="DIRECTION_DELIVERY" read_only="true">SOUTH</DeliveryDirection>
<StartLocation mi_key="LOCATION_START" read_only="true">
<Station>ABC</Station>
<Line>T3</Line>
<Road>
<Kilometerage>12484</Kilometerage>
</Road>
</StartLocation>
<ItemState mi_key="ITEM_STATE" read_only="true">ACTIVE</ItemState>
<ItemDisplayState mi_key="ITEM_DISPLAY_STATE">ACTIVE</ItemDisplayState>
</Data>
</NEW_DATA>
</Data>
</AutomationMessage>
2023-01-29 18:07:25,908 - SENT: <AutomationMessage xmlns:xsi="2001/XMLSchema-instance" xmlns="">
<Header>
<Timestamp>1674986845</Timestamp>
<DeskName>DISPLAY-A</DeskName>
</Header>
<Data>
<NEW_DATA xmlns:xsi="2001/XMLSchema-instance">
<MessageID>NEW_DATA</MessageID>
<Action>UPDATE</Action>
<Data>
<DeliveryID mi_key="DELIVERY_ID" read_only="true">AB011028 - 1011</DeliveryID>
<ItemName mi_key="ITEM_NAME" read_only="true">XYZ333</ItemName>
<ItemID mi_key="ID_HAL">1011</ItemID>
<Transactions mi_key="TRANSACTION_LIST" read_only="true">
<Transaction>
<TransactionPosition mi_key="TRANSACTION_POSITION">1</TransactionPosition>
<TransactionType mi_key="TRANSACTION_TYPE">ONLINE</TransactionType>
<TransactionID mi_key="TRANSACTION_ID">1011</TransactionID>
<TransactionDisplay mi_key="DISPLAY_ONLINE">SHORT</TransactionDisplay>
</Transaction>
<Transaction>
<TransactionPosition mi_key="TRANSACTION_POSITION">2</TransactionPosition>
<TransactionType mi_key="TRANSACTION_TYPE">ONLINE</TransactionType>
<TransactionID mi_key="TRANSACTION_ID">1097</TransactionID>
<TransactionDisplay mi_key="DISPLAY_ONLINE">LONG</TransactionDisplay>
</Transaction>
<Transaction>
<TransactionPosition mi_key="TRANSACTION_POSITION">3</TransactionPosition>
<TransactionType mi_key="TRANSACTION_TYPE">POINT</TransactionType>
<TransactionID mi_key="TRANSACTION_ID">23</TransactionID>
</Transaction>
<Transaction>
<TransactionPosition mi_key="TRANSACTION_POSITION">4</TransactionPosition>
<TransactionType mi_key="TRANSACTION_TYPE">ONLINE</TransactionType>
<TransactionID mi_key="TRANSACTION_ID">1877</TransactionID>
<TransactionDisplay mi_key="DISPLAY_ONLINE">SHORT</TransactionDisplay>
</Transaction>
<Transaction>
<TransactionPosition mi_key="TRANSACTION_POSITION">5</TransactionPosition>
<TransactionType mi_key="TRANSACTION_TYPE">POINT</TransactionType>
<TransactionID mi_key="TRANSACTION_ID">19</TransactionID>
</Transaction>
</Transactions>
<NumberOfPackages mi_key="ONLINE_COUNT" read_only="true">3</NumberOfPackages>
<PointIDA mi_key="POINT_ID_A" read_only="true">23</PointIDA>
<PointIDB mi_key="POINT_ID_B" read_only="true">19</PointIDB>
<DeliveryDirection mi_key="DIRECTION_DELIVERY" read_only="true">NORTH</DeliveryDirection>
<StartLocation mi_key="LOCATION_START" read_only="true">
<Station>ABC</Station>
<Road>
<Kilometerage>13555</Kilometerage>
</Road>
</StartLocation>
<ItemState mi_key="ITEM_STATE" read_only="true">ACTIVE</ItemState>
<ItemDisplayState mi_key="ITEM_DISPLAY_STATE">ACTIVE</ItemDisplayState>
</Data>
</NEW_DATA>
</Data>
<AB33>1507126111</AB33>
</AutomationMessage>
2023-01-29 18:30:59,731 - SENT: <AutomationMessage xmlns:xsi="2001/XMLSchema-instance" xmlns="">
<Header>
<Timestamp>1674988259</Timestamp>
<DeskName>DISPLAY-A</DeskName>
</Header>
<Data>
<AT_ITEM_SCHEDULES xmlns:xsi="2001/XMLSchema-instance">
<MessageID>AT_ITEM_SCHEDULES</MessageID>
<ItemData>
<Component>ITEM_DULE</Component>
<Identifier></Identifier>
<ItemDataComponent>
<ComponentData>
<ItemData>
<DeliveryID mi_key="DELIVERY_ID" read_only="true">AB011027 - 1015</DeliveryID>
<ItemName mi_key="ITEM_NAME" read_only="true">XYZ200</ItemName>
<ItemID mi_key="ID_HAL">1015</ItemID>
<Transactions mi_key="TRANSACTION_LIST" read_only="true">
<Transaction>
<TransactionPosition mi_key="TRANSACTION_POSITION">1</TransactionPosition>
<TransactionType mi_key="TRANSACTION_TYPE">ONLINE</TransactionType>
<TransactionID mi_key="TRANSACTION_ID">1015</TransactionID>
<TransactionDisplay mi_key="DISPLAY_ONLINE">SHORT</TransactionDisplay>
</Transaction>
</Transactions>
<DeliveryDirection mi_key="DIRECTION_DELIVERY" read_only="true">SOUTH</DeliveryDirection>
<StartLocation mi_key="LOCATION_START" read_only="true">
<Station>ABC</Station>
<Line>_T3</Line>
<Road>
<RoadID>93</RoadID>
<RoadName>93</RoadName>
<Kilometerage>12484</Kilometerage>
</Road>
</StartLocation>
<Mode mi_key="ITEM_MODE" read_only="true">CBS</Mode>
<ItemState mi_key="ITEM_STATE" read_only="true">ACTIVE</ItemState>
<ItemDisplayState mi_key="ITEM_DISPLAY_STATE">ACTIVE</ItemDisplayState>
</ItemData>
</ComponentData>
</ItemDataComponent>
</ItemData>
</AT_ITEM_SCHEDULES>
</Data>
</AutomationMessage>
2023-01-29 18:30:45,110 - SENT: <AutomationMessage xmlns:xsi="/2001/XMLSchema-instance" xmlns="">
<Header>
<Timestamp>66666666</Timestamp>
<DeskName>ROC-AMMI-Z</DeskName>
</Header>
<Data>
<NEW_DATA xmlns:xsi="/2001/XMLSchema-instance">
<MessageID>AUT_TRAIN_DATA</MessageID>
<Action>UPDATE</Action>
<Data>
<DeliveryID mi_key="DELIVERY_ID">RH_1032_B - 1040</DeliveryID>
<ItemName mi_key="ITEM_NAME">XYZ1040</ItemName>
<ItemID mi_key="ID_HAL">1040</ItemID>
<Transactions mi_key="TRANSACTION_LIST">
<Transaction>
<TransactionPosition mi_key="TRANSACTION_POSITION">1</TransactionPosition>
<TransactionType mi_key="TRANSACTION_TYPE">ONLINE</TransactionType>
<TransactionID mi_key="TRANSACTION_ID">1040</TransactionID>
<TransactionDisplay mi_key="DISPLAY_ONLINE">SHORT_HOOD</TransactionDisplay>
</Transaction>
<Transaction>
<TransactionPosition mi_key="TRANSACTION_POSITION">2</TransactionPosition>
<TransactionType mi_key="TRANSACTION_TYPE">ONLINE</TransactionType>
<TransactionID mi_key="TRANSACTION_ID">1039</TransactionID>
<TransactionDisplay mi_key="DISPLAY_ONLINE">LONG_HOOD</TransactionDisplay>
</Transaction>
</Transactions>
<NumberOfPackages mi_key="ONLINE_COUNT">2</NumberOfPackages>
<PointIDA mi_key="RAKE_ID_A">15</PointIDA>
<DeliveryDirection mi_key="DIRECTION_DELIVERY">NORTH</DeliveryDirection>
<StartLocation mi_key="LOCATION_START" read_only="true">
<Station>ABC</Station>
<Road>
<Kilometerage>99999</Kilometerage>
</Road>
</StartLocation>
<ItemState mi_key="ITEM_STATE">ACTIVE</ItemState>
<ItemDisplayState mi_key="ITEM_DISPLAY_STATE">ACTIVE</ItemDisplayState>
</Data>
</NEW_DATA>
</Data>
<AB33>3336487589</AB33>
</AutomationMessage>
</code></pre>
<p>Output I am looking for:</p>
<pre><code>Timestamp DeliveryID Main_ItemID Second_ItemID Third_ItemID ItemName Kilometerage
1674986825 AB011027 - 1015 1015 1005 1013,1019 XYZ200 12484
1674986845 AB011028 - 1011 1011 1097 1877 XYZ333 13555
66666666 RH_1032_B - 1040 1040 1039 XYZ1040 99999
</code></pre>
<p>I have written the logic for the 1st and 2nd conditions, but I am not sure how to read the tag values for the 3rd condition:</p>
<pre><code>import re
import xml.etree.ElementTree as ET

with open('ITEM.xml', 'r') as f:
s = f.read()
s = re.sub(r'<\?xml.*?>', '', ''.join(re.findall(r'<.*>', s)))
s = '<root>'+s+'</root>'
root = ET.fromstring(s)
for items_data in root.findall("./AutomationMessage/Data/NEW_DATA/Data/[ItemState='ACTIVE'][ItemDisplayState='ACTIVE']"):
# Logic to read the tag values with above 3rd condition
</code></pre>
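For the 3rd condition, one possible sketch (using a hypothetical minimal `<Data>` fragment with the same tag layout; note that `findtext` matches by tag name and ignores the `mi_key` attributes) would read the child values like this:

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal fragment mirroring one <Data> item from the log above
xml = """<Data>
  <DeliveryID>AB011027 - 1015</DeliveryID>
  <ItemName>XYZ200</ItemName>
  <ItemID>1015</ItemID>
  <Transactions>
    <Transaction><TransactionID>1015</TransactionID></Transaction>
    <Transaction><TransactionID>1005</TransactionID></Transaction>
  </Transactions>
  <StartLocation><Road><Kilometerage>12484</Kilometerage></Road></StartLocation>
  <ItemState>ACTIVE</ItemState>
  <ItemDisplayState>ACTIVE</ItemDisplayState>
</Data>"""

item = ET.fromstring(xml)
if (item.findtext("ItemState") == "ACTIVE"
        and item.findtext("ItemDisplayState") == "ACTIVE"):
    main_id = item.findtext("ItemID")
    row = {
        "DeliveryID": item.findtext("DeliveryID"),
        "ItemName": item.findtext("ItemName"),
        "Main_ItemID": main_id,
        # every TransactionID except the main ItemID
        "Other_ItemIDs": [t.findtext("TransactionID")
                          for t in item.findall("Transactions/Transaction")
                          if t.findtext("TransactionID") != main_id],
        "Kilometerage": item.findtext("StartLocation/Road/Kilometerage"),
    }
    print(row)
```

Each `row` dict could then become one line of the desired DataFrame.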
|
<python><xml><dataframe>
|
2023-05-30 07:33:44
| 2
| 347
|
RKIDEV
|
76,362,447
| 10,939,080
|
Python dataclasses: Allow mutation of existing field but disallow new fields
|
<p>I would like to write a dataclass that allows changing the value of fields that already exist, but prevents the addition of new fields.</p>
<p>I am on Python >= 3.10</p>
<pre><code>from dataclasses import dataclass
@dataclass
class Foo:
bar: int
baz: int
foo = Foo(bar=1, baz=0)
foo.baz = 2 # this should be okay
foo.qux = 'hey' # I want this to throw error
</code></pre>
<p>Setting <code>@dataclass(frozen=True)</code> cannot achieve my goal as it would also prevent mutation of values.
How should I accomplish this?</p>
|
<python><python-dataclasses>
|
2023-05-30 07:23:22
| 1
| 734
|
tyson.wu
|
76,362,377
| 5,080,612
|
Retrieving float and binary data sqlite3
|
<p>I have a database in sqlite</p>
<p><a href="https://i.sstatic.net/OWE0j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OWE0j.png" alt="enter image description here" /></a></p>
<p>This database contains strings, floats and a column with BLOB values. These BLOB values contain binary data that has to be converted to float numbers.</p>
<p>I was wondering what the most effective way to retrieve the data is, because if I try to retrieve all the required columns with only one query, all the data come back as the binary type.</p>
<pre><code>TDI  # name of the variable (string)
cn = _connect(file)
cur = cn.cursor()
query1 = f"""
SELECT ST.Name || '.' ||EL.Name AS TDI,VA.Value,VA.CursorOffsetMS, EL.Scaling,VA.BlobElems
FROM tblValues AS VA
LEFT JOIN tblElements AS EL ON VA.fkElements = EL.Id
LEFT JOIN tblStructures AS ST ON EL.fkStructures = ST.Id
WHERE TDI = ?
ORDER BY VA.CursorOffsetMs
"""
query2 = f"""
SELECT ST.Name || '.' ||EL.Name AS TDI,VA.BlobElems,VA.BlobValues
FROM tblValues AS VA
LEFT JOIN tblElements AS EL ON VA.fkElements = EL.Id
LEFT JOIN tblStructures AS ST ON EL.fkStructures = ST.Id
WHERE TDI = ?
ORDER BY VA.CursorOffsetMs
"""
cur.execute(query1,(TDI,))
float_data = np.array([(float(row[1]), row[2], row[3], row[4]) for row in cur.fetchall()])  # np.float was removed in NumPy 1.24
</code></pre>
<p>Now I could do the same to get the blob values by executing query 2. But then I have to convert the data to float.</p>
<p>As I am not expert in databases I am asking cause there must be a more direct approach to get all the data at once and cast it properly. In the end I need to get all the numerical info as floats</p>
<p>I have been checking similar problems and the documentation from sqlite but so far I do not get anything clear.</p>
<p>Any info or tip is welcomed</p>
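If the BLOBs are packed float32 values (an assumption — the actual element size and byte order depend on how the writer stored them), `struct` can turn them into Python floats in one pass. A self-contained sketch against an in-memory database:

```python
import sqlite3
import struct

# Hypothetical round trip: pack four float32 values into a BLOB column
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (vals BLOB)")
blob = struct.pack("<4f", 1.0, 2.5, -3.0, 0.5)
con.execute("INSERT INTO t (vals) VALUES (?)", (blob,))

(raw,) = con.execute("SELECT vals FROM t").fetchone()
# '<' = little-endian, 'f' = 4-byte float; adjust if the writer differs
floats = list(struct.unpack(f"<{len(raw) // 4}f", raw))
print(floats)  # [1.0, 2.5, -3.0, 0.5]
```

The same unpacking can be applied per row to the `VA.BlobValues` column fetched by query 2.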
|
<python><sqlite><blob>
|
2023-05-30 07:11:21
| 0
| 1,104
|
gis20
|
76,362,291
| 4,495,790
|
How could skforecast.ForecasterAutoreg predict without lag?
|
<p>I'm now using <a href="https://skforecast.org/" rel="nofollow noreferrer">skforecast</a> to forecast a fairly typical multivariate time series. My training set <code>trainX</code>, <code>trainy</code> is 36 steps long, and I also have a 12-step-long test set <code>testX</code>, <code>testy</code>. I have trained my regressor with 12 historical steps backward (<code>lag</code>) as follows:</p>
<pre><code>model = ForecasterAutoreg(regressor=RandomForestRegressor(), lags=12)
model.fit(y=trainX.y, exog=trainX[trainX.columns[1:]])
</code></pre>
<p>Now, when I give predictions with this trained model on my test features</p>
<pre><code>len(testX) # =12
predy = model.predict(steps=steps, exog=testX[testX.columns[1:]])
len(predy) # =12 as well
</code></pre>
<p>it gives back 12 predicted steps, so equivalent to the length of the entire <code>testX</code>.</p>
<p>I'm confused at this point because my interpretation is that the prediction must be based on some <code>lag</code> steps back (12 in my case), which are precursors for the predictions but aren't part of the actual steps to be predicted. So I do not understand how my model can give forecasts for all my <code>testX</code> rows. What is the basis for the prediction of the first value of <code>predy</code>, for example? The prediction for <code>predy[0]</code> should be based on the 12 values before <code>testX[0]</code>, but there are evidently no 12 steps available before <code>testX[0]</code>.</p>
<p>What do I misunderstand here?</p>
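My understanding of the general recursive mechanism (a sketch, not skforecast's exact internals) is that the fitted forecaster remembers the last `lags` values of the *training* series, so the first test step is predicted from the training tail and later steps feed the forecaster's own predictions back in:

```python
# Toy recursive forecaster: the "model" here is just the mean of the window,
# standing in for the fitted regressor.
def recursive_forecast(train, steps, lags):
    window = list(train[-lags:])        # remembered from training
    preds = []
    for _ in range(steps):
        p = sum(window) / lags          # predict from the current window
        preds.append(p)
        window = window[1:] + [p]       # slide: feed the prediction back in
    return preds

print(recursive_forecast([1.0] * 12, steps=3, lags=12))  # [1.0, 1.0, 1.0]
```

This is why no test-period history is needed: the 12 "lag" values for `predy[0]` come from the end of the training series, not from before `testX[0]`.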
|
<python><time-series><forecasting>
|
2023-05-30 06:59:18
| 1
| 459
|
Fredrik
|
76,362,026
| 3,793,935
|
QR code detecting in python with OpenCV raises UnicodeDecodeError: 'utf-8' codec can't decode byte
|
<p>I have written a class to retrieve creditor data like the IBAN from an invoice PDF with a QR code on it.
It worked fine until we got a PDF that throws this error:</p>
<pre><code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xfc in position 157: invalid start byte
</code></pre>
<p>This happens when I try to process the PDF.</p>
<p>That is how I've done it:</p>
<pre><code>doc = fitz.open(self.image_path) # open document
i = 0
if not os.path.exists(f"./qr_codes/"):
os.makedirs(f"./qr_codes/")
for page in doc:
pix = page.get_pixmap(matrix=self.mat) # render page to an image
pix.save(f"./qr_codes/page_{i}.png")
img = cv2.imread(f"./qr_codes/page_{i}.png")
detect = cv2.QRCodeDetector()
text, points, straight_qrcode = detect.detectAndDecode(img)
if text:
# try to find a IBAN in one of the lines
self.iban = "\r\n".join([line for line in text.splitlines() if re.findall(r"CH\d{19}", line.strip())])
# try to find the reference number by joining all lines and searching for CH QRR \d+
# Also replace the CH QRR Stuff, because only the number is needed for SAP
ref_number = re.findall(r'CH\s*QRR\s*\d+|$', " ".join(text.splitlines()))
self.ref_number = int(re.sub(r"\D","", ref_number[0])) if ref_number else None
self.__save_values()
return True
i += 1
return False
</code></pre>
<p>Is there a way to strip the bytes somehow?</p>
<p>I've tried it via numpy array also:</p>
<pre><code> stream = open(f'./qr_codes/page_{i}.png', encoding="utf-8", errors="ignore")
stream = bytearray(stream.read(), encoding="utf-8")
detect = cv2.QRCodeDetector()
text, points, straight_qrcode = detect.detectAndDecode(numpy.asarray(stream, dtype=numpy.uint8))
# print(text)
</code></pre>
<p>But this way I only retrieve empty text, so I guess I'm doing something wrong here.
Could someone provide some ideas on how to solve the byte issue?</p>
<p><strong>Edit:</strong>
As asked, the full Traceback</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\m7073\Repos\Chronos_New\invoice_extraction\qr_code_scan.py", line 128, in <module>
qrcode.set_qr_values()
File "C:\Users\m7073\Repos\Chronos_New\invoice_extraction\qr_code_scan.py", line 73, in set_qr_values
text, points, straight_qrcode = detect.detectAndDecode(img)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xfc in position 157: invalid start byte
</code></pre>
<p><strong>Edit 2(minimal reproducable example):</strong></p>
<pre><code>import cv2
img = cv2.imread(f"page_1.png")
detect = cv2.QRCodeDetector()
text, points, straight_qrcode = detect.detectAndDecode(img)
</code></pre>
<p><a href="https://i.sstatic.net/bd4wR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bd4wR.png" alt="enter image description here" /></a></p>
|
<python><opencv><qr-code>
|
2023-05-30 06:15:51
| 2
| 499
|
user3793935
|
76,361,997
| 13,078,067
|
Is passing Exception instances as values considered pythonic
|
<p>I want to pass a caught <em>instance</em> of an <code>Exception</code> (or derived class) as an argument to some function, and I wonder if it is an idiomatic (pythonic) thing to do. In short, I have a system that runs different callbacks depending on the success of some other method call. In reduced form it looks like this:</p>
<pre class="lang-py prettyprint-override"><code>class Foo:
def bar(self):
pass
def callback_success(self):
pass
def callback_error(self):
pass
</code></pre>
<p>and it is used in some other part of my code like this:</p>
<pre class="lang-py prettyprint-override"><code>foo = Foo()
was_success = True
try:
foo.bar()
except Exception:
was_success = False
foo.callback_error()
if was_success:
foo.callback_success()
</code></pre>
<p>Now I would like to pass more information to the <code>callback_error</code> callback about what exactly went wrong. I came up with the idea of passing the caught exception as an argument of <code>callback_error</code>, i.e. doing something like this:</p>
<pre class="lang-py prettyprint-override"><code>def callback_error(self, ex: Exception):
pass
except Exception as e:
foo.callback_error(e)
</code></pre>
<p>I can see that this solution will work, but I wonder if it is a good idea to pass caught exceptions as values. So my question is, <strong>is this considered pythonic</strong> or not? The only thing that I can see wrong with this is that it <em>feels</em> a little bit odd. I have never seen such a "pattern" used by anybody else before. Are there other things I should know about that make this bad code? What would you think if you encountered code like this in your code base?</p>
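For what it's worth, the pattern can also drop the `was_success` flag by using `try`/`except`/`else`. A minimal self-contained sketch (the callback bodies here are hypothetical):

```python
class Foo:
    def bar(self):
        raise ValueError("boom")          # simulate a failure

    def callback_success(self):
        return "success"

    def callback_error(self, ex: Exception):
        # the instance carries type, message and (via __traceback__) the trace
        return f"failed: {type(ex).__name__}: {ex}"

foo = Foo()
try:
    foo.bar()
except Exception as e:
    result = foo.callback_error(e)        # pass the caught instance as a value
else:
    result = foo.callback_success()       # runs only when bar() did not raise

print(result)  # failed: ValueError: boom
```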
|
<python><exception>
|
2023-05-30 06:09:43
| 0
| 6,455
|
Aleksander Krauze
|
76,361,868
| 10,097,229
|
Cannot import module from partially initialized module
|
<p>I am writing an Azure Function where I have two files. The first file, <code>helperfunc_postapi</code>, contains some functions which return a value, and the second one calls the functions in the first file. I am cythonizing the first file (compiling it with the cython package) so that its contents are not visible to others.</p>
<p>The first file contains code like this and has file name <code>helperfunc_postapi.py</code>-</p>
<pre><code>import azure.functions as func
def function(a):
return a
</code></pre>
<p>This file is cythonized.</p>
<p>The second file contains code like this and has name <code>main.py</code>-</p>
<pre><code>from . import helperfunc_postapi
def test():
    return helperfunc_postapi.function('2')
print(test())
</code></pre>
<p>When running the second file, the output before cythonization is <code>2</code>, which is correct. But after we cythonize the file, we get the error <code>cannot import name helperfunc_postapi from partially initialized module</code>.</p>
<p><a href="https://i.sstatic.net/vkwHj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vkwHj.png" alt="enter image description here" /></a></p>
<p>In the second file, we even tried <code>from helperfunc_postapi import function</code> instead of <code>from . import helperfunc_postapi</code>, but then we get the error <code>no module named helperfunc_postapi</code>.</p>
<p>We have tried everything but there is a circular import error or partially initialized module error and we are not able to debug it.</p>
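As a self-contained sketch of the import style that matches the call `helperfunc_postapi.function('2')` (the in-memory stand-in module built here is purely hypothetical, just to make the snippet runnable without the real files):

```python
import sys
import types

# Hypothetical in-memory stand-in for the (cythonized) helperfunc_postapi
helper = types.ModuleType("helperfunc_postapi")
exec("def function(a):\n    return a", helper.__dict__)
sys.modules["helperfunc_postapi"] = helper

import helperfunc_postapi  # plain absolute import; the module is its own namespace

def test():
    return helperfunc_postapi.function('2')

print(test())  # 2
```

A plain absolute import like this avoids relative-import package requirements, which is one common source of "partially initialized module" errors when a file and a module share a name.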
|
<python><python-3.x><function><azure-functions>
|
2023-05-30 05:39:44
| 1
| 1,137
|
PeakyBlinder
|
76,361,820
| 8,278,075
|
How to add a fixed value to the Y axis in Altair?
|
<p>I'm new to Altair and I'd like to add two rule lines on the Y axis of a stock progression in my line chart.</p>
<p>The DataFrame <code>symbol_data</code> has Date, Symbol, Name, Close, and I added another column "StdDev" which is the standard deviation for the entire data set.</p>
<pre class="lang-py prettyprint-override"><code>chart = alt.Chart(symbol_data).mark_line(point=True).encode(
    x=alt.X("Date:T", title="Dates"),
    y=alt.Y("Close", title="Close price"),
)
line = alt.Chart(symbol_data).mark_rule(
color="red",
).encode(
y=alt.Y("mean(Close)" + StdDev),
)
line2 = alt.Chart(symbol_data).mark_rule(
color="blue",
).encode(
y=alt.Y("mean(Close)" - StdDev),
)
chart + line + line2
</code></pre>
<p>I'm trying to add the standard deviation lines by adding and subtracting from the Y axis values. I know this isn't valid but is there a way to do what I'm trying to do here?</p>
<pre class="lang-py prettyprint-override"><code>...
y=alt.Y("mean(Close)" + StdDev),
...
</code></pre>
<p>I've tried adding a mark area which is conceptually close to what I want but I want two rule lines.</p>
<pre class="lang-py prettyprint-override"><code>line2 = alt.Chart(symbol_data).mark_area(
opacity=0.5, color="gray"
).encode(
x=alt.X("Date:T"),
y=alt.Y("max(Close):Q"),
y2=alt.Y2("StdDev"),
)
</code></pre>
<p>Update: I was able to get the desired result but I had to add columns to the DataFrame; one for the +1 standard deviation and another for -1 standard deviation. I'd still like to know if I can add or subtract from the Y axis values.</p>
<pre class="lang-py prettyprint-override"><code> upper_std = alt.Chart(symbol_data).mark_rule(
color="green",
strokeDash=(6, 2)).encode(
y=alt.Y("Std_plus:Q"),
)
line = alt.Chart(symbol_data).mark_rule(
color="red",
strokeWidth=2,
strokeDash=(5, 2)).encode(
y=alt.Y("mean(Close):Q"),
)
lower_std = alt.Chart(symbol_data).mark_rule(
color="blue",
strokeDash=(6, 2)).encode(
y=alt.Y("Std_minus:Q"),
)
</code></pre>
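To avoid the extra columns, one option (sketched with hypothetical data; check your Altair version for `alt.datum` support as a literal encoding value) is to compute the constants in pandas first and hand plain numbers to the rule marks:

```python
import pandas as pd

# Hypothetical stand-in for symbol_data
symbol_data = pd.DataFrame({"Close": [10.0, 12.0, 11.0, 13.0]})

mean_close = symbol_data["Close"].mean()
std_dev = symbol_data["Close"].std()      # sample standard deviation

# Plain numbers; a rule could then use e.g. y=alt.datum(upper) instead of
# string arithmetic like "mean(Close)" + StdDev, which Altair cannot parse
upper = mean_close + std_dev
lower = mean_close - std_dev
print(round(upper, 2), round(lower, 2))  # 12.79 10.21
```

The shorthand string `"mean(Close)"` is parsed by Altair/Vega-Lite as an aggregate expression, so Python-side `+`/`-` on it can never work; the arithmetic has to happen either in pandas (as here) or in a Vega-Lite transform.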
|
<python><data-science><altair>
|
2023-05-30 05:27:49
| 1
| 3,365
|
engineer-x
|
76,361,397
| 1,743,837
|
mock.patch-ing the .append method of the Python list
|
<p>I am unit testing a single line of code within an if statement where I append an item onto a list if a variable has a specific value.</p>
<pre><code>foo = []
if bar == 'a':
foo.append(bar)
</code></pre>
<p>I would like to assert that such an append has been called. I have patched methods from a variety of sources before, but not methods belonging to basic Python data types. What class would I specify as the path for the mock.patch decorator?</p>
<pre><code>@mock.patch('append')
def test_appends_bar_to_foo(mock_append):
assert mock_append.called
</code></pre>
<p>With the above code, I get <code>TypeError: Need a valid target to patch. You supplied: 'append'</code></p>
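Methods of built-in types like `list` cannot be patched (their type is immutable at the C level), so the usual options are asserting on the list's state or passing in a `MagicMock` stand-in. A sketch with a hypothetical `add_if_a` helper wrapping the conditional append:

```python
from unittest import mock

def add_if_a(foo, bar):
    # hypothetical helper wrapping the conditional append under test
    if bar == 'a':
        foo.append(bar)

# Option 1 (usually preferred): assert on the resulting state
foo = []
add_if_a(foo, 'a')
assert foo == ['a']

# Option 2: hand in a MagicMock so the append call itself is recorded
foo_mock = mock.MagicMock()
add_if_a(foo_mock, 'a')
foo_mock.append.assert_called_once_with('a')
print("both assertions passed")
```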
|
<python><mocking><pytest><python-unittest><magicmock>
|
2023-05-30 03:13:29
| 1
| 1,295
|
nerdenator
|
76,361,324
| 473,923
|
Install Python 3.10 or 3.11 with OpenSSL on MAC Apple Silicon
|
<p>I have tried multiple times to get Python working on my Mac M2 and always have trouble with <code>openssl</code>.</p>
<p>I have used both Stack Overflow questions (which are out of date) and ChatGPT to guide me on installation and still cannot get a working environment.</p>
<p>I have tried with <code>pyenv</code> and <code>asdf</code>, and I know the failure happens during the Python compilation, not in the version managers themselves.</p>
<p>My goal is to write software using python 3.11.x and 3.10.x, the reason for 3.10 is that some of the software I wanted to use (WhisperAI) will not work on 3.11 so I will use virtual environments.</p>
<p>I am going to list every command I run in this post and show where it fails.</p>
<h2>remove any remnants of python</h2>
<pre class="lang-bash prettyprint-override"><code>sudo rm -rf /Library/Frameworks/Python.framework/Versions/3.10
sudo rm -rf /usr/local/bin/python*
sudo rm -rf /usr/local/bin/pip*
sudo rm -rf /usr/local/lib/python*
sudo rm -rf /usr/local/include/python*
sudo rm -rf /Library/Frameworks/Python.framework
sudo rm -rf ~/Library/Caches/pip
sudo rm -rf /Users/<username>/.local/lib/python*
sudo rm -rf /Users/<username>/.cache/pip
brew uninstall python
# done
which python3
# => /usr/bin/python3
python --version
# => Python 3.9.6
</code></pre>
<h2>Install Python</h2>
<pre class="lang-bash prettyprint-override"><code>brew install python@3.10
brew install python@3.11
python3 --version
Python 3.11.3
</code></pre>
<h2>Setup pyenv</h2>
<pre class="lang-bash prettyprint-override"><code>brew install pyenv
echo 'if command -v pyenv 1>/dev/null 2>&1; then eval "$(pyenv init -)"; fi' >> ~/.zshrc
source ~/.zshrc
</code></pre>
<h2>Find out what version of 3.10 and 3.11 to install</h2>
<pre class="lang-bash prettyprint-override"><code>pyenv install --list | grep 3.10
pyenv install --list | grep 3.11
# There is a 3.10.11 and 3.11.3
</code></pre>
<h2>Use pyenv to install latest versions</h2>
<blockquote>
<p>This is where the first issues with SSL start happening</p>
</blockquote>
<p>I get exactly the same issue with <code>3.10.11</code> and <code>3.11.3</code></p>
<pre class="lang-bash prettyprint-override"><code>pyenv install 3.11.3
python-build: use openssl@1.1 from homebrew
python-build: use readline from homebrew
Installing Python-3.11.3...
python-build: use tcl-tk from homebrew
python-build: use readline from homebrew
python-build: use zlib from xcode sdk
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/xxxxxxxxxx/.pyenv/versions/3.11.3/lib/python3.11/ssl.py", line 100, in <module>
import _ssl # if we can't import it, let the error propagate
^^^^^^^^^^^
ModuleNotFoundError: No module named '_ssl'
ERROR: The Python ssl extension was not compiled. Missing the OpenSSL lib?
Please consult to the Wiki page to fix the problem.
https://github.com/pyenv/pyenv/wiki/Common-build-problems
BUILD FAILED (OS X 13.3.1 using python-build 20180424)
Inspect or clean up the working tree at /var/folders/98/545gr7v16mscl5dhgtymzw5r0000gn/T/python-build.20230530123200.52419
Results logged to /var/folders/98/545gr7v16mscl5dhgtymzw5r0000gn/T/python-build.20230530123200.52419.log
Last 10 log lines:
File "/private/var/folders/98/545gr7v16mscl5dhgtymzw5r0000gn/T/python-build.20230530123200.52419/Python-3.11.3/Lib/hashlib.py", line 123, in __get_builtin_constructor
raise ValueError('unsupported hash type ' + name)
ValueError: unsupported hash type blake2s
Looking in links: /var/folders/98/545gr7v16mscl5dhgtymzw5r0000gn/T/tmpij1ox7r9
Processing /private/var/folders/98/545gr7v16mscl5dhgtymzw5r0000gn/T/tmpij1ox7r9/setuptools-65.5.0-py3-none-any.whl
Processing /private/var/folders/98/545gr7v16mscl5dhgtymzw5r0000gn/T/tmpij1ox7r9/pip-22.3.1-py3-none-any.whl
Installing collected packages: setuptools, pip
WARNING: The scripts pip3 and pip3.11 are installed in '/Users/xxxxxxxxxx/.pyenv/versions/3.11.3/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed pip-22.3.1 setuptools-65.5.0
</code></pre>
<h2>ASDF has the same problem as PyEnv</h2>
<pre class="lang-bash prettyprint-override"><code>asdf install python 3.11.3
</code></pre>
<p><a href="https://i.sstatic.net/awHh5.png" rel="noreferrer"><img src="https://i.sstatic.net/awHh5.png" alt="enter image description here" /></a></p>
<h2>Check what version of openssl is installed</h2>
<pre class="lang-bash prettyprint-override"><code>brew list --versions openssl
openssl@3 3.1.0
# also installed 1.1, but it does not show up when you list versions
brew install openssl@1.1
</code></pre>
<h2>Testing with <code>openssl@1.1</code> and <code>openssl@3</code></h2>
<p>I have attempted setting flags during compilation.</p>
<pre class="lang-bash prettyprint-override"><code># Attempt #1
ENV | grep FLAGS
LDFLAGS=-L/usr/local/opt/openssl@1.1/lib
CPPFLAGS=-I/usr/local/opt/openssl@1.1/include
# Attempt #2
ENV | grep FLAGS
LDFLAGS=-L/usr/local/opt/openssl@3/lib
CPPFLAGS=-I/usr/local/opt/openssl@3/include
</code></pre>
<h2>Using openssl@1.1</h2>
<p><a href="https://i.sstatic.net/EDyIK.png" rel="noreferrer"><img src="https://i.sstatic.net/EDyIK.png" alt="enter image description here" /></a></p>
<h2>Using openssl@3</h2>
<p><a href="https://i.sstatic.net/rCkau.png" rel="noreferrer"><img src="https://i.sstatic.net/rCkau.png" alt="enter image description here" /></a></p>
<h2>Maybe the issue is ARM64 vs AMD64?</h2>
<p>For a while, I thought the issue was related to an incompatible Homebrew installation.</p>
<p>My Mac M2 (Apple silicon) was initially set up 6 months ago using the Mac Migration Assistant from a 16 inch MacBook Pro (Intel).</p>
<p>The Mac Mini is on the ARM64 architecture.
The MacBook was on the AMD64 architecture.</p>
<p>When Homebrew transferred over, it kept working correctly, but maybe it was configured for a different architecture when it comes to compiling libraries.</p>
<p>I have reinstalled Homebrew, but I still cannot get PyEnv to install.</p>
<pre class="lang-bash prettyprint-override"><code># uninstall old (Intel Version)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/uninstall.sh)"
# removed reference to brew on my computer
sudo rm -rf /usr/local/bin/brew
# closed down my terminal and then started a new terminal session
</code></pre>
<h2>Re-install Homebrew</h2>
<pre class="lang-bash prettyprint-override"><code># install (Apple Version)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Reinstall openssl@1.1
brew install openssl@1.1
</code></pre>
<p>I also cleaned up some files that were suggested by 'brew doctor'</p>
|
<python><python-3.x><macos><openssl><pyenv>
|
2023-05-30 02:48:57
| 1
| 6,962
|
David Cruwys
|
76,360,982
| 2,571,019
|
Combine two videos in python and get frame-precision duration of the end of the first clip within the new combined clip
|
<p>I am trying to combine two videos (mov files) in Python and get the frame-precision duration of the end of the first clip within the new, combined clip so that I can skip to the end of the first clip/start of the second clip within the newly combined clip.</p>
<p>Here is how I combine the two videos.</p>
<pre><code>def concatenate_videos(video1_path, video2_path, output_path):
input1 = ffmpeg.input(video1_path)
input2 = ffmpeg.input(video2_path)
vid = '/Users/johnlawlor/projects/yesand/backend/media/videos/vid.mp4'
aud = '/Users/johnlawlor/projects/yesand/backend/media/videos/aud.mp3'
ffmpeg.concat(input1, input2, n=2, v=1, a=0).output(vid).run()
ffmpeg.concat(input1, input2, n=2, v=0, a=1).output(aud).run()
input1 = ffmpeg.input(vid)
input2 = ffmpeg.input(aud)
ffmpeg.concat(input1, input2, n=2, v=1, a=1).output(output_path).run()
</code></pre>
<p>Then I get the duration of the first clip.</p>
<pre><code>def _get_duration(local_file_path):
new_addition_metadata = ffmpeg.probe(local_file_path)
duration = float(new_addition_metadata['streams'][0]['duration'])
return duration
</code></pre>
<p>However, when I skip to the time that is returned by this logic, it skips to a few frames before the actual start of the second clip, so I get a jerky effect that seems like a bug.</p>
<p>I am currently using ffmpeg-python. I am open to any other Python libraries.</p>
<p>Any ideas?</p>
<p>Thanks!</p>
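One hedged idea: instead of trusting the float `duration` field, derive the cut point from the frame count and frame rate of the first clip (this assumes the stream reports `nb_frames` and `avg_frame_rate`, which not every container does). The arithmetic with exact fractions:

```python
from fractions import Fraction

# Hypothetical probe values for the first clip
nb_frames = 720
avg_frame_rate = Fraction(30000, 1001)   # 29.97 fps (NTSC)

# Frame-precise duration: frames divided by frames-per-second
duration = nb_frames / avg_frame_rate
print(float(duration))  # 24.024
```

Keeping the value as a `Fraction` until the final seek avoids the float rounding that can land a few frames early.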
|
<python><ffmpeg><video-processing>
|
2023-05-30 00:39:55
| 0
| 1,998
|
johnklawlor
|
76,360,381
| 7,920,004
|
Python if creates false positive result
|
<p>I'm trying to investigate my AWS resources by listing them all and checking whether they have the special <code>troux</code> tag.</p>
<p>When iterating through my results I'm getting duplicate rows for some resources, stating that the <code>troux</code> tag both does and doesn't exist at the same time.</p>
<pre><code>import boto3
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
#if using WSL2
boto3.setup_default_session(profile_name='profile')
client = boto3.client('cloudformation')
paginator = client.get_paginator('describe_stacks')
page_iterator = paginator.paginate()
result = []
for page in page_iterator:
for stack in page['Stacks']:
if "department_name_" in stack["StackName"]:
for dict in stack["Tags"]:
for key, value in dict.items():
if value in ("troux_id", "troux_uid"):
stack_with_troux = stack["StackName"] + " " + dict.get('Key') + ": " + dict.get('Value')
result.append(stack_with_troux)
else:
stack_without_troux = stack["StackName"] + " is missing troux tag!"
result.append(stack_without_troux)
print(result)
</code></pre>
<p><code>for dict in stack["Tags"]:</code> returns following data:</p>
<pre><code>{'Key': 'tag', 'Value': 'abc'}
{'Key': 'tag', 'Value': 'abc'}
{'Key': 'tag', 'Value': 'abc'}
{'Key': 'troux_id', 'Value': 'xyz'}
{'Key': 'tag', 'Value': 'xyz'}
{'Key': 'tag', 'Value': 'abc'}
{'Key'......
</code></pre>
<p>Final outcome is:</p>
<pre><code>department_name_resource_one is missing troux tag!
department_name_resource_one is missing troux tag!
department_name_resource_one is missing troux tag!
department_name_resource_one is missing troux tag!
department_name_resource_one troux_id: xyz
department_name_resource_one is missing troux tag!
</code></pre>
<p>The expected behaviour is that the loop stops as soon as <code>troux</code> is found for a stack; otherwise it prints <code>missing tag</code> once:</p>
<pre><code>department_name_resource_one is missing troux tag!
department_name_resource_two troux_id: xyz
department_name_resource_three is missing troux tag!
</code></pre>
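The duplicate rows come from the `else` branch firing once per non-matching tag. A sketch of a per-stack check that emits exactly one line (with hypothetical stack dicts standing in for the paginated boto3 response):

```python
def troux_status(stack):
    """Return one status line per stack, not one per tag."""
    for tag in stack.get("Tags", []):
        if tag.get("Key") in ("troux_id", "troux_uid"):
            return f'{stack["StackName"]} {tag["Key"]}: {tag["Value"]}'
    # reached only after *all* tags have been checked
    return f'{stack["StackName"]} is missing troux tag!'

# Hypothetical stacks standing in for the boto3 response
tagged = {"StackName": "department_name_resource_two",
          "Tags": [{"Key": "tag", "Value": "abc"},
                   {"Key": "troux_id", "Value": "xyz"}]}
untagged = {"StackName": "department_name_resource_one",
            "Tags": [{"Key": "tag", "Value": "abc"}]}

print(troux_status(tagged))    # department_name_resource_two troux_id: xyz
print(troux_status(untagged))  # department_name_resource_one is missing troux tag!
```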
|
<python>
|
2023-05-29 21:21:14
| 1
| 1,509
|
marcin2x4
|
76,360,370
| 11,692,124
|
Get init arguments of child class from parent class
|
<p><code>getInitInpArgs</code> on <code>ann</code> collects the input arguments of <code>ann</code>'s <code>__init__</code> into a dictionary. Now I want to define a function like <code>getInitInpArgs</code> on <code>ann</code> (the parent class), and not on the child classes, which builds a dictionary of the <code>__init__</code> input arguments of <code>myAnn</code> (the child class).</p>
<pre><code>import inspect
class ann():
def __init__(self, arg1):
super(ann, self).__init__()
self.getInitInpArgs()
def getInitInpArgs(self):
args, _, _, values = inspect.getargvalues(inspect.currentframe().f_back)
self.inputArgs = {arg: values[arg] for arg in args if arg != 'self'}
class myAnn(ann):
def __init__(self, inputSize, outputSize):
super(myAnn, self).__init__(4)
z1=myAnn(40,1)
print(z1.inputArgs)
</code></pre>
<p>So here <code>z1.inputArgs</code> should be equal to either <code>{'arg1': 4, 'inputSize':40, 'outputSize':1}</code> or <code>{'inputSize':40, 'outputSize':1}</code>, but currently <code>z1.inputArgs</code> is equal to <code>{'arg1': 4}</code>.</p>
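A sketch of one way to do this from the parent class only: walk up the call stack from inside the parent and merge the arguments of every `__init__` frame (class names here are hypothetical stand-ins for `ann`/`myAnn`; frame introspection like this is fragile and CPython-specific):

```python
import inspect

class Ann:                                # stand-in for `ann`
    def __init__(self, arg1):
        self._collect_init_args()

    def _collect_init_args(self):
        """Merge the arguments of every __init__ frame on the stack."""
        self.inputArgs = {}
        frame = inspect.currentframe().f_back
        while frame is not None:
            if frame.f_code.co_name == '__init__':
                args, _, _, values = inspect.getargvalues(frame)
                self.inputArgs.update(
                    {a: values[a] for a in args if a != 'self'})
            frame = frame.f_back

class MyAnn(Ann):                         # stand-in for `myAnn`
    def __init__(self, inputSize, outputSize):
        super().__init__(4)

z1 = MyAnn(40, 1)
print(z1.inputArgs)  # {'arg1': 4, 'inputSize': 40, 'outputSize': 1}
```

Because the walk visits the parent's frame first and then the child's, arguments from every level of the inheritance chain end up in the dict.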
|
<python><introspection><python-class>
|
2023-05-29 21:20:16
| 1
| 1,011
|
Farhang Amaji
|
76,359,906
| 11,505,680
|
Figures suppressed with plt.ioff show up later
|
<p>I have a module that loads data and makes plots. I have a bunch of test cases that run whenever I import my module. Some of them generate figures just so that I can inspect their properties. I suppress these figures using <code>plt.ioff</code> as a context manager. But when, in an interactive session (Spyder), I import the module and make some plots, the suppressed test-case plots appear.</p>
<p>A minimal example that demonstrates this:</p>
<pre><code>from matplotlib import pyplot as plt
with plt.ioff():
plt.plot([0, 1, 2], [0, 1, 4])
plt.figure()
plt.plot([0, 1, 2], [1, 2, 3])
plt.show()
</code></pre>
<p>Two figures appear, one for each <code>plot</code> command. I get the same result in both Spyder and Jupyter. I'm using <code>matplotlib</code> v3.7.1.</p>
<p><a href="https://i.sstatic.net/5gIxv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5gIxv.png" alt="Both plots appear" /></a></p>
|
<python><matplotlib>
|
2023-05-29 19:26:28
| 1
| 645
|
Ilya
|
76,359,743
| 3,848,573
|
Cannot run SendGrid Python
|
<p>I'm using the SendGrid sample code to make sure my token works, but every time I try to run it I get an error about <code>rfc822</code>.
My sample code:</p>
<pre><code>import os
import sendgrid
from sendgrid.helpers.mail import Mail
message = Mail(
from_email='someone@gmail.com',
to_emails='someone@gmail.com',
subject='Sending with Twilio SendGrid is Fun',
html_content='<strong>and easy to do anywhere, even with Python</strong>')
try:
sg = sendgrid.SendGridAPIClient('some token')
response = sg.send(message)
print(response.status_code)
print(response.body)
print(response.headers)
except Exception as e:
print(e.message)
</code></pre>
<p>I always get this error:</p>
<pre><code>ModuleNotFoundError: No module named 'rfc822'
During handling of the above exception, another exception occurred:
ImportError: cannot import name 'Mail' from partially initialized module 'sendgrid.helpers.mail' (most likely due to a circular import) (/home/userrrr/.local/lib/python3.8/site-packages/sendgrid/helpers/mail/__init__.py)
</code></pre>
<p>I'm using python3 and pip3, have also tried older versions of sendgrid with no luck, and have tried it on both Windows and Ubuntu.</p>
<p>Any ideas?
Thank You !</p>
|
<python><python-3.x><pip><sendgrid><rfc822>
|
2023-05-29 18:57:36
| 1
| 715
|
Joseph Khella
|
76,359,670
| 2,699,929
|
3D connected components with periodic boundary conditions
|
<p>I am trying to identify connected features on the globe (i.e., in space-time on a sphere). The <a href="https://pypi.org/project/connected-components-3d/" rel="nofollow noreferrer">cc3d</a> package has gotten me 90% of the way but I am struggling to deal with the date border (i.e., the periodic boundary conditions in one of the three dimensions).</p>
<p>On a 2D map, the problem of my approach becomes apparent (note the connected buy differently labelled region around 0 longitude near the South Pole):</p>
<p><a href="https://i.sstatic.net/RBdFh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RBdFh.png" alt="![enter image description here" /></a></p>
<p>This appears because the data are defined on longitudes from 0 to 360 (while I have shown them from -180 to 180 here to make the problem more apparent).</p>
<p>My two typical solutions for these kinds of problems do not work:</p>
<ul>
<li>Flipping the anti-meridian to the pacific just shifts the problem and therefore does not help.</li>
<li>Concatenating a copy of the data at the right-hand boundary also does not help, because it leads to ambiguity in the labels between the original left side and the pasted data on the right.</li>
</ul>
<p><strong>MWE</strong></p>
<p>A break-down of the problem in 2 dimensions should be the following:</p>
<pre class="lang-py prettyprint-override"><code>import cc3d
import numpy as np
import matplotlib.pyplot as plt
arr = np.sin(np.random.RandomState(0).randn(100).reshape(10, 10)) > .3
labels = cc3d.connected_components(arr)
map = plt.pcolormesh(labels, vmin=.5)
map.cmap.set_under('none')
</code></pre>
<p><a href="https://i.sstatic.net/amPtc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/amPtc.png" alt="enter image description here" /></a></p>
<p>Here the yellow structure in the right top should be connected to the rest of the top structure and same for the two structures at the bottom. Please keep in mind that any helpful solution should also work for connected features in 3 dimensions.</p>
<p>Any help on how to approach this is appreciated.</p>
|
<python><python-3.x><numpy><python-xarray>
|
2023-05-29 18:40:28
| 1
| 457
|
Lukas
|
76,359,550
| 4,231,821
|
Converting SVG TO PNG are doing abnormal Behavior in other languages than English
|
<p>I want to export image by converting svg to png for the following purpose I am using <a href="https://cairosvg.org/" rel="nofollow noreferrer">CairoSvg</a></p>
<p>It works fine when the SVG text is in English.</p>
<p>For Example :</p>
<p><a href="https://i.sstatic.net/jw0EJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jw0EJ.png" alt="Working Fine In English" /></a></p>
<p>But when I export an SVG that contains Arabic characters, the text does not appear correctly.</p>
<p>For Example :</p>
<p><a href="https://i.sstatic.net/WR1dS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WR1dS.png" alt="Wrong image in arabic" /></a></p>
<p>Actually the title is القيمة السوقية / المبيعات but it is not appearing correctly</p>
<p>This is my code</p>
<pre><code>svgchart = chart_pygal.render()
pngfile = cairosvg.svg2png(bytestring=svgchart)
with open(image_name+'.png','wb') as f:
f.write(pngfile)
# return send_file(pngfile,mimetype='image/png')
return send_file(io.BytesIO(pngfile),mimetype='image/png', download_name=image_name+'.png', as_attachment=True)
</code></pre>
|
<python><charts><arabic-support><svgtopng>
|
2023-05-29 18:16:43
| 0
| 527
|
Faizan Naeem
|
76,359,494
| 11,922,765
|
Missing markers in the plot legends of scikit-learn examples
|
<p>I have been looking at the scikit-learn documentation and example code. Many of the plots do not have markers in the legends, leaving us to guess everything.</p>
<p><a href="https://scikit-learn.org/stable/auto_examples/svm/plot_separating_hyperplane_unbalanced.html#sphx-glr-auto-examples-svm-plot-separating-hyperplane-unbalanced-py" rel="nofollow noreferrer">Example code</a> :</p>
<pre><code>import matplotlib.pyplot as plt
from sklearn import svm
from sklearn.datasets import make_blobs
from sklearn.inspection import DecisionBoundaryDisplay
# we create two clusters of random points
n_samples_1 = 1000
n_samples_2 = 100
centers = [[0.0, 0.0], [2.0, 2.0]]
clusters_std = [1.5, 0.5]
X, y = make_blobs(
n_samples=[n_samples_1, n_samples_2],
centers=centers,
cluster_std=clusters_std,
random_state=0,
shuffle=False,
)
# fit the model and get the separating hyperplane
clf = svm.SVC(kernel="linear", C=1.0)
clf.fit(X, y)
# fit the model and get the separating hyperplane using weighted classes
wclf = svm.SVC(kernel="linear", class_weight={1: 10})
wclf.fit(X, y)
# plot the samples
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired, edgecolors="k")
# plot the decision functions for both classifiers
ax = plt.gca()
disp = DecisionBoundaryDisplay.from_estimator(
clf,
X,
plot_method="contour",
colors="k",
levels=[0],
alpha=0.5,
linestyles=["-"],
ax=ax,
)
# plot decision boundary and margins for weighted classes
wdisp = DecisionBoundaryDisplay.from_estimator(
wclf,
X,
plot_method="contour",
colors="r",
levels=[0],
alpha=0.5,
linestyles=["-"],
ax=ax,
)
plt.legend(
[disp.surface_.collections[0], wdisp.surface_.collections[0]],
["non weighted", "weighted"],
loc="upper right",
)
plt.show()
</code></pre>
<p>Present plot: In the below plot legend, only text is present, no markers.</p>
<p><a href="https://i.sstatic.net/dNBor.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dNBor.png" alt="enter image description here" /></a></p>
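For what it's worth, a workaround I have seen for contour legends (an assumption on my part, not something taken from the scikit-learn example itself) is to build proxy `Line2D` handles, so the legend always gets a drawable line sample next to each label:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D

# Proxy artists: hand-built handles guarantee a visible line sample
# in the legend, independent of what contour plotting returns
handles = [Line2D([0], [0], color="k", label="non weighted"),
           Line2D([0], [0], color="r", label="weighted")]

fig, ax = plt.subplots()
leg = ax.legend(handles=handles, loc="upper right")
```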
|
<python><matplotlib><scikit-learn><legend>
|
2023-05-29 18:08:17
| 1
| 4,702
|
Mainland
|
76,359,445
| 3,044
|
Simple Union type fails to type check in MyPy
|
<p>The following simple type check fails in MyPy.</p>
<pre class="lang-py prettyprint-override"><code>from typing import Dict, Union
A = Dict[str, Union[str, Dict[str, str]]]
B = Union[A, Dict[str, str]]
example: B = {"x": {"y": "z"}}
</code></pre>
<p>This results in an error message when checked with MyPy:</p>
<pre><code>Incompatible types in assignment (expression has type "Dict[str, Dict[str, str]]", variable has type "Union[Dict[str, Union[str, Dict[str, str]]], Dict[str, str]]") [assignment]mypy(error)
</code></pre>
<p>I cannot see any problem here. There are no errors if I type <code>example</code> as <code>A</code> instead of <code>B</code>, and B is simply a union that includes A, so it's hard not to think that this is a bug in MyPy. But I wanted to check here first in case I was missing something obvious.</p>
|
<python><mypy>
|
2023-05-29 17:58:39
| 0
| 8,520
|
levand
|
76,359,431
| 16,243,418
|
TypeError in Django: "float() argument must be a string or a number, not 'tuple'"
|
<p><strong>Description:</strong></p>
<p>When attempting to access a specific page that relates to a cryptocurrency, in this case Dogecoin, I receive a TypeError in my Django application. The error notice reads "Field 'bought_price' expected a number but got (0.07314770668745041,)". The traceback also states that the problem is with the float() function and that a tuple was used as the argument.</p>
<p>The passed value, however, is not a tuple as far as I can tell. It is a <code>numpy.float64</code>, so I tried passing the same value after converting it to a Python <code>float</code>. The problem still occurs even if I give the form a static value: <code>form.instance.bought_at = 45.66</code> results in the same issue.</p>
<p>Could anyone help me understand why this error is occurring and how I can resolve it?</p>
<p>Here's the relevant code:</p>
<p><strong>Model Code:</strong></p>
<pre><code>class Order(models.Model):
coin = models.CharField(max_length=100)
symbol = models.CharField(max_length=50)
bought_price = models.FloatField()
quantity = models.IntegerField(default=1)
bought_at = models.DateTimeField(default=datetime.now)
user = models.ForeignKey(User, on_delete=models.CASCADE)
</code></pre>
<p><strong>View Code:</strong></p>
<pre><code>ticker = yf.Ticker(symbols[pk])
price_arr = ticker.history(period='1d')['Close']
price = np.array(price_arr)[0]
</code></pre>
<pre><code>if request.method == 'POST':
form = BuyForm(request.POST)
if form.is_valid():
form.instance.coin = ticker.info['name'],
form.instance.symbol = ticker.info['symbol'],
form.instance.bought_price = price,
form.instance.user = request.user
form.save()
return redirect('portfolio')
else:
form = BuyForm()
</code></pre>
<p>I appreciate any assistance or insights you can provide. Thank you!</p>
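For context on the error message itself, the mechanism is reproducible in plain Python: a trailing comma on the right-hand side of an assignment builds a one-element tuple, which `float()` then rejects. A tiny illustration (the price value is just the one from the error message):

```python
# A trailing comma turns the right-hand side into a 1-tuple
price = 0.07314770668745041
wrapped = price,   # note the comma

try:
    float(wrapped)
    message = ""
except TypeError as exc:
    message = str(exc)
```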
|
<python><python-3.x><django><django-models>
|
2023-05-29 17:56:17
| 1
| 352
|
Akshay Saambram
|
76,359,395
| 12,297,666
|
How determine the y coordinate of center of mass from timeseries
|
<p>Consider the following pandas dataframe, with shape <code>NxT</code>, where <code>N = 10</code> is the number of samples and <code>T = 6</code> are the timesteps.</p>
<pre><code>import pandas as pd
import numpy as np
from scipy import ndimage
data = pd.DataFrame(np.random.random_sample((10, 6)))
</code></pre>
<p>I need to determine the <code>x</code> and <code>y</code> coordinates of the center of mass (centroid) for each of those <code>N</code> samples.</p>
<p>For the <code>x</code> coordinate, I have done this:</p>
<pre><code>time_axis = np.arange(data.shape[1])
centroid_x = np.sum(data.values * time_axis, axis=1) / np.sum(data.values, axis=1)
</code></pre>
<p>Which gives the same result as <code>center_of_mass</code> from scipy, for example for the first sample <code>N=0</code>:</p>
<pre><code>ndimage.center_of_mass(data.loc[0, :].values)
</code></pre>
<p>But I am not sure how I can determine the <code>centroid_y</code> for each sample. Is it just the mean of the values, like this?</p>
<pre><code>centroid_y = data.values.mean(axis=1)
</code></pre>
<p>According to the <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.center_of_mass.html" rel="nofollow noreferrer">center_of_mass</a> page, it should return a tuple with the coordinates of the center of mass. Why am I getting only the <code>x</code> coordinate value?</p>
<p><strong>EDIT:</strong> One sample (row) of my dataset is similar to this plot. I need to find the <code>x</code> and <code>y</code> coordinates for each sample.</p>
<p><a href="https://i.sstatic.net/cK8Ls.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cK8Ls.png" alt="enter image description here" /></a></p>
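On the tuple question specifically: `center_of_mass` returns one coordinate per array axis, so a 1-D input yields a 1-tuple, which is why only one value appears. A quick check against the manual weighted-index formula:

```python
import numpy as np
from scipy import ndimage

sig = np.array([0.0, 1.0, 3.0, 1.0, 0.0])

# One coordinate per axis: a 1-D signal gives a 1-tuple
(cx,) = ndimage.center_of_mass(sig)

# Same result as the intensity-weighted index average
manual = (sig * np.arange(len(sig))).sum() / sig.sum()
```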
|
<python><scipy>
|
2023-05-29 17:49:58
| 1
| 679
|
Murilo
|
76,359,359
| 10,530,575
|
Add a fixed value to get cumulative sum
|
<p>I have a dataframe, and I want to add a fixed number 5 to the first element of the dataframe to get a cumulative sum.</p>
<pre><code>import pandas as pd
data = {'Values': [10, 5000, 6000, 7000, 8000, 9000, 8000]}
df = pd.DataFrame(data)
</code></pre>
<p><a href="https://i.sstatic.net/8SmSO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8SmSO.png" alt="enter image description here" /></a></p>
<p>The expected result is like below: the first row is calculated as 10 + 5 = 15, the second row is the cumulative sum 15 + 5 = 20, then 20 + 5, and so on.</p>
<p><a href="https://i.sstatic.net/6xadT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6xadT.png" alt="enter image description here" /></a></p>
<p>Please advise, thanks.</p>
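Reading the description literally, the result starts from the first value and keeps adding the fixed 5 cumulatively, while the later rows' own values are ignored; that reading is an assumption on my part, since the screenshots are not reproduced here. A sketch under that assumption:

```python
import numpy as np
import pandas as pd

data = {'Values': [10, 5000, 6000, 7000, 8000, 9000, 8000]}
df = pd.DataFrame(data)

# First value plus 5 added once per row, cumulatively:
# 10+5=15, 15+5=20, 20+5=25, ...
df['Result'] = df['Values'].iloc[0] + 5 * np.arange(1, len(df) + 1)
```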
|
<python><python-3.x><pandas><dataframe>
|
2023-05-29 17:44:17
| 3
| 631
|
PyBoss
|
76,359,347
| 11,692,124
|
Run a method of base class automatically after constructor of a child instance
|
<p>I want a way to run <code>func_z</code> after <code>ChildClass constructor</code> automatically.</p>
<pre><code>class BaseClass:
def __init__(self):
print("BaseClass constructor")
self.func_z()
def func_z(self):
print("BaseClass func_z")
class ChildClass(BaseClass):
def __init__(self):
super().__init__()
print("ChildClass constructor")
</code></pre>
<p>so it would print</p>
<pre class="lang-none prettyprint-override"><code>BaseClass constructor
ChildClass constructor
BaseClass func_z
</code></pre>
<p>Of course the whole point is not to put <code>self.func_z()</code> manually after <code>print("ChildClass constructor")</code>.</p>
<p>I don't want to redefine or override <code>func_z</code> in the child class.</p>
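One pattern that produces exactly the requested print order is to have the base class wrap every subclass's `__init__` via `__init_subclass__`; this is a sketch of one possible route, not the only one (a metaclass would work as well):

```python
class BaseClass:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        orig_init = cls.__init__

        def wrapped(self, *args, **kw):
            orig_init(self, *args, **kw)
            # Only fire once, for the most-derived class
            if type(self) is cls:
                self.func_z()

        cls.__init__ = wrapped

    def __init__(self):
        print("BaseClass constructor")

    def func_z(self):
        print("BaseClass func_z")


class ChildClass(BaseClass):
    def __init__(self):
        super().__init__()
        print("ChildClass constructor")
```

The `type(self) is cls` guard keeps `func_z` from running once per ancestor in deeper hierarchies.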
|
<python>
|
2023-05-29 17:42:30
| 0
| 1,011
|
Farhang Amaji
|
76,359,178
| 21,420,742
|
How to get first name and last name when last name is multiple names in pandas
|
<p>I have a data frame and need to separate first and last name. So far this is where I got to.</p>
<pre><code>df = pd.DataFrame({'name': ['Victor De La Cruz', 'Ashley Smith', 'Angel Miguel Hernandez', 'Hank Hill']})
df['first_name'] = df['name'].str.split().str[0]
df['last_name'] = df['name'].str.split().str[1:]
</code></pre>
<p>OutPut</p>
<pre><code>first_name last_name
Victor [De, La, Cruz]
Ashley [Smith]
Angel [Miguel, Hernandez]
Hank [Hill]
</code></pre>
<p>I have tried using <code>df['last_name'].replace('[', '')</code> for all unwanted characters, but it didn't work.</p>
<p>Desired Output</p>
<pre><code> first_name last_name
Victor            De La Cruz
Ashley Smith
Angel Miguel Hernandez
Hank Hill
</code></pre>
<p>Any Suggestions would be helpful thank you!</p>
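One detail that sidesteps the bracket cleanup entirely: `str.split(n=1)` splits on the first space only, so the remainder stays a single string instead of a list. A sketch with the names held in a single column (the column name `name` is my choice):

```python
import pandas as pd

df = pd.DataFrame({'name': ['Victor De La Cruz', 'Ashley Smith',
                            'Angel Miguel Hernandez', 'Hank Hill']})

# n=1: split at the first whitespace only, keep the rest intact
parts = df['name'].str.split(n=1)
df['first_name'] = parts.str[0]
df['last_name'] = parts.str[1]
```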
|
<python><python-3.x><pandas><dataframe>
|
2023-05-29 17:10:15
| 2
| 473
|
Coding_Nubie
|
76,359,112
| 489,088
|
Python Pandas giving me SettingWithCopyWarning when I attempt to map a particular DataFrame column - how to fix?
|
<p>I have some code that needs to replace two columns of a pandas DataFrame with the index of each value as they appear in a unique list of those values. For example:</p>
<pre><code>col1, col2, col3, col4
A, 1, 2, 3
A, 1, 2, 3
B, 1, 2, 3
</code></pre>
<p>Should end up in the data frame as:</p>
<pre><code>col1, col2, col3, col4
0, 1, 2, 3
0, 1, 2, 3
1, 1, 2, 3
</code></pre>
<p>since A is element 0 in the list of unique col1 values, and B is element number 1.</p>
<p>What I did is:</p>
<pre><code>unique_vals = df['col1'].unique()
# create a map to speed up looking indexes when we map the dataframe column
unique_vals.sort()
unique_vals_map = {}
for i in range(len(unique_vals)):
unique_vals_map[unique_vals[i]] = i
df['col1'] = df['col1'].apply(lambda r: unique_vals_map[r])
</code></pre>
<p>However that last line gives me:</p>
<pre><code>SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
</code></pre>
<p>I saw other SO answers about this, but I am not sure how to fix it in my particular case. I'm experienced with numpy but I'm new to pandas, any help is greatly appreciated!</p>
<p>Is there a better way to perform this mapping?</p>
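As a side note on the mapping itself (a toy frame here, not your data): `pd.factorize` builds exactly this unique-value-to-index encoding in one call, and writing back through `.loc` on a frame you created yourself is the usual way to keep the chained-assignment warning away:

```python
import pandas as pd

df = pd.DataFrame({'col1': ['A', 'A', 'B'],
                   'col2': [1, 1, 1]})

# sort=True makes the codes follow the sorted unique values,
# matching the "index in the sorted unique list" requirement
codes, uniques = pd.factorize(df['col1'], sort=True)
df.loc[:, 'col1'] = codes
```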
|
<python><pandas><dataframe><apply>
|
2023-05-29 16:58:18
| 0
| 6,306
|
Edy Bourne
|
76,359,085
| 2,411,048
|
How to allow None type as a parameter in a fastAPI endpoint?
|
<p>I define the class I am using for the parameters. I want the parameters (e.g. <code>param1</code>) to potentially have a value of None:</p>
<pre><code> class A(BaseModel):
param1: Union[int, None] = None #Optional[int] = None
param2: Optional[int]
@app.post("/A_endpoint/", response_model=A)
def create_A(a: A):
# do something with a
return a
</code></pre>
<p>When I call the endpoint:</p>
<pre><code>curl -X 'POST' \
'http://0.0.0.0:7012/A_endpoint' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"param1": None,
"param2": 1,
}'
</code></pre>
<p>I get the error 422:</p>
<pre><code>{
"detail": [
{
"loc": [
"body",
65
],
"msg": "Expecting value: line 4 column 14 (char 65)",
"type": "value_error.jsondecode",
"ctx": {
"msg": "Expecting value",
"doc": "{\n "param1" = None, \n "param2" = 1}",
"pos": 65,
"lineno": 4,
"colno": 14
}
}
]
}
</code></pre>
<p>I understand that the default value can be None when I omit this parameter (that works). But how can I allow None to be a possible value for param1 explicitly?</p>
<p>Thanks!</p>
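Independent of the Pydantic model, note that the request body is JSON, and JSON spells the missing value `null`, not `None`; the `value_error.jsondecode` in the 422 response is the JSON parser failing before validation even starts. A quick illustration of the mapping:

```python
import json

# JSON null <-> Python None: the json module converts both ways
payload = json.loads('{"param1": null, "param2": 1}')
body = json.dumps({"param1": None, "param2": 1})
```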
|
<python><fastapi>
|
2023-05-29 16:52:24
| 1
| 406
|
edgarbc
|
76,358,976
| 6,199,181
|
Is it possible to read and parse csv in streaming mode with Python?
|
<p>I want to download huge CSV and parse it line by line in streaming mode. My code is:</p>
<pre><code>with httpx.stream("GET", url) as r:
for line in r.iter_lines():
for row in csv.reader([line]):
...
</code></pre>
<p>but if there are "multi-line" fields in the input file, this code doesn't work.</p>
<pre><code>col11,col12,col13
col21,"co
l22", col23
</code></pre>
<p>Do you have any idea how to solve this?</p>
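The root cause is that handing `csv.reader` one physical line at a time prevents it from continuing a quoted field onto the next line. Giving it a file-like object keeps the quoting logic intact; with httpx one could wrap the response stream in a text buffer, though that wiring is an assumption about your setup. A minimal illustration with an in-memory stream:

```python
import csv
import io

# csv.reader pulls more characters itself when a quoted field
# spans lines, so the embedded newline survives
raw = 'col11,col12,col13\ncol21,"co\nl22",col23\n'
rows = list(csv.reader(io.StringIO(raw)))
```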
|
<python><python-3.x><csv>
|
2023-05-29 16:35:47
| 1
| 1,517
|
Serge
|
76,358,689
| 21,404,794
|
Uncrop 3d plots in jupyter notebook
|
<p>I'm doing some 3d scatter plots with jupyter notebooks in VSCode, and they aren't showing properly.</p>
<p><a href="https://i.sstatic.net/LU6zN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LU6zN.png" alt="3d scatter plot from docs" /></a></p>
<p>I went to the matplotlib documentation and downloaded the jupyter notebook for the <a href="https://matplotlib.org/stable/gallery/mplot3d/scatter3d.html" rel="nofollow noreferrer">3d scatter plot</a> and tried running that in vscode, getting the same results: the z label gets cut off.</p>
<p>I've seen a lot of questions about making the plot interactive with matplotlib magic, and some of those solutions (<code>%matplotlib qt</code>) do work (the image isn't cropped anymore, but it gets created in a separate window). I want the plot to be inline, because I'm doing a lot of them and having 40 windows created every time is a mess.</p>
<p>I've tried the magic <code>%matplotlib widget</code> and <code>%matplotlib notebook</code>, as suggested <a href="https://stackoverflow.com/questions/64331140/interactive-python-3d-plot-in-jupyter-notebook-within-vscode">here</a>, and the <code>%matplotlib ipympl</code> as suggested <a href="https://stackoverflow.com/questions/64613706/animate-update-a-matplotlib-plot-in-vs-code-notebook/64614116#64614116">here</a> but when I use those the plot stops showing, appearing only after I change to <code>%matplotlib inline</code> and showing any plot I've done before at that point (all cropped).</p>
<p><strong>I've also checked the code in jupyter lab and it does not have this problem, the image shows completely fine, so it seems to be a problem with Jupyter notebooks in VsCode.</strong></p>
<p>I'm not trying to change the position of the z axis, It's fine where it is, I just want to make the image bigger so the z label is shown properly.</p>
<p>Just in case, I've tried the suggestion of <a href="https://stackoverflow.com/users/7758804/trenton-mckinney">Trenton McKinney</a> of doing <code>ax.zaxis._axinfo['juggled'] = (1, 2, 2)</code> to move the z-label to the other side, and it still gets cut, just on the other side of the image.</p>
<p><a href="https://i.sstatic.net/vRqSN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vRqSN.png" alt="3d scatter plot from docs with the z axes on the other side" /></a></p>
<p>So it's not an issue of where the z axes and label are.</p>
<p>PS: As requested, I've put the code from the example here for ease of use.</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
# Fixing random state for reproducibility
np.random.seed(19680801)
def randrange(n, vmin, vmax):
"""
Helper function to make an array of random numbers having shape (n, )
with each number distributed Uniform(vmin, vmax).
"""
return (vmax - vmin)*np.random.rand(n) + vmin
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
n = 100
# For each set of style and range settings, plot n random points in the box
# defined by x in [23, 32], y in [0, 100], z in [zlow, zhigh].
for m, zlow, zhigh in [('o', -50, -25), ('^', -30, -5)]:
xs = randrange(n, 23, 32)
ys = randrange(n, 0, 100)
zs = randrange(n, zlow, zhigh)
ax.scatter(xs, ys, zs, marker=m)
ax.set_xlabel('X Label')
ax.set_ylabel('Y Label')
ax.set_zlabel('Z Label')
plt.show()
</code></pre>
<p><strong>Update:</strong> I've posted an issue in the github VSCode repo, link <a href="https://github.com/microsoft/vscode-jupyter/issues/13661" rel="nofollow noreferrer">here</a></p>
<p><strong>Update on the update:</strong> The issue has been found to be a matplotlib/jupyter problem, so I've opened a new issue in the matplotlib repo, link <a href="https://github.com/matplotlib/matplotlib/issues/26465" rel="nofollow noreferrer">here</a></p>
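One workaround when exporting rather than relying on the inline renderer (whether the inline backend picks this up is environment-dependent, so treat it as a sketch): saving with a tight bounding box recomputes the extent from the drawn artists, which keeps the z-label:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter([1, 2], [3, 4], [5, 6])
ax.set_zlabel("Z Label")

# bbox_inches="tight" sizes the output around all artists,
# z-label included, instead of using the fixed figure box
fig.savefig("scatter3d.png", bbox_inches="tight")
```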
|
<python><visual-studio-code><jupyter><matplotlib-3d><z-axis>
|
2023-05-29 15:45:48
| 1
| 530
|
David Siret Marqués
|
76,358,590
| 12,352,239
|
OpenCV polyLines() throws error: (-215:Assertion failed) p.checkVector(2, CV_32S) >= 0 in function 'polylines'
|
<p>I am trying to draw a segmentation mask from a YOLO segmentation mask dataset. The annotation line I am reading looks like this:</p>
<p><code>36 0.6158357764423077 0.814453125 0.6158357764423077 0.8095703125 0.6070381225961539 0.8095703125 0.6041055721153846 0.8115234375 0.5894428149038462 0.8154296875 0.5747800576923077 0.8125 0.5513196490384615 0.8134765625 0.5483870961538462 0.81640625 0.5923753653846154 0.818359375 0.6158357764423077 0.814453125</code></p>
<p>I am using cv2.polylines to draw the shape but am getting an error:</p>
<pre><code>image_height, image_width, c = img.shape
isClosed = True
color = (255, 0, 0)
thickness = 2
with open(annotation_file) as f:
for line in f:
split_line = line.split()
class_id = split_line[0]
mask_shape = [float(numeric_string) for numeric_string in split_line[1:len(split_line)]]
mask_points = []
for i in range(0,len(mask_shape),2):
x,y = mask_shape[i:i+2]
mask_points.append((x * image_width, y * image_height))
points = np.array([mask_points])
image = cv2.polylines(img, points,
isClosed, color, thickness)
break
</code></pre>
<p>Error:</p>
<p><code>OpenCV(4.7.0) /Users/xperience/GHA-OCV-Python/_work/opencv-python/opencv-python/opencv/modules/imgproc/src/drawing.cpp:2434: error: (-215:Assertion failed) p.checkVector(2, CV_32S) >= 0 in function 'polylines'</code></p>
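The assertion `checkVector(2, CV_32S)` says `polylines` wants 32-bit integer point coordinates, while the scaled YOLO coordinates are floats. A sketch of the rounding and cast (the numbers here are made up, not your annotation):

```python
import numpy as np

image_width, image_height = 640, 480

# Hypothetical normalized polygon (x, y) pairs
mask_shape = [0.61, 0.81, 0.58, 0.81, 0.55, 0.82]
mask_points = [(x * image_width, y * image_height)
               for x, y in zip(mask_shape[0::2], mask_shape[1::2])]

# polylines asserts CV_32S, so round and cast before drawing
points = np.array([mask_points]).round().astype(np.int32)
```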
|
<python><numpy><opencv><yolov8>
|
2023-05-29 15:33:41
| 1
| 480
|
219CID
|
76,358,564
| 12,394,134
|
Bootstrapping multiple random samples with polars in python
|
<p>I have generated a large simulated population polars dataframe using numpy arrays. I want to randomly sample from this population dataframe multiple times. However, when I do that, the samples are exactly the same from sample to sample. I know there must be an easy fix for this, any recommendations? It must be the repeat function, does anyone have any creative ideas for how I can simulate orthogonal multiple random samples?</p>
<p>Here's my code:</p>
<pre><code>from itertools import repeat

import numpy as np
import polars as pl

N = 1000000 # population size
samples = 1000 # number of samples
num_obs = 100 # size of each sample
# Generate population data
a = np.random.gamma(2, 2, N)
b = np.random.binomial(1, 0.6, N)
x = 0.2 * a + 0.5 * b + np.random.normal(0, 10, N)
z = 0.9 * a * b + np.random.normal(0, 10, N)
y = 0.6 * x + 0.9 * z + np.random.normal(0, 10, N)
# Store this in a population dataframe
pop_data_frame = pl.DataFrame({
'A':a,
'B':b,
'X':x,
'Z':z,
'Y':y,
'id':range(1, N+1)
})
# Get 1000 samples from this pop_data_frame...
#... with 100 observations each sample.
sample_list = list(
    repeat(pop_data_frame.sample(n=num_obs), samples)
)
</code></pre>
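The identical samples come from `itertools.repeat` itself: it evaluates `pop_data_frame.sample(n=num_obs)` once and then yields that same object over and over, whereas a comprehension such as `[pop_data_frame.sample(n=num_obs) for _ in range(samples)]` re-draws each time. The pitfall in miniature, with the standard library only:

```python
import random
from itertools import repeat

random.seed(0)
population = list(range(1000))

# repeat() reuses one already-drawn sample ...
same = list(repeat(random.sample(population, 5), 3))

# ... while a comprehension draws anew on every iteration
fresh = [random.sample(population, 5) for _ in range(3)]
```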
|
<python><numpy><python-polars><bootstrapping>
|
2023-05-29 15:30:14
| 1
| 326
|
Damon C. Roberts
|
76,358,367
| 1,946,418
|
Figure out return of a method that returns empty hash on some condition
|
<p>I'm trying to understand how to make this work:</p>
<pre class="lang-py prettyprint-override"><code>def someMethod() -> dict[any, any]:
if not os.path.exists('some path'):
return {}
config = {'a': 1, 'b': 2}
return config
</code></pre>
<p>I don't think that's correct. Seeing this error - <code>Declared return type, "dict[Unknown, Unknown]", is partially unknownPylance</code></p>
<p>The idea is to return empty dict if a path doesn't exist (or on some condition) or correct dict with key-value pairs.</p>
<p>Any ideas?</p>
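Pylance's "partially unknown" complaint typically points at the lowercase `any`, which is the builtin function rather than a type; the intended spelling is `typing.Any` (or a concrete value type if one is known). A sketch:

```python
import os
from typing import Any

# typing.Any (capital A) is a type; the builtin any() is a function
def some_method(path: str) -> dict[str, Any]:
    if not os.path.exists(path):
        return {}
    return {'a': 1, 'b': 2}
```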
|
<python>
|
2023-05-29 14:57:54
| 1
| 1,120
|
scorpion35
|
76,358,355
| 5,869,121
|
FastAPI Depends w/ get_db failing for PUT endpoint
|
<p>I'm working on a FastAPI Project w/ some basic user endpoints. I recently shifted my project around to use routers to separate all of the endpoints out into individual files, but this has caused my only PUT endpoint to start failing while all of the GET ones still function just fine.</p>
<p>When I hit the POST endpoint to create the user, it works fine and the user gets created. When I hit the PUT endpoint to update that same user, I immediately get an error about <code>AttributeError: 'UserBase' object has no attribute 'query'</code>.</p>
<p>It's failing while trying to run the <code>existing_user_record</code> line. I put a <code>print(type(db))</code> line in each endpoint and for some reason the PUT endpoint is returning 2 objects for the db type, while in the POST endpoint it correctly only returns 1 object (the db session). I believe this is causing my issue in the PUT endpoint, but I don't know why it's happening or how to fix it. Why would it return <code>src.schemas.UserBase</code> here?</p>
<p><a href="https://i.sstatic.net/bshsl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bshsl.png" alt="![enter image description here" /></a></p>
<p>Below is some of my code and file structure.</p>
<pre><code># src/schemas.py
class UserBase(BaseModel):
username: str
password: str
email: Optional[str]
created_at: datetime = datetime.now(timezone.utc)
class Config:
orm_mode = True
class UserCreate(UserBase):
pass
</code></pre>
<pre><code># src/database.py
SQLAlchemyInstrumentor().instrument(engine=engine)
# separate database sessions for different users essentially.
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine, future=True)
Base = declarative_base()
def get_db():
db = SessionLocal()
try:
yield db
finally:
db.close()
</code></pre>
<pre><code># src/routers/users.py
from src.database import get_db
from src.models import Users
from src.schemas import UserBase, UserCreate
router = APIRouter()
@router.post("/users", response_model=UserCreate, status_code=201)
async def create_users(create_user_request: UserCreate, db: Session = Depends(get_db)):
print(type(db))
record_check = (
db.query(Users).filter(Users.username == create_user_request.username).first()
)
if record_check:
raise HTTPException(
status_code=403,
detail="Username already exists! Please select another username.",
)
return create_user(db, create_user_request)
@router.put("/users/{username}", response_model=UserBase)
def update_user(
update_user_request: UserBase, username: str, db: Session = Depends(get_db)
):
print(type(db))
# it fails here because of db.query() even though db.query works fine in the POST endpoint?
existing_user_record = db.query(Users).filter(Users.username == username).first()
if not existing_user_record:
raise HTTPException(
status_code=400,
detail="That old Username doesn't exist! Please select another username.",
)
new_record_check = (
db.query(Users).filter(Users.username == update_user_request.username).first()
)
if new_record_check:
raise HTTPException(
status_code=403,
detail="The new requested Username already exists! Please select another username.",
)
return update_user(db, existing_user_record, update_user_request)
</code></pre>
<pre><code># the requests i'm makiing
import requests
api = "http://127.0.0.1:8000"
# this works fine
df = requests.post(f"{api}/users", json = {"username": "jyablonski", "password": "helloworld", "email": "nobody@gmail.com"})
# this fails because of that - AttributeError: 'UserBase' object has no attribute 'query'
df = requests.put(f"{api}/users/jyablonski", json = {"username": "jyablonski_new_name", "password": "helloworld", "email": "nobody@gmail.com"})
</code></pre>
<p>I feel like I've tried a lot of things but haven't gotten anywhere, any help would be appreciated!</p>
|
<python><fastapi>
|
2023-05-29 14:56:30
| 1
| 1,019
|
jyablonski
|
76,358,179
| 2,504,762
|
GCP Data Catalog Lineage API - Create Custom Lineage
|
<p>We are using a custom Spark process for certain ingestion and transformation steps, so I wanted to utilize the Data Catalog Lineage API to record our custom lineage.</p>
<p>I am trying to follow the documentation mentioned below; however, I am not able to figure out exactly how to provide the custom lineage.</p>
<p><a href="https://cloud.google.com/python/docs/reference/lineage/latest/google.cloud.datacatalog.lineage_v1.types.CreateLineageEventRequest" rel="nofollow noreferrer">https://cloud.google.com/python/docs/reference/lineage/latest/google.cloud.datacatalog.lineage_v1.types.CreateLineageEventRequest</a></p>
<p>This is what my code looks like.</p>
<pre class="lang-py prettyprint-override"><code>
def sample_create_lineage_event():
# Create a client
client = lineage_v1.LineageClient()
# Initialize request argument(s)
request = lineage_v1.CreateLineageEventRequest(
parent="my project id",
)
# Make the request
response = client.create_lineage_event(request=request)
client.create_process(request=request)
# Handle the response
print(response)
</code></pre>
<p>This gives following error</p>
<pre><code>C:\Users\fki\AppData\Local\Programs\Python\Python39\python.exe C:/Users/fki/PycharmProjects/Demo/bq_linage/create_linage.py
E0529 10:27:44.489000000 6968 src/core/ext/transport/chttp2/transport/hpack_parser.cc:1227] Error parsing metadata: error=invalid value key=content-type value=text/html; charset=UTF-8
Traceback (most recent call last):
File "C:\Users\fki\AppData\Local\Programs\Python\Python39\lib\site-packages\google\api_core\grpc_helpers.py", line 65, in error_remapped_callable
return callable_(*args, **kwargs)
File "C:\Users\fki\AppData\Roaming\Python\Python39\site-packages\grpc\_channel.py", line 946, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "C:\Users\fki\AppData\Roaming\Python\Python39\site-packages\grpc\_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNIMPLEMENTED
details = "Received http2 header with status: 404"
debug_error_string = "UNKNOWN:Error received from peer ipv4:199.36.153.10:443 {created_time:"2023-05-29T14:27:44.4902741+00:00", grpc_status:12, grpc_message:"Received http2 header with status: 404"}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\fki\PycharmProjects\Demo\bq_linage\create_linage.py", line 42, in <module>
sample_create_lineage_event()
File "C:\Users\fki\PycharmProjects\Demo\bq_linage\create_linage.py", line 32, in sample_create_lineage_event
response = client.create_lineage_event(request=request)
File "C:\Users\fki\AppData\Local\Programs\Python\Python39\lib\site-packages\google\cloud\datacatalog\lineage_v1\services\lineage\client.py", line 1759, in create_lineage_event
response = rpc(
File "C:\Users\fki\AppData\Local\Programs\Python\Python39\lib\site-packages\google\api_core\gapic_v1\method.py", line 113, in __call__
return wrapped_func(*args, **kwargs)
File "C:\Users\fki\AppData\Local\Programs\Python\Python39\lib\site-packages\google\api_core\grpc_helpers.py", line 67, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.MethodNotImplemented: 501 Received http2 header with status: 404
Process finished with exit code 1
</code></pre>
|
<python><google-cloud-platform>
|
2023-05-29 14:29:21
| 0
| 13,075
|
Gaurang Shah
|
76,358,170
| 16,383,578
|
How to implement Sieve of Eratosthenes with Wheel factorization in NumPy?
|
<p>I have already implemented <a href="https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes" rel="nofollow noreferrer">Sieve of Eratosthenes</a> and another version with <a href="https://en.wikipedia.org/wiki/Wheel_factorization" rel="nofollow noreferrer">Wheel factorization</a>, both in NumPy. But I don't think I have done it right, because the version with wheel factorization is actually slower.</p>
<p>I have already made several optimizations to the sieve. First all even numbers except 2 are composites (multiples of 2), so I only check odd numbers, the number of candidates is halved. Second all prime numbers except 2 and 3 are of the form 6k+1 or 6k-1, because if the modulo 6 remainder is 3 then the number must be a multiple of 3.</p>
<p>So starting from 5, only numbers with remainder of 1 or 5 need to be checked, the increment between each iteration is 6, thus the number of candidates is only a sixth of the original.</p>
<p>Third the start of the multiple for each prime number found is its square, this ensures this multiple cannot be previously flipped, and the step of the multiple increment is 2*n, because even multiples are already eliminated, so only odd multiples remain.</p>
<p>This is the version without Wheel factorization:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
def prime_sieve(n: int) -> np.ndarray:
primes = np.ones(n + 1, dtype=bool)
primes[:2] = primes[4::2] = primes[9::6] = False
for a in range(6, int(n**0.5) + 1, 6):
for b in (-1, 1):
i = a + b
if primes[i]:
primes[i * i :: 2 * i] = False
return np.where(primes)[0]
</code></pre>
<p>It is highly inefficient because it does far too many flipping operations: the same number can be flipped multiple times, and the flipping doesn't skip the numbers that aren't of the form 6k+1 or 6k-1 and are already eliminated, so there is a lot of computational redundancy.</p>
<p>I have implemented another version with Wheel factorization, using information found on Wikipedia (I wrote the code completely by myself, there isn't pseudocode for Wheel sieve):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from itertools import cycle
def prime_wheel_sieve(n: int) -> np.ndarray:
wheel = cycle([4, 2, 4, 2, 4, 6, 2, 6])
primes = np.ones(n + 1, dtype=bool)
primes[:2] = primes[4::2] = False
primes[9::6] = primes[15::10] = False
k = 7
while (square := k * k) <= n:
if primes[k]:
primes[square::2*k] = False
k += next(wheel)
return np.where(primes)[0]
</code></pre>
<p>But it suffers the same drawbacks as the one without Wheel, and is less performant:</p>
<pre><code>In [412]: %timeit prime_sieve(65536)
226 µs ± 2.97 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [413]: %timeit prime_wheel_sieve(65536)
236 µs ± 24 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
</code></pre>
<p>How can I properly implement Sieve of Eratosthenes with Wheel factorization in NumPy?</p>
|
<python><python-3.x><numpy><math><sieve-of-eratosthenes>
|
2023-05-29 14:27:36
| 0
| 3,930
|
Ξένη Γήινος
|
76,358,068
| 5,802,882
|
Performance and Data Integrity Issues with Hudi for Long-Term Data Retention
|
<p>Our project requires that we perform full loads daily, retaining these versions for future queries. Upon implementing Hudi to maintain 6 years of data with the following setup:</p>
<pre class="lang-py prettyprint-override"><code>"hoodie.cleaner.policy": "KEEP_LATEST_BY_HOURS",
"hoodie.cleaner.hours.retained": "52560", # 24 hours * 365 days * 6 years
</code></pre>
<p>We observed, after about 30 runs, a compromise in data integrity. During reading, the versions of data mix up and produce duplicate records, causing a series of significant issues in our DataLake (S3), since these tables are used by other scripts.</p>
<p>To solve these problems, we adjusted the maximum and minimum number of commits, applying the following configurations, as referenced in issue <a href="https://github.com/apache/hudi/issues/7600#issuecomment-1411949976" rel="nofollow noreferrer">#7600</a>:</p>
<pre class="lang-py prettyprint-override"><code>"hoodie.keep.max.commits": "2300", # (365 days * 6 years) + delta
"hoodie.keep.min.commits": "2200", # (365 days * 6 years) + delta2
</code></pre>
<p>However, this solution becomes excessively costly over time. We simulated running the scripts multiple times, partitioning by day, and both the difference and the writing cost grew significantly for a small table over a year of data. In 1 year, the average runtime for a script went from 00m:25s to 02m:30s. As we need to keep 6 years of history, this processing time tends to scale even more.</p>
<p><strong>Replication</strong></p>
<p>Follow the instructions below to reproduce the behavior:</p>
<ol>
<li>Create the example dataframe:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>data = [
Row(SK=-6698625589789238999, DSC='A', COD=1),
Row(SK=8420071140774656230, DSC='B', COD=2),
Row(SK=-8344648708406692296, DSC='C', COD=4),
Row(SK=504019808641096632, DSC='D', COD=5),
Row(SK=-233500712460350175, DSC='E', COD=6),
Row(SK=2786828215451145335, DSC='F', COD=7),
Row(SK=-8285521376477742517, DSC='G', COD=8),
Row(SK=-2852032610340310743, DSC='H', COD=9),
Row(SK=-188596373586653926, DSC='I', COD=10),
Row(SK=890099540967675307, DSC='J', COD=11),
Row(SK=72738756111436295, DSC='K', COD=12),
Row(SK=6122947679528380961, DSC='L', COD=13),
Row(SK=-3715488255824917081, DSC='M', COD=14),
Row(SK=7553013721279796958, DSC='N', COD=15)
]
dataframe = spark.createDataFrame(data)
</code></pre>
<ol start="2">
<li>With the following Hudi configuration:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>hudi_options = {
"hoodie.table.name": "example_hudi",
"hoodie.datasource.write.recordkey.field": "SK",
"hoodie.datasource.write.table.name": "example_hudi",
"hoodie.datasource.write.operation": "insert_overwrite_table",
"hoodie.datasource.write.partitionpath.field": "LOAD_DATE",
"hoodie.datasource.hive_sync.database": "default",
"hoodie.datasource.hive_sync.table": "example_hudi",
"hoodie.datasource.hive_sync.partition_fields": "LOAD_DATE",
"hoodie.cleaner.policy": "KEEP_LATEST_BY_HOURS",
"hoodie.cleaner.hours.retained": "52560",
"hoodie.keep.max.commits": "2300",
"hoodie.keep.min.commits":"2200",
"hoodie.datasource.write.precombine.field":"",
"hoodie.datasource.hive_sync.partition_extractor_class":"org.apache.hudi.hive.MultiPartKeysValueExtractor",
"hoodie.datasource.hive_sync.enable":"true",
"hoodie.datasource.hive_sync.use_jdbc":"false",
"hoodie.datasource.hive_sync.mode":"hms",
}
</code></pre>
<ol start="3">
<li>Now, write the date range:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>date = datetime.strptime('2023-06-02', '%Y-%m-%d') # Initial date (yyyy-mm-dd)
final_date = datetime.strptime('2023-11-01', '%Y-%m-%d') # Final date (yyyy-mm-dd)
while date <= final_date:
dataframe = dataframe.withColumn("LOAD_DATE", to_date(lit(date.strftime('%Y-%m-%d'))))
dataframe.write.format("hudi"). \
options(**hudi_options). \
mode("append"). \
save(basePath)
date += timedelta(days=1)
</code></pre>
<ol start="4">
<li>After this, analyze the time consumed between each load to notice the progressive growth of time. If the increase continues at this rate, the time will become unmanageable, since there are tables much larger than the example one.</li>
</ol>
<p><strong>Expected behavior</strong></p>
<p>We expected:</p>
<ul>
<li>No duplicate files would emerge after the completion of the 30 commits.</li>
<li>Execution time would not increase significantly over time.</li>
<li>Metadata would follow the behavior determined by the <code>hoodie.cleaner.policy KEEP_LATEST_BY_HOURS</code> attribute.</li>
</ul>
<p><strong>Environment</strong></p>
<ul>
<li>Hudi Version: 0.12.2</li>
<li>Spark Version: 3.3.1</li>
<li>Hive Version: 3.1.3</li>
<li>Storage: S3 (EMRFS)</li>
<li>Platform: AWS EMR</li>
</ul>
|
<python><amazon-web-services><apache-spark><amazon-emr><apache-hudi>
|
2023-05-29 14:11:42
| 1
| 1,325
|
Luiz
|
76,357,984
| 3,247,006
|
The class name with or without quotes in a Django model field to have foreign key relationship
|
<p>I can set <code>Category</code> with or without quotes to <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#foreignkey" rel="nofollow noreferrer">ForeignKey()</a> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>class Category(models.Model):
name = models.CharField(max_length=50)
class Product(models.Model): # Here
category = models.ForeignKey("Category", on_delete=models.CASCADE)
name = models.CharField(max_length=50)
</code></pre>
<p>Or:</p>
<pre class="lang-py prettyprint-override"><code>class Category(models.Model):
name = models.CharField(max_length=50)
class Product(models.Model): # Here
category = models.ForeignKey(Category, on_delete=models.CASCADE)
name = models.CharField(max_length=50)
</code></pre>
<p>And, I know that I can set <code>Category</code> with quotes to <code>ForeignKey()</code> before <code>Category</code> class is defined as shown below:</p>
<pre class="lang-py prettyprint-override"><code>class Product(models.Model): # Here
category = models.ForeignKey("Category", on_delete=models.CASCADE)
name = models.CharField(max_length=50)
class Category(models.Model):
name = models.CharField(max_length=50)
</code></pre>
<p>And, I know that I cannot set <code>Category</code> without quotes to <code>ForeignKey()</code> before <code>Category</code> class is defined as shown below:</p>
<pre class="lang-py prettyprint-override"><code>class Product(models.Model): # Error
category = models.ForeignKey(Category, on_delete=models.CASCADE)
name = models.CharField(max_length=50)
class Category(models.Model):
name = models.CharField(max_length=50)
</code></pre>
<p>Then, I got the error below:</p>
<blockquote>
<p>NameError: name 'Category' is not defined</p>
</blockquote>
<p>My questions:</p>
<ol>
<li><p>What is the difference between the class name with or without quotes in a Django model field to have foreign key relationship?</p>
</li>
<li><p>Which should I use, the class name with or without quotes in a Django model field to have foreign key relationship?</p>
</li>
</ol>
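<p>For question 1, the <code>NameError</code> itself is plain Python, not Django: a bare name must already be bound when the class body executes, while a string can be resolved later, which is what Django's lazy references do. A minimal illustration of the lazy-string idea (a hypothetical registry, not Django's actual machinery):</p>

```python
# a registry that resolves string names lazily, mimicking Django's approach
registry = {}

class Product:
    category = "Category"  # stored as a string; resolved only when looked up

registry["Product"] = Product

class Category:  # defined after Product, yet still resolvable
    pass

registry["Category"] = Category

# lazy resolution succeeds even though Category was defined afterwards
assert registry[Product.category] is Category
```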
|
<python><django><django-models><foreign-keys><python-class>
|
2023-05-29 14:00:04
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
76,357,948
| 17,071,718
|
Installation of Python
|
<p>I want to use <code>python</code> and typed the command <code>py</code>, which gave me the following list.</p>
<p>I certainly want to use the most recent version. Should one clean up such a setup so that I only have <code>python3.6</code>?</p>
<pre><code>hagbard@fuckup:~$ py
py3clean pydoc pygettext2.7 python python3.6m
py3compile pydoc2.7 pygettext3 python2 python3m
py3versions pydoc3 pygettext3.6 python2.7 python3-qr
pyclean pydoc3.6 pygmentex python3 pythontex
pycompile pygettext pyhtmlizer python3.6 pyversions
</code></pre>
|
<python>
|
2023-05-29 13:53:05
| 1
| 465
|
Dilna
|
76,357,872
| 16,053,370
|
How to insert comment in iceberg table?
|
<p>I'm trying to put a comment on an Iceberg table in the Glue catalog, and I used it as follows:</p>
<pre><code>spark.sql(f"""CREATE EXTERNAL TABLE IF NOT EXISTS {schema_name}.{table_name}({columns})
USING iceberg
COMMENT 'table description'
PARTITIONED BY ({partition_by_create})
LOCATION '{bucket_name}'""")
</code></pre>
<p>I also tried like this
<code>OPTIONS('comment' = '{comment_table}')</code></p>
<p>Both ways create the table and insert the comments on the fields, but neither inserts the comment on the table itself.</p>
<p>In the glue catalog it appears as follows:</p>
<p><a href="https://i.sstatic.net/LUDSC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LUDSC.png" alt="enter image description here" /></a></p>
<p>Does anyone know how to insert this comment correctly?</p>
|
<python><pyspark><aws-glue><apache-iceberg>
|
2023-05-29 13:39:26
| 0
| 373
|
Carlos Eduardo Bilar Rodrigues
|
76,357,757
| 131,874
|
No module named 'flask' in virtualenv
|
<p>I was running <code>pgAdmin4</code> in Centos 8 with no problems. I upgraded to <code>pgAdmin4 7.11</code> and now it no longer works. This is the error:</p>
<pre><code>ModuleNotFoundError: No module named 'flask'
</code></pre>
<p>Flask is installed in the virtualenv:</p>
<pre><code># pwd
/usr/pgadmin4/venv/lib/python/site-packages
# ll -d flask*
drwxrwxr-x. 3 root root 4096 May 29 12:14 flask
drwxrwxr-x. 2 root root 46 May 29 12:14 flask_babel
drwxrwxr-x. 2 root root 98 May 29 12:14 flask_babel-3.1.0.dist-info
drwxrwxr-x. 2 root root 69 May 29 12:14 flask_compress
drwxrwxr-x. 2 root root 43 May 29 12:14 flask_gravatar
drwxrwxr-x. 2 root root 159 May 29 12:14 flask_login
-rw-rw-r--. 1 root root 17950 May 3 11:10 flask_mail.py
drwxrwxr-x. 3 root root 56 May 29 12:14 flask_migrate
drwxrwxr-x. 2 root root 44 May 29 12:14 flask_paranoid
-rw-rw-r--. 1 root root 13860 May 3 11:10 flask_principal.py
drwxrwxr-x. 6 root root 4096 May 29 12:14 flask_security
drwxrwxr-x. 2 root root 67 May 29 12:14 flask_socketio
drwxrwxr-x. 2 root root 4096 May 29 12:14 flask_sqlalchemy
drwxrwxr-x. 3 root root 120 May 29 12:14 flask_wtf
# ll -d Flask*
drwxrwxr-x. 2 root root 147 May 29 12:14 Flask-2.2.5.dist-info
drwxrwxr-x. 2 root root 123 May 29 12:14 Flask_Compress-1.13.dist-info
drwxrwxr-x. 2 root root 148 May 29 12:14 Flask_Gravatar-0.5.0.dist-info
drwxrwxr-x. 2 root root 119 May 29 12:14 Flask_Login-0.6.2.dist-info
drwxrwxr-x. 2 root root 119 May 29 12:14 Flask_Mail-0.9.1.dist-info
drwxrwxr-x. 2 root root 119 May 29 12:14 Flask_Migrate-4.0.4.dist-info
drwxrwxr-x. 2 root root 119 May 29 12:14 Flask_Paranoid-0.3.0.dist-info
drwxrwxr-x. 2 root root 87 May 29 12:14 Flask_Principal-0.4.0.dist-info
drwxrwxr-x. 2 root root 134 May 29 12:14 Flask_Security_Too-5.1.2.dist-info
drwxrwxr-x. 2 root root 119 May 29 12:14 Flask_SocketIO-5.3.4.dist-info
drwxrwxr-x. 3 root root 99 May 29 12:14 Flask_SQLAlchemy-3.0.3.dist-info
drwxrwxr-x. 2 root root 123 May 29 12:14 Flask_WTF-1.1.1.dist-info
</code></pre>
<p><code>$PYTHONPATH</code> is empty:</p>
<pre><code># source bin/activate
(venv) [root@ck venv]# echo $PYTHONPATH
</code></pre>
<p>The virtualenv is using its own python as expected:</p>
<pre><code>(venv) [root@ck venv]# which python
/usr/pgadmin4/venv/bin/python
</code></pre>
<p>If I import <code>flask</code> from the virtualenv python interactive it works:</p>
<pre><code>(venv) [root@ck venv]# python
Python 3.9.16 (main, Jan 17 2023, 18:53:15)
[GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import flask
>>>
</code></pre>
|
<python><flask><centos><virtualenv><pgadmin-4>
|
2023-05-29 13:21:46
| 1
| 126,654
|
Clodoaldo Neto
|
76,357,536
| 395,857
|
How can I run some inference on the MPT-7B language model?
|
<p>I wonder how I can run some inference on the <a href="https://huggingface.co/mosaicml/mpt-7b" rel="nofollow noreferrer">MPT-7B language model</a>. The <a href="https://huggingface.co/mosaicml/mpt-7b" rel="nofollow noreferrer">documentation page for the MPT-7B language model</a> on Hugging Face doesn't mention how to run inference (i.e., given a few words, predict the next few words).</p>
|
<python><nlp><huggingface-transformers><large-language-model>
|
2023-05-29 12:50:17
| 1
| 84,585
|
Franck Dernoncourt
|
76,357,364
| 641,565
|
pytest.postgresql external template database
|
<p>I am trying to write database integration tests against a dockerized postgresql database, the schema of which I would rather not replicate in my own test setups. In pytest-postgresql documentation there is a mention of the following option:</p>
<blockquote>
<p>You can also define your own database name by passing same dbname value to both factories.</p>
<p>The way this will work is that the process fixture will populate template database, which in turn will be used automatically by client fixture to create a test database from scratch. Fast, clean and no dangling transactions, that could be accidentally rolled back.</p>
<p>Same approach will work with noprocess fixture, while connecting to already running postgresql instance whether it’ll be on a docker machine or running remotely or locally.</p>
</blockquote>
<p>But I don't seem to be able to figure out how this is really supposed to work. Is there anywhere a full working example of it? It would be real nice to be able to just copy the existing database in the container for my tests to run in.</p>
<p>When I give my noproc factory and client factory the same db name, I run into a "database already exists" issue. In the stack trace there is a "create database ... template" call that tries to use a db with a suffix of _tmpl. It almost sounds like there should be a parameter to tell which db is to be created and which is the template...</p>
|
<python><postgresql><docker><pytest>
|
2023-05-29 12:25:13
| 0
| 325
|
CptPicard
|
76,357,201
| 10,620,788
|
Pyspark optimization with loops
|
<p>I have a table with the columns items, date, and sales. I have the total sales by date and item. I need to find the combination of 3 items which minimizes the standard deviation. The table looks like this: e.g.</p>
<pre><code># df:
# +------------+----------+------------+
# |item | date|sales |
# +------------+----------+------------+
# | 325|2021-05-01| 8524.64|
# | 400|2021-05-01| 9939.59|
# | 314|2021-05-03| 5466.3|
# | 267|2021-05-04| 6471.63|
# | 387|2021-05-04| 5406.85|
# +------------+----------+------------+
</code></pre>
<p>I need to find the group of items, that when grouped together the standard deviation is at its minimum. In order to do this I made a loop but it takes too long, this is how I thought about it:</p>
<pre><code>bestis = []
besti = 0
best_stdev = 99999
t = 1
list_items = [325, 400, 314, 267, 387]  # all values of the item column
while t <= 3:
    if besti in list_items:  # drop the item already picked
        list_items.remove(besti)
    for i in list_items:
        df_filtered = df.filter(col("item").isin(bestis + [i]))
        stddev_sales = df_filtered.select(stddev("sales")).collect()[0][0]
        if stddev_sales < best_stdev:
            best_stdev = stddev_sales
            besti = i
    bestis = bestis + [besti]
    t += 1
</code></pre>
<p>At the end of the first loop, I will have the item with the minimum standard deviation, e.g. [325]. At the end of the second loop, I should have the best two items that minimize the standard deviation, e.g. [325, 400]. At the end of the third loop I should have [325, 400, 387], assuming that 387 is the additional item that minimizes the standard deviation. I have working code; the only problem is that it takes too long because of the loop.</p>
<p>How could I use a pySpark map function instead in this case? Or maybe is there some kind of optimization algorithm that will do this faster? like given the list, find me the best combo of 3 items that minimizes the standard deviation?</p>
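<p>For comparison, here is the exhaustive alternative I'm weighing, as a pure-Python sketch with made-up sales figures (not Spark code): evaluate every 3-item combination directly instead of the greedy loop. It is only feasible while C(n, 3) stays small, but it finds the true optimum rather than a greedy approximation.</p>

```python
from itertools import combinations
from statistics import pstdev

# toy stand-in for the (item, sales) table
sales = {
    325: [8524.64, 9100.00],
    400: [9939.59, 9500.00],
    314: [5466.30, 5600.00],
    267: [6471.63, 6300.00],
    387: [5406.85, 5500.00],
}

def combo_stdev(items):
    # pooled standard deviation of all sales rows for the chosen items
    return pstdev([v for i in items for v in sales[i]])

best = min(combinations(sales, 3), key=combo_stdev)
```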
|
<python><optimization><pyspark>
|
2023-05-29 12:03:45
| 0
| 363
|
mblume
|
76,357,177
| 7,657,219
|
pip can't find flit-core dependency when offline installing other packages
|
<p>So I'm trying to install idna and trio, and both need flit_core. The problem is that I'm installing on an offline machine, downloading from the PyPI page "https://pypi.org/project/flit-core/". flit_core itself installs perfectly, but when I try to install the other two, it says:</p>
<blockquote>
<p>ERROR: Could not find a version that satisfies the requirement flit_core<4,>=3.9.0 (from versions: none)</p>
</blockquote>
<blockquote>
<p>ERROR: No matching distribution found for flit_core<4,>=3.9.0</p>
</blockquote>
<p>I've been installing the 3.9.0 version and that one should work, but it does not find it.</p>
<p>Does anyone know what could be happening? thanks!</p>
<p>EDIT.</p>
<p>I've tried several versions of flit-core with no luck, neither 3.9.0 nor 3.8.0. It does not recognize flit_core when installing flit and/or idna 3.4.</p>
<p>I've checked site-packages inside the python3.11 version and there are the folders flit_core and flit_core..dist-info.</p>
|
<python><linux><pip><package>
|
2023-05-29 12:00:42
| 2
| 353
|
Varox
|
76,356,969
| 54,557
|
How to add an outline to a mask with no internal lines in matplotlib
|
<p>For a given 2D mask, I would like to draw its outline with no internal lines between adjacent grid cells. Highlighting one cell is straightforward: <a href="https://stackoverflow.com/questions/56654952/how-to-mark-cells-in-matplotlib-pyplot-imshow-drawing-cell-borders">How to mark cells in matplotlib.pyplot.imshow (drawing cell borders)</a> and highlighting many cells is also straightforward: <a href="https://stackoverflow.com/questions/51432498/python-matplotlib-add-borders-to-grid-plot-based-on-value">Python matplotlib - add borders to grid plot based on value</a>. However, I do not want internal boundaries within the mask to be separated by a line, so the second link above is not suitable.</p>
<p>Here is example code to generate a mask:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
N = 50
xbound = np.linspace(0, 10, N + 1)
ybound = np.linspace(0, 10, N + 1)
x = (xbound[:-1] + xbound[1:]) / 2
y = (ybound[:-1] + ybound[1:]) / 2
X, Y = np.meshgrid(x, y)
Z = np.exp(-((X - 2.5)**2 + (Y - 2.5)**2)) + np.exp(-2 * ((X - 7.5)**2 + (Y - 6.5)**2))
mask = Z > 0.2
plt.imshow(mask, origin='lower', extent=(0, 10, 0, 10))
</code></pre>
<p>Which generates the image:</p>
<p><a href="https://i.sstatic.net/7t6Qr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7t6Qr.png" alt="Simple 2D mask" /></a></p>
<p>I want the boundary to be around each of the pixelated yellow circles, and to follow the same edges as show above (i.e. be parallel to the x/y-axis) - I want to emphasize that the underlying data is gridded. Using:</p>
<pre><code>plt.contour(x, y, mask, levels=[0.5])
</code></pre>
<p>comes close, but the contour is at 45° along the staircase edges.</p>
<p>Bonus points if the outline can be filled or shown using cartopy.</p>
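<p>One direction I'm considering is building the outline myself from cell edges: every masked cell contributes those of its four edges that border an unmasked cell (or the grid boundary), which keeps all lines axis-aligned and drops internal edges. A pure-Python sketch on index coordinates (scaling to the xbound/ybound values is left out):</p>

```python
def mask_outline(mask):
    # mask: list of rows of bools; returns axis-aligned boundary segments
    # as ((x0, y0), (x1, y1)) pairs in grid-index coordinates
    rows, cols = len(mask), len(mask[0])

    def cell(r, c):
        return 0 <= r < rows and 0 <= c < cols and mask[r][c]

    segments = []
    for r in range(rows):
        for c in range(cols):
            if not mask[r][c]:
                continue
            if not cell(r - 1, c):  # edge below is exposed
                segments.append(((c, r), (c + 1, r)))
            if not cell(r + 1, c):  # edge above is exposed
                segments.append(((c, r + 1), (c + 1, r + 1)))
            if not cell(r, c - 1):  # left edge is exposed
                segments.append(((c, r), (c, r + 1)))
            if not cell(r, c + 1):  # right edge is exposed
                segments.append(((c + 1, r), (c + 1, r + 1)))
    return segments

# L-shape of three cells: its perimeter has 8 unit segments
outline = mask_outline([[True, True], [True, False]])
```

<p>The segments could then be drawn with a <code>LineCollection</code>; filling the region is a separate problem.</p>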
|
<python><numpy><matplotlib><mask><cartopy>
|
2023-05-29 11:33:10
| 1
| 9,654
|
markmuetz
|
76,356,950
| 3,361,462
|
Log record is not propagated to the root handler
|
<p>I noticed very strange behaviour with logging and logger:</p>
<pre><code>import logging
logger = logging.getLogger(__name__)
logger.addHandler(logging.StreamHandler())
logger.warning("A") # Prints one message
logging.warning("A") # Prints another message
logger.warning("A") # Prints two messages ??
</code></pre>
<p>Why does <code>logger.warning</code> print one message at first and two messages later?</p>
|
<python><logging>
|
2023-05-29 11:29:51
| 1
| 7,278
|
kosciej16
|
76,356,924
| 5,539,782
|
scrape website through it's api using request
|
<p>I want to scrape live matches data from this website: <a href="https://egamersworld.com/matches" rel="nofollow noreferrer">https://egamersworld.com/matches</a></p>
<p>I tried using the API: <a href="https://api.egamersworld.com/matches?lang=en" rel="nofollow noreferrer">https://api.egamersworld.com/matches?lang=en</a>
but it returns nothing:</p>
<pre><code>import requests
session = requests.Session()
url = "https://api.egamersworld.com/matches?lang=en"
headers = {'User-Agent': 'Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:76.0) Gecko/20100101 Firefox/76.0', "referer": "https://egamersworld.com/"}
r = session.get(url, timeout=30, headers=headers)
print(r.status_code) #200
r.json() #{"list":[]}
</code></pre>
<p><code>r.status_code</code> returns 200, but <code>r.json()</code> returns nothing.
How can I get the data using this API?</p>
|
<python><web-scraping><python-requests>
|
2023-05-29 11:26:12
| 1
| 547
|
Khaled Koubaa
|
76,356,789
| 3,247,006
|
The default value with or without "()" in a Django model field and "ValueError: Cannot serialize function" error
|
<h2><The 1st case>:</h2>
<p>I can use <a href="https://docs.djangoproject.com/en/4.2/ref/utils/#django.utils.timezone.now" rel="nofollow noreferrer">timezone.now()</a> with or without <code>()</code> as a default value in <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#datetimefield" rel="nofollow noreferrer">DateTimeField()</a> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>from django.utils import timezone
# Here
datetime = models.DateTimeField(default=timezone.now())
</code></pre>
<p>Or:</p>
<pre class="lang-py prettyprint-override"><code>from django.utils import timezone
# Here
datetime = models.DateTimeField(default=timezone.now)
</code></pre>
<p>So, what is the difference between <code>now()</code> and <code>now</code>?</p>
<h2><The 2nd case>:</h2>
<p>I can use <code>timezone.now().date()</code> with <code>()</code> as a default value in <a href="https://docs.djangoproject.com/en/4.2/ref/models/fields/#datefield" rel="nofollow noreferrer">DateField()</a> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>from django.utils import timezone
# Here
date = models.DateField(default=timezone.now().date())
</code></pre>
<p>Also, I can use <code>timezone.now().date</code> without <code>()</code> as a default value in <code>DateField()</code> as shown below but when running <code>python manage.py makemigrations</code>:</p>
<pre class="lang-py prettyprint-override"><code>from django.utils import timezone
# Here
date = models.DateField(default=timezone.now().date)
</code></pre>
<p>Then, I got the error below:</p>
<blockquote>
<p>ValueError: Cannot serialize function <built-in method date of datetime.datetime object at 0x0000019D077B70F0>: No module</p>
</blockquote>
<p>So, what is the difference between <code>now().date()</code> and <code>now().date</code>?</p>
<p>Lastly overall, which should I use, the default value with or without <code>()</code> in a Django model field?</p>
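<p>For reference, the callable-versus-result distinction is visible with any plain function, outside Django (illustration only):</p>

```python
import time

def now_stamp():
    return time.time()

frozen = now_stamp()   # evaluated once, at definition time
deferred = now_stamp   # the function itself; evaluated on each use

time.sleep(0.01)
assert deferred() > frozen  # a later call yields a later timestamp
```

<p>Django stores a callable default and calls it for each new row, while an evaluated default is frozen at class-definition time. And <code>timezone.now().date</code> is a bound method of one concrete <code>datetime</code> instance, which is why the migration serializer cannot import it as a module-level reference.</p>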
|
<python><django><datetime><django-models><default>
|
2023-05-29 11:04:38
| 2
| 42,516
|
Super Kai - Kazuya Ito
|
76,356,601
| 7,657,219
|
How can I offline install tkinter for python3.11?
|
<p>In this case I'm having problems with Tkinter: although it is among the installed libraries/packages from the Python 3.11 installation, it seems I can't use it.</p>
<p>Whenever I try import tkinter it gives me the following error:</p>
<blockquote>
<p>No module named '_tkinter'</p>
</blockquote>
<p>My problem is that I'm using an <strong>offline linux RHEL 8</strong>, so I can't use sudo yum install python3-tk.</p>
<p>What could I do to solve this problem? Thanks!</p>
|
<python><python-3.x><linux><tkinter><rhel>
|
2023-05-29 10:35:22
| 0
| 353
|
Varox
|
76,356,591
| 10,396,469
|
How to skip weights init when loading pretrained transformers model?
|
<p>I need to find out how to load a pretrained transformer model without initializing weights in the beginning (to save time and memory)?</p>
<ol>
<li>I saw this code example, but this is not elegant:
<pre><code>def skip(*args, **kwargs):
    pass  # no-op replacement for the init functions

saved_inits = (torch.nn.init.kaiming_uniform_,
               torch.nn.init.uniform_,
               torch.nn.init.normal_)  # preserving
torch.nn.init.kaiming_uniform_ = skip
torch.nn.init.uniform_ = skip
torch.nn.init.normal_ = skip
model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path=args.model_path)
(torch.nn.init.kaiming_uniform_,
 torch.nn.init.uniform_,
 torch.nn.init.normal_) = saved_inits  # restoring
</code></pre>
</li>
<li>for <code>nn.module</code> subclasses there is <code>torch.nn.utils.skip_init</code>, but it won't work with <code>AutoModelForCausalLM</code></li>
</ol>
<p>Quest: find a way to skip weights initialization in <code>AutoModelForCausalLM</code> (or any similar transformers class) either using some standard wrapper or parameter.</p>
|
<python><pytorch><initialization><huggingface-transformers><transformer-model>
|
2023-05-29 10:33:20
| 1
| 4,852
|
Poe Dator
|
76,356,513
| 18,949,720
|
NeoVim ugly text next to variable assignment
|
<p>Using NeoVim, I created a Python file for a new project, and noticed that now when I declare a new variable and go back to normal mode, text appears that looks like this:</p>
<pre><code>a = 1 : Literal[1]
b = 'hello' : Literal['hello']
c = ['a', 'random', 'list'] : list[str]
</code></pre>
<p>I am not sure it comes from that, but I use coc-nvim for autocompletion.</p>
<p>Where does this text come from?</p>
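<p>If it comes from coc.nvim's inlay hints (an assumption; coc-pyright renders inferred types this way), they can be toggled per buffer with <code>:CocCommand document.toggleInlayHint</code>, or disabled in <code>coc-settings.json</code>:</p>

```json
{
  "inlayHint.enable": false
}
```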
|
<python><neovim><neovim-plugin>
|
2023-05-29 10:23:18
| 1
| 358
|
Droidux
|
76,356,365
| 7,340,304
|
Django ORM get object based on many-to-many field
|
<p>I have model with m2m field <code>users</code>:</p>
<pre class="lang-py prettyprint-override"><code>class SomeModel(models.Model):
objects = SomeModelManager()
users = models.ManyToManyField(settings.AUTH_USER_MODEL, blank=True)
</code></pre>
<p>My goal is to get instance of this model where set of instance users matches given queryset (it means that every user from queryset should be in m2m relation and no other users).</p>
<p>If I do</p>
<pre class="lang-py prettyprint-override"><code>obj = SomeModel.objects.get(users=qs)
</code></pre>
<p>I get</p>
<pre class="lang-py prettyprint-override"><code>ValueError: The QuerySet value for an exact lookup must be limited to one result using slicing.
</code></pre>
<p>And I totally understand the reason for such an error, so the next thing I did was create a custom QuerySet class for this model to override <code>.get()</code> behavior:</p>
<pre class="lang-py prettyprint-override"><code>class SomeModelQueryset(QuerySet):
def get(self, *args, **kwargs):
qs = super() # Prevent recursion
if (users := kwargs.pop('users', None)) is not None:
qs = qs.annotate(count=Count('users__id')).filter(users__in=users, count=users.count()**2)
return qs.get(*args, **kwargs)
class SomeModelManager(models.Manager.from_queryset(SomeModelQueryset)):
...
</code></pre>
<p>So what I try to do is to filter only objects with matching users and make sure that amount of users is the same as in queryset.</p>
<p>But I don't like the current version of the code. <code>users__in</code> adds an instance to the queryset each time it finds a match, so it results in <code>n</code> occurrences for each object (<code>n</code> being the number of m2m users for the specific object). <code>Count</code> in <code>.annotate()</code> counts user ids for each occurrence and then produces a single object with all counts combined. So for each object there are <code>n</code> occurrences with count <code>n</code>, and the resulting object will have count <code>n**2</code>.</p>
<p>Is there a way to rewrite this annotate+filter to produce count=n, not n^2 ?</p>
|
<python><django><many-to-many><django-orm>
|
2023-05-29 09:59:37
| 1
| 591
|
Bohdan
|
76,356,156
| 19,451,374
|
Asynchronous Google Datastore Calls in a Python FastAPI Application
|
<p>I am developing an application using <strong>FastAPI</strong> hosted on <strong>Google Cloud Run</strong>. A vital part of this application involves a series of read/write operations using <strong>Google Datastore</strong>. I've observed that these operations, particularly the ones executed in a loop, are becoming a performance bottleneck.</p>
<p>Here's a simplified snippet of the code:</p>
<pre><code># Some function calls omitted for brevity
@staticmethod
def get_single_widget(request_data, widget_id, user_profile, support_event_urls, shuffle, existing_items_ids):
# Fetch widget data, perform some operations
return widget
@staticmethod
async def get_recommendation_widgets(request_data: RequestData, support_event_urls, shuffle):
widgets = {}
try:
# Other function calls
existing_items_ids = []
for widget_id in request_data.widget_ids:
widget = RecommendService.get_single_widget(request_data, widget_id, user_profile,
support_event_urls, shuffle, existing_items_ids)
widgets[widget_id] = widget
except Exception as e:
widgets['error'] = 'An Error occurred in: ' + request_data.url + ' ' + str(e)
raise e
return widgets
</code></pre>
<p>I have explored the possibility of making these operations asynchronous using <em><code>asyncio</code></em>, but it seems that <strong>Google Datastore</strong> does not support <code>parallel</code> operations and its methods are blocking, which limits the potential performance gains.</p>
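<p>To illustrate the direction I explored: even with blocking client methods, the calls can at least be overlapped on worker threads. A minimal sketch, where <code>blocking_fetch</code> is a hypothetical stand-in for a Datastore read:</p>

```python
import asyncio
import time

def blocking_fetch(widget_id):
    time.sleep(0.05)  # stand-in for a blocking Datastore read
    return {"id": widget_id}

async def get_widgets(widget_ids):
    # run each blocking call in the default thread pool so they overlap
    results = await asyncio.gather(
        *(asyncio.to_thread(blocking_fetch, wid) for wid in widget_ids)
    )
    return dict(zip(widget_ids, results))

widgets = asyncio.run(get_widgets([1, 2, 3, 4]))
```

<p>With four widgets this takes roughly one sleep interval instead of four, but it only hides the blocking; it doesn't remove it.</p>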
<p>My question is twofold:</p>
<ol>
<li>Considering the limitations of <strong>Google Datastore</strong>, how can I effectively make these Datastore operations non-blocking, implement the <code>async</code> call, and improve the overall speed of my <strong>FastAPI</strong> application?</li>
<li>In the context of a resource-limited environment like <strong>Google Cloud Run</strong>, what strategies would you recommend to handle high read/write operations with Google Datastore and optimize the performance of my application?</li>
</ol>
<p>I'm open to any advice, best practices, or alternative approaches that can help optimize the performance of these Google Datastore operations and enhance the overall efficiency of my <strong>FastAPI</strong> application.</p>
<blockquote>
<p>I have already applied caching and cannot change or update the database due to code complexity; I mainly want to make the parallel calls work well.</p>
</blockquote>
<p>Thank you in advance for your insights!</p>
|
<python><asynchronous><async-await><google-cloud-datastore>
|
2023-05-29 09:27:26
| 0
| 443
|
Mohamed Haydar
|
76,355,950
| 5,403,987
|
Why won't dynaconf read my settings.toml file?
|
<p>I'm writing this Q&A to document a painful investigation and resolution in the hopes of saving others the hassle.</p>
<p>I'm using the Python Dynaconf (3.1.11) to manage settings for a package I've written. The settings are stored in a settings.toml file within the package. When I first started using Dynaconf, the recommended usage pattern was</p>
<pre><code>from dynaconf import settings
my_value = settings.MY_KEY
print(my_value)
</code></pre>
<p>and the settings.toml file contained</p>
<pre><code>[default]
my_key = 5
</code></pre>
<p>and this worked fine. Dynaconf has a documented search procedure for locating the settings.toml file. Be aware of issues when running under a debugger/IDE that may change your working directory: you need to set the current working directory to the top level of your project.</p>
<p>I recently updated my package to follow the newer recommended practice of initializing the settings using an explicitly created object in my config.py.</p>
<pre><code># config.py
from pathlib import Path

from dynaconf import Dynaconf

settings = Dynaconf(
envvar_prefix="DYNACONF",
settings_file="settings.toml",
root_path=Path(__file__).parent,
)
</code></pre>
<p>After this seemingly trivial change, attempts to access the settings fail, and</p>
<pre><code>my_value = settings.MY_KEY
</code></pre>
<p>now throws an error</p>
<pre><code>AttributeError: 'Settings' object has no attribute 'MY_KEY'
</code></pre>
<p>which makes it appear as if Dynaconf isn't loading or can't find the settings file.</p>
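<p>One likely cause (a hedged guess, based on Dynaconf 3.x behaviour): the standalone <code>Dynaconf()</code> constructor does not enable layered environments by default, so a <code>[default]</code> section in settings.toml becomes a nested key (roughly <code>settings.DEFAULT.MY_KEY</code>) instead of the active layer that the old <code>from dynaconf import settings</code> import provided. A sketch of a constructor that restores the layered behaviour:</p>
<pre><code># config.py -- sketch, assumes Dynaconf 3.x
from pathlib import Path

from dynaconf import Dynaconf

settings = Dynaconf(
    envvar_prefix="DYNACONF",
    settings_files=["settings.toml"],
    root_path=Path(__file__).parent,
    environments=True,  # honour [default]/[development]/[production] layers
)
</code></pre>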
|
<python><dynaconf>
|
2023-05-29 08:59:27
| 1
| 2,224
|
Tom Johnson
|
76,355,874
| 12,196,370
|
Selenium python script can't run after an update to chrome browser
|
<p>The problem occurs in my Python script, run on an Ubuntu server (no GUI). Chrome browser version 112 was installed on this server, the other requirements (chromedriver, Python, etc.) were all met, and the Python 3 script worked fine. It calls WebDriver to drive Chrome through some automated work.</p>
<p>But after a Chrome browser update a few days ago, the script no longer works, and I get an error like this:</p>
<pre><code>DevToolsActivePort file doesn't exist
</code></pre>
<p>I tried adding '--no-sandbox' and many other options (via ChromeOptions), but none of them work.</p>
<p>Does anybody know how to fix this problem?</p>
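<p>For reference, a hedged sketch of the options most often suggested for this error on headless servers. The flags are real Chromium/Selenium options, but whether they help depends on the specific Chrome/chromedriver version mismatch; Selenium 4.6+ can resolve a matching driver automatically via Selenium Manager:</p>
<pre><code># Sketch, assuming Selenium 4.6+ (bundles Selenium Manager, which
# downloads a chromedriver matching the installed Chrome version).
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")            # new headless mode (Chrome 109+)
options.add_argument("--no-sandbox")
options.add_argument("--disable-dev-shm-usage")   # avoid small /dev/shm on servers
options.add_argument("--remote-debugging-port=9222")  # often tied to the DevToolsActivePort error

driver = webdriver.Chrome(options=options)
driver.get("https://example.com")
print(driver.title)
driver.quit()
</code></pre>
<p>If the error persists after a browser update, the most common root cause is a chromedriver binary left over from the previous Chrome major version.</p>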
|
<python><google-chrome><selenium-webdriver>
|
2023-05-29 08:47:51
| 1
| 305
|
VKX
|
76,355,873
| 10,474,176
|
Unable to get complete participant list: get_participants in Telethon
|
<p><code>get_participants</code> is only returning a few participants, not the complete member list of the group. It was working fine a few weeks ago but is not working now. Am I missing anything here?</p>
<pre><code>target_group = self.groups[int(g_index)]
print('Fetching Members...')
all_participants = []
all_participants = self.client.get_participants(target_group, aggressive=True)
</code></pre>
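<p>As a hedged aside: Telethon also offers a paginating iterator, <code>iter_participants</code>, which fetches members lazily. Note, however, that for large groups Telegram's server may simply stop returning the full member list, which no client-side option (including <code>aggressive=True</code>) can override. A sketch using the iterator, assuming the same connected <code>self.client</code> and <code>target_group</code> from the code above:</p>
<pre><code># Sketch, assuming Telethon 1.x with a connected, authorized client.
all_participants = []
for user in self.client.iter_participants(target_group):
    all_participants.append(user)  # each item is a Telethon User object
print(f'Fetched {len(all_participants)} members')
</code></pre>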
|
<python><telegram><telethon>
|
2023-05-29 08:47:46
| 1
| 534
|
mondyfy
|
76,355,825
| 7,657,219
|
Offline install setuptools - Python3
|
<p>I'm trying to install setuptools offline in order to install cx_Oracle, as the latter requires the former on my offline machine, and I keep getting this error after running this command:</p>
<pre><code>python3.11 -m pip install setuptools-67.8.0.tar.gz
</code></pre>
<p><em>ERROR: Could not find a version that satisfies the requirement wheel (from versions: none)</em></p>
<p><em>ERROR: No matching distribution found for wheel</em></p>
<p><em>WARNING: There was an error checking the latest version of pip</em></p>
<p>How could I install it, what could be happening?</p>
<p>In the end, by installing an older version of wheel (38) and an older version of setuptools (67.7.2), I managed to install it.</p>
<p>Following a comment, I tried to install oracledb version 1.3.1, but now I have the same problem when installing that one:</p>
<p><em>ERROR: Could not find a version that satisfies the requirement setuptools>=40.6.0 (from versions: none)</em></p>
<p><em>ERROR: No matching distribution found for setuptools>=40.6.0</em></p>
<p><em>WARNING: There was an error checking the latest version of pip</em></p>
<p>Thanks a lot for the help!</p>
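<p>For context, the usual offline workflow is to fetch all required distributions on an internet-connected machine with the same Python version and platform, then point pip at that directory offline. A sketch (<code>pip download</code>, <code>--no-index</code>, and <code>--find-links</code> are standard pip options; the exact package set is an assumption here):</p>
<pre><code># On a machine WITH internet access (same Python 3.11, OS, and architecture):
python3.11 -m pip download setuptools wheel oracledb -d ./pkgs

# Copy ./pkgs to the offline machine, then install without contacting PyPI:
python3.11 -m pip install --no-index --find-links ./pkgs oracledb
</code></pre>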
|
<python><linux><pip><package><offline>
|
2023-05-29 08:38:31
| 0
| 353
|
Varox
|