| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,913,144
| 3,325,586
|
Databricks Python SQL connector certificate verify failed error
|
<p>I am connecting to Databricks SQL warehouse for the first time on my local machine (macOS Ventura 13.2.1) via Python 3.11:</p>
<pre><code>from databricks import sql
import os
connection = sql.connect(
server_hostname = "hostname",
http_path = "http path",
access_token = "token")
cursor = connection.cursor()
cursor.execute("SELECT * from range(10)")
print(cursor.fetchall())
cursor.close()
connection.close()
</code></pre>
<p>and I get the following error in the first connection line:</p>
<pre><code>RequestError: Error during request to server: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:992)
</code></pre>
<p>I tried many of the solutions found here with no luck:<a href="https://stackoverflow.com/questions/52805115/certificate-verify-failed-unable-to-get-local-issuer-certificate">certificate verify failed: unable to get local issuer certificate</a></p>
<p>This is a pretty fresh computer and Python install, so perhaps I'm missing something?</p>
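<p>A quick diagnostic sketch (not Databricks-specific, and only an assumption about the cause): on a fresh macOS install of python.org Python, no CA bundle is available until the bundled <code>Install Certificates.command</code> script has been run, which produces exactly this error. You can check where this interpreter looks for certificates:</p>

```python
import ssl

# Where does this interpreter look for CA certificates? On macOS,
# python.org builds ship without a usable CA bundle until
# "Install Certificates.command" has been run, and verification then
# fails with CERTIFICATE_VERIFY_FAILED.
paths = ssl.get_default_verify_paths()
print("cafile:        ", paths.cafile)
print("capath:        ", paths.capath)
print("openssl cafile:", paths.openssl_cafile)
```

<p>If <code>cafile</code> is <code>None</code> and nothing exists at the OpenSSL path, running <code>Install Certificates.command</code> from the Python 3.11 application folder (or pointing <code>SSL_CERT_FILE</code> at a CA bundle such as certifi's) is a common fix.</p>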
|
<python><databricks>
|
2023-04-02 16:06:49
| 1
| 1,223
|
somethingstrang
|
75,912,950
| 13,606,345
|
Catching IntegrityError vs filtering before object creation?
|
<p>I have a development task. At some point in this task, I need to create DB objects with a function that takes multiple json objects returned from a third-party client request. I loop through the json objects and call the <code>_create_db_object</code> method to create them one by one.</p>
<p>Sample model:</p>
<pre class="lang-py prettyprint-override"><code>class MyObject(models.Model):
    some_unique_field = models.CharField(unique=True)
    ...
    ...
</code></pre>
<p>The function I use to create objects in DB:</p>
<pre class="lang-py prettyprint-override"><code>def _create_db_object(dict_: dict[str, str]) -> None:
    try:
        return MyObject.objects.create(...)
    except IntegrityError:
        return None
</code></pre>
<p>My question is: "Would it be better if I used something like this instead of try except?"</p>
<pre class="lang-py prettyprint-override"><code>def _create_db_object(dict_: dict[str, str]) -> None:
    if MyObject.objects.filter(some_unique_field=dict_["unique_field"]):
        return None
    return MyObject.objects.create(...)
</code></pre>
<p>Which function would be better and why?</p>
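<p>The trade-off can be sketched without Django at all (the classes below are stand-ins, not Django API): try/except is race-free because the uniqueness check and the insert happen as one operation, while filter-then-create leaves a window in which another request can insert the same value between the check and the create.</p>

```python
class IntegrityError(Exception):
    """Stand-in for django.db.IntegrityError in this sketch."""

class FakeTable:
    """Stand-in for a table with a unique column."""
    def __init__(self):
        self._rows = set()

    def create(self, value):
        if value in self._rows:
            raise IntegrityError(value)
        self._rows.add(value)
        return value

def create_eafp(table, value):
    # EAFP: attempt the insert and let the "database" enforce uniqueness.
    # There is no gap between check and insert, so no race condition.
    try:
        return table.create(value)
    except IntegrityError:
        return None

def create_lbyl(table, value):
    # LBYL: check first, then create. Between these two steps another
    # request could insert the same value and create() would still raise.
    if value in table._rows:
        return None
    return table.create(value)

table = FakeTable()
assert create_eafp(table, "a") == "a"
assert create_eafp(table, "a") is None  # duplicate handled
```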
|
<python><django>
|
2023-04-02 15:30:03
| 0
| 323
|
Burakhan Aksoy
|
75,912,710
| 21,346,793
|
Try to make fine-tuning of model gpt like
|
<p>I am trying to fine-tune a model on essays; my dataset looks like:</p>
<pre><code>[ {
"topic": "Мы были достаточно цивилизованны, чтобы построить машину, но слишком примитивны, чтобы ею пользоваться». (Карл Краус)",
"text": "Высказывание Карла Крауса, австрийского писателя, о том, что «мы были достаточно цивилизованны, чтобы построить машину... }]
</code></pre>
<p>Here is the code:</p>
<pre><code>import torch
from transformers import AutoTokenizer, AutoModelWithLMHead, Trainer, TrainingArguments

model_name = "tinkoff-ai/ruDialoGPT-medium"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelWithLMHead.from_pretrained(model_name)

import json

def prepare_data(filepath):
    with open(filepath, "r", encoding="utf-8") as f:
        data = json.load(f)
    prompts = [example["topic"] for example in data]
    texts = [example["text"] for example in data]
    inputs = []
    for i in range(len(prompts)):
        inputs.append(prompts[i] + texts[i])
    return inputs

train_inputs = prepare_data("train.json")
test_inputs = prepare_data("test.json")

from transformers import TextDataset, DataCollatorForLanguageModeling

train_dataset = TextDataset(tokenizer=tokenizer, file_path="train.json", block_size=128)
test_dataset = TextDataset(tokenizer=tokenizer, file_path="test.json", block_size=128)

data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=False,
)

training_args = TrainingArguments(
    output_dir="./models",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=2,
    weight_decay=0.01,
    push_to_hub=False,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=test_dataset,
    data_collator=data_collator,
)
trainer.train()
</code></pre>
<p>But it takes a lot of memory, and I can't fine-tune it on Kaggle. The output is already over 19 GB.
Is this normal, or how can I fix it?
[2501/7802 59:11 &lt; 26:52, 2.06 it/s, Epoch 0.86/3] Every 10 minutes it takes another 2-3 GB. Is that normal?</p>
|
<python><nlp><kaggle>
|
2023-04-02 14:44:41
| 1
| 400
|
Ubuty_programmist_7
|
75,912,664
| 19,553,193
|
Insert multiple lists into a database using Python Django
|
<p>I'm confused about how to insert multiple lists by column using a loop. Say the output I want in the database is something like this:</p>
<pre><code>name percentage is_fixed_amount
PWD 20 0
Senior 20 0
OPD 6 0
Corporators 20 0
Special 1
</code></pre>
<p><strong>What I've tried</strong>, but it inserts every combination of the lists and doesn't reflect the data I actually want in the database. Does somebody know the solution? I appreciate any reply.</p>
<pre><code>discount = ['PWD','Senior Citizen','OPD','Corporators','Special']
discount_percentage = ['20','20','6','20','']
fixed_amount = [False,False,False,False,True]
for dis in discount:
    for per in discount_percentage:
        for fix in fixed_amount:
            discount_val = Discounts(name=dis, percentage=per, is_fixed_amount=fix)
            discount_val.save()
</code></pre>
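<p>For reference, the three nested loops save 5 × 5 × 5 = 125 rows, one per combination of the lists. Pairing the lists element-wise with <code>zip()</code> gives one row per discount; inside a Django loop that would become <code>Discounts(name=dis, percentage=per, is_fixed_amount=fix).save()</code>. A minimal sketch of the pairing:</p>

```python
# zip() pairs the three lists element-wise instead of taking their
# cross product, which is what the nested loops do.
discount = ['PWD', 'Senior Citizen', 'OPD', 'Corporators', 'Special']
discount_percentage = ['20', '20', '6', '20', '']
fixed_amount = [False, False, False, False, True]

rows = list(zip(discount, discount_percentage, fixed_amount))
assert len(rows) == 5                       # one row per discount
assert rows[0] == ('PWD', '20', False)
assert rows[4] == ('Special', '', True)
```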
|
<python><django><database>
|
2023-04-02 14:34:58
| 1
| 335
|
marivic valdehueza
|
75,912,627
| 17,487,457
|
Multidimensional array to pandas dataframe
|
<p>Suppose I have the following 4D array:</p>
<pre class="lang-py prettyprint-override"><code>A = np.array([
[[[0, 1, 2, 3],
[3, 0, 1, 2],
[2, 3, 0, 1],
[1, 3, 2, 1],
[1, 2, 3, 0]]],
[[[9, 8, 7, 6],
[5, 4, 3, 2],
[0, 9, 8, 3],
[1, 9, 2, 3],
[1, 0, -1, 2]]]])
A.shape
(2, 1, 5, 4)
</code></pre>
<p>I want to transform it to the following DataFrame (with columns A,B,C,D):</p>
<pre class="lang-py prettyprint-override"><code> A B C D
0 0 1 2 3
1 3 0 1 2
2 2 3 0 1
3 1 3 2 1
4 1 2 3 0
5 9 8 7 6
6 5 4 3 2
7 0 9 8 3
8 1 9 2 3
9 1 0 -1 2
</code></pre>
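<p>Since the last axis already holds the four columns, one sketch is to collapse all leading axes with <code>reshape</code>:</p>

```python
import numpy as np
import pandas as pd

A = np.array([
    [[[0, 1, 2, 3],
      [3, 0, 1, 2],
      [2, 3, 0, 1],
      [1, 3, 2, 1],
      [1, 2, 3, 0]]],
    [[[9, 8, 7, 6],
      [5, 4, 3, 2],
      [0, 9, 8, 3],
      [1, 9, 2, 3],
      [1, 0, -1, 2]]]])

# Collapse all leading axes, keeping the last axis as the 4 columns.
df = pd.DataFrame(A.reshape(-1, A.shape[-1]), columns=list("ABCD"))
assert df.shape == (10, 4)
```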
|
<python><pandas><dataframe><numpy><numpy-ndarray>
|
2023-04-02 14:27:15
| 1
| 305
|
Amina Umar
|
75,912,548
| 15,776,933
|
Python character with hex value
|
<p>Python has an escape sequence <code>\x</code>: if we write two hexadecimal digits after <code>\x</code> and print it, it gives us a character. For example, <code>print("\x48")</code> gives us the letter H. I want a list that shows which number is assigned to which character.</p>
<p>The hexadecimal numbers 48 and 46 are assigned to H and F respectively:</p>
<pre><code>print("\x48")
print("\x46")
</code></pre>
<p>I want a list of all numbers that are assigned to their respective characters.</p>
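<p>For reference, <code>\xHH</code> is simply a character's code point written in hexadecimal; <code>chr()</code> and <code>ord()</code> convert between numbers and characters, so such a table is one comprehension away (the first 128 entries are the ASCII table):</p>

```python
# \xHH in a string literal is just a code point in hex; chr() and ord()
# convert between numbers and characters.
table = {n: chr(n) for n in range(0x20, 0x7F)}  # printable ASCII range
assert table[0x48] == "H"
assert table[0x46] == "F"
assert ord("H") == 0x48 == 72

for n in (0x41, 0x48, 0x7A):
    print(f"\\x{n:02x} -> {chr(n)}")
```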
|
<python>
|
2023-04-02 14:11:54
| 1
| 955
|
Anubhav
|
75,912,547
| 3,398,324
|
How to get predictions from a specific PyTorch model
|
<p>I would like to obtain the prediction values from this PyTorch model (<a href="https://github.com/allegro/allRank" rel="nofollow noreferrer">https://github.com/allegro/allRank</a>) but when I run:</p>
<pre><code> model(val_dl)
</code></pre>
<p>I get this error:</p>
<pre><code>TypeError: LTRModel.forward() missing 2 required positional arguments: 'mask' and 'indices'
</code></pre>
<p>Now, mask and indices are used inside of the fit function (inside of train_utils.py) and I assume these must be the weights.</p>
<p>Is it maybe possible to obtain these from the model object after training is finished? Or would it make more sense to get the predictions inside of the fit function after it has finished fitting?</p>
|
<python><pytorch><pytorch-dataloader>
|
2023-04-02 14:11:45
| 2
| 1,051
|
Tartaglia
|
75,912,013
| 1,627,466
|
Taking the mean of a row of a pandas dataframe with NaN and arrays
|
<p>Here is my reproducible example:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'x' : [np.NaN, np.array([0,2])], 'y' : [np.array([3,2]),np.NaN], 'z' : [np.array([4,5]),np.NaN], 't' : [np.array([3,4]),np.array([4,5])]})
</code></pre>
<p>I would like to compute the mean array for each row excluding NaN</p>
<p>I have tried <code>df.mean(axis=1)</code>, which gives NaN for both rows. This is particularly surprising to me as <code>df.sum(axis=1)</code> appears to work as I would have expected.</p>
<p><code>[df.loc[i,:].mean() for i in df.index]</code> does work but I am sure there is a more straightforward solution.</p>
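<p>One explicit sketch that sidesteps pandas' NaN handling on object columns: keep only the array cells in each row and average them element-wise.</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': [np.nan, np.array([0, 2])],
                   'y': [np.array([3, 2]), np.nan],
                   'z': [np.array([4, 5]), np.nan],
                   't': [np.array([3, 4]), np.array([4, 5])]})

# Keep only the ndarray cells in each row, then average element-wise.
means = [np.mean([v for v in row if isinstance(v, np.ndarray)], axis=0)
         for _, row in df.iterrows()]

assert np.allclose(means[0], [10/3, 11/3])   # mean of [3,2], [4,5], [3,4]
assert np.allclose(means[1], [2.0, 3.5])     # mean of [0,2], [4,5]
```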
|
<python><arrays><pandas><dataframe><numpy>
|
2023-04-02 12:26:33
| 2
| 423
|
user1627466
|
75,911,989
| 4,358,785
|
Python unicode internalerror message during testing only
|
<p>I have a test that (among other things) reads a json.
When I read this file normally, everything is ok, but if I read it during Python's <code>unittest.TestCase</code>, I get a strange error message:</p>
<pre><code>INTERNALERROR> self.message('testStdErr', name=testName, out=out, flowId=flowId)
INTERNALERROR> File "C:\Program Files\JetBrains\PyCharm Community Edition 2022.3.1\plugins\python-ce\helpers\pycharm\_jb_runner_tools.py", line 117, in message
INTERNALERROR> _old_service_messages.message(self, messageName, **properties)
INTERNALERROR> File "C:\Program Files\JetBrains\PyCharm Community Edition 2022.3.1\plugins\python-ce\helpers\pycharm\teamcity\messages.py", line 101, in message
INTERNALERROR> retry_on_EAGAIN(self.output.write)(self.encode(message))
INTERNALERROR> File "C:\Program Files\JetBrains\PyCharm Community Edition 2022.3.1\plugins\python-ce\helpers\pycharm\teamcity\messages.py", line 68, in encode
INTERNALERROR> value = value.encode(self.encoding)
INTERNALERROR> File "C:\Users\myuser\Anaconda3\envs\myrepo\lib\encodings\cp1252.py", line 12, in encode
INTERNALERROR> return codecs.charmap_encode(input,errors,encoding_table)
INTERNALERROR> UnicodeEncodeError: 'charmap' codec can't encode characters in position 524-533: character maps to <undefined>
</code></pre>
<p>The traceback doesn't lead anywhere in my code, only in Unittest's, so I'm not even sure what I should be looking for. I already changed reading of the json to utf-8, but I still get this message.
As far as I can tell, apart from this message the test passes correctly.</p>
<p>Why am I getting this? What should I be looking for in my code? I'm not providing examples since I'm not sure where in the code this happens. The traceback doesn't lead anywhere. I'm not even sure it has anything to do with the json, but as I googled this message, this seems to be a possibility.</p>
<p>EDIT: While debugging, this message happens during/after TestCase call to .tearDown(), i.e., after my code finished running</p>
<p>Thanks</p>
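<p>An illustration of the underlying failure (an assumption about the cause, based on the traceback): the IDE's test reporter encodes its service messages with the console codec, cp1252 here, and any character outside that codec raises <code>UnicodeEncodeError</code>. The sketch below reproduces the failure on an in-memory stream and shows that a UTF-8 stream with a fallback error handler cannot fail this way; setting <code>PYTHONIOENCODING=utf-8</code> for the test run is the analogous workaround.</p>

```python
import io

# A cp1252-encoded text stream over an in-memory buffer, standing in
# for the console the test reporter writes to.
buf = io.BytesIO()
out = io.TextIOWrapper(buf, encoding="cp1252")

try:
    out.write("\u4f4d")   # a CJK character: not representable in cp1252
    out.flush()
    failed = False
except UnicodeEncodeError:
    failed = True
assert failed

# Reconfigured to UTF-8 with a fallback handler, the write succeeds.
out.reconfigure(encoding="utf-8", errors="backslashreplace")
out.write("\u4f4d")
out.flush()
assert buf.getvalue() == "\u4f4d".encode("utf-8")
```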
|
<python><unicode><python-unittest>
|
2023-04-02 12:22:55
| 1
| 971
|
Ruslan
|
75,911,976
| 10,358,059
|
Gmail removing hyperlinks from email. Why?
|
<p>My application is expected to send email verification link to user.</p>
<p>When I open such an email in Gmail, the links are not shown, Gmail removes them.</p>
<p>If I select [Show original] option, I can see that the links are there.</p>
<ul>
<li><p>Why is it so?</p>
</li>
<li><p>How can I fix this?</p>
</li>
</ul>
<p><strong>Note:</strong> I'm on development server.</p>
<p><strong>Displayed Email:</strong></p>
<pre><code>
Hi from Team!
You just subscribed to our newsletter. Please click the link below in order to confirm:
Thank You!
Team
</code></pre>
<p><strong>Original Email:</strong></p>
<pre><code>Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Hi from Team!
You just subscribed to our newsletter. Please click the link below in order to confirm:
http://127.0.0.1:8000/newsletter/verify_email/MTQ/bmq8o8-15cab7bf32186aab16c2c086e9beaec3/
Thank You!
Team
</code></pre>
<p>Thanks!</p>
|
<python><django><hyperlink><gmail>
|
2023-04-02 12:19:08
| 1
| 880
|
alv2017
|
75,911,809
| 1,651,481
|
svg2rlg converting svg to png only part of the image with percentage size
|
<p>My svg image:
<a href="https://pastebin.com/raw/EeptY1C8" rel="nofollow noreferrer">https://pastebin.com/raw/EeptY1C8</a></p>
<p>My code:</p>
<pre><code>from svglib.svglib import svg2rlg
from reportlab.graphics import renderPM
drawing = svg2rlg("qr_code.svg")
renderPM.drawToFile(drawing, "temp.png", fmt="PNG")
from tkinter import *
tk = Tk()
from PIL import Image, ImageTk
img = Image.open('temp.png')
pimg = ImageTk.PhotoImage(img)
size = img.size
frame = Canvas(tk, width=size[0], height=size[1])
frame.pack()
frame.create_image(0,0,anchor='nw',image=pimg)
tk.mainloop()
</code></pre>
<p>And I get this <code>temp.png</code> image:
<a href="https://i.sstatic.net/nUYd4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nUYd4.png" alt="enter image description here" /></a></p>
<p>And there is an error code:</p>
<pre><code>Message: 'Unable to resolve percentage unit without knowing the node name'
Arguments: ()
</code></pre>
<p>How to resolve percentage units?</p>
|
<python><tkinter><svg>
|
2023-04-02 11:44:15
| 1
| 612
|
XuMuK
|
75,911,755
| 219,976
|
How do I emulate a long-running CPU-bound function in Python?
|
<p>I want to run some experiments with threads and multiprocessing in Python. For that purpose I need a function that emulates a long-running CPU-bound process. I want to manually set the processing time to about 5 seconds. The function I created:</p>
<pre><code>def long_running_cpu_bound_function():
    print(f"Started at {datetime.now()}")
    time.sleep(5)
    print(f"Executed at {datetime.now()}")
</code></pre>
<p>But when I run this function on 2 threads:</p>
<pre><code>with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
    executor.submit(long_running_cpu_bound_function)
    executor.submit(long_running_cpu_bound_function)
</code></pre>
<p>I get an overall time of about 5 seconds instead of the expected 10 seconds that a real CPU-bound function would take because of the GIL.</p>
<p>How do I emulate long-running cpu-bound process correctly?</p>
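<p>A hedged replacement sketch: a busy-wait loop actually occupies the CPU and holds the GIL, unlike <code>time.sleep()</code>, which releases the GIL so both threads sleep concurrently.</p>

```python
import time
from datetime import datetime

def long_running_cpu_bound_function(seconds: float = 5.0) -> None:
    # Busy-wait: pure-Python work that holds the GIL almost continuously,
    # unlike time.sleep(), which releases it.
    print(f"Started at {datetime.now()}")
    end = time.perf_counter() + seconds
    while time.perf_counter() < end:
        pass
    print(f"Executed at {datetime.now()}")
```

<p>With this version, two submissions to a 2-worker <code>ThreadPoolExecutor</code> should take roughly 10 seconds in total on CPython, since only one thread can run Python bytecode at a time.</p>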
|
<python><multithreading><gil>
|
2023-04-02 11:31:33
| 1
| 6,657
|
StuffHappens
|
75,911,580
| 1,175,065
|
Multiprocessing in OpenAI Gym with abseil
|
<p>I am struggling with multiprocessing in OpenAI Gym with the abseil library. Basically, <code>gym.make</code> seems to work. However, I am trying to use <a href="https://github.com/Kautenja/gym-super-mario-bros/tree/master" rel="nofollow noreferrer">gym-super-mario-bros</a>, which is not working. Below is a minimal working example:</p>
<pre><code>from absl import app
import os
os.environ['OMP_NUM_THREADS'] = '1'
import gym
import gym_super_mario_bros
from gym_super_mario_bros.actions import SIMPLE_MOVEMENT
from nes_py.wrappers import JoypadSpace
import multiprocessing as mp
import torch
from torch import nn
import time

def get_env():
    env = JoypadSpace(gym_super_mario_bros.make('SuperMarioBros-1-1-v0'), SIMPLE_MOVEMENT)
    # env = gym.make('LunarLander-v2')  # other environments such as this one work well
    return env

def do_something(env, net1, net2):
    print('inside do_something')
    obs = env.reset()
    print(f'after reset {obs.shape}')
    net2.load_state_dict(net1.state_dict())
    print('after load_state_dict')

def main(args):
    del args
    env = get_env()
    net1 = nn.Sequential(nn.Conv2d(1, 20, 5), nn.ReLU())
    net2 = nn.Sequential(nn.Conv2d(1, 20, 5), nn.ReLU())
    net1.share_memory()
    net2.share_memory()
    device = torch.device('cuda')
    net1 = net1.to(device)
    net2 = net2.to(device)
    p = mp.Process(target=do_something, args=(env, net1, net2,))
    p.start()
    time.sleep(4.0)  # wait for the above process to execute print statements
    env.close()

if __name__ == '__main__':
    mp.set_start_method('spawn')
    app.run(main)
</code></pre>
<p>Upon running the above code, it seems stuck after printing <code>inside do_something</code>. It also reports <code>CUDA warning</code> in the terminal shown below:</p>
<pre><code>$ python mwe.py
inside do_something
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
[W CudaIPCTypes.cpp:15] Producer process has been terminated before all shared CUDA tensors released. See Note [Sharing CUDA tensors]
[W CUDAGuardImpl.h:46] Warning: CUDA warning: driver shutting down (function uncheckedGetDevice)
[W CUDAGuardImpl.h:62] Warning: CUDA warning: invalid device ordinal (function uncheckedSetDevice)
[W CUDAGuardImpl.h:46] Warning: CUDA warning: driver shutting down (function uncheckedGetDevice)
[W CUDAGuardImpl.h:62] Warning: CUDA warning: invalid device ordinal (function uncheckedSetDevice)
[W CUDAGuardImpl.h:46] Warning: CUDA warning: driver shutting down (function uncheckedGetDevice)
[W CUDAGuardImpl.h:62] Warning: CUDA warning: invalid device ordinal (function uncheckedSetDevice)
[W CUDAGuardImpl.h:46] Warning: CUDA warning: driver shutting down (function uncheckedGetDevice)
[W CUDAGuardImpl.h:62] Warning: CUDA warning: invalid device ordinal (function uncheckedSetDevice)
</code></pre>
<p>Please note that it works smoothly without any <code>CUDA warning</code> in case of the <code>gym.make</code>.</p>
<h2>Version Info.</h2>
<p>I am using Ubuntu 22.04.2 LTS OS. Below is the version info:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Library</th>
<th>Version</th>
</tr>
</thead>
<tbody>
<tr>
<td>absl-py</td>
<td>1.3.0</td>
</tr>
<tr>
<td>cuda</td>
<td>11.7</td>
</tr>
<tr>
<td>gym</td>
<td>0.17.2</td>
</tr>
<tr>
<td>gym-super-mario-bros</td>
<td>7.4.0</td>
</tr>
<tr>
<td>nes-py</td>
<td>8.2.1</td>
</tr>
<tr>
<td>numpy</td>
<td>1.21.0</td>
</tr>
<tr>
<td>python</td>
<td>3.9.16</td>
</tr>
<tr>
<td>torch</td>
<td>1.13.1</td>
</tr>
</tbody>
</table>
</div>
<p><strong>Is there a workaround to make mario work in multiprocessing environment?</strong></p>
|
<python><pytorch><openai-gym><abseil>
|
2023-04-02 10:55:08
| 0
| 6,358
|
ravi
|
75,911,568
| 1,473,517
|
Can numba use long doubles?
|
<p>In numpy float128 is typically a long double (not a 128 bit float). But I can’t tell if numba has support for this. The docs don’t seem to mention long doubles or the numpy type float128.</p>
|
<python><numba>
|
2023-04-02 10:51:37
| 1
| 21,513
|
Simd
|
75,911,472
| 572,575
|
Django cannot query data from another table by using OneToOneField
|
<p>I created Django models like this, where <code>api_key</code> of the <code>Setting</code> table is a <code>OneToOneField</code> to the <code>Key</code> table.</p>
<pre><code>class Key(models.Model):
    api_key = models.CharField(max_length=100, unique=True)
    api_key_name = models.CharField(max_length=100)

    def __str__(self):
        return self.api_key


class Setting(models.Model):
    api_key = models.OneToOneField(Key, on_delete=models.CASCADE)
    field_en = models.BooleanField(default=False)

    def __str__(self):
        return str(self.api_key)
</code></pre>
<p>I added data to the <code>Key</code> table like this:</p>
<pre><code>api_key="abc1"
api_key_name="test"
</code></pre>
<p>In views.py I use the key to query like this:</p>
<pre><code>def keyForm(request):
    key = "abc1"
    data, created = Setting.objects.get_or_create(api_key__api_key=key)
    key = Key.objects.get(api_key=key)
    data = {'setting': data, 'key': key}
    return render(request, 'key_form.html', data)
</code></pre>
<p>With that data in the <code>Key</code> table, it shows an error like this. How can I fix it?</p>
<pre><code>MySQLdb._exceptions.OperationalError: (1048, "Column 'api_key_id' cannot be null")
IntegrityError at /keyForm
(1048, "Column 'api_key_id' cannot be null")
</code></pre>
|
<python><django>
|
2023-04-02 10:33:10
| 2
| 1,049
|
user572575
|
75,911,397
| 1,627,466
|
Select and/or replace specific array inside pandas dataframe
|
<p>Here is my reproducible example:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'x' : [np.zeros(2), np.array([1,2])], 'y' : [np.array([3,2]),0], 'z' : [np.array([4,5]),np.zeros(2)], 't' : [np.array([3,4]),np.array([4,5])]})
</code></pre>
<p>My goal is to change <code>np.zeros(2)</code> to <code>np.Nan</code> so as to be able to compute the mean two-dimensional array for each row excluding 0.</p>
<p>I have tried:</p>
<p><code>df.replace(np.zeros(2),np.NaN)</code></p>
<p><code>df[df.eq(np.zeros(2)).any(axis=1)]</code></p>
<p><code>df.where(df == [np.zeros(2)])</code></p>
<p><code>df[df == np.zeros(2)]</code></p>
<p>all of which I would have expected to work had the item I am looking for not been an array.</p>
<p>Obviously, being new to Python, there must be a concept that I am not grasping.</p>
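<p>One sketch of why the attempts fail, and a workaround: comparing a cell against <code>np.zeros(2)</code> runs <code>array == array</code>, which returns an <em>array</em> of booleans rather than a single bool, so the element-wise pandas methods cannot use it. Testing each cell with a scalar predicate avoids this:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': [np.zeros(2), np.array([1, 2])],
                   'y': [np.array([3, 2]), 0],
                   'z': [np.array([4, 5]), np.zeros(2)],
                   't': [np.array([3, 4]), np.array([4, 5])]})

# A scalar predicate per cell: is this a length-2 array of all zeros?
is_zero_pair = lambda v: isinstance(v, np.ndarray) and v.shape == (2,) and not v.any()
clean = df.apply(lambda col: col.map(lambda v: np.nan if is_zero_pair(v) else v))

assert np.isnan(clean.loc[0, 'x'])
assert np.isnan(clean.loc[1, 'z'])
assert np.array_equal(clean.loc[1, 'x'], np.array([1, 2]))
```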
|
<python><arrays><pandas><dataframe><numpy>
|
2023-04-02 10:15:31
| 1
| 423
|
user1627466
|
75,911,387
| 9,104,399
|
My Python module shows an error that it does not have the attribute called for
|
<p>I am trying to install my own python package/module. I wrote some functions in a file named <code>utils.py</code> and put it in the <code>src</code> folder. I also created the <code>__init__.py</code> and <code>setup.py</code>. I installed it from PyCharm's terminal, and it is installed. The package is installed with the name "OMR".</p>
<p>But when I import the installed package/module and call any function from there, I get an error message that the module has no attribute with that name. For example, when I run the following code:</p>
<pre><code>import cv2
import OMR as utils
img = cv2.imread("Img.png")
widthImg=800
heightImg=800
rectContours, imgContours = utils.getRectangles_fromImage(img, widthImg, heightImg)
</code></pre>
<p>This code shows an error:</p>
<blockquote>
<p>AttributeError: module 'OMR' has no attribute 'getRectangles_fromImage'</p>
</blockquote>
<p>although the <code>utils.py</code> file in the <code>src</code> folder does contain this function.</p>
<p>Here is the code in the <code>setup.py</code> file.</p>
<pre><code>setup(
name='OMR',
version='0.0.1',
description='This program processes an OMR image of specific format. Then it identifies correct answers and scores the script',
author='Mr Alam',
packages=setuptools.find_packages(),
keywords=['OMR'],
classifiers=["Programming Language :: Python::3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent"],
python_requires='>=3.0',
py_modules=['OMR'],
package_dir={'':'src'},
install_requires=['opencv-python','numpy']
)
</code></pre>
<p>Here is the content of the <code>__init__.py</code> file.</p>
<pre><code>from .src import utils
</code></pre>
<p><a href="https://i.sstatic.net/0enqv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0enqv.png" alt="Here is the folder tree" /></a></p>
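<p>A plain-module simulation of what is likely happening (an assumption based on the shown <code>__init__.py</code>): importing the submodule exposes the functions as <code>OMR.utils.getRectangles_fromImage</code>, not as attributes of <code>OMR</code> itself. Re-exporting the names in <code>__init__.py</code>, e.g. <code>from .utils import getRectangles_fromImage</code> with the package rooted at <code>src</code>, puts them on the package:</p>

```python
import types

# Simulated modules; no packaging involved. The function body below is
# a placeholder, not the real OMR code.
utils = types.ModuleType("OMR.utils")
utils.getRectangles_fromImage = lambda img, w, h: "rectangles"

OMR = types.ModuleType("OMR")
OMR.utils = utils                       # effect of: from . import utils
assert not hasattr(OMR, "getRectangles_fromImage")   # the reported error
assert OMR.utils.getRectangles_fromImage(None, 800, 800) == "rectangles"

# effect of: from .utils import getRectangles_fromImage  (in __init__.py)
OMR.getRectangles_fromImage = utils.getRectangles_fromImage
assert hasattr(OMR, "getRectangles_fromImage")
```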
|
<python><package>
|
2023-04-02 10:14:09
| 0
| 325
|
Alam
|
75,911,335
| 1,470,314
|
See Pylint warnings visually in PyCharm
|
<p>I would like to use PyLint as an automatic code inspection in PyCharm, as in VSCode. (Marking with errors with red underscores and the like)</p>
<p>I found a way to <a href="https://stackoverflow.com/questions/38134086/how-to-run-pylint-with-pycharm?rq=4">run pylint in PyCharm</a> as an external tool, but that was not exactly what I was looking for. (I want it to continuously run while I develop)</p>
<p>Is that supported in PyCharm?</p>
|
<python><pycharm><pylint><code-inspection>
|
2023-04-02 10:03:35
| 1
| 1,012
|
yuvalm2
|
75,911,268
| 11,630,148
|
DRF ManyToMany Field getting an error when creating object
|
<p>I have a <code>Rant</code> model with <code>Category</code> linked to it using <code>ManyToManyField</code>. I've serialized it but the problem is this error:</p>
<pre class="lang-json prettyprint-override"><code>{
"categories": [
"Expected a list of items but got type \"str\"."
]
}
</code></pre>
<p>These are my serializers:</p>
<pre class="lang-py prettyprint-override"><code>class CategorySerializer(serializers.ModelSerializer):
    class Meta:
        model = Category
        fields = "__all__"


class RantSerializer(serializers.ModelSerializer):
    categories = CategorySerializer(many=True)

    class Meta:
        model = Rant
        fields = ('rant', 'slug', 'categories')
</code></pre>
<p>My <code>post</code> method is this:</p>
<pre class="lang-py prettyprint-override"><code>def post(self, request, format=None):
    serializer = RantSerializer(data=request.data)
    if serializer.is_valid():
        serializer.save()
        return Response(serializer.data, status=status.HTTP_201_CREATED)
    return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
</code></pre>
<p>What did I do wrong here?</p>
|
<python><django><django-rest-framework><django-queryset>
|
2023-04-02 09:49:53
| 3
| 664
|
Vicente Antonio G. Reyes
|
75,911,233
| 1,954,677
|
Cancel common factors from two polynomials without merging them into a rational function expression
|
<p>I'd like to cancel a rational function <code>f1(z)=p1(z)/q1(z)</code> given by two polynomial expressions <code>p1,q1</code>.</p>
<p>An almost complete solution would be the following</p>
<pre><code>import sympy as sp
z = sp.symbols('z')
p1 = z**2 - 1
q1 = z**2 - z
f1 = p1/q1
f2 = f1.cancel() # -> (z + 1)/z
</code></pre>
<p>where the missing step would be to extract the <code>q2,p2</code> from <code>f2</code> (which I don't know how to accomplish).</p>
<p>However, <strong>this is not my preferred approach</strong> as all my code is based on the <code>p,q</code>; the above approach seems like an unnecessary roundtrip <code>p1,q1 -> f1 -> f2 -> p2,q2</code> over the function representation <code>f</code>, which I'd like to avoid for performance and simplicity.</p>
<p>Instead I'd like to directly compute them <code>q1, p1 -> q2,p2</code>, i.e., <strong>I seek a solution of this form</strong></p>
<pre><code>cancel_common(p1, q1) # -> z+1, z
</code></pre>
<p>I expect that this functionality should exist, or should be simple to implement with provided manipulation features, because the <code>q,p</code> provide a much more explicit input to the problem, than <code>p/q</code>.</p>
<p>I thought a starting point could be <code>sympy.Rational</code> because it is explicitly based on the numerator-denominator-representation, but it unfortunately cannot handle the symbol <code>z</code> (i.e. does not work with functions):</p>
<pre><code>sp.Rational(p1, q1) # -> TypeError: invalid input: z**2 - 1
</code></pre>
<p>Does anyone know a way how to accomplish this?</p>
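<p>One direct <code>p1,q1 -> p2,q2</code> sketch divides out the polynomial GCD with SymPy's <code>gcd</code> and the exact quotient <code>quo</code>, never forming the rational expression (for the roundtrip variant, <code>sp.fraction(f2)</code> extracts the numerator and denominator from <code>f2</code>):</p>

```python
import sympy as sp

z = sp.symbols('z')
p1 = z**2 - 1
q1 = z**2 - z

def cancel_common(p, q):
    # Divide out the polynomial GCD; quo is the exact polynomial quotient.
    g = sp.gcd(p, q)
    return sp.quo(p, g), sp.quo(q, g)

p2, q2 = cancel_common(p1, q1)
assert p2 == z + 1 and q2 == z
```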
|
<python><sympy><symbolic-math><fractions>
|
2023-04-02 09:43:30
| 0
| 3,916
|
flonk
|
75,911,104
| 788,153
|
Error during Recursive feature elimination using Histogram based GBM
|
<p>I am implementing Recursive Feature Elimination using the HistGradientBoostingClassifier, but for some reason keep getting the following error:</p>
<p>ValueError: when <code>importance_getter=='auto'</code>, the underlying estimator HistGradientBoostingClassifier should have <code>coef_</code> or <code>feature_importances_</code> attribute. Either pass a fitted estimator to feature selector or call fit before calling transform.</p>
<pre><code>from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.feature_selection import RFECV
from sklearn.model_selection import cross_val_score, RepeatedStratifiedKFold
from sklearn.datasets import make_classification
X_train, y_train = make_classification(n_samples=1000, n_features=20, n_informative=10,
n_redundant=5, random_state=42)
# Create a HistGradientBoostingClassifier estimator
estimator = HistGradientBoostingClassifier().fit(X_train, y_train)
# Create a feature selector object using SelectFromModel
# Create a recursive feature elimination with cross-validation object
rfecv = RFECV(estimator=estimator, step=1, cv=RepeatedStratifiedKFold(n_splits=5, n_repeats=1),
scoring='roc_auc')
# Fit the recursive feature elimination object to the data
rfecv.fit(X_train, y_train)
# Print the selected features and their ranks
print("Selected Features: ", X_train.columns[rfecv.support_])
print("Feature Rankings: ", rfecv.ranking_)
</code></pre>
|
<python><machine-learning><classification><xgboost><cross-validation>
|
2023-04-02 09:18:32
| 1
| 2,762
|
learner
|
75,910,985
| 1,720,897
|
How to extract text using PyPDF2 without the verbose output
|
<p>I want to copy the contents from a PDF into a text file. I am able to extract the text using the following code:</p>
<pre><code>from PyPDF2 import PdfReader

infile = open("input.pdf", 'rb')
reader = PdfReader(infile)
for i in reader.pages:
    text = i.extract_text()
    ...
</code></pre>
<p>However, I do not need the text to be output to the terminal. Is there a way to tell the method not to output it to the terminal? I could not see anything in the documentation for the method.</p>
<p>Update: Silly me, I was printing the PageObject later down in the code. That caused me to think the output was coming from the <code>extract_text()</code> method.</p>
|
<python><pypdf>
|
2023-04-02 08:55:09
| 1
| 1,256
|
user1720897
|
75,910,679
| 1,033,591
|
How to reference an object's attribute when both obj and attribute are variables?
|
<p>I have code as below:</p>
<pre><code> {% for entry in qs %}
{% for field_name in field_names %}
<span>{{entry.field_name}}</span>
{%endfor%}
{%endfor%}
</code></pre>
<p>But nothing shows up inside the span tag. Then I change back to</p>
<pre><code> {{entry.name}}
</code></pre>
<p>where "name" is a property of every entry. Then the correct value shows up.</p>
<p>Why is that? Thanks...</p>
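<p>In Python terms the template is attempting a <code>getattr()</code> with a variable attribute name, which the Django template language has no built-in syntax for: <code>{{ entry.field_name }}</code> looks up a literal attribute (or key) called <code>field_name</code>, finds nothing, and renders an empty string. A custom template filter wrapping <code>getattr</code> is the usual workaround. The Python-level distinction:</p>

```python
# Variable attribute lookup needs getattr(); a dotted name is literal.
class Entry:
    name = "widget"

entry = Entry()
field_name = "name"

assert getattr(entry, field_name) == "widget"   # what the loop intends
assert getattr(entry, "field_name", "") == ""   # what the template does
```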
|
<python><django>
|
2023-04-02 07:36:25
| 1
| 2,147
|
Alston
|
75,910,641
| 9,727,704
|
Flask session variable doesn't persist between requests
|
<p>How do I save my session values between requests? I followed the guidelines here:</p>
<ul>
<li><a href="https://pythonbasics.org/flask-sessions/#Session-object" rel="nofollow noreferrer">https://pythonbasics.org/flask-sessions/#Session-object</a></li>
<li><a href="https://flask.palletsprojects.com/en/2.2.x/api/#sessions" rel="nofollow noreferrer">https://flask.palletsprojects.com/en/2.2.x/api/#sessions</a></li>
</ul>
<p>But the code below emits <code>{ 'message' : 'No session key' }</code> when I fetch <code>/</code> after <code>/quux</code>. In other words, the session key set in <code>/quux</code> doesn't persist.</p>
<p>Here's my curl commands:</p>
<pre><code>$ curl --verbose -X POST http://127.0.0.1:5000/quux --cookie-jar \
cookie.txt -H 'Content-Type: application/json' --data '{ "foo" : "mytest"}'
$ curl --verbose -X GET http://127.0.0.1:5000 --cookie-jar \
cookie.txt -H 'Content-Type: application/json'
</code></pre>
<p>After, the cookie file is empty:</p>
<pre><code>$ cat cookie.txt
# Netscape HTTP Cookie File
# https://curl.se/docs/http-cookies.html
# This file was generated by libcurl! Edit at your own risk.
$
</code></pre>
<p>And here's my flask code:</p>
<pre><code>from flask import Flask, jsonify, request, session, redirect, after_this_request
from flask_cors import CORS, cross_origin

app = Flask(__name__)
CORS(app)
app.config['CORS_HEADERS'] = 'Content-Type'
app.secret_key = 'secret'

@app.route('/')
def home():
    if "baz" in session:
        print("baz is", session['baz'])
        return jsonify({
            'message': session['baz']
        }), 200
    return jsonify({'message': 'No session key'}), 400

@app.route('/quux', methods=['POST'])
def quux():
    data = request.get_json()
    session['baz'] = data["foo"]
    # it is printed out here
    for x in session:
        print("key is", x, " val is", session[x])
    return redirect('/', code=302)

if __name__ == '__main__':
    app.run()
</code></pre>
<p>What is missing?</p>
|
<python><flask><session-cookies>
|
2023-04-02 07:23:51
| 3
| 765
|
Lucky
|
75,910,549
| 3,682,549
|
Create a multi-index data-frame
|
<p>I have the following code to create a multi-indexed data frame:</p>
<pre><code>import pandas as pd
import numpy as np
# Define the data
data = {
('rf', 'wv_pretrained'): (0.7392722279437006, 0.7412604086615894),
('rf', 'wv_custom'): (0.7746309646412634, 0.7762235207436783),
('rf', 'glove_pretrained'): (0.7411603158256094, 0.7427841615992615),
('rf', 'spacy_pretrained'): (0.731719876416066, 0.7338888018745795),
('rf', 'sent_trf'): (0.7229660144181257, 0.7242986991569383),
('rf', 'bert_trf'): (0.7126673532440783, 0.7139687043942123),
('rf', 'gpt_trf'): (0.7351527634740816, 0.7369294342385289),
('rf', 'tfidf'): (0.6920700308959835, 0.6878065672519817),
('Logistic_Regression', 'wv_pretrained'): (0.7392722279437006, 0.7412604086615894),
('Logistic_Regression', 'wv_custom'): (0.7746309646412634, 0.7762235207436783),
('Logistic_Regression', 'glove_pretrained'): (0.7411603158256094, 0.7427841615992615),
('Logistic_Regression', 'spacy_pretrained'): (0.731719876416066, 0.7338888018745795),
('Logistic_Regression', 'sent_trf'): (0.7229660144181257, 0.7242986991569383),
('Logistic_Regression', 'bert_trf'): (0.7126673532440783, 0.7139687043942123),
('Logistic_Regression', 'gpt_trf'): (0.7351527634740816, 0.7369294342385289),
('Logistic_Regression', 'tfidf'): (0.6920700308959835, 0.6878065672519817)
}
# Create the multi-index
algos = ['rf', 'Logistic_Regression']
embeddings = ['wv_pretrained', 'wv_custom', 'glove_pretrained', 'spacy_pretrained', 'sent_trf', 'bert_trf', 'gpt_trf', 'tfidf']
eval_metrics = ['accuracy', 'f1_score']
idx = pd.MultiIndex.from_product([algos, embeddings], names=['algos', 'Embedding'])
columns = pd.Index(eval_metrics, name='Metrics')
# Create the dataframe
df = pd.DataFrame(data, index=idx, columns=columns)
df
</code></pre>
<p>But I am getting no numerical values, only NaNs. Screenshot of the result:</p>
<p><a href="https://i.sstatic.net/88D7N.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/88D7N.png" alt="enter image description here" /></a></p>
<p>Any help would be appreciated.</p>
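One likely cause, for comparison: when a dict is passed to `pd.DataFrame`, its keys are interpreted as *column* labels, so the tuple-keyed dict is treated as columns and then reindexed against `idx`/`columns`, producing all-NaN. Building the frame from the dict's values instead avoids that. A sketch with a trimmed-down version of the same data:

```python
import pandas as pd

# Same shape as the full dict in the question, trimmed for brevity.
data = {
    ('rf', 'wv_custom'): (0.7746309646412634, 0.7762235207436783),
    ('rf', 'tfidf'): (0.6920700308959835, 0.6878065672519817),
    ('Logistic_Regression', 'wv_custom'): (0.7746309646412634, 0.7762235207436783),
}

# Keys become the row MultiIndex; each value tuple becomes one row.
idx = pd.MultiIndex.from_tuples(data.keys(), names=['algos', 'Embedding'])
df = pd.DataFrame(list(data.values()), index=idx,
                  columns=pd.Index(['accuracy', 'f1_score'], name='Metrics'))
print(df)
```

The same pattern extends to the full dict unchanged, since `from_tuples` takes the keys directly.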
|
<python><multi-index>
|
2023-04-02 06:58:33
| 1
| 1,121
|
Nishant
|
75,910,162
| 10,868,426
|
Regex Expression in Sparql - Mixed text and numbers
|
<p>I have some resources that are identified by <a href="http://myexample.org/NNNN" rel="nofollow noreferrer">http://myexample.org/NNNN</a> where NNNN is any number, for example, one resource may be <a href="http://myexample.org/9890" rel="nofollow noreferrer">http://myexample.org/9890</a>. I am using SPARQL and Python. To retrieve such resources, along with their descriptions, I have tried the following:</p>
<pre><code> pattern="(http://myexample.org/)(\\d{4})"
r = myGraph.query('''select distinct ?s ?desc where {?s schema:description ?desc .
filter(regex(str(?s),pattern)))}''')
</code></pre>
<p>But I get an error:</p>
<pre><code> Expected {SelectQuery | ConstructQuery | DescribeQuery | AskQuery}, found 'f' (at char 115)
</code></pre>
<p>The problem is with my regex expression, as when I remove it, the query works. I also tried</p>
<pre><code> pattern="*\\d{4}$"
</code></pre>
<p>But also got an error. I am working with RDFLib. Any help is appreciated.</p>
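Two things stand out in the snippet: the Python variable `pattern` is never substituted into the query string (the SPARQL parser sees the literal word `pattern`), and the filter has an unbalanced closing parenthesis. A hedged sketch of one way to interpolate the pattern (assuming the `schema:` prefix is already bound on the graph, as the rest of the query implies):

```python
# The pattern must end up inside the SPARQL text as a quoted string literal;
# a bare Python variable name is invisible to the SPARQL parser.
pattern = "^http://myexample\\.org/\\d{4}$"
query = f'''
select distinct ?s ?desc where {{
    ?s schema:description ?desc .
    filter(regex(str(?s), "{pattern}"))
}}'''
# r = myGraph.query(query)   # then query as in the question
```

Note the doubled braces `{{ }}` needed inside an f-string, and that the parentheses in the filter are now balanced.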
|
<python><regex><sparql><rdflib>
|
2023-04-02 04:46:38
| 0
| 609
|
User 19826
|
75,910,116
| 4,717,149
|
AttributeError: module 'scipy.stats' has no attribute 'itemfreq'
|
<p>I am getting an error like <code>AttributeError: module 'scipy.stats' has no attribute 'itemfreq'</code>
while trying to use the <code>stats.itemfreq</code> method from <code>scipy</code>, as shown in the example <a href="https://www.geeksforgeeks.org/scipy-stats-itemfreq-function-python/" rel="nofollow noreferrer">here</a>.</p>
<p>Any idea how I can resolve the issue? Any help is highly appreciated.</p>
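For context: `scipy.stats.itemfreq` was deprecated and later removed from SciPy, and `numpy.unique` with `return_counts=True` yields the same information. A sketch of the replacement:

```python
import numpy as np

arr = np.array([1, 1, 2, 3, 3, 3])

# itemfreq(arr) used to return a 2-column array of (value, count) pairs;
# np.unique with return_counts=True recovers exactly that.
values, counts = np.unique(arr, return_counts=True)
freq = np.column_stack((values, counts))
print(freq)  # [[1 2] [2 1] [3 3]]
```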
|
<python><scipy>
|
2023-04-02 04:25:30
| 1
| 506
|
Md Aman Ullah
|
75,910,045
| 11,462,274
|
How to send a large list created in Python as a parameter when making a request to a Google Apps Script Web App to update a Google Sheet?
|
<p>Initial important information, in my tests I tried to use both methods and both generate the same error:</p>
<p>GET:</p>
<pre><code># PYTHON
web_app_response = requests.get(
webAppsUrl, headers=headers, params=params, timeout=360
)
// Google Apps Script
function doGet(e) {}
</code></pre>
<p>POST:</p>
<pre><code># PYTHON
web_app_response = requests.post(
webAppsUrl, headers=headers, params=params, timeout=360
)
// Google Apps Script
function doPost(e) {}
</code></pre>
<blockquote>
<p>in the original code I will use <code>get</code> to explain in details</p>
</blockquote>
<p>I have a Python code in which I want to pass a DataFrame Pandas as a parameter and a URL also as a parameter in my request to the Web App of Google Apps Script:</p>
<pre class="lang-python prettyprint-override"><code>import requests
import json
UAGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36'
headers = {
'User-Agent': UAGENT
}
df_list = [['W. Yarbrough', '/players/william-paul-yarbrough-story/224324/', 'J. McCarthy', '/players/john--mccarthy/298849/', 'R. Priso-Mbongue', '/players/ralph-priso-mbongue/497224/', 'J. Murillo', '/players/jesus-david-murillo-largacha/238360/', 'Colorado Rapids', '/teams/united-states/colorado-rapids/2278/', 'https://secure.cache.images.core.optasports.com/soccer/teams/150x150/2278.png', 'D', 'https://a/b/2023/03/26/c/d/e/f/123456/', 'W', 'https://a/b/2023/03/26/c/d/e/f/123456/', 'USA', 'https://a/b/2023/03/26/c/d/e/f/123456/', 3.5, '', 'MLS', 'name large-link Link', 'shirtnumber sortdefaultasc', 'photo', 'text name sortdefaultasc', 'flag', 'number age', 'text position sortasc', 'number statistic game-minutes', 'number statistic appearances', 'number statistic lineups', 'number statistic subs-in', 'number statistic subs-out', 'number statistic subs-on-bench', 'number statistic goals', 'number statistic assists', 'number statistic yellow-cards', 'number statistic 2nd-yellow-cards', 'number statistic red-cards', '', 'MLS', 'name large-link Link', 'shirtnumber sortdefaultasc', 'photo', 'text name sortdefaultasc', 'flag', 'number age', 'text position sortasc', 'number statistic game-minutes', 'number statistic appearances', 'number statistic lineups', 'number statistic subs-in', 'number statistic subs-out', 'number statistic subs-on-bench', 'number statistic goals', 'number statistic assists', 'number statistic yellow-cards', 'number statistic 2nd-yellow-cards', 'number statistic red-cards', '', 'lat=39.80567609&lon=-104.891806841', "Dick's Sporting Goods Park", '', 'League', 'MLS - 2023', '', 'League', 'MLS - 2023', '', '/players/brad-stuver/296250/', '/players/william-paul-yarbrough-story/224324/', '/players/jt-marcinkowski/474554/', '', '/players/john--mccarthy/298849/', '/players/stefan-frei/73572/', '/players/john--mccarthy/298849/'], ['D. Wilson', '/players/daniel-wilson/88038/', 'G. Chiellini', '/players/giorgio-chiellini/17684/', 'D. 
Yapi', '/players/darren-yapi/591906/', 'D. Maldonado', '/players/denil-omar-maldonado-munguia/418816/', 'Los Angeles', '/teams/united-states/los-angeles-fc/41871/', 'https://secure.cache.images.core.optasports.com/soccer/teams/150x150/41871.png', 'L', 'https://a/b/2023/03/26/c/d/e/f/123456/', 'D', 'https://a/b/2023/03/26/c/d/e/f/123456/', 'MLS', '', 2.18, '', '2023', '/players/andreas-maxso/277505/', 5, 'https://secure.cache.images.core.optasports.com/soccer/players/18x18/277505.png', 'A. Maxsø', '', 29, 'D', 450, 5, 5, 0, 0, 0, 0, 0, 1, 0, 0, '', '2023', '/players/john--mccarthy/298849/', 77.0, 'https://secure.cache.images.core.optasports.com/soccer/players/18x18/298849.png', 'J. McCarthy', '', 30, 'G', 360, 4, 4, 0, 0, 0, 0, 0, 0, 0, 0, '', 'Location', 'Commerce City', '', 'Rank', '14', '', 'Rank', '3', '', '/players/an-kolmani/466622/', '/players/daniel-wilson/88038/', '/players/jonathan-mensah/62379/', '', '/players/giorgio-chiellini/17684/', '/players/yeimar-pastor-gomez-andrade/308766/', '/players/giorgio-chiellini/17684/'], ['A. Maxsø', '/players/andreas-maxso/277505/', 'R. Hollingshead', '/players/ryan-michael-hollingshead/332676/', 'K. Cabral', '/players/kevin-cabral/460281/', 'N. Ordaz', '/players/nathan-ordaz/637317/', '', '', '', 'L', 'https://a/b/2023/03/26/c/d/e/f/123456/', 'L', 'https://a/b/2023/03/26/c/d/e/f/123456/', '', '', 3.9, '', '', '/players/william-paul-yarbrough-story/224324/', 22, 'https://secure.cache.images.core.optasports.com/soccer/players/18x18/224324.png', 'W. Yarbrough', '', 34, 'G', 450, 5, 5, 0, 0, 0, 0, 0, 0, 0, 0, '', '', '/players/kellyn-kai-perry-acosta/172800/', 23.0, 'https://secure.cache.images.core.optasports.com/soccer/players/18x18/172800.png', 'K. 
Acosta', '', 27, 'M', 352, 4, 4, 0, 1, 0, 0, 0, 1, 0, 0, '', 'Country', 'US', '', 'Matches played', '5', '', 'Matches played', '4', '', '/players/nick-lima/474083/', '/players/andreas-maxso/277505/', '/players/miguel-trauco/176390/', '', '/players/jesus-david-murillo-largacha/238360/', '/players/nouhou-tolo/442347/', '/players/sergi-palencia-hurtado/314218/'], ['K. Rosenberry', '/players/keegan-rosenberry/298858/', 'A. Long', '/players/aaron-long/335436/', 'M. Edwards', '/players/michael-edwards/489485/', 'Sergi Palencia', '/players/sergi-palencia-hurtado/314218/', '', '', '', 'D', 'https://a/b/2023/03/26/c/d/e/f/123456/', 'W', 'https://a/b/2023/03/26/c/d/e/f/123456/', '', '', 2.16, '', '', '/players/cole-bassett/496345/', 23, 'https://secure.cache.images.core.optasports.com/soccer/players/18x18/496345.png', 'C. Bassett', '', 21, 'M', 442, 5, 5, 0, 1, 0, 1, 0, 1, 0, 0, '', '', '/players/kwadwo-opoku/675119/', 22.0, 'https://secure.cache.images.core.optasports.com/soccer/players/18x18/675119.png', 'K. Opoku', '', 21, 'A', 335, 4, 4, 0, 1, 0, 1, 0, 0, 0, 0, '', 'Weather', 'nuvens dispersas', '', 'Wins', '0', '', 'Wins', '3', '', '/players/jon-gallagher/532849/', '/players/alhassan-abubakar/445377/', '/players/carlos-akapo-martinez/226778/', '', '/players/sergi-palencia-hurtado/314218/', '/players/jackson-ragen/483475/', '/players/aaron-long/335436/']]
url = 'https://stackoverflow.com'
webAppsUrl = "https://script.google.com/macros/s/XXXXX/exec"
params = {
'pylist': json.dumps(df_list),
'okgo': url
}
web_app_response = requests.get(
webAppsUrl, headers=headers, params=params, timeout=360
)
print(web_app_response.text)
</code></pre>
<p>In my Web App code I tried to send it to the spreadsheet like this:</p>
<pre class="lang-javascript prettyprint-override"><code>function doGet(e) {
const lock = LockService.getDocumentLock();
if (lock.tryLock(360000)) {
try {
const ss = SpreadsheetApp.getActive();
let clrrg = Sheets.newBatchClearValuesRequest();
clrrg.ranges = ['All Python!A1:BW'];
Sheets.Spreadsheets.Values.batchClear(clrrg,ss.getId());
var pylist = JSON.parse(e.parameter.pylist);
var allpy_sheet = SpreadsheetApp.getActive().getSheetByName('All Python');
allpy_sheet.getRange(1, 1, pylist.length, pylist[0].length).setValues(pylist);
lock.releaseLock();
return ContentService.createTextOutput('Done!');
} catch (error) {
lock.releaseLock();
const errorObj = {
message: error.message,
stack: error.stack
};
const folder = DriveApp.getFoldersByName("Error GAS").next();
const file = folder.createFile(
new Date().toString() + '.txt',
JSON.stringify(errorObj, null, 2)
);
return ContentService.createTextOutput(error);
}
} else {
return ContentService.createTextOutput('Timeout!');
}
}
</code></pre>
<p>But I received a <code>Bad Request Error 400</code> when making the request:</p>
<pre><code><HTML>
<HEAD>
<TITLE>Bad Request</TITLE>
</HEAD>
<BODY BGCOLOR="#FFFFFF" TEXT="#000000">
<H1>Bad Request</H1>
<H2>Error 400</H2>
</BODY>
</HTML>
</code></pre>
<p>The error happens even when the values are placed in the spreadsheet:</p>
<p><a href="https://i.sstatic.net/2qVrn.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2qVrn.gif" alt="enter image description here" /></a></p>
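One plausible cause of the 400: `requests` puts `params` in the query string, and the JSON-encoded list is far larger than what Google's front end accepts in a URL. Moving the payload into the POST body (and reading `e.postData.contents` in `doPost`) sidesteps the limit. A sketch with a stand-in list, using a prepared request so the body can be inspected without sending anything:

```python
import json
import requests

df_list = [['a', 1], ['b', 2]]  # stand-in for the real nested list
url = 'https://stackoverflow.com'
web_apps_url = 'https://script.google.com/macros/s/XXXXX/exec'

# json= serializes the dict into the request BODY, not the query string,
# so the URL stays short no matter how large df_list grows.
req = requests.Request('POST', web_apps_url,
                       json={'pylist': df_list, 'okgo': url}).prepare()
# requests.Session().send(req)  # would actually send it

# On the Apps Script side (doPost), the data would then be read with:
#   var body = JSON.parse(e.postData.contents);
#   var pylist = body.pylist;
```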
|
<python><google-apps-script><google-sheets>
|
2023-04-02 03:56:18
| 1
| 2,222
|
Digital Farmer
|
75,909,965
| 6,176,440
|
Python using Pandas - Retrieving the name of all columns that contain numbers
|
<p>I searched for a solution on the site, but I couldn't find anything relevant, only outdated code. I am new to the Pandas library and I have the following <code>dataframe</code> as an example:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
<th>E</th>
</tr>
</thead>
<tbody>
<tr>
<td>142</td>
<td>0.4</td>
<td>red</td>
<td>108</td>
<td>front</td>
</tr>
<tr>
<td>164</td>
<td>1.3</td>
<td>green</td>
<td>98</td>
<td>rear</td>
</tr>
<tr>
<td>71</td>
<td>-1.0</td>
<td>blue</td>
<td>234</td>
<td>front</td>
</tr>
<tr>
<td>109</td>
<td>0.2</td>
<td>black</td>
<td>120</td>
<td>front</td>
</tr>
</tbody>
</table>
</div>
<p>I would like to extract the name of the columns that contain numbers (integers and floats). It is completely fine to use the first row to achieve this.
So the result should look like this: <code>['A', 'B', 'D']</code></p>
<p>I tried the following command to get some of the columns that contained numbers:</p>
<pre><code>dataframe.loc[0, dataframe.dtypes == 'int64']
Out:
A 142
D 108
</code></pre>
<p>There are two problems with this. First of all, I just need the name of the columns, but not the values. Second, this captures only the integer columns. My next attempt just gave an error:</p>
<pre><code>dataframe.loc[0, dataframe.dtypes == 'int64' or dataframe.dtypes == 'float64']
</code></pre>
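Two notes on the attempts above: `or` fails because Python's `or` cannot combine two boolean Series (use `|` with parentheses, or `.isin(['int64', 'float64'])`), and for this task `select_dtypes` does the dtype test directly. A sketch on the sample data:

```python
import pandas as pd

df = pd.DataFrame({'A': [142, 164, 71, 109],
                   'B': [0.4, 1.3, -1.0, 0.2],
                   'C': ['red', 'green', 'blue', 'black'],
                   'D': [108, 98, 234, 120],
                   'E': ['front', 'rear', 'front', 'front']})

# 'number' matches all integer and float dtypes at once.
numeric_cols = df.select_dtypes(include='number').columns.tolist()
print(numeric_cols)  # ['A', 'B', 'D']
```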
|
<python><pandas><dataframe>
|
2023-04-02 03:23:03
| 5
| 490
|
Adrian
|
75,909,937
| 17,801,773
|
Displaying a RGB image in float64
|
<p>I have an image with data type uint8. I want to convert it to the data type float64 and display it. I expected to see the image as I displayed it with data type uint8. But the result is this:
<a href="https://i.sstatic.net/45Hht.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/45Hht.png" alt="enter image description here" /></a>
My original image is like this:</p>
<p><a href="https://i.sstatic.net/wMQPK.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wMQPK.jpg" alt="enter image description here" /></a></p>
<p>Why is this happening? How can I fix it?</p>
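The likely explanation: `imshow` interprets float RGB data as values in [0, 1] and clips anything outside that range, while a plain `astype(np.float64)` keeps the original 0–255 values, so almost every pixel saturates. Scaling by 255 during the conversion restores the expected rendering. A minimal sketch:

```python
import numpy as np

img_uint8 = np.array([[[200, 30, 30], [30, 200, 30]]], dtype=np.uint8)

# Float images are expected in [0, 1]; divide by 255 when converting.
img_float = img_uint8.astype(np.float64) / 255.0

# plt.imshow(img_float) would now render the same colors as the uint8 image.
print(img_float.min(), img_float.max())
```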
|
<python><image><matplotlib><image-processing><imshow>
|
2023-04-02 03:13:21
| 1
| 307
|
Mina
|
75,909,808
| 5,212,614
|
AttributeError: 'CountVectorizer' object has no attribute 'get_feature_names' -- Topic Modeling -- Latent Dirichlet Allocation
|
<p>I'm trying to follow the example from the link below.</p>
<p><a href="https://medium.datadriveninvestor.com/trump-tweets-topic-modeling-using-latent-dirichlet-allocation-e4f93b90b6fe" rel="nofollow noreferrer">https://medium.datadriveninvestor.com/trump-tweets-topic-modeling-using-latent-dirichlet-allocation-e4f93b90b6fe</a></p>
<p>All the code up to this point works, but the code below does not work.</p>
<pre><code>from sklearn.decomposition import LatentDirichletAllocation
vectorizer = CountVectorizer(
analyzer='word',
min_df=3,# minimum required occurences of a word
stop_words='english',# remove stop words
lowercase=True,# convert all words to lowercase
token_pattern='[a-zA-Z0-9]{3,}',# num chars > 3
max_features=5000,# max number of unique words
)
data_matrix = vectorizer.fit_transform(df_clean['question_lemmatize_clean'])
lda_model = LatentDirichletAllocation(
n_components=10, # Number of topics
learning_method='online',
random_state=20,
n_jobs = -1 # Use all available CPUs
)
lda_output = lda_model.fit_transform(data_matrix)
import pyLDAvis
import pyLDAvis.sklearn
pyLDAvis.enable_notebook()
pyLDAvis.sklearn.prepare(lda_model, data_matrix, vectorizer, mds='tsne')
</code></pre>
<p>When I run that code snippet, I get this error message.</p>
<pre><code>AttributeError Traceback (most recent call last)
Cell In[83], line 29
27 import pyLDAvis.sklearn
28 pyLDAvis.enable_notebook()
---> 29 pyLDAvis.sklearn.prepare(lda_model, data_matrix, vectorizer, mds='tsne')
File ~\anaconda3\lib\site-packages\pyLDAvis\sklearn.py:94, in prepare(lda_model, dtm, vectorizer, **kwargs)
62 def prepare(lda_model, dtm, vectorizer, **kwargs):
63 """Create Prepared Data from sklearn's LatentDirichletAllocation and CountVectorizer.
64
65 Parameters
(...)
92 See `pyLDAvis.prepare` for **kwargs.
93 """
---> 94 opts = fp.merge(_extract_data(lda_model, dtm, vectorizer), kwargs)
95 return pyLDAvis.prepare(**opts)
File ~\anaconda3\lib\site-packages\pyLDAvis\sklearn.py:38, in _extract_data(lda_model, dtm, vectorizer)
37 def _extract_data(lda_model, dtm, vectorizer):
---> 38 vocab = _get_vocab(vectorizer)
39 doc_lengths = _get_doc_lengths(dtm)
40 term_freqs = _get_term_freqs(dtm)
File ~\anaconda3\lib\site-packages\pyLDAvis\sklearn.py:20, in _get_vocab(vectorizer)
19 def _get_vocab(vectorizer):
---> 20 return vectorizer.get_feature_names()
AttributeError: 'CountVectorizer' object has no attribute 'get_feature_names'
</code></pre>
<p>I feel like, perhaps, some library is not updated correctly, but I can't tell, and when I Google it, I'm not getting great results to help me debug this thing. Anyone know what's wrong here?</p>
|
<python><python-3.x><topic-modeling><countvectorizer><latentdirichletallocation>
|
2023-04-02 02:21:38
| 2
| 20,492
|
ASH
|
75,909,708
| 610,569
|
How to raise meaningful import errors from users' casing typos?
|
<p>Given a library that allows this import:</p>
<pre><code>from thislibrary import FooBar
</code></pre>
<p><strong>Is there a way to figure out the casing of the characters in <code>FooBar</code>?</strong></p>
<p>Motivation: This is because users of <code>thislibrary</code> usually misspell the object and do
<ul>
<li><code>from thislibrary import Foobar</code>,</li>
<li><code>from thislibrary import foobar</code> or even</li>
<li><code>from thislibrary import fooBar</code>.</li>
</ul>
<p>I've tried generating all possible cases of the object by doing something like, <a href="https://stackoverflow.com/a/11144539/610569">https://stackoverflow.com/a/11144539/610569</a>:</p>
<pre><code>from itertools import product
s = 'foobar'
list(map("".join, product(*zip(s.upper(), s.lower()))))
</code></pre>
<p>[out]:</p>
<pre><code>['FOOBAR',
'FOOBAr',
'FOOBaR',
'FOOBar',
'FOObAR',
'FOObAr',
...
]
</code></pre>
<p>Then I've tried to find the name of the variable in string as such:</p>
<pre><code>import importlib
from itertools import product
import thislibrary
from tqdm import tqdm
def find_variable_case(s, max_tries=1000):
var_permutations = list(map("".join, product(*zip(s.upper(), s.lower()))))
# Intuitively, any camel casing should minimize the no. of upper chars.
# From https://stackoverflow.com/a/58789587/610569
var_permutations.sort(key=lambda ss: (sum(map(str.isupper, ss)), len(ss)))
    for i, v in tqdm(enumerate(var_permutations)):
if i > max_tries:
return
try:
dir(thislibrary).index(v)
return v
except:
continue
find_variable_case('foobar')
</code></pre>
<p>[out]:</p>
<pre><code>'FooBar'
</code></pre>
<p>But to import this it's still kinda painful, since the user have to manually type in the following after using the <code>find_variable_case()</code> function.</p>
<pre><code>from thislibrary import FooBar
</code></pre>
<h2>Is there a way to write a function that checks for the objects imports inside <code>thislibrary</code>?</h2>
<p>Such that when the user runs this:</p>
<pre><code>from thislibrary import foobar
</code></pre>
<p>That raises a meaning error:</p>
<pre><code>ModuleNotFoundError: Perhaps you are referring to this import?
>>> from thislibrary import Foobar
</code></pre>
<hr />
<p>For context, this is often the case for model machine-learning models to be abbreviated with character casing that are not consistent, e.g.</p>
<ul>
<li>The name of the model on paper is <code>LLaMA</code> <a href="https://ai.facebook.com/blog/large-language-model-llama-meta-ai/" rel="nofollow noreferrer">https://ai.facebook.com/blog/large-language-model-llama-meta-ai/</a> and in code sometimes the developer names the object, <a href="https://colab.research.google.com/drive/1eWAmesrW99p7e1nah5bipn0zikMb8XYC" rel="nofollow noreferrer"><code>Llama</code></a></li>
<li>Sometimes the name of the model on paper is <code>BERT</code> and the developer names the object <a href="https://stackoverflow.com/questions/66822496/no-module-named-transformers-models-while-trying-to-import-berttokenizer"><code>Bert</code></a></li>
</ul>
<p>There seems to be a loose convention of title-casing such names, but in any case users should get a more meaningful error message whenever possible.</p>
<p>For now, I've tried:</p>
<pre><code>import transformers
from itertools import product
import importlib
def find_variable_case(s, max_tries=1000):
var_permutations = list(map("".join, product(*zip(s.upper(), s.lower()))))
# Intuitively, any camel casing should minimize the no. of upper chars.
# From https://stackoverflow.com/a/58789587/610569
var_permutations.sort(key=lambda ss: (sum(map(str.isupper, ss)), len(ss)))
for i, v in enumerate(var_permutations):
if i > max_tries:
return
try:
dir(transformers).index(v)
return v
except:
continue
v = find_variable_case('LLaMatokenizer')
exec(f"from transformers import {v}")
vars()[v]
</code></pre>
<p>Which outputs:</p>
<pre><code>transformers.utils.dummy_sentencepiece_objects.LlamaTokenizer
</code></pre>
<p>Letting the user know that the right casing for the variable is <code>LlamaTokenizer</code>.</p>
<p>Repeating the question given all the context above,</p>
<h2>Is there a way to write a function that checks for the objects imports inside thislibrary?</h2>
<p>Such that when a user does:</p>
<pre><code>from transformers import LLaMatokenizer
</code></pre>
<p>the error would show:</p>
<pre><code>ModuleNotFoundError: Perhaps you are referring to this import?
>>> from transformers import LlamaTokenizer
</code></pre>
|
<python><importerror><modulenotfounderror><python-importlib>
|
2023-04-02 01:41:04
| 2
| 123,325
|
alvas
|
75,909,676
| 14,154,784
|
Only render part of django template if objects.all is not empty
|
<p>I only want to render part of a django template if objects.all is not empty. Normally this is done like:</p>
<pre><code><ul>
{% for thing in things.all %}
<li>{{ thing.name }}</li>
{% empty %}
<li>Sorry, nothing to see here</li>
{% endfor %}
</ul>
</code></pre>
<p>But what if I want to have a heading or something that only shows if there's something to put in the list? I don't want the heading to be repeated each time the for loop runs. Is there something like <code>{% not empty %}</code> I could use, e.g.:</p>
<pre><code>{% if things.all not empty %}
<h1>Things</h1>
<ul>
{% for thing in things.all %}
<li>{{ thing.name }}</li>
{% endfor %}
</ul>
</code></pre>
<p>The above, however, throws a <code>TemplateSyntaxError</code> for django <code>Not expecting 'not' as infix operator in if tag.</code></p>
<p>How can we check if something is empty <em>before</em> running the loop?</p>
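A plain truthiness check should suffice here, since an empty queryset is falsy in Django templates; no `not empty` operator is needed. A sketch:

```html
{% if things.all %}
    <h1>Things</h1>
    <ul>
    {% for thing in things.all %}
        <li>{{ thing.name }}</li>
    {% endfor %}
    </ul>
{% endif %}
```

Since `things.all` is evaluated twice here (once in the `if`, once in the `for`), wrapping the block in `{% with all_things=things.all %}` avoids a duplicate query.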
|
<python><django><django-templates><django-template-filters>
|
2023-04-02 01:23:01
| 1
| 2,725
|
BLimitless
|
75,909,672
| 16,922,748
|
Generating new a column in dataframe given value falls within a certain range of another column value
|
<p>Given the following dataframe:</p>
<pre><code>df = pd.DataFrame({'A':[random.randrange(0, 9, 1) for i in range(10000000)],
'B':[random.randrange(0, 9, 1) for i in range(10000000)]})
</code></pre>
<p>That may look like this:</p>
<pre><code> A B
0 8 3
1 3 0
2 8 4
3 6 5
4 8 2
...
</code></pre>
<p>I'd like to generate a new column called Eq. This column confirms if A and B row values fall within a certain range. If so, the number 1 is appended, if not 0 is appended.</p>
<p>If the range is 2 the result should look like this:</p>
<pre><code> A B Eq
0 8 3 0
1 3 0 0
2 8 4 0
3 6 5 1
4 8 2 0
...
</code></pre>
<pre><code>Essentially:
8 does NOT fall in range of (3-2,3+2)
3 does NOT fall in range of (0-2,0+2)
6 DOES fall in range of (5-2,5+2)
</code></pre>
<p>In my first attempt, I wrote a simple function to apply to each row of the df.</p>
<pre><code>def CountingMatches(row, range_limit):
if row['A'] in range (-range_limit + row['B'], range_limit+row['B']):
return 1
else:
return 0
</code></pre>
<pre><code>df['Eq'] = df.apply(CountingMatches, axis=1, range_limit=3)
</code></pre>
<p>This worked but took an incredibly long time, so kinda useless.</p>
<p>I then used my function with swifter <a href="https://towardsdatascience.com/speed-up-your-pandas-processing-with-swifter-6aa314600a13" rel="nofollow noreferrer">https://towardsdatascience.com/speed-up-your-pandas-processing-with-swifter-6aa314600a13</a></p>
<pre><code>df['Eq'] = df.swifter.apply(CountingMatches, axis=1, range_limit=3)
</code></pre>
<p>This also took a really long time.</p>
<p>I then checked how long it would take to check if the columns matched, no range.</p>
<pre><code>df['Eq'] = (df['A'].astype(int) == df['B'].astype(int)).astype(int)
</code></pre>
<p>This was incredibly fast ~ 1s.</p>
<p>Given this hopeful result, I tried to incorporate the ranges.</p>
<pre><code>range_limit=2
df['Eq'] = (df['A'].astype(int) in range(df['B'].astype(int)-range_limit,df['B'].astype(int) + range_limit)).astype(int)
</code></pre>
<p>But I get the following error, rightfully so:</p>
<pre><code>'Series' object cannot be interpreted as an integer
</code></pre>
<p>How can I efficiently complete this task on this dataframe?</p>
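The whole check vectorizes as an arithmetic comparison on the difference of the two columns, which is why it runs at the same speed as the plain equality test. One subtlety carried over from the original function: `range(B - r, B + r)` is half-open, so the faithful translation is `-r <= A - B < r`. A sketch on the sample rows:

```python
import pandas as pd

df = pd.DataFrame({'A': [8, 3, 8, 6, 8], 'B': [3, 0, 4, 5, 2]})
range_limit = 2

# range(B - r, B + r) means B - r <= A <= B + r - 1, i.e. -r <= A - B < r
diff = df['A'] - df['B']
df['Eq'] = ((diff >= -range_limit) & (diff < range_limit)).astype(int)
print(df)
```

If a symmetric window is actually intended, `(diff.abs() <= range_limit).astype(int)` is simpler still.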
|
<python><pandas><dataframe>
|
2023-04-02 01:22:27
| 2
| 315
|
newbzzs
|
75,909,654
| 8,713,442
|
Pivot data in pyspark
|
<p>I want to pivot data based on the column names. The part of a column name before <code>_</code> becomes the provider name in the output, and the remainder becomes the new column name. This is just an example, as the real-time scenario is much more complicated than this (some columns are present only for one provider).</p>
<p>Also, some columns are specific to only one provider, and I'm wondering how to keep the value as <code>null</code> for the other providers.</p>
<p>The desired output is given below. Please help in resolving this.</p>
<pre><code>import sys,os
import concurrent.futures
from concurrent.futures import *
import boto3
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.context import SparkConf
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.dynamicframe import DynamicFrame
from pyspark.sql import DataFrame
###############################
class JobBase(object):
spark=None
def __start_spark_glue_context(self):
conf = SparkConf().setAppName("python_thread")
self.sc = SparkContext(conf=conf)
self.glueContext = GlueContext(self.sc)
self.spark = self.glueContext.spark_session
def execute(self):
self.__start_spark_glue_context()
new_dict={}
print('hello')
# schema = StructType([ \
# StructField("curr_col1",StringType(),True), \
# StructField("curr_col2",StringType(),True), \
# ])
d = [{"curr_col1": '75757', "curr_col2": "fgsjdfd"}]
d = [{"v1_ind": 'A', "v1_rev": 23,"v2_ind": 'b', "v2_rev": 44}]
df = self.spark.createDataFrame(data=d)
df.show()
def main():
job = JobBase()
job.execute()
if __name__ == '__main__':
main()
+------+------+------+------+
|v1_ind|v1_rev|v2_ind|v2_rev|
+------+------+------+------+
| A| 23| b| 44|
+------+------+------+------+
required output
+---+---+--------+
|Ind|Rev|provider|
+---+---+--------+
| A| 23| V1|
| B| 44| V2|
+---+---+--------+
</code></pre>
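Whatever Spark construct ends up doing the unpivot (a `stack(...)` expression, or a union of per-provider selects), the first step is deriving, per provider, which source column feeds which output column — including providers that lack a column, which should map to null. That bookkeeping is plain Python over `df.columns`; a sketch of it (column names illustrative, with `v2` deliberately missing `ind` to show the null case):

```python
from collections import defaultdict

# Stand-in for df.columns.
columns = ['v1_ind', 'v1_rev', 'v2_rev']

groups = defaultdict(dict)
for col in columns:
    provider, field = col.split('_', 1)   # 'v1_ind' -> ('v1', 'ind')
    groups[provider.upper()][field] = col

all_fields = sorted({f for g in groups.values() for f in g})

# Per provider: the source column for each output field, or None.
# A None entry would become e.g. F.lit(None).alias(field) in that
# provider's select before unioning the per-provider rows.
plan = {p: {f: g.get(f) for f in all_fields} for p, g in groups.items()}
print(plan)
```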
|
<python><apache-spark><pyspark>
|
2023-04-02 01:16:04
| 2
| 464
|
pbh
|
75,909,635
| 14,154,784
|
Render Submit Button in Same Row as Form Field in Django Crispy Forms
|
<p>I'm using Django Crispy Forms, and rather than have the Submit button render below the rest of the fields, I want to move it to the same row as another field. My current Form code follows:</p>
<pre><code>class SetForm(forms.ModelForm):
class Meta:
model = Set
fields = ['exercise', 'actual_weight', 'actual_reps', 'actual_difficulty']
helper = FormHelper()
helper.form_method = 'POST'
helper.layout = Layout(
Row(
Column('exercise', css_class='form-group col-md-12 mb-0'),
css_class='form-row'
),
Row(
Column('actual_weight', css_class='form-group col-6 mb-0'),
Column('actual_reps', css_class='form-group col-6 mb-0'),
),
Row(
Column('actual_difficulty', css_class='form-group col-6 mb-0'),
Column(helper.add_input(Submit('submit', 'Submit', css_class='form-group btn-primary col-6 mb-0'))),
)
)
</code></pre>
<p>This doesn't work though, the Submit button is still on its own row below the form, though the <code>col-6</code> class does appear to be applied.</p>
<p>I tried looking at <a href="https://stackoverflow.com/questions/38181266/how-to-render-some-of-the-fields-of-a-django-form-in-the-same-row">this question</a>, but it neither has answers nor uses Django Crispy Forms, as well as <a href="https://stackoverflow.com/questions/15014810/django-crispy-forms-have-field-and-button-on-same-row">this one</a>, but that one is focused on prepended text and it's not straightforward to modify the answers for this use case. Help please!</p>
|
<python><django><django-forms><django-templates><django-crispy-forms>
|
2023-04-02 01:02:04
| 1
| 2,725
|
BLimitless
|
75,909,606
| 8,342,978
|
How to add Azure Digital Twins Data Owner Role via Azure Python SDK
|
<p>Using the Azure Python SDK, I have been able to instantiate a resource group and a digital twin within using the following code:</p>
<pre class="lang-py prettyprint-override"><code>from azure.identity import AzureCliCredential, DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.digitaltwins import AzureDigitalTwinsManagementClient
credential = DefaultAzureCredential()
subscription_id="some UUID" # not sure if safe to reveal, so removed it
resource_client = ResourceManagementClient(
credential, subscription_id=subscription_id)
resource_group_name = "Tutorial-RG"
rg_result = resource_client.resource_groups.create_or_update(
resource_group_name, {"location": "westeurope"}
)
client = AzureDigitalTwinsManagementClient(
credential=DefaultAzureCredential(),
subscription_id=subscription_id,
)
dt_resource_name = "myDigitalTwinsService"
response = client.digital_twins.begin_create_or_update(
resource_group_name=rg_result.name,
resource_name = dt_resource_name,
digital_twins_create={"location": "westeurope"},
).result()
print(response)
# ...
# 'provisioning_state': 'Succeeded',
# ...
</code></pre>
<p>I know that I need to add the 'Azure Digital Twins Data Owner' role before being able to manipulate it using the Azure Digital Twins Python SDK. I can do that using the Azure CLI as follows:</p>
<pre class="lang-bash prettyprint-override"><code>>>> az dt role-assignment create --dt-name myDigitalTwinsService --assignee "my UUID" --role "Azure Digital Twins Data Owner" --debug
</code></pre>
<p>But I am unable to add the same role using the Azure Authorization Management Client. So far I have tried the code below:</p>
<pre class="lang-py prettyprint-override"><code>from azure.mgmt.authorization.models import RoleAssignmentCreateParameters
from azure.mgmt.authorization import AuthorizationManagementClient
authorization_client = AuthorizationManagementClient(
credential=DefaultAzureCredential(),
subscription_id=subscription_id,
)
adt_data_owner_role_id ='bcd981a7-7f74-457b-83e1-cceb9e632ffe'
role_def_id = f'/subscriptions/{subscription_id}/providers/Microsoft.Authorization/roleDefinitions/{adt_data_owner_role_id}'
authorization_client.role_assignments.create(
scope=SCOPE,
role_assignment_name=f"/subscriptions/{subscription_id}/resourceGroups/Tutorial-RG/providers/Microsoft.DigitalTwins/digitalTwinsInstances/myDigitalTwinsService/providers/Microsoft.Authorization/roleAssignments/60252f13-5e5a-4686-8265-3ac2db6443f1",
parameters=RoleAssignmentCreateParameters(
role_definition_id= role_def_id,
principal_id= 'my UUID',
principal_type="User",
)
)
</code></pre>
<p>I have taken the parameters from the <code>az</code> call mentioned above by passing the <code>--debug</code> flag.
But I get the following error:</p>
<pre><code>HttpResponseError: (NoRegisteredProviderFound) No registered resource provider found for location 'westeurope' and API version '2022-04-01' for type 'digitalTwinsInstances'. The supported api-versions are '2023-01-31, 2022-10-31, 2022-05-31, 2021-06-30-preview, 2020-12-01, 2020-10-31, 2020-03-01-preview'. The supported locations are 'westcentralus, westus2, northeurope, australiaeast, westeurope, eastus, southcentralus, southeastasia, uksouth, eastus2, westus3, japaneast, koreacentral, qatarcentral'.
Code: NoRegisteredProviderFound
Message: No registered resource provider found for location 'westeurope' and API version '2022-04-01' for type 'digitalTwinsInstances'. The supported api-versions are '2023-01-31, 2022-10-31, 2022-05-31, 2021-06-30-preview, 2020-12-01, 2020-10-31, 2020-03-01-preview'. The supported locations are 'westcentralus, westus2, northeurope, australiaeast, westeurope, eastus, southcentralus, southeastasia, uksouth, eastus2, westus3, japaneast, koreacentral, qatarcentral'.
</code></pre>
<p>Even changing the location to one of the supported regions doesn't help, despite what the error message suggests.
Changing the API version doesn't work either; I just get a different error:</p>
<pre><code>authorization_client = AuthorizationManagementClient(
credential=DefaultAzureCredential(),
subscription_id=subscription_id,
api_version = '2022-05-31'
)
# same everything else
# ValueError: API version 2022-05-31 does not have operation group 'role_assignments'
</code></pre>
<p>How do I fix this error? Or is the action that I want to perform simply not supported by the Azure Python SDK at present?</p>
<p>The versions of the azure SDK that I am using are as follows:
Generated using <code>pip list --format=freeze | grep azure</code>:</p>
<pre><code>azure-common==1.1.28
azure-core==1.26.2
azure-digitaltwins-core==1.2.0
azure-identity==1.12.0
azure-mgmt-authorization==3.0.0
azure-mgmt-core==1.3.2
azure-mgmt-digitaltwins==6.4.0
azure-mgmt-resource==22.0.0
</code></pre>
|
<python><azure><azure-digital-twins>
|
2023-04-02 00:52:13
| 1
| 765
|
cozek
|
75,909,573
| 12,103,619
|
Can't pip install tensorflow with python 3.8 64bit
|
<p>I have a conda config file and I can't seem to create an environment with tensorflow and python=3.8</p>
<p>Here is my config file</p>
<pre><code>name: proj1
channels:
- defaults
dependencies:
- python=3.7
- pip
- pip:
- matplotlib
- tensorflow
- tensorflow-probability
</code></pre>
<p>Here is the error message :</p>
<pre><code>Pip subprocess error:
ERROR: Could not find a version that satisfies the requirement tensorflow (from versions: none)
ERROR: No matching distribution found for tensorflow
failed
CondaEnvException: Pip failed
</code></pre>
<p>I am on macOS 12.6 with an M2 chip and my Python is 64-bit.
<code>pip --version</code> gives pip 23.0.1 and <code>pip3 --version</code> gives pip 22.2.2<br />
<code>python --version</code> gives Python 3.8.16 and <code>python3 --version</code> gives Python 3.10.6</p>
<p>Thanks a lot for helping</p>
|
<python><tensorflow><pip><conda>
|
2023-04-02 00:39:15
| 0
| 394
|
Aydin Abiar
|
75,909,497
| 12,103,619
|
ResolvePackageNotFound on python dependency when building conda environment from config file
|
<p>I am working on an old project and I am trying to build a conda environment based on a config file.
It used to work well a few months ago, but now I run into some issues. Here is my file:</p>
<pre><code>
name: proj1
channels:
- defaults
dependencies:
- python=3.7
- pip
- pip:
- matplotlib
- tensorflow
- tensorflow-probability
</code></pre>
<p>Now if I run <code>conda env create -f conda_env.yml</code> I get the error :</p>
<pre><code>Collecting package metadata (repodata.json): done
Solving environment: failed
ResolvePackageNotFound:
- python=3.7
</code></pre>
<p>Why is that? I've set up many conda environments this way, but this is the first time I've run into this.</p>
<p>Thanks a lot</p>
|
<python><conda>
|
2023-04-02 00:06:15
| 0
| 394
|
Aydin Abiar
|
75,909,386
| 14,503,336
|
VSCode not detecting functions from wildcard import
|
<p>I have some Python files in a directory.</p>
<pre><code>my-project/
utils/
a.py
b.py
__init__.py
main.py
</code></pre>
<p>They all run from <code>main.py</code>, but files inside the <code>utils</code> folder all import from one another.</p>
<pre><code># a.py
from .b import *
function_from_b()
</code></pre>
<p>The problem is, VSCode seems to have no clue where I'm importing from.</p>
<p>Specifically all my local imports, just lead to a bunch of annoying <code>reportUndefinedVariable</code> warnings in my problems window and distracting yellow underlines on virtually all my functions. VSCode thinks every function is undefined, even though my local import is there.</p>
<p>It's really frustrating because my code works fine. I run everything from my <code>main.py</code> file and it works, but I still get the undefined-variable warnings when importing anything from inside the <code>utils</code> folder. Even debugging works from <code>main.py</code>. The only time it doesn't work is when I run individual files directly from the <code>utils</code> folder, like running a current-file debug or running from the terminal (for example <code>$ python a.py</code> from inside <code>utils</code>). Then I get the following error:</p>
<pre><code>ImportError: attempted relative import with no known parent package
</code></pre>
<p>This seems like a VSCode issue, so if possible I'd like to keep my code unchanged. My issue is not the code, but rather the countless annoying warnings from VSCode about a problem that doesn't exist. The example files I provided above are just a general outline of my problem, so if extra context about the specifics of my project is needed, I'm happy to provide it.</p>
|
<python><python-3.x><visual-studio-code><import><python-import>
|
2023-04-01 23:26:26
| 1
| 599
|
Anonyo Noor
|
75,909,156
| 219,976
|
FastAPI - why can the server handle several requests at the same time in synchronous mode?
|
<p>I run the following program:</p>
<pre><code>import time
from datetime import datetime
from fastapi import FastAPI
app = FastAPI()
@app.get("/")
def root():
print(f"Started at {datetime.now()}")
time.sleep(30)
print(f"Executed at {datetime.now()}")
return {"message": "Hello World"}
</code></pre>
<p>with uvicorn.
Then I open <a href="http://127.0.0.1:8000/" rel="nofollow noreferrer">http://127.0.0.1:8000/</a> in two different browser tabs within a short period of time. The output is like this:</p>
<pre><code>Started at 2023-04-01 23:40:53.668811
Started at 2023-04-01 23:41:16.992891
Executed at 2023-04-01 23:41:23.779460
INFO: 127.0.0.1:54310 - "GET / HTTP/1.1" 200 OK
Executed at 2023-04-01 23:41:47.248950
INFO: 127.0.0.1:54311 - "GET / HTTP/1.1" 200 OK
</code></pre>
<p>Why does the second "Started" come before the first "Executed" even though <code>root</code> is not <code>async</code>?</p>
|
<python><asynchronous><fastapi><uvicorn>
|
2023-04-01 22:24:24
| 1
| 6,657
|
StuffHappens
|
75,909,141
| 19,051,091
|
How to improve a PyTorch model with 4 classes?
|
<p><strong>Edit Update:</strong></p>
<p>After I set <code>Batch_Size = 128</code> and added a new layer:</p>
<pre><code> self.conv_block3 = nn.Sequential(nn.Conv2d(in_channels=hidden_units,out_channels=hidden_units,kernel_size=3,stride=1,padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=hidden_units, out_channels=hidden_units, kernel_size=3,stride=1,padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
)
</code></pre>
<p>I noticed something weird: after 2 epochs, the train and test loss look like this:</p>
<pre><code>Epoch: 1 | train_loss: 1.0392 | train_acc: 0.5169 | test_loss: 0.4287 | test_acc: 0.8576
Epoch: 2 | train_loss: 0.2700 | train_acc: 0.9096 | test_loss: 0.2665 | test_acc: 0.9110
</code></pre>
<p>I'm new to PyTorch and I'm trying to build a multiclass image classifier with 4 classes. What is wrong with my model, or how can I improve it, please?</p>
<p>1- Each class has 75 train images and 25 test images.<br />
2- I don't use any augmentation.</p>
<p>That's my input shape and full code:</p>
<h1>Create simple transform</h1>
<pre><code>simple_transform = transforms.Compose([
transforms.Grayscale(num_output_channels=1),
transforms.Resize(size=(128, 128)),
transforms.ToTensor(),
])
train_data_simple = datasets.ImageFolder(root=train_dir,
transform=simple_transform)
test_data_simple = datasets.ImageFolder(root=test_dir,
transform=simple_transform)
</code></pre>
<h1>Turn dataset into DataLoader</h1>
<pre><code>BATCH_SIZE = 32
NUM_WORKERS = 2
train_dataloader_simple = DataLoader(dataset=train_data_simple,
batch_size=BATCH_SIZE,
num_workers=NUM_WORKERS,
shuffle=True)
test_dataloader_simple = DataLoader(dataset=test_data_simple,
batch_size=BATCH_SIZE,
num_workers=NUM_WORKERS,
shuffle=False)
</code></pre>
<h1>Creating class model</h1>
<pre><code>class TingVGG(nn.Module):
def __init__(self, input_shape: int, hidden_units: int, output_shape: int) -> None:
super().__init__()
self.conv_block1 = nn.Sequential(nn.Conv2d(in_channels=input_shape,out_channels=hidden_units,kernel_size=3,stride=1,padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=hidden_units, out_channels=hidden_units, kernel_size=3,stride=1,padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
)
self.conv_block2 = nn.Sequential(nn.Conv2d(in_channels=hidden_units,out_channels=hidden_units,kernel_size=3,stride=1,padding=1),
nn.ReLU(),
nn.Conv2d(in_channels=hidden_units, out_channels=hidden_units, kernel_size=3,stride=1,padding=1),
nn.ReLU(),
nn.MaxPool2d(kernel_size=2, stride=2)
)
self.classifier = nn.Sequential(nn.Flatten(), nn.Linear(in_features=hidden_units*32*32 ,out_features=output_shape))
def forward(self, x: torch.Tensor):
x = self.conv_block1(x)
#print(x.shape)
x = self.conv_block2(x)
#print(x.shape)
x = self.classifier(x)
#print(x.shape)
return x
</code></pre>
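<p>For reference, the <code>in_features=hidden_units*32*32</code> in the classifier comes from the input size: two 2x2 max-pools reduce a 128x128 input to 32x32. A quick sanity check I used (a minimal sketch, separate from the training code):</p>

```python
import torch
import torch.nn as nn

# Two MaxPool2d(kernel_size=2, stride=2) layers halve the spatial size twice:
# 128 -> 64 -> 32, so the flattened size is hidden_units * 32 * 32.
x = torch.randn(1, 1, 128, 128)
pool = nn.Sequential(nn.MaxPool2d(2, 2), nn.MaxPool2d(2, 2))
print(pool(x).shape)  # torch.Size([1, 1, 32, 32])
```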
<h1>Create and initialize TinyVGG</h1>
<pre><code>model_0 = TingVGG(input_shape=1, # Number of channels in the input image (c, h, w) -> 3
hidden_units=20,
output_shape=len(train_data.classes)).to(device)
</code></pre>
<h1>Setup the loss function and optimizer</h1>
<pre><code>loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(params= model_0.parameters(),
lr= 0.001)
</code></pre>
<p>I attach the per-epoch loss and accuracy as screenshots.
Thanks.<a href="https://i.sstatic.net/t0ZqC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t0ZqC.png" alt="Loss and accuracy" /></a> <a href="https://i.sstatic.net/zKwpE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zKwpE.png" alt="epochs" /></a></p>
|
<python><pytorch>
|
2023-04-01 22:18:38
| 2
| 307
|
Emad Younan
|
75,909,139
| 11,462,274
|
Correct formatting to send a list in Python as a parameter to the Google Web App in order to send the data to Google Sheets
|
<p>I have Python code in which I want to pass a pandas DataFrame as a parameter, and a URL as another parameter, in my request to the Google Apps Script Web App:</p>
<pre class="lang-python prettyprint-override"><code>import pandas as pd
import requests
UAGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36'
headers = {
'User-Agent': UAGENT
}
df = pd.DataFrame({'Col A': [1,2,3,4,5],'Col B': ['A','B','C','D','E']})
df_list = df.values.tolist()
url = 'https://stackoverflow.com'
webAppsUrl = "https://script.google.com/macros/s/XXXXXXXX/exec"
params = {
'pylist': df_list,
'okgo': url
}
web_app_response = requests.get(
webAppsUrl, headers=headers, params=params, timeout=360
)
</code></pre>
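<p>For what it's worth, this is roughly how the nested list ends up encoded in the query string (a quick illustration with the standard library's <code>urlencode</code>, which may be part of the problem):</p>

```python
from urllib.parse import urlencode

params = {'pylist': [[1, 'A'], [2, 'B']], 'okgo': 'https://stackoverflow.com'}
# doseq=True produces one repeated parameter per list element; each inner
# list is stringified, so the server receives strings like "[1, 'A']"
encoded = urlencode(params, doseq=True)
print(encoded)
```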
<p>In my Web App code I tried to send it to the spreadsheet like this:</p>
<pre class="lang-javascript prettyprint-override"><code>function doGet(e) {
var pylist = e.parameter.pylist;
var sheet = SpreadsheetApp.getActive().getSheetByName('All Python');
sheet.getRange(1, 1, pylist.length, pylist[0].length).setValues(pylist);
</code></pre>
<p>My expected result in Google Sheets <code>All Python</code> page:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>1</th>
<th>A</th>
</tr>
</thead>
<tbody>
<tr>
<td>2</td>
<td>B</td>
</tr>
<tr>
<td>3</td>
<td>C</td>
</tr>
<tr>
<td>4</td>
<td>D</td>
</tr>
<tr>
<td>5</td>
<td>E</td>
</tr>
</tbody>
</table>
</div>
<p>But I received a <code>Bad Request Error 400</code> when making the request.</p>
<p>In this case, what is the correct way to achieve what I want?</p>
|
<python><google-apps-script><google-sheets>
|
2023-04-01 22:17:57
| 1
| 2,222
|
Digital Farmer
|
75,909,124
| 7,764,497
|
Building correct response to login to Twitter
|
<p>I'm trying to login to twitter using just requests but I don't think I'm building the correct login response. I pass in the guest token but am I missing anything else? Am I using the correct URL? What am I doing wrong? And also <s>how would I tell if I successfully login this way -</s> with my current iteration I get a 200 status code but I'm 100% sure it's not from me successfully logging in.</p>
<pre><code>import requests
headers = {
'accept': '*/*',
'authorization': public_bearer,
'content-type': 'application/json',
'referer': 'https://twitter.com/',
'user-agent': user_agent,
'x-guest-token': guest_token
}
data = {
'username': username,
'password': password
}
r = requests.post('https://twitter.com/login', headers=headers, data=data)
print(r.status_code)
print(r.headers)
</code></pre>
<p>Edit: the question is more about how the request flow is supposed to be built.</p>
|
<python><web-scraping><twitter><python-requests>
|
2023-04-01 22:14:26
| 1
| 356
|
hwhat
|
75,909,116
| 1,082,410
|
Python: how to identify if an instance has been changed without making a copy?
|
<p>I'm trying to write a function to determine if an object has been modified at one point during the execution of the program.</p>
<p>I don't want to duplicate the object because this will take a lot of memory.</p>
<p>My object is a dataclass and has a few lists of dataclasses that might have nested dataclasses within them, but at the bottom level you'll only find primitive variables (str, ints, bool, ...)</p>
<p>Since these objects need to be modifiable I can't use <code>frozen=True</code>. What I've come up so far is <code>hash(str(self)) == PreviousHash</code> but this starts slowing down greatly as the amount of data increases.</p>
<p>What would you do to be able to get a "hash" of a dataclass instance like this without having to do a slow conversion to a string first?</p>
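<p>To make the current approach concrete, here is a minimal sketch of what I am doing now (simplified, with made-up field names):</p>

```python
from dataclasses import dataclass, field

@dataclass
class Inner:
    value: int = 0

@dataclass
class Outer:
    name: str = "x"
    items: list = field(default_factory=lambda: [Inner(), Inner()])

obj = Outer()
previous_hash = hash(str(obj))   # snapshot before any modification
obj.items[0].value = 42          # mutate a nested field
changed = hash(str(obj)) != previous_hash
print(changed)  # True
```

<p>This works because the dataclass <code>repr</code> recurses into nested dataclasses, but the <code>str()</code> conversion is exactly the slow part as the object grows.</p>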
|
<python><hash>
|
2023-04-01 22:11:50
| 3
| 879
|
Tolure
|
75,909,103
| 5,074,226
|
Tflite-model-maker is downloading several files endlessly
|
<p>I'm trying to install TensorFlow using this <a href="https://www.tensorflow.org/lite/models/modify/model_maker/image_classification" rel="nofollow noreferrer">tutorial</a>. So, when I run the following command on my terminal:</p>
<pre><code>$ pip install -q tflite-model-maker
</code></pre>
<p>This command starts to download several files, but the process never ends. I have an Ubuntu machine and I don't know how to resolve this. The command has already downloaded more than 50 GB of files, completely filling my SSD.</p>
<p><a href="https://i.sstatic.net/oTiXp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTiXp.png" alt="enter image description here" /></a></p>
<p>I've tried some solutions from the following links, but I couldn't find one that works.</p>
<p><a href="https://github.com/tensorflow/tensorflow/issues/51031" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/51031</a></p>
<p>I've found a similar question, but without an answer:
<a href="https://stackoverflow.com/questions/65145690/what-causes-tf-nightly-to-download-every-nights-repo-in-github-actions">What causes tf-nightly to download every nights repo in github actions?</a></p>
|
<python><bash><tensorflow><ubuntu><installation>
|
2023-04-01 22:08:42
| 3
| 364
|
Ítalo De Pontes Oliveira
|
75,908,794
| 2,312,801
|
Split a csv file into multiple files based on a pattern
|
<p>I have a csv file with the following structure:</p>
<pre><code>time,magnitude
0,13517
292.5669,370
620.8469,528
0,377
832.3269,50187
5633.9419,3088
20795.0950,2922
21395.6879,2498
21768.2139,647
21881.2049,194
0,3566
292.5669,370
504.1510,712
1639.4800,287
46709.1749,365
46803.4400,500
</code></pre>
<p>I'd like to split this csv file into separate csv files, like the following:</p>
<p>File 1:</p>
<pre><code>time,magnitude
0,13517
292.5669,370
620.8469,528
</code></pre>
<p>File 2:</p>
<pre><code>time,magnitude
0,377
832.3269,50187
5633.9419,3088
20795.0950,2922
21395.6879,2498
</code></pre>
<p>and so on..</p>
<p>I've read several similar posts (e.g., <a href="https://stackoverflow.com/questions/9489078/how-to-split-a-huge-csv-file-based-on-content-of-first-column?rq=3">this</a>, <a href="https://stackoverflow.com/questions/9951393/split-large-csv-text-file-based-on-column-value?rq=3">this</a>, or <a href="https://stackoverflow.com/questions/46847803/splitting-csv-file-based-on-a-particular-column-using-python">this one</a>), but they all search for specific values in a column and save each group of values into a separate file. However, in my case, the values of the time column are not the same. I'd like to split based on a condition: <code>If time = 0, save that row and all subsequent rows in a new file until the next time = 0</code>.</p>
<p>Can someone please let me know how to do this?</p>
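<p>To illustrate the condition I mean, here is a small pandas sketch of the split logic (just a restatement of the rule on a tiny sample, not necessarily the best way to do it):</p>

```python
import pandas as pd
from io import StringIO

csv_text = """time,magnitude
0,13517
292.5669,370
620.8469,528
0,377
832.3269,50187
"""

df = pd.read_csv(StringIO(csv_text))
# Every row where time == 0 starts a new block; cumsum turns that into a group id
group_id = (df["time"] == 0).cumsum()
parts = [g for _, g in df.groupby(group_id)]
print(len(parts))  # 2 blocks in this small sample
# each block could then be written out with g.to_csv(f"file_{i}.csv", index=False)
```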
|
<python><csv><unix><split>
|
2023-04-01 20:53:07
| 4
| 2,459
|
mOna
|
75,908,335
| 16,383,578
|
What is a better way to get filenames of downloaded mods from nexusmods download history?
|
<p>I have downloaded many mods from NexusMods, currently my download history tells me that I have downloaded 150 different mods. Because I have downloaded so many mods I literally have no idea which file is from which mod for which game...</p>
<p>If you have used NexusMods, you know its download history tab only shows you the name of each mod you have downloaded (and a hyperlink to it); it doesn't tell you the actual filename of the file you downloaded, requiring you to go to the mod page to figure it out.</p>
<p>In the file tab of the mod page a list of descriptive names for the files available for downloading is displayed, but they are not the actual filenames of the files that you can download, you have to click <kbd>Manual Download</kbd> to actually see the real filenames.</p>
<p>There aren't necessarily any clear relationships among the game name, mod name, descriptive name and actual filename, and it can be extremely hard to know which file is for which mod for which game from the filename alone. Sometimes I can use Google to find out, but often Google doesn't help.</p>
<p>I tried Google searching for a way to resolve a downloaded filename to a mod name, and found nothing relevant, which is totally not surprising. So I decided to post a question here, and, because I am supposed to show my research effort but have no idea how much is enough, I spent the last few hours writing a completely working but very inefficient solution just to demonstrate it.</p>
<p>I used Python + Selenium + Firefox in this, the basic idea is to get the links to the downloaded mods from the download history, then visit every single mod page in a web browser, and click <kbd>Manual Download</kbd> to see the actual filenames. You can see all the steps in all their glorious details in my code below.</p>
<p>You will need this <a href="https://chrome.google.com/webstore/detail/get-cookiestxt-locally/cclelndahbckbenkjhflpdbgdldlbecc/related" rel="nofollow noreferrer">Chrome extension</a> and obviously a NexusMods account to test my script, you have to go to <a href="https://www.nexusmods.com" rel="nofollow noreferrer">https://www.nexusmods.com</a> in Chrome, login your account, and use the extension linked above to export the cookies to a .json file, then change the file path in my script to point to that file to login your account in selenium. Because you can't sign in using <a href="https://users.nexusmods.com/auth/sign_in" rel="nofollow noreferrer">https://users.nexusmods.com/auth/sign_in</a> in selenium, believe me, I tried.</p>
<p>Then you may also want to use the Firefox developer tools Network tab to check which advertisement services, trackers, injectors and whatnot are loaded when loading the website. This is optional, but disabling them can speed up the execution of the script massively.</p>
<p>I went to <a href="https://www.nexusmods.com" rel="nofollow noreferrer">https://www.nexusmods.com</a> and exported the requests to a .json file (save all as HAR), I then went to <a href="https://www.nexusmods.com/users/myaccount?tab=download+history" rel="nofollow noreferrer">https://www.nexusmods.com/users/myaccount?tab=download+history</a> and exported the requests to another .json file, and finally I went to a mod page to export the requests to yet another .json file, I then got all the unique domain names from all three files using the following:</p>
<pre class="lang-py prettyprint-override"><code>import json
from pathlib import Path
domains = {e['request']['url'].replace('https://', '').split('/')[0] for e in json.loads(Path('D:/data.json').read_text(encoding='utf8'))['log']['entries']}
domains |= {e['request']['url'].replace('https://', '').split('/')[0] for e in json.loads(Path('D:/data1.json').read_text(encoding='utf8'))['log']['entries']}
domains |= {e['request']['url'].replace('https://', '').split('/')[0] for e in json.loads(Path('D:/data2.json').read_text(encoding='utf8'))['log']['entries']}
print('\n'.join(sorted(domains)))
</code></pre>
<p>And I found 150 bad websites and updated %windir%\system32\drivers\etc\hosts accordingly, <a href="https://drive.google.com/file/d/1aP3beRPqCLqypWg8kOomYNK5qdWm_dt6/view?usp=share_link" rel="nofollow noreferrer">the changes made</a></p>
<p>And this is my script:</p>
<pre class="lang-py prettyprint-override"><code>import json
import time
from pathlib import Path
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException, TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.firefox.options import Options
options = Options()
options.add_argument("--log-level=3")
options.add_argument("--mute-audio")
options.add_argument("--no-sandbox")
options.add_argument("--disable-dev-shm-usage")
options.set_preference("http.response.timeout", 1)
options.set_preference('permissions.default.stylesheet', 2)
options.set_preference('permissions.default.image', 2)
options.set_preference('dom.ipc.plugins.enabled.libflashplayer.so', 'false')
options.set_capability('pageLoadStrategy', 'eager')
Firefox = webdriver.Firefox(options=options)
wait = WebDriverWait(Firefox, 3)
def scrape_files(mod_url):
result = {
'url': mod_url,
'files': dict()
}
Firefox.get(mod_url+'?tab=files')
wait.until(EC.visibility_of_element_located((By.XPATH, '//dt[contains(@id, "file-expander-header")]')))
filelinks = dict()
for i, e in enumerate(Firefox.find_elements(by='xpath', value='//dt[contains(@id, "file-expander-header")]')):
try:
e.find_element(by='xpath', value='./div/i')
except NoSuchElementException:
continue
name = e.get_attribute('data-name')
try:
link = Firefox.find_element('xpath', f'(//dd[@class="clearfix open"])[{i+1}]//a[contains(@class, "btn inline-flex")]').get_attribute('href').replace('&nmm=1', '')
except NoSuchElementException:
Firefox.find_element(by='xpath', value=f'(//div[@class="acc-status"])[{i+1}]').click()
link = Firefox.find_elements('xpath', '//dd[@class="clearfix open"]//a[contains(@class, "btn inline-flex")]')[-1].get_attribute('href').replace('&nmm=1', '')
finally:
filelinks[name] = link
for name, link in filelinks.items():
requirements = None
Firefox.get(link)
if 'ModRequirementsPopUp' in link:
required = {e.find_element(by='xpath', value='./span').text: e.get_attribute('href') for e in Firefox.find_elements(by='xpath', value='//div[@class="mod-requirements-tab-content"]/ul/li/a')}
modlink = Firefox.find_element('xpath', '//div[@class="mod-requirements-tab-content"]/a[@class="btn"]').get_attribute('href')
if required:
requirements = {k: scrape_files(v) for k, v in required.items() if v.startswith('https://www.nexusmods.com/')}
if not requirements:
requirements = None
Firefox.get(modlink)
file = wait.until(EC.visibility_of_element_located((By.XPATH, '//div[@class="header"]'))).text.splitlines()[0]
result['files'][name] = {'requirements': requirements, 'filename': file}
return result
Firefox.get('https://www.nexusmods.com/')
for i in json.loads(Path('D:/cookies.json').read_text()):
Firefox.add_cookie({'name': i['name'], 'value': i['value']})
Firefox.get('https://www.nexusmods.com/users/myaccount?tab=download+history')
time.sleep(10)
mods = dict()
while True:
for mod in Firefox.find_elements(by='xpath', value="//div[@class='tracking-title']/a"):
name = mod.text
url = mod.get_attribute('href')
mods[name] = url
next_button = Firefox.find_element(by='xpath', value="//div/a[contains(@class, 'paginate_button next')]")
if next_button.get_attribute('class') != 'paginate_button next disabled':
next_button.click()
else:
break
modfiles = dict()
for name, url in mods.items():
game = url.split('/')[3]
if not modfiles.get(game):
modfiles[game] = {
name: scrape_files(url)
}
else:
modfiles[game][name] = scrape_files(url)
Path('D:/nexusmods_downloaded.json').write_text(json.dumps(modfiles, ensure_ascii=False, indent=4), encoding='utf8')
</code></pre>
<p>To prove to you my script is 100% working, you can see the output <a href="https://drive.google.com/file/d/1jGX_C5KenwbYBoy3kiaoaVJBvL-1aYP1/view?usp=share_link" rel="nofollow noreferrer">here</a></p>
<p>The task is deceptively hard, mainly because I can't keep references to the web elements or they will all become stale; instead I have to get the links first and use them directly. It took me several hours to complete, but I completed it all by myself without anyone's help, not even from Google.</p>
<p>Is there a better way to get the information like the attached output? Like maybe an API?</p>
<hr />
<hr />
<h2>Update</h2>
<hr />
<p>I have investigated the XMLHttpRequests and made a little progress, but I still have to rely largely on slow UI scraping.</p>
<p>I was able to find the XHR for getting download history but that's about it. I managed to get some <a href="https://drive.google.com/file/d/18xpgVi6pI77bG2AA8VvXaT6WObULD_Tp/view?usp=share_link" rel="nofollow noreferrer">data</a> using the following:</p>
<pre class="lang-py prettyprint-override"><code>import json
import requests
from pathlib import Path
cookies = json.loads(Path('D:/nexusmods_account_cookies.json').read_text())
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/111.0.0.0 Safari/537.36',
}
response = requests.get(url='https://www.nexusmods.com/Core/Libs/Common/Managers/Mods?GetDownloadHistory', cookies=cookies, headers=headers)
Path('D:/downloaded_nexusmods.json').write_text(json.dumps(json.loads(response.content.decode()), ensure_ascii=False, indent=4), encoding='utf8')
</code></pre>
<p>If you want to test my code you need the following values from your nexusmods cookies:</p>
<pre><code>{
"_app_session": "something",
"fwroute": "something",
"jwt_fingerprint": "something",
"member_id": "something",
"pass_hash": "something",
"sid_develop": "something"
}
</code></pre>
<p>And change the file path accordingly.</p>
<p>I found exactly 150 entries in it, corresponding to the 150 mods I have downloaded, but I have downloaded 169 unique files, and the entries don't show the names of the files, only the file id of the latest downloaded file.</p>
<p>I found f'https://www.nexusmods.com/{game_name}/mods/{mod_id}?tab=files' corresponds to f'https://www.nexusmods.com/Core/Libs/Common/Widgets/ModFilesTab?id={mod_id}&game_id={game_id}', but that page has a UI and doesn't expose raw data... (For example: <a href="https://www.nexusmods.com/newvegas/mods/57411?tab=files" rel="nofollow noreferrer">https://www.nexusmods.com/newvegas/mods/57411?tab=files</a> vs <a href="https://www.nexusmods.com/Core/Libs/Common/Widgets/ModFilesTab?id=57411&game_id=130" rel="nofollow noreferrer">https://www.nexusmods.com/Core/Libs/Common/Widgets/ModFilesTab?id=57411&game_id=130</a>).</p>
<p>And I still have to click manual download to see the actual filenames, and getting to that page doesn't involve XHR at all...</p>
<p>So I am currently stuck, but I will keep searching.</p>
<hr />
<h2>Update 2</h2>
<p>I have managed to find this: NexusMods indeed does have an <a href="https://app.swaggerhub.com/apis-docs/NexusMods/nexus-mods_public_api_params_in_form_data/1.0" rel="nofollow noreferrer">API</a>, and you need to go to <a href="https://www.nexusmods.com/users/myaccount?tab=api+access" rel="nofollow noreferrer">this page</a> to get your own apikey.</p>
<p>I managed to get the filelist for the mods like this:</p>
<pre><code>import json
import requests
apikey="your_api_key"
headers = {
'accept': 'application/json',
'apikey': apikey
}
response = requests.get(url='https://api.nexusmods.com/v1/games/newvegas/mods/57411/files.json', headers=headers)
json.loads(response.content)
</code></pre>
<p>The response contains the file id, filename and descriptive name, but doesn't tell me whether I downloaded a file or not, nor, more importantly, the mod dependencies.</p>
<p>And I also found out the download history I obtained via XHR doesn't contain any reference to the downloaded files at all, the first two numbers are probably timestamps and I have confirmed they aren't fileids.</p>
|
<python><python-3.x><selenium-webdriver><web-scraping><firefox>
|
2023-04-01 19:14:39
| 1
| 3,930
|
Ξένη Γήινος
|
75,908,271
| 262,875
|
How to make a Cog a prefix command group?
|
<p>I'm trying to make a Cog the "root group" for the commands it provides. The following is a pseudo example and doesn't actually work, but hopefully illustrates what I am trying to do:</p>
<pre><code>from discord.ext import commands
class MyCog(commands.GroupCog, name='cog'):
def __init__(self, bot):
self.bot = bot
@MyCog.command()
async def cmd1(self, ctx):
await ctx.reply('cmd1')
@MyCog.command()
async def cmd2(self, ctx):
await ctx.reply('cmd2')
</code></pre>
<p>I tried a couple of variations of this that are syntactically correct, but wasn't able to get any of them to work exactly how I wanted. Here are two main constraints:</p>
<ol>
<li>I'd like to be able to invoke the commands like follows:</li>
</ol>
<pre><code>!cog cmd1
!cog cmd2
</code></pre>
<ol start="2">
<li>I want the cog/command group to present in the help command like follows:</li>
</ol>
<pre><code>cog
cmd1 Cmd1 description
cmd2 Cmd2 description
someothercog
cmd1 Cmd1 description
cmd2 Cmd1 description
</code></pre>
<p>Now I know that instead of using <code>GroupCog</code> I could just define a regular command group within the <code>Cog</code> using the <code>@commands.group()</code> decorator and call it <code>cog</code> to get the invocation signature I want, but when I use the default help command with that setup, I would get an output like</p>
<pre><code>cog
cog Cog group description
someothercog
someothercog SomeOtherCog group description
</code></pre>
<p>It would treat the cogs, not the groups, as "root elements", and each cog would only have a single group. It would be much more useful if it treated the groups as root elements; that's essentially what I am trying to achieve.</p>
<p>Is there a way to do this using either <code>GroupCog</code> (which appears to be intended only for application commands, but not for prefix commands) or some other way? Or am I thinking about this completely wrong? Maybe I should explore writing my own help command instead to get the formatting and output I am looking for?</p>
<p><strong>Edit</strong>:</p>
<p>This is how my !help menu currently looks like:</p>
<p><a href="https://i.sstatic.net/0yNKS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0yNKS.png" alt="Current Help Menu" /></a></p>
<p>Each of those commands under each of those cogs/categories is a Group. I would like to list the commands contained within those groups under the category instead of the single Group command. I now believe the best way for me to achieve this is to simply alter how <code>DefaultHelpCommand</code> prints the bot help. I was trying to achieve that by inheriting from it and overriding the <code>get_bot_mapping</code> method, just to realize that <code>DefaultHelpCommand</code> <a href="https://github.com/Rapptz/discord.py/blob/master/discord/ext/commands/help.py#L1234-L1262" rel="nofollow noreferrer">ignores</a> the <code>mapping</code> argument passed to <code>send_bot_help</code> and constructs it's own list of cogs/commands, making this particular approach a futile exercise.</p>
|
<python><discord><discord.py>
|
2023-04-01 19:00:55
| 0
| 11,089
|
Daniel Baulig
|
75,908,169
| 1,845,408
|
Generating a column showing the number of distinct values between consecutive days
|
<p>I have a pandas dataframe with the following format:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>UserId</th>
<th>Date</th>
<th>BookId</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2022-07-15</td>
<td>10</td>
</tr>
<tr>
<td>1</td>
<td>2022-07-16</td>
<td>11</td>
</tr>
<tr>
<td>1</td>
<td>2022-07-16</td>
<td>12</td>
</tr>
<tr>
<td>1</td>
<td>2022-07-17</td>
<td>12</td>
</tr>
</tbody>
</table>
</div>
<p>From this table, what I want to obtain is the number of new BookId on each consecutive day for each user. For example, based on the table above, the user read two new books on 2022-07-16, and did not read a new book on 2022-07-17 since s/he read it already on the previous day. Here is the expected outcome:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>UserId</th>
<th>2022-07-16</th>
<th>2022-07-17</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>I feel like this task could be done by grouping the data by UserId and Date and then applying a lambda function. However, I could not manage it. I ended up with the following code, which uses a for loop. Is there a way to achieve this without a loop, with shorter code?</p>
<pre><code>def findObjDiff(df):
    print(df.StudentId.head(3))
    dataDict = {}
    dates = list(df.Date)
    dates.sort()
    for d in dates:
        ixNext = dates.index(d) + 1
        if ixNext >= len(dates):  # was '>', which let ixNext run past the list
            break
        dateNext = dates[ixNext]
        objListPrev = set(df[df.Date == d].ObjectiveId)
        objListNext = set(df[df.Date == dateNext].ObjectiveId)
        # use the scalar id, not the whole column, as the dict key
        dataDict[df.StudentId.iloc[0]] = {dateNext: {'Different': len(objListPrev - objListNext)}}
    return dataDict

df = studentAnswers.groupby('StudentId')
df.apply(findObjDiff)
</code></pre>
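A sketch of one pandas approach, built only on the sample UserId/Date/BookId table above — so it is a hypothetical illustration rather than code tested against the real StudentId/ObjectiveId data. It still iterates once per user, but the index bookkeeping is gone:

```python
import pandas as pd

# Sample data from the question's table (column names as shown there).
df = pd.DataFrame({
    "UserId": [1, 1, 1, 1],
    "Date": ["2022-07-15", "2022-07-16", "2022-07-16", "2022-07-17"],
    "BookId": [10, 11, 12, 12],
})

# One set of BookIds per (user, day); groupby sorts the ISO dates
# chronologically by default.
per_day = df.groupby(["UserId", "Date"])["BookId"].agg(set)

result = {}
for user, s in per_day.groupby(level="UserId"):
    days = s.droplevel("UserId")  # index: Date, values: set of BookIds
    # Pair each day with the previous one and count the fresh books.
    result[user] = {
        date: len(cur - prev)
        for prev, (date, cur) in zip(days, list(days.items())[1:])
    }

wide = pd.DataFrame(result).T  # rows: UserId, columns: dates
print(wide)
```

For the sample table this yields 2 for 2022-07-16 and 0 for 2022-07-17, matching the expected outcome.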
|
<python><pandas><dataframe>
|
2023-04-01 18:38:10
| 3
| 8,321
|
renakre
|
75,907,744
| 2,438,993
|
regex parsing of text and numbers blocks
|
<p>I am stuck on this regex. I would like to extract a single string for each chunk starting at AELIST and ignore the SET1 (or any other header) chunks. The + indicates a continuation of the single array. The blocks can be split by words that are all CAPS, by lines that don't end in +, or by lines that don't start with +.</p>
<p>my current attempt is</p>
<pre class="lang-py prettyprint-override"><code>import re
filestr1='''AELIST 1 5159 5160 7007 7008 7015 7016 7023+ \n+ 7024 7031 7032 7039 7040 7047 7048 7055+ \n+ 7056 7063 7064 7071 7072 7079 7080 7087+ \n+ 7088 7095 7096 7103 7104 7111 7112 7119+ \n+ 7120 7127 7128 7135 7136 7143 7144 7151+ \n+ 7152 7159 7160 7167 7168 7175 7176 7183+ \n+ 7184 7191 7192 7199 7200 7207 7208 7215+ \n+ 7216 7223 7224 7231 7232 \nSET1 2 6159 6160 9007 9008 9015 9016 9023+ \n+ 9024 9031 9032 9039 9040 9047 9048 9055+ \n+ 9056 9063 9064 9071 9072 9079 9080 9087+ \n+ 9088 9095 9096 9103 9104 9111 9112 9119+ \n+ 9120 9127 9128 9135 9136 9143 9144 9151+ \n+ 9152 9159 9160 \nAELIST 5 11017 11018 11023 11024 11029 11030 11035+ \n+ 11036 11041 11042 11047 11048 11053 11054 11059+ \n+ 11060 11065 11066 11071 11072 11077 11078 11083+ \n+ 11084 11089 11090 11095 11096 11101 11102 11107+ \n+ 11108 '''
re1 = re.findall('^[A-Z].*|^[+].*',filestr1, re.MULTILINE)
print(re1)
</code></pre>
<pre><code>['AELIST 1 5159 5160 7007 7008 7015 7016 7023+ ',
'+ 7024 7031 7032 7039 7040 7047 7048 7055+ ',
'+ 7056 7063 7064 7071 7072 7079 7080 7087+ ',
'+ 7088 7095 7096 7103 7104 7111 7112 7119+ ',
'+ 7120 7127 7128 7135 7136 7143 7144 7151+ ',
'+ 7152 7159 7160 7167 7168 7175 7176 7183+ ',
'+ 7184 7191 7192 7199 7200 7207 7208 7215+ ',
'+ 7216 7223 7224 7231 7232 ',
'SET1 2 6159 6160 9007 9008 9015 9016 9023+ ',
'+ 9024 9031 9032 9039 9040 9047 9048 9055+ ',
'+ 9056 9063 9064 9071 9072 9079 9080 9087+ ',
'+ 9088 9095 9096 9103 9104 9111 9112 9119+ ',
'+ 9120 9127 9128 9135 9136 9143 9144 9151+ ',
'+ 9152 9159 9160 ',
'AELIST 5 11017 11018 11023 11024 11029 11030 11035+ ',
'+ 11036 11041 11042 11047 11048 11053 11054 11059+ ',
'+ 11060 11065 11066 11071 11072 11077 11078 11083+ ',
'+ 11084 11089 11090 11095 11096 11101 11102 11107+ ',
'+ 11108 ']
</code></pre>
<p>the expected output would be a list of lists of each AELIST chunk and not the SET1 chunk</p>
<pre><code>[['1', '5159', '5160', '7007', '7008', '7015', '7016', '7023', '7024', '7031', '7032', '7039', '7040', '7047', '7048', '7055', '7056', '7063', '7064', '7071', '7072', '7079', '7080', '7087', '7088', '7095', '7096', '7103', '7104', '7111', '7112', '7119', '7120', '7127', '7128', '7135', '7136', '7143', '7144', '7151', '7152', '7159', '7160', '7167', '7168', '7175', '7176', '7183', '7184', '7191', '7192', '7199', '7200', '7207', '7208', '7215', '7216', '7223', '7224', '7231', '7232'], ['5', '11017', '11018', '11023', '11024', '11029', '11030', '11035', '11036', '11041', '11042', '11047', '11048', '11053', '11054', '11059', '11060', '11065', '11066', '11071', '11072', '11077', '11078', '11083', '11084', '11089', '11090', '11095', '11096', '11101', '11102', '11107', '11108']]
</code></pre>
<p>thanks</p>
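One way to get that output is to splice the `+`-continuation lines back together first, so every logical record becomes a single line, and then keep only the lines that start with the wanted keyword. A sketch — `parse_chunks` and the shortened `sample` are my own, untested against every quirk of the real file format:

```python
import re

def parse_chunks(text, keyword="AELIST"):
    # A line ending in '+' is continued by the next line's leading '+';
    # splice those pairs together, then each logical record is one line.
    joined = re.sub(r"\+\s*\n\+", " ", text)
    return [line[len(keyword):].split()
            for line in joined.splitlines()
            if line.startswith(keyword)]

sample = ("AELIST 1 5159 5160 7023+ \n"
          "+ 7024 7031 \n"
          "SET1 2 6159+ \n"
          "+ 6160 \n"
          "AELIST 5 11017 11018 ")
print(parse_chunks(sample))
# [['1', '5159', '5160', '7023', '7024', '7031'], ['5', '11017', '11018']]
```

Run on the full `filestr1`, this should give the list-of-lists shape shown as the expected output, with SET1 records dropped.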
|
<python><regex><nastran>
|
2023-04-01 17:18:09
| 1
| 1,367
|
nagordon
|
75,907,716
| 7,676,920
|
Add column with current date and time to Polars DataFrame
|
<p>How can I add a column to a Polars DataFrame with current date and time as value on every row?</p>
<p>With Pandas, I would do something like this:</p>
<pre class="lang-py prettyprint-override"><code>df["date"] = pd.Timestamp.today()
</code></pre>
|
<python><python-polars>
|
2023-04-01 17:12:48
| 1
| 1,383
|
basse
|
75,907,677
| 11,529,057
|
float value for the number of items for each cell in confusion matrix in Azure ML
|
<p>I work with Azure Machine Learning Service for modeling. To track and analyze the result of a binary classification problem, I use a method named <strong>score_classification</strong> in the <em>azureml.training.tabular.score.scoring</em> library. I invoke the method like this:</p>
<pre><code>metrics = score_classification(
y_test, y_pred_probs, metrics_names_list, class_labels, train_labels, sample_weight=sample_weights, use_binary=True)
</code></pre>
<p>Input arguments are:</p>
<ul>
<li><em>y_test</em> is an array of 0 and 1.</li>
<li><em>y_pred_probs</em> is an array of float values for each item.</li>
<li><em>metrics_names_list</em> is the list of the name of the metrics I want to calculate:['f1_score_classwise', 'confusion_matrix'].</li>
<li><em>class_labels</em> is a two-item array of [0, 1].</li>
<li><em>train_labels</em> is a two-item list of ['False', 'True'].</li>
</ul>
<p>When it calculates the metrics I sent as <em>metrics_names_list</em>, the results are shown in the Azure ML portal in the metrics page.</p>
<p>Confusion matrix is one of the metrics I draw each time. It has a combo box for the representation. This combo box could be set as <strong>Raw</strong> to show the number of items for each cell, and <strong>Normalized</strong> to show the percentage of the cells.</p>
<p>The problem is that I see float values instead of integers for the Raw configuration of this matrix. I do not know how to handle this issue.
<a href="https://i.sstatic.net/K2K59.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K2K59.jpg" alt="enter image description here" /></a></p>
|
<python><azure><confusion-matrix><azure-machine-learning-service>
|
2023-04-01 17:04:53
| 1
| 361
|
elldora
|
75,907,486
| 7,644,562
|
Django: Combine multiple forms into single one
|
<p>I'm working with a Django (4) project, where I have two forms and want to combine them into one to display in my template with custom HTML, not by using <code>{{form}}</code> syntax.</p>
<p>Here's my <code>forms.py</code>:</p>
<pre><code>class UserUpdateForm(forms.ModelForm):
email = forms.EmailField()
class Meta:
model = User
fields = ['username', 'first_name', 'last_name', 'email']
# Create a ProfileUpdateForm to update image.
class ProfileUpdateForm(forms.ModelForm):
class Meta:
model = Profile
fields = ['image']
</code></pre>
<p>How can I combine both of these forms into a single one to display all the fields in HTML and submit them as a single form?</p>
|
<python><python-3.x><django><django-forms><django-4.1>
|
2023-04-01 16:37:29
| 2
| 5,704
|
Abdul Rehman
|
75,907,395
| 9,300,627
|
Keeping the input provided to a generator
|
<p>Assume I have a generator <code>gen</code> that produces items, and another generator <code>trans</code> that transforms the items and returns one output item per input item, and assume that both generators are expensive and I can't change either of them. Both generators may have additional arguments. The output of <code>gen</code> is fed into <code>trans</code>, but when looping over the results of <code>trans</code>, I need the corresponding output of <code>gen</code> as well. My current solution is to <code>tee(gen())</code> and then <code>zip</code> that with the output of <code>trans</code>, and this works well, but my question is whether there is maybe a better solution that I am missing?</p>
<pre class="lang-py prettyprint-override"><code>from itertools import tee
# these two generators are just an example, assume these are expensive and can't be changed
def gen():
yield from range(3)
def trans(inp):
for x in inp:
yield chr(x + ord("A"))
# my question is: is there a better way to achieve what the following two lines are doing?
g1, g2 = tee(gen())
for i, o in zip(g1, trans(g2)):
print(f"{i} -> {o}")
</code></pre>
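`tee` + `zip` already looks idiomatic to me; if the concern is specifically `tee`'s internal buffering, one alternative sketch (valid only under the stated guarantee of exactly one output per input) is a pass-through generator that records each input as `trans` consumes it:

```python
# The question's example generators, repeated so the sketch runs standalone.
def gen():
    yield from range(3)

def trans(inp):
    for x in inp:
        yield chr(x + ord("A"))

def recording(source, buffer):
    # Pass every item through unchanged while remembering it in `buffer`.
    for item in source:
        buffer.append(item)
        yield item

pending = []
pairs = []
for out in trans(recording(gen(), pending)):
    inp = pending.pop(0)  # safe only because trans yields one output per input
    pairs.append((inp, out))
print(pairs)  # [(0, 'A'), (1, 'B'), (2, 'C')]
```

Note that when consumption stays in lockstep like this, `tee`'s buffer also stays at one item, so the original solution is not actually wasteful.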
|
<python><python-3.x><generator>
|
2023-04-01 16:17:43
| 1
| 3,033
|
haukex
|
75,907,394
| 6,709,460
|
What is the difference between Security and Depends in FastAPI?
|
<p>This is my code:</p>
<pre><code>from fastapi import FastAPI, Depends, Security
from fastapi.security import HTTPBearer
bearer = HTTPBearer()
@app.get("/")
async def root(q = Security(bearer)):
return {'q': q}
@app.get("/Depends")
async def root(q = Depends(bearer)):
return {'q': q,}
</code></pre>
<p>Both routes give precisely the same result and act in the same manner. I checked the source code and found that the Security class inherits from the Depends class, but I don't understand how they differ. Can you please show me the differences and why I would prefer to use Security over Depends?</p>
|
<python><fastapi>
|
2023-04-01 16:17:42
| 1
| 741
|
Testing man
|
75,907,379
| 11,940,581
|
Can a multi core processor make parallel network connections?
|
<p>I currently have a use case wherein I need to hit a REST endpoint with some data (~20 MB) present in a file and upload it (HTTP POST). The machine that I am using for this is an <em>Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz (40 cores, 375 GB RAM)</em>. The problem is that I would need to make around 8000 such network calls (with the cumulative payload size being around 200 GB) within a span of 1 hour.
This machine is connected to a 10 Gbps bandwidth line.</p>
<p>Sequential execution is a bad idea for me, so I am looking at parallelizing the calls.
I am currently evaluating the asyncio library, which is better suited for I/O-bound operations (compared to multiprocessing, as there is no processing at all to be done).</p>
<p>I am trying to find the most suitable solution for this. Will there be an advantage to using a multi-core approach here, since all the data needs to flow out through the same single interface (NIC)?</p>
<p>Found this 12 year old question: <a href="https://stackoverflow.com/questions/6327881/is-there-a-way-to-take-advantage-of-multi-core-when-dealing-with-network-connect">Take advantage of multi core processing</a> ,but it mostly speaks from a consumption perspective( how do you spin multi core applications to work on the incoming data in parallel). I have no qualms in switching to a more concurrency-friendly programming language like Java, if it serves my use case.</p>
|
<python><multithreading><multiprocessing><network-programming><python-asyncio>
|
2023-04-01 16:15:43
| 1
| 371
|
halfwind22
|
75,907,222
| 7,089,108
|
Jupyterlab extension gives a function not found error
|
<p>I have issues with jupyter extensions on ArchLinux. In particular, I get the following error:</p>
<pre><code>[W 2023-04-01 18:34:36.504 ServerApp] A `_jupyter_server_extension_points` function was not found in jupyter_nbextensions_configurator. Instead, a `_jupyter_server_extension_paths` function was found and will be used for now. This function name will be deprecated in future releases of Jupyter Server.
[W 2023-04-01 18:34:36.493 ServerApp] A `_jupyter_server_extension_points` function was not found in notebook_shim. Instead, a `_jupyter_server_extension_paths` function was found and will be used for now. This function name will be deprecated in future releases of Jupyter Server.
</code></pre>
<p>How can I get rid of this error/warning? I tried removing the packages with pip, but it did not work.
Any ideas?</p>
|
<python><jupyter-notebook><jupyter-lab>
|
2023-04-01 15:43:22
| 1
| 433
|
cerv21
|
75,907,167
| 76,701
|
Something like `click_shell` for Typer
|
<p>I recently started using the Typer framework for command line programs in Python. It's pretty good. When I used Click, I would use the <code>click_shell</code> plugin to automatically make an interactive shell for my command line apps, so I could launch them without arguments and get a shell where I could run my commands.</p>
<p>Is there functionality similar to that for Typer?</p>
|
<python><typer>
|
2023-04-01 15:31:33
| 1
| 89,497
|
Ram Rachum
|
75,907,155
| 15,170,662
|
Is asyncio affected by the GIL?
|
<p>On <a href="https://superfastpython.com/asyncio-vs-threading/" rel="noreferrer">this</a> page I read this:</p>
<blockquote>
<p>Coroutines in the asyncio module are not limited by the Global Interpreter Lock or GIL.</p>
</blockquote>
<p>But how is this possible if both the <code>asyncio</code> event loop and the <code>threading</code> threads are running in a single Python process with GIL?</p>
<p>As far as I understand, the impact of the GIL on <code>asyncio</code> will not be as strong as on <code>threading</code>, because in the case of <code>threading</code>, the interpreter will switch to useless operations, such as <code>time.sleep()</code>. Therefore, when working with <code>asyncio</code>, it is recommended to use <code>asyncio.sleep()</code>.</p>
<p>I understand that these tools are designed for slightly different things, <code>threading</code> is more often used to execute "legacy" blocking code for IO-Bound operations, and <code>asyncio</code> for non-blocking code.</p>
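The single-threaded picture can be made concrete: everything below runs in one thread, which is also the only thread contending for the GIL, and the speedup comes from overlapping waits rather than parallel execution. A minimal sketch:

```python
import asyncio
import time

async def worker(i):
    await asyncio.sleep(0.05)  # yields control to the loop instead of blocking
    return i

async def main():
    start = time.perf_counter()
    # Ten coroutines, all awaiting concurrently inside one thread.
    results = await asyncio.gather(*(worker(i) for i in range(10)))
    return results, time.perf_counter() - start

results, elapsed = asyncio.run(main())
print(results, round(elapsed, 2))  # ten 0.05 s sleeps overlap, so elapsed is ≈ 0.05, not 0.5
```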
|
<python><multithreading><python-asyncio><gil>
|
2023-04-01 15:29:27
| 2
| 415
|
Meetinger
|
75,907,063
| 1,391,466
|
Remove all text from a html node using regex
|
<p>Is it possible to remove all text from HTML nodes with a regex? This very simple case seems to work just fine:</p>
<pre class="lang-py prettyprint-override"><code>import htmlmin
html = """
<li class="menu-item">
<p class="menu-item__heading">Totopos</p>
<p>Chips and molcajete salsa</p>
<p class="menu-item__details menu-item__details--price">
<strong>
<span class="menu-item__currency"> $ </span>
4
</strong>
</p>
</li>
"""
print(re.sub(">(.*?)<", ">\1<", htmlmin.minify(html)))
</code></pre>
<p>I tried to use BeautifulSoup but I cannot figure out how to make it work. Using the following code example is not quite correct since it is leaving "4" in as text.</p>
<pre class="lang-py prettyprint-override"><code>soup = BeautifulSoup(html, "html.parser")
for n in soup.find_all(recursive=True):
print(n.name, n.string)
if n.string:
n.string = ""
print(minify(str(soup)))
</code></pre>
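For the regex route, a sketch that actually deletes the inter-tag text — note the raw string, so backslashes are not reinterpreted by Python, and a replacement that keeps only the angle brackets (the `html` fragment here is a shortened, hypothetical version of the menu markup):

```python
import re

html = '<p class="x">Totopos</p><strong><span> $ </span> 4 </strong>'

# [^<]+ matches the text sitting between a '>' and the next '<';
# replacing it with '><' drops the text but leaves the tags intact.
stripped = re.sub(r">[^<]+<", "><", html)
print(stripped)  # <p class="x"></p><strong><span></span></strong>
```

This stays brittle on real-world HTML (comments, `<script>` bodies, text after the final tag), so for anything beyond simple fragments, walking a parser's text nodes and blanking them is the safer route.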
|
<python><html><beautifulsoup>
|
2023-04-01 15:13:26
| 2
| 2,087
|
chhenning
|
75,906,985
| 16,389,095
|
Python/Kivy: How to pass arguments to a class when it is called by a screen manager
|
<p>I developed a UI with Python/Kivy/KivyMD. It is a simple app in which three screens are defined by three different classes: <strong>View1</strong>, <strong>View2</strong> and <strong>View3</strong>. In the class <em>'MainApp'</em> a screen manager is defined and used to switch between screens. The switch between screens occurs when the user clicks a button and is handled by an event in which the screen manager is called.
The third screen (<strong>View3</strong>) also contains a label. How can I pass an argument/variable from <strong>View1</strong> to <strong>View3</strong>? I would like to show some text in this label which comes from <strong>View1</strong> (see the commented lines). And in general, how can a global variable inside <strong>View3</strong> be defined from another class, such as <strong>View1</strong>? In particular, I would like to pass a value from <strong>View1</strong> to a variable defined in <strong>View3</strong> (see <em>switch_to_screen2</em>).
Here is the code:</p>
<pre><code>from kivy.lang import Builder
from kivymd.app import MDApp
#from kivy.uix.screenmanager import NoTransition, SwapTransition, SlideTransition, FallOutTransition
from kivymd.uix.transition import MDSwapTransition, MDSlideTransition
from kivymd.uix.screenmanager import MDScreenManager
from kivymd.uix.screen import MDScreen
from kivymd.uix.tab import MDTabsBase
from kivymd.uix.floatlayout import MDFloatLayout
Builder.load_string(
"""
<View3>:
MDRelativeLayout:
MDLabel:
id: label_view3
halign: 'center'
MDRaisedButton:
text: 'GO TO VIEW 1'
pos_hint: {'center_x': 0.7, 'center_y': 0.7}
on_release: root.switch_to_screen1()
<View2>:
MDRaisedButton:
text: 'GO TO VIEW 3'
pos_hint: {'center_x': 0.5, 'center_y': 0.5}
on_release: root.switch_to_screen3()
<View1>:
sm_view1: sm_view1
tabs: tabs
MDAnchorLayout:
anchor_x: 'center'
anchor_y: 'top'
MDBoxLayout:
size_hint: 1, 0.2
orientation: 'vertical'
MDTabs:
id: tabs
on_tab_switch: root.Tab_Switch(*args)
MDAnchorLayout:
anchor_x: 'center'
anchor_y: 'bottom'
MDScreenManager:
size_hint: 1, 0.8
id: sm_view1
MDScreen:
name: 'screen1'
MDLabel:
text: 'VIEW 1'
halign: 'center'
MDScreen:
name: 'screen2'
MDRaisedButton:
text: 'GO TO VIEW 2'
pos_hint: {'center_x': 0.3, 'center_y': 0.3}
on_release: root.switch_to_screen2()
"""
)
class Tab(MDFloatLayout, MDTabsBase):
'''Class implementing content for a tab.'''
class View3(MDScreen):
myString = ''
def __init__(self, **kwargs):
super().__init__(**kwargs)
# MAYBE INITIALIZE HERE THE VARIABLE myString
# self.myString =
def switch_to_screen1(self):
MDApp.get_running_app().sm.current = 'view1'
def WriteInTheLabel(self):
self.ids.label_view3.text = myString + ' RECEIVED'
class View2(MDScreen):
def __init__(self, **kwargs):
super().__init__(**kwargs)
def switch_to_screen3(self):
MDApp.get_running_app().sm.current = 'view3'
class View1(MDScreen):
sm_view1: MDScreenManager
tabs: Tab
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.tabsTitles = ['SCREEN 1', 'SCREEN 2']
self.Tab_Start()
def Tab_Start(self):
for label in self.tabsTitles:
self.ids.tabs.add_widget(Tab(title=label))
def Tab_Switch(
self, instance_tabs, instance_tab, instance_tab_label, tab_text
):
'''Called when switching tabs.
:type instance_tabs: <kivymd.uix.tab.MDTabs object>;
:param instance_tab: <__main__.Tab object>;
:param instance_tab_label: <kivymd.uix.tab.MDTabsLabel object>;
:param tab_text: text or name icon of tab;
'''
if tab_text == self.tabsTitles[0]:
self.sm_view1.current = 'screen1'
elif tab_text == self.tabsTitles[1]:
self.sm_view1.current = 'screen2'
def switch_to_screen2(self):
MDApp.get_running_app().sm.current = 'view2'
view3 = self.manager.get_screen('view3')
# THE FOLLOWING LINES GAVE ME A NAME ERROR: 'myString' is not defined
view3.myString = 'MSG FROM VIEW 1'
view3.WriteInTheLabel()
class MainApp(MDApp):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.sm = MDScreenManager(transition = MDSwapTransition())
self.sm.add_widget(View1(name="view1"))
self.sm.add_widget(View2(name="view2"))
self.sm.add_widget(View3(name="view3"))
def build(self):
return self.sm
if __name__ == '__main__':
MainApp().run()
</code></pre>
|
<python><kivy><kivy-language><kivymd>
|
2023-04-01 14:56:42
| 1
| 421
|
eljamba
|
75,906,827
| 4,859,268
|
GitLab CI python subprocess.Popen permission denied
|
<p>I'm running a GitLab-CI job which runs a python script which starts a <code>subprocess.Popen(...)</code>.</p>
<pre><code>def main():
proc = subprocess.Popen("./../binary_file --args value", stdout=subprocess.PIPE)
</code></pre>
<p>The problem is that I'm getting</p>
<blockquote>
<p>PermissionError: [Errno 13] Permission denied: './../binary_file'</p>
</blockquote>
<p>Ok. Maybe I forgot to set appropriate permissions?</p>
<pre><code>$ chmod +x ./binary_file
$ ls -l ./binary_file
-rwxr-xr-x 1 root root 30335023 Apr 1 14:16 ./binary_file
$ whoami
root
</code></pre>
<p>Well, that's not it.</p>
<p>So what could be the reason of such behavior?</p>
<p>The <code>script</code> part of a <code>gitlab-ci</code> job</p>
<pre><code>script:
- chmod +x ./binary_file
- ls -l ./binary_file
- whoami
- pipenv run python ./scripts/run_tests.py
</code></pre>
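As a side note that may or may not matter here: a single command string without `shell=True` is treated as one executable name, spaces included, so the list form is usually what you want. A self-contained sketch using the Python interpreter as a stand-in for the binary:

```python
import subprocess
import sys

# Each argument is its own list element; no shell parsing is involved.
proc = subprocess.Popen([sys.executable, "-c", "print('ok')"],
                        stdout=subprocess.PIPE)
out, _ = proc.communicate()
print(out.decode().strip())  # ok
```

If the list form still raises `PermissionError` for a file whose execute bit is set, a plausible (unconfirmed) cause in CI is the volume being mounted `noexec`, or the binary requiring an interpreter/loader that is missing from the runner image.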
|
<python><gitlab><gitlab-ci>
|
2023-04-01 14:26:45
| 1
| 1,447
|
Bob
|
75,906,819
| 303,513
|
Simple linear regression in pyTorch - why loss is increasing with each epoch?
|
<p>I'm trying to make a simple linear regression model with PyTorch to predict the perceived temperature <code>atemp</code> based on actual temperature <code>temp</code>.</p>
<p>I cannot understand why this code results in loss increasing with each epoch, instead of decreasing. And all predicted values are very far from the truth.</p>
<h2>sample data used</h2>
<pre><code>data_x = array([11.9, 12. , 13.4, 14.8, 15.8, 16.6, 16.7, 16.9, 16.9, 16.9, 16.5,
15.7, 15.3, 15. , 15. , 14.9, 14.6, 14.2, 14.2, 14. , 13.5, 12.9,
12.5, 12.4, 12.8, 14.3, 15.6, 16.5, 17. , 17.5, 17.7, 17.7, 17.8,
17.5, 16.9, 15.6, 14. , 12.2, 11. , 10.6, 10.6, 10.7, 10.9, 10.6,
10.3, 9.4, 8.7, 7.8, 8.1, 11. , 13.4, 15.2, 16.5, 17.4, 18.1,
18.5, 18.7, 18.6, 17.7, 16. , 14.6, 13.8, 13. , 12.5, 12. , 11.8,
11.5, 11.3, 10.9, 10.6, 10.2, 9.9, 10.5, 13.1, 15.3, 17.2, 18.9,
20.3, 21.2, 21.8, 21.9, 21.5, 20.2, 18.3, 16.8, 15.8, 14.9, 14.2,
13.6, 13.2, 12.9, 12.7, 12.6, 12.6, 12.6, 12.8, 13.4, 15.5, 17.6,
19.3])
data_y = array([ 8.9, 9.3, 10.7, 12.1, 13.1, 13.8, 14. , 14.1, 14.3, 14.5, 14.3,
13.7, 13.2, 12.7, 12.7, 12.5, 11.9, 11.7, 11.7, 11.5, 11.1, 10.6,
10.3, 10.2, 10.9, 12.5, 12.8, 13.8, 14.6, 14.9, 14.9, 15.1, 15.5,
15.6, 15.8, 14.7, 13.1, 11.2, 9.6, 9.1, 9.4, 9.7, 9.9, 9.6,
9.2, 8. , 7.1, 6.1, 6.5, 10.2, 12.7, 14.3, 15.5, 16.6, 17.4,
17.7, 17.8, 17.6, 17.2, 15.3, 13.4, 12.4, 11.5, 10.8, 10.1, 10. ,
9.8, 9.6, 9.3, 9. , 8.5, 8.1, 8.8, 12. , 14.4, 16.6, 18.5,
20.1, 21. , 21.3, 21.2, 21.2, 20.1, 17.9, 16.1, 14.6, 13.8, 13.1,
12.3, 11.8, 11.6, 11.4, 11.3, 11.3, 11.3, 11.4, 12. , 14.6, 16.8,
18.8])
</code></pre>
<p>Plotted data:</p>
<p><img src="https://i.postimg.cc/28fztLmb/OOO.png" alt="Plotted data" /></p>
<h2>Code</h2>
<pre><code># import data from CSV to pandas Dataframe
bg = pd.read_csv('data.csv')
X_pandas = bg['temp']
y_pandas = bg['atemp']
# covert to tensors
data_x = X_pandas.head(100).values
data_y = y_pandas.head(100).values
X = torch.tensor(data_x, dtype=torch.float32).reshape(-1, 1)
y = torch.tensor(data_y, dtype=torch.float32).reshape(-1, 1)
# create the model
model = nn.Linear(1, 1)
loss_fn = nn.MSELoss() # mean square error
optimizer = optim.SGD(model.parameters(), lr=0.01)
# train the model
n_epochs = 40 # number of epochs to run
for epoch in range(n_epochs):
# forward pass
y_pred = model(X)
# compute loss
loss = loss_fn(y_pred, y)
# backward pass
loss.backward()
# update parameters
optimizer.step()
# zero gradients
optimizer.zero_grad()
# print loss
print(f'epoch: {epoch + 1}, loss = {loss.item():.4f}')
# display the predicted values
predicted = model(X).detach().numpy()
display(predicted)
</code></pre>
<h2>Output</h2>
<pre><code>epoch: 1, loss = 16.5762
epoch: 2, loss = 191.0379
epoch: 3, loss = 2291.5081
epoch: 4, loss = 27580.5195
epoch: 5, loss = 332052.6875
epoch: 6, loss = 3997804.2500
epoch: 7, loss = 48132328.0000
epoch: 8, loss = 579498624.0000
epoch: 9, loss = 6976988160.0000
epoch: 10, loss = 84000866304.0000
epoch: 11, loss = 1011344670720.0000
epoch: 12, loss = 12176279470080.0000
epoch: 13, loss = 146598776537088.0000
epoch: 14, loss = 1765004462260224.0000
epoch: 15, loss = 21250117348622336.0000
epoch: 16, loss = 255844948350337024.0000
epoch: 17, loss = 3080297218377252864.0000
epoch: 18, loss = 37085819119396192256.0000
epoch: 19, loss = 446502312996857970688.0000
epoch: 20, loss = 5375748153858603352064.0000
epoch: 21, loss = 64722396677244886974464.0000
epoch: 22, loss = 779237667397586303057920.0000
epoch: 23, loss = 9381773651754967424303104.0000
epoch: 24, loss = 112953739724808869434621952.0000
epoch: 25, loss = 1359928800566679308764971008.0000
epoch: 26, loss = 16373128158657455337028714496.0000
epoch: 27, loss = 197127444146361433227589058560.0000
epoch: 28, loss = 2373354706586702693378941779968.0000
epoch: 29, loss = 28574463232459721913615454830592.0000
epoch: 30, loss = 344027831021918449557295178186752.0000
epoch: 31, loss = 4141990153063893156517557464727552.0000
epoch: 32, loss = 49868270370463502095675094080684032.0000
epoch: 33, loss = 600398977963427833849804206813216768.0000
epoch: 34, loss = inf
epoch: 35, loss = inf
epoch: 36, loss = inf
epoch: 37, loss = inf
epoch: 38, loss = inf
epoch: 39, loss = inf
epoch: 40, loss = inf
</code></pre>
<p>Predicted values:</p>
<pre><code>array([[1.60481241e+21],
[1.61822441e+21],
[1.80599158e+21],
[1.99375890e+21],
[2.12787834e+21],
[2.23517393e+21],
[2.24858593e+21],
[2.27540965e+21],
[2.27540965e+21],
[2.27540965e+21],
...
</code></pre>
<p>What could be the reason for this strange result?</p>
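A loss that multiplies every epoch like this usually means the learning rate is too large for unnormalized inputs (here `temp` is roughly 10-20, so the curvature along the weight direction is large). A NumPy sketch of the same 1-D regression — on toy data of my own, not the asker's — showing that only the step size separates divergence from convergence:

```python
import numpy as np

x = np.linspace(8.0, 22.0, 100)  # inputs in the same range as temp
y = 0.9 * x - 1.0                # a perfectly linear target

def final_loss(lr, steps=40):
    w = b = 0.0
    for _ in range(steps):
        pred = w * x + b
        grad_w = 2 * np.mean((pred - y) * x)  # d(MSE)/dw
        grad_b = 2 * np.mean(pred - y)        # d(MSE)/db
        w -= lr * grad_w
        b -= lr * grad_b
    return float(np.mean((w * x + b - y) ** 2))

print(final_loss(0.01))   # explodes, like the question's run
print(final_loss(0.001))  # converges to a small loss
```

With the real data, lowering `lr` (e.g. toward 1e-4) or standardizing `temp` before training should let the original torch loop converge; SGD with lr=0.01 on raw values in that range appears to be past the stability threshold.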
|
<python><machine-learning><deep-learning><pytorch><linear-regression>
|
2023-04-01 14:25:59
| 2
| 46,260
|
Silver Light
|
75,906,590
| 21,787,377
|
How can I allow user to submit a form
|
<p>When I try to submit the <code>Table</code> model to the database using the <code>create_table</code> view, it throws an error: <code>NOT NULL constraint failed: audioApp_table.user_id</code>. After doing my own research, I found out it was because I didn't add the user to the form, so I tried to add it with <code>table = Table(user=request.user)</code>, but it is not working. How can we allow the user to submit this form? I knew how to do that in a class-based view using <code>instance.user = self.request.user</code>, but for a function-based view I have failed.</p>
<pre><code>def create_table(request):
columns = Column.objects.filter(user=request.user)
fields = {}
for column in columns:
if column.field_type.data_type == 'number':
fields[column.name] = forms.IntegerField()
elif column.field_type.data_type == 'character':
fields[column.name] = forms.CharField(max_length=20)
elif column.field_type.data_type == 'decimal':
fields[column.name] = forms.DecimalField(max_digits=20, decimal_places=10)
elif column.field_type.data_type == 'image':
fields[column.name] = forms.ImageField()
elif column.field_type.data_type == 'boolean':
fields[column.name] = forms.BooleanField()
elif column.field_type.data_type == 'date':
fields[column.name] = forms.DateField()
TableForm = type('TableForm', (forms.Form,), fields)
if request.method == 'POST':
form = TableForm(request.POST, request.FILES)
if form.is_valid():
table = Table()
for column in columns:
setattr(table, column.name, form.cleaned_data[column.name])
table.save()
return redirect('Table')
else:
form = TableForm()
return render (request, 'create_table.html', {'form':form})
</code></pre>
<pre><code>class Table(models.Model):
user = models.ForeignKey(User, on_delete=models.CASCADE)
number = models.IntegerField(blank=True, null=True)
decimal = models.DecimalField(max_digits=20, decimal_places=10, blank=True, null=True)
image = models.ImageField(upload_to='table-image', blank=True, null=True)
character = models.CharField(max_length=500, blank=True, null=True)
check_box = models.BooleanField(blank=True, null=True)
date = models.DateField(blank=True, null=True)
</code></pre>
|
<python><django>
|
2023-04-01 13:46:45
| 1
| 305
|
Adamu Abdulkarim Dee
|
75,906,560
| 3,825,996
|
Can we use Python 4's end keyword in Python 2.7?
|
<p>Because of my ancient animation pipeline, I am stuck with python 2.7. I saw that python 4 will have an end keyword which can already be used in python 3 with pyend (<a href="https://pypi.org/project/pyend/" rel="nofollow noreferrer">https://pypi.org/project/pyend/</a>).
I am using that in some python 3 projects already and it's pretty cool.</p>
<p>However, the pypi page states "Requires: Python >=3.7". Now my question is, does that mean that pyend itself needs Python 3.7 or newer to run on but it can be used on older python code or can it only be used on code that is >=3.7? I have tried with some python 2 code and it seems to be working fine but I would rather be sure before I switch.</p>
|
<python><auto-indent>
|
2023-04-01 13:41:37
| 1
| 766
|
mqnc
|
75,906,408
| 14,514,276
|
Dimensions error using BCEWithLogitsLoss PyTorch
|
<p>I have a problem with dimensions in this line of code: <code>loss = self.cross_entropy(preds, labels)</code>. The error looks like this: <code>RuntimeError: output with shape [32, 1] doesn't match the broadcast shape [32, 2]</code>.</p>
<p>I don't know what is wrong because when I do: <code>labels.shape</code> and <code>preds.shape</code> they give me both: <code>torch.Size([32, 1])</code>.</p>
<p>I am building a binary classifier for opinions about products.</p>
<p>My preds variable looks like this inside: <code>tensor([[0.9629], [1.0000], [0.9965], [0.9881], [0.9744], [0.8138], [0.8554], [0.8460], [0.6731], [0.9852], [0.9993], [0.7753], [0.7943], [0.7197], [0.9878], [0.7717], [1.0000], [0.9850], [0.9685], [0.6256], [0.7840], [0.5589], [0.4942], [0.8766], [1.0000], [0.2887], [0.8176], [0.9193], [0.4488], [0.8078], [0.8877], [1.0000]], grad_fn=<SigmoidBackward0>)</code></p>
<p>And my labels variable looks like this inside: <code>tensor([[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.],[0.]])</code>.</p>
<p>This is my achitecture:</p>
<pre><code>class _2FC(nn.Module):
def __init__(self, input_size=100, hidden_size=512):
super(_2FC, self).__init__()
self.dropout = nn.Dropout(0.1)
self.relu = nn.ReLU()
self.fc1 = nn.Linear(input_size, hidden_size)
self.fc2 = nn.Linear(hidden_size, 1)
self.sigmoid = nn.Sigmoid()
def forward(self, x):
x = self.fc1(x)
x = self.relu(x)
x = self.dropout(x)
x = self.fc2(x)
x = self.sigmoid(x)
return x
</code></pre>
<p>and this is my training code:</p>
<pre><code>class BasicTrainer():
def __init__(self, model, weights, train_dataloader, val_dataloader, file_name='saved_weights', device='cpu', epochs=100):
self.device = torch.device(device)
self.model = model.to(self.device)
self.optimizer = AdamW(model.parameters(), lr=1e-3)
self.cross_entropy = nn.BCEWithLogitsLoss(weight=weights.to(self.device)).to(self.device)
self.epochs = epochs
self.train_dataloader = train_dataloader
self.val_dataloader = val_dataloader
self.file_name = file_name
def _train(self):
self.model.train()
total_loss, total_accuracy = 0, 0
total_preds=[]
for step, batch in enumerate(self.train_dataloader):
batch = [r.to(self.device) for r in batch]
sent_id, labels = batch
self.model.zero_grad()
labels = labels.unsqueeze(1).float()
preds = self.model(sent_id)
loss = self.cross_entropy(preds, labels)
total_loss = total_loss + loss.item()
loss.backward()
torch.nn.utils.clip_grad_norm_(self.model.parameters(), 1.0)
self.optimizer.step()
preds=preds.detach().cpu().numpy()
total_preds.append(preds)
avg_loss = total_loss / len(self.train_dataloader)
total_preds = np.concatenate(total_preds, axis=0)
return avg_loss, total_preds
</code></pre>
|
<python><deep-learning><pytorch><neural-network>
|
2023-04-01 13:13:33
| 0
| 693
|
some nooby questions
|
75,906,407
| 14,014,925
|
How to interpret the model_max_len attribute of the PreTrainedTokenizer object in Huggingface Transformers
|
<p>I've been trying to check the maximum length allowed by emilyalsentzer/Bio_ClinicalBERT, and after these lines of code:</p>
<pre><code>model_name = "emilyalsentzer/Bio_ClinicalBERT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer
</code></pre>
<p>I've obtained the following:</p>
<pre><code>PreTrainedTokenizerFast(name_or_path='emilyalsentzer/Bio_ClinicalBERT', vocab_size=28996, model_max_len=1000000000000000019884624838656, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'mask_token': '[MASK]'})
</code></pre>
<p>Is that true? Is the max length of the model (in the number of tokens, as it says <a href="https://huggingface.co/transformers/v2.11.0/main_classes/tokenizer.html#transformers.PreTrainedTokenizer" rel="nofollow noreferrer">here</a>) that high? Then, how am I supposed to interpret that?</p>
<p>Cheers!</p>
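A possibly relevant detail (an assumption to verify against your `transformers` version, not something stated in the question): that huge number is the sentinel the `transformers` library stores when the tokenizer config records no maximum length. It is literally `int(1e30)` after float rounding, not a usable limit; the model's real limit normally lives in the model config (`max_position_embeddings`, 512 for BERT-style models).

```python
# The printed model_max_len is transformers' "no limit recorded" sentinel,
# which is simply int(1e30) after float rounding -- not a real maximum.
VERY_LARGE_INTEGER = int(1e30)
print(VERY_LARGE_INTEGER)  # 1000000000000000019884624838656
```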
|
<python><nlp><huggingface-transformers><huggingface-tokenizers><huggingface>
|
2023-04-01 13:13:27
| 1
| 345
|
ignacioct
|
75,906,272
| 5,916,316
|
How to resolve this problem with RecycleView in Kivy?
|
<p>I am trying to pass data from the <code>Menu</code> class to the <code>Container</code> class, but I am getting the exception <code>TypeError: Container.__init__() missing 2 required positional arguments: 'source' and 'mipmap'</code>. I think the problem is not in the Container class or <code>rv</code>.
My app is very simple, but I am still stuck on this issue.</p>
<p>my main.py</p>
<pre><code>from kivymd.app import MDApp
from kivymd.uix.label import MDLabel
from kivymd.uix.boxlayout import MDBoxLayout
from kivymd.uix.list import TwoLineAvatarIconListItem
from kivy.properties import StringProperty
from kivy.lang import Builder
from kivymd.uix.swiper import MDSwiper
class Container(TwoLineAvatarIconListItem):
text = StringProperty()
class Menu(MDBoxLayout):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.rv.data = [{'text': 'hello world'}]
class App(MDApp):
def build(self):
Builder.load_file('main.kv')
return Menu()
App().run()
</code></pre>
<p>my main.kv</p>
<pre><code><Container>
IcontLeftWidget:
icon: 'home'
IcontRightWidget:
icon: 'home'
<Menu>
rv: rv
RecycleView:
id: rv
viewclass: 'Container'
RecycleBoxLayout:
default_size_hint: 1, None
</code></pre>
|
<python><kivy><kivymd>
|
2023-04-01 12:49:47
| 1
| 429
|
Mike
|
75,906,239
| 11,092,636
|
ERROR: No matching distribution found for torchvision==0.8.2
|
<p>torchvision 0.8.2 <a href="https://pypi.org/project/torchvision/0.8.2/" rel="nofollow noreferrer">exists</a></p>
<p>But when running <code>pip install torchvision==0.8.2</code>, I get the following error:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement torchvision==0.8.2 (from versions: 0.1.6, 0.1.7, 0.1.8, 0.1.9, 0.2.0, 0.2.1,
0.2.2, 0.2.2.post2, 0.2.2.post3, 0.5.0, 0.9.0, 0.9.1, 0.10.0, 0.10.1, 0.11.0, 0.11.1, 0.11.2, 0.11.3, 0.12.0, 0.13.0, 0.13.1, 0.14.0, 0.14.1, 0.15.0, 0.15.1)
ERROR: No matching distribution found for torchvision==0.8.2
</code></pre>
<p>I'm using <code>Python 3.8.2</code>.</p>
<p>Any reason why the version is available on PyPi but it doesn't work with pip install?</p>
|
<python><pip><torchvision>
|
2023-04-01 12:43:22
| 0
| 720
|
FluidMechanics Potential Flows
|
75,906,153
| 1,262,480
|
Aws Moto redshift statements assertions (Python)
|
<p>Considering an AWS lambda written in Python that uses boto3 as client to AWS Redshift service.</p>
<p>Considering the following example:</p>
<pre class="lang-py prettyprint-override"><code>
import boto3
import moto
def lambda_handler(event, context):
session = boto3.session.Session()
redshift_data_service = session.client(
service_name='redshift-data',
region_name='a_region',
)
redshift_data_service.execute_statement(
sql="insert into XYZ VALUES..."
)
@moto.redshiftdata
def test_insert_sql_insert():
lambda_handler(...)
redshift_data_service = boto3.client("redshift-data", region_name='a_region')
# List of executed statements (id... sql...)
# Assert Statement Descriptions
</code></pre>
<p>Using the Moto Python dependency to mock <code>redshift-data</code> (similar to the example above), how can I retrieve data regarding executed statements? I am referring to the previously executed statement IDs and the SQL string ("INSERT...") executed in <code>lambda_handler</code></p>
<p>I couldn't find anything to get statement IDs in order to query the service and have a statement description.</p>
<p>Is there a way to assert the text of the previously executed SQLs?</p>
|
<python><amazon-web-services><testing><boto3><moto>
|
2023-04-01 12:21:07
| 1
| 822
|
Yak O'Poe
|
75,905,984
| 13,903,942
|
Python types.SimpleNamespace is it thread safe?
|
<p>Is <a href="https://docs.python.org/3/library/types.html#types.SimpleNamespace" rel="nofollow noreferrer">types.SimpleNamespace</a> in Python thread-safe?</p>
<p>Given an object of type <code>types.SimpleNamespace</code> with fixed attributes, are updates made from one thread and read by another atomic?</p>
<p>Example of a possible use case</p>
<pre><code>import threading
import uuid
from pprint import pprint
import time
from types import SimpleNamespace
workers_count = 10
total_post_per_user = 5
storage = SimpleNamespace(
user_count=0,
users=set(),
posts=[],
latest_message=None
)
def add_post(username: str, post: str):
storage.posts.append(post)
storage.latest_message = post
print(f"User {username} posted : {post}")
def worker():
storage.user_count += 1
user_name = f'worker_{storage.user_count}'
storage.users.add(user_name)
print(f"User {user_name} joined, total users {storage.user_count}")
setattr(storage, user_name, user_name) # let's also add dynamically stuff to the SimpleNamespace
for i in range(total_post_per_user):
add_post(user_name, f"Message from {user_name} N {i}, message : {str(uuid.uuid4())}")
time.sleep(.2)
workers = [
threading.Thread(target=worker, daemon=True) for _ in range(workers_count)
]
for worker in workers:
worker.start()
for worker in workers:
worker.join()
print("Results")
print(f"-- User count : {storage.user_count}")
print(f"-- Users : {len(storage.users)} = {storage.users}")
print(f"-- Total posts: {len(storage.posts)} - expected : {workers_count * total_post_per_user}")
print(f"-- Last message: {storage.latest_message}")
pprint(storage)
</code></pre>
<p>Notice how each worker thread updates the SimpleNamespace, but also dynamically adds some data: <code>setattr(storage, user_name, user_name)</code></p>
<p>The result suggests that it is <strong>thread safe</strong>, but I'd like to check on this and if is not, why is not.</p>
<p>Thanks.</p>
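For what it's worth, individual attribute reads and writes on a `SimpleNamespace` are protected by CPython's GIL, but compound operations such as `storage.user_count += 1` are read-modify-write sequences and are not atomic. A minimal sketch (names are illustrative) showing the usual explicit-lock fix:

```python
import threading
from types import SimpleNamespace

storage = SimpleNamespace(user_count=0)
lock = threading.Lock()

def worker():
    for _ in range(10_000):
        # += is a read-modify-write sequence: guard it explicitly.
        with lock:
            storage.user_count += 1

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(storage.user_count)  # 80000, deterministically, thanks to the lock
```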
|
<python><multithreading><object><thread-safety><atomic>
|
2023-04-01 11:45:45
| 0
| 7,945
|
Federico Baù
|
75,905,879
| 8,350,828
|
Python binance - get futures position information
|
<p>I would like to retrieve the futures position history from the Binance API, just like it appears on the dashboard.</p>
<p>I need this data to then later display on a spreadsheet.</p>
<p><a href="https://i.sstatic.net/ZQpgp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZQpgp.png" alt="position history" /></a></p>
<p>Thank you.</p>
<p>I've tried to play with <code>client.futures_account_trades()</code>.</p>
<p><code>client.futures_position_information()</code> retrieves only currently active positions.</p>
|
<python><binance-api-client><python-binance>
|
2023-04-01 11:27:18
| 1
| 813
|
Jože Strožer
|
75,905,778
| 9,728,885
|
Concat keys in PCollection
|
<p>I'm trying to concatenate/join the values of two keys in Apache Beam to get a new list composed of all items from both keys.</p>
<p>Suppose I have a PCollection as follows:</p>
<pre><code>(
"Key1": [file1, file2],
"Key2": [file3, file4],
)
</code></pre>
<p>How do I achieve a PCollection which looks like this using the Python apache-beam SDK:</p>
<pre><code>(
"Key3": [file1, file2, file3, file4]
)
</code></pre>
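Not an authoritative Beam answer, but conceptually this is a re-key followed by a group-and-flatten. A plain-Python sketch of the logic (the real pipeline would chain `beam.Map` → `beam.GroupByKey` → a flattening step; `"Key3"` is an assumed target key):

```python
pcoll = [("Key1", ["file1", "file2"]), ("Key2", ["file3", "file4"])]

# Step 1 (beam.Map in the real pipeline): re-key every element to one common key.
rekeyed = [("Key3", values) for _key, values in pcoll]

# Step 2 (beam.GroupByKey plus a flattening beam.MapTuple): merge value lists per key.
merged = {}
for key, values in rekeyed:
    merged.setdefault(key, []).extend(values)

print(merged)  # {'Key3': ['file1', 'file2', 'file3', 'file4']}
```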
|
<python><concatenation><apache-beam>
|
2023-04-01 11:02:08
| 1
| 591
|
Manuel
|
75,905,765
| 5,302,323
|
Identify Potential Duplicates in Two Dataframes Even if Dates are Not the Same (7d range)
|
<p>I have two dataframes with repeated transactions, but the same transaction may have two different dates in each (it appears within a 7-day time window).</p>
<p>I am trying to isolate repeated transactions based on a date range of +/- 7d but am not able to do it.</p>
<p>The code below simply looks at 'Date', 'Amount' and (if date and amount match) it compares 'Category' and highlights if the same transaction has been categorised differently.</p>
<p>Could you please help me to modify this code so that it compares Date Ranges, not a spot Date and still identifies duplicates that appear within a 7d window?</p>
<pre><code>import pandas as pd
# convert the 'Date' column of both dataframes to datetime objects
df1['Date'] = pd.to_datetime(df1['Date'])
df2['Date'] = pd.to_datetime(df2['Date'])
# merge the two dataframes based on 'Amount' and 'Date'
merged_df = pd.merge(df1, df2, on=['Amount', 'Date'], how='outer', suffixes=['_df1', '_df2'])
# create a new column 'Differences' to store the differences in category
merged_df['Differences'] = ''
# iterate over the rows of the merged dataframe
for index, row in merged_df.iterrows():
# if the row is unique to df1
if pd.isnull(row['Category_df2']):
merged_df.at[index, 'Differences'] = 'Unique to first dataframe'
# if the row is unique to df2
elif pd.isnull(row['Category_df1']):
merged_df.at[index, 'Differences'] = 'Unique to second dataframe'
# if the row is present in both dataframes
else:
# if the categories are different
if row['Category_df1'] != row['Category_df2']:
merged_df.at[index, 'Differences'] = 'Category is different'
# if the categories are the same
else:
merged_df.at[index, 'Differences'] = 'Category is the same'
# print the merged dataframe with the differences highlighted in the 'Differences' column
print(merged_df)
</code></pre>
|
<python><loops><python-datetime>
|
2023-04-01 10:59:13
| 1
| 365
|
Cla Rosie
|
75,905,744
| 6,619,692
|
Understanding slicing behavior in custom PyTorch Dataset class with tuple return type in __getitem__ method
|
<p>I wrote the following dataset class:</p>
<pre><code>from torch.utils.data import Dataset
import json
def read_dataset(path: str) -> tuple[list[list[str]], list[list[str]]]:
tokens_s, labels_s = [], []
with open(path) as f:
for line in f:
data = json.loads(line)
assert len(data["tokens"]) == len(data["labels"])
tokens_s.append(data["tokens"])
labels_s.append(data["labels"])
assert len(tokens_s) == len(labels_s)
return tokens_s, labels_s
class EventDetectionDataset(Dataset):
def __init__(self, path: str) -> None:
self.tokens, self.labels = read_dataset(path)
def __len__(self) -> int:
return len(self.tokens)
def __getitem__(self, index) -> tuple[list[str], list[str]]:
return self.tokens[index], self.labels[index]
</code></pre>
<p>As you can see, we have that <code>EventDetectionDataset.__getitem__(idx) -> tuple[list[str], list[str]]</code></p>
<p>Why when I print the first two items of the dataset, do I get a tuple of lists of lists of strings and not a list of tuples of lists of strings?</p>
<pre><code>train = EventDetectionDataset("path/to/data/train.jsonl")
train = train[:2]
</code></pre>
<p>Returns:</p>
<pre><code>(
[
['Hard', 'Rock', 'Hell', 'III', ':', 'The', 'Vikings', 'Ball', '.'],
['Casualties', 'and', 'damage', 'were', 'severe', 'on', 'both', 'sides', ',', 'and', 'the', 'defiance', 'of', 'the', 'French', 'ship', 'was', 'celebrated', 'in', 'both', 'countries', 'as', 'a', 'brave', 'defence', 'against', 'overwhelming', 'odds', '.']
],
[
['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O'],
['B-SCENARIO', 'O', 'B-CHANGE', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-ACTION', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
]
)
</code></pre>
<p>In other words, why do I get an object of type <code>tuple[list[list[str]], list[list[str]]]</code> and not an object of type <code>list[tuple[list[str], list[str]]]</code>?</p>
<p>It seems that slicing a dataset that returns a tuple from its <code>__getitem__</code> method causes the individual elements in the returned tuple to be concatenated into a list. Is this right? Why is this behaviour modeled?</p>
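A minimal sketch (independent of PyTorch) reproduces the behaviour: `train[:2]` calls `__getitem__` exactly once with a `slice` object, each stored list is sliced wholesale, and the single returned tuple is what you see. Nothing in the `Dataset` base class turns a slice into a list of per-index tuples.

```python
class Demo:
    def __init__(self):
        self.tokens = [["a"], ["b"], ["c"]]
        self.labels = [["O"], ["B"], ["I"]]

    def __getitem__(self, index):
        # index may be an int OR a slice; a slice slices each list wholesale.
        return self.tokens[index], self.labels[index]

d = Demo()
print(d[:2])  # ([['a'], ['b']], [['O'], ['B']]) -- one tuple of two sliced lists
print(d[0])   # (['a'], ['O'])
```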
|
<python><pytorch>
|
2023-04-01 10:55:27
| 1
| 1,459
|
Anil
|
75,905,638
| 3,251,645
|
FastAPI responds with required field is missing
|
<p>I'm following a simple tutorial from the FastAPI docs. I was already using SQLAlchemy in this project just added the fastapi dependency and trying to run it, here's my code:</p>
<pre><code>import re
import json
import copy
import traceback
import urllib.parse
from models import *
from mangum import Mangum
from datetime import datetime
from sqlalchemy.orm import Session
from sqlalchemy import *
from sqlalchemy import create_engine
from collections import defaultdict
from fastapi import FastAPI, Depends, Request, Response
app = FastAPI()
@app.middleware("http")
async def db_session_middleware(request, call_next):
response = Response("Internal server error", status_code=500)
try:
engine = create_engine(
"some db"
)
base.metadata.create_all(engine)
request.state.db = Session(engine)
response = await call_next(request)
finally:
request.state.db.close()
return response
def get_db(request):
return request.state.db
@app.get("/")
def get_root():
return {"Status": "OK"}
@app.get("/products/{sku}")
def get_product(sku, db=Depends(get_db)):
pass
@app.get("/products", status_code=200)
def get_products(page: int = 1, page_size: int = 50, db: Session = Depends(get_db)):
try:
result, sku_list = [], []
for row in (
db.query(Product, Image)
.filter(Product.sku == Image.sku)
.limit(page_size)
.offset(page * page_size)
):
if row[0].sku not in sku_list:
result.append(
{
"sku": row[0].sku,
"brand": row[0].brand,
"image": row[1].image_url,
"title": row[0].product_title,
"price": row[0].original_price,
"reviewCount": row[0].total_reviews,
"rating": row[0].overall_rating,
}
)
sku_list.append(row[0].sku)
print(f"Result: {result}")
return {"body": {"message": "Success", "result": result}, "statusCode": 200}
except Exception as err:
print(traceback.format_exc(err))
return {
"body": {"message": "Failure", "result": traceback.format_exc(err)},
"statusCode": 500,
}
</code></pre>
<p>When I hit the products endpoint using this url:</p>
<pre><code>http://127.0.0.1:8000/products?page=1&page_size=100
</code></pre>
<p>I'm getting this response:</p>
<pre><code>{"detail":[{"loc":["query","request"],"msg":"field required","type":"value_error.missing"}]}
</code></pre>
<p>I'm not sure what it means by <code>query</code> and <code>request</code> missing in the response. What am I doing wrong?</p>
|
<python><fastapi><starlette>
|
2023-04-01 10:32:03
| 3
| 2,649
|
Amol Borkar
|
75,905,429
| 4,876,058
|
Django - Insert data into child table once the parent record is created
|
<p>I am using <code>Django REST Framework</code> and facing a problem during inserting data into the child table. There are 2 models named <code>Card</code> and <code>ContactName</code> that have the following fields. The <code>Card</code> has a relation with <code>ContactName</code> via a foreign key field name <code>card</code>.</p>
<p><code>models.py</code>:</p>
<pre><code>class Card(models.Model):
image = models.ImageField(upload_to='images', max_length=255, null=True, blank=True)
filename = models.CharField(max_length=255, null=True, blank=True)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
def __str__(self):
return self.filename
class ContactName(models.Model):
first_name = models.CharField(max_length=255, null=True, blank=True)
last_name = models.CharField(max_length=255, null=True, blank=True)
confidence = models.FloatField(default=0)
card = models.ForeignKey(Card, on_delete=models.CASCADE, related_name='contact_name')
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
def __str__(self):
return self.first_name
</code></pre>
<p><code>serializers.py</code> file:</p>
<pre><code>class ContactNameSerializer(serializers.ModelSerializer):
class Meta:
model = ContactName
fields = ['id', 'first_name', 'last_name', 'confidence', 'card', 'created_at', 'updated_at']
class CardSerializer(serializers.ModelSerializer):
contact_name = ContactNameSerializer(many=True, read_only=True)
class Meta:
model = Card
fields = ['id', 'image', 'filename', 'contact_name', 'created_at', 'updated_at']
</code></pre>
<p>In <code>ViewSet</code> once the card record is created, I also want to add the following <code>ContactName</code> <code>JSON</code> data into the child table, set the reference <em>(card_id)</em> as an id of that card, and return the newly added record in the response.</p>
<p><code>e.g.</code></p>
<pre><code>[
{
"first_name": "John",
"last_name": "Doe",
"confidence": 0.9
},
{
"first_name": "Doe",
"last_name": "John",
"confidence": 0.5
}
]
</code></pre>
<p><code>views.py</code>:</p>
<pre><code>class CardViewSet(viewsets.ModelViewSet):
parser_classes = (MultiPartParser, FormParser)
queryset = Card.objects.all()
serializer_class = CardSerializer
def create(self, request):
cards_serializer = CardSerializer(data=request.data)
if cards_serializer.is_valid():
cards_serializer.save()
# insert the record into the child table and set the reference (card_id) as an id
json = [
{
"first_name": "John",
"last_name": "Doe",
"confidence": 0.9
},
{
"first_name": "Doe",
"last_name": "John",
"confidence": 0.5
}
]
return Response(cards_serializer.data, status=status.HTTP_201_CREATED)
else:
return Response(cards_serializer.errors, status=status.HTTP_400_BAD_REQUEST)
</code></pre>
<p>I really appreciate the help. The final response will be something like this:</p>
<pre><code>{
"id": 10,
"image": "/media/abc.jpg",
"filename": "abc.jpg",
"contact_names": [
{
"id": 1,
"first_name": "John",
"last_name": "Doe",
"confidence": 0.9,
"card_id": 10
},
{
"id": 2,
"first_name": "Doe",
"last_name": "John",
"confidence": 0.5,
"card_id": 10
}
]
}
</code></pre>
|
<python><django><django-rest-framework>
|
2023-04-01 09:49:13
| 1
| 1,019
|
Ven Nilson
|
75,905,258
| 10,620,788
|
Speeding up for loops in pySpark
|
<p>I need a hand at solving this problem. I have a for loop that iterates on a list and builds a linear model for each item on that list. I have built a python for loop, but my list of items can get very lengthy and I know I am probably not taking advantage of all Spark has to offer. So here is my code:</p>
<pre><code>for i in items:
df_item = df.where(col("items")==i)
vectorAssembler = VectorAssembler(inputCols = ['x'], outputCol = 'features')
  vector_df = vectorAssembler.transform(df_item)
premodel_df = vector_df.select(['features', 'Y'])
lr = LinearRegression(featuresCol = 'features', labelCol='y_pred', maxIter=10, regParam=0, elasticNetParam=0)
lr_model = lr.fit(premodel_df)
r_squared = lr_model.summary.r2
item = i
lr_info = (Row(item=i
,r2=r_squared)
)
  lm_results = lm_results.union(lr_info)
</code></pre>
<p>I therefore need some examples that will help speed up the process.</p>
|
<python><apache-spark><pyspark><databricks><linear-regression>
|
2023-04-01 09:12:57
| 1
| 363
|
mblume
|
75,905,219
| 839,837
|
How to send a data from ESP8266 (Arduino) to PC (Python) over Wifi
|
<p>I have an ESP8266 measuring a voltage via its analog port (1).
I have the ESP8266 successfully connecting to my local WiFi network.
I can successfully command the ESP8266 over the local WiFi with Python, using simple requests.get commands, which I can then action.</p>
<pre><code>import requests
try:
r = requests.get("http://192.168.0.xxx/1/voltage/")
print(f"Status Code: {r.status_code}")
except:
print("Failed to send message")
</code></pre>
<p>Where I am stuck is getting the ESP8266 to send the voltage value it acquired back to my PC over the WiFi, and how to create the correct Python code to listen for and collect the data.
I've seen explanations that suggest using an API and sending the data to cloud servers, which I don't want to do if possible.
I've seen examples that suggest setting up socket listeners; unfortunately I couldn't follow them that well, which left me copying code that appears to start a listener, but I couldn't find a way to get the ESP8266 to send the data so it could be captured, or tell if the Python code is even working.</p>
<pre><code>import socket
serversocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
serversocket.bind(("192.168.0.XX", XXXX))
serversocket.listen()
(clientsocket, address) = serversocket.accept()
buffer = ""
while 1:
data = clientsocket.recv(128)
datastr = buffer + data.decode('utf-8')
split = datastr.split("\n")
buffer = split[-1]
print(split[:-1])
</code></pre>
<p>I would probably prefer something simple like a POST if possible.</p>
<p>So basically I'd like to write something in Python that can collect data sent from my ESP8266 directly over my LAN, as well as what ESP8266 code (I'm using C through the Arduino IDE) is required to send the data correctly from the ESP8266 to be collected by the PC Python code.</p>
<p>Any help would be much appreciated. Thank you.</p>
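If a simple POST is preferred, here is a hedged sketch of the PC side using only the Python standard library (the port and URL path are assumptions; on the Arduino side, the ESP8266HTTPClient library's POST call would be the counterpart, details omitted here):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class VoltageHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the POSTed body (e.g. "voltage=3.14") and acknowledge it.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode("utf-8")
        print(f"Received: {body}")
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the console quiet; remove this method to see access logs

# On the PC, bind to all interfaces so the ESP8266 can reach it on the LAN:
#     HTTPServer(("0.0.0.0", 8080), VoltageHandler).serve_forever()
```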
|
<python><python-3.x><arduino><esp8266><arduino-esp8266>
|
2023-04-01 09:07:03
| 1
| 727
|
Markus
|
75,905,137
| 728,438
|
Measuring the dB of a frequency after FFT
|
<p>I have a wavfile that is read and fft'd. Sampling frequency is 22050Hz.</p>
<pre><code>fs, y = scipy.io.wavfile.read('StarWars60.wav')
N = len(y)
yf = rfft(y)
xf = rfftfreq(N, 1/fs)
</code></pre>
<p>y is int16 and takes range [-32768,32767]
yf has abs values that are typically around 2000000</p>
<p>for each of the yf values, which correspond to the fft coefficient of a specific frequency at a specific sample, I'm trying to scale it to some dB measurement so that I can make comparisons across frequencies and times. It does not have to be an exact interpretation of dB, just one that is appropriately scaled (with a range from 0 to 120)</p>
<p>I think the rough scalings involved are:</p>
<ol>
<li>taking amplitude</li>
<li>intensity = amplitude**2</li>
<li>'dB' = 10 * log(intensity)</li>
</ol>
<p>I am thus doing it like</p>
<pre><code>amp = np.abs(yf[i])
intens = amp**2
d = 10*np.log(intens)
</code></pre>
<p>but getting numbers for d like 200 for the 10000Hz range, and 300 for the 50Hz range.</p>
<p>What am I doing wrong?</p>
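Two details worth checking (hedged, since the exact calibration depends on your chosen reference): `np.log` is the natural logarithm while decibels use base 10, and the raw rFFT magnitude grows with both the sample count and the full-scale amplitude, so it is usually normalized before conversion. A sketch with an assumed int16 full-scale reference:

```python
import numpy as np

def to_db(fft_bin_value, n_samples, full_scale=32768.0):
    # Normalize the rFFT magnitude by N and full scale, then convert:
    # 20*log10(amplitude ratio) equals 10*log10(intensity ratio).
    amp = np.abs(fft_bin_value) / (n_samples * full_scale)
    return 20 * np.log10(amp + 1e-12)  # small epsilon avoids log10(0)

# A bin magnitude of 2_000_000 from a 1-second clip at 22050 Hz:
print(round(to_db(2_000_000, 22050), 1))  # roughly -51 dB relative to full scale
```

Offsetting by a fixed amount (e.g. adding 120) would map this onto the 0-120 range mentioned in the question.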
|
<python><audio><signal-processing><fft>
|
2023-04-01 08:52:21
| 0
| 399
|
Ian Low
|
75,905,030
| 20,443,528
|
How to filter a QuerySet based on whether the objects in it are present in another model?
|
<p>I have a QuerySet called time_slots which contains objects of model class TimeSlot.</p>
<pre><code>time_slots = <QuerySet [
<TimeSlot: Room: Number: 1, category: Regular, capacity: 4, advance: 12, manager: anshul, from: 01:00:00, till: 02:00:00>,
<TimeSlot: Room: Number: 1, category: Regular, capacity: 4, advance: 12, manager: anshul, from: 04:07:00, till: 06:33:00>,
<TimeSlot: Room: Number: 1, category: Regular, capacity: 4, advance: 12, manager: anshul, from: 09:22:00, till: 10:55:00>
]>
</code></pre>
<p>models.py</p>
<pre><code>class Room(models.Model):
class Meta:
ordering = ['number']
number = models.PositiveSmallIntegerField(
validators=[MaxValueValidator(1000), MinValueValidator(1)],
primary_key=True
)
CATEGORIES = (
('Regular', 'Regular'),
('Executive', 'Executive'),
('Deluxe', 'Deluxe'),
)
category = models.CharField(max_length=9, choices=CATEGORIES, default='Regular')
CAPACITY = (
(1, '1'),
(2, '2'),
(3, '3'),
(4, '4'),
)
capacity = models.PositiveSmallIntegerField(
choices=CAPACITY, default=2
)
advance = models.PositiveSmallIntegerField(default=10)
manager = models.ForeignKey(
settings.AUTH_USER_MODEL, on_delete=models. CASCADE
)
class TimeSlot(models.Model):
class Meta:
ordering = ['available_from']
room = models.ForeignKey(Room, on_delete=models.CASCADE)
available_from = models.TimeField()
available_till = models.TimeField()
"""class used when a user books a room slot."""
class Booking(models.Model):
customer = models.ForeignKey(User, on_delete=models.CASCADE)
check_in_date = models.DateField()
timeslot = models.ForeignKey(TimeSlot, on_delete=models. CASCADE)
</code></pre>
<p>Each time slot can be either booked or vacant. In order to find if it is booked or vacant, I do the following-</p>
<pre><code>if request.session['occupancy'] == '':
for time_slot in time_slots:
try:
Booking.objects.get(check_in_date=request.session['date'], timeslot=time_slot)
time_slot.occupancy = "Booked"
except Exception:
time_slot.occupancy = "Vacant"
elif request.session['occupancy'] == 'Vacant':
for time_slot in time_slots:
try:
Booking.objects.get(check_in_date=request.session['date'], timeslot=time_slot)
time_slot.delete()
except Exception:
time_slot.occupancy = "Vacant"
elif request.session['occupancy'] == 'Booked':
for time_slot in time_slots:
try:
Booking.objects.get(check_in_date=request.session['date'], timeslot=time_slot)
time_slot.occupancy = "Booked"
except Exception:
time_slot.delete()
</code></pre>
<p>I have to render the timeslots in the HTML according to its occupancy status which can be either Any (''), Booked or Vacant.</p>
<p>I know the above code will not work, but I just wanted to show the logic. I want to know how I can render the time_slots in the HTML based on occupancy?</p>
|
<python><django><django-models><django-queryset>
|
2023-04-01 08:29:42
| 1
| 331
|
Anshul Gupta
|
75,904,876
| 4,045,275
|
conda warns me I should update conda, but the update keeps failing
|
<h2>The issue</h2>
<p>I have an Anaconda distribution that was installed on a Windows PC about 18 months ago. I updated the whole distribution with <code>conda update --all</code>, but conda remains stuck at version 4.12.0.
When using conda, I often get the message:</p>
<pre><code>==> WARNING: A newer version of conda exists. <==
current version: xyz1
latest version: xyz2
Please update conda by running
$ conda update -n base conda
</code></pre>
<h2>What I have tried</h2>
<p><code>conda update conda</code> tells me that all requested packages are already installed</p>
<p><code>conda update -n base conda</code> tells me all requested packages are already installed</p>
<p><code>conda install conda=23.3.1</code> doesn't work: it fails with the current repodata, then tries the next (what does this mean? I only have 'default' as a channel in my .condarc file), but after 2 hours it is still stuck at "solving environment"</p>
<h2>What I have researched</h2>
<p>I have found many posts on how to use conda in general, e.g. <a href="https://stackoverflow.com/questions/70365296/how-to-use-conda-update-n-base-conda-properly">How to use "conda update -n base conda" properly</a> but they are not about what would prevent you from updating conda itself</p>
<h2>My interpretation</h2>
<p>Maybe there is some package which is incompatible with conda > 4.12.0 ? Is there a way to find out which package it is? Or maybe it is just some bug in conda?</p>
|
<python><anaconda><conda>
|
2023-04-01 07:50:08
| 1
| 9,100
|
Pythonista anonymous
|
75,904,821
| 7,745,011
|
Max retries exceeded with url: / (Caused by SSLError(FileNotFoundError(2, 'No such file or directory'))) only during debug
|
<p>I am using minio as follows (example):</p>
<pre><code>minio_client = Minio(
endpoint="my.minio.com:10092",
access_key="minio",
secret_key="minio123!"
)
buckets = minio_client.list_buckets()
for bucket in buckets:
print(bucket.name)
</code></pre>
<p>Debugging the code above with VSCode (Python 3.11 / Miniconda - a new environment with only minio installed) I get the following error:</p>
<blockquote>
<p>File "C:\Path\to\Miniconda3\envs\minio\Lib\site-packages\urllib3\util\retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='my.minio.com', port=10092): Max retries exceeded with url: / (Caused by SSLError(FileNotFoundError(2, 'No such file or directory')))</p>
</blockquote>
<p>I have double checked address and credentials several times, the minio is accessible from other SDKs (I have a different application using the C# SDK) and the WebUI. Additionally if I run the program from the terminal with the same Miniconda environment activated, it works flawlessly. What could be the reason for this issue?</p>
<p><strong>EDIT</strong></p>
<p>Just tested the same code with PyCharm and there it works as well. So the issue definitely has something to do with VSCode. Additionally running the script in VSCode without the debugger works as well.</p>
|
<python><visual-studio-code><vscode-debugger><minio><urllib3>
|
2023-04-01 07:36:07
| 1
| 2,980
|
Roland Deschain
|
75,904,795
| 17,580,381
|
Performance of reversed() function compared to reversing slice
|
<p>This is very similar to <a href="https://stackoverflow.com/questions/74998392/python-reverse-vs-1-slice-performance">this question</a> but there's a slight difference in my question.</p>
<p>I have two functionally identical functions as follows:</p>
<pre><code>def count1(text):
c = 0
for i, j in zip(text, reversed(text)):
if i == j:
c += 1
return c
def count2(text):
c = 0
for i, j in zip(text, text[::-1]):
if i == j:
c += 1
return c
</code></pre>
<p>I would not expect <em>count2()</em> to perform as well as <em>count1()</em>, on the grounds that <em>count1()</em> creates no new sequence: <code>reversed()</code> returns just an iterator, while the slice in <em>count2()</em> copies the string.</p>
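For concreteness, a small sketch of the difference being measured (an illustration, not a benchmark): `reversed()` yields a lazy iterator over the original string, while `[::-1]` materializes a full reversed copy up front.

```python
s = "abcdef"

r = reversed(s)   # lazy iterator; characters are produced on demand
sl = s[::-1]      # a brand-new reversed string, copied immediately

print(type(r).__name__)   # 'reversed'
print(sl)                 # 'fedcba'
print("".join(r) == sl)   # True -- same characters either way
```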
<p>I have determined empirically that this is true for short strings. However, as the length of the strings increase, the performance characteristics seem to change.</p>
<p>Here's the test rig:</p>
<pre><code>from timeit import timeit
from random import choices
from string import ascii_lowercase
for k in range(2, 21, 2):
print(f'{k=}')
text = ''.join(choices(ascii_lowercase, k=k))
for func in count1, count2:
print(func.__name__, f'{timeit(lambda: func(text)):.4f}')
</code></pre>
<p>Now consider the output and note that <em>count1()</em> is faster than <em>count2()</em> for string lengths less than 12. As the string length increases <em>count2()</em> runs faster.</p>
<p>The differences are minimal and unlikely to ever be critical. It is however intriguing and I'd like to understand why this might be the case.</p>
<p><strong>Output:</strong></p>
<pre><code>k=2
count1 0.2870
count2 0.3247
k=4
count1 0.3454
count2 0.3740
k=6
count1 0.4228
count2 0.4518
k=8
count1 0.4799
count2 0.4990
k=10
count1 0.5382
count2 0.5584
k=12
count1 0.6406
count2 0.6426
k=14
count1 0.6792
count2 0.6555
k=16
count1 0.7593
count2 0.7480
k=18
count1 0.8595
count2 0.8335
k=20
count1 0.8538
count2 0.8121
</code></pre>
<p><strong>Notes:</strong></p>
<pre><code>macOS -> 13.3
Python -> 3.11.2
CPU -> 3GHz 10-Core Intel Xeon W
RAM -> 32GB
</code></pre>
|
<python><performance>
|
2023-04-01 07:28:45
| 1
| 28,997
|
Ramrab
|
75,904,780
| 11,746,588
|
Takes long time while building image on python wit messaage : Building wheel for pandas (pyproject.toml): still running
|
<p>I have a problem while building a Docker image for a Python app:
the execution step below takes a long time, around 20 minutes:</p>
<pre><code>Building wheel for pandas (pyproject.toml): still running...
Building wheel for pandas (pyproject.toml): still running...
Building wheel for pandas (pyproject.toml): still running...
Building wheel for pandas (pyproject.toml): still running...
Building wheel for pandas (pyproject.toml): still running...
</code></pre>
<p><code>Dockerfile</code>:</p>
<pre><code>FROM python:3.11.2-buster
WORKDIR /app
COPY . .
RUN pip install -r /app/requirements.txt
CMD ["uvicorn", "main:app", "--host=0.0.0.0", "--port=80"]
</code></pre>
<p><code>requirements.txt</code>:</p>
<pre><code>fastapi
uvicorn
pydantic[dotenv]
requests
python-dotenv==0.19.2
pandas==1.4.3
numpy==1.24.2
scikit-learn==1.2.2
</code></pre>
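A likely cause worth verifying (an assumption, not confirmed by the question): pandas 1.4.3 predates Python 3.11, so on the `python:3.11` base image pip finds no prebuilt `cp311` wheel and compiles pandas from source, which would explain the long `Building wheel for pandas` phase. Either downgrading the base image or bumping pandas should restore wheel installs:

```dockerfile
# Option A (hypothetical): use a Python version that pandas 1.4.3 ships wheels for
FROM python:3.10-slim-buster

# Option B: keep python:3.11 and bump pandas in requirements.txt
# to a release with cp311 wheels, e.g. pandas>=1.5.2
```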
|
<python><pandas>
|
2023-04-01 07:23:41
| 1
| 427
|
wahyu eko hadi saputro
|
75,904,709
| 972,202
|
Method to Sort 3d data
|
<p>Is there a method in numpy that allows me to sort these 3d vectors in ascending order?</p>
<p>For example; I have the following input array and I'd like the following output:</p>
<pre><code>input_arr = np.array( [
[[255,0,3],
[255,4,100],
[255,2,3],
[255,3,3],
[0,1,3],
] ]
, dtype='uint8')
# Sort input_arr to produce the below
output_arr = np.array( [
[[0,1,3],
[255,0,3],
[255,2,3],
[255,3,3],
[255,4,100],
] ]
, dtype='uint8')
</code></pre>
<p>I have tried the below but it does not produce the result I wanted above. Instead it produces the below.</p>
<pre><code>output_arr2 = np.sort( input_arr, axis=0)
# Results in the below
[[[255 0 3]
[255 4 100]
[255 2 3]
[255 3 3]
[ 0 1 3]]]
</code></pre>
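For reference, `np.sort(axis=0)` sorts each column independently, which scrambles the rows. One way to sort the rows as whole vectors (lexicographically by column 0, then 1, then 2) is `np.lexsort` — a sketch, not the only approach:

```python
import numpy as np

input_arr = np.array([[[255, 0, 3],
                       [255, 4, 100],
                       [255, 2, 3],
                       [255, 3, 3],
                       [0, 1, 3]]], dtype='uint8')

# np.lexsort treats the LAST key as the primary one, so pass the
# columns in reverse order to sort by col 0, then col 1, then col 2.
rows = input_arr[0]
order = np.lexsort((rows[:, 2], rows[:, 1], rows[:, 0]))
output_arr = input_arr[:, order]
print(output_arr)
```

Because `order` is an index array, the rows are moved as intact 3-vectors instead of being sorted element-wise.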
|
<python><numpy>
|
2023-04-01 07:02:32
| 1
| 26,168
|
sazr
|
75,904,692
| 3,423,825
|
How to avoid IntegrityError issue with update_or_create?
|
<p>I would appreciate it if someone could explain why an <code>IntegrityError</code> exception is being thrown here, and how to avoid it.</p>
<p>When an object already exists, isn't the <code>update_or_create</code> method supposed to update it ?</p>
<p><strong>models.py</strong></p>
<pre><code>class OHLCV(TimestampedModel):
market = models.ForeignKey(Market, on_delete=models.CASCADE, related_name='candles', null=True)
index = models.ForeignKey(Index, on_delete=models.CASCADE, related_name='candles', null=True)
timeframe = models.CharField(max_length=10)
open, high, low, close, volume = [models.FloatField(null=True) for i in range(5)]
datetime = models.DateTimeField(null=True)
class Meta:
verbose_name_plural = "OHLCV"
unique_together = [['market', 'timeframe', 'datetime'],
['index', 'timeframe', 'datetime']]
</code></pre>
<p><strong>tasks.py</strong></p>
<pre><code>@app.task
def bulk_update_ohlcv(timeframe):
for obj in Market.objects.filter(active=True):
if obj.exchange.is_status_ok():
update_ohlcv.delay('market', obj.pk, timeframe)
@app.task
def update_ohlcv(self, type, pk, timeframe):
[code here]
if obj.__class__ == Market:
ohlcv, created = OHLCV.objects.update_or_create(market=obj,
timeframe=timeframe,
datetime=dt,
open=candle[1],
defaults=defaults
)
elif obj.__class__ == Index:
ohlcv, created = OHLCV.objects.update_or_create(index=obj,
timeframe=timeframe,
datetime=dt,
open=candle[1],
defaults=defaults
)
</code></pre>
<p>Error:</p>
<pre><code>-celery_worker-1 | 2023-04-01T06:55:47.710298043Z IntegrityError: duplicate key value violates unique constraint
-celery_worker-1 | 2023-04-01T06:55:47.710302260Z "market_ohlcv_market_id_timeframe_datetime_8ffd84de_uniq"
-celery_worker-1 | 2023-04-01T06:55:47.710306227Z DETAIL: Key (market_id, timeframe, datetime)=(84, 5m, 2023-03-31 21:20:00+00)
-celery_worker-1 | 2023-04-01T06:55:47.710310646Z already exists.
</code></pre>
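Note that <code>update_or_create</code> uses every non-<code>defaults</code> keyword argument as a lookup field. Here <code>open=candle[1]</code> is part of the lookup, so a row with the same <code>(market, timeframe, datetime)</code> but a different <code>open</code> is not found; the method then tries to create a new row and hits the unique constraint. A minimal pure-Python sketch of the semantics (a dict stands in for the table; this is not Django itself):

```python
def update_or_create(table, unique_key, defaults=None, **lookup):
    """Mimic Django's update_or_create against a dict keyed by the
    unique constraint: lookup kwargs select the row, defaults update it."""
    defaults = defaults or {}
    for row in table.values():
        if all(row.get(k) == v for k, v in lookup.items()):
            row.update(defaults)
            return row, False
    key = tuple(lookup.get(f) for f in unique_key)
    if key in table:
        # The unique key already exists but the lookup did not match
        # (e.g. 'open' differed) -> the INSERT would violate the constraint.
        raise RuntimeError("IntegrityError: duplicate key " + str(key))
    row = {**lookup, **defaults}
    table[key] = row
    return row, True

table = {}
unique = ('market', 'timeframe', 'datetime')
# First call creates the row with open=1.0.
update_or_create(table, unique, market=84, timeframe='5m',
                 datetime='2023-03-31 21:20', open=1.0,
                 defaults={'close': 2.0})
# Second call: 'open' differs, so the lookup misses and the create collides.
try:
    update_or_create(table, unique, market=84, timeframe='5m',
                     datetime='2023-03-31 21:20', open=1.5,
                     defaults={'close': 2.5})
except RuntimeError as exc:
    print(exc)
```

The fix suggested by this model is to move <code>open=candle[1]</code> into <code>defaults</code>, so the lookup matches exactly the fields of the unique constraint.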
|
<python><django>
|
2023-04-01 06:58:25
| 1
| 1,948
|
Florent
|
75,904,509
| 10,755,032
|
Network Analysis using textnets - ValueError: negative dimensions are not allowed
|
<p>I am trying to perform network analysis on some text data after doing some sentiment analysis on it. I am using the <code>textnets</code> documentation for reference. I am getting the above-mentioned error after running the following line:</p>
<pre><code>t = tn.Textnet(corpus.tokenized())
</code></pre>
<p>Here is my full Code:</p>
<pre><code>import textnets as tn
tn.params["seed"] = 42
df = pd.read_csv(r"C:\Users\User\Downloads\archive\nltk_split.csv")
df.head()
df.drop(['Unnamed: 0.2', 'Unnamed: 0.1', 'Unnamed: 0', 'Unnamed: 15', 'Unnamed: 16', 'id', 'favorite_count', 'created_at', 'retweet_count', 'coordinates', 'score', 'neg', 'neu', 'pos', 'compound','created_at_date'], axis=1, inplace=True)
df.head()
corpus = tn.Corpus.from_df(df, doc_col='text')
t = tn.Textnet(corpus.tokenized())
t.plot(label_nodes=True,
show_clusters=True)
</code></pre>
<p>The error:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_19036\2509364554.py in <module>
----> 1 t = tn.Textnet(corpus.tokenized())
~\AppData\Roaming\Python\Python39\site-packages\textnets\network.py in __init__(self, data, min_docs, connected, remove_weak_edges, doc_attrs)
353 self._matrix = data
354 elif isinstance(data, (TidyText, pd.DataFrame)):
--> 355 self._matrix = _im_from_tidy_text(data, min_docs)
356 if remove_weak_edges:
357 pairs: pd.Series = self._matrix.stack()
~\AppData\Roaming\Python\Python39\site-packages\textnets\network.py in _im_from_tidy_text(tidy_text, min_docs)
812 .set_index("label")
813 )
--> 814 im = tt[tt["keep"]].pivot(values="term_weight", columns="term").fillna(0)
815 return IncidenceMatrix(im)
816
E:\anaconda\lib\site-packages\pandas\core\frame.py in pivot(self, index, columns, values)
7883 from pandas.core.reshape.pivot import pivot
7884
-> 7885 return pivot(self, index=index, columns=columns, values=values)
7886
7887 _shared_docs[
E:\anaconda\lib\site-packages\pandas\core\reshape\pivot.py in pivot(data, index, columns, values)
518 else:
519 indexed = data._constructor_sliced(data[values]._values, index=multiindex)
--> 520 return indexed.unstack(columns_listlike)
521
522
E:\anaconda\lib\site-packages\pandas\core\series.py in unstack(self, level, fill_value)
4155 from pandas.core.reshape.reshape import unstack
4156
-> 4157 return unstack(self, level, fill_value)
4158
4159 # ----------------------------------------------------------------------
E:\anaconda\lib\site-packages\pandas\core\reshape\reshape.py in unstack(obj, level, fill_value)
489 if is_1d_only_ea_dtype(obj.dtype):
490 return _unstack_extension_series(obj, level, fill_value)
--> 491 unstacker = _Unstacker(
492 obj.index, level=level, constructor=obj._constructor_expanddim
493 )
E:\anaconda\lib\site-packages\pandas\core\reshape\reshape.py in __init__(self, index, level, constructor)
138 )
139
--> 140 self._make_selectors()
141
142 @cache_readonly
E:\anaconda\lib\site-packages\pandas\core\reshape\reshape.py in _make_selectors(self)
186
187 selector = self.sorted_labels[-1] + stride * comp_index + self.lift
--> 188 mask = np.zeros(np.prod(self.full_shape), dtype=bool)
189 mask.put(selector, True)
190
ValueError: negative dimensions are not allowed
</code></pre>
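The traceback ends in `np.zeros(np.prod(self.full_shape))` receiving a negative size, which typically means the pivot's rows-times-columns product overflowed a 32-bit integer, i.e. the document-term incidence matrix is far too large to materialize densely. A numpy-only sketch of the mechanism (the shape is illustrative, not taken from this corpus):

```python
import numpy as np

# A pivot of 60,000 documents x 50,000 terms has 3e9 cells; computed as a
# signed 32-bit product it wraps negative, which np.zeros then rejects.
shape = (60_000, 50_000)
n_cells = np.prod(shape, dtype=np.int32)
print(n_cells)  # negative after the int32 wrap-around
```

If this is the mechanism, shrinking the matrix is the fix — for example passing a higher `min_docs` when constructing `tn.Textnet` (visible in the traceback's `__init__` signature) or trimming very rare terms from the corpus before tokenizing.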
|
<python><nlp><network-analysis>
|
2023-04-01 05:56:22
| 1
| 1,753
|
Karthik Bhandary
|
75,904,500
| 3,782,963
|
Unable to build wheels in pipenv
|
<p>I have recently moved to <code>pipenv</code> from using the traditional <code>requirements.txt</code> file, and it's not going too well. I am trying to build wheels for one of my Python modules and I always get an error while doing so. The error is:</p>
<pre><code>$ pipenv run python -m build --wheel
* Creating virtualenv isolated environment...
* Installing packages in isolated environment... (setuptools >= 40.8.0, wheel)
* Getting build dependencies for wheel...
Traceback (most recent call last):
File "C:\Users\gollaha\.virtualenvs\uHugo-eCFFAdEE\lib\site-packages\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Users\gollaha\.virtualenvs\uHugo-eCFFAdEE\lib\site-packages\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "C:\Users\gollaha\.virtualenvs\uHugo-eCFFAdEE\lib\site-packages\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "C:\Users\gollaha\AppData\Local\Temp\build-env-kshagp3u\lib\site-packages\setuptools\build_meta.py", line 338, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
File "C:\Users\gollaha\AppData\Local\Temp\build-env-kshagp3u\lib\site-packages\setuptools\build_meta.py", line 320, in _get_build_requires
self.run_setup()
File "C:\Users\gollaha\AppData\Local\Temp\build-env-kshagp3u\lib\site-packages\setuptools\build_meta.py", line 484, in run_setup
super(_BuildMetaLegacyBackend,
File "C:\Users\gollaha\AppData\Local\Temp\build-env-kshagp3u\lib\site-packages\setuptools\build_meta.py", line 335, in run_setup
exec(code, locals())
File "<string>", line 4, in <module>
ModuleNotFoundError: No module named 'toml'
</code></pre>
<p>I am using <code>toml</code> and it's installed too. Here is my <code>Pipfile</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[[source]]
url = "https://pypi.org/simple"
verify_ssl = true
name = "pypi"
[packages]
click = "<=9.0.0,>=8.0.3"
packaging = ">=21.0,<24.0"
requests = "<=3.0.0,>=2.26.0"
toml = ">=0.10.0"
pyyaml = ">=6.0,<=7.0"
psutil = ">=5.8.0,<=6.0.0"
rich = ">=10.12.0,<14.0.0"
pydantic = ">=1.8.2,<=2.0.0"
[dev-packages]
sphinx = "*"
sphinx-tabs = "*"
pytest = "*"
pytest-mock = "*"
requests-mock = "*"
pytest-subprocess = "*"
pytest-cov = "*"
codecov = "*"
coverage = "*"
pytest-runner = "*"
flake8 = "*"
pipenv = "*"
build = "*"
</code></pre>
<p>To build I did <code>pipenv run python -m build --wheel</code>, and this gives me the above error. I also tried running it in its shell via <code>pipenv shell</code>, but I still get the same error. I am not sure what mistake I am making.</p>
<p>Any help in this is appreciated.</p>
<p><strong>Update</strong></p>
<p>Here is the GitHub link of the module - <a href="https://github.com/akshaybabloo/uHugo" rel="nofollow noreferrer">https://github.com/akshaybabloo/uHugo</a></p>
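Note the traceback runs inside `build`'s *isolated* environment: `setup.py` imports `toml`, but isolation installs only what `pyproject.toml` declares (here just `setuptools >= 40.8.0, wheel`, per the log), not what is in the Pipfile. One plausible fix, assuming the project keeps importing `toml` at build time, is to declare it as a build requirement:

```toml
# pyproject.toml (sketch -- version pins are illustrative)
[build-system]
requires = ["setuptools>=40.8.0", "wheel", "toml"]
build-backend = "setuptools.build_meta"
```

Alternatively, `pipenv run python -m build --wheel --no-isolation` builds in the active virtualenv, where `toml` is already installed.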
|
<python><pipenv>
|
2023-04-01 05:52:57
| 0
| 2,835
|
Akshay
|
75,904,487
| 1,260,682
|
is there a magic method for logical and and or in python?
|
<p>I thought they were <code>__and__</code> and <code>__or__</code>, but it turns out those are for the bitwise operators, not the logical ones. Are there such methods for the logical operators?</p>
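For reference: there are no such hooks. `and`/`or` are short-circuiting control flow compiled into conditional jumps, so the only method they consult is `__bool__` (falling back to `__len__`), and they return one of the operands unchanged. A small sketch:

```python
class Flag:
    def __init__(self, value):
        self.value = value

    def __bool__(self):
        # The only hook `and`/`or` consult; __and__/__or__ are never called.
        return self.value

a, b = Flag(False), Flag(True)

# `x and y` returns x when bool(x) is False, otherwise y -- no new object.
assert (a and b) is a
assert (b and a) is a
assert (a or b) is b
```

Libraries that want overloadable boolean-style operators (numpy masks, Django `Q` objects) therefore overload `&`/`|` (`__and__`/`__or__`) instead, at the cost of losing short-circuiting.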
|
<python>
|
2023-04-01 05:46:56
| 0
| 6,230
|
JRR
|
75,904,348
| 10,086,964
|
Django Debug toolbar is not showing though `view page source` is showing debug tool's html
|
<p>I am a beginner at Django. The Django debug toolbar is not showing. I have gone through the official documentation step by step, but it did not work for me. I have tried lots of existing answers as well, and none of them worked either. Interestingly, when I go to <code>view page source</code> in the browser, it shows the debug toolbar's HTML after my page's code, but the toolbar itself does not appear.</p>
<p><em>Django version: 4.1.7</em></p>
<p><em>django-debug-toolbar: latest</em></p>
<p><strong>settings.py</strong></p>
<pre><code>from pathlib import Path
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = "django-insecure-ug*k0401%7v_i887cu4m$szp%(i=h=*9yyi&4b#71%ozksz1%6"
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
ALLOWED_HOSTS = []
INSTALLED_APPS = [
"django.contrib.admin",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
"playground",
"debug_toolbar"
]
MIDDLEWARE = [
"django.middleware.security.SecurityMiddleware",
"django.contrib.sessions.middleware.SessionMiddleware",
"django.middleware.common.CommonMiddleware",
"django.middleware.csrf.CsrfViewMiddleware",
"django.contrib.auth.middleware.AuthenticationMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
"django.middleware.clickjacking.XFrameOptionsMiddleware",
"debug_toolbar.middleware.DebugToolbarMiddleware",
]
INTERNAL_IPS = [
"127.0.0.1",
]
ROOT_URLCONF = "storefront.urls"
TEMPLATES = [
{
"BACKEND": "django.template.backends.django.DjangoTemplates",
"DIRS": [],
"APP_DIRS": True,
"OPTIONS": {
"context_processors": [
"django.template.context_processors.debug",
"django.template.context_processors.request",
"django.contrib.auth.context_processors.auth",
"django.contrib.messages.context_processors.messages",
],
},
},
]
WSGI_APPLICATION = "storefront.wsgi.application"
DATABASES = {
"default": {
"ENGINE": "django.db.backends.sqlite3",
"NAME": BASE_DIR / "db.sqlite3",
}
}
AUTH_PASSWORD_VALIDATORS = [
{
"NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
},
{
"NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
},
{
"NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
},
{
"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
},
]
LANGUAGE_CODE = "en-us"
TIME_ZONE = "UTC"
USE_I18N = True
USE_TZ = True
STATIC_URL = "static/"
DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
</code></pre>
<p><strong>urls.py</strong></p>
<pre><code>from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path("admin/", admin.site.urls),
path('playground/', include('playground.urls')),
path('__debug__/', include('debug_toolbar.urls'))
]
</code></pre>
<p>Terminal Output</p>
<pre><code>[01/Apr/2023 10:46:50] "GET / HTTP/1.1" 404 11939
[01/Apr/2023 10:46:50] "GET /static/debug_toolbar/css/toolbar.css HTTP/1.1" 304 0
[01/Apr/2023 10:46:50] "GET /static/debug_toolbar/js/toolbar.js HTTP/1.1" 304 0
[01/Apr/2023 10:46:50] "GET /static/debug_toolbar/css/print.css HTTP/1.1" 304 0
[01/Apr/2023 10:46:56] "GET /playground/hello/ HTTP/1.1" 200 9757
</code></pre>
<p><strong>View page source</strong></p>
<pre><code>
<html>
<body>
<h1>ALim</h1>
<link rel="stylesheet" href="/static/debug_toolbar/css/print.css" media="print">
<link rel="stylesheet" href="/static/debug_toolbar/css/toolbar.css">
<script type="module" src="/static/debug_toolbar/js/toolbar.js" async></script>
<div id="djDebug" class="djdt-hidden" dir="ltr"
data-store-id="c7f18a2eed4c4e3ebaa43ed570ce3619"
data-render-panel-url="/__debug__/render_panel/"
data-sidebar-url="/__debug__/history_sidebar/"
data-default-show="true"
>
<div class="djdt-hidden" id="djDebugToolbar">
<ul id="djDebugPanelList">
<li><a id="djHideToolBarButton" href="#" title="Hide toolbar">Hide »</a></li>
<li id="djdt-HistoryPanel" class="djDebugPanelButton">
<input type="checkbox" data-cookie="djdtHistoryPanel" checked title="Disable for next and successive requests">
<a href="#" title="History" class="HistoryPanel">
History
<br><small>/playground/hello/</small>
</a>
</li>
<li id="djdt-VersionsPanel" class="djDebugPanelButton">
<input type="checkbox" data-cookie="djdtVersionsPanel" checked title="Disable for next and successive requests">
<a href="#" title="Versions" class="VersionsPanel">
Versions
<br><small>Django 4.1.7</small>
</a>
</li>
<li id="djdt-TimerPanel" class="djDebugPanelButton">
<input type="checkbox" data-cookie="djdtTimerPanel" checked title="Disable for next and successive requests">
<div class="djdt-contentless">
Time
<br><small>Total: 16.06ms</small>
</div>
</li>
<li id="djdt-SettingsPanel" class="djDebugPanelButton">
<input type="checkbox" data-cookie="djdtSettingsPanel" checked title="Disable for next and successive requests">
<a href="#" title="Settings from storefront.settings" class="SettingsPanel">
Settings
</a>
</li>
<li id="djdt-HeadersPanel" class="djDebugPanelButton">
<input type="checkbox" data-cookie="djdtHeadersPanel" checked title="Disable for next and successive requests">
<a href="#" title="Headers" class="HeadersPanel">
Headers
</a>
</li>
<li id="djdt-RequestPanel" class="djDebugPanelButton">
<input type="checkbox" data-cookie="djdtRequestPanel" checked title="Disable for next and successive requests">
<a href="#" title="Request" class="RequestPanel">
Request
<br><small>say_hello</small>
</a>
</li>
<li id="djdt-SQLPanel" class="djDebugPanelButton">
<input type="checkbox" data-cookie="djdtSQLPanel" checked title="Disable for next and successive requests">
<a href="#" title="SQL queries from 0 connections" class="SQLPanel">
SQL
<br><small>0 queries in 0.00ms</small>
</a>
</li>
<li id="djdt-StaticFilesPanel" class="djDebugPanelButton">
<input type="checkbox" data-cookie="djdtStaticFilesPanel" checked title="Disable for next and successive requests">
<a href="#" title="Static files (137 found, 0 used)" class="StaticFilesPanel">
Static files
<br><small>0 files used</small>
</a>
</li>
<li id="djdt-TemplatesPanel" class="djDebugPanelButton">
<input type="checkbox" data-cookie="djdtTemplatesPanel" checked title="Disable for next and successive requests">
<a href="#" title="Templates (1 rendered)" class="TemplatesPanel">
Templates
<br><small>hello.html</small>
</a>
</li>
<li id="djdt-CachePanel" class="djDebugPanelButton">
<input type="checkbox" data-cookie="djdtCachePanel" checked title="Disable for next and successive requests">
<a href="#" title="Cache calls from 1 backend" class="CachePanel">
Cache
<br><small>0 calls in 0.00ms</small>
</a>
</li>
<li id="djdt-SignalsPanel" class="djDebugPanelButton">
<input type="checkbox" data-cookie="djdtSignalsPanel" checked title="Disable for next and successive requests">
<a href="#" title="Signals" class="SignalsPanel">
Signals
<br><small>30 receivers of 15 signals</small>
</a>
</li>
<li id="djdt-LoggingPanel" class="djDebugPanelButton">
<input type="checkbox" data-cookie="djdtLoggingPanel" checked title="Disable for next and successive requests">
<a href="#" title="Log messages" class="LoggingPanel">
Logging
<br><small>0 messages</small>
</a>
</li>
<li id="djdt-RedirectsPanel" class="djDebugPanelButton">
<input type="checkbox" data-cookie="djdtRedirectsPanel" title="Enable for next and successive requests">
<div class="djdt-contentless djdt-disabled">
Intercept redirects
</div>
</li>
<li id="djdt-ProfilingPanel" class="djDebugPanelButton">
<input type="checkbox" data-cookie="djdtProfilingPanel" title="Enable for next and successive requests">
<div class="djdt-contentless djdt-disabled">
Profiling
</div>
</li>
</ul>
</div>
<div class="djdt-hidden" id="djDebugToolbarHandle">
<div title="Show toolbar" id="djShowToolBarButton">
<span id="djShowToolBarD">D</span><span id="djShowToolBarJ">J</span>DT
</div>
</div>
<div id="HistoryPanel" class="djdt-panelContent djdt-hidden">
<div class="djDebugPanelTitle">
<button type="button" class="djDebugClose">×</button>
<h3>History</h3>
</div>
<div class="djDebugPanelContent">
<div class="djdt-loader"></div>
<div class="djdt-scroll"></div>
</div>
</div>
<div id="VersionsPanel" class="djdt-panelContent djdt-hidden">
<div class="djDebugPanelTitle">
<button type="button" class="djDebugClose">×</button>
<h3>Versions</h3>
</div>
<div class="djDebugPanelContent">
<div class="djdt-loader"></div>
<div class="djdt-scroll"></div>
</div>
</div>
<div id="SettingsPanel" class="djdt-panelContent djdt-hidden">
<div class="djDebugPanelTitle">
<button type="button" class="djDebugClose">×</button>
<h3>Settings from storefront.settings</h3>
</div>
<div class="djDebugPanelContent">
<div class="djdt-loader"></div>
<div class="djdt-scroll"></div>
</div>
</div>
<div id="HeadersPanel" class="djdt-panelContent djdt-hidden">
<div class="djDebugPanelTitle">
<button type="button" class="djDebugClose">×</button>
<h3>Headers</h3>
</div>
<div class="djDebugPanelContent">
<div class="djdt-loader"></div>
<div class="djdt-scroll"></div>
</div>
</div>
<div id="RequestPanel" class="djdt-panelContent djdt-hidden">
<div class="djDebugPanelTitle">
<button type="button" class="djDebugClose">×</button>
<h3>Request</h3>
</div>
<div class="djDebugPanelContent">
<div class="djdt-loader"></div>
<div class="djdt-scroll"></div>
</div>
</div>
<div id="SQLPanel" class="djdt-panelContent djdt-hidden">
<div class="djDebugPanelTitle">
<button type="button" class="djDebugClose">×</button>
<h3>SQL queries from 0 connections</h3>
</div>
<div class="djDebugPanelContent">
<div class="djdt-loader"></div>
<div class="djdt-scroll"></div>
</div>
</div>
<div id="StaticFilesPanel" class="djdt-panelContent djdt-hidden">
<div class="djDebugPanelTitle">
<button type="button" class="djDebugClose">×</button>
<h3>Static files (137 found, 0 used)</h3>
</div>
<div class="djDebugPanelContent">
<div class="djdt-loader"></div>
<div class="djdt-scroll"></div>
</div>
</div>
<div id="TemplatesPanel" class="djdt-panelContent djdt-hidden">
<div class="djDebugPanelTitle">
<button type="button" class="djDebugClose">×</button>
<h3>Templates (1 rendered)</h3>
</div>
<div class="djDebugPanelContent">
<div class="djdt-loader"></div>
<div class="djdt-scroll"></div>
</div>
</div>
<div id="CachePanel" class="djdt-panelContent djdt-hidden">
<div class="djDebugPanelTitle">
<button type="button" class="djDebugClose">×</button>
<h3>Cache calls from 1 backend</h3>
</div>
<div class="djDebugPanelContent">
<div class="djdt-loader"></div>
<div class="djdt-scroll"></div>
</div>
</div>
<div id="SignalsPanel" class="djdt-panelContent djdt-hidden">
<div class="djDebugPanelTitle">
<button type="button" class="djDebugClose">×</button>
<h3>Signals</h3>
</div>
<div class="djDebugPanelContent">
<div class="djdt-loader"></div>
<div class="djdt-scroll"></div>
</div>
</div>
<div id="LoggingPanel" class="djdt-panelContent djdt-hidden">
<div class="djDebugPanelTitle">
<button type="button" class="djDebugClose">×</button>
<h3>Log messages</h3>
</div>
<div class="djDebugPanelContent">
<div class="djdt-loader"></div>
<div class="djdt-scroll"></div>
</div>
</div>
<div id="djDebugWindow" class="djdt-panelContent djdt-hidden"></div>
</div>
</body>
</html>
</code></pre>
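Since the markup is injected, the middleware and the `INTERNAL_IPS` check both passed; everything just carries `djdt-hidden`, a class that `toolbar.js` normally removes on load. A stale `djdt*` hide-cookie or a cached `toolbar.js` (note the 304 responses in the log) commonly leaves it hidden, so clearing the site's cookies and hard-refreshing is worth trying first. As a blunt diagnostic, the toolbar can also be forced on unconditionally — a local-debugging sketch only, never a production setting:

```python
# settings.py sketch: force django-debug-toolbar to render regardless of
# its client-IP check. For local debugging only; never ship this.
DEBUG_TOOLBAR_CONFIG = {
    "SHOW_TOOLBAR_CALLBACK": lambda request: True,
}
```

If the toolbar appears with this override, the original issue is the show/hide state on the client; if it still does not, the problem is in the browser loading `toolbar.js`.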
|
<python><django><debugging><django-debug-toolbar>
|
2023-04-01 04:57:39
| 0
| 328
|
S. S. Saruar Jahan
|
75,904,337
| 139,150
|
Not able to return source code of any function
|
<p>I am not able to get the source code of any function in Python.
Do I need to reinstall Python? Does Python look OK in this case?</p>
<pre><code># python
Python 3.7.6 | packaged by conda-forge | (default, Mar 23 2020, 22:25:07)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> def foo(arg1,arg2):
... #do something with args
... a = arg1 + arg2
... return a
...
>>> import inspect
>>> lines = inspect.getsource(foo)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/miniforge3/lib/python3.7/inspect.py", line 973, in getsource
lines, lnum = getsourcelines(object)
File "/root/miniforge3/lib/python3.7/inspect.py", line 955, in getsourcelines
lines, lnum = findsource(object)
File "/root/miniforge3/lib/python3.7/inspect.py", line 786, in findsource
raise OSError('could not get source code')
OSError: could not get source code
</code></pre>
<hr />
<p>Update:</p>
<p>Here is the module that I am trying to use and getting the above error:</p>
<pre><code>>>> from marvin import ai_fn
>>> @ai_fn
... @ai_fn
... def rhyme(word: str) -> str:
... "Returns a word that rhymes with the input word."
...
>>> rhyme("blue")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/site-packages/marvin/bots/ai_functions.py", line 128, in ai_fn_wrapper
return_value = fn(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/marvin/bots/ai_functions.py", line 141, in ai_fn_wrapper
function_def = inspect.cleandoc(inspect.getsource(fn))
File "/usr/local/lib/python3.10/inspect.py", line 1139, in getsource
lines, lnum = getsourcelines(object)
File "/usr/local/lib/python3.10/inspect.py", line 1121, in getsourcelines
lines, lnum = findsource(object)
File "/usr/local/lib/python3.10/inspect.py", line 958, in findsource
raise OSError('could not get source code')
OSError: could not get source code
</code></pre>
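Nothing is broken in the install: `inspect.getsource` reads the *file* a function was defined in, and functions typed into the interactive interpreter (or created via `exec` on a string) have no backing file, so it always raises `OSError`. The `marvin` decorator in the second traceback fails for the same reason; it should work once the function is defined in a real `.py` module. A sketch showing the distinction:

```python
import importlib.util
import inspect
import os
import tempfile
import textwrap

# A function defined via exec/REPL has no backing file -> getsource fails.
ns = {}
exec("def foo(a, b):\n    return a + b\n", ns)
try:
    inspect.getsource(ns["foo"])
except OSError:
    print("no source for REPL/exec-defined functions")

# The same function written to an actual module file can be inspected.
src = textwrap.dedent("""\
    def foo(a, b):
        return a + b
""")
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(src)
    path = f.name
spec = importlib.util.spec_from_file_location("tmp_mod", path)
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)
code = inspect.getsource(mod.foo)
os.unlink(path)
print(code)
```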
|
<python>
|
2023-04-01 04:53:10
| 2
| 32,554
|
shantanuo
|
75,904,222
| 5,539,782
|
scraping odds with selenium after mouse over
|
<p>From this oddsportal webpage, I want to scrape the odds of the <strong>Pinnacle</strong> bookmaker only:
<a href="https://www.oddsportal.com/football/russia/premier-league/spartak-moscow-akhmat-grozny-dI0Fo2oa/#1X2;2" rel="nofollow noreferrer">https://www.oddsportal.com/football/russia/premier-league/spartak-moscow-akhmat-grozny-dI0Fo2oa/#1X2;2</a>
<a href="https://i.sstatic.net/XvwoD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XvwoD.png" alt="enter image description here" /></a></p>
<p>The data appear only when I mouse over the odds.</p>
<p>I tried this using Selenium but got an error:</p>
<pre><code>a = driver.find_element(by=By.XPATH, value='//*[@id="app"]/div/div[1]/div/main/div[2]/div[4]/div[1]/div/div[./a[contains(text(),"Pinnacle")]]/div[2]')
from selenium.webdriver.common.action_chains import ActionChains
hover = ActionChains(driver).move_to_element(a)
hover.perform()
</code></pre>
<p>I tried to find a solution with <strong>bs4</strong> and <strong>requests</strong> but I can't find a link to scrape it from a particular page:
<a href="https://i.sstatic.net/awXQl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/awXQl.png" alt="enter image description here" /></a></p>
<p>for example, in this page, data can be found in <a href="https://www.oddsportal.com/feed/match-event/1-1-dI0Fo2oa-1-2-yj972.dat" rel="nofollow noreferrer">https://www.oddsportal.com/feed/match-event/1-1-dI0Fo2oa-1-2-yj972.dat</a>
I can use this link with requests, but the issue is that when I automate the scraping from the base URL, the <code>dI0Fo2oa</code> part is known (it comes from the webpage URL), while I don't know how (or where) to find the other part: <code>yj972</code></p>
|
<python><selenium-webdriver><web-scraping>
|
2023-04-01 04:09:07
| 1
| 547
|
Khaled Koubaa
|
75,904,146
| 6,296,626
|
Removing ANSI escape sequence in Python
|
<p>I am trying to remove ANSI escape sequences from a string.</p>
<p>I have tried all solutions proposed in <a href="https://stackoverflow.com/questions/14693701">this post</a> but none of them worked, thus I concluded that my case is a bit different.</p>
<p>I have the following code that should have replaced all ANSI escape sequences:</p>
<pre class="lang-py prettyprint-override"><code>print("ascii: " + ascii(string))
x = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])').sub('', string)
y = re.compile(br'(?:\x1B[@-Z\\-_]|[\x80-\x9A\x9C-\x9F]|(?:\x1B\[|\x9B)[0-?]*[ -/]*[@-~])').sub(b'', string.encode("utf-8"))
print("not escaped X: " + ascii(x))
print("not escaped Y: " + ascii(y))
</code></pre>
<p>however, I got the following output:</p>
<pre><code>ascii: '\x1b[m>....\x1b[?1h\x1b=\x1b[?2004h>....\r\x1b[K\x1b[32m[03:33:57] blabla'
not escaped X: '>....\x1b=>....\r[03:33:57] blabla'
not escaped Y: b'>....\x1b=>....\r[03:33:57] blabla'
</code></pre>
<p>How can I replace all the ANSI escape sequences so the expected result would be: <code>[03:33:57] blabla</code>?</p>
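The leftover `\x1b=` is a two-character sequence (DECKPAM) whose final byte `=` falls outside the `[@-Z\\-_]` class the regex allows after ESC; widening that class removes it. A sketch (the `>....` fragments are prompt text, not ANSI, so they are untouched by the regex):

```python
import re

string = ('\x1b[m>....\x1b[?1h\x1b=\x1b[?2004h>....'
          '\r\x1b[K\x1b[32m[03:33:57] blabla')

# Allow '=', '>', '<' after ESC in addition to the @-Z \ - _ range,
# plus the standard CSI form: ESC [ params intermediates final-byte.
ansi = re.compile(r'\x1B(?:[@-Z\\-_=><]|\[[0-?]*[ -/]*[@-~])')
clean = ansi.sub('', string).replace('\r', '')
print(clean)
```

The `\r` is stripped separately because it is a plain control character, not an escape sequence; remove the leading `>....` prompt fragments with ordinary string handling if they are unwanted.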
|
<python><regex><escaping><ansi-escape>
|
2023-04-01 03:39:09
| 1
| 1,479
|
Programer Beginner
|
75,904,104
| 11,299,809
|
Pandas value_counts index output looks weird
|
<pre><code>df = pd.read_csv('./data/flights_2015.csv', low_memory=False)
print('Dataframe dimensions:', df.shape)
</code></pre>
<pre><code>Dataframe dimensions: (5819079, 31)
</code></pre>
<p>I tried to count how many flights there are for each <code>airport</code> in the entire dataset using</p>
<pre><code>count_flights = df['ORIGIN_AIRPORT'].value_counts()
</code></pre>
<p>Its output looks like this:</p>
<pre><code>ATL 346836
ORD 285884
DFW 239551
DEN 196055
LAX 194673
...
13541 11
10165 9
14222 9
13502 6
11503 4
</code></pre>
<p>Most of the counts look correct, but why did I get these numbers (<code>13541, 10165, 14222, 13502, 11503</code>) in the index column?
<code>flights_2015.csv</code> does not even have that kind of number in the <code>'ORIGIN_AIRPORT'</code> column.</p>
<p>What happened here?</p>
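In the Kaggle 2015 flight-delays data, the October rows are commonly reported to use 5-digit numeric DOT airport IDs in `ORIGIN_AIRPORT` instead of IATA codes, so the column really does contain both kinds of value and `value_counts` is faithfully counting them. A quick way to isolate the numeric rows (toy data stands in for the real CSV):

```python
import pandas as pd

# Toy frame mimicking the mixed column: IATA codes plus numeric DOT IDs.
df = pd.DataFrame({'ORIGIN_AIRPORT': ['ATL', 'ORD', '13541', 'ATL', '11503']})

numeric_mask = df['ORIGIN_AIRPORT'].astype(str).str.fullmatch(r'\d+')
print(df.loc[numeric_mask, 'ORIGIN_AIRPORT'].unique())
```

From there you can either drop those rows or map the numeric IDs back to IATA codes with a DOT airport-ID lookup table before counting.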
|
<python><pandas>
|
2023-04-01 03:16:57
| 1
| 353
|
mario119
|
75,903,962
| 1,457,672
|
Return list of sentences with a particular subject
|
<p>I am exploring a small corpus of texts, and one of the things I am doing is examining the actions associated with various subjects. I have already inventoried how many times, for example, "man" is the subject of a sentence in which the verb is "love": that work was done with subject-verb-object triplets using Textacy.</p>
<p>As I work through the various statistics, I would like to be able to go back into the data and see sentences that have the subjects in their original context. NLTK has a concordance feature built right in, but it does not pay attention to part-of-speech tagging. I have gotten this far with the code.</p>
<p>What I am trying to do is <code>find_the_subject("noun", corpus)</code>, such that if I input "man" I would get back a list of sentences with man as the subject of the sentence:</p>
<blockquote>
<p>A man walked down the street and said why am I short in the middle?</p>
<p>The man comes around.</p>
</blockquote>
<p>So far, I have the following code which will grab all the sentences with "man" but not just the ones with man as subject.</p>
<pre class="lang-py prettyprint-override"><code>def find_sentences_with_noun(subject_noun, sentences):
# Start with two empty lists
noun_subjects = []
noun_sentences = []
# Work through the sentences
for sentence in sentences:
words = word_tokenize(sentence)
tagged_words = nltk.tag.pos_tag(words)
# This works but doesn't get me the subject
for word, tag in tagged_words:
if "NN" in tag and word == subject_noun:
noun_subjects.append(word)
noun_sentences.append(sentence)
return noun_sentences
</code></pre>
<p>I cannot for the life of me figure out how to grab the noun in the subject position.</p>
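POS tags alone cannot mark grammatical roles; a dependency parse (e.g. spaCy, keeping sentences whose token has `dep_ == 'nsubj'` and the right lemma) is the robust route. As a dependency-free sketch of the idea, a rough heuristic treats a matching noun that appears *before the clause's first verb* as the subject — a stated simplification that only holds for plain declarative sentences:

```python
def find_sentences_with_subject(subject_noun, tagged_sentences):
    """tagged_sentences: list of (sentence, [(word, tag), ...]) pairs,
    e.g. produced by nltk.pos_tag. Heuristic: a matching noun seen
    before the first verb is taken to be the subject."""
    hits = []
    for sentence, tagged in tagged_sentences:
        for word, tag in tagged:
            if tag.startswith('VB'):
                break  # reached the verb; the subject must precede it
            if tag.startswith('NN') and word.lower() == subject_noun.lower():
                hits.append(sentence)
                break
    return hits

data = [
    ("The man comes around.",
     [("The", "DT"), ("man", "NN"), ("comes", "VBZ"),
      ("around", "RB"), (".", ".")]),
    ("I saw the man.",
     [("I", "PRP"), ("saw", "VBD"), ("the", "DT"),
      ("man", "NN"), (".", ".")]),
]
print(find_sentences_with_subject("man", data))
```

Note how "I saw the man." is excluded: "man" appears only after the verb, i.e. as the object. For real corpora the heuristic will misfire on passives and subordinate clauses, which is where the dependency parse earns its keep.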
|
<python><nlp><nltk><pos-tagger>
|
2023-04-01 02:21:30
| 1
| 407
|
John Laudun
|
75,903,895
| 9,727,704
|
adding cookie to a flask function that returns a redirect
|
<p>The flask documentation for <a href="https://flask.palletsprojects.com/en/2.2.x/api/#flask.make_response" rel="nofollow noreferrer">setting a header</a> suggests the following:</p>
<pre><code>def index():
response = make_response(render_template('index.html', foo=42))
response.headers['X-Parachutes'] = 'parachutes are cool'
return response
</code></pre>
<p>I wish to set a cookie which can be done like <a href="https://stackoverflow.com/a/46664792/9727704">this</a>:</p>
<pre><code>@app.route('/')
def index():
resp = make_response(render_template(...))
resp.set_cookie('somecookiename', 'I am cookie')
return resp
</code></pre>
<p>However, my function returns a redirect:</p>
<pre><code>@app.route('/applications', methods=['POST'])
def create_application():
data = request.get_json()
try:
application = Application(
            foo='bar'
)
db.session.add(application)
db.session.commit()
# set a cookie with application.id here
return redirect('/application/'+str(application.id), code=302)
</code></pre>
<p>According to the <a href="https://flask.palletsprojects.com/en/2.2.x/api/#flask.Flask.redirect" rel="nofollow noreferrer">documentation</a> <code>redirect</code> only accepts a location, not a response object.</p>
<p>There is the <code>@app.after_request</code> decorator, which will run after every request is processed, but how do I get info from my <code>application</code> object to it? Is there an alternative?</p>
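One answer worth noting: `redirect()` already returns a full `Response`, and `make_response` accepts one, so the cookie can be attached directly without any after-request hook. A sketch (the `application_id` cookie name and the stand-in id are illustrative, not from the original code):

```python
from flask import Flask, make_response, redirect

app = Flask(__name__)

@app.route('/applications', methods=['POST'])
def create_application():
    new_id = 42  # stand-in for application.id after db.session.commit()
    resp = make_response(redirect('/application/' + str(new_id), code=302))
    resp.set_cookie('application_id', str(new_id))
    return resp
```

The client receives the 302 with the `Set-Cookie` header on the same response, so the cookie is available when the browser follows the redirect.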
|
<python><flask><cookies>
|
2023-04-01 02:00:57
| 1
| 765
|
Lucky
|
75,903,878
| 5,516,760
|
Use custom transformer for albumentations
|
<p>I want to use the following custom <code>albumentations</code> transform:</p>
<pre class="lang-py prettyprint-override"><code>import albumentations as A
from albumentations.pytorch import ToTensorV2
class RandomTranslateWithReflect:
"""Translate image randomly
Translate vertically and horizontally by n pixels where
n is integer drawn uniformly independently for each axis
from [-max_translation, max_translation].
Fill the uncovered blank area with reflect padding.
"""
def __init__(self, max_translation):
self.max_translation = max_translation
def __call__(self, old_image):
xtranslation, ytranslation = np.random.randint(-self.max_translation,
self.max_translation + 1,
size=2)
# Apply the translation using the Albumentations library
transform = A.ShiftScaleRotate(shift_limit=(xtranslation / old_image.shape[1], ytranslation / old_image.shape[0]),
scale_limit=0,
rotate_limit=0,
border_mode=cv2.BORDER_REFLECT,
p=1)
new_image = transform(image=old_image)["image"]
return new_image
</code></pre>
<p>I use the following code to put it in compose :</p>
<pre><code>train_transform = {
'cifar10': A.Compose([
A.Lambda(image=RandomTranslateWithReflect(4)),
A.HorizontalFlip(p=0.5),
A.Normalize(*meanstd['cifar10']),
ToTensorV2()
])
}
</code></pre>
<p>but I get this error:</p>
<pre><code>mg1 = transform(image=img)["image"]
File "/home/student/anaconda3/envs/few-shot/lib/python3.6/site-packages/albumentations/core/composition.py", line 205, in __call__
data = t(**data)
File "/home/student/anaconda3/envs/few-shot/lib/python3.6/site-packages/albumentations/core/transforms_interface.py", line 118, in __call__
return self.apply_with_params(params, **kwargs)
File "/home/student/anaconda3/envs/few-shot/lib/python3.6/site-packages/albumentations/core/transforms_interface.py", line 131, in apply_with_params
res[key] = target_function(arg, **dict(params, **target_dependencies))
File "/home/student/anaconda3/envs/few-shot/lib/python3.6/site-packages/albumentations/augmentations/transforms.py", line 1648, in apply
return fn(img, **params)
TypeError: __call__() got an unexpected keyword argument 'cols'
</code></pre>
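<p>(For context, <code>A.Lambda</code> forwards extra keyword arguments such as <code>cols</code> and <code>rows</code> to the wrapped callable, which is what the traceback complains about. A NumPy-only sketch of the same translate-with-reflect idea, with a <code>**params</code> catch-all to absorb those keywords, would look like the following; the class name is reused here for illustration and this is not the albumentations API itself.)</p>

```python
import numpy as np

class RandomTranslateWithReflect:
    """NumPy-only sketch: translate by up to max_translation pixels on
    each axis, filling the uncovered area with reflect padding. The
    **params catch-all absorbs the extra keyword arguments (cols, rows,
    ...) that A.Lambda forwards to the callable."""

    def __init__(self, max_translation):
        self.max_translation = max_translation

    def __call__(self, old_image, **params):
        m = self.max_translation
        dx, dy = np.random.randint(-m, m + 1, size=2)
        # reflect-pad by the maximum shift, then crop a window offset
        # by (dy, dx) to realise the translation
        pad = ((m, m), (m, m)) + ((0, 0),) * (old_image.ndim - 2)
        padded = np.pad(old_image, pad, mode="reflect")
        h, w = old_image.shape[:2]
        return padded[m + dy:m + dy + h, m + dx:m + dx + w]
```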
|
<python><pytorch><albumentations>
|
2023-04-01 01:51:21
| 1
| 2,790
|
Marzi Heidari
|
75,903,837
| 7,705,108
|
Why does GET details return all null and DELETE details return "detail": "Method \"GET\" not allowed." when using objects.filter?
|
<p>I built a REST API with the Django REST framework. When I do getAll, I get all the data from my database with no problem. The problem is with GET details and DELETE details: for those I need to return and delete multiple objects (not just one, and not all of them). The problem occurs when I use objects.filter(); when I use objects.get(id=id) instead, I have no problem.</p>
<p>models.py:</p>
<pre><code>class ProA(models.Model):
var1 = models.CharField(max_length=200, null=True, blank=True)
var2 = models.CharField(max_length=200, null=True, blank=True)
var3 = models.CharField(max_length=200, null=True, blank=True)
</code></pre>
<p>serializers.py:</p>
<pre><code>class ProASerializer(serializers.ModelSerializer):
class Meta:
model = ProA
fields = '__all__'
</code></pre>
<p>urls.py:</p>
<pre><code>urlpatterns = [
path('proa-getAll/', views.getAll, name='proa_getAll'),
path('proa-getOne/<str:var1>', views.getOneSet, name='proa_getOne'),
path('proa-delete/<str:var1>', views.deleteOneSet, name='proa_delete'),
]
</code></pre>
<p>views.py:</p>
<pre><code>@api_view(['GET'])
def getAll(request):
    proas = ProA.objects.all()
    serializer = ProASerializer(proas, many=True)
    return Response(serializer.data)
@api_view(['GET'])
def getOneSet(request, var1):
proa = ProA.objects.filter(var1=var1)
serializer = ProASerializer(proa, many=False)
return Response(serializer.data)
@api_view(['DELETE'])
def deleteOneSet(request, var1):
try:
data = ProA.objects.filter(var1=var1)
except data.DoesNotExist:
return Response({'message': 'The var1 value does not exist'}, status=status.HTTP_404_NOT_FOUND)
data.delete()
return Response({'message': 'Data was deleted successfully!'}, status=status.HTTP_204_NO_CONTENT)
</code></pre>
<p>When I go to the getOneSet URL to get all the objects where var1=MS107, I get the following:</p>
<p><a href="https://i.sstatic.net/8RJ3U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8RJ3U.png" alt="GetOneSet" /></a></p>
<p>When I go to the deleteOneSet URL to delete all the objects where var1=MS107, I get the following:</p>
<p><a href="https://i.sstatic.net/kpSJJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kpSJJ.png" alt="delete one set" /></a></p>
<p>I have tried many things but have no idea how to solve this. I am new to Django, so I hope I have included all the information needed.</p>
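<p>(For context, a Django-free sketch of why <code>many=False</code> on a queryset yields <code>null</code> fields, using toy stand-ins rather than the real DRF classes: with <code>many=False</code> the serializer reads each field off the object itself, and a queryset has no <code>var1</code> attribute, whereas <code>many=True</code> iterates the collection row by row.)</p>

```python
# Toy stand-ins (hypothetical, not the real DRF API) to illustrate many=.
class Row:
    def __init__(self, var1):
        self.var1 = var1

def serialize(obj, fields, many=False):
    if many:
        # iterate the collection, producing one dict per row
        return [serialize(o, fields) for o in obj]
    # many=False: read fields off the object directly; a list/queryset
    # has no such attributes, so every field falls back to None ("null")
    return {f: getattr(obj, f, None) for f in fields}

queryset = [Row("MS107"), Row("MS107")]           # what .filter() returns
print(serialize(queryset, ["var1"]))              # {'var1': None}
print(serialize(queryset, ["var1"], many=True))   # [{'var1': 'MS107'}, {'var1': 'MS107'}]
```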
|
<python><python-3.x><django><django-rest-framework><django-views>
|
2023-04-01 01:34:01
| 1
| 381
|
ananvodo
|